WASP is very proud to have so many excellent researchers involved in the program. More than 450 researchers, ranging from assistant professors to senior professors, are affiliated with WASP. Some are international recruits who have come to Sweden to join the WASP community; others are already well established in the Swedish academic system.
Through a series of portraits, you have the opportunity to get to know them a little better.
Meet Rafia Inam
Rafia Inam is a senior research manager at Ericsson Research and an Adjunct Professor at KTH Royal Institute of Technology. Dr Inam joined WASP in 2019.
What is your position/role in WASP?
Industrial supervisor from the Ericsson side.
Why did you choose to join WASP?
It is a good platform for AI research that brings academic and industrial research together.
What are the benefits you see in WASP?
- Research opportunities on new AI technologies
- Networking with academia and other industries
- Funding possibilities
Briefly describe your research topic.
Explainable AI is the main topic. Greater levels of autonomy increase the need for explainability of the decisions made by AI, so that humans can understand them (e.g., the underlying data evidence and causal reasoning), consequently enabling trust. We develop and apply explainable AI methods to telecom networks for different use cases. One is the 5G slice assurance use case, whose main purpose is to analyze the root cause of Service Level Agreement (SLA) violation predictions in a 5G network slicing setup by identifying the important features contributing to the decision. Another is the remote electrical tilt use case, where we use explainable reinforcement learning (RL) techniques to explain the behavior of the RL agent that decides the tilt of the antennas in a cellular network (a toy sketch of this idea follows the publication list below).
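To make the first use case concrete: the general pattern is to train a model that predicts SLA violations from slice metrics and then attribute its predictions to the input features. Below is a minimal sketch of that attribution step using scikit-learn's permutation importance on synthetic data. The feature names and data are hypothetical, and the sketch illustrates feature attribution in general, not the exact methods evaluated in the papers listed further down.

```python
# Minimal sketch (not the papers' exact pipeline): rank which input
# features contribute most to an SLA-violation prediction model.
# Feature names and data are hypothetical, for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-slice KPIs sampled at some interval.
feature_names = ["throughput_mbps", "latency_ms", "packet_loss_pct", "cpu_load_pct"]
X = rng.normal(size=(2000, 4))
# Synthetic ground truth: violations driven mainly by latency and packet loss.
y = ((1.5 * X[:, 1] + 1.0 * X[:, 2] + 0.2 * rng.normal(size=2000)) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permutation importance: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for idx in result.importances_mean.argsort()[::-1]:
    print(f"{feature_names[idx]:<18} {result.importances_mean[idx]:.3f}")
```

On this synthetic data the latency and packet-loss features come out on top, mirroring how such a ranking would point an operator toward the likely root cause of a predicted violation.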
We have applied different explainable AI methods to 5G networks, using them in our 5G network slice assurance and remote electrical tilt optimization use cases. See the following papers:
- BEERL: Both Ends Explanations for Reinforcement Learning. A. Terra, R. Inam, E. Fersman. Applied Sciences, special issue "Explainable Artificial Intelligence", Vol. 12, No. 21, November 2022.
- Evaluation of Intrinsic Explainable Reinforcement Learning in Remote Electrical Tilt Optimization. F. Ruggeri, A. Terra, R. Inam, K.-H. Johansson. Accepted at the 8th International Congress on Information and Communication Technology (ICICT), February 2023.
- Explainability Methods for Identifying Root-Cause of SLA Violation Prediction in 5G Network. A. Terra, R. Inam, S. Baskaran, P. Batista, I. Burdick, E. Fersman. IEEE Global Communications Conference (GLOBECOM), pp. 1-7, December 2020.
- Using Counterfactuals to Proactively Solve Service Level Agreement Violations in 5G Networks. A. Terra, R. Inam, P. Batista, E. Fersman. IEEE International Conference on Industrial Informatics (INDIN), July 2022.
- Explainable Reinforcement Learning for Human-Robot Collaboration. A. Iucci, A. Hata, A. Terra, R. Inam, I. Leite. 20th International Conference on Advanced Robotics (ICAR), December 2021.
- Explainable AI – how humans can trust AI. R. Inam, A. Terra, A. Mujumdar, E. Fersman, A. Vulgarakis. Ericsson White Paper, April 2021.
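As a complement, here is the toy sketch of the intrinsic explainability idea behind the remote electrical tilt work above: with a tabular Q-function, the agent's decision at a given tilt level can be explained by inspecting its Q-values directly. The environment, states, and rewards below are hypothetical stand-ins, not the setup used in the papers.

```python
# Toy illustration of "intrinsic" explainability in RL (not the papers'
# setup): with a tabular Q-function, the explanation of an action is the
# Q-values themselves. States and rewards here are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n_tilts = 5            # discretized antenna tilt levels (hypothetical)
optimal_tilt = 3       # tilt that maximizes coverage in this toy model
actions = {0: "tilt down", 1: "keep", 2: "tilt up"}
Q = np.zeros((n_tilts, 3))

def step(state, action):
    """Apply a tilt change; reward is higher near the optimal tilt."""
    next_state = int(np.clip(state + (action - 1), 0, n_tilts - 1))
    return next_state, -abs(next_state - optimal_tilt)

# Standard tabular Q-learning with epsilon-greedy exploration.
alpha, gamma, eps = 0.1, 0.9, 0.2
for _ in range(5000):
    s = int(rng.integers(n_tilts))
    for _ in range(10):
        a = int(rng.integers(3)) if rng.random() < eps else int(Q[s].argmax())
        s2, r = step(s, a)
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

# "Explanation" of the agent's decision at tilt level 1: inspect Q-values.
s = 1
print({actions[a]: round(float(Q[s, a]), 2) for a in range(3)})
print("chosen:", actions[int(Q[s].argmax())])
```

In the real problem the state and action spaces are far richer, which is why the papers above study dedicated explainable RL techniques rather than a lookup table; the toy case only shows why an inherently inspectable policy representation makes an agent's choice easy to justify.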
In what way can your research be of importance to our society in the future?
Trustworthy AI is an integral part of our automated AI-based systems, giving the following advantages:
- ensuring and justifying that AI is doing the right thing,
- improving transparency by spotting problems, explaining the decisions made by AI and the rationale behind them, and describing unexplained behavior,
- developing safe systems, and
- complying with regulations such as the EU AI Act and the right to an explanation of AI decisions.
For more information about Dr Inam, see https://www.kth.se/profile/raina?l=en
Published: January 25th, 2023