Rafia Inam is a Senior Research Manager at Ericsson Research and an Adjunct Professor at KTH Royal Institute of Technology. Dr Inam joined WASP in 2019.
I am an industrial supervisor from the Ericsson side.
WASP is a good platform for AI research that brings academic and industrial research together.
Explainable AI is the main topic. Greater levels of autonomy increase the need for explainability of the decisions made by AI so that humans can understand them (e.g., the underlying data evidence and causal reasoning), consequently enabling trust. We develop and apply explainable AI methods to telecom networks for different use cases. One is the 5G slice assurance use case, whose main purpose is to analyze the root cause of Service Level Agreement (SLA) violation predictions in a 5G network-slicing setup by identifying the important features contributing to the decision. Another is the remote electrical tilt use case, where we use explainable RL techniques to explain the behavior of the RL agent that decides the tilt of the antennas in a cellular network.
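The feature-attribution idea behind the slice assurance use case can be illustrated with permutation importance: train a classifier to predict SLA violations, then measure how much its accuracy drops when each input feature is shuffled. The sketch below uses synthetic data and hypothetical KPI names (throughput, latency, packet loss, CPU load); it is only an illustration of the general technique, not the actual method or data used in the research.

```python
# Sketch of feature-importance-based root-cause analysis for SLA-violation
# prediction. All feature names and the data-generating rule are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
feature_names = ["throughput", "latency", "packet_loss", "cpu_load"]
X = rng.normal(size=(n, len(feature_names)))
# Synthetic ground truth: violations driven mainly by latency and packet loss.
y = ((1.5 * X[:, 1] + 1.0 * X[:, 2] + 0.1 * rng.normal(size=n)) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Permutation importance: accuracy drop when one feature's values are shuffled.
result = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)
ranking = sorted(zip(feature_names, result.importances_mean),
                 key=lambda t: -t[1])
for name, imp in ranking:
    print(f"{name}: {imp:.3f}")
```

On this synthetic data the latency and packet-loss features dominate the ranking, which is the kind of output an operator could use as a starting point for root-cause analysis of a predicted SLA violation.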
We have applied different explainable AI methods to 5G networks, using them in our 5G network slice assurance and remote electrical tilt optimization use cases. See the following papers:
Trustworthy AI is an integral part of our automated AI-based systems and brings the following advantages: 1) it ensures and justifies that the AI is doing the right thing; 2) it improves transparency by spotting problems, explaining the decisions made by the AI and the rationale behind them, and describing otherwise unexplained behavior; 3) it helps us develop safe systems; and 4) it supports compliance with regulations such as the EU AI Act and the right to an explanation of AI decisions.