
How Can AI Be Used Safely? Researchers From Harvard, MIT, IBM & Microsoft Weigh In

By Unknown Author | Source: TechRepublic | Read Time: 3 mins

The report highlights the importance of addressing issues such as transparency, accountability, and bias in AI systems. It also emphasizes the need for ongoing collaboration between researchers, policymakers, and industry leaders to ensure the responsible development and deployment of AI technology. By identifying these key challenges, the report aims to guide future efforts in advancing AI in a way that benefits society as a whole. The AAAI report serves as a valuable resource for shaping the ethical and practical considerations surrounding AI innovation.


An important focus of AI research is improving an AI system’s factualness and trustworthiness. Even though significant progress has been made in these areas, some AI experts are pessimistic that these issues will be solved in the near future. That is one of the main findings of a new report by the Association for the Advancement of Artificial Intelligence (AAAI), which includes insights from experts at academic institutions such as MIT, Harvard, and the University of Oxford, as well as tech giants such as Microsoft and IBM.

Current Trends in AI Research

The goal of the study was to define the current trends and the research challenges to make AI more capable and reliable so the technology can be safely used, wrote AAAI President Francesca Rossi. The report includes 17 topics related to AI research culled by a group of 24 “very diverse” and experienced AI researchers, along with 475 respondents from the AAAI community, she noted. Here are highlights from this AI research report.

Improving an AI System’s Trustworthiness and Factualness

An AI system is considered factual if it doesn’t output false statements, and its trustworthiness can be improved by including criteria “such as human understandability, robustness, and the incorporation of human values,” the report’s authors stated. Other criteria to consider are fine-tuning and verifying machine outputs, and replacing complex models with simple, understandable models.

Making AI More Ethical and Safer

AI is becoming more widely adopted, and this demands greater responsibility for AI systems, according to the report. For example, emerging threats such as AI-driven cybercrime and autonomous weapons require immediate attention, along with the ethical implications of new AI techniques. Respondents ranked misinformation (75%), privacy (58.75%), and responsibility (49.38%) as their most pressing ethical concerns. This points to a need for more transparency, accountability, and explainability in AI systems, and suggests that ethical and safety concerns should be addressed through interdisciplinary collaboration, continuous oversight, and clearer assignment of responsibility. Respondents also cited political and structural barriers, “with concerns that meaningful progress may be hindered by governance and ideological divides.”

Evaluating AI Using Various Factors

Researchers make the case that AI systems introduce “unique evaluation challenges.” Current evaluation approaches focus on benchmark testing, but they said more attention needs to be paid to usability, transparency, and adherence to ethical guidelines.

Implementing AI Agents Introduces Challenges

AI agents have evolved from autonomous problem-solvers to AI frameworks that enhance adaptability, scalability, and cooperation. Yet, the researchers found that the introduction of agentic AI, while providing flexible decision-making, has introduced challenges when it comes to efficiency and complexity. The report’s authors state that integrating AI with generative models “requires balancing adaptability, transparency, and computational feasibility in multi-agent environments.”

More Aspects of AI Research

Some of the other AI research-related topics covered in the AAAI report include sustainability, artificial general intelligence, social good, hardware, and geopolitical aspects.
