UL Solutions announces rating program to advance confidence in AI technology
UL Solutions Inc. today announced a new program that addresses rising concerns about the reliability, security, and ethical implications of artificial intelligence (AI) and promotes responsible AI development.
AI systems are software programmed to learn and make decisions based on data and previous actions. The systems use data and instructions to create text, predictions, and other suggestions. The new UL Solutions program, the AI Model Transparency Benchmark, assesses AI model transparency, which is the ability to understand how an AI system makes decisions and produces specific results.
By examining key areas such as data management, model development, security, deployment, and ethical considerations, the benchmark provides a clear, objective rating of an AI system’s transparency and trustworthiness, culminating in a verified marketing claim. Products achieving a rating may display a UL Verified Mark for AI Model Transparency.
“AI has the potential to revolutionize businesses and society, but only if developed and deployed responsibly,” said Dr. Robert Slone, chief scientist at UL Solutions. “The AI Model Transparency Benchmark is a critical tool for verifying that AI systems are built with integrity, privacy, and accountability in mind.”
Electronic products enhanced by AI systems can offer consumers significant ease-of-use benefits. However, while the integration of AI into consumer products is on the rise, for AI to reach its true potential, consumers must have confidence that these devices perform securely and as promised. According to Stanford University’s AI Index 2024 Annual Report,1 52% of Americans report feeling more concerned than excited about AI.
The AI Model Transparency Benchmark is used to verify marketing claims so that manufacturers can build trust with consumers considering AI-enabled products.
Systems are awarded a score between 0 and 100 points, with higher scores indicating greater transparency:

- 0–50: Not rated, indicating significant transparency issues
- 51–60: Silver, reflecting moderate transparency
- 61–70: Gold, indicating high transparency
- 71–80: Platinum, reflecting very high transparency
- 81–100: Diamond, indicating exceptional transparency
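For readers who want to see the rating scheme concretely, the score-to-tier mapping above can be sketched as a short function. The tier names and cutoffs come from the announcement; the function itself is purely illustrative and is not a UL Solutions API.

```python
def transparency_tier(score: int) -> str:
    """Map a 0-100 AI Model Transparency Benchmark score to its rating tier.

    Illustrative only: cutoffs are taken from the published tier
    descriptions, not from any official UL Solutions implementation.
    """
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score <= 50:
        return "Not rated"   # significant transparency issues
    if score <= 60:
        return "Silver"      # moderate transparency
    if score <= 70:
        return "Gold"        # high transparency
    if score <= 80:
        return "Platinum"    # very high transparency
    return "Diamond"         # exceptional transparency
```

For example, a hypothetical system scoring 73 would fall in the Platinum tier, while one scoring 45 would not be rated.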
“As a global leader in safety science, testing, and inspection, we have significant experience in helping manufacturers confidently bring innovative products to market and building consumer trust in new technologies,” said Slone. “The AI Model Transparency Benchmark provides a standardized framework and rating system for evaluating AI systems, which can help businesses and consumers make informed decisions about adopting and using AI.”
1Measuring Trends in AI, Stanford University, 2024.