Wednesday, November 27, 2024

US Launches Comprehensive Framework for Ethical AI Use

AI Technologies Mapped for Safer Corporate Integration
Artificial Intelligence (AI) is swiftly advancing, and it’s expected to soon be pivotal in the operations of nearly all companies. With this advancement comes the need for a standard method of risk management to mitigate the potential dangers of AI and encourage its proper use. Addressing this need, the National Institute of Standards and Technology (NIST) in the United States introduced the “AI Risk Management Framework” (AI RMF) in January 2023.

U.S. Government Supports Responsible AI Application
The U.S. Government has been active in ensuring that corporations adopt AI responsibly. In 2022, guidelines titled “Blueprint for an AI Bill of Rights” were published, setting the stage for ethical AI usage. By October 2023, the Biden administration had furthered this initiative with a presidential directive on AI safety.

The Significance of NIST’s AI RMF
Developed as part of a government drive for responsible AI usage, inclusive of fairness, transparency, and security, the AI RMF provides guidance throughout an AI system's lifecycle. It consists of four core functions: Govern, Map, Measure, and Manage, each comprising numerous categories and subcategories for thorough governance.

A crucial subcategory under 'Govern', identified as Govern 1.6, calls for the development of a use case inventory. Cataloging AI use cases is a first step toward comprehensively assessing AI applications and their associated risks, supporting effective risk management and adherence to regulations. Creating such inventories is also recommended by other regimes, such as the European Union's AI Act and guidance from the Office of Management and Budget (OMB) in the U.S.
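To make the idea of a use case inventory concrete, here is a minimal sketch of what one entry might look like in code. The field names and values are illustrative assumptions for demonstration only; Govern 1.6 does not prescribe a specific schema.

```python
# Illustrative sketch of a use case inventory entry. The schema below is an
# assumption for demonstration; the AI RMF does not prescribe specific fields.
use_case_inventory = [
    {
        "name": "resume_screening",
        "owner": "HR Analytics",
        "purpose": "Rank incoming applications for recruiter review",
        "data_sources": ["applicant resumes", "job descriptions"],
        "risk_level": "high",          # affects employment decisions
        "human_oversight": True,
        "last_reviewed": "2024-11-01",
    },
]

# A simple query over the inventory: surface high-risk use cases for review.
high_risk = [u["name"] for u in use_case_inventory if u["risk_level"] == "high"]
print(high_risk)
```

Even a simple structured list like this lets an organization query, review, and report on its AI systems, which is the practical point of the Govern 1.6 subcategory.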

AI RMF’s Practicality and Future Implications
Although not set as a formal standard or a mandatory requirement, AI RMF is hailed as an optimal starting point for AI governance. Offering globally applicable strategies for wide-ranging use cases—from resume screening and credit risk prediction to fraud detection and unmanned vehicles—AI RMF is considered a practical tool by Evi Fuelle, director at Credo AI. Through public commentary and stakeholder involvement, the framework has become enriched as a corporate guide, with the potential to evolve into an industry-standard directive, especially among businesses interacting with the U.S. federal government.

Important Questions and Answers

1. What is the purpose of the AI Risk Management Framework?
The AI RMF is designed to help organizations manage risks associated with the deployment of AI systems. It provides guidance on maintaining ethical standards such as fairness, transparency, and security throughout the AI lifecycle.

2. Is the AI RMF mandatory for organizations?
No, the framework is not a formal standard or a mandatory requirement but is recommended as a starting point for AI governance.

3. How does the AI RMF align with other international regulations?
Practices called for by the AI RMF, such as creating use case inventories, are also recommended by the European Union's AI Act and by the Office of Management and Budget (OMB) in the U.S., which suggests a degree of international and cross-institution alignment on AI governance practices.

Key Challenges and Controversies

– Adoption and Compliance: Encouraging widespread adoption of voluntary frameworks can be challenging, especially for smaller organizations with limited resources.

– Balance of Innovation and Regulation: Striking the right balance between fostering AI innovation and ensuring ethical use can be difficult. Over-regulation may hinder technological advancement, while under-regulation could lead to unethical AI applications.

– Data Privacy: AI often relies on massive data sets, which may include sensitive information. Protecting this data while using AI is both a technical and ethical challenge.

– Job Displacement: One of the most significant societal concerns is that AI could automate jobs, leading to displacement of workers and wider economic implications.

Advantages and Disadvantages

Advantages:
– Enhanced Risk Management: The AI RMF can help organizations identify and mitigate potential risks, leading to safer AI deployments.
– Consumer Trust: Responsible AI usage as outlined by the framework can help build public and consumer trust.

– Regulatory Alignment: The AI RMF complements existing and forthcoming regulations, assisting organizations in maintaining compliance.

Disadvantages:
– Resource Requirements: Implementing the framework requires time, expertise, and potentially financial resources that some organizations may find challenging to allocate.
– Risk of Stifled Innovation: If the framework becomes too prescriptive or onerous, it could potentially stifle innovation by creating an overly complex regulatory environment.

Related Links:
For more information on the responsible use of AI, you may visit the official website of the National Institute of Standards and Technology (nist.gov). Additionally, information about global AI governance initiatives may be found on the European Union's main website (europa.eu).

It is important to note that as AI continues to evolve, frameworks and regulations around its usage will likely develop alongside it, influencing future trends in AI governance and ethics.

Foundations of Automation and Intelligence

Today I am going to talk a little about what is happening and what is about to happen. There is a lot of talk about AI and AGI, neither of which is fully formed yet. What we are seeing are the results of work on Generative Pre-trained Transformers (GPTs) and Large Language Models (LLMs). They closely mimic the way the human mind uses some of its memory, through language prediction and reflection. But the processes humans use involve more than memory and conversational text recall.
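The "language prediction" idea above can be sketched with a toy bigram model: count which word tends to follow which, then predict the most frequent successor. This is only a minimal illustration; real LLMs use Transformers over subword tokens, but the underlying task, predicting what comes next, is the same.

```python
from collections import Counter, defaultdict

# Toy corpus; the sentences are made up purely for illustration.
corpus = "the cat sat on the mat the cat ran".split()

# Count how often each word follows each other word (bigram counts).
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`."""
    return counts[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, "mat" only once
```

A language model replaces these raw counts with learned probabilities over a huge vocabulary and long contexts, but the prediction loop is conceptually the same.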

ChatGPT and Bard are not models but human-machine interfaces.

The models behind them are GPT-3.5 or GPT-4 for ChatGPT, and PaLM/PaLM 2 for Bard.

Google is the world leader in machine and deep learning. They have led the way across hardware (TPUs), software (TensorFlow), large language models (PaLM 2 and the PaLM API), and tools and products such as MakerSuite, Bard, Generative AI Studio, Vertex AI, Gen App Builder, Phenaki, SEANet, Google DeepMind's Gemini, and much more. Google is also behind the original research on the Transformer architecture.

Transformer: A Novel Neural Network Architecture for Language Understanding

In the 2017 paper "Attention Is All You Need," Google researchers introduced the Transformer, a novel neural network architecture based on a self-attention mechanism that they believed to be particularly well suited for language understanding.
https://ai.googleblog.com/2017/08/transformer-novel-neural-network.html

Ref: https://arxiv.org/abs/1706.03762 Attention Is All You Need
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
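The self-attention mechanism at the heart of the Transformer can be sketched in a few lines of NumPy. This is a minimal illustration of scaled dot-product attention from the paper, not a full Transformer: real models add learned projection matrices, multiple heads, and masking.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)          # similarity of each query to each key
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over the keys
    return weights @ V                        # weighted sum of the values

# Toy example: a sequence of 3 tokens, each a 4-dimensional vector.
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(x, x, x)  # self-attention: Q = K = V
print(out.shape)  # (3, 4): one output vector per input token
```

Each output vector is a mixture of all the input vectors, weighted by how relevant each token is to the current one; that is the "attention" the paper's title refers to.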

PaLM 2 is arguably more powerful than GPT-4, but it has been restrained to avoid fairness and safety problems.
https://github.com/Mooler0410/LLMsPracticalGuide