Thursday, December 26, 2024

Dive into Gemini with Practical Examples: Introducing the Google Gemini Cookbook

Google's Gemini AI model has been making waves, and now Google is making it easier than ever to experiment with and integrate the model, thanks to its newly released Gemini Cookbook. This isn't your grandma's cookbook; it's a curated collection of code examples and tutorials designed to get you up and running with Gemini quickly.

What's the Gemini Cookbook All About?

The Gemini Cookbook, hosted on GitHub at github.com/google-gemini/cookbook, is a repository of practical examples that showcase Gemini's capabilities through real-world use cases. It's your one-stop shop for ready-to-use code snippets and detailed explanations of how to leverage Gemini.

Think of it as a practical guide to the model. Whether you're a seasoned AI developer or just beginning your journey, the cookbook's structure makes it easy to dive in.

What Can You Expect to Find?

The cookbook offers a variety of examples covering different aspects of working with Gemini, including:

Basic Setup and Authentication: Set up the Gemini API and authenticate without hassle (see the quickstart sketch after this list).

Text Generation: Learn how to generate creative text formats such as poems, code, scripts, emails, and letters, with fine-grained control over the output.

Multimodal Applications: Explore how Gemini handles image and video input with examples of prompt engineering for different modalities.

Function Calling: See practical examples of how to use Gemini's function-calling capability to interface with tools and APIs (a sketch of this pattern appears under Getting Started below).

Advanced Use Cases: Discover more complex scenarios, pushing the boundaries of what you can do with the model.
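To give a flavor of the introductory examples, here is a minimal quickstart sketch using the google-generativeai Python SDK; the API key placeholder and model name are illustrative, so check the cookbook for current model names:

    import google.generativeai as genai

    # Authenticate with an API key from Google AI Studio.
    genai.configure(api_key="YOUR_API_KEY")

    # Instantiate a model (model names evolve; see the cookbook).
    model = genai.GenerativeModel("gemini-1.5-flash")

    # Generate text from a simple prompt and print the result.
    response = model.generate_content("Write a haiku about cookbooks.")
    print(response.text)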

Why This Cookbook Matters

Practical and Hands-on: The focus on code examples and practical use cases means you can immediately start applying what you learn.

Variety of Scenarios: Covers different domains and applications, ensuring you'll find relevant examples for your needs.

Learning Through Examples: The best way to understand a technology is by using it. The cookbook provides clear, runnable code that accelerates your learning.

Open-Source Contribution: The cookbook is open to the community and will likely grow with further contributions.

Staying Up-to-date: As the Gemini models continue to evolve, so will this cookbook, making it an invaluable resource to keep your work current.

Getting Started

To dive in, simply head over to the Google Gemini Cookbook GitHub repository. You'll find directories clearly organized by use case. Most examples are written in Python, and the documentation includes step-by-step instructions for running them.
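For instance, the cookbook's function-calling examples follow a pattern like this (a minimal sketch using the google-generativeai Python SDK; the get_weather helper and model name are illustrative, not taken from the cookbook itself):

    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")

    # A plain Python function the model is allowed to call.
    def get_weather(city: str) -> str:
        """Return a (mock) weather report for a city."""
        return f"Sunny and 22 C in {city}"

    # Passing the function as a tool exposes its signature to Gemini.
    model = genai.GenerativeModel("gemini-1.5-flash", tools=[get_weather])
    chat = model.start_chat(enable_automatic_function_calling=True)

    # Gemini decides when to call get_weather and folds the result
    # into its final answer.
    response = chat.send_message("What's the weather in Paris right now?")
    print(response.text)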

Who is this for?

Developers: Jumpstart development projects leveraging Gemini's AI power.

Researchers: Experiment with the model and explore advanced scenarios.

Students: Learn Gemini through hands-on practice and real-world examples.

Anyone curious about Gemini: The cookbook offers a low barrier to entry for discovering what Gemini can do.

Final Thoughts

The Gemini Cookbook is a game-changer for anyone wanting to work with Google's cutting-edge AI model. It flattens the learning curve, provides ready-to-use code, and helps you get the most out of Gemini's capabilities.

We encourage you to explore the cookbook, try out the examples, and share your creations with the community. Happy coding!

Contribute to the cookbook on GitHub and help expand this great resource.



Saturday, December 14, 2024

Task Force Lima: Charting a New Course for AI-Driven Defense Innovation

In today’s rapidly evolving technological landscape, the U.S. Department of Defense (DoD) is taking bold steps to ensure it remains at the cutting edge of innovation. One noteworthy development is the introduction of Task Force Lima, an initiative highlighted in a recent executive summary from the DoD’s Artificial Intelligence ecosystem. While the full details are laid out in the December 2024 Executive Summary (TAB A), here’s a closer look at what the document’s key themes suggest and why this matters for the future of national security.

The Context: AI at the Forefront of Defense

Artificial intelligence has long since moved from speculative technology to a strategic necessity. From enhancing supply chain logistics to enabling more nuanced threat detection, AI tools and frameworks are poised to fundamentally change how the U.S. military anticipates, plans, and operates. Task Force Lima—judging from the DoD’s evolving priorities—appears to be part of a broader effort to harness these advances, translating them into tangible improvements in readiness, resilience, and responsiveness.

What Is Task Force Lima?

Though the executive summary provides the finer details, Task Force Lima can be understood as a dedicated team tasked with integrating cutting-edge AI solutions into defense workflows. This likely includes:

  • Capability Assessment: Reviewing current AI-enabled systems and identifying gaps in capability.
  • Integration Roadmap: Outlining a step-by-step guide for how new AI platforms will interface with existing defense technology ecosystems.
  • Ethical and Responsible Adoption: Ensuring that all AI initiatives comply with responsible AI guidelines, emphasizing transparency, accountability, and alignment with U.S. values.

Key Strategic Objectives

1. Operational Efficiency:
One of the core drivers behind Task Force Lima is the push to streamline operations. AI-powered predictive maintenance tools, for instance, can help reduce downtime in key military platforms. Enhanced logistics algorithms could ensure materials arrive where they’re needed, when they’re needed, mitigating supply chain vulnerabilities.

2. Decision-Making Advantage:
In an era where information flows at lightning speed, making sense of vast data sets is critical. AI tools promise to distill complex intelligence inputs into actionable insights. By leveraging machine learning models, Task Force Lima can help commanders and defense analysts gain a decision-making edge—identifying patterns, predicting adversarial actions, and recognizing opportunities faster than ever before.

3. Interoperability and Scalability:
As new AI solutions come online, ensuring they work harmoniously across different units, platforms, and even with allies’ systems is essential. The executive summary likely emphasizes the importance of developing standardized frameworks and interfaces, paving the way for seamless integration now and scalable growth in the future.

4. Workforce Development:
No AI initiative is complete without addressing the human element. Ensuring the DoD workforce is adept at implementing and overseeing AI systems remains a priority. Training programs, career development opportunities, and cross-functional collaboration can help build an AI-savvy force capable of wielding these tools effectively and ethically.

Balancing Innovation with Responsibility

The DoD and its AI task forces must navigate a delicate balance: embracing innovation while safeguarding national values and ethical principles. Expect Task Force Lima's strategic outline to emphasize responsible AI use, from adhering to privacy standards and combating algorithmic bias to maintaining compliance with existing laws and international norms.

Looking Ahead

As we move into 2025 and beyond, Task Force Lima’s efforts will likely set the tone for how the DoD incorporates AI throughout its enterprise. The December 2024 Executive Summary (TAB A) suggests a comprehensive approach—one that aligns technological ingenuity with mission objectives, ethical considerations, and cross-organizational collaboration.

This initiative points to a future where the American defense apparatus can more rapidly adapt to emerging challenges, protect critical infrastructure, and project stability in an uncertain world. By laying out a strategic vision for AI integration, Task Force Lima could become a linchpin in shaping the next generation of defense capabilities.


Note: This blog post is based on the general context of a DoD AI initiative known as “Task Force Lima.” For direct quotations, detailed timelines, or further specifics, please refer to the actual Executive Summary linked above.

Friday, December 06, 2024

New AI model advances the prediction of weather uncertainties and risks, delivering faster, more accurate forecasts up to 15 days ahead

Weather impacts all of us — shaping our decisions, our safety, and our way of life. As climate change drives more extreme weather events, accurate and trustworthy forecasts are more essential than ever. Yet, weather cannot be predicted perfectly, and forecasts are especially uncertain beyond a few days.

Because a perfect weather forecast is not possible, scientists and weather agencies use probabilistic ensemble forecasts, where the model predicts a range of likely weather scenarios. Such ensemble forecasts are more useful than relying on a single forecast, as they provide decision makers with a fuller picture of possible weather conditions in the coming days and weeks and how likely each scenario is.
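To make the idea concrete, here is a purely illustrative Python sketch (synthetic numbers, not GenCast output) of how an ensemble of forecasts turns into event probabilities:

    import numpy as np

    # Pretend each of 50 ensemble members forecasts rainfall (mm)
    # at one location for one day; the values here are synthetic.
    rng = np.random.default_rng(0)
    ensemble_rain_mm = rng.gamma(shape=2.0, scale=3.0, size=50)

    # A probabilistic forecast answers questions a single forecast
    # cannot, e.g. "how likely is more than 10 mm of rain?"
    p_heavy_rain = (ensemble_rain_mm > 10.0).mean()
    print(f"P(rain > 10 mm) = {p_heavy_rain:.0%}")

    # The spread across members quantifies forecast uncertainty.
    print(f"mean = {ensemble_rain_mm.mean():.1f} mm, "
          f"std = {ensemble_rain_mm.std():.1f} mm")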

Today, in a paper published in Nature, we present GenCast, our new high-resolution (0.25°) AI ensemble model. GenCast provides better forecasts of both day-to-day weather and extreme events than the top operational system, the European Centre for Medium-Range Weather Forecasts' (ECMWF) ENS, up to 15 days in advance. We'll be releasing our model's code, weights, and forecasts to support the wider weather forecasting community.

GenCast predicts weather and the risks of extreme conditions with state-of-the-art accuracy

Wednesday, November 27, 2024

Comprehensive Framework for Ethical AI Use

 


AI Technologies Mapped for Safer Corporate Integration
Artificial Intelligence (AI) is swiftly advancing, and it’s expected to soon be pivotal in the operations of nearly all companies. With this advancement comes the need for a standard method of risk management to mitigate the potential dangers of AI and encourage its proper use. Addressing this need, the National Institute of Standards and Technology (NIST) in the United States introduced the “AI Risk Management Framework” (AI RMF) in January 2023.

U.S. Government Supports Responsible AI Application
The U.S. Government has been active in ensuring that corporations adopt AI responsibly. In 2022, guidelines titled “Blueprint for an AI Bill of Rights” were published, setting the stage for ethical AI usage. By October 2023, the Biden administration had furthered this initiative with a presidential directive on AI safety.

The Significance of NIST’s AI RMF
Developed as part of a government drive for responsible AI usage—inclusive of fairness, transparency, and security—AI RMF provides guidance throughout the an AI’s lifecycle. It consists of four ‘Cores’: Govern, Map, Measure, and Manage, each comprising numerous categories and subcategories for thorough governance.

A crucial subcategory under 'Govern', identified as Govern 1.6, requires the development of a use case inventory. Cataloging AI utilization scenarios is an essential first step in comprehensively assessing AI applications and their associated risks, thereby supporting effective risk management and regulatory compliance. Crafting such inventories is also recommended by other regimes, such as the European Union's AI Act and guidance from the Office of Management and Budget (OMB) in the U.S.
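As a concrete (and purely hypothetical) illustration, a minimal use case inventory could be modeled like this in Python; the schema fields are illustrative, not prescribed by NIST:

    from dataclasses import dataclass, field

    @dataclass
    class AIUseCase:
        """One entry in a Govern 1.6-style AI use case inventory."""
        name: str
        owner: str
        purpose: str
        risk_level: str          # e.g. "low", "medium", "high"
        data_sources: list[str] = field(default_factory=list)

    inventory = [
        AIUseCase(
            name="resume-screening",
            owner="HR Analytics",
            purpose="Rank applicants for recruiter review",
            risk_level="high",
            data_sources=["applicant_tracking_system"],
        ),
    ]

    # A living inventory lets governance teams review every AI
    # application against the Map, Measure, and Manage functions.
    for uc in inventory:
        print(f"{uc.name}: risk={uc.risk_level}, owner={uc.owner}")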

AI RMF’s Practicality and Future Implications
Although not set as a formal standard or a mandatory requirement, the AI RMF is hailed as an optimal starting point for AI governance. Offering globally applicable strategies for wide-ranging use cases, from resume screening and credit risk prediction to fraud detection and unmanned vehicles, the AI RMF is considered a practical tool by Evi Fuelle, director at Credo AI. Through public comment and stakeholder involvement, the framework has matured into a corporate guide, with the potential to evolve into an industry-standard directive, especially among businesses interacting with the U.S. federal government.

Important Questions and Answers

1. What is the purpose of the AI Risk Management Framework?
The AI RMF is designed to help organizations manage risks associated with the deployment of AI systems. It provides guidance on maintaining ethical standards such as fairness, transparency, and security throughout the AI lifecycle.

2. Is the AI RMF mandatory for organizations?
No, the framework is not a formal standard or a mandatory requirement but is recommended as a starting point for AI governance.

3. How does the AI RMF align with other international regulations?
The AI RMF's recommendations echo other regimes, such as the European Union's AI Act and guidance from the Office of Management and Budget (OMB) in the U.S., which suggests a degree of international and cross-institution alignment on AI governance practices.

Key Challenges and Controversies

– Adoption and Compliance: Encouraging widespread adoption of voluntary frameworks can be challenging, especially for smaller organizations with limited resources.

– Balance of Innovation and Regulation: Striking the right balance between fostering AI innovation and ensuring ethical use can be difficult. Over-regulation may hinder technological advancement, while under-regulation could lead to unethical AI applications.

– Data Privacy: AI often relies on massive data sets, which may include sensitive information. Protecting this data while using AI is both a technical and ethical challenge.

– Job Displacement: One of the most significant societal concerns is that AI could automate jobs, leading to displacement of workers and wider economic implications.

Advantages and Disadvantages

Advantages:
– Enhanced Risk Management: The AI RMF can help organizations identify and mitigate potential risks, leading to safer AI deployments.
– Consumer Trust: Responsible AI usage as outlined by the framework can help build public and consumer trust.
– Regulatory Alignment: The AI RMF complements existing and forthcoming regulations, assisting organizations in maintaining compliance.

Disadvantages:
– Resource Requirements: Implementing the framework requires time, expertise, and potentially financial resources that some organizations may find challenging to allocate.
– Risk of Stifled Innovation: If the framework becomes too prescriptive or onerous, it could potentially stifle innovation by creating an overly complex regulatory environment.

Related Links:
For more information on the responsible use of AI, you may visit the National Institute of Standards and Technology's official website (nist.gov). Information about global AI governance initiatives can be found on the European Union's main website (europa.eu).

It is important to note that as AI continues to evolve, frameworks and regulations around its usage will likely develop alongside it, influencing future trends in AI governance and ethics.

Foundations of Automation and Intelligence

 

Today I am going to talk a little about what is happening and what is about to happen. There is a lot of talk about AI and AGI, neither of which is fully realized yet. What we are seeing are the results of work on Generative Pre-trained Transformers (GPTs) and Large Language Models (LLMs). They closely mimic the way the human mind uses some of its memory, through language prediction and reflection. But there is more to the process humans use than memory and conversational text recall.

ChatGPT and Bard are not models but human-machine interfaces.

The models behind them are GPT-3.5 and GPT-4 for ChatGPT, and PaLM/PaLM 2 for Bard.

Google is a world leader in machine learning and deep learning. It has led the way with TPU hardware, TensorFlow software, the PaLM 2 large language model and PaLM API, MakerSuite, Bard, Generative AI Studio and Vertex AI, Gen App Builder, Phenaki, SEANet, Google DeepMind's Gemini, and much more. Google is also behind the foundational research on Transformer architectures.

Transformer: A Novel Neural Network Architecture for Language Understanding

In "Attention Is All You Need" (2017), the Google researchers introduce the Transformer, a novel neural network architecture based on a self-attention mechanism that they believe to be particularly well suited for language understanding.
https://ai.googleblog.com/2017/08/transformer-novel-neural-network.html

Ref: Attention Is All You Need, https://arxiv.org/abs/1706.03762
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
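For readers who want to see the core idea in code, here is a minimal NumPy sketch of the scaled dot-product attention at the heart of the Transformer (toy dimensions, a single head, no masking):

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)                 # pairwise token affinities
        scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
        return weights @ V                              # weighted mix of values

    # Toy example: 4 tokens, model dimension 8.
    rng = np.random.default_rng(42)
    X = rng.normal(size=(4, 8))
    Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
    out = scaled_dot_product_attention(X @ Wq, X @ Wk, X @ Wv)
    print(out.shape)  # (4, 8): each token mixes information from all tokens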

Some have claimed that PaLM 2 is more powerful than GPT-4, but that it has been deliberately restrained to avoid legal and fairness problems.
https://github.com/Mooler0410/LLMsPracticalGuide

Monday, June 17, 2024

Welcome to the NIST Trustworthy & Responsible Artificial Intelligence Resource Center (AIRC).



The AIRC supports all AI actors in the development and deployment of trustworthy and responsible AI technologies. It supports and operationalizes the NIST AI Risk Management Framework (AI RMF 1.0) and its accompanying Playbook, and will grow with enhancements that enable an interactive, role-based experience providing access to a wide range of relevant AI resources.


The Department of Veterans Affairs Establishes a Trustworthy AI Framework

 

The National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF) provides AI organizations with a guiding structure to operate within, and outcomes to aspire towards, all connected to their specific contexts, use cases, and skillsets. The rights-affirming framework operationalizes trustworthy AI within a culture of responsible AI practice and use.  https://airc.nist.gov/AI_RMF_Knowledge_Base


NIST AI 600-1: AI RMF Generative AI Profile

This document can help organizations identify the unique risks posed by generative AI, and proposes actions for generative AI risk management that best align with their goals and priorities.


NIST AI 100-5: A Plan for Global Engagement on AI Standards

A Plan for Global Engagement on AI Standards (NIST AI 100-5) is designed to drive the worldwide development and implementation of AI-related consensus standards, cooperation and coordination, and information sharing.


NIST AI 100-4: Reducing Risks Posed by Synthetic Content

This publication informs, and is complementary to, a separate report on understanding the provenance and detection of synthetic content that AI EO Section 4.5(a) tasks NIST with providing to the White House. NIST AI 100-4 lays out methods for detecting, authenticating, and labeling synthetic content, including digital watermarking and metadata recording, in which information indicating the origin or history of content (such as an image or sound recording) is embedded in the content to assist in verifying its authenticity.
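As a toy illustration of one such technique, here is a minimal least-significant-bit (LSB) watermarking sketch in Python; it is one simple instance of the embedding methods the report surveys, not a method prescribed by NIST:

    import numpy as np

    def embed_watermark(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
        """Hide watermark bits in the least significant bit of pixels."""
        flat = image.flatten()  # flatten() returns a copy
        flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
        return flat.reshape(image.shape)

    def extract_watermark(image: np.ndarray, n_bits: int) -> np.ndarray:
        """Read the watermark back out of the pixel LSBs."""
        return image.flatten()[:n_bits] & 1

    # Toy 8x8 grayscale "image" and an 8-bit watermark.
    image = np.random.default_rng(0).integers(0, 256, (8, 8), dtype=np.uint8)
    bits = np.array([1, 0, 1, 1, 0, 1, 0, 0], dtype=np.uint8)

    marked = embed_watermark(image, bits)
    assert (extract_watermark(marked, bits.size) == bits).all()
    print("watermark embedded and recovered")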

Friday, May 24, 2024

WitnessAI is building guardrails for generative AI models - TechCrunch

WitnessAI, a new startup, is building tech to give companies greater control over the generative AI models they're deploying.