AI Bytes Newsletter Issue #28

GPT-4o mini, Adobe AI features, OpenAI's NDA practices, Crafting a comprehensive AI policy, Personalizing ChatGPT with Custom Instructions and more

Welcome to AI Bytes Newsletter Issue #28! This week, we're examining critical AI controversies and innovations. We'll explore the ongoing drama surrounding OpenAI's NDA practices and their implications for transparency in the tech industry. Our Skeptics Corner addresses the alarming trend of major AI companies pushing legal and ethical boundaries in their quest for data, including the YouTube transcript scandal involving Apple, Nvidia, and Anthropic. We're also excited to share insights on crafting comprehensive AI policies for businesses and offer essential tips for effectively using AI tools like ChatGPT. Plus, don't miss our featured tool of the week, Adobe Photoshop, and Mike's personal recommendations in the AI space. Join our digital campfire as we swap tales of AI triumphs and face-palms!

The Latest in AI

A Look into the Heart of AI

Featured Innovation
OpenAI’s new model “GPT-4o mini”

Hey everyone, it's Mike from Artificial Antics! This week, OpenAI launched its new GPT-4o mini model, available in ChatGPT as well as the OpenAI API. The model stands out as the most cost-effective and fastest OpenAI has released, addressing one of AI's biggest challenges: energy consumption. 🌍⚡
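For readers who want to try the new model from code, here's a minimal sketch of what a GPT-4o mini request looks like through the OpenAI Python SDK. The `build_request` helper and the prompt are illustrative, not part of any official example, and running the live call requires your own API key:

```python
# Illustrative sketch: calling GPT-4o mini via the OpenAI Python SDK.
# The helper below is a hypothetical convenience, not an SDK function.

def build_request(prompt: str) -> dict:
    """Assemble chat-completion parameters targeting GPT-4o mini."""
    return {
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": prompt}],
    }

# With the `openai` package installed and OPENAI_API_KEY set,
# the live call would look like:
#   from openai import OpenAI
#   client = OpenAI()
#   resp = client.chat.completions.create(**build_request("Hello!"))
#   print(resp.choices[0].message.content)

params = build_request("Summarize this week's AI news in one line.")
print(params["model"])  # -> gpt-4o-mini
```

Because the model name is just a string parameter, swapping between GPT-4o and GPT-4o mini to compare speed and cost is a one-line change.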

Join me as I watch @MattVidPro's initial review and share my first impressions. I'll discuss the potential of this smaller model for projects that need speed and affordability, and how it compares to larger models like GPT-4 and GPT-4o. Plus, we'll test its capabilities with some creative and complex prompts. 🧠💡

Check out my video review below:

Let us know what you think in the comments here, on X.com, or on LinkedIn.

If you’ve seen a game-changing innovation and want to share it with us, hit us up at [email protected]!

Ethical Considerations & Real-World Impact
The Silicon Valley Soap Opera: OpenAI's NDA Drama

We are really starting to feel like broken records here, or like we're stuck in Groundhog Day: OpenAI and Sam Altman appear to be at it again, further eroding public trust and renewing the outcry for the transparency and sound processes that lead folks to adopt AI tools and applications in the first place. Like a bad soap opera, we watch and wait from day to day and week to week as the world turns, hoping for some semblance of a guiding light to keep us on track. But with AI, the stakes involve far more than just the bold and the beautiful...

Recently, whistleblowers have come forward alleging that OpenAI's non-disclosure agreements (NDAs) are potentially illegal, claiming they restrict employees from reporting concerns to government regulators. This, like many of the company's previous actions, raises significant ethical questions, particularly around transparency and the protection of whistleblowers. According to reports, employees were required to sign severance, non-disparagement, and non-disclosure agreements that could discourage them from contacting regulatory bodies like the Securities and Exchange Commission (SEC), potentially forfeiting whistleblower protections and financial incentives if they chose to report misconduct.

These allegations highlight a concerning trend in the tech industry where some companies may prioritize protecting their public image and proprietary information over ethical practices and legal obligations. While OpenAI has asserted that it respects employees' rights, the implementation of such agreements could potentially suppress important disclosures about questionable activities within the company.

The involvement of Senator Chuck Grassley's office underscores the seriousness of these claims. Grassley has emphasized the importance of protecting national security interests and ensuring that whistleblowers can report wrongdoing without fear of retaliation. This situation raises further questions about the ethical responsibilities of companies like OpenAI in fostering a culture of transparency and accountability.

As scrutiny of OpenAI's exit agreements and NDAs intensifies, it's incredibly important for the tech industry to reflect on its ethical and legal obligations, as well as public perception. Ensuring employees can report concerns without fear of retribution is both a legal requirement and a moral imperative, supporting the industry's integrity and credibility. Transparency and ethical practices are fundamental to building a future where technology serves humanity responsibly. With AI's growing influence, companies developing these technologies must prioritize ethics and foster an environment of open dialogue and accountability. In this ongoing tech industry drama, it's clear that the 'Days of our AI Lives' are far from over - so stay tuned until next week to see if OpenAI will turn over a new leaf or if we're in for another cliffhanger in the world of ethical AI development. After all, AI is meant for more than just the young and the restless, but for all of us to safely enjoy.

AI Tool of the Week - Adobe Photoshop

The Toolbox for Navigating the AI Landscape

Adobe Photoshop, renowned for its industry-leading image editing capabilities, has embraced the power of artificial intelligence to revolutionize the creative process. Let’s explore the cutting-edge AI features that are transforming the way we create, edit, and enhance images.

Adobe Photoshop’s AI-Powered Features:

Generative Fill: Add, remove, and expand content seamlessly for lifelike results.

Generate Background: Instantly replace backgrounds to blend with the subject.

Content-Aware Fill: Effortlessly remove unwanted objects and fill gaps intelligently.

Combine Photos: Cut out, move, and blend images for unique compositions.

Remix Your Pics: Quick edits, effects, and background swaps for desired looks.

Create the Unexpected: Experiment with blending photos, graphics, and colors creatively.

Digital Brushes: Realistic drawing, painting, and doodling with various digital brushes.

If you’ve got a suggestion on tools we should check out, email us at [email protected] and let us know.

Rico's Roundup

Critical Insights and Curated Content from Rico

Skeptics Corner
The AI Data Heist: When Innovation Meets Misconduct

In the race to dominate the artificial intelligence landscape, tech giants are increasingly finding themselves on the wrong side of the law. The year 2024 has unveiled a disturbing trend: major AI companies, in their relentless pursuit of data to feed their ever-hungry algorithms, are crossing legal and ethical boundaries with alarming frequency. From unauthorized use of YouTube transcripts to the exploitation of copyrighted music, the AI industry's data practices are under intense and increasing scrutiny. The further we investigate these practices, the more we unveil a complex and troubling Silicon Valley saga where the relentless pursuit of innovation is colliding head-on with intellectual property rights and ethical considerations. This week I started by focusing on the story of Apple training AI models on YouTube content without consent, but I believe it is important to lay out the patterns of behavior we have observed since the inception of this newsletter.

The YouTube Transcript Scandal: Apple, Nvidia, and Anthropic in the Spotlight

In a shocking revelation, industry titans Apple, Nvidia, and Anthropic have been caught red-handed using YouTube transcripts without permission. The scale of this unauthorized data usage is staggering, involving content from over 48,000 YouTube channels. This vast dataset, encompassing popular creators and major publishers alike, was used to train AI models, raising serious questions about the ethical practices of these tech behemoths.

Dr. Emily Chen, an AI ethics expert at Stanford University, comments, "This incident exemplifies the 'move fast and break things' mentality that has pervaded Silicon Valley for too long. It's a blatant disregard for content creators' rights and a stark reminder of the ethical challenges facing AI development."

The Battle of the Written Word: OpenAI vs. The New York Times

Meanwhile, OpenAI, a frontrunner in AI innovation, finds itself embroiled in a legal tussle with The New York Times. The venerable newspaper alleges that OpenAI used its content without authorization to train its AI models. This high-profile case has become a lightning rod for the ongoing debate between AI innovation and intellectual property protection.

Media law expert Jonathan Franks notes, "This case could set a precedent for how we balance the needs of AI development with the rights of content creators. It's a clash of titans that will shape the future of both AI and journalism."

Sour Notes in the Music Industry: Suno and Udio Face the Music

The AI controversy has hit a crescendo in the music world, with startups Suno and Udio facing legal action for using copyrighted music recordings without permission. This unauthorized use has struck a discordant note with the music industry, known for its vigorous defense of intellectual property.

Grammy-winning producer Sarah Johnson expresses her frustration: "These AI companies are essentially stealing our work. It's not just about royalties; it's about respecting the creative process and the artists behind the music."

A Symphony of Misconduct: The Broader Pattern

These incidents are clearly not isolated; they form a troubling pattern of behavior in the AI industry. Companies, in their haste to push the boundaries of technology, are routinely sidestepping legal and ethical considerations, knowing that the policies and laws take time to catch up. This pattern includes:

1. Data Exploitation: The unauthorized use of vast datasets, infringing on the rights of content creators and publishers.

2. Privacy Violations: Bypassing consent requirements to access valuable personal data for AI training.

3. Intellectual Property Infringement: The widespread use of copyrighted material without permission or compensation.

Changing the Tune: The Need for Accountability

As this and many other AI controversies unfold, it's clear that stricter regulations and robust enforcement mechanisms are desperately needed. We keep repeating this like some sort of sick mantra, as if begging to be governed harder, but that isn't quite what we want. I truly believe people do want laws with real teeth and some degree of oversight, but mostly they want barriers to entry removed and these companies held responsible for the many IP thefts they are committing for their own benefit. These actions leave the little guys who created all of the content buried in the cold ground, built upon like foundation blocks that prop up the corporations basking in the sunlight and riches. Governments and regulatory bodies must take center stage to ensure companies adhere to legal and ethical standards in their data practices, protecting the many voices, minds, and creations these companies are scraping and monetizing.

I’d have to agree with technology policy advisor Maria Gonzalez, who suggests, "We need a three-pronged approach: clear guidelines on permissible data use, strengthened enforcement mechanisms with real teeth, and fostering collaboration between AI developers and content creators."

The Next Phase: Moving Towards Ethical AI

The many controversies surrounding tech giants' data practices serve as yet another critical wake-up call for the AI industry. As this situation continues to unfold, it's evident that the path to truly groundbreaking and beneficial AI must be paved with respect for creators, commitment to legal compliance, and unwavering ethical standards. Moving forward, the focus must shift towards implementing comprehensive reforms that balance rapid technological advancement with the protection of individual rights and creative integrity. This includes enhancing transparency, establishing clear accountability measures, investing in ethical AI research, and fostering a culture of responsible innovation within tech companies. To shape an AI future that is both innovative and ethically sound, it is imperative for all stakeholders—tech companies, policymakers, creators, and the public—to engage in ongoing dialogue and collaboration. Only through this collective effort can we ensure that AI development respects intellectual property, upholds ethical standards, and truly benefits society as a whole.

We're always eager to hear your thoughts on this and other complex and rapidly evolving topics. How do you think we can balance innovation with ethical considerations in AI development?

Share your insights and join the conversation. For more in-depth discussions on AI's impact on creativity and society, don't forget to explore our previous episodes of Artificial Antics. Stay tuned for our upcoming installments, where we'll continue to unpack the multifaceted role of AI in our daily lives and its implications for the future.

Must-Read Articles

New ways to get creative with Microsoft Designer, powered by AI | Microsoft 365 Blog

Mistral releases Codestral Mamba for faster, longer code generation

Mike's Musings

Mike’s Weekly Business Byte

Crafting a Comprehensive AI Policy: A Strategic Guide for Businesses

In today’s rapidly evolving technological landscape, integrating AI into business operations is becoming increasingly common. However, this integration brings ethical, legal, and operational challenges. Establishing a comprehensive AI policy is essential for navigating these challenges effectively.

Key Values and Benefits:

✔️ Legal and Regulatory Compliance
✔️ Ethical and Responsible AI Use
✔️ Data Privacy and Security
✔️ Risk Management
✔️ Enhancing Employee Confidence and Morale
✔️ Building Trust with Customers and Partners
✔️ Driving Innovation and Competitive Advantage
✔️ Facilitating Continuous Improvement

Here’s a strategic approach to developing an AI policy that aligns with your business goals and ethical standards:

Assembling the Right Team

An effective AI policy starts with diverse perspectives. Form a cross-functional team with members from IT, legal, HR, operations, and senior management. At Clarity, we call these people AI Ambassadors. This ensures your policy addresses all aspects of AI implementation. I’d suggest keeping the group to 10 people, unless a larger group really makes sense.

When in Doubt: Use the same rules you’d use for PII, customer, and business data in your standard handbook. For instance, your employees probably shouldn’t be sharing your customer lists with the outside world, AND they also shouldn’t be inputting them into AI tools!
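To make that rule concrete, here is a minimal, illustrative sketch of a pre-flight check that flags obvious PII before text leaves the building. The patterns and function names are hypothetical placeholders; a real policy would rely on a vetted data-loss-prevention tool, not a couple of regexes:

```python
# Illustrative sketch only: a naive pre-flight PII check before text
# is pasted into an external AI tool. Patterns are hypothetical
# placeholders and far from exhaustive.
import re

PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US SSN-style numbers
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email addresses
]

def contains_pii(text: str) -> bool:
    """Return True if the text matches any known PII pattern."""
    return any(p.search(text) for p in PII_PATTERNS)

print(contains_pii("Ping me at jane.doe@example.com"))  # True
print(contains_pii("Quarterly revenue grew 12%"))       # False
```

Even a simple gate like this makes the handbook rule enforceable rather than aspirational: if the check fires, the prompt never reaches the AI tool.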

Building Knowledge and Awareness

Educating key stakeholders on AI fundamentals is crucial. Organize training sessions or workshops to establish a common understanding of AI concepts, benefits, and risks.

Defining Policy Objectives and Scope

Clearly define your AI policy goals. Whether it’s responsible AI use, legal compliance, or fostering innovation, your objectives will guide policy development. Also, determine which AI technologies and business areas the policy will cover.

Establishing Ethical Foundations

Identify core ethical principles like fairness, transparency, accountability, and respect for human rights. These principles will guide all AI-related decisions and activities.

Mapping AI Use Cases and Risks

Identify AI applications within your organization and conduct a thorough risk assessment. Consider biases, security vulnerabilities, and unintended consequences for each use case.

Creating Governance Structures

Define clear roles and responsibilities for AI development, deployment, and monitoring. Establish accountability measures and decision-making processes for effective oversight.

Developing Guidelines for Responsible AI Use

Create specific guidelines addressing:

✔️ Data management and privacy protection
✔️ Intellectual property considerations
✔️ Ethical use of AI tools
✔️ Security measures
✔️ Output review and verification processes

Implementing Monitoring and Evaluation Mechanisms

Establish procedures for ongoing monitoring of AI systems’ performance, impact, and adherence to ethical standards. Regularly review and update the policy to keep pace with technological advancements and regulatory changes.

Drafting and Refining the Policy Document

Develop a comprehensive, clear, and accessible policy document. Include definitions of key AI terminology for common understanding across the organization. Circulate the draft for feedback and make necessary revisions.

Securing Leadership Buy-In

Present the final draft to leadership for approval and secure resources for implementation. Leadership support is crucial for the policy’s success.

Communication and Implementation

Create a robust communication plan to disseminate the policy throughout the organization. Use company-wide emails, presentations, and training sessions to ensure all employees understand and can implement the AI policy.

Providing Ongoing Support

Offer continuous training and technical support to help employees navigate AI implementation complexities in line with policy guidelines.

Maintaining Relevance

Regularly assess the policy’s effectiveness, stay informed about AI advancements and regulatory changes, and update the policy as needed to ensure it remains relevant.

By following this strategic approach, businesses can create a comprehensive AI policy that promotes responsible AI use, manages risks, and ensures compliance. A well-crafted AI policy not only protects your organization but also positions it to leverage AI’s transformative potential ethically and sustainably.

Download our example AI Policy here and feel free to edit it and base your company’s policy on the template:

AI Usage Policy.pdf (450.52 KB • PDF file)

Mike’s AI Tip

Essential Tips for Using AI Tools Effectively

Personalizing ChatGPT with Custom Instructions

Custom instructions let you shape ChatGPT’s responses to better suit your needs. Here’s how to get the most out of this feature:

✔️ Be specific about your background and needs
✔️ Set clear response preferences
✔️ Use verbosity and language proficiency levels
✔️ Request specific content elements
✔️ Customize for your field
✔️ Refine over time
✔️ Consider privacy
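As a concrete illustration of the tips above, ChatGPT's Custom Instructions boil down to two free-text fields: what the model should know about you, and how it should respond. The wording below is a hypothetical example of filling them in, not a recommended template:

```python
# Hypothetical example of the two free-text fields behind ChatGPT's
# Custom Instructions feature; the content is illustrative only.
custom_instructions = {
    "what_to_know_about_you": (
        "I'm a small-business owner evaluating AI tools. "
        "Assume basic technical literacy but no programming background."
    ),
    "how_to_respond": (
        "Keep answers under 200 words, use plain language, "
        "and end with one concrete next step."
    ),
}

for field, text in custom_instructions.items():
    print(f"{field}: {text[:40]}...")
```

Notice how the example bakes in several of the checklist items at once: background, verbosity, language level, and a requested content element (the closing next step).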

This week, I’ve written a comprehensive article on Custom Instructions that walks you through how to generate the optimal instructions for your workflows. Check out my article here:

Mike Favorites

Sharing personal recommendations for technology, AI books, podcasts, or documentaries.

Book: AI as Your Teammate

"AI as Your Teammate" by Evan Ryan is a practical guide for entrepreneurs and business owners on leveraging artificial intelligence to scale their businesses without significantly increasing payroll costs. What I like about this book is that the author offers a very wide view of AI: he's not just focusing on LLMs or even predictive analytics; he shows the power of automation as a whole and how it fits into modern business. Check it out here: https://www.amazon.com/AI-Your-Teammate-Electrify-Increasing/dp/1544526326

Tool: Perplexity Enterprise Pro

I’m currently piloting the enterprise pro version of Perplexity and am really enjoying it. It’s not inexpensive ($40 per seat monthly), but I appreciate things like the added privacy, security, and centralized management, AND the ability to hook into multiple models like Claude 3.5 Sonnet, which I’m a big fan of. Will it become my daily driver, and will I cancel my ChatGPT Team account? Unlikely…

If you’ve got something you think I’d like, hit me up at [email protected] 

Latest Podcast Episode

Connect & Share

Stay Updated

Thank You!

Thanks to our listeners and followers! Continue to explore AI with us. More at Artificial Antics (antics.tv).

Quote of the week: “By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it” — Eliezer Yudkowsky