My name is Christie Mealo, and I’m a Senior AI Engineering Manager at CVS Health, where I focus on AI-driven health products, primarily in the area of diabetes management.
In addition to my work at CVS, I’m the founder of Orbit, an AI-powered contact book and networking app designed for value-based networking.
I also lead the Philly Data & AI Meetup group, help guide the Philly Tech Committee, and serve as a chair of Philly iConnect.
Through these roles, I’m deeply involved in organizing communities and events across Philadelphia and the larger East Coast, helping to foster collaboration and innovation in the tech space.
It's been a crazy year for those in tech—what's excited you most about recent developments?
It's been an incredible year in tech, and what excites me most is how generative AI has significantly lowered barriers to entry and creativity for so many people. This technology is empowering individuals with novel ideas to bring their visions to life in ways that were previously out of reach.
I believe this will shake up the economy in a positive way, leading to the development of a lot of innovative products and introducing new competitors into the market. While we’re undoubtedly in the midst of a hype cycle—or perhaps only at the beginning—it’s thrilling to see where this will take us in the coming years.
What role do you see generative AI playing across industries over the next 6-12 months, and where do you think it will have the biggest impact?
Generative AI is poised to significantly impact various industries over the next 6-12 months. While it's clear that it will continue to transform fields like copywriting, advertising, and creative content, its influence is much broader.
On one hand, generative AI is incredibly exciting because it lowers barriers to entry for innovation and creativity. Tools like ChatGPT, Claude, Gemini, and GitHub Copilot are not only enabling individuals and smaller companies to bring novel ideas to market more quickly but are also optimizing workflows. Personally, these tools have streamlined my day-to-day work, saving me approximately 10 hours each week by automating routine tasks and enhancing productivity.
However, there are valid concerns about the impact of generative AI, particularly its effect on the internet and on truth itself. As AI-generated content becomes more prevalent, there is a real risk of misinformation and the proliferation of fake information online. This not only threatens the integrity of the internet but also raises ethical questions that need urgent attention.
Interestingly, these challenges are creating new opportunities for AI ethics as a field. We’re likely to see significant job growth in areas focused on developing frameworks and tools to manage these risks, ensuring that AI is used responsibly and that the internet remains a trusted source of information.
While we are only getting started, the balance of benefits and challenges will ultimately shape the economic and social impact of generative AI. It’s an exciting time, but also one that demands careful consideration of the ethical implications.
How can companies effectively navigate the ethical considerations that come with the rapid advancements in AI technology?
As an ex-McKinsey person myself, I feel compelled to steal some good advice and guidelines they have provided for this one:
- Establish clear ethical guidelines: Companies should start by defining ethical principles that align with their values and business goals. These should cover critical areas such as bias and fairness, explainability, transparency, human oversight, data privacy, and security. For instance, ensuring that AI models do not inadvertently discriminate based on race, gender, or other protected characteristics is essential.
- Implement human oversight and accountability: It’s important to have a "human in the loop" to oversee AI decisions, particularly in high-stakes scenarios like financial services or healthcare. This ensures that human judgment is always applied to AI outputs, which helps mitigate the risks associated with automated decision-making.
- Continuous monitoring and adaptation: Ethical AI isn’t a one-time effort. Companies should establish ongoing monitoring systems to track the performance and impact of AI models over time. This includes regular audits to check for biases or inaccuracies that might emerge as the AI system interacts with new data.
- Educate and empower employees: Building a culture that supports ethical AI requires educating employees across the organization about the importance of these issues. Providing training on ethical AI practices and ensuring that teams are equipped with the necessary tools to implement these principles is crucial for long-term success.
Generative AI is a whole new ballgame, and we still have a lot to learn, but these pillars provide a good start.
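To make the continuous-monitoring pillar a bit more concrete, here is a minimal, hypothetical sketch of the kind of automated check an audit pipeline might run: it compares positive-outcome rates across groups in a batch of model decisions and flags the batch for human review when the gap is large. The groups, the sample data, and the 0.8 threshold are illustrative assumptions only, not a prescribed standard or a description of any specific company's practice.

```python
# Illustrative only: a minimal, hypothetical bias-monitoring check.
# Group labels, records, and the 0.8 threshold are made-up examples.
from collections import defaultdict

def approval_rates_by_group(records):
    """Compute the share of positive model outcomes for each group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        positives[group] += int(approved)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_flag(rates, threshold=0.8):
    """Flag if the lowest group rate falls below `threshold` times the
    highest (a simple "four-fifths rule"-style screen)."""
    lo, hi = min(rates.values()), max(rates.values())
    return hi > 0 and (lo / hi) < threshold

# Hypothetical batch of (group, model_decision) pairs from one audit window.
batch = [("A", True), ("A", True), ("A", False),
         ("B", True), ("B", False), ("B", False)]

rates = approval_rates_by_group(batch)
if disparate_impact_flag(rates):
    print(f"Review needed - approval rates by group: {rates}")
```

In practice a check like this would run on every scoring batch, with the results logged and routed to the human-in-the-loop reviewers described above rather than simply printed.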
What are you excited about at Generative AI Summit Toronto, and why is it important to get together with other leaders like this?
I'm really excited about the opportunity to connect with a diverse group of AI professionals and thought leaders at the Generative AI Summit in Toronto.
The event will feature cutting-edge discussions on the latest advancements in generative AI, and I’m particularly looking forward to the workshops and panels that provide opportunities to interact directly with experts. It’s important to gather with other leaders in the field to share insights, foster collaboration, and drive innovation in this rapidly evolving space.
Christie will be moderating at AI Accelerator Institute’s Generative AI Summit Toronto.
Join us on November 20, 2024.
Get your tickets below.