QLD AI SUMMIT 2024
Etienne’s Takeaways
We had the incredible opportunity to attend the QLD AI Summit 2024, featuring insights from leaders like Dr. Michelle Dickinson, Sholto Douglas (Google DeepMind), and Ben Brookes on the evolving AI landscape.
AI in Creativity & Medicine
Dr. Michelle Dickinson highlighted fascinating use cases, such as how AI was used in the film industry to simulate actors’ appearances across decades without the need for prosthetics! In the medical field, AI demonstrated a 30% improvement over human clinicians in diagnosing certain cancers, underscoring its role in healthcare innovation.
Google DeepMind & AGI Predictions
Sholto Douglas presented on the scalability of AI models, with projections that AGI (Artificial General Intelligence) could be realized by 2030. As models continue to scale, they require ever more data and compute, raising the stakes for AI development across industries; the hardware and power required are now viewed as a major stumbling block on the path to AGI.
Open Innovation in AI
Ben Brookes emphasized the power of open-source models for AI transparency and collaboration. However, the risks are real; removing safety guardrails could lead to misuse. Addressing governance and downstream safeguards is crucial as AI continues to transform industries.
Liquid Networks vs Transformer Models
At MIT, Daniela Rus has been leading cutting-edge research on liquid networks: smaller, task-focused models that are more efficient yet highly accurate. These models represent a promising leap beyond transformer models, especially for specialized AI tasks.
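For the technically curious, the core idea behind liquid networks can be sketched in a few lines. The snippet below is a minimal, illustrative implementation of a liquid time-constant (LTC) cell, where an input-dependent gate modulates each neuron's effective time constant; all sizes, parameter names, and initialisations are our own assumptions for illustration, not code from the summit or from MIT.

```python
import numpy as np

rng = np.random.default_rng(0)

class LTCCell:
    """Toy liquid time-constant cell (illustrative sketch only)."""

    def __init__(self, n_inputs, n_hidden):
        # Randomly initialised parameters of the gating network f(x, u)
        self.W_in = rng.normal(0, 0.1, (n_inputs, n_hidden))
        self.W_rec = rng.normal(0, 0.1, (n_hidden, n_hidden))
        self.b = np.zeros(n_hidden)
        self.tau = np.ones(n_hidden)          # base time constants
        self.A = rng.normal(0, 1.0, n_hidden) # per-neuron bias target

    def step(self, x, u, dt=0.1):
        # f(x, u): input-dependent gate that varies each neuron's
        # effective time constant -- the "liquid" part of the model
        f = np.tanh(u @ self.W_in + x @ self.W_rec + self.b)
        # Explicit Euler step of: dx/dt = -(1/tau + f) * x + f * A
        dxdt = -(1.0 / self.tau + f) * x + f * self.A
        return x + dt * dxdt

# Drive the cell with a toy input sequence
cell = LTCCell(n_inputs=4, n_hidden=8)
x = np.zeros(8)
for t in range(20):
    u = np.sin(t * 0.3) * np.ones(4)
    x = cell.step(x, u)
print(x.shape)  # (8,)
```

The appeal is that a compact recurrent cell like this, with dynamics that adapt to the input, can model sequences with far fewer parameters than a large transformer, which is the efficiency argument made for liquid networks.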
Igor’s Takeaways
It was great to attend the QLD AI Summit and learn about the latest advancements in this space. My key takeaways from the day are:
- It was insightful to observe the impressive progress in the field. That said, I still believe it’s important to maintain a practical perspective and manage expectations: the fundamental principles of model training remain largely the same, and the main difference lies in the scale of computing power and the volume of data.
- The ‘liquid network’ approach to AI demonstrates promising results in curbing the exponential growth in data and compute requirements.
I particularly enjoyed the ‘behind the scenes’ session led by a senior infrastructure engineer from Meta, where they shared insights into what it takes to train a large language model from the infrastructure perspective and the efficiencies required to operate at that scale.
To learn more about how we can help you with your Data & AI initiatives, contact our friendly team for a no-obligation discussion HERE