This post examines the power law, or long tail, phenomenon in generative AI software: the technology excels at producing the long tail of plausible output but struggles with precision. The author suggests that mastering the slope of working with AI means developing a sense of when to take the reins and when to let the AI lead, and likens creating to climbing: both are a struggle to reach a summit, and the method used to get there matters. Generative AI systems may improve and proliferate, but the decision of when to use them, and when to call forth our own creative natures, remains ours.
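To make the long-tail shape concrete, here is a minimal sketch (my illustration, not from the post) that samples item ranks from a Zipf-style power law: a handful of head items soak up most of the mass, while the hundreds of tail items split what remains thinly.

```python
import random
from collections import Counter

# Hypothetical sketch: draw item ranks from a Zipf-like power law
# (P(rank=k) proportional to 1/k) and compare head vs. tail mass.
random.seed(0)
N_ITEMS = 1000
weights = [1.0 / k for k in range(1, N_ITEMS + 1)]
draws = random.choices(range(1, N_ITEMS + 1), weights=weights, k=100_000)

counts = Counter(draws)
head = sum(c for rank, c in counts.items() if rank <= 10)
tail = sum(c for rank, c in counts.items() if rank > 10)

# The top 10 "head" items account for a large share of all draws,
# while the remaining 990 "tail" items divide the rest thinly.
print(f"head share: {head / len(draws):.2%}")
print(f"tail share: {tail / len(draws):.2%}")
```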
This post explores the parallel between audio feedback loops and the phenomenon of model collapse in large language models (LLMs) like GPT. Just as a feedback loop progressively degrades the original sound, an LLM trained on data generated by other AI models loses dimensionality. AI-generated content may pass for human-generated content in small quantities, but the disparities compound and eventually compromise the model's viability. The fate of future generative AI models is thus tied to the flourishing of human inspiration, and as the internet fills with artificial content, finding organic data to train new models may become ever more challenging.
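As a rough sketch of the mechanism (a toy illustration under assumed simplifications, not the post's own code), the loop below fits a Gaussian to data, resamples from the fit while clipping the tails, and refits on the synthetic samples. The spread decays generation by generation, much as feedback degrades a re-recorded signal.

```python
import random
import statistics

# Toy sketch of model collapse: each "generation" fits a Gaussian to
# its training data, then produces the next generation's training data
# by sampling from that fit while under-representing the tails (one
# assumed failure mode of generative models). Like audio feedback
# re-recording its own signal, information lost in one pass never
# comes back.
random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(5_000)]  # "human" data

for generation in range(1, 9):
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    print(f"gen {generation}: stdev of training data = {sigma:.3f}")
    # Sample the next generation, clipping anything beyond 2 sigma to
    # mimic a model that never reproduces rare, tail-end examples.
    data = [x for x in (random.gauss(mu, sigma) for _ in range(20_000))
            if abs(x - mu) <= 2 * sigma][:5_000]

# The spread shrinks every generation: the distribution collapses
# toward its mean as synthetic data feeds back into training.
```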
OpenAI, originally a nonprofit, has evolved into a capped-profit entity, with 99% of its staff now working under its for-profit arm, OpenAI LP. The company concluded that its nonprofit structure was impairing its ability to procure the key, and costly, ingredients of its AI research. OpenAI LP has raised huge sums of investment in the wake of ChatGPT's release, and the company is growing fast. The author argues that OpenAI's exploration of hybrid profit models is every bit as important as the innovation taking place in its AI research labs, because capitalism in its current form is a dangerous environment in which to bring forth and unleash powerful autonomous systems.
This post discusses the importance of responsible AI and offers a framework for evaluating and adopting responsible AI frameworks: assess a framework's community engagement and credibility, whether it requires real behavior change, and whether it acknowledges the possibility of failure. The post closes by highlighting a specific initiative, The Framework toward Responsible AI for Fundraising.