Keeping up with an industry as fast-moving as artificial intelligence is a tall order. So until an AI can do it for you, here’s a handy roundup of the past week’s stories in the world of machine learning, along with notable research and experiments we didn’t cover on their own.
This week, the movers and shakers of the AI industry, including OpenAI CEO Sam Altman, embarked on a goodwill tour with policymakers, making the case for their respective visions of AI regulation. Speaking to reporters in London, Altman warned that the EU’s proposed AI law, due to be finalized next year, could prompt OpenAI to eventually pull its services from the bloc.
“We will try to comply, but if we can’t comply, we will shut down,” he said.
Google CEO Sundar Pichai, also in London, emphasized the need for “proper” AI regulatory sandboxes that don’t stifle innovation. And Microsoft’s Brad Smith, meeting with lawmakers in Washington, proposed a five-point blueprint for the public governance of AI.
To the extent that there is a common thread, the tech giants have expressed a willingness to be regulated, as long as it doesn’t interfere with their commercial ambitions. Smith, for example, declined to address the unresolved legal question of whether training AI on copyrighted data (which Microsoft does) is permissible under the fair use doctrine in the U.S. Stringent licensing requirements around AI training data, were they imposed at the federal level, could prove costly for Microsoft and its rivals.
For his part, Altman appeared to take issue with provisions of the AI Act that would require companies to publish summaries of the copyrighted data they used to train their AI models, and that would make them partly responsible for how the systems are deployed downstream. Requirements to reduce the energy and resource consumption of AI training, a notoriously compute-intensive process, have also been called into question.
The regulatory path abroad remains uncertain. But in the U.S., OpenAI and its cohort may yet get their way. Last week, Altman wooed members of the Senate Judiciary Committee with carefully crafted statements about the dangers of artificial intelligence and his recommendations for regulating it. Senator John Kennedy (R-LA) was particularly deferential: “Here’s your chance, folks, to tell us how to make this right… Speak in plain English and tell us what rules need to be implemented,” he said.
In comments to The Daily Beast, Suresh Venkatasubramanian, director of Brown University’s Center for Technology Responsibility, perhaps summed it up best: “We don’t ask arsonists to be in charge of the fire department.” And yet that’s what’s at risk of happening here with AI. It will be up to lawmakers to resist the sweet talk of tech execs and clamp down where needed. Only time will tell whether they do.
Here are the other AI headlines of note from the past few days:
- ChatGPT comes to more devices: Despite being available only in the U.S. and only on iOS before expanding to 11 more global markets, OpenAI’s ChatGPT app is off to a great start, Sarah writes. App trackers say it has already surpassed half a million downloads in its first six days. That ranks it among the best-performing new app releases of both this year and last, topped only by the February 2022 arrival of the Trump-backed Truth Social.
- OpenAI suggests a regulator: Artificial intelligence is evolving quickly enough, and the risks it poses are clear enough, that OpenAI’s leadership believes the world needs an international regulatory body akin to the one governing nuclear power. OpenAI’s co-founders argued this week that the pace of innovation in AI is too fast to expect existing authorities to adequately rein in the technology, so new ones are needed.
- Generative AI comes to Google Search: Google announced this week that it’s starting to open up access to new generative AI capabilities in Search, after teasing them at its I/O event earlier in the month. With the new update, Google says, users can quickly get up to speed on a new or complex topic, surface quick tips for specific questions, or get deep information like customer ratings and pricing on product searches.
- TikTok tests a chatbot: Chatbots are all the rage, so it’s no surprise to learn that TikTok is experimenting with a bot of its own. Named “Tako,” the bot is undergoing limited testing in select markets, where it appears on the right-hand side of the TikTok interface, above a user’s profile and the buttons for likes, comments and bookmarks. When tapped, users can ask Tako questions about the video they’re watching or discover new content by asking for recommendations.
- Google commits to the AI Pact: Google’s Sundar Pichai has agreed to work with lawmakers in Europe on what’s being referred to as the “AI Pact,” apparently a stopgap set of voluntary rules or standards while formal AI regulations are developed. According to a memo, the bloc intends to launch the AI Pact “involving all major European and non-European players in the field of AI on a voluntary basis” ahead of the legal deadline of the aforementioned pan-EU AI law.
- People, but powered by AI: With Spotify’s AI DJ, the company trained an AI on the voice of a real person: its head of cultural partnerships and podcast host, Xavier “X” Jernigan. Now the streamer may turn that same technology to advertising, it seems. According to statements made by The Ringer founder Bill Simmons, the streaming service is developing AI technology that will be able to use a podcast host’s voice to create host-read ads, without the host actually having to read and record the ad copy.
- Product images via generative AI: At its Google Marketing Live event this week, Google announced the launch of Product Studio, a new tool that lets merchants easily create product imagery using generative AI. Brands will be able to create images within Merchant Center Next, Google’s platform for businesses to manage how their products appear in Google Search.
- Microsoft builds a chatbot into Windows: Microsoft is building its ChatGPT-based Bing experience right into Windows 11, and adding a few twists that let users ask the agent to help them navigate the operating system. The new Windows Copilot aims to make it easier for Windows users to find and change settings without having to dig through submenus. The tool will also let users summarize content from the clipboard or compose text.
- Anthropic raises more money: Anthropic, the prominent AI startup co-founded by OpenAI veterans, has raised $450 million in a Series C funding round led by Spark Capital. Anthropic didn’t disclose the valuation the round placed on its business, but a pitch deck we obtained in March suggests it could be in the ballpark of $4 billion.
- Adobe brings generative AI to Photoshop: Photoshop got an infusion of generative AI this week with the addition of features that let users stretch images beyond their borders with AI-generated backgrounds, add objects to images, or use a new generative fill feature to remove objects with far more precision than the previously available content-aware fill. For now, the features are only available in the beta version of Photoshop, but they’re already causing some graphic designers to worry about the future of their industry.
Other machine learnings
Bill Gates may not be an expert in artificial intelligence, but he is very rich, and he’s been right about things before. Turns out he’s bullish on personal AI agents, as he told Fortune: “Whoever wins the personal agent, that’s the big thing, because you will never go to a search site again, you will never go to a productivity site, you’ll never go to Amazon again.” Exactly how this would come about isn’t spelled out, and his instinct that people dislike the friction of using a search engine or productivity site might well be off base.
Risk assessment in AI models is an evolving science, which is to say we know almost nothing about it. Google DeepMind (the newly formed superunit comprising Google Brain and DeepMind) and collaborators around the world are trying to move the ball forward, and have produced a model evaluation framework for “extreme risks” such as “strong skills in manipulation, deception, cyber-offense or other dangerous capabilities.” Well, it’s a start.

Image Credits: SLAC
Particle physicists are finding interesting ways to apply machine learning to their work: “We’ve shown that we can infer extremely complex, high-dimensional beam shapes from astonishingly small amounts of data,” says SLAC’s Auralee Edelen. The team created a model that helps them predict the shape of a particle beam in an accelerator, something that normally takes thousands of data points and lots of compute time. This approach is far more efficient and could help make accelerators everywhere easier to use. Next up: demonstrating the algorithm experimentally on reconstructing full 6D phase-space distributions. Yes!
Adobe Research and MIT have teamed up on an interesting computer vision problem: telling which pixels in an image represent the same material. Since an object can comprise multiple materials as well as colors and other visual aspects, this is a pretty subtle distinction, but also an intuitive one. They had to build a new synthetic dataset to do it, but at first it didn’t work. So they ended up fine-tuning an existing computer vision model on that data, and it worked well. Why is this useful? Hard to say, but it’s cool.

Box 1: material selection; 2: source video; 3: segmentation; 4: mask. Image Credits: Adobe/MIT
Large language models are usually trained primarily on English text, for many reasons, but obviously the better they work in Spanish, Japanese and Hindi, the better. BLOOMChat is a new model built on top of BLOOM that currently works in 46 languages and is competitive with GPT-4 and others. This is still pretty experimental, so don’t go to production with it, but it could be great for testing an AI-adjacent product in multiple languages.
NASA just announced a new crop of SBIR II funding, and there are a couple of interesting AI bits and pieces:
Geolabe detects and predicts groundwater variation using AI trained on satellite data, and hopes to apply the model to a new NASA satellite constellation launching later this year.
Zeus AI algorithmically produces “3D atmospheric profiles” based on satellite imagery, essentially a thicker version of the 2D maps we already have of temperature, humidity and so on.
In space, computing power is very limited, and while we can run some inference there, training is off the table. But IEEE researchers want to build a SWaP-efficient neuromorphic processor for training AI models in situ.
Robots operating autonomously in high-stakes situations generally need a human minder, and Picknik is looking at having such robots visually communicate their intentions, such as how they’re going to reach out to open a door, so that the minder doesn’t have to intervene as much. Probably a good idea.