A hub for startup news, trends, and insights, covering the global startup ecosystem for founders, investors, and innovators. Community: @startupdis Buy Ads: @strategy (this is our only account).
🔘 Founders map the next AI winners at AI Challenges
The AI Challenges online conference gathered visionary founders to outline the unsolved problems that will decide which AI startups survive the next decade, from data access and inference costs to human alignment and product feedback loops. Here’s a concise, investor-facing digest.
🔸 “Data Freedom”: confidential, siloed data is the biggest blocker; secure, auditable data pipes will unlock healthcare, legal, and enterprise verticals.
🔸 Inference & LLM training costs: fine-tuning and per-query inference are inefficient and expensive; parameter-efficient adaptation and hybrid edge/cloud designs are prime infra bets.
🔸 The AI Dealmaker: an agent for VCs that automates sourcing, diligence, and portfolio monitoring; retention hinges on measurable time-saved.
🔸 Turning LLM feedback into product outcomes: teams need SDKs and control planes to convert user signals into reliable model and UX improvements.
🔸 AI-native economy: autonomous agents will create new marketplaces (agent payroll, billing, reputation) and reshape how work is exchanged.
🔸 The Future of Human-AI Relationship: empathy, trust, and explainability are now product features, not optional extras.
🔸 Pitch session: five strong AI-native startups presented (themes: agent platforms, data-vaults, feedback control planes, vertical AI employees, cost-efficient inference).
🔸 Online afterparty hosted by venture investor DJ Mak.
A clear signal for investors: bet on speed + trust + economics. Products that prove time saved, keep customer data under the customer's control, and make inference affordable will act like profitable, accountable “AI employees.”
🧠 Neuralink patient controls robotic arm with his thoughts
Nick Ray, who suffers from amyotrophic lateral sclerosis (ALS), has become the first Neuralink patient to control a robotic arm purely through thought. Implanted with Neuralink’s brain interface in July 2025, he recently connected it to Tesla’s Optimus robot, and the results are stunning.
🔸 Nick reports that the delay between his thoughts and the arm’s movement is “almost unnoticeable.”
🔸 For the first time in years, he was able to put on a cap, heat up nuggets in the microwave, and eat by himself.
🔸 He’s also learned to slowly control his wheelchair through the same neural interface.
🔸 The experiment marks the first integration of Neuralink’s implant with a humanoid robot, showing early progress toward mind-controlled assistive machines.
A powerful glimpse into the future, where human thought could directly control the tools that restore independence.
⚛️ Quantum pioneers win the 2025 Nobel Prize in Physics
The Nobel Committee awarded this year’s Physics Prize to John Clarke, Michel Devoret, and John Martinis for experiments showing quantum mechanics at work in everyday electronic circuits, a breakthrough that paved the way for today’s quantum computers.
🔸 In the 1980s, their superconducting circuits proved that quantum effects can appear in macroscopic systems.
🔸 Their discoveries laid the foundation for quantum processors, sensors, and encryption technologies.
🔸 Devoret is now chief scientist at Google Quantum AI, while Martinis once led the Google Quantum Lab.
🔸 Clarke, based at UC Berkeley, helped design early quantum devices that bridge physics and engineering.
🔸 The trio will share the 11 million SEK ($1.2M) prize.
Quantum theory finally leaves the lab and the engineers who made it practical are getting their due.
🎥 Sora Watermark Remover lets TikTokers clean videos
A new tool called Sora Watermark Remover allows creators to remove watermarks from Sora 2 videos while keeping 100% of the original quality.
🔸 OpenAI’s Sora 2 is freely available, but videos come with watermarks that can be annoying for social sharing.
🔸 The service simply requires uploading a video; it removes the watermark automatically.
🔸 This is especially useful for TikTokers and other social creators who want polished AI-generated content without branding distractions.
Now Sora 2 videos can look professional without the watermark hassle.
🤖 Mercor hits record growth and rewrites the AI playbook
In just 17 months, Mercor rocketed from $1M to $500M in revenue, becoming the fastest-growing company in AI history. What began as a remote engineer marketplace is now critical AI infrastructure connecting labs with domain experts who help train and evaluate models.
🔸 From freelance coders to scientists, doctors, and lawyers, Mercor turned expert knowledge into AI training fuel.
🔸 Now partners with OpenAI, Google, Meta, Microsoft, Amazon, and Nvidia.
🔸 Raised $100M at a $2B valuation, now targeting $10B in its next round.
🔸 Already profitable with $6M net profit in the first half of the year.
🔸 Boasts 1600% net retention and zero churn, an almost impossible metric.
🔸 Founded by Thiel Fellows aged just 22–23, now building tools for RL and AI expert marketplaces.
Mercor scaled faster than any AI startup ever, but with OpenAI launching a rival hiring platform, the next round might be a fight for the ecosystem itself.
🎤 NeuTTS-Air kills ElevenLabs’ moat: open-source voice cloning for everyone
A new open-source model called NeuTTS-Air is going viral for cloning any voice locally: no cloud, no paywalls, and total privacy. It can run on a PC or even a smartphone, using just a 3-second audio sample to generate natural, high-quality speech.
🔸 748M-parameter model, fine-tuned for speed and realism, rivals ElevenLabs and OpenAI’s Voice Engine
🔸 Works fully offline, ensuring voice data never leaves your device
🔸 Can generate entire podcasts, narrations, or dialogues from a single short recording
🔸 Released under an open-source license, meaning anyone can build apps, chatbots, or AI creators on top
If ElevenLabs dominated with convenience and polish, NeuTTS-Air is betting on freedom and decentralization; the future of voice AI may no longer need a server.
🎓 Stanford drops free AI lectures from Andrew Ng
Stanford has begun releasing a new open series of AI lectures led by Andrew Ng, the Coursera founder and pioneer of modern machine learning education.
🔸 Covers neural network training, AI agent design, and career-building in AI
🔸 Taught by Andrew Ng himself, returning to his Stanford teaching roots
🔸 Includes hands-on examples and practical labs using current AI frameworks
🔸 Designed for both beginners and professionals seeking to deepen ML foundations
🔸 Available free online, part of Stanford’s push to make AI education globally accessible
Decade after decade, Andrew Ng keeps doing what AI models can’t: teaching humans how to think like machines.
👩‍🎨 Google introduces PASTA - an AI agent for iterative image generation
Google Research has unveiled PASTA, a Preference Adaptive and Sequential Text-to-image Agent that interacts with users step by step, refining visuals through dialogue instead of raw prompt tweaking.
Unlike traditional models, PASTA learns from user sessions rather than isolated “prompt–image” pairs. It studies how prompts evolve and which images people ultimately choose - effectively learning from the creative process itself.
🔸 The team open-sourced the dataset of these real user sessions.
🔸 Two auxiliary models were trained: one predicts user satisfaction with generated images, the other estimates which image the user would select.
🔸 Using these simulators, researchers produced 30K additional synthetic sessions to train the main agent.
🔸 Training used Implicit Q-Learning (IQL), with the goal of maximizing total user satisfaction over multiple iterations (a minimal sketch of the core loss is at the end of this post).
The result is a genuine text2image agent, not just a generator - one that learns to collaborate. It’s still research-only, but the dataset is available for exploration on Kaggle.
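For the curious, here is a minimal sketch of the expectile loss at the heart of Implicit Q-Learning, the objective the PASTA agent was reportedly trained with. This is not Google's training code; the tensor values and names below are made up for illustration.

```python
import torch

def expectile_loss(q_value: torch.Tensor, value: torch.Tensor, tau: float = 0.7) -> torch.Tensor:
    """IQL's asymmetric (expectile) regression: pushes V(s) toward an upper
    expectile of Q(s, a), so the value estimate reflects the better actions
    seen in offline sessions without querying out-of-distribution ones."""
    diff = q_value - value                          # advantage-like residual
    weight = torch.abs(tau - (diff < 0).float())    # tau if diff >= 0, else 1 - tau
    return (weight * diff.pow(2)).mean()

# Hypothetical usage: q_value = a frozen critic's estimate of cumulative user
# satisfaction for the shown images; value = V(s) for the current session state.
q_value = torch.tensor([0.8, 0.2, 0.6])
value = torch.tensor([0.5, 0.5, 0.5])
print(expectile_loss(q_value, value))
```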
⚙️ Stapply: Your AI job agent
A new AI tool called Stapply promises to make job hunting effortless by acting as a personal AI recruiter, finding, ranking, and even applying to roles for you.
🔸 Indexes all existing vacancies matching your search, across multiple sources in real time.
🔸 Ranks results based on your preferences and career goals.
🔸 Automatically fills out application forms and attaches your resume.
🔸 Sends the applications directly, eliminating tedious manual steps.
🔸 Works as a personal assistant that adapts to your job-hunting style.
By taking over both the search and application process, Stapply could turn the often frustrating task of job hunting into a seamless, AI-powered matchmaking experience.
🛠 Thinking Machines launches Tinker for AI fine-tuning
Thinking Machines, founded by OpenAI veterans including Mira Murati and John Schulman, has unveiled its first product: Tinker, a platform that makes it simple to fine-tune large AI models without heavyweight infrastructure. The startup is already valued at $12B after a $2B seed round.
🔸 Provides API-based fine-tuning for models like Llama and Qwen.
🔸 Automates GPU cluster setup, training stability, and deployment.
🔸 Lets researchers export customized models for their own use.
🔸 Free in beta, with plans for future monetization.
🔸 Aims to democratize access to tuning tools once limited to big tech labs.
🔸 Raises both excitement and safety concerns over broader model manipulation.
Tinker is positioning Thinking Machines as a key infrastructure player in the AI stack, where the competitive edge may come not from training bigger models, but from adapting existing ones.
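Tinker's actual API isn't documented in the announcement, so as a generic illustration of the kind of parameter-efficient adaptation such platforms automate, here is a LoRA sketch using the open Hugging Face PEFT library. The base model name is just an example, not something from Tinker.

```python
# Generic LoRA fine-tuning setup (not Tinker's API): adapt a small open model
# by training low-rank adapters instead of all of its weights.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

base = "Qwen/Qwen2.5-0.5B"                      # small open model, for illustration only
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora = LoraConfig(task_type=TaskType.CAUSAL_LM, r=8, lora_alpha=16,
                  lora_dropout=0.05, target_modules=["q_proj", "v_proj"])
model = get_peft_model(model, lora)
model.print_trainable_parameters()              # typically well under 1% of weights train
```

Platforms like Tinker presumably wrap the remaining steps (data loading, GPU scheduling, checkpoint export) behind an API call; the snippet only shows what "fine-tune without touching every weight" means.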
💼 MyTinyTools unites hundreds of free mini-services
MyTinyTools is a web platform that bundles text, file, code, and cybersecurity tools into a single place, all free to use.
🔸 Dozens of utilities for text generation, editing, images, video, code, and file conversion.
🔸 Productivity helpers like calendars, planners, and timers.
🔸 Everything runs directly in the browser, no downloads needed.
🔸 No registration, no limits, and no hidden fees.
By centralizing everyday tools under one roof, MyTinyTools aims to be a lightweight all-in-one workspace for both personal and professional use.
🎥 OpenAI releases Sora 2 for AI video generation
OpenAI has launched Sora 2, the next generation of its video model, adding realism and creative control.
🔸 More accurate physics, object interactions, and scene coherence.
🔸 Synchronized sound and dialogue for lifelike storytelling.
🔸 Fine-grained control over style, pacing, and scene sequencing.
🔸 Ability to insert yourself or friends into videos while preserving voice and appearance.
🔸 Free with “generous limits,” rolling out first via invites in the US and Canada; Pro tier offers higher-quality generations.
🔸 New iOS app introduces an endless feed of short AI-generated videos.
OpenAI is turning video generation from a demo into a mainstream creative platform.
💻 Claude Sonnet 4.5 launches as Anthropic’s top coding model
Anthropic has released Claude Sonnet 4.5, now leading benchmarks and aiming to act as an autonomous software engineer.
🔸 Beats all models on SWE-bench Verified, the gold standard for software reasoning.
🔸 In tests, ran 30+ hours straight, setting up databases, buying domains, and conducting security audits.
🔸 Can move from prototypes to production-ready applications, not just snippets.
🔸 Launch includes Claude Agent SDK and preview of Imagine with Claude, a tool for on-the-fly software generation.
🔸 Pricing unchanged from Sonnet 4: $3 per million input tokens, $15 per million output tokens.
With Sonnet 4.5, Anthropic isn’t just building a coding assistant, it’s positioning Claude as a tireless full-stack engineer.
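At those rates the cost math is easy to sanity-check. The token counts below are invented for illustration; only the per-million prices come from the announcement.

```python
# Cost of a request at the quoted Sonnet 4.5 rates.
INPUT_PER_M, OUTPUT_PER_M = 3.00, 15.00         # USD per million tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    return input_tokens / 1e6 * INPUT_PER_M + output_tokens / 1e6 * OUTPUT_PER_M

# e.g. a long agentic coding session: 2M tokens read, 400K tokens generated
print(f"${request_cost(2_000_000, 400_000):.2f}")   # -> $12.00
```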
🛡️ ChatGPT adds safety routing and parental controls
OpenAI has rolled out new safety routing and parental control features to make ChatGPT safer for teens and sensitive conversations.
🔸 Sensitive chats are automatically routed to GPT-5, trained to handle emotional or high-stakes topics responsibly.
🔸 Parental controls let parents set quiet hours, disable voice mode, turn off memory, and remove image generation.
🔸 Designed to prevent harmful interactions, following past incidents where ChatGPT failed to detect mental distress.
🔸 OpenAI will iterate these features over the next 120 days, balancing safety with user experience.
With these updates, ChatGPT moves beyond simple conversation; it’s now a platform that actively protects and adapts for younger users.
🔍 Perplexity opens its AI-first Search API with real-time updates
Perplexity has launched the Perplexity Search API, giving developers direct access to its constantly refreshed index, positioning itself as a public answer engine, not just another search tool.
🔸 Unlike traditional search incumbents that keep indices closed, Perplexity’s API is designed for open developer access.
🔸 Freshness is the core differentiator: the system processes tens of thousands of index update requests per second to deliver real-time relevance.
🔸 Technical docs reveal an AI-native retrieval architecture, optimized for speed, accuracy, and large-scale integration.
🔸 The launch lightly pokes at legacy players, signaling a shift toward a more transparent and developer-friendly ecosystem.
By reframing search as infrastructure, Perplexity is betting it can own the answer layer of the internet. Would you build on top of it?
⚡️ Startups don’t buy AI, they buy speed
a16z and Mercury analyzed transactions from 200,000 startups (June–August 2025) to understand how companies actually spend on AI tools, especially for note-taking, design, coding, and communication. The findings reveal a clear trend: startups aren’t chasing models or hype, they’re investing in speed and efficiency.
🔸 “For everyone” products dominate 60% of AI expenses: OpenAI, Anthropic, Perplexity, Notion, Manus. Assistants and “smart workspaces” remain a competitive, leaderless category.
🔸 The creative stack (Freepik, ElevenLabs, Canva, Photoroom, Midjourney, Descript, Opus Clip, CapCut) now defines daily work. Content creation isn’t a separate role anymore.
🔸 “Vibe-coding” tools like Replit, Cursor, Lovable, and Emergent are mainstream. Replit earns 15× more than Lovable, as companies pay for faster prototyping instead of large dev teams.
🔸 Vertical AI tools are turning into “AI employees”: Crosby Legal (law), 11x (GTM), Alma (immigration). Startups prefer automation over contractors or hires.
🔸 Nearly 70% of top AI tools began as consumer apps and scaled bottom-up, the B2C-to-B2B path now takes just 1–2 years.
AI has stopped being about futuristic tech; the only metric that matters now is how fast it delivers results.
🏠 Design your dream home, no architect needed
Interior designers and soon-to-be homeowners, meet HomeByMe, a 3D home design platform that lets you build and furnish your dream space just like in The Sims, but with real-world accuracy.
🔸 You can draw your home layout, add walls, doors, and windows directly in 3D.
🔸 The platform includes thousands of real furniture and décor items from major brands.
🔸 Everything is drag-and-drop, so you can instantly see how your dream space looks and feels.
🔸 Projects can be shared with designers or contractors for easy collaboration.
🔸 It’s perfect for planning renovations, new homes, or even testing interior aesthetics.
It’s basically The Sims for real life, but this time your living room isn’t fictional.
⚙️ Booking.com, Spotify, and Figma now live inside ChatGPT
🔸 Apps work as native chat integrations: no installs, no separate mode.
🔸 OpenAI also launched an Apps SDK, letting developers build custom chat-based apps.
🔸 It’s essentially the next iteration of plugins, but more stable and natively integrated.
🔸 Monetization isn’t live yet, though Altman hinted at “various ways” to earn in the future.
🔸 It’s unclear if brands will be able to pay for higher visibility or priority placement in ChatGPT results.
OpenAI is reviving the plugin dream, hoping apps inside ChatGPT succeed where plugins fizzled out.
⚛️ Harvard builds quantum machine that runs for 2 hours nonstop
Physicists at Harvard have created the first quantum computer capable of continuous operation for over two hours, shattering the previous record of just 13 seconds.
🔸 The team, led by Mikhail Lukin, solved a key challenge called atomic loss, where qubits (atoms) disappear due to heat, field errors, or gas collisions.
🔸 Their system uses “optical lattice conveyor belts” and “optical tweezers” to automatically replace lost qubits in real time.
🔸 New atoms instantly sync with the state of existing ones, preserving quantum information without rebooting.
🔸 The machine generates 300,000 atoms per second, maintaining about 3,000 active qubits during operation.
🔸 Researchers believe this approach could enable quantum computers with near-unlimited uptime within a few years.
Harvard’s quantum “conveyor belt” may have just solved the biggest bottleneck in making stable, practical quantum machines.
✅ Top 50 most valuable private companies today: 14 of them did not exist 5 years ago.
📊 Chart: Crypto Insider
📞 AOL’s dial-up goes silent after 34 years, the end of an internet era
America Online has officially shut down its iconic dial-up Internet service, marking the end of a 34-year chapter that once defined how millions first logged onto the web. The final modem screech echoed this week, closing the curtain on a service that introduced the world to “You’ve got mail.”
🔸 At its peak in the late 1990s, AOL connected over 23 million users through its dial-up modems and CDs mailed to nearly every U.S. household
🔸 The shutdown ends both AOL’s Internet access and its classic AOL Dialer software, though some competitors like NetZero and Juno still offer dial-up options
🔸 The decision follows the ongoing shift toward broadband and fiber, leaving rural users and nostalgia-driven collectors as the last holdouts
🔸 AOL’s parent, Yahoo (under Apollo Global Management), said the closure reflects the company’s evolution toward modern digital media and advertising
AOL didn’t just sell Internet access; it sold the feeling of being online for the first time. Now, that sound of a modem connecting fades into history, replaced by a permanent broadband hum.
🚀 Mira Murati builds Thinking Machines, AI infrastructure for everyone
Former OpenAI CTO Mira Murati has launched Thinking Machines, a public benefit startup that raised $2B at a $12B valuation, before releasing a single product. The company aims to democratize how AI models are customized and deployed.
🔸 Focused on infrastructure for fine-tuning open models, not creating ever-larger proprietary ones
🔸 Built by a hand-picked team so loyal that engineers reportedly turned down $50M–$1.5B offers from Zuckerberg
🔸 Structured as a public benefit corporation, signaling long-term social and ethical commitments
🔸 Backed by a star roster of investors betting on Murati’s track record from OpenAI and Tesla
🔸 Aims to make AI development accessible to smaller companies, challenging the “winner-takes-all” model
Murati’s journey from Albania to Tesla to OpenAI’s helm shows how engineering rigor can outpace pedigree. Now, she’s betting that the next AI revolution won’t come from bigger models, but smarter infrastructure.
🧠 Richard Sutton vs LLMs: The Bitter Debate
In a recent interview with Dwarkesh Patel, AI pioneer and Turing Award laureate Richard Sutton surprised many by arguing that large language models still don’t embody the Bitter Lesson.
Back in 2019, Sutton’s now-legendary essay “The Bitter Lesson” argued that real AI progress comes not from hand-coded human knowledge, but from scaling computation and general learning methods. It became a cornerstone idea in modern ML thinking, and LLMs were widely seen as its perfect embodiment.
So why does Sutton disagree?
🔸 He believes LLMs still rely too heavily on human-created data - data that can run out and carry biases.
🔸 Unlike humans or animals, they don’t learn through continuous real-time interaction with their environment.
🔸 In his view, true AI must be able to learn autonomously, not just be fine-tuned on curated text.
Andrej Karpathy responded with a thoughtful counterpoint. He noted that animals aren’t truly “blank slates” either, evolution preloads them with survival knowledge. In that sense, LLM pretraining could be viewed as an algorithmic version of evolution itself.
Karpathy concluded that Sutton’s ideal, a perfectly self-learning system, may be more of a philosophical north star than an attainable endpoint. Still, Sutton’s call for new paradigms is a timely reminder that scaling alone isn’t the whole story.
This exchange will likely be remembered as one of the defining moments in the ongoing debate over what “real intelligence” actually means.
🔍 Wikipedia’s knowledge engine, rebuilt for AI
Wikimedia Deutschland, Jina.AI, and DataStax unveiled the Wikidata Embedding Project, a revamped interface to make Wikidata’s 120 million+ entries AI-ready. The goal is to let LLMs query not just keywords, but meaning, relationships, and context.
🔸 Converts Wikidata into vector embeddings, enabling semantic search rather than just keyword or SPARQL queries (a tiny illustration is at the end of this post)
🔸 Supports the Model Context Protocol (MCP) so AI systems can plug in Wikipedia as a live knowledge source
🔸 Queries return richer context, e.g. “scientist” yields related fields, translations, images, linked concepts
🔸 Openly hosted (on Toolforge) and free for developers to use
🔸 Designed to help retrieval-augmented generation (RAG) systems ground answers in verified knowledge
With this, Wikipedia moves from passive data pile to active real-time feed for AI. The irony: the world’s largest crowdsourced encyclopedia is becoming one of AI’s most reliable knowledge backbones.
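If "vector embeddings for semantic search" sounds abstract, here is a tiny, generic illustration using the open sentence-transformers library. It is not the Wikidata Embedding Project's own tooling; the model name and sample entries are placeholders.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")   # small open embedding model

# Toy "knowledge base" entries; real Wikidata items would carry IDs and links.
items = [
    "Marie Curie: physicist and chemist, pioneer of radioactivity research",
    "Douglas Adams: English author of The Hitchhiker's Guide to the Galaxy",
    "CERN: European organization for nuclear research near Geneva",
]
query = "scientist who studied radiation"

item_vecs = model.encode(items, convert_to_tensor=True)
query_vec = model.encode(query, convert_to_tensor=True)

scores = util.cos_sim(query_vec, item_vecs)[0]    # cosine similarity per item
best = int(scores.argmax())
print(items[best], float(scores[best]))           # matches by meaning, not keywords
```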
🎨 How to Avoid the “AI Look” in Generated Images
You can make AI-generated images look more natural by controlling palette, lighting, film style, and texture, essentially giving the model a human‑photography cheat sheet.
Palette: natural, muted, no neon or HDR. Example: "earth tone palette, saturation -15%, no acidic colors."
Light: soft diffused daylight, color temperature. Example: "natural daylight 5600K, no glare or bloom."
Film/grading: film or cinematic look emulation. Examples: "Kodak Portra 400," "Fujifilm Pro 400H," "cinematic grade, low contrast."
Exposure/contrast: avoid "overexposed" HDR. Example: "low-medium contrast, exposure -1/3 EV, preserved shadows and highlights."
Optics/angle: realistic lens and DOF. Example: "35mm f/4, natural depth of field, no oversharpening."
Textures/"noise": a little grain instead of plastic. Example: "light film grain, no plastic skin or smoothing."
Prohibitions (negative cues): "no neon saturation, no gloss, no oversharpen, no HDR, no bloom, no plastic skin, no synthetic glow, no acid-blue shadows."
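A small helper, in case you want to keep this checklist handy: it just stitches the cues above into a prompt/negative-prompt pair and is model-agnostic (the field names are whatever your generator expects).

```python
# Assemble the "anti-AI-look" cues from this post into a reusable prompt pair.
STYLE_CUES = [
    "earth tone palette, saturation -15%, no acidic colors",
    "natural daylight 5600K, no glare or bloom",
    "Kodak Portra 400, cinematic grade, low contrast",
    "low-medium contrast, exposure -1/3 EV, preserved shadows and highlights",
    "35mm f/4, natural depth of field, no oversharpening",
    "light film grain, no plastic skin or smoothing",
]
NEGATIVE_CUES = ("no neon saturation, no gloss, no oversharpen, no HDR, no bloom, "
                 "no plastic skin, no synthetic glow, no acid-blue shadows")

def build_prompt(subject: str) -> dict:
    return {"prompt": f"{subject}, " + ", ".join(STYLE_CUES),
            "negative_prompt": NEGATIVE_CUES}

print(build_prompt("portrait of a street musician at dusk"))
```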
💦 Corintis brings micro-liquid cooling to AI chips
Swiss startup Corintis is tackling one of AI’s biggest bottlenecks, chip overheating, with cooling built directly inside processors. The company has raised a $24M Series A at a ~$400M valuation, with Intel’s CEO on its board and Microsoft as an early tester.
🔸 Liquid flows through microchannels etched into the chip, pulling heat from hotspots.
🔸 Up to 3× more efficient than traditional fans or heat sinks.
🔸 Cuts both energy and water use in data centers.
🔸 Compatible with existing infrastructure, with potential for full chip integration.
🔸 European production capacity targeted at 1M wafers per year.
🔸 Cooling can account for 20%+ of chip costs, a huge value driver as AI demand surges.
By solving heat at the source, Corintis could turn cooling from a cost burden into one of the most strategic levers in the AI hardware race.
🛒 OpenAI brings shopping into ChatGPT
OpenAI has rolled out native product purchases inside ChatGPT, starting with Etsy integration in the US.
🔸 Users can click “Buy” in chat, with payments charged to a linked card and fulfilled through the seller’s own system.
🔸 Powered by the new open-source Agentic Commerce Protocol, designed to standardize AI-driven transactions.
🔸 OpenAI plans to expand beyond Etsy and outside the US in the coming months.
🔸 Google recently demoed a similar approach, signaling a race to define AI-native commerce.
After this launch, ChatGPT moves from conversation into direct consumer transactions, turning chat into checkout.
💻 Kimi AI launches OK Computer: build websites with one prompt
Kimi AI’s new agent OK Computer acts as a full product team in a single AI, turning a simple prompt into complete websites and apps.
🔸 Plans, designs, and writes code autonomously, delivering multi-page websites and complex applications.
🔸 Produces full diagrams and presentations, handling datasets up to 1 million lines.
🔸 Conducts research and generates reports with actionable recommendations.
🔸 Combines the roles of product manager, strategist, designer, and engineer in one agent.
With OK Computer, creating sophisticated web products no longer requires a full team, a single prompt can now launch entire digital experiences.
⚡️ Paid introduces outcome-based billing for AI agents
London-based Paid, founded by Manny Medina, is enabling AI developers to charge clients based on real results, not flat subscriptions. The startup just raised $21.6M in seed funding to expand the platform.
🔸 Instead of selling AI tools directly, Paid provides infrastructure for performance-linked payments, e.g., efficiency improvements or revenue generated.
🔸 The model reduces risk for agent creators, aligning usage costs with measurable business impact rather than raw compute.
🔸 Early users include Artisan (sales automation) and IFS (ERP software), showing broad enterprise potential.
🔸 Investors include Lightspeed, FUSE, and EQT Ventures, backing Paid as a key building block for the agent economy.
By tying billing to actual outcomes, Paid positions itself as the bridge between AI agents and real-world value capture. Could this redefine how AI tools get paid for?
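To make "outcome-based billing" concrete, here is a hypothetical sketch of a fee tied to measured value rather than seats. The rates, cap, and field names are invented for illustration and do not describe Paid's actual product or contracts.

```python
from dataclasses import dataclass

@dataclass
class OutcomeReport:
    hours_saved: float          # measured by the customer's own tooling
    revenue_generated: float    # revenue attributed to the agent, in USD

def monthly_fee(report: OutcomeReport,
                hourly_value: float = 60.0,     # assumed value of an hour saved
                revenue_share: float = 0.05,    # assumed share of value captured
                fee_cap: float = 10_000.0) -> float:
    value_delivered = report.hours_saved * hourly_value + report.revenue_generated
    return min(value_delivered * revenue_share, fee_cap)

print(monthly_fee(OutcomeReport(hours_saved=120, revenue_generated=25_000)))
# -> 5% of (120*60 + 25,000) = 1610.0
```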
🔔 Unitree G1 phones home every 5 minutes, a robot straight out of a spy movie
Three cybersecurity researchers set out to find small bugs in the Unitree G1 and instead uncovered persistent telemetry exfiltration: constant MQTT/WebSocket connections to two manufacturer brokers, with full sensor dumps sent regularly.
🔸 Every 300 seconds the robot uploads ~4.5 KB JSON frames containing a complete sensor set: lidars, cameras, microphones, geolocation, and device logs.
🔸 Runtime and network traces show continuous connections to two remote hosts, telemetry is not occasional, it’s steady and automatic.
🔸 Config files are encrypted with Blowfish-ECB using a static key shared across all devices; compromise one robot and you can potentially decrypt configs for the entire fleet (see the sketch at the end of this post).
🔸 All devices ship with the same AES key for Bluetooth/auth steps; any attacker in physical/Bluetooth range could escalate to root on a nearby unit.
🔸 Rough scale: ~1,500 units already sold and operating in the wild.
This isn’t a firmware hiccup, it’s a structural privacy and supply-chain risk: sensors that shouldn’t be exfiltrated, static keys that shouldn’t be shared, and millions of minutes of telemetry travelling offsite. Would you let a robot with that behavior inside your office or facility?
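As for why one static Blowfish-ECB key is such a big deal, here is a sketch: recover the key from any single unit and you can read every unit's config. The key and sample data below are placeholders, not the values the researchers reported.

```python
from Crypto.Cipher import Blowfish   # pycryptodome

STATIC_KEY = b"same-key-on-every-robot"          # placeholder for the shared key

def decrypt_config(ciphertext: bytes, key: bytes = STATIC_KEY) -> bytes:
    # One key for the whole fleet means this works against any device's config.
    cipher = Blowfish.new(key, Blowfish.MODE_ECB)
    return cipher.decrypt(ciphertext).rstrip(b"\x00")   # assuming zero padding

# Round-trip demo with fake config data (16 bytes = two 8-byte ECB blocks).
demo = Blowfish.new(STATIC_KEY, Blowfish.MODE_ECB).encrypt(b"lidar=on;mic=on ")
print(decrypt_config(demo))
# ECB adds insult to injury: identical plaintext blocks encrypt to identical
# ciphertext blocks, so config structure is visible even without the key.
```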