Police release video of possible Brown University shooter

Providence police released security camera video showing the person they believe to be the Brown University shooter walking away from campus after he allegedly opened fire inside a classroom, WJAR reports.

Trump Ban on Wind Energy Permits 'Unlawful', Court Rules

A January order blocking wind energy projects in America has now been vacated by a U.S. judge and declared unlawful, reports the Associated Press: [Judge Saris of the U.S. district court for the district of Massachusetts] ruled in favor of a coalition of state attorneys general from 17 states and Washington DC, led by Letitia James, New York's attorney general, that challenged President Trump's day one order that paused leasing and permitting for wind energy projects... The coalition that opposed Trump's order argued that Trump does not have the authority to halt project permitting, and that doing so jeopardizes the states' economies, energy mix, public health and climate goals. The coalition includes Arizona, California, Colorado, Connecticut, Delaware, Illinois, Maine, Maryland, Massachusetts, Michigan, Minnesota, New Jersey, New Mexico, New York, Oregon, Rhode Island, Washington state and Washington DC. They say they have invested hundreds of millions of dollars collectively to develop wind energy and even more on upgrading transmission lines to bring wind energy to the electrical grid... Wind is the United States' largest source of renewable energy, providing about 10% of the electricity generated in the nation, according to the American Clean Power Association. But the BBC quotes Timothy Fox, managing director at the Washington, DC-based research firm ClearView Energy Partners, as saying he doesn't expect the ruling to reinvigorate the industry: "It's more symbolic than substantive," he said. "All the court is saying is ... you need to go back to work and consider these applications. What does that really mean?" he said. Officials could still deny permits or bog applications down in lengthy reviews, he noted.

Read more of this story at Slashdot.

New Rule Forbids GNOME Shell Extensions Made Using AI-Generated Code

An anonymous reader shared this report from Phoronix: Due to the growing number of AI-generated GNOME Shell extensions being submitted to extensions.gnome.org, such extensions are now prohibited. The new rule in their guidelines notes that AI-generated code will be explicitly rejected: "Extensions must not be AI-generated. While it is not prohibited to use AI as a learning aid or a development tool (i.e. code completions), extension developers should be able to justify and explain the code they submit, within reason. Submissions with large amounts of unnecessary code, inconsistent code style, imaginary API usage, comments serving as LLM prompts, or other indications of AI-generated output will be rejected." In a blog post, GNOME developer Javad Rahmatzadeh explains that "Some devs are using AI without understanding the code..."

Read more of this story at Slashdot.

Is the R Programming Language Surging in Popularity?

The R programming language "is sometimes frowned upon by 'traditional' software engineers," says the CEO of software quality services vendor Tiobe, "due to its unconventional syntax and limited scalability for large production systems." But he says it "continues to thrive at universities and in research-driven industries," and that "for domain experts, it remains a powerful and elegant tool." Yet it's now gaining popularity as statistics and large-scale data visualization become more important (a trend he also sees reflected in the rise of Wolfram/Mathematica). That's according to December's edition of his TIOBE Index, which attempts to rank the popularity of programming languages based on search-engine results for courses, third-party vendors, and skilled engineers.

InfoWorld explains: In the December 2025 index, published December 7, R ranks 10th with a 1.96% rating. R has cracked the Tiobe index's top 10 before, such as in April 2020 and July 2020, but not in recent years. The rival Pypl Popularity of Programming Language Index, meanwhile, has R ranked fifth this month with a 5.84% share. "Programming language R is known for fitting statisticians and data scientists like a glove," said Paul Jansen, CEO of software quality services vendor Tiobe, in a bulletin accompanying the December index... Although data science rival Python has eclipsed R in terms of general adoption, Jansen said R has carved out a solid and enduring niche, excelling at rapid experimentation, statistical modeling, and exploratory data analysis. "We have seen many Tiobe index top 10 entrants rising and falling," Jansen wrote. "It will be interesting to see whether R can maintain its current position."

"Python remains ahead at 23.64%," notes TechRepublic, "while the familiar chase group behind it holds steady for the moment. The real movement comes deeper in the list, where SQL edges upward, R rises to the top 10, and Delphi/Object Pascal slips away... SQL climbs from tenth to eighth at 2.10%, adding a small +0.11% that's enough to move it upward in a tightly packed section of the table. Perl holds ninth at 1.97%, strengthened by a +1.33% gain that extends its late-year resurgence."

It's interesting to see how TIOBE's rankings compare with PYPL's (which ranks languages based solely on how often language tutorials are searched on Google):

TIOBE              PYPL
 1. Python         Python
 2. C              C/C++
 3. C++            Objective-C
 4. Java           Java
 5. C#             R
 6. JavaScript     JavaScript
 7. Visual Basic   Swift
 8. SQL            C#
 9. Perl           PHP
10. R              Rust

Despite their different methodologies, both lists put Python at #1, Java at #4, and JavaScript at #6.

Read more of this story at Slashdot.

Police say shooter at Brown University was 'dressed in all black'

At least two people were killed and eight more critically injured in a shooting at a Brown University engineering building in Providence, Rhode Island. Police said they were still looking for the shooter, who was "dressed in all black."  

Brown University student said he hid for two hours after alerts of shooting

A student at Brown University said he hid for two hours inside a university lab after alerts of an active shooter on campus, only managing to escape when police came to evacuate the building. Several people were injured in the shooting, with police continuing their search for the suspect. 

System76 Launches First Stable Release of COSMIC Desktop and Pop!_OS 24.04 LTS

This week System76 launched the first stable release of its Rust-based COSMIC desktop environment. Announced in 2021, it's designed for all GNU/Linux distributions — and it's shipping with Pop!_OS 24.04 LTS (based on Ubuntu 24.04 LTS). An anonymous reader shared this report from 9to5Linux: Previous Pop!_OS releases used a version of the COSMIC desktop that was based on the GNOME desktop environment. However, System76 wanted to create a new desktop environment from scratch while keeping the same familiar interface and user experience built for efficiency and fun. This means that some GNOME apps have been replaced by COSMIC apps, including COSMIC Files instead of Nautilus (Files), COSMIC Terminal instead of GNOME Terminal, COSMIC Text Editor instead of GNOME Text Editor, and COSMIC Media Player instead of Totem (Video Player). Also, the Pop!_Shop graphical package manager used in previous Pop!_OS releases has now been replaced by a new app called COSMIC Store.

"If you're ambitious enough, or maybe just crazy enough, there eventually comes a time when you realize you've reached the limits of current potential, and must create something completely new if you're to go further..." explains System76 founder/CEO Carl Richell: For twenty years we have shipped Linux computers. For seven years we've built the Pop!_OS Linux distribution. Three years ago it became clear we had reached the limit of our current potential and had to create something new. Today, we break through that limit with the release of Pop!_OS 24.04 LTS with the COSMIC Desktop Environment. Today is special not only in that it's the culmination of over three years of work, but even more so in that System76 has built a complete desktop environment for the open source community... I hope you love what we've built for you. Now go out there and create. Push the limits, make incredible things, and have fun doing it!

Read more of this story at Slashdot.

'Free Software Awards' Winners Announced: Andy Wingo, Alx Sa, Govdirectory

This week the Free Software Foundation honored Andy Wingo, Alx Sa, and Govdirectory with this year's annual Free Software Awards (given to community members and groups making "significant" contributions to software freedom): Andy Wingo is one of the co-maintainers of GNU Guile, the official extension language of the GNU operating system and the Scheme "backbone" of GNU Guix. Upon receiving the award, he stated: "Since I learned about free software, the vision of a world in which hackers freely share and build on each others' work has been a profound inspiration to me, and I am humbled by this recognition of my small efforts in the context of the Guile Scheme implementation. I thank my co-maintainer, Ludovic Courtès, for his comradery over the years: we are just building on the work of the past maintainers of Guile, and I hope that we live long enough to congratulate its many future maintainers." The 2024 Award for Outstanding New Free Software Contributor went to Alx Sa for work on the GNU Image Manipulation Program (GIMP). When asked to comment, Alx responded: "I am honored to receive this recognition! I started contributing to the GNU Image Manipulation Program as a way to return the favor because of all the cool things it's allowed me to do. Thanks to the help and mentorship of amazing people like Jehan Pagès, Jacob Boerema, Liam Quin, and so many others, I hope I've been able to help other people do some cool new things, too." Govdirectory was presented with this year's Award for Projects of Social Benefit, given to a project or team responsible for applying free software, or the ideas of the free software movement, to intentionally and significantly benefit society. Govdirectory provides a collaborative and fact-checked listing of government addresses, phone numbers, websites, and social media accounts, all of which can be viewed with free software and under a free license, allowing people to always reach their representatives in freedom... The FSF plans to further highlight the Free Software Award winners in a series of events scheduled for the new year to celebrate their contributions to free software.

Read more of this story at Slashdot.

Applets Are Officially Going, But Java In the Browser Is Better Than Ever

"The entire java.applet package has been removed from JDK 26, which will release in March 2026," notes Inside Java. But long-time Slashdot reader AirHog links to this blog post reminding us that "Applets Are Officially Gone, But Java In The Browser Is Better Than Ever." This brings to an official end the era of applets, which began in 1996. However, for years it has been possible to build modern, interactive web pages in Java without needing applets or plugins. TeaVM provides fast, performant, and lightweight tooling to transpile Java to run natively in the browser... TeaVM, at its heart, transpiles Java code into JavaScript (or, these days, WASM). However, in order for Java code to be useful for web apps, much more is required, and TeaVM delivers. It includes a minifier, to shrink the generated code and obfuscate the intent, to complicate reverse-engineering. It has a tree-shaker to eliminate unused methods and classes, keeping your app download compact. It packages your code into a single file for easy distribution and inclusion in your HTML page. It also includes wrappers for all popular browser APIs, so you can invoke them from your Java code easily, with full IDE assistance and auto-correct. The blog post also touts Flavour, an open-source framework "for coding, packaging, and optimizing single-page apps implemented in Java... a full front-end toolkit with templates, routing, components, and more" to "build your modern single-page app using 100% Java."

Read more of this story at Slashdot.

Startup Successfully Uses AI to Find New Geothermal Energy Reservoirs

A Utah-based startup announced last week it used AI to locate a 250-degree Fahrenheit geothermal reservoir, reports CNN. It'll start producing electricity in three to five years, the company estimates — and at least one geologist believes AI could be an exciting "gamechanger" for the geothermal industry. [Startup Zanskar Geothermal & Minerals] named it "Big Blind," because this kind of site — which has no visual indication of its existence, no hot springs or geysers above ground, and no history of geothermal exploration — is known as a "blind" system. It's the first industry-discovered blind site in more than three decades, said Carl Hoiland, co-founder and CEO of Zanskar. "The idea that geothermal is tapped out has been the narrative for decades," but that's far from the case, he told CNN. He believes there are many more hidden sites across the Western U.S. Geothermal energy is a potential gamechanger. It offers the tantalizing prospect of a huge source of clean energy to meet burgeoning demand. It's near limitless, produces scarcely any climate pollution, and is constantly available, unlike wind and solar, which are cheap but rely on the sun shining and the wind blowing. The problem, however, has been how to find and scale it. It requires a specific geology: underground reservoirs of hot water or steam, along with porous rocks that allow the water to move through them, heat up, and be brought to the surface where it can power turbines... The AI models Zanskar uses are fed information on where blind systems already exist. This data is plentiful as, over the last century and more, humans have accidentally stumbled on many around the world while drilling for other resources such as oil and gas. The models then scour huge amounts of data — everything from rock composition to magnetic fields — to find patterns that point to the existence of geothermal reserves. AI models have "gotten really good over the last 10 years at being able to pull those types of signals out of noise," Hoiland said... Zanskar's discovery "is very significant," said James Faulds, a professor of geosciences at Nevada Bureau of Mines and Geology.... Estimates suggest over three-quarters of US geothermal resources are blind, Faulds told CNN. "Refining methods to find such systems has the potential to unleash many tens and perhaps hundreds of gigawatts in the western US alone," he said... Big Blind is the company's first blind site discovery, but it's the third site it has drilled and hit commercial resources. "We expect dozens, to eventually hundreds, of new sites to be coming to market," Hoiland said.... Hoiland says Zanskar's work shows conventional geothermal still has huge untapped potential. Thanks to long-time Slashdot reader schwit1 for sharing the article.

Read more of this story at Slashdot.

Firefox Survey Finds Only 16% Feel In Control of Their Privacy Choices Online

Choosing your browser "is one of the most important digital decisions you can make, shaping how you experience the web, protect your data, and express yourself online," says the Firefox blog. They've urged readers to "take a stand for independence and control in your digital life." But they also recently polled 8,000 adults in France, Germany, the UK and the U.S. on "how they navigate choice and control both online and offline" (and attended in-person events in Chicago, Berlin, LA, Munich, San Diego, and Stuttgart): The survey, conducted by research agency YouGov, showcases a tension between people's desire to have control over their data and digital privacy, and the reality of the internet today — a reality defined by Big Tech platforms that make it difficult for people to exercise meaningful choice online:

— Only 16% feel in control of their privacy choices (highest in Germany at 21%)

— 24% feel it's "too late" because Big Tech already has too much control or knows too much about them. And 36% said the feeling of Big Tech companies knowing too much about them is frustrating — highest among respondents in the U.S. (43%) and the UK (40%)

— Practices respondents said frustrated them were Big Tech using their data to train AI without their permission (38%) and tracking their data without asking (47%; highest in the U.S. at 55% and lowest in France at 39%)

And from our existing research on browser choice, we know more about how defaults that are hard to change and confusing settings can bury alternatives, limiting people's ability to choose for themselves — the real problem that fuels these dynamics. Taken together, our new and existing insights could also explain why, when asked which actions feel like the strongest expressions of their independence online, choosing not to share their data (44%) was among the top three responses in each country (46% in the UK; 45% in the U.S.; 44% in France; 39% in Germany)... We also see a powerful signal in how people think about choosing the communities and platforms they join — for 29% of respondents, this was one of their top three expressions of independence online.

"For Firefox, community has always been at the heart of what we do," says their VP of Global Marketing, "and we'll keep fighting to put real choice and control back in people's hands so the web once again feels like it belongs to the communities that shape it." At TwitchCon in San Diego, Firefox even launched a satirical new online card game with a privacy theme called Data War.

Read more of this story at Slashdot.

How I Get Free Traffic from ChatGPT in 2025 (AIO vs SEO)

Three weeks ago, I tested something that completely changed how I think about organic traffic. I opened ChatGPT and asked a simple question: "What's the best course on building SaaS with WordPress?" The answer that appeared stopped me cold. My course showed up as the first result, recommended directly by the AI with specific reasons why it was valuable.

I hadn't paid for advertising. I hadn't done any special promotion. The AI simply decided my content was the best answer to that question and served it to the user. This wasn't luck or a fluke. When I tested the same query in Perplexity, the same thing happened. My website ranked at the top of AI-generated responses, pulling in free traffic directly from AI models that millions of people now use as their primary search tool.

This represents a fundamental shift in how people discover content online. For years, we've optimized for Google's algorithm, carefully crafting meta descriptions and building backlinks to climb traditional search rankings. That work still matters, but a massive new traffic source has emerged that most content creators are completely ignoring. While everyone focuses exclusively on traditional SEO, AI Optimization is quietly becoming one of the most valuable skills for anyone who publishes content online.

The opportunity is enormous right now precisely because it's so new. Early adopters are claiming top positions in AI responses while their competitors remain oblivious to this emerging channel. But this window won't stay open forever. As more people recognize the value of appearing in AI results, competition will increase and optimization will become more sophisticated. The time to understand and implement AIO strategies is now, while the landscape is still relatively uncrowded.

In this comprehensive guide, I'll show you exactly how AI Optimization works, how it differs from traditional SEO, what specific tactics actually move the needle, and how to track your performance so you know what's working. More importantly, I'll explain why you can't afford to ignore this traffic source if you want to remain visible online as user behavior continues shifting toward AI-powered search.

Understanding the Fundamental Shift in Search Behavior

Something profound has changed in how people find information online, and most website owners haven't noticed yet. The change isn't about a new Google algorithm update or a shift in social media platforms. It's about where people go when they have questions that need answering.

For twenty years, the pattern was predictable and universal. Someone needs information, they open Google, they type a query, they scan through ten blue links, they click a few results, they piece together answers from multiple sources. This process trained us to optimize for that journey. We focused on ranking in those ten blue links because that's where traffic came from. The entire SEO industry was built around understanding and exploiting that single funnel.

But look at what's happening now. Someone needs information, they open ChatGPT or Claude or Perplexity, they ask a question in natural language, they receive a comprehensive answer immediately with sources cited. No clicking through multiple websites. No comparing different perspectives. No scanning search results pages. The AI synthesizes information and delivers a direct answer, fundamentally changing the discovery process.

The numbers tell the story. ChatGPT reached 100 million users faster than any consumer application in history, hitting that milestone just two months after launch. As of early 2025, ChatGPT alone processes over 10 million queries daily through its web browsing feature. Perplexity has grown to millions of daily users who rely on it as their primary search tool. Google has responded by launching AI Mode, available in over 180 countries, which provides AI-generated answers above traditional search results.

These aren't niche tools used by tech enthusiasts. They're mainstream applications that everyday people now use for research, planning, learning, and decision-making. When someone searches for "best productivity apps for small teams," they're increasingly likely to ask an AI rather than Google. When a business owner needs to understand a technical topic, they're prompting Claude instead of reading blog posts. When students research topics for papers, they're querying Perplexity instead of clicking through search results.

This behavioral shift creates a new visibility challenge. Your content might rank perfectly on Google, but if it's invisible to AI models when they're formulating answers, you're missing an enormous and growing segment of potential traffic. The users who discover information through AI tools never even see your traditional search rankings because they never visit a search results page.

The problem compounds because AI search is still in its explosive growth phase. Usage is doubling and tripling year over year as more people discover these tools and integrate them into their daily workflows. The traffic opportunity today is significant, but it's tiny compared to what it will become in the next few years as AI search becomes default behavior for entire demographics.

What AIO Actually Means and Why It Matters

AIO stands for AI Optimization, and it represents the practice of optimizing your content to appear in AI-generated responses when people query language models. Think of it as SEO's younger sibling, similar in purpose but different in execution because the underlying mechanisms for how AI models select and cite sources differ fundamentally from how Google ranks web pages.

Traditional SEO focuses on signals that Google's algorithms evaluate when determining search rankings. You optimize title tags and meta descriptions. You build backlinks from authoritative sites. You ensure your site loads quickly and works on mobile devices. You create content that targets specific keywords with appropriate density and placement. These tactics work because they align with how Google's systems assess page quality and relevance.

AIO requires understanding how language models decide which sources to reference when answering questions. These models don't follow the same rules as search engine algorithms. They're not counting backlinks or analyzing page load speed. They're evaluating whether content provides clear, accurate, comprehensive answers to questions people actually ask. They're assessing credibility through different signals than traditional search engines use. They're making probabilistic decisions about which information best satisfies a query based on patterns learned during training and information retrieved during real-time web searches.

The distinction matters because tactics that boost Google rankings don't automatically improve your chances of being cited by AI models, and vice versa. A page optimized perfectly for SEO might never appear in AI responses if it doesn't align with how language models evaluate content. Conversely, content that AI models consistently cite might not rank highly in traditional search if it lacks conventional SEO signals.

This doesn't mean you should abandon SEO and focus exclusively on AIO. The two approaches are complementary, not competing. People still use Google extensively, and traditional search traffic remains valuable. The point is that comprehensive visibility requires optimizing for both channels. You need your content discoverable through conventional search engines and reliably cited by AI models. This dual approach captures traffic from users regardless of which discovery method they prefer.

The strategic value of AIO extends beyond just additional traffic. When an AI model cites your content, it provides context explaining why your resource is valuable. The model doesn't just list your URL like a search result—it summarizes your key points, extracts relevant information, and positions your content as a trusted source. This creates a stronger credibility signal than a traditional search result because the AI has effectively pre-vetted your content and endorsed it as worth reading.

Think about the user experience difference. In traditional search, someone sees your site listed among ten results and must decide whether to click based on a title and two-line description. In AI search, someone reads an answer that includes information from your content, sees your site cited as the source, and arrives at your page already understanding its value and relevance. The qualification happens before the click, resulting in higher-quality traffic with better engagement metrics.

Google AI Mode and the Future of Search

Google's introduction of AI Mode represents a pivotal moment in search engine evolution and confirms that AI-generated answers are becoming a core component of how major platforms deliver information. Understanding this development helps contextualize why AIO matters and where organic discovery is headed.

AI Mode transforms Google's interface from a list of links into a conversational AI that provides direct answers. When you access AI Mode (available at google.com/ai or through the Google app), you interact with a language model that searches the web in real-time and synthesizes comprehensive responses to your questions. Instead of scanning through multiple websites, you receive curated information with sources cited, similar to ChatGPT with web search or Perplexity.

What makes this particularly significant is Google's market position. Despite the rise of alternative AI search tools, Google still processes billions of searches daily and serves as the primary discovery mechanism for most internet users. When Google integrates AI-generated answers into its core search experience, it's not experimenting with a niche feature—it's fundamentally changing how the world's most popular search engine works.

The financial implications validate this direction. Google reported that AI features contributed to a 10% increase in search revenue, reaching $50.7 billion in Q1 2025. This isn't a failing experiment that might be discontinued. It's a successful product innovation that's generating substantial revenue while improving user experience. Google has every incentive to expand AI Mode and integrate its capabilities more deeply into standard search.

Currently, AI Mode exists as a separate interface that users must access intentionally, but the trajectory is clear. Google has indicated that AI-generated answers will eventually become a more prominent part of standard search results. While they've walked back statements about making AI Mode the default search experience after initial concerns, the long-term direction remains toward greater AI integration. Traditional search results won't disappear, but AI-generated summaries will occupy increasingly valuable real estate on search result pages.

This evolution mirrors what happened with featured snippets and knowledge panels over the past decade. Google gradually introduced elements that answered questions directly on the search page rather than requiring clicks to external sites. AI Mode represents the next iteration of this trend—more comprehensive answers, synthesized from multiple sources, delivered conversationally rather than as extracted snippets.

For content creators, this creates both opportunities and challenges. The opportunity is that appearing in AI-generated responses places your content in a prominent, trusted position that provides context and drives qualified traffic. The challenge is that optimization strategies must adapt to capture this visibility. Content that ranks well in traditional search results won't automatically appear in AI Mode responses without deliberate optimization for how AI systems evaluate and select sources.

The global availability of AI Mode in over 180 countries means this isn't a gradual rollout that you can monitor and prepare for leisurely. It's happening now, and users worldwide are already accessing AI-powered search. Your competitors might be optimizing for these systems while you're still focused exclusively on traditional SEO, giving them an advantage in capturing traffic from this rapidly growing segment.

How to Track Your AIO Performance

One of the biggest challenges with AI Optimization is measurement. Traditional SEO provides robust analytics through Google Search Console, showing exactly which queries trigger impressions, how often people click your results, and where you rank for specific keywords. These metrics make it straightforward to track SEO progress and identify opportunities for improvement.

AIO lacks this infrastructure. ChatGPT doesn't provide website owners with analytics showing how often their content appears in responses. Perplexity doesn't send performance reports. Google AI Mode doesn't have a Search Console equivalent yet. This creates a visibility problem—you can't optimize what you can't measure.

Several commercial tools have emerged to fill this gap, offering AIO tracking and monitoring services. Ahrefs introduced features for tracking AI visibility at $129 per month. SE Ranking offers similar capabilities starting at $95 monthly. First Answer provides specialized AIO tracking for $39 per month but limits you to just 10 query tests. Keyword.com offers competitive pricing with various tier options.

These tools work by systematically querying AI models with specific prompts and analyzing which sources appear in the responses. They help you understand whether your content shows up for relevant queries, how you compare to competitors, and how your visibility changes over time. For businesses with substantial budgets, these professional tools provide valuable insights with minimal setup effort.

However, the pricing creates barriers for smaller website owners, bloggers, and businesses just beginning to explore AIO. Spending $100-300 monthly on tracking tools makes sense when you're generating significant revenue from AI traffic, but it's prohibitive when you're still validating whether AIO is worth your investment. This gap between professional tools and budget-conscious creators leaves many people flying blind with no way to measure their AIO performance.

The solution is building your own tracking system using no-code automation tools. This approach requires more initial setup but provides ongoing monitoring at a fraction of commercial tool costs. The system I built uses Make.com, a no-code automation platform, to query AI models systematically, analyze responses, and track mentions over time. Make offers 1,000 operations monthly on their free tier, making it possible to start tracking without any monetary investment.

The tracking system consists of three automated scenarios that work together to provide comprehensive AIO monitoring. The first scenario handles query tracking and brand mentions, automatically sending prompts to ChatGPT and recording which sources appear in responses. The second scenario performs keyword performance analysis, tracking specific topics or phrases relevant to your business and monitoring whether you're gaining or losing visibility. The third scenario focuses on competitor tracking, identifying when competitors appear in AI responses and analyzing their positioning compared to yours.

Building this system requires understanding of Make.com's interface and basic automation concepts, but it's accessible to anyone willing to invest a few hours in setup. The difficulty level sits at intermediate—more complex than basic automation but far simpler than custom programming. Once configured, the system runs automatically on whatever schedule you set, collecting data and building a historical record of your AIO performance.

The workflow begins with identifying the queries you want to track. These are essentially "AIO keywords"—questions that people might ask AI models where your content should ideally appear in the answer. Unlike traditional SEO keywords, which are often short phrases, AIO queries tend to be longer, more conversational questions that reflect how people actually talk to AI assistants.

For example, instead of targeting the SEO keyword "WordPress hosting," you'd track the AIO query "What's the best WordPress hosting for SaaS applications?" or "Which hosting provider should I choose for a WordPress-based business site?" These natural language questions better represent how people interact with AI tools and help you optimize for actual usage patterns rather than keyword variations.

Finding these queries requires a different research approach than traditional keyword research. Rather than using tools that show search volume and competition metrics, you need to understand what questions your target audience actually asks AI models. This means thinking about their problems, concerns, and information needs, then formulating those as conversational queries. Tools like an LLM Query Generator can help by analyzing your content and suggesting relevant questions people might ask to find that information.
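
As an illustration of that kind of query research, here is a minimal Python sketch that asks a model to propose conversational questions a given page could answer. It assumes the OpenAI Python SDK with an API key in the environment; the model name, prompt wording, and truncation length are placeholders, not a specific recommendation.

    import os
    from openai import OpenAI

    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

    def suggest_aio_queries(page_text: str, n: int = 5) -> list[str]:
        """Ask a model to propose natural-language questions this page could answer."""
        prompt = (
            f"Read the following article excerpt and list {n} questions a user might "
            "ask an AI assistant that this article answers. One question per line.\n\n"
            + page_text[:4000]  # keep the prompt short; the cutoff is arbitrary
        )
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=[{"role": "user", "content": prompt}],
        )
        lines = (response.choices[0].message.content or "").splitlines()
        return [line.strip("-• ").strip() for line in lines if line.strip()]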

Once you've identified target queries, the automated system tests them periodically—daily, weekly, or on whatever schedule makes sense for your monitoring needs. Each test queries the AI model with your specified prompt, captures the response, parses which sources were cited, and records whether your content appeared. Over time, this builds a database showing your visibility trends, how often competitors appear for the same queries, and which topics you're gaining or losing ground on.
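
A stripped-down version of that loop can also be written directly in Python rather than Make.com. The sketch below is a rough approximation under stated assumptions: it uses the OpenAI Python SDK, a plain chat completion (which does not browse the web, so the check only approximates what a search-enabled assistant would cite), and a placeholder domain and query list.

    import csv
    import datetime
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    QUERIES = [
        "What's the best WordPress hosting for SaaS applications?",
        "Which hosting provider should I choose for a WordPress-based business site?",
    ]
    MY_DOMAIN = "example.com"  # placeholder: the site whose visibility you track

    def run_checks(log_path: str = "aio_log.csv") -> None:
        """Send each tracked query to the model and log whether our domain is mentioned."""
        today = datetime.date.today().isoformat()
        with open(log_path, "a", newline="") as f:
            writer = csv.writer(f)
            for query in QUERIES:
                reply = client.chat.completions.create(
                    model="gpt-4o-mini",  # illustrative model name
                    messages=[{"role": "user", "content": query}],
                )
                answer = reply.choices[0].message.content or ""
                writer.writerow([today, query, MY_DOMAIN in answer.lower()])

    if __name__ == "__main__":
        run_checks()  # schedule daily or weekly with cron or a task scheduler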

The data collected enables strategic decisions about content creation and optimization. If certain queries consistently show competitor sources but never yours, that signals an opportunity to create or improve content addressing that topic. If you're appearing reliably for some questions but not others in the same category, you can analyze what makes your successful content different and apply those lessons to underperforming pieces. If your visibility is declining over time, you know you need to refresh and strengthen your content to maintain AI citation rates.
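
Once a log like the hypothetical aio_log.csv above has accumulated a few weeks of rows, a few more lines of Python can turn it into the per-query citation rates that drive those decisions. This is only a sketch and assumes the three-column layout used in the previous example.

    import csv
    from collections import defaultdict

    def citation_rates(log_path: str = "aio_log.csv") -> dict[str, float]:
        """Share of checks in which each tracked query mentioned our domain."""
        hits: dict[str, int] = defaultdict(int)
        totals: dict[str, int] = defaultdict(int)
        with open(log_path, newline="") as f:
            for _date, query, cited in csv.reader(f):
                totals[query] += 1
                hits[query] += cited == "True"  # csv stores the boolean as text
        return {query: hits[query] / totals[query] for query in totals}

    for query, rate in sorted(citation_rates().items(), key=lambda item: item[1]):
        print(f"{rate:5.0%}  {query}")  # lowest-visibility queries are content gaps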

This measurement foundation transforms AIO from guesswork into a data-driven practice. Instead of optimizing blindly and hoping AI models notice, you track actual performance and refine your approach based on concrete results. The initial investment in building or subscribing to tracking tools pays dividends through improved optimization efficiency and clearer understanding of what tactics actually work for your specific content and audience.

The Seven Proven Tactics That Actually Work

Understanding AIO conceptually is valuable, but implementation requires specific, actionable tactics that demonstrably improve your chances of appearing in AI-generated responses. These seven strategies have proven effective across different content types, industries, and AI platforms. They work because they align with how language models evaluate sources and decide which content to cite when formulating answers.

The first tactic centers on incorporating statistics, numbers, and verifiable proof throughout your content. AI models exhibit a strong preference for factual, data-backed information over general statements or opinions. When a model encounters two sources covering the same topic, one making vague claims and another providing specific numbers with citations, the statistical content almost always wins.

This doesn't mean stuffing your content with random numbers. It means grounding your claims in specific, verifiable data wherever possible. Instead of writing "Our tool is widely used," you'd write "Our tool has 150,000 monthly active users with a 4.7 out of 5 satisfaction rating based on 3,200 reviews." The specificity signals credibility to AI models, which learned during training that precise data indicates reliable sources.

The same principle applies to any factual claim. When discussing market trends, cite specific growth percentages and time periods. When mentioning company performance, include actual revenue figures or user counts. When describing product features, provide concrete specifications rather than abstract descriptions. Each piece of specific data you add increases the likelihood that AI models will view your content as authoritative and citation-worthy.

This approach requires sourcing and maintaining accurate information, which means you can't fabricate numbers or exaggerate metrics. AI models increasingly cross-reference claims across sources, and inconsistencies damage credibility. The data you include must be truthful and, where relevant, attributed to primary sources. But when you consistently provide specific, accurate information, you build a reputation as a reliable source that AI models return to repeatedly.

The second tactic involves active engagement on Reddit, Quora, and similar community forums. This strategy works for a less obvious reason than you might expect. It's not primarily about direct traffic from forum posts, though that can be valuable. It's about creating authentic mentions and discussions of your content across platforms that AI models frequently encounter during training and web searches.

Language models learn from vast datasets that include substantial amounts of community discussion content. Reddit threads, Quora answers, and forum posts represent genuine human conversations about real topics, making them high-value training data. When your content or expertise appears naturally in these discussions, it creates signals that AI models recognize and incorporate into their understanding of what resources exist and who's knowledgeable about specific topics.

The key word here is "naturally." AI models have learned to recognize and discount obvious spam, self-promotion, and link-dropping. Simply posting your URL in relevant threads won't help and might actually hurt if it generates negative reactions or gets flagged as spam. Instead, you need to participate genuinely in communities where your expertise is relevant, providing real value in discussions and mentioning your content only when it truly addresses someone's question or adds to the conversation.

This means answering questions thoroughly, sharing insights from your experience, helping solve problems, and building a reputation as a knowledgeable contributor before you ever share links. When you do reference your content, it should be in the context of "I wrote a detailed guide about exactly this problem that covers X, Y, and Z" rather than "Check out my site." The former contributes to the discussion while the latter feels promotional.

Over time, this authentic participation creates a distributed network of references to your expertise and content across platforms that AI models access. These organic mentions, especially when they're accompanied by positive community response, signal that you're a legitimate authority worth citing. The impact accumulates gradually but compounds over months as you build a presence in relevant communities.

The third tactic focuses on optimizing for natural language queries rather than keyword stuffing. Traditional SEO often encourages optimizing for specific keyword phrases, sometimes at the expense of natural writing. You might structure sentences awkwardly to include exact keyword matches or repeat phrases more often than sounds natural. This approach can work for search engines that match keywords mechanically.

AI models process language differently. They understand semantic meaning and context, not just keyword matching. When people query AI tools, they ask complete questions in conversational language: "What's the best WordPress hosting for SaaS applications?" rather than "WordPress hosting SaaS." Your content needs to answer these natural questions directly and comprehensively to appear in AI responses.

This means structuring your content around questions your audience actually asks. Include FAQ sections that address common queries in full-sentence question format. Write subheadings as questions rather than just topics. Provide complete answers that someone could understand without additional context. Make your content readable and helpful to humans first, trusting that AI models will recognize and value that quality.

The practical implementation involves thinking about the conversation your audience wants to have rather than the keywords they might type. What are they trying to accomplish? What confuses them? What decisions are they facing? What objections or concerns do they have? When you address these elements in natural, conversational language, you simultaneously create content that people find valuable and that AI models recognize as comprehensive answers to common questions.

The fourth tactic requires creating comparison tables and structured data that AI models can easily parse and reference. Language models excel at processing structured information organized in clear, consistent formats. When they encounter well-formatted comparison tables, step-by-step lists, or data organized in predictable structures, they can extract and cite that information more reliably than when similar content appears in dense paragraphs.

This doesn't mean every piece of content should become a table or list. It means that when you're presenting information that naturally fits structured formats—comparisons between options, sequential steps in a process, multiple examples of a concept, sets of tips or recommendations—you should use formatting that makes that structure explicit and easy to process.

For example, if you're comparing different software tools, create an actual comparison table with columns for features, pricing, pros, and cons rather than describing each tool in paragraph form. If you're explaining a multi-step process, number the steps and use consistent formatting for each. If you're providing examples, use a predictable structure where each example follows the same pattern.
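
As a small illustration, the following Python sketch (with placeholder tools and figures, not real recommendations) renders a comparison as an HTML table rather than prose — the kind of explicit structure that both crawlers and language models can parse reliably.

    def comparison_table(rows: list[dict[str, str]]) -> str:
        """Render a list of uniform records as an HTML comparison table."""
        headers = list(rows[0])
        head = "".join(f"<th>{h}</th>" for h in headers)
        body = "".join(
            "<tr>" + "".join(f"<td>{row[h]}</td>" for h in headers) + "</tr>"
            for row in rows
        )
        return f"<table><thead><tr>{head}</tr></thead><tbody>{body}</tbody></table>"

    # Placeholder data for illustration only.
    print(comparison_table([
        {"Tool": "Tool A", "Pricing": "$10/user/month", "Pros": "Fast setup", "Cons": "No time tracking"},
        {"Tool": "Tool B", "Pricing": "$24/user/month", "Pros": "Built-in reports", "Cons": "Steeper learning curve"},
    ]))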

The benefit extends beyond AI optimization. Structured content is easier for human readers to scan and comprehend too. People increasingly skim content rather than reading every word, and clear structure helps them extract key information quickly. When you optimize for both AI processing and human scanning through better structure, you improve the experience for all visitors while increasing AI citation rates.

Implementation requires evaluating your existing content and identifying opportunities to add structure without forcing it artificially. Look for places where you're listing multiple items in prose that would be clearer as bullet points. Find sections comparing options that would benefit from table format. Identify processes that could be broken into numbered steps. These changes often improve content quality while making it more AI-friendly.

The fifth tactic involves building multi-platform authority by publishing consistent information across different channels. AI models, particularly those with web search capabilities, often cross-reference information across sources to verify accuracy and assess credibility. When they find the same core information presented consistently on your website, in your social media content, in articles you've published elsewhere, and in your responses on community platforms, it signals that you're a legitimate authority on that topic.

This doesn't mean duplicating content identically across platforms, which could create SEO problems and doesn't align with best practices for different mediums. It means maintaining consistent expertise, perspectives, and factual information while adapting the format and style to each platform's norms and audience expectations.

Your core message and expertise should be recognizable across a blog post on your website, a LinkedIn article, a Twitter thread, a YouTube video description, and a guest post on another site. The specific examples might vary, and the depth of coverage will differ based on format constraints, but the fundamental information should align. This consistency reinforces your authority and makes it easier for AI models to identify you as a reliable source on specific topics.

Building this multi-platform presence takes time and consistent effort. You can't create authority across channels overnight, but you can develop a systematic approach to repurposing and adapting your best content for different platforms. Each piece of substantial content you create should have a distribution plan that gets the core insights in front of audiences across multiple channels over time.

The strategic value compounds as your presence grows. Early on, you might only appear in AI responses when the model happens to encounter your website. As you build presence across platforms, the model has multiple opportunities to encounter your expertise from different angles, increasing the likelihood that it recognizes you as an authority worth citing.

The sixth tactic emphasizes showing fresh update signals throughout your content. AI models, especially those with real-time web access, demonstrate preference for current information over dated content. When choosing between two sources covering the same topic, with one clearly recent and another older, the fresher content usually gets cited unless there's a compelling reason to reference historical information.

This creates both an opportunity and a maintenance requirement. The opportunity is that regularly updating content can improve AI citation rates even if the core information hasn't changed dramatically. The requirement is that high-performing content needs periodic refreshes to maintain its competitive position as newer articles on the same topics emerge.

Making freshness obvious requires explicit signals that AI models can easily detect. The most straightforward approach is including "Last updated: [Date]" at the top of articles, making it immediately clear that the content reflects current information. This simple addition can significantly impact whether AI models view your content as relevant for queries about current state or recent developments.
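
A tiny script can keep that signal current across a folder of articles. The sketch below is illustrative: it assumes articles are plain Markdown or HTML files on disk in a hypothetical "content" folder, and it simply inserts or refreshes a "Last updated" line at the top of each file. The stamp should of course only change when the content has genuinely been reviewed.

    import datetime
    import pathlib
    import re

    STAMP_RE = re.compile(r"^Last updated: \d{4}-\d{2}-\d{2}", re.MULTILINE)

    def stamp_last_updated(path: pathlib.Path) -> None:
        """Insert or refresh a 'Last updated: YYYY-MM-DD' line at the top of a file."""
        stamp = f"Last updated: {datetime.date.today().isoformat()}"
        text = path.read_text(encoding="utf-8")
        if STAMP_RE.search(text):
            text = STAMP_RE.sub(stamp, text, count=1)
        else:
            text = stamp + "\n\n" + text
        path.write_text(text, encoding="utf-8")

    for article in pathlib.Path("content").glob("*.md"):  # hypothetical content folder
        stamp_last_updated(article)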

Beyond update dates, freshness signals include referencing recent events, citing current statistics and data, mentioning the current year in context where relevant, and updating examples to reflect current tools and practices. These signals reassure both AI models and human readers that the information hasn't become outdated even if the core topic is relatively stable.

The practical challenge is balancing the benefit of updates against the time investment required. You can't refresh every piece of content constantly, so prioritize based on importance and competitive pressure. Content that generates significant traffic or ranks well in AI responses deserves regular attention to maintain those positions. Content about rapidly changing topics needs more frequent updates than evergreen material. Content facing new competition from recently published articles needs refreshing to remain competitive.

Implementing a content refresh schedule helps manage this systematically. Rather than updating randomly when you remember, establish a process where high-value content gets reviewed quarterly or semi-annually. During these reviews, update statistics, add recent examples, remove dated references, and add the new update date. This structured approach ensures your most important content remains fresh without requiring constant attention to every article.

The seventh tactic involves implementing JSON-LD structured data markup on your web pages. This technical optimization helps AI models understand your content's structure and purpose by providing machine-readable information about what your page contains, what type of content it is, and how different elements relate to each other.

Structured data uses a standardized format called Schema.org vocabulary implemented through JSON-LD script tags. These tags don't affect how your content appears to human visitors, but they provide clear signals to automated systems parsing your pages, including AI models determining whether your content answers specific queries.

Common structured data types relevant for most content include Article (marking blog posts and articles), HowTo (for step-by-step guides), FAQ (for question-and-answer sections), Person (for author bios), Organization (for company information), and Product (for product pages). Implementing appropriate schema markup for your content type helps AI models categorize and understand your content more accurately.

The technical implementation requires adding JSON-LD scripts to your page HTML, typically in the header section. Many content management systems, including WordPress, offer plugins that generate this markup automatically based on your content, eliminating the need for manual coding. For custom implementations, Schema.org provides documentation and examples for each data type.
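
For readers who prefer to hand-roll the markup, here is a minimal sketch that emits an FAQPage JSON-LD block using the standard Schema.org vocabulary; the question and answer text are placeholders, and WordPress users will usually let a plugin generate the equivalent markup instead.

    import json

    def faq_jsonld(pairs: list[tuple[str, str]]) -> str:
        """Build a FAQPage JSON-LD <script> tag from (question, answer) pairs."""
        data = {
            "@context": "https://schema.org",
            "@type": "FAQPage",
            "mainEntity": [
                {
                    "@type": "Question",
                    "name": question,
                    "acceptedAnswer": {"@type": "Answer", "text": answer},
                }
                for question, answer in pairs
            ],
        }
        return ('<script type="application/ld+json">\n'
                + json.dumps(data, indent=2)
                + "\n</script>")

    print(faq_jsonld([
        ("What features should I look for in project management software?",
         "Placeholder answer: task views, permissions, and reporting are common criteria."),
    ]))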

While structured data implementation requires more technical knowledge than the other tactics, its value extends beyond AIO. Search engines like Google also use structured data to create enhanced search results like rich snippets, knowledge panels, and featured answers. This means the optimization work benefits both traditional SEO and AI visibility simultaneously.

The cumulative effect of implementing all seven tactics is substantial. Each strategy individually improves your chances of appearing in AI responses, but they work synergistically when combined. Content that includes specific statistics, appears in community discussions, answers natural language questions directly, presents information in structured formats, exists consistently across platforms, shows clear freshness signals, and implements proper schema markup sends multiple reinforcing signals that AI models recognize and value.

Building a Sustainable AIO Strategy

Understanding individual tactics is important, but sustainable success requires integrating AIO into your overall content strategy rather than treating it as a separate, occasional activity. This means developing systematic approaches that maintain and improve your AI visibility over time without requiring constant manual intervention.

The foundation of any sustainable strategy is creating content with AIO in mind from the beginning rather than retrofitting optimization after publication. This doesn't mean abandoning your audience's needs to serve AI algorithms—it means recognizing that content optimized for AI models is typically also better for human readers because both value clarity, structure, accuracy, and comprehensiveness.

When planning new content, start by identifying the questions your target audience asks AI models about your topic. These questions form the backbone of your content structure. If you're writing about project management tools, for example, you'd want to address questions like "What's the best project management software for small teams?", "How much do project management tools typically cost?", and "What features should I look for in project management software?" Each of these questions likely deserves a dedicated section with a clear, direct answer.

Your content outline should reflect these natural queries in your subheadings and section structure. This organizational approach simultaneously improves readability for humans scanning your content and makes it easier for AI models to identify which sections answer specific questions. When someone asks an AI about project management tool features, a model searching your content can quickly locate and cite the relevant section because you've structured it logically around that question.

The next consideration is information density and specificity. AI models favor content that provides concrete, actionable information over vague generalizations or superficial coverage. This means investing in depth rather than breadth for your most important topics. A comprehensive 3,000-word guide that thoroughly addresses a topic will typically perform better in AI citations than ten shallow 300-word articles that skim the surface.

This depth requirement influences content strategy decisions about volume versus quality. Rather than publishing something new every day with minimal research, you might publish twice weekly but ensure each piece provides genuine value with proper research, specific examples, and comprehensive coverage. The quality-focused approach generates better long-term results both for human audiences and AI visibility.

Maintenance and updates become critical components of sustainable strategy. AI models accessing the web in real-time naturally favor fresh content, so static articles gradually lose visibility even if they were initially successful. Building systematic content review and refresh processes prevents this decay and maintains your competitive position.

A practical maintenance schedule might review your top-performing content quarterly, your mid-tier content semi-annually, and your long-tail content annually. During these reviews, you update statistics and examples, add new sections covering recent developments, remove or update outdated information, and add a new "last updated" date to signal freshness. This regular maintenance keeps your content competitive and shows both AI models and human visitors that you're actively maintaining accuracy.

Competitive analysis should inform your ongoing strategy. Monitor which sources AI models cite for queries where you want visibility. Analyze what makes those sources effective—is it their structure? Their level of detail? Their use of data and statistics? Their freshness? Understanding your competition's strengths helps you identify gaps in your own content and opportunities to differentiate through superior quality or unique angles.

This competitive intelligence doesn't mean copying what others do well. It means understanding the bar you need to meet or exceed to compete for AI citations in your niche. If competing content provides basic overviews, offering in-depth analysis gives you an advantage. If competitors focus on theory, adding practical examples and case studies differentiates you. If everyone covers similar points, finding unique angles or addressing overlooked aspects of the topic creates competitive advantage.

Distribution and promotion strategies must extend beyond traditional channels to build the multi-platform presence that signals authority to AI models. This means systematically sharing your expertise across relevant communities, contributing to discussions on forums and social media, publishing on platforms like Medium or LinkedIn in addition to your own site, and building genuine relationships within your niche rather than just broadcasting content.

The goal isn't maximum reach across every possible platform—that's neither sustainable nor effective. Instead, identify the two or three platforms where your target audience genuinely spends time and where your expertise provides value. Focus your distribution efforts there, building consistent presence and contributing meaningfully over time. This focused approach generates better results than scattered efforts across a dozen platforms.

Collaboration and linking strategy matter differently for AIO than for traditional SEO. While backlinks remain important for search engine rankings, AI citation rates appear more influenced by the quality and relevance of the connection than purely by link volume. Being cited by a highly authoritative source in your niche can boost AI visibility even if it provides only one link, while dozens of low-quality directory links might not impact AI citations at all.

This suggests prioritizing genuine partnerships, guest posting on respected sites in your industry, and earning mentions from authoritative sources through excellent work rather than pursuing link-building tactics focused purely on volume. The relationship-based approach to link acquisition aligns well with AIO because it creates the kind of genuine authority signals that AI models recognize and value.

The Future Trajectory of AI Search

Understanding where AI search is headed helps you prepare for upcoming changes rather than constantly reacting to new developments. While predicting specific features or timelines is difficult, several clear trends are shaping the evolution of AI-powered discovery.

The most obvious trend is continued growth in AI search usage. As more people discover tools like ChatGPT, Claude, and Perplexity, and as these tools improve their interfaces and expand capabilities, the percentage of information-seeking behavior flowing through AI models will increase. This doesn't necessarily mean traditional search engines will disappear, but it does mean the traffic pie is being redivided, with AI search claiming an expanding slice.

This growth trajectory suggests that early adoption advantages in AIO will compound over time. Establishing strong AI visibility now, while competition remains relatively light, positions you favorably as usage explodes and competition intensifies. The content creators building AI authority today will have structural advantages over those who wait until AI search is fully mainstream and optimization becomes more competitive.

Integration between different search modalities is accelerating. Google is bringing AI answers into traditional search results. Bing is integrating ChatGPT-powered features. New platforms are emerging that combine search, AI chat, and traditional browsing in unified experiences. This convergence means optimization strategies must account for hybrid discovery experiences where users might see both traditional results and AI-generated answers, potentially in the same interface.

The technical sophistication of AI models continues advancing rapidly, with implications for optimization strategies. Future models will better understand nuance, maintain longer context, cross-reference information more effectively, and potentially access real-time data more seamlessly. These improvements might make some current optimization tactics less important while creating new opportunities for differentiation.

For example, as models improve at understanding semantic meaning and context, exact keyword matching will matter even less than it does now. Conversely, models might become better at assessing content quality through subtle signals like writing sophistication, logical coherence, and comprehensive coverage. This evolution favors creators focused on genuine quality over those trying to game systems through technical tricks.

Personalization in AI search is emerging as models learn to consider individual user preferences, history, and context when formulating responses. This creates both opportunities and challenges for content visibility. The opportunity is that AI might recommend your content more prominently to users whose preferences align with your perspective or style. The challenge is that you might become invisible to users whose personalization profile doesn't match, even if your content is objectively relevant to their query.

Adapting to this personalized future likely requires building distinct brand identity and perspective rather than trying to be everything to everyone. If AI models categorize you clearly—as the practical, actionable advice source versus the theoretical deep-dive resource—you'll appear reliably for users whose preferences match that positioning. Trying to be too generic might result in appearing rarely for anyone as models route users to more distinctive alternatives.

Commercial considerations will shape AI search evolution as platforms figure out monetization beyond subscriptions. We're already seeing early experiments with citations including affiliate tracking, sponsored placements in AI responses, and premium content partnerships. The specific implementations will evolve, but the trajectory toward commercial integration seems certain.

For content creators, this commercial evolution might create new opportunities to monetize AI visibility beyond indirect traffic benefits. If platforms begin sharing revenue with cited sources, strong AI visibility could become directly profitable. If sponsored placements become normalized, there might be ways to amplify your organic visibility through paid promotion similar to how PPC complements SEO.

Regulation and AI model behavior around copyrighted content remains in flux, with implications for what content models can reference and how prominently different sources appear. Current legal frameworks are struggling to accommodate AI's information synthesis capabilities, and future regulations might significantly impact how models cite sources, what compensation creators receive, and what controls you have over whether AI systems can reference your content.

Staying informed about these regulatory developments and adjusting strategy accordingly will matter increasingly. The content creators who navigate this evolving landscape successfully will be those who remain flexible and adapt to changes rather than expecting today's rules to persist indefinitely.

Practical Implementation Plan

Transforming AIO knowledge into actual improved visibility requires systematic implementation rather than sporadic efforts. Here's a practical framework for incorporating these strategies into your content workflow.

Start with an audit of your existing content to identify which pieces should be prioritized for AIO optimization. Not every article deserves equal attention—focus first on content that already performs well in traditional search, addresses important topics for your audience, or covers queries where you have genuine expertise to offer. These high-potential pieces are most likely to generate meaningful results from optimization efforts.

During the audit, evaluate each priority article against the seven optimization tactics. Does it include specific statistics and verifiable data? Could you add more? Is the content structured with clear headings that reflect natural language questions? Have you included an FAQ section addressing common queries? Is there a clear "last updated" date? Can you add comparison tables or other structured data? Does schema markup exist and is it appropriate for the content type?
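To make that audit step concrete, here is a minimal Python sketch of what checking a page against those signals might look like. The URL is hypothetical and the regex checks are rough heuristics standing in for a manual review, not a definitive AIO scoring method.

```python
# Minimal audit sketch (assumptions: the URL is a placeholder and the regex
# checks are rough heuristics, not an authoritative AIO score).
import re
import urllib.request

CHECKS = {
    "last_updated_date":  r"[Ll]ast [Uu]pdated",
    "faq_section":        r"(FAQ|Frequently Asked Questions)",
    "question_heading":   r"<h[23][^>]*>[^<]*\?",      # headings phrased as questions
    "schema_markup":      r"application/ld\+json",      # JSON-LD structured data
    "comparison_table":   r"<table",
    "specific_numbers":   r"\b\d+(\.\d+)?%",             # percentages as a proxy for data points
}

def audit(url: str) -> dict:
    """Fetch a page and report which optimization signals appear in its HTML."""
    html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "ignore")
    return {name: bool(re.search(pattern, html)) for name, pattern in CHECKS.items()}

if __name__ == "__main__":
    # Hypothetical article URL used purely for illustration.
    results = audit("https://example.com/project-management-guide")
    for signal, present in results.items():
        print(f"{signal:20} {'OK' if present else 'missing'}")
```

A script like this only flags what is present or absent; the judgment calls about quality still belong in the manual review described above.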

Create a prioritized optimization checklist based on this audit, identifying which pieces need which improvements. Some content might only need a few additions like update dates and FAQ sections, while others might benefit from more substantial restructuring. This systematic approach prevents you from trying to fix everything at once and ensures you tackle the highest-impact improvements first.

Implement changes incrementally, testing as you go rather than making all modifications simultaneously. This allows you to learn which specific changes seem to impact your AI citation rates most significantly. While many factors influence visibility, you might discover that certain tactics work particularly well for your niche or content style, allowing you to prioritize those approaches for future content.

For new content creation, build AIO considerations into your standard workflow. Before writing, identify the key questions your content will answer and structure your outline around those questions. Plan to include specific data points and examples during research. Decide what structured elements (tables, step-by-step lists, comparisons) would enhance the content. Add these considerations to whatever content creation process you already use rather than treating AIO as a separate, optional step.

Establish monitoring routines to track your AI visibility over time. Whether you use commercial tracking tools or build your own system, schedule regular reviews of your performance. Monthly checks might suffice initially, though weekly monitoring makes sense if you're actively optimizing and want faster feedback on what's working.
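If you would rather not pay for a commercial tracker, even a plain CSV file you append to after each manual check can serve as the "build your own system" option. The sketch below is one way that might look; the file name, queries, and platforms are illustrative assumptions, and the entries record only what you observed by hand.

```python
# Sketch of a lightweight visibility log appended after each manual check.
# File name, queries, and platform labels are placeholders.
import csv
from datetime import date
from pathlib import Path

LOG = Path("ai_visibility_log.csv")

def record_check(query: str, platform: str, cited: bool, notes: str = "") -> None:
    """Append one observation: did the platform cite your content for this query?"""
    new_file = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["date", "query", "platform", "cited", "notes"])
        writer.writerow([date.today().isoformat(), query, platform, cited, notes])

# Example entries after testing queries by hand in an AI assistant.
record_check("best project management software for small teams", "ChatGPT", False)
record_check("how much do project management tools cost", "Perplexity", True, "cited pricing guide")
```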

When reviewing tracking data, look for patterns rather than obsessing over individual fluctuations. Is your visibility generally improving, declining, or stable? Which topics show stronger AI citation rates? Where are competitors consistently appearing instead of you? What queries used to show your content but no longer do? These patterns inform where to focus future optimization efforts and what's working well versus what needs adjustment.

Build a distribution schedule that ensures your content reaches the platforms where community discussion happens. Rather than sporadic promotion when you remember, systematically share new content and participate in relevant discussions on a regular cadence. This might mean dedicating 30 minutes daily to community engagement, or setting aside specific times weekly for distribution activities. The consistent approach yields better results than irregular bursts of activity.

Document what works as you implement and test different approaches. Keep notes on which tactics seem most effective for your content, which platforms drive the most engaged traffic, which topics generate the most AI citations. This knowledge base becomes increasingly valuable over time as you identify patterns specific to your niche and audience that might differ from general best practices.

Consider forming or joining groups of content creators in your niche who are also working on AIO to share insights and results. The field is new enough that collective learning accelerates progress for everyone involved. What you discover about effective tactics in your niche might help others, and their experiences can inform your strategy even if you're in slightly different spaces.

Plan for iterative improvement rather than expecting immediate perfection. AIO is still an emerging practice without definitive best practices etched in stone. You'll make mistakes, try things that don't work, and occasionally optimize for factors that turn out not to matter. This experimentation is part of the learning process. What matters is systematic iteration—trying approaches, measuring results, adjusting based on feedback, and gradually improving your effectiveness over time.

Set realistic timelines for seeing results. Unlike paid advertising where you can generate traffic immediately, organic visibility through either SEO or AIO builds gradually. You might see some quick wins from optimizing high-performing content, but establishing strong overall AI visibility typically takes months of consistent effort. Understand this going in to maintain motivation during the initial period where you're investing effort without dramatic visible results.

Taking Action Today

The opportunity in AI Optimization exists because most content creators haven't recognized its importance yet. Traditional SEO remains the primary focus, while this emerging traffic channel grows rapidly with relatively light competition. This window won't stay open indefinitely. As more people understand AIO's value, competition will intensify and optimization will become more sophisticated.

Your competitive advantage comes from starting now rather than waiting until AIO is fully mainstream. Begin with these immediate actions that require minimal investment but start building your foundation.

First, test your own AI visibility today. Open ChatGPT, Claude, or Perplexity and ask questions where your content should logically appear as a relevant source. Be honest in your queries—use the actual questions your audience would ask rather than phrasing things to favor your content. See whether AI models cite you, and if so, how prominently. This reality check shows you where you stand currently.

Second, identify your top five most important pieces of content—articles that address core topics for your audience or drive significant traffic currently. These become your initial optimization targets. Don't try to optimize everything at once. Focus on making these five pieces as strong as possible for AI citation.

Third, implement quick wins on those priority pieces. Add "Last updated: [current date]" to each. Create a simple FAQ section addressing three to five common questions related to each article's topic. Add specific statistics or data points if they're currently missing. These improvements take hours rather than days but can meaningfully impact AI visibility.
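If your site lets you inject structured data, those same FAQ answers can also be exposed as schema.org FAQPage markup, one widely used way to make question-and-answer content machine readable. Here is a minimal sketch; the question, answer, and output format are placeholders rather than a prescribed template.

```python
# Sketch: emit FAQPage structured data (JSON-LD) plus a freshness line.
# The question and answer text are placeholders; swap in your own content.
import json
from datetime import date

faq = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What features should I look for in project management software?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Look for task dependencies, shared timelines, and reporting.",
            },
        },
    ],
}

# Snippet to paste into the page's <head>, plus a visible "last updated" line.
print('<script type="application/ld+json">')
print(json.dumps(faq, indent=2))
print("</script>")
print(f"Last updated: {date.today().isoformat()}")
```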

Fourth, set up basic tracking even if you don't build a comprehensive system immediately. Create a simple spreadsheet listing queries where you want visibility. Test those queries weekly in one or two AI platforms and note whether your content appears. This manual tracking takes just 15-30 minutes weekly but provides feedback on whether your optimization efforts are working.

Fifth, join one or two communities where your target audience discusses topics related to your content. You don't need to be everywhere—pick platforms where you can genuinely contribute value and commit to participating regularly. Start by reading and understanding the community culture before posting, then gradually engage in discussions where your expertise adds value.

The investment required isn't massive. You don't need expensive tools, extensive technical knowledge, or a large team. You need understanding of the principles, systematic implementation of practical tactics, and consistency over time. The same qualities that make someone successful with traditional content creation—providing genuine value, maintaining quality standards, and persisting through the gradual process of building authority—work for AIO as well.

The difference is timing. Traditional SEO is mature with intense competition and well-established players dominating many niches. AIO is emerging with room for newcomers to establish authority while the landscape is still taking shape. This timing advantage creates opportunities for content creators of all sizes to build significant AI visibility if they act now rather than waiting.

Start today. Audit your content. Implement quick optimizations. Begin tracking your performance. Engage in communities. Build the multi-platform presence that signals authority. Each small step compounds over time into substantial competitive advantage as AI search grows to represent an ever-larger percentage of how people discover information online.

The future of organic visibility includes AI citations alongside traditional search rankings. The question isn't whether to optimize for both—it's whether you'll start while competition is light or wait until fighting for AI visibility becomes as challenging as ranking in traditional search is today.

Choose wisely. The traffic is already flowing. The only question is whether it flows to you or your competitors.

  •  

'We will retaliate’: Trump responds to deaths of U.S. soldiers and interpreter in Syria

President Donald Trump told reporters the U.S. planned to “retaliate” after two U.S. Army soldiers and a civilian U.S. interpreter were killed in an attack in Palmyra, Syria. NBC News’ Raf Sanchez reports on how Syria’s new regime will cooperate with the U.S. 

  •  

The World's Electric Car Sales Have Spiked 21% So Far in 2025

Electrek reports: EV and battery supply chain research specialists Benchmark Mineral Intelligence reports that 2.0 million electric vehicles were sold globally in November 2025, bringing global EV sales to 18.5 million units year-to-date. That's a 21% increase compared to the same period in 2024. Europe was the clear growth leader in November, while North America continued to lag following the expiration of US EV tax credits. China, meanwhile, remains the world's largest EV market by a wide margin. Europe's EV market jumped 36% year-over-year in November 2025, with BEV sales up 35% and plug-in hybrid (PHEV) sales rising 39%. That brings Europe's total EV sales to 3.8 million units for the year so far, up 33% compared to January-November 2024... In North America, EV sales in the US did tick up month-over-month in November, following a sharp October drop after federal tax credits expired on September 30, 2025. Brands including Kia (up 30%), Hyundai (up 20%), Honda (up 11%), and Subaru (232 Solterra sales versus just 13 the month before) all saw gains, but overall volumes remain below levels when the federal tax credit was still available... [North America shows a -1% drop in EV sales from January to November 2025 vs. January to November 2024] Year-to-date, EV sales in China are up 19%, with 11.6 million units sold. One of the biggest headlines out of China is exports. BYD reported a record 131,935 EV exports in November, blowing past its previous high of around 90,000 units set in June. BYD sales in Europe have jumped more than fourfold this year to around 200,000 vehicles, doubled in Southeast Asia, and climbed by more than 50% in South America... "Overall, EV demand remains resilient, supported by expanding model ranges and sustained policy incentives worldwide," said Rho Motion data manager Charles Lester. Beyond China, Europe, and North America, the rest of the world saw a 48% spike in EV sales in 2025 vs the same 11 months in 2024, representing 1.5 million EVs sold. "The takeaway: EV demand continues to grow worldwide," the article adds, "but policy support — or the lack thereof — is increasingly shaping where this growth shows up."

Read more of this story at Slashdot.

  •  

How a 23-Year-Old in 1975 Built the World's First Handheld Digital Camera

In 1975, 23-year-old electrical engineer Steve Sasson joined Kodak. And in a new interview with the BBC, he remembers that he'd found the whole photographic process "really annoying.... I wanted to build a camera with no moving parts. Now that was just to annoy the mechanical engineers..." "You take your picture, you have to wait a long time, you have to fiddle with these chemicals. Well, you know, I was raised on Star Trek, and all the good ideas come from Star Trek. So I said what if we could just do it all electronically...?" Researchers at Bell Labs in the US had, in 1969, created a type of integrated circuit called a charge-coupled device (CCD). An electric charge could be stored on a metal-oxide semiconductor (MOS), and could be passed from one MOS to another. Its creators believed one of its applications might one day be used as part of an imaging device — though they hadn't worked out how that might happen. The CCD, nevertheless, was quickly developed. By 1974, the US microchip company Fairchild Semiconductors had built the first commercial CCD, measuring just 100 x 100 pixels — the tiny electronic samples taken of an original image. The new device's ability to capture an image was only theoretical — no-one had, as yet, tried to take an image and display it. (NASA, it turned out, was also looking at this technology, but not for consumer cameras....) The CCD circuit responded to light but could only form an image if Sasson was somehow able to attach a lens to it. He could then convert the light into digital information — a blizzard of 1s and 0s — but there was just one problem: money. "I had no money to build this thing. Nobody told me to build it, and I certainly couldn't demand any money for it," he says. "I basically stole all the parts, I was in Kodak and the apparatus division, which had a lot of parts. I stole the optical assembly from an XL movie camera downstairs in a used parts bin. I was just walking by, you see it, and you take it, you know." He was also able to source an analogue to digital converter from a $12 (about £5 in 1974) digital voltmeter, rather than spending hundreds on the part. "I could manage to get all these parts without anybody really noticing," he says.... The bulky device needed a way to store the information the CCD was capturing, so Sasson used an audio cassette deck. But he also needed a way to view the image once it was saved on the magnetic tape. "We had to build a playback unit," Sasson says. "And, again, nobody asked me to do that either. So all I got to do is the reverse of what I did with the camera, and then I have to turn that digital pattern into an NTSC television signal." NTSC (National Television System Committee) was the conversion standard used by American TV sets. Sasson had to turn only 100 lines of digital code captured by the camera into the 400 lines that would form a television signal. The solution was a Motorola microprocessor, and by December 1975, the camera and its playback unit were complete, the article points out. With his colleague Jim Schueckler, Sasson had spent more than a year putting together the "increasingly bulky" device that "looked like an oversized toaster." The camera had a shutter that would take an image at about 1/20th of a second, and — if everything worked as it should — the cassette tape would start to move as the camera transferred the stored information from its CCD [which took 23 seconds].
"It took about 23 seconds to play it back, and then about eight seconds to reconfigure it to make it look like a television signal, and send it to the TV set that I stole from another lab...." In 1978, Kodak was granted the first patent for a digital camera. It was Sasson's first invention. The patent is thought to have earned Eastman Kodak billions in licensing and infringement payments by the time they sold the rights to it, fearing bankruptcy, in 2012... As for Sasson, he never worked on anything other than the digital technology he had helped to create until he retired from Eastman Kodak in 2009. Thanks to long-time Slashdot reader sinij for sharing the article.

Read more of this story at Slashdot.

  •  

More of America's Coal-Fired Power Plants Cease Operations

New England's last coal-fired power plant "has ceased operations three years ahead of its planned retirement date," reports the New Hampshire Bulletin. "The closure of the New Hampshire facility paves the way for its owner to press ahead with an initiative to transform the site into a clean energy complex including solar panels and battery storage systems." "The end of coal is real, and it is here," said Catherine Corkery, chapter director for Sierra Club New Hampshire. "We're really excited about the next chapter...." The closure in New Hampshire — so far undisputed by the federal government — demonstrates that prolonging operations at some facilities just doesn't make economic sense for their owners. "Coal has been incredibly challenged in the New England market for over a decade," said Dan Dolan, president of the New England Power Generators Association. Merrimack Station, a 438-megawatt power plant, came online in the 1960s and provided baseload power to the New England region for decades. Gradually, though, natural gas — which is cheaper and more efficient — took over the regional market... Additionally, solar power production accelerated from 2010 on, lowering demand on the grid during the day and creating more evening peaks. Coal plants take longer to ramp up production than other sources, and are therefore less economical for these shorter bursts of demand, Dolan said. In recent years, Merrimack operated only a few weeks annually. In 2024, the plant generated just 0.22% of the region's electricity. It wasn't making enough money to justify continued operations, observers said. The closure "is emblematic of the transition that has been occurring in the generation fleet in New England for many years," Dolan said. "The combination of all those factors has meant that coal facilities are no longer economic in this market." Meanwhile Los Angeles — America's second-largest city — confirmed that the last coal-fired power plant supplying its electricity stopped operations just before Thanksgiving, reports the Utah News Dispatch: Advocates from the Sierra Club highlighted in a news release that shutting down the units had no impact on customers, and questioned who should "shoulder the cost of keeping an obsolete coal facility on standby...." Before ceasing operations, the coal units had been working at low capacities for several years because the agency's users hadn't been calling on the power [said John Ward, spokesperson for Intermountain Power Agency]. The coal-powered units "had a combined capacity of around 1,800 megawatts when fully operational," notes Electrek, "and as recently as 2024, they still supplied around 11% of LA's electricity. The plant sits in Utah's Great Basin region and powered Southern California for decades." Now, for the first time, none of California's power comes from coal. There's a political hiccup with IPP, though: the Republican-controlled Utah Legislature blocked the Intermountain Power Agency from fully retiring the coal units this year, ordering that they can't be disconnected or decommissioned. But despite that mandate, no buyers have stepped forward to keep the outdated coal units online. The Los Angeles Department of Water and Power (LADWP) is transitioning to newly built, hydrogen-capable generating units at the same IPP location, part of a modernization effort called IPP Renewed. These new units currently run on natural gas, but they're designed to burn a blend of natural gas and up to 30% green hydrogen, and eventually 100% green hydrogen.
LADWP plans to start adding green hydrogen to the fuel mix in 2026. "With the plant now idled but legally required to remain connected, serious questions remain about who will shoulder the cost of keeping an obsolete coal facility on standby," says the Sierra Club. One of the natural gas units started commercial operations last October, with the second starting later this month, IPP spokesperson John Ward told the Utah News Dispatch.

Read more of this story at Slashdot.

  •  

Peter Greene, Known for 'Pulp Fiction' and 'The Mask,' Dies at 60

Character actor Peter Greene, known for playing villains and criminals in dozens of movies, including “Pulp Fiction” and “The Mask,” died on Friday at 60 years old. His cause of death was not reported.

  •  

Rust in Linux's Kernel 'is No Longer Experimental'

Steven J. Vaughan-Nichols files this report from Tokyo: At the invitation-only Linux Kernel Maintainers Summit here, the top Linux maintainers decided, as Jonathan Corbet, Linux kernel developer, put it, "The consensus among the assembled developers is that Rust in the kernel is no longer experimental — it is now a core part of the kernel and is here to stay. So the 'experimental' tag will be coming off." As Linux kernel maintainer Steven Rostedt told me, "There was zero pushback." This has been a long time coming. This shift caps five years of sometimes-fierce debate over whether the memory-safe language belonged alongside C at the heart of the world's most widely deployed open source operating system... It all began when Alex Gaynor and Geoffrey Thomas at the 2019 Linux Security Summit said that about two-thirds of Linux kernel vulnerabilities come from memory safety issues. Rust, in theory, could avoid these by using Rust's inherently safer application programming interfaces (APIs)... In those early days, the plan was not to rewrite Linux in Rust (it still isn't), but to adopt it selectively where it can provide the most security benefit without destabilizing mature C code. In short, new drivers, subsystems, and helper libraries would be the first targets... Despite the fuss, more and more programs were ported to Rust. By April 2025, the Linux kernel contained about 34 million lines of C code, with only 25 thousand lines written in Rust. At the same time, more and more drivers and higher-level utilities were being written in Rust. For instance, the Debian Linux distro developers announced that going forward, Rust would be a required dependency in its foundational Advanced Package Tool (APT). This change doesn't mean everyone will need to use Rust. C is not going anywhere. Still, as several maintainers told me, they expect to see many more drivers being written in Rust. In particular, Rust looks especially attractive for "leaf" drivers (network, storage, NVMe, etc.), where the Rust-for-Linux bindings expose safe wrappers over kernel C APIs. Nevertheless, for would-be kernel and systems programmers, Rust's new status in Linux hints at a career path that blends deep understanding of C with fluency in Rust's safety guarantees. This combination may define the next generation of low-level development work.

Read more of this story at Slashdot.

  •  

Sharks and rays gain landmark protections as nations move to curb international trade

For the first time, global governments have agreed to widespread international trade bans and restrictions for sharks and rays being driven to extinction.

Last week, more than 70 shark and ray species, including oceanic whitetip sharks, whale sharks, and manta rays, received new safeguards under the Convention on International Trade in Endangered Species of Wild Fauna and Flora. The convention, known as CITES, is a United Nations treaty that requires countries to regulate or prohibit international trade in species whose survival is threatened.

Sharks and rays are closely related species that play similar roles as apex predators in the ocean, helping to maintain healthy marine ecosystems. They have been caught and traded for decades, contributing to a global market worth nearly $1 billion annually, according to Luke Warwick, director of shark and ray conservation at Wildlife Conservation Society (WCS), an international nonprofit dedicated to preserving animals and their habitats.

Read full article

Comments


  •  

DOJ weighs novel federal hate crime case against suspect in Charlie Kirk's assassination

The Justice Department is weighing how to bring federal charges against Charlie Kirk's alleged assassin, Tyler Robinson.

© Rick Egan

A memorial for political activist Charlie Kirk stands on the grounds of Utah Valley University on Sept. 13 in Orem, Utah.

  •  

Germany Covers Nearly 56 Percent of 2025 Electricity Use With Renewables

Longtime Slashdot reader AmiMoJo shares a report from Clean Energy Wire: Renewable energy sources covered nearly 56 percent of Germany's gross electricity consumption in 2025, according to preliminary figures by energy industry group BDEW and research institute ZSW. Despite a 'historically weak' first quarter of the year for wind power production and a significant drop in hydropower output, the share of renewables grew by 0.7 percentage points compared to the previous year thanks to an increase in installed solar power capacity. Solar power output increased by 18.7 percent over the whole year, while the strong growth in installed capacity from previous years could be sustained, with more than 17 gigawatts (GW) added to the system. With March being the least windy month in Germany since records began in 1950, wind power output, on the other hand, faced a drop of 5.2 percent compared to 2024. However, stronger winds in the second and third quarter compensated for much of the early-year decrease. Onshore turbines with a capacity of 5.2 GW were added to the grid, a marked increase from the 3.3 GW in the previous year. Due to significantly less precipitation this year compared to 2024, hydropower output dropped by nearly one quarter (24.1%), while remaining only a fraction (3.2%) of total renewable power output.

Read more of this story at Slashdot.

  •  

Chinese Whistleblower Living In US Is Being Hunted By Beijing With US Tech

A former Chinese official who fled to the U.S. says Beijing has used advanced surveillance technology from U.S. companies to track, intimidate, and punish him and his family across borders. ABC News reports: Retired Chinese official Li Chuanliang was recuperating from cancer on a Korean resort island when he got an urgent call: Don't return to China, a friend warned. You're now a fugitive. Days later, a stranger snapped a photo of Li in a cafe. Terrified South Korea would send him back, Li fled, flew to the U.S. on a tourist visa and applied for asylum. But even there -- in New York, in California, deep in the Texas desert -- the Chinese government continued to hunt him down with the help of surveillance technology. Li's communications were monitored, his assets seized and his movements followed in police databases. More than 40 friends and relatives -- including his pregnant daughter -- were identified and detained, even by tracking down their cab drivers through facial recognition software. Three former associates died in detention, and for months shadowy men Li believed to be Chinese operatives stalked him across continents, interviews and documents seen by The Associated Press show. The Chinese government is using an increasingly powerful tool to cement its power at home and vastly amplify it abroad: Surveillance technology, much of it originating in the U.S., an AP investigation has found. Within China, this technology helped identify and punish almost 900,000 officials last year alone, nearly five times more than in 2012, according to state numbers. Beijing says it is cracking down on corruption, but critics charge that such technology is used in China and elsewhere to stifle dissent and exact retribution on perceived enemies. Outside China, the same technology is being used to threaten wayward officials, along with dissidents and alleged criminals, under what authorities call Operations "Fox Hunt" and "Sky Net." The U.S. has criticized these overseas operations as a "threat" and an "affront to national sovereignty." More than 14,000 people, including some 3,000 officials, have been brought back to China from more than 120 countries through coercion, arrests and pressure on relatives, according to state information.

Read more of this story at Slashdot.

  •  

Peter Greene, actor known for 'Pulp Fiction' and 'The Mask,' dead at 60

Peter Greene, the actor known for playing villains and criminals, including in his role as Zed in "Pulp Fiction," died at his New York City home Friday, his manager confirmed.

© Alamy

Actor Peter Greene attends the premiere of "The Bounty Hunter" at the Ziegfeld Theatre in 2010 in New York City.

  •  

Google testing out new textbook tool 'Learn Your Way'

Google is testing out its new AI tool called "Learn Your Way," which turns educational texts into an interactive experience. NBC News' Gadi Schwartz talks to Google's Dr. Courtney Heldreth and tests out how the tool works.  

  •  

Users express frustration over Apple autocorrect update

NBC News' Steven Romo looks at recent online complaints about Apple's autocorrect following the latest software update and what role artificial intelligence may have in the situation.

  •  

New photos released showing Jeffrey Epstein with powerful men

Democrats on the House Oversight committee released new photos obtained from Jeffrey Epstein’s estate. The photos show Epstein with a number of powerful men including President Trump, President Clinton, Woody Allen and Steve Bannon. The photos do not appear to show any illegal activity. NBC News’ Ryan Nobles reports.

  •  

Ukrainians Sue US Chip Firms For Powering Russian Drones, Missiles

An anonymous reader quotes a report from Ars Technica: Dozens of Ukrainian civilians filed a series of lawsuits in Texas this week, accusing some of the biggest US chip firms of negligently failing to track chips that evaded export curbs. Those chips were ultimately used to power Russian and Iranian weapon systems, causing wrongful deaths last year. Their complaints alleged that for years, Texas Instruments (TI), AMD, and Intel have ignored public reporting, government warnings, and shareholder pressure to do more to track final destinations of chips and shut down shady distribution channels diverting chips to sanctioned actors in Russia and Iran. Putting profits over human lives, tech firms continued using "high-risk" channels, Ukrainian civilians' legal team alleged in a press statement, without ever strengthening controls. All that intermediaries who placed bulk online orders had to do to satisfy chip firms was check a box confirming that the shipment wouldn't be sent to sanctioned countries, lead attorney Mikal Watts told reporters at a press conference on Wednesday, according to the Kyiv Independent. "There are export lists," Watts said. "We know exactly what requires a license and what doesn't. And companies know who they're selling to. But instead, they rely on a checkbox that says, 'I'm not shipping to Putin.' That's it. No enforcement. No accountability." [...] Damages sought include funeral expenses and medical costs, as well as "exemplary damages" that are "intended to punish especially wrongful conduct and to deter similar conduct in the future." For plaintiffs, the latter is the point of the litigation, which they hope will cut off key supply chains to keep US tech out of weapon systems deployed against innocent civilians. "They want to send a clear message that American companies must take responsibility when their technologies are weaponized and used to commit harm across the globe," the press statement said. "Corporations must be held accountable when its unlawful decisions made in the name of profit directly cause the death of innocents and widespread human suffering." For chip firms, the litigation could get costly if more civilians join, with the threat of a loss potentially forcing changes that could squash supply chains currently working to evade sanctions. "We want to make this process so expensive and painful that companies are forced to act," Watts said. "That is our contribution to stopping the war against civilians."

Read more of this story at Slashdot.

  •  

Arizona City Rejects Data Center After Lobbying Push

Chandler, Arizona unanimously rejected a proposed AI data center despite heavy lobbying from Big Tech interests and former Sen. Kyrsten Sinema. Politico reports: The Chandler City Council last night voted down a request by a New York developer to rezone land to build a data center and business complex. The local battle escalated in October after Sinema showed up at a planning commission meeting to offer public comment warning officials in her home state that federal authority may soon stomp on local regulations. "Chandler right now has the opportunity to determine how and when these new, innovative AI data centers will be built," she told local officials. "When federal preemption comes, we'll no longer have that privilege." Explaining her no vote, Chandler Vice Mayor Christine Ellis said that she had long framed her decision about the local benefits rather than the national push to build AI. She recalled a meeting with Sinema where she asked point-blank, "what's in it for Chandler?" "If you can't show me what's in it for Chandler, then we are not having a conversation," Ellis said before voting against the project. [...] The project, along with Sinema's involvement, attracted significant community opposition, with speakers raising concerns about whether the project would use too much water or raise power prices. Residents packed the council chambers, with many holding up signs reading "No More Data Centers." According to the city's planning office, more than 200 comments were filed against the proposal compared to just eight in favor.

Read more of this story at Slashdot.

  •  

Framework Raises DDR5 Memory Prices By 50% For DIY Laptops

Framework Computer raised DDR5 memory prices for its Laptop DIY Editions by 50% due to industry-wide memory shortages. Phoronix reports: Framework Computer is keeping the prior prices for existing pre-orders and also is forgoing any price changes for its pre-built laptops or the Framework Desktop. Framework Computer also lets you order DIY laptops without any memory at all if so desired, for re-using existing modules or should you score a deal elsewhere. Because its memory pricing is said to be below current market rates, Framework also adjusted its return policy to prevent scalpers from purchasing DIY Edition laptops with memory and then returning just the laptops. The DDR5 must now be returned with DIY laptop order returns. Additional details can be found via the Framework Blog.

Read more of this story at Slashdot.

  •  

Doom Studio id Software Forms 'Wall-To-Wall' Union

id Software employees voted to form a wall-to-wall union with the CWA, covering all roles at the Doom studio. "The vote wasn't unanimous, though a majority did vote in favor of the union," notes Engadget. From the report: The union will work in conjunction with the Communications Workers of America (CWA), which is the same organization involved with parent company ZeniMax's recent unionization efforts. Microsoft, who owns ZeniMax, has already recognized this new effort, according to a statement by the CWA. It agreed to a labor neutrality agreement with the CWA and ZeniMax workers last year, paving the way for this sort of thing. From the onset, this union will look to protect remote work for id Software employees. "Remote work isn't a perk. It's a necessity for our health, our families, and our access needs. RTO policies should not be handed down from executives with no consideration for accessibility or our well-being," said id Software Lead Services Programmer Chris Hays. He also said he looks forward to getting worker protections regarding the "responsible use of AI."

Read more of this story at Slashdot.

  •  

US To Mandate AI Vendors Measure Political Bias For Federal Sales

An anonymous reader quotes a report from Reuters: The U.S. government will require artificial intelligence vendors to measure political "bias" to sell their chatbots to federal agencies, according to a Trump administration statement (PDF) released on Thursday. The requirement will apply to all large language models bought by federal agencies, with the exception of national security systems, according to the statement. President Donald Trump ordered federal agencies in July to avoid buying large language models that he labeled as "woke." Thursday's statement gives more detail to that directive, saying that developers should not "intentionally encode partisan or ideological judgments" into a chatbot's outputs. Further reading: Trump Signs Executive Order For Single National AI Regulation Framework, Limiting Power of States

Read more of this story at Slashdot.

  •  

Russian Hackers Debut Simple Ransomware Service, But Store Keys In Plain Text

The pro-Russian CyberVolk group resurfaced with a Telegram-based ransomware-as-a-service platform, but fatally undermined its own operation by hardcoding master encryption keys in plaintext. The Register reports: First, the bad news: the CyberVolk 2.x (aka VolkLocker) ransomware-as-a-service operation that launched in late summer. It's run entirely through Telegram, which makes it very easy for affiliates that aren't that tech savvy to lock files and demand a ransom payment. CyberVolk's soldiers can use the platform's built-in automation to generate payloads, coordinate ransomware attacks, and manage their illicit business operations, conducting everything through Telegram. But here's the good news: the ransomware slingers got sloppy when it came time to debug their code and hardcoded the master keys -- this same key encrypts all files on a victim's system -- into the executable files. This could allow victims to recover encrypted data without paying the extortion fee, according to SentinelOne senior threat researcher Jim Walter, who detailed the gang's resurgence and flawed code in a Thursday report.

Read more of this story at Slashdot.

  •  

Haiku gets new Go port

There’s a new Haiku monthly activity report, and this one’s a true doozy. Let’s start with the biggest news.

The most notable development in November was the introduction of a port of the Go programming language, version 1.18. This is still a few years old (from 2022; the current is Go 1.25), but it’s far newer than the previous Go port to Haiku (1.4 from 2014); and unlike the previous port which was never in the package repositories, this one is now already available there (for x86_64 at least) and can be installed via pkgman.

↫ Haiku activity report

As the project notes, they’re still a few versions behind, but at least it’s a lot more modern of an implementation than they had before. Now that it’s in the repositories for Haiku, it might also attract more people to work on the port, potentially bringing even newer versions to the BeOS-inspired operating system. Welcome as it may be, this new Go port isn’t the only big ticket item this month.

Haiku can now gracefully recover from an app_server crash, something it was able to do long ago but which had been broken for a long time. The app_server is Haiku's display server and window manager, so the ability to restart it at runtime after a crash, and have it reconnect with still-running applications, is incredibly welcome. As far as I can tell, all modern operating systems can do this by now, so it's great to have this functionality restored in Haiku.

Of course, aside from these two big improvements, there’s the usual load of fixes and changes in applications, drivers, and other components of the operating system.

  •  

Bill Gates' Daughter Secures $30 Million For AI App Built In Stanford Dorm

Phoebe Gates, Bill Gates' youngest daughter, has raised $30 million for the AI shopping app she built in her Stanford dorm room with classmate Sophia Kianni. The app is called Phia and is pitched as a way to simplify price comparison and secondhand shopping. "Its AI-powered search engine -- available as an app and as a browser extension for Chrome and Safari -- pulls listings from more than 40,000 retail and resale sites so users can compare prices, surface real-time deals, and determine whether an item's cost is typical, high or fair," reports the San Francisco Chronicle. The app has reached 750,000 downloads in eight months and is valued at $180 million. From the report: Gates told Elle that when she first floated the idea to her parents, they urged her to keep it as a side project -- advice she followed by enrolling in Stanford's night program after moving to New York and finishing her degree in 2024. "They were like, 'Okay, you can do this as a side thing, but you need to stay in school.' I don't think people would expect that from my family, to be honest," she said. Her father dropped out of Harvard University in 1975 to launch Microsoft. Kianni even paused her degree temporarily "to learn, as quickly as possible, as much as we could about the industry that we would be operating in," she told Vogue. Bill Gates has not invested in the company, though he has publicly supported its mission.

Read more of this story at Slashdot.

  •  

Rethinking sudo with object capabilities

Alpine Linux maintainer Ariadne Conill has published a very interesting blog post about the shortcomings of both sudo and doas, and offers a potential different way of achieving the same goals as those tools.

Systems built around identity-based access control tend to rely on ambient authority: policy is centralized and errors in the policy configuration or bugs in the policy engine can allow attackers to make full use of that ambient authority. In the case of a SUID binary like doas or sudo, that means an attacker can obtain root access in the event of a bug or misconfiguration.

What if there was a better way? Instead of thinking about privilege escalation as becoming root for a moment, what if it meant being handed a narrowly scoped capability, one with just enough authority to perform a specific action and nothing more? Enter the object-capability model.

↫ Ariadne Conill

To bring this approach to life, they created a tool called capsudo. Instead of temporarily changing your identity, capsudo can grant far more fine-grained capabilities that match the exact task you're trying to accomplish. As an example, Conill details mounting and unmounting – with capsudo, you can not only grant a user the ability to mount and unmount any device, but also allow the user to mount or unmount only one specific device. Another example given is how capsudo can be used to give a service account access to only those resources it needs to perform its tasks.

Of course, Conill explains all of this way better than I ever could, with actual example commands and more details. Conill happens to be the same person who created Wayback, illustrating that they have a tendency to look at problems in a unique and interesting way. I'm not smart enough to determine if this approach makes sense compared to sudo or doas, but the way it's described, it does feel like a superior, more secure solution.

  •  

Google Translate Expands Live Translation To All Earbuds On Android

An anonymous reader quotes a report from Ars Technica: Google has increasingly moved toward keeping features locked to its hardware products, but the Translate app is bucking that trend. The live translate feature is breaking out of the Google bubble with support for any earbuds you happen to have connected to your Android phone. The app is also getting improved translation quality across dozens of languages and some Duolingo-like learning features. The latest version of Google's live translation is built on Gemini and initially rolled out earlier this year. It supports smooth back-and-forth translations as both on-screen text and audio. Beginning a live translate session in Google Translate used to require Pixel Buds, but that won't be the case going forward. Google says a beta test of expanded headphone support is launching today in the US, Mexico, and India. The audio translation attempts to preserve the tone and cadence of the original speaker, but it's not as capable as the full AI-reproduced voice translations you can do on the latest Pixel phones. Google says this feature should work on any earbuds or headphones, but it's only for Android right now. The feature will expand to iOS in the coming months. [...] The new translation model, which is also available in the search-based translation interface, supports over 70 languages.

Read more of this story at Slashdot.

  •  

I Tried the New Sunscreen Ingredient the FDA Is Finally Approving After Over 20 Years

Some unexpected good news from the FDA: bemotrizinol, a sunscreen ingredient that has been used in Europe and Asia for decades, is finally being added to the allowable ingredients list for products sold in the U.S. Bemotrizinol is the active ingredient in sunscreens like Bioré Watery Essence, which has a cult following for being unlike anything we can get in the U.S.

I’ve tried Bioré UV Aqua Rich Watery Essence (that’s the full name of the product) in its original Japanese formulation. This sunscreen is a cult favorite on skincare and Asian beauty forums because of its non-greasy feel, and because it protects against both UVA and UVB rays without leaving a white cast. I got mine from a friend who had either picked it up while traveling or possibly ordered from overseas; you can’t buy it in U.S.-based stores. 

I’ll explain why this is below, but first: it truly is nothing like anything we have locally. Even our most “non-greasy” sunscreens tend to feel a little goopy or sticky. This one really feels like nothing after you rub it in. I instantly understood why it’s so sought-after. Remembering that experience, I’m looking forward to what we might see in American sunscreens once manufacturers are allowed to include this ingredient. 

What’s so special about bemotrizinol?

Bemotrizinol has a lot of things going for it. One is that it “plays well with other sunscreen ingredients,” as one dermatologist told Women’s Health. You can make lighter, nicer-feeling sunscreens with it, hence the popularity of the Bioré formulation I tried. To see what I mean, check out this video where a dermatologist shows off the differences between Bioré's Japanese formulation and the version it sells in the U.S. The ingredients are different, and the texture just isn't the same.

It’s also more effective at broad-spectrum protection. With our current sunscreen formulations, all active ingredients protect against UVB rays (the rays that cause sunburn) but only a few can also provide protection against UVA rays (which contribute to wrinkling and aging of skin). UVB is considered to be the bigger risk for skin cancer, but both probably contribute to cancer risk. Right now, most broad-spectrum U.S. sunscreens use mineral components like zinc oxide. Mineral sunscreens work pretty well, but can leave a white cast on your skin when applied as thickly as you’re supposed to. 

Bemotrizinol is a chemical UV filter, so it doesn’t leave that white cast. But it protects well against UVA rays in addition to UVB, and it’s more photostable than a lot of our existing chemical sunscreen ingredients so it can last longer on the skin. In other words, it’s a chemical sunscreen, but combines some of the best features of both chemical and mineral sunscreens. 

It’s also considered to be one of the safest sunscreens. All sunscreens on the market are much safer than going without sunscreen, but all of our chemical sunscreen ingredients are currently undergoing a safety evaluation because regulators determined they are probably fine but need more research to know for sure. Currently only our two mineral sunscreen ingredients (zinc oxide and titanium dioxide) are considered GRAS, or generally recognized as safe and effective. Bemotrizinol will be the third.

If you're looking at ingredient lists on Asian or European sunscreens, be aware that it goes by several names. Tinosorb S is bemotrizinol; so is bis-ethylhexyloxyphenol methoxyphenyl triazine.

Why it’s taken so long

Ask anyone in the skincare world what they think about U.S. sunscreens, and for decades now you’d have gotten the same complaint: we’re missing out on the best sunscreens the rest of the world uses. (Our last new sunscreen ingredient was approved in 1996.) In most countries, sunscreens are regulated as cosmetics, but in the U.S. they are regulated as drugs, which means they face more rigorous testing and approval requirements.

The CARES Act, passed in 2020 for pandemic relief, provided a way for over-the-counter drugs to be sold without going through the complete approval process, so long as the FDA was satisfied they were safe and effective. Bemotrizinol met the criteria, thanks in large part to the fact that it has been used safely since 2000 in Europe, Asia, and Australia. The FDA’s rule on bemotrizinol still needs to be finalized, but it seems likely we’ll see new sunscreens on shelves before the end of 2026.

  •  

How I Use the NotebookLM Slide Deck Generator to Study More Easily

Once again, there's a new feature in Google's NotebookLM, the AI tool that functions like a personal assistant and only references material you provide it. This one is a slide deck generator, which can be useful if you need to make a presentation in a hurry, but I've been using it a little differently: to help myself retain new information.

Generating a slide deck in NotebookLM

First, you should know how to generate a deck. In case you're unfamiliar with NotebookLM, it's basically just like ChatGPT, but instead of pulling answers from the big, wide Internet, it only relies on PDFs, links, videos, and text you input as resources. This makes it the perfect tool for working on a specific project or studying for a class, since you don't run the risk of inadvertently getting misled by some random, unrelated source.

You can use the chatbot feature the way you would use ChatGPT, asking questions and getting summaries of your materials. You can also automatically generate flashcards, videos, infographics, mind maps, fake podcasts, and much more.

To generate slides, it's the same process you'd follow to make any of those: In the left-side panel, select all of the sources you want the tool to pull from. In the right-side panel, select Slide Deck from the menu. After a few minutes, you'll get slides you can download as a PDF, much as you would a PowerPoint file, and you can upload that PDF to Google Slides or PowerPoint to create a simple presentation.

Why I like NotebookLM's slide deck feature

I've mentioned before that while I love NotebookLM and use it every day for both work and personal pursuits, I can't stand its app. It just doesn't work nearly as well as the browser version, which is a shame, because the browser version works so well. So I pretty much ignore the app and rarely use NotebookLM on mobile; when I do, I access it through my mobile browser, which we all know is an annoying workaround that never quite translates to the smaller screen.

NotebookLM slides on mobile. Credit: Google/Lindsey Ellefson

With the slide PDF, however, I get a ready-made study guide complete with visuals, which I can send to myself via iMessage and study on the go. When I generate my own study materials without NotebookLM, I almost always do it in Google Slides, then download the full PDF and review the slides like a giant study guide, so this new feature is taking a bunch of the work out of doing that for me.

  •  

OpenAI built an AI coding agent and uses it to improve the agent itself

With the popularity of AI coding tools rising among some software developers, their adoption has begun to touch every part of the software development process, including the improvement of the AI coding tools themselves.

In interviews with Ars Technica this week, OpenAI employees revealed the extent to which the company now relies on its own AI coding agent, Codex, to build and improve the development tool. “I think the vast majority of Codex is built by Codex, so it’s almost entirely just being used to improve itself,” said Alexander Embiricos, product lead for Codex at OpenAI, in a conversation on Tuesday.

Codex, which OpenAI launched in its modern incarnation as a research preview in May 2025, operates as a cloud-based software engineering agent that can handle tasks like writing features, fixing bugs, and proposing pull requests. The tool runs in sandboxed environments linked to a user’s code repository and can execute multiple tasks in parallel. OpenAI offers Codex through ChatGPT’s web interface, a command-line interface (CLI), and IDE extensions for VS Code, Cursor, and Windsurf.
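
For readers curious what the command-line route looks like in practice, here is a minimal sketch. The npm package name, the codex command, and the example prompt are assumptions based on OpenAI’s publicly released Codex CLI rather than details from this article, so treat them as illustrative rather than authoritative.

    # Minimal sketch; package name and invocation are assumptions, not taken from the article
    # Install the Codex CLI globally via npm
    npm install -g @openai/codex

    # Run it from inside a git repository, seeding the session with a task;
    # the agent works in a sandbox and proposes changes for you to review
    codex "find the cause of the failing unit test and suggest a fix"

The ChatGPT web interface and the IDE extensions mentioned above are front ends to the same underlying agent, so the main difference is where you review the changes it proposes.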

Read full article

Comments

© Mininyx Doodle via Getty Images

  •