
Big AI companies sign safety pledge

21 May 2024 at 09:12
[Image: logos of four companies] (credit: Financial Times)

Leading artificial intelligence companies have signed up to a new round of voluntary commitments on AI safety, the UK and South Korean governments have announced.

The companies, which include tech giants Amazon, Google, Meta, and Microsoft as well as Sam Altman-led OpenAI, Elon Musk's xAI, and Chinese developer Zhipu AI, will publish frameworks outlining how they will measure the risks of their "frontier" AI models.

The groups committed "not to develop or deploy a model at all" if severe risks could not be mitigated, the two governments said ahead of the opening of a global AI summit in Seoul on Tuesday.


Game dev says contract barring "subjective negative reviews" was a mistake

13 May 2024 at 11:59
[Image] Artist's conception of NetEase using a legal contract to try to stop a wave of negative reviews of its closed alpha. (credit: NetEase)

The developers of team-based shooter Marvel Rivals have apologized for a contract clause that made creators promise not to provide "subjective negative reviews of the game" in exchange for early access to a closed alpha test.

The controversial early access contract gained widespread attention over the weekend when streamer Brandon Larned shared a portion on social media. In the "non-disparagement" clause shared by Larned, creators who are provided with an early download code are asked not to "make any public statements or engage in discussions that are detrimental to the reputation of the game." In addition to the "subjective negative review" example above, the clause also specifically prohibits "making disparaging or satirical comments about any game-related material" and "engaging in malicious comparisons with competitors or belittling the gameplay or differences of Marvel Rivals."

Extremely disappointed in @MarvelRivals.

Multiple creators asked for key codes to gain access to the playtest and are asked to sign a contract.

The contract signs away your right to negatively review the game.

Many streamers have signed without reading just to play

Insanity. pic.twitter.com/c11BUDyka9

— Brandon Larned (@A_Seagull) May 12, 2024

In a Discord post noticed by PCGamesN over the weekend, Chinese developer NetEase apologized for what it called "inappropriate and misleading terms" in the contract. "Our stand is absolutely open for both suggestions and criticisms to improve our games, and... our mission is to make Marvel Rivals better [and] satisfy players by those constructive suggestions."


Licensing AI Engineers

25 March 2024 at 07:04

The debate over professionalizing software engineers is decades old. (The basic idea is that, like lawyers and architects, there should be some professional licensing requirement for software engineers.) Here’s a law journal article recommending the same idea for AI engineers.

This Article proposes another way: professionalizing AI engineering. Require AI engineers to obtain licenses to build commercial AI products, push them to collaborate on scientifically-supported, domain-specific technical standards, and charge them with policing themselves. This Article’s proposal addresses AI harms at their inception, influencing the very engineering decisions that give rise to them in the first place. By wresting control over information and system design away from companies and handing it to AI engineers, professionalization engenders trustworthy AI by design. Beyond recommending the specific policy solution of professionalization, this Article seeks to shift the discourse on AI away from an emphasis on light-touch, ex post solutions that address already-created products to a greater focus on ex ante controls that precede AI development. We’ve used this playbook before in fields requiring a high level of expertise where a duty to the public welfare must trump business motivations. What if, like doctors, AI engineers also vowed to do no harm?

I have mixed feelings about the idea. I can see the appeal, but it never seemed feasible. I’m not sure it’s feasible today.
