
Game dev says contract barring β€œsubjective negative reviews” was a mistake

13 May 2024 at 11:59
Artist's conception of NetEase using a legal contract to try to stop a wave of negative reviews of its closed alpha. (credit: NetEase)

The developers of team-based shooter Marvel Rivals have apologized for a contract clause that made creators promise not to provide "subjective negative reviews of the game" in exchange for early access to a closed alpha test.

The controversial early access contract gained widespread attention over the weekend when streamer Brandon Larned shared a portion on social media. In the "non-disparagement" clause shared by Larned, creators who are provided with an early download code are asked not to "make any public statements or engage in discussions that are detrimental to the reputation of the game." In addition to the "subjective negative review" example above, the clause also specifically prohibits "making disparaging or satirical comments about any game-related material" and "engaging in malicious comparisons with competitors or belittling the gameplay or differences of Marvel Rivals."

Extremely disappointed in @MarvelRivals.

Multiple creators asked for key codes to gain access to the playtest and are asked to sign a contract.

The contract signs away your right to negatively review the game.

Many streamers have signed without reading just to play

Insanity. pic.twitter.com/c11BUDyka9

β€” Brandon Larned (@A_Seagull) May 12, 2024

In a Discord post noticed by PCGamesN over the weekend, Chinese developer NetEase apologized for what it called "inappropriate and misleading terms" in the contract. "Our stand is absolutely open for both suggestions and criticisms to improve our games, and... our mission is to make Marvel Rivals better [and] satisfy players by those constructive suggestions."


That inequality lies at the heart of what we call "data colonialism"

By: kmt
7 May 2024 at 04:26
"The term might be unsettling, but we believe it is appropriate. Pick up any business textbook and you will never see the history of the past thirty years described this way. A title like Thomas Davenport's Big Data at Work spends more than two hundred pages celebrating the continuous extraction of data from every aspect of the contemporary workplace, without once mentioning the implications for those workers. EdTech platforms and the tech giants like Microsoft that service them talk endlessly about the personalisation of the educational experience, without ever noting the huge informational power that accrues to them in the process." (Today's colonial "data grab" is deepening global inequalities, LSE)

The book: Data Grab: The New Colonialism of Big Tech and How to Fight Back (Penguin). Interview with the authors: Q and A with Nick Couldry and Ulises A. Mejias on Data Grab.

Licensing AI Engineers

25 March 2024 at 07:04

The debate over professionalizing software engineers is decades old. (The basic idea is that, like lawyers and architects, there should be some professional licensing requirement for software engineers.) Here’s a law journal article recommending the same idea for AI engineers.

This Article proposes another way: professionalizing AI engineering. Require AI engineers to obtain licenses to build commercial AI products, push them to collaborate on scientifically-supported, domain-specific technical standards, and charge them with policing themselves. This Article’s proposal addresses AI harms at their inception, influencing the very engineering decisions that give rise to them in the first place. By wresting control over information and system design away from companies and handing it to AI engineers, professionalization engenders trustworthy AI by design. Beyond recommending the specific policy solution of professionalization, this Article seeks to shift the discourse on AI away from an emphasis on light-touch, ex post solutions that address already-created products to a greater focus on ex ante controls that precede AI development. We’ve used this playbook before in fields requiring a high level of expertise where a duty to the public welfare must trump business motivations. What if, like doctors, AI engineers also vowed to do no harm?

I have mixed feelings about the idea. I can see the appeal, but it has never seemed feasible, and I'm not sure it's feasible today.
