16.8.23

 

Orwell was right

 


A short comic by Mike Dawson. Stick with it to the end for a valid point.



7.8.23

 

The Problem with Counterfeit People

 


A sensible proposal in The Atlantic from philosopher Daniel Dennett. Is anyone campaigning for a law or regulation to this effect?

Creating counterfeit digital people risks destroying our civilization. Democracy depends on the informed (not misinformed) consent of the governed. By allowing the most economically and politically powerful people, corporations, and governments to control our attention, these systems will control us. Counterfeit people, by distracting and confusing us and by exploiting our most irresistible fears and anxieties, will lead us into temptation and, from there, into acquiescing to our own subjugation.

There may be a way of at least postponing and possibly even extinguishing this ominous development, borrowing from the success—limited but impressive—in keeping counterfeit money merely in the nuisance category for most of us (or do you carefully examine every $20 bill you receive?).

As [historian Yuval Noah] Harari says, we must “make it mandatory for AI to disclose that it is an AI.” How could we do that? By adopting a high-tech “watermark” system like the EURion Constellation, which now protects most of the world’s currencies. The system, though not foolproof, is exceedingly difficult and costly to overpower—not worth the effort, for almost all agents, even governments. Computer scientists similarly have the capacity to create almost indelible patterns that will scream FAKE! under almost all conditions—so long as the manufacturers of cellphones, computers, digital TVs, and other devices cooperate by installing the software that will interrupt any fake messages with a warning.
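
To make the detection idea concrete, here is a toy sketch of my own devising in Haskell; nothing in it comes from Dennett's article beyond the general shape (embed a known pattern, and have receiving software check for it and warn). A real scheme would need a watermark that is statistically robust, not a literal marker string that anyone could strip out.

    import Data.List (isInfixOf)

    -- Hypothetical marker: a short run of zero-width characters.
    -- A realistic watermark would be far harder to detect and remove.
    marker :: String
    marker = "\8203\8204\8203"

    -- A generator stamps its output with the marker.
    stamp :: String -> String
    stamp s = marker ++ s

    -- Receiving software checks for the marker and warns the user.
    looksSynthetic :: String -> Bool
    looksSynthetic = (marker `isInfixOf`)

    main :: IO ()
    main = do
      let msg = stamp "This message was machine-generated."
      putStrLn (if looksSynthetic msg then "WARNING: FAKE!" else "No mark found.")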


 

Will A.I. become the new McKinsey?


Ted Chiang, ever thoughtful, suggests a new metaphor for A.I. Published in The New Yorker.

So, I would like to propose another metaphor for the risks of artificial intelligence. I suggest that we think about A.I. as a management-consulting firm, along the lines of McKinsey & Company. Firms like McKinsey are hired for a wide variety of reasons, and A.I. systems are used for many reasons, too. But the similarities between McKinsey—a consulting firm that works with ninety per cent of the Fortune 100—and A.I. are also clear. Social-media companies use machine learning to keep users glued to their feeds. In a similar way, Purdue Pharma used McKinsey to figure out how to “turbocharge” sales of OxyContin during the opioid epidemic. Just as A.I. promises to offer managers a cheap replacement for human workers, so McKinsey and similar firms helped normalize the practice of mass layoffs as a way of increasing stock prices and executive compensation, contributing to the destruction of the middle class in America.

A former McKinsey employee has described the company as “capital’s willing executioners”: if you want something done but don’t want to get your hands dirty, McKinsey will do it for you. That escape from accountability is one of the most valuable services that management consultancies provide. Bosses have certain goals, but don’t want to be blamed for doing what’s necessary to achieve those goals; by hiring consultants, management can say that they were just following independent, expert advice. Even in its current rudimentary form, A.I. has become a way for a company to evade responsibility by saying that it’s just doing what “the algorithm” says, even though it was the company that commissioned the algorithm in the first place.



31.7.23

 

Our Labor Built AI

 



An introduction for laymen from The Nib. By Dan Nott and Scott Cambo.



17.7.23

 

Gradual Effect Handlers

 

As part of the SAGE project, funded by Huawei, Li-yao Xia has written an Agda model of a gradual type system for effects and handlers. It is available on GitHub.
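
For readers who have not met effects and handlers before, here is a minimal free-monad sketch in Haskell, my own illustration rather than anything from the SAGE model (in particular, it shows no gradual typing): an effect is a signature of operations, and a handler gives those operations meaning.

    {-# LANGUAGE DeriveFunctor #-}

    -- Free monad over a signature functor f: a computation is either
    -- a result, or an operation whose argument positions hold the
    -- rest of the computation.
    data Free f a = Pure a | Op (f (Free f a))

    instance Functor f => Functor (Free f) where
      fmap g (Pure a) = Pure (g a)
      fmap g (Op m)   = Op (fmap (fmap g) m)

    instance Functor f => Applicative (Free f) where
      pure = Pure
      Pure g <*> x = fmap g x
      Op m   <*> x = Op (fmap (<*> x) m)

    instance Functor f => Monad (Free f) where
      Pure a >>= k = k a
      Op m   >>= k = Op (fmap (>>= k) m)

    -- One effect signature: an integer-valued mutable cell.
    data State k = Get (Int -> k) | Put Int k
      deriving Functor

    get :: Free State Int
    get = Op (Get Pure)

    put :: Int -> Free State ()
    put n = Op (Put n (Pure ()))

    -- A handler interprets the operations, here by state-passing.
    runState :: Free State a -> Int -> (a, Int)
    runState (Pure a)        s = (a, s)
    runState (Op (Get k))    s = runState (k s) s
    runState (Op (Put s' k)) _ = runState k s'

    example :: Free State Int
    example = do { n <- get; put (n + 1); get }

    main :: IO ()
    main = print (runState example 41)  -- prints (42,42)

A different handler for the same signature (one that logs every Put, say) would give the same program a different meaning; separating operations from their interpretation is the point.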




21.6.23

 

Writing and Speaking with Style

 



I instruct all my students to read Strunk and White's The Elements of Style and Pinker's The Sense of Style. Now I have another resource to recommend.

Benjamin Pierce writes:

In 2021, Rajeev Alur and I created a course at Penn called Writing and Speaking with Style. Aimed at PhD students in computer science and other areas of science and engineering, the course is a semester-long immersion in effective technical writing and speaking. Since then, I've run it twice more, improving and polishing each time. I think it's pretty good now. :-)

In hopes that the course materials may be useful to others, all the slide decks, timeline, readings, and detailed notes for instructors are now publicly available: you can find it all here.



16.5.23

 

Naomi Klein on AI Hallucinations




Amongst all the nonsense, something sensible in the press about AI: "AI machines aren't 'hallucinating'. But their makers are" in The Guardian. Written by Naomi Klein, the author of one of my favourite books, This Changes Everything.

But first, it’s helpful to think about the purpose the utopian hallucinations about AI are serving. What work are these benevolent stories doing in the culture as we encounter these strange new tools? Here is one hypothesis: they are the powerful and enticing cover stories for what may turn out to be the largest and most consequential theft in human history. Because what we are witnessing is the wealthiest companies in history (Microsoft, Apple, Google, Meta, Amazon …) unilaterally seizing the sum total of human knowledge that exists in digital, scrapable form and walling it off inside proprietary products, many of which will take direct aim at the humans whose lifetime of labor trained the machines without giving permission or consent.

This should not be legal. In the case of copyrighted material that we now know trained the models (including this newspaper), various lawsuits have been filed that will argue this was clearly illegal. Why, for instance, should a for-profit company be permitted to feed the paintings, drawings and photographs of living artists into a program like Stable Diffusion or Dall-E 2 so it can then be used to generate doppelganger versions of those very artists’ work, with the benefits flowing to everyone but the artists themselves?

The painter and illustrator Molly Crabapple is helping lead a movement of artists challenging this theft. “AI art generators are trained on enormous datasets, containing millions upon millions of copyrighted images, harvested without their creator’s knowledge, let alone compensation or consent. This is effectively the greatest art heist in history. Perpetrated by respectable-seeming corporate entities backed by Silicon Valley venture capital. It’s daylight robbery,” a new open letter she co-drafted states.

The trick, of course, is that Silicon Valley routinely calls theft “disruption” – and too often gets away with it. We know this move: charge ahead into lawless territory; claim the old rules don’t apply to your new tech; scream that regulation will only help China – all while you get your facts solidly on the ground. By the time we all get over the novelty of these new toys and start taking stock of the social, political and economic wreckage, the tech is already so ubiquitous that the courts and policymakers throw up their hands.

We saw it with Google’s book and art scanning. With Musk’s space colonization. With Uber’s assault on the taxi industry. With Airbnb’s attack on the rental market. With Facebook’s promiscuity with our data. Don’t ask for permission, the disruptors like to say, ask for forgiveness. (And lubricate the asks with generous campaign contributions.)



24.3.23

 

Benchmarking best practices

 




A handy summary prepared by Jesse Sigal. Thanks, Jesse!


Advice

- Determine what is relevant for you to actually benchmark (areas include accuracy, computational complexity, speed, memory usage, average/best/worst case, power usage, degree of achievable parallelism, probability of failure, clock time, performance vs time for anytime algorithms).

- Make sure you run on appropriate data, including generating random (but representative) data, and run a statistical analysis of the results (see the sketch after this list).

- Consider using multiple datasets and cross-validation.

- Consider the extreme cases as well.

- Find benchmarks the field will care about.
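
For the statistical-analysis point in particular, a benchmarking library can do much of the heavy lifting. Here is a minimal sketch using the Haskell criterion library (my choice of illustration, not part of Jesse's summary): it runs each benchmark many times and reports bootstrapped confidence intervals rather than a single timing.

    import Criterion.Main (bench, bgroup, defaultMain, whnf)

    -- The function under test: iterative Fibonacci.
    fib :: Int -> Integer
    fib n = go n 0 1
      where
        go 0 a _ = a
        go k a b = go (k - 1) b (a + b)

    main :: IO ()
    main = defaultMain
      [ bgroup "fib"  -- report each input size separately
          [ bench "1000"  (whnf fib 1000)   -- whnf evaluates the result,
          , bench "10000" (whnf fib 10000)  -- so we don't time a lazy thunk
          ]
      ]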

Books

- “Writing for Computer Science” by Justin Zobel

- “The art of computer systems performance analysis” (1990) by Raj Jain

Papers

- A. Crapé and L. Eeckhout, "A Rigorous Benchmarking and Performance Analysis Methodology for Python Workloads," 2020 IEEE International Symposium on Workload Characterization (IISWC), Beijing, China, 2020, pp. 83-93. doi: 10.1109/IISWC50251.2020.00017.

- A. Georges, D. Buytaert, and L. Eeckhout, "Statistically Rigorous Java Performance Evaluation," OOPSLA '07: Proceedings of the 22nd Annual ACM SIGPLAN Conference on Object-Oriented Programming Systems, Languages and Applications, October 2007. https://doi.org/10.1145/1297027.1297033

- E. van der Kouwe, D. Andriesse, H. Bos, C. Giuffrida, and G. Heiser, "Benchmarking Crimes: An Emerging Threat in Systems Security," technical report, arXiv:1801.02381, January 2018.

- T. Hoefler and R. Belli, "Scientific Benchmarking of Parallel Computing Systems: Twelve Ways to Tell the Masses When Reporting Performance Results," Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (SC '15), 2015.

- S. Hunold and A. Carpen-Amarie, "Reproducible MPI Benchmarking Is Still Not As Easy As You Think," IEEE Transactions on Parallel and Distributed Systems 27(12), 2016, pp. 3617-3630.

Online resources

http://gernot-heiser.org/benchmarking-crimes.html

https://www.sigplan.org/Resources/EmpiricalEvaluation/

https://software.ac.uk/

https://www.acm.org/publications/policies/artifact-review-and-badging-current





7.3.23

 

Benchmarking Crimes



Some resources on benchmarking, recommended to the SPLS Zulip.
  1. Benchmarking Crimes, by Gernot Heiser.
  2. Empirical Evaluation Guidelines, from SIGPLAN.



15.12.22

 

The Rise and Fall of Peer Review

 


A fascinating blog post by Adam Mastroianni, suggesting that peer review is a failed experiment.

From antiquity to modernity, scientists wrote letters and circulated monographs, and the main barriers stopping them from communicating their findings were the cost of paper, postage, or a printing press, or on rare occasions, the cost of a visit from the Catholic Church. Scientific journals appeared in the 1600s, but they operated more like magazines or newsletters, and their processes of picking articles ranged from “we print whatever we get” to “the editor asks his friend what he thinks” to “the whole society votes.” Sometimes journals couldn’t get enough papers to publish, so editors had to go around begging their friends to submit manuscripts, or fill the space themselves. Scientific publishing remained a hodgepodge for centuries.

(Only one of Einstein’s papers was ever peer-reviewed, by the way, and he was so surprised and upset that he published his paper in a different journal instead.)

That all changed after World War II. Governments poured funding into research, and they convened “peer reviewers” to ensure they weren’t wasting their money on foolish proposals. That funding turned into a deluge of papers, and journals that previously struggled to fill their pages now struggled to pick which articles to print. Reviewing papers before publication, which was “quite rare” until the 1960s, became much more common. Then it became universal.

Now pretty much every journal uses outside experts to vet papers, and papers that don’t please reviewers get rejected. You can still write to your friends about your findings, but hiring committees and grant agencies act as if the only science that exists is the stuff published in peer-reviewed journals. This is the grand experiment we’ve been running for six decades.

The results are in. It failed.

Thanks to Scott Delman for the pointer.

The post also cites a scientific paper by Mastroianni that he published directly to his blog, circumventing peer review while allowing him to write in a far more readable style. It's a great read, and you can find it here: Things Could Be Better.


