9.5.24

Cabaret of Dangerous Ideas

I'll be appearing at the Fringe in the Cabaret of Dangerous Ideas, 12.20-13.20 Monday 5 August and 12.20-13.20 Saturday 17 August, at Stand 5. (The 5 August show is joint with Matthew Knight of National Museums Scotland.)

Here's the brief summary:

Chatbots like ChatGPT and Google's Gemini dominate the news. But the answers they give are, literally, bullshit. Historically, artificial intelligence has two strands. One is machine learning, which powers ChatGPT and art-bots like Midjourney, and which threatens to steal the work of writers and artists and put some of us out of work. The other is the 2,000-year-old discipline of logic. Professor Philip Wadler (The University of Edinburgh) takes you on a tour of the risks and promises of these two strands, and explores how they may work better together.

I'm looking forward to the audience interaction. Everyone should laugh and learn something. Do come!

I'm speaking at Lambda Days 2024.

I'm looking forward to the conference and to catching up with friends in Krakow.


16.8.23

Orwell was right

A short comic by Mike Dawson. Stick with it to the end for a valid point.


7.8.23

The Problem with Counterfeit People

A sensible proposal in The Atlantic from philosopher Daniel Dennett. Is anyone campaigning for a law or regulation to this effect?

Creating counterfeit digital people risks destroying our civilization. Democracy depends on the informed (not misinformed) consent of the governed. By allowing the most economically and politically powerful people, corporations, and governments to control our attention, these systems will control us. Counterfeit people, by distracting and confusing us and by exploiting our most irresistible fears and anxieties, will lead us into temptation and, from there, into acquiescing to our own subjugation.

There may be a way of at least postponing and possibly even extinguishing this ominous development, borrowing from the success—limited but impressive—in keeping counterfeit money merely in the nuisance category for most of us (or do you carefully examine every $20 bill you receive?).

As [historian Yuval Noah] Harari says, we must “make it mandatory for AI to disclose that it is an AI.” How could we do that? By adopting a high-tech “watermark” system like the EURion Constellation, which now protects most of the world’s currencies. The system, though not foolproof, is exceedingly difficult and costly to overpower—not worth the effort, for almost all agents, even governments. Computer scientists similarly have the capacity to create almost indelible patterns that will scream FAKE! under almost all conditions—so long as the manufacturers of cellphones, computers, digital TVs, and other devices cooperate by installing the software that will interrupt any fake messages with a warning.
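
Dennett's technical fix is easier to grasp with a toy example. Here is a minimal sketch in Python of the kind of machine-readable marker he has in mind; the zero-width-character encoding and the marker string are my own inventions for illustration, not any real watermarking standard.

    # Toy "counterfeit person" marker: embed an invisible AI-disclosure
    # tag in a piece of text, and detect it. Purely illustrative.
    ZW0, ZW1 = "\u200b", "\u200c"   # zero-width space / zero-width non-joiner
    MARKER = "AI-GENERATED"         # hypothetical disclosure tag

    def embed(text: str) -> str:
        """Append the marker, encoded as invisible zero-width characters."""
        bits = "".join(f"{byte:08b}" for byte in MARKER.encode("ascii"))
        return text + "".join(ZW1 if b == "1" else ZW0 for b in bits)

    def detect(text: str) -> bool:
        """Recover the zero-width payload, if any, and check for the marker."""
        bits = "".join("1" if c == ZW1 else "0" for c in text if c in (ZW0, ZW1))
        usable = len(bits) - len(bits) % 8
        payload = bytes(int(bits[i:i+8], 2) for i in range(0, usable, 8))
        return MARKER.encode("ascii") in payload

    stamped = embed("A message drafted by a chatbot.")
    print(stamped)           # looks identical to the unstamped text
    print(detect(stamped))   # True -- a phone or browser could warn the reader

A marker like this survives copy-and-paste but is trivial to strip out on purpose; making it "exceedingly difficult and costly to overpower", as Dennett asks, is the genuinely hard part, which is why he appeals to the cooperation of device manufacturers.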

Will A.I. become the new McKinsey?


Ted Chiang, ever thoughtful, suggests a new metaphor for A.I. Published in The New Yorker.

So, I would like to propose another metaphor for the risks of artificial intelligence. I suggest that we think about A.I. as a management-consulting firm, along the lines of McKinsey & Company. Firms like McKinsey are hired for a wide variety of reasons, and A.I. systems are used for many reasons, too. But the similarities between McKinsey—a consulting firm that works with ninety per cent of the Fortune 100—and A.I. are also clear. Social-media companies use machine learning to keep users glued to their feeds. In a similar way, Purdue Pharma used McKinsey to figure out how to “turbocharge” sales of OxyContin during the opioid epidemic. Just as A.I. promises to offer managers a cheap replacement for human workers, so McKinsey and similar firms helped normalize the practice of mass layoffs as a way of increasing stock prices and executive compensation, contributing to the destruction of the middle class in America.

A former McKinsey employee has described the company as “capital’s willing executioners”: if you want something done but don’t want to get your hands dirty, McKinsey will do it for you. That escape from accountability is one of the most valuable services that management consultancies provide. Bosses have certain goals, but don’t want to be blamed for doing what’s necessary to achieve those goals; by hiring consultants, management can say that they were just following independent, expert advice. Even in its current rudimentary form, A.I. has become a way for a company to evade responsibility by saying that it’s just doing what “the algorithm” says, even though it was the company that commissioned the algorithm in the first place.


31.7.23

Our Labor Built AI

An introduction for laymen from The Nib. By Dan Nott and Scott Cambo.


17.7.23

Gradual Effect Handlers

As part of the SAGE project funded by Huawei, Li-yao Xia has written in Agda a model of a gradual type system for effects and handlers. It is available on GitHub.
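
For readers unfamiliar with the jargon, an effect handler separates a computation that performs abstract effects from the code that decides what those effects mean. Below is a minimal sketch of the (non-gradual) idea, rendered in Python using generators for accessibility; it is only an illustration of the concept, not Li-yao Xia's Agda model.

    # A computation performs effects by yielding requests; a handler
    # interprets the requests and resumes the computation with answers.
    def computation():
        name = yield ("Ask", "name")         # perform an 'Ask' effect
        yield ("Print", f"hello, {name}")    # perform a 'Print' effect
        return len(name)

    def handle(comp):
        """Interpret 'Ask' and 'Print' requests; return the final value."""
        gen = comp()
        try:
            request = next(gen)
            while True:
                op, arg = request
                if op == "Ask":
                    request = gen.send("world")   # answer the question
                else:                             # op == "Print"
                    print(arg)
                    request = gen.send(None)
        except StopIteration as stop:
            return stop.value

    print(handle(computation))   # prints "hello, world", then 5

Roughly, a gradual type system for effects lets some parts of a program declare exactly which effects they may perform while other parts remain unannotated, with the two checked consistently where they meet.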



21.6.23

Writing and Speaking with Style

I instruct all my students to read Strunk and White's The Elements of Style and Pinker's The Sense of Style. Now I have another resource to recommend.

Benjamin Pierce writes:

In 2021, Rajeev Alur and I created a course at Penn called Writing and Speaking with Style. Aimed at PhD students in computer science and other areas of science and engineering, the course is a semester-long immersion in effective technical writing and speaking. Since then, I've run it twice more, improving and polishing each time. I think it's pretty good now. :-)

In hopes that the course materials may be useful to others, all the slide decks, timeline, readings, and detailed notes for instructors are now publicly available: you can find it all here.


16.5.23

Naomi Klein on AI Hallucinations

Amongst all the nonsense, something sensible in the press about AI: “AI machines aren’t ‘hallucinating’. But their makers are” in The Guardian. Written by Naomi Klein, the author of one of my favourite books, This Changes Everything.

But first, it’s helpful to think about the purpose the utopian hallucinations about AI are serving. What work are these benevolent stories doing in the culture as we encounter these strange new tools? Here is one hypothesis: they are the powerful and enticing cover stories for what may turn out to be the largest and most consequential theft in human history. Because what we are witnessing is the wealthiest companies in history (Microsoft, Apple, Google, Meta, Amazon …) unilaterally seizing the sum total of human knowledge that exists in digital, scrapable form and walling it off inside proprietary products, many of which will take direct aim at the humans whose lifetime of labor trained the machines without giving permission or consent.

This should not be legal. In the case of copyrighted material that we now know trained the models (including this newspaper), various lawsuits have been filed that will argue this was clearly illegal. Why, for instance, should a for-profit company be permitted to feed the paintings, drawings and photographs of living artists into a program like Stable Diffusion or Dall-E 2 so it can then be used to generate doppelganger versions of those very artists’ work, with the benefits flowing to everyone but the artists themselves?

The painter and illustrator Molly Crabapple is helping lead a movement of artists challenging this theft. “AI art generators are trained on enormous datasets, containing millions upon millions of copyrighted images, harvested without their creator’s knowledge, let alone compensation or consent. This is effectively the greatest art heist in history. Perpetrated by respectable-seeming corporate entities backed by Silicon Valley venture capital. It’s daylight robbery,” a new open letter she co-drafted states.

The trick, of course, is that Silicon Valley routinely calls theft “disruption” – and too often gets away with it. We know this move: charge ahead into lawless territory; claim the old rules don’t apply to your new tech; scream that regulation will only help China – all while you get your facts solidly on the ground. By the time we all get over the novelty of these new toys and start taking stock of the social, political and economic wreckage, the tech is already so ubiquitous that the courts and policymakers throw up their hands.

We saw it with Google’s book and art scanning. With Musk’s space colonization. With Uber’s assault on the taxi industry. With Airbnb’s attack on the rental market. With Facebook’s promiscuity with our data. Don’t ask for permission, the disruptors like to say, ask for forgiveness. (And lubricate the asks with generous campaign contributions.)


24.3.23

Benchmarking best practices

A handy summary prepared by Jesse Sigal. Thanks, Jesse!


Advice

- Determine what is relevant for you to actually benchmark (areas include accuracy, computational complexity, speed, memory usage, average/best/worst case, power usage, degree of achievable parallelism, probability of failure, clock time, performance vs time for anytime algorithms).

- Make sure you run on appropriate data, including generating random (but representative) data, and run a statistical analysis of the results (see the sketch after this list).

- Consider using multiple datasets and cross-validation.

- Consider the extreme cases as well.

- Find benchmarks the field will care about.
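
To make the statistics advice concrete, here is a minimal sketch of a benchmark harness in Python. The workload, the repetition counts, and the confidence level are placeholders of mine; a real harness would also control for warm-up, CPU frequency scaling, and other confounders, as the references below discuss.

    import statistics
    import time

    def bench(f, repetitions=30, inner=1000):
        """Time f repeatedly; report the mean per-call time and a rough
        95% confidence interval, rather than a single 'best' run."""
        samples = []
        for _ in range(repetitions):
            start = time.perf_counter()
            for _ in range(inner):
                f()
            samples.append((time.perf_counter() - start) / inner)
        mean = statistics.mean(samples)
        # 1.96 is the z-value for a 95% interval; adequate for ~30 samples
        half_width = 1.96 * statistics.stdev(samples) / len(samples) ** 0.5
        return mean, half_width

    mean, ci = bench(lambda: sorted(range(1000, 0, -1)))   # placeholder workload
    print(f"{mean * 1e6:.2f} us +/- {ci * 1e6:.2f} us (95% CI)")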

Books

- “Writing for Computer Science” by Justin Zobel

- “The Art of Computer Systems Performance Analysis” (1991) by Raj Jain

Papers

- A. Crapé and L. Eeckhout, “A Rigorous Benchmarking and Performance Analysis Methodology for Python Workloads,” 2020 IEEE International Symposium on Workload Characterization (IISWC), Beijing, China, 2020, pp. 83-93, doi: 10.1109/IISWC50251.2020.00017.

- A. Georges, D. Buytaert, and L. Eeckhout, “Statistically rigorous Java performance evaluation,” OOPSLA '07: Proceedings of the 22nd Annual ACM SIGPLAN Conference on Object-Oriented Programming Systems, Languages and Applications, October 2007. https://doi.org/10.1145/1297027.1297033

- E. van der Kouwe, D. Andriesse, H. Bos, C. Giuffrida, and G. Heiser, “Benchmarking Crimes: An Emerging Threat in Systems Security,” technical report, arXiv:1801.02381, January 2018.

- T. Hoefler and R. Belli, “Scientific benchmarking of parallel computing systems: twelve ways to tell the masses when reporting performance results,” Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (SC '15), 2015.

- S. Hunold and A. Carpen-Amarie, “Reproducible MPI benchmarking is still not as easy as you think,” IEEE Transactions on Parallel and Distributed Systems 27(12), 2016, pp. 3617-3630.

Online resources

http://gernot-heiser.org/benchmarking-crimes.html

https://www.sigplan.org/Resources/EmpiricalEvaluation/

https://software.ac.uk/

https://www.acm.org/publications/policies/artifact-review-and-badging-current


