Weekly Learning: the Artificial Intelligence Edition

AI in Design

AI in Design introduces some interesting tools and trends in how AI is being used to help designers.

Nothing quite new here, but the point is this: future designers will either be designing AI systems or working with AI systems to design other things.

If you're a designer, there's no way to avoid learning about and understanding AI.

Actionable AI Ethics

Someone is writing a book about applying AI ethics in real-world work. Let's just hope the author actually finishes it.

AI Research at Google

Is Google’s AI research about to implode? looks at the challenges facing AI research, with Google as its case study. In case you’re curious: the author’s answer is yes.

There are many challenges in AI research:

The DeepMind research program had shown what deep neural networks could do, but it had also revealed what they couldn’t do. For example, although they could train their system to win at the Atari games Space Invaders and Breakout, it couldn’t play games like Montezuma’s Revenge, where rewards could only be collected after completing a series of actions (for example, climb down ladder, get down rope, get down another ladder, jump over skull and climb up a third ladder).
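To make the sparse-reward problem concrete, here’s a minimal Python sketch (my own toy environment; the action names and numbers are illustrative, not DeepMind’s actual setup). When the reward only appears after an exact chain of actions, random exploration almost never produces a learning signal:

```python
import random

# Toy sparse-reward environment: reward arrives only after an exact
# chain of actions, like the ladder-rope-skull sequence the quote
# describes (action names here are made up for illustration).
REQUIRED = ["down_ladder", "down_rope", "down_ladder",
            "jump_skull", "up_ladder"]
ACTIONS = ["down_ladder", "down_rope", "jump_skull",
           "up_ladder", "left", "right"]

def random_episode(steps=5):
    """One episode of pure random exploration; reward is all-or-nothing."""
    taken = [random.choice(ACTIONS) for _ in range(steps)]
    return 1 if taken == REQUIRED else 0

episodes = 100_000
hits = sum(random_episode() for _ in range(episodes))
print(f"reward reached in {hits} of {episodes} random episodes")
# Chance per episode is (1/6)**5, about 0.013% -- almost every episode
# returns zero signal, unlike Breakout, where each brick gives feedback.
```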

There’s also the problem of “underspecification,” which means “many models explain the same data.”

…underspecification presents significant challenges for the credibility of modern machine learning. It affects everything from tumour detection to self-driving cars, and (of course) language models.
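To see “many models explain the same data” in miniature, here’s a hedged sketch with synthetic data and toy models (nothing from the underspecification paper itself): two models that fit the training points equally well, yet disagree as soon as you move away from them:

```python
import numpy as np

# Five training points; both models below fit them with ~zero error.
x_train = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
y_train = np.sin(2 * np.pi * x_train)

# Model A: the unique degree-4 polynomial through the five points.
poly = np.polynomial.Polynomial.fit(x_train, y_train, deg=4)

# Model B: piecewise-linear interpolation through the same points.
def piecewise(x):
    return np.interp(x, x_train, y_train)

for name, model in [("polynomial", poly), ("piecewise", piecewise)]:
    err = np.abs(model(x_train) - y_train).max()
    print(f"{name} max training error: {err:.2e}")  # both ~0

# Away from the training points, the two "equally good" models
# disagree -- the data alone underspecifies the answer.
x_new = np.array([0.1, 0.6, 1.5])
print("polynomial:", poly(x_new))
print("piecewise: ", piecewise(x_new))
```

Scale that ambiguity up to deep networks and the disagreements surface off-distribution, which is exactly where tumour detectors and self-driving cars get into trouble.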

The author also finds some recent incidents at Google troubling:

What concerns me is that when Google’s own researchers start to produce novel ideas then the company perceives these as a threat. So much of a threat that they fire their most innovative researcher and shut down the groups that are doing truly novel work.

The implication:

If the leaders who claim to be representing the majority of ‘Googlers’ can’t accept even the mildest of critiques from co-workers, then it does not bode well for the company’s innovation. The risk is that the ‘assumption free’ neural network proponents will double down on their own mistakes.

And the punch line:

Maybe one day we will see the transition from Hinton (ways of representing complex data in a neural network) to Gebru (accountable representation of those data sets) in the same way as we see the transition from Newton to Einstein. And when we do, it won’t be Google that should take the credit.

Computers and Narrative

Why Computers Will Never Write Good Novels explains:

Although these remarkable displays of computer cleverness all originate in the Aristotelian syllogisms that Boole equated with the human mind, it turns out that the logic of their thought is different from the logic that you and I typically use to think.

The problem:

…as natural as causal reasoning feels to us, computers can’t do it.

And that inability defines the limits of computers:

This inability to perform causal reasoning means that computers cannot do all sorts of stuff that our human brain can. They cannot escape the mathematical present-tense of 2 + 2 is 4 to cogitate in was or will be. They cannot think historically or hatch future schemes to do anything, including take over the world.

That’s why computers cannot write literature:

Literature is a wonderwork of imaginative, weird, and dynamic variety. But at the bottom of its strange and branching multiplicity is an engine of causal reasoning. The engine we call narrative.

And unfortunately:

None of this narrative think-work can be done by computers…

The best that computers can do is spit out word soups. Those word soups are syllogistically equivalent to literature. But they’re narratively different.

Good AI vs. Bad AI

Fight Fire with Fire: Using Good AI to Combat Bad AI talks about:

Real-world cases and expert opinions about the present and future of audio deepfakes and AI-powered fraud in particular and how to use AI to build a strong defense against malicious AI.

The problem:

…audio deepfakes are extremely hard to catch red-handed. Victims rarely have any evidence as they don’t typically record inbound calls, and the technology detecting synthetic speech in real-time is still in its infancy.

The solution:

To distinguish between real and fake audio, the detector technology uses visual representations of spectrograms, which are also used to train speech synthesis models. While sounding almost identical to the unsuspecting ear, spectrograms of fake versus real audio differ.
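As a rough illustration of the representation the quote describes, here’s a minimal sketch using generic scipy spectrograms on synthetic audio (this is not the detector’s actual pipeline, just the underlying idea that near-identical sounds can have measurably different spectrograms):

```python
import numpy as np
from scipy.signal import spectrogram

fs = 16_000  # 16 kHz, a common sample rate for speech

# Stand-ins for a real and a suspect clip: a clean tone vs. the same
# tone with a faint high-frequency component, standing in for the
# subtle artifacts synthesis models can leave behind (purely
# illustrative, not real deepfake audio).
t = np.arange(fs) / fs
real = np.sin(2 * np.pi * 440 * t)
fake = real + 0.01 * np.sin(2 * np.pi * 7000 * t)

# A spectrogram is time-windowed Fourier magnitudes: frequency content
# per short frame. Detectors treat it like an image.
f, frames, S_real = spectrogram(real, fs=fs, nperseg=512)
_, _, S_fake = spectrogram(fake, fs=fs, nperseg=512)

# The clips sound nearly identical, but their spectrograms differ
# measurably -- which is what a trained classifier picks up on.
diff = np.abs(S_real - S_fake).mean()
print(f"spectrogram shape: {S_real.shape}, mean abs difference: {diff:.3e}")
```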

It’s a hard job.

Elements of AI

Elements of AI is a well-designed free online course.

Architects of Intelligence

Architects of Intelligence: The truth about AI from the people building it is a thought-provoking book that’s worth your time.
