
Google Mixes Art with AI, Pushing Limits of Computer Creativity


The ambitious yet experimental "Magenta" project is one of many projects from Google Brain, which is responsible for many AI products like Google Translate and Google Photos. The team is dedicated to experimenting with new and different forms of machine learning to make computers smarter. (Image: Google Inc.)

The ambitious yet experimental “Magenta” project is one of many projects from Google Brain, which is responsible for many AI products like Google Translate and Google Photos. The team is dedicated to experimenting with new and different forms of machine learning to make computers smarter. (Image: Google Inc.)

SEOUL, June 22 (Korea Bizwire) – Google Inc. is teaching an artificial intelligence (AI) algorithm to generate music and art, part of the company’s effort to push the limits of computer creativity toward the level of human beings, a senior engineer at the company said Thursday.

The ambitious yet experimental “Magenta” project is one of many projects from Google Brain, which is responsible for many AI products like Google Translate and Google Photos. The team is dedicated to experimenting with new and different forms of machine learning to make computers smarter.

“Can we use machine learning to create compelling art and music?” said Douglas Eck, a research scientist at Google Brain, describing the question behind the project. The work uses TensorFlow, Google’s open-source machine learning library, to train computers to one day create art.

Google released two new software programs in May. One of them, named SketchRNN, lets users draw lines and sketches while the program predicts what comes next.

“We gathered more than five million user-drawn sketches,” said Eck, adding that the program produces drawings by guessing what users want to draw.
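The idea resembles next-word prediction in text, applied to pen strokes. The toy example below is an illustrative assumption, not Google’s actual SketchRNN code: a recurrent network reads a sequence of pen movements, each a (dx, dy, pen-lifted) step, and learns to propose the next one, which is how a model can continue a partial drawing.

```python
# Toy illustration of stroke prediction (an assumption for illustration, not the
# actual SketchRNN model): a recurrent network reads a sequence of pen movements
# and learns to predict the next one, letting it continue a partial drawing.
import numpy as np
import tensorflow as tf

SEQ_LEN = 64    # pen movements the model sees at once
FEATURES = 3    # each step is (dx, dy, pen_lifted)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(SEQ_LEN, FEATURES)),
    tf.keras.layers.LSTM(128),         # summarizes the stroke sequence so far
    tf.keras.layers.Dense(FEATURES),   # predicts the next (dx, dy, pen_lifted)
])
model.compile(optimizer="adam", loss="mse")

# Random stand-ins for the millions of user-drawn sketches mentioned above.
strokes = np.random.randn(256, SEQ_LEN, FEATURES).astype("float32")
next_steps = np.random.randn(256, FEATURES).astype("float32")
model.fit(strokes, next_steps, epochs=1, batch_size=32, verbose=0)

# Given a partial drawing, the trained model proposes the next pen movement.
print(model.predict(strokes[:1], verbose=0))
```

The real model is more elaborate, outputting a probability distribution over possible next strokes rather than a single guess, but next-step prediction is the core idea described here.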

The scientist said the program still has limits, as the computer can currently “output” only 75 categories of shapes, such as cats and yoga poses.

Google forecasts that the software may one day be able to automatically draw the details of a cat’s face and even fill in colors, eventually advancing to the scope of making art pieces.

The second program, called NSynth, takes individual audio samples, such as a guitar note and a piano segment played by professional musicians, and combines them to make a totally different sound, Google said.

Using publicly available recurrent neural network code, the program can analyze and differentiate distinct audio qualities and combine those properties to form a brand-new sound that has never existed before, the researcher said.
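Conceptually, the trick is to blend the learned representations of two sounds rather than the raw audio. The toy example below is a hypothetical illustration of that interpolation idea, not the actual NSynth system: two waveforms are encoded into compact feature vectors, the vectors are averaged, and the blend is decoded back into a new waveform.

```python
# Toy illustration of blending two sounds in a learned feature space (an
# assumption for illustration, not the actual NSynth system): encode two
# waveforms into compact vectors, average them, and decode the blend.
import numpy as np
import tensorflow as tf

AUDIO_LEN = 1024   # toy waveform length
LATENT = 16        # size of the learned feature vector

encoder = tf.keras.Sequential([
    tf.keras.Input(shape=(AUDIO_LEN,)),
    tf.keras.layers.Dense(LATENT, activation="tanh"),  # compress audio to features
])
decoder = tf.keras.Sequential([
    tf.keras.Input(shape=(LATENT,)),
    tf.keras.layers.Dense(AUDIO_LEN),                   # expand features back to audio
])

# Random stand-ins for a guitar sample and a piano sample; in practice the
# encoder and decoder would first be trained as an autoencoder on real recordings.
guitar = np.random.randn(1, AUDIO_LEN).astype("float32")
piano = np.random.randn(1, AUDIO_LEN).astype("float32")

z_mix = 0.5 * encoder(guitar) + 0.5 * encoder(piano)  # blend the learned properties
new_sound = decoder(z_mix)                            # a sound that is neither instrument
print(new_sound.shape)
```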

“The goal of the program is not to replace artists but to help artists and original songwriters by creating totally different sounds based on a huge data set,” Eck said.

The scientist said many obstacles remain before reaching the level of human creativity, as a single neural network still cannot produce the long-term structure that people recognize as compelling music.

“In order to create the actual waveform (of the music), it is necessary to make 16,000 predictions per second,” the engineer said, adding that the program currently does a poor job of creating music.
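The 16,000 figure follows from generating raw audio one sample at a time: at a 16 kHz sample rate the model must predict every individual waveform sample, as the quick arithmetic below illustrates (the three-minute figure is an added example, not from the article).

```python
# Arithmetic behind the "16,000 predictions per second" figure: generating raw
# audio sample-by-sample at a 16 kHz sample rate means one prediction per sample.
SAMPLE_RATE = 16_000            # audio samples (and predictions) per second
print(SAMPLE_RATE * 1)          # 16,000 predictions for one second of audio
print(SAMPLE_RATE * 180)        # 2,880,000 predictions for a three-minute piece
```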

(Yonhap)
