Inceptionism: an AI built on pragmatic principles turns out to be an artificial Derrida

Unwilling to accept this, they now say that the artificial neural network dreams. The name: Google Deep Dream.

The problem is that the Google Deep Dream images can be compared with dreams only because we humans do the comparing. In the general case of an automatic classifier of big data, there is no such term of comparison. How can we, the pragmatic scientists, know whether the output obtained from data (by training a neural network on other data) is full of dreamy dog eyes or not?
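(For concreteness, here is a minimal sketch of the mechanism behind those images, not Google's actual code: gradient ascent on the input image so that whatever a chosen layer already "sees" gets amplified. It assumes PyTorch and torchvision are available; the model, the layer `inception4c`, the step size and the iteration count are illustrative choices of mine, not the original settings.)

```python
# Minimal "Deep Dream"-style sketch: amplify a layer's activations by
# gradient ascent on the input image. Assumes PyTorch + torchvision.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.googlenet(weights=models.GoogLeNet_Weights.DEFAULT).eval()

# Capture the activations of one intermediate layer via a forward hook.
activations = {}
def hook(module, inputs, output):
    activations["feat"] = output
model.inception4c.register_forward_hook(hook)

preprocess = T.Compose([T.Resize(224), T.ToTensor()])
img = preprocess(Image.open("input.jpg").convert("RGB"))
img = img.unsqueeze(0).requires_grad_(True)

for _ in range(20):
    model(img)
    # "How strongly does this layer respond?" -- the quantity we push up.
    loss = activations["feat"].norm()
    loss.backward()
    with torch.no_grad():
        img += 0.01 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.grad.zero_()
```

Whatever the network was trained to detect (dogs, eyes, pagodas) gets hallucinated back into the picture; the "dreaminess" is us recognizing it.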

If we can’t validate, by pragmatic means, the meaning obtained from big data, then we might as well renounce analytic philosophy and turn towards so-called continental philosophy.

That is seen as not serious, of course, so instead we say that the ANN dreams, whatever that means. With this stance we turn a kick in the ass of our most fundamental beliefs into perceived progress.

___________________________________________________________________

One thought on “Inceptionism: an AI built on pragmatic principles turns out to be an artificial Derrida”

  1. Hysteresis of patterning feedback, more like. The interest is in the fact that they do such a good job of producing cohesively recognizable hallucinatory distortions. That points to some value in neural nets as a model for sensory-cognitive (dys)function but says nothing about A.I.
