Abstract It has been argued that biological visual systems have evolved to perform well on "typical" natural stimuli. Previous studies have successfully tested this idea on isolated image dimensions, typically relying on fairly simple statistical relationships. However, it remains unclear whether biological visual systems are truly tuned to the full set of complex, multidimensional statistical regularities that define natural images. Recent developments in artificial intelligence have made it possible to train image models that can learn arbitrarily complex and multidimensional image statistics. In this talk I will report on recent results in which we used one such class of image models, Generative Adversarial Nets (GANs), to probe human vision with stimuli that possess (almost) the full set of statistical regularities found in natural images. We find that humans are indeed able to detect changes in the high-dimensional image properties captured by GANs. Manipulating these high-dimensional image properties tends to produce subjectively meaningful image changes and appears to follow fairly simple rules that can be formulated as high-dimensional extensions of known mechanisms at lower levels of processing. Thus, human vision appears to be well adapted to the full set of statistical regularities that define our typical visual environment.
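The probing approach described above — generating stimulus families by moving through a generative model's latent space — can be sketched in a minimal form. The snippet below is an illustrative assumption, not the authors' actual setup: a fixed random linear map plus a squashing nonlinearity stands in for a trained GAN generator, and all names and parameters are hypothetical.

```python
import numpy as np

# Toy stand-in for a trained GAN generator: a fixed random linear map
# followed by tanh, mapping a 16-d latent vector to an 8x8 "image".
# (Illustrative only; a real experiment would use a trained generator.)
rng = np.random.default_rng(0)
LATENT_DIM, IMG_SIDE = 16, 8
W = rng.normal(size=(IMG_SIDE * IMG_SIDE, LATENT_DIM))

def generate(z):
    """Map a latent vector z to an image-shaped array."""
    return np.tanh(W @ z).reshape(IMG_SIDE, IMG_SIDE)

# Probe stimuli: start from a base latent vector and step along one
# latent direction, yielding a family of images whose high-dimensional
# statistics change gradually with step size.
z0 = rng.normal(size=LATENT_DIM)
direction = rng.normal(size=LATENT_DIM)
direction /= np.linalg.norm(direction)

steps = np.linspace(0.0, 2.0, 5)
stimuli = [generate(z0 + s * direction) for s in steps]

# Pixel-space distance from the base image grows with latent step size,
# giving a graded set of image changes an observer could be asked to detect.
dists = [np.linalg.norm(img - stimuli[0]) for img in stimuli]
```

In an actual study the graded stimuli would be shown to observers in a detection or discrimination task; the sketch only illustrates how latent-space manipulations translate into controlled, high-dimensional image changes.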