Technology giant Google has made its final decision on explicit content created by artificial intelligence appearing in search results. The company officially announced that non-consensual ‘deepfake’ content in particular will be excluded from search results and blocked as soon as it is detected.
What if Google cannot remove deepfake content?
Google announced that deepfake content appearing in search results will be removed from search pages immediately. However, some images may not be completely removable from search results for technical reasons, and Google has prepared a solution for this case as well.
Google has experimented with using its own AI-generated images in search results, but these images do not depict real people and contain no explicit content. The company announced that it is collaborating with experts and with victims of non-consensual ‘deepfakes’ to address the problem and strengthen its systems.
Google has allowed people to request the removal of explicit ‘deepfake’ content for some time. Upon receiving a justified request, Google’s algorithms query and filter explicit results resembling the person in question and, if necessary, remove them immediately.
With Google’s latest update, images that cannot be removed from search results will at least have their visibility reduced. In this way, the individual’s personal rights will be better protected.
What do you think about this issue? Do you think the company is right to push such images into the background in search results? Share your opinions with us in the comments.
Source link: https://shiftdelete.net/google-18-deepfake-fotograflarla-ilgili-son-kararini-verdi