Deepfake Manipulation Images Are a Modern Headache

As AI progresses and people benefit from it, the technology is also being used to spread false news and hateful images on the internet, a practice known as deepfake.


Introduction to Deepfake Manipulation

Deepfake technology has become a modern headache that worries everyone, but some researchers are actively trying to solve the problem. According to a team at MIT, applying a certain code to images can disrupt attempts to turn them into fakes.

AI Is Being Used to Spread False News and Hateful Images

Many benefits are being derived from AI technology, and people are taking advantage of them. However, AI is also being used to spread false news and hateful images on the internet, a practice known as deepfake.

Deepfake Technology

Deepfake technology has caught the attention of not only cybersecurity experts but also law enforcement agencies, and both are very concerned about it. Many countries are suffering from propaganda because of it.

The U.N. Expresses Concern

The United Nations has also expressed concern about the spread of false information on the Internet, as well as hateful content that turns societies and countries against each other. An MIT research team says it has developed a way to protect real images so that they cannot be used as a weapon of propaganda.

International Conference on Machine Learning

The discussion, held on Tuesday, explained that by making a few minor changes to an image's code, fake images created with AI technology can be identified as fake. The experts further said that this coding introduces meaningful distortions into manipulated versions of the image, keeping them out of propaganda.

Dangerous Propaganda

The team made minor changes to some images to show how they could otherwise be used for evil and dangerous propaganda. The team said that deepfakes are a dangerous weapon used to spread hate, and that with a slight change to the coding, the propagation of such fake images can be dealt with.

Coding as a Safeguard

First, the researchers say that images that could be used or misused as propaganda can be protected by coding: if this code is applied to the images, their power cannot be misused. The method relies on this coding as a safeguard, using imperceptible adversarial perturbations designed to disrupt the operation of targeted diffusion models. Explaining this further, the research team said that the propagation of fake images can be thwarted through this coding.

An Encoder Attack Would Essentially Stop the Spread

Experts say that such an encoder attack would essentially stop, or at least slow down, the spread of propaganda and hateful content, and help stop the spread of fake images that look real.
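The article does not reproduce the researchers' code, but the encoder attack described above can be illustrated with a short sketch. Stated as an assumption about how such "immunizing" perturbations are typically built, rather than as the MIT team's actual implementation, the idea is to nudge a real photo's pixels within an invisible budget so that a diffusion model's image encoder maps it to a useless latent; an editing pipeline that relies on that encoder then produces a garbled result instead of a convincing fake. The model name, the perturbation budget, and the function names below are illustrative.

```python
# Minimal sketch of an encoder attack that "immunizes" an image against
# diffusion-based editing. All names and hyperparameters are illustrative
# assumptions, not the MIT team's code.
import torch
from diffusers import AutoencoderKL

def immunize(image: torch.Tensor, eps: float = 8 / 255,
             steps: int = 100, step_size: float = 1 / 255) -> torch.Tensor:
    """Add a barely visible perturbation (shape: 1 x 3 x H x W, values in [0, 1])
    that drags the image's latent toward a meaningless target."""
    vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").eval()
    vae.requires_grad_(False)

    def encode(x):
        # The Stable Diffusion VAE expects pixels scaled to [-1, 1].
        return vae.encode(x * 2 - 1).latent_dist.mean

    target_latent = torch.zeros_like(encode(image))  # empty "gray" target
    delta = torch.zeros_like(image, requires_grad=True)

    for _ in range(steps):
        loss = torch.nn.functional.mse_loss(
            encode((image + delta).clamp(0, 1)), target_latent)
        loss.backward()
        with torch.no_grad():
            delta -= step_size * delta.grad.sign()  # PGD step toward the target
            delta.clamp_(-eps, eps)                 # keep the change imperceptible
        delta.grad = None

    return (image + delta).clamp(0, 1).detach()
```

An image immunized this way should look unchanged to a viewer, but an image-to-image diffusion editor that starts from its latent tends to produce an obviously broken result, which is the "meaningful distortion" described above.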

Propaganda
Propaganda destroys the human personality by making societies prone to destruction and hatred. Deepfake is a form of mainstream propaganda today.

AI Developers and MIT Researchers

But for this coding to work, expert AI developers must join the effort. The MIT researchers further said that no single person can be entrusted with the task.

Abundance of Data

The abundance of data available on the Internet has increased the learning potential of AI systems, enabling them to learn from it and improve. But there are fears that, during this learning process, the data is being misused.

Deepfakes Look More Real

Everyone knows that deepfakes look remarkably real, so real that the difference between real and fake cannot be told reliably. A contest, the Deepfake Detection Challenge, was held to address this.

The Deepfake Detection Challenge

In the challenge, detection algorithms were developed to tell real from fake. It was joined by Microsoft, Facebook, AWS, and the Partnership on AI's Media Integrity Steering Committee. The challenge was aimed at motivating researchers and testing how successful their efforts were. Contest winners were awarded $1,000,000.
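The winning DFDC systems were considerably more elaborate, but the general shape of a detection entry can be sketched as a frame-level binary classifier. Everything below, including the model choice, the folder layout, and the hyperparameters, is an illustrative assumption rather than code from the competition.

```python
# Minimal sketch of a DFDC-style frame classifier, assuming video frames
# have already been extracted into frames/real/ and frames/fake/
# (a hypothetical layout). Model and hyperparameters are illustrative.
import torch
from torch import nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

dataset = datasets.ImageFolder("frames", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)   # two classes: real, fake
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for frames, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(frames), labels)  # per-frame real/fake loss
        loss.backward()
        optimizer.step()
```

A real entry would crop faces, augment the data, and aggregate per-frame scores over a whole video, but the core training loop looks roughly like this.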


Purpose of the Competition

The purpose of this competition was to raise public awareness of deepfake technology: how ordinary people can avoid its propaganda, how the technology can be understood, and what kind of criticism is made of it.

The DFDC Competition Was Conceived to Expose Deepfakes

The DFDC competition was conceived to expose people to the best-quality deepfake and real videos, and the contest was hosted by a company named DetectFake.

The Purpose of DetectFake

The purpose of DetectFake was to raise awareness of deepfakes, to provide an opportunity to learn more, and to see how comfortably the general public can tell the difference between a fake and the real thing.

Propaganda in the Media

When propaganda enters the media, there is no indication or sign that allows you to know for certain which content is real and which is fake.

How to Identify a Deepfake

We can identify a deepfake by some very small signs:

1. Look carefully at the face, because high-quality deepfake AI mainly changes the face, so concentrate on it (a small helper script for this step is sketched after this list).

2. Look carefully at the cheeks and forehead. Deepfakes usually fail to hide the effects of age: the apparent age of the skin does not match the hair and eyes.

3. Check which shadows fall on the eyes and eyebrows. The shadows that are visible in real pictures often do not appear in a deepfake; a deepfake can never make a scene look completely natural.

4. If there is glass or a mirror in the picture, pay special attention to it. Does it have glare? Or too much glare? The angle of light changes as someone moves, so check whether the angle of the glare changes with the movement.

5. Pay attention to whether the image shows more or less facial hair than expected. If there is a mustache or beard, consider whether it is real or a deepfake product. A deepfake that tries to remove or change existing facial hair usually fails to do so completely and leaves some deficiency behind, so focus on that.
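Step 1 above, looking closely at the face, is easier with a zoomed-in crop. Below is a minimal helper sketch, assuming OpenCV is installed and using a hypothetical input file name; it only extracts and enlarges faces so a person can apply the checks in this list, and it does not itself decide what is real or fake.

```python
# Minimal sketch: crop and enlarge faces so skin texture, shadows, glare,
# and facial hair can be inspected closely. Uses OpenCV's bundled Haar
# cascade; the file names below are illustrative assumptions.
import cv2

def crop_faces(path: str, out_prefix: str = "face", zoom: int = 4) -> int:
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    image = cv2.imread(path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for i, (x, y, w, h) in enumerate(faces):
        face = image[y:y + h, x:x + w]
        # Enlarge the crop so fine detail around eyes, skin and hair is visible.
        face = cv2.resize(face, (w * zoom, h * zoom),
                          interpolation=cv2.INTER_CUBIC)
        cv2.imwrite(f"{out_prefix}_{i}.png", face)
    return len(faces)

if __name__ == "__main__":
    print(crop_faces("suspect_image.jpg"), "face(s) saved for inspection")
```

Running it on a suspect photo saves each detected face as an enlarged PNG, making the signs listed above easier to examine.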

Identification Signs of a Deepfake

The above are some of the signs that can guide people as they navigate deepfake propaganda and help them distinguish genuine information from fake. A high-quality deepfake is difficult to spot even with these identification signs, but with some attention it can be done. People should also know that a defensive coding system like the one developed by the MIT researchers can prevent deepfakes, and thereby prevent viral propaganda.

