Facebook is developing an AI to automatically monitor and purge abusive livestream channels, a task it expects the system to handle far more effectively than human review.
Facebook Live has been available for quite some time and is used by the online community as a new way of interacting with friends and family on the social network. However, quite a few Facebook accounts use the feature to livestream objectionable content. Although there is a report button that lets users petition Facebook to remove offensive livestreams, this has not been enough to keep the platform clean.
In the past, Facebook introduced a moderation system that combines human reviewers with artificial intelligence (AI) developed in-house to detect objectionable videos and remove them more quickly. Facebook has now announced that it will develop its own AI without help from other technology companies, and that the system will eventually replace all human moderators.
In fact, quite a few existing platforms can already classify content to detect objectionable material and remove it. Facebook plans to go further by also developing AI hardware, essentially a dedicated artificial-intelligence chip.
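As a rough illustration of the kind of automated classification such platforms perform, the sketch below shows a toy rule-based filter. This is a hypothetical example, not Facebook's actual system; real moderation pipelines use trained neural models over video, audio, and text, and the blocklist terms and threshold here are invented for illustration.

```python
# Toy sketch of automated content screening (hypothetical, NOT Facebook's system).
# It scores a piece of text against a small blocklist and flags it when the
# proportion of matching words crosses a threshold.

BLOCKLIST = {"violence", "gore", "attack"}  # illustrative terms only


def objectionable_score(text: str) -> float:
    """Return the fraction of words that match the blocklist (0.0 to 1.0)."""
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,!?") in BLOCKLIST)
    return hits / len(words)


def should_flag(text: str, threshold: float = 0.2) -> bool:
    """Flag content for review or removal when its score reaches the threshold."""
    return objectionable_score(text) >= threshold


print(should_flag("graphic violence in this stream"))  # True
print(should_flag("cooking with friends tonight"))     # False
```

A production system would of course replace the blocklist with a learned classifier and operate on video frames and audio, but the flag-above-threshold structure is the same basic shape.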
According to Bloomberg, Facebook's chip would consume fewer computing resources while improving hardware performance. Currently, the AI detects objectionable content within an average of 10 minutes of posting, even when no user report has been filed. Facebook hopes to improve the system until it can fully replace human moderators at higher throughput.
The concern among technology experts is how Facebook will train its AI to distinguish objectionable content accurately. For example, social activists may post articles or clips about violence or crime in order to warn people or expose the truth, and the AI may mistakenly remove this content as well.
Consider the case of Philando Castile, a young man who was shot dead by police during a traffic stop in 2016; the entire incident was streamed live on Facebook by Castile's girlfriend. Facebook's moderators were left rather embarrassed by the case: they initially deleted the livestream clip as offensive, then restored it with a content warning, and finally deleted it again. Similar cases may continue to occur, and Facebook may need to train its AI carefully before allowing the system to monitor sensitive content on its own in place of humans.

Source: Tàng kiếm sơn trang