
[Repost] "AI will take over the world, but it will not conquer humanity": the latest interview with Turing Award winner Yann LeCun

Posted 2024-03-03 20:46:26

    Steven Levy: In a recent talk you said, "machine learning is bad." Why would a pioneer like you say something like that?

    Yann LeCun: Machine learning is great. But the idea that all we need to do is scale up existing techniques to reach human-level AI? To get machines to learn as efficiently as humans and animals do, we are still missing something important, and we don't yet know what it is.

    I'm not attacking these systems, nor am I saying they are useless; I have worked on them throughout my career. But we have to temper the excitement of people who think that all we need is more scale and we will soon have human-level intelligence. It is definitely not like that.

    You feel you have a responsibility to point this out.

    That's right. AI will bring many benefits to the world, but some people are exploiting the public's fear of this technology. We have to be careful not to scare people off. This is a mistake we have made with other world-changing technologies. Take the invention of printing in the 15th century: the Catholic Church hated it, right? People could read the Bible themselves without going through a priest. Pretty much all of the establishment opposed the widespread use of printing, because it would change the power structure. They were right; it led to 200 years of religious conflict. But it also brought about the Enlightenment. [Note: historians might point out that the Church actually made use of printing for its own ends, but LeCun sees it this way anyway.]

    Why are so many prominent people in tech sounding the alarm about AI?

    Some are seeking attention, and some are blind to what is actually happening today. They don't realize that AI can actually reduce hate speech and misinformation. At Meta, we have made great progress on this with AI. Five years ago, of the hate speech removed from the platform, about 20 to 25 percent was taken down by AI systems before anyone saw it. Last year, that figure reached 95 percent.

    What do you think of chatbots? Are they good enough to replace human jobs?

    They are amazing, and there has been huge progress. To some extent they will democratize creativity. They can produce very fluent text with decent style, but it is boring, and what they come up with may be completely made up.

    Meta seems intent on developing these technologies and building them into its products.

    In the long run, all of our interactions with the digital world, and to some extent our interactions with one another, will be mediated by AI systems. We have to experiment with things that are not yet powerful enough to do this, but that is where we are headed: systems that help people create things in their daily lives, whether it's text, real-time translation, and so on.

    How does Mark Zuckerberg drive the AI work at Meta?

    Mark is very invested. Earlier this year I talked with him and told him what I just told you: that in the future all of our interactions will be mediated by AI. ChatGPT showed us that AI could power new products sooner than we expected, and the public's fascination with AI features far exceeded our imagination. So Mark decided to create a product division focused on generative AI.

    Why did Meta decide to share the LLaMA code with others as open source?

    When you have an open platform that many people can contribute to, progress becomes faster. The systems end up safer and better performing. Imagine a future in which all of our interactions with the digital world are mediated by AI systems. You do not want those AI systems controlled by a small number of companies on the US West Coast. Maybe Americans don't care; maybe the US government doesn't care. But I can tell you right now that in Europe they will not like it. They will say, "Fine, it speaks English correctly. But what about French? What about German? Hungarian? Dutch or any other language? How did you train it? How does it reflect our culture?"

    It also seems like a good way for startups to take your product and use it to beat their competitors.

    We don't need to undercut anyone; this is the direction the world is moving in. AI must be open source because, when a platform becomes an essential part of the fabric of communication, we need it to be common infrastructure.

    One company that does not agree with this is OpenAI, and you don't seem to be a fan.

    From the start, they imagined creating a nonprofit to do AI research as a counterweight to companies like Google and Meta. I thought that was a wrong idea, and it turns out I was right. OpenAI is no longer open. Meta has always been open and still is. The second thing is that unless you have a way to fund it, it is hard to do substantial AI research. In the end they had to set up a for-profit arm and take investment from Microsoft. So although OpenAI retains some independence, they are now essentially a research institution working for Microsoft. The third point is their belief that artificial general intelligence (AGI) is just around the corner and that they will develop it before anyone else. They won't be able to.

    Sam Altman was ousted as CEO and then returned, reporting to a different board. What do you think of that OpenAI drama? Do you think it has affected the research community or the industry?

    I don't think the research community pays much attention to OpenAI anymore, because they don't publish papers and don't disclose what they are doing. Some of my former colleagues and students work at OpenAI, and we felt bad for them because of the instability there. Research depends on a stable environment, and when dramatic events like this happen, people hesitate. What also matters to people doing research is openness, and OpenAI is no longer open. So in that sense OpenAI has changed; they are no longer seen as a contributor to the research community. That contribution now comes from the open platforms.

    The episode has been described as a victory for AI "accelerationism," which is the exact opposite of "doomerism." I know you are not a doomer, but are you an accelerationist?

    No, I don't like these labels. I don't belong to any ideological camp. I'm very careful not to push that kind of thinking to the extreme, because it is too easy to get trapped in a loop and do stupid things.

    The European Union recently released a set of AI regulations that largely exempts open source models. How will this affect Meta and other companies?

    It affects Meta to some extent, but we have the resources to comply with whatever regulations come. It matters much more for countries that do not have the resources to build AI systems from scratch. They can rely on open source platforms to build AI systems that fit their culture, language, and interests. In the near future, most of our interactions with the digital world will be mediated by AI systems. You don't want those controlled by a small number of companies in California.

    Were you involved in helping the regulators reach this conclusion?

    I have talked with regulators, though not with the EU directly. I have been in contact with governments, especially the French government, and indirectly with governments in other countries. Basically, they do not want their citizens' digital diet to be controlled by a small number of people, and the French government picked up on this idea very early. Unfortunately, I have not spoken with people at the EU level; they are more influenced by doomsday predictions and want to regulate everything to prevent the catastrophes they imagine. But that was opposed by the French, German, and Italian governments, which argued that the EU must make special provisions for open source platforms.

    But isn't open source AI genuinely hard to control and regulate?

    There are already regulations for products where safety really matters. For example, if you want to use AI to design a new drug, there are already regulations to ensure the product is safe, and I think that makes sense. The question people are debating is whether it makes sense to regulate AI research and development. I don't think it does.

    Couldn't anyone take a sophisticated open source system released by a big company and use it to take over the world? With access to the source code and weights, terrorists or scammers could give AI systems destructive capabilities.

    They would need access to 2,000 GPUs somewhere nobody can find them, plus enough money and talent to actually do the work.

    I imagine they will eventually figure out how to make their own AI chips.

    That's right, but those chips will be several years behind the state of the art. That is how world history goes: whenever technology progresses, you cannot keep bad actors from eventually getting it, and then it is good AI against bad AI. The way to stay ahead is to progress faster, and the way to progress faster is to open up the research so that more people can contribute.

    How do you define AGI?

    I don't like the term AGI, because there is no such thing as general intelligence. Intelligence is not a linear thing you can measure; different types of intelligent entities have different sets of skills.

    But once computers reach human-level intelligence, they won't stop there. With deep knowledge, machine-level math abilities, and better algorithms, they will create superintelligence, right?

    Yes, there is no question that machines will eventually be smarter than humans. We don't know how long it will take; it could be years, it could be a century.

    At that point, do we have to be on our guard?

    No. We will all have AI assistants; it will be like working with a staff of super-smart people, except they won't be human. People feel threatened by this, but I think we should be excited. What excites me most is working with people smarter than me, because it amplifies your own abilities.

    But if computers become superintelligent, why would they still need us?

    There is no reason to believe that just because AI systems become intelligent, they will want to replace humans. It is a big mistake to think AI systems will have the same motivations as humans. They won't, because we are the ones who design them.

    What if humans don't set those goals, and a superintelligent system ends up pursuing some objective that harms humans? Like the philosopher Nick Bostrom's example of a system designed to make paper clips at all costs, which takes over the whole world in order to make ever more paper clips.

    You would have to be remarkably stupid to build a system and leave out the guardrails. It would be like building a car with a 1,000-horsepower engine and no brakes. Putting objectives into an AI system is the only way to make it controllable and safe. I call this objective-driven AI. It is a new architecture, and we don't have any working examples of it yet.

    Is that what you are working on now?

    Yes. The idea is that the machine has objectives it must satisfy, and it cannot produce anything that does not satisfy those objectives. Those objectives can include guardrails and other constraints that prevent dangerous behavior. That is a way to make AI systems safe.
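
    To make the "objectives as hard constraints" idea concrete, here is a minimal Python sketch. It is an illustrative toy under assumed names (choose_output, Guardrail, task_score are all hypothetical), not LeCun's or Meta's actual objective-driven architecture: candidates that violate any guardrail are rejected outright, and the task objective only ranks what remains.

```python
# Toy sketch of objective-driven selection (hypothetical, for illustration only):
# guardrails are hard constraints the output must satisfy; the task objective
# is a soft score used only to rank the admissible candidates.
from typing import Callable, Iterable, Optional

Candidate = str
Guardrail = Callable[[Candidate], bool]   # must return True for an output to be allowed
TaskScore = Callable[[Candidate], float]  # higher is better among allowed outputs


def choose_output(candidates: Iterable[Candidate],
                  guardrails: list[Guardrail],
                  task_score: TaskScore) -> Optional[Candidate]:
    """Return the best candidate that satisfies every guardrail, or None."""
    admissible = [c for c in candidates if all(g(c) for g in guardrails)]
    if not admissible:
        return None  # refuse rather than violate a guardrail
    return max(admissible, key=task_score)


if __name__ == "__main__":
    # Hypothetical usage: one guardrail banning certain content, plus a
    # stand-in task score (length here, in place of a real objective).
    banned = {"build a weapon"}
    guardrails = [lambda c: not any(b in c.lower() for b in banned)]
    task_score = lambda c: float(len(c))
    print(choose_output(["short answer", "a longer, more helpful answer"],
                        guardrails, task_score))
```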

    Do you think you will ever regret the consequences of the AI you have helped bring about?

    If I thought that were the case, I would not be doing what I'm doing.

    You are a jazz fan. Can anything produced by AI match the thrilling creativity that, so far, only humans have produced? Can it create work with soul?

    The answer is complicated. Yes, AI systems will eventually be able to create music, visual art, and other works whose technical quality matches that of humans, and perhaps exceeds it. But an AI system does not have the capacity for improvised music, which relies on the communication of human emotion. At least not yet, which is why jazz has to be heard live.

    You haven't answered whether that music would have soul.

    You can already get music that has no soul at all. It is the background music played in restaurants, largely produced by machines, and there is a market for it.

    But I'm talking about the peak of art. If I played you a recording as good as Charlie Parker's best and then told you it was generated by AI, would you feel cheated?

    Yes and no. Yes, because music is not just an auditory experience; much of it is a cultural one, an admiration for the performer. Your example is like Milli Vanilli: authenticity is an essential part of the artistic experience.

    But if AI systems can match elite artistic achievement and you don't know the story behind a piece, the market will be flooded with Charlie Parker-level music that we can't tell apart from the real thing.

    I don't think that's a problem. I will still buy the original, just as I will pay $300 for a handcrafted bowl even though I could spend $5 on something that looks similar, because there are hundreds of years of tradition behind it. We will still go hear our favorite jazz musicians play live, even though they can be imitated. The experience with an AI system is just not the same.

    You recently received an honor from President Macron. I can't read the French ...

    Chevalier de la Légion d'honneur. It was established by Napoleon. It is a bit like a British knighthood, but we had a revolution, so we don't call anyone "Sir."

    Does it come with any weaponry?

    No, there is no sword or anything like that. But holders of the honor can wear a small red ribbon on their lapel.

    Could an AI model ever win this award?

    Not anytime soon. In any case, I don't think it would be a good idea.

    Original Author: Steven Levy

    Original link:

    https://www.wired.com/story/artificial-intelligence-meta-yann-lecun-interview/

    Translated by: Yan Yimi

