condensedcrab 4 months ago

I think the main issue for research in experimental fields in academia is still the throughput of running experiments.

Most grad students and professors can cook up reasonable hypotheses, but someone still has to run all of the experiments. The phase space is generally too wide to explore every research direction, so human intuition is still needed to figure out which ones will pan out, keep the students fed, and bring in the next grant.

nritchie 4 months ago

As a scientist, I like to believe that my greatest contribution is my fresh ideas. Maybe AI will be able to do this better than I can. If so, I'm not sure that I really want to be a scientist anymore. TBH, I'm in science because I enjoy the process of thinking, coming up with ideas and testing them. Eliminate any part and it just becomes another tedious job.

  • jszymborski 4 months ago

    As Sir Martyn Poliakoff likes to say (greatly paraphrasing), your mind is like a vegetable soup, and stirring it around surfaces all sorts of interesting bits.

    I see this as a great way of stirring things up! Like, once I asked ChatGPT about some ideas I had, and it hallucinated something that gave me a great idea.

  • mnky9800n 4 months ago

    The truth is that as scientific problems become sufficiently complex, we will need to trust a computer and what it tells us. That computer acts as an abstraction layer for the knowledge, in the same way that Python abstracts away memory management, and this will allow us to conceive ideas that were otherwise impossible.

    To put it a different way, it used to be that a department of physicists or glaciologists or hydrologists or whatever was happy enough talking to each other. Nowadays people are rather interdisciplinary and often talking across departments to come up with new ideas. Eventually the complexity of a new idea will be such that you cannot conceive it without the help of a computer, and at that point you will have to trust the output of the computer the same way you trusted your colleagues.

    I suppose we already sort of do that with many computational tools. Like, you trust the software you use to analyse data; people use Python because the data science ecosystem is well developed and well trusted. That's the problem with the current crop of LLMs: none of them are able to build trust as a knowledge abstraction layer. But when they do, they will become very useful, because you can ask them to do all sorts of things.

    • imtringued 4 months ago

      My problem with AI nutjobs like you is that you can think of a similar scenario with classical algorithms/technology with almost no discernible changes.

      There is no way that humans fully understand the inner workings of a camera sensor, but they trust the output of a camera recording to make decisions.

      The point is that you don't have to trust the output of the camera the same way you trust a colleague. You don't need to anthropomorphise the camera to make use of it.

      Nothing changes. There is nothing special about LLMs. There is no need to worship them or think of them as anything other than tools.

      • mnky9800n 4 months ago

        I think if you scroll my comments you will find I'm mostly negative on AI these days, so it's strange that you're name-calling me, but it's okay. I am trying to point out the same thing you are: tools abstract expertise, and we should accept that this has always been the case. So instead of seeing these as strange anthropomorphized objects, we should simply ask ourselves what new problems we can solve with what new tools. And in order to do that we need to build trust in those new tools.

  • spking 4 months ago

    AI can "play" the drums better and more precisely than me but I still enjoy the act of playing the drums. AI can or will probably drive a Formula 1 car better than Max Verstappen but he will still enjoy being a driver. I don't see why we can't coexist with machine counterparts.

    • mynameajeff 4 months ago

      From all the attempts I've seen in the past few years, we haven't been able to get AI to successfully complete a lap of wheel-to-wheel racing at full speed on a real track [1], so I'm not sure they're exactly competing for a seat as a pay driver at Haas, let alone racing for Red Bull, anytime soon.

      [1] https://youtu.be/TPzBH-7ckO0

  • numba888 4 months ago

    > If so, I'm not sure that I really want to be a scientist anymore

    I'm not sure you have many interesting options.

hnuser123456 4 months ago

We need continuous self-learning and improvement at the level of post-training weight updates.

My endeavors in this area show that it is very difficult for an LLM to build up and integrate novel knowledge through conversation. Any new developments that aren't in context are completely forgotten unless they are continuously kept in context, along with enough reassuring text that the novel knowledge is accurate when it goes against the LLM's training.

Is there any research into 1B-2B sized models that can be continuously finetuned on an RTX 3090?
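
A minimal sketch of the kind of loop I mean, assuming the Hugging Face transformers/peft/datasets stack and a 24 GB card: train small LoRA adapters on text containing the new facts while the base weights stay frozen. The model name and hyperparameters are placeholders, not recommendations.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments
    from peft import LoraConfig, get_peft_model
    from datasets import Dataset

    model_name = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # any ~1B causal LM fits on 24 GB
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

    # Only small low-rank adapters are trained; the base weights stay frozen,
    # which keeps memory low enough to run update passes repeatedly.
    lora = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
                      task_type="CAUSAL_LM")
    model = get_peft_model(model, lora)

    # "Novel knowledge" harvested from recent conversations, as plain text.
    new_facts = ["<conversation transcript containing the novel finding>"]

    def tokenize(example):
        out = tokenizer(example["text"], truncation=True, max_length=512)
        out["labels"] = out["input_ids"].copy()  # causal LM: predict the same tokens
        return out

    ds = Dataset.from_dict({"text": new_facts}).map(tokenize, remove_columns=["text"])

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="adapter-ckpt", per_device_train_batch_size=1,
                               num_train_epochs=3, learning_rate=1e-4, logging_steps=1),
        train_dataset=ds,
    )
    trainer.train()                        # one small update pass; repeat as new facts arrive
    model.save_pretrained("adapter-ckpt")  # only the adapter weights are written out

Whether a few passes like this actually integrate the knowledge, rather than just memorizing the phrasing, is of course the open question.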

  • j_bum 4 months ago

    Excellent point, and I agree with you.

    I'm curious: can RAG solve any part of this problem?

    • imtringued 4 months ago

      RAG helps you retrieve new information, but what GP wants is for the model itself to incorporate this knowledge into its parameters as it consumes it.
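
      A quick sketch of the distinction, assuming sentence-transformers (the names and data are made up): the retrieved note only ever lives in the prompt, so nothing persists once it falls out of context.

        from sentence_transformers import SentenceTransformer, util

        embedder = SentenceTransformer("all-MiniLM-L6-v2")
        notes = [
            "2024-11-03: the new assay only works below pH 6.5.",
            "2024-11-10: batch 42 of the reagent was contaminated.",
        ]
        note_vecs = embedder.encode(notes, convert_to_tensor=True)

        def build_prompt(question, k=1):
            # Retrieve the k most relevant notes and stuff them into the context window.
            q_vec = embedder.encode(question, convert_to_tensor=True)
            hits = util.semantic_search(q_vec, note_vecs, top_k=k)[0]
            context = "\n".join(notes[h["corpus_id"]] for h in hits)
            return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

        # The LLM "knows" this only while it sits in the prompt; its weights never change.
        print(build_prompt("Why did the assay fail at pH 7?"))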

    • numba888 4 months ago

      RAG can be part of the solution, which is unlikely to be that simple.

randcraw 4 months ago

You know, at this point I've seen so much self-promotional material from Google, in the form of bold claims and titles in research papers, that I no longer take immediate interest in them. Instead I wait for other, more trustworthy sources to vet whether there's actually something of substance. This is a good example: its value can only be assessed by people who don't benefit from pushing a product.

absolutelastone 4 months ago

Sure is a lot of "Towards..." in titles lately. That tells me what wasn't accomplished rather than what was.