• sudoer777@lemmy.ml · +3 · 9 hours ago

      I recently saw a Hacker News article about people running some DeepSeek V4 Flash variant on macOS, which is a surprisingly capable model for its size.

    • ugjka@lemmy.ugjka.net · +13/−3 · 1 day ago

      Self-hosting an LLM ain’t the same thing as self-hosting Nextcloud for your docs and calendar. Yes, there are small models, but their output is laughable.

      • MagicShel@lemmy.zip · +30/−3 · 1 day ago

        Small models are improving and becoming more capable. The quality of local LLMs isn’t really bounded, but their context size is bounded by hardware. So local LLMs can be very capable for small, self-contained tasks.

        qwen 3.6 35b running locally:

        Write a Python script that can pull weather data from public sources and provide the high and low temperature for the current day in Miami, FL.
        

        Single shot. No tool/internet use, so it didn’t pull this script from elsewhere.

        import requests
        
        def get_miami_weather():
            # Miami, FL coordinates
            LATITUDE = 25.7617
            LONGITUDE = -80.1918
        
            # Open-Meteo API URL (free, no API key required)
            url = (
                f"https://api.open-meteo.com/v1/forecast?"
                f"latitude={LATITUDE}&longitude={LONGITUDE}"
                f"&daily=temperature_2m_max,temperature_2m_min"
                f"&timezone=auto"
            )
        
            try:
                response = requests.get(url, timeout=10)
                response.raise_for_status()  # Raises error for 4xx/5xx HTTP status codes
                data = response.json()
        
                # Index 0 corresponds to the current day
                high_c = data["daily"]["temperature_2m_max"][0]
                low_c = data["daily"]["temperature_2m_min"][0]
        
                # Convert to Fahrenheit (commonly used in the US)
                high_f = (high_c * 9/5) + 32
                low_f = (low_c * 9/5) + 32
        
                print("🌤️  Miami, FL Weather for Today:")
                print(f"High: {high_f:.1f}°F ({high_c:.1f}°C)")
                print(f"Low:  {low_f:.1f}°F ({low_c:.1f}°C)")
        
            except requests.exceptions.HTTPError as http_err:
                print(f"❌ HTTP error occurred: {http_err}")
            except requests.exceptions.ConnectionError:
                print("❌ Error: Could not connect to the weather API.")
            except requests.exceptions.Timeout:
                print("❌ Error: Request timed out.")
            except requests.exceptions.RequestException as err:
                print(f"❌ An error occurred: {err}")
            except KeyError as key_err:
                print(f"❌ Error parsing data: Missing expected key {key_err}")
            except Exception as err:
                print(f"❌ Unexpected error: {err}")
        
        if __name__ == "__main__":
            get_miami_weather()
        

        Output:

        % python3 ./m_weather.py
        🌤️  Miami, FL Weather for Today:
        High: 88.0°F (31.1°C)
        Low:  73.2°F (22.9°C)
        

        I tried to keep the size and scope within something that would reasonably fit in a comment. Looks pretty decent to me, but I can’t write Python myself. Never learned. I double-checked the LAT & LON of Miami, and it’s spot on.

        It did take 47 seconds, while a cloud LLM would probably take 5 or less.

        All I’m saying is local LLMs aren’t garbage, and they’re getting better all the time.
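
        If you want to script this kind of single-shot prompt against your own local model, here’s a minimal sketch. It assumes an Ollama-style server on the default port; the endpoint, port, and model tag are placeholders for whatever your local runner actually exposes, not a description of my exact setup.

        import requests

        def ask_local_llm(prompt, model="qwen-coder"):
            # Ollama-style generate endpoint; stream=False returns one JSON object
            resp = requests.post(
                "http://localhost:11434/api/generate",
                json={"model": model, "prompt": prompt, "stream": False},
                timeout=300,  # local generation can take a while
            )
            resp.raise_for_status()
            return resp.json()["response"]

        if __name__ == "__main__":
            print(ask_local_llm(
                "Write a Python script that prints today's high and low "
                "temperature for Miami, FL."
            ))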

          • chilicheeselies@lemmy.world · +1 · 16 hours ago

            Gemma 4 e2b is pretty impressive for its size.

            This area of computing is improving very fast. I truly believe the future of this is locally installed open models.

        • Rimu@piefed.social · +7 · 1 day ago

          That’s interesting.

          How much RAM did it use while running?

          If you used a GPU, how much does it cost in today’s prices?

          • MagicShel@lemmy.zip · +12 · edited · 1 day ago

            It’s a MacBook Pro with 36 GB of RAM. I’m sure Macs have some kind of GPU, and I understand it somehow combines GPU memory with system RAM, but I don’t really know Mac hardware very well.

            It’s beefy for a laptop, but the desktop I built for myself several years ago had 32 GB of RAM and a GTX 1660, so I’m guessing they are similar in capability. I gave that to my daughter, so I can’t run a comparison right now.

            EDIT: After doing just a bit of research, I’ve learned the unified memory architecture that Macs use, while not ideal for many purposes, is actually a big advantage for running larger inference models. So it’s possible that this particular model wouldn’t run at all on my Linux box, or would run much slower, because the full model wouldn’t fit in the 6 GB of VRAM and would cause a lot of memory thrashing.

            • boonhet@sopuli.xyz · +3 · 19 hours ago

              Yup, you want memory accessible to the GPU for local AI. AMD Strix Point and Mac devices are popular options. A CPU can run LLMs, but very slowly. I’ve got 32 GB of RAM and 8 GB of VRAM, and it’s borderline useless for models that don’t fit in the VRAM.

            • SabinStargem@lemmy.today · +4 · 1 day ago

              You can use something like KoboldCPP on Linux, which lets you combine RAM and VRAM to run a model. O’course, not as fast when compared to pure VRAM or the Mac approach, but it is an option. I use my 128 GB of RAM with some GPUs for running models.
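
              To show what that split looks like in code, here’s a rough sketch using the llama-cpp-python bindings rather than KoboldCPP itself (KoboldCPP exposes the same idea through its launcher options). The model path and layer count are placeholders you’d tune for your own hardware:

              from llama_cpp import Llama

              # n_gpu_layers controls how many transformer layers live in VRAM;
              # everything that doesn't fit stays in system RAM and runs on the CPU.
              llm = Llama(
                  model_path="./models/some-model-Q4_K_M.gguf",  # placeholder path
                  n_gpu_layers=20,  # raise until VRAM is full, leave the rest in RAM
                  n_ctx=8192,       # the context window costs memory too
              )

              out = llm("Explain unified memory in one short paragraph.", max_tokens=200)
              print(out["choices"][0]["text"])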

                • SabinStargem@lemmy.today · +1 · 11 hours ago

                  Speed depends on how much of the model is in VRAM, and on the dense/MoE architecture of that model. The RAM’s benefit is more about having the ability to run the model in the first place. In any case, a dense Qwen3.6 27B would take up about 27-33 GB-ish of memory, plus whatever context size you set.

                  Upcoming implementations of MTP will increase the size of models, but in exchange they will also run faster: about a 30%-ish boost for dense models, a bit less for Mixture-of-Experts varieties, from the looks of it.
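
                  As a back-of-the-envelope check on where numbers like that come from (rough figures only; real quantization formats and KV-cache layouts vary):

                  def model_footprint_gb(params_billion, bits_per_weight, kv_cache_gb=2.0):
                      # weights: params (in billions) * bits per weight / 8 = gigabytes,
                      # since the billions-of-params and bytes-to-GB factors cancel out;
                      # kv_cache_gb is a rough allowance for the context window
                      weights_gb = params_billion * bits_per_weight / 8
                      return weights_gb + kv_cache_gb

                  # a dense 27B model:
                  print(model_footprint_gb(27, 16))  # ~56 GB at fp16
                  print(model_footprint_gb(27, 8))   # ~29 GB at 8-bit
                  print(model_footprint_gb(27, 4))   # ~15.5 GB at 4-bit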

                  • boonhet@sopuli.xyz · +1 · 11 hours ago

                    When I tried running a ~14 GB distillation of whatever model it was I was trying to run, it came out super slow at what I believe was a 50/50 GPU-to-CPU split. It got so slow that it was just more bearable to run a 7B or 8B model that would actually fit entirely in VRAM and run entirely on the GPU. That also made the rest of my computer usage more bearable.

                    To be fair, I do only have a 6-core, 6-thread CPU. It shot up to 600% usage, so even the DDR4 memory wasn’t really bottlenecking it. I suspect a 9950X would fare a lot better.

        • humanspiral@lemmy.ca · +2/−1 · 1 day ago

          qwen 3.6 is awesome, but 48-64 GB is still real money these days (though 32 GB on a dedicated separate machine is also more money). It benchmarks at Sonnet 3.5 to Opus 4.5 level. And the online cost metrics for the 27B and 35B are way off considering the overall usefulness of a 48-64 GB machine (inclusive of GPU VRAM for the 35B), which even in single, non-batching use could displace $5-$7/day of use.

          Local costs are much lower than the online costs in the linked chart, but if you go online, there are better models.
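
          To make that displacement argument concrete, a toy payback calculation (the machine price and the daily cloud spend it replaces are assumptions for illustration, not quotes):

          hardware_cost = 3000.0   # assumed price of a 48-64 GB machine, in dollars
          displaced_per_day = 6.0  # assumed $5-7/day of cloud usage it replaces

          payback_days = hardware_cost / displaced_per_day
          print(f"Pays for itself in roughly {payback_days:.0f} days "
                f"(~{payback_days / 30:.0f} months) of steady use")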

          • chilicheeselies@lemmy.world · +2 · 16 hours ago

            Depends on whether you even need a better model, though. Whether you can run a good-enough model is what matters for the most part.

    • corsicanguppy@lemmy.ca · +4/−13 · 1 day ago

      You see how that’s tangential to what you’re replying to?

      AI is evil

      LOCAL AI is not all evil

      Computers are expensive

      Your point is completely valid, but in another discussion.

      • Fondots@lemmy.world · +31/−1 · edited · 1 day ago

        Sorry, but I think the point about local AI not necessarily being evil is the tangent here.

        The OP is about motherboard shortages, which are being driven by the big AI companies and are making hardware unaffordable for normal users.

        The top-level reply to that is about how that’s bad because it removes people’s ability to be in control of their own computing.

        Then someone comes in, saying “yeah, but you can host your own AI so that it’s not evil, so not all AI is bad.”

        Then someone points out that you can only host your own AI if you can afford the hardware to do so, which, as the OP and the comment you replied to pointed out, is getting really hard to do.