<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>planets on Random thoughts of Peter &#39;CzP&#39; Czanik</title>
    <link>https://peter.czanik.hu/tags/planets/</link>
    <description>Recent content in planets on Random thoughts of Peter &#39;CzP&#39; Czanik</description>
    <generator>Hugo -- gohugo.io</generator>
    <language>en-us</language>
    <lastBuildDate>Thu, 16 Apr 2026 09:58:48 +0200</lastBuildDate><atom:link href="https://peter.czanik.hu/tags/planets/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Discussing RTO in my Genesi t-shirt...</title>
      <link>https://peter.czanik.hu/posts/genesi-rto/</link>
      <pubDate>Thu, 16 Apr 2026 09:58:48 +0200</pubDate>
      
      <guid>https://peter.czanik.hu/posts/genesi-rto/</guid>
      <description>&lt;p&gt;This Monday I talked to a couple of friends about work while wearing my Genesi t-shirt: a teacher going back to school after Spring break, and an IT guy explaining the nightmare of an RTO threat. I love coincidences :-) Why do I say that?&lt;/p&gt;
&lt;figure&gt;&lt;img src=&#34;https://peter.czanik.hu/images/genesi_t.jpg&#34;/&gt;&lt;figcaption&gt;
            &lt;h4&gt;Genesi t-shirt&lt;/h4&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;As I wrote a few years ago about &lt;a href=&#34;https://peter.czanik.hu/posts/working-from-home/&#34;&gt;working from home&lt;/a&gt;: &amp;ldquo;After graduating from university, I worked from home for a small US-based company. I never met my boss while working there and met only one of my colleagues at a conference in Brussels. I eventually met my boss some seven years later, when I gave a talk at a conference in Washington, D.C.&amp;rdquo; The company was &lt;a href=&#34;https://genesi.company/&#34;&gt;Genesi&lt;/a&gt;, and that is the work culture which still defines me. I received the t-shirt in the photo during my visit to Washington, D.C. Luckily, I&amp;rsquo;m still living mostly this way, visiting the office 1-2 times a week: &lt;a href=&#34;https://peter.czanik.hu/posts/working-hybrid/&#34;&gt;working hybrid&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Imagine the contrast I felt when I realized that I was talking to someone who works on a very strict, fixed schedule. For a teacher, vacation is only possible when there is no school, like Spring break in Hungary last week. There is a fixed schedule all year round. Compare that to my Genesi years: no regular meetings, communicating by e-mail &amp;amp; chat, and working whenever it was the right time for me: sometimes in the morning, other days during the night. It was fantastic, especially with small kids. I have been working flexible hours ever since, limited only by meetings.&lt;/p&gt;
&lt;p&gt;COVID made remote work less of a niche, sometimes even mandatory. Many people in IT started to work remotely, as most of our work does not require a fixed place or time. Online meetings became the norm, and teams are often not location-based anymore but scattered around the globe. As long as you have an Internet connection and a noise-canceling microphone, you can join a meeting from anywhere, even from the top of a mountain. It is easy to get used to this flexibility and very difficult to give it up.&lt;/p&gt;
&lt;p&gt;RTO became a periodic threat. It&amp;rsquo;s a lot cheaper to announce RTO and let people leave voluntarily than to lay them off. Quite a few friends write me every once in a while that they have to return to the office in a few weeks&amp;rsquo; time. Then, a few weeks later, they happily share: &amp;ldquo;they gave me an exemption, so they do not want me to leave&amp;rdquo;&amp;hellip;&lt;/p&gt;
&lt;p&gt;Wearing my Genesi t-shirt, all these problems feel so distant. I hope that it stays this way!&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>Streaming syslog-ng data to your lakehouse using OpenTelemetry</title>
      <link>https://peter.czanik.hu/other/syslog-ng-streaming-data-to-your-lakehouse-using-opentelemetry/</link>
      <pubDate>Wed, 15 Apr 2026 14:12:22 +0200</pubDate>
      
      <guid>https://peter.czanik.hu/other/syslog-ng-streaming-data-to-your-lakehouse-using-opentelemetry/</guid>
      <description>&lt;p&gt;Version 4.11.0 of syslog-ng contains contributions from Databricks related to OAuth2 authentication. Recently, they published a blog about how this enables their customers to send logs to their data lake using syslog-ng and the OpenTelemetry protocol.&lt;/p&gt;
&lt;p&gt;The syslog-ng project received two contributions from Databricks in the last weeks of 2025. The first one made the already existing OAuth2 support generic and extensible, so it can be used anywhere, not just with Microsoft Azure (but of course, Azure compatibility was preserved). The second pull request built on the first one and enabled OAuth2 support for gRPC-based destinations, like OpenTelemetry, Loki, BigQuery, PubSub, ClickHouse, etc. These changes were released as part of syslog-ng 4.11.0. You can read more about them in the release notes at &lt;a href=&#34;https://github.com/syslog-ng/syslog-ng/releases/tag/syslog-ng-4.11.0&#34;&gt;https://github.com/syslog-ng/syslog-ng/releases/tag/syslog-ng-4.11.0&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Besides an excellent overview of syslog-ng, the related Databricks blog also provides step-by-step instructions on how to use syslog-ng with their product. You can read it at: &lt;a href=&#34;https://community.databricks.com/t5/technical-blog/streaming-syslog-ng-data-to-your-lakehouse-powered-by-zerobus/ba-p/153979&#34;&gt;https://community.databricks.com/t5/technical-blog/streaming-syslog-ng-data-to-your-lakehouse-powered-by-zerobus/ba-p/153979&lt;/a&gt;&lt;/p&gt;
&lt;figure&gt;&lt;img src=&#34;https://peter.czanik.hu/images/syslog-ng-logo1.png&#34;/&gt;&lt;figcaption&gt;
            &lt;h4&gt;syslog-ng logo&lt;/h4&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;Originally published at &lt;a href=&#34;https://www.syslog-ng.com/community/b/blog/posts/streaming-syslog-ng-data-to-your-lakehouse-using-opentelemetry&#34;&gt;https://www.syslog-ng.com/community/b/blog/posts/streaming-syslog-ng-data-to-your-lakehouse-using-opentelemetry&lt;/a&gt;&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>My new toy: April 1 syslog-ng performance tests</title>
      <link>https://peter.czanik.hu/posts/new-toy-1st-of-april-syslog-ng-performance-tests/</link>
      <pubDate>Wed, 01 Apr 2026 12:35:28 +0200</pubDate>
      
      <guid>https://peter.czanik.hu/posts/new-toy-1st-of-april-syslog-ng-performance-tests/</guid>
      <description>&lt;p&gt;Almost 15 years ago, Balabit ran a campaign stating that syslog-ng could process 650k messages a second. Now I am happy to present 7 million EPS (events per second). Timing the announcement to April 1 is not a coincidence :-)&lt;/p&gt;
&lt;p&gt;While the 650k EPS measurement was true, it was misleading. This value was measured right after syslog-ng 3.2 introduced multi-threading, in a lab environment, under optimal circumstances, using synthetic log messages. However, there was no fine print explaining this, just the statement that syslog-ng could process 650k EPS. It was fixed after a while, but it took years to recover from the effects of this marketing campaign, and engineers still had a nervous breakdown ten years later when someone mentioned “650k”. Why? Because from that moment, everyone expected syslog-ng to collect logs at that message rate in a production environment with complex configurations, which was of course not the case.&lt;/p&gt;
&lt;p&gt;Fast-forward to today, I&amp;rsquo;m happy to share that:&lt;/p&gt;
&lt;h1 id=&#34;syslog-ng-can-collect-logs-at-7-million-eps&#34;&gt;syslog-ng can collect logs at 7 million EPS&lt;/h1&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Is this measurement value valid? Yes.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Does it apply to the real world? No.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Does it sound good? Definitely :-)&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;figure&gt;&lt;img src=&#34;https://peter.czanik.hu/images/hpmini/sngbench1.png&#34;/&gt;&lt;figcaption&gt;
            &lt;h4&gt;My latest syslog-ng benchmark results&lt;/h4&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;

&lt;h1 id=&#34;the-tool-sngbench&#34;&gt;The tool: sngbench&lt;/h1&gt;
&lt;p&gt;I love playing with various non-x86 systems. I have various ARM, POWER, and MIPS systems at home, and sometimes I access other architectures, like RISC-V, remotely. And, of course, not just different architectures, but different operating systems: various Linux distributions, macOS, FreeBSD, sometimes also other BSD variants. I&amp;rsquo;m a server guy, and for the past 15+ years: a syslog-ng guy. Sometimes I had access to an exotic system on the other side of the world for less than an hour, but I almost always tested syslog-ng.&lt;/p&gt;
&lt;p&gt;For many years I had a bunch of shell scripts and configs to benchmark syslog-ng performance. Not for real world production loads, but rather for comparing architectures and operating systems. I needed a script which could do measurements with minimal dependencies and do it quickly, in one go. This is how &lt;a href=&#34;https://github.com/czanik/sngbench&#34;&gt;sngbench&lt;/a&gt; was born, based on my previous ugly scripts. It has quite a few advantages and shortcomings:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Minimal dependencies: bash and syslog-ng&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;No complex setup: everything runs on the same host&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Network bandwidth is not a limiting factor&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;loggen&lt;/code&gt; and &lt;code&gt;syslog-ng&lt;/code&gt; processes are competing for resources&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Two bundled configurations: a performance-tuned one, and the default syslog-ng.conf from openSUSE with minimal modifications to add a TCP source&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;By default, very short (20 seconds) measurements, so disk I/O is not a limiting factor&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Many different test scenarios: from a single TCP connection up to 4 × 128 connections&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Of course this describes just the &amp;ldquo;factory defaults&amp;rdquo;. You can easily change the test scenarios and configurations too.&lt;/p&gt;
&lt;h1 id=&#34;how-i-reached-7-million-eps-and-why-it-is-not-relevant&#34;&gt;How I reached 7 million EPS, and why it is not relevant&lt;/h1&gt;
&lt;p&gt;I was testing syslog-ng code which was not yet even merged to the development branch. First, I tested these patches with various settings. Along the way I remembered that Splunk guidelines mention &lt;code&gt;so-rcvbuf&lt;/code&gt; tuning for TCP connections as well. Previously, I had only used that for optimizing UDP performance. Now I have done it for TCP. Wonders happened :-)&lt;/p&gt;
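&lt;p&gt;For illustration, here is roughly what such tuning looks like on a syslog-ng TCP source. This is only a sketch: the buffer size and port are example values, not my exact settings, and the kernel caps the buffer at net.core.rmem_max, which may need raising via sysctl.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;source s_tcp {
    network(
        transport(&amp;#34;tcp&amp;#34;)
        port(514)
        # kernel receive buffer size in bytes, example value
        so-rcvbuf(16777216)
    );
};
&lt;/code&gt;&lt;/pre&gt;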
&lt;p&gt;But, of course, the main question is: can you achieve this performance in production? TL;DR: No.&lt;/p&gt;
&lt;p&gt;My tests are run from localhost. Network bandwidth is not an issue. Tests are run in short bursts. This is peak performance; when it comes to writing logs to files or forwarding to a cluster of Splunk or Elasticsearch endpoints around the clock, that would be slower. Also, in my fastest test case, logs came from four different &lt;code&gt;loggen&lt;/code&gt; instances, over 32 TCP connections each, at a constant rate. In the real world, logs come in bursts and connections are opened and closed regularly.&lt;/p&gt;
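&lt;p&gt;For reference, a &lt;code&gt;loggen&lt;/code&gt; invocation resembling that fastest scenario could look like the sketch below (one of the four instances; the rate and port are illustrative, not my exact command line):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;loggen --inet --stream --active-connections 32 --rate 2000000 --interval 20 127.0.0.1 514
&lt;/code&gt;&lt;/pre&gt;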
&lt;h1 id=&#34;test-environment-and-tests&#34;&gt;Test environment and tests&lt;/h1&gt;
&lt;p&gt;I used my &lt;a href=&#34;https://peter.czanik.hu/posts/new-toy-in-the-house-for-ai-gaming-linux-windows-freebsd/&#34;&gt;AI mini workstation&lt;/a&gt; with Fedora Linux 44 Beta. First, I took a baseline with the stock syslog-ng 4.11.0 included in the distribution. Then I used my syslog-ng git snapshot packages for Fedora from &lt;a href=&#34;https://copr.fedorainfracloud.org/coprs/czanik/syslog-ng-githead/&#34;&gt;https://copr.fedorainfracloud.org/coprs/czanik/syslog-ng-githead/&lt;/a&gt;. Initially it also had jemalloc support compiled in. Later I disabled it and focused purely on the yet-to-be-merged &lt;a href=&#34;https://github.com/syslog-ng/syslog-ng/pull/5654&#34;&gt;parallelize() optimizations&lt;/a&gt; from GitHub. I experimented with enabling and disabling parallelize(), adding various batch_size() values, and finally also so-rcvbuf().&lt;/p&gt;
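&lt;p&gt;In case you want to experiment yourself: parallelize() is used as an element of the log path. A minimal sketch, with an example partition count and hypothetical source and destination names (the exact options of the not-yet-merged optimizations may differ):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;log {
    source(s_tcp);
    # fan incoming messages out to multiple worker threads
    parallelize(partitions(8));
    destination(d_file);
};
&lt;/code&gt;&lt;/pre&gt;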
&lt;figure&gt;&lt;img src=&#34;https://peter.czanik.hu/images/hpmini/unbox4.jpg&#34;/&gt;&lt;figcaption&gt;
            &lt;h4&gt;AI in a miniature box :-)&lt;/h4&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;This blog is part of a longer series about my adventures with my new machine and AI. You can reach me to discuss this blog on one of the contacts listed in the upper right corner. You can read the rest of the blogs under the &lt;a href=&#34;https://peter.czanik.hu/tags/toy/&#34;&gt;toy&lt;/a&gt; tag.&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>My new toy: Back to high-end audio</title>
      <link>https://peter.czanik.hu/posts/new-toy-back-to-high-end-audio/</link>
      <pubDate>Tue, 31 Mar 2026 10:12:38 +0200</pubDate>
      
      <guid>https://peter.czanik.hu/posts/new-toy-back-to-high-end-audio/</guid>
      <description>&lt;p&gt;My &lt;a href=&#34;https://peter.czanik.hu/posts/new-toy-in-the-house-for-ai-gaming-linux-windows-freebsd/&#34;&gt;AI mini workstation from HP&lt;/a&gt; has seen some non-AI workloads this weekend. I installed Capture One for photo editing and a couple of software synthesizers. And realized along the way that while built-in speakers are nice, high-end audio is a lot better! :-)&lt;/p&gt;
&lt;p&gt;For months, I have been listening to music on devices that are designed for speech: a pair of Jabra headphones and the speakers of my various laptops. There were many reasons for this, including peer pressure, and some hearing loss at a way too loud concert. I was also too lazy to use my high-end devices and tried to persuade myself that audio equipment designed for meetings is good enough for music too. Well&amp;hellip;&lt;/p&gt;
&lt;p&gt;This weekend, I installed various software synthesizers on my new computer. Not that I learned music or could play any instruments, but I still enjoy experimenting with music (well, with noise, actually :-) ). As I connected the machine to the big screen in the living room, I also connected it to my HiFi system. Suddenly, I realized how much better it sounds than my laptop or anything I&amp;rsquo;ve listened to in the past few months.&lt;/p&gt;
&lt;p&gt;While making noise with a couple of software synths and listening to music from my TIDAL subscription, I also recharged my Focal headphones. My &lt;a href=&#34;https://www.focal.com/products/bathys&#34;&gt;Focal Bathys&lt;/a&gt; is not as good as my HiFi, but has a wonderful sound regardless.&lt;/p&gt;
&lt;p&gt;So I guess that after a detour of a few months, I am back to using high-end audio gear whenever it is technically possible. I love the extra detail I can hear on my &lt;a href=&#34;https://heedaudio.com/loudspeakers/&#34;&gt;Heed Enigma&lt;/a&gt; speakers or on my Focal headphones. Of course, nothing can replace listening to live music at concerts, but high-end gear is much better at approximating the vibe of various live events than anything below it.&lt;/p&gt;
&lt;figure&gt;&lt;img src=&#34;https://peter.czanik.hu/images/hpmini/unbox4.jpg&#34;/&gt;&lt;figcaption&gt;
            &lt;h4&gt;AI in a miniature box :-)&lt;/h4&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;This blog is part of a longer series about my adventures with my new machine and AI. You can reach me to discuss this blog on one of the contacts listed in the upper right corner. You can read the rest of the blogs under the &lt;a href=&#34;https://peter.czanik.hu/tags/toy/&#34;&gt;toy&lt;/a&gt; tag.&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>My new toy: Open WebUI first steps</title>
      <link>https://peter.czanik.hu/posts/new-toy-openwebui-first-steps/</link>
      <pubDate>Thu, 26 Mar 2026 13:42:45 +0100</pubDate>
      
      <guid>https://peter.czanik.hu/posts/new-toy-openwebui-first-steps/</guid>
      <description>&lt;p&gt;Once I got hardware-accelerated AI working under Linux on my &lt;a href=&#34;https://peter.czanik.hu/posts/new-toy-in-the-house-for-ai-gaming-linux-windows-freebsd/&#34;&gt;AI mini workstation from HP&lt;/a&gt;, my next goal was to make it easier to use. From this blog, you can read about my initial experiments with &lt;a href=&#34;https://github.com/open-webui/open-webui&#34;&gt;Open WebUI&lt;/a&gt; on Fedora Linux.&lt;/p&gt;
&lt;figure&gt;&lt;img src=&#34;https://peter.czanik.hu/images/hpmini/openwebui1.png&#34;/&gt;&lt;figcaption&gt;
            &lt;h4&gt;Open WebUI talking about central log collection :-)&lt;/h4&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;

&lt;h1 id=&#34;everything-in-containers&#34;&gt;Everything in containers&lt;/h1&gt;
&lt;p&gt;As Open WebUI is not yet available as a package in Fedora, my initial approach was to use containers. I found a Docker Compose setup which, according to its documentation, was tested on Fedora Linux 43: &lt;a href=&#34;https://github.com/jesuswasrasta/ollama-rocm-webui-docker&#34;&gt;https://github.com/jesuswasrasta/ollama-rocm-webui-docker&lt;/a&gt;. As I (also) use Fedora 43, it sounded like a good choice.&lt;/p&gt;
&lt;p&gt;It worked; however, I quickly realized that hardware acceleration for AI was not working. Instead, most CPU cores were running close to 100%. It was a good test for cooling: I could hear the miniature box from the next room through closed doors :-)&lt;/p&gt;
&lt;figure&gt;&lt;img src=&#34;https://peter.czanik.hu/images/hpmini/ollamatop1.png&#34;/&gt;&lt;figcaption&gt;
            &lt;h4&gt;ollama eating CPU :-)&lt;/h4&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;As it turned out, the content of the &lt;code&gt;HSA_OVERRIDE_GFX_VERSION&lt;/code&gt; environment variable was incorrect. When I set it according to the docs, hardware acceleration still did not work. After I removed the environment variable, &lt;code&gt;ollama&lt;/code&gt; found the hardware, but never answered a prompt again.&lt;/p&gt;
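&lt;p&gt;For context, in a Docker Compose setup this variable is typically set in the ollama service definition, something like the sketch below (the image tag, device list and GFX version here are assumptions for illustration; the right version string depends on your GPU):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;services:
  ollama:
    image: ollama/ollama:rocm
    devices:
      - /dev/kfd
      - /dev/dri
    environment:
      - HSA_OVERRIDE_GFX_VERSION=11.0.0
&lt;/code&gt;&lt;/pre&gt;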
&lt;h1 id=&#34;ollama-from-the-system&#34;&gt;Ollama from the system&lt;/h1&gt;
&lt;p&gt;My next experiment was to keep using Open WebUI from the container, but install &lt;code&gt;ollama&lt;/code&gt; from the Fedora package repository directly on the system. The good news? Some smaller models ran really fast, using hardware acceleration. The bad news: most models failed to load, with an error message saying that the given model format is unknown.&lt;/p&gt;
&lt;h1 id=&#34;update-to-fedora-44-beta&#34;&gt;Update to Fedora 44 beta&lt;/h1&gt;
&lt;p&gt;I guessed that ollama was too old in Fedora 43. Solution? Update the whole system to Fedora 44 beta. It seems to have helped. A lot more models work now, including the largest freely available Granite models from IBM.&lt;/p&gt;
&lt;h1 id=&#34;why-granite&#34;&gt;Why Granite?&lt;/h1&gt;
&lt;p&gt;First of all: I&amp;rsquo;m an IBM Champion, and thus using IBM technologies is a given. But I also learned some background stories from a &lt;a href=&#34;https://www.gaborsamu.com/&#34;&gt;friend working at IBM on LSF&lt;/a&gt;, which makes it a personal choice as well.&lt;/p&gt;
&lt;p&gt;What I&amp;rsquo;ve been showing here is AI inferencing on my HP AI system. But before the model can be used (for inferencing), it needs to be trained. These models are trained on large, GPU-rich compute clusters. To get an idea of the scale of such clusters, you can learn more in this research paper (&lt;a href=&#34;https://arxiv.org/abs/2407.05467&#34;&gt;https://arxiv.org/abs/2407.05467&lt;/a&gt;). It discusses the IBM Blue Vela system, which supports IBM&amp;rsquo;s GenAI mission. What&amp;rsquo;s interesting is that Blue Vela uses a more traditional HPC software stack, including IBM LSF for workload management and Storage Scale (GPFS) for rapid access to large data sets.&lt;/p&gt;
&lt;figure&gt;&lt;img src=&#34;https://peter.czanik.hu/images/hpmini/unbox4.jpg&#34;/&gt;&lt;figcaption&gt;
            &lt;h4&gt;AI in a miniature box :-)&lt;/h4&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;This blog is part of a longer series about my adventures with my new machine and AI. You can reach me to discuss this blog on one of the contacts listed in the upper right corner. You can read the rest of the blogs under the &lt;a href=&#34;https://peter.czanik.hu/tags/toy/&#34;&gt;toy&lt;/a&gt; tag.&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>Compiling syslog-ng on an old Mac</title>
      <link>https://peter.czanik.hu/other/syslog-ng-compiling-on-an-old-mac/</link>
      <pubDate>Wed, 25 Mar 2026 15:43:49 +0100</pubDate>
      
      <guid>https://peter.czanik.hu/other/syslog-ng-compiling-on-an-old-mac/</guid>
      <description>&lt;p&gt;I have an aging, but fully functional MacBook. I bought it for syslog-ng testing, but I also use it for watching movies. Homebrew no longer fully supports old, Intel-based Macs. This blog helps you compile the latest syslog-ng release on these old, but otherwise functional machines.&lt;/p&gt;
&lt;p&gt;Read more at &lt;a href=&#34;https://www.syslog-ng.com/community/b/blog/posts/compiling-syslog-ng-on-an-old-mac&#34;&gt;https://www.syslog-ng.com/community/b/blog/posts/compiling-syslog-ng-on-an-old-mac&lt;/a&gt;&lt;/p&gt;
&lt;figure&gt;&lt;img src=&#34;https://peter.czanik.hu/images/syslog-ng-logo1.png&#34;/&gt;&lt;figcaption&gt;
            &lt;h4&gt;syslog-ng logo&lt;/h4&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;

</description>
    </item>
    
    <item>
      <title>My new toy: first steps with AI on Linux</title>
      <link>https://peter.czanik.hu/posts/new-toy-first-steps-with-ai-on-linux/</link>
      <pubDate>Wed, 25 Mar 2026 12:48:34 +0100</pubDate>
      
      <guid>https://peter.czanik.hu/posts/new-toy-first-steps-with-ai-on-linux/</guid>
      <description>&lt;p&gt;Ever since I bought my &lt;a href=&#34;https://peter.czanik.hu/posts/new-toy-in-the-house-for-ai-gaming-linux-windows-freebsd/&#34;&gt;AI mini workstation from HP&lt;/a&gt;, my goal was to run hardware-accelerated artificial intelligence workloads in a Linux environment. Read more to learn how things turned out on Ubuntu and Fedora!&lt;/p&gt;
&lt;p&gt;I have been using various AI tools for a while now. Generating pictures of impossible situations, like a dinosaur climbing the Hungarian parliament building, finding information where a simple web search is useless, or having syslog-ng code explained to me. All these are nice, sometimes even useful; however, I prefer to know what is behind the magic. Well, at least part of it :-) I want to get a bottom-up view of the various components and processes, and to get my hands dirty. Hopefully this miniature but powerful box will help me get to know AI better.&lt;/p&gt;
&lt;figure&gt;&lt;img src=&#34;https://peter.czanik.hu/images/hpmini/unbox4.jpg&#34;/&gt;&lt;figcaption&gt;
            &lt;h4&gt;AI in a miniature box :-)&lt;/h4&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;

&lt;h1 id=&#34;testing-ai-on-ubuntu&#34;&gt;Testing AI on Ubuntu&lt;/h1&gt;
&lt;p&gt;As mentioned in my &lt;a href=&#34;https://peter.czanik.hu/posts/new-toy-installing-ubuntu-on-hp-z2-mini/&#34;&gt;installing Ubuntu&lt;/a&gt; blog, the 24.04 LTS installer did not work on this machine. I found a nice tutorial about &lt;a href=&#34;https://github.com/Gygeek/Framework-strix-halo-llm-setup&#34;&gt;AI on the Ryzen AI Max+ 395&lt;/a&gt; which mentioned using 25.10, so I installed that version instead of the LTS. It installed without any trouble, and 3D graphics worked out of the box.&lt;/p&gt;
&lt;p&gt;However, AI is a different story. &lt;a href=&#34;https://www.amd.com/en/products/software/rocm.html&#34;&gt;ROCm&lt;/a&gt;, hardware acceleration for AI workloads on AMD chips, is only packaged for Ubuntu LTS releases. The workaround described in the tutorial was to use distrobox. Unfortunately, the steps described in the tutorial did not work. Containerization brought in various problems with permissions, software availability, and so on. Most likely an experienced distrobox user could resolve these. In my case, after reading the distrobox documentation for hours, I just gave up.&lt;/p&gt;
&lt;h1 id=&#34;getting-started-with-hardware-accelerated-ai-on-fedora&#34;&gt;Getting started with hardware accelerated AI on Fedora&lt;/h1&gt;
&lt;p&gt;Next, I turned to Fedora Linux 43. The wiki page of the &lt;a href=&#34;https://fedoraproject.org/wiki/SIGs/HC&#34;&gt;Fedora Heterogeneous Computing Special Interest Group&lt;/a&gt; proved to be a good starting point. Fedora has &lt;a href=&#34;https://www.amd.com/en/products/software/rocm.html&#34;&gt;ROCm&lt;/a&gt; packaged as part of the distro, and the wiki page gives clear instructions on how to get started.&lt;/p&gt;
&lt;p&gt;Once I set up user rights and installed the necessary packages, I was able to get some info about my hardware. You can see the output of &lt;code&gt;rocminfo&lt;/code&gt; and &lt;code&gt;rocm-clinfo&lt;/code&gt; at the bottom of this blog. I did not want to shorten those, but given the many lines of output, I was not sure if anyone would read the rest of my blog :-)&lt;/p&gt;
&lt;h1 id=&#34;testing-with-llama&#34;&gt;Testing with llama&lt;/h1&gt;
&lt;p&gt;Of course, seeing info about the hardware is nice, but it&amp;rsquo;s even better to see it in action. The Ubuntu ROCm tutorial mentioned &lt;code&gt;llama&lt;/code&gt;, so I started with that. Luckily, Fedora includes it as a ready-to-install package, so I did not have to compile it from source. I installed &lt;code&gt;huggingface-hub&lt;/code&gt; from a package as well:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;dnf install python3-huggingface-hub llama-cpp
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;This allowed me to download the model mentioned in the tutorial and ask the downloaded LLM a few questions. For now I just used the sample command line, but based on the output, llama found the hardware and used it. Next up: learning more about the available models.&lt;/p&gt;
&lt;p&gt;You can find the output of the following command at the end of this blog:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;llama-cli   -m ~/models/llama-2-7b.Q4_K_M.gguf   --no-mmap   -ngl 99   -p &amp;#34;Explain quantum computing in simple terms:&amp;#34;   -n 256
&lt;/code&gt;&lt;/pre&gt;&lt;h1 id=&#34;testing-with-pytorch&#34;&gt;Testing with pytorch&lt;/h1&gt;
&lt;p&gt;When I mentioned to a friend that hardware-accelerated AI seemed to work on my Linux box, he suggested trying it with &lt;a href=&#34;https://pytorch.org/&#34;&gt;PyTorch&lt;/a&gt;. Luckily, this was available as a ready-to-install package for Fedora as well:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;dnf install python3-torch
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;I was quite surprised, as the above command installed 8 GB worth of RPM packages (&lt;code&gt;texlive&lt;/code&gt; accounting for a good part of it). I do not know much about PyTorch, but I did a quick test anyway. Here is the really complex Python code I built based on the documentation:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-python&#34; data-lang=&#34;python&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#f92672&#34;&gt;import&lt;/span&gt; torch
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;x &lt;span style=&#34;color:#f92672&#34;&gt;=&lt;/span&gt; torch&lt;span style=&#34;color:#f92672&#34;&gt;.&lt;/span&gt;rand(&lt;span style=&#34;color:#ae81ff&#34;&gt;5&lt;/span&gt;, &lt;span style=&#34;color:#ae81ff&#34;&gt;3&lt;/span&gt;)
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;print(x)
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;print(&lt;span style=&#34;color:#e6db74&#34;&gt;&amp;#39;Is hw AI accel available&amp;#39;&lt;/span&gt;)
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;print(torch&lt;span style=&#34;color:#f92672&#34;&gt;.&lt;/span&gt;cuda&lt;span style=&#34;color:#f92672&#34;&gt;.&lt;/span&gt;is_available())
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;And here is the output from the above code:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;tensor([[0.1034, 0.0183, 0.1233],
        [0.1787, 0.0097, 0.8426],
        [0.2872, 0.6351, 0.8468],
        [0.8226, 0.2991, 0.8539],
        [0.2061, 0.6422, 0.8146]])
Is hw AI accel available
True
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;It&amp;rsquo;s simple, but looks promising :-)&lt;/p&gt;
&lt;h1 id=&#34;outputs&#34;&gt;Outputs&lt;/h1&gt;
&lt;h2 id=&#34;ooutput-of-rocminfo-and-rocm-clinfo&#34;&gt;Output of rocminfo and rocm-clinfo&lt;/h2&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-console&#34; data-lang=&#34;console&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;czanik@fedora:~$ rocminfo 
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;ROCk module is loaded
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;=====================    
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;HSA System Attributes    
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;=====================    
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;Runtime Version:         1.1
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;Runtime Ext Version:     1.7
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;System Timestamp Freq.:  1000.000000MHz
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;Sig. Max Wait Duration:  18446744073709551615 (0xFFFFFFFFFFFFFFFF) (timestamp count)
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;Machine Model:           LARGE                              
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;System Endianness:       LITTLE                             
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;Mwaitx:                  DISABLED
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;XNACK enabled:           NO
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;DMAbuf Support:          YES
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;VMM Support:             YES
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#960050;background-color:#1e0010&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#960050;background-color:#1e0010&#34;&gt;&lt;/span&gt;==========               
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;HSA Agents               
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;==========               
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;*******                  
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;Agent 1                  
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;*******                  
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Name:                    AMD RYZEN AI MAX+ PRO 395 w/ Radeon 8060S
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Uuid:                    CPU-XX                             
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Marketing Name:          AMD RYZEN AI MAX+ PRO 395 w/ Radeon 8060S
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Vendor Name:             CPU                                
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Feature:                 None specified                     
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Profile:                 FULL_PROFILE                       
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Float Round Mode:        NEAR                               
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Max Queue Number:        0(0x0)                             
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Queue Min Size:          0(0x0)                             
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Queue Max Size:          0(0x0)                             
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Queue Type:              MULTI                              
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Node:                    0                                  
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Device Type:             CPU                                
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Cache Info:              
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    L1:                      49152(0xc000) KB                   
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Chip ID:                 0(0x0)                             
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  ASIC Revision:           0(0x0)                             
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Cacheline Size:          64(0x40)                           
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Max Clock Freq. (MHz):   5187                               
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  BDFID:                   0                                  
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Internal Node ID:        0                                  
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Compute Unit:            32                                 
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  SIMDs per CU:            0                                  
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Shader Engines:          0                                  
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Shader Arrs. per Eng.:   0                                  
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  WatchPts on Addr. Ranges:1                                  
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Memory Properties:       
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Features:                None
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Pool Info:               
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    Pool 1                   
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Segment:                 GLOBAL; FLAGS: FINE GRAINED        
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Size:                    131136832(0x7d0fd40) KB            
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Allocatable:             TRUE                               
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Alloc Granule:           4KB                                
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Alloc Recommended Granule:4KB                                
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Alloc Alignment:         4KB                                
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Accessible by all:       TRUE                               
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    Pool 2                   
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Segment:                 GLOBAL; FLAGS: EXTENDED FINE GRAINED
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Size:                    131136832(0x7d0fd40) KB            
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Allocatable:             TRUE                               
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Alloc Granule:           4KB                                
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Alloc Recommended Granule:4KB                                
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Alloc Alignment:         4KB                                
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Accessible by all:       TRUE                               
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    Pool 3                   
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Segment:                 GLOBAL; FLAGS: KERNARG, FINE GRAINED
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Size:                    131136832(0x7d0fd40) KB            
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Allocatable:             TRUE                               
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Alloc Granule:           4KB                                
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Alloc Recommended Granule:4KB                                
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Alloc Alignment:         4KB                                
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Accessible by all:       TRUE                               
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    Pool 4                   
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Segment:                 GLOBAL; FLAGS: COARSE GRAINED      
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Size:                    131136832(0x7d0fd40) KB            
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Allocatable:             TRUE                               
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Alloc Granule:           4KB                                
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Alloc Recommended Granule:4KB                                
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Alloc Alignment:         4KB                                
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Accessible by all:       TRUE                               
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  ISA Info:                
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;*******                  
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;Agent 2                  
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;*******                  
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Name:                    gfx1151                            
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Uuid:                    GPU-XX                             
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Marketing Name:          Radeon 8060S Graphics              
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Vendor Name:             AMD                                
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Feature:                 KERNEL_DISPATCH                    
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Profile:                 BASE_PROFILE                       
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Float Round Mode:        NEAR                               
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Max Queue Number:        128(0x80)                          
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Queue Min Size:          64(0x40)                           
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Queue Max Size:          131072(0x20000)                    
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Queue Type:              MULTI                              
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Node:                    1                                  
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Device Type:             GPU                                
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Cache Info:              
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    L1:                      32(0x20) KB                        
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    L2:                      2048(0x800) KB                     
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    L3:                      32768(0x8000) KB                   
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Chip ID:                 5510(0x1586)                       
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  ASIC Revision:           0(0x0)                             
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Cacheline Size:          128(0x80)                          
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Max Clock Freq. (MHz):   2900                               
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  BDFID:                   50432                              
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Internal Node ID:        1                                  
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Compute Unit:            40                                 
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  SIMDs per CU:            2                                  
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Shader Engines:          2                                  
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Shader Arrs. per Eng.:   2                                  
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  WatchPts on Addr. Ranges:4                                  
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Coherent Host Access:    FALSE                              
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Memory Properties:       APU
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Features:                KERNEL_DISPATCH 
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Fast F16 Operation:      TRUE                               
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Wavefront Size:          32(0x20)                           
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Workgroup Max Size:      1024(0x400)                        
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Workgroup Max Size per Dimension:
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    x                        1024(0x400)                        
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    y                        1024(0x400)                        
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    z                        1024(0x400)                        
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Max Waves Per CU:        32(0x20)                           
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Max Work-item Per CU:    1024(0x400)                        
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Grid Max Size:           4294967295(0xffffffff)             
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Grid Max Size per Dimension:
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    x                        4294967295(0xffffffff)             
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    y                        4294967295(0xffffffff)             
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    z                        4294967295(0xffffffff)             
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Max fbarriers/Workgrp:   32                                 
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Packet Processor uCode:: 34                                 
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  SDMA engine uCode::      18                                 
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  IOMMU Support::          None                               
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Pool Info:               
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    Pool 1                   
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Segment:                 GLOBAL; FLAGS: COARSE GRAINED      
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Size:                    65568416(0x3e87ea0) KB             
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Allocatable:             TRUE                               
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Alloc Granule:           4KB                                
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Alloc Recommended Granule:2048KB                             
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Alloc Alignment:         4KB                                
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Accessible by all:       FALSE                              
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    Pool 2                   
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Segment:                 GLOBAL; FLAGS: EXTENDED FINE GRAINED
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Size:                    65568416(0x3e87ea0) KB             
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Allocatable:             TRUE                               
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Alloc Granule:           4KB                                
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Alloc Recommended Granule:2048KB                             
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Alloc Alignment:         4KB                                
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Accessible by all:       FALSE                              
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    Pool 3                   
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Segment:                 GROUP                              
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Size:                    64(0x40) KB                        
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Allocatable:             FALSE                              
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Alloc Granule:           0KB                                
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Alloc Recommended Granule:0KB                                
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Alloc Alignment:         0KB                                
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Accessible by all:       FALSE                              
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  ISA Info:                
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    ISA 1                    
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Name:                    amdgcn-amd-amdhsa--gfx1151         
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Machine Models:          HSA_MACHINE_MODEL_LARGE            
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Profiles:                HSA_PROFILE_BASE                   
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Default Rounding Mode:   NEAR                               
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Default Rounding Mode:   NEAR                               
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Fast f16:                TRUE                               
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Workgroup Max Size:      1024(0x400)                        
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Workgroup Max Size per Dimension:
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;        x                        1024(0x400)                        
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;        y                        1024(0x400)                        
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;        z                        1024(0x400)                        
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Grid Max Size:           4294967295(0xffffffff)             
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Grid Max Size per Dimension:
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;        x                        4294967295(0xffffffff)             
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;        y                        4294967295(0xffffffff)             
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;        z                        4294967295(0xffffffff)             
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      FBarrier Max Size:       32                                 
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    ISA 2                    
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Name:                    amdgcn-amd-amdhsa--gfx11-generic   
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Machine Models:          HSA_MACHINE_MODEL_LARGE            
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Profiles:                HSA_PROFILE_BASE                   
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Default Rounding Mode:   NEAR                               
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Default Rounding Mode:   NEAR                               
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Fast f16:                TRUE                               
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Workgroup Max Size:      1024(0x400)                        
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Workgroup Max Size per Dimension:
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;        x                        1024(0x400)                        
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;        y                        1024(0x400)                        
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;        z                        1024(0x400)                        
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Grid Max Size:           4294967295(0xffffffff)             
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Grid Max Size per Dimension:
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;        x                        4294967295(0xffffffff)             
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;        y                        4294967295(0xffffffff)             
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;        z                        4294967295(0xffffffff)             
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      FBarrier Max Size:       32                                 
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;*******                  
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;Agent 3                  
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;*******                  
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Name:                    aie2                               
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Uuid:                    AIE-XX                             
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Marketing Name:          AIE-ML                             
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Vendor Name:             AMD                                
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Feature:                 AGENT_DISPATCH                     
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Profile:                 BASE_PROFILE                       
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Float Round Mode:        NEAR                               
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Max Queue Number:        1(0x1)                             
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Queue Min Size:          64(0x40)                           
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Queue Max Size:          64(0x40)                           
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Queue Type:              SINGLE                             
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Node:                    0                                  
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Device Type:             DSP                                
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Cache Info:              
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    L2:                      2048(0x800) KB                     
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    L3:                      32768(0x8000) KB                   
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Chip ID:                 0(0x0)                             
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  ASIC Revision:           0(0x0)                             
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Cacheline Size:          0(0x0)                             
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Max Clock Freq. (MHz):   0                                  
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  BDFID:                   0                                  
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Internal Node ID:        0                                  
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Compute Unit:            0                                  
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  SIMDs per CU:            0                                  
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Shader Engines:          0                                  
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Shader Arrs. per Eng.:   0                                  
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  WatchPts on Addr. Ranges:0                                  
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Memory Properties:       
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Features:                AGENT_DISPATCH
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Pool Info:               
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    Pool 1                   
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Segment:                 GLOBAL; FLAGS: KERNARG, COARSE GRAINED
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Size:                    131136832(0x7d0fd40) KB            
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Allocatable:             TRUE                               
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Alloc Granule:           4KB                                
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Alloc Recommended Granule:4KB                                
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Alloc Alignment:         4KB                                
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Accessible by all:       TRUE                               
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    Pool 2                   
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Segment:                 GLOBAL; FLAGS: COARSE GRAINED      
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Size:                    65536(0x10000) KB                  
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Allocatable:             TRUE                               
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Alloc Granule:           4KB                                
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Alloc Recommended Granule:0KB                                
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Alloc Alignment:         4KB                                
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Accessible by all:       TRUE                               
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    Pool 3                   
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Segment:                 GLOBAL; FLAGS: COARSE GRAINED      
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Size:                    131136832(0x7d0fd40) KB            
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Allocatable:             TRUE                               
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Alloc Granule:           4KB                                
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Alloc Recommended Granule:4KB                                
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Alloc Alignment:         4KB                                
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      Accessible by all:       TRUE                               
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  ISA Info:                
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;*** Done ***             
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;and&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-console&#34; data-lang=&#34;console&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;czanik@fedora:~$ rocm-clinfo 
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;Number of platforms:				 1
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Platform Profile:				 FULL_PROFILE
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Platform Version:				 OpenCL 2.1 AMD-APP (3649.0)
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Platform Name:				 AMD Accelerated Parallel Processing
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Platform Vendor:				 Advanced Micro Devices, Inc.
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Platform Extensions:				 cl_khr_icd cl_amd_event_callback 
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Platform Name:				 AMD Accelerated Parallel Processing
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;Number of devices:				 1
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Device Type:					 CL_DEVICE_TYPE_GPU
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Vendor ID:					 1002h
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Board name:					 Radeon 8060S Graphics
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Device Topology:				 PCI[ B#197, D#0, F#0 ]
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Max compute units:				 20
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Max work items dimensions:			 3
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    Max work items[0]:				 1024
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    Max work items[1]:				 1024
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    Max work items[2]:				 1024
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Max work group size:				 256
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Preferred vector width char:			 4
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Preferred vector width short:			 2
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Preferred vector width int:			 1
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Preferred vector width long:			 1
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Preferred vector width float:			 1
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Preferred vector width double:		 1
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Native vector width char:			 4
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Native vector width short:			 2
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Native vector width int:			 1
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Native vector width long:			 1
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Native vector width float:			 1
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Native vector width double:			 1
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Max clock frequency:				 2900Mhz
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Address bits:					 64
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Max memory allocation:			 57070749280
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Image support:				 Yes
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Max number of images read arguments:		 128
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Max number of images write arguments:		 8
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Max image 2D width:				 16384
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Max image 2D height:				 16384
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Max image 3D width:				 16384
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Max image 3D height:				 16384
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Max image 3D depth:				 8192
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Max samplers within kernel:			 16
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Max size of kernel argument:			 1024
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Alignment (bits) of base address:		 2048
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Minimum alignment (bytes) for any datatype:	 128
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Single precision floating point capability
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    Denorms:					 Yes
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    Quiet NaNs:					 Yes
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    Round to nearest even:			 Yes
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    Round to zero:				 Yes
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    Round to +ve and infinity:			 Yes
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    IEEE754-2008 fused multiply-add:		 Yes
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Cache type:					 Read/Write
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Cache line size:				 128
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Cache size:					 32768
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Global memory size:				 67142057984
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Constant buffer size:				 57070749280
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Max number of constant args:			 8
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Local memory type:				 Local
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Local memory size:				 65536
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Max pipe arguments:				 16
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Max pipe active reservations:			 16
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Max pipe packet size:				 1236174432
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Max global variable size:			 57070749280
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Max global variable preferred total size:	 67142057984
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Max read/write image args:			 64
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Max on device events:				 1024
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Queue on device max size:			 8388608
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Max on device queues:				 1
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Queue on device preferred size:		 262144
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  SVM capabilities:				 
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    Coarse grain buffer:			 Yes
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    Fine grain buffer:				 Yes
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    Fine grain system:				 No
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    Atomics:					 No
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Preferred platform atomic alignment:		 0
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Preferred global atomic alignment:		 0
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Preferred local atomic alignment:		 0
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Kernel Preferred work group size multiple:	 32
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Error correction support:			 0
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Unified memory for Host and Device:		 1
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Profiling timer resolution:			 1
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Device endianess:				 Little
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Available:					 Yes
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Compiler available:				 Yes
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Execution capabilities:				 
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    Execute OpenCL kernels:			 Yes
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    Execute native function:			 No
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Queue on Host properties:				 
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    Out-of-Order:				 No
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    Profiling :					 Yes
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Queue on Device properties:				 
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    Out-of-Order:				 Yes
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    Profiling :					 Yes
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Platform ID:					 0x7ffb97d11d80
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Name:						 gfx1151
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Vendor:					 Advanced Micro Devices, Inc.
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Device OpenCL C version:			 OpenCL C 2.0 
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Driver version:				 3649.0 (HSA1.1,LC)
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Profile:					 FULL_PROFILE
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Version:					 OpenCL 2.0 
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Extensions:					 cl_khr_fp64 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_fp16 cl_khr_gl_sharing cl_amd_device_attribute_query cl_amd_media_ops cl_amd_media_ops2 cl_khr_image2d_from_buffer cl_khr_subgroups cl_khr_depth_images cl_amd_copy_buffer_p2p cl_amd_assembly_program 
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id=&#34;output-from-llama&#34;&gt;Output from llama&lt;/h2&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;root@fedora:~# llama-cli   -m ~/models/llama-2-7b.Q4_K_M.gguf   --no-mmap   -ngl 99   -p &amp;#34;Explain quantum computing in simple terms:&amp;#34;   -n 256
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 ROCm devices:
  Device 0: Radeon 8060S Graphics, gfx1151 (0x1151), VMM: no, Wave Size: 32
build: 0 (unknown) with HIP version: 6.4.43484-9999 for x86_64-redhat-linux-gnu
main: llama backend init
main: load the model and apply lora adapter, if any
llama_model_load_from_file_impl: using device ROCm0 (Radeon 8060S Graphics) - 64031 MiB free
llama_model_loader: loaded meta data with 19 key-value pairs and 291 tensors from /root/models/llama-2-7b.Q4_K_M.gguf (version GGUF V2)
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = LLaMA v2
llama_model_loader: - kv   2:                       llama.context_length u32              = 4096
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   4:                          llama.block_count u32              = 32
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 11008
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 32
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                          general.file_type u32              = 15
llama_model_loader: - kv  11:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  12:                      tokenizer.ggml.tokens arr[str,32000]   = [&amp;#34;&amp;lt;unk&amp;gt;&amp;#34;, &amp;#34;&amp;lt;s&amp;gt;&amp;#34;, &amp;#34;&amp;lt;/s&amp;gt;&amp;#34;, &amp;#34;&amp;lt;0x00&amp;gt;&amp;#34;, &amp;#34;&amp;lt;...
llama_model_loader: - kv  13:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  14:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  15:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  16:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  17:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  18:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_K:  193 tensors
llama_model_loader: - type q6_K:   33 tensors
print_info: file format = GGUF V2
print_info: file type   = Q4_K - Medium
print_info: file size   = 3.80 GiB (4.84 BPW) 
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 3
load: token to piece cache size = 0.1684 MB
print_info: arch             = llama
print_info: vocab_only       = 0
print_info: n_ctx_train      = 4096
print_info: n_embd           = 4096
print_info: n_layer          = 32
print_info: n_head           = 32
print_info: n_head_kv        = 32
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: is_swa_any       = 0
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 1
print_info: n_embd_k_gqa     = 4096
print_info: n_embd_v_gqa     = 4096
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-05
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 11008
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 0
print_info: rope scaling     = linear
print_info: freq_base_train  = 10000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 4096
print_info: rope_finetuned   = unknown
print_info: model type       = 7B
print_info: model params     = 6.74 B
print_info: general.name     = LLaMA v2
print_info: vocab type       = SPM
print_info: n_vocab          = 32000
print_info: n_merges         = 0
print_info: BOS token        = 1 &amp;#39;&amp;lt;s&amp;gt;&amp;#39;
print_info: EOS token        = 2 &amp;#39;&amp;lt;/s&amp;gt;&amp;#39;
print_info: UNK token        = 0 &amp;#39;&amp;lt;unk&amp;gt;&amp;#39;
print_info: LF token         = 13 &amp;#39;&amp;lt;0x0A&amp;gt;&amp;#39;
print_info: EOG token        = 2 &amp;#39;&amp;lt;/s&amp;gt;&amp;#39;
print_info: max token length = 48
load_tensors: loading model tensors, this can take a while... (mmap = false)
load_tensors: offloading 32 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 33/33 layers to GPU
load_tensors:        ROCm0 model buffer size =  3820.94 MiB
load_tensors:          CPU model buffer size =    70.31 MiB
..................................................................................................
llama_context: constructing llama_context
llama_context: n_seq_max     = 1
llama_context: n_ctx         = 4096
llama_context: n_ctx_per_seq = 4096
llama_context: n_batch       = 2048
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = 0
llama_context: freq_base     = 10000.0
llama_context: freq_scale    = 1
llama_context:  ROCm_Host  output buffer size =     0.12 MiB
llama_kv_cache_unified:      ROCm0 KV buffer size =  2048.00 MiB
llama_kv_cache_unified: size = 2048.00 MiB (  4096 cells,  32 layers,  1 seqs), K (f16): 1024.00 MiB, V (f16): 1024.00 MiB
llama_kv_cache_unified: LLAMA_SET_ROWS=0, using old ggml_cpy() method for backwards compatibility
llama_context:      ROCm0 compute buffer size =   288.00 MiB
llama_context:  ROCm_Host compute buffer size =    16.01 MiB
llama_context: graph nodes  = 1158
llama_context: graph splits = 2
common_init_from_params: setting dry_penalty_last_n to ctx_size = 4096
common_init_from_params: warming up the model with an empty run - please wait ... (--no-warmup to disable)
main: llama threadpool init, n_threads = 16

system_info: n_threads = 16 (n_threads_batch = 16) / 32 | ROCm : NO_VMM = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : LLAMAFILE = 1 | REPACK = 1 | 

sampler seed: 2232334333
sampler params: 
	repeat_last_n = 64, repeat_penalty = 1.000, frequency_penalty = 0.000, presence_penalty = 0.000
	dry_multiplier = 0.000, dry_base = 1.750, dry_allowed_length = 2, dry_penalty_last_n = 4096
	top_k = 40, top_p = 0.950, min_p = 0.050, xtc_probability = 0.000, xtc_threshold = 0.100, typical_p = 1.000, top_n_sigma = -1.000, temp = 0.800
	mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
sampler chain: logits -&amp;gt; logit-bias -&amp;gt; penalties -&amp;gt; dry -&amp;gt; top-n-sigma -&amp;gt; top-k -&amp;gt; typical -&amp;gt; top-p -&amp;gt; min-p -&amp;gt; xtc -&amp;gt; temp-ext -&amp;gt; dist 
generate: n_ctx = 4096, n_batch = 2048, n_predict = 256, n_keep = 1

 Explain quantum computing in simple terms: what is it, how does it work, and what are its potential benefits?
This is a difficult question to answer because quantum computing is not yet a well-defined field of study, and many of the potential applications are still being researched. However, we can say that quantum computing is a type of computation that relies on the principles of quantum mechanics (the branch of physics that describes the behaviour of particles such as electrons and photons).
These particles obey a set of rules that are different from those obeyed by classical computers, which rely on the principles of classical mechanics. Quantum computing uses a particle’s quantum state (such as its spin) to store information. This means that quantum computers can perform computations that are not possible on classical computers.
In the simplest terms, quantum computing is a type of computation that takes advantage of the unique properties of quantum mechanics. These properties include superposition, entanglement, and non-locality. Superposition is the ability of a quantum system to exist in multiple states simultaneously.
This means that a quantum system can be in two different places at the same time, or have two different properties at the same time. Entanglement is the ability of two quantum systems to be inter

llama_perf_sampler_print:    sampling time =       4.27 ms /   265 runs   (    0.02 ms per token, 62075.43 tokens per second)
llama_perf_context_print:        load time =     631.46 ms
llama_perf_context_print: prompt eval time =      63.57 ms /     9 tokens (    7.06 ms per token,   141.57 tokens per second)
llama_perf_context_print:        eval time =    7110.09 ms /   255 runs   (   27.88 ms per token,    35.86 tokens per second)
llama_perf_context_print:       total time =    7184.25 ms /   264 tokens
&lt;/code&gt;&lt;/pre&gt;&lt;h1 id=&#34;closing-words&#34;&gt;Closing words&lt;/h1&gt;
&lt;p&gt;These are just my first steps. Most of the time, I was not even fully aware of what I was doing; I just reused some sample command lines and code. Still, these experiments were good enough to show that AI works on Linux as well, not just on Windows.&lt;/p&gt;
&lt;p&gt;This blog is part of a longer series about my adventures with my new machine and AI. You can reach me to discuss this blog on one of the contacts listed in the upper right corner. You can read the rest of the blogs under the &lt;a href=&#34;https://peter.czanik.hu/tags/toy/&#34;&gt;toy&lt;/a&gt; tag.&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>My new toy: FreeBSD on the HP Z2 mini revisited</title>
      <link>https://peter.czanik.hu/posts/new-toy-freebsd-on-the-hp-z2-mini-revisited/</link>
      <pubDate>Thu, 19 Mar 2026 10:43:33 +0100</pubDate>
      
      <guid>https://peter.czanik.hu/posts/new-toy-freebsd-on-the-hp-z2-mini-revisited/</guid>
      <description>&lt;p&gt;Last week, I wrote about my &lt;a href=&#34;https://peter.czanik.hu/posts/new-toy-installing-freebsd-on-hp-z2-mini/&#34;&gt;initial FreeBSD experiences&lt;/a&gt; on my new toy, an &lt;a href=&#34;https://peter.czanik.hu/posts/new-toy-in-the-house-for-ai-gaming-linux-windows-freebsd/&#34;&gt;AI workstation from HP&lt;/a&gt;. FreeBSD runs lightning fast on it, but the desktop was somewhat problematic. Well, I made lots of improvements this week!&lt;/p&gt;
&lt;h1 id=&#34;a-bit-of-debugging&#34;&gt;A bit of debugging&lt;/h1&gt;
&lt;p&gt;While there are still some rough edges, there have been tons of improvements since last week. I have no plans to use FreeBSD on the desktop in the long term, but still, I just could not believe that the FreeBSD GUI was this problematic on this device. So I did some experimenting, and it helped a lot… :-)&lt;/p&gt;
&lt;p&gt;The first problem, which I noticed while browsing the output of &lt;code&gt;dmesg&lt;/code&gt;, was that &lt;code&gt;desktop-installer&lt;/code&gt; had enabled the wrong kernel modules repository for me. The line that led me there was this:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;KLD amdgpu.ko: depends on kernel - not available or version mismatch
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The next problem occurred once I fixed this one: a kernel panic on boot when &lt;code&gt;amdgpu.ko&lt;/code&gt; was loaded.&lt;/p&gt;
&lt;p&gt;I did a fresh FreeBSD install, and instead of using the latest packages, I decided to go with the quarterly packages. This way, the desktop installer configured the right kmod repo – however, loading &lt;code&gt;amdgpu.ko&lt;/code&gt; still caused a kernel panic. Another experiment was using the ATI driver instead of the AMD one. The installer says that AMD is for modern cards and ATI is for older ones. Well, as it turned out, even though the chip is barely half a year old, it counts as &amp;ldquo;old&amp;rdquo;… :-)&lt;/p&gt;
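&lt;p&gt;For reference, switching between the latest and quarterly package branches is normally done with a repository override file. The snippet below is only a sketch of what such an override looks like, assuming the standard location; &lt;code&gt;desktop-installer&lt;/code&gt; may configure this differently:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;# /usr/local/etc/pkg/repos/FreeBSD.conf
# Overrides the default /etc/pkg/FreeBSD.conf to use the quarterly branch
FreeBSD: {
  url: &amp;#34;pkg+https://pkg.FreeBSD.org/${ABI}/quarterly&amp;#34;
}
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;After changing the branch, a &lt;code&gt;pkg update -f&lt;/code&gt; and &lt;code&gt;pkg upgrade&lt;/code&gt; pull in the matching package set.&lt;/p&gt;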
&lt;p&gt;I am still not convinced that proper hardware-based acceleration works: both the X.org logs and the GNOME &amp;ldquo;About&amp;rdquo; page showed software rendering. However, I had no problems with graphics performance: TuxRacer worked perfectly well… :-) The GNOME desktop also worked nicely and stably, including video playback. The only pain point when using GNOME was that screen locking still did not work.&lt;/p&gt;
&lt;h1 id=&#34;kde-to-the-rescue&#34;&gt;KDE to the rescue&lt;/h1&gt;
&lt;p&gt;Even if it&amp;rsquo;s just software rendering, the graphics problem seems to be resolved. However, the screen locking problem still bothered me, as I&amp;rsquo;m an IT security guy with a healthy dose of paranoia (which means that I lock my screen even when I’m home alone… :-)).&lt;/p&gt;
&lt;p&gt;So even though I had not tried KDE for the past 5+ years, I gave it a try now. After so many years on XFCE and GNOME, the interface looks a bit weird. However, everything I tried on it seemed to work just fine, including screen locking.&lt;/p&gt;
&lt;figure&gt;&lt;img src=&#34;https://peter.czanik.hu/images/hpmini/fb3.png&#34;/&gt;&lt;figcaption&gt;
            &lt;h4&gt;KDE on FreeBSD&lt;/h4&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;This blog is part of a longer series about my adventures with my new machine and AI. You can reach me to discuss this blog on one of the contacts listed in the upper right corner. You can read the rest of the blogs under the &lt;a href=&#34;https://peter.czanik.hu/tags/toy/&#34;&gt;toy&lt;/a&gt; tag.&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>Central log collection - more than just compliance</title>
      <link>https://peter.czanik.hu/other/syslog-ng-central-log-collection-more-than-just-compliance/</link>
      <pubDate>Wed, 18 Mar 2026 16:10:37 +0100</pubDate>
      
      <guid>https://peter.czanik.hu/other/syslog-ng-central-log-collection-more-than-just-compliance/</guid>
      <description>&lt;p&gt;I often hear, even at security conferences that “no central log collection here” or “we have something due to compliance”. Central logging is more than just compliance. It makes logs easier to use, available and secure, thus making your life easier in operations, security, development, but also in marketing, sales, and so on.&lt;/p&gt;
&lt;h2 id=&#34;what-are-logs-and-what-is-central-log-collection&#34;&gt;What are logs and what is central log collection?&lt;/h2&gt;
&lt;p&gt;Most operating systems and applications keep track of what they are doing: they write log messages. A syslog message might look similar to this:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;Mar 16 13:13:49 cent sshd[543817]: Accepted publickey for toor from 192.168.97.14 port 58246 ssh2: RSA SHA256:GeGHdsl1IZrnTniKUxxxX4NpP8Q
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Applications might store their logs separately and have their own log format, like this Apache access log:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;192.168.0.164 - - [16/Mar/2026:13:17:01 +0100] &amp;#34;HEAD /other/syslog-ng-insider-2026-03-4110-release-opensearch-elasticsearch/ HTTP/1.1&amp;#34; 200 3764 &amp;#34;-&amp;#34; &amp;#34;SkytabBot/1.0 (URL Resolution)&amp;#34;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Central log collection simply means that log messages are collected at a central location instead of, or in addition to, saving them locally.&lt;/p&gt;
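&lt;p&gt;As a rough illustration of how simple this can be, here is a minimal syslog-ng configuration sketch for the central server side. The port number, file path, and driver options here are assumptions for the example, not a recommended production setup:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;# Collect syslog messages from the network and store them per host
source s_net {
  syslog(transport(&amp;#34;tcp&amp;#34;) port(601));
};
destination d_central {
  file(&amp;#34;/var/log/central/${HOST}/messages&amp;#34;);
};
log {
  source(s_net);
  destination(d_central);
};
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;On the clients, a matching &lt;code&gt;destination&lt;/code&gt; using the &lt;code&gt;syslog()&lt;/code&gt; driver pointed at the central host forwards the local logs there.&lt;/p&gt;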
&lt;p&gt;In this blog, we take a look at what the ease of use, availability, and security benefits of central log collection mean for you.&lt;/p&gt;
&lt;h2 id=&#34;ease-of-use&#34;&gt;Ease of use&lt;/h2&gt;
&lt;p&gt;If you have a single computer in your organization, finding a log message about an event on that computer takes some time. Once you have 2 computers, you have to check both to find that event. It might take twice as much time, but it is still easier than implementing central log collection. Not to mention the question of which one would be the central computer. :-)&lt;/p&gt;
&lt;p&gt;Once you have a network of 10 computers, logging in to each of them to find a log message about an event becomes a huge overhead. It is still doable, but implementing central log collection is already a lot easier, even in the short term, than looking at the logs on the machines where they were created.&lt;/p&gt;
&lt;p&gt;On a network of 100 computers, it is practically impossible for security or operations to find relevant logs, unless the logs are collected centrally.&lt;/p&gt;
&lt;h2 id=&#34;availability&#34;&gt;Availability&lt;/h2&gt;
&lt;p&gt;Collecting logs centrally means that log messages are available even when the sending machine is down. If you want to know what happened, you do not have to get the machine up and running again; you can check the logs at the central location. If you see signs of a hardware failure, you can go there with a spare part immediately, reducing the time and effort needed to repair the machine.&lt;/p&gt;
&lt;h2 id=&#34;security&#34;&gt;Security&lt;/h2&gt;
&lt;p&gt;When a computer is compromised, log messages are often altered or deleted completely. However, this tactic only works against logs stored locally. Collecting logs at a central location allows you to work from the unmodified logs and figure out how the compromise happened.&lt;/p&gt;
&lt;h2 id=&#34;what-is-next&#34;&gt;What is next?&lt;/h2&gt;
&lt;p&gt;It is time to introduce central logging to your organization, if you have not done so yet. Of course, I am a bit biased, but syslog-ng is the perfect tool for the job. You can get started by reading or watching the syslog-ng tutorial at &lt;a href=&#34;https://peter.czanik.hu/posts/syslog-ng-tutorial-toc/&#34;&gt;https://peter.czanik.hu/posts/syslog-ng-tutorial-toc/&lt;/a&gt;.&lt;/p&gt;
&lt;figure&gt;&lt;img src=&#34;https://peter.czanik.hu/images/syslog-ng-logo1.png&#34;/&gt;&lt;figcaption&gt;
            &lt;h4&gt;syslog-ng logo&lt;/h4&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;Originally published at &lt;a href=&#34;https://www.syslog-ng.com/community/b/blog/posts/central-log-collection---more-than-just-compliance&#34;&gt;https://www.syslog-ng.com/community/b/blog/posts/central-log-collection&amp;mdash;more-than-just-compliance&lt;/a&gt;&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>My new toy: AI first steps with the HP Z2 Mini</title>
      <link>https://peter.czanik.hu/posts/new-toy-ai-first-steps-on-the-hp-z2-mini/</link>
      <pubDate>Tue, 17 Mar 2026 14:40:33 +0100</pubDate>
      
      <guid>https://peter.czanik.hu/posts/new-toy-ai-first-steps-on-the-hp-z2-mini/</guid>
      <description>&lt;p&gt;In the past few weeks, I installed five different operating systems on my latest toy: an &lt;a href=&#34;https://peter.czanik.hu/posts/new-toy-in-the-house-for-ai-gaming-linux-windows-freebsd/&#34;&gt;AI workstation from HP&lt;/a&gt;. I love playing with OSes, but my main goal with the new machine is to learn various aspects of AI. I took my first steps in this adventure on Windows.&lt;/p&gt;
&lt;p&gt;Of course, you might ask: why on Windows? Well, it&amp;rsquo;s easy: because it&amp;rsquo;s easy… :-) There is nothing to install or configure there, as Windows has multiple built-in apps that support AI and can utilize the NPU (hardware-accelerated AI) support of the AMD Ryzen 395 chip. The Windows Task Manager already has a graph for NPU utilization, and I put it to good use right away: I kept Task Manager open on the NPU tab while I ran a couple of generative AI tasks, which allowed me to verify that the NPU was indeed in use.&lt;/p&gt;
&lt;p&gt;I find it somewhat surprising, but Paint now also comes with integrated AI support. Of course, you must pay for image generation, but generating simple graphics (or &amp;ldquo;stickers&amp;rdquo;, as they are called in Paint) is free. If the prompt field is empty, its placeholder text suggests &amp;ldquo;a cat wearing sunglasses&amp;rdquo;, so that&amp;rsquo;s what I generated. It has no practical use for me, but it looks fun regardless… :-) You can see an example in the first figure.&lt;/p&gt;
&lt;figure&gt;&lt;img src=&#34;https://peter.czanik.hu/images/hpmini/winai1.png&#34;/&gt;&lt;figcaption&gt;
            &lt;h4&gt;Windows Paint with AI&lt;/h4&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;It&amp;rsquo;s only slightly crazier that the simplest Windows app, Notepad, now also has AI features. I asked Notepad to write me a three-paragraph blog post about writing a blog with AI in a boring, formal style. It completed the task successfully: the result is pretty boring and full of bullshit. Here it is:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The integration of artificial intelligence into the realm of content creation, particularly in the sphere of blog writing, presents a multitude of advantages that merit serious consideration. Firstly, AI-driven tools offer an unprecedented level of efficiency, allowing content creators to produce substantial volumes of text without the need for exhaustive manual input. This technological boon is particularly beneficial for those with time constraints or heavy workloads, as it liberates them from the mundane task of typing and editing.&lt;/p&gt;
&lt;p&gt;Moreover, the precision and consistency of AI in language generation cannot be understated. By leveraging advanced algorithms, AI systems can maintain a uniform tone and style throughout a piece, which is a common challenge for human writers. This consistency ensures that the content is cohesive and that the voice of the piece remains unaltered, thereby enhancing the overall reading experience.&lt;/p&gt;
&lt;p&gt;Lastly, the ability of AI to learn from vast datasets and adapt to various writing styles is a significant asset. It empowers writers to explore new topics and styles with confidence, knowing that the AI can provide a solid foundation upon which to build. This adaptability not only streamlines the writing process but also encourages creativity and innovation in content creation.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;figure&gt;&lt;img src=&#34;https://peter.czanik.hu/images/hpmini/winai2.png&#34;/&gt;&lt;figcaption&gt;
            &lt;h4&gt;Windows Notepad with AI&lt;/h4&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;I also wanted to try a controversial Windows feature: Recall. Well, it does not work. When I started it, I got a nice error message stating that it needs Secure Boot. My Linux installations require Secure Boot to be turned off, so I cannot test Recall for now. But I must admit that I do not mind that… :-)&lt;/p&gt;
&lt;figure&gt;&lt;img src=&#34;https://peter.czanik.hu/images/hpmini/winai3.png&#34;/&gt;&lt;figcaption&gt;
            &lt;h4&gt;Windows Recall needs Secure Boot&lt;/h4&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;If everything goes well, I&amp;rsquo;ll take my first steps next week to enable hardware-accelerated AI under Linux.&lt;/p&gt;
&lt;p&gt;This blog is part of a longer series about my adventures with my new machine and AI. You can reach me to discuss this blog on one of the contacts listed in the upper right corner. You can read the rest of the blogs under the &lt;a href=&#34;https://peter.czanik.hu/tags/toy/&#34;&gt;toy&lt;/a&gt; tag.&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>The syslog-ng Insider 2026-03: 4.11.0 release; OpenSearch; ElasticSearch</title>
      <link>https://peter.czanik.hu/other/syslog-ng-insider-2026-03-4110-release-opensearch-elasticsearch/</link>
      <pubDate>Wed, 11 Mar 2026 13:01:47 +0100</pubDate>
      
      <guid>https://peter.czanik.hu/other/syslog-ng-insider-2026-03-4110-release-opensearch-elasticsearch/</guid>
      <description>&lt;p&gt;The March syslog-ng newsletter is now on-line:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Version 4.11.0 of syslog-ng is now available&lt;/li&gt;
&lt;li&gt;Using OpenSearch data streams in syslog-ng&lt;/li&gt;
&lt;li&gt;Changes in the syslog-ng Elasticsearch destination&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;It is available at &lt;a href=&#34;https://www.syslog-ng.com/community/b/blog/posts/the-syslog-ng-insider-2026-03-4-11-0-release-opensearch-elasticsearch&#34;&gt;https://www.syslog-ng.com/community/b/blog/posts/the-syslog-ng-insider-2026-03-4-11-0-release-opensearch-elasticsearch&lt;/a&gt;&lt;/p&gt;
&lt;figure&gt;&lt;img src=&#34;https://peter.czanik.hu/images/syslog-ng-logo1.png&#34;/&gt;&lt;figcaption&gt;
             &lt;h4&gt;syslog-ng logo&lt;/h4&gt;
         &lt;/figcaption&gt;
 &lt;/figure&gt;

</description>
    </item>
    
    <item>
      <title>New toy: Installing FreeBSD on the HP Z2 Mini</title>
      <link>https://peter.czanik.hu/posts/new-toy-installing-freebsd-on-hp-z2-mini/</link>
      <pubDate>Tue, 10 Mar 2026 13:13:13 +0100</pubDate>
      
      <guid>https://peter.czanik.hu/posts/new-toy-installing-freebsd-on-hp-z2-mini/</guid>
      <description>&lt;p&gt;Finally, I also installed FreeBSD on my new &lt;a href=&#34;https://peter.czanik.hu/posts/new-toy-in-the-house-for-ai-gaming-linux-windows-freebsd/&#34;&gt;AI focused mini workstation from HP&lt;/a&gt;. I even managed to install GNOME on the machine with minimal effort. However, I also ran into many problems.&lt;/p&gt;
&lt;p&gt;So far it&amp;rsquo;s a mixed experience. Installation went smoothly, and FreeBSD 15.0 was up and running in no time. However, FreeBSD is not found by any of the Linux boot managers I use (different flavors of GRUB), and it&amp;rsquo;s not in the EFI boot menu either. The only way I could boot FreeBSD was to bring up the EFI boot menu, choose &lt;code&gt;boot from file&lt;/code&gt;, and load &lt;code&gt;EFI/freebsd/loader.efi&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Once FreeBSD boots on the machine, it is lightning fast. It is one of the fastest machines I have ever used, in the size of a &lt;em&gt;Lord of the Rings&lt;/em&gt; book. Still, it stays silent even while compiling software from FreeBSD ports.&lt;/p&gt;
&lt;p&gt;I do not plan to use this box as a FreeBSD desktop, but of course I was curious how much FreeBSD desktop support evolved since I last tried it. I found a &lt;a href=&#34;https://freebsdfoundation.org/resource/installing-a-desktop-environment-on-freebsd/&#34;&gt;nice article on the FreeBSD Foundation website&lt;/a&gt;, describing how to install a GUI on FreeBSD using the new &lt;code&gt;desktop-installer&lt;/code&gt; tool. It asked tons of questions, did some magic, and after a while I had GNOME up and running.&lt;/p&gt;
&lt;p&gt;The good:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;no manual package installation or configuration editing necessary&lt;/li&gt;
&lt;li&gt;the exact same GNOME look and feel as on all Linux distributions I tested (except for Ubuntu)&lt;/li&gt;
&lt;li&gt;sound works, using the built in speaker&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The bad:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;no accelerated graphics at all&lt;/li&gt;
&lt;li&gt;3D games start, play music, but no graphics&lt;/li&gt;
&lt;li&gt;playing YouTube in Firefox works, both graphics and sound, but low quality&lt;/li&gt;
&lt;li&gt;the screensaver starts automatically, but cannot be unlocked (workaround: disable the screensaver)&lt;/li&gt;
&lt;/ul&gt;
&lt;figure&gt;&lt;img src=&#34;https://peter.czanik.hu/images/hpmini/fb1.png&#34;/&gt;&lt;figcaption&gt;
            &lt;h4&gt;The same boring GNOME as everywhere else :-)&lt;/h4&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;

&lt;figure&gt;&lt;img src=&#34;https://peter.czanik.hu/images/hpmini/fb2.png&#34;/&gt;&lt;figcaption&gt;
            &lt;h4&gt;The same boring GNOME as everywhere else :-)&lt;/h4&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;I might try to debug some of these issues, but most likely I&amp;rsquo;ll just reinstall FreeBSD and keep using it in text-only mode. As far as I could see, there is no in-hardware AI acceleration available on FreeBSD. However, with 32 CPU cores, a fast SSD and 128 GB of RAM, this is an ideal box for running complex test environments in FreeBSD jails. I love &lt;a href=&#34;https://bastillebsd.org/&#34;&gt;Bastille&lt;/a&gt; and plan to install it once I have cleaned up the machine after the GNOME experiment.&lt;/p&gt;
&lt;p&gt;This blog is part of a longer series about my adventures with my new machine and AI. You can reach me to discuss this blog on one of the contacts listed in the upper right corner. You can read the rest of the blogs under the &lt;a href=&#34;https://peter.czanik.hu/tags/toy/&#34;&gt;toy&lt;/a&gt; tag.&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>New toy: Installing Fedora Linux on the HP Z2 Mini</title>
      <link>https://peter.czanik.hu/posts/new-toy-installing-fedora-on-hp-z2-mini/</link>
      <pubDate>Wed, 04 Mar 2026 12:56:13 +0100</pubDate>
      
      <guid>https://peter.czanik.hu/posts/new-toy-installing-fedora-on-hp-z2-mini/</guid>
      <description>&lt;p&gt;The data sheet of my new &lt;a href=&#34;https://peter.czanik.hu/posts/new-toy-in-the-house-for-ai-gaming-linux-windows-freebsd/&#34;&gt;AI-focused mini workstation from HP&lt;/a&gt; does not mention Fedora, but I could install it just fine. I expected this though, because when I asked around about Linux support and hardware AI acceleration for AMD Ryzen 39X chips, all responses came from Fedora users… :-)&lt;/p&gt;
&lt;p&gt;Installing Fedora on the HP Z2 Mini was a smooth experience, even though I hadn&amp;rsquo;t used the graphical installer for ages. I installed the Fedora server variant during Covid, and I have been upgrading it ever since. Still, using the graphical installer was easy, so Fedora was up and running in no time.&lt;/p&gt;
&lt;p&gt;Rebooting Fedora is not always fun, though. This box has two SSDs in it. In most cases, booting is OK, but sometimes the numbering of SSDs seems to be reversed. When this happens, booting gets stuck in an infinite loop, but a simple reboot solves the problem.&lt;/p&gt;
&lt;p&gt;I guess I&amp;rsquo;m getting older, but I appreciate that GNOME looks exactly the same as on any other Linux distro, except Ubuntu. Everything in GNOME works from muscle memory, just like most applications. Of course, under the hood, Linux distros are different: they have different package managers, repositories, backgrounds and application defaults. However, a simple user does not need to learn the desktop from scratch just because a friend installed another Linux distro for them…&lt;/p&gt;
&lt;p&gt;Also, while I&amp;rsquo;m not a gamer, when I saw during installation that Steam was available, I gave it a try as well. It worked flawlessly. I do not follow the current Windows situation, but when I installed Need for Speed a few years ago, I had to go through many steps and install the game twice due to a failed attempt to make it work. Today, installing and starting NFS was a simple next-next-finish experience, so I could start the latest reincarnation of my favorite childhood game without any problems.&lt;/p&gt;
&lt;figure&gt;&lt;img src=&#34;https://peter.czanik.hu/images/hpmini/game2.png&#34;/&gt;&lt;figcaption&gt;
            &lt;h4&gt;Steam on Fedora :-)&lt;/h4&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;

&lt;figure&gt;&lt;img src=&#34;https://peter.czanik.hu/images/hpmini/game3.png&#34;/&gt;&lt;figcaption&gt;
            &lt;h4&gt;Need for Speed&lt;/h4&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;I took a few steps to configure accelerated AI on the machine. I installed a few extra packages from Copr, and they found something, after I worked around a couple of minor problems:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;root@fedora:~# /usr/xrt/bin/xrt-smi examine
System Configuration
  OS Name              : Linux
  Release              : 6.18.13-200.fc43.x86_64
  Machine              : x86_64
  CPU Cores            : 32
  Memory               : 96311 MB
  Distribution         : Fedora Linux 43 (Workstation Edition)
  GLIBC                : 2.42
  Model                : HP Z2 Mini G1a Workstation Desktop PC
  BIOS Vendor          : HP
  BIOS Version         : X53 Ver. 01.05.02

XRT
  Version              : 2.19.0
  Branch               :
  Hash                 :
  Hash Date            : 2025-04-25 00:00:00
  virtio-pci           : unknown, unknown
  amdxdna              : unknown, unknown
  NPU Firmware Version : 1.0.0.166

Device(s) Present
|BDF             |Name          |
|----------------|--------------|
|[0000:c6:00.1]  |RyzenAI-npu5  |
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;More in-depth AI testing will follow later, once I have also installed FreeBSD on the box.&lt;/p&gt;
&lt;p&gt;This blog is part of a longer series about my adventures with my new HP Z2 Mini and AI. You can reach me to discuss this blog on one of the contacts listed in the upper right corner. You can read the rest of the blogs under the &lt;a href=&#34;https://peter.czanik.hu/tags/toy/&#34;&gt;toy&lt;/a&gt; tag.&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>Syslog-ng 4.11.0 packaging status</title>
      <link>https://peter.czanik.hu/other/syslog-ng-4110-packaging-status/</link>
      <pubDate>Wed, 04 Mar 2026 09:31:45 +0100</pubDate>
      
      <guid>https://peter.czanik.hu/other/syslog-ng-4110-packaging-status/</guid>
      <description>&lt;p&gt;Committed users compile syslog-ng for themselves from source. However, most of us wait until a software is available as an easy-to-install package for our operating system of choice. In this blog post, you will see an overview on the available packages for syslog-ng 4.11.0.&lt;/p&gt;
&lt;p&gt;Read more at &lt;a href=&#34;https://www.syslog-ng.com/community/b/blog/posts/syslog-ng-4-11-0-packaging-status&#34;&gt;https://www.syslog-ng.com/community/b/blog/posts/syslog-ng-4-11-0-packaging-status&lt;/a&gt;&lt;/p&gt;
&lt;figure&gt;&lt;img src=&#34;https://peter.czanik.hu/images/syslog-ng-logo1.png&#34;/&gt;&lt;figcaption&gt;
            &lt;h4&gt;syslog-ng logo&lt;/h4&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;

</description>
    </item>
    
    <item>
      <title>New toy: Installing Ubuntu on the HP Z2 Mini</title>
      <link>https://peter.czanik.hu/posts/new-toy-installing-ubuntu-on-hp-z2-mini/</link>
      <pubDate>Thu, 26 Feb 2026 15:06:13 +0100</pubDate>
      
      <guid>https://peter.czanik.hu/posts/new-toy-installing-ubuntu-on-hp-z2-mini/</guid>
      <description>&lt;p&gt;The data sheet of my new &lt;a href=&#34;https://peter.czanik.hu/posts/new-toy-in-the-house-for-ai-gaming-linux-windows-freebsd/&#34;&gt;AI focused mini workstation from HP&lt;/a&gt; mentions Ubuntu 24.04 as the supported Linux distribution. I have tried that, but I could not get the installer to run. However, 25.10 installed without any problems, even from an openSUSE branded USB stick :-)&lt;/p&gt;
&lt;figure&gt;&lt;img src=&#34;https://peter.czanik.hu/images/hpmini/usbstick1.jpg&#34;/&gt;&lt;figcaption&gt;
            &lt;h4&gt;Only the chameleon works with this machine:-)&lt;/h4&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;I must admit that I&amp;rsquo;m not an Ubuntu fan, but I installed it anyway, as Ubuntu is the &amp;ldquo;official&amp;rdquo; Linux distro for this machine. GNOME is heavily modified compared to other distros. For GUI apps, the focus seems to have shifted from distro packages to snaps.&lt;/p&gt;
&lt;p&gt;For now, I have not tested the in-hardware AI support; I just tried to collect some first impressions. I ended up installing a few 3D games and playing :-) Having AMD graphics has the advantage that everything works out of the box. There is no need for binary-only drivers, extra repositories, praying to the binary gods, etc. It just works. Fully open source.&lt;/p&gt;
&lt;figure&gt;&lt;img src=&#34;https://peter.czanik.hu/images/hpmini/game1.png&#34;/&gt;&lt;figcaption&gt;
            &lt;h4&gt;SuperTuxKart :-)&lt;/h4&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;This blog is part of a longer series about my adventures with my new machine and AI. You can reach me to discuss this blog on one of the contacts listed in the upper right corner. You can read the rest of the blogs under the &lt;a href=&#34;https://peter.czanik.hu/tags/toy/&#34;&gt;toy&lt;/a&gt; tag.&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>Version 4.11.0 of syslog-ng is now available</title>
      <link>https://peter.czanik.hu/other/version-4110-of-syslog-ng-is-now-available/</link>
      <pubDate>Wed, 25 Feb 2026 14:11:05 +0100</pubDate>
      
      <guid>https://peter.czanik.hu/other/version-4110-of-syslog-ng-is-now-available/</guid>
      <description>&lt;p&gt;Version 4.11.0 of syslog-ng is now available. The main attraction is the brand new Kafka source, but there are many other smaller features and improvements, as well.&lt;/p&gt;
&lt;h1 id=&#34;before-you-begin&#34;&gt;Before you begin&lt;/h1&gt;
&lt;p&gt;If you happen to use Debian, Ubuntu or the RHEL family of operating systems (RHEL, CentOS, Rocky Linux, Alma Linux, Oracle Linux, etc.), then ready-to-use packages are already available as part of the release process. For details, check the README in the syslog-ng source code repository on GitHub: &lt;a href=&#34;https://github.com/syslog-ng/syslog-ng/?tab=readme-ov-file#installation-from-binaries&#34;&gt;https://github.com/syslog-ng/syslog-ng/?tab=readme-ov-file#installation-from-binaries&lt;/a&gt;. The syslog-ng container image is also updated to this release.&lt;/p&gt;
&lt;p&gt;I plan to update Fedora 44 and Rawhide soon, just like openSUSE Tumbleweed. For other distributions, you often need to wait a bit more or use third-party repositories. Our 3rd-party repo page has some pointers: &lt;a href=&#34;https://www.syslog-ng.com/products/open-source-log-management/3rd-party-binaries.aspx&#34;&gt;https://www.syslog-ng.com/products/open-source-log-management/3rd-party-binaries.aspx&lt;/a&gt;&lt;/p&gt;
&lt;h1 id=&#34;what-is-new&#34;&gt;What is new?&lt;/h1&gt;
&lt;p&gt;The largest new feature is the Kafka source, which allows you to collect log messages from Kafka streams. For many years, syslog-ng had a Kafka destination, allowing you to send log messages to a Kafka-based data pipeline. The Kafka source enables syslog-ng to collect log messages from Kafka, parse and filter log messages, and route them to various destinations. You can learn more about the Kafka source from the syslog-ng documentation at &lt;a href=&#34;https://syslog-ng.github.io/admin-guide/060_Sources/038_Kafka/README&#34;&gt;https://syslog-ng.github.io/admin-guide/060_Sources/038_Kafka/README&lt;/a&gt; .&lt;/p&gt;
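&lt;p&gt;As a rough sketch of how such a configuration might look (the option names below are my assumptions, modeled on the long-standing Kafka destination; check the admin guide linked above for the authoritative syntax), a Kafka source could be wired into a log path like this:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;# Hypothetical sketch of a Kafka source; option names are assumptions,
# see the documentation linked above for the real syntax
source s_kafka {
    kafka(
        bootstrap-servers(&#34;localhost:9092&#34;)
        topic(&#34;syslog&#34;)
    );
};
log { source(s_kafka); destination(d_local); };
&lt;/code&gt;&lt;/pre&gt;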
&lt;p&gt;Support for Elasticsearch / OpenSearch data streams was also added: &lt;a href=&#34;https://www.syslog-ng.com/community/b/blog/posts/changes-in-the-syslog-ng-elasticsearch-destination&#34;&gt;https://www.syslog-ng.com/community/b/blog/posts/changes-in-the-syslog-ng-elasticsearch-destination&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;4.11 also includes many other interesting new features and bug fixes, including:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;OAuth2 support in the cloud-auth module, including gRPC-based destinations&lt;/li&gt;
&lt;li&gt;Failover support in the load-balancer&lt;/li&gt;
&lt;li&gt;Improved performance and lowered resource usage on macOS&lt;/li&gt;
&lt;li&gt;cmake support feature parity with autotools&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For a complete list of changes, check the release notes on GitHub: &lt;a href=&#34;https://github.com/syslog-ng/syslog-ng/releases/tag/syslog-ng-4.11.0&#34;&gt;https://github.com/syslog-ng/syslog-ng/releases/tag/syslog-ng-4.11.0&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;As usual, while we make every effort to make all features work everywhere, it is not always technically possible. For example, on older RHEL, SUSE and Debian releases, the compilers and/or dependencies are too old to support gRPC-based modules.&lt;/p&gt;
&lt;h1 id=&#34;what-is-next&#34;&gt;What is next?&lt;/h1&gt;
&lt;p&gt;As usual: feedback is very welcome. If you have any problems with the syslog-ng 4.11.0 release, open an issue on GitHub at &lt;a href=&#34;https://github.com/syslog-ng/syslog-ng/issues&#34;&gt;https://github.com/syslog-ng/syslog-ng/issues&lt;/a&gt;. Your report helps us to make syslog-ng better. Of course, we are also very happy about any positive feedback :-)&lt;/p&gt;
&lt;figure&gt;&lt;img src=&#34;https://peter.czanik.hu/images/syslog-ng-logo1.png&#34;/&gt;&lt;figcaption&gt;
            &lt;h4&gt;syslog-ng logo&lt;/h4&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;Originally published at &lt;a href=&#34;https://www.syslog-ng.com/community/b/blog/posts/version-4-11-0-of-syslog-ng-is-now-available&#34;&gt;https://www.syslog-ng.com/community/b/blog/posts/version-4-11-0-of-syslog-ng-is-now-available&lt;/a&gt;&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>New toy: Installing openSUSE Tumbleweed on the HP Z2 Mini</title>
      <link>https://peter.czanik.hu/posts/new-toy-installing-opensuse-tumbleweed-on-hp-z2-mini/</link>
      <pubDate>Tue, 24 Feb 2026 12:58:13 +0100</pubDate>
      
      <guid>https://peter.czanik.hu/posts/new-toy-installing-opensuse-tumbleweed-on-hp-z2-mini/</guid>
      <description>&lt;p&gt;Last week I introduced you to my new toy at home: &lt;a href=&#34;https://peter.czanik.hu/posts/new-toy-in-the-house-for-ai-gaming-linux-windows-freebsd/&#34;&gt;an AI focused mini workstation from HP&lt;/a&gt;. It arrived with Windows pre-installed, but of course I also wanted to have Linux on the box.&lt;/p&gt;
&lt;p&gt;The documentation mentions that I have to disable Secure Boot and make a few more BIOS changes before installing Linux, so I made all the suggested changes first.&lt;/p&gt;
&lt;p&gt;The data sheet mentions Ubuntu 24.04 as the supported Linux distribution. I tried that, but I could not get the installer to run. Along the way, I realized that USB boot support is very picky on this box. My old USB sticks, which work perfectly in my laptop and old desktop, do not work here at all. Also, changing the USB stick requires turning the machine off and on; a simple reboot is not enough. Finally, I found a USB-C stick, and that almost worked with Ubuntu 24.04: it booted, but the installer crashed.&lt;/p&gt;
&lt;figure&gt;&lt;img src=&#34;https://peter.czanik.hu/images/hpmini/usbstick1.jpg&#34;/&gt;&lt;figcaption&gt;
            &lt;h4&gt;The USB sticks I tried&lt;/h4&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;As I have been a &lt;a href=&#34;https://peter.czanik.hu/posts/opensuse_memories_1/&#34;&gt;S.u.S.E. / openSUSE user for the past 30 years&lt;/a&gt;, I did not mind this failure much. I downloaded the openSUSE Tumbleweed installer, and it worked like a charm. Best of all, unlike openSUSE Leap 16.0, Tumbleweed still has the good old YaST installer I used for decades. Installation was quick, easy and rock solid.&lt;/p&gt;
&lt;p&gt;The surprise arrived when I rebooted the machine: Windows was not available in the boot menu. As it turned out, Tumbleweed used a new flavor of GRUB2 by default, grub2-bls, which does not seem to boot other operating systems. There is no supported way to switch back to grub2-efi, so I reinstalled openSUSE. Luckily it&amp;rsquo;s an easy job, and I did not have any data on the machine yet, so it was just a few mouse clicks.&lt;/p&gt;
&lt;p&gt;openSUSE is my daily driver, so I did not spend much time exploring the system. It seems to work just fine. Installing a few games and checking the in-hardware AI support comes once I have finished installing all operating systems on the machine. Besides Windows, I plan to install openSUSE, Fedora and Ubuntu on the Linux side, as well as FreeBSD.&lt;/p&gt;
&lt;p&gt;This blog is part of a longer series about my adventures with my new machine and AI. You can reach me to discuss this blog on one of the contacts listed in the upper right corner. You can read the rest of the blogs under the &lt;a href=&#34;https://peter.czanik.hu/tags/toy/&#34;&gt;toy&lt;/a&gt; tag.&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>New toy in the house for AI, gaming, Linux, Windows and FreeBSD</title>
      <link>https://peter.czanik.hu/posts/new-toy-in-the-house-for-ai-gaming-linux-windows-freebsd/</link>
      <pubDate>Thu, 19 Feb 2026 12:56:32 +0100</pubDate>
      
      <guid>https://peter.czanik.hu/posts/new-toy-in-the-house-for-ai-gaming-linux-windows-freebsd/</guid>
      <description>&lt;p&gt;There is a new toy in the house. It is a miniature workstation from HP, built around AMD&amp;rsquo;s Ryzen AI Max+ PRO 395 chip. If you are interested in the specifications and other details, check the HP product page at &lt;a href=&#34;https://www.hp.com/us-en/workstations/z2-mini-a.html&#34;&gt;https://www.hp.com/us-en/workstations/z2-mini-a.html&lt;/a&gt;. In the long run, this box will serve many purposes:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;learning AI, but running as much as possible locally instead of utilizing cloud services&lt;/li&gt;
&lt;li&gt;learning Kubernetes by building everything from scratch on multiple virtual machines&lt;/li&gt;
&lt;li&gt;home server: running complex test environments on a single box (128 GB of RAM should be enough in most cases :-) )&lt;/li&gt;
&lt;li&gt;photo editing using Capture One Pro&lt;/li&gt;
&lt;li&gt;occasional gaming :-)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For now, I have finished unboxing and taken the first steps with Windows. It worked, though I made a mistake during setup and had to reinstall. I do not mind, since I do not like using pre-installed operating systems anyway. At least I know the machine works.&lt;/p&gt;
&lt;figure&gt;&lt;img src=&#34;https://peter.czanik.hu/images/hpmini/unbox1.jpg&#34;/&gt;&lt;figcaption&gt;
            &lt;h4&gt;The whole packaging is smaller than my previous desktop computer&lt;/h4&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;

&lt;figure&gt;&lt;img src=&#34;https://peter.czanik.hu/images/hpmini/unbox2.jpg&#34;/&gt;&lt;figcaption&gt;
            &lt;h4&gt;The computer itself is barely larger than a book&lt;/h4&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;

&lt;figure&gt;&lt;img src=&#34;https://peter.czanik.hu/images/hpmini/unbox3.jpg&#34;/&gt;&lt;figcaption&gt;
            &lt;h4&gt;Keyboard, mouse, display port converter all in the box&lt;/h4&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;

&lt;figure&gt;&lt;img src=&#34;https://peter.czanik.hu/images/hpmini/unbox4.jpg&#34;/&gt;&lt;figcaption&gt;
            &lt;h4&gt;On the chaos of my desk :-)&lt;/h4&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;Right now I am hesitant to migrate any production applications or data to the new box. I have already clicked &amp;ldquo;use the whole disk&amp;rdquo; instead of creating a partition a few times, so I want to finalize the partitioning before using the box for anything beyond hardware testing. Better safe than sorry :-)&lt;/p&gt;
&lt;p&gt;I plan to install a couple of Linux distributions. I mainly use openSUSE on the desktop, but I found instructions for Fedora to accelerate AI on this AMD chip. So, I&amp;rsquo;ll most likely install both. And, of course, I also plan to install FreeBSD 15 on the machine and see how it works both as a server and as a desktop.&lt;/p&gt;
&lt;p&gt;I plan to post updates about my experiences in the coming weeks.&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>UDP reliability improved in syslog-ng Debian packaging</title>
      <link>https://peter.czanik.hu/other/syslog-ng-udp-reliability-improved-in-debian-packaging/</link>
      <pubDate>Thu, 19 Feb 2026 11:19:37 +0100</pubDate>
      
      <guid>https://peter.czanik.hu/other/syslog-ng-udp-reliability-improved-in-debian-packaging/</guid>
      <description>&lt;p&gt;UDP log collection is a legacy feature that does not provide any security or reliability, but is still in wide use. You can improve its reliability using eBPF on Linux in recent syslog-ng versions. Support for eBPF was added to Debian packages while preparing for the 4.11.0 syslog-ng release.&lt;/p&gt;
&lt;p&gt;You can learn more about eBPF support in syslog-ng from the documentation, or from my blog at &lt;a href=&#34;https://www.syslog-ng.com/community/b/blog/posts/syslog-ng-4-2-extra-udp-performance&#34;&gt;https://www.syslog-ng.com/community/b/blog/posts/syslog-ng-4-2-extra-udp-performance&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Right now, the packaging changes only affect the syslog-ng nightly Debian / Ubuntu packages and the syslog-ng nightly container image. You can learn more about how to use them in the syslog-ng README on GitHub at &lt;a href=&#34;https://github.com/syslog-ng/syslog-ng/&#34;&gt;https://github.com/syslog-ng/syslog-ng/&lt;/a&gt;. Once the syslog-ng 4.11.0 release is available, the stable syslog-ng packages will include the improved UDP support as well.&lt;/p&gt;
&lt;p&gt;Are you interested in improving TCP performance for a single or a few high-traffic connections? You are looking for the parallelize() option: &lt;a href=&#34;https://www.syslog-ng.com/community/b/blog/posts/accelerating-single-tcp-connections-in-syslog-ng-parallelize&#34;&gt;https://www.syslog-ng.com/community/b/blog/posts/accelerating-single-tcp-connections-in-syslog-ng-parallelize&lt;/a&gt;. The good news is that the required changes are now available in ivykis upstream, so this feature is no longer limited to our builds.&lt;/p&gt;
&lt;figure&gt;&lt;img src=&#34;https://peter.czanik.hu/images/syslog-ng-logo1.png&#34;/&gt;&lt;figcaption&gt;
            &lt;h4&gt;syslog-ng logo&lt;/h4&gt;
        &lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;Originally published at &lt;a href=&#34;https://www.syslog-ng.com/community/b/blog/posts/udp-reliability-improved-in-syslog-ng-debian-packaging&#34;&gt;https://www.syslog-ng.com/community/b/blog/posts/udp-reliability-improved-in-syslog-ng-debian-packaging&lt;/a&gt;&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>The syslog-ng Insider 2026-02: stats-exporter; blank filter; Kafka source</title>
      <link>https://peter.czanik.hu/other/syslog-ng-insider-2026-02-stats-exporter-blank-filter-kafka-source/</link>
      <pubDate>Fri, 13 Feb 2026 11:34:09 +0100</pubDate>
      
      <guid>https://peter.czanik.hu/other/syslog-ng-insider-2026-02-stats-exporter-blank-filter-kafka-source/</guid>
      <description>&lt;p&gt;The February syslog-ng newsletter is now on-line:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The syslog-ng stats-exporter() now has all functionality of syslog-ng-ctl&lt;/li&gt;
&lt;li&gt;Using the blank() filter of syslog-ng&lt;/li&gt;
&lt;li&gt;How to test the syslog-ng Kafka source by building the package yourself?&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;It is available at &lt;a href=&#34;https://www.syslog-ng.com/community/b/blog/posts/the-syslog-ng-insider-2026-02-stats-exporter-blank-filter-kafka-source&#34;&gt;https://www.syslog-ng.com/community/b/blog/posts/the-syslog-ng-insider-2026-02-stats-exporter-blank-filter-kafka-source&lt;/a&gt;&lt;/p&gt;
&lt;figure&gt;&lt;img src=&#34;https://peter.czanik.hu/images/syslog-ng-logo1.png&#34;/&gt;&lt;figcaption&gt;
             &lt;h4&gt;syslog-ng logo&lt;/h4&gt;
         &lt;/figcaption&gt;
 &lt;/figure&gt;

</description>
    </item>
    
  </channel>
</rss>
