mirror of https://github.com/cirruslabs/tart.git
<?xml version="1.0" encoding="UTF-8" ?> <rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/"> <channel> <title>Tart Virtualization</title><description>Tart is a virtualization toolset to build, run and manage macOS and Linux virtual machines (VMs) on Apple Silicon. </description><link>https://tart.run/</link><atom:link href="https://tart.run/feed_rss_updated.xml" rel="self" type="application/rss+xml" /> <managingEditor>Cirrus Labs</managingEditor><docs>https://github.com/cirruslabs/tart/</docs><language>en</language> <pubDate>Sun, 12 Apr 2026 02:36:10 -0000</pubDate> <lastBuildDate>Sun, 12 Apr 2026 02:36:10 -0000</lastBuildDate> <ttl>1440</ttl> <generator>MkDocs RSS plugin - v1.17.9</generator> <image> <url>None</url> <title>Tart Virtualization</title> <link>https://tart.run/</link> </image> <item> <title>Changing Tart License</title> <author>Fedor Korotkov</author> <description><h1 id="changing-tart-license">Changing Tart License<a class="headerlink" href="#changing-tart-license" title="Permanent link">&para;</a></h1> <p><strong>TLDR:</strong> We are transitioning Tart's licensing from AGPL-3.0 to <a href="https://fair.io/">Fair Source 100</a>. This change will permit unlimited installations on personal computers, but organizations that exceed a certain number of server installations utilizing 100 CPU cores will be required to obtain a paid license.</p> <h2 id="background">Background<a class="headerlink" href="#background" title="Permanent link">&para;</a></h2> <p>Exactly a year ago on February 11<sup>th</sup> 2022 we started working on Tart – a tiny CLI to run macOS virtual machines on Apple Silicon. 
Three months later we successfully started using Tart in our own production system and decided to share Tart with everyone.</p> <p><img src="https://github.com/cirruslabs/tart/raw/main/Resources/TartSocial.png"/></p> <p>The goal was to establish a community of users and contributors to transform Tart from a small CLI to a robust tool for various scenarios. <strong>Unfortunately, we were not successful in attracting a significant number of contributors.</strong> It's important to note that we did have seven individuals who contributed to the development of Tart to the best of their abilities. However, one of the challenges of contributing to Tart is that the skill set required for a contribution is vastly different from the skill set typically possessed by regular Tart users in their daily work. Specifically, a contributor needs to have knowledge of the Swift programming language, as well as a background in operating systems and network stack. This is the reason why <strong>98.8% of the code and all the major features were contributed by Cirrus Labs engineers.</strong></p> <!-- more --> <p>Tart is experiencing significant success among users and has seen widespread adoption for various applications. The latest macOS Ventura virtual machine image has been downloaded over 27,000 times! We are continually receiving feedback from an increasing number of users who are utilizing Tart in ways we had not initially anticipated. However, with a growing user base comes a rise in requests for new features and enhancements. It can be challenging to justify dedicating our engineering resources to meeting these demands when they do not align with the needs of our company, Cirrus Labs. 
As a small, self-funded organization, our priority is to provide for our employees and their families along with developing great products.</p> <p>In addition, the <strong>decision to use AGPL-3.0 as the license for Tart was not thoroughly considered at the time of its release.</strong> The choice was made because many companies that were commercializing their products had recently switched to the AGPL license. However, AGPL has a reputation for being viral, open to interpretation, and not in line with current standards. Additionally, many organizations have policies against using any AGPL-licensed software in their stacks, which has limited Tart's potential for wider adoption. See <a href="https://opensource.google/documentation/reference/using/agpl-policy">Google's AGPL policy</a>, for example.</p> <p>In order to ensure Tart's long-term viability and to allow us to allocate engineering resources towards further improving Tart, we plan to transition to a licensing model that includes a nominal fee for companies that reach a substantial level of usage.</p> <h2 id="what-is-changing">What is changing<a class="headerlink" href="#what-is-changing" title="Permanent link">&para;</a></h2> <p>In the near future, we are set to launch the first version of Orchard for Tart, a tool that facilitates the coordination of Tart virtual machines on a cluster of Apple Silicon servers. Concurrently, we will also release version 1.0.0 of Tart, which will establish a stable API and offer long-term support under a new Fair Source 100 license.</p> <p>The Fair Source 100 license for Tart means that once a certain threshold of server installations utilizing 100 CPU cores is exceeded, a paid license will be required. A "server installation" refers to the installation of Tart on a physical device without a physical display connected. 
For example, a Mac Mini with an HDMI Dummy Plug is considered a server, but a Mac Mini on a desk with a connected physical display is considered a personal computer. <strong>Usage on personal computers and before reaching the 100 CPU cores limit is royalty-free and does not have the viral properties of AGPL.</strong></p> <div class="admonition note"> <p class="admonition-title">Pricing update</p> <p>This post announced Tart licensing in February 2023 and originally listed monthly prices. Pricing has since changed to yearly billing. See <a href="../../../../../licensing/#license-tiers">Licensing and Support</a> for the latest terms.</p> </div> <p>When an organization surpasses the 100 CPU cores limit, it will be required to obtain a <a href="../../../../../licensing/#license-tiers">Gold Tier License</a>, which costs $12,000 per year. Upon reaching a limit of 500 CPU cores, a <a href="../../../../../licensing/#license-tiers">Platinum Tier License</a> ($36,000 per year) will be required, and for organizations that exceed 3000 CPU cores, a custom <a href="../../../../../licensing/#license-tiers">Diamond Tier License</a> ($12 per core per year) will be necessary. <strong>All paid license tiers will include priority feature development and SLAs on support for urgent issues.</strong></p> <h2 id="have-we-considered-alternatives">Have we considered alternatives?<a class="headerlink" href="#have-we-considered-alternatives" title="Permanent link">&para;</a></h2> <p>We have evaluated other options. Initially, we reached out to some of our largest users and asked them to consider sponsoring the development of features that they were interested in. However, our requests either went unanswered or were eventually ignored. Another option we considered was using the open core model and developing enterprise-specific features. However, this approach does not address concerns related to the viral nature of AGPL for non-enterprise users.
Ultimately, we concluded that transitioning to a source-available model with mandatory paid licensing is fair, as the licensing fees are relatively modest for companies that reach a significant level of usage.</p> <p>If you have any questions or concerns, please feel free to reach out to <a href="mailto:licensing@cirruslabs.org">licensing@cirruslabs.org</a>. If the new licensing model is not suitable for your organization, you are welcome to continue using the AGPL version of Tart, but please ensure it is not used in a non-AGPL environment.</p></description> <link>https://tart.run/blog/2023/02/11/changing-tart-license/</link> <pubDate>Fri, 13 Feb 2026 15:46:01 +0000</pubDate> <source url="https://tart.run/feed_rss_updated.xml">Tart Virtualization</source><guid isPermaLink="true">https://tart.run/blog/2023/02/11/changing-tart-license/</guid> <enclosure url="https://tart.run/assets/images/social/blog/2023/02/11/changing-tart-license.png" type="image/png" length="57582" /> </item> <item> <title>Press Release: Cirrus Labs Successfully Enforces Its Fair Source License</title> <author>Fedor Korotkov</author> <description><h1 id="press-release-cirrus-labs-successfully-enforces-its-fair-source-license">Press Release: Cirrus Labs Successfully Enforces Its Fair Source License<a class="headerlink" href="#press-release-cirrus-labs-successfully-enforces-its-fair-source-license" title="Permanent link">&para;</a></h1> <p><strong>New York City, NY – October 27<sup>th</sup>, 2025 – Cirrus Labs, Inc.</strong>, a leading provider of platforms for digital transformation, today announced that it has reached a settlement agreement regarding a violation of its Fair Source License.</p> <!-- more --> <p>Cirrus Labs makes its Tart Virtualization Toolset, a leading virtualization toolset to build, run and manage macOS and Linux virtual machines (VMs) on Apple Silicon, freely available on GitHub under the Fair Source License, a source-available license. 
Tart is used by tens of thousands of engineers at no charge within its generous free‑use limits. Many large enterprises that need to exceed those limits support continued development through paid licenses. Cirrus Labs also uses Tart to power <a href="https://cirrus-runners.app/">Cirrus Runners</a> — a drop‑in replacement for macOS and Linux runners for GitHub Actions — offered at a fixed monthly price for unlimited usage.</p> <p>Cirrus Labs discovered that, <strong>despite a prior licensing request that was declined due to a conflict of interest</strong>, another company used Tart in a manner that exceeded the license’s free‑use limits, in order to create a competing product.</p> <p>After several months of negotiations, the matter was settled and a settlement payment to Cirrus Labs was agreed upon.</p> <div class="admonition quote"> <p class="admonition-title">Comment by Fedor Korotkov, CEO of Cirrus Labs</p> <p>As a company we embrace healthy competition that ultimately benefits the end user. Most of our users have no trouble complying with our license, and even when they need something more than our free use limits, we can almost always grant them a license that fits their needs. <strong>This was an exceptional case.</strong> We are pleased to have reached this settlement, which validates our source-available licensing strategy and reinforces our commitment to protecting our company and serving our community.</p> </div> <p>Cirrus Labs was represented in this matter by <a href="https://byronraphael.com/attorneys/jordan-raphael/">Jordan Raphael</a> of Byron Raphael LLP, a boutique intellectual property law firm, and <a href="https://www.techlawpartners.com/heather">Heather Meeker</a>, a well-known specialist in open source and source available licensing.</p> <p>The specific financial terms of the settlement and the identity of the counterparty remain confidential.</p> <p><strong>About Cirrus Labs:</strong> Cirrus Labs, Inc. 
is a bootstrapped developer-infrastructure company founded in 2017. Our offerings among others include Tart and Cirrus Runners, and our software is used by teams at category-leading companies including Atlassian, Figma, Zendesk, Sentry and many more.</p> <p>Learn more at <a href="https://tart.run/">https://tart.run/</a> and <a href="https://cirrus-runners.app/">https://cirrus-runners.app/</a>.</p> <p><strong>Contact:</strong> <a href="mailto:hello@cirruslabs.org">hello@cirruslabs.org</a></p></description> <link>https://tart.run/blog/2025/10/27/press-release-cirrus-labs-successfully-enforces-its-fair-source-license/</link> <pubDate>Mon, 27 Oct 2025 16:04:35 +0000</pubDate> <source url="https://tart.run/feed_rss_updated.xml">Tart Virtualization</source><guid isPermaLink="true">https://tart.run/blog/2025/10/27/press-release-cirrus-labs-successfully-enforces-its-fair-source-license/</guid> <enclosure url="https://tart.run/assets/images/social/blog/2025/10/27/press-release-cirrus-labs-successfully-enforces-its-fair-source-license.png" type="image/png" length="71218" /> </item> <item> <title>Announcing Orchard orchestration for managing macOS virtual machines at scale</title> <author>Fedor Korotkov</author> <description><h1 id="announcing-orchard-orchestration-for-managing-macos-virtual-machines-at-scale">Announcing Orchard orchestration for managing macOS virtual machines at scale<a class="headerlink" href="#announcing-orchard-orchestration-for-managing-macos-virtual-machines-at-scale" title="Permanent link">&para;</a></h1> <p>Today we are happy to announce general availability of Orchard – our new orchestrator to manage Tart virtual machines at scale. 
In this post we’ll cover the motivation behind creating yet another orchestrator and why we didn’t go with Kubernetes or Nomad integration.</p> <h2 id="what-problem-are-we-trying-to-solve">What problem are we trying to solve?<a class="headerlink" href="#what-problem-are-we-trying-to-solve" title="Permanent link">&para;</a></h2> <p>After releasing Tart we pretty quickly started getting requests about managing macOS virtual machines on a cluster of Apple Silicon machines rather than just a single host, which only allows a maximum of two virtual machines at a time. By the end of 2022 the requests reached a tipping point, and we started planning.</p> <!-- more --> <p>First, we established some constraints about the end users and the potential workload our solution should handle. Running macOS or Linux virtual machines on Apple Silicon is a very niche use case. These VMs are either used in automation solutions like CI/CD or for managing remote desktop environments. In this case <strong>we are aiming to manage only thousands of virtual machines and not millions</strong>.</p> <p>Second, <strong>operators of such solutions won’t have experience operating Kubernetes or Nomad</strong>. Operators will most likely come with experience using such systems but not managing them. And again, built-in features like RBAC and the ability to scale to millions were appealing, but it seemed like that would be a solution for a few rather than a solution for everybody to use. Additionally, Orchard should provide <strong>first-class support for accessing virtual machines over SSH/VNC</strong> and support script execution.</p> <p>By that time, the idea of building a simple opinionated orchestrator got more and more appealing. 
Plus we kind of already did it for <a href="https://cirrus-ci.org/guide/persistent-workers/">Cirrus CI’s persistent workers</a> feature.</p> <h2 id="technical-constraints">Technical constraints<a class="headerlink" href="#technical-constraints" title="Permanent link">&para;</a></h2> <p>With the UX constraints and expectations in place we started thinking about the architecture for the orchestrator that we started calling <strong>Orchard</strong>.</p> <script src="https://unpkg.com/@dotlottie/player-component@latest/dist/dotlottie-player.js"></script> <p><dotlottie-player src="/assets/animations/Orchard.lottie" mode="normal" style="width: 100%; height: 360px; margin: auto; background-color: rgb(5 62 94)" autoplay loop /></p> <p>Since Orchard will manage a maximum of a couple thousand virtual machines, not millions, we <strong>decided not to think much about horizontal scalability.</strong> A single instance of the Orchard controller should be enough if it can restart quickly and persist state between restarts.</p> <p><strong>Orchard should be secure by default</strong>. All the communication between a controller and workers should be secure. All external API requests to the Orchard controller should be authorized.</p> <p>During development it’s crucial to have a quick feedback cycle. <strong>It should be extremely easy to run Orchard in development</strong>. Configuring a production cluster should also be easy for novice operators.</p> <h2 id="high-level-implementation-details">High-level implementation details<a class="headerlink" href="#high-level-implementation-details" title="Permanent link">&para;</a></h2> <p>Cirrus Labs started as a predominantly Kotlin shop with a little Go. But over the years we gradually moved a lot of things to Go. 
We love the expressiveness of Kotlin as a language, but the ecosystem for writing system utilities and services is superb in Go.</p> <p>Orchard is a single Go project that implements both the controller’s server interface and the worker’s client logic in a single repository. This simplifies code sharing and testability of both components, and allows changing them in a single pull request.</p> <p>Another benefit is that Orchard can be distributed as a single binary. We intend to run the Orchard controller on a single host. The data model for orchestration didn’t look complex either. These observations led us to explore the use of an embedded database. Just imagine! <strong>Orchard can be distributed as a single binary with no external dependencies on any database or runtime!</strong></p> <p>And we did exactly that! Orchard is distributed as a single binary that can be run in “controller” mode on a Linux/macOS host and in “worker” mode on macOS hosts. The Orchard controller uses the extremely fast <a href="https://dgraph.io/docs/badger/">BadgerDB</a> key-value store to persist data.</p> <h2 id="conclusion">Conclusion<a class="headerlink" href="#conclusion" title="Permanent link">&para;</a></h2> <p>Please give <a href="https://github.com/cirruslabs/orchard">Orchard</a> a try! To run it locally in development mode on any Apple Silicon device please run the following command:</p> <div class="highlight"><pre><span></span><code><a id="__codelineno-0-1" name="__codelineno-0-1" href="#__codelineno-0-1"></a>brew<span class="w"> </span>install<span class="w"> </span>cirruslabs/cli/orchard <a id="__codelineno-0-2" name="__codelineno-0-2" href="#__codelineno-0-2"></a>orchard<span class="w"> </span>dev </code></pre></div> <p>This will launch a development cluster with a single worker on your machine. 
Refer to <a href="https://github.com/cirruslabs/orchard#creating-virtual-machines">Orchard documentation</a> on how to create your first virtual machine and access it.</p> <p>In a <a href="../../28/ssh-over-grpc-or-how-orchard-simplifies-accessing-vms-in-private-networks/">separate blog post</a> we’ll cover how Orchard implements seamless SSH access over a gRPC connection. Stay tuned and please don’t hesitate to <a href="https://github.com/cirruslabs/orchard/discussions/landing">reach out</a>! </p></description> <link>https://tart.run/blog/2023/04/25/announcing-orchard-orchestration-for-managing-macos-virtual-machines-at-scale/</link> <pubDate>Mon, 22 Sep 2025 20:02:39 +0000</pubDate> <source url="https://tart.run/feed_rss_updated.xml">Tart Virtualization</source><guid isPermaLink="true">https://tart.run/blog/2023/04/25/announcing-orchard-orchestration-for-managing-macos-virtual-machines-at-scale/</guid> <enclosure url="https://tart.run/assets/images/social/blog/2023/04/25/announcing-orchard-orchestration-for-managing-macos-virtual-machines-at-scale.png" type="image/png" length="77793" /> </item> <item> <title>SSH over gRPC or how Orchard simplifies accessing VMs in private networks</title> <author>Nikolay Edigaryev</author> <description><h1 id="ssh-over-grpc-or-how-orchard-simplifies-accessing-vms-in-private-networks">SSH over gRPC or how Orchard simplifies accessing VMs in private networks<a class="headerlink" href="#ssh-over-grpc-or-how-orchard-simplifies-accessing-vms-in-private-networks" title="Permanent link">&para;</a></h1> <p>We started developing <a href="https://github.com/cirruslabs/orchard">Orchard</a>, an orchestrator for <a href="https://tart.run/">Tart</a>, with the requirement that it should allow users to access virtual machines running on worker nodes in private networks that users might not have access to.</p> <p>At the same time, we wanted to enable users to access VMs on these remote workers just as easily as they’d access network services on 
their local Tart VMs.</p> <p>While these features sound great on paper, they pose a technical problem: how do we connect to the remote workers, let alone VMs running on these workers, if we can’t assume that these workers will be easily reachable? And how do we establish an SSH connection with a VM running on a remote worker through all these hoops?</p> <!-- more --> <h2 id="implementing-port-forwarding-grpc-to-the-rescue">Implementing port forwarding: gRPC to the rescue<a class="headerlink" href="#implementing-port-forwarding-grpc-to-the-rescue" title="Permanent link">&para;</a></h2> <p>We need to keep a full-duplex connection with the controller for the port-forwarding to work, and the two obvious protocol options are:</p> <ul> <li>WebSocket API through a new controller’s REST API endpoint</li> <li>gRPC using <code>Content-Type</code> differentiation</li> </ul> <p>We’ve chosen the gRPC for controller ↔︎ worker connection, simply because it requires less code on our side and it will only be used internally, which means we don’t need to document it as extensively as our REST API. In essence, port forwarding is streaming of bytes of a connection in both ways, so gRPC streams looked like a natural solution. 
The resulting protocol is dead simple:</p> <div class="highlight"><pre><span></span><code><a id="__codelineno-0-1" name="__codelineno-0-1" href="#__codelineno-0-1"></a><span class="kd">service</span><span class="w"> </span><span class="n">Controller</span><span class="w"> </span><span class="p">{</span> <a id="__codelineno-0-2" name="__codelineno-0-2" href="#__codelineno-0-2"></a><span class="w"> </span><span class="k">rpc</span><span class="w"> </span><span class="n">Watch</span><span class="p">(</span><span class="n">google.protobuf.Empty</span><span class="p">)</span><span class="w"> </span><span class="k">returns</span><span class="w"> </span><span class="p">(</span><span class="n">stream</span><span class="w"> </span><span class="n">WatchInstruction</span><span class="p">);</span> <a id="__codelineno-0-3" name="__codelineno-0-3" href="#__codelineno-0-3"></a> <a id="__codelineno-0-4" name="__codelineno-0-4" href="#__codelineno-0-4"></a><span class="w"> </span><span class="k">rpc</span><span class="w"> </span><span class="n">PortForward</span><span class="p">(</span><span class="n">stream</span><span class="w"> </span><span class="n">PortForwardData</span><span class="p">)</span><span class="w"> </span><span class="k">returns</span><span class="w"> </span><span class="p">(</span><span class="n">stream</span><span class="w"> </span><span class="n">PortForwardData</span><span class="p">);</span> <a id="__codelineno-0-5" name="__codelineno-0-5" href="#__codelineno-0-5"></a><span class="p">}</span> <a id="__codelineno-0-6" name="__codelineno-0-6" href="#__codelineno-0-6"></a> <a id="__codelineno-0-7" name="__codelineno-0-7" href="#__codelineno-0-7"></a><span class="kd">message</span><span class="w"> </span><span class="nc">WatchInstruction</span><span class="w"> </span><span class="p">{</span> <a id="__codelineno-0-8" name="__codelineno-0-8" href="#__codelineno-0-8"></a><span class="w"> </span><span class="kd">message</span><span class="w"> </span><span 
class="nc">PortForward</span><span class="w"> </span><span class="p">{</span> <a id="__codelineno-0-9" name="__codelineno-0-9" href="#__codelineno-0-9"></a><span class="w"> </span><span class="kt">string</span><span class="w"> </span><span class="na">session</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="mi">1</span><span class="p">;</span> <a id="__codelineno-0-10" name="__codelineno-0-10" href="#__codelineno-0-10"></a><span class="w"> </span><span class="kt">string</span><span class="w"> </span><span class="na">vm_uid</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="mi">2</span><span class="p">;</span> <a id="__codelineno-0-11" name="__codelineno-0-11" href="#__codelineno-0-11"></a><span class="w"> </span><span class="kt">uint32</span><span class="w"> </span><span class="na">vm_port</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="mi">3</span><span class="p">;</span> <a id="__codelineno-0-12" name="__codelineno-0-12" href="#__codelineno-0-12"></a><span class="w"> </span><span class="p">}</span> <a id="__codelineno-0-13" name="__codelineno-0-13" href="#__codelineno-0-13"></a> <a id="__codelineno-0-14" name="__codelineno-0-14" href="#__codelineno-0-14"></a><span class="w"> </span><span class="k">oneof</span><span class="w"> </span><span class="n">action</span><span class="w"> </span><span class="p">{</span> <a id="__codelineno-0-15" name="__codelineno-0-15" href="#__codelineno-0-15"></a><span class="w"> </span><span class="n">PortForward</span><span class="w"> </span><span class="na">port_forward_action</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="mi">1</span><span class="p">;</span> <a id="__codelineno-0-16" name="__codelineno-0-16" href="#__codelineno-0-16"></a><span class="w"> </span><span class="p">}</span> <a id="__codelineno-0-17" name="__codelineno-0-17" 
href="#__codelineno-0-17"></a><span class="p">}</span> <a id="__codelineno-0-18" name="__codelineno-0-18" href="#__codelineno-0-18"></a> <a id="__codelineno-0-19" name="__codelineno-0-19" href="#__codelineno-0-19"></a><span class="kd">message</span><span class="w"> </span><span class="nc">PortForwardData</span><span class="w"> </span><span class="p">{</span> <a id="__codelineno-0-20" name="__codelineno-0-20" href="#__codelineno-0-20"></a><span class="w"> </span><span class="kt">bytes</span><span class="w"> </span><span class="na">data</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="mi">1</span><span class="p">;</span> <a id="__codelineno-0-21" name="__codelineno-0-21" href="#__codelineno-0-21"></a><span class="p">}</span> </code></pre></div> <p>On bootstrap, each Orchard worker establishes a <code>Watch()</code> RPC stream and waits for the <code>PortForward</code> instruction from the controller indefinitely. This long-running session might be used not just for port-forwarding, but for notifying the workers about changed resources, which results in workers picking up your VM for execution instantly.</p> <p>Once <code>PortForward</code> instruction is received, the worker connects to the specified VM and port locally and opens a new <code>PortForward()</code> RPC stream with the controller, carrying the unique <code>session</code> identifier in the gRPC metadata to help distinguish several port forwarding requests.</p> <p>We’re using a pretty ingenious <a href="https://github.com/mitchellh/go-grpc-net-conn">Golang package that turns any gRPC stream into a <code>net.Conn</code></a>. 
This allows us to abstract from the gRPC details and simply proxy two <code>net.Conns</code>, thus providing the port forwarding functionality.</p> <p>We also initially considered using <a href="https://github.com/hashicorp/yamux">Yamux</a> to keep only a single connection with each worker. However, that involves the burden of dealing with flow control and the potential implementation bugs associated with it, so we decided to simply open an additional connection for each port-forwarding session and let the OS deal with it.</p> <h2 id="building-on-top-of-the-port-forwarding">Building on top of the port-forwarding<a class="headerlink" href="#building-on-top-of-the-port-forwarding" title="Permanent link">&para;</a></h2> <p>First of all, we’ve made the new port-forwarding functionality available for integrations via Orchard’s REST API:</p> <p><img alt="OpenAPI documentation for Orchard's port-forwarding endpoint" src="../../../../../assets/images/orchard-port-forwarding-api.png" /></p> <p>All you need to make it work is a WebSocket client when accessing this endpoint.</p> <p>Secondly, we’ve exposed three commands in the Orchard CLI that all use this endpoint:</p> <h3 id="orchard-port-forward"><code>orchard port-forward</code><a class="headerlink" href="#orchard-port-forward" title="Permanent link">&para;</a></h3> <p>Opens a TCP port locally and forwards everything sent to it to the specified VM (and vice versa).</p> <p>For example, <code>orchard port-forward vm sonoma-builder 2222:22</code> will forward traffic from the local TCP port <code>2222</code> to the <code>sonoma-builder</code> VM’s TCP port <code>22</code>.</p> <h3 id="orchard-ssh"><code>orchard ssh</code><a class="headerlink" href="#orchard-ssh" title="Permanent link">&para;</a></h3> <p>Connects to the specified VM on the default SSH port <code>22</code>, optionally only launching a command (if specified), similarly to what the official OpenSSH client does.</p> <p>For example, <code>orchard ssh vm sonoma-builder</code> will open an interactive session with the <code>sonoma-builder</code> VM.</p> <p>You can also send local scripts for execution by utilizing redirection:</p> <div class="highlight"><pre><span></span><code><a id="__codelineno-1-1" name="__codelineno-1-1" href="#__codelineno-1-1"></a>orchard<span class="w"> </span>ssh<span class="w"> </span>vm<span class="w"> </span>sonoma-builder<span class="w"> </span><span class="s1">&#39;sh -s&#39;</span><span class="w"> </span>&lt;<span class="w"> </span>script.sh </code></pre></div> <h3 id="orchard-vnc"><code>orchard vnc</code><a class="headerlink" href="#orchard-vnc" title="Permanent link">&para;</a></h3> <p>Establishes port forwarding to the specified VM’s default VNC port <code>5900</code> and opens the default macOS Screen Sharing app.</p> <p>For example, <code>orchard vnc vm sonoma-builder</code> will establish port forwarding to the <code>sonoma-builder</code> VM's port <code>5900</code> under the hood and launch the macOS Screen Sharing app.</p> <p>Note that the SSH and VNC commands expect the VM resource to specify credentials in its definition (can be done via <code>orchard create vm</code>), and will otherwise fall back to the credentials specified by <code>--username</code> and <code>--password</code>, or, if none are specified, to the de facto standard <code>admin:admin</code> credentials.</p> <h2 id="conclusion">Conclusion<a class="headerlink" href="#conclusion" title="Permanent link">&para;</a></h2> <p>Overall, the technology described in this article somewhat resembles what <a href="https://cirrus-ci.org/blog/2021/08/06/introducing-cirrus-terminal-a-simple-way-to-get-ssh-like-access-to-your-tasks/">we previously did for Cirrus Terminal</a>. 
The only difference is that in Cirrus Terminal we carry terminal-specific characters, and in Orchard — we carry bytes for an arbitrary TCP connection.</p> <p>We really hope this feature will be useful for many, just as the Cirrus Terminal, and that it will remove the pain of scaling Tart beyond a single machine.</p> <p>You can give <a href="https://github.com/cirruslabs/orchard">Orchard</a> a try by running it locally in development mode on any Apple Silicon device:</p> <div class="highlight"><pre><span></span><code><a id="__codelineno-2-1" name="__codelineno-2-1" href="#__codelineno-2-1"></a>brew<span class="w"> </span>install<span class="w"> </span>cirruslabs/cli/orchard <a id="__codelineno-2-2" name="__codelineno-2-2" href="#__codelineno-2-2"></a>orchard<span class="w"> </span>dev </code></pre></div> <p>This will launch a development cluster with a single worker on your machine. Refer to <a href="https://github.com/cirruslabs/orchard#creating-virtual-machines">Orchard documentation</a> on how to create your first virtual machine and access it.</p> <p>Stay tuned and don’t hesitate to send us your feedback either <a href="https://github.com/cirruslabs/orchard">on GitHub</a> or <a href="https://twitter.com/cirrus_labs">Twitter</a>!</p></description> <link>https://tart.run/blog/2023/04/28/ssh-over-grpc-or-how-orchard-simplifies-accessing-vms-in-private-networks/</link> <pubDate>Mon, 22 Sep 2025 20:02:39 +0000</pubDate> <source url="https://tart.run/feed_rss_updated.xml">Tart Virtualization</source><guid isPermaLink="true">https://tart.run/blog/2023/04/28/ssh-over-grpc-or-how-orchard-simplifies-accessing-vms-in-private-networks/</guid> <enclosure url="https://tart.run/assets/images/social/blog/2023/04/28/ssh-over-grpc-or-how-orchard-simplifies-accessing-vms-in-private-networks.png" type="image/png" length="79111" /> </item> <item> <title>Tart 2.0.0 and community updates</title> <author>Fedor Korotkov</author> <description><h1 id="tart-200-and-community-updates">Tart 
2.0.0 and community updates<a class="headerlink" href="#tart-200-and-community-updates" title="Permanent link">&para;</a></h1> <p>Today we'd like to share some news and updates around the Tart ecosystem since the Tart 1.0.0 release back in February.</p> <!-- more --> <h2 id="community-growth">Community Growth<a class="headerlink" href="#community-growth" title="Permanent link">&para;</a></h2> <p>In the last 7 months the Tart community has almost tripled, and growth continues to accelerate. Tart just crossed 25,000 installations, and dozens of companies that we know of use Tart in their daily workflows. If your company is not on the list, please consider <a href="https://github.com/cirruslabs/tart/blob/main/Resources/Users/HowToAddYourself.md">joining</a>!</p> <div class="grid cards"> <ul> <li><img alt="" height="65" src="https://github.com/cirruslabs/tart/raw/main/Resources/Users/Krisp.png" /></li> <li><img alt="" height="65" src="https://github.com/cirruslabs/tart/raw/main/Resources/Users/Mullvad.png" /></li> <li><img alt="" height="65" src="https://github.com/cirruslabs/tart/raw/main/Resources/Users/ahrefs.png" /></li> <li><img alt="" height="65" src="https://github.com/cirruslabs/tart/raw/main/Resources/Users/Suran.png" /></li> <li><img alt="" height="65" src="https://github.com/cirruslabs/tart/raw/main/Resources/Users/Symflower.png" /></li> <li><img alt="" height="65" src="https://github.com/cirruslabs/tart/raw/main/Resources/Users/Transloadit.png" /></li> <li><img alt="" height="65" src="https://github.com/cirruslabs/tart/raw/main/Resources/Users/PITSGlobalDataRecoveryServices.png" /></li> <li><img alt="" height="65" src="https://github.com/cirruslabs/tart/raw/main/Resources/Users/Uphold.png" /></li> </ul> </div> <p>We are also very pleased by how the community responded to <a href="../../../02/11/changing-tart-license/">the license change</a>. We now have a number of companies running Tart at scale under the new license. 
Revenue from the licensing allowed us to allocate time to continue improving Tart, which brings us to the section below.</p> <h2 id="recent-updates-and-whats-changing-in-tart-200">Recent updates and what's changing in Tart 2.0.0<a class="headerlink" href="#recent-updates-and-whats-changing-in-tart-200" title="Permanent link">&para;</a></h2> <p>In the last 7 months we've had 12 feature releases that brought a lot of features requested by the community. Here are just a few of them to highlight:</p> <ul> <li><a href="../../../../../integrations/gitlab-runner/">Custom GitLab Runner Executor</a>.</li> <li><a href="../../../04/25/announcing-orchard-orchestration-for-managing-macos-virtual-machines-at-scale/">Cluster Management via Orchard</a>.</li> <li>Numerous compatibility improvements for all kinds of OCI registries.</li> <li>Sonoma Support (see details <a href="#macos-sonoma-updates">below</a>).</li> </ul> <p>But one of the most requested features was around pulling huge Tart images from remote OCI-compatible registries. Under ideal network conditions <code>tart pull</code> worked pretty well, but any network issue required restarting the pull from scratch. Additionally, some registries are notably slow at streaming a single blob but can stream multiple blobs in parallel. Finally, the initial format for storing Tart VMs was very naive: the disk image was compressed via a single stream, which was chunked up into blobs that were serially uploaded to a registry. A single compression stream meant that Tart could also only decompress blobs serially.</p> <p>Given these three observations, we came up with an improved format for storing Tart VM disk images. In Tart 2.0.0, disk images are chunked up first and compressed independently into blobs; when pushed, each blob has attached annotations with its expected uncompressed size and a checksum. This way, when Tart 2.0.0 pulls an image pushed by Tart 2.0.0, each blob can be pulled, uncompressed, and written at the right offset independently. 
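The chunk-first, compress-independently scheme can be sketched in a few lines. This is a minimal illustration in Python rather than Tart's actual Swift implementation, and the blob layout and annotation names are simplified assumptions:

```python
import hashlib
import zlib

CHUNK_SIZE = 4  # tiny for illustration; real disk images use much larger chunks


def push(disk_image: bytes) -> list:
    """Chunk the image first, then compress every chunk independently."""
    blobs = []
    for offset in range(0, len(disk_image), CHUNK_SIZE):
        chunk = disk_image[offset:offset + CHUNK_SIZE]
        blobs.append({
            "data": zlib.compress(chunk),
            # annotations attached to each pushed blob
            "uncompressed_size": len(chunk),
            "digest": hashlib.sha256(chunk).hexdigest(),
        })
    return blobs


def pull_blob(blob: dict, disk: bytearray, offset: int) -> None:
    """Any blob can be decompressed and written at its offset independently."""
    chunk = zlib.decompress(blob["data"])
    assert hashlib.sha256(chunk).hexdigest() == blob["digest"]
    disk[offset:offset + blob["uncompressed_size"]] = chunk


image = b"some-disk-image-bytes"
blobs = push(image)
disk = bytearray(len(image))
for i, blob in enumerate(blobs):  # order no longer matters, so pulls can run in parallel
    pull_blob(blob, disk, i * CHUNK_SIZE)
assert bytes(disk) == image
```

Resumable pulls fall out of the same annotations: on retry, hash each already-written chunk and re-fetch only the blobs whose digests don't match.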
Having checksums alongside the expected uncompressed blob size also allowed us to support resumable pulls. Upon a failure, Tart 2.0.0 compares chunk checksums and continues pulling only the missing blobs.</p> <p>Overall, in our experiments we saw a 10% improvement in the compressed size of the images and <strong>4 times faster pulls</strong>.</p> <p>To try the new image format, please upgrade Tart and pull any of <a href="https://github.com/orgs/cirruslabs/packages?tab=packages&amp;q=macos-sonoma">the Sonoma images</a>:</p> <div class="highlight"><pre><span></span><code><a id="__codelineno-0-1" name="__codelineno-0-1" href="#__codelineno-0-1"></a>brew<span class="w"> </span>upgrade<span class="w"> </span>cirruslabs/cli/tart <a id="__codelineno-0-2" name="__codelineno-0-2" href="#__codelineno-0-2"></a>tart<span class="w"> </span>pull<span class="w"> </span>ghcr.io/cirruslabs/macos-sonoma-base:latest </code></pre></div> <h2 id="macos-sonoma-updates">macOS Sonoma Updates<a class="headerlink" href="#macos-sonoma-updates" title="Permanent link">&para;</a></h2> <p>Tart VMs can now be run in a "suspendable" mode which enables VM snapshotting instead of the standard shutdown. VMs with an existing snapshot will <code>run</code> from the same state in which they were snapshotted. Please check the demo below:</p> <div> <blockquote class="twitter-tweet" data-theme="dark"> <p lang="en" dir="ltr"> Tart 1.8.0 brings macOS Sonoma updates! 🍏 Now you can suspend and resume your virtual machines for even faster startup times. 
Check out the demo below 👇 <a href="https://t.co/RoRFT8Nwst">pic.twitter.com/RoRFT8Nwst</a> </p>&mdash; Cirrus Labs (@cirrus_labs) <a href="https://twitter.com/cirrus_labs/status/1677308360385765382?ref_src=twsrc%5Etfw">July 7, 2023</a> </blockquote> <script src="https://platform.twitter.com/widgets.js" charset="utf-8"></script> </div> <p>There are two caveats to the "suspendable" mode support:</p> <ol> <li>Both host and guest should be running macOS Sonoma.</li> <li>Snapshots are locally encrypted and can't be shared between physical hosts. Therefore <code>tart push</code> won't push the corresponding snapshotted state of the VM.</li> </ol> <p>Try the "suspendable" mode for yourself by passing the <code>--suspendable</code> flag to the <code>tart run</code> command:</p> <div class="highlight"><pre><span></span><code><a id="__codelineno-1-1" name="__codelineno-1-1" href="#__codelineno-1-1"></a>tart<span class="w"> </span>clone<span class="w"> </span>ghcr.io/cirruslabs/macos-sonoma-base:latest<span class="w"> </span>sonoma-base <a id="__codelineno-1-2" name="__codelineno-1-2" href="#__codelineno-1-2"></a>tart<span class="w"> </span>run<span class="w"> </span>--suspendable<span class="w"> </span>sonoma-base </code></pre></div> <h2 id="conclusion">Conclusion<a class="headerlink" href="#conclusion" title="Permanent link">&para;</a></h2> <p>We are very excited about this major release of Tart. Please give it a try and let us know how it went!</p> <p>Stay tuned for new updates and announcements! 
There are a few coming up very shortly...</p></description> <link>https://tart.run/blog/2023/09/20/tart-200-and-community-updates/</link> <pubDate>Mon, 22 Sep 2025 20:02:39 +0000</pubDate> <source url="https://tart.run/feed_rss_updated.xml">Tart Virtualization</source><guid isPermaLink="true">https://tart.run/blog/2023/09/20/tart-200-and-community-updates/</guid> <enclosure url="https://tart.run/assets/images/social/blog/2023/09/20/tart-200-and-community-updates.png" type="image/png" length="61095" /> </item> <item> <title>Tart is now available on AWS Marketplace</title> <author>Fedor Korotkov</author> <description><h1 id="tart-is-now-available-on-aws-marketplace">Tart is now available on AWS Marketplace<a class="headerlink" href="#tart-is-now-available-on-aws-marketplace" title="Permanent link">&para;</a></h1> <p>Announcing <a href="https://aws.amazon.com/marketplace/pp/prodview-qczco34wlkdws">official AMIs for EC2 Mac Instances</a> with a preconfigured Tart installation that is optimized to work within AWS infrastructure.</p> <p>EC2 Mac Instances are a gem of engineering powered by AWS Nitro devices. Just imagine: there is a physical Mac Mini with a plugged-in Nitro device that can push the physical power button!</p> <p><img alt="EC2 M2 Pro" src="../../../../images/ec2-mac2-m2pro.png" /></p> <p>This clever synergy between Apple hardware and the Nitro System allows seamless integration with VPC networking and booting macOS from an EBS volume.</p> <p>In this blog post we’ll see how a virtualization solution like Tart can complement and elevate the experience with EC2 Mac Instances.</p> <!-- more --> <p>Let’s start with the basics: what do EC2 Mac Instances allow you to do compared to physical Mac Minis sitting in the offices of many companies around the world?</p> <p>First and foremost, EC2 Mac Instances sit inside AWS data centers and can leverage all the goodies of VPC networking within your company's existing infrastructure. 
No need to connect your Macs in the office through a VPN and deal with networking and security.</p> <p>Additionally, EC2 Mac Instances boot from EBS volumes, which means it is possible to always have reproducible instances and apply all the best practices of Infrastructure-as-Code. Managing a fleet of physical Macs is a pain, and it's very hard to keep them configured in a reproducible and stable way. By booting from identical EBS volumes, your team can always be sure of the identical initial state of the fleet.</p> <h2 id="compromises-of-ec2-mac-instances">Compromises of EC2 Mac Instances<a class="headerlink" href="#compromises-of-ec2-mac-instances" title="Permanent link">&para;</a></h2> <p>The flexibility of EBS volumes for macOS comes with some compromises that virtualization solutions like Tart can help with. The initial boot from an EBS volume takes some time and is not instant. macOS itself is pretty heavy, and a Nitro device needs to download the tens of gigabytes that macOS requires in order to boot. This means that <strong>resetting an EC2 Mac Instance to a clean state is not instant and usually takes a couple of minutes</strong> during which you can’t utilize the precious resources for your workloads.</p> <p>It is much easier to tailor such EBS volumes with tools like Packer, but there is still <strong>friction in testing newly created EBS volumes</strong> since one needs to start and run an EC2 Mac Instance and it’s not possible to test things locally. Similarly, it is even harder to test beta versions of macOS that require manual interaction with a running instance.</p> <h2 id="solution">Solution<a class="headerlink" href="#solution" title="Permanent link">&para;</a></h2> <p>Tart can help with all of these compromises! Tart virtual machines (VMs) have nearly native performance thanks to the native <code>Virtualization.Framework</code> that was developed alongside the first Apple Silicon chip. 
<strong>Tart VMs can be copied and disposed of instantly, and booting a fresh Tart VM takes only several seconds</strong>. It is also possible to run two different Tart VMs in parallel that have completely different versions of macOS and packages. For example, it is possible to have the latest stable macOS with the release version of Xcode along with the next version of macOS with the latest beta of Xcode.</p> <p>Creation of Tart VMs can be automated with <a href="https://github.com/cirruslabs/packer-plugin-tart">a Packer plugin</a> the same way as creation of EC2 AMIs, with one caveat: <strong>the Tart Packer Plugin works locally, so you can test the same virtual machine locally as you would run it in the cloud</strong>.</p> <p>The lightweight nature of Tart VMs, with a focus on an easy-to-integrate Tart CLI, complements any macOS automation, helps to reduce the feedback cycle, and improves the reproducibility of macOS environments even further.</p> <h2 id="conclusion">Conclusion<a class="headerlink" href="#conclusion" title="Permanent link">&para;</a></h2> <p>We are excited to bring <a href="https://aws.amazon.com/marketplace/pp/prodview-qczco34wlkdws">official AMIs that include a Tart installation optimized to work within AWS</a>. In the coming weeks, when macOS Sonoma becomes available on AWS, we’ll release another update specifically targeting EC2 Mac Instances. This update will simplify access to the local SSDs of Mac Instances, which are slightly faster than EBS volumes. 
Stay tuned and don’t hesitate to ask any <a href="https://tart.run/licensing/">questions</a>.</p></description> <link>https://tart.run/blog/2023/10/06/tart-is-now-available-on-aws-marketplace/</link> <pubDate>Mon, 22 Sep 2025 20:02:39 +0000</pubDate> <source url="https://tart.run/feed_rss_updated.xml">Tart Virtualization</source><guid isPermaLink="true">https://tart.run/blog/2023/10/06/tart-is-now-available-on-aws-marketplace/</guid> <enclosure url="https://tart.run/assets/images/social/blog/2023/10/06/tart-is-now-available-on-aws-marketplace.png" type="image/png" length="67557" /> </item> <item> <title>New dashboard with insights into performance of Cirrus Runners</title> <author>Fedor Korotkov</author> <description><h1 id="new-dashboard-with-insights-into-performance-of-cirrus-runners">New dashboard with insights into performance of Cirrus Runners<a class="headerlink" href="#new-dashboard-with-insights-into-performance-of-cirrus-runners" title="Permanent link">&para;</a></h1> <p>This month we are celebrating one year since launching Cirrus Runners — managed Apple Silicon infrastructure for your GitHub Actions. During the last 12 months we ran millions of workflows for our customers and are now ready to share some insights into their price performance.</p> <p>One of the key differences with Cirrus Runners is how they are billed. Customers purchase Cirrus Runners via a monthly subscription that costs $150 per Cirrus Runner. Each runner can be used 24 hours a day, 7 days a week to run GitHub Actions workflows for an organization. If there are more outstanding jobs than available runners, they are queued and executed as soon as there is a free runner. This is different from how GitHub-managed GitHub Actions runners are billed: you pay for each minute of execution time.</p> <p>The benefit of a fixed price is that you can run as many jobs as you want without worrying about the cost. 
The downside is that you need to make sure you are using your runners efficiently. This is where the new dashboard comes in handy.</p> <!-- more --> <p>But first, <strong>let's look at the theoretical lowest price per minute</strong> of a Cirrus Runner. If you run 24 hours a day, 7 days a week, you will get 43,200 minutes of execution time per month. This means that the price per minute is $0.0035 if your runners' utilization is 100%. But even if your engineering team is located in a single time zone and works 8 hours a day, 5 days a week, you will still get 9,600 minutes of execution time per month, which comes down to about $0.016 per minute. This is still more than 10 times cheaper than the recently announced Apple Silicon GitHub-managed runners that cost $0.16 per minute.</p> <p>Now let's take a look at the new Cirrus Runners dashboard of a real customer that runs their workflows on Cirrus Runners and <strong>in practice pushes the price performance pretty close to the theoretical minimum</strong>.</p> <p><img alt="Cirrus Runners Dashboard" src="../../../../images/runners-price-performance-2.png" /></p> <p>As you can see above, the Cirrus Runners Dashboard focuses on 4 core metrics:</p> <ol> <li><strong>Minutes Used</strong> — the overall number of minutes that Cirrus Runners were executing jobs.</li> <li><strong>Workflow Runs</strong> — the absolute number of workflow runs that were executed on Cirrus Runners.</li> <li><strong>Queue Size</strong> — the number of jobs that were queued and waiting for a free Cirrus Runner.</li> <li><strong>Queue Time</strong> — the average time that jobs were waiting in the queue.</li> </ol> <p>In this particular example the price performance of Cirrus Runners is $0.006 per minute, which is about twice the theoretical minimum and <strong>26 times better than GitHub-managed Apple Silicon runners</strong>. 
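The arithmetic above is easy to verify (a quick Python check; the $150 subscription price and the minute counts come straight from the text):

```python
PRICE_PER_RUNNER_USD = 150  # monthly subscription price per Cirrus Runner


def price_per_minute(used_minutes: int) -> float:
    """Effective per-minute price at a given monthly usage."""
    return PRICE_PER_RUNNER_USD / used_minutes


full_utilization = 24 * 60 * 30  # 43,200 minutes: 24/7 over a 30-day month
business_hours = 8 * 60 * 5 * 4  # 9,600 minutes: 8 h/day, 5 days/week, 4 weeks

print(round(price_per_minute(full_utilization), 4))  # 0.0035
print(round(price_per_minute(business_hours), 4))    # 0.0156, vs $0.16/min for GitHub-managed runners
```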
But this is an extreme example: looking at the queue time and queue size, we can see that the downside of such great price performance is that jobs wait in the queue for around 5 minutes on average.</p> <p>Here is another example of the Cirrus Runners Dashboard for a different customer that has a slightly higher price per minute of $0.017 but at the same time doesn't experience any queue time at all. <strong>Note that $0.017 is still nearly 10 times cheaper than GitHub-managed Apple Silicon runners</strong>.</p> <p><img alt="Cirrus Runners Dashboard" src="../../../../images/runners-price-performance-3.png" /></p> <h2 id="conclusion">Conclusion<a class="headerlink" href="#conclusion" title="Permanent link">&para;</a></h2> <p>Having a fixed price for Cirrus Runners is a great way to save money on your CI/CD infrastructure and, in general, have a predictable budget. But it requires keeping a balance between the price per minute and the queue time. The Cirrus Runners Dashboard helps you keep an eye on this balance and make sure that you are getting the most out of your Cirrus Runners.</p></description> <link>https://tart.run/blog/2023/11/03/new-dashboard-with-insights-into-performance-of-cirrus-runners/</link> <pubDate>Mon, 22 Sep 2025 20:02:39 +0000</pubDate> <source url="https://tart.run/feed_rss_updated.xml">Tart Virtualization</source><guid isPermaLink="true">https://tart.run/blog/2023/11/03/new-dashboard-with-insights-into-performance-of-cirrus-runners/</guid> <enclosure url="https://tart.run/assets/images/social/blog/2023/11/03/new-dashboard-with-insights-into-performance-of-cirrus-runners.png" type="image/png" length="70247" /> </item> <item> <title>Bridging the gaps with the Tart Guest Agent</title> <author>Nikolay Edigaryev</author> <description><h1 id="bridging-the-gaps-with-the-tart-guest-agent">Bridging the gaps with the Tart Guest Agent<a class="headerlink" href="#bridging-the-gaps-with-the-tart-guest-agent" title="Permanent link">&para;</a></h1> <p>We're introducing 
a new improvement to the Tart usability experience: the <a href="https://github.com/cirruslabs/tart-guest-agent">Tart Guest Agent</a>.</p> <p>This agent provides automatic disk resizing, seamless clipboard sharing for macOS guests (a <a href="https://github.com/cirruslabs/tart/issues/14">long-awaited</a> feature), and the ability to run commands, without SSH and networking, using the new <code>tart exec</code> command.</p> <p>We recently started including this agent in all non-vanilla Cirrus Labs images, so you likely won't need to do anything to benefit from these usability improvements.</p> <p>Read on to learn why we chose to implement the agent from scratch in Golang, and which features we plan to add next.</p> <!-- more --> <h2 id="existing-solutions">Existing solutions<a class="headerlink" href="#existing-solutions" title="Permanent link">&para;</a></h2> <p>Tart uses the Virtualization.Framework, and the latter implemented a SPICE client some time ago. However, one piece was missing: the agent that runs inside the guest.</p> <p>The original <a href="https://gitlab.freedesktop.org/spice/linux/vd_agent">SPICE <code>vdagent</code> implementation</a> only supports Linux. 
While <a href="https://github.com/utmapp/vd_agent">a fork</a> from the UTM project adds macOS support, the long-term viability of maintaining this fork without upstreaming changes is uncertain.</p> <p>Moreover, if we were to add some extra functionality (as we did), there would be more than one agent binary to ship and install, which complicates maintenance and makes it harder to explain to users why we need a bunch of agent binaries.</p> <p>In the end, we decided to go with our own solution, one that would easily accommodate future ideas.</p> <h2 id="rolling-our-own-agent">Rolling our own agent<a class="headerlink" href="#rolling-our-own-agent" title="Permanent link">&para;</a></h2> <p>After carefully inspecting the <a href="https://www.spice-space.org/agent-protocol.html"><code>vdagent</code> protocol</a>, we realized that clipboard sharing is actually a small subset of the whole protocol, making it relatively simple to implement.</p> <p>Thanks to Golang, we were able to implement the protocol much faster than we could have with a lower-level language like C (with all due respect), which requires manual memory management and complex event loops.</p> <p>As for command execution via <code>tart exec</code>, we decided to go with gRPC and a rather simple protocol:</p> <p><img alt="A visualization of the gRPC protocol used by the Tart Guest Agent" src="../../../../images/tart-guest-agent-grpc-protocol.png" /></p> <p>For each <code>tart exec</code> invocation, a new gRPC <code>Exec</code> bidirectional stream is established with the agent running inside a VM. After the gRPC stream is established, <code>tart exec</code> sends the command to execute to the guest and streams the I/O. 
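The guest side of this flow can be mimicked with a plain-subprocess stand-in. This is a hypothetical Python sketch, not Tart's actual Swift/gRPC code: run the requested command, stream its output back incrementally, and report the exit code.

```python
import subprocess


def exec_like_agent(command: list) -> tuple:
    """Stand-in for the agent's side of the Exec stream: run the command,
    stream its output back chunk by chunk, and report the exit code."""
    proc = subprocess.Popen(command, stdout=subprocess.PIPE)
    output = b""
    for chunk in proc.stdout:  # streamed incrementally, like the gRPC stream
        output += chunk
    proc.wait()
    # the host-side client would quit with exactly this exit code
    return output, proc.returncode


out, code = exec_like_agent(["echo", "hello"])
assert out == b"hello\n" and code == 0
```

The real protocol also carries stdin and stderr over the same bidirectional stream; this sketch shows only the stdout and exit-code path.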
Once the command terminates, <code>tart exec</code> collects the process exit code and quits with exactly that exit code.</p> <p>Using gRPC simplifies the <code>tart exec</code> implementation thanks to code generation, and forms a nice bridge between the host and the guest that allows us to easily expand the protocol down the road when we decide to introduce new features.</p> <p>Thanks to <a href="https://github.com/grpc/grpc-swift">gRPC Swift</a>, which is built on top of <a href="https://github.com/apple/swift-nio">SwiftNIO</a>, we get <a href="https://docs.swift.org/swift-book/documentation/the-swift-programming-language/concurrency/"><code>async/await</code></a> support for free, further simplifying the <code>tart exec</code> logic.</p> <p>As for the Tart Guest Agent, the final result is a Golang binary that <a href="https://github.com/cirruslabs/tart-guest-agent?tab=readme-ov-file#guest-agent-for-tart-vms">can be customized</a> depending on the execution context:</p> <ul> <li>launchd global daemon — runs as a privileged user (<code>root</code>), has no clipboard access<ul> <li><code>--resize-disk</code> — resizes the disk when there's free space at the end of the disk (assuming that one previously ran <code>tart set --disk-size</code>)</li> </ul> </li> <li>launchd global agent — runs as a normal user (<code>admin</code>), has clipboard access<ul> <li><code>--run-vdagent</code> — clipboard sharing</li> <li><code>--run-rpc</code> — <code>tart exec</code> and new functionality in the future</li> </ul> </li> </ul> <p>We’ve also introduced <code>--run-daemon</code> (which implies <code>--resize-disk</code>) and <code>--run-agent</code> (which implies both <code>--run-vdagent</code> and <code>--run-rpc</code>) to help run the most appropriate functionality based on the given context.</p> <h2 id="future-plans">Future plans<a class="headerlink" href="#future-plans" title="Permanent link">&para;</a></h2> <p>First, we'd like to thank our paid clients, without whom 
this feature wouldn't have been possible.</p> <p><a href="../../../../../licensing/">Become one now</a> and enjoy higher allowances for Tart VMs and Orchard workers, while helping ensure that our roadmap aligns with your company's needs.</p> <p>In the near future we plan to implement:</p> <ul> <li>Linux support — to provide a seamless experience for Linux guests too</li> <li>a new <code>tart ip</code> resolver — to provide a more robust IP retrieval facility for Linux guests, which often struggle to populate the host's ARP table with their network activity</li> <li>a <code>tart cp</code> command — to copy files from/to guest VMs</li> </ul> <p>Stay tuned, and feel free to send us feedback on <a href="https://github.com/cirruslabs/tart">GitHub</a> and <a href="https://x.com/cirrus_labs">Twitter</a>!</p></description> <link>https://tart.run/blog/2025/06/01/bridging-the-gaps-with-the-tart-guest-agent/</link> <pubDate>Sun, 01 Jun 2025 23:54:45 +0000</pubDate> <source url="https://tart.run/feed_rss_updated.xml">Tart Virtualization</source><guid isPermaLink="true">https://tart.run/blog/2025/06/01/bridging-the-gaps-with-the-tart-guest-agent/</guid> <enclosure url="https://tart.run/assets/images/social/blog/2025/06/01/bridging-the-gaps-with-the-tart-guest-agent.png" type="image/png" length="66038" /> </item> <item> <title>Jumping through the hoops: SSH jump host functionality in Orchard</title> <author>Nikolay Edigaryev</author> <description><h1 id="jumping-through-the-hoops-ssh-jump-host-functionality-in-orchard">Jumping through the hoops: SSH jump host functionality in Orchard<a class="headerlink" href="#jumping-through-the-hoops-ssh-jump-host-functionality-in-orchard" title="Permanent link">&para;</a></h1> <p>Almost a year ago, when we started building <a href="https://github.com/cirruslabs/orchard">Orchard</a>, an orchestration system for Tart, we quickly realized that most worker machines would be in a private network, and that VMs would be reachable only from the worker 
machines themselves. Thus, one of our goals became simplifying access to the compute resources in a cluster through a centralized controller host.</p> <p>This effort resulted in commands like <code>orchard port-forward</code> and <code>orchard ssh</code>, which were later improved to support connecting not just to the VMs, but to the worker machines themselves.</p> <p>Today, we’re taking a further step in this effort: with a trivial configuration, an Orchard controller can act as an SSH jump host, allowing you to connect to the VMs using just the <code>ssh</code> command, like <code>ssh -J &lt;service account name&gt;@orchard-controller.example.com &lt;VM name&gt;</code>!</p> <!-- more --> <h2 id="implementation">Implementation<a class="headerlink" href="#implementation" title="Permanent link">&para;</a></h2> <p>In a typical cluster there’s one controller, to which workers connect by calling various REST API endpoints to synchronize the worker &amp; VM state. Each worker also maintains a persistent bi-directional gRPC connection with the controller, with the goal of improving the overall reactivity and making the port-forwarding work.</p> <p>The gRPC service definition that the controller offers is pretty minimalistic:</p> <div class="highlight"><pre><span></span><code><a id="__codelineno-0-1" name="__codelineno-0-1" href="#__codelineno-0-1"></a><span class="kd">service</span><span class="w"> </span><span class="n">Controller</span><span class="w"> </span><span class="p">{</span> <a id="__codelineno-0-2" name="__codelineno-0-2" href="#__codelineno-0-2"></a><span class="w"> </span><span class="k">rpc</span><span class="w"> </span><span class="n">Watch</span><span class="p">(</span><span class="n">google.protobuf.Empty</span><span class="p">)</span><span class="w"> </span><span class="k">returns</span><span class="w"> </span><span class="p">(</span><span class="n">stream</span><span class="w"> </span><span class="n">WatchInstruction</span><span class="p">);</span> 
<a id="__codelineno-0-3" name="__codelineno-0-3" href="#__codelineno-0-3"></a><span class="w"> </span><span class="k">rpc</span><span class="w"> </span><span class="n">PortForward</span><span class="p">(</span><span class="n">stream</span><span class="w"> </span><span class="n">PortForwardData</span><span class="p">)</span><span class="w"> </span><span class="k">returns</span><span class="w"> </span><span class="p">(</span><span class="n">stream</span><span class="w"> </span><span class="n">PortForwardData</span><span class="p">);</span> <a id="__codelineno-0-4" name="__codelineno-0-4" href="#__codelineno-0-4"></a><span class="p">}</span> </code></pre></div> <p>Each watch instruction corresponds to a single action to be performed by the worker, which can either be a request to establish a port-forwarding stream or a request to re-sync the VMs:</p> <div class="highlight"><pre><span></span><code><a id="__codelineno-1-1" name="__codelineno-1-1" href="#__codelineno-1-1"></a><span class="k">oneof</span><span class="w"> </span><span class="n">action</span><span class="w"> </span><span class="p">{</span> <a id="__codelineno-1-2" name="__codelineno-1-2" href="#__codelineno-1-2"></a><span class="w"> </span><span class="n">PortForward</span><span class="w"> </span><span class="na">port_forward_action</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="mi">1</span><span class="p">;</span> <a id="__codelineno-1-3" name="__codelineno-1-3" href="#__codelineno-1-3"></a><span class="w"> </span><span class="n">SyncVMs</span><span class="w"> </span><span class="na">sync_vms_action</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="mi">2</span><span class="p">;</span> <a id="__codelineno-1-4" name="__codelineno-1-4" href="#__codelineno-1-4"></a><span class="p">}</span> </code></pre></div> <p>Now, when the user invokes <code>orchard port-forward</code> or <code>orchard ssh</code>, the controller effectively becomes 
a rendezvous point: it accepts the WebSocket connection from the user, then asks the worker associated with the requested VM to establish a port-forwarding stream, and finally proxies the two streams together.</p> <p><img alt="An illustration showing the Orchard controller and worker proxying the SSH connection" src="../../../../images/jumping-through-the-hoops.png" /></p> <p>The SSH protocol works the same way, multiplexing multiple channels in a single transport connection, where each channel can be upgraded to an interactive session (that’s what you get when you <code>ssh</code> to the server), an X11 channel (for X11 forwarding using <code>-X</code>), a direct or forwarded TCP/IP channel (these are used for local and remote port-forwarding with the <code>-L</code> and <code>-R</code> options respectively), and so on.</p> <p>In fact, the <code>ssh -J</code> jump host functionality also uses the direct TCP/IP channel, which is <a href="https://datatracker.ietf.org/doc/html/rfc4254#section-7.2">just a single port-forwarding request</a> that needs to be implemented. We’ve used <a href="https://pkg.go.dev/golang.org/x/crypto/ssh">Golang's SSH library</a> as the most mature choice for this task, and it’s been pleasant to work with so far.</p> <p>Support for <code>ssh -J</code> has landed in Orchard version 0.19.0. To configure the SSH jump host, simply add the <code>--listen-ssh</code> command-line argument to your <code>orchard controller run</code> invocation.</p> <p>Once running, you can connect to any VM in the cluster using <code>ssh -J &lt;service account name&gt;@orchard-controller.example.com &lt;VM name&gt;</code>. The password for the jump host is the corresponding service account’s token.</p> <h2 id="future-plans">Future plans<a class="headerlink" href="#future-plans" title="Permanent link">&para;</a></h2> <p>First of all, we’d like to thank our paid clients, without whom this feature wouldn’t have been possible. 
<a href="../../../../../licensing/">Become one now</a> and enjoy higher Tart VM and Orchard worker allowances, while making sure that the roadmap for Tart and Orchard is aligned with your company's needs.</p> <p>In the near future we plan to implement a mechanism similar to the <code>authorized_keys</code> file that will allow attaching public SSH keys to the Orchard controller’s service accounts, and thus avoid the need to type passwords.</p> <p>Stay tuned and don’t hesitate to send us your feedback on <a href="https://github.com/cirruslabs/orchard">GitHub</a> and <a href="https://x.com/cirrus_labs">Twitter</a>!</p></description> <link>https://tart.run/blog/2024/06/20/jumping-through-the-hoops-ssh-jump-host-functionality-in-orchard/</link> <pubDate>Thu, 20 Jun 2024 22:39:41 +0000</pubDate> <source url="https://tart.run/feed_rss_updated.xml">Tart Virtualization</source><guid isPermaLink="true">https://tart.run/blog/2024/06/20/jumping-through-the-hoops-ssh-jump-host-functionality-in-orchard/</guid> <enclosure url="https://tart.run/assets/images/social/blog/2024/06/20/jumping-through-the-hoops-ssh-jump-host-functionality-in-orchard.png" type="image/png" length="70009" /> </item> </channel> </rss>