This past Monday morning, the OA-9 mission launched from Wallops Island to deliver supplies to the ISS. I was lucky to be able to watch the launch from the launch facility’s viewing area alongside the JPL RainCube team whose spacecraft was onboard the rocket.
This year, Black Hat celebrated its 20th anniversary. The keynote moved to a packed Mandalay Bay Arena with some noteworthy production values. On the content side, the conference again featured various Machine Learning (ML) themed talks. Most noteworthy, the Revoke-Obfuscation talk discussed using ML as a tool to detect PowerShell obfuscation without buzzword abuse (read the abstract).
Last week, I gave a talk at Security Data Science Colloquium hosted by Microsoft on their campus in Redmond. You can find my slides below.
Over the past year, I came across quite a few blog posts from people who moved back to Windows from Mac. Evidently, it’s time for my own. If that sort of anecdote doesn’t interest you, have a look at some other posts around here. Otherwise, here we go…
It has been an exciting month at CrowdStrike, especially for the Data Science Team, as we released our anti-malware engine to Google’s VirusTotal service last week—the first fully machine learning-based engine to be integrated on VirusTotal. The engine we shared is part of the larger Falcon Host product; its main intent is to provide a pre-execution static analysis capability.
In cybersecurity, machine learning is frequently described as a panacea solving all our problems. In reality, things are of course a bit more complicated. Machine learning can help extract more value from data, but a prerequisite is to have quality data to begin with—at the right scale and the right scope—which is not always a given in the security space.
After creating a basic VGA signal using discrete CMOS logic chips in Part 1, my goal was to get something more interesting onto the screen. Long story short, after adding thirty-something additional ICs, this is the outcome:
I’m excited to announce that after a 5 year hiatus I’ve co-authored a new academic paper, which my colleague and co-author Brett presented this past weekend at the 2015 IEEE International Workshop on Machine Learning for Signal Processing in Boston. You can grab the paper (and a BibTeX reference for it) on the publications page.
It’s time again to leave the realm of Big Data behind for a small electronics project. After generating a VGA signal using an Arduino, I’ve decided to next generate a VGA signal from scratch. From scratch here means using 74HC00 series logic ICs.
The video mode I picked is XGA at a 60 Hz refresh rate. XGA has a resolution of 1024×768 pixels and a pixel frequency of 65 MHz. By dividing the horizontal resolution by 4, we get a width of 256 pixels and a pixel frequency of 16.25 MHz. To keep the aspect ratio, I am also dividing the vertical resolution by 4, so the effective resolution produced is 256×192.
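The divide-by-4 arithmetic above can be sanity-checked in a few lines. This is a throwaway sketch (not from the original post), using only the standard XGA@60 Hz figures quoted in the text:

```python
# Standard XGA@60Hz parameters, as quoted above.
XGA_WIDTH = 1024            # pixels
XGA_HEIGHT = 768            # pixels
XGA_PIXEL_CLOCK_MHZ = 65.0  # MHz

DIV = 4  # divide both dimensions (and the pixel clock) by 4

width = XGA_WIDTH // DIV                  # effective horizontal resolution
height = XGA_HEIGHT // DIV                # effective vertical resolution
pixel_clock = XGA_PIXEL_CLOCK_MHZ / DIV   # effective pixel frequency

print(width, height, pixel_clock)  # 256 192 16.25
```

Dividing both axes by the same factor keeps the original 4:3 aspect ratio, which is why the vertical resolution is scaled as well.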
The typical response I get when I mention our usage of Spark is something along the lines of “Oh, it must be about the extra speed over Hadoop you get from the in-memory processing.” Speed and the in-memory aspect are certainly two things Spark is known for, and they are also prominently touted on the project’s website. However, neither of those is among the primary reasons why I invested resources to move my team to Spark as the default Big Data framework. Let’s take a look at what makes the difference.