Last week, I gave a talk at the Security Data Science Colloquium hosted by Microsoft on their campus in Redmond. You can find my slides below.
Over the past year, I came across quite a few blog posts by people who moved back to Windows from Mac. Evidently, it’s time for my own. If that sort of anecdote doesn’t interest you, look at some other posts around here. Otherwise, here we go…
It has been an exciting month at CrowdStrike, especially for the Data Science Team, as we released our anti-malware engine to Google’s VirusTotal service last week—the first fully machine learning-based engine to be integrated on VirusTotal. The engine we shared is part of the larger Falcon Host product; its main intent is to provide a pre-execution static analysis capability.
In cybersecurity, machine learning is frequently described as a panacea solving all our problems. In reality, things are of course a bit more complicated. Machine learning can help extract more value from data, but a prerequisite is having quality data to begin with—at the right scale and the right scope—which is not always a given in the security space.
After creating a basic VGA signal using discrete CMOS logic chips in Part 1, my goal was to get something more interesting onto the screen. Long story short, after adding thirty-something additional ICs, this is the outcome:
I’m excited to announce that after a five-year hiatus I’ve co-authored a new academic paper, which my colleague and co-author Brett presented this past weekend at the 2015 IEEE International Workshop on Machine Learning for Signal Processing in Boston. You can grab the paper (and a BibTeX reference for it) on the publications page.
It’s time again to leave the realm of Big Data behind for a small electronics project. After generating a VGA signal using an Arduino, I’ve decided to next generate a VGA signal from scratch. From scratch here means using 74HC00 series logic ICs.1
The video mode I picked is XGA at a 60 Hz refresh rate. XGA has a resolution of 1024×768 pixels and a pixel frequency of 65 MHz. By dividing the horizontal resolution by 4, we get a width of 256 pixels and a pixel frequency of 16.25 MHz. To keep the aspect ratio, I am also dividing the vertical resolution by 4, so the effective resolution produced is 256×192.
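The division above is easy to sanity-check. Here is a minimal Python sketch of that arithmetic; the XGA constants are the standard figures quoted in the text, and the divisor of 4 is the one chosen for this project:

```python
# Standard XGA@60 Hz parameters, as quoted above.
XGA_WIDTH, XGA_HEIGHT = 1024, 768   # native resolution in pixels
PIXEL_CLOCK_MHZ = 65.0              # native pixel frequency

DIV = 4  # divide both axes by 4 to keep the aspect ratio

# Effective mode produced by the discrete-logic circuit.
width = XGA_WIDTH // DIV            # horizontal pixels
height = XGA_HEIGHT // DIV          # vertical pixels
clock_mhz = PIXEL_CLOCK_MHZ / DIV   # divided pixel frequency

print(f"{width}x{height} @ {clock_mhz} MHz")  # → 256x192 @ 16.25 MHz
```

Note that only the pixel clock and the visible-area counts are divided; the monitor still sees a full XGA signal, with each fat pixel simply held for four clock periods.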
The typical response I get when I mention our usage of Spark is something along the lines of “Oh, it must be about the extra speed over Hadoop you get from the in-memory processing.” Speed and the in-memory aspect are certainly two things Spark is known for, and they are also prominently touted on the project’s website. However, neither of those is among the primary reasons why I invested resources to move my team to Spark as the default Big Data framework. Let’s take a look at what makes the difference.
Here’s my quick (and belated) take on the Black Hat 2015 sessions I attended. This year’s schedule offered a rich selection of Machine Learning-related content, and it is refreshing to see that it is finally becoming a mainstream tool in the security community.
It goes without saying that all opinions are mine and not those of my employer. If I’m misjudging your session, feel free to reach out—my opinion is formed from the data available, and it is of course always a challenge to cram months of research results into an hour-long session. (If you are still disgruntled, take comfort in the fact that you attended the Speaker Party while I did not.)
Where available, I have linked slide decks, whitepapers, or additional resources. Note that in some cases the slides have since been updated and differ from what was presented at the event (my remarks apply to the version presented at the event unless noted otherwise).
Since I’ve always liked to understand technology from first principles, I’ve embarked on a small project to generate a VGA signal from scratch on an Arduino Uno. (On the other hand, it could also be that after all the Big Data work, a small-data project in the 2 KB of RAM the Uno offers sounded quite appealing.)