Learn how to automate your systems, how to build chatbots, and where deep learning is headed. Explore applications of machine learning, NLP, and computer vision, transferring neural-network know-how from academia to architects.
The introduction of smartphones in the mid-2000s forever changed the way users interact with data and computation, and in doing so prompted a renaissance of digital innovation. Yet, at the same time, the architectures, applications, and services that fostered this new reality fundamentally altered the relationship between users and their security and privacy. In this talk I map the scientific community's initial efforts to evaluate smartphone application security and privacy. I consider several key scientific questions and explore the methods and tools used to answer them. I show how our joint understanding of adversary and industry practices has matured over time, and conclude with a discussion of open problems and opportunities in mobile device security and privacy.
While both the SYSTEM_ALERT_WINDOW and the BIND_ACCESSIBILITY_SERVICE Android permissions have been abused individually (e.g., in UI redressing attacks and accessibility attacks), previous attacks based on these permissions failed to completely control the UI feedback loop, and thus either relied on vanishing side channels to time the appearance of overlay UI, could not respond properly to user input, or made the attacks literally visible. In this work, we demonstrate how combining the capabilities of these permissions leads to complete control of the UI feedback loop and creates devastating and stealthy attacks. In particular, we demonstrate how such an app can launch a variety of stealthy, powerful attacks, ranging from stealing a user's login credentials and security PIN to the silent installation of a God-like app with all permissions enabled. To make things even worse, we note that when installing an app targeting a recent Android SDK, the list of its required permissions is not shown to the user, and that these attacks can be carried out without needing to lure the user into knowingly enabling any permission, thus leaving them completely unsuspecting. In fact, we found that the SYSTEM_ALERT_WINDOW permission is automatically granted for apps installed from the Play Store and, even though the BIND_ACCESSIBILITY_SERVICE permission is not automatically granted, our experiment shows that it is very easy to lure users into unknowingly granting it by abusing capabilities of the SYSTEM_ALERT_WINDOW permission. We also found that it is straightforward to get a proof-of-concept app requiring both permissions accepted on the official store. We evaluated the practicality of these attacks by performing a user study: none of the 20 human subjects who took part in the experiment even suspected they had been attacked. We conclude with a number of observations and best practices that Google and developers can adopt to secure the Android GUI.
I will talk about what machine learning privacy is, and will discuss how and why machine learning models leak information about the individual data records on which they were trained. My quantitative analysis will be based on the fundamental membership inference attack: given a data record and (black-box) access to a model, determine whether the record was in the model's training set. I will demonstrate how to build such inference attacks against different classification models, e.g., those trained by commercial "machine learning as a service" providers such as Google and Amazon.
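In its simplest thresholding form, a membership inference attack can be sketched as below. This is a toy illustration, not the full attack from the talk (which trains shadow models and an attack classifier); the confidence numbers and function names are ours.

```python
# Sketch: guess "training member" when the black-box model is unusually
# confident on a record's true label (a symptom of memorization).
# The calibration data would come from shadow models the attacker trains
# on similar data; the numbers used here are made up.

def calibrate_threshold(member_confidences, nonmember_confidences):
    # Midpoint between the average confidence shadow models assign to
    # their own training records vs. records they never saw.
    avg_in = sum(member_confidences) / len(member_confidences)
    avg_out = sum(nonmember_confidences) / len(nonmember_confidences)
    return (avg_in + avg_out) / 2

def infer_membership(confidence_on_true_label, threshold):
    # True = "we guess this record was in the target's training set".
    return confidence_on_true_label >= threshold
```

The gap between in-set and out-of-set confidence is exactly the leakage the talk quantifies; a model that generalizes well narrows that gap.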
Bias-resistant public randomness is a critical component in many (distributed) protocols. Existing solutions do not scale to hundreds or thousands of participants, as is needed in many decentralized systems. In this talk, we present two large-scale distributed protocols, RandHound and RandHerd, which provide publicly-verifiable, unpredictable, and unbiasable randomness against Byzantine adversaries, targeting different application scenarios. Finally, we discuss some applications of our protocols, such as sharding and proof-of-stake.
A balanced cyber security investment strategy is essential to building an adaptive security capability stack. This session will cover approaches to building a resilient cyber security portfolio.
The Internet of Things (IoT) is changing the world we live in. Everybody wants to connect new objects to the Internet, opening the door to a new spectrum of cyber threats and risks. Without doubt, security is the number-one concern in the IoT industry. However, developers face many challenges when implementing security on connected objects. One of these challenges is how to protect data and communications from eavesdropping. Traditional encryption algorithms and security protocols require significant computing power, which is not available on small IoT hardware boards. This presentation will walk, using practical examples, through lightweight encryption algorithms and solutions that can be used to overcome these barriers.
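One concrete example of a cipher in this lightweight class (the session's own algorithm choices may differ) is XTEA: a 64-bit block, a 128-bit key, and rounds built only from 32-bit adds, shifts, and XORs, small enough for modest microcontrollers. A pure-Python sketch of the reference rounds:

```python
# XTEA reference rounds in pure Python. MASK keeps arithmetic in 32 bits,
# since the cipher is defined on 32-bit words; `key` is four 32-bit words.
MASK = 0xFFFFFFFF
DELTA = 0x9E3779B9

def xtea_encrypt(block, key, rounds=32):
    v0, v1 = block                      # the 64-bit block as two words
    s = 0
    for _ in range(rounds):
        v0 = (v0 + ((((v1 << 4) ^ (v1 >> 5)) + v1) ^ (s + key[s & 3]))) & MASK
        s = (s + DELTA) & MASK
        v1 = (v1 + ((((v0 << 4) ^ (v0 >> 5)) + v0) ^ (s + key[(s >> 11) & 3]))) & MASK
    return v0, v1

def xtea_decrypt(block, key, rounds=32):
    v0, v1 = block
    s = (DELTA * rounds) & MASK         # run the schedule backwards
    for _ in range(rounds):
        v1 = (v1 - ((((v0 << 4) ^ (v0 >> 5)) + v0) ^ (s + key[(s >> 11) & 3]))) & MASK
        s = (s - DELTA) & MASK
        v0 = (v0 - ((((v1 << 4) ^ (v1 >> 5)) + v1) ^ (s + key[s & 3]))) & MASK
    return v0, v1
```

A real deployment would still need a mode of operation and message authentication; a raw block cipher alone does not protect a communication channel.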
Memory corruption errors in C/C++ programs remain the most common source of security vulnerabilities in today’s systems. Over the last 10+ years we have deployed several defenses. Data Execution Prevention (DEP) protects against code injection, eradicating this attack vector. Yet, control-flow hijacking and code reuse remain challenging despite wide deployment of Address Space Layout Randomization (ASLR) and stack canaries. These defenses are probabilistic and rely on information hiding. The deployed defenses complicate attacks, yet control-flow hijack attacks (redirecting execution to a location that would not be reached in a benign execution) are still prevalent. Attacks reuse existing gadgets (short sequences of code), often leveraging information disclosures to learn the location of the desired gadgets. Strong defense mechanisms have not yet been widely deployed due to (i) the time it takes to roll out a security mechanism, (ii) incompatibility with specific features, and (iii) performance overhead. In the meantime, only a set of low-overhead but incomplete mitigations has been deployed in practice. Control-flow hijacking attacks exploit memory corruption vulnerabilities to divert program execution away from the intended control flow. Researchers have spent more than a decade studying and refining future defenses based on Control-Flow Integrity (CFI). This technique is now integrated into several production compilers. Microsoft compiles large parts of their codebase with Control-Flow Guard, a coarse-grained CFI mechanism, and allows developers to compile their software with the same mitigation mechanism. Google, on the other hand, developed a fine-grained CFI mechanism on top of LLVM that increases precision and compiles Chrome with this stronger mechanism. Researchers so far have shown that both coarse-grained and fine-grained CFI mechanisms can generally be bypassed. 
The accepted notion is that CFI makes successful control-flow hijacking attacks harder, but the question remains: how much harder does an attack become? Attacks are now even more application-specific and require a detailed analysis of the available whole-function gadgets.
This talk will overview key legal and marketing issues presented by today's breach-prone world. More than ever, transactions of an increasingly sensitive nature are being conducted online. When a breach occurs, it has legal and marketing implications. This talk will provide a comparative update on major changes in US, Canadian, and UK/EU laws and rules affecting cybersecurity and related areas, including issues raised by modern technologies. It will also discuss how to maintain brand image amid a breach.
An overwhelming number of security controls revolve around generating and forwarding alerts to system administrators or a Security Operations Center (SOC). These mechanisms often require a significant human element to actively manage and triage alerts. In addition, alerting tools require ongoing TLC, a.k.a. tuning, and typically result in alert fatigue or delayed response times. To ensure a timely response to security events, 24/7 SOCs and response SLAs become a necessity. Unfortunately, a SOC is a luxury that many organizations cannot afford. To overcome this challenge, automated corrective access controls must be deployed in conjunction with preventative access controls in order to effectively manage security threats and reduce alert fatigue. In a cloud environment, automated corrective controls can be triggered based on specific events deemed security violations. In AWS, this can be achieved using AWS Lambda functions. This talk will focus on how to implement automated corrective access controls in AWS to quarantine users based on security policy violations.
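A minimal sketch of such a corrective control, assuming a Lambda function subscribed to CloudTrail events: the violation list and the quarantine step are illustrative, and the actual IAM call (e.g., attaching a deny-all policy via boto3) is left as a comment so the decision logic stands alone.

```python
# Hypothetical corrective-control Lambda: quarantine an IAM user when a
# CloudTrail event matches a policy violation. Event names below are
# examples of actions an attacker might take to cover their tracks.

VIOLATIONS = {"DeleteTrail", "StopLogging", "PutBucketAcl"}

def decide_action(event):
    detail = event.get("detail", {})
    user = detail.get("userIdentity", {}).get("userName", "unknown")
    if detail.get("eventName") in VIOLATIONS:
        return {"action": "quarantine", "user": user}
    return {"action": "allow", "user": user}

def lambda_handler(event, context):
    decision = decide_action(event)
    if decision["action"] == "quarantine":
        # In a real function this would call IAM, for example:
        # boto3.client("iam").attach_user_policy(
        #     UserName=decision["user"],
        #     PolicyArn="arn:aws:iam::aws:policy/AWSDenyAll")  # illustrative
        pass
    return decision
```

Because the control is event-driven, the response happens in seconds, without waiting on a human to triage an alert queue.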
Industrial Control Systems (ICS) are critical to local and national communities alike; they are the systems running the power grid, filtering water, pumping oil, and manufacturing the items we rely upon. It thus makes sense that cyber attacks against these infrastructures are a highly interesting topic to everyone. Unfortunately, when there is high interest in an area and a lack of case studies, the void that forms is filled with hype. There are real threats that need to be explored, such as the attacks that took place on the Ukrainian power grid, but there is also a lot of hype and misunderstanding about the threats ICS face. This will be a case-study-driven presentation on what the hype is, what the facts are, and what is being done to make our global infrastructure more secure.
In the last few years, private companies, government agencies, and security vendors have boosted the number of initiatives to share Threat Intelligence, predominantly focused on the sharing of Indicators of Compromise (IOCs). In this talk, we will review what Threat Intelligence is, what the different use cases are, and how it can help your organization. Finally, we will give you an overview of MISP (the Open Source Threat Intelligence Platform) and OTX (Open Threat Exchange), with a focus on helping you start consuming Indicators of Compromise.
Web applications are notoriously challenging to secure because they present so many avenues of attack. Providing proper functionality and doing it securely requires a balancing act, which can often put security on the back burner. Focusing your development efforts on proper web application development techniques, coding standards, and security testing tools will ensure that your web application is as secure as possible upon deployment. Attendees of this session will learn: 1. The proper approach to building a secure web application 2. Necessary security coding standards that anyone can apply 3. Must-use tools for proper security testing
ChaoSlingr is a Security Chaos Engineering tool focused primarily on experimentation on AWS infrastructure to bring system security weaknesses to the forefront. The industry has traditionally put emphasis on the importance of preventative security control measures and defense-in-depth, whereas our mission is to drive new knowledge of, and perspective into, the attack surface through proactive detective experimentation. With so much focus on preventative mechanisms, we rarely go beyond one-time or annual pen-testing requirements to validate whether those controls are actually performing as designed. Our mission is to address security weaknesses proactively, going beyond the reactive processes that currently dominate traditional security models.
As software has proliferated to become a critical part of our daily lives, increasing in both variety and volume beyond the ability of human hackers to effectively analyze it, the need for automated techniques to identify and mitigate bugs and vulnerabilities has become painfully apparent. Over the last few decades, several paradigms for the design of such automation have been explored by security researchers, numerous buzzwords have been coined, and many papers have been written to convey various techniques. However, despite decades of work, techniques for the automation of finding and fixing bugs are still in their infancy, and most such analyses are still done by hand. In this talk, I will delve into why this is the case, using the DARPA Cyber Grand Challenge as a vantage point to explore the issue. I will explore the road we have taken to get where we are, the fundamental (and not so fundamental) limitations holding us back, and muse about the next steps. I'll discuss this all in the context of my research into cyber autonomy and in the challenges and hurdles that my team, Shellphish, faced in the Cyber Grand Challenge and in applying our Cyber Reasoning System beyond that contest.
Some machine learning applications involve training data that is sensitive, such as the medical histories of patients in a clinical trial. A model may inadvertently and implicitly store some of its training data; careful analysis of the model may therefore reveal sensitive information. To address this problem, we demonstrate a generally applicable approach to providing strong privacy guarantees for training data. The approach combines, in a black-box fashion, multiple models trained with disjoint datasets, such as records from different subsets of users. Because they rely directly on sensitive data, these models are not published, but instead used as "teachers" for a "student" model. The student learns to predict an output chosen by noisy voting among all of the teachers, and cannot directly access an individual teacher or the underlying data or parameters. The student's privacy properties can be understood both intuitively (since no single teacher and thus no single dataset dictates the student's training) and formally, in terms of differential privacy.
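The noisy-voting aggregation step described above can be sketched as follows. This is only the aggregation mechanism; the student's training and the differential-privacy accounting are omitted, and the noise scale `gamma` is an illustrative choice.

```python
import random

def noisy_aggregate(teacher_votes, num_classes, gamma=0.5, rng=None):
    """Return the label with the highest Laplace-noised vote count.

    teacher_votes: one predicted label per teacher, each teacher having
    been trained on a disjoint partition of the sensitive data.
    """
    rng = rng or random.Random(0)
    counts = [0] * num_classes
    for v in teacher_votes:
        counts[v] += 1
    # Laplace(1/gamma) noise, drawn as the difference of two exponentials.
    noisy = [c + rng.expovariate(gamma) - rng.expovariate(gamma)
             for c in counts]
    # The student only ever sees this noisy winner, never an individual
    # teacher's vote, parameters, or underlying records.
    return max(range(num_classes), key=noisy.__getitem__)
```

Because no single teacher (and thus no single partition of the data) can change a large vote margin, the released label leaks very little about any one record, which is what the formal differential-privacy argument captures.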
Over the past few years, malware authors have developed increasingly sophisticated and creative ways to infect endpoints. Encrypting ransomware is no longer merely an annoyance. It's a highly persistent and organized criminal "business model" in full deployment, with new abilities to move laterally through networks and infect machines previously thought impossible to infect. The damage from becoming a ransomware victim is considerable, and can even put organizations out of business. At Webroot, we believe it's possible to effectively protect businesses and users, but only by understanding your adversary and the techniques they use for their attacks. In this webinar, Webroot's own Senior Threat Research Analyst, Tyler Moffitt, will offer expert insights into emerging encrypting ransomware variants and how you can stay ahead.
Companies facing rampant attacks and data breaches have started turning to artificial intelligence techniques, such as machine learning, for security tasks. A machine learning classifier automatically learns models of malicious activity from a set of known-benign and known-malicious observations, without the need for a precise description of the activity prepared in advance. However, the effectiveness of these techniques primarily depends on the feature engineering process, which is usually a manual task based on human knowledge and intuition. Can we automate this process? Can we build an intelligent system that not only learns from examples, but can also help us build other intelligent systems? We developed a system, called FeatureSmith, that engineers features for malware detectors by synthesizing the knowledge described in thousands of research papers. As a demonstration, we trained a machine learning classifier with automatically engineered features for detecting Android malware and we achieved a performance comparable to that of a state-of-the-art detector for Android malware, which uses manually engineered features. In addition, FeatureSmith can suggest informative features that are absent from the manually engineered set and can link the features generated to human-understandable concepts that describe malware behaviors.
Chief Information Security Officers (CISOs) rarely have a typical day; change is the only constant in the life of a CISO. No two CISOs look alike. Every organisation treats them differently, and they all come from different backgrounds. They are known by different designations, such as Chief Security Officer, Information Security Officer, Head of IT Security, Security Manager, etc. The one thing common to all is a similar set of broad challenges, and a set of techniques used to get the job done and to set priorities.
They are all around us, and we can see them clearly through their exploits and plunder. However, we do not have to remain at the receiving end, and they do not have to lead this game every time. I believe that if we get our basics right we can outsmart them while staying ahead of the curve, so I present to you a tried and tested strategy for cyber defense.
In the Apple ecosystem, in order to explore the internals and security aspects of an Apple iOS-based device, it is necessary to use a jailbreak. While many associate jailbreaking with hackers simply out to steal sensitive data, jailbreaking is a unique way for the research community to explore and enhance the features and capabilities of a device. By creating and using jailbreaks, we can gain valuable information which can help us stay ahead of those who are looking to leverage threats for personal gain. In this talk I will focus on the process of jailbreaking modern iOS devices. We will start by diving into the history of jailbreaks. When did they first surface? How have they evolved over time? Next, we'll take a look at the purposes and goals of jailbreaking. Finally, we'll walk through the evolution of iOS security enhancements over time, including modern exploit mitigation techniques and how jailbreaks are currently being used to better educate and protect the security research community. Attendees will gain an in-depth understanding of the steps needed nowadays for creating a jailbreak and why they are important. They will learn how iOS security mitigations work and what is needed in order to better understand the inner workings of today's latest technologies. Finally, attendees will learn how to use an exploit chain, helper tools, and techniques to create jailbreaks and better understand the iOS platform.
A cybersecurity breach is a given; most organizations should be prepared for one. There are ways to contain the impact of a breach. Attacker dwell time has fallen to 90+ days (source: Ponemon), compared to 270+ days a few years back, but it continues to be a key metric that captures the lacunae in today's detection systems and processes. In this session, we discuss tools and techniques to reduce attacker dwell time to less than a day. We also look at methods to increase the speed of cybersecurity incident response. Together, these enable reducing the business impact of a cybersecurity intrusion.
Introduction / OSINT and the first step of Cyber Kill Chain: "Hacker Reconnaissance" / Brief description of OSINT / What is "Hacker Reconnaissance" in the cyber kill chain / OSINT sources and demo of Censys, Shodan, Hacker Forums, Paste Sites, Vuln DBs and Cyber Threat Search Engine / OSINT mind map / Internet wide scanners / Hacker sites and deepweb / Known vulnerability databases / Google DORK / NormShield Cyber Threat Search Engine / OSINT & Hacker Reconnaissance tools in Kali Linux and Windows / theHarvester / sublist3r / Foca / Make your own tool with python / Basic REST API usage / Cymon integration for IP check
The cyber threat landscape is ever-changing with new and more advanced threats. In 2017 we have experienced major global ransomware attacks with devastating impacts and experts predict the frequency and severity of these attacks will increase in the near future. This session will explore the cyber threats on the horizon and best practices for detecting and mitigating these threats on the cyber battlefield in the never-ending cyber war of the 21st century.
Potentially unwanted programs (PUP) such as adware and rogueware, while not outright malicious, exhibit intrusive behavior that generates user complaints and makes security vendors flag them as undesirable. PUP has been little studied in the research literature despite recent indications that its prevalence may have surpassed that of malware. We have performed a systematic study of Windows PUP over a period of 4 years using a variety of datasets including malware repositories, AV telemetry from 3.9 million real Windows hosts, dynamic executions, and financial statements. This presentation summarizes what we have learned from our measurements on PUP prevalence, its distribution through pay-per-install (PPI) services, which link advertisers that want to promote their programs with affiliate publishers willing to bundle their programs with offers for other software, and the economics of PPI services that distribute PUP.
Many machine learning models are vulnerable to adversarial examples, maliciously perturbed inputs designed to mislead the model. Adversarial training explicitly includes adversarial examples at training time in order to increase a model's robustness to attacks. To keep adversarial training tractable, we usually rely on simple first-order approximations of the worst-case perturbation for each data point. We show that this form of adversarial training admits an unsatisfactory global minimum, wherein the model's decision surface is highly curved near training points, thus resulting in first-order methods that produce poor adversarial examples. We experimentally verify that adversarially trained models on MNIST and ImageNet exhibit this curious behavior. We further show that these models remain surprisingly vulnerable to black-box attacks, where adversarial examples are crafted on a separate model trained for the same task. We harness our observations in two ways. First, we propose a simple yet powerful novel attack that first applies a small random perturbation to an input, before finding the optimal perturbation under a first-order approximation. Our attack outperforms prior first-order attacks on models trained with or without adversarial training. Second, we propose Ensemble Adversarial Training, an extension of adversarial training that additionally augments training data with perturbed inputs obtained from a number of fixed pre-trained models. On ImageNet and MNIST, ensemble adversarial training vastly increases robustness to black-box attacks. This is joint work with Alexey Kurakin, Nicolas Papernot, Dan Boneh, and Patrick McDaniel.
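The "random step, then first-order step" attack can be sketched on a toy model. Here the target is a 2-D logistic-regression classifier rather than a deep network, and all weights, budgets, and names are our own illustrative choices.

```python
import math
import random

def fgsm_with_random_start(x, y, w, b, eps=1.0, alpha=0.5, rng=None):
    """One random step of size alpha, then one signed-gradient (FGSM)
    step with the remaining budget eps - alpha, against a logistic model
    p(class 1) = sigmoid(w.x + b) with true label y in {0, 1}."""
    rng = rng or random.Random(0)
    # 1. Small random perturbation, to escape the sharply curved loss
    #    surface right at the data point.
    x_r = [xi + alpha * rng.choice([-1.0, 1.0]) for xi in x]
    # 2. Gradient of the logistic loss w.r.t. the input, at the new point.
    z = sum(wi * xi for wi, xi in zip(w, x_r)) + b
    p = 1.0 / (1.0 + math.exp(-z))        # model's probability of class 1
    grad = [(p - y) * wi for wi in w]      # dLoss/dx for the true label y
    # 3. Signed step using the remaining budget.
    return [xi + (eps - alpha) * (1.0 if g > 0 else -1.0)
            for xi, g in zip(x_r, grad)]
```

On a curved deep-network loss surface, the initial random step is what stops the gradient computed at the data point itself from being misleading.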
Building machine learning models of malware behavior is widely accepted as a panacea for effective malware classification. A crucial requirement for building sustainable learning models, though, is to train on a wide variety of malware samples. Unfortunately, malware evolves rapidly and it thus becomes hard, if not impossible, to generalize learning models to reflect future, previously-unseen behaviors. Consequently, most malware classifiers become unsustainable in the long run, becoming rapidly antiquated as malware continues to evolve. In this talk, I present Transcend, a framework to identify aging classification models in vivo during deployment, well before the machine learning model's performance starts to degrade. This is a significant departure from conventional approaches that retrain aging models retrospectively when poor performance is observed. Our approach uses a statistical comparison of samples seen during deployment with those used to train the model, thereby building metrics for prediction quality. I then show how Transcend can be used to identify concept drift based on two separate case studies on Android and Windows malware, raising a red flag before the model starts making consistently poor decisions due to out-of-date training.
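A toy sketch of the statistical comparison idea (Transcend's actual nonconformity measures and thresholds differ): score how unlike the training data a deployment-time sample is, turn that into a p-value as in conformal prediction, and raise the red flag when it is too low.

```python
def nonconformity(sample, reference):
    # 1-D toy feature: distance to the nearest reference sample.
    return min(abs(sample - r) for r in reference)

def p_value(sample, training):
    # Fraction of training points at least as nonconforming as the
    # sample, each training point scored leave-one-out against the rest.
    a_test = nonconformity(sample, training)
    loo = [nonconformity(training[i], training[:i] + training[i + 1:])
           for i in range(len(training))]
    return sum(s >= a_test for s in loo) / len(loo)

def drifted(sample, training, alpha=0.1):
    # Flag the prediction when the sample looks unlike anything the
    # model was trained on, before accuracy visibly degrades.
    return p_value(sample, training) < alpha
```

The key property is that no ground-truth labels for the new samples are needed, which is what lets the check run in vivo during deployment.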
DarkLight is a first-of-its-kind, AI-based expert system that enables sense-making and decision-making for active cyber defense and information sharing. It helps an organization immediately deploy a scientific, evidence-based foundation for vastly improved cyber security operations and for automation of its most highly prized resource: the logic and experience of the human analyst. DarkLight automates what was previously solely a human task in frameworks such as Integrated Adaptive Cyber Defense (IACD), a collaboration between the NSA, DHS, Johns Hopkins APL, and many industry-leading vendors. Upper-level sense-making and decision-making functions that require human expertise and analytic tradecraft in the loop are now captured, augmented, and/or automated to perform at machine speed, while the human remains on the loop only as needed, to further train and guide the AI. Created, tested, and proven at one of the nation's most advanced research laboratories over the course of more than four years, the technology has earned the company multiple patents. The company recently emerged from stealth and first demonstrated the product publicly at the RSA Early Stage Expo in February 2017.
This talk will focus on an important but frequently overlooked area of Industrial Control System cybersecurity, asset and configuration management. While asset owners often do a good job of physical inventories, their management of software assets and their configurations on the OT side often leave much to be desired. While NERC CIP has forced utilities to dramatically improve their change and configuration management processes, particularly the tracking and approval aspects, many other industries are still operating in the dark in many cases. Moreover, even with utilities, the efforts may still rely heavily on labor-intensive, spreadsheet-based processes. We will discuss how organizations can insert more automation to not only improve security but also reduce costs. The session will highlight both the tools available, and, more importantly, the steps used to integrate those tools into a process appropriate for the environment.
Hear how a whole new trend is on the rise: the evolution of ransomware as we know it into pseudo-ransomware, where even the purpose of an attack is difficult to ascertain. We will share details on what pseudo-ransomware is and why it is gaining traction. The audience will see examples of wiper campaigns that destroy entire organizations. The attackers pretend that this is ransomware, but it is in fact developed for the sole purpose of destruction, as opposed to extortion. Based on interviews with ransomware actors, we will share the motivations, tactics, and techniques behind these attacks and why they are so different from what we have seen with ransomware to date. The session will include insights from a full take-down operation against a major ransomware family. This session will be conducted by one of Europe's leading cybercrime experts, who has been at the forefront of the NoMoreRansom Initiative, a multi-company effort working in tandem with law enforcement to pool resources to address the ransomware threat and assist victims in retrieving stolen data without paying criminals.
This session will cover threats that traditional anti-virus software is not equipped to face, including file-less malware, polymorphic malware, weaponized documents, targeted attacks, in-memory attacks, ransomware, phishing, and other undetectable advanced threats, and will explore a new endpoint protection method that guards applications in runtime through isolation to prevent compromise. Cyber attacks are on the rise, both in numbers and craftiness. New research shows that attackers are increasingly beating security detection at the gateway and on the endpoint by initiating attacks that don't drop malicious files at all, thus evading file-based detection. And even when they do use malicious files, once they get past the gateway filtering, the typical detection mechanisms aren't picking them up. The research found that few pieces of malware actually had signatures within AV engines. Only half of file-based attacks had been submitted to malware repositories and, of those, only 20 percent made it to AV engines. Are you truly prepared to defend your turf and your data when (not if) the attackers come after you? How should you be preparing beyond just having an endpoint security solution in place? Join us for this compelling session to see how advanced attacks such as zero-day attacks, ransomware and file-less malware can impact your organization causing operational, financial, and reputation damage, and the steps you can take to minimize your risk by preventing compromise.
Many believe that “To pay or not to pay?” is the fundamental dilemma in ransomware and cyber extortion. However, who is the crisis manager, and what should be the engagement process with the hackers, shareholders, customers and frustrated employees are far more relevant, and urgent, issues to address. This brief presentation will focus solely on managing the human dimension in cyber crisis, and will cast light on how to negotiate with cyber-criminals, as well as how to set up the company’s crisis management structures.
Setting up a threat intelligence program can be hard, but it doesn’t need to be, providing that you focus on the basics and keep things simple. In this talk I will share insights and fundamental concepts that I have learned in my own threat intel journey. I hope to leave you with valuable information that may allow you to finally turn the tables on your adversaries.
Snoops and active attackers mean our networks are increasingly hostile. Protecting users and their information requires HTTPS on every page of every site. Browsers have started limiting powerful features like geolocation and ServiceWorkers to pages served over HTTPS, and actively warn users when they visit non-secure pages. Fortunately, moving sites of any size and complexity to HTTPS is easier than ever. Certificates can be acquired automatically at no cost, new protocols like HTTP/2 and Brotli compression mean that secure connections can improve performance, and web developers can use features like upgrade-insecure-requests and referrer policy to avoid common pitfalls as they upgrade to HTTPS. Eric Lawrence offers practical advice to defuse common concerns about migrating to HTTPS and shares news about the latest browser changes that encourage web developers to secure their sites.
The Firebase Realtime Database has lots of cool features that make it enjoyable for app development, but its easy-to-use API is not enough to make it production-ready. The feature that actually makes it ready for production is its powerful security model. This talk will explain why the security rules engine is such a critical feature and explore practical aspects of the security rules language.
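As one small taste of the rules language, the common per-user pattern below grants each authenticated user access only to their own subtree (the `users` path and field layout are illustrative, not from the talk):

```json
{
  "rules": {
    "users": {
      "$uid": {
        ".read": "$uid === auth.uid",
        ".write": "$uid === auth.uid"
      }
    }
  }
}
```

Here `$uid` is a wildcard path variable and `auth.uid` is the ID of the signed-in user, so the database itself enforces the access policy regardless of what the client code does.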
Every day, billions of people use the consumer web to find information, connect with friends and colleagues, post or store content, and conduct business. And while these services have been a boon to the economy, they have also been a boon to the underground economy -- billions of people are now subject to being scammed, defrauded, impersonated, or tricked into releasing sensitive information. In this talk we will discuss the threats faced by consumer-facing apps and websites and give an overview of how we can use data and machine learning to stop them. Topics we will consider include account access, account creation, and automation.
The World Wide Web is facilitating a huge erosion of privacy. Because web technologies permit it, governments, advertising networks and internet service providers monitor what everyone reads online. We can combat this spying by redesigning web browsers to stop exposing users’ private information. I will discuss the Tor Project’s work to develop Tor Browser, a web browser with many unique features that protect user privacy. I will describe Tor Browser’s privacy technologies including onion networking, first-party isolation, fingerprinting resistance, disk hygiene and hardening against exploits. I will also talk about Tor Project’s ongoing collaboration with Mozilla to bring these technologies to Firefox.
In this talk we will explore the many different ways of automating security testing with the OWASP Zed Attack Proxy (ZAP) and how it ties into an overall Software Security Initiative. Over the years, ZAP has made many advancements to its powerful APIs and introduced scripts to make security automation consumable for mere mortals. This talk is structured to demonstrate how ZAP's APIs and scripts can be integrated with automated testing frameworks beyond Selenium, continuous integration and continuous delivery pipelines beyond Jenkins, scanning authenticated parts of the application, options to manage the discovered vulnerabilities, and so on, with real-world case studies and implementation challenges.
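ZAP exposes its functionality over a local REST API with URLs of the form `<zap>/JSON/<component>/action/<action>/?<params>`, so any CI job that can issue HTTP requests can drive it. The sketch below only builds the request URLs; the host, port, and API key value are assumptions for illustration, and actually sending the requests is environment-specific.

```python
from urllib.parse import urlencode

ZAP = "http://127.0.0.1:8080"   # default local ZAP proxy (assumption)
API_KEY = "changeme"            # placeholder; configured in ZAP's options

def zap_url(component, action, **params):
    # Build a ZAP JSON-API action URL, e.g. for the spider or the
    # active scanner, with the API key appended to the query string.
    query = urlencode(dict(params, apikey=API_KEY))
    return f"{ZAP}/JSON/{component}/action/{action}/?{query}"

# A typical pipeline: spider the target first, then run the active scan.
spider_url = zap_url("spider", "scan", url="https://example.com")
ascan_url = zap_url("ascan", "scan", url="https://example.com")
```

In a Jenkins or other CI stage, the job would poll the corresponding status endpoints until each phase completes, then pull alerts for the build report.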
The high demand for medical, fitness, fintech, and business apps, together with the rise of IoT devices at an alarming rate, has brought about great cyber-risk issues. These issues need to be addressed at the beginning of the life cycle of the software that operates them. These devices collect a great mass of sensitive data on companies' operations and on individuals. Companies producing these services lack a strategy for secure development, creating a high risk of exposing all of that data to cybersecurity threats, and therefore creating the need for security at all stages of development. This is possible only with close collaboration between security professionals and developers. We are going to explore a more dynamic and secure way of managing infrastructure and automated deployment by giving equal priority to risk management and prevention, flexibility, speed, time to market, and security.
The world is full of authorities who promise to behave randomly: commercial lottery drawings worth millions of euros, school assignments, group draws for international football tournaments, or choosing travelers for secondary security screening. More technically speaking, randomness is a requirement in many algorithms and hence verifiable randomness is a necessary prerequisite for algorithmic transparency. Yet today authorities typically provide no evidence to prove that they are actually behaving randomly. If any evidence is provided, it is typically in the form of a physical randomness process such as rolling dice. This talk will describe the ways in which we can use cryptography to provide stronger evidence to verify that lotteries are behaving correctly.
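The simplest cryptographic building block behind such evidence is commit-reveal: the authority commits to a secret before the draw, outside parties contribute public entropy, and the outcome hashes both together, so neither side can bias it alone. This is only a minimal sketch with illustrative names; real lottery designs add deadlines, many contributors, and handling of parties that refuse to reveal.

```python
import hashlib
import secrets

def commit(secret: bytes) -> bytes:
    # Published before the draw; binds the authority to `secret`.
    return hashlib.sha256(secret).digest()

def draw(secret: bytes, commitment: bytes,
         public_entropy: bytes, n_tickets: int) -> int:
    # Anyone can check the revealed secret against the prior commitment,
    # then recompute the winning ticket themselves.
    if hashlib.sha256(secret).digest() != commitment:
        raise ValueError("revealed secret does not match commitment")
    h = hashlib.sha256(secret + public_entropy).digest()
    return int.from_bytes(h, "big") % n_tickets

s = secrets.token_bytes(32)                # authority's secret
c = commit(s)                              # published first
winner = draw(s, c, b"entropy announced by participants", 100)
```

Unlike rolling dice on stage, every step here is reproducible after the fact, which is exactly the kind of verifiability the talk argues authorities should provide.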