

Thursday, December 31, 2020

Defending Democracy 2020


Democracy was invented in Athens, Greece. There were 3,000 electors, and they cast votes using clay tablets deposited in a jar: white for, black against. When all the votes were cast, the jar was broken open and the tablets counted. They also counted the uncast tablets, to make sure none had been added or removed. Simple and effective secret balloting with built-in cross-checking, voter registration, and verification. For the Greeks knew that if people can cheat, they will.

Fast forward 2,000 years and we have 300,000,000+ people and about 200,000,000 eligible voters. Add in digital technology, paper ballots, early voting, and mail-in ballots. Much has been made of how sacrosanct and tamper-proof paper ballots are. How true is that, what has changed to undermine the authenticity of paper ballots, and what remedies can restore the security of balloting in a technology age?

In modern digital voting systems, there are many safeguards built in. We have evolved from all-digital systems to ones that combine paper and digital records, along with precinct-level digital scanners to verify the paper ballot images. Coupled to this are the poll books that track who is voting in person at polling stations, along with who has submitted absentee ballots. The focus has been on securing this process and ensuring that the votes reported match the number of paper ballots and the poll book entries. The number of absentee ballots was limited to overseas voters, military, diplomats, and temporarily out-of-state residents, who followed a strict procedure to obtain ballots and return votes. Furthermore, auditing processes are tuned to match these election procedures, along with triggers for when an audit is necessary and the types of audits needed. All this built a secure “castle” around the election voting process.

Recently this scenario has changed dramatically with the widespread introduction of mail-in paper ballots. Effectively, this has been like the invention of gunpowder and cannon against the security of the current voting castle, which was not designed to cope with the challenges they present. And if people can cheat, then they will. Worse, if you are going to cheat, cheat big: it is then even harder to argue against the outcomes, and you avoid the audit criteria, which are all focused on narrow winning margins in close elections.

The challenges that mail-in paper ballots present in today's digital world are many. First, digital technology can replicate paper ballots in ways that are hard to invalidate by simple visual inspection. Second, digital scanners designed for in-precinct tabulation are ill-equipped to cross-check mail-in ballots. Where did this ballot originate? Has it already been counted? Has this person already voted? Is this person a registered voter? Of course, it is possible to use secure, one-time-use digital codes on paper ballots to ensure these aspects are checked. Today that is not happening.
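The one-time-code idea can be sketched in a few lines. This is a hypothetical illustration only (the function names and data structures are invented here); a real system would also need signed codes, strict privacy separation between the code and the vote, and audit logging:

```python
import secrets

def issue_codes(n):
    """Generate n unique one-time ballot codes at mail-out time.
    (Illustrative sketch; real codes would be signed and tracked per voter.)"""
    return {secrets.token_hex(16) for _ in range(n)}

def accept_ballot(code, issued, redeemed):
    """Accept a returned ballot only if its code was actually issued
    and has not already been used -- the cross-check mail-in ballots
    currently lack."""
    if code not in issued:
        return "rejected: unknown code"
    if code in redeemed:
        return "rejected: already counted"
    redeemed.add(code)
    return "accepted"
```

A duplicated or fabricated ballot then fails at intake instead of disappearing into the counted pile.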

A further aspect of digital technology is free access to electoral voter rolls and addresses, with personal data such as age, life preferences, and more. Coupled to this are high-speed data-analysis tools that can cross-reference these against death records, state and county records, and more. This allows mail-in ballots to be tailored and printed for targeted voter populations, enabling the creation of “vote dumps”.

The upshot is that people can generate paper mail-in ballots that bypass the necessary checks, that can pass visual inspection, and that will be included in the vote-counting process by the clerks tasked with receiving and accepting them. Similarly, the computer scanners will accept them, and most importantly, once these ballots are mixed into the regular blocks of ballots, they are indistinguishable and cannot be separated back out again. To use the castle analogy again, this is the perfect Trojan Horse.

With mail-in paper ballots you have very limited ability to crosscheck between the number of ballots mailed out, the numbers received back, and the people who did cast those ballots. Has the same person voted multiple times? Has the same ballot been copied and submitted by multiple people? Did the person vote or did someone else vote on their behalf? What has happened to the ballots we did not receive back again? How can people see if the ballot they mailed in has been counted? Clearly there are simply too many variables at play and a huge potential for exploitation of the process.

Restoring Trusted Election Processes

Modern digital technology provides many conveniences and the ability to validate and verify. Banking systems are an obvious example: they work well because the identities of the actors and the transactions are known. The biggest challenge with voting systems is the need to retain voter privacy. However, one idea from accounting can and should apply: double ledgers. Simply put, if there is more than one secure chain of custody for the records, then those records can be cross-checked. With in-person polling-place voting you have exactly this: the paper ballots, the digital scans, and the poll book entries can all be cross-checked for matching tally counts.
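The double-ledger cross-check amounts to comparing independently kept tallies. A minimal sketch (invented names, not any deployed system): three per-precinct counts must agree, and any mismatch flags that precinct for a hand audit.

```python
def audit_flags(paper, scans, pollbook):
    """Compare three independent per-precinct tallies (dicts mapping
    precinct -> ballot count); return the precincts whose tallies
    disagree and therefore need a hand audit."""
    flagged = []
    for precinct in paper:
        # All three record chains must report the same count.
        if not (paper[precinct] == scans.get(precinct) == pollbook.get(precinct)):
            flagged.append(precinct)
    return flagged
```

Mail-in ballots break this scheme precisely because, today, there is no second ledger to compare against.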

Today we have three aspects of voting: early voting, day-of voting, and now large-scale mail-in voting.

What is required are additional security measures and mechanisms to validate and verify across these three, creating multiple sets of records that can be matched. This is very possible and can be implemented.

A further aspect of the original Greek system is also needed: independent verification. A simple system can be easily witnessed and inspected. Today almost all elections are managed by three commercial vendors and their systems. The software is owned by them, and the details are trade secrets. Computer scientists will tell you they can make things secure with encryption and other tools such as scanned QR codes. All this does is obfuscate things for poll workers and observers, so the entire process is opaque.

Solving that requires open, public, international election standards coupled with open source software. This allows the process being used to be independently verified and reviewed. This is not new: watchdog groups have been asking for it for decades. The international standards have been built and published. The industry and commercial vendors have repeatedly obstructed their adoption and built proprietary methods instead. Ironically, those international standards include the very mechanisms and cross-checks needed to secure mail-in balloting.


Thursday, May 18, 2017

Standard Authentication Mechanisms

Introduction

There are multiple standards for authentication: SAML, OAuth, and OpenID Connect. This paper will compare and contrast these standards and explain the contexts in which each is best used. First, let’s clarify the definition of authentication. Authentication is the verification of someone’s identity. This is different from authorization: authorization grants someone access to information based on who they are and, optionally, other criteria such as time of day.

We will touch on authorization briefly because some authentication standards support authorization to some extent. More specifically, there are two kinds of authorization: coarse-grained and fine-grained. An example of coarse-grained authorization is whether a user has access to a particular web application (or subset thereof). However, an example of fine-grained authorization is whether a user can see a particular field on a page, or whether they can perform a particular operation (e.g. button-click on a page). Regardless, one cannot authorize a user unless they are first authenticated. OAuth 2.0 supports coarse-grained authorization by returning the allowed scopes -- as a result of a successful authentication.
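As a concrete illustration of that coarse-grained check (a minimal sketch, not any particular vendor's API): OAuth 2.0 expresses granted scopes as a space-delimited string, so authorization reduces to a membership test.

```python
def authorized(granted_scopes, required_scope):
    """Coarse-grained authorization check. OAuth 2.0 scopes arrive as a
    space-delimited string (per RFC 6749); access is allowed only if the
    required scope is among those the authorization server granted."""
    return required_scope in granted_scopes.split()
```

Fine-grained decisions (per-field, per-button) happen inside the application, beyond what the scope mechanism conveys.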

Authentication Mechanisms

There are three major standards for authenticating users on the internet and in the context of software-based identity verification: SAML, OAuth 2.0, and OpenID Connect. SAML was designed to let a user’s prior authentication support a subsequent authentication to a different application. This is typically referred to as Single Sign-On (SSO). The benefits of SSO are:

  • Stronger security because fewer passwords are required, and users tend to expose their passwords when they have too many
  • Increased efficiency for employees (less time logging into applications)
  • Centralizes authentication to a single Identity Provider (IDP), and allows directories such as Active Directory to be used as the source-of-truth for user identity
  • Facilitates automation such that new users can be automatically provisioned into the applications in which they have been granted access


SAML uses what I call a “trusted introduction” or “vouching” paradigm: the IDP is trusted as the gatekeeper and true authenticator, and once trust is established between the IDP and a SAML-enabled application (referred to as a Service Provider or SP), the SP can rely on the IDP to vouch for the user’s access to that application. This trust is established via public/private asymmetric key cryptography between the IDP and the SP -- per the SAML protocol.
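To make the assertion concrete, here is a minimal sketch that pulls the issuer and asserted principal out of a hypothetical, heavily trimmed SAML assertion using Python's standard XML parser. The crucial step a real SP performs -- verifying the assertion's XML digital signature with the IDP's public key -- is deliberately omitted here.

```python
import xml.etree.ElementTree as ET

SAML_NS = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}

# Hypothetical minimal assertion; real assertions carry a ds:Signature
# that the SP must verify with the IDP's public key before trusting anything.
assertion_xml = """
<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">
  <saml:Issuer>https://idp.example.com</saml:Issuer>
  <saml:Subject>
    <saml:NameID>alice@example.com</saml:NameID>
  </saml:Subject>
</saml:Assertion>
"""

def parse_assertion(xml_text):
    """Extract the issuer and asserted principal from a SAML assertion."""
    root = ET.fromstring(xml_text)
    issuer = root.find("saml:Issuer", SAML_NS).text
    name_id = root.find(".//saml:NameID", SAML_NS).text
    return issuer, name_id
```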

In contrast to this, OAuth 2.0 uses a “valet key” concept for granting access and authorization (not necessarily authentication). When a user wants a client to access an application (a “Resource Server” or RS) on their behalf, the client obtains the valet key (referred to as an access token) from an Authorization Server (AS) such as Facebook or Google. This access token can then be used to access the endpoint RS/application. OAuth does not support dynamic discovery, so the authorization server must be hard-coded into the RS website.

Finally, OpenID Connect is essentially a layer on top of OAuth 2.0 that supports federated authentication. It leverages the valet-key underpinnings of OAuth 2.0 but adds the concept of what some call a referral letter. The endpoint RS/application requests a referral letter (conceptually) from the AS (e.g. Google), and the AS uses the OAuth 2.0 protocol as the mechanism for message exchange. The AS could then be considered a “notary” in our metaphor: it stamps or notarizes the referral letter, and the letter can be provided to the RS site, whereby the RS accepts it and allows the user to access the RS/application. OpenID Connect also supports dynamic registration and discovery of authentication providers, as well as a login flow. In the following sections we will explore these three authentication mechanisms in greater detail.
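The “referral letter” in OpenID Connect is concretely an ID Token: a JWT whose middle segment is a base64url-encoded JSON claims set. A minimal sketch of reading those claims follows; note it performs no signature verification, which a real relying party must do against the provider's published keys before trusting anything.

```python
import base64
import json

def decode_jwt_claims(token):
    """Decode the claims segment of a JWT (header.payload.signature).
    Illustration only: the signature is NOT verified here."""
    payload_b64 = token.split(".")[1]
    # Restore the base64url padding stripped by the JWT encoding.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))
```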

SAML Overview

SAML (we will limit this discussion to SAML 2.0) is the most mature authentication standard. The SAML standard came from the OASIS organization. However, due to its complexity and use of PKE and XML, it was not a good fit for mobile devices and the proliferation of new social websites and applications. Having said that, SAML is the dominant enterprise authentication standard today, and several major vendors provide excellent products -- both on-premises (e.g. Oracle, CA, Ping) and cloud-based (e.g. Okta and OneLogin).

From a technical standpoint, SAML leverages the concept of an Assertion -- a collection of standardized XML elements conveying a fact. There are 3 types of assertions: 1) authentication, 2) attribute, and 3) authorization. SAML also defines Protocols, which are sets of message exchanges that achieve a particular function (e.g. the Single Logout Protocol to log someone out of all the SAML sessions established concurrently via the IDP). SAML also defines Profiles -- concrete use-cases of applied assertions. An example of a profile is the Web Browser SSO Profile. Finally, SAML defines Bindings, which specify how protocols and profiles are implemented at the lower messaging layer. Examples are the HTTP POST and SAML SOAP bindings.

With regard to configuring SAML, the following points will facilitate an understanding of how to implement SAML for a particular vendor configuration:

  • SAML Issuer: an arbitrary identifier for the IDP, but it must match the Issuer set up in the SP configuration
  • SAML Recipient: typically maps to a login URL
  • SAML Audience: identifies the SP; this is often set to the top-level domain name of the SP application
  • IDP Login URL: used in SP-initiated SSO to redirect the user to the IDP
  • IDP Logout URL: useful for when user logs out of SP

Note that the IDP typically provides a PKE digital certificate to the SP out-of-band. The SP can then verify the digital signature in the SAML Assertion using the IDP’s public key. The asserted principal can either be an agreed-upon user name or a federated ID (e.g. an Active Directory identifier). Finally, the SAML RelayState parameter can be used to convey URLs across session requests/responses (e.g. for deep-linking into the SP application).
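As a hypothetical illustration of the configuration points above (the key names here are invented; every vendor labels these differently), an SP-side SAML configuration might look like:

```python
# Hypothetical SP-side SAML configuration; key names are illustrative only.
saml_config = {
    "issuer": "https://idp.example.com/saml",         # must match the IDP's Issuer
    "recipient": "https://app.example.com/login",     # typically the SP login URL
    "audience": "app.example.com",                    # identifies the SP
    "idp_login_url": "https://idp.example.com/sso",   # SP-initiated redirect target
    "idp_logout_url": "https://idp.example.com/slo",  # used when the user logs out
}
```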

OAuth 2.0 Overview

OAuth is an IETF standard. It was designed to be friendlier to mobile devices and REST-based applications than SAML. OAuth basically authorizes client programs to access resources on behalf of an end-user. OAuth 1.0 was more limited and more cumbersome than OAuth 2.0. In OAuth terminology, the SP is now called a Resource Server, and access tokens are obtained from an Authorization Server -- which is roughly analogous to a SAML IDP. End-users are considered Resource Owners. OAuth offers 4 major concepts:

  • Grant Types
  • Client Types
  • Client Profiles
  • Application-defined Scopes

More specifically, there are 4 Grant Types:

  • Authorization code: the client exchanges an authorization code for an access token; this avoids passing the token through the browser, where it could be intercepted
  • Implicit: the access token is granted as soon as the user authenticates. This is less secure and is best for temporary access to non-critical data.
  • Resource owner password: the client uses a username and password to obtain an access token; this requires a high degree of trust in the client application.
  • Client credentials: the client provides a credential that can be trusted due to a configuration step and an out-of-band exchange. A typical example is the JWT bearer flow, whereby the JWT is signed with a private key and the public key is uploaded to the authorization server.

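The authorization code exchange above can be sketched as the form body the client POSTs to the token endpoint (per RFC 6749, section 4.1.3); the credential values shown in the usage are placeholders:

```python
from urllib.parse import urlencode

def token_request_body(code, client_id, client_secret, redirect_uri):
    """Build the application/x-www-form-urlencoded body a client POSTs
    to the token endpoint to exchange an authorization code for an
    access token (RFC 6749 section 4.1.3)."""
    return urlencode({
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": redirect_uri,
        "client_id": client_id,
        "client_secret": client_secret,
    })
```

Because the code-for-token exchange happens server-to-server, the access token never transits the user's browser.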

There are 3 main client profiles:

  • Web application flow
  • User agent flow (the agent is typically a browser)
  • Native flow (e.g. for mobile applications)


OpenID Connect

OpenID Connect is a standard established by the OpenID Foundation. OpenID Connect is a layer on top of OAuth 2.0 that serves to support verifying the identity of an end-user. OpenID Connect defines a JSON Web Token (JWT) security token that makes claims about the Authentication of an End-User by an Authorization Server. OpenID Connect defines 3 types of flows:
  • Authorization Code Flow
  • Implicit Flow
  • Hybrid Flow

The authorization code flow is best for clients that can securely maintain a client secret between themselves and the Authorization Server. The implicit flow is typically implemented by browser-based clients using JavaScript. It does not use a separate token endpoint, so all tokens are returned from the authorization endpoint. This is not as secure as the other flows. The hybrid flow is a combination of the prior two, in that some tokens are returned from the authorization endpoint and some from the token endpoint. There is also a higher-level login flow, and deep-linking is supported for this flow as well. Finally, there are numerous certified open source implementations of OpenID Connect for various programming languages. More information can be found here:

http://openid.net/developers/certified/
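The dynamic discovery mentioned earlier works by fetching provider metadata from a well-known path under the issuer URL (per the OpenID Connect Discovery specification). A sketch, with a trimmed, hypothetical metadata document:

```python
def discovery_url(issuer):
    """OpenID Connect Discovery: a provider's metadata document lives
    at a well-known path directly under the issuer URL."""
    return issuer.rstrip("/") + "/.well-known/openid-configuration"

# A trimmed example of the JSON a provider returns; these field names
# come from the Discovery spec, but the values here are hypothetical.
metadata = {
    "issuer": "https://op.example.com",
    "authorization_endpoint": "https://op.example.com/authorize",
    "token_endpoint": "https://op.example.com/token",
    "jwks_uri": "https://op.example.com/jwks",
}
```

A relying party fetches this document once and then knows where to send authorization and token requests, with no hard-coding.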

Summary

In general, SAML is an older and more heavyweight approach to authentication, in the sense that it requires XML parsing and more complexity in the baseline implementation. OAuth 2.0 is lighter weight but does not fully describe end-user identity verification. However, OAuth 2.0 is more foundational because it grants an “access token” that can be used to assert any number of things. In contrast, OpenID Connect is aimed squarely at authentication, and it supports more authentication features, such as dynamic discovery of authentication providers, a login flow, and deep-linking.

Monday, August 1, 2011

Oracle NIEM resources site launches

The Oracle NIEM resource site is now available. The site provides access to the latest news and features available to the NIEM community from Oracle.
Links and resources are also showcased that provide open source and freeware tools for implementing NIEM IEPD XML exchanges, LEXS starter kits, and more.
Online video tutorial materials are also now available to help developers jump-start their NIEM development projects.
For all the latest news and details see the site directly.

Wednesday, February 2, 2011

Latest CAMeditor v2.0 for NIEM now available from Sourceforge

The latest CAMeditor release is now available from Sourceforge:
Highlights include:
1) GUI-based drag/drop from dictionaries into the schema/template designer
2) Viewing schemas as a graphical MindMap
3) Complete external code list support for code values via files and import of code lists
4) Enhancements to the evaluator, NDR checks, and NIEM re-use scoring tools
5) Performance enhancements to the XSD schema importing tool
6) Generation of NIEM EIEC / BIEC dictionary schemas from ERwin enterprise model components
7) Enhanced XML example generation
Then there's a raft of fixin's and improvements from over 4 months of development since the last release.  Full details are in the release notes PDF documentation and the online tracker system.
Thanks to everyone who contributed to making this new release the best yet.  A special shout-out goes to the IEPD Factory team for working with us so that we can share code list files between tools.

Please feel free to provide feedback via Sourceforge project discussion area, OASIS CAM dev list, or contacting the team directly.


For more information on NIEM work at Oracle Corporation please see:

Sunday, September 19, 2010

Coordinated Emergency Response

Although our National Response Framework (NRF) has not gotten much attention lately, I wanted to publish some work done by Alex Karman and myself.  We implemented a POC and wrote a white paper demonstrating infrastructure to support the NRF.  The white paper is called Coordinated Emergency Response and it is accessible here:

https://docs.google.com/document/edit?id=1lwpolvgQOccg3Mm4EKckSal_qHVA8jogyWIJcRpPS0I&hl=en&authkey=CKHy5u8D#

We feel this technology and approach would be useful to implement across the Local, State, and Federal agencies in support of emergency response.

Wednesday, September 1, 2010

MITA TAC 2010 - Security as a Service (MMIS Conference)


With respect to the MITA TAC, the theme we focused on for 2010 was “Security as a Service” (SaaS).  SaaS has the benefits of:
  1. Centralizing control of security
  2. Re-using critical security functionality throughout an organization
  3. Providing consistency in how security is implemented
  4. Leveraging standards to implement security for interoperability, pluggability, and extensibility

The upshot is that the MITA TAC achieved the following:

  1. Clarified what the key services are: authentication, authorization, and auditing
  2. Leveraged industry standards for security to build out and deploy examples of these services
  3. Exposed these web services on the internet – focusing on the 2010 MMIS Conference
  4. Demoed these services at the MITA TAC interoperability booth