Internal Audit Report
MOBILE COMPUTING DEVICE SECURITY
Report No. SC‐12‐14
June 2012
David Lane, Principal IT Auditor
Approved: Barry Long, Director, Internal Audit & Advisory Services
Mobile Computing Device Security
Internal Audit Report SC‐12‐14
TABLE OF CONTENTS

I.   EXECUTIVE SUMMARY
II.  INTRODUCTION
     Purpose
     Background
     Scope
III. OBSERVATIONS REQUIRING MANAGEMENT CORRECTIVE ACTION
     A. Types of Mobile Devices and Uses at UCSC
     B. Governance and Support
     C. Security Controls Over University Data
     D. Mobile Device Security Policy Maturity
APPENDICES
     A. Mobile Device Survey Questions
     B. Mobile Device Detailed Follow‐up Review
     C. Symantec: A Window Into Mobile Device Security
I. EXECUTIVE SUMMARY
Internal Audit & Advisory Services (IAS) has completed an audit of Mobile Device Security. The purpose of the review was to identify the existence and operating effectiveness of security controls over selected mobile devices designed to protect university data, and to assess the adequacy of campus mobile computing security policies, risk assessment, and governance.

Overall, mobile device security controls appeared to be in place and generally effective in protecting university data, but were reliant on: user knowledge of data types and security standards, effective campus data steward governance, security controls included within third party applications, security features included on mobile devices, and user knowledge of device security configuration settings. Because most mobile devices were owned by employees, and governance over mobile device security and over data accessed using mobile devices had not been fully established and communicated, it was difficult for the campus to get a perspective on risk or identify high risk users for mitigation efforts, which might include further education and training. Campus mobile computing policies provided general guidance, but users were not always aware of these policies and needed additional guidance to address the unique features and security settings of their mobile devices.

The following issues requiring management corrective action were identified during the review:

A. Types of Mobile Devices and Uses at UCSC: Rapidly changing technology and user behavior present challenges to the university’s ability to control and protect university data. There was no process in place for managing access or storage methods using mobile devices that could put university data at risk.
B. Governance and Support: Governance over mobile device security and use, including roles and responsibilities for data accessed by mobile device users, had not been fully defined or communicated. Campus support for mobile device users was generally limited to help desk support, with additional support provided on an informal basis.
C. Security Controls Over University Data: Some users indicated they had FERPA data in their email and third party cloud storage accounts that was sensitive and could be broadly defined as restricted, which is in violation of existing campus guidance. Mobile devices accessing third party storage accounts were not configured to comply with the campus password protection policy. Much of the mobile device security was reliant on third party application security, device configuration settings, and user knowledge.
D. Mobile Device Security Policy Maturity: Nearly all users surveyed were not aware of mobile device security training or best practices.
Management has agreed to all corrective actions recommended to address risks identified in these areas. Observations and related management corrective actions are described in greater detail in section III of this report.
II. INTRODUCTION
Purpose
The purpose of the audit was to identify the existence and effectiveness of security controls over selected mobile devices used to access university data, and to assess the adequacy of campus mobile computing security policies and procedures, governance, and campus risk assessments of these devices.

Background

University employees are using a variety of personal and university owned mobile devices to perform functions historically performed with workstations. The present governance structure on campus over mobile devices is divided: Information Technology Services (ITS) generally assumes responsibility for establishing information security policy and minimum connectivity standards for campus computing devices, including mobile devices, while data users and Resource Proprietors/Data Stewards share responsibility over the appropriate classification, use, and security of data accessed by the mobile device. The campus (and the university systemwide) has not historically focused policy or support on mobile devices because, until recently, cell phones could only perform limited functions and first generation PDAs were not widely used for functions beyond email and calendar. At UCSC, mobile devices are primarily used for voice, text, email and calendar; however, mobile devices are also used to access enterprise systems, manage departmental and enterprise servers, and/or work with restricted data. Mobile device use has grown dramatically over the past few years and growth is expected to continue in the foreseeable future. Hewlett Packard (HP) and Advanced Micro Devices (AMD) reported in a June 2012 webcast that by the year 2013, one third of the workforce would be using mobile devices for business related functions, with consumer devices defining how, when, and where we work. The increase in mobile device functionality has effectively blurred the line between cell phones and workstations.
Mobile devices offer convenience and enhanced productivity for users and the university. The affordable pricing and aggressive marketing by cell phone companies have resulted in a large number of personally owned mobile devices being used for business purposes. From our survey of 112 mobile devices, 69% were personally owned and operated. The bring your own device (BYOD) phenomenon is not unique to the university; both private and public sector institutions are struggling to adapt and maintain security in this new environment. White papers, webinars and related publications on mobile device security are published on a daily basis. One of the more informative papers we have seen was published by Symantec, A Window Into Mobile Device Security, included in Appendix C. Employee purchased mobile devices potentially save the university a considerable amount of money. However, the savings represent cost avoidance and increased productivity rather than funding, and the net savings/cost of support is not easily determined. Current literature on the topic of mobile device management generally concludes that policy, training, and support are major factors affecting security when centralized control is not possible in a BYOD environment. The University of California (UC) has started a project/partnership to develop mobile applications (apps) for all internet enabled mobile devices. The UC distributed collaboration partnership, Mobile Web Framework
(MWF), involves at least five other campuses. All apps developed through this partnership will eventually be vetted by each institution. The partnership is designed to leverage students and faculty researchers to create mobile device applications that will be provided in a software as a service model. The project is still in the development stages, but some beta testing is expected in spring 2013. This project will potentially provide users with apps that have been reviewed to provide a level of assurance that they do not contain any malware and are appropriate for use with university data.

Scope

We initially outlined the scope of the review using an Information Systems Audit and Control Association (ISACA) audit program, which was based on ISACA’s COBIT and Committee of Sponsoring Organizations of the Treadway Commission (COSO) models. We applied specific sections of this program to develop a survey instrument (appendix A) and a detailed follow‐up review (appendix B) to understand and test device use and configuration. In our review, we:

• Distributed email surveys to 275 staff employees with a Payroll/Personnel System (PPS) title code indicating a director level or above, along with other users identified by respondents in the first round of the survey.
• Received responses from 104 staff employees, or approximately 38% of those surveyed, representing 112 mobile devices.
• Completed a detailed follow‐up review of eleven mobile devices selected based on a risk assessment of survey results.
• Interviewed and obtained information from faculty members who actively use mobile devices to manage student data.
• Reviewed UC and UCSC policies and procedures relevant to mobile devices.
• Reviewed industry publications and resources focused on mobile device security.
Our intent was to provide some baseline information on campus mobile device use, but this was far from a complete inventory and survey. One faculty member surveyed their academic department on our behalf to determine the frequency of Dropbox use. The results of this faculty survey were not included in our survey numbers, but they do demonstrate that many more UCSC employees are using mobile devices than those we surveyed and reviewed.
III. OBSERVATIONS REQUIRING MANAGEMENT CORRECTIVE ACTION

A. Types of Mobile Devices and Uses at UCSC
Rapidly changing technology and user behavior present challenges to the university’s ability to control and protect university data. There was no process in place for managing access or storage methods using mobile devices that could put university data at risk.

Risk Statement/Effect

Without a full understanding of the access and storage methods used in the mobile computing environment or information on high risk uses, and with the challenge of a growing percentage of BYOD users accessing restricted data and enterprise systems, the campus would be unable to effectively control and protect university data. Not all mobile devices were configured to meet campus password policy to appropriately secure restricted or confidential data.

Agreements

A.1 Information Technology Services will work with Records and Information Management Services in providing information to mobile device users, including guidance and education for appropriately accessing, using, and storing data with different levels of sensitivity using a mobile device, such as a smartphone or tablet, consistent with definitions established by Information Technology Services. (Refer to Agreement B.1)
    Implementation Date: 1/31/2013
    Responsible Manager: Director, Client Services & Security
A. Types of Mobile Devices and Uses at UCSC – Detailed Comments

Types of Devices in Use

We surveyed 104 staff employees who used 112 mobile devices for university business purposes. The two most prevalent devices identified in our survey were the Apple iPhone/iPod/iPad and Android phones and tablets. Apple devices all share the iOS operating system, which is based on the Apple Mac OS X operating system. The Android operating system is based on Linux combined with a Java‐based platform. The unique features and security of these two types of mobile devices are detailed in appendix C, the Symantec white paper A Window into Mobile Device Security.

Mobile Device Characteristics

When operating system updates are available for Apple devices, they are typically applied with the mobile device synced to a workstation via iTunes, or by user action when prompted that an operating system update is available. While it could be a problem if users did not proactively update their devices, all the Apple mobile devices we reviewed in detail were patched and up to date. Android mobile devices are configured by default to automatically update the operating system from the Google servers when they are connected to the internet. Apple mobile devices encrypt all data stored in the mobile device memory by default. Android mobile devices must be specially configured to encrypt data at rest, and we assisted several users in encrypting their Android data. We did not detect any restricted data stored on mobile devices that would require encryption.
Survey Results

As outlined in Table 1, Survey of Stated Uses of Mobile Devices, of the 112 mobile devices surveyed, 73 were an Apple iPhone, iPad or iPod; seventeen were Android phones or tablets; seven were BlackBerry; and fifteen were lesser known brands of cell phones that may not be classified as a "smart phone".

Table 1. Survey of Stated Uses of Mobile Devices

Device Type    Personally Owned    University Owned    Totals
Apple                 54                  19              73
Android               14                   3              17
BlackBerry             1                   6               7
Other¹                 8                   7              15
Total                 77                  35             112
Percentage           69%                 31%            100%

Personally Owned Devices

From our survey, 69% of the mobile devices used on campus were personally owned, with essentially no central visibility over their existence and use. These personally owned mobile devices were difficult to monitor and control. The security risks, user behaviors, and mobile device technology are constantly changing, which makes an inventory and classic risk assessment very difficult; alternative means to identify high risk users may need to be considered.

High Risk Users

As outlined in Table 2, the majority of users surveyed used their mobile devices for voice, text, email and calendar, but one in five users indicated that their mobile devices were used to connect to enterprise systems, connect to workstations or file servers, manage university systems, or access and work with restricted data. These users represented a higher level of security risk to the campus. High risk users are identified as users who:
• Work with restricted data in email (not allowed per UCSC guidelines)
• Work with restricted and confidential data using free services (not allowed per UCSC guidelines)
• Access enterprise systems with privileged accounts
• Administer university servers or systems
• Synchronize with non‐university owned computers, propagating restricted or confidential university data
Table 2. Survey of Mobile Devices

Device Type    UCSC Email    Calendar    Server    Desktop    Enterprise Systems
Apple               74           70         15         3              16
Android             10            9          4         3               4
BlackBerry           6            5          –         –               –
Other¹               3            2          –         –               –
There was no process in place to identify high risk mobile device users who have access to restricted data or enterprise systems to help assure their mobile devices are configured according to recommended best practices.

¹ Other mobile devices include: HTC, Sony, Samsung, LG, HP, etc.
B. Governance and Support
Governance over mobile device security and use, including roles and responsibilities for data accessed by mobile device users, had not been fully defined or communicated. Campus support for mobile device users was generally limited to help desk support, with additional support provided on an informal basis.

Risk Statement/Effect

Without sufficient oversight and support over mobile devices and data, mobile device use and configuration may not assure appropriate security of university data.

Agreements

B.1 Information Technology Services will work with Records and Information Management Services to define appropriate uses of mobile devices with different types of university data and provide criteria for communications with mobile device users. (Refer to Agreement A.1)
    Implementation Date: 1/31/2013
    Responsible Manager: Director, Client Services & Security

B.2 Records and Information Management Services will work with Information Technology Services to assign the roles and responsibilities of electronic data use by Resource Proprietors/Data Stewards and clarify the responsibilities of end users.
    Implementation Date: 1/31/2013
    Responsible Manager: Director, Records & Information Management Svcs.

B.3 Information Technology Services will work with Records and Information Management Services to define, list and provide information on data protection to Resource Proprietors/Data Stewards to enable them to fulfill their responsibilities as defined in IS‐2 and IS‐3.
    Implementation Date: 7/31/2013
    Responsible Manager: Director, Client Services & Security

B.4 Information Technology Services will incorporate mobile devices in the support for campus IT services to allow users to appropriately configure and evaluate the security of their mobile devices.
    Implementation Date: 5/31/2013
    Responsible Manager: Director, Client Services & Security
B. Governance and Support – Detailed Observations
Governance

ITS has provided much of the existing policy over mobile phone use and has assumed responsibility for network connectivity standards, similar to its role over campus workstations. However, ITS has not been charged with regulating individual mobile device use related to data. Users are responsible for the data they access using their mobile devices.
Because the use of mobile devices is new and emerging, there has been little if any resolution around who is responsible for ensuring that users are not inappropriately accessing data with their mobile devices. Accordingly, governance over mobile devices needs to address the unique characteristics that make mobile devices different from workstations, including:

• Common use of free apps.
• Mobile devices are largely personally owned and not managed by the university.
• Many mobile devices do not support anti‐virus, firewalls or other security software available for workstations.
• Mobile devices are not presently centrally managed using Tivoli Endpoint Manager support center software (BigFix).
• There are no standard configurations for commonly used mobile devices.

University Bulletins IS‐2 and IS‐3 refer to the campus data steward as the individual with ultimate responsibility for the security and classification of a defined set of university electronic information. However, Resource Proprietors/Data Stewards have not been clearly identified at UCSC, and the roles, responsibilities, and training that would be assumed by Resource Proprietors/Data Stewards do not appear to be integrated in practice with policy and guidance issued by the campus Information Technology Security Committee (ITSC) and ITS, or formally communicated. This is particularly critical in carrying out appropriate protocols for the protection of data accessed and exchanged by mobile device users. In addition, it is unclear whether users of mobile devices understand their roles and responsibilities over data accessed, maintained, and generated on their devices. Business and Finance Bulletin IS‐2 specifically states:

Resource Proprietors are those individuals responsible for information resources and processes supporting University functions. This includes individuals who create the information, such as the owner of intellectual property.
Resource Proprietors are responsible for:

• ensuring the inventory and classification of information for which they have responsibility,
• in consultation with the Resource Custodian, determining the level of risk and ensuring implementation of appropriate security controls to address that risk,
• approving requests for access, release, and disclosure of information, and
• ensuring appropriate security awareness training for individuals they authorize to access information.
Resource Proprietors should establish and review procedures to ensure compliance with federal or state regulations or University policy. Resource Proprietors are responsible for ensuring that University Resources are used in ways consistent with the mission of the University as a whole. The Resource Proprietor should ensure that recipients of restricted information are informed that appropriate security measures must be in place before restricted information is transferred to the destination system.
The lack of formal, proactive campus governance over Resource Proprietors/Data Stewards and users presents an emerging risk where mobile device users can generate potentially restricted or confidential data, and does not allow for the mitigation of the following challenges related to mobile device security:

• Devices can be from a variety of manufacturers and may be running a number of different operating systems.
• Users inherently have administrator level access and can add applications to the devices that may contain malware, or can make configuration changes that may compromise security.
• Effective anti‐virus and anti‐malware software does not exist for all mobile devices.
• Devices can be rooted or unlocked, allowing an even greater array of apps to be installed.
• Support staff and even security experts may not always agree on appropriate configurations and practices.
• Devices may not be configured for optimal security.
• Restricted or confidential data can be generated using mobile devices.
• Resource Proprietors/Data Stewards may not be aware of the existence of restricted or confidential data generated by users.
Informal Support of Mobile Devices

The majority of support available to mobile device users on campus was provided informally, by Local IT Specialists (LITS) and other staff who have acquired specific mobile device expertise. The ITS help desk does not have any support staff designated as mobile device experts. We were told that typically, when a mobile device user requests support from the help desk, staff would go through the operating manual with the user to determine how to accomplish the desired task. The following survey responses indicate that most users feel they need additional support or information to assure their mobile device is safe and secure.
"Don’t know" or didn't answer, 65
Response to question #5 Is information and support available for you to keep your mobile device safe and secure for business processes?
No, 11
Yes, 36
*Numbers on pie chart are based on 112 responses to survey
Support of Mobile Device Configuration

ITS has provided instructions to configure iPhones/iPods, Androids, and BlackBerry devices to use UCSC email, CruzTime, and wireless network services. Under the "getting help" section of the ITS web site on Mobile Devices and Wireless, a direct link is provided to find and contact the Divisional Liaisons, which has the potential to promote an informal service model that bypasses the formalized ticketing system through the help desk. The informal support model may be necessary until the help desk support staff obtain mobile device expertise. The
Divisional Liaisons and LITS can document support in the ticketing system, but we observed this was not always done when advice was provided to mobile device users. The six mobile device experts we identified and utilized as subject matter experts resided within ITS divisions outside the support center, including:

• Learning Technologies
• Core Technologies
• Client Relationship Management

Mobile devices can be configured insecurely, including: browsers set to save passwords; applications set to save passwords; no passcodes set; apps with malware installed on devices; email passwords saved when email contains restricted or confidential data; devices with access to restricted data not set to erase data after repeated failed log‐on attempts; location tracking not enabled; and remote wipe not enabled. Some of the users we interviewed who were accessing enterprise systems expressed a desire to have someone review their device to assure them that it was configured securely for use with university systems. The campus does not currently offer this as a regular service.
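The insecure configurations noted above amount to a simple review checklist. As an illustration only (no such campus tool exists, and all field names below are hypothetical), the checks a reviewer might apply to a device could be sketched as:

```python
# Illustrative checklist based on the insecure configurations noted above.
# All field names are hypothetical; this is not an actual campus tool.
INSECURE_IF_TRUE = {
    "browser_saves_passwords": "browser set to save passwords",
    "apps_save_passwords": "applications set to save passwords",
}
INSECURE_IF_FALSE = {
    "passcode_set": "no passcode set",
    "erase_after_failed_logons": "no erase after repeated failed log-ons",
    "location_tracking_enabled": "location tracking not enabled",
    "remote_wipe_enabled": "remote wipe not enabled",
}

def review_device(config: dict) -> list:
    """Return the list of findings for one device configuration."""
    findings = [msg for key, msg in INSECURE_IF_TRUE.items() if config.get(key)]
    findings += [msg for key, msg in INSECURE_IF_FALSE.items() if not config.get(key)]
    return findings

# Example device: has a passcode and location tracking, but the browser
# saves passwords and neither erase-on-failure nor remote wipe is enabled.
device = {"passcode_set": True, "browser_saves_passwords": True,
          "erase_after_failed_logons": False,
          "location_tracking_enabled": True, "remote_wipe_enabled": False}
for finding in review_device(device):
    print(finding)
```

A checklist of this form would let support staff produce consistent findings across the varied devices described in this report, regardless of vendor.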
C. Security Controls Over University Data
Some users indicated they had FERPA data in their email and third party cloud storage accounts that was sensitive and could be broadly defined as restricted, which is in violation of existing campus guidance. Mobile devices accessing third party storage accounts were not configured to comply with the campus password protection policy. Much of the mobile device security was reliant on third party application security, device configuration settings, and user knowledge.

Risk Statement/Effect

Use of email and third party cloud storage accounts for FERPA data without appropriate levels of security may lead to inappropriate disclosure of university data. Mobile device users applying free applications and third party services to their devices are relying on these applications and services to provide security that may or may not be commensurate with normal university security standards.

Agreements

C.1 The Campus Registrar will work with Information Technology Services on the sensitivity classification of FERPA data and evaluation of the practice of emailing FERPA data; and work with the Academic Senate and/or other appropriate staff and faculty representatives in communicating to users and Resource Proprietors/Data Stewards the campus electronic data policy and recommended practices for eliminating or protecting university data.
    Implementation Date: 6/30/2013
    Responsible Manager: Campus Registrar

C.2 Information Technology Security will establish implementing procedures and obtain ITSC approval, which include the use of proactive wipe routines, such as wiping after 10 failed log‐in attempts, as a compensating control to address the standard four digit PIN passwords used by most mobile devices in place of the 8 mixed character password required by campus password policy for accessing restricted or confidential data.
    Implementation Date: 11/30/2012
    Responsible Manager: Director, Client Services & Security

C.3 Information Technology Services will compile a list of best practices for mobile device security, including access to third party cloud storage and lost or stolen devices.
    Implementation Date: 11/30/2012
    Responsible Manager: Director, Client Services & Security
C. Security Controls Over University Data – Detailed Observations

FERPA Data in Cloud Storage (free services) & Email

A number of staff and faculty indicated that they routinely sent emails back and forth containing information about students' needs and requests, and used cloud storage (free services). A number of staff and faculty surveyed who work with students also use Dropbox to store data, some of which we were told was likely to be sensitive or confidential FERPA data. UCSC guidelines on use of "free services" (like Dropbox) state:

Important: Restricted and confidential information must never be stored, received, processed or published on non‐UC systems unless you have worked with Purchasing or Business Contracts to ensure that a UC‐approved agreement is in place that addresses
information security and privacy requirements and concerns. Similarly, don't rely on external information systems or services for critical University business processes unless a UC‐approved agreement is in place.

Staff in these divisions were not informed that restricted and confidential data should not be emailed or stored in free cloud services. In addition, as noted in section III.B, Governance and Support, governance over mobile device security and use, including roles and responsibilities for data accessed via mobile devices, had not been fully defined or communicated. Emailing restricted data is inappropriate and has implications that go beyond mobile device use; mobile devices may increase the likelihood of generating and sharing restricted or confidential data by email and cloud storage. The practice of emailing FERPA data and saving it in free cloud storage is a broader topic than mobile devices, and the Information Technology Security Committee should be made aware of these practices. During the review we became aware of an ITS group called the Campus Storage Solutions Group that is charged with evaluating various cloud and campus storage options. This group is expected to issue a report recommending suitable storage options, and may help to guide campus faculty and staff on appropriate cloud storage options. We provided the Campus Storage Solutions Group with copies of all correspondence with staff and faculty regarding their use of Dropbox.

Password Protection

Most mobile devices were configured to require strong passwords to access restricted or confidential university data. None of the devices reviewed that were used to access enterprise systems such as CruzBuy or university servers saved passwords in their browsers or applications, although we verified that it is possible to save CruzBuy passwords in mobile device browsers.
UCSC password policy and standards are written so that they apply to restricted data and are recommended for access to confidential data. For this reason we did not focus on the four digit PIN codes used to protect most of the devices we reviewed, unless the four digit PIN alone granted access to restricted or confidential data. Email and Dropbox were the only passwords stored on the devices we reviewed. One device reviewed that had saved passwords for these services did not have a PIN code set to restrict access to the device. We noted specifically that the campus password policy and password standards were not written with mobile devices in mind. Most of the devices we reviewed had a four digit PIN code to activate the device and retrieve email. The password standards state:

These Standards are required for passwords that provide access to University restricted data, or where otherwise required by law, UC or campus policy, or contract.
and

1. Passwords must be at least 8 characters in length and contain at least 3 of the following 4 types of characters:

As long as the four digit PIN does not provide access to restricted or confidential data, it is not out of compliance with policy. Unfortunately, we found a number of devices that had saved passwords for email and Dropbox with only a four digit PIN. The current policy does not take into account a common setting on mobile devices to erase data after 10 failed log‐in attempts, which could be a compensating control in campus password standards.
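The length and character-type rule quoted above can be expressed as a short check. The excerpt does not enumerate the four character types, so the sketch below assumes the common set (lowercase letters, uppercase letters, digits, special characters); it is illustrative only, not campus tooling.

```python
import string

def meets_password_standard(password: str) -> bool:
    """Check a password against the standard quoted above: at least 8
    characters, containing at least 3 of 4 character types. The four
    types assumed here (lower, upper, digit, special) are not
    enumerated in the quoted excerpt."""
    if len(password) < 8:
        return False
    classes = [
        any(c.islower() for c in password),             # lowercase letters
        any(c.isupper() for c in password),             # uppercase letters
        any(c.isdigit() for c in password),             # digits
        any(c in string.punctuation for c in password), # special characters
    ]
    return sum(classes) >= 3

# A four-digit PIN fails both the length and the character-type tests.
print(meets_password_standard("1234"))      # False
print(meets_password_standard("Cruz$2012")) # True
```

Under such a check, the four digit PINs observed on most devices would clearly not qualify as standard-compliant passwords, which is why the compensating controls discussed in this section matter.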
The mobile device users connecting to enterprise systems or other university servers were required to type a password in compliance with this standard in order to access the university system. As noted, email and Dropbox passwords were saved on all devices reviewed using these services. Most of the users' email and Dropbox accounts did not contain restricted or confidential data; however, when we interviewed Division of Undergraduate Education staff and Psychology Department faculty, we found they often had emails that contained student information that could be covered by FERPA.

Failed Log‐on Security

Failed log‐on security was not always sufficient. Only three out of ten devices reviewed in detail were set to erase device data, including stored email and Dropbox passwords, after 10 failed log‐in attempts. If devices are set to erase data after 10 failed log‐in attempts, the location service will no longer work after the data has been erased. The decision to set devices to erase data should be based on the data accessible once the four digit PIN is correctly entered. Remote wipe services offer similar controls for lost devices. Users who have sensitive data in their email or Dropbox accounts should set their devices to erase data after 10 failed attempts or use alternative controls such as remote wipe.

Mobile Device Configuration

Mobile device users were generally unaware of the security configurations and settings for their devices, including settings for encryption, password display, use of credentials, firewalls, Bluetooth, and anti‐virus. Consequently, mobile devices were not configured for maximum security. Devices configured to save email and Dropbox passwords when those systems contained restricted or confidential data presented the highest risk. Google email and Dropbox both save passwords by default, and many users may not have the technical knowledge to change these configuration settings.
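The erase-after-10-failed-attempts setting discussed above is, in essence, a counter that triggers a wipe. The following sketch models only that logic for illustration; real devices implement the control in firmware, and every name here is ours, not a vendor API.

```python
MAX_FAILED_ATTEMPTS = 10  # the common device setting discussed above

class DeviceLock:
    """Illustrative model of the wipe-after-N-failed-attempts control.
    Not real device firmware; all names are hypothetical."""

    def __init__(self, pin: str):
        self.pin = pin
        self.failed_attempts = 0
        self.wiped = False

    def try_unlock(self, attempt: str) -> bool:
        if self.wiped:
            return False  # nothing left to unlock
        if attempt == self.pin:
            self.failed_attempts = 0  # a successful entry resets the counter
            return True
        self.failed_attempts += 1
        if self.failed_attempts >= MAX_FAILED_ATTEMPTS:
            self.wipe()
        return False

    def wipe(self):
        # On a real device this erases stored data, including saved email
        # and Dropbox credentials; location services stop working afterward.
        self.wiped = True

lock = DeviceLock("4821")
for _ in range(10):
    lock.try_unlock("0000")
print(lock.wiped)  # True: data erased after 10 failed attempts
```

The trade-off noted in the report is visible in the model: once the wipe fires, the device can no longer be located, so the setting is best reserved for devices whose PIN guards access to sensitive accounts.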
Android security is highly dependent on the permissions a user grants apps when they are installed, but an app cannot be installed without granting the permissions it requests. Most users do not have the training and knowledge to understand all the technical implications of granting specific permissions to apps, as detailed in Appendix C.

Malware
We did not identify any malware in our review; however, no software was available to scan all the devices we reviewed for positive confirmation that they did not contain malware. All the IT security publications related to mobile devices we reviewed predict an increase in malware targeting these devices in the near future. A San Jose Mercury News article published in April 2012 reported that the number of known Android apps with malware increased from 400 in June 2011 to over 15,000 in February 2012. Google has attempted to better screen apps that go on its market, but Android apps can be distributed through multiple web sites and can be signed with self-generated digital certificates. Security experts have also demonstrated that it is possible to get apps with malware into the Apple store. Apps have also been seen that pass the security scans but then download malicious code once they are installed on the device. Symantec and Juniper Networks reportedly estimate that hundreds of thousands of mobile devices have been infected with malware through app downloads. Apple has a means to remove apps containing malware from its store, but it has not demonstrated an ability to remove these apps from devices once they are installed. We could not find published lists of Android or Apple apps known to contain malware.
Anti-Virus
Anti-virus and anti-malware software to protect and analyze these devices is still at a low maturity level, and most users have not adopted its use on mobile devices. Anti-virus products were freely available for Android devices, but only three of the 13 Android devices we surveyed had anti-virus installed and running. Apple iOS is designed to isolate apps from one another and from the operating system, which appears to have impeded development of an effective anti-virus tool for those devices. As these devices are used more and more for e-commerce transactions, more direct attacks through the internet, social engineering, and malware are likely to occur. Anti-virus/malware, firewall, and other security-related software products will likely become available eventually, but until that happens there is an ongoing risk that malware is not detected in a timely manner.
D. Mobile Device Security Policy Maturity
Nearly all users surveyed were not aware of mobile device security training or best practices.

Risk Statement/Effect
Without sufficient education and training, mobile device users are at risk of compromising university data.

Agreement D.1
Information Technology Services will supplement system-wide security training related to mobile devices, including: use of free or low-cost services (cloud storage); a list of specific actions for users to take if their mobile device is lost or stolen; and best practices.
Implementation Date: 11/30/2012
Responsible Manager: Director, Client Services & Security
D. Training and Policy Maturity – Detailed Observations

Policy Maturity
Many users we surveyed were unaware of existing policies governing cell phone use and did not necessarily consult policies such as network connectivity requirements or the draft guidelines on use of free services, or view the mobile device security training materials posted on the web. Mobile device users did not comply with university policy because most did not understand that the policy applied to their mobile device(s). Many users appeared to have the mindset that they simply had a new cell phone and did not recognize that it was part of the computing infrastructure at UCSC. ITS has made efforts to ensure its own staff undergo annual security training, which includes a module on mobile devices, but the use of this training material outside of ITS is extremely limited. While mobile device policy can be further refined and clarified, the larger issue appears to be communicating the policy to users and providing them with training so that they understand the risks and the best practices to protect their data and mobile devices. Web-based mobile device training targeted at non-ITS staff and faculty may best achieve the goals of educating staff and faculty and minimizing the risk to university data. Web-based training could also help users self-identify whether they are engaged in activities that warrant requesting additional support from ITS.

Training
ITS has a non-interactive PDF file published as a mobile device training module as part of its Computer Overview Security Training. There is also one page dedicated to mobile devices in its Computer Security and Policy Education and Training for IT Service Providers at UCSC. The training contains a fairly comprehensive list of best practices to protect data on mobile devices; however, none of the users we interviewed or surveyed was familiar with the training materials.
Some ITS employees acknowledged seeing the training module after we made them aware that it was module five of the larger Computer Overview Security Training they were required to complete on an annual basis. The 25 publicly available training videos linked through the ITS web page do not include a section on mobile device security. Similarly, the Registrar's Office on-line training on FERPA-covered information does not address mobile devices or appropriate means of storing electronic data.
Some best practice issues did not appear to be included in the ITS Computer Security Tutorial Module Six, related to mobile devices, such as:
How password policy applies to mobile devices.
Appropriate and inappropriate use of cloud storage with mobile devices.
Recommendations to configure a mobile device to erase data after 10 failed attempts if it contains or grants access to restricted data (if this is determined to be the appropriate control).
How a mobile device should be configured so that a “call if found” number is displayed should the mobile device be lost.
How to spot warning signs that your mobile device may be infected with malware (such as the battery going dead quickly or unexplained minutes on the plan statement).
What actions should be taken if a mobile device is lost or stolen.
Training and the Use of Free Services
The "Use of Free Services" web site is not linked directly to the mobile device web training pages. None of the staff or faculty we interviewed who were using Dropbox had read the draft guidance on "Free Services." During the audit we learned that the Data Center Manager is chairing the Campus Storage Solutions group, which is expected to issue a report about various cloud and campus storage options. It is our hope that this report will provide guidance to staff and faculty on using storage options in a manner that assures the security of different classes of university data.

Training Relating to Encryption
Encrypting data prior to storing it in a free service such as Dropbox would greatly increase security; however, the average user may require technical support to implement this. The draft web page on Free Services does not discuss user encryption of data as a compensating control. It does list when a free service should not be used to store data, but we were informed that Dropbox is commonly used to store restricted or confidential data. The terms and conditions for the free use of Dropbox provide some level of assurance in that the data is encrypted, and we do not grant Dropbox rights to use the data, but it is clearly stated that we use the service at our own risk and that Dropbox's liability is limited to $20 should anything go wrong.

Training Support Staff Related to Lost or Stolen Mobile Devices
In our review we noted that there was no single place where users could go for help in the case of a lost or stolen mobile device. Users typically report lost or stolen mobile devices to campus police or the help desk. Neither of these offices had specific information to tell users what actions they should take if their mobile device was lost or stolen. This advice is most critical when the mobile device potentially has direct access to restricted data.
In discussing this scenario with the mobile device experts we identified in our review, some obvious steps that should be taken came to light, such as:
Change email, Dropbox and any other passwords that may have been stored on the mobile device immediately.
Attempt to track mobile device location, if location tracking was enabled.
Send a "call if found" phone number to be displayed on the mobile device, if possible.
Remotely wipe the mobile device if sensitive data is accessible by cracking the four-digit PIN and cannot be protected by other means.
Notify the wireless service provider that the mobile device is missing, and deactivate the device to prevent misuse if location tracking fails.
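The compensating control suggested under "Training Relating to Encryption" above (encrypting data before placing it in a free service such as Dropbox) can be sketched as follows. This is a toy construction built only from Python's standard library, shown purely to illustrate the encrypt-before-upload workflow; a real deployment should use a vetted encryption library rather than this hand-rolled cipher.

```python
import hashlib
import os

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy SHA-256 counter-mode keystream -- for illustration only."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(passphrase: str, plaintext: bytes) -> bytes:
    """Derive a key from the passphrase (PBKDF2), then XOR with a keystream."""
    salt, nonce = os.urandom(16), os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)
    ct = bytes(a ^ b for a, b in zip(plaintext, _keystream(key, nonce, len(plaintext))))
    return salt + nonce + ct   # salt and nonce travel with the ciphertext

def decrypt(passphrase: str, blob: bytes) -> bytes:
    salt, nonce, ct = blob[:16], blob[16:32], blob[32:]
    key = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, nonce, len(ct))))

# What lands in cloud storage is the opaque blob, not the readable file.
blob = encrypt("correct horse battery", b"student records covered by FERPA")
assert decrypt("correct horse battery", blob) == b"student records covered by FERPA"
```

The point of the sketch is that only the encrypted blob ever leaves the device, so the cloud provider's terms of service and breach history become far less critical.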
***
APPENDIX A – Mobile Device Survey Questions

1. Do you use a mobile device for university business purposes? (Mobile devices include handheld computing devices that store or send information, or connect to the Internet, e.g., smartphones, PDAs, and tablets such as iPads. PLEASE DO NOT INCLUDE LAPTOP COMPUTERS.) Note: if your answer to this question is "no," you do not need to go any further.
2. Is the device(s) university owned or personally owned?
3. Please list the type (make, model, etc.) of mobile device(s) you use for university business purposes. Make ______________ Model _________________ Make _______________ Model _________________
4. What business functions do you perform using your mobile device(s)? (e-mail ____ / calendar ____ / connect to file servers ____ / connect to workstation to transfer files ____ / connect to enterprise systems ____ / please list other ____)
5. Is information and support available for you to keep your mobile device safe and secure for business processes? (Feel free to add any comments you feel are relevant to this question.)
6. Is restricted or sensitive data on your mobile device? Yes ___ / No ___ / I don't know ___. If yes, is the data encrypted? ___
7. Is anti-virus software configured on your mobile device? If yes, please list _____________________.
8. Are there other employees working under your supervision who use mobile devices for business purposes? If yes, please list.
APPENDIX B – Mobile Device Detailed Follow-up Review

1. Do you have a password/PIN set on your mobile device?
2. If you use a password, is it set to lock the device after a period of inactivity?
3. Which web browser do you use to access the internet and/or UCSC servers or applications? (Auditor to check whether passwords are saved.)
4. Do you use an email client to access UCSC email? If so, is it configured with POP3, IMAP, or another protocol (this will determine if data is downloaded to the phone)?
5. Do you have a firewall installed on your mobile device? If yes, list and describe settings.
6. Do you use other apps, such as SSH clients, to access servers or systems? If so, do they store your password? Please demonstrate log-in to show the password is not saved.
7. Do any of your apps store data on the internal memory in the device? If yes, describe the type of data.
8. Do your apps and/or web browser store passwords to access email and other systems? Do you know what type of data is stored on your device?
9. Do you use Dropbox, Google Docs, or other on-line data storage? If yes, describe how data is accessed, i.e., is a password required and does it comply with password standards?
10. Have you viewed the ITS on-line training module for mobile devices?
11. Have you ever requested service for your mobile device from ITS? If yes, please elaborate on the request and results.
12. Does your device have a voice interface (such as iPhone Siri)?
13. Does your mobile device support data encryption? If yes, please describe if it is used.
14. Is Bluetooth enabled on your device?
15. If Bluetooth is enabled, do you have a device pairing password for Bluetooth communications?
16. Is your device configured to automatically update operating system software (and is it still supported by the vendor)?
17. Is your device configured to delete all data if the wrong password is entered repeatedly?
18. Is your device configured so that it can be remotely wiped if the device were lost or stolen?
19. Do you have the ability to track the location of your device if it is lost or stolen?
20. Has your mobile device been "unlocked" to allow use of other service providers and/or to install apps from other sources?
21. How do you verify that the apps you install on your device are secure and free of malware?
22. Is your device configured to automatically log on to any of the following types of systems?
23. Is your device configured to access the internet via free Wi-Fi access points when available?
24. Are there any services the campus should provide to help keep your device secure?
Support for Lost or Stolen Devices
In our review we noted that there was no place where users could go for help in the case of a lost or stolen mobile device. Users typically report lost or stolen mobile devices to campus police or the help desk. Neither of
these offices had specific information to tell users what actions they should take if their device was lost or stolen. This advice is most critical when the device potentially has direct access to restricted data. In discussing this scenario with the mobile device experts we identified in our review, some obvious steps came to light, such as:
25. Change email, Dropbox, and any other passwords that may have been stored on the device immediately.
26. Attempt to track the device location, if location tracking was enabled.
27. Send a "call if found" phone number to be displayed on the device, if possible.
28. Remotely wipe the device if sensitive data is accessible by cracking the four-digit PIN and cannot be protected by other means.
29. Notify the wireless service provider that the device is missing, and deactivate the device to prevent misuse if location tracking fails.
30. Has your device ever been infected with malware, or have you installed apps that created problems? If yes, please elaborate.
APPENDIX C – Symantec Whitepaper: A Window into Mobile Device Security
SEE NEXT PAGE
Security Response
A Window Into Mobile Device Security
Examining the security approaches employed in Apple's iOS and Google's Android
Carey Nachenberg, VP, Fellow

Contents
Executive Summary
Introduction
Mobile Security Goals
  Web-based and network-based attacks
  Malware
  Social Engineering Attacks
  Resource Abuse
  Data Loss
  Data Integrity Threats
Device Security Models
  Apple iOS
  Android
iOS vs. Android: Security Overview
Device Ecosystems
Mobile Security Solutions
  Mobile Antivirus
  Secure Browser
  Mobile Device Management (MDM)
  Enterprise Sandbox
  Data Loss Prevention (DLP)
Conclusion
Executive Summary
The mass adoption of both consumer and managed mobile devices in the enterprise has increased employee productivity but has also exposed the enterprise to new security risks. The latest mobile platforms were designed with security in mind—both teams of engineers attempted to build security features directly into the operating system to limit attacks from the outset. However, as this paper discusses, while these security provisions raise the bar, they may be insufficient to protect the enterprise assets that regularly find their way onto devices. Finally, complicating the security picture is the fact that virtually all of today's mobile devices operate in an ecosystem, much of it not controlled by the enterprise—they connect and synchronize out-of-the-box with third-party cloud services and computers whose security posture is potentially unknown and outside of the enterprise's control.
Introduction
With so many consumer devices finding their way into the enterprise, CIOs and CISOs are facing a trial by fire. Every day, more users are using mobile devices to access corporate services, view corporate data, and conduct business. Moreover, many of these devices are not controlled by the administrator, meaning that sensitive enterprise data is not subject to the enterprise's existing compliance, security, and Data Loss Prevention policies. To complicate matters, today's mobile devices are not islands—they are connected to an entire ecosystem of supporting cloud and PC-based services. Many corporate employees synchronize their device(s) with at least one public cloud-based service that is outside of
the administrator’s control. Moreover, many users also directly synchronize their mobile device with their home computer to back up key device settings and data. In both scenarios, key enterprise assets may be stored in any number of insecure locations outside the direct governance of the enterprise. In this paper, we will review the security models of the two most popular mobile platforms in use today, Android and iOS, in order to understand the impact these devices will have as their adoption grows within enterprises.
Mobile Security Goals
One thing is clear—when it comes to security, the two major mobile platforms share little in common with their traditional desktop and server operating system cousins. While both platforms were built upon existing operating systems (iOS is based on Apple's OS X operating system and Android is based on Linux), they each employ far more elaborate security models that are designed into their core implementations. The ostensible goal of their creators: to make the platforms inherently secure rather than to force users to rely upon third-party security software. So have Apple and Google been successful in their quest to create secure platforms? To answer this question, we will provide a thorough analysis of each platform's security model and then analyze each implementation to determine its effectiveness against today's major threats, including:
• Web-based and network-based attacks
• Malware
• Social engineering attacks
• Resource and service availability abuse
• Malicious and unintentional data loss
• Attacks on the integrity of the device's data
The sections below provide a brief overview of each attack class.
Web-based and network-based attacks
These attacks are typically launched by malicious websites or compromised legitimate websites. The attacking website sends malformed network content to the victim's browser, causing the browser to run malicious logic of the attacker's choosing. Once the browser has been exploited, the malicious logic attempts to install malware on the system or steal confidential data that flows through the Web browser. A typical Web-based attack works as follows: an unsuspecting user surfs to a malicious Web page. The server on which the page is hosted identifies the client device as running a potentially vulnerable version of the operating system. The attacking website then sends down a specially crafted set of malicious data to the Web browser, causing the Web browser to run malicious instructions from the attacker. Once these instructions have control of the Web browser, they have access to the user's surfing history, logins, credit card numbers, passwords, etc., and may even be able to access other parts of the device (such as its calendar, the contact database, etc.).
Malware
Malware can be broken up into three high-level categories: traditional computer viruses, computer worms, and Trojan horse programs. Traditional computer viruses work by attaching themselves to legitimate host programs much like a parasite attaches itself to a host organism. Computer worms spread from device to device over a network. Trojan horse programs don't self-replicate, but instead perform malicious actions, including compromising the confidentiality, integrity, or availability of the device or using its resources for malicious purposes. Examples of mobile malware include the iPhoneOS.Ikee worm, which was targeted at iOS-based devices (for example, iPhones), or the Android.Pjapps threat, which enrolled infected Android devices in a hacker-controlled botnet.
Social Engineering Attacks
Social engineering attacks, such as phishing, trick the user into disclosing sensitive information. Social engineering can also be used to entice a user to install malware on a mobile device.
Resource Abuse
The goal of many attacks is to misuse the network, computing, or identity resources of a device for unsanctioned purposes. The two most common such abuses are the sending of spam emails from compromised devices and the use of compromised devices to launch denial of service attacks on either third-party websites or perhaps on the mobile carrier's voice or data network. In the spam relay scenario, an attacker surreptitiously transmits spam emails to a herd of compromised devices and then instructs these devices to forward these emails over standard email or SMS messaging services to unsuspecting victims. The spam therefore appears to originate from legitimate mobile devices. In the denial of service attack scenario, the attacker might instruct a large herd of previously compromised devices to send a flood of network data (for example, network packets, SMS messages, etc.) to one or more targets on the Internet. Given the limited bandwidth available on today's wireless networks, such an attack could potentially impact the quality of either voice or data services on the wireless network in addition to impacting a targeted website.
Data Loss
Data loss occurs when an employee or hacker exfiltrates sensitive information from a protected device or network. This loss can be either unintentional or malicious in nature. In one scenario, an enterprise employee might access their work calendar or contact list from a mobile device. If they then synchronize this device with their home PC, for example, to add music or other multimedia content to the device, the enterprise data may be unknowingly backed up onto the user's unmanaged home computer and become a target for hackers. In an alternative scenario, a user may access a sensitive enterprise email attachment on their mobile device, and then have their device stolen. In some instances, an attacker may be able to access this sensitive attachment simply by extracting the built-in SD flash memory card from the device.
Data Integrity Threats
In a data integrity attack, the attacker attempts to corrupt or modify data without the permission of the data's owner. Attackers may attempt to launch such attacks in order to disrupt the operations of an enterprise or potentially for financial gain (for example, to encrypt the user's data until the user pays a ransom fee). In addition to such intentional attacks, data may also be corrupted or modified by natural forces (for example, by random data corruption). For example, a malware program might delete or maliciously modify the contents of the mobile device's address book or calendar.
Device Security Models
The designers of iOS and Android based their security implementations, to varying degrees, upon five distinct pillars:
• Traditional Access Control: Traditional access control seeks to protect devices using techniques such as passwords and idle-time screen locking.
• Application Provenance: Provenance is an approach where each application is stamped with the identity of its author and then made tamper resistant (using a digital signature). This enables a user to decide whether or not to use an application based on the identity of its author. In some implementations, a publisher may also analyze the application for security risks before publication, further increasing the pedigree of an app.
• Encryption: Encryption seeks to conceal data at rest on the device to address device loss or theft.
• Isolation: Isolation techniques attempt to limit an application's ability to access the sensitive data or systems on a device.
• Permissions-based access control: Permission-based access control grants a set of permissions to each application and then limits each application to accessing device data/systems that are within the scope of those permissions, blocking the application if it attempts to perform actions that exceed these permissions.

Now that we've introduced the threat categories we wish to defend against and the five security pillars, the following sections provide a detailed security analysis of each mobile platform.
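As a rough illustration of the permission-based access control pillar, the gatekeeper logic can be sketched as follows. All names here are hypothetical and the check runs in ordinary code purely for demonstration; on a real platform the operating system enforces these boundaries, not the app.

```python
# Sketch of permission-based access control: an app is granted a fixed
# permission set at install time, and any call outside that set is blocked.
# All class, function, and permission names are hypothetical.

class PermissionDenied(Exception):
    pass

class App:
    def __init__(self, name, granted):
        self.name = name
        self.granted = set(granted)

    def require(self, permission):
        if permission not in self.granted:
            raise PermissionDenied(f"{self.name}: {permission} not granted")

def read_contacts(app):
    """A guarded system service: refuses apps lacking READ_CONTACTS."""
    app.require("READ_CONTACTS")
    return ["alice", "bob"]  # stand-in for the real contact store

mail = App("mail", {"INTERNET", "READ_CONTACTS"})
game = App("game", {"INTERNET"})

assert read_contacts(mail) == ["alice", "bob"]
try:
    read_contacts(game)       # exceeds the permissions granted at install
    blocked = False
except PermissionDenied:
    blocked = True
assert blocked
```

The sketch also shows why user understanding matters: the security of the model collapses to whatever permission set the user agreed to at install time.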
Apple iOS Apple’s iOS operating system that powers iPod, iPhone, and iPad devices is effectively a slimmed down version of Apple’s OS X Mac operating system. OS X is inherently a Unix-based system that traces its roots to NEXT corporation’s Mach operating system, and ultimately to the FreeBSD variant of Unix. While iOS leverages all five of the security pillars, its security model is primarily based on four of the five pillars: traditional access control, application provenance, encryption, and isolation. The next sections will cover these four primary pillars in detail, and then briefly discuss iOS’s secondary reliance on permission-based access control.
Traditional Access Control
iOS provides traditional access control security options, including password configuration options as well as account lockout options. For example, an administrator may choose the strength of the passcode and specify how frequently the user must update their passcode. They can also specify such items as the maximum number of failed login attempts before the device wipes itself.
How effective has Apple's Access Control implementation been?
The access control features provided by iOS provide a reasonable level of security for the device's data in the event of loss or device theft. Essentially, iOS is at parity with traditional Windows-based desktops in this area.
Application Provenance
Before fine art galleries sell expensive pieces of art, they make sure to verify the provenance and authenticity of these works. This gives the buyer confidence that they are obtaining an original work of quality and value. Apple employs a similar model with its iOS Developer and iOS Developer Enterprise programs. Before software developers can release software to iPhone, iPod, and iPad users, they must go through a registration process with Apple and pay an annual licensing fee. Developers must then "digitally sign" each app with an Apple-issued digital certificate before its release. This signing process embeds the developer's identity directly into the app, guarantees that the app author is an Apple-approved developer (since only these developers are issued such a certificate), and ensures that the app's logic cannot be tampered with after its creation by the author. Today, Apple gives developers two different ways to distribute their applications to customers. First, anyone wishing to sell an iOS app to the general public must do so by publishing the app on Apple's App Store. To post an app on the App Store, the software developer must first submit the app for certification by Apple—this certification process typically takes one to two weeks. Once an app has been certified, Apple posts it for sale on its App Store.* Second, corporations wishing to deploy privately developed apps to their internal workforce may register with Apple's iOS Developer Enterprise program. To be approved for this program, Apple requires that the applicant corporation be certified by Dun and Bradstreet, indicating that it is an established corporation with a clean track record. As a member of this program, enterprises may distribute apps developed in-house via an internal corporate website or by pushing the app using Apple's iOS management platform.
As before, each app must be digitally signed by the enterprise before distribution to the internal workforce. Moreover, internally developed apps can only be used on devices on which the enterprise has installed a digital certificate called a "provisioning profile." This certificate may be installed at the same time as the enterprise app, or in advance of the deployment of one or more enterprise apps. If the certificate is ever removed from the device or expires, then all apps signed with the certificate will cease to function. While Apple explicitly permits corporations to distribute internal applications to their workforces, they prohibit sale/distribution of internally developed apps to third parties. If detected, this activity could lead to revocation of the enterprise's ability to participate in the iOS Developer Enterprise program. If such an abuse is detected, Apple presumably can simply issue a global revocation for the corporation's provisioning profile, immediately disabling all apps released by the vendor. This certificate requirement also enables a corporation to instantly disable its internally developed applications by simply removing the certificate from a device. This could be used, for example, to deprovision an employee's private device once the employee leaves the company. The provenance approach employed by Apple certainly increases the odds that software developers will be held accountable for their applications, and we believe that this has had a strong deterrent effect. However, it is by no means foolproof. First, it is certainly possible that a malware author could use a stolen identity to register for an account to sell malicious apps on the Apple App Store. Second, Apple does not discuss its app certification approach, and it is possible that an attacker could slip malware past this certification process. On the positive side, Apple's requirement that all apps be digitally signed by Apple-approved software vendors does ensure that applications aren't tampered with, modified, or infected by hackers.

* Apple has the ability to rapidly remove apps (that are found to be malicious or that violate their licensing agreement) from their App Store, but does not yet appear to possess an automated mechanism to remove malicious apps directly from iPhones/iPads once an app has been installed on the device.
How effective has Apple's Application Provenance implementation been?
The primary security goal of Apple's provenance approach is to limit malware, and in this regard, Apple has been effective. Thus far, we haven't seen actual malware targeting non-jailbroken iOS devices. Why is this? It is likely that malware authors steer away from the iOS platform because they understand that (A) they must register and pay to obtain a signing certificate from Apple, which makes it more likely they will get identified and prosecuted if they perform malicious activities, and (B) Apple tests each and every application that is submitted for publication on the App Store for malicious behavior or violations of their policies, making it more likely that the attacker will be caught. Finally (C), Apple's code signing model prevents tampering with published apps—there is no way for an attacker to maliciously modify another app (for example, to add spyware to it) without breaking the "seal" on that app's digital signature. It is important to note that Apple's provenance approach only applies to devices that have not been "jailbroken". Jailbroken devices—devices that have been intentionally hacked by their owners to give the owner administrative control over the device's operating system—have their provenance system disabled and may run apps from any source. Such jailbroken devices have already been the target of at least two computer worm attacks (described in the iOS Malware section), and will likely be the target of increasing volumes of malware in the future.
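The tamper-evidence "seal" described above rests on cryptographic digests: a code signature fixes a digest of the app's bytes, so any later modification changes the digest and is detectable. A real signature additionally binds that digest to the developer's private key; this standard-library sketch shows only the digest-mismatch part, with illustrative names.

```python
import hashlib

def seal(app_bytes: bytes) -> str:
    """The digest a code signature would fix at signing time
    (the actual private-key signing step is omitted here)."""
    return hashlib.sha256(app_bytes).hexdigest()

original = b"legitimate app code"
recorded = seal(original)            # fixed when the developer signs

# Any post-signing modification produces a different digest, so the
# tampering is detectable before the app is trusted or executed.
tampered = original + b" + injected spyware"
assert seal(tampered) != recorded
assert seal(original) == recorded    # untouched app still verifies
```

This is why an attacker cannot quietly add spyware to a published app: recomputing a valid seal would require the signing key that only the approved developer (or Apple) holds.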
Encryption
The latest iPhones, iPads, and iPod Touch devices (that is, those using the iOS 4 operating system and beyond) employ a hybrid encryption model. First, iOS uses hardware-accelerated AES-256 encryption to encrypt all data stored in the flash memory of the device. Second, iOS protects specific additional data items, such as email, using an additional layer of encryption.
At first glance, iOS’s full-device encryption approach would appear to offer a high degree of protection. However, Apple’s implementation has a hitch. Since iOS runs background applications even when the user is not logged in to their device, and since these background applications need to access the device’s storage, iOS needs to keep a copy of the decryption key around at all times so it can decrypt the device’s data and provide it to these background apps. In other words, the majority of the data on each device is encrypted in such a manner that it can be decrypted without the need for the user to input the device’s master passcode. This means that an attacker with physical access to an iOS device and with a functional jailbreak attack can potentially read most of the device’s data without knowing the device’s passcode.
In addition to hardware encryption, our research indicates that a small subset of iOS’s data is secondarily encrypted in such a way that it may only be accessed if the device is unlocked via the user passcode. If the attacker doesn’t have access to the device’s passcode, then this data is essentially 100 percent secure while the device is
locked, whether or not an attacker has physical access to the device. Based on our research, iOS encrypts emails and attachments using this secondary level of encryption. Apple has indicated that other data may also be encrypted with this second level of encryption; however, we have not been able to verify this directly. Third-party applications can also manually leverage this encryption if they implement the required programming logic.
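The two-tier design described above can be sketched as a toy model. A throwaway XOR stream cipher stands in for the hardware AES-256 engine, and all keys and data items are illustrative; this is not Apple's actual implementation:

```python
import hashlib, itertools

def keystream(key):
    # illustrative XOR stream cipher; stands in for the hardware AES engine
    for counter in itertools.count():
        yield from hashlib.sha256(key + counter.to_bytes(8, "big")).digest()

def xor_crypt(key, data):
    # XOR is its own inverse, so the same call encrypts and decrypts
    return bytes(a ^ b for a, b in zip(data, keystream(key)))

device_key = b"burned-into-hardware"   # always available, even while locked

def passcode_key(passcode):
    # the secondary key only exists while the passcode is known
    return hashlib.pbkdf2_hmac("sha256", passcode, b"device-salt", 10_000)

note  = xor_crypt(device_key, b"shopping list")           # baseline layer
email = xor_crypt(passcode_key(b"1234"), b"salary memo")  # protected layer

# A jailbreak attacker with physical access but no passcode can still use
# the ever-present device key on the baseline layer ...
assert xor_crypt(device_key, note) == b"shopping list"
# ... but the secondary layer stays opaque without the right passcode.
assert xor_crypt(passcode_key(b"0000"), email) != b"salary memo"
assert xor_crypt(passcode_key(b"1234"), email) == b"salary memo"
```

The asymmetry in the final assertions is the crux of the text's argument: most device data is in the first category, and only a small subset (such as email) is in the second.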
How effective has iOS’s encryption implementation been?
The main goal of encryption is to prevent loss of data due to device theft or loss. In this regard, Apple’s encryption implementation may be considered a marginal success. The main use case behind iOS’s device-level encryption is rapid device wiping. Since every byte of data on the device is hardware encrypted with an encryption key, a device can be wiped by simply throwing away this key. If the encryption key is discarded, then all of the device’s data is rendered inaccessible. This is exactly how Apple’s device wiping technology works. Thus, if an administrator or user knows that a device has been lost or stolen early enough, they can almost certainly send a “kill signal” to the device via a third-party Mobile Device Management (MDM) solution and ensure that all of the data on the device is protected. Similarly, iOS devices can be configured to automatically throw away their hardware encryption key if the user enters an incorrect passcode too many times, rendering the data wholly unreadable.
However, a determined attacker that has physical access to a device and a functional jailbreaking tool can potentially obtain far more information. In a February 2011 report, German security researchers from the Fraunhofer Institute showed that, using a six-minute automated process, they could bypass the hardware encryption protections on an up-to-date, passcode-locked iPhone (running iOS 4.2.1) and obtain passwords and login information for most of the device’s systems, including its Exchange email passwords and credentials, Wi-Fi passwords, VPN passwords, and voicemail passwords.* While most casual attackers won’t have the sophistication required to launch this type of attack, this clearly shows that iOS’s hardware encryption strategy is still vulnerable to attack.
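The crypto-erase idea can be illustrated with a minimal sketch (again using an illustrative XOR stream cipher in place of hardware AES): wiping discards only the key, leaving the bulk ciphertext unreadable no matter how large it is.

```python
import hashlib, itertools, secrets

def xor_crypt(key, data):
    # illustrative XOR stream cipher standing in for hardware AES-256
    stream = (byte for counter in itertools.count()
              for byte in hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
    return bytes(d ^ next(stream) for d in data)

class Device:
    """Toy model of crypto-erase: a wipe discards only the key, not the
    (much larger) encrypted flash contents, so it completes instantly."""
    def __init__(self):
        self._key = secrets.token_bytes(32)   # hardware key in the real design
        self.flash = b""

    def store(self, plaintext):
        self.flash = xor_crypt(self._key, plaintext)

    def read(self):
        if self._key is None:
            raise RuntimeError("device wiped: key destroyed")
        return xor_crypt(self._key, self.flash)

    def remote_wipe(self):                    # the MDM "kill signal"
        self._key = None

d = Device()
d.store(b"quarterly financials")
assert d.read() == b"quarterly financials"
d.remote_wipe()
assert d.flash != b""   # ciphertext still sits in flash, but is unrecoverable
```

This is why the wipe is effectively instantaneous: discarding 32 bytes of key renders gigabytes of flash contents unreadable without overwriting any of them.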
Moreover, whether or not an iOS device is locked and in the user’s pocket, or unlocked in their hand, apps running on the device may freely access iOS’s calendar, contact list, photos (many of which are tagged with GPS coordinates), etc., since Apple’s hardware decrypts this data on behalf of every running app. So should a malicious app bypass Apple’s vetting process, or should an attacker compromise a legitimate app on the device (for example, by using a Web-based attack to compromise the Safari Web browser), the attacker could easily access and steal data from many of the device’s systems.
Isolation (Sandboxing)
The iOS operating system isolates each app from every other app on the system—apps aren’t allowed to view or modify each other’s data, logic, etc. One app can’t even find out if another app is present on the device. Nor can apps access the iOS operating system kernel—they can’t install privileged “drivers” on the device or otherwise obtain root-level (administrator) access to the device. This inherent design choice ensures a high degree of separation between apps, and between each app and the operating system.
All third-party applications running on iOS run with the same limited level of device control and are ultimately totally controllable by the iOS operating system and the user. For example, every third-party application running on an iOS device is subject to termination if the device is running low on available memory—no app can designate itself as “system critical” to avoid such termination by the operating system. The user may also terminate any app at any time with a few taps of the touchscreen. This is in contrast to PC-based applications that can easily install themselves into the operating system kernel and obtain an elevated privilege level to obtain total control of a system (and prevent easy termination by the user).
In addition to being isolated from each other and from the operating system kernel, applications are isolated from the phone’s SMS and email in/out-boxes and email attachments within these mailboxes. Apps are also prohibited from sending SMS messages and from initiating or answering phone calls without the user’s participation.
* http://www.sit.fraunhofer.de/en/Images/sc_iPhone%20Passwords_tcm502-80443.pdf
On the other hand, iOS apps are allowed to freely access the following system-level resources without any explicit granting of permission by the user. They may:
• Communicate with any computer over the wireless Internet.
• Access the device’s address book, including mailing addresses, notes associated with each contact, etc.
• Access the device’s calendar entries.
• Access the device’s unique identifier (a proprietary ID issued to each device by Apple).
• Access the device’s phone number (this may be disabled via a simple configuration change by the user).
• Access the device’s music/video files and its photo gallery.
• Access the recent Safari search history.
• Access items in the device’s auto-completion history.
• Access recently viewed items in the YouTube application.
• Access the Wi-Fi connection logs.
• Access the device’s microphone and video camera.
How effective has iOS’s application isolation approach been?
Application isolation is meant to address a number of different attacks, including preventing Web-based and network-based attacks, limiting the impact of malware, preventing malicious data loss, preventing attacks on the integrity of the device’s data, and ensuring the availability of the device’s services and data. Let’s examine iOS’s isolation model against each of these:
Web-based and network-based attacks
Since iOS isolates each app from every other app on the system, this means that if an attacker compromises an app, they will not be able to attack other apps or the iOS operating system itself (unless an unpatched vulnerability in iOS is attacked). For example, consider Apple’s Safari Web browser. If an attacker were to deliver an attack via a malicious Web page that took control of the browser’s logic, this attack would be unable to spread to any other apps on the system beyond the browser, limiting its impact.
However, this attack, once running in the Web browser process, could still access system-wide resources such as the calendar, the contact list, photos, the device’s unique ID, etc., since these resources are available for access by all apps under the default iOS isolation policy. The malicious code could then exfiltrate this sensitive data to the attacker without ever having to escape the confines of the browser’s sandbox. Moreover, resident malicious code within the browser process can also steal any data hosted in or flowing through the browser process itself, including Web passwords, credit card numbers, CCV security codes, account numbers, browsing history, bookmarks, etc. And such a malicious agent in the browser could also initiate malicious transactions on behalf of the user, without their consent.
So, in summary, iOS’s isolation approach has thus far provided a great deal of protection against network-based attacks. However, attacks against specific apps like the Web browser, while being self-contained and blocked from impacting other apps, can still cause significant harm to a device.
Limiting the impact of malware
While it is difficult to measure this empirically due to the small number of actual malware samples on iOS, iOS’s isolation framework is theoretically effective at preventing classic malware attacks on the iPhone. Since apps can’t access or modify other apps on the system, this prevents a malicious app from infecting or maliciously modifying other apps, as a traditional parasitic computer virus might do. Further, the isolation layer prevents apps from installing operating system kernel drivers (such as kernel-based malware or rootkits) capable of running with the same administrator-level access as the operating system’s kernel. In this regard, iOS’s isolation system has thus far been effective.
Preventing Resource Abuse
iOS’s isolation system can prevent a subset of resource abuse attacks. On the negative side, iOS apps are given unrestricted access to the Internet, so technically they could be used to launch email-based spam campaigns, search engine optimization campaigns (the attacker tricks a search engine into raising the ranking/visibility of
a particular website) and some types of denial of service attacks against websites or a carrier’s network. However, it is important to note that we have never seen an actual example of such an attack. On the positive side, given that iOS’s isolation system prevents the automated transmission of SMS messages or automated initiation of phone calls, this eliminates the possibility of SMS-based DoS attacks, telephony-based DoS attacks, and SMS-based spam attacks on non-jailbroken devices.
Preventing Malicious Data Loss
The isolation approach implemented by iOS completely prevents each app from accessing other apps’ data—this policy is enforced regardless of whether apps encrypt their data or not, so long as the device has not been jailbroken. Moreover, beyond the library of media files, the calendar, and the contact database, which are all accessible to any app, iOS has no centralized repository of shared data that might pose a serious compromise risk. That said, these limited reservoirs of information (the calendar, media library, etc.) often store sensitive information, such as:
• Conference call numbers and passwords.
• Passwords for other systems (for example, bank accounts or enterprise logins).
• Credit card or bank account numbers that might be easily forgotten.
• Key codes for alarms and secure corporate offices.
• Employee names and phone numbers.
• Sensitive audio or video content, including internal audio and video podcasts from senior management.
All of these items can be obtained by any third-party app and exfiltrated off the device over the Internet without any warning from iOS’s security systems.
Preventing Attacks on the Integrity of the Device’s Data
iOS’s isolation policy allows apps to modify or delete the contents of the calendar and the contact list, but completely prevents modification or deletion of content from the user’s media and photo libraries and from other device systems. While a malicious app could easily delete or modify all of the user’s contacts and calendar entries, these can easily be recovered from a local backup (automatically created by iTunes during local syncs) or by synchronizing with a cloud-based data source like Exchange, MobileMe, or Google Calendar.
Permissions-based Access Control
Apple has built a relatively limited permission system into iOS. Essentially, there are only four system resources that apps may access that first require explicit permission from the user. All other access to system services or data is either explicitly allowed or blocked by iOS’s built-in isolation policy. Here are the permissions that an app may request:
• To access location data from the device’s global positioning system.
• To receive remote notification alerts from the Internet (used by cloud-based services to send real-time notifications to apps running on a user’s iPhone or iPad).
• To initiate an outgoing phone call.
• To send an outgoing SMS or email message.*
If an app attempts to use any of these features, the user will first be prompted for permission before the activity is allowed. If the user grants permission to either the GPS system or the notification alert system, then the app is permanently granted access to these systems. In contrast, the user is prompted every time an app attempts to initiate an outgoing call or send an SMS message.
* Technically, iOS blocks local applications from using built-in iOS messaging systems to surreptitiously send SMS or email messages directly off the device without the user’s consent. However, given that apps can connect to any other computer on the Internet without the user’s express consent, apps can directly connect to Internet-based messaging services (for example, SMTP servers or SMS relay services) and then use these third-party services to send emails or SMS messages without the user’s consent, effectively bypassing this permission-based protection system. It is important to note that SMS messages sent through one of these third-party services would not result in a charge to the user, which may have been Apple’s primary goal: to prevent unauthorized sending of expensive text messages from a device.
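The two prompting behaviors described above (one-time grants for location and push, a fresh prompt for every call or SMS) can be sketched as a toy model; the permission names and broker class here are illustrative, not Apple's API.

```python
# Toy model of iOS's four user-mediated permissions. Location and push
# grants persist once given; calls and SMS prompt the user on every use.
PERSISTENT = {"location", "push_notifications"}
PER_USE = {"outgoing_call", "outgoing_sms"}

class PermissionBroker:
    def __init__(self, ask_user):
        self.ask_user = ask_user     # callback simulating the on-screen prompt
        self.granted = set()

    def request(self, perm):
        if perm in PERSISTENT:
            if perm not in self.granted:
                if not self.ask_user(perm):
                    return False
                self.granted.add(perm)   # remembered permanently
            return True
        if perm in PER_USE:
            return self.ask_user(perm)   # prompted every single time
        raise ValueError("not a user-mediated resource in this model")

prompts = []
def always_yes(perm):
    prompts.append(perm)
    return True

broker = PermissionBroker(always_yes)
broker.request("location"); broker.request("location")
broker.request("outgoing_sms"); broker.request("outgoing_sms")
# location prompted once, SMS prompted both times
assert prompts == ["location", "outgoing_sms", "outgoing_sms"]
```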
How effective has iOS’s permission system been?
The GPS permission requirement prevents unwanted applications from surreptitiously tracking the user’s location. To date, there have been no known attacks that have bypassed this protection on non-jailbroken devices. The second permission, covering push notifications, is not related to security but rather to preserving battery life. The push notification subsystem in iOS frequently checks the Internet for new notifications, which can result in significant battery drain. Therefore, the effectiveness (or lack thereof) of this aspect of the permission system has no impact on a device’s security. To our knowledge, the remaining two permissions have effectively prevented attacks that attempt to surreptitiously initiate phone calls or send SMS messages (for example, to expensive pay services). As we’ll see, Android’s more lenient permission model has resulted in at least one such attack.
Vulnerabilities
As of the time of this paper’s writing, security researchers had discovered roughly 200 different vulnerabilities in various versions of the iOS operating system since its initial release. The vast majority of these vulnerabilities were of lower severity. Specifically, most would allow an attacker to take control of a single process (for example, the Safari process) but not permit the attacker to take administrator-level control of the device. The remaining handful of vulnerabilities were of the highest severity, and when exploited, enabled an attacker to take administrator-level control of the device, granting them access to virtually all data and services on the device. These more severe vulnerabilities are classified as privilege escalation vulnerabilities because they enable an attacker to escalate their privileges and gain total control over the device. While each of these vulnerabilities could have been targeted for malicious purposes, the majority of exploitation appears to have been initiated by device owners for the purpose of jailbreaking rather than as a means to maliciously compromise devices. According to Symantec’s data at the time this paper was authored, Apple took an average of 12 days to patch each vulnerability once it was discovered.
Brief Overview of iOS Malware (and False Alarms)
Aurora Feint (July, 2008): This iPhone game uploaded contacts stored in the iPhone’s address book to the developer’s servers in an unencrypted form. Apple briefly pulled this app from the Apple App Store, later restoring it after receiving an explanation from the developer. The developer explained that they used this contact data to match players up with their friends to enable over-the-air gameplay.
Storm8 (November, 2009): Storm8 Corporation released three games onto the Apple App Store that were downloaded by over twenty million users. These games transmitted the phone number from the iOS device to Storm8’s servers for the purpose of uniquely identifying each user in their multiplayer game. Storm8 subsequently switched from using the device’s phone number to using Apple’s unique device ID value to identify users of its games.
Figure 1: iPhone.Ikee Wallpaper
iPhoneOS.Ikee Worm (November, 2009): This computer worm spread over-the-air (for example, across cellular and Wi-Fi networks) to jailbroken iOS devices, changing the device’s background wallpaper to display a picture of 1980s pop star Rick Astley. iPhoneOS.Ikee performed no other malicious activity beyond changing the device’s wallpaper. The worm was only capable of attacking devices that met three criteria: First, the device had to have been previously jailbroken by its owner. Second, the owner must have previously installed an SSH (secure shell) application on the device
(these applications typically enable a user to remotely connect to and control one computer or device from another computer). Third, the worm would only attack devices for which the default SSH password had not been changed. iPhoneOS.Ikee.B (November, 2009): This computer worm also spread over-the-air to jailbroken iOS devices using the same SSH default password attack used by iPhoneOS.Ikee. Once the worm infected a new device, it would lock the screen and display the following text: “Your iPhone’s been hacked because it’s really insecure! Please visit doiop.com/iHacked and secure your iPhone right now!” In order to unlock an infected phone, the user was required to pay a €5 ransom to the attacker’s PayPal account.
Figure 2: iPhone.Ikee.B Message
Summary of iOS Security
Overall, Symantec considers iOS’s security model to be well designed and thus far it has proven largely resistant to attack. To summarize:
• iOS’s encryption system provides strong protection of emails and email attachments, and enables device wipe, but thus far has provided less protection against a physical device compromise by a determined attacker.
• iOS’s provenance approach ensures that Apple vets every single publicly available app. While this vetting approach is not foolproof, and almost certainly can be circumvented by a determined attacker, it has thus far proved a deterrent against malware attacks, data loss attacks, data integrity attacks, and denial of service attacks.
• iOS’s isolation model totally prevents traditional types of computer viruses and worms, and limits the data that spyware can access. It also limits most network-based attacks, such as buffer overflows, from taking control of the device. However, it does not necessarily prevent all classes of data loss attacks, resource abuse attacks, or data integrity attacks.
• iOS’s permission model ensures that apps can’t obtain the device’s location, send SMS messages, or initiate phone calls without the owner’s permission.
• None of iOS’s protection technologies address social engineering attacks such as phishing or spam.
Android
Android is a marriage of the Linux operating system and a Java-based platform called Dalvik, an offshoot of the popular Java platform. Essentially, software developers write their apps in the Java programming language and then, using Google tools, convert the resulting Java programs to run on the proprietary Dalvik platform on Android devices. Once converted, such an app can run on any Android device. It is unclear why Google chose to use a non-standard Java platform to run its apps; perhaps this approach was taken to avoid patent infringement.
Each Android app runs within its own virtual machine (just as Java applications do), and each virtual machine is isolated in its own Linux process. This model ensures that no process can access the resources of any other process (unless the device is jailbroken). While Java’s virtual machine was designed to be a secure, “sandboxed” system capable of containing potentially malicious programs, Android does not rely upon its virtual machine technology to enforce security. Instead, all protection is enforced directly by the Linux-based Android operating system.
Android’s security model is primarily based on three of the five security pillars: traditional access control, isolation, and a permission-based security model. However, it is important to note that Android’s security does not simply arise from its software implementation. Google releases the programming source code for the entire Android project, enabling scrutiny from the broader security community. Google argues that this openness helps to uncover flaws and leads to improvements over time that materially impact the platform’s level of security.*
* This claim appears to be true—fewer than two dozen vulnerabilities have been discovered in the Android platform since its release, an extremely low number.
The next three sections will explore Android’s use of these primary pillars, while the following two sections will then examine Android’s secondary reliance upon the Provenance and Encryption approaches.
Traditional Access Control
Android 2.X versions provide rudimentary password configuration options, including the ability to specify the strength of the device passcode, specify the phone’s lockout time span, and specify how many failed login attempts must occur before the device wipes its data. Android 3.0 also introduces the notion of password expiration, enabling administrators to compel users to update their password on a regular schedule.
How effective has Android’s Access Control implementation been?
Android’s password policy system is sufficient to protect devices against casual attacks. However, since current versions of Android do not encrypt data stored on the removable SD memory card (for example, the 16- or 32-gigabyte memory chip used to store data and multimedia files), an attacker with physical access to an Android device could simply eject the SD memory card and obtain a subset of the device’s data in a matter of seconds, bypassing any and all password controls enabled on the device.
Isolation
Like iOS, Android employs a strong isolation system to ensure that apps only access approved system resources. This isolation system not only isolates each app from other apps on the system, but also prevents apps from accessing or modifying the operating system kernel, ensuring that a malicious app can’t gain administrator-level control over a device. The default isolation policy prohibits access to virtually every subsystem of the device, with the following noteworthy exceptions:
• Apps may obtain the list of apps installed on the device and examine each application’s programming logic (but not its private data).
• Apps may read (but not write to) the contents of the user’s SD flash card, which typically holds the user’s music, video files, installed programs, and possibly documents or saved attachments. Apps may read all of the data on the SD card without restriction (regardless of which app created a particular piece of data, all apps can read that data).
• Apps may launch other applications on the system, such as the Web browser, the maps application, etc.
Of course, this default isolation policy is so strict that it inhibits the creation of many classes of applications. As such, Android permits applications to request à la carte access to the device’s other subsystems—the isolation system then enforces each app’s expanded set of permissions. This à la carte access model is discussed in the next major section below.
How effective is Android’s isolation system?
When we consider Android’s isolation system, we must evaluate its ability to (A) limit the damage in the situation where an attacker manages to compromise a legitimate app (for example, a Web browser), and (B) block or constrain traditional malware.
First, since Android isolates each app from every other app on the system, from most of the device’s services, and from the operating system itself, this means that if an attacker compromises a legitimate app, they will not be able to attack other apps or the Android operating system itself. This is a positive of Android’s isolation model.
Let’s consider Android’s Web browser. Web browsers are by far the most targeted class of legitimate application, since attackers know that Web browsers often have security flaws that can easily be exploited by a properly crafted malicious Web page. Imagine that an attacker posted a malicious Web page that attacked a known flaw of Android’s Web browser. If an unsuspecting user surfed to this Web page, the attack could inject itself into the Android browser and begin running. Once running in the Web browser’s process, would this attack pose a threat? Yes and no. First, Android’s isolation policy would ensure that the attack could not spread beyond the browser to other apps on the system or to the operating system kernel itself. However, such an attack could access any parts of the system that the Web
browser app had been granted permission to access. For example, if the Web browser had permission to save or modify data on the user’s SD storage card (for example, to save downloads on the card), then the attacker could take advantage of this permission to corrupt data on the SD storage card. Therefore, an attacker effectively gains the same control over the device as the app they manage to attack, with varying implications depending on the set of permissions requested by the compromised app.
Moreover, malicious code within the attacked process can also steal any data that flows through the process itself. In the case of a Web browser, the attack could easily obtain login names, passwords, credit card numbers, CCV security codes, account numbers, browsing history, bookmarks, etc. Since mobile users often access internal enterprise applications via their mobile Web browser, this could lead to leakage of highly sensitive enterprise data, even if VPN or SSL encryption is employed. And such a malicious agent in the browser could also initiate malicious transactions on behalf of the user, without their consent.
Next let’s consider the ability of Android’s isolation system to protect against malicious apps such as Trojan horses and spyware. Since Android’s isolation system is designed to isolate each app from other apps on the system, this ensures that a malicious app can’t tamper with other apps on the system, access their private data, or access the Android operating system kernel. However, given that each app can request permission to access other device subsystems such as the email inbox, the GPS system, or the network, it is possible for such a malicious app to operate within the confines of Android’s isolation system and still conduct many categories of attacks, including resource attacks, data loss attacks, etc.
Ultimately, as we’ll see in the next section, the reliance upon the (potentially uninformed) user to grant a set of permissions to an app is the weak link in Android’s isolation approach. Attackers have in a small number of instances bypassed Android’s isolation system by exploiting flaws in its implementation (that is, vulnerabilities). However, the number of vulnerabilities in Android has generally been small, and most have been fixed quickly (our Vulnerabilities section covers this in more detail).
Permissions-based Access Control
By default, most Android applications can do very little without explicitly requesting permission from the user to do so. For example, if an app wants to communicate over the Internet, it must explicitly request permission from the user to do this; otherwise the default isolation policy blocks it from initiating direct network communications. Each Android app therefore contains an embedded list of permissions that it needs in order to function properly. This list of requests is presented to the user in non-technical language at the time an app is installed on the device, and the user can then decide whether or not to allow the app to be installed based on their tolerance for risk. If the user chooses to proceed with the installation, the app is granted permission to access all of the requested subsystems. On the other hand, if the user chooses to abort the installation, then the app is completely blocked from running. Android offers no middle ground (allowing some permissions, but rejecting others).
Third-party apps can request permission to use the following high-level subsystems:
• Networking subsystems: Apps can establish network connections with other networked devices over Wi-Fi or using the cellular signal.
• Device identifiers: Apps can obtain the device’s phone number, the device ID (IMEI) number, its SIM card’s serial number, and the device’s subscriber ID (IMSI) number. These codes can be used by criminals to commit cellular phone fraud.
• Messaging systems: Apps can access emails and attachments in the device’s inbox, outbox, and SMS systems. Apps can also initiate transmission of outgoing emails and SMS messages without user prompting and intercept incoming emails and SMS messages.
• Calendar and Address book: Apps can read, modify, delete, and add new entries to the system calendar and address book.
• Multimedia and image files: Apps may access multimedia (for example, MP3 files) and pictures hosted by the device’s photo application.
Page 12
Security Response
A Window Into Mobile Device Security
• External memory card access: Apps can request to save, modify, or delete existing data on external plug-and-play SD memory cards. Once granted this permission, apps have unrestricted access to all of the data on the SD card, which is not encrypted by default.
• Global positioning system: Apps may obtain the device’s location.
• Telephony system: Apps can initiate and potentially terminate phone calls without the user’s consent.
• Logs and browsing history: Apps may access the device’s logs (such as the log of outgoing and incoming calls, the system’s error log, etc.) as well as the Web browser’s list of bookmarks and surfing history.
• Task list: An app may obtain the list of currently running apps.
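The all-or-nothing install-time grant described above can be sketched as a toy model; the permission names are illustrative, and in a real device the enforcement is done by the Android operating system, not application code.

```python
# Toy model of Android's install-time, all-or-nothing permission grant.
def install(manifest_permissions, user_accepts):
    if not user_accepts(manifest_permissions):
        return None                       # install aborted; the app never runs
    return set(manifest_permissions)      # app gets ALL requested permissions

def api_call(granted, needed):
    # the isolation layer enforces the granted set at run time
    if granted is None or needed not in granted:
        raise PermissionError(needed)
    return "ok"

granted = install({"INTERNET", "READ_CONTACTS"}, lambda perms: True)
assert api_call(granted, "INTERNET") == "ok"

try:
    api_call(granted, "SEND_SMS")         # never requested, so always denied
    assert False
except PermissionError:
    pass

assert install({"SEND_SMS"}, lambda perms: False) is None  # no middle ground
```

Note that the user's only two outcomes are the final two cases: accept every permission in the manifest, or abort the installation entirely.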
How effective is Android’s permission system?
At first glance, Android’s permission system seems to be extremely robust, enabling software vendors to limit an application to the minimal set of device resources required for operation. The problem with this approach is that ultimately, it relies upon the user to make all policy decisions and decide whether an app’s requested combination of permissions is safe or not. Unfortunately, in the vast majority of cases, users are not technically equipped to make these security decisions. In contrast, Apple’s iOS platform simply denies access, under all circumstances, to many of the device’s more sensitive subsystems. This increases the security of iOS-based devices since it removes the user from the security decision-making process. However, this also constrains each application’s functionality, potentially limiting the utility of certain classes of iOS apps.
For example, consider a video game that requests permission to access the Internet and also to access the device’s identification numbers. It’s difficult for a novice user to determine whether this combination of privileges is dangerous or not. In a legitimate scenario, the app might use the device’s unique ID to look up the user’s high scores on a server. Yet by requesting these same two privileges an app could also export the device’s IMEI and IMSI numbers to an attacker—both of these device identification numbers could be used by criminals to commit wireless fraud. For example, an IMEI number stolen from a working phone can be used to unlock a stolen phone that was previously disabled by the carrier. The typical user has no basis to understand the implications of granting a particular set of permissions, and many benign-looking combinations of permissions can be used to launch an attack.
So far, we’ve seen only a handful of different malware apps released for Android, but it’s already clear that many are able to cause damage without having to “crack” or bypass Android’s permission system. Each malicious app simply requests the set of permissions it needs to operate, and in most cases, users happily grant these permissions on the promise of playing the next great video game or using an up-and-coming calendar organizer. Android’s isolation system then happily grants the app full access to the requested set of device services. By requesting the proper permissions, a malicious app could launch resource abuse attacks (for example, sending large volumes of spam or launching distributed denial of service attacks), data loss attacks (stealing data from the device’s calendar, contact list, etc.), and perform data availability/integrity attacks (by modifying/deleting data in calendar, contact list, or on the SD card). So, to conclude, while Android implements a robust permission system, its dependence on the user and its leniency in offering access to most of the device’s sub-systems compromise its effectiveness and have already opened up Android devices to attack.
Application Provenance Whereas Apple’s iOS platform is built upon a strong application provenance model, the provenance approach adopted by Google for Android devices is less rigorous and consequently, less secure.
Android's Digital Signing Model The ultimate goal of digitally signing an application is twofold: one, to ensure that the app's logic is not tampered with, and two, to allow a user of the app to determine the identity of the app's author. Google's approach undermines both of these goals. Why is this? Like Apple, the Android operating system will only install and run apps that have been properly signed with a digital certificate. However, unlike Apple, software developers need not apply to Google to obtain a code-signing certificate. Instead, application developers can generate their own signing certificates, as often as they like, without any oversight. In fact, the software developer can place any company name and contact
information in their certificate that they like, for example “Author=Dr. Seuss”. The result is that a malware author can generate “anonymous” digital certificates as often as they like and none of these certificates or malware signed with them can be traced back to the author. In order for developers to sell their apps on Google’s official Android Marketplace, developers must pay a $25 fee via credit card. This enables Google to associate the payee with the digital certificate used to digitally sign the developer’s apps and should act as a mild deterrent against malware authors posting malware on the Android Marketplace (if they use their own credit card to register). However, given that developers have the ability to distribute their apps from virtually any website on the Internet—not just the Android marketplace—malware programs can also be distributed with anonymity without any vetting by Google. This approach has two problems. First, it makes it much easier for malware authors to create and distribute malicious applications since these applications can’t be tracked back to their source. Second, this approach makes it easier for attackers to add Trojan horses to existing legitimate apps. The attacker can obtain a legitimate app, add some malicious logic to the app, and then re-sign the updated version with an anonymous certificate and post it onto the Internet. While the newly signed app will lose its original digital signature, Android will certify and install the newly signed malicious app with its anonymous digital signature. Thus, Android’s model does not realistically prevent tampering.
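The core weakness can be demonstrated with the standard Java security APIs: a freshly generated, completely unvetted key pair produces signatures that verify perfectly, which is why a valid signature on an Android package proves integrity against that key but says nothing about the author's identity. This sketch is illustrative only and does not use the actual Android package-signing format:

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

// Sketch: anyone can mint an anonymous key pair (claiming any author
// name they like) and produce app signatures that verify flawlessly.
public class SelfSignDemo {

    // Signs the app bytes with a freshly generated key and verifies the
    // result. Returns true -- verification proves only that the bytes
    // match the signing key, not who holds that key.
    public static boolean signAndVerify(byte[] app) {
        try {
            KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
            gen.initialize(2048);
            KeyPair anonymous = gen.generateKeyPair(); // no CA, no vetting

            Signature signer = Signature.getInstance("SHA256withRSA");
            signer.initSign(anonymous.getPrivate());
            signer.update(app);
            byte[] sig = signer.sign();

            Signature verifier = Signature.getInstance("SHA256withRSA");
            verifier.initVerify(anonymous.getPublic());
            verifier.update(app);
            return verifier.verify(sig);
        } catch (Exception e) {
            return false;
        }
    }
}
```

This is precisely why a re-signed, Trojanized app installs without complaint: Android checks that the package matches its certificate, not that the certificate traces back to anyone.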
Android’s Vetting Model While Apple enforces a single vetting and distribution channel for all iOS apps, Google chose a far more open model for Android apps. First, Google does not appear to perform a rigorous security analysis of applications posted onto its Android Marketplace. This means that malware authors can distribute their apps through this distribution channel with less likelihood of being discovered. While Apple’s certification approach can certainly fail to detect some classes of attacks, it at least acts as a deterrent to malware authors. In contrast, Google’s lack of validation offers less of a deterrent. Second, Android application developers can distribute their apps from virtually any website on the Internet— they are not limited to distributing their apps via the Android Marketplace. While, by default, Android devices may only download applications from Google, users may override this setting with a few taps of their touchscreen and then download apps from virtually anywhere.* This allows users to “side-load” apps from any source on the Internet. Like iOS, Android will never silently install an app onto a device. The user is always notified before a new application is installed (with a few notable exceptions, described below). This prevents drive-by attacks common on PCs and requires attackers to employ social engineering to trick users into agreeing to install malicious apps on their devices.
How effective is Android's provenance approach? History shows us that platforms that allow software developers to anonymously release their applications have experienced larger volumes of malware than those platforms that require each app to be digitally stamped with the certified identity of its author. The Android platform appears to reinforce this historical precedent. During 2010 and 2011, attacks such as Android.Rootcager, Android.Pjapps, and Android.Bgserv all took advantage of weaknesses in Android's provenance model. In each of these cases, the attacker appropriated an existing, legitimate application, stripped the original, legitimate digital signature from the application, injected malicious code into the application, re-signed the Trojanized application using an uncertified digital signature, and then distributed the app via either the official Android Marketplace or third-party websites. In all, these threats impacted hundreds of thousands of users. Since attackers can effectively generate their own digital certificates as frequently as they like and use them to sign malware, we argue that this compromises the value of Android's provenance system, especially for apps distributed outside Android's App Marketplace where there's no software developer vetting process.
* Some wireless carriers forbid such "side-loading" of apps from third-party websites.
Moreover,
we argue that since no single authority evaluates/verifies all Android apps, attackers are more likely to release attacks without worry of getting caught—this too, we believe, has led to an increase in the prevalence of Android malware.
Encryption As of the time of this writing, only the latest generation of Android tablet devices (running Android 3.0) support hardware encryption to protect data. Unfortunately, at this time Google has not disclosed how this encryption works, making it difficult to determine its strengths and weaknesses. However, devices running earlier versions of Android (including virtually all Android-based mobile phones available at the time this paper was authored) rely upon the isolation model, instead of encryption, to protect data such as passwords, user names, and application-specific data. This means that if an attacker is able to jailbreak a device or otherwise obtain administrator-level access to a device by exploiting a vulnerability or by obtaining physical access to a device, they can access virtually every byte of data on the device, including most of the passwords, Exchange/private email account credentials, etc.* As with iOS, third-party Android applications may optionally encrypt their data using standards-based encryption algorithms, but application developers must explicitly add this logic to their program. Otherwise, all data created by applications is saved in an unencrypted form.
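The practical consequence for developers is that any sensitive application data on pre-3.0 Android must be encrypted explicitly by the app itself. A minimal sketch using the standard javax.crypto API (AES in GCM mode) might look like the following; the class and method names are our own, not an Android API:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.security.SecureRandom;
import java.util.Arrays;

// Illustrative app-level encryption: AES-256 in GCM mode with a random
// IV prepended to each ciphertext. Key storage/derivation is out of
// scope here and is itself a hard problem on an unencrypted device.
public class AppDataCrypto {
    private static final int IV_LEN = 12;    // bytes, standard for GCM
    private static final int TAG_BITS = 128; // authentication tag size

    public static SecretKey newKey() {
        try {
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(256);
            return kg.generateKey();
        } catch (Exception e) { throw new IllegalStateException(e); }
    }

    public static byte[] encrypt(SecretKey key, byte[] plain) {
        try {
            byte[] iv = new byte[IV_LEN];
            new SecureRandom().nextBytes(iv);
            Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
            c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(TAG_BITS, iv));
            byte[] ct = c.doFinal(plain);
            byte[] out = new byte[IV_LEN + ct.length];
            System.arraycopy(iv, 0, out, 0, IV_LEN);
            System.arraycopy(ct, 0, out, IV_LEN, ct.length);
            return out;
        } catch (Exception e) { throw new IllegalStateException(e); }
    }

    public static byte[] decrypt(SecretKey key, byte[] blob) {
        try {
            Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
            c.init(Cipher.DECRYPT_MODE, key,
                   new GCMParameterSpec(TAG_BITS, Arrays.copyOfRange(blob, 0, IV_LEN)));
            return c.doFinal(Arrays.copyOfRange(blob, IV_LEN, blob.length));
        } catch (Exception e) { throw new IllegalStateException(e); }
    }
}
```

Note that even with such logic in place, the encryption key itself must be kept somewhere the isolation model protects, which is exactly what a jailbreak defeats.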
Vulnerabilities As of the time of this paper's writing, security researchers had discovered 18 different vulnerabilities in various versions of the Android operating system since its initial release. Of these, most were of lower severity and would only allow an attacker to take control of a single process (for example, the Web browser process) but not permit the attacker to take administrator-level control of the device. The remaining few vulnerabilities were of the highest severity, and when exploited, enabled an attacker to take root-level control of the device, granting them access to virtually all data on the device. To date, all but four of these eighteen vulnerabilities have been patched by Google. Of the four unaddressed vulnerabilities, one is of the more severe privilege escalation type. This vulnerability has been addressed in the 2.3 release of Android, but has not been fixed for prior versions of the operating system. Given that most carriers have not updated their customers' phones from Android 2.2 to 2.3, this means that virtually every existing Android phone (at the time of this writing) is currently open to attack. This vulnerability may be exploited by any third-party app and does not require the attacker to have physical access to the device. As an example, the recent Android.Rootcager and Android.Bgserv threats both leveraged this vulnerability to obtain administrator-level control of devices. Even more interestingly (and controversially), Google's fix tool for Android.Rootcager also had to exploit this vulnerability in order to circumvent Android's isolation system to remove parts of the threat from the device. According to Symantec's data at the time this paper was authored, Google took an average of eight days to patch each vulnerability once it was discovered.
Brief Overview of Android Malware Android.Pjapps / Android.Geinimi (January/February, 2010): These threats were designed to steal information from Android devices and enroll the compromised device in a botnet. Once enrolled, these Trojans enabled the attacker to launch attacks on third-party websites, steal additional device data, deliver advertising to the user, cause the user's phone to send expensive SMS messages, etc. To distribute these threats, the attackers obtained existing legitimate programs from the Android store, injected the malware logic into them and then distributed these modified versions on third-party Android marketplace websites. Users downloaded what they thought were popular, legitimate applications without knowledge of the extra malicious payload included in the packages. * In a few cases, Android does store "authentication tokens" rather than passwords to prevent loss of passwords. These authentication tokens are numeric values derived from the original password using a one-way hashing function, making it impossible to obtain the user's original password, while still enabling a login from the device.
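The authentication-token idea in the footnote can be sketched in a few lines: derive the stored token from the password with a one-way hash so the password itself never resides on the device. The exact derivation Android services use has not been disclosed; this is purely illustrative, and a production scheme should use a salted, deliberately slow key-derivation function rather than a bare hash:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Illustrative one-way token derivation. Equal passwords (with equal
// salts) yield equal tokens, so the server can verify a login, but the
// hash cannot be inverted to recover the original password.
public class AuthToken {
    public static byte[] derive(String password, String salt) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            md.update(salt.getBytes(StandardCharsets.UTF_8));
            return md.digest(password.getBytes(StandardCharsets.UTF_8));
        } catch (Exception e) { throw new IllegalStateException(e); }
    }
}
```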
AndroidOS.FakePlayer (August, 2010): This malicious app masquerades as a media player application. Once installed, it silently sends SMS messages (at a cost of several dollars per message) to premium SMS numbers in Russia. Devices connected to wireless carriers outside of Russia are unaffected since the SMS messages are not properly delivered; however, this threat illustrates how easy it is to steal funds from unsuspecting users.
Figure 3. AndroidOS.FakePlayer Permissions
Android.Rootcager (February, 2011): Also known as Android.DroidDream, this attack was similar in nature to the Android.Pjapps attack—the attacker infected and redistributed more than 58 legitimate applications on Google's App Market. Hundreds of thousands of users were infected, tricked into thinking they were downloading legitimate applications. Once installed by the user, the threat attempted to exploit two different vulnerabilities in Android to obtain administrator-level control of the device. The threat then installed additional software on the device, without the user's consent. The software exfiltrates a number of confidential items, including: device ID/serial numbers, device model information, carrier information, and has the ability to download and install future malware packages without the user's knowledge (this is only possible since the threat exploited a vulnerability to bypass Android's isolation model). Both vulnerabilities used by Android.Rootcager were patched in Android's 2.3 release; however, most Android-based devices on the market are running earlier Android versions as of this paper's writing, meaning that most devices are still susceptible to this style of attack. Android.Bgserv (March, 2011): In response to the Android.Rootcager threat, Google deployed a tool over-the-air to clean up infected Android devices. Shortly after this cleanup tool was released, attackers capitalized on the hype and released a malicious fake version of the cleanup tool. This Trojan horse exfiltrates user data such as the device's IMEI number and its phone number to a server in China.
Figure 4. Android.Bgserv Service
Summary of Android's Security Overall, while we believe the Android security model is a major improvement over the models used by traditional desktop and server-based operating systems, it has two major drawbacks. First, its provenance system enables attackers to anonymously create and distribute malware. Second, its permission system, while extremely powerful, ultimately relies upon the user to make important security decisions. Unfortunately, most users are not technically capable of making such decisions and this has already led to social engineering attacks. To summarize:
• Android's provenance approach ensures that only digitally signed applications may be installed on Android devices. However, attackers can use anonymous digital certificates to sign their threats and distribute them across the Internet without any certification by Google. Attackers can also easily "trojanize" or inject malicious code into legitimate applications and then easily redistribute them across the Internet, signing them with a new, anonymous certificate. On the plus side, Google does require application authors wishing to distribute their apps via the official Android App Marketplace to pay a fee and register with Google (sharing the developer's digital signature with Google). As with Apple's registration approach, this should act as a deterrent to less organized attackers.
• Android's default isolation policy effectively isolates apps from each other and from most of the device's systems, including the Android operating system kernel, with several notable exceptions (apps can read all data on the SD card unfettered).
• Android's permission model ensures that apps are isolated from virtually every major device system unless they explicitly request access to those systems. Unfortunately, Android ultimately relies upon the user to decide whether or not to grant permissions to an app, leaving Android open to social engineering attacks. Most users are unequipped to make such security decisions, leaving them open to malware and all of the secondary attacks (for example, DDoS attacks, data loss attacks) that malware can launch.
• Android recently began offering built-in encryption in Android 3.0. However, earlier versions of Android (running on virtually all mobile phones in the field) contain no encryption capability, instead relying upon isolation and permissions to safeguard data. Thus, a simple jailbreak of an Android phone or theft of the device's SD card can lead to a significant amount of data loss.
• As with iOS, Android has no mechanism to prevent social engineering attacks such as phishing attacks or other (off-device) Web-based trickery.
Table 1 (Resisting attack types) rates the resistance of Apple iOS and Google Android to each threat category: Web-based attacks, malware attacks, social engineering attacks, resource abuse/service attacks, data loss (malicious and unintentional), and data integrity attacks.
iOS vs. Android: Security Overview Tables 1 and 2 summarize our conclusions about the various strengths and weaknesses of both the iOS and Android mobile platforms.
Device Ecosystems Today’s iOS and Android devices do not work in a vacuum—they’re almost always connected to one or more cloud-based services (such as an enterprise Exchange server, Gmail, MobileMe, etc.), a home or work PC, or all of the above. Users connect their devices to the cloud and to PC/Mac computers in order to: • Synchronize their enterprise email, calendars, and contacts with their device. • Synchronize their private email, calendars, and contacts, and other digital content with their device (for example, music and movie files). • Back up their device’s email, calendars, contacts, and other settings in case their device is lost.
Table 2 (Security feature implementation) rates Apple iOS and Google Android on each security pillar: access control, application provenance, encryption, isolation, and permission-based access control.
When properly deployed, both Android and iOS platforms allow users to simultaneously synchronize their devices with multiple (private and enterprise) cloud services without risking data exposure between these clouds. However, these services may be easily abused by employees, resulting in exposure of enterprise data on both unsanctioned employee devices as well as in the private cloud. As such, it is important to understand the entire ecosystem that these devices participate in, in order to formulate an effective device security strategy.
In a typical deployment, an employee connects their device to both an enterprise cloud service such as an Exchange server that holds the employee’s work calendar, contacts, and email, as well as to a private cloud service, such as Gmail or Apple’s MobileMe, which holds their private contacts, calendar events, and email. Once a device is connected to one or more data sources, both iOS and Android provide a consolidated view of both corporate and private email, calendars, and contact lists that unifies data from both services into one seamless user interface, while internally maintaining a layer of isolation for the data from each service. In such a sanctioned deployment, this isolation ensures that no enterprise data finds its way onto the private cloud servers and vice versa (in other words, a work meeting entered into the employee’s work calendar will never be synchronized into the user’s private Gmail calendar, and a private appointment entered in the user’s Gmail calendar would never find its way into the Exchange server). This isolation prevents loss of enterprise data and also enables administrators to safely wipe enterprise data from a device while letting the user retain their own private data should they leave the company. While such an enterprise-sanctioned deployment isolates data from each source, ensuring that enterprise data is not inadvertently synchronized with the employee’s private cloud, it is quite easy for users to use third-party tools or services to intentionally or unintentionally expose enterprise data to third-party cloud services, unmanaged computers and devices. Here are the most common scenarios:
Scenario #1: Unsanctioned Desktop Sync In this scenario, the employee brings in a consumer device that is neither under enterprise management nor approved to hold enterprise data. In order to synchronize the user's work calendars/contact lists with the device, the employee uses either iTunes or a third-party synchronization tool to directly synchronize contacts and calendar entries from their PC/Mac to their device. This approach effectively migrates all of the user's enterprise calendar/contact data from the Exchange server via the local Outlook or iCal client on the user's work computer into the employee's private device. Once the employee has used this approach to synchronize their work calendars/contacts to their device, they may opt to enroll their device in a third-party cloud service, such as Gmail or Apple's MobileMe service to make their calendar/contacts available via any Web browser (for example, so the employee, or perhaps a spouse, can view their schedule online). Services like Gmail and MobileMe are able to directly pull data off the user's device into the cloud. This type of employee behavior results in exposure of enterprise data to private clouds which do not have enterprise-level governance or protections (for example, password strength requirements), potentially opening up this data to attackers. In addition, this type of activity frequently exposes enterprise data to the employee's home computer (as we'll see below).
Figure 5. Scenario #1: Unsanctioned Desktop Sync (direct USB sync from the work PC to the consumer-owned device, and then from the device into a 3rd-party cloud service)
Scenario #2: Unsanctioned Enterprise Desktop to Cloud Sync In this scenario, the employee installs a tool such as Google Calendar Sync or iTunes on their work PC/Mac and uses this to synchronize their data directly with an unsanctioned private cloud, such as Gmail or MobileMe. Such synchronization programs automatically synchronize the desktop calendar and contacts from Outlook or iCal with a cloud service. This effectively migrates all of the user's enterprise calendar/contact data from the Exchange server, via the user's desktop, into an uncontrolled cloud. Next, the employee configures their device to synchronize directly with the private cloud provider, causing the device to download the employee's enterprise calendars and contacts. The employee is therefore able to obtain their work data on their device without having to connect the device directly to the enterprise Exchange server or to the employee's work PC. Again, this type of activity exposes private enterprise data to both a third-party private cloud as well as to the employee's consumer device—neither of which is governed or secured by the enterprise.
Figure 6. Scenario #2: Unsanctioned Enterprise Desktop to Cloud Sync (sync from the work PC to a 3rd-party cloud service, then to the consumer-owned device)
Scenario #3: Unsanctioned Enterprise Device Sync with Home PC Users regularly synchronize their home and work devices with their home PC/Mac to transfer music and multimedia files and to synchronize their device’s calendars and contact lists with their private calendar/contact lists on their home computer. Employees with an iPhone use the standard iTunes software to perform this synchronization, whereas Android users need to leverage a third-party package.
Even if the user only chooses to synchronize music and multimedia files between their device and their home computer (opting not to synchronize calendars or contacts), the synchronization software will make a backup of the device's data to the computer in case a problem occurs. This backup typically contains the entire calendar and address book, notes, as well as device settings. Importantly, this data may not necessarily be encrypted when backed up to the user's home computer. For example, iTunes, by default, does not password-protect or encrypt the backup stored on the user's computer. This has the unintended consequence of migrating the employee's enterprise-owned data onto their home PC in an unencrypted form even in cases where the employee opted not to explicitly synchronize enterprise calendar or contact data with their home computer.
Figure 7. Scenario #3: Unsanctioned Enterprise Device Sync with Home PC (enterprise device backup via USB to the home PC/Mac)
Mobile Security Solutions We expect the nascent market for mobile security solutions to develop quickly. The nature of these security solutions will be largely driven by the evolving threat landscape and the constraints imposed by the security models of each platform. Some of the initial mobile security approaches we have observed so far include:
Mobile Antivirus There are already a number of first-generation antivirus scanners for the Android platform. However, given iOS's strict isolation model, it is impossible to implement a traditional antivirus scanner for iOS-based devices (without relaxation of the isolation system by Apple). These scanners are effective at detecting known Android threats, but provide little protection against unknown threats. Ultimately Symantec expects traditional scanners to be replaced by cloud-enabled, reputation-based protection.
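A first-generation signature-based scanner of the kind described here essentially reduces to a fingerprint lookup: hash each package and compare against a database of known-malware hashes. The sketch below (the class name and structure are our own) also shows why such scanners cannot detect unknown threats, since anything absent from the database passes:

```java
import java.security.MessageDigest;
import java.util.Set;

// Illustrative known-threat scanner: an app is flagged only if its
// SHA-256 fingerprint appears in the known-malware database.
public class SignatureScanner {
    private final Set<String> knownMalwareHashes;

    public SignatureScanner(Set<String> knownMalwareHashes) {
        this.knownMalwareHashes = knownMalwareHashes;
    }

    public boolean isKnownMalware(byte[] apkBytes) {
        return knownMalwareHashes.contains(sha256Hex(apkBytes));
    }

    static String sha256Hex(byte[] data) {
        try {
            byte[] digest = MessageDigest.getInstance("SHA-256").digest(data);
            StringBuilder sb = new StringBuilder();
            for (byte b : digest) sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (Exception e) { throw new IllegalStateException(e); }
    }
}
```

Reputation-based approaches attempt to close this gap by scoring apps on signals beyond an exact fingerprint match.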
Of the six different categories of threats that face mobile devices, mobile antivirus solutions can address threats in the malware category as well as a subset of malware-based attacks in the resource abuse, data loss, and data integrity categories.
Secure Browser Several companies have introduced their own secure Web browser apps for both iOS and Android platforms. These apps are meant to be used instead of the built-in Web browsers provided on the iOS and Android platforms. Each time the user visits a URL in the secure browser, the browser checks the URL against a blacklist or reputation database and then blocks any malicious pages. The only problem with this secure browser approach is that the user cannot use the familiar, factory-installed Web browser shipped with the device. Instead, they must use the third-party secure Web browser to do all surfing. Of the six different categories of threats that face mobile devices, secure browsers can effectively address Web-based attacks and social engineering attacks. The secure browser can also potentially block the introduction of malware downloaded through the browser.
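The secure-browser flow reduces to a pre-navigation check of each URL against a blocklist or reputation source. In this sketch the blocklist is a local set of hosts; shipping products typically query a cloud reputation service, and the class name here is our own:

```java
import java.net.URI;
import java.util.Set;

// Illustrative pre-navigation filter: extract the host from each
// requested URL and refuse navigation if the host is blocklisted
// (or if the URL cannot be parsed at all).
public class SecureBrowserFilter {
    private final Set<String> blockedHosts;

    public SecureBrowserFilter(Set<String> blockedHosts) {
        this.blockedHosts = blockedHosts;
    }

    public boolean allow(String url) {
        try {
            String host = URI.create(url).getHost();
            return host != null && !blockedHosts.contains(host);
        } catch (IllegalArgumentException e) {
            return false; // malformed URLs are refused
        }
    }
}
```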
Mobile Device Management (MDM) These tools enable the administrator to remotely administer managed iOS and Android devices. Typical security policies might include setting the password strength, configuring the device's VPN settings, specifying the screen lock duration (how long before the screen locks and a password is required to unlock the device), or disabling specific device functions (like access to an App marketplace) to prohibit potentially risky behaviors. In addition, the administrator may perform security operations like wiping lost or stolen devices, or using the device's onboard geo-location service to locate a device. While mobile device management solutions don't specifically protect against any explicit threat category, they can help to reduce the risk of attack from many of the categories. For example, if the administrator uses the MDM solution to configure the device to block the installation of all new apps, this can eliminate the introduction of new malware, and also limit resource abuse, integrity threats, and some intentional or unintentional data loss.
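An MDM policy of the kind described can be modeled as a small set of constraints checked against a device's reported state. The field names below are hypothetical; real MDM products push such policies through platform mechanisms such as Exchange ActiveSync policies or iOS configuration profiles:

```java
// Illustrative MDM policy model: a device is compliant only if its
// reported settings satisfy every administrator-defined constraint.
public class MdmPolicy {
    public final int minPasswordLength;
    public final int maxScreenLockSeconds;   // lock after at most this delay
    public final boolean allowAppMarketplace;

    public MdmPolicy(int minPasswordLength, int maxScreenLockSeconds,
                     boolean allowAppMarketplace) {
        this.minPasswordLength = minPasswordLength;
        this.maxScreenLockSeconds = maxScreenLockSeconds;
        this.allowAppMarketplace = allowAppMarketplace;
    }

    public boolean compliant(int passwordLength, int screenLockSeconds,
                             boolean marketplaceEnabled) {
        return passwordLength >= minPasswordLength
            && screenLockSeconds <= maxScreenLockSeconds
            && (allowAppMarketplace || !marketplaceEnabled);
    }
}
```

A non-compliant device would then be denied access to enterprise services, or remotely wiped in the case of loss or theft.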
Enterprise Sandbox Sandbox solutions aim to provide a secure sandbox environment where employees can access enterprise resources such as email, calendar, contacts, corporate websites, and sensitive documents. All data stored in the sandbox, and data transmitted to and from services accessible via the sandbox, is encrypted. To use the sandbox, the user must first log in and the sandbox must then check with a corporate server to ensure that the user is still authorized to access both local data (on the device) as well as enterprise services. Given that the sandbox is provisioned by the corporate administrator, it can easily be deprovisioned if the device is lost or stolen. This approach has the effect of dividing the device's contents into two zones: a secure zone for the enterprise data, and an insecure zone for the employee's personal and private data. The benefit of such a solution is it enables the consumer to use their own device, yet it still lets them safely access enterprise data. The drawback of such a solution is that the user can't use the regular mail, calendar, or contact apps built into the device to access enterprise resources, forcing them to adapt to a different set of sandboxed, potentially less usable equivalent apps. This may compel some employees to bypass the sandbox and use unsanctioned means to access enterprise resources. The enterprise sandbox is primarily focused on protecting enterprise assets, such as enterprise documents, access to the enterprise intranet, enterprise emails and calendar events, etc., from attack. Therefore the sandbox approach is focused on preventing malicious and unintentional data loss. While this approach doesn't actually block the other attack categories explicitly, it does implicitly limit the impact of these attacks on enterprise assets.
Data Loss Prevention (DLP) These tools scan the publicly accessible storage areas of each device for sensitive materials. Due to iOS’s isolation system, iOS-based DLP tools can only inspect the calendar and contact lists for sensitive information. On Android, such a tool could scan the external flash storage (that is, the SD card), the email and SMS inboxes, as well as the calendar and contact lists. These products would be unable to scan the data of any other apps on the system, such as document viewers, word processors, spreadsheets, third-party email clients, etc., due to the isolation models of both iOS and Android. Thus, DLP solutions are not able to detect all sensitive data stored on or flowing through these devices.
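Conceptually, a DLP scan over the reachable storage areas is pattern matching against sensitive-data signatures. The simplified patterns below (a U.S. SSN format and a bare 16-digit card number) illustrate the idea; production DLP engines use far richer detection, such as checksum validation, dictionaries, and document fingerprinting:

```java
import java.util.regex.Pattern;

// Illustrative DLP content check over text a DLP tool can reach
// (calendar entries, contacts, SD-card files): flag any string that
// matches a sensitive-data pattern.
public class DlpScanner {
    private static final Pattern SSN =
        Pattern.compile("\\b\\d{3}-\\d{2}-\\d{4}\\b");   // e.g., 123-45-6789
    private static final Pattern CARD =
        Pattern.compile("\\b\\d{16}\\b");                 // bare 16-digit PAN

    public static boolean containsSensitiveData(String text) {
        return SSN.matcher(text).find() || CARD.matcher(text).find();
    }
}
```

As the paragraph above notes, no matter how good the patterns are, the scan can only see the storage areas the platform's isolation model exposes.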
Conclusion Today's mobile devices are a mixed bag when it comes to security. On the one hand, these platforms have been designed from the ground up to be more secure—they raise the bar by leveraging techniques such as application isolation, provenance, encryption, and permission-based access control. On the other hand, these devices were designed for consumers, and as such, they have traded off their security to ensure usability to varying degrees. These tradeoffs have contributed to the massive popularity of these platforms, but they also increase the risk of using these devices in the enterprise. Increasing this risk is the fact that employees bring their own consumer devices into the enterprise and leverage them without oversight, accessing corporate resources such as calendars, contact lists, corporate documents, and even email. In addition, employees often synchronize this enterprise data with third-party cloud services, as well as their home PC. This back-door connectivity results in the loss of potentially sensitive enterprise data across third-party systems that are out of the enterprise's direct control and governance. To conclude, while mobile devices promise to greatly improve productivity, they also introduce a number of new risks that must be managed by enterprises. We hope that by explaining the security models that undergird each platform, and the ecosystems these devices participate in, we've provided you, the reader, with the knowledge to more effectively derive value from these devices and also more effectively manage the risks they introduce.
Any technical information that is made available by Symantec Corporation is the copyrighted work of Symantec Corporation and is owned by Symantec Corporation. NO WARRANTY. The technical information is being delivered to you as-is and Symantec Corporation makes no warranty as to its accuracy or use. Any use of the technical documentation or the information contained herein is at the risk of the user. Documentation may include technical or other inaccuracies or typographical errors. Symantec reserves the right to make changes without prior notice.
About the author Carey Nachenberg is a Vice President in Symantec’s Security, Technology, and Response organization and a Symantec Fellow.
For specific country offices and contact numbers, please visit our Web site. For product information in the U.S., call toll-free 1 (800) 745 6054.
Symantec Corporation World Headquarters 350 Ellis Street Mountain View, CA 94043 USA +1 (650) 527-8000 www.symantec.com
About Symantec Symantec is a global leader in providing security, storage and systems management solutions to help businesses and consumers secure and manage their information. Headquartered in Mountain View, Calif., Symantec has operations in more than 40 countries. More information is available at www.symantec.com.
Copyright © 2011 Symantec Corporation. All rights reserved. Symantec and the Symantec logo are trademarks or registered trademarks of Symantec Corporation or its affiliates in the U.S. and other countries. Other names may be trademarks of their respective owners.