
Security & Infrastructure

Here’s how we keep your data secure and available

The Appenate platform provides robust and secure functionality for the rapid creation and deployment of connected, data-driven business applications. Our application architecture and failover designs leverage world-class technology to deliver a massively scalable, highly available and cost-effective software as a service offering.

Built on Microsoft Azure

Appenate is hosted on Microsoft’s Azure cloud infrastructure, which enables us to deliver highly scalable, available and fault-tolerant services.  Our application architecture has been designed to leverage Azure’s strong geo-redundancy, replication and recovery options, and follows Microsoft recommended best practices and processes.

Azure meets a broad set of international and industry-specific security, privacy and compliance standards including ISO 27001, HIPAA, FedRAMP, SOC 1 and SOC 2, as well as country-specific standards like Australia IRAP, UK G-Cloud, and Singapore MTCS. More information, including white papers and other resources, can be found at: https://azure.microsoft.com/en-us/support/trust-center

Our Operational Practices

Appenate utilises industry standard tools and practices to perform software development, quality assurance, deployment and configuration during daily operations of the Appenate SaaS platform.  Software and environment changes are versioned and committed to source control systems, with continuous integration tools providing automated testing and build procedures.

Application updates are deployed to a staging environment and then promoted to production using Azure’s Virtual IP address mechanism to avoid downtime.  In the event of issues with the new production deployment, we are able to immediately roll back to the prior stable version. All environmental aspects are defined via controlled configuration files, ensuring that application deployments execute on a consistent infrastructure and operating system environment.
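To make the staging-to-production promotion concrete, here is a minimal, self-contained sketch of the promote-and-rollback flow. It simulates the slot swap in memory; in practice this role is played by Azure's staging/production VIP swap, and the version strings and smoke test shown are illustrative only, not our actual tooling.

```python
# A minimal, self-contained simulation of the staging -> production promotion
# flow. The in-memory "slots" stand in for Azure's staging and production
# deployment slots; the version strings and smoke test are illustrative only.

slots = {"staging": None, "production": "v1.41"}  # hypothetical current state

def smoke_test(slot: str) -> bool:
    # Placeholder health check; a real check would probe the slot's endpoints.
    return slots[slot] is not None

def promote(new_build: str) -> None:
    slots["staging"] = new_build                      # deploy to staging first
    if not smoke_test("staging"):
        raise RuntimeError("staging checks failed; production untouched")
    # Promote via a swap, analogous to Azure's near-instant VIP swap (no downtime).
    slots["staging"], slots["production"] = slots["production"], slots["staging"]
    if not smoke_test("production"):
        # Swap back: immediate rollback to the prior stable version.
        slots["staging"], slots["production"] = slots["production"], slots["staging"]
        raise RuntimeError("rolled back to prior stable version")

promote("v1.42")
print(slots)  # {'staging': 'v1.41', 'production': 'v1.42'}
```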

We employ robust monitoring tools to log, analyse and constantly measure platform performance, availability and responsiveness.  Automated alerts and notifications are raised when key measures approach acceptability limits, allowing our team to respond timeously and proactively to issues.
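As an illustration of this approach, the sketch below raises an alert when a measured value approaches its acceptability limit. The metric names, limits and 80% warning margin are invented for the example and are not our actual thresholds.

```python
# Illustrative threshold-based alerting: flag a metric when it approaches its
# acceptability limit. Metric names, limits and the 80% warning margin are
# assumptions for the example only.

ACCEPTABILITY_LIMITS = {
    "p95_response_ms": 2000,   # hypothetical response-time ceiling
    "error_rate_pct": 1.0,     # hypothetical error-rate ceiling
}
WARNING_FRACTION = 0.8         # alert proactively at 80% of the limit

def check_metrics(samples: dict[str, float]) -> list[str]:
    alerts = []
    for metric, value in samples.items():
        limit = ACCEPTABILITY_LIMITS.get(metric)
        if limit is not None and value >= limit * WARNING_FRACTION:
            alerts.append(f"{metric}={value} approaching limit {limit}")
    return alerts

print(check_metrics({"p95_response_ms": 1700, "error_rate_pct": 0.2}))
# ['p95_response_ms=1700 approaching limit 2000']
```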

Data Replication and Backup

Data generated and stored on the Appenate platform is replicated between two physical data centres via Azure's paired region approach. We utilise Azure geo-replication and geo-redundancy features for storage and database operations, guided by Microsoft recommended practices. Point-in-time backups are also executed automatically: hourly for databases and daily for general file storage.

System Failover and Disaster Recovery

Our application architecture follows best practices to ensure failover and recovery can occur across multiple levels and scenarios.  At a hosting level, Appenate is deployed across a primary and secondary data centre pair.  These data centres are sufficiently physically distant from each other to reduce the likelihood of natural disasters, civil unrest, power outages, or physical network outages affecting both regions at once.  In the event of tier failure or outright disaster, failover procedures will transition services from our primary to the secondary centre.

Network and Platform Security

Appenate server instances run behind Azure’s comprehensive firewall and load balancing solution.  Inbound connections from both the Internet and remote management ports are blocked by default, with access tightly restricted to legitimate protocol and traffic only.  All firewall configurations are version controlled and peer reviewed as part of our standard change management processes. For more information on Azure-specific security, refer to Microsoft’s self-assessment paper here: https://cloudsecurityalliance.org/star-registrant/microsoft-azure

Backend access to Appenate databases, storage accounts and server instances is restricted to qualified Appenate team members only, with all actions performed using Microsoft provided management tools across SSL secured connections.

All app, web browser and REST API interactions with the Appenate platform occur using 256 bit SSL/TLS encryption (HTTPS protocol).  Users are required to log in with an email and password, and their login and access activity is recorded.  API access is authenticated against a platform generated 32 character secret key token.  Passwords stored on mobile devices and Appenate servers are always encrypted using AES 256 bit encryption algorithms according to industry standard practices. When a user account is terminated or deactivated, an automatic wipe of local app data is executed when/if the user next attempts to access the app.
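For illustration, the sketch below shows an authenticated REST call made over HTTPS with a secret key token. The base URL, parameter names and token value are placeholders rather than our documented API; refer to our API documentation for the actual endpoints and authentication details.

```python
# Illustrative sketch of authenticated REST access over HTTPS. The base URL,
# parameter names and key value below are placeholders, not Appenate's
# documented API; the point is that every call travels over TLS and carries
# the platform-generated secret key token.
import requests  # third-party: pip install requests

BASE_URL = "https://example-company.appenate.com/api/v2"   # hypothetical host/path
SECRET_KEY = "0123456789abcdef0123456789abcdef"            # example 32-character token

def list_forms():
    # requests verifies the server's TLS certificate by default,
    # so traffic is encrypted end to end.
    response = requests.get(
        f"{BASE_URL}/forms",
        params={"companyId": 12345, "integrationKey": SECRET_KEY},  # placeholder auth scheme
        timeout=30,
    )
    response.raise_for_status()
    return response.json()
```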

Frequently Asked Questions

Below is a set of system and security questions commonly asked of Appenate. Please note that our infrastructure and system design are subject to change, so the answers below may be revised from time to time. All answers apply to our cloud services unless otherwise indicated.

Privacy & Security

Is data “encrypted at rest” (e.g. in static backups, databases, file storage) and in transit?

As of May 25, 2018, all data is encrypted at rest. When data is transported between servers and devices, it is encrypted over HTTPS using 256 bit SSL/TLS.

Are employees only provided with access to the network and network services that they have been specifically authorized to use based on their role?  What about customers?

Only employees have access to network and infrastructure services, and that access level is based on their role. Our customers have no network or infrastructure services access.

Are privileged and generic account access tightly controlled and reviewed on a periodic basis, at least annually?

Yes. We rotate and renew passwords through our password management software on a regular (at least annual) basis.

Are shared user accounts prohibited for employees? What about customers?

Some shared accounts are employed based on access role; otherwise, employees have their own dedicated accounts. As mentioned above, clients have no such access or accounts.

Does your password construction require multiple strength requirements?

We require a minimum of 6 characters for passwords at our basic password management level. OWASP and NIST SP 800-63-3 password policy options have been available since May 2018 for all customer accounts. Customers can also implement their own choice of strength requirements by creating users and passwords through our APIs and turning off user password change functionality in the app, as illustrated below.
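For example, a customer managing users through our APIs could apply their own password policy before provisioning accounts. The rules in this sketch (12+ characters, mixed character classes, a small blocklist) are illustrative only and not a built-in Appenate policy.

```python
# Illustrative customer-side password policy check, applied before provisioning
# users through the API. The specific rules below are examples, not Appenate's
# built-in policy.
import re

COMMON_PASSWORDS = {"password", "123456", "qwerty"}  # tiny illustrative blocklist

def is_strong_enough(password: str) -> bool:
    return (
        len(password) >= 12
        and re.search(r"[a-z]", password) is not None
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"\d", password) is not None
        and password.lower() not in COMMON_PASSWORDS
    )

print(is_strong_enough("correct horse battery staple"))  # False: no upper case or digit
print(is_strong_enough("Tr0ub4dor&3-extended!"))         # True
```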

Is the network boundary protected by a firewall with ingress and egress filtering?

Yes. All firewalls and load balancing facilities are provided by Microsoft’s Azure platform. Refer to Microsoft’s STAR self-assessment details found here: https://cloudsecurityalliance.org/star-registrant/microsoft-azure

Are public facing servers in a well-defined De-Militarized Zone (DMZ)?

Yes, this is inherited from Azure’s default infrastructure zoning. Refer to Microsoft’s STAR self-assessment details found here: https://cloudsecurityalliance.org/star-registrant/microsoft-azure

Is internal network segmentation used to further isolate sensitive production resources such as PCI data?

We do not store PCI data, but network segmentation is employed based on Azure’s default configurations in this respect. Refer to Microsoft’s STAR self-assessment details found here: https://cloudsecurityalliance.org/star-registrant/microsoft-azure

Is network intrusion Detection or Prevention implemented and monitored?

We run a broad spectrum of monitoring tools, supplemented by notifications and alerts provided by Azure. This includes intrusion detection and email confirmations of network access.

Are all desktops protected using regularly updated virus, worm, spyware and malicious code software?

Yes, we use Windows and Mac computers with auto-updating of operating systems and antivirus enabled.

Are servers protected using industry hardening practices? Are the practices documented?

Yes, we utilise various security services to provide regular system security audits. Customers can also contact us to conduct penetration testing as desired to meet their requirements.

Is there an ongoing program for network and vulnerability scanning, e.g. port scanning?

We subscribe to services that conduct automated penetration tests monthly using industry-standard security tools and services.

Is there active vendor patch management for all operating systems, network devices and applications?

Yes. Our servers are constantly updated and patched by Microsoft automatically via their Azure service.

Are all production system errors and security events recorded and preserved?

We preserve logs for a minimum of 1 month, with some remaining for up to 6 months, depending on severity and action required.

Are security events and log data regularly reviewed?

Yes.  Logs are reviewed daily, weekly and monthly – depending on the nature of the log events.

Is there a documented privacy program in place with safeguards to ensure the protection of client confidential information?

Yes.  Refer to our Privacy Policy and GDPR information page.

Is there a process in place to notify clients if any privacy breach occurs?

Yes. We have a standard, documented process for responding to security breaches. This includes notifying impacted customers within 72 hours of a confirmed breach.

Do you store, process, transmit (i.e. “handle”) Personally Identifiable Information (PII)?

Yes.  Refer to our Privacy Policy for more information on this.

In what country or countries is PII stored?

This depends on where your account is hosted. We have 3 possible hosting locations – USA, EU and Australia. Refer to our Privacy Policy for more details.

Are system logs protected from alteration and destruction?

This is provided by Azure internally. Refer to Microsoft’s STAR self-assessment details found here: https://cloudsecurityalliance.org/star-registrant/microsoft-azure

Are boundary and VLAN points of entry protected by intrusion protection and detection devices that provide alerts when under attack?

This is provided by Azure internally.  Refer to Microsoft’s STAR self-assessment details found here: https://cloudsecurityalliance.org/star-registrant/microsoft-azure

Are logs and events correlated with a tool providing warnings of an attack in progress?

Our monitoring tools provide access to the necessary logging events when seeking correlation to attacks.

Is system level security based on industry standard frameworks such as ISO-27001, NIST800-53, or an equivalent framework as appropriate?

Microsoft Azure is audited annually for ISO 27001 compliance. Appenate follows industry best practices for data and system security, including ISO 27001 recommendations. We are not currently audited or otherwise certified under such frameworks, but we aim to formally gain a relevant certification in the future.

How is data segregated from other clients within the solution, including networking, front-ends, back-end storage and backups?

Every client account is logically separated from other clients, through the use of a required, persistent tenant identifier on all database records. All application code requires this tenant identifier for all operations – both read and write.  An automated testing regime is also in place to protect code changes from regressions and possible cross-tenant data contamination.

The tenant identifier is “hard linked” to every user account and logically enforced through fixed “WHERE” clauses on database queries and equivalent measures for file access. A platform user is not able to change or otherwise unlink their session or account from this tenant identifier.  Thus there is no logical possibility of a user having login authorisation under a different tenant identifier.   Even if they tried to access pages using a different tenant’s id, the system would reject the request due to the user account not being registered to the requested tenant ID.
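The sketch below illustrates this general pattern: a fixed tenant filter is bound into every query from the authenticated session, so a caller can never supply or override it. The table and column names are invented for the example and do not reflect our actual schema.

```python
# Illustrative tenant-scoped data access: every query carries the caller's
# tenant identifier in its WHERE clause. Table and column names are invented
# for illustration and are not Appenate's actual schema.
import sqlite3

def get_forms(conn: sqlite3.Connection, tenant_id: int) -> list[tuple]:
    # The tenant id comes from the authenticated session and is always bound
    # into the query; callers cannot supply or override it per request.
    return conn.execute(
        "SELECT id, name FROM forms WHERE tenant_id = ?",  # fixed tenant filter
        (tenant_id,),
    ).fetchall()

# Minimal demo with an in-memory database and two tenants.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE forms (id INTEGER, name TEXT, tenant_id INTEGER)")
conn.executemany(
    "INSERT INTO forms VALUES (?, ?, ?)",
    [(1, "Site Audit", 100), (2, "Timesheet", 200)],
)
print(get_forms(conn, 100))  # [(1, 'Site Audit')] -- tenant 200's data is never visible
```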

Do you have an Incident Response Plan?

Yes, we maintain a “living document” which outlines disaster and incident response checklists, contact details and key system facilities for understanding and responding to incidents.

What level of network protection does Appenate implement?

All network level security is managed by Microsoft Azure. See: http://download.microsoft.com/download/C/A/3/CA3FC5C0-ECE0-4F87-BF4B-D74064A00846/AzureNetworkSecurity_v3_Feb2015.pdf

Does Appenate install Microsoft Antimalware for Cloud Services and Virtual Machines or another antivirus solution on VMs, and can VMs be routinely reimaged to clean out intrusions that may have gone undetected?

We have the option to install Antimalware if needed; however, our default configuration matches Microsoft's, which is that antimalware is not installed.

We don’t remotely login or otherwise install software on our Cloud Services instances aside from our standard closed loop deployments through standard Azure management tools. Thus the risk of malware installation is minimal due to the lack of any direct login access to the instances.

Our servers are re-created using new, default Cloud Service instances every time we deploy a platform upgrade, which happens on average every 2 days or less.

This highly frequent re-creation of fresh instances also reduces any possible exposure time to malware in the highly unlikely event such was deployed to our servers.

Performance & Disaster Recovery

Does the platform provide reports for Quality of Service (QOS) performance measurements (resource utilisation, throughput, availability etc)?

We don’t provide such metrics to customers, aside from availability and response timings as per our status page on status.www.appenate.com.

Is the disaster recovery program tested at least annually?

Yes, we perform recovery checks and tests annually.

What is the Recovery Time Objective (RTO) and Recovery Point Objective (RPO) of the system?

Our RTO is 4 hours, with RPO being 1 hour.

Do you provide backup and restore plans for individual customers?

All aspects are multi-tenanted, so backups are taken across the entire customer base. We run complete file backups every 24 hours and benefit from Azure database point in time backups taken every 5 minutes.

What is the maximum time that back-ups are retained?

We retain database point-in-time backups for 30 days and general file backups for a similar period.

What is the expected turnaround time for a data restore?

Any customer restore in a non-disaster scenario must be requested and scheduled with our support team. Turnaround is between 1 and 2 business days.

Can a single entity (e.g. a Form) be restored without impacting the entire platform?

If restoration of a specific record or artefact is required by a customer, this can be performed online on a per-request basis and is chargeable work. There is no impact on the platform or customer account.

Is High Availability provided – i.e. where one server instance becomes unavailable does another become available?

We run multiple server instances at all system tiers, including the database tier (which is replicated). Failure of a server instance within the data centre is handled by Azure's load balancers, with the problem instance recycled and/or removed and replaced with a new instance.

Is data stored and available in another location (data centre) to meet disaster recovery requirements?

Yes. All data is replicated to a second data centre in a different geographic region.

Is the Appenate failover process an active/active, automated switchover process?

Failure of a server instance within the primary data centre is handled by Azure's load balancers, with the problem instance recycled and/or removed and replaced with a new instance.

In the event that the entire data centre were to have a critical failure, switchover to our secondary centre is a manual process, as we first need to perform a full assessment of the issue to ensure there are no simple workarounds to keep the existing primary centre available. If we determine that a move to our secondary centre is required, the switchover will be initiated manually to meet our target recovery objectives.
