Items tagged with: api
Cloud Healthcare API accelerates digital transformation by bridging the gap between existing clinical systems (HL7v2, FHIR, & DICOM) and Google Cloud.
Article word count: 842
HN Discussion: https://news.ycombinator.com/item?id=19604837
Posted by epiphone (karma: 62)
Post stats: Points: 146 - Comments: 66 - 2019-04-08T13:13:22Z
#HackerNews #api #cloud #google #healthcare
Standards-based APIs powering actionable healthcare insights for security and compliance-focused environments.
The engine for interoperability
Cloud Healthcare API bridges the gap between care systems and applications built on Google Cloud. By supporting standards-based data formats and protocols of existing healthcare technologies, Cloud Healthcare API connects your data to advanced Google Cloud capabilities, including streaming data processing with Cloud Dataflow, scalable analytics with BigQuery, and machine learning with Cloud Machine Learning Engine. In addition, Cloud Healthcare API simplifies application development and device integration to accelerate digital transformation and enable real-time integration with care networks.
FHIR (Fast Healthcare Interoperability Resources) is the emerging standard for healthcare data interoperability. Using REST semantics, FHIR specifies a robust, extensible data model for interacting with clinical resources. Google Cloud can transform data from other formats into and out of FHIR resources to simplify data ingestion, making the data available for use with analytics and machine learning tools. The FHIR API provides full support for STU3 resources.
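Because FHIR uses REST semantics, each clinical resource in a Cloud Healthcare API FHIR store is addressable by a URL. The sketch below shows how such a resource URL is assembled; the project, location, dataset, and store IDs are hypothetical examples, and the API version prefix may differ from what your project uses.

```python
# Sketch: building the REST URL for a single FHIR resource in a
# Cloud Healthcare API FHIR store. All IDs below are hypothetical,
# and the version prefix ("v1beta1") may differ in your environment.
BASE = "https://healthcare.googleapis.com/v1beta1"

def fhir_resource_url(project, location, dataset, store, resource_type, resource_id):
    """Assemble the path projects/.../fhirStores/.../fhir/<Type>/<id>."""
    return (f"{BASE}/projects/{project}/locations/{location}"
            f"/datasets/{dataset}/fhirStores/{store}"
            f"/fhir/{resource_type}/{resource_id}")

url = fhir_resource_url("my-project", "us-central1", "clinical-data",
                        "fhir-store", "Patient", "12345")
print(url)
```

A GET on that URL (with appropriate OAuth credentials) would return the STU3 Patient resource as JSON.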
HL7v2 is an essential communication modality for any application seeking to connect to existing clinical systems. The HL7v2 API implements a REST interface for ingesting, sending, searching, and retrieving HL7v2 messages. The HL7v2 API has been integrated with an open source adapter to send and receive messages over Minimal Lower Layer Protocol (MLLP) as well as several common HL7v2 interface engines. The adapter runs within Google Kubernetes Engine to provide rapid provisioning, communicates over Cloud Pub/Sub to deliver horizontal scalability, and connects with Cloud VPN to enable transport security.
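MLLP itself is a very thin framing layer: each HL7v2 message is wrapped in a vertical-tab start byte and a file-separator plus carriage-return trailer. A minimal sketch of that framing (the sample MSH segment is illustrative, not from any real system):

```python
# Sketch: MLLP framing for HL7v2 messages. MLLP wraps each message in a
# vertical-tab start byte (0x0B) and a 0x1C 0x0D trailer.
START_BLOCK = b"\x0b"
END_BLOCK = b"\x1c\x0d"

def mllp_frame(hl7_message: str) -> bytes:
    """Wrap an HL7v2 message in MLLP framing bytes for socket transport."""
    return START_BLOCK + hl7_message.encode("utf-8") + END_BLOCK

def mllp_unframe(data: bytes) -> str:
    """Strip MLLP framing from a received block."""
    if not (data.startswith(START_BLOCK) and data.endswith(END_BLOCK)):
        raise ValueError("not a valid MLLP frame")
    return data[len(START_BLOCK):-len(END_BLOCK)].decode("utf-8")

# Illustrative ADT message header (not from a real interface).
msg = "MSH|^~\\&|SENDER|FACILITY|RECEIVER|FACILITY|20190408131322||ADT^A01|MSG00001|P|2.3"
assert mllp_unframe(mllp_frame(msg)) == msg
```

The open source MLLP adapter handles this framing (plus connection management) so the REST-based HL7v2 API never has to.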
DICOM is the established standard for storing and exchanging medical images and their metadata across a wide range of modalities, including radiology, cardiology, ophthalmology, and dermatology. DICOMweb is a REST API used for storing, querying, and retrieving these images. The DICOMweb support in Cloud Healthcare API allows existing imaging devices, PACS solutions, and viewers to interact with the Cloud Healthcare API. This can be done either directly or via open source adapters designed to support existing DICOM DIMSE protocols. This allows customers to scalably store their medical imaging data and connect their data to powerful tools for analytics and machine learning.
De-identification is the process of removing or obfuscating identifying information from datasets so that the data cannot be linked back to specific individuals. De-identification is often a step in pre-processing healthcare datasets. It can be a critical step so that healthcare data can be made available for analysis, training, and evaluating machine learning models, and sharing with non-privileged parties, while protecting patient privacy. Cloud Healthcare API provides capabilities to de-identify several types of data stored in the service, facilitating these use cases and several others. This includes de-identifying structured medical records in FHIR format, as well as medical images in DICOM format (both metadata and pixel data).
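The actual de-identification runs server-side inside Cloud Healthcare API; as a purely conceptual illustration of what removing identifying fields from a structured record means, here is a toy sketch over a FHIR Patient dict (the field list is simplified and hypothetical, not the service's real configuration):

```python
# Conceptual sketch only: real de-identification is performed server-side
# by Cloud Healthcare API. This toy function just illustrates the idea of
# stripping identifying fields from a FHIR Patient resource.
IDENTIFYING_FIELDS = {"name", "telecom", "address", "birthDate", "identifier"}

def deidentify_patient(patient: dict) -> dict:
    """Return a copy of a FHIR Patient dict with identifying fields removed."""
    return {k: v for k, v in patient.items() if k not in IDENTIFYING_FIELDS}

patient = {
    "resourceType": "Patient",
    "identifier": [{"value": "MRN-1234"}],
    "name": [{"family": "Doe", "given": ["Jane"]}],
    "birthDate": "1970-01-01",
    "gender": "female",
}
clean = deidentify_patient(patient)
assert "name" not in clean and clean["gender"] == "female"
```

The service's DICOM de-identification goes further, also redacting burned-in identifiers from pixel data.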
Data is private, secure, and in your control
Data locality is a core component of Cloud Healthcare API. You choose the storage location for each dataset from current available locations that correspond to distinct geographic areas. Your organization controls where data is stored on Google Cloud via Cloud Healthcare API. Cloud Healthcare API services are integrated with Cloud Audit Logging, which allows your organization to track actions affecting your data. By default, administrative modifications to datasets, data stores, and IAM policies are logged. You can also enable audit logging of item creation, modification, and reads within each data store. Cloud Healthcare API is built using Google’s multi-layered security approach that leverages cutting-edge security capabilities, including data-loss prevention tools, precise policy controls, robust identity management, encryption, and many more.
Analytics and machine learning
Once data is brought to Google Cloud Platform, Cloud Healthcare API enables customers to integrate their data with powerful analytic tools such as BigQuery, visualization and machine learning tools such as Cloud Datalab, as well as third-party tools such as Tableau.
Customers can also use Cloud Healthcare API to connect their medical data to powerful machine learning solutions such as AutoML and Cloud ML Engine, which simplify custom machine learning model training. Once a model has been trained, customers can leverage Cloud Healthcare APIʼs DICOM and FHIR support to deploy the model into existing clinical workflows.
Cloud Healthcare API allows you to unlock the true value of your healthcare data by bringing it to advanced analytics and machine learning solutions such as BigQuery, Cloud AutoML, and Cloud ML Engine.
Cloud Healthcare API provides web-native, serverless scaling optimized by Google’s infrastructure. Simply activate the API and start sending requests — no initial capacity configuration required. Although some limits exist (e.g., Cloud Pub/Sub quotas), capacity can expand to match usage patterns.
Cloud Healthcare API integrates with Apigee, recognized by Gartner as a leader in full lifecycle API management, to deliver app and service ecosystems around your data.
Cloud Healthcare API organizes your healthcare information into datasets, with one or more modality-specific stores per dataset. Each store exposes both a REST and an RPC interface. You can use Cloud IAM to set fine-grained access policies.
Cloud Healthcare API supports bulk import and export of FHIR data and DICOM data, accelerating time-to-delivery for applications with dependencies on existing datasets while providing a convenient API for moving data between projects.
Cloud Healthcare API quickstart
Cloud Healthcare APIs and reference
Cloud Healthcare API documentation
I was trying to make an #easy-share-option for Android/D* a while back but discovered that at the time, the only part of the #API which was implemented was the #OAuth2 / #OpenID login.
I did start to implement parts of the API to achieve my goal (sloppily, and in PHP ;) ) but got sidetracked into trying to find out what the state of the D* API was at the time (ie nonexistent) so let it slide.
Maybe I'll chuck the code up on my #github at some point. #diaspora #client #native
New crate: oslobike v0.1.0
Woohoo! Just published my first crate. Very simple, and not quite finished, but at least it's there. If you for some reason want to know which city bike stations in Oslo have any bikes available, and want to find out using Rust – I've got you covered 😀
#programming #oslo #citybike #rust #api
A performance comparison between three different methods of deploying an API on AWS
Article word count: 1882
HN Discussion: https://news.ycombinator.com/item?id=19233466
Posted by abd12 (karma: 700)
Post stats: Points: 108 - Comments: 50 - 2019-02-23T14:38:32Z
#HackerNews #api #aws #comparison #containers #performance #serverless
In my last post, I showed how to connect AWS API Gateway directly to SNS using a service integration.
A few people asked me about the performance implications of this architecture.
Is it significantly faster than using a Lambda-based approach?
How does it compare to EC2 or ECS?
My answer: I don’t know! But I know how to find out (sort of).
In this post, we do a performance bake-off of three ways to deploy the same HTTP endpoint in AWS:
* Using an API Gateway service proxy
* With the new hotness, AWS Lambda
* With the old hotness, Docker containers on AWS Fargate
We’ll deploy our three services and throw 15,000 requests at each of them. Who will win?
If you’re impatient, skip ahead to the full results below.
Before we review the results, let’s set up the problem.
I wanted to keep our example as simple as possible so that the comparison is limited to the architecture itself rather than the application code. Further, I wanted an example that would work with the API Gateway service proxy so we could use it as a comparison as well.
I decided to set up a simple endpoint that receives an HTTP POST request and forwards the request payload into an AWS SNS topic.
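The shared application logic is tiny and can be sketched as a single handler. In the Lambda variant `sns_client` would be `boto3.client("sns")`; it is injected here so the sketch stays self-contained, and the topic ARN is a hypothetical example.

```python
import json

# Hypothetical topic ARN for illustration only.
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:demo-topic"

def handle_post(body: str, sns_client, topic_arn: str = TOPIC_ARN) -> dict:
    """Forward an HTTP POST payload to an SNS topic, then return a response.

    `sns_client` would be boto3.client("sns") in a real deployment; it is
    passed in so this sketch runs without AWS credentials.
    """
    sns_client.publish(TopicArn=topic_arn, Message=body)
    return {"statusCode": 200, "body": json.dumps({"status": "queued"})}

class FakeSNS:
    """Stand-in for the SNS client, recording published messages."""
    def __init__(self):
        self.published = []
    def publish(self, TopicArn, Message):
        self.published.append((TopicArn, Message))

fake = FakeSNS()
resp = handle_post('{"order": 42}', fake)
assert resp["statusCode"] == 200
```

All three architectures run some version of this logic; only the plumbing around it changes.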
Let’s take a look at the architecture and deployment methods for each of our three approaches.
Go Serverless with AWS Lambda
The first approach is to use AWS API Gateway and AWS Lambda. Our architecture will look like this:
SNS Publish with Lambda
A user will make an HTTP POST request to our endpoint, which will be handled by API Gateway. API Gateway will forward the request to our AWS Lambda function for processing. The Lambda function will send our request payload to the SNS topic before returning a response.
If you want to deploy this example, the code is available here. I use the Serverless Framework for deploying the architecture because I think it’s the easiest way to do it.*
*Full disclosure: I work for Serverless, Inc., creators of the Serverless Framework. Want to come work with me on awesome stuff? We’re hiring engineers. Please reach out if you have any interest.
Skipping the middleman with API Gateway service proxy
The second approach is similar to the first, but we remove Lambda from the equation. We use an API Gateway service proxy integration to publish directly to our SNS topic from API Gateway:
APIG Service Proxy
Before doing any testing, my hunch is that this will be faster than the previous method since we’re cutting out a network hop in the middle. Check below for full results. Note that API Gateway service proxies won’t work for all parts of your infrastructure, even if the performance is faster.
If you want additional details on how, when, and why to use this, check out my earlier post on using an API Gateway service proxy integration. It does a step-by-step walkthrough of setting up your first service proxy.
To deploy this example, there is a CloudFormation template here. This will let you quickly spin up the stack for testing.
Containerizing your workload with Docker and AWS Fargate
The final approach is to run our compute in Docker containers. There are a few different approaches for doing this on AWS, but I chose to use AWS Fargate.
The architecture will look as follows:
Fargate to SNS
Users will make HTTP POST requests to an HTTP endpoint, which will be handled by an Application Load Balancer (ALB). This ALB will forward requests to our Fargate container instances. The application on our Fargate container instances will forward the request payload to SNS.
With Fargate, you can run tasks or services. A task is a one-off container that will run until it dies or finishes execution. A service is a defined set of a certain number of instances of a task. Fargate will ensure the correct number of instances of your service are running.
We’ll use a service so that we can run a sufficient number of instances. Further, you can easily set up a load balancer for managing HTTP traffic across your service instances.
You can find code and instructions for deploying this architecture to Fargate here. I use the incredible fargate CLI tool, which makes it dead simple to go from Dockerfile to running container.
Now that we know our architecture, let’s jump into the bakeoff!
After I deployed all three of the architectures, I wanted to do testing in two phases.
First, I ran a small sample of 2000 requests to check the performance of new deploys. This was running at around 40 requests per second.
Then, I ran a larger test of 15000 requests to see how each architecture performed when they are warmed up. For this larger test, I was sending around 100 requests per second.
Let’s check the results in order.
When I ran my initial Fargate warmup, I got the following results:
[Screenshot: Locust output from the Fargate warmup, showing roughly 10% failed requests]
Around 10% of my requests were failing altogether!
When I dug in, it looked like I was overwhelming my container instances, causing them to die.
I’m not a Docker or Flask performance expert, and that’s not the goal of this exercise. To remedy this, I decided to bump the specs on my deployments.
The general goal for this bakeoff is to get a best-case outcome for each of these architectures, rather than an apples-to-apples comparison of cost vs performance.
For Fargate, this meant deploying 50 instances of my container with pretty beefy settings — 8 GB of memory and 4 full CPU units per container instance.
For the Lambda service, I set memory to the maximum of 3GB.
For APIG service proxy, there are no knobs to tune. 🎉
With that out of the way, let’s check the initial results.
Initial warmup results
For the first 2000 requests to each type of endpoint, the performance results are as follows:
[Chart: API performance results (warmup), latency percentiles for the three endpoints. Note: chart uses a log scale.]
The raw data (latency in milliseconds, by percentile) are:

Endpoint type        # requests   50%   66%   75%   80%   90%   95%   98%   99%   100%
APIG Service Proxy         2051    80    90   110   120   150   190   220   250    520
AWS Lambda                 2084    94   100   110   120   150   180   210   290   5100
Fargate                    2047    68    73    76    80   110   110   130   140    550
Takeaways from the warmup test
1. Fargate was consistently the fastest across all percentiles.
2. AWS Lambda had the longest tail on all of them. This is due to the cold start problem.
3. API Gateway service proxy outperformed AWS Lambda at the median, but performance in the upper-middle of the range (75% - 99%) was pretty similar between the two.
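The percentile columns in these tables can be reproduced from raw latency samples with a simple nearest-rank calculation. A sketch (the sample latencies below are made up for illustration):

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile: the smallest sample such that at least
    `pct` percent of the sorted samples are at or below it."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

# Hypothetical latency samples in milliseconds.
latencies_ms = [68, 70, 72, 75, 80, 95, 110, 130, 140, 550]
row = {p: percentile(latencies_ms, p) for p in (50, 90, 99, 100)}
print(row)  # {50: 80, 90: 140, 99: 550, 100: 550}
```

This is why a single slow outlier (like a Lambda cold start) dominates the 99th and 100th percentiles while barely moving the median.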
Now that we’ve done our warmup test, let’s check out the results from the full performance test.
Full performance test results
For the main part of the performance test, I ran 15,000 requests at each of the three architectures. I planned to use 500 ‘users’ in Locust to accomplish this, though, as noted below, I had to make some modifications for Fargate.
First, let’s check the results:
[Chart: API performance results (full test), latency percentiles for the three endpoints. Note: chart uses a log scale.]
The raw data (latency in milliseconds, by percentile) are:

Endpoint type        # requests   50%   66%   75%   80%   90%   95%   98%   99%   100%
APIG Service Proxy        15185    73    79    84    90   130   180   250   290    670
AWS Lambda                15249    86    92    98   110   140   160   180   220    920
Fargate                   15057    69    72    75    77    91   110   130   170    800
Takeaways from the full performance test
1. Fargate was still the fastest across the board, though the gap narrowed. API Gateway service proxy was nearly as fast as Fargate at the median, and AWS Lambda wasn’t far behind.
2. The real differences show up between the 80th and 99th percentile. Fargate had a lot more consistent performance as it moved up the percentiles. The 98th percentile request for Fargate is less than double the median (130ms vs 69ms, respectively). In contrast, the 98th percentile for API Gateway service proxy was more than triple the median (250ms vs 73ms, respectively).
3. AWS Lambda outperformed the API Gateway service proxy at some higher percentiles. Between the 95th and 99th percentiles, AWS Lambda was actually faster than the API Gateway service proxy. This was surprising to me.
I mentioned above that I wanted to use 500 Locust ‘users’ when testing the application. Both AWS Lambda and API Gateway service proxy handled 15000+ requests without a single error.
With Fargate, I consistently had failed requests:
[Screenshot: Locust output showing repeated failed requests against the Fargate endpoint]
I finally throttled it down to 200 Locust users when testing Fargate, which got my error rate down to around 3% of overall requests. Still, that was infinitely higher than the zero errors I saw with AWS Lambda.
I'm not saying you can't deploy a Fargate service without tolerating a certain percentage of failures — you can. Rather, performance-tuning Docker containers would have taken more time than I wanted to spend on a quick performance test.
UPDATED NOTES ON FARGATE ERRORS
I’ve gotten some pushback saying that the test is worthless due to the Fargate errors, or that I was way over-provisioned on Fargate.
A few notes on that:
First, Nathan Peck, an awesome and helpful container advocate at AWS, reached out to say the failures were likely around some system settings like the ‘nofile’ ulimit.
That sounds pretty reasonable to me, but I haven’t taken the time to test it out. I don’t have huge interest in digging deep into container performance tuning for this. If that’s something you’re into, let me know and I’ll link to your results if they’re interesting!
The key points on Fargate are:
1. You can get much lower failure rates than I got. You’ll just need to tune it.
2. I didn’t use 50 instances with a ton of CPU and memory because I thought Fargate needed it. I used it because I didn’t want to think about resource exhaustion at all (even though I did end up hitting the open file limits). I was going for a best-case scenario — if the load balancer, container, and SNS are all humming, what kind of latency can we get?
3. I don’t think this invalidates the general results of what a basic ‘optimistic-case’ could look like with Fargate within these general constraints (multiple instances + Python + calling SNS).
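If the 'nofile' ulimit was indeed the culprit, it can be raised per container in the ECS/Fargate task definition. A sketch of the relevant fragment (container name and limit values are hypothetical, not what this test used):

```python
# Sketch of the relevant fragment of an ECS/Fargate container definition,
# raising the open-file ulimit that a loaded task might otherwise exhaust.
# The container name and limit values are hypothetical.
container_definition = {
    "name": "sns-forwarder",
    "ulimits": [
        {"name": "nofile", "softLimit": 65536, "hardLimit": 65536}
    ],
}
assert container_definition["ulimits"][0]["name"] == "nofile"
```

In a real task definition this dict would sit inside the `containerDefinitions` list, serialized as JSON.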
If you’re making a million dollar decision on this, you should run your own tests.
If you want a quick, fun read, these results should be directionally correct.
This was a fun and enlightening experience for me, and I hope it was helpful for you. There’s not a clear right answer on which architecture you should use based on these performance results.
Here’s how I think about it:
* Do you need high performance? Using dedicated instances with Fargate (or ECS/EKS/EC2) is your best bet. This will require more setup and infrastructure management, but that may be necessary for your use case.
* Is your business logic limited? If so, use API Gateway service proxy. It's a performant, low-maintenance way to stand up endpoints and forward data to another AWS service.
* In the vast majority of other situations, use AWS Lambda. Lambda is dead simple to deploy (if you're using a deployment tool). It's reliable and scalable. You don't have to worry about tuning a bunch of knobs to get solid performance. And it's code, so you can do anything you want. I use it for almost everything.
Google today announced the general availability of a new API for Google Docs that will allow developers to automate many of the tasks that users typically do manually in the company’s online office…
Article word count: 245
HN Discussion: https://news.ycombinator.com/item?id=19136075
Posted by Manu1987 (karma: 416)
Post stats: Points: 165 - Comments: 44 - 2019-02-11T17:15:23Z
#HackerNews #api #automation #docs #for #gets #google #task
Google today announced the general availability of a new API for Google Docs that will allow developers to automate many of the tasks that users typically do manually in the company’s online office suite. The API has been in developer preview since last April’s Google Cloud Next 2018 and is now available to all developers.
As Google notes, the REST API was designed to help developers build workflow automation services for their users, build content management services and create documents in bulk. Using the API, developers can also set up processes that manipulate documents after the fact to update them, and the API also features the ability to insert, delete, move, merge and format text, insert inline images and work with lists, among other things.
The canonical use case here is invoicing, where you need to regularly create similar documents with ever-changing order numbers and line items based on information from third-party systems (or maybe even just a Google Sheet). Google also notes that the API’s import/export abilities allow you to use Docs for internal content management systems.
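The invoicing use case maps naturally onto the Docs API's `documents.batchUpdate` method with `replaceAllText` requests: copy a template document, then substitute its placeholders. A sketch of building such a request body (placeholder names and values are hypothetical, and actually sending it requires an authenticated API client):

```python
# Sketch: a documents.batchUpdate request body that fills in template
# placeholders like {{order_number}}. Field names and values below are
# hypothetical; sending the request needs an authenticated Docs client.
def build_replace_requests(substitutions: dict) -> dict:
    """Build a batchUpdate body with one replaceAllText request per field."""
    return {
        "requests": [
            {
                "replaceAllText": {
                    "containsText": {"text": f"{{{{{field}}}}}", "matchCase": True},
                    "replaceText": value,
                }
            }
            for field, value in substitutions.items()
        ]
    }

body = build_replace_requests({"order_number": "INV-1001", "customer": "Acme Co."})
assert body["requests"][0]["replaceAllText"]["replaceText"] == "INV-1001"
```

The same request list could also carry `insertText` or formatting requests, since batchUpdate applies all of them atomically to one document.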
Some of the companies that built solutions based on the new API during the preview period include Zapier, Netflix, Mailchimp and Final Draft. Zapier integrated the Docs API into its own workflow automation tool to help its users create offer letters based on a template, for example, while Netflix used it to build an internal tool that helps its engineers gather data and automate its documentation workflow.
https://friendica.stefan-muenz.de/api/account/verify_credentials?skip_status=true NO RESPONSE Internal Server Error
I suppose that I have to install something additional or configure something to get this to work. Any ideas or tips?
#friendiqa #mobile #api
Libcamera could replace the aging V4L2 API for using cameras and TV cards under Linux and simplify many things. Libcamera aims to make integrating cameras under Linux easier.
#API #EmbeddedLinuxConferenceEurope2018 #Kamera #Libcamera #Linux #V4L2
Twidere for Friendica
Because the question is asked every now and then... Yes, there are apps for Friendica.
From "about Features" on friendi.ca:
Basic Twitter/GNU Social API provides easy access from a growing number of mobile and third-party applications (Twidere, AndStatus, Bitlbee, Choqok, Frentcl, Gwibber, Hotot, IdentiCurse, Pidgin/Purple, Mustard, Pino, TTYtter, and more).
There is also a native app for Android called Friendiqa.
Clients for Android, SailfishOS and Windows
I prefer and personally use the Twitter/GnuSocial/Mastodon App Twidere on Android mostly for getting notifications about interactions and sharing posts and images while using a mobile device.
Twidere has the ability to set up multiple user accounts and also has a nice automatic day/night mode.
Twidere light/day theme
Twidere dark/night theme
And here is a nice blog post about how to set up Twidere for Friendica:
@Libranet Support #friendica #app #api #android #twidere
A lot of really intelligent ideas and thoughts so far have ignored overt and blatant vulnerabilities in whatever their stack may be (e.g. chipset issues such as Intel's ME) and/or seem overly stuck in a centralized-model past.
Enterprise is on the verge of eating itself apparently..... stumbling giants with lots of brains, lots of talk, and next to no idea about what to do.
#ITSec #NetSec #Security #Enterprise #Hacking #API #Networking #WTF