Provides methods for creating and configuring a data layer, giving applications
the ability to invoke data operations for various endpoints.
```js
// esm
import { data } from '@paychex/core';

// cjs
const { data } = require('@paychex/core');

// iife
const { data } = window['@paychex/core'];

// amd
require(['@paychex/core'], function({ data }) { ... });
define(['@paychex/core'], function({ data }) { ... });
```
Basic Concepts
A proxy is a runtime set of rules that will be applied (in order)
to transform Requests prior to sending them to an Adapter.
You can configure the proxy rules at any time.
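For example, a proxy rule can re-route Requests that match certain criteria. A minimal sketch, assuming `data.createProxy()` and a rule object with `protocol`, `host`, and `match` properties (the specific rule shown is illustrative):

```js
import { data } from '@paychex/core';

const proxy = data.createProxy();

// send any Request whose base is 'reports' to the
// reporting server; rules are applied in order, so
// later rules can override values set by earlier ones
proxy.use({
  protocol: 'https',
  host: 'reports.myserver.com',
  match: {
    base: 'reports',
  },
});
```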
An adapter converts a Request into a Promise resolved with
a Response. It should never throw an Error; instead, if a failure occurs, it
should set the appropriate properties on the Response. The following adapter repositories
can be used:
| adapter | description |
| :--- | :--- |
| @paychex/adapter-xhr | Uses `XMLHttpRequest` to fulfill a data operation. Works in web browsers. |
| @paychex/adapter-node | Uses `https` to fulfill a data operation. Works in NodeJS. |
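You can also supply your own adapter. A minimal sketch of one; the Response properties shown (`status`, `statusText`, `data`, `meta.error`) are illustrative and may differ from the actual Response contract:

```js
// a hand-rolled adapter: it performs the network call and
// never throws; failures are reported on the Response instead
async function customAdapter(request) {
  try {
    const res = await window.fetch(request.url, {
      method: request.method,
      body: request.body,
    });
    return {
      status: res.status,
      statusText: res.statusText,
      data: await res.json(),
      meta: { error: !res.ok },
    };
  } catch (e) {
    // even a network failure resolves the Promise,
    // with the error recorded on the Response
    return {
      status: 0,
      statusText: e.message,
      data: null,
      meta: { error: true },
    };
  }
}
```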
A data pipeline is a sequence of steps whose job is to retrieve data. For that
reason, even the simplest data pipeline requires these 3 steps:

1. convert a DataDefinition object into a Request
2. pass that Request to the appropriate Adapter, which will perform an operation (typically a network call) and return a Response
3. transform the Response into the desired format
The data module contains a factory method that creates a
DataLayer. The DataLayer can perform all 3 steps above.
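Putting the steps together, a minimal sketch (the exact `createDataLayer` signature and the DataDefinition properties shown are assumptions for illustration):

```js
import { data } from '@paychex/core';
import xhr from '@paychex/adapter-xhr';

const proxy = data.createProxy();
const { createRequest, fetch } = data.createDataLayer(proxy, xhr);

// a DataDefinition describing the operation to invoke
const operation = {
  method: 'GET',
  base: 'reports',
  path: '/reports/:id',
};

// step 1: convert the DataDefinition into a Request
const request = createRequest(operation, { id: 123 });

// step 2: pass the Request to the Adapter, which returns a Response
const response = await fetch(request);

// step 3: transform the Response into the desired format
const report = response.data;
```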
However, data pipelines usually are more complex and require additional business
logic to be applied correctly. Some additional logic you may want to apply to your
pipelines includes:

- caching responses
- retrying on certain failures
- reauthenticating if a 401 is returned
These features and more are available through wrapper functions in the
utils module.
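For example, caching can be bolted onto a pipeline by wrapping your fetch function. A sketch, assuming the `data.utils.withCache` wrapper and the `stores` module's `memoryStore` and `asDataCache` helpers:

```js
import { data, stores } from '@paychex/core';

// cache successful Responses in memory so repeated
// fetches of the same Request can skip the network
const cache = stores.utils.asDataCache(stores.memoryStore());
const cachedFetch = data.utils.withCache(fetch, cache);
```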
Combine functions from both modules to create a generic data pipeline that meets your
most common needs. Consumers of your pipeline can bolt on additional wrapping functions
to meet their unique requirements.
```js
import { errors } from '@paychex/core';

// we extend the functionality of `fetch` by wrapping
// it and returning a function with the same signature
// that proxies to the real fetch while adding custom
// error handling logic
function withCustomErrors(fetch) {
  return async function useCustomErrors(request) {
    return await fetch(request)
      .catch(errors.rethrow({ app: 'my app' }));
  };
}
```
```js
// automatically retry failed requests using
// an exponential falloff between attempts;
// the version of fetch we bring in has already
// been wrapped to add custom error data and
// request headers; this shows how we can combine
// cross-cutting logic with custom one-off logic
const load = data.utils.withRetry(fetch, data.utils.falloff());
```
Map of strings representing either Request headers or Response meta headers. The
header name is the key and the header data is the value. If you pass an array of
strings as the value, the strings will be combined and separated by commas.
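For example (header names and values are illustrative):

```js
const headers = {
  'content-type': 'application/json',
  // an array value is sent as "no-cache, no-store"
  'cache-control': ['no-cache', 'no-store'],
};
```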