enforced new prettier config options

This commit is contained in:
Jordan Violet
2022-11-27 23:46:05 -05:00
parent c8ffc21374
commit 34cde35638
157 changed files with 2298 additions and 4940 deletions


@@ -1,9 +1,9 @@
 ---
 name: Bug Report
 about: Create a report to help us improve.
-title: "[Bug] Your Bug Report Here"
-labels: ""
-assignees: ""
+title: '[Bug] Your Bug Report Here'
+labels: ''
+assignees: ''
 ---

 **Describe the bug** A clear and concise description of what the bug is.
@@ -15,8 +15,7 @@ assignees: ""
 3. Scroll down to '....'
 4. See error

-**Expected behavior** A clear and concise description of what you expected to
-happen.
+**Expected behavior** A clear and concise description of what you expected to happen.

 **Actual behavior** A clear and concise description of what actually happens.


@@ -1,19 +1,15 @@
 ---
 name: Feature Request
 about: Suggest an idea for this project.
-title: "[Feature] Your Feature Request Here"
-labels: ""
-assignees: ""
+title: '[Feature] Your Feature Request Here'
+labels: ''
+assignees: ''
 ---

-**Is your feature request related to a problem? Please describe.**
-A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
+**Is your feature request related to a problem? Please describe.** A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

-**Describe the solution you'd like.**
-A clear and concise description of what you want to happen. Ex. It would be nice if [...]
+**Describe the solution you'd like.** A clear and concise description of what you want to happen. Ex. It would be nice if [...]

-**Describe alternatives you've considered.**
-A clear and concise description of any alternative solutions or features you've considered. Ex. I have seen similar features on [...]
+**Describe alternatives you've considered.** A clear and concise description of any alternative solutions or features you've considered. Ex. I have seen similar features on [...]

-**Additional context**
-Add any other context or screenshots about the feature request here.
+**Additional context** Add any other context or screenshots about the feature request here.

.github/bot.yml vendored

@@ -92,10 +92,13 @@ addReviewerBasedOnLabel:
firstPRWelcomeComment: >
🎉 Thanks for opening this pull request! Please be sure to check out our contributing guidelines. 🙌
# Comment to be posted to congratulate user on their first merged PR
firstPRMergeComment: >
🎉 Awesome work, congrats on your first merged pull request! 🙌
# Comment to be posted to on first time issues
firstIssueWelcomeComment: >
🎉 Thanks for opening your first issue here! Be sure to follow the issue template, and welcome to the community! 🙌


@@ -3,10 +3,10 @@ name: Build/Deploy to GitHub Pages
 on:
   # Runs on pushes targeting the default branch
   push:
-    branches: ["main"]
+    branches: ['main']
     paths-ignore:
-      - "README.md"
-      - ".github/**"
+      - 'README.md'
+      - '.github/**'

   # Allows you to run this workflow manually from the Actions tab
   workflow_dispatch:
@@ -19,11 +19,11 @@ permissions:
 # Allow one concurrent deployment
 concurrency:
-  group: "pages"
+  group: 'pages'
   cancel-in-progress: true

 env:
-  BASE_URL: "/"
+  BASE_URL: '/'
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
@@ -37,7 +37,7 @@ jobs:
       - name: Set up Node
         uses: actions/setup-node@v3
         with:
-          node-version: "16"
+          node-version: '16'

       # Install and build Developer Community site
       - name: Build Developer Community site
         run: |


@@ -2,18 +2,18 @@
 # https://github.com/firebase/firebase-tools
 name: Deploy to Firebase Hosting on PR
-"on": pull_request
+'on': pull_request

 jobs:
   build_and_preview:
-    if: "${{ github.event.pull_request.head.repo.full_name == github.repository }}"
+    if: '${{ github.event.pull_request.head.repo.full_name == github.repository }}'
     runs-on: ubuntu-latest
     env:
-      NODE_ENV: "development"
+      NODE_ENV: 'development'
     steps:
       - uses: actions/checkout@v2
       - run: npm ci && npm run gen-api-docs-all && npm run build
       - uses: FirebaseExtended/action-hosting-deploy@v0
         with:
-          repoToken: "${{ secrets.GITHUB_TOKEN }}"
-          firebaseServiceAccount: "${{ secrets.FIREBASE_SERVICE_ACCOUNT_DEVELOPER_COMMUNITY_SITE }}"
+          repoToken: '${{ secrets.GITHUB_TOKEN }}'
+          firebaseServiceAccount: '${{ secrets.FIREBASE_SERVICE_ACCOUNT_DEVELOPER_COMMUNITY_SITE }}'
           projectId: developer-community-site


@@ -1,9 +1,9 @@
 {
-  "arrowParens": "always",
-  "bracketSpacing": false,
-  "bracketSameLine": true,
-  "printWidth": 80,
-  "proseWrap": "never",
-  "singleQuote": true,
-  "trailingComma": "all"
-}
+  "arrowParens": "always",
+  "bracketSpacing": false,
+  "bracketSameLine": true,
+  "printWidth": 80,
+  "proseWrap": "never",
+  "singleQuote": true,
+  "trailingComma": "all"
+}
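The `singleQuote: true` option above is what drives most of the churn in this commit. As a toy illustration only (this is not Prettier's implementation, just a sketch of the rule's visible effect), the transformation rewrites double-quoted string literals that need no escaping into single-quoted ones:

```javascript
// Toy sketch of the singleQuote effect (NOT Prettier itself):
// rewrite double-quoted literals containing no quotes or backslashes
// into single-quoted literals.
function toSingleQuotes(source) {
  return source.replace(/"([^"'\\]*)"/g, "'$1'");
}

console.log(toSingleQuotes('const name = "SailPoint";'));
// → const name = 'SailPoint';
```

Prettier itself also handles escaping and picks whichever quote style needs fewer escapes; this sketch only covers the common case seen throughout the diff.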


@@ -2,127 +2,78 @@
 ## Our Pledge

-We as members, contributors, and leaders pledge to make participation in our
-community a harassment-free experience for everyone, regardless of age, body
-size, visible or invisible disability, ethnicity, sex characteristics, gender
-identity and expression, level of experience, education, socio-economic status,
-nationality, personal appearance, race, religion, or sexual identity
-and orientation.
+We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.

-We pledge to act and interact in ways that contribute to an open, welcoming,
-diverse, inclusive, and healthy community.
+We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community.

 ## Our Standards

-Examples of behavior that contributes to a positive environment for our
-community include:
+Examples of behavior that contributes to a positive environment for our community include:

 - Demonstrating empathy and kindness toward other people
 - Being respectful of differing opinions, viewpoints, and experiences
 - Giving and gracefully accepting constructive feedback
-- Accepting responsibility and apologizing to those affected by our mistakes,
-  and learning from the experience
-- Focusing on what is best not just for us as individuals, but for the
-  overall community
+- Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience
+- Focusing on what is best not just for us as individuals, but for the overall community

 Examples of unacceptable behavior include:

-- The use of sexualized language or imagery, and sexual attention or
-  advances of any kind
+- The use of sexualized language or imagery, and sexual attention or advances of any kind
 - Trolling, insulting or derogatory comments, and personal or political attacks
 - Public or private harassment
-- Publishing others' private information, such as a physical or email
-  address, without their explicit permission
-- Other conduct which could reasonably be considered inappropriate in a
-  professional setting
+- Publishing others' private information, such as a physical or email address, without their explicit permission
+- Other conduct which could reasonably be considered inappropriate in a professional setting

 ## Enforcement Responsibilities

-Community leaders are responsible for clarifying and enforcing our standards of
-acceptable behavior and will take appropriate and fair corrective action in
-response to any behavior that they deem inappropriate, threatening, offensive,
-or harmful.
+Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful.

-Community leaders have the right and responsibility to remove, edit, or reject
-comments, commits, code, wiki edits, issues, and other contributions that are
-not aligned to this Code of Conduct, and will communicate reasons for moderation
-decisions when appropriate.
+Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate.

 ## Scope

-This Code of Conduct applies within all community spaces, and also applies when
-an individual is officially representing the community in public spaces.
-Examples of representing our community include using an official e-mail address,
-posting via an official social media account, or acting as an appointed
-representative at an online or offline event.
+This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event.

 ## Enforcement

-Instances of abusive, harassing, or otherwise unacceptable behavior may be
-reported to the community leaders responsible for enforcement at
-https://developer.sailpoint.com/discuss/c/feedback/.
-All complaints will be reviewed and investigated promptly and fairly.
+Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement at https://developer.sailpoint.com/discuss/c/feedback/. All complaints will be reviewed and investigated promptly and fairly.

-All community leaders are obligated to respect the privacy and security of the
-reporter of any incident.
+All community leaders are obligated to respect the privacy and security of the reporter of any incident.

 ## Enforcement Guidelines

-Community leaders will follow these Community Impact Guidelines in determining
-the consequences for any action they deem in violation of this Code of Conduct:
+Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct:

 ### 1. Correction

-**Community Impact**: Use of inappropriate language or other behavior deemed
-unprofessional or unwelcome in the community.
+**Community Impact**: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community.

-**Consequence**: A private, written warning from community leaders, providing
-clarity around the nature of the violation and an explanation of why the
-behavior was inappropriate. A public apology may be requested.
+**Consequence**: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested.

 ### 2. Warning

-**Community Impact**: A violation through a single incident or series
-of actions.
+**Community Impact**: A violation through a single incident or series of actions.

-**Consequence**: A warning with consequences for continued behavior. No
-interaction with the people involved, including unsolicited interaction with
-those enforcing the Code of Conduct, for a specified period of time. This
-includes avoiding interactions in community spaces as well as external channels
-like social media. Violating these terms may lead to a temporary or
-permanent ban.
+**Consequence**: A warning with consequences for continued behavior. No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban.

 ### 3. Temporary Ban

-**Community Impact**: A serious violation of community standards, including
-sustained inappropriate behavior.
+**Community Impact**: A serious violation of community standards, including sustained inappropriate behavior.

-**Consequence**: A temporary ban from any sort of interaction or public
-communication with the community for a specified period of time. No public or
-private interaction with the people involved, including unsolicited interaction
-with those enforcing the Code of Conduct, is allowed during this period.
-Violating these terms may lead to a permanent ban.
+**Consequence**: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban.

 ### 4. Permanent Ban

-**Community Impact**: Demonstrating a pattern of violation of community
-standards, including sustained inappropriate behavior, harassment of an
-individual, or aggression toward or disparagement of classes of individuals.
+**Community Impact**: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals.

-**Consequence**: A permanent ban from any sort of public interaction within
-the community.
+**Consequence**: A permanent ban from any sort of public interaction within the community.

 ## Attribution

-This Code of Conduct is adapted from the [Contributor Covenant][homepage],
-version 2.0, available at
-https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
+This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 2.0, available at https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.

-Community Impact Guidelines were inspired by [Mozilla's code of conduct
-enforcement ladder](https://github.com/mozilla/diversity).
+Community Impact Guidelines were inspired by [Mozilla's code of conduct enforcement ladder](https://github.com/mozilla/diversity).

 [homepage]: https://www.contributor-covenant.org

-For answers to common questions about this code of conduct, see the FAQ at
-https://www.contributor-covenant.org/faq. Translations are available at
-https://www.contributor-covenant.org/translations.
+For answers to common questions about this code of conduct, see the FAQ at https://www.contributor-covenant.org/faq. Translations are available at https://www.contributor-covenant.org/translations.


@@ -1,7 +1,6 @@
 # Contributing to developer.sailpoint.com

-We love your input! We want to make contributing to this project as easy and
-transparent as possible. Look below if you would like to:
+We love your input! We want to make contributing to this project as easy and transparent as possible. Look below if you would like to:

 - [Report an issue](#reporting-issues)
 - [Make a feature request](#making-feature-requests)
@@ -13,14 +12,11 @@ transparent as possible. Look below if you would like to:
 ## We Develop with GitHub

-We use GitHub to host code, track issues and feature requests, and
-accept pull requests.
+We use GitHub to host code, track issues and feature requests, and accept pull requests.

 ## We Use GitHub Flow

-Pull requests are the best way to propose changes to the codebase, and
-[Github Flow](https://docs.github.com/en/get-started/quickstart/github-flow) is our preferred method of accepting pull requests.
-The basics of GitHub flow are as follows:
+Pull requests are the best way to propose changes to the codebase, and [Github Flow](https://docs.github.com/en/get-started/quickstart/github-flow) is our preferred method of accepting pull requests. The basics of GitHub flow are as follows:

 1. Fork the repo and create your branch from `main`.
 2. Make your changes.
@@ -28,9 +24,7 @@ The basics of GitHub flow are as follows:
 ## We Use the MIT Software License

-In short, when you submit code changes, your submissions are understood to be
-under the same [MIT License](http://choosealicense.com/licenses/mit/) that
-covers the project.
+In short, when you submit code changes, your submissions are understood to be under the same [MIT License](http://choosealicense.com/licenses/mit/) that covers the project.

 # Reporting Issues
@@ -47,8 +41,7 @@ Our maintainers _love_ thorough bug reports. **Great bug reports** tend to have:
 - Screenshots!
 - Operating System
 - Browser
-- Notes (possibly including why you think this might be happening, or stuff you
-  tried that didn't work)
+- Notes (possibly including why you think this might be happening, or stuff you tried that didn't work)

 # Making Feature Requests
@@ -76,19 +69,13 @@ Looking to add a new feature yourself? Great! Here are the steps to contribute a
 - Fork the repository, copy the main branch only
 - Pull down the code, build, and ensure it's running properly
-- Create a new branch from main with the naming convention
-  `feature/your-feature-name`
+- Create a new branch from main with the naming convention `feature/your-feature-name`
 - Create a pull request from your branch to our origin repository's main branch!

 # Discussing General Issues or Questions

-If none of the above options work for you, you can submit a general issue using GitHub's
-[issues](https://github.com/sailpoint-oss/developer.sailpoint.com/issues). You
-can also head over to the
-[Developer Community forum](https://developer.sailpoint.com/discuss) to discuss
-with us directly on the forum about what you're thinking!
+If none of the above options work for you, you can submit a general issue using GitHub's [issues](https://github.com/sailpoint-oss/developer.sailpoint.com/issues). You can also head over to the [Developer Community forum](https://developer.sailpoint.com/discuss) to discuss with us directly on the forum about what you're thinking!

 # License

-By contributing, you agree that your contributions will be licensed under the
-MIT License.
+By contributing, you agree that your contributions will be licensed under the MIT License.


@@ -1,8 +1,6 @@
 <a id="readme-top"></a>

-[![Discourse Topics][discourse-shield]][discourse-url] ![Issues][issues-shield]
-![Latest Releases][release-shield] ![Contributor Shield][contributor-shield]
-[![Deploy to Production](https://github.com/sailpoint-oss/developer.sailpoint.com/actions/workflows/build-and-deploy-prod-gh-pages.yml/badge.svg)](https://github.com/sailpoint-oss/developer.sailpoint.com/actions/workflows/build-and-deploy-prod-gh-pages.yml)
+[![Discourse Topics][discourse-shield]][discourse-url] ![Issues][issues-shield] ![Latest Releases][release-shield] ![Contributor Shield][contributor-shield] [![Deploy to Production](https://github.com/sailpoint-oss/developer.sailpoint.com/actions/workflows/build-and-deploy-prod-gh-pages.yml/badge.svg)](https://github.com/sailpoint-oss/developer.sailpoint.com/actions/workflows/build-and-deploy-prod-gh-pages.yml)

 [discourse-shield]: https://img.shields.io/discourse/topics?label=Discuss%20This%20Tool&server=https%3A%2F%2Fdeveloper.sailpoint.com%2Fdiscuss
 [discourse-url]: https://developer.sailpoint.com/discuss/
@@ -26,20 +24,11 @@
 ## About The Project

-This repository contains the complete build, with assets, for everything seen on
-developer.sailpoint.com. This includes the homepage, all static elements,
-_documentation_, API specifications, et. al. The API specifications come in from
-a GitHub Action in another repository, but ultimately the API specifications
-used to generate this static site are those found in the `static` folder.
+This repository contains the complete build, with assets, for everything seen on developer.sailpoint.com. This includes the homepage, all static elements, _documentation_, API specifications, et. al. The API specifications come in from a GitHub Action in another repository, but ultimately the API specifications used to generate this static site are those found in the `static` folder.

-Please use GitHub
-[issues](https://github.com/sailpoint-oss/developer.sailpoint.com/issues) to
-[submit bugs](https://github.com/sailpoint-oss/developer.sailpoint.com/issues/new?assignees=&labels=&template=bug-report.md&title=%5BBug%5D+Your+Bug+Report+Here)
-or make
-[feature requests](https://github.com/sailpoint-oss/developer.sailpoint.com/issues/new?assignees=&labels=&template=feature-request.md&title=%5BFeature%5D+Your+Feature+Request+Here).
+Please use GitHub [issues](https://github.com/sailpoint-oss/developer.sailpoint.com/issues) to [submit bugs](https://github.com/sailpoint-oss/developer.sailpoint.com/issues/new?assignees=&labels=&template=bug-report.md&title=%5BBug%5D+Your+Bug+Report+Here) or make [feature requests](https://github.com/sailpoint-oss/developer.sailpoint.com/issues/new?assignees=&labels=&template=feature-request.md&title=%5BFeature%5D+Your+Feature+Request+Here).

-If you'd like to contribute directly (which we encourage!) please read the
-contribution guidelines below, first!
+If you'd like to contribute directly (which we encourage!) please read the contribution guidelines below, first!

 <!-- GETTING STARTED -->
@@ -69,8 +58,7 @@ npm install npm@latest -g
 npm install
 ```

-3. Generate the API docs. They are auto-generated, so we do not track them in
-   the repository and instead build them at runtime.
+3. Generate the API docs. They are auto-generated, so we do not track them in the repository and instead build them at runtime.

 ```bash
 npm run gen-api-docs-all
@@ -83,31 +71,24 @@ npm install npm@latest -g
 ## Discuss

-[Click Here](https://developer.sailpoint.com/discuss) to discuss this tool with
-other users.
+[Click Here](https://developer.sailpoint.com/discuss) to discuss this tool with other users.

 <!-- LICENSE -->

 ## License

-Distributed under the MIT License. See [the license](./LICENSE) for more
-information.
+Distributed under the MIT License. See [the license](./LICENSE) for more information.

 <!-- CONTRIBUTING -->

 ## Contributing

-Before you contribute you
-[must sign our CLA](https://cla-assistant.io/sailpoint-oss/developer.sailpoint.com).
-Please also read our [contribution guidelines](./CONTRIBUTING.md) for all the
-details on contributing.
+Before you contribute you [must sign our CLA](https://cla-assistant.io/sailpoint-oss/developer.sailpoint.com). Please also read our [contribution guidelines](./CONTRIBUTING.md) for all the details on contributing.

 <!-- CODE OF CONDUCT -->

 ## Code of Conduct

-We pledge to act and interact in ways that contribute to an open, welcoming,
-diverse, inclusive, and healthy community. Read our
-[code of conduct](./CODE_OF_CONDUCT.md) to learn more.
+We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community. Read our [code of conduct](./CODE_OF_CONDUCT.md) to learn more.

 <p align="right">(<a href="#readme-top">back to top</a>)</p>


@@ -1,3 +1,3 @@
 module.exports = {
-  presets: [require.resolve("@docusaurus/core/lib/babel/preset")],
+  presets: [require.resolve('@docusaurus/core/lib/babel/preset')],
 };


@@ -1,47 +1,47 @@
 // @ts-check
 // Note: type annotations allow type checking and IDEs autocompletion

-const lightCodeTheme = require("prism-react-renderer/themes/github");
-const darkCodeTheme = require("prism-react-renderer/themes/dracula");
+const lightCodeTheme = require('prism-react-renderer/themes/github');
+const darkCodeTheme = require('prism-react-renderer/themes/dracula');

-const footer = require("./footer");
-const navbar = require("./navbar");
-const plugins = require("./plugins");
-const baseUrl = "/";
+const footer = require('./footer');
+const navbar = require('./navbar');
+const plugins = require('./plugins');
+const baseUrl = '/';

 /** @type {import('@docusaurus/types').Config} */
 const config = {
-  title: "SailPoint Developer Community",
+  title: 'SailPoint Developer Community',
   tagline:
-    "The SailPoint Developer Community has everything you need to build, extend, and automate scalable identity solutions.",
-  url: "https://developer.sailpoint.com",
+    'The SailPoint Developer Community has everything you need to build, extend, and automate scalable identity solutions.',
+  url: 'https://developer.sailpoint.com',
   baseUrl,
-  favicon: "img/SailPoint-Logo-Icon.ico",
-  onBrokenLinks: "throw",
-  onBrokenMarkdownLinks: "throw",
-  onDuplicateRoutes: "throw",
+  favicon: 'img/SailPoint-Logo-Icon.ico',
+  onBrokenLinks: 'throw',
+  onBrokenMarkdownLinks: 'throw',
+  onDuplicateRoutes: 'throw',
   i18n: {
-    defaultLocale: "en",
-    locales: ["en"],
+    defaultLocale: 'en',
+    locales: ['en'],
   },
   presets: [
     [
-      "classic",
+      'classic',
       /** @type {import('@docusaurus/preset-classic').Options} */
       ({
         docs: {
           editUrl:
-            "https://github.com/sailpoint-oss/developer-community-site/edit/main/",
+            'https://github.com/sailpoint-oss/developer-community-site/edit/main/',
           showLastUpdateAuthor: true,
           showLastUpdateTime: true,
           sidebarCollapsible: true,
-          sidebarPath: require.resolve("./sidebars.js"),
-          docLayoutComponent: "@theme/DocPage",
-          docItemComponent: "@theme/ApiItem", // Derived from docusaurus-theme-openapi
+          sidebarPath: require.resolve('./sidebars.js'),
+          docLayoutComponent: '@theme/DocPage',
+          docItemComponent: '@theme/ApiItem', // Derived from docusaurus-theme-openapi
         },
         theme: {
-          customCss: require.resolve("./src/css/custom.css"),
+          customCss: require.resolve('./src/css/custom.css'),
         },
       }),
     ],
@@ -51,11 +51,11 @@ const config = {
     /** @type {import('@docusaurus/preset-classic').ThemeConfig} */
     ({
       algolia: {
-        appId: "TB01H1DFAM",
-        apiKey: "726952a7a9389c484b6c96808a3e0010",
-        indexName: "prod_DEVELOPER_SAILPOINT_COM",
+        appId: 'TB01H1DFAM',
+        apiKey: '726952a7a9389c484b6c96808a3e0010',
+        indexName: 'prod_DEVELOPER_SAILPOINT_COM',
         searchPagePath: false,
-        placeholder: "Search the Developer Community",
+        placeholder: 'Search the Developer Community',
       },
       docs: {
         sidebar: {
@@ -64,7 +64,7 @@ const config = {
         },
       },
       colorMode: {
-        defaultMode: "light",
+        defaultMode: 'light',
         respectPrefersColorScheme: true,
       },
       navbar: navbar,
@@ -72,7 +72,7 @@ const config = {
       prism: {
         theme: lightCodeTheme,
         darkTheme: darkCodeTheme,
-        additionalLanguages: ["http", "java", "ruby", "php", "csharp"],
+        additionalLanguages: ['http', 'java', 'ruby', 'php', 'csharp'],
       },
     }),
@@ -82,7 +82,7 @@ const config = {
     mermaid: true,
   },
-  themes: ["docusaurus-theme-openapi-docs", "@docusaurus/theme-mermaid"],
+  themes: ['docusaurus-theme-openapi-docs', '@docusaurus/theme-mermaid'],
 };

 module.exports = config;
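The sweeping quote swap across docusaurus.config.js and the other JS files is purely stylistic: single- and double-quoted literals are the same string value in JavaScript, so a quote-only reformat cannot change runtime behavior. A minimal check of that claim:

```javascript
// Quote style does not affect string identity in JavaScript, so the
// double-to-single-quote rewrite in this commit is behavior-preserving.
const withDouble = "docusaurus-theme-openapi-docs";
const withSingle = 'docusaurus-theme-openapi-docs';
console.log(withDouble === withSingle); // true
```

The only caveat is escaping (a literal containing an apostrophe), which Prettier handles by keeping whichever quote style avoids escapes.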


@@ -1,110 +1,110 @@
 module.exports = {
-  style: "light",
+  style: 'light',
   links: [
     {
-      title: "IdentityNow",
+      title: 'IdentityNow',
       items: [
         {
-          label: "Your First API Call",
-          to: "idn/api/getting-started",
+          label: 'Your First API Call',
+          to: 'idn/api/getting-started',
         },
         {
-          label: "Build A Transform",
-          to: "idn/docs/transforms/guides/your-first-transform",
+          label: 'Build A Transform',
+          to: 'idn/docs/transforms/guides/your-first-transform',
         },
         {
-          label: "Build A SaaS Connector",
-          to: "idn/docs/saas-connectivity",
+          label: 'Build A SaaS Connector',
+          to: 'idn/docs/saas-connectivity',
         },
         {
-          label: "Get Certified",
-          href: "https://university.sailpoint.com/Saba/Web_spf/NA10P1PRD075/guest/categorydetail/categ000000000003041/true/xxemptyxx/",
+          label: 'Get Certified',
+          href: 'https://university.sailpoint.com/Saba/Web_spf/NA10P1PRD075/guest/categorydetail/categ000000000003041/true/xxemptyxx/',
         },
       ],
     },
     {
-      title: "IdentityIQ",
+      title: 'IdentityIQ',
       items: [
         {
-          label: "Build A Plugin",
-          to: "https://documentation.sailpoint.com/identityiq/help/plugins/identityiq_plugins.html",
+          label: 'Build A Plugin',
+          to: 'https://documentation.sailpoint.com/identityiq/help/plugins/identityiq_plugins.html',
         },
         {
-          label: "Get Certified",
-          href: "https://university.sailpoint.com/Saba/Web_spf/NA10P1PRD075/guest/categorydetail/categ000000000003042/true/xxemptyxx/",
+          label: 'Get Certified',
+          href: 'https://university.sailpoint.com/Saba/Web_spf/NA10P1PRD075/guest/categorydetail/categ000000000003042/true/xxemptyxx/',
         },
       ],
     },
     {
-      title: "Community",
+      title: 'Community',
       items: [
         {
-          label: "Discuss",
-          to: "https://developer.sailpoint.com/discuss",
+          label: 'Discuss',
+          to: 'https://developer.sailpoint.com/discuss',
         },
         {
-          label: "Submit an Idea",
-          to: "https://developer-sailpoint.ideas.aha.io/",
+          label: 'Submit an Idea',
+          to: 'https://developer-sailpoint.ideas.aha.io/',
         },
         {
-          label: "Contact Our Team",
-          to: "https://developer.sailpoint.com/discuss/new-message?groupname=developer_relations",
+          label: 'Contact Our Team',
+          to: 'https://developer.sailpoint.com/discuss/new-message?groupname=developer_relations',
         },
       ],
     },
     {
-      title: "More",
+      title: 'More',
       items: [
         {
-          label: "Engineering Blog",
-          href: "https://medium.com/sailpointtechblog",
+          label: 'Engineering Blog',
+          href: 'https://medium.com/sailpointtechblog',
         },
         {
-          label: "GitHub",
-          href: "https://github.com/sailpoint-oss",
+          label: 'GitHub',
+          href: 'https://github.com/sailpoint-oss',
         },
         {
-          label: "Twitter",
-          href: "https://twitter.com/sailpoint",
+          label: 'Twitter',
+          href: 'https://twitter.com/sailpoint',
         },
       ],
     },
     {
-      title: "Company",
+      title: 'Company',
       items: [
         {
-          label: "The SailPoint Story",
-          to: "https://www.sailpoint.com/company/",
+          label: 'The SailPoint Story',
+          to: 'https://www.sailpoint.com/company/',
         },
         {
-          label: "The SailPoint Way",
-          to: "https://www.sailpoint.com/company/diversity-inclusion-and-belonging/",
+          label: 'The SailPoint Way',
+          to: 'https://www.sailpoint.com/company/diversity-inclusion-and-belonging/',
         },
         {
-          label: "Leadership Team",
-          to: "https://www.sailpoint.com/company/#h-our-leadership",
+          label: 'Leadership Team',
+          to: 'https://www.sailpoint.com/company/#h-our-leadership',
         },
         {
-          label: "Become A Partner",
-          to: "https://www.sailpoint.com/partners/become-partner/",
+          label: 'Become A Partner',
+          to: 'https://www.sailpoint.com/partners/become-partner/',
         },
       ],
     },
     {
-      title: "Legal",
+      title: 'Legal',
       items: [
         {
-          label: "Terms & Conditions",
-          to: "https://developer.sailpoint.com/discuss/tos",
+          label: 'Terms & Conditions',
+          to: 'https://developer.sailpoint.com/discuss/tos',
         },
       ],
     },
   ],
   logo: {
-    alt: "SailPoint Developer Community Logo",
-    src: "/img/SailPoint-Developer-Community-Lockup.png",
-    srcDark: "img/SailPoint-Developer-Community-Inverse-Lockup.png",
-    href: "https://developer.sailpoint.com",
+    alt: 'SailPoint Developer Community Logo',
+    src: '/img/SailPoint-Developer-Community-Lockup.png',
+    srcDark: 'img/SailPoint-Developer-Community-Inverse-Lockup.png',
+    href: 'https://developer.sailpoint.com',
   },
   copyright: `Copyright © ${new Date().getFullYear()} SailPoint Technologies, Inc. All Rights Reserved.`,
 };

navbar.js

@@ -1,87 +1,87 @@
 module.exports = {
-  title: "",
+  title: '',
   logo: {
-    alt: "SailPoint Developer Community",
-    src: "img/SailPoint-Developer-Community-Lockup.png",
-    srcDark: "img/SailPoint-Developer-Community-Inverse-Lockup.png",
+    alt: 'SailPoint Developer Community',
+    src: 'img/SailPoint-Developer-Community-Lockup.png',
+    srcDark: 'img/SailPoint-Developer-Community-Inverse-Lockup.png',
   },
   items: [
     {
-      type: "dropdown",
-      label: "IdentityNow",
-      position: "left",
+      type: 'dropdown',
+      label: 'IdentityNow',
+      position: 'left',
       items: [
-        { to: "#", label: "API Specifications", className: "navbar__section" },
-        { to: "/idn/api/v3", label: "V3 APIs", className: "indent" },
-        { to: "/idn/api/beta", label: "Beta APIs", className: "indent" },
-        { to: "#", label: "Documentation", className: "navbar__section" },
-        { to: "idn/docs", label: "IDN Documentation", className: "indent" },
-        { to: "#", label: "External Links", className: "navbar__section" },
+        {to: '#', label: 'API Specifications', className: 'navbar__section'},
+        {to: '/idn/api/v3', label: 'V3 APIs', className: 'indent'},
+        {to: '/idn/api/beta', label: 'Beta APIs', className: 'indent'},
+        {to: '#', label: 'Documentation', className: 'navbar__section'},
+        {to: 'idn/docs', label: 'IDN Documentation', className: 'indent'},
+        {to: '#', label: 'External Links', className: 'navbar__section'},
         {
-          href: "https://documentation.sailpoint.com",
-          label: "Product Documentation",
-          className: "indent",
+          href: 'https://documentation.sailpoint.com',
+          label: 'Product Documentation',
+          className: 'indent',
         },
         {
-          href: "https://university.sailpoint.com/Saba/Web_spf/NA10P1PRD075/guest/categorydetail/categ000000000003041/true/xxemptyxx/",
-          label: "IdentityNow Certifications",
-          className: "indent",
+          href: 'https://university.sailpoint.com/Saba/Web_spf/NA10P1PRD075/guest/categorydetail/categ000000000003041/true/xxemptyxx/',
+          label: 'IdentityNow Certifications',
+          className: 'indent',
         },
       ],
     },
     {
-      type: "dropdown",
-      label: "IdentityIQ",
-      position: "left",
+      type: 'dropdown',
+      label: 'IdentityIQ',
+      position: 'left',
       items: [
-        { to: "#", label: "API Specifications", className: "navbar__section" },
-        { to: "/iiq/api", label: "IIQ APIs", className: "indent" },
-        { to: "#", label: "External Links", className: "navbar__section" },
+        {to: '#', label: 'API Specifications', className: 'navbar__section'},
+        {to: '/iiq/api', label: 'IIQ APIs', className: 'indent'},
+        {to: '#', label: 'External Links', className: 'navbar__section'},
         {
-          href: "https://documentation.sailpoint.com",
-          label: "Product Documentation",
-          className: "indent",
+          href: 'https://documentation.sailpoint.com',
+          label: 'Product Documentation',
+          className: 'indent',
         },
         {
-          href: "https://university.sailpoint.com/Saba/Web_spf/NA10P1PRD075/guest/categorydetail/categ000000000003042/true/xxemptyxx/",
-          label: "IdentityIQ Certifications",
-          className: "indent",
+          href: 'https://university.sailpoint.com/Saba/Web_spf/NA10P1PRD075/guest/categorydetail/categ000000000003042/true/xxemptyxx/',
+          label: 'IdentityIQ Certifications',
+          className: 'indent',
         },
       ],
     },
     {
-      position: "left",
-      label: "Blog",
-      to: "https://medium.com/sailpointengineering",
+      position: 'left',
+      label: 'Blog',
+      to: 'https://medium.com/sailpointengineering',
     },
     {
-      position: "left",
-      label: "Ideas",
-      to: "https://developer-sailpoint.ideas.aha.io/",
+      position: 'left',
+      label: 'Ideas',
+      to: 'https://developer-sailpoint.ideas.aha.io/',
     },
     {
-      position: "left",
-      label: "Discuss",
-      to: "https://developer.sailpoint.com/discuss",
+      position: 'left',
+      label: 'Discuss',
+      to: 'https://developer.sailpoint.com/discuss',
     },
     {
-      type: "dropdown",
-      label: "Support",
-      position: "right",
+      type: 'dropdown',
+      label: 'Support',
+      position: 'right',
       items: [
         {
-          label: "Submit Support Ticket",
-          href: "https://support.sailpoint.com/hc/en-us/requests/new?ticket_form_id=360000629992",
label: 'Submit Support Ticket',
href: 'https://support.sailpoint.com/hc/en-us/requests/new?ticket_form_id=360000629992',
},
{ label: "Compass", href: "https://community.sailpoint.com" },
{ label: "Platform Status", href: "https://status.sailpoint.com/" },
{label: 'Compass', href: 'https://community.sailpoint.com'},
{label: 'Platform Status', href: 'https://status.sailpoint.com/'},
],
},
{
position: "right",
to: "https://github.com/sailpoint-oss",
className: "header-github-link",
"aria-label": "SailPoint Open-source GitHub",
position: 'right',
to: 'https://github.com/sailpoint-oss',
className: 'header-github-link',
'aria-label': 'SailPoint Open-source GitHub',
},
],
};


@@ -1,80 +1,80 @@
module.exports = [
[
"@docusaurus/plugin-content-docs",
'@docusaurus/plugin-content-docs',
{
id: "idn",
path: "products/idn",
routeBasePath: "idn",
id: 'idn',
path: 'products/idn',
routeBasePath: 'idn',
editUrl:
"https://github.com/sailpoint-oss/developer-community-site/edit/main/",
'https://github.com/sailpoint-oss/developer-community-site/edit/main/',
showLastUpdateAuthor: true,
showLastUpdateTime: true,
sidebarPath: require.resolve("./products/idn/sidebar.js"),
docItemComponent: "@theme/ApiItem",
sidebarPath: require.resolve('./products/idn/sidebar.js'),
docItemComponent: '@theme/ApiItem',
},
],
[
"@docusaurus/plugin-content-docs",
'@docusaurus/plugin-content-docs',
{
id: "iiq",
path: "products/iiq",
routeBasePath: "iiq",
id: 'iiq',
path: 'products/iiq',
routeBasePath: 'iiq',
editUrl:
"https://github.com/sailpoint-oss/developer-community-site/edit/main/",
'https://github.com/sailpoint-oss/developer-community-site/edit/main/',
showLastUpdateAuthor: true,
showLastUpdateTime: true,
sidebarPath: require.resolve("./products/iiq/sidebar.js"),
docItemComponent: "@theme/ApiItem",
sidebarPath: require.resolve('./products/iiq/sidebar.js'),
docItemComponent: '@theme/ApiItem',
},
],
[
"@docusaurus/plugin-google-gtag",
'@docusaurus/plugin-google-gtag',
{
trackingID: "GTM-TSD78J",
trackingID: 'GTM-TSD78J',
anonymizeIP: false,
},
],
[
"docusaurus-plugin-openapi-docs",
'docusaurus-plugin-openapi-docs',
{
id: "idn-api",
docsPluginId: "idn",
id: 'idn-api',
docsPluginId: 'idn',
config: {
idn_v3: {
specPath: "static/api-specs/idn/sailpoint-api.v3.yaml",
outputDir: "products/idn/api/v3",
specPath: 'static/api-specs/idn/sailpoint-api.v3.yaml',
outputDir: 'products/idn/api/v3',
sidebarOptions: {
groupPathsBy: "tag",
categoryLinkSource: "tag",
groupPathsBy: 'tag',
categoryLinkSource: 'tag',
},
template: "api.mustache",
template: 'api.mustache',
},
idn_beta: {
specPath: "static/api-specs/idn/sailpoint-api.beta.yaml",
outputDir: "products/idn/api/beta",
specPath: 'static/api-specs/idn/sailpoint-api.beta.yaml',
outputDir: 'products/idn/api/beta',
sidebarOptions: {
groupPathsBy: "tag",
categoryLinkSource: "tag",
groupPathsBy: 'tag',
categoryLinkSource: 'tag',
},
template: "api.mustache",
template: 'api.mustache',
},
},
},
],
[
"docusaurus-plugin-openapi-docs",
'docusaurus-plugin-openapi-docs',
{
id: "iiq-api",
docsPluginId: "iiq",
id: 'iiq-api',
docsPluginId: 'iiq',
config: {
iiq: {
specPath: "static/api-specs/iiq/swagger.json",
outputDir: "products/iiq/api",
specPath: 'static/api-specs/iiq/swagger.json',
outputDir: 'products/iiq/api',
sidebarOptions: {
groupPathsBy: "tag",
categoryLinkSource: "tag",
groupPathsBy: 'tag',
categoryLinkSource: 'tag',
},
template: "api.mustache",
template: 'api.mustache',
},
},
},


@@ -5,26 +5,19 @@ pagination_label: Authentication
sidebar_label: Authentication
sidebar_position: 2
sidebar_class_name: authentication
keywords: ["authentication"]
description:
The quickest way to authenticate and start using SailPoint APIs is to generate
a personal access token.
keywords: ['authentication']
description: The quickest way to authenticate and start using SailPoint APIs is to generate a personal access token.
slug: /api/authentication
tags: ["Authentication"]
tags: ['Authentication']
---
import Tabs from '@theme/Tabs'; import TabItem from '@theme/TabItem';
## Overview
The quickest way to authenticate and start using SailPoint APIs is to generate a
[personal access token](#personal-access-tokens). If you are interested in using
OAuth2 for authentication, then please continue to read this document.
The quickest way to authenticate and start using SailPoint APIs is to generate a [personal access token](#personal-access-tokens). If you are interested in using OAuth2 for authentication, then please continue to read this document.
In order to use the IdentityNow REST API, you must first authenticate with
IdentityNow and get an `access_token`. This `access_token` will need to be
provided in the `Authorization` header of each API request. The steps of the
flow are as follows:
In order to use the IdentityNow REST API, you must first authenticate with IdentityNow and get an `access_token`. This `access_token` will need to be provided in the `Authorization` header of each API request. The steps of the flow are as follows:
<div align="center">
@@ -45,37 +38,20 @@ sequenceDiagram
</div>
1. **Access Token Request** - The HTTP client (a script, application, Postman,
cURL, etc.) makes a request to IdentityNow to get an `access_token`. The
details of this are described in the
[Authentication Details](#authentication-details) section.
2. **Access Token Response** - Assuming the request is valid, IdentityNow will
issue an `access_token` to the HTTP client in response.
3. **API Request** - The HTTP client makes a request to an IdentityNow API
endpoint. Included in that request is the header
`Authorization: Bearer {access_token}`.
4. **API Response** - Assuming the request and the `access_token` are valid,
IdentityNow will return a response to the client. If unexpected errors occur,
see the [Troubleshooting](#troubleshooting) section of this document.
1. **Access Token Request** - The HTTP client (a script, application, Postman, cURL, etc.) makes a request to IdentityNow to get an `access_token`. The details of this are described in the [Authentication Details](#authentication-details) section.
2. **Access Token Response** - Assuming the request is valid, IdentityNow will issue an `access_token` to the HTTP client in response.
3. **API Request** - The HTTP client makes a request to an IdentityNow API endpoint. Included in that request is the header `Authorization: Bearer {access_token}`.
4. **API Response** - Assuming the request and the `access_token` are valid, IdentityNow will return a response to the client. If unexpected errors occur, see the [Troubleshooting](#troubleshooting) section of this document.
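The four steps above can be sketched in a short script. This is a minimal illustration, not a SailPoint SDK: the tenant name and credentials are placeholders, and only the request construction is shown (actually sending it requires valid credentials).

```javascript
// Minimal sketch of the four-step flow above (Node.js 18+). The tenant
// name and credentials are placeholders; buildTokenRequest/authHeaders
// are illustrative helpers, not part of a SailPoint SDK.
const tenant = "example"; // hypothetical tenant
const tokenUrl = `https://${tenant}.api.identitynow.com/oauth/token`;

// Step 1: build the access-token request (client credentials grant).
function buildTokenRequest(clientId, clientSecret) {
  const params = new URLSearchParams({
    grant_type: "client_credentials",
    client_id: clientId,
    client_secret: clientSecret,
  });
  return `${tokenUrl}?${params}`;
}

// Step 3: attach the access_token from step 2 to each API request.
function authHeaders(accessToken) {
  return { Authorization: `Bearer ${accessToken}` };
}

// Steps 2 and 4 are the server's responses; sending the requests would
// look like: await fetch(buildTokenRequest(id, secret), { method: "POST" })
```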
The SailPoint authentication/authorization model is fully
[OAuth 2.0](https://oauth.net/2/) compliant, with issued `access_tokens`
leveraging the [JSON Web Token (JWT)](https://jwt.io/) standard. This document
provides the necessary information for interacting with SailPoint's OAuth2
services.
The SailPoint authentication/authorization model is fully [OAuth 2.0](https://oauth.net/2/) compliant, with issued `access_tokens` leveraging the [JSON Web Token (JWT)](https://jwt.io/) standard. This document provides the necessary information for interacting with SailPoint's OAuth2 services.
## Find Your Tenant's OAuth Details
This document assumes your IDN instance is using the domain name supplied by
SailPoint. If your instance is using a vanity URL, then you will need to open
the following URL in your browser to get your OAuth info. See
[finding your org/tenant name](./getting-started.md#finding-your-orgtenant-name)
in the [getting started guide](./getting-started.md) to get your `{tenant}`.
This document assumes your IDN instance is using the domain name supplied by SailPoint. If your instance is using a vanity URL, then you will need to open the following URL in your browser to get your OAuth info. See [finding your org/tenant name](./getting-started.md#finding-your-orgtenant-name) in the [getting started guide](./getting-started.md) to get your `{tenant}`.
`https://{tenant}.api.identitynow.com/oauth/info`
This page will present you with your `authorizeEndpoint` and `tokenEndpoint`,
which you will need to follow along with the examples in this document.
This page will present you with your `authorizeEndpoint` and `tokenEndpoint`, which you will need to follow along with the examples in this document.
```json
{
@@ -91,98 +67,53 @@ which you will need to follow along with the examples in this document.
## Personal Access Tokens
A personal access token is a method of authenticating to an API as a user
without needing to supply a username and password. The primary use case for
personal access tokens is in scripts or programs that don't have an easy way to
implement an OAuth 2.0 flow and that need to call API endpoints that require a
user context. Personal access tokens are also convenient when using Postman to
explore and test APIs.
A personal access token is a method of authenticating to an API as a user without needing to supply a username and password. The primary use case for personal access tokens is in scripts or programs that don't have an easy way to implement an OAuth 2.0 flow and that need to call API endpoints that require a user context. Personal access tokens are also convenient when using Postman to explore and test APIs.
:::info Update
Previously, only users with the `Admin` or `Source Admin` role were allowed to
generate personal access tokens. Now, all users are able to generate personal
access tokens!
Previously, only users with the `Admin` or `Source Admin` role were allowed to generate personal access tokens. Now, all users are able to generate personal access tokens!
:::
To generate a personal access token from the IdentityNow UI, perform the
following steps after logging into your IdentityNow instance:
To generate a personal access token from the IdentityNow UI, perform the following steps after logging into your IdentityNow instance:
1. Select **Preferences** from the drop-down menu under your username, then
**Personal Access Tokens** on the left. You can also go straight to the page
using this URL, replacing `{tenant}` with your IdentityNow tenant:
`https://{tenant}.identitynow.com/ui/d/user-preferences/personal-access-tokens`.
1. Select **Preferences** from the drop-down menu under your username, then **Personal Access Tokens** on the left. You can also go straight to the page using this URL, replacing `{tenant}` with your IdentityNow tenant: `https://{tenant}.identitynow.com/ui/d/user-preferences/personal-access-tokens`.
2. Click **New Token** and enter a meaningful description to help differentiate
the token from others.
2. Click **New Token** and enter a meaningful description to help differentiate the token from others.
:::caution
The **New Token** button will be disabled when you've reached the limit of 10
personal access tokens per user. To avoid reaching this limit, we recommend you
delete any tokens that are no longer needed.
The **New Token** button will be disabled when you've reached the limit of 10 personal access tokens per user. To avoid reaching this limit, we recommend you delete any tokens that are no longer needed.
:::
3. Click **Create Token** to generate and view the two components that comprise
the token: the `Secret` and the `Client ID`.
3. Click **Create Token** to generate and view the two components that comprise the token: the `Secret` and the `Client ID`.
:::danger Important
After you create the token, the value of the `Client ID` will be visible in the
Personal Access Tokens list, but the corresponding `Secret` will not be visible
after you close the window. You will need to store the `Secret` somewhere
secure.
After you create the token, the value of the `Client ID` will be visible in the Personal Access Tokens list, but the corresponding `Secret` will not be visible after you close the window. You will need to store the `Secret` somewhere secure.
:::
4. Copy both values somewhere that will be secure and accessible to you when you
   need to use the token.
4. Copy both values somewhere that will be secure and accessible to you when you need to use the token.
To generate a personal access token from the API, use the
[create personal access token endpoint](/idn/api/beta/create-personal-access-token).
To generate a personal access token from the API, use the [create personal access token endpoint](/idn/api/beta/create-personal-access-token).
To use a personal access token to generate an `access_token` that can be used to
authenticate requests to the API, follow the
[Client Credentials Grant Flow](#client-credentials-grant-flow), using the
`Client ID` and `Client Secret` obtained from the personal access token.
To use a personal access token to generate an `access_token` that can be used to authenticate requests to the API, follow the [Client Credentials Grant Flow](#client-credentials-grant-flow), using the `Client ID` and `Client Secret` obtained from the personal access token.
## OAuth 2.0
[OAuth 2.0](https://oauth.net/2/) is an industry-standard protocol for
authorization, and provides a variety of authorization flows for web
applications, desktop applications, mobile phones, and devices. This
specification and its extensions are developed within the
[IETF OAuth Working Group](https://www.ietf.org/mailman/listinfo/oauth).
[OAuth 2.0](https://oauth.net/2/) is an industry-standard protocol for authorization, and provides a variety of authorization flows for web applications, desktop applications, mobile phones, and devices. This specification and its extensions are developed within the [IETF OAuth Working Group](https://www.ietf.org/mailman/listinfo/oauth).
There are several different authorization flows that OAuth 2.0 supports, and
each of these has a grant-type which defines the different use cases. Some of
the common ones which might be used with IdentityNow are as follows:
There are several different authorization flows that OAuth 2.0 supports, and each of these has a grant-type which defines the different use cases. Some of the common ones which might be used with IdentityNow are as follows:
1. [**Authorization Code**](https://oauth.net/2/grant-types/authorization-code/) -
This grant type is used by clients to exchange an authorization code for an
`access_token`. This is mainly used for web applications as there is a login
into IdentityNow, with a subsequent redirect back to the web application /
client.
2. [**Client Credentials**](https://oauth.net/2/grant-types/client-credentials/) -
This grant type is used by clients to obtain an `access_token` outside the
context of a user. Because this is outside of a user context, only a subset
of IdentityNow REST APIs may be accessible with this kind of grant type.
3. [**Refresh Token**](https://oauth.net/2/grant-types/refresh-token/) - This
grant type is used by clients in order to exchange a refresh token for a new
`access_token` when the existing `access_token` has expired. This allows
clients to continue using the API without having to re-authenticate as
frequently. This grant type is commonly used together with
`Authorization Code` to prevent a user from having to log in several times
per day.
1. [**Authorization Code**](https://oauth.net/2/grant-types/authorization-code/) - This grant type is used by clients to exchange an authorization code for an `access_token`. This is mainly used for web applications as there is a login into IdentityNow, with a subsequent redirect back to the web application / client.
2. [**Client Credentials**](https://oauth.net/2/grant-types/client-credentials/) - This grant type is used by clients to obtain an `access_token` outside the context of a user. Because this is outside of a user context, only a subset of IdentityNow REST APIs may be accessible with this kind of grant type.
3. [**Refresh Token**](https://oauth.net/2/grant-types/refresh-token/) - This grant type is used by clients in order to exchange a refresh token for a new `access_token` when the existing `access_token` has expired. This allows clients to continue using the API without having to re-authenticate as frequently. This grant type is commonly used together with `Authorization Code` to prevent a user from having to log in several times per day.
## JSON Web Token (JWT)
[JSON Web Token (JWT)](https://jwt.io) is an industry-standard protocol for
creating access tokens which assert various claims about the resource who has
authenticated. The tokens have a specific structure consisting of a header,
payload, and signature.
[JSON Web Token (JWT)](https://jwt.io) is an industry-standard protocol for creating access tokens which assert various claims about the resource who has authenticated. The tokens have a specific structure consisting of a header, payload, and signature.
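Before relying on a token's claims, clients typically verify the signature with a JWT library; purely for inspection, the payload segment can be decoded by hand. A minimal Node.js sketch (no signature verification):

```javascript
// Decode a JWT's payload claims without verifying the signature.
// For real use, verify with a JWT library and the issuer's key; this
// sketch only illustrates the header.payload.signature structure.
function decodeJwtPayload(jwt) {
  const [, payload] = jwt.split(".");
  // base64url -> base64, then decode and parse the JSON claims
  const b64 = payload.replace(/-/g, "+").replace(/_/g, "/");
  return JSON.parse(Buffer.from(b64, "base64").toString("utf8"));
}
```

For example, an expiry claim recovered this way is what a client would compare against the current time to decide whether the token is still usable.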
A raw JWT might look like this:
@@ -242,43 +173,29 @@ You can check the JWT access token data online at [jwt.io](https://jwt.io).
## Authentication Details
This section details how to call the SailPoint Platform OAuth 2.0 token
endpoints to get an `access_token`.
This section details how to call the SailPoint Platform OAuth 2.0 token endpoints to get an `access_token`.
### Prerequisites
Before any OAuth 2.0 token requests can be initiated, a Client ID and secret are
necessary. As an `ORG_ADMIN`, browse to your API Management Admin Page at
`https://{tenant}.identitynow.com/ui/admin/#admin:global:security:apimanagementpanel`
and create an API client with the appropriate grant types for your use case. If
you are not an admin of your org, you can ask an admin to create this for you.
Be sure to save your `Client Secret` somewhere secure, as you will not be able
to view or change it later.
Before any OAuth 2.0 token requests can be initiated, a Client ID and secret are necessary. As an `ORG_ADMIN`, browse to your API Management Admin Page at `https://{tenant}.identitynow.com/ui/admin/#admin:global:security:apimanagementpanel` and create an API client with the appropriate grant types for your use case. If you are not an admin of your org, you can ask an admin to create this for you. Be sure to save your `Client Secret` somewhere secure, as you will not be able to view or change it later.
### OAuth 2.0 Token Request
When authenticating to IdentityNow, the OAuth 2.0 token endpoint resides on the
IdentityNow API Gateway at:
When authenticating to IdentityNow, the OAuth 2.0 token endpoint resides on the IdentityNow API Gateway at:
```text
POST https://{tenant}.api.identitynow.com/oauth/token
```
How you call this endpoint to get your token depends largely on the OAuth 2.0
flow and grant type you wish to implement. The details for each grant type
within IdentityNow are described in the following sections.
How you call this endpoint to get your token depends largely on the OAuth 2.0 flow and grant type you wish to implement. The details for each grant type within IdentityNow are described in the following sections.
### Authorization Code Grant Flow
Further Reading:
[https://oauth.net/2/grant-types/authorization-code/](https://oauth.net/2/grant-types/authorization-code/)
Further Reading: [https://oauth.net/2/grant-types/authorization-code/](https://oauth.net/2/grant-types/authorization-code/)
This grant type is used by clients to exchange an authorization code for an
`access_token`. This is mainly used for web apps as there is a login into
IdentityNow, with a subsequent redirect back to the web app / client.
This grant type is used by clients to exchange an authorization code for an `access_token`. This is mainly used for web apps as there is a login into IdentityNow, with a subsequent redirect back to the web app / client.
The OAuth 2.0 client you are using must have `AUTHORIZATION_CODE` as one of its
grant types. The redirect URLs must also match the list in the client as well:
The OAuth 2.0 client you are using must have `AUTHORIZATION_CODE` as one of its grant types. The redirect URLs must also match the list in the client as well:
```json
{
@@ -330,41 +247,33 @@ sequenceDiagram
GET https://{tenant}.identitynow.com/oauth/authorize?client_id={client-id}&client_secret={client-secret}&response_type=code&redirect_uri={redirect-url}
```
3. IdentityNow redirects the user to a login prompt to authenticate to
IdentityNow.
3. IdentityNow redirects the user to a login prompt to authenticate to IdentityNow.
4. The user authenticates to IdentityNow.
5. Once authentication is successful, IdentityNow issues an authorization code
back to the web app.
5. Once authentication is successful, IdentityNow issues an authorization code back to the web app.
6. The web app submits an **OAuth 2.0 Token Request** to IdentityNow in the
form:
6. The web app submits an **OAuth 2.0 Token Request** to IdentityNow in the form:
```text
POST https://{tenant}.api.identitynow.com/oauth/token?grant_type=authorization_code&client_id={client-id}&client_secret={client-secret}&code={code}&redirect_uri={redirect-url}
```
> **Note**: the token endpoint URL is `{tenant}.api.identitynow.com`, while the
> authorize URL is `{tenant}.identitynow.com`. Be sure to use the correct URL
> when setting up your webapp to use this flow.
> **Note**: the token endpoint URL is `{tenant}.api.identitynow.com`, while the authorize URL is `{tenant}.identitynow.com`. Be sure to use the correct URL when setting up your webapp to use this flow.
7. IdentityNow validates the token request and submits a response. If
successful, the response will contain a JWT `access_token`.
7. IdentityNow validates the token request and submits a response. If successful, the response will contain a JWT `access_token`.
The query parameters in the OAuth 2.0 token request for the Authorization Code
grant are as follows:
The query parameters in the OAuth 2.0 token request for the Authorization Code grant are as follows:
| Key | Description |
| ---------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `grant_type` | Set to `authorization_code` for the authorization code grant type. |
| `client_id` | This is the client ID for the API client (e.g. `b61429f5-203d-494c-94c3-04f54e17bc5c`). This can be generated at `https://{tenant}.identitynow.com/ui/admin/#admin:global:security:apimanagementpanel` |
| Key | Description |
| --- | --- |
| `grant_type` | Set to `authorization_code` for the authorization code grant type. |
| `client_id` | This is the client ID for the API client (e.g. `b61429f5-203d-494c-94c3-04f54e17bc5c`). This can be generated at `https://{tenant}.identitynow.com/ui/admin/#admin:global:security:apimanagementpanel` |
| `client_secret` | This is the client secret for the API client (e.g. `c924417c85b19eda40e171935503d8e9747ca60ddb9b48ba4c6bb5a7145fb6c5`). This can be generated at `https://{tenant}.identitynow.com/ui/admin/#admin:global:security:apimanagementpanel` |
| `code` | This is a code returned by `/oauth/authorize`. |
| `redirect_uri` | This is a URL of the application to redirect to once the token has been granted. |
| `code` | This is a code returned by `/oauth/authorize`. |
| `redirect_uri` | This is a URL of the application to redirect to once the token has been granted. |
Here is an example OAuth 2.0 token request for the Authorization Code grant
type.
Here is an example OAuth 2.0 token request for the Authorization Code grant type.
```bash
curl -X POST \
@@ -374,20 +283,11 @@ curl -X POST \
### Client Credentials Grant Flow
Further Reading:
[https://oauth.net/2/grant-types/client-credentials/](https://oauth.net/2/grant-types/client-credentials/)
Further Reading: [https://oauth.net/2/grant-types/client-credentials/](https://oauth.net/2/grant-types/client-credentials/)
This grant type is used by clients to obtain an access token outside the context
of a user. This is probably the simplest authentication flow, but comes with a
major drawback; API endpoints that require
[user level permissions](https://documentation.sailpoint.com/saas/help/common/users/user_level_matrix.html)
will not work. [Personal Access Tokens](#personal-access-tokens) are a form of
Client Credentials that have a user context, so they do not share this drawback.
However, the APIs that can be invoked with a personal access token depend on the
permissions of the user that generated it.
This grant type is used by clients to obtain an access token outside the context of a user. This is probably the simplest authentication flow, but comes with a major drawback; API endpoints that require [user level permissions](https://documentation.sailpoint.com/saas/help/common/users/user_level_matrix.html) will not work. [Personal Access Tokens](#personal-access-tokens) are a form of Client Credentials that have a user context, so they do not share this drawback. However, the APIs that can be invoked with a personal access token depend on the permissions of the user that generated it.
An OAuth 2.0 client using the Client Credentials flow must have
`CLIENT_CREDENTIALS` as one of its grantTypes:
An OAuth 2.0 client using the Client Credentials flow must have `CLIENT_CREDENTIALS` as one of its grantTypes:
```json
{
@@ -404,8 +304,7 @@ An OAuth 2.0 client using the Client Credentials flow must have
}
```
[Personal Access Tokens](#personal-access-tokens) are implicitly granted a
`CLIENT_CREDENTIALS` grant type.
[Personal Access Tokens](#personal-access-tokens) are implicitly granted a `CLIENT_CREDENTIALS` grant type.
The overall authorization flow looks like this:
@@ -415,20 +314,17 @@ The overall authorization flow looks like this:
POST https://{tenant}.api.identitynow.com/oauth/token?grant_type=client_credentials&client_id={client-id}&client_secret={client-secret}
```
2. IdentityNow validates the token request and submits a response. If
successful, the response will contain a JWT access token.
2. IdentityNow validates the token request and submits a response. If successful, the response will contain a JWT access token.
The query parameters in the OAuth 2.0 Token Request for the Client Credentials
grant are as follows:
The query parameters in the OAuth 2.0 Token Request for the Client Credentials grant are as follows:
| Key | Description |
| --------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `grant_type` | Set to `client_credentials` for the client credentials grant type. |
| `client_id` | This is the client ID for the API client (e.g. `b61429f5-203d-494c-94c3-04f54e17bc5c`). This can be generated at `https://{tenant}.identitynow.com/ui/admin/#admin:global:security:apimanagementpanel` or by [creating a personal access token](#personal-access-tokens). |
| Key | Description |
| --- | --- |
| `grant_type` | Set to `client_credentials` for the client credentials grant type. |
| `client_id` | This is the client ID for the API client (e.g. `b61429f5-203d-494c-94c3-04f54e17bc5c`). This can be generated at `https://{tenant}.identitynow.com/ui/admin/#admin:global:security:apimanagementpanel` or by [creating a personal access token](#personal-access-tokens). |
| `client_secret` | This is the client secret for the API client (e.g. `c924417c85b19eda40e171935503d8e9747ca60ddb9b48ba4c6bb5a7145fb6c5`). This can be generated at `https://{tenant}.identitynow.com/ui/admin/#admin:global:security:apimanagementpanel` or by [creating a personal access token](#personal-access-tokens). |
Here is an example request to generate an `access_token` using Client
Credentials.
Here is an example request to generate an `access_token` using Client Credentials.
```bash
curl -X POST \
@@ -438,17 +334,11 @@ curl -X POST \
### Refresh Token Grant Flow
Further Reading:
[https://oauth.net/2/grant-types/refresh-token/](https://oauth.net/2/grant-types/refresh-token/)
Further Reading: [https://oauth.net/2/grant-types/refresh-token/](https://oauth.net/2/grant-types/refresh-token/)
This grant type is used by clients in order to exchange a refresh token for a
new `access_token` once the existing `access_token` has expired. This allows
clients to continue to have a valid `access_token` without the need for the user
to log in as frequently.
This grant type is used by clients in order to exchange a refresh token for a new `access_token` once the existing `access_token` has expired. This allows clients to continue to have a valid `access_token` without the need for the user to log in as frequently.
The OAuth 2.0 client you are using must have `REFRESH_TOKEN` as one of its grant
types, and is typically used in conjunction with another grant type, like
`CLIENT_CREDENTIALS` or `AUTHORIZATION_CODE`:
The OAuth 2.0 client you are using must have `REFRESH_TOKEN` as one of its grant types, and is typically used in conjunction with another grant type, like `CLIENT_CREDENTIALS` or `AUTHORIZATION_CODE`:
```json
{
@@ -468,29 +358,24 @@ types, and is typically used in conjunction with another grant type, like
The overall authorization flow looks like this:
1. The client application receives an `access_token` and a `refresh_token` via
one of the other OAuth grant flows, like `AUTHORIZATION_CODE`.
2. The client application notices that the `access_token` is about to expire,
based on the `expires_in` attribute contained within the JWT token.
1. The client application receives an `access_token` and a `refresh_token` via one of the other OAuth grant flows, like `AUTHORIZATION_CODE`.
2. The client application notices that the `access_token` is about to expire, based on the `expires_in` attribute contained within the JWT token.
3. The client submits an **OAuth 2.0 Token Request** to IdentityNow in the form:
```text
POST https://{tenant}.api.identitynow.com/oauth/token?grant_type=refresh_token&client_id={client_id}&client_secret={client_secret}&refresh_token={refresh_token}
```
4. IdentityNow validates the token request and submits a response. If successful, the response will contain a new `access_token` and `refresh_token`.
The query parameters in the OAuth 2.0 Token Request for the Refresh Token grant are as follows:
| Key | Description |
| --- | --- |
| `grant_type` | Set to `refresh_token` for the authorization code grant type. |
| `client_id` | This is the client ID for the API client (e.g. `b61429f5-203d-494c-94c3-04f54e17bc5c`). This can be generated at `https://{tenant}.identitynow.com/ui/admin/#admin:global:security:apimanagementpanel`. |
| `client_secret` | This is the client secret for the API client (e.g. `c924417c85b19eda40e171935503d8e9747ca60ddb9b48ba4c6bb5a7145fb6c5`). This can be generated at `https://{tenant}.identitynow.com/ui/admin/#admin:global:security:apimanagementpanel`. |
| `refresh_token` | This is the `refresh_token` that was provided along with the now expired `access_token`. |
An example OAuth 2.0 Token Request call for the Refresh Token grant takes the same form as the token request shown in step 3 above.
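The token request in step 3 can also be sketched programmatically. The following Python sketch only builds the request URL from placeholder values; it is an illustration, not an official SailPoint SDK, and sending the POST is left to your HTTP client:

```python
from urllib.parse import urlencode

def refresh_token_request_url(tenant, client_id, client_secret, refresh_token):
    """Build the OAuth 2.0 Token Request URL for the refresh_token grant.

    All arguments are placeholders for your tenant's real values.
    """
    params = urlencode({
        "grant_type": "refresh_token",
        "client_id": client_id,
        "client_secret": client_secret,
        "refresh_token": refresh_token,
    })
    return f"https://{tenant}.api.identitynow.com/oauth/token?{params}"
```

POST the resulting URL to exchange the refresh token for a new `access_token`.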
## OAuth 2.0 Token Response
A successful request to `https://{tenant}.api.identitynow.com/oauth/token` will contain a response body similar to this:
```json
{
  "access_token": "{access_token}",
  "expires_in": 749,
  "refresh_token": "{refresh_token}",
  "user_id": "{user_id}",
  "identity_id": "{identity_id}"
}
```
The `access_token` contains the JSON Web Token which is subsequently used in any further REST API calls through the IdentityNow API gateway. To use the `access_token`, simply include it in the `Authorization` header as a `Bearer` token. For example:
```bash
curl -X GET \
  'https://{tenant}.api.identitynow.com/v3/public-identities' \
  -H 'Authorization: Bearer {access_token}' \
-H 'cache-control: no-cache'
```
The `expires_in` describes the lifetime, in seconds, of the `access_token`. For example, the value 749 means that the `access_token` will expire in 12.5 minutes from the time the response was generated. The exact expiration date is also contained within the `access_token`. You can view this expiration time by decoding the JWT `access_token` using a tool like [jwt.io](https://jwt.io/).
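Instead of pasting the token into jwt.io, you can inspect the `exp` claim programmatically. A minimal Python sketch; it decodes the payload without verifying the signature, so use it only to inspect your own tokens:

```python
import base64
import json
import time

def jwt_claims(token):
    """Decode a JWT payload without verifying the signature (inspection only)."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

def seconds_until_expiry(token, now=None):
    """Seconds until the token's `exp` claim, relative to `now` (default: current time)."""
    return jwt_claims(token)["exp"] - (time.time() if now is None else now)
```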
The `refresh_token` contains a JSON Web Token for use in a [Refresh Token](#refresh-token-grant-flow) grant flow. The `refresh_token` will only be present if the API client has the `REFRESH_TOKEN` grant type.
The `user_id` and `identity_id` define the identity context of the person that authenticated. This is not set for the Client Credentials grant type since it doesn't have a user context.
## Which OAuth 2.0 Grant Flow Should I Use?
Deciding which OAuth 2.0 grant flow you should use largely depends on your use case.
### Daily Work or Quick Actions
For daily work or short, quick administrative actions, you may not need to worry about grant types at all, since an access token can easily be obtained in the user interface. To see this:
1. Log in to IdentityNow.
2. Go to `https://{tenant}.identitynow.com/ui/session`.
3. The `accessToken` is visible in the user interface.
4. Use this access token in the `Authorization` header when making API calls. If the access token expires, log back into IdentityNow and retrieve the new access token.
While this method is very simple, the token is only valid for a short period of time (a few minutes).
### Postman
If you are using the popular HTTP client, [Postman](https://www.getpostman.com), you have a couple of options for setting up your authorization. You can leverage the accessToken as mentioned above, or you can configure Postman to use OAuth 2.0 directly.
### Web Applications
If you are making a web application, the best grant flow to use is the [Authorization Code](#authorization-code-grant-flow) grant flow. This will allow users to be directed to IdentityNow to login, and then redirected back to the web application via a URL redirect. This also works well with SSO, strong authentication, or pass-through authentication mechanisms.
SailPoint does not recommend using a password grant flow for web applications as it would involve entering IdentityNow credentials in the web application. This flow also doesn't allow you to work with SSO, strong authentication, or pass-through authentication.
### Scripts or Programs
If you are writing scripts or programs that leverage the IdentityNow APIs, which OAuth 2.0 grant flow you should use typically depends on what you are doing and which user context you need to operate under.
Because scripts, code, or programs do not have an interactive web interface, it is difficult, but not impossible, to implement a working [Authorization Code](#authorization-code-grant-flow) flow. Most scripts or programs typically use the [Client Credentials](#client-credentials-grant-flow) grant flow instead. If your APIs can work under an API context without a user, then [Client Credentials](#client-credentials-grant-flow) is ideal. However, if your APIs need a user or admin context, the [Personal Access Token](#personal-access-tokens) approach will be more suitable.
## Troubleshooting
Having issues? Follow these steps.
1. **Verify the API Endpoint Calls**
1. Verify the structure of the API call:
1. Verify that the API calls are going through the API gateway: `https://{tenant}.api.identitynow.com`
1. Verify that you are calling each API version correctly:
- Private APIs: `https://{tenant}.api.identitynow.com/cc/api/{endpoint}`
- V3 APIs: `https://{tenant}.api.identitynow.com/v3/{endpoint}`
- Beta APIs: `https://{tenant}.api.identitynow.com/beta/{endpoint}`
1. Verify that the API calls have the correct headers (e.g., `content-type`), query parameters, and body data.
1. If the HTTP response is **401 Unauthorized**, this is an indication that either there is no `Authorization` header or the `access_token` is invalid. Verify that the API calls are supplying the `access_token` in the `Authorization` header correctly (ex. `Authorization: Bearer {access_token}`) and that the `access_token` has not expired.
1. If the HTTP response is **403 Forbidden**, this is an indication that the `access_token` is valid, but the user you are running as doesn't have access to this endpoint. Check the access rights which are associated with the user.
:::info
This can also be due to calling an API which expects a user, but your authorization grant type might not have a user context. Calling most administrative APIs with a `CLIENT_CREDENTIALS` grant will often produce this result.
:::
2. **Verify the OAuth 2.0 Client**
1. Verify that the OAuth 2.0 Client is not a Legacy OAuth client. Legacy OAuth clients will not work. You can tell by looking at the Client ID, as OAuth 2.0 Client IDs contain dashes. For example, a Legacy Client ID looks like `G6xLlBBOKIcOAQuK`, while an OAuth 2.0 Client ID looks like `b61429f5-203d-494c-94c3-04f54e17bc5c`.
1. Verify the OAuth 2.0 Client ID exists. This can be verified by calling:
```text
GET /beta/oauth-clients/
```
You can also view all of the active clients in the UI by going to `https://{tenant}.identitynow.com/ui/admin/#admin:global:security:apimanagementpanel`.
3. Verify that the OAuth 2.0 Client grant types match the OAuth 2.0 grant type flow you are trying to use. For instance, this client will work with [Authorization Code](#authorization-code-grant-flow) and [Client Credentials](#client-credentials-grant-flow) flows, but not [Refresh Token](#refresh-token-grant-flow) flows:
```json
{
  "grantTypes": ["AUTHORIZATION_CODE", "CLIENT_CREDENTIALS"]
}
```
4. If using an [Authorization Code](#authorization-code-grant-flow) flow, verify the redirect URL(s) for your application match the `redirectUris` value in the client. You can check this using the [oauth-clients endpoint](/idn/api/beta/list-oauth-clients).
5. **Verify the OAuth 2.0 Calls**
6. Verify that the OAuth call flow is going to the right URLs, with the correct query parameters and data values. A common source of errors is using the wrong host for authorization and token API calls. The token endpoint URL is `{tenant}.api.identitynow.com`, while the authorize URL is `{tenant}.identitynow.com`.
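That host distinction can be codified in a couple of helper functions. A small sketch; only the two hosts come from this guide, and whatever path you append must match the endpoint you are calling:

```python
def token_host(tenant):
    """OAuth token requests go through the API gateway host."""
    return f"https://{tenant}.api.identitynow.com"

def authorize_host(tenant):
    """The authorize endpoint is served from the tenant UI host."""
    return f"https://{tenant}.identitynow.com"
```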
---
pagination_label: Getting Started
sidebar_label: Getting Started
sidebar_position: 1
sidebar_class_name: gettingStarted
keywords: ['getting started']
description: This is this place to get started with IdentityNow APIs.
slug: /api/getting-started
tags: ['Getting Started']
---
## Find Your Tenant Name
To form the proper URL for an API request, you must know your tenant name. To find your tenant name, log in to IdentityNow, navigate to Admin, select the Dashboard dropdown, and select Overview. The org name is displayed within the Org Details section of the dashboard. If you do not have admin access, you can still find your tenant name and the API base URL you will use for API calls. To do so, view your session details when you are logged into your IdentityNow instance. Change your URL to the following: `https://{your-IdentityNow-hostname}.com/ui/session`, where `{your-IdentityNow-hostname}` is your company's domain name for accessing IdentityNow. The session detail you want is the `baseUrl`, which has the form of `https://{tenant}.api.identitynow.com`.
## Make Your First API Call
To get started, create a [personal access token](./authentication.md#personal-access-tokens), which can then be used to generate access tokens to authenticate your API calls. To generate a personal access token from IdentityNow, do the following after logging into your IdentityNow instance:
1. Select **Preferences** from the drop-down menu under your username. Then select **Personal Access Tokens** on the left. You can also go straight to the page using this URL, replacing `{tenant}` with your IdentityNow tenant: `https://{tenant}.identitynow.com/ui/d/user-preferences/personal-access-tokens`.
2. Select **New Token** and enter a meaningful description to differentiate the token from others.
:::caution
The **New Token** button will be disabled when you reach the limit of 10 personal access tokens per user. To avoid reaching this limit, delete any tokens that are no longer needed.
:::
3. Select **Create Token** to generate and view the two components the token comprises: the `Secret` and the `Client ID`.
:::danger Important
After you create the token, the value of the `Client ID` will be visible in the Personal Access Tokens list, but the corresponding `Secret` will not be visible after you close the window. Store the `Secret` somewhere secure.
:::
4. Copy both values somewhere that will be secure and accessible to you when you need to use the token.
5. To create an `access_token` that can be used to authenticate API requests, use the following cURL command, replacing `{tenant}` with your IdentityNow tenant. The response body will contain an `access_token`, which will look like a long string of random characters.
```bash
curl --location --request POST 'https://{tenant}.api.identitynow.com/oauth/token?grant_type=client_credentials&client_id={client_id}&client_secret={secret}'
```
6. To test your `access_token`, execute the following cURL command, replacing `{tenant}` with your IdentityNow tenant and `access_token` with the token you generated in the previous step. If this is successful, you should get a JSON representation of an identity in your tenant.
```bash
curl --request GET --url 'https://{tenant}.api.identitynow.com/v3/public-identities?limit=1' --header 'authorization: Bearer {access_token}'
```
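Steps 5 and 6 can be sketched together in Python. The helpers below are hypothetical illustrations, not part of any SailPoint SDK: one builds the step 5 token URL from placeholder credentials, and the other turns the token response body into the authorization header used in step 6:

```python
import json
from urllib.parse import urlencode

def client_credentials_url(tenant, client_id, client_secret):
    """Build the token request URL used in step 5 (values are placeholders)."""
    params = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    })
    return f"https://{tenant}.api.identitynow.com/oauth/token?{params}"

def bearer_header(token_response_body):
    """Extract access_token from the step 5 JSON response and form the step 6 header."""
    access_token = json.loads(token_response_body)["access_token"]
    return {"Authorization": f"Bearer {access_token}"}
```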
For more information about SailPoint Platform authentication, see [API Authentication](./authentication.md).
## Rate Limits
There is a rate limit of 100 requests per `access_token` per 10 seconds for V3 API calls through the API gateway. If you exceed the rate limit, expect the following response from the API:
**HTTP Status Code**: 429 Too Many Requests
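SailPoint does not prescribe a retry strategy for 429 responses here, but a common client-side pattern is exponential backoff. A hedged Python sketch; `send` stands in for whatever call you make and must return a `(status, body)` pair:

```python
import time

def request_with_backoff(send, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Call `send` and retry with exponential backoff while the API answers 429."""
    status, body = send()
    for attempt in range(max_retries):
        if status != 429:
            break
        sleep(base_delay * (2 ** attempt))  # wait 1s, 2s, 4s, ... between retries
        status, body = send()
    return status, body
```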
## Authorization
Each API resource requires a specific level of authorization attached to your `access_token`. You can view these levels of authorization in the [user level access matrix](https://documentation.sailpoint.com/saas/help/common/users/user_level_matrix.html). Review the authorization constraints for each API endpoint to understand the user level needed to invoke the endpoint. Tokens generated outside of a user context, like the [Client Credentials](./authentication.md#client-credentials-grant-flow) grant type, are limited in the endpoints they can call. If your token does not have permission to call an endpoint, you will receive the following response:
**HTTP Status Code**: 403 Forbidden
## API Tools
There are several API tools that make exploring and testing APIs easier than using the command line or a programming language. One tool is [Postman](https://www.postman.com/downloads/). SailPoint provides an official Postman workspace where our collections are always up to date with the latest API changes. [Click here](https://developer.sailpoint.com/discuss/t/official-identitynow-postman-workspace/6153) to get started with our Postman workspace.
---
pagination_label: Rate Limiting
sidebar_label: Rate Limiting
sidebar_position: 4
sidebar_class_name: rateLimit
keywords: ['rate limit']
description: There is a rate limit of 100 requests per access_token per 10 seconds for V3 API calls through the API gateway.
tags: ['Rate Limit']
---
## Rate Limits
There is a rate limit of 100 requests per `access_token` per 10 seconds for V3 API calls through the API gateway. If you exceed the rate limit, expect the following response from the API:
**HTTP Status Code**: 429 Too Many Requests
---
pagination_label: Standard Collection Parameters
sidebar_label: Standard Collection Parameters
sidebar_position: 3
sidebar_class_name: standardCollectionParameters
keywords: ['standard collection parameters']
description: Many endpoints in the IdentityNow API support a generic syntax for paginating, filtering and sorting the results.
tags: ['Standard Collection Parameters']
---
Many endpoints in the IdentityNow API support a generic syntax for paginating, filtering and sorting the results. A collection endpoint has the following characteristics:
- The HTTP verb is always GET.
- The last component in the URL is a plural noun (ex. `/v3/public-identities`).
- The return value from a successful request is always an array of JSON objects. This array may be empty if there are no results.
## Paginating Results
Use the following optional query parameters to achieve pagination:
| Name | Description | Default | Constraints |
| --- | --- | --- | --- |
| `limit` | Integer specifying the maximum number of records to return in a single API call. If it is not specified, a default limit is used. | `250` | Maximum of 250 records per page |
| `offset` | Integer specifying the offset of the first result from the beginning of the collection. The **offset** value is record-based, not page-based, and the index starts at 0. For example, **offset=0** and **limit=20** returns records 0-19, but **offset=1** and **limit=20** returns records 1-20. | `0` | Between 0 and the last record index. |
| `count` | Boolean indicating whether a total count is returned, factoring in any filter parameters, in the **X-Total-Count** response header. The value is the total size of the collection that would be returned if **limit** and **offset** were ignored. For example, if the total number of records is 1000, then count=true would return 1000 in the **X-Total-Count** header. Because requesting a total count can have performance impact, do not send **count=true** if that value is not being used. | `false` | Must be `true` or `false` |
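The `limit` and `offset` parameters above suggest a simple loop for walking an entire collection. A Python sketch; `fetch_page` is a stand-in for your actual HTTP call and must return the list of records for one page:

```python
def fetch_all(fetch_page, limit=250):
    """Collect every record from a collection endpoint using limit/offset paging."""
    records, offset = [], 0
    while True:
        page = fetch_page(limit=limit, offset=offset)
        records.extend(page)
        if len(page) < limit:  # a short page means the collection is exhausted
            return records
        offset += limit
```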
## Filtering Results
Any collection with a `filters` parameter supports filtering. This means that an item is only included in the returned array if the filters expression evaluates to true for that item. Check the available request parameters for the collection endpoint you are using to see if it supports filtering.
### Data Types
Filter expressions are applicable to fields of the following types:
- Numeric
- Boolean: either **true** or **false**
- Strings. Enumerated values are a special case of this.
- Date-time. In V3, all date time values are in ISO-8601 format, as specified in [RFC 3339 - Date and Time on the Internet: Timestamps](https://tools.ietf.org/html/rfc3339).
### Filter Syntax
The V3 filter syntax is similar to, but not exactly the same as, that specified by the SCIM standard. These are some key differences:
- A slightly different set of supported operators
- Case-sensitivity of operators. All V3 filter operators are in lowercase; specifying "EQ" instead of "eq" is not allowed.
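String-building helpers make it harder to get the lowercase operators wrong. A small Python sketch based on the operator examples below; the field names are illustrative:

```python
def eq_filter(field, value):
    """Compose a simple V3 equality filter; operators must be lowercase."""
    return f'{field} eq "{value}"'

def in_filter(field, values):
    """Compose an `in` filter over a list of values."""
    quoted = ",".join(f'"{v}"' for v in values)
    return f"{field} in ({quoted})"
```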
### Primitive Operators
These filter operators apply directly to fields and their values:
| Operator | Description | Example |
| --- | --- | --- |
| `ca` | True if the collection-valued field contains all the listed values. | groups ca ("Venezia","Firenze") |
| `co` | True if the value of the field contains the specified value as a substring. (Applicable to string-valued fields only.) | name co "Rajesh" |
| `eq` | True if the value of the field indicated by the first operand is equal to the value specified by the second operand. | identitySummary.id eq "2c9180846e85e4b8016eafeba20c1314" |
| `ge` | True if the value of the field indicated by the first operand is greater than or equal to the value specified by the second operand. | daysUntilEscalation ge 7<br />name ge "Genaro" |
| `gt` | True if the value of the field indicated by the first operand is greater than the value specified by the second operand. | daysUntilEscalation gt 7<br />name gt "Genaro"<br />created gt 2018-12-18T23:05:55Z |
| `in` | True if the field value is in the list of values. | accountActivityItemId in ("2c9180846b0a0583016b299f210c1314","2c9180846b0a0581016b299e82560c1314") |
| `le` | True if the value of the field indicated by the first operand is less than or equal to the value specified by the second operand. | daysUntilEscalation le 7<br />name le "Genaro" |
| `lt` | True if the value of the field indicated by the first operand is less than the value specified by the second operand. | daysUntilEscalation lt 7<br />name lt "Genaro"<br />created lt 2018-12-18T23:05:55Z |
| `ne` | True if the value of the field indicated by the first operand is not equal to the value specified by the second operand. | type ne "ROLE" |
| `pr` | True if the field is present, that is, not null. | pr accountRequestInfo |
| `sw` | True if the value of the field starts with the specified value. (Applicable to string-valued fields only.) | name sw "Rajesh" |
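To make the operator semantics concrete, here is a small illustrative evaluator. It is only a sketch: the `PRIMITIVE_OPS` name is invented, the real evaluation happens server-side in the V3 API, and this version ignores the platform's mostly case-insensitive string comparisons.

```python
# Illustrative only: a tiny in-memory model of the primitive operators.
# Not part of any SailPoint API; comparisons here are case-sensitive.
PRIMITIVE_OPS = {
    "eq": lambda field, value: field == value,
    "ne": lambda field, value: field != value,
    "gt": lambda field, value: field > value,
    "ge": lambda field, value: field >= value,
    "lt": lambda field, value: field < value,
    "le": lambda field, value: field <= value,
    "co": lambda field, value: value in field,             # substring
    "sw": lambda field, value: field.startswith(value),    # starts-with
    "in": lambda field, values: field in values,           # value in list
    "ca": lambda field, values: set(values) <= set(field), # contains all
    "pr": lambda field: field is not None,                 # present, not null
}

# Examples mirroring the table above:
assert PRIMITIVE_OPS["co"]("Rajesh Kumar", "Rajesh")
assert PRIMITIVE_OPS["ca"](["Venezia", "Firenze", "Roma"], ["Venezia", "Firenze"])
assert not PRIMITIVE_OPS["pr"](None)
```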
### Composite Operators
These operators are applied to other filter expressions:
| Operator | Description | Example |
| --- | --- | --- |
| `and` | True if both the filter-valued operands are true. | startDate gt 2018 and name sw "Genaro" |
| `not` | True if the filter-valued operand is false. | not groups ca ("Venezia","Firenze") |
| `or` | True if either of the filter-valued operands is true. | startDate gt 2018 or name sw "Genaro" |
### Escaping Special Characters in a Filter
Certain characters must be escaped before they can be used in a filter expression. For example, the following filter expression attempting to find all sources with the name `#Employees` will produce a 400 error:
`/v3/sources?filters=name eq "#Employees"`
To properly escape this filter, do the following:
`/v3/sources?filters=name eq "%23Employees"`
If you are searching for a string containing double quotes, use the following escape sequence:
`/v3/sources/?filters=name eq "\"Employees\""`
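In practice, most HTTP clients percent-encode these characters for you. If yours does not, the standard library can do it, as in this Python sketch (the path is the example endpoint above; `quote` applies percent-encoding to the whole expression, which also covers spaces and double quotes):

```python
from urllib.parse import quote

filter_expr = 'name eq "#Employees"'

# Percent-encode the expression for use in a URL: '#' becomes %23,
# spaces become %20, and double quotes become %22.
url = "/v3/sources?filters=" + quote(filter_expr)
print(url)
# → /v3/sources?filters=name%20eq%20%22%23Employees%22
```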
The following table lists the special characters that are incompatible with `filters` and how to escape them.
| Character | Escape Sequence |
| --------- | --------------- |
### Known Limitations
Although filter expressions are a very general mechanism, individual API endpoints will only support filtering on a specific set of fields that are relevant to that endpoint, and will frequently only support a subset of operations for each field. For example, an endpoint might allow filtering on the name field but not support use of the co operator on that field. Consult the documentation for each API endpoint to determine what fields and operators can be used. Attempts to use an unsupported filter expression will result in a 400 Bad Request response.
Examples:
- `/v3/public-identities?filters=email eq "john.doe@example.com"`
- `/v3/public-identities?filters=firstname sw "john" or email sw "joe"`
- `not prop1 eq val1 or prop2 eq val2 and prop3 eq val3` is equivalent to `(not (prop1 eq val1)) or ((prop2 eq val2) and (prop3 eq val3))`
- `not (prop1 eq val1 or prop2 eq val2) and prop3 eq val3` is equivalent to `(not ((prop1 eq val1) or (prop2 eq val2))) and (prop3 eq val3)`
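The two equivalences above follow the not > and > or precedence described later in this section. Python's boolean operators happen to use the same precedence, so the grouping can be sanity-checked directly; this is only an illustration of the grouping, not of the filter language itself:

```python
# Boolean stand-ins for the three comparisons in the examples above:
prop1_eq_val1, prop2_eq_val2, prop3_eq_val3 = True, False, True

# `not prop1 eq val1 or prop2 eq val2 and prop3 eq val3`
implicit = not prop1_eq_val1 or prop2_eq_val2 and prop3_eq_val3
explicit = (not prop1_eq_val1) or (prop2_eq_val2 and prop3_eq_val3)
assert implicit == explicit

# `not (prop1 eq val1 or prop2 eq val2) and prop3 eq val3`
implicit2 = not (prop1_eq_val1 or prop2_eq_val2) and prop3_eq_val3
explicit2 = (not (prop1_eq_val1 or prop2_eq_val2)) and prop3_eq_val3
assert implicit2 == explicit2
```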
:::info
- You must escape spaces in URLs with `%20`. Most programming languages, frameworks, libraries, and tools do this for you, but some do not. In the event that your tool does not escape spaces, you must format your query as `/v3/public-identities?filters=email%20eq%20"john.doe@example.com"`
- Unless explicitly noted otherwise, strings are compared lexicographically. Most comparisons are not case sensitive. Any situations where the comparisons are case sensitive will be called out.
- Date-times are compared temporally; an earlier date-time is less than a later date-time.
- The usual precedence and associativity of the composite operators apply, with **not** having higher priority than **and**, which in turn has higher priority than **or**. You can use parentheses to override this precedence.
:::
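The temporal comparison rule noted above can be seen with the standard library; the timestamps are taken from the `created` examples in the operator table (note that `datetime.fromisoformat` in older Python versions does not accept a trailing `Z`, so an explicit offset is used here):

```python
from datetime import datetime

# RFC 3339 timestamps, as used by `created gt 2018-12-18T23:05:55Z`
earlier = datetime.fromisoformat("2018-12-18T23:05:55+00:00")
later = datetime.fromisoformat("2019-01-01T00:00:00+00:00")

# An earlier date-time is "less than" a later date-time.
assert earlier < later
```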
### Sorting Results
Result sorting is supported with the standard `sorters` parameter. Its syntax is a set of comma-separated field names. You may optionally prefix each field name with a "-" character, indicating that the sort is descending based on the value of that field. Otherwise, the sort is ascending.
For example, to sort primarily by **type** in ascending order, and secondarily by **modified date** in descending order, use `sorters=type,-modified`
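The comma-separated syntax above is easy to generate or parse on the client side. A short sketch; the helper name is illustrative, not part of any SDK:

```python
def parse_sorters(sorters: str):
    """Split a comma-separated sorters value into (field, direction) pairs."""
    result = []
    for field in sorters.split(","):
        if field.startswith("-"):
            # Leading "-" means descending sort on that field.
            result.append((field[1:], "descending"))
        else:
            result.append((field, "ascending"))
    return result

print(parse_sorters("type,-modified"))
# → [('type', 'ascending'), ('modified', 'descending')]
```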
## Putting it all Together
Pagination, filters, and sorters can be mixed and matched to achieve the desired output for a given collection endpoint. Here are some examples:
- `/v3/public-identities?limit=20&filters=firstname eq "john"&sorters=-name` returns the first 20 identities that have a first name of John, sorted in descending order by full name.
- `/v3/account-activities?limit=10&offset=2&sorters=-created` sorts the results by descending created time, so the most recent activities appear first. The limit and offset return the 3rd page of this sorted response, with 10 records displayed.
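As a sketch, the first example URL can be assembled with standard URL encoding, so the spaces and quotes in the filter are escaped automatically:

```python
from urllib.parse import urlencode, quote

params = {
    "limit": 20,
    "filters": 'firstname eq "john"',
    "sorters": "-name",
}

# quote_via=quote encodes spaces as %20 rather than '+', matching the
# escaping rules described earlier in this section.
query = urlencode(params, quote_via=quote)
url = "/v3/public-identities?" + query
print(url)
# → /v3/public-identities?limit=20&filters=firstname%20eq%20%22john%22&sorters=-name
```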
pagination_label: Access Request Dynamic Approval
sidebar_label: Access Request Dynamic Approval
sidebar_class_name: accessRequestDynamicApproval
keywords:
['event', 'trigger', 'access', 'request', 'dynamic', 'approval', 'available']
description: Fires after an access request is submitted.
slug: /docs/event-triggers/triggers/access-request-dynamic-approval
tags: ['Event Triggers', 'Available Event Triggers', 'Request Response']
---
## Event Context
The Access Request Dynamic Approval event trigger provides a way to route a request to an additional approval step by an identity or a governance group.
When an access request is submitted, the Access Request Dynamic Approval trigger does the following:
- Sends data about the access request and expects a response including the ID of an existing identity or workgroup (i.e. governance group) to add to the approval workflow.
- Based on the ID received, an approval task is assigned to the identity or governance group in IdentityNow for a decision as an additional step after other configured approval requirements are met.
- If the new approver is also the target identity for this request, the manager is assigned instead. If the identity has no manager, a random org admin is assigned.
- If the ID of the additional approver is wrong, then a random org admin is assigned.
- You can choose to **NOT** add an additional approver by providing an empty object as the response to the triggered REST request.
You can use this trigger to develop logic outside of IdentityNows out-of-the-box offerings to route an approval step to users such as the following:
- The recipients department head
- The recipients cost center
## Configuration
This is a `REQUEST_RESPONSE` trigger type. For more information about how to respond to a `REQUEST_RESPONSE` type trigger, see [responding to a request response type trigger](../responding-to-a-request-response-trigger.mdx). This trigger intercepts newly submitted access requests and allows the subscribing service to add one additional identity or governance group as the last step in the approver list for the access request.
The subscribing service will receive the following input from the trigger service.
<!-- The input schema can be found in the [API specification](https://developer.sailpoint.com/apis/beta/#section/Access-Request-Dynamic-Approver-Event-Trigger-Input): -->
}
```
The subscribing service can use this information to make a decision about whether to add additional approvers to the access request.
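As a framework-agnostic sketch of that decision step (the function name and placeholder logic are invented for illustration): per the trigger's contract described above, returning an empty JSON object tells the trigger service not to add an extra approver.

```python
import json

def decide_additional_approver(event: dict) -> str:
    """Return the JSON body this subscriber would send back to the trigger.

    Real logic would inspect the access request in `event` (department,
    cost center, etc.). Here we only demonstrate the no-op response.
    """
    needs_extra_approval = False  # placeholder decision
    if not needs_extra_approval:
        # Empty object: do not add an additional approver.
        return json.dumps({})
    # Otherwise, respond with the identity or governance group payload
    # shown in the examples later in this section.
    raise NotImplementedError

print(decide_additional_approver({"requestedFor": "..."}))
# → {}
```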
<!-- The output schema can be found in the [API specification](https://developer.sailpoint.com/apis/beta/#section/Access-Request-Dynamic-Approver-Event-Trigger-Output). -->
To add an identity to the approver list, the subscribing service responds to the event trigger with the following payload:
```json
{
  ...
}
```
To add a governance group to the approver list, the subscribing service responds to the event trigger with the following payload:
```json
{
  ...
}
```
If no identity or group should be added to a particular access request, then the subscribing service responds with the following object:
```json
{}
```
sidebar_label: Access Request Postapproval
sidebar_class_name: accessRequestPostapproval
keywords:
[
'event',
'trigger',
'access',
'request',
'postapproval',
'post',
'approval',
'available',
]
description: Fires after an access request is approved.
slug: /docs/event-triggers/triggers/access-request-postapproval
tags: ['Event Triggers', 'Available Event Triggers', 'Fire and Forget']
---
## Event Context
The SailPoint IdentityNow platform now includes event triggers within the Access Request Approval workflow. The Access Request Postapproval event trigger provides more proactive governance and ensures users can quickly obtain needed access.
![Flow](./img/access-request-postapproval-path.png)
When an access request is approved, some use cases for this trigger include the following:
- Notify the requester that the access request has been approved or denied.
- Notify the administrator or system to take the appropriate provisioning actions for the requested access.
- Notify a third party system to trigger another action (e.g. customer feedback survey, initiate another business process), or it can be used for auditing once an access request decision has been made.
The Access Request event trigger is a flexible way to extend the Access Request workflow after access is approved for the requester.
This is an example input from this trigger:
title: Access Request Preapproval
pagination_label: Access Request Preapproval
sidebar_label: Access Request Preapproval
sidebar_class_name: accessRequestPreapproval
keywords: ['event', 'trigger', 'access', 'request', 'preapproval', 'available']
description: Fires after an access request is submitted.
slug: /docs/event-triggers/triggers/access-request-preapproval
tags: ['Event Triggers', 'Available Event Triggers', 'Request Response']
---
## Event Context
The platform now includes event triggers within the Access Request approval workflow. The Access Request Submitted event trigger provides more proactive governance, ensures users can quickly obtain needed access, and helps with more preventative measures towards unintended access.
![Flow](./img/access-request-preapproval-path.png)
When an access request is submitted, some use cases for this trigger include the following:
- Provide the approver with additional context about the access request, like any Separation of Duties (SOD) policy violations, for example.
- Notify the approver through a different medium, such as Slack or Outlook Actionable Messages.
- Send a Terms of Agreement form of the requested Application to be signed by the access requester.
- On average, you can expect about 1 access request for every 4 identities within your org per day, and about 1 to 2 access requests within a 10 second period.
Additional use cases include the following:
- Send a Slack Notification to the approver or an approval channel and approve the request within Slack.
- Create an Outlook Actionable Message.
- Create a Google Doc for the requester to fill out and submit.
## Configuration
This is a `REQUEST_RESPONSE` trigger type. For more information about how to respond to a `REQUEST_RESPONSE` type trigger, see [responding to a request response type trigger](../responding-to-a-request-response-trigger.mdx). This trigger intercepts newly submitted access requests and allows the subscribing service to perform a preliminary approval/denial before the access request moves to the next approver in the chain.
The subscribing service will receive the following input from the trigger service.
<!-- The input schema can be found in the [API specification](https://developer.sailpoint.com/apis/beta/#section/Access-Request-Pre-Approval-Event-Trigger-Input): -->
}
```
The subscribing service can use this information to make a decision about whether to approve or deny the request.
<!-- The output schema can be found in the [API specification](https://developer.sailpoint.com/apis/beta/#section/Access-Request-Pre-Approval-Event-Trigger-Output). -->
To approve an access request, the subscribing service responds to the event trigger with the following payload:
```json
{
  ...
}
```
To deny an access request, the subscribing service responds to the event trigger with the following payload:
```json
{
  ...
}
```
This event trigger interrupts the normal workflow for access requests. Access requests can only proceed if the subscribing service responds within the allotted time by approving the request. If the subscribing service is unresponsive or responds with an incorrect payload, access requests will fail after the **Separation of Duties** check. If you see numerous access requests failing at this stage, verify that your subscribing service itself is operating correctly.
![AR failed](./img/access-request-preapproval-failure.png)
pagination_label: Account Aggregation Completed
sidebar_label: Account Aggregation Completed
sidebar_class_name: accountAggregationCompleted
keywords:
['event', 'trigger', 'account', 'aggregation', 'completed', 'available']
description: Fires after an account aggregation completed, terminated, or failed.
slug: /docs/event-triggers/triggers/account-aggregation-completed
tags: ['Event Triggers', 'Available Event Triggers', 'Fire and Forget']
---
## Event Context
The platform has introduced an event trigger within the Source Aggregation workflow to provide additional monitoring capabilities. This trigger helps ensure account aggregations are performing as expected and identity data always reflects current source account information for better identity governance. Aggregations connect to a source and collect account information from the source to discover the number of accounts that have been added, changed, or removed. For more information about account aggregation, see [Account Aggregation Data Flow](https://community.sailpoint.com/t5/Technical-White-Papers/Account-Aggregation-Data-Flow/ta-p/79914#toc-hId-1367430234).
![Flow](./img/aggregation-diagram.png)
After the initial collection of accounts in the source system during aggregation completes, some use cases for this trigger include the following:
- Notify an administrator that IdentityNow was able to successfully connect to the source system and collect source accounts.
- Notify an administrator when the aggregation is terminated manually during the account collection phase.
- Notify an administrator or system (e.g. PagerDuty) that IdentityNow failed to collect accounts during aggregation and indicate required remediation for the source system.
:::info
The source account activity is summarized in `stats`, as in the example discussed below.
In this example, there are 10 changed accounts (`scanned` (200) - `unchanged` (190)). Changed accounts include accounts that are `added` (6) and accounts that are `changed` (4), equaling 10 accounts. Removed accounts may or may not be included in the changed account total, depending on the source. For this example, `removed` (3) may be considered a changed account in some sources and would show a `scanned` count of 203 instead of 200.
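The arithmetic in that example can be written out explicitly. The values below are copied from the discussion above; the `stats` dictionary is a sketch of only the counters mentioned there, not the full trigger payload:

```python
stats = {"scanned": 200, "unchanged": 190, "added": 6, "changed": 4, "removed": 3}

# Changed accounts = scanned - unchanged
changed_total = stats["scanned"] - stats["unchanged"]
assert changed_total == 10
assert changed_total == stats["added"] + stats["changed"]

# In sources where removed accounts also count as scanned, this example
# would show a scanned count of 203 instead of 200.
assert stats["scanned"] + stats["removed"] == 203
```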
> This event trigger fires even without changed accounts. The unchanged count will match the scanned accounts in the response.
The status of the aggregation can be one of three possible values:
- **Success**: Account collection was successful and aggregation can move to the next step.
- **Error**: There is a failure in account collection or an issue connecting to the source. The `errors` vary by source.
- **Termination**: The aggregation was terminated during the account collection phase. Aggregation can be terminated when the account deletion threshold is exceeded. For example, an account delete threshold of 10% is set by default for the source, and if the number of `removed` accounts for the above example is 21 (more than 10% of `scanned` accounts (200)), the aggregation is cancelled.
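The termination example can likewise be checked with a few lines; the threshold and counts are taken from the text above:

```python
scanned = 200
delete_threshold = 0.10  # 10% default account delete threshold for the source

# 21 removed accounts exceeds 10% of 200 scanned, so aggregation is cancelled:
assert 21 > delete_threshold * scanned

# The 3 removed accounts in the earlier stats example would not trip it:
assert not (3 > delete_threshold * scanned)
```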
![Account_Delete_Threshold](./img/aggregation-delete-threshold.png)
title: Identity Attributes Changed
pagination_label: Identity Attributes Changed
sidebar_label: Identity Attributes Changed
sidebar_class_name: identityAttributesChanged
keywords: ['event', 'trigger', 'identity', 'attributes', 'changed', 'available']
description: Fires after one or more identity attributes changed.
slug: /docs/event-triggers/triggers/identity-attribute-changed
tags: ['Event Triggers', 'Available Event Triggers', 'Fire and Forget']
---
## Event Context
![Flow](./img/trigger-path.png)
Identity Attribute Changed events occur when any attributes aggregated from an authoritative source differ from the current attributes for an identity during an identity refresh. See [Configuring Correlation](https://community.sailpoint.com/t5/Connectors/Configuring-Correlation/ta-p/74045) for more information.
This event trigger provides a flexible way to extend Joiner-Mover-Leaver processes. This provides more proactive governance and ensures users can quickly get necessary access when they enter your organization.
Some use cases for this trigger include the following:
- Notify an administrator or system to take the appropriate provisioning actions as part of the Mover workflow.
- Notify a system to trigger another action, like triggering a certification campaign when an identity's manager changes, for example.
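A subscriber implementing the Mover use cases above might branch on which attributes changed. The payload shape below is a simplified assumption; consult the trigger's example input for the exact schema:

```python
def route_attribute_changes(event: dict) -> list[str]:
    """Map changed identity attributes to follow-up actions (illustrative only)."""
    actions = []
    for change in event.get("changes", []):
        if change.get("attribute") == "manager":
            # e.g. kick off a certification campaign when the manager changes
            actions.append("launch-certification-campaign")
        elif change.get("attribute") == "department":
            actions.append("notify-provisioning-system")
    return actions

event = {
    "identity": {"name": "john.doe"},
    "changes": [{"attribute": "manager", "oldValue": "alice", "newValue": "bob"}],
}
print(route_attribute_changes(event))  # ['launch-certification-campaign']
```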
This is an example input from this trigger:


@@ -4,35 +4,24 @@ title: Identity Created
pagination_label: Identity Created
sidebar_label: Identity Created
sidebar_class_name: identityCreated
keywords: ['event', 'trigger', 'identity', 'created', 'available']
description: Fires after an identity is created.
slug: /docs/event-triggers/triggers/identity-created
tags: ['Event Triggers', 'Available Event Triggers', 'Fire and Forget']
---
## Event Context
![Flow](./img/identity-created-path.png)
Identity Created events occur when a new identity is detected during an aggregation and refresh from an authoritative source. New identities are detected when an account from the authoritative source is not correlated to an existing identity. For more information, see [Configuring Correlation](https://community.sailpoint.com/t5/Connectors/Configuring-Correlation/ta-p/74045). The Identity Created event contains all of the identity attributes as they are configured in the identity profile. For more information, see [Mapping Identity Profiles](https://community.sailpoint.com/t5/Admin-Help/Mapping-Identity-Profiles/ta-p/77877).
This event trigger provides a flexible way to extend Joiner-Mover-Leaver processes. This provides more proactive governance and ensures users can quickly get necessary access when they enter your organization.
Some use cases for this trigger include the following:
- Notify an administrator or system to take the appropriate birthright provisioning actions as part of the Joiner workflow.
- Notify a third party system to trigger another action (e.g. create an onboarding experience for a new hire).
This is an example input from this trigger:


@@ -4,19 +4,16 @@ title: Available Event Triggers
pagination_label: Available Event Triggers
sidebar_label: Available Event Triggers
sidebar_class_name: availableEventTriggers
keywords: ['event', 'trigger', 'available']
description: Event triggers that are generally available.
sidebar_position: 7
slug: /docs/event-triggers/available
tags: ['Event Triggers', 'Available Event Triggers']
---
import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
The event triggers in this section are generally available to all IDN tenants. Event triggers currently in development are considered [Early Access](../early-access/index.mdx) and require a support ticket to be enabled in a tenant.
<DocCardList items={useCurrentSidebarCategory().items} />


@@ -5,31 +5,24 @@ pagination_label: Provisioning Action Completed
sidebar_label: Provisioning Action Completed
sidebar_class_name: provisioningActionCompleted
keywords:
['event', 'trigger', 'provisioning', 'action', 'completed', 'available']
description: Fires after a provisioning action completed on a source.
slug: /docs/event-triggers/triggers/provisioning-action-completed
tags: ['Event Triggers', 'Available Event Triggers', 'Fire and Forget']
---
## Event Context
![Flow](./img/provisioning-action.png)
The Provisioning Action Completed event trigger notifies subscribed applications after the action is completed. This event trigger provides a flexible way to extend the Provisioning workflow after access has changed for an identity within SailPoint. This provides more proactive governance and ensures users can quickly get necessary access.
Some use cases for this trigger include the following:
- Notify the requester that the access request has been fulfilled.
- Notify an application user and/or access certifier that access has been revoked.
- Notify an administrator or system that provisioning has been completed.
- Notify a third party system to trigger another action, like continuing additional provisioning actions or auditing of provisioning activities, for example.
This is an example input from this trigger:
@@ -79,26 +72,20 @@ This is an example input from this trigger:
Before consuming this event trigger, the following prerequisites must be met:
- An OAuth client configured with authority as `ORG_ADMIN`.
- An org enabled with the `ARSENAL_ALLOW_POSTPROVISIONING_TRIGGERS` feature flag.
- Connectors configured for provisioning into target applications.
- An org configured for automated provisioning. See the Event Context section for specific setup.
To provision to a target application, the connector for the source must support the following connector features:
- `ENABLE` - Can enable or disable accounts.
- `UNLOCK` - Can lock or unlock accounts.
- `PROVISIONING` - Can write to accounts. Currently, the trigger does not include attribute synchronization.
- `PASSWORD` - Can update passwords for accounts.
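Before relying on this trigger for a given source, a subscriber can check which of these features the connector reports. The `features` field mirrors the list returned by the IdentityNow sources API, but treat the exact field name and values here as assumptions:

```python
REQUIRED_FEATURES = {"ENABLE", "UNLOCK", "PROVISIONING", "PASSWORD"}

def missing_features(source: dict) -> set:
    """Return the connector features this source does not report."""
    return REQUIRED_FEATURES - set(source.get("features", []))

# Hypothetical source record; a real one would come from the sources API.
source = {"name": "Corporate AD", "features": ["ENABLE", "PROVISIONING", "PASSWORD"]}
print(missing_features(source))  # {'UNLOCK'}
```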
For a list of supported connectors and features, see [Supported Connectors for IdentityNow](https://community.sailpoint.com/t5/Connectors/Supported-Sources-Connectors-for-IdentityNow/ta-p/80019).
For information about configuring sources for provisioning, see [How can I edit the Create Profile on a source?](https://community.sailpoint.com/t5/Connectors/How-can-I-edit-the-Create-Profile-on-a-source/ta-p/74429).
Provisioning events occur in these workflows:
@@ -110,46 +97,34 @@ Provisioning events occur in these workflows:
### Access Request
When an Access Request approval process has completed with all positive approvals, the access request is fulfilled with provisioning to the target application with requested access.
![Flow](./img/provisioning-access-request.png)
Access acquired through a role request can also be revoked, and those changes can be provisioned to an account.
The following steps must be completed:
- Source Connector configured for `PROVISIONING`. Access requests in SailPoint SaaS currently do not support `ACCOUNT_ONLY_REQUEST` or `ADDITIONAL_ACCOUNT_REQUEST`.
- Source entitlements mapped in Account Schema.
- Access profile using source entitlements. Role setup is optional.
- Application enabled for Access Request.
> **NOTE:** There is no indication to the approver in the IdentityNow UI that the approval is for a revoke action. This must be considered for all usage of these APIs.
![Flow](./img/provisioning-access-request-2.png)
### Certification
Provisioning removal of accounts acquired through Access Request occurs through certifications.
> **Note:** Certifications cannot revoke access acquired via role membership or lifecycle changes.
![Flow](./img/provisioning-access-request-certification.png)
### Role Membership
Access defined in access profiles can be grouped into roles, and roles can be assigned to identities using `COMPLEX_CRITERION` or `IDENTITY_LIST`. See [Admin UI](https://community.sailpoint.com/t5/Admin-Help/Standard-Role-Membership-Criteria-Options/ta-p/74392) for information on how to set `COMPLEX_CRITERION`.
> **Note:** `CUSTOM` role membership through rules is no longer supported.
@@ -165,8 +140,7 @@ This trigger fires when an account has been provisioned, enabled, or disabled.
To provision access with lifecycle states, the following prerequisites must be met:
- Source connector configured for `ENABLE` to enable/disable accounts and/or `PROVISIONING` to create/update/delete accounts.
- Source entitlements mapped from an authoritative source.
- Source entitlements mapped to access profiles.
- Identity profile using an authoritative source.
@@ -174,14 +148,11 @@ To provision access with lifecycle states, the prerequisites must be met:
### Password Management
Password changes can be provisioned to target applications through password reset or password interception. Also, unlocking of accounts can be provisioned via password change within SailPoint SaaS.
For password management setup, you must configure the following:
- Source connector configured for `PASSWORD` for password changes and/or `UNLOCK` for unlocking changes.
- Password sync group
## Additional Information and Links


@@ -4,41 +4,26 @@ title: Saved Search Complete
pagination_label: Saved Search Complete
sidebar_label: Saved Search Complete
sidebar_class_name: savedSearchComplete
keywords: ['event', 'trigger', 'saved', 'search', 'complete', 'available']
description: Fires after a scheduled search completed.
slug: /docs/event-triggers/triggers/saved-search-completed
tags: ['Event Triggers', 'Available Event Triggers', 'Fire and Forget']
---
## Event Context
![Flow](./img/saved-search-path.png)
Users can subscribe to Saved Searches and receive an email of a report generated from the saved search. For example, a user can save a search query called "Identities with upcoming end dates" and create a subscription to receive a daily report showing identities with an end date within 10 days from the current date. This event trigger can also notify an external HTTP application that a report generated from a saved search subscription is available to be processed.
Saved Search Completed events occur based on the schedules set for saved search subscriptions. For example, if you have a scheduled saved search for Monday, Tuesday, Wednesday, Thursday, Friday at 6:00 GMT, your HTTP endpoint will also receive a notification at those times. This can be set using the `schedule` object in the [create scheduled search endpoint](/idn/api/v3/scheduled-search-create).
To receive this event when a saved search query does not have any results, set `emailEmptyResults` to `TRUE`. You can also set the expiration date in the `expiration` field within the `schedule` object. Your HTTP endpoint will stop receiving these events when the scheduled search expires.
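Putting the scheduling options above together, a subscription body for the create scheduled search endpoint might look like the sketch below. Field names beyond `schedule`, `expiration`, and `emailEmptyResults` are assumptions; check the API reference for the exact schema:

```python
# Sketch of a scheduled-search subscription body (illustrative, not verified
# against the live API). The saved search ID is a placeholder.
subscription = {
    "savedSearchId": "hypothetical-saved-search-id",  # placeholder, not a real ID
    "schedule": {
        "type": "WEEKLY",
        # Fire Monday through Friday at 6:00 GMT, per the example above.
        "days": {"type": "LIST", "values": ["MON", "TUE", "WED", "THU", "FRI"]},
        "hours": {"type": "LIST", "values": ["6"]},
        # Your HTTP endpoint stops receiving events after this date.
        "expiration": "2023-06-01T00:00:00Z",
    },
    # Receive the event even when the query returns no results.
    "emailEmptyResults": True,
}
print(len(subscription["schedule"]["days"]["values"]))  # 5
```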
Some use cases for this trigger include the following:
- Perform quality control, such as continuously checking for Separation of Duties (SOD) violations.
- Respond to upcoming joiner-mover-leaver scenarios, such as deprovisioning access before an employee's separation date.
This is an example input from this trigger:


@@ -4,19 +4,17 @@ title: Source Created
pagination_label: Source Created
sidebar_label: Source Created
sidebar_class_name: sourceCreated
keywords: ['event', 'trigger', 'source', 'created', 'available']
description: Fires after a source is created.
slug: /docs/event-triggers/triggers/source-created
tags: ['Event Triggers', 'Available Event Triggers', 'Fire and Forget']
---
## Event Context
Source Created events occur when a new source is successfully created via the API or the Admin UI. Some use cases for this trigger include the following:
- Provide evidence to show auditors that connector logic and sources are not manipulated outside of proper change control processes.
- Auto-configure new sources with proper owners using external data sources.
This is an example input from this trigger:


@@ -4,19 +4,17 @@ title: Source Deleted
pagination_label: Source Deleted
sidebar_label: Source Deleted
sidebar_class_name: sourceDeleted
keywords: ['event', 'trigger', 'source', 'deleted', 'available']
description: Fires after a source is deleted.
slug: /docs/event-triggers/triggers/source-deleted
tags: ['Event Triggers', 'Available Event Triggers', 'Fire and Forget']
---
## Event Context
Source Deleted events occur when a source is successfully deleted via the API or the Admin UI. Some use cases for this trigger include the following:
- Provide evidence to show auditors that connector logic and sources are not manipulated outside of proper change control processes.
- Alert admins when a source was deleted incorrectly.
This is an example input from this trigger:


@@ -4,19 +4,17 @@ title: Source Updated
pagination_label: Source Updated
sidebar_label: Source Updated
sidebar_class_name: sourceUpdated
keywords: ['event', 'trigger', 'source', 'updated', 'available']
description: Fires after a source is updated.
slug: /docs/event-triggers/triggers/source-updated
tags: ['Event Triggers', 'Available Event Triggers', 'Fire and Forget']
---
## Event Context
Source Updated events occur when configuration changes are made to a source. Some use cases for this trigger include the following:
- Provide evidence to show auditors that connector logic and sources are not manipulated outside of proper change control processes.
- Trigger review of an updated source.
This is an example input from this trigger:


@@ -4,33 +4,26 @@ title: VA Cluster Status Change
pagination_label: VA Cluster Status Change
sidebar_label: VA Cluster Status Change
sidebar_class_name: vaClusterStatusChange
keywords: ['event', 'trigger', 'va', 'cluster', 'status', 'change', 'available']
description: Fires after the status of a VA cluster has changed.
slug: /docs/event-triggers/triggers/va-cluster-status-change
tags: ['Event Triggers', 'Available Event Triggers', 'Fire and Forget']
---
## Event Context
VA (Virtual Appliance) Cluster Status Change Events occur when a health check is run on a VA cluster and the health status is different from the previous health check. Customers can use this trigger to monitor all the health status changes of their VA clusters.
Some use cases for this trigger include the following:
- Create real-time health dashboards for VA clusters.
- Notify an administrator or system to take the appropriate actions when a health status changes.
Additional notes about VA Cluster Status Changes:
- VA cluster health checks run every 30 minutes.
- This trigger will invoke on any VA cluster health status change (i.e. healthy -> unhealthy, unhealthy -> healthy).
- See [troubleshooting virtual appliances](https://community.sailpoint.com/t5/IdentityNow-Connectors/Virtual-Appliance-Troubleshooting-Guide/ta-p/78735) for more information.
Healthy Cluster Source


@@ -4,16 +4,15 @@ title: Identity Deleted
pagination_label: Identity Deleted
sidebar_label: Identity Deleted
sidebar_class_name: identityDeleted
keywords: ['event', 'trigger', 'identity', 'deleted', 'early access']
description: Fires after an identity is deleted.
slug: /docs/event-triggers/triggers/identity-deleted
tags: ['Event Triggers', 'Early Access Event Triggers', 'Fire and Forget']
---
:::info
This is an early access event trigger. Please contact support to have it enabled in your tenant.
:::
@@ -21,25 +20,14 @@ in your tenant.
![Flow](./img/identity-deleted-path.png)
Identity deleted events occur when an identity's associated account is deleted from the identity's authoritative source. After accounts are aggregated and the identity refresh process finds an identity that is not correlated to an account, the associated identity is deleted from IdentityNow. For more information, see [Configuring Correlation](https://community.sailpoint.com/t5/Connectors/Configuring-Correlation/ta-p/74045). The Identity deleted event contains any identity attributes as they are configured in the identity profile. For more information, see [Mapping Identity Profiles](https://community.sailpoint.com/t5/Admin-Help/Mapping-Identity-Profiles/ta-p/77877).
This event trigger provides a flexible way to extend joiner-mover-leaver processes. This provides more proactive governance and ensures users can quickly get necessary access when they enter your organization.
Some use cases for this trigger include the following:
- Notify an administrator or system to take the appropriate provisioning actions as part of the leaver workflow.
- Notify a system to trigger another action (e.g. deactivate an employee's badge upon termination).
This is an example input from this trigger:


@@ -4,19 +4,16 @@ title: Early Access Event Triggers
pagination_label: Early Access Event Triggers
sidebar_label: Early Access Event Triggers
sidebar_class_name: earlyAccessEventTriggers
keywords: ['event', 'trigger', 'early access']
description: Event triggers that require a support ticket to enable.
sidebar_position: 8
slug: /docs/event-triggers/early-access
tags: ['Event Triggers', 'Early Access Event Triggers']
---
import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
New event triggers undergoing active development may appear in the early access event trigger list. You can use these triggers by submitting a support ticket to have them enabled in your tenant. Because these triggers are early access, they are subject to change at any time.
<DocCardList items={useCurrentSidebarCategory().items} />


@@ -4,28 +4,23 @@ title: Source Account Created
pagination_label: Source Account Created
sidebar_label: Source Account Created
sidebar_class_name: sourceAccountCreated
keywords: ['event', 'trigger', 'source', 'account', 'created', 'early access']
description: Fires after a source account is created.
slug: /docs/event-triggers/triggers/source-account-created
tags: ['Event Triggers', 'Early Access Event Triggers', 'Fire and Forget']
---
:::info
This is an early access event trigger. Please contact support to have it enabled in your tenant.
:::
## Event Context
Source Account Created events occur after a new account is detected during an account aggregation and refresh from a source. This trigger cannot determine whether account creation happened on a source or in IdentityNow. It omits events related to IdentityNow accounts, such as the IdentityNow Admin.
Use this event trigger to watch for new accounts with highly privileged access, such as an account created in Active Directory Domain Admins.
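A subscriber implementing the privileged-access watch above might scan the new account's group memberships. The `attributes.memberOf` shape is an assumption for illustration; the trigger's example input shows the real schema:

```python
WATCHED_GROUPS = {"Domain Admins", "Enterprise Admins"}  # example watch list

def is_privileged(event: dict) -> bool:
    """Flag a Source Account Created event whose account is in a watched group."""
    groups = event.get("attributes", {}).get("memberOf", [])
    return any(group in WATCHED_GROUPS for group in groups)

event = {"sourceName": "Active Directory", "attributes": {"memberOf": ["Domain Admins", "Users"]}}
print(is_privileged(event))  # True
```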
This is an example input from this trigger:


@@ -4,30 +4,23 @@ title: Source Account Deleted
pagination_label: Source Account Deleted
sidebar_label: Source Account Deleted
sidebar_class_name: sourceAccountDeleted
keywords: ['event', 'trigger', 'source', 'account', 'deleted', 'early access']
description: Fires after a source account is deleted.
slug: /docs/event-triggers/triggers/source-account-deleted
tags: ['Event Triggers', 'Early Access Event Triggers', 'Fire and Forget']
---
:::info
This is an early access event trigger. Please contact support to have it enabled in your tenant.
:::
## Event Context
Source Account Deleted events occur whenever an account is deleted from its source during an account aggregation operation. The account may have been manually removed or deleted as the result of a provisioning event. The trigger cannot determine whether the account deletion happened on a source or in IdentityNow. It omits events related to IdentityNow accounts, such as the IdentityNow Admin.
Use this event trigger to watch for deletions of authoritative accounts, such as an account deleted on Workday.
This is an example input from this trigger:


@@ -4,36 +4,29 @@ title: Source Account Updated
pagination_label: Source Account Updated
sidebar_label: Source Account Updated
sidebar_class_name: sourceAccountUpdated
keywords: ['event', 'trigger', 'source', 'account', 'updated', 'early access']
description: Fires after a source account is updated.
pagination_next: docs/identity-now/event-triggers/event-triggers
slug: /docs/event-triggers/triggers/source-account-updated
tags: ['Event Triggers', 'Early Access Event Triggers', 'Fire and Forget']
---
:::info
This is an early access event trigger. Please contact support to have it enabled in your tenant.
:::
## Event Context
Source Account Updated events occur whenever one or more account attributes change on a single account during an account aggregation operation. The trigger cannot determine whether the account update happened on a source or in IdentityNow. It omits events related to IdentityNow accounts, such as the IdentityNow Admin. The following actions are considered updates:
- Update account attributes
- Enable or disable an account
- Lock or unlock source accounts
- Change source account password
Use this event trigger to watch for updates to accounts that add highly privileged access, such as an account that is granted privileged access on a sensitive source.
This is an example input from this trigger:
---
pagination_label: Filtering Events
sidebar_label: Filtering Events
sidebar_position: 4
sidebar_class_name: filteringEvents
keywords: ['filtering', 'events']
description: Many triggers can produce a staggering amount of events if left unfiltered. Event filtering helps you solve this problem.
slug: /docs/event-triggers/filtering-events
tags: ['Event Triggers']
---
## What Is a Filter
Many triggers can produce a staggering amount of events if left unfiltered, resulting in more network traffic and more processing time on a subscribing service. Your subscribing service usually only needs to be notified of events containing a key attribute or value you want to process. For example, the Identity Attributes Changed trigger emits an event whenever an identity has a change in attributes. This can occur during the mover process when an identity changes departments or a manager is promoted, resulting in several identities receiving a new manager. Rather than inundate your subscribing service with every identity change, you can use an event trigger filter to specify which events your service is interested in processing.
## Benefits of Using Filters
Network bandwidth and processing power come at a cost, especially when you are using managed solutions like AWS or no-code providers like Zapier. Without filtering, a subscribing service would be sent every single event that the trigger receives. The first thing any subscriber must do in this scenario is inspect each event to figure out which ones it must process and which ones it can ignore. Taking this approach with managed providers that charge per invocation, like AWS Lambda, can become expensive. Furthermore, some no-code providers may put a limit on the total number of invocations that a service can make in a given month, which would be quickly exhausted with this approach. Trigger filters take the filtering logic out of your subscribing service and place it on the event trigger within SailPoint, so you only receive the events matching your filter criteria.
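To make the cost concrete, here is a minimal sketch of what an unfiltered subscriber has to do: inspect every delivered event and discard most of them, after already paying for each invocation. The event shape is illustrative, borrowed from the Identity Attributes Changed examples later on this page.

```python
# Without a trigger-side filter, the subscriber receives every event and
# must discard the irrelevant ones itself -- after already paying for the
# invocation. The event shape here is illustrative.
def is_relevant(event):
    """Keep only department changes; everything else is noise."""
    return any(
        change.get("attribute") == "department"
        for change in event.get("changes", [])
    )

delivered = [
    {"changes": [{"attribute": "department", "newValue": "marketing"}]},
    {"changes": [{"attribute": "manager", "newValue": "jane.doe"}]},
    {"changes": [{"attribute": "location", "newValue": "austin"}]},
]
relevant = [event for event in delivered if is_relevant(event)]  # 1 of 3 kept
```

A trigger filter moves this check into SailPoint, so only the one relevant event is ever delivered to your service.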
## Constructing a Filter
Although variable selection in Workflows uses the Goessner implementation of JSONPath, the trigger filter field uses the Jayway implementation.
### Expressions
JSONPath expressions specify a path to an element or array of elements in a JSON structure. Expressions are used to select data in a JSON structure to check for the existence of attributes or to narrow down the data where the filter logic is applied.
| Expression | Description | Example |
| --- | --- | --- |
| $ | **Root** - The root object / element. | $ |
| @ | **Current** - The current object / element of an array. | $.changes[?(@.attribute == "department")] |
| . | **Child operator** - Selects a child element of an object. | $.identity |
| .. | **Recursive descent** - JSONPath borrows this syntax from E4X. | $..id |
| \* | **Wildcard** - All objects / elements regardless of their names. | $.changes[*] |
| [] | **Subscript** - In JavaScript and JSON, it is the native array operator. | $.changes[1].attribute |
| [,] | **Union** - Selects elements of an array. | $.changes[0,1,2] |
| [start:stop:step] | **Array slice** - Selects elements of an array. | $.changes[0:2:1] |
| [:n] | **Array slice** - Selects the first `n` elements of an array. | $.changes[:2] |
| [-n:] | **Array slice** - Selects the last `n` elements of an array. | $.changes[-1:] |
| ?() | **Filter expression** - Applies a filter expression. | $[?($.identity.name == "john.doe")] |
| () | **Script expression** - Applies a script expression. | $.changes[(@.length-1)] |
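As a rough mental model, several of the expressions above map onto plain dictionary and list operations. The sketch below (illustrative event shape, not an official schema) shows the correspondence for the child, subscript, slice, and filter rows:

```python
# Illustrative event; the JSONPath each line mirrors is in the comment.
event = {
    "identity": {"name": "john.doe"},
    "changes": [
        {"attribute": "department", "newValue": "marketing"},
        {"attribute": "manager", "newValue": "jane.doe"},
        {"attribute": "location", "newValue": "austin"},
    ],
}

child = event["identity"]                     # $.identity
subscript = event["changes"][1]["attribute"]  # $.changes[1].attribute
first_two = event["changes"][:2]              # $.changes[:2]
last_one = event["changes"][-1:]              # $.changes[-1:]
dept_changes = [                              # $.changes[?(@.attribute == "department")]
    c for c in event["changes"] if c["attribute"] == "department"
]
```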
### Operators
JSONPath operators provide more options to filter JSON structures.
| Operator | Description | Example |
| --- | --- | --- |
| == | **Equals to** - Evaluates to `true` if operands match. | $[?($.identity.name == "john.doe")] |
| != | **Not equal to** - Evaluates to `true` if operands do not match. | $[?($.identity.name != "george.washington")] |
| > | **Greater than** - Evaluates to `true` if the left operand is greater than the right operand. It works on strings and numbers. | $[?($.attributes.created > '2020-04-27T16:48:33.200Z')] |
| >= | **Greater than or equal to** - Evaluates to `true` if the left operand is greater than or equal to the right operand. | $[?($.attributes.created >= '2020-04-27T16:48:33.597Z')] |
| < | **Less than** - Evaluates to `true` if the left operand is less than the right operand. | $[?($.attributes.created < '2020-04-27T16:48:33.200Z')] |
| <= | **Less than or equal to** - Evaluates to `true` if the left operand is less than or equal to the right operand. | $[?($.attributes.created <= '2020-04-27T16:48:33.200Z')] |
| && | Logical **AND** operator that evaluates `true` only if both conditions are `true`. | $.changes[?(@.attribute == "cloudLifecycleState" && @.newValue == "terminated")] |
| ! | **Not** - Negates the boolean expression. | $.identity.attributes[?(!@.alternateEmail)] |
| \|\| | Logical **OR** operator that evaluates `true` if at least one condition is `true`. | $.changes[?(@.attribute == "cloudLifecycleState" \|\| @.attribute == "department")] |
| contains | **Contains** - Checks whether a string contains the specified substring (case sensitive). | $[?($.identity.name contains "john")] |
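For example, the logical AND row above selects only changes that set `cloudLifecycleState` to `terminated`. In plain Python terms (illustrative event shape), the filter `$.changes[?(@.attribute == "cloudLifecycleState" && @.newValue == "terminated")]` behaves like:

```python
def terminated_changes(event):
    # Both conditions must hold for a change element to be selected.
    return [
        change
        for change in event.get("changes", [])
        if change.get("attribute") == "cloudLifecycleState"
        and change.get("newValue") == "terminated"
    ]

event = {
    "changes": [
        {"attribute": "cloudLifecycleState", "newValue": "terminated"},
        {"attribute": "department", "newValue": "sales"},
    ]
}
selected = terminated_changes(event)  # only the lifecycle change survives
```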
### Developing Filters
Developing a filter can be faster when you use a tool like an online [JSONPath editor](https://jsonpath.herokuapp.com/). These tools can provide quick feedback on your filter, allowing you to focus on the exact filter expression you want before testing it on a trigger.
Start by opening a [JSONPath editor](https://jsonpath.herokuapp.com/) in your browser. Make sure that the correct implementation is selected if there is more than one option. In the case of event trigger filters, you will want to select the **Jayway** option. You can then paste in an example trigger input and start crafting your JSONPath expression.
![JSONPath editor](./img/jsonpath-editor.png)
Most of the examples provided in the operator tables above can be used against the Identity Attributes Changed event trigger input, as seen below. You can find all of the input/output schemas for the other available triggers in our [API specification](/idn/api/beta/triggers#available-event-triggers).
```json
{
  "identity": {
    "id": "ee769173319b41d19ccec6cea52f237b",
    "name": "john.doe",
    "type": "IDENTITY"
  },
  "changes": [
    {
      "attribute": "department",
      "oldValue": "sales",
      "newValue": "marketing"
    },
    {
      "attribute": "cloudLifecycleState",
      "oldValue": "active",
      "newValue": "terminated"
    }
  ]
}
```
## Validating Filters
When you are finished developing your JSONPath filter, you must validate it with SailPoint's trigger service. There are two ways to do this: use the UI or the API.
### Validating Filters Using the UI
To validate a filter using the UI, subscribe to a new event trigger or edit an existing one. In the configuration options, paste your JSONPath expression in the `Filter` input box and select `Update`. If you do not receive an error message, then your filter expression is valid with SailPoint.
![UI filter](./img/ui-filter.png)
### Validating Filters Using the API
You can validate a trigger filter by using the [validate filter](/idn/api/beta/validate-filter) API endpoint. You must escape any double quotes, as seen in the example payload in the API description. Also, you must provide a sample input for the validation engine to run against. It is best to use the input example included in the input/output schemas for the event trigger you want to apply your filter to. Refer to [this table](/idn/api/beta/triggers#available-event-triggers) to find the schema of your event trigger. This is an example request:
```text
POST https://{tenant}.api.identitynow.com/beta/trigger-subscriptions/validate-filter
```
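Rather than escaping the double quotes by hand, you can let a JSON serializer build the request body, which escapes them for you. A short Python sketch; the `filter` and `input` field names are an assumption based on the example payload in the API description, and the sample input is illustrative:

```python
import json

filter_expr = '$[?($.identity.name == "john.doe")]'
sample_input = {"identity": {"name": "john.doe"}}

# json.dumps escapes the double quotes inside the filter expression,
# producing a body you can POST to the validate-filter endpoint.
body = json.dumps({"filter": filter_expr, "input": sample_input})
```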
## Testing Filters
If SailPoint accepts your trigger filter, you must test whether it actually works. You must configure your trigger subscription to point to the URL of your testing service. [webhook.site](https://webhook.site) is an easy-to-use testing service. Just copy the unique URL it generates and paste it into your subscription's integration URL field. The easiest way to test a trigger subscription is to use the UI to fire off a test event.
![test subscription](./img/test-subscription.png)
Once you fire off a test event, monitor your webhook.site webpage for an incoming event. If the filter matches the test input, you will see an event come in. If the filter does not match the input, then it will not fire. Test both scenarios to make sure your filter is not always evaluating to `true`, and that it will indeed evaluate to `false` under the correct circumstances. For example, the filter `$[?($.identity.name contains "john")]` will match the test event for Identity Attributes Changed and you will see an event in webhook.site, but you also want to make sure that `$[?($.identity.name contains "archer")]` doesn't fire because the test input is always the same.
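The two checks described above can also be mimicked locally before you touch the UI. A small Python sketch, assuming the fixed test event for Identity Attributes Changed carries an `identity.name` of `john.doe`:

```python
test_event = {"identity": {"name": "john.doe"}}

def name_contains(event, fragment):
    # Rough local stand-in for $[?($.identity.name contains "<fragment>")]
    return fragment in event["identity"]["name"]

fires = name_contains(test_event, "john")            # filter should match
does_not_fire = name_contains(test_event, "archer")  # filter should not match
```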
If you want to control the test input to validate your filter against a more robust set of data, use the [test invocation](/idn/api/beta/start-test-invocation) API endpoint.
---
pagination_label: Event Triggers
sidebar_label: Event Triggers
sidebar_position: 3
sidebar_class_name: eventTriggers
keywords: ['event', 'triggers', 'webhooks']
description: The result of any action performed in a service is called an event. Services like IdentityNow constantly generate events like an update to a setting or the completion of an account aggregation.
slug: /docs/event-triggers
tags: ['Event Triggers']
---
## What Are Triggers
The result of any action performed in a service is called an **event**. Services like IdentityNow constantly generate events like an update to a setting or the completion of an account aggregation. Most events a service generates are of little value to clients, so services create event triggers, also known as webhooks, that allow clients to subscribe to specific events they are interested in. Similar to newsletters or RSS feeds, each subscription tells the service what event a client is interested in and where to send the client the notification.
## How Are Triggers Different from APIs
The biggest difference between event triggers and APIs is how data is accessed. Requesting data with an API is an active process, but receiving data from an event trigger is a passive process. Clients who want to get the latest data with an API must initiate the request. Clients who subscribe to an event trigger do not need to initiate a request. They are notified when the event occurs. This is similar to keeping up with the latest world news on the internet. You can initiate the request for data by opening a news website in your browser, or you can subscribe to a mailing list to receive the latest news as it happens.
## When to Use Triggers
It is best to use event triggers when you need to react to an event in real-time. Although you can set up a polling mechanism using APIs, polling uses more bandwidth and resources, and if you poll too quickly, you can reach an API's rate limits. Event triggers use less bandwidth, they do not affect your API rate limit, and they are as close as you can get to real-time. However, event triggers have downsides to consider. They must be accessible from the public internet so the trigger service knows where to send the notification, and they can be harder to configure and operate than APIs are.
## How to Get Started With Event Triggers
Event triggers require different setup and testing steps than APIs do, so you should follow each document to better understand event triggers and the necessary steps to configure one. If this is your first time using event triggers, then you should use the [webhook testing service](./preparing-a-subscriber-service.md#webhook-testing-service) as you follow along.
---
pagination_title: Preparing a Subscriber Service
sidebar_label: Preparing a Subscriber Service
sidebar_position: 2
sidebar_class_name: preparingSubscriberService
keywords: ['event', 'triggers', 'subscriber']
description: Before you can subscribe to an event trigger, you must prepare a service that can accept incoming HTTP requests from the event trigger service.
slug: /docs/event-triggers/preparing-subscriber-service
tags: ['Event Triggers']
---
Before you can subscribe to an event trigger, you must prepare a service that can accept incoming HTTP requests from the event trigger service. More specifically, your client service must accept a POST request to an endpoint of its choosing, with the ability to parse the JSON data sent by the trigger. There are many ways to accomplish this, but this guide covers four of the most common types of client services you can build to handle event triggers.
## Webhook Testing Service
There are many webhook testing websites that generate a unique URL you can use to subscribe to an event trigger and explore the data sent by the trigger. One site is https://webhook.site. This site generates a unique URL whenever you open it, which you can copy and paste into the subscription configuration in IdentityNow. Any events that the trigger generates will be sent to this website for you to analyze.
![Webhook.site](./img/webhook-site.png)
The purpose of webhook testing services is to make it easy to set up a trigger and see the data of the events that will eventually be sent to your production service. This can help in the early development process when you explore the data the event trigger sends and how to best access the data you need.
## Native SaaS Workflows
Some SaaS vendors provide built-in workflow builders in their products so you do not have to use a no-code provider. Slack, for example, has a premium [workflow builder](https://slack.com/help/articles/360035692513-Guide-to-Workflow-Builder) feature that generates a unique URL you can use to configure your subscription. Slack's workflow builder can then listen for events sent by your trigger and perform Slack-specific actions on the data, like sending a user a message when his or her access request is approved.
![Slack workflow](./img/slack-workflow.png)
## No-code Provider
No-code/low-code providers, like Zapier and Microsoft Power Automate, make it easy to consume event triggers and perform actions based on the event data. They are popular solutions for those looking to prototype or quickly create automated business processes, and they cater to novices and advanced users alike. Each no-code provider has documentation about how to create a new workflow and subscribe to an event trigger or webhook, so you must find the relevant documentation for your no-code provider to learn how to set one up. Zapier has the ability to configure a webhook action that generates a unique URL you can configure in your event trigger subscription.
![Zapier webhook](./img/zapier-webhook.png)
## Custom Application
A custom application is one you write in a language of your choosing and host in your own infrastructure, cloud, or on-premise. This is the most advanced option for implementing an event trigger client service. Although it requires a great deal of skill and knowledge to build, deploy, and operate your own service that can consume requests over HTTP, a custom application offers the most power and flexibility to implement your use cases. You can learn more about custom applications by checking out our [Event Trigger Example Application](https://github.com/sailpoint-oss/event-trigger-examples).
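As a starting point, the core of such a service is small: accept a POST, parse the JSON event, and return a 2xx status. A minimal Python sketch using only the standard library; the port, logging, TLS, and authentication details are omissions you would address in a real deployment:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class TriggerHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read and parse the JSON event sent by the trigger service.
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")
        print("received event:", event)  # replace with real processing
        self.send_response(200)
        self.end_headers()

# To run standalone, publicly reachable by the trigger service:
# HTTPServer(("0.0.0.0", 8080), TriggerHandler).serve_forever()
```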
---
pagination_label: Responding To Request Response Triggers
sidebar_label: Responding To Request Response Triggers
sidebar_position: 6
sidebar_class_name: respondingRequestResponseTriggers
keywords: ['event', 'trigger', 'request response']
description: You can specify how your application interacts with a REQUEST_RESPONSE type trigger service by selecting an invocation response mode in the Response Type dropdown when editing or creating a REQUEST_RESPONSE subscription.
slug: /docs/event-triggers/responding-request-response-trigger
tags: ['Event Triggers']
---
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
## Invocation Response Modes for REQUEST_RESPONSE Type Triggers
You can specify how your application interacts with a `REQUEST_RESPONSE` type trigger service by selecting an invocation response mode in the **Response Type** dropdown when editing or creating a `REQUEST_RESPONSE` subscription. There are three response modes to choose from: `SYNC`, `ASYNC`, and `DYNAMIC`. These response modes are only available when the subscription type is set to `HTTP`.
| Response Modes | Description |
| --- | :-: |
| `SYNC` | This type of response creates a _synchronous_ flow between the trigger service and the custom application. Once a trigger has been invoked, the custom application is expected to respond within 10 seconds. If the application takes longer than 10 seconds to respond, the trigger invocation will terminate without making any decisions. |
| `ASYNC` | This type of response creates an _asynchronous_ flow between the trigger service and the custom application. When a trigger is invoked, the custom application does not need to respond immediately. The trigger service will provide a URL and a secret that the custom application can use to complete the invocation at a later time. The application must complete the invocation before the configured deadline on the subscription. |
| `DYNAMIC` | This type of response gives the custom application the ability to choose whether it handles the invocation request synchronously or asynchronously on a per-event basis. In some cases, the application may choose `SYNC` mode because it is able to respond quickly to the invocation. In other cases, it may choose `ASYNC` because it needs to run a long running task before responding to the invocation. |
## Responding to REQUEST_RESPONSE Trigger
@@ -40,12 +32,7 @@ These response modes are only available when the subscription type is set to
<!-- Uncomment this once the model definition links are fixed
The custom application responds to the trigger invocation with an appropriate payload. For example, the application may receive a request from the [Access Request Dynamic Approver](https://developer.sailpoint.com/apis/beta/#tag/Event-Trigger-Models) trigger. The application will have **10 seconds** to analyze the event details and respond with a 200 (OK) status code and a [response payload](https://developer.sailpoint.com/apis/beta/#section/Access-Request-Dynamic-Approver-Event-Trigger-Output) that contains the identity to add to the approval chain. -->
The custom application responds to the trigger invocation with an appropriate payload. For example, the application may receive a request from the Access Request Dynamic Approver trigger. The application will have **10 seconds** to analyze the event details and respond with a 200 (OK) status code and a response payload that contains the identity to add to the approval chain. For example, the response may look like this:
200 (OK)
@@ -63,13 +50,7 @@ the response may look like this:
<!-- Uncomment this once the model definition links are fixed
The custom application only needs to acknowledge that it has received the trigger invocation request by returning an HTTP status of 200 (OK) with an empty JSON object (ex. `{}`) in the response body within **10 seconds** of receiving the event. It then has until the configured deadline on the subscription to provide a full response to the invocation. For example, the application may receive a request from the [Access Request Dynamic Approver](https://developer.sailpoint.com/apis/beta/#tag/Event-Trigger-Models) trigger. An example of the request payload that the application might receive is as follows: -->
The custom application only needs to acknowledge that it has received the trigger invocation request by returning an HTTP status of 200 (OK) with an empty JSON object (ex. `{}`) in the response body within **10 seconds** of receiving the event. It then has until the configured deadline on the subscription to provide a full response to the invocation. For example, the application may receive a request from the Access Request Dynamic Approver trigger. An example of the request payload that the application might receive is as follows:
```json
{
@@ -104,8 +85,7 @@ of the request payload that the application might receive is as follows:
}
```
The application will immediately respond to the invocation with a 200 (OK) status code and an empty JSON object.
200 (OK)
@@ -113,13 +93,9 @@ status code and an empty JSON object.
```json
{}
```
Once the application has made a decision on how to respond, it will use the `callbackURL` and `secret` provided in the `_metadata` object from the original request to complete the invocation. An example response might look like the following:
POST `https://{tenant}.api.identitynow.com/beta/trigger-invocations/e9103ca9-02c4-bb0f-9441-94b3af012345/complete`
@@ -135,17 +111,9 @@ POST
</TabItem>
<TabItem value="dynamic" label="DYNAMIC Response">
The custom application determines arbitrarily whether to respond to the trigger invocation as `SYNC` or `ASYNC`. If the application wishes to respond as `SYNC`, it should follow the directions for a `SYNC` response type, responding within **10 seconds** of the invocation. In the case of `ASYNC`, the custom application only needs to acknowledge that it has received the trigger invocation request with a 202 (Accepted) within **10 seconds** of receiving the event and complete the invocation at a later time using the `callbackURL` and `secret` provided in the `_metadata` object.
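The decision flow for a `DYNAMIC` subscriber can be sketched in a few lines. This is an illustrative Python outline, not SailPoint's implementation: the handler, `schedule_completion`, and the in-memory `pending` list are all hypothetical, and only the `_metadata` fields (`callbackURL`, `secret`) come from the trigger payload described here.

```python
pending = []

def schedule_completion(callback_url, secret):
    # Stand-in for queueing a background job that will POST the final
    # decision to callback_url later, authenticated with the secret.
    pending.append({"callbackURL": callback_url, "secret": secret})

def handle_invocation(event, decide_now):
    """Return an (http_status, body) pair for a DYNAMIC trigger invocation."""
    metadata = event["_metadata"]  # carries the callbackURL and secret
    decision = decide_now(event)
    if decision is not None:
        return 200, decision  # SYNC: full response within 10 seconds
    schedule_completion(metadata["callbackURL"], metadata["secret"])
    return 202, {}  # ASYNC: acknowledge now, complete before the deadline

event = {"_metadata": {"callbackURL": "https://example.invalid/complete", "secret": "s3cret"}}
sync_status, sync_body = handle_invocation(event, lambda e: {"approved": True})
async_status, async_body = handle_invocation(event, lambda e: None)
```

The only real constraint the sketch encodes is the 10-second acknowledgment window; everything else is up to your service.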
An example of the request payload that the application might receive is as follows:
```json
{
@@ -192,8 +160,7 @@ To respond as `SYNC`, simply respond to the invocation within 10 seconds.
}
```
To respond as `ASYNC`, start by responding to the invocation with a 202 (Accepted).
202 (Accepted)
@@ -201,11 +168,9 @@ To respond as `ASYNC`, start by responding to the invocation with a 202
```json
{}
```
Then, use the `callbackURL` and `secret` to send a POST request to the invocation with the decision.
POST `https://{tenant}.api.identitynow.com/beta/trigger-invocations/e9103ca9-02c4-bb0f-9441-94b3af012345/complete`
@@ -225,9 +190,4 @@ POST
## Trigger Invocation Status
To check the status of a particular trigger invocation, you can use the [list invocation statuses](/idn/api/beta/list-invocation-status) endpoint. The status endpoint works for both `REQUEST_RESPONSE` and `FIRE_AND_FORGET` triggers. However, the status of `FIRE_AND_FORGET` trigger invocations will contain null values in their `completeInvocationInput` since `FIRE_AND_FORGET` triggers don't need a response to complete.
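As a rough illustration of working with the status list client-side, you might flag `REQUEST_RESPONSE` invocations that are still awaiting completion. The `type` and `completeInvocationInput` field names follow the behavior described above, but treat the exact schema as an assumption and defer to the API reference.

```python
import json

def incomplete_request_response(statuses):
    """Return REQUEST_RESPONSE invocation statuses still awaiting completion."""
    return [
        s for s in statuses
        if s.get("type") == "REQUEST_RESPONSE" and not s.get("completeInvocationInput")
    ]

# Hypothetical excerpt of a list-invocation-status response.
sample = json.loads("""
[
  {"id": "a1", "type": "REQUEST_RESPONSE", "completeInvocationInput": null},
  {"id": "b2", "type": "FIRE_AND_FORGET", "completeInvocationInput": null},
  {"id": "c3", "type": "REQUEST_RESPONSE", "completeInvocationInput": {"approved": true}}
]
""")
pending = incomplete_request_response(sample)
```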

View File

@@ -5,49 +5,26 @@ pagination_label: Subscribing to a Trigger
sidebar_label: Subscribing to a Trigger
sidebar_position: 3
sidebar_class_name: subscribingToTrigger
keywords: ['event', 'trigger', 'subscribing']
description: Usually, you will subscribe to event triggers using the user interface in IDN. Refer to subscribing to event triggers to learn how to subscribe to an event trigger through the IDN UI.
slug: /docs/event-triggers/subscribing-to-trigger
tags: ['Event Triggers']
---
## View the Available Triggers
SailPoint is continuously developing new event triggers to satisfy different use cases. Some of these triggers are considered **early access** and are only available in an IDN tenant upon request. To see a list of available event triggers in your tenant, go to the **Event Triggers** tab in the **Admin** section of IdentityNow. The first page is a list of your tenant's available event triggers. You can select each trigger to learn more about its type, what causes it to fire, and what the payload will look like.
![Available triggers](./img/available-triggers.png)
## Subscribe to a Trigger from the UI
Usually, you will subscribe to event triggers using the user interface in IDN. Refer to [subscribing to event triggers](https://documentation.sailpoint.com/saas/help/common/event_triggers.html#subscribing-to-event-triggers) to learn how to subscribe to an event trigger through the IDN UI.
## Subscribe to a Trigger from the API
Sometimes, you may need to use the API to subscribe to event triggers. This can occur when you want to programmatically subscribe/unsubscribe from event triggers in a custom application or no-code solution that does not have a native integration with SailPoint.
If this is your first time calling a SailPoint API, refer to the [getting started guide](../../../api/getting-started.md) to learn how to generate a token and call the APIs.
Start by reviewing the list of [available event triggers](/idn/api/beta/triggers#available-event-triggers), and take note of the **ID** of the trigger you want to subscribe to (ex `idn:access-request-dynamic-approver`). Use the [create subscription](/idn/api/beta/create-subscription) endpoint to subscribe to an event trigger of your choosing. See the API docs for the latest details about how to craft a subscription request.
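A minimal sketch of assembling such a request body follows. The `triggerId` value comes from the available-triggers list, but treat the other field names (`type`, `httpConfig`, and so on) as assumptions and check the create-subscription API reference for the authoritative schema.

```python
import json

def build_subscription(trigger_id, callback_url):
    """Assemble an illustrative HTTP subscription request body."""
    body = {
        "name": "My custom subscriber",
        "triggerId": trigger_id,  # e.g. "idn:access-request-dynamic-approver"
        "type": "HTTP",
        "httpConfig": {
            "url": callback_url,
            "httpAuthenticationType": "BEARER_TOKEN",
        },
        "enabled": False,  # keep disabled until the subscription is tested
    }
    return json.dumps(body)

payload = build_subscription(
    "idn:access-request-dynamic-approver",
    "https://my-service.example.com/trigger",
)
```

You would POST this body to the create-subscription endpoint with your usual authorization header.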

View File

@@ -5,41 +5,23 @@ pagination_label: Testing Triggers
sidebar_label: Testing Triggers
sidebar_position: 5
sidebar_class_name: testingTriggers
keywords: ['event', 'trigger', 'testing']
description: It is important to test your trigger subscription configuration with your actual subscribing service before enabling your subscription for production use.
slug: /docs/event-triggers/testing-triggers
tags: ['Event Triggers']
---
It is important to test your trigger subscription configuration with your actual subscribing service (not a test site like [webhook.site](https://webhook.site)) before enabling your subscription for production use. Testing subscriptions ensures that your subscribing service can successfully receive events and that you are receiving the correct events based on the filter you have provided.
## Sending Test Invocations
The easiest way to send a test event to your subscribing service is to use the **Test Subscription** command. Go to your subscription in the Event Trigger UI, select **Options** to the right of the subscription, and select **Test Subscription**.
![test subscription](./img/test-subscription.png)
Doing so sends a test event to your subscribing service, using the default example payload for the specific trigger you are subscribing to. This is an easy way to validate that your service can receive events, but it lacks the ability to modify the event payload to test your filter against different payloads. However, there is an API endpoint you can use to modify the test payload.
If you want to control the test input to validate your filter against a more robust set of data, you can use the [test invocation](/idn/api/beta/start-test-invocation) API endpoint. You can use this API to send an input payload with any values that you want. This is an example of an invocation of this API:
```text
POST `https://{tenant}.api.identitynow.com/beta/trigger-invocations/test`
```
@@ -79,38 +61,23 @@ POST `https://{tenant}.api.identitynow.com/beta/trigger-invocations/test`
### Trigger Service Issues
If your subscribing service is not receiving your test invocations, you have a couple of options to debug the issue. Start by viewing the activity log for the subscription in the UI to ensure your test events are actually being sent.
![activity log](./img/activity-log.png)
Check the **Created** date against the time you sent the test events. If they are being sent, check the event details. Look for any errors being reported, and ensure your subscribing service's subscription ID matches the `subscriptionId` the event was sent to.
![debug connection](./img/debug-connection.png)
You can also view the activity log by using the [list latest invocation statuses](/idn/api/beta/list-invocation-status) endpoint.
### Filter Issues
If you do not see your events in the activity log, it may be a filtering issue. If the filter you configured on the subscription is not matching the test event data, no event will be sent. Double check your filter expression with the test payload in a JSONpath editor to ensure the filter is valid and matches your data. See [Filtering Events](./filtering-events.md) for more information.
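Subscription filters use JSONPath. As a quick sanity check outside of a full JSONPath editor, you can at least confirm that the field your filter targets exists in the test payload. This toy helper only walks simple dotted paths and is no substitute for a real JSONPath evaluator:

```python
def resolve(path, payload):
    """Walk a simple '$.a.b' dotted path through nested dicts; None if absent."""
    node = payload
    for key in path.lstrip("$.").split("."):
        if not isinstance(node, dict) or key not in node:
            return None
        node = node[key]
    return node

# Hypothetical test payload fields, for illustration only.
payload = {"requestedItems": {"type": "ROLE"}, "requester": {"name": "Ada"}}
found = resolve("$.requestedItems.type", payload)
missing = resolve("$.requestedItems.owner", payload)
```

If the helper returns `None` for a path your filter depends on, the filter can never match that payload.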
### Misconfigured Subscription
Double check that your subscription configuration is correct.
- Ensure the URL you provided is accessible from the public internet. If your subscribing service is hosted internally in your company's intranet, you may be able to access it from your computer, but the trigger service may not be able to.
- Verify that the authentication details are correct. Verify that the username/password or bearer token is valid.
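The checks above can be folded into a quick pre-flight script. The config keys here (`url`, `httpAuthenticationType`, and so on) loosely mirror the HTTP subscription settings but are illustrative, not the exact API field names:

```python
def preflight(config):
    """Return a list of obvious subscription configuration problems."""
    problems = []
    url = config.get("url", "")
    if not url.startswith("https://"):
        problems.append("URL should be a publicly reachable https:// endpoint")
    auth = config.get("httpAuthenticationType", "NO_AUTH")
    if auth == "BASIC_AUTH" and not (config.get("username") and config.get("password")):
        problems.append("BASIC_AUTH selected but username/password missing")
    if auth == "BEARER_TOKEN" and not config.get("bearerToken"):
        problems.append("BEARER_TOKEN selected but no token provided")
    return problems

issues = preflight({"url": "http://intranet.local/hook", "httpAuthenticationType": "BEARER_TOKEN"})
```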

View File

@@ -5,24 +5,15 @@ pagination_label: Trigger Types
sidebar_label: Trigger Types
sidebar_position: 1
sidebar_class_name: triggerTypes
keywords: ['event', 'trigger', 'types']
description: Different types of triggers exist, and they behave differently depending on their type.
slug: /docs/event-triggers/trigger-types
tags: ['Event Triggers']
---
## Fire and Forget
A fire and forget trigger only supports one-way communication with subscribers. Its only job is to forward each event it receives to each subscribing service. This trigger type does not wait for a response from subscribers. It has no way of knowing whether subscribers actually receive the event, and it does not have any mechanism for resending events. Think of this trigger type as live television. You can only see what is happening in real-time. You cannot rewind the live feed or interact with the broadcast in any way. This trigger type is the simplest and most common trigger type among SailPoint's event triggers.
:::caution
@@ -32,14 +23,7 @@ Fire and forget triggers can have a maximum of 50 subscribers per event.
## Request Response
A request response trigger allows two-way communication between the trigger service and the subscriber. The main difference with this trigger type is that it expects a response from the subscriber with directions about how to proceed with the event. For example, the access request dynamic approval event trigger will send the subscriber details about the access request, and the subscriber may respond to the trigger with the identity ID to include in the approval process for an access request. This trigger type allows subscribers to not only receive events in real-time, but to act on them as well.
:::caution

View File

@@ -8,111 +8,83 @@ sidebar_class_name: IdentityNow
hide_title: true
keywords:
[
'IdentityNow',
'development',
'developer',
'portal',
'getting started',
'docs',
'documentation',
]
description: This is the introductory documentation for development on the IdentityNow platform.
slug: /docs
tags: ['Introduction', 'Getting Started']
---
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
🧭 There are many ways to extend the IdentityNow platform beyond what comes out of the box. Please explore our documentation and see what is possible! This documentation assumes that you are a current customer or partner and already have access to the IdentityNow application.
:::info Are you a partner?
If you are interested in becoming a partner, be it an ISV or Channel/Implementation partner, [click here](https://www.sailpoint.com/partners/become-partner/).
:::
## Before You Get Started
Please read this introduction carefully, as it contains recommendations and need-to-know information pertaining to all features of the IdentityNow platform.
### Authentication
Many of the interactions you have through our various features will have you interacting with our APIs either directly or indirectly. It would be valuable to familiarize yourself with [Authentication](../../api/authentication.md) on our platform.
### Understanding JSON
JSON (JavaScript Object Notation) is a lightweight data-interchange format. It is easy for humans to read and write. It is easy for machines to parse and generate. JSON is at the heart of every API and development feature that SailPoint offers in IdentityNow—usually either inputs or outputs to/from a system. [Learn more about JSON here](https://www.w3schools.com/js/js_json_intro.asp).
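As a small taste of what that looks like in practice (the identity document here is made up):

```python
import json

doc = '{"identity": {"name": "Ada", "roles": ["Engineer"]}}'  # JSON text
parsed = json.loads(doc)              # text -> Python dicts/lists
roles = parsed["identity"]["roles"]
round_tripped = json.dumps(parsed)    # Python objects -> JSON text
```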
### Understanding Webhooks
A webhook in web development is a method of augmenting or altering the behavior of a web page or web application with custom callbacks. These callbacks may be maintained, modified, and managed by third-party users and developers who may not necessarily be affiliated with the originating website or application. Our [Event Triggers](docs/event-triggers) are a form of webhook, for example. [Learn more about webhooks here](https://zapier.com/blog/what-are-webhooks/).
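For a concrete feel, here is a minimal webhook receiver using only the Python standard library. The port and handler are arbitrary sketches, not part of any SailPoint API; the receiver simply acknowledges each POSTed event with an empty JSON body, the way a fire-and-forget subscriber would:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def parse_event(raw_body):
    """Decode an incoming webhook body; a real service would act on it."""
    return json.loads(raw_body.decode("utf-8"))

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = parse_event(self.rfile.read(length))
        self.send_response(200)  # acknowledge receipt
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(b"{}")

# To actually listen: HTTPServer(("", 8080), WebhookHandler).serve_forever()
```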
## Recommended Technologies
While you can use whichever development tools you are most comfortable with or find most useful, we will recommend tools here for those that are new to development.
:::tip
When developing documentation, example code/applications, videos, etc., our team will almost always use one of the tools listed below. We will soon add programming languages to this list!
:::
### IDEs (Integrated Development Environments)
IDEs are great for consolidating different aspects of programming into one tool. They're great for not only writing code, but managing your code as well. While you can use any IDE you feel is best fit for you and the task, here is what we use:
<Tabs groupId="operating-systems">
<TabItem value="win" label="Windows">
| IDE | Description |
| --- | --- |
| [VS Code](https://code.visualstudio.com/) | VS Code is a lightweight IDE that we believe is perfect for development on our IdentityNow platform. We also have great plug-in support from our community, like [this one](https://marketplace.visualstudio.com/items?itemName=yannick-beot-sp.vscode-sailpoint-identitynow)! |
| [IntelliJ](https://www.jetbrains.com/idea/) | If you happen to be writing in Java or developing Rules on our platform, we typically recommend IntelliJ. While Java development can be done in VS Code, you will have an easier time using an IDE that was purpose-built for Java. |
</TabItem>
<TabItem value="mac" label="Mac">
| IDE | Description |
| --- | --- |
| [VS Code](https://code.visualstudio.com/) | VS Code is a lightweight IDE that we believe is perfect for development on our IdentityNow platform. We also have great plug-in support from our community, like [this one](https://marketplace.visualstudio.com/items?itemName=yannick-beot-sp.vscode-sailpoint-identitynow)! |
| [IntelliJ](https://www.jetbrains.com/idea/) | If you happen to be writing in Java or developing Rules on our platform, we typically recommend IntelliJ. While Java development can be done in VS Code, you will have an easier time using an IDE that was purpose-built for Java. |
</TabItem>
<TabItem value="linux" label="Linux">
| IDE | Description |
| --- | --- |
| [VS Code](https://code.visualstudio.com/) | VS Code is a lightweight IDE that we believe is perfect for development on our IdentityNow platform. We also have great plug-in support from our community, like [this one](https://marketplace.visualstudio.com/items?itemName=yannick-beot-sp.vscode-sailpoint-identitynow)! |
| [IntelliJ](https://www.jetbrains.com/idea/) | If you happen to be writing in Java or developing Rules on our platform, we typically recommend IntelliJ. While Java development can be done in VS Code, you will have an easier time using an IDE that was purpose-built for Java. |
</TabItem>
</Tabs>
### CLI Environments
When interacting with our platform or writing code related to IdentityNow, we often use the CLI. While you can use any CLI you feel is the best fit for you and your job, here are the CLI environments we use and recommend:
<Tabs groupId="operating-systems">
<TabItem value="win" label="Windows">
| CLI Tool | Description |
| --- | --- |
| Windows PowerShell | Windows PowerShell is a modern command shell on Windows (also available on Mac/Linux) that offers versatile CLI, task automation, and configuration management options. |
| [Windows Terminal](https://apps.microsoft.com/store/detail/windows-terminal/9N0DX20HK701?hl=en-us&gl=us) | The Windows Terminal is a modern, fast, efficient, powerful, and productive terminal application for users of command-line tools and shells like Command Prompt, PowerShell, and WSL. Its main features include multiple tabs, panes, Unicode and UTF-8 character support, a GPU accelerated text rendering engine, and custom themes, styles, and configurations. Terminal is just a more beautiful version of PowerShell 😁 |
</TabItem>
<TabItem value="linux" label="Linux">
| CLI Tool | Description |
| --- | --- |
| Linux Terminal (default) | On Linux, we recommend using the default terminal. |
</TabItem>
</Tabs>
### Version Control
Writing code typically requires version control to adequately track changes in sets of files. While you can use any version control system you feel is the best fit for you and your job, here are the version control tools we use and recommend:
| Version Control Tool | Description |
| --- | --- |
| [git](https://git-scm.com/) | Git is a free and open-source, distributed version control system designed to handle everything from small to very large projects. Git runs locally on your machine. |
| [GitHub](https://github.com) | GitHub is an internet hosting service for managing git in the cloud. We use GitHub on our team to collaborate amongst the other developers on our team, as well as with our community. |
---
### API Clients
API clients make it easy to call APIs without having to first write code. API clients are great for testing and getting familiar with APIs to get a better understanding of what the inputs/outputs are and how they work.
| API Client | Description |
| --- | --- |
| [Postman](https://www.postman.com/downloads/) | Postman is an API platform for building and using APIs. Postman simplifies each step of the API lifecycle and streamlines collaboration so you can create better APIs—faster. |
## Glossary
Identity is a complex topic with a lot of frequently used terminology! Please [refer to our glossary](https://documentation.sailpoint.com/saas/help/common/glossary.html) whenever you aren't sure what something means.

---
title: Account Profile Attribute Generator
pagination_label: Account Profile Attribute Generator
sidebar_label: Account Profile Attribute Generator
sidebar_class_name: accountProfileAttributeGenerator
keywords: ['cloud', 'rules', 'account profile', 'attribute generator']
description: This rule generates complex account attribute values during provisioning, e.g. when creating an account.
slug: /docs/rules/cloud-rules/account-profile-attribute-generator
tags: ['Rules']
---
## Overview
This rule generates complex account attribute values during provisioning, e.g. when creating an account. You would typically use this rule when you are creating an account to generate attributes like usernames.
## Execution
- **Cloud Execution** - This rule executes in the IdentityNow cloud, and it has read-only access to IdentityNow data models, but it does not have access to on-premise sources or connectors.
- **Logging** - Logging statements are currently only visible to SailPoint personnel.
![Rule Execution](../img/cloud_execution.png)
## Input
| Argument | Type | Purpose |
| --- | --- | --- |
| log | org.apache.log4j.Logger | Logger to log statements. _Note: This executes in the cloud, and logging is currently not exposed to anyone other than SailPoint._ |
| idn | sailpoint.server.IdnRuleUtil | Provides a read-only starting point for using the SailPoint API. From this passed reference, the rule can interrogate the IdentityNow data model including identities or account information via helper methods as described in [IdnRuleUtil](../idn_rule_utility.md). |
| identity | sailpoint.object.Identity | Reference to identity object representing the identity being calculated. |
| application | java.lang.Object | Reference to the application (source) on which the account attribute is being generated. |
| field | sailpoint.object.Field | Field object used to get information about the attribute being generated. |
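To illustrate the kind of logic this rule typically carries, here is a minimal, self-contained Java sketch of username generation (first initial + last name, with a collision counter). Everything in it is a hypothetical illustration, not the official template: a real rule would read the names from the `identity` argument and use the `idn` utility to check whether the candidate value is already taken.

```java
public class UsernameGeneratorSketch {
    // Hypothetical scheme: first initial + last name, lowercased,
    // stripped to ASCII letters and digits. When the base value is
    // already taken, append a counter to make it unique.
    static String generateUsername(String firstName, String lastName, int collisionCount) {
        String base = (firstName.substring(0, 1) + lastName)
                .toLowerCase()
                .replaceAll("[^a-z0-9]", "");
        return collisionCount == 0 ? base : base + collisionCount;
    }

    public static void main(String[] args) {
        System.out.println(generateUsername("John", "O'Doe", 0)); // base value
        System.out.println(generateUsername("John", "O'Doe", 2)); // after two collisions
    }
}
```

In a real rule body, the returned string would simply be the last expression evaluated, since the rule's output is the generated attribute value.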
## Output

---
title: Account Profile Attribute Generator (from Template)
pagination_label: Account Profile Attribute Generator (from Template)
sidebar_label: Account Profile Attribute Generator (from Template)
sidebar_class_name: accountProfileAttributeGeneratorTemplate
keywords: ['cloud', 'rules', 'account profile', 'attribute generator']
description: This rule generates complex account attribute values during provisioning, e.g. when creating an account. The rule's configuration comes from a template of values.
slug: /docs/rules/cloud-rules/account-profile-attribute-generator-template
tags: ['Rules']
---
# Account Profile Attribute Generator (from Template)
## Overview
This rule generates complex account attribute values during provisioning, e.g. when creating an account. The rule's configuration comes from a template of values. You would typically use this rule when you are creating an account to generate attributes like usernames.
## Execution
- **Cloud Execution** - This rule executes in the IdentityNow cloud, and it has read-only access to IdentityNow data models, but it does not have access to on-premise sources or connectors.
- **Logging** - Logging statements are currently only visible to SailPoint personnel.
![Rule Execution](../img/cloud_execution.png)
## Input
| Argument | Type | Purpose |
| --- | --- | --- |
| log | org.apache.log4j.Logger | Logger to log statements. _Note: This executes in the cloud, and logging is currently not exposed to anyone other than SailPoint._ |
| idn | sailpoint.server.IdnRuleUtil | Provides a read-only starting point for using the SailPoint API. From this passed reference, the rule can interrogate the IdentityNow data model including identities or account information via helper methods as described in [IdnRuleUtil](../idn_rule_utility.md). |
| identity | sailpoint.object.Identity | Reference to identity object representing the identity being calculated. |
| field | sailpoint.object.Field | Field object used to get information about the attribute being generated. |
## Output

---
title: Before Provisioning Rule
pagination_label: Before Provisioning Rule
sidebar_label: Before Provisioning Rule
sidebar_class_name: beforeProvisioningRule
keywords: ['cloud', 'rules', 'before provisioning']
description: This rule runs before provisioning to a source.
slug: /docs/rules/cloud-rules/before-provisioning-rule
tags: ['Rules']
---
## Overview
Use this rule to modify a provisioning plan as provisioning is sent out. Do not use this rule to create new attributes. Use an account creation profile (provisioning policy) instead.
## Execution
- **Cloud Execution** - This rule executes in the IdentityNow cloud, and it has read-only access to IdentityNow data models, but it does not have access to on-premise sources or connectors.
- **Logging** - Logging statements are currently only visible to SailPoint personnel.
![Rule Execution](../img/cloud_execution.png)
## Input
| Argument | Type | Purpose |
| --- | --- | --- |
| idn | sailpoint.server.IdnRuleUtil | Provides a read-only starting point for using the SailPoint API. From this passed reference, the rule can interrogate the IdentityNow data model including identities or account information via helper methods as described in [IdnRuleUtil](../idn_rule_utility.md). |
| plan | sailpoint.object.ProvisioningPlan | Reference to the provisioning plan that is about to be sent to the source. |
| application | java.lang.Object | Reference to the application (source) the provisioning plan targets. |
> Note: Logs are not supported for BeforeProvisioning rules.
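As a hedged sketch of how a before-provisioning rule might modify a plan in flight, the snippet below lowercases a `mail` attribute value. The `AttributeRequest` class here is a simplified stand-in so the example runs on its own; the real rule works with `sailpoint.object.ProvisioningPlan` and its account/attribute requests, and the `mail` attribute name is only an assumption.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for sailpoint.object.ProvisioningPlan.AttributeRequest;
// the real object carries more state (operation, arguments, etc.).
class AttributeRequest {
    String name;
    Object value;
    AttributeRequest(String name, Object value) { this.name = name; this.value = value; }
}

public class BeforeProvisioningSketch {
    // Sketch of a rule body that rewrites an attribute before it reaches
    // the connector: force any "mail" value to lowercase.
    static void beforeProvisioning(List<AttributeRequest> requests) {
        for (AttributeRequest req : requests) {
            if ("mail".equals(req.name) && req.value instanceof String) {
                req.value = ((String) req.value).toLowerCase();
            }
        }
    }

    public static void main(String[] args) {
        List<AttributeRequest> requests = new ArrayList<>();
        requests.add(new AttributeRequest("mail", "John.Doe@Example.com"));
        beforeProvisioning(requests);
        System.out.println(requests.get(0).value);
    }
}
```

Note that a before-provisioning rule mutates the plan it is given rather than returning a value.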

---
title: Correlation Rule
pagination_label: Correlation Rule
sidebar_label: Correlation Rule
sidebar_class_name: Correlation Rule
keywords: ['cloud', 'rules', 'correlation']
description: This rule associates or correlates an account to an identity, based on complex logic.
slug: /docs/rules/cloud-rules/correlation-rule
tags: ['Rules']
---
## Overview
This rule associates or correlates an account to an identity, based on complex logic.
## Execution
- **Cloud Execution** - This rule executes in the IdentityNow cloud, and it has read-only access to IdentityNow data models, but it does not have access to on-premise sources or connectors.
- **Logging** - Logging statements are currently only visible to SailPoint personnel.
![Rule Execution](../img/cloud_execution.png)
## Input
| Argument | Type | Purpose |
| --- | --- | --- |
| log | org.apache.log4j.Logger | Logger to log statements. _Note: This executes in the cloud, and logging is currently not exposed to anyone other than SailPoint._ |
| idn | sailpoint.server.IdnRuleUtil | Provides a read-only starting point for using the SailPoint API. From this passed reference, the rule can interrogate the IdentityNow data model including identities or account information via helper methods as described in [IdnRuleUtil](../idn_rule_utility.md). |
| account | sailpoint.object.ResourceObject | Read-only representation of account data that has been aggregated. Use this as a basis to determine correlation linkages with a specific identity. |
## Output
| Argument | Type | Purpose |
| --- | --- | --- |
| returnMap | java.util.Map | Map object containing a reference to the identity attributes to correlate to. These should contain both `identityAttributeName` and `identityAttributeValue` as keys. |
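Building the `returnMap` described above might look like the following self-contained sketch. It matches a hypothetical `employeeID` account attribute to the identity's `employeeNumber`; in a real rule the value would come from `account.getAttribute(...)` on the aggregated `ResourceObject`, and both attribute names depend on your source schema and identity profile.

```java
import java.util.HashMap;
import java.util.Map;

public class CorrelationRuleSketch {
    // Sketch of a correlation rule body: if the account carries an
    // "employeeID", correlate it to the identity attribute "employeeNumber".
    // Returning an empty map means no correlation was found.
    static Map<String, Object> correlate(Map<String, Object> accountAttributes) {
        Map<String, Object> returnMap = new HashMap<>();
        Object employeeId = accountAttributes.get("employeeID");
        if (employeeId != null) {
            returnMap.put("identityAttributeName", "employeeNumber");
            returnMap.put("identityAttributeValue", employeeId);
        }
        return returnMap;
    }

    public static void main(String[] args) {
        Map<String, Object> account = new HashMap<>();
        account.put("employeeID", "E12345");
        Map<String, Object> result = correlate(account);
        System.out.println(result.get("identityAttributeName"));
        System.out.println(result.get("identityAttributeValue"));
    }
}
```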
## Template

---
title: Generic Rule
pagination_label: Generic Rule
sidebar_label: Generic Rule
sidebar_class_name: Generic Rule
keywords: ['cloud', 'rules', 'generic']
description: This rule performs transforms.
slug: /docs/rules/cloud-rules/generic-rule
tags: ['Rules']
---
## Overview
This rule performs transforms.
## Execution
- **Cloud Execution** - This rule executes in the IdentityNow cloud, and it has read-only access to IdentityNow data models, but it does not have access to on-premise sources or connectors.
- **Logging** - Logging statements are currently only visible to SailPoint personnel.
![Rule Execution](../img/cloud_execution.png)
## Input
| Argument | Type | Purpose |
| --- | --- | --- |
| log | org.apache.log4j.Logger | Logger to log statements. _Note: This executes in the cloud, and logging is currently not exposed to anyone other than SailPoint._ |
| idn | sailpoint.server.IdnRuleUtil | Provides a read-only starting point for using the SailPoint API. From this passed reference, the rule can interrogate the IdentityNow data model including identities or account information via helper methods as described in [IdnRuleUtil](../idn_rule_utility.md). |
## Output
| Argument | Type | Purpose |
| --- | --- | --- |
| value | java.lang.Object | Value returned for the account attribute, typically a string. |
## Template
## Example - Name Normalizer
This rule normalizes names into standard name capitalization. For example: JOHN DOE -> John Doe.
```java
<?xml version='1.0' encoding='UTF-8'?>
```

---
title: Identity Attribute Rule
pagination_label: Identity Attribute Rule
sidebar_label: Identity Attribute Rule
sidebar_class_name: identityAttributeRule
keywords: ['cloud', 'rules', 'identity attribute']
description: This rule calculates and returns an identity attribute for a specific identity.
slug: /docs/rules/cloud-rules/identity-attribute-rule
tags: ['Rules']
---
# Identity Attribute Rule
## Overview
This rule calculates and returns an identity attribute for a specific identity. This rule is also known as a "complex" rule on the identity profile.
## Execution
- **Cloud Execution** - This rule executes in the IdentityNow cloud, and it has read-only access to IdentityNow data models, but it does not have access to on-premise sources or connectors.
- **Logging** - Logging statements are currently only visible to SailPoint personnel.
![Rule Execution](../img/cloud_execution.png)
## Input
| Argument | Type | Purpose |
| --- | --- | --- |
| log | org.apache.log4j.Logger | Logger to log statements. _Note: This executes in the cloud, and logging is currently not exposed to anyone other than SailPoint._ |
| idn | sailpoint.server.IdnRuleUtil | Provides a read-only starting point for using the SailPoint API. From this passed reference, the rule can interrogate the IdentityNow data model including identities or account information via helper methods as described in [IdnRuleUtil](../idn_rule_utility.md). |
| identity | sailpoint.object.Identity | Reference to identity object representing the identity being calculated. |
| oldValue | java.lang.Object | Attribute value for the identity attribute before the rule runs. |
## Output
| Argument | Type | Purpose |
| --- | --- | --- |
| attributeValue | java.lang.Object | Value returned for the identity attribute. |
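As a minimal illustration of returning `attributeValue`, the sketch below derives a display-name-style attribute from first and last name. The helper name and the source attributes are assumptions for illustration only; a real rule would read them from the `identity` argument (`sailpoint.object.Identity`) and the last evaluated expression becomes the rule's output.

```java
public class IdentityAttributeSketch {
    // Sketch of a rule body: compute a display-name-style identity
    // attribute. Returning null lets other mappings apply when data
    // is missing.
    static Object calculateAttribute(String firstName, String lastName) {
        if (firstName == null || lastName == null) {
            return null;
        }
        return firstName + " " + lastName;
    }

    public static void main(String[] args) {
        System.out.println(calculateAttribute("John", "Doe"));
    }
}
```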
## Template

---
pagination_label: Cloud Executed Rules
sidebar_label: Cloud Executed Rules
sidebar_position: 1
sidebar_class_name: cloudExecutedRules
keywords: ['cloud', 'rules']
description: Overview of cloud-executed rules
slug: /docs/rules/cloud-rules
tags: ['Rules']
---
## Overview
**Cloud-Executed Rules** or **Cloud Rules** typically only perform a specific function, such as calculating attribute values. Cloud Rules all execute within the SailPoint cloud and offer access to objects and data, but they do not offer any sort of externalized connectivity.
Because these rules execute in a multi-tenant cloud environment, they have a very restricted context, and the review process is carefully scrutinized to ensure that they execute in an efficient and secure manner.
## Supported Cloud Rules
## Configuration Process
To ensure maximum compatibility, platform integrity, and security, SailPoint has instituted a review process to ensure that any submitted Cloud Rules meet SailPoint requirements and do not contain code that can harm the system. The review process also checks the rules to verify their intended purposes and use cases.
In this process, SailPoint does _not check_ whether the rule executes correctly or verify that it works as expected to deliver specific outcomes. The review is merely an integrity check on the rule itself.
## Submitting for Rule Review
To submit your Cloud Rule for review, approval, and inclusion in the SailPoint platform, submit it through [SailPoint Professional Services](https://www.sailpoint.com/services/professional/). If you need help writing and testing rules, Professional Services can help you with that process as well. Make sure your contact information is up to date in case the review team needs to contact you.
## Review Guidelines
@@ -58,20 +42,14 @@ All submitted rules must follow proper rule submission guidelines.
- **Best Practices**
- Ensure that all rule configurations are complete and accurate.
- Check whether your rule follows SailPoint best practice guidance, and ensure that you have
considered other product features first.
- Check whether your rule follows SailPoint best practice guidance, and ensure that you have considered other product features first.
- **Rule Quality**
- Rules must follow the [Rule Guidelines](../index.md#rule-guidelines)
and [Code Restrictions](../index.md#rule-code-restrictions)
- Rules must follow the [Rule Guidelines](../index.md#rule-guidelines) and [Code Restrictions](../index.md#rule-code-restrictions)
- Rules must be adequately tested prior to submission.
- **Documentation**
- Include detailed comments for non-obvious features in the configurations,
including supporting documentation where appropriate. This includes
justification for why something was created or done in a certain way, e.g.,
_I did this because..._
- Include detailed comments for non-obvious features in the configurations, including supporting documentation where appropriate. This includes justification for why something was created or done in a certain way, e.g., _I did this because..._
- **Standards**
- Rules must omit commented-out blocks or unfinished, incomplete, or untested
code.
- Rules must omit commented-out blocks or unfinished, incomplete, or untested code.
- Rules must be submitted with appropriate UTF-8 encoding.
- Rules must convert URL-encoded characters:
- `&amp;` should be `&`
@@ -89,62 +67,34 @@ This should be your file name:
`Rule - IdentityAttribute - Calculate Lifecycle.xml`
If you do not have a type, use "Generic" as the type. It would look
like this:
If you do not have a type, use "Generic" as the type. It would look like this:
`Rule - Generic - My Generic Rule.xml`
- **Updating Existing Rules and Versioning**
- The best practice is to maintain a single rule for a given use case in the
tenant. Creating additional rules while updating to maintain versioning is
not supported because doing so may cause issues during reviews and support.
- **Example:** For an AD Before Provisioning rule called "AD
BeforeProvisioningRule", you have the file "Rule - BeforeProvisioning -
AD BeforeProvisioningRule.xml". When you are updating the logic for AD, it is best
to update the file/rule with the same name, so changes can be properly
tracked to the single object.
- The best practice is to maintain a single rule for a given use case in the tenant. Creating additional rules while updating to maintain versioning is not supported because doing so may cause issues during reviews and support.
- **Example:** For an AD Before Provisioning rule called "AD BeforeProvisioningRule", you have the file "Rule - BeforeProvisioning - AD BeforeProvisioningRule.xml". When you are updating the logic for AD, it is best to update the file/rule with the same name, so changes can be properly tracked to the single object.
- **Deployment Window Requirements**
- Rules are generally reviewed and deployed within 24 hours if they are
accepted without feedback.
- If specific windows are required and you want full control of when a rule
is updated, use these steps to follow the versioning best practices:
- Rules are generally reviewed and deployed within 24 hours if they are accepted without feedback.
- If specific windows are required and you want full control of when a rule is updated, use these steps to follow the versioning best practices:
- Submit your request for a new rule with the name: `<original name>-TEMP`
- Apply the new rule during the change window.
- Validate the updated rule logic.
- Once the rule is validated, submit your request to update the original rule with the updated logic.
- Once the original rule is updated, apply the original rule as the production
configuration.
- Once the original rule is updated, apply the original rule as the production configuration.
- Submit your request to delete the TEMP rule.
## Review Expectations
Once you have submitted your rule and you are in the review process, remember these points:
- **Timing:** SailPoint will examine your rule as soon as possible. Most rules are
reviewed within 24 hours of submission. However, if your rule is complex,
poorly documented, hard to read, or if it presents new issues, it may require
greater scrutiny and consideration. If your rule is repeatedly rejected for
the same guideline violation, your rule's review may take longer to complete.
- **Status Updates:** Your rule's current status will be reflected in your
[SailPoint Expert Services request](https://www.sailpoint.com/services/professional/#contact-form),
so you can monitor its progress there.
- **Expedite Requests:** If you have a critical timing issue, you can request an
expedited review. Respect your fellow implementers by seeking expedited
review only when you truly need it. If you are found to be abusing this system, SailPoint
may reject further requests going forward.
- **Rejections:** SailPoint's goal is to apply these guidelines fairly and consistently,
but mistaken rejections can happen. If your rule has been rejected and you have questions or you
would like to provide additional information, communicate directly with
the rule review team. This may help get your rule into IdentityNow, and it can
help SailPoint improve the process or identify a need for clarity in its policies. If
you still disagree with the outcome, let SailPoint know and someone can look into it.
- **Changes:** Rule changes or modifications to meet guidelines are not the reviewer's
responsibility. They are the responsibility of the person(s) submitting the rule.
Reviewers may give advice, examples, etc. to
help, but doing so does not guarantee a solution. You should test the rules with the changes
before resubmission.
- **Timing:** SailPoint will examine your rule as soon as possible. Most rules are reviewed within 24 hours of submission. However, if your rule is complex, poorly documented, hard to read, or if it presents new issues, it may require greater scrutiny and consideration. If your rule is repeatedly rejected for the same guideline violation, your rule's review may take longer to complete.
- **Status Updates:** Your rule's current status will be reflected in your [SailPoint Expert Services request](https://www.sailpoint.com/services/professional/#contact-form), so you can monitor its progress there.
- **Expedite Requests:** If you have a critical timing issue, you can request an expedited review. Respect your fellow implementers by seeking expedited review only when you truly need it. If you are found to be abusing this system, SailPoint may reject further requests going forward.
- **Rejections:** SailPoint's goal is to apply these guidelines fairly and consistently, but mistaken rejections can happen. If your rule has been rejected and you have questions or you would like to provide additional information, communicate directly with the rule review team. This may help get your rule into IdentityNow, and it can help SailPoint improve the process or identify a need for clarity in its policies. If you still disagree with the outcome, let SailPoint know and someone can look into it.
- **Changes:** Rule changes or modifications to meet guidelines are not the reviewer's responsibility. They are the responsibility of the person(s) submitting the rule. Reviewers may give advice, examples, etc. to help, but doing so does not guarantee a solution. You should test the rules with the changes before resubmission.
```
View File
@@ -4,10 +4,10 @@ title: Manager Correlation Rule
pagination_label: Manager Correlation Rule
sidebar_label: Manager Correlation Rule
sidebar_class_name: managerCorrelationRule
keywords: ["cloud", "rules", "manager correlation"]
keywords: ['cloud', 'rules', 'manager correlation']
description: This rule calculates a manager relationship between identities.
slug: /docs/rules/cloud-rules/manager-correlation-rule
tags: ["Rules"]
tags: ['Rules']
---
## Overview
@@ -16,27 +16,24 @@ This rule calculates a manager relationship between identities.
## Execution
- **Cloud Execution** - This rule executes in the IdentityNow cloud, and it has
read-only access to IdentityNow data models, but it does not have access to
on-premise sources or connectors.
- **Logging** - Logging statements are currently only visible to SailPoint
personnel.
- **Cloud Execution** - This rule executes in the IdentityNow cloud, and it has read-only access to IdentityNow data models, but it does not have access to on-premise sources or connectors.
- **Logging** - Logging statements are currently only visible to SailPoint personnel.
![Rule Execution](../img/cloud_execution.png)
## Input
| Argument | Type | Purpose |
| --------------------- | ---------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| log | org.apache.log4j.Logger | Logger to log statements. _Note: This executes in the cloud, and logging is currently not exposed to anyone other than SailPoint._ |
| idn | sailpoint.server.IdnRuleUtil | Provides a read-only starting point for using the SailPoint API. From this passed reference, the rule can interrogate the IdentityNow data model including identities or account information via helper methods as described in [IdnRuleUtil](../idn_rule_utility.md). |
| link | sailpoint.object.Link | Read-only representation of account data that has been aggregated. Use this as a basis to determine manager linkages to a specific manager identity. |
| managerAttributeValue | java.lang.Object | Attribute value stored in the manager attribute. |
| Argument | Type | Purpose |
| --- | --- | --- |
| log | org.apache.log4j.Logger | Logger to log statements. _Note: This executes in the cloud, and logging is currently not exposed to anyone other than SailPoint._ |
| idn | sailpoint.server.IdnRuleUtil | Provides a read-only starting point for using the SailPoint API. From this passed reference, the rule can interrogate the IdentityNow data model including identities or account information via helper methods as described in [IdnRuleUtil](../idn_rule_utility.md). |
| link | sailpoint.object.Link | Read-only representation of account data that has been aggregated. Use this as a basis to determine manager linkages to a specific manager identity. |
| managerAttributeValue | java.lang.Object | Attribute value stored in the manager attribute. |
## Output
| Argument | Type | Purpose |
| --------- | ------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Argument | Type | Purpose |
| --- | --- | --- |
| returnMap | java.util.Map | Map object containing a reference to the identity attributes to identify the manager's identity. These should contain both `identityAttributeName` and `identityAttributeValue` as keys. |
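The output contract above can be illustrated with a hedged sketch in plain Java (the actual rule body is Beanshell, which is close to Java). The `employeeNumber` attribute name is only an assumed example, not a required value; the rule's Template section remains the authoritative starting point.

```java
import java.util.HashMap;
import java.util.Map;

public class ManagerCorrelationSketch {

    // Mirror of the rule's output contract: identify the manager's identity
    // by returning an identity attribute name/value pair in a Map.
    static Map<String, Object> correlate(Object managerAttributeValue) {
        Map<String, Object> returnMap = new HashMap<>();
        // Assumption for illustration only: the source stores the manager's
        // employee number, which matches the "employeeNumber" identity attribute.
        returnMap.put("identityAttributeName", "employeeNumber");
        returnMap.put("identityAttributeValue", managerAttributeValue);
        return returnMap;
    }

    public static void main(String[] args) {
        System.out.println(correlate("E12345"));
    }
}
```

In a real rule, `managerAttributeValue` arrives from the aggregated `link`, and both keys must be present for IdentityNow to resolve the manager's identity.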
## Template
View File
@@ -4,11 +4,10 @@ title: Before and After Operations on Source Account Rule
pagination_label: Before and After Operations
sidebar_label: Before and After Rule Operations
sidebar_class_name: beforeAndAfterRuleOperations
keywords: ["cloud", "rules"]
description: This rule executes PowerShell commands on the IQService component
after a source account has an operation performed on it.
keywords: ['cloud', 'rules']
description: This rule executes PowerShell commands on the IQService component after a source account has an operation performed on it.
slug: /docs/rules/connector-rules/before-and-after-rule-operations
tags: ["Rules"]
tags: ['Rules']
---
# Before and After Operations on Source Account Rule
@@ -19,49 +18,37 @@ This rule executes PowerShell commands on the IQService component after a source
The following operations can be performed on a source:
| Rule Name | Rule Type | Source Type(s) | Purpose |
| -------------------- | --------------------- | ---------------------------------------- | -------------------------------------------------------------------------------------------- |
| Before Creation Rule | ConnectorBeforeCreate | Active Directory, Azure Active Directory | Executes PowerShell commands on the IQService component before a source account is created. |
| Before Modify Rule | ConnectorBeforeModify | Active Directory, Azure Active Directory | Executes PowerShell commands on the IQService component before a source account is modified. |
| Before Delete Rule | ConnectorBeforeDelete | Active Directory, Azure Active Directory | Executes PowerShell commands on the IQService component before a source account is deleted. |
| After Creation Rule | ConnectorAfterCreate | Active Directory, Azure Active Directory | Executes PowerShell commands on the IQService component after a source account is created. |
| After Modify Rule | ConnectorAfterModify | Active Directory, Azure Active Directory | Executes PowerShell commands on the IQService component after a source account is modified. |
| After Delete Rule | ConnectorAfterDelete | Active Directory, Azure Active Directory | Executes PowerShell commands on the IQService component after a source account is deleted. |
| Rule Name | Rule Type | Source Type(s) | Purpose |
| --- | --- | --- | --- |
| Before Creation Rule | ConnectorBeforeCreate | Active Directory, Azure Active Directory | Executes PowerShell commands on the IQService component before a source account is created. |
| Before Modify Rule | ConnectorBeforeModify | Active Directory, Azure Active Directory | Executes PowerShell commands on the IQService component before a source account is modified. |
| Before Delete Rule | ConnectorBeforeDelete | Active Directory, Azure Active Directory | Executes PowerShell commands on the IQService component before a source account is deleted. |
| After Creation Rule | ConnectorAfterCreate | Active Directory, Azure Active Directory | Executes PowerShell commands on the IQService component after a source account is created. |
| After Modify Rule | ConnectorAfterModify | Active Directory, Azure Active Directory | Executes PowerShell commands on the IQService component after a source account is modified. |
| After Delete Rule | ConnectorAfterDelete | Active Directory, Azure Active Directory | Executes PowerShell commands on the IQService component after a source account is deleted. |
## Execution
- **Connector Execution** - This rule executes within the virtual appliance. It
may offer special abilities to perform connector-related functions, and it may
offer managed connections to sources.
- **Logging** - Logging statements are viewable within the ccg.log on the
virtual appliance, and they are viewable by SailPoint personnel.
- **Connector Execution** - This rule executes within the virtual appliance. It may offer special abilities to perform connector-related functions, and it may offer managed connections to sources.
- **Logging** - Logging statements are viewable within the ccg.log on the virtual appliance, and they are viewable by SailPoint personnel.
![Rule Execution](../img/connector_execution.png)
## Input
| Argument | Type | Purpose |
| ----------- | -------------------------------------- | -------------------------------------------------------------------------- |
| Application | System.Collections.Hashtable | Map of the application configuration. |
| Request | SailPoint.Utils.objects.AccountRequest | Reference to the account request provisioning instructions. |
| Result | SailPoint.Utils.objects.ServiceResult | Reference to the provisioning result that can be manipulated if necessary. |
| Argument | Type | Purpose |
| --- | --- | --- |
| Application | System.Collections.Hashtable | Map of the application configuration. |
| Request | SailPoint.Utils.objects.AccountRequest | Reference to the account request provisioning instructions. |
| Result | SailPoint.Utils.objects.ServiceResult | Reference to the provisioning result that can be manipulated if necessary. |
## Architecture Best Practices
For supportability, it is recommended that you write these operation rules with
only the most basic logic necessary to trigger a PowerShell script and shift
the bulk of the downstream events and/or modifications to the PowerShell script
itself. This script would reside on the client's servers and can therefore be
easily maintained or modified by the client as needed. It also allows the client
to implement changes to the PowerShell scripted functionality without requiring
code review by SailPoint because the code runs outside of the IdentityNow platform.
For supportability, it is recommended that you write these operation rules with only the most basic logic necessary to trigger a PowerShell script and shift the bulk of the downstream events and/or modifications to the PowerShell script itself. This script would reside on the client's servers and can therefore be easily maintained or modified by the client as needed. It also allows the client to implement changes to the PowerShell scripted functionality without requiring code review by SailPoint because the code runs outside of the IdentityNow platform.
## Rule Template
This example triggers on the BeforeCreate operation. If you want
to use another operation, replace `BeforeCreate` in the name and
`ConnectorBeforeCreate` in the type with one of the other operations described
earlier in the [Overview](#overview) section.
This example triggers on the BeforeCreate operation. If you want to use another operation, replace `BeforeCreate` in the name and `ConnectorBeforeCreate` in the type with one of the other operations described earlier in the [Overview](#overview) section.
```xml
<?xml version='1.0' encoding='UTF-8'?>
@@ -135,10 +122,7 @@ if($enableDebug) {
## PowerShell Script Template
You can also use the following PowerShell script template for each operation in
the [Overview](#overview) section. Be sure to update the `$logFile` variable
with the operation you use to ensure you are logging to a file with the correct
operation name.
You can also use the following PowerShell script template for each operation in the [Overview](#overview) section. Be sure to update the `$logFile` variable with the operation you use to ensure you are logging to a file with the correct operation name.
```powershell
###############################################################################################################################
View File
@@ -4,11 +4,10 @@ title: BuildMap Rule
pagination_label: BuildMap Rule
sidebar_label: BuildMap Rule
sidebar_class_name: buildMapRule
keywords: ["cloud", "rules"]
description: This rule manipulates raw input data provided by the
rows and columns in a file and builds a map from the incoming data.
keywords: ['cloud', 'rules']
description: This rule manipulates raw input data provided by the rows and columns in a file and builds a map from the incoming data.
slug: /docs/rules/connector-rules/buildmap-rule
tags: ["Rules"]
tags: ['Rules']
---
# BuildMap Rule
@@ -19,22 +18,19 @@ This rule manipulates raw input data provided by the rows and columns in a file
## Execution
- **Connector Execution** - This rule executes within the virtual appliance. It
may offer special abilities to perform connector-related functions, and it may
offer managed connections to sources.
- **Logging** - Logging statements are viewable within the ccg.log on the
virtual appliance, and they are viewable by SailPoint personnel.
- **Connector Execution** - This rule executes within the virtual appliance. It may offer special abilities to perform connector-related functions, and it may offer managed connections to sources.
- **Logging** - Logging statements are viewable within the ccg.log on the virtual appliance, and they are viewable by SailPoint personnel.
![Rule Execution](../img/connector_execution.png)
## Input
| Argument | Type | Purpose |
| ----------- | ---------------------------- | ------------------------------------------------------------------------------------------- |
| col | java.util.List | Ordered list of the column names from the file's header records or specified columns list. |
| record | java.util.List | Ordered list of the values for the current record, parsed based on the specified delimiter. |
| application | System.Collections.Hashtable | Map of the application configuration. |
| schema | sailpoint.object.Schema | Reference to the schema object for the delimited file source being read. |
| Argument | Type | Purpose |
| --- | --- | --- |
| col | java.util.List | Ordered list of the column names from the file's header records or specified columns list. |
| record | java.util.List | Ordered list of the values for the current record, parsed based on the specified delimiter. |
| application | System.Collections.Hashtable | Map of the application configuration. |
| schema | sailpoint.object.Schema | Reference to the schema object for the delimited file source being read. |
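As a hedged illustration of what a BuildMap body typically does — pair each column name with the record value at the same index, then optionally derive attributes — here is a plain-Java sketch. The `firstName`/`lastName`/`displayName` names are assumed for the example, not required by the connector.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class BuildMapSketch {

    // Pair each header column with the value at the same position in the record.
    static Map<String, Object> buildMap(List<String> cols, List<String> record) {
        Map<String, Object> map = new HashMap<>();
        for (int i = 0; i < cols.size() && i < record.size(); i++) {
            map.put(cols.get(i), record.get(i));
        }
        // Example transformation (hypothetical columns): derive a display name
        // from two raw columns in the delimited file.
        if (map.containsKey("firstName") && map.containsKey("lastName")) {
            map.put("displayName", map.get("firstName") + " " + map.get("lastName"));
        }
        return map;
    }

    public static void main(String[] args) {
        List<String> cols = List.of("id", "firstName", "lastName");
        List<String> record = List.of("1001", "Ada", "Lovelace");
        System.out.println(buildMap(cols, record));
    }
}
```

In the actual Beanshell rule, `col` and `record` are supplied by the connector per row, and the returned map becomes the account's attribute map for that record.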
## Template
View File
@@ -5,65 +5,48 @@ pagination_label: Connector Executed Rules
sidebar_label: Connector Executed Rules
sidebar_position: 1
sidebar_class_name: cloudExecutedRules
keywords: ["connector", "rules"]
keywords: ['connector', 'rules']
description: Overview of connector-executed rules.
slug: /docs/rules/connector-rules
tags: ["Rules"]
tags: ['Rules']
---
**Connector-Executed Rules** or **Connector Rules** are rules that are executed
in the IdentityNow virtual appliance, and they are usually extensions of the
connector itself. The rules are commonly used to perform complex
connector-related functions, so they are specific to only certain connectors.
Because these rules execute in the virtual appliance, they do not have access to
query the IdentityNow data model or fetch information from IdentityNow. They
rely instead on contextual information sent from IdentityNow. Connector-executed
rules may also have managed connections provided in their contexts to support
querying end systems or sources. Though these managed connections may be used,
making additional connections or call-outs is not allowed.
**Connector-Executed Rules** or **Connector Rules** are rules that are executed in the IdentityNow virtual appliance, and they are usually extensions of the connector itself. The rules are commonly used to perform complex connector-related functions, so they are specific to only certain connectors. Because these rules execute in the virtual appliance, they do not have access to query the IdentityNow data model or fetch information from IdentityNow. They rely instead on contextual information sent from IdentityNow. Connector-executed rules may also have managed connections provided in their contexts to support querying end systems or sources. Though these managed connections may be used, making additional connections or call-outs is not allowed.
Unlike cloud rules, connector rules do not have a rule review process and are
directly editable with the
[Connector Rule REST APIs](https://developer.sailpoint.com/idn/api/beta/connector-rule-management).
For more details, see [Configuration Process](#configuration-process).
Unlike cloud rules, connector rules do not have a rule review process and are directly editable with the [Connector Rule REST APIs](https://developer.sailpoint.com/idn/api/beta/connector-rule-management). For more details, see [Configuration Process](#configuration-process).
## Supported Connector Rules
| Rule Name | Rule Type | Source Type(s) | Purpose |
| -------------------------------------------------------- | --------------------------------------------------------- | ---------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [Before Creation Rule](./before_after_operation_rule.md) | [ConnectorBeforeCreate](./before_after_operation_rule.md) | Active Directory, Azure Active Directory | Executes PowerShell commands on the IQService component before a source account is created. |
| [Before Modify Rule](./before_after_operation_rule.md) | [ConnectorBeforeModify](./before_after_operation_rule.md) | Active Directory, Azure Active Directory | Executes PowerShell commands on the IQService component before a source account is modified. |
| [Before Delete Rule](./before_after_operation_rule.md) | [ConnectorBeforeDelete](./before_after_operation_rule.md) | Active Directory, Azure Active Directory | Executes PowerShell commands on the IQService component before a source account is deleted. |
| [After Creation Rule](./before_after_operation_rule.md) | [ConnectorAfterCreate](./before_after_operation_rule.md) | Active Directory, Azure Active Directory | Executes PowerShell commands on the IQService component after a source account is created. |
| [After Modify Rule](./before_after_operation_rule.md) | [ConnectorAfterModify](./before_after_operation_rule.md) | Active Directory, Azure Active Directory | Executes PowerShell commands on the IQService component after a source account is modified. |
| [After Delete Rule](./before_after_operation_rule.md) | [ConnectorAfterDelete](./before_after_operation_rule.md) | Active Directory, Azure Active Directory | Executes PowerShell commands on the IQService component after a source account is deleted. |
| Build Map Rule | BuildMap | Delimited File | Calculates and transforms data from a parsed file during the aggregation process. _Note: This is only available for the Delimited File source type, not Generic source types._ |
| JDBC Build Map Rule | JDBCBuildMap | JDBC | Calculates and transforms data from a database query result during the aggregation process. It can also perform additional calls back to the database. _Note: This rule is available for the JDBC Generic source, as well as other sources that derive from the JDBC connector (e.g., Oracle EBS, PeopleSoft, etc.)_ |
| JDBC Provision Rule | JDBCProvision | JDBC | Executes database queries to perform provisioning of account and access for all account operations. |
| SAP Build Map Rule | SAPBuildMap | SAP HR, SAP | Calculates and transforms data from SAP during the aggregation process. It can also perform additional calls back to the SAP system using SAP BAPI calls. |
| SAP HR Provisioning Modify Rule | SapHrOperationProvisioning | SAP HR | Performs SAP HR modification operations during provisioning. Often used for attribute sync to custom SAP HR attributes. |
| Web Services Before Operation Rule | WebServiceBeforeOperationRule | Web Services | Executes before the next web-services HTTP(S) operation. Often used to calculate values. |
| Web Services After Operation Rule | WebServiceAfterOperationRule | Web Services | Executes after a web-services HTTP(S) operation. Often used to parse complex data. |
| Rule Name | Rule Type | Source Type(s) | Purpose |
| --- | --- | --- | --- |
| [Before Creation Rule](./before_after_operation_rule.md) | [ConnectorBeforeCreate](./before_after_operation_rule.md) | Active Directory, Azure Active Directory | Executes PowerShell commands on the IQService component before a source account is created. |
| [Before Modify Rule](./before_after_operation_rule.md) | [ConnectorBeforeModify](./before_after_operation_rule.md) | Active Directory, Azure Active Directory | Executes PowerShell commands on the IQService component before a source account is modified. |
| [Before Delete Rule](./before_after_operation_rule.md) | [ConnectorBeforeDelete](./before_after_operation_rule.md) | Active Directory, Azure Active Directory | Executes PowerShell commands on the IQService component before a source account is deleted. |
| [After Creation Rule](./before_after_operation_rule.md) | [ConnectorAfterCreate](./before_after_operation_rule.md) | Active Directory, Azure Active Directory | Executes PowerShell commands on the IQService component after a source account is created. |
| [After Modify Rule](./before_after_operation_rule.md) | [ConnectorAfterModify](./before_after_operation_rule.md) | Active Directory, Azure Active Directory | Executes PowerShell commands on the IQService component after a source account is modified. |
| [After Delete Rule](./before_after_operation_rule.md) | [ConnectorAfterDelete](./before_after_operation_rule.md) | Active Directory, Azure Active Directory | Executes PowerShell commands on the IQService component after a source account is deleted. |
| Build Map Rule | BuildMap | Delimited File | Calculates and transforms data from a parsed file during the aggregation process. _Note: This is only available for the Delimited File source type, not Generic source types._ |
| JDBC Build Map Rule | JDBCBuildMap | JDBC | Calculates and transforms data from a database query result during the aggregation process. It can also perform additional calls back to the database. _Note: This rule is available for the JDBC Generic source, as well as other sources that derive from the JDBC connector (e.g., Oracle EBS, PeopleSoft, etc.)_ |
| JDBC Provision Rule | JDBCProvision | JDBC | Executes database queries to perform provisioning of account and access for all account operations. |
| SAP Build Map Rule | SAPBuildMap | SAP HR, SAP | Calculates and transforms data from SAP during the aggregation process. It can also perform additional calls back to the SAP system using SAP BAPI calls. |
| SAP HR Provisioning Modify Rule | SapHrOperationProvisioning | SAP HR | Performs SAP HR modification operations during provisioning. Often used for attribute sync to custom SAP HR attributes. |
| Web Services Before Operation Rule | WebServiceBeforeOperationRule | Web Services | Executes before the next web-services HTTP(S) operation. Often used to calculate values. |
| Web Services After Operation Rule | WebServiceAfterOperationRule | Web Services | Executes after a web-services HTTP(S) operation. Often used to parse complex data. |
## Configuration Process
Connector Rules are directly editable with the
[Connector Rule REST APIs](https://developer.sailpoint.com/idn/api/beta/connector-rule-management),
which provide the ability to interact with rules directly.
Connector Rules are directly editable with the [Connector Rule REST APIs](https://developer.sailpoint.com/idn/api/beta/connector-rule-management), which provide the ability to interact with rules directly.
| Name | Path |
| ----------------------------------------------------------------------------------------------------- | ------------------------------------- |
| [List Connector Rules](https://developer.sailpoint.com/apis/beta/#operation/getConnectorRuleList) | `GET /beta/connector-rules/` |
| [Get Connector Rule](https://developer.sailpoint.com/apis/beta/#operation/getConnectorRule) | `GET /beta/connector-rules/{id}` |
| [Create Connector Rule](https://developer.sailpoint.com/apis/beta/#operation/createConnectorRule) | `POST /beta/connector-rules/` |
| [Update Connector Rule](https://developer.sailpoint.com/apis/beta/#operation/updateConnectorRule) | `PUT /beta/connector-rules/{id}` |
| [Delete Connector Rule](https://developer.sailpoint.com/apis/beta/#operation/deleteConnectorRule) | `DELETE /beta/connector-rules/{id}` |
| Name | Path |
| --- | --- |
| [List Connector Rules](https://developer.sailpoint.com/apis/beta/#operation/getConnectorRuleList) | `GET /beta/connector-rules/` |
| [Get Connector Rule](https://developer.sailpoint.com/apis/beta/#operation/getConnectorRule) | `GET /beta/connector-rules/{id}` |
| [Create Connector Rule](https://developer.sailpoint.com/apis/beta/#operation/createConnectorRule) | `POST /beta/connector-rules/` |
| [Update Connector Rule](https://developer.sailpoint.com/apis/beta/#operation/updateConnectorRule) | `PUT /beta/connector-rules/{id}` |
| [Delete Connector Rule](https://developer.sailpoint.com/apis/beta/#operation/deleteConnectorRule) | `DELETE /beta/connector-rules/{id}` |
| [Validate Connector Rule](https://developer.sailpoint.com/apis/beta/#operation/validateConnectorRule) | `POST /beta/connector-rules/validate` |
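For instance, listing connector rules is a single authenticated `GET` against the beta API. The sketch below uses only the Python standard library and stops short of sending the request; the tenant URL and token are placeholder assumptions, not values from this document:

```python
import urllib.request

# Hypothetical tenant URL and token -- replace with your own values.
BASE_URL = "https://example.api.identitynow.com"
TOKEN = "<access-token>"

def build_list_rules_request(base_url: str, token: str) -> urllib.request.Request:
    """Construct (but do not send) a GET request for the connector rule list."""
    return urllib.request.Request(
        url=f"{base_url}/beta/connector-rules",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/json",
        },
        method="GET",
    )

req = build_list_rules_request(BASE_URL, TOKEN)
# urllib.request.urlopen(req) would execute the call against a live tenant.
```

The same construction works for the other endpoints in the table by swapping the path and `method`.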
SailPoint architectural optimizations have added resiliency and protections against malformed or long-running rules. These APIs also offer built-in protection and checking against potentially harmful code. For more information, see [Rule Code Restrictions](../../rules/index.md#rule-code-restrictions).
## Connector Rule Object Model
requestEndPoint.getBody().put(\"jsonBody\",requestXML); \n }\n
}
```
- `id` - Unique UUID the REST APIs use to refer to this rule. It is generated on creation.
- `name` - Name the user interface and references may use to refer to this rule.
- `description` - Description of the rule's purpose or usage.
- `created` - Timestamp when the rule was created.
- `modified` - Timestamp when the rule was last modified. The default is `null`.
- `type` - Type of connector rule. For a list of supported rule types, see [Supported Connector Rules](#supported-connector-rules).
- `attributes` - List of attributes.
- `sourceVersion` - String indicating the rule's version. Typically, this is the same as `version`.
- `sourceCode` - Object housing the actual source code that makes the rule work.
- `version` - String indicating the rule's version. Typically, this is the same as `sourceVersion`.
- `script` - Rule code the connector runs. This must be an escaped string. For help with formatting, use an escaping tool like [Free Formatter](https://www.freeformatter.com/java-dotnet-escape.html#before-output).
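The escaped form of the `script` field can also be produced programmatically instead of with a web-based tool. A minimal Python sketch (the Beanshell fragment below is illustrative): `json.dumps` applies JSON string escaping, and the surrounding quotes it adds are stripped so the value can be pasted inside an existing JSON string.

```python
import json

# A small fragment of Beanshell rule source containing quotes and newlines.
rule_source = 'Map map = new HashMap();\nmap.put("username", result.getString("login"));\nreturn map;'

# json.dumps escapes the quotes and newlines; the surrounding quotes it adds
# are removed because the value goes inside an existing JSON string field.
escaped = json.dumps(rule_source)[1:-1]

print(escaped)
```

Round-tripping the escaped value through a JSON parser recovers the original source, which is a quick way to verify the escaping.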
## Attaching Connector-Related Rules to Sources
Once a connector-related rule has been imported to your tenant, you must configure any sources that need to reference that rule during the desired operation. You can do this by executing an API call on the source. The following examples all use a `PATCH` operation for a partial source update, but `PUT` operations work too, as long as the entire source object model is provided.
For the `PATCH` operations, you must provide an `op` key. For new configurations, this key is typically set to `add` as the example shows, but it can be any of the following:
- `add` - Add a new value to the configuration. Use this operation if this is the first time you are setting the value, i.e. it has never been configured before.
- `replace` - Replace the existing value. Use this operation if you are updating a value that has already been configured.
- `remove` - Removes a value from the configuration. Use this operation if you want to unset a value. **Caution: Removals can be destructive if the path is improperly configured. This can negatively alter your source config.**
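These three operations follow JSON Patch (RFC 6902) semantics. The pure-Python sketch below illustrates how they act on a source configuration; the tiny `apply_patch` helper and the `/connectorAttributes/buildMapRule` path are illustrative assumptions, not the actual API implementation:

```python
import copy

def apply_patch(doc: dict, ops: list) -> dict:
    """Minimal JSON Patch interpreter covering add, replace, and remove
    on nested object paths (enough for this illustration only)."""
    doc = copy.deepcopy(doc)
    for op in ops:
        parts = op["path"].strip("/").split("/")
        target = doc
        for key in parts[:-1]:
            target = target[key]
        if op["op"] in ("add", "replace"):
            target[parts[-1]] = op["value"]
        elif op["op"] == "remove":
            del target[parts[-1]]
    return doc

source = {"connectorAttributes": {}}

# First-time configuration: add the rule reference.
source = apply_patch(source, [
    {"op": "add", "path": "/connectorAttributes/buildMapRule", "value": "Example Rule"}
])

# Changing the configuration: replace the existing value.
source = apply_patch(source, [
    {"op": "replace", "path": "/connectorAttributes/buildMapRule", "value": "Other Rule"}
])

# Unsetting: remove the value (destructive if the path is wrong).
source = apply_patch(source, [
    {"op": "remove", "path": "/connectorAttributes/buildMapRule"}
])

print(source)  # {'connectorAttributes': {}}
```

Note how `remove` deletes whatever the path points at, which is why a mistyped path in a `remove` patch can silently strip unrelated parts of the source config.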
## Example API calls by Rule Type
Content-Type: `application/json-patch+json`
_Note: Replace \[\*\] with the index location of the operation the way it is configured on the source. For example, 0, 1, 2, etc. You can use a `GET` call on the source first to verify the index location prior to executing the `PATCH` call to attach the rule._
```json
[
```
`PATCH` /v3/sources/{id}

Content-Type: `application/json-patch+json`
_Note: Replace \[\*\] with the index location of the operation the way it is configured on the source. For example, 0, 1, 2, etc. You can use a `GET` call on the source first to verify the index location prior to executing the `PATCH` call to attach the rule._
```json
[
```
---
title: JDBC BuildMap Rule
pagination_label: JDBC BuildMap Rule
sidebar_label: JDBC BuildMap Rule
sidebar_class_name: jdbcBuildMapRule
keywords: ['cloud', 'rules']
description: This rule manipulates raw input data provided by the rows and columns in a file and builds a map from the incoming data.
slug: /docs/rules/connector-rules/jdbc-buildmap-rule
tags: ['Rules']
---
## Overview
This rule manipulates raw input data provided by the rows and columns in a file and builds a map from the incoming data.
## Execution
- **Connector Execution** - This rule executes within the virtual appliance. It may offer special abilities to perform connector-related functions, and it may offer managed connections to sources.
- **Logging** - Logging statements are viewable within the ccg.log on the virtual appliance, and they are viewable by SailPoint personnel.
![Rule Execution](../img/connector_execution.png)
## Input
| Argument | Type | Purpose |
| --- | --- | --- |
| result | java.sql.ResultSet | Current ResultSet from the JDBC Connector. |
| connection | java.sql.Connection | Reference to the current SQL connection. |
| state | java.util.Map | Map that can be used to store and share data between executions of this rule during a single aggregation run. |
| application | sailpoint.object.Application | Attribute value of the identity attribute before the rule runs. |
| schema | sailpoint.object.Schema | Reference to the schema object for the delimited file source being read. |
## Output
| Argument | Type | Purpose |
| --- | --- | --- |
| map | java.util.Map | Map of names/values representing a row of data from the JDBC resource. |
## Template
---
title: JDBC Provision Rule
pagination_label: JDBC Provision Rule
sidebar_label: JDBC Provision Rule
sidebar_class_name: jdbcProvisionRule
keywords: ['cloud', 'rules', 'jdbc']
description: This rule performs provisioning actions from a provisioning plan provided by a supplied JDBC connection. These actions typically issue SQL commands, such as insert, update, select, and delete.
slug: /docs/rules/connector-rules/jdbc-provisioning-rule
tags: ['Rules']
---
## Overview
This rule performs provisioning actions from a provisioning plan provided by a supplied JDBC connection. These actions typically issue SQL commands, such as insert, update, select, and delete.
## Execution
- **Connector Execution** - This rule executes within the virtual appliance. It may offer special abilities to perform connector-related functions, and it may offer managed connections to sources.
- **Logging** - Logging statements are viewable within the ccg.log on the virtual appliance, and they are viewable by SailPoint personnel.
![Rule Execution](../img/connector_execution.png)
## Input
| Argument | Type | Purpose |
| --- | --- | --- |
| connection | java.sql.Connection | Reference to the current SQL connection. |
| plan | sailpoint.object.ProvisioningPlan | Provisioning plan containing the provisioning request(s). |
| application | sailpoint.object.Application | Attribute value for the identity attribute before the rule runs. |
| schema | sailpoint.object.Schema | Reference to the schema object for the delimited file source being read. |
## Output
| Argument | Type | Purpose |
| --- | --- | --- |
| result | sailpoint.object.ProvisioningResult | ProvisioningResult object containing the provisioning request's status (success, failure, retry, etc.). |
## Template
---
title: SAP BuildMap Rule
pagination_label: SAP BuildMap Rule
sidebar_label: SAP BuildMap Rule
sidebar_class_name: sapBuildMapRule
keywords: ['cloud', 'rules', 'sap']
description: This rule gathers additional attributes from SAP systems to build accounts. This rule is implemented using SAP's Java Connector (JCo) framework provided by a supplied SAP connection.
slug: /docs/rules/connector-rules/sap-buildmap-rule
tags: ['Rules']
---
## Overview
This rule gathers additional attributes from SAP systems to build accounts. This rule is implemented using SAP's Java Connector (JCo) framework provided by a supplied SAP connection.
## Execution
- **Connector Execution** - This rule executes within the virtual appliance. It may offer special abilities to perform connector-related functions, and it may offer managed connections to sources.
- **Logging** - Logging statements are viewable within the ccg.log on the virtual appliance, and they are viewable by SailPoint personnel.
![Rule Execution](../img/connector_execution.png)
## Input
| Argument | Type | Purpose |
| --- | --- | --- |
| object | sailpoint.object.Attributes | Reference to a SailPoint attributes object (basically a map object with some added convenience methods) that holds the attributes that have been built up by the default connector implementation. The rule should modify this object to change, add, or remove attributes from the map. |
| connector | sailpoint.connector.SAPInternalConnector | Reference to the current SAP connector. |
| state | java.util.Map | Map that can be used to store and share data between executions of this rule during a single aggregation run. |
| application | sailpoint.object.Application | Attribute value for the identity attribute before the rule runs. |
| schema | sailpoint.object.Schema | Reference to the schema object for the delimited file source being read. |
| destination | com.sap.conn.jco.JCoDestination | Connected and ready-to-use SAP destination object that can be used to call BAPI function modules and call to SAP tables. |
## Template
---
title: SAP HR Provisioning Modify Rule
pagination_label: SAP HR Provisioning Modify Rule
sidebar_label: SAP HR Provisioning Modify Rule
sidebar_class_name: sapHRProvisioningModifyRule
keywords: ['cloud', 'rules', 'sap']
description: This rule performs SAP HR modification operations during provisioning. This rule is typically used for attribute sync to custom SAP HR attributes.
slug: /docs/rules/connector-rules/sap-provisioning-modify-rule
tags: ['Rules']
---
## Overview
This rule performs SAP HR modification operations during provisioning. This rule is typically used for attribute sync to custom SAP HR attributes.
## Execution
- **Connector Execution** - This rule executes within the virtual appliance. It may offer special abilities to perform connector-related functions, and it may offer managed connections to sources.
- **Logging** - Logging statements are viewable within the ccg.log on the virtual appliance, and they are viewable by SailPoint personnel.
![Rule Execution](../img/connector_execution.png)
## Input
| Argument | Type | Purpose |
| --- | --- | --- |
| application | sailpoint.object.Application | Reference to the application object. |
| schema | sailpoint.object.Schema | Reference to the application schema. |
| destination | com.sap.conn.jco.JCoDestination | Connected and ready-to-use SAP destination object that can be used to call BAPI function modules and call to SAP tables. |
| plan | sailpoint.object.ProvisioningPlan | Provisioning plan containing the provisioning request(s). |
| request | sailpoint.object.ProvisioningPlan.AbstractRequest | AccountRequest being processed. It is always null for this global rule. It is only set for SapHrOperationProvisioning. |
| connector | sailpoint.connector.SAPHRConnector | Application connector being used for the operation. |
## Output
| Argument | Type | Purpose |
| --- | --- | --- |
| result | sailpoint.object.ProvisioningResult | ProvisioningResult object containing the provisioning request's status (success, failure, retry, etc.). |
## Template
---
title: Web Services After Operation Rule
pagination_label: Web Services After Operation Rule
sidebar_label: Web Services After Operation Rule
sidebar_class_name: webServicesAfterOperationRule
keywords: ['cloud', 'rules', 'webservices']
description: This rule calculates attributes after a web-service operation call.
slug: /docs/rules/connector-rules/webservices-after-provisioning-rule
tags: ['Rules']
---
## Overview
This rule calculates attributes after a web-service operation call.
## Execution
- **Connector Execution** - This rule executes within the virtual appliance. It may offer special abilities to perform connector-related functions, and it may offer managed connections to sources.
- **Logging** - Logging statements are viewable within the ccg.log on the virtual appliance, and they are viewable by SailPoint personnel.
![Rule Execution](../img/connector_execution.png)
## Input
| Argument | Type | Purpose |
| --- | --- | --- |
| application | sailpoint.object.Application | Application whose data file is being processed. |
| processedResponseObject | List<Map<String, Object>> | List of map (account/group). The map contains a key, the identityAttribute of the application schema, and a value, all the account/group attributes (schema) passed by the connector after parsing the respective API response. |
| requestEndPoint | sailpoint.connector.webservices.EndPoint | Current request information. It contains the header, body, context url, method type, response attribute map, successful response code. |
| restClient | sailpoint.connector.webservices.WebServicesClient | WebServicesClient (HttpClient) object that enables the user to call the Web Services API target system. |
| rawResponseObject | String | String object that holds the raw response returned from the target system, which can be in JSON or XML form. |
## Output
| Argument | Type | Purpose |
| --- | --- | --- |
| updatedAccountOrGroupList | java.util.Map | `Map` object returned from the After Operation Rule. It may contain either or both of the following: an updated list of account/group resource objects, identified by the key `data`, and attribute values to be updated in the application by the connector, in a state map identified by the key `connectorStateMap`. Each resource (account/group) object is of type `Map`, which contains **key-value** pairs. The **key** represents the schema attribute name, and the **value** represents the account/group attribute value. |
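The shape of that return map can be illustrated outside the rule itself. The Python sketch below only mirrors the structure described above (the actual rule returns a Java `Map` from Beanshell), and the account attribute names are invented for illustration:

```python
# Illustration only: the rule itself is Beanshell returning a java.util.Map;
# the account attribute names below are hypothetical.
updated_accounts = [
    {"login": "jdoe", "status": "active"},      # one resource object per account
    {"login": "asmith", "status": "inactive"},
]

after_rule_result = {
    "data": updated_accounts,                         # updated account/group list
    "connectorStateMap": {"lastRunToken": "abc123"},  # state to persist on the source
}

print(sorted(after_rule_result))  # ['connectorStateMap', 'data']
```

Either key may be omitted; a rule that only adjusts attributes returns just `data`, while a rule that only tracks state returns just `connectorStateMap`.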
## Template
---
title: Web Services Before Operation Rule
pagination_label: Web Services Before Operation Rule
sidebar_label: Web Services Before Operation Rule
sidebar_class_name: webServicesBeforeOperationRule
keywords: ['cloud', 'rules', 'webservices']
description: This rule calculates attributes before a web-service operation call.
slug: /docs/rules/connector-rules/webservices-before-provisioning-rule
tags: ['Rules']
---
## Overview
This rule calculates attributes before a web-service operation call.
## Execution
- **Connector Execution** - This rule executes within the virtual appliance. It may offer special abilities to perform connector-related functions, and it may offer managed connections to sources.
- **Logging** - Logging statements are viewable within the ccg.log on the virtual appliance, and they are viewable by SailPoint personnel.
![Rule Execution](../img/connector_execution.png)
## Input
| Argument | Type | Purpose |
| --- | --- | --- |
| application | sailpoint.object.Application | Application whose data file is being processed. |
| provisioningPlan | sailpoint.object.ProvisioningPlan | Provisioning plan used to update the payload of the http request. The provisioning plan has an account request that defines the operation to be performed on the account. An account request can contain multiple attributes requests. Each attribute request represents an operation on a single account attribute. This argument enables the user to update the body/payload or URL attributes of an endpoint object using the provisioningPlan information. |
| requestEndPoint | sailpoint.connector.webservices.EndPoint | Current request information. It contains the header, body, context url, method type, response attribute map, and successful response code. |
| restClient | sailpoint.connector.webservices.WebServicesClient | WebServicesClient (HttpClient) object that enables the user to call the Web Services API target system. |
| oldResponseMap | java.util.Map | Response object returned from earlier endpoint configuration of the same operation type, like Account Aggregation, Get Object, etc. |
## Output
| Argument | Type | Purpose |
| --- | --- | --- |
| EndPoint / Map | sailpoint.connector.webservices.EndPoint / sailpoint.connector.webservices.Map | The rule allows the user to return either the endpoint object (requestEndPoint) or a map. The map can hold **updatedEndPoint** and **connectorStateMap** keys, whose expected values are the **Endpoint** (requestEndPoint) object and the **connectorStateMap** object, respectively. The **connectorStateMap** object is a map that contains a key and a value for each attribute the rule must update in the application. |
---
title: IdentityNow Rule Utility
pagination_label: IdentityNow Rule Utility
sidebar_label: IdentityNow Rule Utility
sidebar_position: 2
sidebar_class_name: ruleUtility
keywords: ['rule', 'utility']
description: Using IDNRuleUtil as a Wrapper for Common Rule Operations
slug: /docs/rules/rule-utility
tags: ['Rules']
---
## Overview
Use this guide to learn how to configure searchable account attributes within IdentityNow and then leverage them within the IDNRuleUtil wrapper class when searching accounts for attributes such as uniqueness checks. There are also methods in the IDNRuleUtil wrapper class you can use without the additional searchable attributes.
Search attributes allow you to search across accounts and sources to determine whether a specific attribute value is being used in your IdentityNow environment.
There are three critical components involved in working with searchable attributes:
- [Configuration of search attributes within IdentityNow](#configuration-of-search-attributes-within-identitynow)
- Seed data for accounts already aggregated into the system.
- Ensure attribute promotion happens for new/changed accounts that are aggregated.
- [Create rules that can be used to query the newly created attribute values](#create-rules-that-can-be-used-to-query-the-newly-created-attribute-values)
- [Implement rules within the Create Profile section of each source an account is being provisioned for](#implement-rules-within-the-create-profile-section-of-each-source-for-which-an-account-is-being-provisioned)
## Configuration of Search Attributes within IdentityNow
When you are planning to implement search attributes, it is important that you consider the way new accounts' values will be generated and which attributes should be used as references.
You need the following information to create search attributes:
- IDs for sources that will be searched.
- Attribute name for each source that will be searched (such as mail, email, emailAddress).
- Unique name for the new attribute that will become common to all accounts in the account search configuration (e.g., newMail, newEmail, newEmailAddress).
- Display name for the new attribute configuration.
The following example shows how to create a new attribute with the [Search Attributes API](/idn/api/beta/create-search-attribute-config):
Your company has two sources. The first is Active Directory, and the second is Workday. When the system aggregates new accounts, the company wants to query IdentityNow to see whether an email address already exists. If the email address is not in use, you can assign it to the new account. If it is in use, you can iterate on the email address value (add a 1 for example). You can then query IdentityNow once more to see whether your incremented email address is in use. You can repeat this procedure until you have determined that an email address is unique.
The following information is necessary to create your search attribute:
- Active Directory: `4028112837fe14c70177fe1955e9032c`
- Workday: `4028812877fa18c72177fs195baa0341`
- Attribute name on each source that will be searched (such as mail, email, emailAddress):
- Active Directory: `mail`
- Workday: `emailAddress`
- Unique name for the new attribute that will become common to all accounts in the account search configuration (e.g., newMail, newEmail, newEmailAddress):
- `promotedEmailAddress`
### Create the New Search Attribute in IdentityNow
To call the APIs for search attributes, you need a personal access token and the name of your tenant to provide with the request. To retrieve a personal access token, see [Personal Access Tokens](../../../api/authentication.md#personal-access-tokens). To get the name of your tenant, see [Finding Your Organization Tenant Name](../../../api/getting-started.md#find-your-tenant-name).
Doing so creates an account search configuration for the two sources/attributes specified. All new/changed accounts that are aggregated have this new attribute (“promotedEmailAddress”) created in the account schema, and the value of the attribute (“mail” or “emailAddress”), depending on the source, is promoted to that new attribute.
```bash
curl --location -g --request POST 'https://{tenant}.api.identitynow.com/beta/accounts/search-attribute-config' \
```
:::caution
Aggregation only processes new and/or changed accounts for many sources. If an account is unchanged, an aggregation will not seed the new attribute or its value for this account. Therefore, it is mandatory that a non-optimized aggregation be performed when an account search configuration is created/modified for each source involved in that configuration.
:::
If this source has already been aggregated before the account search configuration was created, a non-optimized aggregation must now be performed to seed the new attribute data for all existing accounts.
At this point, the configuration exists to promote attributes on any new/changed account that comes into IdentityNow. These attributes and their associated values are stored for use in custom rules. Each account that exists on either of these sources will now have a new attribute called “promotedEmailAddress”. _The value of this attribute will be the value of `mail` if it is the Active Directory Source or `emailAddress` if it is the Workday source._
## Create Rules that Can Be Used to Query the Newly Created Attribute Values
To access the promoted attribute data mentioned in the section above, you can use library methods that provide access to that data. Two methods have been implemented:
```java
public int attrSearchCountAccounts(List<String> sourceIds, String attributeName, String operation, List<String> values)

public String attrSearchGetIdentityName(List<String> sourceIds, String attributeName, String operation, List<String> values)
```
Each of these utility library methods is loaded into the context available from within your custom rule. A method can be accessed by prefixing the method call with “idn.”.
Example: You want to use the promoted attribute data to determine an email address's uniqueness before using it to provision a new account to one of the sources involved in the account search configuration. You can call these methods to determine that uniqueness.
```java
import sailpoint.object.*;
```

Note that there are two method calls within the earlier example rule.
Calling the _`idn.attrSearchCountAccounts()`_ method with both example source IDs causes a search of all accounts for a value “promotedEmailAddress=jc@sailpoint.com”. The search returns the count of accounts containing that attribute value pair.
If _`idn.attrSearchCountAccounts()`_ returns non-zero, it may be useful to determine which identity owns the account(s) containing that value. The _`idn.attrSearchGetIdentityName()`_ method will return that identity name.
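To illustrate how the two methods work together, here is a minimal standalone sketch. It is hypothetical, not the real library: the `idn` context object is stubbed with hardcoded data so the fragment can compile and run on its own, the source IDs are the example ones from earlier in this guide, and the `"Equals"` operation value is an assumption rather than confirmed API behavior.

```java
import java.util.Arrays;
import java.util.List;

public class AttrSearchSketch {
    // Stub of idn.attrSearchCountAccounts(): pretends one account already
    // holds the value "jc@sailpoint.com" and no account holds anything else.
    static int attrSearchCountAccounts(List<String> sourceIds, String attributeName,
                                       String operation, List<String> values) {
        return values.contains("jc@sailpoint.com") ? 1 : 0;
    }

    // Stub of idn.attrSearchGetIdentityName(): returns a made-up owner name.
    static String attrSearchGetIdentityName(List<String> sourceIds, String attributeName,
                                            String operation, List<String> values) {
        return "john.chavez"; // hypothetical owning identity
    }

    public static void main(String[] args) {
        List<String> sourceIds = Arrays.asList(
            "4028112837fe14c70177fe1955e9032c",  // Active Directory (example ID)
            "4028812877fa18c72177fs195baa0341"); // Workday (example ID)
        List<String> values = Arrays.asList("jc@sailpoint.com");

        // First check whether any account already holds the value...
        int count = attrSearchCountAccounts(sourceIds, "promotedEmailAddress", "Equals", values);
        if (count > 0) {
            // ...and if so, look up which identity owns it.
            String owner = attrSearchGetIdentityName(sourceIds, "promotedEmailAddress", "Equals", values);
            System.out.println(count + " account(s) owned by " + owner);
        }
    }
}
```

In a real rule you would call `idn.attrSearchCountAccounts(...)` and `idn.attrSearchGetIdentityName(...)` directly; only the decision flow is shown here.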
## Implement Rules within the Create Profile Section of Each Source for Which an Account Is Being Provisioned
Create Profile can be found at **Admin** > **Connections** > **Source** > `SourceName` > **Accounts** > **Create Profile**.
You can invoke rules in different ways, but one of the most common implementations involves binding it to the Create Profile. This results in the rule's being used to generate/check the values used during new account provisioning.
When a `Generator` is selected for the `distinguishedName` attribute, a rule that invokes the provided library methods can be selected. This is an example of such a scenario:
Through a lifecycle state change, an account needs to be provisioned to an Active Directory source.
When the provisioning plan is created, the rule that generates the value for `distinguishedName` is called. The rule invokes the library methods mentioned earlier to determine the uniqueness of the attribute. In this case it may do the following:
Call _`idn.attrSearchCountAccounts()`_ to determine whether any other accounts are using first.last as a distinguishedName. If a count of 1 or more is returned, the call can be retried with first.last+1. The call is repeated until a zero is returned. At that point, the value is unique and can be used. The value is returned to the calling rule.
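The retry loop described above can be sketched as follows. This is a hypothetical standalone illustration: `countAccounts` stands in for `idn.attrSearchCountAccounts()`, backed by a plain set of already-taken values rather than a real account search, so the logic can run on its own.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class UniqueValueSketch {
    // Stand-in for idn.attrSearchCountAccounts(): returns how many existing
    // accounts already hold the candidate value.
    static int countAccounts(Set<String> taken, String candidate) {
        return taken.contains(candidate) ? 1 : 0;
    }

    // Append 1, 2, 3, ... to the base value until no account holds the result.
    static String uniqueValue(Set<String> taken, String base) {
        String candidate = base;
        int suffix = 1;
        while (countAccounts(taken, candidate) > 0) {
            candidate = base + suffix++;
        }
        return candidate;
    }

    public static void main(String[] args) {
        // first.last and first.last1 are taken, so first.last2 is the unique value.
        Set<String> taken = new HashSet<>(Arrays.asList("first.last", "first.last1"));
        System.out.println(uniqueValue(taken, "first.last")); // first.last2
    }
}
```

A real rule would replace `countAccounts` with a call such as `idn.attrSearchCountAccounts(sourceIds, "promotedEmailAddress", operation, values)` and return the final candidate to the generator.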
In some cases where a non-zero value is returned, it may be useful to know which identity owns the account containing that value. To find out this information, call _`idn.attrSearchGetIdentityName()`_ to determine the identity in question.
## IdnRuleUtil.java Descriptors
:::caution
Both the normal SailPoint context passed into the Beanshell rule evaluation and the new IdnRuleUtil referenced here include an "Identity" class:
The SailPoint context Identity class is provided via `sailpoint.object.Identity`. The IdnRuleUtil Identity class is provided via `sailpoint.rule.Identity`. When you reference an Identity class, you must be explicit about which Identity class you are using to avoid a namespace conflict. For example:
:::
```java
sailpoint.rule.Identity foundIdentity = idn.getIdentityById("uid");
String email = foundIdentity.getAttribute("email");
```
The section below provides a full accounting of the methods available to rule writers using the IdnRuleUtil class:
pagination_label: Rules
sidebar_label: Rules
sidebar_position: 2
sidebar_class_name: rules
keywords: ['rules']
description: Documentation for rule development in IdentityNow.
slug: /docs/rules
tags: ['Rules']
---
## Overview
In SailPoint solutions, rules serve as a flexible configuration framework implementers can leverage to perform complex or advanced configurations. Though rules allow some advanced flexibility, there are special considerations you must weigh when you are deciding to implement rules.
## Rule Execution
IdentityNow is a multi-tenant cloud solution, and its architecture differs from that of other SailPoint products like IdentityIQ. Therefore, the way rules execute within IdentityNow reflects the architectural design considerations the platform was built on. These considerations determine the rules' limitations.
There are two primary places where you can execute rules:
- **Cloud Execution** - These rules are executed in the IdentityNow multi-tenant cloud.
- **Connector Execution** - These rules are executed on the on-premise IdentityNow virtual appliance.
![Rule Execution](./img/rule_execution.png)
**Cloud-Executed Rules** or **Cloud Rules** typically only perform a specific function, such as calculating attribute values. Many of these rules may be able to query the IdentityNow data-model in a read-only fashion, but they do not have the ability to commit transactions, save objects, etc.
Because these rules execute in a multi-tenant cloud environment, they have a restricted context, and they are closely scrutinized to ensure that they execute in an efficient and secure manner.
For more details, see [Cloud Rules](./cloud-rules/index.md).
**Connector-Executed Rules** or **Connector Rules** are rules executed in the IdentityNow virtual appliance, and they are often an extension of the connector itself. The rules are commonly used for performing complex connector-related functions, so they are specific to only certain connectors. Because these rules execute in the virtual appliance, they do not have access to query the IdentityNow data model or fetch information from IdentityNow. They rely instead on contextual information sent from IdentityNow. Connector-executed rules may also have managed connections supplied in their contexts to support querying end systems or sources. Though you may use these managed connections, you cannot make additional connections or call-outs.
For more details, see [Connector Rules](./connector-rules/index.md).
## Support Considerations
Though IdentityNow shares some common functionality with other SailPoint products like IdentityIQ, the same rules are not necessarily supported, nor do they necessarily execute the same way or with the same context and variables. SailPoint recommends that you become familiar with which rules execute with which products, as well as the nuances in their execution contexts.
From a SailPoint support perspective, rules are considered configurations. SailPoint supports the underlying platform but not the rule configurations themselves. Any problems with the way rules are implemented or run over time are responsibilities the customer or implementer must manage. SailPoint IdentityNow Expert Services hours are required to cover any rule configuration work (e.g., creating rules, best-practices reviews, application to your IdentityNow environment, and promotion between sandbox and production environments). Contact your Customer Success Manager with any questions.

While rules allow some advanced flexibility, you must consider these support implications when you are deciding whether to implement rules. Consider rule usage a last resort, and use IdentityNow features instead whenever you can.
## Rule Guidelines
- **Supported Rules**
- You must use one of the Supported Rules defined in [Supported Cloud Rules](./cloud-rules/index.md#supported-cloud-rules) and [Supported Connector Rules](./connector-rules/index.md#supported-connector-rules). You must also annotate the rule with the correct type.
- Adhere to the rule's purpose as defined in Supported Rules. Do not use the rule differently from its intended purpose.
- The rules must use only available SailPoint product features, and they must not make unsupported API calls.
- **Logging**
- Use logging statements sparingly but informatively. Do not make unnecessary logging calls.
- Do not use `System.out` statements to output data. Internal log aggregators do not pick up these statements.
- If you want rules to log statements, use `log.debug()`, `log.info()`, `log.warn()`, or `log.error()` statements.
- When you are logging, do not log full object serialization to logs. Calls to `.toXml()` or similar methods are prohibited.
- Logging of sensitive data is prohibited.
- Do not spawn any additional threads in the rule.
- Connections to systems other than through provided connection contexts are strictly prohibited.
- Do not call out to external sources, files, services, APIs, etc. unless that is the connector's purpose. Avoid using file system object manipulation like opening temp files or spooling to text or CSV files. This can cause unforeseen issues when connections are leaked or improperly used.
- When you are using conditional execution, do not leave any dead or inaccessible code. All methods that return values should be able to return a value.
- **Error Handling**
- Use proper error handling including `try { ... }` , `catch { ... }` and `finally { ... }` blocks to allow exceptions to propagate as intended. This is especially true of connector-executed rules.
- Do not assume that objects are always available. They can be null. Make sure that you have proper null checks to prevent Null Pointer Exceptions (NPEs).
- **Security**
- Implement appropriate security measures in rules to ensure proper handling of user information and prevent its unauthorized use, disclosure, or access by third parties.
- Logging of sensitive data is prohibited, and it will cause the rule to be rejected.
- Do not include test values, passwords, keys, or sensitive values in the rule code.
- **Performance**
- Rules should be as performant as possible to achieve the task at hand.
- Be careful with iterative rules execution. Heavily iterative rules will have greater performance scrutiny.
- Do not iterate over lists of objects like accounts or identities. Doing so causes cache bloat. Use a projection query wherever possible to find the data you need, and then return the values you want. If you are unsure, ask [SailPoint Expert Services](https://www.sailpoint.com/services/professional/#contact-form).
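As a standalone illustration of the null-check and logging guidance above, the fragment below reads an attribute defensively before dereferencing it. It is a generic sketch under assumptions: `java.util.logging` stands in for the `log` object supplied to real rules, and the account map and attribute names are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.logging.Logger;

public class NullSafeAttributeRead {
    private static final Logger log = Logger.getLogger("rule");

    // Read an attribute defensively: the account map or the attribute itself
    // may be null, so never dereference a value without checking it first.
    static String getAttribute(Map<String, Object> account, String name) {
        if (account == null) {
            log.warning("No account supplied; returning null");
            return null;
        }
        Object value = account.get(name);
        if (value == null) {
            log.info("Attribute '" + name + "' not present on account");
            return null;
        }
        return value.toString();
    }

    public static void main(String[] args) {
        Map<String, Object> account = new HashMap<>();
        account.put("mail", "jane.doe@example.com");
        System.out.println(getAttribute(account, "mail"));
        System.out.println(getAttribute(account, "sAMAccountName"));
    }
}
```

Note that the values themselves are never logged, in keeping with the prohibition on logging sensitive data.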
## Rule Code Restrictions
The following code fragments are not allowed in any SailPoint [Cloud Rules](./cloud-rules/index.md) or [Connector Rules](./connector-rules/index.md). Any usage of these will be blocked in the system.
```java
context.
Log4j
Logger.getLogger
```
Note that the earlier code fragments are not allowed within [connector-executed rules](./connector-rules/index.md#supported-connector-rules) because they are not valid at the connector level. They will, for a short time, still be allowed for pre-existing [cloud-executed rules](./cloud-rules/index.md) as a review exception. However, any new rules using these constructs will be returned to the submitter, and the submitter will be asked to rewrite the rule, using the [IDN Rule Utility](./idn_rule_utility.md) helper methods instead:
- context
- .getObjectById()
@@ -235,20 +169,12 @@ to the submitter, and the submitter will be asked to rewrite the rule, using the
## Other Rules
While IdentityNow shares some common functionality with other SailPoint products like IdentityIQ, the same rules are not necessarily supported, nor do they necessarily execute the same way. SailPoint recommends that you become familiar with which rules execute with which products, as well as the nuances in their execution contexts. IdentityNow considers any other rules not mentioned in the Cloud-Executed Rules or Connector-Executed Rules sections to be unsupported.
## Deprecated Rules
The following rules have been deprecated in IdentityNow. SailPoint recommends using supported product functionality instead of these rules:
- **Certification Exclusion Rules** - Use configurable certification campaign filters instead.
- **Identity Selector Rules** - Use role standard assignment criteria instead.
- **Integration Rules** - Use [Before Provisioning](./cloud-rules/before_provisioning_rule.md) rules instead.

title: SaaS Configuration
pagination_label: SaaS Configuration
sidebar_position: 3
sidebar_class_name: saasConfiguration
keywords: ['configuration']
description: Use SaaS Configuration APIs to import and export configurations.
slug: /docs/saas-configuration
tags: ['SaaS Configuration']
---
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
This is a guide about using the SailPoint SaaS Configuration APIs to import configurations into and export configurations from the SailPoint SaaS system. Use these APIs to get configurations in bulk in support of environmental promotion, go-live, or tenant-to-tenant configuration management processes and pipelines.
For more details around how to manage configurations, refer to [SailPoint SaaS Change Management and Deployment Best Practices](https://community.sailpoint.com/t5/IdentityNow-Articles/SailPoint-SaaS-Change-Management-and-Deployment-Best-Practices/ta-p/189871).
## Audience
This document is intended for technically proficient administrators, implementers, integrators, or even developers. No coding experience is necessary, but to fully understand this guide, you must be able to understand JSON data structures and make REST API web-service calls.
## Supported Objects
| **Object** | **Object Type** | **Export** | **Import** |
| :-- | :-- | :-- | :-- |
| Event Trigger Subscriptions | `TRIGGER_SUBSCRIPTION` | ![:check_mark:](https://pf-emoji-service--cdn.us-east-1.prod.public.atl-paas.net/atlassian/check_mark_32.png) | ![:check_mark:](https://pf-emoji-service--cdn.us-east-1.prod.public.atl-paas.net/atlassian/check_mark_32.png) |
| Identity Profiles | `IDENTITY_PROFILE` | ![:check_mark:](https://pf-emoji-service--cdn.us-east-1.prod.public.atl-paas.net/atlassian/check_mark_32.png) | ![:check_mark:](https://pf-emoji-service--cdn.us-east-1.prod.public.atl-paas.net/atlassian/check_mark_32.png) |
| Rules | `RULE` | ![:check_mark:](https://pf-emoji-service--cdn.us-east-1.prod.public.atl-paas.net/atlassian/check_mark_32.png) | ![:check_mark:](https://pf-emoji-service--cdn.us-east-1.prod.public.atl-paas.net/atlassian/check_mark_32.png) |
| Sources | `SOURCE` | ![:check_mark:](https://pf-emoji-service--cdn.us-east-1.prod.public.atl-paas.net/atlassian/check_mark_32.png) | ![:check_mark:](https://pf-emoji-service--cdn.us-east-1.prod.public.atl-paas.net/atlassian/check_mark_32.png) |
| Transforms | `TRANSFORM` | ![:check_mark:](https://pf-emoji-service--cdn.us-east-1.prod.public.atl-paas.net/atlassian/check_mark_32.png) | ![:check_mark:](https://pf-emoji-service--cdn.us-east-1.prod.public.atl-paas.net/atlassian/check_mark_32.png) |
:::tip
The list of supported objects is also available via REST API! See List Configuration Objects in the **API Reference** section of this document.
:::
**Rule Import and Export -** Rules can be exported from one tenant and imported into another. Cloud rules have already been reviewed and installed in other tenants, and connector rules do not require a rule review. Rules cannot be changed during the migration process because they are validated by the `jwsHeader` and `jwsSignature` fields in the object.
## Exporting Configurations
![img](./img/sp-config-export.png)
1. **Start Export** - Start the export process by configuring a JSON payload for the export options. This payload will be sent to `POST /beta/sp-config/export`.
2. **Response with Export Status** - An export status will be given in response. This contains a `jobId` and a `status` to be used to subsequently monitor the process. Initially, this may have a status of `NOT_STARTED`.
3. **Get Export Status** - Using the `jobId` from the previous status, call `GET /beta/sp-config/export/{id}` where the `{id}` is the `jobId`.
4. **Response with Export Status** - An export status will be given in response. This contains a `jobId` and a `status` to be used to subsequently monitor the process. After a period of time, the process `status` should move to either `COMPLETE` or `FAILED`. Depending on the number of objects being exported, this could take a while. It may be necessary to iterate over steps 3 and 4 until the status reflects completion. If it takes too long, the export process may expire.
5. **Get Export Results** - Once the status is `COMPLETE`, download the export results by calling `GET /beta/sp-config/export/{id}/download` where the `{id}` is the `jobId`.
6. **Response with Export Results** - In response, the export process will produce a set of JSON objects you can download as an export result set. These will reflect the objects that were selected in the export options earlier.
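Put together, the six steps above can be sketched in code. This is a minimal illustration, not production code: the tenant URL, token handling, and export payload shape are assumptions, and only the endpoint paths and status values come from the steps above.

```javascript
// Sketch of the export flow above. BASE and token are placeholders.
const BASE = "https://{tenant}.api.identitynow.com/beta";

// A job is finished once it reaches COMPLETE or FAILED (step 4).
function isTerminal(status) {
  return status === "COMPLETE" || status === "FAILED";
}

async function exportConfig(token, exportOptions, { delayMs = 2000, maxPolls = 30 } = {}) {
  const headers = {
    Authorization: `Bearer ${token}`,
    "Content-Type": "application/json",
  };

  // Step 1: start the export job.
  const startRes = await fetch(`${BASE}/sp-config/export`, {
    method: "POST",
    headers,
    body: JSON.stringify(exportOptions),
  });
  // Step 2: initial status, e.g. NOT_STARTED.
  let { jobId, status } = await startRes.json();

  // Steps 3-4: poll until the job reaches a terminal status.
  for (let i = 0; i < maxPolls && !isTerminal(status); i++) {
    await new Promise((resolve) => setTimeout(resolve, delayMs));
    const pollRes = await fetch(`${BASE}/sp-config/export/${jobId}`, { headers });
    ({ status } = await pollRes.json());
  }
  if (status !== "COMPLETE") {
    throw new Error(`Export did not complete (status: ${status})`);
  }

  // Steps 5-6: download the export result set.
  const downloadRes = await fetch(`${BASE}/sp-config/export/${jobId}/download`, { headers });
  return downloadRes.json();
}
```

The bounded polling loop matters because, as noted in step 4, an export job that is left too long may expire.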
## Importing Configurations
![img](./img/sp-config-import.png)
1. **Start Import** - Start the import process by configuring a JSON payload for the import options. This will then be sent to `POST /beta/sp-config/import`.
2. **Response with Import Status** - An import status will be given in response. This contains a `jobId` and a `status` to be used to subsequently monitor the process. Initially this might have a status of `NOT_STARTED`.
3. **Get Import Status** - Using the `jobId` from the previous status, call `GET /beta/sp-config/import/{id}` where the `{id}` is the `jobId`.
4. **Response with Import Status** - An import status will be given in response. This contains a `jobId` and a `status` to be used to subsequently monitor the process. After a period of time, the process `status` will move to either `COMPLETE` or `FAILED`. Depending on the number of objects being imported, this could take a while. It may be necessary to iterate over steps 3 and 4 until the status reflects completion. If it takes too long, the import process may expire.
5. **Get Import Results** - Once the status is `COMPLETE`, download the import results by calling `GET /beta/sp-config/import/{id}/download` where the `{id}` is the `jobId`.
6. **Response with Import Results** - In response, the import process should produce a listing of objects that were successfully imported, as well as any errors, warnings, or information about the import process. This result set will reflect the objects that were selected to be imported earlier.
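The import flow mirrors the export flow, so only step 1 is sketched below. The multipart field name (`data`), the tenant URL shape, and the token handling are assumptions for illustration; verify them against the API reference for your tenant.

```javascript
// Illustrative helper: build the import URL, optionally with the
// preview query option described in the API Reference section.
function importUrl(tenant, { preview = false } = {}) {
  const base = `https://${tenant}.api.identitynow.com/beta/sp-config/import`;
  return preview ? `${base}?preview=true` : base;
}

// Illustrative sketch of step 1: upload an export result set as a
// multipart form. The "data" field name is an assumption.
async function startImport(token, tenant, exportResultJson) {
  const form = new FormData();
  form.append(
    "data",
    new Blob([JSON.stringify(exportResultJson)], { type: "application/json" }),
    "sp-config.json"
  );
  const res = await fetch(importUrl(tenant), {
    method: "POST",
    headers: { Authorization: `Bearer ${token}` },
    body: form,
  });
  return res.json(); // per step 2, this contains a jobId and a status
}
```

From here, polling and downloading the results follow steps 3 through 6 against the `/sp-config/import/{id}` endpoints, exactly as in the export flow.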
## API Reference Guide
:::tip
Import also has a “preview” option you can use to see what an import will look like without actually having to import and change your tenant. Any errors discovered during reference or resource resolution will be provided. To use this, simply set query option `preview` to `true`.
Example: `POST /beta/sp-config/import?preview=true`

---
pagination_label: Common CLI Commands
sidebar_label: Common CLI Commands
sidebar_position: 3
sidebar_class_name: commonCliCommands
keywords: ['connectivity', 'connectors', 'commands', 'cli']
description: These are the CLI commands most commonly used when building SaaS Connectors.
slug: /docs/saas-connectivity/common-cli-commands
tags: ['Connectivity']
---
Below is a list of commands and their usages:
- Create a project on your local system: `sail conn init "my-project"`
- Test your connector locally: `npm run dev`
- **Deployment**
- Create an empty connector in your IDN Org (used to get id so you can upload): `sail conn create "my-project"`
- Build a project: `npm run pack-zip`
- Upload your connector to your IDN Org: `sail conn upload -c [connectorID | connectorAlias] -f dist/[connector filename].zip`
- **Exploring**
- List connectors in your IDN Org: `sail conn list`
- List your connector tags: `sail conn tags list -c [connectorID | connectorAlias]`
- **Testing and Debugging**
- Test your connector on the IDN Org: `sail connectors invoke [action] -c [connectorID | connectorAlias] -p config.json`
- Get a list of actions: `sail conn invoke -h`
- Run read-only integration tests against your connector: `sail conn validate -p config.json -c [connectorID | connectorAlias] -r`
- Tail IDN Org connector logs: `sail conn logs tail`
- **Delete**
- Delete a connector: `sail conn delete -c [connectorID | connectorAlias]`

---
id: account-create
title: Account Create
pagination_label: Account Create
sidebar_label: Account Create
keywords: ['connectivity', 'connectors', 'account create']
description: Create account on the source.
slug: /docs/saas-connectivity/commands/account-create
tags: ['Connectivity', 'Connector Command']
---
| Input/Output | Data Type |
## Description
The account create command triggers whenever IDN is told to provision entitlements for an identity on the target source, but no account for the identity exists on the target source yet. For example, if you create an access profile that grants a group on the target source and then add that access profile to a role, any identity matching that role's membership criteria will be granted the group. IDN determines which identities do not have accounts on the target source and triggers the account create command for each identity. If an identity already has an account, it invokes the account update command instead.
## The Provisioning Plan
The account create command accepts a provisioning plan from IDN and creates the corresponding account(s) in the target source. When you configure your source in IDN, you must set up Create Profile to tell IDN how to provision new accounts for your source.
You can create the provisioning plan through the `accountCreateTemplate` in the `connector-spec.json` file, and you can also modify its behavior in IDN using the create profile screen:
![Account Create](./img/account_create_idn.png)
Create Profile provides the instructions for the provisioning plan and determines which attributes are sent to your connector code. For example, if all the account attributes in the preceding image are configured for a value, then the following JSON payload is sent to your connector:
```javascript
{
}
```
The provisioning plan does not include any disabled attributes. In the earlier image, `password` is disabled, so the payload to your connector does not include a field for `password`:
```javascript
{
}
```
The provisioning plan presents multi-valued entitlements in two different ways:
If a multi-valued entitlement, like groups, has only one value, then the provisioning plan represents it as a string value:
```javascript
{
}
```
If a multi-valued entitlement has more than one value, then the plan represents it as an array:
```javascript
{
}
```
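A common way to handle both shapes is to normalize the attribute to an array before processing it. The helper below is a hypothetical illustration, not part of the example connector:

```javascript
// Hypothetical helper: normalize a provisioning plan attribute that may
// arrive as a single string (one entitlement) or an array (several).
function toArray(value) {
  if (value === undefined || value === null) return [];
  return Array.isArray(value) ? value : [value];
}

// toArray("Administrators")       -> ["Administrators"]
// toArray(["Admins", "Auditors"]) -> ["Admins", "Auditors"]
```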
Your connector code must handle the possibility of both cases. The following code example from [AirtableAccount.ts](https://github.com/sailpoint-oss/airtable-example-connector/blob/main/src/models/AirtableAccount.ts) shows how to handle a multi-valued attribute:
```javascript
public static createWithStdAccountCreateInput(record: StdAccountCreateInput): AirtableAccount {
  // ...
}
```
## The return object
When the account is returned to IDN, any values you set are updated in IDN. So if an account ID is auto-generated on the source system, you must send the account ID back to IDN so IDN is aware of it for future account update activities. This is useful for the compound key type.
## Password Handling
There are three main ways to handle passwords on a source:
1. SSO, LDAP, or other federated authentication mechanisms are the preferred means of providing user login on a target source. If your source can integrate with a federated login service, use that service. If your source requires you to provide a password when you create accounts, even with a federated login, it is best to create a strong, random password. Your users will use the federated login, so they never need to know this password.
2. If your source has a password reset feature at login, it is best to initially create the account with a strong, random password the user does not have access to. Once the account is created, make the user request a password reset to set their own password. This method is the safest alternative to federated authentication because the initial password is strong and never known to anyone, and the user can generate his or her own password through secure channels.
3. The least secure method is setting a static password in the create profile that is well known among your users. Although it does not require any automated communication with your users, this approach is not recommended.
There are two ways you can generate random passwords:
1. Use the “Create Password” generator in Create Profile. (This can also be configured in the `accountCreateTemplate`)
![Create Password](./img/create_password_idn.png)
2. Disable the 'password' field.
Use Create Profile and generate a random password in code. There are some JavaScript libraries that can generate random strings suitable for passwords, like [random-string](https://www.npmjs.com/package/random-string) and [crypto-random-string](https://www.npmjs.com/package/crypto-random-string). Import either one of these libraries into your code to use them. The following example from [airtable.ts](https://github.com/sailpoint-oss/airtable-example-connector/blob/main/src/airtable.ts) uses a ternary operator to ensure the password is always provided. If the provisioning plan provides a password, use that value. If the provisioning plan does not provide a password, generate a random one.
```javascript
async createAccount(input: StdAccountCreateInput): Promise<AirtableAccount> {
  // ...
}
```
## Testing in IdentityNow
One way to test whether the account create code works in IDN is to set up an access profile and role that grants members an entitlement from the connector's target source. Start by creating an access profile that grants one or more entitlements from the target source.
![Testing 1](./img/testing1.png)
Next, create a role that uses the access profile created in the previous step.
![Testing 2](./img/testing2.png)
Modify the role membership to use Identity List and select one or more users that do not have accounts in the target source yet.
![Testing 3](./img/testing3.png)
Click the Update button in the upper right corner to initiate the account provisioning process. Doing so creates the account(s) on the target source once the process is complete.

---
id: account-delete
title: Account Delete
pagination_label: Account Delete
sidebar_label: Account Delete
keywords: ['connectivity', 'connectors', 'account delete']
description: Remove account from a source.
slug: /docs/saas-connectivity/commands/account-delete
tags: ['Connectivity', 'Connector Command']
---
| Input/Output | Data Type |
## Description
The account delete command sends one attribute from IDN, the identity to delete. This can be passed to your connector to delete the account from the source system.
Enable account delete in IDN through a BeforeProvisioning rule. The connector honors whichever operation the provisioning plan sends. For more information, see the [documentation](https://community.sailpoint.com/t5/IdentityNow-Articles/IdentityNow-Rule-Guide/ta-p/76665) and an [example implementation](https://community.sailpoint.com/t5/IdentityNow-Wiki/IdentityNow-Rule-Guide-Before-Provisioning-Rule/ta-p/77415).
The following snippet shows an example of account delete command implementation:

---
id: account-discover
title: Account Discover
pagination_label: Account Discover
sidebar_label: Account Discover
keywords: ['connectivity', 'connectors', 'account discover']
description: Dynamically determine account schema from the source.
slug: /docs/saas-connectivity/commands/account-discover
tags: ['Connectivity', 'Connector Command']
---
| Input/Output | Data Type |
## Description
The account discover schema command tells IDN to dynamically create the account schema for the source rather than use the account schema provided by the connector in connector-spec.json. It is often ideal to statically define the account schema because it is generally more performant and easier to develop and reason about the code. However, some sources have schemas that can be different for each customer deployment. It can also be difficult to determine which account attributes to statically expose, which requires the schema to be dynamically generated. Salesforce is an example of a source that can have thousands of account attributes, which makes it impractical to statically define a set of attributes that satisfies all connector users. Although the Salesforce connector defines a standard set of account attributes out of the box, it also allows schema discovery for users looking for more attributes.
## Implementation
If your connector requires dynamic schema discovery, you must add `std:account:discover-schema` to the list of commands in `connector-spec.json`. Because the account schema is dynamic, you do not need to specify an `accountSchema` or an `accountCreateTemplate` object in `connector-spec.json`. Your `connector-spec.json` file will look similar to this example from the [Airtable connector](https://github.com/sailpoint-oss/airtable-example-connector/blob/main/connector-spec.json).
```javascript
{
  // ...
}
```
## Programmatically build an account schema
There are many ways to programmatically build the account schema for a source. This section will cover one such method. To start, register your command in the main connector file, [index.ts](https://github.com/sailpoint-oss/airtable-example-connector/blob/main/src/index.ts).
```javascript
export const connector = async () => {
}
```
Next, implement the `discoverSchema()` function in your client code. The following function calls the necessary endpoints to get the full schema of the user account you want to represent in IDN. After you receive a response from your call, you must build your account schema object that will return to IDN. The response has a structure like the accountSchema property in the connector-spec.json file. The following is an example from [airtable.ts](https://github.com/sailpoint-oss/airtable-example-connector/blob/main/src/airtable.ts).
```javascript
async getAccountSchema(): Promise<StdAccountDiscoverSchemaOutput> {
@@ -366,22 +341,15 @@ This code produces the following payload that will be sent back to IDN.
}
```
There are many properties in this payload, so you may want to remove some, but it can be hard to determine which properties to keep in a dynamic way. If you can programmatically determine which properties to remove, you can alter the `discoverSchema()` function to remove them.
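One way to do that is to run the discovered attributes through a filter before returning the schema to IDN. The sketch below assumes a simplified attribute shape mirroring the `accountSchema` entries in connector-spec.json; the deny list and the set of supported types are hypothetical examples, not part of the Airtable connector:

```typescript
// Simplified shape of a discovered schema attribute, mirroring the
// accountSchema entries in connector-spec.json.
interface SchemaAttribute {
  name: string;
  type: string;
  description?: string;
}

// Hypothetical deny list of attribute names we never want to expose in IDN.
const EXCLUDED_ATTRIBUTES = ['internalNotes', 'rawPayload'];

// Keep only attributes that are not on the deny list and whose type the
// schema can represent.
function filterDiscoveredAttributes(attributes: SchemaAttribute[]): SchemaAttribute[] {
  return attributes.filter(
    (attr) =>
      !EXCLUDED_ATTRIBUTES.includes(attr.name) &&
      ['string', 'boolean', 'int'].includes(attr.type)
  );
}
```

A `discoverSchema()` implementation would call such a filter on the attribute list it builds from the source API before writing the schema to the response.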
## Test in IdentityNow
To test the account discover schema command in IDN, ensure that you upload your latest connector code and create a new source in IDN. After you configure and test your source connection, go to the Account Schema page. You will see an empty schema.
![Discover Schema 1](./img/discover_schema_idn1.png)
To discover the schema for this source, click the Options dropdown in the upper right and select Discover Schema.
![Discover Schema 2](./img/discover_schema_idn2.png)
@@ -389,8 +357,6 @@ IDN then asks you to assign attributes to Account ID and 'Account Name.'
![Discover Schema 3](./img/discover_schema_idn3.png)
Save the schema. You now have a populated account schema. A user of this source must provide further details, like descriptions and identifying which attributes are entitlements.
![Discover Schema 4](./img/discover_schema_idn4.png)

@@ -3,10 +3,10 @@ id: account-enable
title: Account Enable
pagination_label: Account Enable
sidebar_label: Account Enable
keywords: ['connectivity', 'connectors', 'account enable']
description: Enable an account on the source.
slug: /docs/saas-connectivity/commands/account-enable
tags: ['Connectivity', 'Connector Command']
---
| Input/Output | Data Type |
@@ -51,28 +51,13 @@ tags: ["Connectivity", "Connector Command"]
## Description
You typically invoke the account enable and account disable commands during the joiner, mover, leaver (JML) lifecycle. An identitys leaving from the organization or change to a role that does not require access to one or more accounts triggers the account disable command. An identitys rejoining the organization or move to a role that grants access to a previously disabled account triggers the account enable command.
Disabling accounts is generally preferred if the source supports account disabling so the account data remains for later reactivation or inspection. If the source does not support account disabling or deleting accounts is preferred when an identity leaves the organization, the connector performs the necessary steps to delete an account with the account disable function.
> 🚧 It is important to note that although SaaS Connectivity supports the account delete command, IDN never sends the account delete command, only the account disable command. The connector’s developer determines the appropriate action for account disable on the source.
Account enable/disable is similar to implementing the account update command. If you have implemented your source call to modify any of the values on your source, then you can use the same method to implement the command. The following code implements enable and disable:
```javascript
.stdAccountDisable(async (context: Context, input: StdAccountDisableInput, res: Response<StdAccountDisableOutput>) => {

@@ -3,10 +3,10 @@ id: account-list
title: Account List
pagination_label: Account List
sidebar_label: Account List
keywords: ['connectivity', 'connectors', 'account list']
description: Aggregate all accounts from the source into IdentityNow.
slug: /docs/saas-connectivity/commands/account-list
tags: ['Connectivity', 'Connector Command']
---
| Input/Output | Data Type |
@@ -39,24 +39,13 @@ tags: ["Connectivity", "Connector Command"]
## Description
The account list command aggregates all accounts from the target source into IdentityNow. IDN calls this command during a manual or scheduled account aggregation.
![Account List](./img/account_list_idn.png)
## Implementation
To implement this command, the web service must expose an API for listing user accounts and entitlements (e.g., roles or groups). Sometimes, a target source’s API has a single endpoint providing all the attributes and entitlements a source account contains. However, some APIs may break these attributes and entitlements into separate API endpoints, requiring you to make multiple calls to gather all of an account's necessary data. The following code from [airtable.ts](https://github.com/sailpoint-oss/airtable-example-connector/blob/main/src/airtable.ts) shows the necessary steps to create a complete account from the various endpoints the API offers:
```javascript
async getAllAccounts(): Promise<AirtableAccount[]> {
@@ -74,9 +63,7 @@ async getAllAccounts(): Promise<AirtableAccount[]> {
}
```
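Since the snippet above is truncated, the multi-endpoint pattern can be sketched end to end as follows. The two fetchers are hypothetical stand-ins for the source's real endpoints, not the Airtable API:

```typescript
interface User {
  id: string;
  displayName: string;
}

// Hypothetical stand-ins for two separate API endpoints: one listing
// users, one listing group memberships keyed by user ID.
async function fetchUsers(): Promise<User[]> {
  return [{ id: 'u1', displayName: 'Ada' }];
}

async function fetchGroupMemberships(): Promise<Record<string, string[]>> {
  return { u1: ['admins', 'developers'] };
}

// Call both endpoints in parallel and join the results by user ID to
// produce complete account objects.
async function getAllAccounts(): Promise<Array<User & { groups: string[] }>> {
  const [users, memberships] = await Promise.all([
    fetchUsers(),
    fetchGroupMemberships(),
  ]);
  return users.map((user) => ({ ...user, groups: memberships[user.id] ?? [] }));
}
```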
The following code snippet from [index.ts](https://github.com/sailpoint-oss/airtable-example-connector/blob/main/src/index.ts) shows how to register the account list command on the connector object:
```javascript
export const connector = async () => {
@@ -98,9 +85,7 @@ export const connector = async () => {
...
```
IDN expects each user in the target source to be converted into a format IDN understands. The specific attributes the web service returns depend on what your source provides.
```javascript
public toStdAccountListOutput(): StdAccountListOutput {
@@ -125,9 +110,7 @@ private buildStandardObject(): StdAccountListOutput | StdAccountCreateOutput | S
}
```
The result of the account list command is not an array of objects but several individual objects. This is the format IDN expects, so if you see something like the following result while testing, it is normal:
```javascript
{

@@ -3,10 +3,10 @@ id: account-read
title: Account Read
pagination_label: Account Read
sidebar_label: Account Read
keywords: ['connectivity', 'connectors', 'account read']
description: Aggregate a single account from the source into IdentityNow.
slug: /docs/saas-connectivity/commands/account-read
tags: ['Connectivity', 'Connector Command']
---
| Input/Output | Data Type |
@@ -49,18 +49,13 @@ tags: ["Connectivity", "Connector Command"]
## Description
The account read command aggregates a single account from the target source into IdentityNow. IDN can call this command during a “one-off” account refresh, which you can trigger by aggregating an individual account in IDN.
![Account Read](./img/account_read_idn.png)
## Implementation
Implementation of account read is similar to account list's implementation, except the code only needs to get one account, not all the accounts. The following snippet is from [airtable.ts](https://github.com/sailpoint-oss/airtable-example-connector/blob/main/src/airtable.ts):
```javascript
async getAccount(identity: SimpleKeyType | CompoundKeyType): Promise<AirtableAccount> {
@@ -88,14 +83,9 @@ async getAccount(identity: SimpleKeyType | CompoundKeyType): Promise<AirtableAcc
}
```
One special case of this command is the `NotFound` type. On line 20, if an account is not found, the `ConnectorError` is thrown with the `ConnectorErrorType.NotFound` type. This tells IDN the account does not exist, and IDN then triggers the account create logic to generate the account.
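A minimal sketch of that error path follows. The `ConnectorError` and `ConnectorErrorType` definitions here are local stand-ins; a real connector imports them from the SailPoint connector SDK rather than defining them:

```typescript
// Local stand-ins for the SDK's error types.
enum ConnectorErrorType {
  Generic = 'generic',
  NotFound = 'notFound',
}

class ConnectorError extends Error {
  constructor(
    message: string,
    public type: ConnectorErrorType = ConnectorErrorType.Generic
  ) {
    super(message);
  }
}

// Throw a NotFound error when the source has no record for the given
// key, so IDN falls through to its account create logic.
function findAccount<T>(records: Map<string, T>, id: string): T {
  const record = records.get(id);
  if (!record) {
    throw new ConnectorError(`Account ${id} not found`, ConnectorErrorType.NotFound);
  }
  return record;
}
```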
The following code snippet from [index.ts](https://github.com/sailpoint-oss/airtable-example-connector/blob/main/src/index.ts) shows how to register the account read command on the connector object:
```javascript
// Connector must be exported as module property named connector

@@ -3,10 +3,10 @@ id: account-unlock
title: Account Unlock
pagination_label: Account Unlock
sidebar_label: Account Unlock
keywords: ['connectivity', 'connectors', 'account unlock']
description: Lock and unlock an account on the source.
slug: /docs/saas-connectivity/commands/account-unlock
tags: ['Connectivity', 'Connector Command']
---
| Input/Output | Data Type |
@@ -49,13 +49,9 @@ tags: ["Connectivity", "Connector Command"]
## Description
The account lock and account unlock commands provide ways to temporarily prevent access to an account. IDN only supports the unlock command, so accounts must be locked on the source level, but they can be unlocked through IDN, and IDN can store the account's status.
Implementing account unlock is similar to the other commands that update attributes on an account. The following code unlocks an account:
```javascript
.stdAccountUnlock(async (context: Context, input: StdAccountUnlockInput, res: Response<StdAccountUnlockOutput>) => {

@@ -3,10 +3,10 @@ id: account-update
title: Account Update
pagination_label: Account Update
sidebar_label: Account Update
keywords: ['connectivity', 'connectors', 'account update']
description: Update an account on the source.
slug: /docs/saas-connectivity/commands/account-update
tags: ['Connectivity', 'Connector Command']
---
| Input/Output | Data Type |
@@ -58,39 +58,19 @@ tags: ["Connectivity", "Connector Command"]
## Description
The account update command triggers whenever IDN is told to modify an identity's attributes or entitlements on the target source. For example, granting an identity a new entitlement through a role, changing an identitys lifecycle state, or modifying an identity attribute tied to an account attribute all trigger the account update command.
## Input Schema
The payload from IDN contains the ID of the identity to modify, the configuration items the connector needs to call the source API, and one or more change operations to apply to the identity. Each operation has the following special considerations:
- **Set:** Set tells the connector to overwrite the current value of the attribute or entitlement with the new value provided in the payload. For multi-valued entitlements, the entire entitlement array is replaced.
- **Add:** Add only works for multi-valued entitlements. Add tells the connector to add one or more values to the entitlement. Add is often useful for group entitlements when new groups are added to the identity. If only one entitlement is added, it is represented as a `string`. If more than one entitlement is added, it is represented as an `array of strings`.
- **Remove:** Remove is similar to add, but it also works for attributes or single-valued entitlements. If you apply remove to multi-valued entitlements, doing so tells the connector to remove the value(s) from the entitlement. If only one entitlement is removed, it is represented as a `string`. If more than one entitlement is removed, it is represented as an `array of strings`. If you apply remove to a single-valued entitlement or account attribute, doing so tells the connector to set the value to `null` or `empty`.
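The three operations above can be sketched as a small reducer over an account's attribute map. The attribute shape here is a simplified assumption; note how the `string` vs. `array of strings` convention is normalized before applying Add and Remove:

```typescript
type AttributeValue = string | string[] | null;

interface AttributeChange {
  op: 'Set' | 'Add' | 'Remove';
  attribute: string;
  value?: string | string[];
}

// Normalize the payload's "one entitlement is a string, several are an
// array" convention into an array.
function toArray(value?: string | string[]): string[] {
  if (value === undefined) return [];
  return Array.isArray(value) ? value : [value];
}

// Apply one change operation to a simplified attribute map.
function applyChange(attrs: Record<string, AttributeValue>, change: AttributeChange): void {
  const current = attrs[change.attribute];
  switch (change.op) {
    case 'Set':
      // Overwrite the current value entirely.
      attrs[change.attribute] = change.value ?? null;
      break;
    case 'Add':
      // Append the new value(s) to the multi-valued entitlement.
      attrs[change.attribute] = [...toArray(current ?? undefined), ...toArray(change.value)];
      break;
    case 'Remove':
      if (Array.isArray(current)) {
        // Multi-valued entitlement: remove the listed value(s).
        const toRemove = toArray(change.value);
        attrs[change.attribute] = current.filter((v) => !toRemove.includes(v));
      } else {
        // Single-valued entitlement or attribute: clear the value.
        attrs[change.attribute] = null;
      }
      break;
  }
}
```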
The following example payload tells the connector to perform the following update actions:
- Set the title of the account to “Developer Advocate.”
@@ -114,28 +94,10 @@ update actions:
## Response Schema
After the connector applies the operations defined in the input payload, the connector must respond to IDN with the changes to the account so IDN can update the identity accordingly. If an account update operation results in no changes to the account, the connector responds with an empty object `{}`. If the update operation results in one or more changes to the account, the connector responds with the complete account as it exists in the source, just like an account read response. IDN can parse the response and apply the differences accordingly.
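As a sketch of that response rule, with a simplified account shape and naive change detection (a real connector returns the complete account exactly as read back from the source):

```typescript
interface AccountState {
  identity: string;
  attributes: Record<string, unknown>;
}

// Respond with {} when the update changed nothing, otherwise with the
// complete account as it now exists in the source, like an account read.
function buildUpdateResponse(
  before: AccountState,
  after: AccountState
): AccountState | Record<string, never> {
  const changed = JSON.stringify(before.attributes) !== JSON.stringify(after.attributes);
  return changed ? after : {};
}
```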
## Testing in IdentityNow
You can test the account update command the way you test the [Account Create](./account-create.md) command. Follow the steps in “Testing in IdentityNow” from “Account Create” to set up an access profile and role. Be sure to run the aggregation so the account(s) are created in the target source. Once the account(s) are created in the target source, modify the access profile to grant an additional entitlement. Return to the role and click the Update button in the upper right corner. Doing so triggers the account update command because the accounts are already created in the target source. Once the update is complete, ensure the account(s) have the additional entitlement.
Note: Testing the account update command for removing entitlements using this method does not work. You can remove the entitlement from the access profile and run an update, but IDN will not send an update command to the connector to remove the entitlement. We are looking for suggestions on how to test the removal of entitlements.

@@ -3,10 +3,10 @@ id: entitlement-list
title: Entitlement List
pagination_label: Entitlement List
sidebar_label: Entitlement List
keywords: ['connectivity', 'connectors', 'entitlement list']
description: Gather a list of all entitlements available on the source.
slug: /docs/saas-connectivity/commands/entitlement-list
tags: ['Connectivity', 'Connector Command']
---
| Input/Output | Data Type |
@@ -41,23 +41,13 @@ tags: ["Connectivity", "Connector Command"]
## Description
The entitlement list command triggers during a manual or scheduled entitlement aggregation operation within IDN. This operation gathers a list of all entitlements available on the target source, usually multi-valued entitlements like groups or roles. This operation provides IDN administrators with a list of entitlements available on the source so they can create access profiles and roles accordingly, and it provides IDN with more details about the entitlements. The entitlement schemas minimum requirements are name and ID, but you can add other values, such as created date, updated date, status, etc.
![Discover Schema 4](./img/entitlement_list_idn.png)
## Defining the Schema
The entitlement schema is defined in the [connector-spec.json](https://github.com/sailpoint-oss/airtable-example-connector/blob/main/connector-spec.json) file. Currently, only the multi-valued “group” type is supported. The following values are the minimum requirements, but you can add more attributes.
```javascript
...
@@ -85,8 +75,7 @@ values are the minimum requirements, but you can add more attributes.
## Implementation
This can be implemented in the main connector file, [index.ts](https://github.com/sailpoint-oss/airtable-example-connector/blob/main/src/index.ts):
```javascript
...

@@ -3,17 +3,15 @@ id: entitlement-read
title: Entitlement Read
pagination_label: Entitlement Read
sidebar_label: Entitlement Read
keywords: ['connectivity', 'connectors', 'entitlement read']
description: Fetch a single entitlements attributes from the source.
slug: /docs/saas-connectivity/commands/entitlement-read
tags: ['Connectivity', 'Connector Command']
---
:::note
At this time, Entitlement Read is not triggered from IDN for any specific workflow, so you do not need to implement it to have a fully functional connector.
:::
@@ -54,10 +52,7 @@ fully functional connector.
## Response Schema
Entitlement read fetches a single entitlements attributes and returns the resulting object to IDN, similar to how entitlement list does. You can implement this in the main connector file, [index.ts](https://github.com/sailpoint-oss/airtable-example-connector/blob/main/src/index.ts):
```javascript
...

@@ -5,10 +5,10 @@ pagination_label: Connector Commands
sidebar_label: Connector Commands
sidebar_position: 7
sidebar_class_name: connectorCommands
keywords: ['connectivity', 'connector', 'commands']
description: All commands available to implement in a SaaS Connector.
slug: /docs/saas-connectivity/connector-commands
tags: ['Connectivity']
---
Below you will find all of the commands available to implement in a SaaS Connector.

@@ -3,10 +3,10 @@ id: test-connection
title: Test Connection
pagination_label: Test Connection
sidebar_label: Test Connection
keywords: ['connectivity', 'connectors', 'test connection']
description: Ensure the connector can communicate with the source.
slug: /docs/saas-connectivity/commands/test-connection
tags: ['Connectivity', 'Connector Command']
---
| Input/Output | Data Type |
@@ -23,28 +23,15 @@ tags: ["Connectivity", "Connector Command"]
## Summary
The test connection command ensures the connector can communicate with the target web service. It validates API credentials, host names, ports, and other configuration items. To implement this command, look for either a health endpoint or a simple GET endpoint. Some web services implement a health endpoint that returns status information about the service, which can be useful to test a connection. If no health endpoint exists, use a simple GET endpoint that takes few to no parameters to ensure the connector can make a successful call to the web service.
Use Test Connection in the IDN UI after an admin has finished entering configuration information for a new instance of the connector.
![Test Connection](./img/test_command_idn.png)
## Implementation
In [index.ts](https://github.com/sailpoint-oss/airtable-example-connector/blob/main/src/index.ts), add the test connection function handler to your connector. Within this function, send a simple request to your web service to ensure the connection works. The web service this connector targets has a JavaScript SDK, so define your own function like the following example to test the connection:
```javascript
export const connector = async () => {
@@ -64,9 +51,7 @@ export const connector = async () => {
}
```
To implement the `testConnection()` function, use the following function created in the web service client code, [airtable.ts](https://github.com/sailpoint-oss/airtable-example-connector/blob/main/src/airtable.ts).
```javascript
/**
@@ -85,7 +70,4 @@ in the web service client code,
}
```
This function calls an endpoint on the target web service to list all users. If the call is successful, the web service returns an empty object, which is okay because you do not need to do anything with the data. Your only goal is to ensure that you can make API calls with the provided configuration.
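The overall pattern can be sketched as follows, with a hypothetical client interface standing in for the real web service SDK:

```typescript
// Hypothetical client with a cheap list endpoint; a real connector
// would call the target web service's SDK or REST API here.
interface Client {
  listUsers(): Promise<unknown[]>;
}

// Succeed with an empty object if any call works; surface a clear
// error otherwise so the admin can fix the configuration.
async function testConnection(client: Client): Promise<Record<string, never>> {
  try {
    await client.listUsers();
    return {};
  } catch (err) {
    throw new Error(`Unable to connect to the web service: ${err}`);
  }
}
```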

@@ -5,49 +5,33 @@ pagination_label: Connector Spec File
sidebar_label: Connector Spec File
sidebar_position: 4
sidebar_class_name: connectorSpecFile
keywords: ['connectivity', 'connectors', 'spec', 'specification']
description: The connector spec file tells IDN how to interact with the custom connector. It is the glue between IDN and the connector, so understanding its different sections is key to understanding how to build a custom connector.
slug: /docs/saas-connectivity/connector-spec
tags: ['Connectivity']
---
## Summary
The connector spec file tells IDN how to interact with the custom connector. It is the glue between IDN and the connector, so understanding its different sections is key to building a custom connector.
## Sample File
To see a sample spec file, see this link: [connector-spec.json](https://github.com/sailpoint-oss/airtable-example-connector/blob/main/connector-spec.json)
## Description of Fields
The following describes in detail the different fields in the connector spec:
- **name:** The name of the connector as it appears in IDN. Tags can be appended to this name.
- **keyType:** Either “simple” or “compound”. This determines which type of key your connector expects to receive and send back for each of the commands. This must always be indicated in your connector spec - the connector returns the correct type for each command that returns a key type.
- For example, the stdAccountRead command input is the StdAccountReadInput. If you select keyType as “simple,” then the StdAccountReadInput.key will be the type SimpleKey.
- **commands:** The list of commands the connector supports. A full list of available commands can be found here.
- **sourceConfig:** A list of configuration items you must provide when you create a source in IDN. The order of these items is preserved in the UI.
- **type:** This is always “menu” - it indicates a new menu for the sidebar. You can have multiple sections defined for complex connector configurations.
- **label:** This label indicates the text that will show up on the sidebar in IDN.
- **items:** The array of items in the menu
- **sectionHelpMessage:** A description about the section that can help the user understand what it is used for and how to fill out the fields
- **key:** The name of the configuration item as it is referenced in code.
- **label:** The name of the configuration item as it appears in the UI.
- **required** (Optional): Set to 'false' by default. Valid values are 'true' or 'false.' You must populate required configuration items in the IDN source configuration wizard before continuing.
- **type:** The configuration items' types. The following types are valid:
- text
- secret
- number
- checkbox
- json
- **accountSchema:** The schema for an account in IDN populated by data from the source.
- **displayAttribute:** Identifies the attribute (defined below) used to map to `Account Name` in the IdentityNow account schema. This should be a unique value even though it is not required because the connector will use this value to correlate accounts in IDN to accounts in the source system.
- **identityAttribute:** Identifies the attribute (defined below) used to map to `Account ID` in the IdentityNow account schema. This must be a globally unique identifier, such as email address, employee ID, etc.
- **groupAttribute:** Identifies the attribute used to map accounts to entitlements. For example, a web service can define `groups` that users are members of, and the `groups` grant entitlements to each user. In this case, **groupAttribute** is “groups,” and there is also an account attribute called “groups”.
- **attributes:** One or more attributes that map to a user's attribute on the target source. Each attribute defines the following:
- **name:** The attribute's name as it appears in IDN.
- **type:** The attribute's type. Possible values are `string`, `boolean`, `long`, and `int`.
- **description:** A helpful description of the attribute. This is useful to source owners when they are trying to understand the account schema.
- **managed:** This indicates whether the entitlements are manageable through IDN or read-only.
- **entitlement:** This boolean indicates whether the attribute is an entitlement. Entitlements give identities privileges on the source system. Use this indication to determine which fields to synchronize with accounts in IDN for tasks such as separation of duties and role assignment.
- **multi:** This indicates entitlements that are stored in an array format. This one field can store multiple entitlements for a single account.
- **entitlementSchemas:** A list of entitlement schemas in IDN populated by data from the source.
- **type:** The entitlement's type. Currently, only `group` is supported.
- **displayAttribute:** The entitlement attribute's name. This can be the `name` or another human-friendly identifier for a group.
- **identityAttribute:** The entitlement attribute's unique ID. This can be the `id` or another unique key for a group.
- **attributes:** The entitlement's list of attributes, for example: `id`, `name`, and `description`.
- **name:** The name of the attribute as it appears in IDN.
- **type:** The attribute's type. Possible values are `string`, `boolean`, `long`, and `int`.
- **description:** A helpful description of the attribute. This is useful to source owners when they are trying to understand the account schema.
- **accountCreateTemplate:** A map of identity attributes IDN will pass to the connector to create an account in the target source.
- **key:** The unique identifier of the attribute. This is also the name that is presented in the Create Profile screen in IDN.
- **label:** A friendly name for presentation purposes.
- **type:** The attribute's type. Possible values are `string`, `boolean`, `long`, and `int`.
- **initialValue (Optional):** Use this to specify identityAttribute mapping, generator, or default values.
- **type:** The initial value type. Possible values are `identityAttribute`, `generator`, `static`.
- **attributes:** Attributes change depending on the type selected.
- **name:** Use this to identify the mapping for identityAttribute type, or the generator to use (`Create Password`, `Create Unique Account ID`).
- **value:** Use this as the default value for the static type.
- **maxSize:** Use this for the Create Unique Account ID generator type. This value specifies the maximum size of the username to be generated.
- **maxUniqueChecks:** Use this for the Create Unique Account ID generator type. This value specifies the maximum number of retries in case a unique ID is not found with the first randomly generated username.
- **template:** Use this for the Create Unique Account ID generator type. This value specifies the template used for generation. Example: `"$(firstname).$(lastname)$(uniqueCounter)"`.
- **required (Optional):** Determines whether the account create operation requires this attribute. It defaults to `false`. If it is `true` and IdentityNow encounters an identity missing this attribute, IDN does not send the account to the connector for account creation.
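Taken together, these fields form a single JSON document. The following skeletal spec is a hypothetical illustration assembled from the field descriptions above; the connector name, commands, and attribute values are examples only, not the Airtable connector's real spec:

```json
{
  "name": "example-connector",
  "keyType": "simple",
  "commands": ["std:test-connection", "std:account:list", "std:account:read"],
  "sourceConfig": [
    {
      "type": "menu",
      "label": "Configuration",
      "items": [
        {
          "sectionHelpMessage": "Credentials used to reach the target service.",
          "items": [
            { "key": "apiKey", "label": "API Key", "required": true, "type": "secret" }
          ]
        }
      ]
    }
  ],
  "accountSchema": {
    "displayAttribute": "email",
    "identityAttribute": "id",
    "groupAttribute": "groups",
    "attributes": [
      { "name": "id", "type": "string", "description": "Unique account ID" },
      { "name": "email", "type": "string", "description": "Email address" },
      {
        "name": "groups",
        "type": "string",
        "description": "Group memberships",
        "entitlement": true,
        "managed": true,
        "multi": true
      }
    ]
  }
}
```

See the linked [connector-spec.json](https://github.com/sailpoint-oss/airtable-example-connector/blob/main/connector-spec.json) for the exact shape a real connector uses.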
pagination_label: Example Connectors
sidebar_label: Example Connectors
sidebar_position: 5
sidebar_class_name: exampleConnectors
keywords: ['connectivity', 'connectors', 'example']
description: Here are a few example connectors that were built for you to download and learn from.
slug: /docs/saas-connectivity/example-connectors
tags: ['Connectivity']
---
- [Airtable connector](https://github.com/sailpoint-oss/airtable-example-connector) is a real connector that works like a flat file data source and is great for demonstrating how a connector works.
- [Discourse Connector](https://github.com/sailpoint-oss/discourse-connector-2) is a real connector that works with the [Discourse service](https://www.discourse.org/). The documentation for each command references code from this example application.
pagination_label: API Calls
sidebar_label: API Calls
sidebar_position: 1
sidebar_class_name: apiCalls
keywords: ['connectivity', 'connectors', 'api calls']
description: Calling API endpoints sequentially for hundreds or thousands of accounts is slow. If several API calls are required to build a user's account, it is recommended that you use asynchronous functions to speed up this task.
slug: /docs/saas-connectivity/in-depth/api-calls
tags: ['Connectivity']
---
Calling API endpoints sequentially for hundreds or thousands of accounts is slow. If several API calls are required to build a user's account, it is recommended that you use asynchronous functions to speed up this task. Asynchronous functions allow your program to execute several commands at once, which is especially important for high-latency commands like calling API endpoints - each call to an endpoint can take anywhere from several milliseconds to several seconds. The following code snippet from [discourse-client.ts](https://github.com/sailpoint-oss/discourse-connector-2/blob/main/Discourse/src/discourse-client.ts) shows how you can use asynchronous functions to quickly build a list of account profiles for your source's users:
```javascript
async getUsers(): Promise<User[]> {
  // ...
```
- Line 3 gets all the user IDs for a default group to which all the users you want to track are assigned.
- Line 6 gets more attributes for each user present in the group. There can be hundreds of users who need their attributes fetched, so use Promise.all to build and execute the API calls asynchronously, speeding up the operation's completion time.
- Line 9 uses the same strategy as Line 6, except it calls another endpoint that will get each user's email address, which isn't present in the previous API call. Use Promise.all again to speed up the operation.
- Lines 12-14 combine the data you gathered from the preceding calls to complete your user accounts.
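The same fan-out-then-combine pattern, reduced to a self-contained sketch; the user IDs and the two fetchers here are mocked stand-ins for real API calls:

```javascript
// Mock API calls: each resolves after a short delay, like a real HTTP request.
const getUserIds = async () => [1, 2, 3];
const getUserDetails = (id) =>
  new Promise((resolve) =>
    setTimeout(() => resolve({ id, username: `user${id}` }), 10),
  );
const getUserEmail = (id) =>
  new Promise((resolve) => setTimeout(() => resolve(`user${id}@example.com`), 10));

async function getUsers() {
  const ids = await getUserIds();

  // Fan out: all detail calls run concurrently instead of one at a time.
  const details = await Promise.all(ids.map((id) => getUserDetails(id)));

  // Second fan-out for a field the first endpoint does not return.
  const emails = await Promise.all(ids.map((id) => getUserEmail(id)));

  // Combine the two result sets into complete user records.
  return details.map((detail, i) => ({ ...detail, email: emails[i] }));
}

getUsers().then((users) => console.log(users.length, users[0].email));
```

With sequential `await` in a loop, the total time grows linearly with the number of users; with `Promise.all`, each batch takes roughly as long as its slowest single call.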
> 📘 As a general guideline, any time you must execute several API calls that all call the same endpoint, it is recommended that you use Promise.all to speed up the operation.
pagination_label: Debugging
sidebar_label: Debugging
sidebar_position: 2
sidebar_class_name: debugging
keywords: ['connectivity', 'connectors', 'debugging']
description: An easy way to debug locally is to use console.log() to print debug information to your console.
slug: /docs/saas-connectivity/in-depth/debugging
tags: ['Connectivity']
---
## Debug locally
An easy way to debug locally is to use `console.log()` to print debug information to your console. You can add `console.log()` statements anywhere, and the messages they print can contain static text or variables. For example, to see the contents of an input object when you are invoking the `stdAccountCreate` command, you can craft the following debug logic:
```javascript
export const connector = async () => {
  // ...
async (
context: Context,
input: StdAccountCreateInput,
res: Response<StdAccountCreateOutput>,
) => {
// Print the contents of input to the console. Must use
// JSON.stringify() to print the contents of an object.
console.log(
`Input received for account create: ${JSON.stringify(input)}`,
);
if (!input.attributes.id) {
throw new ConnectorError('identity cannot be null');
}
const user = await airtable.createAccount(input);
logger.info(user, 'created user in Airtable');
res.send(user.toStdAccountCreateOutput());
},
);
};
```
`console.log()` statements work anywhere, and they work when you deploy your connector to IDN. However, these statements can create clutter in your code. You will often have to clean up debug statements once you are done.
If your IDE supports debugging JavaScript, then your IDE's built-in debugger can be a powerful and easy way to debug your code.
## Debug in VS Code
### Debug through the JavaScript debug terminal
In VS Code, open a JavaScript debug terminal window and run the npm run dev command.
`npm run dev`
Now you can set breakpoints in your TypeScript files in VS Code: ![debugging 1](./img/debugging1.png)
### Debug through the VS Code Debug configuration
To simplify the debugging process, you can consolidate the debugging steps into a VS Code launch configuration. The following snippet is an example of how you would do so:
**Launch.json:**
```json
{
  // ...
}
```
With these configurations set, you can run the debugger by selecting the options shown in the following image:
![debugging 2](./img/debugging2.png)
## Debug in IdentityNow
You can use the `sail conn logs` command to gain insight into how your connector is performing while running in IDN. See the section on logging for more information.
pagination_label: Error Handling
sidebar_label: Error Handling
sidebar_position: 3
sidebar_class_name: errorHandling
keywords: ['connectivity', 'connectors', 'error handling']
description: Any time code can fail due to validation issues, connectivity or configuration errors, handle the error and provide information back to the user about what went wrong.
slug: /docs/saas-connectivity/in-depth/error-handling
tags: ['Connectivity']
---
Any time code can fail due to validation issues, connectivity or configuration errors, handle the error and provide information back to the user about what went wrong. If you handle your errors properly, it will be easier to debug and pinpoint what happened in your connector when something goes wrong.
## Connector Errors
The connector SDK has a built-in ConnectorError to use in your project to handle most generic errors:
[airtable.ts](https://github.com/sailpoint-oss/airtable-example-connector/blob/main/src/airtable.ts)
## Custom Errors
You can also create custom errors and use them in your code to give more meaningful and specific responses to error states. For example, when you are configuring your connector, it is recommended that you throw an `InvalidConfigurationError` instead of a generic ConnectorError. To do this, create the custom error:
[invalid-configuration-error.ts](https://github.com/sailpoint-oss/airtable-example-connector/blob/main/src/errors/invalid-configuration-error.ts)
```javascript
import {ConnectorError, ConnectorErrorType} from '@sailpoint/connector-sdk';
/**
 * Thrown when an application is missing configuration during initialization
 */
export class InvalidConfigurationError extends ConnectorError {
  /**
*/
constructor(message: string, type?: ConnectorErrorType) {
super(message, type);
this.name = 'InvalidConfigurationError';
}
}
```
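To see how such an error surfaces, here is a self-contained sketch; `ConnectorError` is replaced with a stand-in class since the SDK is not imported here, and the config check is illustrative:

```javascript
// Stand-in for the SDK's ConnectorError, for illustration only.
class ConnectorError extends Error {}

class InvalidConfigurationError extends ConnectorError {
  constructor(message, type) {
    super(message);
    this.name = "InvalidConfigurationError";
    this.type = type;
  }
}

// A config check that throws the specific error instead of a generic one.
function validateConfig(config) {
  if (!config.apiKey) {
    throw new InvalidConfigurationError("apiKey must be provided");
  }
}

try {
  validateConfig({});
} catch (e) {
  // The error name immediately pinpoints the failure as a configuration problem.
  console.log(`${e.name}: ${e.message}`);
}
```

Because `InvalidConfigurationError` extends `ConnectorError`, existing `catch` blocks that handle `ConnectorError` continue to work while logs gain a more specific error name.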
pagination_label: Linting
sidebar_label: Linting
sidebar_position: 4
sidebar_class_name: linting
keywords: ['connectivity', 'connectors', 'linting']
description: Automatically check your connector source code for programmatic and stylistic errors.
slug: /docs/saas-connectivity/in-depth/linting
tags: ['Connectivity']
---
To add linting to your project, simply install the linter using NPM:
```yaml
env:
extends:
- eslint:recommended
- plugin:@typescript-eslint/recommended
parser: '@typescript-eslint/parser'
parserOptions:
ecmaVersion: latest
sourceType: module
plugins:
- '@typescript-eslint'
rules: {}
```
pagination_label: Logging
sidebar_label: Logging
sidebar_position: 5
sidebar_class_name: logging
keywords: ['connectivity', 'connectors', 'logging']
description: You can use this feature to read the logs of your connectors.
slug: /docs/saas-connectivity/in-depth/logging
tags: ['Connectivity']
---
## Printing Logs with the CLI
```bash
$ sail conn logs
[2022-07-14T11:04:24.941-04:00] INFO | commandOutcome ▶︎ {"commandType":"std:test-connection","completed":true,"elapsed":49,"message":"command completed","requestId":"cca732a2-084d-4433-9bd5-ed22fa397d8d","version":8}
```
To tail the logs to see output as it happens, execute the `sail conn logs tail` command.
It can also be helpful to execute the logs command along with grep to filter your results to a specific connector or text:
```bash
$ sail conn logs | grep 'connector version 29'
```
## Logging with console.log
Anywhere you use console.log in your code, the output will be exposed to the logs. The following example has a printed statement in the index.ts file:
```javascript
// Connector must be exported as module property named connector
export const connector = async () => {
  // ...
};
```
When you run the `sail conn logs` command, you will see the following in the output:
```bash
$ sail conn logs tail
```
## Logging using the SDK
Use the built-in logging tool to simplify the logging process and enhance your logger's capabilities. To start, import the logger from the SDK:
`import { logger as SDKLogger } from '@sailpoint/connector-sdk'`
Next, add a simple configuration for the logger to use throughout your application.
[logger.ts](https://github.com/sailpoint-oss/airtable-example-connector/blob/main/src/logger/logger.ts)
```javascript
import {logger as SDKLogger} from '@sailpoint/connector-sdk';
export const logger = SDKLogger.child(
// specify your connector name
{connectorName: 'Airtable'},
);
```
## Configuring the SDK to Mask Sensitive Values
The SDK Logger uses [Pino](https://github.com/pinojs/pino) under the hood, which has the built-in capability to search and remove json paths that can contain sensitive information.
> 🚧 Never expose any Personally Identifiable Information in any logging operations.
Start by looking at lines 116 to 122 in your logger configuration, which looks like the one below:
```javascript
import {logger as SDKLogger} from '@sailpoint/connector-sdk';
export const logger = SDKLogger.child(
// specify your connector name
{connectorName: 'Airtable'},
// This is optional for removing specific information you might not want to be logged
{
redact: {
paths: [
'*.password',
'*.username',
'*.email',
'*.id',
'*.firstName',
'*.lastName',
'*.displayName',
],
censor: '****',
},
},
);
```
Now compare that with the object you want to remove information from while still logging information in it:
[AirtableAccount.ts](https://github.com/sailpoint-oss/airtable-example-connector/blob/main/src/models/AirtableAccount.ts)
```javascript
export class AirtableAccount {
  // ...
}
```
Now when you log the contents of an `AirtableAccount` object, you will see all the fields redacted. For example, in [index.ts](https://github.com/sailpoint-oss/airtable-example-connector/blob/main/src/index.ts) we log the `accounts` in the following code snippet:
```javascript
.stdAccountList(async (context: Context, input: undefined, res: Response<StdAccountListOutput>) => {
  // ...
```

```bash
$ sail conn logs
[2022-07-14T11:19:30.678-04:00] INFO | commandOutcome ▶︎ {"commandType":"std:account:list","completed":true,"elapsed":1290,"message":"command completed","requestId":"379a8a4510944daf9d02b51a29ae863e","version":8}
```
You can see that any of the PII information has now been transformed into "\*\*\*\*".
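To make the redaction concrete, here is a minimal stand-in for the behavior described above. This is illustrative only; it is not the SDK's actual implementation, and only the key list and censor value are taken from the configuration shown earlier:

```javascript
// Illustrative only: a simplified stand-in for the SDK's redaction,
// not the actual @sailpoint/connector-sdk implementation.
const REDACTED_KEYS = ['password', 'username', 'email', 'id', 'firstName', 'lastName', 'displayName'];
const CENSOR = '****';

// A path like '*.password' censors the 'password' key at any depth.
function redact(value) {
  if (Array.isArray(value)) return value.map(redact);
  if (value !== null && typeof value === 'object') {
    const out = {};
    for (const [key, val] of Object.entries(value)) {
      out[key] = REDACTED_KEYS.includes(key) ? CENSOR : redact(val);
    }
    return out;
  }
  return value;
}

const account = {id: 'rec123', email: 'jane@example.com', groups: ['admins']};
console.log(redact(account)); // id and email become '****'; groups is untouched
```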
View File
@@ -5,31 +5,15 @@ pagination_label: Handling Rate Limits
sidebar_label: Handling Rate Limits
sidebar_position: 6
sidebar_class_name: handlingRateLimits
keywords: ['connectivity', 'connectors', 'rate limits']
description: Rate limiting for SaaS Connectivity.
slug: /docs/saas-connectivity/in-depth/handling-rate-limits
tags: ['Connectivity']
---
APIs often implement rate limits to prevent any one user from abusing the API or using an unfair amount of resources, limiting what other users of the API can do. The rate limits can manifest in many ways, but one of the most common ways is the 429 (Too Many Requests) HTTP status code. You must check the documentation of the API you are using to see whether it enforces rate limits and how it notifies you when you reach that limit. An example of rate limit documentation for Stripe's API can be found [here](https://stripe.com/docs/rate-limits).
If you are using a vendor-supplied client library for the API, check the documentation for that client library to see whether it handles rate limits for you. If it does, you do not need to worry about rate limits. If it does not or if you have to implement your own library for interacting with the target API, you must handle rate limiting yourself. If you are implementing your own library for the target API, the easiest way to handle rate limits is to use the [axios-retry](https://www.npmjs.com/package/axios-retry) NPM package in conjunction with the [axios](https://www.npmjs.com/package/axios) HTTP request library. Start by including both packages in the dependencies section of your `package.json` file:
```json
...
@@ -41,15 +25,7 @@ library. Start by including both packages in the dependencies section of your
...
```
Next, run `npm install` in your project directory to install the packages. Once they are installed, go to the section of your code that handles API calls to your source and wrap your Axios HTTP client object in an Axios retry object. In the following snippet, the code automatically retries an API call that fails with a 429 error code three times, using exponential back-off between each API call. You can configure this better to suit your API's rate limit. The following code snippet from [discourse-client.ts](https://github.com/sailpoint-oss/discourse-connector-2/blob/main/src/discourse-client.ts) shows the code necessary to set up the retry logic:
```javascript
import { ConnectorError } from "@sailpoint/connector-sdk"
@@ -107,8 +83,7 @@ export class DiscourseClient {
...
```
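If you are curious what the retry behavior amounts to, the following self-contained sketch shows the same idea in plain JavaScript: retry on a 429 up to a fixed number of attempts, doubling the delay each time. The names here are hypothetical; in a real connector you would rely on `axios-retry` rather than hand-rolling this.

```javascript
// Hand-rolled sketch of retry-with-exponential-backoff for 429 responses.
// Illustrative only: a real connector should use axios-retry instead.
async function withRetry(fn, retries = 3, baseDelayMs = 100) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      const isRateLimited = err && err.status === 429;
      if (!isRateLimited || attempt >= retries) throw err;
      // Exponential back-off: 100ms, 200ms, 400ms, ...
      const delayMs = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}

// Simulated endpoint that rate-limits the first two calls.
let calls = 0;
async function flakyEndpoint() {
  calls += 1;
  if (calls <= 2) {
    const err = new Error('Too Many Requests');
    err.status = 429;
    throw err;
  }
  return 'ok';
}

withRetry(flakyEndpoint).then((result) => console.log(result)); // logs 'ok' on the third attempt
```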
Because `axios-retry` wraps an `axios` object, you can make API calls like you normally would with Axios without any special options or configuration.
```javascript
private async getUserEmailAddress(username: string): Promise<string> {
View File
@@ -5,17 +5,15 @@ pagination_label: Testing
sidebar_label: Testing
sidebar_position: 7
sidebar_class_name: testing
keywords: ['connectivity', 'connectors', 'testing']
description: Testing SaaS Connectivity.
slug: /docs/saas-connectivity/in-depth/testing
tags: ['Connectivity']
---
## Getting Started
When you set up a new project, the following test files are created: `index.spec.ts` and `my-client.spec.ts`. Execute the tests immediately using `npm test`.
```bash
$ npm run test
@@ -41,34 +39,24 @@ Ran all test suites.
{"level":"INFO","message":"Running test connection"}
```
You can also view the results in an HTML report by viewing the `index.html` file inside the `coverage/lcov-report` folder:
![Account List](./img/testing1.png) ![Account List](./img/testing2.png)
## Testing Techniques
[Jest](https://jestjs.io/docs/getting-started) is a testing framework for JavaScript that focuses on simplicity. The CLI includes it when it generates the project. It is recommended to use Jest to test your code.
Testing your code is important because it can highlight implementation issues before they get into production. If your tests are set up with good descriptions, the tests can also help explain why certain conditions are important in the code, so if a new developer breaks a test, they will know what broke and why the functionality is important.
If you have good tests set up, you can quickly identify and fix changes or updates that occur in dependent sources.
Jest provides [many different ways to test your code](https://jestjs.io/docs/using-matchers). Some techniques are listed below:
### Test a method and evaluate the response using `expect`
```javascript
it('get users populates correct fields', async () => {
// Execute the method
let res = await discourseClient.getUsers();
@@ -76,7 +64,7 @@ it("get users populates correct fields", async () => {
expect(res.length).toBe(2);
// Evaluate the response email and ensure it matches the expected result
expect(res[0].email).toBe('test.test@test.com');
});
```
@@ -84,8 +72,7 @@ it("get users populates correct fields", async () => {
- Line 7 asserts that the response is an array with 2 elements.
- Line 10 evaluates the email field in the array to ensure it matches the expected result.
### Test a method to ensure it calls another method using `spyOn`
@@ -106,8 +93,7 @@ it("get users populates correct fields", async () => {
})
```
- Line 4 sets up the spy. `generateRandomPassword` is an internal method that gets called when the password is not provided.
- Line 7 executes the method.
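Conceptually, a spy just wraps a method and records its calls. The following toy sketch (hypothetical names, not Jest's implementation) shows the idea:

```javascript
// Toy sketch of what jest.spyOn does: wrap a method and record every call.
// Illustrative only; Jest's real spies also support stubbing and restoration.
function spyOn(obj, methodName) {
  const original = obj[methodName];
  const spy = {calls: [], restore: () => (obj[methodName] = original)};
  obj[methodName] = function (...args) {
    spy.calls.push(args); // record the invocation before delegating
    return original.apply(this, args);
  };
  return spy;
}

const client = {
  generateRandomPassword: () => 'hunter2',
  createUser(name, password) {
    // falls back to a generated password when none is provided
    return {name, password: password ?? this.generateRandomPassword()};
  },
};

const spy = spyOn(client, 'generateRandomPassword');
client.createUser('jane'); // no password given, so the spied method is called
console.log(spy.calls.length); // 1
```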
@@ -115,21 +101,11 @@ it("get users populates correct fields", async () => {
## Setting up Mock Services
The easiest way to mock your client is to set up a mock service that returns data just like your service would in production so you can test all your functions and data manipulation in your unit tests.
Mocks help test your code without actually invoking your service and allow you to simulate the kind of response your client expects to receive. They can also help you pinpoint where failures occur in case something changes on your service. By using a mock service, you can test your entire application without connecting to your service.
### Create a mock file

Jest provides a way to set up a mock service. It stores your mock files in a folder called `__mocks__`. If you name your TypeScript files exactly the same as the files they are mocking, those mock implementations will be called instead when your unit tests are running. In the following example, a mock has been created to simulate calls to the Airtable client:
[airtable.ts](https://github.com/sailpoint-oss/airtable-example-connector/blob/main/src/__mocks__/airtable.ts)
@@ -210,21 +186,11 @@ export class AirtableClient {
}
```
The method signatures are exactly the same in this mock file as the signatures in the "real" [airtable.ts](https://github.com/sailpoint-oss/airtable-example-connector/blob/main/src/airtable.ts). The only difference is that the response objects from all the calls are made without actually calling any external dependencies, so the mock can be run quickly in a unit test without having to make API calls to a real client.
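The pattern can be boiled down to a small sketch. The class and data below are hypothetical stand-ins, not the actual Airtable mock:

```javascript
// Minimal illustration of the manual-mock pattern: the mock mirrors the
// real client's method signatures but returns canned data instead of
// making network calls. Names and data here are hypothetical.
const cannedAccounts = [
  {id: '1', email: 'jane@example.com'},
  {id: '2', email: 'john@example.com'},
];

class MockClient {
  // Same signature as the real client's account-list method,
  // but resolves immediately with canned data.
  async getAllAccounts() {
    return cannedAccounts;
  }
}

new MockClient().getAllAccounts().then((accounts) => console.log(accounts.length)); // 2
```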
### Define JSON mock objects
The responses are stored in directly imported JSON files. This helps keep the code focused on the logic and allows the response objects to be generated more easily from a tool like Postman without requiring any major formatting of the response. Enable direct JSON imports by setting `"resolveJsonModule": true` in your `tsconfig.json`. The following response file is an example:
[account.json](https://github.com/sailpoint-oss/airtable-example-connector/blob/main/src/__mocks__/account.json)
@@ -246,16 +212,15 @@ is an example:
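For reference, the compiler option mentioned above sits under `compilerOptions`; a minimal `tsconfig.json` fragment might look like this (your project's other options will differ):

```json
{
  "compilerOptions": {
    "resolveJsonModule": true
  }
}
```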
### Use the mock in your tests
The mock is defined in the test file, and Jest does the rest. Jest overrides all the calls to use the methods in the `__mocks__` folder.
[index.spec.ts](https://github.com/sailpoint-oss/airtable-example-connector/blob/main/test/index.spec.ts)
```javascript
import {connector} from '../src/index';
import {StandardCommand} from '@sailpoint/connector-sdk';
import {PassThrough} from 'stream';

// setup your mock object
jest.mock('../src/airtable');
```
View File
@@ -5,57 +5,30 @@ pagination_label: SaaS Connectivity
sidebar_label: SaaS Connectivity
sidebar_position: 4
sidebar_class_name: saasConnectivity
keywords: ['connectivity', 'connectors']
description: SaaS Connectivity is a cloud based connector runtime that makes developing and deploying web service connectors easy.
slug: /docs/saas-connectivity
tags: ['Connectivity']
---
SaaS Connectivity is a cloud-based connector runtime that makes developing and deploying web service connectors easier than Connector 1.0 does. However, because the cloud hosts SaaS Connectivity, not a Virtual Appliance (VA), SaaS Connectivity is limited in the types of applications it can connect to. For example, you cannot use SaaS Connectivity to connect to on-prem services that can only communicate within an intranet (no public internet access). This excludes JDBC and Mainframe applications, to name a few.
## What Are Connectors
Connectors are the bridges between the SailPoint IdentityNow (IDN) SaaS platform and the source systems that IDN needs to communicate with and aggregate data from. An example of a source system IDN may need to communicate with would be an Oracle HR system or GitHub. In these cases, IDN synchronizes data between systems to ensure account entitlements and state are correct throughout the organization.
## Why Are We Introducing a New Connector
VA connectors always communicate with external sources through the Virtual Appliance (VA) as seen in the diagram below:
![Old Connectivity](./img/old_connectivity_diagram.png)
VA connectors can be disadvantageous because you need an on-prem virtual appliance to have any external connectivity with them, even when that connectivity is a SaaS service like Salesforce.com.
It is also challenging to create a custom connector in the VA Connector framework. Therefore, there are generic connectors available such as flat file, JDBC and webservice connectors. These options provide flexibility in configuring almost any source, but this configuration can be complex. For example, when you create a JDBC connector, you must use SQL to define the data model.
The new Cloud connectors work differently: they run on the IDN platform instead (see diagram below).
![New Connectivity](./img/new_connectivity_diagram.png)
With this process, you can run an entire IDN instance without a VA. The new connector also includes a CLI tool to manage cloud connectors and an SDK to create custom connectors. Because it is simpler to create a custom connector, you can create specific connectors for a variety of sources, and the connectors' configuration can be much simpler. For example, you can now configure a formerly complicated webservice connector by providing two parameters (Base URL and API Key) in a custom cloud connector.
View File
@@ -5,15 +5,12 @@ pagination_label: Postman Collection
sidebar_label: Postman Collection
sidebar_position: 6
sidebar_class_name: postmanCollection
keywords: ['connectivity', 'connectors', 'postman']
description: Use the following Postman Collection file to run tests for each of the commands locally.
slug: /docs/saas-connectivity/postman-collection
tags: ['Connectivity', 'Postman']
---
Use the following Postman Collection file to run tests for each of the commands locally.
[Postman Collection](./assets/SaaS_Connectivity.postman_collection)
View File
@@ -5,12 +5,10 @@ pagination_label: Prerequisites
sidebar_label: Prerequisites
sidebar_position: 1
sidebar_class_name: prerequisites
keywords: ['connectivity', 'connectors', 'prerequisites']
description: These are some prerequisites you must have before you start building SaaS Connectors.
slug: /docs/saas-connectivity/prerequisites
tags: ['Connectivity']
---
## Packages
@@ -26,18 +24,11 @@ To develop a connector, the following packages are required:
## IDE
Although you can develop connectors in a text editor, use an Integrated Development Environment (IDE) for a better experience. There are many IDEs that support JavaScript/TypeScript, including [Visual Studio Code](https://code.visualstudio.com/Download), a free IDE with native support for JavaScript/TypeScript. VS Code provides syntax highlighting, debugging, hints, code completion, and other helpful options.
## Install CLI
SailPoint provides a CLI tool to manage the connectors' lifecycles. To install and set up the CLI, follow the instructions in this repository's README file: [SailPoint CLI on GitHub](https://github.com/sailpoint-oss/sailpoint-cli).
## Create New Project
@@ -47,12 +38,9 @@ To create an empty connector project, run the following command:
sail conn init my-first-project
```
The CLI init command creates a new folder with your project name in the location where you run the command.
Change the directory to the project folder and run `npm install` to install the dependencies. You may need to provide your GitHub credentials because the CLI tool depends on a SailPoint internal GitHub repository.
### Source Files
@@ -72,31 +60,15 @@ my-first-project
This directory contains three main files:
- **index.ts:** Use this file to register all the available commands the connector supports, provide the necessary configuration options to the client code implementing the API for the source, and pass data the client code obtains to IdentityNow. This file can either use a vendor supplied client Software Development Kit (SDK) to interact with the web service or reference custom client code within the project.
- **my-client.ts:** Use this template to create custom client code to interact with a web service's APIs. If the web service does not provide an SDK, you can modify this file to implement the necessary API calls to interact with the source web service.
- **connector-spec.ts:** This file describes how the connector works to IDN. More information about the connector spec is available in the next section. At a high level, it has the information for the following:
  - What commands the connector supports
  - What config values the user must provide when creating the connector
  - Defining the account schema
  - Defining the entitlement schema
  - Defining the account create template that maps fields from IDN to the connector
These files are templates that provide guidance to begin implementing the connector on the target web service. Although you can implement a connector's entire functionality within these three files (or even just one if the web service provides an SDK), you can implement your own code architecture, like breaking out common utility functions into a separate file or creating separate files for each operation.
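To illustrate the role `index.ts` plays, here is a toy sketch of the command-registration pattern (simplified stand-ins, not the actual connector SDK API):

```javascript
// Toy sketch of the command-registration pattern used in index.ts.
// The real connector SDK exposes createConnector() with typed handlers;
// the names below are simplified stand-ins for illustration.
class Connector {
  constructor() {
    this.handlers = new Map();
  }
  register(commandType, handler) {
    this.handlers.set(commandType, handler);
    return this; // allow chaining, similar to the SDK's fluent style
  }
  async invoke(commandType, input) {
    const handler = this.handlers.get(commandType);
    if (!handler) throw new Error(`unsupported command: ${commandType}`);
    return handler(input);
  }
}

const connector = new Connector()
  .register('std:test-connection', async () => ({connected: true}))
  .register('std:account:list', async () => [{identity: 'jane'}]);

connector.invoke('std:test-connection').then((res) => console.log(res.connected)); // true
```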
View File
@@ -5,31 +5,19 @@ pagination_label: Test, Build, and Deploy
sidebar_label: Test, Build, and Deploy
sidebar_position: 2
sidebar_class_name: testBuildDeploy
keywords: ['connectivity', 'connectors', 'test', 'build', 'deploy']
description: As you implement command handlers, you must test them. The connector SDK provides some utility methods to locally run your connector to test, build, and deploy.
slug: /docs/saas-connectivity/test-build-deploy
tags: ['Connectivity']
---
## Testing Your Connector
You can use the following Postman Collection file to locally run tests for each of the commands.
[Postman Collection](./assets/SaaS_Connectivity.postman_collection)
As you implement command handlers, you must test them. The connector SDK provides some utility methods to locally run your connector. To start, run `npm run dev` within the connector project folder. This script locally starts an Express server on port 3000, which can be used to invoke a command against the connector. You do not need to restart this process after making changes to connector code. Once the Express server is started, you can send `POST` requests to `localhost:3000` and test your command handlers. For example, you can run `POST localhost:3000` with the following payload to run the `stdAccountRead` handler method.
```json
{
@@ -43,34 +31,25 @@ handler method.
}
```
- **type:** The command handler's name. It also refers to the operation being performed.
- **input:** Input to provide to the command handler.
- **config:** The configuration values required to test locally. A `token` value is not required, but the default project specifies `token`, so you must include it in your request to begin.
## Create and Upload Connector Bundle
Follow these steps to use the CLI to package a connector bundle, create it in your IdentityNow org, and upload it to IdentityNow.
### Package Connector Files
You must compress the files in the connector project into a zip file before uploading them to IdentityNow.
Use the CLI to run `npm run pack-zip` to build and package the connector bundle. Put the resulting zip file in the `dist` folder.
### Create Connector In Your Org
Before uploading the zip file, you must create an entry for the connector in your IdentityNow org. Run `sail conn create "my-project"` to create a connector entry.
The response to this command contains a connector ID you can use to manage this connector.
```bash
$ sail conn create "example-connector"
@@ -99,9 +78,7 @@ $ sail conn list
### Upload Connector Zip File to IdentityNow
Run `sail conn upload -c [connectorID | connectorAlias] -f dist/[connector filename].zip` to upload the zip file built from the previous step to IdentityNow.
```bash
$ sail conn upload -c example-connector -f dist/example-connector-0.1.0.zip
@@ -112,10 +89,7 @@ $ sail conn upload -c example-connector -f dist/example-connector-0.1.0.zip
+--------------------------------------+---------+
```
The first upload of a connector zip file also creates the `latest` tag, pointing to the latest version of the connector file. After uploading the connector bundle zip file, you can run `sail conn tags list -c example-connector` to see the connector tags.
```bash
$ sail conn tags list -c example-connector
@@ -128,16 +102,11 @@ $ sail conn tags list -c example-connector
## Test Your Connector in IdentityNow
Follow these steps to test a connector bundle in both IdentityNow and the IdentityNow user interface (UI).
### Test Your Connector Bundle In IdentityNow
The connector CLI provides ways to test invoking commands with any connector upload version. Before running a command, create a file, **config.json**, in the root project folder. Include any configuration items required to interact with the target web service in this file, such as API token, username, password, organization, version, etc. The following snippet is an example:
```json
{
@@ -145,11 +114,9 @@ organization, version, etc. The following snippet is an example:
}
```
This file is required and must contain at least one key-value pair, even if your connector does not need any configuration.
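For illustration, a minimal **config.json** might look like the following sketch. The keys shown here (`token`, `organization`) are hypothetical placeholders; use whatever configuration items your connector actually reads:

```json
{
  "token": "example-api-token",
  "organization": "example-org"
}
```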
Next, invoke the command using the connector ID and config.json. For example, this command invokes the `std:account:list` command on the connector:
```bash
sail connectors invoke account-list -c example-connector -p config.json
@@ -165,14 +132,10 @@ $ sail connectors invoke account-list -c example-connector -p config.json
> ⚠️ Sensitive information!
>
> Ensure that you add config.json to your .gitignore file so you do not accidentally store secrets in your code repository.
## Test Your Connector from IdentityNow UI
Go to your IdentityNow org's source section. Create a source from the connector you just uploaded. This connector will display in the dropdown list: **example-connector (tag: latest)**
After creating a source, you can test the connection, aggregate accounts, and more from the IdentityNow UI.
View File
@@ -4,16 +4,15 @@ title: Guides
pagination_label: Guides
sidebar_label: Guides
sidebar_class_name: transforms
keywords: ['transforms', 'guides']
description: Transform Guides
slug: /docs/transforms/guides
tags: ['Transforms', 'Guides']
---
# Transform Guides
Not sure how to use transforms yet? Read these guides to see how you can use transforms and learn how to get started!
```mdx-code-block
import DocCardList from '@theme/DocCardList';
View File
@@ -4,27 +4,22 @@ title: Generate Temporary Password
pagination_label: Generate Temporary Password
sidebar_label: Generate Temporary Password
sidebar_class_name: generateTemporaryPassword
keywords: ['transforms', 'guides', 'password']
description: Generate a temporary password for all users.
sidebar_position: 2
slug: /docs/transforms/guides/temporary-password
tags: ["Transforms", "Guides", "Password"]
tags: ['Transforms', 'Guides', 'Password']
---
## Overview
In this guide, you will learn how to create a nested transform in order to generate a temporary password from a user's attributes.
- The authoritative source's data feed includes both a first_name and a last_name field for every worker.
- A hire date is provided within the authoritative source data feed: the hire_date field is provided for every worker and is in the format of YYYY-MM-DD.
For an initial (temporary) password, set a static value driven off a formula that can be communicated to the new hire by email. This is the formula:
- The first character is the user's first initial in lowercase.
- The user's last name comes next with the first character in uppercase.
@@ -33,8 +28,7 @@ that can be communicated to the new hire by email. This is the formula:
## Create the Example Source from a Delimited File
This is the CSV file you will upload to create your source for testing this transform:
| id | email | first_name | last_name | hire_date |
| ------ | ---------------------------- | ---------- | --------- | ---------- |
@@ -42,37 +36,29 @@ transform:
| 100011 | frank.williams@sailpoint.com | Frank | Williams | 2020-07-10 |
| 100012 | paddy.lowe@sailpoint.com | Paddy | Lowe | 2020-09-20 |
To upload your CSV source, go to **Admin** > **Connections** > **Sources** and select **Create New**.
Fill in the form to create a source:
![Create Source](./img/create_source.png)
The source configuration workflow will appear. Keep all the default settings and under **Review and Finish** on the left hand side, select **Exit Configuration**.
## Upload Schema and Accounts
In your newly created source, go to **Import Data** > **Account Schema**. Under **Options**, select **Upload Schema**. Locate the CSV file from earlier in this document.
Once your account schema is uploaded, you will see your available attributes to use within the transform.
![Create Source](./img/account_schema.png)
Now you can upload your accounts. Go to **Import Data** > **Import Accounts** > **Import Data**. Locate the CSV file from earlier in this document.
![Account Summary](./img/account_summary.png)
## Create an Identity Profile for the Source
Create an identity profile for your source. Go to **Admin** > **Identities** > **Identity Profiles** and select **New**.
![Identity Profile](./img/account_summary.png)
@@ -80,21 +66,11 @@ Fill out the form and select the source you created earlier.
## Create the Transform
To create the transform for generating the user's temporary password, you will use multiple different operations. You are going to break it out into pieces and then put it all together at the end. The [static transform](../operations/static.md) will be your main transform. You will use nested transforms to create each part of the password and then use those variables created in the final value.
### The First Character is the User's First Initial in Lowercase
The first part of the password is the user's first initial in lowercase. You can create that attribute by using the [substring operation](../operations/substring.md) to get the first initial and then passing that attribute as input into the [lower operation](../operations/lower.md). In this example, the variable is `firstInitialLowercase`, and you will use it later in your static string.
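For orientation, such a variable could be sketched roughly as follows. This is an illustrative sketch, not the guide's exact definition; the `Transform Example` source name and exact nesting are assumptions:

```json
{
  "firstInitialLowercase": {
    "type": "lower",
    "attributes": {
      "input": {
        "type": "substring",
        "attributes": {
          "input": {
            "type": "accountAttribute",
            "attributes": {
              "sourceName": "Transform Example",
              "attributeName": "first_name"
            }
          },
          "begin": 0,
          "end": 1
        }
      }
    }
  }
}
```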
**First Initial Variable**
@@ -153,12 +129,7 @@ then passing that attribute as input into the
### The User's Last Name Comes Next with the First Character in Uppercase
Adding to the transform, you can create a variable for the first character of the last name. You can do so by using the [substring operation](/idn/docs/transforms/operations/substring) and the [upper operation](/idn/docs/transforms/operations/upper). Once you have the variable `lastInitialUppercase` created, you can add that variable to the end of the static string in the value key.
**Last Initial Variable**
@@ -234,10 +205,7 @@ the static string in the value key.
}
```
You also need the end of the last name without the first character you already have capitalized from the last step. You can get that by using the substring method and providing only the begin key, which will return everything after the index you specify.
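A begin-only substring might be sketched like this (a hypothetical illustration; the source and attribute names are assumed from the example CSV):

```json
{
  "lastName": {
    "type": "substring",
    "attributes": {
      "input": {
        "type": "accountAttribute",
        "attributes": {
          "sourceName": "Transform Example",
          "attributeName": "last_name"
        }
      },
      "begin": 1
    }
  }
}
```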
**Last Name Variable**
@@ -322,10 +290,7 @@ index you specify.
### The User's Two-Digit Start Month Comes Next, Taken from the Hire_Date
To get the two-digit start month, use the [split operation](/idn/docs/transforms/operations/split). The `hire_date` is in the format of `YYYY-MM-DD`. To get the month, split on `-` and return the element at index 1.
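A split variable for the month might be sketched as follows (an illustrative sketch only; the source name is an assumption):

```json
{
  "hireDateMonth": {
    "type": "split",
    "attributes": {
      "input": {
        "type": "accountAttribute",
        "attributes": {
          "sourceName": "Transform Example",
          "attributeName": "hire_date"
        }
      },
      "delimiter": "-",
      "index": 1
    }
  }
}
```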
**Hire Date Month Variable**
@@ -425,8 +390,7 @@ to return as 1.
### The Last Part of the Password is a Static String: "RstP\*!7"
To add the final part of the password, which is the static string `RstP\*!7`, use the static operation.
**Static String Variable**
@@ -528,12 +492,7 @@ use the static operation.
To verify your transform is working, create the transform through the REST API.
To call the APIs for transforms, you need a personal access token and your tenant's name to provide with the request. For more information about how to get a personal access token, see [Personal Access Tokens](../../../../api/authentication.md#personal-access-tokens). For more information about how to get the name of your tenant, see [Finding Your Organization Tenant Name](../../../../api/getting-started.md#find-your-tenant-name).
```bash
curl --location --request POST 'https://{tenant}.api.identitynow.com/v3/transforms' \
@@ -619,31 +578,22 @@ curl --location --request POST 'https://{tenant}.api.identitynow.com/v3/transfor
}'
```
Once you have created the transform successfully, you can apply the new transform and preview what the password will look like for each user.
Log in to your IdentityNow tenant and go to **Admin** > **Identities** > **Identity Profiles**. Select the name of the profile you created earlier, Transform Example. Select the **Mappings** tab, scroll to the bottom and select **Add New Attribute**. Name the attribute `Temporary Password`. To save the new mappings, you must fill out the id, email, first name and last name mappings.
![Attribute Mapping](./img/temporary_password_attribute_mapping.png)
Once you have saved the mappings, select **Preview** in the upper right of the page and select the Lewis Hamilton identity under **Identity to Preview**. The temporaryPassword shows up as `lHamilton12RstP*!7`.
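As a sanity check outside IdentityNow, the same password formula can be reproduced with ordinary shell string operations. This is a local illustration only, not part of the transform or the IdentityNow product:

```bash
# Local sketch of the temporary-password formula, using Lewis Hamilton's CSV row
first_name="Lewis"
last_name="hamilton"
hire_date="2020-12-12"

# First initial, lowercased
first_initial=$(printf '%s' "${first_name:0:1}" | tr '[:upper:]' '[:lower:]')
# Last name with its first character uppercased
last_cap="$(printf '%s' "${last_name:0:1}" | tr '[:lower:]' '[:upper:]')${last_name:1}"
# Two-digit month from the YYYY-MM-DD hire date
month=$(printf '%s' "$hire_date" | cut -d '-' -f 2)

echo "${first_initial}${last_cap}${month}RstP*!7"   # prints lHamilton12RstP*!7
```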
This is an example table of values with the temporary password for each user:
| id | email | first_name | last_name | hire_date | temporaryPassword |
| --- | --- | --- | --- | --- | --- |
| 100010 | lewis.hamilton@sailpoint.com | Lewis | hamilton | 2020-12-12 | lHamilton12RstP\*!7 |
| 100011 | frank.williams@sailpoint.com | Frank | Williams | 2020-07-10 | fWilliams07RstP\*!7 |
| 100012 | paddy.lowe@sailpoint.com | Paddy | Lowe | 2020-09-20 | pLowe09RstP\*!7 |
## Next Steps
Looking for more examples or having trouble with one of your complex transforms? Reach out in the [Developer Community Forum](https://developer.sailpoint.com/discuss/).
View File
@@ -4,17 +4,16 @@ title: Your First Transform
pagination_label: Your First Transform
sidebar_label: Your First Transform
sidebar_class_name: yourFirstTransform
keywords: ['transforms', 'guides', 'first']
description: Learn to build your first transform!
sidebar_position: 1
slug: /docs/transforms/guides/your-first-transform
tags: ["Transforms", "Guides", "First"]
tags: ['Transforms', 'Guides', 'First']
---
## Overview
In this guide, you will learn how to use [IdentityNow's Transform REST APIs](/idn/api/v3/transforms) to do the following:
- [List Transforms in Your IdentityNow Tenant](#list-transforms-in-your-identitynow-tenant)
- [Create a Transform](#create-a-transform)
@@ -24,33 +23,22 @@ In this guide, you will learn how to use
## List Transforms in your IdentityNow Tenant
To call the APIs for transforms, you need a personal access token and your tenant's name to provide with the request. For more information about how to get a personal access token, see [Personal Access Tokens](../../../../api/authentication.md#personal-access-tokens). For more information about how to get the name of your tenant, see [Finding Your Organization Tenant Name](../../../../api/getting-started.md#finding-your-orgtenant-name).
Before you create your first custom transform, see what transforms are already in the tenant. You can get this information by calling the [List Transforms API](/idn/api/v3/get-transforms-list).
```bash
curl --location --request GET 'https://{tenant}.api.identitynow.com/v3/transforms' --header 'Authorization: Bearer {token}'
```
The response body contains an array of transform objects containing the following values:
- **id** - The id of the transform
- **name** - The name of the transform
- **type** - The type of transform, see [Transform Operations](../operations/index.md)
- **attributes** - Object of attributes related to the transform
- **internal** - A `true` or `false` attribute to determine whether the transform is internal or custom
- **true** - The transform is internal and cannot be modified without contacting SailPoint.
- **false** - The transform is custom and can be modified with the API.
```json
@@ -91,11 +79,7 @@ following values:
## Create a Transform
This [lookup transform](../operations/lookup.md) takes the input value of an attribute, locates it in the table provided, and returns its corresponding value. If the transform does not find your input value in the lookup table, it returns the default value. Replace `{tenant}` and `{token}` with the values you got earlier.
```bash
curl --location --request POST 'https://{tenant}.api.identitynow.com/v3/transforms' \
@@ -134,19 +118,15 @@ curl --location --request POST 'https://{tenant}.api.identitynow.com/v3/transfor
}
```
Once you have created the transform, you can find it in IdentityNow by going to **Admin** > **Identities** > **Identity Profiles** > (An Identity Profile) > **Mappings** (tab).
![Mappings Tab](./img/mappings_tab.png)
For more information about creating transforms, see [Create Transform](/idn/api/v3/create-transform).
## Get Transform by ID
To get the transform created with the API, call the `GET` endpoint, using the `id` returned by the create API response.
```bash
curl --location --request GET 'https://{tenant}.api.identitynow.com/v3/transforms/b23788a0-41a2-453b-89ae-0d670fa0cb6a' \
@@ -172,13 +152,11 @@ curl --location --request GET 'https://{tenant}.api.identitynow.com/v3/transform
}
```
For more information about getting a transform by its `id`, see the API [Transform by ID](/idn/api/v3/get-transform).
## Update a Transform
To update a transform, call the `PUT` endpoint with the updated transform body. This example adds another item, `EN-CA`, to the lookup table.
:::caution
@@ -225,26 +203,19 @@ curl --location --request PUT 'https://{tenant}.api.identitynow.com/v3/transform
}
```
For more information about updating transforms, see [Update a transform](/idn/api/v3/update-transform).
## Delete a Transform
To delete the transform, call the DELETE endpoint with the `id` of the transform to delete. The server responds with a 204 when the transform is successfully removed.
```bash
curl --location --request DELETE 'https://{tenant}.api.identitynow.com/v3/transforms/b23788a0-41a2-453b-89ae-0d670fa0cb6a' \
--header 'Authorization: Bearer {token}'
```
For more information about deleting transforms, see the API [Delete Transform](/idn/api/v3/delete-transform).
## Next Steps
Congratulations on creating your first transform! Now that you understand the lifecycle of transforms, see the [complex use case](./temporary-password.md) to learn how to use a nested transform structure to create a temporary password that can be sent to each user.
View File
@@ -5,22 +5,17 @@ pagination_label: Transforms
sidebar_label: Transforms
sidebar_position: 1
sidebar_class_name: transforms
keywords: ['transforms']
description: Building Transforms in IdentityNow
slug: /docs/transforms
tags: ["Transforms"]
tags: ['Transforms']
---
In SailPoint's cloud services, transforms allow you to manipulate attribute values while aggregating from or provisioning to a source. This guide provides a reference to help you understand the purpose, configuration, and usage of transforms.
## What Are Transforms
Transforms are configurable objects that define easy ways to manipulate attribute data without requiring you to write code. Transforms are configurable building blocks with sets of inputs and outputs:
<div align="center">
@@ -31,27 +26,19 @@ flowchart LR
</div>
Because there is no code to write, an administrator can configure these by using a JSON object structure and uploading them into IdentityNow using [IdentityNow's Transform REST APIs](/idn/api/v3/transforms).
:::info
Sometimes transforms are referred to as Seaspray, the codename for transforms. IdentityNow Transforms and Seaspray are essentially the same.
:::
## How Transforms Work
Transforms typically have one or more inputs and outputs. The way the transformation occurs mainly depends on the type of transform. Refer to [Operations in IdentityNow Transforms](./operations/index.md) for more information.
For example, a [Lower transform](./operations/lower.md) transforms any input text strings into lowercase versions as output. So if the input were `Foo`, the lowercase output of the transform would be `foo`:
<div align="center">
@@ -62,10 +49,7 @@ flowchart LR
</div>
There are other types of transforms too. For example, an [E.164 Phone transform](./operations/e164-phone.md) transforms any input phone number strings into an E.164 formatted version as output. So if the input were `(512) 346-2000`, the output would be `+1 5123462000`:
<div align="center">
@@ -78,11 +62,7 @@ flowchart LR
### Multiple Transform Inputs
In the previous examples, each transform had a single input. Some transforms can specify more than one input. For example, the [Concat transform](./operations/concatenation.md) concatenates one or more strings together. If `Foo` and `Bar` were inputs, the transformed output would be `FooBar`:
<div align="center">
@@ -96,15 +76,9 @@ flowchart LR
### Complex Nested Transforms
For more complex use cases, a single transform may not be enough. It is possible to link several transforms together. IdentityNow calls these 'nested' transforms because they are transform objects within other transform objects.
An example of a nested transform would be using the previous [Concat transform](./operations/concatenation.md) and passing its output as an input to another [Lower transform](./operations/lower.md). If the inputs `Foo` and `Bar` were passed into the transforms, the ultimate output would be `foobar`, concatenated and in lowercase.
<div align="center">
@@ -116,22 +90,13 @@ flowchart LR
</div>
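As a sketch, the nested Concat-then-Lower example could be written as follows (the transform name is arbitrary, and the literal `values` stand in for whatever inputs you actually pass):

```json
{
  "name": "Concat and Lower Example",
  "type": "lower",
  "attributes": {
    "input": {
      "type": "concat",
      "attributes": {
        "values": ["Foo", "Bar"]
      }
    }
  }
}
```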
There is no hard limit for the number of transforms that can be nested. However, the more transforms applied, the more complex the nested transform will be, which can make it difficult to understand and maintain.
## Configuring Transform Behavior
Some transforms can specify an attributes map that configures the transform behavior. Each transform type has different configuration attributes and different uses. To better understand what is configurable per transform, refer to the Transform Types section and the associated Transform guide(s) that cover each transform.
It is possible to extend the earlier complex nested transform example. If a Replace transform, which replaces certain strings with replacement text, were added and configured to replace `Bar` with `Baz`, its output would be added as an input to the Concat and Lower transforms:
<div align="center">
@@ -143,16 +108,11 @@ flowchart LR
</div>
The output of the Replace transform would be `Baz`, which is then passed as an input to the Concat transform along with `Foo`, producing an output of `FooBaz`. This is then passed as an input into the Lower transform, producing a final output of `foobaz`.
## Transform Syntax
Transforms are JSON objects. Up to this point, transforms have been shown as flows of building blocks to help illustrate basic transform ideas. However, at the simplest level, a transform looks like this:
```json
{
@@ -167,71 +127,51 @@ the simplest level, a transform looks like this:
There are three main components of a transform object:
1. `name` - This specifies the name of the transform. It refers to a transform in the IdentityNow API or User Interface (UI). Only provide a name on the root-level transform. Nested transforms do not have names.
2. `type` - This specifies the transform type, which ultimately determines the transform's behavior.
3. `attributes` - This specifies any attributes or configurations for controlling how the transform works. As mentioned earlier in [Configuring Transform Behavior](#configuring-transform-behavior), each transform type has different sets of attributes available.
## Template Engine
Seaspray ships with the Apache Velocity template engine that allows a transform to reference, transform, and render values passed into the transform context. Every string value in a Seaspray transform can contain templated text and will run through the template engine.
### Example
In the following string, the text `$firstName` is replaced by the value of firstName in the template context. The same goes for `$lastName`.
If $firstName=John and $lastName=Doe, the string `$firstName.$lastName` would render as `John.Doe`.
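Velocity itself runs inside IdentityNow, but the substitution behavior can be approximated with Python's `string.Template` as a rough analogy (this is not the actual Velocity engine):

```python
from string import Template

# Template context, analogous to the variables Seaspray passes to Velocity
context = {"firstName": "John", "lastName": "Doe"}

# Each $variable in the string is replaced with its value from the context
rendered = Template("$firstName.$lastName").substitute(context)
print(rendered)  # → John.Doe
```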
### Identity Attribute Context
The following variables are available to the Apache Velocity template engine when a transform is used to source an identity attribute.
| Variable | Type | Description |
| --- | --- | --- |
| identity | sailpoint.object.Identity | This is the identity the attribute promotion is performed on. |
| oldValue | Object | This is the attribute's previous value. |
| attributeDefinition | sailpoint.object.ObjectAttribute | This is the definition of the attribute being promoted. |
### Account Profile Context
The following variables are available to the Apache Velocity template engine when a transform is used in an account profile.
| Variable | Type | Description |
| --- | --- | --- |
| field | sailpoint.object.Field | This is the field definition backing the account profile attribute. |
| identity | sailpoint.object.Identity | This is the identity the account profile is generating for. |
| application | sailpoint.object.Application | This is the application backing the source that owns the account profile. |
| current | Object | This is the attribute's current value. |
## Implicit vs Explicit Input
A special configuration attribute available to all transforms is input. If the input attribute is not specified, this is referred to as implicit input, and the system determines the input based on what is configured. If the input attribute is specified, then this is referred to as explicit input, and the system's input is ignored in favor of whatever the transform explicitly specifies. A good way to understand this concept is to walk through an example. Imagine that IdentityNow has the following:
- An account on Source 1 with department set to `Services`.
- An account on Source 2 with department set to `Engineering`.
The following two examples explain how a transform with an implicit or explicit input would work with those sources.
### Implicit Input
An identity profile is configured the following way:
![Configuring Transform Behavior 2](./img/configuring_transform_behavior_2.png)
As an example, the "Lowercase Department" transform being used is written the following way:
```json
{
  "name": "Lowercase Department",
  "type": "lower",
  "attributes": {}
}
```
Notice that `attributes` has no `input`. This is an implicit input example. The transform uses the input provided by the attribute you mapped on the identity profile.
In this example, the transform would produce `services` when the source is aggregated because Source 1 provides a department of `Services`, which the transform then lowercases.
### Explicit Input
As an example, the `Lowercase Department` transform has been changed the following way:

```json
{
  "name": "Lowercase Department",
  "type": "lower",
  "attributes": {
    "input": {
      "type": "accountAttribute",
      "attributes": {
        "sourceName": "Source 2",
        "attributeName": "department"
      }
    }
  }
}
```
Notice that there is an `input` in the attributes. This is an explicit input example. The transform uses the value Source 2 provides for the `department` attribute, ignoring your configuration in the identity profile.
In this example, the transform would produce `engineering` because Source 2 provides a department of `Engineering`, which the transform then lowercases. Though the system still provides an implicit input of Source 1's department attribute, the transform ignores it and uses the explicit input specified as Source 2's department attribute.
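The implicit-versus-explicit rule can be summarized in a few lines of pseudologic. The sketch below treats `input` as an already-evaluated value for simplicity; in practice it is a nested transform that IdentityNow evaluates first:

```python
def resolve_input(transform: dict, system_input):
    """Explicit 'input' in the attributes wins over the system-provided value."""
    explicit = transform.get("attributes", {}).get("input")
    return explicit if explicit is not None else system_input

# Implicit: no 'input' attribute, so the mapped value (Source 1) is used
print(resolve_input({"type": "lower", "attributes": {}}, "Services"))  # → Services

# Explicit: the 'input' attribute overrides the mapped value
print(resolve_input({"type": "lower", "attributes": {"input": "Engineering"}}, "Services"))  # → Engineering
```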
:::tip
This is also an example of a nested transform.

:::
### Account Transforms
Account attribute transforms are configured on the account create profiles. They determine the templates for new accounts created during provisioning events.
#### Configuration
These can be configured in IdentityNow by going to **Admin** > **Sources** > (A Source) > **Accounts** (tab) > **Create Profile**. These can also be configured with IdentityNow REST APIs.
You can select the installed, available transforms from this interface. Alternatively, you can add more complex transforms with REST APIs.
In the following example, we can call the [Create Provisioning Policy API](/idn/api/v3/create-provisioning-policy) to create a full name field using the first and last name identity attributes.
```bash
curl --location --request POST 'https://{tenant}.api.identitynow.com/v3/sources/{source_id}/provisioning-policies' \
--header 'Authorization: Bearer {token}' \
--header 'Content-Type: application/json' \
--data-raw '{
  ...
}'
```
For more information on the IdentityNow REST API endpoints used to manage transform objects, refer to [IdentityNow Transform REST APIs](/idn/api/v3/transforms).
:::tip
For details about authentication against REST APIs, refer to the [authentication docs](../../../api/authentication.md).
:::
#### Testing Transforms on Account Create
To test a transform for an account create profile, you must generate a new account creation provisioning event. This involves granting access to an identity who does not already have an account on this source; an account is created as a byproduct of the access assignment. This can be initiated with an access request or even a role assignment.
#### Applying Transforms on Account Create
Once the transforms are saved to the account profile, they are automatically applied for any subsequent provisioning events.
## Testing Transforms
**Testing Transforms in Identity Profile Mappings**
To test a transform for identity data, go to **Identities** > **Identity Profiles** and select **Mappings**. Select the transform to map one of your identity attributes, select **Save**, and preview your identity data.
**Testing Transforms for Account Attributes**
To test a transform for account data, you must provision a new account on that source. For example, you can create an access request that would result in a new account on that source, or you can assign a new role.
## Transform Best Practices
- **Designing Complex Transforms** - Start with small transform _building blocks_ and add to them. It can be helpful to diagram out the inputs and outputs if you are using many transforms.
- **JSON Editor** - Because transforms are JSON objects, it is recommended that you use a good JSON editor. Atom, Sublime Text, and Visual Studio Code work well because they have JSON formatting and plugins that can do JSON validation, completion, formatting, and folding. This is very useful for large, complex JSON objects.
- **Leverage Examples** - Many implementations use similar sets of transforms, and a lot of common solutions can be found in examples. Feel free to share your own transform examples on the [Developer Community forum](https://developer.sailpoint.com/discuss)!
- **Same Problem, Multiple Solutions** - There can be multiple ways to solve the same problem, but use the solution that makes the most sense to your implementation and is easiest to administer and understand.
- **Encapsulate Repetition** - If you are copying and pasting the same transforms over and over, it can be useful to make a transform a standalone transform and make other transforms reference it by using the reference type.
- **Plan for Bad Data** - Data will not always be perfect, so plan for data failures and try to ensure transforms still produce workable results in case data is missing, malformed, or there are incorrect values.
## Transforms vs. Rules
Sometimes it can be difficult to decide when to implement a transform and when to implement a rule. Both transforms and rules can calculate values for identity or account attributes.
Despite their functional similarity, transforms and rules have very different implementations. Transforms are JSON-based configurations, editable with IdentityNow's transform REST APIs. Rules are implemented with code (typically BeanShell, a Java-like syntax), so they must follow the [IdentityNow Rule Guidelines](https://community.sailpoint.com/docs/DOC-12122), and they must be reviewed by SailPoint and installed into the tenant. Rules, however, can do things that transforms cannot in some cases.
Because transforms have easier and more accessible implementations, they are generally recommended. With transforms, any IdentityNow administrator can view, create, edit, and delete transforms directly with REST API without SailPoint involvement.
If something cannot be done with a transform, then consider using a rule. When you are transitioning from a transform to a rule, you must take special consideration when you decide where the rule executes.
- If you are calculating identity attributes, you can use [Identity Attribute rules](https://community.sailpoint.com/docs/DOC-12616) instead of identity transforms.
- If you are calculating account attributes (during provisioning), you can use [Attribute Generator rules](https://community.sailpoint.com/docs/DOC-12645) instead of account transforms.
- All rules you build must follow the [IdentityNow Rule Guidelines](https://community.sailpoint.com/docs/DOC-12122).
If you use a rule, make note of it for administrative purposes. The best practice is to check these types of artifacts into some sort of version control (e.g., GitHub) for your records.

---
title: Account Attribute
pagination_label: Account Attribute
sidebar_label: Account Attribute
sidebar_class_name: accountAttribute
keywords: ['transforms', 'operations', 'account', 'attribute']
description: Look up an account for a particular source on an identity.
slug: /docs/transforms/operations/account-attribute
tags: ['Transforms', 'Transform Operations']
---
## Overview
Use the account attribute transform to look up an account for a particular source on an identity and return a specific attribute value from that account.
:::note Other Considerations
- If there are multiple accounts, then IdentityNow by default takes the value from the oldest account (based on the account created date). You can configure this behavior by specifying `accountSortAttribute` and `accountSortDescending` attributes.
- If there are multiple accounts and the oldest account has a null attribute value, by default IdentityNow moves to the next account that can have a value (if there are any). You can override this behavior with the `accountReturnFirstLink` property.
- You can filter the multiple accounts returned based on the data they contain so that you can target specific accounts. This is often used to target accounts that are "active" instead of those that are not.
:::
## Transform Structure
The account attribute transform's configuration can take several attributes as inputs. The following example shows a fully configured transform with all required and optional attributes.
```json
{
  "attributes": {
    "sourceName": "Workday",
    "attributeName": "DEPARTMENT",
    "accountSortAttribute": "created",
    "accountSortDescending": false,
    "accountReturnFirstLink": false,
    "accountFilter": "!(nativeIdentity.startsWith(\"*DELETED*\"))",
    "accountPropertyFilter": "(groups.containsAll({\"Admin\"}) || location == \"Austin\")",
    "requiresPeriodicRefresh": false
  },
  "type": "accountAttribute",
  "name": "Account Attribute Transform"
}
```
- **Required Attributes**
- **type** - This must always be set to `accountAttribute`.
- **name** - This is a required attribute for all transforms. It represents the name of the transform as it will appear in the UI's dropdown menus.
- **sourceName** - This is a reference to the source to search for accounts.
- This is a reference by a source's display name attribute (e.g., Active Directory). If the display name is updated, this reference must also be updated.
- As an alternative, you can provide an `applicationId` or `applicationName` instead.
- `applicationId` - This is a reference by a source's external GUID/ID attribute (e.g., "ff8081815a8b3925015a8b6adac901ff").
- `applicationName` - This is a reference by a source's immutable name attribute (e.g., "Active Directory \[source\]").
- **attributeName** - The name of the attribute on the account to return. This matches the name of the account attribute name visible in the user interface or on the source schema.
- **Optional Attributes**
- **requiresPeriodicRefresh** - This is a `true` or `false` value indicating whether the transform logic must be reevaluated every evening as part of the identity refresh process.
- **accountSortAttribute** - This configuration's value is a string name of the attribute to use when determining the ordering of returned accounts when there are multiple entries.
- Accounts can be sorted by any schema attribute.
- If no sort attribute is defined, the transform will default to "created" (ascending sort on created date - oldest object wins).
- **accountSortDescending** - This configuration's value is a boolean (true/false). It controls the sort order when there are multiple accounts.
    - If not defined, the transform will default to false (ascending order).
- **accountReturnFirstLink** - This configuration's value is a boolean (true/false). It controls which account to source a value from for an attribute. If this flag is set to true, the transform returns the value from the first account in the list, even if it is null. If this flag is set to false, the transform returns the first non-null value.
- If the configuration's value is not defined, the transform will default to the false setting.
- **accountFilter** - This expression queries the database to narrow search results. This configuration's value is a `sailpoint.object.Filter` expression for searching against the database. The default filter always includes the source and identity, and any subsequent expressions are combined in an AND operation with the existing search criteria.
- Only certain searchable attributes are available:
- `nativeIdentity` - This is the account ID.
- `displayName` - This is the account name.
    - `entitlements` - This boolean value determines whether the account has entitlements.
- **accountPropertyFilter** - Use this expression to search and filter accounts in memory. This configuration's value is a `sailpoint.object.Filter` expression for searching against the returned resultset.
- All account attributes are available for filtering because this operation is performed in memory.
- Examples:
- `(status != "terminated")`
- `(department == "Engineering")`
- `(groups.containsAll({"Admin"}) || location == "Austin")`
- **input** - This is an optional attribute that can explicitly define the input data passed into the transform logic. If no input is provided, the transform takes its input from the source and attribute combination configured with the UI.
## Examples
HR systems can have multiple HR records for a person, especially in rehire and conversion scenarios. In order to get the correct identity data, you must get data from only the latest active accounts.
- `sourceName` is "Corporate HR" because that is the name of the authoritative source.
- `attributeName` is "HIREDATE" because that is the attribute you want from the authoritative source.
- `accountSortAttribute` is "created" because you want to sort on created dates in case there are multiple accounts.
- `accountSortDescending` is true because you want to sort based on the newest or latest account from the HR system.
- `accountReturnFirstLink` is true because you want to return the value of HIREDATE, even if it is null.
- `accountPropertyFilter` is filtering the accounts to look at only active accounts. Terminated accounts will not appear (assuming there are no data issues).
:::info
You cannot use `accountFilter` here because WORKER_STATUS\_\_c is not a searchable attribute, but `accountPropertyFilter` works instead.
:::
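Assembled from the bullet points above, the transform could look like the following sketch. The transform name, the `Corporate HR` source name, and the `WORKER_STATUS__c` filter value are illustrative and would need to match your tenant's data:

```json
{
  "name": "Latest Active Hire Date",
  "type": "accountAttribute",
  "attributes": {
    "sourceName": "Corporate HR",
    "attributeName": "HIREDATE",
    "accountSortAttribute": "created",
    "accountSortDescending": true,
    "accountReturnFirstLink": true,
    "accountPropertyFilter": "(WORKER_STATUS__c == \"Active\")"
  }
}
```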
<p>&nbsp;</p>
When you are mapping values like a username, focus on primary accounts from a particular source or accounts that are not service accounts.
- `sourceName` is "Active Directory" because that is the source this data is coming from.
- `attributeName` is "sAMAccountName" because you are mapping the username of the user.
- `accountFilter` is an expression filtering the accounts to make sure they are not service accounts.
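Those bullets could translate into a transform like this sketch; the `svc-` account-ID prefix in the filter is a hypothetical service-account naming convention, not a standard value:

```json
{
  "name": "AD Username",
  "type": "accountAttribute",
  "attributes": {
    "sourceName": "Active Directory",
    "attributeName": "sAMAccountName",
    "accountFilter": "!(nativeIdentity.startsWith(\"svc-\"))"
  }
}
```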
---
title: Base64 Decode
pagination_label: Base64 Decode
sidebar_label: Base64 Decode
sidebar_class_name: base64Decode
keywords: ['transforms', 'operations', 'base64', 'decode']
description: Render base64 data in its original binary format.
slug: /docs/transforms/operations/base64-decode
tags: ['Transforms', 'Transform Operations']
---
## Overview
Base64 is mostly used to encode binary data like images so that the data can be represented as a string within HTML, email, or other text documents. Base64 is also commonly used to encode data that can be unsupported or damaged during transfer, storage, or output.
The base64 decode transform allows you to take incoming data that has been encoded using a Base64-based text encoding scheme and render the data in its original binary format.
:::note Other Considerations
- If the input to the Base64 decode transform is null, the transform returns a null value.
:::
## Transform Structure

The base64 decode transform only requires the `type` and `name` attributes:

```json
{
  "name": "Base64 Decode Transform",
  "type": "base64Decode",
  "attributes": {}
}
```
- **Required Attributes**
- **type** - This must be set to `base64Decode`.
- **name** - This is a required attribute for all transforms. It represents the name of the transform as it will appear in the UI's dropdown menus.
- **Optional Attributes**
- **requiresPeriodicRefresh** - This `true` or `false` value indicates whether the transform logic should be reevaluated every evening as part of the identity refresh process.
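The decoding itself is standard Base64, so the transform's behavior can be reproduced outside IdentityNow with any Base64 library. For example, in Python (illustrative only):

```python
import base64

# A Base64-encoded input string and its decoded binary form rendered as UTF-8
encoded = "SGVsbG8gd29ybGQ="
decoded = base64.b64decode(encoded).decode("utf-8")
print(decoded)  # → Hello world
```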
## Examples
This example takes the incoming attribute configured in the identity profile attribute UI, assumes it is a Base64 encoded string, and returns its original binary value.
Input:
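A minimal base64 decode transform body, consistent with the description above (the `name` value here is illustrative, not part of the product's example):

```json
{
  "attributes": {},
  "type": "base64Decode",
  "name": "Base64 Decode Transform"
}
```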
Output:
<p>&nbsp;</p>
This example takes the incoming attribute configured in the identity profile attribute UI, assumes it is a Base64 encoded string, and returns it as an image.
Input:
---
title: Base64 Encode
pagination_label: Base64 Encode
sidebar_label: Base64 Encode
sidebar_class_name: base64Encode
keywords: ['transforms', 'operations', 'base64', 'encode']
description: Encode data with a Base64-based text encoding scheme.
slug: /docs/transforms/operations/base64-encode
tags: ['Transforms', 'Transform Operations']
---
## Overview
Base64 is mostly used to encode binary data like images so that the data can be represented as a string within HTML, email, or other text documents. Base64 is also commonly used to encode data that might otherwise be unsupported or damaged during transfer, storage, or output.
The base64 encode transform allows you to take incoming data and encode it using a Base64-based text encoding scheme. The output of the transform is a string comprising 64 basic ASCII characters.
:::note Other Considerations
- If the input to the Base64 encode transform is null, the transform returns a null value.
:::
The Base64 encode transform only requires the `type` and `name` attributes:
- **Required Attributes**
- **type** - This must be set to `base64Encode`.
- **name** - This is a required attribute for all transforms. It represents the name of the transform as it will appear in the UI's dropdown menus.
- **Optional Attributes**
- **requiresPeriodicRefresh** - This `true` or `false` value indicates whether the transform logic should be reevaluated every evening as part of the identity refresh process.
## Examples
This example takes the incoming attribute configured in the identity profile attribute UI and returns it as a Base64 encoded string.
Input:
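A minimal base64 encode transform body, consistent with the description above (the `name` value here is illustrative, not part of the product's example):

```json
{
  "attributes": {},
  "type": "base64Encode",
  "name": "Base64 Encode Transform"
}
```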
Output: `MTIzNA==`
<p>&nbsp;</p>
This example takes a binary image as an input and returns it as a Base64 encoded string.
Input:
---
title: Concatenation
pagination_label: Concatenation
sidebar_label: Concatenation
sidebar_class_name: concatenation
keywords: ['transforms', 'operations', 'concatenation']
description: Join two or more string values into a combined output.
slug: /docs/transforms/operations/concatenation
tags: ['Transforms', 'Transform Operations']
---
## Overview
Use the concatenation transform to join two or more string values into a combined output. The concatenation transform often joins elements such as first and last name into a full display name, but it has many other uses.
## Transform Structure
The concatenation transform requires an array of `values` to join. These values can be static strings or the return values of other nested transforms.
```json
{
  "attributes": {
    "values": ["firstValue", "secondValue"]
  },
  "type": "concat",
  "name": "Concat Transform"
}
```
- **Required Attributes**
- **type** - This must always be set to `concat`.
- **name** - This is a required attribute for all transforms. It represents the name of the transform as it will appear in the UI's dropdown menus.
- **values** - This is the array of items to join.
- **Optional Attributes**
- **requiresPeriodicRefresh** - This `true` or `false` value indicates whether the transform logic should be reevaluated every evening as part of the identity refresh process.
## Examples
This transform joins the user's first name from the "HR Source" with the user's last name, adds a space between them, and then appends a parenthetical note that the user is a contractor.
**Transform Request Body**:
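A request body along these lines would implement the described concatenation (the source and attribute names are illustrative assumptions, not necessarily those from the original example):

```json
{
  "attributes": {
    "values": [
      {
        "type": "accountAttribute",
        "attributes": {
          "sourceName": "HR Source",
          "attributeName": "firstName"
        }
      },
      " ",
      {
        "type": "accountAttribute",
        "attributes": {
          "sourceName": "HR Source",
          "attributeName": "lastName"
        }
      },
      " (Contractor)"
    ]
  },
  "type": "concat",
  "name": "Concat Transform"
}
```

With a first name of "John" and a last name of "Smith", this would produce "John Smith (Contractor)".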
<p>&nbsp;</p>
This transform joins the user's job title with the user's job code value and adds a hyphen between those two pieces of data.
**Transform Request Body**:
---
title: Conditional
pagination_label: Conditional
sidebar_label: Conditional
sidebar_class_name: conditional
keywords: ['transforms', 'operations', 'conditional']
description: Output different values depending on simple conditional logic.
slug: /docs/transforms/operations/conditional
tags: ['Transforms', 'Transform Operations']
---
## Overview
Use the conditional transform to output different values depending on simple conditional logic. This is a convenience transform: the same capability can be implemented with a "static" transform, but this transform offers greater simplicity and null-safe error checking.
:::note Other Considerations
- The two operands within the transform cannot be null; if they are, an IllegalArgumentException is thrown.
- The `expression` attribute must be "eq," or the transform will throw an IllegalArgumentException.
- All attribute string values are case-sensitive, so differently cased strings (e.g., "engineering" and "Engineering") will not return as matched.
:::
## Transform Structure
In addition to the `type` and `name` attributes, the conditional transform requires an `expression`, a `positiveCondition`, and a `negativeCondition`. If the expression evaluates to false, the transform returns the negative condition; otherwise it returns the positive condition.
```json
{
  "attributes": {
    "expression": "ValueA eq ValueB",
    "positiveCondition": "true",
    "negativeCondition": "false"
  },
  "type": "conditional",
  "name": "Conditional Transform"
}
```
- **Required Attributes**
- **type** - This must always be set to `conditional`.
- **name** - This is a required attribute for all transforms. It represents the name of the transform as it will appear in the UI's dropdown menus.
- **expression** - This comparison statement follows the structure of `ValueA eq ValueB` where `ValueA` and `ValueB` are static strings or outputs of other transforms; the `eq` operator is the only valid comparison.
- **positiveCondition** - This is the output of the transform if the expression evaluates to true.
- **negativeCondition** - This is the output of the transform if the expression evaluates to false.
- **Optional Attributes**
- **requiresPeriodicRefresh** - This `true` or `false` value indicates whether the transform logic should be reevaluated every evening as part of the identity refresh process.
## Examples
This transform takes the user's HR-defined department attribute and compares it to the value of "Science". If this is the user's department, the transform returns `true`. Otherwise, it returns `false`.
**Transform Request Body**:
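A request body along these lines would implement the described comparison (the variable name, source name, and attribute name are illustrative assumptions, not necessarily those from the original example):

```json
{
  "attributes": {
    "department": {
      "type": "accountAttribute",
      "attributes": {
        "sourceName": "HR Source",
        "attributeName": "department"
      }
    },
    "expression": "$department eq Science",
    "positiveCondition": "true",
    "negativeCondition": "false"
  },
  "type": "conditional",
  "name": "Conditional Transform"
}
```

Here the nested transform's output is assigned to the `department` variable, which the `expression` references with a `$` prefix.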
<p>&nbsp;</p>
This transform extends the previous one by returning the output of another Seaspray transform depending on the result of the expression. You can assign Seaspray transforms' outputs to variables and then reference them within the `positiveCondition` and `negativeCondition` attributes.
**Transform Request Body**:
---
title: Date Compare
pagination_label: Date Compare
sidebar_label: Date Compare
sidebar_class_name: dateCompare
keywords: ['transforms', 'operations', 'date', 'compare']
description: Compare two dates and return a calculated value.
slug: /docs/transforms/operations/date-compare
tags: ['Transforms', 'Transform Operations']
---
## Overview
Use the date compare transform to compare two dates and, depending on the comparison result, return one value if one date is after the other or return a different value if it is before the other. A common use case is to calculate lifecycle states (e.g., the user is "active" if the current date is greater than or equal to the user's hire date, etc.).
:::note Other Considerations
- In addition to explicit date values, the transform recognizes the "now" keyword that always evaluates to the exact date and time when the transform is evaluating.
- All dates **must** be in [ISO8601 format](https://en.wikipedia.org/wiki/ISO_8601) in order for the date compare transform to evaluate properly.
:::
## Transform Structure
The date compare transform takes as an input the two dates to compare, denoted as `firstDate` and `secondDate`. The transform also requires an `operator` designation so it knows which condition to evaluate for. Lastly, the transform requires both a `positiveCondition` and a `negativeCondition`: the former is returned if the comparison evaluates to `true`; the latter is returned if it evaluates to `false`.
```json
{
  "attributes": {
    "firstDate": "now",
    "secondDate": "2022-12-31T00:00:00.000Z",
    "operator": "LT",
    "positiveCondition": "before",
    "negativeCondition": "after"
  },
  "type": "dateCompare",
  "name": "Date Compare Transform"
}
```
- **Required Attributes**
- **type** - This must always be set to `dateCompare`.
- **name** - This is a required attribute for all transforms. It represents the name of the transform as it will appear in the UI's dropdown menus.
- **firstDate** - This is the first date to consider (i.e., the date that would be on the left hand side of the comparison operation).
- **secondDate** - This is the second date to consider (i.e., the date that would be on the right hand side of the comparison operation).
- **operator** - This is the comparison to perform. The following values are valid:
- **LT**: Strictly less than: firstDate < secondDate
- **LTE**: Less than or equal to: firstDate <= secondDate
- **GT**: Strictly greater than: firstDate > secondDate
- **GTE**: Greater than or equal to: firstDate >= secondDate
- **positiveCondition** - This is the value to return if the comparison is true.
- **negativeCondition** - This is the value to return if the comparison is false.
- **Optional Attributes**
- **requiresPeriodicRefresh** - This `true` or `false` value indicates whether the transform logic should be reevaluated every evening as part of the identity refresh process. The default value is `false`.
## Examples
This transform accomplishes a basic lifecycle state calculation. It compares the user's termination date with his/her HR record. If the current datetime (denoted by `now`) is less than that date, the transform returns "active". If the current datetime is greater than that date, the transform returns "terminated".
**Transform Request Body**:
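A request body along these lines would implement the lifecycle check (the source and attribute names are illustrative assumptions; the termination date attribute must already be in ISO8601 format):

```json
{
  "attributes": {
    "firstDate": "now",
    "secondDate": {
      "type": "accountAttribute",
      "attributes": {
        "sourceName": "HR Source",
        "attributeName": "terminationDate"
      }
    },
    "operator": "LT",
    "positiveCondition": "active",
    "negativeCondition": "terminated"
  },
  "type": "dateCompare",
  "name": "Date Compare Transform"
}
```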
<p>&nbsp;</p>
This transform compares the user's hire date to a fixed date in the past. If the user was hired prior to January 1, 1996, the transform returns "legacy". If the user was hired later than January 1, 1996, it returns "regular".
**Transform Request Body**:
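A request body along these lines would implement the hire-date check (the source and attribute names are illustrative assumptions; the hire date attribute must already be in ISO8601 format):

```json
{
  "attributes": {
    "firstDate": {
      "type": "accountAttribute",
      "attributes": {
        "sourceName": "HR Source",
        "attributeName": "hireDate"
      }
    },
    "secondDate": "1996-01-01T00:00:00.000Z",
    "operator": "LT",
    "positiveCondition": "legacy",
    "negativeCondition": "regular"
  },
  "type": "dateCompare",
  "name": "Date Compare Transform"
}
```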