Class AmazonMachineLearningClient

    • Field Detail

      • configFactory

        protected static final ClientConfigurationFactory configFactory
        Client configuration factory providing ClientConfigurations tailored to this client
    • Constructor Detail

      • AmazonMachineLearningClient

        public AmazonMachineLearningClient()
        Constructs a new client to invoke service methods on Amazon Machine Learning. A credentials provider chain will be used that searches for credentials in this order:
        • Environment Variables - AWS_ACCESS_KEY_ID and AWS_SECRET_KEY
        • Java System Properties - aws.accessKeyId and aws.secretKey
        • Instance profile credentials delivered through the Amazon EC2 metadata service

        All service calls made using this new client object are blocking, and will not return until the service call completes.

        See Also:
        DefaultAWSCredentialsProviderChain
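
        For example, a minimal sketch of constructing a client with this no-argument constructor and issuing a blocking call. The GetMLModelRequest setter and the model ID shown are assumptions used only for illustration:

            import com.amazonaws.services.machinelearning.AmazonMachineLearningClient;
            import com.amazonaws.services.machinelearning.model.GetMLModelRequest;
            import com.amazonaws.services.machinelearning.model.GetMLModelResult;

            public class DefaultClientExample {
                public static void main(String[] args) {
                    // Credentials are resolved through the provider chain described above
                    // (environment variables, system properties, then the EC2 instance profile).
                    AmazonMachineLearningClient client = new AmazonMachineLearningClient();

                    // Every call on this client blocks until the service responds.
                    GetMLModelResult model = client.getMLModel(
                            new GetMLModelRequest().withMLModelId("ml-ExampleModelId")); // hypothetical ID
                    System.out.println("Model status: " + model.getStatus());
                }
            }
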
      • AmazonMachineLearningClient

        public AmazonMachineLearningClient​(ClientConfiguration clientConfiguration)
        Constructs a new client to invoke service methods on Amazon Machine Learning. A credentials provider chain will be used that searches for credentials in this order:
        • Environment Variables - AWS_ACCESS_KEY_ID and AWS_SECRET_KEY
        • Java System Properties - aws.accessKeyId and aws.secretKey
        • Instance profile credentials delivered through the Amazon EC2 metadata service

        All service calls made using this new client object are blocking, and will not return until the service call completes.

        Parameters:
        clientConfiguration - The client configuration options controlling how this client connects to Amazon Machine Learning (for example, proxy settings and retry counts).
        See Also:
        DefaultAWSCredentialsProviderChain
      • AmazonMachineLearningClient

        public AmazonMachineLearningClient​(AWSCredentials awsCredentials)
        Constructs a new client to invoke service methods on Amazon Machine Learning using the specified AWS account credentials.

        All service calls made using this new client object are blocking, and will not return until the service call completes.

        Parameters:
        awsCredentials - The AWS credentials (access key ID and secret key) to use when authenticating with AWS services.
      • AmazonMachineLearningClient

        public AmazonMachineLearningClient​(AWSCredentials awsCredentials,
                                           ClientConfiguration clientConfiguration)
        Constructs a new client to invoke service methods on Amazon Machine Learning using the specified AWS account credentials and client configuration options.

        All service calls made using this new client object are blocking, and will not return until the service call completes.

        Parameters:
        awsCredentials - The AWS credentials (access key ID and secret key) to use when authenticating with AWS services.
        clientConfiguration - The client configuration options controlling how this client connects to Amazon Machine Learning (for example, proxy settings and retry counts).
      • AmazonMachineLearningClient

        public AmazonMachineLearningClient​(AWSCredentialsProvider awsCredentialsProvider)
        Constructs a new client to invoke service methods on Amazon Machine Learning using the specified AWS account credentials provider.

        All service calls made using this new client object are blocking, and will not return until the service call completes.

        Parameters:
        awsCredentialsProvider - The AWS credentials provider which will provide credentials to authenticate requests with AWS services.
      • AmazonMachineLearningClient

        public AmazonMachineLearningClient​(AWSCredentialsProvider awsCredentialsProvider,
                                           ClientConfiguration clientConfiguration)
        Constructs a new client to invoke service methods on Amazon Machine Learning using the specified AWS account credentials provider and client configuration options.

        All service calls made using this new client object are blocking, and will not return until the service call completes.

        Parameters:
        awsCredentialsProvider - The AWS credentials provider which will provide credentials to authenticate requests with AWS services.
        clientConfiguration - The client configuration options controlling how this client connects to Amazon Machine Learning (for example, proxy settings and retry counts).
      • AmazonMachineLearningClient

        public AmazonMachineLearningClient​(AWSCredentialsProvider awsCredentialsProvider,
                                           ClientConfiguration clientConfiguration,
                                           RequestMetricCollector requestMetricCollector)
        Constructs a new client to invoke service methods on Amazon Machine Learning using the specified AWS account credentials provider, client configuration options, and request metric collector.

        All service calls made using this new client object are blocking, and will not return until the service call completes.

        Parameters:
        awsCredentialsProvider - The AWS credentials provider which will provide credentials to authenticate requests with AWS services.
        clientConfiguration - The client configuration options controlling how this client connects to Amazon Machine Learning (for example, proxy settings and retry counts).
        requestMetricCollector - Optional request metric collector.
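
        As an illustration, a sketch of wiring an explicit credentials provider and a ClientConfiguration into this constructor. The ClientConfiguration setters, proxy host, and timeout values are assumptions chosen for the example (ClientConfiguration and the credentials classes live in com.amazonaws and com.amazonaws.auth):

            // Placeholder connection tuning; adjust to your environment.
            ClientConfiguration config = new ClientConfiguration()
                    .withProxyHost("proxy.example.com")   // hypothetical proxy
                    .withProxyPort(8080)
                    .withMaxErrorRetry(5)
                    .withConnectionTimeout(10_000);       // milliseconds

            AWSCredentialsProvider provider = new DefaultAWSCredentialsProviderChain();

            AmazonMachineLearningClient client =
                    new AmazonMachineLearningClient(provider, config);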
    • Method Detail

      • createBatchPrediction

        public CreateBatchPredictionResult createBatchPrediction​(CreateBatchPredictionRequest createBatchPredictionRequest)

        Generates predictions for a group of observations. The observations to process exist in one or more data files referenced by a DataSource. This operation creates a new BatchPrediction, and uses an MLModel and the data files referenced by the DataSource as information sources.

        CreateBatchPrediction is an asynchronous operation. In response to CreateBatchPrediction, Amazon Machine Learning (Amazon ML) immediately returns and sets the BatchPrediction status to PENDING. After the BatchPrediction completes, Amazon ML sets the status to COMPLETED.

        You can poll for status updates by using the GetBatchPrediction operation and checking the Status parameter of the result. After the COMPLETED status appears, the results are available in the location specified by the OutputUri parameter.
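
        For example, a snippet-style sketch of submitting a batch prediction and then polling GetBatchPrediction until the job leaves the PENDING and INPROGRESS states. The entity IDs, the S3 location, and the with* setters are assumptions based on the request model classes; client is an AmazonMachineLearning client constructed as shown above, and error handling is omitted:

            client.createBatchPrediction(new CreateBatchPredictionRequest()
                    .withBatchPredictionId("bp-example")                  // hypothetical IDs and location
                    .withBatchPredictionName("Example batch prediction")
                    .withMLModelId("ml-example")
                    .withBatchPredictionDataSourceId("ds-example")
                    .withOutputUri("s3://example-bucket/batch-output/"));

            // Poll for completion; results appear under OutputUri once the status is COMPLETED.
            String status;
            do {
                Thread.sleep(30_000);
                status = client.getBatchPrediction(
                        new GetBatchPredictionRequest().withBatchPredictionId("bp-example")).getStatus();
            } while ("PENDING".equals(status) || "INPROGRESS".equals(status));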

        Specified by:
        createBatchPrediction in interface AmazonMachineLearning
        Parameters:
        createBatchPredictionRequest -
        Returns:
        Result of the CreateBatchPrediction operation returned by the service.
        Throws:
        InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
        InternalServerException - An error on the server occurred when trying to process a request.
        IdempotentParameterMismatchException - A second request to use or change an object was not allowed. This can result from retrying a request using a parameter that was not present in the original request.
      • createDataSourceFromRedshift

        public CreateDataSourceFromRedshiftResult createDataSourceFromRedshift​(CreateDataSourceFromRedshiftRequest createDataSourceFromRedshiftRequest)

        Creates a DataSource from Amazon Redshift. A DataSource references data that can be used to perform CreateMLModel, CreateEvaluation, or CreateBatchPrediction operations.

        CreateDataSourceFromRedshift is an asynchronous operation. In response to CreateDataSourceFromRedshift, Amazon Machine Learning (Amazon ML) immediately returns and sets the DataSource status to PENDING. After the DataSource is created and ready for use, Amazon ML sets the Status parameter to COMPLETED. A DataSource in the COMPLETED or PENDING state can be used only to perform CreateMLModel, CreateEvaluation, or CreateBatchPrediction operations.

        If Amazon ML cannot accept the input source, it sets the Status parameter to FAILED and includes an error message in the Message attribute of the GetDataSource operation response.

        The observations should exist in the database hosted on an Amazon Redshift cluster and should be specified by a SelectSqlQuery query. Amazon ML executes an Unload command in Amazon Redshift to transfer the result set of the SelectSqlQuery query to S3StagingLocation.

        After the DataSource is created, it's ready for use in evaluations and batch predictions. If you plan to use the DataSource to train an MLModel, the DataSource requires one more item: a recipe. A recipe describes the observation variables that participate in training an MLModel and how each input variable will be used in training. Will the variable be included or excluded from training? Will the variable be manipulated, for example, combined with another variable or split apart into word combinations? The recipe provides answers to these questions. For more information, see the Amazon Machine Learning Developer Guide.
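
        As an illustration, a sketch of assembling the Redshift request. The RedshiftDataSpec, RedshiftDatabase, and RedshiftDatabaseCredentials setters, as well as every identifier, ARN, and S3 location shown, are assumptions made for the example; client is a constructed AmazonMachineLearning client:

            CreateDataSourceFromRedshiftRequest request = new CreateDataSourceFromRedshiftRequest()
                    .withDataSourceId("ds-redshift-example")            // hypothetical IDs and locations
                    .withDataSourceName("Redshift observations")
                    .withRoleARN("arn:aws:iam::123456789012:role/example-ml-role")
                    .withComputeStatistics(true)                        // required if the DataSource will train an MLModel
                    .withDataSpec(new RedshiftDataSpec()
                            .withDatabaseInformation(new RedshiftDatabase()
                                    .withClusterIdentifier("example-cluster")
                                    .withDatabaseName("dev"))
                            .withDatabaseCredentials(new RedshiftDatabaseCredentials()
                                    .withUsername("ml_user")
                                    .withPassword("example-password"))
                            .withSelectSqlQuery("SELECT * FROM observations")
                            .withS3StagingLocation("s3://example-bucket/staging/")
                            .withDataSchemaUri("s3://example-bucket/observations.schema"));

            client.createDataSourceFromRedshift(request);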

        Specified by:
        createDataSourceFromRedshift in interface AmazonMachineLearning
        Parameters:
        createDataSourceFromRedshiftRequest -
        Returns:
        Result of the CreateDataSourceFromRedshift operation returned by the service.
        Throws:
        InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
        InternalServerException - An error on the server occurred when trying to process a request.
        IdempotentParameterMismatchException - A second request to use or change an object was not allowed. This can result from retrying a request using a parameter that was not present in the original request.
      • createDataSourceFromS3

        public CreateDataSourceFromS3Result createDataSourceFromS3​(CreateDataSourceFromS3Request createDataSourceFromS3Request)

        Creates a DataSource object. A DataSource references data that can be used to perform CreateMLModel, CreateEvaluation, or CreateBatchPrediction operations.

        CreateDataSourceFromS3 is an asynchronous operation. In response to CreateDataSourceFromS3, Amazon Machine Learning (Amazon ML) immediately returns and sets the DataSource status to PENDING. After the DataSource is created and ready for use, Amazon ML sets the Status parameter to COMPLETED. A DataSource in the COMPLETED or PENDING state can be used only to perform CreateMLModel, CreateEvaluation, or CreateBatchPrediction operations.

        If Amazon ML cannot accept the input source, it sets the Status parameter to FAILED and includes an error message in the Message attribute of the GetDataSource operation response.

        The observation data used in a DataSource should be ready to use; that is, it should have a consistent structure, and missing data values should be kept to a minimum. The observation data must reside in one or more CSV files in an Amazon Simple Storage Service (Amazon S3) bucket, along with a schema that describes the data items by name and type. The same schema must be used for all of the data files referenced by the DataSource.

        After the DataSource has been created, it's ready to use in evaluations and batch predictions. If you plan to use the DataSource to train an MLModel, the DataSource requires another item: a recipe. A recipe describes the observation variables that participate in training an MLModel and how each input variable will be used in training. Will the variable be included or excluded from training? Will the variable be manipulated, for example, combined with another variable, or split apart into word combinations? The recipe provides answers to these questions. For more information, see the Amazon Machine Learning Developer Guide.
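
        For example, a sketch of the Amazon S3 variant. The S3DataSpec setters, bucket paths, and IDs are assumptions used for illustration; client is a constructed AmazonMachineLearning client:

            client.createDataSourceFromS3(new CreateDataSourceFromS3Request()
                    .withDataSourceId("ds-s3-example")                  // hypothetical IDs and locations
                    .withDataSourceName("S3 observations")
                    .withComputeStatistics(true)                        // required if the DataSource will train an MLModel
                    .withDataSpec(new S3DataSpec()
                            .withDataLocationS3("s3://example-bucket/observations.csv")
                            .withDataSchemaLocationS3("s3://example-bucket/observations.csv.schema")));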

        Specified by:
        createDataSourceFromS3 in interface AmazonMachineLearning
        Parameters:
        createDataSourceFromS3Request -
        Returns:
        Result of the CreateDataSourceFromS3 operation returned by the service.
        Throws:
        InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
        InternalServerException - An error on the server occurred when trying to process a request.
        IdempotentParameterMismatchException - A second request to use or change an object was not allowed. This can result from retrying a request using a parameter that was not present in the original request.
      • createEvaluation

        public CreateEvaluationResult createEvaluation​(CreateEvaluationRequest createEvaluationRequest)

        Creates a new Evaluation of an MLModel. An MLModel is evaluated on a set of observations associated with a DataSource. Like a DataSource for an MLModel, the DataSource for an Evaluation contains values for the Target Variable. The Evaluation compares the predicted result for each observation to the actual outcome and provides a summary so that you know how well the MLModel performs on the test data. Evaluation generates a relevant performance metric, such as BinaryAUC, RegressionRMSE, or MulticlassAvgFScore, based on the corresponding MLModelType: BINARY, REGRESSION, or MULTICLASS.

        CreateEvaluation is an asynchronous operation. In response to CreateEvaluation, Amazon Machine Learning (Amazon ML) immediately returns and sets the evaluation status to PENDING. After the Evaluation is created and ready for use, Amazon ML sets the status to COMPLETED.

        You can use the GetEvaluation operation to check progress of the evaluation during the creation operation.
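
        For instance, a sketch that evaluates a trained MLModel against a held-out DataSource and then checks progress with GetEvaluation. The setter names and IDs are assumptions; client is a constructed AmazonMachineLearning client:

            client.createEvaluation(new CreateEvaluationRequest()
                    .withEvaluationId("ev-example")                     // hypothetical IDs
                    .withEvaluationName("Example evaluation")
                    .withMLModelId("ml-example")
                    .withEvaluationDataSourceId("ds-heldout-example"));

            // Performance metrics become available once the status is COMPLETED.
            GetEvaluationResult evaluation = client.getEvaluation(
                    new GetEvaluationRequest().withEvaluationId("ev-example"));
            System.out.println(evaluation.getStatus());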

        Specified by:
        createEvaluation in interface AmazonMachineLearning
        Parameters:
        createEvaluationRequest -
        Returns:
        Result of the CreateEvaluation operation returned by the service.
        Throws:
        InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
        InternalServerException - An error on the server occurred when trying to process a request.
        IdempotentParameterMismatchException - A second request to use or change an object was not allowed. This can result from retrying a request using a parameter that was not present in the original request.
      • createMLModel

        public CreateMLModelResult createMLModel​(CreateMLModelRequest createMLModelRequest)

        Creates a new MLModel using the data files and the recipe as information sources.

        An MLModel is nearly immutable. Users can only update the MLModelName and the ScoreThreshold in an MLModel without creating a new MLModel.

        CreateMLModel is an asynchronous operation. In response to CreateMLModel, Amazon Machine Learning (Amazon ML) immediately returns and sets the MLModel status to PENDING. After the MLModel is created and ready for use, Amazon ML sets the status to COMPLETED.

        You can use the GetMLModel operation to check progress of the MLModel during the creation operation.

        CreateMLModel requires a DataSource with computed statistics, which can be created by setting ComputeStatistics to true in CreateDataSourceFromRDS, CreateDataSourceFromS3, or CreateDataSourceFromRedshift operations.
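
        For example, a sketch of training a binary model from a DataSource that was created with ComputeStatistics set to true. The setter names, the MLModelType enum usage, and the IDs are assumptions:

            client.createMLModel(new CreateMLModelRequest()
                    .withMLModelId("ml-example")                        // hypothetical IDs
                    .withMLModelName("Example binary model")
                    .withMLModelType(MLModelType.BINARY)                // BINARY, REGRESSION, or MULTICLASS
                    .withTrainingDataSourceId("ds-s3-example"));        // DataSource created with ComputeStatistics = true

            // Check training progress; the model is usable once the status is COMPLETED.
            String status = client.getMLModel(
                    new GetMLModelRequest().withMLModelId("ml-example")).getStatus();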

        Specified by:
        createMLModel in interface AmazonMachineLearning
        Parameters:
        createMLModelRequest -
        Returns:
        Result of the CreateMLModel operation returned by the service.
        Throws:
        InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
        InternalServerException - An error on the server occurred when trying to process a request.
        IdempotentParameterMismatchException - A second request to use or change an object was not allowed. This can result from retrying a request using a parameter that was not present in the original request.
      • deleteBatchPrediction

        public DeleteBatchPredictionResult deleteBatchPrediction​(DeleteBatchPredictionRequest deleteBatchPredictionRequest)

        Assigns the DELETED status to a BatchPrediction, rendering it unusable.

        After using the DeleteBatchPrediction operation, you can use the GetBatchPrediction operation to verify that the status of the BatchPrediction changed to DELETED.

        Caution: The result of the DeleteBatchPrediction operation is irreversible.
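
        For example, a sketch of deleting a BatchPrediction and confirming the DELETED status; the same pattern applies to DeleteDataSource, DeleteEvaluation, and DeleteMLModel. The ID shown is hypothetical:

            client.deleteBatchPrediction(
                    new DeleteBatchPredictionRequest().withBatchPredictionId("bp-example"));

            // Verify the transition; it cannot be undone.
            String status = client.getBatchPrediction(
                    new GetBatchPredictionRequest().withBatchPredictionId("bp-example")).getStatus();
            System.out.println("Status after delete: " + status);      // expected: DELETED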

        Specified by:
        deleteBatchPrediction in interface AmazonMachineLearning
        Parameters:
        deleteBatchPredictionRequest -
        Returns:
        Result of the DeleteBatchPrediction operation returned by the service.
        Throws:
        InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
        ResourceNotFoundException - A specified resource cannot be located.
        InternalServerException - An error on the server occurred when trying to process a request.
      • deleteDataSource

        public DeleteDataSourceResult deleteDataSource​(DeleteDataSourceRequest deleteDataSourceRequest)

        Assigns the DELETED status to a DataSource, rendering it unusable.

        After using the DeleteDataSource operation, you can use the GetDataSource operation to verify that the status of the DataSource changed to DELETED.

        Caution: The results of the DeleteDataSource operation are irreversible.

        Specified by:
        deleteDataSource in interface AmazonMachineLearning
        Parameters:
        deleteDataSourceRequest -
        Returns:
        Result of the DeleteDataSource operation returned by the service.
        Throws:
        InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
        ResourceNotFoundException - A specified resource cannot be located.
        InternalServerException - An error on the server occurred when trying to process a request.
      • deleteEvaluation

        public DeleteEvaluationResult deleteEvaluation​(DeleteEvaluationRequest deleteEvaluationRequest)

        Assigns the DELETED status to an Evaluation, rendering it unusable.

        After invoking the DeleteEvaluation operation, you can use the GetEvaluation operation to verify that the status of the Evaluation changed to DELETED.

        Caution: The results of the DeleteEvaluation operation are irreversible.

        Specified by:
        deleteEvaluation in interface AmazonMachineLearning
        Parameters:
        deleteEvaluationRequest -
        Returns:
        Result of the DeleteEvaluation operation returned by the service.
        Throws:
        InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
        ResourceNotFoundException - A specified resource cannot be located.
        InternalServerException - An error on the server occurred when trying to process a request.
      • deleteMLModel

        public DeleteMLModelResult deleteMLModel​(DeleteMLModelRequest deleteMLModelRequest)

        Assigns the DELETED status to an MLModel, rendering it unusable.

        After using the DeleteMLModel operation, you can use the GetMLModel operation to verify that the status of the MLModel changed to DELETED.

        Caution: The result of the DeleteMLModel operation is irreversible.

        Specified by:
        deleteMLModel in interface AmazonMachineLearning
        Parameters:
        deleteMLModelRequest -
        Returns:
        Result of the DeleteMLModel operation returned by the service.
        Throws:
        InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
        ResourceNotFoundException - A specified resource cannot be located.
        InternalServerException - An error on the server occurred when trying to process a request.
      • getDataSource

        public GetDataSourceResult getDataSource​(GetDataSourceRequest getDataSourceRequest)

        Returns a DataSource that includes metadata and data file information, as well as the current status of the DataSource.

        GetDataSource provides results in normal or verbose format. The verbose format adds the schema description and the list of files pointed to by the DataSource to the normal format.
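
        For example, a sketch of requesting the verbose format, which adds the schema and file list to the response. The withVerbose setter, the accessors, and the ID are assumptions:

            GetDataSourceResult dataSource = client.getDataSource(new GetDataSourceRequest()
                    .withDataSourceId("ds-s3-example")                  // hypothetical ID
                    .withVerbose(true));                                // include schema and file information

            System.out.println(dataSource.getStatus());
            System.out.println(dataSource.getDataSourceSchema());      // populated only in verbose format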

        Specified by:
        getDataSource in interface AmazonMachineLearning
        Parameters:
        getDataSourceRequest -
        Returns:
        Result of the GetDataSource operation returned by the service.
        Throws:
        InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
        ResourceNotFoundException - A specified resource cannot be located.
        InternalServerException - An error on the server occurred when trying to process a request.
      • getMLModel

        public GetMLModelResult getMLModel​(GetMLModelRequest getMLModelRequest)

        Returns an MLModel that includes detailed metadata, data source information, and the current status of the MLModel.

        GetMLModel provides results in normal or verbose format.

        Specified by:
        getMLModel in interface AmazonMachineLearning
        Parameters:
        getMLModelRequest -
        Returns:
        Result of the GetMLModel operation returned by the service.
        Throws:
        InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
        ResourceNotFoundException - A specified resource cannot be located.
        InternalServerException - An error on the server occurred when trying to process a request.
      • predict

        public PredictResult predict​(PredictRequest predictRequest)

        Generates a prediction for the observation using the specified MLModel.

        Note: Not all response parameters will be populated. Whether a response parameter is populated depends on the type of model requested.
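
        For example, a sketch of a real-time prediction. The record keys, the endpoint URL, and the with* setters are assumptions; in practice the endpoint comes from the MLModel's real-time endpoint information, and java.util.Map and HashMap are assumed to be imported:

            Map<String, String> record = new HashMap<>();
            record.put("feature1", "42");                               // hypothetical feature names and values
            record.put("feature2", "red");

            PredictResult result = client.predict(new PredictRequest()
                    .withMLModelId("ml-example")                        // hypothetical ID
                    .withRecord(record)
                    .withPredictEndpoint("https://realtime.machinelearning.us-east-1.amazonaws.com"));

            // Which prediction fields are populated depends on the MLModelType (label, value, or scores).
            System.out.println(result.getPrediction());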

        Specified by:
        predict in interface AmazonMachineLearning
        Parameters:
        predictRequest -
        Returns:
        Result of the Predict operation returned by the service.
        Throws:
        InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
        ResourceNotFoundException - A specified resource cannot be located.
        LimitExceededException - The subscriber exceeded the maximum number of operations. This exception can occur when listing objects such as DataSource.
        InternalServerException - An error on the server occurred when trying to process a request.
        PredictorNotMountedException - The exception is thrown when a predict request is made to an unmounted MLModel.
      • getCachedResponseMetadata

        public ResponseMetadata getCachedResponseMetadata​(AmazonWebServiceRequest request)
        Returns additional metadata for a previously executed successful request, typically used for debugging issues where a service isn't acting as expected. This data isn't considered part of the result data returned by an operation, so it's available through this separate, diagnostic interface.

        Response metadata is only cached for a limited period of time, so if you need to access this extra diagnostic information for an executed request, you should use this method to retrieve it as soon as possible after executing the request.
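
        For example, a sketch of capturing the AWS request ID immediately after a call, while the metadata is still cached. The getRequestId accessor and the model ID are assumptions:

            GetMLModelRequest request = new GetMLModelRequest().withMLModelId("ml-example"); // hypothetical ID
            client.getMLModel(request);

            // Retrieve the diagnostic metadata promptly; the cache entry may be evicted later.
            ResponseMetadata metadata = client.getCachedResponseMetadata(request);
            if (metadata != null) {
                System.out.println("AWS request ID: " + metadata.getRequestId());
            }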

        Specified by:
        getCachedResponseMetadata in interface AmazonMachineLearning
        Parameters:
        request - The originally executed request
        Returns:
        The response metadata for the specified request, or null if none is available.