Class ClusterLoadAssignment.Policy.Builder

    • Field Detail

      • bitField0_

        private int bitField0_
      • overprovisioningFactor_

        private com.google.protobuf.UInt32Value overprovisioningFactor_
      • overprovisioningFactorBuilder_

        private com.google.protobuf.SingleFieldBuilder<com.google.protobuf.UInt32Value,​com.google.protobuf.UInt32Value.Builder,​com.google.protobuf.UInt32ValueOrBuilder> overprovisioningFactorBuilder_
      • endpointStaleAfter_

        private com.google.protobuf.Duration endpointStaleAfter_
      • endpointStaleAfterBuilder_

        private com.google.protobuf.SingleFieldBuilder<com.google.protobuf.Duration,​com.google.protobuf.Duration.Builder,​com.google.protobuf.DurationOrBuilder> endpointStaleAfterBuilder_
      • weightedPriorityHealth_

        private boolean weightedPriorityHealth_
    • Constructor Detail

      • Builder

        private Builder()
      • Builder

        private Builder​(com.google.protobuf.AbstractMessage.BuilderParent parent)
    • Method Detail

      • getDescriptor

        public static final com.google.protobuf.Descriptors.Descriptor getDescriptor()
      • internalGetFieldAccessorTable

        protected com.google.protobuf.GeneratedMessage.FieldAccessorTable internalGetFieldAccessorTable()
        Specified by:
        internalGetFieldAccessorTable in class com.google.protobuf.GeneratedMessage.Builder<ClusterLoadAssignment.Policy.Builder>
      • maybeForceBuilderInitialization

        private void maybeForceBuilderInitialization()
      • getDescriptorForType

        public com.google.protobuf.Descriptors.Descriptor getDescriptorForType()
        Specified by:
        getDescriptorForType in interface com.google.protobuf.Message.Builder
        Specified by:
        getDescriptorForType in interface com.google.protobuf.MessageOrBuilder
        Overrides:
        getDescriptorForType in class com.google.protobuf.GeneratedMessage.Builder<ClusterLoadAssignment.Policy.Builder>
      • getDefaultInstanceForType

        public ClusterLoadAssignment.Policy getDefaultInstanceForType()
        Specified by:
        getDefaultInstanceForType in interface com.google.protobuf.MessageLiteOrBuilder
        Specified by:
        getDefaultInstanceForType in interface com.google.protobuf.MessageOrBuilder
      • build

        public ClusterLoadAssignment.Policy build()
        Specified by:
        build in interface com.google.protobuf.Message.Builder
        Specified by:
        build in interface com.google.protobuf.MessageLite.Builder
      • buildPartial

        public ClusterLoadAssignment.Policy buildPartial()
        Specified by:
        buildPartial in interface com.google.protobuf.Message.Builder
        Specified by:
        buildPartial in interface com.google.protobuf.MessageLite.Builder
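        As with any protobuf-java builder, build() verifies isInitialized() before
        returning, while buildPartial() skips that check. A minimal usage sketch
        (the field value is purely illustrative):

        .. code-block:: java

        ClusterLoadAssignment.Policy policy =
            ClusterLoadAssignment.Policy.newBuilder()
                .setWeightedPriorityHealth(true)  // any of the setters documented below
                .build();                         // or buildPartial() to skip the initialization check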
      • isInitialized

        public final boolean isInitialized()
        Specified by:
        isInitialized in interface com.google.protobuf.MessageLiteOrBuilder
        Overrides:
        isInitialized in class com.google.protobuf.GeneratedMessage.Builder<ClusterLoadAssignment.Policy.Builder>
      • mergeFrom

        public ClusterLoadAssignment.Policy.Builder mergeFrom​(com.google.protobuf.CodedInputStream input,
                                                              com.google.protobuf.ExtensionRegistryLite extensionRegistry)
                                                       throws java.io.IOException
        Specified by:
        mergeFrom in interface com.google.protobuf.Message.Builder
        Specified by:
        mergeFrom in interface com.google.protobuf.MessageLite.Builder
        Overrides:
        mergeFrom in class com.google.protobuf.AbstractMessage.Builder<ClusterLoadAssignment.Policy.Builder>
        Throws:
        java.io.IOException
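        A hedged round-trip sketch using only standard protobuf-java types
        (CodedInputStream and ExtensionRegistryLite come from com.google.protobuf;
        no Envoy-specific behavior is assumed):

        .. code-block:: java

        import com.google.protobuf.CodedInputStream;
        import com.google.protobuf.ExtensionRegistryLite;

        // Serialize a Policy, then merge the wire bytes back into a fresh builder.
        byte[] bytes = ClusterLoadAssignment.Policy.newBuilder()
            .setWeightedPriorityHealth(true)
            .build()
            .toByteArray();

        ClusterLoadAssignment.Policy parsed = ClusterLoadAssignment.Policy.newBuilder()
            .mergeFrom(CodedInputStream.newInstance(bytes),
                       ExtensionRegistryLite.getEmptyRegistry())  // declared to throw IOException
            .build();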
      • ensureDropOverloadsIsMutable

        private void ensureDropOverloadsIsMutable()
      • getDropOverloadsList

        public java.util.List<ClusterLoadAssignment.Policy.DropOverload> getDropOverloadsList()
         Action to trim the overall incoming traffic to protect the upstream
         hosts. This action allows protection in case the hosts are unable to
         recover from an outage, autoscale, or handle the incoming traffic
         volume for any reason.
        
         At the client, each category is applied one after the other to generate
         the 'actual' drop percentage on all outgoing traffic. For example:
        
         .. code-block:: json
        
         { "drop_overloads": [
           { "category": "throttle", "drop_percentage": 60 },
           { "category": "lb", "drop_percentage": 50 }
         ]}
        
         The actual drop percentages applied to the traffic at the clients will be:
         "throttle"_drop = 60%
         "lb"_drop = 20%  // 50% of the remaining 'actual' load, which is 40%.
         actual_outgoing_load = 20% // remaining after applying all categories.
         A numeric sketch of this compounding follows this entry.
        
         Envoy supports only one element and will NACK if more than one element is present.
         Other xDS-capable data planes will not necessarily have this limitation.
        
         In Envoy, this ``drop_overloads`` config can be overridden by the runtime key
         ``load_balancing_policy.drop_overload_limit``, which can be set to any integer
         between 0 and 100: 0 means drop 0%, 100 means drop 100%. When both the
         ``drop_overloads`` config and the runtime key are set, the smaller of the two wins.
         
        repeated .envoy.config.endpoint.v3.ClusterLoadAssignment.Policy.DropOverload drop_overloads = 2;
        Specified by:
        getDropOverloadsList in interface ClusterLoadAssignment.PolicyOrBuilder
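        To make the compounding explicit, here is a self-contained numeric sketch of
        the drop arithmetic described above (plain Java math using the percentages
        from the example; this is not Envoy API code):

        .. code-block:: java

        public class DropOverloadMath {
            public static void main(String[] args) {
                double remaining = 1.0;           // 100% of outgoing traffic
                double[] drops = {0.60, 0.50};    // "throttle", then "lb"
                for (double d : drops) {
                    remaining *= (1.0 - d);       // each category applies to what is left
                }
                System.out.printf("actual_outgoing_load = %.0f%%%n", remaining * 100);  // 20%
            }
        }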
      • getDropOverloadsCount

        public int getDropOverloadsCount()
         See getDropOverloadsList() above for the full ``drop_overloads`` field documentation.
        repeated .envoy.config.endpoint.v3.ClusterLoadAssignment.Policy.DropOverload drop_overloads = 2;
        Specified by:
        getDropOverloadsCount in interface ClusterLoadAssignment.PolicyOrBuilder
      • getDropOverloads

        public ClusterLoadAssignment.Policy.DropOverload getDropOverloads​(int index)
         See getDropOverloadsList() above for the full ``drop_overloads`` field documentation.
        repeated .envoy.config.endpoint.v3.ClusterLoadAssignment.Policy.DropOverload drop_overloads = 2;
        Specified by:
        getDropOverloads in interface ClusterLoadAssignment.PolicyOrBuilder
      • setDropOverloads

        public ClusterLoadAssignment.Policy.Builder setDropOverloads​(int index,
                                                                     ClusterLoadAssignment.Policy.DropOverload value)
         See getDropOverloadsList() above for the full ``drop_overloads`` field documentation.
        repeated .envoy.config.endpoint.v3.ClusterLoadAssignment.Policy.DropOverload drop_overloads = 2;
      • setDropOverloads

        public ClusterLoadAssignment.Policy.Builder setDropOverloads​(int index,
                                                                     ClusterLoadAssignment.Policy.DropOverload.Builder builderForValue)
         See getDropOverloadsList() above for the full ``drop_overloads`` field documentation.
        repeated .envoy.config.endpoint.v3.ClusterLoadAssignment.Policy.DropOverload drop_overloads = 2;
      • addDropOverloads

        public ClusterLoadAssignment.Policy.Builder addDropOverloads​(ClusterLoadAssignment.Policy.DropOverload value)
         See getDropOverloadsList() above for the full ``drop_overloads`` field documentation.
        repeated .envoy.config.endpoint.v3.ClusterLoadAssignment.Policy.DropOverload drop_overloads = 2;
      • addDropOverloads

        public ClusterLoadAssignment.Policy.Builder addDropOverloads​(int index,
                                                                     ClusterLoadAssignment.Policy.DropOverload value)
         See getDropOverloadsList() above for the full ``drop_overloads`` field documentation.
        repeated .envoy.config.endpoint.v3.ClusterLoadAssignment.Policy.DropOverload drop_overloads = 2;
      • addDropOverloads

        public ClusterLoadAssignment.Policy.Builder addDropOverloads​(ClusterLoadAssignment.Policy.DropOverload.Builder builderForValue)
         See getDropOverloadsList() above for the full ``drop_overloads`` field documentation.
        repeated .envoy.config.endpoint.v3.ClusterLoadAssignment.Policy.DropOverload drop_overloads = 2;
      • addDropOverloads

        public ClusterLoadAssignment.Policy.Builder addDropOverloads​(int index,
                                                                     ClusterLoadAssignment.Policy.DropOverload.Builder builderForValue)
         See getDropOverloadsList() above for the full ``drop_overloads`` field documentation.
        repeated .envoy.config.endpoint.v3.ClusterLoadAssignment.Policy.DropOverload drop_overloads = 2;
      • addAllDropOverloads

        public ClusterLoadAssignment.Policy.Builder addAllDropOverloads​(java.lang.Iterable<? extends ClusterLoadAssignment.Policy.DropOverload> values)
         See getDropOverloadsList() above for the full ``drop_overloads`` field documentation.
        repeated .envoy.config.endpoint.v3.ClusterLoadAssignment.Policy.DropOverload drop_overloads = 2;
      • clearDropOverloads

        public ClusterLoadAssignment.Policy.Builder clearDropOverloads()
         See getDropOverloadsList() above for the full ``drop_overloads`` field documentation.
        repeated .envoy.config.endpoint.v3.ClusterLoadAssignment.Policy.DropOverload drop_overloads = 2;
      • removeDropOverloads

        public ClusterLoadAssignment.Policy.Builder removeDropOverloads​(int index)
         See getDropOverloadsList() above for the full ``drop_overloads`` field documentation.
        repeated .envoy.config.endpoint.v3.ClusterLoadAssignment.Policy.DropOverload drop_overloads = 2;
      • getDropOverloadsBuilder

        public ClusterLoadAssignment.Policy.DropOverload.Builder getDropOverloadsBuilder​(int index)
         See getDropOverloadsList() above for the full ``drop_overloads`` field documentation.
        repeated .envoy.config.endpoint.v3.ClusterLoadAssignment.Policy.DropOverload drop_overloads = 2;
      • getDropOverloadsOrBuilder

        public ClusterLoadAssignment.Policy.DropOverloadOrBuilder getDropOverloadsOrBuilder​(int index)
         See getDropOverloadsList() above for the full ``drop_overloads`` field documentation.
        repeated .envoy.config.endpoint.v3.ClusterLoadAssignment.Policy.DropOverload drop_overloads = 2;
        Specified by:
        getDropOverloadsOrBuilder in interface ClusterLoadAssignment.PolicyOrBuilder
      • getDropOverloadsOrBuilderList

        public java.util.List<? extends ClusterLoadAssignment.Policy.DropOverloadOrBuilder> getDropOverloadsOrBuilderList()
         See getDropOverloadsList() above for the full ``drop_overloads`` field documentation.
        repeated .envoy.config.endpoint.v3.ClusterLoadAssignment.Policy.DropOverload drop_overloads = 2;
        Specified by:
        getDropOverloadsOrBuilderList in interface ClusterLoadAssignment.PolicyOrBuilder
      • addDropOverloadsBuilder

        public ClusterLoadAssignment.Policy.DropOverload.Builder addDropOverloadsBuilder()
         See getDropOverloadsList() above for the full ``drop_overloads`` field documentation.
        repeated .envoy.config.endpoint.v3.ClusterLoadAssignment.Policy.DropOverload drop_overloads = 2;
      • addDropOverloadsBuilder

        public ClusterLoadAssignment.Policy.DropOverload.Builder addDropOverloadsBuilder​(int index)
         See getDropOverloadsList() above for the full ``drop_overloads`` field documentation.
        repeated .envoy.config.endpoint.v3.ClusterLoadAssignment.Policy.DropOverload drop_overloads = 2;
      • getDropOverloadsBuilderList

        public java.util.List<ClusterLoadAssignment.Policy.DropOverload.Builder> getDropOverloadsBuilderList()
         See getDropOverloadsList() above for the full ``drop_overloads`` field documentation.
        repeated .envoy.config.endpoint.v3.ClusterLoadAssignment.Policy.DropOverload drop_overloads = 2;
      • hasOverprovisioningFactor

        public boolean hasOverprovisioningFactor()
         Priority levels and localities are considered overprovisioned with this
         factor (expressed as a percentage). This means that a priority level or
         locality is not considered unhealthy until the fraction of healthy hosts,
         multiplied by the overprovisioning factor, drops below 100.
         With the default value of 140 (i.e. a factor of 1.4), Envoy doesn't consider
         a priority level or a locality unhealthy until its percentage of healthy hosts
         drops below 72% (100/140 ≈ 71.4%; see the sketch after this entry). For example:
        
         .. code-block:: json
        
         { "overprovisioning_factor": 100 }
        
         Read more at :ref:`priority levels <arch_overview_load_balancing_priority_levels>` and
         :ref:`localities <arch_overview_load_balancing_locality_weighted_lb>`.
         
        .google.protobuf.UInt32Value overprovisioning_factor = 3 [(.validate.rules) = { ... }
        Specified by:
        hasOverprovisioningFactor in interface ClusterLoadAssignment.PolicyOrBuilder
        Returns:
        Whether the overprovisioningFactor field is set.
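        A minimal sketch of the health arithmetic described above, assuming it mirrors
        Envoy's documented behavior (illustrative math only, not code from this class):

        .. code-block:: java

        int overprovisioningFactor = 140;   // the default, i.e. a factor of 1.4
        double healthyFraction = 0.72;      // 72% of hosts healthy
        double effectiveHealth = Math.min(100.0, healthyFraction * overprovisioningFactor);
        System.out.println("effective health = " + effectiveHealth);  // 100.0 (clamped from 100.8)
        // At or above 100 the priority level is treated as fully healthy; below
        // 100/140 (about 71.4%) healthy hosts, it starts to be considered degraded.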
      • getOverprovisioningFactor

        public com.google.protobuf.UInt32Value getOverprovisioningFactor()
         See hasOverprovisioningFactor() above for the full ``overprovisioning_factor`` field documentation.
        .google.protobuf.UInt32Value overprovisioning_factor = 3 [(.validate.rules) = { ... }
        Specified by:
        getOverprovisioningFactor in interface ClusterLoadAssignment.PolicyOrBuilder
        Returns:
        The overprovisioningFactor.
      • setOverprovisioningFactor

        public ClusterLoadAssignment.Policy.Builder setOverprovisioningFactor​(com.google.protobuf.UInt32Value value)
         See hasOverprovisioningFactor() above for the full ``overprovisioning_factor`` field documentation.
        .google.protobuf.UInt32Value overprovisioning_factor = 3 [(.validate.rules) = { ... }
      • setOverprovisioningFactor

        public ClusterLoadAssignment.Policy.Builder setOverprovisioningFactor​(com.google.protobuf.UInt32Value.Builder builderForValue)
         See hasOverprovisioningFactor() above for the full ``overprovisioning_factor`` field documentation.
        .google.protobuf.UInt32Value overprovisioning_factor = 3 [(.validate.rules) = { ... }
      • mergeOverprovisioningFactor

        public ClusterLoadAssignment.Policy.Builder mergeOverprovisioningFactor​(com.google.protobuf.UInt32Value value)
         See hasOverprovisioningFactor() above for the full ``overprovisioning_factor`` field documentation.
        .google.protobuf.UInt32Value overprovisioning_factor = 3 [(.validate.rules) = { ... }
      • clearOverprovisioningFactor

        public ClusterLoadAssignment.Policy.Builder clearOverprovisioningFactor()
         See hasOverprovisioningFactor() above for the full ``overprovisioning_factor`` field documentation.
        .google.protobuf.UInt32Value overprovisioning_factor = 3 [(.validate.rules) = { ... }
      • getOverprovisioningFactorBuilder

        public com.google.protobuf.UInt32Value.Builder getOverprovisioningFactorBuilder()
         See hasOverprovisioningFactor() above for the full ``overprovisioning_factor`` field documentation.
        .google.protobuf.UInt32Value overprovisioning_factor = 3 [(.validate.rules) = { ... }
      • getOverprovisioningFactorOrBuilder

        public com.google.protobuf.UInt32ValueOrBuilder getOverprovisioningFactorOrBuilder()
         See hasOverprovisioningFactor() above for the full ``overprovisioning_factor`` field documentation.
        .google.protobuf.UInt32Value overprovisioning_factor = 3 [(.validate.rules) = { ... }
        Specified by:
        getOverprovisioningFactorOrBuilder in interface ClusterLoadAssignment.PolicyOrBuilder
      • getOverprovisioningFactorFieldBuilder

        private com.google.protobuf.SingleFieldBuilder<com.google.protobuf.UInt32Value,​com.google.protobuf.UInt32Value.Builder,​com.google.protobuf.UInt32ValueOrBuilder> getOverprovisioningFactorFieldBuilder()
         See hasOverprovisioningFactor() above for the full ``overprovisioning_factor`` field documentation.
        .google.protobuf.UInt32Value overprovisioning_factor = 3 [(.validate.rules) = { ... }
      • hasEndpointStaleAfter

        public boolean hasEndpointStaleAfter()
         The max time for which the endpoints from this assignment can be used.
         If no new assignment is received before this time expires, the endpoints
         are considered stale and should be marked unhealthy.
         Defaults to 0, which means endpoints never go stale. A usage sketch
         follows this entry.
         
        .google.protobuf.Duration endpoint_stale_after = 4 [(.validate.rules) = { ... }
        Specified by:
        hasEndpointStaleAfter in interface ClusterLoadAssignment.PolicyOrBuilder
        Returns:
        Whether the endpointStaleAfter field is set.
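        A minimal sketch of populating this field through the builder;
        com.google.protobuf.Duration is the well-known-types Duration, and the
        five-minute value is purely illustrative:

        .. code-block:: java

        ClusterLoadAssignment.Policy policy =
            ClusterLoadAssignment.Policy.newBuilder()
                .setEndpointStaleAfter(
                    com.google.protobuf.Duration.newBuilder()
                        .setSeconds(300)  // stale after 5 minutes without a new assignment
                        .build())
                .build();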
      • getEndpointStaleAfter

        public com.google.protobuf.Duration getEndpointStaleAfter()
         See hasEndpointStaleAfter() above for the full ``endpoint_stale_after`` field documentation.
        .google.protobuf.Duration endpoint_stale_after = 4 [(.validate.rules) = { ... }
        Specified by:
        getEndpointStaleAfter in interface ClusterLoadAssignment.PolicyOrBuilder
        Returns:
        The endpointStaleAfter.
      • setEndpointStaleAfter

        public ClusterLoadAssignment.Policy.Builder setEndpointStaleAfter​(com.google.protobuf.Duration value)
         See hasEndpointStaleAfter() above for the full ``endpoint_stale_after`` field documentation.
        .google.protobuf.Duration endpoint_stale_after = 4 [(.validate.rules) = { ... }
      • setEndpointStaleAfter

        public ClusterLoadAssignment.Policy.Builder setEndpointStaleAfter​(com.google.protobuf.Duration.Builder builderForValue)
         See hasEndpointStaleAfter() above for the full ``endpoint_stale_after`` field documentation.
        .google.protobuf.Duration endpoint_stale_after = 4 [(.validate.rules) = { ... }
      • mergeEndpointStaleAfter

        public ClusterLoadAssignment.Policy.Builder mergeEndpointStaleAfter​(com.google.protobuf.Duration value)
         See hasEndpointStaleAfter() above for the full ``endpoint_stale_after`` field documentation.
        .google.protobuf.Duration endpoint_stale_after = 4 [(.validate.rules) = { ... }
      • clearEndpointStaleAfter

        public ClusterLoadAssignment.Policy.Builder clearEndpointStaleAfter()
         See hasEndpointStaleAfter() above for the full ``endpoint_stale_after`` field documentation.
        .google.protobuf.Duration endpoint_stale_after = 4 [(.validate.rules) = { ... }
      • getEndpointStaleAfterBuilder

        public com.google.protobuf.Duration.Builder getEndpointStaleAfterBuilder()
         See hasEndpointStaleAfter() above for the full ``endpoint_stale_after`` field documentation.
        .google.protobuf.Duration endpoint_stale_after = 4 [(.validate.rules) = { ... }
      • getEndpointStaleAfterOrBuilder

        public com.google.protobuf.DurationOrBuilder getEndpointStaleAfterOrBuilder()
         See hasEndpointStaleAfter() above for the full ``endpoint_stale_after`` field documentation.
        .google.protobuf.Duration endpoint_stale_after = 4 [(.validate.rules) = { ... }
        Specified by:
        getEndpointStaleAfterOrBuilder in interface ClusterLoadAssignment.PolicyOrBuilder
      • getEndpointStaleAfterFieldBuilder

        private com.google.protobuf.SingleFieldBuilder<com.google.protobuf.Duration,​com.google.protobuf.Duration.Builder,​com.google.protobuf.DurationOrBuilder> getEndpointStaleAfterFieldBuilder()
         See hasEndpointStaleAfter() above for the full ``endpoint_stale_after`` field documentation.
        .google.protobuf.Duration endpoint_stale_after = 4 [(.validate.rules) = { ... }
      • getWeightedPriorityHealth

        public boolean getWeightedPriorityHealth()
         If true, use the :ref:`load balancing weight
         <envoy_v3_api_field_config.endpoint.v3.LbEndpoint.load_balancing_weight>` of healthy and unhealthy
         hosts to determine the health of the priority level. If false, use the number of healthy and
         unhealthy hosts to determine the health of the priority level; in other words, each host is
         assumed to have a weight of 1 for this calculation.
        
         Note: this is not currently implemented for
         :ref:`locality weighted load balancing <arch_overview_load_balancing_locality_weighted_lb>`.
         
        bool weighted_priority_health = 6;
        Specified by:
        getWeightedPriorityHealth in interface ClusterLoadAssignment.PolicyOrBuilder
        Returns:
        The weightedPriorityHealth.
      • setWeightedPriorityHealth

        public ClusterLoadAssignment.Policy.Builder setWeightedPriorityHealth​(boolean value)
         See getWeightedPriorityHealth() above for the full ``weighted_priority_health`` field
         documentation; a usage sketch follows this entry.
        bool weighted_priority_health = 6;
        Parameters:
        value - The weightedPriorityHealth to set.
        Returns:
        This builder for chaining.
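        A one-line usage sketch; because the setter returns this builder, it chains
        with the other setters documented in this class:

        .. code-block:: java

        ClusterLoadAssignment.Policy.Builder builder =
            ClusterLoadAssignment.Policy.newBuilder()
                .setWeightedPriorityHealth(true);  // weight-based rather than count-based priority health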
      • clearWeightedPriorityHealth

        public ClusterLoadAssignment.Policy.Builder clearWeightedPriorityHealth()
         See getWeightedPriorityHealth() above for the full ``weighted_priority_health`` field documentation.
        bool weighted_priority_health = 6;
        Returns:
        This builder for chaining.