Interface ClusterLoadAssignment.PolicyOrBuilder

All Superinterfaces:
com.google.protobuf.MessageLiteOrBuilder, com.google.protobuf.MessageOrBuilder
All Known Implementing Classes:
ClusterLoadAssignment.Policy, ClusterLoadAssignment.Policy.Builder
Enclosing class:
ClusterLoadAssignment

public static interface ClusterLoadAssignment.PolicyOrBuilder extends com.google.protobuf.MessageOrBuilder
  • Method Details

    • getDropOverloadsList

      List<ClusterLoadAssignment.Policy.DropOverload> getDropOverloadsList()
       Action to trim the overall incoming traffic to protect the upstream
       hosts. This action allows protection in case the hosts are unable to
       recover from an outage, to autoscale, or to handle incoming traffic
       volume for any reason.

       At the client, each category is applied one after the other to generate
       the 'actual' drop percentage on all outgoing traffic. For example:

       .. code-block:: json

        {
          "drop_overloads": [
            { "category": "throttle", "drop_percentage": 60 },
            { "category": "lb", "drop_percentage": 50 }
          ]
        }
      
       The actual drop percentages applied to the traffic at the clients will be
       "throttle"_drop = 60%
       "lb"_drop = 20%  // 50% of the remaining 'actual' load, which is 40%.
       actual_outgoing_load = 20% // remaining after applying all categories.
      
       Envoy supports only one element and will NACK if more than one element is present.
       Other xDS-capable data planes will not necessarily have this limitation.
      
       In Envoy, this ``drop_overloads`` config can be overridden by the runtime
       key ``load_balancing_policy.drop_overload_limit``, which can be set to any
       integer between 0 and 100: 0 means drop 0%, and 100 means drop 100%.
       When both the ``drop_overloads`` config and the runtime key are set, the
       smaller of the two values wins.
       
      repeated .envoy.config.endpoint.v3.ClusterLoadAssignment.Policy.DropOverload drop_overloads = 2;
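
      The sequential arithmetic above can be made concrete with a short
      standalone sketch. This is illustrative only, not Envoy or generated
      client code; the ``DropOverload`` record and ``actualDrops`` helper are
      hypothetical names:

      .. code-block:: java

       import java.util.LinkedHashMap;
       import java.util.List;
       import java.util.Map;

       public final class DropOverloadMath {
         record DropOverload(String category, double dropPercentage) {}

         // Each category drops its percentage of the traffic left over from
         // the previous categories, as in the JSON example above.
         static Map<String, Double> actualDrops(List<DropOverload> overloads) {
           Map<String, Double> actual = new LinkedHashMap<>();
           double remaining = 100.0; // percent of traffic still flowing
           for (DropOverload o : overloads) {
             double drop = remaining * o.dropPercentage() / 100.0;
             actual.put(o.category(), drop);
             remaining -= drop;
           }
           actual.put("actual_outgoing_load", remaining);
           return actual;
         }

         public static void main(String[] args) {
           System.out.println(actualDrops(List.of(
               new DropOverload("throttle", 60),
               new DropOverload("lb", 50))));
           // prints {throttle=60.0, lb=20.0, actual_outgoing_load=20.0}
         }
       }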
    • getDropOverloads

      ClusterLoadAssignment.Policy.DropOverload getDropOverloads(int index)
       Action to trim the overall incoming traffic to protect the upstream
       hosts. This action allows protection in case the hosts are unable to
       recover from an outage, to autoscale, or to handle incoming traffic
       volume for any reason.

       At the client, each category is applied one after the other to generate
       the 'actual' drop percentage on all outgoing traffic. For example:

       .. code-block:: json

        {
          "drop_overloads": [
            { "category": "throttle", "drop_percentage": 60 },
            { "category": "lb", "drop_percentage": 50 }
          ]
        }
      
       The actual drop percentages applied to the traffic at the clients will be
       "throttle"_drop = 60%
       "lb"_drop = 20%  // 50% of the remaining 'actual' load, which is 40%.
       actual_outgoing_load = 20% // remaining after applying all categories.
      
       Envoy supports only one element and will NACK if more than one element is present.
       Other xDS-capable data planes will not necessarily have this limitation.
      
       In Envoy, this ``drop_overloads`` config can be overridden by the runtime
       key ``load_balancing_policy.drop_overload_limit``, which can be set to any
       integer between 0 and 100: 0 means drop 0%, and 100 means drop 100%.
       When both the ``drop_overloads`` config and the runtime key are set, the
       smaller of the two values wins.
       
      repeated .envoy.config.endpoint.v3.ClusterLoadAssignment.Policy.DropOverload drop_overloads = 2;
    • getDropOverloadsCount

      int getDropOverloadsCount()
       Action to trim the overall incoming traffic to protect the upstream
       hosts. This action allows protection in case the hosts are unable to
       recover from an outage, to autoscale, or to handle incoming traffic
       volume for any reason.

       At the client, each category is applied one after the other to generate
       the 'actual' drop percentage on all outgoing traffic. For example:

       .. code-block:: json

        {
          "drop_overloads": [
            { "category": "throttle", "drop_percentage": 60 },
            { "category": "lb", "drop_percentage": 50 }
          ]
        }
      
       The actual drop percentages applied to the traffic at the clients will be
       "throttle"_drop = 60%
       "lb"_drop = 20%  // 50% of the remaining 'actual' load, which is 40%.
       actual_outgoing_load = 20% // remaining after applying all categories.
      
       Envoy supports only one element and will NACK if more than one element is present.
       Other xDS-capable data planes will not necessarily have this limitation.
      
       In Envoy, this ``drop_overloads`` config can be overridden by the runtime
       key ``load_balancing_policy.drop_overload_limit``, which can be set to any
       integer between 0 and 100: 0 means drop 0%, and 100 means drop 100%.
       When both the ``drop_overloads`` config and the runtime key are set, the
       smaller of the two values wins.
       
      repeated .envoy.config.endpoint.v3.ClusterLoadAssignment.Policy.DropOverload drop_overloads = 2;
    • getDropOverloadsOrBuilderList

      List<? extends ClusterLoadAssignment.Policy.DropOverloadOrBuilder> getDropOverloadsOrBuilderList()
       Action to trim the overall incoming traffic to protect the upstream
       hosts. This action allows protection in case the hosts are unable to
       recover from an outage, to autoscale, or to handle incoming traffic
       volume for any reason.

       At the client, each category is applied one after the other to generate
       the 'actual' drop percentage on all outgoing traffic. For example:

       .. code-block:: json

        {
          "drop_overloads": [
            { "category": "throttle", "drop_percentage": 60 },
            { "category": "lb", "drop_percentage": 50 }
          ]
        }
      
       The actual drop percentages applied to the traffic at the clients will be
       "throttle"_drop = 60%
       "lb"_drop = 20%  // 50% of the remaining 'actual' load, which is 40%.
       actual_outgoing_load = 20% // remaining after applying all categories.
      
       Envoy supports only one element and will NACK if more than one element is present.
       Other xDS-capable data planes will not necessarily have this limitation.
      
       In Envoy, this ``drop_overloads`` config can be overridden by the runtime
       key ``load_balancing_policy.drop_overload_limit``, which can be set to any
       integer between 0 and 100: 0 means drop 0%, and 100 means drop 100%.
       When both the ``drop_overloads`` config and the runtime key are set, the
       smaller of the two values wins.
       
      repeated .envoy.config.endpoint.v3.ClusterLoadAssignment.Policy.DropOverload drop_overloads = 2;
    • getDropOverloadsOrBuilder

      ClusterLoadAssignment.Policy.DropOverloadOrBuilder getDropOverloadsOrBuilder(int index)
       Action to trim the overall incoming traffic to protect the upstream
       hosts. This action allows protection in case the hosts are unable to
       recover from an outage, to autoscale, or to handle incoming traffic
       volume for any reason.

       At the client, each category is applied one after the other to generate
       the 'actual' drop percentage on all outgoing traffic. For example:

       .. code-block:: json

        {
          "drop_overloads": [
            { "category": "throttle", "drop_percentage": 60 },
            { "category": "lb", "drop_percentage": 50 }
          ]
        }
      
       The actual drop percentages applied to the traffic at the clients will be
       "throttle"_drop = 60%
       "lb"_drop = 20%  // 50% of the remaining 'actual' load, which is 40%.
       actual_outgoing_load = 20% // remaining after applying all categories.
      
       Envoy supports only one element and will NACK if more than one element is present.
       Other xDS-capable data planes will not necessarily have this limitation.
      
       In Envoy, this ``drop_overloads`` config can be overridden by the runtime
       key ``load_balancing_policy.drop_overload_limit``, which can be set to any
       integer between 0 and 100: 0 means drop 0%, and 100 means drop 100%.
       When both the ``drop_overloads`` config and the runtime key are set, the
       smaller of the two values wins.
       
      repeated .envoy.config.endpoint.v3.ClusterLoadAssignment.Policy.DropOverload drop_overloads = 2;
    • hasOverprovisioningFactor

      boolean hasOverprovisioningFactor()
       Priority levels and localities are considered overprovisioned with this
       factor (in percentage). This means that we don't consider a priority
       level or locality unhealthy until the fraction of healthy hosts
       multiplied by the overprovisioning factor drops below 100.
       With the default value of 140 (i.e., a factor of 1.4), Envoy doesn't
       consider a priority level or a locality unhealthy until its percentage
       of healthy hosts drops below approximately 72%. For example:
      
       .. code-block:: json
      
       { "overprovisioning_factor": 100 }
      
       Read more at :ref:`priority levels <arch_overview_load_balancing_priority_levels>` and
       :ref:`localities <arch_overview_load_balancing_locality_weighted_lb>`.
       
      .google.protobuf.UInt32Value overprovisioning_factor = 3 [(.validate.rules) = { ... }];
      Returns:
      Whether the overprovisioningFactor field is set.
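
      The threshold implied by the factor is easy to compute: availability
      stays clamped at 100 until healthy% * factor / 100 falls below 100, so
      the default of 140 degrades below 100 * 100 / 140 ≈ 71.4% healthy hosts.
      A minimal sketch of that arithmetic (the helper name is hypothetical;
      Envoy's real computation lives in its C++ load balancer):

      .. code-block:: java

       public final class OverprovisioningMath {
         // A priority level or locality counts as fully available until
         // healthy% * factor / 100 drops below 100.
         static double effectiveAvailability(double healthyPercent, long factor) {
           return Math.min(100.0, healthyPercent * factor / 100.0);
         }

         public static void main(String[] args) {
           System.out.println(effectiveAvailability(72, 140)); // 100.8 -> 100.0 (clamped)
           System.out.println(effectiveAvailability(70, 140)); // 98.0 (degraded)
         }
       }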
    • getOverprovisioningFactor

      com.google.protobuf.UInt32Value getOverprovisioningFactor()
       Priority levels and localities are considered overprovisioned with this
       factor (in percentage). This means that we don't consider a priority
       level or locality unhealthy until the fraction of healthy hosts
       multiplied by the overprovisioning factor drops below 100.
       With the default value of 140 (i.e., a factor of 1.4), Envoy doesn't
       consider a priority level or a locality unhealthy until its percentage
       of healthy hosts drops below approximately 72%. For example:
      
       .. code-block:: json
      
       { "overprovisioning_factor": 100 }
      
       Read more at :ref:`priority levels <arch_overview_load_balancing_priority_levels>` and
       :ref:`localities <arch_overview_load_balancing_locality_weighted_lb>`.
       
      .google.protobuf.UInt32Value overprovisioning_factor = 3 [(.validate.rules) = { ... }];
      Returns:
      The overprovisioningFactor.
    • getOverprovisioningFactorOrBuilder

      com.google.protobuf.UInt32ValueOrBuilder getOverprovisioningFactorOrBuilder()
       Priority levels and localities are considered overprovisioned with this
       factor (in percentage). This means that we don't consider a priority
       level or locality unhealthy until the fraction of healthy hosts
       multiplied by the overprovisioning factor drops below 100.
       With the default value of 140 (i.e., a factor of 1.4), Envoy doesn't
       consider a priority level or a locality unhealthy until its percentage
       of healthy hosts drops below approximately 72%. For example:
      
       .. code-block:: json
      
       { "overprovisioning_factor": 100 }
      
       Read more at :ref:`priority levels <arch_overview_load_balancing_priority_levels>` and
       :ref:`localities <arch_overview_load_balancing_locality_weighted_lb>`.
       
      .google.protobuf.UInt32Value overprovisioning_factor = 3 [(.validate.rules) = { ... }];
    • hasEndpointStaleAfter

      boolean hasEndpointStaleAfter()
       The max time for which the endpoints from this assignment can be used.
       If no new assignments are received before this time expires, the
       endpoints are considered stale and should be marked unhealthy.
       Defaults to 0, which means endpoints never go stale.
       
      .google.protobuf.Duration endpoint_stale_after = 4 [(.validate.rules) = { ... }];
      Returns:
      Whether the endpointStaleAfter field is set.
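
      A hedged sketch of how a consumer might honor this field, assuming the
      ``io.envoyproxy.envoy.config.endpoint.v3`` package used by the Java xDS
      bindings and the ``Durations`` utility from ``protobuf-java-util``; the
      ``staleDeadlineMillis`` helper is hypothetical:

      .. code-block:: java

       import com.google.protobuf.util.Durations;
       import io.envoyproxy.envoy.config.endpoint.v3.ClusterLoadAssignment;

       public final class StalenessCheck {
         // Returns the epoch-millis deadline after which these endpoints
         // should be treated as stale, or -1 if they never go stale.
         static long staleDeadlineMillis(
             ClusterLoadAssignment.PolicyOrBuilder policy, long receivedAtMillis) {
           if (!policy.hasEndpointStaleAfter()
               || Durations.toMillis(policy.getEndpointStaleAfter()) == 0) {
             return -1; // unset or 0 means endpoints never go stale
           }
           return receivedAtMillis + Durations.toMillis(policy.getEndpointStaleAfter());
         }
       }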
    • getEndpointStaleAfter

      com.google.protobuf.Duration getEndpointStaleAfter()
       The max time for which the endpoints from this assignment can be used.
       If no new assignments are received before this time expires, the
       endpoints are considered stale and should be marked unhealthy.
       Defaults to 0, which means endpoints never go stale.
       
      .google.protobuf.Duration endpoint_stale_after = 4 [(.validate.rules) = { ... }];
      Returns:
      The endpointStaleAfter.
    • getEndpointStaleAfterOrBuilder

      com.google.protobuf.DurationOrBuilder getEndpointStaleAfterOrBuilder()
       The max time for which the endpoints from this assignment can be used.
       If no new assignments are received before this time expires, the
       endpoints are considered stale and should be marked unhealthy.
       Defaults to 0, which means endpoints never go stale.
       
      .google.protobuf.Duration endpoint_stale_after = 4 [(.validate.rules) = { ... }];
    • getWeightedPriorityHealth

      boolean getWeightedPriorityHealth()
       If true, use the :ref:`load balancing weight
       <envoy_v3_api_field_config.endpoint.v3.LbEndpoint.load_balancing_weight>` of healthy and unhealthy
       hosts to determine the health of the priority level. If false, use the number of healthy and
       unhealthy hosts to determine the health of the priority level; in other words, assume each host
       has a weight of 1 for this calculation.

       Note: this is not currently implemented for
       :ref:`locality weighted load balancing <arch_overview_load_balancing_locality_weighted_lb>`.
       
      bool weighted_priority_health = 6;
      Returns:
      The weightedPriorityHealth.
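
      Putting the accessors together, a usage sketch that builds a ``Policy``
      with the generated builder (again assuming the
      ``io.envoyproxy.envoy.config.endpoint.v3`` package; ``drop_percentage``
      is omitted for brevity):

      .. code-block:: java

       import com.google.protobuf.UInt32Value;
       import com.google.protobuf.util.Durations;
       import io.envoyproxy.envoy.config.endpoint.v3.ClusterLoadAssignment;

       public final class PolicyBuilderExample {
         public static void main(String[] args) {
           ClusterLoadAssignment.Policy policy =
               ClusterLoadAssignment.Policy.newBuilder()
                   .addDropOverloads(
                       ClusterLoadAssignment.Policy.DropOverload.newBuilder()
                           .setCategory("throttle")
                           .build())
                   .setOverprovisioningFactor(UInt32Value.of(140))
                   .setEndpointStaleAfter(Durations.fromSeconds(300))
                   .setWeightedPriorityHealth(true)
                   .build();
           System.out.println(policy.getWeightedPriorityHealth()); // true
           System.out.println(policy.getDropOverloadsCount());     // 1
         }
       }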