  1. Apr 04, 2019
    • node: Add memory-side caching attributes · acc02a10
      Keith Busch authored
      
      System memory may have caches to help improve access speed to frequently
      requested address ranges. While the system provided cache is transparent
      to the software accessing these memory ranges, applications can optimize
      their own access based on cache attributes.
      
      Provide a new API for the kernel to register these memory-side caches
      under the memory node that provides it.
      
      The new sysfs representation is modeled from the existing cpu cacheinfo
      attributes, as seen from /sys/devices/system/cpu/<cpu>/cache/.  Unlike CPU
      cacheinfo though, the node cache level is reported from the view of the
      memory. A higher level number is nearer to the CPU, while lower levels
      are closer to the last level memory.
      
      The exported attributes are the cache size, the line size, the
      associativity indexing, and the write-back policy. Descriptions of
      these system memory cache attributes are added to the sysfs stable
      documentation.
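
      As a rough, hedged sketch of how a platform driver might use the new
      API: this assumes the node_add_cache() entry point and struct
      node_cache_attrs introduced by this commit in include/linux/node.h,
      and all values are purely illustrative.

        #include <linux/node.h>

        /* Sketch: register a hypothetical 4GB, direct-mapped, write-back
         * memory-side cache for memory node 1. Values are illustrative.
         */
        static void example_register_node_cache(void)
        {
                struct node_cache_attrs cache = {
                        .size = 4ull << 30,   /* 4GB capacity */
                        .line_size = 64,      /* 64-byte cache lines */
                        .level = 1,           /* level 1: nearest the memory */
                        .indexing = NODE_CACHE_DIRECT_MAP,
                        .write_policy = NODE_CACHE_WRITE_BACK,
                };

                node_add_cache(1, &cache);
        }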
      
      Signed-off-by: Keith Busch <keith.busch@intel.com>
      Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Reviewed-by: Brice Goglin <Brice.Goglin@inria.fr>
      Tested-by: Brice Goglin <Brice.Goglin@inria.fr>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • node: Add heterogenous memory access attributes · e1cf33aa
      Keith Busch authored
      
      Heterogeneous memory systems provide memory nodes with different latency
      and bandwidth performance attributes. Provide a new kernel interface
      for subsystems to register the attributes under the memory target
      node's initiator access class. If the system provides this information,
      applications may query these attributes when deciding from which node
      to request memory.
      
      The following example shows the new sysfs hierarchy for a node exporting
      performance attributes:
      
        # tree -P "read*|write*" /sys/devices/system/node/nodeY/accessZ/initiators/
        /sys/devices/system/node/nodeY/accessZ/initiators/
        |-- read_bandwidth
        |-- read_latency
        |-- write_bandwidth
        `-- write_latency
      
      The bandwidth is exported as MB/s and latency is reported in
      nanoseconds. The values are taken from the platform as reported by the
      manufacturer.
      
      Memory accesses from an initiator node that is not one of the memory's
      access "Z" initiator nodes linked in the same directory may observe
      different performance than reported here. When a subsystem makes use
      of this interface, initiators of a different access number may not have
      the same performance relative to initiators in other access numbers,
      or may be omitted from any access class's initiators entirely.
      
      Descriptions for the memory access initiator performance attributes
      are added to the sysfs stable documentation.
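
      As a hedged illustration of the registration side: a subsystem that
      parsed these values from platform tables (the ACPI HMAT, for example)
      might report them roughly as below. This assumes the
      node_set_perf_attrs() interface and struct node_hmem_attrs added by
      this commit; the numbers are made up.

        #include <linux/node.h>

        /* Sketch: report illustrative access class 0 performance values
         * for memory node 1 (bandwidth in MB/s, latency in nanoseconds).
         */
        static void example_set_node_perf(void)
        {
                struct node_hmem_attrs hmem = {
                        .read_bandwidth  = 19200, /* MB/s */
                        .write_bandwidth = 19200, /* MB/s */
                        .read_latency    = 80,    /* ns */
                        .write_latency   = 90,    /* ns */
                };

                node_set_perf_attrs(1, &hmem, 0); /* access class 0 */
        }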
      
      Acked-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
      Tested-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
      Signed-off-by: Keith Busch <keith.busch@intel.com>
      Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Tested-by: Brice Goglin <Brice.Goglin@inria.fr>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • node: Link memory nodes to their compute nodes · 08d9dbe7
      Keith Busch authored
      
      Systems may be constructed with various specialized nodes. Some nodes
      may provide memory, some provide compute devices that access and use
      that memory, and others may provide both. Nodes that provide memory are
      referred to as memory targets, and nodes that can initiate memory access
      are referred to as memory initiators.
      
      Memory targets will often have varying access characteristics from
      different initiators, and platforms may have ways to express those
      relationships. In preparation for these systems, provide interfaces
      for the kernel to export the relationships among different nodes'
      memory targets and their initiators, linking them to each other with
      symlinks.
      
      If a system provides access locality for each initiator-target pair,
      nodes may be grouped into ranked access classes relative to other
      nodes. The new interface allows a subsystem to register relationships
      of varying classes, if such information is available and is meant to
      be exported.
      
      A memory initiator may have multiple memory targets in the same access
      class. The target memory's initiators in a given class indicate that
      those nodes' access characteristics share the same performance relative
      to other linked initiator nodes. The targets within an initiator's
      access class, though, do not necessarily perform the same as each
      other.
      
      A memory target node may have multiple memory initiators. All linked
      initiators in a target's class have the same access characteristics to
      that target.
      
      The following example shows the nodes' new sysfs hierarchy for a memory
      target node 'Y' with access class 0 from initiator node 'X':
      
        # symlinks -v /sys/devices/system/node/nodeX/access0/
        relative: /sys/devices/system/node/nodeX/access0/targets/nodeY -> ../../nodeY
      
        # symlinks -v /sys/devices/system/node/nodeY/access0/
        relative: /sys/devices/system/node/nodeY/access0/initiators/nodeX -> ../../nodeX
      
      The new attributes are added to the sysfs stable documentation.
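
      A hedged sketch of the call that would produce the symlinks shown
      above, assuming the register_memory_node_under_compute_node()
      interface this commit introduces (node numbers are illustrative):

        #include <linux/node.h>

        /* Sketch: link memory target node 1 (nodeY) to initiator node 0
         * (nodeX) in access class 0, creating both symlinks shown above.
         */
        static int example_link_nodes(void)
        {
                return register_memory_node_under_compute_node(1, 0, 0);
        }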
      
      Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
      Signed-off-by: Keith Busch <keith.busch@intel.com>
      Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Tested-by: Brice Goglin <Brice.Goglin@inria.fr>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  2. Mar 21, 2019
    • Drivers: hv: vmbus: Expose monitor data only when monitor pages are used · 46fc1548
      Kimberly Brown authored
      
      There are two methods for signaling the host: the monitor page mechanism
      and hypercalls. The monitor page mechanism is used by performance
      critical channels (storage, networking, etc.) because it provides
      improved throughput. However, latency is increased. Monitor pages are
      allocated to these channels.
      
      Monitor pages are not allocated to channels that do not use the monitor
      page mechanism. Therefore, these channels do not have a valid monitor id
      or valid monitor page data. In these cases, some of the "_show"
      functions return incorrect data. They return an invalid monitor id and
      data that is beyond the bounds of the hv_monitor_page array fields.
      
      The "channel->offermsg.monitor_allocated" value can be used to determine
      whether monitor pages have been allocated to a channel.
      
      Add "is_visible()" callback functions for the device-level and
      channel-level attribute groups. These functions will hide the monitor
      sysfs files when the monitor mechanism is not used.
      
      Remove ".default_attributes" from "vmbus_chan_attrs" and create a
      channel-level attribute group. These changes allow the new
      "is_visible()" callback function to be applied to the channel-level
      attributes.
      
      Call "sysfs_create_group()" in "vmbus_add_channel_kobj()" to create the
      channel's sysfs files. Add a new function,
      "vmbus_remove_channel_attr_group()", and call it in "free_channel()" to
      remove the channel's sysfs files when the channel is closed.
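
      As a simplified, hypothetical sketch of the "is_visible()" pattern
      described above: a single stand-in attribute takes the place of the
      full set of monitor attributes, and the callback hides it when no
      monitor pages were allocated to the channel.

        #include <linux/sysfs.h>
        #include <linux/hyperv.h>

        /* Hypothetical stand-in for one of the monitor attributes. */
        static struct kobj_attribute chan_attr_monitor_id =
                __ATTR(monitor_id, 0444, NULL, NULL);

        static struct attribute *vmbus_chan_attrs[] = {
                &chan_attr_monitor_id.attr,
                NULL
        };

        /* Return mode 0 to hide monitor files on channels that do not
         * use the monitor page mechanism; otherwise keep the default.
         */
        static umode_t vmbus_chan_attr_is_visible(struct kobject *kobj,
                                                  struct attribute *attr,
                                                  int idx)
        {
                const struct vmbus_channel *channel =
                        container_of(kobj, struct vmbus_channel, kobj);

                if (!channel->offermsg.monitor_allocated &&
                    attr == &chan_attr_monitor_id.attr)
                        return 0;

                return attr->mode;
        }

        static const struct attribute_group vmbus_chan_group = {
                .attrs = vmbus_chan_attrs,
                .is_visible = vmbus_chan_attr_is_visible,
        };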
      
      Signed-off-by: Kimberly Brown <kimbrownkd@gmail.com>
      Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Reviewed-by: Michael Kelley <mikelley@microsoft.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
  3. Feb 15, 2019
    • Drivers: hv: vmbus: Expose counters for interrupts and full conditions · 396ae57e
      Kimberly Brown authored
      
      Counter values for per-channel interrupts and ring buffer full
      conditions are useful for investigating performance.
      
      Expose counters in sysfs for 2 types of guest to host interrupts:
      1) Interrupts caused by the channel's outbound ring buffer transitioning
      from empty to not empty
      2) Interrupts caused by the channel's inbound ring buffer transitioning
      from full to not full while a packet is waiting for enough buffer space to
      become available
      
      Expose 2 counters in sysfs for the number of times that write operations
      encountered a full outbound ring buffer:
      1) The total number of write operations that encountered a full
      condition
      2) The number of write operations that were the first to encounter a
      full condition
      
      Increment the outbound full condition counters in the
      hv_ringbuffer_write() function because, for most drivers, a full
      outbound ring buffer is detected in that function. Also increment the
      outbound full condition counters in the set_channel_pending_send_size()
      function. In the hv_sock driver, a full outbound ring buffer is detected
      and set_channel_pending_send_size() is called before
      hv_ringbuffer_write() is called.
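
      To make the difference between the two outbound full counters
      concrete, here is a hypothetical, simplified sketch (names invented
      for illustration): the total counter increments on every write that
      finds the ring full, while the "first" counter increments only when
      the full condition is newly encountered.

        #include <linux/types.h>

        /* Hypothetical counter pair mirroring the description above. */
        struct out_full_counters {
                u64 out_full_total; /* every write that saw a full ring */
                u64 out_full_first; /* only writes that newly hit the
                                     * full condition */
        };

        /* Called when a write finds too little space in the outbound
         * ring; @was_full says a prior write already saw the full state.
         */
        static void count_out_full(struct out_full_counters *c, bool was_full)
        {
                c->out_full_total++;
                if (!was_full)
                        c->out_full_first++;
        }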
      
      I tested this patch by confirming that the sysfs files were created and
      observing the counter values. The values seemed to increase by a
      reasonable amount when the Hyper-V related drivers were in use.
      
      Signed-off-by: Kimberly Brown <kimbrownkd@gmail.com>
      Reviewed-by: Michael Kelley <mikelley@microsoft.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
  4. Sep 14, 2018
    • xen/balloon: add runtime control for scrubbing ballooned out pages · 197ecb38
      Marek Marczykowski-Górecki authored
      
      Scrubbing pages on the initial balloon down can take some time,
      especially in the nested virtualization case (nested EPT is slow).
      When an HVM/PVH guest is started with memory= significantly lower
      than maxmem=, all the extra pages will be scrubbed before being
      returned to Xen. But since most of them weren't used at all at that
      point, Xen needs to populate them first (from the populate-on-demand
      pool). In the nested virt case (Xen inside KVM) this slows down the
      guest boot by 15-30s with just 1.5GB to be returned to Xen.
      
      Add a runtime parameter to enable/disable it, allowing scrubbing to
      be disabled initially and then enabled back during boot (for example
      in the initramfs). Such usage relies on the assumption that a) most
      pages ballooned out during the initial boot weren't used at all, and
      b) even if they were, very few secrets are in the guest at that time
      (before any serious userspace kicks in).
      
      Convert CONFIG_XEN_SCRUB_PAGES to CONFIG_XEN_SCRUB_PAGES_DEFAULT
      (also enabled by default), which controls the default value for the
      new runtime switch.
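
      A hedged sketch of how such a runtime switch can be wired up with a
      core_param()-backed boolean that is consulted at scrub time; the
      names follow the flag and Kconfig option described above, but the
      snippet is illustrative rather than the exact patch.

        #include <linux/moduleparam.h>
        #include <linux/highmem.h>

        /* Default comes from the Kconfig option; the parameter is also
         * settable on the kernel command line and writable at runtime.
         */
        bool __read_mostly xen_scrub_pages =
                IS_ENABLED(CONFIG_XEN_SCRUB_PAGES_DEFAULT);
        core_param(xen_scrub_pages, xen_scrub_pages, bool, 0644);

        /* Scrub a ballooned-out page before returning it to Xen. */
        static inline void xenmem_reservation_scrub_page(struct page *page)
        {
                if (xen_scrub_pages)
                        clear_highpage(page);
        }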
      
      Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
      Reviewed-by: Juergen Gross <jgross@suse.com>
      Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>