[OpenWrt-Devel] [PATCH] brcm2708: add linux 4.4 support

Álvaro Fernández Rojas <noltari at gmail.com>
Fri Jan 15 18:18:38 EST 2016


- random-bcm2708 and spi-bcm2708 have been removed in 4.4, so their kernel packages are now restricted to Linux 4.1.
- sound-soc-bcm2708-i2s has been upstreamed as sound-soc-bcm2835-i2s; a new kmod-sound-soc-bcm2835-i2s package is added and the sound card packages now pick the matching I2S module per kernel version.

Let's keep Linux 4.1 as the default for a while, since Linux 4.4 appears to have
issues with multicast traffic on the RPi Ethernet:
https://gist.github.com/Noltari/5b1cfdecce5ed4bc08fd
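
Below is a minimal, illustrative sketch of how the default kernel can stay at 4.1
while 4.4 remains selectable for testing. It assumes the usual KERNEL_PATCHVER
mechanism in the target Makefile; the actual Makefile hunk is not part of this
diff, so treat the snippet as an example only:

  # target/linux/brcm2708/Makefile (illustrative only, not part of this patch)
  KERNEL_PATCHVER:=4.1
  # Once the 4.4 multicast issue is sorted out, switching is a one-line change:
  # KERNEL_PATCHVER:=4.4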

Signed-off-by: Álvaro Fernández Rojas <noltari at gmail.com>
---
 target/linux/brcm2708/bcm2708/config-4.4           |   355 +
 target/linux/brcm2708/bcm2709/config-4.4           |   387 +
 target/linux/brcm2708/modules.mk                   |    65 +-
 ...0001-smsx95xx-fix-crimes-against-truesize.patch |    33 +
 ...02-smsc95xx-Disable-turbo-mode-by-default.patch |    20 +
 ...around-for-issue-where-dirty-page-count-g.patch |    27 +
 .../0004-BCM2835_DT-Fix-I2S-register-map.patch     |    50 +
 ...-Prevent-spurious-interrupts-and-trap-the.patch |    31 +
 .../0006-irqchip-bcm2835-Add-FIQ-support.patch     |   127 +
 ...-irqchip-irq-bcm2835-Add-2836-FIQ-support.patch |    96 +
 ...erial-8250-Don-t-crash-when-nr_uarts-is-0.patch |    20 +
 ...2835-Set-base-to-0-give-expected-gpio-num.patch |    22 +
 ...2835-Fix-interrupt-handling-for-GPIOs-28-.patch |   146 +
 ...2835-Only-request-the-interrupts-listed-i.patch |    27 +
 ...cm2835-Support-pin-groups-other-than-7-11.patch |    80 +
 ...RM-bcm2835-Set-Serial-number-and-Revision.patch |    58 +
 ...-get-base-address-for-DMA-from-devicetree.patch |    65 +
 ...-add-24bit-support-update-bclk_ratio-to-m.patch |    79 +
 ...s-setup-clock-only-if-CPU-is-clock-master.patch |    54 +
 ...835-i2s-Eliminate-debugfs-directory-error.patch |    36 +
 .../0018-bcm2835-i2s-Register-PCM-device.patch     |    63 +
 ...i2s-Enable-MMAP-support-via-a-DT-property.patch |    44 +
 ...0-dmaengine-bcm2835-Add-slave-dma-support.patch |   320 +
 ...ine-bcm2835-set-residue_granularity-field.patch |    29 +
 ...cm2835-Load-driver-early-and-support-lega.patch |    98 +
 ...-dma-Fix-dreq-not-set-for-slave-transfers.patch |    21 +
 ...-Limit-cyclic-transfers-on-lite-channels-.patch |    37 +
 .../0025-bcm2835-Add-support-for-uart1.patch       |    57 +
 ...irmware-bcm2835-Add-missing-property-tags.patch |    62 +
 .../0027-Main-bcm2708-bcm2709-linux-port.patch     |  2418 +
 ...-squash-include-ARCH_BCM2708-ARCH_BCM2709.patch |   138 +
 .../patches-4.4/0029-Add-dwc_otg-driver.patch      | 60780 +++++++++++++++++++
 .../0030-bcm2708-framebuffer-driver.patch          |  3455 ++
 .../0031-dmaengine-Add-support-for-BCM2708.patch   |   612 +
 ...-parameter-to-mmc-multi_io_quirk-callback.patch |    75 +
 .../0033-MMC-added-alternative-MMC-driver.patch    |  1691 +
 ...835-sdhost-driver-and-an-overlay-to-enabl.patch |  2022 +
 ...ma-Add-vc_cma-driver-to-enable-use-of-CMA.patch |  1326 +
 .../0036-bcm2708-alsa-sound-driver.patch           |  2678 +
 .../patches-4.4/0037-bcm2708-vchiq-driver.patch    | 13200 ++++
 .../0038-vc_mem-Add-vc_mem-driver.patch            |   991 +
 ...deoCore-shared-memory-service-for-BCM2835.patch |  4393 ++
 ...omem-device-for-rootless-user-GPIO-access.patch |   306 +
 .../brcm2708/patches-4.4/0041-Add-SMI-driver.patch |  1930 +
 .../patches-4.4/0042-Add-SMI-NAND-driver.patch     |   358 +
 ...3-lirc-added-support-for-RaspberryPi-GPIO.patch |   841 +
 .../patches-4.4/0044-Add-cpufreq-driver.patch      |   257 +
 ...-thermal-driver-for-reporting-core-temper.patch |   193 +
 .../0046-Add-Chris-Boot-s-i2c-driver.patch         |   635 +
 .../0047-char-broadcom-Add-vcio-module.patch       |   221 +
 ...048-firmware-bcm2835-Support-ARCH_BCM270x.patch |   106 +
 .../0049-bcm2835-add-v4l2-camera-device.patch      |  7338 +++
 ...-mkknlimg-and-knlinfo-scripts-from-tools-.patch |   461 +
 ...port-for-the-CONFIG_CMDLINE_EXTEND-option.patch |    58 +
 ...0052-BCM2708-Add-core-Device-Tree-support.patch |  4564 ++
 ...3-bcm2835-Match-with-BCM2708-Device-Trees.patch |   515 +
 .../0054-fbdev-add-FBIOCOPYAREA-ioctl.patch        |    91 +
 ...up-console-framebuffer-imageblit-function.patch |   209 +
 ...9-Allow-mac-address-to-be-set-in-smsc95xx.patch |    91 +
 ...e-realtime-clock-1-wire-chip-DS1307-and-1.patch |   242 +
 ...061-Added-Device-IDs-for-August-DVB-T-205.patch |    22 +
 ...le-CONFIG_MEMCG-but-leave-it-disabled-due.patch |    49 +
 .../0063-ASoC-Add-support-for-PCM5102A-codec.patch |   128 +
 .../0064-ASoC-Add-support-for-HifiBerry-DAC.patch  |   165 +
 .../0065-ASoC-Add-support-for-Rpi-DAC.patch        |   275 +
 ...-Implement-MCLK-configuration-options-add.patch |    40 +
 ...d-support-for-HiFiBerry-Digi.-Driver-is-b.patch |   282 +
 ...-Set-idle_bias_off-to-false-Idle-bias-has.patch |    22 +
 ...audIO-Sound-Card-support-for-Raspberry-Pi.patch |   178 +
 ...ce-default-mouse-polling-interval-to-60Hz.patch |    36 +
 .../0071-Added-support-for-HiFiBerry-DAC.patch     |   190 +
 ...r-for-HiFiBerry-Amp-amplifier-add-on-boar.patch |   816 +
 ...ate-ds1307-driver-for-device-tree-support.patch |    27 +
 ...Add-pwr_led-and-the-required-input-trigge.patch |   170 +
 ...d-device-tree-compatible-string-and-an-ov.patch |    29 +
 .../0076-Add-driver-for-rpi-proto.patch            |   210 +
 .../0077-config-Add-default-configs.patch          |  2537 +
 .../0078-bcm2835-bcm2835_defconfig.patch           |  1426 +
 ...Add-touchscreen-driver-for-pi-LCD-display.patch |   290 +
 ...opy_to_user-and-__copy_from_user-performa.patch |  1510 +
 ...poweroff-Allow-it-to-work-on-Raspberry-Pi.patch |    35 +
 ...spidev-compatible-string-to-silence-warni.patch |    21 +
 .../0083-scripts-dtc-Add-overlay-support.patch     |  4389 ++
 ...fd-Add-Raspberry-Pi-Sense-HAT-core-driver.patch |   838 +
 .../patches-4.4/0085-RaspiDAC3-support.patch       |   243 +
 ...86-tpa6130a2-Add-headphone-switch-control.patch |    91 +
 .../0087-irq-bcm2835-Fix-building-with-2708.patch  |    28 +
 ..._display-add-backlight-driver-and-overlay.patch |   250 +
 ...89-bcm2835-dma-Fix-up-convert-to-DMA-pool.patch |    85 +
 ...ti-platform-support-for-mkknlimg-and-knli.patch |   247 +
 ...-suport-for-3D-rendering-using-the-V3D-en.patch |  5558 ++
 .../0092-drm-vc4-Force-HDMI-to-connected.patch     |    23 +
 .../0093-drm-vc4-bo-cache-locking-fixes.patch      |   147 +
 .../0094-drm-vc4-bo-cache-locking-cleanup.patch    |    92 +
 ...vc4-Use-job_lock-to-protect-seqno_cb_list.patch |    54 +
 ...c4-Drop-struct_mutex-around-CL-validation.patch |    63 +
 ...c4-Drop-struct_mutex-around-CL-validation.patch |    74 +
 ...dd-support-for-more-display-plane-formats.patch |    35 +
 ...m-vc4-No-need-to-stop-the-stopped-threads.patch |    26 +
 ...ove-extra-barrier-s-aroudn-CTnCA-CTnEA-se.patch |    33 +
 ...rm-vc4-Fix-a-typo-in-a-V3D-debug-register.patch |    33 +
 ...ble-VC4-modules-and-increase-CMA-size-wit.patch |   153 +
 .../brcm2708/patches-4.4/0103-squash-fixups.patch  |    43 +
 ...missing-vc4-kms-v3d-overlay.dtb-to-makefi.patch |    20 +
 ...-Also-build-the-driver-for-downstream-ker.patch |    22 +
 ...dts-Added-overlay-for-gpio_ir_recv-driver.patch |   104 +
 ...pio-module-and-add-a-device-tree-overlay-.patch |   100 +
 .../0108-New-overlay-for-PiScreen2r.patch          |   148 +
 ...verlay-for-Adafruit-PiTFT-2.8-capacitive-.patch |   145 +
 ...110-Add-support-for-the-HiFiBerry-DAC-Pro.patch |   539 +
 .../0111-BCM270X_DT-Add-at86rf233-overlay.patch    |   130 +
 .../0112-mm-Remove-the-PFN-busy-warning.patch      |    25 +
 ...optional-field-in-the-driver-struct-for-G.patch |    40 +
 ...-an-interface-for-capturing-the-GPU-state.patch |   333 +
 ...ate-a-bunch-of-code-to-match-upstream-sub.patch |  1894 +
 ...-driver-s-gem_object_free-function-from-C.patch |    59 +
 ...17-drm-vc4-Add-support-for-MSAA-rendering.patch |   518 +
 ...ew-more-non-functional-changes-to-sync-to.patch |   345 +
 ...-hpd-gpios-for-HDMI-GPIO-like-what-landed.patch |    22 +
 ...chronize-validation-code-for-v2-submissio.patch |   612 +
 ...use-mmc_debug-if-CONFIG_MMC_BCM2835-is-no.patch |    37 +
 ...k-timeout-fix-modprobe-baudrate-parameter.patch |   108 +
 ...-bcm270x_dt-Add-dwc2-and-dwc-otg-overlays.patch |   110 +
 ...Add-the-sdtweak-overlay-for-tuning-sdhost.patch |    74 +
 ...-Don-t-override-bus-width-capabilities-fr.patch |    24 +
 ...0126-SDIO-overlay-add-bus_width-parameter.patch |    42 +
 ...-bcm270x_dt-Add-dwc2-and-dwc-otg-overlays.patch |    23 +
 127 files changed, 141130 insertions(+), 11 deletions(-)
 create mode 100644 target/linux/brcm2708/bcm2708/config-4.4
 create mode 100644 target/linux/brcm2708/bcm2709/config-4.4
 create mode 100644 target/linux/brcm2708/patches-4.4/0001-smsx95xx-fix-crimes-against-truesize.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0002-smsc95xx-Disable-turbo-mode-by-default.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0003-vmstat-Workaround-for-issue-where-dirty-page-count-g.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0004-BCM2835_DT-Fix-I2S-register-map.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0005-irq-bcm2836-Prevent-spurious-interrupts-and-trap-the.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0006-irqchip-bcm2835-Add-FIQ-support.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0007-irqchip-irq-bcm2835-Add-2836-FIQ-support.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0008-serial-8250-Don-t-crash-when-nr_uarts-is-0.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0009-pinctrl-bcm2835-Set-base-to-0-give-expected-gpio-num.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0010-pinctrl-bcm2835-Fix-interrupt-handling-for-GPIOs-28-.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0011-pinctrl-bcm2835-Only-request-the-interrupts-listed-i.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0012-spi-bcm2835-Support-pin-groups-other-than-7-11.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0013-ARM-bcm2835-Set-Serial-number-and-Revision.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0014-bcm2835-i2s-get-base-address-for-DMA-from-devicetree.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0015-bcm2835-i2s-add-24bit-support-update-bclk_ratio-to-m.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0016-bcm2835-i2s-setup-clock-only-if-CPU-is-clock-master.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0017-bcm2835-i2s-Eliminate-debugfs-directory-error.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0018-bcm2835-i2s-Register-PCM-device.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0019-bcm2835-i2s-Enable-MMAP-support-via-a-DT-property.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0020-dmaengine-bcm2835-Add-slave-dma-support.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0021-dmaengine-bcm2835-set-residue_granularity-field.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0022-dmaengine-bcm2835-Load-driver-early-and-support-lega.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0023-bcm2835-dma-Fix-dreq-not-set-for-slave-transfers.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0024-bcm2835-dma-Limit-cyclic-transfers-on-lite-channels-.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0025-bcm2835-Add-support-for-uart1.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0026-firmware-bcm2835-Add-missing-property-tags.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0027-Main-bcm2708-bcm2709-linux-port.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0028-squash-include-ARCH_BCM2708-ARCH_BCM2709.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0029-Add-dwc_otg-driver.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0030-bcm2708-framebuffer-driver.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0031-dmaengine-Add-support-for-BCM2708.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0032-Add-blk_pos-parameter-to-mmc-multi_io_quirk-callback.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0033-MMC-added-alternative-MMC-driver.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0034-Adding-bcm2835-sdhost-driver-and-an-overlay-to-enabl.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0035-cma-Add-vc_cma-driver-to-enable-use-of-CMA.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0036-bcm2708-alsa-sound-driver.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0037-bcm2708-vchiq-driver.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0038-vc_mem-Add-vc_mem-driver.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0039-vcsm-VideoCore-shared-memory-service-for-BCM2835.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0040-Add-dev-gpiomem-device-for-rootless-user-GPIO-access.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0041-Add-SMI-driver.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0042-Add-SMI-NAND-driver.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0043-lirc-added-support-for-RaspberryPi-GPIO.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0044-Add-cpufreq-driver.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0045-Added-hwmon-thermal-driver-for-reporting-core-temper.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0046-Add-Chris-Boot-s-i2c-driver.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0047-char-broadcom-Add-vcio-module.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0048-firmware-bcm2835-Support-ARCH_BCM270x.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0049-bcm2835-add-v4l2-camera-device.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0050-scripts-Add-mkknlimg-and-knlinfo-scripts-from-tools-.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0051-fdt-Add-support-for-the-CONFIG_CMDLINE_EXTEND-option.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0052-BCM2708-Add-core-Device-Tree-support.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0053-bcm2835-Match-with-BCM2708-Device-Trees.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0054-fbdev-add-FBIOCOPYAREA-ioctl.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0058-Speed-up-console-framebuffer-imageblit-function.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0059-Allow-mac-address-to-be-set-in-smsc95xx.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0060-enabling-the-realtime-clock-1-wire-chip-DS1307-and-1.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0061-Added-Device-IDs-for-August-DVB-T-205.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0062-config-Enable-CONFIG_MEMCG-but-leave-it-disabled-due.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0063-ASoC-Add-support-for-PCM5102A-codec.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0064-ASoC-Add-support-for-HifiBerry-DAC.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0065-ASoC-Add-support-for-Rpi-DAC.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0066-ASoC-wm8804-Implement-MCLK-configuration-options-add.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0067-ASoC-BCM-Add-support-for-HiFiBerry-Digi.-Driver-is-b.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0068-ASoC-wm8804-Set-idle_bias_off-to-false-Idle-bias-has.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0069-Add-IQaudIO-Sound-Card-support-for-Raspberry-Pi.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0070-hid-Reduce-default-mouse-polling-interval-to-60Hz.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0071-Added-support-for-HiFiBerry-DAC.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0072-Added-driver-for-HiFiBerry-Amp-amplifier-add-on-boar.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0073-Update-ds1307-driver-for-device-tree-support.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0074-BCM270x_DT-Add-pwr_led-and-the-required-input-trigge.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0075-enc28j60-Add-device-tree-compatible-string-and-an-ov.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0076-Add-driver-for-rpi-proto.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0077-config-Add-default-configs.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0078-bcm2835-bcm2835_defconfig.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0079-rpi-ft5406-Add-touchscreen-driver-for-pi-LCD-display.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0080-Improve-__copy_to_user-and-__copy_from_user-performa.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0081-gpio-poweroff-Allow-it-to-work-on-Raspberry-Pi.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0082-spidev-Add-spidev-compatible-string-to-silence-warni.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0083-scripts-dtc-Add-overlay-support.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0084-mfd-Add-Raspberry-Pi-Sense-HAT-core-driver.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0085-RaspiDAC3-support.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0086-tpa6130a2-Add-headphone-switch-control.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0087-irq-bcm2835-Fix-building-with-2708.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0088-rpi_display-add-backlight-driver-and-overlay.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0089-bcm2835-dma-Fix-up-convert-to-DMA-pool.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0090-scripts-Multi-platform-support-for-mkknlimg-and-knli.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0091-drm-vc4-Add-suport-for-3D-rendering-using-the-V3D-en.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0092-drm-vc4-Force-HDMI-to-connected.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0093-drm-vc4-bo-cache-locking-fixes.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0094-drm-vc4-bo-cache-locking-cleanup.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0095-drm-vc4-Use-job_lock-to-protect-seqno_cb_list.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0096-drm-vc4-Drop-struct_mutex-around-CL-validation.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0097-drm-vc4-Drop-struct_mutex-around-CL-validation.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0098-drm-vc4-Add-support-for-more-display-plane-formats.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0099-drm-vc4-No-need-to-stop-the-stopped-threads.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0100-drm-vc4-Remove-extra-barrier-s-aroudn-CTnCA-CTnEA-se.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0101-drm-vc4-Fix-a-typo-in-a-V3D-debug-register.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0102-drm-vc4-Enable-VC4-modules-and-increase-CMA-size-wit.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0103-squash-fixups.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0104-squash-add-missing-vc4-kms-v3d-overlay.dtb-to-makefi.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0105-clk-bcm2835-Also-build-the-driver-for-downstream-ker.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0106-dts-Added-overlay-for-gpio_ir_recv-driver.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0107-Build-i2c_gpio-module-and-add-a-device-tree-overlay-.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0108-New-overlay-for-PiScreen2r.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0109-dts-Added-overlay-for-Adafruit-PiTFT-2.8-capacitive-.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0110-Add-support-for-the-HiFiBerry-DAC-Pro.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0111-BCM270X_DT-Add-at86rf233-overlay.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0112-mm-Remove-the-PFN-busy-warning.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0113-drm-Put-an-optional-field-in-the-driver-struct-for-G.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0114-drm-vc4-Add-an-interface-for-capturing-the-GPU-state.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0115-drm-vc4-Update-a-bunch-of-code-to-match-upstream-sub.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0116-drm-Use-the-driver-s-gem_object_free-function-from-C.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0117-drm-vc4-Add-support-for-MSAA-rendering.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0118-drm-vc4-A-few-more-non-functional-changes-to-sync-to.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0119-drm-vc4-Use-hpd-gpios-for-HDMI-GPIO-like-what-landed.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0120-drm-vc4-Synchronize-validation-code-for-v2-submissio.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0121-MMC-Do-not-use-mmc_debug-if-CONFIG_MMC_BCM2835-is-no.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0122-Extend-clock-timeout-fix-modprobe-baudrate-parameter.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0123-bcm270x_dt-Add-dwc2-and-dwc-otg-overlays.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0124-BCM270X_DT-Add-the-sdtweak-overlay-for-tuning-sdhost.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0125-bcm2835-mmc-Don-t-override-bus-width-capabilities-fr.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0126-SDIO-overlay-add-bus_width-parameter.patch
 create mode 100644 target/linux/brcm2708/patches-4.4/0127-fixup-bcm270x_dt-Add-dwc2-and-dwc-otg-overlays.patch

diff --git a/target/linux/brcm2708/bcm2708/config-4.4 b/target/linux/brcm2708/bcm2708/config-4.4
new file mode 100644
index 0000000..22b8995
--- /dev/null
+++ b/target/linux/brcm2708/bcm2708/config-4.4
@@ -0,0 +1,355 @@
+# CONFIG_AIO is not set
+CONFIG_ALIGNMENT_TRAP=y
+# CONFIG_AMBA_PL08X is not set
+# CONFIG_APM_EMULATION is not set
+CONFIG_ARCH_BCM2708=y
+# CONFIG_ARCH_BCM2709 is not set
+CONFIG_ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE=y
+CONFIG_ARCH_HAS_ELF_RANDOMIZE=y
+CONFIG_ARCH_HAS_GCOV_PROFILE_ALL=y
+# CONFIG_ARCH_HAS_SG_CHAIN is not set
+CONFIG_ARCH_HAVE_CUSTOM_GPIO_H=y
+CONFIG_ARCH_HIBERNATION_POSSIBLE=y
+CONFIG_ARCH_MIGHT_HAVE_PC_PARPORT=y
+CONFIG_ARCH_NR_GPIO=0
+CONFIG_ARCH_REQUIRE_GPIOLIB=y
+# CONFIG_ARCH_SELECT_MEMORY_MODEL is not set
+# CONFIG_ARCH_SPARSEMEM_DEFAULT is not set
+CONFIG_ARCH_SUPPORTS_ATOMIC_RMW=y
+CONFIG_ARCH_SUPPORTS_UPROBES=y
+CONFIG_ARCH_SUSPEND_POSSIBLE=y
+CONFIG_ARCH_USE_BUILTIN_BSWAP=y
+CONFIG_ARCH_USE_CMPXCHG_LOCKREF=y
+CONFIG_ARCH_WANT_GENERAL_HUGETLB=y
+CONFIG_ARCH_WANT_IPC_PARSE_VERSION=y
+CONFIG_ARM=y
+CONFIG_ARM_AMBA=y
+CONFIG_ARM_CPU_SUSPEND=y
+CONFIG_ARM_ERRATA_411920=y
+CONFIG_ARM_L1_CACHE_SHIFT=5
+# CONFIG_ARM_SP805_WATCHDOG is not set
+CONFIG_ARM_THUMB=y
+CONFIG_ARM_UNWIND=y
+# CONFIG_BACKLIGHT_CLASS_DEVICE is not set
+CONFIG_BACKLIGHT_LCD_SUPPORT=y
+# CONFIG_BCM2708_NOL2CACHE is not set
+CONFIG_BCM2708_VCHIQ=y
+CONFIG_BCM2708_VCMEM=y
+# CONFIG_BCM2835_DEVGPIOMEM is not set
+CONFIG_BCM2835_MBOX=y
+# CONFIG_BCM2835_SMI is not set
+CONFIG_BCM2835_WDT=y
+CONFIG_BCM_VCIO=y
+CONFIG_BCM_VC_CMA=y
+CONFIG_BCM_VC_SM=y
+# CONFIG_BLK_DEV_INITRD is not set
+CONFIG_BLK_DEV_LOOP=y
+CONFIG_BLK_DEV_RAM=y
+CONFIG_BLK_DEV_RAM_COUNT=16
+CONFIG_BLK_DEV_RAM_SIZE=4096
+CONFIG_BLK_DEV_SD=y
+CONFIG_BRCM_CHAR_DRIVERS=y
+CONFIG_BUILD_BIN2C=y
+# CONFIG_CACHE_L2X0 is not set
+CONFIG_CLKDEV_LOOKUP=y
+CONFIG_CLKSRC_MMIO=y
+CONFIG_CLKSRC_OF=y
+CONFIG_CLKSRC_PROBE=y
+CONFIG_CLONE_BACKWARDS=y
+CONFIG_CMA=y
+CONFIG_CMA_ALIGNMENT=8
+CONFIG_CMA_AREAS=7
+# CONFIG_CMA_DEBUG is not set
+# CONFIG_CMA_DEBUGFS is not set
+CONFIG_CMA_SIZE_MBYTES=16
+# CONFIG_CMA_SIZE_SEL_MAX is not set
+CONFIG_CMA_SIZE_SEL_MBYTES=y
+# CONFIG_CMA_SIZE_SEL_MIN is not set
+# CONFIG_CMA_SIZE_SEL_PERCENTAGE is not set
+CONFIG_CMDLINE="dwc_otg.lpm_enable=0 console=ttyAMA0,115200 kgdboc=ttyAMA0,115200 root=/dev/mmcblk0p2 rootfstype=ext4 rootwait"
+CONFIG_COMMON_CLK=y
+CONFIG_CONFIGFS_FS=y
+CONFIG_CONSOLE_TRANSLATIONS=y
+CONFIG_CPU_32v6=y
+CONFIG_CPU_ABRT_EV6=y
+# CONFIG_CPU_BPREDICT_DISABLE is not set
+CONFIG_CPU_CACHE_V6=y
+CONFIG_CPU_CACHE_VIPT=y
+CONFIG_CPU_COPY_V6=y
+CONFIG_CPU_CP15=y
+CONFIG_CPU_CP15_MMU=y
+CONFIG_CPU_HAS_ASID=y
+# CONFIG_CPU_ICACHE_DISABLE is not set
+CONFIG_CPU_IDLE=y
+CONFIG_CPU_IDLE_GOV_LADDER=y
+CONFIG_CPU_IDLE_GOV_MENU=y
+CONFIG_CPU_PABRT_V6=y
+CONFIG_CPU_PM=y
+CONFIG_CPU_TLB_V6=y
+CONFIG_CPU_V6=y
+CONFIG_CRC16=y
+CONFIG_CRYPTO_CRC32C=y
+CONFIG_CRYPTO_HASH=y
+CONFIG_CRYPTO_HASH2=y
+CONFIG_CRYPTO_RNG2=y
+CONFIG_CRYPTO_WORKQUEUE=y
+CONFIG_DCACHE_WORD_ACCESS=y
+CONFIG_DEBUG_BUGVERBOSE=y
+CONFIG_DEBUG_INFO=y
+CONFIG_DEBUG_LL_INCLUDE="mach/debug-macro.S"
+# CONFIG_DEBUG_UART_8250 is not set
+# CONFIG_DEBUG_USER is not set
+CONFIG_DEFAULT_CFQ=y
+# CONFIG_DEFAULT_DEADLINE is not set
+CONFIG_DEFAULT_IOSCHED="cfq"
+CONFIG_DEVTMPFS=y
+CONFIG_DMADEVICES=y
+# CONFIG_DMA_BCM2708 is not set
+CONFIG_DMA_BCM2835=y
+CONFIG_DMA_CMA=y
+CONFIG_DMA_ENGINE=y
+CONFIG_DMA_OF=y
+CONFIG_DMA_VIRTUAL_CHANNELS=y
+CONFIG_DNOTIFY=y
+CONFIG_DTC=y
+CONFIG_DUMMY_CONSOLE=y
+CONFIG_EDAC_ATOMIC_SCRUB=y
+CONFIG_EDAC_SUPPORT=y
+CONFIG_ENABLE_MUST_CHECK=y
+CONFIG_EXT4_FS=y
+CONFIG_EXT4_FS_POSIX_ACL=y
+CONFIG_EXT4_FS_SECURITY=y
+CONFIG_FB=y
+CONFIG_FB_BCM2708=y
+CONFIG_FB_CFB_COPYAREA=y
+CONFIG_FB_CFB_FILLRECT=y
+CONFIG_FB_CFB_IMAGEBLIT=y
+CONFIG_FB_CMDLINE=y
+# CONFIG_FB_RPISENSE is not set
+CONFIG_FIQ=y
+CONFIG_FIRMWARE_IN_KERNEL=y
+CONFIG_FIX_EARLYCON_MEM=y
+# CONFIG_FONTS is not set
+CONFIG_FONT_8x16=y
+CONFIG_FONT_8x8=y
+CONFIG_FONT_SUPPORT=y
+# CONFIG_FPE_FASTFPE is not set
+# CONFIG_FPE_NWFPE is not set
+CONFIG_FRAMEBUFFER_CONSOLE=y
+# CONFIG_FRAMEBUFFER_CONSOLE_DETECT_PRIMARY is not set
+# CONFIG_FRAMEBUFFER_CONSOLE_ROTATION is not set
+CONFIG_FREEZER=y
+CONFIG_FS_MBCACHE=y
+CONFIG_FS_POSIX_ACL=y
+CONFIG_GENERIC_ALLOCATOR=y
+CONFIG_GENERIC_ATOMIC64=y
+CONFIG_GENERIC_BUG=y
+CONFIG_GENERIC_CLOCKEVENTS=y
+CONFIG_GENERIC_IDLE_POLL_SETUP=y
+CONFIG_GENERIC_IO=y
+CONFIG_GENERIC_IRQ_SHOW=y
+CONFIG_GENERIC_IRQ_SHOW_LEVEL=y
+CONFIG_GENERIC_PCI_IOMAP=y
+CONFIG_GENERIC_PINCONF=y
+CONFIG_GENERIC_SCHED_CLOCK=y
+CONFIG_GENERIC_SMP_IDLE_THREAD=y
+CONFIG_GENERIC_STRNCPY_FROM_USER=y
+CONFIG_GENERIC_STRNLEN_USER=y
+CONFIG_GPIOLIB=y
+CONFIG_GPIO_DEVRES=y
+CONFIG_GPIO_SYSFS=y
+CONFIG_HANDLE_DOMAIN_IRQ=y
+CONFIG_HARDIRQS_SW_RESEND=y
+CONFIG_HAS_DMA=y
+CONFIG_HAS_IOMEM=y
+CONFIG_HAS_IOPORT_MAP=y
+# CONFIG_HAVE_64BIT_ALIGNED_ACCESS is not set
+# CONFIG_HAVE_ARCH_BITREVERSE is not set
+CONFIG_HAVE_ARCH_JUMP_LABEL=y
+CONFIG_HAVE_ARCH_KGDB=y
+CONFIG_HAVE_ARCH_PFN_VALID=y
+CONFIG_HAVE_ARCH_TRACEHOOK=y
+# CONFIG_HAVE_BOOTMEM_INFO_NODE is not set
+CONFIG_HAVE_BPF_JIT=y
+CONFIG_HAVE_CC_STACKPROTECTOR=y
+CONFIG_HAVE_CLK=y
+CONFIG_HAVE_CLK_PREPARE=y
+CONFIG_HAVE_CONTEXT_TRACKING=y
+CONFIG_HAVE_C_RECORDMCOUNT=y
+CONFIG_HAVE_DEBUG_KMEMLEAK=y
+CONFIG_HAVE_DMA_API_DEBUG=y
+CONFIG_HAVE_DMA_ATTRS=y
+CONFIG_HAVE_DMA_CONTIGUOUS=y
+CONFIG_HAVE_DYNAMIC_FTRACE=y
+CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y
+CONFIG_HAVE_FTRACE_MCOUNT_RECORD=y
+CONFIG_HAVE_FUNCTION_GRAPH_TRACER=y
+CONFIG_HAVE_FUNCTION_TRACER=y
+CONFIG_HAVE_GENERIC_DMA_COHERENT=y
+CONFIG_HAVE_IRQ_TIME_ACCOUNTING=y
+CONFIG_HAVE_KERNEL_GZIP=y
+CONFIG_HAVE_KERNEL_LZ4=y
+CONFIG_HAVE_KERNEL_LZMA=y
+CONFIG_HAVE_KERNEL_LZO=y
+CONFIG_HAVE_KERNEL_XZ=y
+CONFIG_HAVE_LATENCYTOP_SUPPORT=y
+CONFIG_HAVE_MEMBLOCK=y
+CONFIG_HAVE_MOD_ARCH_SPECIFIC=y
+CONFIG_HAVE_NET_DSA=y
+CONFIG_HAVE_OPROFILE=y
+CONFIG_HAVE_OPTPROBES=y
+CONFIG_HAVE_PERF_EVENTS=y
+CONFIG_HAVE_PERF_REGS=y
+CONFIG_HAVE_PERF_USER_STACK_DUMP=y
+CONFIG_HAVE_PROC_CPU=y
+CONFIG_HAVE_REGS_AND_STACK_ACCESS_API=y
+CONFIG_HAVE_SYSCALL_TRACEPOINTS=y
+CONFIG_HAVE_UID16=y
+CONFIG_HAVE_VIRT_CPU_ACCOUNTING_GEN=y
+CONFIG_HW_CONSOLE=y
+CONFIG_HZ_FIXED=0
+CONFIG_IKCONFIG=y
+CONFIG_IKCONFIG_PROC=y
+CONFIG_INPUT=y
+CONFIG_INPUT_MOUSEDEV=y
+# CONFIG_INPUT_MOUSEDEV_PSAUX is not set
+CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
+CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
+CONFIG_IOMMU_HELPER=y
+CONFIG_IOSCHED_CFQ=y
+CONFIG_IRQCHIP=y
+CONFIG_IRQ_DOMAIN=y
+CONFIG_IRQ_FORCED_THREADING=y
+CONFIG_IRQ_WORK=y
+CONFIG_JBD2=y
+CONFIG_KERNEL_GZIP=y
+# CONFIG_KERNEL_XZ is not set
+# CONFIG_LCD_CLASS_DEVICE is not set
+CONFIG_LEDS_GPIO=y
+CONFIG_LEDS_TRIGGER_INPUT=y
+CONFIG_LIBFDT=y
+CONFIG_LOGO=y
+CONFIG_LOGO_LINUX_CLUT224=y
+# CONFIG_LOGO_LINUX_MONO is not set
+# CONFIG_LOGO_LINUX_VGA16 is not set
+CONFIG_MACH_BCM2708=y
+CONFIG_MAC_PARTITION=y
+CONFIG_MAGIC_SYSRQ=y
+CONFIG_MAILBOX=y
+# CONFIG_MAILBOX_TEST is not set
+CONFIG_MAX_RAW_DEVS=256
+CONFIG_MEMORY_ISOLATION=y
+CONFIG_MIGRATION=y
+CONFIG_MMC=y
+CONFIG_MMC_BCM2835=y
+CONFIG_MMC_BCM2835_DMA=y
+CONFIG_MMC_BCM2835_PIO_DMA_BARRIER=2
+CONFIG_MMC_BCM2835_SDHOST=y
+CONFIG_MMC_BLOCK=y
+CONFIG_MMC_BLOCK_MINORS=32
+CONFIG_MMC_SDHCI=y
+CONFIG_MMC_SDHCI_PLTFM=y
+CONFIG_MODULES_USE_ELF_REL=y
+# CONFIG_MTD is not set
+CONFIG_MULTI_IRQ_HANDLER=y
+CONFIG_NEED_DMA_MAP_STATE=y
+CONFIG_NEED_MACH_IO_H=y
+CONFIG_NEED_MACH_MEMORY_H=y
+CONFIG_NEED_PER_CPU_KM=y
+CONFIG_NLS=y
+CONFIG_NLS_ASCII=y
+CONFIG_NLS_DEFAULT="utf8"
+CONFIG_NO_BOOTMEM=y
+CONFIG_NO_HZ=y
+CONFIG_NO_HZ_COMMON=y
+CONFIG_NO_HZ_IDLE=y
+CONFIG_OABI_COMPAT=y
+CONFIG_OF=y
+CONFIG_OF_ADDRESS=y
+CONFIG_OF_EARLY_FLATTREE=y
+CONFIG_OF_FLATTREE=y
+CONFIG_OF_GPIO=y
+CONFIG_OF_IRQ=y
+CONFIG_OF_NET=y
+CONFIG_OF_RESERVED_MEM=y
+CONFIG_OLD_SIGACTION=y
+CONFIG_OLD_SIGSUSPEND3=y
+CONFIG_PAGE_OFFSET=0xC0000000
+# CONFIG_PCI_DOMAINS_GENERIC is not set
+# CONFIG_PCI_SYSCALL is not set
+CONFIG_PERF_USE_VMALLOC=y
+CONFIG_PGTABLE_LEVELS=2
+CONFIG_PHYS_OFFSET=0
+CONFIG_PINCTRL=y
+CONFIG_PINCTRL_BCM2835=y
+# CONFIG_PL330_DMA is not set
+CONFIG_PM=y
+CONFIG_PM_CLK=y
+# CONFIG_PM_DEBUG is not set
+CONFIG_PM_SLEEP=y
+CONFIG_POWER_SUPPLY=y
+CONFIG_PRINTK_TIME=y
+CONFIG_PROC_PAGE_MONITOR=y
+CONFIG_RASPBERRYPI_FIRMWARE=y
+CONFIG_RATIONAL=y
+CONFIG_RAW_DRIVER=y
+# CONFIG_RCU_STALL_COMMON is not set
+CONFIG_RWSEM_XCHGADD_ALGORITHM=y
+CONFIG_SCHED_HRTICK=y
+# CONFIG_SCHED_INFO is not set
+CONFIG_SCSI=y
+# CONFIG_SCSI_LOWLEVEL is not set
+# CONFIG_SCSI_PROC_FS is not set
+# CONFIG_SERIAL_8250_DMA is not set
+CONFIG_SERIAL_8250_FSL=y
+CONFIG_SERIAL_8250_NR_UARTS=1
+CONFIG_SERIAL_8250_RUNTIME_UARTS=0
+# CONFIG_SERIAL_AMBA_PL010 is not set
+CONFIG_SERIAL_AMBA_PL011=y
+CONFIG_SERIAL_AMBA_PL011_CONSOLE=y
+CONFIG_SERIAL_OF_PLATFORM=y
+CONFIG_SPARSE_IRQ=y
+# CONFIG_SQUASHFS is not set
+CONFIG_SRCU=y
+# CONFIG_STAGING is not set
+# CONFIG_STRIP_ASM_SYMS is not set
+# CONFIG_SUNXI_SRAM is not set
+CONFIG_SUSPEND=y
+CONFIG_SUSPEND_FREEZER=y
+CONFIG_SWIOTLB=y
+CONFIG_SYS_SUPPORTS_APM_EMULATION=y
+# CONFIG_TEXTSEARCH is not set
+CONFIG_THERMAL=y
+CONFIG_THERMAL_BCM2835=y
+CONFIG_THERMAL_DEFAULT_GOV_STEP_WISE=y
+CONFIG_THERMAL_GOV_STEP_WISE=y
+CONFIG_THERMAL_OF=y
+CONFIG_TICK_CPU_ACCOUNTING=y
+CONFIG_TMPFS_POSIX_ACL=y
+CONFIG_UEVENT_HELPER_PATH=""
+# CONFIG_UID16 is not set
+CONFIG_UNCOMPRESS_INCLUDE="mach/uncompress.h"
+CONFIG_USB=y
+CONFIG_USB_ANNOUNCE_NEW_DEVICES=y
+CONFIG_USB_COMMON=y
+CONFIG_USB_DWCOTG=y
+# CONFIG_USB_EHCI_HCD is not set
+CONFIG_USB_NET_DRIVERS=y
+CONFIG_USB_NET_SMSC95XX=y
+CONFIG_USB_STORAGE=y
+CONFIG_USB_SUPPORT=y
+CONFIG_USB_UAS=y
+CONFIG_USB_USBNET=y
+CONFIG_USE_OF=y
+CONFIG_VECTORS_BASE=0xffff0000
+CONFIG_VFP=y
+CONFIG_VT=y
+CONFIG_VT_CONSOLE=y
+CONFIG_VT_CONSOLE_SLEEP=y
+CONFIG_VT_HW_CONSOLE_BINDING=y
+CONFIG_WATCHDOG_CORE=y
+CONFIG_XZ_DEC_ARM=y
+CONFIG_XZ_DEC_BCJ=y
+CONFIG_ZBOOT_ROM_BSS=0x0
+CONFIG_ZBOOT_ROM_TEXT=0x0
+CONFIG_ZONE_DMA_FLAG=0
diff --git a/target/linux/brcm2708/bcm2709/config-4.4 b/target/linux/brcm2708/bcm2709/config-4.4
new file mode 100644
index 0000000..99603a0
--- /dev/null
+++ b/target/linux/brcm2708/bcm2709/config-4.4
@@ -0,0 +1,387 @@
+# CONFIG_AIO is not set
+CONFIG_ALIGNMENT_TRAP=y
+# CONFIG_AMBA_PL08X is not set
+# CONFIG_APM_EMULATION is not set
+# CONFIG_ARCH_BCM2708 is not set
+CONFIG_ARCH_BCM2709=y
+CONFIG_ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE=y
+CONFIG_ARCH_HAS_ELF_RANDOMIZE=y
+CONFIG_ARCH_HAS_GCOV_PROFILE_ALL=y
+# CONFIG_ARCH_HAS_SG_CHAIN is not set
+CONFIG_ARCH_HAS_TICK_BROADCAST=y
+CONFIG_ARCH_HAVE_CUSTOM_GPIO_H=y
+CONFIG_ARCH_HIBERNATION_POSSIBLE=y
+CONFIG_ARCH_MIGHT_HAVE_PC_PARPORT=y
+CONFIG_ARCH_NR_GPIO=0
+CONFIG_ARCH_REQUIRE_GPIOLIB=y
+# CONFIG_ARCH_SELECT_MEMORY_MODEL is not set
+# CONFIG_ARCH_SPARSEMEM_DEFAULT is not set
+CONFIG_ARCH_SUPPORTS_ATOMIC_RMW=y
+CONFIG_ARCH_SUPPORTS_UPROBES=y
+CONFIG_ARCH_SUSPEND_POSSIBLE=y
+CONFIG_ARCH_USE_BUILTIN_BSWAP=y
+CONFIG_ARCH_USE_CMPXCHG_LOCKREF=y
+CONFIG_ARCH_WANT_GENERAL_HUGETLB=y
+CONFIG_ARCH_WANT_IPC_PARSE_VERSION=y
+CONFIG_ARM=y
+CONFIG_ARM_AMBA=y
+CONFIG_ARM_ARCH_TIMER=y
+CONFIG_ARM_ARCH_TIMER_EVTSTREAM=y
+CONFIG_ARM_CPU_SUSPEND=y
+CONFIG_ARM_L1_CACHE_SHIFT=6
+CONFIG_ARM_L1_CACHE_SHIFT_6=y
+# CONFIG_ARM_LPAE is not set
+# CONFIG_ARM_SP805_WATCHDOG is not set
+CONFIG_ARM_THUMB=y
+# CONFIG_ARM_THUMBEE is not set
+CONFIG_ARM_UNWIND=y
+CONFIG_ARM_VIRT_EXT=y
+# CONFIG_BACKLIGHT_CLASS_DEVICE is not set
+CONFIG_BACKLIGHT_LCD_SUPPORT=y
+CONFIG_BCM2708_NOL2CACHE=y
+CONFIG_BCM2708_VCHIQ=y
+CONFIG_BCM2708_VCMEM=y
+# CONFIG_BCM2835_DEVGPIOMEM is not set
+CONFIG_BCM2835_MBOX=y
+# CONFIG_BCM2835_SMI is not set
+CONFIG_BCM2835_WDT=y
+CONFIG_BCM_VCIO=y
+CONFIG_BCM_VC_CMA=y
+CONFIG_BCM_VC_SM=y
+# CONFIG_BLK_DEV_INITRD is not set
+CONFIG_BLK_DEV_LOOP=y
+CONFIG_BLK_DEV_RAM=y
+CONFIG_BLK_DEV_RAM_COUNT=16
+CONFIG_BLK_DEV_RAM_SIZE=4096
+CONFIG_BLK_DEV_SD=y
+CONFIG_BRCM_CHAR_DRIVERS=y
+CONFIG_BUILD_BIN2C=y
+# CONFIG_CACHE_L2X0 is not set
+CONFIG_CLKDEV_LOOKUP=y
+CONFIG_CLKSRC_OF=y
+CONFIG_CLKSRC_PROBE=y
+CONFIG_CLONE_BACKWARDS=y
+CONFIG_CMA=y
+CONFIG_CMA_ALIGNMENT=8
+CONFIG_CMA_AREAS=7
+# CONFIG_CMA_DEBUG is not set
+# CONFIG_CMA_DEBUGFS is not set
+CONFIG_CMA_SIZE_MBYTES=16
+# CONFIG_CMA_SIZE_SEL_MAX is not set
+CONFIG_CMA_SIZE_SEL_MBYTES=y
+# CONFIG_CMA_SIZE_SEL_MIN is not set
+# CONFIG_CMA_SIZE_SEL_PERCENTAGE is not set
+CONFIG_CMDLINE="dwc_otg.lpm_enable=0 console=ttyAMA0,115200 kgdboc=ttyAMA0,115200 root=/dev/mmcblk0p2 rootfstype=ext4 rootwait"
+CONFIG_COMMON_CLK=y
+CONFIG_CONFIGFS_FS=y
+CONFIG_CONSOLE_TRANSLATIONS=y
+CONFIG_CPU_32v6K=y
+CONFIG_CPU_32v7=y
+CONFIG_CPU_ABRT_EV7=y
+# CONFIG_CPU_BPREDICT_DISABLE is not set
+CONFIG_CPU_CACHE_V7=y
+CONFIG_CPU_CACHE_VIPT=y
+CONFIG_CPU_COPY_V6=y
+CONFIG_CPU_CP15=y
+CONFIG_CPU_CP15_MMU=y
+CONFIG_CPU_HAS_ASID=y
+# CONFIG_CPU_ICACHE_DISABLE is not set
+CONFIG_CPU_IDLE=y
+CONFIG_CPU_IDLE_GOV_LADDER=y
+CONFIG_CPU_IDLE_GOV_MENU=y
+CONFIG_CPU_PABRT_V7=y
+CONFIG_CPU_PM=y
+CONFIG_CPU_RMAP=y
+CONFIG_CPU_TLB_V7=y
+CONFIG_CPU_V7=y
+CONFIG_CRC16=y
+CONFIG_CRYPTO_CRC32C=y
+CONFIG_CRYPTO_HASH=y
+CONFIG_CRYPTO_HASH2=y
+CONFIG_CRYPTO_RNG2=y
+CONFIG_CRYPTO_WORKQUEUE=y
+CONFIG_DCACHE_WORD_ACCESS=y
+CONFIG_DEBUG_BUGVERBOSE=y
+CONFIG_DEBUG_INFO=y
+CONFIG_DEBUG_LL_INCLUDE="mach/debug-macro.S"
+# CONFIG_DEBUG_UART_8250 is not set
+# CONFIG_DEBUG_USER is not set
+CONFIG_DEFAULT_CFQ=y
+# CONFIG_DEFAULT_DEADLINE is not set
+CONFIG_DEFAULT_IOSCHED="cfq"
+CONFIG_DEVTMPFS=y
+CONFIG_DMADEVICES=y
+# CONFIG_DMA_BCM2708 is not set
+CONFIG_DMA_BCM2835=y
+CONFIG_DMA_CMA=y
+CONFIG_DMA_ENGINE=y
+CONFIG_DMA_OF=y
+CONFIG_DMA_VIRTUAL_CHANNELS=y
+CONFIG_DNOTIFY=y
+CONFIG_DTC=y
+CONFIG_DUMMY_CONSOLE=y
+CONFIG_EDAC_ATOMIC_SCRUB=y
+CONFIG_EDAC_SUPPORT=y
+CONFIG_ENABLE_MUST_CHECK=y
+CONFIG_EXT4_FS=y
+CONFIG_EXT4_FS_POSIX_ACL=y
+CONFIG_EXT4_FS_SECURITY=y
+CONFIG_FB=y
+CONFIG_FB_BCM2708=y
+CONFIG_FB_CFB_COPYAREA=y
+CONFIG_FB_CFB_FILLRECT=y
+CONFIG_FB_CFB_IMAGEBLIT=y
+CONFIG_FB_CMDLINE=y
+# CONFIG_FB_RPISENSE is not set
+CONFIG_FIQ=y
+CONFIG_FIRMWARE_IN_KERNEL=y
+CONFIG_FIX_EARLYCON_MEM=y
+# CONFIG_FONTS is not set
+CONFIG_FONT_8x16=y
+CONFIG_FONT_8x8=y
+CONFIG_FONT_SUPPORT=y
+# CONFIG_FPE_FASTFPE is not set
+# CONFIG_FPE_NWFPE is not set
+CONFIG_FRAMEBUFFER_CONSOLE=y
+# CONFIG_FRAMEBUFFER_CONSOLE_DETECT_PRIMARY is not set
+# CONFIG_FRAMEBUFFER_CONSOLE_ROTATION is not set
+CONFIG_FREEZER=y
+CONFIG_FS_MBCACHE=y
+CONFIG_FS_POSIX_ACL=y
+CONFIG_GENERIC_ALLOCATOR=y
+CONFIG_GENERIC_BUG=y
+CONFIG_GENERIC_CLOCKEVENTS=y
+CONFIG_GENERIC_CLOCKEVENTS_BROADCAST=y
+CONFIG_GENERIC_IDLE_POLL_SETUP=y
+CONFIG_GENERIC_IO=y
+CONFIG_GENERIC_IRQ_SHOW=y
+CONFIG_GENERIC_IRQ_SHOW_LEVEL=y
+CONFIG_GENERIC_PCI_IOMAP=y
+CONFIG_GENERIC_PINCONF=y
+CONFIG_GENERIC_SCHED_CLOCK=y
+CONFIG_GENERIC_SMP_IDLE_THREAD=y
+CONFIG_GENERIC_STRNCPY_FROM_USER=y
+CONFIG_GENERIC_STRNLEN_USER=y
+CONFIG_GPIOLIB=y
+CONFIG_GPIO_DEVRES=y
+CONFIG_GPIO_SYSFS=y
+CONFIG_HANDLE_DOMAIN_IRQ=y
+CONFIG_HARDIRQS_SW_RESEND=y
+CONFIG_HAS_DMA=y
+CONFIG_HAS_IOMEM=y
+CONFIG_HAS_IOPORT_MAP=y
+# CONFIG_HAVE_64BIT_ALIGNED_ACCESS is not set
+CONFIG_HAVE_ARCH_BITREVERSE=y
+CONFIG_HAVE_ARCH_JUMP_LABEL=y
+CONFIG_HAVE_ARCH_KGDB=y
+CONFIG_HAVE_ARCH_PFN_VALID=y
+CONFIG_HAVE_ARCH_TRACEHOOK=y
+CONFIG_HAVE_ARM_ARCH_TIMER=y
+# CONFIG_HAVE_BOOTMEM_INFO_NODE is not set
+CONFIG_HAVE_BPF_JIT=y
+CONFIG_HAVE_CC_STACKPROTECTOR=y
+CONFIG_HAVE_CLK=y
+CONFIG_HAVE_CLK_PREPARE=y
+CONFIG_HAVE_CONTEXT_TRACKING=y
+CONFIG_HAVE_C_RECORDMCOUNT=y
+CONFIG_HAVE_DEBUG_KMEMLEAK=y
+CONFIG_HAVE_DMA_API_DEBUG=y
+CONFIG_HAVE_DMA_ATTRS=y
+CONFIG_HAVE_DMA_CONTIGUOUS=y
+CONFIG_HAVE_DYNAMIC_FTRACE=y
+CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y
+CONFIG_HAVE_FTRACE_MCOUNT_RECORD=y
+CONFIG_HAVE_FUNCTION_GRAPH_TRACER=y
+CONFIG_HAVE_FUNCTION_TRACER=y
+CONFIG_HAVE_GENERIC_DMA_COHERENT=y
+CONFIG_HAVE_IRQ_TIME_ACCOUNTING=y
+CONFIG_HAVE_KERNEL_GZIP=y
+CONFIG_HAVE_KERNEL_LZ4=y
+CONFIG_HAVE_KERNEL_LZMA=y
+CONFIG_HAVE_KERNEL_LZO=y
+CONFIG_HAVE_KERNEL_XZ=y
+CONFIG_HAVE_MEMBLOCK=y
+CONFIG_HAVE_MOD_ARCH_SPECIFIC=y
+CONFIG_HAVE_NET_DSA=y
+CONFIG_HAVE_OPROFILE=y
+CONFIG_HAVE_OPTPROBES=y
+CONFIG_HAVE_PERF_EVENTS=y
+CONFIG_HAVE_PERF_REGS=y
+CONFIG_HAVE_PERF_USER_STACK_DUMP=y
+CONFIG_HAVE_PROC_CPU=y
+CONFIG_HAVE_REGS_AND_STACK_ACCESS_API=y
+CONFIG_HAVE_SMP=y
+CONFIG_HAVE_SYSCALL_TRACEPOINTS=y
+CONFIG_HAVE_UID16=y
+CONFIG_HAVE_VIRT_CPU_ACCOUNTING_GEN=y
+CONFIG_HOTPLUG_CPU=y
+CONFIG_HW_CONSOLE=y
+CONFIG_HZ_FIXED=0
+CONFIG_IKCONFIG=y
+CONFIG_IKCONFIG_PROC=y
+CONFIG_INPUT=y
+CONFIG_INPUT_MOUSEDEV=y
+# CONFIG_INPUT_MOUSEDEV_PSAUX is not set
+CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
+CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
+CONFIG_IOMMU_HELPER=y
+CONFIG_IOSCHED_CFQ=y
+CONFIG_IRQCHIP=y
+CONFIG_IRQ_DOMAIN=y
+CONFIG_IRQ_FORCED_THREADING=y
+CONFIG_IRQ_WORK=y
+CONFIG_JBD2=y
+CONFIG_KERNEL_GZIP=y
+# CONFIG_KERNEL_XZ is not set
+# CONFIG_LCD_CLASS_DEVICE is not set
+CONFIG_LEDS_GPIO=y
+CONFIG_LEDS_TRIGGER_INPUT=y
+CONFIG_LIBFDT=y
+CONFIG_LOCK_SPIN_ON_OWNER=y
+CONFIG_LOGO=y
+CONFIG_LOGO_LINUX_CLUT224=y
+# CONFIG_LOGO_LINUX_MONO is not set
+# CONFIG_LOGO_LINUX_VGA16 is not set
+CONFIG_LZO_COMPRESS=y
+CONFIG_LZO_DECOMPRESS=y
+CONFIG_MACH_BCM2709=y
+CONFIG_MAC_PARTITION=y
+CONFIG_MAGIC_SYSRQ=y
+CONFIG_MAILBOX=y
+# CONFIG_MAILBOX_TEST is not set
+CONFIG_MAX_RAW_DEVS=256
+CONFIG_MEMORY_ISOLATION=y
+CONFIG_MFD_SYSCON=y
+CONFIG_MIGHT_HAVE_CACHE_L2X0=y
+CONFIG_MIGRATION=y
+CONFIG_MMC=y
+CONFIG_MMC_BCM2835=y
+CONFIG_MMC_BCM2835_DMA=y
+CONFIG_MMC_BCM2835_PIO_DMA_BARRIER=2
+CONFIG_MMC_BCM2835_SDHOST=y
+CONFIG_MMC_BLOCK=y
+CONFIG_MMC_BLOCK_MINORS=32
+CONFIG_MMC_SDHCI=y
+CONFIG_MMC_SDHCI_PLTFM=y
+CONFIG_MODULES_USE_ELF_REL=y
+# CONFIG_MTD is not set
+CONFIG_MULTI_IRQ_HANDLER=y
+CONFIG_MUTEX_SPIN_ON_OWNER=y
+CONFIG_NEED_DMA_MAP_STATE=y
+CONFIG_NEED_MACH_IO_H=y
+CONFIG_NEED_MACH_MEMORY_H=y
+CONFIG_NEON=y
+CONFIG_NET_FLOW_LIMIT=y
+CONFIG_NLS=y
+CONFIG_NLS_ASCII=y
+CONFIG_NLS_DEFAULT="utf8"
+CONFIG_NO_BOOTMEM=y
+CONFIG_NO_HZ=y
+CONFIG_NO_HZ_COMMON=y
+CONFIG_NO_HZ_IDLE=y
+CONFIG_NR_CPUS=4
+CONFIG_OABI_COMPAT=y
+CONFIG_OF=y
+CONFIG_OF_ADDRESS=y
+CONFIG_OF_EARLY_FLATTREE=y
+CONFIG_OF_FLATTREE=y
+CONFIG_OF_GPIO=y
+CONFIG_OF_IRQ=y
+CONFIG_OF_NET=y
+CONFIG_OF_RESERVED_MEM=y
+CONFIG_OLD_SIGACTION=y
+CONFIG_OLD_SIGSUSPEND3=y
+CONFIG_PAGE_OFFSET=0x80000000
+# CONFIG_PCI_DOMAINS_GENERIC is not set
+# CONFIG_PCI_SYSCALL is not set
+CONFIG_PERF_USE_VMALLOC=y
+CONFIG_PGTABLE_LEVELS=2
+CONFIG_PHYS_OFFSET=0
+CONFIG_PINCTRL=y
+CONFIG_PINCTRL_BCM2835=y
+# CONFIG_PL330_DMA is not set
+CONFIG_PM=y
+CONFIG_PM_CLK=y
+# CONFIG_PM_DEBUG is not set
+CONFIG_PM_SLEEP=y
+CONFIG_PM_SLEEP_SMP=y
+CONFIG_POWER_SUPPLY=y
+CONFIG_PRINTK_TIME=y
+CONFIG_PROC_PAGE_MONITOR=y
+CONFIG_RASPBERRYPI_FIRMWARE=y
+CONFIG_RATIONAL=y
+CONFIG_RAW_DRIVER=y
+CONFIG_RCU_STALL_COMMON=y
+CONFIG_REGMAP=y
+CONFIG_REGMAP_MMIO=y
+CONFIG_RFS_ACCEL=y
+CONFIG_RPS=y
+CONFIG_RWSEM_SPIN_ON_OWNER=y
+CONFIG_RWSEM_XCHGADD_ALGORITHM=y
+CONFIG_SCHED_HRTICK=y
+# CONFIG_SCHED_INFO is not set
+CONFIG_SCSI=y
+# CONFIG_SCSI_LOWLEVEL is not set
+# CONFIG_SCSI_PROC_FS is not set
+# CONFIG_SERIAL_8250_DMA is not set
+CONFIG_SERIAL_8250_FSL=y
+CONFIG_SERIAL_8250_NR_UARTS=1
+CONFIG_SERIAL_8250_RUNTIME_UARTS=0
+# CONFIG_SERIAL_AMBA_PL010 is not set
+CONFIG_SERIAL_AMBA_PL011=y
+CONFIG_SERIAL_AMBA_PL011_CONSOLE=y
+CONFIG_SERIAL_OF_PLATFORM=y
+CONFIG_SMP=y
+CONFIG_SMP_ON_UP=y
+CONFIG_SPARSE_IRQ=y
+# CONFIG_SQUASHFS is not set
+CONFIG_SRCU=y
+# CONFIG_STAGING is not set
+# CONFIG_STRIP_ASM_SYMS is not set
+# CONFIG_SUNXI_SRAM is not set
+CONFIG_SUSPEND=y
+CONFIG_SUSPEND_FREEZER=y
+CONFIG_SWIOTLB=y
+CONFIG_SWP_EMULATE=y
+CONFIG_SYS_SUPPORTS_APM_EMULATION=y
+# CONFIG_TEXTSEARCH is not set
+CONFIG_THERMAL=y
+CONFIG_THERMAL_BCM2835=y
+CONFIG_THERMAL_DEFAULT_GOV_STEP_WISE=y
+CONFIG_THERMAL_GOV_STEP_WISE=y
+CONFIG_THERMAL_OF=y
+# CONFIG_THUMB2_KERNEL is not set
+CONFIG_TICK_CPU_ACCOUNTING=y
+CONFIG_TMPFS_POSIX_ACL=y
+CONFIG_TREE_RCU=y
+CONFIG_UEVENT_HELPER_PATH=""
+# CONFIG_UID16 is not set
+CONFIG_UNCOMPRESS_INCLUDE="mach/uncompress.h"
+CONFIG_USB=y
+CONFIG_USB_ANNOUNCE_NEW_DEVICES=y
+CONFIG_USB_COMMON=y
+CONFIG_USB_DWCOTG=y
+# CONFIG_USB_EHCI_HCD is not set
+CONFIG_USB_NET_DRIVERS=y
+CONFIG_USB_NET_SMSC95XX=y
+CONFIG_USB_STORAGE=y
+CONFIG_USB_SUPPORT=y
+CONFIG_USB_UAS=y
+CONFIG_USB_USBNET=y
+CONFIG_USE_OF=y
+CONFIG_VECTORS_BASE=0xffff0000
+CONFIG_VFP=y
+CONFIG_VFPv3=y
+CONFIG_VMSPLIT_2G=y
+# CONFIG_VMSPLIT_3G is not set
+CONFIG_VT=y
+CONFIG_VT_CONSOLE=y
+CONFIG_VT_CONSOLE_SLEEP=y
+CONFIG_VT_HW_CONSOLE_BINDING=y
+CONFIG_WATCHDOG_CORE=y
+CONFIG_XPS=y
+CONFIG_XZ_DEC_ARM=y
+CONFIG_XZ_DEC_BCJ=y
+CONFIG_ZBOOT_ROM_BSS=0x0
+CONFIG_ZBOOT_ROM_TEXT=0x0
+CONFIG_ZONE_DMA_FLAG=0
diff --git a/target/linux/brcm2708/modules.mk b/target/linux/brcm2708/modules.mk
index 3bc592c..4a00152 100644
--- a/target/linux/brcm2708/modules.mk
+++ b/target/linux/brcm2708/modules.mk
@@ -36,7 +36,7 @@ define KernelPackage/sound-soc-bcm2708-i2s
   FILES:= \
 	$(LINUX_DIR)/sound/soc/bcm/snd-soc-bcm2708-i2s.ko
   AUTOLOAD:=$(call AutoLoad,68,snd-soc-bcm2708-i2s)
-  DEPENDS:=@TARGET_brcm2708 +kmod-regmap +kmod-sound-soc-core
+  DEPENDS:=@TARGET_brcm2708 @LINUX_4_1 +kmod-regmap +kmod-sound-soc-core
   $(call AddDepends/sound)
 endef
 
@@ -46,6 +46,25 @@ endef
 
 $(eval $(call KernelPackage,sound-soc-bcm2708-i2s))
 
+define KernelPackage/sound-soc-bcm2835-i2s
+  TITLE:=SoC Audio support for the Broadcom 2835 I2S module
+  KCONFIG:= \
+	CONFIG_SND_BCM2835_SOC_I2S \
+	CONFIG_SND_SOC_DMAENGINE_PCM=y \
+	CONFIG_SND_SOC_GENERIC_DMAENGINE_PCM=y
+  FILES:= \
+	$(LINUX_DIR)/sound/soc/bcm/snd-soc-bcm2835-i2s.ko
+  AUTOLOAD:=$(call AutoLoad,68,snd-soc-bcm2835-i2s)
+  DEPENDS:=@TARGET_brcm2708 @LINUX_4_4 +kmod-regmap +kmod-sound-soc-core
+  $(call AddDepends/sound)
+endef
+
+define KernelPackage/sound-soc-bcm2835-i2s/description
+  This package contains support for codecs attached to the Broadcom 2835 I2S interface
+endef
+
+$(eval $(call KernelPackage,sound-soc-bcm2835-i2s))
+
 define KernelPackage/sound-soc-hifiberry-dac
   TITLE:=Support for HifiBerry DAC
   KCONFIG:= \
@@ -55,7 +74,10 @@ define KernelPackage/sound-soc-hifiberry-dac
 	$(LINUX_DIR)/sound/soc/bcm/snd-soc-hifiberry-dac.ko \
 	$(LINUX_DIR)/sound/soc/codecs/snd-soc-pcm5102a.ko
   AUTOLOAD:=$(call AutoLoad,68,snd-soc-pcm5102a snd-soc-hifiberry-dac)
-  DEPENDS:=kmod-sound-soc-bcm2708-i2s +kmod-i2c-bcm2708
+  DEPENDS:= \
+	LINUX_4_1:kmod-sound-soc-bcm2708-i2s \
+	LINUX_4_4:kmod-sound-soc-bcm2835-i2s \
+	+kmod-i2c-bcm2708
   $(call AddDepends/sound)
 endef
 
@@ -75,7 +97,10 @@ define KernelPackage/sound-soc-hifiberry-dacplus
 	$(LINUX_DIR)/sound/soc/bcm/snd-soc-hifiberry-dacplus.ko \
 	$(LINUX_DIR)/sound/soc/codecs/snd-soc-pcm512x.ko
   AUTOLOAD:=$(call AutoLoad,68,clk-hifiberry-dacpro snd-soc-pcm512x snd-soc-hifiberry-dacplus)
-  DEPENDS:=kmod-sound-soc-bcm2708-i2s +kmod-i2c-bcm2708
+  DEPENDS:= \
+	LINUX_4_1:kmod-sound-soc-bcm2708-i2s \
+	LINUX_4_4:kmod-sound-soc-bcm2835-i2s \
+	+kmod-i2c-bcm2708
   $(call AddDepends/sound)
 endef
 
@@ -94,7 +119,10 @@ define KernelPackage/sound-soc-hifiberry-digi
 	$(LINUX_DIR)/sound/soc/bcm/snd-soc-hifiberry-digi.ko \
 	$(LINUX_DIR)/sound/soc/codecs/snd-soc-wm8804.ko
   AUTOLOAD:=$(call AutoLoad,68,snd-soc-wm8804 snd-soc-hifiberry-digi)
-  DEPENDS:=kmod-sound-soc-bcm2708-i2s +kmod-i2c-bcm2708
+  DEPENDS:= \
+	LINUX_4_1:kmod-sound-soc-bcm2708-i2s \
+	LINUX_4_4:kmod-sound-soc-bcm2835-i2s \
+	+kmod-i2c-bcm2708
   $(call AddDepends/sound)
 endef
 
@@ -113,7 +141,10 @@ define KernelPackage/sound-soc-hifiberry-amp
 	$(LINUX_DIR)/sound/soc/bcm/snd-soc-hifiberry-amp.ko \
 	$(LINUX_DIR)/sound/soc/codecs/snd-soc-tas5713.ko
   AUTOLOAD:=$(call AutoLoad,68,snd-soc-tas5713 snd-soc-hifiberry-amp)
-  DEPENDS:=kmod-sound-soc-bcm2708-i2s +kmod-i2c-bcm2708
+  DEPENDS:= \
+	LINUX_4_1:kmod-sound-soc-bcm2708-i2s \
+	LINUX_4_4:kmod-sound-soc-bcm2835-i2s \
+	+kmod-i2c-bcm2708
   $(call AddDepends/sound)
 endef
 
@@ -132,7 +163,10 @@ define KernelPackage/sound-soc-rpi-dac
 	$(LINUX_DIR)/sound/soc/bcm/snd-soc-rpi-dac.ko \
 	$(LINUX_DIR)/sound/soc/codecs/snd-soc-pcm1794a.ko
   AUTOLOAD:=$(call AutoLoad,68,snd-soc-pcm1794a snd-soc-rpi-dac)
-  DEPENDS:=kmod-sound-soc-bcm2708-i2s +kmod-i2c-bcm2708
+  DEPENDS:= \
+	LINUX_4_1:kmod-sound-soc-bcm2708-i2s \
+	LINUX_4_4:kmod-sound-soc-bcm2835-i2s \
+	+kmod-i2c-bcm2708
   $(call AddDepends/sound)
 endef
 
@@ -151,7 +185,10 @@ define KernelPackage/sound-soc-rpi-proto
 	$(LINUX_DIR)/sound/soc/bcm/snd-soc-rpi-proto.ko \
 	$(LINUX_DIR)/sound/soc/codecs/snd-soc-wm8731.ko
   AUTOLOAD:=$(call AutoLoad,68,snd-soc-wm8731 snd-soc-rpi-proto)
-  DEPENDS:=kmod-sound-soc-bcm2708-i2s +kmod-i2c-bcm2708
+  DEPENDS:= \
+	LINUX_4_1:kmod-sound-soc-bcm2708-i2s \
+	LINUX_4_4:kmod-sound-soc-bcm2835-i2s \
+	+kmod-i2c-bcm2708
   $(call AddDepends/sound)
 endef
 
@@ -172,7 +209,10 @@ define KernelPackage/sound-soc-iqaudio-dac
 	$(LINUX_DIR)/sound/soc/codecs/snd-soc-pcm512x.ko \
 	$(LINUX_DIR)/sound/soc/codecs/snd-soc-pcm512x-i2c.ko
   AUTOLOAD:=$(call AutoLoad,68,snd-soc-pcm512x snd-soc-pcm512x-i2c snd-soc-iqaudio-dac)
-  DEPENDS:=kmod-sound-soc-bcm2708-i2s +kmod-i2c-bcm2708
+  DEPENDS:= \
+	LINUX_4_1:kmod-sound-soc-bcm2708-i2s \
+	LINUX_4_4:kmod-sound-soc-bcm2835-i2s \
+	+kmod-i2c-bcm2708
   $(call AddDepends/sound)
 endef
 
@@ -195,7 +235,10 @@ define KernelPackage/sound-soc-raspidac3
 	$(LINUX_DIR)/sound/soc/codecs/snd-soc-pcm512x-i2c.ko \
 	$(LINUX_DIR)/sound/soc/codecs/snd-soc-tpa6130a2.ko
   AUTOLOAD:=$(call AutoLoad,68,snd-soc-pcm512x snd-soc-pcm512x-i2c snd-soc-tpa6130a2 snd-soc-raspidac3)
-  DEPENDS:=kmod-sound-soc-bcm2708-i2s +kmod-i2c-bcm2708
+  DEPENDS:= \
+	LINUX_4_1:kmod-sound-soc-bcm2708-i2s \
+	LINUX_4_4:kmod-sound-soc-bcm2835-i2s \
+	+kmod-i2c-bcm2708
   $(call AddDepends/sound)
 endef
 
@@ -212,7 +255,7 @@ define KernelPackage/random-bcm2708
   KCONFIG:=CONFIG_HW_RANDOM_BCM2708
   FILES:=$(LINUX_DIR)/drivers/char/hw_random/bcm2708-rng.ko
   AUTOLOAD:=$(call AutoLoad,11,bcm2708-rng)
-  DEPENDS:=@TARGET_brcm2708 +kmod-random-core
+  DEPENDS:=@TARGET_brcm2708 @LINUX_4_1 +kmod-random-core
 endef
 
 define KernelPackage/random-bcm2708/description
@@ -281,7 +324,7 @@ define KernelPackage/spi-bcm2708
     CONFIG_SPI_MASTER=y
   FILES:=$(LINUX_DIR)/drivers/spi/spi-bcm2708.ko
   AUTOLOAD:=$(call AutoLoad,89,spi-bcm2708)
-  DEPENDS:=@TARGET_brcm2708
+  DEPENDS:=@TARGET_brcm2708 @LINUX_4_1
 endef
 
 define KernelPackage/spi-bcm2708/description
diff --git a/target/linux/brcm2708/patches-4.4/0001-smsx95xx-fix-crimes-against-truesize.patch b/target/linux/brcm2708/patches-4.4/0001-smsx95xx-fix-crimes-against-truesize.patch
new file mode 100644
index 0000000..48cb813
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0001-smsx95xx-fix-crimes-against-truesize.patch
@@ -0,0 +1,33 @@
+From 8c2c0f30ef9ee0eccd3e56c6aeb110097569d5aa Mon Sep 17 00:00:00 2001
+From: Steve Glendinning <steve.glendinning at smsc.com>
+Date: Thu, 19 Feb 2015 18:47:12 +0000
+Subject: [PATCH 001/127] smsx95xx: fix crimes against truesize
+
+smsc95xx is adjusting truesize when it shouldn't, and following a recent patch from Eric this is now triggering warnings.
+
+This patch stops smsc95xx from changing truesize.
+
+Signed-off-by: Steve Glendinning <steve.glendinning at smsc.com>
+---
+ drivers/net/usb/smsc95xx.c | 2 --
+ 1 file changed, 2 deletions(-)
+ mode change 100644 => 100755 drivers/net/usb/smsc95xx.c
+
+--- a/drivers/net/usb/smsc95xx.c
++++ b/drivers/net/usb/smsc95xx.c
+@@ -1785,7 +1785,6 @@ static int smsc95xx_rx_fixup(struct usbn
+ 				if (dev->net->features & NETIF_F_RXCSUM)
+ 					smsc95xx_rx_csum_offload(skb);
+ 				skb_trim(skb, skb->len - 4); /* remove fcs */
+-				skb->truesize = size + sizeof(struct sk_buff);
+ 
+ 				return 1;
+ 			}
+@@ -1803,7 +1802,6 @@ static int smsc95xx_rx_fixup(struct usbn
+ 			if (dev->net->features & NETIF_F_RXCSUM)
+ 				smsc95xx_rx_csum_offload(ax_skb);
+ 			skb_trim(ax_skb, ax_skb->len - 4); /* remove fcs */
+-			ax_skb->truesize = size + sizeof(struct sk_buff);
+ 
+ 			usbnet_skb_return(dev, ax_skb);
+ 		}
diff --git a/target/linux/brcm2708/patches-4.4/0002-smsc95xx-Disable-turbo-mode-by-default.patch b/target/linux/brcm2708/patches-4.4/0002-smsc95xx-Disable-turbo-mode-by-default.patch
new file mode 100644
index 0000000..185da76
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0002-smsc95xx-Disable-turbo-mode-by-default.patch
@@ -0,0 +1,20 @@
+From 96f6fc6f990423f39b73013cd91f00a615315a90 Mon Sep 17 00:00:00 2001
+From: popcornmix <popcornmix at gmail.com>
+Date: Fri, 17 Apr 2015 16:58:45 +0100
+Subject: [PATCH 002/127] smsc95xx: Disable turbo mode by default
+
+---
+ drivers/net/usb/smsc95xx.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/drivers/net/usb/smsc95xx.c
++++ b/drivers/net/usb/smsc95xx.c
+@@ -70,7 +70,7 @@ struct smsc95xx_priv {
+ 	u8 suspend_flags;
+ };
+ 
+-static bool turbo_mode = true;
++static bool turbo_mode = false;
+ module_param(turbo_mode, bool, 0644);
+ MODULE_PARM_DESC(turbo_mode, "Enable multiple frames per Rx transaction");
+ 
diff --git a/target/linux/brcm2708/patches-4.4/0003-vmstat-Workaround-for-issue-where-dirty-page-count-g.patch b/target/linux/brcm2708/patches-4.4/0003-vmstat-Workaround-for-issue-where-dirty-page-count-g.patch
new file mode 100644
index 0000000..4361b54
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0003-vmstat-Workaround-for-issue-where-dirty-page-count-g.patch
@@ -0,0 +1,27 @@
+From 3ea06ff9ba42a29f37b46ced2fb90ff9e06da445 Mon Sep 17 00:00:00 2001
+From: popcornmix <popcornmix at gmail.com>
+Date: Wed, 18 Jun 2014 13:42:01 +0100
+Subject: [PATCH 003/127] vmstat: Workaround for issue where dirty page count
+ goes negative
+
+See:
+https://github.com/raspberrypi/linux/issues/617
+http://www.spinics.net/lists/linux-mm/msg72236.html
+---
+ include/linux/vmstat.h | 4 ++++
+ 1 file changed, 4 insertions(+)
+
+--- a/include/linux/vmstat.h
++++ b/include/linux/vmstat.h
+@@ -219,7 +219,11 @@ static inline void __inc_zone_state(stru
+ static inline void __dec_zone_state(struct zone *zone, enum zone_stat_item item)
+ {
+ 	atomic_long_dec(&zone->vm_stat[item]);
++	if (item == NR_FILE_DIRTY && unlikely(atomic_long_read(&zone->vm_stat[item]) < 0))
++		atomic_long_set(&zone->vm_stat[item], 0);
+ 	atomic_long_dec(&vm_stat[item]);
++	if (item == NR_FILE_DIRTY && unlikely(atomic_long_read(&vm_stat[item]) < 0))
++		atomic_long_set(&vm_stat[item], 0);
+ }
+ 
+ static inline void __inc_zone_page_state(struct page *page,
diff --git a/target/linux/brcm2708/patches-4.4/0004-BCM2835_DT-Fix-I2S-register-map.patch b/target/linux/brcm2708/patches-4.4/0004-BCM2835_DT-Fix-I2S-register-map.patch
new file mode 100644
index 0000000..1b4a086
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0004-BCM2835_DT-Fix-I2S-register-map.patch
@@ -0,0 +1,50 @@
+From 6c72a609c205138b739a1484aa1a4ce6dd395c43 Mon Sep 17 00:00:00 2001
+From: Robert Tiemann <rtie at gmx.de>
+Date: Mon, 20 Jul 2015 11:01:25 +0200
+Subject: [PATCH 004/127] BCM2835_DT: Fix I2S register map
+
+---
+ Documentation/devicetree/bindings/dma/brcm,bcm2835-dma.txt   | 4 ++--
+ Documentation/devicetree/bindings/sound/brcm,bcm2835-i2s.txt | 4 ++--
+ arch/arm/boot/dts/bcm2835.dtsi                               | 4 ++--
+ 3 files changed, 6 insertions(+), 6 deletions(-)
+
+--- a/Documentation/devicetree/bindings/dma/brcm,bcm2835-dma.txt
++++ b/Documentation/devicetree/bindings/dma/brcm,bcm2835-dma.txt
+@@ -48,8 +48,8 @@ Example:
+ 
+ bcm2835_i2s: i2s at 7e203000 {
+ 	compatible = "brcm,bcm2835-i2s";
+-	reg = <	0x7e203000 0x20>,
+-	      < 0x7e101098 0x02>;
++	reg = <	0x7e203000 0x24>,
++	      < 0x7e101098 0x08>;
+ 
+ 	dmas = <&dma 2>,
+ 	       <&dma 3>;
+--- a/Documentation/devicetree/bindings/sound/brcm,bcm2835-i2s.txt
++++ b/Documentation/devicetree/bindings/sound/brcm,bcm2835-i2s.txt
+@@ -16,8 +16,8 @@ Example:
+ 
+ bcm2835_i2s: i2s at 7e203000 {
+ 	compatible = "brcm,bcm2835-i2s";
+-	reg = <0x7e203000 0x20>,
+-	      <0x7e101098 0x02>;
++	reg = <0x7e203000 0x24>,
++	      <0x7e101098 0x08>;
+ 
+ 	dmas = <&dma 2>,
+ 	       <&dma 3>;
+--- a/arch/arm/boot/dts/bcm2835.dtsi
++++ b/arch/arm/boot/dts/bcm2835.dtsi
+@@ -120,8 +120,8 @@
+ 
+ 		i2s: i2s at 7e203000 {
+ 			compatible = "brcm,bcm2835-i2s";
+-			reg = <0x7e203000 0x20>,
+-			      <0x7e101098 0x02>;
++			reg = <0x7e203000 0x24>,
++			      <0x7e101098 0x08>;
+ 
+ 			dmas = <&dma 2>,
+ 			       <&dma 3>;
diff --git a/target/linux/brcm2708/patches-4.4/0005-irq-bcm2836-Prevent-spurious-interrupts-and-trap-the.patch b/target/linux/brcm2708/patches-4.4/0005-irq-bcm2836-Prevent-spurious-interrupts-and-trap-the.patch
new file mode 100644
index 0000000..a0dc4e7
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0005-irq-bcm2836-Prevent-spurious-interrupts-and-trap-the.patch
@@ -0,0 +1,31 @@
+From 38395f1ae1258743c3e0081c86bb4b65ad06dd69 Mon Sep 17 00:00:00 2001
+From: Phil Elwell <phil at raspberrypi.org>
+Date: Fri, 4 Dec 2015 17:41:50 +0000
+Subject: [PATCH 005/127] irq-bcm2836: Prevent spurious interrupts, and trap
+ them early
+
+The old arch-specific IRQ macros included a dsb to ensure the
+write to clear the mailbox interrupt completed before returning
+from the interrupt. The BCM2836 irqchip driver needs the same
+precaution to avoid spurious interrupts.
+
+Spurious interrupts are still possible for other reasons,
+though, so trap them early.
+---
+ drivers/irqchip/irq-bcm2836.c | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+--- a/drivers/irqchip/irq-bcm2836.c
++++ b/drivers/irqchip/irq-bcm2836.c
+@@ -170,9 +170,10 @@ __exception_irq_entry bcm2836_arm_irqchi
+ 		u32 ipi = ffs(mbox_val) - 1;
+ 
+ 		writel(1 << ipi, mailbox0);
++		dsb();
+ 		handle_IPI(ipi, regs);
+ #endif
+-	} else {
++	} else if (stat) {
+ 		u32 hwirq = ffs(stat) - 1;
+ 
+ 		handle_IRQ(irq_linear_revmap(intc.domain, hwirq), regs);
diff --git a/target/linux/brcm2708/patches-4.4/0006-irqchip-bcm2835-Add-FIQ-support.patch b/target/linux/brcm2708/patches-4.4/0006-irqchip-bcm2835-Add-FIQ-support.patch
new file mode 100644
index 0000000..28ba48b
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0006-irqchip-bcm2835-Add-FIQ-support.patch
@@ -0,0 +1,127 @@
+From 44b5e890373665231d9a5876966ef3a670b9efd7 Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?Noralf=20Tr=C3=B8nnes?= <noralf at tronnes.org>
+Date: Fri, 12 Jun 2015 19:01:05 +0200
+Subject: [PATCH 006/127] irqchip: bcm2835: Add FIQ support
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+Add a duplicate irq range with an offset on the hwirq's so the
+driver can detect that enable_fiq() is used.
+Tested with downstream dwc_otg USB controller driver.
+
+Signed-off-by: Noralf Trønnes <noralf at tronnes.org>
+Reviewed-by: Eric Anholt <eric at anholt.net>
+Acked-by: Stephen Warren <swarren at wwwdotorg.org>
+---
+ arch/arm/mach-bcm/Kconfig     |  1 +
+ drivers/irqchip/irq-bcm2835.c | 51 ++++++++++++++++++++++++++++++++++++++-----
+ 2 files changed, 47 insertions(+), 5 deletions(-)
+
+--- a/arch/arm/mach-bcm/Kconfig
++++ b/arch/arm/mach-bcm/Kconfig
+@@ -128,6 +128,7 @@ config ARCH_BCM2835
+ 	select ARM_ERRATA_411920
+ 	select ARM_TIMER_SP804
+ 	select CLKSRC_OF
++	select FIQ
+ 	select PINCTRL
+ 	select PINCTRL_BCM2835
+ 	help
+--- a/drivers/irqchip/irq-bcm2835.c
++++ b/drivers/irqchip/irq-bcm2835.c
+@@ -55,7 +55,7 @@
+ #include <asm/mach/irq.h>
+ 
+ /* Put the bank and irq (32 bits) into the hwirq */
+-#define MAKE_HWIRQ(b, n)	((b << 5) | (n))
++#define MAKE_HWIRQ(b, n)	(((b) << 5) | (n))
+ #define HWIRQ_BANK(i)		(i >> 5)
+ #define HWIRQ_BIT(i)		BIT(i & 0x1f)
+ 
+@@ -71,9 +71,13 @@
+ 					| SHORTCUT1_MASK | SHORTCUT2_MASK)
+ 
+ #define REG_FIQ_CONTROL		0x0c
++#define REG_FIQ_ENABLE		0x80
++#define REG_FIQ_DISABLE		0
+ 
+ #define NR_BANKS		3
+ #define IRQS_PER_BANK		32
++#define NUMBER_IRQS		MAKE_HWIRQ(NR_BANKS, 0)
++#define FIQ_START		(NR_IRQS_BANK0 + MAKE_HWIRQ(NR_BANKS - 1, 0))
+ 
+ static const int reg_pending[] __initconst = { 0x00, 0x04, 0x08 };
+ static const int reg_enable[] __initconst = { 0x18, 0x10, 0x14 };
+@@ -98,14 +102,38 @@ static void __exception_irq_entry bcm283
+ 	struct pt_regs *regs);
+ static void bcm2836_chained_handle_irq(struct irq_desc *desc);
+ 
++static inline unsigned int hwirq_to_fiq(unsigned long hwirq)
++{
++	hwirq -= NUMBER_IRQS;
++	/*
++	 * The hwirq numbering used in this driver is:
++	 *   BASE (0-7) GPU1 (32-63) GPU2 (64-95).
++	 * This differ from the one used in the FIQ register:
++	 *   GPU1 (0-31) GPU2 (32-63) BASE (64-71)
++	 */
++	if (hwirq >= 32)
++		return hwirq - 32;
++
++	return hwirq + 64;
++}
++
+ static void armctrl_mask_irq(struct irq_data *d)
+ {
+-	writel_relaxed(HWIRQ_BIT(d->hwirq), intc.disable[HWIRQ_BANK(d->hwirq)]);
++	if (d->hwirq >= NUMBER_IRQS)
++		writel_relaxed(REG_FIQ_DISABLE, intc.base + REG_FIQ_CONTROL);
++	else
++		writel_relaxed(HWIRQ_BIT(d->hwirq),
++			       intc.disable[HWIRQ_BANK(d->hwirq)]);
+ }
+ 
+ static void armctrl_unmask_irq(struct irq_data *d)
+ {
+-	writel_relaxed(HWIRQ_BIT(d->hwirq), intc.enable[HWIRQ_BANK(d->hwirq)]);
++	if (d->hwirq >= NUMBER_IRQS)
++		writel_relaxed(REG_FIQ_ENABLE | hwirq_to_fiq(d->hwirq),
++			       intc.base + REG_FIQ_CONTROL);
++	else
++		writel_relaxed(HWIRQ_BIT(d->hwirq),
++			       intc.enable[HWIRQ_BANK(d->hwirq)]);
+ }
+ 
+ static struct irq_chip armctrl_chip = {
+@@ -151,8 +179,9 @@ static int __init armctrl_of_init(struct
+ 		panic("%s: unable to map IC registers\n",
+ 			node->full_name);
+ 
+-	intc.domain = irq_domain_add_linear(node, MAKE_HWIRQ(NR_BANKS, 0),
+-			&armctrl_ops, NULL);
++	intc.base = base;
++	intc.domain = irq_domain_add_linear(node, NUMBER_IRQS * 2,
++					    &armctrl_ops, NULL);
+ 	if (!intc.domain)
+ 		panic("%s: unable to create IRQ domain\n", node->full_name);
+ 
+@@ -182,6 +211,18 @@ static int __init armctrl_of_init(struct
+ 		set_handle_irq(bcm2835_handle_irq);
+ 	}
+ 
++	/* Make a duplicate irq range which is used to enable FIQ */
++	for (b = 0; b < NR_BANKS; b++) {
++		for (i = 0; i < bank_irqs[b]; i++) {
++			irq = irq_create_mapping(intc.domain,
++					MAKE_HWIRQ(b, i) + NUMBER_IRQS);
++			BUG_ON(irq <= 0);
++			irq_set_chip(irq, &armctrl_chip);
++			set_irq_flags(irq, IRQF_VALID | IRQF_PROBE);
++		}
++	}
++	init_FIQ(FIQ_START);
++
+ 	return 0;
+ }
+ 
diff --git a/target/linux/brcm2708/patches-4.4/0007-irqchip-irq-bcm2835-Add-2836-FIQ-support.patch b/target/linux/brcm2708/patches-4.4/0007-irqchip-irq-bcm2835-Add-2836-FIQ-support.patch
new file mode 100644
index 0000000..9fa768c
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0007-irqchip-irq-bcm2835-Add-2836-FIQ-support.patch
@@ -0,0 +1,96 @@
+From e3e8c56abfe6a036025f75908b63ae69d8eaed11 Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?Noralf=20Tr=C3=B8nnes?= <noralf at tronnes.org>
+Date: Fri, 23 Oct 2015 16:26:55 +0200
+Subject: [PATCH 007/127] irqchip: irq-bcm2835: Add 2836 FIQ support
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+Signed-off-by: Noralf Trønnes <noralf at tronnes.org>
+---
+ drivers/irqchip/irq-bcm2835.c | 42 ++++++++++++++++++++++++++++++++++++++++--
+ 1 file changed, 40 insertions(+), 2 deletions(-)
+
+--- a/drivers/irqchip/irq-bcm2835.c
++++ b/drivers/irqchip/irq-bcm2835.c
+@@ -50,6 +50,8 @@
+ #include <linux/of_irq.h>
+ #include <linux/irqchip.h>
+ #include <linux/irqdomain.h>
++#include <linux/mfd/syscon.h>
++#include <linux/regmap.h>
+ 
+ #include <asm/exception.h>
+ #include <asm/mach/irq.h>
+@@ -70,6 +72,9 @@
+ #define BANK0_VALID_MASK	(BANK0_HWIRQ_MASK | BANK1_HWIRQ | BANK2_HWIRQ \
+ 					| SHORTCUT1_MASK | SHORTCUT2_MASK)
+ 
++#undef ARM_LOCAL_GPU_INT_ROUTING
++#define ARM_LOCAL_GPU_INT_ROUTING 0x0c
++
+ #define REG_FIQ_CONTROL		0x0c
+ #define REG_FIQ_ENABLE		0x80
+ #define REG_FIQ_DISABLE		0
+@@ -95,6 +100,7 @@ struct armctrl_ic {
+ 	void __iomem *enable[NR_BANKS];
+ 	void __iomem *disable[NR_BANKS];
+ 	struct irq_domain *domain;
++	struct regmap *local_regmap;
+ };
+ 
+ static struct armctrl_ic intc __read_mostly;
+@@ -128,12 +134,35 @@ static void armctrl_mask_irq(struct irq_
+ 
+ static void armctrl_unmask_irq(struct irq_data *d)
+ {
+-	if (d->hwirq >= NUMBER_IRQS)
++	if (d->hwirq >= NUMBER_IRQS) {
++		if (num_online_cpus() > 1) {
++			unsigned int data;
++			int ret;
++
++			if (!intc.local_regmap) {
++				pr_err("FIQ is disabled due to missing regmap\n");
++				return;
++			}
++
++			ret = regmap_read(intc.local_regmap,
++					  ARM_LOCAL_GPU_INT_ROUTING, &data);
++			if (ret) {
++				pr_err("Failed to read int routing %d\n", ret);
++				return;
++			}
++
++			data &= ~0xc;
++			data |= (1 << 2);
++			regmap_write(intc.local_regmap,
++				     ARM_LOCAL_GPU_INT_ROUTING, data);
++		}
++
+ 		writel_relaxed(REG_FIQ_ENABLE | hwirq_to_fiq(d->hwirq),
+ 			       intc.base + REG_FIQ_CONTROL);
+-	else
++	} else {
+ 		writel_relaxed(HWIRQ_BIT(d->hwirq),
+ 			       intc.enable[HWIRQ_BANK(d->hwirq)]);
++	}
+ }
+ 
+ static struct irq_chip armctrl_chip = {
+@@ -211,6 +240,15 @@ static int __init armctrl_of_init(struct
+ 		set_handle_irq(bcm2835_handle_irq);
+ 	}
+ 
++	if (is_2836) {
++		intc.local_regmap =
++			syscon_regmap_lookup_by_compatible("brcm,bcm2836-arm-local");
++		if (IS_ERR(intc.local_regmap)) {
++			pr_err("Failed to get local register map. FIQ is disabled for cpus > 1\n");
++			intc.local_regmap = NULL;
++		}
++	}
++
+ 	/* Make a duplicate irq range which is used to enable FIQ */
+ 	for (b = 0; b < NR_BANKS; b++) {
+ 		for (i = 0; i < bank_irqs[b]; i++) {
diff --git a/target/linux/brcm2708/patches-4.4/0008-serial-8250-Don-t-crash-when-nr_uarts-is-0.patch b/target/linux/brcm2708/patches-4.4/0008-serial-8250-Don-t-crash-when-nr_uarts-is-0.patch
new file mode 100644
index 0000000..46bc447
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0008-serial-8250-Don-t-crash-when-nr_uarts-is-0.patch
@@ -0,0 +1,20 @@
+From 4bff078f28e6a2d55d18e06c0a92b0b78b8ea6cb Mon Sep 17 00:00:00 2001
+From: Phil Elwell <phil at raspberrypi.org>
+Date: Tue, 30 Jun 2015 14:12:42 +0100
+Subject: [PATCH 008/127] serial: 8250: Don't crash when nr_uarts is 0
+
+---
+ drivers/tty/serial/8250/8250_core.c | 2 ++
+ 1 file changed, 2 insertions(+)
+
+--- a/drivers/tty/serial/8250/8250_core.c
++++ b/drivers/tty/serial/8250/8250_core.c
+@@ -509,6 +509,8 @@ static void __init serial8250_isa_init_p
+ 
+ 	if (nr_uarts > UART_NR)
+ 		nr_uarts = UART_NR;
++	if (!nr_uarts)
++		return;
+ 
+ 	for (i = 0; i < nr_uarts; i++) {
+ 		struct uart_8250_port *up = &serial8250_ports[i];
diff --git a/target/linux/brcm2708/patches-4.4/0009-pinctrl-bcm2835-Set-base-to-0-give-expected-gpio-num.patch b/target/linux/brcm2708/patches-4.4/0009-pinctrl-bcm2835-Set-base-to-0-give-expected-gpio-num.patch
new file mode 100644
index 0000000..96381b6
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0009-pinctrl-bcm2835-Set-base-to-0-give-expected-gpio-num.patch
@@ -0,0 +1,22 @@
+From 91ea61bbf9285586d27442dc3b85ea34805ccf38 Mon Sep 17 00:00:00 2001
+From: notro <notro at tronnes.org>
+Date: Thu, 10 Jul 2014 13:59:47 +0200
+Subject: [PATCH 009/127] pinctrl-bcm2835: Set base to 0 give expected gpio
+ numbering
+
+Signed-off-by: Noralf Tronnes <notro at tronnes.org>
+---
+ drivers/pinctrl/bcm/pinctrl-bcm2835.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/drivers/pinctrl/bcm/pinctrl-bcm2835.c
++++ b/drivers/pinctrl/bcm/pinctrl-bcm2835.c
+@@ -373,7 +373,7 @@ static struct gpio_chip bcm2835_gpio_chi
+ 	.get = bcm2835_gpio_get,
+ 	.set = bcm2835_gpio_set,
+ 	.to_irq = bcm2835_gpio_to_irq,
+-	.base = -1,
++	.base = 0,
+ 	.ngpio = BCM2835_NUM_GPIOS,
+ 	.can_sleep = false,
+ };
diff --git a/target/linux/brcm2708/patches-4.4/0010-pinctrl-bcm2835-Fix-interrupt-handling-for-GPIOs-28-.patch b/target/linux/brcm2708/patches-4.4/0010-pinctrl-bcm2835-Fix-interrupt-handling-for-GPIOs-28-.patch
new file mode 100644
index 0000000..c3a051f
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0010-pinctrl-bcm2835-Fix-interrupt-handling-for-GPIOs-28-.patch
@@ -0,0 +1,146 @@
+From 15367f46e17775c4d736ed1cfc318218362c6a4d Mon Sep 17 00:00:00 2001
+From: Phil Elwell <phil at raspberrypi.org>
+Date: Tue, 24 Feb 2015 13:40:50 +0000
+Subject: [PATCH 010/127] pinctrl-bcm2835: Fix interrupt handling for GPIOs
+ 28-31 and 46-53
+
+Contrary to the documentation, the BCM2835 GPIO controller actually has
+four interrupt lines - one each for the three IRQ groups and one common. Rather
+confusingly, the GPIO interrupt groups don't correspond directly with the GPIO
+control banks. Instead, GPIOs 0-27 generate IRQ GPIO0, 28-45 GPIO1 and
+46-53 GPIO2.
+
+Awkwardly, the GPIOS for IRQ GPIO1 straddle two 32-entry GPIO banks, so it is
+cleaner to split out a function to process the interrupts for a single GPIO
+bank.
+
+This bug has only just been observed because GPIOs above 27 can only be
+accessed on an old Raspberry Pi with the optional P5 header fitted, where
+the pins are often used for I2S instead.
+---
+ drivers/pinctrl/bcm/pinctrl-bcm2835.c | 51 ++++++++++++++++++++++++++---------
+ 1 file changed, 39 insertions(+), 12 deletions(-)
+
+--- a/drivers/pinctrl/bcm/pinctrl-bcm2835.c
++++ b/drivers/pinctrl/bcm/pinctrl-bcm2835.c
+@@ -47,6 +47,7 @@
+ #define MODULE_NAME "pinctrl-bcm2835"
+ #define BCM2835_NUM_GPIOS 54
+ #define BCM2835_NUM_BANKS 2
++#define BCM2835_NUM_IRQS  3
+ 
+ #define BCM2835_PIN_BITMAP_SZ \
+ 	DIV_ROUND_UP(BCM2835_NUM_GPIOS, sizeof(unsigned long) * 8)
+@@ -88,13 +89,13 @@ enum bcm2835_pinconf_pull {
+ 
+ struct bcm2835_gpio_irqdata {
+ 	struct bcm2835_pinctrl *pc;
+-	int bank;
++	int irqgroup;
+ };
+ 
+ struct bcm2835_pinctrl {
+ 	struct device *dev;
+ 	void __iomem *base;
+-	int irq[BCM2835_NUM_BANKS];
++	int irq[BCM2835_NUM_IRQS];
+ 
+ 	/* note: locking assumes each bank will have its own unsigned long */
+ 	unsigned long enabled_irq_map[BCM2835_NUM_BANKS];
+@@ -105,7 +106,7 @@ struct bcm2835_pinctrl {
+ 	struct gpio_chip gpio_chip;
+ 	struct pinctrl_gpio_range gpio_range;
+ 
+-	struct bcm2835_gpio_irqdata irq_data[BCM2835_NUM_BANKS];
++	struct bcm2835_gpio_irqdata irq_data[BCM2835_NUM_IRQS];
+ 	spinlock_t irq_lock[BCM2835_NUM_BANKS];
+ };
+ 
+@@ -378,17 +379,16 @@ static struct gpio_chip bcm2835_gpio_chi
+ 	.can_sleep = false,
+ };
+ 
+-static irqreturn_t bcm2835_gpio_irq_handler(int irq, void *dev_id)
++static int bcm2835_gpio_irq_handle_bank(struct bcm2835_pinctrl *pc,
++					unsigned int bank, u32 mask)
+ {
+-	struct bcm2835_gpio_irqdata *irqdata = dev_id;
+-	struct bcm2835_pinctrl *pc = irqdata->pc;
+-	int bank = irqdata->bank;
+ 	unsigned long events;
+ 	unsigned offset;
+ 	unsigned gpio;
+ 	unsigned int type;
+ 
+ 	events = bcm2835_gpio_rd(pc, GPEDS0 + bank * 4);
++	events &= mask;
+ 	events &= pc->enabled_irq_map[bank];
+ 	for_each_set_bit(offset, &events, 32) {
+ 		gpio = (32 * bank) + offset;
+@@ -396,7 +396,30 @@ static irqreturn_t bcm2835_gpio_irq_hand
+ 
+ 		generic_handle_irq(irq_linear_revmap(pc->irq_domain, gpio));
+ 	}
+-	return events ? IRQ_HANDLED : IRQ_NONE;
++
++	return (events != 0);
++}
++
++static irqreturn_t bcm2835_gpio_irq_handler(int irq, void *dev_id)
++{
++	struct bcm2835_gpio_irqdata *irqdata = dev_id;
++	struct bcm2835_pinctrl *pc = irqdata->pc;
++	int handled = 0;
++
++	switch (irqdata->irqgroup) {
++	case 0: /* IRQ0 covers GPIOs 0-27 */
++		handled = bcm2835_gpio_irq_handle_bank(pc, 0, 0x0fffffff);
++		break;
++	case 1: /* IRQ1 covers GPIOs 28-45 */
++		handled = bcm2835_gpio_irq_handle_bank(pc, 0, 0xf0000000) |
++			  bcm2835_gpio_irq_handle_bank(pc, 1, 0x00003fff);
++		break;
++	case 2: /* IRQ2 covers GPIOs 46-53 */
++		handled = bcm2835_gpio_irq_handle_bank(pc, 1, 0x003fc000);
++		break;
++	}
++
++	return handled ? IRQ_HANDLED : IRQ_NONE;
+ }
+ 
+ static inline void __bcm2835_gpio_irq_config(struct bcm2835_pinctrl *pc,
+@@ -985,8 +1008,6 @@ static int bcm2835_pinctrl_probe(struct
+ 	for (i = 0; i < BCM2835_NUM_BANKS; i++) {
+ 		unsigned long events;
+ 		unsigned offset;
+-		int len;
+-		char *name;
+ 
+ 		/* clear event detection flags */
+ 		bcm2835_gpio_wr(pc, GPREN0 + i * 4, 0);
+@@ -1001,10 +1022,15 @@ static int bcm2835_pinctrl_probe(struct
+ 		for_each_set_bit(offset, &events, 32)
+ 			bcm2835_gpio_wr(pc, GPEDS0 + i * 4, BIT(offset));
+ 
++		spin_lock_init(&pc->irq_lock[i]);
++	}
++
++	for (i = 0; i < BCM2835_NUM_IRQS; i++) {
++		int len;
++		char *name;
+ 		pc->irq[i] = irq_of_parse_and_map(np, i);
+ 		pc->irq_data[i].pc = pc;
+-		pc->irq_data[i].bank = i;
+-		spin_lock_init(&pc->irq_lock[i]);
++		pc->irq_data[i].irqgroup = i;
+ 
+ 		len = strlen(dev_name(pc->dev)) + 16;
+ 		name = devm_kzalloc(pc->dev, len, GFP_KERNEL);
+@@ -1062,6 +1088,7 @@ static struct platform_driver bcm2835_pi
+ 	.remove = bcm2835_pinctrl_remove,
+ 	.driver = {
+ 		.name = MODULE_NAME,
++		.owner = THIS_MODULE,
+ 		.of_match_table = bcm2835_pinctrl_match,
+ 	},
+ };
diff --git a/target/linux/brcm2708/patches-4.4/0011-pinctrl-bcm2835-Only-request-the-interrupts-listed-i.patch b/target/linux/brcm2708/patches-4.4/0011-pinctrl-bcm2835-Only-request-the-interrupts-listed-i.patch
new file mode 100644
index 0000000..0ef4ba2
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0011-pinctrl-bcm2835-Only-request-the-interrupts-listed-i.patch
@@ -0,0 +1,27 @@
+From 167da31b9a7d3111c83993e4d614bb95bbefdcbb Mon Sep 17 00:00:00 2001
+From: Phil Elwell <phil at raspberrypi.org>
+Date: Thu, 26 Feb 2015 09:58:22 +0000
+Subject: [PATCH 011/127] pinctrl-bcm2835: Only request the interrupts listed
+ in the DTB
+
+Although the GPIO controller can generate three interrupts (four counting
+the common one), the device tree files currently only specify two. In the
+absence of the third, simply don't register that interrupt (as opposed to
+registering 0), which has the effect of making it impossible to generate
+interrupts for GPIOs 46-53 which, since they share pins with the SD card
+interface, is unlikely to be a problem.
+---
+ drivers/pinctrl/bcm/pinctrl-bcm2835.c | 2 ++
+ 1 file changed, 2 insertions(+)
+
+--- a/drivers/pinctrl/bcm/pinctrl-bcm2835.c
++++ b/drivers/pinctrl/bcm/pinctrl-bcm2835.c
+@@ -1029,6 +1029,8 @@ static int bcm2835_pinctrl_probe(struct
+ 		int len;
+ 		char *name;
+ 		pc->irq[i] = irq_of_parse_and_map(np, i);
++		if (pc->irq[i] == 0)
++			break;
+ 		pc->irq_data[i].pc = pc;
+ 		pc->irq_data[i].irqgroup = i;
+ 
diff --git a/target/linux/brcm2708/patches-4.4/0012-spi-bcm2835-Support-pin-groups-other-than-7-11.patch b/target/linux/brcm2708/patches-4.4/0012-spi-bcm2835-Support-pin-groups-other-than-7-11.patch
new file mode 100644
index 0000000..73e2781
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0012-spi-bcm2835-Support-pin-groups-other-than-7-11.patch
@@ -0,0 +1,80 @@
+From bc9d2c297e886dfcc340414a61de970942ad7319 Mon Sep 17 00:00:00 2001
+From: Phil Elwell <phil at raspberrypi.org>
+Date: Wed, 24 Jun 2015 14:10:44 +0100
+Subject: [PATCH 012/127] spi-bcm2835: Support pin groups other than 7-11
+
+The spi-bcm2835 driver automatically uses GPIO chip-selects due to
+some unreliability of the native ones. In doing so it chooses the
+same pins as the native chip-selects would use, but the existing
+code always uses pins 7 and 8, wherever the SPI function is mapped.
+
+Search the pinctrl group assigned to the driver for pins that
+correspond to native chip-selects, and use those for GPIO chip-
+selects.
+
+Signed-off-by: Phil Elwell <phil at raspberrypi.org>
+---
+ drivers/spi/spi-bcm2835.c | 45 +++++++++++++++++++++++++++++++++++++--------
+ 1 file changed, 37 insertions(+), 8 deletions(-)
+
+--- a/drivers/spi/spi-bcm2835.c
++++ b/drivers/spi/spi-bcm2835.c
+@@ -688,6 +688,8 @@ static int bcm2835_spi_setup(struct spi_
+ {
+ 	int err;
+ 	struct gpio_chip *chip;
++	struct device_node *pins;
++	u32 pingroup_index;
+ 	/*
+ 	 * sanity checking the native-chipselects
+ 	 */
+@@ -704,15 +706,42 @@ static int bcm2835_spi_setup(struct spi_
+ 			"setup: only two native chip-selects are supported\n");
+ 		return -EINVAL;
+ 	}
+-	/* now translate native cs to GPIO */
+ 
+-	/* get the gpio chip for the base */
+-	chip = gpiochip_find("pinctrl-bcm2835", chip_match_name);
+-	if (!chip)
+-		return 0;
++	/* now translate native cs to GPIO */
++	/* first look for chip select pins in the devices pin groups */
++	for (pingroup_index = 0;
++	     (pins = of_parse_phandle(spi->master->dev.of_node,
++				     "pinctrl-0",
++				      pingroup_index)) != 0;
++	     pingroup_index++) {
++		u32 pin;
++		u32 pin_index;
++		for (pin_index = 0;
++		     of_property_read_u32_index(pins,
++						"brcm,pins",
++						pin_index,
++						&pin) == 0;
++		     pin_index++) {
++			if (((spi->chip_select == 0) &&
++			     ((pin == 8) || (pin == 36) || (pin == 46))) ||
++			    ((spi->chip_select == 1) &&
++			     ((pin == 7) || (pin == 35)))) {
++				spi->cs_gpio = pin;
++				break;
++			}
++		}
++		of_node_put(pins);
++	}
++	/* if that fails, assume GPIOs 7-11 are used */
++	if (!gpio_is_valid(spi->cs_gpio) ) {
++		/* get the gpio chip for the base */
++		chip = gpiochip_find("pinctrl-bcm2835", chip_match_name);
++		if (!chip)
++			return 0;
+ 
+-	/* and calculate the real CS */
+-	spi->cs_gpio = chip->base + 8 - spi->chip_select;
++		/* and calculate the real CS */
++		spi->cs_gpio = chip->base + 8 - spi->chip_select;
++	}
+ 
+ 	/* and set up the "mode" and level */
+ 	dev_info(&spi->dev, "setting up native-CS%i as GPIO %i\n",
diff --git a/target/linux/brcm2708/patches-4.4/0013-ARM-bcm2835-Set-Serial-number-and-Revision.patch b/target/linux/brcm2708/patches-4.4/0013-ARM-bcm2835-Set-Serial-number-and-Revision.patch
new file mode 100644
index 0000000..e2b55b6
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0013-ARM-bcm2835-Set-Serial-number-and-Revision.patch
@@ -0,0 +1,58 @@
+From e04c4837cde13f4782fc5a274599f580d8a29715 Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?Noralf=20Tr=C3=B8nnes?= <noralf at tronnes.org>
+Date: Wed, 3 Jun 2015 12:26:13 +0200
+Subject: [PATCH 013/127] ARM: bcm2835: Set Serial number and Revision
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+The VideoCore bootloader passes in Serial number and
+Revision number through Device Tree. Make these available to
+userspace through /proc/cpuinfo.
+
+Mainline status:
+
+There is a commit in linux-next that standardize passing the serial
+number through Device Tree (string: /serial-number):
+ARM: 8355/1: arch: Show the serial number from devicetree in cpuinfo
+
+There was an attempt to do the same with the revision number, but it
+didn't get in:
+[PATCH v2 1/2] arm: devtree: Set system_rev from DT revision
+
+Signed-off-by: Noralf Trønnes <noralf at tronnes.org>
+---
+ arch/arm/mach-bcm/board_bcm2835.c | 9 +++++++++
+ 1 file changed, 9 insertions(+)
+
+--- a/arch/arm/mach-bcm/board_bcm2835.c
++++ b/arch/arm/mach-bcm/board_bcm2835.c
+@@ -17,12 +17,16 @@
+ #include <linux/of_address.h>
+ #include <linux/of_platform.h>
+ #include <linux/clk/bcm2835.h>
++#include <asm/system_info.h>
+ 
+ #include <asm/mach/arch.h>
+ #include <asm/mach/map.h>
+ 
+ static void __init bcm2835_init(void)
+ {
++	struct device_node *np = of_find_node_by_path("/system");
++	u32 val;
++	u64 val64;
+ 	int ret;
+ 
+ 	bcm2835_init_clocks();
+@@ -33,6 +37,11 @@ static void __init bcm2835_init(void)
+ 		pr_err("of_platform_populate failed: %d\n", ret);
+ 		BUG();
+ 	}
++
++	if (!of_property_read_u32(np, "linux,revision", &val))
++		system_rev = val;
++	if (!of_property_read_u64(np, "linux,serial", &val64))
++		system_serial_low = val64;
+ }
+ 
+ static const char * const bcm2835_compat[] = {
diff --git a/target/linux/brcm2708/patches-4.4/0014-bcm2835-i2s-get-base-address-for-DMA-from-devicetree.patch b/target/linux/brcm2708/patches-4.4/0014-bcm2835-i2s-get-base-address-for-DMA-from-devicetree.patch
new file mode 100644
index 0000000..0c36bda
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0014-bcm2835-i2s-get-base-address-for-DMA-from-devicetree.patch
@@ -0,0 +1,65 @@
+From c8225021ad8a8e8d2b4560bed644c5552f9f6684 Mon Sep 17 00:00:00 2001
+From: Matthias Reichl <hias at horus.com>
+Date: Sun, 11 Oct 2015 16:44:05 +0200
+Subject: [PATCH 014/127] bcm2835-i2s: get base address for DMA from devicetree
+
+Code copied from spi-bcm2835. Get physical address from devicetree
+instead of using hardcoded constant.
+
+Signed-off-by: Matthias Reichl <hias at horus.com>
+---
+ sound/soc/bcm/bcm2835-i2s.c | 20 ++++++++++++--------
+ 1 file changed, 12 insertions(+), 8 deletions(-)
+
+--- a/sound/soc/bcm/bcm2835-i2s.c
++++ b/sound/soc/bcm/bcm2835-i2s.c
+@@ -38,6 +38,7 @@
+ #include <linux/delay.h>
+ #include <linux/io.h>
+ #include <linux/clk.h>
++#include <linux/of_address.h>
+ 
+ #include <sound/core.h>
+ #include <sound/pcm.h>
+@@ -158,10 +159,6 @@ static const unsigned int bcm2835_clk_fr
+ #define BCM2835_I2S_INT_RXR		BIT(1)
+ #define BCM2835_I2S_INT_TXW		BIT(0)
+ 
+-/* I2S DMA interface */
+-/* FIXME: Needs IOMMU support */
+-#define BCM2835_VCMMU_SHIFT		(0x7E000000 - 0x20000000)
+-
+ /* General device struct */
+ struct bcm2835_i2s_dev {
+ 	struct device				*dev;
+@@ -791,6 +788,15 @@ static int bcm2835_i2s_probe(struct plat
+ 	int ret;
+ 	struct regmap *regmap[2];
+ 	struct resource *mem[2];
++	const __be32 *addr;
++	dma_addr_t dma_reg_base;
++
++	addr = of_get_address(pdev->dev.of_node, 0, NULL, NULL);
++	if (!addr) {
++		dev_err(&pdev->dev, "could not get DMA-register address\n");
++		return -ENODEV;
++	}
++	dma_reg_base = be32_to_cpup(addr);
+ 
+ 	/* Request both ioareas */
+ 	for (i = 0; i <= 1; i++) {
+@@ -817,12 +823,10 @@ static int bcm2835_i2s_probe(struct plat
+ 
+ 	/* Set the DMA address */
+ 	dev->dma_data[SNDRV_PCM_STREAM_PLAYBACK].addr =
+-		(dma_addr_t)mem[0]->start + BCM2835_I2S_FIFO_A_REG
+-					  + BCM2835_VCMMU_SHIFT;
++		dma_reg_base + BCM2835_I2S_FIFO_A_REG;
+ 
+ 	dev->dma_data[SNDRV_PCM_STREAM_CAPTURE].addr =
+-		(dma_addr_t)mem[0]->start + BCM2835_I2S_FIFO_A_REG
+-					  + BCM2835_VCMMU_SHIFT;
++		dma_reg_base + BCM2835_I2S_FIFO_A_REG;
+ 
+ 	/* Set the bus width */
+ 	dev->dma_data[SNDRV_PCM_STREAM_PLAYBACK].addr_width =
diff --git a/target/linux/brcm2708/patches-4.4/0015-bcm2835-i2s-add-24bit-support-update-bclk_ratio-to-m.patch b/target/linux/brcm2708/patches-4.4/0015-bcm2835-i2s-add-24bit-support-update-bclk_ratio-to-m.patch
new file mode 100644
index 0000000..b627327
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0015-bcm2835-i2s-add-24bit-support-update-bclk_ratio-to-m.patch
@@ -0,0 +1,79 @@
+From 328b2e8b8a38fe62431c2ad5ae22cee31740b10d Mon Sep 17 00:00:00 2001
+From: Matthias Reichl <hias at horus.com>
+Date: Sun, 11 Oct 2015 15:21:16 +0200
+Subject: [PATCH 015/127] bcm2835-i2s: add 24bit support, update bclk_ratio to
+ more correct values
+
+Code ported from bcm2708-i2s driver in Raspberry Pi tree.
+
+RPi commit 62c05a0b5328d9376d39c9e74da10b8a2465c234 ("ASoC: BCM2708:
+Add 24 bit support")
+
+This adds 24 bit support to the I2S driver of the BCM2708.
+Besides enabling the 24 bit flags, it includes two bug fixes:
+
+MMAP is not supported. Claiming this leads to strange issues
+when the format of driver and file do not match.
+
+The datasheet states that the width extension bit should be set
+for widths greater than 24, but greater or equal would be correct.
+This follows from the definition of the width field.
+
+Signed-off-by: Florian Meier <florian.meier at koalo.de>
+
+RPi commit 3e8c672bc4e92d457aa4654bbb4cfd79a18a2327 ("bcm2708-i2s:
+Update bclk_ratio to more correct values")
+
+Discussion about blck_ratio affecting sound quality:
+https://github.com/raspberrypi/linux/issues/681
+
+Signed-off-by: Matthias Reichl <hias at horus.com>
+---
+ sound/soc/bcm/bcm2835-i2s.c | 12 +++++++++---
+ 1 file changed, 9 insertions(+), 3 deletions(-)
+
+--- a/sound/soc/bcm/bcm2835-i2s.c
++++ b/sound/soc/bcm/bcm2835-i2s.c
+@@ -340,11 +340,15 @@ static int bcm2835_i2s_hw_params(struct
+ 	switch (params_format(params)) {
+ 	case SNDRV_PCM_FORMAT_S16_LE:
+ 		data_length = 16;
+-		bclk_ratio = 40;
++		bclk_ratio = 50;
++		break;
++	case SNDRV_PCM_FORMAT_S24_LE:
++		data_length = 24;
++		bclk_ratio = 50;
+ 		break;
+ 	case SNDRV_PCM_FORMAT_S32_LE:
+ 		data_length = 32;
+-		bclk_ratio = 80;
++		bclk_ratio = 100;
+ 		break;
+ 	default:
+ 		return -EINVAL;
+@@ -420,7 +424,7 @@ static int bcm2835_i2s_hw_params(struct
+ 	/* Setup the frame format */
+ 	format = BCM2835_I2S_CHEN;
+ 
+-	if (data_length > 24)
++	if (data_length >= 24)
+ 		format |= BCM2835_I2S_CHWEX;
+ 
+ 	format |= BCM2835_I2S_CHWID((data_length-8)&0xf);
+@@ -711,6 +715,7 @@ static struct snd_soc_dai_driver bcm2835
+ 		.channels_max = 2,
+ 		.rates =	SNDRV_PCM_RATE_8000_192000,
+ 		.formats =	SNDRV_PCM_FMTBIT_S16_LE
++				| SNDRV_PCM_FMTBIT_S24_LE
+ 				| SNDRV_PCM_FMTBIT_S32_LE
+ 		},
+ 	.capture = {
+@@ -718,6 +723,7 @@ static struct snd_soc_dai_driver bcm2835
+ 		.channels_max = 2,
+ 		.rates =	SNDRV_PCM_RATE_8000_192000,
+ 		.formats =	SNDRV_PCM_FMTBIT_S16_LE
++				| SNDRV_PCM_FMTBIT_S24_LE
+ 				| SNDRV_PCM_FMTBIT_S32_LE
+ 		},
+ 	.ops = &bcm2835_i2s_dai_ops,
diff --git a/target/linux/brcm2708/patches-4.4/0016-bcm2835-i2s-setup-clock-only-if-CPU-is-clock-master.patch b/target/linux/brcm2708/patches-4.4/0016-bcm2835-i2s-setup-clock-only-if-CPU-is-clock-master.patch
new file mode 100644
index 0000000..e096656
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0016-bcm2835-i2s-setup-clock-only-if-CPU-is-clock-master.patch
@@ -0,0 +1,54 @@
+From fce554c6331b34458db54722cb06eb517a32b305 Mon Sep 17 00:00:00 2001
+From: Matthias Reichl <hias at horus.com>
+Date: Sun, 11 Oct 2015 15:25:51 +0200
+Subject: [PATCH 016/127] bcm2835-i2s: setup clock only if CPU is clock master
+
+Code ported from bcm2708-i2s driver in Raspberry Pi tree.
+
+RPi commit c14827ecdaa36607f6110f9ce8df96e698672191 ("bcm2708: Allow
+option card devices to be configured via DT")
+
+Original work by Zoltan Szenczi, committed to RPi tree by
+Phil Elwell.
+
+Signed-off-by: Matthias Reichl <hias at horus.com>
+---
+ sound/soc/bcm/bcm2835-i2s.c | 28 +++++++++++++++++++---------
+ 1 file changed, 19 insertions(+), 9 deletions(-)
+
+--- a/sound/soc/bcm/bcm2835-i2s.c
++++ b/sound/soc/bcm/bcm2835-i2s.c
+@@ -411,15 +411,25 @@ static int bcm2835_i2s_hw_params(struct
+ 		divf = dividend & BCM2835_CLK_DIVF_MASK;
+ 	}
+ 
+-	/* Set clock divider */
+-	regmap_write(dev->clk_regmap, BCM2835_CLK_PCMDIV_REG, BCM2835_CLK_PASSWD
+-			| BCM2835_CLK_DIVI(divi)
+-			| BCM2835_CLK_DIVF(divf));
++	/* Clock should only be set up here if CPU is clock master */
++	switch (dev->fmt & SND_SOC_DAIFMT_MASTER_MASK) {
++	case SND_SOC_DAIFMT_CBS_CFS:
++	case SND_SOC_DAIFMT_CBS_CFM:
++		/* Set clock divider */
++		regmap_write(dev->clk_regmap, BCM2835_CLK_PCMDIV_REG,
++				  BCM2835_CLK_PASSWD
++				| BCM2835_CLK_DIVI(divi)
++				| BCM2835_CLK_DIVF(divf));
+ 
+-	/* Setup clock, but don't start it yet */
+-	regmap_write(dev->clk_regmap, BCM2835_CLK_PCMCTL_REG, BCM2835_CLK_PASSWD
+-			| BCM2835_CLK_MASH(mash)
+-			| BCM2835_CLK_SRC(clk_src));
++		/* Setup clock, but don't start it yet */
++		regmap_write(dev->clk_regmap, BCM2835_CLK_PCMCTL_REG,
++				  BCM2835_CLK_PASSWD
++				| BCM2835_CLK_MASH(mash)
++				| BCM2835_CLK_SRC(clk_src));
++		break;
++	default:
++		break;
++	}
+ 
+ 	/* Setup the frame format */
+ 	format = BCM2835_I2S_CHEN;
diff --git a/target/linux/brcm2708/patches-4.4/0017-bcm2835-i2s-Eliminate-debugfs-directory-error.patch b/target/linux/brcm2708/patches-4.4/0017-bcm2835-i2s-Eliminate-debugfs-directory-error.patch
new file mode 100644
index 0000000..1caa347
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0017-bcm2835-i2s-Eliminate-debugfs-directory-error.patch
@@ -0,0 +1,36 @@
+From 45995262bd8d5194e9430d2a826c84ed28c408eb Mon Sep 17 00:00:00 2001
+From: Matthias Reichl <hias at horus.com>
+Date: Sun, 11 Oct 2015 15:49:51 +0200
+Subject: [PATCH 017/127] bcm2835-i2s: Eliminate debugfs directory error
+
+Code ported from bcm2708-i2s driver in Raspberry Pi tree.
+
+RPi commit fd7d7a3dbe9262d16971ef81c234ed28c6499dd7 ("bcm2708:
+Eliminate i2s debugfs directory error")
+
+Qualify the two regmap ranges uses by bcm2708-i2s ('-i2s' and '-clk')
+to avoid the name clash when registering debugfs entries.
+
+Signed-off-by: Matthias Reichl <hias at horus.com>
+---
+ sound/soc/bcm/bcm2835-i2s.c | 2 ++
+ 1 file changed, 2 insertions(+)
+
+--- a/sound/soc/bcm/bcm2835-i2s.c
++++ b/sound/soc/bcm/bcm2835-i2s.c
+@@ -782,6 +782,7 @@ static const struct regmap_config bcm283
+ 		.precious_reg = bcm2835_i2s_precious_reg,
+ 		.volatile_reg = bcm2835_i2s_volatile_reg,
+ 		.cache_type = REGCACHE_RBTREE,
++		.name = "i2s",
+ 	},
+ 	{
+ 		.reg_bits = 32,
+@@ -790,6 +791,7 @@ static const struct regmap_config bcm283
+ 		.max_register = BCM2835_CLK_PCMDIV_REG,
+ 		.volatile_reg = bcm2835_clk_volatile_reg,
+ 		.cache_type = REGCACHE_RBTREE,
++		.name = "clk",
+ 	},
+ };
+ 
diff --git a/target/linux/brcm2708/patches-4.4/0018-bcm2835-i2s-Register-PCM-device.patch b/target/linux/brcm2708/patches-4.4/0018-bcm2835-i2s-Register-PCM-device.patch
new file mode 100644
index 0000000..eeb7d61
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0018-bcm2835-i2s-Register-PCM-device.patch
@@ -0,0 +1,63 @@
+From b58d4ef09eca4674d1530f0c8e1ca074b269ebea Mon Sep 17 00:00:00 2001
+From: Matthias Reichl <hias at horus.com>
+Date: Sun, 11 Oct 2015 15:35:20 +0200
+Subject: [PATCH 018/127] bcm2835-i2s: Register PCM device
+
+Code ported from bcm2708-i2s driver in Raspberry Pi tree.
+
+RPi commit ba46b4935a23aa2caac1855ead52a035d4776680 ("ASoC: Add
+support for BCM2708")
+
+This driver adds support for digital audio (I2S)
+for the BCM2708 SoC that is used by the
+Raspberry Pi. External audio codecs can be
+connected to the Raspberry Pi via P5 header.
+
+It relies on cyclic DMA engine support for BCM2708.
+
+Signed-off-by: Florian Meier <florian.meier at koalo.de>
+
+Signed-off-by: Matthias Reichl <hias at horus.com>
+---
+ sound/soc/bcm/bcm2835-i2s.c | 23 ++++++++++++++++++++++-
+ 1 file changed, 22 insertions(+), 1 deletion(-)
+
+--- a/sound/soc/bcm/bcm2835-i2s.c
++++ b/sound/soc/bcm/bcm2835-i2s.c
+@@ -799,6 +799,25 @@ static const struct snd_soc_component_dr
+ 	.name		= "bcm2835-i2s-comp",
+ };
+ 
++static const struct snd_pcm_hardware bcm2835_pcm_hardware = {
++	.info			= SNDRV_PCM_INFO_INTERLEAVED |
++				  SNDRV_PCM_INFO_JOINT_DUPLEX,
++	.formats		= SNDRV_PCM_FMTBIT_S16_LE |
++				  SNDRV_PCM_FMTBIT_S24_LE |
++				  SNDRV_PCM_FMTBIT_S32_LE,
++	.period_bytes_min	= 32,
++	.period_bytes_max	= 64 * PAGE_SIZE,
++	.periods_min		= 2,
++	.periods_max		= 255,
++	.buffer_bytes_max	= 128 * PAGE_SIZE,
++};
++
++static const struct snd_dmaengine_pcm_config bcm2835_dmaengine_pcm_config = {
++	.prepare_slave_config = snd_dmaengine_pcm_prepare_slave_config,
++	.pcm_hardware = &bcm2835_pcm_hardware,
++	.prealloc_buffer_size = 256 * PAGE_SIZE,
++};
++
+ static int bcm2835_i2s_probe(struct platform_device *pdev)
+ {
+ 	struct bcm2835_i2s_dev *dev;
+@@ -870,7 +889,9 @@ static int bcm2835_i2s_probe(struct plat
+ 		return ret;
+ 	}
+ 
+-	ret = devm_snd_dmaengine_pcm_register(&pdev->dev, NULL, 0);
++	ret = devm_snd_dmaengine_pcm_register(&pdev->dev,
++			&bcm2835_dmaengine_pcm_config,
++			SND_DMAENGINE_PCM_FLAG_COMPAT);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "Could not register PCM: %d\n", ret);
+ 		return ret;
diff --git a/target/linux/brcm2708/patches-4.4/0019-bcm2835-i2s-Enable-MMAP-support-via-a-DT-property.patch b/target/linux/brcm2708/patches-4.4/0019-bcm2835-i2s-Enable-MMAP-support-via-a-DT-property.patch
new file mode 100644
index 0000000..c422cc7
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0019-bcm2835-i2s-Enable-MMAP-support-via-a-DT-property.patch
@@ -0,0 +1,44 @@
+From 61f155e164c5dbfa5cec9a099e4aa802c2155423 Mon Sep 17 00:00:00 2001
+From: Matthias Reichl <hias at horus.com>
+Date: Sun, 11 Oct 2015 15:55:21 +0200
+Subject: [PATCH 019/127] bcm2835-i2s: Enable MMAP support via a DT property
+
+Code ported from bcm2708-i2s driver in Raspberry Pi tree.
+
+RPi commit 7ee829fd77a30127db5d0b3c7d79b8718166e568 ("bcm2708-i2s:
+Enable MMAP support via a DT property and overlay")
+
+The i2s driver used to claim to support MMAP, but that feature was disabled
+when some problems were found. Add the ability to enable this feature
+through Device Tree, using the i2s-mmap overlay.
+
+See: #1004
+
+Signed-off-by: Matthias Reichl <hias at horus.com>
+---
+ sound/soc/bcm/bcm2835-i2s.c | 7 ++++++-
+ 1 file changed, 6 insertions(+), 1 deletion(-)
+
+--- a/sound/soc/bcm/bcm2835-i2s.c
++++ b/sound/soc/bcm/bcm2835-i2s.c
+@@ -799,7 +799,7 @@ static const struct snd_soc_component_dr
+ 	.name		= "bcm2835-i2s-comp",
+ };
+ 
+-static const struct snd_pcm_hardware bcm2835_pcm_hardware = {
++static struct snd_pcm_hardware bcm2835_pcm_hardware = {
+ 	.info			= SNDRV_PCM_INFO_INTERLEAVED |
+ 				  SNDRV_PCM_INFO_JOINT_DUPLEX,
+ 	.formats		= SNDRV_PCM_FMTBIT_S16_LE |
+@@ -835,6 +835,11 @@ static int bcm2835_i2s_probe(struct plat
+ 	}
+ 	dma_reg_base = be32_to_cpup(addr);
+ 
++	if (of_property_read_bool(pdev->dev.of_node, "brcm,enable-mmap"))
++		bcm2835_pcm_hardware.info |=
++			SNDRV_PCM_INFO_MMAP |
++			SNDRV_PCM_INFO_MMAP_VALID;
++
+ 	/* Request both ioareas */
+ 	for (i = 0; i <= 1; i++) {
+ 		void __iomem *base;
diff --git a/target/linux/brcm2708/patches-4.4/0020-dmaengine-bcm2835-Add-slave-dma-support.patch b/target/linux/brcm2708/patches-4.4/0020-dmaengine-bcm2835-Add-slave-dma-support.patch
new file mode 100644
index 0000000..c49e482
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0020-dmaengine-bcm2835-Add-slave-dma-support.patch
@@ -0,0 +1,320 @@
+From 780a1039ccfd293d583742a4f2326997b15f5aff Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?Noralf=20Tr=C3=B8nnes?= <noralf at tronnes.org>
+Date: Thu, 9 Apr 2015 12:34:11 +0200
+Subject: [PATCH 020/127] dmaengine: bcm2835: Add slave dma support
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+Add slave transfer capability to BCM2835 dmaengine driver.
+This patch is pulled from the bcm2708-dmaengine driver in the
+Raspberry Pi repo. The work was done by Gellert Weisz.
+
+Tested using the bcm2835-mmc driver from the same repo.
+
+Signed-off-by: Noralf Trønnes <noralf at tronnes.org>
+---
+ drivers/dma/bcm2835-dma.c | 206 ++++++++++++++++++++++++++++++++++++++++++----
+ 1 file changed, 192 insertions(+), 14 deletions(-)
+
+--- a/drivers/dma/bcm2835-dma.c
++++ b/drivers/dma/bcm2835-dma.c
+@@ -1,11 +1,10 @@
+ /*
+  * BCM2835 DMA engine support
+  *
+- * This driver only supports cyclic DMA transfers
+- * as needed for the I2S module.
+- *
+  * Author:      Florian Meier <florian.meier at koalo.de>
+  *              Copyright 2013
++ *              Gellert Weisz <gellert at raspberrypi.org>
++ *              Copyright 2013-2014
+  *
+  * Based on
+  *	OMAP DMAengine support by Russell King
+@@ -95,6 +94,8 @@ struct bcm2835_desc {
+ 	size_t size;
+ };
+ 
++#define BCM2835_DMA_WAIT_CYCLES	0  /* Slow down DMA transfers: 0-31 */
++
+ #define BCM2835_DMA_CS		0x00
+ #define BCM2835_DMA_ADDR	0x04
+ #define BCM2835_DMA_SOURCE_AD	0x0c
+@@ -111,12 +112,16 @@ struct bcm2835_desc {
+ #define BCM2835_DMA_RESET	BIT(31) /* WO, self clearing */
+ 
+ #define BCM2835_DMA_INT_EN	BIT(0)
++#define BCM2835_DMA_WAIT_RESP	BIT(3)
+ #define BCM2835_DMA_D_INC	BIT(4)
++#define BCM2835_DMA_D_WIDTH	BIT(5)
+ #define BCM2835_DMA_D_DREQ	BIT(6)
+ #define BCM2835_DMA_S_INC	BIT(8)
++#define BCM2835_DMA_S_WIDTH	BIT(9)
+ #define BCM2835_DMA_S_DREQ	BIT(10)
+ 
+ #define BCM2835_DMA_PER_MAP(x)	((x) << 16)
++#define BCM2835_DMA_WAITS(x)	(((x) & 0x1f) << 21)
+ 
+ #define BCM2835_DMA_DATA_TYPE_S8	1
+ #define BCM2835_DMA_DATA_TYPE_S16	2
+@@ -130,6 +135,14 @@ struct bcm2835_desc {
+ #define BCM2835_DMA_CHAN(n)	((n) << 8) /* Base address */
+ #define BCM2835_DMA_CHANIO(base, n) ((base) + BCM2835_DMA_CHAN(n))
+ 
++#define MAX_NORMAL_TRANSFER	SZ_1G
++/*
++ * Max length on a Lite channel is 65535 bytes.
++ * DMA handles byte-enables on SDRAM reads and writes even on 128-bit accesses,
++ * but byte-enables don't exist on peripheral addresses, so align to 32-bit.
++ */
++#define MAX_LITE_TRANSFER	(SZ_64K - 4)
++
+ static inline struct bcm2835_dmadev *to_bcm2835_dma_dev(struct dma_device *d)
+ {
+ 	return container_of(d, struct bcm2835_dmadev, ddev);
+@@ -226,12 +239,18 @@ static irqreturn_t bcm2835_dma_callback(
+ 	d = c->desc;
+ 
+ 	if (d) {
+-		/* TODO Only works for cyclic DMA */
+-		vchan_cyclic_callback(&d->vd);
+-	}
++		if (c->cyclic) {
++			vchan_cyclic_callback(&d->vd);
+ 
+-	/* Keep the DMA engine running */
+-	writel(BCM2835_DMA_ACTIVE, c->chan_base + BCM2835_DMA_CS);
++			/* Keep the DMA engine running */
++			writel(BCM2835_DMA_ACTIVE,
++			       c->chan_base + BCM2835_DMA_CS);
++
++		} else {
++			vchan_cookie_complete(&c->desc->vd);
++			bcm2835_dma_start_desc(c);
++		}
++	}
+ 
+ 	spin_unlock_irqrestore(&c->vc.lock, flags);
+ 
+@@ -339,8 +358,6 @@ static void bcm2835_dma_issue_pending(st
+ 	struct bcm2835_chan *c = to_bcm2835_dma_chan(chan);
+ 	unsigned long flags;
+ 
+-	c->cyclic = true; /* Nothing else is implemented */
+-
+ 	spin_lock_irqsave(&c->vc.lock, flags);
+ 	if (vchan_issue_pending(&c->vc) && !c->desc)
+ 		bcm2835_dma_start_desc(c);
+@@ -358,7 +375,7 @@ static struct dma_async_tx_descriptor *b
+ 	struct bcm2835_desc *d;
+ 	dma_addr_t dev_addr;
+ 	unsigned int es, sync_type;
+-	unsigned int frame;
++	unsigned int frame, max_size;
+ 	int i;
+ 
+ 	/* Grab configuration */
+@@ -393,7 +410,12 @@ static struct dma_async_tx_descriptor *b
+ 
+ 	d->c = c;
+ 	d->dir = direction;
+-	d->frames = buf_len / period_len;
++	if (c->ch >= 8) /* LITE channel */
++		max_size = MAX_LITE_TRANSFER;
++	else
++		max_size = MAX_NORMAL_TRANSFER;
++	period_len = min(period_len, max_size);
++	d->frames = (buf_len - 1) / (period_len + 1);
+ 
+ 	d->cb_list = kcalloc(d->frames, sizeof(*d->cb_list), GFP_KERNEL);
+ 	if (!d->cb_list) {
+@@ -441,17 +463,171 @@ static struct dma_async_tx_descriptor *b
+ 				BCM2835_DMA_PER_MAP(c->dreq);
+ 
+ 		/* Length of a frame */
+-		control_block->length = period_len;
++		if (frame != d->frames - 1)
++			control_block->length = period_len;
++		else
++			control_block->length = buf_len - (d->frames - 1) *
++						period_len;
+ 		d->size += control_block->length;
+ 
+ 		/*
+ 		 * Next block is the next frame.
+-		 * This DMA engine driver currently only supports cyclic DMA.
++		 * This function is called on cyclic DMA transfers.
+ 		 * Therefore, wrap around at number of frames.
+ 		 */
+ 		control_block->next = d->cb_list[((frame + 1) % d->frames)].paddr;
+ 	}
+ 
++	c->cyclic = true;
++
++	return vchan_tx_prep(&c->vc, &d->vd, flags);
++}
++
++static struct dma_async_tx_descriptor *
++bcm2835_dma_prep_slave_sg(struct dma_chan *chan,
++			  struct scatterlist *sgl,
++			  unsigned int sg_len,
++			  enum dma_transfer_direction direction,
++			  unsigned long flags, void *context)
++{
++	struct bcm2835_chan *c = to_bcm2835_dma_chan(chan);
++	enum dma_slave_buswidth dev_width;
++	struct bcm2835_desc *d;
++	dma_addr_t dev_addr;
++	struct scatterlist *sgent;
++	unsigned int i, sync_type, split_cnt, max_size;
++
++	if (!is_slave_direction(direction)) {
++		dev_err(chan->device->dev, "direction not supported\n");
++		return NULL;
++	}
++
++	if (direction == DMA_DEV_TO_MEM) {
++		dev_addr = c->cfg.src_addr;
++		dev_width = c->cfg.src_addr_width;
++		sync_type = BCM2835_DMA_S_DREQ;
++	} else {
++		dev_addr = c->cfg.dst_addr;
++		dev_width = c->cfg.dst_addr_width;
++		sync_type = BCM2835_DMA_D_DREQ;
++	}
++
++	/* Bus width translates to the element size (ES) */
++	switch (dev_width) {
++	case DMA_SLAVE_BUSWIDTH_4_BYTES:
++		break;
++	default:
++		dev_err(chan->device->dev, "buswidth not supported: %i\n",
++			dev_width);
++		return NULL;
++	}
++
++	/* Allocate and setup the descriptor. */
++	d = kzalloc(sizeof(*d), GFP_NOWAIT);
++	if (!d)
++		return NULL;
++
++	d->dir = direction;
++
++	if (c->ch >= 8) /* LITE channel */
++		max_size = MAX_LITE_TRANSFER;
++	else
++		max_size = MAX_NORMAL_TRANSFER;
++
++	/*
++	 * Store the length of the SG list in d->frames
++	 * taking care to account for splitting up transfers
++	 * too large for a LITE channel
++	 */
++	d->frames = 0;
++	for_each_sg(sgl, sgent, sg_len, i) {
++		unsigned int len = sg_dma_len(sgent);
++
++		d->frames += len / max_size + 1;
++	}
++
++	/* Allocate memory for control blocks */
++	d->control_block_size = d->frames * sizeof(struct bcm2835_dma_cb);
++	d->control_block_base = dma_zalloc_coherent(chan->device->dev,
++			d->control_block_size, &d->control_block_base_phys,
++			GFP_NOWAIT);
++	if (!d->control_block_base) {
++		kfree(d);
++		return NULL;
++	}
++
++	/*
++	 * Iterate over all SG entries, create a control block
++	 * for each frame and link them together.
++	 * Count the number of times an SG entry had to be split
++	 * as a result of using a LITE channel
++	 */
++	split_cnt = 0;
++
++	for_each_sg(sgl, sgent, sg_len, i) {
++		unsigned int j;
++		dma_addr_t addr = sg_dma_address(sgent);
++		unsigned int len = sg_dma_len(sgent);
++
++		for (j = 0; j < len; j += max_size) {
++			struct bcm2835_dma_cb *control_block =
++				&d->control_block_base[i + split_cnt];
++
++			/* Setup addresses */
++			if (d->dir == DMA_DEV_TO_MEM) {
++				control_block->info = BCM2835_DMA_D_INC |
++						      BCM2835_DMA_D_WIDTH |
++						      BCM2835_DMA_S_DREQ;
++				control_block->src = dev_addr;
++				control_block->dst = addr + (dma_addr_t)j;
++			} else {
++				control_block->info = BCM2835_DMA_S_INC |
++						      BCM2835_DMA_S_WIDTH |
++						      BCM2835_DMA_D_DREQ;
++				control_block->src = addr + (dma_addr_t)j;
++				control_block->dst = dev_addr;
++			}
++
++			/* Common part */
++			control_block->info |=
++				BCM2835_DMA_WAITS(BCM2835_DMA_WAIT_CYCLES);
++			control_block->info |= BCM2835_DMA_WAIT_RESP;
++
++			/* Enable */
++			if (i == sg_len - 1 && len - j <= max_size)
++				control_block->info |= BCM2835_DMA_INT_EN;
++
++			/* Setup synchronization */
++			if (sync_type)
++				control_block->info |= sync_type;
++
++			/* Setup DREQ channel */
++			if (c->dreq)
++				control_block->info |=
++					BCM2835_DMA_PER_MAP(c->dreq);
++
++			/* Length of a frame */
++			control_block->length = min(len - j, max_size);
++			d->size += control_block->length;
++
++			if (i < sg_len - 1 || len - j > max_size) {
++				/* Next block is the next frame. */
++				control_block->next =
++					d->control_block_base_phys +
++					sizeof(struct bcm2835_dma_cb) *
++					(i + split_cnt + 1);
++			} else {
++				/* Next block is empty. */
++				control_block->next = 0;
++			}
++
++			if (len - j > max_size)
++				split_cnt++;
++		}
++	}
++
++	c->cyclic = false;
++
+ 	return vchan_tx_prep(&c->vc, &d->vd, flags);
+ error_cb:
+ 	i--;
+@@ -620,6 +796,7 @@ static int bcm2835_dma_probe(struct plat
+ 	od->ddev.device_tx_status = bcm2835_dma_tx_status;
+ 	od->ddev.device_issue_pending = bcm2835_dma_issue_pending;
+ 	od->ddev.device_prep_dma_cyclic = bcm2835_dma_prep_dma_cyclic;
++	od->ddev.device_prep_slave_sg = bcm2835_dma_prep_slave_sg;
+ 	od->ddev.device_config = bcm2835_dma_slave_config;
+ 	od->ddev.device_terminate_all = bcm2835_dma_terminate_all;
+ 	od->ddev.src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
+@@ -708,4 +885,5 @@ module_platform_driver(bcm2835_dma_drive
+ MODULE_ALIAS("platform:bcm2835-dma");
+ MODULE_DESCRIPTION("BCM2835 DMA engine driver");
+ MODULE_AUTHOR("Florian Meier <florian.meier at koalo.de>");
++MODULE_AUTHOR("Gellert Weisz <gellert at raspberrypi.org>");
+ MODULE_LICENSE("GPL v2");
diff --git a/target/linux/brcm2708/patches-4.4/0021-dmaengine-bcm2835-set-residue_granularity-field.patch b/target/linux/brcm2708/patches-4.4/0021-dmaengine-bcm2835-set-residue_granularity-field.patch
new file mode 100644
index 0000000..1e562ce
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0021-dmaengine-bcm2835-set-residue_granularity-field.patch
@@ -0,0 +1,29 @@
+From 6ff0d626e7d84df71f6bc75e2c5ed35c42858bcc Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?Noralf=20Tr=C3=B8nnes?= <noralf at tronnes.org>
+Date: Sat, 3 Oct 2015 15:58:59 +0200
+Subject: [PATCH 021/127] dmaengine: bcm2835: set residue_granularity field
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+bcm2835-dma supports residue reporting at burst level but didn't report
+this via the residue_granularity field.
+
+Without this field set properly we get playback issues with I2S cards.
+
+[by HiassofT, taken from bcm2708-dmaengine]
+Signed-off-by: Noralf Trønnes <noralf at tronnes.org>
+---
+ drivers/dma/bcm2835-dma.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+--- a/drivers/dma/bcm2835-dma.c
++++ b/drivers/dma/bcm2835-dma.c
+@@ -802,6 +802,7 @@ static int bcm2835_dma_probe(struct plat
+ 	od->ddev.src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
+ 	od->ddev.dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
+ 	od->ddev.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
++	od->ddev.residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
+ 	od->ddev.dev = &pdev->dev;
+ 	INIT_LIST_HEAD(&od->ddev.channels);
+ 	spin_lock_init(&od->lock);
diff --git a/target/linux/brcm2708/patches-4.4/0022-dmaengine-bcm2835-Load-driver-early-and-support-lega.patch b/target/linux/brcm2708/patches-4.4/0022-dmaengine-bcm2835-Load-driver-early-and-support-lega.patch
new file mode 100644
index 0000000..e3c9595
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0022-dmaengine-bcm2835-Load-driver-early-and-support-lega.patch
@@ -0,0 +1,98 @@
+From 16dc5e0535e48ce3e9c6995c87118e9e7b5b775a Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?Noralf=20Tr=C3=B8nnes?= <noralf at tronnes.org>
+Date: Sat, 3 Oct 2015 22:22:55 +0200
+Subject: [PATCH 022/127] dmaengine: bcm2835: Load driver early and support
+ legacy API
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+Load driver early since at least bcm2708_fb doesn't support deferred
+probing and even if it did, we don't want the video driver deferred.
+Support the legacy DMA API which is needed by bcm2708_fb.
+Don't mask out channel 2.
+
+Signed-off-by: Noralf Trønnes <noralf at tronnes.org>
+---
+ drivers/dma/Kconfig       |  2 +-
+ drivers/dma/bcm2835-dma.c | 30 ++++++++++++++++++++++++------
+ 2 files changed, 25 insertions(+), 7 deletions(-)
+
+--- a/drivers/dma/Kconfig
++++ b/drivers/dma/Kconfig
+@@ -108,7 +108,7 @@ config COH901318
+ 
+ config DMA_BCM2835
+ 	tristate "BCM2835 DMA engine support"
+-	depends on ARCH_BCM2835
++	depends on ARCH_BCM2835 || ARCH_BCM2708 || ARCH_BCM2709
+ 	select DMA_ENGINE
+ 	select DMA_VIRTUAL_CHANNELS
+ 
+--- a/drivers/dma/bcm2835-dma.c
++++ b/drivers/dma/bcm2835-dma.c
+@@ -36,6 +36,7 @@
+ #include <linux/interrupt.h>
+ #include <linux/list.h>
+ #include <linux/module.h>
++#include <linux/platform_data/dma-bcm2708.h>
+ #include <linux/platform_device.h>
+ #include <linux/slab.h>
+ #include <linux/io.h>
+@@ -786,6 +787,10 @@ static int bcm2835_dma_probe(struct plat
+ 	if (IS_ERR(base))
+ 		return PTR_ERR(base);
+ 
++	rc = bcm_dmaman_probe(pdev, base, BCM2835_DMA_BULK_MASK);
++	if (rc)
++		dev_err(&pdev->dev, "Failed to initialize the legacy API\n");
++
+ 	od->base = base;
+ 
+ 	dma_cap_set(DMA_SLAVE, od->ddev.cap_mask);
+@@ -818,11 +823,8 @@ static int bcm2835_dma_probe(struct plat
+ 		goto err_no_dma;
+ 	}
+ 
+-	/*
+-	 * Do not use the FIQ and BULK channels,
+-	 * because they are used by the GPU.
+-	 */
+-	chans_available &= ~(BCM2835_DMA_FIQ_MASK | BCM2835_DMA_BULK_MASK);
++	/* Channel 0 is used by the legacy API */
++	chans_available &= ~BCM2835_DMA_BULK_MASK;
+ 
+ 	for (i = 0; i < pdev->num_resources; i++) {
+ 		irq = platform_get_irq(pdev, i);
+@@ -866,6 +868,7 @@ static int bcm2835_dma_remove(struct pla
+ {
+ 	struct bcm2835_dmadev *od = platform_get_drvdata(pdev);
+ 
++	bcm_dmaman_remove(pdev);
+ 	dma_async_device_unregister(&od->ddev);
+ 	bcm2835_dma_free(od);
+ 
+@@ -881,7 +884,22 @@ static struct platform_driver bcm2835_dm
+ 	},
+ };
+ 
+-module_platform_driver(bcm2835_dma_driver);
++static int bcm2835_dma_init(void)
++{
++	return platform_driver_register(&bcm2835_dma_driver);
++}
++
++static void bcm2835_dma_exit(void)
++{
++	platform_driver_unregister(&bcm2835_dma_driver);
++}
++
++/*
++ * Load after serial driver (arch_initcall) so we see the messages if it fails,
++ * but before drivers (module_init) that need a DMA channel.
++ */
++subsys_initcall(bcm2835_dma_init);
++module_exit(bcm2835_dma_exit);
+ 
+ MODULE_ALIAS("platform:bcm2835-dma");
+ MODULE_DESCRIPTION("BCM2835 DMA engine driver");
diff --git a/target/linux/brcm2708/patches-4.4/0023-bcm2835-dma-Fix-dreq-not-set-for-slave-transfers.patch b/target/linux/brcm2708/patches-4.4/0023-bcm2835-dma-Fix-dreq-not-set-for-slave-transfers.patch
new file mode 100644
index 0000000..42cf9fd
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0023-bcm2835-dma-Fix-dreq-not-set-for-slave-transfers.patch
@@ -0,0 +1,21 @@
+From 50eef5c715b894683aebf81332c82426dc10f8cb Mon Sep 17 00:00:00 2001
+From: Matthias Reichl <hias at horus.com>
+Date: Sat, 10 Oct 2015 12:29:18 +0200
+Subject: [PATCH 023/127] bcm2835-dma: Fix dreq not set for slave transfers
+
+Set dreq to slave_id if it is not set like in bcm2708-dmaengine.
+---
+ drivers/dma/bcm2835-dma.c | 2 ++
+ 1 file changed, 2 insertions(+)
+
+--- a/drivers/dma/bcm2835-dma.c
++++ b/drivers/dma/bcm2835-dma.c
+@@ -657,6 +657,8 @@ static int bcm2835_dma_slave_config(stru
+ 	}
+ 
+ 	c->cfg = *cfg;
++	if (!c->dreq)
++		c->dreq = cfg->slave_id;
+ 
+ 	return 0;
+ }
diff --git a/target/linux/brcm2708/patches-4.4/0024-bcm2835-dma-Limit-cyclic-transfers-on-lite-channels-.patch b/target/linux/brcm2708/patches-4.4/0024-bcm2835-dma-Limit-cyclic-transfers-on-lite-channels-.patch
new file mode 100644
index 0000000..4461fec
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0024-bcm2835-dma-Limit-cyclic-transfers-on-lite-channels-.patch
@@ -0,0 +1,37 @@
+From c7e464c38d38ad59899c94dfad6c3455c18f7d76 Mon Sep 17 00:00:00 2001
+From: Matthias Reichl <hias at horus.com>
+Date: Sun, 11 Oct 2015 12:28:30 +0200
+Subject: [PATCH 024/127] bcm2835-dma: Limit cyclic transfers on lite channels
+ to 32k
+
+Transfers larger than 32k cause repeated clicking with I2S soundcards.
+The exact reason is yet unknown, so limit to 32k as bcm2708-dmaengine
+did as an intermediate fix.
+---
+ drivers/dma/bcm2835-dma.c | 8 +++++++-
+ 1 file changed, 7 insertions(+), 1 deletion(-)
+
+--- a/drivers/dma/bcm2835-dma.c
++++ b/drivers/dma/bcm2835-dma.c
+@@ -144,6 +144,12 @@ struct bcm2835_desc {
+  */
+ #define MAX_LITE_TRANSFER	(SZ_64K - 4)
+ 
++/*
++ * Transfers larger than 32k cause issues with the bcm2708-i2s driver,
++ * so limit transfer size to 32k as bcm2708-dmaengine did.
++ */
++#define MAX_CYCLIC_LITE_TRANSFER	SZ_32K
++
+ static inline struct bcm2835_dmadev *to_bcm2835_dma_dev(struct dma_device *d)
+ {
+ 	return container_of(d, struct bcm2835_dmadev, ddev);
+@@ -412,7 +418,7 @@ static struct dma_async_tx_descriptor *b
+ 	d->c = c;
+ 	d->dir = direction;
+ 	if (c->ch >= 8) /* LITE channel */
+-		max_size = MAX_LITE_TRANSFER;
++		max_size = MAX_CYCLIC_LITE_TRANSFER;
+ 	else
+ 		max_size = MAX_NORMAL_TRANSFER;
+ 	period_len = min(period_len, max_size);
diff --git a/target/linux/brcm2708/patches-4.4/0025-bcm2835-Add-support-for-uart1.patch b/target/linux/brcm2708/patches-4.4/0025-bcm2835-Add-support-for-uart1.patch
new file mode 100644
index 0000000..cd7f242
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0025-bcm2835-Add-support-for-uart1.patch
@@ -0,0 +1,57 @@
+From 34cb40cb97cd3080d3d0f314b2b063939b90c069 Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?Noralf=20Tr=C3=B8nnes?= <noralf at tronnes.org>
+Date: Sat, 15 Aug 2015 20:50:02 +0200
+Subject: [PATCH 025/127] bcm2835: Add support for uart1
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+This is a hack until a proper solution is agreed upon.
+Martin Sperl is doing some work in this area.
+
+Signed-off-by: Noralf Trønnes <noralf at tronnes.org>
+---
+ arch/arm/mach-bcm/board_bcm2835.c | 25 +++++++++++++++++++++++++
+ 1 file changed, 25 insertions(+)
+
+--- a/arch/arm/mach-bcm/board_bcm2835.c
++++ b/arch/arm/mach-bcm/board_bcm2835.c
+@@ -22,6 +22,29 @@
+ #include <asm/mach/arch.h>
+ #include <asm/mach/map.h>
+ 
++/* Use this hack until a proper solution is agreed upon */
++static void __init bcm2835_init_uart1(void)
++{
++	struct device_node *np;
++
++	np = of_find_compatible_node(NULL, NULL, "brcm,bcm2835-aux-uart");
++	if (of_device_is_available(np)) {
++		np = of_find_compatible_node(NULL, NULL,
++					     "bcrm,bcm2835-aux-enable");
++		if (np) {
++			void __iomem *base = of_iomap(np, 0);
++
++			if (!base) {
++				pr_err("bcm2835: Failed enabling Mini UART\n");
++				return;
++			}
++
++			writel(1, base);
++			pr_info("bcm2835: Mini UART enabled\n");
++		}
++	}
++}
++
+ static void __init bcm2835_init(void)
+ {
+ 	struct device_node *np = of_find_node_by_path("/system");
+@@ -42,6 +65,8 @@ static void __init bcm2835_init(void)
+ 		system_rev = val;
+ 	if (!of_property_read_u64(np, "linux,serial", &val64))
+ 		system_serial_low = val64;
++
++	bcm2835_init_uart1();
+ }
+ 
+ static const char * const bcm2835_compat[] = {
diff --git a/target/linux/brcm2708/patches-4.4/0026-firmware-bcm2835-Add-missing-property-tags.patch b/target/linux/brcm2708/patches-4.4/0026-firmware-bcm2835-Add-missing-property-tags.patch
new file mode 100644
index 0000000..8bd7be9
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0026-firmware-bcm2835-Add-missing-property-tags.patch
@@ -0,0 +1,62 @@
+From 40946ea47dd52c827b30d3601f7b393c46fbfcf3 Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?Noralf=20Tr=C3=B8nnes?= <noralf at tronnes.org>
+Date: Fri, 26 Jun 2015 14:21:20 +0200
+Subject: [PATCH 026/127] firmware: bcm2835: Add missing property tags
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+Signed-off-by: Noralf Trønnes <noralf at tronnes.org>
+---
+ include/soc/bcm2835/raspberrypi-firmware.h | 8 ++++++++
+ 1 file changed, 8 insertions(+)
+
+--- a/include/soc/bcm2835/raspberrypi-firmware.h
++++ b/include/soc/bcm2835/raspberrypi-firmware.h
+@@ -63,6 +63,7 @@ enum rpi_firmware_property_tag {
+ 	RPI_FIRMWARE_GET_MIN_VOLTAGE =                        0x00030008,
+ 	RPI_FIRMWARE_GET_TURBO =                              0x00030009,
+ 	RPI_FIRMWARE_GET_MAX_TEMPERATURE =                    0x0003000a,
++	RPI_FIRMWARE_GET_STC =                                0x0003000b,
+ 	RPI_FIRMWARE_ALLOCATE_MEMORY =                        0x0003000c,
+ 	RPI_FIRMWARE_LOCK_MEMORY =                            0x0003000d,
+ 	RPI_FIRMWARE_UNLOCK_MEMORY =                          0x0003000e,
+@@ -72,10 +73,12 @@ enum rpi_firmware_property_tag {
+ 	RPI_FIRMWARE_SET_ENABLE_QPU =                         0x00030012,
+ 	RPI_FIRMWARE_GET_DISPMANX_RESOURCE_MEM_HANDLE =       0x00030014,
+ 	RPI_FIRMWARE_GET_EDID_BLOCK =                         0x00030020,
++	RPI_FIRMWARE_GET_CUSTOMER_OTP =                       0x00030021,
+ 	RPI_FIRMWARE_SET_CLOCK_STATE =                        0x00038001,
+ 	RPI_FIRMWARE_SET_CLOCK_RATE =                         0x00038002,
+ 	RPI_FIRMWARE_SET_VOLTAGE =                            0x00038003,
+ 	RPI_FIRMWARE_SET_TURBO =                              0x00038009,
++	RPI_FIRMWARE_SET_CUSTOMER_OTP =                       0x00038021,
+ 
+ 	/* Dispmanx TAGS */
+ 	RPI_FIRMWARE_FRAMEBUFFER_ALLOCATE =                   0x00040001,
+@@ -89,6 +92,7 @@ enum rpi_firmware_property_tag {
+ 	RPI_FIRMWARE_FRAMEBUFFER_GET_VIRTUAL_OFFSET =         0x00040009,
+ 	RPI_FIRMWARE_FRAMEBUFFER_GET_OVERSCAN =               0x0004000a,
+ 	RPI_FIRMWARE_FRAMEBUFFER_GET_PALETTE =                0x0004000b,
++	RPI_FIRMWARE_FRAMEBUFFER_GET_TOUCHBUF =               0x0004000f,
+ 	RPI_FIRMWARE_FRAMEBUFFER_RELEASE =                    0x00048001,
+ 	RPI_FIRMWARE_FRAMEBUFFER_TEST_PHYSICAL_WIDTH_HEIGHT = 0x00044003,
+ 	RPI_FIRMWARE_FRAMEBUFFER_TEST_VIRTUAL_WIDTH_HEIGHT =  0x00044004,
+@@ -98,6 +102,7 @@ enum rpi_firmware_property_tag {
+ 	RPI_FIRMWARE_FRAMEBUFFER_TEST_VIRTUAL_OFFSET =        0x00044009,
+ 	RPI_FIRMWARE_FRAMEBUFFER_TEST_OVERSCAN =              0x0004400a,
+ 	RPI_FIRMWARE_FRAMEBUFFER_TEST_PALETTE =               0x0004400b,
++	RPI_FIRMWARE_FRAMEBUFFER_TEST_VSYNC =                 0x0004400e,
+ 	RPI_FIRMWARE_FRAMEBUFFER_SET_PHYSICAL_WIDTH_HEIGHT =  0x00048003,
+ 	RPI_FIRMWARE_FRAMEBUFFER_SET_VIRTUAL_WIDTH_HEIGHT =   0x00048004,
+ 	RPI_FIRMWARE_FRAMEBUFFER_SET_DEPTH =                  0x00048005,
+@@ -106,6 +111,9 @@ enum rpi_firmware_property_tag {
+ 	RPI_FIRMWARE_FRAMEBUFFER_SET_VIRTUAL_OFFSET =         0x00048009,
+ 	RPI_FIRMWARE_FRAMEBUFFER_SET_OVERSCAN =               0x0004800a,
+ 	RPI_FIRMWARE_FRAMEBUFFER_SET_PALETTE =                0x0004800b,
++	RPI_FIRMWARE_FRAMEBUFFER_SET_VSYNC =                  0x0004800e,
++
++	RPI_FIRMWARE_VCHIQ_INIT =                             0x00048010,
+ 
+ 	RPI_FIRMWARE_GET_COMMAND_LINE =                       0x00050001,
+ 	RPI_FIRMWARE_GET_DMA_CHANNELS =                       0x00060001,
diff --git a/target/linux/brcm2708/patches-4.4/0027-Main-bcm2708-bcm2709-linux-port.patch b/target/linux/brcm2708/patches-4.4/0027-Main-bcm2708-bcm2709-linux-port.patch
new file mode 100644
index 0000000..56120c6
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0027-Main-bcm2708-bcm2709-linux-port.patch
@@ -0,0 +1,2418 @@
+From 75ae3f717d7598a6eb1582097923ee0838a02a8b Mon Sep 17 00:00:00 2001
+From: popcornmix <popcornmix at gmail.com>
+Date: Sun, 12 May 2013 12:24:19 +0100
+Subject: [PATCH 027/127] Main bcm2708/bcm2709 linux port
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+Signed-off-by: popcornmix <popcornmix at gmail.com>
+Signed-off-by: Noralf Trønnes <noralf at tronnes.org>
+---
+ arch/arm/Kconfig                                 |  49 +++
+ arch/arm/Kconfig.debug                           |   8 +
+ arch/arm/Makefile                                |   2 +
+ arch/arm/kernel/head.S                           |   8 +
+ arch/arm/kernel/process.c                        |  10 +
+ arch/arm/mach-bcm2708/Kconfig                    |  23 ++
+ arch/arm/mach-bcm2708/Makefile                   |   5 +
+ arch/arm/mach-bcm2708/Makefile.boot              |   3 +
+ arch/arm/mach-bcm2708/bcm2708.c                  | 231 ++++++++++++
+ arch/arm/mach-bcm2708/include/mach/debug-macro.S |  22 ++
+ arch/arm/mach-bcm2708/include/mach/io.h          |  27 ++
+ arch/arm/mach-bcm2708/include/mach/memory.h      |  57 +++
+ arch/arm/mach-bcm2708/include/mach/platform.h    | 112 ++++++
+ arch/arm/mach-bcm2708/include/mach/system.h      |  37 ++
+ arch/arm/mach-bcm2708/include/mach/uncompress.h  |  84 +++++
+ arch/arm/mach-bcm2708/include/mach/vmalloc.h     |  20 ++
+ arch/arm/mach-bcm2709/Kconfig                    |  16 +
+ arch/arm/mach-bcm2709/Makefile                   |   5 +
+ arch/arm/mach-bcm2709/Makefile.boot              |   3 +
+ arch/arm/mach-bcm2709/bcm2709.c                  | 380 ++++++++++++++++++++
+ arch/arm/mach-bcm2709/include/mach/debug-macro.S |  22 ++
+ arch/arm/mach-bcm2709/include/mach/entry-macro.S | 123 +++++++
+ arch/arm/mach-bcm2709/include/mach/io.h          |  27 ++
+ arch/arm/mach-bcm2709/include/mach/memory.h      |  57 +++
+ arch/arm/mach-bcm2709/include/mach/platform.h    | 188 ++++++++++
+ arch/arm/mach-bcm2709/include/mach/system.h      |  37 ++
+ arch/arm/mach-bcm2709/include/mach/uncompress.h  |  84 +++++
+ arch/arm/mach-bcm2709/include/mach/vc_mem.h      |  35 ++
+ arch/arm/mach-bcm2709/include/mach/vmalloc.h     |  20 ++
+ arch/arm/mach-bcm2709/vc_mem.c                   | 431 +++++++++++++++++++++++
+ arch/arm/mm/Kconfig                              |   2 +-
+ arch/arm/mm/proc-v6.S                            |  15 +-
+ arch/arm/mm/proc-v7.S                            |   1 +
+ arch/arm/tools/mach-types                        |   2 +
+ drivers/clocksource/Makefile                     |   2 +-
+ drivers/irqchip/Makefile                         |   3 +
+ include/linux/mmc/host.h                         |   1 +
+ 37 files changed, 2147 insertions(+), 5 deletions(-)
+ create mode 100644 arch/arm/mach-bcm2708/Kconfig
+ create mode 100644 arch/arm/mach-bcm2708/Makefile
+ create mode 100644 arch/arm/mach-bcm2708/Makefile.boot
+ create mode 100644 arch/arm/mach-bcm2708/bcm2708.c
+ create mode 100644 arch/arm/mach-bcm2708/include/mach/debug-macro.S
+ create mode 100644 arch/arm/mach-bcm2708/include/mach/io.h
+ create mode 100644 arch/arm/mach-bcm2708/include/mach/memory.h
+ create mode 100644 arch/arm/mach-bcm2708/include/mach/platform.h
+ create mode 100644 arch/arm/mach-bcm2708/include/mach/system.h
+ create mode 100644 arch/arm/mach-bcm2708/include/mach/uncompress.h
+ create mode 100644 arch/arm/mach-bcm2708/include/mach/vmalloc.h
+ create mode 100644 arch/arm/mach-bcm2709/Kconfig
+ create mode 100644 arch/arm/mach-bcm2709/Makefile
+ create mode 100644 arch/arm/mach-bcm2709/Makefile.boot
+ create mode 100644 arch/arm/mach-bcm2709/bcm2709.c
+ create mode 100644 arch/arm/mach-bcm2709/include/mach/debug-macro.S
+ create mode 100644 arch/arm/mach-bcm2709/include/mach/entry-macro.S
+ create mode 100644 arch/arm/mach-bcm2709/include/mach/io.h
+ create mode 100644 arch/arm/mach-bcm2709/include/mach/memory.h
+ create mode 100644 arch/arm/mach-bcm2709/include/mach/platform.h
+ create mode 100644 arch/arm/mach-bcm2709/include/mach/system.h
+ create mode 100644 arch/arm/mach-bcm2709/include/mach/uncompress.h
+ create mode 100644 arch/arm/mach-bcm2709/include/mach/vc_mem.h
+ create mode 100644 arch/arm/mach-bcm2709/include/mach/vmalloc.h
+ create mode 100644 arch/arm/mach-bcm2709/vc_mem.c
+
+--- a/arch/arm/Kconfig
++++ b/arch/arm/Kconfig
+@@ -317,6 +317,52 @@ choice
+ 	default ARCH_VERSATILE if !MMU
+ 	default ARCH_MULTIPLATFORM if MMU
+ 
++config ARCH_BCM2708
++	bool "Broadcom BCM2708 family"
++	select CPU_V6
++	select ARM_AMBA
++	select CLKSRC_MMIO
++	select CLKSRC_OF if OF
++	select HAVE_SCHED_CLOCK
++	select NEED_MACH_GPIO_H
++	select NEED_MACH_MEMORY_H
++	select COMMON_CLK
++	select ARCH_HAS_CPUFREQ
++	select GENERIC_CLOCKEVENTS
++	select ARM_ERRATA_411920
++	select MACH_BCM2708
++	select MULTI_IRQ_HANDLER
++	select SPARSE_IRQ
++	select VC4
++	select FIQ
++	help
++	  This enables support for Broadcom BCM2708 boards.
++
++config ARCH_BCM2709
++	bool "Broadcom BCM2709 family"
++	select CPU_V7
++	select HAVE_SMP
++	select ARM_AMBA
++	select MIGHT_HAVE_CACHE_L2X0
++	select HAVE_SCHED_CLOCK
++	select NEED_MACH_MEMORY_H
++	select NEED_MACH_IO_H
++	select COMMON_CLK
++	select ARCH_HAS_CPUFREQ
++	select GENERIC_CLOCKEVENTS
++	select MACH_BCM2709
++	select MULTI_IRQ_HANDLER
++	select SPARSE_IRQ
++	select MFD_SYSCON
++	select VC4
++	select FIQ
++	select USE_OF
++	select ARCH_REQUIRE_GPIOLIB
++	select PINCTRL
++	select PINCTRL_BCM2835
++	help
++	  This enables support for Broadcom BCM2709 boards.
++
+ config ARCH_MULTIPLATFORM
+ 	bool "Allow multiple platforms to be selected"
+ 	depends on MMU
+@@ -808,6 +854,9 @@ config ARCH_VIRT
+ # Kconfigs may be included either alphabetically (according to the
+ # plat- suffix) or along side the corresponding mach-* source.
+ #
++source "arch/arm/mach-bcm2708/Kconfig"
++source "arch/arm/mach-bcm2709/Kconfig"
++
+ source "arch/arm/mach-mvebu/Kconfig"
+ 
+ source "arch/arm/mach-alpine/Kconfig"
+--- a/arch/arm/Kconfig.debug
++++ b/arch/arm/Kconfig.debug
+@@ -1241,6 +1241,14 @@ choice
+ 		  options; the platform specific options are deprecated
+ 		  and will be soon removed.
+ 
++	config DEBUG_BCM2708_UART0
++		bool "Broadcom BCM270X UART0 (PL011)"
++		depends on ARCH_BCM2708 || ARCH_BCM2709
++		help
++		  Say Y here if you want the debug print routines to direct
++		  their output to UART 0. The port must have been initialised
++		  by the boot-loader before use.
++
+ endchoice
+ 
+ config DEBUG_EXYNOS_UART
+--- a/arch/arm/Makefile
++++ b/arch/arm/Makefile
+@@ -159,6 +159,8 @@ textofs-$(CONFIG_ARCH_AXXIA) := 0x003080
+ 
+ # Machine directory name.  This list is sorted alphanumerically
+ # by CONFIG_* macro name.
++machine-$(CONFIG_ARCH_BCM2708)		+= bcm2708
++machine-$(CONFIG_ARCH_BCM2709)		+= bcm2709
+ machine-$(CONFIG_ARCH_ALPINE)		+= alpine
+ machine-$(CONFIG_ARCH_AT91)		+= at91
+ machine-$(CONFIG_ARCH_AXXIA)		+= axxia
+--- a/arch/arm/kernel/head.S
++++ b/arch/arm/kernel/head.S
+@@ -700,6 +700,14 @@ ARM_BE8(rev16	ip, ip)
+ 	ldrcc	r7, [r4], #4	@ use branch for delay slot
+ 	bcc	1b
+ 	ret	lr
++	nop
++	nop
++	nop
++	nop
++	nop
++	nop
++	nop
++	nop
+ #endif
+ ENDPROC(__fixup_a_pv_table)
+ 
+--- a/arch/arm/kernel/process.c
++++ b/arch/arm/kernel/process.c
+@@ -91,6 +91,16 @@ void arch_cpu_idle_exit(void)
+ 	ledtrig_cpu(CPU_LED_IDLE_END);
+ }
+ 
++char bcm2708_reboot_mode = 'h';
++
++int __init reboot_setup(char *str)
++{
++	bcm2708_reboot_mode = str[0];
++	return 1;
++}
++
++__setup("reboot=", reboot_setup);
++
+ void __show_regs(struct pt_regs *regs)
+ {
+ 	unsigned long flags;
+--- /dev/null
++++ b/arch/arm/mach-bcm2708/Kconfig
+@@ -0,0 +1,23 @@
++menu "Broadcom BCM2708 Implementations"
++	depends on ARCH_BCM2708
++
++config MACH_BCM2708
++	bool "Broadcom BCM2708 Development Platform"
++	select NEED_MACH_MEMORY_H
++	select NEED_MACH_IO_H
++	select CPU_V6
++	select USE_OF
++	select ARCH_REQUIRE_GPIOLIB
++	select PINCTRL
++	select PINCTRL_BCM2835
++	help
++	  Include support for the Broadcom(R) BCM2708 platform.
++
++config BCM2708_NOL2CACHE
++	bool "Videocore L2 cache disable"
++	depends on MACH_BCM2708
++	default n
++	help
++	  Do not allow the ARM to use the GPU's L2 cache. Requires disable_l2cache in config.txt.
++
++endmenu
+--- /dev/null
++++ b/arch/arm/mach-bcm2708/Makefile
+@@ -0,0 +1,5 @@
++#
++# Makefile for the linux kernel.
++#
++
++obj-$(CONFIG_MACH_BCM2708) 	+= bcm2708.o
+--- /dev/null
++++ b/arch/arm/mach-bcm2708/Makefile.boot
+@@ -0,0 +1,3 @@
++   zreladdr-y	:= 0x00008000
++params_phys-y	:= 0x00000100
++initrd_phys-y	:= 0x00800000
+--- /dev/null
++++ b/arch/arm/mach-bcm2708/bcm2708.c
+@@ -0,0 +1,231 @@
++/*
++ *  linux/arch/arm/mach-bcm2708/bcm2708.c
++ *
++ *  Copyright (C) 2010 Broadcom
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License as published by
++ * the Free Software Foundation; either version 2 of the License, or
++ * (at your option) any later version.
++ *
++ * This program is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
++ * GNU General Public License for more details.
++ *
++ * You should have received a copy of the GNU General Public License
++ * along with this program; if not, write to the Free Software
++ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
++ */
++
++#include <linux/init.h>
++#include <linux/dma-mapping.h>
++#include <linux/module.h>
++#include <linux/of_platform.h>
++#include <asm/system_info.h>
++#include <asm/mach-types.h>
++#include <asm/mach/arch.h>
++#include <asm/mach/map.h>
++
++#include <mach/system.h>
++
++#include <linux/broadcom/vc_cma.h>
++
++/* Effectively we have an IOMMU (ARM<->VideoCore map) that is set up to
++ * give us IO access only to 64Mbytes of physical memory (26 bits).  We could
++ * represent this window by setting our dmamasks to 26 bits but, in fact,
++ * we're not going to use addresses outside this range (they're not in real
++ * memory) so we don't bother.
++ *
++ * In the future we might include code to use this IOMMU to remap other
++ * physical addresses onto VideoCore memory then the use of 32-bits would be
++ * more legitimate.
++ */
++
++/* command line parameters */
++static unsigned boardrev, serial;
++static unsigned reboot_part = 0;
++
++static struct map_desc bcm2708_io_desc[] __initdata = {
++	{
++	 .virtual = IO_ADDRESS(ARMCTRL_BASE),
++	 .pfn = __phys_to_pfn(ARMCTRL_BASE),
++	 .length = SZ_4K,
++	 .type = MT_DEVICE},
++	{
++	 .virtual = IO_ADDRESS(UART0_BASE),
++	 .pfn = __phys_to_pfn(UART0_BASE),
++	 .length = SZ_4K,
++	 .type = MT_DEVICE},
++	{
++	 .virtual = IO_ADDRESS(UART1_BASE),
++	 .pfn = __phys_to_pfn(UART1_BASE),
++	 .length = SZ_4K,
++	 .type = MT_DEVICE},
++	{
++	 .virtual = IO_ADDRESS(DMA_BASE),
++	 .pfn = __phys_to_pfn(DMA_BASE),
++	 .length = SZ_4K,
++	 .type = MT_DEVICE},
++	{
++	 .virtual = IO_ADDRESS(MCORE_BASE),
++	 .pfn = __phys_to_pfn(MCORE_BASE),
++	 .length = SZ_4K,
++	 .type = MT_DEVICE},
++	{
++	 .virtual = IO_ADDRESS(ST_BASE),
++	 .pfn = __phys_to_pfn(ST_BASE),
++	 .length = SZ_4K,
++	 .type = MT_DEVICE},
++	{
++	 .virtual = IO_ADDRESS(USB_BASE),
++	 .pfn = __phys_to_pfn(USB_BASE),
++	 .length = SZ_128K,
++	 .type = MT_DEVICE},
++	{
++	 .virtual = IO_ADDRESS(PM_BASE),
++	 .pfn = __phys_to_pfn(PM_BASE),
++	 .length = SZ_4K,
++	 .type = MT_DEVICE},
++	{
++	 .virtual = IO_ADDRESS(GPIO_BASE),
++	 .pfn = __phys_to_pfn(GPIO_BASE),
++	 .length = SZ_4K,
++	 .type = MT_DEVICE}
++};
++
++void __init bcm2708_map_io(void)
++{
++	iotable_init(bcm2708_io_desc, ARRAY_SIZE(bcm2708_io_desc));
++}
++
++int calc_rsts(int partition)
++{
++	return PM_PASSWORD |
++		((partition & (1 << 0))  << 0) |
++		((partition & (1 << 1))  << 1) |
++		((partition & (1 << 2))  << 2) |
++		((partition & (1 << 3))  << 3) |
++		((partition & (1 << 4))  << 4) |
++		((partition & (1 << 5))  << 5);
++}
++
++static void bcm2708_restart(enum reboot_mode mode, const char *cmd)
++{
++	extern char bcm2708_reboot_mode;
++	uint32_t pm_rstc, pm_wdog;
++	uint32_t timeout = 10;
++	uint32_t pm_rsts = 0;
++
++	if(bcm2708_reboot_mode == 'q')
++	{
++		// NOOBS < 1.3 booting with reboot=q
++		pm_rsts = readl(__io_address(PM_RSTS));
++		pm_rsts = PM_PASSWORD | pm_rsts | PM_RSTS_HADWRQ_SET;
++	}
++	else if(bcm2708_reboot_mode == 'p')
++	{
++		// NOOBS < 1.3 halting
++		pm_rsts = readl(__io_address(PM_RSTS));
++		pm_rsts = PM_PASSWORD | pm_rsts | PM_RSTS_HADWRH_SET;
++	}
++	else
++	{
++		pm_rsts = calc_rsts(reboot_part);
++	}
++
++	writel(pm_rsts, __io_address(PM_RSTS));
++
++	/* Setup watchdog for reset */
++	pm_rstc = readl(__io_address(PM_RSTC));
++
++	pm_wdog = PM_PASSWORD | (timeout & PM_WDOG_TIME_SET); // watchdog timer = timer clock / 16; need password (31:16) + value (11:0)
++	pm_rstc = PM_PASSWORD | (pm_rstc & PM_RSTC_WRCFG_CLR) | PM_RSTC_WRCFG_FULL_RESET;
++
++	writel(pm_wdog, __io_address(PM_WDOG));
++	writel(pm_rstc, __io_address(PM_RSTC));
++}
++
++/* We can't really power off, but if we do the normal reset scheme, and indicate to bootcode.bin not to reboot, then most of the chip will be powered off */
++static void bcm2708_power_off(void)
++{
++	extern char bcm2708_reboot_mode;
++	if(bcm2708_reboot_mode == 'q')
++	{
++		// NOOBS < v1.3
++		bcm2708_restart('p', "");
++	}
++	else
++	{
++		/* Partition 63 is a special code for HALT that the bootloader knows not to boot */
++		reboot_part = 63;
++		/* continue with normal reset mechanism */
++		bcm2708_restart(0, "");
++	}
++}
++
++static void __init bcm2708_init_uart1(void)
++{
++	struct device_node *np;
++
++	np = of_find_compatible_node(NULL, NULL, "brcm,bcm2835-aux-uart");
++	if (of_device_is_available(np)) {
++		pr_info("bcm2708: Mini UART enabled\n");
++		writel(1, __io_address(UART1_BASE + 0x4));
++	}
++}
++
++void __init bcm2708_init(void)
++{
++	int ret;
++
++	vc_cma_early_init();
++
++	pm_power_off = bcm2708_power_off;
++
++	ret = of_platform_populate(NULL, of_default_bus_match_table, NULL,
++				   NULL);
++	if (ret) {
++		pr_err("of_platform_populate failed: %d\n", ret);
++		BUG();
++	}
++
++	bcm2708_init_uart1();
++
++	system_rev = boardrev;
++	system_serial_low = serial;
++}
++
++void __init bcm2708_init_early(void)
++{
++	/*
++	 * Some devices allocate their coherent buffers from atomic
++	 * context. Increase the size of the atomic coherent pool to make
++	 * sure such allocations won't fail.
++	 */
++	init_dma_coherent_pool_size(SZ_4M);
++}
++
++static void __init board_reserve(void)
++{
++	vc_cma_reserve();
++}
++
++static const char * const bcm2708_compat[] = {
++	"brcm,bcm2708",
++	NULL
++};
++
++MACHINE_START(BCM2708, "BCM2708")
++    /* Maintainer: Broadcom Europe Ltd. */
++	.map_io = bcm2708_map_io,
++	.init_machine = bcm2708_init,
++	.init_early = bcm2708_init_early,
++	.reserve = board_reserve,
++	.restart	= bcm2708_restart,
++	.dt_compat = bcm2708_compat,
++MACHINE_END
++
++module_param(boardrev, uint, 0644);
++module_param(serial, uint, 0644);
++module_param(reboot_part, uint, 0644);
+--- /dev/null
++++ b/arch/arm/mach-bcm2708/include/mach/debug-macro.S
+@@ -0,0 +1,22 @@
++/* arch/arm/mach-bcm2708/include/mach/debug-macro.S
++ *
++ * Debugging macro include header
++ *
++ *  Copyright (C) 2010 Broadcom
++ *  Copyright (C) 1994-1999 Russell King
++ *  Moved from linux/arch/arm/kernel/debug.S by Ben Dooks
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License version 2 as
++ * published by the Free Software Foundation.
++ *
++*/
++
++#include <mach/platform.h>
++
++		.macro	addruart, rp, rv, tmp
++		ldr	\rp, =UART0_BASE
++		ldr	\rv, =IO_ADDRESS(UART0_BASE)
++		.endm
++
++#include <debug/pl01x.S>
+--- /dev/null
++++ b/arch/arm/mach-bcm2708/include/mach/io.h
+@@ -0,0 +1,27 @@
++/*
++ *  arch/arm/mach-bcm2708/include/mach/io.h
++ *
++ *  Copyright (C) 2003 ARM Limited
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License as published by
++ * the Free Software Foundation; either version 2 of the License, or
++ * (at your option) any later version.
++ *
++ * This program is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
++ * GNU General Public License for more details.
++ *
++ * You should have received a copy of the GNU General Public License
++ * along with this program; if not, write to the Free Software
++ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
++ */
++#ifndef __ASM_ARM_ARCH_IO_H
++#define __ASM_ARM_ARCH_IO_H
++
++#define IO_SPACE_LIMIT 0xffffffff
++
++#define __io(a)		__typesafe_io(a)
++
++#endif
+--- /dev/null
++++ b/arch/arm/mach-bcm2708/include/mach/memory.h
+@@ -0,0 +1,57 @@
++/*
++ *  arch/arm/mach-bcm2708/include/mach/memory.h
++ *
++ *  Copyright (C) 2010 Broadcom
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License as published by
++ * the Free Software Foundation; either version 2 of the License, or
++ * (at your option) any later version.
++ *
++ * This program is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
++ * GNU General Public License for more details.
++ *
++ * You should have received a copy of the GNU General Public License
++ * along with this program; if not, write to the Free Software
++ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
++ */
++#ifndef __ASM_ARCH_MEMORY_H
++#define __ASM_ARCH_MEMORY_H
++
++/* Memory overview:
++
++   [ARMcore] <--virtual addr-->
++   [ARMmmu] <--physical addr-->
++   [GERTmap] <--bus addr-->
++   [VCperiph]
++
++*/
++
++/*
++ * Physical DRAM offset.
++ */
++#define BCM_PLAT_PHYS_OFFSET	UL(0x00000000)
++#define VC_ARMMEM_OFFSET	UL(0x00000000)   /* offset in VC of ARM memory */
++
++#ifdef CONFIG_BCM2708_NOL2CACHE
++ #define _REAL_BUS_OFFSET UL(0xC0000000)   /* don't use L1 or L2 caches */
++#else
++ #define _REAL_BUS_OFFSET UL(0x40000000)   /* use L2 cache */
++#endif
++
++/* We're using the memory at 64M in the VideoCore for Linux - this adjustment
++ * will provide the offset into this area as well as setting the bits that
++ * stop the L1 and L2 cache from being used
++ *
++ * WARNING: this only works because the ARM is given memory at a fixed location
++ *          (ARMMEM_OFFSET)
++ */
++#define BUS_OFFSET          (VC_ARMMEM_OFFSET + _REAL_BUS_OFFSET)
++#define __virt_to_bus(x)    ((x) + (BUS_OFFSET - PAGE_OFFSET))
++#define __bus_to_virt(x)    ((x) - (BUS_OFFSET - PAGE_OFFSET))
++#define __pfn_to_bus(x)     (__pfn_to_phys(x) + (BUS_OFFSET - BCM_PLAT_PHYS_OFFSET))
++#define __bus_to_pfn(x)     __phys_to_pfn((x) - (BUS_OFFSET - BCM_PLAT_PHYS_OFFSET))
++
++#endif
+--- /dev/null
++++ b/arch/arm/mach-bcm2708/include/mach/platform.h
+@@ -0,0 +1,112 @@
++/*
++ * arch/arm/mach-bcm2708/include/mach/platform.h
++ *
++ * Copyright (C) 2010 Broadcom
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License as published by
++ * the Free Software Foundation; either version 2 of the License, or
++ * (at your option) any later version.
++ *
++ * This program is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
++ * GNU General Public License for more details.
++ *
++ * You should have received a copy of the GNU General Public License
++ * along with this program; if not, write to the Free Software
++ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
++ */
++
++#ifndef _BCM2708_PLATFORM_H
++#define _BCM2708_PLATFORM_H
++
++
++/* macros to get at IO space when running virtually */
++#define IO_ADDRESS(x)	(((x) & 0x0fffffff) + (((x) >> 4) & 0x0f000000) + 0xf0000000)
++
++#define __io_address(n)     IOMEM(IO_ADDRESS(n))
++
++
++/*
++ *  SDRAM
++ */
++#define BCM2708_SDRAM_BASE           0x00000000
++
++/*
++ *  Logic expansion modules
++ *
++ */
++
++
++/* ------------------------------------------------------------------------
++ *  BCM2708 ARMCTRL Registers
++ * ------------------------------------------------------------------------
++ */
++
++#define HW_REGISTER_RW(addr) (addr)
++#define HW_REGISTER_RO(addr) (addr)
++
++/*
++ * Definitions and addresses for the ARM CONTROL logic
++ * This file is manually generated.
++ */
++
++#define BCM2708_PERI_BASE        0x20000000
++#define IC0_BASE                 (BCM2708_PERI_BASE + 0x2000)
++#define ST_BASE                  (BCM2708_PERI_BASE + 0x3000)   /* System Timer */
++#define MPHI_BASE		 (BCM2708_PERI_BASE + 0x6000)	/* Message-based Parallel Host Interface */
++#define DMA_BASE		 (BCM2708_PERI_BASE + 0x7000)	/* DMA controller */
++#define ARM_BASE                 (BCM2708_PERI_BASE + 0xB000)	 /* BCM2708 ARM control block */
++#define PM_BASE			 (BCM2708_PERI_BASE + 0x100000) /* Power Management, Reset controller and Watchdog registers */
++#define PCM_CLOCK_BASE           (BCM2708_PERI_BASE + 0x101098) /* PCM Clock */
++#define RNG_BASE                 (BCM2708_PERI_BASE + 0x104000) /* Hardware RNG */
++#define GPIO_BASE                (BCM2708_PERI_BASE + 0x200000) /* GPIO */
++#define UART0_BASE               (BCM2708_PERI_BASE + 0x201000)	/* Uart 0 */
++#define MMCI0_BASE               (BCM2708_PERI_BASE + 0x202000) /* MMC interface */
++#define I2S_BASE                 (BCM2708_PERI_BASE + 0x203000) /* I2S */
++#define SPI0_BASE		 (BCM2708_PERI_BASE + 0x204000) /* SPI0 */
++#define BSC0_BASE		 (BCM2708_PERI_BASE + 0x205000) /* BSC0 I2C/TWI */
++#define UART1_BASE               (BCM2708_PERI_BASE + 0x215000) /* Uart 1 */
++#define EMMC_BASE                (BCM2708_PERI_BASE + 0x300000) /* eMMC interface */
++#define SMI_BASE		 (BCM2708_PERI_BASE + 0x600000) /* SMI */
++#define BSC1_BASE		 (BCM2708_PERI_BASE + 0x804000) /* BSC1 I2C/TWI */
++#define USB_BASE                 (BCM2708_PERI_BASE + 0x980000) /* DTC_OTG USB controller */
++#define MCORE_BASE               (BCM2708_PERI_BASE + 0x0000)   /* Fake frame buffer device (actually the multicore sync block) */
++
++#define ARMCTRL_BASE             (ARM_BASE + 0x000)
++#define ARMCTRL_IC_BASE          (ARM_BASE + 0x200)           /* ARM interrupt controller */
++#define ARMCTRL_TIMER0_1_BASE    (ARM_BASE + 0x400)           /* Timer 0 and 1 */
++#define ARMCTRL_0_SBM_BASE       (ARM_BASE + 0x800)           /* User 0 (ARM)'s Semaphores Doorbells and Mailboxes */
++
++/*
++ * Watchdog
++ */
++#define PM_RSTC			       (PM_BASE+0x1c)
++#define PM_RSTS			       (PM_BASE+0x20)
++#define PM_WDOG			       (PM_BASE+0x24)
++
++#define PM_WDOG_RESET                                         0000000000
++#define PM_PASSWORD		       0x5a000000
++#define PM_WDOG_TIME_SET	       0x000fffff
++#define PM_RSTC_WRCFG_CLR              0xffffffcf
++#define PM_RSTC_WRCFG_SET              0x00000030
++#define PM_RSTC_WRCFG_FULL_RESET       0x00000020
++#define PM_RSTC_RESET                  0x00000102
++
++#define PM_RSTS_HADPOR_SET                                 0x00001000
++#define PM_RSTS_HADSRH_SET                                 0x00000400
++#define PM_RSTS_HADSRF_SET                                 0x00000200
++#define PM_RSTS_HADSRQ_SET                                 0x00000100
++#define PM_RSTS_HADWRH_SET                                 0x00000040
++#define PM_RSTS_HADWRF_SET                                 0x00000020
++#define PM_RSTS_HADWRQ_SET                                 0x00000010
++#define PM_RSTS_HADDRH_SET                                 0x00000004
++#define PM_RSTS_HADDRF_SET                                 0x00000002
++#define PM_RSTS_HADDRQ_SET                                 0x00000001
++
++#define UART0_CLOCK      3000000
++
++#endif
++
++/* END */
+--- /dev/null
++++ b/arch/arm/mach-bcm2708/include/mach/system.h
+@@ -0,0 +1,37 @@
++/*
++ *  arch/arm/mach-bcm2708/include/mach/system.h
++ *
++ *  Copyright (C) 2010 Broadcom
++ *  Copyright (C) 2003 ARM Limited
++ *  Copyright (C) 2000 Deep Blue Solutions Ltd
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License as published by
++ * the Free Software Foundation; either version 2 of the License, or
++ * (at your option) any later version.
++ *
++ * This program is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
++ * GNU General Public License for more details.
++ *
++ * You should have received a copy of the GNU General Public License
++ * along with this program; if not, write to the Free Software
++ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
++ */
++#ifndef __ASM_ARCH_SYSTEM_H
++#define __ASM_ARCH_SYSTEM_H
++
++#include <linux/io.h>
++#include <mach/platform.h>
++
++static inline void arch_idle(void)
++{
++	/*
++	 * This should do all the clock switching
++	 * and wait for interrupt tricks
++	 */
++	cpu_do_idle();
++}
++
++#endif
+--- /dev/null
++++ b/arch/arm/mach-bcm2708/include/mach/uncompress.h
+@@ -0,0 +1,84 @@
++/*
++ *  arch/arm/mach-bcm2708/include/mach/uncompress.h
++ *
++ *  Copyright (C) 2010 Broadcom
++ *  Copyright (C) 2003 ARM Limited
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License as published by
++ * the Free Software Foundation; either version 2 of the License, or
++ * (at your option) any later version.
++ *
++ * This program is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
++ * GNU General Public License for more details.
++ *
++ * You should have received a copy of the GNU General Public License
++ * along with this program; if not, write to the Free Software
++ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
++ */
++
++#include <linux/io.h>
++#include <linux/amba/serial.h>
++#include <mach/platform.h>
++
++#define UART_BAUD 115200
++
++#define BCM2708_UART_DR   __io(UART0_BASE + UART01x_DR)
++#define BCM2708_UART_FR   __io(UART0_BASE + UART01x_FR)
++#define BCM2708_UART_IBRD __io(UART0_BASE + UART011_IBRD)
++#define BCM2708_UART_FBRD __io(UART0_BASE + UART011_FBRD)
++#define BCM2708_UART_LCRH __io(UART0_BASE + UART011_LCRH)
++#define BCM2708_UART_CR   __io(UART0_BASE + UART011_CR)
++
++/*
++ * This does not append a newline
++ */
++static inline void putc(int c)
++{
++	while (__raw_readl(BCM2708_UART_FR) & UART01x_FR_TXFF)
++		barrier();
++
++	__raw_writel(c, BCM2708_UART_DR);
++}
++
++static inline void flush(void)
++{
++	int fr;
++
++	do {
++		fr = __raw_readl(BCM2708_UART_FR);
++		barrier();
++	} while ((fr & (UART011_FR_TXFE | UART01x_FR_BUSY)) != UART011_FR_TXFE);
++}
++
++static inline void arch_decomp_setup(void)
++{
++	int temp, div, rem, frac;
++
++	temp = 16 * UART_BAUD;
++	div = UART0_CLOCK / temp;
++	rem = UART0_CLOCK % temp;
++	temp = (8 * rem) / UART_BAUD;
++	frac = (temp >> 1) + (temp & 1);
++
++	/* Make sure the UART is disabled before we start */
++	__raw_writel(0, BCM2708_UART_CR);
++
++	/* Set the baud rate */
++	__raw_writel(div, BCM2708_UART_IBRD);
++	__raw_writel(frac, BCM2708_UART_FBRD);
++
++	/* Set the UART to 8n1, FIFO enabled */
++	__raw_writel(UART01x_LCRH_WLEN_8 | UART01x_LCRH_FEN, BCM2708_UART_LCRH);
++
++	/* Enable the UART */
++	__raw_writel(UART01x_CR_UARTEN | UART011_CR_TXE | UART011_CR_RXE,
++			BCM2708_UART_CR);
++}
++
++/*
++ * nothing to do
++ */
++#define arch_decomp_wdog()
+--- /dev/null
++++ b/arch/arm/mach-bcm2708/include/mach/vmalloc.h
+@@ -0,0 +1,20 @@
++/*
++ *  arch/arm/mach-bcm2708/include/mach/vmalloc.h
++ *
++ *  Copyright (C) 2010 Broadcom
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License as published by
++ * the Free Software Foundation; either version 2 of the License, or
++ * (at your option) any later version.
++ *
++ * This program is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
++ * GNU General Public License for more details.
++ *
++ * You should have received a copy of the GNU General Public License
++ * along with this program; if not, write to the Free Software
++ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
++ */
++#define VMALLOC_END		(0xe8000000)
+--- /dev/null
++++ b/arch/arm/mach-bcm2709/Kconfig
+@@ -0,0 +1,16 @@
++menu "Broadcom BCM2709 Implementations"
++	depends on ARCH_BCM2709
++
++config MACH_BCM2709
++	bool "Broadcom BCM2709 Development Platform"
++	help
++	  Include support for the Broadcom(R) BCM2709 platform.
++
++config BCM2708_NOL2CACHE
++	bool "Videocore L2 cache disable"
++	depends on MACH_BCM2709
++	default y
++	help
++	  Do not allow the ARM to use the GPU's L2 cache. Requires disable_l2cache in config.txt.
++
++endmenu
+--- /dev/null
++++ b/arch/arm/mach-bcm2709/Makefile
+@@ -0,0 +1,5 @@
++#
++# Makefile for the linux kernel.
++#
++
++obj-$(CONFIG_MACH_BCM2709) 	+= bcm2709.o
+--- /dev/null
++++ b/arch/arm/mach-bcm2709/Makefile.boot
+@@ -0,0 +1,3 @@
++   zreladdr-y	:= 0x00008000
++params_phys-y	:= 0x00000100
++initrd_phys-y	:= 0x00800000
+--- /dev/null
++++ b/arch/arm/mach-bcm2709/bcm2709.c
+@@ -0,0 +1,380 @@
++/*
++ *  linux/arch/arm/mach-bcm2709/bcm2709.c
++ *
++ *  Copyright (C) 2010 Broadcom
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License as published by
++ * the Free Software Foundation; either version 2 of the License, or
++ * (at your option) any later version.
++ *
++ * This program is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
++ * GNU General Public License for more details.
++ *
++ * You should have received a copy of the GNU General Public License
++ * along with this program; if not, write to the Free Software
++ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
++ */
++
++#include <linux/init.h>
++#include <linux/dma-mapping.h>
++#include <linux/interrupt.h>
++#include <linux/clk-provider.h>
++#include <linux/clocksource.h>
++#include <linux/io.h>
++#include <linux/module.h>
++#include <linux/of_platform.h>
++
++#include <asm/system_info.h>
++#include <asm/mach-types.h>
++#include <asm/cputype.h>
++
++#include <asm/mach/arch.h>
++#include <asm/mach/map.h>
++
++#include <mach/system.h>
++
++#include <linux/broadcom/vc_cma.h>
++
++/* Effectively we have an IOMMU (ARM<->VideoCore map) that is set up to
++ * give us IO access only to 64Mbytes of physical memory (26 bits).  We could
++ * represent this window by setting our dmamasks to 26 bits but, in fact,
++ * we're not going to use addresses outside this range (they're not in real
++ * memory) so we don't bother.
++ *
++ * In the future we might include code to use this IOMMU to remap other
++ * physical addresses onto VideoCore memory then the use of 32-bits would be
++ * more legitimate.
++ */
++
++/* command line parameters */
++static unsigned boardrev, serial;
++static unsigned reboot_part = 0;
++
++static struct map_desc bcm2709_io_desc[] __initdata = {
++	{
++	 .virtual = IO_ADDRESS(ARMCTRL_BASE),
++	 .pfn = __phys_to_pfn(ARMCTRL_BASE),
++	 .length = SZ_4K,
++	 .type = MT_DEVICE},
++	{
++	 .virtual = IO_ADDRESS(UART0_BASE),
++	 .pfn = __phys_to_pfn(UART0_BASE),
++	 .length = SZ_4K,
++	 .type = MT_DEVICE},
++	{
++	 .virtual = IO_ADDRESS(UART1_BASE),
++	 .pfn = __phys_to_pfn(UART1_BASE),
++	 .length = SZ_4K,
++	 .type = MT_DEVICE},
++	{
++	 .virtual = IO_ADDRESS(DMA_BASE),
++	 .pfn = __phys_to_pfn(DMA_BASE),
++	 .length = SZ_4K,
++	 .type = MT_DEVICE},
++	{
++	 .virtual = IO_ADDRESS(MCORE_BASE),
++	 .pfn = __phys_to_pfn(MCORE_BASE),
++	 .length = SZ_4K,
++	 .type = MT_DEVICE},
++	{
++	 .virtual = IO_ADDRESS(ST_BASE),
++	 .pfn = __phys_to_pfn(ST_BASE),
++	 .length = SZ_4K,
++	 .type = MT_DEVICE},
++	{
++	 .virtual = IO_ADDRESS(USB_BASE),
++	 .pfn = __phys_to_pfn(USB_BASE),
++	 .length = SZ_128K,
++	 .type = MT_DEVICE},
++	{
++	 .virtual = IO_ADDRESS(PM_BASE),
++	 .pfn = __phys_to_pfn(PM_BASE),
++	 .length = SZ_4K,
++	 .type = MT_DEVICE},
++	{
++	 .virtual = IO_ADDRESS(GPIO_BASE),
++	 .pfn = __phys_to_pfn(GPIO_BASE),
++	 .length = SZ_4K,
++	 .type = MT_DEVICE},
++	{
++	 .virtual = IO_ADDRESS(ARM_LOCAL_BASE),
++	 .pfn = __phys_to_pfn(ARM_LOCAL_BASE),
++	 .length = SZ_4K,
++	 .type = MT_DEVICE},
++};
++
++void __init bcm2709_map_io(void)
++{
++	iotable_init(bcm2709_io_desc, ARRAY_SIZE(bcm2709_io_desc));
++}
++
++int calc_rsts(int partition)
++{
++	return PM_PASSWORD |
++		((partition & (1 << 0))  << 0) |
++		((partition & (1 << 1))  << 1) |
++		((partition & (1 << 2))  << 2) |
++		((partition & (1 << 3))  << 3) |
++		((partition & (1 << 4))  << 4) |
++		((partition & (1 << 5))  << 5);
++}
++
++static void bcm2709_restart(enum reboot_mode mode, const char *cmd)
++{
++	extern char bcm2708_reboot_mode;
++	uint32_t pm_rstc, pm_wdog;
++	uint32_t timeout = 10;
++	uint32_t pm_rsts = 0;
++
++	if(bcm2708_reboot_mode == 'q')
++	{
++		// NOOBS < 1.3 booting with reboot=q
++		pm_rsts = readl(__io_address(PM_RSTS));
++		pm_rsts = PM_PASSWORD | pm_rsts | PM_RSTS_HADWRQ_SET;
++	}
++	else if(bcm2708_reboot_mode == 'p')
++	{
++		// NOOBS < 1.3 halting
++		pm_rsts = readl(__io_address(PM_RSTS));
++		pm_rsts = PM_PASSWORD | pm_rsts | PM_RSTS_HADWRH_SET;
++	}
++	else
++	{
++		pm_rsts = calc_rsts(reboot_part);
++	}
++
++	writel(pm_rsts, __io_address(PM_RSTS));
++
++	/* Setup watchdog for reset */
++	pm_rstc = readl(__io_address(PM_RSTC));
++
++	pm_wdog = PM_PASSWORD | (timeout & PM_WDOG_TIME_SET); // watchdog timer = timer clock / 16; need password (31:16) + value (11:0)
++	pm_rstc = PM_PASSWORD | (pm_rstc & PM_RSTC_WRCFG_CLR) | PM_RSTC_WRCFG_FULL_RESET;
++
++	writel(pm_wdog, __io_address(PM_WDOG));
++	writel(pm_rstc, __io_address(PM_RSTC));
++}
++
++/* We can't really power off, but if we do the normal reset scheme, and indicate to bootcode.bin not to reboot, then most of the chip will be powered off */
++static void bcm2709_power_off(void)
++{
++	extern char bcm2708_reboot_mode;
++	if(bcm2708_reboot_mode == 'q')
++	{
++		// NOOBS < v1.3
++		bcm2709_restart('p', "");
++	}
++	else
++	{
++		/* Partition 63 is a special code for HALT that the bootloader knows not to boot */
++		reboot_part = 63;
++		/* continue with normal reset mechanism */
++		bcm2709_restart(0, "");
++	}
++}
++
++static void __init bcm2709_init_uart1(void)
++{
++	struct device_node *np;
++
++	np = of_find_compatible_node(NULL, NULL, "brcm,bcm2835-aux-uart");
++	if (of_device_is_available(np)) {
++		pr_info("bcm2709: Mini UART enabled\n");
++		writel(1, __io_address(UART1_BASE + 0x4));
++	}
++}
++
++void __init bcm2709_init(void)
++{
++	int ret;
++
++	vc_cma_early_init();
++
++	pm_power_off = bcm2709_power_off;
++
++	ret = of_platform_populate(NULL, of_default_bus_match_table, NULL,
++				   NULL);
++	if (ret) {
++		pr_err("of_platform_populate failed: %d\n", ret);
++		BUG();
++	}
++
++	bcm2709_init_uart1();
++
++	system_rev = boardrev;
++	system_serial_low = serial;
++}
++
++static void __init bcm2709_timer_init(void)
++{
++	// timer control
++	writel(0, __io_address(ARM_LOCAL_CONTROL));
++	// timer pre_scaler
++	writel(0x80000000, __io_address(ARM_LOCAL_PRESCALER)); // 19.2MHz
++	//writel(0x06AAAAAB, __io_address(ARM_LOCAL_PRESCALER)); // 1MHz
++
++	of_clk_init(NULL);
++	clocksource_probe();
++}
++
++
++void __init bcm2709_init_early(void)
++{
++	/*
++	 * Some devices allocate their coherent buffers from atomic
++	 * context. Increase the size of the atomic coherent pool to make
++	 * sure such allocations won't fail.
++	 */
++	init_dma_coherent_pool_size(SZ_4M);
++}
++
++static void __init board_reserve(void)
++{
++	vc_cma_reserve();
++}
++
++
++#ifdef CONFIG_SMP
++#include <linux/smp.h>
++
++#include <asm/cacheflush.h>
++#include <asm/smp_plat.h>
++int dc4=0;
++//void dc4_log(unsigned x) { if (dc4) writel((x), __io_address(ST_BASE+10 + raw_smp_processor_id()*4)); }
++void dc4_log_dead(unsigned x) { if (dc4) writel((readl(__io_address(ST_BASE+0x10 + raw_smp_processor_id()*4)) & 0xffff) | ((x)<<16), __io_address(ST_BASE+0x10 + raw_smp_processor_id()*4)); }
++
++static void bcm2835_send_doorbell(const struct cpumask *mask, unsigned int irq)
++{
++        int cpu;
++        /*
++         * Ensure that stores to Normal memory are visible to the
++         * other CPUs before issuing the IPI.
++         */
++        dsb();
++
++        /* Convert our logical CPU mask into a physical one. */
++        for_each_cpu(cpu, mask)
++	{
++		/* submit softirq */
++		writel(1<<irq, __io_address(ARM_LOCAL_MAILBOX0_SET0 + 0x10 * MPIDR_AFFINITY_LEVEL(cpu_logical_map(cpu), 0)));
++	}
++}
++
++void __init bcm2709_smp_init_cpus(void)
++{
++	void secondary_startup(void);
++	unsigned int i, ncores;
++
++	ncores = 4; // xxx scu_get_core_count(NULL);
++	printk("[%s] enter (%x->%x)\n", __FUNCTION__, (unsigned)virt_to_phys((void *)secondary_startup), (unsigned)__io_address(ST_BASE + 0x10));
++	printk("[%s] ncores=%d\n", __FUNCTION__, ncores);
++
++	for (i = 0; i < ncores; i++) {
++		set_cpu_possible(i, true);
++		/* enable IRQ (not FIQ) */
++		writel(0x1, __io_address(ARM_LOCAL_MAILBOX_INT_CONTROL0 + 0x4 * i));
++		//writel(0xf, __io_address(ARM_LOCAL_TIMER_INT_CONTROL0   + 0x4 * i));
++	}
++	set_smp_cross_call(bcm2835_send_doorbell);
++}
++
++/*
++ * for arch/arm/kernel/smp.c:smp_prepare_cpus(unsigned int max_cpus)
++ */
++void __init bcm2709_smp_prepare_cpus(unsigned int max_cpus)
++{
++    //void __iomem *scu_base;
++
++    printk("[%s] enter\n", __FUNCTION__);
++    //scu_base = scu_base_addr();
++    //scu_enable(scu_base);
++}
++
++/*
++ * for linux/arch/arm/kernel/smp.c:secondary_start_kernel(void)
++ */
++void __init bcm2709_secondary_init(unsigned int cpu)
++{
++    printk("[%s] enter cpu:%d\n", __FUNCTION__, cpu);
++    //gic_secondary_init(0);
++}
++
++/*
++ * for linux/arch/arm/kernel/smp.c:__cpu_up(..)
++ */
++int __init bcm2709_boot_secondary(unsigned int cpu, struct task_struct *idle)
++{
++    void secondary_startup(void);
++    void *mbox_set = __io_address(ARM_LOCAL_MAILBOX3_SET0 + 0x10 * MPIDR_AFFINITY_LEVEL(cpu_logical_map(cpu), 0));
++    void *mbox_clr = __io_address(ARM_LOCAL_MAILBOX3_CLR0 + 0x10 * MPIDR_AFFINITY_LEVEL(cpu_logical_map(cpu), 0));
++    unsigned secondary_boot = (unsigned)virt_to_phys((void *)secondary_startup);
++    int timeout=20;
++    unsigned t = -1;
++    //printk("[%s] enter cpu:%d (%x->%p) %x\n", __FUNCTION__, cpu, secondary_boot, wake, readl(wake));
++
++    dsb();
++    BUG_ON(readl(mbox_clr) != 0);
++    writel(secondary_boot, mbox_set);
++
++    while (--timeout > 0) {
++	t = readl(mbox_clr);
++	if (t == 0) break;
++	cpu_relax();
++    }
++    if (timeout==0)
++        printk("[%s] cpu:%d failed to start (%x)\n", __FUNCTION__, cpu, t);
++    else
++        printk("[%s] cpu:%d started (%x) %d\n", __FUNCTION__, cpu, t, timeout);
++
++    return 0;
++}
++
++
++struct smp_operations  bcm2709_smp_ops __initdata = {
++	.smp_init_cpus		= bcm2709_smp_init_cpus,
++	.smp_prepare_cpus	= bcm2709_smp_prepare_cpus,
++	.smp_secondary_init	= bcm2709_secondary_init,
++	.smp_boot_secondary	= bcm2709_boot_secondary,
++};
++#endif
++
++static const char * const bcm2709_compat[] = {
++	"brcm,bcm2709",
++	"brcm,bcm2708", /* Could use bcm2708 in a pinch */
++	NULL
++};
++
++MACHINE_START(BCM2709, "BCM2709")
++    /* Maintainer: Broadcom Europe Ltd. */
++#ifdef CONFIG_SMP
++	.smp		= smp_ops(bcm2709_smp_ops),
++#endif
++	.map_io = bcm2709_map_io,
++	.init_time = bcm2709_timer_init,
++	.init_machine = bcm2709_init,
++	.init_early = bcm2709_init_early,
++	.reserve = board_reserve,
++	.restart	= bcm2709_restart,
++	.dt_compat = bcm2709_compat,
++MACHINE_END
++
++MACHINE_START(BCM2708, "BCM2709")
++    /* Maintainer: Broadcom Europe Ltd. */
++#ifdef CONFIG_SMP
++	.smp		= smp_ops(bcm2709_smp_ops),
++#endif
++	.map_io = bcm2709_map_io,
++	.init_time = bcm2709_timer_init,
++	.init_machine = bcm2709_init,
++	.init_early = bcm2709_init_early,
++	.reserve = board_reserve,
++	.restart	= bcm2709_restart,
++	.dt_compat = bcm2709_compat,
++MACHINE_END
++
++module_param(boardrev, uint, 0644);
++module_param(serial, uint, 0644);
++module_param(reboot_part, uint, 0644);
+--- /dev/null
++++ b/arch/arm/mach-bcm2709/include/mach/debug-macro.S
+@@ -0,0 +1,22 @@
++/* arch/arm/mach-bcm2708/include/mach/debug-macro.S
++ *
++ * Debugging macro include header
++ *
++ *  Copyright (C) 2010 Broadcom
++ *  Copyright (C) 1994-1999 Russell King
++ *  Moved from linux/arch/arm/kernel/debug.S by Ben Dooks
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License version 2 as
++ * published by the Free Software Foundation.
++ *
++*/
++
++#include <mach/platform.h>
++
++		.macro	addruart, rp, rv, tmp
++		ldr	\rp, =UART0_BASE
++		ldr	\rv, =IO_ADDRESS(UART0_BASE)
++		.endm
++
++#include <debug/pl01x.S>
+--- /dev/null
++++ b/arch/arm/mach-bcm2709/include/mach/entry-macro.S
+@@ -0,0 +1,123 @@
++/*
++ * arch/arm/mach-bcm2708/include/mach/entry-macro.S
++ *
++ * Low-level IRQ helper macros for BCM2708 platforms
++ *
++ *  Copyright (C) 2010 Broadcom
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License as published by
++ * the Free Software Foundation; either version 2 of the License, or
++ * (at your option) any later version.
++ *
++ * This program is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
++ * GNU General Public License for more details.
++ *
++ * You should have received a copy of the GNU General Public License
++ * along with this program; if not, write to the Free Software
++ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
++ */
++#include <mach/hardware.h>
++#include <mach/irqs.h>
++
++		.macro	disable_fiq
++		.endm
++
++		.macro  get_irqnr_preamble, base, tmp
++		ldr	\base, =IO_ADDRESS(ARMCTRL_IC_BASE)
++		.endm
++
++		.macro  arch_ret_to_user, tmp1, tmp2
++		.endm
++
++		.macro	get_irqnr_and_base, irqnr, irqstat, base, tmp
++		/* get core number */
++		mrc     p15, 0, \tmp, c0, c0, 5
++		ubfx    \tmp, \tmp, #0, #2
++
++		/* get core's local interrupt controller */
++		ldr	\irqstat, = __io_address(ARM_LOCAL_IRQ_PENDING0)	@ local interrupt source
++		add	\irqstat, \irqstat, \tmp, lsl #2
++		ldr	\tmp, [\irqstat]
++		/* ignore gpu interrupt */
++		bic     \tmp, #0x100
++		/* ignore mailbox interrupts */
++		bics    \tmp, #0xf0
++		beq	1005f
++
++		@ For non-zero x, LSB(x) = 31 - CLZ(x^(x-1))
++		@ N.B. CLZ is an ARM5 instruction.
++		mov	\irqnr, #(ARM_IRQ_LOCAL_BASE + 31)
++		sub	\irqstat, \tmp, #1
++		eor	\irqstat, \irqstat, \tmp
++		clz	\tmp, \irqstat
++		sub	\irqnr, \tmp
++		b	1020f
++1005:
++		/* get core number */
++		mrc     p15, 0, \tmp, c0, c0, 5
++		ubfx    \tmp, \tmp, #0, #2
++
++                cmp	\tmp, #1
++		beq	1020f
++                cmp	\tmp, #2
++		beq	1020f
++                cmp	\tmp, #3
++		beq	1020f
++
++		/* get masked status */
++		ldr	\irqstat, [\base, #(ARM_IRQ_PEND0 - ARMCTRL_IC_BASE)]
++		mov	\irqnr, #(ARM_IRQ0_BASE + 31)
++		and	\tmp, \irqstat, #0x300		 @ save bits 8 and 9
++		/* clear bits 8 and 9, and test */
++		bics	\irqstat, \irqstat, #0x300
++		bne	1010f
++
++		tst	\tmp, #0x100
++		ldrne	\irqstat, [\base, #(ARM_IRQ_PEND1 - ARMCTRL_IC_BASE)]
++		movne \irqnr, #(ARM_IRQ1_BASE + 31)
++		@ Mask out the interrupts also present in PEND0 - see SW-5809
++		bicne \irqstat, #((1<<7) | (1<<9) | (1<<10))
++		bicne \irqstat, #((1<<18) | (1<<19))
++		bne	1010f
++
++		tst	\tmp, #0x200
++		ldrne \irqstat, [\base, #(ARM_IRQ_PEND2 - ARMCTRL_IC_BASE)]
++		movne \irqnr, #(ARM_IRQ2_BASE + 31)
++		@ Mask out the interrupts also present in PEND0 - see SW-5809
++		bicne \irqstat, #((1<<21) | (1<<22) | (1<<23) | (1<<24) | (1<<25))
++		bicne \irqstat, #((1<<30))
++		beq 1020f
++
++1010:
++		@ For non-zero x, LSB(x) = 31 - CLZ(x^(x-1))
++		@ N.B. CLZ is an ARM5 instruction.
++		sub	\tmp, \irqstat, #1
++		eor	\irqstat, \irqstat, \tmp
++		clz	\tmp, \irqstat
++		sub	\irqnr, \tmp
++
++1020:	@ EQ will be set if no irqs pending
++
++		.endm
++
++		.macro  test_for_ipi, irqnr, irqstat, base, tmp
++		/* get core number */
++		mrc     p15, 0, \tmp, c0, c0, 5
++		ubfx    \tmp, \tmp, #0, #2
++		/* get core's mailbox interrupt control */
++		ldr	\irqstat, = __io_address(ARM_LOCAL_MAILBOX0_CLR0)	@ mbox_clr
++		add	\irqstat, \irqstat, \tmp, lsl #4
++		ldr	\tmp, [\irqstat]
++		cmp     \tmp, #0
++		beq	1030f
++		clz	\tmp, \tmp
++		rsb	\irqnr, \tmp, #31
++		mov	\tmp, #1
++		lsl	\tmp, \irqnr
++		str	\tmp, [\irqstat]  @ clear interrupt source
++		dsb
++1030:	@ EQ will be set if no irqs pending
++		.endm
+--- /dev/null
++++ b/arch/arm/mach-bcm2709/include/mach/io.h
+@@ -0,0 +1,27 @@
++/*
++ *  arch/arm/mach-bcm2708/include/mach/io.h
++ *
++ *  Copyright (C) 2003 ARM Limited
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License as published by
++ * the Free Software Foundation; either version 2 of the License, or
++ * (at your option) any later version.
++ *
++ * This program is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
++ * GNU General Public License for more details.
++ *
++ * You should have received a copy of the GNU General Public License
++ * along with this program; if not, write to the Free Software
++ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
++ */
++#ifndef __ASM_ARM_ARCH_IO_H
++#define __ASM_ARM_ARCH_IO_H
++
++#define IO_SPACE_LIMIT 0xffffffff
++
++#define __io(a)		__typesafe_io(a)
++
++#endif
+--- /dev/null
++++ b/arch/arm/mach-bcm2709/include/mach/memory.h
+@@ -0,0 +1,57 @@
++/*
++ *  arch/arm/mach-bcm2708/include/mach/memory.h
++ *
++ *  Copyright (C) 2010 Broadcom
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License as published by
++ * the Free Software Foundation; either version 2 of the License, or
++ * (at your option) any later version.
++ *
++ * This program is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
++ * GNU General Public License for more details.
++ *
++ * You should have received a copy of the GNU General Public License
++ * along with this program; if not, write to the Free Software
++ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
++ */
++#ifndef __ASM_ARCH_MEMORY_H
++#define __ASM_ARCH_MEMORY_H
++
++/* Memory overview:
++
++   [ARMcore] <--virtual addr-->
++   [ARMmmu] <--physical addr-->
++   [GERTmap] <--bus addr-->
++   [VCperiph]
++
++*/
++
++/*
++ * Physical DRAM offset.
++ */
++#define BCM_PLAT_PHYS_OFFSET	UL(0x00000000)
++#define VC_ARMMEM_OFFSET	UL(0x00000000)   /* offset in VC of ARM memory */
++
++#ifdef CONFIG_BCM2708_NOL2CACHE
++ #define _REAL_BUS_OFFSET UL(0xC0000000)   /* don't use L1 or L2 caches */
++#else
++ #define _REAL_BUS_OFFSET UL(0x40000000)   /* use L2 cache */
++#endif
++
++/* We're using the memory at 64M in the VideoCore for Linux - this adjustment
++ * will provide the offset into this area as well as setting the bits that
++ * stop the L1 and L2 cache from being used
++ *
++ * WARNING: this only works because the ARM is given memory at a fixed location
++ *          (ARMMEM_OFFSET)
++ */
++#define BUS_OFFSET          (VC_ARMMEM_OFFSET + _REAL_BUS_OFFSET)
++#define __virt_to_bus(x)    ((x) + (BUS_OFFSET - PAGE_OFFSET))
++#define __bus_to_virt(x)    ((x) - (BUS_OFFSET - PAGE_OFFSET))
++#define __pfn_to_bus(x)     (__pfn_to_phys(x) + (BUS_OFFSET - BCM_PLAT_PHYS_OFFSET))
++#define __bus_to_pfn(x)     __phys_to_pfn((x) - (BUS_OFFSET - BCM_PLAT_PHYS_OFFSET))
++
++#endif
+--- /dev/null
++++ b/arch/arm/mach-bcm2709/include/mach/platform.h
+@@ -0,0 +1,188 @@
++/*
++ * arch/arm/mach-bcm2708/include/mach/platform.h
++ *
++ * Copyright (C) 2010 Broadcom
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License as published by
++ * the Free Software Foundation; either version 2 of the License, or
++ * (at your option) any later version.
++ *
++ * This program is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
++ * GNU General Public License for more details.
++ *
++ * You should have received a copy of the GNU General Public License
++ * along with this program; if not, write to the Free Software
++ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
++ */
++
++#ifndef _BCM2708_PLATFORM_H
++#define _BCM2708_PLATFORM_H
++
++
++/* macros to get at IO space when running virtually */
++#define IO_ADDRESS(x)	(((x) & 0x00ffffff) + (((x) >> 4) & 0x0f000000) + 0xf0000000)
++
++#define __io_address(n)     IOMEM(IO_ADDRESS(n))
++
++
++/*
++ *  SDRAM
++ */
++#define BCM2708_SDRAM_BASE           0x00000000
++
++/*
++ *  Logic expansion modules
++ *
++ */
++
++
++/* ------------------------------------------------------------------------
++ *  BCM2708 ARMCTRL Registers
++ * ------------------------------------------------------------------------
++ */
++
++#define HW_REGISTER_RW(addr) (addr)
++#define HW_REGISTER_RO(addr) (addr)
++
++/*
++ * Definitions and addresses for the ARM CONTROL logic
++ * This file is manually generated.
++ */
++
++#define BCM2708_PERI_BASE        0x3F000000
++#define IC0_BASE                 (BCM2708_PERI_BASE + 0x2000)
++#define ST_BASE                  (BCM2708_PERI_BASE + 0x3000)   /* System Timer */
++#define MPHI_BASE		 (BCM2708_PERI_BASE + 0x6000)	/* Message-based Parallel Host Interface */
++#define DMA_BASE		 (BCM2708_PERI_BASE + 0x7000)	/* DMA controller */
++#define ARM_BASE                 (BCM2708_PERI_BASE + 0xB000)	 /* BCM2708 ARM control block */
++#define PM_BASE			 (BCM2708_PERI_BASE + 0x100000) /* Power Management, Reset controller and Watchdog registers */
++#define PCM_CLOCK_BASE           (BCM2708_PERI_BASE + 0x101098) /* PCM Clock */
++#define RNG_BASE                 (BCM2708_PERI_BASE + 0x104000) /* Hardware RNG */
++#define GPIO_BASE                (BCM2708_PERI_BASE + 0x200000) /* GPIO */
++#define UART0_BASE               (BCM2708_PERI_BASE + 0x201000)	/* Uart 0 */
++#define MMCI0_BASE               (BCM2708_PERI_BASE + 0x202000) /* MMC interface */
++#define I2S_BASE                 (BCM2708_PERI_BASE + 0x203000) /* I2S */
++#define SPI0_BASE		 (BCM2708_PERI_BASE + 0x204000) /* SPI0 */
++#define BSC0_BASE		 (BCM2708_PERI_BASE + 0x205000) /* BSC0 I2C/TWI */
++#define UART1_BASE               (BCM2708_PERI_BASE + 0x215000) /* Uart 1 */
++#define EMMC_BASE                (BCM2708_PERI_BASE + 0x300000) /* eMMC interface */
++#define SMI_BASE		 (BCM2708_PERI_BASE + 0x600000) /* SMI */
++#define BSC1_BASE		 (BCM2708_PERI_BASE + 0x804000) /* BSC1 I2C/TWI */
++#define USB_BASE                 (BCM2708_PERI_BASE + 0x980000) /* DTC_OTG USB controller */
++#define MCORE_BASE               (BCM2708_PERI_BASE + 0x0000)   /* Fake frame buffer device (actually the multicore sync block) */
++
++#define ARMCTRL_BASE             (ARM_BASE + 0x000)
++#define ARMCTRL_IC_BASE          (ARM_BASE + 0x200)           /* ARM interrupt controller */
++#define ARMCTRL_TIMER0_1_BASE    (ARM_BASE + 0x400)           /* Timer 0 and 1 */
++#define ARMCTRL_0_SBM_BASE       (ARM_BASE + 0x800)           /* User 0 (ARM)'s Semaphores Doorbells and Mailboxes */
++
++/*
++ * Watchdog
++ */
++#define PM_RSTC			       (PM_BASE+0x1c)
++#define PM_RSTS			       (PM_BASE+0x20)
++#define PM_WDOG			       (PM_BASE+0x24)
++
++#define PM_WDOG_RESET                                         0000000000
++#define PM_PASSWORD		       0x5a000000
++#define PM_WDOG_TIME_SET	       0x000fffff
++#define PM_RSTC_WRCFG_CLR              0xffffffcf
++#define PM_RSTC_WRCFG_SET              0x00000030
++#define PM_RSTC_WRCFG_FULL_RESET       0x00000020
++#define PM_RSTC_RESET                  0x00000102
++
++#define PM_RSTS_HADPOR_SET                                 0x00001000
++#define PM_RSTS_HADSRH_SET                                 0x00000400
++#define PM_RSTS_HADSRF_SET                                 0x00000200
++#define PM_RSTS_HADSRQ_SET                                 0x00000100
++#define PM_RSTS_HADWRH_SET                                 0x00000040
++#define PM_RSTS_HADWRF_SET                                 0x00000020
++#define PM_RSTS_HADWRQ_SET                                 0x00000010
++#define PM_RSTS_HADDRH_SET                                 0x00000004
++#define PM_RSTS_HADDRF_SET                                 0x00000002
++#define PM_RSTS_HADDRQ_SET                                 0x00000001
++
++#define UART0_CLOCK      3000000
++
++#define ARM_LOCAL_BASE 0x40000000
++#define ARM_LOCAL_CONTROL		HW_REGISTER_RW(ARM_LOCAL_BASE+0x000)
++
++#define ARM_LOCAL_CONTROL		HW_REGISTER_RW(ARM_LOCAL_BASE+0x000)
++#define ARM_LOCAL_PRESCALER		HW_REGISTER_RW(ARM_LOCAL_BASE+0x008)
++#define ARM_LOCAL_GPU_INT_ROUTING	HW_REGISTER_RW(ARM_LOCAL_BASE+0x00C)
++#define ARM_LOCAL_PM_ROUTING_SET	HW_REGISTER_RW(ARM_LOCAL_BASE+0x010)
++#define ARM_LOCAL_PM_ROUTING_CLR	HW_REGISTER_RW(ARM_LOCAL_BASE+0x014)
++#define ARM_LOCAL_TIMER_LS		HW_REGISTER_RW(ARM_LOCAL_BASE+0x01C)
++#define ARM_LOCAL_TIMER_MS		HW_REGISTER_RW(ARM_LOCAL_BASE+0x020)
++#define ARM_LOCAL_INT_ROUTING		HW_REGISTER_RW(ARM_LOCAL_BASE+0x024)
++#define ARM_LOCAL_AXI_COUNT		HW_REGISTER_RW(ARM_LOCAL_BASE+0x02C)
++#define ARM_LOCAL_AXI_IRQ		HW_REGISTER_RW(ARM_LOCAL_BASE+0x030)
++#define ARM_LOCAL_TIMER_CONTROL		HW_REGISTER_RW(ARM_LOCAL_BASE+0x034)
++#define ARM_LOCAL_TIMER_WRITE		HW_REGISTER_RW(ARM_LOCAL_BASE+0x038)
++
++#define ARM_LOCAL_TIMER_INT_CONTROL0	HW_REGISTER_RW(ARM_LOCAL_BASE+0x040)
++#define ARM_LOCAL_TIMER_INT_CONTROL1	HW_REGISTER_RW(ARM_LOCAL_BASE+0x044)
++#define ARM_LOCAL_TIMER_INT_CONTROL2	HW_REGISTER_RW(ARM_LOCAL_BASE+0x048)
++#define ARM_LOCAL_TIMER_INT_CONTROL3	HW_REGISTER_RW(ARM_LOCAL_BASE+0x04C)
++
++#define ARM_LOCAL_MAILBOX_INT_CONTROL0	HW_REGISTER_RW(ARM_LOCAL_BASE+0x050)
++#define ARM_LOCAL_MAILBOX_INT_CONTROL1	HW_REGISTER_RW(ARM_LOCAL_BASE+0x054)
++#define ARM_LOCAL_MAILBOX_INT_CONTROL2	HW_REGISTER_RW(ARM_LOCAL_BASE+0x058)
++#define ARM_LOCAL_MAILBOX_INT_CONTROL3	HW_REGISTER_RW(ARM_LOCAL_BASE+0x05C)
++
++#define ARM_LOCAL_IRQ_PENDING0		HW_REGISTER_RW(ARM_LOCAL_BASE+0x060)
++#define ARM_LOCAL_IRQ_PENDING1		HW_REGISTER_RW(ARM_LOCAL_BASE+0x064)
++#define ARM_LOCAL_IRQ_PENDING2		HW_REGISTER_RW(ARM_LOCAL_BASE+0x068)
++#define ARM_LOCAL_IRQ_PENDING3		HW_REGISTER_RW(ARM_LOCAL_BASE+0x06C)
++
++#define ARM_LOCAL_FIQ_PENDING0		HW_REGISTER_RW(ARM_LOCAL_BASE+0x070)
++#define ARM_LOCAL_FIQ_PENDING1		HW_REGISTER_RW(ARM_LOCAL_BASE+0x074)
++#define ARM_LOCAL_FIQ_PENDING2		HW_REGISTER_RW(ARM_LOCAL_BASE+0x078)
++#define ARM_LOCAL_FIQ_PENDING3		HW_REGISTER_RW(ARM_LOCAL_BASE+0x07C)
++
++#define ARM_LOCAL_MAILBOX0_SET0		HW_REGISTER_RW(ARM_LOCAL_BASE+0x080)
++#define ARM_LOCAL_MAILBOX1_SET0		HW_REGISTER_RW(ARM_LOCAL_BASE+0x084)
++#define ARM_LOCAL_MAILBOX2_SET0		HW_REGISTER_RW(ARM_LOCAL_BASE+0x088)
++#define ARM_LOCAL_MAILBOX3_SET0		HW_REGISTER_RW(ARM_LOCAL_BASE+0x08C)
++
++#define ARM_LOCAL_MAILBOX0_SET1		HW_REGISTER_RW(ARM_LOCAL_BASE+0x090)
++#define ARM_LOCAL_MAILBOX1_SET1		HW_REGISTER_RW(ARM_LOCAL_BASE+0x094)
++#define ARM_LOCAL_MAILBOX2_SET1		HW_REGISTER_RW(ARM_LOCAL_BASE+0x098)
++#define ARM_LOCAL_MAILBOX3_SET1		HW_REGISTER_RW(ARM_LOCAL_BASE+0x09C)
++
++#define ARM_LOCAL_MAILBOX0_SET2		HW_REGISTER_RW(ARM_LOCAL_BASE+0x0A0)
++#define ARM_LOCAL_MAILBOX1_SET2		HW_REGISTER_RW(ARM_LOCAL_BASE+0x0A4)
++#define ARM_LOCAL_MAILBOX2_SET2		HW_REGISTER_RW(ARM_LOCAL_BASE+0x0A8)
++#define ARM_LOCAL_MAILBOX3_SET2		HW_REGISTER_RW(ARM_LOCAL_BASE+0x0AC)
++
++#define ARM_LOCAL_MAILBOX0_SET3		HW_REGISTER_RW(ARM_LOCAL_BASE+0x0B0)
++#define ARM_LOCAL_MAILBOX1_SET3		HW_REGISTER_RW(ARM_LOCAL_BASE+0x0B4)
++#define ARM_LOCAL_MAILBOX2_SET3		HW_REGISTER_RW(ARM_LOCAL_BASE+0x0B8)
++#define ARM_LOCAL_MAILBOX3_SET3		HW_REGISTER_RW(ARM_LOCAL_BASE+0x0BC)
++
++#define ARM_LOCAL_MAILBOX0_CLR0		HW_REGISTER_RW(ARM_LOCAL_BASE+0x0C0)
++#define ARM_LOCAL_MAILBOX1_CLR0		HW_REGISTER_RW(ARM_LOCAL_BASE+0x0C4)
++#define ARM_LOCAL_MAILBOX2_CLR0		HW_REGISTER_RW(ARM_LOCAL_BASE+0x0C8)
++#define ARM_LOCAL_MAILBOX3_CLR0		HW_REGISTER_RW(ARM_LOCAL_BASE+0x0CC)
++
++#define ARM_LOCAL_MAILBOX0_CLR1		HW_REGISTER_RW(ARM_LOCAL_BASE+0x0D0)
++#define ARM_LOCAL_MAILBOX1_CLR1		HW_REGISTER_RW(ARM_LOCAL_BASE+0x0D4)
++#define ARM_LOCAL_MAILBOX2_CLR1		HW_REGISTER_RW(ARM_LOCAL_BASE+0x0D8)
++#define ARM_LOCAL_MAILBOX3_CLR1		HW_REGISTER_RW(ARM_LOCAL_BASE+0x0DC)
++
++#define ARM_LOCAL_MAILBOX0_CLR2		HW_REGISTER_RW(ARM_LOCAL_BASE+0x0E0)
++#define ARM_LOCAL_MAILBOX1_CLR2		HW_REGISTER_RW(ARM_LOCAL_BASE+0x0E4)
++#define ARM_LOCAL_MAILBOX2_CLR2		HW_REGISTER_RW(ARM_LOCAL_BASE+0x0E8)
++#define ARM_LOCAL_MAILBOX3_CLR2		HW_REGISTER_RW(ARM_LOCAL_BASE+0x0EC)
++
++#define ARM_LOCAL_MAILBOX0_CLR3		HW_REGISTER_RW(ARM_LOCAL_BASE+0x0F0)
++#define ARM_LOCAL_MAILBOX1_CLR3		HW_REGISTER_RW(ARM_LOCAL_BASE+0x0F4)
++#define ARM_LOCAL_MAILBOX2_CLR3		HW_REGISTER_RW(ARM_LOCAL_BASE+0x0F8)
++#define ARM_LOCAL_MAILBOX3_CLR3		HW_REGISTER_RW(ARM_LOCAL_BASE+0x0FC)
++
++#endif
++
++/* END */
+--- /dev/null
++++ b/arch/arm/mach-bcm2709/include/mach/system.h
+@@ -0,0 +1,37 @@
++/*
++ *  arch/arm/mach-bcm2709/include/mach/system.h
++ *
++ *  Copyright (C) 2010 Broadcom
++ *  Copyright (C) 2003 ARM Limited
++ *  Copyright (C) 2000 Deep Blue Solutions Ltd
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License as published by
++ * the Free Software Foundation; either version 2 of the License, or
++ * (at your option) any later version.
++ *
++ * This program is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
++ * GNU General Public License for more details.
++ *
++ * You should have received a copy of the GNU General Public License
++ * along with this program; if not, write to the Free Software
++ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
++ */
++#ifndef __ASM_ARCH_SYSTEM_H
++#define __ASM_ARCH_SYSTEM_H
++
++#include <linux/io.h>
++#include <mach/platform.h>
++
++static inline void arch_idle(void)
++{
++	/*
++	 * This should do all the clock switching
++	 * and wait for interrupt tricks
++	 */
++	cpu_do_idle();
++}
++
++#endif
+--- /dev/null
++++ b/arch/arm/mach-bcm2709/include/mach/uncompress.h
+@@ -0,0 +1,84 @@
++/*
++ *  arch/arm/mach-bcm2709/include/mach/uncompress.h
++ *
++ *  Copyright (C) 2010 Broadcom
++ *  Copyright (C) 2003 ARM Limited
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License as published by
++ * the Free Software Foundation; either version 2 of the License, or
++ * (at your option) any later version.
++ *
++ * This program is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
++ * GNU General Public License for more details.
++ *
++ * You should have received a copy of the GNU General Public License
++ * along with this program; if not, write to the Free Software
++ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
++ */
++
++#include <linux/io.h>
++#include <linux/amba/serial.h>
++#include <mach/platform.h>
++
++#define UART_BAUD 115200
++
++#define BCM2708_UART_DR   __io(UART0_BASE + UART01x_DR)
++#define BCM2708_UART_FR   __io(UART0_BASE + UART01x_FR)
++#define BCM2708_UART_IBRD __io(UART0_BASE + UART011_IBRD)
++#define BCM2708_UART_FBRD __io(UART0_BASE + UART011_FBRD)
++#define BCM2708_UART_LCRH __io(UART0_BASE + UART011_LCRH)
++#define BCM2708_UART_CR   __io(UART0_BASE + UART011_CR)
++
++/*
++ * This does not append a newline
++ */
++static inline void putc(int c)
++{
++	while (__raw_readl(BCM2708_UART_FR) & UART01x_FR_TXFF)
++		barrier();
++
++	__raw_writel(c, BCM2708_UART_DR);
++}
++
++static inline void flush(void)
++{
++	int fr;
++
++	do {
++		fr = __raw_readl(BCM2708_UART_FR);
++		barrier();
++	} while ((fr & (UART011_FR_TXFE | UART01x_FR_BUSY)) != UART011_FR_TXFE);
++}
++
++static inline void arch_decomp_setup(void)
++{
++	int temp, div, rem, frac;
++
++	temp = 16 * UART_BAUD;
++	div = UART0_CLOCK / temp;
++	rem = UART0_CLOCK % temp;
++	temp = (8 * rem) / UART_BAUD;
++	frac = (temp >> 1) + (temp & 1);
++
++	/* Make sure the UART is disabled before we start */
++	__raw_writel(0, BCM2708_UART_CR);
++
++	/* Set the baud rate */
++	__raw_writel(div, BCM2708_UART_IBRD);
++	__raw_writel(frac, BCM2708_UART_FBRD);
++
++	/* Set the UART to 8n1, FIFO enabled */
++	__raw_writel(UART01x_LCRH_WLEN_8 | UART01x_LCRH_FEN, BCM2708_UART_LCRH);
++
++	/* Enable the UART */
++	__raw_writel(UART01x_CR_UARTEN | UART011_CR_TXE | UART011_CR_RXE,
++			BCM2708_UART_CR);
++}
++
++/*
++ * nothing to do
++ */
++#define arch_decomp_wdog()
+--- /dev/null
++++ b/arch/arm/mach-bcm2709/include/mach/vc_mem.h
+@@ -0,0 +1,35 @@
++/*****************************************************************************
++* Copyright 2010 - 2011 Broadcom Corporation.  All rights reserved.
++*
++* Unless you and Broadcom execute a separate written software license
++* agreement governing use of this software, this software is licensed to you
++* under the terms of the GNU General Public License version 2, available at
++* http://www.broadcom.com/licenses/GPLv2.php (the "GPL").
++*
++* Notwithstanding the above, under no circumstances may you combine this
++* software in any way with any other Broadcom software provided under a
++* license other than the GPL, without Broadcom's express prior written
++* consent.
++*****************************************************************************/
++
++#if !defined( VC_MEM_H )
++#define VC_MEM_H
++
++#include <linux/ioctl.h>
++
++#define VC_MEM_IOC_MAGIC  'v'
++
++#define VC_MEM_IOC_MEM_PHYS_ADDR    _IOR( VC_MEM_IOC_MAGIC, 0, unsigned long )
++#define VC_MEM_IOC_MEM_SIZE         _IOR( VC_MEM_IOC_MAGIC, 1, unsigned int )
++#define VC_MEM_IOC_MEM_BASE         _IOR( VC_MEM_IOC_MAGIC, 2, unsigned int )
++#define VC_MEM_IOC_MEM_LOAD         _IOR( VC_MEM_IOC_MAGIC, 3, unsigned int )
++
++#if defined( __KERNEL__ )
++#define VC_MEM_TO_ARM_ADDR_MASK 0x3FFFFFFF
++
++extern unsigned long mm_vc_mem_phys_addr;
++extern unsigned int  mm_vc_mem_size;
++extern int vc_mem_get_current_size( void );
++#endif
++
++#endif  /* VC_MEM_H */
+--- /dev/null
++++ b/arch/arm/mach-bcm2709/include/mach/vmalloc.h
+@@ -0,0 +1,20 @@
++/*
++ *  arch/arm/mach-bcm2709/include/mach/vmalloc.h
++ *
++ *  Copyright (C) 2010 Broadcom
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License as published by
++ * the Free Software Foundation; either version 2 of the License, or
++ * (at your option) any later version.
++ *
++ * This program is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
++ * GNU General Public License for more details.
++ *
++ * You should have received a copy of the GNU General Public License
++ * along with this program; if not, write to the Free Software
++ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
++ */
++#define VMALLOC_END		(0xff000000)
+--- /dev/null
++++ b/arch/arm/mach-bcm2709/vc_mem.c
+@@ -0,0 +1,431 @@
++/*****************************************************************************
++* Copyright 2010 - 2011 Broadcom Corporation.  All rights reserved.
++*
++* Unless you and Broadcom execute a separate written software license
++* agreement governing use of this software, this software is licensed to you
++* under the terms of the GNU General Public License version 2, available at
++* http://www.broadcom.com/licenses/GPLv2.php (the "GPL").
++*
++* Notwithstanding the above, under no circumstances may you combine this
++* software in any way with any other Broadcom software provided under a
++* license other than the GPL, without Broadcom's express prior written
++* consent.
++*****************************************************************************/
++
++#include <linux/kernel.h>
++#include <linux/module.h>
++#include <linux/fs.h>
++#include <linux/device.h>
++#include <linux/cdev.h>
++#include <linux/mm.h>
++#include <linux/slab.h>
++#include <linux/debugfs.h>
++#include <asm/uaccess.h>
++#include <linux/dma-mapping.h>
++#include <linux/platform_data/mailbox-bcm2708.h>
++
++#ifdef CONFIG_ARCH_KONA
++#include <chal/chal_ipc.h>
++#elif defined(CONFIG_ARCH_BCM2708) || defined(CONFIG_ARCH_BCM2709)
++#else
++#include <csp/chal_ipc.h>
++#endif
++
++#include "mach/vc_mem.h"
++
++#define DRIVER_NAME  "vc-mem"
++
++// Device (/dev) related variables
++static dev_t vc_mem_devnum = 0;
++static struct class *vc_mem_class = NULL;
++static struct cdev vc_mem_cdev;
++static int vc_mem_inited = 0;
++
++#ifdef CONFIG_DEBUG_FS
++static struct dentry *vc_mem_debugfs_entry;
++#endif
++
++/*
++ * Videocore memory addresses and size
++ *
++ * Drivers that wish to know the videocore memory addresses and sizes should
++ * use these variables instead of the MM_IO_BASE and MM_ADDR_IO defines in
++ * headers. This allows the other drivers to not be tied down to a certain
++ * address/size at compile time.
++ *
++ * In the future, the goal is to have the videocore memory virtual address and
++ * size be calculated at boot time rather than at compile time. The decision of
++ * where the videocore memory resides and its size would be in the hands of the
++ * bootloader (and/or kernel). When that happens, the values of these variables
++ * would be calculated and assigned in the init function.
++ */
++// in the 2835 the VC is mapped above the ARM, but the ARM has full access to VC space
++unsigned long mm_vc_mem_phys_addr = 0x00000000;
++unsigned int mm_vc_mem_size = 0;
++unsigned int mm_vc_mem_base = 0;
++
++EXPORT_SYMBOL(mm_vc_mem_phys_addr);
++EXPORT_SYMBOL(mm_vc_mem_size);
++EXPORT_SYMBOL(mm_vc_mem_base);
++
++static uint phys_addr = 0;
++static uint mem_size = 0;
++static uint mem_base = 0;
++
++
++/****************************************************************************
++*
++*   vc_mem_open
++*
++***************************************************************************/
++
++static int
++vc_mem_open(struct inode *inode, struct file *file)
++{
++	(void) inode;
++	(void) file;
++
++	pr_debug("%s: called file = 0x%p\n", __func__, file);
++
++	return 0;
++}
++
++/****************************************************************************
++*
++*   vc_mem_release
++*
++***************************************************************************/
++
++static int
++vc_mem_release(struct inode *inode, struct file *file)
++{
++	(void) inode;
++	(void) file;
++
++	pr_debug("%s: called file = 0x%p\n", __func__, file);
++
++	return 0;
++}
++
++/****************************************************************************
++*
++*   vc_mem_get_size
++*
++***************************************************************************/
++
++static void
++vc_mem_get_size(void)
++{
++}
++
++/****************************************************************************
++*
++*   vc_mem_get_base
++*
++***************************************************************************/
++
++static void
++vc_mem_get_base(void)
++{
++}
++
++/****************************************************************************
++*
++*   vc_mem_get_current_size
++*
++***************************************************************************/
++
++int
++vc_mem_get_current_size(void)
++{
++	return mm_vc_mem_size;
++}
++
++EXPORT_SYMBOL_GPL(vc_mem_get_current_size);
++
++/****************************************************************************
++*
++*   vc_mem_ioctl
++*
++***************************************************************************/
++
++static long
++vc_mem_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
++{
++	int rc = 0;
++
++	(void) cmd;
++	(void) arg;
++
++	pr_debug("%s: called file = 0x%p\n", __func__, file);
++
++	switch (cmd) {
++	case VC_MEM_IOC_MEM_PHYS_ADDR:
++		{
++			pr_debug("%s: VC_MEM_IOC_MEM_PHYS_ADDR=0x%p\n",
++				__func__, (void *) mm_vc_mem_phys_addr);
++
++			if (copy_to_user((void *) arg, &mm_vc_mem_phys_addr,
++					 sizeof (mm_vc_mem_phys_addr)) != 0) {
++				rc = -EFAULT;
++			}
++			break;
++		}
++	case VC_MEM_IOC_MEM_SIZE:
++		{
++			// Get the videocore memory size first
++			vc_mem_get_size();
++
++			pr_debug("%s: VC_MEM_IOC_MEM_SIZE=%u\n", __func__,
++				mm_vc_mem_size);
++
++			if (copy_to_user((void *) arg, &mm_vc_mem_size,
++					 sizeof (mm_vc_mem_size)) != 0) {
++				rc = -EFAULT;
++			}
++			break;
++		}
++	case VC_MEM_IOC_MEM_BASE:
++		{
++			// Get the videocore memory base
++			vc_mem_get_base();
++
++			pr_debug("%s: VC_MEM_IOC_MEM_BASE=%u\n", __func__,
++				mm_vc_mem_base);
++
++			if (copy_to_user((void *) arg, &mm_vc_mem_base,
++					 sizeof (mm_vc_mem_base)) != 0) {
++				rc = -EFAULT;
++			}
++			break;
++		}
++	case VC_MEM_IOC_MEM_LOAD:
++		{
++			// Get the videocore memory base
++			vc_mem_get_base();
++
++			pr_debug("%s: VC_MEM_IOC_MEM_LOAD=%u\n", __func__,
++				mm_vc_mem_base);
++
++			if (copy_to_user((void *) arg, &mm_vc_mem_base,
++					 sizeof (mm_vc_mem_base)) != 0) {
++				rc = -EFAULT;
++			}
++			break;
++		}
++	default:
++		{
++			return -ENOTTY;
++		}
++	}
++	pr_debug("%s: file = 0x%p returning %d\n", __func__, file, rc);
++
++	return rc;
++}
++
++/****************************************************************************
++*
++*   vc_mem_mmap
++*
++***************************************************************************/
++
++static int
++vc_mem_mmap(struct file *filp, struct vm_area_struct *vma)
++{
++	int rc = 0;
++	unsigned long length = vma->vm_end - vma->vm_start;
++	unsigned long offset = vma->vm_pgoff << PAGE_SHIFT;
++
++	pr_debug("%s: vm_start = 0x%08lx vm_end = 0x%08lx vm_pgoff = 0x%08lx\n",
++		__func__, (long) vma->vm_start, (long) vma->vm_end,
++		(long) vma->vm_pgoff);
++
++	if (offset + length > mm_vc_mem_size) {
++		pr_err("%s: length %lu is too big\n", __func__, length);
++		return -EINVAL;
++	}
++	// Do not cache the memory map
++	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
++
++	rc = remap_pfn_range(vma, vma->vm_start,
++			     (mm_vc_mem_phys_addr >> PAGE_SHIFT) +
++			     vma->vm_pgoff, length, vma->vm_page_prot);
++	if (rc != 0) {
++		pr_err("%s: remap_pfn_range failed (rc=%d)\n", __func__, rc);
++	}
++
++	return rc;
++}
++
++/****************************************************************************
++*
++*   File Operations for the driver.
++*
++***************************************************************************/
++
++static const struct file_operations vc_mem_fops = {
++	.owner = THIS_MODULE,
++	.open = vc_mem_open,
++	.release = vc_mem_release,
++	.unlocked_ioctl = vc_mem_ioctl,
++	.mmap = vc_mem_mmap,
++};
++
++#ifdef CONFIG_DEBUG_FS
++static void vc_mem_debugfs_deinit(void)
++{
++	debugfs_remove_recursive(vc_mem_debugfs_entry);
++	vc_mem_debugfs_entry = NULL;
++}
++
++
++static int vc_mem_debugfs_init(
++	struct device *dev)
++{
++	vc_mem_debugfs_entry = debugfs_create_dir(DRIVER_NAME, NULL);
++	if (!vc_mem_debugfs_entry) {
++		dev_warn(dev, "could not create debugfs entry\n");
++		return -EFAULT;
++	}
++
++	if (!debugfs_create_x32("vc_mem_phys_addr",
++				0444,
++				vc_mem_debugfs_entry,
++				(u32 *)&mm_vc_mem_phys_addr)) {
++		dev_warn(dev, "%s:could not create vc_mem_phys entry\n",
++			__func__);
++		goto fail;
++	}
++
++	if (!debugfs_create_x32("vc_mem_size",
++				0444,
++				vc_mem_debugfs_entry,
++				(u32 *)&mm_vc_mem_size)) {
++		dev_warn(dev, "%s:could not create vc_mem_size entry\n",
++			__func__);
++		goto fail;
++	}
++
++	if (!debugfs_create_x32("vc_mem_base",
++				0444,
++				vc_mem_debugfs_entry,
++				(u32 *)&mm_vc_mem_base)) {
++		dev_warn(dev, "%s:could not create vc_mem_base entry\n",
++			 __func__);
++		goto fail;
++	}
++
++	return 0;
++
++fail:
++	vc_mem_debugfs_deinit();
++	return -EFAULT;
++}
++
++#endif /* CONFIG_DEBUG_FS */
++
++
++/****************************************************************************
++*
++*   vc_mem_init
++*
++***************************************************************************/
++
++static int __init
++vc_mem_init(void)
++{
++	int rc = -EFAULT;
++	struct device *dev;
++
++	pr_debug("%s: called\n", __func__);
++
++	mm_vc_mem_phys_addr = phys_addr;
++	mm_vc_mem_size = mem_size;
++	mm_vc_mem_base = mem_base;
++
++	vc_mem_get_size();
++
++	pr_info("vc-mem: phys_addr:0x%08lx mem_base=0x%08x mem_size:0x%08x(%u MiB)\n",
++		mm_vc_mem_phys_addr, mm_vc_mem_base, mm_vc_mem_size, mm_vc_mem_size / (1024 * 1024));
++
++	if ((rc = alloc_chrdev_region(&vc_mem_devnum, 0, 1, DRIVER_NAME)) < 0) {
++		pr_err("%s: alloc_chrdev_region failed (rc=%d)\n",
++		       __func__, rc);
++		goto out_err;
++	}
++
++	cdev_init(&vc_mem_cdev, &vc_mem_fops);
++	if ((rc = cdev_add(&vc_mem_cdev, vc_mem_devnum, 1)) != 0) {
++		pr_err("%s: cdev_add failed (rc=%d)\n", __func__, rc);
++		goto out_unregister;
++	}
++
++	vc_mem_class = class_create(THIS_MODULE, DRIVER_NAME);
++	if (IS_ERR(vc_mem_class)) {
++		rc = PTR_ERR(vc_mem_class);
++		pr_err("%s: class_create failed (rc=%d)\n", __func__, rc);
++		goto out_cdev_del;
++	}
++
++	dev = device_create(vc_mem_class, NULL, vc_mem_devnum, NULL,
++			    DRIVER_NAME);
++	if (IS_ERR(dev)) {
++		rc = PTR_ERR(dev);
++		pr_err("%s: device_create failed (rc=%d)\n", __func__, rc);
++		goto out_class_destroy;
++	}
++
++#ifdef CONFIG_DEBUG_FS
++	/* don't fail if the debug entries cannot be created */
++	vc_mem_debugfs_init(dev);
++#endif
++
++	vc_mem_inited = 1;
++	return 0;
++
++	device_destroy(vc_mem_class, vc_mem_devnum);
++
++      out_class_destroy:
++	class_destroy(vc_mem_class);
++	vc_mem_class = NULL;
++
++      out_cdev_del:
++	cdev_del(&vc_mem_cdev);
++
++      out_unregister:
++	unregister_chrdev_region(vc_mem_devnum, 1);
++
++      out_err:
++	return -1;
++}
++
++/****************************************************************************
++*
++*   vc_mem_exit
++*
++***************************************************************************/
++
++static void __exit
++vc_mem_exit(void)
++{
++	pr_debug("%s: called\n", __func__);
++
++	if (vc_mem_inited) {
++#ifdef CONFIG_DEBUG_FS
++		vc_mem_debugfs_deinit();
++#endif
++		device_destroy(vc_mem_class, vc_mem_devnum);
++		class_destroy(vc_mem_class);
++		cdev_del(&vc_mem_cdev);
++		unregister_chrdev_region(vc_mem_devnum, 1);
++	}
++}
++
++module_init(vc_mem_init);
++module_exit(vc_mem_exit);
++MODULE_LICENSE("GPL");
++MODULE_AUTHOR("Broadcom Corporation");
++
++module_param(phys_addr, uint, 0644);
++module_param(mem_size, uint, 0644);
++module_param(mem_base, uint, 0644);
+--- a/arch/arm/mm/Kconfig
++++ b/arch/arm/mm/Kconfig
+@@ -358,7 +358,7 @@ config CPU_PJ4B
+ 
+ # ARMv6
+ config CPU_V6
+-	bool "Support ARM V6 processor" if (!ARCH_MULTIPLATFORM || ARCH_MULTI_V6) && (ARCH_INTEGRATOR || MACH_REALVIEW_EB || MACH_REALVIEW_PBX)
++	bool "Support ARM V6 processor" if (!ARCH_MULTIPLATFORM || ARCH_MULTI_V6) && (ARCH_INTEGRATOR || MACH_REALVIEW_EB || MACH_REALVIEW_PBX || MACH_BCM2708)
+ 	select CPU_32v6
+ 	select CPU_ABRT_EV6
+ 	select CPU_CACHE_V6
+--- a/arch/arm/mm/proc-v6.S
++++ b/arch/arm/mm/proc-v6.S
+@@ -73,10 +73,19 @@ ENDPROC(cpu_v6_reset)
+  *
+  *	IRQs are already disabled.
+  */
++
++/* See jira SW-5991 for details of this workaround */
+ ENTRY(cpu_v6_do_idle)
+-	mov	r1, #0
+-	mcr	p15, 0, r1, c7, c10, 4		@ DWB - WFI may enter a low-power mode
+-	mcr	p15, 0, r1, c7, c0, 4		@ wait for interrupt
++	.align 5
++	mov     r1, #2
++1:	subs	r1, #1
++	nop
++	mcreq	p15, 0, r1, c7, c10, 4		@ DWB - WFI may enter a low-power mode
++	mcreq	p15, 0, r1, c7, c0, 4		@ wait for interrupt
++	nop
++	nop
++	nop
++	bne 1b
+ 	ret	lr
+ 
+ ENTRY(cpu_v6_dcache_clean_area)
+--- a/arch/arm/mm/proc-v7.S
++++ b/arch/arm/mm/proc-v7.S
+@@ -480,6 +480,7 @@ __errata_finish:
+ 	orr	r0, r0, r6			@ set them
+  THUMB(	orr	r0, r0, #1 << 30	)	@ Thumb exceptions
+ 	ret	lr				@ return to head.S:__ret
++        .space 256
+ ENDPROC(__v7_setup)
+ 
+ 	.align	2
+--- a/arch/arm/tools/mach-types
++++ b/arch/arm/tools/mach-types
+@@ -522,6 +522,8 @@ torbreck		MACH_TORBRECK		TORBRECK		3090
+ prima2_evb		MACH_PRIMA2_EVB		PRIMA2_EVB		3103
+ paz00			MACH_PAZ00		PAZ00			3128
+ acmenetusfoxg20		MACH_ACMENETUSFOXG20	ACMENETUSFOXG20		3129
++bcm2708			MACH_BCM2708		BCM2708			3138
++bcm2709			MACH_BCM2709		BCM2709			3139
+ ag5evm			MACH_AG5EVM		AG5EVM			3189
+ ics_if_voip		MACH_ICS_IF_VOIP	ICS_IF_VOIP		3206
+ wlf_cragg_6410		MACH_WLF_CRAGG_6410	WLF_CRAGG_6410		3207
+--- a/drivers/clocksource/Makefile
++++ b/drivers/clocksource/Makefile
+@@ -19,7 +19,7 @@ obj-$(CONFIG_CLKSRC_NOMADIK_MTU)	+= noma
+ obj-$(CONFIG_CLKSRC_DBX500_PRCMU)	+= clksrc-dbx500-prcmu.o
+ obj-$(CONFIG_ARMADA_370_XP_TIMER)	+= time-armada-370-xp.o
+ obj-$(CONFIG_ORION_TIMER)	+= time-orion.o
+-obj-$(CONFIG_ARCH_BCM2835)	+= bcm2835_timer.o
++obj-$(CONFIG_ARCH_BCM2835)$(CONFIG_ARCH_BCM2708)	+= bcm2835_timer.o
+ obj-$(CONFIG_ARCH_CLPS711X)	+= clps711x-timer.o
+ obj-$(CONFIG_ARCH_ATLAS7)	+= timer-atlas7.o
+ obj-$(CONFIG_ARCH_MOXART)	+= moxart_timer.o
+--- a/drivers/irqchip/Makefile
++++ b/drivers/irqchip/Makefile
+@@ -2,6 +2,9 @@ obj-$(CONFIG_IRQCHIP)			+= irqchip.o
+ 
+ obj-$(CONFIG_ARCH_BCM2835)		+= irq-bcm2835.o
+ obj-$(CONFIG_ARCH_BCM2835)		+= irq-bcm2836.o
++obj-$(CONFIG_ARCH_BCM2708)		+= irq-bcm2835.o
++obj-$(CONFIG_ARCH_BCM2709)		+= irq-bcm2835.o
++obj-$(CONFIG_ARCH_BCM2709)		+= irq-bcm2836.o
+ obj-$(CONFIG_ARCH_EXYNOS)		+= exynos-combiner.o
+ obj-$(CONFIG_ARCH_HIP04)		+= irq-hip04.o
+ obj-$(CONFIG_ARCH_MMP)			+= irq-mmp.o
+--- a/include/linux/mmc/host.h
++++ b/include/linux/mmc/host.h
+@@ -289,6 +289,7 @@ struct mmc_host {
+ #define MMC_CAP2_HSX00_1_2V	(MMC_CAP2_HS200_1_2V_SDR | MMC_CAP2_HS400_1_2V)
+ #define MMC_CAP2_SDIO_IRQ_NOTHREAD (1 << 17)
+ #define MMC_CAP2_NO_WRITE_PROTECT (1 << 18)	/* No physical write protect pin, assume that card is always read-write */
++#define MMC_CAP2_FORCE_MULTIBLOCK (1 << 31)	/* Always use multiblock transfers */
+ 
+ 	mmc_pm_flag_t		pm_caps;	/* supported pm features */
+ 
diff --git a/target/linux/brcm2708/patches-4.4/0028-squash-include-ARCH_BCM2708-ARCH_BCM2709.patch b/target/linux/brcm2708/patches-4.4/0028-squash-include-ARCH_BCM2708-ARCH_BCM2709.patch
new file mode 100644
index 0000000..53fc545
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0028-squash-include-ARCH_BCM2708-ARCH_BCM2709.patch
@@ -0,0 +1,138 @@
+From ceefa4e6b0d4b529eed6666120674cccde24d59a Mon Sep 17 00:00:00 2001
+From: popcornmix <popcornmix at gmail.com>
+Date: Wed, 11 Nov 2015 21:01:15 +0000
+Subject: [PATCH 028/127] squash: include ARCH_BCM2708 / ARCH_BCM2709
+
+---
+ drivers/char/hw_random/Kconfig    |  2 +-
+ drivers/mailbox/Kconfig           |  2 +-
+ drivers/mailbox/bcm2835-mailbox.c | 18 ++++++++++++++++--
+ drivers/pinctrl/Makefile          |  1 +
+ drivers/pwm/Kconfig               |  2 +-
+ drivers/spi/Kconfig               |  2 +-
+ drivers/watchdog/Kconfig          |  2 +-
+ sound/soc/bcm/Kconfig             |  2 +-
+ 8 files changed, 23 insertions(+), 8 deletions(-)
+
+--- a/drivers/char/hw_random/Kconfig
++++ b/drivers/char/hw_random/Kconfig
+@@ -90,7 +90,7 @@ config HW_RANDOM_BCM63XX
+ 
+ config HW_RANDOM_BCM2835
+ 	tristate "Broadcom BCM2835 Random Number Generator support"
+-	depends on ARCH_BCM2835
++	depends on ARCH_BCM2835 || ARCH_BCM2708 || ARCH_BCM2709
+ 	default HW_RANDOM
+ 	---help---
+ 	  This driver provides kernel-side support for the Random Number
+--- a/drivers/mailbox/Kconfig
++++ b/drivers/mailbox/Kconfig
+@@ -65,7 +65,7 @@ config ALTERA_MBOX
+ 
+ config BCM2835_MBOX
+ 	tristate "BCM2835 Mailbox"
+-	depends on ARCH_BCM2835
++	depends on ARCH_BCM2835 || ARCH_BCM2708 || ARCH_BCM2709
+ 	help
+ 	  An implementation of the BCM2385 Mailbox.  It is used to invoke
+ 	  the services of the Videocore. Say Y here if you want to use the
+--- a/drivers/mailbox/bcm2835-mailbox.c
++++ b/drivers/mailbox/bcm2835-mailbox.c
+@@ -51,12 +51,15 @@
+ #define MAIL1_WRT	(ARM_0_MAIL1 + 0x00)
+ #define MAIL1_STA	(ARM_0_MAIL1 + 0x18)
+ 
++/* On ARCH_BCM270x these come through <linux/interrupt.h> (arm_control.h) */
++#ifndef ARM_MS_FULL
+ /* Status register: FIFO state. */
+ #define ARM_MS_FULL		BIT(31)
+ #define ARM_MS_EMPTY		BIT(30)
+ 
+ /* Configuration register: Enable interrupts. */
+ #define ARM_MC_IHAVEDATAIRQEN	BIT(0)
++#endif
+ 
+ struct bcm2835_mbox {
+ 	void __iomem *regs;
+@@ -151,7 +154,7 @@ static int bcm2835_mbox_probe(struct pla
+ 		return -ENOMEM;
+ 	spin_lock_init(&mbox->lock);
+ 
+-	ret = devm_request_irq(dev, irq_of_parse_and_map(dev->of_node, 0),
++	ret = devm_request_irq(dev, platform_get_irq(pdev, 0),
+ 			       bcm2835_mbox_irq, 0, dev_name(dev), mbox);
+ 	if (ret) {
+ 		dev_err(dev, "Failed to register a mailbox IRQ handler: %d\n",
+@@ -209,7 +212,18 @@ static struct platform_driver bcm2835_mb
+ 	.probe		= bcm2835_mbox_probe,
+ 	.remove		= bcm2835_mbox_remove,
+ };
+-module_platform_driver(bcm2835_mbox_driver);
++
++static int __init bcm2835_mbox_init(void)
++{
++	return platform_driver_register(&bcm2835_mbox_driver);
++}
++arch_initcall(bcm2835_mbox_init);
++
++static void __exit bcm2835_mbox_exit(void)
++{
++	platform_driver_unregister(&bcm2835_mbox_driver);
++}
++module_exit(bcm2835_mbox_exit);
+ 
+ MODULE_AUTHOR("Lubomir Rintel <lkundrak at v3.sk>");
+ MODULE_DESCRIPTION("BCM2835 mailbox IPC driver");
+--- a/drivers/pinctrl/Makefile
++++ b/drivers/pinctrl/Makefile
+@@ -40,6 +40,7 @@ obj-$(CONFIG_PINCTRL_TB10X)	+= pinctrl-t
+ obj-$(CONFIG_PINCTRL_ST) 	+= pinctrl-st.o
+ obj-$(CONFIG_PINCTRL_ZYNQ)	+= pinctrl-zynq.o
+ 
++obj-$(CONFIG_ARCH_BCM2708)$(CONFIG_ARCH_BCM2709) += bcm/
+ obj-$(CONFIG_ARCH_BCM)		+= bcm/
+ obj-$(CONFIG_ARCH_BERLIN)	+= berlin/
+ obj-y				+= freescale/
+--- a/drivers/pwm/Kconfig
++++ b/drivers/pwm/Kconfig
+@@ -85,7 +85,7 @@ config PWM_BCM_KONA
+ 
+ config PWM_BCM2835
+ 	tristate "BCM2835 PWM support"
+-	depends on ARCH_BCM2835
++	depends on ARCH_BCM2835 || ARCH_BCM2708 || ARCH_BCM2709
+ 	help
+ 	  PWM framework driver for BCM2835 controller (Raspberry Pi)
+ 
+--- a/drivers/spi/Kconfig
++++ b/drivers/spi/Kconfig
+@@ -78,7 +78,7 @@ config SPI_ATMEL
+ config SPI_BCM2835
+ 	tristate "BCM2835 SPI controller"
+ 	depends on GPIOLIB
+-	depends on ARCH_BCM2835 || COMPILE_TEST
++	depends on ARCH_BCM2835 || ARCH_BCM2708 || ARCH_BCM2709 || COMPILE_TEST
+ 	depends on GPIOLIB
+ 	help
+ 	  This selects a driver for the Broadcom BCM2835 SPI master.
+--- a/drivers/watchdog/Kconfig
++++ b/drivers/watchdog/Kconfig
+@@ -1291,7 +1291,7 @@ config BCM63XX_WDT
+ 
+ config BCM2835_WDT
+ 	tristate "Broadcom BCM2835 hardware watchdog"
+-	depends on ARCH_BCM2835
++	depends on ARCH_BCM2835 || ARCH_BCM2708 || ARCH_BCM2709
+ 	select WATCHDOG_CORE
+ 	help
+ 	  Watchdog driver for the built in watchdog hardware in Broadcom
+--- a/sound/soc/bcm/Kconfig
++++ b/sound/soc/bcm/Kconfig
+@@ -1,6 +1,6 @@
+ config SND_BCM2835_SOC_I2S
+ 	tristate "SoC Audio support for the Broadcom BCM2835 I2S module"
+-	depends on ARCH_BCM2835 || COMPILE_TEST
++	depends on ARCH_BCM2835 || MACH_BCM2708 || MACH_BCM2709 || COMPILE_TEST
+ 	select SND_SOC_GENERIC_DMAENGINE_PCM
+ 	select REGMAP_MMIO
+ 	help
diff --git a/target/linux/brcm2708/patches-4.4/0029-Add-dwc_otg-driver.patch b/target/linux/brcm2708/patches-4.4/0029-Add-dwc_otg-driver.patch
new file mode 100644
index 0000000..ee8c4fa
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0029-Add-dwc_otg-driver.patch
@@ -0,0 +1,60780 @@
+From 3fa74ccc327ddeb8f718cf4c42d61b43cc0a9626 Mon Sep 17 00:00:00 2001
+From: popcornmix <popcornmix at gmail.com>
+Date: Wed, 1 May 2013 19:46:17 +0100
+Subject: [PATCH 029/127] Add dwc_otg driver
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+Signed-off-by: popcornmix <popcornmix at gmail.com>
+
+usb: dwc: fix lockdep false positive
+
+Signed-off-by: Kari Suvanto <karis79 at gmail.com>
+
+usb: dwc: fix inconsistent lock state
+
+Signed-off-by: Kari Suvanto <karis79 at gmail.com>
+
+Add FIQ patch to dwc_otg driver. Enable with dwc_otg.fiq_fix_enable=1. Should give about 10% more ARM performance.
+Thanks to Gordon and Costas
+
+Avoid dynamic memory allocation for channel lock in USB driver. Thanks ddv2005.
+
+Add NAK holdoff scheme. Enabled by default, disable with dwc_otg.nak_holdoff_enable=0. Thanks gsh
+
+Make sure we wait for the reset to finish
+
+dwc_otg: fix bug in dwc_otg_hcd.c resulting in silent kernel
+	 memory corruption, escalating to OOPS under high USB load.
+
+dwc_otg: Fix unsafe access of QTD during URB enqueue
+
+In dwc_otg_hcd_urb_enqueue during qtd creation, it was possible that the
+transaction could complete almost immediately after the qtd was assigned
+to a host channel during URB enqueue, which meant the qtd pointer was no
+longer valid having been completed and removed. Usually, this resulted in
+an OOPS during URB submission. By predetermining whether transactions
+need to be queued or not, this unsafe pointer access is avoided.
+
+This bug was only evident on the Pi model A where a device was attached
+that had no periodic endpoints (e.g. USB pendrive or some wlan devices).
+
+dwc_otg: Fix incorrect URB allocation error handling
+
+If the memory allocation for a dwc_otg_urb failed, the kernel would OOPS
+because for some reason a member of the *unallocated* struct was set to
+zero. Error handling changed to fail correctly.
+
+dwc_otg: fix potential use-after-free case in interrupt handler
+
+If a transaction had previously aborted, certain interrupts are
+enabled to track error counts and reset where necessary. On IN
+endpoints the host generates an ACK interrupt near-simultaneously
+with completion of transfer. In the case where this transfer had
+previously had an error, this results in a use-after-free on
+the QTD memory space with a 1-byte length being overwritten to
+0x00.
+
+dwc_otg: add handling of SPLIT transaction data toggle errors
+
+Previously a data toggle error on packets from a USB1.1 device behind
+a TT would result in the Pi locking up as the driver never handled
+the associated interrupt. Patch adds basic retry mechanism and
+interrupt acknowledgement to cater for either a chance toggle error or
+for devices that have a broken initial toggle state (FT8U232/FT232BM).
+
+dwc_otg: implement tasklet for returning URBs to usbcore hcd layer
+
+The dwc_otg driver interrupt handler for transfer completion will spend
+a very long time with interrupts disabled when a URB is completed -
+this is because usb_hcd_giveback_urb is called from within the handler
+which for a USB device driver with complicated processing (e.g. webcam)
+will take an exorbitant amount of time to complete. This results in
+missed completion interrupts for other USB packets which lead to them
+being dropped due to microframe overruns.
+
+This patch splits returning the URB to the usb hcd layer into a
+high-priority tasklet. This will have most benefit for isochronous IN
+transfers but will also have incidental benefit where multiple periodic
+devices are active at once.
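+
+As a rough sketch of the deferral pattern (pop_completed_urb(),
+push_completed_urb() and the tasklet name are illustrative placeholders,
+not the actual dwc_otg symbols):
+
+	static struct tasklet_struct urb_giveback_tasklet;
+
+	static void urb_giveback_func(unsigned long data)
+	{
+		struct usb_hcd *hcd = (struct usb_hcd *)data;
+		struct urb *urb;
+
+		/* drain a driver-private completion list outside the IRQ handler */
+		while ((urb = pop_completed_urb(hcd)) != NULL)
+			usb_hcd_giveback_urb(hcd, urb, urb->status);
+	}
+
+	/* at HCD start */
+	tasklet_init(&urb_giveback_tasklet, urb_giveback_func, (unsigned long)hcd);
+
+	/* in the completion interrupt: stash the URB and defer the giveback */
+	push_completed_urb(hcd, urb);
+	tasklet_hi_schedule(&urb_giveback_tasklet);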
+
+dwc_otg: fix NAK holdoff and allow on split transactions only
+
+This corrects a bug where if a single active non-periodic endpoint
+had at least one transaction in its qh, on frnum == MAX_FRNUM the qh
+would get skipped and never get queued again. This would result in
+a silent device until error detection (automatic or otherwise) would
+either reset the device or flush and requeue the URBs.
+
+Additionally the NAK holdoff was enabled for all transactions - this
+would potentially stall a HS endpoint for 1ms if a previous error state
+enabled this interrupt and the next response was a NAK. Fix so that
+only split transactions get held off.
+
+dwc_otg: Call usb_hcd_unlink_urb_from_ep with lock held in completion handler
+
+usb_hcd_unlink_urb_from_ep must be called with the HCD lock held.  Calling it
+asynchronously in the tasklet was not safe (regression in
+c4564d4a1a0a9b10d4419e48239f5d99e88d2667).
+
+This change unlinks it from the endpoint prior to queueing it for handling in
+the tasklet, and also adds a check to ensure the urb is OK to be unlinked
+before doing so.
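+
+Roughly, the ordering this enforces looks like the following sketch
+(hcd_lock and urb_still_queued() are illustrative stand-ins for the
+driver's own lock and check):
+
+	spin_lock_irqsave(&hcd_lock, flags);
+	if (urb_still_queued(hcd, urb))
+		usb_hcd_unlink_urb_from_ep(hcd, urb);	/* requires the lock */
+	spin_unlock_irqrestore(&hcd_lock, flags);
+
+	/* the giveback itself may still be deferred to the tasklet */
+	usb_hcd_giveback_urb(hcd, urb, status);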
+
+NULL pointer dereference kernel oopses had been observed in usb_hcd_giveback_urb
+when a USB device was unplugged/replugged during data transfer.  This effect
+was reproduced using automated USB port power control, hundreds of replug
+events were performed during active transfers to confirm that the problem was
+eliminated.
+
+USB fix using a FIQ to implement split transactions
+
+This commit adds a FIQ implementation that schedules
+the split transactions using a FIQ so we don't get
+held off by the interrupt latency of Linux
+
+dwc_otg: fix device attributes and avoid kernel warnings on boot
+
+dwc_otg: avoid logging function that can cause panics
+
+See: https://github.com/raspberrypi/firmware/issues/21
+Thanks to cleverca22 for fix
+
+dwc_otg: mask correct interrupts after transaction error recovery
+
+The dwc_otg driver will unmask certain interrupts on a transaction
+that previously halted in the error state in order to reset the
+QTD error count. The various fine-grained interrupt handlers do not
+consider that other interrupts besides themselves were unmasked.
+
+By disabling the two other interrupts only ever enabled in DMA mode
+for this purpose, we can avoid unnecessary function calls in the
+IRQ handler. This will also prevent an unnecessary FIQ interrupt
+from being generated if the FIQ is enabled.
+
+dwc_otg: fiq: prevent FIQ thrash and incorrect state passing to IRQ
+
+In the case of a transaction to a device that had previously aborted
+due to an error, several interrupts are enabled to reset the error
+count when a device responds. This has the side-effect of making the
+FIQ thrash because the hardware will generate multiple instances of
+a NAK on an IN bulk/interrupt endpoint and multiple instances of ACK
+on an OUT bulk/interrupt endpoint. Make the FIQ mask and clear the
+associated interrupts.
+
+Additionally, on non-split transactions make sure that only unmasked
+interrupts are cleared. This caused a hard-to-trigger but serious
+race condition when you had the combination of an endpoint awaiting
+error recovery and a transaction completed on an endpoint - due to
+the sequencing and timing of interrupts generated by the dwc_otg core,
+it was possible to confuse the IRQ handler.
+
+Fix function tracing
+
+dwc_otg: whitespace cleanup in dwc_otg_urb_enqueue
+
+dwc_otg: prevent OOPSes during device disconnects
+
+The dwc_otg_urb_enqueue function is thread-unsafe. In particular the
+access of urb->hcpriv, usb_hcd_link_urb_to_ep, dwc_otg_urb->qtd and
+friends does not occur within a critical section and so if a device
+was unplugged during activity there was a high chance that the
+usbcore hub_thread would try to disable the endpoint with partially-
+formed entries in the URB queue. This would result in BUG() or null
+pointer dereferences.
+
+Fix so that access of urb->hcpriv, enqueuing to the hardware and
+adding to usbcore endpoint URB lists is contained within a single
+critical section.
+
+dwc_otg: prevent BUG() in TT allocation if hub address is > 16
+
+A fixed-size array is used to track TT allocation. This was
+previously set to 16 which caused a crash because
+dwc_otg_hcd_allocate_port would read past the end of the array.
+
+This was hit if a hub was plugged in which enumerated as addr > 16,
+due to previous device resets or unplugs.
+
+Also add #ifdef FIQ_DEBUG around hcd->hub_port_alloc[], which grows
+to a large size if 128 hub addresses are supported. This field is
+for debug only for tracking which frame an allocate happened in.
+
+dwc_otg: make channel halts with unknown state less damaging
+
+If the IRQ received a channel halt interrupt through the FIQ
+with no other bits set, the IRQ would not release the host
+channel and never complete the URB.
+
+Add catchall handling to treat as a transaction error and retry.
+
+dwc_otg: fiq_split: use TTs with more granularity
+
+This fixes certain issues with split transaction scheduling.
+
+- Isochronous multi-packet OUT transactions now hog the TT until
+  they are completed - this prevents hubs aborting transactions
+  if they get a periodic start-split out-of-order
+- Don't perform TT allocation on non-periodic endpoints - this
+  allows simultaneous use of the TT's bulk/control and periodic
+  transaction buffers
+
+This commit will mainly affect USB audio playback.
+
+dwc_otg: fix potential sleep while atomic during urb enqueue
+
+Fixes a regression introduced with eb1b482a. Kmalloc called from
+dwc_otg_hcd_qtd_add / dwc_otg_hcd_qtd_create did not always have
+the GFP_ATOMIC flag set. Force this flag when inside the larger
+critical section.
+
+dwc_otg: make fiq_split_enable imply fiq_fix_enable
+
+Failing to set up the FIQ correctly would result in
+"IRQ 32: nobody cared" errors in dmesg.
+
+dwc_otg: prevent crashes on host port disconnects
+
+Fix several issues resulting in crashes or inconsistent state
+if a Model A root port was disconnected.
+
+- Clean up queue heads properly in kill_urbs_in_qh_list by
+  removing the empty QHs from the schedule lists
+- Set the halt status properly to prevent IRQ handlers from
+  using freed memory
+- Add fiq_split related cleanup for saved registers
+- Make microframe scheduling reclaim host channels if
+  active during a disconnect
+- Abort URBs with -ESHUTDOWN status response, informing
+  device drivers so they respond in a more correct fashion
+  and don't try to resubmit URBs
+- Prevent IRQ handlers from attempting to handle channel
+  interrupts if the associated URB was dequeued (and the
+  driver state was cleared)
+
+dwc_otg: prevent leaking URBs during enqueue
+
+A dwc_otg_urb would get leaked if the HCD enqueue function
+failed for any reason. Free the URB at the appropriate points.
+
+dwc_otg: Enable NAK holdoff for control split transactions
+
+Certain low-speed devices take a very long time to complete a
+data or status stage of a control transaction, producing NAK
+responses until they complete internal processing - the USB2.0
+spec limit is up to 500mS. This causes the same type of interrupt
+storm as seen with USB-serial dongles prior to c8edb238.
+
+In certain circumstances, usually while booting, this interrupt
+storm could cause SD card timeouts.
+
+dwc_otg: Fix for occasional lockup on boot when doing a USB reset
+
+dwc_otg: Don't issue traffic to LS devices in FS mode
+
+Issuing low-speed packets when the root port is in full-speed mode
+causes the root port to stop responding. Explicitly fail when
+enqueuing URBs to a LS endpoint on a FS bus.
+
+Fix ARM architecture issue with local_irq_restore()
+
+If local_fiq_enable() is called before a local_irq_restore(flags) where
+the flags variable has the F bit set, the FIQ will be erroneously disabled.
+
+Fixup arch_local_irq_restore to avoid trampling the F bit in CPSR.
+
+Also fix some of the hacks previously implemented for previous dwc_otg
+incarnations.
+
+dwc_otg: fiq_fsm: Base commit for driver rewrite
+
+This commit removes the previous FIQ fixes entirely and adds fiq_fsm.
+
+This rewrite features much more complete support for split transactions
+and takes into account several OTG hardware bugs. High-speed
+isochronous transactions are also capable of being performed by fiq_fsm.
+
+All driver options have been removed and replaced with:
+  - dwc_otg.fiq_enable (bool)
+  - dwc_otg.fiq_fsm_enable (bool)
+  - dwc_otg.fiq_fsm_mask (bitmask)
+  - dwc_otg.nak_holdoff (unsigned int)
+
+Defaults are specified such that fiq_fsm behaves similarly to the
+previously implemented FIQ fixes.
+
+fiq_fsm: Push error recovery into the FIQ when fiq_fsm is used
+
+If the transfer associated with a QTD failed due to a bus error, the HCD
+would retry the transfer up to 3 times (implementing the USB2.0
+three-strikes retry in software).
+
+Due to the masking mechanism used by fiq_fsm, it is only possible to pass
+a single interrupt through to the HCD per-transfer.
+
+In this instance host channels would fall off the radar because the error
+reset would function, but the subsequent channel halt would be lost.
+
+Push the error count reset into the FIQ handler.
+
+fiq_fsm: Implement timeout mechanism
+
+For full-speed endpoints with a large packet size, interrupt latency
+runs the risk of the FIQ starting a transaction too late in a full-speed
+frame. If the device is still transmitting data when EOF2 for the
+downstream frame occurs, the hub will disable the port. This change is
+not reflected in the hub status endpoint and the device becomes
+unresponsive.
+
+Prevent high-bandwidth transactions from being started too late in a
+frame. The mechanism is not guaranteed: a combination of bit stuffing
+and hub latency may still result in a device overrunning.
+
+fiq_fsm: fix bounce buffer utilisation for Isochronous OUT
+
+Multi-packet isochronous OUT transactions were subject to a few boundary
+bugs. Fix them.
+
+Audio playback is now much more robust: however, an issue stands with
+devices that have adaptive sinks - ALSA plays samples too fast.
+
+dwc_otg: Return full-speed frame numbers in HS mode
+
+The frame counter increments on every *microframe* in high-speed mode.
+Most device drivers expect this number to be in full-speed frames - this
+caused considerable confusion to e.g. snd_usb_audio which uses the
+frame counter to estimate the number of samples played.
+
+fiq_fsm: save PID on completion of interrupt OUT transfers
+
+Also add edge case handling for interrupt transports.
+
+Note that for periodic split IN, data toggles are unimplemented in the
+OTG host hardware - it unconditionally accepts any PID.
+
+fiq_fsm: add missing case for fiq_fsm_tt_in_use()
+
+Certain combinations of bitrate and endpoint activity could
+result in a periodic transaction erroneously getting started
+while the previous Isochronous OUT was still active.
+
+fiq_fsm: clear hcintmsk for aborted transactions
+
+Prevents the FIQ from erroneously handling interrupts
+on a timed out channel.
+
+fiq_fsm: enable by default
+
+fiq_fsm: fix dequeues for non-periodic split transactions
+
+If a dequeue happened between the SSPLIT and CSPLIT phases of the
+transaction, the HCD would never receive an interrupt.
+
+fiq_fsm: Disable by default
+
+fiq_fsm: Handle HC babble errors
+
+The HCTSIZ transfer size field raises a babble interrupt if
+the counter wraps. Handle the resulting interrupt in this case.
+
+dwc_otg: fix interrupt registration for fiq_enable=0
+
+Additionally make the module parameter conditional for wherever
+hcd->fiq_state is touched.
+
+fiq_fsm: Enable by default
+
+dwc_otg: Fix various issues with root port and transaction errors
+
+Process the host port interrupts correctly (and don't trample them).
+Root port hotplug now functional again.
+
+Fix a few thinkos with the transaction error passthrough for fiq_fsm.
+
+fiq_fsm: Implement hack for Split Interrupt transactions
+
+Hubs aren't too picky about which endpoint we send Control type split
+transactions to. By treating Interrupt transfers as Control, it is
+possible to use the non-periodic queue in the OTG core as well as the
+non-periodic FIFOs in the hub itself. This massively reduces the
+microframe exclusivity/contention that periodic split transactions
+otherwise have to enforce.
+
+It goes without saying that this is a fairly egregious USB specification
+violation, but it works.
+
+Original idea by Hans Petter Selasky @ FreeBSD.org.
+
+dwc_otg: FIQ support on SMP. Set up FIQ stack and handler on Core 0 only.
+
+dwc_otg: introduce fiq_fsm_spin(un|)lock()
+
+SMP safety for the FIQ relies on register read-modify write cycles being
+completed in the correct order. Several places in the DWC code modify
+registers also touched by the FIQ. Protect these by a bare-bones lock
+mechanism.
+
+This also makes it possible to run the FIQ and IRQ handlers on different
+cores.
+
+fiq_fsm: fix build on bcm2708 and bcm2709 platforms
+
+dwc_otg: put some barriers back where they should be for UP
+
+bcm2709/dwc_otg: Setup FIQ on core 1 if >1 core active
+
+dwc_otg: fixup read-modify-write in critical paths
+
+Be more careful about read-modify-write on registers that the FIQ
+also touches.
+
+Guard fiq_fsm_spin_lock with fiq_enable check
+
+fiq_fsm: Falling out of the state machine isn't fatal
+
+This edge case can be hit if the port is disabled while the FIQ is
+in the middle of a transaction. Make the effects less severe.
+
+Also get rid of the useless return value.
+
+squash: dwc_otg: Allow to build without SMP
+
+usb: core: make overcurrent messages more prominent
+
+Hub overcurrent messages are more serious than "debug". Increase loglevel.
+
+usb: dwc_otg: Don't use dma_to_virt()
+
+Commit 6ce0d20 changes dma_to_virt() which breaks this driver.
+Open code the old dma_to_virt() implementation to work around this.
+
+Limit the use of __bus_to_virt() to cases where transfer_buffer_length
+is set and transfer_buffer is not set. This is done to increase the
+chance that this driver will also work on ARCH_BCM2835.
+
+transfer_buffer should not be NULL if the length is set, but the
+comment in the code indicates that there are situations where this
+might happen. drivers/usb/isp1760/isp1760-hcd.c also has a similar
+comment pointing to a possible: 'usb storage / SCSI bug'.
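+
+The guard described above amounts to roughly the following (buf is an
+illustrative local variable):
+
+	if (!urb->transfer_buffer && urb->transfer_buffer_length)
+		/* DMA handle but no kernel-virtual pointer: open-coded
+		 * equivalent of the old dma_to_virt() behaviour */
+		buf = (void *)__bus_to_virt((unsigned long)urb->transfer_dma);
+	else
+		buf = urb->transfer_buffer;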
+
+Signed-off-by: Noralf Trønnes <noralf at tronnes.org>
+
+dwc_otg: Fix crash when fiq_enable=0
+
+dwc_otg: fiq_fsm: Make high-speed isochronous strided transfers work properly
+
+Certain low-bandwidth high-speed USB devices (specialist audio devices,
+compressed-frame webcams) have packet intervals > 1 microframe.
+
+Stride these transfers in the FIQ by using the start-of-frame interrupt
+to restart the channel at the right time.
+
+dwc_otg: Force host mode to fix incorrect compute module boards
+
+dwc_otg: Add ARCH_BCM2835 support
+
+Signed-off-by: Noralf Trønnes <noralf at tronnes.org>
+
+dwc_otg: Simplify FIQ irq number code
+
+Dropping ATAGS means we can simplify the FIQ irq number code.
+Also add error checking on the returned irq number.
+
+Signed-off-by: Noralf Trønnes <noralf at tronnes.org>
+
+dwc_otg: Remove duplicate gadget probe/unregister function
+---
+ arch/arm/include/asm/irqflags.h                    |   16 +-
+ arch/arm/kernel/fiqasm.S                           |    4 +
+ drivers/usb/Makefile                               |    1 +
+ drivers/usb/core/generic.c                         |    1 +
+ drivers/usb/core/hub.c                             |    2 +-
+ drivers/usb/core/message.c                         |   79 +
+ drivers/usb/core/otg_whitelist.h                   |  114 +-
+ drivers/usb/gadget/file_storage.c                  | 3676 ++++++++++
+ drivers/usb/host/Kconfig                           |   13 +
+ drivers/usb/host/Makefile                          |    2 +
+ drivers/usb/host/dwc_common_port/Makefile          |   58 +
+ drivers/usb/host/dwc_common_port/Makefile.fbsd     |   17 +
+ drivers/usb/host/dwc_common_port/Makefile.linux    |   49 +
+ drivers/usb/host/dwc_common_port/changes.txt       |  174 +
+ drivers/usb/host/dwc_common_port/doc/doxygen.cfg   |  270 +
+ drivers/usb/host/dwc_common_port/dwc_cc.c          |  532 ++
+ drivers/usb/host/dwc_common_port/dwc_cc.h          |  224 +
+ drivers/usb/host/dwc_common_port/dwc_common_fbsd.c | 1308 ++++
+ .../usb/host/dwc_common_port/dwc_common_linux.c    | 1433 ++++
+ drivers/usb/host/dwc_common_port/dwc_common_nbsd.c | 1275 ++++
+ drivers/usb/host/dwc_common_port/dwc_crypto.c      |  308 +
+ drivers/usb/host/dwc_common_port/dwc_crypto.h      |  111 +
+ drivers/usb/host/dwc_common_port/dwc_dh.c          |  291 +
+ drivers/usb/host/dwc_common_port/dwc_dh.h          |  106 +
+ drivers/usb/host/dwc_common_port/dwc_list.h        |  594 ++
+ drivers/usb/host/dwc_common_port/dwc_mem.c         |  245 +
+ drivers/usb/host/dwc_common_port/dwc_modpow.c      |  636 ++
+ drivers/usb/host/dwc_common_port/dwc_modpow.h      |   34 +
+ drivers/usb/host/dwc_common_port/dwc_notifier.c    |  319 +
+ drivers/usb/host/dwc_common_port/dwc_notifier.h    |  122 +
+ drivers/usb/host/dwc_common_port/dwc_os.h          | 1276 ++++
+ drivers/usb/host/dwc_common_port/usb.h             |  946 +++
+ drivers/usb/host/dwc_otg/Makefile                  |   82 +
+ drivers/usb/host/dwc_otg/doc/doxygen.cfg           |  224 +
+ drivers/usb/host/dwc_otg/dummy_audio.c             | 1575 +++++
+ drivers/usb/host/dwc_otg/dwc_cfi_common.h          |  142 +
+ drivers/usb/host/dwc_otg/dwc_otg_adp.c             |  854 +++
+ drivers/usb/host/dwc_otg/dwc_otg_adp.h             |   80 +
+ drivers/usb/host/dwc_otg/dwc_otg_attr.c            | 1210 ++++
+ drivers/usb/host/dwc_otg/dwc_otg_attr.h            |   89 +
+ drivers/usb/host/dwc_otg/dwc_otg_cfi.c             | 1876 +++++
+ drivers/usb/host/dwc_otg/dwc_otg_cfi.h             |  320 +
+ drivers/usb/host/dwc_otg/dwc_otg_cil.c             | 7141 ++++++++++++++++++++
+ drivers/usb/host/dwc_otg/dwc_otg_cil.h             | 1464 ++++
+ drivers/usb/host/dwc_otg/dwc_otg_cil_intr.c        | 1594 +++++
+ drivers/usb/host/dwc_otg/dwc_otg_core_if.h         |  705 ++
+ drivers/usb/host/dwc_otg/dwc_otg_dbg.h             |  117 +
+ drivers/usb/host/dwc_otg/dwc_otg_driver.c          | 1757 +++++
+ drivers/usb/host/dwc_otg/dwc_otg_driver.h          |   86 +
+ drivers/usb/host/dwc_otg/dwc_otg_fiq_fsm.c         | 1355 ++++
+ drivers/usb/host/dwc_otg/dwc_otg_fiq_fsm.h         |  370 +
+ drivers/usb/host/dwc_otg/dwc_otg_fiq_stub.S        |   80 +
+ drivers/usb/host/dwc_otg/dwc_otg_hcd.c             | 4257 ++++++++++++
+ drivers/usb/host/dwc_otg/dwc_otg_hcd.h             |  862 +++
+ drivers/usb/host/dwc_otg/dwc_otg_hcd_ddma.c        | 1132 ++++
+ drivers/usb/host/dwc_otg/dwc_otg_hcd_if.h          |  417 ++
+ drivers/usb/host/dwc_otg/dwc_otg_hcd_intr.c        | 2714 ++++++++
+ drivers/usb/host/dwc_otg/dwc_otg_hcd_linux.c       | 1005 +++
+ drivers/usb/host/dwc_otg/dwc_otg_hcd_queue.c       |  957 +++
+ drivers/usb/host/dwc_otg/dwc_otg_os_dep.h          |  188 +
+ drivers/usb/host/dwc_otg/dwc_otg_pcd.c             | 2712 ++++++++
+ drivers/usb/host/dwc_otg/dwc_otg_pcd.h             |  266 +
+ drivers/usb/host/dwc_otg/dwc_otg_pcd_if.h          |  360 +
+ drivers/usb/host/dwc_otg/dwc_otg_pcd_intr.c        | 5147 ++++++++++++++
+ drivers/usb/host/dwc_otg/dwc_otg_pcd_linux.c       | 1280 ++++
+ drivers/usb/host/dwc_otg/dwc_otg_regs.h            | 2550 +++++++
+ drivers/usb/host/dwc_otg/test/Makefile             |   16 +
+ drivers/usb/host/dwc_otg/test/dwc_otg_test.pm      |  337 +
+ drivers/usb/host/dwc_otg/test/test_mod_param.pl    |  133 +
+ drivers/usb/host/dwc_otg/test/test_sysfs.pl        |  193 +
+ 70 files changed, 59867 insertions(+), 16 deletions(-)
+ create mode 100644 drivers/usb/gadget/file_storage.c
+ create mode 100644 drivers/usb/host/dwc_common_port/Makefile
+ create mode 100644 drivers/usb/host/dwc_common_port/Makefile.fbsd
+ create mode 100644 drivers/usb/host/dwc_common_port/Makefile.linux
+ create mode 100644 drivers/usb/host/dwc_common_port/changes.txt
+ create mode 100644 drivers/usb/host/dwc_common_port/doc/doxygen.cfg
+ create mode 100644 drivers/usb/host/dwc_common_port/dwc_cc.c
+ create mode 100644 drivers/usb/host/dwc_common_port/dwc_cc.h
+ create mode 100644 drivers/usb/host/dwc_common_port/dwc_common_fbsd.c
+ create mode 100644 drivers/usb/host/dwc_common_port/dwc_common_linux.c
+ create mode 100644 drivers/usb/host/dwc_common_port/dwc_common_nbsd.c
+ create mode 100644 drivers/usb/host/dwc_common_port/dwc_crypto.c
+ create mode 100644 drivers/usb/host/dwc_common_port/dwc_crypto.h
+ create mode 100644 drivers/usb/host/dwc_common_port/dwc_dh.c
+ create mode 100644 drivers/usb/host/dwc_common_port/dwc_dh.h
+ create mode 100644 drivers/usb/host/dwc_common_port/dwc_list.h
+ create mode 100644 drivers/usb/host/dwc_common_port/dwc_mem.c
+ create mode 100644 drivers/usb/host/dwc_common_port/dwc_modpow.c
+ create mode 100644 drivers/usb/host/dwc_common_port/dwc_modpow.h
+ create mode 100644 drivers/usb/host/dwc_common_port/dwc_notifier.c
+ create mode 100644 drivers/usb/host/dwc_common_port/dwc_notifier.h
+ create mode 100644 drivers/usb/host/dwc_common_port/dwc_os.h
+ create mode 100644 drivers/usb/host/dwc_common_port/usb.h
+ create mode 100644 drivers/usb/host/dwc_otg/Makefile
+ create mode 100644 drivers/usb/host/dwc_otg/doc/doxygen.cfg
+ create mode 100644 drivers/usb/host/dwc_otg/dummy_audio.c
+ create mode 100644 drivers/usb/host/dwc_otg/dwc_cfi_common.h
+ create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_adp.c
+ create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_adp.h
+ create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_attr.c
+ create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_attr.h
+ create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_cfi.c
+ create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_cfi.h
+ create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_cil.c
+ create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_cil.h
+ create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_cil_intr.c
+ create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_core_if.h
+ create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_dbg.h
+ create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_driver.c
+ create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_driver.h
+ create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_fiq_fsm.c
+ create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_fiq_fsm.h
+ create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_fiq_stub.S
+ create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_hcd.c
+ create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_hcd.h
+ create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_hcd_ddma.c
+ create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_hcd_if.h
+ create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_hcd_intr.c
+ create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_hcd_linux.c
+ create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_hcd_queue.c
+ create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_os_dep.h
+ create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_pcd.c
+ create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_pcd.h
+ create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_pcd_if.h
+ create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_pcd_intr.c
+ create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_pcd_linux.c
+ create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_regs.h
+ create mode 100644 drivers/usb/host/dwc_otg/test/Makefile
+ create mode 100644 drivers/usb/host/dwc_otg/test/dwc_otg_test.pm
+ create mode 100644 drivers/usb/host/dwc_otg/test/test_mod_param.pl
+ create mode 100644 drivers/usb/host/dwc_otg/test/test_sysfs.pl
+
+--- a/arch/arm/include/asm/irqflags.h
++++ b/arch/arm/include/asm/irqflags.h
+@@ -162,13 +162,23 @@ static inline unsigned long arch_local_s
+ }
+ 
+ /*
+- * restore saved IRQ & FIQ state
++ * restore saved IRQ state
+  */
+ #define arch_local_irq_restore arch_local_irq_restore
+ static inline void arch_local_irq_restore(unsigned long flags)
+ {
+-	asm volatile(
+-		"	msr	" IRQMASK_REG_NAME_W ", %0	@ local_irq_restore"
++	unsigned long temp = 0;
++	flags &= ~(1 << 6);
++	asm volatile (
++		" mrs %0, cpsr"
++		: "=r" (temp)
++		:
++		: "memory", "cc");
++		/* Preserve FIQ bit */
++		temp &= (1 << 6);
++		flags = flags | temp;
++	asm volatile (
++		"    msr    cpsr_c, %0    @ local_irq_restore"
+ 		:
+ 		: "r" (flags)
+ 		: "memory", "cc");
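
For reviewers skimming the irqflags.h hunk above: the reworked restore path throws away the saved F (FIQ mask) bit and re-inserts whatever the live CPSR currently holds, so a local_irq_restore() from before the dwc_otg FIQ was reconfigured can no longer flip the FIQ mask underneath it. A minimal userspace sketch of that bit handling, with illustrative names only (these are not kernel symbols):

/* Sketch only: models the FIQ-preserving restore in the hunk above. */
#include <assert.h>

#define PSR_F_BIT (1UL << 6)    /* FIQ mask bit in the ARM CPSR */

static unsigned long restore_keep_fiq(unsigned long saved_flags,
                                      unsigned long live_cpsr)
{
        saved_flags &= ~PSR_F_BIT;                    /* drop the saved F bit */
        return saved_flags | (live_cpsr & PSR_F_BIT); /* keep the current one */
}

int main(void)
{
        /* FIQs masked now (F=1) stay masked even if the saved flags had F=0 */
        assert(restore_keep_fiq(0x10, 0x50) == 0x50);
        /* FIQs enabled now (F=0) stay enabled even if the saved flags had F=1 */
        assert(restore_keep_fiq(0x50, 0x10) == 0x10);
        return 0;
}
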
+--- a/arch/arm/kernel/fiqasm.S
++++ b/arch/arm/kernel/fiqasm.S
+@@ -47,3 +47,7 @@ ENTRY(__get_fiq_regs)
+ 	mov	r0, r0		@ avoid hazard prior to ARMv4
+ 	ret	lr
+ ENDPROC(__get_fiq_regs)
++
++ENTRY(__FIQ_Branch)
++	mov pc, r8
++ENDPROC(__FIQ_Branch)
+--- a/drivers/usb/Makefile
++++ b/drivers/usb/Makefile
+@@ -7,6 +7,7 @@
+ obj-$(CONFIG_USB)		+= core/
+ obj-$(CONFIG_USB_SUPPORT)	+= phy/
+ 
++obj-$(CONFIG_USB_DWCOTG)	+= host/
+ obj-$(CONFIG_USB_DWC3)		+= dwc3/
+ obj-$(CONFIG_USB_DWC2)		+= dwc2/
+ obj-$(CONFIG_USB_ISP1760)	+= isp1760/
+--- a/drivers/usb/core/generic.c
++++ b/drivers/usb/core/generic.c
+@@ -152,6 +152,7 @@ int usb_choose_configuration(struct usb_
+ 		dev_warn(&udev->dev,
+ 			"no configuration chosen from %d choice%s\n",
+ 			num_configs, plural(num_configs));
++		dev_warn(&udev->dev, "No support over %dmA\n", udev->bus_mA);
+ 	}
+ 	return i;
+ }
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -4946,7 +4946,7 @@ static void port_event(struct usb_hub *h
+ 	if (portchange & USB_PORT_STAT_C_OVERCURRENT) {
+ 		u16 status = 0, unused;
+ 
+-		dev_dbg(&port_dev->dev, "over-current change\n");
++		dev_notice(&port_dev->dev, "over-current change\n");
+ 		usb_clear_port_feature(hdev, port1,
+ 				USB_PORT_FEAT_C_OVER_CURRENT);
+ 		msleep(100);	/* Cool down */
+--- a/drivers/usb/core/message.c
++++ b/drivers/usb/core/message.c
+@@ -1909,6 +1909,85 @@ free_interfaces:
+ 	if (cp->string == NULL &&
+ 			!(dev->quirks & USB_QUIRK_CONFIG_INTF_STRINGS))
+ 		cp->string = usb_cache_string(dev, cp->desc.iConfiguration);
++/* Uncomment this define to enable the HS Electrical Test support */
++#define DWC_HS_ELECT_TST 1
++#ifdef DWC_HS_ELECT_TST
++		/* Here we implement the HS Electrical Test support. The
++		 * tester uses a vendor ID of 0x1A0A to indicate we should
++		 * run a special test sequence. The product ID tells us
++		 * which sequence to run. We invoke the test sequence by
++		 * sending a non-standard SetFeature command to our root
++		 * hub port. Our dwc_otg_hcd_hub_control() routine will
++		 * recognize the command and perform the desired test
++		 * sequence.
++		 */
++		if (dev->descriptor.idVendor == 0x1A0A) {
++			/* HSOTG Electrical Test */
++			dev_warn(&dev->dev, "VID from HSOTG Electrical Test Fixture\n");
++
++			if (dev->bus && dev->bus->root_hub) {
++				struct usb_device *hdev = dev->bus->root_hub;
++				dev_warn(&dev->dev, "Got PID 0x%x\n", dev->descriptor.idProduct);
++
++				switch (dev->descriptor.idProduct) {
++				case 0x0101:	/* TEST_SE0_NAK */
++					dev_warn(&dev->dev, "TEST_SE0_NAK\n");
++					usb_control_msg(hdev, usb_sndctrlpipe(hdev, 0),
++							USB_REQ_SET_FEATURE, USB_RT_PORT,
++							USB_PORT_FEAT_TEST, 0x300, NULL, 0, HZ);
++					break;
++
++				case 0x0102:	/* TEST_J */
++					dev_warn(&dev->dev, "TEST_J\n");
++					usb_control_msg(hdev, usb_sndctrlpipe(hdev, 0),
++							USB_REQ_SET_FEATURE, USB_RT_PORT,
++							USB_PORT_FEAT_TEST, 0x100, NULL, 0, HZ);
++					break;
++
++				case 0x0103:	/* TEST_K */
++					dev_warn(&dev->dev, "TEST_K\n");
++					usb_control_msg(hdev, usb_sndctrlpipe(hdev, 0),
++							USB_REQ_SET_FEATURE, USB_RT_PORT,
++							USB_PORT_FEAT_TEST, 0x200, NULL, 0, HZ);
++					break;
++
++				case 0x0104:	/* TEST_PACKET */
++					dev_warn(&dev->dev, "TEST_PACKET\n");
++					usb_control_msg(hdev, usb_sndctrlpipe(hdev, 0),
++							USB_REQ_SET_FEATURE, USB_RT_PORT,
++							USB_PORT_FEAT_TEST, 0x400, NULL, 0, HZ);
++					break;
++
++				case 0x0105:	/* TEST_FORCE_ENABLE */
++					dev_warn(&dev->dev, "TEST_FORCE_ENABLE\n");
++					usb_control_msg(hdev, usb_sndctrlpipe(hdev, 0),
++							USB_REQ_SET_FEATURE, USB_RT_PORT,
++							USB_PORT_FEAT_TEST, 0x500, NULL, 0, HZ);
++					break;
++
++				case 0x0106:	/* HS_HOST_PORT_SUSPEND_RESUME */
++					dev_warn(&dev->dev, "HS_HOST_PORT_SUSPEND_RESUME\n");
++					usb_control_msg(hdev, usb_sndctrlpipe(hdev, 0),
++							USB_REQ_SET_FEATURE, USB_RT_PORT,
++							USB_PORT_FEAT_TEST, 0x600, NULL, 0, 40 * HZ);
++					break;
++
++				case 0x0107:	/* SINGLE_STEP_GET_DEVICE_DESCRIPTOR setup */
++					dev_warn(&dev->dev, "SINGLE_STEP_GET_DEVICE_DESCRIPTOR setup\n");
++					usb_control_msg(hdev, usb_sndctrlpipe(hdev, 0),
++							USB_REQ_SET_FEATURE, USB_RT_PORT,
++							USB_PORT_FEAT_TEST, 0x700, NULL, 0, 40 * HZ);
++					break;
++
++				case 0x0108:	/* SINGLE_STEP_GET_DEVICE_DESCRIPTOR execute */
++					dev_warn(&dev->dev, "SINGLE_STEP_GET_DEVICE_DESCRIPTOR execute\n");
++					usb_control_msg(hdev, usb_sndctrlpipe(hdev, 0),
++							USB_REQ_SET_FEATURE, USB_RT_PORT,
++							USB_PORT_FEAT_TEST, 0x800, NULL, 0, 40 * HZ);
++				}
++			}
++		}
++#endif /* DWC_HS_ELECT_TST */
+ 
+ 	/* Now that the interfaces are installed, re-enable LPM. */
+ 	usb_unlocked_enable_lpm(dev);
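
The message.c hunk above hooks the HS electrical test fixture (VID 0x1A0A) into SetFeature(PORT_TEST) requests on the root hub; the test selector travels in the high byte of wIndex, which is why PID 0x0101 (TEST_SE0_NAK, selector 3) becomes 0x300, PID 0x0102 (TEST_J, selector 1) becomes 0x100, and so on, with 6..8 used for the fixture-specific sequences beyond the five standard USB 2.0 selectors. A compact sketch of that mapping, for reference only (helper names here are illustrative, not part of the patch):

/* Sketch: how the electrical-test PIDs map to SetFeature(PORT_TEST) wIndex. */
#include <stdio.h>

static int test_selector(unsigned short pid)
{
        switch (pid) {
        case 0x0102: return 1;  /* TEST_J            */
        case 0x0103: return 2;  /* TEST_K            */
        case 0x0101: return 3;  /* TEST_SE0_NAK      */
        case 0x0104: return 4;  /* TEST_PACKET       */
        case 0x0105: return 5;  /* TEST_FORCE_ENABLE */
        case 0x0106: return 6;  /* HS_HOST_PORT_SUSPEND_RESUME               */
        case 0x0107: return 7;  /* SINGLE_STEP_GET_DEVICE_DESCRIPTOR setup   */
        case 0x0108: return 8;  /* SINGLE_STEP_GET_DEVICE_DESCRIPTOR execute */
        default:     return 0;  /* not a test fixture PID */
        }
}

int main(void)
{
        unsigned short pid = 0x0101;
        int port = 0;           /* root-hub port value used by the patch */

        /* selector in the high byte, port number in the low byte */
        printf("PID 0x%04x -> wIndex 0x%04x\n", pid,
               (test_selector(pid) << 8) | port);
        return 0;
}
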
+--- a/drivers/usb/core/otg_whitelist.h
++++ b/drivers/usb/core/otg_whitelist.h
+@@ -19,33 +19,82 @@
+ static struct usb_device_id whitelist_table[] = {
+ 
+ /* hubs are optional in OTG, but very handy ... */
++#define CERT_WITHOUT_HUBS
++#if defined(CERT_WITHOUT_HUBS)
++{ USB_DEVICE( 0x0000, 0x0000 ), }, /* Root HUB Only*/
++#else
+ { USB_DEVICE_INFO(USB_CLASS_HUB, 0, 0), },
+ { USB_DEVICE_INFO(USB_CLASS_HUB, 0, 1), },
++{ USB_DEVICE_INFO(USB_CLASS_HUB, 0, 2), },
++#endif
+ 
+ #ifdef	CONFIG_USB_PRINTER		/* ignoring nonstatic linkage! */
+ /* FIXME actually, printers are NOT supposed to use device classes;
+  * they're supposed to use interface classes...
+  */
+-{ USB_DEVICE_INFO(7, 1, 1) },
+-{ USB_DEVICE_INFO(7, 1, 2) },
+-{ USB_DEVICE_INFO(7, 1, 3) },
++//{ USB_DEVICE_INFO(7, 1, 1) },
++//{ USB_DEVICE_INFO(7, 1, 2) },
++//{ USB_DEVICE_INFO(7, 1, 3) },
+ #endif
+ 
+ #ifdef	CONFIG_USB_NET_CDCETHER
+ /* Linux-USB CDC Ethernet gadget */
+-{ USB_DEVICE(0x0525, 0xa4a1), },
++//{ USB_DEVICE(0x0525, 0xa4a1), },
+ /* Linux-USB CDC Ethernet + RNDIS gadget */
+-{ USB_DEVICE(0x0525, 0xa4a2), },
++//{ USB_DEVICE(0x0525, 0xa4a2), },
+ #endif
+ 
+ #if	defined(CONFIG_USB_TEST) || defined(CONFIG_USB_TEST_MODULE)
+ /* gadget zero, for testing */
+-{ USB_DEVICE(0x0525, 0xa4a0), },
++//{ USB_DEVICE(0x0525, 0xa4a0), },
+ #endif
+ 
++/* OPT Tester */
++{ USB_DEVICE( 0x1a0a, 0x0101 ), }, /* TEST_SE0_NAK */
++{ USB_DEVICE( 0x1a0a, 0x0102 ), }, /* Test_J */
++{ USB_DEVICE( 0x1a0a, 0x0103 ), }, /* Test_K */
++{ USB_DEVICE( 0x1a0a, 0x0104 ), }, /* Test_PACKET */
++{ USB_DEVICE( 0x1a0a, 0x0105 ), }, /* Test_FORCE_ENABLE */
++{ USB_DEVICE( 0x1a0a, 0x0106 ), }, /* HS_PORT_SUSPEND_RESUME  */
++{ USB_DEVICE( 0x1a0a, 0x0107 ), }, /* SINGLE_STEP_GET_DESCRIPTOR setup */
++{ USB_DEVICE( 0x1a0a, 0x0108 ), }, /* SINGLE_STEP_GET_DESCRIPTOR execute */
++
++/* Sony cameras */
++{ USB_DEVICE_VER(0x054c,0x0010,0x0410, 0x0500), },
++
++/* Memory Devices */
++//{ USB_DEVICE( 0x0781, 0x5150 ), }, /* SanDisk */
++//{ USB_DEVICE( 0x05DC, 0x0080 ), }, /* Lexar */
++//{ USB_DEVICE( 0x4146, 0x9281 ), }, /* IOMEGA */
++//{ USB_DEVICE( 0x067b, 0x2507 ), }, /* Hammer 20GB External HD  */
++{ USB_DEVICE( 0x0EA0, 0x2168 ), }, /* Ours Technology Inc. (BUFFALO ClipDrive)*/
++//{ USB_DEVICE( 0x0457, 0x0150 ), }, /* Silicon Integrated Systems Corp. */
++
++/* HP Printers */
++//{ USB_DEVICE( 0x03F0, 0x1102 ), }, /* HP Photosmart 245 */
++//{ USB_DEVICE( 0x03F0, 0x1302 ), }, /* HP Photosmart 370 Series */
++
++/* Speakers */
++//{ USB_DEVICE( 0x0499, 0x3002 ), }, /* YAMAHA YST-MS35D USB Speakers */
++//{ USB_DEVICE( 0x0672, 0x1041 ), }, /* Labtec USB Headset */
++
+ { }	/* Terminating entry */
+ };
+ 
++static inline void report_errors(struct usb_device *dev)
++{
++	/* OTG MESSAGE: report errors here, customize to match your product */
++	dev_info(&dev->dev, "device Vendor:%04x Product:%04x is not supported\n",
++		 le16_to_cpu(dev->descriptor.idVendor),
++		 le16_to_cpu(dev->descriptor.idProduct));
++        if (USB_CLASS_HUB == dev->descriptor.bDeviceClass){
++                dev_printk(KERN_CRIT, &dev->dev, "Unsupported Hub Topology\n");
++        } else {
++                dev_printk(KERN_CRIT, &dev->dev, "Attached Device is not Supported\n");
++        }
++}
++
++
+ static int is_targeted(struct usb_device *dev)
+ {
+ 	struct usb_device_id	*id = whitelist_table;
+@@ -95,16 +144,57 @@ static int is_targeted(struct usb_device
+ 			continue;
+ 
+ 		return 1;
+-	}
++		/* NOTE: can't use usb_match_id() since interface caches
++		 * aren't set up yet. this is cut/paste from that code.
++		 */
++		for (id = whitelist_table; id->match_flags; id++) {
++#ifdef DEBUG
++			dev_dbg(&dev->dev,
++				"ID: V:%04x P:%04x DC:%04x SC:%04x PR:%04x \n",
++				id->idVendor,
++				id->idProduct,
++				id->bDeviceClass,
++				id->bDeviceSubClass,
++				id->bDeviceProtocol);
++#endif
+ 
+-	/* add other match criteria here ... */
++			if ((id->match_flags & USB_DEVICE_ID_MATCH_VENDOR) &&
++			    id->idVendor != le16_to_cpu(dev->descriptor.idVendor))
++				continue;
++
++			if ((id->match_flags & USB_DEVICE_ID_MATCH_PRODUCT) &&
++			    id->idProduct != le16_to_cpu(dev->descriptor.idProduct))
++				continue;
++
++			/* No need to test id->bcdDevice_lo != 0, since 0 is never
++			   greater than any unsigned number. */
++			if ((id->match_flags & USB_DEVICE_ID_MATCH_DEV_LO) &&
++			    (id->bcdDevice_lo > le16_to_cpu(dev->descriptor.bcdDevice)))
++				continue;
++
++			if ((id->match_flags & USB_DEVICE_ID_MATCH_DEV_HI) &&
++			    (id->bcdDevice_hi < le16_to_cpu(dev->descriptor.bcdDevice)))
++				continue;
++
++			if ((id->match_flags & USB_DEVICE_ID_MATCH_DEV_CLASS) &&
++			    (id->bDeviceClass != dev->descriptor.bDeviceClass))
++				continue;
++
++			if ((id->match_flags & USB_DEVICE_ID_MATCH_DEV_SUBCLASS) &&
++			    (id->bDeviceSubClass != dev->descriptor.bDeviceSubClass))
++				continue;
++
++			if ((id->match_flags & USB_DEVICE_ID_MATCH_DEV_PROTOCOL) &&
++			    (id->bDeviceProtocol != dev->descriptor.bDeviceProtocol))
++				continue;
+ 
++			return 1;
++		}
++	}
+ 
+-	/* OTG MESSAGE: report errors here, customize to match your product */
+-	dev_err(&dev->dev, "device v%04x p%04x is not supported\n",
+-		le16_to_cpu(dev->descriptor.idVendor),
+-		le16_to_cpu(dev->descriptor.idProduct));
++	/* add other match criteria here ... */
+ 
++	report_errors(dev);
+ 	return 0;
+ }
+ 
+--- /dev/null
++++ b/drivers/usb/gadget/file_storage.c
+@@ -0,0 +1,3676 @@
++/*
++ * file_storage.c -- File-backed USB Storage Gadget, for USB development
++ *
++ * Copyright (C) 2003-2008 Alan Stern
++ * All rights reserved.
++ *
++ * Redistribution and use in source and binary forms, with or without
++ * modification, are permitted provided that the following conditions
++ * are met:
++ * 1. Redistributions of source code must retain the above copyright
++ *    notice, this list of conditions, and the following disclaimer,
++ *    without modification.
++ * 2. Redistributions in binary form must reproduce the above copyright
++ *    notice, this list of conditions and the following disclaimer in the
++ *    documentation and/or other materials provided with the distribution.
++ * 3. The names of the above-listed copyright holders may not be used
++ *    to endorse or promote products derived from this software without
++ *    specific prior written permission.
++ *
++ * ALTERNATIVELY, this software may be distributed under the terms of the
++ * GNU General Public License ("GPL") as published by the Free Software
++ * Foundation, either version 2 of that License or (at your option) any
++ * later version.
++ *
++ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
++ * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
++ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
++ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
++ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
++ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
++ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
++ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
++ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
++ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
++ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
++ */
++
++
++/*
++ * The File-backed Storage Gadget acts as a USB Mass Storage device,
++ * appearing to the host as a disk drive or as a CD-ROM drive.  In addition
++ * to providing an example of a genuinely useful gadget driver for a USB
++ * device, it also illustrates a technique of double-buffering for increased
++ * throughput.  Last but not least, it gives an easy way to probe the
++ * behavior of the Mass Storage drivers in a USB host.
++ *
++ * Backing storage is provided by a regular file or a block device, specified
++ * by the "file" module parameter.  Access can be limited to read-only by
++ * setting the optional "ro" module parameter.  (For CD-ROM emulation,
++ * access is always read-only.)  The gadget will indicate that it has
++ * removable media if the optional "removable" module parameter is set.
++ *
++ * The gadget supports the Control-Bulk (CB), Control-Bulk-Interrupt (CBI),
++ * and Bulk-Only (also known as Bulk-Bulk-Bulk or BBB) transports, selected
++ * by the optional "transport" module parameter.  It also supports the
++ * following protocols: RBC (0x01), ATAPI or SFF-8020i (0x02), QIC-157 (0x03),
++ * UFI (0x04), SFF-8070i (0x05), and transparent SCSI (0x06), selected by
++ * the optional "protocol" module parameter.  In addition, the default
++ * Vendor ID, Product ID, release number and serial number can be overridden.
++ *
++ * There is support for multiple logical units (LUNs), each of which has
++ * its own backing file.  The number of LUNs can be set using the optional
++ * "luns" module parameter (anywhere from 1 to 8), and the corresponding
++ * files are specified using comma-separated lists for "file" and "ro".
++ * The default number of LUNs is taken from the number of "file" elements;
++ * it is 1 if "file" is not given.  If "removable" is not set then a backing
++ * file must be specified for each LUN.  If it is set, then an unspecified
++ * or empty backing filename means the LUN's medium is not loaded.  Ideally
++ * each LUN would be settable independently as a disk drive or a CD-ROM
++ * drive, but currently all LUNs have to be the same type.  The CD-ROM
++ * emulation includes a single data track and no audio tracks; hence there
++ * need be only one backing file per LUN.
++ *
++ * Requirements are modest; only a bulk-in and a bulk-out endpoint are
++ * needed (an interrupt-out endpoint is also needed for CBI).  The memory
++ * requirement amounts to two 16K buffers, size configurable by a parameter.
++ * Support is included for both full-speed and high-speed operation.
++ *
++ * Note that the driver is slightly non-portable in that it assumes a
++ * single memory/DMA buffer will be useable for bulk-in, bulk-out, and
++ * interrupt-in endpoints.  With most device controllers this isn't an
++ * issue, but there may be some with hardware restrictions that prevent
++ * a buffer from being used by more than one endpoint.
++ *
++ * Module options:
++ *
++ *	file=filename[,filename...]
++ *				Required if "removable" is not set, names of
++ *					the files or block devices used for
++ *					backing storage
++ *	serial=HHHH...		Required serial number (string of hex chars)
++ *	ro=b[,b...]		Default false, booleans for read-only access
++ *	removable		Default false, boolean for removable media
++ *	luns=N			Default N = number of filenames, number of
++ *					LUNs to support
++ *	nofua=b[,b...]		Default false, booleans for ignore FUA flag
++ *					in SCSI WRITE(10,12) commands
++ *	stall			Default determined according to the type of
++ *					USB device controller (usually true),
++ *					boolean to permit the driver to halt
++ *					bulk endpoints
++ *	cdrom			Default false, boolean for whether to emulate
++ *					a CD-ROM drive
++ *	transport=XXX		Default BBB, transport name (CB, CBI, or BBB)
++ *	protocol=YYY		Default SCSI, protocol name (RBC, 8020 or
++ *					ATAPI, QIC, UFI, 8070, or SCSI;
++ *					also 1 - 6)
++ *	vendor=0xVVVV		Default 0x0525 (NetChip), USB Vendor ID
++ *	product=0xPPPP		Default 0xa4a5 (FSG), USB Product ID
++ *	release=0xRRRR		Override the USB release number (bcdDevice)
++ *	buflen=N		Default N=16384, buffer size used (will be
++ *					rounded down to a multiple of
++ *					PAGE_CACHE_SIZE)
++ *
++ * If CONFIG_USB_FILE_STORAGE_TEST is not set, only the "file", "serial", "ro",
++ * "removable", "luns", "nofua", "stall", and "cdrom" options are available;
++ * default values are used for everything else.
++ *
++ * The pathnames of the backing files and the ro settings are available in
++ * the attribute files "file", "nofua", and "ro" in the lun<n> subdirectory of
++ * the gadget's sysfs directory.  If the "removable" option is set, writing to
++ * these files will simulate ejecting/loading the medium (writing an empty
++ * line means eject) and adjusting a write-enable tab.  Changes to the ro
++ * setting are not allowed when the medium is loaded or if CD-ROM emulation
++ * is being used.
++ *
++ * This gadget driver is heavily based on "Gadget Zero" by David Brownell.
++ * The driver's SCSI command interface was based on the "Information
++ * technology - Small Computer System Interface - 2" document from
++ * X3T9.2 Project 375D, Revision 10L, 7-SEP-93, available at
++ * <http://www.t10.org/ftp/t10/drafts/s2/s2-r10l.pdf>.  The single exception
++ * is opcode 0x23 (READ FORMAT CAPACITIES), which was based on the
++ * "Universal Serial Bus Mass Storage Class UFI Command Specification"
++ * document, Revision 1.0, December 14, 1998, available at
++ * <http://www.usb.org/developers/devclass_docs/usbmass-ufi10.pdf>.
++ */
++
++
++/*
++ *				Driver Design
++ *
++ * The FSG driver is fairly straightforward.  There is a main kernel
++ * thread that handles most of the work.  Interrupt routines field
++ * callbacks from the controller driver: bulk- and interrupt-request
++ * completion notifications, endpoint-0 events, and disconnect events.
++ * Completion events are passed to the main thread by wakeup calls.  Many
++ * ep0 requests are handled at interrupt time, but SetInterface,
++ * SetConfiguration, and device reset requests are forwarded to the
++ * thread in the form of "exceptions" using SIGUSR1 signals (since they
++ * should interrupt any ongoing file I/O operations).
++ *
++ * The thread's main routine implements the standard command/data/status
++ * parts of a SCSI interaction.  It and its subroutines are full of tests
++ * for pending signals/exceptions -- all this polling is necessary since
++ * the kernel has no setjmp/longjmp equivalents.  (Maybe this is an
++ * indication that the driver really wants to be running in userspace.)
++ * An important point is that so long as the thread is alive it keeps an
++ * open reference to the backing file.  This will prevent unmounting
++ * the backing file's underlying filesystem and could cause problems
++ * during system shutdown, for example.  To prevent such problems, the
++ * thread catches INT, TERM, and KILL signals and converts them into
++ * an EXIT exception.
++ *
++ * In normal operation the main thread is started during the gadget's
++ * fsg_bind() callback and stopped during fsg_unbind().  But it can also
++ * exit when it receives a signal, and there's no point leaving the
++ * gadget running when the thread is dead.  So just before the thread
++ * exits, it deregisters the gadget driver.  This makes things a little
++ * tricky: The driver is deregistered at two places, and the exiting
++ * thread can indirectly call fsg_unbind() which in turn can tell the
++ * thread to exit.  The first problem is resolved through the use of the
++ * REGISTERED atomic bitflag; the driver will only be deregistered once.
++ * The second problem is resolved by having fsg_unbind() check
++ * fsg->state; it won't try to stop the thread if the state is already
++ * FSG_STATE_TERMINATED.
++ *
++ * To provide maximum throughput, the driver uses a circular pipeline of
++ * buffer heads (struct fsg_buffhd).  In principle the pipeline can be
++ * arbitrarily long; in practice the benefits don't justify having more
++ * than 2 stages (i.e., double buffering).  But it helps to think of the
++ * pipeline as being a long one.  Each buffer head contains a bulk-in and
++ * a bulk-out request pointer (since the buffer can be used for both
++ * output and input -- directions always are given from the host's
++ * point of view) as well as a pointer to the buffer and various state
++ * variables.
++ *
++ * Use of the pipeline follows a simple protocol.  There is a variable
++ * (fsg->next_buffhd_to_fill) that points to the next buffer head to use.
++ * At any time that buffer head may still be in use from an earlier
++ * request, so each buffer head has a state variable indicating whether
++ * it is EMPTY, FULL, or BUSY.  Typical use involves waiting for the
++ * buffer head to be EMPTY, filling the buffer either by file I/O or by
++ * USB I/O (during which the buffer head is BUSY), and marking the buffer
++ * head FULL when the I/O is complete.  Then the buffer will be emptied
++ * (again possibly by USB I/O, during which it is marked BUSY) and
++ * finally marked EMPTY again (possibly by a completion routine).
++ *
++ * A module parameter tells the driver to avoid stalling the bulk
++ * endpoints wherever the transport specification allows.  This is
++ * necessary for some UDCs like the SuperH, which cannot reliably clear a
++ * halt on a bulk endpoint.  However, under certain circumstances the
++ * Bulk-only specification requires a stall.  In such cases the driver
++ * will halt the endpoint and set a flag indicating that it should clear
++ * the halt in software during the next device reset.  Hopefully this
++ * will permit everything to work correctly.  Furthermore, although the
++ * specification allows the bulk-out endpoint to halt when the host sends
++ * too much data, implementing this would cause an unavoidable race.
++ * The driver will always use the "no-stall" approach for OUT transfers.
++ *
++ * One subtle point concerns sending status-stage responses for ep0
++ * requests.  Some of these requests, such as device reset, can involve
++ * interrupting an ongoing file I/O operation, which might take an
++ * arbitrarily long time.  During that delay the host might give up on
++ * the original ep0 request and issue a new one.  When that happens the
++ * driver should not notify the host about completion of the original
++ * request, as the host will no longer be waiting for it.  So the driver
++ * assigns to each ep0 request a unique tag, and it keeps track of the
++ * tag value of the request associated with a long-running exception
++ * (device-reset, interface-change, or configuration-change).  When the
++ * exception handler is finished, the status-stage response is submitted
++ * only if the current ep0 request tag is equal to the exception request
++ * tag.  Thus only the most recently received ep0 request will get a
++ * status-stage response.
++ *
++ * Warning: This driver source file is too long.  It ought to be split up
++ * into a header file plus about 3 separate .c files, to handle the details
++ * of the Gadget, USB Mass Storage, and SCSI protocols.
++ */
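
The "circular pipeline of buffer heads" described above reduces to a small per-buffer state machine: EMPTY -> (filled by file or USB I/O while BUSY) -> FULL -> (drained while BUSY) -> EMPTY, with separate fill and drain cursors walking the ring. A toy model of that cycle, independent of the driver (types and names below are illustrative only; the real fsg_buffhd also carries USB request pointers and lengths):

/* Toy model of the double-buffered pipeline described above. */
#include <stdio.h>

enum buf_state { BUF_EMPTY, BUF_BUSY, BUF_FULL };

struct toy_buffhd {
        enum buf_state     state;
        struct toy_buffhd *next;        /* circular list */
};

int main(void)
{
        struct toy_buffhd bh[2];        /* two stages, as in the driver */
        struct toy_buffhd *fill = &bh[0], *drain = &bh[0];
        int i;

        bh[0].next = &bh[1];
        bh[1].next = &bh[0];
        bh[0].state = bh[1].state = BUF_EMPTY;

        for (i = 0; i < 4; i++) {
                /* producer: take an EMPTY buffer, fill it, mark it FULL */
                if (fill->state == BUF_EMPTY) {
                        fill->state = BUF_BUSY;  /* file or USB I/O happens here */
                        fill->state = BUF_FULL;
                        fill = fill->next;
                }
                /* consumer: take a FULL buffer, drain it, mark it EMPTY */
                if (drain->state == BUF_FULL) {
                        drain->state = BUF_BUSY; /* the opposite I/O happens here */
                        drain->state = BUF_EMPTY;
                        drain = drain->next;
                }
        }
        printf("pipeline cycled cleanly\n");
        return 0;
}
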
++
++
++/* #define VERBOSE_DEBUG */
++/* #define DUMP_MSGS */
++
++
++#include <linux/blkdev.h>
++#include <linux/completion.h>
++#include <linux/dcache.h>
++#include <linux/delay.h>
++#include <linux/device.h>
++#include <linux/fcntl.h>
++#include <linux/file.h>
++#include <linux/fs.h>
++#include <linux/kref.h>
++#include <linux/kthread.h>
++#include <linux/limits.h>
++#include <linux/module.h>
++#include <linux/rwsem.h>
++#include <linux/slab.h>
++#include <linux/spinlock.h>
++#include <linux/string.h>
++#include <linux/freezer.h>
++#include <linux/utsname.h>
++
++#include <linux/usb/ch9.h>
++#include <linux/usb/gadget.h>
++
++#include "gadget_chips.h"
++
++
++
++/*
++ * Kbuild is not very cooperative with respect to linking separately
++ * compiled library objects into one module.  So for now we won't use
++ * separate compilation ... ensuring init/exit sections work to shrink
++ * the runtime footprint, and giving us at least some parts of what
++ * a "gcc --combine ... part1.c part2.c part3.c ... " build would.
++ */
++#include "usbstring.c"
++#include "config.c"
++#include "epautoconf.c"
++
++/*-------------------------------------------------------------------------*/
++
++#define DRIVER_DESC		"File-backed Storage Gadget"
++#define DRIVER_NAME		"g_file_storage"
++#define DRIVER_VERSION		"1 September 2010"
++
++static       char fsg_string_manufacturer[64];
++static const char fsg_string_product[] = DRIVER_DESC;
++static const char fsg_string_config[] = "Self-powered";
++static const char fsg_string_interface[] = "Mass Storage";
++
++
++#include "storage_common.c"
++
++
++MODULE_DESCRIPTION(DRIVER_DESC);
++MODULE_AUTHOR("Alan Stern");
++MODULE_LICENSE("Dual BSD/GPL");
++
++/*
++ * This driver assumes self-powered hardware and has no way for users to
++ * trigger remote wakeup.  It uses autoconfiguration to select endpoints
++ * and endpoint addresses.
++ */
++
++
++/*-------------------------------------------------------------------------*/
++
++
++/* Encapsulate the module parameter settings */
++
++static struct {
++	char		*file[FSG_MAX_LUNS];
++	char		*serial;
++	bool		ro[FSG_MAX_LUNS];
++	bool		nofua[FSG_MAX_LUNS];
++	unsigned int	num_filenames;
++	unsigned int	num_ros;
++	unsigned int	num_nofuas;
++	unsigned int	nluns;
++
++	bool		removable;
++	bool		can_stall;
++	bool		cdrom;
++
++	char		*transport_parm;
++	char		*protocol_parm;
++	unsigned short	vendor;
++	unsigned short	product;
++	unsigned short	release;
++	unsigned int	buflen;
++
++	int		transport_type;
++	char		*transport_name;
++	int		protocol_type;
++	char		*protocol_name;
++
++} mod_data = {					// Default values
++	.transport_parm		= "BBB",
++	.protocol_parm		= "SCSI",
++	.removable		= 0,
++	.can_stall		= 1,
++	.cdrom			= 0,
++	.vendor			= FSG_VENDOR_ID,
++	.product		= FSG_PRODUCT_ID,
++	.release		= 0xffff,	// Use controller chip type
++	.buflen			= 16384,
++	};
++
++
++module_param_array_named(file, mod_data.file, charp, &mod_data.num_filenames,
++		S_IRUGO);
++MODULE_PARM_DESC(file, "names of backing files or devices");
++
++module_param_named(serial, mod_data.serial, charp, S_IRUGO);
++MODULE_PARM_DESC(serial, "USB serial number");
++
++module_param_array_named(ro, mod_data.ro, bool, &mod_data.num_ros, S_IRUGO);
++MODULE_PARM_DESC(ro, "true to force read-only");
++
++module_param_array_named(nofua, mod_data.nofua, bool, &mod_data.num_nofuas,
++		S_IRUGO);
++MODULE_PARM_DESC(nofua, "true to ignore SCSI WRITE(10,12) FUA bit");
++
++module_param_named(luns, mod_data.nluns, uint, S_IRUGO);
++MODULE_PARM_DESC(luns, "number of LUNs");
++
++module_param_named(removable, mod_data.removable, bool, S_IRUGO);
++MODULE_PARM_DESC(removable, "true to simulate removable media");
++
++module_param_named(stall, mod_data.can_stall, bool, S_IRUGO);
++MODULE_PARM_DESC(stall, "false to prevent bulk stalls");
++
++module_param_named(cdrom, mod_data.cdrom, bool, S_IRUGO);
++MODULE_PARM_DESC(cdrom, "true to emulate cdrom instead of disk");
++
++/* In the non-TEST version, only the module parameters listed above
++ * are available. */
++#ifdef CONFIG_USB_FILE_STORAGE_TEST
++
++module_param_named(transport, mod_data.transport_parm, charp, S_IRUGO);
++MODULE_PARM_DESC(transport, "type of transport (BBB, CBI, or CB)");
++
++module_param_named(protocol, mod_data.protocol_parm, charp, S_IRUGO);
++MODULE_PARM_DESC(protocol, "type of protocol (RBC, 8020, QIC, UFI, "
++		"8070, or SCSI)");
++
++module_param_named(vendor, mod_data.vendor, ushort, S_IRUGO);
++MODULE_PARM_DESC(vendor, "USB Vendor ID");
++
++module_param_named(product, mod_data.product, ushort, S_IRUGO);
++MODULE_PARM_DESC(product, "USB Product ID");
++
++module_param_named(release, mod_data.release, ushort, S_IRUGO);
++MODULE_PARM_DESC(release, "USB release number");
++
++module_param_named(buflen, mod_data.buflen, uint, S_IRUGO);
++MODULE_PARM_DESC(buflen, "I/O buffer size");
++
++#endif /* CONFIG_USB_FILE_STORAGE_TEST */
++
++
++/*
++ * These definitions will permit the compiler to avoid generating code for
++ * parts of the driver that aren't used in the non-TEST version.  Even gcc
++ * can recognize when a test of a constant expression yields a dead code
++ * path.
++ */
++
++#ifdef CONFIG_USB_FILE_STORAGE_TEST
++
++#define transport_is_bbb()	(mod_data.transport_type == USB_PR_BULK)
++#define transport_is_cbi()	(mod_data.transport_type == USB_PR_CBI)
++#define protocol_is_scsi()	(mod_data.protocol_type == USB_SC_SCSI)
++
++#else
++
++#define transport_is_bbb()	1
++#define transport_is_cbi()	0
++#define protocol_is_scsi()	1
++
++#endif /* CONFIG_USB_FILE_STORAGE_TEST */
++
++
++/*-------------------------------------------------------------------------*/
++
++
++struct fsg_dev {
++	/* lock protects: state, all the req_busy's, and cbbuf_cmnd */
++	spinlock_t		lock;
++	struct usb_gadget	*gadget;
++
++	/* filesem protects: backing files in use */
++	struct rw_semaphore	filesem;
++
++	/* reference counting: wait until all LUNs are released */
++	struct kref		ref;
++
++	struct usb_ep		*ep0;		// Handy copy of gadget->ep0
++	struct usb_request	*ep0req;	// For control responses
++	unsigned int		ep0_req_tag;
++	const char		*ep0req_name;
++
++	struct usb_request	*intreq;	// For interrupt responses
++	int			intreq_busy;
++	struct fsg_buffhd	*intr_buffhd;
++
++	unsigned int		bulk_out_maxpacket;
++	enum fsg_state		state;		// For exception handling
++	unsigned int		exception_req_tag;
++
++	u8			config, new_config;
++
++	unsigned int		running : 1;
++	unsigned int		bulk_in_enabled : 1;
++	unsigned int		bulk_out_enabled : 1;
++	unsigned int		intr_in_enabled : 1;
++	unsigned int		phase_error : 1;
++	unsigned int		short_packet_received : 1;
++	unsigned int		bad_lun_okay : 1;
++
++	unsigned long		atomic_bitflags;
++#define REGISTERED		0
++#define IGNORE_BULK_OUT		1
++#define SUSPENDED		2
++
++	struct usb_ep		*bulk_in;
++	struct usb_ep		*bulk_out;
++	struct usb_ep		*intr_in;
++
++	struct fsg_buffhd	*next_buffhd_to_fill;
++	struct fsg_buffhd	*next_buffhd_to_drain;
++
++	int			thread_wakeup_needed;
++	struct completion	thread_notifier;
++	struct task_struct	*thread_task;
++
++	int			cmnd_size;
++	u8			cmnd[MAX_COMMAND_SIZE];
++	enum data_direction	data_dir;
++	u32			data_size;
++	u32			data_size_from_cmnd;
++	u32			tag;
++	unsigned int		lun;
++	u32			residue;
++	u32			usb_amount_left;
++
++	/* The CB protocol offers no way for a host to know when a command
++	 * has completed.  As a result the next command may arrive early,
++	 * and we will still have to handle it.  For that reason we need
++	 * a buffer to store new commands when using CB (or CBI, which
++	 * does not oblige a host to wait for command completion either). */
++	int			cbbuf_cmnd_size;
++	u8			cbbuf_cmnd[MAX_COMMAND_SIZE];
++
++	unsigned int		nluns;
++	struct fsg_lun		*luns;
++	struct fsg_lun		*curlun;
++	/* Must be the last entry */
++	struct fsg_buffhd	buffhds[];
++};
++
++typedef void (*fsg_routine_t)(struct fsg_dev *);
++
++static int exception_in_progress(struct fsg_dev *fsg)
++{
++	return (fsg->state > FSG_STATE_IDLE);
++}
++
++/* Make bulk-out requests be divisible by the maxpacket size */
++static void set_bulk_out_req_length(struct fsg_dev *fsg,
++		struct fsg_buffhd *bh, unsigned int length)
++{
++	unsigned int	rem;
++
++	bh->bulk_out_intended_length = length;
++	rem = length % fsg->bulk_out_maxpacket;
++	if (rem > 0)
++		length += fsg->bulk_out_maxpacket - rem;
++	bh->outreq->length = length;
++}
++
++static struct fsg_dev			*the_fsg;
++static struct usb_gadget_driver		fsg_driver;
++
++
++/*-------------------------------------------------------------------------*/
++
++static int fsg_set_halt(struct fsg_dev *fsg, struct usb_ep *ep)
++{
++	const char	*name;
++
++	if (ep == fsg->bulk_in)
++		name = "bulk-in";
++	else if (ep == fsg->bulk_out)
++		name = "bulk-out";
++	else
++		name = ep->name;
++	DBG(fsg, "%s set halt\n", name);
++	return usb_ep_set_halt(ep);
++}
++
++
++/*-------------------------------------------------------------------------*/
++
++/*
++ * DESCRIPTORS ... most are static, but strings and (full) configuration
++ * descriptors are built on demand.  Also the (static) config and interface
++ * descriptors are adjusted during fsg_bind().
++ */
++
++/* There is only one configuration. */
++#define	CONFIG_VALUE		1
++
++static struct usb_device_descriptor
++device_desc = {
++	.bLength =		sizeof device_desc,
++	.bDescriptorType =	USB_DT_DEVICE,
++
++	.bcdUSB =		cpu_to_le16(0x0200),
++	.bDeviceClass =		USB_CLASS_PER_INTERFACE,
++
++	/* The next three values can be overridden by module parameters */
++	.idVendor =		cpu_to_le16(FSG_VENDOR_ID),
++	.idProduct =		cpu_to_le16(FSG_PRODUCT_ID),
++	.bcdDevice =		cpu_to_le16(0xffff),
++
++	.iManufacturer =	FSG_STRING_MANUFACTURER,
++	.iProduct =		FSG_STRING_PRODUCT,
++	.iSerialNumber =	FSG_STRING_SERIAL,
++	.bNumConfigurations =	1,
++};
++
++static struct usb_config_descriptor
++config_desc = {
++	.bLength =		sizeof config_desc,
++	.bDescriptorType =	USB_DT_CONFIG,
++
++	/* wTotalLength computed by usb_gadget_config_buf() */
++	.bNumInterfaces =	1,
++	.bConfigurationValue =	CONFIG_VALUE,
++	.iConfiguration =	FSG_STRING_CONFIG,
++	.bmAttributes =		USB_CONFIG_ATT_ONE | USB_CONFIG_ATT_SELFPOWER,
++	.bMaxPower =		CONFIG_USB_GADGET_VBUS_DRAW / 2,
++};
++
++
++static struct usb_qualifier_descriptor
++dev_qualifier = {
++	.bLength =		sizeof dev_qualifier,
++	.bDescriptorType =	USB_DT_DEVICE_QUALIFIER,
++
++	.bcdUSB =		cpu_to_le16(0x0200),
++	.bDeviceClass =		USB_CLASS_PER_INTERFACE,
++
++	.bNumConfigurations =	1,
++};
++
++static int populate_bos(struct fsg_dev *fsg, u8 *buf)
++{
++	memcpy(buf, &fsg_bos_desc, USB_DT_BOS_SIZE);
++	buf += USB_DT_BOS_SIZE;
++
++	memcpy(buf, &fsg_ext_cap_desc, USB_DT_USB_EXT_CAP_SIZE);
++	buf += USB_DT_USB_EXT_CAP_SIZE;
++
++	memcpy(buf, &fsg_ss_cap_desc, USB_DT_USB_SS_CAP_SIZE);
++
++	return USB_DT_BOS_SIZE + USB_DT_USB_SS_CAP_SIZE
++		+ USB_DT_USB_EXT_CAP_SIZE;
++}
++
++/*
++ * Config descriptors must agree with the code that sets configurations
++ * and with code managing interfaces and their altsettings.  They must
++ * also handle different speeds and other-speed requests.
++ */
++static int populate_config_buf(struct usb_gadget *gadget,
++		u8 *buf, u8 type, unsigned index)
++{
++	enum usb_device_speed			speed = gadget->speed;
++	int					len;
++	const struct usb_descriptor_header	**function;
++
++	if (index > 0)
++		return -EINVAL;
++
++	if (gadget_is_dualspeed(gadget) && type == USB_DT_OTHER_SPEED_CONFIG)
++		speed = (USB_SPEED_FULL + USB_SPEED_HIGH) - speed;
++	function = gadget_is_dualspeed(gadget) && speed == USB_SPEED_HIGH
++		? (const struct usb_descriptor_header **)fsg_hs_function
++		: (const struct usb_descriptor_header **)fsg_fs_function;
++
++	/* for now, don't advertise srp-only devices */
++	if (!gadget_is_otg(gadget))
++		function++;
++
++	len = usb_gadget_config_buf(&config_desc, buf, EP0_BUFSIZE, function);
++	((struct usb_config_descriptor *) buf)->bDescriptorType = type;
++	return len;
++}
++
++
++/*-------------------------------------------------------------------------*/
++
++/* These routines may be called in process context or in_irq */
++
++/* Caller must hold fsg->lock */
++static void wakeup_thread(struct fsg_dev *fsg)
++{
++	/* Tell the main thread that something has happened */
++	fsg->thread_wakeup_needed = 1;
++	if (fsg->thread_task)
++		wake_up_process(fsg->thread_task);
++}
++
++
++static void raise_exception(struct fsg_dev *fsg, enum fsg_state new_state)
++{
++	unsigned long		flags;
++
++	/* Do nothing if a higher-priority exception is already in progress.
++	 * If a lower-or-equal priority exception is in progress, preempt it
++	 * and notify the main thread by sending it a signal. */
++	spin_lock_irqsave(&fsg->lock, flags);
++	if (fsg->state <= new_state) {
++		fsg->exception_req_tag = fsg->ep0_req_tag;
++		fsg->state = new_state;
++		if (fsg->thread_task)
++			send_sig_info(SIGUSR1, SEND_SIG_FORCED,
++					fsg->thread_task);
++	}
++	spin_unlock_irqrestore(&fsg->lock, flags);
++}
++
++
++/*-------------------------------------------------------------------------*/
++
++/* The disconnect callback and ep0 routines.  These always run in_irq,
++ * except that ep0_queue() is called in the main thread to acknowledge
++ * completion of various requests: set config, set interface, and
++ * Bulk-only device reset. */
++
++static void fsg_disconnect(struct usb_gadget *gadget)
++{
++	struct fsg_dev		*fsg = get_gadget_data(gadget);
++
++	DBG(fsg, "disconnect or port reset\n");
++	raise_exception(fsg, FSG_STATE_DISCONNECT);
++}
++
++
++static int ep0_queue(struct fsg_dev *fsg)
++{
++	int	rc;
++
++	rc = usb_ep_queue(fsg->ep0, fsg->ep0req, GFP_ATOMIC);
++	if (rc != 0 && rc != -ESHUTDOWN) {
++
++		/* We can't do much more than wait for a reset */
++		WARNING(fsg, "error in submission: %s --> %d\n",
++				fsg->ep0->name, rc);
++	}
++	return rc;
++}
++
++static void ep0_complete(struct usb_ep *ep, struct usb_request *req)
++{
++	struct fsg_dev		*fsg = ep->driver_data;
++
++	if (req->actual > 0)
++		dump_msg(fsg, fsg->ep0req_name, req->buf, req->actual);
++	if (req->status || req->actual != req->length)
++		DBG(fsg, "%s --> %d, %u/%u\n", __func__,
++				req->status, req->actual, req->length);
++	if (req->status == -ECONNRESET)		// Request was cancelled
++		usb_ep_fifo_flush(ep);
++
++	if (req->status == 0 && req->context)
++		((fsg_routine_t) (req->context))(fsg);
++}
++
++
++/*-------------------------------------------------------------------------*/
++
++/* Bulk and interrupt endpoint completion handlers.
++ * These always run in_irq. */
++
++static void bulk_in_complete(struct usb_ep *ep, struct usb_request *req)
++{
++	struct fsg_dev		*fsg = ep->driver_data;
++	struct fsg_buffhd	*bh = req->context;
++
++	if (req->status || req->actual != req->length)
++		DBG(fsg, "%s --> %d, %u/%u\n", __func__,
++				req->status, req->actual, req->length);
++	if (req->status == -ECONNRESET)		// Request was cancelled
++		usb_ep_fifo_flush(ep);
++
++	/* Hold the lock while we update the request and buffer states */
++	smp_wmb();
++	spin_lock(&fsg->lock);
++	bh->inreq_busy = 0;
++	bh->state = BUF_STATE_EMPTY;
++	wakeup_thread(fsg);
++	spin_unlock(&fsg->lock);
++}
++
++static void bulk_out_complete(struct usb_ep *ep, struct usb_request *req)
++{
++	struct fsg_dev		*fsg = ep->driver_data;
++	struct fsg_buffhd	*bh = req->context;
++
++	dump_msg(fsg, "bulk-out", req->buf, req->actual);
++	if (req->status || req->actual != bh->bulk_out_intended_length)
++		DBG(fsg, "%s --> %d, %u/%u\n", __func__,
++				req->status, req->actual,
++				bh->bulk_out_intended_length);
++	if (req->status == -ECONNRESET)		// Request was cancelled
++		usb_ep_fifo_flush(ep);
++
++	/* Hold the lock while we update the request and buffer states */
++	smp_wmb();
++	spin_lock(&fsg->lock);
++	bh->outreq_busy = 0;
++	bh->state = BUF_STATE_FULL;
++	wakeup_thread(fsg);
++	spin_unlock(&fsg->lock);
++}
++
++
++#ifdef CONFIG_USB_FILE_STORAGE_TEST
++static void intr_in_complete(struct usb_ep *ep, struct usb_request *req)
++{
++	struct fsg_dev		*fsg = ep->driver_data;
++	struct fsg_buffhd	*bh = req->context;
++
++	if (req->status || req->actual != req->length)
++		DBG(fsg, "%s --> %d, %u/%u\n", __func__,
++				req->status, req->actual, req->length);
++	if (req->status == -ECONNRESET)		// Request was cancelled
++		usb_ep_fifo_flush(ep);
++
++	/* Hold the lock while we update the request and buffer states */
++	smp_wmb();
++	spin_lock(&fsg->lock);
++	fsg->intreq_busy = 0;
++	bh->state = BUF_STATE_EMPTY;
++	wakeup_thread(fsg);
++	spin_unlock(&fsg->lock);
++}
++
++#else
++static void intr_in_complete(struct usb_ep *ep, struct usb_request *req)
++{}
++#endif /* CONFIG_USB_FILE_STORAGE_TEST */
++
++
++/*-------------------------------------------------------------------------*/
++
++/* Ep0 class-specific handlers.  These always run in_irq. */
++
++#ifdef CONFIG_USB_FILE_STORAGE_TEST
++static void received_cbi_adsc(struct fsg_dev *fsg, struct fsg_buffhd *bh)
++{
++	struct usb_request	*req = fsg->ep0req;
++	static u8		cbi_reset_cmnd[6] = {
++			SEND_DIAGNOSTIC, 4, 0xff, 0xff, 0xff, 0xff};
++
++	/* Error in command transfer? */
++	if (req->status || req->length != req->actual ||
++			req->actual < 6 || req->actual > MAX_COMMAND_SIZE) {
++
++		/* Not all controllers allow a protocol stall after
++		 * receiving control-out data, but we'll try anyway. */
++		fsg_set_halt(fsg, fsg->ep0);
++		return;			// Wait for reset
++	}
++
++	/* Is it the special reset command? */
++	if (req->actual >= sizeof cbi_reset_cmnd &&
++			memcmp(req->buf, cbi_reset_cmnd,
++				sizeof cbi_reset_cmnd) == 0) {
++
++		/* Raise an exception to stop the current operation
++		 * and reinitialize our state. */
++		DBG(fsg, "cbi reset request\n");
++		raise_exception(fsg, FSG_STATE_RESET);
++		return;
++	}
++
++	VDBG(fsg, "CB[I] accept device-specific command\n");
++	spin_lock(&fsg->lock);
++
++	/* Save the command for later */
++	if (fsg->cbbuf_cmnd_size)
++		WARNING(fsg, "CB[I] overwriting previous command\n");
++	fsg->cbbuf_cmnd_size = req->actual;
++	memcpy(fsg->cbbuf_cmnd, req->buf, fsg->cbbuf_cmnd_size);
++
++	wakeup_thread(fsg);
++	spin_unlock(&fsg->lock);
++}
++
++#else
++static void received_cbi_adsc(struct fsg_dev *fsg, struct fsg_buffhd *bh)
++{}
++#endif /* CONFIG_USB_FILE_STORAGE_TEST */
++
++
++static int class_setup_req(struct fsg_dev *fsg,
++		const struct usb_ctrlrequest *ctrl)
++{
++	struct usb_request	*req = fsg->ep0req;
++	int			value = -EOPNOTSUPP;
++	u16			w_index = le16_to_cpu(ctrl->wIndex);
++	u16                     w_value = le16_to_cpu(ctrl->wValue);
++	u16			w_length = le16_to_cpu(ctrl->wLength);
++
++	if (!fsg->config)
++		return value;
++
++	/* Handle Bulk-only class-specific requests */
++	if (transport_is_bbb()) {
++		switch (ctrl->bRequest) {
++
++		case US_BULK_RESET_REQUEST:
++			if (ctrl->bRequestType != (USB_DIR_OUT |
++					USB_TYPE_CLASS | USB_RECIP_INTERFACE))
++				break;
++			if (w_index != 0 || w_value != 0 || w_length != 0) {
++				value = -EDOM;
++				break;
++			}
++
++			/* Raise an exception to stop the current operation
++			 * and reinitialize our state. */
++			DBG(fsg, "bulk reset request\n");
++			raise_exception(fsg, FSG_STATE_RESET);
++			value = DELAYED_STATUS;
++			break;
++
++		case US_BULK_GET_MAX_LUN:
++			if (ctrl->bRequestType != (USB_DIR_IN |
++					USB_TYPE_CLASS | USB_RECIP_INTERFACE))
++				break;
++			if (w_index != 0 || w_value != 0 || w_length != 1) {
++				value = -EDOM;
++				break;
++			}
++			VDBG(fsg, "get max LUN\n");
++			*(u8 *) req->buf = fsg->nluns - 1;
++			value = 1;
++			break;
++		}
++	}
++
++	/* Handle CBI class-specific requests */
++	else {
++		switch (ctrl->bRequest) {
++
++		case USB_CBI_ADSC_REQUEST:
++			if (ctrl->bRequestType != (USB_DIR_OUT |
++					USB_TYPE_CLASS | USB_RECIP_INTERFACE))
++				break;
++			if (w_index != 0 || w_value != 0) {
++				value = -EDOM;
++				break;
++			}
++			if (w_length > MAX_COMMAND_SIZE) {
++				value = -EOVERFLOW;
++				break;
++			}
++			value = w_length;
++			fsg->ep0req->context = received_cbi_adsc;
++			break;
++		}
++	}
++
++	if (value == -EOPNOTSUPP)
++		VDBG(fsg,
++			"unknown class-specific control req "
++			"%02x.%02x v%04x i%04x l%u\n",
++			ctrl->bRequestType, ctrl->bRequest,
++			le16_to_cpu(ctrl->wValue), w_index, w_length);
++	return value;
++}
++
++
++/*-------------------------------------------------------------------------*/
++
++/* Ep0 standard request handlers.  These always run in_irq. */
++
++static int standard_setup_req(struct fsg_dev *fsg,
++		const struct usb_ctrlrequest *ctrl)
++{
++	struct usb_request	*req = fsg->ep0req;
++	int			value = -EOPNOTSUPP;
++	u16			w_index = le16_to_cpu(ctrl->wIndex);
++	u16			w_value = le16_to_cpu(ctrl->wValue);
++
++	/* Usually this just stores reply data in the pre-allocated ep0 buffer,
++	 * but config change events will also reconfigure hardware. */
++	switch (ctrl->bRequest) {
++
++	case USB_REQ_GET_DESCRIPTOR:
++		if (ctrl->bRequestType != (USB_DIR_IN | USB_TYPE_STANDARD |
++				USB_RECIP_DEVICE))
++			break;
++		switch (w_value >> 8) {
++
++		case USB_DT_DEVICE:
++			VDBG(fsg, "get device descriptor\n");
++			device_desc.bMaxPacketSize0 = fsg->ep0->maxpacket;
++			value = sizeof device_desc;
++			memcpy(req->buf, &device_desc, value);
++			break;
++		case USB_DT_DEVICE_QUALIFIER:
++			VDBG(fsg, "get device qualifier\n");
++			if (!gadget_is_dualspeed(fsg->gadget) ||
++					fsg->gadget->speed == USB_SPEED_SUPER)
++				break;
++			/*
++			 * Assume ep0 uses the same maxpacket value for both
++			 * speeds
++			 */
++			dev_qualifier.bMaxPacketSize0 = fsg->ep0->maxpacket;
++			value = sizeof dev_qualifier;
++			memcpy(req->buf, &dev_qualifier, value);
++			break;
++
++		case USB_DT_OTHER_SPEED_CONFIG:
++			VDBG(fsg, "get other-speed config descriptor\n");
++			if (!gadget_is_dualspeed(fsg->gadget) ||
++					fsg->gadget->speed == USB_SPEED_SUPER)
++				break;
++			goto get_config;
++		case USB_DT_CONFIG:
++			VDBG(fsg, "get configuration descriptor\n");
++get_config:
++			value = populate_config_buf(fsg->gadget,
++					req->buf,
++					w_value >> 8,
++					w_value & 0xff);
++			break;
++
++		case USB_DT_STRING:
++			VDBG(fsg, "get string descriptor\n");
++
++			/* wIndex == language code */
++			value = usb_gadget_get_string(&fsg_stringtab,
++					w_value & 0xff, req->buf);
++			break;
++
++		case USB_DT_BOS:
++			VDBG(fsg, "get bos descriptor\n");
++
++			if (gadget_is_superspeed(fsg->gadget))
++				value = populate_bos(fsg, req->buf);
++			break;
++		}
++
++		break;
++
++	/* One config, two speeds */
++	case USB_REQ_SET_CONFIGURATION:
++		if (ctrl->bRequestType != (USB_DIR_OUT | USB_TYPE_STANDARD |
++				USB_RECIP_DEVICE))
++			break;
++		VDBG(fsg, "set configuration\n");
++		if (w_value == CONFIG_VALUE || w_value == 0) {
++			fsg->new_config = w_value;
++
++			/* Raise an exception to wipe out previous transaction
++			 * state (queued bufs, etc) and set the new config. */
++			raise_exception(fsg, FSG_STATE_CONFIG_CHANGE);
++			value = DELAYED_STATUS;
++		}
++		break;
++	case USB_REQ_GET_CONFIGURATION:
++		if (ctrl->bRequestType != (USB_DIR_IN | USB_TYPE_STANDARD |
++				USB_RECIP_DEVICE))
++			break;
++		VDBG(fsg, "get configuration\n");
++		*(u8 *) req->buf = fsg->config;
++		value = 1;
++		break;
++
++	case USB_REQ_SET_INTERFACE:
++		if (ctrl->bRequestType != (USB_DIR_OUT| USB_TYPE_STANDARD |
++				USB_RECIP_INTERFACE))
++			break;
++		if (fsg->config && w_index == 0) {
++
++			/* Raise an exception to wipe out previous transaction
++			 * state (queued bufs, etc) and install the new
++			 * interface altsetting. */
++			raise_exception(fsg, FSG_STATE_INTERFACE_CHANGE);
++			value = DELAYED_STATUS;
++		}
++		break;
++	case USB_REQ_GET_INTERFACE:
++		if (ctrl->bRequestType != (USB_DIR_IN | USB_TYPE_STANDARD |
++				USB_RECIP_INTERFACE))
++			break;
++		if (!fsg->config)
++			break;
++		if (w_index != 0) {
++			value = -EDOM;
++			break;
++		}
++		VDBG(fsg, "get interface\n");
++		*(u8 *) req->buf = 0;
++		value = 1;
++		break;
++
++	default:
++		VDBG(fsg,
++			"unknown control req %02x.%02x v%04x i%04x l%u\n",
++			ctrl->bRequestType, ctrl->bRequest,
++			w_value, w_index, le16_to_cpu(ctrl->wLength));
++	}
++
++	return value;
++}
++
++
++static int fsg_setup(struct usb_gadget *gadget,
++		const struct usb_ctrlrequest *ctrl)
++{
++	struct fsg_dev		*fsg = get_gadget_data(gadget);
++	int			rc;
++	int			w_length = le16_to_cpu(ctrl->wLength);
++
++	++fsg->ep0_req_tag;		// Record arrival of a new request
++	fsg->ep0req->context = NULL;
++	fsg->ep0req->length = 0;
++	dump_msg(fsg, "ep0-setup", (u8 *) ctrl, sizeof(*ctrl));
++
++	if ((ctrl->bRequestType & USB_TYPE_MASK) == USB_TYPE_CLASS)
++		rc = class_setup_req(fsg, ctrl);
++	else
++		rc = standard_setup_req(fsg, ctrl);
++
++	/* Respond with data/status or defer until later? */
++	if (rc >= 0 && rc != DELAYED_STATUS) {
++		rc = min(rc, w_length);
++		fsg->ep0req->length = rc;
++		fsg->ep0req->zero = rc < w_length;
++		fsg->ep0req_name = (ctrl->bRequestType & USB_DIR_IN ?
++				"ep0-in" : "ep0-out");
++		rc = ep0_queue(fsg);
++	}
++
++	/* Device either stalls (rc < 0) or reports success */
++	return rc;
++}
++
++
++/*-------------------------------------------------------------------------*/
++
++/* All the following routines run in process context */
++
++
++/* Use this for bulk or interrupt transfers, not ep0 */
++static void start_transfer(struct fsg_dev *fsg, struct usb_ep *ep,
++		struct usb_request *req, int *pbusy,
++		enum fsg_buffer_state *state)
++{
++	int	rc;
++
++	if (ep == fsg->bulk_in)
++		dump_msg(fsg, "bulk-in", req->buf, req->length);
++	else if (ep == fsg->intr_in)
++		dump_msg(fsg, "intr-in", req->buf, req->length);
++
++	spin_lock_irq(&fsg->lock);
++	*pbusy = 1;
++	*state = BUF_STATE_BUSY;
++	spin_unlock_irq(&fsg->lock);
++	rc = usb_ep_queue(ep, req, GFP_KERNEL);
++	if (rc != 0) {
++		*pbusy = 0;
++		*state = BUF_STATE_EMPTY;
++
++		/* We can't do much more than wait for a reset */
++
++		/* Note: currently the net2280 driver fails zero-length
++		 * submissions if DMA is enabled. */
++		if (rc != -ESHUTDOWN && !(rc == -EOPNOTSUPP &&
++						req->length == 0))
++			WARNING(fsg, "error in submission: %s --> %d\n",
++					ep->name, rc);
++	}
++}
++
++
++static int sleep_thread(struct fsg_dev *fsg)
++{
++	int	rc = 0;
++
++	/* Wait until a signal arrives or we are woken up */
++	for (;;) {
++		try_to_freeze();
++		set_current_state(TASK_INTERRUPTIBLE);
++		if (signal_pending(current)) {
++			rc = -EINTR;
++			break;
++		}
++		if (fsg->thread_wakeup_needed)
++			break;
++		schedule();
++	}
++	__set_current_state(TASK_RUNNING);
++	fsg->thread_wakeup_needed = 0;
++	return rc;
++}
++
++
++/*-------------------------------------------------------------------------*/
++
++static int do_read(struct fsg_dev *fsg)
++{
++	struct fsg_lun		*curlun = fsg->curlun;
++	u32			lba;
++	struct fsg_buffhd	*bh;
++	int			rc;
++	u32			amount_left;
++	loff_t			file_offset, file_offset_tmp;
++	unsigned int		amount;
++	ssize_t			nread;
++
++	/* Get the starting Logical Block Address and check that it's
++	 * not too big */
++	if (fsg->cmnd[0] == READ_6)
++		lba = get_unaligned_be24(&fsg->cmnd[1]);
++	else {
++		lba = get_unaligned_be32(&fsg->cmnd[2]);
++
++		/* We allow DPO (Disable Page Out = don't save data in the
++		 * cache) and FUA (Force Unit Access = don't read from the
++		 * cache), but we don't implement them. */
++		if ((fsg->cmnd[1] & ~0x18) != 0) {
++			curlun->sense_data = SS_INVALID_FIELD_IN_CDB;
++			return -EINVAL;
++		}
++	}
++	if (lba >= curlun->num_sectors) {
++		curlun->sense_data = SS_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE;
++		return -EINVAL;
++	}
++	file_offset = ((loff_t) lba) << curlun->blkbits;
++
++	/* Carry out the file reads */
++	amount_left = fsg->data_size_from_cmnd;
++	if (unlikely(amount_left == 0))
++		return -EIO;		// No default reply
++
++	for (;;) {
++
++		/* Figure out how much we need to read:
++		 * Try to read the remaining amount.
++		 * But don't read more than the buffer size.
++		 * And don't try to read past the end of the file.
++		 */
++		amount = min((unsigned int) amount_left, mod_data.buflen);
++		amount = min((loff_t) amount,
++				curlun->file_length - file_offset);
++
++		/* Wait for the next buffer to become available */
++		bh = fsg->next_buffhd_to_fill;
++		while (bh->state != BUF_STATE_EMPTY) {
++			rc = sleep_thread(fsg);
++			if (rc)
++				return rc;
++		}
++
++		/* If we were asked to read past the end of file,
++		 * end with an empty buffer. */
++		if (amount == 0) {
++			curlun->sense_data =
++					SS_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE;
++			curlun->sense_data_info = file_offset >> curlun->blkbits;
++			curlun->info_valid = 1;
++			bh->inreq->length = 0;
++			bh->state = BUF_STATE_FULL;
++			break;
++		}
++
++		/* Perform the read */
++		file_offset_tmp = file_offset;
++		nread = vfs_read(curlun->filp,
++				(char __user *) bh->buf,
++				amount, &file_offset_tmp);
++		VLDBG(curlun, "file read %u @ %llu -> %d\n", amount,
++				(unsigned long long) file_offset,
++				(int) nread);
++		if (signal_pending(current))
++			return -EINTR;
++
++		if (nread < 0) {
++			LDBG(curlun, "error in file read: %d\n",
++					(int) nread);
++			nread = 0;
++		} else if (nread < amount) {
++			LDBG(curlun, "partial file read: %d/%u\n",
++					(int) nread, amount);
++			nread = round_down(nread, curlun->blksize);
++		}
++		file_offset  += nread;
++		amount_left  -= nread;
++		fsg->residue -= nread;
++
++		/* Except at the end of the transfer, nread will be
++		 * equal to the buffer size, which is divisible by the
++		 * bulk-in maxpacket size.
++		 */
++		bh->inreq->length = nread;
++		bh->state = BUF_STATE_FULL;
++
++		/* If an error occurred, report it and its position */
++		if (nread < amount) {
++			curlun->sense_data = SS_UNRECOVERED_READ_ERROR;
++			curlun->sense_data_info = file_offset >> curlun->blkbits;
++			curlun->info_valid = 1;
++			break;
++		}
++
++		if (amount_left == 0)
++			break;		// No more left to read
++
++		/* Send this buffer and go read some more */
++		bh->inreq->zero = 0;
++		start_transfer(fsg, fsg->bulk_in, bh->inreq,
++				&bh->inreq_busy, &bh->state);
++		fsg->next_buffhd_to_fill = bh->next;
++	}
++
++	return -EIO;		// No default reply
++}
++
++
++/*-------------------------------------------------------------------------*/
++
++static int do_write(struct fsg_dev *fsg)
++{
++	struct fsg_lun		*curlun = fsg->curlun;
++	u32			lba;
++	struct fsg_buffhd	*bh;
++	int			get_some_more;
++	u32			amount_left_to_req, amount_left_to_write;
++	loff_t			usb_offset, file_offset, file_offset_tmp;
++	unsigned int		amount;
++	ssize_t			nwritten;
++	int			rc;
++
++	if (curlun->ro) {
++		curlun->sense_data = SS_WRITE_PROTECTED;
++		return -EINVAL;
++	}
++	spin_lock(&curlun->filp->f_lock);
++	curlun->filp->f_flags &= ~O_SYNC;	// Default is not to wait
++	spin_unlock(&curlun->filp->f_lock);
++
++	/* Get the starting Logical Block Address and check that it's
++	 * not too big */
++	if (fsg->cmnd[0] == WRITE_6)
++		lba = get_unaligned_be24(&fsg->cmnd[1]);
++	else {
++		lba = get_unaligned_be32(&fsg->cmnd[2]);
++
++		/* We allow DPO (Disable Page Out = don't save data in the
++		 * cache) and FUA (Force Unit Access = write directly to the
++		 * medium).  We don't implement DPO; we implement FUA by
++		 * performing synchronous output. */
++		if ((fsg->cmnd[1] & ~0x18) != 0) {
++			curlun->sense_data = SS_INVALID_FIELD_IN_CDB;
++			return -EINVAL;
++		}
++		/* FUA */
++		if (!curlun->nofua && (fsg->cmnd[1] & 0x08)) {
++			spin_lock(&curlun->filp->f_lock);
++			curlun->filp->f_flags |= O_DSYNC;
++			spin_unlock(&curlun->filp->f_lock);
++		}
++	}
++	if (lba >= curlun->num_sectors) {
++		curlun->sense_data = SS_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE;
++		return -EINVAL;
++	}
++
++	/* Carry out the file writes */
++	get_some_more = 1;
++	file_offset = usb_offset = ((loff_t) lba) << curlun->blkbits;
++	amount_left_to_req = amount_left_to_write = fsg->data_size_from_cmnd;
++
++	while (amount_left_to_write > 0) {
++
++		/* Queue a request for more data from the host */
++		bh = fsg->next_buffhd_to_fill;
++		if (bh->state == BUF_STATE_EMPTY && get_some_more) {
++
++			/* Figure out how much we want to get:
++			 * Try to get the remaining amount,
++			 * but not more than the buffer size.
++			 */
++			amount = min(amount_left_to_req, mod_data.buflen);
++
++			/* Beyond the end of the backing file? */
++			if (usb_offset >= curlun->file_length) {
++				get_some_more = 0;
++				curlun->sense_data =
++					SS_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE;
++				curlun->sense_data_info = usb_offset >> curlun->blkbits;
++				curlun->info_valid = 1;
++				continue;
++			}
++
++			/* Get the next buffer */
++			usb_offset += amount;
++			fsg->usb_amount_left -= amount;
++			amount_left_to_req -= amount;
++			if (amount_left_to_req == 0)
++				get_some_more = 0;
++
++			/* Except at the end of the transfer, amount will be
++			 * equal to the buffer size, which is divisible by
++			 * the bulk-out maxpacket size.
++			 */
++			set_bulk_out_req_length(fsg, bh, amount);
++			start_transfer(fsg, fsg->bulk_out, bh->outreq,
++					&bh->outreq_busy, &bh->state);
++			fsg->next_buffhd_to_fill = bh->next;
++			continue;
++		}
++
++		/* Write the received data to the backing file */
++		bh = fsg->next_buffhd_to_drain;
++		if (bh->state == BUF_STATE_EMPTY && !get_some_more)
++			break;			// We stopped early
++		if (bh->state == BUF_STATE_FULL) {
++			smp_rmb();
++			fsg->next_buffhd_to_drain = bh->next;
++			bh->state = BUF_STATE_EMPTY;
++
++			/* Did something go wrong with the transfer? */
++			if (bh->outreq->status != 0) {
++				curlun->sense_data = SS_COMMUNICATION_FAILURE;
++				curlun->sense_data_info = file_offset >> curlun->blkbits;
++				curlun->info_valid = 1;
++				break;
++			}
++
++			amount = bh->outreq->actual;
++			if (curlun->file_length - file_offset < amount) {
++				LERROR(curlun,
++	"write %u @ %llu beyond end %llu\n",
++	amount, (unsigned long long) file_offset,
++	(unsigned long long) curlun->file_length);
++				amount = curlun->file_length - file_offset;
++			}
++
++			/* Don't accept excess data.  The spec doesn't say
++			 * what to do in this case.  We'll ignore the error.
++			 */
++			amount = min(amount, bh->bulk_out_intended_length);
++
++			/* Don't write a partial block */
++			amount = round_down(amount, curlun->blksize);
++			if (amount == 0)
++				goto empty_write;
++
++			/* Perform the write */
++			file_offset_tmp = file_offset;
++			nwritten = vfs_write(curlun->filp,
++					(char __user *) bh->buf,
++					amount, &file_offset_tmp);
++			VLDBG(curlun, "file write %u @ %llu -> %d\n", amount,
++					(unsigned long long) file_offset,
++					(int) nwritten);
++			if (signal_pending(current))
++				return -EINTR;		// Interrupted!
++
++			if (nwritten < 0) {
++				LDBG(curlun, "error in file write: %d\n",
++						(int) nwritten);
++				nwritten = 0;
++			} else if (nwritten < amount) {
++				LDBG(curlun, "partial file write: %d/%u\n",
++						(int) nwritten, amount);
++				nwritten = round_down(nwritten, curlun->blksize);
++			}
++			file_offset += nwritten;
++			amount_left_to_write -= nwritten;
++			fsg->residue -= nwritten;
++
++			/* If an error occurred, report it and its position */
++			if (nwritten < amount) {
++				curlun->sense_data = SS_WRITE_ERROR;
++				curlun->sense_data_info = file_offset >> curlun->blkbits;
++				curlun->info_valid = 1;
++				break;
++			}
++
++ empty_write:
++			/* Did the host decide to stop early? */
++			if (bh->outreq->actual < bh->bulk_out_intended_length) {
++				fsg->short_packet_received = 1;
++				break;
++			}
++			continue;
++		}
++
++		/* Wait for something to happen */
++		rc = sleep_thread(fsg);
++		if (rc)
++			return rc;
++	}
++
++	return -EIO;		// No default reply
++}
++
++
++/*-------------------------------------------------------------------------*/
++
++static int do_synchronize_cache(struct fsg_dev *fsg)
++{
++	struct fsg_lun	*curlun = fsg->curlun;
++	int		rc;
++
++	/* We ignore the requested LBA and write out all of the
++	 * file's dirty data buffers. */
++	rc = fsg_lun_fsync_sub(curlun);
++	if (rc)
++		curlun->sense_data = SS_WRITE_ERROR;
++	return 0;
++}
++
++
++/*-------------------------------------------------------------------------*/
++
++static void invalidate_sub(struct fsg_lun *curlun)
++{
++	struct file	*filp = curlun->filp;
++	struct inode	*inode = filp->f_path.dentry->d_inode;
++	unsigned long	rc;
++
++	rc = invalidate_mapping_pages(inode->i_mapping, 0, -1);
++	VLDBG(curlun, "invalidate_mapping_pages -> %ld\n", rc);
++}
++
++static int do_verify(struct fsg_dev *fsg)
++{
++	struct fsg_lun		*curlun = fsg->curlun;
++	u32			lba;
++	u32			verification_length;
++	struct fsg_buffhd	*bh = fsg->next_buffhd_to_fill;
++	loff_t			file_offset, file_offset_tmp;
++	u32			amount_left;
++	unsigned int		amount;
++	ssize_t			nread;
++
++	/* Get the starting Logical Block Address and check that it's
++	 * not too big */
++	lba = get_unaligned_be32(&fsg->cmnd[2]);
++	if (lba >= curlun->num_sectors) {
++		curlun->sense_data = SS_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE;
++		return -EINVAL;
++	}
++
++	/* We allow DPO (Disable Page Out = don't save data in the
++	 * cache) but we don't implement it. */
++	if ((fsg->cmnd[1] & ~0x10) != 0) {
++		curlun->sense_data = SS_INVALID_FIELD_IN_CDB;
++		return -EINVAL;
++	}
++
++	verification_length = get_unaligned_be16(&fsg->cmnd[7]);
++	if (unlikely(verification_length == 0))
++		return -EIO;		// No default reply
++
++	/* Prepare to carry out the file verify */
++	amount_left = verification_length << curlun->blkbits;
++	file_offset = ((loff_t) lba) << curlun->blkbits;
++
++	/* Write out all the dirty buffers before invalidating them */
++	fsg_lun_fsync_sub(curlun);
++	if (signal_pending(current))
++		return -EINTR;
++
++	invalidate_sub(curlun);
++	if (signal_pending(current))
++		return -EINTR;
++
++	/* Just try to read the requested blocks */
++	while (amount_left > 0) {
++
++		/* Figure out how much we need to read:
++		 * Try to read the remaining amount, but not more than
++		 * the buffer size.
++		 * And don't try to read past the end of the file.
++		 */
++		amount = min((unsigned int) amount_left, mod_data.buflen);
++		amount = min((loff_t) amount,
++				curlun->file_length - file_offset);
++		if (amount == 0) {
++			curlun->sense_data =
++					SS_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE;
++			curlun->sense_data_info = file_offset >> curlun->blkbits;
++			curlun->info_valid = 1;
++			break;
++		}
++
++		/* Perform the read */
++		file_offset_tmp = file_offset;
++		nread = vfs_read(curlun->filp,
++				(char __user *) bh->buf,
++				amount, &file_offset_tmp);
++		VLDBG(curlun, "file read %u @ %llu -> %d\n", amount,
++				(unsigned long long) file_offset,
++				(int) nread);
++		if (signal_pending(current))
++			return -EINTR;
++
++		if (nread < 0) {
++			LDBG(curlun, "error in file verify: %d\n",
++					(int) nread);
++			nread = 0;
++		} else if (nread < amount) {
++			LDBG(curlun, "partial file verify: %d/%u\n",
++					(int) nread, amount);
++			nread = round_down(nread, curlun->blksize);
++		}
++		if (nread == 0) {
++			curlun->sense_data = SS_UNRECOVERED_READ_ERROR;
++			curlun->sense_data_info = file_offset >> curlun->blkbits;
++			curlun->info_valid = 1;
++			break;
++		}
++		file_offset += nread;
++		amount_left -= nread;
++	}
++	return 0;
++}
++
++
++/*-------------------------------------------------------------------------*/
++
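++/* Reference for the reply built below: standard INQUIRY data is 36 bytes.
++ * Byte 0 holds the peripheral device type, bit 7 of byte 1 is RMB
++ * (removable medium), bytes 2-3 give the version and response data format,
++ * byte 4 is the additional length (31), and bytes 8-15, 16-31 and 32-35
++ * carry the vendor id, product id and product revision strings. */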
++static int do_inquiry(struct fsg_dev *fsg, struct fsg_buffhd *bh)
++{
++	u8	*buf = (u8 *) bh->buf;
++
++	static char vendor_id[] = "Linux   ";
++	static char product_disk_id[] = "File-Stor Gadget";
++	static char product_cdrom_id[] = "File-CD Gadget  ";
++
++	if (!fsg->curlun) {		// Unsupported LUNs are okay
++		fsg->bad_lun_okay = 1;
++		memset(buf, 0, 36);
++		buf[0] = 0x7f;		// Unsupported, no device-type
++		buf[4] = 31;		// Additional length
++		return 36;
++	}
++
++	memset(buf, 0, 8);
++	buf[0] = (mod_data.cdrom ? TYPE_ROM : TYPE_DISK);
++	if (mod_data.removable)
++		buf[1] = 0x80;
++	buf[2] = 2;		// ANSI SCSI level 2
++	buf[3] = 2;		// SCSI-2 INQUIRY data format
++	buf[4] = 31;		// Additional length
++				// No special options
++	sprintf(buf + 8, "%-8s%-16s%04x", vendor_id,
++			(mod_data.cdrom ? product_cdrom_id :
++				product_disk_id),
++			mod_data.release);
++	return 36;
++}
++
++
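++/* Reference for the reply built below: 18 bytes of fixed-format sense data.
++ * Byte 0 is the response code 0x70 (current error) plus the valid bit,
++ * byte 2 the sense key, bytes 3-6 the information field, byte 7 the
++ * additional length (10), and bytes 12-13 the ASC/ASCQ pair.  The SS_*
++ * codes pack key, ASC and ASCQ into one word; SK(), ASC() and ASCQ()
++ * extract the individual bytes. */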
++static int do_request_sense(struct fsg_dev *fsg, struct fsg_buffhd *bh)
++{
++	struct fsg_lun	*curlun = fsg->curlun;
++	u8		*buf = (u8 *) bh->buf;
++	u32		sd, sdinfo;
++	int		valid;
++
++	/*
++	 * From the SCSI-2 spec., section 7.9 (Unit attention condition):
++	 *
++	 * If a REQUEST SENSE command is received from an initiator
++	 * with a pending unit attention condition (before the target
++	 * generates the contingent allegiance condition), then the
++	 * target shall either:
++	 *   a) report any pending sense data and preserve the unit
++	 *	attention condition on the logical unit, or,
++	 *   b) report the unit attention condition, may discard any
++	 *	pending sense data, and clear the unit attention
++	 *	condition on the logical unit for that initiator.
++	 *
++	 * FSG normally uses option a); enable this code to use option b).
++	 */
++#if 0
++	if (curlun && curlun->unit_attention_data != SS_NO_SENSE) {
++		curlun->sense_data = curlun->unit_attention_data;
++		curlun->unit_attention_data = SS_NO_SENSE;
++	}
++#endif
++
++	if (!curlun) {		// Unsupported LUNs are okay
++		fsg->bad_lun_okay = 1;
++		sd = SS_LOGICAL_UNIT_NOT_SUPPORTED;
++		sdinfo = 0;
++		valid = 0;
++	} else {
++		sd = curlun->sense_data;
++		sdinfo = curlun->sense_data_info;
++		valid = curlun->info_valid << 7;
++		curlun->sense_data = SS_NO_SENSE;
++		curlun->sense_data_info = 0;
++		curlun->info_valid = 0;
++	}
++
++	memset(buf, 0, 18);
++	buf[0] = valid | 0x70;			// Valid, current error
++	buf[2] = SK(sd);
++	put_unaligned_be32(sdinfo, &buf[3]);	/* Sense information */
++	buf[7] = 18 - 8;			// Additional sense length
++	buf[12] = ASC(sd);
++	buf[13] = ASCQ(sd);
++	return 18;
++}
++
++
++static int do_read_capacity(struct fsg_dev *fsg, struct fsg_buffhd *bh)
++{
++	struct fsg_lun	*curlun = fsg->curlun;
++	u32		lba = get_unaligned_be32(&fsg->cmnd[2]);
++	int		pmi = fsg->cmnd[8];
++	u8		*buf = (u8 *) bh->buf;
++
++	/* Check the PMI and LBA fields */
++	if (pmi > 1 || (pmi == 0 && lba != 0)) {
++		curlun->sense_data = SS_INVALID_FIELD_IN_CDB;
++		return -EINVAL;
++	}
++
++	put_unaligned_be32(curlun->num_sectors - 1, &buf[0]);
++						/* Max logical block */
++	put_unaligned_be32(curlun->blksize, &buf[4]);	/* Block length */
++	return 8;
++}
++
++
++static int do_read_header(struct fsg_dev *fsg, struct fsg_buffhd *bh)
++{
++	struct fsg_lun	*curlun = fsg->curlun;
++	int		msf = fsg->cmnd[1] & 0x02;
++	u32		lba = get_unaligned_be32(&fsg->cmnd[2]);
++	u8		*buf = (u8 *) bh->buf;
++
++	if ((fsg->cmnd[1] & ~0x02) != 0) {		/* Mask away MSF */
++		curlun->sense_data = SS_INVALID_FIELD_IN_CDB;
++		return -EINVAL;
++	}
++	if (lba >= curlun->num_sectors) {
++		curlun->sense_data = SS_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE;
++		return -EINVAL;
++	}
++
++	memset(buf, 0, 8);
++	buf[0] = 0x01;		/* 2048 bytes of user data, rest is EC */
++	store_cdrom_address(&buf[4], msf, lba);
++	return 8;
++}
++
++
++static int do_read_toc(struct fsg_dev *fsg, struct fsg_buffhd *bh)
++{
++	struct fsg_lun	*curlun = fsg->curlun;
++	int		msf = fsg->cmnd[1] & 0x02;
++	int		start_track = fsg->cmnd[6];
++	u8		*buf = (u8 *) bh->buf;
++
++	if ((fsg->cmnd[1] & ~0x02) != 0 ||		/* Mask away MSF */
++			start_track > 1) {
++		curlun->sense_data = SS_INVALID_FIELD_IN_CDB;
++		return -EINVAL;
++	}
++
++	memset(buf, 0, 20);
++	buf[1] = (20-2);		/* TOC data length */
++	buf[2] = 1;			/* First track number */
++	buf[3] = 1;			/* Last track number */
++	buf[5] = 0x16;			/* Data track, copying allowed */
++	buf[6] = 0x01;			/* Only track is number 1 */
++	store_cdrom_address(&buf[8], msf, 0);
++
++	buf[13] = 0x16;			/* Lead-out track is data */
++	buf[14] = 0xAA;			/* Lead-out track number */
++	store_cdrom_address(&buf[16], msf, curlun->num_sectors);
++	return 20;
++}
++
++
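++/* Reference for the reply built below: a mode parameter header (4 bytes
++ * for MODE SENSE(6), 8 bytes for MODE SENSE(10)), no block descriptors,
++ * then the only supported mode page -- the Caching page (0x08), 12 bytes
++ * including its two-byte page header.  The mode data length field in the
++ * header is filled in last, once the total length is known. */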
++static int do_mode_sense(struct fsg_dev *fsg, struct fsg_buffhd *bh)
++{
++	struct fsg_lun	*curlun = fsg->curlun;
++	int		mscmnd = fsg->cmnd[0];
++	u8		*buf = (u8 *) bh->buf;
++	u8		*buf0 = buf;
++	int		pc, page_code;
++	int		changeable_values, all_pages;
++	int		valid_page = 0;
++	int		len, limit;
++
++	if ((fsg->cmnd[1] & ~0x08) != 0) {		// Mask away DBD
++		curlun->sense_data = SS_INVALID_FIELD_IN_CDB;
++		return -EINVAL;
++	}
++	pc = fsg->cmnd[2] >> 6;
++	page_code = fsg->cmnd[2] & 0x3f;
++	if (pc == 3) {
++		curlun->sense_data = SS_SAVING_PARAMETERS_NOT_SUPPORTED;
++		return -EINVAL;
++	}
++	changeable_values = (pc == 1);
++	all_pages = (page_code == 0x3f);
++
++	/* Write the mode parameter header.  Fixed values are: default
++	 * medium type, no cache control (DPOFUA), and no block descriptors.
++	 * The only variable value is the WriteProtect bit.  We will fill in
++	 * the mode data length later. */
++	memset(buf, 0, 8);
++	if (mscmnd == MODE_SENSE) {
++		buf[2] = (curlun->ro ? 0x80 : 0x00);		// WP, DPOFUA
++		buf += 4;
++		limit = 255;
++	} else {			// MODE_SENSE_10
++		buf[3] = (curlun->ro ? 0x80 : 0x00);		// WP, DPOFUA
++		buf += 8;
++		limit = 65535;		// Should really be mod_data.buflen
++	}
++
++	/* No block descriptors */
++
++	/* The mode pages, in numerical order.  The only page we support
++	 * is the Caching page. */
++	if (page_code == 0x08 || all_pages) {
++		valid_page = 1;
++		buf[0] = 0x08;		// Page code
++		buf[1] = 10;		// Page length
++		memset(buf+2, 0, 10);	// None of the fields are changeable
++
++		if (!changeable_values) {
++			buf[2] = 0x04;	// Write cache enable,
++					// Read cache not disabled
++					// No cache retention priorities
++			put_unaligned_be16(0xffff, &buf[4]);
++					/* Don't disable prefetch */
++					/* Minimum prefetch = 0 */
++			put_unaligned_be16(0xffff, &buf[8]);
++					/* Maximum prefetch */
++			put_unaligned_be16(0xffff, &buf[10]);
++					/* Maximum prefetch ceiling */
++		}
++		buf += 12;
++	}
++
++	/* Check that a valid page was requested and the mode data length
++	 * isn't too long. */
++	len = buf - buf0;
++	if (!valid_page || len > limit) {
++		curlun->sense_data = SS_INVALID_FIELD_IN_CDB;
++		return -EINVAL;
++	}
++
++	/*  Store the mode data length */
++	if (mscmnd == MODE_SENSE)
++		buf0[0] = len - 1;
++	else
++		put_unaligned_be16(len - 2, buf0);
++	return len;
++}
++
++
++static int do_start_stop(struct fsg_dev *fsg)
++{
++	struct fsg_lun	*curlun = fsg->curlun;
++	int		loej, start;
++
++	if (!mod_data.removable) {
++		curlun->sense_data = SS_INVALID_COMMAND;
++		return -EINVAL;
++	}
++
++	// int immed = fsg->cmnd[1] & 0x01;
++	loej = fsg->cmnd[4] & 0x02;
++	start = fsg->cmnd[4] & 0x01;
++
++#ifdef CONFIG_USB_FILE_STORAGE_TEST
++	if ((fsg->cmnd[1] & ~0x01) != 0 ||		// Mask away Immed
++			(fsg->cmnd[4] & ~0x03) != 0) {	// Mask LoEj, Start
++		curlun->sense_data = SS_INVALID_FIELD_IN_CDB;
++		return -EINVAL;
++	}
++
++	if (!start) {
++
++		/* Are we allowed to unload the media? */
++		if (curlun->prevent_medium_removal) {
++			LDBG(curlun, "unload attempt prevented\n");
++			curlun->sense_data = SS_MEDIUM_REMOVAL_PREVENTED;
++			return -EINVAL;
++		}
++		if (loej) {		// Simulate an unload/eject
++			up_read(&fsg->filesem);
++			down_write(&fsg->filesem);
++			fsg_lun_close(curlun);
++			up_write(&fsg->filesem);
++			down_read(&fsg->filesem);
++		}
++	} else {
++
++		/* Our emulation doesn't support mounting; the medium is
++		 * available for use as soon as it is loaded. */
++		if (!fsg_lun_is_open(curlun)) {
++			curlun->sense_data = SS_MEDIUM_NOT_PRESENT;
++			return -EINVAL;
++		}
++	}
++#endif
++	return 0;
++}
++
++
++static int do_prevent_allow(struct fsg_dev *fsg)
++{
++	struct fsg_lun	*curlun = fsg->curlun;
++	int		prevent;
++
++	if (!mod_data.removable) {
++		curlun->sense_data = SS_INVALID_COMMAND;
++		return -EINVAL;
++	}
++
++	prevent = fsg->cmnd[4] & 0x01;
++	if ((fsg->cmnd[4] & ~0x01) != 0) {		// Mask away Prevent
++		curlun->sense_data = SS_INVALID_FIELD_IN_CDB;
++		return -EINVAL;
++	}
++
++	if (curlun->prevent_medium_removal && !prevent)
++		fsg_lun_fsync_sub(curlun);
++	curlun->prevent_medium_removal = prevent;
++	return 0;
++}
++
++
++static int do_read_format_capacities(struct fsg_dev *fsg,
++			struct fsg_buffhd *bh)
++{
++	struct fsg_lun	*curlun = fsg->curlun;
++	u8		*buf = (u8 *) bh->buf;
++
++	buf[0] = buf[1] = buf[2] = 0;
++	buf[3] = 8;		// Only the Current/Maximum Capacity Descriptor
++	buf += 4;
++
++	put_unaligned_be32(curlun->num_sectors, &buf[0]);
++						/* Number of blocks */
++	put_unaligned_be32(curlun->blksize, &buf[4]);	/* Block length */
++	buf[4] = 0x02;				/* Current capacity */
++	return 12;
++}
++
++
++static int do_mode_select(struct fsg_dev *fsg, struct fsg_buffhd *bh)
++{
++	struct fsg_lun	*curlun = fsg->curlun;
++
++	/* We don't support MODE SELECT */
++	curlun->sense_data = SS_INVALID_COMMAND;
++	return -EINVAL;
++}
++
++
++/*-------------------------------------------------------------------------*/
++
++static int halt_bulk_in_endpoint(struct fsg_dev *fsg)
++{
++	int	rc;
++
++	rc = fsg_set_halt(fsg, fsg->bulk_in);
++	if (rc == -EAGAIN)
++		VDBG(fsg, "delayed bulk-in endpoint halt\n");
++	while (rc != 0) {
++		if (rc != -EAGAIN) {
++			WARNING(fsg, "usb_ep_set_halt -> %d\n", rc);
++			rc = 0;
++			break;
++		}
++
++		/* Wait for a short time and then try again */
++		if (msleep_interruptible(100) != 0)
++			return -EINTR;
++		rc = usb_ep_set_halt(fsg->bulk_in);
++	}
++	return rc;
++}
++
++static int wedge_bulk_in_endpoint(struct fsg_dev *fsg)
++{
++	int	rc;
++
++	DBG(fsg, "bulk-in set wedge\n");
++	rc = usb_ep_set_wedge(fsg->bulk_in);
++	if (rc == -EAGAIN)
++		VDBG(fsg, "delayed bulk-in endpoint wedge\n");
++	while (rc != 0) {
++		if (rc != -EAGAIN) {
++			WARNING(fsg, "usb_ep_set_wedge -> %d\n", rc);
++			rc = 0;
++			break;
++		}
++
++		/* Wait for a short time and then try again */
++		if (msleep_interruptible(100) != 0)
++			return -EINTR;
++		rc = usb_ep_set_wedge(fsg->bulk_in);
++	}
++	return rc;
++}
++
++static int throw_away_data(struct fsg_dev *fsg)
++{
++	struct fsg_buffhd	*bh;
++	u32			amount;
++	int			rc;
++
++	while ((bh = fsg->next_buffhd_to_drain)->state != BUF_STATE_EMPTY ||
++			fsg->usb_amount_left > 0) {
++
++		/* Throw away the data in a filled buffer */
++		if (bh->state == BUF_STATE_FULL) {
++			smp_rmb();
++			bh->state = BUF_STATE_EMPTY;
++			fsg->next_buffhd_to_drain = bh->next;
++
++			/* A short packet or an error ends everything */
++			if (bh->outreq->actual < bh->bulk_out_intended_length ||
++					bh->outreq->status != 0) {
++				raise_exception(fsg, FSG_STATE_ABORT_BULK_OUT);
++				return -EINTR;
++			}
++			continue;
++		}
++
++		/* Try to submit another request if we need one */
++		bh = fsg->next_buffhd_to_fill;
++		if (bh->state == BUF_STATE_EMPTY && fsg->usb_amount_left > 0) {
++			amount = min(fsg->usb_amount_left,
++					(u32) mod_data.buflen);
++
++			/* Except at the end of the transfer, amount will be
++			 * equal to the buffer size, which is divisible by
++			 * the bulk-out maxpacket size.
++			 */
++			set_bulk_out_req_length(fsg, bh, amount);
++			start_transfer(fsg, fsg->bulk_out, bh->outreq,
++					&bh->outreq_busy, &bh->state);
++			fsg->next_buffhd_to_fill = bh->next;
++			fsg->usb_amount_left -= amount;
++			continue;
++		}
++
++		/* Otherwise wait for something to happen */
++		rc = sleep_thread(fsg);
++		if (rc)
++			return rc;
++	}
++	return 0;
++}
++
++
++static int finish_reply(struct fsg_dev *fsg)
++{
++	struct fsg_buffhd	*bh = fsg->next_buffhd_to_fill;
++	int			rc = 0;
++
++	switch (fsg->data_dir) {
++	case DATA_DIR_NONE:
++		break;			// Nothing to send
++
++	/* If we don't know whether the host wants to read or write,
++	 * this must be CB or CBI with an unknown command.  We mustn't
++	 * try to send or receive any data.  So stall both bulk pipes
++	 * if we can and wait for a reset. */
++	case DATA_DIR_UNKNOWN:
++		if (mod_data.can_stall) {
++			fsg_set_halt(fsg, fsg->bulk_out);
++			rc = halt_bulk_in_endpoint(fsg);
++		}
++		break;
++
++	/* All but the last buffer of data must have already been sent */
++	case DATA_DIR_TO_HOST:
++		if (fsg->data_size == 0)
++			;		// Nothing to send
++
++		/* If there's no residue, simply send the last buffer */
++		else if (fsg->residue == 0) {
++			bh->inreq->zero = 0;
++			start_transfer(fsg, fsg->bulk_in, bh->inreq,
++					&bh->inreq_busy, &bh->state);
++			fsg->next_buffhd_to_fill = bh->next;
++		}
++
++		/* There is a residue.  For CB and CBI, simply mark the end
++		 * of the data with a short packet.  However, if we are
++		 * allowed to stall, there was no data at all (residue ==
++		 * data_size), and the command failed (invalid LUN or
++		 * sense data is set), then halt the bulk-in endpoint
++		 * instead. */
++		else if (!transport_is_bbb()) {
++			if (mod_data.can_stall &&
++					fsg->residue == fsg->data_size &&
++	(!fsg->curlun || fsg->curlun->sense_data != SS_NO_SENSE)) {
++				bh->state = BUF_STATE_EMPTY;
++				rc = halt_bulk_in_endpoint(fsg);
++			} else {
++				bh->inreq->zero = 1;
++				start_transfer(fsg, fsg->bulk_in, bh->inreq,
++						&bh->inreq_busy, &bh->state);
++				fsg->next_buffhd_to_fill = bh->next;
++			}
++		}
++
++		/*
++		 * For Bulk-only, mark the end of the data with a short
++		 * packet.  If we are allowed to stall, halt the bulk-in
++		 * endpoint.  (Note: This violates the Bulk-Only Transport
++		 * specification, which requires us to pad the data if we
++		 * don't halt the endpoint.  Presumably nobody will mind.)
++		 */
++		else {
++			bh->inreq->zero = 1;
++			start_transfer(fsg, fsg->bulk_in, bh->inreq,
++					&bh->inreq_busy, &bh->state);
++			fsg->next_buffhd_to_fill = bh->next;
++			if (mod_data.can_stall)
++				rc = halt_bulk_in_endpoint(fsg);
++		}
++		break;
++
++	/* We have processed all we want from the data the host has sent.
++	 * There may still be outstanding bulk-out requests. */
++	case DATA_DIR_FROM_HOST:
++		if (fsg->residue == 0)
++			;		// Nothing to receive
++
++		/* Did the host stop sending unexpectedly early? */
++		else if (fsg->short_packet_received) {
++			raise_exception(fsg, FSG_STATE_ABORT_BULK_OUT);
++			rc = -EINTR;
++		}
++
++		/* We haven't processed all the incoming data.  Even though
++		 * we may be allowed to stall, doing so would cause a race.
++		 * The controller may already have ACK'ed all the remaining
++		 * bulk-out packets, in which case the host wouldn't see a
++		 * STALL.  Not realizing the endpoint was halted, it wouldn't
++		 * clear the halt -- leading to problems later on. */
++#if 0
++		else if (mod_data.can_stall) {
++			fsg_set_halt(fsg, fsg->bulk_out);
++			raise_exception(fsg, FSG_STATE_ABORT_BULK_OUT);
++			rc = -EINTR;
++		}
++#endif
++
++		/* We can't stall.  Read in the excess data and throw it
++		 * all away. */
++		else
++			rc = throw_away_data(fsg);
++		break;
++	}
++	return rc;
++}
++
++
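++/* Reference for the Bulk-only status phase below: a CSW is 13 bytes --
++ * dCSWSignature "USBS" (0x53425355 little-endian), dCSWTag echoing the
++ * CBW tag, dCSWDataResidue, and bCSWStatus, where 0 = passed, 1 = failed
++ * and 2 = phase error (US_BULK_STAT_OK/FAIL/PHASE). */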
++static int send_status(struct fsg_dev *fsg)
++{
++	struct fsg_lun		*curlun = fsg->curlun;
++	struct fsg_buffhd	*bh;
++	int			rc;
++	u8			status = US_BULK_STAT_OK;
++	u32			sd, sdinfo = 0;
++
++	/* Wait for the next buffer to become available */
++	bh = fsg->next_buffhd_to_fill;
++	while (bh->state != BUF_STATE_EMPTY) {
++		rc = sleep_thread(fsg);
++		if (rc)
++			return rc;
++	}
++
++	if (curlun) {
++		sd = curlun->sense_data;
++		sdinfo = curlun->sense_data_info;
++	} else if (fsg->bad_lun_okay)
++		sd = SS_NO_SENSE;
++	else
++		sd = SS_LOGICAL_UNIT_NOT_SUPPORTED;
++
++	if (fsg->phase_error) {
++		DBG(fsg, "sending phase-error status\n");
++		status = US_BULK_STAT_PHASE;
++		sd = SS_INVALID_COMMAND;
++	} else if (sd != SS_NO_SENSE) {
++		DBG(fsg, "sending command-failure status\n");
++		status = US_BULK_STAT_FAIL;
++		VDBG(fsg, "  sense data: SK x%02x, ASC x%02x, ASCQ x%02x;"
++				"  info x%x\n",
++				SK(sd), ASC(sd), ASCQ(sd), sdinfo);
++	}
++
++	if (transport_is_bbb()) {
++		struct bulk_cs_wrap	*csw = bh->buf;
++
++		/* Store and send the Bulk-only CSW */
++		csw->Signature = cpu_to_le32(US_BULK_CS_SIGN);
++		csw->Tag = fsg->tag;
++		csw->Residue = cpu_to_le32(fsg->residue);
++		csw->Status = status;
++
++		bh->inreq->length = US_BULK_CS_WRAP_LEN;
++		bh->inreq->zero = 0;
++		start_transfer(fsg, fsg->bulk_in, bh->inreq,
++				&bh->inreq_busy, &bh->state);
++
++	} else if (mod_data.transport_type == USB_PR_CB) {
++
++		/* Control-Bulk transport has no status phase! */
++		return 0;
++
++	} else {			// USB_PR_CBI
++		struct interrupt_data	*buf = bh->buf;
++
++		/* Store and send the Interrupt data.  UFI sends the ASC
++		 * and ASCQ bytes.  Everything else sends a Type (which
++		 * is always 0) and the status Value. */
++		if (mod_data.protocol_type == USB_SC_UFI) {
++			buf->bType = ASC(sd);
++			buf->bValue = ASCQ(sd);
++		} else {
++			buf->bType = 0;
++			buf->bValue = status;
++		}
++		fsg->intreq->length = CBI_INTERRUPT_DATA_LEN;
++
++		fsg->intr_buffhd = bh;		// Point to the right buffhd
++		fsg->intreq->buf = bh->inreq->buf;
++		fsg->intreq->context = bh;
++		start_transfer(fsg, fsg->intr_in, fsg->intreq,
++				&fsg->intreq_busy, &bh->state);
++	}
++
++	fsg->next_buffhd_to_fill = bh->next;
++	return 0;
++}
++
++
++/*-------------------------------------------------------------------------*/
++
++/* Check whether the command is properly formed and whether its data size
++ * and direction agree with the values we already have. */
++static int check_command(struct fsg_dev *fsg, int cmnd_size,
++		enum data_direction data_dir, unsigned int mask,
++		int needs_medium, const char *name)
++{
++	int			i;
++	int			lun = fsg->cmnd[1] >> 5;
++	static const char	dirletter[4] = {'u', 'o', 'i', 'n'};
++	char			hdlen[20];
++	struct fsg_lun		*curlun;
++
++	/* Adjust the expected cmnd_size for protocol encapsulation padding.
++	 * Transparent SCSI doesn't pad. */
++	if (protocol_is_scsi())
++		;
++
++	/* There's some disagreement as to whether RBC pads commands or not.
++	 * We'll play it safe and accept either form. */
++	else if (mod_data.protocol_type == USB_SC_RBC) {
++		if (fsg->cmnd_size == 12)
++			cmnd_size = 12;
++
++	/* All the other protocols pad to 12 bytes */
++	} else
++		cmnd_size = 12;
++
++	hdlen[0] = 0;
++	if (fsg->data_dir != DATA_DIR_UNKNOWN)
++		sprintf(hdlen, ", H%c=%u", dirletter[(int) fsg->data_dir],
++				fsg->data_size);
++	VDBG(fsg, "SCSI command: %s;  Dc=%d, D%c=%u;  Hc=%d%s\n",
++			name, cmnd_size, dirletter[(int) data_dir],
++			fsg->data_size_from_cmnd, fsg->cmnd_size, hdlen);
++
++	/* We can't reply at all until we know the correct data direction
++	 * and size. */
++	if (fsg->data_size_from_cmnd == 0)
++		data_dir = DATA_DIR_NONE;
++	if (fsg->data_dir == DATA_DIR_UNKNOWN) {	// CB or CBI
++		fsg->data_dir = data_dir;
++		fsg->data_size = fsg->data_size_from_cmnd;
++
++	} else {					// Bulk-only
++		if (fsg->data_size < fsg->data_size_from_cmnd) {
++
++			/* Host data size < Device data size is a phase error.
++			 * Carry out the command, but only transfer as much
++			 * as we are allowed. */
++			fsg->data_size_from_cmnd = fsg->data_size;
++			fsg->phase_error = 1;
++		}
++	}
++	fsg->residue = fsg->usb_amount_left = fsg->data_size;
++
++	/* Conflicting data directions is a phase error */
++	if (fsg->data_dir != data_dir && fsg->data_size_from_cmnd > 0) {
++		fsg->phase_error = 1;
++		return -EINVAL;
++	}
++
++	/* Verify the length of the command itself */
++	if (cmnd_size != fsg->cmnd_size) {
++
++		/* Special case workaround: There are plenty of buggy SCSI
++		 * implementations. Many have issues with cbw->Length
++		 * field passing a wrong command size. For those cases we
++		 * always try to work around the problem by using the length
++		 * sent by the host side provided it is at least as large
++		 * as the correct command length.
++		 * Examples of such cases would be MS-Windows, which issues
++		 * REQUEST SENSE with cbw->Length == 12 where it should
++		 * be 6, and xbox360 issuing INQUIRY, TEST UNIT READY and
++		 * REQUEST SENSE with cbw->Length == 10 where it should
++		 * be 6 as well.
++		 */
++		if (cmnd_size <= fsg->cmnd_size) {
++			DBG(fsg, "%s is buggy! Expected length %d "
++					"but we got %d\n", name,
++					cmnd_size, fsg->cmnd_size);
++			cmnd_size = fsg->cmnd_size;
++		} else {
++			fsg->phase_error = 1;
++			return -EINVAL;
++		}
++	}
++
++	/* Check that the LUN values are consistent */
++	if (transport_is_bbb()) {
++		if (fsg->lun != lun)
++			DBG(fsg, "using LUN %d from CBW, "
++					"not LUN %d from CDB\n",
++					fsg->lun, lun);
++	}
++
++	/* Check the LUN */
++	curlun = fsg->curlun;
++	if (curlun) {
++		if (fsg->cmnd[0] != REQUEST_SENSE) {
++			curlun->sense_data = SS_NO_SENSE;
++			curlun->sense_data_info = 0;
++			curlun->info_valid = 0;
++		}
++	} else {
++		fsg->bad_lun_okay = 0;
++
++		/* INQUIRY and REQUEST SENSE commands are explicitly allowed
++		 * to use unsupported LUNs; all others may not. */
++		if (fsg->cmnd[0] != INQUIRY &&
++				fsg->cmnd[0] != REQUEST_SENSE) {
++			DBG(fsg, "unsupported LUN %d\n", fsg->lun);
++			return -EINVAL;
++		}
++	}
++
++	/* If a unit attention condition exists, only INQUIRY and
++	 * REQUEST SENSE commands are allowed; anything else must fail. */
++	if (curlun && curlun->unit_attention_data != SS_NO_SENSE &&
++			fsg->cmnd[0] != INQUIRY &&
++			fsg->cmnd[0] != REQUEST_SENSE) {
++		curlun->sense_data = curlun->unit_attention_data;
++		curlun->unit_attention_data = SS_NO_SENSE;
++		return -EINVAL;
++	}
++
++	/* Check that only command bytes listed in the mask are non-zero */
++	fsg->cmnd[1] &= 0x1f;			// Mask away the LUN
++	for (i = 1; i < cmnd_size; ++i) {
++		if (fsg->cmnd[i] && !(mask & (1 << i))) {
++			if (curlun)
++				curlun->sense_data = SS_INVALID_FIELD_IN_CDB;
++			return -EINVAL;
++		}
++	}
++
++	/* If the medium isn't mounted and the command needs to access
++	 * it, return an error. */
++	if (curlun && !fsg_lun_is_open(curlun) && needs_medium) {
++		curlun->sense_data = SS_MEDIUM_NOT_PRESENT;
++		return -EINVAL;
++	}
++
++	return 0;
++}
++
++/* wrapper of check_command for data size in blocks handling */
++static int check_command_size_in_blocks(struct fsg_dev *fsg, int cmnd_size,
++		enum data_direction data_dir, unsigned int mask,
++		int needs_medium, const char *name)
++{
++	if (fsg->curlun)
++		fsg->data_size_from_cmnd <<= fsg->curlun->blkbits;
++	return check_command(fsg, cmnd_size, data_dir,
++			mask, needs_medium, name);
++}
++
++static int do_scsi_command(struct fsg_dev *fsg)
++{
++	struct fsg_buffhd	*bh;
++	int			rc;
++	int			reply = -EINVAL;
++	int			i;
++	static char		unknown[16];
++
++	dump_cdb(fsg);
++
++	/* Wait for the next buffer to become available for data or status */
++	bh = fsg->next_buffhd_to_drain = fsg->next_buffhd_to_fill;
++	while (bh->state != BUF_STATE_EMPTY) {
++		rc = sleep_thread(fsg);
++		if (rc)
++			return rc;
++	}
++	fsg->phase_error = 0;
++	fsg->short_packet_received = 0;
++
++	down_read(&fsg->filesem);	// We're using the backing file
++	switch (fsg->cmnd[0]) {
++
++	case INQUIRY:
++		fsg->data_size_from_cmnd = fsg->cmnd[4];
++		if ((reply = check_command(fsg, 6, DATA_DIR_TO_HOST,
++				(1<<4), 0,
++				"INQUIRY")) == 0)
++			reply = do_inquiry(fsg, bh);
++		break;
++
++	case MODE_SELECT:
++		fsg->data_size_from_cmnd = fsg->cmnd[4];
++		if ((reply = check_command(fsg, 6, DATA_DIR_FROM_HOST,
++				(1<<1) | (1<<4), 0,
++				"MODE SELECT(6)")) == 0)
++			reply = do_mode_select(fsg, bh);
++		break;
++
++	case MODE_SELECT_10:
++		fsg->data_size_from_cmnd = get_unaligned_be16(&fsg->cmnd[7]);
++		if ((reply = check_command(fsg, 10, DATA_DIR_FROM_HOST,
++				(1<<1) | (3<<7), 0,
++				"MODE SELECT(10)")) == 0)
++			reply = do_mode_select(fsg, bh);
++		break;
++
++	case MODE_SENSE:
++		fsg->data_size_from_cmnd = fsg->cmnd[4];
++		if ((reply = check_command(fsg, 6, DATA_DIR_TO_HOST,
++				(1<<1) | (1<<2) | (1<<4), 0,
++				"MODE SENSE(6)")) == 0)
++			reply = do_mode_sense(fsg, bh);
++		break;
++
++	case MODE_SENSE_10:
++		fsg->data_size_from_cmnd = get_unaligned_be16(&fsg->cmnd[7]);
++		if ((reply = check_command(fsg, 10, DATA_DIR_TO_HOST,
++				(1<<1) | (1<<2) | (3<<7), 0,
++				"MODE SENSE(10)")) == 0)
++			reply = do_mode_sense(fsg, bh);
++		break;
++
++	case ALLOW_MEDIUM_REMOVAL:
++		fsg->data_size_from_cmnd = 0;
++		if ((reply = check_command(fsg, 6, DATA_DIR_NONE,
++				(1<<4), 0,
++				"PREVENT-ALLOW MEDIUM REMOVAL")) == 0)
++			reply = do_prevent_allow(fsg);
++		break;
++
++	case READ_6:
++		i = fsg->cmnd[4];
++		fsg->data_size_from_cmnd = (i == 0) ? 256 : i;
++		if ((reply = check_command_size_in_blocks(fsg, 6,
++				DATA_DIR_TO_HOST,
++				(7<<1) | (1<<4), 1,
++				"READ(6)")) == 0)
++			reply = do_read(fsg);
++		break;
++
++	case READ_10:
++		fsg->data_size_from_cmnd = get_unaligned_be16(&fsg->cmnd[7]);
++		if ((reply = check_command_size_in_blocks(fsg, 10,
++				DATA_DIR_TO_HOST,
++				(1<<1) | (0xf<<2) | (3<<7), 1,
++				"READ(10)")) == 0)
++			reply = do_read(fsg);
++		break;
++
++	case READ_12:
++		fsg->data_size_from_cmnd = get_unaligned_be32(&fsg->cmnd[6]);
++		if ((reply = check_command_size_in_blocks(fsg, 12,
++				DATA_DIR_TO_HOST,
++				(1<<1) | (0xf<<2) | (0xf<<6), 1,
++				"READ(12)")) == 0)
++			reply = do_read(fsg);
++		break;
++
++	case READ_CAPACITY:
++		fsg->data_size_from_cmnd = 8;
++		if ((reply = check_command(fsg, 10, DATA_DIR_TO_HOST,
++				(0xf<<2) | (1<<8), 1,
++				"READ CAPACITY")) == 0)
++			reply = do_read_capacity(fsg, bh);
++		break;
++
++	case READ_HEADER:
++		if (!mod_data.cdrom)
++			goto unknown_cmnd;
++		fsg->data_size_from_cmnd = get_unaligned_be16(&fsg->cmnd[7]);
++		if ((reply = check_command(fsg, 10, DATA_DIR_TO_HOST,
++				(3<<7) | (0x1f<<1), 1,
++				"READ HEADER")) == 0)
++			reply = do_read_header(fsg, bh);
++		break;
++
++	case READ_TOC:
++		if (!mod_data.cdrom)
++			goto unknown_cmnd;
++		fsg->data_size_from_cmnd = get_unaligned_be16(&fsg->cmnd[7]);
++		if ((reply = check_command(fsg, 10, DATA_DIR_TO_HOST,
++				(7<<6) | (1<<1), 1,
++				"READ TOC")) == 0)
++			reply = do_read_toc(fsg, bh);
++		break;
++
++	case READ_FORMAT_CAPACITIES:
++		fsg->data_size_from_cmnd = get_unaligned_be16(&fsg->cmnd[7]);
++		if ((reply = check_command(fsg, 10, DATA_DIR_TO_HOST,
++				(3<<7), 1,
++				"READ FORMAT CAPACITIES")) == 0)
++			reply = do_read_format_capacities(fsg, bh);
++		break;
++
++	case REQUEST_SENSE:
++		fsg->data_size_from_cmnd = fsg->cmnd[4];
++		if ((reply = check_command(fsg, 6, DATA_DIR_TO_HOST,
++				(1<<4), 0,
++				"REQUEST SENSE")) == 0)
++			reply = do_request_sense(fsg, bh);
++		break;
++
++	case START_STOP:
++		fsg->data_size_from_cmnd = 0;
++		if ((reply = check_command(fsg, 6, DATA_DIR_NONE,
++				(1<<1) | (1<<4), 0,
++				"START-STOP UNIT")) == 0)
++			reply = do_start_stop(fsg);
++		break;
++
++	case SYNCHRONIZE_CACHE:
++		fsg->data_size_from_cmnd = 0;
++		if ((reply = check_command(fsg, 10, DATA_DIR_NONE,
++				(0xf<<2) | (3<<7), 1,
++				"SYNCHRONIZE CACHE")) == 0)
++			reply = do_synchronize_cache(fsg);
++		break;
++
++	case TEST_UNIT_READY:
++		fsg->data_size_from_cmnd = 0;
++		reply = check_command(fsg, 6, DATA_DIR_NONE,
++				0, 1,
++				"TEST UNIT READY");
++		break;
++
++	/* Although optional, this command is used by MS-Windows.  We
++	 * support a minimal version: BytChk must be 0. */
++	case VERIFY:
++		fsg->data_size_from_cmnd = 0;
++		if ((reply = check_command(fsg, 10, DATA_DIR_NONE,
++				(1<<1) | (0xf<<2) | (3<<7), 1,
++				"VERIFY")) == 0)
++			reply = do_verify(fsg);
++		break;
++
++	case WRITE_6:
++		i = fsg->cmnd[4];
++		fsg->data_size_from_cmnd = (i == 0) ? 256 : i;
++		if ((reply = check_command_size_in_blocks(fsg, 6,
++				DATA_DIR_FROM_HOST,
++				(7<<1) | (1<<4), 1,
++				"WRITE(6)")) == 0)
++			reply = do_write(fsg);
++		break;
++
++	case WRITE_10:
++		fsg->data_size_from_cmnd = get_unaligned_be16(&fsg->cmnd[7]);
++		if ((reply = check_command_size_in_blocks(fsg, 10,
++				DATA_DIR_FROM_HOST,
++				(1<<1) | (0xf<<2) | (3<<7), 1,
++				"WRITE(10)")) == 0)
++			reply = do_write(fsg);
++		break;
++
++	case WRITE_12:
++		fsg->data_size_from_cmnd = get_unaligned_be32(&fsg->cmnd[6]);
++		if ((reply = check_command_size_in_blocks(fsg, 12,
++				DATA_DIR_FROM_HOST,
++				(1<<1) | (0xf<<2) | (0xf<<6), 1,
++				"WRITE(12)")) == 0)
++			reply = do_write(fsg);
++		break;
++
++	/* Some mandatory commands that we recognize but don't implement.
++	 * They don't mean much in this setting.  It's left as an exercise
++	 * for anyone interested to implement RESERVE and RELEASE in terms
++	 * of Posix locks. */
++	case FORMAT_UNIT:
++	case RELEASE:
++	case RESERVE:
++	case SEND_DIAGNOSTIC:
++		// Fall through
++
++	default:
++ unknown_cmnd:
++		fsg->data_size_from_cmnd = 0;
++		sprintf(unknown, "Unknown x%02x", fsg->cmnd[0]);
++		if ((reply = check_command(fsg, fsg->cmnd_size,
++				DATA_DIR_UNKNOWN, ~0, 0, unknown)) == 0) {
++			fsg->curlun->sense_data = SS_INVALID_COMMAND;
++			reply = -EINVAL;
++		}
++		break;
++	}
++	up_read(&fsg->filesem);
++
++	if (reply == -EINTR || signal_pending(current))
++		return -EINTR;
++
++	/* Set up the single reply buffer for finish_reply() */
++	if (reply == -EINVAL)
++		reply = 0;		// Error reply length
++	if (reply >= 0 && fsg->data_dir == DATA_DIR_TO_HOST) {
++		reply = min((u32) reply, fsg->data_size_from_cmnd);
++		bh->inreq->length = reply;
++		bh->state = BUF_STATE_FULL;
++		fsg->residue -= reply;
++	}				// Otherwise it's already set
++
++	return 0;
++}
++
++
++/*-------------------------------------------------------------------------*/
++
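++/* Reference for the validation below: a Bulk-only CBW is exactly 31 bytes --
++ * dCBWSignature "USBC" (0x43425355 little-endian), dCBWTag,
++ * dCBWDataTransferLength, bmCBWFlags (bit 7 set = IN), bCBWLUN,
++ * bCBWCBLength (1 to 16) and the CDB itself.  A packet of the wrong size
++ * or with a bad signature is rejected by wedging the bulk-in endpoint. */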
++static int received_cbw(struct fsg_dev *fsg, struct fsg_buffhd *bh)
++{
++	struct usb_request		*req = bh->outreq;
++	struct bulk_cb_wrap	*cbw = req->buf;
++
++	/* Was this a real packet?  Should it be ignored? */
++	if (req->status || test_bit(IGNORE_BULK_OUT, &fsg->atomic_bitflags))
++		return -EINVAL;
++
++	/* Is the CBW valid? */
++	if (req->actual != US_BULK_CB_WRAP_LEN ||
++			cbw->Signature != cpu_to_le32(
++				US_BULK_CB_SIGN)) {
++		DBG(fsg, "invalid CBW: len %u sig 0x%x\n",
++				req->actual,
++				le32_to_cpu(cbw->Signature));
++
++		/* The Bulk-only spec says we MUST stall the IN endpoint
++		 * (6.6.1), so it's unavoidable.  It also says we must
++		 * retain this state until the next reset, but there's
++		 * no way to tell the controller driver it should ignore
++		 * Clear-Feature(HALT) requests.
++		 *
++		 * We aren't required to halt the OUT endpoint; instead
++		 * we can simply accept and discard any data received
++		 * until the next reset. */
++		wedge_bulk_in_endpoint(fsg);
++		set_bit(IGNORE_BULK_OUT, &fsg->atomic_bitflags);
++		return -EINVAL;
++	}
++
++	/* Is the CBW meaningful? */
++	if (cbw->Lun >= FSG_MAX_LUNS || cbw->Flags & ~US_BULK_FLAG_IN ||
++			cbw->Length <= 0 || cbw->Length > MAX_COMMAND_SIZE) {
++		DBG(fsg, "non-meaningful CBW: lun = %u, flags = 0x%x, "
++				"cmdlen %u\n",
++				cbw->Lun, cbw->Flags, cbw->Length);
++
++		/* We can do anything we want here, so let's stall the
++		 * bulk pipes if we are allowed to. */
++		if (mod_data.can_stall) {
++			fsg_set_halt(fsg, fsg->bulk_out);
++			halt_bulk_in_endpoint(fsg);
++		}
++		return -EINVAL;
++	}
++
++	/* Save the command for later */
++	fsg->cmnd_size = cbw->Length;
++	memcpy(fsg->cmnd, cbw->CDB, fsg->cmnd_size);
++	if (cbw->Flags & US_BULK_FLAG_IN)
++		fsg->data_dir = DATA_DIR_TO_HOST;
++	else
++		fsg->data_dir = DATA_DIR_FROM_HOST;
++	fsg->data_size = le32_to_cpu(cbw->DataTransferLength);
++	if (fsg->data_size == 0)
++		fsg->data_dir = DATA_DIR_NONE;
++	fsg->lun = cbw->Lun;
++	fsg->tag = cbw->Tag;
++	return 0;
++}
++
++
++static int get_next_command(struct fsg_dev *fsg)
++{
++	struct fsg_buffhd	*bh;
++	int			rc = 0;
++
++	if (transport_is_bbb()) {
++
++		/* Wait for the next buffer to become available */
++		bh = fsg->next_buffhd_to_fill;
++		while (bh->state != BUF_STATE_EMPTY) {
++			rc = sleep_thread(fsg);
++			if (rc)
++				return rc;
++		}
++
++		/* Queue a request to read a Bulk-only CBW */
++		set_bulk_out_req_length(fsg, bh, US_BULK_CB_WRAP_LEN);
++		start_transfer(fsg, fsg->bulk_out, bh->outreq,
++				&bh->outreq_busy, &bh->state);
++
++		/* We will drain the buffer in software, which means we
++		 * can reuse it for the next filling.  No need to advance
++		 * next_buffhd_to_fill. */
++
++		/* Wait for the CBW to arrive */
++		while (bh->state != BUF_STATE_FULL) {
++			rc = sleep_thread(fsg);
++			if (rc)
++				return rc;
++		}
++		smp_rmb();
++		rc = received_cbw(fsg, bh);
++		bh->state = BUF_STATE_EMPTY;
++
++	} else {		// USB_PR_CB or USB_PR_CBI
++
++		/* Wait for the next command to arrive */
++		while (fsg->cbbuf_cmnd_size == 0) {
++			rc = sleep_thread(fsg);
++			if (rc)
++				return rc;
++		}
++
++		/* Is the previous status interrupt request still busy?
++		 * The host is allowed to skip reading the status,
++		 * so we must cancel it. */
++		if (fsg->intreq_busy)
++			usb_ep_dequeue(fsg->intr_in, fsg->intreq);
++
++		/* Copy the command and mark the buffer empty */
++		fsg->data_dir = DATA_DIR_UNKNOWN;
++		spin_lock_irq(&fsg->lock);
++		fsg->cmnd_size = fsg->cbbuf_cmnd_size;
++		memcpy(fsg->cmnd, fsg->cbbuf_cmnd, fsg->cmnd_size);
++		fsg->cbbuf_cmnd_size = 0;
++		spin_unlock_irq(&fsg->lock);
++
++		/* Use LUN from the command */
++		fsg->lun = fsg->cmnd[1] >> 5;
++	}
++
++	/* Update current lun */
++	if (fsg->lun >= 0 && fsg->lun < fsg->nluns)
++		fsg->curlun = &fsg->luns[fsg->lun];
++	else
++		fsg->curlun = NULL;
++
++	return rc;
++}
++
++
++/*-------------------------------------------------------------------------*/
++
++static int enable_endpoint(struct fsg_dev *fsg, struct usb_ep *ep,
++		const struct usb_endpoint_descriptor *d)
++{
++	int	rc;
++
++	ep->driver_data = fsg;
++	ep->desc = d;
++	rc = usb_ep_enable(ep);
++	if (rc)
++		ERROR(fsg, "can't enable %s, result %d\n", ep->name, rc);
++	return rc;
++}
++
++static int alloc_request(struct fsg_dev *fsg, struct usb_ep *ep,
++		struct usb_request **preq)
++{
++	*preq = usb_ep_alloc_request(ep, GFP_ATOMIC);
++	if (*preq)
++		return 0;
++	ERROR(fsg, "can't allocate request for %s\n", ep->name);
++	return -ENOMEM;
++}
++
++/*
++ * Reset interface setting and re-init endpoint state (toggle etc).
++ * Call with altsetting < 0 to disable the interface.  The only other
++ * available altsetting is 0, which enables the interface.
++ */
++static int do_set_interface(struct fsg_dev *fsg, int altsetting)
++{
++	int	rc = 0;
++	int	i;
++	const struct usb_endpoint_descriptor	*d;
++
++	if (fsg->running)
++		DBG(fsg, "reset interface\n");
++
++reset:
++	/* Deallocate the requests */
++	for (i = 0; i < fsg_num_buffers; ++i) {
++		struct fsg_buffhd *bh = &fsg->buffhds[i];
++
++		if (bh->inreq) {
++			usb_ep_free_request(fsg->bulk_in, bh->inreq);
++			bh->inreq = NULL;
++		}
++		if (bh->outreq) {
++			usb_ep_free_request(fsg->bulk_out, bh->outreq);
++			bh->outreq = NULL;
++		}
++	}
++	if (fsg->intreq) {
++		usb_ep_free_request(fsg->intr_in, fsg->intreq);
++		fsg->intreq = NULL;
++	}
++
++	/* Disable the endpoints */
++	if (fsg->bulk_in_enabled) {
++		usb_ep_disable(fsg->bulk_in);
++		fsg->bulk_in_enabled = 0;
++	}
++	if (fsg->bulk_out_enabled) {
++		usb_ep_disable(fsg->bulk_out);
++		fsg->bulk_out_enabled = 0;
++	}
++	if (fsg->intr_in_enabled) {
++		usb_ep_disable(fsg->intr_in);
++		fsg->intr_in_enabled = 0;
++	}
++
++	fsg->running = 0;
++	if (altsetting < 0 || rc != 0)
++		return rc;
++
++	DBG(fsg, "set interface %d\n", altsetting);
++
++	/* Enable the endpoints */
++	d = fsg_ep_desc(fsg->gadget,
++			&fsg_fs_bulk_in_desc, &fsg_hs_bulk_in_desc,
++			&fsg_ss_bulk_in_desc);
++	if ((rc = enable_endpoint(fsg, fsg->bulk_in, d)) != 0)
++		goto reset;
++	fsg->bulk_in_enabled = 1;
++
++	d = fsg_ep_desc(fsg->gadget,
++			&fsg_fs_bulk_out_desc, &fsg_hs_bulk_out_desc,
++			&fsg_ss_bulk_out_desc);
++	if ((rc = enable_endpoint(fsg, fsg->bulk_out, d)) != 0)
++		goto reset;
++	fsg->bulk_out_enabled = 1;
++	fsg->bulk_out_maxpacket = usb_endpoint_maxp(d);
++	clear_bit(IGNORE_BULK_OUT, &fsg->atomic_bitflags);
++
++	if (transport_is_cbi()) {
++		d = fsg_ep_desc(fsg->gadget,
++				&fsg_fs_intr_in_desc, &fsg_hs_intr_in_desc,
++				&fsg_ss_intr_in_desc);
++		if ((rc = enable_endpoint(fsg, fsg->intr_in, d)) != 0)
++			goto reset;
++		fsg->intr_in_enabled = 1;
++	}
++
++	/* Allocate the requests */
++	for (i = 0; i < fsg_num_buffers; ++i) {
++		struct fsg_buffhd	*bh = &fsg->buffhds[i];
++
++		if ((rc = alloc_request(fsg, fsg->bulk_in, &bh->inreq)) != 0)
++			goto reset;
++		if ((rc = alloc_request(fsg, fsg->bulk_out, &bh->outreq)) != 0)
++			goto reset;
++		bh->inreq->buf = bh->outreq->buf = bh->buf;
++		bh->inreq->context = bh->outreq->context = bh;
++		bh->inreq->complete = bulk_in_complete;
++		bh->outreq->complete = bulk_out_complete;
++	}
++	if (transport_is_cbi()) {
++		if ((rc = alloc_request(fsg, fsg->intr_in, &fsg->intreq)) != 0)
++			goto reset;
++		fsg->intreq->complete = intr_in_complete;
++	}
++
++	fsg->running = 1;
++	for (i = 0; i < fsg->nluns; ++i)
++		fsg->luns[i].unit_attention_data = SS_RESET_OCCURRED;
++	return rc;
++}
++
++
++/*
++ * Change our operational configuration.  This code must agree with the code
++ * that returns config descriptors, and with interface altsetting code.
++ *
++ * It's also responsible for power management interactions.  Some
++ * configurations might not work with our current power sources.
++ * For now we just assume the gadget is always self-powered.
++ */
++static int do_set_config(struct fsg_dev *fsg, u8 new_config)
++{
++	int	rc = 0;
++
++	/* Disable the single interface */
++	if (fsg->config != 0) {
++		DBG(fsg, "reset config\n");
++		fsg->config = 0;
++		rc = do_set_interface(fsg, -1);
++	}
++
++	/* Enable the interface */
++	if (new_config != 0) {
++		fsg->config = new_config;
++		if ((rc = do_set_interface(fsg, 0)) != 0)
++			fsg->config = 0;	// Reset on errors
++		else
++			INFO(fsg, "%s config #%d\n",
++			     usb_speed_string(fsg->gadget->speed),
++			     fsg->config);
++	}
++	return rc;
++}
++
++
++/*-------------------------------------------------------------------------*/
++
++static void handle_exception(struct fsg_dev *fsg)
++{
++	siginfo_t		info;
++	int			sig;
++	int			i;
++	int			num_active;
++	struct fsg_buffhd	*bh;
++	enum fsg_state		old_state;
++	u8			new_config;
++	struct fsg_lun		*curlun;
++	unsigned int		exception_req_tag;
++	int			rc;
++
++	/* Clear the existing signals.  Anything but SIGUSR1 is converted
++	 * into a high-priority EXIT exception. */
++	for (;;) {
++		sig = dequeue_signal_lock(current, &current->blocked, &info);
++		if (!sig)
++			break;
++		if (sig != SIGUSR1) {
++			if (fsg->state < FSG_STATE_EXIT)
++				DBG(fsg, "Main thread exiting on signal\n");
++			raise_exception(fsg, FSG_STATE_EXIT);
++		}
++	}
++
++	/* Cancel all the pending transfers */
++	if (fsg->intreq_busy)
++		usb_ep_dequeue(fsg->intr_in, fsg->intreq);
++	for (i = 0; i < fsg_num_buffers; ++i) {
++		bh = &fsg->buffhds[i];
++		if (bh->inreq_busy)
++			usb_ep_dequeue(fsg->bulk_in, bh->inreq);
++		if (bh->outreq_busy)
++			usb_ep_dequeue(fsg->bulk_out, bh->outreq);
++	}
++
++	/* Wait until everything is idle */
++	for (;;) {
++		num_active = fsg->intreq_busy;
++		for (i = 0; i < fsg_num_buffers; ++i) {
++			bh = &fsg->buffhds[i];
++			num_active += bh->inreq_busy + bh->outreq_busy;
++		}
++		if (num_active == 0)
++			break;
++		if (sleep_thread(fsg))
++			return;
++	}
++
++	/* Clear out the controller's fifos */
++	if (fsg->bulk_in_enabled)
++		usb_ep_fifo_flush(fsg->bulk_in);
++	if (fsg->bulk_out_enabled)
++		usb_ep_fifo_flush(fsg->bulk_out);
++	if (fsg->intr_in_enabled)
++		usb_ep_fifo_flush(fsg->intr_in);
++
++	/* Reset the I/O buffer states and pointers, the SCSI
++	 * state, and the exception.  Then invoke the handler. */
++	spin_lock_irq(&fsg->lock);
++
++	for (i = 0; i < fsg_num_buffers; ++i) {
++		bh = &fsg->buffhds[i];
++		bh->state = BUF_STATE_EMPTY;
++	}
++	fsg->next_buffhd_to_fill = fsg->next_buffhd_to_drain =
++			&fsg->buffhds[0];
++
++	exception_req_tag = fsg->exception_req_tag;
++	new_config = fsg->new_config;
++	old_state = fsg->state;
++
++	if (old_state == FSG_STATE_ABORT_BULK_OUT)
++		fsg->state = FSG_STATE_STATUS_PHASE;
++	else {
++		for (i = 0; i < fsg->nluns; ++i) {
++			curlun = &fsg->luns[i];
++			curlun->prevent_medium_removal = 0;
++			curlun->sense_data = curlun->unit_attention_data =
++					SS_NO_SENSE;
++			curlun->sense_data_info = 0;
++			curlun->info_valid = 0;
++		}
++		fsg->state = FSG_STATE_IDLE;
++	}
++	spin_unlock_irq(&fsg->lock);
++
++	/* Carry out any extra actions required for the exception */
++	switch (old_state) {
++	default:
++		break;
++
++	case FSG_STATE_ABORT_BULK_OUT:
++		send_status(fsg);
++		spin_lock_irq(&fsg->lock);
++		if (fsg->state == FSG_STATE_STATUS_PHASE)
++			fsg->state = FSG_STATE_IDLE;
++		spin_unlock_irq(&fsg->lock);
++		break;
++
++	case FSG_STATE_RESET:
++		/* In case we were forced against our will to halt a
++		 * bulk endpoint, clear the halt now.  (The SuperH UDC
++		 * requires this.) */
++		if (test_and_clear_bit(IGNORE_BULK_OUT, &fsg->atomic_bitflags))
++			usb_ep_clear_halt(fsg->bulk_in);
++
++		if (transport_is_bbb()) {
++			if (fsg->ep0_req_tag == exception_req_tag)
++				ep0_queue(fsg);	// Complete the status stage
++
++		} else if (transport_is_cbi())
++			send_status(fsg);	// Status by interrupt pipe
++
++		/* Technically this should go here, but it would only be
++		 * a waste of time.  Ditto for the INTERFACE_CHANGE and
++		 * CONFIG_CHANGE cases. */
++		// for (i = 0; i < fsg->nluns; ++i)
++		//	fsg->luns[i].unit_attention_data = SS_RESET_OCCURRED;
++		break;
++
++	case FSG_STATE_INTERFACE_CHANGE:
++		rc = do_set_interface(fsg, 0);
++		if (fsg->ep0_req_tag != exception_req_tag)
++			break;
++		if (rc != 0)			// STALL on errors
++			fsg_set_halt(fsg, fsg->ep0);
++		else				// Complete the status stage
++			ep0_queue(fsg);
++		break;
++
++	case FSG_STATE_CONFIG_CHANGE:
++		rc = do_set_config(fsg, new_config);
++		if (fsg->ep0_req_tag != exception_req_tag)
++			break;
++		if (rc != 0)			// STALL on errors
++			fsg_set_halt(fsg, fsg->ep0);
++		else				// Complete the status stage
++			ep0_queue(fsg);
++		break;
++
++	case FSG_STATE_DISCONNECT:
++		for (i = 0; i < fsg->nluns; ++i)
++			fsg_lun_fsync_sub(fsg->luns + i);
++		do_set_config(fsg, 0);		// Unconfigured state
++		break;
++
++	case FSG_STATE_EXIT:
++	case FSG_STATE_TERMINATED:
++		do_set_config(fsg, 0);			// Free resources
++		spin_lock_irq(&fsg->lock);
++		fsg->state = FSG_STATE_TERMINATED;	// Stop the thread
++		spin_unlock_irq(&fsg->lock);
++		break;
++	}
++}
++
++
++/*-------------------------------------------------------------------------*/
++
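++/* The loop below runs one command cycle per iteration: get_next_command()
++ * fetches the next CBW (or CB/CBI command), do_scsi_command() executes it
++ * and fills the reply buffer, finish_reply() completes the data phase and
++ * send_status() returns the status, with the state moving through
++ * DATA_PHASE and STATUS_PHASE back to IDLE unless an exception intervenes. */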
++static int fsg_main_thread(void *fsg_)
++{
++	struct fsg_dev		*fsg = fsg_;
++
++	/* Allow the thread to be killed by a signal, but set the signal mask
++	 * to block everything but INT, TERM, KILL, and USR1. */
++	allow_signal(SIGINT);
++	allow_signal(SIGTERM);
++	allow_signal(SIGKILL);
++	allow_signal(SIGUSR1);
++
++	/* Allow the thread to be frozen */
++	set_freezable();
++
++	/* Arrange for userspace references to be interpreted as kernel
++	 * pointers.  That way we can pass a kernel pointer to a routine
++	 * that expects a __user pointer and it will work okay. */
++	set_fs(get_ds());
++
++	/* The main loop */
++	while (fsg->state != FSG_STATE_TERMINATED) {
++		if (exception_in_progress(fsg) || signal_pending(current)) {
++			handle_exception(fsg);
++			continue;
++		}
++
++		if (!fsg->running) {
++			sleep_thread(fsg);
++			continue;
++		}
++
++		if (get_next_command(fsg))
++			continue;
++
++		spin_lock_irq(&fsg->lock);
++		if (!exception_in_progress(fsg))
++			fsg->state = FSG_STATE_DATA_PHASE;
++		spin_unlock_irq(&fsg->lock);
++
++		if (do_scsi_command(fsg) || finish_reply(fsg))
++			continue;
++
++		spin_lock_irq(&fsg->lock);
++		if (!exception_in_progress(fsg))
++			fsg->state = FSG_STATE_STATUS_PHASE;
++		spin_unlock_irq(&fsg->lock);
++
++		if (send_status(fsg))
++			continue;
++
++		spin_lock_irq(&fsg->lock);
++		if (!exception_in_progress(fsg))
++			fsg->state = FSG_STATE_IDLE;
++		spin_unlock_irq(&fsg->lock);
++	}
++
++	spin_lock_irq(&fsg->lock);
++	fsg->thread_task = NULL;
++	spin_unlock_irq(&fsg->lock);
++
++	/* If we are exiting because of a signal, unregister the
++	 * gadget driver. */
++	if (test_and_clear_bit(REGISTERED, &fsg->atomic_bitflags))
++		usb_gadget_unregister_driver(&fsg_driver);
++
++	/* Let the unbind and cleanup routines know the thread has exited */
++	complete_and_exit(&fsg->thread_notifier, 0);
++}
++
++
++/*-------------------------------------------------------------------------*/
++
++
++/* The write permissions and store_xxx pointers are set in fsg_bind() */
++static DEVICE_ATTR(ro, 0444, fsg_show_ro, NULL);
++static DEVICE_ATTR(nofua, 0644, fsg_show_nofua, NULL);
++static DEVICE_ATTR(file, 0444, fsg_show_file, NULL);
++
++
++/*-------------------------------------------------------------------------*/
++
++static void fsg_release(struct kref *ref)
++{
++	struct fsg_dev	*fsg = container_of(ref, struct fsg_dev, ref);
++
++	kfree(fsg->luns);
++	kfree(fsg);
++}
++
++static void lun_release(struct device *dev)
++{
++	struct rw_semaphore	*filesem = dev_get_drvdata(dev);
++	struct fsg_dev		*fsg =
++		container_of(filesem, struct fsg_dev, filesem);
++
++	kref_put(&fsg->ref, fsg_release);
++}
++
++static void /* __init_or_exit */ fsg_unbind(struct usb_gadget *gadget)
++{
++	struct fsg_dev		*fsg = get_gadget_data(gadget);
++	int			i;
++	struct fsg_lun		*curlun;
++	struct usb_request	*req = fsg->ep0req;
++
++	DBG(fsg, "unbind\n");
++	clear_bit(REGISTERED, &fsg->atomic_bitflags);
++
++	/* If the thread isn't already dead, tell it to exit now */
++	if (fsg->state != FSG_STATE_TERMINATED) {
++		raise_exception(fsg, FSG_STATE_EXIT);
++		wait_for_completion(&fsg->thread_notifier);
++
++		/* The cleanup routine waits for this completion also */
++		complete(&fsg->thread_notifier);
++	}
++
++	/* Unregister the sysfs attribute files and the LUNs */
++	for (i = 0; i < fsg->nluns; ++i) {
++		curlun = &fsg->luns[i];
++		if (curlun->registered) {
++			device_remove_file(&curlun->dev, &dev_attr_nofua);
++			device_remove_file(&curlun->dev, &dev_attr_ro);
++			device_remove_file(&curlun->dev, &dev_attr_file);
++			fsg_lun_close(curlun);
++			device_unregister(&curlun->dev);
++			curlun->registered = 0;
++		}
++	}
++
++	/* Free the data buffers */
++	for (i = 0; i < fsg_num_buffers; ++i)
++		kfree(fsg->buffhds[i].buf);
++
++	/* Free the request and buffer for endpoint 0 */
++	if (req) {
++		kfree(req->buf);
++		usb_ep_free_request(fsg->ep0, req);
++	}
++
++	set_gadget_data(gadget, NULL);
++}
++
++
++static int __init check_parameters(struct fsg_dev *fsg)
++{
++	int	prot;
++	int	gcnum;
++
++	/* Store the default values */
++	mod_data.transport_type = USB_PR_BULK;
++	mod_data.transport_name = "Bulk-only";
++	mod_data.protocol_type = USB_SC_SCSI;
++	mod_data.protocol_name = "Transparent SCSI";
++
++	/* Some peripheral controllers are known not to be able to
++	 * halt bulk endpoints correctly.  If one of them is present,
++	 * disable stalls.
++	 */
++	if (gadget_is_at91(fsg->gadget))
++		mod_data.can_stall = 0;
++
++	if (mod_data.release == 0xffff) {	// Parameter wasn't set
++		gcnum = usb_gadget_controller_number(fsg->gadget);
++		if (gcnum >= 0)
++			mod_data.release = 0x0300 + gcnum;
++		else {
++			WARNING(fsg, "controller '%s' not recognized\n",
++				fsg->gadget->name);
++			mod_data.release = 0x0399;
++		}
++	}
++
++	prot = simple_strtol(mod_data.protocol_parm, NULL, 0);
++
++#ifdef CONFIG_USB_FILE_STORAGE_TEST
++	if (strnicmp(mod_data.transport_parm, "BBB", 10) == 0) {
++		;		// Use default setting
++	} else if (strnicmp(mod_data.transport_parm, "CB", 10) == 0) {
++		mod_data.transport_type = USB_PR_CB;
++		mod_data.transport_name = "Control-Bulk";
++	} else if (strnicmp(mod_data.transport_parm, "CBI", 10) == 0) {
++		mod_data.transport_type = USB_PR_CBI;
++		mod_data.transport_name = "Control-Bulk-Interrupt";
++	} else {
++		ERROR(fsg, "invalid transport: %s\n", mod_data.transport_parm);
++		return -EINVAL;
++	}
++
++	if (strnicmp(mod_data.protocol_parm, "SCSI", 10) == 0 ||
++			prot == USB_SC_SCSI) {
++		;		// Use default setting
++	} else if (strnicmp(mod_data.protocol_parm, "RBC", 10) == 0 ||
++			prot == USB_SC_RBC) {
++		mod_data.protocol_type = USB_SC_RBC;
++		mod_data.protocol_name = "RBC";
++	} else if (strnicmp(mod_data.protocol_parm, "8020", 4) == 0 ||
++			strnicmp(mod_data.protocol_parm, "ATAPI", 10) == 0 ||
++			prot == USB_SC_8020) {
++		mod_data.protocol_type = USB_SC_8020;
++		mod_data.protocol_name = "8020i (ATAPI)";
++	} else if (strnicmp(mod_data.protocol_parm, "QIC", 3) == 0 ||
++			prot == USB_SC_QIC) {
++		mod_data.protocol_type = USB_SC_QIC;
++		mod_data.protocol_name = "QIC-157";
++	} else if (strnicmp(mod_data.protocol_parm, "UFI", 10) == 0 ||
++			prot == USB_SC_UFI) {
++		mod_data.protocol_type = USB_SC_UFI;
++		mod_data.protocol_name = "UFI";
++	} else if (strnicmp(mod_data.protocol_parm, "8070", 4) == 0 ||
++			prot == USB_SC_8070) {
++		mod_data.protocol_type = USB_SC_8070;
++		mod_data.protocol_name = "8070i";
++	} else {
++		ERROR(fsg, "invalid protocol: %s\n", mod_data.protocol_parm);
++		return -EINVAL;
++	}
++
++	mod_data.buflen &= PAGE_CACHE_MASK;
++	if (mod_data.buflen <= 0) {
++		ERROR(fsg, "invalid buflen\n");
++		return -ETOOSMALL;
++	}
++
++#endif /* CONFIG_USB_FILE_STORAGE_TEST */
++
++	/* Serial string handling.
++	 * On a real device, the serial string would be loaded
++	 * from permanent storage. */
++	if (mod_data.serial) {
++		const char *ch;
++		unsigned len = 0;
++
++		/* Sanity check:
++		 * The CB[I] specification limits the serial string to
++		 * 12 uppercase hexadecimal characters.
++		 * BBB needs at least 12 uppercase hexadecimal characters,
++		 * with a maximum of 126. */
++		for (ch = mod_data.serial; *ch; ++ch) {
++			++len;
++			if ((*ch < '0' || *ch > '9') &&
++			    (*ch < 'A' || *ch > 'F')) { /* not uppercase hex */
++				WARNING(fsg,
++					"Invalid serial string character: %c\n",
++					*ch);
++				goto no_serial;
++			}
++		}
++		if (len > 126 ||
++		    (mod_data.transport_type == USB_PR_BULK && len < 12) ||
++		    (mod_data.transport_type != USB_PR_BULK && len > 12)) {
++			WARNING(fsg, "Invalid serial string length!\n");
++			goto no_serial;
++		}
++		fsg_strings[FSG_STRING_SERIAL - 1].s = mod_data.serial;
++	} else {
++		WARNING(fsg, "No serial-number string provided!\n");
++ no_serial:
++		device_desc.iSerialNumber = 0;
++	}
++
++	return 0;
++}
++
++
++static int __init fsg_bind(struct usb_gadget *gadget)
++{
++	struct fsg_dev		*fsg = the_fsg;
++	int			rc;
++	int			i;
++	struct fsg_lun		*curlun;
++	struct usb_ep		*ep;
++	struct usb_request	*req;
++	char			*pathbuf, *p;
++
++	fsg->gadget = gadget;
++	set_gadget_data(gadget, fsg);
++	fsg->ep0 = gadget->ep0;
++	fsg->ep0->driver_data = fsg;
++
++	if ((rc = check_parameters(fsg)) != 0)
++		goto out;
++
++	if (mod_data.removable) {	// Enable the store_xxx attributes
++		dev_attr_file.attr.mode = 0644;
++		dev_attr_file.store = fsg_store_file;
++		if (!mod_data.cdrom) {
++			dev_attr_ro.attr.mode = 0644;
++			dev_attr_ro.store = fsg_store_ro;
++		}
++	}
++
++	/* Only for removable media? */
++	dev_attr_nofua.attr.mode = 0644;
++	dev_attr_nofua.store = fsg_store_nofua;
++
++	/* Find out how many LUNs there should be */
++	i = mod_data.nluns;
++	if (i == 0)
++		i = max(mod_data.num_filenames, 1u);
++	if (i > FSG_MAX_LUNS) {
++		ERROR(fsg, "invalid number of LUNs: %d\n", i);
++		rc = -EINVAL;
++		goto out;
++	}
++
++	/* Create the LUNs, open their backing files, and register the
++	 * LUN devices in sysfs. */
++	fsg->luns = kzalloc(i * sizeof(struct fsg_lun), GFP_KERNEL);
++	if (!fsg->luns) {
++		rc = -ENOMEM;
++		goto out;
++	}
++	fsg->nluns = i;
++
++	for (i = 0; i < fsg->nluns; ++i) {
++		curlun = &fsg->luns[i];
++		curlun->cdrom = !!mod_data.cdrom;
++		curlun->ro = mod_data.cdrom || mod_data.ro[i];
++		curlun->initially_ro = curlun->ro;
++		curlun->removable = mod_data.removable;
++		curlun->nofua = mod_data.nofua[i];
++		curlun->dev.release = lun_release;
++		curlun->dev.parent = &gadget->dev;
++		curlun->dev.driver = &fsg_driver.driver;
++		dev_set_drvdata(&curlun->dev, &fsg->filesem);
++		dev_set_name(&curlun->dev,"%s-lun%d",
++			     dev_name(&gadget->dev), i);
++
++		kref_get(&fsg->ref);
++		rc = device_register(&curlun->dev);
++		if (rc) {
++			INFO(fsg, "failed to register LUN%d: %d\n", i, rc);
++			put_device(&curlun->dev);
++			goto out;
++		}
++		curlun->registered = 1;
++
++		rc = device_create_file(&curlun->dev, &dev_attr_ro);
++		if (rc)
++			goto out;
++		rc = device_create_file(&curlun->dev, &dev_attr_nofua);
++		if (rc)
++			goto out;
++		rc = device_create_file(&curlun->dev, &dev_attr_file);
++		if (rc)
++			goto out;
++
++		if (mod_data.file[i] && *mod_data.file[i]) {
++			rc = fsg_lun_open(curlun, mod_data.file[i]);
++			if (rc)
++				goto out;
++		} else if (!mod_data.removable) {
++			ERROR(fsg, "no file given for LUN%d\n", i);
++			rc = -EINVAL;
++			goto out;
++		}
++	}
++
++	/* Find all the endpoints we will use */
++	usb_ep_autoconfig_reset(gadget);
++	ep = usb_ep_autoconfig(gadget, &fsg_fs_bulk_in_desc);
++	if (!ep)
++		goto autoconf_fail;
++	ep->driver_data = fsg;		// claim the endpoint
++	fsg->bulk_in = ep;
++
++	ep = usb_ep_autoconfig(gadget, &fsg_fs_bulk_out_desc);
++	if (!ep)
++		goto autoconf_fail;
++	ep->driver_data = fsg;		// claim the endpoint
++	fsg->bulk_out = ep;
++
++	if (transport_is_cbi()) {
++		ep = usb_ep_autoconfig(gadget, &fsg_fs_intr_in_desc);
++		if (!ep)
++			goto autoconf_fail;
++		ep->driver_data = fsg;		// claim the endpoint
++		fsg->intr_in = ep;
++	}
++
++	/* Fix up the descriptors */
++	device_desc.idVendor = cpu_to_le16(mod_data.vendor);
++	device_desc.idProduct = cpu_to_le16(mod_data.product);
++	device_desc.bcdDevice = cpu_to_le16(mod_data.release);
++
++	i = (transport_is_cbi() ? 3 : 2);	// Number of endpoints
++	fsg_intf_desc.bNumEndpoints = i;
++	fsg_intf_desc.bInterfaceSubClass = mod_data.protocol_type;
++	fsg_intf_desc.bInterfaceProtocol = mod_data.transport_type;
++	fsg_fs_function[i + FSG_FS_FUNCTION_PRE_EP_ENTRIES] = NULL;
++
++	if (gadget_is_dualspeed(gadget)) {
++		fsg_hs_function[i + FSG_HS_FUNCTION_PRE_EP_ENTRIES] = NULL;
++
++		/* Assume endpoint addresses are the same for both speeds */
++		fsg_hs_bulk_in_desc.bEndpointAddress =
++			fsg_fs_bulk_in_desc.bEndpointAddress;
++		fsg_hs_bulk_out_desc.bEndpointAddress =
++			fsg_fs_bulk_out_desc.bEndpointAddress;
++		fsg_hs_intr_in_desc.bEndpointAddress =
++			fsg_fs_intr_in_desc.bEndpointAddress;
++	}
++
++	if (gadget_is_superspeed(gadget)) {
++		unsigned		max_burst;
++
++		fsg_ss_function[i + FSG_SS_FUNCTION_PRE_EP_ENTRIES] = NULL;
++
++		/* Calculate bMaxBurst, we know packet size is 1024 */
++		max_burst = min_t(unsigned, mod_data.buflen / 1024, 15);
++
++		/* Assume endpoint addresses are the same for both speeds */
++		fsg_ss_bulk_in_desc.bEndpointAddress =
++			fsg_fs_bulk_in_desc.bEndpointAddress;
++		fsg_ss_bulk_in_comp_desc.bMaxBurst = max_burst;
++
++		fsg_ss_bulk_out_desc.bEndpointAddress =
++			fsg_fs_bulk_out_desc.bEndpointAddress;
++		fsg_ss_bulk_out_comp_desc.bMaxBurst = max_burst;
++	}
++
++	if (gadget_is_otg(gadget))
++		fsg_otg_desc.bmAttributes |= USB_OTG_HNP;
++
++	rc = -ENOMEM;
++
++	/* Allocate the request and buffer for endpoint 0 */
++	fsg->ep0req = req = usb_ep_alloc_request(fsg->ep0, GFP_KERNEL);
++	if (!req)
++		goto out;
++	req->buf = kmalloc(EP0_BUFSIZE, GFP_KERNEL);
++	if (!req->buf)
++		goto out;
++	req->complete = ep0_complete;
++
++	/* Allocate the data buffers */
++	for (i = 0; i < fsg_num_buffers; ++i) {
++		struct fsg_buffhd	*bh = &fsg->buffhds[i];
++
++		/* Allocate for the bulk-in endpoint.  We assume that
++		 * the buffer will also work with the bulk-out (and
++		 * interrupt-in) endpoint. */
++		bh->buf = kmalloc(mod_data.buflen, GFP_KERNEL);
++		if (!bh->buf)
++			goto out;
++		bh->next = bh + 1;
++	}
++	fsg->buffhds[fsg_num_buffers - 1].next = &fsg->buffhds[0];
++
++	/* This should reflect the actual gadget power source */
++	usb_gadget_set_selfpowered(gadget);
++
++	snprintf(fsg_string_manufacturer, sizeof fsg_string_manufacturer,
++			"%s %s with %s",
++			init_utsname()->sysname, init_utsname()->release,
++			gadget->name);
++
++	fsg->thread_task = kthread_create(fsg_main_thread, fsg,
++			"file-storage-gadget");
++	if (IS_ERR(fsg->thread_task)) {
++		rc = PTR_ERR(fsg->thread_task);
++		goto out;
++	}
++
++	INFO(fsg, DRIVER_DESC ", version: " DRIVER_VERSION "\n");
++	INFO(fsg, "NOTE: This driver is deprecated.  "
++			"Consider using g_mass_storage instead.\n");
++	INFO(fsg, "Number of LUNs=%d\n", fsg->nluns);
++
++	pathbuf = kmalloc(PATH_MAX, GFP_KERNEL);
++	for (i = 0; i < fsg->nluns; ++i) {
++		curlun = &fsg->luns[i];
++		if (fsg_lun_is_open(curlun)) {
++			p = NULL;
++			if (pathbuf) {
++				p = d_path(&curlun->filp->f_path,
++					   pathbuf, PATH_MAX);
++				if (IS_ERR(p))
++					p = NULL;
++			}
++			LINFO(curlun, "ro=%d, nofua=%d, file: %s\n",
++			      curlun->ro, curlun->nofua, (p ? p : "(error)"));
++		}
++	}
++	kfree(pathbuf);
++
++	DBG(fsg, "transport=%s (x%02x)\n",
++			mod_data.transport_name, mod_data.transport_type);
++	DBG(fsg, "protocol=%s (x%02x)\n",
++			mod_data.protocol_name, mod_data.protocol_type);
++	DBG(fsg, "VendorID=x%04x, ProductID=x%04x, Release=x%04x\n",
++			mod_data.vendor, mod_data.product, mod_data.release);
++	DBG(fsg, "removable=%d, stall=%d, cdrom=%d, buflen=%u\n",
++			mod_data.removable, mod_data.can_stall,
++			mod_data.cdrom, mod_data.buflen);
++	DBG(fsg, "I/O thread pid: %d\n", task_pid_nr(fsg->thread_task));
++
++	set_bit(REGISTERED, &fsg->atomic_bitflags);
++
++	/* Tell the thread to start working */
++	wake_up_process(fsg->thread_task);
++	return 0;
++
++autoconf_fail:
++	ERROR(fsg, "unable to autoconfigure all endpoints\n");
++	rc = -ENOTSUPP;
++
++out:
++	fsg->state = FSG_STATE_TERMINATED;	// The thread is dead
++	fsg_unbind(gadget);
++	complete(&fsg->thread_notifier);
++	return rc;
++}
++
++
++/*-------------------------------------------------------------------------*/
++
++static void fsg_suspend(struct usb_gadget *gadget)
++{
++	struct fsg_dev		*fsg = get_gadget_data(gadget);
++
++	DBG(fsg, "suspend\n");
++	set_bit(SUSPENDED, &fsg->atomic_bitflags);
++}
++
++static void fsg_resume(struct usb_gadget *gadget)
++{
++	struct fsg_dev		*fsg = get_gadget_data(gadget);
++
++	DBG(fsg, "resume\n");
++	clear_bit(SUSPENDED, &fsg->atomic_bitflags);
++}
++
++
++/*-------------------------------------------------------------------------*/
++
++static struct usb_gadget_driver		fsg_driver = {
++	.max_speed	= USB_SPEED_SUPER,
++	.function	= (char *) fsg_string_product,
++	.unbind		= fsg_unbind,
++	.disconnect	= fsg_disconnect,
++	.setup		= fsg_setup,
++	.suspend	= fsg_suspend,
++	.resume		= fsg_resume,
++
++	.driver		= {
++		.name		= DRIVER_NAME,
++		.owner		= THIS_MODULE,
++		// .release = ...
++		// .suspend = ...
++		// .resume = ...
++	},
++};
++
++
++static int __init fsg_alloc(void)
++{
++	struct fsg_dev		*fsg;
++
++	fsg = kzalloc(sizeof *fsg +
++		      fsg_num_buffers * sizeof *(fsg->buffhds), GFP_KERNEL);
++
++	if (!fsg)
++		return -ENOMEM;
++	spin_lock_init(&fsg->lock);
++	init_rwsem(&fsg->filesem);
++	kref_init(&fsg->ref);
++	init_completion(&fsg->thread_notifier);
++
++	the_fsg = fsg;
++	return 0;
++}
++
++
++static int __init fsg_init(void)
++{
++	int		rc;
++	struct fsg_dev	*fsg;
++
++	rc = fsg_num_buffers_validate();
++	if (rc != 0)
++		return rc;
++
++	if ((rc = fsg_alloc()) != 0)
++		return rc;
++	fsg = the_fsg;
++	if ((rc = usb_gadget_probe_driver(&fsg_driver, fsg_bind)) != 0)
++		kref_put(&fsg->ref, fsg_release);
++	return rc;
++}
++module_init(fsg_init);
++
++
++static void __exit fsg_cleanup(void)
++{
++	struct fsg_dev	*fsg = the_fsg;
++
++	/* Unregister the driver iff the thread hasn't already done so */
++	if (test_and_clear_bit(REGISTERED, &fsg->atomic_bitflags))
++		usb_gadget_unregister_driver(&fsg_driver);
++
++	/* Wait for the thread to finish up */
++	wait_for_completion(&fsg->thread_notifier);
++
++	kref_put(&fsg->ref, fsg_release);
++}
++module_exit(fsg_cleanup);
+--- a/drivers/usb/host/Kconfig
++++ b/drivers/usb/host/Kconfig
+@@ -735,6 +735,19 @@ config USB_HWA_HCD
+ 	  To compile this driver a module, choose M here: the module
+ 	  will be called "hwa-hc".
+ 
++config USB_DWCOTG
++	tristate "Synopsys DWC host support"
++	depends on USB
++	help
++	  The Synopsys DWC controller is a dual-role
++	  host/peripheral/OTG ("On-The-Go") USB controller.
++
++	  Enable this option to support this IP in host controller mode.
++	  If unsure, say N.
++
++	  To compile this driver as a module, choose M here: the
++	  modules built will be called dwc_otg and dwc_common_port.
++
+ config USB_IMX21_HCD
+        tristate "i.MX21 HCD support"
+        depends on ARM && ARCH_MXC
+--- a/drivers/usb/host/Makefile
++++ b/drivers/usb/host/Makefile
+@@ -69,6 +69,8 @@ obj-$(CONFIG_USB_SL811_CS)	+= sl811_cs.o
+ obj-$(CONFIG_USB_U132_HCD)	+= u132-hcd.o
+ obj-$(CONFIG_USB_R8A66597_HCD)	+= r8a66597-hcd.o
+ obj-$(CONFIG_USB_HWA_HCD)	+= hwa-hc.o
++
++obj-$(CONFIG_USB_DWCOTG)        += dwc_otg/ dwc_common_port/
+ obj-$(CONFIG_USB_IMX21_HCD)	+= imx21-hcd.o
+ obj-$(CONFIG_USB_FSL_MPH_DR_OF)	+= fsl-mph-dr-of.o
+ obj-$(CONFIG_USB_EHCI_FSL)	+= ehci-fsl.o
+--- /dev/null
++++ b/drivers/usb/host/dwc_common_port/Makefile
+@@ -0,0 +1,58 @@
++#
++# Makefile for DWC_common library
++#
++
++ifneq ($(KERNELRELEASE),)
++
++ccflags-y	+= -DDWC_LINUX
++#ccflags-y	+= -DDEBUG
++#ccflags-y	+= -DDWC_DEBUG_REGS
++#ccflags-y	+= -DDWC_DEBUG_MEMORY
++
++ccflags-y	+= -DDWC_LIBMODULE
++ccflags-y	+= -DDWC_CCLIB
++#ccflags-y	+= -DDWC_CRYPTOLIB
++ccflags-y	+= -DDWC_NOTIFYLIB
++ccflags-y	+= -DDWC_UTFLIB
++
++obj-$(CONFIG_USB_DWCOTG)	+= dwc_common_port_lib.o
++dwc_common_port_lib-objs := dwc_cc.o dwc_modpow.o dwc_dh.o \
++			    dwc_crypto.o dwc_notifier.o \
++			    dwc_common_linux.o dwc_mem.o
++
++kernrelwd := $(subst ., ,$(KERNELRELEASE))
++kernrel3 := $(word 1,$(kernrelwd)).$(word 2,$(kernrelwd)).$(word 3,$(kernrelwd))
++
++ifneq ($(kernrel3),2.6.20)
++# grayg - I only know that we use ccflags-y in 2.6.31 actually
++ccflags-y += $(CPPFLAGS)
++endif
++
++else
++
++#ifeq ($(KDIR),)
++#$(error Must give "KDIR=/path/to/kernel/source" on command line or in environment)
++#endif
++
++ifeq ($(ARCH),)
++$(error Must give "ARCH=<arch>" on command line or in environment. Also, if \
++ cross-compiling, must give "CROSS_COMPILE=/path/to/compiler/plus/tool-prefix-")
++endif
++
++ifeq ($(DOXYGEN),)
++DOXYGEN		:= doxygen
++endif
++
++default:
++	$(MAKE) -C$(KDIR) M=$(PWD) ARCH=$(ARCH) CROSS_COMPILE=$(CROSS_COMPILE) modules
++
++docs:	$(wildcard *.[hc]) doc/doxygen.cfg
++	$(DOXYGEN) doc/doxygen.cfg
++
++tags:	$(wildcard *.[hc])
++	$(CTAGS) -e $(wildcard *.[hc]) $(wildcard linux/*.[hc]) $(wildcard $(KDIR)/include/linux/usb*.h)
++
++endif
++
++clean:
++	rm -rf *.o *.ko .*.cmd *.mod.c .*.o.d .*.o.tmp modules.order Module.markers Module.symvers .tmp_versions/
+--- /dev/null
++++ b/drivers/usb/host/dwc_common_port/Makefile.fbsd
+@@ -0,0 +1,17 @@
++CFLAGS	+= -I/sys/i386/compile/GENERIC -I/sys/i386/include -I/usr/include
++CFLAGS	+= -DDWC_FREEBSD
++CFLAGS	+= -DDEBUG
++#CFLAGS	+= -DDWC_DEBUG_REGS
++#CFLAGS	+= -DDWC_DEBUG_MEMORY
++
++#CFLAGS	+= -DDWC_LIBMODULE
++#CFLAGS	+= -DDWC_CCLIB
++#CFLAGS	+= -DDWC_CRYPTOLIB
++#CFLAGS	+= -DDWC_NOTIFYLIB
++#CFLAGS	+= -DDWC_UTFLIB
++
++KMOD = dwc_common_port_lib
++SRCS = dwc_cc.c dwc_modpow.c dwc_dh.c dwc_crypto.c dwc_notifier.c \
++       dwc_common_fbsd.c dwc_mem.c
++
++.include <bsd.kmod.mk>
+--- /dev/null
++++ b/drivers/usb/host/dwc_common_port/Makefile.linux
+@@ -0,0 +1,49 @@
++#
++# Makefile for DWC_common library
++#
++ifneq ($(KERNELRELEASE),)
++
++ccflags-y	+= -DDWC_LINUX
++#ccflags-y	+= -DDEBUG
++#ccflags-y	+= -DDWC_DEBUG_REGS
++#ccflags-y	+= -DDWC_DEBUG_MEMORY
++
++ccflags-y	+= -DDWC_LIBMODULE
++ccflags-y	+= -DDWC_CCLIB
++ccflags-y	+= -DDWC_CRYPTOLIB
++ccflags-y	+= -DDWC_NOTIFYLIB
++ccflags-y	+= -DDWC_UTFLIB
++
++obj-m			 := dwc_common_port_lib.o
++dwc_common_port_lib-objs := dwc_cc.o dwc_modpow.o dwc_dh.o \
++			    dwc_crypto.o dwc_notifier.o \
++			    dwc_common_linux.o dwc_mem.o
++
++else
++
++ifeq ($(KDIR),)
++$(error Must give "KDIR=/path/to/kernel/source" on command line or in environment)
++endif
++
++ifeq ($(ARCH),)
++$(error Must give "ARCH=<arch>" on command line or in environment. Also, if \
++ cross-compiling, must give "CROSS_COMPILE=/path/to/compiler/plus/tool-prefix-")
++endif
++
++ifeq ($(DOXYGEN),)
++DOXYGEN		:= doxygen
++endif
++
++default:
++	$(MAKE) -C$(KDIR) M=$(PWD) ARCH=$(ARCH) CROSS_COMPILE=$(CROSS_COMPILE) modules
++
++docs:	$(wildcard *.[hc]) doc/doxygen.cfg
++	$(DOXYGEN) doc/doxygen.cfg
++
++tags:	$(wildcard *.[hc])
++	$(CTAGS) -e $(wildcard *.[hc]) $(wildcard linux/*.[hc]) $(wildcard $(KDIR)/include/linux/usb*.h)
++
++endif
++
++clean:
++	rm -rf *.o *.ko .*.cmd *.mod.c .*.o.d .*.o.tmp modules.order Module.markers Module.symvers .tmp_versions/
+--- /dev/null
++++ b/drivers/usb/host/dwc_common_port/changes.txt
+@@ -0,0 +1,174 @@
++
++dwc_read_reg32() and friends now take an additional parameter, a pointer to an
++IO context struct. The IO context struct should live in an os-dependent struct
++in your driver. As an example, the dwc_usb3 driver has an os-dependent struct
++named 'os_dep' embedded in the main device struct. So there these calls look
++like this:
++
++	dwc_read_reg32(&usb3_dev->os_dep.ioctx, &pcd->dev_global_regs->dcfg);
++
++	dwc_write_reg32(&usb3_dev->os_dep.ioctx,
++			&pcd->dev_global_regs->dcfg, 0);
++
++Note that for the existing Linux driver ports, it is not necessary to actually
++define the 'ioctx' member in the os-dependent struct. Since Linux does not
++require an IO context, its macros for dwc_read_reg32() and friends do not
++use the context pointer, so it is optimized away by the compiler. But it is
++necessary to add the pointer parameter to all of the call sites, to be ready
++for any future ports (such as FreeBSD) which do require an IO context.
++
++
++Similarly, dwc_alloc(), dwc_alloc_atomic(), dwc_strdup(), and dwc_free() now
++take an additional parameter, a pointer to a memory context. Examples:
++
++	addr = dwc_alloc(&usb3_dev->os_dep.memctx, size);
++
++	dwc_free(&usb3_dev->os_dep.memctx, addr);
++
++Again, for the Linux ports, it is not necessary to actually define the memctx
++member, but it is necessary to add the pointer parameter to all of the call
++sites.
++
++
++Same for dwc_dma_alloc() and dwc_dma_free(). Examples:
++
++	virt_addr = dwc_dma_alloc(&usb3_dev->os_dep.dmactx, size, &phys_addr);
++
++	dwc_dma_free(&usb3_dev->os_dep.dmactx, size, virt_addr, phys_addr);
++
++
++Same for dwc_mutex_alloc() and dwc_mutex_free(). Examples:
++
++	mutex = dwc_mutex_alloc(&usb3_dev->os_dep.mtxctx);
++
++	dwc_mutex_free(&usb3_dev->os_dep.mtxctx, mutex);
++
++
++Same for dwc_spinlock_alloc() and dwc_spinlock_free(). Examples:
++
++	lock = dwc_spinlock_alloc(&usb3_dev->osdep.splctx);
++
++	dwc_spinlock_free(&usb3_dev->osdep.splctx, lock);
++
++
++Same for dwc_timer_alloc(). Example:
++
++	timer = dwc_timer_alloc(&usb3_dev->os_dep.tmrctx, "dwc_usb3_tmr1",
++				cb_func, cb_data);
++
++
++Same for dwc_waitq_alloc(). Example:
++
++	waitq = dwc_waitq_alloc(&usb3_dev->os_dep.wtqctx);
++
++
++Same for dwc_thread_run(). Example:
++
++	thread = dwc_thread_run(&usb3_dev->os_dep.thdctx, func,
++				"dwc_usb3_thd1", data);
++
++
++Same for dwc_workq_alloc(). Example:
++
++	workq = dwc_workq_alloc(&usb3_dev->osdep.wkqctx, "dwc_usb3_wkq1");
++
++
++Same for dwc_task_alloc(). Example:
++
++	task = dwc_task_alloc(&usb3_dev->os_dep.tskctx, "dwc_usb3_tsk1",
++			      cb_func, cb_data);
++
++
++In addition to the context pointer additions, a few core functions have had
++other changes made to their parameters:
++
++The 'flags' parameter to dwc_spinlock_irqsave() and dwc_spinunlock_irqrestore()
++has been changed from a uint64_t to a dwc_irqflags_t.
++
++dwc_thread_should_stop() now takes a 'dwc_thread_t *' parameter, because the
++FreeBSD equivalent of that function requires it.
++
++And, in addition to the context pointer, dwc_task_alloc() also adds a
++'char *name' parameter, to be consistent with dwc_thread_run() and
++dwc_workq_alloc(), and because the FreeBSD equivalent of that function
++requires a unique name.
++
++
++Here is a complete list of the core functions that now take a pointer to a
++context as their first parameter:
++
++	dwc_read_reg32
++	dwc_read_reg64
++	dwc_write_reg32
++	dwc_write_reg64
++	dwc_modify_reg32
++	dwc_modify_reg64
++	dwc_alloc
++	dwc_alloc_atomic
++	dwc_strdup
++	dwc_free
++	dwc_dma_alloc
++	dwc_dma_free
++	dwc_mutex_alloc
++	dwc_mutex_free
++	dwc_spinlock_alloc
++	dwc_spinlock_free
++	dwc_timer_alloc
++	dwc_waitq_alloc
++	dwc_thread_run
++	dwc_workq_alloc
++	dwc_task_alloc     Also adds a 'char *name' as its 2nd parameter
++
++And here are the core functions that have other changes to their parameters:
++
++	dwc_spinlock_irqsave      'flags' param is now a 'dwc_irqflags_t *'
++	dwc_spinunlock_irqrestore 'flags' param is now a 'dwc_irqflags_t'
++	dwc_thread_should_stop    Adds a 'dwc_thread_t *' parameter
++
++
++
++The changes to the core functions also require some of the other library
++functions to change:
++
++	dwc_cc_if_alloc() and dwc_cc_if_free() now take a 'void *memctx'
++	(for memory allocation) as the 1st param and a 'void *mtxctx'
++	(for mutex allocation) as the 2nd param.
++
++	dwc_cc_clear(), dwc_cc_add(), dwc_cc_change(), dwc_cc_remove(),
++	dwc_cc_data_for_save(), and dwc_cc_restore_from_data() now take a
++	'void *memctx' as the 1st param.
++
++	dwc_dh_modpow(), dwc_dh_pk(), and dwc_dh_derive_keys() now take a
++	'void *memctx' as the 1st param.
++
++	dwc_modpow() now takes a 'void *memctx' as the 1st param.
++
++	dwc_alloc_notification_manager() now takes a 'void *memctx' as the
++	1st param and a 'void *wkqctx' (for work queue allocation) as the 2nd
++	param, and also now returns an integer value that is non-zero if
++	allocation of its data structures or work queue fails.
++
++	dwc_register_notifier() now takes a 'void *memctx' as the 1st param.
++
++	dwc_memory_debug_start() now takes a 'void *mem_ctx' as the first
++	param, and also now returns an integer value that is non-zero if
++	allocation of its data structures fails.
++
++
++
++Other miscellaneous changes:
++
++The DEBUG_MEMORY and DEBUG_REGS #define's have been renamed to
++DWC_DEBUG_MEMORY and DWC_DEBUG_REGS.
++
++The following #define's have been added to allow selectively compiling library
++features:
++
++	DWC_CCLIB
++	DWC_CRYPTOLIB
++	DWC_NOTIFYLIB
++	DWC_UTFLIB
++
++A DWC_LIBMODULE #define has also been added. If this is not defined, then the
++module code in dwc_common_linux.c is not compiled in. This allows linking the
++library code directly into a driver module, instead of as a standalone module.
+--- /dev/null
++++ b/drivers/usb/host/dwc_common_port/doc/doxygen.cfg
+@@ -0,0 +1,270 @@
++# Doxyfile 1.4.5
++
++#---------------------------------------------------------------------------
++# Project related configuration options
++#---------------------------------------------------------------------------
++PROJECT_NAME           = "Synopsys DWC Portability and Common Library for UWB"
++PROJECT_NUMBER         =
++OUTPUT_DIRECTORY       = doc
++CREATE_SUBDIRS         = NO
++OUTPUT_LANGUAGE        = English
++BRIEF_MEMBER_DESC      = YES
++REPEAT_BRIEF           = YES
++ABBREVIATE_BRIEF       = "The $name class" \
++                         "The $name widget" \
++                         "The $name file" \
++                         is \
++                         provides \
++                         specifies \
++                         contains \
++                         represents \
++                         a \
++                         an \
++                         the
++ALWAYS_DETAILED_SEC    = YES
++INLINE_INHERITED_MEMB  = NO
++FULL_PATH_NAMES        = NO
++STRIP_FROM_PATH        = ..
++STRIP_FROM_INC_PATH    =
++SHORT_NAMES            = NO
++JAVADOC_AUTOBRIEF      = YES
++MULTILINE_CPP_IS_BRIEF = NO
++DETAILS_AT_TOP         = YES
++INHERIT_DOCS           = YES
++SEPARATE_MEMBER_PAGES  = NO
++TAB_SIZE               = 8
++ALIASES                =
++OPTIMIZE_OUTPUT_FOR_C  = YES
++OPTIMIZE_OUTPUT_JAVA   = NO
++BUILTIN_STL_SUPPORT    = NO
++DISTRIBUTE_GROUP_DOC   = NO
++SUBGROUPING            = NO
++#---------------------------------------------------------------------------
++# Build related configuration options
++#---------------------------------------------------------------------------
++EXTRACT_ALL            = NO
++EXTRACT_PRIVATE        = NO
++EXTRACT_STATIC         = YES
++EXTRACT_LOCAL_CLASSES  = NO
++EXTRACT_LOCAL_METHODS  = NO
++HIDE_UNDOC_MEMBERS     = NO
++HIDE_UNDOC_CLASSES     = NO
++HIDE_FRIEND_COMPOUNDS  = NO
++HIDE_IN_BODY_DOCS      = NO
++INTERNAL_DOCS          = NO
++CASE_SENSE_NAMES       = YES
++HIDE_SCOPE_NAMES       = NO
++SHOW_INCLUDE_FILES     = NO
++INLINE_INFO            = YES
++SORT_MEMBER_DOCS       = NO
++SORT_BRIEF_DOCS        = NO
++SORT_BY_SCOPE_NAME     = NO
++GENERATE_TODOLIST      = YES
++GENERATE_TESTLIST      = YES
++GENERATE_BUGLIST       = YES
++GENERATE_DEPRECATEDLIST= YES
++ENABLED_SECTIONS       =
++MAX_INITIALIZER_LINES  = 30
++SHOW_USED_FILES        = YES
++SHOW_DIRECTORIES       = YES
++FILE_VERSION_FILTER    =
++#---------------------------------------------------------------------------
++# configuration options related to warning and progress messages
++#---------------------------------------------------------------------------
++QUIET                  = YES
++WARNINGS               = YES
++WARN_IF_UNDOCUMENTED   = NO
++WARN_IF_DOC_ERROR      = YES
++WARN_NO_PARAMDOC       = YES
++WARN_FORMAT            = "$file:$line: $text"
++WARN_LOGFILE           =
++#---------------------------------------------------------------------------
++# configuration options related to the input files
++#---------------------------------------------------------------------------
++INPUT                  = .
++FILE_PATTERNS          = *.c \
++                         *.cc \
++                         *.cxx \
++                         *.cpp \
++                         *.c++ \
++                         *.d \
++                         *.java \
++                         *.ii \
++                         *.ixx \
++                         *.ipp \
++                         *.i++ \
++                         *.inl \
++                         *.h \
++                         *.hh \
++                         *.hxx \
++                         *.hpp \
++                         *.h++ \
++                         *.idl \
++                         *.odl \
++                         *.cs \
++                         *.php \
++                         *.php3 \
++                         *.inc \
++                         *.m \
++                         *.mm \
++                         *.dox \
++                         *.py \
++                         *.C \
++                         *.CC \
++                         *.C++ \
++                         *.II \
++                         *.I++ \
++                         *.H \
++                         *.HH \
++                         *.H++ \
++                         *.CS \
++                         *.PHP \
++                         *.PHP3 \
++                         *.M \
++                         *.MM \
++                         *.PY
++RECURSIVE              = NO
++EXCLUDE                =
++EXCLUDE_SYMLINKS       = NO
++EXCLUDE_PATTERNS       =
++EXAMPLE_PATH           =
++EXAMPLE_PATTERNS       = *
++EXAMPLE_RECURSIVE      = NO
++IMAGE_PATH             =
++INPUT_FILTER           =
++FILTER_PATTERNS        =
++FILTER_SOURCE_FILES    = NO
++#---------------------------------------------------------------------------
++# configuration options related to source browsing
++#---------------------------------------------------------------------------
++SOURCE_BROWSER         = NO
++INLINE_SOURCES         = NO
++STRIP_CODE_COMMENTS    = YES
++REFERENCED_BY_RELATION = YES
++REFERENCES_RELATION    = YES
++USE_HTAGS              = NO
++VERBATIM_HEADERS       = NO
++#---------------------------------------------------------------------------
++# configuration options related to the alphabetical class index
++#---------------------------------------------------------------------------
++ALPHABETICAL_INDEX     = NO
++COLS_IN_ALPHA_INDEX    = 5
++IGNORE_PREFIX          =
++#---------------------------------------------------------------------------
++# configuration options related to the HTML output
++#---------------------------------------------------------------------------
++GENERATE_HTML          = YES
++HTML_OUTPUT            = html
++HTML_FILE_EXTENSION    = .html
++HTML_HEADER            =
++HTML_FOOTER            =
++HTML_STYLESHEET        =
++HTML_ALIGN_MEMBERS     = YES
++GENERATE_HTMLHELP      = NO
++CHM_FILE               =
++HHC_LOCATION           =
++GENERATE_CHI           = NO
++BINARY_TOC             = NO
++TOC_EXPAND             = NO
++DISABLE_INDEX          = NO
++ENUM_VALUES_PER_LINE   = 4
++GENERATE_TREEVIEW      = YES
++TREEVIEW_WIDTH         = 250
++#---------------------------------------------------------------------------
++# configuration options related to the LaTeX output
++#---------------------------------------------------------------------------
++GENERATE_LATEX         = NO
++LATEX_OUTPUT           = latex
++LATEX_CMD_NAME         = latex
++MAKEINDEX_CMD_NAME     = makeindex
++COMPACT_LATEX          = NO
++PAPER_TYPE             = a4wide
++EXTRA_PACKAGES         =
++LATEX_HEADER           =
++PDF_HYPERLINKS         = NO
++USE_PDFLATEX           = NO
++LATEX_BATCHMODE        = NO
++LATEX_HIDE_INDICES     = NO
++#---------------------------------------------------------------------------
++# configuration options related to the RTF output
++#---------------------------------------------------------------------------
++GENERATE_RTF           = NO
++RTF_OUTPUT             = rtf
++COMPACT_RTF            = NO
++RTF_HYPERLINKS         = NO
++RTF_STYLESHEET_FILE    =
++RTF_EXTENSIONS_FILE    =
++#---------------------------------------------------------------------------
++# configuration options related to the man page output
++#---------------------------------------------------------------------------
++GENERATE_MAN           = NO
++MAN_OUTPUT             = man
++MAN_EXTENSION          = .3
++MAN_LINKS              = NO
++#---------------------------------------------------------------------------
++# configuration options related to the XML output
++#---------------------------------------------------------------------------
++GENERATE_XML           = NO
++XML_OUTPUT             = xml
++XML_SCHEMA             =
++XML_DTD                =
++XML_PROGRAMLISTING     = YES
++#---------------------------------------------------------------------------
++# configuration options for the AutoGen Definitions output
++#---------------------------------------------------------------------------
++GENERATE_AUTOGEN_DEF   = NO
++#---------------------------------------------------------------------------
++# configuration options related to the Perl module output
++#---------------------------------------------------------------------------
++GENERATE_PERLMOD       = NO
++PERLMOD_LATEX          = NO
++PERLMOD_PRETTY         = YES
++PERLMOD_MAKEVAR_PREFIX =
++#---------------------------------------------------------------------------
++# Configuration options related to the preprocessor
++#---------------------------------------------------------------------------
++ENABLE_PREPROCESSING   = YES
++MACRO_EXPANSION        = NO
++EXPAND_ONLY_PREDEF     = NO
++SEARCH_INCLUDES        = YES
++INCLUDE_PATH           =
++INCLUDE_FILE_PATTERNS  =
++PREDEFINED             = DEBUG DEBUG_MEMORY
++EXPAND_AS_DEFINED      =
++SKIP_FUNCTION_MACROS   = YES
++#---------------------------------------------------------------------------
++# Configuration::additions related to external references
++#---------------------------------------------------------------------------
++TAGFILES               =
++GENERATE_TAGFILE       =
++ALLEXTERNALS           = NO
++EXTERNAL_GROUPS        = YES
++PERL_PATH              = /usr/bin/perl
++#---------------------------------------------------------------------------
++# Configuration options related to the dot tool
++#---------------------------------------------------------------------------
++CLASS_DIAGRAMS         = YES
++HIDE_UNDOC_RELATIONS   = YES
++HAVE_DOT               = NO
++CLASS_GRAPH            = YES
++COLLABORATION_GRAPH    = YES
++GROUP_GRAPHS           = YES
++UML_LOOK               = NO
++TEMPLATE_RELATIONS     = NO
++INCLUDE_GRAPH          = NO
++INCLUDED_BY_GRAPH      = YES
++CALL_GRAPH             = NO
++GRAPHICAL_HIERARCHY    = YES
++DIRECTORY_GRAPH        = YES
++DOT_IMAGE_FORMAT       = png
++DOT_PATH               =
++DOTFILE_DIRS           =
++MAX_DOT_GRAPH_DEPTH    = 1000
++DOT_TRANSPARENT        = NO
++DOT_MULTI_TARGETS      = NO
++GENERATE_LEGEND        = YES
++DOT_CLEANUP            = YES
++#---------------------------------------------------------------------------
++# Configuration::additions related to the search engine
++#---------------------------------------------------------------------------
++SEARCHENGINE           = NO
+--- /dev/null
++++ b/drivers/usb/host/dwc_common_port/dwc_cc.c
+@@ -0,0 +1,532 @@
++/* =========================================================================
++ * $File: //dwh/usb_iip/dev/software/dwc_common_port_2/dwc_cc.c $
++ * $Revision: #4 $
++ * $Date: 2010/11/04 $
++ * $Change: 1621692 $
++ *
++ * Synopsys Portability Library Software and documentation
++ * (hereinafter, "Software") is an Unsupported proprietary work of
++ * Synopsys, Inc. unless otherwise expressly agreed to in writing
++ * between Synopsys and you.
++ *
++ * The Software IS NOT an item of Licensed Software or Licensed Product
++ * under any End User Software License Agreement or Agreement for
++ * Licensed Product with Synopsys or any supplement thereto. You are
++ * permitted to use and redistribute this Software in source and binary
++ * forms, with or without modification, provided that redistributions
++ * of source code must retain this notice. You may not view, use,
++ * disclose, copy or distribute this file or any information contained
++ * herein except pursuant to this license grant from Synopsys. If you
++ * do not agree with this notice, including the disclaimer below, then
++ * you are not authorized to use the Software.
++ *
++ * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS"
++ * BASIS AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
++ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
++ * FOR A PARTICULAR PURPOSE ARE HEREBY DISCLAIMED. IN NO EVENT SHALL
++ * SYNOPSYS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
++ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
++ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
++ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
++ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
++ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
++ * USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
++ * DAMAGE.
++ * ========================================================================= */
++#ifdef DWC_CCLIB
++
++#include "dwc_cc.h"
++
++typedef struct dwc_cc
++{
++	uint32_t uid;
++	uint8_t chid[16];
++	uint8_t cdid[16];
++	uint8_t ck[16];
++	uint8_t *name;
++	uint8_t length;
++        DWC_CIRCLEQ_ENTRY(dwc_cc) list_entry;
++} dwc_cc_t;
++
++DWC_CIRCLEQ_HEAD(context_list, dwc_cc);
++
++/** The main structure for CC management.  */
++struct dwc_cc_if
++{
++	dwc_mutex_t *mutex;
++	char *filename;
++
++	unsigned is_host:1;
++
++	dwc_notifier_t *notifier;
++
++	struct context_list list;
++};
++
++#ifdef DEBUG
++static inline void dump_bytes(char *name, uint8_t *bytes, int len)
++{
++	int i;
++	DWC_PRINTF("%s: ", name);
++	for (i=0; i<len; i++) {
++		DWC_PRINTF("%02x ", bytes[i]);
++	}
++	DWC_PRINTF("\n");
++}
++#else
++#define dump_bytes(x...)
++#endif
++
++static dwc_cc_t *alloc_cc(void *mem_ctx, uint8_t *name, uint32_t length)
++{
++	dwc_cc_t *cc = dwc_alloc(mem_ctx, sizeof(dwc_cc_t));
++	if (!cc) {
++		return NULL;
++	}
++	DWC_MEMSET(cc, 0, sizeof(dwc_cc_t));
++
++	if (name) {
++		cc->length = length;
++		cc->name = dwc_alloc(mem_ctx, length);
++		if (!cc->name) {
++			dwc_free(mem_ctx, cc);
++			return NULL;
++		}
++
++		DWC_MEMCPY(cc->name, name, length);
++	}
++
++	return cc;
++}
++
++static void free_cc(void *mem_ctx, dwc_cc_t *cc)
++{
++	if (cc->name) {
++		dwc_free(mem_ctx, cc->name);
++	}
++	dwc_free(mem_ctx, cc);
++}
++
++static uint32_t next_uid(dwc_cc_if_t *cc_if)
++{
++	uint32_t uid = 0;
++	dwc_cc_t *cc;
++	DWC_CIRCLEQ_FOREACH(cc, &cc_if->list, list_entry) {
++		if (cc->uid > uid) {
++			uid = cc->uid;
++		}
++	}
++
++	if (uid == 0) {
++		uid = 255;
++	}
++
++	return uid + 1;
++}
++
++static dwc_cc_t *cc_find(dwc_cc_if_t *cc_if, uint32_t uid)
++{
++	dwc_cc_t *cc;
++	DWC_CIRCLEQ_FOREACH(cc, &cc_if->list, list_entry) {
++		if (cc->uid == uid) {
++			return cc;
++		}
++	}
++	return NULL;
++}
++
++static unsigned int cc_data_size(dwc_cc_if_t *cc_if)
++{
++	unsigned int size = 0;
++	dwc_cc_t *cc;
++	DWC_CIRCLEQ_FOREACH(cc, &cc_if->list, list_entry) {
++		size += (48 + 1);
++		if (cc->name) {
++			size += cc->length;
++		}
++	}
++	return size;
++}
++
++static uint32_t cc_match_chid(dwc_cc_if_t *cc_if, uint8_t *chid)
++{
++	uint32_t uid = 0;
++	dwc_cc_t *cc;
++
++	DWC_CIRCLEQ_FOREACH(cc, &cc_if->list, list_entry) {
++		if (DWC_MEMCMP(cc->chid, chid, 16) == 0) {
++			uid = cc->uid;
++			break;
++		}
++	}
++	return uid;
++}
++static uint32_t cc_match_cdid(dwc_cc_if_t *cc_if, uint8_t *cdid)
++{
++	uint32_t uid = 0;
++	dwc_cc_t *cc;
++
++	DWC_CIRCLEQ_FOREACH(cc, &cc_if->list, list_entry) {
++		if (DWC_MEMCMP(cc->cdid, cdid, 16) == 0) {
++			uid = cc->uid;
++			break;
++		}
++	}
++	return uid;
++}
++
++/* Internal cc_add */
++static int32_t cc_add(void *mem_ctx, dwc_cc_if_t *cc_if, uint8_t *chid,
++		      uint8_t *cdid, uint8_t *ck, uint8_t *name, uint8_t length)
++{
++	dwc_cc_t *cc;
++	uint32_t uid;
++
++	if (cc_if->is_host) {
++		uid = cc_match_cdid(cc_if, cdid);
++	}
++	else {
++		uid = cc_match_chid(cc_if, chid);
++	}
++
++	if (uid) {
++		DWC_DEBUGC("Replacing previous connection context id=%d name=%p name_len=%d", uid, name, length);
++		cc = cc_find(cc_if, uid);
++	}
++	else {
++		cc = alloc_cc(mem_ctx, name, length);
++		cc->uid = next_uid(cc_if);
++		DWC_CIRCLEQ_INSERT_TAIL(&cc_if->list, cc, list_entry);
++	}
++
++	DWC_MEMCPY(&(cc->chid[0]), chid, 16);
++	DWC_MEMCPY(&(cc->cdid[0]), cdid, 16);
++	DWC_MEMCPY(&(cc->ck[0]), ck, 16);
++
++	DWC_DEBUGC("Added connection context id=%d name=%p name_len=%d", cc->uid, name, length);
++	dump_bytes("CHID", cc->chid, 16);
++	dump_bytes("CDID", cc->cdid, 16);
++	dump_bytes("CK", cc->ck, 16);
++	return cc->uid;
++}
++
++/* Internal cc_clear */
++static void cc_clear(void *mem_ctx, dwc_cc_if_t *cc_if)
++{
++	while (!DWC_CIRCLEQ_EMPTY(&cc_if->list)) {
++		dwc_cc_t *cc = DWC_CIRCLEQ_FIRST(&cc_if->list);
++		DWC_CIRCLEQ_REMOVE_INIT(&cc_if->list, cc, list_entry);
++		free_cc(mem_ctx, cc);
++	}
++}
++
++dwc_cc_if_t *dwc_cc_if_alloc(void *mem_ctx, void *mtx_ctx,
++			     dwc_notifier_t *notifier, unsigned is_host)
++{
++	dwc_cc_if_t *cc_if = NULL;
++
++	/* Allocate a common_cc_if structure */
++	cc_if = dwc_alloc(mem_ctx, sizeof(dwc_cc_if_t));
++
++	if (!cc_if)
++		return NULL;
++
++#if (defined(DWC_LINUX) && defined(CONFIG_DEBUG_MUTEXES))
++	DWC_MUTEX_ALLOC_LINUX_DEBUG(cc_if->mutex);
++#else
++	cc_if->mutex = dwc_mutex_alloc(mtx_ctx);
++#endif
++	if (!cc_if->mutex) {
++		dwc_free(mem_ctx, cc_if);
++		return NULL;
++	}
++
++	DWC_CIRCLEQ_INIT(&cc_if->list);
++	cc_if->is_host = is_host;
++	cc_if->notifier = notifier;
++	return cc_if;
++}
++
++void dwc_cc_if_free(void *mem_ctx, void *mtx_ctx, dwc_cc_if_t *cc_if)
++{
++#if (defined(DWC_LINUX) && defined(CONFIG_DEBUG_MUTEXES))
++	DWC_MUTEX_FREE(cc_if->mutex);
++#else
++	dwc_mutex_free(mtx_ctx, cc_if->mutex);
++#endif
++	cc_clear(mem_ctx, cc_if);
++	dwc_free(mem_ctx, cc_if);
++}
++
++static void cc_changed(dwc_cc_if_t *cc_if)
++{
++	if (cc_if->notifier) {
++		dwc_notify(cc_if->notifier, DWC_CC_LIST_CHANGED_NOTIFICATION, cc_if);
++	}
++}
++
++void dwc_cc_clear(void *mem_ctx, dwc_cc_if_t *cc_if)
++{
++	DWC_MUTEX_LOCK(cc_if->mutex);
++	cc_clear(mem_ctx, cc_if);
++	DWC_MUTEX_UNLOCK(cc_if->mutex);
++	cc_changed(cc_if);
++}
++
++int32_t dwc_cc_add(void *mem_ctx, dwc_cc_if_t *cc_if, uint8_t *chid,
++		   uint8_t *cdid, uint8_t *ck, uint8_t *name, uint8_t length)
++{
++	uint32_t uid;
++
++	DWC_MUTEX_LOCK(cc_if->mutex);
++	uid = cc_add(mem_ctx, cc_if, chid, cdid, ck, name, length);
++	DWC_MUTEX_UNLOCK(cc_if->mutex);
++	cc_changed(cc_if);
++
++	return uid;
++}
++
++void dwc_cc_change(void *mem_ctx, dwc_cc_if_t *cc_if, int32_t id, uint8_t *chid,
++		   uint8_t *cdid, uint8_t *ck, uint8_t *name, uint8_t length)
++{
++	dwc_cc_t* cc;
++
++	DWC_DEBUGC("Change connection context %d", id);
++
++	DWC_MUTEX_LOCK(cc_if->mutex);
++	cc = cc_find(cc_if, id);
++	if (!cc) {
++		DWC_ERROR("Uid %d not found in cc list\n", id);
++		DWC_MUTEX_UNLOCK(cc_if->mutex);
++		return;
++	}
++
++	if (chid) {
++		DWC_MEMCPY(&(cc->chid[0]), chid, 16);
++	}
++	if (cdid) {
++		DWC_MEMCPY(&(cc->cdid[0]), cdid, 16);
++	}
++	if (ck) {
++		DWC_MEMCPY(&(cc->ck[0]), ck, 16);
++	}
++
++	if (name) {
++		if (cc->name) {
++			dwc_free(mem_ctx, cc->name);
++		}
++		cc->name = dwc_alloc(mem_ctx, length);
++		if (!cc->name) {
++			DWC_ERROR("Out of memory in dwc_cc_change()\n");
++			DWC_MUTEX_UNLOCK(cc_if->mutex);
++			return;
++		}
++		cc->length = length;
++		DWC_MEMCPY(cc->name, name, length);
++	}
++
++	DWC_MUTEX_UNLOCK(cc_if->mutex);
++
++	cc_changed(cc_if);
++
++	DWC_DEBUGC("Changed connection context id=%d\n", id);
++	dump_bytes("New CHID", cc->chid, 16);
++	dump_bytes("New CDID", cc->cdid, 16);
++	dump_bytes("New CK", cc->ck, 16);
++}
++
++void dwc_cc_remove(void *mem_ctx, dwc_cc_if_t *cc_if, int32_t id)
++{
++	dwc_cc_t *cc;
++
++	DWC_DEBUGC("Removing connection context %d", id);
++
++	DWC_MUTEX_LOCK(cc_if->mutex);
++	cc = cc_find(cc_if, id);
++	if (!cc) {
++		DWC_ERROR("Uid %d not found in cc list\n", id);
++		DWC_MUTEX_UNLOCK(cc_if->mutex);
++		return;
++	}
++
++	DWC_CIRCLEQ_REMOVE_INIT(&cc_if->list, cc, list_entry);
++	DWC_MUTEX_UNLOCK(cc_if->mutex);
++	free_cc(mem_ctx, cc);
++
++	cc_changed(cc_if);
++}
++
++uint8_t *dwc_cc_data_for_save(void *mem_ctx, dwc_cc_if_t *cc_if, unsigned int *length)
++{
++	uint8_t *buf, *x;
++	uint8_t zero = 0;
++	dwc_cc_t *cc;
++
++	DWC_MUTEX_LOCK(cc_if->mutex);
++	*length = cc_data_size(cc_if);
++	if (!(*length)) {
++		DWC_MUTEX_UNLOCK(cc_if->mutex);
++		return NULL;
++	}
++
++	DWC_DEBUGC("Creating data for saving (length=%d)", *length);
++
++	buf = dwc_alloc(mem_ctx, *length);
++	if (!buf) {
++		*length = 0;
++		DWC_MUTEX_UNLOCK(cc_if->mutex);
++		return NULL;
++	}
++
++	x = buf;
++	DWC_CIRCLEQ_FOREACH(cc, &cc_if->list, list_entry) {
++		DWC_MEMCPY(x, cc->chid, 16);
++		x += 16;
++		DWC_MEMCPY(x, cc->cdid, 16);
++		x += 16;
++		DWC_MEMCPY(x, cc->ck, 16);
++		x += 16;
++		if (cc->name) {
++			DWC_MEMCPY(x, &cc->length, 1);
++			x += 1;
++			DWC_MEMCPY(x, cc->name, cc->length);
++			x += cc->length;
++		}
++		else {
++			DWC_MEMCPY(x, &zero, 1);
++			x += 1;
++		}
++	}
++	DWC_MUTEX_UNLOCK(cc_if->mutex);
++
++	return buf;
++}
++
++void dwc_cc_restore_from_data(void *mem_ctx, dwc_cc_if_t *cc_if, uint8_t *data, uint32_t length)
++{
++	uint8_t name_length;
++	uint8_t *name;
++	uint8_t *chid;
++	uint8_t *cdid;
++	uint8_t *ck;
++	uint32_t i = 0;
++
++	DWC_MUTEX_LOCK(cc_if->mutex);
++	cc_clear(mem_ctx, cc_if);
++
++	while (i < length) {
++		chid = &data[i];
++		i += 16;
++		cdid = &data[i];
++		i += 16;
++		ck = &data[i];
++		i += 16;
++
++		name_length = data[i];
++		i ++;
++
++		if (name_length) {
++			name = &data[i];
++			i += name_length;
++		}
++		else {
++			name = NULL;
++		}
++
++		/* check that we haven't overrun the buffer */
++		if (i > length) {
++			DWC_ERROR("Data format error while attempting to load CCs "
++				  "(nlen=%d, iter=%d, buflen=%d).\n", name_length, i, length);
++			break;
++		}
++
++		cc_add(mem_ctx, cc_if, chid, cdid, ck, name, name_length);
++	}
++	DWC_MUTEX_UNLOCK(cc_if->mutex);
++
++	cc_changed(cc_if);
++}
++
++uint32_t dwc_cc_match_chid(dwc_cc_if_t *cc_if, uint8_t *chid)
++{
++	uint32_t uid = 0;
++
++	DWC_MUTEX_LOCK(cc_if->mutex);
++	uid = cc_match_chid(cc_if, chid);
++	DWC_MUTEX_UNLOCK(cc_if->mutex);
++	return uid;
++}
++uint32_t dwc_cc_match_cdid(dwc_cc_if_t *cc_if, uint8_t *cdid)
++{
++	uint32_t uid = 0;
++
++	DWC_MUTEX_LOCK(cc_if->mutex);
++	uid = cc_match_cdid(cc_if, cdid);
++	DWC_MUTEX_UNLOCK(cc_if->mutex);
++	return uid;
++}
++
++uint8_t *dwc_cc_ck(dwc_cc_if_t *cc_if, int32_t id)
++{
++	uint8_t *ck = NULL;
++	dwc_cc_t *cc;
++
++	DWC_MUTEX_LOCK(cc_if->mutex);
++	cc = cc_find(cc_if, id);
++	if (cc) {
++		ck = cc->ck;
++	}
++	DWC_MUTEX_UNLOCK(cc_if->mutex);
++
++	return ck;
++
++}
++
++uint8_t *dwc_cc_chid(dwc_cc_if_t *cc_if, int32_t id)
++{
++	uint8_t *retval = NULL;
++	dwc_cc_t *cc;
++
++	DWC_MUTEX_LOCK(cc_if->mutex);
++	cc = cc_find(cc_if, id);
++	if (cc) {
++		retval = cc->chid;
++	}
++	DWC_MUTEX_UNLOCK(cc_if->mutex);
++
++	return retval;
++}
++
++uint8_t *dwc_cc_cdid(dwc_cc_if_t *cc_if, int32_t id)
++{
++	uint8_t *retval = NULL;
++	dwc_cc_t *cc;
++
++	DWC_MUTEX_LOCK(cc_if->mutex);
++	cc = cc_find(cc_if, id);
++	if (cc) {
++		retval = cc->cdid;
++	}
++	DWC_MUTEX_UNLOCK(cc_if->mutex);
++
++	return retval;
++}
++
++uint8_t *dwc_cc_name(dwc_cc_if_t *cc_if, int32_t id, uint8_t *length)
++{
++	uint8_t *retval = NULL;
++	dwc_cc_t *cc;
++
++	DWC_MUTEX_LOCK(cc_if->mutex);
++	*length = 0;
++	cc = cc_find(cc_if, id);
++	if (cc) {
++		*length = cc->length;
++		retval = cc->name;
++	}
++	DWC_MUTEX_UNLOCK(cc_if->mutex);
++
++	return retval;
++}
++
++#endif	/* DWC_CCLIB */
+--- /dev/null
++++ b/drivers/usb/host/dwc_common_port/dwc_cc.h
+@@ -0,0 +1,224 @@
++/* =========================================================================
++ * $File: //dwh/usb_iip/dev/software/dwc_common_port_2/dwc_cc.h $
++ * $Revision: #4 $
++ * $Date: 2010/09/28 $
++ * $Change: 1596182 $
++ *
++ * Synopsys Portability Library Software and documentation
++ * (hereinafter, "Software") is an Unsupported proprietary work of
++ * Synopsys, Inc. unless otherwise expressly agreed to in writing
++ * between Synopsys and you.
++ *
++ * The Software IS NOT an item of Licensed Software or Licensed Product
++ * under any End User Software License Agreement or Agreement for
++ * Licensed Product with Synopsys or any supplement thereto. You are
++ * permitted to use and redistribute this Software in source and binary
++ * forms, with or without modification, provided that redistributions
++ * of source code must retain this notice. You may not view, use,
++ * disclose, copy or distribute this file or any information contained
++ * herein except pursuant to this license grant from Synopsys. If you
++ * do not agree with this notice, including the disclaimer below, then
++ * you are not authorized to use the Software.
++ *
++ * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS"
++ * BASIS AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
++ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
++ * FOR A PARTICULAR PURPOSE ARE HEREBY DISCLAIMED. IN NO EVENT SHALL
++ * SYNOPSYS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
++ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
++ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
++ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
++ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
++ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
++ * USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
++ * DAMAGE.
++ * ========================================================================= */
++#ifndef _DWC_CC_H_
++#define _DWC_CC_H_
++
++#ifdef __cplusplus
++extern "C" {
++#endif
++
++/** @file
++ *
++ * This file defines the Connection Context library.
++ *
++ * The main data structure is dwc_cc_if_t which is returned by either the
++ * dwc_cc_if_alloc function or returned by the module to the user via a provided
++ * function. The data structure is opaque and should only be manipulated via the
++ * functions provided in this API.
++ *
++ * It manages a list of connection contexts and operations can be performed to
++ * add, remove, query, search, and change, those contexts.  Additionally,
++ * a dwc_notifier_t object can be requested from the manager so that
++ * the user can be notified whenever the context list has changed.
++ */
++
++#include "dwc_os.h"
++#include "dwc_list.h"
++#include "dwc_notifier.h"
++
++
++/* Notifications */
++#define DWC_CC_LIST_CHANGED_NOTIFICATION "DWC_CC_LIST_CHANGED_NOTIFICATION"
++
++struct dwc_cc_if;
++typedef struct dwc_cc_if dwc_cc_if_t;
++
++
++/** @name Connection Context Operations */
++/** @{ */
++
++/** This function allocates memory for a dwc_cc_if_t structure, initializes
++ * fields to default values, and returns a pointer to the structure or NULL on
++ * error. */
++extern dwc_cc_if_t *dwc_cc_if_alloc(void *mem_ctx, void *mtx_ctx,
++				    dwc_notifier_t *notifier, unsigned is_host);
++
++/** Frees the memory for the specified CC structure allocated from
++ * dwc_cc_if_alloc(). */
++extern void dwc_cc_if_free(void *mem_ctx, void *mtx_ctx, dwc_cc_if_t *cc_if);
++
++/** Removes all contexts from the connection context list */
++extern void dwc_cc_clear(void *mem_ctx, dwc_cc_if_t *cc_if);
++
++/** Adds a connection context (CHID, CK, CDID, Name) to the connection context list.
++ * If a CHID already exists, the CK and name are overwritten.  Statistics are
++ * not overwritten.
++ *
++ * @param cc_if The cc_if structure.
++ * @param chid A pointer to the 16-byte CHID.  This value will be copied.
++ * @param ck A pointer to the 16-byte CK.  This value will be copied.
++ * @param cdid A pointer to the 16-byte CDID.  This value will be copied.
++ * @param name An optional host friendly name as defined in the association model
++ * spec.  Must be a UTF16-LE unicode string.  Can be NULL to indicate no name.
++ * @param length The length of the unicode string.
++ * @return A unique identifier used to refer to this context that is valid for
++ * as long as this context is still in the list. */
++extern int32_t dwc_cc_add(void *mem_ctx, dwc_cc_if_t *cc_if, uint8_t *chid,
++			  uint8_t *cdid, uint8_t *ck, uint8_t *name,
++			  uint8_t length);
++
++/** Changes the CHID, CK, CDID, or Name values of a connection context in the
++ * list, preserving any accumulated statistics.  This would typically be called
++ * if the host decides to change the context with a SET_CONNECTION request.
++ *
++ * @param cc_if The cc_if structure.
++ * @param id The identifier of the connection context.
++ * @param chid A pointer to the 16-byte CHID.  This value will be copied.  NULL
++ * indicates no change.
++ * @param cdid A pointer to the 16-byte CDID.  This value will be copied.  NULL
++ * indicates no change.
++ * @param ck A pointer to the 16-byte CK.  This value will be copied.  NULL
++ * indicates no change.
++ * @param name Host friendly name UTF16-LE.  NULL indicates no change.
++ * @param length Length of name. */
++extern void dwc_cc_change(void *mem_ctx, dwc_cc_if_t *cc_if, int32_t id,
++			  uint8_t *chid, uint8_t *cdid, uint8_t *ck,
++			  uint8_t *name, uint8_t length);
++
++/** Remove the specified connection context.
++ * @param cc_if The cc_if structure.
++ * @param id The identifier of the connection context to remove. */
++extern void dwc_cc_remove(void *mem_ctx, dwc_cc_if_t *cc_if, int32_t id);
++
++/** Get a binary block of data for the connection context list and attributes.
++ * This data can be used by the OS specific driver to save the connection
++ * context list into non-volatile memory.
++ *
++ * @param cc_if The cc_if structure.
++ * @param length Return the length of the data buffer.
++ * @return A pointer to the data buffer.  The memory for this buffer should be
++ * freed with DWC_FREE() after use. */
++extern uint8_t *dwc_cc_data_for_save(void *mem_ctx, dwc_cc_if_t *cc_if,
++				     unsigned int *length);
++
++/** Restore the connection context list from the binary data that was previously
++ * returned from a call to dwc_cc_data_for_save.  This can be used by the OS specific
++ * driver to load a connection context list from non-volatile memory.
++ *
++ * @param cc_if The cc_if structure.
++ * @param data The data bytes as returned from dwc_cc_data_for_save.
++ * @param length The length of the data. */
++extern void dwc_cc_restore_from_data(void *mem_ctx, dwc_cc_if_t *cc_if,
++				     uint8_t *data, unsigned int length);
++
++/** Find the connection context from the specified CHID.
++ *
++ * @param cc_if The cc_if structure.
++ * @param chid A pointer to the CHID data.
++ * @return A non-zero identifier of the connection context if the CHID matches.
++ * Otherwise returns 0. */
++extern uint32_t dwc_cc_match_chid(dwc_cc_if_t *cc_if, uint8_t *chid);
++
++/** Find the connection context from the specified CDID.
++ *
++ * @param cc_if The cc_if structure.
++ * @param cdid A pointer to the CDID data.
++ * @return A non-zero identifier of the connection context if the CDID matches.
++ * Otherwise returns 0. */
++extern uint32_t dwc_cc_match_cdid(dwc_cc_if_t *cc_if, uint8_t *cdid);
++
++/** Retrieve the CK from the specified connection context.
++ *
++ * @param cc_if The cc_if structure.
++ * @param id The identifier of the connection context.
++ * @return A pointer to the CK data.  The memory does not need to be freed. */
++extern uint8_t *dwc_cc_ck(dwc_cc_if_t *cc_if, int32_t id);
++
++/** Retrieve the CHID from the specified connection context.
++ *
++ * @param cc_if The cc_if structure.
++ * @param id The identifier of the connection context.
++ * @return A pointer to the CHID data.  The memory does not need to be freed. */
++extern uint8_t *dwc_cc_chid(dwc_cc_if_t *cc_if, int32_t id);
++
++/** Retrieve the CDID from the specified connection context.
++ *
++ * @param cc_if The cc_if structure.
++ * @param id The identifier of the connection context.
++ * @return A pointer to the CDID data.  The memory does not need to be freed. */
++extern uint8_t *dwc_cc_cdid(dwc_cc_if_t *cc_if, int32_t id);
++
++extern uint8_t *dwc_cc_name(dwc_cc_if_t *cc_if, int32_t id, uint8_t *length);
++
++/** Checks a buffer for non-zero.
++ * @param id A pointer to a 16 byte buffer.
++ * @return true if the 16 byte value is non-zero. */
++static inline unsigned dwc_assoc_is_not_zero_id(uint8_t *id) {
++	int i;
++	for (i=0; i<16; i++) {
++		if (id[i]) return 1;
++	}
++	return 0;
++}
++
++/** Checks a buffer for zero.
++ * @param id A pointer to a 16 byte buffer.
++ * @return true if the 16 byte value is zero. */
++static inline unsigned dwc_assoc_is_zero_id(uint8_t *id) {
++	return !dwc_assoc_is_not_zero_id(id);
++}
++
++/** Prints an ASCII representation for the 16-byte chid, cdid, or ck, into
++ * buffer. */
++static inline int dwc_print_id_string(char *buffer, uint8_t *id) {
++	char *ptr = buffer;
++	int i;
++	for (i=0; i<16; i++) {
++		ptr += DWC_SPRINTF(ptr, "%02x", id[i]);
++		if (i < 15) {
++			ptr += DWC_SPRINTF(ptr, " ");
++		}
++	}
++	return ptr - buffer;
++}
++
++/** @} */
++
++#ifdef __cplusplus
++}
++#endif
++
++#endif /* _DWC_CC_H_ */
+--- /dev/null
++++ b/drivers/usb/host/dwc_common_port/dwc_common_fbsd.c
+@@ -0,0 +1,1308 @@
++#include "dwc_os.h"
++#include "dwc_list.h"
++
++#ifdef DWC_CCLIB
++# include "dwc_cc.h"
++#endif
++
++#ifdef DWC_CRYPTOLIB
++# include "dwc_modpow.h"
++# include "dwc_dh.h"
++# include "dwc_crypto.h"
++#endif
++
++#ifdef DWC_NOTIFYLIB
++# include "dwc_notifier.h"
++#endif
++
++/* OS-Level Implementations */
++
++/* This is the FreeBSD 7.0 kernel implementation of the DWC platform library. */
++
++
++/* MISC */
++
++void *DWC_MEMSET(void *dest, uint8_t byte, uint32_t size)
++{
++	return memset(dest, byte, size);
++}
++
++void *DWC_MEMCPY(void *dest, void const *src, uint32_t size)
++{
++	return memcpy(dest, src, size);
++}
++
++void *DWC_MEMMOVE(void *dest, void *src, uint32_t size)
++{
++	bcopy(src, dest, size);
++	return dest;
++}
++
++int DWC_MEMCMP(void *m1, void *m2, uint32_t size)
++{
++	return memcmp(m1, m2, size);
++}
++
++int DWC_STRNCMP(void *s1, void *s2, uint32_t size)
++{
++	return strncmp(s1, s2, size);
++}
++
++int DWC_STRCMP(void *s1, void *s2)
++{
++	return strcmp(s1, s2);
++}
++
++int DWC_STRLEN(char const *str)
++{
++	return strlen(str);
++}
++
++char *DWC_STRCPY(char *to, char const *from)
++{
++	return strcpy(to, from);
++}
++
++char *DWC_STRDUP(char const *str)
++{
++	int len = DWC_STRLEN(str) + 1;
++	char *new = DWC_ALLOC_ATOMIC(len);
++
++	if (!new) {
++		return NULL;
++	}
++
++	DWC_MEMCPY(new, str, len);
++	return new;
++}
++
++int DWC_ATOI(char *str, int32_t *value)
++{
++	char *end = NULL;
++
++	*value = strtol(str, &end, 0);
++	if (*end == '\0') {
++		return 0;
++	}
++
++	return -1;
++}
++
++int DWC_ATOUI(char *str, uint32_t *value)
++{
++	char *end = NULL;
++
++	*value = strtoul(str, &end, 0);
++	if (*end == '\0') {
++		return 0;
++	}
++
++	return -1;
++}
++
++
++#ifdef DWC_UTFLIB
++/* From usbstring.c */
++
++int DWC_UTF8_TO_UTF16LE(uint8_t const *s, uint16_t *cp, unsigned len)
++{
++	int	count = 0;
++	u8	c;
++	u16	uchar;
++
++	/* this insists on correct encodings, though not minimal ones.
++	 * BUT it currently rejects legit 4-byte UTF-8 code points,
++	 * which need surrogate pairs.  (Unicode 3.1 can use them.)
++	 */
++	while (len != 0 && (c = (u8) *s++) != 0) {
++		if (unlikely(c & 0x80)) {
++			// 2-byte sequence:
++			// 00000yyyyyxxxxxx = 110yyyyy 10xxxxxx
++			if ((c & 0xe0) == 0xc0) {
++				uchar = (c & 0x1f) << 6;
++
++				c = (u8) *s++;
++				if ((c & 0xc0) != 0xc0)
++					goto fail;
++				c &= 0x3f;
++				uchar |= c;
++
++			// 3-byte sequence (most CJKV characters):
++			// zzzzyyyyyyxxxxxx = 1110zzzz 10yyyyyy 10xxxxxx
++			} else if ((c & 0xf0) == 0xe0) {
++				uchar = (c & 0x0f) << 12;
++
++				c = (u8) *s++;
++				if ((c & 0xc0) != 0xc0)
++					goto fail;
++				c &= 0x3f;
++				uchar |= c << 6;
++
++				c = (u8) *s++;
++				if ((c & 0xc0) != 0xc0)
++					goto fail;
++				c &= 0x3f;
++				uchar |= c;
++
++				/* no bogus surrogates */
++				if (0xd800 <= uchar && uchar <= 0xdfff)
++					goto fail;
++
++			// 4-byte sequence (surrogate pairs, currently rare):
++			// 11101110wwwwzzzzyy + 110111yyyyxxxxxx
++			//     = 11110uuu 10uuzzzz 10yyyyyy 10xxxxxx
++			// (uuuuu = wwww + 1)
++			// FIXME accept the surrogate code points (only)
++			} else
++				goto fail;
++		} else
++			uchar = c;
++		put_unaligned (cpu_to_le16 (uchar), cp++);
++		count++;
++		len--;
++	}
++	return count;
++fail:
++	return -1;
++}
++
++#endif	/* DWC_UTFLIB */
++
++
++/* dwc_debug.h */
++
++dwc_bool_t DWC_IN_IRQ(void)
++{
++//	return in_irq();
++	return 0;
++}
++
++dwc_bool_t DWC_IN_BH(void)
++{
++//	return in_softirq();
++	return 0;
++}
++
++void DWC_VPRINTF(char *format, va_list args)
++{
++	vprintf(format, args);
++}
++
++int DWC_VSNPRINTF(char *str, int size, char *format, va_list args)
++{
++	return vsnprintf(str, size, format, args);
++}
++
++void DWC_PRINTF(char *format, ...)
++{
++	va_list args;
++
++	va_start(args, format);
++	DWC_VPRINTF(format, args);
++	va_end(args);
++}
++
++int DWC_SPRINTF(char *buffer, char *format, ...)
++{
++	int retval;
++	va_list args;
++
++	va_start(args, format);
++	retval = vsprintf(buffer, format, args);
++	va_end(args);
++	return retval;
++}
++
++int DWC_SNPRINTF(char *buffer, int size, char *format, ...)
++{
++	int retval;
++	va_list args;
++
++	va_start(args, format);
++	retval = vsnprintf(buffer, size, format, args);
++	va_end(args);
++	return retval;
++}
++
++void __DWC_WARN(char *format, ...)
++{
++	va_list args;
++
++	va_start(args, format);
++	DWC_VPRINTF(format, args);
++	va_end(args);
++}
++
++void __DWC_ERROR(char *format, ...)
++{
++	va_list args;
++
++	va_start(args, format);
++	DWC_VPRINTF(format, args);
++	va_end(args);
++}
++
++void DWC_EXCEPTION(char *format, ...)
++{
++	va_list args;
++
++	va_start(args, format);
++	DWC_VPRINTF(format, args);
++	va_end(args);
++//	BUG_ON(1);	???
++}
++
++#ifdef DEBUG
++void __DWC_DEBUG(char *format, ...)
++{
++	va_list args;
++
++	va_start(args, format);
++	DWC_VPRINTF(format, args);
++	va_end(args);
++}
++#endif
++
++
++/* dwc_mem.h */
++
++#if 0
++dwc_pool_t *DWC_DMA_POOL_CREATE(uint32_t size,
++				uint32_t align,
++				uint32_t alloc)
++{
++	struct dma_pool *pool = dma_pool_create("Pool", NULL,
++						size, align, alloc);
++	return (dwc_pool_t *)pool;
++}
++
++void DWC_DMA_POOL_DESTROY(dwc_pool_t *pool)
++{
++	dma_pool_destroy((struct dma_pool *)pool);
++}
++
++void *DWC_DMA_POOL_ALLOC(dwc_pool_t *pool, uint64_t *dma_addr)
++{
++//	return dma_pool_alloc((struct dma_pool *)pool, GFP_KERNEL, dma_addr);
++	return dma_pool_alloc((struct dma_pool *)pool, M_WAITOK, dma_addr);
++}
++
++void *DWC_DMA_POOL_ZALLOC(dwc_pool_t *pool, uint64_t *dma_addr)
++{
++	void *vaddr = DWC_DMA_POOL_ALLOC(pool, dma_addr);
++	return vaddr;	/* FIXME: pool element size is not available here; zeroing is left to the caller */
++}
++
++void DWC_DMA_POOL_FREE(dwc_pool_t *pool, void *vaddr, void *daddr)
++{
++	dma_pool_free(pool, vaddr, daddr);
++}
++#endif
++
++static void dmamap_cb(void *arg, bus_dma_segment_t *segs, int nseg, int error)
++{
++	if (error)
++		return;
++	*(bus_addr_t *)arg = segs[0].ds_addr;
++}
++
++void *__DWC_DMA_ALLOC(void *dma_ctx, uint32_t size, dwc_dma_t *dma_addr)
++{
++	dwc_dmactx_t *dma = (dwc_dmactx_t *)dma_ctx;
++	int error;
++
++	error = bus_dma_tag_create(
++#if __FreeBSD_version >= 700000
++			bus_get_dma_tag(dma->dev),	/* parent */
++#else
++			NULL,				/* parent */
++#endif
++			4, 0,				/* alignment, bounds */
++			BUS_SPACE_MAXADDR_32BIT,	/* lowaddr */
++			BUS_SPACE_MAXADDR,		/* highaddr */
++			NULL, NULL,			/* filter, filterarg */
++			size,				/* maxsize */
++			1,				/* nsegments */
++			size,				/* maxsegsize */
++			0,				/* flags */
++			NULL,				/* lockfunc */
++			NULL,				/* lockarg */
++			&dma->dma_tag);
++	if (error) {
++		device_printf(dma->dev, "%s: bus_dma_tag_create failed: %d\n",
++			      __func__, error);
++		goto fail_0;
++	}
++
++	error = bus_dmamem_alloc(dma->dma_tag, &dma->dma_vaddr,
++				 BUS_DMA_NOWAIT | BUS_DMA_COHERENT, &dma->dma_map);
++	if (error) {
++		device_printf(dma->dev, "%s: bus_dmamem_alloc(%ju) failed: %d\n",
++			      __func__, (uintmax_t)size, error);
++		goto fail_1;
++	}
++
++	dma->dma_paddr = 0;
++	error = bus_dmamap_load(dma->dma_tag, dma->dma_map, dma->dma_vaddr, size,
++				dmamap_cb, &dma->dma_paddr, BUS_DMA_NOWAIT);
++	if (error || dma->dma_paddr == 0) {
++		device_printf(dma->dev, "%s: bus_dmamap_load failed: %d\n",
++			      __func__, error);
++		goto fail_2;
++	}
++
++	*dma_addr = dma->dma_paddr;
++	return dma->dma_vaddr;
++
++fail_2:
++	bus_dmamap_unload(dma->dma_tag, dma->dma_map);
++fail_1:
++	bus_dmamem_free(dma->dma_tag, dma->dma_vaddr, dma->dma_map);
++	bus_dma_tag_destroy(dma->dma_tag);
++fail_0:
++	dma->dma_map = NULL;
++	dma->dma_tag = NULL;
++
++	return NULL;
++}
++
++void __DWC_DMA_FREE(void *dma_ctx, uint32_t size, void *virt_addr, dwc_dma_t dma_addr)
++{
++	dwc_dmactx_t *dma = (dwc_dmactx_t *)dma_ctx;
++
++	if (dma->dma_tag == NULL)
++		return;
++	if (dma->dma_map != NULL) {
++		bus_dmamap_sync(dma->dma_tag, dma->dma_map,
++				BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE);
++		bus_dmamap_unload(dma->dma_tag, dma->dma_map);
++		bus_dmamem_free(dma->dma_tag, dma->dma_vaddr, dma->dma_map);
++		dma->dma_map = NULL;
++	}
++
++	bus_dma_tag_destroy(dma->dma_tag);
++	dma->dma_tag = NULL;
++}
++
++void *__DWC_ALLOC(void *mem_ctx, uint32_t size)
++{
++	return malloc(size, M_DEVBUF, M_WAITOK | M_ZERO);
++}
++
++void *__DWC_ALLOC_ATOMIC(void *mem_ctx, uint32_t size)
++{
++	return malloc(size, M_DEVBUF, M_NOWAIT | M_ZERO);
++}
++
++void __DWC_FREE(void *mem_ctx, void *addr)
++{
++	free(addr, M_DEVBUF);
++}
++
++
++#ifdef DWC_CRYPTOLIB
++/* dwc_crypto.h */
++
++void DWC_RANDOM_BYTES(uint8_t *buffer, uint32_t length)
++{
++	get_random_bytes(buffer, length);
++}
++
++int DWC_AES_CBC(uint8_t *message, uint32_t messagelen, uint8_t *key, uint32_t keylen, uint8_t iv[16], uint8_t *out)
++{
++	struct crypto_blkcipher *tfm;
++	struct blkcipher_desc desc;
++	struct scatterlist sgd;
++	struct scatterlist sgs;
++
++	tfm = crypto_alloc_blkcipher("cbc(aes)", 0, CRYPTO_ALG_ASYNC);
++	if (tfm == NULL) {
++		printk("failed to load transform for aes CBC\n");
++		return -1;
++	}
++
++	crypto_blkcipher_setkey(tfm, key, keylen);
++	crypto_blkcipher_set_iv(tfm, iv, 16);
++
++	sg_init_one(&sgd, out, messagelen);
++	sg_init_one(&sgs, message, messagelen);
++
++	desc.tfm = tfm;
++	desc.flags = 0;
++
++	if (crypto_blkcipher_encrypt(&desc, &sgd, &sgs, messagelen)) {
++		crypto_free_blkcipher(tfm);
++		DWC_ERROR("AES CBC encryption failed");
++		return -1;
++	}
++
++	crypto_free_blkcipher(tfm);
++	return 0;
++}
++
++int DWC_SHA256(uint8_t *message, uint32_t len, uint8_t *out)
++{
++	struct crypto_hash *tfm;
++	struct hash_desc desc;
++	struct scatterlist sg;
++
++	tfm = crypto_alloc_hash("sha256", 0, CRYPTO_ALG_ASYNC);
++	if (IS_ERR(tfm)) {
++		DWC_ERROR("Failed to load transform for sha256: %ld", PTR_ERR(tfm));
++		return 0;
++	}
++	desc.tfm = tfm;
++	desc.flags = 0;
++
++	sg_init_one(&sg, message, len);
++	crypto_hash_digest(&desc, &sg, len, out);
++	crypto_free_hash(tfm);
++
++	return 1;
++}
++
++int DWC_HMAC_SHA256(uint8_t *message, uint32_t messagelen,
++		    uint8_t *key, uint32_t keylen, uint8_t *out)
++{
++	struct crypto_hash *tfm;
++	struct hash_desc desc;
++	struct scatterlist sg;
++
++	tfm = crypto_alloc_hash("hmac(sha256)", 0, CRYPTO_ALG_ASYNC);
++	if (IS_ERR(tfm)) {
++		DWC_ERROR("Failed to load transform for hmac(sha256): %ld", PTR_ERR(tfm));
++		return 0;
++	}
++	desc.tfm = tfm;
++	desc.flags = 0;
++
++	sg_init_one(&sg, message, messagelen);
++	crypto_hash_setkey(tfm, key, keylen);
++	crypto_hash_digest(&desc, &sg, messagelen, out);
++	crypto_free_hash(tfm);
++
++	return 1;
++}
++
++#endif	/* DWC_CRYPTOLIB */
++
++
++/* Byte Ordering Conversions */
++
++uint32_t DWC_CPU_TO_LE32(uint32_t *p)
++{
++#ifdef __LITTLE_ENDIAN
++	return *p;
++#else
++	uint8_t *u_p = (uint8_t *)p;
++
++	return (u_p[3] | (u_p[2] << 8) | (u_p[1] << 16) | (u_p[0] << 24));
++#endif
++}
++
++uint32_t DWC_CPU_TO_BE32(uint32_t *p)
++{
++#ifdef __BIG_ENDIAN
++	return *p;
++#else
++	uint8_t *u_p = (uint8_t *)p;
++
++	return (u_p[3] | (u_p[2] << 8) | (u_p[1] << 16) | (u_p[0] << 24));
++#endif
++}
++
++uint32_t DWC_LE32_TO_CPU(uint32_t *p)
++{
++#ifdef __LITTLE_ENDIAN
++	return *p;
++#else
++	uint8_t *u_p = (uint8_t *)p;
++
++	return (u_p[3] | (u_p[2] << 8) | (u_p[1] << 16) | (u_p[0] << 24));
++#endif
++}
++
++uint32_t DWC_BE32_TO_CPU(uint32_t *p)
++{
++#ifdef __BIG_ENDIAN
++	return *p;
++#else
++	uint8_t *u_p = (uint8_t *)p;
++
++	return (u_p[3] | (u_p[2] << 8) | (u_p[1] << 16) | (u_p[0] << 24));
++#endif
++}
++
++uint16_t DWC_CPU_TO_LE16(uint16_t *p)
++{
++#ifdef __LITTLE_ENDIAN
++	return *p;
++#else
++	uint8_t *u_p = (uint8_t *)p;
++	return (u_p[1] | (u_p[0] << 8));
++#endif
++}
++
++uint16_t DWC_CPU_TO_BE16(uint16_t *p)
++{
++#ifdef __BIG_ENDIAN
++	return *p;
++#else
++	uint8_t *u_p = (uint8_t *)p;
++	return (u_p[1] | (u_p[0] << 8));
++#endif
++}
++
++uint16_t DWC_LE16_TO_CPU(uint16_t *p)
++{
++#ifdef __LITTLE_ENDIAN
++	return *p;
++#else
++	uint8_t *u_p = (uint8_t *)p;
++	return (u_p[1] | (u_p[0] << 8));
++#endif
++}
++
++uint16_t DWC_BE16_TO_CPU(uint16_t *p)
++{
++#ifdef __BIG_ENDIAN
++	return *p;
++#else
++	uint8_t *u_p = (uint8_t *)p;
++	return (u_p[1] | (u_p[0] << 8));
++#endif
++}
++
++
++/* Registers */
++
++uint32_t DWC_READ_REG32(void *io_ctx, uint32_t volatile *reg)
++{
++	dwc_ioctx_t *io = (dwc_ioctx_t *)io_ctx;
++	bus_size_t ior = (bus_size_t)reg;
++
++	return bus_space_read_4(io->iot, io->ioh, ior);
++}
++
++#if 0
++uint64_t DWC_READ_REG64(void *io_ctx, uint64_t volatile *reg)
++{
++	dwc_ioctx_t *io = (dwc_ioctx_t *)io_ctx;
++	bus_size_t ior = (bus_size_t)reg;
++
++	return bus_space_read_8(io->iot, io->ioh, ior);
++}
++#endif
++
++void DWC_WRITE_REG32(void *io_ctx, uint32_t volatile *reg, uint32_t value)
++{
++	dwc_ioctx_t *io = (dwc_ioctx_t *)io_ctx;
++	bus_size_t ior = (bus_size_t)reg;
++
++	bus_space_write_4(io->iot, io->ioh, ior, value);
++}
++
++#if 0
++void DWC_WRITE_REG64(void *io_ctx, uint64_t volatile *reg, uint64_t value)
++{
++	dwc_ioctx_t *io = (dwc_ioctx_t *)io_ctx;
++	bus_size_t ior = (bus_size_t)reg;
++
++	bus_space_write_8(io->iot, io->ioh, ior, value);
++}
++#endif
++
++void DWC_MODIFY_REG32(void *io_ctx, uint32_t volatile *reg, uint32_t clear_mask,
++		      uint32_t set_mask)
++{
++	dwc_ioctx_t *io = (dwc_ioctx_t *)io_ctx;
++	bus_size_t ior = (bus_size_t)reg;
++
++	bus_space_write_4(io->iot, io->ioh, ior,
++			  (bus_space_read_4(io->iot, io->ioh, ior) &
++			   ~clear_mask) | set_mask);
++}
++
++#if 0
++void DWC_MODIFY_REG64(void *io_ctx, uint64_t volatile *reg, uint64_t clear_mask,
++		      uint64_t set_mask)
++{
++	dwc_ioctx_t *io = (dwc_ioctx_t *)io_ctx;
++	bus_size_t ior = (bus_size_t)reg;
++
++	bus_space_write_8(io->iot, io->ioh, ior,
++			  (bus_space_read_8(io->iot, io->ioh, ior) &
++			   ~clear_mask) | set_mask);
++}
++#endif
++
++
++/* Locking */
++
++dwc_spinlock_t *DWC_SPINLOCK_ALLOC(void)
++{
++	struct mtx *sl = DWC_ALLOC(sizeof(*sl));
++
++	if (!sl) {
++		DWC_ERROR("Cannot allocate memory for spinlock");
++		return NULL;
++	}
++
++	mtx_init(sl, "dw3spn", NULL, MTX_SPIN);
++	return (dwc_spinlock_t *)sl;
++}
++
++void DWC_SPINLOCK_FREE(dwc_spinlock_t *lock)
++{
++	struct mtx *sl = (struct mtx *)lock;
++
++	mtx_destroy(sl);
++	DWC_FREE(sl);
++}
++
++void DWC_SPINLOCK(dwc_spinlock_t *lock)
++{
++	mtx_lock_spin((struct mtx *)lock);	// ???
++}
++
++void DWC_SPINUNLOCK(dwc_spinlock_t *lock)
++{
++	mtx_unlock_spin((struct mtx *)lock);	// ???
++}
++
++void DWC_SPINLOCK_IRQSAVE(dwc_spinlock_t *lock, dwc_irqflags_t *flags)
++{
++	mtx_lock_spin((struct mtx *)lock);
++}
++
++void DWC_SPINUNLOCK_IRQRESTORE(dwc_spinlock_t *lock, dwc_irqflags_t flags)
++{
++	mtx_unlock_spin((struct mtx *)lock);
++}
++
++dwc_mutex_t *DWC_MUTEX_ALLOC(void)
++{
++	struct mtx *m;
++	dwc_mutex_t *mutex = (dwc_mutex_t *)DWC_ALLOC(sizeof(struct mtx));
++
++	if (!mutex) {
++		DWC_ERROR("Cannot allocate memory for mutex");
++		return NULL;
++	}
++
++	m = (struct mtx *)mutex;
++	mtx_init(m, "dw3mtx", NULL, MTX_DEF);
++	return mutex;
++}
++
++#if (defined(DWC_LINUX) && defined(CONFIG_DEBUG_MUTEXES))
++#else
++void DWC_MUTEX_FREE(dwc_mutex_t *mutex)
++{
++	mtx_destroy((struct mtx *)mutex);
++	DWC_FREE(mutex);
++}
++#endif
++
++void DWC_MUTEX_LOCK(dwc_mutex_t *mutex)
++{
++	struct mtx *m = (struct mtx *)mutex;
++
++	mtx_lock(m);
++}
++
++int DWC_MUTEX_TRYLOCK(dwc_mutex_t *mutex)
++{
++	struct mtx *m = (struct mtx *)mutex;
++
++	return mtx_trylock(m);
++}
++
++void DWC_MUTEX_UNLOCK(dwc_mutex_t *mutex)
++{
++	struct mtx *m = (struct mtx *)mutex;
++
++	mtx_unlock(m);
++}
++
++
++/* Timing */
++
++void DWC_UDELAY(uint32_t usecs)
++{
++	DELAY(usecs);
++}
++
++void DWC_MDELAY(uint32_t msecs)
++{
++	do {
++		DELAY(1000);
++	} while (--msecs);
++}
++
++void DWC_MSLEEP(uint32_t msecs)
++{
++	struct timeval tv;
++
++	tv.tv_sec = msecs / 1000;
++	tv.tv_usec = (msecs - tv.tv_sec * 1000) * 1000;
++	pause("dw3slp", tvtohz(&tv));
++}
++
++uint32_t DWC_TIME(void)
++{
++	struct timeval tv;
++
++	microuptime(&tv);	// or getmicrouptime? (less precise, but faster)
++	return tv.tv_sec * 1000 + tv.tv_usec / 1000;
++}
++
++
++/* Timers */
++
++struct dwc_timer {
++	struct callout t;
++	char *name;
++	dwc_spinlock_t *lock;
++	dwc_timer_callback_t cb;
++	void *data;
++};
++
++dwc_timer_t *DWC_TIMER_ALLOC(char *name, dwc_timer_callback_t cb, void *data)
++{
++	dwc_timer_t *t = DWC_ALLOC(sizeof(*t));
++
++	if (!t) {
++		DWC_ERROR("Cannot allocate memory for timer");
++		return NULL;
++	}
++
++	callout_init(&t->t, 1);
++
++	t->name = DWC_STRDUP(name);
++	if (!t->name) {
++		DWC_ERROR("Cannot allocate memory for timer->name");
++		goto no_name;
++	}
++
++	t->lock = DWC_SPINLOCK_ALLOC();
++	if (!t->lock) {
++		DWC_ERROR("Cannot allocate memory for lock");
++		goto no_lock;
++	}
++
++	t->cb = cb;
++	t->data = data;
++
++	return t;
++
++ no_lock:
++	DWC_FREE(t->name);
++ no_name:
++	DWC_FREE(t);
++
++	return NULL;
++}
++
++void DWC_TIMER_FREE(dwc_timer_t *timer)
++{
++	callout_stop(&timer->t);
++	DWC_SPINLOCK_FREE(timer->lock);
++	DWC_FREE(timer->name);
++	DWC_FREE(timer);
++}
++
++void DWC_TIMER_SCHEDULE(dwc_timer_t *timer, uint32_t time)
++{
++	struct timeval tv;
++
++	tv.tv_sec = time / 1000;
++	tv.tv_usec = (time - tv.tv_sec * 1000) * 1000;
++	callout_reset(&timer->t, tvtohz(&tv), timer->cb, timer->data);
++}
++
++void DWC_TIMER_CANCEL(dwc_timer_t *timer)
++{
++	callout_stop(&timer->t);
++}
++
++
++/* Wait Queues */
++
++struct dwc_waitq {
++	struct mtx lock;
++	int abort;
++};
++
++dwc_waitq_t *DWC_WAITQ_ALLOC(void)
++{
++	dwc_waitq_t *wq = DWC_ALLOC(sizeof(*wq));
++
++	if (!wq) {
++		DWC_ERROR("Cannot allocate memory for waitqueue");
++		return NULL;
++	}
++
++	mtx_init(&wq->lock, "dw3wtq", NULL, MTX_DEF);
++	wq->abort = 0;
++
++	return wq;
++}
++
++void DWC_WAITQ_FREE(dwc_waitq_t *wq)
++{
++	mtx_destroy(&wq->lock);
++	DWC_FREE(wq);
++}
++
++int32_t DWC_WAITQ_WAIT(dwc_waitq_t *wq, dwc_waitq_condition_t cond, void *data)
++{
++//	intrmask_t ipl;
++	int result = 0;
++
++	mtx_lock(&wq->lock);
++//	ipl = splbio();
++
++	/* Skip the sleep if already aborted or triggered */
++	if (!wq->abort && !cond(data)) {
++//		splx(ipl);
++		result = msleep(wq, &wq->lock, PCATCH, "dw3wat", 0); // infinite timeout
++//		ipl = splbio();
++	}
++
++	if (result == ERESTART) {	// signaled - restart
++		result = -DWC_E_RESTART;
++
++	} else if (result == EINTR) {	// signaled - interrupt
++		result = -DWC_E_ABORT;
++
++	} else if (wq->abort) {
++		result = -DWC_E_ABORT;
++
++	} else {
++		result = 0;
++	}
++
++	wq->abort = 0;
++//	splx(ipl);
++	mtx_unlock(&wq->lock);
++	return result;
++}
++
++int32_t DWC_WAITQ_WAIT_TIMEOUT(dwc_waitq_t *wq, dwc_waitq_condition_t cond,
++			       void *data, int32_t msecs)
++{
++	struct timeval tv, tv1, tv2;
++//	intrmask_t ipl;
++	int result = 0;
++
++	tv.tv_sec = msecs / 1000;
++	tv.tv_usec = (msecs - tv.tv_sec * 1000) * 1000;
++
++	mtx_lock(&wq->lock);
++//	ipl = splbio();
++
++	/* Skip the sleep if already aborted or triggered */
++	if (!wq->abort && !cond(data)) {
++//		splx(ipl);
++		getmicrouptime(&tv1);
++		result = msleep(wq, &wq->lock, PCATCH, "dw3wto", tvtohz(&tv));
++		getmicrouptime(&tv2);
++//		ipl = splbio();
++	}
++
++	if (result == 0) {			// awoken
++		if (wq->abort) {
++			result = -DWC_E_ABORT;
++		} else {
++			tv2.tv_usec -= tv1.tv_usec;
++			if (tv2.tv_usec < 0) {
++				tv2.tv_usec += 1000000;
++				tv2.tv_sec--;
++			}
++
++			tv2.tv_sec -= tv1.tv_sec;
++			result = tv2.tv_sec * 1000 + tv2.tv_usec / 1000;
++			result = msecs - result;
++			if (result <= 0)
++				result = 1;
++		}
++	} else if (result == ERESTART) {	// signaled - restart
++		result = -DWC_E_RESTART;
++
++	} else if (result == EINTR) {		// signaled - interrupt
++		result = -DWC_E_ABORT;
++
++	} else {				// timed out
++		result = -DWC_E_TIMEOUT;
++	}
++
++	wq->abort = 0;
++//	splx(ipl);
++	mtx_unlock(&wq->lock);
++	return result;
++}
++
++void DWC_WAITQ_TRIGGER(dwc_waitq_t *wq)
++{
++	wakeup(wq);
++}
++
++void DWC_WAITQ_ABORT(dwc_waitq_t *wq)
++{
++//	intrmask_t ipl;
++
++	mtx_lock(&wq->lock);
++//	ipl = splbio();
++	wq->abort = 1;
++	wakeup(wq);
++//	splx(ipl);
++	mtx_unlock(&wq->lock);
++}
++
++
++/* Threading */
++
++struct dwc_thread {
++	struct proc *proc;
++	int abort;
++};
++
++dwc_thread_t *DWC_THREAD_RUN(dwc_thread_function_t func, char *name, void *data)
++{
++	int retval;
++	dwc_thread_t *thread = DWC_ALLOC(sizeof(*thread));
++
++	if (!thread) {
++		return NULL;
++	}
++
++	thread->abort = 0;
++	retval = kthread_create((void (*)(void *))func, data, &thread->proc,
++				RFPROC | RFNOWAIT, 0, "%s", name);
++	if (retval) {
++		DWC_FREE(thread);
++		return NULL;
++	}
++
++	return thread;
++}
++
++int DWC_THREAD_STOP(dwc_thread_t *thread)
++{
++	int retval;
++
++	thread->abort = 1;
++	retval = tsleep(&thread->abort, 0, "dw3stp", 60 * hz);
++
++	if (retval == 0) {
++		/* DWC_THREAD_EXIT() will free the thread struct */
++		return 0;
++	}
++
++	/* NOTE: We leak the thread struct if thread doesn't die */
++
++	if (retval == EWOULDBLOCK) {
++		return -DWC_E_TIMEOUT;
++	}
++
++	return -DWC_E_UNKNOWN;
++}
++
++dwc_bool_t DWC_THREAD_SHOULD_STOP(dwc_thread_t *thread)
++{
++	return thread->abort;
++}
++
++void DWC_THREAD_EXIT(dwc_thread_t *thread)
++{
++	wakeup(&thread->abort);
++	DWC_FREE(thread);
++	kthread_exit(0);
++}
++
++
++/* tasklets
++ - Runs in interrupt context (cannot sleep)
++ - Each tasklet runs on a single CPU [ How can we ensure this on FreeBSD? Does it matter? ]
++ - Different tasklets can be running simultaneously on different CPUs [ shouldn't matter ]
++ */
++struct dwc_tasklet {
++	struct task t;
++	dwc_tasklet_callback_t cb;
++	void *data;
++};
++
++static void tasklet_callback(void *data, int pending)	// what to do with pending ???
++{
++	dwc_tasklet_t *task = (dwc_tasklet_t *)data;
++
++	task->cb(task->data);
++}
++
++dwc_tasklet_t *DWC_TASK_ALLOC(char *name, dwc_tasklet_callback_t cb, void *data)
++{
++	dwc_tasklet_t *task = DWC_ALLOC(sizeof(*task));
++
++	if (task) {
++		task->cb = cb;
++		task->data = data;
++		TASK_INIT(&task->t, 0, tasklet_callback, task);
++	} else {
++		DWC_ERROR("Cannot allocate memory for tasklet");
++	}
++
++	return task;
++}
++
++void DWC_TASK_FREE(dwc_tasklet_t *task)
++{
++	taskqueue_drain(taskqueue_fast, &task->t);	// ???
++	DWC_FREE(task);
++}
++
++void DWC_TASK_SCHEDULE(dwc_tasklet_t *task)
++{
++	/* Uses predefined system queue */
++	taskqueue_enqueue_fast(taskqueue_fast, &task->t);
++}
++
++
++/* workqueues
++ - Runs in process context (can sleep)
++ */
++typedef struct work_container {
++	dwc_work_callback_t cb;
++	void *data;
++	dwc_workq_t *wq;
++	char *name;
++	int hz;
++
++#ifdef DEBUG
++	DWC_CIRCLEQ_ENTRY(work_container) entry;
++#endif
++	struct task task;
++} work_container_t;
++
++#ifdef DEBUG
++DWC_CIRCLEQ_HEAD(work_container_queue, work_container);
++#endif
++
++struct dwc_workq {
++	struct taskqueue *taskq;
++	dwc_spinlock_t *lock;
++	dwc_waitq_t *waitq;
++	int pending;
++
++#ifdef DEBUG
++	struct work_container_queue entries;
++#endif
++};
++
++static void do_work(void *data, int pending)	// what to do with pending ???
++{
++	work_container_t *container = (work_container_t *)data;
++	dwc_workq_t *wq = container->wq;
++	dwc_irqflags_t flags;
++
++	if (container->hz) {
++		pause("dw3wrk", container->hz);
++	}
++
++	container->cb(container->data);
++	DWC_DEBUG("Work done: %s, container=%p", container->name, container);
++
++	DWC_SPINLOCK_IRQSAVE(wq->lock, &flags);
++
++#ifdef DEBUG
++	DWC_CIRCLEQ_REMOVE(&wq->entries, container, entry);
++#endif
++	if (container->name)
++		DWC_FREE(container->name);
++	DWC_FREE(container);
++	wq->pending--;
++	DWC_SPINUNLOCK_IRQRESTORE(wq->lock, flags);
++	DWC_WAITQ_TRIGGER(wq->waitq);
++}
++
++static int work_done(void *data)
++{
++	dwc_workq_t *workq = (dwc_workq_t *)data;
++
++	return workq->pending == 0;
++}
++
++int DWC_WORKQ_WAIT_WORK_DONE(dwc_workq_t *workq, int timeout)
++{
++	return DWC_WAITQ_WAIT_TIMEOUT(workq->waitq, work_done, workq, timeout);
++}
++
++dwc_workq_t *DWC_WORKQ_ALLOC(char *name)
++{
++	dwc_workq_t *wq = DWC_ALLOC(sizeof(*wq));
++
++	if (!wq) {
++		DWC_ERROR("Cannot allocate memory for workqueue");
++		return NULL;
++	}
++
++	wq->taskq = taskqueue_create(name, M_NOWAIT, taskqueue_thread_enqueue, &wq->taskq);
++	if (!wq->taskq) {
++		DWC_ERROR("Cannot allocate memory for taskqueue");
++		goto no_taskq;
++	}
++
++	wq->pending = 0;
++
++	wq->lock = DWC_SPINLOCK_ALLOC();
++	if (!wq->lock) {
++		DWC_ERROR("Cannot allocate memory for spinlock");
++		goto no_lock;
++	}
++
++	wq->waitq = DWC_WAITQ_ALLOC();
++	if (!wq->waitq) {
++		DWC_ERROR("Cannot allocate memory for waitqueue");
++		goto no_waitq;
++	}
++
++	taskqueue_start_threads(&wq->taskq, 1, PWAIT, "%s taskq", "dw3tsk");
++
++#ifdef DEBUG
++	DWC_CIRCLEQ_INIT(&wq->entries);
++#endif
++	return wq;
++
++ no_waitq:
++	DWC_SPINLOCK_FREE(wq->lock);
++ no_lock:
++	taskqueue_free(wq->taskq);
++ no_taskq:
++	DWC_FREE(wq);
++
++	return NULL;
++}
++
++void DWC_WORKQ_FREE(dwc_workq_t *wq)
++{
++#ifdef DEBUG
++	dwc_irqflags_t flags;
++
++	DWC_SPINLOCK_IRQSAVE(wq->lock, &flags);
++
++	if (wq->pending != 0) {
++		struct work_container *container;
++
++		DWC_ERROR("Destroying work queue with pending work");
++
++		DWC_CIRCLEQ_FOREACH(container, &wq->entries, entry) {
++			DWC_ERROR("Work %s still pending", container->name);
++		}
++	}
++
++	DWC_SPINUNLOCK_IRQRESTORE(wq->lock, flags);
++#endif
++	DWC_WAITQ_FREE(wq->waitq);
++	DWC_SPINLOCK_FREE(wq->lock);
++	taskqueue_free(wq->taskq);
++	DWC_FREE(wq);
++}
++
++void DWC_WORKQ_SCHEDULE(dwc_workq_t *wq, dwc_work_callback_t cb, void *data,
++			char *format, ...)
++{
++	dwc_irqflags_t flags;
++	work_container_t *container;
++	static char name[128];
++	va_list args;
++
++	va_start(args, format);
++	DWC_VSNPRINTF(name, 128, format, args);
++	va_end(args);
++
++	DWC_SPINLOCK_IRQSAVE(wq->lock, &flags);
++	wq->pending++;
++	DWC_SPINUNLOCK_IRQRESTORE(wq->lock, flags);
++	DWC_WAITQ_TRIGGER(wq->waitq);
++
++	container = DWC_ALLOC_ATOMIC(sizeof(*container));
++	if (!container) {
++		DWC_ERROR("Cannot allocate memory for container");
++		return;
++	}
++
++	container->name = DWC_STRDUP(name);
++	if (!container->name) {
++		DWC_ERROR("Cannot allocate memory for container->name");
++		DWC_FREE(container);
++		return;
++	}
++
++	container->cb = cb;
++	container->data = data;
++	container->wq = wq;
++	container->hz = 0;
++
++	DWC_DEBUG("Queueing work: %s, container=%p", container->name, container);
++
++	TASK_INIT(&container->task, 0, do_work, container);
++
++#ifdef DEBUG
++	DWC_CIRCLEQ_INSERT_TAIL(&wq->entries, container, entry);
++#endif
++	taskqueue_enqueue_fast(wq->taskq, &container->task);
++}
++
++void DWC_WORKQ_SCHEDULE_DELAYED(dwc_workq_t *wq, dwc_work_callback_t cb,
++				void *data, uint32_t time, char *format, ...)
++{
++	dwc_irqflags_t flags;
++	work_container_t *container;
++	static char name[128];
++	struct timeval tv;
++	va_list args;
++
++	va_start(args, format);
++	DWC_VSNPRINTF(name, 128, format, args);
++	va_end(args);
++
++	DWC_SPINLOCK_IRQSAVE(wq->lock, &flags);
++	wq->pending++;
++	DWC_SPINUNLOCK_IRQRESTORE(wq->lock, flags);
++	DWC_WAITQ_TRIGGER(wq->waitq);
++
++	container = DWC_ALLOC_ATOMIC(sizeof(*container));
++	if (!container) {
++		DWC_ERROR("Cannot allocate memory for container");
++		return;
++	}
++
++	container->name = DWC_STRDUP(name);
++	if (!container->name) {
++		DWC_ERROR("Cannot allocate memory for container->name");
++		DWC_FREE(container);
++		return;
++	}
++
++	container->cb = cb;
++	container->data = data;
++	container->wq = wq;
++
++	tv.tv_sec = time / 1000;
++	tv.tv_usec = (time - tv.tv_sec * 1000) * 1000;
++	container->hz = tvtohz(&tv);
++
++	DWC_DEBUG("Queueing work: %s, container=%p", container->name, container);
++
++	TASK_INIT(&container->task, 0, do_work, container);
++
++#ifdef DEBUG
++	DWC_CIRCLEQ_INSERT_TAIL(&wq->entries, container, entry);
++#endif
++	taskqueue_enqueue_fast(wq->taskq, &container->task);
++}
++
++int DWC_WORKQ_PENDING(dwc_workq_t *wq)
++{
++	return wq->pending;
++}
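
The FreeBSD DWC_WAITQ_WAIT_TIMEOUT() above returns the number of milliseconds
left of the caller's budget when the wait is satisfied (clamped to at least 1),
or a negative -DWC_E_* code on abort, signal or timeout. A small user-space
sketch of the timeval arithmetic it uses to compute the remainder (illustrative
only, not part of the patch; helper names are made up):

#include <stdio.h>
#include <sys/time.h>

/* Mirror of the remaining-time bookkeeping in DWC_WAITQ_WAIT_TIMEOUT:
 * subtract the start time from the wake-up time, convert to milliseconds,
 * and clamp the remainder to a minimum of 1 so a successful wake-up is
 * never reported as a timeout. */
static int remaining_msecs(struct timeval tv1, struct timeval tv2, int msecs)
{
	int elapsed;

	tv2.tv_usec -= tv1.tv_usec;
	if (tv2.tv_usec < 0) {
		tv2.tv_usec += 1000000;
		tv2.tv_sec--;
	}
	tv2.tv_sec -= tv1.tv_sec;

	elapsed = tv2.tv_sec * 1000 + tv2.tv_usec / 1000;
	return (msecs - elapsed <= 0) ? 1 : msecs - elapsed;
}

int main(void)
{
	struct timeval start = { .tv_sec = 10, .tv_usec = 900000 };
	struct timeval wake  = { .tv_sec = 11, .tv_usec = 150000 };

	/* 250 ms elapsed out of a 1000 ms budget -> 750 ms remain. */
	printf("%d ms remaining\n", remaining_msecs(start, wake, 1000));
	return 0;
}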
+--- /dev/null
++++ b/drivers/usb/host/dwc_common_port/dwc_common_linux.c
+@@ -0,0 +1,1433 @@
++#include <linux/kernel.h>
++#include <linux/init.h>
++#include <linux/module.h>
++#include <linux/kthread.h>
++
++#ifdef DWC_CCLIB
++# include "dwc_cc.h"
++#endif
++
++#ifdef DWC_CRYPTOLIB
++# include "dwc_modpow.h"
++# include "dwc_dh.h"
++# include "dwc_crypto.h"
++#endif
++
++#ifdef DWC_NOTIFYLIB
++# include "dwc_notifier.h"
++#endif
++
++/* OS-Level Implementations */
++
++/* This is the Linux kernel implementation of the DWC platform library. */
++#include <linux/moduleparam.h>
++#include <linux/ctype.h>
++#include <linux/crypto.h>
++#include <linux/delay.h>
++#include <linux/device.h>
++#include <linux/dma-mapping.h>
++#include <linux/cdev.h>
++#include <linux/errno.h>
++#include <linux/interrupt.h>
++#include <linux/jiffies.h>
++#include <linux/list.h>
++#include <linux/pci.h>
++#include <linux/random.h>
++#include <linux/scatterlist.h>
++#include <linux/slab.h>
++#include <linux/stat.h>
++#include <linux/string.h>
++#include <linux/timer.h>
++#include <linux/usb.h>
++
++#include <linux/version.h>
++
++#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,24)
++# include <linux/usb/gadget.h>
++#else
++# include <linux/usb_gadget.h>
++#endif
++
++#include <asm/io.h>
++#include <asm/page.h>
++#include <asm/uaccess.h>
++#include <asm/unaligned.h>
++
++#include "dwc_os.h"
++#include "dwc_list.h"
++
++
++/* MISC */
++
++void *DWC_MEMSET(void *dest, uint8_t byte, uint32_t size)
++{
++	return memset(dest, byte, size);
++}
++
++void *DWC_MEMCPY(void *dest, void const *src, uint32_t size)
++{
++	return memcpy(dest, src, size);
++}
++
++void *DWC_MEMMOVE(void *dest, void *src, uint32_t size)
++{
++	return memmove(dest, src, size);
++}
++
++int DWC_MEMCMP(void *m1, void *m2, uint32_t size)
++{
++	return memcmp(m1, m2, size);
++}
++
++int DWC_STRNCMP(void *s1, void *s2, uint32_t size)
++{
++	return strncmp(s1, s2, size);
++}
++
++int DWC_STRCMP(void *s1, void *s2)
++{
++	return strcmp(s1, s2);
++}
++
++int DWC_STRLEN(char const *str)
++{
++	return strlen(str);
++}
++
++char *DWC_STRCPY(char *to, char const *from)
++{
++	return strcpy(to, from);
++}
++
++char *DWC_STRDUP(char const *str)
++{
++	int len = DWC_STRLEN(str) + 1;
++	char *new = DWC_ALLOC_ATOMIC(len);
++
++	if (!new) {
++		return NULL;
++	}
++
++	DWC_MEMCPY(new, str, len);
++	return new;
++}
++
++int DWC_ATOI(const char *str, int32_t *value)
++{
++	char *end = NULL;
++
++	*value = simple_strtol(str, &end, 0);
++	if (*end == '\0') {
++		return 0;
++	}
++
++	return -1;
++}
++
++int DWC_ATOUI(const char *str, uint32_t *value)
++{
++	char *end = NULL;
++
++	*value = simple_strtoul(str, &end, 0);
++	if (*end == '\0') {
++		return 0;
++	}
++
++	return -1;
++}
++
++
++#ifdef DWC_UTFLIB
++/* From usbstring.c */
++
++int DWC_UTF8_TO_UTF16LE(uint8_t const *s, uint16_t *cp, unsigned len)
++{
++	int	count = 0;
++	u8	c;
++	u16	uchar;
++
++	/* this insists on correct encodings, though not minimal ones.
++	 * BUT it currently rejects legit 4-byte UTF-8 code points,
++	 * which need surrogate pairs.  (Unicode 3.1 can use them.)
++	 */
++	while (len != 0 && (c = (u8) *s++) != 0) {
++		if (unlikely(c & 0x80)) {
++			// 2-byte sequence:
++			// 00000yyyyyxxxxxx = 110yyyyy 10xxxxxx
++			if ((c & 0xe0) == 0xc0) {
++				uchar = (c & 0x1f) << 6;
++
++				c = (u8) *s++;
++				if ((c & 0xc0) != 0xc0)
++					goto fail;
++				c &= 0x3f;
++				uchar |= c;
++
++			// 3-byte sequence (most CJKV characters):
++			// zzzzyyyyyyxxxxxx = 1110zzzz 10yyyyyy 10xxxxxx
++			} else if ((c & 0xf0) == 0xe0) {
++				uchar = (c & 0x0f) << 12;
++
++				c = (u8) *s++;
++				if ((c & 0xc0) != 0xc0)
++					goto fail;
++				c &= 0x3f;
++				uchar |= c << 6;
++
++				c = (u8) *s++;
++				if ((c & 0xc0) != 0xc0)
++					goto fail;
++				c &= 0x3f;
++				uchar |= c;
++
++				/* no bogus surrogates */
++				if (0xd800 <= uchar && uchar <= 0xdfff)
++					goto fail;
++
++			// 4-byte sequence (surrogate pairs, currently rare):
++			// 11101110wwwwzzzzyy + 110111yyyyxxxxxx
++			//     = 11110uuu 10uuzzzz 10yyyyyy 10xxxxxx
++			// (uuuuu = wwww + 1)
++			// FIXME accept the surrogate code points (only)
++			} else
++				goto fail;
++		} else
++			uchar = c;
++		put_unaligned (cpu_to_le16 (uchar), cp++);
++		count++;
++		len--;
++	}
++	return count;
++fail:
++	return -1;
++}
++#endif	/* DWC_UTFLIB */
++
++
++/* dwc_debug.h */
++
++dwc_bool_t DWC_IN_IRQ(void)
++{
++	return in_irq();
++}
++
++dwc_bool_t DWC_IN_BH(void)
++{
++	return in_softirq();
++}
++
++void DWC_VPRINTF(char *format, va_list args)
++{
++	vprintk(format, args);
++}
++
++int DWC_VSNPRINTF(char *str, int size, char *format, va_list args)
++{
++	return vsnprintf(str, size, format, args);
++}
++
++void DWC_PRINTF(char *format, ...)
++{
++	va_list args;
++
++	va_start(args, format);
++	DWC_VPRINTF(format, args);
++	va_end(args);
++}
++
++int DWC_SPRINTF(char *buffer, char *format, ...)
++{
++	int retval;
++	va_list args;
++
++	va_start(args, format);
++	retval = vsprintf(buffer, format, args);
++	va_end(args);
++	return retval;
++}
++
++int DWC_SNPRINTF(char *buffer, int size, char *format, ...)
++{
++	int retval;
++	va_list args;
++
++	va_start(args, format);
++	retval = vsnprintf(buffer, size, format, args);
++	va_end(args);
++	return retval;
++}
++
++void __DWC_WARN(char *format, ...)
++{
++	va_list args;
++
++	va_start(args, format);
++	DWC_PRINTF(KERN_WARNING);
++	DWC_VPRINTF(format, args);
++	va_end(args);
++}
++
++void __DWC_ERROR(char *format, ...)
++{
++	va_list args;
++
++	va_start(args, format);
++	DWC_PRINTF(KERN_ERR);
++	DWC_VPRINTF(format, args);
++	va_end(args);
++}
++
++void DWC_EXCEPTION(char *format, ...)
++{
++	va_list args;
++
++	va_start(args, format);
++	DWC_PRINTF(KERN_ERR);
++	DWC_VPRINTF(format, args);
++	va_end(args);
++	BUG_ON(1);
++}
++
++#ifdef DEBUG
++void __DWC_DEBUG(char *format, ...)
++{
++	va_list args;
++
++	va_start(args, format);
++	DWC_PRINTF(KERN_DEBUG);
++	DWC_VPRINTF(format, args);
++	va_end(args);
++}
++#endif
++
++
++/* dwc_mem.h */
++
++#if 0
++dwc_pool_t *DWC_DMA_POOL_CREATE(uint32_t size,
++				uint32_t align,
++				uint32_t alloc)
++{
++	struct dma_pool *pool = dma_pool_create("Pool", NULL,
++						size, align, alloc);
++	return (dwc_pool_t *)pool;
++}
++
++void DWC_DMA_POOL_DESTROY(dwc_pool_t *pool)
++{
++	dma_pool_destroy((struct dma_pool *)pool);
++}
++
++void *DWC_DMA_POOL_ALLOC(dwc_pool_t *pool, uint64_t *dma_addr)
++{
++	return dma_pool_alloc((struct dma_pool *)pool, GFP_KERNEL, dma_addr);
++}
++
++void *DWC_DMA_POOL_ZALLOC(dwc_pool_t *pool, uint64_t *dma_addr)
++{
++	void *vaddr = DWC_DMA_POOL_ALLOC(pool, dma_addr);
++	return vaddr;	/* FIXME: pool element size is not available here; zeroing is left to the caller */
++}
++
++void DWC_DMA_POOL_FREE(dwc_pool_t *pool, void *vaddr, void *daddr)
++{
++	dma_pool_free(pool, vaddr, daddr);
++}
++#endif
++
++void *__DWC_DMA_ALLOC(void *dma_ctx, uint32_t size, dwc_dma_t *dma_addr)
++{
++#ifdef xxCOSIM /* Only works for 32-bit cosim */
++	void *buf = dma_alloc_coherent(dma_ctx, (size_t)size, dma_addr, GFP_KERNEL);
++#else
++	void *buf = dma_alloc_coherent(dma_ctx, (size_t)size, dma_addr, GFP_KERNEL | GFP_DMA32);
++#endif
++	if (!buf) {
++		return NULL;
++	}
++
++	memset(buf, 0, (size_t)size);
++	return buf;
++}
++
++void *__DWC_DMA_ALLOC_ATOMIC(void *dma_ctx, uint32_t size, dwc_dma_t *dma_addr)
++{
++	void *buf = dma_alloc_coherent(NULL, (size_t)size, dma_addr, GFP_ATOMIC);
++	if (!buf) {
++		return NULL;
++	}
++	memset(buf, 0, (size_t)size);
++	return buf;
++}
++
++void __DWC_DMA_FREE(void *dma_ctx, uint32_t size, void *virt_addr, dwc_dma_t dma_addr)
++{
++	dma_free_coherent(dma_ctx, size, virt_addr, dma_addr);
++}
++
++void *__DWC_ALLOC(void *mem_ctx, uint32_t size)
++{
++	return kzalloc(size, GFP_KERNEL);
++}
++
++void *__DWC_ALLOC_ATOMIC(void *mem_ctx, uint32_t size)
++{
++	return kzalloc(size, GFP_ATOMIC);
++}
++
++void __DWC_FREE(void *mem_ctx, void *addr)
++{
++	kfree(addr);
++}
++
++
++#ifdef DWC_CRYPTOLIB
++/* dwc_crypto.h */
++
++void DWC_RANDOM_BYTES(uint8_t *buffer, uint32_t length)
++{
++	get_random_bytes(buffer, length);
++}
++
++int DWC_AES_CBC(uint8_t *message, uint32_t messagelen, uint8_t *key, uint32_t keylen, uint8_t iv[16], uint8_t *out)
++{
++	struct crypto_blkcipher *tfm;
++	struct blkcipher_desc desc;
++	struct scatterlist sgd;
++	struct scatterlist sgs;
++
++	tfm = crypto_alloc_blkcipher("cbc(aes)", 0, CRYPTO_ALG_ASYNC);
++	if (tfm == NULL) {
++		printk("failed to load transform for aes CBC\n");
++		return -1;
++	}
++
++	crypto_blkcipher_setkey(tfm, key, keylen);
++	crypto_blkcipher_set_iv(tfm, iv, 16);
++
++	sg_init_one(&sgd, out, messagelen);
++	sg_init_one(&sgs, message, messagelen);
++
++	desc.tfm = tfm;
++	desc.flags = 0;
++
++	if (crypto_blkcipher_encrypt(&desc, &sgd, &sgs, messagelen)) {
++		crypto_free_blkcipher(tfm);
++		DWC_ERROR("AES CBC encryption failed");
++		return -1;
++	}
++
++	crypto_free_blkcipher(tfm);
++	return 0;
++}
++
++int DWC_SHA256(uint8_t *message, uint32_t len, uint8_t *out)
++{
++	struct crypto_hash *tfm;
++	struct hash_desc desc;
++	struct scatterlist sg;
++
++	tfm = crypto_alloc_hash("sha256", 0, CRYPTO_ALG_ASYNC);
++	if (IS_ERR(tfm)) {
++		DWC_ERROR("Failed to load transform for sha256: %ld\n", PTR_ERR(tfm));
++		return 0;
++	}
++	desc.tfm = tfm;
++	desc.flags = 0;
++
++	sg_init_one(&sg, message, len);
++	crypto_hash_digest(&desc, &sg, len, out);
++	crypto_free_hash(tfm);
++
++	return 1;
++}
++
++int DWC_HMAC_SHA256(uint8_t *message, uint32_t messagelen,
++		    uint8_t *key, uint32_t keylen, uint8_t *out)
++{
++	struct crypto_hash *tfm;
++	struct hash_desc desc;
++	struct scatterlist sg;
++
++	tfm = crypto_alloc_hash("hmac(sha256)", 0, CRYPTO_ALG_ASYNC);
++	if (IS_ERR(tfm)) {
++		DWC_ERROR("Failed to load transform for hmac(sha256): %ld\n", PTR_ERR(tfm));
++		return 0;
++	}
++	desc.tfm = tfm;
++	desc.flags = 0;
++
++	sg_init_one(&sg, message, messagelen);
++	crypto_hash_setkey(tfm, key, keylen);
++	crypto_hash_digest(&desc, &sg, messagelen, out);
++	crypto_free_hash(tfm);
++
++	return 1;
++}
++#endif	/* DWC_CRYPTOLIB */
++
++
++/* Byte Ordering Conversions */
++
++uint32_t DWC_CPU_TO_LE32(uint32_t *p)
++{
++#ifdef __LITTLE_ENDIAN
++	return *p;
++#else
++	uint8_t *u_p = (uint8_t *)p;
++
++	return (u_p[3] | (u_p[2] << 8) | (u_p[1] << 16) | (u_p[0] << 24));
++#endif
++}
++
++uint32_t DWC_CPU_TO_BE32(uint32_t *p)
++{
++#ifdef __BIG_ENDIAN
++	return *p;
++#else
++	uint8_t *u_p = (uint8_t *)p;
++
++	return (u_p[3] | (u_p[2] << 8) | (u_p[1] << 16) | (u_p[0] << 24));
++#endif
++}
++
++uint32_t DWC_LE32_TO_CPU(uint32_t *p)
++{
++#ifdef __LITTLE_ENDIAN
++	return *p;
++#else
++	uint8_t *u_p = (uint8_t *)p;
++
++	return (u_p[3] | (u_p[2] << 8) | (u_p[1] << 16) | (u_p[0] << 24));
++#endif
++}
++
++uint32_t DWC_BE32_TO_CPU(uint32_t *p)
++{
++#ifdef __BIG_ENDIAN
++	return *p;
++#else
++	uint8_t *u_p = (uint8_t *)p;
++
++	return (u_p[3] | (u_p[2] << 8) | (u_p[1] << 16) | (u_p[0] << 24));
++#endif
++}
++
++uint16_t DWC_CPU_TO_LE16(uint16_t *p)
++{
++#ifdef __LITTLE_ENDIAN
++	return *p;
++#else
++	uint8_t *u_p = (uint8_t *)p;
++	return (u_p[1] | (u_p[0] << 8));
++#endif
++}
++
++uint16_t DWC_CPU_TO_BE16(uint16_t *p)
++{
++#ifdef __BIG_ENDIAN
++	return *p;
++#else
++	uint8_t *u_p = (uint8_t *)p;
++	return (u_p[1] | (u_p[0] << 8));
++#endif
++}
++
++uint16_t DWC_LE16_TO_CPU(uint16_t *p)
++{
++#ifdef __LITTLE_ENDIAN
++	return *p;
++#else
++	uint8_t *u_p = (uint8_t *)p;
++	return (u_p[1] | (u_p[0] << 8));
++#endif
++}
++
++uint16_t DWC_BE16_TO_CPU(uint16_t *p)
++{
++#ifdef __BIG_ENDIAN
++	return *p;
++#else
++	uint8_t *u_p = (uint8_t *)p;
++	return (u_p[1] | (u_p[0] << 8));
++#endif
++}
++
++
++/* Registers */
++
++uint32_t DWC_READ_REG32(uint32_t volatile *reg)
++{
++	return readl(reg);
++}
++
++#if 0
++uint64_t DWC_READ_REG64(uint64_t volatile *reg)
++{
++}
++#endif
++
++void DWC_WRITE_REG32(uint32_t volatile *reg, uint32_t value)
++{
++	writel(value, reg);
++}
++
++#if 0
++void DWC_WRITE_REG64(uint64_t volatile *reg, uint64_t value)
++{
++}
++#endif
++
++void DWC_MODIFY_REG32(uint32_t volatile *reg, uint32_t clear_mask, uint32_t set_mask)
++{
++	writel((readl(reg) & ~clear_mask) | set_mask, reg);
++}
++
++#if 0
++void DWC_MODIFY_REG64(uint64_t volatile *reg, uint64_t clear_mask, uint64_t set_mask)
++{
++}
++#endif
++
++
++/* Locking */
++
++dwc_spinlock_t *DWC_SPINLOCK_ALLOC(void)
++{
++	spinlock_t *sl = (spinlock_t *)1;
++
++#if defined(CONFIG_PREEMPT) || defined(CONFIG_SMP)
++	sl = DWC_ALLOC(sizeof(*sl));
++	if (!sl) {
++		DWC_ERROR("Cannot allocate memory for spinlock\n");
++		return NULL;
++	}
++
++	spin_lock_init(sl);
++#endif
++	return (dwc_spinlock_t *)sl;
++}
++
++void DWC_SPINLOCK_FREE(dwc_spinlock_t *lock)
++{
++#if defined(CONFIG_PREEMPT) || defined(CONFIG_SMP)
++	DWC_FREE(lock);
++#endif
++}
++
++void DWC_SPINLOCK(dwc_spinlock_t *lock)
++{
++#if defined(CONFIG_PREEMPT) || defined(CONFIG_SMP)
++	spin_lock((spinlock_t *)lock);
++#endif
++}
++
++void DWC_SPINUNLOCK(dwc_spinlock_t *lock)
++{
++#if defined(CONFIG_PREEMPT) || defined(CONFIG_SMP)
++	spin_unlock((spinlock_t *)lock);
++#endif
++}
++
++void DWC_SPINLOCK_IRQSAVE(dwc_spinlock_t *lock, dwc_irqflags_t *flags)
++{
++	dwc_irqflags_t f;
++
++#if defined(CONFIG_PREEMPT) || defined(CONFIG_SMP)
++	spin_lock_irqsave((spinlock_t *)lock, f);
++#else
++	local_irq_save(f);
++#endif
++	*flags = f;
++}
++
++void DWC_SPINUNLOCK_IRQRESTORE(dwc_spinlock_t *lock, dwc_irqflags_t flags)
++{
++#if defined(CONFIG_PREEMPT) || defined(CONFIG_SMP)
++	spin_unlock_irqrestore((spinlock_t *)lock, flags);
++#else
++	local_irq_restore(flags);
++#endif
++}
++
++dwc_mutex_t *DWC_MUTEX_ALLOC(void)
++{
++	struct mutex *m;
++	dwc_mutex_t *mutex = (dwc_mutex_t *)DWC_ALLOC(sizeof(struct mutex));
++
++	if (!mutex) {
++		DWC_ERROR("Cannot allocate memory for mutex\n");
++		return NULL;
++	}
++
++	m = (struct mutex *)mutex;
++	mutex_init(m);
++	return mutex;
++}
++
++#if (defined(DWC_LINUX) && defined(CONFIG_DEBUG_MUTEXES))
++#else
++void DWC_MUTEX_FREE(dwc_mutex_t *mutex)
++{
++	mutex_destroy((struct mutex *)mutex);
++	DWC_FREE(mutex);
++}
++#endif
++
++void DWC_MUTEX_LOCK(dwc_mutex_t *mutex)
++{
++	struct mutex *m = (struct mutex *)mutex;
++	mutex_lock(m);
++}
++
++int DWC_MUTEX_TRYLOCK(dwc_mutex_t *mutex)
++{
++	struct mutex *m = (struct mutex *)mutex;
++	return mutex_trylock(m);
++}
++
++void DWC_MUTEX_UNLOCK(dwc_mutex_t *mutex)
++{
++	struct mutex *m = (struct mutex *)mutex;
++	mutex_unlock(m);
++}
++
++
++/* Timing */
++
++void DWC_UDELAY(uint32_t usecs)
++{
++	udelay(usecs);
++}
++
++void DWC_MDELAY(uint32_t msecs)
++{
++	mdelay(msecs);
++}
++
++void DWC_MSLEEP(uint32_t msecs)
++{
++	msleep(msecs);
++}
++
++uint32_t DWC_TIME(void)
++{
++	return jiffies_to_msecs(jiffies);
++}
++
++
++/* Timers */
++
++struct dwc_timer {
++	struct timer_list *t;
++	char *name;
++	dwc_timer_callback_t cb;
++	void *data;
++	uint8_t scheduled;
++	dwc_spinlock_t *lock;
++};
++
++static void timer_callback(unsigned long data)
++{
++	dwc_timer_t *timer = (dwc_timer_t *)data;
++	dwc_irqflags_t flags;
++
++	DWC_SPINLOCK_IRQSAVE(timer->lock, &flags);
++	timer->scheduled = 0;
++	DWC_SPINUNLOCK_IRQRESTORE(timer->lock, flags);
++	DWC_DEBUGC("Timer %s callback", timer->name);
++	timer->cb(timer->data);
++}
++
++dwc_timer_t *DWC_TIMER_ALLOC(char *name, dwc_timer_callback_t cb, void *data)
++{
++	dwc_timer_t *t = DWC_ALLOC(sizeof(*t));
++
++	if (!t) {
++		DWC_ERROR("Cannot allocate memory for timer");
++		return NULL;
++	}
++
++	t->t = DWC_ALLOC(sizeof(*t->t));
++	if (!t->t) {
++		DWC_ERROR("Cannot allocate memory for timer->t");
++		goto no_timer;
++	}
++
++	t->name = DWC_STRDUP(name);
++	if (!t->name) {
++		DWC_ERROR("Cannot allocate memory for timer->name");
++		goto no_name;
++	}
++
++#if (defined(DWC_LINUX) && defined(CONFIG_DEBUG_SPINLOCK))
++	DWC_SPINLOCK_ALLOC_LINUX_DEBUG(t->lock);
++#else
++	t->lock = DWC_SPINLOCK_ALLOC();
++#endif
++	if (!t->lock) {
++		DWC_ERROR("Cannot allocate memory for lock");
++		goto no_lock;
++	}
++
++	t->scheduled = 0;
++	t->t->expires = jiffies;
++	setup_timer(t->t, timer_callback, (unsigned long)t);
++
++	t->cb = cb;
++	t->data = data;
++
++	return t;
++
++ no_lock:
++	DWC_FREE(t->name);
++ no_name:
++	DWC_FREE(t->t);
++ no_timer:
++	DWC_FREE(t);
++	return NULL;
++}
++
++void DWC_TIMER_FREE(dwc_timer_t *timer)
++{
++	dwc_irqflags_t flags;
++
++	DWC_SPINLOCK_IRQSAVE(timer->lock, &flags);
++
++	if (timer->scheduled) {
++		del_timer(timer->t);
++		timer->scheduled = 0;
++	}
++
++	DWC_SPINUNLOCK_IRQRESTORE(timer->lock, flags);
++	DWC_SPINLOCK_FREE(timer->lock);
++	DWC_FREE(timer->t);
++	DWC_FREE(timer->name);
++	DWC_FREE(timer);
++}
++
++void DWC_TIMER_SCHEDULE(dwc_timer_t *timer, uint32_t time)
++{
++	dwc_irqflags_t flags;
++
++	DWC_SPINLOCK_IRQSAVE(timer->lock, &flags);
++
++	if (!timer->scheduled) {
++		timer->scheduled = 1;
++		DWC_DEBUGC("Scheduling timer %s to expire in +%d msec", timer->name, time);
++		timer->t->expires = jiffies + msecs_to_jiffies(time);
++		add_timer(timer->t);
++	} else {
++		DWC_DEBUGC("Modifying timer %s to expire in +%d msec", timer->name, time);
++		mod_timer(timer->t, jiffies + msecs_to_jiffies(time));
++	}
++
++	DWC_SPINUNLOCK_IRQRESTORE(timer->lock, flags);
++}
++
++void DWC_TIMER_CANCEL(dwc_timer_t *timer)
++{
++	del_timer(timer->t);
++}
++
++
++/* Wait Queues */
++
++struct dwc_waitq {
++	wait_queue_head_t queue;
++	int abort;
++};
++
++dwc_waitq_t *DWC_WAITQ_ALLOC(void)
++{
++	dwc_waitq_t *wq = DWC_ALLOC(sizeof(*wq));
++
++	if (!wq) {
++		DWC_ERROR("Cannot allocate memory for waitqueue\n");
++		return NULL;
++	}
++
++	init_waitqueue_head(&wq->queue);
++	wq->abort = 0;
++	return wq;
++}
++
++void DWC_WAITQ_FREE(dwc_waitq_t *wq)
++{
++	DWC_FREE(wq);
++}
++
++int32_t DWC_WAITQ_WAIT(dwc_waitq_t *wq, dwc_waitq_condition_t cond, void *data)
++{
++	int result = wait_event_interruptible(wq->queue,
++					      cond(data) || wq->abort);
++	if (result == -ERESTARTSYS) {
++		wq->abort = 0;
++		return -DWC_E_RESTART;
++	}
++
++	if (wq->abort == 1) {
++		wq->abort = 0;
++		return -DWC_E_ABORT;
++	}
++
++	wq->abort = 0;
++
++	if (result == 0) {
++		return 0;
++	}
++
++	return -DWC_E_UNKNOWN;
++}
++
++int32_t DWC_WAITQ_WAIT_TIMEOUT(dwc_waitq_t *wq, dwc_waitq_condition_t cond,
++			       void *data, int32_t msecs)
++{
++	int32_t tmsecs;
++	int result = wait_event_interruptible_timeout(wq->queue,
++						      cond(data) || wq->abort,
++						      msecs_to_jiffies(msecs));
++	if (result == -ERESTARTSYS) {
++		wq->abort = 0;
++		return -DWC_E_RESTART;
++	}
++
++	if (wq->abort == 1) {
++		wq->abort = 0;
++		return -DWC_E_ABORT;
++	}
++
++	wq->abort = 0;
++
++	if (result > 0) {
++		tmsecs = jiffies_to_msecs(result);
++		if (!tmsecs) {
++			return 1;
++		}
++
++		return tmsecs;
++	}
++
++	if (result == 0) {
++		return -DWC_E_TIMEOUT;
++	}
++
++	return -DWC_E_UNKNOWN;
++}
++
++void DWC_WAITQ_TRIGGER(dwc_waitq_t *wq)
++{
++	wq->abort = 0;
++	wake_up_interruptible(&wq->queue);
++}
++
++void DWC_WAITQ_ABORT(dwc_waitq_t *wq)
++{
++	wq->abort = 1;
++	wake_up_interruptible(&wq->queue);
++}
++
++
++/* Threading */
++
++dwc_thread_t *DWC_THREAD_RUN(dwc_thread_function_t func, char *name, void *data)
++{
++	struct task_struct *thread = kthread_run(func, data, name);
++
++	if (thread == ERR_PTR(-ENOMEM)) {
++		return NULL;
++	}
++
++	return (dwc_thread_t *)thread;
++}
++
++int DWC_THREAD_STOP(dwc_thread_t *thread)
++{
++	return kthread_stop((struct task_struct *)thread);
++}
++
++dwc_bool_t DWC_THREAD_SHOULD_STOP(void)
++{
++	return kthread_should_stop();
++}
++
++
++/* tasklets
++ - run in interrupt context (cannot sleep)
++ - each tasklet runs on a single CPU
++ - different tasklets can be running simultaneously on different CPUs
++ */
++struct dwc_tasklet {
++	struct tasklet_struct t;
++	dwc_tasklet_callback_t cb;
++	void *data;
++};
++
++static void tasklet_callback(unsigned long data)
++{
++	dwc_tasklet_t *t = (dwc_tasklet_t *)data;
++	t->cb(t->data);
++}
++
++dwc_tasklet_t *DWC_TASK_ALLOC(char *name, dwc_tasklet_callback_t cb, void *data)
++{
++	dwc_tasklet_t *t = DWC_ALLOC(sizeof(*t));
++
++	if (t) {
++		t->cb = cb;
++		t->data = data;
++		tasklet_init(&t->t, tasklet_callback, (unsigned long)t);
++	} else {
++		DWC_ERROR("Cannot allocate memory for tasklet\n");
++	}
++
++	return t;
++}
++
++void DWC_TASK_FREE(dwc_tasklet_t *task)
++{
++	DWC_FREE(task);
++}
++
++void DWC_TASK_SCHEDULE(dwc_tasklet_t *task)
++{
++	tasklet_schedule(&task->t);
++}
++
++void DWC_TASK_HI_SCHEDULE(dwc_tasklet_t *task)
++{
++	tasklet_hi_schedule(&task->t);
++}
++
++
++/* workqueues
++ - run in process context (can sleep)
++ */
++typedef struct work_container {
++	dwc_work_callback_t cb;
++	void *data;
++	dwc_workq_t *wq;
++	char *name;
++
++#ifdef DEBUG
++	DWC_CIRCLEQ_ENTRY(work_container) entry;
++#endif
++	struct delayed_work work;
++} work_container_t;
++
++#ifdef DEBUG
++DWC_CIRCLEQ_HEAD(work_container_queue, work_container);
++#endif
++
++struct dwc_workq {
++	struct workqueue_struct *wq;
++	dwc_spinlock_t *lock;
++	dwc_waitq_t *waitq;
++	int pending;
++
++#ifdef DEBUG
++	struct work_container_queue entries;
++#endif
++};
++
++static void do_work(struct work_struct *work)
++{
++	dwc_irqflags_t flags;
++	struct delayed_work *dw = container_of(work, struct delayed_work, work);
++	work_container_t *container = container_of(dw, struct work_container, work);
++	dwc_workq_t *wq = container->wq;
++
++	container->cb(container->data);
++
++#ifdef DEBUG
++	DWC_CIRCLEQ_REMOVE(&wq->entries, container, entry);
++#endif
++	DWC_DEBUGC("Work done: %s, container=%p", container->name, container);
++	if (container->name) {
++		DWC_FREE(container->name);
++	}
++	DWC_FREE(container);
++
++	DWC_SPINLOCK_IRQSAVE(wq->lock, &flags);
++	wq->pending--;
++	DWC_SPINUNLOCK_IRQRESTORE(wq->lock, flags);
++	DWC_WAITQ_TRIGGER(wq->waitq);
++}
++
++static int work_done(void *data)
++{
++	dwc_workq_t *workq = (dwc_workq_t *)data;
++	return workq->pending == 0;
++}
++
++int DWC_WORKQ_WAIT_WORK_DONE(dwc_workq_t *workq, int timeout)
++{
++	return DWC_WAITQ_WAIT_TIMEOUT(workq->waitq, work_done, workq, timeout);
++}
++
++dwc_workq_t *DWC_WORKQ_ALLOC(char *name)
++{
++	dwc_workq_t *wq = DWC_ALLOC(sizeof(*wq));
++
++	if (!wq) {
++		return NULL;
++	}
++
++	wq->wq = create_singlethread_workqueue(name);
++	if (!wq->wq) {
++		goto no_wq;
++	}
++
++	wq->pending = 0;
++
++#if (defined(DWC_LINUX) && defined(CONFIG_DEBUG_SPINLOCK))
++	DWC_SPINLOCK_ALLOC_LINUX_DEBUG(wq->lock);
++#else
++	wq->lock = DWC_SPINLOCK_ALLOC();
++#endif
++	if (!wq->lock) {
++		goto no_lock;
++	}
++
++	wq->waitq = DWC_WAITQ_ALLOC();
++	if (!wq->waitq) {
++		goto no_waitq;
++	}
++
++#ifdef DEBUG
++	DWC_CIRCLEQ_INIT(&wq->entries);
++#endif
++	return wq;
++
++ no_waitq:
++	DWC_SPINLOCK_FREE(wq->lock);
++ no_lock:
++	destroy_workqueue(wq->wq);
++ no_wq:
++	DWC_FREE(wq);
++
++	return NULL;
++}
++
++void DWC_WORKQ_FREE(dwc_workq_t *wq)
++{
++#ifdef DEBUG
++	if (wq->pending != 0) {
++		struct work_container *wc;
++		DWC_ERROR("Destroying work queue with pending work");
++		DWC_CIRCLEQ_FOREACH(wc, &wq->entries, entry) {
++			DWC_ERROR("Work %s still pending", wc->name);
++		}
++	}
++#endif
++	destroy_workqueue(wq->wq);
++	DWC_SPINLOCK_FREE(wq->lock);
++	DWC_WAITQ_FREE(wq->waitq);
++	DWC_FREE(wq);
++}
++
++void DWC_WORKQ_SCHEDULE(dwc_workq_t *wq, dwc_work_callback_t cb, void *data,
++			char *format, ...)
++{
++	dwc_irqflags_t flags;
++	work_container_t *container;
++	static char name[128];
++	va_list args;
++
++	va_start(args, format);
++	DWC_VSNPRINTF(name, 128, format, args);
++	va_end(args);
++
++	DWC_SPINLOCK_IRQSAVE(wq->lock, &flags);
++	wq->pending++;
++	DWC_SPINUNLOCK_IRQRESTORE(wq->lock, flags);
++	DWC_WAITQ_TRIGGER(wq->waitq);
++
++	container = DWC_ALLOC_ATOMIC(sizeof(*container));
++	if (!container) {
++		DWC_ERROR("Cannot allocate memory for container\n");
++		return;
++	}
++
++	container->name = DWC_STRDUP(name);
++	if (!container->name) {
++		DWC_ERROR("Cannot allocate memory for container->name\n");
++		DWC_FREE(container);
++		return;
++	}
++
++	container->cb = cb;
++	container->data = data;
++	container->wq = wq;
++	DWC_DEBUGC("Queueing work: %s, container=%p", container->name, container);
++	INIT_WORK(&container->work.work, do_work);
++
++#ifdef DEBUG
++	DWC_CIRCLEQ_INSERT_TAIL(&wq->entries, container, entry);
++#endif
++	queue_work(wq->wq, &container->work.work);
++}
++
++void DWC_WORKQ_SCHEDULE_DELAYED(dwc_workq_t *wq, dwc_work_callback_t cb,
++				void *data, uint32_t time, char *format, ...)
++{
++	dwc_irqflags_t flags;
++	work_container_t *container;
++	static char name[128];
++	va_list args;
++
++	va_start(args, format);
++	DWC_VSNPRINTF(name, 128, format, args);
++	va_end(args);
++
++	DWC_SPINLOCK_IRQSAVE(wq->lock, &flags);
++	wq->pending++;
++	DWC_SPINUNLOCK_IRQRESTORE(wq->lock, flags);
++	DWC_WAITQ_TRIGGER(wq->waitq);
++
++	container = DWC_ALLOC_ATOMIC(sizeof(*container));
++	if (!container) {
++		DWC_ERROR("Cannot allocate memory for container\n");
++		return;
++	}
++
++	container->name = DWC_STRDUP(name);
++	if (!container->name) {
++		DWC_ERROR("Cannot allocate memory for container->name\n");
++		DWC_FREE(container);
++		return;
++	}
++
++	container->cb = cb;
++	container->data = data;
++	container->wq = wq;
++	DWC_DEBUGC("Queueing work: %s, container=%p", container->name, container);
++	INIT_DELAYED_WORK(&container->work, do_work);
++
++#ifdef DEBUG
++	DWC_CIRCLEQ_INSERT_TAIL(&wq->entries, container, entry);
++#endif
++	queue_delayed_work(wq->wq, &container->work, msecs_to_jiffies(time));
++}
++
++int DWC_WORKQ_PENDING(dwc_workq_t *wq)
++{
++	return wq->pending;
++}
++
++
++#ifdef DWC_LIBMODULE
++
++#ifdef DWC_CCLIB
++/* CC */
++EXPORT_SYMBOL(dwc_cc_if_alloc);
++EXPORT_SYMBOL(dwc_cc_if_free);
++EXPORT_SYMBOL(dwc_cc_clear);
++EXPORT_SYMBOL(dwc_cc_add);
++EXPORT_SYMBOL(dwc_cc_remove);
++EXPORT_SYMBOL(dwc_cc_change);
++EXPORT_SYMBOL(dwc_cc_data_for_save);
++EXPORT_SYMBOL(dwc_cc_restore_from_data);
++EXPORT_SYMBOL(dwc_cc_match_chid);
++EXPORT_SYMBOL(dwc_cc_match_cdid);
++EXPORT_SYMBOL(dwc_cc_ck);
++EXPORT_SYMBOL(dwc_cc_chid);
++EXPORT_SYMBOL(dwc_cc_cdid);
++EXPORT_SYMBOL(dwc_cc_name);
++#endif	/* DWC_CCLIB */
++
++#ifdef DWC_CRYPTOLIB
++# ifndef CONFIG_MACH_IPMATE
++/* Modpow */
++EXPORT_SYMBOL(dwc_modpow);
++
++/* DH */
++EXPORT_SYMBOL(dwc_dh_modpow);
++EXPORT_SYMBOL(dwc_dh_derive_keys);
++EXPORT_SYMBOL(dwc_dh_pk);
++# endif	/* CONFIG_MACH_IPMATE */
++
++/* Crypto */
++EXPORT_SYMBOL(dwc_wusb_aes_encrypt);
++EXPORT_SYMBOL(dwc_wusb_cmf);
++EXPORT_SYMBOL(dwc_wusb_prf);
++EXPORT_SYMBOL(dwc_wusb_fill_ccm_nonce);
++EXPORT_SYMBOL(dwc_wusb_gen_nonce);
++EXPORT_SYMBOL(dwc_wusb_gen_key);
++EXPORT_SYMBOL(dwc_wusb_gen_mic);
++#endif	/* DWC_CRYPTOLIB */
++
++/* Notification */
++#ifdef DWC_NOTIFYLIB
++EXPORT_SYMBOL(dwc_alloc_notification_manager);
++EXPORT_SYMBOL(dwc_free_notification_manager);
++EXPORT_SYMBOL(dwc_register_notifier);
++EXPORT_SYMBOL(dwc_unregister_notifier);
++EXPORT_SYMBOL(dwc_add_observer);
++EXPORT_SYMBOL(dwc_remove_observer);
++EXPORT_SYMBOL(dwc_notify);
++#endif
++
++/* Memory Debugging Routines */
++#ifdef DWC_DEBUG_MEMORY
++EXPORT_SYMBOL(dwc_alloc_debug);
++EXPORT_SYMBOL(dwc_alloc_atomic_debug);
++EXPORT_SYMBOL(dwc_free_debug);
++EXPORT_SYMBOL(dwc_dma_alloc_debug);
++EXPORT_SYMBOL(dwc_dma_free_debug);
++#endif
++
++EXPORT_SYMBOL(DWC_MEMSET);
++EXPORT_SYMBOL(DWC_MEMCPY);
++EXPORT_SYMBOL(DWC_MEMMOVE);
++EXPORT_SYMBOL(DWC_MEMCMP);
++EXPORT_SYMBOL(DWC_STRNCMP);
++EXPORT_SYMBOL(DWC_STRCMP);
++EXPORT_SYMBOL(DWC_STRLEN);
++EXPORT_SYMBOL(DWC_STRCPY);
++EXPORT_SYMBOL(DWC_STRDUP);
++EXPORT_SYMBOL(DWC_ATOI);
++EXPORT_SYMBOL(DWC_ATOUI);
++
++#ifdef DWC_UTFLIB
++EXPORT_SYMBOL(DWC_UTF8_TO_UTF16LE);
++#endif	/* DWC_UTFLIB */
++
++EXPORT_SYMBOL(DWC_IN_IRQ);
++EXPORT_SYMBOL(DWC_IN_BH);
++EXPORT_SYMBOL(DWC_VPRINTF);
++EXPORT_SYMBOL(DWC_VSNPRINTF);
++EXPORT_SYMBOL(DWC_PRINTF);
++EXPORT_SYMBOL(DWC_SPRINTF);
++EXPORT_SYMBOL(DWC_SNPRINTF);
++EXPORT_SYMBOL(__DWC_WARN);
++EXPORT_SYMBOL(__DWC_ERROR);
++EXPORT_SYMBOL(DWC_EXCEPTION);
++
++#ifdef DEBUG
++EXPORT_SYMBOL(__DWC_DEBUG);
++#endif
++
++EXPORT_SYMBOL(__DWC_DMA_ALLOC);
++EXPORT_SYMBOL(__DWC_DMA_ALLOC_ATOMIC);
++EXPORT_SYMBOL(__DWC_DMA_FREE);
++EXPORT_SYMBOL(__DWC_ALLOC);
++EXPORT_SYMBOL(__DWC_ALLOC_ATOMIC);
++EXPORT_SYMBOL(__DWC_FREE);
++
++#ifdef DWC_CRYPTOLIB
++EXPORT_SYMBOL(DWC_RANDOM_BYTES);
++EXPORT_SYMBOL(DWC_AES_CBC);
++EXPORT_SYMBOL(DWC_SHA256);
++EXPORT_SYMBOL(DWC_HMAC_SHA256);
++#endif
++
++EXPORT_SYMBOL(DWC_CPU_TO_LE32);
++EXPORT_SYMBOL(DWC_CPU_TO_BE32);
++EXPORT_SYMBOL(DWC_LE32_TO_CPU);
++EXPORT_SYMBOL(DWC_BE32_TO_CPU);
++EXPORT_SYMBOL(DWC_CPU_TO_LE16);
++EXPORT_SYMBOL(DWC_CPU_TO_BE16);
++EXPORT_SYMBOL(DWC_LE16_TO_CPU);
++EXPORT_SYMBOL(DWC_BE16_TO_CPU);
++EXPORT_SYMBOL(DWC_READ_REG32);
++EXPORT_SYMBOL(DWC_WRITE_REG32);
++EXPORT_SYMBOL(DWC_MODIFY_REG32);
++
++#if 0
++EXPORT_SYMBOL(DWC_READ_REG64);
++EXPORT_SYMBOL(DWC_WRITE_REG64);
++EXPORT_SYMBOL(DWC_MODIFY_REG64);
++#endif
++
++EXPORT_SYMBOL(DWC_SPINLOCK_ALLOC);
++EXPORT_SYMBOL(DWC_SPINLOCK_FREE);
++EXPORT_SYMBOL(DWC_SPINLOCK);
++EXPORT_SYMBOL(DWC_SPINUNLOCK);
++EXPORT_SYMBOL(DWC_SPINLOCK_IRQSAVE);
++EXPORT_SYMBOL(DWC_SPINUNLOCK_IRQRESTORE);
++EXPORT_SYMBOL(DWC_MUTEX_ALLOC);
++
++#if (!defined(DWC_LINUX) || !defined(CONFIG_DEBUG_MUTEXES))
++EXPORT_SYMBOL(DWC_MUTEX_FREE);
++#endif
++
++EXPORT_SYMBOL(DWC_MUTEX_LOCK);
++EXPORT_SYMBOL(DWC_MUTEX_TRYLOCK);
++EXPORT_SYMBOL(DWC_MUTEX_UNLOCK);
++EXPORT_SYMBOL(DWC_UDELAY);
++EXPORT_SYMBOL(DWC_MDELAY);
++EXPORT_SYMBOL(DWC_MSLEEP);
++EXPORT_SYMBOL(DWC_TIME);
++EXPORT_SYMBOL(DWC_TIMER_ALLOC);
++EXPORT_SYMBOL(DWC_TIMER_FREE);
++EXPORT_SYMBOL(DWC_TIMER_SCHEDULE);
++EXPORT_SYMBOL(DWC_TIMER_CANCEL);
++EXPORT_SYMBOL(DWC_WAITQ_ALLOC);
++EXPORT_SYMBOL(DWC_WAITQ_FREE);
++EXPORT_SYMBOL(DWC_WAITQ_WAIT);
++EXPORT_SYMBOL(DWC_WAITQ_WAIT_TIMEOUT);
++EXPORT_SYMBOL(DWC_WAITQ_TRIGGER);
++EXPORT_SYMBOL(DWC_WAITQ_ABORT);
++EXPORT_SYMBOL(DWC_THREAD_RUN);
++EXPORT_SYMBOL(DWC_THREAD_STOP);
++EXPORT_SYMBOL(DWC_THREAD_SHOULD_STOP);
++EXPORT_SYMBOL(DWC_TASK_ALLOC);
++EXPORT_SYMBOL(DWC_TASK_FREE);
++EXPORT_SYMBOL(DWC_TASK_SCHEDULE);
++EXPORT_SYMBOL(DWC_WORKQ_WAIT_WORK_DONE);
++EXPORT_SYMBOL(DWC_WORKQ_ALLOC);
++EXPORT_SYMBOL(DWC_WORKQ_FREE);
++EXPORT_SYMBOL(DWC_WORKQ_SCHEDULE);
++EXPORT_SYMBOL(DWC_WORKQ_SCHEDULE_DELAYED);
++EXPORT_SYMBOL(DWC_WORKQ_PENDING);
++
++static int dwc_common_port_init_module(void)
++{
++	int result = 0;
++
++	printk(KERN_DEBUG "Module dwc_common_port init\n" );
++
++#ifdef DWC_DEBUG_MEMORY
++	result = dwc_memory_debug_start(NULL);
++	if (result) {
++		printk(KERN_ERR
++		       "dwc_memory_debug_start() failed with error %d\n",
++		       result);
++		return result;
++	}
++#endif
++
++#ifdef DWC_NOTIFYLIB
++	result = dwc_alloc_notification_manager(NULL, NULL);
++	if (result) {
++		printk(KERN_ERR
++		       "dwc_alloc_notification_manager() failed with error %d\n",
++		       result);
++		return result;
++	}
++#endif
++	return result;
++}
++
++static void dwc_common_port_exit_module(void)
++{
++	printk(KERN_DEBUG "Module dwc_common_port exit\n" );
++
++#ifdef DWC_NOTIFYLIB
++	dwc_free_notification_manager();
++#endif
++
++#ifdef DWC_DEBUG_MEMORY
++	dwc_memory_debug_stop();
++#endif
++}
++
++module_init(dwc_common_port_init_module);
++module_exit(dwc_common_port_exit_module);
++
++MODULE_DESCRIPTION("DWC Common Library - Portable version");
++MODULE_AUTHOR("Synopsys Inc.");
++MODULE_LICENSE ("GPL");
++
++#endif	/* DWC_LIBMODULE */
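
A minimal sketch of how a driver built on this Linux port would typically
consume the workqueue API defined above, assuming the prototypes from
dwc_os.h (added by this patch) and that dwc_work_callback_t is a
void (*)(void *) callback; names such as my_deferred_cb and example_defer
are illustrative only, not part of the patch:

#include "dwc_os.h"	/* DWC_WORKQ_*, DWC_ERROR prototypes from this patch */

/* Illustrative deferred callback: runs in process context on the
 * single-threaded workqueue created by DWC_WORKQ_ALLOC(). */
static void my_deferred_cb(void *data)
{
	/* long-running, sleepable work on 'data' goes here */
}

static int example_defer(void *data)
{
	dwc_workq_t *wq = DWC_WORKQ_ALLOC("dwc_example");

	if (!wq)
		return -1;

	/* The format/varargs only name the work item (kept for DEBUG bookkeeping). */
	DWC_WORKQ_SCHEDULE(wq, my_deferred_cb, data, "item %d", 1);
	DWC_WORKQ_SCHEDULE_DELAYED(wq, my_deferred_cb, data, 100 /* ms */,
				   "delayed item %d", 2);

	/* Block until the pending counter drops to zero, or give up after 1s. */
	if (DWC_WORKQ_WAIT_WORK_DONE(wq, 1000) < 0)
		DWC_ERROR("deferred work did not complete in time\n");

	DWC_WORKQ_FREE(wq);
	return 0;
}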
+--- /dev/null
++++ b/drivers/usb/host/dwc_common_port/dwc_common_nbsd.c
+@@ -0,0 +1,1275 @@
++#include "dwc_os.h"
++#include "dwc_list.h"
++
++#ifdef DWC_CCLIB
++# include "dwc_cc.h"
++#endif
++
++#ifdef DWC_CRYPTOLIB
++# include "dwc_modpow.h"
++# include "dwc_dh.h"
++# include "dwc_crypto.h"
++#endif
++
++#ifdef DWC_NOTIFYLIB
++# include "dwc_notifier.h"
++#endif
++
++/* OS-Level Implementations */
++
++/* This is the NetBSD 4.0.1 kernel implementation of the DWC platform library. */
++
++
++/* MISC */
++
++void *DWC_MEMSET(void *dest, uint8_t byte, uint32_t size)
++{
++	return memset(dest, byte, size);
++}
++
++void *DWC_MEMCPY(void *dest, void const *src, uint32_t size)
++{
++	return memcpy(dest, src, size);
++}
++
++void *DWC_MEMMOVE(void *dest, void *src, uint32_t size)
++{
++	bcopy(src, dest, size);
++	return dest;
++}
++
++int DWC_MEMCMP(void *m1, void *m2, uint32_t size)
++{
++	return memcmp(m1, m2, size);
++}
++
++int DWC_STRNCMP(void *s1, void *s2, uint32_t size)
++{
++	return strncmp(s1, s2, size);
++}
++
++int DWC_STRCMP(void *s1, void *s2)
++{
++	return strcmp(s1, s2);
++}
++
++int DWC_STRLEN(char const *str)
++{
++	return strlen(str);
++}
++
++char *DWC_STRCPY(char *to, char const *from)
++{
++	return strcpy(to, from);
++}
++
++char *DWC_STRDUP(char const *str)
++{
++	int len = DWC_STRLEN(str) + 1;
++	char *new = DWC_ALLOC_ATOMIC(len);
++
++	if (!new) {
++		return NULL;
++	}
++
++	DWC_MEMCPY(new, str, len);
++	return new;
++}
++
++int DWC_ATOI(char *str, int32_t *value)
++{
++	char *end = NULL;
++
++	/* NetBSD doesn't have 'strtol' in the kernel, but 'strtoul'
++	 * should be equivalent on 2's complement machines
++	 */
++	*value = strtoul(str, &end, 0);
++	if (*end == '\0') {
++		return 0;
++	}
++
++	return -1;
++}
++
++int DWC_ATOUI(char *str, uint32_t *value)
++{
++	char *end = NULL;
++
++	*value = strtoul(str, &end, 0);
++	if (*end == '\0') {
++		return 0;
++	}
++
++	return -1;
++}
++
++
++#ifdef DWC_UTFLIB
++/* From usbstring.c */
++
++int DWC_UTF8_TO_UTF16LE(uint8_t const *s, uint16_t *cp, unsigned len)
++{
++	int	count = 0;
++	u8	c;
++	u16	uchar;
++
++	/* this insists on correct encodings, though not minimal ones.
++	 * BUT it currently rejects legit 4-byte UTF-8 code points,
++	 * which need surrogate pairs.  (Unicode 3.1 can use them.)
++	 */
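++	/* For example: the two-byte sequence 0xC3 0xA9 (U+00E9) decodes as
++	 * uchar = (0xC3 & 0x1f) << 6 = 0x00C0, then uchar |= (0xA9 & 0x3f)
++	 * = 0x00E9, which is stored as the little-endian unit 0xE9 0x00.
++	 */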
++	while (len != 0 && (c = (u8) *s++) != 0) {
++		if (unlikely(c & 0x80)) {
++			// 2-byte sequence:
++			// 00000yyyyyxxxxxx = 110yyyyy 10xxxxxx
++			if ((c & 0xe0) == 0xc0) {
++				uchar = (c & 0x1f) << 6;
++
++				c = (u8) *s++;
++				if ((c & 0xc0) != 0x80)
++					goto fail;
++				c &= 0x3f;
++				uchar |= c;
++
++			// 3-byte sequence (most CJKV characters):
++			// zzzzyyyyyyxxxxxx = 1110zzzz 10yyyyyy 10xxxxxx
++			} else if ((c & 0xf0) == 0xe0) {
++				uchar = (c & 0x0f) << 12;
++
++				c = (u8) *s++;
++				if ((c & 0xc0) != 0x80)
++					goto fail;
++				c &= 0x3f;
++				uchar |= c << 6;
++
++				c = (u8) *s++;
++				if ((c & 0xc0) != 0x80)
++					goto fail;
++				c &= 0x3f;
++				uchar |= c;
++
++				/* no bogus surrogates */
++				if (0xd800 <= uchar && uchar <= 0xdfff)
++					goto fail;
++
++			// 4-byte sequence (surrogate pairs, currently rare):
++			// 11101110wwwwzzzzyy + 110111yyyyxxxxxx
++			//     = 11110uuu 10uuzzzz 10yyyyyy 10xxxxxx
++			// (uuuuu = wwww + 1)
++			// FIXME accept the surrogate code points (only)
++			} else
++				goto fail;
++		} else
++			uchar = c;
++		put_unaligned (cpu_to_le16 (uchar), cp++);
++		count++;
++		len--;
++	}
++	return count;
++fail:
++	return -1;
++}
++
++#endif	/* DWC_UTFLIB */
++
++
++/* dwc_debug.h */
++
++dwc_bool_t DWC_IN_IRQ(void)
++{
++//	return in_irq();
++	return 0;
++}
++
++dwc_bool_t DWC_IN_BH(void)
++{
++//	return in_softirq();
++	return 0;
++}
++
++void DWC_VPRINTF(char *format, va_list args)
++{
++	vprintf(format, args);
++}
++
++int DWC_VSNPRINTF(char *str, int size, char *format, va_list args)
++{
++	return vsnprintf(str, size, format, args);
++}
++
++void DWC_PRINTF(char *format, ...)
++{
++	va_list args;
++
++	va_start(args, format);
++	DWC_VPRINTF(format, args);
++	va_end(args);
++}
++
++int DWC_SPRINTF(char *buffer, char *format, ...)
++{
++	int retval;
++	va_list args;
++
++	va_start(args, format);
++	retval = vsprintf(buffer, format, args);
++	va_end(args);
++	return retval;
++}
++
++int DWC_SNPRINTF(char *buffer, int size, char *format, ...)
++{
++	int retval;
++	va_list args;
++
++	va_start(args, format);
++	retval = vsnprintf(buffer, size, format, args);
++	va_end(args);
++	return retval;
++}
++
++void __DWC_WARN(char *format, ...)
++{
++	va_list args;
++
++	va_start(args, format);
++	DWC_VPRINTF(format, args);
++	va_end(args);
++}
++
++void __DWC_ERROR(char *format, ...)
++{
++	va_list args;
++
++	va_start(args, format);
++	DWC_VPRINTF(format, args);
++	va_end(args);
++}
++
++void DWC_EXCEPTION(char *format, ...)
++{
++	va_list args;
++
++	va_start(args, format);
++	DWC_VPRINTF(format, args);
++	va_end(args);
++//	BUG_ON(1);	???
++}
++
++#ifdef DEBUG
++void __DWC_DEBUG(char *format, ...)
++{
++	va_list args;
++
++	va_start(args, format);
++	DWC_VPRINTF(format, args);
++	va_end(args);
++}
++#endif
++
++
++/* dwc_mem.h */
++
++#if 0
++dwc_pool_t *DWC_DMA_POOL_CREATE(uint32_t size,
++				uint32_t align,
++				uint32_t alloc)
++{
++	struct dma_pool *pool = dma_pool_create("Pool", NULL,
++						size, align, alloc);
++	return (dwc_pool_t *)pool;
++}
++
++void DWC_DMA_POOL_DESTROY(dwc_pool_t *pool)
++{
++	dma_pool_destroy((struct dma_pool *)pool);
++}
++
++void *DWC_DMA_POOL_ALLOC(dwc_pool_t *pool, uint64_t *dma_addr)
++{
++//	return dma_pool_alloc((struct dma_pool *)pool, GFP_KERNEL, dma_addr);
++	return dma_pool_alloc((struct dma_pool *)pool, M_WAITOK, dma_addr);
++}
++
++void *DWC_DMA_POOL_ZALLOC(dwc_pool_t *pool, uint64_t *dma_addr)
++{
++	void *vaddr = DWC_DMA_POOL_ALLOC(pool, dma_addr);
++
++	/* The pool element size is not available here, so the buffer cannot
++	 * be zeroed; this helper is compiled out (see the #if 0 above). */
++	return vaddr;
++}
++
++void DWC_DMA_POOL_FREE(dwc_pool_t *pool, void *vaddr, void *daddr)
++{
++	dma_pool_free(pool, vaddr, daddr);
++}
++#endif
++
++void *__DWC_DMA_ALLOC(void *dma_ctx, uint32_t size, dwc_dma_t *dma_addr)
++{
++	dwc_dmactx_t *dma = (dwc_dmactx_t *)dma_ctx;
++	int error;
++
++	error = bus_dmamem_alloc(dma->dma_tag, size, 1, size, dma->segs,
++				 sizeof(dma->segs) / sizeof(dma->segs[0]),
++				 &dma->nsegs, BUS_DMA_NOWAIT);
++	if (error) {
++		printf("%s: bus_dmamem_alloc(%ju) failed: %d\n", __func__,
++		       (uintmax_t)size, error);
++		goto fail_0;
++	}
++
++	error = bus_dmamem_map(dma->dma_tag, dma->segs, dma->nsegs, size,
++			       (caddr_t *)&dma->dma_vaddr,
++			       BUS_DMA_NOWAIT | BUS_DMA_COHERENT);
++	if (error) {
++		printf("%s: bus_dmamem_map failed: %d\n", __func__, error);
++		goto fail_1;
++	}
++
++	error = bus_dmamap_create(dma->dma_tag, size, 1, size, 0,
++				  BUS_DMA_NOWAIT, &dma->dma_map);
++	if (error) {
++		printf("%s: bus_dmamap_create failed: %d\n", __func__, error);
++		goto fail_2;
++	}
++
++	error = bus_dmamap_load(dma->dma_tag, dma->dma_map, dma->dma_vaddr,
++				size, NULL, BUS_DMA_NOWAIT);
++	if (error) {
++		printf("%s: bus_dmamap_load failed: %d\n", __func__, error);
++		goto fail_3;
++	}
++
++	dma->dma_paddr = (bus_addr_t)dma->segs[0].ds_addr;
++	*dma_addr = dma->dma_paddr;
++	return dma->dma_vaddr;
++
++fail_3:
++	bus_dmamap_destroy(dma->dma_tag, dma->dma_map);
++fail_2:
++	bus_dmamem_unmap(dma->dma_tag, dma->dma_vaddr, size);
++fail_1:
++	bus_dmamem_free(dma->dma_tag, dma->segs, dma->nsegs);
++fail_0:
++	dma->dma_map = NULL;
++	dma->dma_vaddr = NULL;
++	dma->nsegs = 0;
++
++	return NULL;
++}
++
++void __DWC_DMA_FREE(void *dma_ctx, uint32_t size, void *virt_addr, dwc_dma_t dma_addr)
++{
++	dwc_dmactx_t *dma = (dwc_dmactx_t *)dma_ctx;
++
++	if (dma->dma_map != NULL) {
++		bus_dmamap_sync(dma->dma_tag, dma->dma_map, 0, size,
++				BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE);
++		bus_dmamap_unload(dma->dma_tag, dma->dma_map);
++		bus_dmamap_destroy(dma->dma_tag, dma->dma_map);
++		bus_dmamem_unmap(dma->dma_tag, dma->dma_vaddr, size);
++		bus_dmamem_free(dma->dma_tag, dma->segs, dma->nsegs);
++		dma->dma_paddr = 0;
++		dma->dma_map = NULL;
++		dma->dma_vaddr = NULL;
++		dma->nsegs = 0;
++	}
++}
++
++void *__DWC_ALLOC(void *mem_ctx, uint32_t size)
++{
++	return malloc(size, M_DEVBUF, M_WAITOK | M_ZERO);
++}
++
++void *__DWC_ALLOC_ATOMIC(void *mem_ctx, uint32_t size)
++{
++	return malloc(size, M_DEVBUF, M_NOWAIT | M_ZERO);
++}
++
++void __DWC_FREE(void *mem_ctx, void *addr)
++{
++	free(addr, M_DEVBUF);
++}
++
++
++#ifdef DWC_CRYPTOLIB
++/* dwc_crypto.h */
++
++void DWC_RANDOM_BYTES(uint8_t *buffer, uint32_t length)
++{
++	get_random_bytes(buffer, length);
++}
++
++int DWC_AES_CBC(uint8_t *message, uint32_t messagelen, uint8_t *key, uint32_t keylen, uint8_t iv[16], uint8_t *out)
++{
++	struct crypto_blkcipher *tfm;
++	struct blkcipher_desc desc;
++	struct scatterlist sgd;
++	struct scatterlist sgs;
++
++	tfm = crypto_alloc_blkcipher("cbc(aes)", 0, CRYPTO_ALG_ASYNC);
++	if (tfm == NULL) {
++		printk("failed to load transform for aes CBC\n");
++		return -1;
++	}
++
++	crypto_blkcipher_setkey(tfm, key, keylen);
++	crypto_blkcipher_set_iv(tfm, iv, 16);
++
++	sg_init_one(&sgd, out, messagelen);
++	sg_init_one(&sgs, message, messagelen);
++
++	desc.tfm = tfm;
++	desc.flags = 0;
++
++	if (crypto_blkcipher_encrypt(&desc, &sgd, &sgs, messagelen)) {
++		crypto_free_blkcipher(tfm);
++		DWC_ERROR("AES CBC encryption failed");
++		return -1;
++	}
++
++	crypto_free_blkcipher(tfm);
++	return 0;
++}
++
++int DWC_SHA256(uint8_t *message, uint32_t len, uint8_t *out)
++{
++	struct crypto_hash *tfm;
++	struct hash_desc desc;
++	struct scatterlist sg;
++
++	tfm = crypto_alloc_hash("sha256", 0, CRYPTO_ALG_ASYNC);
++	if (IS_ERR(tfm)) {
++		DWC_ERROR("Failed to load transform for sha256: %ld", PTR_ERR(tfm));
++		return 0;
++	}
++	desc.tfm = tfm;
++	desc.flags = 0;
++
++	sg_init_one(&sg, message, len);
++	crypto_hash_digest(&desc, &sg, len, out);
++	crypto_free_hash(tfm);
++
++	return 1;
++}
++
++int DWC_HMAC_SHA256(uint8_t *message, uint32_t messagelen,
++		    uint8_t *key, uint32_t keylen, uint8_t *out)
++{
++	struct crypto_hash *tfm;
++	struct hash_desc desc;
++	struct scatterlist sg;
++
++	tfm = crypto_alloc_hash("hmac(sha256)", 0, CRYPTO_ALG_ASYNC);
++	if (IS_ERR(tfm)) {
++		DWC_ERROR("Failed to load transform for hmac(sha256): %ld", PTR_ERR(tfm));
++		return 0;
++	}
++	desc.tfm = tfm;
++	desc.flags = 0;
++
++	sg_init_one(&sg, message, messagelen);
++	crypto_hash_setkey(tfm, key, keylen);
++	crypto_hash_digest(&desc, &sg, messagelen, out);
++	crypto_free_hash(tfm);
++
++	return 1;
++}
++
++#endif	/* DWC_CRYPTOLIB */
++
++
++/* Byte Ordering Conversions */
++
++uint32_t DWC_CPU_TO_LE32(uint32_t *p)
++{
++#ifdef __LITTLE_ENDIAN
++	return *p;
++#else
++	uint8_t *u_p = (uint8_t *)p;
++
++	return (u_p[3] | (u_p[2] << 8) | (u_p[1] << 16) | (u_p[0] << 24));
++#endif
++}
++
++uint32_t DWC_CPU_TO_BE32(uint32_t *p)
++{
++#ifdef __BIG_ENDIAN
++	return *p;
++#else
++	uint8_t *u_p = (uint8_t *)p;
++
++	return (u_p[3] | (u_p[2] << 8) | (u_p[1] << 16) | (u_p[0] << 24));
++#endif
++}
++
++uint32_t DWC_LE32_TO_CPU(uint32_t *p)
++{
++#ifdef __LITTLE_ENDIAN
++	return *p;
++#else
++	uint8_t *u_p = (uint8_t *)p;
++
++	return (u_p[3] | (u_p[2] << 8) | (u_p[1] << 16) | (u_p[0] << 24));
++#endif
++}
++
++uint32_t DWC_BE32_TO_CPU(uint32_t *p)
++{
++#ifdef __BIG_ENDIAN
++	return *p;
++#else
++	uint8_t *u_p = (uint8_t *)p;
++
++	return (u_p[3] | (u_p[2] << 8) | (u_p[1] << 16) | (u_p[0] << 24));
++#endif
++}
++
++uint16_t DWC_CPU_TO_LE16(uint16_t *p)
++{
++#ifdef __LITTLE_ENDIAN
++	return *p;
++#else
++	uint8_t *u_p = (uint8_t *)p;
++	return (u_p[1] | (u_p[0] << 8));
++#endif
++}
++
++uint16_t DWC_CPU_TO_BE16(uint16_t *p)
++{
++#ifdef __BIG_ENDIAN
++	return *p;
++#else
++	uint8_t *u_p = (uint8_t *)p;
++	return (u_p[1] | (u_p[0] << 8));
++#endif
++}
++
++uint16_t DWC_LE16_TO_CPU(uint16_t *p)
++{
++#ifdef __LITTLE_ENDIAN
++	return *p;
++#else
++	uint8_t *u_p = (uint8_t *)p;
++	return (u_p[1] | (u_p[0] << 8));
++#endif
++}
++
++uint16_t DWC_BE16_TO_CPU(uint16_t *p)
++{
++#ifdef __BIG_ENDIAN
++	return *p;
++#else
++	uint8_t *u_p = (uint8_t *)p;
++	return (u_p[1] | (u_p[0] << 8));
++#endif
++}
++
++
++/* Registers */
++
++uint32_t DWC_READ_REG32(void *io_ctx, uint32_t volatile *reg)
++{
++	dwc_ioctx_t *io = (dwc_ioctx_t *)io_ctx;
++	bus_size_t ior = (bus_size_t)reg;
++
++	return bus_space_read_4(io->iot, io->ioh, ior);
++}
++
++#if 0
++uint64_t DWC_READ_REG64(void *io_ctx, uint64_t volatile *reg)
++{
++	dwc_ioctx_t *io = (dwc_ioctx_t *)io_ctx;
++	bus_size_t ior = (bus_size_t)reg;
++
++	return bus_space_read_8(io->iot, io->ioh, ior);
++}
++#endif
++
++void DWC_WRITE_REG32(void *io_ctx, uint32_t volatile *reg, uint32_t value)
++{
++	dwc_ioctx_t *io = (dwc_ioctx_t *)io_ctx;
++	bus_size_t ior = (bus_size_t)reg;
++
++	bus_space_write_4(io->iot, io->ioh, ior, value);
++}
++
++#if 0
++void DWC_WRITE_REG64(void *io_ctx, uint64_t volatile *reg, uint64_t value)
++{
++	dwc_ioctx_t *io = (dwc_ioctx_t *)io_ctx;
++	bus_size_t ior = (bus_size_t)reg;
++
++	bus_space_write_8(io->iot, io->ioh, ior, value);
++}
++#endif
++
++void DWC_MODIFY_REG32(void *io_ctx, uint32_t volatile *reg, uint32_t clear_mask,
++		      uint32_t set_mask)
++{
++	dwc_ioctx_t *io = (dwc_ioctx_t *)io_ctx;
++	bus_size_t ior = (bus_size_t)reg;
++
++	bus_space_write_4(io->iot, io->ioh, ior,
++			  (bus_space_read_4(io->iot, io->ioh, ior) &
++			   ~clear_mask) | set_mask);
++}
++
++#if 0
++void DWC_MODIFY_REG64(void *io_ctx, uint64_t volatile *reg, uint64_t clear_mask,
++		      uint64_t set_mask)
++{
++	dwc_ioctx_t *io = (dwc_ioctx_t *)io_ctx;
++	bus_size_t ior = (bus_size_t)reg;
++
++	bus_space_write_8(io->iot, io->ioh, ior,
++			  (bus_space_read_8(io->iot, io->ioh, ior) &
++			   ~clear_mask) | set_mask);
++}
++#endif
++
++
++/* Locking */
++
++dwc_spinlock_t *DWC_SPINLOCK_ALLOC(void)
++{
++	struct simplelock *sl = DWC_ALLOC(sizeof(*sl));
++
++	if (!sl) {
++		DWC_ERROR("Cannot allocate memory for spinlock");
++		return NULL;
++	}
++
++	simple_lock_init(sl);
++	return (dwc_spinlock_t *)sl;
++}
++
++void DWC_SPINLOCK_FREE(dwc_spinlock_t *lock)
++{
++	struct simplelock *sl = (struct simplelock *)lock;
++
++	DWC_FREE(sl);
++}
++
++void DWC_SPINLOCK(dwc_spinlock_t *lock)
++{
++	simple_lock((struct simplelock *)lock);
++}
++
++void DWC_SPINUNLOCK(dwc_spinlock_t *lock)
++{
++	simple_unlock((struct simplelock *)lock);
++}
++
++void DWC_SPINLOCK_IRQSAVE(dwc_spinlock_t *lock, dwc_irqflags_t *flags)
++{
++	simple_lock((struct simplelock *)lock);
++	*flags = splbio();
++}
++
++void DWC_SPINUNLOCK_IRQRESTORE(dwc_spinlock_t *lock, dwc_irqflags_t flags)
++{
++	splx(flags);
++	simple_unlock((struct simplelock *)lock);
++}
++
++dwc_mutex_t *DWC_MUTEX_ALLOC(void)
++{
++	dwc_mutex_t *mutex = DWC_ALLOC(sizeof(struct lock));
++
++	if (!mutex) {
++		DWC_ERROR("Cannot allocate memory for mutex");
++		return NULL;
++	}
++
++	lockinit((struct lock *)mutex, 0, "dw3mtx", 0, 0);
++	return mutex;
++}
++
++#if (defined(DWC_LINUX) && defined(CONFIG_DEBUG_MUTEXES))
++#else
++void DWC_MUTEX_FREE(dwc_mutex_t *mutex)
++{
++	DWC_FREE(mutex);
++}
++#endif
++
++void DWC_MUTEX_LOCK(dwc_mutex_t *mutex)
++{
++	lockmgr((struct lock *)mutex, LK_EXCLUSIVE, NULL);
++}
++
++int DWC_MUTEX_TRYLOCK(dwc_mutex_t *mutex)
++{
++	int status;
++
++	status = lockmgr((struct lock *)mutex, LK_EXCLUSIVE | LK_NOWAIT, NULL);
++	return status == 0;
++}
++
++void DWC_MUTEX_UNLOCK(dwc_mutex_t *mutex)
++{
++	lockmgr((struct lock *)mutex, LK_RELEASE, NULL);
++}
++
++
++/* Timing */
++
++void DWC_UDELAY(uint32_t usecs)
++{
++	DELAY(usecs);
++}
++
++void DWC_MDELAY(uint32_t msecs)
++{
++	do {
++		DELAY(1000);
++	} while (--msecs);
++}
++
++void DWC_MSLEEP(uint32_t msecs)
++{
++	struct timeval tv;
++
++	tv.tv_sec = msecs / 1000;
++	tv.tv_usec = (msecs - tv.tv_sec * 1000) * 1000;
++	tsleep(&tv, 0, "dw3slp", tvtohz(&tv));
++}
++
++uint32_t DWC_TIME(void)
++{
++	struct timeval tv;
++
++	microuptime(&tv);	// or getmicrouptime? (less precise, but faster)
++	return tv.tv_sec * 1000 + tv.tv_usec / 1000;
++}
++
++
++/* Timers */
++
++struct dwc_timer {
++	struct callout t;
++	char *name;
++	dwc_spinlock_t *lock;
++	dwc_timer_callback_t cb;
++	void *data;
++};
++
++dwc_timer_t *DWC_TIMER_ALLOC(char *name, dwc_timer_callback_t cb, void *data)
++{
++	dwc_timer_t *t = DWC_ALLOC(sizeof(*t));
++
++	if (!t) {
++		DWC_ERROR("Cannot allocate memory for timer");
++		return NULL;
++	}
++
++	callout_init(&t->t);
++
++	t->name = DWC_STRDUP(name);
++	if (!t->name) {
++		DWC_ERROR("Cannot allocate memory for timer->name");
++		goto no_name;
++	}
++
++	t->lock = DWC_SPINLOCK_ALLOC();
++	if (!t->lock) {
++		DWC_ERROR("Cannot allocate memory for timer->lock");
++		goto no_lock;
++	}
++
++	t->cb = cb;
++	t->data = data;
++
++	return t;
++
++ no_lock:
++	DWC_FREE(t->name);
++ no_name:
++	DWC_FREE(t);
++
++	return NULL;
++}
++
++void DWC_TIMER_FREE(dwc_timer_t *timer)
++{
++	callout_stop(&timer->t);
++	DWC_SPINLOCK_FREE(timer->lock);
++	DWC_FREE(timer->name);
++	DWC_FREE(timer);
++}
++
++void DWC_TIMER_SCHEDULE(dwc_timer_t *timer, uint32_t time)
++{
++	struct timeval tv;
++
++	tv.tv_sec = time / 1000;
++	tv.tv_usec = (time - tv.tv_sec * 1000) * 1000;
++	callout_reset(&timer->t, tvtohz(&tv), timer->cb, timer->data);
++}
++
++void DWC_TIMER_CANCEL(dwc_timer_t *timer)
++{
++	callout_stop(&timer->t);
++}
++
++
++/* Wait Queues */
++
++struct dwc_waitq {
++	struct simplelock lock;
++	int abort;
++};
++
++dwc_waitq_t *DWC_WAITQ_ALLOC(void)
++{
++	dwc_waitq_t *wq = DWC_ALLOC(sizeof(*wq));
++
++	if (!wq) {
++		DWC_ERROR("Cannot allocate memory for waitqueue");
++		return NULL;
++	}
++
++	simple_lock_init(&wq->lock);
++	wq->abort = 0;
++
++	return wq;
++}
++
++void DWC_WAITQ_FREE(dwc_waitq_t *wq)
++{
++	DWC_FREE(wq);
++}
++
++int32_t DWC_WAITQ_WAIT(dwc_waitq_t *wq, dwc_waitq_condition_t cond, void *data)
++{
++	int ipl;
++	int result = 0;
++
++	simple_lock(&wq->lock);
++	ipl = splbio();
++
++	/* Skip the sleep if already aborted or triggered */
++	if (!wq->abort && !cond(data)) {
++		splx(ipl);
++		result = ltsleep(wq, PCATCH, "dw3wat", 0, &wq->lock); // infinite timeout
++		ipl = splbio();
++	}
++
++	if (result == 0) {			// awoken
++		if (wq->abort) {
++			wq->abort = 0;
++			result = -DWC_E_ABORT;
++		} else {
++			result = 0;
++		}
++
++		splx(ipl);
++		simple_unlock(&wq->lock);
++	} else {
++		wq->abort = 0;
++		splx(ipl);
++		simple_unlock(&wq->lock);
++
++		if (result == ERESTART) {	// signaled - restart
++			result = -DWC_E_RESTART;
++		} else {			// signaled - must be EINTR
++			result = -DWC_E_ABORT;
++		}
++	}
++
++	return result;
++}
++
++int32_t DWC_WAITQ_WAIT_TIMEOUT(dwc_waitq_t *wq, dwc_waitq_condition_t cond,
++			       void *data, int32_t msecs)
++{
++	struct timeval tv, tv1, tv2;
++	int ipl;
++	int result = 0;
++
++	tv.tv_sec = msecs / 1000;
++	tv.tv_usec = (msecs - tv.tv_sec * 1000) * 1000;
++
++	simple_lock(&wq->lock);
++	ipl = splbio();
++
++	/* Skip the sleep if already aborted or triggered */
++	if (!wq->abort && !cond(data)) {
++		splx(ipl);
++		getmicrouptime(&tv1);
++		result = ltsleep(wq, PCATCH, "dw3wto", tvtohz(&tv), &wq->lock);
++		getmicrouptime(&tv2);
++		ipl = splbio();
++	}
++
++	if (result == 0) {			// awoken
++		if (wq->abort) {
++			wq->abort = 0;
++			splx(ipl);
++			simple_unlock(&wq->lock);
++			result = -DWC_E_ABORT;
++		} else {
++			splx(ipl);
++			simple_unlock(&wq->lock);
++
++			tv2.tv_usec -= tv1.tv_usec;
++			if (tv2.tv_usec < 0) {
++				tv2.tv_usec += 1000000;
++				tv2.tv_sec--;
++			}
++
++			tv2.tv_sec -= tv1.tv_sec;
++			result = tv2.tv_sec * 1000 + tv2.tv_usec / 1000;
++			result = msecs - result;
++			if (result <= 0)
++				result = 1;
++		}
++	} else {
++		wq->abort = 0;
++		splx(ipl);
++		simple_unlock(&wq->lock);
++
++		if (result == ERESTART) {	// signaled - restart
++			result = -DWC_E_RESTART;
++
++		} else if (result == EINTR) {		// signaled - interrupt
++			result = -DWC_E_ABORT;
++
++		} else {				// timed out
++			result = -DWC_E_TIMEOUT;
++		}
++	}
++
++	return result;
++}
++
++void DWC_WAITQ_TRIGGER(dwc_waitq_t *wq)
++{
++	wakeup(wq);
++}
++
++void DWC_WAITQ_ABORT(dwc_waitq_t *wq)
++{
++	int ipl;
++
++	simple_lock(&wq->lock);
++	ipl = splbio();
++	wq->abort = 1;
++	wakeup(wq);
++	splx(ipl);
++	simple_unlock(&wq->lock);
++}
++
++
++/* Threading */
++
++struct dwc_thread {
++	struct proc *proc;
++	int abort;
++};
++
++dwc_thread_t *DWC_THREAD_RUN(dwc_thread_function_t func, char *name, void *data)
++{
++	int retval;
++	dwc_thread_t *thread = DWC_ALLOC(sizeof(*thread));
++
++	if (!thread) {
++		return NULL;
++	}
++
++	thread->abort = 0;
++	retval = kthread_create1((void (*)(void *))func, data, &thread->proc,
++				 "%s", name);
++	if (retval) {
++		DWC_FREE(thread);
++		return NULL;
++	}
++
++	return thread;
++}
++
++int DWC_THREAD_STOP(dwc_thread_t *thread)
++{
++	int retval;
++
++	thread->abort = 1;
++	retval = tsleep(&thread->abort, 0, "dw3stp", 60 * hz);
++
++	if (retval == 0) {
++		/* DWC_THREAD_EXIT() will free the thread struct */
++		return 0;
++	}
++
++	/* NOTE: We leak the thread struct if thread doesn't die */
++
++	if (retval == EWOULDBLOCK) {
++		return -DWC_E_TIMEOUT;
++	}
++
++	return -DWC_E_UNKNOWN;
++}
++
++dwc_bool_t DWC_THREAD_SHOULD_STOP(dwc_thread_t *thread)
++{
++	return thread->abort;
++}
++
++void DWC_THREAD_EXIT(dwc_thread_t *thread)
++{
++	wakeup(&thread->abort);
++	DWC_FREE(thread);
++	kthread_exit(0);
++}
++
++/* tasklets
++ - Runs in interrupt context (cannot sleep)
++ - Each tasklet runs on a single CPU
++ - Different tasklets can be running simultaneously on different CPUs
++ [ On NetBSD there is no corresponding mechanism, drivers don't have bottom-
++   halves. So we just call the callback directly from DWC_TASK_SCHEDULE() ]
++ */
++struct dwc_tasklet {
++	dwc_tasklet_callback_t cb;
++	void *data;
++};
++
++static void tasklet_callback(void *data)
++{
++	dwc_tasklet_t *task = (dwc_tasklet_t *)data;
++
++	task->cb(task->data);
++}
++
++dwc_tasklet_t *DWC_TASK_ALLOC(char *name, dwc_tasklet_callback_t cb, void *data)
++{
++	dwc_tasklet_t *task = DWC_ALLOC(sizeof(*task));
++
++	if (task) {
++		task->cb = cb;
++		task->data = data;
++	} else {
++		DWC_ERROR("Cannot allocate memory for tasklet");
++	}
++
++	return task;
++}
++
++void DWC_TASK_FREE(dwc_tasklet_t *task)
++{
++	DWC_FREE(task);
++}
++
++void DWC_TASK_SCHEDULE(dwc_tasklet_t *task)
++{
++	tasklet_callback(task);
++}
++
++
++/* workqueues
++ - Runs in process context (can sleep)
++ */
++typedef struct work_container {
++	dwc_work_callback_t cb;
++	void *data;
++	dwc_workq_t *wq;
++	char *name;
++	int hz;
++	struct work task;
++} work_container_t;
++
++struct dwc_workq {
++	struct workqueue *taskq;
++	dwc_spinlock_t *lock;
++	dwc_waitq_t *waitq;
++	int pending;
++	struct work_container *container;
++};
++
++static void do_work(struct work *task, void *data)
++{
++	dwc_workq_t *wq = (dwc_workq_t *)data;
++	work_container_t *container = wq->container;
++	dwc_irqflags_t flags;
++
++	if (container->hz) {
++		tsleep(container, 0, "dw3wrk", container->hz);
++	}
++
++	container->cb(container->data);
++	DWC_DEBUG("Work done: %s, container=%p", container->name, container);
++
++	DWC_SPINLOCK_IRQSAVE(wq->lock, &flags);
++	if (container->name)
++		DWC_FREE(container->name);
++	DWC_FREE(container);
++	wq->pending--;
++	DWC_SPINUNLOCK_IRQRESTORE(wq->lock, flags);
++	DWC_WAITQ_TRIGGER(wq->waitq);
++}
++
++static int work_done(void *data)
++{
++	dwc_workq_t *workq = (dwc_workq_t *)data;
++
++	return workq->pending == 0;
++}
++
++int DWC_WORKQ_WAIT_WORK_DONE(dwc_workq_t *workq, int timeout)
++{
++	return DWC_WAITQ_WAIT_TIMEOUT(workq->waitq, work_done, workq, timeout);
++}
++
++dwc_workq_t *DWC_WORKQ_ALLOC(char *name)
++{
++	int result;
++	dwc_workq_t *wq = DWC_ALLOC(sizeof(*wq));
++
++	if (!wq) {
++		DWC_ERROR("Cannot allocate memory for workqueue");
++		return NULL;
++	}
++
++	result = workqueue_create(&wq->taskq, name, do_work, wq, 0 /*PWAIT*/,
++				  IPL_BIO, 0);
++	if (result) {
++		DWC_ERROR("Cannot create workqueue");
++		goto no_taskq;
++	}
++
++	wq->pending = 0;
++
++	wq->lock = DWC_SPINLOCK_ALLOC();
++	if (!wq->lock) {
++		DWC_ERROR("Cannot allocate memory for spinlock");
++		goto no_lock;
++	}
++
++	wq->waitq = DWC_WAITQ_ALLOC();
++	if (!wq->waitq) {
++		DWC_ERROR("Cannot allocate memory for waitqueue");
++		goto no_waitq;
++	}
++
++	return wq;
++
++ no_waitq:
++	DWC_SPINLOCK_FREE(wq->lock);
++ no_lock:
++	workqueue_destroy(wq->taskq);
++ no_taskq:
++	DWC_FREE(wq);
++
++	return NULL;
++}
++
++void DWC_WORKQ_FREE(dwc_workq_t *wq)
++{
++#ifdef DEBUG
++	dwc_irqflags_t flags;
++
++	DWC_SPINLOCK_IRQSAVE(wq->lock, &flags);
++
++	if (wq->pending != 0) {
++		struct work_container *container = wq->container;
++
++		DWC_ERROR("Destroying work queue with pending work");
++
++		if (container && container->name) {
++			DWC_ERROR("Work %s still pending", container->name);
++		}
++	}
++
++	DWC_SPINUNLOCK_IRQRESTORE(wq->lock, flags);
++#endif
++	DWC_WAITQ_FREE(wq->waitq);
++	DWC_SPINLOCK_FREE(wq->lock);
++	workqueue_destroy(wq->taskq);
++	DWC_FREE(wq);
++}
++
++void DWC_WORKQ_SCHEDULE(dwc_workq_t *wq, dwc_work_callback_t cb, void *data,
++			char *format, ...)
++{
++	dwc_irqflags_t flags;
++	work_container_t *container;
++	static char name[128];
++	va_list args;
++
++	va_start(args, format);
++	DWC_VSNPRINTF(name, 128, format, args);
++	va_end(args);
++
++	DWC_SPINLOCK_IRQSAVE(wq->lock, &flags);
++	wq->pending++;
++	DWC_SPINUNLOCK_IRQRESTORE(wq->lock, flags);
++	DWC_WAITQ_TRIGGER(wq->waitq);
++
++	container = DWC_ALLOC_ATOMIC(sizeof(*container));
++	if (!container) {
++		DWC_ERROR("Cannot allocate memory for container");
++		return;
++	}
++
++	container->name = DWC_STRDUP(name);
++	if (!container->name) {
++		DWC_ERROR("Cannot allocate memory for container->name");
++		DWC_FREE(container);
++		return;
++	}
++
++	container->cb = cb;
++	container->data = data;
++	container->wq = wq;
++	container->hz = 0;
++	wq->container = container;
++
++	DWC_DEBUG("Queueing work: %s, container=%p", container->name, container);
++	workqueue_enqueue(wq->taskq, &container->task);
++}
++
++void DWC_WORKQ_SCHEDULE_DELAYED(dwc_workq_t *wq, dwc_work_callback_t cb,
++				void *data, uint32_t time, char *format, ...)
++{
++	dwc_irqflags_t flags;
++	work_container_t *container;
++	static char name[128];
++	struct timeval tv;
++	va_list args;
++
++	va_start(args, format);
++	DWC_VSNPRINTF(name, 128, format, args);
++	va_end(args);
++
++	DWC_SPINLOCK_IRQSAVE(wq->lock, &flags);
++	wq->pending++;
++	DWC_SPINUNLOCK_IRQRESTORE(wq->lock, flags);
++	DWC_WAITQ_TRIGGER(wq->waitq);
++
++	container = DWC_ALLOC_ATOMIC(sizeof(*container));
++	if (!container) {
++		DWC_ERROR("Cannot allocate memory for container");
++		return;
++	}
++
++	container->name = DWC_STRDUP(name);
++	if (!container->name) {
++		DWC_ERROR("Cannot allocate memory for container->name");
++		DWC_FREE(container);
++		return;
++	}
++
++	container->cb = cb;
++	container->data = data;
++	container->wq = wq;
++	tv.tv_sec = time / 1000;
++	tv.tv_usec = (time - tv.tv_sec * 1000) * 1000;
++	container->hz = tvtohz(&tv);
++	wq->container = container;
++
++	DWC_DEBUG("Queueing work: %s, container=%p", container->name, container);
++	workqueue_enqueue(wq->taskq, &container->task);
++}
++
++int DWC_WORKQ_PENDING(dwc_workq_t *wq)
++{
++	return wq->pending;
++}
+--- /dev/null
++++ b/drivers/usb/host/dwc_common_port/dwc_crypto.c
+@@ -0,0 +1,308 @@
++/* =========================================================================
++ * $File: //dwh/usb_iip/dev/software/dwc_common_port_2/dwc_crypto.c $
++ * $Revision: #5 $
++ * $Date: 2010/09/28 $
++ * $Change: 1596182 $
++ *
++ * Synopsys Portability Library Software and documentation
++ * (hereinafter, "Software") is an Unsupported proprietary work of
++ * Synopsys, Inc. unless otherwise expressly agreed to in writing
++ * between Synopsys and you.
++ *
++ * The Software IS NOT an item of Licensed Software or Licensed Product
++ * under any End User Software License Agreement or Agreement for
++ * Licensed Product with Synopsys or any supplement thereto. You are
++ * permitted to use and redistribute this Software in source and binary
++ * forms, with or without modification, provided that redistributions
++ * of source code must retain this notice. You may not view, use,
++ * disclose, copy or distribute this file or any information contained
++ * herein except pursuant to this license grant from Synopsys. If you
++ * do not agree with this notice, including the disclaimer below, then
++ * you are not authorized to use the Software.
++ *
++ * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS"
++ * BASIS AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
++ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
++ * FOR A PARTICULAR PURPOSE ARE HEREBY DISCLAIMED. IN NO EVENT SHALL
++ * SYNOPSYS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
++ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
++ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
++ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
++ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
++ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
++ * USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
++ * DAMAGE.
++ * ========================================================================= */
++
++/** @file
++ * This file contains the WUSB cryptographic routines.
++ */
++
++#ifdef DWC_CRYPTOLIB
++
++#include "dwc_crypto.h"
++#include "usb.h"
++
++#ifdef DEBUG
++static inline void dump_bytes(char *name, uint8_t *bytes, int len)
++{
++	int i;
++	DWC_PRINTF("%s: ", name);
++	for (i=0; i<len; i++) {
++		DWC_PRINTF("%02x ", bytes[i]);
++	}
++	DWC_PRINTF("\n");
++}
++#else
++#define dump_bytes(x...)
++#endif
++
++/* Display a block */
++void show_block(const u8 *blk, const char *prefix, const char *suffix, int a)
++{
++#ifdef DWC_DEBUG_CRYPTO
++	int i, blksize = 16;
++
++	DWC_DEBUG("%s", prefix);
++
++	if (suffix == NULL) {
++		suffix = "\n";
++		blksize = a;
++	}
++
++	for (i = 0; i < blksize; i++)
++		DWC_PRINT("%02x%s", *blk++, ((i & 3) == 3) ? "  " : " ");
++	DWC_PRINT(suffix);
++#endif
++}
++
++/**
++ * Encrypts an array of bytes using the AES encryption engine.
++ * If <code>dst</code> == <code>src</code>, then the bytes will be encrypted
++ * in-place.
++ *
++ * @return  0 on success, negative error code on error.
++ */
++int dwc_wusb_aes_encrypt(u8 *src, u8 *key, u8 *dst)
++{
++	u8 block_t[16];
++	DWC_MEMSET(block_t, 0, 16);
++
++	return DWC_AES_CBC(src, 16, key, 16, block_t, dst);
++}
++
++/**
++ * The CCM-MAC-FUNCTION described in section 6.5 of the WUSB spec.
++ * This function takes a data string and returns the encrypted CBC
++ * Counter-mode MIC.
++ *
++ * @param key     The 128-bit symmetric key.
++ * @param nonce   The CCM nonce.
++ * @param label   The unique 14-byte ASCII text label.
++ * @param bytes   The byte array to be encrypted.
++ * @param len     Length of the byte array.
++ * @param result  Byte array to receive the 8-byte encrypted MIC.
++ */
++void dwc_wusb_cmf(u8 *key, u8 *nonce,
++		  char *label, u8 *bytes, int len, u8 *result)
++{
++	u8 block_m[16];
++	u8 block_x[16];
++	u8 block_t[8];
++	int idx, blkNum;
++	u16 la = (u16)(len + 14);
++
++	/* Set the AES-128 key */
++	//dwc_aes_setkey(tfm, key, 16);
++
++	/* Fill block B0 from flags = 0x59, N, and l(m) = 0 */
++	block_m[0] = 0x59;
++	for (idx = 0; idx < 13; idx++)
++		block_m[idx + 1] = nonce[idx];
++	block_m[14] = 0;
++	block_m[15] = 0;
++
++	/* Produce the CBC IV */
++	dwc_wusb_aes_encrypt(block_m, key, block_x);
++	show_block(block_m, "CBC IV in: ", "\n", 0);
++	show_block(block_x, "CBC IV out:", "\n", 0);
++
++	/* Fill block B1 from l(a) = Blen + 14, and A */
++	block_x[0] ^= (u8)(la >> 8);
++	block_x[1] ^= (u8)la;
++	for (idx = 0; idx < 14; idx++)
++		block_x[idx + 2] ^= label[idx];
++	show_block(block_x, "After xor: ", "b1\n", 16);
++
++	dwc_wusb_aes_encrypt(block_x, key, block_x);
++	show_block(block_x, "After AES: ", "b1\n", 16);
++
++	idx = 0;
++	blkNum = 0;
++
++	/* Fill remaining blocks with B */
++	while (len-- > 0) {
++		block_x[idx] ^= *bytes++;
++		if (++idx >= 16) {
++			idx = 0;
++			show_block(block_x, "After xor: ", "\n", blkNum);
++			dwc_wusb_aes_encrypt(block_x, key, block_x);
++			show_block(block_x, "After AES: ", "\n", blkNum);
++			blkNum++;
++		}
++	}
++
++	/* Handle partial last block */
++	if (idx > 0) {
++		show_block(block_x, "After xor: ", "\n", blkNum);
++		dwc_wusb_aes_encrypt(block_x, key, block_x);
++		show_block(block_x, "After AES: ", "\n", blkNum);
++	}
++
++	/* Save the MIC tag */
++	DWC_MEMCPY(block_t, block_x, 8);
++	show_block(block_t, "MIC tag  : ", NULL, 8);
++
++	/* Fill block A0 from flags = 0x01, N, and counter = 0 */
++	block_m[0] = 0x01;
++	block_m[14] = 0;
++	block_m[15] = 0;
++
++	/* Encrypt the counter */
++	dwc_wusb_aes_encrypt(block_m, key, block_x);
++	show_block(block_x, "CTR[MIC] : ", NULL, 8);
++
++	/* XOR with MIC tag */
++	for (idx = 0; idx < 8; idx++) {
++		block_t[idx] ^= block_x[idx];
++	}
++
++	/* Return result to caller */
++	DWC_MEMCPY(result, block_t, 8);
++	show_block(result, "CCM-MIC  : ", NULL, 8);
++
++}
++
++/**
++ * The PRF function described in section 6.5 of the WUSB spec. This function
++ * concatenates MIC values returned from dwc_wusb_cmf() to create a value of
++ * the requested length.
++ *
++ * @param prf_len  Length of the PRF function in bits (64, 128, or 256).
++ * @param key, nonce, label, bytes, len  Same as for dwc_wusb_cmf().
++ * @param result   Byte array to receive the result.
++ */
++void dwc_wusb_prf(int prf_len, u8 *key,
++		  u8 *nonce, char *label, u8 *bytes, int len, u8 *result)
++{
++	int i;
++
++	nonce[0] = 0;
++	for (i = 0; i < prf_len >> 6; i++, nonce[0]++) {
++		dwc_wusb_cmf(key, nonce, label, bytes, len, result);
++		result += 8;
++	}
++}
++
++/**
++ * Fills in CCM Nonce per the WUSB spec.
++ *
++ * @param[in] haddr Host address.
++ * @param[in] daddr Device address.
++ * @param[in] tkid Session Key(PTK) identifier.
++ * @param[out] nonce Pointer to where the CCM Nonce output is to be written.
++ */
++void dwc_wusb_fill_ccm_nonce(uint16_t haddr, uint16_t daddr, uint8_t *tkid,
++			     uint8_t *nonce)
++{
++
++	DWC_DEBUG("%s %x %x\n", __func__, daddr, haddr);
++
++	DWC_MEMSET(&nonce[0], 0, 16);
++
++	DWC_MEMCPY(&nonce[6], tkid, 3);
++	nonce[9] = daddr & 0xFF;
++	nonce[10] = (daddr >> 8) & 0xFF;
++	nonce[11] = haddr & 0xFF;
++	nonce[12] = (haddr >> 8) & 0xFF;
++
++	dump_bytes("CCM nonce", nonce, 16);
++}
++
++/**
++ * Generates a 16-byte cryptographic-grade random number for the Host/Device
++ * Nonce.
++ */
++void dwc_wusb_gen_nonce(uint16_t addr, uint8_t *nonce)
++{
++	uint8_t inonce[16];
++	uint32_t temp[4];
++
++	/* Fill in the Nonce */
++	DWC_MEMSET(&inonce[0], 0, sizeof(inonce));
++	inonce[9] = addr & 0xFF;
++	inonce[10] = (addr >> 8) & 0xFF;
++	inonce[11] = inonce[9];
++	inonce[12] = inonce[10];
++
++	/* Collect "randomness samples" */
++	DWC_RANDOM_BYTES((uint8_t *)temp, 16);
++
++	dwc_wusb_prf_128((uint8_t *)temp, nonce,
++			 "Random Numbers", (uint8_t *)temp, sizeof(temp),
++			 nonce);
++}
++
++/**
++ * Generates the Session Key (PTK) and Key Confirmation Key (KCK) per the
++ * WUSB spec.
++ *
++ * @param[in] ccm_nonce Pointer to CCM Nonce.
++ * @param[in] mk Master Key to derive the session from
++ * @param[in] hnonce Pointer to Host Nonce.
++ * @param[in] dnonce Pointer to Device Nonce.
++ * @param[out] kck Pointer to where the KCK output is to be written.
++ * @param[out] ptk Pointer to where the PTK output is to be written.
++ */
++void dwc_wusb_gen_key(uint8_t *ccm_nonce, uint8_t *mk, uint8_t *hnonce,
++		      uint8_t *dnonce, uint8_t *kck, uint8_t *ptk)
++{
++	uint8_t idata[32];
++	uint8_t odata[32];
++
++	dump_bytes("ck", mk, 16);
++	dump_bytes("hnonce", hnonce, 16);
++	dump_bytes("dnonce", dnonce, 16);
++
++	/* The data is the HNonce and DNonce concatenated */
++	DWC_MEMCPY(&idata[0], hnonce, 16);
++	DWC_MEMCPY(&idata[16], dnonce, 16);
++
++	dwc_wusb_prf_256(mk, ccm_nonce, "Pair-wise keys", idata, 32, odata);
++
++	/* Low 16 bytes of the result is the KCK, high 16 is the PTK */
++	DWC_MEMCPY(kck, &odata[0], 16);
++	DWC_MEMCPY(ptk, &odata[16], 16);
++
++	dump_bytes("kck", kck, 16);
++	dump_bytes("ptk", ptk, 16);
++}
++
++/**
++ * Generates the Message Integrity Code over the Handshake data per the
++ * WUSB spec.
++ *
++ * @param ccm_nonce Pointer to CCM Nonce.
++ * @param kck   Pointer to Key Confirmation Key.
++ * @param data  Pointer to Handshake data to be checked.
++ * @param mic   Pointer to where the MIC output is to be written.
++ */
++void dwc_wusb_gen_mic(uint8_t *ccm_nonce, uint8_t *kck,
++		      uint8_t *data, uint8_t *mic)
++{
++
++	dwc_wusb_prf_64(kck, ccm_nonce, "out-of-bandMIC",
++			data, WUSB_HANDSHAKE_LEN_FOR_MIC, mic);
++}
++
++#endif	/* DWC_CRYPTOLIB */
+--- /dev/null
++++ b/drivers/usb/host/dwc_common_port/dwc_crypto.h
+@@ -0,0 +1,111 @@
++/* =========================================================================
++ * $File: //dwh/usb_iip/dev/software/dwc_common_port_2/dwc_crypto.h $
++ * $Revision: #3 $
++ * $Date: 2010/09/28 $
++ * $Change: 1596182 $
++ *
++ * Synopsys Portability Library Software and documentation
++ * (hereinafter, "Software") is an Unsupported proprietary work of
++ * Synopsys, Inc. unless otherwise expressly agreed to in writing
++ * between Synopsys and you.
++ *
++ * The Software IS NOT an item of Licensed Software or Licensed Product
++ * under any End User Software License Agreement or Agreement for
++ * Licensed Product with Synopsys or any supplement thereto. You are
++ * permitted to use and redistribute this Software in source and binary
++ * forms, with or without modification, provided that redistributions
++ * of source code must retain this notice. You may not view, use,
++ * disclose, copy or distribute this file or any information contained
++ * herein except pursuant to this license grant from Synopsys. If you
++ * do not agree with this notice, including the disclaimer below, then
++ * you are not authorized to use the Software.
++ *
++ * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS"
++ * BASIS AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
++ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
++ * FOR A PARTICULAR PURPOSE ARE HEREBY DISCLAIMED. IN NO EVENT SHALL
++ * SYNOPSYS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
++ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
++ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
++ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
++ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
++ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
++ * USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
++ * DAMAGE.
++ * ========================================================================= */
++
++#ifndef _DWC_CRYPTO_H_
++#define _DWC_CRYPTO_H_
++
++#ifdef __cplusplus
++extern "C" {
++#endif
++
++/** @file
++ *
++ * This file contains declarations for the WUSB Cryptographic routines as
++ * defined in the WUSB spec.  They are only to be used internally by the DWC UWB
++ * modules.
++ */
++
++#include "dwc_os.h"
++
++int dwc_wusb_aes_encrypt(u8 *src, u8 *key, u8 *dst);
++
++void dwc_wusb_cmf(u8 *key, u8 *nonce,
++		  char *label, u8 *bytes, int len, u8 *result);
++void dwc_wusb_prf(int prf_len, u8 *key,
++		  u8 *nonce, char *label, u8 *bytes, int len, u8 *result);
++
++/**
++ * The PRF-64 function described in section 6.5 of the WUSB spec.
++ *
++ * @param key, nonce, label, bytes, len, result  Same as for dwc_wusb_prf().
++ */
++static inline void dwc_wusb_prf_64(u8 *key, u8 *nonce,
++				   char *label, u8 *bytes, int len, u8 *result)
++{
++	dwc_wusb_prf(64, key, nonce, label, bytes, len, result);
++}
++
++/**
++ * The PRF-128 function described in section 6.5 of the WUSB spec.
++ *
++ * @param key, nonce, label, bytes, len, result  Same as for dwc_wusb_prf().
++ */
++static inline void dwc_wusb_prf_128(u8 *key, u8 *nonce,
++				    char *label, u8 *bytes, int len, u8 *result)
++{
++	dwc_wusb_prf(128, key, nonce, label, bytes, len, result);
++}
++
++/**
++ * The PRF-256 function described in section 6.5 of the WUSB spec.
++ *
++ * @param key, nonce, label, bytes, len, result  Same as for dwc_wusb_prf().
++ */
++static inline void dwc_wusb_prf_256(u8 *key, u8 *nonce,
++				    char *label, u8 *bytes, int len, u8 *result)
++{
++	dwc_wusb_prf(256, key, nonce, label, bytes, len, result);
++}
++
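++/*
++ * prf_len is the requested output length in bits; the result buffer
++ * receives prf_len / 64 blocks of 8 bytes (8 bytes for PRF-64, 16 for
++ * PRF-128, 32 for PRF-256).  See dwc_wusb_gen_mic(), dwc_wusb_gen_nonce()
++ * and dwc_wusb_gen_key() in dwc_crypto.c for callers of each variant.
++ */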
++
++void dwc_wusb_fill_ccm_nonce(uint16_t haddr, uint16_t daddr, uint8_t *tkid,
++			       uint8_t *nonce);
++void dwc_wusb_gen_nonce(uint16_t addr,
++			  uint8_t *nonce);
++
++void dwc_wusb_gen_key(uint8_t *ccm_nonce, uint8_t *mk,
++			uint8_t *hnonce, uint8_t *dnonce,
++			uint8_t *kck, uint8_t *ptk);
++
++
++void dwc_wusb_gen_mic(uint8_t *ccm_nonce, uint8_t
++			*kck, uint8_t *data, uint8_t *mic);
++
++#ifdef __cplusplus
++}
++#endif
++
++#endif /* _DWC_CRYPTO_H_ */
+--- /dev/null
++++ b/drivers/usb/host/dwc_common_port/dwc_dh.c
+@@ -0,0 +1,291 @@
++/* =========================================================================
++ * $File: //dwh/usb_iip/dev/software/dwc_common_port_2/dwc_dh.c $
++ * $Revision: #3 $
++ * $Date: 2010/09/28 $
++ * $Change: 1596182 $
++ *
++ * Synopsys Portability Library Software and documentation
++ * (hereinafter, "Software") is an Unsupported proprietary work of
++ * Synopsys, Inc. unless otherwise expressly agreed to in writing
++ * between Synopsys and you.
++ *
++ * The Software IS NOT an item of Licensed Software or Licensed Product
++ * under any End User Software License Agreement or Agreement for
++ * Licensed Product with Synopsys or any supplement thereto. You are
++ * permitted to use and redistribute this Software in source and binary
++ * forms, with or without modification, provided that redistributions
++ * of source code must retain this notice. You may not view, use,
++ * disclose, copy or distribute this file or any information contained
++ * herein except pursuant to this license grant from Synopsys. If you
++ * do not agree with this notice, including the disclaimer below, then
++ * you are not authorized to use the Software.
++ *
++ * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS"
++ * BASIS AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
++ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
++ * FOR A PARTICULAR PURPOSE ARE HEREBY DISCLAIMED. IN NO EVENT SHALL
++ * SYNOPSYS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
++ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
++ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
++ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
++ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
++ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
++ * USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
++ * DAMAGE.
++ * ========================================================================= */
++#ifdef DWC_CRYPTOLIB
++
++#ifndef CONFIG_MACH_IPMATE
++
++#include "dwc_dh.h"
++#include "dwc_modpow.h"
++
++#ifdef DEBUG
++/* This function prints out a buffer in the format described in the Association
++ * Model specification. */
++static void dh_dump(char *str, void *_num, int len)
++{
++	uint8_t *num = _num;
++	int i;
++	DWC_PRINTF("%s\n", str);
++	for (i = 0; i < len; i ++) {
++		DWC_PRINTF("%02x", num[i]);
++		if (((i + 1) % 2) == 0) DWC_PRINTF(" ");
++		if (((i + 1) % 26) == 0) DWC_PRINTF("\n");
++	}
++
++	DWC_PRINTF("\n");
++}
++#else
++#define dh_dump(_x...) do {; } while(0)
++#endif
++
++/* Constant g value */
++static __u32 dh_g[] = {
++	0x02000000,
++};
++
++/* Constant p value */
++static __u32 dh_p[] = {
++	0xFFFFFFFF, 0xFFFFFFFF, 0xA2DA0FC9, 0x34C26821, 0x8B62C6C4, 0xD11CDC80, 0x084E0229, 0x74CC678A,
++	0xA6BE0B02, 0x229B133B, 0x79084A51, 0xDD04348E, 0xB31995EF, 0x1B433ACD, 0x6D0A2B30, 0x37145FF2,
++	0x6D35E14F, 0x45C2516D, 0x76B585E4, 0xC67E5E62, 0xE9424CF4, 0x6BED37A6, 0xB65CFF0B, 0xEDB706F4,
++	0xFB6B38EE, 0xA59F895A, 0x11249FAE, 0xE61F4B7C, 0x51662849, 0x3D5BE4EC, 0xB87C00C2, 0x05BF63A1,
++	0x3648DA98, 0x9AD3551C, 0xA83F1669, 0x5FCF24FD, 0x235D6583, 0x96ADA3DC, 0x56F3621C, 0xBB528520,
++	0x0729D59E, 0x6D969670, 0x4E350C67, 0x0498BC4A, 0x086C74F1, 0x7C2118CA, 0x465E9032, 0x3BCE362E,
++	0x2C779EE3, 0x03860E18, 0xA283279B, 0x8FA207EC, 0xF05DC5B5, 0xC9524C6F, 0xF6CB2BDE, 0x18175895,
++	0x7C499539, 0xE56A95EA, 0x1826D215, 0x1005FA98, 0x5A8E7215, 0x2DC4AA8A, 0x0D1733AD, 0x337A5004,
++	0xAB2155A8, 0x64BA1CDF, 0x0485FBEC, 0x0AEFDB58, 0x5771EA8A, 0x7D0C065D, 0x850F97B3, 0xC7E4E1A6,
++	0x8CAEF5AB, 0xD73309DB, 0xE0948C1E, 0x9D61254A, 0x26D2E3CE, 0x6BEED21A, 0x06FA2FF1, 0x64088AD9,
++	0x730276D8, 0x646AC83E, 0x182B1F52, 0x0C207B17, 0x5717E1BB, 0x6C5D617A, 0xC0880977, 0xE246D9BA,
++	0xA04FE208, 0x31ABE574, 0xFC5BDB43, 0x8E10FDE0, 0x20D1824B, 0xCAD23AA9, 0xFFFFFFFF, 0xFFFFFFFF,
++};
++
++static void dh_swap_bytes(void *_in, void *_out, uint32_t len)
++{
++	uint8_t *in = _in;
++	uint8_t *out = _out;
++	int i;
++	for (i=0; i<len; i++) {
++		out[i] = in[len-1-i];
++	}
++}
++
++/* Computes the modular exponentiation (num^exp % mod).  num, exp, and mod are
++ * big endian numbers of size len, in bytes.  Each len value must be a multiple
++ * of 4. */
++int dwc_dh_modpow(void *mem_ctx, void *num, uint32_t num_len,
++		  void *exp, uint32_t exp_len,
++		  void *mod, uint32_t mod_len,
++		  void *out)
++{
++	/* modpow() takes little-endian numbers.  AM uses big-endian.  This
++	 * function swaps the bytes of the numbers before passing them on to
++	 * modpow(). */
++
++	int retval = 0;
++	uint32_t *result;
++
++	uint32_t *bignum_num = dwc_alloc(mem_ctx, num_len + 4);
++	uint32_t *bignum_exp = dwc_alloc(mem_ctx, exp_len + 4);
++	uint32_t *bignum_mod = dwc_alloc(mem_ctx, mod_len + 4);
++
++	dh_swap_bytes(num, &bignum_num[1], num_len);
++	bignum_num[0] = num_len / 4;
++
++	dh_swap_bytes(exp, &bignum_exp[1], exp_len);
++	bignum_exp[0] = exp_len / 4;
++
++	dh_swap_bytes(mod, &bignum_mod[1], mod_len);
++	bignum_mod[0] = mod_len / 4;
++
++	result = dwc_modpow(mem_ctx, bignum_num, bignum_exp, bignum_mod);
++	if (!result) {
++		retval = -1;
++		goto dh_modpow_nomem;
++	}
++
++	dh_swap_bytes(&result[1], out, result[0] * 4);
++	dwc_free(mem_ctx, result);
++
++ dh_modpow_nomem:
++	dwc_free(mem_ctx, bignum_num);
++	dwc_free(mem_ctx, bignum_exp);
++	dwc_free(mem_ctx, bignum_mod);
++	return retval;
++}
++
++
++int dwc_dh_pk(void *mem_ctx, uint8_t nd, uint8_t *exp, uint8_t *pk, uint8_t *hash)
++{
++	int retval;
++	uint8_t m3[385];
++
++#ifndef DH_TEST_VECTORS
++	DWC_RANDOM_BYTES(exp, 32);
++#endif
++
++	/* Compute the pkd */
++	if ((retval = dwc_dh_modpow(mem_ctx, dh_g, 4,
++				    exp, 32,
++				    dh_p, 384, pk))) {
++		return retval;
++	}
++
++	m3[384] = nd;
++	DWC_MEMCPY(&m3[0], pk, 384);
++	DWC_SHA256(m3, 385, hash);
++
++	dh_dump("PK", pk, 384);
++	dh_dump("SHA-256(M3)", hash, 32);
++	return 0;
++}
++
++int dwc_dh_derive_keys(void *mem_ctx, uint8_t nd, uint8_t *pkh, uint8_t *pkd,
++		       uint8_t *exp, int is_host,
++		       char *dd, uint8_t *ck, uint8_t *kdk)
++{
++	int retval;
++	uint8_t mv[784];
++	uint8_t sha_result[32];
++	uint8_t dhkey[384];
++	uint8_t shared_secret[384];
++	char *message;
++	uint32_t vd;
++
++	uint8_t *pk;
++
++	if (is_host) {
++		pk = pkd;
++	}
++	else {
++		pk = pkh;
++	}
++
++	if ((retval = dwc_dh_modpow(mem_ctx, pk, 384,
++				    exp, 32,
++				    dh_p, 384, shared_secret))) {
++		return retval;
++	}
++	dh_dump("Shared Secret", shared_secret, 384);
++
++	DWC_SHA256(shared_secret, 384, dhkey);
++	dh_dump("DHKEY", dhkey, 384);
++
++	DWC_MEMCPY(&mv[0], pkd, 384);
++	DWC_MEMCPY(&mv[384], pkh, 384);
++	DWC_MEMCPY(&mv[768], "displayed digest", 16);
++	dh_dump("MV", mv, 784);
++
++	DWC_SHA256(mv, 784, sha_result);
++	dh_dump("SHA-256(MV)", sha_result, 32);
++	dh_dump("First 32-bits of SHA-256(MV)", sha_result, 4);
++
++	dh_swap_bytes(sha_result, &vd, 4);
++#ifdef DEBUG
++	DWC_PRINTF("Vd (decimal) = %d\n", vd);
++#endif
++
++	switch (nd) {
++	case 2:
++		vd = vd % 100;
++		DWC_SPRINTF(dd, "%02d", vd);
++		break;
++	case 3:
++		vd = vd % 1000;
++		DWC_SPRINTF(dd, "%03d", vd);
++		break;
++	case 4:
++		vd = vd % 10000;
++		DWC_SPRINTF(dd, "%04d", vd);
++		break;
++	}
++#ifdef DEBUG
++	DWC_PRINTF("Display Digits: %s\n", dd);
++#endif
++
++	message = "connection key";
++	DWC_HMAC_SHA256(message, DWC_STRLEN(message), dhkey, 32, sha_result);
++	dh_dump("HMAC(SHA-256, DHKey, connection key)", sha_result, 32);
++	DWC_MEMCPY(ck, sha_result, 16);
++
++	message = "key derivation key";
++	DWC_HMAC_SHA256(message, DWC_STRLEN(message), dhkey, 32, sha_result);
++	dh_dump("HMAC(SHA-256, DHKey, key derivation key)", sha_result, 32);
++	DWC_MEMCPY(kdk, sha_result, 32);
++
++	return 0;
++}
++
++
++#ifdef DH_TEST_VECTORS
++
++static __u8 dh_a[] = {
++	0x44, 0x00, 0x51, 0xd6,
++	0xf0, 0xb5, 0x5e, 0xa9,
++	0x67, 0xab, 0x31, 0xc6,
++	0x8a, 0x8b, 0x5e, 0x37,
++	0xd9, 0x10, 0xda, 0xe0,
++	0xe2, 0xd4, 0x59, 0xa4,
++	0x86, 0x45, 0x9c, 0xaa,
++	0xdf, 0x36, 0x75, 0x16,
++};
++
++static __u8 dh_b[] = {
++	0x5d, 0xae, 0xc7, 0x86,
++	0x79, 0x80, 0xa3, 0x24,
++	0x8c, 0xe3, 0x57, 0x8f,
++	0xc7, 0x5f, 0x1b, 0x0f,
++	0x2d, 0xf8, 0x9d, 0x30,
++	0x6f, 0xa4, 0x52, 0xcd,
++	0xe0, 0x7a, 0x04, 0x8a,
++	0xde, 0xd9, 0x26, 0x56,
++};
++
++void dwc_run_dh_test_vectors(void *mem_ctx)
++{
++	uint8_t pkd[384];
++	uint8_t pkh[384];
++	uint8_t hashd[32];
++	uint8_t hashh[32];
++	uint8_t ck[16];
++	uint8_t kdk[32];
++	char dd[5];
++
++	DWC_PRINTF("\n\n\nDH_TEST_VECTORS\n\n");
++
++	/* compute the PKd and SHA-256(PKd || Nd) */
++	DWC_PRINTF("Computing PKd\n");
++	dwc_dh_pk(mem_ctx, 2, dh_a, pkd, hashd);
++
++	/* compute the PKh and SHA-256(PKh || Nd) */
++	DWC_PRINTF("Computing PKh\n");
++	dwc_dh_pk(mem_ctx, 2, dh_b, pkh, hashh);
++
++	/* compute the dhkey */
++	dwc_dh_derive_keys(mem_ctx, 2, pkh, pkd, dh_a, 0, dd, ck, kdk);
++}
++#endif /* DH_TEST_VECTORS */
++
++#endif /* !CONFIG_MACH_IPMATE */
++
++#endif /* DWC_CRYPTOLIB */
+--- /dev/null
++++ b/drivers/usb/host/dwc_common_port/dwc_dh.h
+@@ -0,0 +1,106 @@
++/* =========================================================================
++ * $File: //dwh/usb_iip/dev/software/dwc_common_port_2/dwc_dh.h $
++ * $Revision: #4 $
++ * $Date: 2010/09/28 $
++ * $Change: 1596182 $
++ *
++ * Synopsys Portability Library Software and documentation
++ * (hereinafter, "Software") is an Unsupported proprietary work of
++ * Synopsys, Inc. unless otherwise expressly agreed to in writing
++ * between Synopsys and you.
++ *
++ * The Software IS NOT an item of Licensed Software or Licensed Product
++ * under any End User Software License Agreement or Agreement for
++ * Licensed Product with Synopsys or any supplement thereto. You are
++ * permitted to use and redistribute this Software in source and binary
++ * forms, with or without modification, provided that redistributions
++ * of source code must retain this notice. You may not view, use,
++ * disclose, copy or distribute this file or any information contained
++ * herein except pursuant to this license grant from Synopsys. If you
++ * do not agree with this notice, including the disclaimer below, then
++ * you are not authorized to use the Software.
++ *
++ * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS"
++ * BASIS AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
++ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
++ * FOR A PARTICULAR PURPOSE ARE HEREBY DISCLAIMED. IN NO EVENT SHALL
++ * SYNOPSYS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
++ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
++ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
++ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
++ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
++ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
++ * USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
++ * DAMAGE.
++ * ========================================================================= */
++#ifndef _DWC_DH_H_
++#define _DWC_DH_H_
++
++#ifdef __cplusplus
++extern "C" {
++#endif
++
++#include "dwc_os.h"
++
++/** @file
++ *
++ * This file defines the common functions on device and host for performing
++ * numeric association as defined in the WUSB spec.  They are only to be
++ * used internally by the DWC UWB modules. */
++
++extern int dwc_dh_sha256(uint8_t *message, uint32_t len, uint8_t *out);
++extern int dwc_dh_hmac_sha256(uint8_t *message, uint32_t messagelen,
++			      uint8_t *key, uint32_t keylen,
++			      uint8_t *out);
++extern int dwc_dh_modpow(void *mem_ctx, void *num, uint32_t num_len,
++			 void *exp, uint32_t exp_len,
++			 void *mod, uint32_t mod_len,
++			 void *out);
++
++/** Computes PKD or PKH, and SHA-256(PKd || Nd)
++ *
++ * PK = g^exp mod p.
++ *
++ * Input:
++ * Nd = Number of digits on the device.
++ *
++ * Output:
++ * exp = A 32-byte buffer to be filled with a randomly generated number,
++ *       used as either A or B.
++ * pk = A 384-byte buffer to be filled with the PKH or PKD.
++ * hash = A 32-byte buffer to be filled with SHA-256(PK || ND).
++ */
++extern int dwc_dh_pk(void *mem_ctx, uint8_t nd, uint8_t *exp, uint8_t *pkd, uint8_t *hash);
++
++/** Computes the DHKEY, and VD.
++ *
++ * If called from the host, then it will compute DHKEY=PKD^exp % p.
++ * If called from the device, then it will compute DHKEY=PKH^exp % p.
++ *
++ * Input:
++ * pkd = The PKD value.
++ * pkh = The PKH value.
++ * exp = The A value (if device) or B value (if host) generated in dwc_wudev_dh_pk.
++ * is_host = Set to non zero if a WUSB host is calling this function.
++ *
++ * Output:
++ *
++ * dd = A pointer to a buffer to be set to the displayed digits string to be shown
++ *      to the user.  This buffer should be at least 5 bytes long to hold 4 digits
++ *      plus a null termination character.  This buffer can be used directly for display.
++ * ck = A 16-byte buffer to be filled with the CK.
++ * kdk = A 32-byte buffer to be filled with the KDK.
++ */
++extern int dwc_dh_derive_keys(void *mem_ctx, uint8_t nd, uint8_t *pkh, uint8_t *pkd,
++			      uint8_t *exp, int is_host,
++			      char *dd, uint8_t *ck, uint8_t *kdk);
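++
++/*
++ * Typical call sequence (a sketch based on dwc_run_dh_test_vectors() in
++ * dwc_dh.c; the buffer names below are illustrative):
++ *
++ *	uint8_t exp[32], pk[384], hash[32], ck[16], kdk[32];
++ *	char dd[5];
++ *
++ *	dwc_dh_pk(mem_ctx, 2, exp, pk, hash);	// generate our PK and its hash
++ *	// ...exchange public keys with the peer, then derive the keys...
++ *	dwc_dh_derive_keys(mem_ctx, 2, pkh, pkd, exp, is_host, dd, ck, kdk);
++ */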
++
++#ifdef DH_TEST_VECTORS
++extern void dwc_run_dh_test_vectors(void *mem_ctx);
++#endif
++
++#ifdef __cplusplus
++}
++#endif
++
++#endif /* _DWC_DH_H_ */
+--- /dev/null
++++ b/drivers/usb/host/dwc_common_port/dwc_list.h
+@@ -0,0 +1,594 @@
++/*	$OpenBSD: queue.h,v 1.26 2004/05/04 16:59:32 grange Exp $	*/
++/*	$NetBSD: queue.h,v 1.11 1996/05/16 05:17:14 mycroft Exp $	*/
++
++/*
++ * Copyright (c) 1991, 1993
++ *	The Regents of the University of California.  All rights reserved.
++ *
++ * Redistribution and use in source and binary forms, with or without
++ * modification, are permitted provided that the following conditions
++ * are met:
++ * 1. Redistributions of source code must retain the above copyright
++ *    notice, this list of conditions and the following disclaimer.
++ * 2. Redistributions in binary form must reproduce the above copyright
++ *    notice, this list of conditions and the following disclaimer in the
++ *    documentation and/or other materials provided with the distribution.
++ * 3. Neither the name of the University nor the names of its contributors
++ *    may be used to endorse or promote products derived from this software
++ *    without specific prior written permission.
++ *
++ * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
++ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
++ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
++ * ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
++ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
++ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
++ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
++ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
++ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
++ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
++ * SUCH DAMAGE.
++ *
++ *	@(#)queue.h	8.5 (Berkeley) 8/20/94
++ */
++
++#ifndef _DWC_LIST_H_
++#define _DWC_LIST_H_
++
++#ifdef __cplusplus
++extern "C" {
++#endif
++
++/** @file
++ *
++ * This file defines linked list operations.  It is derived from BSD with
++ * only the MACRO names being prefixed with DWC_.  This is because a few of
++ * these names conflict with those on Linux.  For documentation on use, see the
++ * inline comments in the source code.  The original license for this source
++ * code applies and is preserved in the dwc_list.h source file.
++ */
++
++/*
++ * This file defines five types of data structures: singly-linked lists,
++ * lists, simple queues, tail queues, and circular queues.
++ *
++ *
++ * A singly-linked list is headed by a single forward pointer. The elements
++ * are singly linked for minimum space and pointer manipulation overhead at
++ * the expense of O(n) removal for arbitrary elements. New elements can be
++ * added to the list after an existing element or at the head of the list.
++ * Elements being removed from the head of the list should use the explicit
++ * macro for this purpose for optimum efficiency. A singly-linked list may
++ * only be traversed in the forward direction.  Singly-linked lists are ideal
++ * for applications with large datasets and few or no removals or for
++ * implementing a LIFO queue.
++ *
++ * A list is headed by a single forward pointer (or an array of forward
++ * pointers for a hash table header). The elements are doubly linked
++ * so that an arbitrary element can be removed without a need to
++ * traverse the list. New elements can be added to the list before
++ * or after an existing element or at the head of the list. A list
++ * may only be traversed in the forward direction.
++ *
++ * A simple queue is headed by a pair of pointers, one to the head of the
++ * list and the other to the tail of the list. The elements are singly
++ * linked to save space, so elements can only be removed from the
++ * head of the list. New elements can be added to the list before or after
++ * an existing element, at the head of the list, or at the end of the
++ * list. A simple queue may only be traversed in the forward direction.
++ *
++ * A tail queue is headed by a pair of pointers, one to the head of the
++ * list and the other to the tail of the list. The elements are doubly
++ * linked so that an arbitrary element can be removed without a need to
++ * traverse the list. New elements can be added to the list before or
++ * after an existing element, at the head of the list, or at the end of
++ * the list. A tail queue may be traversed in either direction.
++ *
++ * A circle queue is headed by a pair of pointers, one to the head of the
++ * list and the other to the tail of the list. The elements are doubly
++ * linked so that an arbitrary element can be removed without a need to
++ * traverse the list. New elements can be added to the list before or after
++ * an existing element, at the head of the list, or at the end of the list.
++ * A circle queue may be traversed in either direction, but has a more
++ * complex end of list detection.
++ *
++ * For details on the use of these macros, see the queue(3) manual page.
++ */
++
++/*
++ * Double-linked List.
++ */
++
++typedef struct dwc_list_link {
++	struct dwc_list_link *next;
++	struct dwc_list_link *prev;
++} dwc_list_link_t;
++
++#define DWC_LIST_INIT(link) do {	\
++	(link)->next = (link);		\
++	(link)->prev = (link);		\
++} while (0)
++
++#define DWC_LIST_FIRST(link)	((link)->next)
++#define DWC_LIST_LAST(link)	((link)->prev)
++#define DWC_LIST_END(link)	(link)
++#define DWC_LIST_NEXT(link)	((link)->next)
++#define DWC_LIST_PREV(link)	((link)->prev)
++#define DWC_LIST_EMPTY(link)	\
++	(DWC_LIST_FIRST(link) == DWC_LIST_END(link))
++#define DWC_LIST_ENTRY(link, type, field)			\
++	(type *)((uint8_t *)(link) - (size_t)(&((type *)0)->field))
++
++#if 0
++#define DWC_LIST_INSERT_HEAD(list, link) do {			\
++	(link)->next = (list)->next;				\
++	(link)->prev = (list);					\
++	(list)->next->prev = (link);				\
++	(list)->next = (link);					\
++} while (0)
++
++#define DWC_LIST_INSERT_TAIL(list, link) do {			\
++	(link)->next = (list);					\
++	(link)->prev = (list)->prev;				\
++	(list)->prev->next = (link);				\
++	(list)->prev = (link);					\
++} while (0)
++#else
++#define DWC_LIST_INSERT_HEAD(list, link) do {			\
++	dwc_list_link_t *__next__ = (list)->next;		\
++	__next__->prev = (link);				\
++	(link)->next = __next__;				\
++	(link)->prev = (list);					\
++	(list)->next = (link);					\
++} while (0)
++
++#define DWC_LIST_INSERT_TAIL(list, link) do {			\
++	dwc_list_link_t *__prev__ = (list)->prev;		\
++	(list)->prev = (link);					\
++	(link)->next = (list);					\
++	(link)->prev = __prev__;				\
++	__prev__->next = (link);				\
++} while (0)
++#endif
++
++#if 0
++static inline void __list_add(struct list_head *new,
++                              struct list_head *prev,
++                              struct list_head *next)
++{
++        next->prev = new;
++        new->next = next;
++        new->prev = prev;
++        prev->next = new;
++}
++
++static inline void list_add(struct list_head *new, struct list_head *head)
++{
++        __list_add(new, head, head->next);
++}
++
++static inline void list_add_tail(struct list_head *new, struct list_head *head)
++{
++        __list_add(new, head->prev, head);
++}
++
++static inline void __list_del(struct list_head * prev, struct list_head * next)
++{
++        next->prev = prev;
++        prev->next = next;
++}
++
++static inline void list_del(struct list_head *entry)
++{
++        __list_del(entry->prev, entry->next);
++        entry->next = LIST_POISON1;
++        entry->prev = LIST_POISON2;
++}
++#endif
++
++#define DWC_LIST_REMOVE(link) do {				\
++	(link)->next->prev = (link)->prev;			\
++	(link)->prev->next = (link)->next;			\
++} while (0)
++
++#define DWC_LIST_REMOVE_INIT(link) do {				\
++	DWC_LIST_REMOVE(link);					\
++	DWC_LIST_INIT(link);					\
++} while (0)
++
++#define DWC_LIST_MOVE_HEAD(list, link) do {			\
++	DWC_LIST_REMOVE(link);					\
++	DWC_LIST_INSERT_HEAD(list, link);			\
++} while (0)
++
++#define DWC_LIST_MOVE_TAIL(list, link) do {			\
++	DWC_LIST_REMOVE(link);					\
++	DWC_LIST_INSERT_TAIL(list, link);			\
++} while (0)
++
++#define DWC_LIST_FOREACH(var, list)				\
++	for((var) = DWC_LIST_FIRST(list);			\
++	    (var) != DWC_LIST_END(list);			\
++	    (var) = DWC_LIST_NEXT(var))
++
++#define DWC_LIST_FOREACH_SAFE(var, var2, list)			\
++	for((var) = DWC_LIST_FIRST(list), (var2) = DWC_LIST_NEXT(var);	\
++	    (var) != DWC_LIST_END(list);			\
++	    (var) = (var2), (var2) = DWC_LIST_NEXT(var2))
++
++#define DWC_LIST_FOREACH_REVERSE(var, list)			\
++	for((var) = DWC_LIST_LAST(list);			\
++	    (var) != DWC_LIST_END(list);			\
++	    (var) = DWC_LIST_PREV(var))
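++
++/* Usage sketch for the doubly-linked list macros above (illustrative only;
++ * "item_t" and "items" are hypothetical names).  DWC_LIST_ENTRY() recovers
++ * the containing structure from an embedded link, container_of style:
++ *
++ *   typedef struct item {
++ *           int value;
++ *           dwc_list_link_t entry;
++ *   } item_t;
++ *
++ *   dwc_list_link_t items;
++ *   dwc_list_link_t *cur;
++ *
++ *   DWC_LIST_INIT(&items);
++ *   DWC_LIST_INSERT_TAIL(&items, &some_item->entry);
++ *   DWC_LIST_FOREACH(cur, &items) {
++ *           item_t *it = DWC_LIST_ENTRY(cur, item_t, entry);
++ *           ...
++ *   }
++ */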
++
++/*
++ * Singly-linked List definitions.
++ */
++#define DWC_SLIST_HEAD(name, type)					\
++struct name {								\
++	struct type *slh_first;	/* first element */			\
++}
++
++#define DWC_SLIST_HEAD_INITIALIZER(head)				\
++	{ NULL }
++
++#define DWC_SLIST_ENTRY(type)						\
++struct {								\
++	struct type *sle_next;	/* next element */			\
++}
++
++/*
++ * Singly-linked List access methods.
++ */
++#define DWC_SLIST_FIRST(head)	((head)->slh_first)
++#define DWC_SLIST_END(head)		NULL
++#define DWC_SLIST_EMPTY(head)	(DWC_SLIST_FIRST(head) == DWC_SLIST_END(head))
++#define DWC_SLIST_NEXT(elm, field)	((elm)->field.sle_next)
++
++#define DWC_SLIST_FOREACH(var, head, field)				\
++	for((var) = DWC_SLIST_FIRST(head);				\
++	    (var) != DWC_SLIST_END(head);				\
++	    (var) = DWC_SLIST_NEXT(var, field))
++
++#define DWC_SLIST_FOREACH_PREVPTR(var, varp, head, field)		\
++	for((varp) = &DWC_SLIST_FIRST((head));				\
++	    ((var) = *(varp)) != DWC_SLIST_END(head);			\
++	    (varp) = &DWC_SLIST_NEXT((var), field))
++
++/*
++ * Singly-linked List functions.
++ */
++#define DWC_SLIST_INIT(head) {						\
++	DWC_SLIST_FIRST(head) = DWC_SLIST_END(head);			\
++}
++
++#define DWC_SLIST_INSERT_AFTER(slistelm, elm, field) do {		\
++	(elm)->field.sle_next = (slistelm)->field.sle_next;		\
++	(slistelm)->field.sle_next = (elm);				\
++} while (0)
++
++#define DWC_SLIST_INSERT_HEAD(head, elm, field) do {			\
++	(elm)->field.sle_next = (head)->slh_first;			\
++	(head)->slh_first = (elm);					\
++} while (0)
++
++#define DWC_SLIST_REMOVE_NEXT(head, elm, field) do {			\
++	(elm)->field.sle_next = (elm)->field.sle_next->field.sle_next;	\
++} while (0)
++
++#define DWC_SLIST_REMOVE_HEAD(head, field) do {				\
++	(head)->slh_first = (head)->slh_first->field.sle_next;		\
++} while (0)
++
++#define DWC_SLIST_REMOVE(head, elm, type, field) do {			\
++	if ((head)->slh_first == (elm)) {				\
++		DWC_SLIST_REMOVE_HEAD((head), field);			\
++	}								\
++	else {								\
++		struct type *curelm = (head)->slh_first;		\
++		while( curelm->field.sle_next != (elm) )		\
++			curelm = curelm->field.sle_next;		\
++		curelm->field.sle_next =				\
++		    curelm->field.sle_next->field.sle_next;		\
++	}								\
++} while (0)
++
++/*
++ * Simple queue definitions.
++ */
++#define DWC_SIMPLEQ_HEAD(name, type)					\
++struct name {								\
++	struct type *sqh_first;	/* first element */			\
++	struct type **sqh_last;	/* addr of last next element */		\
++}
++
++#define DWC_SIMPLEQ_HEAD_INITIALIZER(head)				\
++	{ NULL, &(head).sqh_first }
++
++#define DWC_SIMPLEQ_ENTRY(type)						\
++struct {								\
++	struct type *sqe_next;	/* next element */			\
++}
++
++/*
++ * Simple queue access methods.
++ */
++#define DWC_SIMPLEQ_FIRST(head)	    ((head)->sqh_first)
++#define DWC_SIMPLEQ_END(head)	    NULL
++#define DWC_SIMPLEQ_EMPTY(head)	    (DWC_SIMPLEQ_FIRST(head) == DWC_SIMPLEQ_END(head))
++#define DWC_SIMPLEQ_NEXT(elm, field)    ((elm)->field.sqe_next)
++
++#define DWC_SIMPLEQ_FOREACH(var, head, field)				\
++	for((var) = DWC_SIMPLEQ_FIRST(head);				\
++	    (var) != DWC_SIMPLEQ_END(head);				\
++	    (var) = DWC_SIMPLEQ_NEXT(var, field))
++
++/*
++ * Simple queue functions.
++ */
++#define DWC_SIMPLEQ_INIT(head) do {					\
++	(head)->sqh_first = NULL;					\
++	(head)->sqh_last = &(head)->sqh_first;				\
++} while (0)
++
++#define DWC_SIMPLEQ_INSERT_HEAD(head, elm, field) do {			\
++	if (((elm)->field.sqe_next = (head)->sqh_first) == NULL)	\
++		(head)->sqh_last = &(elm)->field.sqe_next;		\
++	(head)->sqh_first = (elm);					\
++} while (0)
++
++#define DWC_SIMPLEQ_INSERT_TAIL(head, elm, field) do {			\
++	(elm)->field.sqe_next = NULL;					\
++	*(head)->sqh_last = (elm);					\
++	(head)->sqh_last = &(elm)->field.sqe_next;			\
++} while (0)
++
++#define DWC_SIMPLEQ_INSERT_AFTER(head, listelm, elm, field) do {	\
++	if (((elm)->field.sqe_next = (listelm)->field.sqe_next) == NULL)\
++		(head)->sqh_last = &(elm)->field.sqe_next;		\
++	(listelm)->field.sqe_next = (elm);				\
++} while (0)
++
++#define DWC_SIMPLEQ_REMOVE_HEAD(head, field) do {			\
++	if (((head)->sqh_first = (head)->sqh_first->field.sqe_next) == NULL) \
++		(head)->sqh_last = &(head)->sqh_first;			\
++} while (0)
++
++/*
++ * Tail queue definitions.
++ */
++#define DWC_TAILQ_HEAD(name, type)					\
++struct name {								\
++	struct type *tqh_first;	/* first element */			\
++	struct type **tqh_last;	/* addr of last next element */		\
++}
++
++#define DWC_TAILQ_HEAD_INITIALIZER(head)				\
++	{ NULL, &(head).tqh_first }
++
++#define DWC_TAILQ_ENTRY(type)						\
++struct {								\
++	struct type *tqe_next;	/* next element */			\
++	struct type **tqe_prev;	/* address of previous next element */	\
++}
++
++/*
++ * tail queue access methods
++ */
++#define DWC_TAILQ_FIRST(head)		((head)->tqh_first)
++#define DWC_TAILQ_END(head)		NULL
++#define DWC_TAILQ_NEXT(elm, field)	((elm)->field.tqe_next)
++#define DWC_TAILQ_LAST(head, headname)					\
++	(*(((struct headname *)((head)->tqh_last))->tqh_last))
++/* XXX */
++#define DWC_TAILQ_PREV(elm, headname, field)				\
++	(*(((struct headname *)((elm)->field.tqe_prev))->tqh_last))
++#define DWC_TAILQ_EMPTY(head)						\
++	(DWC_TAILQ_FIRST(head) == DWC_TAILQ_END(head))
++
++#define DWC_TAILQ_FOREACH(var, head, field)				\
++	for ((var) = DWC_TAILQ_FIRST(head);				\
++	    (var) != DWC_TAILQ_END(head);				\
++	    (var) = DWC_TAILQ_NEXT(var, field))
++
++#define DWC_TAILQ_FOREACH_REVERSE(var, head, headname, field)		\
++	for ((var) = DWC_TAILQ_LAST(head, headname);			\
++	    (var) != DWC_TAILQ_END(head);				\
++	    (var) = DWC_TAILQ_PREV(var, headname, field))
++
++/*
++ * Tail queue functions.
++ */
++#define DWC_TAILQ_INIT(head) do {					\
++	(head)->tqh_first = NULL;					\
++	(head)->tqh_last = &(head)->tqh_first;				\
++} while (0)
++
++#define DWC_TAILQ_INSERT_HEAD(head, elm, field) do {			\
++	if (((elm)->field.tqe_next = (head)->tqh_first) != NULL)	\
++		(head)->tqh_first->field.tqe_prev =			\
++		    &(elm)->field.tqe_next;				\
++	else								\
++		(head)->tqh_last = &(elm)->field.tqe_next;		\
++	(head)->tqh_first = (elm);					\
++	(elm)->field.tqe_prev = &(head)->tqh_first;			\
++} while (0)
++
++#define DWC_TAILQ_INSERT_TAIL(head, elm, field) do {			\
++	(elm)->field.tqe_next = NULL;					\
++	(elm)->field.tqe_prev = (head)->tqh_last;			\
++	*(head)->tqh_last = (elm);					\
++	(head)->tqh_last = &(elm)->field.tqe_next;			\
++} while (0)
++
++#define DWC_TAILQ_INSERT_AFTER(head, listelm, elm, field) do {		\
++	if (((elm)->field.tqe_next = (listelm)->field.tqe_next) != NULL)\
++		(elm)->field.tqe_next->field.tqe_prev =			\
++		    &(elm)->field.tqe_next;				\
++	else								\
++		(head)->tqh_last = &(elm)->field.tqe_next;		\
++	(listelm)->field.tqe_next = (elm);				\
++	(elm)->field.tqe_prev = &(listelm)->field.tqe_next;		\
++} while (0)
++
++#define DWC_TAILQ_INSERT_BEFORE(listelm, elm, field) do {		\
++	(elm)->field.tqe_prev = (listelm)->field.tqe_prev;		\
++	(elm)->field.tqe_next = (listelm);				\
++	*(listelm)->field.tqe_prev = (elm);				\
++	(listelm)->field.tqe_prev = &(elm)->field.tqe_next;		\
++} while (0)
++
++#define DWC_TAILQ_REMOVE(head, elm, field) do {				\
++	if (((elm)->field.tqe_next) != NULL)				\
++		(elm)->field.tqe_next->field.tqe_prev =			\
++		    (elm)->field.tqe_prev;				\
++	else								\
++		(head)->tqh_last = (elm)->field.tqe_prev;		\
++	*(elm)->field.tqe_prev = (elm)->field.tqe_next;			\
++} while (0)
++
++#define DWC_TAILQ_REPLACE(head, elm, elm2, field) do {			\
++	if (((elm2)->field.tqe_next = (elm)->field.tqe_next) != NULL)	\
++		(elm2)->field.tqe_next->field.tqe_prev =		\
++		    &(elm2)->field.tqe_next;				\
++	else								\
++		(head)->tqh_last = &(elm2)->field.tqe_next;		\
++	(elm2)->field.tqe_prev = (elm)->field.tqe_prev;			\
++	*(elm2)->field.tqe_prev = (elm2);				\
++} while (0)
++
++/*
++ * Circular queue definitions.
++ */
++#define DWC_CIRCLEQ_HEAD(name, type)					\
++struct name {								\
++	struct type *cqh_first;		/* first element */		\
++	struct type *cqh_last;		/* last element */		\
++}
++
++#define DWC_CIRCLEQ_HEAD_INITIALIZER(head)				\
++	{ DWC_CIRCLEQ_END(&head), DWC_CIRCLEQ_END(&head) }
++
++#define DWC_CIRCLEQ_ENTRY(type)						\
++struct {								\
++	struct type *cqe_next;		/* next element */		\
++	struct type *cqe_prev;		/* previous element */		\
++}
++
++/*
++ * Circular queue access methods
++ */
++#define DWC_CIRCLEQ_FIRST(head)		((head)->cqh_first)
++#define DWC_CIRCLEQ_LAST(head)		((head)->cqh_last)
++#define DWC_CIRCLEQ_END(head)		((void *)(head))
++#define DWC_CIRCLEQ_NEXT(elm, field)	((elm)->field.cqe_next)
++#define DWC_CIRCLEQ_PREV(elm, field)	((elm)->field.cqe_prev)
++#define DWC_CIRCLEQ_EMPTY(head)						\
++	(DWC_CIRCLEQ_FIRST(head) == DWC_CIRCLEQ_END(head))
++
++#define DWC_CIRCLEQ_EMPTY_ENTRY(elm, field) (((elm)->field.cqe_next == NULL) && ((elm)->field.cqe_prev == NULL))
++
++#define DWC_CIRCLEQ_FOREACH(var, head, field)				\
++	for((var) = DWC_CIRCLEQ_FIRST(head);				\
++	    (var) != DWC_CIRCLEQ_END(head);				\
++	    (var) = DWC_CIRCLEQ_NEXT(var, field))
++
++#define DWC_CIRCLEQ_FOREACH_SAFE(var, var2, head, field)			\
++	for((var) = DWC_CIRCLEQ_FIRST(head), var2 = DWC_CIRCLEQ_NEXT(var, field); \
++	    (var) != DWC_CIRCLEQ_END(head);					\
++	    (var) = var2, var2 = DWC_CIRCLEQ_NEXT(var, field))
++
++#define DWC_CIRCLEQ_FOREACH_REVERSE(var, head, field)			\
++	for((var) = DWC_CIRCLEQ_LAST(head);				\
++	    (var) != DWC_CIRCLEQ_END(head);				\
++	    (var) = DWC_CIRCLEQ_PREV(var, field))
++
++/*
++ * Circular queue functions.
++ */
++#define DWC_CIRCLEQ_INIT(head) do {					\
++	(head)->cqh_first = DWC_CIRCLEQ_END(head);			\
++	(head)->cqh_last = DWC_CIRCLEQ_END(head);			\
++} while (0)
++
++#define DWC_CIRCLEQ_INIT_ENTRY(elm, field) do {				\
++	(elm)->field.cqe_next = NULL;					\
++	(elm)->field.cqe_prev = NULL;					\
++} while (0)
++
++#define DWC_CIRCLEQ_INSERT_AFTER(head, listelm, elm, field) do {	\
++	(elm)->field.cqe_next = (listelm)->field.cqe_next;		\
++	(elm)->field.cqe_prev = (listelm);				\
++	if ((listelm)->field.cqe_next == DWC_CIRCLEQ_END(head))		\
++		(head)->cqh_last = (elm);				\
++	else								\
++		(listelm)->field.cqe_next->field.cqe_prev = (elm);	\
++	(listelm)->field.cqe_next = (elm);				\
++} while (0)
++
++#define DWC_CIRCLEQ_INSERT_BEFORE(head, listelm, elm, field) do {	\
++	(elm)->field.cqe_next = (listelm);				\
++	(elm)->field.cqe_prev = (listelm)->field.cqe_prev;		\
++	if ((listelm)->field.cqe_prev == DWC_CIRCLEQ_END(head))		\
++		(head)->cqh_first = (elm);				\
++	else								\
++		(listelm)->field.cqe_prev->field.cqe_next = (elm);	\
++	(listelm)->field.cqe_prev = (elm);				\
++} while (0)
++
++#define DWC_CIRCLEQ_INSERT_HEAD(head, elm, field) do {			\
++	(elm)->field.cqe_next = (head)->cqh_first;			\
++	(elm)->field.cqe_prev = DWC_CIRCLEQ_END(head);			\
++	if ((head)->cqh_last == DWC_CIRCLEQ_END(head))			\
++		(head)->cqh_last = (elm);				\
++	else								\
++		(head)->cqh_first->field.cqe_prev = (elm);		\
++	(head)->cqh_first = (elm);					\
++} while (0)
++
++#define DWC_CIRCLEQ_INSERT_TAIL(head, elm, field) do {			\
++	(elm)->field.cqe_next = DWC_CIRCLEQ_END(head);			\
++	(elm)->field.cqe_prev = (head)->cqh_last;			\
++	if ((head)->cqh_first == DWC_CIRCLEQ_END(head))			\
++		(head)->cqh_first = (elm);				\
++	else								\
++		(head)->cqh_last->field.cqe_next = (elm);		\
++	(head)->cqh_last = (elm);					\
++} while (0)
++
++#define DWC_CIRCLEQ_REMOVE(head, elm, field) do {			\
++	if ((elm)->field.cqe_next == DWC_CIRCLEQ_END(head))		\
++		(head)->cqh_last = (elm)->field.cqe_prev;		\
++	else								\
++		(elm)->field.cqe_next->field.cqe_prev =			\
++		    (elm)->field.cqe_prev;				\
++	if ((elm)->field.cqe_prev == DWC_CIRCLEQ_END(head))		\
++		(head)->cqh_first = (elm)->field.cqe_next;		\
++	else								\
++		(elm)->field.cqe_prev->field.cqe_next =			\
++		    (elm)->field.cqe_next;				\
++} while (0)
++
++#define DWC_CIRCLEQ_REMOVE_INIT(head, elm, field) do {			\
++	DWC_CIRCLEQ_REMOVE(head, elm, field);				\
++	DWC_CIRCLEQ_INIT_ENTRY(elm, field);				\
++} while (0)
++
++#define DWC_CIRCLEQ_REPLACE(head, elm, elm2, field) do {		\
++	if (((elm2)->field.cqe_next = (elm)->field.cqe_next) ==		\
++	    DWC_CIRCLEQ_END(head))					\
++		(head)->cqh_last = (elm2);				\
++	else								\
++		(elm2)->field.cqe_next->field.cqe_prev = (elm2);	\
++	if (((elm2)->field.cqe_prev = (elm)->field.cqe_prev) ==		\
++	    DWC_CIRCLEQ_END(head))					\
++		(head)->cqh_first = (elm2);				\
++	else								\
++		(elm2)->field.cqe_prev->field.cqe_next = (elm2);	\
++} while (0)
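++
++/* Usage sketch for the circular queue macros (this mirrors how dwc_mem.c and
++ * dwc_notifier.c use them; the names below are hypothetical):
++ *
++ *   struct widget {
++ *           int id;
++ *           DWC_CIRCLEQ_ENTRY(widget) entry;
++ *   };
++ *
++ *   DWC_CIRCLEQ_HEAD(widget_queue, widget);
++ *
++ *   struct widget_queue q;
++ *   struct widget *w;
++ *
++ *   DWC_CIRCLEQ_INIT(&q);
++ *   DWC_CIRCLEQ_INSERT_TAIL(&q, new_widget, entry);
++ *   DWC_CIRCLEQ_FOREACH(w, &q, entry) {
++ *           ...
++ *   }
++ */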
++
++#ifdef __cplusplus
++}
++#endif
++
++#endif /* _DWC_LIST_H_ */
+--- /dev/null
++++ b/drivers/usb/host/dwc_common_port/dwc_mem.c
+@@ -0,0 +1,245 @@
++/* Memory Debugging */
++#ifdef DWC_DEBUG_MEMORY
++
++#include "dwc_os.h"
++#include "dwc_list.h"
++
++struct allocation {
++	void *addr;
++	void *ctx;
++	char *func;
++	int line;
++	uint32_t size;
++	int dma;
++	DWC_CIRCLEQ_ENTRY(allocation) entry;
++};
++
++DWC_CIRCLEQ_HEAD(allocation_queue, allocation);
++
++struct allocation_manager {
++	void *mem_ctx;
++	struct allocation_queue allocations;
++
++	/* statistics */
++	int num;
++	int num_freed;
++	int num_active;
++	uint32_t total;
++	uint32_t cur;
++	uint32_t max;
++};
++
++static struct allocation_manager *manager = NULL;
++
++static int add_allocation(void *ctx, uint32_t size, char const *func, int line, void *addr,
++			  int dma)
++{
++	struct allocation *a;
++
++	DWC_ASSERT(manager != NULL, "manager not allocated");
++
++	a = __DWC_ALLOC_ATOMIC(manager->mem_ctx, sizeof(*a));
++	if (!a) {
++		return -DWC_E_NO_MEMORY;
++	}
++
++	a->func = __DWC_ALLOC_ATOMIC(manager->mem_ctx, DWC_STRLEN(func) + 1);
++	if (!a->func) {
++		__DWC_FREE(manager->mem_ctx, a);
++		return -DWC_E_NO_MEMORY;
++	}
++
++	DWC_MEMCPY(a->func, func, DWC_STRLEN(func) + 1);
++	a->addr = addr;
++	a->ctx = ctx;
++	a->line = line;
++	a->size = size;
++	a->dma = dma;
++	DWC_CIRCLEQ_INSERT_TAIL(&manager->allocations, a, entry);
++
++	/* Update stats */
++	manager->num++;
++	manager->num_active++;
++	manager->total += size;
++	manager->cur += size;
++
++	if (manager->max < manager->cur) {
++		manager->max = manager->cur;
++	}
++
++	return 0;
++}
++
++static struct allocation *find_allocation(void *ctx, void *addr)
++{
++	struct allocation *a;
++
++	DWC_CIRCLEQ_FOREACH(a, &manager->allocations, entry) {
++		if (a->ctx == ctx && a->addr == addr) {
++			return a;
++		}
++	}
++
++	return NULL;
++}
++
++static void free_allocation(void *ctx, void *addr, char const *func, int line)
++{
++	struct allocation *a = find_allocation(ctx, addr);
++
++	if (!a) {
++		DWC_ASSERT(0,
++			   "Free of address %p that was never allocated or already freed %s:%d",
++			   addr, func, line);
++		return;
++	}
++
++	DWC_CIRCLEQ_REMOVE(&manager->allocations, a, entry);
++
++	manager->num_active--;
++	manager->num_freed++;
++	manager->cur -= a->size;
++	__DWC_FREE(manager->mem_ctx, a->func);
++	__DWC_FREE(manager->mem_ctx, a);
++}
++
++int dwc_memory_debug_start(void *mem_ctx)
++{
++	DWC_ASSERT(manager == NULL, "Memory debugging has already started\n");
++
++	if (manager) {
++		return -DWC_E_BUSY;
++	}
++
++	manager = __DWC_ALLOC(mem_ctx, sizeof(*manager));
++	if (!manager) {
++		return -DWC_E_NO_MEMORY;
++	}
++
++	DWC_CIRCLEQ_INIT(&manager->allocations);
++	manager->mem_ctx = mem_ctx;
++	manager->num = 0;
++	manager->num_freed = 0;
++	manager->num_active = 0;
++	manager->total = 0;
++	manager->cur = 0;
++	manager->max = 0;
++
++	return 0;
++}
++
++void dwc_memory_debug_stop(void)
++{
++	struct allocation *a, *a2;
++
++	dwc_memory_debug_report();
++
++	/* Use the safe iterator: free_allocation() removes and frees each
++	 * entry while the queue is still being walked. */
++	DWC_CIRCLEQ_FOREACH_SAFE(a, a2, &manager->allocations, entry) {
++		DWC_ERROR("Memory leaked from %s:%d\n", a->func, a->line);
++		free_allocation(a->ctx, a->addr, NULL, -1);
++	}
++
++	__DWC_FREE(manager->mem_ctx, manager);
++}
++
++void dwc_memory_debug_report(void)
++{
++	struct allocation *a;
++
++	DWC_PRINTF("\n\n\n----------------- Memory Debugging Report -----------------\n\n");
++	DWC_PRINTF("Num Allocations = %d\n", manager->num);
++	DWC_PRINTF("Freed = %d\n", manager->num_freed);
++	DWC_PRINTF("Active = %d\n", manager->num_active);
++	DWC_PRINTF("Current Memory Used = %d\n", manager->cur);
++	DWC_PRINTF("Total Memory Used = %d\n", manager->total);
++	DWC_PRINTF("Maximum Memory Used at Once = %d\n", manager->max);
++	DWC_PRINTF("Unfreed allocations:\n");
++
++	DWC_CIRCLEQ_FOREACH(a, &manager->allocations, entry) {
++		DWC_PRINTF("    addr=%p, size=%d from %s:%d, DMA=%d\n",
++			   a->addr, a->size, a->func, a->line, a->dma);
++	}
++}
++
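++/* Typical lifecycle (a sketch; presumably the DWC_ALLOC/DWC_FREE macros in
++ * dwc_os.h route through the *_debug wrappers below when DWC_DEBUG_MEMORY
++ * is defined):
++ *
++ *   dwc_memory_debug_start(mem_ctx);
++ *   ... driver runs, allocations and frees are tracked ...
++ *   dwc_memory_debug_report();   current statistics, can be called any time
++ *   dwc_memory_debug_stop();     reports again and warns about leaked blocks
++ */
++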
++/* The replacement functions */
++void *dwc_alloc_debug(void *mem_ctx, uint32_t size, char const *func, int line)
++{
++	void *addr = __DWC_ALLOC(mem_ctx, size);
++
++	if (!addr) {
++		return NULL;
++	}
++
++	if (add_allocation(mem_ctx, size, func, line, addr, 0)) {
++		__DWC_FREE(mem_ctx, addr);
++		return NULL;
++	}
++
++	return addr;
++}
++
++void *dwc_alloc_atomic_debug(void *mem_ctx, uint32_t size, char const *func,
++			     int line)
++{
++	void *addr = __DWC_ALLOC_ATOMIC(mem_ctx, size);
++
++	if (!addr) {
++		return NULL;
++	}
++
++	if (add_allocation(mem_ctx, size, func, line, addr, 0)) {
++		__DWC_FREE(mem_ctx, addr);
++		return NULL;
++	}
++
++	return addr;
++}
++
++void dwc_free_debug(void *mem_ctx, void *addr, char const *func, int line)
++{
++	free_allocation(mem_ctx, addr, func, line);
++	__DWC_FREE(mem_ctx, addr);
++}
++
++void *dwc_dma_alloc_debug(void *dma_ctx, uint32_t size, dwc_dma_t *dma_addr,
++			  char const *func, int line)
++{
++	void *addr = __DWC_DMA_ALLOC(dma_ctx, size, dma_addr);
++
++	if (!addr) {
++		return NULL;
++	}
++
++	if (add_allocation(dma_ctx, size, func, line, addr, 1)) {
++		__DWC_DMA_FREE(dma_ctx, size, addr, *dma_addr);
++		return NULL;
++	}
++
++	return addr;
++}
++
++void *dwc_dma_alloc_atomic_debug(void *dma_ctx, uint32_t size,
++				 dwc_dma_t *dma_addr, char const *func, int line)
++{
++	void *addr = __DWC_DMA_ALLOC_ATOMIC(dma_ctx, size, dma_addr);
++
++	if (!addr) {
++		return NULL;
++	}
++
++	if (add_allocation(dma_ctx, size, func, line, addr, 1)) {
++		__DWC_DMA_FREE(dma_ctx, size, addr, *dma_addr);
++		return NULL;
++	}
++
++	return addr;
++}
++
++void dwc_dma_free_debug(void *dma_ctx, uint32_t size, void *virt_addr,
++			dwc_dma_t dma_addr, char const *func, int line)
++{
++	free_allocation(dma_ctx, virt_addr, func, line);
++	__DWC_DMA_FREE(dma_ctx, size, virt_addr, dma_addr);
++}
++
++#endif /* DWC_DEBUG_MEMORY */
+--- /dev/null
++++ b/drivers/usb/host/dwc_common_port/dwc_modpow.c
+@@ -0,0 +1,636 @@
++/* Bignum routines adapted from PUTTY sources.  PuTTY copyright notice follows.
++ *
++ * PuTTY is copyright 1997-2007 Simon Tatham.
++ *
++ * Portions copyright Robert de Bath, Joris van Rantwijk, Delian
++ * Delchev, Andreas Schultz, Jeroen Massar, Wez Furlong, Nicolas Barry,
++ * Justin Bradford, Ben Harris, Malcolm Smith, Ahmad Khalifa, Markus
++ * Kuhn, and CORE SDI S.A.
++ *
++ * Permission is hereby granted, free of charge, to any person
++ * obtaining a copy of this software and associated documentation files
++ * (the "Software"), to deal in the Software without restriction,
++ * including without limitation the rights to use, copy, modify, merge,
++ * publish, distribute, sublicense, and/or sell copies of the Software,
++ * and to permit persons to whom the Software is furnished to do so,
++ * subject to the following conditions:
++ *
++ * The above copyright notice and this permission notice shall be
++ * included in all copies or substantial portions of the Software.
++
++ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
++ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
++ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
++ * NONINFRINGEMENT.  IN NO EVENT SHALL THE COPYRIGHT HOLDERS BE LIABLE
++ * FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
++ * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
++ * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
++ *
++ */
++#ifdef DWC_CRYPTOLIB
++
++#ifndef CONFIG_MACH_IPMATE
++
++#include "dwc_modpow.h"
++
++#define BIGNUM_INT_MASK  0xFFFFFFFFUL
++#define BIGNUM_TOP_BIT   0x80000000UL
++#define BIGNUM_INT_BITS  32
++
++
++static void *snmalloc(void *mem_ctx, size_t n, size_t size)
++{
++    void *p;
++    size *= n;
++    if (size == 0) size = 1;
++    p = dwc_alloc(mem_ctx, size);
++    return p;
++}
++
++#define snewn(ctx, n, type) ((type *)snmalloc((ctx), (n), sizeof(type)))
++#define sfree dwc_free
++
++/*
++ * Usage notes:
++ *  * Do not call the DIVMOD_WORD macro with expressions such as array
++ *    subscripts, as some implementations object to this (see below).
++ *  * Note that none of the division methods below will cope if the
++ *    quotient won't fit into BIGNUM_INT_BITS. Callers should be careful
++ *    to avoid this case.
++ *    If this condition occurs, in the case of the x86 DIV instruction,
++ *    an overflow exception will occur, which (according to a correspondent)
++ *    will manifest on Windows as something like
++ *      0xC0000095: Integer overflow
++ *    The C variant won't give the right answer, either.
++ */
++
++#define MUL_WORD(w1, w2) ((BignumDblInt)w1 * w2)
++
++#if defined __GNUC__ && defined __i386__
++#define DIVMOD_WORD(q, r, hi, lo, w) \
++    __asm__("div %2" : \
++	    "=d" (r), "=a" (q) : \
++	    "r" (w), "d" (hi), "a" (lo))
++#else
++#define DIVMOD_WORD(q, r, hi, lo, w) do { \
++    BignumDblInt n = (((BignumDblInt)hi) << BIGNUM_INT_BITS) | lo; \
++    q = n / w; \
++    r = n % w; \
++} while (0)
++#endif
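++
++/* Example: DIVMOD_WORD(q, r, 0, 100, 7) forms the 64-bit value
++ * n = (0 << 32) | 100 = 100 and yields q = 14, r = 2.  The quotient must fit
++ * in 32 bits, so callers keep hi < w (see the usage notes above). */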
++
++//    q = n / w;
++//    r = n % w;
++
++#define BIGNUM_INT_BYTES (BIGNUM_INT_BITS / 8)
++
++#define BIGNUM_INTERNAL
++
++static Bignum newbn(void *mem_ctx, int length)
++{
++    Bignum b = snewn(mem_ctx, length + 1, BignumInt);
++    //if (!b)
++    //abort();		       /* FIXME */
++    DWC_MEMSET(b, 0, (length + 1) * sizeof(*b));
++    b[0] = length;
++    return b;
++}
++
++void freebn(void *mem_ctx, Bignum b)
++{
++    /*
++     * Burn the evidence, just in case.
++     */
++    DWC_MEMSET(b, 0, sizeof(b[0]) * (b[0] + 1));
++    sfree(mem_ctx, b);
++}
++
++/*
++ * Compute c = a * b.
++ * Input is in the first len words of a and b.
++ * Result is returned in the first 2*len words of c.
++ */
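++/* For example, with 32-bit words and len = 2: a = {0x1, 0x2} represents
++ * 0x100000002 and b = {0x0, 0x3} represents 3, so c is filled with
++ * {0x0, 0x0, 0x3, 0x6}, i.e. 0x300000006 (most significant word first). */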
++static void internal_mul(BignumInt *a, BignumInt *b,
++			 BignumInt *c, int len)
++{
++    int i, j;
++    BignumDblInt t;
++
++    for (j = 0; j < 2 * len; j++)
++	c[j] = 0;
++
++    for (i = len - 1; i >= 0; i--) {
++	t = 0;
++	for (j = len - 1; j >= 0; j--) {
++	    t += MUL_WORD(a[i], (BignumDblInt) b[j]);
++	    t += (BignumDblInt) c[i + j + 1];
++	    c[i + j + 1] = (BignumInt) t;
++	    t = t >> BIGNUM_INT_BITS;
++	}
++	c[i] = (BignumInt) t;
++    }
++}
++
++static void internal_add_shifted(BignumInt *number,
++				 unsigned n, int shift)
++{
++    int word = 1 + (shift / BIGNUM_INT_BITS);
++    int bshift = shift % BIGNUM_INT_BITS;
++    BignumDblInt addend;
++
++    addend = (BignumDblInt)n << bshift;
++
++    while (addend) {
++	addend += number[word];
++	number[word] = (BignumInt) addend & BIGNUM_INT_MASK;
++	addend >>= BIGNUM_INT_BITS;
++	word++;
++    }
++}
++
++/*
++ * Compute a = a % m.
++ * Input in first alen words of a and first mlen words of m.
++ * Output in first alen words of a
++ * (of which first alen-mlen words will be zero).
++ * The MSW of m MUST have its high bit set.
++ * Quotient is accumulated in the `quotient' array, which is a Bignum
++ * rather than the internal bigendian format. Quotient parts are shifted
++ * left by `qshift' before adding into quot.
++ */
++static void internal_mod(BignumInt *a, int alen,
++			 BignumInt *m, int mlen,
++			 BignumInt *quot, int qshift)
++{
++    BignumInt m0, m1;
++    unsigned int h;
++    int i, k;
++
++    m0 = m[0];
++    if (mlen > 1)
++	m1 = m[1];
++    else
++	m1 = 0;
++
++    for (i = 0; i <= alen - mlen; i++) {
++	BignumDblInt t;
++	unsigned int q, r, c, ai1;
++
++	if (i == 0) {
++	    h = 0;
++	} else {
++	    h = a[i - 1];
++	    a[i - 1] = 0;
++	}
++
++	if (i == alen - 1)
++	    ai1 = 0;
++	else
++	    ai1 = a[i + 1];
++
++	/* Find q = h:a[i] / m0 */
++	if (h >= m0) {
++	    /*
++	     * Special case.
++	     *
++	     * To illustrate it, suppose a BignumInt is 8 bits, and
++	     * we are dividing (say) A1:23:45:67 by A1:B2:C3. Then
++	     * our initial division will be 0xA123 / 0xA1, which
++	     * will give a quotient of 0x100 and a divide overflow.
++	     * However, the invariants in this division algorithm
++	     * are not violated, since the full number A1:23:... is
++	     * _less_ than the quotient prefix A1:B2:... and so the
++	     * following correction loop would have sorted it out.
++	     *
++	     * In this situation we set q to be the largest
++	     * quotient we _can_ stomach (0xFF, of course).
++	     */
++	    q = BIGNUM_INT_MASK;
++	} else {
++	    /* Macro doesn't want an array subscript expression passed
++	     * into it (see definition), so use a temporary. */
++	    BignumInt tmplo = a[i];
++	    DIVMOD_WORD(q, r, h, tmplo, m0);
++
++	    /* Refine our estimate of q by looking at
++	     h:a[i]:a[i+1] / m0:m1 */
++	    t = MUL_WORD(m1, q);
++	    if (t > ((BignumDblInt) r << BIGNUM_INT_BITS) + ai1) {
++		q--;
++		t -= m1;
++		r = (r + m0) & BIGNUM_INT_MASK;     /* overflow? */
++		if (r >= (BignumDblInt) m0 &&
++		    t > ((BignumDblInt) r << BIGNUM_INT_BITS) + ai1) q--;
++	    }
++	}
++
++	/* Subtract q * m from a[i...] */
++	c = 0;
++	for (k = mlen - 1; k >= 0; k--) {
++	    t = MUL_WORD(q, m[k]);
++	    t += c;
++	    c = (unsigned)(t >> BIGNUM_INT_BITS);
++	    if ((BignumInt) t > a[i + k])
++		c++;
++	    a[i + k] -= (BignumInt) t;
++	}
++
++	/* Add back m in case of borrow */
++	if (c != h) {
++	    t = 0;
++	    for (k = mlen - 1; k >= 0; k--) {
++		t += m[k];
++		t += a[i + k];
++		a[i + k] = (BignumInt) t;
++		t = t >> BIGNUM_INT_BITS;
++	    }
++	    q--;
++	}
++	if (quot)
++	    internal_add_shifted(quot, q, qshift + BIGNUM_INT_BITS * (alen - mlen - i));
++    }
++}
++
++/*
++ * Compute p % mod.
++ * The most significant word of mod MUST be non-zero.
++ * We assume that the result array is the same size as the mod array.
++ * We optionally write out a quotient if `quotient' is non-NULL.
++ * We can avoid writing out the result if `result' is NULL.
++ */
++void bigdivmod(void *mem_ctx, Bignum p, Bignum mod, Bignum result, Bignum quotient)
++{
++    BignumInt *n, *m;
++    int mshift;
++    int plen, mlen, i, j;
++
++    /* Allocate m of size mlen, copy mod to m */
++    /* We use big endian internally */
++    mlen = mod[0];
++    m = snewn(mem_ctx, mlen, BignumInt);
++    //if (!m)
++    //abort();		       /* FIXME */
++    for (j = 0; j < mlen; j++)
++	m[j] = mod[mod[0] - j];
++
++    /* Shift m left to make msb bit set */
++    for (mshift = 0; mshift < BIGNUM_INT_BITS-1; mshift++)
++	if ((m[0] << mshift) & BIGNUM_TOP_BIT)
++	    break;
++    if (mshift) {
++	for (i = 0; i < mlen - 1; i++)
++	    m[i] = (m[i] << mshift) | (m[i + 1] >> (BIGNUM_INT_BITS - mshift));
++	m[mlen - 1] = m[mlen - 1] << mshift;
++    }
++
++    plen = p[0];
++    /* Ensure plen > mlen */
++    if (plen <= mlen)
++	plen = mlen + 1;
++
++    /* Allocate n of size plen, copy p to n */
++    n = snewn(mem_ctx, plen, BignumInt);
++    //if (!n)
++    //abort();		       /* FIXME */
++    for (j = 0; j < plen; j++)
++	n[j] = 0;
++    for (j = 1; j <= (int)p[0]; j++)
++	n[plen - j] = p[j];
++
++    /* Main computation */
++    internal_mod(n, plen, m, mlen, quotient, mshift);
++
++    /* Fixup result in case the modulus was shifted */
++    if (mshift) {
++	for (i = plen - mlen - 1; i < plen - 1; i++)
++	    n[i] = (n[i] << mshift) | (n[i + 1] >> (BIGNUM_INT_BITS - mshift));
++	n[plen - 1] = n[plen - 1] << mshift;
++	internal_mod(n, plen, m, mlen, quotient, 0);
++	for (i = plen - 1; i >= plen - mlen; i--)
++	    n[i] = (n[i] >> mshift) | (n[i - 1] << (BIGNUM_INT_BITS - mshift));
++    }
++
++    /* Copy result to buffer */
++    if (result) {
++	for (i = 1; i <= (int)result[0]; i++) {
++	    int j = plen - i;
++	    result[i] = j >= 0 ? n[j] : 0;
++	}
++    }
++
++    /* Free temporary arrays */
++    for (i = 0; i < mlen; i++)
++	m[i] = 0;
++    sfree(mem_ctx, m);
++    for (i = 0; i < plen; i++)
++	n[i] = 0;
++    sfree(mem_ctx, n);
++}
++
++/*
++ * Simple remainder.
++ */
++Bignum bigmod(void *mem_ctx, Bignum a, Bignum b)
++{
++    Bignum r = newbn(mem_ctx, b[0]);
++    bigdivmod(mem_ctx, a, b, r, NULL);
++    return r;
++}
++
++/*
++ * Compute (base ^ exp) % mod.
++ */
++Bignum dwc_modpow(void *mem_ctx, Bignum base_in, Bignum exp, Bignum mod)
++{
++    BignumInt *a, *b, *n, *m;
++    int mshift;
++    int mlen, i, j;
++    Bignum base, result;
++
++    /*
++     * The most significant word of mod needs to be non-zero. It
++     * should already be, but let's make sure.
++     */
++    //assert(mod[mod[0]] != 0);
++
++    /*
++     * Make sure the base is smaller than the modulus, by reducing
++     * it modulo the modulus if not.
++     */
++    base = bigmod(mem_ctx, base_in, mod);
++
++    /* Allocate m of size mlen, copy mod to m */
++    /* We use big endian internally */
++    mlen = mod[0];
++    m = snewn(mem_ctx, mlen, BignumInt);
++    //if (!m)
++    //abort();		       /* FIXME */
++    for (j = 0; j < mlen; j++)
++	m[j] = mod[mod[0] - j];
++
++    /* Shift m left to make msb bit set */
++    for (mshift = 0; mshift < BIGNUM_INT_BITS - 1; mshift++)
++	if ((m[0] << mshift) & BIGNUM_TOP_BIT)
++	    break;
++    if (mshift) {
++	for (i = 0; i < mlen - 1; i++)
++	    m[i] =
++		(m[i] << mshift) | (m[i + 1] >>
++				    (BIGNUM_INT_BITS - mshift));
++	m[mlen - 1] = m[mlen - 1] << mshift;
++    }
++
++    /* Allocate n of size mlen, copy base to n */
++    n = snewn(mem_ctx, mlen, BignumInt);
++    //if (!n)
++    //abort();		       /* FIXME */
++    i = mlen - base[0];
++    for (j = 0; j < i; j++)
++	n[j] = 0;
++    for (j = 0; j < base[0]; j++)
++	n[i + j] = base[base[0] - j];
++
++    /* Allocate a and b of size 2*mlen. Set a = 1 */
++    a = snewn(mem_ctx, 2 * mlen, BignumInt);
++    //if (!a)
++    //abort();		       /* FIXME */
++    b = snewn(mem_ctx, 2 * mlen, BignumInt);
++    //if (!b)
++    //abort();		       /* FIXME */
++    for (i = 0; i < 2 * mlen; i++)
++	a[i] = 0;
++    a[2 * mlen - 1] = 1;
++
++    /* Skip leading zero bits of exp. */
++    i = 0;
++    j = BIGNUM_INT_BITS - 1;
++    while (i < exp[0] && (exp[exp[0] - i] & (1 << j)) == 0) {
++	j--;
++	if (j < 0) {
++	    i++;
++	    j = BIGNUM_INT_BITS - 1;
++	}
++    }
++
++    /* Main computation */
++    while (i < exp[0]) {
++	while (j >= 0) {
++	    internal_mul(a + mlen, a + mlen, b, mlen);
++	    internal_mod(b, mlen * 2, m, mlen, NULL, 0);
++	    if ((exp[exp[0] - i] & (1 << j)) != 0) {
++		internal_mul(b + mlen, n, a, mlen);
++		internal_mod(a, mlen * 2, m, mlen, NULL, 0);
++	    } else {
++		BignumInt *t;
++		t = a;
++		a = b;
++		b = t;
++	    }
++	    j--;
++	}
++	i++;
++	j = BIGNUM_INT_BITS - 1;
++    }
++
++    /* Fixup result in case the modulus was shifted */
++    if (mshift) {
++	for (i = mlen - 1; i < 2 * mlen - 1; i++)
++	    a[i] =
++		(a[i] << mshift) | (a[i + 1] >>
++				    (BIGNUM_INT_BITS - mshift));
++	a[2 * mlen - 1] = a[2 * mlen - 1] << mshift;
++	internal_mod(a, mlen * 2, m, mlen, NULL, 0);
++	for (i = 2 * mlen - 1; i >= mlen; i--)
++	    a[i] =
++		(a[i] >> mshift) | (a[i - 1] <<
++				    (BIGNUM_INT_BITS - mshift));
++    }
++
++    /* Copy result to buffer */
++    result = newbn(mem_ctx, mod[0]);
++    for (i = 0; i < mlen; i++)
++	result[result[0] - i] = a[i + mlen];
++    while (result[0] > 1 && result[result[0]] == 0)
++	result[0]--;
++
++    /* Free temporary arrays */
++    for (i = 0; i < 2 * mlen; i++)
++	a[i] = 0;
++    sfree(mem_ctx, a);
++    for (i = 0; i < 2 * mlen; i++)
++	b[i] = 0;
++    sfree(mem_ctx, b);
++    for (i = 0; i < mlen; i++)
++	m[i] = 0;
++    sfree(mem_ctx, m);
++    for (i = 0; i < mlen; i++)
++	n[i] = 0;
++    sfree(mem_ctx, n);
++
++    freebn(mem_ctx, base);
++
++    return result;
++}
++
++
++#ifdef UNITTEST
++
++static __u32 dh_p[] = {
++	96,
++	0xFFFFFFFF,
++	0xFFFFFFFF,
++	0xA93AD2CA,
++	0x4B82D120,
++	0xE0FD108E,
++	0x43DB5BFC,
++	0x74E5AB31,
++	0x08E24FA0,
++	0xBAD946E2,
++	0x770988C0,
++	0x7A615D6C,
++	0xBBE11757,
++	0x177B200C,
++	0x521F2B18,
++	0x3EC86A64,
++	0xD8760273,
++	0xD98A0864,
++	0xF12FFA06,
++	0x1AD2EE6B,
++	0xCEE3D226,
++	0x4A25619D,
++	0x1E8C94E0,
++	0xDB0933D7,
++	0xABF5AE8C,
++	0xA6E1E4C7,
++	0xB3970F85,
++	0x5D060C7D,
++	0x8AEA7157,
++	0x58DBEF0A,
++	0xECFB8504,
++	0xDF1CBA64,
++	0xA85521AB,
++	0x04507A33,
++	0xAD33170D,
++	0x8AAAC42D,
++	0x15728E5A,
++	0x98FA0510,
++	0x15D22618,
++	0xEA956AE5,
++	0x3995497C,
++	0x95581718,
++	0xDE2BCBF6,
++	0x6F4C52C9,
++	0xB5C55DF0,
++	0xEC07A28F,
++	0x9B2783A2,
++	0x180E8603,
++	0xE39E772C,
++	0x2E36CE3B,
++	0x32905E46,
++	0xCA18217C,
++	0xF1746C08,
++	0x4ABC9804,
++	0x670C354E,
++	0x7096966D,
++	0x9ED52907,
++	0x208552BB,
++	0x1C62F356,
++	0xDCA3AD96,
++	0x83655D23,
++	0xFD24CF5F,
++	0x69163FA8,
++	0x1C55D39A,
++	0x98DA4836,
++	0xA163BF05,
++	0xC2007CB8,
++	0xECE45B3D,
++	0x49286651,
++	0x7C4B1FE6,
++	0xAE9F2411,
++	0x5A899FA5,
++	0xEE386BFB,
++	0xF406B7ED,
++	0x0BFF5CB6,
++	0xA637ED6B,
++	0xF44C42E9,
++	0x625E7EC6,
++	0xE485B576,
++	0x6D51C245,
++	0x4FE1356D,
++	0xF25F1437,
++	0x302B0A6D,
++	0xCD3A431B,
++	0xEF9519B3,
++	0x8E3404DD,
++	0x514A0879,
++	0x3B139B22,
++	0x020BBEA6,
++	0x8A67CC74,
++	0x29024E08,
++	0x80DC1CD1,
++	0xC4C6628B,
++	0x2168C234,
++	0xC90FDAA2,
++	0xFFFFFFFF,
++	0xFFFFFFFF,
++};
++
++static __u32 dh_a[] = {
++	8,
++	0xdf367516,
++	0x86459caa,
++	0xe2d459a4,
++	0xd910dae0,
++	0x8a8b5e37,
++	0x67ab31c6,
++	0xf0b55ea9,
++	0x440051d6,
++};
++
++static __u32 dh_b[] = {
++	8,
++	0xded92656,
++	0xe07a048a,
++	0x6fa452cd,
++	0x2df89d30,
++	0xc75f1b0f,
++	0x8ce3578f,
++	0x7980a324,
++	0x5daec786,
++};
++
++static __u32 dh_g[] = {
++	1,
++	2,
++};
++
++int main(void)
++{
++	int i;
++	__u32 *k;
++	k = dwc_modpow(NULL, dh_g, dh_a, dh_p);
++
++	printf("\n\n");
++	for (i=0; i<k[0]; i++) {
++		__u32 word32 = k[k[0] - i];
++		__u16 l = word32 & 0xffff;
++		__u16 m = (word32 & 0xffff0000) >> 16;
++		printf("%04x %04x ", m, l);
++		if (!((i + 1)%13)) printf("\n");
++	}
++	printf("\n\n");
++
++	if ((k[0] == 0x60) && (k[1] == 0x28e490e5) && (k[0x60] == 0x5a0d3d4e)) {
++		printf("PASS\n\n");
++	}
++	else {
++		printf("FAIL\n\n");
++	}
++
++	return 0;
++}
++
++#endif /* UNITTEST */
++
++#endif /* CONFIG_MACH_IPMATE */
++
++#endif /*DWC_CRYPTOLIB */
+--- /dev/null
++++ b/drivers/usb/host/dwc_common_port/dwc_modpow.h
+@@ -0,0 +1,34 @@
++/*
++ * dwc_modpow.h
++ * See dwc_modpow.c for license and changes
++ */
++#ifndef _DWC_MODPOW_H
++#define _DWC_MODPOW_H
++
++#ifdef __cplusplus
++extern "C" {
++#endif
++
++#include "dwc_os.h"
++
++/** @file
++ *
++ * This file defines the modular exponentiation function which is only used
++ * internally by the DWC UWB modules for calculation of PKs during numeric
++ * association.  The routine is taken from PuTTY, an open-source terminal
++ * emulator.  The PuTTY license is preserved in the dwc_modpow.c file.
++ *
++ */
++
++typedef uint32_t BignumInt;
++typedef uint64_t BignumDblInt;
++typedef BignumInt *Bignum;
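++
++/* A Bignum is stored as an array of 32-bit words: b[0] holds the length in
++ * words and b[1]..b[b[0]] hold the value, least significant word first (see
++ * newbn() and bigdivmod() in dwc_modpow.c).  For example, the generator
++ * g = 2 used by the unit test at the end of dwc_modpow.c is simply { 1, 2 },
++ * and dwc_modpow(ctx, g, a, p) returns g^a mod p in the same format. */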
++
++/* Compute modular exponentiation */
++extern Bignum dwc_modpow(void *mem_ctx, Bignum base_in, Bignum exp, Bignum mod);
++
++#ifdef __cplusplus
++}
++#endif
++
++#endif /* _DWC_MODPOW_H */
+--- /dev/null
++++ b/drivers/usb/host/dwc_common_port/dwc_notifier.c
+@@ -0,0 +1,319 @@
++#ifdef DWC_NOTIFYLIB
++
++#include "dwc_notifier.h"
++#include "dwc_list.h"
++
++typedef struct dwc_observer {
++	void *observer;
++	dwc_notifier_callback_t callback;
++	void *data;
++	char *notification;
++	DWC_CIRCLEQ_ENTRY(dwc_observer) list_entry;
++} observer_t;
++
++DWC_CIRCLEQ_HEAD(observer_queue, dwc_observer);
++
++typedef struct dwc_notifier {
++	void *mem_ctx;
++	void *object;
++	struct observer_queue observers;
++	DWC_CIRCLEQ_ENTRY(dwc_notifier) list_entry;
++} notifier_t;
++
++DWC_CIRCLEQ_HEAD(notifier_queue, dwc_notifier);
++
++typedef struct manager {
++	void *mem_ctx;
++	void *wkq_ctx;
++	dwc_workq_t *wq;
++//	dwc_mutex_t *mutex;
++	struct notifier_queue notifiers;
++} manager_t;
++
++static manager_t *manager = NULL;
++
++static int create_manager(void *mem_ctx, void *wkq_ctx)
++{
++	manager = dwc_alloc(mem_ctx, sizeof(manager_t));
++	if (!manager) {
++		return -DWC_E_NO_MEMORY;
++	}
++
++	DWC_CIRCLEQ_INIT(&manager->notifiers);
++
++	manager->wq = dwc_workq_alloc(wkq_ctx, "DWC Notification WorkQ");
++	if (!manager->wq) {
++		return -DWC_E_NO_MEMORY;
++	}
++
++	return 0;
++}
++
++static void free_manager(void)
++{
++	dwc_workq_free(manager->wq);
++
++	/* All notifiers must have unregistered themselves before this module
++	 * can be removed.  Hitting this assertion indicates a programmer
++	 * error. */
++	DWC_ASSERT(DWC_CIRCLEQ_EMPTY(&manager->notifiers),
++		   "Notification manager being freed before all notifiers have been removed");
++	dwc_free(manager->mem_ctx, manager);
++}
++
++#ifdef DEBUG
++static void dump_manager(void)
++{
++	notifier_t *n;
++	observer_t *o;
++
++	DWC_ASSERT(manager, "Notification manager not found");
++
++	DWC_DEBUG("List of all notifiers and observers:\n");
++	DWC_CIRCLEQ_FOREACH(n, &manager->notifiers, list_entry) {
++		DWC_DEBUG("Notifier %p has observers:\n", n->object);
++		DWC_CIRCLEQ_FOREACH(o, &n->observers, list_entry) {
++			DWC_DEBUG("    %p watching %s\n", o->observer, o->notification);
++		}
++	}
++}
++#else
++#define dump_manager(...)
++#endif
++
++static observer_t *alloc_observer(void *mem_ctx, void *observer, char *notification,
++				  dwc_notifier_callback_t callback, void *data)
++{
++	observer_t *new_observer = dwc_alloc(mem_ctx, sizeof(observer_t));
++
++	if (!new_observer) {
++		return NULL;
++	}
++
++	DWC_CIRCLEQ_INIT_ENTRY(new_observer, list_entry);
++	new_observer->observer = observer;
++	new_observer->notification = notification;
++	new_observer->callback = callback;
++	new_observer->data = data;
++	return new_observer;
++}
++
++static void free_observer(void *mem_ctx, observer_t *observer)
++{
++	dwc_free(mem_ctx, observer);
++}
++
++static notifier_t *alloc_notifier(void *mem_ctx, void *object)
++{
++	notifier_t *notifier;
++
++	if (!object) {
++		return NULL;
++	}
++
++	notifier = dwc_alloc(mem_ctx, sizeof(notifier_t));
++	if (!notifier) {
++		return NULL;
++	}
++
++	DWC_CIRCLEQ_INIT(&notifier->observers);
++	DWC_CIRCLEQ_INIT_ENTRY(notifier, list_entry);
++
++	notifier->mem_ctx = mem_ctx;
++	notifier->object = object;
++	return notifier;
++}
++
++static void free_notifier(notifier_t *notifier)
++{
++	observer_t *observer, *observer2;
++
++	/* Use the safe iterator because each observer is freed while the
++	 * list is still being walked. */
++	DWC_CIRCLEQ_FOREACH_SAFE(observer, observer2, &notifier->observers, list_entry) {
++		free_observer(notifier->mem_ctx, observer);
++	}
++
++	dwc_free(notifier->mem_ctx, notifier);
++}
++
++static notifier_t *find_notifier(void *object)
++{
++	notifier_t *notifier;
++
++	DWC_ASSERT(manager, "Notification manager not found");
++
++	if (!object) {
++		return NULL;
++	}
++
++	DWC_CIRCLEQ_FOREACH(notifier, &manager->notifiers, list_entry) {
++		if (notifier->object == object) {
++			return notifier;
++		}
++	}
++
++	return NULL;
++}
++
++int dwc_alloc_notification_manager(void *mem_ctx, void *wkq_ctx)
++{
++	return create_manager(mem_ctx, wkq_ctx);
++}
++
++void dwc_free_notification_manager(void)
++{
++	free_manager();
++}
++
++dwc_notifier_t *dwc_register_notifier(void *mem_ctx, void *object)
++{
++	notifier_t *notifier;
++
++	DWC_ASSERT(manager, "Notification manager not found");
++
++	notifier = find_notifier(object);
++	if (notifier) {
++		DWC_ERROR("Notifier %p is already registered\n", object);
++		return NULL;
++	}
++
++	notifier = alloc_notifier(mem_ctx, object);
++	if (!notifier) {
++		return NULL;
++	}
++
++	DWC_CIRCLEQ_INSERT_TAIL(&manager->notifiers, notifier, list_entry);
++
++	DWC_INFO("Notifier %p registered", object);
++	dump_manager();
++
++	return notifier;
++}
++
++void dwc_unregister_notifier(dwc_notifier_t *notifier)
++{
++	DWC_ASSERT(manager, "Notification manager not found");
++
++	if (!DWC_CIRCLEQ_EMPTY(&notifier->observers)) {
++		observer_t *o;
++
++		DWC_ERROR("Notifier %p has active observers when removing\n", notifier->object);
++		DWC_CIRCLEQ_FOREACH(o, &notifier->observers, list_entry) {
++			DWC_DEBUGC("    %p watching %s\n", o->observer, o->notification);
++		}
++
++		DWC_ASSERT(DWC_CIRCLEQ_EMPTY(&notifier->observers),
++			   "Notifier %p has active observers when removing", notifier);
++	}
++
++	DWC_CIRCLEQ_REMOVE_INIT(&manager->notifiers, notifier, list_entry);
++	free_notifier(notifier);
++
++	DWC_INFO("Notifier unregistered");
++	dump_manager();
++}
++
++/* Add an observer to observe the notifier for a particular state, event, or notification. */
++int dwc_add_observer(void *observer, void *object, char *notification,
++		     dwc_notifier_callback_t callback, void *data)
++{
++	notifier_t *notifier = find_notifier(object);
++	observer_t *new_observer;
++
++	if (!notifier) {
++		DWC_ERROR("Notifier %p is not found when adding observer\n", object);
++		return -DWC_E_INVALID;
++	}
++
++	new_observer = alloc_observer(notifier->mem_ctx, observer, notification, callback, data);
++	if (!new_observer) {
++		return -DWC_E_NO_MEMORY;
++	}
++
++	DWC_CIRCLEQ_INSERT_TAIL(&notifier->observers, new_observer, list_entry);
++
++	DWC_INFO("Added observer %p to notifier %p observing notification %s, callback=%p, data=%p",
++		 observer, object, notification, callback, data);
++
++	dump_manager();
++	return 0;
++}
++
++int dwc_remove_observer(void *observer)
++{
++	notifier_t *n;
++
++	DWC_ASSERT(manager, "Notification manager not found");
++
++	DWC_CIRCLEQ_FOREACH(n, &manager->notifiers, list_entry) {
++		observer_t *o;
++		observer_t *o2;
++
++		DWC_CIRCLEQ_FOREACH_SAFE(o, o2, &n->observers, list_entry) {
++			if (o->observer == observer) {
++				DWC_CIRCLEQ_REMOVE_INIT(&n->observers, o, list_entry);
++				DWC_INFO("Removing observer %p from notifier %p watching notification %s:",
++					 o->observer, n->object, o->notification);
++				free_observer(n->mem_ctx, o);
++			}
++		}
++	}
++
++	dump_manager();
++	return 0;
++}
++
++typedef struct callback_data {
++	void *mem_ctx;
++	dwc_notifier_callback_t cb;
++	void *observer;
++	void *data;
++	void *object;
++	char *notification;
++	void *notification_data;
++} cb_data_t;
++
++static void cb_task(void *data)
++{
++	cb_data_t *cb = (cb_data_t *)data;
++
++	cb->cb(cb->object, cb->notification, cb->observer, cb->notification_data, cb->data);
++	dwc_free(cb->mem_ctx, cb);
++}
++
++void dwc_notify(dwc_notifier_t *notifier, char *notification, void *notification_data)
++{
++	observer_t *o;
++
++	DWC_ASSERT(manager, "Notification manager not found");
++
++	DWC_CIRCLEQ_FOREACH(o, &notifier->observers, list_entry) {
++		int len = DWC_STRLEN(notification);
++
++		if (DWC_STRLEN(o->notification) != len) {
++			continue;
++		}
++
++		if (DWC_STRNCMP(o->notification, notification, len) == 0) {
++			cb_data_t *cb_data = dwc_alloc(notifier->mem_ctx, sizeof(cb_data_t));
++
++			if (!cb_data) {
++				DWC_ERROR("Failed to allocate callback data\n");
++				return;
++			}
++
++			cb_data->mem_ctx = notifier->mem_ctx;
++			cb_data->cb = o->callback;
++			cb_data->observer = o->observer;
++			cb_data->data = o->data;
++			cb_data->object = notifier->object;
++			cb_data->notification = notification;
++			cb_data->notification_data = notification_data;
++			DWC_DEBUGC("Observer found %p for notification %s\n", o->observer, notification);
++			DWC_WORKQ_SCHEDULE(manager->wq, cb_task, cb_data,
++					   "Notify callback from %p for Notification %s, to observer %p",
++					   cb_data->object, notification, cb_data->observer);
++		}
++	}
++}
++
++#endif	/* DWC_NOTIFYLIB */
+--- /dev/null
++++ b/drivers/usb/host/dwc_common_port/dwc_notifier.h
+@@ -0,0 +1,122 @@
++
++#ifndef __DWC_NOTIFIER_H__
++#define __DWC_NOTIFIER_H__
++
++#ifdef __cplusplus
++extern "C" {
++#endif
++
++#include "dwc_os.h"
++
++/** @file
++ *
++ * A simple implementation of the Observer pattern.  Any "module" can
++ * register as an observer or notifier.  The notion of "module" is abstract and
++ * can mean anything used to identify either an observer or notifier.  Usually
++ * it will be a pointer to a data structure which contains some state, ie an
++ * object.
++ *
++ * Before any notifiers can be added, the global notification manager must be
++ * brought up with dwc_alloc_notification_manager().
++ * dwc_free_notification_manager() will bring it down and free all resources.
++ * These would typically be called upon module load and unload.  The
++ * notification manager is a single global instance that handles all registered
++ * observable modules and observers so this should be done only once.
++ *
++ * A module can be made observable by using Notifications to publicize some
++ * general information about its state or operation.  It does not care who
++ * listens, or even if anyone listens, or what they do with the information.
++ * The observable module does not need to know anything about its observers,
++ * their interface, or their state or data.
++ *
++ * Any module can register to emit Notifications.  It should publish a list of
++ * notifications that it can emit and their behavior, such as when they will get
++ * triggered, and what information will be provided to the observer.  Then it
++ * should register itself as an observable module. See dwc_register_notifier().
++ *
++ * Any module can observe any observable, registered module, provided it has a
++ * handle to the other module and knows what notifications to observe.  See
++ * dwc_add_observer().
++ *
++ * A function of type dwc_notifier_callback_t is called whenever a notification
++ * is triggered with one or more observers observing it.  This function is
++ * called in its own process context so it may sleep or block if needed.  It is
++ * guaranteed to be called sometime after the notification has occurred and will
++ * be called once for each time the notification is triggered.  It will NOT be
++ * called in the same process context used to trigger the notification.
++ *
++ * @section Limitations
++ *
++ * Keep in mind that Notifications that can be triggered in rapid succession may
++ * schedule more callback processes than the system can handle.  Be aware of this
++ * limitation when designing to use notifications, and only add notifications for
++ * appropriate observable information.
++ *
++ * Also, Notification callbacks are not synchronous.  If you need to synchronize
++ * behavior between a module and its observers you must use other means, and
++ * perhaps that will mean Notifications are not the proper solution.
++ */
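++
++/* Minimal usage sketch (illustrative only; "my_obj", "my_observer", "my_cb"
++ * and the notification string are hypothetical names):
++ *
++ *   dwc_alloc_notification_manager(mem_ctx, wkq_ctx);
++ *
++ *   dwc_notifier_t *n = dwc_register_notifier(mem_ctx, my_obj);
++ *   dwc_add_observer(my_observer, my_obj, "DEVICE_CONNECTED", my_cb, my_data);
++ *
++ *   dwc_notify(n, "DEVICE_CONNECTED", notification_data);
++ *
++ *   dwc_remove_observer(my_observer);
++ *   dwc_unregister_notifier(n);
++ *   dwc_free_notification_manager();
++ */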
++
++struct dwc_notifier;
++typedef struct dwc_notifier dwc_notifier_t;
++
++/** The callback function must be of this type.
++ *
++ * @param object This is the object that is being observed.
++ * @param notification This is the notification that was triggered.
++ * @param observer This is the observer
++ * @param notification_data This is notification-specific data that the notifier
++ * has included in this notification.  The value of this should be published in
++ * the documentation of the observable module with the notifications.
++ * @param user_data This is any custom data that the observer provided when
++ * adding itself as an observer to the notification. */
++typedef void (*dwc_notifier_callback_t)(void *object, char *notification, void *observer,
++					void *notification_data, void *user_data);
++
++/** Brings up the notification manager. */
++extern int dwc_alloc_notification_manager(void *mem_ctx, void *wkq_ctx);
++/** Brings down the notification manager. */
++extern void dwc_free_notification_manager(void);
++
++/** This function registers an observable module.  A dwc_notifier_t object is
++ * returned to the observable module.  This is an opaque object that is used by
++ * the observable module to trigger notifications.  This object should only be
++ * accessible to functions that are authorized to trigger notifications for this
++ * module.  Observers do not need this object. */
++extern dwc_notifier_t *dwc_register_notifier(void *mem_ctx, void *object);
++
++/** This function unregisters an observable module.  All observers have to be
++ * removed prior to unregistration. */
++extern void dwc_unregister_notifier(dwc_notifier_t *notifier);
++
++/** Add a module as an observer to the observable module.  The observable module
++ * needs to have previously registered with the notification manager.
++ *
++ * @param observer The observer module
++ * @param object The module to observe
++ * @param notification The notification to observe
++ * @param callback The callback function to call
++ * @param user_data Any additional user data to pass into the callback function */
++extern int dwc_add_observer(void *observer, void *object, char *notification,
++			    dwc_notifier_callback_t callback, void *user_data);
++
++/** Removes the specified observer from all notifications that it is currently
++ * observing. */
++extern int dwc_remove_observer(void *observer);
++
++/** This function triggers a Notification.  It should be called by the
++ * observable module, or any module or library which the observable module
++ * allows to trigger notifications on its behalf, such as the dwc_cc_t.
++ *
++ * dwc_notify is a non-blocking function.  Callbacks are scheduled to be called
++ * in their own process context for each trigger.  Callbacks can be blocking.
++ * dwc_notify can be called from interrupt context if needed.
++ */
++void dwc_notify(dwc_notifier_t *notifier, char *notification, void *notification_data);
++
++#ifdef __cplusplus
++}
++#endif
++
++#endif /* __DWC_NOTIFIER_H__ */
+--- /dev/null
++++ b/drivers/usb/host/dwc_common_port/dwc_os.h
+@@ -0,0 +1,1276 @@
++/* =========================================================================
++ * $File: //dwh/usb_iip/dev/software/dwc_common_port_2/dwc_os.h $
++ * $Revision: #14 $
++ * $Date: 2010/11/04 $
++ * $Change: 1621695 $
++ *
++ * Synopsys Portability Library Software and documentation
++ * (hereinafter, "Software") is an Unsupported proprietary work of
++ * Synopsys, Inc. unless otherwise expressly agreed to in writing
++ * between Synopsys and you.
++ *
++ * The Software IS NOT an item of Licensed Software or Licensed Product
++ * under any End User Software License Agreement or Agreement for
++ * Licensed Product with Synopsys or any supplement thereto. You are
++ * permitted to use and redistribute this Software in source and binary
++ * forms, with or without modification, provided that redistributions
++ * of source code must retain this notice. You may not view, use,
++ * disclose, copy or distribute this file or any information contained
++ * herein except pursuant to this license grant from Synopsys. If you
++ * do not agree with this notice, including the disclaimer below, then
++ * you are not authorized to use the Software.
++ *
++ * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS"
++ * BASIS AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
++ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
++ * FOR A PARTICULAR PURPOSE ARE HEREBY DISCLAIMED. IN NO EVENT SHALL
++ * SYNOPSYS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
++ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
++ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
++ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
++ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
++ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
++ * USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
++ * DAMAGE.
++ * ========================================================================= */
++#ifndef _DWC_OS_H_
++#define _DWC_OS_H_
++
++#ifdef __cplusplus
++extern "C" {
++#endif
++
++/** @file
++ *
++ * DWC portability library, low level os-wrapper functions
++ *
++ */
++
++/* These basic types need to be defined by some OS header file or custom header
++ * file for your specific target architecture.
++ *
++ * uint8_t, int8_t, uint16_t, int16_t, uint32_t, int32_t, uint64_t, int64_t
++ *
++ * Any custom or alternate header file must be added and enabled here.
++ */
++
++#ifdef DWC_LINUX
++# include <linux/types.h>
++# ifdef CONFIG_DEBUG_MUTEXES
++#  include <linux/mutex.h>
++# endif
++# include <linux/spinlock.h>
++# include <linux/errno.h>
++# include <stdarg.h>
++#endif
++
++#if defined(DWC_FREEBSD) || defined(DWC_NETBSD)
++# include <os_dep.h>
++#endif
++
++
++/** @name Primitive Types and Values */
++
++/** We define a boolean type for consistency.  Can be either YES or NO */
++typedef uint8_t dwc_bool_t;
++#define YES  1
++#define NO   0
++
++#ifdef DWC_LINUX
++
++/** @name Error Codes */
++#define DWC_E_INVALID		EINVAL
++#define DWC_E_NO_MEMORY		ENOMEM
++#define DWC_E_NO_DEVICE		ENODEV
++#define DWC_E_NOT_SUPPORTED	EOPNOTSUPP
++#define DWC_E_TIMEOUT		ETIMEDOUT
++#define DWC_E_BUSY		EBUSY
++#define DWC_E_AGAIN		EAGAIN
++#define DWC_E_RESTART		ERESTART
++#define DWC_E_ABORT		ECONNABORTED
++#define DWC_E_SHUTDOWN		ESHUTDOWN
++#define DWC_E_NO_DATA		ENODATA
++#define DWC_E_DISCONNECT	ECONNRESET
++#define DWC_E_UNKNOWN		EINVAL
++#define DWC_E_NO_STREAM_RES	ENOSR
++#define DWC_E_COMMUNICATION	ECOMM
++#define DWC_E_OVERFLOW		EOVERFLOW
++#define DWC_E_PROTOCOL		EPROTO
++#define DWC_E_IN_PROGRESS	EINPROGRESS
++#define DWC_E_PIPE		EPIPE
++#define DWC_E_IO		EIO
++#define DWC_E_NO_SPACE		ENOSPC
++
++#else
++
++/** @name Error Codes */
++#define DWC_E_INVALID		1001
++#define DWC_E_NO_MEMORY		1002
++#define DWC_E_NO_DEVICE		1003
++#define DWC_E_NOT_SUPPORTED	1004
++#define DWC_E_TIMEOUT		1005
++#define DWC_E_BUSY		1006
++#define DWC_E_AGAIN		1007
++#define DWC_E_RESTART		1008
++#define DWC_E_ABORT		1009
++#define DWC_E_SHUTDOWN		1010
++#define DWC_E_NO_DATA		1011
++#define DWC_E_DISCONNECT	2000
++#define DWC_E_UNKNOWN		3000
++#define DWC_E_NO_STREAM_RES	4001
++#define DWC_E_COMMUNICATION	4002
++#define DWC_E_OVERFLOW		4003
++#define DWC_E_PROTOCOL		4004
++#define DWC_E_IN_PROGRESS	4005
++#define DWC_E_PIPE		4006
++#define DWC_E_IO		4007
++#define DWC_E_NO_SPACE		4008
++
++#endif
++
++
++/** @name Tracing/Logging Functions
++ *
++ * These functions provide the capability to add tracing, debugging, and error
++ * messages, as well as exceptions and assertions.  The WUDEV uses these
++ * extensively.  These could be logged to the main console, the serial port, an
++ * internal buffer, etc.  These functions could also be no-ops if they are too
++ * expensive on your system.  By default, undefining the DEBUG macro already
++ * no-ops some of these functions. */
++
++/** Returns non-zero if in interrupt context. */
++extern dwc_bool_t DWC_IN_IRQ(void);
++#define dwc_in_irq DWC_IN_IRQ
++
++/** Returns "IRQ" if DWC_IN_IRQ is true. */
++static inline char *dwc_irq(void) {
++	return DWC_IN_IRQ() ? "IRQ" : "";
++}
++
++/** Returns non-zero if in bottom-half context. */
++extern dwc_bool_t DWC_IN_BH(void);
++#define dwc_in_bh DWC_IN_BH
++
++/** Returns "BH" if DWC_IN_BH is true. */
++static inline char *dwc_bh(void) {
++	return DWC_IN_BH() ? "BH" : "";
++}
++
++/**
++ * A vprintf() clone.  Just call vprintf if you've got it.
++ */
++extern void DWC_VPRINTF(char *format, va_list args);
++#define dwc_vprintf DWC_VPRINTF
++
++/**
++ * A vsnprintf() clone.  Just call vsnprintf if you've got it.
++ */
++extern int DWC_VSNPRINTF(char *str, int size, char *format, va_list args);
++#define dwc_vsnprintf DWC_VSNPRINTF
++
++/**
++ * printf() clone.  Just call printf if you've got it.
++ */
++extern void DWC_PRINTF(char *format, ...)
++/* This provides compiler level static checking of the parameters if you're
++ * using GCC. */
++#ifdef __GNUC__
++	__attribute__ ((format(printf, 1, 2)));
++#else
++	;
++#endif
++#define dwc_printf DWC_PRINTF
++
++/**
++ * sprintf() clone.  Just call sprintf if you've got it.
++ */
++extern int DWC_SPRINTF(char *string, char *format, ...)
++#ifdef __GNUC__
++	__attribute__ ((format(printf, 2, 3)));
++#else
++	;
++#endif
++#define dwc_sprintf DWC_SPRINTF
++
++/**
++ * snprintf() clone.  Just call snprintf if you've got it.
++ */
++extern int DWC_SNPRINTF(char *string, int size, char *format, ...)
++#ifdef __GNUC__
++	__attribute__ ((format(printf, 3, 4)));
++#else
++	;
++#endif
++#define dwc_snprintf DWC_SNPRINTF
++
++/**
++ * Prints a WARNING message.  On systems that don't differentiate between
++ * warnings and regular log messages, just print it.  Indicates that something
++ * may be wrong with the driver.  Works like printf().
++ *
++ * Use the DWC_WARN macro to call this function.
++ */
++extern void __DWC_WARN(char *format, ...)
++#ifdef __GNUC__
++	__attribute__ ((format(printf, 1, 2)));
++#else
++	;
++#endif
++
++/**
++ * Prints an error message.  On systems that don't differentiate between errors
++ * and regular log messages, just print it.  Indicates that something went wrong
++ * with the driver.  Works like printf().
++ *
++ * Use the DWC_ERROR macro to call this function.
++ */
++extern void __DWC_ERROR(char *format, ...)
++#ifdef __GNUC__
++	__attribute__ ((format(printf, 1, 2)));
++#else
++	;
++#endif
++
++/**
++ * Prints an exception error message and takes some user-defined action, such
++ * as printing a backtrace or triggering a breakpoint.  Indicates that something
++ * went abnormally wrong with the driver, such as a programmer error or other
++ * exceptional condition.  It should not be ignored, so even on systems without
++ * printing capability, some action should be taken to notify the developer of
++ * it.  Works like printf().
++ */
++extern void DWC_EXCEPTION(char *format, ...)
++#ifdef __GNUC__
++	__attribute__ ((format(printf, 1, 2)));
++#else
++	;
++#endif
++#define dwc_exception DWC_EXCEPTION
++
++#ifndef DWC_OTG_DEBUG_LEV
++#define DWC_OTG_DEBUG_LEV 0
++#endif
++
++#ifdef DEBUG
++/**
++ * Prints out a debug message.  Used for logging/trace messages.
++ *
++ * Use the DWC_DEBUG macro to call this function
++ */
++extern void __DWC_DEBUG(char *format, ...)
++#ifdef __GNUC__
++	__attribute__ ((format(printf, 1, 2)));
++#else
++	;
++#endif
++#else
++#define __DWC_DEBUG printk
++#endif
++
++/**
++ * Prints out a Debug message.
++ */
++#define DWC_DEBUG(_format, _args...) __DWC_DEBUG("DEBUG:%s:%s: " _format "\n", \
++						 __func__, dwc_irq(), ## _args)
++#define dwc_debug DWC_DEBUG
++/**
++ * Prints out a Debug message if enabled at compile time.
++ */
++#if DWC_OTG_DEBUG_LEV > 0
++#define DWC_DEBUGC(_format, _args...) DWC_DEBUG(_format, ##_args )
++#else
++#define DWC_DEBUGC(_format, _args...)
++#endif
++#define dwc_debugc DWC_DEBUGC
++/**
++ * Prints out an informative message.
++ */
++#define DWC_INFO(_format, _args...) DWC_PRINTF("INFO:%s: " _format "\n", \
++					       dwc_irq(), ## _args)
++#define dwc_info DWC_INFO
++/**
++ * Prints out an informative message if enabled at compile time.
++ */
++#if DWC_OTG_DEBUG_LEV > 1
++#define DWC_INFOC(_format, _args...) DWC_INFO(_format, ##_args )
++#else
++#define DWC_INFOC(_format, _args...)
++#endif
++#define dwc_infoc DWC_INFOC
++/**
++ * Prints out a warning message.
++ */
++#define DWC_WARN(_format, _args...) __DWC_WARN("WARN:%s:%s:%d: " _format "\n", \
++					dwc_irq(), __func__, __LINE__, ## _args)
++#define dwc_warn DWC_WARN
++/**
++ * Prints out an error message.
++ */
++#define DWC_ERROR(_format, _args...) __DWC_ERROR("ERROR:%s:%s:%d: " _format "\n", \
++					dwc_irq(), __func__, __LINE__, ## _args)
++#define dwc_error DWC_ERROR
++
++#define DWC_PROTO_ERROR(_format, _args...) __DWC_WARN("ERROR:%s:%s:%d: " _format "\n", \
++						dwc_irq(), __func__, __LINE__, ## _args)
++#define dwc_proto_error DWC_PROTO_ERROR
++
++#ifdef DEBUG
++/** Prints out an exception error message if the _expr expression fails.  Disabled
++ * if DEBUG is not enabled. */
++#define DWC_ASSERT(_expr, _format, _args...) do { \
++	if (!(_expr)) { DWC_EXCEPTION("%s:%s:%d: " _format "\n", dwc_irq(), \
++				      __FILE__, __LINE__, ## _args); } \
++	} while (0)
++#else
++#define DWC_ASSERT(_x...)
++#endif
++#define dwc_assert DWC_ASSERT
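++
++/* Illustrative usage sketch of the logging macros (the variables shown are
++ * hypothetical):
++ *
++ *   DWC_DEBUG("ep%d activated", ep_num);
++ *   DWC_WARN("unexpected state %d", state);
++ *   DWC_ASSERT(dev != NULL, "device pointer is NULL");
++ */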
++
++
++/** @name Byte Ordering
++ * The following functions are for conversions between the processor's byte
++ * ordering and the specific ordering you want.
++ */
++
++/** Converts 32 bit data in CPU byte ordering to little endian. */
++extern uint32_t DWC_CPU_TO_LE32(uint32_t *p);
++#define dwc_cpu_to_le32 DWC_CPU_TO_LE32
++
++/** Converts 32 bit data in CPU byte ordering to big endian. */
++extern uint32_t DWC_CPU_TO_BE32(uint32_t *p);
++#define dwc_cpu_to_be32 DWC_CPU_TO_BE32
++
++/** Converts 32 bit little endian data to CPU byte ordering. */
++extern uint32_t DWC_LE32_TO_CPU(uint32_t *p);
++#define dwc_le32_to_cpu DWC_LE32_TO_CPU
++
++/** Converts 32 bit big endian data to CPU byte ordering. */
++extern uint32_t DWC_BE32_TO_CPU(uint32_t *p);
++#define dwc_be32_to_cpu DWC_BE32_TO_CPU
++
++/** Converts 16 bit data in CPU byte ordering to little endian. */
++extern uint16_t DWC_CPU_TO_LE16(uint16_t *p);
++#define dwc_cpu_to_le16 DWC_CPU_TO_LE16
++
++/** Converts 16 bit data in CPU byte ordering to big endian. */
++extern uint16_t DWC_CPU_TO_BE16(uint16_t *p);
++#define dwc_cpu_to_be16 DWC_CPU_TO_BE16
++
++/** Converts 16 bit little endian data to CPU byte ordering. */
++extern uint16_t DWC_LE16_TO_CPU(uint16_t *p);
++#define dwc_le16_to_cpu DWC_LE16_TO_CPU
++
++/** Converts 16 bit big endian data to CPU byte ordering. */
++extern uint16_t DWC_BE16_TO_CPU(uint16_t *p);
++#define dwc_be16_to_cpu DWC_BE16_TO_CPU
++
++
++/** @name Register Read/Write
++ *
++ * The following six functions should be implemented to read/write registers of
++ * 32-bit and 64-bit sizes.  All modules use this to read/write register values.
++ * The reg value is a pointer to the register calculated from the void *base
++ * variable passed into the driver when it is started.  */
++
++#ifdef DWC_LINUX
++/* Linux doesn't need any extra parameters for register read/write, so we
++ * just throw away the IO context parameter.
++ */
++/** Reads the content of a 32-bit register. */
++extern uint32_t DWC_READ_REG32(uint32_t volatile *reg);
++#define dwc_read_reg32(_ctx_,_reg_) DWC_READ_REG32(_reg_)
++
++/** Reads the content of a 64-bit register. */
++extern uint64_t DWC_READ_REG64(uint64_t volatile *reg);
++#define dwc_read_reg64(_ctx_,_reg_) DWC_READ_REG64(_reg_)
++
++/** Writes to a 32-bit register. */
++extern void DWC_WRITE_REG32(uint32_t volatile *reg, uint32_t value);
++#define dwc_write_reg32(_ctx_,_reg_,_val_) DWC_WRITE_REG32(_reg_, _val_)
++
++/** Writes to a 64-bit register. */
++extern void DWC_WRITE_REG64(uint64_t volatile *reg, uint64_t value);
++#define dwc_write_reg64(_ctx_,_reg_,_val_) DWC_WRITE_REG64(_reg_, _val_)
++
++/**
++ * Modify bit values in a register.  Using the
++ * algorithm: (reg_contents & ~clear_mask) | set_mask.
++ */
++extern void DWC_MODIFY_REG32(uint32_t volatile *reg, uint32_t clear_mask, uint32_t set_mask);
++#define dwc_modify_reg32(_ctx_,_reg_,_cmsk_,_smsk_) DWC_MODIFY_REG32(_reg_,_cmsk_,_smsk_)
++extern void DWC_MODIFY_REG64(uint64_t volatile *reg, uint64_t clear_mask, uint64_t set_mask);
++#define dwc_modify_reg64(_ctx_,_reg_,_cmsk_,_smsk_) DWC_MODIFY_REG64(_reg_,_cmsk_,_smsk_)
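++
++/* Illustrative example of the modify algorithm (the register name is
++ * hypothetical): clear bits 0-1 and set bit 4 of a control register in one
++ * call:
++ *
++ *   DWC_MODIFY_REG32(&regs->ctrl, 0x3, 0x10);
++ *   // new value = (old value & ~0x3) | 0x10
++ */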
++
++#endif	/* DWC_LINUX */
++
++#if defined(DWC_FREEBSD) || defined(DWC_NETBSD)
++typedef struct dwc_ioctx {
++	struct device *dev;
++	bus_space_tag_t iot;
++	bus_space_handle_t ioh;
++} dwc_ioctx_t;
++
++/** BSD needs two extra parameters for register read/write, so we pass
++ * them in using the IO context parameter.
++ */
++/** Reads the content of a 32-bit register. */
++extern uint32_t DWC_READ_REG32(void *io_ctx, uint32_t volatile *reg);
++#define dwc_read_reg32 DWC_READ_REG32
++
++/** Reads the content of a 64-bit register. */
++extern uint64_t DWC_READ_REG64(void *io_ctx, uint64_t volatile *reg);
++#define dwc_read_reg64 DWC_READ_REG64
++
++/** Writes to a 32-bit register. */
++extern void DWC_WRITE_REG32(void *io_ctx, uint32_t volatile *reg, uint32_t value);
++#define dwc_write_reg32 DWC_WRITE_REG32
++
++/** Writes to a 64-bit register. */
++extern void DWC_WRITE_REG64(void *io_ctx, uint64_t volatile *reg, uint64_t value);
++#define dwc_write_reg64 DWC_WRITE_REG64
++
++/**
++ * Modify bit values in a register.  Using the
++ * algorithm: (reg_contents & ~clear_mask) | set_mask.
++ */
++extern void DWC_MODIFY_REG32(void *io_ctx, uint32_t volatile *reg, uint32_t clear_mask, uint32_t set_mask);
++#define dwc_modify_reg32 DWC_MODIFY_REG32
++extern void DWC_MODIFY_REG64(void *io_ctx, uint64_t volatile *reg, uint64_t clear_mask, uint64_t set_mask);
++#define dwc_modify_reg64 DWC_MODIFY_REG64
++
++#endif	/* DWC_FREEBSD || DWC_NETBSD */
++
++/** @cond */
++
++/** @name Some convenience MACROS used internally.  Define DWC_DEBUG_REGS to log the
++ * register writes. */
++
++#ifdef DWC_LINUX
++
++# ifdef DWC_DEBUG_REGS
++
++#define dwc_define_read_write_reg_n(_reg,_container_type) \
++static inline uint32_t dwc_read_##_reg##_n(_container_type *container, int num) { \
++	return DWC_READ_REG32(&container->regs->_reg[num]); \
++} \
++static inline void dwc_write_##_reg##_n(_container_type *container, int num, uint32_t data) { \
++	DWC_DEBUG("WRITING %8s[%d]: %p: %08x", #_reg, num, \
++		  &(((uint32_t*)container->regs->_reg)[num]), data); \
++	DWC_WRITE_REG32(&(((uint32_t*)container->regs->_reg)[num]), data); \
++}
++
++#define dwc_define_read_write_reg(_reg,_container_type) \
++static inline uint32_t dwc_read_##_reg(_container_type *container) { \
++	return DWC_READ_REG32(&container->regs->_reg); \
++} \
++static inline void dwc_write_##_reg(_container_type *container, uint32_t data) { \
++	DWC_DEBUG("WRITING %11s: %p: %08x", #_reg, &container->regs->_reg, data); \
++	DWC_WRITE_REG32(&container->regs->_reg, data); \
++}
++
++# else	/* DWC_DEBUG_REGS */
++
++#define dwc_define_read_write_reg_n(_reg,_container_type) \
++static inline uint32_t dwc_read_##_reg##_n(_container_type *container, int num) { \
++	return DWC_READ_REG32(&container->regs->_reg[num]); \
++} \
++static inline void dwc_write_##_reg##_n(_container_type *container, int num, uint32_t data) { \
++	DWC_WRITE_REG32(&(((uint32_t*)container->regs->_reg)[num]), data); \
++}
++
++#define dwc_define_read_write_reg(_reg,_container_type) \
++static inline uint32_t dwc_read_##_reg(_container_type *container) { \
++	return DWC_READ_REG32(&container->regs->_reg); \
++} \
++static inline void dwc_write_##_reg(_container_type *container, uint32_t data) { \
++	DWC_WRITE_REG32(&container->regs->_reg, data); \
++}
++
++# endif	/* DWC_DEBUG_REGS */
++
++#endif	/* DWC_LINUX */
++
++#if defined(DWC_FREEBSD) || defined(DWC_NETBSD)
++
++# ifdef DWC_DEBUG_REGS
++
++#define dwc_define_read_write_reg_n(_reg,_container_type) \
++static inline uint32_t dwc_read_##_reg##_n(void *io_ctx, _container_type *container, int num) { \
++	return DWC_READ_REG32(io_ctx, &container->regs->_reg[num]); \
++} \
++static inline void dwc_write_##_reg##_n(void *io_ctx, _container_type *container, int num, uint32_t data) { \
++	DWC_DEBUG("WRITING %8s[%d]: %p: %08x", #_reg, num, \
++		  &(((uint32_t*)container->regs->_reg)[num]), data); \
++	DWC_WRITE_REG32(io_ctx, &(((uint32_t*)container->regs->_reg)[num]), data); \
++}
++
++#define dwc_define_read_write_reg(_reg,_container_type) \
++static inline uint32_t dwc_read_##_reg(void *io_ctx, _container_type *container) { \
++	return DWC_READ_REG32(io_ctx, &container->regs->_reg); \
++} \
++static inline void dwc_write_##_reg(void *io_ctx, _container_type *container, uint32_t data) { \
++	DWC_DEBUG("WRITING %11s: %p: %08x", #_reg, &container->regs->_reg, data); \
++	DWC_WRITE_REG32(io_ctx, &container->regs->_reg, data); \
++}
++
++# else	/* DWC_DEBUG_REGS */
++
++#define dwc_define_read_write_reg_n(_reg,_container_type) \
++static inline uint32_t dwc_read_##_reg##_n(void *io_ctx, _container_type *container, int num) { \
++	return DWC_READ_REG32(io_ctx, &container->regs->_reg[num]); \
++} \
++static inline void dwc_write_##_reg##_n(void *io_ctx, _container_type *container, int num, uint32_t data) { \
++	DWC_WRITE_REG32(io_ctx, &(((uint32_t*)container->regs->_reg)[num]), data); \
++}
++
++#define dwc_define_read_write_reg(_reg,_container_type) \
++static inline uint32_t dwc_read_##_reg(void *io_ctx, _container_type *container) { \
++	return DWC_READ_REG32(io_ctx, &container->regs->_reg); \
++} \
++static inline void dwc_write_##_reg(void *io_ctx, _container_type *container, uint32_t data) { \
++	DWC_WRITE_REG32(io_ctx, &container->regs->_reg, data); \
++}
++
++# endif	/* DWC_DEBUG_REGS */
++
++#endif	/* DWC_FREEBSD || DWC_NETBSD */
++
++/** @endcond */
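++
++/* Illustrative use of the convenience macros above (the register and container
++ * names are hypothetical): given a container type whose regs pointer has a
++ * gintsts member,
++ *
++ *   dwc_define_read_write_reg(gintsts, my_container_t)
++ *
++ * defines dwc_read_gintsts(container) and dwc_write_gintsts(container, data)
++ * as thin wrappers around DWC_READ_REG32/DWC_WRITE_REG32.
++ */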
++
++
++#ifdef DWC_CRYPTOLIB
++/** @name Crypto Functions
++ *
++ * These are the low-level cryptographic functions used by the driver. */
++
++/** Perform AES CBC */
++extern int DWC_AES_CBC(uint8_t *message, uint32_t messagelen, uint8_t *key, uint32_t keylen, uint8_t iv[16], uint8_t *out);
++#define dwc_aes_cbc DWC_AES_CBC
++
++/** Fill the provided buffer with random bytes.  These should be cryptographic grade random numbers. */
++extern void DWC_RANDOM_BYTES(uint8_t *buffer, uint32_t length);
++#define dwc_random_bytes DWC_RANDOM_BYTES
++
++/** Perform the SHA-256 hash function */
++extern int DWC_SHA256(uint8_t *message, uint32_t len, uint8_t *out);
++#define dwc_sha256 DWC_SHA256
++
++/** Calculated the HMAC-SHA256 */
++extern int DWC_HMAC_SHA256(uint8_t *message, uint32_t messagelen, uint8_t *key, uint32_t keylen, uint8_t *out);
++#define dwc_hmac_sha256 DWC_HMAC_SHA256
++
++#endif	/* DWC_CRYPTOLIB */
++
++
++/** @name Memory Allocation
++ *
++ * These functions provide access to memory allocation.  There are only 2 DMA
++ * functions and 3 regular memory functions that need to be implemented.  None
++ * of the memory debugging routines need to be implemented.  The allocation
++ * routines all ZERO the contents of the memory.
++ *
++ * Defining DWC_DEBUG_MEMORY turns on memory debugging and statistic gathering.
++ * This checks for memory leaks, keeping track of alloc/free pairs.  It also
++ * keeps track of how much memory the driver is using at any given time. */
++
++#define DWC_PAGE_SIZE 4096
++#define DWC_PAGE_OFFSET(addr) (((uint32_t)addr) & 0xfff)
++#define DWC_PAGE_ALIGNED(addr) ((((uint32_t)addr) & 0xfff) == 0)
++
++#define DWC_INVALID_DMA_ADDR 0x0
++
++#ifdef DWC_LINUX
++/** Type for a DMA address */
++typedef dma_addr_t dwc_dma_t;
++#endif
++
++#if defined(DWC_FREEBSD) || defined(DWC_NETBSD)
++typedef bus_addr_t dwc_dma_t;
++#endif
++
++#ifdef DWC_FREEBSD
++typedef struct dwc_dmactx {
++	struct device *dev;
++	bus_dma_tag_t dma_tag;
++	bus_dmamap_t dma_map;
++	bus_addr_t dma_paddr;
++	void *dma_vaddr;
++} dwc_dmactx_t;
++#endif
++
++#ifdef DWC_NETBSD
++typedef struct dwc_dmactx {
++	struct device *dev;
++	bus_dma_tag_t dma_tag;
++	bus_dmamap_t dma_map;
++	bus_dma_segment_t segs[1];
++	int nsegs;
++	bus_addr_t dma_paddr;
++	void *dma_vaddr;
++} dwc_dmactx_t;
++#endif
++
++/* @todo these functions will be added in the future */
++#if 0
++/**
++ * Creates a DMA pool from which you can allocate DMA buffers.  Buffers
++ * allocated from this pool will be guaranteed to meet the size, alignment, and
++ * boundary requirements specified.
++ *
++ * @param[in] size Specifies the size of the buffers that will be allocated from
++ * this pool.
++ * @param[in] align Specifies the byte alignment requirements of the buffers
++ * allocated from this pool.  Must be a power of 2.
++ * @param[in] boundary Specifies the N-byte boundary that buffers allocated from
++ * this pool must not cross.
++ *
++ * @returns A pointer to an internal opaque structure which is not to be
++ * accessed outside of these library functions.  Use this handle to specify
++ * which pools to allocate/free DMA buffers from and also to destroy the pool,
++ * when you are done with it.
++ */
++extern dwc_pool_t *DWC_DMA_POOL_CREATE(uint32_t size, uint32_t align, uint32_t boundary);
++
++/**
++ * Destroy a DMA pool.  All buffers allocated from that pool must be freed first.
++ */
++extern void DWC_DMA_POOL_DESTROY(dwc_pool_t *pool);
++
++/**
++ * Allocate a buffer from the specified DMA pool and zeros its contents.
++ */
++extern void *DWC_DMA_POOL_ALLOC(dwc_pool_t *pool, uint64_t *dma_addr);
++
++/**
++ * Free a previously allocated buffer from the DMA pool.
++ */
++extern void DWC_DMA_POOL_FREE(dwc_pool_t *pool, void *vaddr, void *daddr);
++#endif
++
++/** Allocates a DMA capable buffer and zeroes its contents. */
++extern void *__DWC_DMA_ALLOC(void *dma_ctx, uint32_t size, dwc_dma_t *dma_addr);
++
++/** Allocates a DMA capable buffer and zeroes its contents in atomic context. */
++extern void *__DWC_DMA_ALLOC_ATOMIC(void *dma_ctx, uint32_t size, dwc_dma_t *dma_addr);
++
++/** Frees a previously allocated buffer. */
++extern void __DWC_DMA_FREE(void *dma_ctx, uint32_t size, void *virt_addr, dwc_dma_t dma_addr);
++
++/** Allocates a block of memory and zeroes its contents. */
++extern void *__DWC_ALLOC(void *mem_ctx, uint32_t size);
++
++/** Allocates a block of memory and zeroes its contents, in an atomic manner
++ * that can be used inside interrupt context.  The size should be sufficiently
++ * small, a few KB at most, so that failures are unlikely to occur.  This can
++ * simply call __DWC_ALLOC if that is already atomic. */
++extern void *__DWC_ALLOC_ATOMIC(void *mem_ctx, uint32_t size);
++
++/** Frees a previously allocated buffer. */
++extern void __DWC_FREE(void *mem_ctx, void *addr);
++
++#ifndef DWC_DEBUG_MEMORY
++
++#define DWC_ALLOC(_size_) __DWC_ALLOC(NULL, _size_)
++#define DWC_ALLOC_ATOMIC(_size_) __DWC_ALLOC_ATOMIC(NULL, _size_)
++#define DWC_FREE(_addr_) __DWC_FREE(NULL, _addr_)
++
++# ifdef DWC_LINUX
++#define DWC_DMA_ALLOC(_size_,_dma_) __DWC_DMA_ALLOC(NULL, _size_, _dma_)
++#define DWC_DMA_ALLOC_ATOMIC(_size_,_dma_) __DWC_DMA_ALLOC_ATOMIC(NULL, _size_,_dma_)
++#define DWC_DMA_FREE(_size_,_virt_,_dma_) __DWC_DMA_FREE(NULL, _size_, _virt_, _dma_)
++# endif
++
++# if defined(DWC_FREEBSD) || defined(DWC_NETBSD)
++#define DWC_DMA_ALLOC __DWC_DMA_ALLOC
++#define DWC_DMA_FREE __DWC_DMA_FREE
++# endif
++extern void *dwc_dma_alloc_atomic_debug(uint32_t size, dwc_dma_t *dma_addr, char const *func, int line);
++
++#else	/* DWC_DEBUG_MEMORY */
++
++extern void *dwc_alloc_debug(void *mem_ctx, uint32_t size, char const *func, int line);
++extern void *dwc_alloc_atomic_debug(void *mem_ctx, uint32_t size, char const *func, int line);
++extern void dwc_free_debug(void *mem_ctx, void *addr, char const *func, int line);
++extern void *dwc_dma_alloc_debug(void *dma_ctx, uint32_t size, dwc_dma_t *dma_addr,
++				 char const *func, int line);
++extern void *dwc_dma_alloc_atomic_debug(void *dma_ctx, uint32_t size, dwc_dma_t *dma_addr,
++				char const *func, int line);
++extern void dwc_dma_free_debug(void *dma_ctx, uint32_t size, void *virt_addr,
++			       dwc_dma_t dma_addr, char const *func, int line);
++
++extern int dwc_memory_debug_start(void *mem_ctx);
++extern void dwc_memory_debug_stop(void);
++extern void dwc_memory_debug_report(void);
++
++#define DWC_ALLOC(_size_) dwc_alloc_debug(NULL, _size_, __func__, __LINE__)
++#define DWC_ALLOC_ATOMIC(_size_) dwc_alloc_atomic_debug(NULL, _size_, \
++							__func__, __LINE__)
++#define DWC_FREE(_addr_) dwc_free_debug(NULL, _addr_, __func__, __LINE__)
++
++# ifdef DWC_LINUX
++#define DWC_DMA_ALLOC(_size_,_dma_) dwc_dma_alloc_debug(NULL, _size_, \
++						_dma_, __func__, __LINE__)
++#define DWC_DMA_ALLOC_ATOMIC(_size_,_dma_) dwc_dma_alloc_atomic_debug(NULL, _size_, \
++						_dma_, __func__, __LINE__)
++#define DWC_DMA_FREE(_size_,_virt_,_dma_) dwc_dma_free_debug(NULL, _size_, \
++						_virt_, _dma_, __func__, __LINE__)
++# endif
++
++# if defined(DWC_FREEBSD) || defined(DWC_NETBSD)
++#define DWC_DMA_ALLOC(_ctx_,_size_,_dma_) dwc_dma_alloc_debug(_ctx_, _size_, \
++						_dma_, __func__, __LINE__)
++#define DWC_DMA_FREE(_ctx_,_size_,_virt_,_dma_) dwc_dma_free_debug(_ctx_, _size_, \
++						 _virt_, _dma_, __func__, __LINE__)
++# endif
++
++#endif /* DWC_DEBUG_MEMORY */
++
++#define dwc_alloc(_ctx_,_size_) DWC_ALLOC(_size_)
++#define dwc_alloc_atomic(_ctx_,_size_) DWC_ALLOC_ATOMIC(_size_)
++#define dwc_free(_ctx_,_addr_) DWC_FREE(_addr_)
++
++#ifdef DWC_LINUX
++/* Linux doesn't need any extra parameters for DMA buffer allocation, so we
++ * just throw away the DMA context parameter.
++ */
++#define dwc_dma_alloc(_ctx_,_size_,_dma_) DWC_DMA_ALLOC(_size_, _dma_)
++#define dwc_dma_alloc_atomic(_ctx_,_size_,_dma_) DWC_DMA_ALLOC_ATOMIC(_size_, _dma_)
++#define dwc_dma_free(_ctx_,_size_,_virt_,_dma_) DWC_DMA_FREE(_size_, _virt_, _dma_)
++#endif
++
++#if defined(DWC_FREEBSD) || defined(DWC_NETBSD)
++/** BSD needs several extra parameters for DMA buffer allocation, so we pass
++ * them in using the DMA context parameter.
++ */
++#define dwc_dma_alloc DWC_DMA_ALLOC
++#define dwc_dma_free DWC_DMA_FREE
++#endif
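++
++/* Illustrative usage sketch (Linux variant shown; the size is arbitrary and
++ * error handling is omitted):
++ *
++ *   dwc_dma_t dma_addr;
++ *   void *buf = DWC_DMA_ALLOC(256, &dma_addr);
++ *   // ... program dma_addr into the hardware, use buf from the CPU ...
++ *   DWC_DMA_FREE(256, buf, dma_addr);
++ */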
++
++
++/** @name Memory and String Processing */
++
++/** memset() clone */
++extern void *DWC_MEMSET(void *dest, uint8_t byte, uint32_t size);
++#define dwc_memset DWC_MEMSET
++
++/** memcpy() clone */
++extern void *DWC_MEMCPY(void *dest, void const *src, uint32_t size);
++#define dwc_memcpy DWC_MEMCPY
++
++/** memmove() clone */
++extern void *DWC_MEMMOVE(void *dest, void *src, uint32_t size);
++#define dwc_memmove DWC_MEMMOVE
++
++/** memcmp() clone */
++extern int DWC_MEMCMP(void *m1, void *m2, uint32_t size);
++#define dwc_memcmp DWC_MEMCMP
++
++/** strcmp() clone */
++extern int DWC_STRCMP(void *s1, void *s2);
++#define dwc_strcmp DWC_STRCMP
++
++/** strncmp() clone */
++extern int DWC_STRNCMP(void *s1, void *s2, uint32_t size);
++#define dwc_strncmp DWC_STRNCMP
++
++/** strlen() clone, for NULL terminated ASCII strings */
++extern int DWC_STRLEN(char const *str);
++#define dwc_strlen DWC_STRLEN
++
++/** strcpy() clone, for NULL terminated ASCII strings */
++extern char *DWC_STRCPY(char *to, const char *from);
++#define dwc_strcpy DWC_STRCPY
++
++/** strdup() clone.  If you wish to use memory allocation debugging, this
++ * implementation of strdup should use the DWC_* memory routines instead of
++ * calling a predefined strdup.  Otherwise the memory allocated by this routine
++ * will not be seen by the debugging routines. */
++extern char *DWC_STRDUP(char const *str);
++#define dwc_strdup(_ctx_,_str_) DWC_STRDUP(_str_)
++
++/** NOT an atoi() clone.  Read the description carefully.  Returns an integer
++ * converted from the string str in base 10 unless the string begins with a "0x"
++ * in which case it is base 16.  String must be a NULL terminated sequence of
++ * ASCII characters and may optionally begin with whitespace, a + or -, and a
++ * "0x" prefix if base 16.  The remaining characters must be valid digits for
++ * the number and end with a NULL character.  If any invalid characters are
++ * encountered, it returns a negative error code and the results of the
++ * conversion are undefined.  On success it returns 0.  Overflow conditions are
++ * undefined.  An example implementation using atoi() can be referenced from the
++ * Linux implementation. */
++extern int DWC_ATOI(const char *str, int32_t *value);
++#define dwc_atoi DWC_ATOI
++
++/** Same as above but for unsigned. */
++extern int DWC_ATOUI(const char *str, uint32_t *value);
++#define dwc_atoui DWC_ATOUI
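++
++/* Illustrative usage sketch:
++ *
++ *   int32_t val;
++ *   if (DWC_ATOI("0x1f", &val) == 0) {
++ *       // val == 31; a negative return value means the string was invalid
++ *   }
++ */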
++
++#ifdef DWC_UTFLIB
++/** This routine returns a UTF16LE unicode encoded string from a UTF8 string. */
++extern int DWC_UTF8_TO_UTF16LE(uint8_t const *utf8string, uint16_t *utf16string, unsigned len);
++#define dwc_utf8_to_utf16le DWC_UTF8_TO_UTF16LE
++#endif
++
++
++/** @name Wait queues
++ *
++ * Wait queues provide a means of synchronizing between threads or processes.  A
++ * process can block on a waitq if some condition is not true, waiting for it to
++ * become true.  When the waitq is triggered, all waiting processes will get
++ * unblocked and the condition will be checked again.  Waitqs should be triggered
++ * every time a condition can potentially change. */
++struct dwc_waitq;
++
++/** Type for a waitq */
++typedef struct dwc_waitq dwc_waitq_t;
++
++/** The type of the waitq condition callback function.  This is called every
++ * time the condition is evaluated. */
++typedef int (*dwc_waitq_condition_t)(void *data);
++
++/** Allocate a waitq */
++extern dwc_waitq_t *DWC_WAITQ_ALLOC(void);
++#define dwc_waitq_alloc(_ctx_) DWC_WAITQ_ALLOC()
++
++/** Free a waitq */
++extern void DWC_WAITQ_FREE(dwc_waitq_t *wq);
++#define dwc_waitq_free DWC_WAITQ_FREE
++
++/** Check the condition and if it is false, block on the waitq.  When unblocked, check the
++ * condition again.  The function returns when the condition becomes true.  The return value
++ * is 0 when the condition is true, DWC_WAITQ_ABORTED if aborted or killed, or DWC_WAITQ_UNKNOWN on error. */
++extern int32_t DWC_WAITQ_WAIT(dwc_waitq_t *wq, dwc_waitq_condition_t cond, void *data);
++#define dwc_waitq_wait DWC_WAITQ_WAIT
++
++/** Check the condition and if it is false, block on the waitq.  When unblocked,
++ * check the condition again.  The function returns when the condition becomes
++ * true or the timeout has passed.  The return value is 0 when the condition is
++ * true, DWC_TIMED_OUT on timeout, DWC_WAITQ_ABORTED on abort, or
++ * DWC_WAITQ_UNKNOWN on error. */
++extern int32_t DWC_WAITQ_WAIT_TIMEOUT(dwc_waitq_t *wq, dwc_waitq_condition_t cond,
++				      void *data, int32_t msecs);
++#define dwc_waitq_wait_timeout DWC_WAITQ_WAIT_TIMEOUT
++
++/** Trigger a waitq, unblocking all processes.  This should be called whenever a condition
++ * has potentially changed. */
++extern void DWC_WAITQ_TRIGGER(dwc_waitq_t *wq);
++#define dwc_waitq_trigger DWC_WAITQ_TRIGGER
++
++/** Unblock all processes waiting on the waitq with an ABORTED result. */
++extern void DWC_WAITQ_ABORT(dwc_waitq_t *wq);
++#define dwc_waitq_abort DWC_WAITQ_ABORT
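++
++/* Illustrative usage sketch (the ready_flag variable and the waitq allocation
++ * are hypothetical):
++ *
++ *   static int is_ready(void *data) { return *(int *)data; }
++ *
++ *   // waiting context:
++ *   DWC_WAITQ_WAIT(wq, is_ready, &ready_flag);
++ *
++ *   // some other context, after changing the condition:
++ *   ready_flag = 1;
++ *   DWC_WAITQ_TRIGGER(wq);
++ */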
++
++
++/** @name Threads
++ *
++ * A thread must be explicitly stopped.  It must check DWC_THREAD_SHOULD_STOP
++ * whenever it is woken up, and then return.  The DWC_THREAD_STOP function
++ * returns the value from the thread.
++ */
++
++struct dwc_thread;
++
++/** Type for a thread */
++typedef struct dwc_thread dwc_thread_t;
++
++/** The thread function */
++typedef int (*dwc_thread_function_t)(void *data);
++
++/** Create a thread and start it running the thread_function.  Returns a handle
++ * to the thread */
++extern dwc_thread_t *DWC_THREAD_RUN(dwc_thread_function_t func, char *name, void *data);
++#define dwc_thread_run(_ctx_,_func_,_name_,_data_) DWC_THREAD_RUN(_func_, _name_, _data_)
++
++/** Stops a thread.  Returns the value returned by the thread, or DWC_ABORT if
++ * the thread never started. */
++extern int DWC_THREAD_STOP(dwc_thread_t *thread);
++#define dwc_thread_stop DWC_THREAD_STOP
++
++/** Signifies to the thread that it must stop. */
++#ifdef DWC_LINUX
++/* Linux doesn't need any parameters for kthread_should_stop() */
++extern dwc_bool_t DWC_THREAD_SHOULD_STOP(void);
++#define dwc_thread_should_stop(_thrd_) DWC_THREAD_SHOULD_STOP()
++
++/* No thread_exit function in Linux */
++#define dwc_thread_exit(_thrd_)
++#endif
++
++#if defined(DWC_FREEBSD) || defined(DWC_NETBSD)
++/** BSD needs the thread pointer for kthread_suspend_check() */
++extern dwc_bool_t DWC_THREAD_SHOULD_STOP(dwc_thread_t *thread);
++#define dwc_thread_should_stop DWC_THREAD_SHOULD_STOP
++
++/** The thread must call this to exit. */
++extern void DWC_THREAD_EXIT(dwc_thread_t *thread);
++#define dwc_thread_exit DWC_THREAD_EXIT
++#endif
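++
++/* Illustrative thread sketch (Linux flavour; on BSD the thread handle must be
++ * passed to DWC_THREAD_SHOULD_STOP and DWC_THREAD_EXIT).  The sleep interval
++ * is arbitrary:
++ *
++ *   static int my_thread_func(void *data)
++ *   {
++ *       while (!DWC_THREAD_SHOULD_STOP()) {
++ *           // do periodic work
++ *           DWC_MSLEEP(100);
++ *       }
++ *       return 0;
++ *   }
++ *
++ *   dwc_thread_t *t = DWC_THREAD_RUN(my_thread_func, "my_thread", NULL);
++ *   ...
++ *   DWC_THREAD_STOP(t);
++ */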
++
++
++/** @name Work queues
++ *
++ * Workqs are used to queue a callback function to be called at some later time,
++ * in another thread. */
++struct dwc_workq;
++
++/** Type for a workq */
++typedef struct dwc_workq dwc_workq_t;
++
++/** The type of the callback function to be called. */
++typedef void (*dwc_work_callback_t)(void *data);
++
++/** Allocate a workq */
++extern dwc_workq_t *DWC_WORKQ_ALLOC(char *name);
++#define dwc_workq_alloc(_ctx_,_name_) DWC_WORKQ_ALLOC(_name_)
++
++/** Free a workq.  All work must be completed before being freed. */
++extern void DWC_WORKQ_FREE(dwc_workq_t *workq);
++#define dwc_workq_free DWC_WORKQ_FREE
++
++/** Schedule a callback on the workq, passing in data.  The function will be
++ * scheduled at some later time. */
++extern void DWC_WORKQ_SCHEDULE(dwc_workq_t *workq, dwc_work_callback_t cb,
++			       void *data, char *format, ...)
++#ifdef __GNUC__
++	__attribute__ ((format(printf, 4, 5)));
++#else
++	;
++#endif
++#define dwc_workq_schedule DWC_WORKQ_SCHEDULE
++
++/** Schedule a callback on the workq that will not be called until at least the
++ * given number of milliseconds has passed. */
++extern void DWC_WORKQ_SCHEDULE_DELAYED(dwc_workq_t *workq, dwc_work_callback_t cb,
++				       void *data, uint32_t time, char *format, ...)
++#ifdef __GNUC__
++	__attribute__ ((format(printf, 5, 6)));
++#else
++	;
++#endif
++#define dwc_workq_schedule_delayed DWC_WORKQ_SCHEDULE_DELAYED
++
++/** The number of work items pending in the workq */
++extern int DWC_WORKQ_PENDING(dwc_workq_t *workq);
++#define dwc_workq_pending DWC_WORKQ_PENDING
++
++/** Blocks until all the work in the workq is complete or the timeout expires.
++ * Returns < 0 on timeout. */
++extern int DWC_WORKQ_WAIT_WORK_DONE(dwc_workq_t *workq, int timeout);
++#define dwc_workq_wait_work_done DWC_WORKQ_WAIT_WORK_DONE
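++
++/* Illustrative usage sketch (the callback, payload and id are hypothetical):
++ *
++ *   static void my_work(void *data) { ... }
++ *
++ *   dwc_workq_t *wq = DWC_WORKQ_ALLOC("mywq");
++ *   DWC_WORKQ_SCHEDULE(wq, my_work, payload, "process item %d", id);
++ *   DWC_WORKQ_WAIT_WORK_DONE(wq, 1000);
++ *   DWC_WORKQ_FREE(wq);
++ */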
++
++
++/** @name Tasklets
++ *
++ */
++struct dwc_tasklet;
++
++/** Type for a tasklet */
++typedef struct dwc_tasklet dwc_tasklet_t;
++
++/** The type of the callback function to be called */
++typedef void (*dwc_tasklet_callback_t)(void *data);
++
++/** Allocates a tasklet */
++extern dwc_tasklet_t *DWC_TASK_ALLOC(char *name, dwc_tasklet_callback_t cb, void *data);
++#define dwc_task_alloc(_ctx_,_name_,_cb_,_data_) DWC_TASK_ALLOC(_name_, _cb_, _data_)
++
++/** Frees a tasklet */
++extern void DWC_TASK_FREE(dwc_tasklet_t *task);
++#define dwc_task_free DWC_TASK_FREE
++
++/** Schedules a tasklet to run */
++extern void DWC_TASK_SCHEDULE(dwc_tasklet_t *task);
++#define dwc_task_schedule DWC_TASK_SCHEDULE
++
++extern void DWC_TASK_HI_SCHEDULE(dwc_tasklet_t *task);
++#define dwc_task_hi_schedule DWC_TASK_HI_SCHEDULE
++
++/** @name Timer
++ *
++ * Callbacks must be small and atomic.
++ */
++struct dwc_timer;
++
++/** Type for a timer */
++typedef struct dwc_timer dwc_timer_t;
++
++/** The type of the callback function to be called */
++typedef void (*dwc_timer_callback_t)(void *data);
++
++/** Allocates a timer */
++extern dwc_timer_t *DWC_TIMER_ALLOC(char *name, dwc_timer_callback_t cb, void *data);
++#define dwc_timer_alloc(_ctx_,_name_,_cb_,_data_) DWC_TIMER_ALLOC(_name_,_cb_,_data_)
++
++/** Frees a timer */
++extern void DWC_TIMER_FREE(dwc_timer_t *timer);
++#define dwc_timer_free DWC_TIMER_FREE
++
++/** Schedules the timer to run at 'time' ms from now, and it will repeat at
++ * every repeat interval thereafter.
++ *
++ * Scheduling a timer that is still awaiting execution modifies it to a new
++ * expiration time; the new time is added to the old one. */
++extern void DWC_TIMER_SCHEDULE(dwc_timer_t *timer, uint32_t time);
++#define dwc_timer_schedule DWC_TIMER_SCHEDULE
++
++/** Disables the timer from execution. */
++extern void DWC_TIMER_CANCEL(dwc_timer_t *timer);
++#define dwc_timer_cancel DWC_TIMER_CANCEL
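++
++/* Illustrative usage sketch (the callback and context are hypothetical):
++ *
++ *   dwc_timer_t *t = DWC_TIMER_ALLOC("my_timer", my_timer_cb, ctx);
++ *   DWC_TIMER_SCHEDULE(t, 500);   // fire ~500 ms from now
++ *   ...
++ *   DWC_TIMER_CANCEL(t);
++ *   DWC_TIMER_FREE(t);
++ */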
++
++
++/** @name Spinlocks
++ *
++ * These locks are used when the work between the lock/unlock is atomic and
++ * short.  Interrupts are also disabled during the lock/unlock and thus they are
++ * suitable for locking between interrupt and non-interrupt context.  They also
++ * lock between processes if you have multiple CPUs or preemption.  If you don't
++ * have multiple CPUs or preemption, then you can simply implement
++ * DWC_SPINLOCK and DWC_SPINUNLOCK to disable and enable interrupts.  Because
++ * the work between the lock/unlock is atomic, the process context will never
++ * change, and so you never have to lock between processes.  */
++
++struct dwc_spinlock;
++
++/** Type for a spinlock */
++typedef struct dwc_spinlock dwc_spinlock_t;
++
++/** Type for the 'flags' argument to spinlock functions */
++typedef unsigned long dwc_irqflags_t;
++
++/** Returns an initialized lock variable.  This function should allocate and
++ * initialize the OS-specific data structure used for locking.  This data
++ * structure is to be used for the DWC_LOCK and DWC_UNLOCK functions and should
++ * be freed by the DWC_FREE_LOCK when it is no longer used.
++ *
++ * For Linux Spinlock Debugging make it a macro because the debugging routines use
++ * the symbol name to determine recursive locking. Using a wrapper function
++ * makes it falsely think recursive locking occurs. */
++#if defined(DWC_LINUX) && defined(CONFIG_DEBUG_SPINLOCK)
++#define DWC_SPINLOCK_ALLOC_LINUX_DEBUG(lock) ({ \
++	lock = DWC_ALLOC(sizeof(spinlock_t)); \
++	if (lock) { \
++		spin_lock_init((spinlock_t *)lock); \
++	} \
++})
++#else
++extern dwc_spinlock_t *DWC_SPINLOCK_ALLOC(void);
++#define dwc_spinlock_alloc(_ctx_) DWC_SPINLOCK_ALLOC()
++#endif
++
++/** Frees an initialized lock variable. */
++extern void DWC_SPINLOCK_FREE(dwc_spinlock_t *lock);
++#define dwc_spinlock_free(_ctx_,_lock_) DWC_SPINLOCK_FREE(_lock_)
++
++/** Disables interrupts and blocks until it acquires the lock.
++ *
++ * @param lock Pointer to the spinlock.
++ * @param flags Unsigned long for irq flags storage.
++ */
++extern void DWC_SPINLOCK_IRQSAVE(dwc_spinlock_t *lock, dwc_irqflags_t *flags);
++#define dwc_spinlock_irqsave DWC_SPINLOCK_IRQSAVE
++
++/** Re-enables the interrupt and releases the lock.
++ *
++ * @param lock Pointer to the spinlock.
++ * @param flags Unsigned long for irq flags storage.  Must be the same as was
++ * passed into DWC_LOCK.
++ */
++extern void DWC_SPINUNLOCK_IRQRESTORE(dwc_spinlock_t *lock, dwc_irqflags_t flags);
++#define dwc_spinunlock_irqrestore DWC_SPINUNLOCK_IRQRESTORE
++
++/** Blocks until it acquires the lock.
++ *
++ * @param lock Pointer to the spinlock.
++ */
++extern void DWC_SPINLOCK(dwc_spinlock_t *lock);
++#define dwc_spinlock DWC_SPINLOCK
++
++/** Releases the lock.
++ *
++ * @param lock Pointer to the spinlock.
++ */
++extern void DWC_SPINUNLOCK(dwc_spinlock_t *lock);
++#define dwc_spinunlock DWC_SPINUNLOCK
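++
++/* Illustrative usage sketch (lock allocation not shown):
++ *
++ *   dwc_irqflags_t flags;
++ *   DWC_SPINLOCK_IRQSAVE(lock, &flags);
++ *   // short, atomic critical section
++ *   DWC_SPINUNLOCK_IRQRESTORE(lock, flags);
++ */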
++
++
++/** @name Mutexes
++ *
++ * Unlike spinlocks, mutexes lock only between processes, and the work between
++ * the lock/unlock CAN block; therefore they CANNOT be used from interrupt context.
++ */
++
++struct dwc_mutex;
++
++/** Type for a mutex */
++typedef struct dwc_mutex dwc_mutex_t;
++
++/* For Linux Mutex Debugging make it a macro because the debugging routines use
++ * the symbol name to determine recursive locking.  Using a wrapper function
++ * makes it falsely think recursive locking occurs. */
++#if defined(DWC_LINUX) && defined(CONFIG_DEBUG_MUTEXES)
++#define DWC_MUTEX_ALLOC_LINUX_DEBUG(__mutexp) ({ \
++	__mutexp = (dwc_mutex_t *)DWC_ALLOC(sizeof(struct mutex)); \
++	mutex_init((struct mutex *)__mutexp); \
++})
++#endif
++
++/** Allocate a mutex */
++extern dwc_mutex_t *DWC_MUTEX_ALLOC(void);
++#define dwc_mutex_alloc(_ctx_) DWC_MUTEX_ALLOC()
++
++/* For memory leak debugging when using Linux Mutex Debugging */
++#if defined(DWC_LINUX) && defined(CONFIG_DEBUG_MUTEXES)
++#define DWC_MUTEX_FREE(__mutexp) do { \
++	mutex_destroy((struct mutex *)__mutexp); \
++	DWC_FREE(__mutexp); \
++} while(0)
++#else
++/** Free a mutex */
++extern void DWC_MUTEX_FREE(dwc_mutex_t *mutex);
++#define dwc_mutex_free(_ctx_,_mutex_) DWC_MUTEX_FREE(_mutex_)
++#endif
++
++/** Lock a mutex */
++extern void DWC_MUTEX_LOCK(dwc_mutex_t *mutex);
++#define dwc_mutex_lock DWC_MUTEX_LOCK
++
++/** Non-blocking lock returns 1 on successful lock. */
++extern int DWC_MUTEX_TRYLOCK(dwc_mutex_t *mutex);
++#define dwc_mutex_trylock DWC_MUTEX_TRYLOCK
++
++/** Unlock a mutex */
++extern void DWC_MUTEX_UNLOCK(dwc_mutex_t *mutex);
++#define dwc_mutex_unlock DWC_MUTEX_UNLOCK
++
++
++/** @name Time */
++
++/** Microsecond delay.
++ *
++ * @param usecs  Microseconds to delay.
++ */
++extern void DWC_UDELAY(uint32_t usecs);
++#define dwc_udelay DWC_UDELAY
++
++/** Millisecond delay.
++ *
++ * @param msecs  Milliseconds to delay.
++ */
++extern void DWC_MDELAY(uint32_t msecs);
++#define dwc_mdelay DWC_MDELAY
++
++/** Non-busy waiting.
++ * Sleeps for specified number of milliseconds.
++ *
++ * @param msecs Milliseconds to sleep.
++ */
++extern void DWC_MSLEEP(uint32_t msecs);
++#define dwc_msleep DWC_MSLEEP
++
++/**
++ * Returns number of milliseconds since boot.
++ */
++extern uint32_t DWC_TIME(void);
++#define dwc_time DWC_TIME
++
++
++
++
++/* @mainpage DWC Portability and Common Library
++ *
++ * This is the documentation for the DWC Portability and Common Library.
++ *
++ * @section intro Introduction
++ *
++ * The DWC Portability library consists of wrapper calls and data structures for
++ * all low-level functions which are typically provided by the OS.  The WUDEV
++ * driver uses only these functions.  In order to port the WUDEV driver, only
++ * the functions in this library need to be re-implemented, with the same
++ * behavior as documented here.
++ *
++ * The Common library consists of higher level functions, which rely only on
++ * calling the functions from the DWC Portability library.  These common
++ * routines are shared across modules.  Some of the common libraries need to be
++ * used directly by the driver programmer when porting WUDEV, such as the
++ * parameter and notification libraries.
++ *
++ * @section low Portability Library OS Wrapper Functions
++ *
++ * Any function starting with DWC and in all CAPS is a low-level OS-wrapper that
++ * needs to be implemented when porting, for example DWC_MUTEX_ALLOC().  All of
++ * these functions are included in the dwc_os.h file.
++ *
++ * There are many functions here covering a wide array of OS services.  Please
++ * see dwc_os.h for details, and implementation notes for each function.
++ *
++ * @section common Common Library Functions
++ *
++ * Any function starting with dwc and in all lowercase is a common library
++ * routine.  These functions have a portable implementation and do not need to
++ * be reimplemented when porting.  The common routines can be used by any
++ * driver, and some must be used by the end user to control the drivers.  For
++ * example, you must use the Parameter common library in order to set the
++ * parameters in the WUDEV module.
++ *
++ * The common libraries consist of the following:
++ *
++ * - Connection Contexts - Used internally and can be used by end-user.  See dwc_cc.h
++ * - Parameters - Used internally and can be used by end-user.  See dwc_params.h
++ * - Notifications - Used internally and can be used by end-user.  See dwc_notifier.h
++ * - Lists - Used internally and can be used by end-user.  See dwc_list.h
++ * - Memory Debugging - Used internally and can be used by end-user.  See dwc_os.h
++ * - Modpow - Used internally only.  See dwc_modpow.h
++ * - DH - Used internally only.  See dwc_dh.h
++ * - Crypto - Used internally only.  See dwc_crypto.h
++ *
++ *
++ * @section prereq Prerequisites For dwc_os.h
++ * @subsection types Data Types
++ *
++ * The dwc_os.h file assumes that several low-level data types are predefined for the
++ * compilation environment.  These data types are:
++ *
++ * - uint8_t - unsigned 8-bit data type
++ * - int8_t - signed 8-bit data type
++ * - uint16_t - unsigned 16-bit data type
++ * - int16_t - signed 16-bit data type
++ * - uint32_t - unsigned 32-bit data type
++ * - int32_t - signed 32-bit data type
++ * - uint64_t - unsigned 64-bit data type
++ * - int64_t - signed 64-bit data type
++ *
++ * Ensure that these are defined before using dwc_os.h.  The easiest way to do
++ * that is to modify the top of the file to include the appropriate header.
++ * This is already done for the Linux environment.  If the DWC_LINUX macro is
++ * defined, the correct header will be added.  A standard header <stdint.h> is
++ * also used for environments where standard C headers are available.
++ *
++ * @subsection stdarg Variable Arguments
++ *
++ * Variable arguments are provided by a standard C header <stdarg.h>.  It is
++ * available in both the Linux and ANSI C environments.  An equivalent must be
++ * provided in your environment in order to use dwc_os.h with the debug and
++ * tracing message functionality.
++ *
++ * @subsection thread Threading
++ *
++ * WUDEV Core must be run on an operating system that provides for multiple
++ * threads/processes.  Threading can be implemented in many ways, even in
++ * embedded systems without an operating system.  At the bare minimum, the
++ * system should be able to start any number of processes at any time to handle
++ * special work.  It need not be a pre-emptive system.  Process context can
++ * change upon a call to a blocking function.  The hardware interrupt context
++ * that calls the module's ISR() function must be differentiable from process
++ * context, even if your processes are implemented via a hardware interrupt.
++ * Further, a locking mechanism between processes must exist (or be implemented), and
++ * process context must have a way to disable interrupts for a period of time to
++ * lock them out.  If all of this exists, the functions in dwc_os.h related to
++ * threading should be able to be implemented with the defined behavior.
++ *
++ */
++
++#ifdef __cplusplus
++}
++#endif
++
++#endif /* _DWC_OS_H_ */
+--- /dev/null
++++ b/drivers/usb/host/dwc_common_port/usb.h
+@@ -0,0 +1,946 @@
++/*
++ * Copyright (c) 1998 The NetBSD Foundation, Inc.
++ * All rights reserved.
++ *
++ * This code is derived from software contributed to The NetBSD Foundation
++ * by Lennart Augustsson (lennart at augustsson.net) at
++ * Carlstedt Research & Technology.
++ *
++ * Redistribution and use in source and binary forms, with or without
++ * modification, are permitted provided that the following conditions
++ * are met:
++ * 1. Redistributions of source code must retain the above copyright
++ *    notice, this list of conditions and the following disclaimer.
++ * 2. Redistributions in binary form must reproduce the above copyright
++ *    notice, this list of conditions and the following disclaimer in the
++ *    documentation and/or other materials provided with the distribution.
++ * 3. All advertising materials mentioning features or use of this software
++ *    must display the following acknowledgement:
++ *        This product includes software developed by the NetBSD
++ *        Foundation, Inc. and its contributors.
++ * 4. Neither the name of The NetBSD Foundation nor the names of its
++ *    contributors may be used to endorse or promote products derived
++ *    from this software without specific prior written permission.
++ *
++ * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS
++ * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
++ * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
++ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS
++ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
++ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
++ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
++ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
++ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
++ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
++ * POSSIBILITY OF SUCH DAMAGE.
++ */
++
++/* Modified by Synopsys, Inc, 12/12/2007 */
++
++
++#ifndef _USB_H_
++#define _USB_H_
++
++#ifdef __cplusplus
++extern "C" {
++#endif
++
++/*
++ * The USB records contain some unaligned little-endian word
++ * components.  The U[SG]ETW macros take care of both the alignment
++ * and endian problem and should always be used to access non-byte
++ * values.
++ */
++typedef u_int8_t uByte;
++typedef u_int8_t uWord[2];
++typedef u_int8_t uDWord[4];
++
++#define USETW2(w,h,l) ((w)[0] = (u_int8_t)(l), (w)[1] = (u_int8_t)(h))
++#define UCONSTW(x)	{ (x) & 0xff, ((x) >> 8) & 0xff }
++#define UCONSTDW(x)	{ (x) & 0xff, ((x) >> 8) & 0xff, \
++			  ((x) >> 16) & 0xff, ((x) >> 24) & 0xff }
++
++#if 1
++#define UGETW(w) ((w)[0] | ((w)[1] << 8))
++#define USETW(w,v) ((w)[0] = (u_int8_t)(v), (w)[1] = (u_int8_t)((v) >> 8))
++#define UGETDW(w) ((w)[0] | ((w)[1] << 8) | ((w)[2] << 16) | ((w)[3] << 24))
++#define USETDW(w,v) ((w)[0] = (u_int8_t)(v), \
++		     (w)[1] = (u_int8_t)((v) >> 8), \
++		     (w)[2] = (u_int8_t)((v) >> 16), \
++		     (w)[3] = (u_int8_t)((v) >> 24))
++#else
++/*
++ * On little-endian machines that can handle unaligned accesses
++ * (e.g. i386) these macros can be replaced by the following.
++ */
++#define UGETW(w) (*(u_int16_t *)(w))
++#define USETW(w,v) (*(u_int16_t *)(w) = (v))
++#define UGETDW(w) (*(u_int32_t *)(w))
++#define USETDW(w,v) (*(u_int32_t *)(w) = (v))
++#endif
++
++/*
++ * Macros for accessing UAS IU fields, which are big-endian
++ */
++#define IUSETW2(w,h,l) ((w)[0] = (u_int8_t)(h), (w)[1] = (u_int8_t)(l))
++#define IUCONSTW(x)	{ ((x) >> 8) & 0xff, (x) & 0xff }
++#define IUCONSTDW(x)	{ ((x) >> 24) & 0xff, ((x) >> 16) & 0xff, \
++			((x) >> 8) & 0xff, (x) & 0xff }
++#define IUGETW(w) (((w)[0] << 8) | (w)[1])
++#define IUSETW(w,v) ((w)[0] = (u_int8_t)((v) >> 8), (w)[1] = (u_int8_t)(v))
++#define IUGETDW(w) (((w)[0] << 24) | ((w)[1] << 16) | ((w)[2] << 8) | (w)[3])
++#define IUSETDW(w,v) ((w)[0] = (u_int8_t)((v) >> 24), \
++		      (w)[1] = (u_int8_t)((v) >> 16), \
++		      (w)[2] = (u_int8_t)((v) >> 8), \
++		      (w)[3] = (u_int8_t)(v))
++
++#define UPACKED __attribute__((__packed__))
++
++typedef struct {
++	uByte		bmRequestType;
++	uByte		bRequest;
++	uWord		wValue;
++	uWord		wIndex;
++	uWord		wLength;
++} UPACKED usb_device_request_t;
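++
++/* Illustrative usage sketch: accessing the unaligned little-endian fields of a
++ * setup packet through the U[SG]ETW macros (the request pointer is hypothetical):
++ *
++ *   usb_device_request_t *req = ...;
++ *   u_int16_t value = UGETW(req->wValue);
++ *   USETW(req->wLength, 64);
++ */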
++
++#define UT_GET_DIR(a) ((a) & 0x80)
++#define UT_WRITE		0x00
++#define UT_READ			0x80
++
++#define UT_GET_TYPE(a) ((a) & 0x60)
++#define UT_STANDARD		0x00
++#define UT_CLASS		0x20
++#define UT_VENDOR		0x40
++
++#define UT_GET_RECIPIENT(a) ((a) & 0x1f)
++#define UT_DEVICE		0x00
++#define UT_INTERFACE		0x01
++#define UT_ENDPOINT		0x02
++#define UT_OTHER		0x03
++
++#define UT_READ_DEVICE		(UT_READ  | UT_STANDARD | UT_DEVICE)
++#define UT_READ_INTERFACE	(UT_READ  | UT_STANDARD | UT_INTERFACE)
++#define UT_READ_ENDPOINT	(UT_READ  | UT_STANDARD | UT_ENDPOINT)
++#define UT_WRITE_DEVICE		(UT_WRITE | UT_STANDARD | UT_DEVICE)
++#define UT_WRITE_INTERFACE	(UT_WRITE | UT_STANDARD | UT_INTERFACE)
++#define UT_WRITE_ENDPOINT	(UT_WRITE | UT_STANDARD | UT_ENDPOINT)
++#define UT_READ_CLASS_DEVICE	(UT_READ  | UT_CLASS | UT_DEVICE)
++#define UT_READ_CLASS_INTERFACE	(UT_READ  | UT_CLASS | UT_INTERFACE)
++#define UT_READ_CLASS_OTHER	(UT_READ  | UT_CLASS | UT_OTHER)
++#define UT_READ_CLASS_ENDPOINT	(UT_READ  | UT_CLASS | UT_ENDPOINT)
++#define UT_WRITE_CLASS_DEVICE	(UT_WRITE | UT_CLASS | UT_DEVICE)
++#define UT_WRITE_CLASS_INTERFACE (UT_WRITE | UT_CLASS | UT_INTERFACE)
++#define UT_WRITE_CLASS_OTHER	(UT_WRITE | UT_CLASS | UT_OTHER)
++#define UT_WRITE_CLASS_ENDPOINT	(UT_WRITE | UT_CLASS | UT_ENDPOINT)
++#define UT_READ_VENDOR_DEVICE	(UT_READ  | UT_VENDOR | UT_DEVICE)
++#define UT_READ_VENDOR_INTERFACE (UT_READ  | UT_VENDOR | UT_INTERFACE)
++#define UT_READ_VENDOR_OTHER	(UT_READ  | UT_VENDOR | UT_OTHER)
++#define UT_READ_VENDOR_ENDPOINT	(UT_READ  | UT_VENDOR | UT_ENDPOINT)
++#define UT_WRITE_VENDOR_DEVICE	(UT_WRITE | UT_VENDOR | UT_DEVICE)
++#define UT_WRITE_VENDOR_INTERFACE (UT_WRITE | UT_VENDOR | UT_INTERFACE)
++#define UT_WRITE_VENDOR_OTHER	(UT_WRITE | UT_VENDOR | UT_OTHER)
++#define UT_WRITE_VENDOR_ENDPOINT (UT_WRITE | UT_VENDOR | UT_ENDPOINT)
++
++/* Requests */
++#define UR_GET_STATUS		0x00
++#define  USTAT_STANDARD_STATUS  0x00
++#define  WUSTAT_WUSB_FEATURE    0x01
++#define  WUSTAT_CHANNEL_INFO    0x02
++#define  WUSTAT_RECEIVED_DATA   0x03
++#define  WUSTAT_MAS_AVAILABILITY 0x04
++#define  WUSTAT_CURRENT_TRANSMIT_POWER 0x05
++#define UR_CLEAR_FEATURE	0x01
++#define UR_SET_FEATURE		0x03
++#define UR_SET_AND_TEST_FEATURE 0x0c
++#define UR_SET_ADDRESS		0x05
++#define UR_GET_DESCRIPTOR	0x06
++#define  UDESC_DEVICE		0x01
++#define  UDESC_CONFIG		0x02
++#define  UDESC_STRING		0x03
++#define  UDESC_INTERFACE	0x04
++#define  UDESC_ENDPOINT		0x05
++#define  UDESC_SS_USB_COMPANION	0x30
++#define  UDESC_DEVICE_QUALIFIER	0x06
++#define  UDESC_OTHER_SPEED_CONFIGURATION 0x07
++#define  UDESC_INTERFACE_POWER	0x08
++#define  UDESC_OTG		0x09
++#define  WUDESC_SECURITY	0x0c
++#define  WUDESC_KEY		0x0d
++#define   WUD_GET_KEY_INDEX(_wValue_) ((_wValue_) & 0xf)
++#define   WUD_GET_KEY_TYPE(_wValue_) (((_wValue_) & 0x30) >> 4)
++#define    WUD_KEY_TYPE_ASSOC    0x01
++#define    WUD_KEY_TYPE_GTK      0x02
++#define   WUD_GET_KEY_ORIGIN(_wValue_) (((_wValue_) & 0x40) >> 6)
++#define    WUD_KEY_ORIGIN_HOST   0x00
++#define    WUD_KEY_ORIGIN_DEVICE 0x01
++#define  WUDESC_ENCRYPTION_TYPE	0x0e
++#define  WUDESC_BOS		0x0f
++#define  WUDESC_DEVICE_CAPABILITY 0x10
++#define  WUDESC_WIRELESS_ENDPOINT_COMPANION 0x11
++#define  UDESC_BOS		0x0f
++#define  UDESC_DEVICE_CAPABILITY 0x10
++#define  UDESC_CS_DEVICE	0x21	/* class specific */
++#define  UDESC_CS_CONFIG	0x22
++#define  UDESC_CS_STRING	0x23
++#define  UDESC_CS_INTERFACE	0x24
++#define  UDESC_CS_ENDPOINT	0x25
++#define  UDESC_HUB		0x29
++#define UR_SET_DESCRIPTOR	0x07
++#define UR_GET_CONFIG		0x08
++#define UR_SET_CONFIG		0x09
++#define UR_GET_INTERFACE	0x0a
++#define UR_SET_INTERFACE	0x0b
++#define UR_SYNCH_FRAME		0x0c
++#define WUR_SET_ENCRYPTION      0x0d
++#define WUR_GET_ENCRYPTION	0x0e
++#define WUR_SET_HANDSHAKE	0x0f
++#define WUR_GET_HANDSHAKE	0x10
++#define WUR_SET_CONNECTION	0x11
++#define WUR_SET_SECURITY_DATA	0x12
++#define WUR_GET_SECURITY_DATA	0x13
++#define WUR_SET_WUSB_DATA	0x14
++#define  WUDATA_DRPIE_INFO	0x01
++#define  WUDATA_TRANSMIT_DATA	0x02
++#define  WUDATA_TRANSMIT_PARAMS	0x03
++#define  WUDATA_RECEIVE_PARAMS	0x04
++#define  WUDATA_TRANSMIT_POWER	0x05
++#define WUR_LOOPBACK_DATA_WRITE	0x15
++#define WUR_LOOPBACK_DATA_READ	0x16
++#define WUR_SET_INTERFACE_DS	0x17
++
++/* Feature numbers */
++#define UF_ENDPOINT_HALT	0
++#define UF_DEVICE_REMOTE_WAKEUP	1
++#define UF_TEST_MODE		2
++#define UF_DEVICE_B_HNP_ENABLE	3
++#define UF_DEVICE_A_HNP_SUPPORT	4
++#define UF_DEVICE_A_ALT_HNP_SUPPORT 5
++#define WUF_WUSB		3
++#define  WUF_TX_DRPIE		0x0
++#define  WUF_DEV_XMIT_PACKET	0x1
++#define  WUF_COUNT_PACKETS	0x2
++#define  WUF_CAPTURE_PACKETS	0x3
++#define UF_FUNCTION_SUSPEND	0
++#define UF_U1_ENABLE		48
++#define UF_U2_ENABLE		49
++#define UF_LTM_ENABLE		50
++
++/* Class requests from the USB 2.0 hub spec, table 11-15 */
++#define UCR_CLEAR_HUB_FEATURE		(0x2000 | UR_CLEAR_FEATURE)
++#define UCR_CLEAR_PORT_FEATURE		(0x2300 | UR_CLEAR_FEATURE)
++#define UCR_GET_HUB_DESCRIPTOR		(0xa000 | UR_GET_DESCRIPTOR)
++#define UCR_GET_HUB_STATUS		(0xa000 | UR_GET_STATUS)
++#define UCR_GET_PORT_STATUS		(0xa300 | UR_GET_STATUS)
++#define UCR_SET_HUB_FEATURE		(0x2000 | UR_SET_FEATURE)
++#define UCR_SET_PORT_FEATURE		(0x2300 | UR_SET_FEATURE)
++#define UCR_SET_AND_TEST_PORT_FEATURE	(0xa300 | UR_SET_AND_TEST_FEATURE)
++
++#ifdef _MSC_VER
++#include <pshpack1.h>
++#endif
++
++typedef struct {
++	uByte		bLength;
++	uByte		bDescriptorType;
++	uByte		bDescriptorSubtype;
++} UPACKED usb_descriptor_t;
++
++typedef struct {
++	uByte		bLength;
++	uByte		bDescriptorType;
++} UPACKED usb_descriptor_header_t;
++
++typedef struct {
++	uByte		bLength;
++	uByte		bDescriptorType;
++	uWord		bcdUSB;
++#define UD_USB_2_0		0x0200
++#define UD_IS_USB2(d) (UGETW((d)->bcdUSB) >= UD_USB_2_0)
++	uByte		bDeviceClass;
++	uByte		bDeviceSubClass;
++	uByte		bDeviceProtocol;
++	uByte		bMaxPacketSize;
++	/* The fields below are not part of the initial descriptor. */
++	uWord		idVendor;
++	uWord		idProduct;
++	uWord		bcdDevice;
++	uByte		iManufacturer;
++	uByte		iProduct;
++	uByte		iSerialNumber;
++	uByte		bNumConfigurations;
++} UPACKED usb_device_descriptor_t;
++#define USB_DEVICE_DESCRIPTOR_SIZE 18
++
++typedef struct {
++	uByte		bLength;
++	uByte		bDescriptorType;
++	uWord		wTotalLength;
++	uByte		bNumInterface;
++	uByte		bConfigurationValue;
++	uByte		iConfiguration;
++#define UC_ATT_ONE		(1 << 7)	/* must be set */
++#define UC_ATT_SELFPOWER	(1 << 6)	/* self powered */
++#define UC_ATT_WAKEUP		(1 << 5)	/* can wakeup */
++#define UC_ATT_BATTERY		(1 << 4)	/* battery powered */
++	uByte		bmAttributes;
++#define UC_BUS_POWERED		0x80
++#define UC_SELF_POWERED		0x40
++#define UC_REMOTE_WAKEUP	0x20
++	uByte		bMaxPower; /* max current in 2 mA units */
++#define UC_POWER_FACTOR 2
++} UPACKED usb_config_descriptor_t;
++#define USB_CONFIG_DESCRIPTOR_SIZE 9
++
++typedef struct {
++	uByte		bLength;
++	uByte		bDescriptorType;
++	uByte		bInterfaceNumber;
++	uByte		bAlternateSetting;
++	uByte		bNumEndpoints;
++	uByte		bInterfaceClass;
++	uByte		bInterfaceSubClass;
++	uByte		bInterfaceProtocol;
++	uByte		iInterface;
++} UPACKED usb_interface_descriptor_t;
++#define USB_INTERFACE_DESCRIPTOR_SIZE 9
++
++typedef struct {
++	uByte		bLength;
++	uByte		bDescriptorType;
++	uByte		bEndpointAddress;
++#define UE_GET_DIR(a)	((a) & 0x80)
++#define UE_SET_DIR(a,d)	((a) | (((d)&1) << 7))
++#define UE_DIR_IN	0x80
++#define UE_DIR_OUT	0x00
++#define UE_ADDR		0x0f
++#define UE_GET_ADDR(a)	((a) & UE_ADDR)
++	uByte		bmAttributes;
++#define UE_XFERTYPE	0x03
++#define  UE_CONTROL	0x00
++#define  UE_ISOCHRONOUS	0x01
++#define  UE_BULK	0x02
++#define  UE_INTERRUPT	0x03
++#define UE_GET_XFERTYPE(a)	((a) & UE_XFERTYPE)
++#define UE_ISO_TYPE	0x0c
++#define  UE_ISO_ASYNC	0x04
++#define  UE_ISO_ADAPT	0x08
++#define  UE_ISO_SYNC	0x0c
++#define UE_GET_ISO_TYPE(a)	((a) & UE_ISO_TYPE)
++	uWord		wMaxPacketSize;
++	uByte		bInterval;
++} UPACKED usb_endpoint_descriptor_t;
++#define USB_ENDPOINT_DESCRIPTOR_SIZE 7
++
++typedef struct ss_endpoint_companion_descriptor {
++	uByte bLength;
++	uByte bDescriptorType;
++	uByte bMaxBurst;
++#define USSE_GET_MAX_STREAMS(a)		((a) & 0x1f)
++#define USSE_SET_MAX_STREAMS(a, b)	((a) | ((b) & 0x1f))
++#define USSE_GET_MAX_PACKET_NUM(a)	((a) & 0x03)
++#define USSE_SET_MAX_PACKET_NUM(a, b)	((a) | ((b) & 0x03))
++	uByte bmAttributes;
++	uWord wBytesPerInterval;
++} UPACKED ss_endpoint_companion_descriptor_t;
++#define USB_SS_ENDPOINT_COMPANION_DESCRIPTOR_SIZE 6
++
++typedef struct {
++	uByte		bLength;
++	uByte		bDescriptorType;
++	uWord		bString[127];
++} UPACKED usb_string_descriptor_t;
++#define USB_MAX_STRING_LEN 128
++#define USB_LANGUAGE_TABLE 0	/* # of the string language id table */
++
++/* Hub specific request */
++#define UR_GET_BUS_STATE	0x02
++#define UR_CLEAR_TT_BUFFER	0x08
++#define UR_RESET_TT		0x09
++#define UR_GET_TT_STATE		0x0a
++#define UR_STOP_TT		0x0b
++
++/* Hub features */
++#define UHF_C_HUB_LOCAL_POWER	0
++#define UHF_C_HUB_OVER_CURRENT	1
++#define UHF_PORT_CONNECTION	0
++#define UHF_PORT_ENABLE		1
++#define UHF_PORT_SUSPEND	2
++#define UHF_PORT_OVER_CURRENT	3
++#define UHF_PORT_RESET		4
++#define UHF_PORT_L1		5
++#define UHF_PORT_POWER		8
++#define UHF_PORT_LOW_SPEED	9
++#define UHF_PORT_HIGH_SPEED	10
++#define UHF_C_PORT_CONNECTION	16
++#define UHF_C_PORT_ENABLE	17
++#define UHF_C_PORT_SUSPEND	18
++#define UHF_C_PORT_OVER_CURRENT	19
++#define UHF_C_PORT_RESET	20
++#define UHF_C_PORT_L1		23
++#define UHF_PORT_TEST		21
++#define UHF_PORT_INDICATOR	22
++
++typedef struct {
++	uByte		bDescLength;
++	uByte		bDescriptorType;
++	uByte		bNbrPorts;
++	uWord		wHubCharacteristics;
++#define UHD_PWR			0x0003
++#define  UHD_PWR_GANGED		0x0000
++#define  UHD_PWR_INDIVIDUAL	0x0001
++#define  UHD_PWR_NO_SWITCH	0x0002
++#define UHD_COMPOUND		0x0004
++#define UHD_OC			0x0018
++#define  UHD_OC_GLOBAL		0x0000
++#define  UHD_OC_INDIVIDUAL	0x0008
++#define  UHD_OC_NONE		0x0010
++#define UHD_TT_THINK		0x0060
++#define  UHD_TT_THINK_8		0x0000
++#define  UHD_TT_THINK_16	0x0020
++#define  UHD_TT_THINK_24	0x0040
++#define  UHD_TT_THINK_32	0x0060
++#define UHD_PORT_IND		0x0080
++	uByte		bPwrOn2PwrGood;	/* delay in 2 ms units */
++#define UHD_PWRON_FACTOR 2
++	uByte		bHubContrCurrent;
++	uByte		DeviceRemovable[32]; /* max 255 ports */
++#define UHD_NOT_REMOV(desc, i) \
++    (((desc)->DeviceRemovable[(i)/8] >> ((i) % 8)) & 1)
++	/* deprecated */ uByte		PortPowerCtrlMask[1];
++} UPACKED usb_hub_descriptor_t;
++#define USB_HUB_DESCRIPTOR_SIZE 9 /* includes deprecated PortPowerCtrlMask */
++
++typedef struct {
++	uByte		bLength;
++	uByte		bDescriptorType;
++	uWord		bcdUSB;
++	uByte		bDeviceClass;
++	uByte		bDeviceSubClass;
++	uByte		bDeviceProtocol;
++	uByte		bMaxPacketSize0;
++	uByte		bNumConfigurations;
++	uByte		bReserved;
++} UPACKED usb_device_qualifier_t;
++#define USB_DEVICE_QUALIFIER_SIZE 10
++
++typedef struct {
++	uByte		bLength;
++	uByte		bDescriptorType;
++	uByte		bmAttributes;
++#define UOTG_SRP	0x01
++#define UOTG_HNP	0x02
++} UPACKED usb_otg_descriptor_t;
++
++/* OTG feature selectors */
++#define UOTG_B_HNP_ENABLE	3
++#define UOTG_A_HNP_SUPPORT	4
++#define UOTG_A_ALT_HNP_SUPPORT	5
++
++typedef struct {
++	uWord		wStatus;
++/* Device status flags */
++#define UDS_SELF_POWERED		0x0001
++#define UDS_REMOTE_WAKEUP		0x0002
++/* Endpoint status flags */
++#define UES_HALT			0x0001
++} UPACKED usb_status_t;
++
++typedef struct {
++	uWord		wHubStatus;
++#define UHS_LOCAL_POWER			0x0001
++#define UHS_OVER_CURRENT		0x0002
++	uWord		wHubChange;
++} UPACKED usb_hub_status_t;
++
++typedef struct {
++	uWord		wPortStatus;
++#define UPS_CURRENT_CONNECT_STATUS	0x0001
++#define UPS_PORT_ENABLED		0x0002
++#define UPS_SUSPEND			0x0004
++#define UPS_OVERCURRENT_INDICATOR	0x0008
++#define UPS_RESET			0x0010
++#define UPS_PORT_POWER			0x0100
++#define UPS_LOW_SPEED			0x0200
++#define UPS_HIGH_SPEED			0x0400
++#define UPS_PORT_TEST			0x0800
++#define UPS_PORT_INDICATOR		0x1000
++	uWord		wPortChange;
++#define UPS_C_CONNECT_STATUS		0x0001
++#define UPS_C_PORT_ENABLED		0x0002
++#define UPS_C_SUSPEND			0x0004
++#define UPS_C_OVERCURRENT_INDICATOR	0x0008
++#define UPS_C_PORT_RESET		0x0010
++} UPACKED usb_port_status_t;
++
++#ifdef _MSC_VER
++#include <poppack.h>
++#endif
++
++/* Device class codes */
++#define UDCLASS_IN_INTERFACE	0x00
++#define UDCLASS_COMM		0x02
++#define UDCLASS_HUB		0x09
++#define  UDSUBCLASS_HUB		0x00
++#define  UDPROTO_FSHUB		0x00
++#define  UDPROTO_HSHUBSTT	0x01
++#define  UDPROTO_HSHUBMTT	0x02
++#define UDCLASS_DIAGNOSTIC	0xdc
++#define UDCLASS_WIRELESS	0xe0
++#define  UDSUBCLASS_RF		0x01
++#define   UDPROTO_BLUETOOTH	0x01
++#define UDCLASS_VENDOR		0xff
++
++/* Interface class codes */
++#define UICLASS_UNSPEC		0x00
++
++#define UICLASS_AUDIO		0x01
++#define  UISUBCLASS_AUDIOCONTROL	1
++#define  UISUBCLASS_AUDIOSTREAM		2
++#define  UISUBCLASS_MIDISTREAM		3
++
++#define UICLASS_CDC		0x02 /* communication */
++#define  UISUBCLASS_DIRECT_LINE_CONTROL_MODEL	1
++#define  UISUBCLASS_ABSTRACT_CONTROL_MODEL	2
++#define  UISUBCLASS_TELEPHONE_CONTROL_MODEL	3
++#define  UISUBCLASS_MULTICHANNEL_CONTROL_MODEL	4
++#define  UISUBCLASS_CAPI_CONTROLMODEL		5
++#define  UISUBCLASS_ETHERNET_NETWORKING_CONTROL_MODEL 6
++#define  UISUBCLASS_ATM_NETWORKING_CONTROL_MODEL 7
++#define   UIPROTO_CDC_AT			1
++
++#define UICLASS_HID		0x03
++#define  UISUBCLASS_BOOT	1
++#define  UIPROTO_BOOT_KEYBOARD	1
++
++#define UICLASS_PHYSICAL	0x05
++
++#define UICLASS_IMAGE		0x06
++
++#define UICLASS_PRINTER		0x07
++#define  UISUBCLASS_PRINTER	1
++#define  UIPROTO_PRINTER_UNI	1
++#define  UIPROTO_PRINTER_BI	2
++#define  UIPROTO_PRINTER_1284	3
++
++#define UICLASS_MASS		0x08
++#define  UISUBCLASS_RBC		1
++#define  UISUBCLASS_SFF8020I	2
++#define  UISUBCLASS_QIC157	3
++#define  UISUBCLASS_UFI		4
++#define  UISUBCLASS_SFF8070I	5
++#define  UISUBCLASS_SCSI	6
++#define  UIPROTO_MASS_CBI_I	0
++#define  UIPROTO_MASS_CBI	1
++#define  UIPROTO_MASS_BBB_OLD	2	/* Not in the spec anymore */
++#define  UIPROTO_MASS_BBB	80	/* 'P' for the Iomega Zip drive */
++
++#define UICLASS_HUB		0x09
++#define  UISUBCLASS_HUB		0
++#define  UIPROTO_FSHUB		0
++#define  UIPROTO_HSHUBSTT	0 /* Yes, same as previous */
++#define  UIPROTO_HSHUBMTT	1
++
++#define UICLASS_CDC_DATA	0x0a
++#define  UISUBCLASS_DATA		0
++#define   UIPROTO_DATA_ISDNBRI		0x30    /* Physical iface */
++#define   UIPROTO_DATA_HDLC		0x31    /* HDLC */
++#define   UIPROTO_DATA_TRANSPARENT	0x32    /* Transparent */
++#define   UIPROTO_DATA_Q921M		0x50    /* Management for Q921 */
++#define   UIPROTO_DATA_Q921		0x51    /* Data for Q921 */
++#define   UIPROTO_DATA_Q921TM		0x52    /* TEI multiplexer for Q921 */
++#define   UIPROTO_DATA_V42BIS		0x90    /* Data compression */
++#define   UIPROTO_DATA_Q931		0x91    /* Euro-ISDN */
++#define   UIPROTO_DATA_V120		0x92    /* V.24 rate adaption */
++#define   UIPROTO_DATA_CAPI		0x93    /* CAPI 2.0 commands */
++#define   UIPROTO_DATA_HOST_BASED	0xfd    /* Host based driver */
++#define   UIPROTO_DATA_PUF		0xfe    /* see Prot. Unit Func. Desc.*/
++#define   UIPROTO_DATA_VENDOR		0xff    /* Vendor specific */
++
++#define UICLASS_SMARTCARD	0x0b
++
++/*#define UICLASS_FIRM_UPD	0x0c*/
++
++#define UICLASS_SECURITY	0x0d
++
++#define UICLASS_DIAGNOSTIC	0xdc
++
++#define UICLASS_WIRELESS	0xe0
++#define  UISUBCLASS_RF			0x01
++#define   UIPROTO_BLUETOOTH		0x01
++
++#define UICLASS_APPL_SPEC	0xfe
++#define  UISUBCLASS_FIRMWARE_DOWNLOAD	1
++#define  UISUBCLASS_IRDA		2
++#define  UIPROTO_IRDA			0
++
++#define UICLASS_VENDOR		0xff
++
++#define USB_HUB_MAX_DEPTH 5
++
++/*
++ * Minimum time a device needs to be powered down to go through
++ * a power cycle.  XXX Are these times in the spec?
++ */
++#define USB_POWER_DOWN_TIME	200 /* ms */
++#define USB_PORT_POWER_DOWN_TIME	100 /* ms */
++
++#if 0
++/* These are the values from the spec. */
++#define USB_PORT_RESET_DELAY	10  /* ms */
++#define USB_PORT_ROOT_RESET_DELAY 50  /* ms */
++#define USB_PORT_RESET_RECOVERY	10  /* ms */
++#define USB_PORT_POWERUP_DELAY	100 /* ms */
++#define USB_SET_ADDRESS_SETTLE	2   /* ms */
++#define USB_RESUME_DELAY	(20*5)  /* ms */
++#define USB_RESUME_WAIT		10  /* ms */
++#define USB_RESUME_RECOVERY	10  /* ms */
++#define USB_EXTRA_POWER_UP_TIME	0   /* ms */
++#else
++/* Allow for marginal (i.e. non-conforming) devices. */
++#define USB_PORT_RESET_DELAY	50  /* ms */
++#define USB_PORT_ROOT_RESET_DELAY 250  /* ms */
++#define USB_PORT_RESET_RECOVERY	250  /* ms */
++#define USB_PORT_POWERUP_DELAY	300 /* ms */
++#define USB_SET_ADDRESS_SETTLE	10  /* ms */
++#define USB_RESUME_DELAY	(50*5)  /* ms */
++#define USB_RESUME_WAIT		50  /* ms */
++#define USB_RESUME_RECOVERY	50  /* ms */
++#define USB_EXTRA_POWER_UP_TIME	20  /* ms */
++#endif
++
++#define USB_MIN_POWER		100 /* mA */
++#define USB_MAX_POWER		500 /* mA */
++
++#define USB_BUS_RESET_DELAY	100 /* ms XXX?*/
++
++#define USB_UNCONFIG_NO 0
++#define USB_UNCONFIG_INDEX (-1)
++
++/*** ioctl() related stuff ***/
++
++struct usb_ctl_request {
++	int	ucr_addr;
++	usb_device_request_t ucr_request;
++	void	*ucr_data;
++	int	ucr_flags;
++#define USBD_SHORT_XFER_OK	0x04	/* allow short reads */
++	int	ucr_actlen;		/* actual length transferred */
++};
++
++struct usb_alt_interface {
++	int	uai_config_index;
++	int	uai_interface_index;
++	int	uai_alt_no;
++};
++
++#define USB_CURRENT_CONFIG_INDEX (-1)
++#define USB_CURRENT_ALT_INDEX (-1)
++
++struct usb_config_desc {
++	int	ucd_config_index;
++	usb_config_descriptor_t ucd_desc;
++};
++
++struct usb_interface_desc {
++	int	uid_config_index;
++	int	uid_interface_index;
++	int	uid_alt_index;
++	usb_interface_descriptor_t uid_desc;
++};
++
++struct usb_endpoint_desc {
++	int	ued_config_index;
++	int	ued_interface_index;
++	int	ued_alt_index;
++	int	ued_endpoint_index;
++	usb_endpoint_descriptor_t ued_desc;
++};
++
++struct usb_full_desc {
++	int	ufd_config_index;
++	u_int	ufd_size;
++	u_char	*ufd_data;
++};
++
++struct usb_string_desc {
++	int	usd_string_index;
++	int	usd_language_id;
++	usb_string_descriptor_t usd_desc;
++};
++
++struct usb_ctl_report_desc {
++	int	ucrd_size;
++	u_char	ucrd_data[1024];	/* filled data size will vary */
++};
++
++typedef struct { u_int32_t cookie; } usb_event_cookie_t;
++
++#define USB_MAX_DEVNAMES 4
++#define USB_MAX_DEVNAMELEN 16
++struct usb_device_info {
++	u_int8_t	udi_bus;
++	u_int8_t	udi_addr;	/* device address */
++	usb_event_cookie_t udi_cookie;
++	char		udi_product[USB_MAX_STRING_LEN];
++	char		udi_vendor[USB_MAX_STRING_LEN];
++	char		udi_release[8];
++	u_int16_t	udi_productNo;
++	u_int16_t	udi_vendorNo;
++	u_int16_t	udi_releaseNo;
++	u_int8_t	udi_class;
++	u_int8_t	udi_subclass;
++	u_int8_t	udi_protocol;
++	u_int8_t	udi_config;
++	u_int8_t	udi_speed;
++#define USB_SPEED_UNKNOWN	0
++#define USB_SPEED_LOW		1
++#define USB_SPEED_FULL		2
++#define USB_SPEED_HIGH		3
++#define USB_SPEED_VARIABLE	4
++#define USB_SPEED_SUPER		5
++	int		udi_power;	/* power consumption in mA, 0 if selfpowered */
++	int		udi_nports;
++	char		udi_devnames[USB_MAX_DEVNAMES][USB_MAX_DEVNAMELEN];
++	u_int8_t	udi_ports[16];/* hub only: addresses of devices on ports */
++#define USB_PORT_ENABLED 0xff
++#define USB_PORT_SUSPENDED 0xfe
++#define USB_PORT_POWERED 0xfd
++#define USB_PORT_DISABLED 0xfc
++};
++
++struct usb_ctl_report {
++	int	ucr_report;
++	u_char	ucr_data[1024];	/* filled data size will vary */
++};
++
++struct usb_device_stats {
++	u_long	uds_requests[4];	/* indexed by transfer type UE_* */
++};
++
++#define WUSB_MIN_IE			0x80
++#define WUSB_WCTA_IE			0x80
++#define WUSB_WCONNECTACK_IE		0x81
++#define WUSB_WHOSTINFO_IE		0x82
++#define  WUHI_GET_CA(_bmAttributes_) ((_bmAttributes_) & 0x3)
++#define   WUHI_CA_RECONN		0x00
++#define   WUHI_CA_LIMITED		0x01
++#define   WUHI_CA_ALL			0x03
++#define  WUHI_GET_MLSI(_bmAttributes_) (((_bmAttributes_) & 0x38) >> 3)
++#define WUSB_WCHCHANGEANNOUNCE_IE	0x83
++#define WUSB_WDEV_DISCONNECT_IE		0x84
++#define WUSB_WHOST_DISCONNECT_IE	0x85
++#define WUSB_WRELEASE_CHANNEL_IE	0x86
++#define WUSB_WWORK_IE			0x87
++#define WUSB_WCHANNEL_STOP_IE		0x88
++#define WUSB_WDEV_KEEPALIVE_IE		0x89
++#define WUSB_WISOCH_DISCARD_IE		0x8A
++#define WUSB_WRESETDEVICE_IE		0x8B
++#define WUSB_WXMIT_PACKET_ADJUST_IE	0x8C
++#define WUSB_MAX_IE			0x8C
++
++/* Device Notification Types */
++
++#define WUSB_DN_MIN			0x01
++#define WUSB_DN_CONNECT			0x01
++# define WUSB_DA_OLDCONN	0x00
++# define WUSB_DA_NEWCONN	0x01
++# define WUSB_DA_SELF_BEACON	0x02
++# define WUSB_DA_DIR_BEACON	0x04
++# define WUSB_DA_NO_BEACON	0x06
++#define WUSB_DN_DISCONNECT		0x02
++#define WUSB_DN_EPRDY			0x03
++#define WUSB_DN_MASAVAILCHANGED		0x04
++#define WUSB_DN_REMOTEWAKEUP		0x05
++#define WUSB_DN_SLEEP			0x06
++#define WUSB_DN_ALIVE			0x07
++#define WUSB_DN_MAX			0x07
++
++#ifdef _MSC_VER
++#include <pshpack1.h>
++#endif
++
++/* WUSB Handshake Data.  Used during the SET/GET HANDSHAKE requests */
++typedef struct wusb_hndshk_data {
++	uByte bMessageNumber;
++	uByte bStatus;
++	uByte tTKID[3];
++	uByte bReserved;
++	uByte CDID[16];
++	uByte Nonce[16];
++	uByte MIC[8];
++} UPACKED wusb_hndshk_data_t;
++#define WUSB_HANDSHAKE_LEN_FOR_MIC	38
++
++/* WUSB Connection Context */
++typedef struct wusb_conn_context {
++	uByte CHID [16];
++	uByte CDID [16];
++	uByte CK [16];
++} UPACKED wusb_conn_context_t;
++
++/* WUSB Security Descriptor */
++typedef struct wusb_security_desc {
++	uByte bLength;
++	uByte bDescriptorType;
++	uWord wTotalLength;
++	uByte bNumEncryptionTypes;
++} UPACKED wusb_security_desc_t;
++
++/* WUSB Encryption Type Descriptor */
++typedef struct wusb_encrypt_type_desc {
++	uByte bLength;
++	uByte bDescriptorType;
++
++	uByte bEncryptionType;
++#define WUETD_UNSECURE		0
++#define WUETD_WIRED		1
++#define WUETD_CCM_1		2
++#define WUETD_RSA_1		3
++
++	uByte bEncryptionValue;
++	uByte bAuthKeyIndex;
++} UPACKED wusb_encrypt_type_desc_t;
++
++/* WUSB Key Descriptor */
++typedef struct wusb_key_desc {
++	uByte bLength;
++	uByte bDescriptorType;
++	uByte tTKID[3];
++	uByte bReserved;
++	uByte KeyData[1];	/* variable length */
++} UPACKED wusb_key_desc_t;
++
++/* WUSB BOS Descriptor (Binary device Object Store) */
++typedef struct wusb_bos_desc {
++	uByte bLength;
++	uByte bDescriptorType;
++	uWord wTotalLength;
++	uByte bNumDeviceCaps;
++} UPACKED wusb_bos_desc_t;
++
++#define USB_DEVICE_CAPABILITY_20_EXTENSION	0x02
++typedef struct usb_dev_cap_20_ext_desc {
++	uByte bLength;
++	uByte bDescriptorType;
++	uByte bDevCapabilityType;
++#define USB_20_EXT_LPM				0x02
++	uDWord bmAttributes;
++} UPACKED usb_dev_cap_20_ext_desc_t;
++
++#define USB_DEVICE_CAPABILITY_SS_USB		0x03
++typedef struct usb_dev_cap_ss_usb {
++	uByte bLength;
++	uByte bDescriptorType;
++	uByte bDevCapabilityType;
++#define USB_DC_SS_USB_LTM_CAPABLE		0x02
++	uByte bmAttributes;
++#define USB_DC_SS_USB_SPEED_SUPPORT_LOW		0x01
++#define USB_DC_SS_USB_SPEED_SUPPORT_FULL	0x02
++#define USB_DC_SS_USB_SPEED_SUPPORT_HIGH	0x04
++#define USB_DC_SS_USB_SPEED_SUPPORT_SS		0x08
++	uWord wSpeedsSupported;
++	uByte bFunctionalitySupport;
++	uByte bU1DevExitLat;
++	uWord wU2DevExitLat;
++} UPACKED usb_dev_cap_ss_usb_t;
++
++#define USB_DEVICE_CAPABILITY_CONTAINER_ID	0x04
++typedef struct usb_dev_cap_container_id {
++	uByte bLength;
++	uByte bDescriptorType;
++	uByte bDevCapabilityType;
++	uByte bReserved;
++	uByte containerID[16];
++} UPACKED usb_dev_cap_container_id_t;
++
++/* Device Capability Type Codes */
++#define WUSB_DEVICE_CAPABILITY_WIRELESS_USB 0x01
++
++/* Device Capability Descriptor */
++typedef struct wusb_dev_cap_desc {
++	uByte bLength;
++	uByte bDescriptorType;
++	uByte bDevCapabilityType;
++	uByte caps[1];	/* Variable length */
++} UPACKED wusb_dev_cap_desc_t;
++
++/* Device Capability Descriptor */
++typedef struct wusb_dev_cap_uwb_desc {
++	uByte bLength;
++	uByte bDescriptorType;
++	uByte bDevCapabilityType;
++	uByte bmAttributes;
++	uWord wPHYRates;	/* Bitmap */
++	uByte bmTFITXPowerInfo;
++	uByte bmFFITXPowerInfo;
++	uWord bmBandGroup;
++	uByte bReserved;
++} UPACKED wusb_dev_cap_uwb_desc_t;
++
++/* Wireless USB Endpoint Companion Descriptor */
++typedef struct wusb_endpoint_companion_desc {
++	uByte bLength;
++	uByte bDescriptorType;
++	uByte bMaxBurst;
++	uByte bMaxSequence;
++	uWord wMaxStreamDelay;
++	uWord wOverTheAirPacketSize;
++	uByte bOverTheAirInterval;
++	uByte bmCompAttributes;
++} UPACKED wusb_endpoint_companion_desc_t;
++
++/* Wireless USB Numeric Association M1 Data Structure */
++typedef struct wusb_m1_data {
++	uByte version;
++	uWord langId;
++	uByte deviceFriendlyNameLength;
++	uByte sha_256_m3[32];
++	uByte deviceFriendlyName[256];
++} UPACKED wusb_m1_data_t;
++
++typedef struct wusb_m2_data {
++	uByte version;
++	uWord langId;
++	uByte hostFriendlyNameLength;
++	uByte pkh[384];
++	uByte hostFriendlyName[256];
++} UPACKED wusb_m2_data_t;
++
++typedef struct wusb_m3_data {
++	uByte pkd[384];
++	uByte nd;
++} UPACKED wusb_m3_data_t;
++
++typedef struct wusb_m4_data {
++	uDWord _attributeTypeIdAndLength_1;
++	uWord  associationTypeId;
++
++	uDWord _attributeTypeIdAndLength_2;
++	uWord  associationSubTypeId;
++
++	uDWord _attributeTypeIdAndLength_3;
++	uDWord length;
++
++	uDWord _attributeTypeIdAndLength_4;
++	uDWord associationStatus;
++
++	uDWord _attributeTypeIdAndLength_5;
++	uByte  chid[16];
++
++	uDWord _attributeTypeIdAndLength_6;
++	uByte  cdid[16];
++
++	uDWord _attributeTypeIdAndLength_7;
++	uByte  bandGroups[2];
++} UPACKED wusb_m4_data_t;
++
++#ifdef _MSC_VER
++#include <poppack.h>
++#endif
++
++#ifdef __cplusplus
++}
++#endif
++
++#endif /* _USB_H_ */
+--- /dev/null
++++ b/drivers/usb/host/dwc_otg/Makefile
+@@ -0,0 +1,82 @@
++#
++# Makefile for DWC_otg Highspeed USB controller driver
++#
++
++ifneq ($(KERNELRELEASE),)
++
++# Use the BUS_INTERFACE variable to compile the software for either
++# PCI(PCI_INTERFACE) or LM(LM_INTERFACE) bus.
++ifeq ($(BUS_INTERFACE),)
++#	BUS_INTERFACE = -DPCI_INTERFACE
++#	BUS_INTERFACE = -DLM_INTERFACE
++        BUS_INTERFACE = -DPLATFORM_INTERFACE
++endif
++
++#ccflags-y	+= -DDEBUG
++#ccflags-y	+= -DDWC_OTG_DEBUGLEV=1 # reduce common debug msgs
++
++# Use one of the following flags to compile the software in host-only or
++# device-only mode.
++#ccflags-y        += -DDWC_HOST_ONLY
++#ccflags-y        += -DDWC_DEVICE_ONLY
++
++ccflags-y	+= -Dlinux -DDWC_HS_ELECT_TST
++#ccflags-y	+= -DDWC_EN_ISOC
++ccflags-y   	+= -I$(obj)/../dwc_common_port
++#ccflags-y   	+= -I$(PORTLIB)
++ccflags-y   	+= -DDWC_LINUX
++ccflags-y   	+= $(CFI)
++ccflags-y	+= $(BUS_INTERFACE)
++#ccflags-y	+= -DDWC_DEV_SRPCAP
++
++obj-$(CONFIG_USB_DWCOTG) += dwc_otg.o
++
++dwc_otg-objs	:= dwc_otg_driver.o dwc_otg_attr.o
++dwc_otg-objs	+= dwc_otg_cil.o dwc_otg_cil_intr.o
++dwc_otg-objs	+= dwc_otg_pcd_linux.o dwc_otg_pcd.o dwc_otg_pcd_intr.o
++dwc_otg-objs	+= dwc_otg_hcd.o dwc_otg_hcd_linux.o dwc_otg_hcd_intr.o dwc_otg_hcd_queue.o dwc_otg_hcd_ddma.o
++dwc_otg-objs	+= dwc_otg_adp.o
++dwc_otg-objs	+= dwc_otg_fiq_fsm.o
++dwc_otg-objs	+= dwc_otg_fiq_stub.o
++ifneq ($(CFI),)
++dwc_otg-objs	+= dwc_otg_cfi.o
++endif
++
++kernrelwd := $(subst ., ,$(KERNELRELEASE))
++kernrel3 := $(word 1,$(kernrelwd)).$(word 2,$(kernrelwd)).$(word 3,$(kernrelwd))
++
++ifneq ($(kernrel3),2.6.20)
++ccflags-y += $(CPPFLAGS)
++endif
++
++else
++
++PWD		:= $(shell pwd)
++PORTLIB		:= $(PWD)/../dwc_common_port
++
++# Command paths
++CTAGS		:= $(CTAGS)
++DOXYGEN		:= $(DOXYGEN)
++
++default: portlib
++	$(MAKE) -C$(KDIR) M=$(PWD) ARCH=$(ARCH) CROSS_COMPILE=$(CROSS_COMPILE) modules
++
++install: default
++	$(MAKE) -C$(KDIR) M=$(PORTLIB) modules_install
++	$(MAKE) -C$(KDIR) M=$(PWD) modules_install
++
++portlib:
++	$(MAKE) -C$(KDIR) M=$(PORTLIB) ARCH=$(ARCH) CROSS_COMPILE=$(CROSS_COMPILE) modules
++	cp $(PORTLIB)/Module.symvers $(PWD)/
++
++docs:	$(wildcard *.[hc]) doc/doxygen.cfg
++	$(DOXYGEN) doc/doxygen.cfg
++
++tags:	$(wildcard *.[hc])
++	$(CTAGS) -e $(wildcard *.[hc]) $(wildcard linux/*.[hc]) $(wildcard $(KDIR)/include/linux/usb*.h)
++
++
++clean:
++	rm -rf   *.o *.ko .*cmd *.mod.c .tmp_versions Module.symvers
++
++endif
+--- /dev/null
++++ b/drivers/usb/host/dwc_otg/doc/doxygen.cfg
+@@ -0,0 +1,224 @@
++# Doxyfile 1.3.9.1
++
++#---------------------------------------------------------------------------
++# Project related configuration options
++#---------------------------------------------------------------------------
++PROJECT_NAME           = "DesignWare USB 2.0 OTG Controller (DWC_otg) Device Driver"
++PROJECT_NUMBER         = v3.00a
++OUTPUT_DIRECTORY       = ./doc/
++CREATE_SUBDIRS         = NO
++OUTPUT_LANGUAGE        = English
++BRIEF_MEMBER_DESC      = YES
++REPEAT_BRIEF           = YES
++ABBREVIATE_BRIEF       = "The $name class" \
++                         "The $name widget" \
++                         "The $name file" \
++                         is \
++                         provides \
++                         specifies \
++                         contains \
++                         represents \
++                         a \
++                         an \
++                         the
++ALWAYS_DETAILED_SEC    = NO
++INLINE_INHERITED_MEMB  = NO
++FULL_PATH_NAMES        = NO
++STRIP_FROM_PATH        =
++STRIP_FROM_INC_PATH    =
++SHORT_NAMES            = NO
++JAVADOC_AUTOBRIEF      = YES
++MULTILINE_CPP_IS_BRIEF = NO
++INHERIT_DOCS           = YES
++DISTRIBUTE_GROUP_DOC   = NO
++TAB_SIZE               = 8
++ALIASES                =
++OPTIMIZE_OUTPUT_FOR_C  = YES
++OPTIMIZE_OUTPUT_JAVA   = NO
++SUBGROUPING            = YES
++#---------------------------------------------------------------------------
++# Build related configuration options
++#---------------------------------------------------------------------------
++EXTRACT_ALL            = NO
++EXTRACT_PRIVATE        = YES
++EXTRACT_STATIC         = YES
++EXTRACT_LOCAL_CLASSES  = YES
++EXTRACT_LOCAL_METHODS  = NO
++HIDE_UNDOC_MEMBERS     = NO
++HIDE_UNDOC_CLASSES     = NO
++HIDE_FRIEND_COMPOUNDS  = NO
++HIDE_IN_BODY_DOCS      = NO
++INTERNAL_DOCS          = NO
++CASE_SENSE_NAMES       = NO
++HIDE_SCOPE_NAMES       = NO
++SHOW_INCLUDE_FILES     = YES
++INLINE_INFO            = YES
++SORT_MEMBER_DOCS       = NO
++SORT_BRIEF_DOCS        = NO
++SORT_BY_SCOPE_NAME     = NO
++GENERATE_TODOLIST      = YES
++GENERATE_TESTLIST      = YES
++GENERATE_BUGLIST       = YES
++GENERATE_DEPRECATEDLIST= YES
++ENABLED_SECTIONS       =
++MAX_INITIALIZER_LINES  = 30
++SHOW_USED_FILES        = YES
++SHOW_DIRECTORIES       = YES
++#---------------------------------------------------------------------------
++# configuration options related to warning and progress messages
++#---------------------------------------------------------------------------
++QUIET                  = YES
++WARNINGS               = YES
++WARN_IF_UNDOCUMENTED   = NO
++WARN_IF_DOC_ERROR      = YES
++WARN_FORMAT            = "$file:$line: $text"
++WARN_LOGFILE           =
++#---------------------------------------------------------------------------
++# configuration options related to the input files
++#---------------------------------------------------------------------------
++INPUT                  = .
++FILE_PATTERNS          = *.c \
++                         *.h \
++                         ./linux/*.c \
++                         ./linux/*.h
++RECURSIVE              = NO
++EXCLUDE                = ./test/ \
++                         ./dwc_otg/.AppleDouble/
++EXCLUDE_SYMLINKS       = YES
++EXCLUDE_PATTERNS       = *.mod.*
++EXAMPLE_PATH           =
++EXAMPLE_PATTERNS       = *
++EXAMPLE_RECURSIVE      = NO
++IMAGE_PATH             =
++INPUT_FILTER           =
++FILTER_PATTERNS        =
++FILTER_SOURCE_FILES    = NO
++#---------------------------------------------------------------------------
++# configuration options related to source browsing
++#---------------------------------------------------------------------------
++SOURCE_BROWSER         = YES
++INLINE_SOURCES         = NO
++STRIP_CODE_COMMENTS    = YES
++REFERENCED_BY_RELATION = NO
++REFERENCES_RELATION    = NO
++VERBATIM_HEADERS       = NO
++#---------------------------------------------------------------------------
++# configuration options related to the alphabetical class index
++#---------------------------------------------------------------------------
++ALPHABETICAL_INDEX     = NO
++COLS_IN_ALPHA_INDEX    = 5
++IGNORE_PREFIX          =
++#---------------------------------------------------------------------------
++# configuration options related to the HTML output
++#---------------------------------------------------------------------------
++GENERATE_HTML          = YES
++HTML_OUTPUT            = html
++HTML_FILE_EXTENSION    = .html
++HTML_HEADER            =
++HTML_FOOTER            =
++HTML_STYLESHEET        =
++HTML_ALIGN_MEMBERS     = YES
++GENERATE_HTMLHELP      = NO
++CHM_FILE               =
++HHC_LOCATION           =
++GENERATE_CHI           = NO
++BINARY_TOC             = NO
++TOC_EXPAND             = NO
++DISABLE_INDEX          = NO
++ENUM_VALUES_PER_LINE   = 4
++GENERATE_TREEVIEW      = YES
++TREEVIEW_WIDTH         = 250
++#---------------------------------------------------------------------------
++# configuration options related to the LaTeX output
++#---------------------------------------------------------------------------
++GENERATE_LATEX         = NO
++LATEX_OUTPUT           = latex
++LATEX_CMD_NAME         = latex
++MAKEINDEX_CMD_NAME     = makeindex
++COMPACT_LATEX          = NO
++PAPER_TYPE             = a4wide
++EXTRA_PACKAGES         =
++LATEX_HEADER           =
++PDF_HYPERLINKS         = NO
++USE_PDFLATEX           = NO
++LATEX_BATCHMODE        = NO
++LATEX_HIDE_INDICES     = NO
++#---------------------------------------------------------------------------
++# configuration options related to the RTF output
++#---------------------------------------------------------------------------
++GENERATE_RTF           = NO
++RTF_OUTPUT             = rtf
++COMPACT_RTF            = NO
++RTF_HYPERLINKS         = NO
++RTF_STYLESHEET_FILE    =
++RTF_EXTENSIONS_FILE    =
++#---------------------------------------------------------------------------
++# configuration options related to the man page output
++#---------------------------------------------------------------------------
++GENERATE_MAN           = NO
++MAN_OUTPUT             = man
++MAN_EXTENSION          = .3
++MAN_LINKS              = NO
++#---------------------------------------------------------------------------
++# configuration options related to the XML output
++#---------------------------------------------------------------------------
++GENERATE_XML           = NO
++XML_OUTPUT             = xml
++XML_SCHEMA             =
++XML_DTD                =
++XML_PROGRAMLISTING     = YES
++#---------------------------------------------------------------------------
++# configuration options for the AutoGen Definitions output
++#---------------------------------------------------------------------------
++GENERATE_AUTOGEN_DEF   = NO
++#---------------------------------------------------------------------------
++# configuration options related to the Perl module output
++#---------------------------------------------------------------------------
++GENERATE_PERLMOD       = NO
++PERLMOD_LATEX          = NO
++PERLMOD_PRETTY         = YES
++PERLMOD_MAKEVAR_PREFIX =
++#---------------------------------------------------------------------------
++# Configuration options related to the preprocessor
++#---------------------------------------------------------------------------
++ENABLE_PREPROCESSING   = YES
++MACRO_EXPANSION        = YES
++EXPAND_ONLY_PREDEF     = YES
++SEARCH_INCLUDES        = YES
++INCLUDE_PATH           =
++INCLUDE_FILE_PATTERNS  =
++PREDEFINED             = DEVICE_ATTR DWC_EN_ISOC
++EXPAND_AS_DEFINED      = DWC_OTG_DEVICE_ATTR_BITFIELD_SHOW DWC_OTG_DEVICE_ATTR_BITFIELD_STORE DWC_OTG_DEVICE_ATTR_BITFIELD_RW DWC_OTG_DEVICE_ATTR_BITFIELD_RO DWC_OTG_DEVICE_ATTR_REG_SHOW DWC_OTG_DEVICE_ATTR_REG_STORE DWC_OTG_DEVICE_ATTR_REG32_RW DWC_OTG_DEVICE_ATTR_REG32_RO DWC_EN_ISOC
++SKIP_FUNCTION_MACROS   = NO
++#---------------------------------------------------------------------------
++# Configuration::additions related to external references
++#---------------------------------------------------------------------------
++TAGFILES               =
++GENERATE_TAGFILE       =
++ALLEXTERNALS           = NO
++EXTERNAL_GROUPS        = YES
++PERL_PATH              = /usr/bin/perl
++#---------------------------------------------------------------------------
++# Configuration options related to the dot tool
++#---------------------------------------------------------------------------
++CLASS_DIAGRAMS         = YES
++HIDE_UNDOC_RELATIONS   = YES
++HAVE_DOT               = NO
++CLASS_GRAPH            = YES
++COLLABORATION_GRAPH    = YES
++UML_LOOK               = NO
++TEMPLATE_RELATIONS     = NO
++INCLUDE_GRAPH          = YES
++INCLUDED_BY_GRAPH      = YES
++CALL_GRAPH             = NO
++GRAPHICAL_HIERARCHY    = YES
++DOT_IMAGE_FORMAT       = png
++DOT_PATH               =
++DOTFILE_DIRS           =
++MAX_DOT_GRAPH_DEPTH    = 1000
++GENERATE_LEGEND        = YES
++DOT_CLEANUP            = YES
++#---------------------------------------------------------------------------
++# Configuration::additions related to the search engine
++#---------------------------------------------------------------------------
++SEARCHENGINE           = NO
+--- /dev/null
++++ b/drivers/usb/host/dwc_otg/dummy_audio.c
+@@ -0,0 +1,1575 @@
++/*
++ * zero.c -- Gadget Zero, for USB development
++ *
++ * Copyright (C) 2003-2004 David Brownell
++ * All rights reserved.
++ *
++ * Redistribution and use in source and binary forms, with or without
++ * modification, are permitted provided that the following conditions
++ * are met:
++ * 1. Redistributions of source code must retain the above copyright
++ *    notice, this list of conditions, and the following disclaimer,
++ *    without modification.
++ * 2. Redistributions in binary form must reproduce the above copyright
++ *    notice, this list of conditions and the following disclaimer in the
++ *    documentation and/or other materials provided with the distribution.
++ * 3. The names of the above-listed copyright holders may not be used
++ *    to endorse or promote products derived from this software without
++ *    specific prior written permission.
++ *
++ * ALTERNATIVELY, this software may be distributed under the terms of the
++ * GNU General Public License ("GPL") as published by the Free Software
++ * Foundation, either version 2 of that License or (at your option) any
++ * later version.
++ *
++ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
++ * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
++ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
++ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
++ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
++ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
++ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
++ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
++ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
++ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
++ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
++ */
++
++
++/*
++ * Gadget Zero only needs two bulk endpoints, and is an example of how you
++ * can write a hardware-agnostic gadget driver running inside a USB device.
++ *
++ * Hardware details are visible (see CONFIG_USB_ZERO_* below) but don't
++ * affect most of the driver.
++ *
++ * Use it with the Linux host/master side "usbtest" driver to get a basic
++ * functional test of your device-side usb stack, or with "usb-skeleton".
++ *
++ * It supports two similar configurations.  One sinks whatever the usb host
++ * writes, and in return sources zeroes.  The other loops whatever the host
++ * writes back, so the host can read it.  Module options include:
++ *
++ *   buflen=N		default N=4096, buffer size used
++ *   qlen=N		default N=32, how many buffers in the loopback queue
++ *   loopdefault	default false, list loopback config first
++ *
++ * Many drivers will only have one configuration, letting them be much
++ * simpler if they also don't support high speed operation (like this
++ * driver does).
++ */
++
++#include <linux/config.h>
++#include <linux/module.h>
++#include <linux/kernel.h>
++#include <linux/delay.h>
++#include <linux/ioport.h>
++#include <linux/sched.h>
++#include <linux/slab.h>
++#include <linux/smp_lock.h>
++#include <linux/errno.h>
++#include <linux/init.h>
++#include <linux/timer.h>
++#include <linux/list.h>
++#include <linux/interrupt.h>
++#include <linux/uts.h>
++#include <linux/version.h>
++#include <linux/device.h>
++#include <linux/moduleparam.h>
++#include <linux/proc_fs.h>
++
++#include <asm/byteorder.h>
++#include <asm/io.h>
++#include <asm/irq.h>
++#include <asm/system.h>
++#include <asm/unaligned.h>
++
++#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,21)
++# include <linux/usb/ch9.h>
++#else
++# include <linux/usb_ch9.h>
++#endif
++
++#include <linux/usb_gadget.h>
++
++
++/*-------------------------------------------------------------------------*/
++/*-------------------------------------------------------------------------*/
++
++
++static int utf8_to_utf16le(const char *s, u16 *cp, unsigned len)
++{
++	int	count = 0;
++	u8	c;
++	u16	uchar;
++
++	/* this insists on correct encodings, though not minimal ones.
++	 * BUT it currently rejects legit 4-byte UTF-8 code points,
++	 * which need surrogate pairs.  (Unicode 3.1 can use them.)
++	 */
++	while (len != 0 && (c = (u8) *s++) != 0) {
++		if (unlikely(c & 0x80)) {
++			// 2-byte sequence:
++			// 00000yyyyyxxxxxx = 110yyyyy 10xxxxxx
++			if ((c & 0xe0) == 0xc0) {
++				uchar = (c & 0x1f) << 6;
++
++				c = (u8) *s++;
++				if ((c & 0xc0) != 0xc0)
++					goto fail;
++				c &= 0x3f;
++				uchar |= c;
++
++			// 3-byte sequence (most CJKV characters):
++			// zzzzyyyyyyxxxxxx = 1110zzzz 10yyyyyy 10xxxxxx
++			} else if ((c & 0xf0) == 0xe0) {
++				uchar = (c & 0x0f) << 12;
++
++				c = (u8) *s++;
++				if ((c & 0xc0) != 0xc0)
++					goto fail;
++				c &= 0x3f;
++				uchar |= c << 6;
++
++				c = (u8) *s++;
++				if ((c & 0xc0) != 0xc0)
++					goto fail;
++				c &= 0x3f;
++				uchar |= c;
++
++				/* no bogus surrogates */
++				if (0xd800 <= uchar && uchar <= 0xdfff)
++					goto fail;
++
++			// 4-byte sequence (surrogate pairs, currently rare):
++			// 11101110wwwwzzzzyy + 110111yyyyxxxxxx
++			//     = 11110uuu 10uuzzzz 10yyyyyy 10xxxxxx
++			// (uuuuu = wwww + 1)
++			// FIXME accept the surrogate code points (only)
++
++			} else
++				goto fail;
++		} else
++			uchar = c;
++		put_unaligned (cpu_to_le16 (uchar), cp++);
++		count++;
++		len--;
++	}
++	return count;
++fail:
++	return -1;
++}
++
++
++/**
++ * usb_gadget_get_string - fill out a string descriptor
++ * @table: of c strings encoded using UTF-8
++ * @id: string id, from low byte of wValue in get string descriptor
++ * @buf: at least 256 bytes
++ *
++ * Finds the UTF-8 string matching the ID, and converts it into a
++ * string descriptor in utf16-le.
++ * Returns length of descriptor (always even) or negative errno
++ *
++ * If your driver needs strings in multiple languages, you'll probably
++ * "switch (wIndex) { ... }"  in your ep0 string descriptor logic,
++ * using this routine after choosing which set of UTF-8 strings to use.
++ * Note that US-ASCII is a strict subset of UTF-8; any string bytes with
++ * the eighth bit set will be multibyte UTF-8 characters, not ISO-8859/1
++ * characters (which are also widely used in C strings).
++ */
++int
++usb_gadget_get_string (struct usb_gadget_strings *table, int id, u8 *buf)
++{
++	struct usb_string	*s;
++	int			len;
++
++	/* descriptor 0 has the language id */
++	if (id == 0) {
++		buf [0] = 4;
++		buf [1] = USB_DT_STRING;
++		buf [2] = (u8) table->language;
++		buf [3] = (u8) (table->language >> 8);
++		return 4;
++	}
++	for (s = table->strings; s && s->s; s++)
++		if (s->id == id)
++			break;
++
++	/* unrecognized: stall. */
++	if (!s || !s->s)
++		return -EINVAL;
++
++	/* string descriptors have length, tag, then UTF16-LE text */
++	len = min ((size_t) 126, strlen (s->s));
++	memset (buf + 2, 0, 2 * len);	/* zero all the bytes */
++	len = utf8_to_utf16le(s->s, (u16 *)&buf[2], len);
++	if (len < 0)
++		return -EINVAL;
++	buf [0] = (len + 1) * 2;
++	buf [1] = USB_DT_STRING;
++	return buf [0];
++}
++
++
++/*-------------------------------------------------------------------------*/
++/*-------------------------------------------------------------------------*/
++
++
++/**
++ * usb_descriptor_fillbuf - fill buffer with descriptors
++ * @buf: Buffer to be filled
++ * @buflen: Size of buf
++ * @src: Array of descriptor pointers, terminated by null pointer.
++ *
++ * Copies descriptors into the buffer, returning the length or a
++ * negative error code if they can't all be copied.  Useful when
++ * assembling descriptors for an associated set of interfaces used
++ * as part of configuring a composite device; or in other cases where
++ * sets of descriptors need to be marshaled.
++ */
++int
++usb_descriptor_fillbuf(void *buf, unsigned buflen,
++		const struct usb_descriptor_header **src)
++{
++	u8	*dest = buf;
++
++	if (!src)
++		return -EINVAL;
++
++	/* fill buffer from src[] until null descriptor ptr */
++	for (; 0 != *src; src++) {
++		unsigned		len = (*src)->bLength;
++
++		if (len > buflen)
++			return -EINVAL;
++		memcpy(dest, *src, len);
++		buflen -= len;
++		dest += len;
++	}
++	return dest - (u8 *)buf;
++}
++
++
++/**
++ * usb_gadget_config_buf - builds a complete configuration descriptor
++ * @config: Header for the descriptor, including characteristics such
++ *	as power requirements and number of interfaces.
++ * @desc: Null-terminated vector of pointers to the descriptors (interface,
++ *	endpoint, etc) defining all functions in this device configuration.
++ * @buf: Buffer for the resulting configuration descriptor.
++ * @length: Length of buffer.  If this is not big enough to hold the
++ *	entire configuration descriptor, an error code will be returned.
++ *
++ * This copies descriptors into the response buffer, building a descriptor
++ * for that configuration.  It returns the buffer length or a negative
++ * status code.  The config.wTotalLength field is set to match the length
++ * of the result, but other descriptor fields (including power usage and
++ * interface count) must be set by the caller.
++ *
++ * Gadget drivers could use this when constructing a config descriptor
++ * in response to USB_REQ_GET_DESCRIPTOR.  They will need to patch the
++ * resulting bDescriptorType value if USB_DT_OTHER_SPEED_CONFIG is needed.
++ */
++int usb_gadget_config_buf(
++	const struct usb_config_descriptor	*config,
++	void					*buf,
++	unsigned				length,
++	const struct usb_descriptor_header	**desc
++)
++{
++	struct usb_config_descriptor		*cp = buf;
++	int					len;
++
++	/* config descriptor first */
++	if (length < USB_DT_CONFIG_SIZE || !desc)
++		return -EINVAL;
++	*cp = *config;
++
++	/* then interface/endpoint/class/vendor/... */
++	len = usb_descriptor_fillbuf(USB_DT_CONFIG_SIZE + (u8*)buf,
++			length - USB_DT_CONFIG_SIZE, desc);
++	if (len < 0)
++		return len;
++	len += USB_DT_CONFIG_SIZE;
++	if (len > 0xffff)
++		return -EINVAL;
++
++	/* patch up the config descriptor */
++	cp->bLength = USB_DT_CONFIG_SIZE;
++	cp->bDescriptorType = USB_DT_CONFIG;
++	cp->wTotalLength = cpu_to_le16(len);
++	cp->bmAttributes |= USB_CONFIG_ATT_ONE;
++	return len;
++}
++
++/*-------------------------------------------------------------------------*/
++/*-------------------------------------------------------------------------*/
++
++
++#define RBUF_LEN (1024*1024)
++static int rbuf_start;
++static int rbuf_len;
++static __u8 rbuf[RBUF_LEN];
++
++/*-------------------------------------------------------------------------*/
++
++#define DRIVER_VERSION		"St Patrick's Day 2004"
++
++static const char shortname [] = "zero";
++static const char longname [] = "YAMAHA YST-MS35D USB Speaker  ";
++
++static const char source_sink [] = "source and sink data";
++static const char loopback [] = "loop input to output";
++
++/*-------------------------------------------------------------------------*/
++
++/*
++ * driver assumes self-powered hardware, and
++ * has no way for users to trigger remote wakeup.
++ *
++ * this version autoconfigures as much as possible,
++ * which is reasonable for most "bulk-only" drivers.
++ */
++static const char *EP_IN_NAME;		/* source */
++static const char *EP_OUT_NAME;		/* sink */
++
++/*-------------------------------------------------------------------------*/
++
++/* big enough to hold our biggest descriptor */
++#define USB_BUFSIZ	512
++
++struct zero_dev {
++	spinlock_t		lock;
++	struct usb_gadget	*gadget;
++	struct usb_request	*req;		/* for control responses */
++
++	/* when configured, we have one of two configs:
++	 * - source data (in to host) and sink it (out from host)
++	 * - or loop it back (out from host back in to host)
++	 */
++	u8			config;
++	struct usb_ep		*in_ep, *out_ep;
++
++	/* autoresume timer */
++	struct timer_list	resume;
++};
++
++#define xprintk(d,level,fmt,args...) \
++	dev_printk(level , &(d)->gadget->dev , fmt , ## args)
++
++#ifdef DEBUG
++#define DBG(dev,fmt,args...) \
++	xprintk(dev , KERN_DEBUG , fmt , ## args)
++#else
++#define DBG(dev,fmt,args...) \
++	do { } while (0)
++#endif /* DEBUG */
++
++#ifdef VERBOSE
++#define VDBG	DBG
++#else
++#define VDBG(dev,fmt,args...) \
++	do { } while (0)
++#endif /* VERBOSE */
++
++#define ERROR(dev,fmt,args...) \
++	xprintk(dev , KERN_ERR , fmt , ## args)
++#define WARN(dev,fmt,args...) \
++	xprintk(dev , KERN_WARNING , fmt , ## args)
++#define INFO(dev,fmt,args...) \
++	xprintk(dev , KERN_INFO , fmt , ## args)
++
++/*-------------------------------------------------------------------------*/
++
++static unsigned buflen = 4096;
++static unsigned qlen = 32;
++static unsigned pattern = 0;
++
++module_param (buflen, uint, S_IRUGO|S_IWUSR);
++module_param (qlen, uint, S_IRUGO|S_IWUSR);
++module_param (pattern, uint, S_IRUGO|S_IWUSR);
++
++/*
++ * if it's nonzero, autoresume says how many seconds to wait
++ * before trying to wake up the host after suspend.
++ */
++static unsigned autoresume = 0;
++module_param (autoresume, uint, 0);
++
++/*
++ * Normally the "loopback" configuration is second (index 1) so
++ * it's not the default.  Here's where to change that order, to
++ * work better with hosts where config changes are problematic.
++ * Or controllers (like superh) that only support one config.
++ */
++static int loopdefault = 0;
++
++module_param (loopdefault, bool, S_IRUGO|S_IWUSR);
++
++/*-------------------------------------------------------------------------*/
++
++/* Thanks to NetChip Technologies for donating this product ID.
++ *
++ * DO NOT REUSE THESE IDs with a protocol-incompatible driver!!  Ever!!
++ * Instead:  allocate your own, using normal USB-IF procedures.
++ */
++#ifndef	CONFIG_USB_ZERO_HNPTEST
++#define DRIVER_VENDOR_NUM	0x0525		/* NetChip */
++#define DRIVER_PRODUCT_NUM	0xa4a0		/* Linux-USB "Gadget Zero" */
++#else
++#define DRIVER_VENDOR_NUM	0x1a0a		/* OTG test device IDs */
++#define DRIVER_PRODUCT_NUM	0xbadd
++#endif
++
++/*-------------------------------------------------------------------------*/
++
++/*
++ * DESCRIPTORS ... most are static, but strings and (full)
++ * configuration descriptors are built on demand.
++ */
++
++/*
++#define STRING_MANUFACTURER		25
++#define STRING_PRODUCT			42
++#define STRING_SERIAL			101
++*/
++#define STRING_MANUFACTURER		1
++#define STRING_PRODUCT			2
++#define STRING_SERIAL			3
++
++#define STRING_SOURCE_SINK		250
++#define STRING_LOOPBACK			251
++
++/*
++ * This device advertises two configurations; these numbers work
++ * on a pxa250 as well as more flexible hardware.
++ */
++#define	CONFIG_SOURCE_SINK	3
++#define	CONFIG_LOOPBACK		2
++
++/*
++static struct usb_device_descriptor
++device_desc = {
++	.bLength =		sizeof device_desc,
++	.bDescriptorType =	USB_DT_DEVICE,
++
++	.bcdUSB =		__constant_cpu_to_le16 (0x0200),
++	.bDeviceClass =		USB_CLASS_VENDOR_SPEC,
++
++	.idVendor =		__constant_cpu_to_le16 (DRIVER_VENDOR_NUM),
++	.idProduct =		__constant_cpu_to_le16 (DRIVER_PRODUCT_NUM),
++	.iManufacturer =	STRING_MANUFACTURER,
++	.iProduct =		STRING_PRODUCT,
++	.iSerialNumber =	STRING_SERIAL,
++	.bNumConfigurations =	2,
++};
++*/
++static struct usb_device_descriptor
++device_desc = {
++	.bLength =		sizeof device_desc,
++	.bDescriptorType =	USB_DT_DEVICE,
++	.bcdUSB =		__constant_cpu_to_le16 (0x0100),
++	.bDeviceClass =		USB_CLASS_PER_INTERFACE,
++	.bDeviceSubClass =      0,
++	.bDeviceProtocol =      0,
++	.bMaxPacketSize0 =      64,
++	.bcdDevice =            __constant_cpu_to_le16 (0x0100),
++	.idVendor =		__constant_cpu_to_le16 (0x0499),
++	.idProduct =		__constant_cpu_to_le16 (0x3002),
++	.iManufacturer =	STRING_MANUFACTURER,
++	.iProduct =		STRING_PRODUCT,
++	.iSerialNumber =	STRING_SERIAL,
++	.bNumConfigurations =	1,
++};
++
++static struct usb_config_descriptor
++z_config = {
++	.bLength =		sizeof z_config,
++	.bDescriptorType =	USB_DT_CONFIG,
++
++	/* compute wTotalLength on the fly */
++	.bNumInterfaces =	2,
++	.bConfigurationValue =	1,
++	.iConfiguration =	0,
++	.bmAttributes =		0x40,
++	.bMaxPower =		0,	/* self-powered */
++};
++
++
++static struct usb_otg_descriptor
++otg_descriptor = {
++	.bLength =		sizeof otg_descriptor,
++	.bDescriptorType =	USB_DT_OTG,
++
++	.bmAttributes =		USB_OTG_SRP,
++};
++
++/* one interface in each configuration */
++#ifdef	CONFIG_USB_GADGET_DUALSPEED
++
++/*
++ * usb 2.0 devices need to expose both high speed and full speed
++ * descriptors, unless they only run at full speed.
++ *
++ * that means alternate endpoint descriptors (bigger packets)
++ * and a "device qualifier" ... plus more construction options
++ * for the config descriptor.
++ */
++
++static struct usb_qualifier_descriptor
++dev_qualifier = {
++	.bLength =		sizeof dev_qualifier,
++	.bDescriptorType =	USB_DT_DEVICE_QUALIFIER,
++
++	.bcdUSB =		__constant_cpu_to_le16 (0x0200),
++	.bDeviceClass =		USB_CLASS_VENDOR_SPEC,
++
++	.bNumConfigurations =	2,
++};
++
++
++struct usb_cs_as_general_descriptor {
++	__u8  bLength;
++	__u8  bDescriptorType;
++
++	__u8  bDescriptorSubType;
++	__u8  bTerminalLink;
++	__u8  bDelay;
++	__u16  wFormatTag;
++} __attribute__ ((packed));
++
++struct usb_cs_as_format_descriptor {
++	__u8  bLength;
++	__u8  bDescriptorType;
++
++	__u8  bDescriptorSubType;
++	__u8  bFormatType;
++	__u8  bNrChannels;
++	__u8  bSubframeSize;
++	__u8  bBitResolution;
++	__u8  bSamfreqType;
++	__u8  tLowerSamFreq[3];
++	__u8  tUpperSamFreq[3];
++} __attribute__ ((packed));
++
++static const struct usb_interface_descriptor
++z_audio_control_if_desc = {
++	.bLength =		sizeof z_audio_control_if_desc,
++	.bDescriptorType =	USB_DT_INTERFACE,
++	.bInterfaceNumber = 0,
++	.bAlternateSetting = 0,
++	.bNumEndpoints = 0,
++	.bInterfaceClass = USB_CLASS_AUDIO,
++	.bInterfaceSubClass = 0x1,
++	.bInterfaceProtocol = 0,
++	.iInterface = 0,
++};
++
++static const struct usb_interface_descriptor
++z_audio_if_desc = {
++	.bLength =		sizeof z_audio_if_desc,
++	.bDescriptorType =	USB_DT_INTERFACE,
++	.bInterfaceNumber = 1,
++	.bAlternateSetting = 0,
++	.bNumEndpoints = 0,
++	.bInterfaceClass = USB_CLASS_AUDIO,
++	.bInterfaceSubClass = 0x2,
++	.bInterfaceProtocol = 0,
++	.iInterface = 0,
++};
++
++static const struct usb_interface_descriptor
++z_audio_if_desc2 = {
++	.bLength =		sizeof z_audio_if_desc,
++	.bDescriptorType =	USB_DT_INTERFACE,
++	.bInterfaceNumber = 1,
++	.bAlternateSetting = 1,
++	.bNumEndpoints = 1,
++	.bInterfaceClass = USB_CLASS_AUDIO,
++	.bInterfaceSubClass = 0x2,
++	.bInterfaceProtocol = 0,
++	.iInterface = 0,
++};
++
++static const struct usb_cs_as_general_descriptor
++z_audio_cs_as_if_desc = {
++	.bLength = 7,
++	.bDescriptorType = 0x24,
++
++	.bDescriptorSubType = 0x01,
++	.bTerminalLink = 0x01,
++	.bDelay = 0x0,
++	.wFormatTag = __constant_cpu_to_le16 (0x0001)
++};
++
++
++static const struct usb_cs_as_format_descriptor
++z_audio_cs_as_format_desc = {
++	.bLength = 0xe,
++	.bDescriptorType = 0x24,
++
++	.bDescriptorSubType = 2,
++	.bFormatType = 1,
++	.bNrChannels = 1,
++	.bSubframeSize = 1,
++	.bBitResolution = 8,
++	.bSamfreqType = 0,
++	.tLowerSamFreq = {0x7e, 0x13, 0x00},
++	.tUpperSamFreq = {0xe2, 0xd6, 0x00},
++};
++
++static const struct usb_endpoint_descriptor
++z_iso_ep = {
++	.bLength = 0x09,
++	.bDescriptorType = 0x05,
++	.bEndpointAddress = 0x04,
++	.bmAttributes = 0x09,
++	.wMaxPacketSize = 0x0038,
++	.bInterval = 0x01,
++	.bRefresh = 0x00,
++	.bSynchAddress = 0x00,
++};
++
++static char z_iso_ep2[] = {0x07, 0x25, 0x01, 0x00, 0x02, 0x00, 0x02};
++
++// 9 bytes
++static char z_ac_interface_header_desc[] =
++{ 0x09, 0x24, 0x01, 0x00, 0x01, 0x2b, 0x00, 0x01, 0x01 };
++
++// 12 bytes
++static char z_0[] = {0x0c, 0x24, 0x02, 0x01, 0x01, 0x01, 0x00, 0x02,
++		     0x03, 0x00, 0x00, 0x00};
++// 13 bytes
++static char z_1[] = {0x0d, 0x24, 0x06, 0x02, 0x01, 0x02, 0x15, 0x00,
++		     0x02, 0x00, 0x02, 0x00, 0x00};
++// 9 bytes
++static char z_2[] = {0x09, 0x24, 0x03, 0x03, 0x01, 0x03, 0x00, 0x02,
++		     0x00};
++
++static char za_0[] = {0x09, 0x04, 0x01, 0x02, 0x01, 0x01, 0x02, 0x00,
++		      0x00};
++
++static char za_1[] = {0x07, 0x24, 0x01, 0x01, 0x00, 0x01, 0x00};
++
++static char za_2[] = {0x0e, 0x24, 0x02, 0x01, 0x02, 0x01, 0x08, 0x00,
++		      0x7e, 0x13, 0x00, 0xe2, 0xd6, 0x00};
++
++static char za_3[] = {0x09, 0x05, 0x04, 0x09, 0x70, 0x00, 0x01, 0x00,
++		      0x00};
++
++static char za_4[] = {0x07, 0x25, 0x01, 0x00, 0x02, 0x00, 0x02};
++
++static char za_5[] = {0x09, 0x04, 0x01, 0x03, 0x01, 0x01, 0x02, 0x00,
++		      0x00};
++
++static char za_6[] = {0x07, 0x24, 0x01, 0x01, 0x00, 0x01, 0x00};
++
++static char za_7[] = {0x0e, 0x24, 0x02, 0x01, 0x01, 0x02, 0x10, 0x00,
++		      0x7e, 0x13, 0x00, 0xe2, 0xd6, 0x00};
++
++static char za_8[] = {0x09, 0x05, 0x04, 0x09, 0x70, 0x00, 0x01, 0x00,
++		      0x00};
++
++static char za_9[] = {0x07, 0x25, 0x01, 0x00, 0x02, 0x00, 0x02};
++
++static char za_10[] = {0x09, 0x04, 0x01, 0x04, 0x01, 0x01, 0x02, 0x00,
++		       0x00};
++
++static char za_11[] = {0x07, 0x24, 0x01, 0x01, 0x00, 0x01, 0x00};
++
++static char za_12[] = {0x0e, 0x24, 0x02, 0x01, 0x02, 0x02, 0x10, 0x00,
++		       0x73, 0x13, 0x00, 0xe2, 0xd6, 0x00};
++
++static char za_13[] = {0x09, 0x05, 0x04, 0x09, 0xe0, 0x00, 0x01, 0x00,
++		       0x00};
++
++static char za_14[] = {0x07, 0x25, 0x01, 0x00, 0x02, 0x00, 0x02};
++
++static char za_15[] = {0x09, 0x04, 0x01, 0x05, 0x01, 0x01, 0x02, 0x00,
++		       0x00};
++
++static char za_16[] = {0x07, 0x24, 0x01, 0x01, 0x00, 0x01, 0x00};
++
++static char za_17[] = {0x0e, 0x24, 0x02, 0x01, 0x01, 0x03, 0x14, 0x00,
++		       0x7e, 0x13, 0x00, 0xe2, 0xd6, 0x00};
++
++static char za_18[] = {0x09, 0x05, 0x04, 0x09, 0xa8, 0x00, 0x01, 0x00,
++		       0x00};
++
++static char za_19[] = {0x07, 0x25, 0x01, 0x00, 0x02, 0x00, 0x02};
++
++static char za_20[] = {0x09, 0x04, 0x01, 0x06, 0x01, 0x01, 0x02, 0x00,
++		       0x00};
++
++static char za_21[] = {0x07, 0x24, 0x01, 0x01, 0x00, 0x01, 0x00};
++
++static char za_22[] = {0x0e, 0x24, 0x02, 0x01, 0x02, 0x03, 0x14, 0x00,
++		       0x7e, 0x13, 0x00, 0xe2, 0xd6, 0x00};
++
++static char za_23[] = {0x09, 0x05, 0x04, 0x09, 0x50, 0x01, 0x01, 0x00,
++		       0x00};
++
++static char za_24[] = {0x07, 0x25, 0x01, 0x00, 0x02, 0x00, 0x02};
++
++
++
++static const struct usb_descriptor_header *z_function [] = {
++	(struct usb_descriptor_header *) &z_audio_control_if_desc,
++	(struct usb_descriptor_header *) &z_ac_interface_header_desc,
++	(struct usb_descriptor_header *) &z_0,
++	(struct usb_descriptor_header *) &z_1,
++	(struct usb_descriptor_header *) &z_2,
++	(struct usb_descriptor_header *) &z_audio_if_desc,
++	(struct usb_descriptor_header *) &z_audio_if_desc2,
++	(struct usb_descriptor_header *) &z_audio_cs_as_if_desc,
++	(struct usb_descriptor_header *) &z_audio_cs_as_format_desc,
++	(struct usb_descriptor_header *) &z_iso_ep,
++	(struct usb_descriptor_header *) &z_iso_ep2,
++	(struct usb_descriptor_header *) &za_0,
++	(struct usb_descriptor_header *) &za_1,
++	(struct usb_descriptor_header *) &za_2,
++	(struct usb_descriptor_header *) &za_3,
++	(struct usb_descriptor_header *) &za_4,
++	(struct usb_descriptor_header *) &za_5,
++	(struct usb_descriptor_header *) &za_6,
++	(struct usb_descriptor_header *) &za_7,
++	(struct usb_descriptor_header *) &za_8,
++	(struct usb_descriptor_header *) &za_9,
++	(struct usb_descriptor_header *) &za_10,
++	(struct usb_descriptor_header *) &za_11,
++	(struct usb_descriptor_header *) &za_12,
++	(struct usb_descriptor_header *) &za_13,
++	(struct usb_descriptor_header *) &za_14,
++	(struct usb_descriptor_header *) &za_15,
++	(struct usb_descriptor_header *) &za_16,
++	(struct usb_descriptor_header *) &za_17,
++	(struct usb_descriptor_header *) &za_18,
++	(struct usb_descriptor_header *) &za_19,
++	(struct usb_descriptor_header *) &za_20,
++	(struct usb_descriptor_header *) &za_21,
++	(struct usb_descriptor_header *) &za_22,
++	(struct usb_descriptor_header *) &za_23,
++	(struct usb_descriptor_header *) &za_24,
++	NULL,
++};
++
++/* maxpacket and other transfer characteristics vary by speed. */
++#define ep_desc(g,hs,fs) (((g)->speed==USB_SPEED_HIGH)?(hs):(fs))
++
++#else
++
++/* if there's no high speed support, maxpacket doesn't change. */
++#define ep_desc(g,hs,fs) fs
++
++#endif	/* !CONFIG_USB_GADGET_DUALSPEED */
++
++static char				manufacturer [40];
++//static char				serial [40];
++static char				serial [] = "Ser 00 em";
++
++/* static strings, in UTF-8 */
++static struct usb_string		strings [] = {
++	{ STRING_MANUFACTURER, manufacturer, },
++	{ STRING_PRODUCT, longname, },
++	{ STRING_SERIAL, serial, },
++	{ STRING_LOOPBACK, loopback, },
++	{ STRING_SOURCE_SINK, source_sink, },
++	{  }			/* end of list */
++};
++
++static struct usb_gadget_strings	stringtab = {
++	.language	= 0x0409,	/* en-us */
++	.strings	= strings,
++};
++
++/*
++ * config descriptors are also handcrafted.  these must agree with code
++ * that sets configurations, and with code managing interfaces and their
++ * altsettings.  other complexity may come from:
++ *
++ *  - high speed support, including "other speed config" rules
++ *  - multiple configurations
++ *  - interfaces with alternate settings
++ *  - embedded class or vendor-specific descriptors
++ *
++ * this handles high speed, and has a second config that could as easily
++ * have been an alternate interface setting (on most hardware).
++ *
++ * NOTE:  to demonstrate (and test) more USB capabilities, this driver
++ * should include an altsetting to test interrupt transfers, including
++ * high bandwidth modes at high speed.  (Maybe work like Intel's test
++ * device?)
++ */
++static int
++config_buf (struct usb_gadget *gadget, u8 *buf, u8 type, unsigned index)
++{
++	int len;
++	const struct usb_descriptor_header **function;
++
++	function = z_function;
++	len = usb_gadget_config_buf (&z_config, buf, USB_BUFSIZ, function);
++	if (len < 0)
++		return len;
++	((struct usb_config_descriptor *) buf)->bDescriptorType = type;
++	return len;
++}
++
++/*-------------------------------------------------------------------------*/
++
++static struct usb_request *
++alloc_ep_req (struct usb_ep *ep, unsigned length)
++{
++	struct usb_request	*req;
++
++	req = usb_ep_alloc_request (ep, GFP_ATOMIC);
++	if (req) {
++		req->length = length;
++		req->buf = usb_ep_alloc_buffer (ep, length,
++				&req->dma, GFP_ATOMIC);
++		if (!req->buf) {
++			usb_ep_free_request (ep, req);
++			req = NULL;
++		}
++	}
++	return req;
++}
++
++static void free_ep_req (struct usb_ep *ep, struct usb_request *req)
++{
++	if (req->buf)
++		usb_ep_free_buffer (ep, req->buf, req->dma, req->length);
++	usb_ep_free_request (ep, req);
++}
++
++/*-------------------------------------------------------------------------*/
++
++/* optionally require specific source/sink data patterns  */
++
++static int
++check_read_data (
++	struct zero_dev		*dev,
++	struct usb_ep		*ep,
++	struct usb_request	*req
++)
++{
++	unsigned	i;
++	u8		*buf = req->buf;
++
++	for (i = 0; i < req->actual; i++, buf++) {
++		switch (pattern) {
++		/* all-zeroes has no synchronization issues */
++		case 0:
++			if (*buf == 0)
++				continue;
++			break;
++		/* mod63 stays in sync with short-terminated transfers,
++		 * or otherwise when host and gadget agree on how large
++		 * each usb transfer request should be.  resync is done
++		 * with set_interface or set_config.
++		 */
++		case 1:
++			if (*buf == (u8)(i % 63))
++				continue;
++			break;
++		}
++		ERROR (dev, "bad OUT byte, buf [%d] = %d\n", i, *buf);
++		usb_ep_set_halt (ep);
++		return -EINVAL;
++	}
++	return 0;
++}
++
++/*-------------------------------------------------------------------------*/
++
++static void zero_reset_config (struct zero_dev *dev)
++{
++	if (dev->config == 0)
++		return;
++
++	DBG (dev, "reset config\n");
++
++	/* just disable endpoints, forcing completion of pending i/o.
++	 * all our completion handlers free their requests in this case.
++	 */
++	if (dev->in_ep) {
++		usb_ep_disable (dev->in_ep);
++		dev->in_ep = NULL;
++	}
++	if (dev->out_ep) {
++		usb_ep_disable (dev->out_ep);
++		dev->out_ep = NULL;
++	}
++	dev->config = 0;
++	del_timer (&dev->resume);
++}
++
++#define _write(f, buf, sz) (f->f_op->write(f, buf, sz, &f->f_pos))
++
++static void
++zero_isoc_complete (struct usb_ep *ep, struct usb_request *req)
++{
++	struct zero_dev	*dev = ep->driver_data;
++	int		status = req->status;
++	int i, j;
++
++	switch (status) {
++
++	case 0: 			/* normal completion? */
++		//printk ("\nzero ---------------> isoc normal completion %d bytes\n", req->actual);
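++		/* Copy the received isochronous bytes into the circular
++		 * capture buffer: rbuf_start is the write index and wraps
++		 * at RBUF_LEN, while rbuf_len saturates at RBUF_LEN.
++		 */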
++		for (i=0, j=rbuf_start; i<req->actual; i++) {
++			//printk ("%02x ", ((__u8*)req->buf)[i]);
++			rbuf[j] = ((__u8*)req->buf)[i];
++			j++;
++			if (j >= RBUF_LEN) j=0;
++		}
++		rbuf_start = j;
++		//printk ("\n\n");
++
++		if (rbuf_len < RBUF_LEN) {
++			rbuf_len += req->actual;
++			if (rbuf_len > RBUF_LEN) {
++				rbuf_len = RBUF_LEN;
++			}
++		}
++
++		break;
++
++	/* this endpoint is normally active while we're configured */
++	case -ECONNABORTED: 		/* hardware forced ep reset */
++	case -ECONNRESET:		/* request dequeued */
++	case -ESHUTDOWN:		/* disconnect from host */
++		VDBG (dev, "%s gone (%d), %d/%d\n", ep->name, status,
++				req->actual, req->length);
++		if (ep == dev->out_ep)
++			check_read_data (dev, ep, req);
++		free_ep_req (ep, req);
++		return;
++
++	case -EOVERFLOW:		/* buffer overrun on read means that
++					 * we didn't provide a big enough
++					 * buffer.
++					 */
++	default:
++#if 1
++		DBG (dev, "%s complete --> %d, %d/%d\n", ep->name,
++				status, req->actual, req->length);
++#endif
++	case -EREMOTEIO:		/* short read */
++		break;
++	}
++
++	status = usb_ep_queue (ep, req, GFP_ATOMIC);
++	if (status) {
++		ERROR (dev, "kill %s:  resubmit %d bytes --> %d\n",
++				ep->name, req->length, status);
++		usb_ep_set_halt (ep);
++		/* FIXME recover later ... somehow */
++	}
++}
++
++static struct usb_request *
++zero_start_isoc_ep (struct usb_ep *ep, int gfp_flags)
++{
++	struct usb_request	*req;
++	int			status;
++
++	req = alloc_ep_req (ep, 512);
++	if (!req)
++		return NULL;
++
++	req->complete = zero_isoc_complete;
++
++	status = usb_ep_queue (ep, req, gfp_flags);
++	if (status) {
++		struct zero_dev	*dev = ep->driver_data;
++
++		ERROR (dev, "start %s --> %d\n", ep->name, status);
++		free_ep_req (ep, req);
++		req = NULL;
++	}
++
++	return req;
++}
++
++/* change our operational config.  this code must agree with the code
++ * that returns config descriptors, and altsetting code.
++ *
++ * it's also responsible for power management interactions. some
++ * configurations might not work with our current power sources.
++ *
++ * note that some device controller hardware will constrain what this
++ * code can do, perhaps by disallowing more than one configuration or
++ * by limiting configuration choices (like the pxa2xx).
++ */
++static int
++zero_set_config (struct zero_dev *dev, unsigned number, int gfp_flags)
++{
++	int			result = 0;
++	struct usb_gadget	*gadget = dev->gadget;
++	const struct usb_endpoint_descriptor	*d;
++	struct usb_ep		*ep;
++
++	if (number == dev->config)
++		return 0;
++
++	zero_reset_config (dev);
++
++	gadget_for_each_ep (ep, gadget) {
++
++		if (strcmp (ep->name, "ep4") == 0) {
++
++			d = (struct usb_endpoint_descriptor *)&za_23; // isoc ep desc for audio i/f alt setting 6
++			result = usb_ep_enable (ep, d);
++
++			if (result == 0) {
++				ep->driver_data = dev;
++				dev->in_ep = ep;
++
++				if (zero_start_isoc_ep (ep, gfp_flags) != 0) {
++
++					dev->in_ep = ep;
++					continue;
++				}
++
++				usb_ep_disable (ep);
++				result = -EIO;
++			}
++		}
++
++	}
++
++	dev->config = number;
++	return result;
++}
++
++/*-------------------------------------------------------------------------*/
++
++static void zero_setup_complete (struct usb_ep *ep, struct usb_request *req)
++{
++	if (req->status || req->actual != req->length)
++		DBG ((struct zero_dev *) ep->driver_data,
++				"setup complete --> %d, %d/%d\n",
++				req->status, req->actual, req->length);
++}
++
++/*
++ * The setup() callback implements all the ep0 functionality that's
++ * not handled lower down, in hardware or the hardware driver (like
++ * device and endpoint feature flags, and their status).  It's all
++ * housekeeping for the gadget function we're implementing.  Most of
++ * the work is in config-specific setup.
++ */
++static int
++zero_setup (struct usb_gadget *gadget, const struct usb_ctrlrequest *ctrl)
++{
++	struct zero_dev		*dev = get_gadget_data (gadget);
++	struct usb_request	*req = dev->req;
++	int			value = -EOPNOTSUPP;
++
++	/* usually this stores reply data in the pre-allocated ep0 buffer,
++	 * but config change events will reconfigure hardware.
++	 */
++	req->zero = 0;
++	switch (ctrl->bRequest) {
++
++	case USB_REQ_GET_DESCRIPTOR:
++
++		switch (ctrl->wValue >> 8) {
++
++		case USB_DT_DEVICE:
++			value = min (ctrl->wLength, (u16) sizeof device_desc);
++			memcpy (req->buf, &device_desc, value);
++			break;
++#ifdef CONFIG_USB_GADGET_DUALSPEED
++		case USB_DT_DEVICE_QUALIFIER:
++			if (!gadget->is_dualspeed)
++				break;
++			value = min (ctrl->wLength, (u16) sizeof dev_qualifier);
++			memcpy (req->buf, &dev_qualifier, value);
++			break;
++
++		case USB_DT_OTHER_SPEED_CONFIG:
++			if (!gadget->is_dualspeed)
++				break;
++			// FALLTHROUGH
++#endif /* CONFIG_USB_GADGET_DUALSPEED */
++		case USB_DT_CONFIG:
++			value = config_buf (gadget, req->buf,
++					ctrl->wValue >> 8,
++					ctrl->wValue & 0xff);
++			if (value >= 0)
++				value = min (ctrl->wLength, (u16) value);
++			break;
++
++		case USB_DT_STRING:
++			/* wIndex == language code.
++			 * this driver only handles one language, you can
++			 * add string tables for other languages, using
++			 * any UTF-8 characters
++			 */
++			value = usb_gadget_get_string (&stringtab,
++					ctrl->wValue & 0xff, req->buf);
++			if (value >= 0) {
++				value = min (ctrl->wLength, (u16) value);
++			}
++			break;
++		}
++		break;
++
++	/* currently two configs, two speeds */
++	case USB_REQ_SET_CONFIGURATION:
++		if (ctrl->bRequestType != 0)
++			goto unknown;
++
++		spin_lock (&dev->lock);
++		value = zero_set_config (dev, ctrl->wValue, GFP_ATOMIC);
++		spin_unlock (&dev->lock);
++		break;
++	case USB_REQ_GET_CONFIGURATION:
++		if (ctrl->bRequestType != USB_DIR_IN)
++			goto unknown;
++		*(u8 *)req->buf = dev->config;
++		value = min (ctrl->wLength, (u16) 1);
++		break;
++
++	/* until we add altsetting support, or other interfaces,
++	 * only 0/0 are possible.  pxa2xx only supports 0/0 (poorly)
++	 * and already killed pending endpoint I/O.
++	 */
++	case USB_REQ_SET_INTERFACE:
++
++		if (ctrl->bRequestType != USB_RECIP_INTERFACE)
++			goto unknown;
++		spin_lock (&dev->lock);
++		if (dev->config) {
++			u8		config = dev->config;
++
++			/* resets interface configuration, forgets about
++			 * previous transaction state (queued bufs, etc)
++			 * and re-inits endpoint state (toggle etc)
++			 * no response queued, just zero status == success.
++			 * if we had more than one interface we couldn't
++			 * use this "reset the config" shortcut.
++			 */
++			zero_reset_config (dev);
++			zero_set_config (dev, config, GFP_ATOMIC);
++			value = 0;
++		}
++		spin_unlock (&dev->lock);
++		break;
++	case USB_REQ_GET_INTERFACE:
++		if ((ctrl->bRequestType == 0x21) && (ctrl->wIndex == 0x02)) {
++			value = ctrl->wLength;
++			break;
++		}
++		else {
++			if (ctrl->bRequestType != (USB_DIR_IN|USB_RECIP_INTERFACE))
++				goto unknown;
++			if (!dev->config)
++				break;
++			if (ctrl->wIndex != 0) {
++				value = -EDOM;
++				break;
++			}
++			*(u8 *)req->buf = 0;
++			value = min (ctrl->wLength, (u16) 1);
++		}
++		break;
++
++	/*
++	 * These are the same vendor-specific requests supported by
++	 * Intel's USB 2.0 compliance test devices.  We exceed that
++	 * device spec by allowing multiple-packet requests.
++	 */
++	case 0x5b:	/* control WRITE test -- fill the buffer */
++		if (ctrl->bRequestType != (USB_DIR_OUT|USB_TYPE_VENDOR))
++			goto unknown;
++		if (ctrl->wValue || ctrl->wIndex)
++			break;
++		/* just read that many bytes into the buffer */
++		if (ctrl->wLength > USB_BUFSIZ)
++			break;
++		value = ctrl->wLength;
++		break;
++	case 0x5c:	/* control READ test -- return the buffer */
++		if (ctrl->bRequestType != (USB_DIR_IN|USB_TYPE_VENDOR))
++			goto unknown;
++		if (ctrl->wValue || ctrl->wIndex)
++			break;
++		/* expect those bytes are still in the buffer; send back */
++		if (ctrl->wLength > USB_BUFSIZ
++				|| ctrl->wLength != req->length)
++			break;
++		value = ctrl->wLength;
++		break;
++
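++	/* USB audio class-specific requests: the SET_xxx codes (0x01-0x05)
++	 * are simply acknowledged, and GET_CUR/MIN/MAX/RES/MEM (0x81-0x85)
++	 * return fixed canned values instead of reflecting real audio
++	 * controls.
++	 */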
++	case 0x01: // SET_CUR
++	case 0x02:
++	case 0x03:
++	case 0x04:
++	case 0x05:
++		value = ctrl->wLength;
++		break;
++	case 0x81:
++		switch (ctrl->wValue) {
++		case 0x0201:
++		case 0x0202:
++			((u8*)req->buf)[0] = 0x00;
++			((u8*)req->buf)[1] = 0xe3;
++			break;
++		case 0x0300:
++		case 0x0500:
++			((u8*)req->buf)[0] = 0x00;
++			break;
++		}
++		//((u8*)req->buf)[0] = 0x81;
++		//((u8*)req->buf)[1] = 0x81;
++		value = ctrl->wLength;
++		break;
++	case 0x82:
++		switch (ctrl->wValue) {
++		case 0x0201:
++		case 0x0202:
++			((u8*)req->buf)[0] = 0x00;
++			((u8*)req->buf)[1] = 0xc3;
++			break;
++		case 0x0300:
++		case 0x0500:
++			((u8*)req->buf)[0] = 0x00;
++			break;
++		}
++		//((u8*)req->buf)[0] = 0x82;
++		//((u8*)req->buf)[1] = 0x82;
++		value = ctrl->wLength;
++		break;
++	case 0x83:
++		switch (ctrl->wValue) {
++		case 0x0201:
++		case 0x0202:
++			((u8*)req->buf)[0] = 0x00;
++			((u8*)req->buf)[1] = 0x00;
++			break;
++		case 0x0300:
++			((u8*)req->buf)[0] = 0x60;
++			break;
++		case 0x0500:
++			((u8*)req->buf)[0] = 0x18;
++			break;
++		}
++		//((u8*)req->buf)[0] = 0x83;
++		//((u8*)req->buf)[1] = 0x83;
++		value = ctrl->wLength;
++		break;
++	case 0x84:
++		switch (ctrl->wValue) {
++		case 0x0201:
++		case 0x0202:
++			((u8*)req->buf)[0] = 0x00;
++			((u8*)req->buf)[1] = 0x01;
++			break;
++		case 0x0300:
++		case 0x0500:
++			((u8*)req->buf)[0] = 0x08;
++			break;
++		}
++		//((u8*)req->buf)[0] = 0x84;
++		//((u8*)req->buf)[1] = 0x84;
++		value = ctrl->wLength;
++		break;
++	case 0x85:
++		((u8*)req->buf)[0] = 0x85;
++		((u8*)req->buf)[1] = 0x85;
++		value = ctrl->wLength;
++		break;
++
++
++	default:
++unknown:
++		printk("unknown control req%02x.%02x v%04x i%04x l%d\n",
++			ctrl->bRequestType, ctrl->bRequest,
++			ctrl->wValue, ctrl->wIndex, ctrl->wLength);
++	}
++
++	/* respond with data transfer before status phase? */
++	if (value >= 0) {
++		req->length = value;
++		req->zero = value < ctrl->wLength
++				&& (value % gadget->ep0->maxpacket) == 0;
++		value = usb_ep_queue (gadget->ep0, req, GFP_ATOMIC);
++		if (value < 0) {
++			DBG (dev, "ep_queue < 0 --> %d\n", value);
++			req->status = 0;
++			zero_setup_complete (gadget->ep0, req);
++		}
++	}
++
++	/* device either stalls (value < 0) or reports success */
++	return value;
++}
++
++static void
++zero_disconnect (struct usb_gadget *gadget)
++{
++	struct zero_dev		*dev = get_gadget_data (gadget);
++	unsigned long		flags;
++
++	spin_lock_irqsave (&dev->lock, flags);
++	zero_reset_config (dev);
++
++	/* a more significant application might have some non-usb
++	 * activities to quiesce here, saving resources like power
++	 * or pushing the notification up a network stack.
++	 */
++	spin_unlock_irqrestore (&dev->lock, flags);
++
++	/* next we may get setup() calls to enumerate new connections;
++	 * or an unbind() during shutdown (including removing module).
++	 */
++}
++
++static void
++zero_autoresume (unsigned long _dev)
++{
++	struct zero_dev	*dev = (struct zero_dev *) _dev;
++	int		status;
++
++	/* normally the host would be woken up for something
++	 * more significant than just a timer firing...
++	 */
++	if (dev->gadget->speed != USB_SPEED_UNKNOWN) {
++		status = usb_gadget_wakeup (dev->gadget);
++		DBG (dev, "wakeup --> %d\n", status);
++	}
++}
++
++/*-------------------------------------------------------------------------*/
++
++static void
++zero_unbind (struct usb_gadget *gadget)
++{
++	struct zero_dev		*dev = get_gadget_data (gadget);
++
++	DBG (dev, "unbind\n");
++
++	/* we've already been disconnected ... no i/o is active */
++	if (dev->req)
++		free_ep_req (gadget->ep0, dev->req);
++	del_timer_sync (&dev->resume);
++	kfree (dev);
++	set_gadget_data (gadget, NULL);
++}
++
++static int
++zero_bind (struct usb_gadget *gadget)
++{
++	struct zero_dev		*dev;
++	//struct usb_ep		*ep;
++
++	printk("binding\n");
++	/*
++	 * DRIVER POLICY CHOICE:  you may want to do this differently.
++	 * One thing to avoid is reusing a bcdDevice revision code
++	 * with different host-visible configurations or behavior
++	 * restrictions -- using ep1in/ep2out vs ep1out/ep3in, etc
++	 */
++	//device_desc.bcdDevice = __constant_cpu_to_le16 (0x0201);
++
++
++	/* ok, we made sense of the hardware ... */
++	dev = kmalloc (sizeof *dev, SLAB_KERNEL);
++	if (!dev)
++		return -ENOMEM;
++	memset (dev, 0, sizeof *dev);
++	spin_lock_init (&dev->lock);
++	dev->gadget = gadget;
++	set_gadget_data (gadget, dev);
++
++	/* preallocate control response and buffer */
++	dev->req = usb_ep_alloc_request (gadget->ep0, GFP_KERNEL);
++	if (!dev->req)
++		goto enomem;
++	dev->req->buf = usb_ep_alloc_buffer (gadget->ep0, USB_BUFSIZ,
++				&dev->req->dma, GFP_KERNEL);
++	if (!dev->req->buf)
++		goto enomem;
++
++	dev->req->complete = zero_setup_complete;
++
++	device_desc.bMaxPacketSize0 = gadget->ep0->maxpacket;
++
++#ifdef CONFIG_USB_GADGET_DUALSPEED
++	/* assume ep0 uses the same value for both speeds ... */
++	dev_qualifier.bMaxPacketSize0 = device_desc.bMaxPacketSize0;
++
++	/* and that all endpoints are dual-speed */
++	//hs_source_desc.bEndpointAddress = fs_source_desc.bEndpointAddress;
++	//hs_sink_desc.bEndpointAddress = fs_sink_desc.bEndpointAddress;
++#endif
++
++	usb_gadget_set_selfpowered (gadget);
++
++	init_timer (&dev->resume);
++	dev->resume.function = zero_autoresume;
++	dev->resume.data = (unsigned long) dev;
++
++	gadget->ep0->driver_data = dev;
++
++	INFO (dev, "%s, version: " DRIVER_VERSION "\n", longname);
++	INFO (dev, "using %s, OUT %s IN %s\n", gadget->name,
++		EP_OUT_NAME, EP_IN_NAME);
++
++	snprintf (manufacturer, sizeof manufacturer,
++		UTS_SYSNAME " " UTS_RELEASE " with %s",
++		gadget->name);
++
++	return 0;
++
++enomem:
++	zero_unbind (gadget);
++	return -ENOMEM;
++}
++
++/*-------------------------------------------------------------------------*/
++
++static void
++zero_suspend (struct usb_gadget *gadget)
++{
++	struct zero_dev		*dev = get_gadget_data (gadget);
++
++	if (gadget->speed == USB_SPEED_UNKNOWN)
++		return;
++
++	if (autoresume) {
++		mod_timer (&dev->resume, jiffies + (HZ * autoresume));
++		DBG (dev, "suspend, wakeup in %d seconds\n", autoresume);
++	} else
++		DBG (dev, "suspend\n");
++}
++
++static void
++zero_resume (struct usb_gadget *gadget)
++{
++	struct zero_dev		*dev = get_gadget_data (gadget);
++
++	DBG (dev, "resume\n");
++	del_timer (&dev->resume);
++}
++
++
++/*-------------------------------------------------------------------------*/
++
++static struct usb_gadget_driver zero_driver = {
++#ifdef CONFIG_USB_GADGET_DUALSPEED
++	.speed		= USB_SPEED_HIGH,
++#else
++	.speed		= USB_SPEED_FULL,
++#endif
++	.function	= (char *) longname,
++	.bind		= zero_bind,
++	.unbind		= zero_unbind,
++
++	.setup		= zero_setup,
++	.disconnect	= zero_disconnect,
++
++	.suspend	= zero_suspend,
++	.resume		= zero_resume,
++
++	.driver 	= {
++		.name		= (char *) shortname,
++		// .shutdown = ...
++		// .suspend = ...
++		// .resume = ...
++	},
++};
++
++MODULE_AUTHOR ("David Brownell");
++MODULE_LICENSE ("Dual BSD/GPL");
++
++static struct proc_dir_entry *pdir, *pfile;
++
++static int isoc_read_data (char *page, char **start,
++			   off_t off, int count,
++			   int *eof, void *data)
++{
++	int i;
++	static int c = 0;
++	static int done = 0;
++	static int s = 0;
++
++/*
++	printk ("\ncount: %d\n", count);
++	printk ("rbuf_start: %d\n", rbuf_start);
++	printk ("rbuf_len: %d\n", rbuf_len);
++	printk ("off: %d\n", off);
++	printk ("start: %p\n\n", *start);
++*/
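++	/* Stream the captured isochronous data out via the
++	 * /proc/isoc_test/isoc_data entry: 'c' counts the bytes already
++	 * returned, 's' is the start offset in rbuf (rbuf_start once the
++	 * ring has wrapped) and 'done' marks that everything was read.
++	 */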
++	if (done) {
++		c = 0;
++		done = 0;
++		*eof = 1;
++		return 0;
++	}
++
++	if (c == 0) {
++		if (rbuf_len == RBUF_LEN)
++			s = rbuf_start;
++		else s = 0;
++	}
++
++	for (i=0; i<count && c<rbuf_len; i++, c++) {
++		page[i] = rbuf[(c+s) % RBUF_LEN];
++	}
++	*start = page;
++
++	if (c >= rbuf_len) {
++		*eof = 1;
++		done = 1;
++	}
++
++
++	return i;
++}
++
++static int __init init (void)
++{
++
++	int retval = 0;
++
++	pdir = proc_mkdir("isoc_test", NULL);
++	if(pdir == NULL) {
++		retval = -ENOMEM;
++		printk("Error creating dir\n");
++		goto done;
++	}
++	pdir->owner = THIS_MODULE;
++
++	pfile = create_proc_read_entry("isoc_data",
++				       0444, pdir,
++				       isoc_read_data,
++				       NULL);
++	if (pfile == NULL) {
++		retval = -ENOMEM;
++		printk("Error creating file\n");
++		goto no_file;
++	}
++	pfile->owner = THIS_MODULE;
++
++	return usb_gadget_register_driver (&zero_driver);
++
++ no_file:
++	remove_proc_entry("isoc_data", NULL);
++ done:
++	return retval;
++}
++module_init (init);
++
++static void __exit cleanup (void)
++{
++
++	usb_gadget_unregister_driver (&zero_driver);
++
++	remove_proc_entry("isoc_data", pdir);
++	remove_proc_entry("isoc_test", NULL);
++}
++module_exit (cleanup);
+--- /dev/null
++++ b/drivers/usb/host/dwc_otg/dwc_cfi_common.h
+@@ -0,0 +1,142 @@
++/* ==========================================================================
++ * Synopsys HS OTG Linux Software Driver and documentation (hereinafter,
++ * "Software") is an Unsupported proprietary work of Synopsys, Inc. unless
++ * otherwise expressly agreed to in writing between Synopsys and you.
++ *
++ * The Software IS NOT an item of Licensed Software or Licensed Product under
++ * any End User Software License Agreement or Agreement for Licensed Product
++ * with Synopsys or any supplement thereto. You are permitted to use and
++ * redistribute this Software in source and binary forms, with or without
++ * modification, provided that redistributions of source code must retain this
++ * notice. You may not view, use, disclose, copy or distribute this file or
++ * any information contained herein except pursuant to this license grant from
++ * Synopsys. If you do not agree with this notice, including the disclaimer
++ * below, then you are not authorized to use the Software.
++ *
++ * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS" BASIS
++ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
++ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
++ * ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS BE LIABLE FOR ANY DIRECT,
++ * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
++ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
++ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
++ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
++ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
++ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
++ * DAMAGE.
++ * ========================================================================== */
++
++#if !defined(__DWC_CFI_COMMON_H__)
++#define __DWC_CFI_COMMON_H__
++
++//#include <linux/types.h>
++
++/**
++ * @file
++ *
++ * This file contains the CFI specific common constants, interfaces
++ * (functions and macros) and structures for Linux. No PCD specific
++ * data structure or definition is to be included in this file.
++ *
++ */
++
++/** This is a request for all Core Features */
++#define VEN_CORE_GET_FEATURES		0xB1
++
++/** This is a request to get the value of a specific Core Feature */
++#define VEN_CORE_GET_FEATURE		0xB2
++
++/** This command allows the host to set the value of a specific Core Feature */
++#define VEN_CORE_SET_FEATURE		0xB3
++
++/** This command allows the host to set the default values of
++ * either all or any specific Core Feature
++ */
++#define VEN_CORE_RESET_FEATURES		0xB4
++
++/** This command forces the PCD to write the deferred values of the Core Features */
++#define VEN_CORE_ACTIVATE_FEATURES	0xB5
++
++/** This request reads a DWORD value from a register at the specified offset */
++#define VEN_CORE_READ_REGISTER		0xB6
++
++/** This request writes a DWORD value into a register at the specified offset */
++#define VEN_CORE_WRITE_REGISTER		0xB7
++
++/** This structure is the header of the Core Features dataset returned to
++ *  the Host
++ */
++struct cfi_all_features_header {
++/** The features header structure length, in bytes */
++#define CFI_ALL_FEATURES_HDR_LEN		8
++	/**
++	 * The total length of the features dataset returned to the Host
++	 */
++	uint16_t wTotalLen;
++
++	/**
++	 * CFI version number in Binary-Coded Decimal (i.e., 1.00 is 100H).
++	 * This field identifies the version of the CFI Specification with which
++	 * the device is compliant.
++	 */
++	uint16_t wVersion;
++
++	/** The ID of the Core */
++	uint16_t wCoreID;
++#define CFI_CORE_ID_UDC		1
++#define CFI_CORE_ID_OTG		2
++#define CFI_CORE_ID_WUDEV	3
++
++	/** Number of features returned by VEN_CORE_GET_FEATURES request */
++	uint16_t wNumFeatures;
++} UPACKED;
++
++typedef struct cfi_all_features_header cfi_all_features_header_t;
++
++/** This structure is a header of the Core Feature descriptor dataset returned to
++ *  the Host after the VEN_CORE_GET_FEATURES request
++ */
++struct cfi_feature_desc_header {
++#define CFI_FEATURE_DESC_HDR_LEN	8
++
++	/** The feature ID */
++	uint16_t wFeatureID;
++
++	/** Length of this feature descriptor in bytes - including the
++	 * length of the feature name string
++	 */
++	uint16_t wLength;
++
++	/** The data length of this feature in bytes */
++	uint16_t wDataLength;
++
++	/**
++	 * Attributes of this feature
++	 * D0: Access rights
++	 * 0 - Read/Write
++	 * 1 - Read only
++	 */
++	uint8_t bmAttributes;
++#define CFI_FEATURE_ATTR_RO		1
++#define CFI_FEATURE_ATTR_RW		0
++
++	/** Length of the feature name in bytes */
++	uint8_t bNameLen;
++
++	/** The feature name buffer */
++	//uint8_t *name;
++} UPACKED;
++
++typedef struct cfi_feature_desc_header cfi_feature_desc_header_t;
++
++/**
++ * This structure describes a NULL-terminated string referenced by its id field.
++ * It is very similar to the usb_string structure, but its id field is 16 bits wide.
++ */
++struct cfi_string {
++	uint16_t id;
++	const uint8_t *s;
++};
++typedef struct cfi_string cfi_string_t;
++
++#endif
+--- /dev/null
++++ b/drivers/usb/host/dwc_otg/dwc_otg_adp.c
+@@ -0,0 +1,854 @@
++/* ==========================================================================
++ * $File: //dwh/usb_iip/dev/software/otg/linux/drivers/dwc_otg_adp.c $
++ * $Revision: #12 $
++ * $Date: 2011/10/26 $
++ * $Change: 1873028 $
++ *
++ * Synopsys HS OTG Linux Software Driver and documentation (hereinafter,
++ * "Software") is an Unsupported proprietary work of Synopsys, Inc. unless
++ * otherwise expressly agreed to in writing between Synopsys and you.
++ *
++ * The Software IS NOT an item of Licensed Software or Licensed Product under
++ * any End User Software License Agreement or Agreement for Licensed Product
++ * with Synopsys or any supplement thereto. You are permitted to use and
++ * redistribute this Software in source and binary forms, with or without
++ * modification, provided that redistributions of source code must retain this
++ * notice. You may not view, use, disclose, copy or distribute this file or
++ * any information contained herein except pursuant to this license grant from
++ * Synopsys. If you do not agree with this notice, including the disclaimer
++ * below, then you are not authorized to use the Software.
++ *
++ * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS" BASIS
++ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
++ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
++ * ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS BE LIABLE FOR ANY DIRECT,
++ * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
++ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
++ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
++ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
++ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
++ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
++ * DAMAGE.
++ * ========================================================================== */
++
++#include "dwc_os.h"
++#include "dwc_otg_regs.h"
++#include "dwc_otg_cil.h"
++#include "dwc_otg_adp.h"
++
++/** @file
++ *
++ * This file contains most of the Attach Detect Protocol implementation for
++ * the driver to support OTG Rev 2.0.
++ *
++ */
++
++void dwc_otg_adp_write_reg(dwc_otg_core_if_t * core_if, uint32_t value)
++{
++	adpctl_data_t adpctl;
++
++	adpctl.d32 = value;
++	adpctl.b.ar = 0x2;
++
++	DWC_WRITE_REG32(&core_if->core_global_regs->adpctl, adpctl.d32);
++
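++	/* The core clears adpctl.b.ar once the requested register access
++	 * has completed, so poll until it does.
++	 */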
++	while (adpctl.b.ar) {
++		adpctl.d32 = DWC_READ_REG32(&core_if->core_global_regs->adpctl);
++	}
++
++}
++
++/**
++ * Function is called to read ADP registers
++ */
++uint32_t dwc_otg_adp_read_reg(dwc_otg_core_if_t * core_if)
++{
++	adpctl_data_t adpctl;
++
++	adpctl.d32 = 0;
++	adpctl.b.ar = 0x1;
++
++	DWC_WRITE_REG32(&core_if->core_global_regs->adpctl, adpctl.d32);
++
++	while (adpctl.b.ar) {
++		adpctl.d32 = DWC_READ_REG32(&core_if->core_global_regs->adpctl);
++	}
++
++	return adpctl.d32;
++}
++
++/**
++ * Function is called to read ADPCTL register and filter Write-clear bits
++ */
++uint32_t dwc_otg_adp_read_reg_filter(dwc_otg_core_if_t * core_if)
++{
++	adpctl_data_t adpctl;
++
++	adpctl.d32 = dwc_otg_adp_read_reg(core_if);
++	adpctl.b.adp_tmout_int = 0;
++	adpctl.b.adp_prb_int = 0;
++	adpctl.b.adp_sns_int = 0;
++
++	return adpctl.d32;
++}
++
++/**
++ * Function is called to modify ADP registers
++ */
++void dwc_otg_adp_modify_reg(dwc_otg_core_if_t * core_if, uint32_t clr,
++			    uint32_t set)
++{
++	dwc_otg_adp_write_reg(core_if,
++			      (dwc_otg_adp_read_reg(core_if) & (~clr)) | set);
++}
++
++static void adp_sense_timeout(void *ptr)
++{
++	dwc_otg_core_if_t *core_if = (dwc_otg_core_if_t *) ptr;
++	core_if->adp.sense_timer_started = 0;
++	DWC_PRINTF("ADP SENSE TIMEOUT\n");
++	if (core_if->adp_enable) {
++		dwc_otg_adp_sense_stop(core_if);
++		dwc_otg_adp_probe_start(core_if);
++	}
++}
++
++/**
++ * This function is called when the ADP vbus timer expires. Timeout is 1.1s.
++ */
++static void adp_vbuson_timeout(void *ptr)
++{
++	gpwrdn_data_t gpwrdn;
++	dwc_otg_core_if_t *core_if = (dwc_otg_core_if_t *) ptr;
++	hprt0_data_t hprt0 = {.d32 = 0 };
++	pcgcctl_data_t pcgcctl = {.d32 = 0 };
++	DWC_PRINTF("%s: 1.1 seconds expire after turning on VBUS\n",__FUNCTION__);
++	if (core_if) {
++		core_if->adp.vbuson_timer_started = 0;
++		/* Turn off vbus */
++		hprt0.b.prtpwr = 1;
++		DWC_MODIFY_REG32(core_if->host_if->hprt0, hprt0.d32, 0);
++		gpwrdn.d32 = 0;
++
++		/* Power off the core */
++		if (core_if->power_down == 2) {
++			/* Enable Wakeup Logic */
++//                      gpwrdn.b.wkupactiv = 1;
++			gpwrdn.b.pmuactv = 0;
++			gpwrdn.b.pwrdnrstn = 1;
++			gpwrdn.b.pwrdnclmp = 1;
++			DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, 0,
++					 gpwrdn.d32);
++
++			/* Suspend the Phy Clock */
++			pcgcctl.b.stoppclk = 1;
++			DWC_MODIFY_REG32(core_if->pcgcctl, 0, pcgcctl.d32);
++
++			/* Switch on VDD */
++//                      gpwrdn.b.wkupactiv = 1;
++			gpwrdn.b.pmuactv = 1;
++			gpwrdn.b.pwrdnrstn = 1;
++			gpwrdn.b.pwrdnclmp = 1;
++			DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, 0,
++					 gpwrdn.d32);
++		} else {
++			/* Enable Power Down Logic */
++			gpwrdn.b.pmuintsel = 1;
++			gpwrdn.b.pmuactv = 1;
++			DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, 0, gpwrdn.d32);
++		}
++
++		/* Power off the core */
++		if (core_if->power_down == 2) {
++			gpwrdn.d32 = 0;
++			gpwrdn.b.pwrdnswtch = 1;
++			DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn,
++					 gpwrdn.d32, 0);
++		}
++
++		/* Unmask SRP detected interrupt from Power Down Logic */
++		gpwrdn.d32 = 0;
++		gpwrdn.b.srp_det_msk = 1;
++		DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, 0, gpwrdn.d32);
++
++		dwc_otg_adp_probe_start(core_if);
++		dwc_otg_dump_global_registers(core_if);
++		dwc_otg_dump_host_registers(core_if);
++	}
++
++}
++
++/**
++ * Start the ADP VBUS on timer to detect if the Port Connected interrupt is
++ * not asserted within 1.1 seconds.
++ *
++ * @param core_if the pointer to core_if structure.
++ */
++void dwc_otg_adp_vbuson_timer_start(dwc_otg_core_if_t * core_if)
++{
++	core_if->adp.vbuson_timer_started = 1;
++	if (core_if->adp.vbuson_timer)
++	{
++		DWC_PRINTF("SCHEDULING VBUSON TIMER\n");
++		/* 1.1 secs + 60ms necessary for cil_hcd_start*/
++		DWC_TIMER_SCHEDULE(core_if->adp.vbuson_timer, 1160);
++	} else {
++		DWC_WARN("VBUSON_TIMER = %p\n",core_if->adp.vbuson_timer);
++	}
++}
++
++#if 0
++/**
++ * Masks all DWC OTG core interrupts
++ *
++ */
++static void mask_all_interrupts(dwc_otg_core_if_t * core_if)
++{
++	int i;
++	gahbcfg_data_t ahbcfg = {.d32 = 0 };
++
++	/* Mask Host Interrupts */
++
++	/* Clear and disable HCINTs */
++	for (i = 0; i < core_if->core_params->host_channels; i++) {
++		DWC_WRITE_REG32(&core_if->host_if->hc_regs[i]->hcintmsk, 0);
++		DWC_WRITE_REG32(&core_if->host_if->hc_regs[i]->hcint, 0xFFFFFFFF);
++
++	}
++
++	/* Clear and disable HAINT */
++	DWC_WRITE_REG32(&core_if->host_if->host_global_regs->haintmsk, 0x0000);
++	DWC_WRITE_REG32(&core_if->host_if->host_global_regs->haint, 0xFFFFFFFF);
++
++	/* Mask Device Interrupts */
++	if (!core_if->multiproc_int_enable) {
++		/* Clear and disable IN Endpoint interrupts */
++		DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->diepmsk, 0);
++		for (i = 0; i <= core_if->dev_if->num_in_eps; i++) {
++			DWC_WRITE_REG32(&core_if->dev_if->in_ep_regs[i]->
++					diepint, 0xFFFFFFFF);
++		}
++
++		/* Clear and disable OUT Endpoint interrupts */
++		DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->doepmsk, 0);
++		for (i = 0; i <= core_if->dev_if->num_out_eps; i++) {
++			DWC_WRITE_REG32(&core_if->dev_if->out_ep_regs[i]->
++					doepint, 0xFFFFFFFF);
++		}
++
++		/* Clear and disable DAINT */
++		DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->daint,
++				0xFFFFFFFF);
++		DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->daintmsk, 0);
++	} else {
++		for (i = 0; i < core_if->dev_if->num_in_eps; ++i) {
++			DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->
++					diepeachintmsk[i], 0);
++			DWC_WRITE_REG32(&core_if->dev_if->in_ep_regs[i]->
++					diepint, 0xFFFFFFFF);
++		}
++
++		for (i = 0; i < core_if->dev_if->num_out_eps; ++i) {
++			DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->
++					doepeachintmsk[i], 0);
++			DWC_WRITE_REG32(&core_if->dev_if->out_ep_regs[i]->
++					doepint, 0xFFFFFFFF);
++		}
++
++		DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->deachintmsk,
++				0);
++		DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->deachint,
++				0xFFFFFFFF);
++
++	}
++
++	/* Disable interrupts */
++	ahbcfg.b.glblintrmsk = 1;
++	DWC_MODIFY_REG32(&core_if->core_global_regs->gahbcfg, ahbcfg.d32, 0);
++
++	/* Disable all interrupts. */
++	DWC_WRITE_REG32(&core_if->core_global_regs->gintmsk, 0);
++
++	/* Clear any pending interrupts */
++	DWC_WRITE_REG32(&core_if->core_global_regs->gintsts, 0xFFFFFFFF);
++
++	/* Clear any pending OTG Interrupts */
++	DWC_WRITE_REG32(&core_if->core_global_regs->gotgint, 0xFFFFFFFF);
++}
++
++/**
++ * Unmask Port Connection Detected interrupt
++ *
++ */
++static void unmask_conn_det_intr(dwc_otg_core_if_t * core_if)
++{
++	gintmsk_data_t gintmsk = {.d32 = 0,.b.portintr = 1 };
++
++	DWC_WRITE_REG32(&core_if->core_global_regs->gintmsk, gintmsk.d32);
++}
++#endif
++
++/**
++ * Starts the ADP Probing
++ *
++ * @param core_if the pointer to core_if structure.
++ */
++uint32_t dwc_otg_adp_probe_start(dwc_otg_core_if_t * core_if)
++{
++
++	adpctl_data_t adpctl = {.d32 = 0};
++	gpwrdn_data_t gpwrdn;
++#if 0
++	adpctl_data_t adpctl_int = {.d32 = 0, .b.adp_prb_int = 1,
++								.b.adp_sns_int = 1, .b.adp_tmout_int = 1};
++#endif
++	dwc_otg_disable_global_interrupts(core_if);
++	DWC_PRINTF("ADP Probe Start\n");
++	core_if->adp.probe_enabled = 1;
++
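++	/* Reset the ADP block first; the core clears adpres when the
++	 * reset has completed.
++	 */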
++	adpctl.b.adpres = 1;
++	dwc_otg_adp_write_reg(core_if, adpctl.d32);
++
++	while (adpctl.b.adpres) {
++		adpctl.d32 = dwc_otg_adp_read_reg(core_if);
++	}
++
++	adpctl.d32 = 0;
++	gpwrdn.d32 = DWC_READ_REG32(&core_if->core_global_regs->gpwrdn);
++
++	/* In Host mode unmask SRP detected interrupt */
++	gpwrdn.d32 = 0;
++	gpwrdn.b.sts_chngint_msk = 1;
++	if (!gpwrdn.b.idsts) {
++		gpwrdn.b.srp_det_msk = 1;
++	}
++	DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, 0, gpwrdn.d32);
++
++	adpctl.b.adp_tmout_int_msk = 1;
++	adpctl.b.adp_prb_int_msk = 1;
++	adpctl.b.prb_dschg = 1;
++	adpctl.b.prb_delta = 1;
++	adpctl.b.prb_per = 1;
++	adpctl.b.adpen = 1;
++	adpctl.b.enaprb = 1;
++
++	dwc_otg_adp_write_reg(core_if, adpctl.d32);
++	DWC_PRINTF("ADP Probe Finish\n");
++	return 0;
++}
++
++/**
++ * Starts the ADP Sense timer to detect if ADP Sense interrupt is not asserted
++ * within 3 seconds.
++ *
++ * @param core_if the pointer to core_if structure.
++ */
++void dwc_otg_adp_sense_timer_start(dwc_otg_core_if_t * core_if)
++{
++	core_if->adp.sense_timer_started = 1;
++	DWC_TIMER_SCHEDULE(core_if->adp.sense_timer, 3000 /* 3 secs */ );
++}
++
++/**
++ * Starts the ADP Sense
++ *
++ * @param core_if the pointer to core_if structure.
++ */
++uint32_t dwc_otg_adp_sense_start(dwc_otg_core_if_t * core_if)
++{
++	adpctl_data_t adpctl;
++
++	DWC_PRINTF("ADP Sense Start\n");
++
++	/* Unmask ADP sense interrupt and mask all other from the core */
++	adpctl.d32 = dwc_otg_adp_read_reg_filter(core_if);
++	adpctl.b.adp_sns_int_msk = 1;
++	dwc_otg_adp_write_reg(core_if, adpctl.d32);
++	dwc_otg_disable_global_interrupts(core_if); // vahrama
++
++	/* Set ADP reset bit*/
++	adpctl.d32 = dwc_otg_adp_read_reg_filter(core_if);
++	adpctl.b.adpres = 1;
++	dwc_otg_adp_write_reg(core_if, adpctl.d32);
++
++	while (adpctl.b.adpres) {
++		adpctl.d32 = dwc_otg_adp_read_reg(core_if);
++	}
++
++	adpctl.b.adpres = 0;
++	adpctl.b.adpen = 1;
++	adpctl.b.enasns = 1;
++	dwc_otg_adp_write_reg(core_if, adpctl.d32);
++
++	dwc_otg_adp_sense_timer_start(core_if);
++
++	return 0;
++}
++
++/**
++ * Stops the ADP Probing
++ *
++ * @param core_if the pointer to core_if structure.
++ */
++uint32_t dwc_otg_adp_probe_stop(dwc_otg_core_if_t * core_if)
++{
++
++	adpctl_data_t adpctl;
++	DWC_PRINTF("Stop ADP probe\n");
++	core_if->adp.probe_enabled = 0;
++	core_if->adp.probe_counter = 0;
++	adpctl.d32 = dwc_otg_adp_read_reg(core_if);
++
++	adpctl.b.adpen = 0;
++	adpctl.b.adp_prb_int = 1;
++	adpctl.b.adp_tmout_int = 1;
++	adpctl.b.adp_sns_int = 1;
++	dwc_otg_adp_write_reg(core_if, adpctl.d32);
++
++	return 0;
++}
++
++/**
++ * Stops the ADP Sensing
++ *
++ * @param core_if the pointer to core_if structure.
++ */
++uint32_t dwc_otg_adp_sense_stop(dwc_otg_core_if_t * core_if)
++{
++	adpctl_data_t adpctl;
++
++	core_if->adp.sense_enabled = 0;
++
++	adpctl.d32 = dwc_otg_adp_read_reg_filter(core_if);
++	adpctl.b.enasns = 0;
++	adpctl.b.adp_sns_int = 1;
++	dwc_otg_adp_write_reg(core_if, adpctl.d32);
++
++	return 0;
++}
++
++/**
++ * Called to turn on VBUS after the initial ADP probe in host mode.
++ * If port power was already enabled in the cil_hcd_start function,
++ * only the timer is scheduled.
++ *
++ * @param core_if the pointer to core_if structure.
++ */
++void dwc_otg_adp_turnon_vbus(dwc_otg_core_if_t * core_if)
++{
++	hprt0_data_t hprt0 = {.d32 = 0 };
++	hprt0.d32 = dwc_otg_read_hprt0(core_if);
++	DWC_PRINTF("Turn on VBUS for 1.1s, port power is %d\n", hprt0.b.prtpwr);
++
++	if (hprt0.b.prtpwr == 0) {
++		hprt0.b.prtpwr = 1;
++		//DWC_WRITE_REG32(core_if->host_if->hprt0, hprt0.d32);
++	}
++
++	dwc_otg_adp_vbuson_timer_start(core_if);
++}
++
++/**
++ * Called right after the driver is loaded
++ * to perform the initial actions for ADP.
++ *
++ * @param core_if the pointer to core_if structure.
++ * @param is_host - flag for current mode of operation either from GINTSTS or GPWRDN
++ */
++void dwc_otg_adp_start(dwc_otg_core_if_t * core_if, uint8_t is_host)
++{
++	gpwrdn_data_t gpwrdn;
++
++	DWC_PRINTF("ADP Initial Start\n");
++	core_if->adp.adp_started = 1;
++
++	DWC_WRITE_REG32(&core_if->core_global_regs->gintsts, 0xFFFFFFFF);
++	dwc_otg_disable_global_interrupts(core_if);
++	if (is_host) {
++		DWC_PRINTF("HOST MODE\n");
++		/* Enable Power Down Logic Interrupt*/
++		gpwrdn.d32 = 0;
++		gpwrdn.b.pmuintsel = 1;
++		gpwrdn.b.pmuactv = 1;
++		DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, 0, gpwrdn.d32);
++		/* Initialize first ADP probe to obtain Ramp Time value */
++		core_if->adp.initial_probe = 1;
++		dwc_otg_adp_probe_start(core_if);
++	} else {
++		gotgctl_data_t gotgctl;
++		gotgctl.d32 = DWC_READ_REG32(&core_if->core_global_regs->gotgctl);
++		DWC_PRINTF("DEVICE MODE\n");
++		if (gotgctl.b.bsesvld == 0) {
++			/* Enable Power Down Logic Interrupt*/
++			gpwrdn.d32 = 0;
++			DWC_PRINTF("VBUS is not valid - start ADP probe\n");
++			gpwrdn.b.pmuintsel = 1;
++			gpwrdn.b.pmuactv = 1;
++			DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, 0, gpwrdn.d32);
++			core_if->adp.initial_probe = 1;
++			dwc_otg_adp_probe_start(core_if);
++		} else {
++			DWC_PRINTF("VBUS is valid - initialize core as a Device\n");
++			core_if->op_state = B_PERIPHERAL;
++			dwc_otg_core_init(core_if);
++			dwc_otg_enable_global_interrupts(core_if);
++			cil_pcd_start(core_if);
++			dwc_otg_dump_global_registers(core_if);
++			dwc_otg_dump_dev_registers(core_if);
++		}
++	}
++}
++
++void dwc_otg_adp_init(dwc_otg_core_if_t * core_if)
++{
++	core_if->adp.adp_started = 0;
++	core_if->adp.initial_probe = 0;
++	core_if->adp.probe_timer_values[0] = -1;
++	core_if->adp.probe_timer_values[1] = -1;
++	core_if->adp.probe_enabled = 0;
++	core_if->adp.sense_enabled = 0;
++	core_if->adp.sense_timer_started = 0;
++	core_if->adp.vbuson_timer_started = 0;
++	core_if->adp.probe_counter = 0;
++	core_if->adp.gpwrdn = 0;
++	core_if->adp.attached = DWC_OTG_ADP_UNKOWN;
++	/* Initialize timers */
++	core_if->adp.sense_timer =
++	    DWC_TIMER_ALLOC("ADP SENSE TIMER", adp_sense_timeout, core_if);
++	core_if->adp.vbuson_timer =
++	    DWC_TIMER_ALLOC("ADP VBUS ON TIMER", adp_vbuson_timeout, core_if);
++	if (!core_if->adp.sense_timer || !core_if->adp.vbuson_timer)
++	{
++		DWC_ERROR("Could not allocate memory for ADP timers\n");
++	}
++}
++
++void dwc_otg_adp_remove(dwc_otg_core_if_t * core_if)
++{
++	gpwrdn_data_t gpwrdn = { .d32 = 0 };
++	gpwrdn.b.pmuintsel = 1;
++	gpwrdn.b.pmuactv = 1;
++	DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
++
++	if (core_if->adp.probe_enabled)
++		dwc_otg_adp_probe_stop(core_if);
++	if (core_if->adp.sense_enabled)
++		dwc_otg_adp_sense_stop(core_if);
++	if (core_if->adp.sense_timer_started)
++		DWC_TIMER_CANCEL(core_if->adp.sense_timer);
++	if (core_if->adp.vbuson_timer_started)
++		DWC_TIMER_CANCEL(core_if->adp.vbuson_timer);
++	DWC_TIMER_FREE(core_if->adp.sense_timer);
++	DWC_TIMER_FREE(core_if->adp.vbuson_timer);
++}
++
++/////////////////////////////////////////////////////////////////////
++////////////// ADP Interrupt Handlers ///////////////////////////////
++/////////////////////////////////////////////////////////////////////
++/**
++ * This function sets Ramp Timer values
++ */
++static uint32_t set_timer_value(dwc_otg_core_if_t * core_if, uint32_t val)
++{
++	if (core_if->adp.probe_timer_values[0] == -1) {
++		core_if->adp.probe_timer_values[0] = val;
++		core_if->adp.probe_timer_values[1] = -1;
++		return 1;
++	} else {
++		core_if->adp.probe_timer_values[1] =
++		    core_if->adp.probe_timer_values[0];
++		core_if->adp.probe_timer_values[0] = val;
++		return 0;
++	}
++}
++
++/**
++ * This function compares Ramp Timer values
++ */
++static uint32_t compare_timer_values(dwc_otg_core_if_t * core_if)
++{
++	uint32_t diff;
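++	/* A difference of two or more ramp-timer units between the last
++	 * two probes is reported as a change (return 1), which the caller
++	 * treats as an attach event.
++	 */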
++	if (core_if->adp.probe_timer_values[0]>=core_if->adp.probe_timer_values[1])
++			diff = core_if->adp.probe_timer_values[0]-core_if->adp.probe_timer_values[1];
++	else
++			diff = core_if->adp.probe_timer_values[1]-core_if->adp.probe_timer_values[0];
++	if(diff < 2) {
++		return 0;
++	} else {
++		return 1;
++	}
++}
++
++/**
++ * This function handles ADP Probe Interrupts
++ */
++static int32_t dwc_otg_adp_handle_prb_intr(dwc_otg_core_if_t * core_if,
++						 uint32_t val)
++{
++	adpctl_data_t adpctl = {.d32 = 0 };
++	gpwrdn_data_t gpwrdn, temp;
++	adpctl.d32 = val;
++
++	temp.d32 = DWC_READ_REG32(&core_if->core_global_regs->gpwrdn);
++	core_if->adp.probe_counter++;
++	core_if->adp.gpwrdn = DWC_READ_REG32(&core_if->core_global_regs->gpwrdn);
++	if (adpctl.b.rtim == 0 && !temp.b.idsts){
++		DWC_PRINTF("RTIM value is 0\n");
++		goto exit;
++	}
++	if (set_timer_value(core_if, adpctl.b.rtim) &&
++	    core_if->adp.initial_probe) {
++		core_if->adp.initial_probe = 0;
++		dwc_otg_adp_probe_stop(core_if);
++		gpwrdn.d32 = 0;
++		gpwrdn.b.pmuactv = 1;
++		gpwrdn.b.pmuintsel = 1;
++		DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
++		DWC_WRITE_REG32(&core_if->core_global_regs->gintsts, 0xFFFFFFFF);
++
++		/* check which value is for device mode and which for Host mode */
++		if (!temp.b.idsts) {	/* considered host mode value is 0 */
++			/*
++			 * Turn on VBUS after initial ADP probe.
++			 */
++			core_if->op_state = A_HOST;
++			dwc_otg_enable_global_interrupts(core_if);
++			DWC_SPINUNLOCK(core_if->lock);
++			cil_hcd_start(core_if);
++			dwc_otg_adp_turnon_vbus(core_if);
++			DWC_SPINLOCK(core_if->lock);
++		} else {
++			/*
++			 * Initiate SRP after initial ADP probe.
++			 */
++			dwc_otg_enable_global_interrupts(core_if);
++			dwc_otg_initiate_srp(core_if);
++		}
++	} else if (core_if->adp.probe_counter > 2){
++		gpwrdn.d32 = DWC_READ_REG32(&core_if->core_global_regs->gpwrdn);
++		if (compare_timer_values(core_if)) {
++			DWC_PRINTF("Difference in timer values !!! \n");
++//                      core_if->adp.attached = DWC_OTG_ADP_ATTACHED;
++			dwc_otg_adp_probe_stop(core_if);
++
++			/* Power on the core */
++			if (core_if->power_down == 2) {
++				gpwrdn.b.pwrdnswtch = 1;
++				DWC_MODIFY_REG32(&core_if->core_global_regs->
++						 gpwrdn, 0, gpwrdn.d32);
++			}
++
++			/* check which value is for device mode and which for Host mode */
++			if (!temp.b.idsts) {	/* considered host mode value is 0 */
++				/* Disable Interrupt from Power Down Logic */
++				gpwrdn.d32 = 0;
++				gpwrdn.b.pmuintsel = 1;
++				gpwrdn.b.pmuactv = 1;
++				DWC_MODIFY_REG32(&core_if->core_global_regs->
++						 gpwrdn, gpwrdn.d32, 0);
++
++				/*
++				 * Initialize the Core for Host mode.
++				 */
++				core_if->op_state = A_HOST;
++				dwc_otg_core_init(core_if);
++				dwc_otg_enable_global_interrupts(core_if);
++				cil_hcd_start(core_if);
++			} else {
++				gotgctl_data_t gotgctl;
++				/* Mask SRP detected interrupt from Power Down Logic */
++				gpwrdn.d32 = 0;
++				gpwrdn.b.srp_det_msk = 1;
++				DWC_MODIFY_REG32(&core_if->core_global_regs->
++						 gpwrdn, gpwrdn.d32, 0);
++
++				/* Disable Power Down Logic */
++				gpwrdn.d32 = 0;
++				gpwrdn.b.pmuintsel = 1;
++				gpwrdn.b.pmuactv = 1;
++				DWC_MODIFY_REG32(&core_if->core_global_regs->
++						 gpwrdn, gpwrdn.d32, 0);
++
++				/*
++				 * Initialize the Core for Device mode.
++				 */
++				core_if->op_state = B_PERIPHERAL;
++				dwc_otg_core_init(core_if);
++				dwc_otg_enable_global_interrupts(core_if);
++				cil_pcd_start(core_if);
++
++				gotgctl.d32 = DWC_READ_REG32(&core_if->core_global_regs->gotgctl);
++				if (!gotgctl.b.bsesvld) {
++					dwc_otg_initiate_srp(core_if);
++				}
++			}
++		}
++		if (core_if->power_down == 2) {
++			if (gpwrdn.b.bsessvld) {
++				/* Mask SRP detected interrupt from Power Down Logic */
++				gpwrdn.d32 = 0;
++				gpwrdn.b.srp_det_msk = 1;
++				DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
++
++				/* Disable Power Down Logic */
++				gpwrdn.d32 = 0;
++				gpwrdn.b.pmuactv = 1;
++				DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
++
++				/*
++				 * Initialize the Core for Device mode.
++				 */
++				core_if->op_state = B_PERIPHERAL;
++				dwc_otg_core_init(core_if);
++				dwc_otg_enable_global_interrupts(core_if);
++				cil_pcd_start(core_if);
++			}
++		}
++	}
++exit:
++	/* Clear interrupt */
++	adpctl.d32 = dwc_otg_adp_read_reg(core_if);
++	adpctl.b.adp_prb_int = 1;
++	dwc_otg_adp_write_reg(core_if, adpctl.d32);
++
++	return 0;
++}
++
++/**
++ * This function handles the ADP Sense interrupt
++ */
++static int32_t dwc_otg_adp_handle_sns_intr(dwc_otg_core_if_t * core_if)
++{
++	adpctl_data_t adpctl;
++	/* Stop ADP Sense timer */
++	DWC_TIMER_CANCEL(core_if->adp.sense_timer);
++
++	/* Restart ADP Sense timer */
++	dwc_otg_adp_sense_timer_start(core_if);
++
++	/* Clear interrupt */
++	adpctl.d32 = dwc_otg_adp_read_reg(core_if);
++	adpctl.b.adp_sns_int = 1;
++	dwc_otg_adp_write_reg(core_if, adpctl.d32);
++
++	return 0;
++}
++
++/**
++ * This function handles the ADP Probe Timeout interrupt
++ */
++static int32_t dwc_otg_adp_handle_prb_tmout_intr(dwc_otg_core_if_t * core_if,
++						 uint32_t val)
++{
++	adpctl_data_t adpctl = {.d32 = 0 };
++	adpctl.d32 = val;
++	set_timer_value(core_if, adpctl.b.rtim);
++
++	/* Clear interrupt */
++	adpctl.d32 = dwc_otg_adp_read_reg(core_if);
++	adpctl.b.adp_tmout_int = 1;
++	dwc_otg_adp_write_reg(core_if, adpctl.d32);
++
++	return 0;
++}
++
++/**
++ * ADP Interrupt handler.
++ *
++ */
++int32_t dwc_otg_adp_handle_intr(dwc_otg_core_if_t * core_if)
++{
++	int retval = 0;
++	adpctl_data_t adpctl = {.d32 = 0};
++
++	adpctl.d32 = dwc_otg_adp_read_reg(core_if);
++	DWC_PRINTF("ADPCTL = %08x\n",adpctl.d32);
++
++	if (adpctl.b.adp_sns_int & adpctl.b.adp_sns_int_msk) {
++		DWC_PRINTF("ADP Sense interrupt\n");
++		retval |= dwc_otg_adp_handle_sns_intr(core_if);
++	}
++	if (adpctl.b.adp_tmout_int & adpctl.b.adp_tmout_int_msk) {
++		DWC_PRINTF("ADP timeout interrupt\n");
++		retval |= dwc_otg_adp_handle_prb_tmout_intr(core_if, adpctl.d32);
++	}
++	if (adpctl.b.adp_prb_int & adpctl.b.adp_prb_int_msk) {
++		DWC_PRINTF("ADP Probe interrupt\n");
++		adpctl.b.adp_prb_int = 1;
++		retval |= dwc_otg_adp_handle_prb_intr(core_if, adpctl.d32);
++	}
++
++//	dwc_otg_adp_modify_reg(core_if, adpctl.d32, 0);
++	//dwc_otg_adp_write_reg(core_if, adpctl.d32);
++	DWC_PRINTF("RETURN FROM ADP ISR\n");
++
++	return retval;
++}
++
++/**
++ *
++ * @param core_if Programming view of DWC_otg controller.
++ */
++int32_t dwc_otg_adp_handle_srp_intr(dwc_otg_core_if_t * core_if)
++{
++
++#ifndef DWC_HOST_ONLY
++	hprt0_data_t hprt0;
++	gpwrdn_data_t gpwrdn;
++	DWC_DEBUGPL(DBG_ANY, "++ Power Down Logic Session Request Interrupt++\n");
++
++	gpwrdn.d32 = DWC_READ_REG32(&core_if->core_global_regs->gpwrdn);
++	/* check which value is for device mode and which for Host mode */
++	if (!gpwrdn.b.idsts) {	/* considered host mode value is 0 */
++		DWC_PRINTF("SRP: Host mode\n");
++
++		if (core_if->adp_enable) {
++			dwc_otg_adp_probe_stop(core_if);
++
++			/* Power on the core */
++			if (core_if->power_down == 2) {
++				gpwrdn.b.pwrdnswtch = 1;
++				DWC_MODIFY_REG32(&core_if->core_global_regs->
++						 gpwrdn, 0, gpwrdn.d32);
++			}
++
++			core_if->op_state = A_HOST;
++			dwc_otg_core_init(core_if);
++			dwc_otg_enable_global_interrupts(core_if);
++			cil_hcd_start(core_if);
++		}
++
++		/* Turn on the port power bit. */
++		hprt0.d32 = dwc_otg_read_hprt0(core_if);
++		hprt0.b.prtpwr = 1;
++		DWC_WRITE_REG32(core_if->host_if->hprt0, hprt0.d32);
++
++		/* Start the Connection timer, so a message can be displayed
++		 * if connect does not occur within 10 seconds. */
++		cil_hcd_session_start(core_if);
++	} else {
++		DWC_PRINTF("SRP: Device mode %s\n", __FUNCTION__);
++		if (core_if->adp_enable) {
++			dwc_otg_adp_probe_stop(core_if);
++
++			/* Power on the core */
++			if (core_if->power_down == 2) {
++				gpwrdn.b.pwrdnswtch = 1;
++				DWC_MODIFY_REG32(&core_if->core_global_regs->
++						 gpwrdn, 0, gpwrdn.d32);
++			}
++
++			gpwrdn.d32 = 0;
++			gpwrdn.b.pmuactv = 0;
++			DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, 0,
++					 gpwrdn.d32);
++
++			core_if->op_state = B_PERIPHERAL;
++			dwc_otg_core_init(core_if);
++			dwc_otg_enable_global_interrupts(core_if);
++			cil_pcd_start(core_if);
++		}
++	}
++#endif
++	return 1;
++}
+--- /dev/null
++++ b/drivers/usb/host/dwc_otg/dwc_otg_adp.h
+@@ -0,0 +1,80 @@
++/* ==========================================================================
++ * $File: //dwh/usb_iip/dev/software/otg/linux/drivers/dwc_otg_adp.h $
++ * $Revision: #7 $
++ * $Date: 2011/10/24 $
++ * $Change: 1871159 $
++ *
++ * Synopsys HS OTG Linux Software Driver and documentation (hereinafter,
++ * "Software") is an Unsupported proprietary work of Synopsys, Inc. unless
++ * otherwise expressly agreed to in writing between Synopsys and you.
++ *
++ * The Software IS NOT an item of Licensed Software or Licensed Product under
++ * any End User Software License Agreement or Agreement for Licensed Product
++ * with Synopsys or any supplement thereto. You are permitted to use and
++ * redistribute this Software in source and binary forms, with or without
++ * modification, provided that redistributions of source code must retain this
++ * notice. You may not view, use, disclose, copy or distribute this file or
++ * any information contained herein except pursuant to this license grant from
++ * Synopsys. If you do not agree with this notice, including the disclaimer
++ * below, then you are not authorized to use the Software.
++ *
++ * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS" BASIS
++ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
++ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
++ * ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS BE LIABLE FOR ANY DIRECT,
++ * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
++ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
++ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
++ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
++ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
++ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
++ * DAMAGE.
++ * ========================================================================== */
++
++#ifndef __DWC_OTG_ADP_H__
++#define __DWC_OTG_ADP_H__
++
++/**
++ * @file
++ *
++ * This file contains the Attach Detect Protocol interfaces and defines
++ * (functions) and structures for Linux.
++ *
++ */
++
++#define DWC_OTG_ADP_UNATTACHED	0
++#define DWC_OTG_ADP_ATTACHED	1
++#define DWC_OTG_ADP_UNKOWN	2
++
++typedef struct dwc_otg_adp {
++	uint32_t adp_started;
++	uint32_t initial_probe;
++	int32_t probe_timer_values[2];
++	uint32_t probe_enabled;
++	uint32_t sense_enabled;
++	dwc_timer_t *sense_timer;
++	uint32_t sense_timer_started;
++	dwc_timer_t *vbuson_timer;
++	uint32_t vbuson_timer_started;
++	uint32_t attached;
++	uint32_t probe_counter;
++	uint32_t gpwrdn;
++} dwc_otg_adp_t;
++
++/**
++ * Attach Detect Protocol functions
++ */
++
++extern void dwc_otg_adp_write_reg(dwc_otg_core_if_t * core_if, uint32_t value);
++extern uint32_t dwc_otg_adp_read_reg(dwc_otg_core_if_t * core_if);
++extern uint32_t dwc_otg_adp_probe_start(dwc_otg_core_if_t * core_if);
++extern uint32_t dwc_otg_adp_sense_start(dwc_otg_core_if_t * core_if);
++extern uint32_t dwc_otg_adp_probe_stop(dwc_otg_core_if_t * core_if);
++extern uint32_t dwc_otg_adp_sense_stop(dwc_otg_core_if_t * core_if);
++extern void dwc_otg_adp_start(dwc_otg_core_if_t * core_if, uint8_t is_host);
++extern void dwc_otg_adp_init(dwc_otg_core_if_t * core_if);
++extern void dwc_otg_adp_remove(dwc_otg_core_if_t * core_if);
++extern int32_t dwc_otg_adp_handle_intr(dwc_otg_core_if_t * core_if);
++extern int32_t dwc_otg_adp_handle_srp_intr(dwc_otg_core_if_t * core_if);
++
++#endif //__DWC_OTG_ADP_H__
+--- /dev/null
++++ b/drivers/usb/host/dwc_otg/dwc_otg_attr.c
+@@ -0,0 +1,1210 @@
++/* ==========================================================================
++ * $File: //dwh/usb_iip/dev/software/otg/linux/drivers/dwc_otg_attr.c $
++ * $Revision: #44 $
++ * $Date: 2010/11/29 $
++ * $Change: 1636033 $
++ *
++ * Synopsys HS OTG Linux Software Driver and documentation (hereinafter,
++ * "Software") is an Unsupported proprietary work of Synopsys, Inc. unless
++ * otherwise expressly agreed to in writing between Synopsys and you.
++ *
++ * The Software IS NOT an item of Licensed Software or Licensed Product under
++ * any End User Software License Agreement or Agreement for Licensed Product
++ * with Synopsys or any supplement thereto. You are permitted to use and
++ * redistribute this Software in source and binary forms, with or without
++ * modification, provided that redistributions of source code must retain this
++ * notice. You may not view, use, disclose, copy or distribute this file or
++ * any information contained herein except pursuant to this license grant from
++ * Synopsys. If you do not agree with this notice, including the disclaimer
++ * below, then you are not authorized to use the Software.
++ *
++ * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS" BASIS
++ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
++ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
++ * ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS BE LIABLE FOR ANY DIRECT,
++ * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
++ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
++ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
++ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
++ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
++ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
++ * DAMAGE.
++ * ========================================================================== */
++
++/** @file
++ *
++ * The diagnostic interface will provide access to the controller for
++ * bringing up the hardware and testing.  The Linux driver attributes
++ * feature will be used to provide the Linux Diagnostic
++ * Interface. These attributes are accessed through sysfs.
++ */
++
++/** @page "Linux Module Attributes"
++ *
++ * The Linux module attributes feature is used to provide the Linux
++ * Diagnostic Interface.  These attributes are accessed through sysfs.
++ * The diagnostic interface will provide access to the controller for
++ * bringing up the hardware and testing.
++
++ The following table shows the attributes.
++ <table>
++ <tr>
++ <td><b> Name</b></td>
++ <td><b> Description</b></td>
++ <td><b> Access</b></td>
++ </tr>
++
++ <tr>
++ <td> mode </td>
++ <td> Returns the current mode: 0 for device mode, 1 for host mode</td>
++ <td> Read</td>
++ </tr>
++
++ <tr>
++ <td> hnpcapable </td>
++ <td> Gets or sets the "HNP-capable" bit in the Core USB Configuraton Register.
++ Read returns the current value.</td>
++ <td> Read/Write</td>
++ </tr>
++
++ <tr>
++ <td> srpcapable </td>
++ <td> Gets or sets the "SRP-capable" bit in the Core USB Configuraton Register.
++ Read returns the current value.</td>
++ <td> Read/Write</td>
++ </tr>
++
++ <tr>
++ <td> hsic_connect </td>
++ <td> Gets or sets the "HSIC-Connect" bit in the GLPMCFG Register.
++ Read returns the current value.</td>
++ <td> Read/Write</td>
++ </tr>
++
++ <tr>
++ <td> inv_sel_hsic </td>
++ <td> Gets or sets the "Invert Select HSIC" bit in the GLPMFG Register.
++ Read returns the current value.</td>
++ <td> Read/Write</td>
++ </tr>
++
++ <tr>
++ <td> hnp </td>
++ <td> Initiates the Host Negotiation Protocol.  Read returns the status.</td>
++ <td> Read/Write</td>
++ </tr>
++
++ <tr>
++ <td> srp </td>
++ <td> Initiates the Session Request Protocol.  Read returns the status.</td>
++ <td> Read/Write</td>
++ </tr>
++
++ <tr>
++ <td> buspower </td>
++ <td> Gets or sets the Power State of the bus (0 - Off or 1 - On)</td>
++ <td> Read/Write</td>
++ </tr>
++
++ <tr>
++ <td> bussuspend </td>
++ <td> Suspends the USB bus.</td>
++ <td> Read/Write</td>
++ </tr>
++
++ <tr>
++ <td> busconnected </td>
++ <td> Gets the connection status of the bus</td>
++ <td> Read</td>
++ </tr>
++
++ <tr>
++ <td> gotgctl </td>
++ <td> Gets or sets the Core Control Status Register.</td>
++ <td> Read/Write</td>
++ </tr>
++
++ <tr>
++ <td> gusbcfg </td>
++ <td> Gets or sets the Core USB Configuration Register</td>
++ <td> Read/Write</td>
++ </tr>
++
++ <tr>
++ <td> grxfsiz </td>
++ <td> Gets or sets the Receive FIFO Size Register</td>
++ <td> Read/Write</td>
++ </tr>
++
++ <tr>
++ <td> gnptxfsiz </td>
++ <td> Gets or sets the non-periodic Transmit Size Register</td>
++ <td> Read/Write</td>
++ </tr>
++
++ <tr>
++ <td> gpvndctl </td>
++ <td> Gets or sets the PHY Vendor Control Register</td>
++ <td> Read/Write</td>
++ </tr>
++
++ <tr>
++ <td> ggpio </td>
++ <td> Gets the value in the lower 16-bits of the General Purpose IO Register
++ or sets the upper 16 bits.</td>
++ <td> Read/Write</td>
++ </tr>
++
++ <tr>
++ <td> guid </td>
++ <td> Gets or sets the value of the User ID Register</td>
++ <td> Read/Write</td>
++ </tr>
++
++ <tr>
++ <td> gsnpsid </td>
++ <td> Gets the value of the Synopsys ID Regester</td>
++ <td> Read</td>
++ </tr>
++
++ <tr>
++ <td> devspeed </td>
++ <td> Gets or sets the device speed setting in the DCFG register</td>
++ <td> Read/Write</td>
++ </tr>
++
++ <tr>
++ <td> enumspeed </td>
++ <td> Gets the device enumeration Speed.</td>
++ <td> Read</td>
++ </tr>
++
++ <tr>
++ <td> hptxfsiz </td>
++ <td> Gets the value of the Host Periodic Transmit FIFO</td>
++ <td> Read</td>
++ </tr>
++
++ <tr>
++ <td> hprt0 </td>
++ <td> Gets or sets the value in the Host Port Control and Status Register</td>
++ <td> Read/Write</td>
++ </tr>
++
++ <tr>
++ <td> regoffset </td>
++ <td> Sets the register offset for the next Register Access</td>
++ <td> Read/Write</td>
++ </tr>
++
++ <tr>
++ <td> regvalue </td>
++ <td> Gets or sets the value of the register at the offset in the regoffset attribute.</td>
++ <td> Read/Write</td>
++ </tr>
++
++ <tr>
++ <td> remote_wakeup </td>
++ <td> On read, shows the status of Remote Wakeup. On write, initiates a remote
++ wakeup of the host. When bit 0 is 1 and Remote Wakeup is enabled, the Remote
++ Wakeup signalling bit in the Device Control Register is set for 1
++ milli-second.</td>
++ <td> Read/Write</td>
++ </tr>
++
++ <tr>
++ <td> rem_wakeup_pwrdn </td>
++ <td> On read, shows the status core - hibernated or not. On write, initiates
++ a remote wakeup of the device from Hibernation. </td>
++ <td> Read/Write</td>
++ </tr>
++
++ <tr>
++ <td> mode_ch_tim_en </td>
++ <td> This bit is used to enable or disable the host core to wait for 200 PHY
++ clock cycles at the end of Resume to change the opmode signal to the PHY to 00
++ after Suspend or LPM. </td>
++ <td> Read/Write</td>
++ </tr>
++
++ <tr>
++ <td> fr_interval </td>
++ <td> On read, shows the value of HFIR Frame Interval. On write, dynamically
++ reload HFIR register during runtime. The application can write a value to this
++ register only after the Port Enable bit of the Host Port Control and Status
++ register (HPRT.PrtEnaPort) has been set </td>
++ <td> Read/Write</td>
++ </tr>
++
++ <tr>
++ <td> disconnect_us </td>
++ <td> On read, shows the status of disconnect_device_us. On write, sets disconnect_us
++ which causes soft disconnect for 100us. Applicable only for device mode of operation.</td>
++ <td> Read/Write</td>
++ </tr>
++
++ <tr>
++ <td> regdump </td>
++ <td> Dumps the contents of core registers.</td>
++ <td> Read</td>
++ </tr>
++
++ <tr>
++ <td> spramdump </td>
++ <td> Dumps the contents of core registers.</td>
++ <td> Read</td>
++ </tr>
++
++ <tr>
++ <td> hcddump </td>
++ <td> Dumps the current HCD state.</td>
++ <td> Read</td>
++ </tr>
++
++ <tr>
++ <td> hcd_frrem </td>
++ <td> Shows the average value of the Frame Remaining
++ field in the Host Frame Number/Frame Remaining register when an SOF interrupt
++ occurs. This can be used to determine the average interrupt latency. Also
++ shows the average Frame Remaining value for start_transfer and the "a" and
++ "b" sample points. The "a" and "b" sample points may be used during debugging
++ "b" sample points may be used during debugging to determine how long it takes to execute a section of the HCD code.</td>
++ <td> Read</td>
++ </tr>
++
++ <tr>
++ <td> rd_reg_test </td>
++ <td> Displays the time required to read the GNPTXFSIZ register many times
++ (the output shows the number of times the register is read).</td>
++ <td> Read</td>
++ </tr>
++
++ <tr>
++ <td> wr_reg_test </td>
++ <td> Displays the time required to write the GNPTXFSIZ register many times
++ (the output shows the number of times the register is written).</td>
++ <td> Read</td>
++ </tr>
++
++ <tr>
++ <td> lpm_response </td>
++ <td> Gets or sets lpm_response mode. Applicable only in device mode.
++ <td> Write</td>
++ </tr>
++
++ <tr>
++ <td> sleep_status </td>
++ <td> Shows sleep status of device.
++ <td> Read</td>
++ </tr>
++
++ </table>
++
++ Example usage:
++ To get the current mode:
++ cat /sys/devices/lm0/mode
++
++ To power down the USB:
++ echo 0 > /sys/devices/lm0/buspower
++ */
++
++#include "dwc_otg_os_dep.h"
++#include "dwc_os.h"
++#include "dwc_otg_driver.h"
++#include "dwc_otg_attr.h"
++#include "dwc_otg_core_if.h"
++#include "dwc_otg_pcd_if.h"
++#include "dwc_otg_hcd_if.h"
++
++/*
++ * MACROs for defining sysfs attribute
++ */
++#ifdef LM_INTERFACE
++
++#define DWC_OTG_DEVICE_ATTR_BITFIELD_SHOW(_otg_attr_name_,_string_) \
++static ssize_t _otg_attr_name_##_show (struct device *_dev, struct device_attribute *attr, char *buf) \
++{ \
++	struct lm_device *lm_dev = container_of(_dev, struct lm_device, dev); \
++	dwc_otg_device_t *otg_dev = lm_get_drvdata(lm_dev);		\
++	uint32_t val; \
++	val = dwc_otg_get_##_otg_attr_name_ (otg_dev->core_if); \
++	return sprintf (buf, "%s = 0x%x\n", _string_, val); \
++}
++#define DWC_OTG_DEVICE_ATTR_BITFIELD_STORE(_otg_attr_name_,_string_) \
++static ssize_t _otg_attr_name_##_store (struct device *_dev, struct device_attribute *attr, \
++					const char *buf, size_t count) \
++{ \
++	struct lm_device *lm_dev = container_of(_dev, struct lm_device, dev); \
++	dwc_otg_device_t *otg_dev = lm_get_drvdata(lm_dev); \
++	uint32_t set = simple_strtoul(buf, NULL, 16); \
++	dwc_otg_set_##_otg_attr_name_(otg_dev->core_if, set);\
++	return count; \
++}
++
++#elif defined(PCI_INTERFACE)
++
++#define DWC_OTG_DEVICE_ATTR_BITFIELD_SHOW(_otg_attr_name_,_string_) \
++static ssize_t _otg_attr_name_##_show (struct device *_dev, struct device_attribute *attr, char *buf) \
++{ \
++	dwc_otg_device_t *otg_dev = dev_get_drvdata(_dev);	\
++	uint32_t val; \
++	val = dwc_otg_get_##_otg_attr_name_ (otg_dev->core_if); \
++	return sprintf (buf, "%s = 0x%x\n", _string_, val); \
++}
++#define DWC_OTG_DEVICE_ATTR_BITFIELD_STORE(_otg_attr_name_,_string_) \
++static ssize_t _otg_attr_name_##_store (struct device *_dev, struct device_attribute *attr, \
++					const char *buf, size_t count) \
++{ \
++	dwc_otg_device_t *otg_dev = dev_get_drvdata(_dev);  \
++	uint32_t set = simple_strtoul(buf, NULL, 16); \
++	dwc_otg_set_##_otg_attr_name_(otg_dev->core_if, set);\
++	return count; \
++}
++
++#elif defined(PLATFORM_INTERFACE)
++
++#define DWC_OTG_DEVICE_ATTR_BITFIELD_SHOW(_otg_attr_name_,_string_) \
++static ssize_t _otg_attr_name_##_show (struct device *_dev, struct device_attribute *attr, char *buf) \
++{ \
++        struct platform_device *platform_dev = \
++                container_of(_dev, struct platform_device, dev); \
++        dwc_otg_device_t *otg_dev = platform_get_drvdata(platform_dev);  \
++	uint32_t val; \
++	DWC_PRINTF("%s(%p) -> platform_dev %p, otg_dev %p\n", \
++                    __func__, _dev, platform_dev, otg_dev); \
++	val = dwc_otg_get_##_otg_attr_name_ (otg_dev->core_if); \
++	return sprintf (buf, "%s = 0x%x\n", _string_, val); \
++}
++#define DWC_OTG_DEVICE_ATTR_BITFIELD_STORE(_otg_attr_name_,_string_) \
++static ssize_t _otg_attr_name_##_store (struct device *_dev, struct device_attribute *attr, \
++					const char *buf, size_t count) \
++{ \
++        struct platform_device *platform_dev = container_of(_dev, struct platform_device, dev); \
++        dwc_otg_device_t *otg_dev = platform_get_drvdata(platform_dev); \
++	uint32_t set = simple_strtoul(buf, NULL, 16); \
++	dwc_otg_set_##_otg_attr_name_(otg_dev->core_if, set);\
++	return count; \
++}
++#endif
++
++/*
++ * MACROs for defining sysfs attribute for 32-bit registers
++ */
++#ifdef LM_INTERFACE
++#define DWC_OTG_DEVICE_ATTR_REG_SHOW(_otg_attr_name_,_string_) \
++static ssize_t _otg_attr_name_##_show (struct device *_dev, struct device_attribute *attr, char *buf) \
++{ \
++	struct lm_device *lm_dev = container_of(_dev, struct lm_device, dev); \
++	dwc_otg_device_t *otg_dev = lm_get_drvdata(lm_dev); \
++	uint32_t val; \
++	val = dwc_otg_get_##_otg_attr_name_ (otg_dev->core_if); \
++	return sprintf (buf, "%s = 0x%08x\n", _string_, val); \
++}
++#define DWC_OTG_DEVICE_ATTR_REG_STORE(_otg_attr_name_,_string_) \
++static ssize_t _otg_attr_name_##_store (struct device *_dev, struct device_attribute *attr, \
++					const char *buf, size_t count) \
++{ \
++	struct lm_device *lm_dev = container_of(_dev, struct lm_device, dev); \
++	dwc_otg_device_t *otg_dev = lm_get_drvdata(lm_dev); \
++	uint32_t val = simple_strtoul(buf, NULL, 16); \
++	dwc_otg_set_##_otg_attr_name_ (otg_dev->core_if, val); \
++	return count; \
++}
++#elif defined(PCI_INTERFACE)
++#define DWC_OTG_DEVICE_ATTR_REG_SHOW(_otg_attr_name_,_string_) \
++static ssize_t _otg_attr_name_##_show (struct device *_dev, struct device_attribute *attr, char *buf) \
++{ \
++	dwc_otg_device_t *otg_dev = dev_get_drvdata(_dev);  \
++	uint32_t val; \
++	val = dwc_otg_get_##_otg_attr_name_ (otg_dev->core_if); \
++	return sprintf (buf, "%s = 0x%08x\n", _string_, val); \
++}
++#define DWC_OTG_DEVICE_ATTR_REG_STORE(_otg_attr_name_,_string_) \
++static ssize_t _otg_attr_name_##_store (struct device *_dev, struct device_attribute *attr, \
++					const char *buf, size_t count) \
++{ \
++	dwc_otg_device_t *otg_dev = dev_get_drvdata(_dev);  \
++	uint32_t val = simple_strtoul(buf, NULL, 16); \
++	dwc_otg_set_##_otg_attr_name_ (otg_dev->core_if, val); \
++	return count; \
++}
++
++#elif defined(PLATFORM_INTERFACE)
++#include "dwc_otg_dbg.h"
++#define DWC_OTG_DEVICE_ATTR_REG_SHOW(_otg_attr_name_,_string_) \
++static ssize_t _otg_attr_name_##_show (struct device *_dev, struct device_attribute *attr, char *buf) \
++{ \
++	struct platform_device *platform_dev = container_of(_dev, struct platform_device, dev); \
++	dwc_otg_device_t *otg_dev = platform_get_drvdata(platform_dev); \
++	uint32_t val; \
++	DWC_PRINTF("%s(%p) -> platform_dev %p, otg_dev %p\n", \
++                    __func__, _dev, platform_dev, otg_dev); \
++	val = dwc_otg_get_##_otg_attr_name_ (otg_dev->core_if); \
++	return sprintf (buf, "%s = 0x%08x\n", _string_, val); \
++}
++#define DWC_OTG_DEVICE_ATTR_REG_STORE(_otg_attr_name_,_string_) \
++static ssize_t _otg_attr_name_##_store (struct device *_dev, struct device_attribute *attr, \
++					const char *buf, size_t count) \
++{ \
++	struct platform_device *platform_dev = container_of(_dev, struct platform_device, dev); \
++	dwc_otg_device_t *otg_dev = platform_get_drvdata(platform_dev); \
++	uint32_t val = simple_strtoul(buf, NULL, 16); \
++	dwc_otg_set_##_otg_attr_name_ (otg_dev->core_if, val); \
++	return count; \
++}
++
++#endif
++
++#define DWC_OTG_DEVICE_ATTR_BITFIELD_RW(_otg_attr_name_,_string_) \
++DWC_OTG_DEVICE_ATTR_BITFIELD_SHOW(_otg_attr_name_,_string_) \
++DWC_OTG_DEVICE_ATTR_BITFIELD_STORE(_otg_attr_name_,_string_) \
++DEVICE_ATTR(_otg_attr_name_,0644,_otg_attr_name_##_show,_otg_attr_name_##_store);
++
++#define DWC_OTG_DEVICE_ATTR_BITFIELD_RO(_otg_attr_name_,_string_) \
++DWC_OTG_DEVICE_ATTR_BITFIELD_SHOW(_otg_attr_name_,_string_) \
++DEVICE_ATTR(_otg_attr_name_,0444,_otg_attr_name_##_show,NULL);
++
++#define DWC_OTG_DEVICE_ATTR_REG32_RW(_otg_attr_name_,_addr_,_string_) \
++DWC_OTG_DEVICE_ATTR_REG_SHOW(_otg_attr_name_,_string_) \
++DWC_OTG_DEVICE_ATTR_REG_STORE(_otg_attr_name_,_string_) \
++DEVICE_ATTR(_otg_attr_name_,0644,_otg_attr_name_##_show,_otg_attr_name_##_store);
++
++#define DWC_OTG_DEVICE_ATTR_REG32_RO(_otg_attr_name_,_addr_,_string_) \
++DWC_OTG_DEVICE_ATTR_REG_SHOW(_otg_attr_name_,_string_) \
++DEVICE_ATTR(_otg_attr_name_,0444,_otg_attr_name_##_show,NULL);
++
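++/*
++ * For example, DWC_OTG_DEVICE_ATTR_BITFIELD_RW(hnpcapable, "HNPCapable")
++ * expands to hnpcapable_show()/hnpcapable_store() wrappers around
++ * dwc_otg_get_hnpcapable()/dwc_otg_set_hnpcapable() plus a DEVICE_ATTR()
++ * instance, giving the dev_attr_hnpcapable object that dwc_otg_attr_create()
++ * registers below.
++ */
++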
++/** @name Functions for Show/Store of Attributes */
++/**@{*/
++
++/**
++ * Helper function returning the otg_device structure of the given device
++ */
++static dwc_otg_device_t *dwc_otg_drvdev(struct device *_dev)
++{
++        dwc_otg_device_t *otg_dev;
++        DWC_OTG_GETDRVDEV(otg_dev, _dev);
++        return otg_dev;
++}
++
++/**
++ * Show the register offset of the Register Access.
++ */
++static ssize_t regoffset_show(struct device *_dev,
++			      struct device_attribute *attr, char *buf)
++{
++        dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
++	return snprintf(buf, sizeof("0xFFFFFFFF\n") + 1, "0x%08x\n",
++			otg_dev->os_dep.reg_offset);
++}
++
++/**
++ * Set the register offset for the next Register Access.
++ */
++static ssize_t regoffset_store(struct device *_dev,
++			       struct device_attribute *attr,
++			       const char *buf, size_t count)
++{
++        dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
++	uint32_t offset = simple_strtoul(buf, NULL, 16);
++#if defined(LM_INTERFACE) || defined(PLATFORM_INTERFACE)
++	if (offset < SZ_256K) {
++#elif  defined(PCI_INTERFACE)
++	if (offset < 0x00040000) {
++#endif
++		otg_dev->os_dep.reg_offset = offset;
++	} else {
++		dev_err(_dev, "invalid offset\n");
++	}
++
++	return count;
++}
++
++DEVICE_ATTR(regoffset, S_IRUGO | S_IWUSR, regoffset_show, regoffset_store);
++
++/**
++ * Show the value of the register at the offset in the reg_offset
++ * attribute.
++ */
++static ssize_t regvalue_show(struct device *_dev,
++			     struct device_attribute *attr, char *buf)
++{
++        dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
++	uint32_t val;
++	volatile uint32_t *addr;
++
++	if (otg_dev->os_dep.reg_offset != 0xFFFFFFFF && 0 != otg_dev->os_dep.base) {
++		/* Calculate the address */
++		addr = (uint32_t *) (otg_dev->os_dep.reg_offset +
++				     (uint8_t *) otg_dev->os_dep.base);
++		val = DWC_READ_REG32(addr);
++		return snprintf(buf,
++				sizeof("Reg at 0xFFFFFFFF = 0xFFFFFFFF\n") + 1,
++				"Reg at 0x%06x = 0x%08x\n", otg_dev->os_dep.reg_offset,
++				val);
++	} else {
++		dev_err(_dev, "Invalid offset (0x%0x)\n", otg_dev->os_dep.reg_offset);
++		return sprintf(buf, "invalid offset\n");
++	}
++}
++
++/**
++ * Store the value in the register at the offset in the reg_offset
++ * attribute.
++ *
++ */
++static ssize_t regvalue_store(struct device *_dev,
++			      struct device_attribute *attr,
++			      const char *buf, size_t count)
++{
++        dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
++	volatile uint32_t *addr;
++	uint32_t val = simple_strtoul(buf, NULL, 16);
++	//dev_dbg(_dev, "Offset=0x%08x Val=0x%08x\n", otg_dev->reg_offset, val);
++	if (otg_dev->os_dep.reg_offset != 0xFFFFFFFF && 0 != otg_dev->os_dep.base) {
++		/* Calculate the address */
++		addr = (uint32_t *) (otg_dev->os_dep.reg_offset +
++				     (uint8_t *) otg_dev->os_dep.base);
++		DWC_WRITE_REG32(addr, val);
++	} else {
++		dev_err(_dev, "Invalid Register Offset (0x%08x)\n",
++			otg_dev->os_dep.reg_offset);
++	}
++	return count;
++}
++
++DEVICE_ATTR(regvalue, S_IRUGO | S_IWUSR, regvalue_show, regvalue_store);
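++
++/*
++ * regoffset and regvalue work as a pair: write a register offset (relative to
++ * the mapped core register base) to regoffset, then read or write regvalue to
++ * access the 32-bit register at that offset, e.g. from the device's sysfs
++ * directory (such as /sys/devices/lm0 in the example above):
++ *   echo 0x40 > regoffset
++ *   cat regvalue
++ */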
++
++/*
++ * Attributes
++ */
++DWC_OTG_DEVICE_ATTR_BITFIELD_RO(mode, "Mode");
++DWC_OTG_DEVICE_ATTR_BITFIELD_RW(hnpcapable, "HNPCapable");
++DWC_OTG_DEVICE_ATTR_BITFIELD_RW(srpcapable, "SRPCapable");
++DWC_OTG_DEVICE_ATTR_BITFIELD_RW(hsic_connect, "HSIC Connect");
++DWC_OTG_DEVICE_ATTR_BITFIELD_RW(inv_sel_hsic, "Invert Select HSIC");
++
++//DWC_OTG_DEVICE_ATTR_BITFIELD_RW(buspower,&(otg_dev->core_if->core_global_regs->gotgctl),(1<<8),8,"Mode");
++//DWC_OTG_DEVICE_ATTR_BITFIELD_RW(bussuspend,&(otg_dev->core_if->core_global_regs->gotgctl),(1<<8),8,"Mode");
++DWC_OTG_DEVICE_ATTR_BITFIELD_RO(busconnected, "Bus Connected");
++
++DWC_OTG_DEVICE_ATTR_REG32_RW(gotgctl, 0, "GOTGCTL");
++DWC_OTG_DEVICE_ATTR_REG32_RW(gusbcfg,
++			     &(otg_dev->core_if->core_global_regs->gusbcfg),
++			     "GUSBCFG");
++DWC_OTG_DEVICE_ATTR_REG32_RW(grxfsiz,
++			     &(otg_dev->core_if->core_global_regs->grxfsiz),
++			     "GRXFSIZ");
++DWC_OTG_DEVICE_ATTR_REG32_RW(gnptxfsiz,
++			     &(otg_dev->core_if->core_global_regs->gnptxfsiz),
++			     "GNPTXFSIZ");
++DWC_OTG_DEVICE_ATTR_REG32_RW(gpvndctl,
++			     &(otg_dev->core_if->core_global_regs->gpvndctl),
++			     "GPVNDCTL");
++DWC_OTG_DEVICE_ATTR_REG32_RW(ggpio,
++			     &(otg_dev->core_if->core_global_regs->ggpio),
++			     "GGPIO");
++DWC_OTG_DEVICE_ATTR_REG32_RW(guid, &(otg_dev->core_if->core_global_regs->guid),
++			     "GUID");
++DWC_OTG_DEVICE_ATTR_REG32_RO(gsnpsid,
++			     &(otg_dev->core_if->core_global_regs->gsnpsid),
++			     "GSNPSID");
++DWC_OTG_DEVICE_ATTR_BITFIELD_RW(devspeed, "Device Speed");
++DWC_OTG_DEVICE_ATTR_BITFIELD_RO(enumspeed, "Device Enumeration Speed");
++
++DWC_OTG_DEVICE_ATTR_REG32_RO(hptxfsiz,
++			     &(otg_dev->core_if->core_global_regs->hptxfsiz),
++			     "HPTXFSIZ");
++DWC_OTG_DEVICE_ATTR_REG32_RW(hprt0, otg_dev->core_if->host_if->hprt0, "HPRT0");
++
++/**
++ * @todo Add code to initiate the HNP.
++ */
++/**
++ * Show the HNP status bit
++ */
++static ssize_t hnp_show(struct device *_dev,
++			struct device_attribute *attr, char *buf)
++{
++        dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
++	return sprintf(buf, "HstNegScs = 0x%x\n",
++		       dwc_otg_get_hnpstatus(otg_dev->core_if));
++}
++
++/**
++ * Set the HNP Request bit
++ */
++static ssize_t hnp_store(struct device *_dev,
++			 struct device_attribute *attr,
++			 const char *buf, size_t count)
++{
++        dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
++	uint32_t in = simple_strtoul(buf, NULL, 16);
++	dwc_otg_set_hnpreq(otg_dev->core_if, in);
++	return count;
++}
++
++DEVICE_ATTR(hnp, 0644, hnp_show, hnp_store);
++
++/**
++ * @todo Add code to initiate the SRP.
++ */
++/**
++ * Show the SRP status bit
++ */
++static ssize_t srp_show(struct device *_dev,
++			struct device_attribute *attr, char *buf)
++{
++#ifndef DWC_HOST_ONLY
++        dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
++	return sprintf(buf, "SesReqScs = 0x%x\n",
++		       dwc_otg_get_srpstatus(otg_dev->core_if));
++#else
++	return sprintf(buf, "Host Only Mode!\n");
++#endif
++}
++
++/**
++ * Set the SRP Request bit
++ */
++static ssize_t srp_store(struct device *_dev,
++			 struct device_attribute *attr,
++			 const char *buf, size_t count)
++{
++#ifndef DWC_HOST_ONLY
++        dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
++	dwc_otg_pcd_initiate_srp(otg_dev->pcd);
++#endif
++	return count;
++}
++
++DEVICE_ATTR(srp, 0644, srp_show, srp_store);
++
++/**
++ * @todo Need to do more for power on/off?
++ */
++/**
++ * Show the Bus Power status
++ */
++static ssize_t buspower_show(struct device *_dev,
++			     struct device_attribute *attr, char *buf)
++{
++        dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
++	return sprintf(buf, "Bus Power = 0x%x\n",
++		       dwc_otg_get_prtpower(otg_dev->core_if));
++}
++
++/**
++ * Set the Bus Power status
++ */
++static ssize_t buspower_store(struct device *_dev,
++			      struct device_attribute *attr,
++			      const char *buf, size_t count)
++{
++        dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
++	uint32_t on = simple_strtoul(buf, NULL, 16);
++	dwc_otg_set_prtpower(otg_dev->core_if, on);
++	return count;
++}
++
++DEVICE_ATTR(buspower, 0644, buspower_show, buspower_store);
++
++/**
++ * @todo Need to do more for suspend?
++ */
++/**
++ * Show the Bus Suspend status
++ */
++static ssize_t bussuspend_show(struct device *_dev,
++			       struct device_attribute *attr, char *buf)
++{
++        dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
++	return sprintf(buf, "Bus Suspend = 0x%x\n",
++		       dwc_otg_get_prtsuspend(otg_dev->core_if));
++}
++
++/**
++ * Set the Bus Suspend status
++ */
++static ssize_t bussuspend_store(struct device *_dev,
++				struct device_attribute *attr,
++				const char *buf, size_t count)
++{
++        dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
++	uint32_t in = simple_strtoul(buf, NULL, 16);
++	dwc_otg_set_prtsuspend(otg_dev->core_if, in);
++	return count;
++}
++
++DEVICE_ATTR(bussuspend, 0644, bussuspend_show, bussuspend_store);
++
++/**
++ * Show the Mode Change Ready Timer status
++ */
++static ssize_t mode_ch_tim_en_show(struct device *_dev,
++				   struct device_attribute *attr, char *buf)
++{
++        dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
++	return sprintf(buf, "Mode Change Ready Timer Enable = 0x%x\n",
++		       dwc_otg_get_mode_ch_tim(otg_dev->core_if));
++}
++
++/**
++ * Set the Mode Change Ready Timer status
++ */
++static ssize_t mode_ch_tim_en_store(struct device *_dev,
++				    struct device_attribute *attr,
++				    const char *buf, size_t count)
++{
++        dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
++	uint32_t in = simple_strtoul(buf, NULL, 16);
++	dwc_otg_set_mode_ch_tim(otg_dev->core_if, in);
++	return count;
++}
++
++DEVICE_ATTR(mode_ch_tim_en, 0644, mode_ch_tim_en_show, mode_ch_tim_en_store);
++
++/**
++ * Show the value of HFIR Frame Interval bitfield
++ */
++static ssize_t fr_interval_show(struct device *_dev,
++				struct device_attribute *attr, char *buf)
++{
++        dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
++	return sprintf(buf, "Frame Interval = 0x%x\n",
++		       dwc_otg_get_fr_interval(otg_dev->core_if));
++}
++
++/**
++ * Set the HFIR Frame Interval value
++ */
++static ssize_t fr_interval_store(struct device *_dev,
++				 struct device_attribute *attr,
++				 const char *buf, size_t count)
++{
++        dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
++	uint32_t in = simple_strtoul(buf, NULL, 10);
++	dwc_otg_set_fr_interval(otg_dev->core_if, in);
++	return count;
++}
++
++DEVICE_ATTR(fr_interval, 0644, fr_interval_show, fr_interval_store);
++
++/**
++ * Show the status of Remote Wakeup.
++ */
++static ssize_t remote_wakeup_show(struct device *_dev,
++				  struct device_attribute *attr, char *buf)
++{
++#ifndef DWC_HOST_ONLY
++        dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
++
++	return sprintf(buf,
++		       "Remote Wakeup Sig = %d Enabled = %d LPM Remote Wakeup = %d\n",
++		       dwc_otg_get_remotewakesig(otg_dev->core_if),
++		       dwc_otg_pcd_get_rmwkup_enable(otg_dev->pcd),
++		       dwc_otg_get_lpm_remotewakeenabled(otg_dev->core_if));
++#else
++	return sprintf(buf, "Host Only Mode!\n");
++#endif /* DWC_HOST_ONLY */
++}
++
++/**
++ * Initiate a remote wakeup of the host.  The Device control register
++ * Remote Wakeup Signal bit is written if the PCD Remote wakeup enable
++ * flag is set.
++ *
++ */
++static ssize_t remote_wakeup_store(struct device *_dev,
++				   struct device_attribute *attr,
++				   const char *buf, size_t count)
++{
++#ifndef DWC_HOST_ONLY
++        dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
++	uint32_t val = simple_strtoul(buf, NULL, 16);
++
++	if (val & 1) {
++		dwc_otg_pcd_remote_wakeup(otg_dev->pcd, 1);
++	} else {
++		dwc_otg_pcd_remote_wakeup(otg_dev->pcd, 0);
++	}
++#endif /* DWC_HOST_ONLY */
++	return count;
++}
++
++DEVICE_ATTR(remote_wakeup, S_IRUGO | S_IWUSR, remote_wakeup_show,
++	    remote_wakeup_store);
++
++/**
++ * Show whether the core is hibernated or not.
++ */
++static ssize_t rem_wakeup_pwrdn_show(struct device *_dev,
++				     struct device_attribute *attr, char *buf)
++{
++#ifndef DWC_HOST_ONLY
++        dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
++
++	if (dwc_otg_get_core_state(otg_dev->core_if)) {
++		DWC_PRINTF("Core is in hibernation\n");
++	} else {
++		DWC_PRINTF("Core is not in hibernation\n");
++	}
++#endif /* DWC_HOST_ONLY */
++	return 0;
++}
++
++extern int dwc_otg_device_hibernation_restore(dwc_otg_core_if_t * core_if,
++					      int rem_wakeup, int reset);
++
++/**
++ * Initiate a remote wakeup of the device to exit from hibernation.
++ */
++static ssize_t rem_wakeup_pwrdn_store(struct device *_dev,
++				      struct device_attribute *attr,
++				      const char *buf, size_t count)
++{
++#ifndef DWC_HOST_ONLY
++        dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
++	dwc_otg_device_hibernation_restore(otg_dev->core_if, 1, 0);
++#endif
++	return count;
++}
++
++DEVICE_ATTR(rem_wakeup_pwrdn, S_IRUGO | S_IWUSR, rem_wakeup_pwrdn_show,
++	    rem_wakeup_pwrdn_store);
++
++static ssize_t disconnect_us(struct device *_dev,
++			     struct device_attribute *attr,
++			     const char *buf, size_t count)
++{
++
++#ifndef DWC_HOST_ONLY
++        dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
++	uint32_t val = simple_strtoul(buf, NULL, 16);
++	DWC_PRINTF("The Passed value is %04x\n", val);
++
++	dwc_otg_pcd_disconnect_us(otg_dev->pcd, 50);
++
++#endif /* DWC_HOST_ONLY */
++	return count;
++}
++
++DEVICE_ATTR(disconnect_us, S_IWUSR, 0, disconnect_us);
++
++/**
++ * Dump global registers and either host or device registers (depending on the
++ * current mode of the core).
++ */
++static ssize_t regdump_show(struct device *_dev,
++			    struct device_attribute *attr, char *buf)
++{
++        dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
++
++	dwc_otg_dump_global_registers(otg_dev->core_if);
++	if (dwc_otg_is_host_mode(otg_dev->core_if)) {
++		dwc_otg_dump_host_registers(otg_dev->core_if);
++	} else {
++		dwc_otg_dump_dev_registers(otg_dev->core_if);
++
++	}
++	return sprintf(buf, "Register Dump\n");
++}
++
++DEVICE_ATTR(regdump, S_IRUGO, regdump_show, 0);
++
++/**
++ * Dump global registers and either host or device registers (depending on the
++ * current mode of the core).
++ */
++static ssize_t spramdump_show(struct device *_dev,
++			      struct device_attribute *attr, char *buf)
++{
++        dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
++
++	//dwc_otg_dump_spram(otg_dev->core_if);
++
++	return sprintf(buf, "SPRAM Dump\n");
++}
++
++DEVICE_ATTR(spramdump, S_IRUGO, spramdump_show, 0);
++
++/**
++ * Dump the current hcd state.
++ */
++static ssize_t hcddump_show(struct device *_dev,
++			    struct device_attribute *attr, char *buf)
++{
++#ifndef DWC_DEVICE_ONLY
++        dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
++	dwc_otg_hcd_dump_state(otg_dev->hcd);
++#endif /* DWC_DEVICE_ONLY */
++	return sprintf(buf, "HCD Dump\n");
++}
++
++DEVICE_ATTR(hcddump, S_IRUGO, hcddump_show, 0);
++
++/**
++ * Dump the average frame remaining at SOF. This can be used to
++ * determine average interrupt latency. Frame remaining is also shown for
++ * start transfer and two additional sample points.
++ */
++static ssize_t hcd_frrem_show(struct device *_dev,
++			      struct device_attribute *attr, char *buf)
++{
++#ifndef DWC_DEVICE_ONLY
++        dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
++
++	dwc_otg_hcd_dump_frrem(otg_dev->hcd);
++#endif /* DWC_DEVICE_ONLY */
++	return sprintf(buf, "HCD Dump Frame Remaining\n");
++}
++
++DEVICE_ATTR(hcd_frrem, S_IRUGO, hcd_frrem_show, 0);
++
++/**
++ * Displays the time required to read the GNPTXFSIZ register many times (the
++ * output shows the number of times the register is read).
++ */
++#define RW_REG_COUNT 10000000
++#define MSEC_PER_JIFFIE 1000/HZ
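++/*
++ * MSEC_PER_JIFFIE expands unparenthesized to 1000/HZ; in the
++ * "time * MSEC_PER_JIFFIE" expressions below this evaluates as
++ * (time * 1000) / HZ due to left-to-right evaluation, which is the intended
++ * millisecond conversion.
++ */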
++static ssize_t rd_reg_test_show(struct device *_dev,
++				struct device_attribute *attr, char *buf)
++{
++        dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
++	int i;
++	int time;
++	int start_jiffies;
++
++	printk("HZ %d, MSEC_PER_JIFFIE %d, loops_per_jiffy %lu\n",
++	       HZ, MSEC_PER_JIFFIE, loops_per_jiffy);
++	start_jiffies = jiffies;
++	for (i = 0; i < RW_REG_COUNT; i++) {
++		dwc_otg_get_gnptxfsiz(otg_dev->core_if);
++	}
++	time = jiffies - start_jiffies;
++	return sprintf(buf,
++		       "Time to read GNPTXFSIZ reg %d times: %d msecs (%d jiffies)\n",
++		       RW_REG_COUNT, time * MSEC_PER_JIFFIE, time);
++}
++
++DEVICE_ATTR(rd_reg_test, S_IRUGO, rd_reg_test_show, 0);
++
++/**
++ * Displays the time required to write the GNPTXFSIZ register many times (the
++ * output shows the number of times the register is written).
++ */
++static ssize_t wr_reg_test_show(struct device *_dev,
++				struct device_attribute *attr, char *buf)
++{
++        dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
++	uint32_t reg_val;
++	int i;
++	int time;
++	int start_jiffies;
++
++	printk("HZ %d, MSEC_PER_JIFFIE %d, loops_per_jiffy %lu\n",
++	       HZ, MSEC_PER_JIFFIE, loops_per_jiffy);
++	reg_val = dwc_otg_get_gnptxfsiz(otg_dev->core_if);
++	start_jiffies = jiffies;
++	for (i = 0; i < RW_REG_COUNT; i++) {
++		dwc_otg_set_gnptxfsiz(otg_dev->core_if, reg_val);
++	}
++	time = jiffies - start_jiffies;
++	return sprintf(buf,
++		       "Time to write GNPTXFSIZ reg %d times: %d msecs (%d jiffies)\n",
++		       RW_REG_COUNT, time * MSEC_PER_JIFFIE, time);
++}
++
++DEVICE_ATTR(wr_reg_test, S_IRUGO, wr_reg_test_show, 0);
++
++#ifdef CONFIG_USB_DWC_OTG_LPM
++
++/**
++* Show the lpm_response attribute.
++*/
++static ssize_t lpmresp_show(struct device *_dev,
++			    struct device_attribute *attr, char *buf)
++{
++        dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
++
++	if (!dwc_otg_get_param_lpm_enable(otg_dev->core_if))
++		return sprintf(buf, "** LPM is DISABLED **\n");
++
++	if (!dwc_otg_is_device_mode(otg_dev->core_if)) {
++		return sprintf(buf, "** Current mode is not device mode\n");
++	}
++	return sprintf(buf, "lpm_response = %d\n",
++		       dwc_otg_get_lpmresponse(otg_dev->core_if));
++}
++
++/**
++* Store the lpm_response attribute.
++*/
++static ssize_t lpmresp_store(struct device *_dev,
++			     struct device_attribute *attr,
++			     const char *buf, size_t count)
++{
++        dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
++	uint32_t val = simple_strtoul(buf, NULL, 16);
++
++	if (!dwc_otg_get_param_lpm_enable(otg_dev->core_if)) {
++		return 0;
++	}
++
++	if (!dwc_otg_is_device_mode(otg_dev->core_if)) {
++		return 0;
++	}
++
++	dwc_otg_set_lpmresponse(otg_dev->core_if, val);
++	return count;
++}
++
++DEVICE_ATTR(lpm_response, S_IRUGO | S_IWUSR, lpmresp_show, lpmresp_store);
++
++/**
++* Show the sleep_status attribute.
++*/
++static ssize_t sleepstatus_show(struct device *_dev,
++				struct device_attribute *attr, char *buf)
++{
++        dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
++	return sprintf(buf, "Sleep Status = %d\n",
++		       dwc_otg_get_lpm_portsleepstatus(otg_dev->core_if));
++}
++
++/**
++ * Store the sleep_status attribute.
++ */
++static ssize_t sleepstatus_store(struct device *_dev,
++				 struct device_attribute *attr,
++				 const char *buf, size_t count)
++{
++        dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
++	dwc_otg_core_if_t *core_if = otg_dev->core_if;
++
++	if (dwc_otg_get_lpm_portsleepstatus(otg_dev->core_if)) {
++		if (dwc_otg_is_host_mode(core_if)) {
++
++			DWC_PRINTF("Host initiated resume\n");
++			dwc_otg_set_prtresume(otg_dev->core_if, 1);
++		}
++	}
++
++	return count;
++}
++
++DEVICE_ATTR(sleep_status, S_IRUGO | S_IWUSR, sleepstatus_show,
++	    sleepstatus_store);
++
++#endif /* CONFIG_USB_DWC_OTG_LPM */
++
++/**@}*/
++
++/**
++ * Create the device files
++ */
++void dwc_otg_attr_create(
++#ifdef LM_INTERFACE
++	struct lm_device *dev
++#elif  defined(PCI_INTERFACE)
++	struct pci_dev *dev
++#elif  defined(PLATFORM_INTERFACE)
++        struct platform_device *dev
++#endif
++    )
++{
++	int error;
++
++	error = device_create_file(&dev->dev, &dev_attr_regoffset);
++	error = device_create_file(&dev->dev, &dev_attr_regvalue);
++	error = device_create_file(&dev->dev, &dev_attr_mode);
++	error = device_create_file(&dev->dev, &dev_attr_hnpcapable);
++	error = device_create_file(&dev->dev, &dev_attr_srpcapable);
++	error = device_create_file(&dev->dev, &dev_attr_hsic_connect);
++	error = device_create_file(&dev->dev, &dev_attr_inv_sel_hsic);
++	error = device_create_file(&dev->dev, &dev_attr_hnp);
++	error = device_create_file(&dev->dev, &dev_attr_srp);
++	error = device_create_file(&dev->dev, &dev_attr_buspower);
++	error = device_create_file(&dev->dev, &dev_attr_bussuspend);
++	error = device_create_file(&dev->dev, &dev_attr_mode_ch_tim_en);
++	error = device_create_file(&dev->dev, &dev_attr_fr_interval);
++	error = device_create_file(&dev->dev, &dev_attr_busconnected);
++	error = device_create_file(&dev->dev, &dev_attr_gotgctl);
++	error = device_create_file(&dev->dev, &dev_attr_gusbcfg);
++	error = device_create_file(&dev->dev, &dev_attr_grxfsiz);
++	error = device_create_file(&dev->dev, &dev_attr_gnptxfsiz);
++	error = device_create_file(&dev->dev, &dev_attr_gpvndctl);
++	error = device_create_file(&dev->dev, &dev_attr_ggpio);
++	error = device_create_file(&dev->dev, &dev_attr_guid);
++	error = device_create_file(&dev->dev, &dev_attr_gsnpsid);
++	error = device_create_file(&dev->dev, &dev_attr_devspeed);
++	error = device_create_file(&dev->dev, &dev_attr_enumspeed);
++	error = device_create_file(&dev->dev, &dev_attr_hptxfsiz);
++	error = device_create_file(&dev->dev, &dev_attr_hprt0);
++	error = device_create_file(&dev->dev, &dev_attr_remote_wakeup);
++	error = device_create_file(&dev->dev, &dev_attr_rem_wakeup_pwrdn);
++	error = device_create_file(&dev->dev, &dev_attr_disconnect_us);
++	error = device_create_file(&dev->dev, &dev_attr_regdump);
++	error = device_create_file(&dev->dev, &dev_attr_spramdump);
++	error = device_create_file(&dev->dev, &dev_attr_hcddump);
++	error = device_create_file(&dev->dev, &dev_attr_hcd_frrem);
++	error = device_create_file(&dev->dev, &dev_attr_rd_reg_test);
++	error = device_create_file(&dev->dev, &dev_attr_wr_reg_test);
++#ifdef CONFIG_USB_DWC_OTG_LPM
++	error = device_create_file(&dev->dev, &dev_attr_lpm_response);
++	error = device_create_file(&dev->dev, &dev_attr_sleep_status);
++#endif
++}
++
++/**
++ * Remove the device files
++ */
++void dwc_otg_attr_remove(
++#ifdef LM_INTERFACE
++	struct lm_device *dev
++#elif  defined(PCI_INTERFACE)
++	struct pci_dev *dev
++#elif  defined(PLATFORM_INTERFACE)
++	struct platform_device *dev
++#endif
++    )
++{
++	device_remove_file(&dev->dev, &dev_attr_regoffset);
++	device_remove_file(&dev->dev, &dev_attr_regvalue);
++	device_remove_file(&dev->dev, &dev_attr_mode);
++	device_remove_file(&dev->dev, &dev_attr_hnpcapable);
++	device_remove_file(&dev->dev, &dev_attr_srpcapable);
++	device_remove_file(&dev->dev, &dev_attr_hsic_connect);
++	device_remove_file(&dev->dev, &dev_attr_inv_sel_hsic);
++	device_remove_file(&dev->dev, &dev_attr_hnp);
++	device_remove_file(&dev->dev, &dev_attr_srp);
++	device_remove_file(&dev->dev, &dev_attr_buspower);
++	device_remove_file(&dev->dev, &dev_attr_bussuspend);
++	device_remove_file(&dev->dev, &dev_attr_mode_ch_tim_en);
++	device_remove_file(&dev->dev, &dev_attr_fr_interval);
++	device_remove_file(&dev->dev, &dev_attr_busconnected);
++	device_remove_file(&dev->dev, &dev_attr_gotgctl);
++	device_remove_file(&dev->dev, &dev_attr_gusbcfg);
++	device_remove_file(&dev->dev, &dev_attr_grxfsiz);
++	device_remove_file(&dev->dev, &dev_attr_gnptxfsiz);
++	device_remove_file(&dev->dev, &dev_attr_gpvndctl);
++	device_remove_file(&dev->dev, &dev_attr_ggpio);
++	device_remove_file(&dev->dev, &dev_attr_guid);
++	device_remove_file(&dev->dev, &dev_attr_gsnpsid);
++	device_remove_file(&dev->dev, &dev_attr_devspeed);
++	device_remove_file(&dev->dev, &dev_attr_enumspeed);
++	device_remove_file(&dev->dev, &dev_attr_hptxfsiz);
++	device_remove_file(&dev->dev, &dev_attr_hprt0);
++	device_remove_file(&dev->dev, &dev_attr_remote_wakeup);
++	device_remove_file(&dev->dev, &dev_attr_rem_wakeup_pwrdn);
++	device_remove_file(&dev->dev, &dev_attr_disconnect_us);
++	device_remove_file(&dev->dev, &dev_attr_regdump);
++	device_remove_file(&dev->dev, &dev_attr_spramdump);
++	device_remove_file(&dev->dev, &dev_attr_hcddump);
++	device_remove_file(&dev->dev, &dev_attr_hcd_frrem);
++	device_remove_file(&dev->dev, &dev_attr_rd_reg_test);
++	device_remove_file(&dev->dev, &dev_attr_wr_reg_test);
++#ifdef CONFIG_USB_DWC_OTG_LPM
++	device_remove_file(&dev->dev, &dev_attr_lpm_response);
++	device_remove_file(&dev->dev, &dev_attr_sleep_status);
++#endif
++}
+--- /dev/null
++++ b/drivers/usb/host/dwc_otg/dwc_otg_attr.h
+@@ -0,0 +1,89 @@
++/* ==========================================================================
++ * $File: //dwh/usb_iip/dev/software/otg/linux/drivers/dwc_otg_attr.h $
++ * $Revision: #13 $
++ * $Date: 2010/06/21 $
++ * $Change: 1532021 $
++ *
++ * Synopsys HS OTG Linux Software Driver and documentation (hereinafter,
++ * "Software") is an Unsupported proprietary work of Synopsys, Inc. unless
++ * otherwise expressly agreed to in writing between Synopsys and you.
++ *
++ * The Software IS NOT an item of Licensed Software or Licensed Product under
++ * any End User Software License Agreement or Agreement for Licensed Product
++ * with Synopsys or any supplement thereto. You are permitted to use and
++ * redistribute this Software in source and binary forms, with or without
++ * modification, provided that redistributions of source code must retain this
++ * notice. You may not view, use, disclose, copy or distribute this file or
++ * any information contained herein except pursuant to this license grant from
++ * Synopsys. If you do not agree with this notice, including the disclaimer
++ * below, then you are not authorized to use the Software.
++ *
++ * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS" BASIS
++ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
++ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
++ * ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS BE LIABLE FOR ANY DIRECT,
++ * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
++ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
++ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
++ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
++ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
++ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
++ * DAMAGE.
++ * ========================================================================== */
++
++#if !defined(__DWC_OTG_ATTR_H__)
++#define __DWC_OTG_ATTR_H__
++
++/** @file
++ * This file contains the interface to the Linux device attributes.
++ */
++extern struct device_attribute dev_attr_regoffset;
++extern struct device_attribute dev_attr_regvalue;
++
++extern struct device_attribute dev_attr_mode;
++extern struct device_attribute dev_attr_hnpcapable;
++extern struct device_attribute dev_attr_srpcapable;
++extern struct device_attribute dev_attr_hnp;
++extern struct device_attribute dev_attr_srp;
++extern struct device_attribute dev_attr_buspower;
++extern struct device_attribute dev_attr_bussuspend;
++extern struct device_attribute dev_attr_mode_ch_tim_en;
++extern struct device_attribute dev_attr_fr_interval;
++extern struct device_attribute dev_attr_busconnected;
++extern struct device_attribute dev_attr_gotgctl;
++extern struct device_attribute dev_attr_gusbcfg;
++extern struct device_attribute dev_attr_grxfsiz;
++extern struct device_attribute dev_attr_gnptxfsiz;
++extern struct device_attribute dev_attr_gpvndctl;
++extern struct device_attribute dev_attr_ggpio;
++extern struct device_attribute dev_attr_guid;
++extern struct device_attribute dev_attr_gsnpsid;
++extern struct device_attribute dev_attr_devspeed;
++extern struct device_attribute dev_attr_enumspeed;
++extern struct device_attribute dev_attr_hptxfsiz;
++extern struct device_attribute dev_attr_hprt0;
++#ifdef CONFIG_USB_DWC_OTG_LPM
++extern struct device_attribute dev_attr_lpm_response;
++extern struct device_attribute dev_attr_sleep_status;
++#endif
++
++void dwc_otg_attr_create(
++#ifdef LM_INTERFACE
++				struct lm_device *dev
++#elif  defined(PCI_INTERFACE)
++				struct pci_dev *dev
++#elif  defined(PLATFORM_INTERFACE)
++	struct platform_device *dev
++#endif
++    );
++
++void dwc_otg_attr_remove(
++#ifdef LM_INTERFACE
++				struct lm_device *dev
++#elif  defined(PCI_INTERFACE)
++				struct pci_dev *dev
++#elif  defined(PLATFORM_INTERFACE)
++	struct platform_device *dev
++#endif
++    );
++#endif
+--- /dev/null
++++ b/drivers/usb/host/dwc_otg/dwc_otg_cfi.c
+@@ -0,0 +1,1876 @@
++/* ==========================================================================
++ * Synopsys HS OTG Linux Software Driver and documentation (hereinafter,
++ * "Software") is an Unsupported proprietary work of Synopsys, Inc. unless
++ * otherwise expressly agreed to in writing between Synopsys and you.
++ *
++ * The Software IS NOT an item of Licensed Software or Licensed Product under
++ * any End User Software License Agreement or Agreement for Licensed Product
++ * with Synopsys or any supplement thereto. You are permitted to use and
++ * redistribute this Software in source and binary forms, with or without
++ * modification, provided that redistributions of source code must retain this
++ * notice. You may not view, use, disclose, copy or distribute this file or
++ * any information contained herein except pursuant to this license grant from
++ * Synopsys. If you do not agree with this notice, including the disclaimer
++ * below, then you are not authorized to use the Software.
++ *
++ * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS" BASIS
++ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
++ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
++ * ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS BE LIABLE FOR ANY DIRECT,
++ * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
++ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
++ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
++ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
++ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
++ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
++ * DAMAGE.
++ * ========================================================================== */
++
++/** @file
++ *
++ * This file contains most of the CFI (Core Feature Interface)
++ * implementation for the OTG.
++ */
++
++#ifdef DWC_UTE_CFI
++
++#include "dwc_otg_pcd.h"
++#include "dwc_otg_cfi.h"
++
++/** This definition should actually migrate to the Portability Library */
++#define DWC_CONSTANT_CPU_TO_LE16(x) (x)
++
++extern dwc_otg_pcd_ep_t *get_ep_by_addr(dwc_otg_pcd_t * pcd, u16 wIndex);
++
++static int cfi_core_features_buf(uint8_t * buf, uint16_t buflen);
++static int cfi_get_feature_value(uint8_t * buf, uint16_t buflen,
++				 struct dwc_otg_pcd *pcd,
++				 struct cfi_usb_ctrlrequest *ctrl_req);
++static int cfi_set_feature_value(struct dwc_otg_pcd *pcd);
++static int cfi_ep_get_sg_val(uint8_t * buf, struct dwc_otg_pcd *pcd,
++			     struct cfi_usb_ctrlrequest *req);
++static int cfi_ep_get_concat_val(uint8_t * buf, struct dwc_otg_pcd *pcd,
++				 struct cfi_usb_ctrlrequest *req);
++static int cfi_ep_get_align_val(uint8_t * buf, struct dwc_otg_pcd *pcd,
++				struct cfi_usb_ctrlrequest *req);
++static int cfi_preproc_reset(struct dwc_otg_pcd *pcd,
++			     struct cfi_usb_ctrlrequest *req);
++static void cfi_free_ep_bs_dyn_data(cfi_ep_t * cfiep);
++
++static uint16_t get_dfifo_size(dwc_otg_core_if_t * core_if);
++static int32_t get_rxfifo_size(dwc_otg_core_if_t * core_if, uint16_t wValue);
++static int32_t get_txfifo_size(struct dwc_otg_pcd *pcd, uint16_t wValue);
++
++static uint8_t resize_fifos(dwc_otg_core_if_t * core_if);
++
++/** This is the header of the all features descriptor */
++static cfi_all_features_header_t all_props_desc_header = {
++	.wVersion = DWC_CONSTANT_CPU_TO_LE16(0x100),
++	.wCoreID = DWC_CONSTANT_CPU_TO_LE16(CFI_CORE_ID_OTG),
++	.wNumFeatures = DWC_CONSTANT_CPU_TO_LE16(9),
++};
++
++/** This is an array of statically allocated feature descriptors */
++static cfi_feature_desc_header_t prop_descs[] = {
++
++	/* FT_ID_DMA_MODE */
++	{
++	 .wFeatureID = DWC_CONSTANT_CPU_TO_LE16(FT_ID_DMA_MODE),
++	 .bmAttributes = CFI_FEATURE_ATTR_RW,
++	 .wDataLength = DWC_CONSTANT_CPU_TO_LE16(1),
++	 },
++
++	/* FT_ID_DMA_BUFFER_SETUP */
++	{
++	 .wFeatureID = DWC_CONSTANT_CPU_TO_LE16(FT_ID_DMA_BUFFER_SETUP),
++	 .bmAttributes = CFI_FEATURE_ATTR_RW,
++	 .wDataLength = DWC_CONSTANT_CPU_TO_LE16(6),
++	 },
++
++	/* FT_ID_DMA_BUFF_ALIGN */
++	{
++	 .wFeatureID = DWC_CONSTANT_CPU_TO_LE16(FT_ID_DMA_BUFF_ALIGN),
++	 .bmAttributes = CFI_FEATURE_ATTR_RW,
++	 .wDataLength = DWC_CONSTANT_CPU_TO_LE16(2),
++	 },
++
++	/* FT_ID_DMA_CONCAT_SETUP */
++	{
++	 .wFeatureID = DWC_CONSTANT_CPU_TO_LE16(FT_ID_DMA_CONCAT_SETUP),
++	 .bmAttributes = CFI_FEATURE_ATTR_RW,
++	 //.wDataLength  = DWC_CONSTANT_CPU_TO_LE16(6),
++	 },
++
++	/* FT_ID_DMA_CIRCULAR */
++	{
++	 .wFeatureID = DWC_CONSTANT_CPU_TO_LE16(FT_ID_DMA_CIRCULAR),
++	 .bmAttributes = CFI_FEATURE_ATTR_RW,
++	 .wDataLength = DWC_CONSTANT_CPU_TO_LE16(6),
++	 },
++
++	/* FT_ID_THRESHOLD_SETUP */
++	{
++	 .wFeatureID = DWC_CONSTANT_CPU_TO_LE16(FT_ID_THRESHOLD_SETUP),
++	 .bmAttributes = CFI_FEATURE_ATTR_RW,
++	 .wDataLength = DWC_CONSTANT_CPU_TO_LE16(6),
++	 },
++
++	/* FT_ID_DFIFO_DEPTH */
++	{
++	 .wFeatureID = DWC_CONSTANT_CPU_TO_LE16(FT_ID_DFIFO_DEPTH),
++	 .bmAttributes = CFI_FEATURE_ATTR_RO,
++	 .wDataLength = DWC_CONSTANT_CPU_TO_LE16(2),
++	 },
++
++	/* FT_ID_TX_FIFO_DEPTH */
++	{
++	 .wFeatureID = DWC_CONSTANT_CPU_TO_LE16(FT_ID_TX_FIFO_DEPTH),
++	 .bmAttributes = CFI_FEATURE_ATTR_RW,
++	 .wDataLength = DWC_CONSTANT_CPU_TO_LE16(2),
++	 },
++
++	/* FT_ID_RX_FIFO_DEPTH */
++	{
++	 .wFeatureID = DWC_CONSTANT_CPU_TO_LE16(FT_ID_RX_FIFO_DEPTH),
++	 .bmAttributes = CFI_FEATURE_ATTR_RW,
++	 .wDataLength = DWC_CONSTANT_CPU_TO_LE16(2),
++	 }
++};
++
++/** The table of feature names */
++cfi_string_t prop_name_table[] = {
++	{FT_ID_DMA_MODE, "dma_mode"},
++	{FT_ID_DMA_BUFFER_SETUP, "buffer_setup"},
++	{FT_ID_DMA_BUFF_ALIGN, "buffer_align"},
++	{FT_ID_DMA_CONCAT_SETUP, "concat_setup"},
++	{FT_ID_DMA_CIRCULAR, "buffer_circular"},
++	{FT_ID_THRESHOLD_SETUP, "threshold_setup"},
++	{FT_ID_DFIFO_DEPTH, "dfifo_depth"},
++	{FT_ID_TX_FIFO_DEPTH, "txfifo_depth"},
++	{FT_ID_RX_FIFO_DEPTH, "rxfifo_depth"},
++	{}
++};
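++/*
++ * The empty initializer above terminates the table; get_prop_name() below
++ * stops at the first entry whose string pointer is NULL.
++ */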
++
++/************************************************************************/
++
++/**
++ * Returns the name of the feature by its ID
++ * or NULL if no feature ID matches.
++ *
++ */
++const uint8_t *get_prop_name(uint16_t prop_id, int *len)
++{
++	cfi_string_t *pstr;
++	*len = 0;
++
++	for (pstr = prop_name_table; pstr && pstr->s; pstr++) {
++		if (pstr->id == prop_id) {
++			*len = DWC_STRLEN(pstr->s);
++			return pstr->s;
++		}
++	}
++	return NULL;
++}
++
++/**
++ * This function handles all CFI specific control requests.
++ *
++ * Return a negative value to stall the DCE.
++ */
++int cfi_setup(struct dwc_otg_pcd *pcd, struct cfi_usb_ctrlrequest *ctrl)
++{
++	int retval = 0;
++	dwc_otg_pcd_ep_t *ep = NULL;
++	cfiobject_t *cfi = pcd->cfi;
++	struct dwc_otg_core_if *coreif = GET_CORE_IF(pcd);
++	uint16_t wLen = DWC_LE16_TO_CPU(&ctrl->wLength);
++	uint16_t wValue = DWC_LE16_TO_CPU(&ctrl->wValue);
++	uint16_t wIndex = DWC_LE16_TO_CPU(&ctrl->wIndex);
++	uint32_t regaddr = 0;
++	uint32_t regval = 0;
++
++	/* Save this Control Request in the CFI object.
++	 * The data field will be assigned in the data stage completion CB function.
++	 */
++	cfi->ctrl_req = *ctrl;
++	cfi->ctrl_req.data = NULL;
++
++	cfi->need_gadget_att = 0;
++	cfi->need_status_in_complete = 0;
++
++	switch (ctrl->bRequest) {
++	case VEN_CORE_GET_FEATURES:
++		retval = cfi_core_features_buf(cfi->buf_in.buf, CFI_IN_BUF_LEN);
++		if (retval >= 0) {
++			//dump_msg(cfi->buf_in.buf, retval);
++			ep = &pcd->ep0;
++
++			retval = min((uint16_t) retval, wLen);
++			/* Transfer this buffer to the host through the EP0-IN EP */
++			ep->dwc_ep.dma_addr = cfi->buf_in.addr;
++			ep->dwc_ep.start_xfer_buff = cfi->buf_in.buf;
++			ep->dwc_ep.xfer_buff = cfi->buf_in.buf;
++			ep->dwc_ep.xfer_len = retval;
++			ep->dwc_ep.xfer_count = 0;
++			ep->dwc_ep.sent_zlp = 0;
++			ep->dwc_ep.total_len = ep->dwc_ep.xfer_len;
++
++			pcd->ep0_pending = 1;
++			dwc_otg_ep0_start_transfer(coreif, &ep->dwc_ep);
++		}
++		retval = 0;
++		break;
++
++	case VEN_CORE_GET_FEATURE:
++		CFI_INFO("VEN_CORE_GET_FEATURE\n");
++		retval = cfi_get_feature_value(cfi->buf_in.buf, CFI_IN_BUF_LEN,
++					       pcd, ctrl);
++		if (retval >= 0) {
++			ep = &pcd->ep0;
++
++			retval = min((uint16_t) retval, wLen);
++			/* Transfer this buffer to the host through the EP0-IN EP */
++			ep->dwc_ep.dma_addr = cfi->buf_in.addr;
++			ep->dwc_ep.start_xfer_buff = cfi->buf_in.buf;
++			ep->dwc_ep.xfer_buff = cfi->buf_in.buf;
++			ep->dwc_ep.xfer_len = retval;
++			ep->dwc_ep.xfer_count = 0;
++			ep->dwc_ep.sent_zlp = 0;
++			ep->dwc_ep.total_len = ep->dwc_ep.xfer_len;
++
++			pcd->ep0_pending = 1;
++			dwc_otg_ep0_start_transfer(coreif, &ep->dwc_ep);
++		}
++		CFI_INFO("VEN_CORE_GET_FEATURE=%d\n", retval);
++		dump_msg(cfi->buf_in.buf, retval);
++		break;
++
++	case VEN_CORE_SET_FEATURE:
++		CFI_INFO("VEN_CORE_SET_FEATURE\n");
++		/* Set up an XFER to get the data stage of the control request,
++		 * which is the new value of the feature to be modified.
++		 */
++		ep = &pcd->ep0;
++		ep->dwc_ep.is_in = 0;
++		ep->dwc_ep.dma_addr = cfi->buf_out.addr;
++		ep->dwc_ep.start_xfer_buff = cfi->buf_out.buf;
++		ep->dwc_ep.xfer_buff = cfi->buf_out.buf;
++		ep->dwc_ep.xfer_len = wLen;
++		ep->dwc_ep.xfer_count = 0;
++		ep->dwc_ep.sent_zlp = 0;
++		ep->dwc_ep.total_len = ep->dwc_ep.xfer_len;
++
++		pcd->ep0_pending = 1;
++		/* Read the control write's data stage */
++		dwc_otg_ep0_start_transfer(coreif, &ep->dwc_ep);
++		retval = 0;
++		break;
++
++	case VEN_CORE_RESET_FEATURES:
++		CFI_INFO("VEN_CORE_RESET_FEATURES\n");
++		cfi->need_gadget_att = 1;
++		cfi->need_status_in_complete = 1;
++		retval = cfi_preproc_reset(pcd, ctrl);
++		CFI_INFO("VEN_CORE_RESET_FEATURES = (%d)\n", retval);
++		break;
++
++	case VEN_CORE_ACTIVATE_FEATURES:
++		CFI_INFO("VEN_CORE_ACTIVATE_FEATURES\n");
++		break;
++
++	case VEN_CORE_READ_REGISTER:
++		CFI_INFO("VEN_CORE_READ_REGISTER\n");
++		/* wValue optionally contains the HI WORD of the register offset and
++		 * wIndex contains the LOW WORD of the register offset
++		 */
++		if (wValue == 0) {
++			/* @TODO - MAS - fix the access to the base field */
++			regaddr = 0;
++			//regaddr = (uint32_t) pcd->otg_dev->os_dep.base;
++			//GET_CORE_IF(pcd)->co
++			regaddr |= wIndex;
++		} else {
++			regaddr = (wValue << 16) | wIndex;
++		}
++
++		/* Read a 32-bit value of the memory at the regaddr */
++		regval = DWC_READ_REG32((uint32_t *) regaddr);
++
++		ep = &pcd->ep0;
++		dwc_memcpy(cfi->buf_in.buf, &regval, sizeof(uint32_t));
++		ep->dwc_ep.is_in = 1;
++		ep->dwc_ep.dma_addr = cfi->buf_in.addr;
++		ep->dwc_ep.start_xfer_buff = cfi->buf_in.buf;
++		ep->dwc_ep.xfer_buff = cfi->buf_in.buf;
++		ep->dwc_ep.xfer_len = wLen;
++		ep->dwc_ep.xfer_count = 0;
++		ep->dwc_ep.sent_zlp = 0;
++		ep->dwc_ep.total_len = ep->dwc_ep.xfer_len;
++
++		pcd->ep0_pending = 1;
++		dwc_otg_ep0_start_transfer(coreif, &ep->dwc_ep);
++		cfi->need_gadget_att = 0;
++		retval = 0;
++		break;
++
++	case VEN_CORE_WRITE_REGISTER:
++		CFI_INFO("VEN_CORE_WRITE_REGISTER\n");
++		/* Set up an XFER to get the data stage of the control request,
++		 * which is the new value of the register to be modified.
++		 */
++		ep = &pcd->ep0;
++		ep->dwc_ep.is_in = 0;
++		ep->dwc_ep.dma_addr = cfi->buf_out.addr;
++		ep->dwc_ep.start_xfer_buff = cfi->buf_out.buf;
++		ep->dwc_ep.xfer_buff = cfi->buf_out.buf;
++		ep->dwc_ep.xfer_len = wLen;
++		ep->dwc_ep.xfer_count = 0;
++		ep->dwc_ep.sent_zlp = 0;
++		ep->dwc_ep.total_len = ep->dwc_ep.xfer_len;
++
++		pcd->ep0_pending = 1;
++		/* Read the control write's data stage */
++		dwc_otg_ep0_start_transfer(coreif, &ep->dwc_ep);
++		retval = 0;
++		break;
++
++	default:
++		retval = -DWC_E_NOT_SUPPORTED;
++		break;
++	}
++
++	return retval;
++}
++
++/**
++ * This function prepares the core features descriptors and copies its
++ * raw representation into the buffer <buf>.
++ *
++ * The buffer structure is as follows:
++ *	all_features_header (8 bytes)
++ *	features_#1 (8 bytes + feature name string length)
++ *	features_#2 (8 bytes + feature name string length)
++ *	.....
++ *	features_#n - where n=the total count of feature descriptors
++ */
++static int cfi_core_features_buf(uint8_t * buf, uint16_t buflen)
++{
++	cfi_feature_desc_header_t *prop_hdr = prop_descs;
++	cfi_feature_desc_header_t *prop;
++	cfi_all_features_header_t *all_props_hdr = &all_props_desc_header;
++	cfi_all_features_header_t *tmp;
++	uint8_t *tmpbuf = buf;
++	const uint8_t *pname = NULL;
++	int i, j, namelen = 0, totlen;
++
++	/* Prepare and copy the core features into the buffer */
++	CFI_INFO("%s:\n", __func__);
++
++	tmp = (cfi_all_features_header_t *) tmpbuf;
++	*tmp = *all_props_hdr;
++	tmpbuf += CFI_ALL_FEATURES_HDR_LEN;
++
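++	/*
++	 * The element count is derived with sizeof(cfi_all_features_header_t);
++	 * per the layout description above both header types are 8 bytes, so
++	 * this matches sizeof(cfi_feature_desc_header_t).
++	 */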
++	j = sizeof(prop_descs) / sizeof(cfi_all_features_header_t);
++	for (i = 0; i < j; i++, prop_hdr++) {
++		pname = get_prop_name(prop_hdr->wFeatureID, &namelen);
++		prop = (cfi_feature_desc_header_t *) tmpbuf;
++		*prop = *prop_hdr;
++
++		prop->bNameLen = namelen;
++		prop->wLength =
++		    DWC_CONSTANT_CPU_TO_LE16(CFI_FEATURE_DESC_HDR_LEN +
++					     namelen);
++
++		tmpbuf += CFI_FEATURE_DESC_HDR_LEN;
++		dwc_memcpy(tmpbuf, pname, namelen);
++		tmpbuf += namelen;
++	}
++
++	totlen = tmpbuf - buf;
++
++	if (totlen > 0) {
++		tmp = (cfi_all_features_header_t *) buf;
++		tmp->wTotalLen = DWC_CONSTANT_CPU_TO_LE16(totlen);
++	}
++
++	return totlen;
++}
++
++/**
++ * This function releases all the dynamic memory in the CFI object.
++ */
++static void cfi_release(cfiobject_t * cfiobj)
++{
++	cfi_ep_t *cfiep;
++	dwc_list_link_t *tmp;
++
++	CFI_INFO("%s\n", __func__);
++
++	if (cfiobj->buf_in.buf) {
++		DWC_DMA_FREE(CFI_IN_BUF_LEN, cfiobj->buf_in.buf,
++			     cfiobj->buf_in.addr);
++		cfiobj->buf_in.buf = NULL;
++	}
++
++	if (cfiobj->buf_out.buf) {
++		DWC_DMA_FREE(CFI_OUT_BUF_LEN, cfiobj->buf_out.buf,
++			     cfiobj->buf_out.addr);
++		cfiobj->buf_out.buf = NULL;
++	}
++
++	/* Free the Buffer Setup values for each EP */
++	//list_for_each_entry(cfiep, &cfiobj->active_eps, lh) {
++	DWC_LIST_FOREACH(tmp, &cfiobj->active_eps) {
++		cfiep = DWC_LIST_ENTRY(tmp, struct cfi_ep, lh);
++		cfi_free_ep_bs_dyn_data(cfiep);
++	}
++}
++
++/**
++ * This function frees the dynamically allocated EP buffer setup data.
++ */
++static void cfi_free_ep_bs_dyn_data(cfi_ep_t * cfiep)
++{
++	if (cfiep->bm_sg) {
++		DWC_FREE(cfiep->bm_sg);
++		cfiep->bm_sg = NULL;
++	}
++
++	if (cfiep->bm_align) {
++		DWC_FREE(cfiep->bm_align);
++		cfiep->bm_align = NULL;
++	}
++
++	if (cfiep->bm_concat) {
++		if (NULL != cfiep->bm_concat->wTxBytes) {
++			DWC_FREE(cfiep->bm_concat->wTxBytes);
++			cfiep->bm_concat->wTxBytes = NULL;
++		}
++		DWC_FREE(cfiep->bm_concat);
++		cfiep->bm_concat = NULL;
++	}
++}
++
++/**
++ * This function initializes the default values of the features
++ * for a specific endpoint and should be called only once, the first
++ * time the EP is enabled.
++ */
++static int cfi_ep_init_defaults(struct dwc_otg_pcd *pcd, cfi_ep_t * cfiep)
++{
++	int retval = 0;
++
++	cfiep->bm_sg = DWC_ALLOC(sizeof(ddma_sg_buffer_setup_t));
++	if (NULL == cfiep->bm_sg) {
++		CFI_INFO("Failed to allocate memory for SG feature value\n");
++		return -DWC_E_NO_MEMORY;
++	}
++	dwc_memset(cfiep->bm_sg, 0, sizeof(ddma_sg_buffer_setup_t));
++
++	/* For the Concatenation feature's default value we do not allocate
++	 * memory for the wTxBytes field - it will be done in the set_feature_value
++	 * request handler.
++	 */
++	cfiep->bm_concat = DWC_ALLOC(sizeof(ddma_concat_buffer_setup_t));
++	if (NULL == cfiep->bm_concat) {
++		CFI_INFO
++		    ("Failed to allocate memory for CONCATENATION feature value\n");
++		DWC_FREE(cfiep->bm_sg);
++		return -DWC_E_NO_MEMORY;
++	}
++	dwc_memset(cfiep->bm_concat, 0, sizeof(ddma_concat_buffer_setup_t));
++
++	cfiep->bm_align = DWC_ALLOC(sizeof(ddma_align_buffer_setup_t));
++	if (NULL == cfiep->bm_align) {
++		CFI_INFO
++		    ("Failed to allocate memory for Alignment feature value\n");
++		DWC_FREE(cfiep->bm_sg);
++		DWC_FREE(cfiep->bm_concat);
++		return -DWC_E_NO_MEMORY;
++	}
++	dwc_memset(cfiep->bm_align, 0, sizeof(ddma_align_buffer_setup_t));
++
++	return retval;
++}
++
++/**
++ * The callback function that notifies the CFI on the activation of
++ * an endpoint in the PCD. The following steps are done in this function:
++ *
++ *	Create a dynamically allocated cfi_ep_t object (a CFI wrapper to the PCD's
++ *		active endpoint)
++ *	Create MAX_DMA_DESCS_PER_EP count DMA Descriptors for the EP
++ *	Set the Buffer Mode to standard
++ *	Initialize the default values for all EP modes (SG, Circular, Concat, Align)
++ *	Add the cfi_ep_t object to the list of active endpoints in the CFI object
++ */
++static int cfi_ep_enable(struct cfiobject *cfi, struct dwc_otg_pcd *pcd,
++			 struct dwc_otg_pcd_ep *ep)
++{
++	cfi_ep_t *cfiep;
++	int retval = -DWC_E_NOT_SUPPORTED;
++
++	CFI_INFO("%s: epname=%s; epnum=0x%02x\n", __func__,
++		 "EP_" /*ep->ep.name */ , ep->desc->bEndpointAddress);
++	/* MAS - Check whether this endpoint is already in the list */
++	cfiep = get_cfi_ep_by_pcd_ep(cfi, ep);
++
++	if (NULL == cfiep) {
++		/* Allocate a cfi_ep_t object */
++		cfiep = DWC_ALLOC(sizeof(cfi_ep_t));
++		if (NULL == cfiep) {
++			CFI_INFO
++			    ("Unable to allocate memory for <cfiep> in function %s\n",
++			     __func__);
++			return -DWC_E_NO_MEMORY;
++		}
++		dwc_memset(cfiep, 0, sizeof(cfi_ep_t));
++
++		/* Save the dwc_otg_pcd_ep pointer in the cfiep object */
++		cfiep->ep = ep;
++
++		/* Allocate the DMA Descriptors chain of MAX_DMA_DESCS_PER_EP count */
++		ep->dwc_ep.descs =
++		    DWC_DMA_ALLOC(MAX_DMA_DESCS_PER_EP *
++				  sizeof(dwc_otg_dma_desc_t),
++				  &ep->dwc_ep.descs_dma_addr);
++
++		if (NULL == ep->dwc_ep.descs) {
++			DWC_FREE(cfiep);
++			return -DWC_E_NO_MEMORY;
++		}
++
++		DWC_LIST_INIT(&cfiep->lh);
++
++		/* Set the buffer mode to BM_STANDARD. It will be modified
++		 * when building descriptors for a specific buffer mode */
++		ep->dwc_ep.buff_mode = BM_STANDARD;
++
++		/* Create and initialize the default values for this EP's Buffer modes */
++		if ((retval = cfi_ep_init_defaults(pcd, cfiep)) < 0)
++			return retval;
++
++		/* Add the cfi_ep_t object to the CFI object's list of active endpoints */
++		DWC_LIST_INSERT_TAIL(&cfi->active_eps, &cfiep->lh);
++		retval = 0;
++	} else {		/* The sought EP is already in the list */
++		CFI_INFO("%s: The sought EP is already in the list\n",
++			 __func__);
++	}
++
++	return retval;
++}
++
++/**
++ * This function is called when the data stage of a 3-stage Control Write request
++ * is complete.
++ *
++ */
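++/*
++ * For VEN_CORE_WRITE_REGISTER the target address is recovered from the saved
++ * setup packet: with wValue == 0 the (commented-out) register base plus
++ * wIndex would be used, otherwise the address is composed as
++ * (wValue << 16) | wIndex. The actual register write is currently disabled
++ * in this code.
++ */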
++static int cfi_ctrl_write_complete(struct cfiobject *cfi,
++				   struct dwc_otg_pcd *pcd)
++{
++	uint32_t addr, reg_value;
++	uint16_t wIndex, wValue;
++	uint8_t bRequest;
++	uint8_t *buf = cfi->buf_out.buf;
++	//struct usb_ctrlrequest *ctrl_req = &cfi->ctrl_req_saved;
++	struct cfi_usb_ctrlrequest *ctrl_req = &cfi->ctrl_req;
++	int retval = -DWC_E_NOT_SUPPORTED;
++
++	CFI_INFO("%s\n", __func__);
++
++	bRequest = ctrl_req->bRequest;
++	wIndex = DWC_CONSTANT_CPU_TO_LE16(ctrl_req->wIndex);
++	wValue = DWC_CONSTANT_CPU_TO_LE16(ctrl_req->wValue);
++
++	/*
++	 * Save the pointer to the data stage in the ctrl_req's <data> field.
++	 * The request should be already saved in the command stage by now.
++	 */
++	ctrl_req->data = cfi->buf_out.buf;
++	cfi->need_status_in_complete = 0;
++	cfi->need_gadget_att = 0;
++
++	switch (bRequest) {
++	case VEN_CORE_WRITE_REGISTER:
++		/* The buffer contains raw data of the new value for the register */
++		reg_value = *((uint32_t *) buf);
++		if (wValue == 0) {
++			addr = 0;
++			//addr = (uint32_t) pcd->otg_dev->os_dep.base;
++			addr += wIndex;
++		} else {
++			addr = (wValue << 16) | wIndex;
++		}
++
++		//writel(reg_value, addr);
++
++		retval = 0;
++		cfi->need_status_in_complete = 1;
++		break;
++
++	case VEN_CORE_SET_FEATURE:
++		/* The buffer contains raw data of the new value of the feature */
++		retval = cfi_set_feature_value(pcd);
++		if (retval < 0)
++			return retval;
++
++		cfi->need_status_in_complete = 1;
++		break;
++
++	default:
++		break;
++	}
++
++	return retval;
++}
++
++/**
++ * This function builds the DMA descriptors for the SG buffer mode.
++ */
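++/*
++ * Illustrative example: with bCount = 3, wSize = 512 and bOffset = 16
++ * (values taken from the endpoint's SG feature setup), three descriptors are
++ * built covering req->dma, req->dma + 528 and req->dma + 1056, each
++ * transferring 512 bytes; only the last descriptor gets the L/IOC bits set.
++ */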
++static void cfi_build_sg_descs(struct cfiobject *cfi, cfi_ep_t * cfiep,
++			       dwc_otg_pcd_request_t * req)
++{
++	struct dwc_otg_pcd_ep *ep = cfiep->ep;
++	ddma_sg_buffer_setup_t *sgval = cfiep->bm_sg;
++	struct dwc_otg_dma_desc *desc = cfiep->ep->dwc_ep.descs;
++	struct dwc_otg_dma_desc *desc_last = cfiep->ep->dwc_ep.descs;
++	dma_addr_t buff_addr = req->dma;
++	int i;
++	uint32_t txsize, off;
++
++	txsize = sgval->wSize;
++	off = sgval->bOffset;
++
++//      CFI_INFO("%s: %s TXSIZE=0x%08x; OFFSET=0x%08x\n",
++//              __func__, cfiep->ep->ep.name, txsize, off);
++
++	for (i = 0; i < sgval->bCount; i++) {
++		desc->status.b.bs = BS_HOST_BUSY;
++		desc->buf = buff_addr;
++		desc->status.b.l = 0;
++		desc->status.b.ioc = 0;
++		desc->status.b.sp = 0;
++		desc->status.b.bytes = txsize;
++		desc->status.b.bs = BS_HOST_READY;
++
++		/* Set the next address of the buffer */
++		buff_addr += txsize + off;
++		desc_last = desc;
++		desc++;
++	}
++
++	/* Set the last, ioc and sp bits on the Last DMA Descriptor */
++	desc_last->status.b.l = 1;
++	desc_last->status.b.ioc = 1;
++	desc_last->status.b.sp = ep->dwc_ep.sent_zlp;
++	/* Save the last DMA descriptor pointer */
++	cfiep->dma_desc_last = desc_last;
++	cfiep->desc_count = sgval->bCount;
++}
++
++/**
++ * This function builds the DMA descriptors for the Concatenation buffer mode.
++ */
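++/*
++ * Illustrative example: with bDescCount = 2 and wTxBytes = {64, 100},
++ * descriptor 0 sends 64 bytes and descriptor 1 sends 100 bytes, while the
++ * buffer pointer advances by the endpoint's wMaxPacketSize between
++ * descriptors; the last descriptor again carries the L/IOC bits.
++ */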
++static void cfi_build_concat_descs(struct cfiobject *cfi, cfi_ep_t * cfiep,
++				   dwc_otg_pcd_request_t * req)
++{
++	struct dwc_otg_pcd_ep *ep = cfiep->ep;
++	ddma_concat_buffer_setup_t *concatval = cfiep->bm_concat;
++	struct dwc_otg_dma_desc *desc = cfiep->ep->dwc_ep.descs;
++	struct dwc_otg_dma_desc *desc_last = cfiep->ep->dwc_ep.descs;
++	dma_addr_t buff_addr = req->dma;
++	int i;
++	uint16_t *txsize;
++
++	txsize = concatval->wTxBytes;
++
++	for (i = 0; i < concatval->hdr.bDescCount; i++) {
++		desc->buf = buff_addr;
++		desc->status.b.bs = BS_HOST_BUSY;
++		desc->status.b.l = 0;
++		desc->status.b.ioc = 0;
++		desc->status.b.sp = 0;
++		desc->status.b.bytes = *txsize;
++		desc->status.b.bs = BS_HOST_READY;
++
++		txsize++;
++		/* Set the next address of the buffer */
++		buff_addr += UGETW(ep->desc->wMaxPacketSize);
++		desc_last = desc;
++		desc++;
++	}
++
++	/* Set the last, ioc and sp bits on the Last DMA Descriptor */
++	desc_last->status.b.l = 1;
++	desc_last->status.b.ioc = 1;
++	desc_last->status.b.sp = ep->dwc_ep.sent_zlp;
++	cfiep->dma_desc_last = desc_last;
++	cfiep->desc_count = concatval->hdr.bDescCount;
++}
++
++/**
++ * This function builds the DMA descriptors for the Circular buffer mode
++ */
++static void cfi_build_circ_descs(struct cfiobject *cfi, cfi_ep_t * cfiep,
++				 dwc_otg_pcd_request_t * req)
++{
++	/* @todo: MAS - add implementation when this feature needs to be tested */
++}
++
++/**
++ * This function builds the DMA descriptors for the Alignment buffer mode
++ */
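++/*
++ * A single descriptor is used here: the buffer address programmed into the
++ * core is req->dma shifted forward by bAlign bytes, e.g. (illustratively)
++ * bAlign = 3 presents a buffer at 0x1000 to the core as 0x1003.
++ */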
++static void cfi_build_align_descs(struct cfiobject *cfi, cfi_ep_t * cfiep,
++				  dwc_otg_pcd_request_t * req)
++{
++	struct dwc_otg_pcd_ep *ep = cfiep->ep;
++	ddma_align_buffer_setup_t *alignval = cfiep->bm_align;
++	struct dwc_otg_dma_desc *desc = cfiep->ep->dwc_ep.descs;
++	dma_addr_t buff_addr = req->dma;
++
++	desc->status.b.bs = BS_HOST_BUSY;
++	desc->status.b.l = 1;
++	desc->status.b.ioc = 1;
++	desc->status.b.sp = ep->dwc_ep.sent_zlp;
++	desc->status.b.bytes = req->length;
++	/* Adjust the buffer alignment */
++	desc->buf = (buff_addr + alignval->bAlign);
++	desc->status.b.bs = BS_HOST_READY;
++	cfiep->dma_desc_last = desc;
++	cfiep->desc_count = 1;
++}
++
++/**
++ * This function builds the DMA descriptors chain for different modes of the
++ * buffer setup of an endpoint.
++ */
++static void cfi_build_descriptors(struct cfiobject *cfi,
++				  struct dwc_otg_pcd *pcd,
++				  struct dwc_otg_pcd_ep *ep,
++				  dwc_otg_pcd_request_t * req)
++{
++	cfi_ep_t *cfiep;
++
++	/* Get the cfiep by the dwc_otg_pcd_ep */
++	cfiep = get_cfi_ep_by_pcd_ep(cfi, ep);
++	if (NULL == cfiep) {
++		CFI_INFO("%s: Unable to find a matching active endpoint\n",
++			 __func__);
++		return;
++	}
++
++	cfiep->xfer_len = req->length;
++
++	/* Iterate through all the DMA descriptors */
++	switch (cfiep->ep->dwc_ep.buff_mode) {
++	case BM_SG:
++		cfi_build_sg_descs(cfi, cfiep, req);
++		break;
++
++	case BM_CONCAT:
++		cfi_build_concat_descs(cfi, cfiep, req);
++		break;
++
++	case BM_CIRCULAR:
++		cfi_build_circ_descs(cfi, cfiep, req);
++		break;
++
++	case BM_ALIGN:
++		cfi_build_align_descs(cfi, cfiep, req);
++		break;
++
++	default:
++		break;
++	}
++}
++
++/**
++ * Allocate DMA buffer for different Buffer modes.
++ */
++static void *cfi_ep_alloc_buf(struct cfiobject *cfi, struct dwc_otg_pcd *pcd,
++			      struct dwc_otg_pcd_ep *ep, dma_addr_t * dma,
++			      unsigned size, gfp_t flags)
++{
++	return DWC_DMA_ALLOC(size, dma);
++}
++
++/**
++ * This function initializes the CFI object.
++ */
++int init_cfi(cfiobject_t * cfiobj)
++{
++	CFI_INFO("%s\n", __func__);
++
++	/* Allocate a buffer for IN XFERs */
++	cfiobj->buf_in.buf =
++	    DWC_DMA_ALLOC(CFI_IN_BUF_LEN, &cfiobj->buf_in.addr);
++	if (NULL == cfiobj->buf_in.buf) {
++		CFI_INFO("Unable to allocate buffer for INs\n");
++		return -DWC_E_NO_MEMORY;
++	}
++
++	/* Allocate a buffer for OUT XFERs */
++	cfiobj->buf_out.buf =
++	    DWC_DMA_ALLOC(CFI_OUT_BUF_LEN, &cfiobj->buf_out.addr);
++	if (NULL == cfiobj->buf_out.buf) {
++		CFI_INFO("Unable to allocate buffer for OUT\n");
++		return -DWC_E_NO_MEMORY;
++	}
++
++	/* Initialize the callback function pointers */
++	cfiobj->ops.release = cfi_release;
++	cfiobj->ops.ep_enable = cfi_ep_enable;
++	cfiobj->ops.ctrl_write_complete = cfi_ctrl_write_complete;
++	cfiobj->ops.build_descriptors = cfi_build_descriptors;
++	cfiobj->ops.ep_alloc_buf = cfi_ep_alloc_buf;
++
++	/* Initialize the list of active endpoints in the CFI object */
++	DWC_LIST_INIT(&cfiobj->active_eps);
++
++	return 0;
++}
++
++/**
++ * This function reads the required feature's current value into the buffer
++ *
++ * @retval: Returns a negative error code on failure, or the data length of the feature value
++ */
++static int cfi_get_feature_value(uint8_t * buf, uint16_t buflen,
++				 struct dwc_otg_pcd *pcd,
++				 struct cfi_usb_ctrlrequest *ctrl_req)
++{
++	int retval = -DWC_E_NOT_SUPPORTED;
++	struct dwc_otg_core_if *coreif = GET_CORE_IF(pcd);
++	uint16_t dfifo, rxfifo, txfifo;
++
++	switch (ctrl_req->wIndex) {
++		/* Whether the DDMA is enabled or not */
++	case FT_ID_DMA_MODE:
++		*buf = (coreif->dma_enable && coreif->dma_desc_enable) ? 1 : 0;
++		retval = 1;
++		break;
++
++	case FT_ID_DMA_BUFFER_SETUP:
++		retval = cfi_ep_get_sg_val(buf, pcd, ctrl_req);
++		break;
++
++	case FT_ID_DMA_BUFF_ALIGN:
++		retval = cfi_ep_get_align_val(buf, pcd, ctrl_req);
++		break;
++
++	case FT_ID_DMA_CONCAT_SETUP:
++		retval = cfi_ep_get_concat_val(buf, pcd, ctrl_req);
++		break;
++
++	case FT_ID_DMA_CIRCULAR:
++		CFI_INFO("GetFeature value (FT_ID_DMA_CIRCULAR)\n");
++		break;
++
++	case FT_ID_THRESHOLD_SETUP:
++		CFI_INFO("GetFeature value (FT_ID_THRESHOLD_SETUP)\n");
++		break;
++
++	case FT_ID_DFIFO_DEPTH:
++		dfifo = get_dfifo_size(coreif);
++		*((uint16_t *) buf) = dfifo;
++		retval = sizeof(uint16_t);
++		break;
++
++	case FT_ID_TX_FIFO_DEPTH:
++		retval = get_txfifo_size(pcd, ctrl_req->wValue);
++		if (retval >= 0) {
++			txfifo = retval;
++			*((uint16_t *) buf) = txfifo;
++			retval = sizeof(uint16_t);
++		}
++		break;
++
++	case FT_ID_RX_FIFO_DEPTH:
++		retval = get_rxfifo_size(coreif, ctrl_req->wValue);
++		if (retval >= 0) {
++			rxfifo = retval;
++			*((uint16_t *) buf) = rxfifo;
++			retval = sizeof(uint16_t);
++		}
++		break;
++	}
++
++	return retval;
++}
++
++/**
++ * This function resets the SG for the specified EP to its default value
++ */
++static int cfi_reset_sg_val(cfi_ep_t * cfiep)
++{
++	dwc_memset(cfiep->bm_sg, 0, sizeof(ddma_sg_buffer_setup_t));
++	return 0;
++}
++
++/**
++ * This function resets the Alignment for the specified EP to its default value
++ */
++static int cfi_reset_align_val(cfi_ep_t * cfiep)
++{
++	dwc_memset(cfiep->bm_sg, 0, sizeof(ddma_sg_buffer_setup_t));
++	return 0;
++}
++
++/**
++ * This function resets the Concatenation for the specified EP to its default value
++ * This function will also set the value of the wTxBytes field to NULL after
++ * freeing the memory previously allocated for this field.
++ */
++static int cfi_reset_concat_val(cfi_ep_t * cfiep)
++{
++	/* First we need to free the wTxBytes field */
++	if (cfiep->bm_concat->wTxBytes) {
++		DWC_FREE(cfiep->bm_concat->wTxBytes);
++		cfiep->bm_concat->wTxBytes = NULL;
++	}
++
++	dwc_memset(cfiep->bm_concat, 0, sizeof(ddma_concat_buffer_setup_t));
++	return 0;
++}
++
++/**
++ * This function resets all the buffer setups of the specified endpoint
++ */
++static int cfi_ep_reset_all_setup_vals(cfi_ep_t * cfiep)
++{
++	cfi_reset_sg_val(cfiep);
++	cfi_reset_align_val(cfiep);
++	cfi_reset_concat_val(cfiep);
++	return 0;
++}
++
++static int cfi_handle_reset_fifo_val(struct dwc_otg_pcd *pcd, uint8_t ep_addr,
++				     uint8_t rx_rst, uint8_t tx_rst)
++{
++	int retval = -DWC_E_INVALID;
++	uint16_t tx_siz[15];
++	uint16_t rx_siz = 0;
++	dwc_otg_pcd_ep_t *ep = NULL;
++	dwc_otg_core_if_t *core_if = GET_CORE_IF(pcd);
++	dwc_otg_core_params_t *params = GET_CORE_IF(pcd)->core_params;
++
++	if (rx_rst) {
++		rx_siz = params->dev_rx_fifo_size;
++		params->dev_rx_fifo_size = GET_CORE_IF(pcd)->init_rxfsiz;
++	}
++
++	if (tx_rst) {
++		if (ep_addr == 0) {
++			int i;
++
++			for (i = 0; i < core_if->hwcfg4.b.num_in_eps; i++) {
++				tx_siz[i] =
++				    core_if->core_params->dev_tx_fifo_size[i];
++				core_if->core_params->dev_tx_fifo_size[i] =
++				    core_if->init_txfsiz[i];
++			}
++		} else {
++
++			ep = get_ep_by_addr(pcd, ep_addr);
++
++			if (NULL == ep) {
++				CFI_INFO
++				    ("%s: Unable to get the endpoint addr=0x%02x\n",
++				     __func__, ep_addr);
++				return -DWC_E_INVALID;
++			}
++
++			tx_siz[0] =
++			    params->dev_tx_fifo_size[ep->dwc_ep.tx_fifo_num -
++						     1];
++			params->dev_tx_fifo_size[ep->dwc_ep.tx_fifo_num - 1] =
++			    GET_CORE_IF(pcd)->init_txfsiz[ep->
++							  dwc_ep.tx_fifo_num -
++							  1];
++		}
++	}
++
++	if (resize_fifos(GET_CORE_IF(pcd))) {
++		retval = 0;
++	} else {
++		CFI_INFO
++		    ("%s: Error resetting the feature Reset All(FIFO size)\n",
++		     __func__);
++		if (rx_rst) {
++			params->dev_rx_fifo_size = rx_siz;
++		}
++
++		if (tx_rst) {
++			if (ep_addr == 0) {
++				int i;
++				for (i = 0; i < core_if->hwcfg4.b.num_in_eps;
++				     i++) {
++					core_if->
++					    core_params->dev_tx_fifo_size[i] =
++					    tx_siz[i];
++				}
++			} else {
++				params->dev_tx_fifo_size[ep->
++							 dwc_ep.tx_fifo_num -
++							 1] = tx_siz[0];
++			}
++		}
++		retval = -DWC_E_INVALID;
++	}
++	return retval;
++}
++
++static int cfi_handle_reset_all(struct dwc_otg_pcd *pcd, uint8_t addr)
++{
++	int retval = 0;
++	cfi_ep_t *cfiep;
++	cfiobject_t *cfi = pcd->cfi;
++	dwc_list_link_t *tmp;
++
++	retval = cfi_handle_reset_fifo_val(pcd, addr, 1, 1);
++	if (retval < 0) {
++		return retval;
++	}
++
++	/* If the EP address is known then reset the features for only that EP */
++	if (addr) {
++		cfiep = get_cfi_ep_by_addr(pcd->cfi, addr);
++		if (NULL == cfiep) {
++			CFI_INFO("%s: Error getting the EP address 0x%02x\n",
++				 __func__, addr);
++			return -DWC_E_INVALID;
++		}
++		retval = cfi_ep_reset_all_setup_vals(cfiep);
++		cfiep->ep->dwc_ep.buff_mode = BM_STANDARD;
++	}
++	/* Otherwise (wValue == 0), reset all features of all EP's */
++	else {
++		/* Traverse all the active EP's and reset the feature(s) value(s) */
++		//list_for_each_entry(cfiep, &cfi->active_eps, lh) {
++		DWC_LIST_FOREACH(tmp, &cfi->active_eps) {
++			cfiep = DWC_LIST_ENTRY(tmp, struct cfi_ep, lh);
++			retval = cfi_ep_reset_all_setup_vals(cfiep);
++			cfiep->ep->dwc_ep.buff_mode = BM_STANDARD;
++			if (retval < 0) {
++				CFI_INFO
++				    ("%s: Error resetting the feature Reset All\n",
++				     __func__);
++				return retval;
++			}
++		}
++	}
++	return retval;
++}
++
++static int cfi_handle_reset_dma_buff_setup(struct dwc_otg_pcd *pcd,
++					   uint8_t addr)
++{
++	int retval = 0;
++	cfi_ep_t *cfiep;
++	cfiobject_t *cfi = pcd->cfi;
++	dwc_list_link_t *tmp;
++
++	/* If the EP address is known then reset the features for only that EP */
++	if (addr) {
++		cfiep = get_cfi_ep_by_addr(pcd->cfi, addr);
++		if (NULL == cfiep) {
++			CFI_INFO("%s: Error getting the EP address 0x%02x\n",
++				 __func__, addr);
++			return -DWC_E_INVALID;
++		}
++		retval = cfi_reset_sg_val(cfiep);
++	}
++	/* Otherwise (wValue == 0), reset all features of all EP's */
++	else {
++		/* Traverse all the active EP's and reset the feature(s) value(s) */
++		//list_for_each_entry(cfiep, &cfi->active_eps, lh) {
++		DWC_LIST_FOREACH(tmp, &cfi->active_eps) {
++			cfiep = DWC_LIST_ENTRY(tmp, struct cfi_ep, lh);
++			retval = cfi_reset_sg_val(cfiep);
++			if (retval < 0) {
++				CFI_INFO
++				    ("%s: Error resetting the feature Buffer Setup\n",
++				     __func__);
++				return retval;
++			}
++		}
++	}
++	return retval;
++}
++
++static int cfi_handle_reset_concat_val(struct dwc_otg_pcd *pcd, uint8_t addr)
++{
++	int retval = 0;
++	cfi_ep_t *cfiep;
++	cfiobject_t *cfi = pcd->cfi;
++	dwc_list_link_t *tmp;
++
++	/* If the EP address is known then reset the features for only that EP */
++	if (addr) {
++		cfiep = get_cfi_ep_by_addr(pcd->cfi, addr);
++		if (NULL == cfiep) {
++			CFI_INFO("%s: Error getting the EP address 0x%02x\n",
++				 __func__, addr);
++			return -DWC_E_INVALID;
++		}
++		retval = cfi_reset_concat_val(cfiep);
++	}
++	/* Otherwise (wValue == 0), reset all features of all EP's */
++	else {
++		/* Traverse all the active EP's and reset the feature(s) value(s) */
++		//list_for_each_entry(cfiep, &cfi->active_eps, lh) {
++		DWC_LIST_FOREACH(tmp, &cfi->active_eps) {
++			cfiep = DWC_LIST_ENTRY(tmp, struct cfi_ep, lh);
++			retval = cfi_reset_concat_val(cfiep);
++			if (retval < 0) {
++				CFI_INFO
++				    ("%s: Error resetting the feature Concatenation Value\n",
++				     __func__);
++				return retval;
++			}
++		}
++	}
++	return retval;
++}
++
++static int cfi_handle_reset_align_val(struct dwc_otg_pcd *pcd, uint8_t addr)
++{
++	int retval = 0;
++	cfi_ep_t *cfiep;
++	cfiobject_t *cfi = pcd->cfi;
++	dwc_list_link_t *tmp;
++
++	/* If the EP address is known then reset the features for only that EP */
++	if (addr) {
++		cfiep = get_cfi_ep_by_addr(pcd->cfi, addr);
++		if (NULL == cfiep) {
++			CFI_INFO("%s: Error getting the EP address 0x%02x\n",
++				 __func__, addr);
++			return -DWC_E_INVALID;
++		}
++		retval = cfi_reset_align_val(cfiep);
++	}
++	/* Otherwise (wValue == 0), reset all features of all EP's */
++	else {
++		/* Traverse all the active EP's and reset the feature(s) value(s) */
++		//list_for_each_entry(cfiep, &cfi->active_eps, lh) {
++		DWC_LIST_FOREACH(tmp, &cfi->active_eps) {
++			cfiep = DWC_LIST_ENTRY(tmp, struct cfi_ep, lh);
++			retval = cfi_reset_align_val(cfiep);
++			if (retval < 0) {
++				CFI_INFO
++				    ("%s: Error resetting the feature Aliignment Value\n",
++				     __func__);
++				return retval;
++			}
++		}
++	}
++	return retval;
++
++}
++
++static int cfi_preproc_reset(struct dwc_otg_pcd *pcd,
++			     struct cfi_usb_ctrlrequest *req)
++{
++	int retval = 0;
++
++	switch (req->wIndex) {
++	case 0:
++		/* Reset all features */
++		retval = cfi_handle_reset_all(pcd, req->wValue & 0xff);
++		break;
++
++	case FT_ID_DMA_BUFFER_SETUP:
++		/* Reset the SG buffer setup */
++		retval =
++		    cfi_handle_reset_dma_buff_setup(pcd, req->wValue & 0xff);
++		break;
++
++	case FT_ID_DMA_CONCAT_SETUP:
++		/* Reset the Concatenation buffer setup */
++		retval = cfi_handle_reset_concat_val(pcd, req->wValue & 0xff);
++		break;
++
++	case FT_ID_DMA_BUFF_ALIGN:
++		/* Reset the Alignment buffer setup */
++		retval = cfi_handle_reset_align_val(pcd, req->wValue & 0xff);
++		break;
++
++	case FT_ID_TX_FIFO_DEPTH:
++		retval =
++		    cfi_handle_reset_fifo_val(pcd, req->wValue & 0xff, 0, 1);
++		pcd->cfi->need_gadget_att = 0;
++		break;
++
++	case FT_ID_RX_FIFO_DEPTH:
++		retval = cfi_handle_reset_fifo_val(pcd, 0, 1, 0);
++		pcd->cfi->need_gadget_att = 0;
++		break;
++	default:
++		break;
++	}
++	return retval;
++}
++
++/**
++ * This function sets a new value for the SG buffer setup.
++ */
++static int cfi_ep_set_sg_val(uint8_t * buf, struct dwc_otg_pcd *pcd)
++{
++	uint8_t inaddr, outaddr;
++	cfi_ep_t *epin, *epout;
++	ddma_sg_buffer_setup_t *psgval;
++	uint32_t desccount, size;
++
++	CFI_INFO("%s\n", __func__);
++
++	psgval = (ddma_sg_buffer_setup_t *) buf;
++	desccount = (uint32_t) psgval->bCount;
++	size = (uint32_t) psgval->wSize;
++
++	/* Check the DMA descriptor count */
++	if ((desccount > MAX_DMA_DESCS_PER_EP) || (desccount == 0)) {
++		CFI_INFO
++		    ("%s: The count of DMA Descriptors should be between 1 and %d\n",
++		     __func__, MAX_DMA_DESCS_PER_EP);
++		return -DWC_E_INVALID;
++	}
++
++	/* Check the transfer segment size */
++	if (size == 0) {
++		CFI_INFO("%s: The transfer size should be at least 1 byte\n",
++			 __func__);
++		return -DWC_E_INVALID;
++	}
++
++	inaddr = psgval->bInEndpointAddress;
++	outaddr = psgval->bOutEndpointAddress;
++
++	epin = get_cfi_ep_by_addr(pcd->cfi, inaddr);
++	epout = get_cfi_ep_by_addr(pcd->cfi, outaddr);
++
++	if (NULL == epin || NULL == epout) {
++		CFI_INFO
++		    ("%s: Unable to get the endpoints inaddr=0x%02x outaddr=0x%02x\n",
++		     __func__, inaddr, outaddr);
++		return -DWC_E_INVALID;
++	}
++
++	epin->ep->dwc_ep.buff_mode = BM_SG;
++	dwc_memcpy(epin->bm_sg, psgval, sizeof(ddma_sg_buffer_setup_t));
++
++	epout->ep->dwc_ep.buff_mode = BM_SG;
++	dwc_memcpy(epout->bm_sg, psgval, sizeof(ddma_sg_buffer_setup_t));
++
++	return 0;
++}
++
++/**
++ * This function sets a new value for the buffer Alignment setup.
++ */
++static int cfi_ep_set_alignment_val(uint8_t * buf, struct dwc_otg_pcd *pcd)
++{
++	cfi_ep_t *ep;
++	uint8_t addr;
++	ddma_align_buffer_setup_t *palignval;
++
++	palignval = (ddma_align_buffer_setup_t *) buf;
++	addr = palignval->bEndpointAddress;
++
++	ep = get_cfi_ep_by_addr(pcd->cfi, addr);
++
++	if (NULL == ep) {
++		CFI_INFO("%s: Unable to get the endpoint addr=0x%02x\n",
++			 __func__, addr);
++		return -DWC_E_INVALID;
++	}
++
++	ep->ep->dwc_ep.buff_mode = BM_ALIGN;
++	dwc_memcpy(ep->bm_align, palignval, sizeof(ddma_align_buffer_setup_t));
++
++	return 0;
++}
++
++/**
++ * This function sets a new value for the Concatenation buffer setup.
++ */
++static int cfi_ep_set_concat_val(uint8_t * buf, struct dwc_otg_pcd *pcd)
++{
++	uint8_t addr;
++	cfi_ep_t *ep;
++	struct _ddma_concat_buffer_setup_hdr *pConcatValHdr;
++	uint16_t *pVals;
++	uint32_t desccount;
++	int i;
++	uint16_t mps;
++
++	pConcatValHdr = (struct _ddma_concat_buffer_setup_hdr *)buf;
++	desccount = (uint32_t) pConcatValHdr->bDescCount;
++	pVals = (uint16_t *) (buf + BS_CONCAT_VAL_HDR_LEN);
++
++	/* Check the DMA descriptor count */
++	if (desccount > MAX_DMA_DESCS_PER_EP) {
++		CFI_INFO("%s: Maximum DMA Descriptor count should be %d\n",
++			 __func__, MAX_DMA_DESCS_PER_EP);
++		return -DWC_E_INVALID;
++	}
++
++	addr = pConcatValHdr->bEndpointAddress;
++	ep = get_cfi_ep_by_addr(pcd->cfi, addr);
++	if (NULL == ep) {
++		CFI_INFO("%s: Unable to get the endpoint addr=0x%02x\n",
++			 __func__, addr);
++		return -DWC_E_INVALID;
++	}
++
++	mps = UGETW(ep->ep->desc->wMaxPacketSize);
++
++#if 0
++	for (i = 0; i < desccount; i++) {
++		CFI_INFO("%s: wTxSize[%d]=0x%04x\n", __func__, i, pVals[i]);
++	}
++	CFI_INFO("%s: epname=%s; mps=%d\n", __func__, ep->ep->ep.name, mps);
++#endif
++
++	/* Check the wTxSizes to be less than or equal to the mps */
++	for (i = 0; i < desccount; i++) {
++		if (pVals[i] > mps) {
++			CFI_INFO
++			    ("%s: ERROR - the wTxSize[%d] should be <= MPS (wTxSize=%d)\n",
++			     __func__, i, pVals[i]);
++			return -DWC_E_INVALID;
++		}
++	}
++
++	ep->ep->dwc_ep.buff_mode = BM_CONCAT;
++	dwc_memcpy(ep->bm_concat, pConcatValHdr, BS_CONCAT_VAL_HDR_LEN);
++
++	/* Free the previously allocated storage for the wTxBytes */
++	if (ep->bm_concat->wTxBytes) {
++		DWC_FREE(ep->bm_concat->wTxBytes);
++	}
++
++	/* Allocate a new storage for the wTxBytes field */
++	ep->bm_concat->wTxBytes =
++	    DWC_ALLOC(sizeof(uint16_t) * pConcatValHdr->bDescCount);
++	if (NULL == ep->bm_concat->wTxBytes) {
++		CFI_INFO("%s: Unable to allocate memory\n", __func__);
++		return -DWC_E_NO_MEMORY;
++	}
++
++	/* Copy the new values into the wTxBytes field */
++	dwc_memcpy(ep->bm_concat->wTxBytes, buf + BS_CONCAT_VAL_HDR_LEN,
++		   sizeof(uint16_t) * pConcatValHdr->bDescCount);
++
++	return 0;
++}
++
++/**
++ * This function calculates the total of all FIFO sizes
++ *
++ * @param core_if Programming view of DWC_otg controller
++ *
++ * @return The total of data FIFO sizes.
++ *
++ */
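++/*
++ * For example (illustrative sizes, in the units of the FIFO size registers):
++ * dev_rx_fifo_size = 1024, dev_nperio_tx_fifo_size = 1024 and three TxFIFOs
++ * of 256 each give a total of 2816.
++ */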
++static uint16_t get_dfifo_size(dwc_otg_core_if_t * core_if)
++{
++	dwc_otg_core_params_t *params = core_if->core_params;
++	uint16_t dfifo_total = 0;
++	int i;
++
++	/* The shared RxFIFO size */
++	dfifo_total =
++	    params->dev_rx_fifo_size + params->dev_nperio_tx_fifo_size;
++
++	/* Add up each TxFIFO size to the total */
++	for (i = 0; i < core_if->hwcfg4.b.num_in_eps; i++) {
++		dfifo_total += params->dev_tx_fifo_size[i];
++	}
++
++	return dfifo_total;
++}
++
++/**
++ * This function returns the Rx FIFO size
++ *
++ * @param core_if Programming view of DWC_otg controller
++ *
++ * @return The Rx FIFO size.
++ *
++ */
++static int32_t get_rxfifo_size(dwc_otg_core_if_t * core_if, uint16_t wValue)
++{
++	switch (wValue >> 8) {
++	case 0:
++		return (core_if->pwron_rxfsiz <
++			32768) ? core_if->pwron_rxfsiz : 32768;
++		break;
++	case 1:
++		return core_if->core_params->dev_rx_fifo_size;
++		break;
++	default:
++		return -DWC_E_INVALID;
++		break;
++	}
++}
++
++/**
++ * This function returns the Tx FIFO size for an IN EP
++ *
++ * @param pcd A pointer to the PCD object
++ *
++ * @return The Tx FIFO size of the specified IN endpoint.
++ *
++ */
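++/*
++ * The low byte of wValue addresses the IN endpoint and the high byte selects
++ * which value is reported: 0 returns the power-on FIFO depth (pwron_txfsiz),
++ * 1 returns the currently programmed depth from the core parameters.
++ */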
++static int32_t get_txfifo_size(struct dwc_otg_pcd *pcd, uint16_t wValue)
++{
++	dwc_otg_pcd_ep_t *ep;
++
++	ep = get_ep_by_addr(pcd, wValue & 0xff);
++
++	if (NULL == ep) {
++		CFI_INFO("%s: Unable to get the endpoint addr=0x%02x\n",
++			 __func__, wValue & 0xff);
++		return -DWC_E_INVALID;
++	}
++
++	if (!ep->dwc_ep.is_in) {
++		CFI_INFO
++		    ("%s: No Tx FIFO assingned to the Out endpoint addr=0x%02x\n",
++		     __func__, wValue & 0xff);
++		return -DWC_E_INVALID;
++	}
++
++	switch (wValue >> 8) {
++	case 0:
++		return (GET_CORE_IF(pcd)->pwron_txfsiz
++			[ep->dwc_ep.tx_fifo_num - 1] <
++			768) ? GET_CORE_IF(pcd)->pwron_txfsiz[ep->
++							      dwc_ep.tx_fifo_num
++							      - 1] : 32768;
++		break;
++	case 1:
++		return GET_CORE_IF(pcd)->core_params->
++		    dev_tx_fifo_size[ep->dwc_ep.num - 1];
++		break;
++	default:
++		return -DWC_E_INVALID;
++		break;
++	}
++}
++
++/**
++ * This function checks if the submitted combination of
++ * device mode FIFO sizes is possible or not.
++ *
++ * @param core_if Programming view of DWC_otg controller
++ *
++ * @return 1 if possible, 0 otherwise.
++ *
++ */
++static uint8_t check_fifo_sizes(dwc_otg_core_if_t * core_if)
++{
++	uint16_t dfifo_actual = 0;
++	dwc_otg_core_params_t *params = core_if->core_params;
++	uint16_t start_addr = 0;
++	int i;
++
++	dfifo_actual =
++	    params->dev_rx_fifo_size + params->dev_nperio_tx_fifo_size;
++
++	for (i = 0; i < core_if->hwcfg4.b.num_in_eps; i++) {
++		dfifo_actual += params->dev_tx_fifo_size[i];
++	}
++
++	if (dfifo_actual > core_if->total_fifo_size) {
++		return 0;
++	}
++
++	if (params->dev_rx_fifo_size > 32768 || params->dev_rx_fifo_size < 16)
++		return 0;
++
++	if (params->dev_nperio_tx_fifo_size > 32768
++	    || params->dev_nperio_tx_fifo_size < 16)
++		return 0;
++
++	for (i = 0; i < core_if->hwcfg4.b.num_in_eps; i++) {
++
++		if (params->dev_tx_fifo_size[i] > 768
++		    || params->dev_tx_fifo_size[i] < 4)
++			return 0;
++	}
++
++	if (params->dev_rx_fifo_size > core_if->pwron_rxfsiz)
++		return 0;
++	start_addr = params->dev_rx_fifo_size;
++
++	if (params->dev_nperio_tx_fifo_size > core_if->pwron_gnptxfsiz)
++		return 0;
++	start_addr += params->dev_nperio_tx_fifo_size;
++
++	for (i = 0; i < core_if->hwcfg4.b.num_in_eps; i++) {
++
++		if (params->dev_tx_fifo_size[i] > core_if->pwron_txfsiz[i])
++			return 0;
++		start_addr += params->dev_tx_fifo_size[i];
++	}
++
++	return 1;
++}
++
++/**
++ * This function resizes Device mode FIFOs
++ *
++ * @param core_if Programming view of DWC_otg controller
++ *
++ * @return 1 if successful, 0 otherwise
++ *
++ */
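++/*
++ * The FIFO RAM is laid out back to back: the RxFIFO occupies the start of the
++ * FIFO RAM, the non-periodic TxFIFO starts right after it, and each device
++ * TxFIFO follows the previous one, which is why start_address below is simply
++ * accumulated from the individual depths.
++ */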
++static uint8_t resize_fifos(dwc_otg_core_if_t * core_if)
++{
++	int i = 0;
++	dwc_otg_core_global_regs_t *global_regs = core_if->core_global_regs;
++	dwc_otg_core_params_t *params = core_if->core_params;
++	uint32_t rx_fifo_size;
++	fifosize_data_t nptxfifosize;
++	fifosize_data_t txfifosize[15];
++
++	uint32_t rx_fsz_bak;
++	uint32_t nptxfsz_bak;
++	uint32_t txfsz_bak[15];
++
++	uint16_t start_address;
++	uint8_t retval = 1;
++
++	if (!check_fifo_sizes(core_if)) {
++		return 0;
++	}
++
++	/* Configure data FIFO sizes */
++	if (core_if->hwcfg2.b.dynamic_fifo && params->enable_dynamic_fifo) {
++		rx_fsz_bak = DWC_READ_REG32(&global_regs->grxfsiz);
++		rx_fifo_size = params->dev_rx_fifo_size;
++		DWC_WRITE_REG32(&global_regs->grxfsiz, rx_fifo_size);
++
++		/*
++		 * Tx FIFOs These FIFOs are numbered from 1 to 15.
++		 * Indexes of the FIFO size module parameters in the
++		 * dev_tx_fifo_size array and the FIFO size registers in
++		 * the dtxfsiz array run from 0 to 14.
++		 */
++
++		/* Non-periodic Tx FIFO */
++		nptxfsz_bak = DWC_READ_REG32(&global_regs->gnptxfsiz);
++		nptxfifosize.b.depth = params->dev_nperio_tx_fifo_size;
++		start_address = params->dev_rx_fifo_size;
++		nptxfifosize.b.startaddr = start_address;
++
++		DWC_WRITE_REG32(&global_regs->gnptxfsiz, nptxfifosize.d32);
++
++		start_address += nptxfifosize.b.depth;
++
++		for (i = 0; i < core_if->hwcfg4.b.num_in_eps; i++) {
++			txfsz_bak[i] = DWC_READ_REG32(&global_regs->dtxfsiz[i]);
++
++			txfifosize[i].b.depth = params->dev_tx_fifo_size[i];
++			txfifosize[i].b.startaddr = start_address;
++			DWC_WRITE_REG32(&global_regs->dtxfsiz[i],
++					txfifosize[i].d32);
++
++			start_address += txfifosize[i].b.depth;
++		}
++
++		/** Check if register values are set correctly */
++		if (rx_fifo_size != DWC_READ_REG32(&global_regs->grxfsiz)) {
++			retval = 0;
++		}
++
++		if (nptxfifosize.d32 != DWC_READ_REG32(&global_regs->gnptxfsiz)) {
++			retval = 0;
++		}
++
++		for (i = 0; i < core_if->hwcfg4.b.num_in_eps; i++) {
++			if (txfifosize[i].d32 !=
++			    DWC_READ_REG32(&global_regs->dtxfsiz[i])) {
++				retval = 0;
++			}
++		}
++
++		/** If register values are not set correctly, reset old values */
++		if (retval == 0) {
++			DWC_WRITE_REG32(&global_regs->grxfsiz, rx_fsz_bak);
++
++			/* Non-periodic Tx FIFO */
++			DWC_WRITE_REG32(&global_regs->gnptxfsiz, nptxfsz_bak);
++
++			for (i = 0; i < core_if->hwcfg4.b.num_in_eps; i++) {
++				DWC_WRITE_REG32(&global_regs->dtxfsiz[i],
++						txfsz_bak[i]);
++			}
++		}
++	} else {
++		return 0;
++	}
++
++	/* Flush the FIFOs */
++	dwc_otg_flush_tx_fifo(core_if, 0x10);	/* all Tx FIFOs */
++	dwc_otg_flush_rx_fifo(core_if);
++
++	return retval;
++}
++
++/**
++ * This function sets a new value for the Tx FIFO size of the specified endpoint.
++ */
++static int cfi_ep_set_tx_fifo_val(uint8_t * buf, dwc_otg_pcd_t * pcd)
++{
++	int retval;
++	uint32_t fsiz;
++	uint16_t size;
++	uint16_t ep_addr;
++	dwc_otg_pcd_ep_t *ep;
++	dwc_otg_core_params_t *params = GET_CORE_IF(pcd)->core_params;
++	tx_fifo_size_setup_t *ptxfifoval;
++
++	ptxfifoval = (tx_fifo_size_setup_t *) buf;
++	ep_addr = ptxfifoval->bEndpointAddress;
++	size = ptxfifoval->wDepth;
++
++	ep = get_ep_by_addr(pcd, ep_addr);
++	if (NULL == ep) {
++		CFI_INFO("%s: Unable to get the endpoint addr=0x%02x\n",
++			 __func__, ep_addr);
++		return -DWC_E_INVALID;
++	}
++
++	CFI_INFO
++	    ("%s: Set Tx FIFO size: endpoint addr=0x%02x, depth=%d, FIFO Num=%d\n",
++	     __func__, ep_addr, size, ep->dwc_ep.tx_fifo_num);
++
++	fsiz = params->dev_tx_fifo_size[ep->dwc_ep.tx_fifo_num - 1];
++	params->dev_tx_fifo_size[ep->dwc_ep.tx_fifo_num - 1] = size;
++
++	if (resize_fifos(GET_CORE_IF(pcd))) {
++		retval = 0;
++	} else {
++		CFI_INFO
++		    ("%s: Error setting the feature Tx FIFO Size for EP%d\n",
++		     __func__, ep_addr);
++		params->dev_tx_fifo_size[ep->dwc_ep.tx_fifo_num - 1] = fsiz;
++		retval = -DWC_E_INVALID;
++	}
++
++	return retval;
++}
++
++/**
++ * This function sets a new value for the Rx FIFO size.
++ */
++static int cfi_set_rx_fifo_val(uint8_t * buf, dwc_otg_pcd_t * pcd)
++{
++	int retval;
++	uint32_t fsiz;
++	uint16_t size;
++	dwc_otg_core_params_t *params = GET_CORE_IF(pcd)->core_params;
++	rx_fifo_size_setup_t *prxfifoval;
++
++	prxfifoval = (rx_fifo_size_setup_t *) buf;
++	size = prxfifoval->wDepth;
++
++	fsiz = params->dev_rx_fifo_size;
++	params->dev_rx_fifo_size = size;
++
++	if (resize_fifos(GET_CORE_IF(pcd))) {
++		retval = 0;
++	} else {
++		CFI_INFO("%s: Error setting the feature Rx FIFO Size\n",
++			 __func__);
++		params->dev_rx_fifo_size = fsiz;
++		retval = -DWC_E_INVALID;
++	}
++
++	return retval;
++}
++
++/**
++ * This function reads the SG of an EP's buffer setup into the buffer buf
++ */
++static int cfi_ep_get_sg_val(uint8_t * buf, struct dwc_otg_pcd *pcd,
++			     struct cfi_usb_ctrlrequest *req)
++{
++	int retval = -DWC_E_INVALID;
++	uint8_t addr;
++	cfi_ep_t *ep;
++
++	/* The Low Byte of the wValue contains a non-zero address of the endpoint */
++	addr = req->wValue & 0xFF;
++	if (addr == 0)		/* The address should be non-zero */
++		return retval;
++
++	ep = get_cfi_ep_by_addr(pcd->cfi, addr);
++	if (NULL == ep) {
++		CFI_INFO("%s: Unable to get the endpoint address(0x%02x)\n",
++			 __func__, addr);
++		return retval;
++	}
++
++	dwc_memcpy(buf, ep->bm_sg, BS_SG_VAL_DESC_LEN);
++	retval = BS_SG_VAL_DESC_LEN;
++	return retval;
++}
++
++/**
++ * This function reads the Concatenation value of an EP's buffer mode into
++ * the buffer buf
++ */
++static int cfi_ep_get_concat_val(uint8_t * buf, struct dwc_otg_pcd *pcd,
++				 struct cfi_usb_ctrlrequest *req)
++{
++	int retval = -DWC_E_INVALID;
++	uint8_t addr;
++	cfi_ep_t *ep;
++	uint8_t desc_count;
++
++	/* The Low Byte of the wValue contains a non-zero address of the endpoint */
++	addr = req->wValue & 0xFF;
++	if (addr == 0)		/* The address should be non-zero */
++		return retval;
++
++	ep = get_cfi_ep_by_addr(pcd->cfi, addr);
++	if (NULL == ep) {
++		CFI_INFO("%s: Unable to get the endpoint address(0x%02x)\n",
++			 __func__, addr);
++		return retval;
++	}
++
++	/* Copy the header to the buffer */
++	dwc_memcpy(buf, ep->bm_concat, BS_CONCAT_VAL_HDR_LEN);
++	/* Advance the buffer pointer by the header size */
++	buf += BS_CONCAT_VAL_HDR_LEN;
++
++	desc_count = ep->bm_concat->hdr.bDescCount;
++	/* Copy all the wTxBytes to the buffer */
++	dwc_memcpy(buf, ep->bm_concat->wTxBytes,
++		   sizeof(uint16_t) * desc_count);
++
++	retval = BS_CONCAT_VAL_HDR_LEN + sizeof(uint16_t) * desc_count;
++	return retval;
++}
++
++/**
++ * This function reads the buffer Alignment value of an EP's buffer mode into
++ * the buffer buf
++ *
++ * @return The total number of bytes copied to the buffer or negative error code.
++ */
++static int cfi_ep_get_align_val(uint8_t * buf, struct dwc_otg_pcd *pcd,
++				struct cfi_usb_ctrlrequest *req)
++{
++	int retval = -DWC_E_INVALID;
++	uint8_t addr;
++	cfi_ep_t *ep;
++
++	/* The Low Byte of the wValue contains a non-zero address of the endpoint */
++	addr = req->wValue & 0xFF;
++	if (addr == 0)		/* The address should be non-zero */
++		return retval;
++
++	ep = get_cfi_ep_by_addr(pcd->cfi, addr);
++	if (NULL == ep) {
++		CFI_INFO("%s: Unable to get the endpoint address(0x%02x)\n",
++			 __func__, addr);
++		return retval;
++	}
++
++	dwc_memcpy(buf, ep->bm_align, BS_ALIGN_VAL_HDR_LEN);
++	retval = BS_ALIGN_VAL_HDR_LEN;
++
++	return retval;
++}
++
++/**
++ * This function sets a new value for the specified feature
++ *
++ * @param	pcd	A pointer to the PCD object
++ *
++ * @return 0 if successful, negative error code otherwise to stall the DCE.
++ */
++static int cfi_set_feature_value(struct dwc_otg_pcd *pcd)
++{
++	int retval = -DWC_E_NOT_SUPPORTED;
++	uint16_t wIndex, wValue;
++	uint8_t bRequest;
++	struct dwc_otg_core_if *coreif;
++	cfiobject_t *cfi = pcd->cfi;
++	struct cfi_usb_ctrlrequest *ctrl_req;
++	uint8_t *buf;
++	ctrl_req = &cfi->ctrl_req;
++
++	buf = pcd->cfi->ctrl_req.data;
++
++	coreif = GET_CORE_IF(pcd);
++	bRequest = ctrl_req->bRequest;
++	wIndex = DWC_CONSTANT_CPU_TO_LE16(ctrl_req->wIndex);
++	wValue = DWC_CONSTANT_CPU_TO_LE16(ctrl_req->wValue);
++
++	/* See which feature is to be modified */
++	switch (wIndex) {
++	case FT_ID_DMA_BUFFER_SETUP:
++		/* Modify the feature */
++		if ((retval = cfi_ep_set_sg_val(buf, pcd)) < 0)
++			return retval;
++
++		/* And send this request to the gadget */
++		cfi->need_gadget_att = 1;
++		break;
++
++	case FT_ID_DMA_BUFF_ALIGN:
++		if ((retval = cfi_ep_set_alignment_val(buf, pcd)) < 0)
++			return retval;
++		cfi->need_gadget_att = 1;
++		break;
++
++	case FT_ID_DMA_CONCAT_SETUP:
++		/* Modify the feature */
++		if ((retval = cfi_ep_set_concat_val(buf, pcd)) < 0)
++			return retval;
++		cfi->need_gadget_att = 1;
++		break;
++
++	case FT_ID_DMA_CIRCULAR:
++		CFI_INFO("FT_ID_DMA_CIRCULAR\n");
++		break;
++
++	case FT_ID_THRESHOLD_SETUP:
++		CFI_INFO("FT_ID_THRESHOLD_SETUP\n");
++		break;
++
++	case FT_ID_DFIFO_DEPTH:
++		CFI_INFO("FT_ID_DFIFO_DEPTH\n");
++		break;
++
++	case FT_ID_TX_FIFO_DEPTH:
++		CFI_INFO("FT_ID_TX_FIFO_DEPTH\n");
++		if ((retval = cfi_ep_set_tx_fifo_val(buf, pcd)) < 0)
++			return retval;
++		cfi->need_gadget_att = 0;
++		break;
++
++	case FT_ID_RX_FIFO_DEPTH:
++		CFI_INFO("FT_ID_RX_FIFO_DEPTH\n");
++		if ((retval = cfi_set_rx_fifo_val(buf, pcd)) < 0)
++			return retval;
++		cfi->need_gadget_att = 0;
++		break;
++	}
++
++	return retval;
++}
++
++#endif //DWC_UTE_CFI
+--- /dev/null
++++ b/drivers/usb/host/dwc_otg/dwc_otg_cfi.h
+@@ -0,0 +1,320 @@
++/* ==========================================================================
++ * Synopsys HS OTG Linux Software Driver and documentation (hereinafter,
++ * "Software") is an Unsupported proprietary work of Synopsys, Inc. unless
++ * otherwise expressly agreed to in writing between Synopsys and you.
++ *
++ * The Software IS NOT an item of Licensed Software or Licensed Product under
++ * any End User Software License Agreement or Agreement for Licensed Product
++ * with Synopsys or any supplement thereto. You are permitted to use and
++ * redistribute this Software in source and binary forms, with or without
++ * modification, provided that redistributions of source code must retain this
++ * notice. You may not view, use, disclose, copy or distribute this file or
++ * any information contained herein except pursuant to this license grant from
++ * Synopsys. If you do not agree with this notice, including the disclaimer
++ * below, then you are not authorized to use the Software.
++ *
++ * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS" BASIS
++ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
++ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
++ * ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS BE LIABLE FOR ANY DIRECT,
++ * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
++ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
++ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
++ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
++ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
++ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
++ * DAMAGE.
++ * ========================================================================== */
++
++#if !defined(__DWC_OTG_CFI_H__)
++#define __DWC_OTG_CFI_H__
++
++#include "dwc_otg_pcd.h"
++#include "dwc_cfi_common.h"
++
++/**
++ * @file
++ * This file contains the CFI related OTG PCD specific common constants,
++ * interfaces (functions and macros) and data structures. The CFI Protocol is an
++ * optional interface for internal testing purposes that a DUT may implement to
++ * support testing of configurable features.
++ *
++ */
++
++struct dwc_otg_pcd;
++struct dwc_otg_pcd_ep;
++
++/** OTG CFI Features (properties) ID constants */
++/** This is a request for all Core Features */
++#define FT_ID_DMA_MODE					0x0001
++#define FT_ID_DMA_BUFFER_SETUP			0x0002
++#define FT_ID_DMA_BUFF_ALIGN			0x0003
++#define FT_ID_DMA_CONCAT_SETUP			0x0004
++#define FT_ID_DMA_CIRCULAR				0x0005
++#define FT_ID_THRESHOLD_SETUP			0x0006
++#define FT_ID_DFIFO_DEPTH				0x0007
++#define FT_ID_TX_FIFO_DEPTH				0x0008
++#define FT_ID_RX_FIFO_DEPTH				0x0009
++
++/**********************************************************/
++#define CFI_INFO_DEF
++
++#ifdef CFI_INFO_DEF
++#define CFI_INFO(fmt...)	DWC_PRINTF("CFI: " fmt);
++#else
++#define CFI_INFO(fmt...)
++#endif
++
++#define min(x,y) ({ \
++	x < y ? x : y; })
++
++#define max(x,y) ({ \
++	x > y ? x : y; })
++
++/**
++ * Descriptor DMA SG Buffer setup structure (SG buffer). This structure is
++ * also used for setting up a buffer for Circular DDMA.
++ */
++struct _ddma_sg_buffer_setup {
++#define BS_SG_VAL_DESC_LEN	6
++	/* The OUT EP address */
++	uint8_t bOutEndpointAddress;
++	/* The IN EP address */
++	uint8_t bInEndpointAddress;
++	/* Number of bytes to put between transfer segments (must be DWORD-aligned) */
++	uint8_t bOffset;
++	/* The number of transfer segments (one DMA descriptor per segment) */
++	uint8_t bCount;
++	/* Size (in byte) of each transfer segment */
++	uint16_t wSize;
++} __attribute__ ((packed));
++typedef struct _ddma_sg_buffer_setup ddma_sg_buffer_setup_t;
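++
++/*
++ * Example of a host-supplied SG setup (illustrative values):
++ * bOutEndpointAddress = 0x01, bInEndpointAddress = 0x81, bOffset = 0,
++ * bCount = 4, wSize = 512 describes four 512-byte transfer segments laid out
++ * contiguously in the request buffer for both endpoints of the pair.
++ */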
++
++/** Descriptor DMA Concatenation Buffer setup structure */
++struct _ddma_concat_buffer_setup_hdr {
++#define BS_CONCAT_VAL_HDR_LEN	4
++	/* The endpoint for which the buffer is to be set up */
++	uint8_t bEndpointAddress;
++	/* The count of descriptors to be used */
++	uint8_t bDescCount;
++	/* The total size of the transfer */
++	uint16_t wSize;
++} __attribute__ ((packed));
++typedef struct _ddma_concat_buffer_setup_hdr ddma_concat_buffer_setup_hdr_t;
++
++/** Descriptor DMA Concatenation Buffer setup structure */
++struct _ddma_concat_buffer_setup {
++	/* The SG header */
++	ddma_concat_buffer_setup_hdr_t hdr;
++
++	/* The XFER sizes pointer (allocated dynamically) */
++	uint16_t *wTxBytes;
++} __attribute__ ((packed));
++typedef struct _ddma_concat_buffer_setup ddma_concat_buffer_setup_t;
++
++/** Descriptor DMA Alignment Buffer setup structure */
++struct _ddma_align_buffer_setup {
++#define BS_ALIGN_VAL_HDR_LEN	2
++	uint8_t bEndpointAddress;
++	uint8_t bAlign;
++} __attribute__ ((packed));
++typedef struct _ddma_align_buffer_setup ddma_align_buffer_setup_t;
++
++/** Transmit FIFO Size setup structure */
++struct _tx_fifo_size_setup {
++	uint8_t bEndpointAddress;
++	uint16_t wDepth;
++} __attribute__ ((packed));
++typedef struct _tx_fifo_size_setup tx_fifo_size_setup_t;
++
++/** Receive FIFO Size setup structure */
++struct _rx_fifo_size_setup {
++	uint16_t wDepth;
++} __attribute__ ((packed));
++typedef struct _rx_fifo_size_setup rx_fifo_size_setup_t;
++
++/**
++ * struct cfi_usb_ctrlrequest - the CFI implementation of the struct usb_ctrlrequest
++ * This structure encapsulates the standard usb_ctrlrequest and adds a pointer
++ * to the data received in the data stage of a 3-stage Control Write request.
++ */
++struct cfi_usb_ctrlrequest {
++	uint8_t bRequestType;
++	uint8_t bRequest;
++	uint16_t wValue;
++	uint16_t wIndex;
++	uint16_t wLength;
++	uint8_t *data;
++} UPACKED;
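++
++/*
++ * The first five fields mirror the standard 8-byte USB SETUP packet; the
++ * extra <data> pointer is filled in by cfi_ctrl_write_complete() with the
++ * OUT buffer holding the data stage, so the SetFeature handlers can parse
++ * the new feature value from ctrl_req->data.
++ */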
++
++/*---------------------------------------------------------------------------*/
++
++/**
++ * The CFI wrapper of the enabled and activated dwc_otg_pcd_ep structures.
++ * This structure is used to store the buffer setup data for any
++ * enabled endpoint in the PCD.
++ */
++struct cfi_ep {
++	/* Entry for the list container */
++	dwc_list_link_t lh;
++	/* Pointer to the active PCD endpoint structure */
++	struct dwc_otg_pcd_ep *ep;
++	/* The last descriptor in the chain of DMA descriptors of the endpoint */
++	struct dwc_otg_dma_desc *dma_desc_last;
++	/* The SG feature value */
++	ddma_sg_buffer_setup_t *bm_sg;
++	/* The Circular feature value */
++	ddma_sg_buffer_setup_t *bm_circ;
++	/* The Concatenation feature value */
++	ddma_concat_buffer_setup_t *bm_concat;
++	/* The Alignment feature value */
++	ddma_align_buffer_setup_t *bm_align;
++	/* XFER length */
++	uint32_t xfer_len;
++	/*
++	 * Count of DMA descriptors currently used.
++	 * The total should not exceed the MAX_DMA_DESCS_PER_EP value
++	 * defined in the dwc_otg_cil.h
++	 */
++	uint32_t desc_count;
++};
++typedef struct cfi_ep cfi_ep_t;
++
++typedef struct cfi_dma_buff {
++#define CFI_IN_BUF_LEN	1024
++#define CFI_OUT_BUF_LEN	1024
++	dma_addr_t addr;
++	uint8_t *buf;
++} cfi_dma_buff_t;
++
++struct cfiobject;
++
++/**
++ * This is the interface for the CFI operations.
++ *
++ * @param	ep_enable			Called when any endpoint is enabled and activated.
++ * @param	release				Called when the CFI object is released and it needs to correctly
++ *								deallocate the dynamic memory
++ * @param	ctrl_write_complete	Called when the data stage of the request is complete
++ */
++typedef struct cfi_ops {
++	int (*ep_enable) (struct cfiobject * cfi, struct dwc_otg_pcd * pcd,
++			  struct dwc_otg_pcd_ep * ep);
++	void *(*ep_alloc_buf) (struct cfiobject * cfi, struct dwc_otg_pcd * pcd,
++			       struct dwc_otg_pcd_ep * ep, dma_addr_t * dma,
++			       unsigned size, gfp_t flags);
++	void (*release) (struct cfiobject * cfi);
++	int (*ctrl_write_complete) (struct cfiobject * cfi,
++				    struct dwc_otg_pcd * pcd);
++	void (*build_descriptors) (struct cfiobject * cfi,
++				   struct dwc_otg_pcd * pcd,
++				   struct dwc_otg_pcd_ep * ep,
++				   dwc_otg_pcd_request_t * req);
++} cfi_ops_t;
++
++struct cfiobject {
++	cfi_ops_t ops;
++	struct dwc_otg_pcd *pcd;
++	struct usb_gadget *gadget;
++
++	/* Buffers used to send/receive CFI-related request data */
++	cfi_dma_buff_t buf_in;
++	cfi_dma_buff_t buf_out;
++
++	/* CFI specific Control request wrapper */
++	struct cfi_usb_ctrlrequest ctrl_req;
++
++	/* The list of active EP's in the PCD of type cfi_ep_t */
++	dwc_list_link_t active_eps;
++
++	/* This flag shall control the propagation of a specific request
++	 * to the gadget's processing routines.
++	 * 0 - no gadget handling
++	 * 1 - the gadget needs to know about this request (w/o completing a status
++	 * phase - just return a 0 to the _setup callback)
++	 */
++	uint8_t need_gadget_att;
++
++	/* Flag indicating whether the status IN phase needs to be
++	 * completed by the PCD
++	 */
++	uint8_t need_status_in_complete;
++};
++typedef struct cfiobject cfiobject_t;
++
++#define DUMP_MSG
++
++#if defined(DUMP_MSG)
++static inline void dump_msg(const u8 * buf, unsigned int length)
++{
++	unsigned int start, num, i;
++	char line[52], *p;
++
++	if (length >= 512)
++		return;
++
++	start = 0;
++	while (length > 0) {
++		num = min(length, 16u);
++		p = line;
++		for (i = 0; i < num; ++i) {
++			if (i == 8)
++				*p++ = ' ';
++			DWC_SPRINTF(p, " %02x", buf[i]);
++			p += 3;
++		}
++		*p = 0;
++		DWC_DEBUG("%6x: %s\n", start, line);
++		buf += num;
++		start += num;
++		length -= num;
++	}
++}
++#else
++static inline void dump_msg(const u8 * buf, unsigned int length)
++{
++}
++#endif
++
++/**
++ * This function returns a pointer to cfi_ep_t object with the addr address.
++ */
++static inline struct cfi_ep *get_cfi_ep_by_addr(struct cfiobject *cfi,
++						uint8_t addr)
++{
++	struct cfi_ep *pcfiep;
++	dwc_list_link_t *tmp;
++
++	DWC_LIST_FOREACH(tmp, &cfi->active_eps) {
++		pcfiep = DWC_LIST_ENTRY(tmp, struct cfi_ep, lh);
++
++		if (pcfiep->ep->desc->bEndpointAddress == addr) {
++			return pcfiep;
++		}
++	}
++
++	return NULL;
++}
++
++/**
++ * This function returns a pointer to cfi_ep_t object that matches
++ * the dwc_otg_pcd_ep object.
++ */
++static inline struct cfi_ep *get_cfi_ep_by_pcd_ep(struct cfiobject *cfi,
++						  struct dwc_otg_pcd_ep *ep)
++{
++	struct cfi_ep *pcfiep = NULL;
++	dwc_list_link_t *tmp;
++
++	DWC_LIST_FOREACH(tmp, &cfi->active_eps) {
++		pcfiep = DWC_LIST_ENTRY(tmp, struct cfi_ep, lh);
++		if (pcfiep->ep == ep) {
++			return pcfiep;
++		}
++	}
++	return NULL;
++}
++
++int cfi_setup(struct dwc_otg_pcd *pcd, struct cfi_usb_ctrlrequest *ctrl);
++
++#endif /* (__DWC_OTG_CFI_H__) */
+--- /dev/null
++++ b/drivers/usb/host/dwc_otg/dwc_otg_cil.c
+@@ -0,0 +1,7141 @@
++/* ==========================================================================
++ * $File: //dwh/usb_iip/dev/software/otg/linux/drivers/dwc_otg_cil.c $
++ * $Revision: #191 $
++ * $Date: 2012/08/10 $
++ * $Change: 2047372 $
++ *
++ * Synopsys HS OTG Linux Software Driver and documentation (hereinafter,
++ * "Software") is an Unsupported proprietary work of Synopsys, Inc. unless
++ * otherwise expressly agreed to in writing between Synopsys and you.
++ *
++ * The Software IS NOT an item of Licensed Software or Licensed Product under
++ * any End User Software License Agreement or Agreement for Licensed Product
++ * with Synopsys or any supplement thereto. You are permitted to use and
++ * redistribute this Software in source and binary forms, with or without
++ * modification, provided that redistributions of source code must retain this
++ * notice. You may not view, use, disclose, copy or distribute this file or
++ * any information contained herein except pursuant to this license grant from
++ * Synopsys. If you do not agree with this notice, including the disclaimer
++ * below, then you are not authorized to use the Software.
++ *
++ * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS" BASIS
++ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
++ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
++ * ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS BE LIABLE FOR ANY DIRECT,
++ * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
++ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
++ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
++ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
++ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
++ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
++ * DAMAGE.
++ * ========================================================================== */
++
++/** @file
++ *
++ * The Core Interface Layer provides basic services for accessing and
++ * managing the DWC_otg hardware. These services are used by both the
++ * Host Controller Driver and the Peripheral Controller Driver.
++ *
++ * The CIL manages the memory map for the core so that the HCD and PCD
++ * don't have to do this separately. It also handles basic tasks like
++ * reading/writing the registers and data FIFOs in the controller.
++ * Some of the data access functions provide encapsulation of several
++ * operations required to perform a task, such as writing multiple
++ * registers to start a transfer. Finally, the CIL performs basic
++ * services that are not specific to either the host or device modes
++ * of operation. These services include management of the OTG Host
++ * Negotiation Protocol (HNP) and Session Request Protocol (SRP). A
++ * Diagnostic API is also provided to allow testing of the controller
++ * hardware.
++ *
++ * The Core Interface Layer has the following requirements:
++ * - Provides basic controller operations.
++ * - Minimal use of OS services.
++ * - The OS services used will be abstracted by using inline functions
++ *	 or macros.
++ *
++ */
++
++#include "dwc_os.h"
++#include "dwc_otg_regs.h"
++#include "dwc_otg_cil.h"
++
++static int dwc_otg_setup_params(dwc_otg_core_if_t * core_if);
++
++/**
++ * This function is called to initialize the DWC_otg CSR data
++ * structures. The register addresses in the device and host
++ * structures are initialized from the base address supplied by the
++ * caller. The calling function must make the OS calls to get the
++ * base address of the DWC_otg controller registers. The core_params
++ * argument holds the parameters that specify how the core should be
++ * configured.
++ *
++ * @param reg_base_addr Base address of DWC_otg core registers
++ *
++ */
++dwc_otg_core_if_t *dwc_otg_cil_init(const uint32_t * reg_base_addr)
++{
++	dwc_otg_core_if_t *core_if = 0;
++	dwc_otg_dev_if_t *dev_if = 0;
++	dwc_otg_host_if_t *host_if = 0;
++	uint8_t *reg_base = (uint8_t *) reg_base_addr;
++	int i = 0;
++
++	DWC_DEBUGPL(DBG_CILV, "%s(%p)\n", __func__, reg_base_addr);
++
++	core_if = DWC_ALLOC(sizeof(dwc_otg_core_if_t));
++
++	if (core_if == NULL) {
++		DWC_DEBUGPL(DBG_CIL,
++			    "Allocation of dwc_otg_core_if_t failed\n");
++		return 0;
++	}
++	core_if->core_global_regs = (dwc_otg_core_global_regs_t *) reg_base;
++
++	/*
++	 * Allocate the Device Mode structures.
++	 */
++	dev_if = DWC_ALLOC(sizeof(dwc_otg_dev_if_t));
++
++	if (dev_if == NULL) {
++		DWC_DEBUGPL(DBG_CIL, "Allocation of dwc_otg_dev_if_t failed\n");
++		DWC_FREE(core_if);
++		return 0;
++	}
++
++	dev_if->dev_global_regs =
++	    (dwc_otg_device_global_regs_t *) (reg_base +
++					      DWC_DEV_GLOBAL_REG_OFFSET);
++
++	for (i = 0; i < MAX_EPS_CHANNELS; i++) {
++		dev_if->in_ep_regs[i] = (dwc_otg_dev_in_ep_regs_t *)
++		    (reg_base + DWC_DEV_IN_EP_REG_OFFSET +
++		     (i * DWC_EP_REG_OFFSET));
++
++		dev_if->out_ep_regs[i] = (dwc_otg_dev_out_ep_regs_t *)
++		    (reg_base + DWC_DEV_OUT_EP_REG_OFFSET +
++		     (i * DWC_EP_REG_OFFSET));
++		DWC_DEBUGPL(DBG_CILV, "in_ep_regs[%d]->diepctl=%p\n",
++			    i, &dev_if->in_ep_regs[i]->diepctl);
++		DWC_DEBUGPL(DBG_CILV, "out_ep_regs[%d]->doepctl=%p\n",
++			    i, &dev_if->out_ep_regs[i]->doepctl);
++	}
++
++	dev_if->speed = 0;	// unknown
++
++	core_if->dev_if = dev_if;
++
++	/*
++	 * Allocate the Host Mode structures.
++	 */
++	host_if = DWC_ALLOC(sizeof(dwc_otg_host_if_t));
++
++	if (host_if == NULL) {
++		DWC_DEBUGPL(DBG_CIL,
++			    "Allocation of dwc_otg_host_if_t failed\n");
++		DWC_FREE(dev_if);
++		DWC_FREE(core_if);
++		return 0;
++	}
++
++	host_if->host_global_regs = (dwc_otg_host_global_regs_t *)
++	    (reg_base + DWC_OTG_HOST_GLOBAL_REG_OFFSET);
++
++	host_if->hprt0 =
++	    (uint32_t *) (reg_base + DWC_OTG_HOST_PORT_REGS_OFFSET);
++
++	for (i = 0; i < MAX_EPS_CHANNELS; i++) {
++		host_if->hc_regs[i] = (dwc_otg_hc_regs_t *)
++		    (reg_base + DWC_OTG_HOST_CHAN_REGS_OFFSET +
++		     (i * DWC_OTG_CHAN_REGS_OFFSET));
++		DWC_DEBUGPL(DBG_CILV, "hc_reg[%d]->hcchar=%p\n",
++			    i, &host_if->hc_regs[i]->hcchar);
++	}
++
++	host_if->num_host_channels = MAX_EPS_CHANNELS;
++	core_if->host_if = host_if;
++
++	for (i = 0; i < MAX_EPS_CHANNELS; i++) {
++		core_if->data_fifo[i] =
++		    (uint32_t *) (reg_base + DWC_OTG_DATA_FIFO_OFFSET +
++				  (i * DWC_OTG_DATA_FIFO_SIZE));
++		DWC_DEBUGPL(DBG_CILV, "data_fifo[%d]=0x%08lx\n",
++			    i, (unsigned long)core_if->data_fifo[i]);
++	}
++
++	core_if->pcgcctl = (uint32_t *) (reg_base + DWC_OTG_PCGCCTL_OFFSET);
++
++	/* Initialize lx_state to the L3 (disconnected) state */
++	core_if->lx_state = DWC_OTG_L3;
++	/*
++	 * Store the contents of the hardware configuration registers here for
++	 * easy access later.
++	 */
++	core_if->hwcfg1.d32 =
++	    DWC_READ_REG32(&core_if->core_global_regs->ghwcfg1);
++	core_if->hwcfg2.d32 =
++	    DWC_READ_REG32(&core_if->core_global_regs->ghwcfg2);
++	core_if->hwcfg3.d32 =
++	    DWC_READ_REG32(&core_if->core_global_regs->ghwcfg3);
++	core_if->hwcfg4.d32 =
++	    DWC_READ_REG32(&core_if->core_global_regs->ghwcfg4);
++
++	/* Force host mode to get HPTXFSIZ exact power on value */
++	{
++		gusbcfg_data_t gusbcfg = {.d32 = 0 };
++		gusbcfg.d32 =  DWC_READ_REG32(&core_if->core_global_regs->gusbcfg);
++		gusbcfg.b.force_host_mode = 1;
++		DWC_WRITE_REG32(&core_if->core_global_regs->gusbcfg, gusbcfg.d32);
++		dwc_mdelay(100);
++		core_if->hptxfsiz.d32 =
++		DWC_READ_REG32(&core_if->core_global_regs->hptxfsiz);
++		gusbcfg.d32 =  DWC_READ_REG32(&core_if->core_global_regs->gusbcfg);
++		/* Restore the original mode now that HPTXFSIZ has been read */
++		gusbcfg.b.force_host_mode = 0;
++		DWC_WRITE_REG32(&core_if->core_global_regs->gusbcfg, gusbcfg.d32);
++		dwc_mdelay(100);
++	}
++
++	DWC_DEBUGPL(DBG_CILV, "hwcfg1=%08x\n", core_if->hwcfg1.d32);
++	DWC_DEBUGPL(DBG_CILV, "hwcfg2=%08x\n", core_if->hwcfg2.d32);
++	DWC_DEBUGPL(DBG_CILV, "hwcfg3=%08x\n", core_if->hwcfg3.d32);
++	DWC_DEBUGPL(DBG_CILV, "hwcfg4=%08x\n", core_if->hwcfg4.d32);
++
++	core_if->hcfg.d32 =
++	    DWC_READ_REG32(&core_if->host_if->host_global_regs->hcfg);
++	core_if->dcfg.d32 =
++	    DWC_READ_REG32(&core_if->dev_if->dev_global_regs->dcfg);
++
++	DWC_DEBUGPL(DBG_CILV, "hcfg=%08x\n", core_if->hcfg.d32);
++	DWC_DEBUGPL(DBG_CILV, "dcfg=%08x\n", core_if->dcfg.d32);
++
++	DWC_DEBUGPL(DBG_CILV, "op_mode=%0x\n", core_if->hwcfg2.b.op_mode);
++	DWC_DEBUGPL(DBG_CILV, "arch=%0x\n", core_if->hwcfg2.b.architecture);
++	DWC_DEBUGPL(DBG_CILV, "num_dev_ep=%d\n", core_if->hwcfg2.b.num_dev_ep);
++	DWC_DEBUGPL(DBG_CILV, "num_host_chan=%d\n",
++		    core_if->hwcfg2.b.num_host_chan);
++	DWC_DEBUGPL(DBG_CILV, "nonperio_tx_q_depth=0x%0x\n",
++		    core_if->hwcfg2.b.nonperio_tx_q_depth);
++	DWC_DEBUGPL(DBG_CILV, "host_perio_tx_q_depth=0x%0x\n",
++		    core_if->hwcfg2.b.host_perio_tx_q_depth);
++	DWC_DEBUGPL(DBG_CILV, "dev_token_q_depth=0x%0x\n",
++		    core_if->hwcfg2.b.dev_token_q_depth);
++
++	DWC_DEBUGPL(DBG_CILV, "Total FIFO SZ=%d\n",
++		    core_if->hwcfg3.b.dfifo_depth);
++	DWC_DEBUGPL(DBG_CILV, "xfer_size_cntr_width=%0x\n",
++		    core_if->hwcfg3.b.xfer_size_cntr_width);
++
++	/*
++	 * Set the SRP success bit for FS-I2C
++	 */
++	core_if->srp_success = 0;
++	core_if->srp_timer_started = 0;
++
++	/*
++	 * Create new workqueue and init works
++	 */
++	core_if->wq_otg = DWC_WORKQ_ALLOC("dwc_otg");
++	if (core_if->wq_otg == 0) {
++		DWC_WARN("DWC_WORKQ_ALLOC failed\n");
++		DWC_FREE(host_if);
++		DWC_FREE(dev_if);
++		DWC_FREE(core_if);
++		return 0;
++	}
++
++	core_if->snpsid = DWC_READ_REG32(&core_if->core_global_regs->gsnpsid);
++
++	DWC_PRINTF("Core Release: %x.%x%x%x\n",
++		   (core_if->snpsid >> 12 & 0xF),
++		   (core_if->snpsid >> 8 & 0xF),
++		   (core_if->snpsid >> 4 & 0xF), (core_if->snpsid & 0xF));
++
++	core_if->wkp_timer = DWC_TIMER_ALLOC("Wake Up Timer",
++					     w_wakeup_detected, core_if);
++	if (core_if->wkp_timer == 0) {
++		DWC_WARN("DWC_TIMER_ALLOC failed\n");
++		DWC_FREE(host_if);
++		DWC_FREE(dev_if);
++		DWC_WORKQ_FREE(core_if->wq_otg);
++		DWC_FREE(core_if);
++		return 0;
++	}
++
++	if (dwc_otg_setup_params(core_if)) {
++		DWC_WARN("Error while setting core params\n");
++	}
++
++	core_if->hibernation_suspend = 0;
++
++	/** ADP initialization */
++	dwc_otg_adp_init(core_if);
++
++	return core_if;
++}
++
++/**
++ * This function frees the structures allocated by dwc_otg_cil_init().
++ *
++ * @param core_if The core interface pointer returned from
++ * 		  dwc_otg_cil_init().
++ *
++ */
++void dwc_otg_cil_remove(dwc_otg_core_if_t * core_if)
++{
++	dctl_data_t dctl = {.d32 = 0 };
++	DWC_DEBUGPL(DBG_CILV, "%s(%p)\n", __func__, core_if);
++
++	/* Disable all interrupts */
++	DWC_MODIFY_REG32(&core_if->core_global_regs->gahbcfg, 1, 0);
++	DWC_WRITE_REG32(&core_if->core_global_regs->gintmsk, 0);
++
++	dctl.b.sftdiscon = 1;
++	if (core_if->snpsid >= OTG_CORE_REV_3_00a) {
++		DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->dctl, 0,
++				 dctl.d32);
++	}
++
++	if (core_if->wq_otg) {
++		DWC_WORKQ_WAIT_WORK_DONE(core_if->wq_otg, 500);
++		DWC_WORKQ_FREE(core_if->wq_otg);
++	}
++	if (core_if->dev_if) {
++		DWC_FREE(core_if->dev_if);
++	}
++	if (core_if->host_if) {
++		DWC_FREE(core_if->host_if);
++	}
++
++	/** Remove ADP Stuff  */
++	dwc_otg_adp_remove(core_if);
++	if (core_if->core_params) {
++		DWC_FREE(core_if->core_params);
++	}
++	if (core_if->wkp_timer) {
++		DWC_TIMER_FREE(core_if->wkp_timer);
++	}
++	if (core_if->srp_timer) {
++		DWC_TIMER_FREE(core_if->srp_timer);
++	}
++	DWC_FREE(core_if);
++}
++
++/**
++ * This function enables the controller's Global Interrupt in the AHB Config
++ * register.
++ *
++ * @param core_if Programming view of DWC_otg controller.
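++ *
++ * Note: DWC_MODIFY_REG32(reg, clear_mask, set_mask) is a read-modify-write
++ * helper, so this function passes the GlblIntrMsk mask as the set argument,
++ * while dwc_otg_disable_global_interrupts() below passes the same mask as
++ * the clear argument.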
++ */
++void dwc_otg_enable_global_interrupts(dwc_otg_core_if_t * core_if)
++{
++	gahbcfg_data_t ahbcfg = {.d32 = 0 };
++	ahbcfg.b.glblintrmsk = 1;	/* Enable interrupts */
++	DWC_MODIFY_REG32(&core_if->core_global_regs->gahbcfg, 0, ahbcfg.d32);
++}
++
++/**
++ * This function disables the controller's Global Interrupt in the AHB Config
++ * register.
++ *
++ * @param core_if Programming view of DWC_otg controller.
++ */
++void dwc_otg_disable_global_interrupts(dwc_otg_core_if_t * core_if)
++{
++	gahbcfg_data_t ahbcfg = {.d32 = 0 };
++	ahbcfg.b.glblintrmsk = 1;	/* Disable interrupts */
++	DWC_MODIFY_REG32(&core_if->core_global_regs->gahbcfg, ahbcfg.d32, 0);
++}
++
++/**
++ * This function initializes the common interrupts, used in both
++ * device and host modes.
++ *
++ * @param core_if Programming view of the DWC_otg controller
++ *
++ */
++static void dwc_otg_enable_common_interrupts(dwc_otg_core_if_t * core_if)
++{
++	dwc_otg_core_global_regs_t *global_regs = core_if->core_global_regs;
++	gintmsk_data_t intr_mask = {.d32 = 0 };
++
++	/* Clear any pending OTG Interrupts */
++	DWC_WRITE_REG32(&global_regs->gotgint, 0xFFFFFFFF);
++
++	/* Clear any pending interrupts */
++	DWC_WRITE_REG32(&global_regs->gintsts, 0xFFFFFFFF);
++
++	/*
++	 * Enable the interrupts in the GINTMSK.
++	 */
++	intr_mask.b.modemismatch = 1;
++	intr_mask.b.otgintr = 1;
++
++	if (!core_if->dma_enable) {
++		intr_mask.b.rxstsqlvl = 1;
++	}
++
++	intr_mask.b.conidstschng = 1;
++	intr_mask.b.wkupintr = 1;
++	intr_mask.b.disconnect = 0;
++	intr_mask.b.usbsuspend = 1;
++	intr_mask.b.sessreqintr = 1;
++#ifdef CONFIG_USB_DWC_OTG_LPM
++	if (core_if->core_params->lpm_enable) {
++		intr_mask.b.lpmtranrcvd = 1;
++	}
++#endif
++	DWC_WRITE_REG32(&global_regs->gintmsk, intr_mask.d32);
++}
++
++/*
++ * The restore operation is modified to support Synopsys Emulated Powerdown and
++ * Hibernation. This function is for exiting from Device mode hibernation by
++ * Host Initiated Resume/Reset and Device Initiated Remote-Wakeup.
++ * @param core_if Programming view of DWC_otg controller.
++ * @param rem_wakeup - indicates whether resume is initiated by Device or Host.
++ * @param reset - indicates whether resume is initiated by Reset.
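++ *
++ * The exit sequence below follows the code: switch core power back on and
++ * reset the core via GPWRDN, assert the Restore signal, disable the power
++ * clamps, de-assert reset, disable the PMU/GPWRDN interrupts, restore the
++ * essential registers and poll GINTSTS for the Restore Done interrupt,
++ * then restore GUSBCFG/DCFG and either set PwrOnPrgDone or start
++ * remote-wakeup signaling depending on rem_wakeup.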
++ */
++int dwc_otg_device_hibernation_restore(dwc_otg_core_if_t * core_if,
++				       int rem_wakeup, int reset)
++{
++	gpwrdn_data_t gpwrdn = {.d32 = 0 };
++	pcgcctl_data_t pcgcctl = {.d32 = 0 };
++	dctl_data_t dctl = {.d32 = 0 };
++
++	int timeout = 2000;
++
++	if (!core_if->hibernation_suspend) {
++		DWC_PRINTF("Already exited from Hibernation\n");
++		return 1;
++	}
++
++	DWC_DEBUGPL(DBG_PCD, "%s called\n", __FUNCTION__);
++	/* Switch-on voltage to the core */
++	gpwrdn.b.pwrdnswtch = 1;
++	DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
++	dwc_udelay(10);
++
++	/* Reset core */
++	gpwrdn.d32 = 0;
++	gpwrdn.b.pwrdnrstn = 1;
++	DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
++	dwc_udelay(10);
++
++	/* Assert Restore signal */
++	gpwrdn.d32 = 0;
++	gpwrdn.b.restore = 1;
++	DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, 0, gpwrdn.d32);
++	dwc_udelay(10);
++
++	/* Disable power clamps */
++	gpwrdn.d32 = 0;
++	gpwrdn.b.pwrdnclmp = 1;
++	DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
++
++	if (rem_wakeup) {
++		dwc_udelay(70);
++	}
++
++	/* Deassert Reset core */
++	gpwrdn.d32 = 0;
++	gpwrdn.b.pwrdnrstn = 1;
++	DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, 0, gpwrdn.d32);
++	dwc_udelay(10);
++
++	/* Disable PMU interrupt */
++	gpwrdn.d32 = 0;
++	gpwrdn.b.pmuintsel = 1;
++	DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
++
++	/* Mask interrupts from gpwrdn */
++	gpwrdn.d32 = 0;
++	gpwrdn.b.connect_det_msk = 1;
++	gpwrdn.b.srp_det_msk = 1;
++	gpwrdn.b.disconn_det_msk = 1;
++	gpwrdn.b.rst_det_msk = 1;
++	gpwrdn.b.lnstchng_msk = 1;
++	DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
++
++	/* Indicates that we are going out from hibernation */
++	core_if->hibernation_suspend = 0;
++
++	/*
++	 * Set Restore Essential Regs bit in PCGCCTL register, restore_mode = 1
++	 * indicates restore from remote_wakeup
++	 */
++	restore_essential_regs(core_if, rem_wakeup, 0);
++
++	/*
++	 * Wait a little so that an updated value of hibernation_suspend is
++	 * visible in case the Restore Done interrupt was handled before polling
++	 */
++	dwc_udelay(10);
++
++	if (core_if->hibernation_suspend == 0) {
++		/*
++		 * Wait For Restore_done Interrupt. This mechanism of polling the
++		 * interrupt is introduced to avoid any possible race conditions
++		 */
++		do {
++			gintsts_data_t gintsts;
++			gintsts.d32 =
++			    DWC_READ_REG32(&core_if->core_global_regs->gintsts);
++			if (gintsts.b.restoredone) {
++				gintsts.d32 = 0;
++				gintsts.b.restoredone = 1;
++				DWC_WRITE_REG32(&core_if->core_global_regs->
++						gintsts, gintsts.d32);
++				DWC_PRINTF("Restore Done Interrupt seen\n");
++				break;
++			}
++			dwc_udelay(10);
++		} while (--timeout);
++		if (!timeout) {
++			DWC_PRINTF("Restore Done interrupt wasn't generated here\n");
++		}
++	}
++	/* Clear all pending interrupts */
++	DWC_WRITE_REG32(&core_if->core_global_regs->gintsts, 0xFFFFFFFF);
++
++	/* De-assert Restore */
++	gpwrdn.d32 = 0;
++	gpwrdn.b.restore = 1;
++	DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
++	dwc_udelay(10);
++
++	if (!rem_wakeup) {
++		pcgcctl.d32 = 0;
++		pcgcctl.b.rstpdwnmodule = 1;
++		DWC_MODIFY_REG32(core_if->pcgcctl, pcgcctl.d32, 0);
++	}
++
++	/* Restore GUSBCFG and DCFG */
++	DWC_WRITE_REG32(&core_if->core_global_regs->gusbcfg,
++			core_if->gr_backup->gusbcfg_local);
++	DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->dcfg,
++			core_if->dr_backup->dcfg);
++
++	/* De-assert Wakeup Logic */
++	gpwrdn.d32 = 0;
++	gpwrdn.b.pmuactv = 1;
++	DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
++	dwc_udelay(10);
++
++	if (!rem_wakeup) {
++		/* Set Device programming done bit */
++		dctl.b.pwronprgdone = 1;
++		DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->dctl, 0, dctl.d32);
++	} else {
++		/* Start Remote Wakeup Signaling */
++		dctl.d32 = core_if->dr_backup->dctl;
++		dctl.b.rmtwkupsig = 1;
++		DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->dctl, dctl.d32);
++	}
++
++	dwc_mdelay(2);
++	/* Clear all pending interrupts */
++	DWC_WRITE_REG32(&core_if->core_global_regs->gintsts, 0xFFFFFFFF);
++
++	/* Restore global registers */
++	dwc_otg_restore_global_regs(core_if);
++	/* Restore device global registers */
++	dwc_otg_restore_dev_regs(core_if, rem_wakeup);
++
++	if (rem_wakeup) {
++		dwc_mdelay(7);
++		dctl.d32 = 0;
++		dctl.b.rmtwkupsig = 1;
++		DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->dctl, dctl.d32, 0);
++	}
++
++	core_if->hibernation_suspend = 0;
++	/* The core will be in ON STATE */
++	core_if->lx_state = DWC_OTG_L0;
++	DWC_PRINTF("Hibernation recovery completes here\n");
++
++	return 1;
++}
++
++/*
++ * The restore operation is modified to support Synopsys Emulated Powerdown and
++ * Hibernation. This function is for exiting from Host mode hibernation by
++ * Host Initiated Resume/Reset and Device Initiated Remote-Wakeup.
++ * @param core_if Programming view of DWC_otg controller.
++ * @param rem_wakeup - indicates whether resume is initiated by Device or Host.
++ * @param reset - indicates whether resume is initiated by Reset.
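++ *
++ * The sequence mirrors dwc_otg_device_hibernation_restore() above, but
++ * restores GUSBCFG/HCFG instead of DCFG and finishes by programming HPRT0
++ * for either a port resume or a port reset, as selected by the reset
++ * argument.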
++ */
++int dwc_otg_host_hibernation_restore(dwc_otg_core_if_t * core_if,
++				     int rem_wakeup, int reset)
++{
++	gpwrdn_data_t gpwrdn = {.d32 = 0 };
++	hprt0_data_t hprt0 = {.d32 = 0 };
++
++	int timeout = 2000;
++
++	DWC_DEBUGPL(DBG_HCD, "%s called\n", __FUNCTION__);
++	/* Switch-on voltage to the core */
++	gpwrdn.b.pwrdnswtch = 1;
++	DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
++	dwc_udelay(10);
++
++	/* Reset core */
++	gpwrdn.d32 = 0;
++	gpwrdn.b.pwrdnrstn = 1;
++	DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
++	dwc_udelay(10);
++
++	/* Assert Restore signal */
++	gpwrdn.d32 = 0;
++	gpwrdn.b.restore = 1;
++	DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, 0, gpwrdn.d32);
++	dwc_udelay(10);
++
++	/* Disable power clamps */
++	gpwrdn.d32 = 0;
++	gpwrdn.b.pwrdnclmp = 1;
++	DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
++
++	if (!rem_wakeup) {
++		dwc_udelay(50);
++	}
++
++	/* Deassert Reset core */
++	gpwrdn.d32 = 0;
++	gpwrdn.b.pwrdnrstn = 1;
++	DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, 0, gpwrdn.d32);
++	dwc_udelay(10);
++
++	/* Disable PMU interrupt */
++	gpwrdn.d32 = 0;
++	gpwrdn.b.pmuintsel = 1;
++	DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
++
++	gpwrdn.d32 = 0;
++	gpwrdn.b.connect_det_msk = 1;
++	gpwrdn.b.srp_det_msk = 1;
++	gpwrdn.b.disconn_det_msk = 1;
++	gpwrdn.b.rst_det_msk = 1;
++	gpwrdn.b.lnstchng_msk = 1;
++	DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
++
++	/* Indicates that we are going out from hibernation */
++	core_if->hibernation_suspend = 0;
++
++	/* Set Restore Essential Regs bit in PCGCCTL register */
++	restore_essential_regs(core_if, rem_wakeup, 1);
++
++	/* Wait a little so that an updated value of hibernation_suspend is
++	 * visible in case the Restore Done interrupt was handled before polling */
++	dwc_udelay(10);
++
++	if (core_if->hibernation_suspend == 0) {
++		/* Wait For Restore_done Interrupt. This mechanism of polling the
++		 * interrupt is introduced to avoid any possible race conditions
++		 */
++		do {
++			gintsts_data_t gintsts;
++			gintsts.d32 = DWC_READ_REG32(&core_if->core_global_regs->gintsts);
++			if (gintsts.b.restoredone) {
++				gintsts.d32 = 0;
++				gintsts.b.restoredone = 1;
++			DWC_WRITE_REG32(&core_if->core_global_regs->gintsts, gintsts.d32);
++				DWC_DEBUGPL(DBG_HCD,"Restore Done Interrupt seen\n");
++				break;
++			}
++			dwc_udelay(10);
++		} while (--timeout);
++		if (!timeout) {
++			DWC_WARN("Restore Done interrupt wasn't generated\n");
++		}
++	}
++
++	/* Set the flag's value to 0 again after receiving restore done interrupt */
++	core_if->hibernation_suspend = 0;
++
++	/* This delay is not described in the functional spec, but without it
++	 * mode-mismatch interrupts occur because the core is still in Device
++	 * mode (gintsts.curmode == 0) immediately after restore */
++	dwc_mdelay(100);
++
++	/* Clear all pending interrupts */
++	DWC_WRITE_REG32(&core_if->core_global_regs->gintsts, 0xFFFFFFFF);
++
++	/* De-assert Restore */
++	gpwrdn.d32 = 0;
++	gpwrdn.b.restore = 1;
++	DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
++	dwc_udelay(10);
++
++	/* Restore GUSBCFG and HCFG */
++	DWC_WRITE_REG32(&core_if->core_global_regs->gusbcfg,
++			core_if->gr_backup->gusbcfg_local);
++	DWC_WRITE_REG32(&core_if->host_if->host_global_regs->hcfg,
++			core_if->hr_backup->hcfg_local);
++
++	/* De-assert Wakeup Logic */
++	gpwrdn.d32 = 0;
++	gpwrdn.b.pmuactv = 1;
++	DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
++	dwc_udelay(10);
++
++	/* Start the Resume operation by programming HPRT0 */
++	hprt0.d32 = core_if->hr_backup->hprt0_local;
++	hprt0.b.prtpwr = 1;
++	hprt0.b.prtena = 0;
++	hprt0.b.prtsusp = 0;
++	DWC_WRITE_REG32(core_if->host_if->hprt0, hprt0.d32);
++
++	DWC_PRINTF("Resume Starts Now\n");
++	if (!reset) {		// Indicates it is Resume Operation
++		hprt0.d32 = core_if->hr_backup->hprt0_local;
++		hprt0.b.prtres = 1;
++		hprt0.b.prtpwr = 1;
++		hprt0.b.prtena = 0;
++		hprt0.b.prtsusp = 0;
++		DWC_WRITE_REG32(core_if->host_if->hprt0, hprt0.d32);
++
++		if (!rem_wakeup)
++			hprt0.b.prtres = 0;
++		/* Wait for Resume time and then program HPRT again */
++		dwc_mdelay(100);
++		DWC_WRITE_REG32(core_if->host_if->hprt0, hprt0.d32);
++
++	} else {		// Indicates it is Reset Operation
++		hprt0.d32 = core_if->hr_backup->hprt0_local;
++		hprt0.b.prtrst = 1;
++		hprt0.b.prtpwr = 1;
++		hprt0.b.prtena = 0;
++		hprt0.b.prtsusp = 0;
++		DWC_WRITE_REG32(core_if->host_if->hprt0, hprt0.d32);
++		/* Wait for Reset time and then program HPRT again */
++		dwc_mdelay(60);
++		hprt0.b.prtrst = 0;
++		DWC_WRITE_REG32(core_if->host_if->hprt0, hprt0.d32);
++	}
++	/* Clear all interrupt status */
++	hprt0.d32 = dwc_otg_read_hprt0(core_if);
++	hprt0.b.prtconndet = 1;
++	hprt0.b.prtenchng = 1;
++	DWC_WRITE_REG32(core_if->host_if->hprt0, hprt0.d32);
++
++	/* Clear all pending interrupts */
++	DWC_WRITE_REG32(&core_if->core_global_regs->gintsts, 0xFFFFFFFF);
++
++	/* Restore global registers */
++	dwc_otg_restore_global_regs(core_if);
++	/* Restore host global registers */
++	dwc_otg_restore_host_regs(core_if, reset);
++
++	/* The core will be in ON STATE */
++	core_if->lx_state = DWC_OTG_L0;
++	DWC_PRINTF("Hibernation recovery is complete here\n");
++	return 0;
++}
++
++/** Saves some register values into system memory. */
++int dwc_otg_save_global_regs(dwc_otg_core_if_t * core_if)
++{
++	struct dwc_otg_global_regs_backup *gr;
++	int i;
++
++	gr = core_if->gr_backup;
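++	/* Lazily allocate the global-register backup area on first use */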
++	if (!gr) {
++		gr = DWC_ALLOC(sizeof(*gr));
++		if (!gr) {
++			return -DWC_E_NO_MEMORY;
++		}
++		core_if->gr_backup = gr;
++	}
++
++	gr->gotgctl_local = DWC_READ_REG32(&core_if->core_global_regs->gotgctl);
++	gr->gintmsk_local = DWC_READ_REG32(&core_if->core_global_regs->gintmsk);
++	gr->gahbcfg_local = DWC_READ_REG32(&core_if->core_global_regs->gahbcfg);
++	gr->gusbcfg_local = DWC_READ_REG32(&core_if->core_global_regs->gusbcfg);
++	gr->grxfsiz_local = DWC_READ_REG32(&core_if->core_global_regs->grxfsiz);
++	gr->gnptxfsiz_local = DWC_READ_REG32(&core_if->core_global_regs->gnptxfsiz);
++	gr->hptxfsiz_local = DWC_READ_REG32(&core_if->core_global_regs->hptxfsiz);
++#ifdef CONFIG_USB_DWC_OTG_LPM
++	gr->glpmcfg_local = DWC_READ_REG32(&core_if->core_global_regs->glpmcfg);
++#endif
++	gr->gi2cctl_local = DWC_READ_REG32(&core_if->core_global_regs->gi2cctl);
++	gr->pcgcctl_local = DWC_READ_REG32(core_if->pcgcctl);
++	gr->gdfifocfg_local =
++	    DWC_READ_REG32(&core_if->core_global_regs->gdfifocfg);
++	for (i = 0; i < MAX_EPS_CHANNELS; i++) {
++		gr->dtxfsiz_local[i] =
++		    DWC_READ_REG32(&(core_if->core_global_regs->dtxfsiz[i]));
++	}
++
++	DWC_DEBUGPL(DBG_ANY, "===========Backing Global registers==========\n");
++	DWC_DEBUGPL(DBG_ANY, "Backed up gotgctl   = %08x\n", gr->gotgctl_local);
++	DWC_DEBUGPL(DBG_ANY, "Backed up gintmsk   = %08x\n", gr->gintmsk_local);
++	DWC_DEBUGPL(DBG_ANY, "Backed up gahbcfg   = %08x\n", gr->gahbcfg_local);
++	DWC_DEBUGPL(DBG_ANY, "Backed up gusbcfg   = %08x\n", gr->gusbcfg_local);
++	DWC_DEBUGPL(DBG_ANY, "Backed up grxfsiz   = %08x\n", gr->grxfsiz_local);
++	DWC_DEBUGPL(DBG_ANY, "Backed up gnptxfsiz = %08x\n",
++		    gr->gnptxfsiz_local);
++	DWC_DEBUGPL(DBG_ANY, "Backed up hptxfsiz  = %08x\n",
++		    gr->hptxfsiz_local);
++#ifdef CONFIG_USB_DWC_OTG_LPM
++	DWC_DEBUGPL(DBG_ANY, "Backed up glpmcfg   = %08x\n", gr->glpmcfg_local);
++#endif
++	DWC_DEBUGPL(DBG_ANY, "Backed up gi2cctl   = %08x\n", gr->gi2cctl_local);
++	DWC_DEBUGPL(DBG_ANY, "Backed up pcgcctl   = %08x\n", gr->pcgcctl_local);
++	DWC_DEBUGPL(DBG_ANY,"Backed up gdfifocfg   = %08x\n",gr->gdfifocfg_local);
++
++	return 0;
++}
++
++/** Saves GINTMSK register before setting the msk bits. */
++int dwc_otg_save_gintmsk_reg(dwc_otg_core_if_t * core_if)
++{
++	struct dwc_otg_global_regs_backup *gr;
++
++	gr = core_if->gr_backup;
++	if (!gr) {
++		gr = DWC_ALLOC(sizeof(*gr));
++		if (!gr) {
++			return -DWC_E_NO_MEMORY;
++		}
++		core_if->gr_backup = gr;
++	}
++
++	gr->gintmsk_local = DWC_READ_REG32(&core_if->core_global_regs->gintmsk);
++
++	DWC_DEBUGPL(DBG_ANY,"=============Backing GINTMSK registers============\n");
++	DWC_DEBUGPL(DBG_ANY, "Backed up gintmsk   = %08x\n", gr->gintmsk_local);
++
++	return 0;
++}
++
++int dwc_otg_save_dev_regs(dwc_otg_core_if_t * core_if)
++{
++	struct dwc_otg_dev_regs_backup *dr;
++	int i;
++
++	dr = core_if->dr_backup;
++	if (!dr) {
++		dr = DWC_ALLOC(sizeof(*dr));
++		if (!dr) {
++			return -DWC_E_NO_MEMORY;
++		}
++		core_if->dr_backup = dr;
++	}
++
++	dr->dcfg = DWC_READ_REG32(&core_if->dev_if->dev_global_regs->dcfg);
++	dr->dctl = DWC_READ_REG32(&core_if->dev_if->dev_global_regs->dctl);
++	dr->daintmsk =
++	    DWC_READ_REG32(&core_if->dev_if->dev_global_regs->daintmsk);
++	dr->diepmsk =
++	    DWC_READ_REG32(&core_if->dev_if->dev_global_regs->diepmsk);
++	dr->doepmsk =
++	    DWC_READ_REG32(&core_if->dev_if->dev_global_regs->doepmsk);
++
++	for (i = 0; i < core_if->dev_if->num_in_eps; ++i) {
++		dr->diepctl[i] =
++		    DWC_READ_REG32(&core_if->dev_if->in_ep_regs[i]->diepctl);
++		dr->dieptsiz[i] =
++		    DWC_READ_REG32(&core_if->dev_if->in_ep_regs[i]->dieptsiz);
++		dr->diepdma[i] =
++		    DWC_READ_REG32(&core_if->dev_if->in_ep_regs[i]->diepdma);
++	}
++
++	DWC_DEBUGPL(DBG_ANY,
++		    "=============Backing Device registers==============\n");
++	DWC_DEBUGPL(DBG_ANY, "Backed up dcfg            = %08x\n", dr->dcfg);
++	DWC_DEBUGPL(DBG_ANY, "Backed up dctl        = %08x\n", dr->dctl);
++	DWC_DEBUGPL(DBG_ANY, "Backed up daintmsk            = %08x\n",
++		    dr->daintmsk);
++	DWC_DEBUGPL(DBG_ANY, "Backed up diepmsk        = %08x\n", dr->diepmsk);
++	DWC_DEBUGPL(DBG_ANY, "Backed up doepmsk        = %08x\n", dr->doepmsk);
++	for (i = 0; i < core_if->dev_if->num_in_eps; ++i) {
++		DWC_DEBUGPL(DBG_ANY, "Backed up diepctl[%d]        = %08x\n", i,
++			    dr->diepctl[i]);
++		DWC_DEBUGPL(DBG_ANY, "Backed up dieptsiz[%d]        = %08x\n",
++			    i, dr->dieptsiz[i]);
++		DWC_DEBUGPL(DBG_ANY, "Backed up diepdma[%d]        = %08x\n", i,
++			    dr->diepdma[i]);
++	}
++
++	return 0;
++}
++
++int dwc_otg_save_host_regs(dwc_otg_core_if_t * core_if)
++{
++	struct dwc_otg_host_regs_backup *hr;
++	int i;
++
++	hr = core_if->hr_backup;
++	if (!hr) {
++		hr = DWC_ALLOC(sizeof(*hr));
++		if (!hr) {
++			return -DWC_E_NO_MEMORY;
++		}
++		core_if->hr_backup = hr;
++	}
++
++	hr->hcfg_local =
++	    DWC_READ_REG32(&core_if->host_if->host_global_regs->hcfg);
++	hr->haintmsk_local =
++	    DWC_READ_REG32(&core_if->host_if->host_global_regs->haintmsk);
++	for (i = 0; i < dwc_otg_get_param_host_channels(core_if); ++i) {
++		hr->hcintmsk_local[i] =
++		    DWC_READ_REG32(&core_if->host_if->hc_regs[i]->hcintmsk);
++	}
++	hr->hprt0_local = DWC_READ_REG32(core_if->host_if->hprt0);
++	hr->hfir_local =
++	    DWC_READ_REG32(&core_if->host_if->host_global_regs->hfir);
++
++	DWC_DEBUGPL(DBG_ANY,
++		    "=============Backing Host registers===============\n");
++	DWC_DEBUGPL(DBG_ANY, "Backed up hcfg		= %08x\n",
++		    hr->hcfg_local);
++	DWC_DEBUGPL(DBG_ANY, "Backed up haintmsk = %08x\n", hr->haintmsk_local);
++	for (i = 0; i < dwc_otg_get_param_host_channels(core_if); ++i) {
++		DWC_DEBUGPL(DBG_ANY, "Backed up hcintmsk[%02d]=%08x\n", i,
++			    hr->hcintmsk_local[i]);
++	}
++	DWC_DEBUGPL(DBG_ANY, "Backed up hprt0           = %08x\n",
++		    hr->hprt0_local);
++	DWC_DEBUGPL(DBG_ANY, "Backed up hfir           = %08x\n",
++		    hr->hfir_local);
++
++	return 0;
++}
++
++int dwc_otg_restore_global_regs(dwc_otg_core_if_t *core_if)
++{
++	struct dwc_otg_global_regs_backup *gr;
++	int i;
++
++	gr = core_if->gr_backup;
++	if (!gr) {
++		return -DWC_E_INVALID;
++	}
++
++	DWC_WRITE_REG32(&core_if->core_global_regs->gotgctl, gr->gotgctl_local);
++	DWC_WRITE_REG32(&core_if->core_global_regs->gintmsk, gr->gintmsk_local);
++	DWC_WRITE_REG32(&core_if->core_global_regs->gusbcfg, gr->gusbcfg_local);
++	DWC_WRITE_REG32(&core_if->core_global_regs->gahbcfg, gr->gahbcfg_local);
++	DWC_WRITE_REG32(&core_if->core_global_regs->grxfsiz, gr->grxfsiz_local);
++	DWC_WRITE_REG32(&core_if->core_global_regs->gnptxfsiz,
++			gr->gnptxfsiz_local);
++	DWC_WRITE_REG32(&core_if->core_global_regs->hptxfsiz,
++			gr->hptxfsiz_local);
++	DWC_WRITE_REG32(&core_if->core_global_regs->gdfifocfg,
++			gr->gdfifocfg_local);
++	for (i = 0; i < MAX_EPS_CHANNELS; i++) {
++		DWC_WRITE_REG32(&core_if->core_global_regs->dtxfsiz[i],
++				gr->dtxfsiz_local[i]);
++	}
++
++	DWC_WRITE_REG32(&core_if->core_global_regs->gintsts, 0xFFFFFFFF);
++	DWC_WRITE_REG32(core_if->host_if->hprt0, 0x0000100A);
++	DWC_WRITE_REG32(&core_if->core_global_regs->gahbcfg,
++			(gr->gahbcfg_local));
++	return 0;
++}
++
++int dwc_otg_restore_dev_regs(dwc_otg_core_if_t * core_if, int rem_wakeup)
++{
++	struct dwc_otg_dev_regs_backup *dr;
++	int i;
++
++	dr = core_if->dr_backup;
++
++	if (!dr) {
++		return -DWC_E_INVALID;
++	}
++
++	if (!rem_wakeup) {
++		DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->dctl,
++				dr->dctl);
++	}
++
++	DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->daintmsk, dr->daintmsk);
++	DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->diepmsk, dr->diepmsk);
++	DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->doepmsk, dr->doepmsk);
++
++	for (i = 0; i < core_if->dev_if->num_in_eps; ++i) {
++		DWC_WRITE_REG32(&core_if->dev_if->in_ep_regs[i]->dieptsiz, dr->dieptsiz[i]);
++		DWC_WRITE_REG32(&core_if->dev_if->in_ep_regs[i]->diepdma, dr->diepdma[i]);
++		DWC_WRITE_REG32(&core_if->dev_if->in_ep_regs[i]->diepctl, dr->diepctl[i]);
++	}
++
++	return 0;
++}
++
++int dwc_otg_restore_host_regs(dwc_otg_core_if_t * core_if, int reset)
++{
++	struct dwc_otg_host_regs_backup *hr;
++	int i;
++	hr = core_if->hr_backup;
++
++	if (!hr) {
++		return -DWC_E_INVALID;
++	}
++
++	DWC_WRITE_REG32(&core_if->host_if->host_global_regs->hcfg, hr->hcfg_local);
++	//if (!reset)
++	//{
++	//      DWC_WRITE_REG32(&core_if->host_if->host_global_regs->hfir, hr->hfir_local);
++	//}
++
++	DWC_WRITE_REG32(&core_if->host_if->host_global_regs->haintmsk,
++			hr->haintmsk_local);
++	for (i = 0; i < dwc_otg_get_param_host_channels(core_if); ++i) {
++		DWC_WRITE_REG32(&core_if->host_if->hc_regs[i]->hcintmsk,
++				hr->hcintmsk_local[i]);
++	}
++
++	return 0;
++}
++
++int restore_lpm_i2c_regs(dwc_otg_core_if_t * core_if)
++{
++	struct dwc_otg_global_regs_backup *gr;
++
++	gr = core_if->gr_backup;
++
++	/* Restore values for LPM and I2C */
++#ifdef CONFIG_USB_DWC_OTG_LPM
++	DWC_WRITE_REG32(&core_if->core_global_regs->glpmcfg, gr->glpmcfg_local);
++#endif
++	DWC_WRITE_REG32(&core_if->core_global_regs->gi2cctl, gr->gi2cctl_local);
++
++	return 0;
++}
++
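++/**
++ * Restores the registers needed to bring the core out of hibernation.
++ * Re-loads the LPM/I2C registers, clears PCGCCTL, re-enables the global
++ * interrupt in GAHBCFG, unmasks only the Restore Done interrupt, restores
++ * GUSBCFG, and then restores HCFG or DCFG and programs PCGCCTL
++ * (RestoreMode/EssRegRestored) according to is_host and rmode.
++ */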
++int restore_essential_regs(dwc_otg_core_if_t * core_if, int rmode, int is_host)
++{
++	struct dwc_otg_global_regs_backup *gr;
++	pcgcctl_data_t pcgcctl = {.d32 = 0 };
++	gahbcfg_data_t gahbcfg = {.d32 = 0 };
++	gusbcfg_data_t gusbcfg = {.d32 = 0 };
++	gintmsk_data_t gintmsk = {.d32 = 0 };
++
++	/* Restore LPM and I2C registers */
++	restore_lpm_i2c_regs(core_if);
++
++	/* Set PCGCCTL to 0 */
++	DWC_WRITE_REG32(core_if->pcgcctl, 0x00000000);
++
++	gr = core_if->gr_backup;
++	/* Load restore values for [31:14] bits */
++	DWC_WRITE_REG32(core_if->pcgcctl,
++			((gr->pcgcctl_local & 0xffffc000) | 0x00020000));
++
++	/* Unmask the global interrupt in GAHBCFG and restore it */
++	gahbcfg.d32 = gr->gahbcfg_local;
++	gahbcfg.b.glblintrmsk = 1;
++	DWC_WRITE_REG32(&core_if->core_global_regs->gahbcfg, gahbcfg.d32);
++
++	/* Clear all pending interrupts */
++	DWC_WRITE_REG32(&core_if->core_global_regs->gintsts, 0xFFFFFFFF);
++
++	/* Unmask restore done interrupt */
++	gintmsk.b.restoredone = 1;
++	DWC_WRITE_REG32(&core_if->core_global_regs->gintmsk, gintmsk.d32);
++
++	/* Restore GUSBCFG and HCFG/DCFG */
++	gusbcfg.d32 = core_if->gr_backup->gusbcfg_local;
++	DWC_WRITE_REG32(&core_if->core_global_regs->gusbcfg, gusbcfg.d32);
++
++	if (is_host) {
++		hcfg_data_t hcfg = {.d32 = 0 };
++		hcfg.d32 = core_if->hr_backup->hcfg_local;
++		DWC_WRITE_REG32(&core_if->host_if->host_global_regs->hcfg,
++				hcfg.d32);
++
++		/* Load restore values for [31:14] bits */
++		pcgcctl.d32 = gr->pcgcctl_local & 0xffffc000;
++		pcgcctl.d32 = gr->pcgcctl_local | 0x00020000;
++
++		if (rmode)
++			pcgcctl.b.restoremode = 1;
++		DWC_WRITE_REG32(core_if->pcgcctl, pcgcctl.d32);
++		dwc_udelay(10);
++
++		/* Load restore values for [31:14] bits and set EssRegRestored bit */
++		pcgcctl.d32 = gr->pcgcctl_local | 0xffffc000;
++		pcgcctl.d32 = gr->pcgcctl_local & 0xffffc000;
++		pcgcctl.b.ess_reg_restored = 1;
++		if (rmode)
++			pcgcctl.b.restoremode = 1;
++		DWC_WRITE_REG32(core_if->pcgcctl, pcgcctl.d32);
++	} else {
++		dcfg_data_t dcfg = {.d32 = 0 };
++		dcfg.d32 = core_if->dr_backup->dcfg;
++		DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->dcfg, dcfg.d32);
++
++		/* Load restore values for [31:14] bits */
++		pcgcctl.d32 = gr->pcgcctl_local & 0xffffc000;
++		pcgcctl.d32 = gr->pcgcctl_local | 0x00020000;
++		if (!rmode) {
++			pcgcctl.d32 |= 0x208;
++		}
++		DWC_WRITE_REG32(core_if->pcgcctl, pcgcctl.d32);
++		dwc_udelay(10);
++
++		/* Load restore values for [31:14] bits */
++		pcgcctl.d32 = gr->pcgcctl_local & 0xffffc000;
++		pcgcctl.d32 = gr->pcgcctl_local | 0x00020000;
++		pcgcctl.b.ess_reg_restored = 1;
++		if (!rmode)
++			pcgcctl.d32 |= 0x208;
++		DWC_WRITE_REG32(core_if->pcgcctl, pcgcctl.d32);
++	}
++
++	return 0;
++}
++
++/**
++ * Initializes the FSLSPClkSel field of the HCFG register depending on the PHY
++ * type.
++ */
++static void init_fslspclksel(dwc_otg_core_if_t * core_if)
++{
++	uint32_t val;
++	hcfg_data_t hcfg;
++
++	if (((core_if->hwcfg2.b.hs_phy_type == 2) &&
++	     (core_if->hwcfg2.b.fs_phy_type == 1) &&
++	     (core_if->core_params->ulpi_fs_ls)) ||
++	    (core_if->core_params->phy_type == DWC_PHY_TYPE_PARAM_FS)) {
++		/* Full speed PHY */
++		val = DWC_HCFG_48_MHZ;
++	} else {
++		/* High speed PHY running at full speed or high speed */
++		val = DWC_HCFG_30_60_MHZ;
++	}
++
++	DWC_DEBUGPL(DBG_CIL, "Initializing HCFG.FSLSPClkSel to 0x%1x\n", val);
++	hcfg.d32 = DWC_READ_REG32(&core_if->host_if->host_global_regs->hcfg);
++	hcfg.b.fslspclksel = val;
++	DWC_WRITE_REG32(&core_if->host_if->host_global_regs->hcfg, hcfg.d32);
++}
++
++/**
++ * Initializes the DevSpd field of the DCFG register depending on the PHY type
++ * and the enumeration speed of the device.
++ */
++static void init_devspd(dwc_otg_core_if_t * core_if)
++{
++	uint32_t val;
++	dcfg_data_t dcfg;
++
++	if (((core_if->hwcfg2.b.hs_phy_type == 2) &&
++	     (core_if->hwcfg2.b.fs_phy_type == 1) &&
++	     (core_if->core_params->ulpi_fs_ls)) ||
++	    (core_if->core_params->phy_type == DWC_PHY_TYPE_PARAM_FS)) {
++		/* Full speed PHY */
++		val = 0x3;
++	} else if (core_if->core_params->speed == DWC_SPEED_PARAM_FULL) {
++		/* High speed PHY running at full speed */
++		val = 0x1;
++	} else {
++		/* High speed PHY running at high speed */
++		val = 0x0;
++	}
++
++	DWC_DEBUGPL(DBG_CIL, "Initializing DCFG.DevSpd to 0x%1x\n", val);
++
++	dcfg.d32 = DWC_READ_REG32(&core_if->dev_if->dev_global_regs->dcfg);
++	dcfg.b.devspd = val;
++	DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->dcfg, dcfg.d32);
++}
++
++/**
++ * This function calculates the number of IN EPS
++ * using GHWCFG1 and GHWCFG2 registers values
++ *
++ * @param core_if Programming view of the DWC_otg controller
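++ *
++ * GHWCFG1 is assumed to encode a 2-bit direction field per endpoint
++ * (0 = bidirectional, 1 = IN only, 2 = OUT only); the loop below walks
++ * these fields two bits at a time and counts an endpoint as IN-capable
++ * when the upper bit of its field is clear.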
++ */
++static uint32_t calc_num_in_eps(dwc_otg_core_if_t * core_if)
++{
++	uint32_t num_in_eps = 0;
++	uint32_t num_eps = core_if->hwcfg2.b.num_dev_ep;
++	uint32_t hwcfg1 = core_if->hwcfg1.d32 >> 3;
++	uint32_t num_tx_fifos = core_if->hwcfg4.b.num_in_eps;
++	int i;
++
++	for (i = 0; i < num_eps; ++i) {
++		if (!(hwcfg1 & 0x1))
++			num_in_eps++;
++
++		hwcfg1 >>= 2;
++	}
++
++	if (core_if->hwcfg4.b.ded_fifo_en) {
++		num_in_eps =
++		    (num_in_eps > num_tx_fifos) ? num_tx_fifos : num_in_eps;
++	}
++
++	return num_in_eps;
++}
++
++/**
++ * This function calculates the number of OUT EPS
++ * using GHWCFG1 and GHWCFG2 registers values
++ *
++ * @param core_if Programming view of the DWC_otg controller
++ */
++static uint32_t calc_num_out_eps(dwc_otg_core_if_t * core_if)
++{
++	uint32_t num_out_eps = 0;
++	uint32_t num_eps = core_if->hwcfg2.b.num_dev_ep;
++	uint32_t hwcfg1 = core_if->hwcfg1.d32 >> 2;
++	int i;
++
++	for (i = 0; i < num_eps; ++i) {
++		if (!(hwcfg1 & 0x1))
++			num_out_eps++;
++
++		hwcfg1 >>= 2;
++	}
++	return num_out_eps;
++}
++
++/**
++ * This function initializes the DWC_otg controller registers and
++ * prepares the core for device mode or host mode operation.
++ *
++ * @param core_if Programming view of the DWC_otg controller
++ *
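++ * In outline: program the common GUSBCFG bits and reset the controller,
++ * select and configure the PHY (FS PHY vs. ULPI/UTMI+ high-speed PHY)
++ * with a soft reset after each PHY change, program the GAHBCFG burst and
++ * DMA settings from the hardware architecture, set the HNP/SRP capability
++ * bits from hwcfg2.op_mode, optionally configure LPM, IC_USB and the OTG
++ * version, and finally enable the common interrupts.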
++ */
++void dwc_otg_core_init(dwc_otg_core_if_t * core_if)
++{
++	int i = 0;
++	dwc_otg_core_global_regs_t *global_regs = core_if->core_global_regs;
++	dwc_otg_dev_if_t *dev_if = core_if->dev_if;
++	gahbcfg_data_t ahbcfg = {.d32 = 0 };
++	gusbcfg_data_t usbcfg = {.d32 = 0 };
++	gi2cctl_data_t i2cctl = {.d32 = 0 };
++
++	DWC_DEBUGPL(DBG_CILV, "dwc_otg_core_init(%p) regs at %p\n",
++                    core_if, global_regs);
++
++	/* Common Initialization */
++	usbcfg.d32 = DWC_READ_REG32(&global_regs->gusbcfg);
++
++	/* Program the ULPI External VBUS bit if needed */
++	usbcfg.b.ulpi_ext_vbus_drv =
++	    (core_if->core_params->phy_ulpi_ext_vbus ==
++	     DWC_PHY_ULPI_EXTERNAL_VBUS) ? 1 : 0;
++
++	/* Set external TS Dline pulsing */
++	usbcfg.b.term_sel_dl_pulse =
++	    (core_if->core_params->ts_dline == 1) ? 1 : 0;
++	DWC_WRITE_REG32(&global_regs->gusbcfg, usbcfg.d32);
++
++	/* Reset the Controller */
++	dwc_otg_core_reset(core_if);
++
++	core_if->adp_enable = core_if->core_params->adp_supp_enable;
++	core_if->power_down = core_if->core_params->power_down;
++	core_if->otg_sts = 0;
++
++	/* Initialize parameters from Hardware configuration registers. */
++	dev_if->num_in_eps = calc_num_in_eps(core_if);
++	dev_if->num_out_eps = calc_num_out_eps(core_if);
++
++	DWC_DEBUGPL(DBG_CIL, "num_dev_perio_in_ep=%d\n",
++		    core_if->hwcfg4.b.num_dev_perio_in_ep);
++
++	for (i = 0; i < core_if->hwcfg4.b.num_dev_perio_in_ep; i++) {
++		dev_if->perio_tx_fifo_size[i] =
++		    DWC_READ_REG32(&global_regs->dtxfsiz[i]) >> 16;
++		DWC_DEBUGPL(DBG_CIL, "Periodic Tx FIFO SZ #%d=0x%0x\n",
++			    i, dev_if->perio_tx_fifo_size[i]);
++	}
++
++	for (i = 0; i < core_if->hwcfg4.b.num_in_eps; i++) {
++		dev_if->tx_fifo_size[i] =
++		    DWC_READ_REG32(&global_regs->dtxfsiz[i]) >> 16;
++		DWC_DEBUGPL(DBG_CIL, "Tx FIFO SZ #%d=0x%0x\n",
++			    i, dev_if->tx_fifo_size[i]);
++	}
++
++	core_if->total_fifo_size = core_if->hwcfg3.b.dfifo_depth;
++	core_if->rx_fifo_size = DWC_READ_REG32(&global_regs->grxfsiz);
++	core_if->nperio_tx_fifo_size =
++	    DWC_READ_REG32(&global_regs->gnptxfsiz) >> 16;
++
++	DWC_DEBUGPL(DBG_CIL, "Total FIFO SZ=%d\n", core_if->total_fifo_size);
++	DWC_DEBUGPL(DBG_CIL, "Rx FIFO SZ=%d\n", core_if->rx_fifo_size);
++	DWC_DEBUGPL(DBG_CIL, "NP Tx FIFO SZ=%d\n",
++		    core_if->nperio_tx_fifo_size);
++
++	/* This programming sequence needs to happen in FS mode before any other
++	 * programming occurs */
++	if ((core_if->core_params->speed == DWC_SPEED_PARAM_FULL) &&
++	    (core_if->core_params->phy_type == DWC_PHY_TYPE_PARAM_FS)) {
++		/* If FS mode with FS PHY */
++
++		/* core_init() is now called on every switch so only call the
++		 * following for the first time through. */
++		if (!core_if->phy_init_done) {
++			core_if->phy_init_done = 1;
++			DWC_DEBUGPL(DBG_CIL, "FS_PHY detected\n");
++			usbcfg.d32 = DWC_READ_REG32(&global_regs->gusbcfg);
++			usbcfg.b.physel = 1;
++			DWC_WRITE_REG32(&global_regs->gusbcfg, usbcfg.d32);
++
++			/* Reset after a PHY select */
++			dwc_otg_core_reset(core_if);
++		}
++
++		/* Program DCFG.DevSpd or HCFG.FSLSPClkSel to 48MHz in FS. Also
++		 * do this on HNP Dev/Host mode switches (done in dev_init and
++		 * host_init). */
++		if (dwc_otg_is_host_mode(core_if)) {
++			init_fslspclksel(core_if);
++		} else {
++			init_devspd(core_if);
++		}
++
++		if (core_if->core_params->i2c_enable) {
++			DWC_DEBUGPL(DBG_CIL, "FS_PHY Enabling I2c\n");
++			/* Program GUSBCFG.OtgUtmifsSel to I2C */
++			usbcfg.d32 = DWC_READ_REG32(&global_regs->gusbcfg);
++			usbcfg.b.otgutmifssel = 1;
++			DWC_WRITE_REG32(&global_regs->gusbcfg, usbcfg.d32);
++
++			/* Program GI2CCTL.I2CEn */
++			i2cctl.d32 = DWC_READ_REG32(&global_regs->gi2cctl);
++			i2cctl.b.i2cdevaddr = 1;
++			i2cctl.b.i2cen = 0;
++			DWC_WRITE_REG32(&global_regs->gi2cctl, i2cctl.d32);
++			i2cctl.b.i2cen = 1;
++			DWC_WRITE_REG32(&global_regs->gi2cctl, i2cctl.d32);
++		}
++
++	} /* endif speed == DWC_SPEED_PARAM_FULL */
++	else {
++		/* High speed PHY. */
++		if (!core_if->phy_init_done) {
++			core_if->phy_init_done = 1;
++			/* HS PHY parameters.  These parameters are preserved
++			 * during soft reset so only program the first time.  Do
++			 * a soft reset immediately after setting phyif.  */
++
++			if (core_if->core_params->phy_type == 2) {
++				/* ULPI interface */
++				usbcfg.b.ulpi_utmi_sel = 1;
++				usbcfg.b.phyif = 0;
++				usbcfg.b.ddrsel =
++				    core_if->core_params->phy_ulpi_ddr;
++			} else if (core_if->core_params->phy_type == 1) {
++				/* UTMI+ interface */
++				usbcfg.b.ulpi_utmi_sel = 0;
++				if (core_if->core_params->phy_utmi_width == 16) {
++					usbcfg.b.phyif = 1;
++
++				} else {
++					usbcfg.b.phyif = 0;
++				}
++			} else {
++				DWC_ERROR("FS PHY TYPE\n");
++			}
++			DWC_WRITE_REG32(&global_regs->gusbcfg, usbcfg.d32);
++			/* Reset after setting the PHY parameters */
++			dwc_otg_core_reset(core_if);
++		}
++	}
++
++	if ((core_if->hwcfg2.b.hs_phy_type == 2) &&
++	    (core_if->hwcfg2.b.fs_phy_type == 1) &&
++	    (core_if->core_params->ulpi_fs_ls)) {
++		DWC_DEBUGPL(DBG_CIL, "Setting ULPI FSLS\n");
++		usbcfg.d32 = DWC_READ_REG32(&global_regs->gusbcfg);
++		usbcfg.b.ulpi_fsls = 1;
++		usbcfg.b.ulpi_clk_sus_m = 1;
++		DWC_WRITE_REG32(&global_regs->gusbcfg, usbcfg.d32);
++	} else {
++		usbcfg.d32 = DWC_READ_REG32(&global_regs->gusbcfg);
++		usbcfg.b.ulpi_fsls = 0;
++		usbcfg.b.ulpi_clk_sus_m = 0;
++		DWC_WRITE_REG32(&global_regs->gusbcfg, usbcfg.d32);
++	}
++
++	/* Program the GAHBCFG Register. */
++	switch (core_if->hwcfg2.b.architecture) {
++
++	case DWC_SLAVE_ONLY_ARCH:
++		DWC_DEBUGPL(DBG_CIL, "Slave Only Mode\n");
++		ahbcfg.b.nptxfemplvl_txfemplvl =
++		    DWC_GAHBCFG_TXFEMPTYLVL_HALFEMPTY;
++		ahbcfg.b.ptxfemplvl = DWC_GAHBCFG_TXFEMPTYLVL_HALFEMPTY;
++		core_if->dma_enable = 0;
++		core_if->dma_desc_enable = 0;
++		break;
++
++	case DWC_EXT_DMA_ARCH:
++		DWC_DEBUGPL(DBG_CIL, "External DMA Mode\n");
++		{
++			uint8_t brst_sz = core_if->core_params->dma_burst_size;
++			ahbcfg.b.hburstlen = 0;
++			while (brst_sz > 1) {
++				ahbcfg.b.hburstlen++;
++				brst_sz >>= 1;
++			}
++		}
++		core_if->dma_enable = (core_if->core_params->dma_enable != 0);
++		core_if->dma_desc_enable =
++		    (core_if->core_params->dma_desc_enable != 0);
++		break;
++
++	case DWC_INT_DMA_ARCH:
++		DWC_DEBUGPL(DBG_CIL, "Internal DMA Mode\n");
++		/* Old value was DWC_GAHBCFG_INT_DMA_BURST_INCR - done for
++		  Host mode ISOC in issue fix - vahrama */
++		/* Broadcom had altered to (1<<3)|(0<<0) - WRESP=1, max 4 beats */
++		ahbcfg.b.hburstlen = (1<<3)|(0<<0);//DWC_GAHBCFG_INT_DMA_BURST_INCR4;
++		core_if->dma_enable = (core_if->core_params->dma_enable != 0);
++		core_if->dma_desc_enable =
++		    (core_if->core_params->dma_desc_enable != 0);
++		break;
++
++	}
++	if (core_if->dma_enable) {
++		if (core_if->dma_desc_enable) {
++			DWC_PRINTF("Using Descriptor DMA mode\n");
++		} else {
++			DWC_PRINTF("Using Buffer DMA mode\n");
++
++		}
++	} else {
++		DWC_PRINTF("Using Slave mode\n");
++		core_if->dma_desc_enable = 0;
++	}
++
++	if (core_if->core_params->ahb_single) {
++		ahbcfg.b.ahbsingle = 1;
++	}
++
++	ahbcfg.b.dmaenable = core_if->dma_enable;
++	DWC_WRITE_REG32(&global_regs->gahbcfg, ahbcfg.d32);
++
++	core_if->en_multiple_tx_fifo = core_if->hwcfg4.b.ded_fifo_en;
++
++	core_if->pti_enh_enable = core_if->core_params->pti_enable != 0;
++	core_if->multiproc_int_enable = core_if->core_params->mpi_enable;
++	DWC_PRINTF("Periodic Transfer Interrupt Enhancement - %s\n",
++		   ((core_if->pti_enh_enable) ? "enabled" : "disabled"));
++	DWC_PRINTF("Multiprocessor Interrupt Enhancement - %s\n",
++		   ((core_if->multiproc_int_enable) ? "enabled" : "disabled"));
++
++	/*
++	 * Program the GUSBCFG register.
++	 */
++	usbcfg.d32 = DWC_READ_REG32(&global_regs->gusbcfg);
++
++	switch (core_if->hwcfg2.b.op_mode) {
++	case DWC_MODE_HNP_SRP_CAPABLE:
++		usbcfg.b.hnpcap = (core_if->core_params->otg_cap ==
++				   DWC_OTG_CAP_PARAM_HNP_SRP_CAPABLE);
++		usbcfg.b.srpcap = (core_if->core_params->otg_cap !=
++				   DWC_OTG_CAP_PARAM_NO_HNP_SRP_CAPABLE);
++		break;
++
++	case DWC_MODE_SRP_ONLY_CAPABLE:
++		usbcfg.b.hnpcap = 0;
++		usbcfg.b.srpcap = (core_if->core_params->otg_cap !=
++				   DWC_OTG_CAP_PARAM_NO_HNP_SRP_CAPABLE);
++		break;
++
++	case DWC_MODE_NO_HNP_SRP_CAPABLE:
++		usbcfg.b.hnpcap = 0;
++		usbcfg.b.srpcap = 0;
++		break;
++
++	case DWC_MODE_SRP_CAPABLE_DEVICE:
++		usbcfg.b.hnpcap = 0;
++		usbcfg.b.srpcap = (core_if->core_params->otg_cap !=
++				   DWC_OTG_CAP_PARAM_NO_HNP_SRP_CAPABLE);
++		break;
++
++	case DWC_MODE_NO_SRP_CAPABLE_DEVICE:
++		usbcfg.b.hnpcap = 0;
++		usbcfg.b.srpcap = 0;
++		break;
++
++	case DWC_MODE_SRP_CAPABLE_HOST:
++		usbcfg.b.hnpcap = 0;
++		usbcfg.b.srpcap = (core_if->core_params->otg_cap !=
++				   DWC_OTG_CAP_PARAM_NO_HNP_SRP_CAPABLE);
++		break;
++
++	case DWC_MODE_NO_SRP_CAPABLE_HOST:
++		usbcfg.b.hnpcap = 0;
++		usbcfg.b.srpcap = 0;
++		break;
++	}
++
++	DWC_WRITE_REG32(&global_regs->gusbcfg, usbcfg.d32);
++
++#ifdef CONFIG_USB_DWC_OTG_LPM
++	if (core_if->core_params->lpm_enable) {
++		glpmcfg_data_t lpmcfg = {.d32 = 0 };
++
++		/* To enable LPM support set lpm_cap_en bit */
++		lpmcfg.b.lpm_cap_en = 1;
++
++		/* Make AppL1Res ACK */
++		lpmcfg.b.appl_resp = 1;
++
++		/* Retry 3 times */
++		lpmcfg.b.retry_count = 3;
++
++		DWC_MODIFY_REG32(&core_if->core_global_regs->glpmcfg,
++				 0, lpmcfg.d32);
++
++	}
++#endif
++	if (core_if->core_params->ic_usb_cap) {
++		gusbcfg_data_t gusbcfg = {.d32 = 0 };
++		gusbcfg.b.ic_usb_cap = 1;
++		DWC_MODIFY_REG32(&core_if->core_global_regs->gusbcfg,
++				 0, gusbcfg.d32);
++	}
++	{
++		gotgctl_data_t gotgctl = {.d32 = 0 };
++		gotgctl.b.otgver = core_if->core_params->otg_ver;
++		DWC_MODIFY_REG32(&core_if->core_global_regs->gotgctl, 0,
++				 gotgctl.d32);
++		/* Set OTG version supported */
++		core_if->otg_ver = core_if->core_params->otg_ver;
++		DWC_PRINTF("OTG VER PARAM: %d, OTG VER FLAG: %d\n",
++			   core_if->core_params->otg_ver, core_if->otg_ver);
++	}
++
++
++	/* Enable common interrupts */
++	dwc_otg_enable_common_interrupts(core_if);
++
++	/* Do device or host initialization based on mode during PCD
++	 * and HCD initialization */
++	if (dwc_otg_is_host_mode(core_if)) {
++		DWC_DEBUGPL(DBG_ANY, "Host Mode\n");
++		core_if->op_state = A_HOST;
++	} else {
++		DWC_DEBUGPL(DBG_ANY, "Device Mode\n");
++		core_if->op_state = B_PERIPHERAL;
++#ifdef DWC_DEVICE_ONLY
++		dwc_otg_core_dev_init(core_if);
++#endif
++	}
++}
++
++/**
++ * This function enables the Device mode interrupts.
++ *
++ * @param core_if Programming view of DWC_otg controller
++ */
++void dwc_otg_enable_device_interrupts(dwc_otg_core_if_t * core_if)
++{
++	gintmsk_data_t intr_mask = {.d32 = 0 };
++	dwc_otg_core_global_regs_t *global_regs = core_if->core_global_regs;
++
++	DWC_DEBUGPL(DBG_CIL, "%s()\n", __func__);
++
++	/* Disable all interrupts. */
++	DWC_WRITE_REG32(&global_regs->gintmsk, 0);
++
++	/* Clear any pending interrupts */
++	DWC_WRITE_REG32(&global_regs->gintsts, 0xFFFFFFFF);
++
++	/* Enable the common interrupts */
++	dwc_otg_enable_common_interrupts(core_if);
++
++	/* Enable interrupts */
++	intr_mask.b.usbreset = 1;
++	intr_mask.b.enumdone = 1;
++	/* Disable Disconnect interrupt in Device mode */
++	intr_mask.b.disconnect = 0;
++
++	if (!core_if->multiproc_int_enable) {
++		intr_mask.b.inepintr = 1;
++		intr_mask.b.outepintr = 1;
++	}
++
++	intr_mask.b.erlysuspend = 1;
++
++	if (core_if->en_multiple_tx_fifo == 0) {
++		intr_mask.b.epmismatch = 1;
++	}
++
++	//intr_mask.b.incomplisoout = 1;
++	intr_mask.b.incomplisoin = 1;
++
++/* Enable the ignore frame number for ISOC xfers - MAS */
++/* Disable to support high bandwidth ISOC transfers - manukz */
++#if 0
++#ifdef DWC_UTE_PER_IO
++	if (core_if->dma_enable) {
++		if (core_if->dma_desc_enable) {
++			dctl_data_t dctl1 = {.d32 = 0 };
++			dctl1.b.ifrmnum = 1;
++			DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->
++					 dctl, 0, dctl1.d32);
++			DWC_DEBUG("----Enabled Ignore frame number (0x%08x)",
++				  DWC_READ_REG32(&core_if->dev_if->
++						 dev_global_regs->dctl));
++		}
++	}
++#endif
++#endif
++#ifdef DWC_EN_ISOC
++	if (core_if->dma_enable) {
++		if (core_if->dma_desc_enable == 0) {
++			if (core_if->pti_enh_enable) {
++				dctl_data_t dctl = {.d32 = 0 };
++				dctl.b.ifrmnum = 1;
++				DWC_MODIFY_REG32(&core_if->
++						 dev_if->dev_global_regs->dctl,
++						 0, dctl.d32);
++			} else {
++				intr_mask.b.incomplisoin = 1;
++				intr_mask.b.incomplisoout = 1;
++			}
++		}
++	} else {
++		intr_mask.b.incomplisoin = 1;
++		intr_mask.b.incomplisoout = 1;
++	}
++#endif /* DWC_EN_ISOC */
++
++	/** @todo NGS: Should this be a module parameter? */
++#ifdef USE_PERIODIC_EP
++	intr_mask.b.isooutdrop = 1;
++	intr_mask.b.eopframe = 1;
++	intr_mask.b.incomplisoin = 1;
++	intr_mask.b.incomplisoout = 1;
++#endif
++
++	DWC_MODIFY_REG32(&global_regs->gintmsk, intr_mask.d32, intr_mask.d32);
++
++	DWC_DEBUGPL(DBG_CIL, "%s() gintmsk=%0x\n", __func__,
++		    DWC_READ_REG32(&global_regs->gintmsk));
++}
++
++/**
++ * This function initializes the DWC_otg controller registers for
++ * device mode.
++ *
++ * @param core_if Programming view of DWC_otg controller
++ *
++ */
++void dwc_otg_core_dev_init(dwc_otg_core_if_t * core_if)
++{
++	int i;
++	dwc_otg_core_global_regs_t *global_regs = core_if->core_global_regs;
++	dwc_otg_dev_if_t *dev_if = core_if->dev_if;
++	dwc_otg_core_params_t *params = core_if->core_params;
++	dcfg_data_t dcfg = {.d32 = 0 };
++	depctl_data_t diepctl = {.d32 = 0 };
++	grstctl_t resetctl = {.d32 = 0 };
++	uint32_t rx_fifo_size;
++	fifosize_data_t nptxfifosize;
++	fifosize_data_t txfifosize;
++	dthrctl_data_t dthrctl;
++	fifosize_data_t ptxfifosize;
++	uint16_t rxfsiz, nptxfsiz;
++	gdfifocfg_data_t gdfifocfg = {.d32 = 0 };
++	hwcfg3_data_t hwcfg3 = {.d32 = 0 };
++
++	/* Restart the Phy Clock */
++	DWC_WRITE_REG32(core_if->pcgcctl, 0);
++
++	/* Device configuration register */
++	init_devspd(core_if);
++	dcfg.d32 = DWC_READ_REG32(&dev_if->dev_global_regs->dcfg);
++	dcfg.b.descdma = (core_if->dma_desc_enable) ? 1 : 0;
++	dcfg.b.perfrint = DWC_DCFG_FRAME_INTERVAL_80;
++	/* Enable Device OUT NAK in case of DDMA mode*/
++	if (core_if->core_params->dev_out_nak) {
++		dcfg.b.endevoutnak = 1;
++	}
++
++	if (core_if->core_params->cont_on_bna) {
++		dctl_data_t dctl = {.d32 = 0 };
++		dctl.b.encontonbna = 1;
++		DWC_MODIFY_REG32(&dev_if->dev_global_regs->dctl, 0, dctl.d32);
++	}
++
++
++	DWC_WRITE_REG32(&dev_if->dev_global_regs->dcfg, dcfg.d32);
++
++	/* Configure data FIFO sizes */
++	if (core_if->hwcfg2.b.dynamic_fifo && params->enable_dynamic_fifo) {
++		DWC_DEBUGPL(DBG_CIL, "Total FIFO Size=%d\n",
++			    core_if->total_fifo_size);
++		DWC_DEBUGPL(DBG_CIL, "Rx FIFO Size=%d\n",
++			    params->dev_rx_fifo_size);
++		DWC_DEBUGPL(DBG_CIL, "NP Tx FIFO Size=%d\n",
++			    params->dev_nperio_tx_fifo_size);
++
++		/* Rx FIFO */
++		DWC_DEBUGPL(DBG_CIL, "initial grxfsiz=%08x\n",
++			    DWC_READ_REG32(&global_regs->grxfsiz));
++
++#ifdef DWC_UTE_CFI
++		core_if->pwron_rxfsiz = DWC_READ_REG32(&global_regs->grxfsiz);
++		core_if->init_rxfsiz = params->dev_rx_fifo_size;
++#endif
++		rx_fifo_size = params->dev_rx_fifo_size;
++		DWC_WRITE_REG32(&global_regs->grxfsiz, rx_fifo_size);
++
++		DWC_DEBUGPL(DBG_CIL, "new grxfsiz=%08x\n",
++			    DWC_READ_REG32(&global_regs->grxfsiz));
++
++		/** Set Periodic Tx FIFO Mask all bits 0 */
++		core_if->p_tx_msk = 0;
++
++		/** Set Tx FIFO Mask all bits 0 */
++		core_if->tx_msk = 0;
++
++		if (core_if->en_multiple_tx_fifo == 0) {
++			/* Non-periodic Tx FIFO */
++			DWC_DEBUGPL(DBG_CIL, "initial gnptxfsiz=%08x\n",
++				    DWC_READ_REG32(&global_regs->gnptxfsiz));
++
++			nptxfifosize.b.depth = params->dev_nperio_tx_fifo_size;
++			nptxfifosize.b.startaddr = params->dev_rx_fifo_size;
++
++			DWC_WRITE_REG32(&global_regs->gnptxfsiz,
++					nptxfifosize.d32);
++
++			DWC_DEBUGPL(DBG_CIL, "new gnptxfsiz=%08x\n",
++				    DWC_READ_REG32(&global_regs->gnptxfsiz));
++
++			/**@todo NGS: Fix Periodic FIFO Sizing! */
++			/*
++			 * Periodic Tx FIFOs These FIFOs are numbered from 1 to 15.
++			 * Indexes of the FIFO size module parameters in the
++			 * dev_perio_tx_fifo_size array and the FIFO size registers in
++			 * the dptxfsiz array run from 0 to 14.
++			 */
++			/** @todo Finish debug of this */
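++			/* Worked example (illustrative numbers only): with a
++			 * 512-word Rx FIFO and a 256-word non-periodic Tx
++			 * FIFO, the NPTx FIFO starts at word 512 and periodic
++			 * Tx FIFO #1 starts at word 512 + 256 = 768; each
++			 * further FIFO starts where the previous one ends. */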
++			ptxfifosize.b.startaddr =
++			    nptxfifosize.b.startaddr + nptxfifosize.b.depth;
++			for (i = 0; i < core_if->hwcfg4.b.num_dev_perio_in_ep; i++) {
++				ptxfifosize.b.depth =
++				    params->dev_perio_tx_fifo_size[i];
++				DWC_DEBUGPL(DBG_CIL,
++					    "initial dtxfsiz[%d]=%08x\n", i,
++					    DWC_READ_REG32(&global_regs->dtxfsiz
++							   [i]));
++				DWC_WRITE_REG32(&global_regs->dtxfsiz[i],
++						ptxfifosize.d32);
++				DWC_DEBUGPL(DBG_CIL, "new dtxfsiz[%d]=%08x\n",
++					    i,
++					    DWC_READ_REG32(&global_regs->dtxfsiz
++							   [i]));
++				ptxfifosize.b.startaddr += ptxfifosize.b.depth;
++			}
++		} else {
++			/*
++			 * Tx FIFOs These FIFOs are numbered from 1 to 15.
++			 * Indexes of the FIFO size module parameters in the
++			 * dev_tx_fifo_size array and the FIFO size registers in
++			 * the dtxfsiz array run from 0 to 14.
++			 */
++
++			/* Non-periodic Tx FIFO */
++			DWC_DEBUGPL(DBG_CIL, "initial gnptxfsiz=%08x\n",
++				    DWC_READ_REG32(&global_regs->gnptxfsiz));
++
++#ifdef DWC_UTE_CFI
++			core_if->pwron_gnptxfsiz =
++			    (DWC_READ_REG32(&global_regs->gnptxfsiz) >> 16);
++			core_if->init_gnptxfsiz =
++			    params->dev_nperio_tx_fifo_size;
++#endif
++			nptxfifosize.b.depth = params->dev_nperio_tx_fifo_size;
++			nptxfifosize.b.startaddr = params->dev_rx_fifo_size;
++
++			DWC_WRITE_REG32(&global_regs->gnptxfsiz,
++					nptxfifosize.d32);
++
++			DWC_DEBUGPL(DBG_CIL, "new gnptxfsiz=%08x\n",
++				    DWC_READ_REG32(&global_regs->gnptxfsiz));
++
++			txfifosize.b.startaddr =
++			    nptxfifosize.b.startaddr + nptxfifosize.b.depth;
++
++			for (i = 0; i < core_if->hwcfg4.b.num_in_eps; i++) {
++
++				txfifosize.b.depth =
++				    params->dev_tx_fifo_size[i];
++
++				DWC_DEBUGPL(DBG_CIL,
++					    "initial dtxfsiz[%d]=%08x\n",
++					    i,
++					    DWC_READ_REG32(&global_regs->dtxfsiz
++							   [i]));
++
++#ifdef DWC_UTE_CFI
++				core_if->pwron_txfsiz[i] =
++				    (DWC_READ_REG32
++				     (&global_regs->dtxfsiz[i]) >> 16);
++				core_if->init_txfsiz[i] =
++				    params->dev_tx_fifo_size[i];
++#endif
++				DWC_WRITE_REG32(&global_regs->dtxfsiz[i],
++						txfifosize.d32);
++
++				DWC_DEBUGPL(DBG_CIL,
++					    "new dtxfsiz[%d]=%08x\n",
++					    i,
++					    DWC_READ_REG32(&global_regs->dtxfsiz
++							   [i]));
++
++				txfifosize.b.startaddr += txfifosize.b.depth;
++			}
++			if (core_if->snpsid <= OTG_CORE_REV_2_94a) {
++				/* Calculating DFIFOCFG for Device mode to include RxFIFO and NPTXFIFO */
++				gdfifocfg.d32 = DWC_READ_REG32(&global_regs->gdfifocfg);
++				hwcfg3.d32 = DWC_READ_REG32(&global_regs->ghwcfg3);
++				gdfifocfg.b.gdfifocfg = (DWC_READ_REG32(&global_regs->ghwcfg3) >> 16);
++				DWC_WRITE_REG32(&global_regs->gdfifocfg, gdfifocfg.d32);
++				rxfsiz = (DWC_READ_REG32(&global_regs->grxfsiz) & 0x0000ffff);
++				nptxfsiz = (DWC_READ_REG32(&global_regs->gnptxfsiz) >> 16);
++				gdfifocfg.b.epinfobase = rxfsiz + nptxfsiz;
++				DWC_WRITE_REG32(&global_regs->gdfifocfg, gdfifocfg.d32);
++			}
++		}
++
++		/* Flush the FIFOs */
++		dwc_otg_flush_tx_fifo(core_if, 0x10);	/* all Tx FIFOs */
++		dwc_otg_flush_rx_fifo(core_if);
++
++		/* Flush the Learning Queue. */
++		resetctl.b.intknqflsh = 1;
++		DWC_WRITE_REG32(&core_if->core_global_regs->grstctl, resetctl.d32);
++
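++		/* Shared Tx FIFO with DMA: initialize the driver's
++		 * next-endpoint sequence so that only EP0 is marked active
++		 * (0xff means inactive) and the IN endpoint mismatch count
++		 * in DCFG reflects just EP0 being active. */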
++		if (!core_if->core_params->en_multiple_tx_fifo && core_if->dma_enable) {
++			core_if->start_predict = 0;
++			for (i = 0; i<= core_if->dev_if->num_in_eps; ++i) {
++				core_if->nextep_seq[i] = 0xff;	// 0xff - EP not active
++			}
++			core_if->nextep_seq[0] = 0;
++			core_if->first_in_nextep_seq = 0;
++			diepctl.d32 = DWC_READ_REG32(&dev_if->in_ep_regs[0]->diepctl);
++			diepctl.b.nextep = 0;
++			DWC_WRITE_REG32(&dev_if->in_ep_regs[0]->diepctl, diepctl.d32);
++
++			/* Update IN Endpoint Mismatch Count by active IN NP EP count + 1 */
++			dcfg.d32 = DWC_READ_REG32(&dev_if->dev_global_regs->dcfg);
++			dcfg.b.epmscnt = 2;
++			DWC_WRITE_REG32(&dev_if->dev_global_regs->dcfg, dcfg.d32);
++
++			DWC_DEBUGPL(DBG_CILV,"%s first_in_nextep_seq= %2d; nextep_seq[]:\n",
++				__func__, core_if->first_in_nextep_seq);
++			for (i=0; i <= core_if->dev_if->num_in_eps; i++) {
++				DWC_DEBUGPL(DBG_CILV, "%2d ", core_if->nextep_seq[i]);
++			}
++			DWC_DEBUGPL(DBG_CILV,"\n");
++		}
++
++		/* Clear all pending Device Interrupts */
++		/** @todo - should this condition be checked, or should all
++		 *  pending interrupts be cleared in any case?
++		 */
++		if (core_if->multiproc_int_enable) {
++			for (i = 0; i < core_if->dev_if->num_in_eps; ++i) {
++				DWC_WRITE_REG32(&dev_if->
++						dev_global_regs->diepeachintmsk[i], 0);
++			}
++		}
++
++		for (i = 0; i < core_if->dev_if->num_out_eps; ++i) {
++			DWC_WRITE_REG32(&dev_if->
++					dev_global_regs->doepeachintmsk[i], 0);
++		}
++
++		DWC_WRITE_REG32(&dev_if->dev_global_regs->deachint, 0xFFFFFFFF);
++		DWC_WRITE_REG32(&dev_if->dev_global_regs->deachintmsk, 0);
++	} else {
++		DWC_WRITE_REG32(&dev_if->dev_global_regs->diepmsk, 0);
++		DWC_WRITE_REG32(&dev_if->dev_global_regs->doepmsk, 0);
++		DWC_WRITE_REG32(&dev_if->dev_global_regs->daint, 0xFFFFFFFF);
++		DWC_WRITE_REG32(&dev_if->dev_global_regs->daintmsk, 0);
++	}
++
++	for (i = 0; i <= dev_if->num_in_eps; i++) {
++		depctl_data_t depctl;
++		depctl.d32 = DWC_READ_REG32(&dev_if->in_ep_regs[i]->diepctl);
++		if (depctl.b.epena) {
++			depctl.d32 = 0;
++			depctl.b.epdis = 1;
++			depctl.b.snak = 1;
++		} else {
++			depctl.d32 = 0;
++		}
++
++		DWC_WRITE_REG32(&dev_if->in_ep_regs[i]->diepctl, depctl.d32);
++
++		DWC_WRITE_REG32(&dev_if->in_ep_regs[i]->dieptsiz, 0);
++		DWC_WRITE_REG32(&dev_if->in_ep_regs[i]->diepdma, 0);
++		DWC_WRITE_REG32(&dev_if->in_ep_regs[i]->diepint, 0xFF);
++	}
++
++	for (i = 0; i <= dev_if->num_out_eps; i++) {
++		depctl_data_t depctl;
++		depctl.d32 = DWC_READ_REG32(&dev_if->out_ep_regs[i]->doepctl);
++		if (depctl.b.epena) {
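++			/* Endpoint is currently enabled: set the global OUT
++			 * NAK and wait for GOUTNakEff, disable the endpoint
++			 * and wait for EPDisabled, then clear the global OUT
++			 * NAK again. */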
++			dctl_data_t dctl = {.d32 = 0 };
++			gintmsk_data_t gintsts = {.d32 = 0 };
++			doepint_data_t doepint = {.d32 = 0 };
++			dctl.b.sgoutnak = 1;
++			DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->dctl, 0, dctl.d32);
++			do {
++				dwc_udelay(10);
++				gintsts.d32 = DWC_READ_REG32(&core_if->core_global_regs->gintsts);
++			} while (!gintsts.b.goutnakeff);
++			gintsts.d32 = 0;
++			gintsts.b.goutnakeff = 1;
++			DWC_WRITE_REG32(&core_if->core_global_regs->gintsts, gintsts.d32);
++
++			depctl.d32 = 0;
++			depctl.b.epdis = 1;
++			depctl.b.snak = 1;
++			DWC_WRITE_REG32(&core_if->dev_if->out_ep_regs[i]->doepctl, depctl.d32);
++			do {
++				dwc_udelay(10);
++				doepint.d32 = DWC_READ_REG32(&core_if->dev_if->
++					out_ep_regs[i]->doepint);
++			} while (!doepint.b.epdisabled);
++
++			doepint.b.epdisabled = 1;
++			DWC_WRITE_REG32(&core_if->dev_if->out_ep_regs[i]->doepint, doepint.d32);
++
++			dctl.d32 = 0;
++			dctl.b.cgoutnak = 1;
++			DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->dctl, 0, dctl.d32);
++		} else {
++			depctl.d32 = 0;
++		}
++
++		DWC_WRITE_REG32(&dev_if->out_ep_regs[i]->doepctl, depctl.d32);
++
++		DWC_WRITE_REG32(&dev_if->out_ep_regs[i]->doeptsiz, 0);
++		DWC_WRITE_REG32(&dev_if->out_ep_regs[i]->doepdma, 0);
++		DWC_WRITE_REG32(&dev_if->out_ep_regs[i]->doepint, 0xFF);
++	}
++
++	if (core_if->en_multiple_tx_fifo && core_if->dma_enable) {
++		dev_if->non_iso_tx_thr_en = params->thr_ctl & 0x1;
++		dev_if->iso_tx_thr_en = (params->thr_ctl >> 1) & 0x1;
++		dev_if->rx_thr_en = (params->thr_ctl >> 2) & 0x1;
++
++		dev_if->rx_thr_length = params->rx_thr_length;
++		dev_if->tx_thr_length = params->tx_thr_length;
++
++		dev_if->setup_desc_index = 0;
++
++		dthrctl.d32 = 0;
++		dthrctl.b.non_iso_thr_en = dev_if->non_iso_tx_thr_en;
++		dthrctl.b.iso_thr_en = dev_if->iso_tx_thr_en;
++		dthrctl.b.tx_thr_len = dev_if->tx_thr_length;
++		dthrctl.b.rx_thr_en = dev_if->rx_thr_en;
++		dthrctl.b.rx_thr_len = dev_if->rx_thr_length;
++		dthrctl.b.ahb_thr_ratio = params->ahb_thr_ratio;
++
++		DWC_WRITE_REG32(&dev_if->dev_global_regs->dtknqr3_dthrctl,
++				dthrctl.d32);
++
++		DWC_DEBUGPL(DBG_CIL,
++			    "Non ISO Tx Thr - %d\nISO Tx Thr - %d\nRx Thr - %d\nTx Thr Len - %d\nRx Thr Len - %d\n",
++			    dthrctl.b.non_iso_thr_en, dthrctl.b.iso_thr_en,
++			    dthrctl.b.rx_thr_en, dthrctl.b.tx_thr_len,
++			    dthrctl.b.rx_thr_len);
++
++	}
++
++	dwc_otg_enable_device_interrupts(core_if);
++
++	{
++		diepmsk_data_t msk = {.d32 = 0 };
++		msk.b.txfifoundrn = 1;
++		if (core_if->multiproc_int_enable) {
++			DWC_MODIFY_REG32(&dev_if->dev_global_regs->
++					 diepeachintmsk[0], msk.d32, msk.d32);
++		} else {
++			DWC_MODIFY_REG32(&dev_if->dev_global_regs->diepmsk,
++					 msk.d32, msk.d32);
++		}
++	}
++
++	if (core_if->multiproc_int_enable) {
++		/* Set NAK on Babble */
++		dctl_data_t dctl = {.d32 = 0 };
++		dctl.b.nakonbble = 1;
++		DWC_MODIFY_REG32(&dev_if->dev_global_regs->dctl, 0, dctl.d32);
++	}
++
++	if (core_if->snpsid >= OTG_CORE_REV_2_94a) {
++		dctl_data_t dctl = {.d32 = 0 };
++		dctl.d32 = DWC_READ_REG32(&dev_if->dev_global_regs->dctl);
++		dctl.b.sftdiscon = 0;
++		DWC_WRITE_REG32(&dev_if->dev_global_regs->dctl, dctl.d32);
++	}
++}
++
++/**
++ * This function enables the Host mode interrupts.
++ *
++ * @param core_if Programming view of DWC_otg controller
++ */
++void dwc_otg_enable_host_interrupts(dwc_otg_core_if_t * core_if)
++{
++	dwc_otg_core_global_regs_t *global_regs = core_if->core_global_regs;
++	gintmsk_data_t intr_mask = {.d32 = 0 };
++
++	DWC_DEBUGPL(DBG_CIL, "%s(%p)\n", __func__, core_if);
++
++	/* Disable all interrupts. */
++	DWC_WRITE_REG32(&global_regs->gintmsk, 0);
++
++	/* Clear any pending interrupts. */
++	DWC_WRITE_REG32(&global_regs->gintsts, 0xFFFFFFFF);
++
++	/* Enable the common interrupts */
++	dwc_otg_enable_common_interrupts(core_if);
++
++	/*
++	 * Enable host mode interrupts without disturbing common
++	 * interrupts.
++	 */
++
++	intr_mask.b.disconnect = 1;
++	intr_mask.b.portintr = 1;
++	intr_mask.b.hcintr = 1;
++
++	DWC_MODIFY_REG32(&global_regs->gintmsk, intr_mask.d32, intr_mask.d32);
++}
++
++/**
++ * This function disables the Host Mode interrupts.
++ *
++ * @param core_if Programming view of DWC_otg controller
++ */
++void dwc_otg_disable_host_interrupts(dwc_otg_core_if_t * core_if)
++{
++	dwc_otg_core_global_regs_t *global_regs = core_if->core_global_regs;
++	gintmsk_data_t intr_mask = {.d32 = 0 };
++
++	DWC_DEBUGPL(DBG_CILV, "%s()\n", __func__);
++
++	/*
++	 * Disable host mode interrupts without disturbing common
++	 * interrupts.
++	 */
++	intr_mask.b.sofintr = 1;
++	intr_mask.b.portintr = 1;
++	intr_mask.b.hcintr = 1;
++	intr_mask.b.ptxfempty = 1;
++	intr_mask.b.nptxfempty = 1;
++
++	DWC_MODIFY_REG32(&global_regs->gintmsk, intr_mask.d32, 0);
++}
++
++/**
++ * This function initializes the DWC_otg controller registers for
++ * host mode.
++ *
++ * This function flushes the Tx and Rx FIFOs and it flushes any entries in the
++ * request queues. Host channels are reset to ensure that they are ready for
++ * performing transfers.
++ *
++ * @param core_if Programming view of DWC_otg controller
++ *
++ */
++void dwc_otg_core_host_init(dwc_otg_core_if_t * core_if)
++{
++	dwc_otg_core_global_regs_t *global_regs = core_if->core_global_regs;
++	dwc_otg_host_if_t *host_if = core_if->host_if;
++	dwc_otg_core_params_t *params = core_if->core_params;
++	hprt0_data_t hprt0 = {.d32 = 0 };
++	fifosize_data_t nptxfifosize;
++	fifosize_data_t ptxfifosize;
++	uint16_t rxfsiz, nptxfsiz, hptxfsiz;
++	gdfifocfg_data_t gdfifocfg = {.d32 = 0 };
++	int i;
++	hcchar_data_t hcchar;
++	hcfg_data_t hcfg;
++	hfir_data_t hfir;
++	dwc_otg_hc_regs_t *hc_regs;
++	int num_channels;
++	gotgctl_data_t gotgctl = {.d32 = 0 };
++
++	DWC_DEBUGPL(DBG_CILV, "%s(%p)\n", __func__, core_if);
++
++	/* Restart the Phy Clock */
++	DWC_WRITE_REG32(core_if->pcgcctl, 0);
++
++	/* Initialize Host Configuration Register */
++	init_fslspclksel(core_if);
++	if (core_if->core_params->speed == DWC_SPEED_PARAM_FULL) {
++		hcfg.d32 = DWC_READ_REG32(&host_if->host_global_regs->hcfg);
++		hcfg.b.fslssupp = 1;
++		DWC_WRITE_REG32(&host_if->host_global_regs->hcfg, hcfg.d32);
++
++	}
++
++	/* This bit allows dynamic reloading of the HFIR register
++	 * during runtime. This bit needs to be programmed during
++	 * initial configuration and its value must not be changed
++	 * during runtime.*/
++	if (core_if->core_params->reload_ctl == 1) {
++		hfir.d32 = DWC_READ_REG32(&host_if->host_global_regs->hfir);
++		hfir.b.hfirrldctrl = 1;
++		DWC_WRITE_REG32(&host_if->host_global_regs->hfir, hfir.d32);
++	}
++
++	if (core_if->core_params->dma_desc_enable) {
++		uint8_t op_mode = core_if->hwcfg2.b.op_mode;
++		if (!
++		    (core_if->hwcfg4.b.desc_dma
++		     && (core_if->snpsid >= OTG_CORE_REV_2_90a)
++		     && ((op_mode == DWC_HWCFG2_OP_MODE_HNP_SRP_CAPABLE_OTG)
++			 || (op_mode == DWC_HWCFG2_OP_MODE_SRP_ONLY_CAPABLE_OTG)
++			 || (op_mode ==
++			     DWC_HWCFG2_OP_MODE_NO_HNP_SRP_CAPABLE_OTG)
++			 || (op_mode == DWC_HWCFG2_OP_MODE_SRP_CAPABLE_HOST)
++			 || (op_mode ==
++			     DWC_HWCFG2_OP_MODE_NO_SRP_CAPABLE_HOST)))) {
++
++			DWC_ERROR("Host can't operate in Descriptor DMA mode.\n"
++				  "Either core version is below 2.90a or "
++				  "GHWCFG2, GHWCFG4 registers' values do not allow Descriptor DMA in host mode.\n"
++				  "To run the driver in Buffer DMA host mode set dma_desc_enable "
++				  "module parameter to 0.\n");
++			return;
++		}
++		hcfg.d32 = DWC_READ_REG32(&host_if->host_global_regs->hcfg);
++		hcfg.b.descdma = 1;
++		DWC_WRITE_REG32(&host_if->host_global_regs->hcfg, hcfg.d32);
++	}
++
++	/* Configure data FIFO sizes */
++	if (core_if->hwcfg2.b.dynamic_fifo && params->enable_dynamic_fifo) {
++		DWC_DEBUGPL(DBG_CIL, "Total FIFO Size=%d\n",
++			    core_if->total_fifo_size);
++		DWC_DEBUGPL(DBG_CIL, "Rx FIFO Size=%d\n",
++			    params->host_rx_fifo_size);
++		DWC_DEBUGPL(DBG_CIL, "NP Tx FIFO Size=%d\n",
++			    params->host_nperio_tx_fifo_size);
++		DWC_DEBUGPL(DBG_CIL, "P Tx FIFO Size=%d\n",
++			    params->host_perio_tx_fifo_size);
++
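++		/*
++		 * The FIFOs are carved out of the shared FIFO RAM back to
++		 * back: the Rx FIFO starts at offset 0, the non-periodic Tx
++		 * FIFO follows it, and the periodic Tx FIFO follows that,
++		 * via the startaddr fields programmed below.
++		 */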
++		/* Rx FIFO */
++		DWC_DEBUGPL(DBG_CIL, "initial grxfsiz=%08x\n",
++			    DWC_READ_REG32(&global_regs->grxfsiz));
++		DWC_WRITE_REG32(&global_regs->grxfsiz,
++				params->host_rx_fifo_size);
++		DWC_DEBUGPL(DBG_CIL, "new grxfsiz=%08x\n",
++			    DWC_READ_REG32(&global_regs->grxfsiz));
++
++		/* Non-periodic Tx FIFO */
++		DWC_DEBUGPL(DBG_CIL, "initial gnptxfsiz=%08x\n",
++			    DWC_READ_REG32(&global_regs->gnptxfsiz));
++		nptxfifosize.b.depth = params->host_nperio_tx_fifo_size;
++		nptxfifosize.b.startaddr = params->host_rx_fifo_size;
++		DWC_WRITE_REG32(&global_regs->gnptxfsiz, nptxfifosize.d32);
++		DWC_DEBUGPL(DBG_CIL, "new gnptxfsiz=%08x\n",
++			    DWC_READ_REG32(&global_regs->gnptxfsiz));
++
++		/* Periodic Tx FIFO */
++		DWC_DEBUGPL(DBG_CIL, "initial hptxfsiz=%08x\n",
++			    DWC_READ_REG32(&global_regs->hptxfsiz));
++		ptxfifosize.b.depth = params->host_perio_tx_fifo_size;
++		ptxfifosize.b.startaddr =
++		    nptxfifosize.b.startaddr + nptxfifosize.b.depth;
++		DWC_WRITE_REG32(&global_regs->hptxfsiz, ptxfifosize.d32);
++		DWC_DEBUGPL(DBG_CIL, "new hptxfsiz=%08x\n",
++			    DWC_READ_REG32(&global_regs->hptxfsiz));
++
++		if (core_if->en_multiple_tx_fifo
++		    && core_if->snpsid <= OTG_CORE_REV_2_94a) {
++			/* Global DFIFOCFG calculation for Host mode - include RxFIFO, NPTXFIFO and HPTXFIFO */
++			gdfifocfg.d32 = DWC_READ_REG32(&global_regs->gdfifocfg);
++			rxfsiz = (DWC_READ_REG32(&global_regs->grxfsiz) & 0x0000ffff);
++			nptxfsiz = (DWC_READ_REG32(&global_regs->gnptxfsiz) >> 16);
++			hptxfsiz = (DWC_READ_REG32(&global_regs->hptxfsiz) >> 16);
++			gdfifocfg.b.epinfobase = rxfsiz + nptxfsiz + hptxfsiz;
++			DWC_WRITE_REG32(&global_regs->gdfifocfg, gdfifocfg.d32);
++		}
++	}
++
++	/* TODO - check this */
++	/* Clear Host Set HNP Enable in the OTG Control Register */
++	gotgctl.b.hstsethnpen = 1;
++	DWC_MODIFY_REG32(&global_regs->gotgctl, gotgctl.d32, 0);
++	/* Make sure the FIFOs are flushed. */
++	dwc_otg_flush_tx_fifo(core_if, 0x10 /* all TX FIFOs */ );
++	dwc_otg_flush_rx_fifo(core_if);
++
++	/* Clear Host Set HNP Enable in the OTG Control Register */
++	gotgctl.b.hstsethnpen = 1;
++	DWC_MODIFY_REG32(&global_regs->gotgctl, gotgctl.d32, 0);
++
++	if (!core_if->core_params->dma_desc_enable) {
++		/* Flush out any leftover queued requests. */
++		num_channels = core_if->core_params->host_channels;
++
++		for (i = 0; i < num_channels; i++) {
++			hc_regs = core_if->host_if->hc_regs[i];
++			hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
++			hcchar.b.chen = 0;
++			hcchar.b.chdis = 1;
++			hcchar.b.epdir = 0;
++			DWC_WRITE_REG32(&hc_regs->hcchar, hcchar.d32);
++		}
++
++		/* Halt all channels to put them into a known state. */
++		for (i = 0; i < num_channels; i++) {
++			int count = 0;
++			hc_regs = core_if->host_if->hc_regs[i];
++			hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
++			hcchar.b.chen = 1;
++			hcchar.b.chdis = 1;
++			hcchar.b.epdir = 0;
++			DWC_WRITE_REG32(&hc_regs->hcchar, hcchar.d32);
++			DWC_DEBUGPL(DBG_HCDV, "%s: Halt channel %d regs %p\n", __func__, i, hc_regs);
++			do {
++				hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
++				if (++count > 1000) {
++					DWC_ERROR
++					    ("%s: Unable to clear halt on channel %d (timeout HCCHAR 0x%X @%p)\n",
++					     __func__, i, hcchar.d32, &hc_regs->hcchar);
++					break;
++				}
++				dwc_udelay(1);
++			} while (hcchar.b.chen);
++		}
++	}
++
++	/* Turn on the vbus power. */
++	DWC_PRINTF("Init: Port Power? op_state=%d\n", core_if->op_state);
++	if (core_if->op_state == A_HOST) {
++		hprt0.d32 = dwc_otg_read_hprt0(core_if);
++		DWC_PRINTF("Init: Power Port (%d)\n", hprt0.b.prtpwr);
++		if (hprt0.b.prtpwr == 0) {
++			hprt0.b.prtpwr = 1;
++			DWC_WRITE_REG32(host_if->hprt0, hprt0.d32);
++		}
++	}
++
++	dwc_otg_enable_host_interrupts(core_if);
++}
++
++/**
++ * Prepares a host channel for transferring packets to/from a specific
++ * endpoint. The HCCHARn register is set up with the characteristics specified
++ * in _hc. Host channel interrupts that may need to be serviced while this
++ * transfer is in progress are enabled.
++ *
++ * @param core_if Programming view of DWC_otg controller
++ * @param hc Information needed to initialize the host channel
++ */
++void dwc_otg_hc_init(dwc_otg_core_if_t * core_if, dwc_hc_t * hc)
++{
++	hcintmsk_data_t hc_intr_mask;
++	hcchar_data_t hcchar;
++	hcsplt_data_t hcsplt;
++
++	uint8_t hc_num = hc->hc_num;
++	dwc_otg_host_if_t *host_if = core_if->host_if;
++	dwc_otg_hc_regs_t *hc_regs = host_if->hc_regs[hc_num];
++
++	/* Clear old interrupt conditions for this host channel. */
++	hc_intr_mask.d32 = 0xFFFFFFFF;
++	hc_intr_mask.b.reserved14_31 = 0;
++	DWC_WRITE_REG32(&hc_regs->hcint, hc_intr_mask.d32);
++
++	/* Enable channel interrupts required for this transfer. */
++	hc_intr_mask.d32 = 0;
++	hc_intr_mask.b.chhltd = 1;
++	if (core_if->dma_enable) {
++		/* For Descriptor DMA mode core halts the channel on AHB error. Interrupt is not required */
++		if (!core_if->dma_desc_enable)
++			hc_intr_mask.b.ahberr = 1;
++		else {
++			if (hc->ep_type == DWC_OTG_EP_TYPE_ISOC)
++				hc_intr_mask.b.xfercompl = 1;
++		}
++
++		if (hc->error_state && !hc->do_split &&
++		    hc->ep_type != DWC_OTG_EP_TYPE_ISOC) {
++			hc_intr_mask.b.ack = 1;
++			if (hc->ep_is_in) {
++				hc_intr_mask.b.datatglerr = 1;
++				if (hc->ep_type != DWC_OTG_EP_TYPE_INTR) {
++					hc_intr_mask.b.nak = 1;
++				}
++			}
++		}
++	} else {
++		switch (hc->ep_type) {
++		case DWC_OTG_EP_TYPE_CONTROL:
++		case DWC_OTG_EP_TYPE_BULK:
++			hc_intr_mask.b.xfercompl = 1;
++			hc_intr_mask.b.stall = 1;
++			hc_intr_mask.b.xacterr = 1;
++			hc_intr_mask.b.datatglerr = 1;
++			if (hc->ep_is_in) {
++				hc_intr_mask.b.bblerr = 1;
++			} else {
++				hc_intr_mask.b.nak = 1;
++				hc_intr_mask.b.nyet = 1;
++				if (hc->do_ping) {
++					hc_intr_mask.b.ack = 1;
++				}
++			}
++
++			if (hc->do_split) {
++				hc_intr_mask.b.nak = 1;
++				if (hc->complete_split) {
++					hc_intr_mask.b.nyet = 1;
++				} else {
++					hc_intr_mask.b.ack = 1;
++				}
++			}
++
++			if (hc->error_state) {
++				hc_intr_mask.b.ack = 1;
++			}
++			break;
++		case DWC_OTG_EP_TYPE_INTR:
++			hc_intr_mask.b.xfercompl = 1;
++			hc_intr_mask.b.nak = 1;
++			hc_intr_mask.b.stall = 1;
++			hc_intr_mask.b.xacterr = 1;
++			hc_intr_mask.b.datatglerr = 1;
++			hc_intr_mask.b.frmovrun = 1;
++
++			if (hc->ep_is_in) {
++				hc_intr_mask.b.bblerr = 1;
++			}
++			if (hc->error_state) {
++				hc_intr_mask.b.ack = 1;
++			}
++			if (hc->do_split) {
++				if (hc->complete_split) {
++					hc_intr_mask.b.nyet = 1;
++				} else {
++					hc_intr_mask.b.ack = 1;
++				}
++			}
++			break;
++		case DWC_OTG_EP_TYPE_ISOC:
++			hc_intr_mask.b.xfercompl = 1;
++			hc_intr_mask.b.frmovrun = 1;
++			hc_intr_mask.b.ack = 1;
++
++			if (hc->ep_is_in) {
++				hc_intr_mask.b.xacterr = 1;
++				hc_intr_mask.b.bblerr = 1;
++			}
++			break;
++		}
++	}
++	DWC_WRITE_REG32(&hc_regs->hcintmsk, hc_intr_mask.d32);
++
++	/*
++	 * Program the HCCHARn register with the endpoint characteristics for
++	 * the current transfer.
++	 */
++	hcchar.d32 = 0;
++	hcchar.b.devaddr = hc->dev_addr;
++	hcchar.b.epnum = hc->ep_num;
++	hcchar.b.epdir = hc->ep_is_in;
++	hcchar.b.lspddev = (hc->speed == DWC_OTG_EP_SPEED_LOW);
++	hcchar.b.eptype = hc->ep_type;
++	hcchar.b.mps = hc->max_packet;
++
++	DWC_WRITE_REG32(&host_if->hc_regs[hc_num]->hcchar, hcchar.d32);
++
++	DWC_DEBUGPL(DBG_HCDV, "%s: Channel %d, Dev Addr %d, EP #%d\n",
++                    __func__, hc->hc_num, hcchar.b.devaddr, hcchar.b.epnum);
++	DWC_DEBUGPL(DBG_HCDV, "	 Is In %d, Is Low Speed %d, EP Type %d, "
++                                "Max Pkt %d, Multi Cnt %d\n",
++                    hcchar.b.epdir, hcchar.b.lspddev, hcchar.b.eptype,
++                    hcchar.b.mps, hcchar.b.multicnt);
++
++	/*
++	 * Program the HCSPLIT register for SPLITs
++	 */
++	hcsplt.d32 = 0;
++	if (hc->do_split) {
++		DWC_DEBUGPL(DBG_HCDV, "Programming HC %d with split --> %s\n",
++			    hc->hc_num,
++			    hc->complete_split ? "CSPLIT" : "SSPLIT");
++		hcsplt.b.compsplt = hc->complete_split;
++		hcsplt.b.xactpos = hc->xact_pos;
++		hcsplt.b.hubaddr = hc->hub_addr;
++		hcsplt.b.prtaddr = hc->port_addr;
++		DWC_DEBUGPL(DBG_HCDV, "\t  comp split %d\n", hc->complete_split);
++		DWC_DEBUGPL(DBG_HCDV, "\t  xact pos %d\n", hc->xact_pos);
++		DWC_DEBUGPL(DBG_HCDV, "\t  hub addr %d\n", hc->hub_addr);
++		DWC_DEBUGPL(DBG_HCDV, "\t  port addr %d\n", hc->port_addr);
++		DWC_DEBUGPL(DBG_HCDV, "\t  is_in %d\n", hc->ep_is_in);
++		DWC_DEBUGPL(DBG_HCDV, "\t  Max Pkt: %d\n", hcchar.b.mps);
++		DWC_DEBUGPL(DBG_HCDV, "\t  xferlen: %d\n", hc->xfer_len);
++	}
++	DWC_WRITE_REG32(&host_if->hc_regs[hc_num]->hcsplt, hcsplt.d32);
++
++}
++
++/**
++ * Attempts to halt a host channel. This function should only be called in
++ * Slave mode or to abort a transfer in either Slave mode or DMA mode. Under
++ * normal circumstances in DMA mode, the controller halts the channel when the
++ * transfer is complete or a condition occurs that requires application
++ * intervention.
++ *
++ * In slave mode, checks for a free request queue entry, then sets the Channel
++ * Enable and Channel Disable bits of the Host Channel Characteristics
++ * register of the specified channel to initiate the halt. If there is no free
++ * request queue entry, sets only the Channel Disable bit of the HCCHARn
++ * register to flush requests for this channel. In the latter case, sets a
++ * flag to indicate that the host channel needs to be halted when a request
++ * queue slot is open.
++ *
++ * In DMA mode, always sets the Channel Enable and Channel Disable bits of the
++ * HCCHARn register. The controller ensures there is space in the request
++ * queue before submitting the halt request.
++ *
++ * Some time may elapse before the core flushes any posted requests for this
++ * host channel and halts. The Channel Halted interrupt handler completes the
++ * deactivation of the host channel.
++ *
++ * @param core_if Controller register interface.
++ * @param hc Host channel to halt.
++ * @param halt_status Reason for halting the channel.
++ */
++void dwc_otg_hc_halt(dwc_otg_core_if_t * core_if,
++		     dwc_hc_t * hc, dwc_otg_halt_status_e halt_status)
++{
++	gnptxsts_data_t nptxsts;
++	hptxsts_data_t hptxsts;
++	hcchar_data_t hcchar;
++	dwc_otg_hc_regs_t *hc_regs;
++	dwc_otg_core_global_regs_t *global_regs;
++	dwc_otg_host_global_regs_t *host_global_regs;
++
++	hc_regs = core_if->host_if->hc_regs[hc->hc_num];
++	global_regs = core_if->core_global_regs;
++	host_global_regs = core_if->host_if->host_global_regs;
++
++	DWC_ASSERT(!(halt_status == DWC_OTG_HC_XFER_NO_HALT_STATUS),
++		   "halt_status = %d\n", halt_status);
++
++	if (halt_status == DWC_OTG_HC_XFER_URB_DEQUEUE ||
++	    halt_status == DWC_OTG_HC_XFER_AHB_ERR) {
++		/*
++		 * Disable all channel interrupts except Ch Halted. The QTD
++		 * and QH state associated with this transfer has been cleared
++		 * (in the case of URB_DEQUEUE), so the channel needs to be
++		 * shut down carefully to prevent crashes.
++		 */
++		hcintmsk_data_t hcintmsk;
++		hcintmsk.d32 = 0;
++		hcintmsk.b.chhltd = 1;
++		DWC_WRITE_REG32(&hc_regs->hcintmsk, hcintmsk.d32);
++
++		/*
++		 * Make sure no other interrupts besides halt are currently
++		 * pending. Handling another interrupt could cause a crash due
++		 * to the QTD and QH state.
++		 */
++		DWC_WRITE_REG32(&hc_regs->hcint, ~hcintmsk.d32);
++
++		/*
++		 * Make sure the halt status is set to URB_DEQUEUE or AHB_ERR
++		 * even if the channel was already halted for some other
++		 * reason.
++		 */
++		hc->halt_status = halt_status;
++
++		hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
++		if (hcchar.b.chen == 0) {
++			/*
++			 * The channel is either already halted or it hasn't
++			 * started yet. In DMA mode, the transfer may halt if
++			 * it finishes normally or a condition occurs that
++			 * requires driver intervention. Don't want to halt
++			 * the channel again. In either Slave or DMA mode,
++			 * it's possible that the transfer has been assigned
++			 * to a channel, but not started yet when an URB is
++			 * dequeued. Don't want to halt a channel that hasn't
++			 * started yet.
++			 */
++			return;
++		}
++	}
++	if (hc->halt_pending) {
++		/*
++		 * A halt has already been issued for this channel. This might
++		 * happen when a transfer is aborted by a higher level in
++		 * the stack.
++		 */
++#ifdef DEBUG
++		DWC_PRINTF
++		    ("*** %s: Channel %d, _hc->halt_pending already set ***\n",
++		     __func__, hc->hc_num);
++
++#endif
++		return;
++	}
++
++	hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
++
++	/* No need to set the bit in DDMA for disabling the channel */
++	//TODO check it everywhere channel is disabled
++	if (!core_if->core_params->dma_desc_enable)
++		hcchar.b.chen = 1;
++	hcchar.b.chdis = 1;
++
++	if (!core_if->dma_enable) {
++		/* Check for space in the request queue to issue the halt. */
++		if (hc->ep_type == DWC_OTG_EP_TYPE_CONTROL ||
++		    hc->ep_type == DWC_OTG_EP_TYPE_BULK) {
++			nptxsts.d32 = DWC_READ_REG32(&global_regs->gnptxsts);
++			if (nptxsts.b.nptxqspcavail == 0) {
++				hcchar.b.chen = 0;
++			}
++		} else {
++			hptxsts.d32 =
++			    DWC_READ_REG32(&host_global_regs->hptxsts);
++			if ((hptxsts.b.ptxqspcavail == 0)
++			    || (core_if->queuing_high_bandwidth)) {
++				hcchar.b.chen = 0;
++			}
++		}
++	}
++	DWC_WRITE_REG32(&hc_regs->hcchar, hcchar.d32);
++
++	hc->halt_status = halt_status;
++
++	if (hcchar.b.chen) {
++		hc->halt_pending = 1;
++		hc->halt_on_queue = 0;
++	} else {
++		hc->halt_on_queue = 1;
++	}
++
++	DWC_DEBUGPL(DBG_HCDV, "%s: Channel %d\n", __func__, hc->hc_num);
++	DWC_DEBUGPL(DBG_HCDV, "	 hcchar: 0x%08x\n", hcchar.d32);
++	DWC_DEBUGPL(DBG_HCDV, "	 halt_pending: %d\n", hc->halt_pending);
++	DWC_DEBUGPL(DBG_HCDV, "	 halt_on_queue: %d\n", hc->halt_on_queue);
++	DWC_DEBUGPL(DBG_HCDV, "	 halt_status: %d\n", hc->halt_status);
++
++	return;
++}
++
++/**
++ * Clears the transfer state for a host channel. This function is normally
++ * called after a transfer is done and the host channel is being released.
++ *
++ * @param core_if Programming view of DWC_otg controller.
++ * @param hc Identifies the host channel to clean up.
++ */
++void dwc_otg_hc_cleanup(dwc_otg_core_if_t * core_if, dwc_hc_t * hc)
++{
++	dwc_otg_hc_regs_t *hc_regs;
++
++	hc->xfer_started = 0;
++
++	/*
++	 * Clear channel interrupt enables and any unhandled channel interrupt
++	 * conditions.
++	 */
++	hc_regs = core_if->host_if->hc_regs[hc->hc_num];
++	DWC_WRITE_REG32(&hc_regs->hcintmsk, 0);
++	DWC_WRITE_REG32(&hc_regs->hcint, 0xFFFFFFFF);
++#ifdef DEBUG
++	DWC_TIMER_CANCEL(core_if->hc_xfer_timer[hc->hc_num]);
++#endif
++}
++
++/**
++ * Sets the channel property that indicates in which frame a periodic transfer
++ * should occur. This is always set to the _next_ frame. This function has no
++ * effect on non-periodic transfers.
++ *
++ * @param core_if Programming view of DWC_otg controller.
++ * @param hc Identifies the host channel to set up and its properties.
++ * @param hcchar Current value of the HCCHAR register for the specified host
++ * channel.
++ */
++static inline void hc_set_even_odd_frame(dwc_otg_core_if_t * core_if,
++					 dwc_hc_t * hc, hcchar_data_t * hcchar)
++{
++	if (hc->ep_type == DWC_OTG_EP_TYPE_INTR ||
++	    hc->ep_type == DWC_OTG_EP_TYPE_ISOC) {
++		hfnum_data_t hfnum;
++		hfnum.d32 =
++		    DWC_READ_REG32(&core_if->host_if->host_global_regs->hfnum);
++
++		/* 1 if _next_ frame is odd, 0 if it's even */
++		hcchar->b.oddfrm = (hfnum.b.frnum & 0x1) ? 0 : 1;
++#ifdef DEBUG
++		if (hc->ep_type == DWC_OTG_EP_TYPE_INTR && hc->do_split
++		    && !hc->complete_split) {
++			switch (hfnum.b.frnum & 0x7) {
++			case 7:
++				core_if->hfnum_7_samples++;
++				core_if->hfnum_7_frrem_accum += hfnum.b.frrem;
++				break;
++			case 0:
++				core_if->hfnum_0_samples++;
++				core_if->hfnum_0_frrem_accum += hfnum.b.frrem;
++				break;
++			default:
++				core_if->hfnum_other_samples++;
++				core_if->hfnum_other_frrem_accum +=
++				    hfnum.b.frrem;
++				break;
++			}
++		}
++#endif
++	}
++}
++
++#ifdef DEBUG
++void hc_xfer_timeout(void *ptr)
++{
++	hc_xfer_info_t *xfer_info = NULL;
++	int hc_num = 0;
++
++	if (ptr)
++		xfer_info = (hc_xfer_info_t *) ptr;
++
++	if (!xfer_info->hc) {
++		DWC_ERROR("xfer_info->hc = %p\n", xfer_info->hc);
++		return;
++	}
++
++	hc_num = xfer_info->hc->hc_num;
++	DWC_WARN("%s: timeout on channel %d\n", __func__, hc_num);
++	DWC_WARN("	start_hcchar_val 0x%08x\n",
++		 xfer_info->core_if->start_hcchar_val[hc_num]);
++}
++#endif
++
++void ep_xfer_timeout(void *ptr)
++{
++	ep_xfer_info_t *xfer_info = NULL;
++	int ep_num = 0;
++	dctl_data_t dctl = {.d32 = 0 };
++	gintsts_data_t gintsts = {.d32 = 0 };
++	gintmsk_data_t gintmsk = {.d32 = 0 };
++
++	if (ptr)
++		xfer_info = (ep_xfer_info_t *) ptr;
++
++	if (!xfer_info->ep) {
++		DWC_ERROR("xfer_info->ep = %p\n", xfer_info->ep);
++		return;
++	}
++
++	ep_num = xfer_info->ep->num;
++	DWC_WARN("%s: timeout on endpoint %d\n", __func__, ep_num);
++	/* Set the state to 2 to indicate the transfer timed out */
++	xfer_info->state = 2;
++
++	dctl.d32 =
++	    DWC_READ_REG32(&xfer_info->core_if->dev_if->dev_global_regs->dctl);
++	gintsts.d32 =
++	    DWC_READ_REG32(&xfer_info->core_if->core_global_regs->gintsts);
++	gintmsk.d32 =
++	    DWC_READ_REG32(&xfer_info->core_if->core_global_regs->gintmsk);
++
++	if (!gintmsk.b.goutnakeff) {
++		/* Unmask it */
++		gintmsk.b.goutnakeff = 1;
++		DWC_WRITE_REG32(&xfer_info->core_if->core_global_regs->gintmsk,
++				gintmsk.d32);
++
++	}
++
++	if (!gintsts.b.goutnakeff) {
++		dctl.b.sgoutnak = 1;
++	}
++	DWC_WRITE_REG32(&xfer_info->core_if->dev_if->dev_global_regs->dctl,
++			dctl.d32);
++
++}
++
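++/**
++ * Sets the starting data PID for an ISOC transfer based on speed, direction
++ * and the number of packets per (micro)frame (multi_count): DATA0/DATA1/DATA2
++ * for high-speed IN, DATA0/MDATA for high-speed OUT, and DATA0 otherwise.
++ */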
++void set_pid_isoc(dwc_hc_t * hc)
++{
++	/* Set up the initial PID for the transfer. */
++	if (hc->speed == DWC_OTG_EP_SPEED_HIGH) {
++		if (hc->ep_is_in) {
++			if (hc->multi_count == 1) {
++				hc->data_pid_start = DWC_OTG_HC_PID_DATA0;
++			} else if (hc->multi_count == 2) {
++				hc->data_pid_start = DWC_OTG_HC_PID_DATA1;
++			} else {
++				hc->data_pid_start = DWC_OTG_HC_PID_DATA2;
++			}
++		} else {
++			if (hc->multi_count == 1) {
++				hc->data_pid_start = DWC_OTG_HC_PID_DATA0;
++			} else {
++				hc->data_pid_start = DWC_OTG_HC_PID_MDATA;
++			}
++		}
++	} else {
++		hc->data_pid_start = DWC_OTG_HC_PID_DATA0;
++	}
++}
++
++/**
++ * This function does the setup for a data transfer for a host channel and
++ * starts the transfer. May be called in either Slave mode or DMA mode. In
++ * Slave mode, the caller must ensure that there is sufficient space in the
++ * request queue and Tx Data FIFO.
++ *
++ * For an OUT transfer in Slave mode, it loads a data packet into the
++ * appropriate FIFO. If necessary, additional data packets will be loaded in
++ * the Host ISR.
++ *
++ * For an IN transfer in Slave mode, a data packet is requested. The data
++ * packets are unloaded from the Rx FIFO in the Host ISR. If necessary,
++ * additional data packets are requested in the Host ISR.
++ *
++ * For a PING transfer in Slave mode, the Do Ping bit is set in the HCTSIZ
++ * register along with a packet count of 1 and the channel is enabled. This
++ * causes a single PING transaction to occur. Other fields in HCTSIZ are
++ * simply set to 0 since no data transfer occurs in this case.
++ *
++ * For a PING transfer in DMA mode, the HCTSIZ register is initialized with
++ * all the information required to perform the subsequent data transfer. In
++ * addition, the Do Ping bit is set in the HCTSIZ register. In this case, the
++ * controller performs the entire PING protocol, then starts the data
++ * transfer.
++ *
++ * @param core_if Programming view of DWC_otg controller.
++ * @param hc Information needed to initialize the host channel. The xfer_len
++ * value may be reduced to accommodate the max widths of the XferSize and
++ * PktCnt fields in the HCTSIZn register. The multi_count value may be changed
++ * to reflect the final xfer_len value.
++ */
++void dwc_otg_hc_start_transfer(dwc_otg_core_if_t * core_if, dwc_hc_t * hc)
++{
++	hcchar_data_t hcchar;
++	hctsiz_data_t hctsiz;
++	uint16_t num_packets;
++	uint32_t max_hc_xfer_size = core_if->core_params->max_transfer_size;
++	uint16_t max_hc_pkt_count = core_if->core_params->max_packet_count;
++	dwc_otg_hc_regs_t *hc_regs = core_if->host_if->hc_regs[hc->hc_num];
++
++	hctsiz.d32 = 0;
++
++	if (hc->do_ping) {
++		if (!core_if->dma_enable) {
++			dwc_otg_hc_do_ping(core_if, hc);
++			hc->xfer_started = 1;
++			return;
++		} else {
++			hctsiz.b.dopng = 1;
++		}
++	}
++
++	if (hc->do_split) {
++		num_packets = 1;
++
++		if (hc->complete_split && !hc->ep_is_in) {
++			/* For CSPLIT OUT Transfer, set the size to 0 so the
++			 * core doesn't expect any data written to the FIFO */
++			hc->xfer_len = 0;
++		} else if (hc->ep_is_in || (hc->xfer_len > hc->max_packet)) {
++			hc->xfer_len = hc->max_packet;
++		} else if (!hc->ep_is_in && (hc->xfer_len > 188)) {
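++			/* A single start-split carries at most 188 bytes of
++			 * OUT data per microframe, so cap the transfer length
++			 * here. */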
++			hc->xfer_len = 188;
++		}
++
++		hctsiz.b.xfersize = hc->xfer_len;
++	} else {
++		/*
++		 * Ensure that the transfer length and packet count will fit
++		 * in the widths allocated for them in the HCTSIZn register.
++		 */
++		if (hc->ep_type == DWC_OTG_EP_TYPE_INTR ||
++		    hc->ep_type == DWC_OTG_EP_TYPE_ISOC) {
++			/*
++			 * Make sure the transfer size is no larger than one
++			 * (micro)frame's worth of data. (A check was done
++			 * when the periodic transfer was accepted to ensure
++			 * that a (micro)frame's worth of data can be
++			 * programmed into a channel.)
++			 */
++			uint32_t max_periodic_len =
++			    hc->multi_count * hc->max_packet;
++			if (hc->xfer_len > max_periodic_len) {
++				hc->xfer_len = max_periodic_len;
++			}
++		} else if (hc->xfer_len > max_hc_xfer_size) {
++			/* Make sure that xfer_len is a multiple of max packet size. */
++			hc->xfer_len = max_hc_xfer_size - hc->max_packet + 1;
++		}
++
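++		/*
++		 * Example: xfer_len = 1500 with max_packet = 512 gives
++		 * num_packets = 3 below; for an IN transfer xfer_len is then
++		 * rounded up to 3 * 512 = 1536 bytes.
++		 */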
++		if (hc->xfer_len > 0) {
++			num_packets =
++			    (hc->xfer_len + hc->max_packet -
++			     1) / hc->max_packet;
++			if (num_packets > max_hc_pkt_count) {
++				num_packets = max_hc_pkt_count;
++				hc->xfer_len = num_packets * hc->max_packet;
++			}
++		} else {
++			/* Need 1 packet for transfer length of 0. */
++			num_packets = 1;
++		}
++
++		if (hc->ep_is_in) {
++			/* Always program an integral # of max packets for IN transfers. */
++			hc->xfer_len = num_packets * hc->max_packet;
++		}
++
++		if (hc->ep_type == DWC_OTG_EP_TYPE_INTR ||
++		    hc->ep_type == DWC_OTG_EP_TYPE_ISOC) {
++			/*
++			 * Make sure that the multi_count field matches the
++			 * actual transfer length.
++			 */
++			hc->multi_count = num_packets;
++		}
++
++		if (hc->ep_type == DWC_OTG_EP_TYPE_ISOC)
++			set_pid_isoc(hc);
++
++		hctsiz.b.xfersize = hc->xfer_len;
++	}
++
++	hc->start_pkt_count = num_packets;
++	hctsiz.b.pktcnt = num_packets;
++	hctsiz.b.pid = hc->data_pid_start;
++	DWC_WRITE_REG32(&hc_regs->hctsiz, hctsiz.d32);
++
++	DWC_DEBUGPL(DBG_HCDV, "%s: Channel %d\n", __func__, hc->hc_num);
++	DWC_DEBUGPL(DBG_HCDV, "	 Xfer Size: %d\n", hctsiz.b.xfersize);
++	DWC_DEBUGPL(DBG_HCDV, "	 Num Pkts: %d\n", hctsiz.b.pktcnt);
++	DWC_DEBUGPL(DBG_HCDV, "	 Start PID: %d\n", hctsiz.b.pid);
++
++	if (core_if->dma_enable) {
++		dwc_dma_t dma_addr;
++		if (hc->align_buff) {
++			dma_addr = hc->align_buff;
++		} else {
++			dma_addr = ((unsigned long)hc->xfer_buff & 0xffffffff);
++		}
++		DWC_WRITE_REG32(&hc_regs->hcdma, dma_addr);
++	}
++
++	/* Start the split */
++	if (hc->do_split) {
++		hcsplt_data_t hcsplt;
++		hcsplt.d32 = DWC_READ_REG32(&hc_regs->hcsplt);
++		hcsplt.b.spltena = 1;
++		DWC_WRITE_REG32(&hc_regs->hcsplt, hcsplt.d32);
++	}
++
++	hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
++	hcchar.b.multicnt = hc->multi_count;
++	hc_set_even_odd_frame(core_if, hc, &hcchar);
++#ifdef DEBUG
++	core_if->start_hcchar_val[hc->hc_num] = hcchar.d32;
++	if (hcchar.b.chdis) {
++		DWC_WARN("%s: chdis set, channel %d, hcchar 0x%08x\n",
++			 __func__, hc->hc_num, hcchar.d32);
++	}
++#endif
++
++	/* Set host channel enable after all other setup is complete. */
++	hcchar.b.chen = 1;
++	hcchar.b.chdis = 0;
++	DWC_WRITE_REG32(&hc_regs->hcchar, hcchar.d32);
++
++	hc->xfer_started = 1;
++	hc->requests++;
++
++	if (!core_if->dma_enable && !hc->ep_is_in && hc->xfer_len > 0) {
++		/* Load OUT packet into the appropriate Tx FIFO. */
++		dwc_otg_hc_write_packet(core_if, hc);
++	}
++#ifdef DEBUG
++	if (hc->ep_type != DWC_OTG_EP_TYPE_INTR) {
++                DWC_DEBUGPL(DBG_HCDV, "transfer %d from core_if %p\n",
++                            hc->hc_num, core_if);//GRAYG
++		core_if->hc_xfer_info[hc->hc_num].core_if = core_if;
++		core_if->hc_xfer_info[hc->hc_num].hc = hc;
++
++		/* Start a timer for this transfer. */
++		DWC_TIMER_SCHEDULE(core_if->hc_xfer_timer[hc->hc_num], 10000);
++	}
++#endif
++}
++
++/**
++ * This function does the setup for a data transfer for a host channel
++ * and starts the transfer in Descriptor DMA mode.
++ *
++ * Initializes HCTSIZ register. For a PING transfer the Do Ping bit is set.
++ * Sets PID and NTD values. For periodic transfers
++ * initializes SCHED_INFO field with micro-frame bitmap.
++ *
++ * Initializes HCDMA register with descriptor list address and CTD value
++ * then starts the transfer via enabling the channel.
++ *
++ * @param core_if Programming view of DWC_otg controller.
++ * @param hc Information needed to initialize the host channel.
++ */
++void dwc_otg_hc_start_transfer_ddma(dwc_otg_core_if_t * core_if, dwc_hc_t * hc)
++{
++	dwc_otg_hc_regs_t *hc_regs = core_if->host_if->hc_regs[hc->hc_num];
++	hcchar_data_t hcchar;
++	hctsiz_data_t hctsiz;
++	hcdma_data_t hcdma;
++
++	hctsiz.d32 = 0;
++
++	if (hc->do_ping)
++		hctsiz.b_ddma.dopng = 1;
++
++	if (hc->ep_type == DWC_OTG_EP_TYPE_ISOC)
++		set_pid_isoc(hc);
++
++	/* Packet Count and Xfer Size are not used in Descriptor DMA mode */
++	hctsiz.b_ddma.pid = hc->data_pid_start;
++	hctsiz.b_ddma.ntd = hc->ntd - 1;	/* 0 - 1 descriptor, 1 - 2 descriptors, etc. */
++	hctsiz.b_ddma.schinfo = hc->schinfo;	/* Non-zero only for high-speed interrupt endpoints */
++
++	DWC_DEBUGPL(DBG_HCDV, "%s: Channel %d\n", __func__, hc->hc_num);
++	DWC_DEBUGPL(DBG_HCDV, "	 Start PID: %d\n", hctsiz.b.pid);
++	DWC_DEBUGPL(DBG_HCDV, "	 NTD: %d\n", hctsiz.b_ddma.ntd);
++
++	DWC_WRITE_REG32(&hc_regs->hctsiz, hctsiz.d32);
++
++	hcdma.d32 = 0;
++	hcdma.b.dma_addr = ((uint32_t) hc->desc_list_addr) >> 11;
++
++	/* Always start from first descriptor. */
++	hcdma.b.ctd = 0;
++	DWC_WRITE_REG32(&hc_regs->hcdma, hcdma.d32);
++
++	hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
++	hcchar.b.multicnt = hc->multi_count;
++
++#ifdef DEBUG
++	core_if->start_hcchar_val[hc->hc_num] = hcchar.d32;
++	if (hcchar.b.chdis) {
++		DWC_WARN("%s: chdis set, channel %d, hcchar 0x%08x\n",
++			 __func__, hc->hc_num, hcchar.d32);
++	}
++#endif
++
++	/* Set host channel enable after all other setup is complete. */
++	hcchar.b.chen = 1;
++	hcchar.b.chdis = 0;
++
++	DWC_WRITE_REG32(&hc_regs->hcchar, hcchar.d32);
++
++	hc->xfer_started = 1;
++	hc->requests++;
++
++#ifdef DEBUG
++	if ((hc->ep_type != DWC_OTG_EP_TYPE_INTR)
++	    && (hc->ep_type != DWC_OTG_EP_TYPE_ISOC)) {
++                DWC_DEBUGPL(DBG_HCDV, "DMA transfer %d from core_if %p\n",
++                            hc->hc_num, core_if);//GRAYG
++		core_if->hc_xfer_info[hc->hc_num].core_if = core_if;
++		core_if->hc_xfer_info[hc->hc_num].hc = hc;
++		/* Start a timer for this transfer. */
++		DWC_TIMER_SCHEDULE(core_if->hc_xfer_timer[hc->hc_num], 10000);
++	}
++#endif
++
++}
++
++/**
++ * This function continues a data transfer that was started by previous call
++ * to <code>dwc_otg_hc_start_transfer</code>. The caller must ensure there is
++ * sufficient space in the request queue and Tx Data FIFO. This function
++ * should only be called in Slave mode. In DMA mode, the controller acts
++ * autonomously to complete transfers programmed to a host channel.
++ *
++ * For an OUT transfer, a new data packet is loaded into the appropriate FIFO
++ * if there is any data remaining to be queued. For an IN transfer, another
++ * data packet is always requested. For the SETUP phase of a control transfer,
++ * this function does nothing.
++ *
++ * @return 1 if a new request is queued, 0 if no more requests are required
++ * for this transfer.
++ */
++int dwc_otg_hc_continue_transfer(dwc_otg_core_if_t * core_if, dwc_hc_t * hc)
++{
++	DWC_DEBUGPL(DBG_HCDV, "%s: Channel %d\n", __func__, hc->hc_num);
++
++	if (hc->do_split) {
++		/* SPLITs always queue just once per channel */
++		return 0;
++	} else if (hc->data_pid_start == DWC_OTG_HC_PID_SETUP) {
++		/* SETUPs are queued only once since they can't be NAKed. */
++		return 0;
++	} else if (hc->ep_is_in) {
++		/*
++		 * Always queue another request for other IN transfers. If
++		 * back-to-back INs are issued and NAKs are received for both,
++		 * the driver may still be processing the first NAK when the
++		 * second NAK is received. When the interrupt handler clears
++		 * the NAK interrupt for the first NAK, the second NAK will
++		 * not be seen. So we can't depend on the NAK interrupt
++		 * handler to requeue a NAKed request. Instead, IN requests
++		 * are issued each time this function is called. When the
++		 * transfer completes, the extra requests for the channel will
++		 * be flushed.
++		 */
++		hcchar_data_t hcchar;
++		dwc_otg_hc_regs_t *hc_regs =
++		    core_if->host_if->hc_regs[hc->hc_num];
++
++		hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
++		hc_set_even_odd_frame(core_if, hc, &hcchar);
++		hcchar.b.chen = 1;
++		hcchar.b.chdis = 0;
++		DWC_DEBUGPL(DBG_HCDV, "	 IN xfer: hcchar = 0x%08x\n",
++			    hcchar.d32);
++		DWC_WRITE_REG32(&hc_regs->hcchar, hcchar.d32);
++		hc->requests++;
++		return 1;
++	} else {
++		/* OUT transfers. */
++		if (hc->xfer_count < hc->xfer_len) {
++			if (hc->ep_type == DWC_OTG_EP_TYPE_INTR ||
++			    hc->ep_type == DWC_OTG_EP_TYPE_ISOC) {
++				hcchar_data_t hcchar;
++				dwc_otg_hc_regs_t *hc_regs;
++				hc_regs = core_if->host_if->hc_regs[hc->hc_num];
++				hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
++				hc_set_even_odd_frame(core_if, hc, &hcchar);
++			}
++
++			/* Load OUT packet into the appropriate Tx FIFO. */
++			dwc_otg_hc_write_packet(core_if, hc);
++			hc->requests++;
++			return 1;
++		} else {
++			return 0;
++		}
++	}
++}
++
++/**
++ * Starts a PING transfer. This function should only be called in Slave mode.
++ * The Do Ping bit is set in the HCTSIZ register, then the channel is enabled.
++ */
++void dwc_otg_hc_do_ping(dwc_otg_core_if_t * core_if, dwc_hc_t * hc)
++{
++	hcchar_data_t hcchar;
++	hctsiz_data_t hctsiz;
++	dwc_otg_hc_regs_t *hc_regs = core_if->host_if->hc_regs[hc->hc_num];
++
++	DWC_DEBUGPL(DBG_HCDV, "%s: Channel %d\n", __func__, hc->hc_num);
++
++	hctsiz.d32 = 0;
++	hctsiz.b.dopng = 1;
++	hctsiz.b.pktcnt = 1;
++	DWC_WRITE_REG32(&hc_regs->hctsiz, hctsiz.d32);
++
++	hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
++	hcchar.b.chen = 1;
++	hcchar.b.chdis = 0;
++	DWC_WRITE_REG32(&hc_regs->hcchar, hcchar.d32);
++}
++
++/*
++ * This function writes a packet into the Tx FIFO associated with the Host
++ * Channel. For a channel associated with a non-periodic EP, the non-periodic
++ * Tx FIFO is written. For a channel associated with a periodic EP, the
++ * periodic Tx FIFO is written. This function should only be called in Slave
++ * mode.
++ *
++ * Upon return the xfer_buff and xfer_count fields in _hc are incremented by
++ * the number of bytes written to the Tx FIFO.
++ */
++void dwc_otg_hc_write_packet(dwc_otg_core_if_t * core_if, dwc_hc_t * hc)
++{
++	uint32_t i;
++	uint32_t remaining_count;
++	uint32_t byte_count;
++	uint32_t dword_count;
++
++	uint32_t *data_buff = (uint32_t *) (hc->xfer_buff);
++	uint32_t *data_fifo = core_if->data_fifo[hc->hc_num];
++
++	remaining_count = hc->xfer_len - hc->xfer_count;
++	if (remaining_count > hc->max_packet) {
++		byte_count = hc->max_packet;
++	} else {
++		byte_count = remaining_count;
++	}
++
++	dword_count = (byte_count + 3) / 4;
++
++	if ((((unsigned long)data_buff) & 0x3) == 0) {
++		/* xfer_buff is DWORD aligned. */
++		for (i = 0; i < dword_count; i++, data_buff++) {
++			DWC_WRITE_REG32(data_fifo, *data_buff);
++		}
++	} else {
++		/* xfer_buff is not DWORD aligned. */
++		for (i = 0; i < dword_count; i++, data_buff++) {
++			uint32_t data;
++			data =
++			    (data_buff[0] | data_buff[1] << 8 | data_buff[2] <<
++			     16 | data_buff[3] << 24);
++			DWC_WRITE_REG32(data_fifo, data);
++		}
++	}
++
++	hc->xfer_count += byte_count;
++	hc->xfer_buff += byte_count;
++}
++
++/**
++ * Gets the current USB frame number. This is the frame number from the last
++ * SOF packet.
++ */
++uint32_t dwc_otg_get_frame_number(dwc_otg_core_if_t * core_if)
++{
++	dsts_data_t dsts;
++	dsts.d32 = DWC_READ_REG32(&core_if->dev_if->dev_global_regs->dsts);
++
++	/* read current frame/microframe number from DSTS register */
++	return dsts.b.soffn;
++}
++
++/**
++ * Calculates and returns the frame interval value for the HFIR register
++ * according to the PHY type and speed. The application can modify the HFIR
++ * register value only after the Port Enable bit of the Host Port Control and
++ * Status register (HPRT.PrtEnaPort) has been set.
++ */
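++/*
++ * For example, a 60 MHz PHY clock at high speed gives 125 * 60 = 7500
++ * (PHY clocks per 125 us microframe), while full/low speed with a 48 MHz
++ * clock gives 1000 * 48 = 48000 (PHY clocks per 1 ms frame).
++ */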
++
++uint32_t calc_frame_interval(dwc_otg_core_if_t * core_if)
++{
++	gusbcfg_data_t usbcfg;
++	hwcfg2_data_t hwcfg2;
++	hprt0_data_t hprt0;
++	int clock = 60;		// default value
++	usbcfg.d32 = DWC_READ_REG32(&core_if->core_global_regs->gusbcfg);
++	hwcfg2.d32 = DWC_READ_REG32(&core_if->core_global_regs->ghwcfg2);
++	hprt0.d32 = DWC_READ_REG32(core_if->host_if->hprt0);
++	if (!usbcfg.b.physel && usbcfg.b.ulpi_utmi_sel && !usbcfg.b.phyif)
++		clock = 60;
++	if (usbcfg.b.physel && hwcfg2.b.fs_phy_type == 3)
++		clock = 48;
++	if (!usbcfg.b.phylpwrclksel && !usbcfg.b.physel &&
++	    !usbcfg.b.ulpi_utmi_sel && usbcfg.b.phyif)
++		clock = 30;
++	if (!usbcfg.b.phylpwrclksel && !usbcfg.b.physel &&
++	    !usbcfg.b.ulpi_utmi_sel && !usbcfg.b.phyif)
++		clock = 60;
++	if (usbcfg.b.phylpwrclksel && !usbcfg.b.physel &&
++	    !usbcfg.b.ulpi_utmi_sel && usbcfg.b.phyif)
++		clock = 48;
++	if (usbcfg.b.physel && !usbcfg.b.phyif && hwcfg2.b.fs_phy_type == 2)
++		clock = 48;
++	if (usbcfg.b.physel && hwcfg2.b.fs_phy_type == 1)
++		clock = 48;
++	if (hprt0.b.prtspd == 0)
++		/* High speed case */
++		return 125 * clock;
++	else
++		/* FS/LS case */
++		return 1000 * clock;
++}
++
++/**
++ * This function reads a setup packet from the Rx FIFO into the destination
++ * buffer. This function is called from the Rx Status Queue Level (RxStsQLvl)
++ * Interrupt routine when a SETUP packet has been received in Slave mode.
++ *
++ * @param core_if Programming view of DWC_otg controller.
++ * @param dest Destination buffer for packet data.
++ */
++void dwc_otg_read_setup_packet(dwc_otg_core_if_t * core_if, uint32_t * dest)
++{
++	device_grxsts_data_t status;
++	/* Get the 8 bytes of a setup transaction data */
++
++	/* Pop 2 DWORDS off the receive data FIFO into memory */
++	dest[0] = DWC_READ_REG32(core_if->data_fifo[0]);
++	dest[1] = DWC_READ_REG32(core_if->data_fifo[0]);
++	if (core_if->snpsid >= OTG_CORE_REV_3_00a) {
++		status.d32 =
++		    DWC_READ_REG32(&core_if->core_global_regs->grxstsp);
++		DWC_DEBUGPL(DBG_ANY,
++			    "EP:%d BCnt:%d " "pktsts:%x Frame:%d(0x%0x)\n",
++			    status.b.epnum, status.b.bcnt, status.b.pktsts,
++			    status.b.fn, status.b.fn);
++	}
++}
++
++/**
++ * This function enables EP0 OUT to receive SETUP packets and configures EP0
++ * IN for transmitting packets. It is normally called when the
++ * "Enumeration Done" interrupt occurs.
++ *
++ * @param core_if Programming view of DWC_otg controller.
++ * @param ep The EP0 data.
++ */
++void dwc_otg_ep0_activate(dwc_otg_core_if_t * core_if, dwc_ep_t * ep)
++{
++	dwc_otg_dev_if_t *dev_if = core_if->dev_if;
++	dsts_data_t dsts;
++	depctl_data_t diepctl;
++	depctl_data_t doepctl;
++	dctl_data_t dctl = {.d32 = 0 };
++
++	ep->stp_rollover = 0;
++	/* Read the Device Status and Endpoint 0 Control registers */
++	dsts.d32 = DWC_READ_REG32(&dev_if->dev_global_regs->dsts);
++	diepctl.d32 = DWC_READ_REG32(&dev_if->in_ep_regs[0]->diepctl);
++	doepctl.d32 = DWC_READ_REG32(&dev_if->out_ep_regs[0]->doepctl);
++
++	/* Set the MPS of the IN EP based on the enumeration speed */
++	switch (dsts.b.enumspd) {
++	case DWC_DSTS_ENUMSPD_HS_PHY_30MHZ_OR_60MHZ:
++	case DWC_DSTS_ENUMSPD_FS_PHY_30MHZ_OR_60MHZ:
++	case DWC_DSTS_ENUMSPD_FS_PHY_48MHZ:
++		diepctl.b.mps = DWC_DEP0CTL_MPS_64;
++		break;
++	case DWC_DSTS_ENUMSPD_LS_PHY_6MHZ:
++		diepctl.b.mps = DWC_DEP0CTL_MPS_8;
++		break;
++	}
++
++	DWC_WRITE_REG32(&dev_if->in_ep_regs[0]->diepctl, diepctl.d32);
++
++	/* Enable OUT EP for receive */
++	if (core_if->snpsid <= OTG_CORE_REV_2_94a) {
++		doepctl.b.epena = 1;
++		DWC_WRITE_REG32(&dev_if->out_ep_regs[0]->doepctl, doepctl.d32);
++	}
++#ifdef VERBOSE
++	DWC_DEBUGPL(DBG_PCDV, "doepctl0=%0x\n",
++		    DWC_READ_REG32(&dev_if->out_ep_regs[0]->doepctl));
++	DWC_DEBUGPL(DBG_PCDV, "diepctl0=%0x\n",
++		    DWC_READ_REG32(&dev_if->in_ep_regs[0]->diepctl));
++#endif
++	dctl.b.cgnpinnak = 1;
++
++	DWC_MODIFY_REG32(&dev_if->dev_global_regs->dctl, dctl.d32, dctl.d32);
++	DWC_DEBUGPL(DBG_PCDV, "dctl=%0x\n",
++		    DWC_READ_REG32(&dev_if->dev_global_regs->dctl));
++
++}
++
++/**
++ * This function activates an EP.  The Device EP control register for
++ * the EP is configured as defined in the ep structure. Note: This
++ * function is not used for EP0.
++ *
++ * @param core_if Programming view of DWC_otg controller.
++ * @param ep The EP to activate.
++ */
++void dwc_otg_ep_activate(dwc_otg_core_if_t * core_if, dwc_ep_t * ep)
++{
++	dwc_otg_dev_if_t *dev_if = core_if->dev_if;
++	depctl_data_t depctl;
++	volatile uint32_t *addr;
++	daint_data_t daintmsk = {.d32 = 0 };
++	dcfg_data_t dcfg;
++	uint8_t i;
++
++	DWC_DEBUGPL(DBG_PCDV, "%s() EP%d-%s\n", __func__, ep->num,
++		    (ep->is_in ? "IN" : "OUT"));
++
++#ifdef DWC_UTE_PER_IO
++	ep->xiso_frame_num = 0xFFFFFFFF;
++	ep->xiso_active_xfers = 0;
++	ep->xiso_queued_xfers = 0;
++#endif
++	/* Read DEPCTLn register */
++	if (ep->is_in == 1) {
++		addr = &dev_if->in_ep_regs[ep->num]->diepctl;
++		daintmsk.ep.in = 1 << ep->num;
++	} else {
++		addr = &dev_if->out_ep_regs[ep->num]->doepctl;
++		daintmsk.ep.out = 1 << ep->num;
++	}
++
++	/* If the EP is already active don't change the EP Control
++	 * register. */
++	depctl.d32 = DWC_READ_REG32(addr);
++	if (!depctl.b.usbactep) {
++		depctl.b.mps = ep->maxpacket;
++		depctl.b.eptype = ep->type;
++		depctl.b.txfnum = ep->tx_fifo_num;
++
++		if (ep->type == DWC_OTG_EP_TYPE_ISOC) {
++			depctl.b.setd0pid = 1;	// ???
++		} else {
++			depctl.b.setd0pid = 1;
++		}
++		depctl.b.usbactep = 1;
++
++		/* Update nextep_seq array and EPMSCNT in DCFG*/
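++		/* nextep_seq[] forms a ring of active non-periodic IN EPs: the
++		 * new EP is linked in just before the current head
++		 * (first_in_nextep_seq) and the IN EP Mismatch count is
++		 * incremented. */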
++		if (!(depctl.b.eptype & 1) && (ep->is_in == 1)) {	// NP IN EP
++			for (i = 0; i <= core_if->dev_if->num_in_eps; i++) {
++				if (core_if->nextep_seq[i] == core_if->first_in_nextep_seq)
++				break;
++			}
++			core_if->nextep_seq[i] = ep->num;
++			core_if->nextep_seq[ep->num] = core_if->first_in_nextep_seq;
++			depctl.b.nextep = core_if->nextep_seq[ep->num];
++			dcfg.d32 = DWC_READ_REG32(&dev_if->dev_global_regs->dcfg);
++			dcfg.b.epmscnt++;
++			DWC_WRITE_REG32(&dev_if->dev_global_regs->dcfg, dcfg.d32);
++
++			DWC_DEBUGPL(DBG_PCDV,
++				    "%s first_in_nextep_seq= %2d; nextep_seq[]:\n",
++				__func__, core_if->first_in_nextep_seq);
++			for (i=0; i <= core_if->dev_if->num_in_eps; i++) {
++				DWC_DEBUGPL(DBG_PCDV, "%2d\n",
++					    core_if->nextep_seq[i]);
++			}
++
++		}
++
++
++		DWC_WRITE_REG32(addr, depctl.d32);
++		DWC_DEBUGPL(DBG_PCDV, "DEPCTL=%08x\n", DWC_READ_REG32(addr));
++	}
++
++	/* Enable the Interrupt for this EP */
++	if (core_if->multiproc_int_enable) {
++		if (ep->is_in == 1) {
++			diepmsk_data_t diepmsk = {.d32 = 0 };
++			diepmsk.b.xfercompl = 1;
++			diepmsk.b.timeout = 1;
++			diepmsk.b.epdisabled = 1;
++			diepmsk.b.ahberr = 1;
++			diepmsk.b.intknepmis = 1;
++			if (!core_if->en_multiple_tx_fifo && core_if->dma_enable)
++				diepmsk.b.intknepmis = 0;
++			diepmsk.b.txfifoundrn = 1;	//?????
++			if (ep->type == DWC_OTG_EP_TYPE_ISOC) {
++				diepmsk.b.nak = 1;
++			}
++
++
++
++/*
++			if (core_if->dma_desc_enable) {
++				diepmsk.b.bna = 1;
++			}
++*/
++/*
++			if (core_if->dma_enable) {
++				doepmsk.b.nak = 1;
++			}
++*/
++			DWC_WRITE_REG32(&dev_if->dev_global_regs->
++					diepeachintmsk[ep->num], diepmsk.d32);
++
++		} else {
++			doepmsk_data_t doepmsk = {.d32 = 0 };
++			doepmsk.b.xfercompl = 1;
++			doepmsk.b.ahberr = 1;
++			doepmsk.b.epdisabled = 1;
++			if (ep->type == DWC_OTG_EP_TYPE_ISOC)
++				doepmsk.b.outtknepdis = 1;
++
++/*
++
++			if (core_if->dma_desc_enable) {
++				doepmsk.b.bna = 1;
++			}
++*/
++/*
++			doepmsk.b.babble = 1;
++			doepmsk.b.nyet = 1;
++			doepmsk.b.nak = 1;
++*/
++			DWC_WRITE_REG32(&dev_if->dev_global_regs->
++					doepeachintmsk[ep->num], doepmsk.d32);
++		}
++		DWC_MODIFY_REG32(&dev_if->dev_global_regs->deachintmsk,
++				 0, daintmsk.d32);
++	} else {
++		if (ep->type == DWC_OTG_EP_TYPE_ISOC) {
++			if (ep->is_in) {
++				diepmsk_data_t diepmsk = {.d32 = 0 };
++				diepmsk.b.nak = 1;
++				DWC_MODIFY_REG32(&dev_if->dev_global_regs->diepmsk, 0, diepmsk.d32);
++			} else {
++				doepmsk_data_t doepmsk = {.d32 = 0 };
++				doepmsk.b.outtknepdis = 1;
++				DWC_MODIFY_REG32(&dev_if->dev_global_regs->doepmsk, 0, doepmsk.d32);
++			}
++		}
++		DWC_MODIFY_REG32(&dev_if->dev_global_regs->daintmsk,
++				 0, daintmsk.d32);
++	}
++
++	DWC_DEBUGPL(DBG_PCDV, "DAINTMSK=%0x\n",
++		    DWC_READ_REG32(&dev_if->dev_global_regs->daintmsk));
++
++	ep->stall_clear_flag = 0;
++
++	return;
++}
++
++/**
++ * This function deactivates an EP. This is done by clearing the USB Active
++ * EP bit in the Device EP control register. Note: This function is not used
++ * for EP0. EP0 cannot be deactivated.
++ *
++ * @param core_if Programming view of DWC_otg controller.
++ * @param ep The EP to deactivate.
++ */
++void dwc_otg_ep_deactivate(dwc_otg_core_if_t * core_if, dwc_ep_t * ep)
++{
++	depctl_data_t depctl = {.d32 = 0 };
++	volatile uint32_t *addr;
++	daint_data_t daintmsk = {.d32 = 0 };
++	dcfg_data_t dcfg;
++	uint8_t i = 0;
++
++#ifdef DWC_UTE_PER_IO
++	ep->xiso_frame_num = 0xFFFFFFFF;
++	ep->xiso_active_xfers = 0;
++	ep->xiso_queued_xfers = 0;
++#endif
++
++	/* Read DEPCTLn register */
++	if (ep->is_in == 1) {
++		addr = &core_if->dev_if->in_ep_regs[ep->num]->diepctl;
++		daintmsk.ep.in = 1 << ep->num;
++	} else {
++		addr = &core_if->dev_if->out_ep_regs[ep->num]->doepctl;
++		daintmsk.ep.out = 1 << ep->num;
++	}
++
++	depctl.d32 = DWC_READ_REG32(addr);
++
++	depctl.b.usbactep = 0;
++
++	/* Update nextep_seq array and EPMSCNT in DCFG*/
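++	/* Unlink this EP from the nextep_seq ring and decrement the IN EP
++	 * Mismatch count; if it was the head, the head moves to the EP that
++	 * pointed to it. */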
++	if (!(depctl.b.eptype & 1) && ep->is_in == 1) {	// NP EP IN
++		for (i = 0; i <= core_if->dev_if->num_in_eps; i++) {
++			if (core_if->nextep_seq[i] == ep->num)
++			break;
++		}
++		core_if->nextep_seq[i] = core_if->nextep_seq[ep->num];
++		if (core_if->first_in_nextep_seq == ep->num)
++			core_if->first_in_nextep_seq = i;
++		core_if->nextep_seq[ep->num] = 0xff;
++		depctl.b.nextep = 0;
++		dcfg.d32 =
++		    DWC_READ_REG32(&core_if->dev_if->dev_global_regs->dcfg);
++		dcfg.b.epmscnt--;
++		DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->dcfg,
++				dcfg.d32);
++
++		DWC_DEBUGPL(DBG_PCDV,
++			    "%s first_in_nextep_seq= %2d; nextep_seq[]:\n",
++				__func__, core_if->first_in_nextep_seq);
++			for (i=0; i <= core_if->dev_if->num_in_eps; i++) {
++				DWC_DEBUGPL(DBG_PCDV, "%2d\n", core_if->nextep_seq[i]);
++			}
++	}
++
++	if (ep->is_in == 1)
++		depctl.b.txfnum = 0;
++
++	if (core_if->dma_desc_enable)
++		depctl.b.epdis = 1;
++
++	DWC_WRITE_REG32(addr, depctl.d32);
++	depctl.d32 = DWC_READ_REG32(addr);
++	if (core_if->dma_enable && ep->type == DWC_OTG_EP_TYPE_ISOC
++	    && depctl.b.epena) {
++		depctl_data_t depctl = {.d32 = 0};
++		if (ep->is_in) {
++			diepint_data_t diepint = {.d32 = 0};
++
++			depctl.b.snak = 1;
++			DWC_WRITE_REG32(&core_if->dev_if->in_ep_regs[ep->num]->
++					diepctl, depctl.d32);
++			do {
++				dwc_udelay(10);
++				diepint.d32 =
++				    DWC_READ_REG32(&core_if->
++						   dev_if->in_ep_regs[ep->num]->
++						   diepint);
++			} while (!diepint.b.inepnakeff);
++			diepint.b.inepnakeff = 1;
++			DWC_WRITE_REG32(&core_if->dev_if->in_ep_regs[ep->num]->
++					diepint, diepint.d32);
++			depctl.d32 = 0;
++			depctl.b.epdis = 1;
++			DWC_WRITE_REG32(&core_if->dev_if->in_ep_regs[ep->num]->
++					diepctl, depctl.d32);
++			do {
++				dwc_udelay(10);
++				diepint.d32 =
++				    DWC_READ_REG32(&core_if->
++						   dev_if->in_ep_regs[ep->num]->
++						   diepint);
++			} while (!diepint.b.epdisabled);
++			diepint.b.epdisabled = 1;
++			DWC_WRITE_REG32(&core_if->dev_if->in_ep_regs[ep->num]->
++					diepint, diepint.d32);
++		} else {
++			dctl_data_t dctl = {.d32 = 0};
++			gintmsk_data_t gintsts = {.d32 = 0};
++			doepint_data_t doepint = {.d32 = 0};
++			dctl.b.sgoutnak = 1;
++			DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->
++					 dctl, 0, dctl.d32);
++			do {
++				dwc_udelay(10);
++				gintsts.d32 = DWC_READ_REG32(&core_if->core_global_regs->gintsts);
++			} while (!gintsts.b.goutnakeff);
++			gintsts.d32 = 0;
++			gintsts.b.goutnakeff = 1;
++			DWC_WRITE_REG32(&core_if->core_global_regs->gintsts, gintsts.d32);
++
++			depctl.d32 = 0;
++			depctl.b.epdis = 1;
++			depctl.b.snak = 1;
++			DWC_WRITE_REG32(&core_if->dev_if->out_ep_regs[ep->num]->doepctl, depctl.d32);
++			do
++			{
++				dwc_udelay(10);
++				doepint.d32 = DWC_READ_REG32(&core_if->dev_if->
++											out_ep_regs[ep->num]->doepint);
++			} while (!doepint.b.epdisabled);
++
++			doepint.b.epdisabled = 1;
++			DWC_WRITE_REG32(&core_if->dev_if->out_ep_regs[ep->num]->doepint, doepint.d32);
++
++			dctl.d32 = 0;
++			dctl.b.cgoutnak = 1;
++			DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->dctl, 0, dctl.d32);
++		}
++	}
++
++	/* Disable the Interrupt for this EP */
++	if (core_if->multiproc_int_enable) {
++		DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->deachintmsk,
++				 daintmsk.d32, 0);
++
++		if (ep->is_in == 1) {
++			DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->
++					diepeachintmsk[ep->num], 0);
++		} else {
++			DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->
++					doepeachintmsk[ep->num], 0);
++		}
++	} else {
++		DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->daintmsk,
++				 daintmsk.d32, 0);
++	}
++
++}
++
++/**
++ * This function initializes dma descriptor chain.
++ *
++ * @param core_if Programming view of DWC_otg controller.
++ * @param ep The EP to start the transfer on.
++ */
++static void init_dma_desc_chain(dwc_otg_core_if_t * core_if, dwc_ep_t * ep)
++{
++	dwc_otg_dev_dma_desc_t *dma_desc;
++	uint32_t offset;
++	uint32_t xfer_est;
++	int i;
++	unsigned maxxfer_local, total_len;
++
++	if (!ep->is_in && ep->type == DWC_OTG_EP_TYPE_INTR &&
++					(ep->maxpacket%4)) {
++		maxxfer_local = ep->maxpacket;
++		total_len = ep->xfer_len;
++	} else {
++		maxxfer_local = ep->maxxfer;
++		total_len = ep->total_len;
++	}
++
++	ep->desc_cnt = (total_len / maxxfer_local) +
++            ((total_len % maxxfer_local) ? 1 : 0);
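++	/*
++	 * Example: total_len = 10000 bytes with maxxfer_local = 4096 gives
++	 * desc_cnt = 3 (two full 4096-byte descriptors plus one final
++	 * 1808-byte descriptor).
++	 */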
++
++	if (!ep->desc_cnt)
++		ep->desc_cnt = 1;
++
++	if (ep->desc_cnt > MAX_DMA_DESC_CNT)
++		ep->desc_cnt = MAX_DMA_DESC_CNT;
++
++	dma_desc = ep->desc_addr;
++	if (maxxfer_local == ep->maxpacket) {
++		if ((total_len % maxxfer_local) &&
++				(total_len/maxxfer_local < MAX_DMA_DESC_CNT)) {
++			xfer_est = (ep->desc_cnt - 1) * maxxfer_local +
++					(total_len % maxxfer_local);
++		} else
++			xfer_est = ep->desc_cnt * maxxfer_local;
++	} else
++		xfer_est = total_len;
++	offset = 0;
++	for (i = 0; i < ep->desc_cnt; ++i) {
++		/** DMA Descriptor Setup */
++		if (xfer_est > maxxfer_local) {
++			dma_desc->status.b.bs = BS_HOST_BUSY;
++			dma_desc->status.b.l = 0;
++			dma_desc->status.b.ioc = 0;
++			dma_desc->status.b.sp = 0;
++			dma_desc->status.b.bytes = maxxfer_local;
++			dma_desc->buf = ep->dma_addr + offset;
++			dma_desc->status.b.sts = 0;
++			dma_desc->status.b.bs = BS_HOST_READY;
++
++			xfer_est -= maxxfer_local;
++			offset += maxxfer_local;
++		} else {
++			dma_desc->status.b.bs = BS_HOST_BUSY;
++			dma_desc->status.b.l = 1;
++			dma_desc->status.b.ioc = 1;
++			if (ep->is_in) {
++				dma_desc->status.b.sp =
++				    (xfer_est %
++				     ep->maxpacket) ? 1 : ((ep->
++							    sent_zlp) ? 1 : 0);
++				dma_desc->status.b.bytes = xfer_est;
++			} else {
++				if (maxxfer_local == ep->maxpacket)
++					dma_desc->status.b.bytes = xfer_est;
++				else
++					dma_desc->status.b.bytes =
++						xfer_est + ((4 - (xfer_est & 0x3)) & 0x3);
++			}
++
++			dma_desc->buf = ep->dma_addr + offset;
++			dma_desc->status.b.sts = 0;
++			dma_desc->status.b.bs = BS_HOST_READY;
++		}
++		dma_desc++;
++	}
++}
++/**
++ * This function writes ISOC data into the appropriate dedicated
++ * periodic Tx FIFO.
++ */
++static int32_t write_isoc_tx_fifo(dwc_otg_core_if_t * core_if, dwc_ep_t * dwc_ep)
++{
++	dwc_otg_dev_if_t *dev_if = core_if->dev_if;
++	dwc_otg_dev_in_ep_regs_t *ep_regs;
++	dtxfsts_data_t txstatus = {.d32 = 0 };
++	uint32_t len = 0;
++	int epnum = dwc_ep->num;
++	int dwords;
++
++	DWC_DEBUGPL(DBG_PCD, "Dedicated TxFifo Empty: %d \n", epnum);
++
++	ep_regs = core_if->dev_if->in_ep_regs[epnum];
++
++	len = dwc_ep->xfer_len - dwc_ep->xfer_count;
++
++	if (len > dwc_ep->maxpacket) {
++		len = dwc_ep->maxpacket;
++	}
++
++	dwords = (len + 3) / 4;
++
++	/* While there is space in the queue and space in the FIFO and
++	 * more data to transfer, write packets to the Tx FIFO */
++	txstatus.d32 = DWC_READ_REG32(&dev_if->in_ep_regs[epnum]->dtxfsts);
++	DWC_DEBUGPL(DBG_PCDV, "b4 dtxfsts[%d]=0x%08x\n", epnum, txstatus.d32);
++
++	while (txstatus.b.txfspcavail > dwords &&
++	       dwc_ep->xfer_count < dwc_ep->xfer_len && dwc_ep->xfer_len != 0) {
++		/* Write the FIFO */
++		dwc_otg_ep_write_packet(core_if, dwc_ep, 0);
++
++		len = dwc_ep->xfer_len - dwc_ep->xfer_count;
++		if (len > dwc_ep->maxpacket) {
++			len = dwc_ep->maxpacket;
++		}
++
++		dwords = (len + 3) / 4;
++		txstatus.d32 =
++		    DWC_READ_REG32(&dev_if->in_ep_regs[epnum]->dtxfsts);
++		DWC_DEBUGPL(DBG_PCDV, "dtxfsts[%d]=0x%08x\n", epnum,
++			    txstatus.d32);
++	}
++
++	DWC_DEBUGPL(DBG_PCDV, "b4 dtxfsts[%d]=0x%08x\n", epnum,
++		    DWC_READ_REG32(&dev_if->in_ep_regs[epnum]->dtxfsts));
++
++	return 1;
++}
++/**
++ * This function does the setup for a data transfer for an EP and
++ * starts the transfer. For an IN transfer, the packets will be
++ * loaded into the appropriate Tx FIFO in the ISR. For OUT transfers,
++ * the packets are unloaded from the Rx FIFO in the ISR.
++ *
++ * @param core_if Programming view of DWC_otg controller.
++ * @param ep The EP to start the transfer on.
++ */
++
++void dwc_otg_ep_start_transfer(dwc_otg_core_if_t * core_if, dwc_ep_t * ep)
++{
++	depctl_data_t depctl;
++	deptsiz_data_t deptsiz;
++	gintmsk_data_t intr_mask = {.d32 = 0 };
++
++	DWC_DEBUGPL((DBG_PCDV | DBG_CILV), "%s()\n", __func__);
++	DWC_DEBUGPL(DBG_PCD, "ep%d-%s xfer_len=%d xfer_cnt=%d "
++		    "xfer_buff=%p start_xfer_buff=%p, total_len = %d\n",
++		    ep->num, (ep->is_in ? "IN" : "OUT"), ep->xfer_len,
++		    ep->xfer_count, ep->xfer_buff, ep->start_xfer_buff,
++		    ep->total_len);
++	/* IN endpoint */
++	if (ep->is_in == 1) {
++		dwc_otg_dev_in_ep_regs_t *in_regs =
++		    core_if->dev_if->in_ep_regs[ep->num];
++
++		gnptxsts_data_t gtxstatus;
++
++		gtxstatus.d32 =
++		    DWC_READ_REG32(&core_if->core_global_regs->gnptxsts);
++
++		if (core_if->en_multiple_tx_fifo == 0
++		    && gtxstatus.b.nptxqspcavail == 0 && !core_if->dma_enable) {
++#ifdef DEBUG
++			DWC_PRINTF("TX Queue Full (0x%0x)\n", gtxstatus.d32);
++#endif
++			return;
++		}
++
++		depctl.d32 = DWC_READ_REG32(&(in_regs->diepctl));
++		deptsiz.d32 = DWC_READ_REG32(&(in_regs->dieptsiz));
++
++		if (ep->maxpacket > ep->maxxfer / MAX_PKT_CNT)
++			ep->xfer_len += (ep->maxxfer < (ep->total_len - ep->xfer_len)) ?
++				ep->maxxfer : (ep->total_len - ep->xfer_len);
++		else
++			ep->xfer_len += (MAX_PKT_CNT * ep->maxpacket < (ep->total_len - ep->xfer_len)) ?
++				 MAX_PKT_CNT * ep->maxpacket : (ep->total_len - ep->xfer_len);
++
++
++		/* Zero Length Packet? */
++		if ((ep->xfer_len - ep->xfer_count) == 0) {
++			deptsiz.b.xfersize = 0;
++			deptsiz.b.pktcnt = 1;
++		} else {
++			/* Program the transfer size and packet count as follows:
++			 *   xfersize = N * maxpacket + short_packet
++			 *   pktcnt   = N + (short_packet ? 1 : 0)
++			 */
++			deptsiz.b.xfersize = ep->xfer_len - ep->xfer_count;
++			deptsiz.b.pktcnt =
++			    (ep->xfer_len - ep->xfer_count - 1 +
++			     ep->maxpacket) / ep->maxpacket;
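++			/* Illustrative example (hypothetical values): with
++			 * xfer_len - xfer_count = 1500 and maxpacket = 512,
++			 * xfersize = 1500 and pktcnt = (1500 - 1 + 512) / 512 = 3. */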
++			if (deptsiz.b.pktcnt > MAX_PKT_CNT) {
++				deptsiz.b.pktcnt = MAX_PKT_CNT;
++				deptsiz.b.xfersize = deptsiz.b.pktcnt * ep->maxpacket;
++			}
++			if (ep->type == DWC_OTG_EP_TYPE_ISOC)
++				deptsiz.b.mc = deptsiz.b.pktcnt;
++		}
++
++		/* Write the DMA register */
++		if (core_if->dma_enable) {
++			if (core_if->dma_desc_enable == 0) {
++				if (ep->type != DWC_OTG_EP_TYPE_ISOC)
++					deptsiz.b.mc = 1;
++				DWC_WRITE_REG32(&in_regs->dieptsiz,
++						deptsiz.d32);
++				DWC_WRITE_REG32(&(in_regs->diepdma),
++						(uint32_t) ep->dma_addr);
++			} else {
++#ifdef DWC_UTE_CFI
++				/* The descriptor chain should be already initialized by now */
++				if (ep->buff_mode != BM_STANDARD) {
++					DWC_WRITE_REG32(&in_regs->diepdma,
++							ep->descs_dma_addr);
++				} else {
++#endif
++					init_dma_desc_chain(core_if, ep);
++					/* DIEPDMAn Register write */
++					DWC_WRITE_REG32(&in_regs->diepdma,
++							ep->dma_desc_addr);
++#ifdef DWC_UTE_CFI
++				}
++#endif
++			}
++		} else {
++			DWC_WRITE_REG32(&in_regs->dieptsiz, deptsiz.d32);
++			if (ep->type != DWC_OTG_EP_TYPE_ISOC) {
++				/**
++				 * Enable the Non-Periodic Tx FIFO empty interrupt,
++				 * or the Tx FIFO empty interrupt in dedicated Tx FIFO mode;
++				 * the data will be written into the FIFO by the ISR.
++				 */
++				if (core_if->en_multiple_tx_fifo == 0) {
++					intr_mask.b.nptxfempty = 1;
++					DWC_MODIFY_REG32
++					    (&core_if->core_global_regs->gintmsk,
++					     intr_mask.d32, intr_mask.d32);
++				} else {
++					/* Enable the Tx FIFO Empty Interrupt for this EP */
++					if (ep->xfer_len > 0) {
++						uint32_t fifoemptymsk = 0;
++						fifoemptymsk = 1 << ep->num;
++						DWC_MODIFY_REG32
++						    (&core_if->dev_if->dev_global_regs->dtknqr4_fifoemptymsk,
++						     0, fifoemptymsk);
++
++					}
++				}
++			} else {
++				write_isoc_tx_fifo(core_if, ep);
++			}
++		}
++		if (!core_if->core_params->en_multiple_tx_fifo && core_if->dma_enable)
++			depctl.b.nextep = core_if->nextep_seq[ep->num];
++
++		if (ep->type == DWC_OTG_EP_TYPE_ISOC) {
++			dsts_data_t dsts = {.d32 = 0};
++			if (ep->bInterval == 1) {
++				dsts.d32 =
++				    DWC_READ_REG32(&core_if->dev_if->
++						   dev_global_regs->dsts);
++				ep->frame_num = dsts.b.soffn + ep->bInterval;
++				if (ep->frame_num > 0x3FFF) {
++					ep->frm_overrun = 1;
++					ep->frame_num &= 0x3FFF;
++				} else
++					ep->frm_overrun = 0;
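++				/* soffn is a 14-bit (micro)frame counter, so the
++				 * computed target frame wraps at 0x3FFF; frm_overrun
++				 * records that the wrap occurred. */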
++				if (ep->frame_num & 0x1) {
++					depctl.b.setd1pid = 1;
++				} else {
++					depctl.b.setd0pid = 1;
++				}
++			}
++		}
++		/* EP enable, IN data in FIFO */
++		depctl.b.cnak = 1;
++		depctl.b.epena = 1;
++		DWC_WRITE_REG32(&in_regs->diepctl, depctl.d32);
++
++	} else {
++		/* OUT endpoint */
++		dwc_otg_dev_out_ep_regs_t *out_regs =
++		    core_if->dev_if->out_ep_regs[ep->num];
++
++		depctl.d32 = DWC_READ_REG32(&(out_regs->doepctl));
++		deptsiz.d32 = DWC_READ_REG32(&(out_regs->doeptsiz));
++
++		if (!core_if->dma_desc_enable) {
++			if (ep->maxpacket > ep->maxxfer / MAX_PKT_CNT)
++				ep->xfer_len += (ep->maxxfer < (ep->total_len - ep->xfer_len)) ?
++				ep->maxxfer : (ep->total_len - ep->xfer_len);
++			else
++				ep->xfer_len += (MAX_PKT_CNT * ep->maxpacket < (ep->total_len - ep->xfer_len)) ?
++				    MAX_PKT_CNT * ep->maxpacket : (ep->total_len - ep->xfer_len);
++		}
++
++		/* Program the transfer size and packet count as follows:
++		 *
++		 *      pktcnt = N
++		 *      xfersize = N * maxpacket
++		 */
++		if ((ep->xfer_len - ep->xfer_count) == 0) {
++			/* Zero Length Packet */
++			deptsiz.b.xfersize = ep->maxpacket;
++			deptsiz.b.pktcnt = 1;
++		} else {
++			deptsiz.b.pktcnt =
++			    (ep->xfer_len - ep->xfer_count +
++			     (ep->maxpacket - 1)) / ep->maxpacket;
++			if (deptsiz.b.pktcnt > MAX_PKT_CNT) {
++				deptsiz.b.pktcnt = MAX_PKT_CNT;
++			}
++			if (!core_if->dma_desc_enable) {
++				ep->xfer_len =
++					deptsiz.b.pktcnt * ep->maxpacket + ep->xfer_count;
++			}
++			deptsiz.b.xfersize = ep->xfer_len - ep->xfer_count;
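++			/* Illustrative example (hypothetical values): remaining = 1500
++			 * and maxpacket = 512 give pktcnt = 3; without descriptor DMA,
++			 * xfer_len is rounded up to 3 * 512 = 1536, so xfersize = 1536. */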
++		}
++
++		DWC_DEBUGPL(DBG_PCDV, "ep%d xfersize=%d pktcnt=%d\n",
++			    ep->num, deptsiz.b.xfersize, deptsiz.b.pktcnt);
++
++		if (core_if->dma_enable) {
++			if (!core_if->dma_desc_enable) {
++				DWC_WRITE_REG32(&out_regs->doeptsiz,
++						deptsiz.d32);
++
++				DWC_WRITE_REG32(&(out_regs->doepdma),
++						(uint32_t) ep->dma_addr);
++			} else {
++#ifdef DWC_UTE_CFI
++				/* The descriptor chain should be already initialized by now */
++				if (ep->buff_mode != BM_STANDARD) {
++					DWC_WRITE_REG32(&out_regs->doepdma,
++							ep->descs_dma_addr);
++				} else {
++#endif
++					/* This is used for interrupt OUT transfers */
++					if (!ep->xfer_len)
++						ep->xfer_len = ep->total_len;
++					init_dma_desc_chain(core_if, ep);
++
++					if (core_if->core_params->dev_out_nak) {
++						if (ep->type == DWC_OTG_EP_TYPE_BULK) {
++							deptsiz.b.pktcnt = (ep->total_len +
++								(ep->maxpacket - 1)) / ep->maxpacket;
++							deptsiz.b.xfersize = ep->total_len;
++							/* Remember initial value of doeptsiz */
++							core_if->start_doeptsiz_val[ep->num] = deptsiz.d32;
++							DWC_WRITE_REG32(&out_regs->doeptsiz,
++								deptsiz.d32);
++						}
++					}
++					/* DOEPDMAn Register write */
++					DWC_WRITE_REG32(&out_regs->doepdma,
++							ep->dma_desc_addr);
++#ifdef DWC_UTE_CFI
++				}
++#endif
++			}
++		} else {
++			DWC_WRITE_REG32(&out_regs->doeptsiz, deptsiz.d32);
++		}
++
++		if (ep->type == DWC_OTG_EP_TYPE_ISOC) {
++			dsts_data_t dsts = {.d32 = 0};
++			if (ep->bInterval == 1) {
++				dsts.d32 =
++				    DWC_READ_REG32(&core_if->dev_if->
++						   dev_global_regs->dsts);
++				ep->frame_num = dsts.b.soffn + ep->bInterval;
++				if (ep->frame_num > 0x3FFF) {
++					ep->frm_overrun = 1;
++					ep->frame_num &= 0x3FFF;
++				} else
++					ep->frm_overrun = 0;
++
++				if (ep->frame_num & 0x1) {
++					depctl.b.setd1pid = 1;
++				} else {
++					depctl.b.setd0pid = 1;
++				}
++			}
++		}
++
++		/* EP enable */
++		depctl.b.cnak = 1;
++		depctl.b.epena = 1;
++
++		DWC_WRITE_REG32(&out_regs->doepctl, depctl.d32);
++
++		DWC_DEBUGPL(DBG_PCD, "DOEPCTL=%08x DOEPTSIZ=%08x\n",
++			    DWC_READ_REG32(&out_regs->doepctl),
++			    DWC_READ_REG32(&out_regs->doeptsiz));
++		DWC_DEBUGPL(DBG_PCD, "DAINTMSK=%08x GINTMSK=%08x\n",
++			    DWC_READ_REG32(&core_if->dev_if->dev_global_regs->
++					   daintmsk),
++			    DWC_READ_REG32(&core_if->core_global_regs->
++					   gintmsk));
++
++		/* The timer is scheduled only for OUT bulk transfers, as part of
++		 * the "Device DDMA OUT NAK Enhancement" feature, to inform the
++		 * user about the received data payload in case of a timeout
++		 */
++		if (core_if->core_params->dev_out_nak) {
++			if (ep->type == DWC_OTG_EP_TYPE_BULK) {
++				core_if->ep_xfer_info[ep->num].core_if = core_if;
++				core_if->ep_xfer_info[ep->num].ep = ep;
++				core_if->ep_xfer_info[ep->num].state = 1;
++
++				/* Start a timer for this transfer. */
++				DWC_TIMER_SCHEDULE(core_if->ep_xfer_timer[ep->num], 10000);
++			}
++		}
++	}
++}
++
++/**
++ * This function sets up a zero-length transfer in Buffer DMA and
++ * Slave modes for USB requests with the zero field set.
++ *
++ * @param core_if Programming view of DWC_otg controller.
++ * @param ep The EP to start the transfer on.
++ *
++ */
++void dwc_otg_ep_start_zl_transfer(dwc_otg_core_if_t * core_if, dwc_ep_t * ep)
++{
++
++	depctl_data_t depctl;
++	deptsiz_data_t deptsiz;
++	gintmsk_data_t intr_mask = {.d32 = 0 };
++
++	DWC_DEBUGPL((DBG_PCDV | DBG_CILV), "%s()\n", __func__);
++	DWC_PRINTF("zero length transfer is called\n");
++
++	/* IN endpoint */
++	if (ep->is_in == 1) {
++		dwc_otg_dev_in_ep_regs_t *in_regs =
++		    core_if->dev_if->in_ep_regs[ep->num];
++
++		depctl.d32 = DWC_READ_REG32(&(in_regs->diepctl));
++		deptsiz.d32 = DWC_READ_REG32(&(in_regs->dieptsiz));
++
++		deptsiz.b.xfersize = 0;
++		deptsiz.b.pktcnt = 1;
++
++		/* Write the DMA register */
++		if (core_if->dma_enable) {
++			if (core_if->dma_desc_enable == 0) {
++				deptsiz.b.mc = 1;
++				DWC_WRITE_REG32(&in_regs->dieptsiz,
++						deptsiz.d32);
++				DWC_WRITE_REG32(&(in_regs->diepdma),
++						(uint32_t) ep->dma_addr);
++			}
++		} else {
++			DWC_WRITE_REG32(&in_regs->dieptsiz, deptsiz.d32);
++			/**
++			 * Enable the Non-Periodic Tx FIFO empty interrupt,
++			 * or the Tx FIFO empty interrupt in dedicated Tx FIFO mode;
++			 * the data will be written into the FIFO by the ISR.
++			 */
++			if (core_if->en_multiple_tx_fifo == 0) {
++				intr_mask.b.nptxfempty = 1;
++				DWC_MODIFY_REG32(&core_if->
++						 core_global_regs->gintmsk,
++						 intr_mask.d32, intr_mask.d32);
++			} else {
++				/* Enable the Tx FIFO Empty Interrupt for this EP */
++				if (ep->xfer_len > 0) {
++					uint32_t fifoemptymsk = 0;
++					fifoemptymsk = 1 << ep->num;
++					DWC_MODIFY_REG32(&core_if->
++							 dev_if->dev_global_regs->dtknqr4_fifoemptymsk,
++							 0, fifoemptymsk);
++				}
++			}
++		}
++
++		if (!core_if->core_params->en_multiple_tx_fifo && core_if->dma_enable)
++			depctl.b.nextep = core_if->nextep_seq[ep->num];
++		/* EP enable, IN data in FIFO */
++		depctl.b.cnak = 1;
++		depctl.b.epena = 1;
++		DWC_WRITE_REG32(&in_regs->diepctl, depctl.d32);
++
++	} else {
++		/* OUT endpoint */
++		dwc_otg_dev_out_ep_regs_t *out_regs =
++		    core_if->dev_if->out_ep_regs[ep->num];
++
++		depctl.d32 = DWC_READ_REG32(&(out_regs->doepctl));
++		deptsiz.d32 = DWC_READ_REG32(&(out_regs->doeptsiz));
++
++		/* Zero Length Packet */
++		deptsiz.b.xfersize = ep->maxpacket;
++		deptsiz.b.pktcnt = 1;
++
++		if (core_if->dma_enable) {
++			if (!core_if->dma_desc_enable) {
++				DWC_WRITE_REG32(&out_regs->doeptsiz,
++						deptsiz.d32);
++
++				DWC_WRITE_REG32(&(out_regs->doepdma),
++						(uint32_t) ep->dma_addr);
++			}
++		} else {
++			DWC_WRITE_REG32(&out_regs->doeptsiz, deptsiz.d32);
++		}
++
++		/* EP enable */
++		depctl.b.cnak = 1;
++		depctl.b.epena = 1;
++
++		DWC_WRITE_REG32(&out_regs->doepctl, depctl.d32);
++
++	}
++}
++
++/**
++ * This function does the setup for a data transfer for EP0 and starts
++ * the transfer.  For an IN transfer, the packets will be loaded into
++ * the appropriate Tx FIFO in the ISR. For OUT transfers, the packets are
++ * unloaded from the Rx FIFO in the ISR.
++ *
++ * @param core_if Programming view of DWC_otg controller.
++ * @param ep The EP0 data.
++ */
++void dwc_otg_ep0_start_transfer(dwc_otg_core_if_t * core_if, dwc_ep_t * ep)
++{
++	depctl_data_t depctl;
++	deptsiz0_data_t deptsiz;
++	gintmsk_data_t intr_mask = {.d32 = 0 };
++	dwc_otg_dev_dma_desc_t *dma_desc;
++
++	DWC_DEBUGPL(DBG_PCD, "ep%d-%s xfer_len=%d xfer_cnt=%d "
++		    "xfer_buff=%p start_xfer_buff=%p \n",
++		    ep->num, (ep->is_in ? "IN" : "OUT"), ep->xfer_len,
++		    ep->xfer_count, ep->xfer_buff, ep->start_xfer_buff);
++
++	ep->total_len = ep->xfer_len;
++
++	/* IN endpoint */
++	if (ep->is_in == 1) {
++		dwc_otg_dev_in_ep_regs_t *in_regs =
++		    core_if->dev_if->in_ep_regs[0];
++
++		gnptxsts_data_t gtxstatus;
++
++		if (core_if->snpsid >= OTG_CORE_REV_3_00a) {
++			depctl.d32 = DWC_READ_REG32(&in_regs->diepctl);
++			if (depctl.b.epena)
++				return;
++		}
++
++		gtxstatus.d32 =
++		    DWC_READ_REG32(&core_if->core_global_regs->gnptxsts);
++
++		/* With dedicated FIFOs, flush the FIFO every time before enabling the EP */
++		if (core_if->en_multiple_tx_fifo && core_if->snpsid >= OTG_CORE_REV_3_00a)
++			dwc_otg_flush_tx_fifo(core_if, ep->tx_fifo_num);
++
++		if (core_if->en_multiple_tx_fifo == 0
++		    && gtxstatus.b.nptxqspcavail == 0
++		    && !core_if->dma_enable) {
++#ifdef DEBUG
++			deptsiz.d32 = DWC_READ_REG32(&in_regs->dieptsiz);
++			DWC_DEBUGPL(DBG_PCD, "DIEPCTL0=%0x\n",
++				    DWC_READ_REG32(&in_regs->diepctl));
++			DWC_DEBUGPL(DBG_PCD, "DIEPTSIZ0=%0x (sz=%d, pcnt=%d)\n",
++				    deptsiz.d32,
++				    deptsiz.b.xfersize, deptsiz.b.pktcnt);
++			DWC_PRINTF("TX Queue or FIFO Full (0x%0x)\n",
++				   gtxstatus.d32);
++#endif
++			return;
++		}
++
++		depctl.d32 = DWC_READ_REG32(&in_regs->diepctl);
++		deptsiz.d32 = DWC_READ_REG32(&in_regs->dieptsiz);
++
++		/* Zero Length Packet? */
++		if (ep->xfer_len == 0) {
++			deptsiz.b.xfersize = 0;
++			deptsiz.b.pktcnt = 1;
++		} else {
++			/* Program the transfer size and packet count as follows:
++			 *   xfersize = N * maxpacket + short_packet
++			 *   pktcnt   = N + (short_packet ? 1 : 0)
++			 */
++			if (ep->xfer_len > ep->maxpacket) {
++				ep->xfer_len = ep->maxpacket;
++				deptsiz.b.xfersize = ep->maxpacket;
++			} else {
++				deptsiz.b.xfersize = ep->xfer_len;
++			}
++			deptsiz.b.pktcnt = 1;
++
++		}
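++		/* EP0 IN sends at most one packet per programming pass: e.g.
++		 * (hypothetical values) a 100-byte transfer with maxpacket = 64
++		 * is clipped to 64 here; the remainder is handled by
++		 * dwc_otg_ep0_continue_transfer(). */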
++		DWC_DEBUGPL(DBG_PCDV,
++			    "IN len=%d  xfersize=%d pktcnt=%d [%08x]\n",
++			    ep->xfer_len, deptsiz.b.xfersize, deptsiz.b.pktcnt,
++			    deptsiz.d32);
++
++		/* Write the DMA register */
++		if (core_if->dma_enable) {
++			if (core_if->dma_desc_enable == 0) {
++				DWC_WRITE_REG32(&in_regs->dieptsiz,
++						deptsiz.d32);
++
++				DWC_WRITE_REG32(&(in_regs->diepdma),
++						(uint32_t) ep->dma_addr);
++			} else {
++				dma_desc = core_if->dev_if->in_desc_addr;
++
++				/** DMA Descriptor Setup */
++				dma_desc->status.b.bs = BS_HOST_BUSY;
++				dma_desc->status.b.l = 1;
++				dma_desc->status.b.ioc = 1;
++				dma_desc->status.b.sp =
++				    (ep->xfer_len == ep->maxpacket) ? 0 : 1;
++				dma_desc->status.b.bytes = ep->xfer_len;
++				dma_desc->buf = ep->dma_addr;
++				dma_desc->status.b.sts = 0;
++				dma_desc->status.b.bs = BS_HOST_READY;
++
++				/** DIEPDMA0 Register write */
++				DWC_WRITE_REG32(&in_regs->diepdma,
++						core_if->
++						dev_if->dma_in_desc_addr);
++			}
++		} else {
++			DWC_WRITE_REG32(&in_regs->dieptsiz, deptsiz.d32);
++		}
++
++		if (!core_if->core_params->en_multiple_tx_fifo && core_if->dma_enable)
++			depctl.b.nextep = core_if->nextep_seq[ep->num];
++		/* EP enable, IN data in FIFO */
++		depctl.b.cnak = 1;
++		depctl.b.epena = 1;
++		DWC_WRITE_REG32(&in_regs->diepctl, depctl.d32);
++
++		/**
++		 * Enable the Non-Periodic Tx FIFO empty interrupt, the
++		 * data will be written into the fifo by the ISR.
++		 */
++		if (!core_if->dma_enable) {
++			if (core_if->en_multiple_tx_fifo == 0) {
++				intr_mask.b.nptxfempty = 1;
++				DWC_MODIFY_REG32(&core_if->
++						 core_global_regs->gintmsk,
++						 intr_mask.d32, intr_mask.d32);
++			} else {
++				/* Enable the Tx FIFO Empty Interrupt for this EP */
++				if (ep->xfer_len > 0) {
++					uint32_t fifoemptymsk = 0;
++					fifoemptymsk |= 1 << ep->num;
++					DWC_MODIFY_REG32(&core_if->
++							 dev_if->dev_global_regs->dtknqr4_fifoemptymsk,
++							 0, fifoemptymsk);
++				}
++			}
++		}
++	} else {
++		/* OUT endpoint */
++		dwc_otg_dev_out_ep_regs_t *out_regs =
++		    core_if->dev_if->out_ep_regs[0];
++
++		depctl.d32 = DWC_READ_REG32(&out_regs->doepctl);
++		deptsiz.d32 = DWC_READ_REG32(&out_regs->doeptsiz);
++
++		/* Program the transfer size and packet count as follows:
++		 *   xfersize = N * (maxpacket + 4 - (maxpacket % 4))
++		 *   pktcnt   = N
++		 */
++		/* Zero Length Packet */
++		deptsiz.b.xfersize = ep->maxpacket;
++		deptsiz.b.pktcnt = 1;
++		if (core_if->snpsid >= OTG_CORE_REV_3_00a)
++			deptsiz.b.supcnt = 3;
++
++		DWC_DEBUGPL(DBG_PCDV, "len=%d  xfersize=%d pktcnt=%d\n",
++			    ep->xfer_len, deptsiz.b.xfersize, deptsiz.b.pktcnt);
++
++		if (core_if->dma_enable) {
++			if (!core_if->dma_desc_enable) {
++				DWC_WRITE_REG32(&out_regs->doeptsiz,
++						deptsiz.d32);
++
++				DWC_WRITE_REG32(&(out_regs->doepdma),
++						(uint32_t) ep->dma_addr);
++			} else {
++				dma_desc = core_if->dev_if->out_desc_addr;
++
++				/** DMA Descriptor Setup */
++				dma_desc->status.b.bs = BS_HOST_BUSY;
++				if (core_if->snpsid >= OTG_CORE_REV_3_00a) {
++					dma_desc->status.b.mtrf = 0;
++					dma_desc->status.b.sr = 0;
++				}
++				dma_desc->status.b.l = 1;
++				dma_desc->status.b.ioc = 1;
++				dma_desc->status.b.bytes = ep->maxpacket;
++				dma_desc->buf = ep->dma_addr;
++				dma_desc->status.b.sts = 0;
++				dma_desc->status.b.bs = BS_HOST_READY;
++
++				/** DOEPDMA0 Register write */
++				DWC_WRITE_REG32(&out_regs->doepdma,
++						core_if->dev_if->
++						dma_out_desc_addr);
++			}
++		} else {
++			DWC_WRITE_REG32(&out_regs->doeptsiz, deptsiz.d32);
++		}
++
++		/* EP enable */
++		depctl.b.cnak = 1;
++		depctl.b.epena = 1;
++		DWC_WRITE_REG32(&(out_regs->doepctl), depctl.d32);
++	}
++}
++
++/**
++ * This function continues control IN transfers started by
++ * dwc_otg_ep0_start_transfer, when the transfer does not fit in a
++ * single packet.  NOTE: The DIEPCTL0/DOEPCTL0 registers only have one
++ * bit for the packet count.
++ *
++ * @param core_if Programming view of DWC_otg controller.
++ * @param ep The EP0 data.
++ */
++void dwc_otg_ep0_continue_transfer(dwc_otg_core_if_t * core_if, dwc_ep_t * ep)
++{
++	depctl_data_t depctl;
++	deptsiz0_data_t deptsiz;
++	gintmsk_data_t intr_mask = {.d32 = 0 };
++	dwc_otg_dev_dma_desc_t *dma_desc;
++
++	if (ep->is_in == 1) {
++		dwc_otg_dev_in_ep_regs_t *in_regs =
++		    core_if->dev_if->in_ep_regs[0];
++		gnptxsts_data_t tx_status = {.d32 = 0 };
++
++		tx_status.d32 =
++		    DWC_READ_REG32(&core_if->core_global_regs->gnptxsts);
++		/** @todo Should there be a check for room in the Tx
++		 * Status Queue?  If not, remove the code above this comment. */
++
++		depctl.d32 = DWC_READ_REG32(&in_regs->diepctl);
++		deptsiz.d32 = DWC_READ_REG32(&in_regs->dieptsiz);
++
++		/* Program the transfer size and packet count as follows:
++		 *   xfersize = N * maxpacket + short_packet
++		 *   pktcnt   = N + (short_packet ? 1 : 0)
++		 */
++
++		if (core_if->dma_desc_enable == 0) {
++			deptsiz.b.xfersize =
++			    (ep->total_len - ep->xfer_count) >
++			    ep->maxpacket ? ep->maxpacket : (ep->total_len -
++							     ep->xfer_count);
++			deptsiz.b.pktcnt = 1;
++			if (core_if->dma_enable == 0) {
++				ep->xfer_len += deptsiz.b.xfersize;
++			} else {
++				ep->xfer_len = deptsiz.b.xfersize;
++			}
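++			/* Illustrative example (hypothetical values): total_len = 100,
++			 * xfer_count = 64 and maxpacket = 64 mean this pass programs
++			 * the remaining 36 bytes with pktcnt = 1. */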
++			DWC_WRITE_REG32(&in_regs->dieptsiz, deptsiz.d32);
++		} else {
++			ep->xfer_len =
++			    (ep->total_len - ep->xfer_count) >
++			    ep->maxpacket ? ep->maxpacket : (ep->total_len -
++							     ep->xfer_count);
++
++			dma_desc = core_if->dev_if->in_desc_addr;
++
++			/** DMA Descriptor Setup */
++			dma_desc->status.b.bs = BS_HOST_BUSY;
++			dma_desc->status.b.l = 1;
++			dma_desc->status.b.ioc = 1;
++			dma_desc->status.b.sp =
++			    (ep->xfer_len == ep->maxpacket) ? 0 : 1;
++			dma_desc->status.b.bytes = ep->xfer_len;
++			dma_desc->buf = ep->dma_addr;
++			dma_desc->status.b.sts = 0;
++			dma_desc->status.b.bs = BS_HOST_READY;
++
++			/** DIEPDMA0 Register write */
++			DWC_WRITE_REG32(&in_regs->diepdma,
++					core_if->dev_if->dma_in_desc_addr);
++		}
++
++		DWC_DEBUGPL(DBG_PCDV,
++			    "IN len=%d  xfersize=%d pktcnt=%d [%08x]\n",
++			    ep->xfer_len, deptsiz.b.xfersize, deptsiz.b.pktcnt,
++			    deptsiz.d32);
++
++		/* Write the DMA register */
++		if (core_if->hwcfg2.b.architecture == DWC_INT_DMA_ARCH) {
++			if (core_if->dma_desc_enable == 0)
++				DWC_WRITE_REG32(&(in_regs->diepdma),
++						(uint32_t) ep->dma_addr);
++		}
++		if (!core_if->core_params->en_multiple_tx_fifo && core_if->dma_enable)
++			depctl.b.nextep = core_if->nextep_seq[ep->num];
++		/* EP enable, IN data in FIFO */
++		depctl.b.cnak = 1;
++		depctl.b.epena = 1;
++		DWC_WRITE_REG32(&in_regs->diepctl, depctl.d32);
++
++		/**
++		 * Enable the Non-Periodic Tx FIFO empty interrupt, the
++		 * data will be written into the fifo by the ISR.
++		 */
++		if (!core_if->dma_enable) {
++			if (core_if->en_multiple_tx_fifo == 0) {
++				/* First clear it from GINTSTS */
++				intr_mask.b.nptxfempty = 1;
++				DWC_MODIFY_REG32(&core_if->
++						 core_global_regs->gintmsk,
++						 intr_mask.d32, intr_mask.d32);
++
++			} else {
++				/* Enable the Tx FIFO Empty Interrupt for this EP */
++				if (ep->xfer_len > 0) {
++					uint32_t fifoemptymsk = 0;
++					fifoemptymsk |= 1 << ep->num;
++					DWC_MODIFY_REG32(&core_if->
++							 dev_if->dev_global_regs->dtknqr4_fifoemptymsk,
++							 0, fifoemptymsk);
++				}
++			}
++		}
++	} else {
++		dwc_otg_dev_out_ep_regs_t *out_regs =
++		    core_if->dev_if->out_ep_regs[0];
++
++		depctl.d32 = DWC_READ_REG32(&out_regs->doepctl);
++		deptsiz.d32 = DWC_READ_REG32(&out_regs->doeptsiz);
++
++		/* Program the transfer size and packet count as follows:
++		 *   xfersize = N * maxpacket + short_packet
++		 *   pktcnt   = N + (short_packet ? 1 : 0)
++		 */
++		deptsiz.b.xfersize = ep->maxpacket;
++		deptsiz.b.pktcnt = 1;
++
++		if (core_if->dma_desc_enable == 0) {
++			DWC_WRITE_REG32(&out_regs->doeptsiz, deptsiz.d32);
++		} else {
++			dma_desc = core_if->dev_if->out_desc_addr;
++
++			/** DMA Descriptor Setup */
++			dma_desc->status.b.bs = BS_HOST_BUSY;
++			dma_desc->status.b.l = 1;
++			dma_desc->status.b.ioc = 1;
++			dma_desc->status.b.bytes = ep->maxpacket;
++			dma_desc->buf = ep->dma_addr;
++			dma_desc->status.b.sts = 0;
++			dma_desc->status.b.bs = BS_HOST_READY;
++
++			/** DOEPDMA0 Register write */
++			DWC_WRITE_REG32(&out_regs->doepdma,
++					core_if->dev_if->dma_out_desc_addr);
++		}
++
++		DWC_DEBUGPL(DBG_PCDV,
++			    "OUT len=%d  xfersize=%d pktcnt=%d [%08x]\n",
++			    ep->xfer_len, deptsiz.b.xfersize, deptsiz.b.pktcnt,
++			    deptsiz.d32);
++
++		/* Write the DMA register */
++		if (core_if->hwcfg2.b.architecture == DWC_INT_DMA_ARCH) {
++			if (core_if->dma_desc_enable == 0)
++				DWC_WRITE_REG32(&(out_regs->doepdma),
++						(uint32_t) ep->dma_addr);
++
++		}
++
++		/* EP enable */
++		depctl.b.cnak = 1;
++		depctl.b.epena = 1;
++		DWC_WRITE_REG32(&out_regs->doepctl, depctl.d32);
++
++	}
++}
++
++#ifdef DEBUG
++void dump_msg(const u8 * buf, unsigned int length)
++{
++	unsigned int start, num, i;
++	char line[52], *p;
++
++	if (length >= 512)
++		return;
++	start = 0;
++	while (length > 0) {
++		num = length < 16u ? length : 16u;
++		p = line;
++		for (i = 0; i < num; ++i) {
++			if (i == 8)
++				*p++ = ' ';
++			DWC_SPRINTF(p, " %02x", buf[i]);
++			p += 3;
++		}
++		*p = 0;
++		DWC_PRINTF("%6x: %s\n", start, line);
++		buf += num;
++		start += num;
++		length -= num;
++	}
++}
++#else
++static inline void dump_msg(const u8 * buf, unsigned int length)
++{
++}
++#endif
++
++/**
++ * This function writes a packet into the Tx FIFO associated with the
++ * EP. For non-periodic EPs the non-periodic Tx FIFO is written.  For
++ * periodic EPs the periodic Tx FIFO associated with the EP is written
++ * with all packets for the next micro-frame.
++ *
++ * @param core_if Programming view of DWC_otg controller.
++ * @param ep The EP to write packet for.
++ * @param dma Indicates if DMA is being used.
++ */
++void dwc_otg_ep_write_packet(dwc_otg_core_if_t * core_if, dwc_ep_t * ep,
++			     int dma)
++{
++	/**
++	 * The buffer is padded to DWORD on a per packet basis in
++	 * slave/dma mode if the MPS is not DWORD aligned. The last
++	 * packet, if short, is also padded to a multiple of DWORD.
++	 *
++	 * ep->xfer_buff always starts DWORD aligned in memory and is a
++	 * multiple of DWORD in length
++	 *
++	 * ep->xfer_len can be any number of bytes
++	 *
++	 * ep->xfer_count is a multiple of ep->maxpacket until the last
++	 *	packet
++	 *
++	 * FIFO access is DWORD */
++
++	uint32_t i;
++	uint32_t byte_count;
++	uint32_t dword_count;
++	uint32_t *fifo;
++	uint32_t *data_buff = (uint32_t *) ep->xfer_buff;
++
++	DWC_DEBUGPL((DBG_PCDV | DBG_CILV), "%s(%p,%p)\n", __func__, core_if,
++		    ep);
++	if (ep->xfer_count >= ep->xfer_len) {
++		DWC_WARN("%s() No data for EP%d!!!\n", __func__, ep->num);
++		return;
++	}
++
++	/* Find the byte length of the packet, either a short packet or MPS */
++	if ((ep->xfer_len - ep->xfer_count) < ep->maxpacket) {
++		byte_count = ep->xfer_len - ep->xfer_count;
++	} else {
++		byte_count = ep->maxpacket;
++	}
++
++	/* Find the DWORD length, padded by extra bytes as necessary if MPS
++	 * is not a multiple of DWORD */
++	dword_count = (byte_count + 3) / 4;
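++	/* Illustrative example (hypothetical value): byte_count = 10 is
++	 * padded to 3 DWORDs, i.e. 12 bytes are written to the FIFO. */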
++
++#ifdef VERBOSE
++	dump_msg(ep->xfer_buff, byte_count);
++#endif
++
++	/**@todo NGS Where are the Periodic Tx FIFO addresses
++	 * initialized? What should this be? */
++
++	fifo = core_if->data_fifo[ep->num];
++
++	DWC_DEBUGPL((DBG_PCDV | DBG_CILV), "fifo=%p buff=%p *p=%08x bc=%d\n",
++		    fifo, data_buff, *data_buff, byte_count);
++
++	if (!dma) {
++		for (i = 0; i < dword_count; i++, data_buff++) {
++			DWC_WRITE_REG32(fifo, *data_buff);
++		}
++	}
++
++	ep->xfer_count += byte_count;
++	ep->xfer_buff += byte_count;
++	ep->dma_addr += byte_count;
++}
++
++/**
++ * Set the EP STALL.
++ *
++ * @param core_if Programming view of DWC_otg controller.
++ * @param ep The EP to set the stall on.
++ */
++void dwc_otg_ep_set_stall(dwc_otg_core_if_t * core_if, dwc_ep_t * ep)
++{
++	depctl_data_t depctl;
++	volatile uint32_t *depctl_addr;
++
++	DWC_DEBUGPL(DBG_PCD, "%s ep%d-%s\n", __func__, ep->num,
++		    (ep->is_in ? "IN" : "OUT"));
++
++	if (ep->is_in == 1) {
++		depctl_addr = &(core_if->dev_if->in_ep_regs[ep->num]->diepctl);
++		depctl.d32 = DWC_READ_REG32(depctl_addr);
++
++		/* set the disable and stall bits */
++		if (depctl.b.epena) {
++			depctl.b.epdis = 1;
++		}
++		depctl.b.stall = 1;
++		DWC_WRITE_REG32(depctl_addr, depctl.d32);
++	} else {
++		depctl_addr = &(core_if->dev_if->out_ep_regs[ep->num]->doepctl);
++		depctl.d32 = DWC_READ_REG32(depctl_addr);
++
++		/* set the stall bit */
++		depctl.b.stall = 1;
++		DWC_WRITE_REG32(depctl_addr, depctl.d32);
++	}
++
++	DWC_DEBUGPL(DBG_PCD, "DEPCTL=%0x\n", DWC_READ_REG32(depctl_addr));
++
++	return;
++}
++
++/**
++ * Clear the EP STALL.
++ *
++ * @param core_if Programming view of DWC_otg controller.
++ * @param ep The EP to clear stall from.
++ */
++void dwc_otg_ep_clear_stall(dwc_otg_core_if_t * core_if, dwc_ep_t * ep)
++{
++	depctl_data_t depctl;
++	volatile uint32_t *depctl_addr;
++
++	DWC_DEBUGPL(DBG_PCD, "%s ep%d-%s\n", __func__, ep->num,
++		    (ep->is_in ? "IN" : "OUT"));
++
++	if (ep->is_in == 1) {
++		depctl_addr = &(core_if->dev_if->in_ep_regs[ep->num]->diepctl);
++	} else {
++		depctl_addr = &(core_if->dev_if->out_ep_regs[ep->num]->doepctl);
++	}
++
++	depctl.d32 = DWC_READ_REG32(depctl_addr);
++
++	/* clear the stall bits */
++	depctl.b.stall = 0;
++
++	/*
++	 * USB Spec 9.4.5: For endpoints using data toggle, regardless
++	 * of whether an endpoint has the Halt feature set, a
++	 * ClearFeature(ENDPOINT_HALT) request always results in the
++	 * data toggle being reinitialized to DATA0.
++	 */
++	if (ep->type == DWC_OTG_EP_TYPE_INTR ||
++	    ep->type == DWC_OTG_EP_TYPE_BULK) {
++		depctl.b.setd0pid = 1;	/* DATA0 */
++	}
++
++	DWC_WRITE_REG32(depctl_addr, depctl.d32);
++	DWC_DEBUGPL(DBG_PCD, "DEPCTL=%0x\n", DWC_READ_REG32(depctl_addr));
++	return;
++}
++
++/**
++ * This function reads a packet from the Rx FIFO into the destination
++ * buffer. To read SETUP data use dwc_otg_read_setup_packet.
++ *
++ * @param core_if Programming view of DWC_otg controller.
++ * @param dest	  Destination buffer for the packet.
++ * @param bytes  Number of bytes to copy to the destination.
++ */
++void dwc_otg_read_packet(dwc_otg_core_if_t * core_if,
++			 uint8_t * dest, uint16_t bytes)
++{
++	int i;
++	int word_count = (bytes + 3) / 4;
++
++	volatile uint32_t *fifo = core_if->data_fifo[0];
++	uint32_t *data_buff = (uint32_t *) dest;
++
++	/**
++	 * @todo Account for the case where _dest is not dword aligned. This
++	 * requires reading data from the FIFO into a uint32_t temp buffer,
++	 * then moving it into the data buffer.
++	 */
++
++	DWC_DEBUGPL((DBG_PCDV | DBG_CILV), "%s(%p,%p,%d)\n", __func__,
++		    core_if, dest, bytes);
++
++	for (i = 0; i < word_count; i++, data_buff++) {
++		*data_buff = DWC_READ_REG32(fifo);
++	}
++
++	return;
++}
++
++/**
++ * This function reads the device registers and prints them.
++ *
++ * @param core_if Programming view of DWC_otg controller.
++ */
++void dwc_otg_dump_dev_registers(dwc_otg_core_if_t * core_if)
++{
++	int i;
++	volatile uint32_t *addr;
++
++	DWC_PRINTF("Device Global Registers\n");
++	addr = &core_if->dev_if->dev_global_regs->dcfg;
++	DWC_PRINTF("DCFG		 @0x%08lX : 0x%08X\n",
++		   (unsigned long)addr, DWC_READ_REG32(addr));
++	addr = &core_if->dev_if->dev_global_regs->dctl;
++	DWC_PRINTF("DCTL		 @0x%08lX : 0x%08X\n",
++		   (unsigned long)addr, DWC_READ_REG32(addr));
++	addr = &core_if->dev_if->dev_global_regs->dsts;
++	DWC_PRINTF("DSTS		 @0x%08lX : 0x%08X\n",
++		   (unsigned long)addr, DWC_READ_REG32(addr));
++	addr = &core_if->dev_if->dev_global_regs->diepmsk;
++	DWC_PRINTF("DIEPMSK	 @0x%08lX : 0x%08X\n", (unsigned long)addr,
++		   DWC_READ_REG32(addr));
++	addr = &core_if->dev_if->dev_global_regs->doepmsk;
++	DWC_PRINTF("DOEPMSK	 @0x%08lX : 0x%08X\n", (unsigned long)addr,
++		   DWC_READ_REG32(addr));
++	addr = &core_if->dev_if->dev_global_regs->daint;
++	DWC_PRINTF("DAINT	 @0x%08lX : 0x%08X\n", (unsigned long)addr,
++		   DWC_READ_REG32(addr));
++	addr = &core_if->dev_if->dev_global_regs->daintmsk;
++	DWC_PRINTF("DAINTMSK	 @0x%08lX : 0x%08X\n", (unsigned long)addr,
++		   DWC_READ_REG32(addr));
++	addr = &core_if->dev_if->dev_global_regs->dtknqr1;
++	DWC_PRINTF("DTKNQR1	 @0x%08lX : 0x%08X\n", (unsigned long)addr,
++		   DWC_READ_REG32(addr));
++	if (core_if->hwcfg2.b.dev_token_q_depth > 6) {
++		addr = &core_if->dev_if->dev_global_regs->dtknqr2;
++		DWC_PRINTF("DTKNQR2	 @0x%08lX : 0x%08X\n",
++			   (unsigned long)addr, DWC_READ_REG32(addr));
++	}
++
++	addr = &core_if->dev_if->dev_global_regs->dvbusdis;
++	DWC_PRINTF("DVBUSID	 @0x%08lX : 0x%08X\n", (unsigned long)addr,
++		   DWC_READ_REG32(addr));
++
++	addr = &core_if->dev_if->dev_global_regs->dvbuspulse;
++	DWC_PRINTF("DVBUSPULSE	@0x%08lX : 0x%08X\n",
++		   (unsigned long)addr, DWC_READ_REG32(addr));
++
++	addr = &core_if->dev_if->dev_global_regs->dtknqr3_dthrctl;
++	DWC_PRINTF("DTKNQR3_DTHRCTL	 @0x%08lX : 0x%08X\n",
++		   (unsigned long)addr, DWC_READ_REG32(addr));
++
++	if (core_if->hwcfg2.b.dev_token_q_depth > 22) {
++		addr = &core_if->dev_if->dev_global_regs->dtknqr4_fifoemptymsk;
++		DWC_PRINTF("DTKNQR4	 @0x%08lX : 0x%08X\n",
++			   (unsigned long)addr, DWC_READ_REG32(addr));
++	}
++
++	addr = &core_if->dev_if->dev_global_regs->dtknqr4_fifoemptymsk;
++	DWC_PRINTF("FIFOEMPMSK	 @0x%08lX : 0x%08X\n", (unsigned long)addr,
++		   DWC_READ_REG32(addr));
++
++	if (core_if->hwcfg2.b.multi_proc_int) {
++
++		addr = &core_if->dev_if->dev_global_regs->deachint;
++		DWC_PRINTF("DEACHINT	 @0x%08lX : 0x%08X\n",
++			   (unsigned long)addr, DWC_READ_REG32(addr));
++		addr = &core_if->dev_if->dev_global_regs->deachintmsk;
++		DWC_PRINTF("DEACHINTMSK	 @0x%08lX : 0x%08X\n",
++			   (unsigned long)addr, DWC_READ_REG32(addr));
++
++		for (i = 0; i <= core_if->dev_if->num_in_eps; i++) {
++			addr =
++			    &core_if->dev_if->
++			    dev_global_regs->diepeachintmsk[i];
++			DWC_PRINTF("DIEPEACHINTMSK[%d]	 @0x%08lX : 0x%08X\n",
++				   i, (unsigned long)addr,
++				   DWC_READ_REG32(addr));
++		}
++
++		for (i = 0; i <= core_if->dev_if->num_out_eps; i++) {
++			addr =
++			    &core_if->dev_if->
++			    dev_global_regs->doepeachintmsk[i];
++			DWC_PRINTF("DOEPEACHINTMSK[%d]	 @0x%08lX : 0x%08X\n",
++				   i, (unsigned long)addr,
++				   DWC_READ_REG32(addr));
++		}
++	}
++
++	for (i = 0; i <= core_if->dev_if->num_in_eps; i++) {
++		DWC_PRINTF("Device IN EP %d Registers\n", i);
++		addr = &core_if->dev_if->in_ep_regs[i]->diepctl;
++		DWC_PRINTF("DIEPCTL	 @0x%08lX : 0x%08X\n",
++			   (unsigned long)addr, DWC_READ_REG32(addr));
++		addr = &core_if->dev_if->in_ep_regs[i]->diepint;
++		DWC_PRINTF("DIEPINT	 @0x%08lX : 0x%08X\n",
++			   (unsigned long)addr, DWC_READ_REG32(addr));
++		addr = &core_if->dev_if->in_ep_regs[i]->dieptsiz;
++		DWC_PRINTF("DIETSIZ	 @0x%08lX : 0x%08X\n",
++			   (unsigned long)addr, DWC_READ_REG32(addr));
++		addr = &core_if->dev_if->in_ep_regs[i]->diepdma;
++		DWC_PRINTF("DIEPDMA	 @0x%08lX : 0x%08X\n",
++			   (unsigned long)addr, DWC_READ_REG32(addr));
++		addr = &core_if->dev_if->in_ep_regs[i]->dtxfsts;
++		DWC_PRINTF("DTXFSTS	 @0x%08lX : 0x%08X\n",
++			   (unsigned long)addr, DWC_READ_REG32(addr));
++		addr = &core_if->dev_if->in_ep_regs[i]->diepdmab;
++		DWC_PRINTF("DIEPDMAB	 @0x%08lX : 0x%08X\n",
++			   (unsigned long)addr, 0 /*DWC_READ_REG32(addr) */ );
++	}
++
++	for (i = 0; i <= core_if->dev_if->num_out_eps; i++) {
++		DWC_PRINTF("Device OUT EP %d Registers\n", i);
++		addr = &core_if->dev_if->out_ep_regs[i]->doepctl;
++		DWC_PRINTF("DOEPCTL	 @0x%08lX : 0x%08X\n",
++			   (unsigned long)addr, DWC_READ_REG32(addr));
++		addr = &core_if->dev_if->out_ep_regs[i]->doepint;
++		DWC_PRINTF("DOEPINT	 @0x%08lX : 0x%08X\n",
++			   (unsigned long)addr, DWC_READ_REG32(addr));
++		addr = &core_if->dev_if->out_ep_regs[i]->doeptsiz;
++		DWC_PRINTF("DOETSIZ	 @0x%08lX : 0x%08X\n",
++			   (unsigned long)addr, DWC_READ_REG32(addr));
++		addr = &core_if->dev_if->out_ep_regs[i]->doepdma;
++		DWC_PRINTF("DOEPDMA	 @0x%08lX : 0x%08X\n",
++			   (unsigned long)addr, DWC_READ_REG32(addr));
++		if (core_if->dma_enable) {	/* Don't access this register in SLAVE mode */
++			addr = &core_if->dev_if->out_ep_regs[i]->doepdmab;
++			DWC_PRINTF("DOEPDMAB	 @0x%08lX : 0x%08X\n",
++				   (unsigned long)addr, DWC_READ_REG32(addr));
++		}
++
++	}
++}
++
++/**
++ * This function reads the SPRAM and prints its contents.
++ *
++ * @param core_if Programming view of DWC_otg controller.
++ */
++void dwc_otg_dump_spram(dwc_otg_core_if_t * core_if)
++{
++	volatile uint8_t *addr, *start_addr, *end_addr;
++
++	DWC_PRINTF("SPRAM Data:\n");
++	start_addr = (void *)core_if->core_global_regs;
++	DWC_PRINTF("Base Address: 0x%8lX\n", (unsigned long)start_addr);
++	start_addr += 0x00028000;
++	end_addr = (void *)core_if->core_global_regs;
++	end_addr += 0x000280e0;
++
++	for (addr = start_addr; addr < end_addr; addr += 16) {
++		DWC_PRINTF
++		    ("0x%8lX:\t%2X %2X %2X %2X %2X %2X %2X %2X %2X %2X %2X %2X %2X %2X %2X %2X\n",
++		     (unsigned long)addr, addr[0], addr[1], addr[2], addr[3],
++		     addr[4], addr[5], addr[6], addr[7], addr[8], addr[9],
++		     addr[10], addr[11], addr[12], addr[13], addr[14], addr[15]
++		    );
++	}
++
++	return;
++}
++
++/**
++ * This function reads the host registers and prints them
++ *
++ * @param core_if Programming view of DWC_otg controller.
++ */
++void dwc_otg_dump_host_registers(dwc_otg_core_if_t * core_if)
++{
++	int i;
++	volatile uint32_t *addr;
++
++	DWC_PRINTF("Host Global Registers\n");
++	addr = &core_if->host_if->host_global_regs->hcfg;
++	DWC_PRINTF("HCFG		 @0x%08lX : 0x%08X\n",
++		   (unsigned long)addr, DWC_READ_REG32(addr));
++	addr = &core_if->host_if->host_global_regs->hfir;
++	DWC_PRINTF("HFIR		 @0x%08lX : 0x%08X\n",
++		   (unsigned long)addr, DWC_READ_REG32(addr));
++	addr = &core_if->host_if->host_global_regs->hfnum;
++	DWC_PRINTF("HFNUM	 @0x%08lX : 0x%08X\n", (unsigned long)addr,
++		   DWC_READ_REG32(addr));
++	addr = &core_if->host_if->host_global_regs->hptxsts;
++	DWC_PRINTF("HPTXSTS	 @0x%08lX : 0x%08X\n", (unsigned long)addr,
++		   DWC_READ_REG32(addr));
++	addr = &core_if->host_if->host_global_regs->haint;
++	DWC_PRINTF("HAINT	 @0x%08lX : 0x%08X\n", (unsigned long)addr,
++		   DWC_READ_REG32(addr));
++	addr = &core_if->host_if->host_global_regs->haintmsk;
++	DWC_PRINTF("HAINTMSK	 @0x%08lX : 0x%08X\n", (unsigned long)addr,
++		   DWC_READ_REG32(addr));
++	if (core_if->dma_desc_enable) {
++		addr = &core_if->host_if->host_global_regs->hflbaddr;
++		DWC_PRINTF("HFLBADDR	 @0x%08lX : 0x%08X\n",
++			   (unsigned long)addr, DWC_READ_REG32(addr));
++	}
++
++	addr = core_if->host_if->hprt0;
++	DWC_PRINTF("HPRT0	 @0x%08lX : 0x%08X\n", (unsigned long)addr,
++		   DWC_READ_REG32(addr));
++
++	for (i = 0; i < core_if->core_params->host_channels; i++) {
++		DWC_PRINTF("Host Channel %d Specific Registers\n", i);
++		addr = &core_if->host_if->hc_regs[i]->hcchar;
++		DWC_PRINTF("HCCHAR	 @0x%08lX : 0x%08X\n",
++			   (unsigned long)addr, DWC_READ_REG32(addr));
++		addr = &core_if->host_if->hc_regs[i]->hcsplt;
++		DWC_PRINTF("HCSPLT	 @0x%08lX : 0x%08X\n",
++			   (unsigned long)addr, DWC_READ_REG32(addr));
++		addr = &core_if->host_if->hc_regs[i]->hcint;
++		DWC_PRINTF("HCINT	 @0x%08lX : 0x%08X\n",
++			   (unsigned long)addr, DWC_READ_REG32(addr));
++		addr = &core_if->host_if->hc_regs[i]->hcintmsk;
++		DWC_PRINTF("HCINTMSK	 @0x%08lX : 0x%08X\n",
++			   (unsigned long)addr, DWC_READ_REG32(addr));
++		addr = &core_if->host_if->hc_regs[i]->hctsiz;
++		DWC_PRINTF("HCTSIZ	 @0x%08lX : 0x%08X\n",
++			   (unsigned long)addr, DWC_READ_REG32(addr));
++		addr = &core_if->host_if->hc_regs[i]->hcdma;
++		DWC_PRINTF("HCDMA	 @0x%08lX : 0x%08X\n",
++			   (unsigned long)addr, DWC_READ_REG32(addr));
++		if (core_if->dma_desc_enable) {
++			addr = &core_if->host_if->hc_regs[i]->hcdmab;
++			DWC_PRINTF("HCDMAB	 @0x%08lX : 0x%08X\n",
++				   (unsigned long)addr, DWC_READ_REG32(addr));
++		}
++
++	}
++	return;
++}
++
++/**
++ * This function reads the core global registers and prints them
++ *
++ * @param core_if Programming view of DWC_otg controller.
++ */
++void dwc_otg_dump_global_registers(dwc_otg_core_if_t * core_if)
++{
++	int i, ep_num;
++	volatile uint32_t *addr;
++	char *txfsiz;
++
++	DWC_PRINTF("Core Global Registers\n");
++	addr = &core_if->core_global_regs->gotgctl;
++	DWC_PRINTF("GOTGCTL	 @0x%08lX : 0x%08X\n", (unsigned long)addr,
++		   DWC_READ_REG32(addr));
++	addr = &core_if->core_global_regs->gotgint;
++	DWC_PRINTF("GOTGINT	 @0x%08lX : 0x%08X\n", (unsigned long)addr,
++		   DWC_READ_REG32(addr));
++	addr = &core_if->core_global_regs->gahbcfg;
++	DWC_PRINTF("GAHBCFG	 @0x%08lX : 0x%08X\n", (unsigned long)addr,
++		   DWC_READ_REG32(addr));
++	addr = &core_if->core_global_regs->gusbcfg;
++	DWC_PRINTF("GUSBCFG	 @0x%08lX : 0x%08X\n", (unsigned long)addr,
++		   DWC_READ_REG32(addr));
++	addr = &core_if->core_global_regs->grstctl;
++	DWC_PRINTF("GRSTCTL	 @0x%08lX : 0x%08X\n", (unsigned long)addr,
++		   DWC_READ_REG32(addr));
++	addr = &core_if->core_global_regs->gintsts;
++	DWC_PRINTF("GINTSTS	 @0x%08lX : 0x%08X\n", (unsigned long)addr,
++		   DWC_READ_REG32(addr));
++	addr = &core_if->core_global_regs->gintmsk;
++	DWC_PRINTF("GINTMSK	 @0x%08lX : 0x%08X\n", (unsigned long)addr,
++		   DWC_READ_REG32(addr));
++	addr = &core_if->core_global_regs->grxstsr;
++	DWC_PRINTF("GRXSTSR	 @0x%08lX : 0x%08X\n", (unsigned long)addr,
++		   DWC_READ_REG32(addr));
++	addr = &core_if->core_global_regs->grxfsiz;
++	DWC_PRINTF("GRXFSIZ	 @0x%08lX : 0x%08X\n", (unsigned long)addr,
++		   DWC_READ_REG32(addr));
++	addr = &core_if->core_global_regs->gnptxfsiz;
++	DWC_PRINTF("GNPTXFSIZ @0x%08lX : 0x%08X\n", (unsigned long)addr,
++		   DWC_READ_REG32(addr));
++	addr = &core_if->core_global_regs->gnptxsts;
++	DWC_PRINTF("GNPTXSTS	 @0x%08lX : 0x%08X\n", (unsigned long)addr,
++		   DWC_READ_REG32(addr));
++	addr = &core_if->core_global_regs->gi2cctl;
++	DWC_PRINTF("GI2CCTL	 @0x%08lX : 0x%08X\n", (unsigned long)addr,
++		   DWC_READ_REG32(addr));
++	addr = &core_if->core_global_regs->gpvndctl;
++	DWC_PRINTF("GPVNDCTL	 @0x%08lX : 0x%08X\n", (unsigned long)addr,
++		   DWC_READ_REG32(addr));
++	addr = &core_if->core_global_regs->ggpio;
++	DWC_PRINTF("GGPIO	 @0x%08lX : 0x%08X\n", (unsigned long)addr,
++		   DWC_READ_REG32(addr));
++	addr = &core_if->core_global_regs->guid;
++	DWC_PRINTF("GUID		 @0x%08lX : 0x%08X\n",
++		   (unsigned long)addr, DWC_READ_REG32(addr));
++	addr = &core_if->core_global_regs->gsnpsid;
++	DWC_PRINTF("GSNPSID	 @0x%08lX : 0x%08X\n", (unsigned long)addr,
++		   DWC_READ_REG32(addr));
++	addr = &core_if->core_global_regs->ghwcfg1;
++	DWC_PRINTF("GHWCFG1	 @0x%08lX : 0x%08X\n", (unsigned long)addr,
++		   DWC_READ_REG32(addr));
++	addr = &core_if->core_global_regs->ghwcfg2;
++	DWC_PRINTF("GHWCFG2	 @0x%08lX : 0x%08X\n", (unsigned long)addr,
++		   DWC_READ_REG32(addr));
++	addr = &core_if->core_global_regs->ghwcfg3;
++	DWC_PRINTF("GHWCFG3	 @0x%08lX : 0x%08X\n", (unsigned long)addr,
++		   DWC_READ_REG32(addr));
++	addr = &core_if->core_global_regs->ghwcfg4;
++	DWC_PRINTF("GHWCFG4	 @0x%08lX : 0x%08X\n", (unsigned long)addr,
++		   DWC_READ_REG32(addr));
++	addr = &core_if->core_global_regs->glpmcfg;
++	DWC_PRINTF("GLPMCFG	 @0x%08lX : 0x%08X\n", (unsigned long)addr,
++		   DWC_READ_REG32(addr));
++	addr = &core_if->core_global_regs->gpwrdn;
++	DWC_PRINTF("GPWRDN	 @0x%08lX : 0x%08X\n", (unsigned long)addr,
++		   DWC_READ_REG32(addr));
++	addr = &core_if->core_global_regs->gdfifocfg;
++	DWC_PRINTF("GDFIFOCFG	 @0x%08lX : 0x%08X\n", (unsigned long)addr,
++		   DWC_READ_REG32(addr));
++	addr = &core_if->core_global_regs->adpctl;
++	DWC_PRINTF("ADPCTL	 @0x%08lX : 0x%08X\n", (unsigned long)addr,
++		   dwc_otg_adp_read_reg(core_if));
++	addr = &core_if->core_global_regs->hptxfsiz;
++	DWC_PRINTF("HPTXFSIZ	 @0x%08lX : 0x%08X\n", (unsigned long)addr,
++		   DWC_READ_REG32(addr));
++
++	if (core_if->en_multiple_tx_fifo == 0) {
++		ep_num = core_if->hwcfg4.b.num_dev_perio_in_ep;
++		txfsiz = "DPTXFSIZ";
++	} else {
++		ep_num = core_if->hwcfg4.b.num_in_eps;
++		txfsiz = "DIENPTXF";
++	}
++	for (i = 0; i < ep_num; i++) {
++		addr = &core_if->core_global_regs->dtxfsiz[i];
++		DWC_PRINTF("%s[%d] @0x%08lX : 0x%08X\n", txfsiz, i + 1,
++			   (unsigned long)addr, DWC_READ_REG32(addr));
++	}
++	addr = core_if->pcgcctl;
++	DWC_PRINTF("PCGCCTL	 @0x%08lX : 0x%08X\n", (unsigned long)addr,
++		   DWC_READ_REG32(addr));
++}
++
++/**
++ * Flush a Tx FIFO.
++ *
++ * @param core_if Programming view of DWC_otg controller.
++ * @param num Tx FIFO to flush.
++ */
++void dwc_otg_flush_tx_fifo(dwc_otg_core_if_t * core_if, const int num)
++{
++	dwc_otg_core_global_regs_t *global_regs = core_if->core_global_regs;
++	volatile grstctl_t greset = {.d32 = 0 };
++	int count = 0;
++
++	DWC_DEBUGPL((DBG_CIL | DBG_PCDV), "Flush Tx FIFO %d\n", num);
++
++	greset.b.txfflsh = 1;
++	greset.b.txfnum = num;
++	DWC_WRITE_REG32(&global_regs->grstctl, greset.d32);
++
++	do {
++		greset.d32 = DWC_READ_REG32(&global_regs->grstctl);
++		if (++count > 10000) {
++			DWC_WARN("%s() HANG! GRSTCTL=%0x GNPTXSTS=0x%08x\n",
++				 __func__, greset.d32,
++				 DWC_READ_REG32(&global_regs->gnptxsts));
++			break;
++		}
++		dwc_udelay(1);
++	} while (greset.b.txfflsh == 1);
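++	/* The polling loop above bounds the wait at roughly 10 ms, assuming
++	 * dwc_udelay(1) delays for about one microsecond per iteration. */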
++
++	/* Wait for 3 PHY Clocks */
++	dwc_udelay(1);
++}
++
++/**
++ * Flush Rx FIFO.
++ *
++ * @param core_if Programming view of DWC_otg controller.
++ */
++void dwc_otg_flush_rx_fifo(dwc_otg_core_if_t * core_if)
++{
++	dwc_otg_core_global_regs_t *global_regs = core_if->core_global_regs;
++	volatile grstctl_t greset = {.d32 = 0 };
++	int count = 0;
++
++	DWC_DEBUGPL((DBG_CIL | DBG_PCDV), "%s\n", __func__);
++	greset.b.rxfflsh = 1;
++	DWC_WRITE_REG32(&global_regs->grstctl, greset.d32);
++
++	do {
++		greset.d32 = DWC_READ_REG32(&global_regs->grstctl);
++		if (++count > 10000) {
++			DWC_WARN("%s() HANG! GRSTCTL=%0x\n", __func__,
++				 greset.d32);
++			break;
++		}
++		dwc_udelay(1);
++	} while (greset.b.rxfflsh == 1);
++
++	/* Wait for 3 PHY Clocks */
++	dwc_udelay(1);
++}
++
++/**
++ * Do a soft reset of the core.  Be careful with this because it
++ * resets all the internal state machines of the core.
++ */
++void dwc_otg_core_reset(dwc_otg_core_if_t * core_if)
++{
++	dwc_otg_core_global_regs_t *global_regs = core_if->core_global_regs;
++	volatile grstctl_t greset = {.d32 = 0 };
++	int count = 0;
++
++	DWC_DEBUGPL(DBG_CILV, "%s\n", __func__);
++	/* Wait for AHB master IDLE state. */
++	do {
++		dwc_udelay(10);
++		greset.d32 = DWC_READ_REG32(&global_regs->grstctl);
++		if (++count > 100000) {
++			DWC_WARN("%s() HANG! AHB Idle GRSTCTL=%0x\n", __func__,
++				 greset.d32);
++			return;
++		}
++	}
++	while (greset.b.ahbidle == 0);
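++	/* The wait above is bounded at roughly one second, assuming
++	 * dwc_udelay(10) delays for about ten microseconds per iteration. */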
++
++	/* Core Soft Reset */
++	count = 0;
++	greset.b.csftrst = 1;
++	DWC_WRITE_REG32(&global_regs->grstctl, greset.d32);
++	do {
++		greset.d32 = DWC_READ_REG32(&global_regs->grstctl);
++		if (++count > 10000) {
++			DWC_WARN("%s() HANG! Soft Reset GRSTCTL=%0x\n",
++				 __func__, greset.d32);
++			break;
++		}
++		dwc_udelay(1);
++	}
++	while (greset.b.csftrst == 1);
++
++	/* Wait for 3 PHY Clocks */
++	dwc_mdelay(100);
++}
++
++uint8_t dwc_otg_is_device_mode(dwc_otg_core_if_t * _core_if)
++{
++	return (dwc_otg_mode(_core_if) != DWC_HOST_MODE);
++}
++
++uint8_t dwc_otg_is_host_mode(dwc_otg_core_if_t * _core_if)
++{
++	return (dwc_otg_mode(_core_if) == DWC_HOST_MODE);
++}
++
++/**
++ * Register HCD callbacks. The callbacks are used to start and stop
++ * the HCD for interrupt processing.
++ *
++ * @param core_if Programming view of DWC_otg controller.
++ * @param cb the HCD callback structure.
++ * @param p pointer to be passed to callback function (usb_hcd*).
++ */
++void dwc_otg_cil_register_hcd_callbacks(dwc_otg_core_if_t * core_if,
++					dwc_otg_cil_callbacks_t * cb, void *p)
++{
++	core_if->hcd_cb = cb;
++	cb->p = p;
++}
++
++/**
++ * Register PCD callbacks. The callbacks are used to start and stop
++ * the PCD for interrupt processing.
++ *
++ * @param core_if Programming view of DWC_otg controller.
++ * @param cb the PCD callback structure.
++ * @param p pointer to be passed to callback function (pcd*).
++ */
++void dwc_otg_cil_register_pcd_callbacks(dwc_otg_core_if_t * core_if,
++					dwc_otg_cil_callbacks_t * cb, void *p)
++{
++	core_if->pcd_cb = cb;
++	cb->p = p;
++}
++
++#ifdef DWC_EN_ISOC
++
++/**
++ * This function writes ISOC data for one (micro)frame into the Tx FIFO.
++ *
++ * @param core_if Programming view of DWC_otg controller.
++ * @param ep The EP to start the transfer on.
++ *
++ */
++void write_isoc_frame_data(dwc_otg_core_if_t * core_if, dwc_ep_t * ep)
++{
++	dwc_otg_dev_in_ep_regs_t *ep_regs;
++	dtxfsts_data_t txstatus = {.d32 = 0 };
++	uint32_t len = 0;
++	uint32_t dwords;
++
++	ep->xfer_len = ep->data_per_frame;
++	ep->xfer_count = 0;
++
++	ep_regs = core_if->dev_if->in_ep_regs[ep->num];
++
++	len = ep->xfer_len - ep->xfer_count;
++
++	if (len > ep->maxpacket) {
++		len = ep->maxpacket;
++	}
++
++	dwords = (len + 3) / 4;
++
++	/* While there is space in the queue and space in the FIFO and
++	 * more data to transfer, write packets to the Tx FIFO */
++	txstatus.d32 =
++	    DWC_READ_REG32(&core_if->dev_if->in_ep_regs[ep->num]->dtxfsts);
++	DWC_DEBUGPL(DBG_PCDV, "b4 dtxfsts[%d]=0x%08x\n", ep->num, txstatus.d32);
++
++	while (txstatus.b.txfspcavail > dwords &&
++	       ep->xfer_count < ep->xfer_len && ep->xfer_len != 0) {
++		/* Write the FIFO */
++		dwc_otg_ep_write_packet(core_if, ep, 0);
++
++		len = ep->xfer_len - ep->xfer_count;
++		if (len > ep->maxpacket) {
++			len = ep->maxpacket;
++		}
++
++		dwords = (len + 3) / 4;
++		txstatus.d32 =
++		    DWC_READ_REG32(&core_if->dev_if->in_ep_regs[ep->num]->
++				   dtxfsts);
++		DWC_DEBUGPL(DBG_PCDV, "dtxfsts[%d]=0x%08x\n", ep->num,
++			    txstatus.d32);
++	}
++}
++
++/**
++ * This function starts an isochronous transfer for one (micro)frame.
++ *
++ * @param core_if Programming view of DWC_otg controller.
++ * @param ep The EP to start the transfer on.
++ *
++ */
++void dwc_otg_iso_ep_start_frm_transfer(dwc_otg_core_if_t * core_if,
++				       dwc_ep_t * ep)
++{
++	deptsiz_data_t deptsiz = {.d32 = 0 };
++	depctl_data_t depctl = {.d32 = 0 };
++	dsts_data_t dsts = {.d32 = 0 };
++	volatile uint32_t *addr;
++
++	if (ep->is_in) {
++		addr = &core_if->dev_if->in_ep_regs[ep->num]->diepctl;
++	} else {
++		addr = &core_if->dev_if->out_ep_regs[ep->num]->doepctl;
++	}
++
++	ep->xfer_len = ep->data_per_frame;
++	ep->xfer_count = 0;
++	ep->xfer_buff = ep->cur_pkt_addr;
++	ep->dma_addr = ep->cur_pkt_dma_addr;
++
++	if (ep->is_in) {
++		/* Program the transfer size and packet count as follows:
++		 *   xfersize = N * maxpacket + short_packet
++		 *   pktcnt   = N + (short_packet ? 1 : 0)
++		 */
++		deptsiz.b.xfersize = ep->xfer_len;
++		deptsiz.b.pktcnt =
++		    (ep->xfer_len - 1 + ep->maxpacket) / ep->maxpacket;
++		deptsiz.b.mc = deptsiz.b.pktcnt;
++		DWC_WRITE_REG32(&core_if->dev_if->in_ep_regs[ep->num]->dieptsiz,
++				deptsiz.d32);
++
++		/* Write the DMA register */
++		if (core_if->dma_enable) {
++			DWC_WRITE_REG32(&
++					(core_if->dev_if->in_ep_regs[ep->num]->
++					 diepdma), (uint32_t) ep->dma_addr);
++		}
++	} else {
++		deptsiz.b.pktcnt =
++		    (ep->xfer_len + (ep->maxpacket - 1)) / ep->maxpacket;
++		deptsiz.b.xfersize = deptsiz.b.pktcnt * ep->maxpacket;
++
++		DWC_WRITE_REG32(&core_if->dev_if->
++				out_ep_regs[ep->num]->doeptsiz, deptsiz.d32);
++
++		if (core_if->dma_enable) {
++			DWC_WRITE_REG32(&
++					(core_if->dev_if->
++					 out_ep_regs[ep->num]->doepdma),
++					(uint32_t) ep->dma_addr);
++		}
++	}
++
++	/** Enable endpoint, clear nak  */
++
++	depctl.d32 = 0;
++	if (ep->bInterval == 1) {
++		dsts.d32 =
++		    DWC_READ_REG32(&core_if->dev_if->dev_global_regs->dsts);
++		ep->next_frame = dsts.b.soffn + ep->bInterval;
++
++		if (ep->next_frame & 0x1) {
++			depctl.b.setd1pid = 1;
++		} else {
++			depctl.b.setd0pid = 1;
++		}
++	} else {
++		ep->next_frame += ep->bInterval;
++
++		if (ep->next_frame & 0x1) {
++			depctl.b.setd1pid = 1;
++		} else {
++			depctl.b.setd0pid = 1;
++		}
++	}
++	depctl.b.epena = 1;
++	depctl.b.cnak = 1;
++
++	DWC_MODIFY_REG32(addr, 0, depctl.d32);
++	depctl.d32 = DWC_READ_REG32(addr);
++
++	if (ep->is_in && core_if->dma_enable == 0) {
++		write_isoc_frame_data(core_if, ep);
++	}
++
++}
++#endif /* DWC_EN_ISOC */
++
++static void dwc_otg_set_uninitialized(int32_t * p, int size)
++{
++	int i;
++	for (i = 0; i < size; i++) {
++		p[i] = -1;
++	}
++}
++
++static int dwc_otg_param_initialized(int32_t val)
++{
++	return val != -1;
++}
++
++static int dwc_otg_setup_params(dwc_otg_core_if_t * core_if)
++{
++	int i;
++	core_if->core_params = DWC_ALLOC(sizeof(*core_if->core_params));
++	if (!core_if->core_params) {
++		return -DWC_E_NO_MEMORY;
++	}
++	dwc_otg_set_uninitialized((int32_t *) core_if->core_params,
++				  sizeof(*core_if->core_params) /
++				  sizeof(int32_t));
++	DWC_PRINTF("Setting default values for core params\n");
++	dwc_otg_set_param_otg_cap(core_if, dwc_param_otg_cap_default);
++	dwc_otg_set_param_dma_enable(core_if, dwc_param_dma_enable_default);
++	dwc_otg_set_param_dma_desc_enable(core_if,
++					  dwc_param_dma_desc_enable_default);
++	dwc_otg_set_param_opt(core_if, dwc_param_opt_default);
++	dwc_otg_set_param_dma_burst_size(core_if,
++					 dwc_param_dma_burst_size_default);
++	dwc_otg_set_param_host_support_fs_ls_low_power(core_if,
++						       dwc_param_host_support_fs_ls_low_power_default);
++	dwc_otg_set_param_enable_dynamic_fifo(core_if,
++					      dwc_param_enable_dynamic_fifo_default);
++	dwc_otg_set_param_data_fifo_size(core_if,
++					 dwc_param_data_fifo_size_default);
++	dwc_otg_set_param_dev_rx_fifo_size(core_if,
++					   dwc_param_dev_rx_fifo_size_default);
++	dwc_otg_set_param_dev_nperio_tx_fifo_size(core_if,
++						  dwc_param_dev_nperio_tx_fifo_size_default);
++	dwc_otg_set_param_host_rx_fifo_size(core_if,
++					    dwc_param_host_rx_fifo_size_default);
++	dwc_otg_set_param_host_nperio_tx_fifo_size(core_if,
++						   dwc_param_host_nperio_tx_fifo_size_default);
++	dwc_otg_set_param_host_perio_tx_fifo_size(core_if,
++						  dwc_param_host_perio_tx_fifo_size_default);
++	dwc_otg_set_param_max_transfer_size(core_if,
++					    dwc_param_max_transfer_size_default);
++	dwc_otg_set_param_max_packet_count(core_if,
++					   dwc_param_max_packet_count_default);
++	dwc_otg_set_param_host_channels(core_if,
++					dwc_param_host_channels_default);
++	dwc_otg_set_param_dev_endpoints(core_if,
++					dwc_param_dev_endpoints_default);
++	dwc_otg_set_param_phy_type(core_if, dwc_param_phy_type_default);
++	dwc_otg_set_param_speed(core_if, dwc_param_speed_default);
++	dwc_otg_set_param_host_ls_low_power_phy_clk(core_if,
++						    dwc_param_host_ls_low_power_phy_clk_default);
++	dwc_otg_set_param_phy_ulpi_ddr(core_if, dwc_param_phy_ulpi_ddr_default);
++	dwc_otg_set_param_phy_ulpi_ext_vbus(core_if,
++					    dwc_param_phy_ulpi_ext_vbus_default);
++	dwc_otg_set_param_phy_utmi_width(core_if,
++					 dwc_param_phy_utmi_width_default);
++	dwc_otg_set_param_ts_dline(core_if, dwc_param_ts_dline_default);
++	dwc_otg_set_param_i2c_enable(core_if, dwc_param_i2c_enable_default);
++	dwc_otg_set_param_ulpi_fs_ls(core_if, dwc_param_ulpi_fs_ls_default);
++	dwc_otg_set_param_en_multiple_tx_fifo(core_if,
++					      dwc_param_en_multiple_tx_fifo_default);
++	for (i = 0; i < 15; i++) {
++		dwc_otg_set_param_dev_perio_tx_fifo_size(core_if,
++							 dwc_param_dev_perio_tx_fifo_size_default,
++							 i);
++	}
++
++	for (i = 0; i < 15; i++) {
++		dwc_otg_set_param_dev_tx_fifo_size(core_if,
++						   dwc_param_dev_tx_fifo_size_default,
++						   i);
++	}
++	dwc_otg_set_param_thr_ctl(core_if, dwc_param_thr_ctl_default);
++	dwc_otg_set_param_mpi_enable(core_if, dwc_param_mpi_enable_default);
++	dwc_otg_set_param_pti_enable(core_if, dwc_param_pti_enable_default);
++	dwc_otg_set_param_lpm_enable(core_if, dwc_param_lpm_enable_default);
++	dwc_otg_set_param_ic_usb_cap(core_if, dwc_param_ic_usb_cap_default);
++	dwc_otg_set_param_tx_thr_length(core_if,
++					dwc_param_tx_thr_length_default);
++	dwc_otg_set_param_rx_thr_length(core_if,
++					dwc_param_rx_thr_length_default);
++	dwc_otg_set_param_ahb_thr_ratio(core_if,
++					dwc_param_ahb_thr_ratio_default);
++	dwc_otg_set_param_power_down(core_if, dwc_param_power_down_default);
++	dwc_otg_set_param_reload_ctl(core_if, dwc_param_reload_ctl_default);
++	dwc_otg_set_param_dev_out_nak(core_if, dwc_param_dev_out_nak_default);
++	dwc_otg_set_param_cont_on_bna(core_if, dwc_param_cont_on_bna_default);
++	dwc_otg_set_param_ahb_single(core_if, dwc_param_ahb_single_default);
++	dwc_otg_set_param_otg_ver(core_if, dwc_param_otg_ver_default);
++	dwc_otg_set_param_adp_enable(core_if, dwc_param_adp_enable_default);
++	DWC_PRINTF("Finished setting default values for core params\n");
++
++	return 0;
++}
++
++uint8_t dwc_otg_is_dma_enable(dwc_otg_core_if_t * core_if)
++{
++	return core_if->dma_enable;
++}
++
++/* Checks if the parameter is outside of its valid range of values */
++#define DWC_OTG_PARAM_TEST(_param_, _low_, _high_) \
++		(((_param_) < (_low_)) || \
++		((_param_) > (_high_)))
++
++/* Parameter access functions */
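++/*
++ * Each setter below follows the same pattern: the requested value is first
++ * checked against its static range with DWC_OTG_PARAM_TEST and then, where
++ * applicable, against the capabilities reported in the GHWCFG2-4 registers.
++ * If the hardware cannot support the value, the setter falls back to a
++ * hardware-derived default, reports an error only when the parameter was
++ * explicitly initialized, and returns -DWC_E_INVALID so callers can detect
++ * the override. Parameters that only accept a discrete set of values (e.g.
++ * phy_utmi_width, dma_burst_size) chain several single-value tests instead
++ * of one range check.
++ */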
++int dwc_otg_set_param_otg_cap(dwc_otg_core_if_t * core_if, int32_t val)
++{
++	int valid;
++	int retval = 0;
++	if (DWC_OTG_PARAM_TEST(val, 0, 2)) {
++		DWC_WARN("Wrong value for otg_cap parameter\n");
++		DWC_WARN("otg_cap parameter must be 0, 1 or 2\n");
++		retval = -DWC_E_INVALID;
++		goto out;
++	}
++
++	valid = 1;
++	switch (val) {
++	case DWC_OTG_CAP_PARAM_HNP_SRP_CAPABLE:
++		if (core_if->hwcfg2.b.op_mode !=
++		    DWC_HWCFG2_OP_MODE_HNP_SRP_CAPABLE_OTG)
++			valid = 0;
++		break;
++	case DWC_OTG_CAP_PARAM_SRP_ONLY_CAPABLE:
++		if ((core_if->hwcfg2.b.op_mode !=
++		     DWC_HWCFG2_OP_MODE_HNP_SRP_CAPABLE_OTG)
++		    && (core_if->hwcfg2.b.op_mode !=
++			DWC_HWCFG2_OP_MODE_SRP_ONLY_CAPABLE_OTG)
++		    && (core_if->hwcfg2.b.op_mode !=
++			DWC_HWCFG2_OP_MODE_SRP_CAPABLE_DEVICE)
++		    && (core_if->hwcfg2.b.op_mode !=
++			DWC_HWCFG2_OP_MODE_SRP_CAPABLE_HOST)) {
++			valid = 0;
++		}
++		break;
++	case DWC_OTG_CAP_PARAM_NO_HNP_SRP_CAPABLE:
++		/* always valid */
++		break;
++	}
++	if (!valid) {
++		if (dwc_otg_param_initialized(core_if->core_params->otg_cap)) {
++			DWC_ERROR
++			    ("%d invalid for otg_cap parameter. Check HW configuration.\n",
++			     val);
++		}
++		val =
++		    (((core_if->hwcfg2.b.op_mode ==
++		       DWC_HWCFG2_OP_MODE_HNP_SRP_CAPABLE_OTG)
++		      || (core_if->hwcfg2.b.op_mode ==
++			  DWC_HWCFG2_OP_MODE_SRP_ONLY_CAPABLE_OTG)
++		      || (core_if->hwcfg2.b.op_mode ==
++			  DWC_HWCFG2_OP_MODE_SRP_CAPABLE_DEVICE)
++		      || (core_if->hwcfg2.b.op_mode ==
++			  DWC_HWCFG2_OP_MODE_SRP_CAPABLE_HOST)) ?
++		     DWC_OTG_CAP_PARAM_SRP_ONLY_CAPABLE :
++		     DWC_OTG_CAP_PARAM_NO_HNP_SRP_CAPABLE);
++		retval = -DWC_E_INVALID;
++	}
++
++	core_if->core_params->otg_cap = val;
++out:
++	return retval;
++}
++
++int32_t dwc_otg_get_param_otg_cap(dwc_otg_core_if_t * core_if)
++{
++	return core_if->core_params->otg_cap;
++}
++
++int dwc_otg_set_param_opt(dwc_otg_core_if_t * core_if, int32_t val)
++{
++	if (DWC_OTG_PARAM_TEST(val, 0, 1)) {
++		DWC_WARN("Wrong value for opt parameter\n");
++		return -DWC_E_INVALID;
++	}
++	core_if->core_params->opt = val;
++	return 0;
++}
++
++int32_t dwc_otg_get_param_opt(dwc_otg_core_if_t * core_if)
++{
++	return core_if->core_params->opt;
++}
++
++int dwc_otg_set_param_dma_enable(dwc_otg_core_if_t * core_if, int32_t val)
++{
++	int retval = 0;
++	if (DWC_OTG_PARAM_TEST(val, 0, 1)) {
++		DWC_WARN("Wrong value for dma enable\n");
++		return -DWC_E_INVALID;
++	}
++
++	if ((val == 1) && (core_if->hwcfg2.b.architecture == 0)) {
++		if (dwc_otg_param_initialized(core_if->core_params->dma_enable)) {
++			DWC_ERROR
++			    ("%d invalid for dma_enable parameter. Check HW configuration.\n",
++			     val);
++		}
++		val = 0;
++		retval = -DWC_E_INVALID;
++	}
++
++	core_if->core_params->dma_enable = val;
++	if (val == 0) {
++		dwc_otg_set_param_dma_desc_enable(core_if, 0);
++	}
++	return retval;
++}
++
++int32_t dwc_otg_get_param_dma_enable(dwc_otg_core_if_t * core_if)
++{
++	return core_if->core_params->dma_enable;
++}
++
++int dwc_otg_set_param_dma_desc_enable(dwc_otg_core_if_t * core_if, int32_t val)
++{
++	int retval = 0;
++	if (DWC_OTG_PARAM_TEST(val, 0, 1)) {
++		DWC_WARN("Wrong value for dma_desc_enable\n");
++		DWC_WARN("dma_desc_enable must be 0 or 1\n");
++		return -DWC_E_INVALID;
++	}
++
++	if ((val == 1)
++	    && ((dwc_otg_get_param_dma_enable(core_if) == 0)
++		|| (core_if->hwcfg4.b.desc_dma == 0))) {
++		if (dwc_otg_param_initialized
++		    (core_if->core_params->dma_desc_enable)) {
++			DWC_ERROR
++			    ("%d invalid for dma_desc_enable parameter. Check HW configuration.\n",
++			     val);
++		}
++		val = 0;
++		retval = -DWC_E_INVALID;
++	}
++	core_if->core_params->dma_desc_enable = val;
++	return retval;
++}
++
++int32_t dwc_otg_get_param_dma_desc_enable(dwc_otg_core_if_t * core_if)
++{
++	return core_if->core_params->dma_desc_enable;
++}
++
++int dwc_otg_set_param_host_support_fs_ls_low_power(dwc_otg_core_if_t * core_if,
++						   int32_t val)
++{
++	if (DWC_OTG_PARAM_TEST(val, 0, 1)) {
++		DWC_WARN("Wrong value for host_support_fs_ls_low_power\n");
++		DWC_WARN("host_support_fs_ls_low_power must be 0 or 1\n");
++		return -DWC_E_INVALID;
++	}
++	core_if->core_params->host_support_fs_ls_low_power = val;
++	return 0;
++}
++
++int32_t dwc_otg_get_param_host_support_fs_ls_low_power(dwc_otg_core_if_t *
++						       core_if)
++{
++	return core_if->core_params->host_support_fs_ls_low_power;
++}
++
++int dwc_otg_set_param_enable_dynamic_fifo(dwc_otg_core_if_t * core_if,
++					  int32_t val)
++{
++	int retval = 0;
++	if (DWC_OTG_PARAM_TEST(val, 0, 1)) {
++		DWC_WARN("Wrong value for enable_dynamic_fifo\n");
++		DWC_WARN("enable_dynamic_fifo must be 0 or 1\n");
++		return -DWC_E_INVALID;
++	}
++
++	if ((val == 1) && (core_if->hwcfg2.b.dynamic_fifo == 0)) {
++		if (dwc_otg_param_initialized
++		    (core_if->core_params->enable_dynamic_fifo)) {
++			DWC_ERROR
++			    ("%d invalid for enable_dynamic_fifo parameter. Check HW configuration.\n",
++			     val);
++		}
++		val = 0;
++		retval = -DWC_E_INVALID;
++	}
++	core_if->core_params->enable_dynamic_fifo = val;
++	return retval;
++}
++
++int32_t dwc_otg_get_param_enable_dynamic_fifo(dwc_otg_core_if_t * core_if)
++{
++	return core_if->core_params->enable_dynamic_fifo;
++}
++
++int dwc_otg_set_param_data_fifo_size(dwc_otg_core_if_t * core_if, int32_t val)
++{
++	int retval = 0;
++	if (DWC_OTG_PARAM_TEST(val, 32, 32768)) {
++		DWC_WARN("Wrong value for data_fifo_size\n");
++		DWC_WARN("data_fifo_size must be 32-32768\n");
++		return -DWC_E_INVALID;
++	}
++
++	if (val > core_if->hwcfg3.b.dfifo_depth) {
++		if (dwc_otg_param_initialized
++		    (core_if->core_params->data_fifo_size)) {
++			DWC_ERROR
++			    ("%d invalid for data_fifo_size parameter. Check HW configuration.\n",
++			     val);
++		}
++		val = core_if->hwcfg3.b.dfifo_depth;
++		retval = -DWC_E_INVALID;
++	}
++
++	core_if->core_params->data_fifo_size = val;
++	return retval;
++}
++
++int32_t dwc_otg_get_param_data_fifo_size(dwc_otg_core_if_t * core_if)
++{
++	return core_if->core_params->data_fifo_size;
++}
++
++int dwc_otg_set_param_dev_rx_fifo_size(dwc_otg_core_if_t * core_if, int32_t val)
++{
++	int retval = 0;
++	if (DWC_OTG_PARAM_TEST(val, 16, 32768)) {
++		DWC_WARN("Wrong value for dev_rx_fifo_size\n");
++		DWC_WARN("dev_rx_fifo_size must be 16-32768\n");
++		return -DWC_E_INVALID;
++	}
++
++	if (val > DWC_READ_REG32(&core_if->core_global_regs->grxfsiz)) {
++		if (dwc_otg_param_initialized(core_if->core_params->dev_rx_fifo_size)) {
++			DWC_WARN("%d invalid for dev_rx_fifo_size parameter\n", val);
++		}
++		val = DWC_READ_REG32(&core_if->core_global_regs->grxfsiz);
++		retval = -DWC_E_INVALID;
++	}
++
++	core_if->core_params->dev_rx_fifo_size = val;
++	return retval;
++}
++
++int32_t dwc_otg_get_param_dev_rx_fifo_size(dwc_otg_core_if_t * core_if)
++{
++	return core_if->core_params->dev_rx_fifo_size;
++}
++
++int dwc_otg_set_param_dev_nperio_tx_fifo_size(dwc_otg_core_if_t * core_if,
++					      int32_t val)
++{
++	int retval = 0;
++
++	if (DWC_OTG_PARAM_TEST(val, 16, 32768)) {
++		DWC_WARN("Wrong value for dev_nperio_tx_fifo\n");
++		DWC_WARN("dev_nperio_tx_fifo must be 16-32768\n");
++		return -DWC_E_INVALID;
++	}
++
++	if (val > (DWC_READ_REG32(&core_if->core_global_regs->gnptxfsiz) >> 16)) {
++		if (dwc_otg_param_initialized
++		    (core_if->core_params->dev_nperio_tx_fifo_size)) {
++			DWC_ERROR
++			    ("%d invalid for dev_nperio_tx_fifo_size. Check HW configuration.\n",
++			     val);
++		}
++		val =
++		    (DWC_READ_REG32(&core_if->core_global_regs->gnptxfsiz) >>
++		     16);
++		retval = -DWC_E_INVALID;
++	}
++
++	core_if->core_params->dev_nperio_tx_fifo_size = val;
++	return retval;
++}
++
++int32_t dwc_otg_get_param_dev_nperio_tx_fifo_size(dwc_otg_core_if_t * core_if)
++{
++	return core_if->core_params->dev_nperio_tx_fifo_size;
++}
++
++int dwc_otg_set_param_host_rx_fifo_size(dwc_otg_core_if_t * core_if,
++					int32_t val)
++{
++	int retval = 0;
++
++	if (DWC_OTG_PARAM_TEST(val, 16, 32768)) {
++		DWC_WARN("Wrong value for host_rx_fifo_size\n");
++		DWC_WARN("host_rx_fifo_size must be 16-32768\n");
++		return -DWC_E_INVALID;
++	}
++
++	if (val > DWC_READ_REG32(&core_if->core_global_regs->grxfsiz)) {
++		if (dwc_otg_param_initialized
++		    (core_if->core_params->host_rx_fifo_size)) {
++			DWC_ERROR
++			    ("%d invalid for host_rx_fifo_size. Check HW configuration.\n",
++			     val);
++		}
++		val = DWC_READ_REG32(&core_if->core_global_regs->grxfsiz);
++		retval = -DWC_E_INVALID;
++	}
++
++	core_if->core_params->host_rx_fifo_size = val;
++	return retval;
++}
++
++int32_t dwc_otg_get_param_host_rx_fifo_size(dwc_otg_core_if_t * core_if)
++{
++	return core_if->core_params->host_rx_fifo_size;
++}
++
++int dwc_otg_set_param_host_nperio_tx_fifo_size(dwc_otg_core_if_t * core_if,
++					       int32_t val)
++{
++	int retval = 0;
++
++	if (DWC_OTG_PARAM_TEST(val, 16, 32768)) {
++		DWC_WARN("Wrong value for host_nperio_tx_fifo_size\n");
++		DWC_WARN("host_nperio_tx_fifo_size must be 16-32768\n");
++		return -DWC_E_INVALID;
++	}
++
++	if (val > (DWC_READ_REG32(&core_if->core_global_regs->gnptxfsiz) >> 16)) {
++		if (dwc_otg_param_initialized
++		    (core_if->core_params->host_nperio_tx_fifo_size)) {
++			DWC_ERROR
++			    ("%d invalid for host_nperio_tx_fifo_size. Check HW configuration.\n",
++			     val);
++		}
++		val =
++		    (DWC_READ_REG32(&core_if->core_global_regs->gnptxfsiz) >>
++		     16);
++		retval = -DWC_E_INVALID;
++	}
++
++	core_if->core_params->host_nperio_tx_fifo_size = val;
++	return retval;
++}
++
++int32_t dwc_otg_get_param_host_nperio_tx_fifo_size(dwc_otg_core_if_t * core_if)
++{
++	return core_if->core_params->host_nperio_tx_fifo_size;
++}
++
++int dwc_otg_set_param_host_perio_tx_fifo_size(dwc_otg_core_if_t * core_if,
++					      int32_t val)
++{
++	int retval = 0;
++	if (DWC_OTG_PARAM_TEST(val, 16, 32768)) {
++		DWC_WARN("Wrong value for host_perio_tx_fifo_size\n");
++		DWC_WARN("host_perio_tx_fifo_size must be 16-32768\n");
++		return -DWC_E_INVALID;
++	}
++
++	if (val > ((core_if->hptxfsiz.d32) >> 16)) {
++		if (dwc_otg_param_initialized
++		    (core_if->core_params->host_perio_tx_fifo_size)) {
++			DWC_ERROR
++			    ("%d invalid for host_perio_tx_fifo_size. Check HW configuration.\n",
++			     val);
++		}
++		val = (core_if->hptxfsiz.d32) >> 16;
++		retval = -DWC_E_INVALID;
++	}
++
++	core_if->core_params->host_perio_tx_fifo_size = val;
++	return retval;
++}
++
++int32_t dwc_otg_get_param_host_perio_tx_fifo_size(dwc_otg_core_if_t * core_if)
++{
++	return core_if->core_params->host_perio_tx_fifo_size;
++}
++
++int dwc_otg_set_param_max_transfer_size(dwc_otg_core_if_t * core_if,
++					int32_t val)
++{
++	int retval = 0;
++
++	if (DWC_OTG_PARAM_TEST(val, 2047, 524288)) {
++		DWC_WARN("Wrong value for max_transfer_size\n");
++		DWC_WARN("max_transfer_size must be 2047-524288\n");
++		return -DWC_E_INVALID;
++	}
++
++	if (val >= (1 << (core_if->hwcfg3.b.xfer_size_cntr_width + 11))) {
++		if (dwc_otg_param_initialized
++		    (core_if->core_params->max_transfer_size)) {
++			DWC_ERROR
++			    ("%d invalid for max_transfer_size. Check HW configuration.\n",
++			     val);
++		}
++		val =
++		    ((1 << (core_if->hwcfg3.b.xfer_size_cntr_width + 11)) -
++		     1);
++		retval = -DWC_E_INVALID;
++	}
++
++	core_if->core_params->max_transfer_size = val;
++	return retval;
++}
++
++int32_t dwc_otg_get_param_max_transfer_size(dwc_otg_core_if_t * core_if)
++{
++	return core_if->core_params->max_transfer_size;
++}
++
++int dwc_otg_set_param_max_packet_count(dwc_otg_core_if_t * core_if, int32_t val)
++{
++	int retval = 0;
++
++	if (DWC_OTG_PARAM_TEST(val, 15, 511)) {
++		DWC_WARN("Wrong value for max_packet_count\n");
++		DWC_WARN("max_packet_count must be 15-511\n");
++		return -DWC_E_INVALID;
++	}
++
++	if (val > (1 << (core_if->hwcfg3.b.packet_size_cntr_width + 4))) {
++		if (dwc_otg_param_initialized
++		    (core_if->core_params->max_packet_count)) {
++			DWC_ERROR
++			    ("%d invalid for max_packet_count. Check HW configuration.\n",
++			     val);
++		}
++		val =
++		    ((1 << (core_if->hwcfg3.b.packet_size_cntr_width + 4)) - 1);
++		retval = -DWC_E_INVALID;
++	}
++
++	core_if->core_params->max_packet_count = val;
++	return retval;
++}
++
++int32_t dwc_otg_get_param_max_packet_count(dwc_otg_core_if_t * core_if)
++{
++	return core_if->core_params->max_packet_count;
++}
++
++int dwc_otg_set_param_host_channels(dwc_otg_core_if_t * core_if, int32_t val)
++{
++	int retval = 0;
++
++	if (DWC_OTG_PARAM_TEST(val, 1, 16)) {
++		DWC_WARN("Wrong value for host_channels\n");
++		DWC_WARN("host_channels must be 1-16\n");
++		return -DWC_E_INVALID;
++	}
++
++	if (val > (core_if->hwcfg2.b.num_host_chan + 1)) {
++		if (dwc_otg_param_initialized
++		    (core_if->core_params->host_channels)) {
++			DWC_ERROR
++			    ("%d invalid for host_channels. Check HW configurations.\n",
++			     val);
++		}
++		val = (core_if->hwcfg2.b.num_host_chan + 1);
++		retval = -DWC_E_INVALID;
++	}
++
++	core_if->core_params->host_channels = val;
++	return retval;
++}
++
++int32_t dwc_otg_get_param_host_channels(dwc_otg_core_if_t * core_if)
++{
++	return core_if->core_params->host_channels;
++}
++
++int dwc_otg_set_param_dev_endpoints(dwc_otg_core_if_t * core_if, int32_t val)
++{
++	int retval = 0;
++
++	if (DWC_OTG_PARAM_TEST(val, 1, 15)) {
++		DWC_WARN("Wrong value for dev_endpoints\n");
++		DWC_WARN("dev_endpoints must be 1-15\n");
++		return -DWC_E_INVALID;
++	}
++
++	if (val > (core_if->hwcfg2.b.num_dev_ep)) {
++		if (dwc_otg_param_initialized
++		    (core_if->core_params->dev_endpoints)) {
++			DWC_ERROR
++			    ("%d invalid for dev_endpoints. Check HW configurations.\n",
++			     val);
++		}
++		val = core_if->hwcfg2.b.num_dev_ep;
++		retval = -DWC_E_INVALID;
++	}
++
++	core_if->core_params->dev_endpoints = val;
++	return retval;
++}
++
++int32_t dwc_otg_get_param_dev_endpoints(dwc_otg_core_if_t * core_if)
++{
++	return core_if->core_params->dev_endpoints;
++}
++
++int dwc_otg_set_param_phy_type(dwc_otg_core_if_t * core_if, int32_t val)
++{
++	int retval = 0;
++	int valid = 0;
++
++	if (DWC_OTG_PARAM_TEST(val, 0, 2)) {
++		DWC_WARN("Wrong value for phy_type\n");
++		DWC_WARN("phy_type must be 0, 1 or 2\n");
++		return -DWC_E_INVALID;
++	}
++#ifndef NO_FS_PHY_HW_CHECKS
++	if ((val == DWC_PHY_TYPE_PARAM_UTMI) &&
++	    ((core_if->hwcfg2.b.hs_phy_type == 1) ||
++	     (core_if->hwcfg2.b.hs_phy_type == 3))) {
++		valid = 1;
++	} else if ((val == DWC_PHY_TYPE_PARAM_ULPI) &&
++		   ((core_if->hwcfg2.b.hs_phy_type == 2) ||
++		    (core_if->hwcfg2.b.hs_phy_type == 3))) {
++		valid = 1;
++	} else if ((val == DWC_PHY_TYPE_PARAM_FS) &&
++		   (core_if->hwcfg2.b.fs_phy_type == 1)) {
++		valid = 1;
++	}
++	if (!valid) {
++		if (dwc_otg_param_initialized(core_if->core_params->phy_type)) {
++			DWC_ERROR
++			    ("%d invalid for phy_type. Check HW configurations.\n",
++			     val);
++		}
++		if (core_if->hwcfg2.b.hs_phy_type) {
++			if ((core_if->hwcfg2.b.hs_phy_type == 3) ||
++			    (core_if->hwcfg2.b.hs_phy_type == 1)) {
++				val = DWC_PHY_TYPE_PARAM_UTMI;
++			} else {
++				val = DWC_PHY_TYPE_PARAM_ULPI;
++			}
++		}
++		retval = -DWC_E_INVALID;
++	}
++#endif
++	core_if->core_params->phy_type = val;
++	return retval;
++}
++
++int32_t dwc_otg_get_param_phy_type(dwc_otg_core_if_t * core_if)
++{
++	return core_if->core_params->phy_type;
++}
++
++int dwc_otg_set_param_speed(dwc_otg_core_if_t * core_if, int32_t val)
++{
++	int retval = 0;
++	if (DWC_OTG_PARAM_TEST(val, 0, 1)) {
++		DWC_WARN("Wrong value for speed parameter\n");
++		DWC_WARN("speed parameter must be 0 or 1\n");
++		return -DWC_E_INVALID;
++	}
++	if ((val == 0)
++	    && dwc_otg_get_param_phy_type(core_if) == DWC_PHY_TYPE_PARAM_FS) {
++		if (dwc_otg_param_initialized(core_if->core_params->speed)) {
++			DWC_ERROR
++			    ("%d invalid for speed parameter. Check HW configuration.\n",
++			     val);
++		}
++		val =
++		    (dwc_otg_get_param_phy_type(core_if) ==
++		     DWC_PHY_TYPE_PARAM_FS ? 1 : 0);
++		retval = -DWC_E_INVALID;
++	}
++	core_if->core_params->speed = val;
++	return retval;
++}
++
++int32_t dwc_otg_get_param_speed(dwc_otg_core_if_t * core_if)
++{
++	return core_if->core_params->speed;
++}
++
++int dwc_otg_set_param_host_ls_low_power_phy_clk(dwc_otg_core_if_t * core_if,
++						int32_t val)
++{
++	int retval = 0;
++
++	if (DWC_OTG_PARAM_TEST(val, 0, 1)) {
++		DWC_WARN
++		    ("Wrong value for host_ls_low_power_phy_clk parameter\n");
++		DWC_WARN("host_ls_low_power_phy_clk must be 0 or 1\n");
++		return -DWC_E_INVALID;
++	}
++
++	if ((val == DWC_HOST_LS_LOW_POWER_PHY_CLK_PARAM_48MHZ)
++	    && (dwc_otg_get_param_phy_type(core_if) == DWC_PHY_TYPE_PARAM_FS)) {
++		if (dwc_otg_param_initialized
++		    (core_if->core_params->host_ls_low_power_phy_clk)) {
++			DWC_ERROR
++			    ("%d invalid for host_ls_low_power_phy_clk. Check HW configuration.\n",
++			     val);
++		}
++		val =
++		    (dwc_otg_get_param_phy_type(core_if) ==
++		     DWC_PHY_TYPE_PARAM_FS) ?
++		    DWC_HOST_LS_LOW_POWER_PHY_CLK_PARAM_6MHZ :
++		    DWC_HOST_LS_LOW_POWER_PHY_CLK_PARAM_48MHZ;
++		retval = -DWC_E_INVALID;
++	}
++
++	core_if->core_params->host_ls_low_power_phy_clk = val;
++	return retval;
++}
++
++int32_t dwc_otg_get_param_host_ls_low_power_phy_clk(dwc_otg_core_if_t * core_if)
++{
++	return core_if->core_params->host_ls_low_power_phy_clk;
++}
++
++int dwc_otg_set_param_phy_ulpi_ddr(dwc_otg_core_if_t * core_if, int32_t val)
++{
++	if (DWC_OTG_PARAM_TEST(val, 0, 1)) {
++		DWC_WARN("Wrong value for phy_ulpi_ddr\n");
++		DWC_WARN("phy_ulpi_ddr must be 0 or 1\n");
++		return -DWC_E_INVALID;
++	}
++
++	core_if->core_params->phy_ulpi_ddr = val;
++	return 0;
++}
++
++int32_t dwc_otg_get_param_phy_ulpi_ddr(dwc_otg_core_if_t * core_if)
++{
++	return core_if->core_params->phy_ulpi_ddr;
++}
++
++int dwc_otg_set_param_phy_ulpi_ext_vbus(dwc_otg_core_if_t * core_if,
++					int32_t val)
++{
++	if (DWC_OTG_PARAM_TEST(val, 0, 1)) {
++		DWC_WARN("Wrong value for phy_ulpi_ext_vbus\n");
++		DWC_WARN("phy_ulpi_ext_vbus must be 0 or 1\n");
++		return -DWC_E_INVALID;
++	}
++
++	core_if->core_params->phy_ulpi_ext_vbus = val;
++	return 0;
++}
++
++int32_t dwc_otg_get_param_phy_ulpi_ext_vbus(dwc_otg_core_if_t * core_if)
++{
++	return core_if->core_params->phy_ulpi_ext_vbus;
++}
++
++int dwc_otg_set_param_phy_utmi_width(dwc_otg_core_if_t * core_if, int32_t val)
++{
++	if (DWC_OTG_PARAM_TEST(val, 8, 8) && DWC_OTG_PARAM_TEST(val, 16, 16)) {
++		DWC_WARN("Wrong value for phy_utmi_width\n");
++		DWC_WARN("phy_utmi_width must be 8 or 16\n");
++		return -DWC_E_INVALID;
++	}
++
++	core_if->core_params->phy_utmi_width = val;
++	return 0;
++}
++
++int32_t dwc_otg_get_param_phy_utmi_width(dwc_otg_core_if_t * core_if)
++{
++	return core_if->core_params->phy_utmi_width;
++}
++
++int dwc_otg_set_param_ulpi_fs_ls(dwc_otg_core_if_t * core_if, int32_t val)
++{
++	if (DWC_OTG_PARAM_TEST(val, 0, 1)) {
++		DWC_WARN("Wrong value for ulpi_fs_ls\n");
++		DWC_WARN("ulpi_fs_ls must be 0 or 1\n");
++		return -DWC_E_INVALID;
++	}
++
++	core_if->core_params->ulpi_fs_ls = val;
++	return 0;
++}
++
++int32_t dwc_otg_get_param_ulpi_fs_ls(dwc_otg_core_if_t * core_if)
++{
++	return core_if->core_params->ulpi_fs_ls;
++}
++
++int dwc_otg_set_param_ts_dline(dwc_otg_core_if_t * core_if, int32_t val)
++{
++	if (DWC_OTG_PARAM_TEST(val, 0, 1)) {
++		DWC_WARN("Wrong value for ts_dline\n");
++		DWC_WARN("ts_dline must be 0 or 1\n");
++		return -DWC_E_INVALID;
++	}
++
++	core_if->core_params->ts_dline = val;
++	return 0;
++}
++
++int32_t dwc_otg_get_param_ts_dline(dwc_otg_core_if_t * core_if)
++{
++	return core_if->core_params->ts_dline;
++}
++
++int dwc_otg_set_param_i2c_enable(dwc_otg_core_if_t * core_if, int32_t val)
++{
++	int retval = 0;
++	if (DWC_OTG_PARAM_TEST(val, 0, 1)) {
++		DWC_WARN("Wrong value for i2c_enable\n");
++		DWC_WARN("i2c_enable must be 0 or 1\n");
++		return -DWC_E_INVALID;
++	}
++#ifndef NO_FS_PHY_HW_CHECK
++	if (val == 1 && core_if->hwcfg3.b.i2c == 0) {
++		if (dwc_otg_param_initialized(core_if->core_params->i2c_enable)) {
++			DWC_ERROR
++			    ("%d invalid for i2c_enable. Check HW configuration.\n",
++			     val);
++		}
++		val = 0;
++		retval = -DWC_E_INVALID;
++	}
++#endif
++
++	core_if->core_params->i2c_enable = val;
++	return retval;
++}
++
++int32_t dwc_otg_get_param_i2c_enable(dwc_otg_core_if_t * core_if)
++{
++	return core_if->core_params->i2c_enable;
++}
++
++int dwc_otg_set_param_dev_perio_tx_fifo_size(dwc_otg_core_if_t * core_if,
++					     int32_t val, int fifo_num)
++{
++	int retval = 0;
++
++	if (DWC_OTG_PARAM_TEST(val, 4, 768)) {
++		DWC_WARN("Wrong value for dev_perio_tx_fifo_size\n");
++		DWC_WARN("dev_perio_tx_fifo_size must be 4-768\n");
++		return -DWC_E_INVALID;
++	}
++
++	if (val >
++	    (DWC_READ_REG32(&core_if->core_global_regs->dtxfsiz[fifo_num]))) {
++		if (dwc_otg_param_initialized
++		    (core_if->core_params->dev_perio_tx_fifo_size[fifo_num])) {
++			DWC_ERROR
++			    ("`%d' invalid for parameter `dev_perio_tx_fifo_size_%d'. Check HW configuration.\n",
++			     val, fifo_num);
++		}
++		val = (DWC_READ_REG32(&core_if->core_global_regs->dtxfsiz[fifo_num]));
++		retval = -DWC_E_INVALID;
++	}
++
++	core_if->core_params->dev_perio_tx_fifo_size[fifo_num] = val;
++	return retval;
++}
++
++int32_t dwc_otg_get_param_dev_perio_tx_fifo_size(dwc_otg_core_if_t * core_if,
++						 int fifo_num)
++{
++	return core_if->core_params->dev_perio_tx_fifo_size[fifo_num];
++}
++
++int dwc_otg_set_param_en_multiple_tx_fifo(dwc_otg_core_if_t * core_if,
++					  int32_t val)
++{
++	int retval = 0;
++	if (DWC_OTG_PARAM_TEST(val, 0, 1)) {
++		DWC_WARN("Wrong value for en_multiple_tx_fifo\n");
++		DWC_WARN("en_multiple_tx_fifo must be 0 or 1\n");
++		return -DWC_E_INVALID;
++	}
++
++	if (val == 1 && core_if->hwcfg4.b.ded_fifo_en == 0) {
++		if (dwc_otg_param_initialized
++		    (core_if->core_params->en_multiple_tx_fifo)) {
++			DWC_ERROR
++			    ("%d invalid for parameter en_multiple_tx_fifo. Check HW configuration.\n",
++			     val);
++		}
++		val = 0;
++		retval = -DWC_E_INVALID;
++	}
++
++	core_if->core_params->en_multiple_tx_fifo = val;
++	return retval;
++}
++
++int32_t dwc_otg_get_param_en_multiple_tx_fifo(dwc_otg_core_if_t * core_if)
++{
++	return core_if->core_params->en_multiple_tx_fifo;
++}
++
++int dwc_otg_set_param_dev_tx_fifo_size(dwc_otg_core_if_t * core_if, int32_t val,
++				       int fifo_num)
++{
++	int retval = 0;
++
++	if (DWC_OTG_PARAM_TEST(val, 4, 768)) {
++		DWC_WARN("Wrong value for dev_tx_fifo_size\n");
++		DWC_WARN("dev_tx_fifo_size must be 4-768\n");
++		return -DWC_E_INVALID;
++	}
++
++	if (val >
++	    (DWC_READ_REG32(&core_if->core_global_regs->dtxfsiz[fifo_num]))) {
++		if (dwc_otg_param_initialized
++		    (core_if->core_params->dev_tx_fifo_size[fifo_num])) {
++			DWC_ERROR
++			    ("`%d' invalid for parameter `dev_tx_fifo_size_%d'. Check HW configuration.\n",
++			     val, fifo_num);
++		}
++		val = (DWC_READ_REG32(&core_if->core_global_regs->dtxfsiz[fifo_num]));
++		retval = -DWC_E_INVALID;
++	}
++
++	core_if->core_params->dev_tx_fifo_size[fifo_num] = val;
++	return retval;
++}
++
++int32_t dwc_otg_get_param_dev_tx_fifo_size(dwc_otg_core_if_t * core_if,
++					   int fifo_num)
++{
++	return core_if->core_params->dev_tx_fifo_size[fifo_num];
++}
++
++int dwc_otg_set_param_thr_ctl(dwc_otg_core_if_t * core_if, int32_t val)
++{
++	int retval = 0;
++
++	if (DWC_OTG_PARAM_TEST(val, 0, 7)) {
++		DWC_WARN("Wrong value for thr_ctl\n");
++		DWC_WARN("thr_ctl must be 0-7\n");
++		return -DWC_E_INVALID;
++	}
++
++	if ((val != 0) &&
++	    (!dwc_otg_get_param_dma_enable(core_if) ||
++	     !core_if->hwcfg4.b.ded_fifo_en)) {
++		if (dwc_otg_param_initialized(core_if->core_params->thr_ctl)) {
++			DWC_ERROR
++			    ("%d invalid for parameter thr_ctl. Check HW configuration.\n",
++			     val);
++		}
++		val = 0;
++		retval = -DWC_E_INVALID;
++	}
++
++	core_if->core_params->thr_ctl = val;
++	return retval;
++}
++
++int32_t dwc_otg_get_param_thr_ctl(dwc_otg_core_if_t * core_if)
++{
++	return core_if->core_params->thr_ctl;
++}
++
++int dwc_otg_set_param_lpm_enable(dwc_otg_core_if_t * core_if, int32_t val)
++{
++	int retval = 0;
++
++	if (DWC_OTG_PARAM_TEST(val, 0, 1)) {
++		DWC_WARN("Wrong value for lpm_enable\n");
++		DWC_WARN("lpm_enable must be 0 or 1\n");
++		return -DWC_E_INVALID;
++	}
++
++	if (val && !core_if->hwcfg3.b.otg_lpm_en) {
++		if (dwc_otg_param_initialized(core_if->core_params->lpm_enable)) {
++			DWC_ERROR
++			    ("%d invalid for parameter lpm_enable. Check HW configuration.\n",
++			     val);
++		}
++		val = 0;
++		retval = -DWC_E_INVALID;
++	}
++
++	core_if->core_params->lpm_enable = val;
++	return retval;
++}
++
++int32_t dwc_otg_get_param_lpm_enable(dwc_otg_core_if_t * core_if)
++{
++	return core_if->core_params->lpm_enable;
++}
++
++int dwc_otg_set_param_tx_thr_length(dwc_otg_core_if_t * core_if, int32_t val)
++{
++	if (DWC_OTG_PARAM_TEST(val, 8, 128)) {
++		DWC_WARN("Wrong value for tx_thr_length\n");
++		DWC_WARN("tx_thr_length must be 8 - 128\n");
++		return -DWC_E_INVALID;
++	}
++
++	core_if->core_params->tx_thr_length = val;
++	return 0;
++}
++
++int32_t dwc_otg_get_param_tx_thr_length(dwc_otg_core_if_t * core_if)
++{
++	return core_if->core_params->tx_thr_length;
++}
++
++int dwc_otg_set_param_rx_thr_length(dwc_otg_core_if_t * core_if, int32_t val)
++{
++	if (DWC_OTG_PARAM_TEST(val, 8, 128)) {
++		DWC_WARN("Wrong value for rx_thr_length\n");
++		DWC_WARN("rx_thr_length must be 8 - 128\n");
++		return -DWC_E_INVALID;
++	}
++
++	core_if->core_params->rx_thr_length = val;
++	return 0;
++}
++
++int32_t dwc_otg_get_param_rx_thr_length(dwc_otg_core_if_t * core_if)
++{
++	return core_if->core_params->rx_thr_length;
++}
++
++int dwc_otg_set_param_dma_burst_size(dwc_otg_core_if_t * core_if, int32_t val)
++{
++	if (DWC_OTG_PARAM_TEST(val, 1, 1) &&
++	    DWC_OTG_PARAM_TEST(val, 4, 4) &&
++	    DWC_OTG_PARAM_TEST(val, 8, 8) &&
++	    DWC_OTG_PARAM_TEST(val, 16, 16) &&
++	    DWC_OTG_PARAM_TEST(val, 32, 32) &&
++	    DWC_OTG_PARAM_TEST(val, 64, 64) &&
++	    DWC_OTG_PARAM_TEST(val, 128, 128) &&
++	    DWC_OTG_PARAM_TEST(val, 256, 256)) {
++		DWC_WARN("`%d' invalid for parameter `dma_burst_size'\n", val);
++		return -DWC_E_INVALID;
++	}
++	core_if->core_params->dma_burst_size = val;
++	return 0;
++}
++
++int32_t dwc_otg_get_param_dma_burst_size(dwc_otg_core_if_t * core_if)
++{
++	return core_if->core_params->dma_burst_size;
++}
++
++int dwc_otg_set_param_pti_enable(dwc_otg_core_if_t * core_if, int32_t val)
++{
++	int retval = 0;
++	if (DWC_OTG_PARAM_TEST(val, 0, 1)) {
++		DWC_WARN("`%d' invalid for parameter `pti_enable'\n", val);
++		return -DWC_E_INVALID;
++	}
++	if (val && (core_if->snpsid < OTG_CORE_REV_2_72a)) {
++		if (dwc_otg_param_initialized(core_if->core_params->pti_enable)) {
++			DWC_ERROR
++			    ("%d invalid for parameter pti_enable. Check HW configuration.\n",
++			     val);
++		}
++		retval = -DWC_E_INVALID;
++		val = 0;
++	}
++	core_if->core_params->pti_enable = val;
++	return retval;
++}
++
++int32_t dwc_otg_get_param_pti_enable(dwc_otg_core_if_t * core_if)
++{
++	return core_if->core_params->pti_enable;
++}
++
++int dwc_otg_set_param_mpi_enable(dwc_otg_core_if_t * core_if, int32_t val)
++{
++	int retval = 0;
++	if (DWC_OTG_PARAM_TEST(val, 0, 1)) {
++		DWC_WARN("`%d' invalid for parameter `mpi_enable'\n", val);
++		return -DWC_E_INVALID;
++	}
++	if (val && (core_if->hwcfg2.b.multi_proc_int == 0)) {
++		if (dwc_otg_param_initialized(core_if->core_params->mpi_enable)) {
++			DWC_ERROR
++			    ("%d invalid for parameter mpi_enable. Check HW configuration.\n",
++			     val);
++		}
++		retval = -DWC_E_INVALID;
++		val = 0;
++	}
++	core_if->core_params->mpi_enable = val;
++	return retval;
++}
++
++int32_t dwc_otg_get_param_mpi_enable(dwc_otg_core_if_t * core_if)
++{
++	return core_if->core_params->mpi_enable;
++}
++
++int dwc_otg_set_param_adp_enable(dwc_otg_core_if_t * core_if, int32_t val)
++{
++	int retval = 0;
++	if (DWC_OTG_PARAM_TEST(val, 0, 1)) {
++		DWC_WARN("`%d' invalid for parameter `adp_enable'\n", val);
++		return -DWC_E_INVALID;
++	}
++	if (val && (core_if->hwcfg3.b.adp_supp == 0)) {
++		if (dwc_otg_param_initialized
++		    (core_if->core_params->adp_supp_enable)) {
++			DWC_ERROR
++			    ("%d invalid for parameter adp_enable. Check HW configuration.\n",
++			     val);
++		}
++		retval = -DWC_E_INVALID;
++		val = 0;
++	}
++	core_if->core_params->adp_supp_enable = val;
++	/* Set OTG version 2.0 in case of enabling ADP */
++	if (val)
++		dwc_otg_set_param_otg_ver(core_if, 1);
++
++	return retval;
++}
++
++int32_t dwc_otg_get_param_adp_enable(dwc_otg_core_if_t * core_if)
++{
++	return core_if->core_params->adp_supp_enable;
++}
++
++int dwc_otg_set_param_ic_usb_cap(dwc_otg_core_if_t * core_if, int32_t val)
++{
++	int retval = 0;
++	if (DWC_OTG_PARAM_TEST(val, 0, 1)) {
++		DWC_WARN("`%d' invalid for parameter `ic_usb_cap'\n", val);
++		DWC_WARN("ic_usb_cap must be 0 or 1\n");
++		return -DWC_E_INVALID;
++	}
++
++	if (val && (core_if->hwcfg2.b.otg_enable_ic_usb == 0)) {
++		if (dwc_otg_param_initialized(core_if->core_params->ic_usb_cap)) {
++			DWC_ERROR
++			    ("%d invalid for parameter ic_usb_cap. Check HW configuration.\n",
++			     val);
++		}
++		retval = -DWC_E_INVALID;
++		val = 0;
++	}
++	core_if->core_params->ic_usb_cap = val;
++	return retval;
++}
++
++int32_t dwc_otg_get_param_ic_usb_cap(dwc_otg_core_if_t * core_if)
++{
++	return core_if->core_params->ic_usb_cap;
++}
++
++int dwc_otg_set_param_ahb_thr_ratio(dwc_otg_core_if_t * core_if, int32_t val)
++{
++	int retval = 0;
++	int valid = 1;
++
++	if (DWC_OTG_PARAM_TEST(val, 0, 3)) {
++		DWC_WARN("`%d' invalid for parameter `ahb_thr_ratio'\n", val);
++		DWC_WARN("ahb_thr_ratio must be 0 - 3\n");
++		return -DWC_E_INVALID;
++	}
++
++	if (val
++	    && (core_if->snpsid < OTG_CORE_REV_2_81a
++		|| !dwc_otg_get_param_thr_ctl(core_if))) {
++		valid = 0;
++	} else if (val
++		   && ((dwc_otg_get_param_tx_thr_length(core_if) / (1 << val)) <
++		       4)) {
++		valid = 0;
++	}
++	if (valid == 0) {
++		if (dwc_otg_param_initialized
++		    (core_if->core_params->ahb_thr_ratio)) {
++			DWC_ERROR
++			    ("%d invalid for parameter ahb_thr_ratio. Check HW configuration.\n",
++			     val);
++		}
++		retval = -DWC_E_INVALID;
++		val = 0;
++	}
++
++	core_if->core_params->ahb_thr_ratio = val;
++	return retval;
++}
++
++int32_t dwc_otg_get_param_ahb_thr_ratio(dwc_otg_core_if_t * core_if)
++{
++	return core_if->core_params->ahb_thr_ratio;
++}
++
++int dwc_otg_set_param_power_down(dwc_otg_core_if_t * core_if, int32_t val)
++{
++	int retval = 0;
++	int valid = 1;
++	hwcfg4_data_t hwcfg4 = {.d32 = 0 };
++	hwcfg4.d32 = DWC_READ_REG32(&core_if->core_global_regs->ghwcfg4);
++
++	if (DWC_OTG_PARAM_TEST(val, 0, 3)) {
++		DWC_WARN("`%d' invalid for parameter `power_down'\n", val);
++		DWC_WARN("power_down must be 0 - 3\n");
++		return -DWC_E_INVALID;
++	}
++
++	if ((val == 2) && (core_if->snpsid < OTG_CORE_REV_2_91a)) {
++		valid = 0;
++	}
++	if ((val == 3)
++	    && ((core_if->snpsid < OTG_CORE_REV_3_00a)
++		|| (hwcfg4.b.xhiber == 0))) {
++		valid = 0;
++	}
++	if (valid == 0) {
++		if (dwc_otg_param_initialized(core_if->core_params->power_down)) {
++			DWC_ERROR
++			    ("%d invalid for parameter power_down. Check HW configuration.\n",
++			     val);
++		}
++		retval = -DWC_E_INVALID;
++		val = 0;
++	}
++	core_if->core_params->power_down = val;
++	return retval;
++}
++
++int32_t dwc_otg_get_param_power_down(dwc_otg_core_if_t * core_if)
++{
++	return core_if->core_params->power_down;
++}
++
++int dwc_otg_set_param_reload_ctl(dwc_otg_core_if_t * core_if, int32_t val)
++{
++	int retval = 0;
++	int valid = 1;
++
++	if (DWC_OTG_PARAM_TEST(val, 0, 1)) {
++		DWC_WARN("`%d' invalid for parameter `reload_ctl'\n", val);
++		DWC_WARN("reload_ctl must be 0 or 1\n");
++		return -DWC_E_INVALID;
++	}
++
++	if ((val == 1) && (core_if->snpsid < OTG_CORE_REV_2_92a)) {
++		valid = 0;
++	}
++	if (valid == 0) {
++		if (dwc_otg_param_initialized(core_if->core_params->reload_ctl)) {
++			DWC_ERROR("%d invalid for parameter reload_ctl. "
++				  "Check HW configuration.\n", val);
++		}
++		retval = -DWC_E_INVALID;
++		val = 0;
++	}
++	core_if->core_params->reload_ctl = val;
++	return retval;
++}
++
++int32_t dwc_otg_get_param_reload_ctl(dwc_otg_core_if_t * core_if)
++{
++	return core_if->core_params->reload_ctl;
++}
++
++int dwc_otg_set_param_dev_out_nak(dwc_otg_core_if_t * core_if, int32_t val)
++{
++	int retval = 0;
++	int valid = 1;
++
++	if (DWC_OTG_PARAM_TEST(val, 0, 1)) {
++		DWC_WARN("`%d' invalid for parameter `dev_out_nak'\n", val);
++		DWC_WARN("dev_out_nak must be 0 or 1\n");
++		return -DWC_E_INVALID;
++	}
++
++	if ((val == 1) && ((core_if->snpsid < OTG_CORE_REV_2_93a) ||
++		!(core_if->core_params->dma_desc_enable))) {
++		valid = 0;
++	}
++	if (valid == 0) {
++		if (dwc_otg_param_initialized(core_if->core_params->dev_out_nak)) {
++			DWC_ERROR("%d invalid for parameter dev_out_nak. "
++				"Check HW configuration.\n", val);
++		}
++		retval = -DWC_E_INVALID;
++		val = 0;
++	}
++	core_if->core_params->dev_out_nak = val;
++	return retval;
++}
++
++int32_t dwc_otg_get_param_dev_out_nak(dwc_otg_core_if_t * core_if)
++{
++	return core_if->core_params->dev_out_nak;
++}
++
++int dwc_otg_set_param_cont_on_bna(dwc_otg_core_if_t * core_if, int32_t val)
++{
++	int retval = 0;
++	int valid = 1;
++
++	if (DWC_OTG_PARAM_TEST(val, 0, 1)) {
++		DWC_WARN("`%d' invalid for parameter `cont_on_bna'\n", val);
++		DWC_WARN("cont_on_bna must be 0 or 1\n");
++		return -DWC_E_INVALID;
++	}
++
++	if ((val == 1) && ((core_if->snpsid < OTG_CORE_REV_2_94a) ||
++		!(core_if->core_params->dma_desc_enable))) {
++			valid = 0;
++	}
++	if (valid == 0) {
++		if (dwc_otg_param_initialized(core_if->core_params->cont_on_bna)) {
++			DWC_ERROR("%d invalid for parameter cont_on_bna. "
++				"Check HW configuration.\n", val);
++		}
++		retval = -DWC_E_INVALID;
++		val = 0;
++	}
++	core_if->core_params->cont_on_bna = val;
++	return retval;
++}
++
++int32_t dwc_otg_get_param_cont_on_bna(dwc_otg_core_if_t * core_if)
++{
++	return core_if->core_params->cont_on_bna;
++}
++
++int dwc_otg_set_param_ahb_single(dwc_otg_core_if_t * core_if, int32_t val)
++{
++	int retval = 0;
++	int valid = 1;
++
++	if (DWC_OTG_PARAM_TEST(val, 0, 1)) {
++		DWC_WARN("`%d' invalid for parameter `ahb_single'\n", val);
++		DWC_WARN("ahb_single must be 0 or 1\n");
++		return -DWC_E_INVALID;
++	}
++
++	if ((val == 1) && (core_if->snpsid < OTG_CORE_REV_2_94a)) {
++			valid = 0;
++	}
++	if (valid == 0) {
++		if (dwc_otg_param_initialized(core_if->core_params->ahb_single)) {
++			DWC_ERROR("%d invalid for parameter ahb_single. "
++				"Check HW configuration.\n", val);
++		}
++		retval = -DWC_E_INVALID;
++		val = 0;
++	}
++	core_if->core_params->ahb_single = val;
++	return retval;
++}
++
++int32_t dwc_otg_get_param_ahb_single(dwc_otg_core_if_t * core_if)
++{
++	return core_if->core_params->ahb_single;
++}
++
++int dwc_otg_set_param_otg_ver(dwc_otg_core_if_t * core_if, int32_t val)
++{
++	int retval = 0;
++
++	if (DWC_OTG_PARAM_TEST(val, 0, 1)) {
++		DWC_WARN("`%d' invalid for parameter `otg_ver'\n", val);
++		DWC_WARN
++		    ("otg_ver must be 0 (for OTG 1.3 support) or 1 (for OTG 2.0 support)\n");
++		return -DWC_E_INVALID;
++	}
++
++	core_if->core_params->otg_ver = val;
++	return retval;
++}
++
++int32_t dwc_otg_get_param_otg_ver(dwc_otg_core_if_t * core_if)
++{
++	return core_if->core_params->otg_ver;
++}
++
++uint32_t dwc_otg_get_hnpstatus(dwc_otg_core_if_t * core_if)
++{
++	gotgctl_data_t otgctl;
++	otgctl.d32 = DWC_READ_REG32(&core_if->core_global_regs->gotgctl);
++	return otgctl.b.hstnegscs;
++}
++
++uint32_t dwc_otg_get_srpstatus(dwc_otg_core_if_t * core_if)
++{
++	gotgctl_data_t otgctl;
++	otgctl.d32 = DWC_READ_REG32(&core_if->core_global_regs->gotgctl);
++	return otgctl.b.sesreqscs;
++}
++
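++/*
++ * For OTG 1.3 cores (otg_ver == 0) the HNP request is written directly to
++ * GOTGCTL.HNPReq; for OTG 2.0 cores the request is only recorded in otg_sts.
++ */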
++void dwc_otg_set_hnpreq(dwc_otg_core_if_t * core_if, uint32_t val)
++{
++	if (core_if->otg_ver == 0) {
++		gotgctl_data_t otgctl;
++		otgctl.d32 = DWC_READ_REG32(&core_if->core_global_regs->gotgctl);
++		otgctl.b.hnpreq = val;
++		DWC_WRITE_REG32(&core_if->core_global_regs->gotgctl, otgctl.d32);
++	} else {
++		core_if->otg_sts = val;
++	}
++}
++
++uint32_t dwc_otg_get_gsnpsid(dwc_otg_core_if_t * core_if)
++{
++	return core_if->snpsid;
++}
++
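++/* Returns GINTSTS.CurMod: 0 when operating in device mode, 1 in host mode. */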
++uint32_t dwc_otg_get_mode(dwc_otg_core_if_t * core_if)
++{
++	gintsts_data_t gintsts;
++	gintsts.d32 = DWC_READ_REG32(&core_if->core_global_regs->gintsts);
++	return gintsts.b.curmode;
++}
++
++uint32_t dwc_otg_get_hnpcapable(dwc_otg_core_if_t * core_if)
++{
++	gusbcfg_data_t usbcfg;
++	usbcfg.d32 = DWC_READ_REG32(&core_if->core_global_regs->gusbcfg);
++	return usbcfg.b.hnpcap;
++}
++
++void dwc_otg_set_hnpcapable(dwc_otg_core_if_t * core_if, uint32_t val)
++{
++	gusbcfg_data_t usbcfg;
++	usbcfg.d32 = DWC_READ_REG32(&core_if->core_global_regs->gusbcfg);
++	usbcfg.b.hnpcap = val;
++	DWC_WRITE_REG32(&core_if->core_global_regs->gusbcfg, usbcfg.d32);
++}
++
++uint32_t dwc_otg_get_srpcapable(dwc_otg_core_if_t * core_if)
++{
++	gusbcfg_data_t usbcfg;
++	usbcfg.d32 = DWC_READ_REG32(&core_if->core_global_regs->gusbcfg);
++	return usbcfg.b.srpcap;
++}
++
++void dwc_otg_set_srpcapable(dwc_otg_core_if_t * core_if, uint32_t val)
++{
++	gusbcfg_data_t usbcfg;
++	usbcfg.d32 = DWC_READ_REG32(&core_if->core_global_regs->gusbcfg);
++	usbcfg.b.srpcap = val;
++	DWC_WRITE_REG32(&core_if->core_global_regs->gusbcfg, usbcfg.d32);
++}
++
++uint32_t dwc_otg_get_devspeed(dwc_otg_core_if_t * core_if)
++{
++	dcfg_data_t dcfg;
++	/* originally: dcfg.d32 = DWC_READ_REG32(&core_if->dev_if->dev_global_regs->dcfg); */
++
++	dcfg.d32 = -1; //GRAYG
++	DWC_DEBUGPL(DBG_CILV, "%s - core_if(%p)\n", __func__, core_if);
++	if (NULL == core_if)
++		DWC_ERROR("reg request with NULL core_if\n");
++	DWC_DEBUGPL(DBG_CILV, "%s - core_if(%p)->dev_if(%p)\n", __func__,
++		    core_if, core_if->dev_if);
++	if (NULL == core_if->dev_if)
++		DWC_ERROR("reg request with NULL dev_if\n");
++	DWC_DEBUGPL(DBG_CILV, "%s - core_if(%p)->dev_if(%p)->"
++		    "dev_global_regs(%p)\n", __func__,
++		    core_if, core_if->dev_if,
++		    core_if->dev_if->dev_global_regs);
++	if (NULL == core_if->dev_if->dev_global_regs)
++		DWC_ERROR("reg request with NULL dev_global_regs\n");
++	else {
++		DWC_DEBUGPL(DBG_CILV, "%s - &core_if(%p)->dev_if(%p)->"
++			    "dev_global_regs(%p)->dcfg = %p\n", __func__,
++			    core_if, core_if->dev_if,
++			    core_if->dev_if->dev_global_regs,
++			    &core_if->dev_if->dev_global_regs->dcfg);
++		dcfg.d32 = DWC_READ_REG32(&core_if->dev_if->dev_global_regs->dcfg);
++	}
++	return dcfg.b.devspd;
++}
++
++void dwc_otg_set_devspeed(dwc_otg_core_if_t * core_if, uint32_t val)
++{
++	dcfg_data_t dcfg;
++	dcfg.d32 = DWC_READ_REG32(&core_if->dev_if->dev_global_regs->dcfg);
++	dcfg.b.devspd = val;
++	DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->dcfg, dcfg.d32);
++}
++
++uint32_t dwc_otg_get_busconnected(dwc_otg_core_if_t * core_if)
++{
++	hprt0_data_t hprt0;
++	hprt0.d32 = DWC_READ_REG32(core_if->host_if->hprt0);
++	return hprt0.b.prtconnsts;
++}
++
++uint32_t dwc_otg_get_enumspeed(dwc_otg_core_if_t * core_if)
++{
++	dsts_data_t dsts;
++	dsts.d32 = DWC_READ_REG32(&core_if->dev_if->dev_global_regs->dsts);
++	return dsts.b.enumspd;
++}
++
++uint32_t dwc_otg_get_prtpower(dwc_otg_core_if_t * core_if)
++{
++	hprt0_data_t hprt0;
++	hprt0.d32 = DWC_READ_REG32(core_if->host_if->hprt0);
++	return hprt0.b.prtpwr;
++}
++
++uint32_t dwc_otg_get_core_state(dwc_otg_core_if_t * core_if)
++{
++	return core_if->hibernation_suspend;
++}
++
++void dwc_otg_set_prtpower(dwc_otg_core_if_t * core_if, uint32_t val)
++{
++	hprt0_data_t hprt0;
++	hprt0.d32 = dwc_otg_read_hprt0(core_if);
++	hprt0.b.prtpwr = val;
++	DWC_WRITE_REG32(core_if->host_if->hprt0, hprt0.d32);
++}
++
++uint32_t dwc_otg_get_prtsuspend(dwc_otg_core_if_t * core_if)
++{
++	hprt0_data_t hprt0;
++	hprt0.d32 = DWC_READ_REG32(core_if->host_if->hprt0);
++	return hprt0.b.prtsusp;
++}
++
++void dwc_otg_set_prtsuspend(dwc_otg_core_if_t * core_if, uint32_t val)
++{
++	hprt0_data_t hprt0;
++	hprt0.d32 = dwc_otg_read_hprt0(core_if);
++	hprt0.b.prtsusp = val;
++	DWC_WRITE_REG32(core_if->host_if->hprt0, hprt0.d32);
++}
++
++uint32_t dwc_otg_get_fr_interval(dwc_otg_core_if_t * core_if)
++{
++	hfir_data_t hfir;
++	hfir.d32 = DWC_READ_REG32(&core_if->host_if->host_global_regs->hfir);
++	return hfir.b.frint;
++}
++
++void dwc_otg_set_fr_interval(dwc_otg_core_if_t * core_if, uint32_t val)
++{
++	hfir_data_t hfir;
++	uint32_t fram_int;
++	fram_int = calc_frame_interval(core_if);
++	hfir.d32 = DWC_READ_REG32(&core_if->host_if->host_global_regs->hfir);
++	if (!core_if->core_params->reload_ctl) {
++		DWC_WARN("\nCannot reload HFIR register. HFIR.HFIRRldCtrl bit is"
++			 " not set to 1.\nShould load driver with reload_ctl=1"
++			 " module parameter\n");
++		return;
++	}
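++	/*
++	 * Each case below accepts values within roughly 10% of the nominal
++	 * frame interval for the corresponding core speed and PHY clock.
++	 */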
++	switch (fram_int) {
++	case 3750:
++		if ((val < 3350) || (val > 4150)) {
++			DWC_WARN("HFIR interval for HS core and 30 MHz "
++				 "clock freq should be from 3350 to 4150\n");
++			return;
++		}
++		break;
++	case 30000:
++		if ((val < 26820) || (val > 33180)) {
++			DWC_WARN("HFIR interval for FS/LS core and 30 MHz "
++				 "clock freq should be from 26820 to 33180\n");
++			return;
++		}
++		break;
++	case 6000:
++		if ((val < 5360) || (val > 6640)) {
++			DWC_WARN("HFIR interval for HS core and 48 MHz "
++				 "clock freq should be from 5360 to 6640\n");
++			return;
++		}
++		break;
++	case 48000:
++		if ((val < 42912) || (val > 53088)) {
++			DWC_WARN("HFIR interval for FS/LS core and 48 MHz "
++				 "clock freq should be from 42912 to 53088\n");
++			return;
++		}
++		break;
++	case 7500:
++		if ((val < 6700) || (val > 8300)) {
++			DWC_WARN("HFIR interval for HS core and 60 MHz "
++				 "clock freq should be from 6700 to 8300\n");
++			return;
++		}
++		break;
++	case 60000:
++		if ((val < 53640) || (val > 65536)) {
++			DWC_WARN("HFIR interval for FS/LS core and 60 MHz "
++				 "clock freq should be from 53640 to 65536\n");
++			return;
++		}
++		break;
++	default:
++		DWC_WARN("Unknown frame interval\n");
++		return;
++	}
++	hfir.b.frint = val;
++	DWC_WRITE_REG32(&core_if->host_if->host_global_regs->hfir, hfir.d32);
++}
++
++uint32_t dwc_otg_get_mode_ch_tim(dwc_otg_core_if_t * core_if)
++{
++	hcfg_data_t hcfg;
++	hcfg.d32 = DWC_READ_REG32(&core_if->host_if->host_global_regs->hcfg);
++	return hcfg.b.modechtimen;
++}
++
++void dwc_otg_set_mode_ch_tim(dwc_otg_core_if_t * core_if, uint32_t val)
++{
++	hcfg_data_t hcfg;
++	hcfg.d32 = DWC_READ_REG32(&core_if->host_if->host_global_regs->hcfg);
++	hcfg.b.modechtimen = val;
++	DWC_WRITE_REG32(&core_if->host_if->host_global_regs->hcfg, hcfg.d32);
++}
++
++void dwc_otg_set_prtresume(dwc_otg_core_if_t * core_if, uint32_t val)
++{
++	hprt0_data_t hprt0;
++	hprt0.d32 = dwc_otg_read_hprt0(core_if);
++	hprt0.b.prtres = val;
++	DWC_WRITE_REG32(core_if->host_if->hprt0, hprt0.d32);
++}
++
++uint32_t dwc_otg_get_remotewakesig(dwc_otg_core_if_t * core_if)
++{
++	dctl_data_t dctl;
++	dctl.d32 = DWC_READ_REG32(&core_if->dev_if->dev_global_regs->dctl);
++	return dctl.b.rmtwkupsig;
++}
++
++uint32_t dwc_otg_get_lpm_portsleepstatus(dwc_otg_core_if_t * core_if)
++{
++	glpmcfg_data_t lpmcfg;
++	lpmcfg.d32 = DWC_READ_REG32(&core_if->core_global_regs->glpmcfg);
++
++	DWC_ASSERT(!
++		   ((core_if->lx_state == DWC_OTG_L1) ^ lpmcfg.b.prt_sleep_sts),
++		   "lx_state = %d, lmpcfg.prt_sleep_sts = %d\n",
++		   core_if->lx_state, lpmcfg.b.prt_sleep_sts);
++
++	return lpmcfg.b.prt_sleep_sts;
++}
++
++uint32_t dwc_otg_get_lpm_remotewakeenabled(dwc_otg_core_if_t * core_if)
++{
++	glpmcfg_data_t lpmcfg;
++	lpmcfg.d32 = DWC_READ_REG32(&core_if->core_global_regs->glpmcfg);
++	return lpmcfg.b.rem_wkup_en;
++}
++
++uint32_t dwc_otg_get_lpmresponse(dwc_otg_core_if_t * core_if)
++{
++	glpmcfg_data_t lpmcfg;
++	lpmcfg.d32 = DWC_READ_REG32(&core_if->core_global_regs->glpmcfg);
++	return lpmcfg.b.appl_resp;
++}
++
++void dwc_otg_set_lpmresponse(dwc_otg_core_if_t * core_if, uint32_t val)
++{
++	glpmcfg_data_t lpmcfg;
++	lpmcfg.d32 = DWC_READ_REG32(&core_if->core_global_regs->glpmcfg);
++	lpmcfg.b.appl_resp = val;
++	DWC_WRITE_REG32(&core_if->core_global_regs->glpmcfg, lpmcfg.d32);
++}
++
++uint32_t dwc_otg_get_hsic_connect(dwc_otg_core_if_t * core_if)
++{
++	glpmcfg_data_t lpmcfg;
++	lpmcfg.d32 = DWC_READ_REG32(&core_if->core_global_regs->glpmcfg);
++	return lpmcfg.b.hsic_connect;
++}
++
++void dwc_otg_set_hsic_connect(dwc_otg_core_if_t * core_if, uint32_t val)
++{
++	glpmcfg_data_t lpmcfg;
++	lpmcfg.d32 = DWC_READ_REG32(&core_if->core_global_regs->glpmcfg);
++	lpmcfg.b.hsic_connect = val;
++	DWC_WRITE_REG32(&core_if->core_global_regs->glpmcfg, lpmcfg.d32);
++}
++
++uint32_t dwc_otg_get_inv_sel_hsic(dwc_otg_core_if_t * core_if)
++{
++	glpmcfg_data_t lpmcfg;
++	lpmcfg.d32 = DWC_READ_REG32(&core_if->core_global_regs->glpmcfg);
++	return lpmcfg.b.inv_sel_hsic;
++}
++
++void dwc_otg_set_inv_sel_hsic(dwc_otg_core_if_t * core_if, uint32_t val)
++{
++	glpmcfg_data_t lpmcfg;
++	lpmcfg.d32 = DWC_READ_REG32(&core_if->core_global_regs->glpmcfg);
++	lpmcfg.b.inv_sel_hsic = val;
++	DWC_WRITE_REG32(&core_if->core_global_regs->glpmcfg, lpmcfg.d32);
++}
++
++uint32_t dwc_otg_get_gotgctl(dwc_otg_core_if_t * core_if)
++{
++	return DWC_READ_REG32(&core_if->core_global_regs->gotgctl);
++}
++
++void dwc_otg_set_gotgctl(dwc_otg_core_if_t * core_if, uint32_t val)
++{
++	DWC_WRITE_REG32(&core_if->core_global_regs->gotgctl, val);
++}
++
++uint32_t dwc_otg_get_gusbcfg(dwc_otg_core_if_t * core_if)
++{
++	return DWC_READ_REG32(&core_if->core_global_regs->gusbcfg);
++}
++
++void dwc_otg_set_gusbcfg(dwc_otg_core_if_t * core_if, uint32_t val)
++{
++	DWC_WRITE_REG32(&core_if->core_global_regs->gusbcfg, val);
++}
++
++uint32_t dwc_otg_get_grxfsiz(dwc_otg_core_if_t * core_if)
++{
++	return DWC_READ_REG32(&core_if->core_global_regs->grxfsiz);
++}
++
++void dwc_otg_set_grxfsiz(dwc_otg_core_if_t * core_if, uint32_t val)
++{
++	DWC_WRITE_REG32(&core_if->core_global_regs->grxfsiz, val);
++}
++
++uint32_t dwc_otg_get_gnptxfsiz(dwc_otg_core_if_t * core_if)
++{
++	return DWC_READ_REG32(&core_if->core_global_regs->gnptxfsiz);
++}
++
++void dwc_otg_set_gnptxfsiz(dwc_otg_core_if_t * core_if, uint32_t val)
++{
++	DWC_WRITE_REG32(&core_if->core_global_regs->gnptxfsiz, val);
++}
++
++uint32_t dwc_otg_get_gpvndctl(dwc_otg_core_if_t * core_if)
++{
++	return DWC_READ_REG32(&core_if->core_global_regs->gpvndctl);
++}
++
++void dwc_otg_set_gpvndctl(dwc_otg_core_if_t * core_if, uint32_t val)
++{
++	DWC_WRITE_REG32(&core_if->core_global_regs->gpvndctl, val);
++}
++
++uint32_t dwc_otg_get_ggpio(dwc_otg_core_if_t * core_if)
++{
++	return DWC_READ_REG32(&core_if->core_global_regs->ggpio);
++}
++
++void dwc_otg_set_ggpio(dwc_otg_core_if_t * core_if, uint32_t val)
++{
++	DWC_WRITE_REG32(&core_if->core_global_regs->ggpio, val);
++}
++
++uint32_t dwc_otg_get_hprt0(dwc_otg_core_if_t * core_if)
++{
++	return DWC_READ_REG32(core_if->host_if->hprt0);
++}
++
++void dwc_otg_set_hprt0(dwc_otg_core_if_t * core_if, uint32_t val)
++{
++	DWC_WRITE_REG32(core_if->host_if->hprt0, val);
++}
++
++uint32_t dwc_otg_get_guid(dwc_otg_core_if_t * core_if)
++{
++	return DWC_READ_REG32(&core_if->core_global_regs->guid);
++}
++
++void dwc_otg_set_guid(dwc_otg_core_if_t * core_if, uint32_t val)
++{
++	DWC_WRITE_REG32(&core_if->core_global_regs->guid, val);
++}
++
++uint32_t dwc_otg_get_hptxfsiz(dwc_otg_core_if_t * core_if)
++{
++	return DWC_READ_REG32(&core_if->core_global_regs->hptxfsiz);
++}
++
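++/* OTG supplement revision in BCD: 0x0200 for OTG 2.0, 0x0103 for OTG 1.3. */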
++uint16_t dwc_otg_get_otg_version(dwc_otg_core_if_t * core_if)
++{
++	return ((core_if->otg_ver == 1) ? (uint16_t)0x0200 : (uint16_t)0x0103);
++}
++
++/**
++ * Start the SRP timer to detect when the SRP does not complete within
++ * 6 seconds.
++ *
++ * @param core_if the pointer to core_if structure.
++ */
++void dwc_otg_pcd_start_srp_timer(dwc_otg_core_if_t * core_if)
++{
++	core_if->srp_timer_started = 1;
++	DWC_TIMER_SCHEDULE(core_if->srp_timer, 6000 /* 6 secs */ );
++}
++
++void dwc_otg_initiate_srp(dwc_otg_core_if_t * core_if)
++{
++	uint32_t *addr = (uint32_t *) & (core_if->core_global_regs->gotgctl);
++	gotgctl_data_t mem;
++	gotgctl_data_t val;
++
++	val.d32 = DWC_READ_REG32(addr);
++	if (val.b.sesreq) {
++		DWC_ERROR("Session Request Already active!\n");
++		return;
++	}
++
++	DWC_INFO("Session Request Initiated\n");	//NOTICE
++	mem.d32 = DWC_READ_REG32(addr);
++	mem.b.sesreq = 1;
++	DWC_WRITE_REG32(addr, mem.d32);
++
++	/* Start the SRP timer */
++	dwc_otg_pcd_start_srp_timer(core_if);
++	return;
++}
+--- /dev/null
++++ b/drivers/usb/host/dwc_otg/dwc_otg_cil.h
+@@ -0,0 +1,1464 @@
++/* ==========================================================================
++ * $File: //dwh/usb_iip/dev/software/otg/linux/drivers/dwc_otg_cil.h $
++ * $Revision: #123 $
++ * $Date: 2012/08/10 $
++ * $Change: 2047372 $
++ *
++ * Synopsys HS OTG Linux Software Driver and documentation (hereinafter,
++ * "Software") is an Unsupported proprietary work of Synopsys, Inc. unless
++ * otherwise expressly agreed to in writing between Synopsys and you.
++ *
++ * The Software IS NOT an item of Licensed Software or Licensed Product under
++ * any End User Software License Agreement or Agreement for Licensed Product
++ * with Synopsys or any supplement thereto. You are permitted to use and
++ * redistribute this Software in source and binary forms, with or without
++ * modification, provided that redistributions of source code must retain this
++ * notice. You may not view, use, disclose, copy or distribute this file or
++ * any information contained herein except pursuant to this license grant from
++ * Synopsys. If you do not agree with this notice, including the disclaimer
++ * below, then you are not authorized to use the Software.
++ *
++ * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS" BASIS
++ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
++ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
++ * ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS BE LIABLE FOR ANY DIRECT,
++ * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
++ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
++ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
++ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
++ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
++ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
++ * DAMAGE.
++ * ========================================================================== */
++
++#if !defined(__DWC_CIL_H__)
++#define __DWC_CIL_H__
++
++#include "dwc_list.h"
++#include "dwc_otg_dbg.h"
++#include "dwc_otg_regs.h"
++
++#include "dwc_otg_core_if.h"
++#include "dwc_otg_adp.h"
++
++/**
++ * @file
++ * This file contains the interface to the Core Interface Layer.
++ */
++
++#ifdef DWC_UTE_CFI
++
++#define MAX_DMA_DESCS_PER_EP	256
++
++/**
++ * Enumeration for the data buffer mode
++ */
++typedef enum _data_buffer_mode {
++	BM_STANDARD = 0,	/* data buffer is in normal mode */
++	BM_SG = 1,		/* data buffer uses the scatter/gather mode */
++	BM_CONCAT = 2,		/* data buffer uses the concatenation mode */
++	BM_CIRCULAR = 3,	/* data buffer uses the circular DMA mode */
++	BM_ALIGN = 4		/* data buffer is in buffer alignment mode */
++} data_buffer_mode_e;
++#endif //DWC_UTE_CFI
++
++/** Macros defined for DWC OTG HW Release version */
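++/*
++ * These values match the GSNPSID register contents: 0x4F54 ("OT" in ASCII)
++ * followed by the core release number, e.g. 0x4F54300A for release 3.00a.
++ */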
++
++#define OTG_CORE_REV_2_60a	0x4F54260A
++#define OTG_CORE_REV_2_71a	0x4F54271A
++#define OTG_CORE_REV_2_72a	0x4F54272A
++#define OTG_CORE_REV_2_80a	0x4F54280A
++#define OTG_CORE_REV_2_81a	0x4F54281A
++#define OTG_CORE_REV_2_90a	0x4F54290A
++#define OTG_CORE_REV_2_91a	0x4F54291A
++#define OTG_CORE_REV_2_92a	0x4F54292A
++#define OTG_CORE_REV_2_93a	0x4F54293A
++#define OTG_CORE_REV_2_94a	0x4F54294A
++#define OTG_CORE_REV_3_00a	0x4F54300A
++
++/**
++ * Information for each ISOC packet.
++ */
++typedef struct iso_pkt_info {
++	uint32_t offset;
++	uint32_t length;
++	int32_t status;
++} iso_pkt_info_t;
++
++/**
++ * The <code>dwc_ep</code> structure represents the state of a single
++ * endpoint when acting in device mode. It contains the data items
++ * needed for an endpoint to be activated and transfer packets.
++ */
++typedef struct dwc_ep {
++	/** EP number used for register address lookup */
++	uint8_t num;
++	/** EP direction 0 = OUT */
++	unsigned is_in:1;
++	/** EP active. */
++	unsigned active:1;
++
++	/**
++	 * Periodic Tx FIFO # for IN EPs. For INTR EPs set to 0 to use the
++	 * non-periodic Tx FIFO. If dedicated Tx FIFOs are enabled, this is the
++	 * Tx FIFO # for IN EPs. */
++	unsigned tx_fifo_num:4;
++	/** EP type: 0 - Control, 1 - ISOC, 2 - BULK, 3 - INTR */
++	unsigned type:2;
++#define DWC_OTG_EP_TYPE_CONTROL	   0
++#define DWC_OTG_EP_TYPE_ISOC	   1
++#define DWC_OTG_EP_TYPE_BULK	   2
++#define DWC_OTG_EP_TYPE_INTR	   3
++
++	/** DATA start PID for INTR and BULK EP */
++	unsigned data_pid_start:1;
++	/** Frame (even/odd) for ISOC EP */
++	unsigned even_odd_frame:1;
++	/** Max Packet bytes */
++	unsigned maxpacket:11;
++
++	/** Max Transfer size */
++	uint32_t maxxfer;
++
++	/** @name Transfer state */
++	/** @{ */
++
++	/**
++	 * Pointer to the beginning of the transfer buffer -- do not modify
++	 * during transfer.
++	 */
++
++	dwc_dma_t dma_addr;
++
++	dwc_dma_t dma_desc_addr;
++	dwc_otg_dev_dma_desc_t *desc_addr;
++
++	uint8_t *start_xfer_buff;
++	/** pointer to the transfer buffer */
++	uint8_t *xfer_buff;
++	/** Number of bytes to transfer */
++	unsigned xfer_len:19;
++	/** Number of bytes transferred. */
++	unsigned xfer_count:19;
++	/** Sent ZLP */
++	unsigned sent_zlp:1;
++	/** Total len for control transfer */
++	unsigned total_len:19;
++
++	/** stall clear flag */
++	unsigned stall_clear_flag:1;
++
++	/** SETUP pkt cnt rollover flag for EP0 out*/
++	unsigned stp_rollover;
++
++#ifdef DWC_UTE_CFI
++	/* The buffer mode */
++	data_buffer_mode_e buff_mode;
++
++	/* The chain of DMA descriptors.
++	 * MAX_DMA_DESCS_PER_EP will be allocated for each active EP.
++	 */
++	dwc_otg_dma_desc_t *descs;
++
++	/* The DMA address of the descriptors chain start */
++	dma_addr_t descs_dma_addr;
++	/** This variable stores the length of the last enqueued request */
++	uint32_t cfi_req_len;
++#endif				//DWC_UTE_CFI
++
++/** Max DMA Descriptor count for any EP */
++#define MAX_DMA_DESC_CNT 256
++	/** Allocated DMA Desc count */
++	uint32_t desc_cnt;
++
++	/** bInterval */
++	uint32_t bInterval;
++	/** Next frame num to setup next ISOC transfer */
++	uint32_t frame_num;
++	/** Indicates SOF number overrun in DSTS */
++	uint8_t frm_overrun;
++
++#ifdef DWC_UTE_PER_IO
++	/** Next frame num for which will be setup DMA Desc */
++	uint32_t xiso_frame_num;
++	/** bInterval */
++	uint32_t xiso_bInterval;
++	/** Count of currently active transfers - shall be either 0 or 1 */
++	int xiso_active_xfers;
++	int xiso_queued_xfers;
++#endif
++#ifdef DWC_EN_ISOC
++	/**
++	 * Variables specific for ISOC EPs
++	 *
++	 */
++	/** DMA addresses of ISOC buffers */
++	dwc_dma_t dma_addr0;
++	dwc_dma_t dma_addr1;
++
++	dwc_dma_t iso_dma_desc_addr;
++	dwc_otg_dev_dma_desc_t *iso_desc_addr;
++
++	/** pointer to the transfer buffers */
++	uint8_t *xfer_buff0;
++	uint8_t *xfer_buff1;
++
++	/** number of ISOC Buffer is processing */
++	uint32_t proc_buf_num;
++	/** Interval of ISOC Buffer processing */
++	uint32_t buf_proc_intrvl;
++	/** Data size for regular frame */
++	uint32_t data_per_frame;
++
++	/* todo - pattern data support is to be implemented in the future */
++	/** Data size for pattern frame */
++	uint32_t data_pattern_frame;
++	/** Frame number of pattern data */
++	uint32_t sync_frame;
++
++	/** bInterval */
++	uint32_t bInterval;
++	/** ISO Packet number per frame */
++	uint32_t pkt_per_frm;
++	/** Next frame number for which the DMA descriptor will be set up */
++	uint32_t next_frame;
++	/** Number of packets per buffer processing */
++	uint32_t pkt_cnt;
++	/** Info for all isoc packets */
++	iso_pkt_info_t *pkt_info;
++	/** Current packet number */
++	uint32_t cur_pkt;
++	/** Buffer address of the current packet */
++	uint8_t *cur_pkt_addr;
++	/** DMA address of the current packet */
++	uint32_t cur_pkt_dma_addr;
++#endif				/* DWC_EN_ISOC */
++
++/** @} */
++} dwc_ep_t;
++
++/*
++ * Reasons for halting a host channel.
++ */
++typedef enum dwc_otg_halt_status {
++	DWC_OTG_HC_XFER_NO_HALT_STATUS,
++	DWC_OTG_HC_XFER_COMPLETE,
++	DWC_OTG_HC_XFER_URB_COMPLETE,
++	DWC_OTG_HC_XFER_ACK,
++	DWC_OTG_HC_XFER_NAK,
++	DWC_OTG_HC_XFER_NYET,
++	DWC_OTG_HC_XFER_STALL,
++	DWC_OTG_HC_XFER_XACT_ERR,
++	DWC_OTG_HC_XFER_FRAME_OVERRUN,
++	DWC_OTG_HC_XFER_BABBLE_ERR,
++	DWC_OTG_HC_XFER_DATA_TOGGLE_ERR,
++	DWC_OTG_HC_XFER_AHB_ERR,
++	DWC_OTG_HC_XFER_PERIODIC_INCOMPLETE,
++	DWC_OTG_HC_XFER_URB_DEQUEUE
++} dwc_otg_halt_status_e;
++
++/**
++ * Host channel descriptor. This structure represents the state of a single
++ * host channel when acting in host mode. It contains the data items needed to
++ * transfer packets to an endpoint via a host channel.
++ */
++typedef struct dwc_hc {
++	/** Host channel number used for register address lookup */
++	uint8_t hc_num;
++
++	/** Device to access */
++	unsigned dev_addr:7;
++
++	/** EP to access */
++	unsigned ep_num:4;
++
++	/** EP direction. 0: OUT, 1: IN */
++	unsigned ep_is_in:1;
++
++	/**
++	 * EP speed.
++	 * One of the following values:
++	 *	- DWC_OTG_EP_SPEED_LOW
++	 *	- DWC_OTG_EP_SPEED_FULL
++	 *	- DWC_OTG_EP_SPEED_HIGH
++	 */
++	unsigned speed:2;
++#define DWC_OTG_EP_SPEED_LOW	0
++#define DWC_OTG_EP_SPEED_FULL	1
++#define DWC_OTG_EP_SPEED_HIGH	2
++
++	/**
++	 * Endpoint type.
++	 * One of the following values:
++	 *	- DWC_OTG_EP_TYPE_CONTROL: 0
++	 *	- DWC_OTG_EP_TYPE_ISOC: 1
++	 *	- DWC_OTG_EP_TYPE_BULK: 2
++	 *	- DWC_OTG_EP_TYPE_INTR: 3
++	 */
++	unsigned ep_type:2;
++
++	/** Max packet size in bytes */
++	unsigned max_packet:11;
++
++	/**
++	 * PID for initial transaction.
++	 * 0: DATA0,<br>
++	 * 1: DATA2,<br>
++	 * 2: DATA1,<br>
++	 * 3: MDATA (non-Control EP),
++	 *	  SETUP (Control EP)
++	 */
++	unsigned data_pid_start:2;
++#define DWC_OTG_HC_PID_DATA0 0
++#define DWC_OTG_HC_PID_DATA2 1
++#define DWC_OTG_HC_PID_DATA1 2
++#define DWC_OTG_HC_PID_MDATA 3
++#define DWC_OTG_HC_PID_SETUP 3
++
++	/** Number of periodic transactions per (micro)frame */
++	unsigned multi_count:2;
++
++	/** @name Transfer State */
++	/** @{ */
++
++	/** Pointer to the current transfer buffer position. */
++	uint8_t *xfer_buff;
++	/**
++	 * In Buffer DMA mode this buffer will be used
++	 * if xfer_buff is not DWORD aligned.
++	 */
++	dwc_dma_t align_buff;
++	/** Total number of bytes to transfer. */
++	uint32_t xfer_len;
++	/** Number of bytes transferred so far. */
++	uint32_t xfer_count;
++	/** Packet count at start of transfer.*/
++	uint16_t start_pkt_count;
++
++	/**
++	 * Flag to indicate whether the transfer has been started. Set to 1 if
++	 * it has been started, 0 otherwise.
++	 */
++	uint8_t xfer_started;
++
++	/**
++	 * Set to 1 to indicate that a PING request should be issued on this
++	 * channel. If 0, process normally.
++	 */
++	uint8_t do_ping;
++
++	/**
++	 * Set to 1 to indicate that the error count for this transaction is
++	 * non-zero. Set to 0 if the error count is 0.
++	 */
++	uint8_t error_state;
++
++	/**
++	 * Set to 1 to indicate that this channel should be halted the next
++	 * time a request is queued for the channel. This is necessary in
++	 * slave mode if no request queue space is available when an attempt
++	 * is made to halt the channel.
++	 */
++	uint8_t halt_on_queue;
++
++	/**
++	 * Set to 1 if the host channel has been halted, but the core is not
++	 * finished flushing queued requests. Otherwise 0.
++	 */
++	uint8_t halt_pending;
++
++	/**
++	 * Reason for halting the host channel.
++	 */
++	dwc_otg_halt_status_e halt_status;
++
++	/*
++	 * Split settings for the host channel
++	 */
++	uint8_t do_split;		   /**< Enable split for the channel */
++	uint8_t complete_split;	   /**< Enable complete split */
++	uint8_t hub_addr;		   /**< Address of high speed hub */
++
++	uint8_t port_addr;		   /**< Port of the low/full speed device */
++	/** Split transaction position
++	 * One of the following values:
++	 *	  - DWC_HCSPLIT_XACTPOS_MID
++	 *	  - DWC_HCSPLIT_XACTPOS_BEGIN
++	 *	  - DWC_HCSPLIT_XACTPOS_END
++	 *	  - DWC_HCSPLIT_XACTPOS_ALL */
++	uint8_t xact_pos;
++
++	/** Set when the host channel does a short read. */
++	uint8_t short_read;
++
++	/**
++	 * Number of requests issued for this channel since it was assigned to
++	 * the current transfer (not counting PINGs).
++	 */
++	uint8_t requests;
++
++	/**
++	 * Queue Head for the transfer being processed by this channel.
++	 */
++	struct dwc_otg_qh *qh;
++
++	/** @} */
++
++	/** Entry in list of host channels. */
++	 DWC_CIRCLEQ_ENTRY(dwc_hc) hc_list_entry;
++
++	/** @name Descriptor DMA support */
++	/** @{ */
++
++	/** Number of Transfer Descriptors */
++	uint16_t ntd;
++
++	/** Descriptor List DMA address */
++	dwc_dma_t desc_list_addr;
++
++	/** Scheduling micro-frame bitmap. */
++	uint8_t schinfo;
++
++	/** @} */
++} dwc_hc_t;
++
++/**
++ * The following parameters may be specified when starting the module. These
++ * parameters define how the DWC_otg controller should be configured.
++ */
++typedef struct dwc_otg_core_params {
++	int32_t opt;
++
++	/**
++	 * Specifies the OTG capabilities. The driver will automatically
++	 * detect the value for this parameter if none is specified.
++	 * 0 - HNP and SRP capable (default)
++	 * 1 - SRP Only capable
++	 * 2 - No HNP/SRP capable
++	 */
++	int32_t otg_cap;
++
++	/**
++	 * Specifies whether to use slave or DMA mode for accessing the data
++	 * FIFOs. The driver will automatically detect the value for this
++	 * parameter if none is specified.
++	 * 0 - Slave
++	 * 1 - DMA (default, if available)
++	 */
++	int32_t dma_enable;
++
++	/**
++	 * When DMA mode is enabled, specifies whether to use address DMA or
++	 * DMA Descriptor mode for accessing the data FIFOs in device mode.
++	 * The driver will automatically detect the value for this if none is
++	 * specified.
++	 * 0 - Address DMA
++	 * 1 - DMA Descriptor (default, if available)
++	 */
++	int32_t dma_desc_enable;
++	/** The DMA Burst size (applicable only for External DMA
++	 * Mode). 1, 4, 8, 16, 32, 64, 128, 256 (default 32)
++	 */
++	int32_t dma_burst_size;	/* Translate this to GAHBCFG values */
++
++	/**
++	 * Specifies the maximum speed of operation in host and device mode.
++	 * The actual speed depends on the speed of the attached device and
++	 * the value of phy_type.
++	 * 0 - High Speed (default)
++	 * 1 - Full Speed
++	 */
++	int32_t speed;
++	/** Specifies whether low power mode is supported when attached
++	 *	to a Full Speed or Low Speed device in host mode.
++	 * 0 - Don't support low power mode (default)
++	 * 1 - Support low power mode
++	 */
++	int32_t host_support_fs_ls_low_power;
++
++	/** Specifies the PHY clock rate in low power mode when connected to a
++	 * Low Speed device in host mode. This parameter is applicable only if
++	 * HOST_SUPPORT_FS_LS_LOW_POWER is enabled. If PHY_TYPE is set to FS,
++	 * this defaults to 6 MHz, otherwise 48 MHz.
++	 *
++	 * 0 - 48 MHz
++	 * 1 - 6 MHz
++	 */
++	int32_t host_ls_low_power_phy_clk;
++
++	/**
++	 * 0 - Use cC FIFO size parameters
++	 * 1 - Allow dynamic FIFO sizing (default)
++	 */
++	int32_t enable_dynamic_fifo;
++
++	/** Total number of 4-byte words in the data FIFO memory. This
++	 * memory includes the Rx FIFO, non-periodic Tx FIFO, and periodic
++	 * Tx FIFOs.
++	 * 32 to 32768 (default 8192)
++	 * Note: The total FIFO memory depth in the FPGA configuration is 8192.
++	 */
++	int32_t data_fifo_size;
++
++	/** Number of 4-byte words in the Rx FIFO in device mode when dynamic
++	 * FIFO sizing is enabled.
++	 * 16 to 32768 (default 1064)
++	 */
++	int32_t dev_rx_fifo_size;
++
++	/** Number of 4-byte words in the non-periodic Tx FIFO in device mode
++	 * when dynamic FIFO sizing is enabled.
++	 * 16 to 32768 (default 1024)
++	 */
++	int32_t dev_nperio_tx_fifo_size;
++
++	/** Number of 4-byte words in each of the periodic Tx FIFOs in device
++	 * mode when dynamic FIFO sizing is enabled.
++	 * 4 to 768 (default 256)
++	 */
++	uint32_t dev_perio_tx_fifo_size[MAX_PERIO_FIFOS];
++
++	/** Number of 4-byte words in the Rx FIFO in host mode when dynamic
++	 * FIFO sizing is enabled.
++	 * 16 to 32768 (default 1024)
++	 */
++	int32_t host_rx_fifo_size;
++
++	/** Number of 4-byte words in the non-periodic Tx FIFO in host mode
++	 * when Dynamic FIFO sizing is enabled in the core.
++	 * 16 to 32768 (default 1024)
++	 */
++	int32_t host_nperio_tx_fifo_size;
++
++	/** Number of 4-byte words in the host periodic Tx FIFO when dynamic
++	 * FIFO sizing is enabled.
++	 * 16 to 32768 (default 1024)
++	 */
++	int32_t host_perio_tx_fifo_size;
++
++	/** The maximum transfer size supported in bytes.
++	 * 2047 to 65,535  (default 65,535)
++	 */
++	int32_t max_transfer_size;
++
++	/** The maximum number of packets in a transfer.
++	 * 15 to 511  (default 511)
++	 */
++	int32_t max_packet_count;
++
++	/** The number of host channel registers to use.
++	 * 1 to 16 (default 12)
++	 * Note: The FPGA configuration supports a maximum of 12 host channels.
++	 */
++	int32_t host_channels;
++
++	/** The number of endpoints in addition to EP0 available for device
++	 * mode operations.
++	 * 1 to 15 (default 6 IN and OUT)
++	 * Note: The FPGA configuration supports a maximum of 6 IN and OUT
++	 * endpoints in addition to EP0.
++	 */
++	int32_t dev_endpoints;
++
++	/**
++	 * Specifies the type of PHY interface to use. By default, the driver
++	 * will automatically detect the phy_type.
++	 *
++	 * 0 - Full Speed PHY
++	 * 1 - UTMI+ (default)
++	 * 2 - ULPI
++	 */
++	int32_t phy_type;
++
++	/**
++	 * Specifies the UTMI+ Data Width. This parameter is
++	 * applicable for a PHY_TYPE of UTMI+ or ULPI. (For a ULPI
++	 * PHY_TYPE, this parameter indicates the data width between
++	 * the MAC and the ULPI Wrapper.) Also, this parameter is
++	 * applicable only if the OTG_HSPHY_WIDTH cC parameter was set
++	 * to "8 and 16 bits", meaning that the core has been
++	 * configured to work at either data path width.
++	 *
++	 * 8 or 16 bits (default 16)
++	 */
++	int32_t phy_utmi_width;
++
++	/**
++	 * Specifies whether the ULPI operates at double or single
++	 * data rate. This parameter is only applicable if PHY_TYPE is
++	 * ULPI.
++	 *
++	 * 0 - single data rate ULPI interface with 8 bit wide data
++	 * bus (default)
++	 * 1 - double data rate ULPI interface with 4 bit wide data
++	 * bus
++	 */
++	int32_t phy_ulpi_ddr;
++
++	/**
++	 * Specifies whether to use the internal or external supply to
++	 * drive the vbus with a ULPI phy.
++	 */
++	int32_t phy_ulpi_ext_vbus;
++
++	/**
++	 * Specifies whether to use the I2C interface for a full speed PHY. This
++	 * parameter is only applicable if PHY_TYPE is FS.
++	 * 0 - No (default)
++	 * 1 - Yes
++	 */
++	int32_t i2c_enable;
++
++	int32_t ulpi_fs_ls;
++
++	int32_t ts_dline;
++
++	/**
++	 * Specifies whether dedicated transmit FIFOs are
++	 * enabled for non-periodic IN endpoints in device mode.
++	 * 0 - No
++	 * 1 - Yes
++	 */
++	int32_t en_multiple_tx_fifo;
++
++	/** Number of 4-byte words in each of the Tx FIFOs in device
++	 * mode when dynamic FIFO sizing is enabled.
++	 * 4 to 768 (default 256)
++	 */
++	uint32_t dev_tx_fifo_size[MAX_TX_FIFOS];
++
++	/** Thresholding enable flag-
++	 * bit 0 - enable non-ISO Tx thresholding
++	 * bit 1 - enable ISO Tx thresholding
++	 * bit 2 - enable Rx thresholding
++	 */
++	uint32_t thr_ctl;
++
++	/** Thresholding length for Tx
++	 *	FIFOs in 32 bit DWORDs
++	 */
++	uint32_t tx_thr_length;
++
++	/** Thresholding length for Rx
++	 *	FIFOs in 32 bit DWORDs
++	 */
++	uint32_t rx_thr_length;
++
++	/**
++	 * Specifies whether LPM (Link Power Management) support is enabled
++	 */
++	int32_t lpm_enable;
++
++	/** Per Transfer Interrupt
++	 *	mode enable flag
++	 * 1 - Enabled
++	 * 0 - Disabled
++	 */
++	int32_t pti_enable;
++
++	/** Multi Processor Interrupt
++	 *	mode enable flag
++	 * 1 - Enabled
++	 * 0 - Disabled
++	 */
++	int32_t mpi_enable;
++
++	/** IS_USB Capability
++	 * 1 - Enabled
++	 * 0 - Disabled
++	 */
++	int32_t ic_usb_cap;
++
++	/** AHB Threshold Ratio
++	 * 2'b00 AHB Threshold = 	MAC Threshold
++	 * 2'b01 AHB Threshold = 1/2 	MAC Threshold
++	 * 2'b10 AHB Threshold = 1/4	MAC Threshold
++	 * 2'b11 AHB Threshold = 1/8	MAC Threshold
++	 */
++	int32_t ahb_thr_ratio;
++
++	/** ADP Support
++	 * 1 - Enabled
++	 * 0 - Disabled
++	 */
++	int32_t adp_supp_enable;
++
++	/** HFIR Reload Control
++	 * 0 - The HFIR cannot be reloaded dynamically.
++	 * 1 - Allow dynamic reloading of the HFIR register during runtime.
++	 */
++	int32_t reload_ctl;
++
++	/** DCFG: Enable device Out NAK
++	 * 0 - The core does not set NAK after Bulk Out transfer complete.
++	 * 1 - The core sets NAK after Bulk OUT transfer complete.
++	 */
++	int32_t dev_out_nak;
++
++	/** DCFG: Enable Continue on BNA
++	 * After receiving a BNA interrupt the core disables the endpoint. When
++	 * the endpoint is re-enabled by the application, the core starts
++	 * processing
++	 * 0 - from the DOEPDMA descriptor
++	 * 1 - from the descriptor which received the BNA.
++	 */
++	int32_t cont_on_bna;
++
++	/** GAHBCFG: AHB Single Support
++	 * When programmed, this bit enables SINGLE transfers for the remainder
++	 * data of a transfer in DMA mode of operation.
++	 * 0 - the remainder data will be sent using INCR burst size.
++	 * 1 - the remainder data will be sent using SINGLE burst size.
++	 */
++	int32_t ahb_single;
++
++	/** Core Power down mode
++	 * 0 - No Power Down is enabled
++	 * 1 - Reserved
++	 * 2 - Complete Power Down (Hibernation)
++	 */
++	int32_t power_down;
++
++	/** OTG revision supported
++	 * 0 - OTG 1.3 revision
++	 * 1 - OTG 2.0 revision
++	 */
++	int32_t otg_ver;
++
++} dwc_otg_core_params_t;
++
++#ifdef DEBUG
++struct dwc_otg_core_if;
++typedef struct hc_xfer_info {
++	struct dwc_otg_core_if *core_if;
++	dwc_hc_t *hc;
++} hc_xfer_info_t;
++#endif
++
++typedef struct ep_xfer_info {
++	struct dwc_otg_core_if *core_if;
++	dwc_ep_t *ep;
++	uint8_t state;
++} ep_xfer_info_t;
++/*
++ * Device States
++ */
++typedef enum dwc_otg_lx_state {
++	/** On state */
++	DWC_OTG_L0,
++	/** LPM sleep state*/
++	DWC_OTG_L1,
++	/** USB suspend state*/
++	DWC_OTG_L2,
++	/** Off state*/
++	DWC_OTG_L3
++} dwc_otg_lx_state_e;
++
++struct dwc_otg_global_regs_backup {
++	uint32_t gotgctl_local;
++	uint32_t gintmsk_local;
++	uint32_t gahbcfg_local;
++	uint32_t gusbcfg_local;
++	uint32_t grxfsiz_local;
++	uint32_t gnptxfsiz_local;
++#ifdef CONFIG_USB_DWC_OTG_LPM
++	uint32_t glpmcfg_local;
++#endif
++	uint32_t gi2cctl_local;
++	uint32_t hptxfsiz_local;
++	uint32_t pcgcctl_local;
++	uint32_t gdfifocfg_local;
++	uint32_t dtxfsiz_local[MAX_EPS_CHANNELS];
++	uint32_t gpwrdn_local;
++	uint32_t xhib_pcgcctl;
++	uint32_t xhib_gpwrdn;
++};
++
++struct dwc_otg_host_regs_backup {
++	uint32_t hcfg_local;
++	uint32_t haintmsk_local;
++	uint32_t hcintmsk_local[MAX_EPS_CHANNELS];
++	uint32_t hprt0_local;
++	uint32_t hfir_local;
++};
++
++struct dwc_otg_dev_regs_backup {
++	uint32_t dcfg;
++	uint32_t dctl;
++	uint32_t daintmsk;
++	uint32_t diepmsk;
++	uint32_t doepmsk;
++	uint32_t diepctl[MAX_EPS_CHANNELS];
++	uint32_t dieptsiz[MAX_EPS_CHANNELS];
++	uint32_t diepdma[MAX_EPS_CHANNELS];
++};
++/**
++ * The <code>dwc_otg_core_if</code> structure contains information needed to manage
++ * the DWC_otg controller acting in either host or device mode. It
++ * represents the programming view of the controller as a whole.
++ */
++struct dwc_otg_core_if {
++	/** Parameters that define how the core should be configured.*/
++	dwc_otg_core_params_t *core_params;
++
++	/** Core Global registers starting at offset 000h. */
++	dwc_otg_core_global_regs_t *core_global_regs;
++
++	/** Device-specific information */
++	dwc_otg_dev_if_t *dev_if;
++	/** Host-specific information */
++	dwc_otg_host_if_t *host_if;
++
++	/** Value from SNPSID register */
++	uint32_t snpsid;
++
++	/*
++	 * Set to 1 if the core PHY interface bits in USBCFG have been
++	 * initialized.
++	 */
++	uint8_t phy_init_done;
++
++	/*
++	 * SRP Success flag, set by srp success interrupt in FS I2C mode
++	 */
++	uint8_t srp_success;
++	uint8_t srp_timer_started;
++	/** Timer for SRP. If it expires before SRP is successful,
++	 * clear the SRP. */
++	dwc_timer_t *srp_timer;
++
++#ifdef DWC_DEV_SRPCAP
++	/* This timer is needed to power on the hibernated host core if SRP is
++	 * not initiated on a connected SRP-capable device within a limited
++	 * period of time.
++	 */
++	uint8_t pwron_timer_started;
++	dwc_timer_t *pwron_timer;
++#endif
++	/* Common configuration information */
++	/** Power and Clock Gating Control Register */
++	volatile uint32_t *pcgcctl;
++#define DWC_OTG_PCGCCTL_OFFSET 0xE00
++
++	/** Push/pop addresses for endpoints or host channels.*/
++	uint32_t *data_fifo[MAX_EPS_CHANNELS];
++#define DWC_OTG_DATA_FIFO_OFFSET 0x1000
++#define DWC_OTG_DATA_FIFO_SIZE 0x1000
++
++	/** Total RAM for FIFOs (Bytes) */
++	uint16_t total_fifo_size;
++	/** Size of Rx FIFO (Bytes) */
++	uint16_t rx_fifo_size;
++	/** Size of Non-periodic Tx FIFO (Bytes) */
++	uint16_t nperio_tx_fifo_size;
++
++	/** 1 if DMA is enabled, 0 otherwise. */
++	uint8_t dma_enable;
++
++	/** 1 if DMA descriptor is enabled, 0 otherwise. */
++	uint8_t dma_desc_enable;
++
++	/** 1 if PTI Enhancement mode is enabled, 0 otherwise. */
++	uint8_t pti_enh_enable;
++
++	/** 1 if MPI Enhancement mode is enabled, 0 otherwise. */
++	uint8_t multiproc_int_enable;
++
++	/** 1 if dedicated Tx FIFOs are enabled, 0 otherwise. */
++	uint8_t en_multiple_tx_fifo;
++
++	/** Set to 1 if multiple packets of a high-bandwidth transfer are in
++	 * the process of being queued */
++	uint8_t queuing_high_bandwidth;
++
++	/** Hardware Configuration -- stored here for convenience.*/
++	hwcfg1_data_t hwcfg1;
++	hwcfg2_data_t hwcfg2;
++	hwcfg3_data_t hwcfg3;
++	hwcfg4_data_t hwcfg4;
++	fifosize_data_t hptxfsiz;
++
++	/** Host and Device Configuration -- stored here for convenience.*/
++	hcfg_data_t hcfg;
++	dcfg_data_t dcfg;
++
++	/** The operational state. During transitions
++	 * (a_host => a_peripheral and b_device => b_host) this may not
++	 * match the core, but allows the software to determine
++	 * transitions.
++	 */
++	uint8_t op_state;
++
++	/**
++	 * Set to 1 if the HCD needs to be restarted on a session request
++	 * interrupt. This is required if no connector ID status change has
++	 * occurred since the HCD was last disconnected.
++	 */
++	uint8_t restart_hcd_on_session_req;
++
++	/** Operational states (op_state values) */
++	/** A-Device is a_host */
++#define A_HOST		(1)
++	/** A-Device is a_suspend */
++#define A_SUSPEND	(2)
++	/** A-Device is a_peripheral */
++#define A_PERIPHERAL	(3)
++	/** B-Device is operating as a Peripheral. */
++#define B_PERIPHERAL	(4)
++	/** B-Device is operating as a Host. */
++#define B_HOST		(5)
++
++	/** HCD callbacks */
++	struct dwc_otg_cil_callbacks *hcd_cb;
++	/** PCD callbacks */
++	struct dwc_otg_cil_callbacks *pcd_cb;
++
++	/** Device mode Periodic Tx FIFO Mask */
++	uint32_t p_tx_msk;
++	/** Device mode Tx FIFO Mask */
++	uint32_t tx_msk;
++
++	/** Workqueue object used for handling several interrupts */
++	dwc_workq_t *wq_otg;
++
++	/** Timer object used for handling "Wakeup Detected" Interrupt */
++	dwc_timer_t *wkp_timer;
++	/** These arrays are used for debug purposes for the DEV OUT NAK enhancement */
++	uint32_t start_doeptsiz_val[MAX_EPS_CHANNELS];
++	ep_xfer_info_t ep_xfer_info[MAX_EPS_CHANNELS];
++	dwc_timer_t *ep_xfer_timer[MAX_EPS_CHANNELS];
++#ifdef DEBUG
++	uint32_t start_hcchar_val[MAX_EPS_CHANNELS];
++
++	hc_xfer_info_t hc_xfer_info[MAX_EPS_CHANNELS];
++	dwc_timer_t *hc_xfer_timer[MAX_EPS_CHANNELS];
++
++	uint32_t hfnum_7_samples;
++	uint64_t hfnum_7_frrem_accum;
++	uint32_t hfnum_0_samples;
++	uint64_t hfnum_0_frrem_accum;
++	uint32_t hfnum_other_samples;
++	uint64_t hfnum_other_frrem_accum;
++#endif
++
++#ifdef DWC_UTE_CFI
++	uint16_t pwron_rxfsiz;
++	uint16_t pwron_gnptxfsiz;
++	uint16_t pwron_txfsiz[15];
++
++	uint16_t init_rxfsiz;
++	uint16_t init_gnptxfsiz;
++	uint16_t init_txfsiz[15];
++#endif
++
++	/** Lx state of device */
++	dwc_otg_lx_state_e lx_state;
++
++	/** Saved Core Global registers */
++	struct dwc_otg_global_regs_backup *gr_backup;
++	/** Saved Host registers */
++	struct dwc_otg_host_regs_backup *hr_backup;
++	/** Saved Device registers */
++	struct dwc_otg_dev_regs_backup *dr_backup;
++
++	/** Power Down Enable */
++	uint32_t power_down;
++
++	/** ADP support Enable */
++	uint32_t adp_enable;
++
++	/** ADP structure object */
++	dwc_otg_adp_t adp;
++
++	/** hibernation/suspend flag */
++	int hibernation_suspend;
++
++	/** Device mode extended hibernation flag */
++	int xhib;
++
++	/** OTG revision supported */
++	uint32_t otg_ver;
++
++	/** OTG status flag used for HNP polling */
++	uint8_t otg_sts;
++
++	/** Pointer to either hcd->lock or pcd->lock */
++	dwc_spinlock_t *lock;
++
++	/** Start predicting NextEP based on the Learning Queue if equal to 1;
++	 * also used as a counter of disabled non-periodic IN EPs */
++	uint8_t start_predict;
++
++	/** NextEp sequence, including EP0: nextep_seq[] = EP if non-periodic and
++	 * active, 0xff otherwise */
++	uint8_t nextep_seq[MAX_EPS_CHANNELS];
++
++	/** Index of the first EP in the nextep_seq array which should be re-enabled */
++	uint8_t first_in_nextep_seq;
++
++	/** Frame number when entering the ISR - needed for ISOC transfers */
++	uint32_t frame_num;
++
++};
++
++#ifdef DEBUG
++/*
++ * This function is called when a transfer times out.
++ */
++extern void hc_xfer_timeout(void *ptr);
++#endif
++
++/*
++ * This function is called when a transfer times out on an endpoint.
++ */
++extern void ep_xfer_timeout(void *ptr);
++
++/*
++ * The following functions are work functions used
++ * while handling certain interrupts.
++ */
++extern void w_conn_id_status_change(void *p);
++
++extern void w_wakeup_detected(void *p);
++
++/** Saves global register values into system memory. */
++extern int dwc_otg_save_global_regs(dwc_otg_core_if_t * core_if);
++/** Saves device register values into system memory. */
++extern int dwc_otg_save_dev_regs(dwc_otg_core_if_t * core_if);
++/** Saves host register values into system memory. */
++extern int dwc_otg_save_host_regs(dwc_otg_core_if_t * core_if);
++/** Restore global register values. */
++extern int dwc_otg_restore_global_regs(dwc_otg_core_if_t * core_if);
++/** Restore host register values. */
++extern int dwc_otg_restore_host_regs(dwc_otg_core_if_t * core_if, int reset);
++/** Restore device register values. */
++extern int dwc_otg_restore_dev_regs(dwc_otg_core_if_t * core_if,
++				    int rem_wakeup);
++extern int restore_lpm_i2c_regs(dwc_otg_core_if_t * core_if);
++extern int restore_essential_regs(dwc_otg_core_if_t * core_if, int rmode,
++				  int is_host);
++
++extern int dwc_otg_host_hibernation_restore(dwc_otg_core_if_t * core_if,
++					    int restore_mode, int reset);
++extern int dwc_otg_device_hibernation_restore(dwc_otg_core_if_t * core_if,
++					      int rem_wakeup, int reset);
++
++/*
++ * The following functions support initialization of the CIL driver component
++ * and the DWC_otg controller.
++ */
++extern void dwc_otg_core_host_init(dwc_otg_core_if_t * _core_if);
++extern void dwc_otg_core_dev_init(dwc_otg_core_if_t * _core_if);
++
++/** @name Device CIL Functions
++ * The following functions support managing the DWC_otg controller in device
++ * mode.
++ */
++/**@{*/
++extern void dwc_otg_wakeup(dwc_otg_core_if_t * _core_if);
++extern void dwc_otg_read_setup_packet(dwc_otg_core_if_t * _core_if,
++				      uint32_t * _dest);
++extern uint32_t dwc_otg_get_frame_number(dwc_otg_core_if_t * _core_if);
++extern void dwc_otg_ep0_activate(dwc_otg_core_if_t * _core_if, dwc_ep_t * _ep);
++extern void dwc_otg_ep_activate(dwc_otg_core_if_t * _core_if, dwc_ep_t * _ep);
++extern void dwc_otg_ep_deactivate(dwc_otg_core_if_t * _core_if, dwc_ep_t * _ep);
++extern void dwc_otg_ep_start_transfer(dwc_otg_core_if_t * _core_if,
++				      dwc_ep_t * _ep);
++extern void dwc_otg_ep_start_zl_transfer(dwc_otg_core_if_t * _core_if,
++					 dwc_ep_t * _ep);
++extern void dwc_otg_ep0_start_transfer(dwc_otg_core_if_t * _core_if,
++				       dwc_ep_t * _ep);
++extern void dwc_otg_ep0_continue_transfer(dwc_otg_core_if_t * _core_if,
++					  dwc_ep_t * _ep);
++extern void dwc_otg_ep_write_packet(dwc_otg_core_if_t * _core_if,
++				    dwc_ep_t * _ep, int _dma);
++extern void dwc_otg_ep_set_stall(dwc_otg_core_if_t * _core_if, dwc_ep_t * _ep);
++extern void dwc_otg_ep_clear_stall(dwc_otg_core_if_t * _core_if,
++				   dwc_ep_t * _ep);
++extern void dwc_otg_enable_device_interrupts(dwc_otg_core_if_t * _core_if);
++
++#ifdef DWC_EN_ISOC
++extern void dwc_otg_iso_ep_start_frm_transfer(dwc_otg_core_if_t * core_if,
++					      dwc_ep_t * ep);
++extern void dwc_otg_iso_ep_start_buf_transfer(dwc_otg_core_if_t * core_if,
++					      dwc_ep_t * ep);
++#endif /* DWC_EN_ISOC */
++/**@}*/
++
++/** @name Host CIL Functions
++ * The following functions support managing the DWC_otg controller in host
++ * mode.
++ */
++/**@{*/
++extern void dwc_otg_hc_init(dwc_otg_core_if_t * _core_if, dwc_hc_t * _hc);
++extern void dwc_otg_hc_halt(dwc_otg_core_if_t * _core_if,
++			    dwc_hc_t * _hc, dwc_otg_halt_status_e _halt_status);
++extern void dwc_otg_hc_cleanup(dwc_otg_core_if_t * _core_if, dwc_hc_t * _hc);
++extern void dwc_otg_hc_start_transfer(dwc_otg_core_if_t * _core_if,
++				      dwc_hc_t * _hc);
++extern int dwc_otg_hc_continue_transfer(dwc_otg_core_if_t * _core_if,
++					dwc_hc_t * _hc);
++extern void dwc_otg_hc_do_ping(dwc_otg_core_if_t * _core_if, dwc_hc_t * _hc);
++extern void dwc_otg_hc_write_packet(dwc_otg_core_if_t * _core_if,
++				    dwc_hc_t * _hc);
++extern void dwc_otg_enable_host_interrupts(dwc_otg_core_if_t * _core_if);
++extern void dwc_otg_disable_host_interrupts(dwc_otg_core_if_t * _core_if);
++
++extern void dwc_otg_hc_start_transfer_ddma(dwc_otg_core_if_t * core_if,
++					   dwc_hc_t * hc);
++
++extern uint32_t calc_frame_interval(dwc_otg_core_if_t * core_if);
++
++/* Macro used to clear one channel interrupt */
++#define clear_hc_int(_hc_regs_, _intr_) \
++do { \
++	hcint_data_t hcint_clear = {.d32 = 0}; \
++	hcint_clear.b._intr_ = 1; \
++	DWC_WRITE_REG32(&(_hc_regs_)->hcint, hcint_clear.d32); \
++} while (0)
++
++/*
++ * Macro used to disable one channel interrupt. Channel interrupts are
++ * disabled when the channel is halted or released by the interrupt handler.
++ * There is no need to handle further interrupts of that type until the
++ * channel is re-assigned. In fact, subsequent handling may cause crashes
++ * because the channel structures are cleaned up when the channel is released.
++ */
++#define disable_hc_int(_hc_regs_, _intr_) \
++do { \
++	hcintmsk_data_t hcintmsk = {.d32 = 0}; \
++	hcintmsk.b._intr_ = 1; \
++	DWC_MODIFY_REG32(&(_hc_regs_)->hcintmsk, hcintmsk.d32, 0); \
++} while (0)
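++
++/*
++ * Illustrative sketch (not part of the original driver sources): a typical
++ * host-channel halt handler would acknowledge and then mask the "channel
++ * halted" interrupt using the two macros above, roughly as follows:
++ *
++ *	clear_hc_int(hc_regs, chhltd);
++ *	disable_hc_int(hc_regs, chhltd);
++ *
++ * The name passed as _intr_ must be a bit field of hcint_data_t /
++ * hcintmsk_data_t (here chhltd, the channel-halted bit).
++ */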
++
++/**
++ * This function reads HPRT0 in preparation for modifying it. It keeps the
++ * write-clear (WC) bits at 0 so that, if they are read as 1, they won't be
++ * cleared when the value is written back.
++ */
++static inline uint32_t dwc_otg_read_hprt0(dwc_otg_core_if_t * _core_if)
++{
++	hprt0_data_t hprt0;
++	hprt0.d32 = DWC_READ_REG32(_core_if->host_if->hprt0);
++	hprt0.b.prtena = 0;
++	hprt0.b.prtconndet = 0;
++	hprt0.b.prtenchng = 0;
++	hprt0.b.prtovrcurrchng = 0;
++	return hprt0.d32;
++}
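++
++/*
++ * Illustrative note (not part of the original sources): callers use the
++ * helper above for read-modify-write sequences on HPRT0, e.g. to power the
++ * port:
++ *
++ *	hprt0_data_t hprt0;
++ *	hprt0.d32 = dwc_otg_read_hprt0(core_if);
++ *	hprt0.b.prtpwr = 1;
++ *	DWC_WRITE_REG32(core_if->host_if->hprt0, hprt0.d32);
++ *
++ * The same pattern appears in dwc_otg_handle_session_req_intr() further
++ * below.
++ */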
++
++/**@}*/
++
++/** @name Common CIL Functions
++ * The following functions support managing the DWC_otg controller in either
++ * device or host mode.
++ */
++/**@{*/
++
++extern void dwc_otg_read_packet(dwc_otg_core_if_t * core_if,
++				uint8_t * dest, uint16_t bytes);
++
++extern void dwc_otg_flush_tx_fifo(dwc_otg_core_if_t * _core_if, const int _num);
++extern void dwc_otg_flush_rx_fifo(dwc_otg_core_if_t * _core_if);
++extern void dwc_otg_core_reset(dwc_otg_core_if_t * _core_if);
++
++/**
++ * This function returns the Core Interrupt register (GINTSTS) masked by GINTMSK.
++ */
++static inline uint32_t dwc_otg_read_core_intr(dwc_otg_core_if_t * core_if)
++{
++	return (DWC_READ_REG32(&core_if->core_global_regs->gintsts) &
++		DWC_READ_REG32(&core_if->core_global_regs->gintmsk));
++}
++
++/**
++ * This function returns the OTG Interrupt register.
++ */
++static inline uint32_t dwc_otg_read_otg_intr(dwc_otg_core_if_t * core_if)
++{
++	return (DWC_READ_REG32(&core_if->core_global_regs->gotgint));
++}
++
++/**
++ * This function reads the Device All Endpoints Interrupt register and
++ * returns the IN endpoint interrupt bits.
++ */
++static inline uint32_t dwc_otg_read_dev_all_in_ep_intr(dwc_otg_core_if_t *
++						       core_if)
++{
++
++	uint32_t v;
++
++	if (core_if->multiproc_int_enable) {
++		v = DWC_READ_REG32(&core_if->dev_if->
++				   dev_global_regs->deachint) &
++		    DWC_READ_REG32(&core_if->
++				   dev_if->dev_global_regs->deachintmsk);
++	} else {
++		v = DWC_READ_REG32(&core_if->dev_if->dev_global_regs->daint) &
++		    DWC_READ_REG32(&core_if->dev_if->dev_global_regs->daintmsk);
++	}
++	return (v & 0xffff);
++}
++
++/**
++ * This function reads the Device All Endpoints Interrupt register and
++ * returns the OUT endpoint interrupt bits.
++ */
++static inline uint32_t dwc_otg_read_dev_all_out_ep_intr(dwc_otg_core_if_t *
++							core_if)
++{
++	uint32_t v;
++
++	if (core_if->multiproc_int_enable) {
++		v = DWC_READ_REG32(&core_if->dev_if->
++				   dev_global_regs->deachint) &
++		    DWC_READ_REG32(&core_if->
++				   dev_if->dev_global_regs->deachintmsk);
++	} else {
++		v = DWC_READ_REG32(&core_if->dev_if->dev_global_regs->daint) &
++		    DWC_READ_REG32(&core_if->dev_if->dev_global_regs->daintmsk);
++	}
++
++	return ((v & 0xffff0000) >> 16);
++}
++
++/**
++ * This function returns the Device IN EP Interrupt register
++ */
++static inline uint32_t dwc_otg_read_dev_in_ep_intr(dwc_otg_core_if_t * core_if,
++						   dwc_ep_t * ep)
++{
++	dwc_otg_dev_if_t *dev_if = core_if->dev_if;
++	uint32_t v, msk, emp;
++
++	if (core_if->multiproc_int_enable) {
++		msk =
++		    DWC_READ_REG32(&dev_if->
++				   dev_global_regs->diepeachintmsk[ep->num]);
++		emp =
++		    DWC_READ_REG32(&dev_if->
++				   dev_global_regs->dtknqr4_fifoemptymsk);
++		msk |= ((emp >> ep->num) & 0x1) << 7;
++		v = DWC_READ_REG32(&dev_if->in_ep_regs[ep->num]->diepint) & msk;
++	} else {
++		msk = DWC_READ_REG32(&dev_if->dev_global_regs->diepmsk);
++		emp =
++		    DWC_READ_REG32(&dev_if->
++				   dev_global_regs->dtknqr4_fifoemptymsk);
++		msk |= ((emp >> ep->num) & 0x1) << 7;
++		v = DWC_READ_REG32(&dev_if->in_ep_regs[ep->num]->diepint) & msk;
++	}
++
++	return v;
++}
++
++/**
++ * This function returns the Device OUT EP Interrupt register
++ */
++static inline uint32_t dwc_otg_read_dev_out_ep_intr(dwc_otg_core_if_t *
++						    _core_if, dwc_ep_t * _ep)
++{
++	dwc_otg_dev_if_t *dev_if = _core_if->dev_if;
++	uint32_t v;
++	doepmsk_data_t msk = {.d32 = 0 };
++
++	if (_core_if->multiproc_int_enable) {
++		msk.d32 =
++		    DWC_READ_REG32(&dev_if->
++				   dev_global_regs->doepeachintmsk[_ep->num]);
++		if (_core_if->pti_enh_enable) {
++			msk.b.pktdrpsts = 1;
++		}
++		v = DWC_READ_REG32(&dev_if->
++				   out_ep_regs[_ep->num]->doepint) & msk.d32;
++	} else {
++		msk.d32 = DWC_READ_REG32(&dev_if->dev_global_regs->doepmsk);
++		if (_core_if->pti_enh_enable) {
++			msk.b.pktdrpsts = 1;
++		}
++		v = DWC_READ_REG32(&dev_if->
++				   out_ep_regs[_ep->num]->doepint) & msk.d32;
++	}
++	return v;
++}
++
++/**
++ * This function returns the Host All Channel Interrupt register
++ */
++static inline uint32_t dwc_otg_read_host_all_channels_intr(dwc_otg_core_if_t *
++							   _core_if)
++{
++	return (DWC_READ_REG32(&_core_if->host_if->host_global_regs->haint));
++}
++
++static inline uint32_t dwc_otg_read_host_channel_intr(dwc_otg_core_if_t *
++						      _core_if, dwc_hc_t * _hc)
++{
++	return (DWC_READ_REG32
++		(&_core_if->host_if->hc_regs[_hc->hc_num]->hcint));
++}
++
++/**
++ * This function returns the mode of operation, host or device.
++ *
++ * @return 0 - Device Mode, 1 - Host Mode
++ */
++static inline uint32_t dwc_otg_mode(dwc_otg_core_if_t * _core_if)
++{
++	return (DWC_READ_REG32(&_core_if->core_global_regs->gintsts) & 0x1);
++}
++
++/**@}*/
++
++/**
++ * DWC_otg CIL callback structure. This structure allows the HCD and
++ * PCD to register functions used for starting and stopping the PCD
++ * and HCD for role changes on a DRD (dual-role device).
++ */
++typedef struct dwc_otg_cil_callbacks {
++	/** Start function for role change */
++	int (*start) (void *_p);
++	/** Stop Function for role change */
++	int (*stop) (void *_p);
++	/** Disconnect Function for role change */
++	int (*disconnect) (void *_p);
++	/** Resume/Remote wakeup Function */
++	int (*resume_wakeup) (void *_p);
++	/** Suspend function */
++	int (*suspend) (void *_p);
++	/** Session Start (SRP) */
++	int (*session_start) (void *_p);
++#ifdef CONFIG_USB_DWC_OTG_LPM
++	/** Sleep (switch to L0 state) */
++	int (*sleep) (void *_p);
++#endif
++	/** Pointer passed to start() and stop() */
++	void *p;
++} dwc_otg_cil_callbacks_t;
++
++extern void dwc_otg_cil_register_pcd_callbacks(dwc_otg_core_if_t * _core_if,
++					       dwc_otg_cil_callbacks_t * _cb,
++					       void *_p);
++extern void dwc_otg_cil_register_hcd_callbacks(dwc_otg_core_if_t * _core_if,
++					       dwc_otg_cil_callbacks_t * _cb,
++					       void *_p);
++
++void dwc_otg_initiate_srp(dwc_otg_core_if_t * core_if);
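++
++/*
++ * Illustrative sketch (an assumption, not taken from the original sources):
++ * the HCD registers its callbacks once during start-up, roughly as follows.
++ * The callback names (dwc_otg_hcd_start_cb, ...) and the hcd pointer are
++ * hypothetical placeholders:
++ *
++ *	static dwc_otg_cil_callbacks_t hcd_cil_callbacks = {
++ *		.start = dwc_otg_hcd_start_cb,
++ *		.stop = dwc_otg_hcd_stop_cb,
++ *		.disconnect = dwc_otg_hcd_disconnect_cb,
++ *	};
++ *
++ *	dwc_otg_cil_register_hcd_callbacks(core_if, &hcd_cil_callbacks, hcd);
++ *
++ * The cil_hcd_*() helpers below then dispatch through these pointers.
++ */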
++
++//////////////////////////////////////////////////////////////////////
++/** Start the HCD.  Helper function for using the HCD callbacks.
++ *
++ * @param core_if Programming view of DWC_otg controller.
++ */
++static inline void cil_hcd_start(dwc_otg_core_if_t * core_if)
++{
++	if (core_if->hcd_cb && core_if->hcd_cb->start) {
++		core_if->hcd_cb->start(core_if->hcd_cb->p);
++	}
++}
++
++/** Stop the HCD.  Helper function for using the HCD callbacks.
++ *
++ * @param core_if Programming view of DWC_otg controller.
++ */
++static inline void cil_hcd_stop(dwc_otg_core_if_t * core_if)
++{
++	if (core_if->hcd_cb && core_if->hcd_cb->stop) {
++		core_if->hcd_cb->stop(core_if->hcd_cb->p);
++	}
++}
++
++/** Disconnect the HCD.  Helper function for using the HCD callbacks.
++ *
++ * @param core_if Programming view of DWC_otg controller.
++ */
++static inline void cil_hcd_disconnect(dwc_otg_core_if_t * core_if)
++{
++	if (core_if->hcd_cb && core_if->hcd_cb->disconnect) {
++		core_if->hcd_cb->disconnect(core_if->hcd_cb->p);
++	}
++}
++
++/** Inform the HCD that a new session has begun.  Helper function for
++ * using the HCD callbacks.
++ *
++ * @param core_if Programming view of DWC_otg controller.
++ */
++static inline void cil_hcd_session_start(dwc_otg_core_if_t * core_if)
++{
++	if (core_if->hcd_cb && core_if->hcd_cb->session_start) {
++		core_if->hcd_cb->session_start(core_if->hcd_cb->p);
++	}
++}
++
++#ifdef CONFIG_USB_DWC_OTG_LPM
++/**
++ * Inform the HCD about LPM sleep.
++ * Helper function for using the HCD callbacks.
++ *
++ * @param core_if Programming view of DWC_otg controller.
++ */
++static inline void cil_hcd_sleep(dwc_otg_core_if_t * core_if)
++{
++	if (core_if->hcd_cb && core_if->hcd_cb->sleep) {
++		core_if->hcd_cb->sleep(core_if->hcd_cb->p);
++	}
++}
++#endif
++
++/** Resume the HCD.  Helper function for using the HCD callbacks.
++ *
++ * @param core_if Programming view of DWC_otg controller.
++ */
++static inline void cil_hcd_resume(dwc_otg_core_if_t * core_if)
++{
++	if (core_if->hcd_cb && core_if->hcd_cb->resume_wakeup) {
++		core_if->hcd_cb->resume_wakeup(core_if->hcd_cb->p);
++	}
++}
++
++/** Start the PCD.  Helper function for using the PCD callbacks.
++ *
++ * @param core_if Programming view of DWC_otg controller.
++ */
++static inline void cil_pcd_start(dwc_otg_core_if_t * core_if)
++{
++	if (core_if->pcd_cb && core_if->pcd_cb->start) {
++		core_if->pcd_cb->start(core_if->pcd_cb->p);
++	}
++}
++
++/** Stop the PCD.  Helper function for using the PCD callbacks.
++ *
++ * @param core_if Programming view of DWC_otg controller.
++ */
++static inline void cil_pcd_stop(dwc_otg_core_if_t * core_if)
++{
++	if (core_if->pcd_cb && core_if->pcd_cb->stop) {
++		core_if->pcd_cb->stop(core_if->pcd_cb->p);
++	}
++}
++
++/** Suspend the PCD.  Helper function for using the PCD callbacks.
++ *
++ * @param core_if Programming view of DWC_otg controller.
++ */
++static inline void cil_pcd_suspend(dwc_otg_core_if_t * core_if)
++{
++	if (core_if->pcd_cb && core_if->pcd_cb->suspend) {
++		core_if->pcd_cb->suspend(core_if->pcd_cb->p);
++	}
++}
++
++/** Resume the PCD.  Helper function for using the PCD callbacks.
++ *
++ * @param core_if Programming view of DWC_otg controller.
++ */
++static inline void cil_pcd_resume(dwc_otg_core_if_t * core_if)
++{
++	if (core_if->pcd_cb && core_if->pcd_cb->resume_wakeup) {
++		core_if->pcd_cb->resume_wakeup(core_if->pcd_cb->p);
++	}
++}
++
++//////////////////////////////////////////////////////////////////////
++
++#endif
+--- /dev/null
++++ b/drivers/usb/host/dwc_otg/dwc_otg_cil_intr.c
+@@ -0,0 +1,1594 @@
++/* ==========================================================================
++ * $File: //dwh/usb_iip/dev/software/otg/linux/drivers/dwc_otg_cil_intr.c $
++ * $Revision: #32 $
++ * $Date: 2012/08/10 $
++ * $Change: 2047372 $
++ *
++ * Synopsys HS OTG Linux Software Driver and documentation (hereinafter,
++ * "Software") is an Unsupported proprietary work of Synopsys, Inc. unless
++ * otherwise expressly agreed to in writing between Synopsys and you.
++ *
++ * The Software IS NOT an item of Licensed Software or Licensed Product under
++ * any End User Software License Agreement or Agreement for Licensed Product
++ * with Synopsys or any supplement thereto. You are permitted to use and
++ * redistribute this Software in source and binary forms, with or without
++ * modification, provided that redistributions of source code must retain this
++ * notice. You may not view, use, disclose, copy or distribute this file or
++ * any information contained herein except pursuant to this license grant from
++ * Synopsys. If you do not agree with this notice, including the disclaimer
++ * below, then you are not authorized to use the Software.
++ *
++ * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS" BASIS
++ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
++ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
++ * ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS BE LIABLE FOR ANY DIRECT,
++ * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
++ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
++ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
++ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
++ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
++ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
++ * DAMAGE.
++ * ========================================================================== */
++
++/** @file
++ *
++ * The Core Interface Layer provides basic services for accessing and
++ * managing the DWC_otg hardware. These services are used by both the
++ * Host Controller Driver and the Peripheral Controller Driver.
++ *
++ * This file contains the Common Interrupt handlers.
++ */
++#include "dwc_os.h"
++#include "dwc_otg_regs.h"
++#include "dwc_otg_cil.h"
++#include "dwc_otg_driver.h"
++#include "dwc_otg_pcd.h"
++#include "dwc_otg_hcd.h"
++
++#ifdef DEBUG
++inline const char *op_state_str(dwc_otg_core_if_t * core_if)
++{
++	return (core_if->op_state == A_HOST ? "a_host" :
++		(core_if->op_state == A_SUSPEND ? "a_suspend" :
++		 (core_if->op_state == A_PERIPHERAL ? "a_peripheral" :
++		  (core_if->op_state == B_PERIPHERAL ? "b_peripheral" :
++		   (core_if->op_state == B_HOST ? "b_host" : "unknown")))));
++}
++#endif
++
++/** This function logs a debug message for the Mode Mismatch Interrupt
++ * and clears it.
++ *
++ * @param core_if Programming view of DWC_otg controller.
++ */
++int32_t dwc_otg_handle_mode_mismatch_intr(dwc_otg_core_if_t * core_if)
++{
++	gintsts_data_t gintsts;
++	DWC_WARN("Mode Mismatch Interrupt: currently in %s mode\n",
++		 dwc_otg_mode(core_if) ? "Host" : "Device");
++
++	/* Clear interrupt */
++	gintsts.d32 = 0;
++	gintsts.b.modemismatch = 1;
++	DWC_WRITE_REG32(&core_if->core_global_regs->gintsts, gintsts.d32);
++	return 1;
++}
++
++/**
++ * This function handles the OTG Interrupts. It reads the OTG
++ * Interrupt Register (GOTGINT) to determine what interrupt has
++ * occurred.
++ *
++ * @param core_if Programming view of DWC_otg controller.
++ */
++int32_t dwc_otg_handle_otg_intr(dwc_otg_core_if_t * core_if)
++{
++	dwc_otg_core_global_regs_t *global_regs = core_if->core_global_regs;
++	gotgint_data_t gotgint;
++	gotgctl_data_t gotgctl;
++	gintmsk_data_t gintmsk;
++	gpwrdn_data_t gpwrdn;
++
++	gotgint.d32 = DWC_READ_REG32(&global_regs->gotgint);
++	gotgctl.d32 = DWC_READ_REG32(&global_regs->gotgctl);
++	DWC_DEBUGPL(DBG_CIL, "++OTG Interrupt gotgint=%0x [%s]\n", gotgint.d32,
++		    op_state_str(core_if));
++
++	if (gotgint.b.sesenddet) {
++		DWC_DEBUGPL(DBG_ANY, " ++OTG Interrupt: "
++			    "Session End Detected++ (%s)\n",
++			    op_state_str(core_if));
++		gotgctl.d32 = DWC_READ_REG32(&global_regs->gotgctl);
++
++		if (core_if->op_state == B_HOST) {
++			cil_pcd_start(core_if);
++			core_if->op_state = B_PERIPHERAL;
++		} else {
++			/* If not B_HOST and Device HNP still set. HNP
++			 * Did not succeed!*/
++			if (gotgctl.b.devhnpen) {
++				DWC_DEBUGPL(DBG_ANY, "Session End Detected\n");
++				__DWC_ERROR("Device Not Connected/Responding!\n");
++			}
++
++			/* If Session End Detected the B-Cable has
++			 * been disconnected. */
++			/* Reset PCD and Gadget driver to a
++			 * clean state. */
++			core_if->lx_state = DWC_OTG_L0;
++			DWC_SPINUNLOCK(core_if->lock);
++			cil_pcd_stop(core_if);
++			DWC_SPINLOCK(core_if->lock);
++
++			if (core_if->adp_enable) {
++				if (core_if->power_down == 2) {
++					gpwrdn.d32 = 0;
++					gpwrdn.b.pwrdnswtch = 1;
++					DWC_MODIFY_REG32(&core_if->
++							 core_global_regs->
++							 gpwrdn, gpwrdn.d32, 0);
++				}
++
++				gpwrdn.d32 = 0;
++				gpwrdn.b.pmuintsel = 1;
++				gpwrdn.b.pmuactv = 1;
++				DWC_MODIFY_REG32(&core_if->core_global_regs->
++						 gpwrdn, 0, gpwrdn.d32);
++
++				dwc_otg_adp_sense_start(core_if);
++			}
++		}
++
++		gotgctl.d32 = 0;
++		gotgctl.b.devhnpen = 1;
++		DWC_MODIFY_REG32(&global_regs->gotgctl, gotgctl.d32, 0);
++	}
++	if (gotgint.b.sesreqsucstschng) {
++		DWC_DEBUGPL(DBG_ANY, " ++OTG Interrupt: "
++			    "Session Reqeust Success Status Change++\n");
++		gotgctl.d32 = DWC_READ_REG32(&global_regs->gotgctl);
++		if (gotgctl.b.sesreqscs) {
++
++			if ((core_if->core_params->phy_type ==
++			     DWC_PHY_TYPE_PARAM_FS) && (core_if->core_params->i2c_enable)) {
++				core_if->srp_success = 1;
++			} else {
++				DWC_SPINUNLOCK(core_if->lock);
++				cil_pcd_resume(core_if);
++				DWC_SPINLOCK(core_if->lock);
++				/* Clear Session Request */
++				gotgctl.d32 = 0;
++				gotgctl.b.sesreq = 1;
++				DWC_MODIFY_REG32(&global_regs->gotgctl,
++						 gotgctl.d32, 0);
++			}
++		}
++	}
++	if (gotgint.b.hstnegsucstschng) {
++		/* Print statements during the HNP interrupt handling
++		 * can cause it to fail.*/
++		gotgctl.d32 = DWC_READ_REG32(&global_regs->gotgctl);
++		/* Workaround for 3.00a - HW is not setting cur_mode; sometimes
++		 * even this does not help */
++		if (core_if->snpsid >= OTG_CORE_REV_3_00a)
++			dwc_udelay(100);
++		if (gotgctl.b.hstnegscs) {
++			if (dwc_otg_is_host_mode(core_if)) {
++				core_if->op_state = B_HOST;
++				/*
++				 * Need to disable SOF interrupt immediately.
++				 * When switching from device to host, the PCD
++				 * interrupt handler won't handle the
++				 * interrupt if host mode is already set. The
++				 * HCD interrupt handler won't get called if
++				 * the HCD state is HALT. This means that the
++				 * interrupt does not get handled and Linux
++				 * complains loudly.
++				 */
++				gintmsk.d32 = 0;
++				gintmsk.b.sofintr = 1;
++				DWC_MODIFY_REG32(&global_regs->gintmsk,
++						 gintmsk.d32, 0);
++				/* Call callback function with spin lock released */
++				DWC_SPINUNLOCK(core_if->lock);
++				cil_pcd_stop(core_if);
++				/*
++				 * Initialize the Core for Host mode.
++				 */
++				cil_hcd_start(core_if);
++				DWC_SPINLOCK(core_if->lock);
++				core_if->op_state = B_HOST;
++			}
++		} else {
++			gotgctl.d32 = 0;
++			gotgctl.b.hnpreq = 1;
++			gotgctl.b.devhnpen = 1;
++			DWC_MODIFY_REG32(&global_regs->gotgctl, gotgctl.d32, 0);
++			DWC_DEBUGPL(DBG_ANY, "HNP Failed\n");
++			__DWC_ERROR("Device Not Connected/Responding\n");
++		}
++	}
++	if (gotgint.b.hstnegdet) {
++		/* The disconnect interrupt is set at the same time as
++		 * Host Negotiation Detected.  During the mode
++		 * switch all interrupts are cleared so the disconnect
++		 * interrupt handler will not get executed.
++		 */
++		DWC_DEBUGPL(DBG_ANY, " ++OTG Interrupt: "
++			    "Host Negotiation Detected++ (%s)\n",
++			    (dwc_otg_is_host_mode(core_if) ? "Host" :
++			     "Device"));
++		if (dwc_otg_is_device_mode(core_if)) {
++			DWC_DEBUGPL(DBG_ANY, "a_suspend->a_peripheral (%d)\n",
++				    core_if->op_state);
++			DWC_SPINUNLOCK(core_if->lock);
++			cil_hcd_disconnect(core_if);
++			cil_pcd_start(core_if);
++			DWC_SPINLOCK(core_if->lock);
++			core_if->op_state = A_PERIPHERAL;
++		} else {
++			/*
++			 * Need to disable SOF interrupt immediately. When
++			 * switching from device to host, the PCD interrupt
++			 * handler won't handle the interrupt if host mode is
++			 * already set. The HCD interrupt handler won't get
++			 * called if the HCD state is HALT. This means that
++			 * the interrupt does not get handled and Linux
++			 * complains loudly.
++			 */
++			gintmsk.d32 = 0;
++			gintmsk.b.sofintr = 1;
++			DWC_MODIFY_REG32(&global_regs->gintmsk, gintmsk.d32, 0);
++			DWC_SPINUNLOCK(core_if->lock);
++			cil_pcd_stop(core_if);
++			cil_hcd_start(core_if);
++			DWC_SPINLOCK(core_if->lock);
++			core_if->op_state = A_HOST;
++		}
++	}
++	if (gotgint.b.adevtoutchng) {
++		DWC_DEBUGPL(DBG_ANY, " ++OTG Interrupt: "
++			    "A-Device Timeout Change++\n");
++	}
++	if (gotgint.b.debdone) {
++		DWC_DEBUGPL(DBG_ANY, " ++OTG Interrupt: " "Debounce Done++\n");
++	}
++
++	/* Clear GOTGINT */
++	DWC_WRITE_REG32(&core_if->core_global_regs->gotgint, gotgint.d32);
++
++	return 1;
++}
++
++void w_conn_id_status_change(void *p)
++{
++	dwc_otg_core_if_t *core_if = p;
++	uint32_t count = 0;
++	gotgctl_data_t gotgctl = {.d32 = 0 };
++
++	gotgctl.d32 = DWC_READ_REG32(&core_if->core_global_regs->gotgctl);
++	DWC_DEBUGPL(DBG_CIL, "gotgctl=%0x\n", gotgctl.d32);
++	DWC_DEBUGPL(DBG_CIL, "gotgctl.b.conidsts=%d\n", gotgctl.b.conidsts);
++
++	/* B-Device connector (Device Mode) */
++	if (gotgctl.b.conidsts) {
++		/* Wait for switch to device mode. */
++		while (!dwc_otg_is_device_mode(core_if)) {
++			DWC_PRINTF("Waiting for Peripheral Mode, Mode=%s\n",
++				   (dwc_otg_is_host_mode(core_if) ? "Host" :
++				    "Peripheral"));
++			dwc_mdelay(100);
++			if (++count > 10000)
++				break;
++		}
++		DWC_ASSERT(++count < 10000,
++			   "Connection id status change timed out");
++		core_if->op_state = B_PERIPHERAL;
++		dwc_otg_core_init(core_if);
++		dwc_otg_enable_global_interrupts(core_if);
++		cil_pcd_start(core_if);
++	} else {
++		/* A-Device connector (Host Mode) */
++		while (!dwc_otg_is_host_mode(core_if)) {
++			DWC_PRINTF("Waiting for Host Mode, Mode=%s\n",
++				   (dwc_otg_is_host_mode(core_if) ? "Host" :
++				    "Peripheral"));
++			dwc_mdelay(100);
++			if (++count > 10000)
++				break;
++		}
++		DWC_ASSERT(++count < 10000,
++			   "Connection id status change timed out");
++		core_if->op_state = A_HOST;
++		/*
++		 * Initialize the Core for Host mode.
++		 */
++		dwc_otg_core_init(core_if);
++		dwc_otg_enable_global_interrupts(core_if);
++		cil_hcd_start(core_if);
++	}
++}
++
++/**
++ * This function handles the Connector ID Status Change Interrupt.  It
++ * reads the OTG Control Register (GOTGCTL) to determine whether this
++ * is a Device to Host Mode transition or a Host Mode to Device
++ * Transition.
++ *
++ * This only occurs when the cable is connected/removed from the PHY
++ * connector.
++ *
++ * @param core_if Programming view of DWC_otg controller.
++ */
++int32_t dwc_otg_handle_conn_id_status_change_intr(dwc_otg_core_if_t * core_if)
++{
++
++	/*
++	 * Need to disable SOF interrupt immediately. If switching from device
++	 * to host, the PCD interrupt handler won't handle the interrupt if
++	 * host mode is already set. The HCD interrupt handler won't get
++	 * called if the HCD state is HALT. This means that the interrupt does
++	 * not get handled and Linux complains loudly.
++	 */
++	gintmsk_data_t gintmsk = {.d32 = 0 };
++	gintsts_data_t gintsts = {.d32 = 0 };
++
++	gintmsk.b.sofintr = 1;
++	DWC_MODIFY_REG32(&core_if->core_global_regs->gintmsk, gintmsk.d32, 0);
++
++	DWC_DEBUGPL(DBG_CIL,
++		    " ++Connector ID Status Change Interrupt++  (%s)\n",
++		    (dwc_otg_is_host_mode(core_if) ? "Host" : "Device"));
++
++	DWC_SPINUNLOCK(core_if->lock);
++
++	/*
++	 * Need to schedule a work item, as there are possible DELAY function
++	 * calls. Release the lock before scheduling the workq, as it holds the
++	 * spinlock during scheduling.
++	 */
++
++	DWC_WORKQ_SCHEDULE(core_if->wq_otg, w_conn_id_status_change,
++			   core_if, "connection id status change");
++	DWC_SPINLOCK(core_if->lock);
++
++	/* Set flag and clear interrupt */
++	gintsts.b.conidstschng = 1;
++	DWC_WRITE_REG32(&core_if->core_global_regs->gintsts, gintsts.d32);
++
++	return 1;
++}
++
++/**
++ * This interrupt indicates that a device is initiating the Session
++ * Request Protocol to request the host to turn on bus power so a new
++ * session can begin. The handler responds by turning on bus power. If
++ * the DWC_otg controller is in low power mode, the handler brings the
++ * controller out of low power mode before turning on bus power.
++ *
++ * @param core_if Programming view of DWC_otg controller.
++ */
++int32_t dwc_otg_handle_session_req_intr(dwc_otg_core_if_t * core_if)
++{
++	gintsts_data_t gintsts;
++
++#ifndef DWC_HOST_ONLY
++	DWC_DEBUGPL(DBG_ANY, "++Session Request Interrupt++\n");
++
++	if (dwc_otg_is_device_mode(core_if)) {
++		DWC_PRINTF("SRP: Device mode\n");
++	} else {
++		hprt0_data_t hprt0;
++		DWC_PRINTF("SRP: Host mode\n");
++
++		/* Turn on the port power bit. */
++		hprt0.d32 = dwc_otg_read_hprt0(core_if);
++		hprt0.b.prtpwr = 1;
++		DWC_WRITE_REG32(core_if->host_if->hprt0, hprt0.d32);
++
++		/* Start the Connection timer so that a message can be displayed
++		 * if connect does not occur within 10 seconds. */
++		cil_hcd_session_start(core_if);
++	}
++#endif
++
++	/* Clear interrupt */
++	gintsts.d32 = 0;
++	gintsts.b.sessreqintr = 1;
++	DWC_WRITE_REG32(&core_if->core_global_regs->gintsts, gintsts.d32);
++
++	return 1;
++}
++
++void w_wakeup_detected(void *p)
++{
++	dwc_otg_core_if_t *core_if = (dwc_otg_core_if_t *) p;
++	/*
++	 * Clear the Resume after 70ms. (Need 20 ms minimum. Use 70 ms
++	 * so that OPT tests pass with all PHYs).
++	 */
++	hprt0_data_t hprt0 = {.d32 = 0 };
++#if 0
++	pcgcctl_data_t pcgcctl = {.d32 = 0 };
++	/* Restart the Phy Clock */
++	pcgcctl.b.stoppclk = 1;
++	DWC_MODIFY_REG32(core_if->pcgcctl, pcgcctl.d32, 0);
++	dwc_udelay(10);
++#endif //0
++	hprt0.d32 = dwc_otg_read_hprt0(core_if);
++	DWC_DEBUGPL(DBG_ANY, "Resume: HPRT0=%0x\n", hprt0.d32);
++//      dwc_mdelay(70);
++	hprt0.b.prtres = 0;	/* Resume */
++	DWC_WRITE_REG32(core_if->host_if->hprt0, hprt0.d32);
++	DWC_DEBUGPL(DBG_ANY, "Clear Resume: HPRT0=%0x\n",
++		    DWC_READ_REG32(core_if->host_if->hprt0));
++
++	cil_hcd_resume(core_if);
++
++	/** Change to L0 state*/
++	core_if->lx_state = DWC_OTG_L0;
++}
++
++/**
++ * This interrupt indicates that the DWC_otg controller has detected a
++ * resume or remote wakeup sequence. If the DWC_otg controller is in
++ * low power mode, the handler must bring the controller out of low
++ * power mode. The controller automatically begins resume
++ * signaling. The handler schedules a time to stop resume signaling.
++ */
++int32_t dwc_otg_handle_wakeup_detected_intr(dwc_otg_core_if_t * core_if)
++{
++	gintsts_data_t gintsts;
++
++	DWC_DEBUGPL(DBG_ANY,
++		    "++Resume and Remote Wakeup Detected Interrupt++\n");
++
++	DWC_PRINTF("%s lxstate = %d\n", __func__, core_if->lx_state);
++
++	if (dwc_otg_is_device_mode(core_if)) {
++		dctl_data_t dctl = {.d32 = 0 };
++		DWC_DEBUGPL(DBG_PCD, "DSTS=0x%0x\n",
++			    DWC_READ_REG32(&core_if->dev_if->dev_global_regs->
++					   dsts));
++		if (core_if->lx_state == DWC_OTG_L2) {
++#ifdef PARTIAL_POWER_DOWN
++			if (core_if->hwcfg4.b.power_optimiz) {
++				pcgcctl_data_t power = {.d32 = 0 };
++
++				power.d32 = DWC_READ_REG32(core_if->pcgcctl);
++				DWC_DEBUGPL(DBG_CIL, "PCGCCTL=%0x\n",
++					    power.d32);
++
++				power.b.stoppclk = 0;
++				DWC_WRITE_REG32(core_if->pcgcctl, power.d32);
++
++				power.b.pwrclmp = 0;
++				DWC_WRITE_REG32(core_if->pcgcctl, power.d32);
++
++				power.b.rstpdwnmodule = 0;
++				DWC_WRITE_REG32(core_if->pcgcctl, power.d32);
++			}
++#endif
++			/* Clear the Remote Wakeup Signaling */
++			dctl.b.rmtwkupsig = 1;
++			DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->
++					 dctl, dctl.d32, 0);
++
++			DWC_SPINUNLOCK(core_if->lock);
++			if (core_if->pcd_cb && core_if->pcd_cb->resume_wakeup) {
++				core_if->pcd_cb->resume_wakeup(core_if->pcd_cb->p);
++			}
++			DWC_SPINLOCK(core_if->lock);
++		} else {
++			glpmcfg_data_t lpmcfg;
++			lpmcfg.d32 =
++			    DWC_READ_REG32(&core_if->core_global_regs->glpmcfg);
++			lpmcfg.b.hird_thres &= (~(1 << 4));
++			DWC_WRITE_REG32(&core_if->core_global_regs->glpmcfg,
++					lpmcfg.d32);
++		}
++		/** Change to L0 state*/
++		core_if->lx_state = DWC_OTG_L0;
++	} else {
++		if (core_if->lx_state != DWC_OTG_L1) {
++			pcgcctl_data_t pcgcctl = {.d32 = 0 };
++
++			/* Restart the Phy Clock */
++			pcgcctl.b.stoppclk = 1;
++			DWC_MODIFY_REG32(core_if->pcgcctl, pcgcctl.d32, 0);
++			DWC_TIMER_SCHEDULE(core_if->wkp_timer, 71);
++		} else {
++			/** Change to L0 state*/
++			core_if->lx_state = DWC_OTG_L0;
++		}
++	}
++
++	/* Clear interrupt */
++	gintsts.d32 = 0;
++	gintsts.b.wkupintr = 1;
++	DWC_WRITE_REG32(&core_if->core_global_regs->gintsts, gintsts.d32);
++
++	return 1;
++}
++
++/**
++ * This interrupt indicates that the Wakeup Logic has detected a
++ * Device disconnect.
++ */
++static int32_t dwc_otg_handle_pwrdn_disconnect_intr(dwc_otg_core_if_t *core_if)
++{
++	gpwrdn_data_t gpwrdn = { .d32 = 0 };
++	gpwrdn_data_t gpwrdn_temp = { .d32 = 0 };
++	gpwrdn_temp.d32 = DWC_READ_REG32(&core_if->core_global_regs->gpwrdn);
++
++	DWC_PRINTF("%s called\n", __FUNCTION__);
++
++	if (!core_if->hibernation_suspend) {
++		DWC_PRINTF("Already exited from Hibernation\n");
++		return 1;
++	}
++
++	/* Switch on the voltage to the core */
++	gpwrdn.b.pwrdnswtch = 1;
++	DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
++	dwc_udelay(10);
++
++	/* Reset the core */
++	gpwrdn.d32 = 0;
++	gpwrdn.b.pwrdnrstn = 1;
++	DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
++	dwc_udelay(10);
++
++	/* Disable power clamps*/
++	gpwrdn.d32 = 0;
++	gpwrdn.b.pwrdnclmp = 1;
++	DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
++
++	/* Remove reset the core signal */
++	gpwrdn.d32 = 0;
++	gpwrdn.b.pwrdnrstn = 1;
++	DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, 0, gpwrdn.d32);
++	dwc_udelay(10);
++
++	/* Disable PMU interrupt */
++	gpwrdn.d32 = 0;
++	gpwrdn.b.pmuintsel = 1;
++	DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
++
++	core_if->hibernation_suspend = 0;
++
++	/* Disable PMU */
++	gpwrdn.d32 = 0;
++	gpwrdn.b.pmuactv = 1;
++	DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
++	dwc_udelay(10);
++
++	if (gpwrdn_temp.b.idsts) {
++		core_if->op_state = B_PERIPHERAL;
++		dwc_otg_core_init(core_if);
++		dwc_otg_enable_global_interrupts(core_if);
++		cil_pcd_start(core_if);
++	} else {
++		core_if->op_state = A_HOST;
++		dwc_otg_core_init(core_if);
++		dwc_otg_enable_global_interrupts(core_if);
++		cil_hcd_start(core_if);
++	}
++
++	return 1;
++}
++
++/**
++ * This interrupt indicates that the Wakeup Logic has detected a
++ * remote wakeup sequence.
++ */
++static int32_t dwc_otg_handle_pwrdn_wakeup_detected_intr(dwc_otg_core_if_t * core_if)
++{
++	gpwrdn_data_t gpwrdn = {.d32 = 0 };
++	DWC_DEBUGPL(DBG_ANY,
++		    "++Powerdown Remote Wakeup Detected Interrupt++\n");
++
++	if (!core_if->hibernation_suspend) {
++		DWC_PRINTF("Already exited from Hibernation\n");
++		return 1;
++	}
++
++	gpwrdn.d32 = DWC_READ_REG32(&core_if->core_global_regs->gpwrdn);
++	if (gpwrdn.b.idsts) {	// Device Mode
++		if ((core_if->power_down == 2)
++		    && (core_if->hibernation_suspend == 1)) {
++			dwc_otg_device_hibernation_restore(core_if, 0, 0);
++		}
++	} else {
++		if ((core_if->power_down == 2)
++		    && (core_if->hibernation_suspend == 1)) {
++			dwc_otg_host_hibernation_restore(core_if, 1, 0);
++		}
++	}
++	return 1;
++}
++
++static int32_t dwc_otg_handle_pwrdn_idsts_change(dwc_otg_device_t *otg_dev)
++{
++	gpwrdn_data_t gpwrdn = {.d32 = 0 };
++	gpwrdn_data_t gpwrdn_temp = {.d32 = 0 };
++	dwc_otg_core_if_t *core_if = otg_dev->core_if;
++
++	DWC_DEBUGPL(DBG_ANY, "%s called\n", __FUNCTION__);
++	gpwrdn_temp.d32 = DWC_READ_REG32(&core_if->core_global_regs->gpwrdn);
++	if (core_if->power_down == 2) {
++		if (!core_if->hibernation_suspend) {
++			DWC_PRINTF("Already exited from Hibernation\n");
++			return 1;
++		}
++		DWC_DEBUGPL(DBG_ANY, "Exit from hibernation on ID sts change\n");
++		/* Switch on the voltage to the core */
++		gpwrdn.b.pwrdnswtch = 1;
++		DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
++		dwc_udelay(10);
++
++		/* Reset the core */
++		gpwrdn.d32 = 0;
++		gpwrdn.b.pwrdnrstn = 1;
++		DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
++		dwc_udelay(10);
++
++		/* Disable power clamps */
++		gpwrdn.d32 = 0;
++		gpwrdn.b.pwrdnclmp = 1;
++		DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
++
++		/* Remove the reset signal from the core */
++		gpwrdn.d32 = 0;
++		gpwrdn.b.pwrdnrstn = 1;
++		DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, 0, gpwrdn.d32);
++		dwc_udelay(10);
++
++		/* Disable PMU interrupt */
++		gpwrdn.d32 = 0;
++		gpwrdn.b.pmuintsel = 1;
++		DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
++
++		/*Indicates that we are exiting from hibernation */
++		core_if->hibernation_suspend = 0;
++
++		/* Disable PMU */
++		gpwrdn.d32 = 0;
++		gpwrdn.b.pmuactv = 1;
++		DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
++		dwc_udelay(10);
++
++		gpwrdn.d32 = core_if->gr_backup->gpwrdn_local;
++		if (gpwrdn.b.dis_vbus == 1) {
++			gpwrdn.d32 = 0;
++			gpwrdn.b.dis_vbus = 1;
++			DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
++		}
++
++		if (gpwrdn_temp.b.idsts) {
++			core_if->op_state = B_PERIPHERAL;
++			dwc_otg_core_init(core_if);
++			dwc_otg_enable_global_interrupts(core_if);
++			cil_pcd_start(core_if);
++		} else {
++			core_if->op_state = A_HOST;
++			dwc_otg_core_init(core_if);
++			dwc_otg_enable_global_interrupts(core_if);
++			cil_hcd_start(core_if);
++		}
++	}
++
++	if (core_if->adp_enable) {
++		uint8_t is_host = 0;
++		DWC_SPINUNLOCK(core_if->lock);
++		/* Change the core_if's lock to the hcd/pcd lock depending on mode? */
++#ifndef DWC_HOST_ONLY
++		if (gpwrdn_temp.b.idsts)
++			core_if->lock = otg_dev->pcd->lock;
++#endif
++#ifndef DWC_DEVICE_ONLY
++		if (!gpwrdn_temp.b.idsts) {
++				core_if->lock = otg_dev->hcd->lock;
++				is_host = 1;
++		}
++#endif
++		DWC_PRINTF("RESTART ADP\n");
++		if (core_if->adp.probe_enabled)
++			dwc_otg_adp_probe_stop(core_if);
++		if (core_if->adp.sense_enabled)
++			dwc_otg_adp_sense_stop(core_if);
++		if (core_if->adp.sense_timer_started)
++			DWC_TIMER_CANCEL(core_if->adp.sense_timer);
++		if (core_if->adp.vbuson_timer_started)
++			DWC_TIMER_CANCEL(core_if->adp.vbuson_timer);
++		core_if->adp.probe_timer_values[0] = -1;
++		core_if->adp.probe_timer_values[1] = -1;
++		core_if->adp.sense_timer_started = 0;
++		core_if->adp.vbuson_timer_started = 0;
++		core_if->adp.probe_counter = 0;
++		core_if->adp.gpwrdn = 0;
++
++		/* Disable PMU and restart ADP */
++		gpwrdn_temp.d32 = 0;
++		gpwrdn_temp.b.pmuactv = 1;
++		gpwrdn_temp.b.pmuintsel = 1;
++		DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
++		DWC_PRINTF("Check point 1\n");
++		dwc_mdelay(110);
++		dwc_otg_adp_start(core_if, is_host);
++		DWC_SPINLOCK(core_if->lock);
++	}
++
++
++	return 1;
++}
++
++static int32_t dwc_otg_handle_pwrdn_session_change(dwc_otg_core_if_t * core_if)
++{
++	gpwrdn_data_t gpwrdn = {.d32 = 0 };
++	int32_t otg_cap_param = core_if->core_params->otg_cap;
++	DWC_DEBUGPL(DBG_ANY, "%s called\n", __FUNCTION__);
++
++	gpwrdn.d32 = DWC_READ_REG32(&core_if->core_global_regs->gpwrdn);
++	if (core_if->power_down == 2) {
++		if (!core_if->hibernation_suspend) {
++			DWC_PRINTF("Already exited from Hibernation\n");
++			return 1;
++		}
++
++		if ((otg_cap_param != DWC_OTG_CAP_PARAM_HNP_SRP_CAPABLE ||
++			 otg_cap_param != DWC_OTG_CAP_PARAM_SRP_ONLY_CAPABLE) &&
++			gpwrdn.b.bsessvld == 0) {
++			/* Save gpwrdn register for further usage if stschng interrupt */
++			core_if->gr_backup->gpwrdn_local =
++				DWC_READ_REG32(&core_if->core_global_regs->gpwrdn);
++			/*Exit from ISR and wait for stschng interrupt with bsessvld = 1 */
++			return 1;
++		}
++
++		/* Switch on the voltage to the core */
++		gpwrdn.d32 = 0;
++		gpwrdn.b.pwrdnswtch = 1;
++		DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
++		dwc_udelay(10);
++
++		/* Reset the core */
++		gpwrdn.d32 = 0;
++		gpwrdn.b.pwrdnrstn = 1;
++		DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
++		dwc_udelay(10);
++
++		/* Disable power clamps */
++		gpwrdn.d32 = 0;
++		gpwrdn.b.pwrdnclmp = 1;
++		DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
++
++		/* Remove the reset signal from the core */
++		gpwrdn.d32 = 0;
++		gpwrdn.b.pwrdnrstn = 1;
++		DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, 0, gpwrdn.d32);
++		dwc_udelay(10);
++
++		/* Disable PMU interrupt */
++		gpwrdn.d32 = 0;
++		gpwrdn.b.pmuintsel = 1;
++		DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
++		dwc_udelay(10);
++
++		/*Indicates that we are exiting from hibernation */
++		core_if->hibernation_suspend = 0;
++
++		/* Disable PMU */
++		gpwrdn.d32 = 0;
++		gpwrdn.b.pmuactv = 1;
++		DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
++		dwc_udelay(10);
++
++		core_if->op_state = B_PERIPHERAL;
++		dwc_otg_core_init(core_if);
++		dwc_otg_enable_global_interrupts(core_if);
++		cil_pcd_start(core_if);
++
++		if (otg_cap_param == DWC_OTG_CAP_PARAM_HNP_SRP_CAPABLE ||
++			otg_cap_param == DWC_OTG_CAP_PARAM_SRP_ONLY_CAPABLE) {
++			/*
++			 * Initiate SRP after initial ADP probe.
++			 */
++			dwc_otg_initiate_srp(core_if);
++		}
++	}
++
++	return 1;
++}
++/**
++ * This interrupt indicates that the Wakeup Logic has detected a
++ * status change either on IDDIG or BSessVld.
++ */
++static uint32_t dwc_otg_handle_pwrdn_stschng_intr(dwc_otg_device_t *otg_dev)
++{
++	int retval = 0;
++	gpwrdn_data_t gpwrdn = {.d32 = 0 };
++	gpwrdn_data_t gpwrdn_temp = {.d32 = 0 };
++	dwc_otg_core_if_t *core_if = otg_dev->core_if;
++
++	DWC_PRINTF("%s called\n", __FUNCTION__);
++
++	if (core_if->power_down == 2) {
++		if (core_if->hibernation_suspend <= 0) {
++			DWC_PRINTF("Already exited from Hibernation\n");
++			return 1;
++		} else
++			gpwrdn_temp.d32 = core_if->gr_backup->gpwrdn_local;
++
++	} else {
++		gpwrdn_temp.d32 = core_if->adp.gpwrdn;
++	}
++
++	gpwrdn.d32 = DWC_READ_REG32(&core_if->core_global_regs->gpwrdn);
++
++	if (gpwrdn.b.idsts ^ gpwrdn_temp.b.idsts) {
++		retval = dwc_otg_handle_pwrdn_idsts_change(otg_dev);
++	} else if (gpwrdn.b.bsessvld ^ gpwrdn_temp.b.bsessvld) {
++		retval = dwc_otg_handle_pwrdn_session_change(core_if);
++	}
++
++	return retval;
++}
++
++/**
++ * This interrupt indicates that the Wakeup Logic has detected a
++ * SRP.
++ */
++static int32_t dwc_otg_handle_pwrdn_srp_intr(dwc_otg_core_if_t * core_if)
++{
++	gpwrdn_data_t gpwrdn = {.d32 = 0 };
++
++	DWC_PRINTF("%s called\n", __FUNCTION__);
++
++	if (!core_if->hibernation_suspend) {
++		DWC_PRINTF("Already exited from Hibernation\n");
++		return 1;
++	}
++#ifdef DWC_DEV_SRPCAP
++	if (core_if->pwron_timer_started) {
++		core_if->pwron_timer_started = 0;
++		DWC_TIMER_CANCEL(core_if->pwron_timer);
++	}
++#endif
++
++	/* Switch on the voltage to the core */
++	gpwrdn.b.pwrdnswtch = 1;
++	DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
++	dwc_udelay(10);
++
++	/* Reset the core */
++	gpwrdn.d32 = 0;
++	gpwrdn.b.pwrdnrstn = 1;
++	DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
++	dwc_udelay(10);
++
++	/* Disable power clamps */
++	gpwrdn.d32 = 0;
++	gpwrdn.b.pwrdnclmp = 1;
++	DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
++
++	/* Remove the reset signal from the core */
++	gpwrdn.d32 = 0;
++	gpwrdn.b.pwrdnrstn = 1;
++	DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, 0, gpwrdn.d32);
++	dwc_udelay(10);
++
++	/* Disable PMU interrupt */
++	gpwrdn.d32 = 0;
++	gpwrdn.b.pmuintsel = 1;
++	DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
++
++	/* Indicates that we are exiting from hibernation */
++	core_if->hibernation_suspend = 0;
++
++	/* Disable PMU */
++	gpwrdn.d32 = 0;
++	gpwrdn.b.pmuactv = 1;
++	DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
++	dwc_udelay(10);
++
++	/* Program Disable VBUS to 0 */
++	gpwrdn.d32 = 0;
++	gpwrdn.b.dis_vbus = 1;
++	DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
++
++	/*Initialize the core as Host */
++	core_if->op_state = A_HOST;
++	dwc_otg_core_init(core_if);
++	dwc_otg_enable_global_interrupts(core_if);
++	cil_hcd_start(core_if);
++
++	return 1;
++}
++
++/** This interrupt indicates that the restore command after Hibernation
++ * was completed by the core. */
++int32_t dwc_otg_handle_restore_done_intr(dwc_otg_core_if_t * core_if)
++{
++	pcgcctl_data_t pcgcctl;
++	DWC_DEBUGPL(DBG_ANY, "++Restore Done Interrupt++\n");
++
++	//TODO De-assert restore signal. 8.a
++	pcgcctl.d32 = DWC_READ_REG32(core_if->pcgcctl);
++	if (pcgcctl.b.restoremode == 1) {
++		gintmsk_data_t gintmsk = {.d32 = 0 };
++		/*
++		 * If restore mode is Remote Wakeup,
++		 * unmask Remote Wakeup interrupt.
++		 */
++		gintmsk.b.wkupintr = 1;
++		DWC_MODIFY_REG32(&core_if->core_global_regs->gintmsk,
++				 0, gintmsk.d32);
++	}
++
++	return 1;
++}
++
++/**
++ * This interrupt indicates that a device has been disconnected from
++ * the root port.
++ */
++int32_t dwc_otg_handle_disconnect_intr(dwc_otg_core_if_t * core_if)
++{
++	gintsts_data_t gintsts;
++
++	DWC_DEBUGPL(DBG_ANY, "++Disconnect Detected Interrupt++ (%s) %s\n",
++		    (dwc_otg_is_host_mode(core_if) ? "Host" : "Device"),
++		    op_state_str(core_if));
++
++/** @todo Consolidate this if statement. */
++#ifndef DWC_HOST_ONLY
++	if (core_if->op_state == B_HOST) {
++		/* If in device mode Disconnect and stop the HCD, then
++		 * start the PCD. */
++		DWC_SPINUNLOCK(core_if->lock);
++		cil_hcd_disconnect(core_if);
++		cil_pcd_start(core_if);
++		DWC_SPINLOCK(core_if->lock);
++		core_if->op_state = B_PERIPHERAL;
++	} else if (dwc_otg_is_device_mode(core_if)) {
++		gotgctl_data_t gotgctl = {.d32 = 0 };
++		gotgctl.d32 =
++		    DWC_READ_REG32(&core_if->core_global_regs->gotgctl);
++		if (gotgctl.b.hstsethnpen == 1) {
++			/* Do nothing, if HNP in process the OTG
++			 * interrupt "Host Negotiation Detected"
++			 * interrupt will do the mode switch.
++			 */
++		} else if (gotgctl.b.devhnpen == 0) {
++			/* If in device mode Disconnect and stop the HCD, then
++			 * start the PCD. */
++			DWC_SPINUNLOCK(core_if->lock);
++			cil_hcd_disconnect(core_if);
++			cil_pcd_start(core_if);
++			DWC_SPINLOCK(core_if->lock);
++			core_if->op_state = B_PERIPHERAL;
++		} else {
++			DWC_DEBUGPL(DBG_ANY, "!a_peripheral && !devhnpen\n");
++		}
++	} else {
++		if (core_if->op_state == A_HOST) {
++			/* A-Cable still connected but device disconnected. */
++			cil_hcd_disconnect(core_if);
++			if (core_if->adp_enable) {
++				gpwrdn_data_t gpwrdn = { .d32 = 0 };
++				cil_hcd_stop(core_if);
++				/* Enable Power Down Logic */
++				gpwrdn.b.pmuintsel = 1;
++				gpwrdn.b.pmuactv = 1;
++				DWC_MODIFY_REG32(&core_if->core_global_regs->
++						 gpwrdn, 0, gpwrdn.d32);
++				dwc_otg_adp_probe_start(core_if);
++
++				/* Power off the core */
++				if (core_if->power_down == 2) {
++					gpwrdn.d32 = 0;
++					gpwrdn.b.pwrdnswtch = 1;
++					DWC_MODIFY_REG32
++					    (&core_if->core_global_regs->gpwrdn,
++					     gpwrdn.d32, 0);
++				}
++			}
++		}
++	}
++#endif
++	/* Change to L3(OFF) state */
++	core_if->lx_state = DWC_OTG_L3;
++
++	gintsts.d32 = 0;
++	gintsts.b.disconnect = 1;
++	DWC_WRITE_REG32(&core_if->core_global_regs->gintsts, gintsts.d32);
++	return 1;
++}
++
++/**
++ * This interrupt indicates that SUSPEND state has been detected on
++ * the USB.
++ *
++ * For HNP the USB Suspend interrupt signals the change from
++ * "a_peripheral" to "a_host".
++ *
++ * When power management is enabled the core will be put in low power
++ * mode.
++ */
++int32_t dwc_otg_handle_usb_suspend_intr(dwc_otg_core_if_t * core_if)
++{
++	dsts_data_t dsts;
++	gintsts_data_t gintsts;
++	dcfg_data_t dcfg;
++
++	DWC_DEBUGPL(DBG_ANY, "USB SUSPEND\n");
++
++	if (dwc_otg_is_device_mode(core_if)) {
++		/* Check the Device status register to determine if the Suspend
++		 * state is active. */
++		dsts.d32 =
++		    DWC_READ_REG32(&core_if->dev_if->dev_global_regs->dsts);
++		DWC_DEBUGPL(DBG_PCD, "DSTS=0x%0x\n", dsts.d32);
++		DWC_DEBUGPL(DBG_PCD, "DSTS.Suspend Status=%d "
++			    "HWCFG4.power Optimize=%d\n",
++			    dsts.b.suspsts, core_if->hwcfg4.b.power_optimiz);
++
++#ifdef PARTIAL_POWER_DOWN
++/** @todo Add a module parameter for power management. */
++
++		if (dsts.b.suspsts && core_if->hwcfg4.b.power_optimiz) {
++			pcgcctl_data_t power = {.d32 = 0 };
++			DWC_DEBUGPL(DBG_CIL, "suspend\n");
++
++			power.b.pwrclmp = 1;
++			DWC_WRITE_REG32(core_if->pcgcctl, power.d32);
++
++			power.b.rstpdwnmodule = 1;
++			DWC_MODIFY_REG32(core_if->pcgcctl, 0, power.d32);
++
++			power.b.stoppclk = 1;
++			DWC_MODIFY_REG32(core_if->pcgcctl, 0, power.d32);
++
++		} else {
++			DWC_DEBUGPL(DBG_ANY, "disconnect?\n");
++		}
++#endif
++		/* PCD callback for suspend. Release the lock inside of callback function */
++		cil_pcd_suspend(core_if);
++		if (core_if->power_down == 2)
++		{
++			dcfg.d32 = DWC_READ_REG32(&core_if->dev_if->dev_global_regs->dcfg);
++			DWC_DEBUGPL(DBG_ANY,"lx_state = %08x\n",core_if->lx_state);
++			DWC_DEBUGPL(DBG_ANY," device address = %08d\n",dcfg.b.devaddr);
++
++			if (core_if->lx_state != DWC_OTG_L3 && dcfg.b.devaddr) {
++				pcgcctl_data_t pcgcctl = {.d32 = 0 };
++				gpwrdn_data_t gpwrdn = {.d32 = 0 };
++				gusbcfg_data_t gusbcfg = {.d32 = 0 };
++
++				/* Change to L2(suspend) state */
++				core_if->lx_state = DWC_OTG_L2;
++
++				/* Clear interrupt in gintsts */
++				gintsts.d32 = 0;
++				gintsts.b.usbsuspend = 1;
++				DWC_WRITE_REG32(&core_if->core_global_regs->
++						gintsts, gintsts.d32);
++				DWC_PRINTF("Start of hibernation completed\n");
++				dwc_otg_save_global_regs(core_if);
++				dwc_otg_save_dev_regs(core_if);
++
++				gusbcfg.d32 =
++				    DWC_READ_REG32(&core_if->core_global_regs->
++						   gusbcfg);
++				if (gusbcfg.b.ulpi_utmi_sel == 1) {
++					/* ULPI interface */
++					/* Suspend the Phy Clock */
++					pcgcctl.d32 = 0;
++					pcgcctl.b.stoppclk = 1;
++					DWC_MODIFY_REG32(core_if->pcgcctl, 0,
++							 pcgcctl.d32);
++					dwc_udelay(10);
++					gpwrdn.b.pmuactv = 1;
++					DWC_MODIFY_REG32(&core_if->
++							 core_global_regs->
++							 gpwrdn, 0, gpwrdn.d32);
++				} else {
++					/* UTMI+ Interface */
++					gpwrdn.b.pmuactv = 1;
++					DWC_MODIFY_REG32(&core_if->
++							 core_global_regs->
++							 gpwrdn, 0, gpwrdn.d32);
++					dwc_udelay(10);
++					pcgcctl.b.stoppclk = 1;
++					DWC_MODIFY_REG32(core_if->pcgcctl, 0,
++							 pcgcctl.d32);
++					dwc_udelay(10);
++				}
++
++				/* Set flag to indicate that we are in hibernation */
++				core_if->hibernation_suspend = 1;
++				/* Enable interrupts from wake up logic */
++				gpwrdn.d32 = 0;
++				gpwrdn.b.pmuintsel = 1;
++				DWC_MODIFY_REG32(&core_if->core_global_regs->
++						 gpwrdn, 0, gpwrdn.d32);
++				dwc_udelay(10);
++
++				/* Unmask device mode interrupts in GPWRDN */
++				gpwrdn.d32 = 0;
++				gpwrdn.b.rst_det_msk = 1;
++				gpwrdn.b.lnstchng_msk = 1;
++				gpwrdn.b.sts_chngint_msk = 1;
++				DWC_MODIFY_REG32(&core_if->core_global_regs->
++						 gpwrdn, 0, gpwrdn.d32);
++				dwc_udelay(10);
++
++				/* Enable Power Down Clamp */
++				gpwrdn.d32 = 0;
++				gpwrdn.b.pwrdnclmp = 1;
++				DWC_MODIFY_REG32(&core_if->core_global_regs->
++						 gpwrdn, 0, gpwrdn.d32);
++				dwc_udelay(10);
++
++				/* Switch off VDD */
++				gpwrdn.d32 = 0;
++				gpwrdn.b.pwrdnswtch = 1;
++				DWC_MODIFY_REG32(&core_if->core_global_regs->
++						 gpwrdn, 0, gpwrdn.d32);
++
++				/* Save gpwrdn register for further usage if stschng interrupt */
++				core_if->gr_backup->gpwrdn_local =
++							DWC_READ_REG32(&core_if->core_global_regs->gpwrdn);
++				DWC_PRINTF("Hibernation completed\n");
++
++				return 1;
++			}
++		} else if (core_if->power_down == 3) {
++			pcgcctl_data_t pcgcctl = {.d32 = 0 };
++			dcfg.d32 = DWC_READ_REG32(&core_if->dev_if->dev_global_regs->dcfg);
++			DWC_DEBUGPL(DBG_ANY, "lx_state = %08x\n",core_if->lx_state);
++			DWC_DEBUGPL(DBG_ANY, " device address = %08d\n",dcfg.b.devaddr);
++
++			if (core_if->lx_state != DWC_OTG_L3 && dcfg.b.devaddr) {
++				DWC_DEBUGPL(DBG_ANY, "Start entering extended hibernation\n");
++				core_if->xhib = 1;
++
++				/* Clear interrupt in gintsts */
++				gintsts.d32 = 0;
++				gintsts.b.usbsuspend = 1;
++				DWC_WRITE_REG32(&core_if->core_global_regs->
++					gintsts, gintsts.d32);
++
++				dwc_otg_save_global_regs(core_if);
++				dwc_otg_save_dev_regs(core_if);
++
++				/* Wait for 10 PHY clocks */
++				dwc_udelay(10);
++
++				/* Program GPIO register while entering to xHib */
++				DWC_WRITE_REG32(&core_if->core_global_regs->ggpio, 0x1);
++
++				pcgcctl.b.enbl_extnd_hiber = 1;
++				DWC_MODIFY_REG32(core_if->pcgcctl, 0, pcgcctl.d32);
++				DWC_MODIFY_REG32(core_if->pcgcctl, 0, pcgcctl.d32);
++
++				pcgcctl.d32 = 0;
++				pcgcctl.b.extnd_hiber_pwrclmp = 1;
++				DWC_MODIFY_REG32(core_if->pcgcctl, 0, pcgcctl.d32);
++
++				pcgcctl.d32 = 0;
++				pcgcctl.b.extnd_hiber_switch = 1;
++				core_if->gr_backup->xhib_gpwrdn = DWC_READ_REG32(&core_if->core_global_regs->gpwrdn);
++				core_if->gr_backup->xhib_pcgcctl = DWC_READ_REG32(core_if->pcgcctl) | pcgcctl.d32;
++				DWC_MODIFY_REG32(core_if->pcgcctl, 0, pcgcctl.d32);
++
++				DWC_DEBUGPL(DBG_ANY, "Finished entering extended hibernation\n");
++
++				return 1;
++			}
++		}
++	} else {
++		if (core_if->op_state == A_PERIPHERAL) {
++			DWC_DEBUGPL(DBG_ANY, "a_peripheral->a_host\n");
++			/* Clear the a_peripheral flag, back to a_host. */
++			DWC_SPINUNLOCK(core_if->lock);
++			cil_pcd_stop(core_if);
++			cil_hcd_start(core_if);
++			DWC_SPINLOCK(core_if->lock);
++			core_if->op_state = A_HOST;
++		}
++	}
++
++	/* Change to L2(suspend) state */
++	core_if->lx_state = DWC_OTG_L2;
++
++	/* Clear interrupt */
++	gintsts.d32 = 0;
++	gintsts.b.usbsuspend = 1;
++	DWC_WRITE_REG32(&core_if->core_global_regs->gintsts, gintsts.d32);
++
++	return 1;
++}
++
++static int32_t dwc_otg_handle_xhib_exit_intr(dwc_otg_core_if_t * core_if)
++{
++	gpwrdn_data_t gpwrdn = {.d32 = 0 };
++	pcgcctl_data_t pcgcctl = {.d32 = 0 };
++	gahbcfg_data_t gahbcfg = {.d32 = 0 };
++
++	dwc_udelay(10);
++
++	/* Program GPIO register while exiting from xHib */
++	DWC_WRITE_REG32(&core_if->core_global_regs->ggpio, 0x0);
++
++	pcgcctl.d32 = core_if->gr_backup->xhib_pcgcctl;
++	pcgcctl.b.extnd_hiber_pwrclmp = 0;
++	DWC_WRITE_REG32(core_if->pcgcctl, pcgcctl.d32);
++	dwc_udelay(10);
++
++	gpwrdn.d32 = core_if->gr_backup->xhib_gpwrdn;
++	gpwrdn.b.restore = 1;
++	DWC_WRITE_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32);
++	dwc_udelay(10);
++
++	restore_lpm_i2c_regs(core_if);
++
++	pcgcctl.d32 = core_if->gr_backup->pcgcctl_local & (0x3FFFF << 14);
++	pcgcctl.b.max_xcvrselect = 1;
++	pcgcctl.b.ess_reg_restored = 0;
++	pcgcctl.b.extnd_hiber_switch = 0;
++	pcgcctl.b.extnd_hiber_pwrclmp = 0;
++	pcgcctl.b.enbl_extnd_hiber = 1;
++	DWC_WRITE_REG32(core_if->pcgcctl, pcgcctl.d32);
++
++	gahbcfg.d32 = core_if->gr_backup->gahbcfg_local;
++	gahbcfg.b.glblintrmsk = 1;
++	DWC_WRITE_REG32(&core_if->core_global_regs->gahbcfg, gahbcfg.d32);
++
++	DWC_WRITE_REG32(&core_if->core_global_regs->gintsts, 0xFFFFFFFF);
++	DWC_WRITE_REG32(&core_if->core_global_regs->gintmsk, 0x1 << 16);
++
++	DWC_WRITE_REG32(&core_if->core_global_regs->gusbcfg,
++			core_if->gr_backup->gusbcfg_local);
++	DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->dcfg,
++			core_if->dr_backup->dcfg);
++
++	pcgcctl.d32 = 0;
++	pcgcctl.d32 = core_if->gr_backup->pcgcctl_local & (0x3FFFF << 14);
++	pcgcctl.b.max_xcvrselect = 1;
++	pcgcctl.d32 |= 0x608;
++	DWC_WRITE_REG32(core_if->pcgcctl, pcgcctl.d32);
++	dwc_udelay(10);
++
++	pcgcctl.d32 = 0;
++	pcgcctl.d32 = core_if->gr_backup->pcgcctl_local & (0x3FFFF << 14);
++	pcgcctl.b.max_xcvrselect = 1;
++	pcgcctl.b.ess_reg_restored = 1;
++	pcgcctl.b.enbl_extnd_hiber = 1;
++	pcgcctl.b.rstpdwnmodule = 1;
++	pcgcctl.b.restoremode = 1;
++	DWC_WRITE_REG32(core_if->pcgcctl, pcgcctl.d32);
++
++	DWC_DEBUGPL(DBG_ANY, "%s called\n", __FUNCTION__);
++
++	return 1;
++}
++
++#ifdef CONFIG_USB_DWC_OTG_LPM
++/**
++ * This function handles the LPM transaction received interrupt.
++ */
++static int32_t dwc_otg_handle_lpm_intr(dwc_otg_core_if_t * core_if)
++{
++	glpmcfg_data_t lpmcfg;
++	gintsts_data_t gintsts;
++
++	if (!core_if->core_params->lpm_enable) {
++		DWC_PRINTF("Unexpected LPM interrupt\n");
++	}
++
++	lpmcfg.d32 = DWC_READ_REG32(&core_if->core_global_regs->glpmcfg);
++	DWC_PRINTF("LPM config register = 0x%08x\n", lpmcfg.d32);
++
++	if (dwc_otg_is_host_mode(core_if)) {
++		cil_hcd_sleep(core_if);
++	} else {
++		lpmcfg.b.hird_thres |= (1 << 4);
++		DWC_WRITE_REG32(&core_if->core_global_regs->glpmcfg,
++				lpmcfg.d32);
++	}
++
++	/* Examine prt_sleep_sts after TL1TokenRetry period max (10 us) */
++	dwc_udelay(10);
++	lpmcfg.d32 = DWC_READ_REG32(&core_if->core_global_regs->glpmcfg);
++	if (lpmcfg.b.prt_sleep_sts) {
++		/* Save the current state */
++		core_if->lx_state = DWC_OTG_L1;
++	}
++
++	/* Clear interrupt  */
++	gintsts.d32 = 0;
++	gintsts.b.lpmtranrcvd = 1;
++	DWC_WRITE_REG32(&core_if->core_global_regs->gintsts, gintsts.d32);
++	return 1;
++}
++#endif /* CONFIG_USB_DWC_OTG_LPM */
++
++/**
++ * This function returns the Core Interrupt register.
++ */
++static inline uint32_t dwc_otg_read_common_intr(dwc_otg_core_if_t * core_if, gintmsk_data_t *reenable_gintmsk, dwc_otg_hcd_t *hcd)
++{
++	gahbcfg_data_t gahbcfg = {.d32 = 0 };
++	gintsts_data_t gintsts;
++	gintmsk_data_t gintmsk;
++	gintmsk_data_t gintmsk_common = {.d32 = 0 };
++	gintmsk_common.b.wkupintr = 1;
++	gintmsk_common.b.sessreqintr = 1;
++	gintmsk_common.b.conidstschng = 1;
++	gintmsk_common.b.otgintr = 1;
++	gintmsk_common.b.modemismatch = 1;
++	gintmsk_common.b.disconnect = 1;
++	gintmsk_common.b.usbsuspend = 1;
++#ifdef CONFIG_USB_DWC_OTG_LPM
++	gintmsk_common.b.lpmtranrcvd = 1;
++#endif
++	gintmsk_common.b.restoredone = 1;
++	if(dwc_otg_is_device_mode(core_if))
++	{
++		/** @todo: The port interrupt occurs while in device
++		 * mode. Added code to CIL to clear the interrupt for now!
++		 */
++		gintmsk_common.b.portintr = 1;
++	}
++	gintsts.d32 = DWC_READ_REG32(&core_if->core_global_regs->gintsts);
++	gintmsk.d32 = DWC_READ_REG32(&core_if->core_global_regs->gintmsk);
++	if(fiq_enable) {
++		local_fiq_disable();
++		/* Pull in the interrupts that the FIQ has masked */
++		gintmsk.d32 |= ~(hcd->fiq_state->gintmsk_saved.d32);
++		gintmsk.d32 |= gintmsk_common.d32;
++		/* for the calling function to re-enable - have to read it here in case FIQ triggers again */
++		reenable_gintmsk->d32 = gintmsk.d32;
++		local_fiq_enable();
++	}
++
++	gahbcfg.d32 = DWC_READ_REG32(&core_if->core_global_regs->gahbcfg);
++
++#ifdef DEBUG
++	/* if any common interrupts set */
++	if (gintsts.d32 & gintmsk_common.d32) {
++		DWC_DEBUGPL(DBG_ANY, "common_intr: gintsts=%08x  gintmsk=%08x\n",
++			    gintsts.d32, gintmsk.d32);
++	}
++#endif
++	if (!fiq_enable){
++		if (gahbcfg.b.glblintrmsk)
++			return ((gintsts.d32 & gintmsk.d32) & gintmsk_common.d32);
++		else
++			return 0;
++	} else {
++		/* Our IRQ kicker is no longer the USB hardware, it's the MPHI interface.
++		 * Can't trust the global interrupt mask bit in this case.
++		 */
++		return ((gintsts.d32 & gintmsk.d32) & gintmsk_common.d32);
++	}
++
++}
++
++/* MACRO for clearing interrupt bits in GPWRDN register */
++#define CLEAR_GPWRDN_INTR(__core_if,__intr) \
++do { \
++		gpwrdn_data_t gpwrdn = {.d32=0}; \
++		gpwrdn.b.__intr = 1; \
++		DWC_MODIFY_REG32(&__core_if->core_global_regs->gpwrdn, \
++		0, gpwrdn.d32); \
++} while (0)
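++
++/*
++ * Illustrative expansion (hypothetical, documentation only): a call such as
++ * CLEAR_GPWRDN_INTR(core_if, disconn_det) is equivalent to
++ *
++ *	gpwrdn_data_t gpwrdn = { .d32 = 0 };
++ *	gpwrdn.b.disconn_det = 1;
++ *	DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, 0, gpwrdn.d32);
++ *
++ * i.e. a zeroed gpwrdn value with only the named field set is passed as the
++ * set-mask of DWC_MODIFY_REG32.
++ */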
++
++/**
++ * Common interrupt handler.
++ *
++ * The common interrupts are those that occur in both Host and Device mode.
++ * This handler handles the following interrupts:
++ * - Mode Mismatch Interrupt
++ * - Disconnect Interrupt
++ * - OTG Interrupt
++ * - Connector ID Status Change Interrupt
++ * - Session Request Interrupt.
++ * - Resume / Remote Wakeup Detected Interrupt.
++ * - LPM Transaction Received Interrupt
++ * - ADP Transaction Received Interrupt
++ *
++ */
++int32_t dwc_otg_handle_common_intr(void *dev)
++{
++	int retval = 0;
++	gintsts_data_t gintsts;
++	gintmsk_data_t gintmsk_reenable = { .d32 = 0 };
++	gpwrdn_data_t gpwrdn = {.d32 = 0 };
++	dwc_otg_device_t *otg_dev = dev;
++	dwc_otg_core_if_t *core_if = otg_dev->core_if;
++	gpwrdn.d32 = DWC_READ_REG32(&core_if->core_global_regs->gpwrdn);
++	if (dwc_otg_is_device_mode(core_if))
++		core_if->frame_num = dwc_otg_get_frame_number(core_if);
++
++	if (core_if->lock)
++		DWC_SPINLOCK(core_if->lock);
++
++	if (core_if->power_down == 3 && core_if->xhib == 1) {
++		DWC_DEBUGPL(DBG_ANY, "Exiting from xHIB state\n");
++		retval |= dwc_otg_handle_xhib_exit_intr(core_if);
++		core_if->xhib = 2;
++		if (core_if->lock)
++			DWC_SPINUNLOCK(core_if->lock);
++
++		return retval;
++	}
++
++	if (core_if->hibernation_suspend <= 0) {
++		/* read_common will have to poke the FIQ's saved mask. We must then clear this mask at the end
++		 * of this handler - god only knows why it's done like this
++		 */
++		gintsts.d32 = dwc_otg_read_common_intr(core_if, &gintmsk_reenable, otg_dev->hcd);
++
++		if (gintsts.b.modemismatch) {
++			retval |= dwc_otg_handle_mode_mismatch_intr(core_if);
++		}
++		if (gintsts.b.otgintr) {
++			retval |= dwc_otg_handle_otg_intr(core_if);
++		}
++		if (gintsts.b.conidstschng) {
++			retval |=
++			    dwc_otg_handle_conn_id_status_change_intr(core_if);
++		}
++		if (gintsts.b.disconnect) {
++			retval |= dwc_otg_handle_disconnect_intr(core_if);
++		}
++		if (gintsts.b.sessreqintr) {
++			retval |= dwc_otg_handle_session_req_intr(core_if);
++		}
++		if (gintsts.b.wkupintr) {
++			retval |= dwc_otg_handle_wakeup_detected_intr(core_if);
++		}
++		if (gintsts.b.usbsuspend) {
++			retval |= dwc_otg_handle_usb_suspend_intr(core_if);
++		}
++#ifdef CONFIG_USB_DWC_OTG_LPM
++		if (gintsts.b.lpmtranrcvd) {
++			retval |= dwc_otg_handle_lpm_intr(core_if);
++		}
++#endif
++		if (gintsts.b.restoredone) {
++			gintsts.d32 = 0;
++			if (core_if->power_down == 2)
++				core_if->hibernation_suspend = -1;
++			else if (core_if->power_down == 3 && core_if->xhib == 2) {
++				gpwrdn_data_t gpwrdn = {.d32 = 0 };
++				pcgcctl_data_t pcgcctl = {.d32 = 0 };
++				dctl_data_t dctl = {.d32 = 0 };
++
++				DWC_WRITE_REG32(&core_if->core_global_regs->
++						gintsts, 0xFFFFFFFF);
++
++				DWC_DEBUGPL(DBG_ANY,
++					    "RESTORE DONE generated\n");
++
++				gpwrdn.b.restore = 1;
++				DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
++				dwc_udelay(10);
++
++				pcgcctl.b.rstpdwnmodule = 1;
++				DWC_MODIFY_REG32(core_if->pcgcctl, pcgcctl.d32, 0);
++
++				DWC_WRITE_REG32(&core_if->core_global_regs->gusbcfg, core_if->gr_backup->gusbcfg_local);
++				DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->dcfg, core_if->dr_backup->dcfg);
++				DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->dctl, core_if->dr_backup->dctl);
++				dwc_udelay(50);
++
++				dctl.b.pwronprgdone = 1;
++				DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->dctl, 0, dctl.d32);
++				dwc_udelay(10);
++
++				dwc_otg_restore_global_regs(core_if);
++				dwc_otg_restore_dev_regs(core_if, 0);
++
++				dctl.d32 = 0;
++				dctl.b.pwronprgdone = 1;
++				DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->dctl, dctl.d32, 0);
++				dwc_udelay(10);
++
++				pcgcctl.d32 = 0;
++				pcgcctl.b.enbl_extnd_hiber = 1;
++				DWC_MODIFY_REG32(core_if->pcgcctl, pcgcctl.d32, 0);
++
++				/* The core will be in ON STATE */
++				core_if->lx_state = DWC_OTG_L0;
++				core_if->xhib = 0;
++
++				DWC_SPINUNLOCK(core_if->lock);
++				if (core_if->pcd_cb && core_if->pcd_cb->resume_wakeup) {
++					core_if->pcd_cb->resume_wakeup(core_if->pcd_cb->p);
++				}
++				DWC_SPINLOCK(core_if->lock);
++
++			}
++
++			gintsts.b.restoredone = 1;
++			DWC_WRITE_REG32(&core_if->core_global_regs->gintsts,gintsts.d32);
++			DWC_PRINTF(" --Restore done interrupt received-- \n");
++			retval |= 1;
++		}
++		if (gintsts.b.portintr && dwc_otg_is_device_mode(core_if)) {
++			/* The port interrupt occurs while in device mode with HPRT0
++			 * Port Enable/Disable.
++			 */
++			gintsts.d32 = 0;
++			gintsts.b.portintr = 1;
++			DWC_WRITE_REG32(&core_if->core_global_regs->gintsts,gintsts.d32);
++			retval |= 1;
++			gintmsk_reenable.b.portintr = 1;
++
++		}
++		/* Did we actually handle anything? if so, unmask the interrupt */
++//		fiq_print(FIQDBG_INT, otg_dev->hcd->fiq_state, "CILOUT %1d", retval);
++//		fiq_print(FIQDBG_INT, otg_dev->hcd->fiq_state, "%08x", gintsts.d32);
++//		fiq_print(FIQDBG_INT, otg_dev->hcd->fiq_state, "%08x", gintmsk_reenable.d32);
++		if (retval && fiq_enable) {
++			DWC_WRITE_REG32(&core_if->core_global_regs->gintmsk, gintmsk_reenable.d32);
++		}
++
++	} else {
++		DWC_DEBUGPL(DBG_ANY, "gpwrdn=%08x\n", gpwrdn.d32);
++
++		if (gpwrdn.b.disconn_det && gpwrdn.b.disconn_det_msk) {
++			CLEAR_GPWRDN_INTR(core_if, disconn_det);
++			if (gpwrdn.b.linestate == 0) {
++				dwc_otg_handle_pwrdn_disconnect_intr(core_if);
++			} else {
++				DWC_PRINTF("Disconnect detected while linestate is not 0\n");
++			}
++
++			retval |= 1;
++		}
++		if (gpwrdn.b.lnstschng && gpwrdn.b.lnstchng_msk) {
++			CLEAR_GPWRDN_INTR(core_if, lnstschng);
++			/* remote wakeup from hibernation */
++			if (gpwrdn.b.linestate == 2 || gpwrdn.b.linestate == 1) {
++				dwc_otg_handle_pwrdn_wakeup_detected_intr(core_if);
++			} else {
++				DWC_PRINTF("gpwrdn.linestate = %d\n", gpwrdn.b.linestate);
++			}
++			retval |= 1;
++		}
++		if (gpwrdn.b.rst_det && gpwrdn.b.rst_det_msk) {
++			CLEAR_GPWRDN_INTR(core_if, rst_det);
++			if (gpwrdn.b.linestate == 0) {
++				DWC_PRINTF("Reset detected\n");
++				retval |= dwc_otg_device_hibernation_restore(core_if, 0, 1);
++			}
++		}
++		if (gpwrdn.b.srp_det && gpwrdn.b.srp_det_msk) {
++			CLEAR_GPWRDN_INTR(core_if, srp_det);
++			dwc_otg_handle_pwrdn_srp_intr(core_if);
++			retval |= 1;
++		}
++	}
++	/* Handle ADP interrupt here */
++	if (gpwrdn.b.adp_int) {
++		DWC_PRINTF("ADP interrupt\n");
++		CLEAR_GPWRDN_INTR(core_if, adp_int);
++		dwc_otg_adp_handle_intr(core_if);
++		retval |= 1;
++	}
++	if (gpwrdn.b.sts_chngint && gpwrdn.b.sts_chngint_msk) {
++		DWC_PRINTF("STS CHNG interrupt asserted\n");
++		CLEAR_GPWRDN_INTR(core_if, sts_chngint);
++		dwc_otg_handle_pwrdn_stschng_intr(otg_dev);
++
++		retval |= 1;
++	}
++	if (core_if->lock)
++		DWC_SPINUNLOCK(core_if->lock);
++	return retval;
++}
+--- /dev/null
++++ b/drivers/usb/host/dwc_otg/dwc_otg_core_if.h
+@@ -0,0 +1,705 @@
++/* ==========================================================================
++ * $File: //dwh/usb_iip/dev/software/otg/linux/drivers/dwc_otg_core_if.h $
++ * $Revision: #13 $
++ * $Date: 2012/08/10 $
++ * $Change: 2047372 $
++ *
++ * Synopsys HS OTG Linux Software Driver and documentation (hereinafter,
++ * "Software") is an Unsupported proprietary work of Synopsys, Inc. unless
++ * otherwise expressly agreed to in writing between Synopsys and you.
++ *
++ * The Software IS NOT an item of Licensed Software or Licensed Product under
++ * any End User Software License Agreement or Agreement for Licensed Product
++ * with Synopsys or any supplement thereto. You are permitted to use and
++ * redistribute this Software in source and binary forms, with or without
++ * modification, provided that redistributions of source code must retain this
++ * notice. You may not view, use, disclose, copy or distribute this file or
++ * any information contained herein except pursuant to this license grant from
++ * Synopsys. If you do not agree with this notice, including the disclaimer
++ * below, then you are not authorized to use the Software.
++ *
++ * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS" BASIS
++ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
++ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
++ * ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS BE LIABLE FOR ANY DIRECT,
++ * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
++ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
++ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
++ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
++ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
++ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
++ * DAMAGE.
++ * ========================================================================== */
++#if !defined(__DWC_CORE_IF_H__)
++#define __DWC_CORE_IF_H__
++
++#include "dwc_os.h"
++
++/** @file
++ * This file defines DWC_OTG Core API
++ */
++
++struct dwc_otg_core_if;
++typedef struct dwc_otg_core_if dwc_otg_core_if_t;
++
++/** Maximum number of Periodic FIFOs */
++#define MAX_PERIO_FIFOS 15
++/** Maximum number of Tx FIFOs */
++#define MAX_TX_FIFOS 15
++
++/** Maximum number of Endpoints/HostChannels */
++#define MAX_EPS_CHANNELS 16
++
++extern dwc_otg_core_if_t *dwc_otg_cil_init(const uint32_t * _reg_base_addr);
++extern void dwc_otg_core_init(dwc_otg_core_if_t * _core_if);
++extern void dwc_otg_cil_remove(dwc_otg_core_if_t * _core_if);
++
++extern void dwc_otg_enable_global_interrupts(dwc_otg_core_if_t * _core_if);
++extern void dwc_otg_disable_global_interrupts(dwc_otg_core_if_t * _core_if);
++
++extern uint8_t dwc_otg_is_device_mode(dwc_otg_core_if_t * _core_if);
++extern uint8_t dwc_otg_is_host_mode(dwc_otg_core_if_t * _core_if);
++
++extern uint8_t dwc_otg_is_dma_enable(dwc_otg_core_if_t * core_if);
++
++/** This function should be called on every hardware interrupt. */
++extern int32_t dwc_otg_handle_common_intr(void *otg_dev);
++
++/** @name OTG Core Parameters */
++/** @{ */
++
++/**
++ * Specifies the OTG capabilities. The driver will automatically
++ * detect the value for this parameter if none is specified.
++ * 0 - HNP and SRP capable (default)
++ * 1 - SRP Only capable
++ * 2 - No HNP/SRP capable
++ */
++extern int dwc_otg_set_param_otg_cap(dwc_otg_core_if_t * core_if, int32_t val);
++extern int32_t dwc_otg_get_param_otg_cap(dwc_otg_core_if_t * core_if);
++#define DWC_OTG_CAP_PARAM_HNP_SRP_CAPABLE 0
++#define DWC_OTG_CAP_PARAM_SRP_ONLY_CAPABLE 1
++#define DWC_OTG_CAP_PARAM_NO_HNP_SRP_CAPABLE 2
++#define dwc_param_otg_cap_default DWC_OTG_CAP_PARAM_HNP_SRP_CAPABLE
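++
++/*
++ * Illustrative usage sketch (hypothetical, not part of the driver): platform
++ * initialisation code could restrict the core to SRP-only operation with
++ *
++ *	dwc_otg_set_param_otg_cap(core_if, DWC_OTG_CAP_PARAM_SRP_ONLY_CAPABLE);
++ *
++ * before core initialisation; dwc_otg_get_param_otg_cap() reads the current
++ * setting back.
++ */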
++
++extern int dwc_otg_set_param_opt(dwc_otg_core_if_t * core_if, int32_t val);
++extern int32_t dwc_otg_get_param_opt(dwc_otg_core_if_t * core_if);
++#define dwc_param_opt_default 1
++
++/**
++ * Specifies whether to use slave or DMA mode for accessing the data
++ * FIFOs. The driver will automatically detect the value for this
++ * parameter if none is specified.
++ * 0 - Slave
++ * 1 - DMA (default, if available)
++ */
++extern int dwc_otg_set_param_dma_enable(dwc_otg_core_if_t * core_if,
++					int32_t val);
++extern int32_t dwc_otg_get_param_dma_enable(dwc_otg_core_if_t * core_if);
++#define dwc_param_dma_enable_default 1
++
++/**
++ * When DMA mode is enabled specifies whether to use
++ * address DMA or DMA Descriptor mode for accessing the data
++ * FIFOs in device mode. The driver will automatically detect
++ * the value for this parameter if none is specified.
++ * 0 - address DMA
++ * 1 - DMA Descriptor(default, if available)
++ */
++extern int dwc_otg_set_param_dma_desc_enable(dwc_otg_core_if_t * core_if,
++					     int32_t val);
++extern int32_t dwc_otg_get_param_dma_desc_enable(dwc_otg_core_if_t * core_if);
++//#define dwc_param_dma_desc_enable_default 1
++#define dwc_param_dma_desc_enable_default 0 // Broadcom BCM2708
++
++/** The DMA Burst size (applicable only for External DMA
++ * Mode). 1, 4, 8, 16, 32, 64, 128, 256 (default 32)
++ */
++extern int dwc_otg_set_param_dma_burst_size(dwc_otg_core_if_t * core_if,
++					    int32_t val);
++extern int32_t dwc_otg_get_param_dma_burst_size(dwc_otg_core_if_t * core_if);
++#define dwc_param_dma_burst_size_default 32
++
++/**
++ * Specifies the maximum speed of operation in host and device mode.
++ * The actual speed depends on the speed of the attached device and
++ * the value of phy_type.
++ * 0 - High Speed (default)
++ * 1 - Full Speed
++ */
++extern int dwc_otg_set_param_speed(dwc_otg_core_if_t * core_if, int32_t val);
++extern int32_t dwc_otg_get_param_speed(dwc_otg_core_if_t * core_if);
++#define dwc_param_speed_default 0
++#define DWC_SPEED_PARAM_HIGH 0
++#define DWC_SPEED_PARAM_FULL 1
++
++/** Specifies whether low power mode is supported when attached
++ *	to a Full Speed or Low Speed device in host mode.
++ * 0 - Don't support low power mode (default)
++ * 1 - Support low power mode
++ */
++extern int dwc_otg_set_param_host_support_fs_ls_low_power(dwc_otg_core_if_t *
++							  core_if, int32_t val);
++extern int32_t dwc_otg_get_param_host_support_fs_ls_low_power(dwc_otg_core_if_t
++							      * core_if);
++#define dwc_param_host_support_fs_ls_low_power_default 0
++
++/** Specifies the PHY clock rate in low power mode when connected to a
++ * Low Speed device in host mode. This parameter is applicable only if
++ * HOST_SUPPORT_FS_LS_LOW_POWER is enabled. If PHY_TYPE is set to FS
++ * then defaults to 6 MHZ otherwise 48 MHZ.
++ *
++ * 0 - 48 MHz
++ * 1 - 6 MHz
++ */
++extern int dwc_otg_set_param_host_ls_low_power_phy_clk(dwc_otg_core_if_t *
++						       core_if, int32_t val);
++extern int32_t dwc_otg_get_param_host_ls_low_power_phy_clk(dwc_otg_core_if_t *
++							   core_if);
++#define dwc_param_host_ls_low_power_phy_clk_default 0
++#define DWC_HOST_LS_LOW_POWER_PHY_CLK_PARAM_48MHZ 0
++#define DWC_HOST_LS_LOW_POWER_PHY_CLK_PARAM_6MHZ 1
++
++/**
++ * 0 - Use cC FIFO size parameters
++ * 1 - Allow dynamic FIFO sizing (default)
++ */
++extern int dwc_otg_set_param_enable_dynamic_fifo(dwc_otg_core_if_t * core_if,
++						 int32_t val);
++extern int32_t dwc_otg_get_param_enable_dynamic_fifo(dwc_otg_core_if_t *
++						     core_if);
++#define dwc_param_enable_dynamic_fifo_default 1
++
++/** Total number of 4-byte words in the data FIFO memory. This
++ * memory includes the Rx FIFO, non-periodic Tx FIFO, and periodic
++ * Tx FIFOs.
++ * 32 to 32768 (default 8192)
++ * Note: The total FIFO memory depth in the FPGA configuration is 8192.
++ */
++extern int dwc_otg_set_param_data_fifo_size(dwc_otg_core_if_t * core_if,
++					    int32_t val);
++extern int32_t dwc_otg_get_param_data_fifo_size(dwc_otg_core_if_t * core_if);
++//#define dwc_param_data_fifo_size_default 8192
++#define dwc_param_data_fifo_size_default 0xFF0 // Broadcom BCM2708
++
++/** Number of 4-byte words in the Rx FIFO in device mode when dynamic
++ * FIFO sizing is enabled.
++ * 16 to 32768 (default 1064)
++ */
++extern int dwc_otg_set_param_dev_rx_fifo_size(dwc_otg_core_if_t * core_if,
++					      int32_t val);
++extern int32_t dwc_otg_get_param_dev_rx_fifo_size(dwc_otg_core_if_t * core_if);
++#define dwc_param_dev_rx_fifo_size_default 1064
++
++/** Number of 4-byte words in the non-periodic Tx FIFO in device mode
++ * when dynamic FIFO sizing is enabled.
++ * 16 to 32768 (default 1024)
++ */
++extern int dwc_otg_set_param_dev_nperio_tx_fifo_size(dwc_otg_core_if_t *
++						     core_if, int32_t val);
++extern int32_t dwc_otg_get_param_dev_nperio_tx_fifo_size(dwc_otg_core_if_t *
++							 core_if);
++#define dwc_param_dev_nperio_tx_fifo_size_default 1024
++
++/** Number of 4-byte words in each of the periodic Tx FIFOs in device
++ * mode when dynamic FIFO sizing is enabled.
++ * 4 to 768 (default 256)
++ */
++extern int dwc_otg_set_param_dev_perio_tx_fifo_size(dwc_otg_core_if_t * core_if,
++						    int32_t val, int fifo_num);
++extern int32_t dwc_otg_get_param_dev_perio_tx_fifo_size(dwc_otg_core_if_t *
++							core_if, int fifo_num);
++#define dwc_param_dev_perio_tx_fifo_size_default 256
++
++/** Number of 4-byte words in the Rx FIFO in host mode when dynamic
++ * FIFO sizing is enabled.
++ * 16 to 32768 (default 1024)
++ */
++extern int dwc_otg_set_param_host_rx_fifo_size(dwc_otg_core_if_t * core_if,
++					       int32_t val);
++extern int32_t dwc_otg_get_param_host_rx_fifo_size(dwc_otg_core_if_t * core_if);
++//#define dwc_param_host_rx_fifo_size_default 1024
++#define dwc_param_host_rx_fifo_size_default 774 // Broadcom BCM2708
++
++/** Number of 4-byte words in the non-periodic Tx FIFO in host mode
++ * when Dynamic FIFO sizing is enabled in the core.
++ * 16 to 32768 (default 1024)
++ */
++extern int dwc_otg_set_param_host_nperio_tx_fifo_size(dwc_otg_core_if_t *
++						      core_if, int32_t val);
++extern int32_t dwc_otg_get_param_host_nperio_tx_fifo_size(dwc_otg_core_if_t *
++							  core_if);
++//#define dwc_param_host_nperio_tx_fifo_size_default 1024
++#define dwc_param_host_nperio_tx_fifo_size_default 0x100 // Broadcom BCM2708
++
++/** Number of 4-byte words in the host periodic Tx FIFO when dynamic
++ * FIFO sizing is enabled.
++ * 16 to 32768 (default 1024)
++ */
++extern int dwc_otg_set_param_host_perio_tx_fifo_size(dwc_otg_core_if_t *
++						     core_if, int32_t val);
++extern int32_t dwc_otg_get_param_host_perio_tx_fifo_size(dwc_otg_core_if_t *
++							 core_if);
++//#define dwc_param_host_perio_tx_fifo_size_default 1024
++#define dwc_param_host_perio_tx_fifo_size_default 0x200 // Broadcom BCM2708
++
++/** The maximum transfer size supported in bytes.
++ * 2047 to 65,535  (default 65,535)
++ */
++extern int dwc_otg_set_param_max_transfer_size(dwc_otg_core_if_t * core_if,
++					       int32_t val);
++extern int32_t dwc_otg_get_param_max_transfer_size(dwc_otg_core_if_t * core_if);
++#define dwc_param_max_transfer_size_default 65535
++
++/** The maximum number of packets in a transfer.
++ * 15 to 511  (default 511)
++ */
++extern int dwc_otg_set_param_max_packet_count(dwc_otg_core_if_t * core_if,
++					      int32_t val);
++extern int32_t dwc_otg_get_param_max_packet_count(dwc_otg_core_if_t * core_if);
++#define dwc_param_max_packet_count_default 511
++
++/** The number of host channel registers to use.
++ * 1 to 16 (default 12)
++ * Note: The FPGA configuration supports a maximum of 12 host channels.
++ */
++extern int dwc_otg_set_param_host_channels(dwc_otg_core_if_t * core_if,
++					   int32_t val);
++extern int32_t dwc_otg_get_param_host_channels(dwc_otg_core_if_t * core_if);
++//#define dwc_param_host_channels_default 12
++#define dwc_param_host_channels_default 8 // Broadcom BCM2708
++
++/** The number of endpoints in addition to EP0 available for device
++ * mode operations.
++ * 1 to 15 (default 6 IN and OUT)
++ * Note: The FPGA configuration supports a maximum of 6 IN and OUT
++ * endpoints in addition to EP0.
++ */
++extern int dwc_otg_set_param_dev_endpoints(dwc_otg_core_if_t * core_if,
++					   int32_t val);
++extern int32_t dwc_otg_get_param_dev_endpoints(dwc_otg_core_if_t * core_if);
++#define dwc_param_dev_endpoints_default 6
++
++/**
++ * Specifies the type of PHY interface to use. By default, the driver
++ * will automatically detect the phy_type.
++ *
++ * 0 - Full Speed PHY
++ * 1 - UTMI+ (default)
++ * 2 - ULPI
++ */
++extern int dwc_otg_set_param_phy_type(dwc_otg_core_if_t * core_if, int32_t val);
++extern int32_t dwc_otg_get_param_phy_type(dwc_otg_core_if_t * core_if);
++#define DWC_PHY_TYPE_PARAM_FS 0
++#define DWC_PHY_TYPE_PARAM_UTMI 1
++#define DWC_PHY_TYPE_PARAM_ULPI 2
++#define dwc_param_phy_type_default DWC_PHY_TYPE_PARAM_UTMI
++
++/**
++ * Specifies the UTMI+ Data Width. This parameter is
++ * applicable for a PHY_TYPE of UTMI+ or ULPI. (For a ULPI
++ * PHY_TYPE, this parameter indicates the data width between
++ * the MAC and the ULPI Wrapper.) Also, this parameter is
++ * applicable only if the OTG_HSPHY_WIDTH cC parameter was set
++ * to "8 and 16 bits", meaning that the core has been
++ * configured to work at either data path width.
++ *
++ * 8 or 16 bits (default 16)
++ */
++extern int dwc_otg_set_param_phy_utmi_width(dwc_otg_core_if_t * core_if,
++					    int32_t val);
++extern int32_t dwc_otg_get_param_phy_utmi_width(dwc_otg_core_if_t * core_if);
++//#define dwc_param_phy_utmi_width_default 16
++#define dwc_param_phy_utmi_width_default 8 // Broadcom BCM2708
++
++/**
++ * Specifies whether the ULPI operates at double or single
++ * data rate. This parameter is only applicable if PHY_TYPE is
++ * ULPI.
++ *
++ * 0 - single data rate ULPI interface with 8 bit wide data
++ * bus (default)
++ * 1 - double data rate ULPI interface with 4 bit wide data
++ * bus
++ */
++extern int dwc_otg_set_param_phy_ulpi_ddr(dwc_otg_core_if_t * core_if,
++					  int32_t val);
++extern int32_t dwc_otg_get_param_phy_ulpi_ddr(dwc_otg_core_if_t * core_if);
++#define dwc_param_phy_ulpi_ddr_default 0
++
++/**
++ * Specifies whether to use the internal or external supply to
++ * drive the vbus with a ULPI phy.
++ */
++extern int dwc_otg_set_param_phy_ulpi_ext_vbus(dwc_otg_core_if_t * core_if,
++					       int32_t val);
++extern int32_t dwc_otg_get_param_phy_ulpi_ext_vbus(dwc_otg_core_if_t * core_if);
++#define DWC_PHY_ULPI_INTERNAL_VBUS 0
++#define DWC_PHY_ULPI_EXTERNAL_VBUS 1
++#define dwc_param_phy_ulpi_ext_vbus_default DWC_PHY_ULPI_INTERNAL_VBUS
++
++/**
++ * Specifies whether to use the I2C interface for full speed PHY. This
++ * parameter is only applicable if PHY_TYPE is FS.
++ * 0 - No (default)
++ * 1 - Yes
++ */
++extern int dwc_otg_set_param_i2c_enable(dwc_otg_core_if_t * core_if,
++					int32_t val);
++extern int32_t dwc_otg_get_param_i2c_enable(dwc_otg_core_if_t * core_if);
++#define dwc_param_i2c_enable_default 0
++
++extern int dwc_otg_set_param_ulpi_fs_ls(dwc_otg_core_if_t * core_if,
++					int32_t val);
++extern int32_t dwc_otg_get_param_ulpi_fs_ls(dwc_otg_core_if_t * core_if);
++#define dwc_param_ulpi_fs_ls_default 0
++
++extern int dwc_otg_set_param_ts_dline(dwc_otg_core_if_t * core_if, int32_t val);
++extern int32_t dwc_otg_get_param_ts_dline(dwc_otg_core_if_t * core_if);
++#define dwc_param_ts_dline_default 0
++
++/**
++ * Specifies whether dedicated transmit FIFOs are
++ * enabled for non periodic IN endpoints in device mode
++ * 0 - No
++ * 1 - Yes
++ */
++extern int dwc_otg_set_param_en_multiple_tx_fifo(dwc_otg_core_if_t * core_if,
++						 int32_t val);
++extern int32_t dwc_otg_get_param_en_multiple_tx_fifo(dwc_otg_core_if_t *
++						     core_if);
++#define dwc_param_en_multiple_tx_fifo_default 1
++
++/** Number of 4-byte words in each of the Tx FIFOs in device
++ * mode when dynamic FIFO sizing is enabled.
++ * 4 to 768 (default 256)
++ */
++extern int dwc_otg_set_param_dev_tx_fifo_size(dwc_otg_core_if_t * core_if,
++					      int fifo_num, int32_t val);
++extern int32_t dwc_otg_get_param_dev_tx_fifo_size(dwc_otg_core_if_t * core_if,
++						  int fifo_num);
++#define dwc_param_dev_tx_fifo_size_default 768
++
++/** Thresholding enable flag-
++ * bit 0 - enable non-ISO Tx thresholding
++ * bit 1 - enable ISO Tx thresholding
++ * bit 2 - enable Rx thresholding
++ */
++extern int dwc_otg_set_param_thr_ctl(dwc_otg_core_if_t * core_if, int32_t val);
++extern int32_t dwc_otg_get_thr_ctl(dwc_otg_core_if_t * core_if, int fifo_num);
++#define dwc_param_thr_ctl_default 0
++
++/** Thresholding length for Tx
++ * FIFOs in 32 bit DWORDs
++ */
++extern int dwc_otg_set_param_tx_thr_length(dwc_otg_core_if_t * core_if,
++					   int32_t val);
++extern int32_t dwc_otg_get_tx_thr_length(dwc_otg_core_if_t * core_if);
++#define dwc_param_tx_thr_length_default 64
++
++/** Thresholding length for Rx
++ *	FIFOs in 32 bit DWORDs
++ */
++extern int dwc_otg_set_param_rx_thr_length(dwc_otg_core_if_t * core_if,
++					   int32_t val);
++extern int32_t dwc_otg_get_rx_thr_length(dwc_otg_core_if_t * core_if);
++#define dwc_param_rx_thr_length_default 64
++
++/**
++ * Specifies whether LPM (Link Power Management) support is enabled
++ */
++extern int dwc_otg_set_param_lpm_enable(dwc_otg_core_if_t * core_if,
++					int32_t val);
++extern int32_t dwc_otg_get_param_lpm_enable(dwc_otg_core_if_t * core_if);
++#define dwc_param_lpm_enable_default 1
++
++/**
++ * Specifies whether PTI enhancement is enabled
++ */
++extern int dwc_otg_set_param_pti_enable(dwc_otg_core_if_t * core_if,
++					int32_t val);
++extern int32_t dwc_otg_get_param_pti_enable(dwc_otg_core_if_t * core_if);
++#define dwc_param_pti_enable_default 0
++
++/**
++ * Specifies whether MPI enhancement is enabled
++ */
++extern int dwc_otg_set_param_mpi_enable(dwc_otg_core_if_t * core_if,
++					int32_t val);
++extern int32_t dwc_otg_get_param_mpi_enable(dwc_otg_core_if_t * core_if);
++#define dwc_param_mpi_enable_default 0
++
++/**
++ * Specifies whether ADP capability is enabled
++ */
++extern int dwc_otg_set_param_adp_enable(dwc_otg_core_if_t * core_if,
++					int32_t val);
++extern int32_t dwc_otg_get_param_adp_enable(dwc_otg_core_if_t * core_if);
++#define dwc_param_adp_enable_default 0
++
++/**
++ * Specifies whether IC_USB capability is enabled
++ */
++
++extern int dwc_otg_set_param_ic_usb_cap(dwc_otg_core_if_t * core_if,
++					int32_t val);
++extern int32_t dwc_otg_get_param_ic_usb_cap(dwc_otg_core_if_t * core_if);
++#define dwc_param_ic_usb_cap_default 0
++
++extern int dwc_otg_set_param_ahb_thr_ratio(dwc_otg_core_if_t * core_if,
++					   int32_t val);
++extern int32_t dwc_otg_get_param_ahb_thr_ratio(dwc_otg_core_if_t * core_if);
++#define dwc_param_ahb_thr_ratio_default 0
++
++extern int dwc_otg_set_param_power_down(dwc_otg_core_if_t * core_if,
++					int32_t val);
++extern int32_t dwc_otg_get_param_power_down(dwc_otg_core_if_t * core_if);
++#define dwc_param_power_down_default 0
++
++extern int dwc_otg_set_param_reload_ctl(dwc_otg_core_if_t * core_if,
++					int32_t val);
++extern int32_t dwc_otg_get_param_reload_ctl(dwc_otg_core_if_t * core_if);
++#define dwc_param_reload_ctl_default 0
++
++extern int dwc_otg_set_param_dev_out_nak(dwc_otg_core_if_t * core_if,
++										int32_t val);
++extern int32_t dwc_otg_get_param_dev_out_nak(dwc_otg_core_if_t * core_if);
++#define dwc_param_dev_out_nak_default 0
++
++extern int dwc_otg_set_param_cont_on_bna(dwc_otg_core_if_t * core_if,
++										 int32_t val);
++extern int32_t dwc_otg_get_param_cont_on_bna(dwc_otg_core_if_t * core_if);
++#define dwc_param_cont_on_bna_default 0
++
++extern int dwc_otg_set_param_ahb_single(dwc_otg_core_if_t * core_if,
++										 int32_t val);
++extern int32_t dwc_otg_get_param_ahb_single(dwc_otg_core_if_t * core_if);
++#define dwc_param_ahb_single_default 0
++
++extern int dwc_otg_set_param_otg_ver(dwc_otg_core_if_t * core_if, int32_t val);
++extern int32_t dwc_otg_get_param_otg_ver(dwc_otg_core_if_t * core_if);
++#define dwc_param_otg_ver_default 0
++
++/** @} */
++
++/** @name Access to registers and bit-fields */
++
++/**
++ * Dump core registers and SPRAM
++ */
++extern void dwc_otg_dump_dev_registers(dwc_otg_core_if_t * _core_if);
++extern void dwc_otg_dump_spram(dwc_otg_core_if_t * _core_if);
++extern void dwc_otg_dump_host_registers(dwc_otg_core_if_t * _core_if);
++extern void dwc_otg_dump_global_registers(dwc_otg_core_if_t * _core_if);
++
++/**
++ * Get host negotiation status.
++ */
++extern uint32_t dwc_otg_get_hnpstatus(dwc_otg_core_if_t * core_if);
++
++/**
++ * Get srp status
++ */
++extern uint32_t dwc_otg_get_srpstatus(dwc_otg_core_if_t * core_if);
++
++/**
++ * Set hnpreq bit in the GOTGCTL register.
++ */
++extern void dwc_otg_set_hnpreq(dwc_otg_core_if_t * core_if, uint32_t val);
++
++/**
++ * Get Content of SNPSID register.
++ */
++extern uint32_t dwc_otg_get_gsnpsid(dwc_otg_core_if_t * core_if);
++
++/**
++ * Get current mode.
++ * Returns 0 if in device mode, and 1 if in host mode.
++ */
++extern uint32_t dwc_otg_get_mode(dwc_otg_core_if_t * core_if);
++
++/**
++ * Get value of hnpcapable field in the GUSBCFG register
++ */
++extern uint32_t dwc_otg_get_hnpcapable(dwc_otg_core_if_t * core_if);
++/**
++ * Set value of hnpcapable field in the GUSBCFG register
++ */
++extern void dwc_otg_set_hnpcapable(dwc_otg_core_if_t * core_if, uint32_t val);
++
++/**
++ * Get value of srpcapable field in the GUSBCFG register
++ */
++extern uint32_t dwc_otg_get_srpcapable(dwc_otg_core_if_t * core_if);
++/**
++ * Set value of srpcapable field in the GUSBCFG register
++ */
++extern void dwc_otg_set_srpcapable(dwc_otg_core_if_t * core_if, uint32_t val);
++
++/**
++ * Get value of devspeed field in the DCFG register
++ */
++extern uint32_t dwc_otg_get_devspeed(dwc_otg_core_if_t * core_if);
++/**
++ * Set value of devspeed field in the DCFG register
++ */
++extern void dwc_otg_set_devspeed(dwc_otg_core_if_t * core_if, uint32_t val);
++
++/**
++ * Get the value of busconnected field from the HPRT0 register
++ */
++extern uint32_t dwc_otg_get_busconnected(dwc_otg_core_if_t * core_if);
++
++/**
++ * Gets the device enumeration Speed.
++ */
++extern uint32_t dwc_otg_get_enumspeed(dwc_otg_core_if_t * core_if);
++
++/**
++ * Get value of prtpwr field from the HPRT0 register
++ */
++extern uint32_t dwc_otg_get_prtpower(dwc_otg_core_if_t * core_if);
++
++/**
++ * Get value of flag indicating core state - hibernated or not
++ */
++extern uint32_t dwc_otg_get_core_state(dwc_otg_core_if_t * core_if);
++
++/**
++ * Set value of prtpwr field from the HPRT0 register
++ */
++extern void dwc_otg_set_prtpower(dwc_otg_core_if_t * core_if, uint32_t val);
++
++/**
++ * Get value of prtsusp field from the HPRT0 register
++ */
++extern uint32_t dwc_otg_get_prtsuspend(dwc_otg_core_if_t * core_if);
++/**
++ * Set value of prtsusp field from the HPRT0 register
++ */
++extern void dwc_otg_set_prtsuspend(dwc_otg_core_if_t * core_if, uint32_t val);
++
++/**
++ * Get value of ModeChTimEn field from the HCFG register
++ */
++extern uint32_t dwc_otg_get_mode_ch_tim(dwc_otg_core_if_t * core_if);
++/**
++ * Set value of ModeChTimEn field from the HCFG register
++ */
++extern void dwc_otg_set_mode_ch_tim(dwc_otg_core_if_t * core_if, uint32_t val);
++
++/**
++ * Get value of Frame Interval field from the HFIR register
++ */
++extern uint32_t dwc_otg_get_fr_interval(dwc_otg_core_if_t * core_if);
++/**
++ * Set value of Frame Interval field from the HFIR register
++ */
++extern void dwc_otg_set_fr_interval(dwc_otg_core_if_t * core_if, uint32_t val);
++
++/**
++ * Set value of prtres field from the HPRT0 register
++ *FIXME Remove?
++ */
++extern void dwc_otg_set_prtresume(dwc_otg_core_if_t * core_if, uint32_t val);
++
++/**
++ * Get value of rmtwkupsig bit in DCTL register
++ */
++extern uint32_t dwc_otg_get_remotewakesig(dwc_otg_core_if_t * core_if);
++
++/**
++ * Get value of prt_sleep_sts field from the GLPMCFG register
++ */
++extern uint32_t dwc_otg_get_lpm_portsleepstatus(dwc_otg_core_if_t * core_if);
++
++/**
++ * Get value of rem_wkup_en field from the GLPMCFG register
++ */
++extern uint32_t dwc_otg_get_lpm_remotewakeenabled(dwc_otg_core_if_t * core_if);
++
++/**
++ * Get value of appl_resp field from the GLPMCFG register
++ */
++extern uint32_t dwc_otg_get_lpmresponse(dwc_otg_core_if_t * core_if);
++/**
++ * Set value of appl_resp field from the GLPMCFG register
++ */
++extern void dwc_otg_set_lpmresponse(dwc_otg_core_if_t * core_if, uint32_t val);
++
++/**
++ * Get value of hsic_connect field from the GLPMCFG register
++ */
++extern uint32_t dwc_otg_get_hsic_connect(dwc_otg_core_if_t * core_if);
++/**
++ * Set value of hsic_connect field from the GLPMCFG register
++ */
++extern void dwc_otg_set_hsic_connect(dwc_otg_core_if_t * core_if, uint32_t val);
++
++/**
++ * Get value of inv_sel_hsic field from the GLPMCFG register.
++ */
++extern uint32_t dwc_otg_get_inv_sel_hsic(dwc_otg_core_if_t * core_if);
++/**
++ * Set value of inv_sel_hsic field from the GLPMCFG register.
++ */
++extern void dwc_otg_set_inv_sel_hsic(dwc_otg_core_if_t * core_if, uint32_t val);
++
++/*
++ * Some functions for accessing registers
++ */
++
++/**
++ *  GOTGCTL register
++ */
++extern uint32_t dwc_otg_get_gotgctl(dwc_otg_core_if_t * core_if);
++extern void dwc_otg_set_gotgctl(dwc_otg_core_if_t * core_if, uint32_t val);
++
++/**
++ * GUSBCFG register
++ */
++extern uint32_t dwc_otg_get_gusbcfg(dwc_otg_core_if_t * core_if);
++extern void dwc_otg_set_gusbcfg(dwc_otg_core_if_t * core_if, uint32_t val);
++
++/**
++ * GRXFSIZ register
++ */
++extern uint32_t dwc_otg_get_grxfsiz(dwc_otg_core_if_t * core_if);
++extern void dwc_otg_set_grxfsiz(dwc_otg_core_if_t * core_if, uint32_t val);
++
++/**
++ * GNPTXFSIZ register
++ */
++extern uint32_t dwc_otg_get_gnptxfsiz(dwc_otg_core_if_t * core_if);
++extern void dwc_otg_set_gnptxfsiz(dwc_otg_core_if_t * core_if, uint32_t val);
++
++extern uint32_t dwc_otg_get_gpvndctl(dwc_otg_core_if_t * core_if);
++extern void dwc_otg_set_gpvndctl(dwc_otg_core_if_t * core_if, uint32_t val);
++
++/**
++ * GGPIO register
++ */
++extern uint32_t dwc_otg_get_ggpio(dwc_otg_core_if_t * core_if);
++extern void dwc_otg_set_ggpio(dwc_otg_core_if_t * core_if, uint32_t val);
++
++/**
++ * GUID register
++ */
++extern uint32_t dwc_otg_get_guid(dwc_otg_core_if_t * core_if);
++extern void dwc_otg_set_guid(dwc_otg_core_if_t * core_if, uint32_t val);
++
++/**
++ * HPRT0 register
++ */
++extern uint32_t dwc_otg_get_hprt0(dwc_otg_core_if_t * core_if);
++extern void dwc_otg_set_hprt0(dwc_otg_core_if_t * core_if, uint32_t val);
++
++/**
++ * GHPTXFSIZ register
++ */
++extern uint32_t dwc_otg_get_hptxfsiz(dwc_otg_core_if_t * core_if);
++
++/** @} */
++
++#endif				/* __DWC_CORE_IF_H__ */
+--- /dev/null
++++ b/drivers/usb/host/dwc_otg/dwc_otg_dbg.h
+@@ -0,0 +1,117 @@
++/* ==========================================================================
++ *
++ * Synopsys HS OTG Linux Software Driver and documentation (hereinafter,
++ * "Software") is an Unsupported proprietary work of Synopsys, Inc. unless
++ * otherwise expressly agreed to in writing between Synopsys and you.
++ *
++ * The Software IS NOT an item of Licensed Software or Licensed Product under
++ * any End User Software License Agreement or Agreement for Licensed Product
++ * with Synopsys or any supplement thereto. You are permitted to use and
++ * redistribute this Software in source and binary forms, with or without
++ * modification, provided that redistributions of source code must retain this
++ * notice. You may not view, use, disclose, copy or distribute this file or
++ * any information contained herein except pursuant to this license grant from
++ * Synopsys. If you do not agree with this notice, including the disclaimer
++ * below, then you are not authorized to use the Software.
++ *
++ * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS" BASIS
++ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
++ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
++ * ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS BE LIABLE FOR ANY DIRECT,
++ * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
++ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
++ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
++ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
++ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
++ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
++ * DAMAGE.
++ * ========================================================================== */
++
++#ifndef __DWC_OTG_DBG_H__
++#define __DWC_OTG_DBG_H__
++
++/** @file
++ * This file defines debug levels.
++ * Debugging support vanishes in non-debug builds.
++ */
++
++/**
++ * The Debug Level bit-mask variable.
++ */
++extern uint32_t g_dbg_lvl;
++/**
++ * Set the Debug Level variable.
++ */
++static inline uint32_t SET_DEBUG_LEVEL(const uint32_t new)
++{
++	uint32_t old = g_dbg_lvl;
++	g_dbg_lvl = new;
++	return old;
++}
++
++#define DBG_USER	(0x1)
++/** When debug level has the DBG_CIL bit set, display CIL Debug messages. */
++#define DBG_CIL		(0x2)
++/** When debug level has the DBG_CILV bit set, display CIL Verbose debug
++ * messages */
++#define DBG_CILV	(0x20)
++/**  When debug level has the DBG_PCD bit set, display PCD (Device) debug
++ *  messages */
++#define DBG_PCD		(0x4)
++/** When debug level has the DBG_PCDV set, display PCD (Device) Verbose debug
++ * messages */
++#define DBG_PCDV	(0x40)
++/** When debug level has the DBG_HCD bit set, display Host debug messages */
++#define DBG_HCD		(0x8)
++/** When debug level has the DBG_HCDV bit set, display Verbose Host debug
++ * messages */
++#define DBG_HCDV	(0x80)
++/** When debug level has the DBG_HCD_URB bit set, display enqueued URBs in host
++ *  mode. */
++#define DBG_HCD_URB	(0x800)
++/** When debug level has the DBG_HCDI bit set, display host interrupt
++ *  messages. */
++#define DBG_HCDI	(0x1000)
++
++/** When debug level has any bit set, display debug messages */
++#define DBG_ANY		(0xFF)
++
++/** All debug messages off */
++#define DBG_OFF		0
++
++/** Prefix string for DWC_DEBUG print macros. */
++#define USB_DWC "DWC_otg: "
++
++/**
++ * Print a debug message when the Global debug level variable contains
++ * the bit defined in <code>lvl</code>.
++ *
++ * @param[in] lvl - Debug level, use one of the DBG_ constants above.
++ * @param[in] x - like printf
++ *
++ *    Example:<p>
++ * <code>
++ *      DWC_DEBUGPL( DBG_ANY, "%s(%p)\n", __func__, _reg_base_addr);
++ * </code>
++ * <br>
++ * results in:<br>
++ * <code>
++ * usb-DWC_otg: dwc_otg_cil_init(ca867000)
++ * </code>
++ */
++#ifdef DEBUG
++
++# define DWC_DEBUGPL(lvl, x...) do{ if ((lvl)&g_dbg_lvl)__DWC_DEBUG(USB_DWC x ); }while(0)
++# define DWC_DEBUGP(x...)	DWC_DEBUGPL(DBG_ANY, x )
++
++# define CHK_DEBUG_LEVEL(level) ((level) & g_dbg_lvl)
++
++#else
++
++# define DWC_DEBUGPL(lvl, x...) do{}while(0)
++# define DWC_DEBUGP(x...)
++
++# define CHK_DEBUG_LEVEL(level) (0)
++
++#endif /*DEBUG*/
++#endif
+--- /dev/null
++++ b/drivers/usb/host/dwc_otg/dwc_otg_driver.c
+@@ -0,0 +1,1757 @@
++/* ==========================================================================
++ * $File: //dwh/usb_iip/dev/software/otg/linux/drivers/dwc_otg_driver.c $
++ * $Revision: #92 $
++ * $Date: 2012/08/10 $
++ * $Change: 2047372 $
++ *
++ * Synopsys HS OTG Linux Software Driver and documentation (hereinafter,
++ * "Software") is an Unsupported proprietary work of Synopsys, Inc. unless
++ * otherwise expressly agreed to in writing between Synopsys and you.
++ *
++ * The Software IS NOT an item of Licensed Software or Licensed Product under
++ * any End User Software License Agreement or Agreement for Licensed Product
++ * with Synopsys or any supplement thereto. You are permitted to use and
++ * redistribute this Software in source and binary forms, with or without
++ * modification, provided that redistributions of source code must retain this
++ * notice. You may not view, use, disclose, copy or distribute this file or
++ * any information contained herein except pursuant to this license grant from
++ * Synopsys. If you do not agree with this notice, including the disclaimer
++ * below, then you are not authorized to use the Software.
++ *
++ * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS" BASIS
++ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
++ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
++ * ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS BE LIABLE FOR ANY DIRECT,
++ * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
++ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
++ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
++ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
++ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
++ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
++ * DAMAGE.
++ * ========================================================================== */
++
++/** @file
++ * The dwc_otg_driver module provides the initialization and cleanup entry
++ * points for the DWC_otg driver. This module will be dynamically installed
++ * after Linux is booted using the insmod command. When the module is
++ * installed, the dwc_otg_driver_init function is called. When the module is
++ * removed (using rmmod), the dwc_otg_driver_cleanup function is called.
++ *
++ * This module also defines a data structure for the dwc_otg_driver, which is
++ * used in conjunction with the standard ARM lm_device structure. These
++ * structures allow the OTG driver to comply with the standard Linux driver
++ * model in which devices and drivers are registered with a bus driver. This
++ * has the benefit that Linux can expose attributes of the driver and device
++ * in its special sysfs file system. Users can then read or write files in
++ * this file system to perform diagnostics on the driver components or the
++ * device.
++ */
++
++#include "dwc_otg_os_dep.h"
++#include "dwc_os.h"
++#include "dwc_otg_dbg.h"
++#include "dwc_otg_driver.h"
++#include "dwc_otg_attr.h"
++#include "dwc_otg_core_if.h"
++#include "dwc_otg_pcd_if.h"
++#include "dwc_otg_hcd_if.h"
++#include "dwc_otg_fiq_fsm.h"
++
++#define DWC_DRIVER_VERSION	"3.00a 10-AUG-2012"
++#define DWC_DRIVER_DESC		"HS OTG USB Controller driver"
++
++bool microframe_schedule=true;
++
++static const char dwc_driver_name[] = "dwc_otg";
++
++
++extern int pcd_init(
++#ifdef LM_INTERFACE
++			   struct lm_device *_dev
++#elif  defined(PCI_INTERFACE)
++			   struct pci_dev *_dev
++#elif  defined(PLATFORM_INTERFACE)
++	struct platform_device *dev
++#endif
++    );
++extern int hcd_init(
++#ifdef LM_INTERFACE
++			   struct lm_device *_dev
++#elif  defined(PCI_INTERFACE)
++			   struct pci_dev *_dev
++#elif  defined(PLATFORM_INTERFACE)
++	struct platform_device *dev
++#endif
++    );
++
++extern int pcd_remove(
++#ifdef LM_INTERFACE
++			     struct lm_device *_dev
++#elif  defined(PCI_INTERFACE)
++			     struct pci_dev *_dev
++#elif  defined(PLATFORM_INTERFACE)
++	struct platform_device *_dev
++#endif
++    );
++
++extern void hcd_remove(
++#ifdef LM_INTERFACE
++			      struct lm_device *_dev
++#elif  defined(PCI_INTERFACE)
++			      struct pci_dev *_dev
++#elif  defined(PLATFORM_INTERFACE)
++	struct platform_device *_dev
++#endif
++    );
++
++extern void dwc_otg_adp_start(dwc_otg_core_if_t * core_if, uint8_t is_host);
++
++/*-------------------------------------------------------------------------*/
++/* Encapsulate the module parameter settings */
++
++struct dwc_otg_driver_module_params {
++	int32_t opt;
++	int32_t otg_cap;
++	int32_t dma_enable;
++	int32_t dma_desc_enable;
++	int32_t dma_burst_size;
++	int32_t speed;
++	int32_t host_support_fs_ls_low_power;
++	int32_t host_ls_low_power_phy_clk;
++	int32_t enable_dynamic_fifo;
++	int32_t data_fifo_size;
++	int32_t dev_rx_fifo_size;
++	int32_t dev_nperio_tx_fifo_size;
++	uint32_t dev_perio_tx_fifo_size[MAX_PERIO_FIFOS];
++	int32_t host_rx_fifo_size;
++	int32_t host_nperio_tx_fifo_size;
++	int32_t host_perio_tx_fifo_size;
++	int32_t max_transfer_size;
++	int32_t max_packet_count;
++	int32_t host_channels;
++	int32_t dev_endpoints;
++	int32_t phy_type;
++	int32_t phy_utmi_width;
++	int32_t phy_ulpi_ddr;
++	int32_t phy_ulpi_ext_vbus;
++	int32_t i2c_enable;
++	int32_t ulpi_fs_ls;
++	int32_t ts_dline;
++	int32_t en_multiple_tx_fifo;
++	uint32_t dev_tx_fifo_size[MAX_TX_FIFOS];
++	uint32_t thr_ctl;
++	uint32_t tx_thr_length;
++	uint32_t rx_thr_length;
++	int32_t pti_enable;
++	int32_t mpi_enable;
++	int32_t lpm_enable;
++	int32_t ic_usb_cap;
++	int32_t ahb_thr_ratio;
++	int32_t power_down;
++	int32_t reload_ctl;
++	int32_t dev_out_nak;
++	int32_t cont_on_bna;
++	int32_t ahb_single;
++	int32_t otg_ver;
++	int32_t adp_enable;
++};
++
++static struct dwc_otg_driver_module_params dwc_otg_module_params = {
++	.opt = -1,
++	.otg_cap = -1,
++	.dma_enable = -1,
++	.dma_desc_enable = -1,
++	.dma_burst_size = -1,
++	.speed = -1,
++	.host_support_fs_ls_low_power = -1,
++	.host_ls_low_power_phy_clk = -1,
++	.enable_dynamic_fifo = -1,
++	.data_fifo_size = -1,
++	.dev_rx_fifo_size = -1,
++	.dev_nperio_tx_fifo_size = -1,
++	.dev_perio_tx_fifo_size = {
++				   /* dev_perio_tx_fifo_size_1 */
++				   -1,
++				   -1,
++				   -1,
++				   -1,
++				   -1,
++				   -1,
++				   -1,
++				   -1,
++				   -1,
++				   -1,
++				   -1,
++				   -1,
++				   -1,
++				   -1,
++				   -1
++				   /* 15 */
++				   },
++	.host_rx_fifo_size = -1,
++	.host_nperio_tx_fifo_size = -1,
++	.host_perio_tx_fifo_size = -1,
++	.max_transfer_size = -1,
++	.max_packet_count = -1,
++	.host_channels = -1,
++	.dev_endpoints = -1,
++	.phy_type = -1,
++	.phy_utmi_width = -1,
++	.phy_ulpi_ddr = -1,
++	.phy_ulpi_ext_vbus = -1,
++	.i2c_enable = -1,
++	.ulpi_fs_ls = -1,
++	.ts_dline = -1,
++	.en_multiple_tx_fifo = -1,
++	.dev_tx_fifo_size = {
++			     /* dev_tx_fifo_size */
++			     -1,
++			     -1,
++			     -1,
++			     -1,
++			     -1,
++			     -1,
++			     -1,
++			     -1,
++			     -1,
++			     -1,
++			     -1,
++			     -1,
++			     -1,
++			     -1,
++			     -1
++			     /* 15 */
++			     },
++	.thr_ctl = -1,
++	.tx_thr_length = -1,
++	.rx_thr_length = -1,
++	.pti_enable = -1,
++	.mpi_enable = -1,
++	.lpm_enable = 0,
++	.ic_usb_cap = -1,
++	.ahb_thr_ratio = -1,
++	.power_down = -1,
++	.reload_ctl = -1,
++	.dev_out_nak = -1,
++	.cont_on_bna = -1,
++	.ahb_single = -1,
++	.otg_ver = -1,
++	.adp_enable = -1,
++};
++
++//Global variable to switch the fiq fix on or off
++bool fiq_enable = 1;
++// Global variable to enable the split transaction fix
++bool fiq_fsm_enable = true;
++//Bulk split-transaction NAK holdoff in microframes
++uint16_t nak_holdoff = 8;
++
++unsigned short fiq_fsm_mask = 0x07;
++
++/**
++ * This function shows the Driver Version.
++ */
++static ssize_t version_show(struct device_driver *dev, char *buf)
++{
++	return snprintf(buf, sizeof(DWC_DRIVER_VERSION) + 2, "%s\n",
++			DWC_DRIVER_VERSION);
++}
++
++static DRIVER_ATTR(version, S_IRUGO, version_show, NULL);
++
++/**
++ * Global Debug Level Mask.
++ */
++uint32_t g_dbg_lvl = 0;		/* OFF */
++
++/**
++ * This function shows the driver Debug Level.
++ */
++static ssize_t dbg_level_show(struct device_driver *drv, char *buf)
++{
++	return sprintf(buf, "0x%0x\n", g_dbg_lvl);
++}
++
++/**
++ * This function stores the driver Debug Level.
++ */
++static ssize_t dbg_level_store(struct device_driver *drv, const char *buf,
++			       size_t count)
++{
++	g_dbg_lvl = simple_strtoul(buf, NULL, 16);
++	return count;
++}
++
++static DRIVER_ATTR(debuglevel, S_IRUGO | S_IWUSR, dbg_level_show,
++		   dbg_level_store);
++
++/**
++ * This function is called during module initialization
++ * to pass module parameters to the DWC_OTG CORE.
++ */
++static int set_parameters(dwc_otg_core_if_t * core_if)
++{
++	int retval = 0;
++	int i;
++
++	if (dwc_otg_module_params.otg_cap != -1) {
++		retval +=
++		    dwc_otg_set_param_otg_cap(core_if,
++					      dwc_otg_module_params.otg_cap);
++	}
++	if (dwc_otg_module_params.dma_enable != -1) {
++		retval +=
++		    dwc_otg_set_param_dma_enable(core_if,
++						 dwc_otg_module_params.
++						 dma_enable);
++	}
++	if (dwc_otg_module_params.dma_desc_enable != -1) {
++		retval +=
++		    dwc_otg_set_param_dma_desc_enable(core_if,
++						      dwc_otg_module_params.
++						      dma_desc_enable);
++	}
++	if (dwc_otg_module_params.opt != -1) {
++		retval +=
++		    dwc_otg_set_param_opt(core_if, dwc_otg_module_params.opt);
++	}
++	if (dwc_otg_module_params.dma_burst_size != -1) {
++		retval +=
++		    dwc_otg_set_param_dma_burst_size(core_if,
++						     dwc_otg_module_params.
++						     dma_burst_size);
++	}
++	if (dwc_otg_module_params.host_support_fs_ls_low_power != -1) {
++		retval +=
++		    dwc_otg_set_param_host_support_fs_ls_low_power(core_if,
++								   dwc_otg_module_params.
++								   host_support_fs_ls_low_power);
++	}
++	if (dwc_otg_module_params.enable_dynamic_fifo != -1) {
++		retval +=
++		    dwc_otg_set_param_enable_dynamic_fifo(core_if,
++							  dwc_otg_module_params.
++							  enable_dynamic_fifo);
++	}
++	if (dwc_otg_module_params.data_fifo_size != -1) {
++		retval +=
++		    dwc_otg_set_param_data_fifo_size(core_if,
++						     dwc_otg_module_params.
++						     data_fifo_size);
++	}
++	if (dwc_otg_module_params.dev_rx_fifo_size != -1) {
++		retval +=
++		    dwc_otg_set_param_dev_rx_fifo_size(core_if,
++						       dwc_otg_module_params.
++						       dev_rx_fifo_size);
++	}
++	if (dwc_otg_module_params.dev_nperio_tx_fifo_size != -1) {
++		retval +=
++		    dwc_otg_set_param_dev_nperio_tx_fifo_size(core_if,
++							      dwc_otg_module_params.
++							      dev_nperio_tx_fifo_size);
++	}
++	if (dwc_otg_module_params.host_rx_fifo_size != -1) {
++		retval +=
++		    dwc_otg_set_param_host_rx_fifo_size(core_if,
++							dwc_otg_module_params.host_rx_fifo_size);
++	}
++	if (dwc_otg_module_params.host_nperio_tx_fifo_size != -1) {
++		retval +=
++		    dwc_otg_set_param_host_nperio_tx_fifo_size(core_if,
++							       dwc_otg_module_params.
++							       host_nperio_tx_fifo_size);
++	}
++	if (dwc_otg_module_params.host_perio_tx_fifo_size != -1) {
++		retval +=
++		    dwc_otg_set_param_host_perio_tx_fifo_size(core_if,
++							      dwc_otg_module_params.
++							      host_perio_tx_fifo_size);
++	}
++	if (dwc_otg_module_params.max_transfer_size != -1) {
++		retval +=
++		    dwc_otg_set_param_max_transfer_size(core_if,
++							dwc_otg_module_params.
++							max_transfer_size);
++	}
++	if (dwc_otg_module_params.max_packet_count != -1) {
++		retval +=
++		    dwc_otg_set_param_max_packet_count(core_if,
++						       dwc_otg_module_params.
++						       max_packet_count);
++	}
++	if (dwc_otg_module_params.host_channels != -1) {
++		retval +=
++		    dwc_otg_set_param_host_channels(core_if,
++						    dwc_otg_module_params.
++						    host_channels);
++	}
++	if (dwc_otg_module_params.dev_endpoints != -1) {
++		retval +=
++		    dwc_otg_set_param_dev_endpoints(core_if,
++						    dwc_otg_module_params.
++						    dev_endpoints);
++	}
++	if (dwc_otg_module_params.phy_type != -1) {
++		retval +=
++		    dwc_otg_set_param_phy_type(core_if,
++					       dwc_otg_module_params.phy_type);
++	}
++	if (dwc_otg_module_params.speed != -1) {
++		retval +=
++		    dwc_otg_set_param_speed(core_if,
++					    dwc_otg_module_params.speed);
++	}
++	if (dwc_otg_module_params.host_ls_low_power_phy_clk != -1) {
++		retval +=
++		    dwc_otg_set_param_host_ls_low_power_phy_clk(core_if,
++								dwc_otg_module_params.
++								host_ls_low_power_phy_clk);
++	}
++	if (dwc_otg_module_params.phy_ulpi_ddr != -1) {
++		retval +=
++		    dwc_otg_set_param_phy_ulpi_ddr(core_if,
++						   dwc_otg_module_params.
++						   phy_ulpi_ddr);
++	}
++	if (dwc_otg_module_params.phy_ulpi_ext_vbus != -1) {
++		retval +=
++		    dwc_otg_set_param_phy_ulpi_ext_vbus(core_if,
++							dwc_otg_module_params.
++							phy_ulpi_ext_vbus);
++	}
++	if (dwc_otg_module_params.phy_utmi_width != -1) {
++		retval +=
++		    dwc_otg_set_param_phy_utmi_width(core_if,
++						     dwc_otg_module_params.
++						     phy_utmi_width);
++	}
++	if (dwc_otg_module_params.ulpi_fs_ls != -1) {
++		retval +=
++		    dwc_otg_set_param_ulpi_fs_ls(core_if,
++						 dwc_otg_module_params.ulpi_fs_ls);
++	}
++	if (dwc_otg_module_params.ts_dline != -1) {
++		retval +=
++		    dwc_otg_set_param_ts_dline(core_if,
++					       dwc_otg_module_params.ts_dline);
++	}
++	if (dwc_otg_module_params.i2c_enable != -1) {
++		retval +=
++		    dwc_otg_set_param_i2c_enable(core_if,
++						 dwc_otg_module_params.
++						 i2c_enable);
++	}
++	if (dwc_otg_module_params.en_multiple_tx_fifo != -1) {
++		retval +=
++		    dwc_otg_set_param_en_multiple_tx_fifo(core_if,
++							  dwc_otg_module_params.
++							  en_multiple_tx_fifo);
++	}
++	for (i = 0; i < 15; i++) {
++		if (dwc_otg_module_params.dev_perio_tx_fifo_size[i] != -1) {
++			retval +=
++			    dwc_otg_set_param_dev_perio_tx_fifo_size(core_if,
++								     dwc_otg_module_params.
++								     dev_perio_tx_fifo_size
++								     [i], i);
++		}
++	}
++
++	for (i = 0; i < 15; i++) {
++		if (dwc_otg_module_params.dev_tx_fifo_size[i] != -1) {
++			retval += dwc_otg_set_param_dev_tx_fifo_size(core_if,
++								     dwc_otg_module_params.
++								     dev_tx_fifo_size
++								     [i], i);
++		}
++	}
++	if (dwc_otg_module_params.thr_ctl != -1) {
++		retval +=
++		    dwc_otg_set_param_thr_ctl(core_if,
++					      dwc_otg_module_params.thr_ctl);
++	}
++	if (dwc_otg_module_params.mpi_enable != -1) {
++		retval +=
++		    dwc_otg_set_param_mpi_enable(core_if,
++						 dwc_otg_module_params.
++						 mpi_enable);
++	}
++	if (dwc_otg_module_params.pti_enable != -1) {
++		retval +=
++		    dwc_otg_set_param_pti_enable(core_if,
++						 dwc_otg_module_params.
++						 pti_enable);
++	}
++	if (dwc_otg_module_params.lpm_enable != -1) {
++		retval +=
++		    dwc_otg_set_param_lpm_enable(core_if,
++						 dwc_otg_module_params.
++						 lpm_enable);
++	}
++	if (dwc_otg_module_params.ic_usb_cap != -1) {
++		retval +=
++		    dwc_otg_set_param_ic_usb_cap(core_if,
++						 dwc_otg_module_params.
++						 ic_usb_cap);
++	}
++	if (dwc_otg_module_params.tx_thr_length != -1) {
++		retval +=
++		    dwc_otg_set_param_tx_thr_length(core_if,
++						    dwc_otg_module_params.tx_thr_length);
++	}
++	if (dwc_otg_module_params.rx_thr_length != -1) {
++		retval +=
++		    dwc_otg_set_param_rx_thr_length(core_if,
++						    dwc_otg_module_params.
++						    rx_thr_length);
++	}
++	if (dwc_otg_module_params.ahb_thr_ratio != -1) {
++		retval +=
++		    dwc_otg_set_param_ahb_thr_ratio(core_if,
++						    dwc_otg_module_params.ahb_thr_ratio);
++	}
++	if (dwc_otg_module_params.power_down != -1) {
++		retval +=
++		    dwc_otg_set_param_power_down(core_if,
++						 dwc_otg_module_params.power_down);
++	}
++	if (dwc_otg_module_params.reload_ctl != -1) {
++		retval +=
++		    dwc_otg_set_param_reload_ctl(core_if,
++						 dwc_otg_module_params.reload_ctl);
++	}
++
++	if (dwc_otg_module_params.dev_out_nak != -1) {
++		retval +=
++			dwc_otg_set_param_dev_out_nak(core_if,
++			dwc_otg_module_params.dev_out_nak);
++	}
++
++	if (dwc_otg_module_params.cont_on_bna != -1) {
++		retval +=
++			dwc_otg_set_param_cont_on_bna(core_if,
++			dwc_otg_module_params.cont_on_bna);
++	}
++
++	if (dwc_otg_module_params.ahb_single != -1) {
++		retval +=
++			dwc_otg_set_param_ahb_single(core_if,
++			dwc_otg_module_params.ahb_single);
++	}
++
++	if (dwc_otg_module_params.otg_ver != -1) {
++		retval +=
++		    dwc_otg_set_param_otg_ver(core_if,
++					      dwc_otg_module_params.otg_ver);
++	}
++	if (dwc_otg_module_params.adp_enable != -1) {
++		retval +=
++		    dwc_otg_set_param_adp_enable(core_if,
++						 dwc_otg_module_params.
++						 adp_enable);
++	}
++	return retval;
++}
++
++/**
++ * This function is the top level interrupt handler for the Common
++ * (Device and host modes) interrupts.
++ */
++static irqreturn_t dwc_otg_common_irq(int irq, void *dev)
++{
++	int32_t retval = IRQ_NONE;
++
++	retval = dwc_otg_handle_common_intr(dev);
++	if (retval != 0) {
++		S3C2410X_CLEAR_EINTPEND();
++	}
++	return IRQ_RETVAL(retval);
++}
++
++/**
++ * This function is called when a lm_device is unregistered with the
++ * dwc_otg_driver. This happens, for example, when the rmmod command is
++ * executed. The device may or may not be electrically present. If it is
++ * present, the driver stops device processing. Any resources used on behalf
++ * of this device are freed.
++ *
++ * @param _dev
++ */
++#ifdef LM_INTERFACE
++#define REM_RETVAL(n)
++static void dwc_otg_driver_remove(	 struct lm_device *_dev )
++{       dwc_otg_device_t *otg_dev = lm_get_drvdata(_dev);
++#elif  defined(PCI_INTERFACE)
++#define REM_RETVAL(n)
++static void dwc_otg_driver_remove(	 struct pci_dev *_dev )
++{	dwc_otg_device_t *otg_dev = pci_get_drvdata(_dev);
++#elif  defined(PLATFORM_INTERFACE)
++#define REM_RETVAL(n) n
++static int dwc_otg_driver_remove(        struct platform_device *_dev )
++{       dwc_otg_device_t *otg_dev = platform_get_drvdata(_dev);
++#endif
++
++	DWC_DEBUGPL(DBG_ANY, "%s(%p) otg_dev %p\n", __func__, _dev, otg_dev);
++
++	if (!otg_dev) {
++		/* Memory allocation for the dwc_otg_device failed. */
++		DWC_DEBUGPL(DBG_ANY, "%s: otg_dev NULL!\n", __func__);
++                return REM_RETVAL(-ENOMEM);
++	}
++#ifndef DWC_DEVICE_ONLY
++	if (otg_dev->hcd) {
++		hcd_remove(_dev);
++	} else {
++		DWC_DEBUGPL(DBG_ANY, "%s: otg_dev->hcd NULL!\n", __func__);
++                return REM_RETVAL(-EINVAL);
++	}
++#endif
++
++#ifndef DWC_HOST_ONLY
++	if (otg_dev->pcd) {
++		pcd_remove(_dev);
++	} else {
++		DWC_DEBUGPL(DBG_ANY, "%s: otg_dev->pcd NULL!\n", __func__);
++                return REM_RETVAL(-EINVAL);
++	}
++#endif
++	/*
++	 * Free the IRQ
++	 */
++	if (otg_dev->common_irq_installed) {
++#ifdef PLATFORM_INTERFACE
++		free_irq(platform_get_irq(_dev, 0), otg_dev);
++#else
++		free_irq(_dev->irq, otg_dev);
++#endif
++        } else {
++		DWC_DEBUGPL(DBG_ANY, "%s: There is no installed irq!\n", __func__);
++		return REM_RETVAL(-ENXIO);
++	}
++
++	if (otg_dev->core_if) {
++		dwc_otg_cil_remove(otg_dev->core_if);
++	} else {
++		DWC_DEBUGPL(DBG_ANY, "%s: otg_dev->core_if NULL!\n", __func__);
++		return REM_RETVAL(-ENXIO);
++	}
++
++	/*
++	 * Remove the device attributes
++	 */
++	dwc_otg_attr_remove(_dev);
++
++	/*
++	 * Return the memory.
++	 */
++	if (otg_dev->os_dep.base) {
++		iounmap(otg_dev->os_dep.base);
++	}
++	DWC_FREE(otg_dev);
++
++	/*
++	 * Clear the drvdata pointer.
++	 */
++#ifdef LM_INTERFACE
++	lm_set_drvdata(_dev, 0);
++#elif defined(PCI_INTERFACE)
++        release_mem_region(otg_dev->os_dep.rsrc_start,
++                           otg_dev->os_dep.rsrc_len);
++	pci_set_drvdata(_dev, 0);
++#elif  defined(PLATFORM_INTERFACE)
++        platform_set_drvdata(_dev, 0);
++#endif
++        return REM_RETVAL(0);
++}
++
++/**
++ * This function is called when an lm_device is bound to a
++ * dwc_otg_driver. It creates the driver components required to
++ * control the device (CIL, HCD, and PCD) and it initializes the
++ * device. The driver components are stored in a dwc_otg_device
++ * structure. A reference to the dwc_otg_device is saved in the
++ * lm_device. This allows the driver to access the dwc_otg_device
++ * structure on subsequent calls to driver methods for this device.
++ *
++ * @param _dev Bus device
++ */
++static int dwc_otg_driver_probe(
++#ifdef LM_INTERFACE
++				       struct lm_device *_dev
++#elif defined(PCI_INTERFACE)
++				       struct pci_dev *_dev,
++				       const struct pci_device_id *id
++#elif  defined(PLATFORM_INTERFACE)
++                                       struct platform_device *_dev
++#endif
++    )
++{
++	int retval = 0;
++	dwc_otg_device_t *dwc_otg_device;
++        int devirq;
++
++	dev_dbg(&_dev->dev, "dwc_otg_driver_probe(%p)\n", _dev);
++#ifdef LM_INTERFACE
++	dev_dbg(&_dev->dev, "start=0x%08x\n", (unsigned)_dev->resource.start);
++#elif defined(PCI_INTERFACE)
++	if (!id) {
++		DWC_ERROR("Invalid pci_device_id %p", id);
++		return -EINVAL;
++	}
++
++	if (!_dev || (pci_enable_device(_dev) < 0)) {
++		DWC_ERROR("Invalid pci_device %p", _dev);
++		return -ENODEV;
++	}
++	dev_dbg(&_dev->dev, "start=0x%08x\n", (unsigned)pci_resource_start(_dev,0));
++	/* other stuff needed as well? */
++
++#elif  defined(PLATFORM_INTERFACE)
++	dev_dbg(&_dev->dev, "start=0x%08x (len 0x%x)\n",
++                (unsigned)_dev->resource->start,
++                (unsigned)(_dev->resource->end - _dev->resource->start));
++#endif
++
++	dwc_otg_device = DWC_ALLOC(sizeof(dwc_otg_device_t));
++
++	if (!dwc_otg_device) {
++		dev_err(&_dev->dev, "kmalloc of dwc_otg_device failed\n");
++		return -ENOMEM;
++	}
++
++	memset(dwc_otg_device, 0, sizeof(*dwc_otg_device));
++	dwc_otg_device->os_dep.reg_offset = 0xFFFFFFFF;
++	dwc_otg_device->os_dep.platformdev = _dev;
++
++	/*
++	 * Map the DWC_otg Core memory into virtual address space.
++	 */
++#ifdef LM_INTERFACE
++	dwc_otg_device->os_dep.base = ioremap(_dev->resource.start, SZ_256K);
++
++	if (!dwc_otg_device->os_dep.base) {
++		dev_err(&_dev->dev, "ioremap() failed\n");
++		DWC_FREE(dwc_otg_device);
++		return -ENOMEM;
++	}
++	dev_dbg(&_dev->dev, "base=0x%08x\n",
++		(unsigned)dwc_otg_device->os_dep.base);
++#elif defined(PCI_INTERFACE)
++	_dev->current_state = PCI_D0;
++	_dev->dev.power.power_state = PMSG_ON;
++
++	if (!_dev->irq) {
++		DWC_ERROR("Found HC with no IRQ. Check BIOS/PCI %s setup!",
++			  pci_name(_dev));
++		iounmap(dwc_otg_device->os_dep.base);
++		DWC_FREE(dwc_otg_device);
++		return -ENODEV;
++	}
++
++	dwc_otg_device->os_dep.rsrc_start = pci_resource_start(_dev, 0);
++	dwc_otg_device->os_dep.rsrc_len = pci_resource_len(_dev, 0);
++	DWC_DEBUGPL(DBG_ANY, "PCI resource: start=%08x, len=%08x\n",
++		    (unsigned)dwc_otg_device->os_dep.rsrc_start,
++		    (unsigned)dwc_otg_device->os_dep.rsrc_len);
++	if (!request_mem_region
++	    (dwc_otg_device->os_dep.rsrc_start, dwc_otg_device->os_dep.rsrc_len,
++	     "dwc_otg")) {
++		dev_dbg(&_dev->dev, "error requesting memory\n");
++		iounmap(dwc_otg_device->os_dep.base);
++		DWC_FREE(dwc_otg_device);
++		return -EFAULT;
++	}
++
++	dwc_otg_device->os_dep.base =
++	    ioremap_nocache(dwc_otg_device->os_dep.rsrc_start,
++			    dwc_otg_device->os_dep.rsrc_len);
++	if (dwc_otg_device->os_dep.base == NULL) {
++		dev_dbg(&_dev->dev, "error mapping memory\n");
++		release_mem_region(dwc_otg_device->os_dep.rsrc_start,
++				   dwc_otg_device->os_dep.rsrc_len);
++		iounmap(dwc_otg_device->os_dep.base);
++		DWC_FREE(dwc_otg_device);
++		return -EFAULT;
++	}
++	dev_dbg(&_dev->dev, "base=0x%p (before adjust) \n",
++		dwc_otg_device->os_dep.base);
++	dwc_otg_device->os_dep.base = (char *)dwc_otg_device->os_dep.base;
++	dev_dbg(&_dev->dev, "base=0x%p (after adjust) \n",
++		dwc_otg_device->os_dep.base);
++	dev_dbg(&_dev->dev, "%s: mapped PA 0x%x to VA 0x%p\n", __func__,
++		(unsigned)dwc_otg_device->os_dep.rsrc_start,
++		dwc_otg_device->os_dep.base);
++
++	pci_set_master(_dev);
++	pci_set_drvdata(_dev, dwc_otg_device);
++#elif defined(PLATFORM_INTERFACE)
++        DWC_DEBUGPL(DBG_ANY,"Platform resource: start=%08x, len=%08x\n",
++                    _dev->resource->start,
++                    _dev->resource->end - _dev->resource->start + 1);
++#if 1
++        if (!request_mem_region(_dev->resource[0].start,
++                                _dev->resource[0].end - _dev->resource[0].start + 1,
++                                "dwc_otg")) {
++          dev_dbg(&_dev->dev, "error reserving mapped memory\n");
++          retval = -EFAULT;
++          goto fail;
++        }
++
++	dwc_otg_device->os_dep.base = ioremap_nocache(_dev->resource[0].start,
++                                                      _dev->resource[0].end -
++                                                      _dev->resource[0].start+1);
++	if (fiq_enable)
++	{
++		if (!request_mem_region(_dev->resource[1].start,
++	                                _dev->resource[1].end - _dev->resource[1].start + 1,
++	                                "dwc_otg")) {
++	          dev_dbg(&_dev->dev, "error reserving mapped memory\n");
++	          retval = -EFAULT;
++	          goto fail;
++	}
++
++		dwc_otg_device->os_dep.mphi_base = ioremap_nocache(_dev->resource[1].start,
++							    _dev->resource[1].end -
++							    _dev->resource[1].start + 1);
++	}
++
++#else
++        {
++                struct map_desc desc = {
++                    .virtual = IO_ADDRESS((unsigned)_dev->resource->start),
++                    .pfn     = __phys_to_pfn((unsigned)_dev->resource->start),
++                    .length  = SZ_128K,
++                    .type    = MT_DEVICE
++                };
++                iotable_init(&desc, 1);
++                dwc_otg_device->os_dep.base = (void *)desc.virtual;
++        }
++#endif
++	if (!dwc_otg_device->os_dep.base) {
++		dev_err(&_dev->dev, "ioremap() failed\n");
++		retval = -ENOMEM;
++		goto fail;
++	}
++	dev_dbg(&_dev->dev, "base=0x%08x\n",
++                (unsigned)dwc_otg_device->os_dep.base);
++#endif
++
++	/*
++	 * Initialize driver data to point to the global DWC_otg
++	 * Device structure.
++	 */
++#ifdef LM_INTERFACE
++	lm_set_drvdata(_dev, dwc_otg_device);
++#elif defined(PLATFORM_INTERFACE)
++	platform_set_drvdata(_dev, dwc_otg_device);
++#endif
++	dev_dbg(&_dev->dev, "dwc_otg_device=0x%p\n", dwc_otg_device);
++
++	dwc_otg_device->core_if = dwc_otg_cil_init(dwc_otg_device->os_dep.base);
++        DWC_DEBUGPL(DBG_HCDV, "probe of device %p given core_if %p\n",
++                    dwc_otg_device, dwc_otg_device->core_if);//GRAYG
++
++	if (!dwc_otg_device->core_if) {
++		dev_err(&_dev->dev, "CIL initialization failed!\n");
++		retval = -ENOMEM;
++		goto fail;
++	}
++
++	dev_dbg(&_dev->dev, "Calling get_gsnpsid\n");
++	/*
++	 * Attempt to ensure this device is really a DWC_otg Controller.
++	 * Read and verify the SNPSID register contents. The value should be
++	 * 0x4F542XXX or 0x4F543XXX, which corresponds to either "OT2" or "OT3",
++	 * as in "OTG version 2.XX" or "OTG version 3.XX".
++	 */
++
++	if (((dwc_otg_get_gsnpsid(dwc_otg_device->core_if) & 0xFFFFF000) !=	0x4F542000) &&
++		((dwc_otg_get_gsnpsid(dwc_otg_device->core_if) & 0xFFFFF000) != 0x4F543000)) {
++		dev_err(&_dev->dev, "Bad value for SNPSID: 0x%08x\n",
++			dwc_otg_get_gsnpsid(dwc_otg_device->core_if));
++		retval = -EINVAL;
++		goto fail;
++	}
++
++	/*
++	 * Validate parameter values.
++	 */
++	dev_dbg(&_dev->dev, "Calling set_parameters\n");
++	if (set_parameters(dwc_otg_device->core_if)) {
++		retval = -EINVAL;
++		goto fail;
++	}
++
++	/*
++	 * Create Device Attributes in sysfs
++	 */
++	dev_dbg(&_dev->dev, "Calling attr_create\n");
++	dwc_otg_attr_create(_dev);
++
++	/*
++	 * Disable the global interrupt until all the interrupt
++	 * handlers are installed.
++	 */
++	dev_dbg(&_dev->dev, "Calling disable_global_interrupts\n");
++	dwc_otg_disable_global_interrupts(dwc_otg_device->core_if);
++
++	/*
++	 * Install the interrupt handler for the common interrupts before
++	 * enabling common interrupts in core_init below.
++	 */
++
++#if defined(PLATFORM_INTERFACE)
++	devirq = platform_get_irq(_dev, fiq_enable ? 0 : 1);
++#else
++	devirq = _dev->irq;
++#endif
++	DWC_DEBUGPL(DBG_CIL, "registering (common) handler for irq%d\n",
++		    devirq);
++	dev_dbg(&_dev->dev, "Calling request_irq(%d)\n", devirq);
++	retval = request_irq(devirq, dwc_otg_common_irq,
++                             IRQF_SHARED,
++                             "dwc_otg", dwc_otg_device);
++	if (retval) {
++		DWC_ERROR("request of irq%d failed\n", devirq);
++		retval = -EBUSY;
++		goto fail;
++	} else {
++		dwc_otg_device->common_irq_installed = 1;
++	}
++
++#ifndef IRQF_TRIGGER_LOW
++#if defined(LM_INTERFACE) || defined(PLATFORM_INTERFACE)
++	dev_dbg(&_dev->dev, "Calling set_irq_type\n");
++	set_irq_type(devirq,
++#if (LINUX_VERSION_CODE < KERNEL_VERSION(2,6,30))
++                     IRQT_LOW
++#else
++                     IRQ_TYPE_LEVEL_LOW
++#endif
++                    );
++#endif
++#endif /*IRQF_TRIGGER_LOW*/
++
++	/*
++	 * Initialize the DWC_otg core.
++	 */
++	dev_dbg(&_dev->dev, "Calling dwc_otg_core_init\n");
++	dwc_otg_core_init(dwc_otg_device->core_if);
++
++#ifndef DWC_HOST_ONLY
++	/*
++	 * Initialize the PCD
++	 */
++	dev_dbg(&_dev->dev, "Calling pcd_init\n");
++	retval = pcd_init(_dev);
++	if (retval != 0) {
++		DWC_ERROR("pcd_init failed\n");
++		dwc_otg_device->pcd = NULL;
++		goto fail;
++	}
++#endif
++#ifndef DWC_DEVICE_ONLY
++	/*
++	 * Initialize the HCD
++	 */
++	dev_dbg(&_dev->dev, "Calling hcd_init\n");
++	retval = hcd_init(_dev);
++	if (retval != 0) {
++		DWC_ERROR("hcd_init failed\n");
++		dwc_otg_device->hcd = NULL;
++		goto fail;
++	}
++#endif
++        /* Recover from drvdata having been overwritten by hcd_init() */
++#ifdef LM_INTERFACE
++	lm_set_drvdata(_dev, dwc_otg_device);
++#elif defined(PLATFORM_INTERFACE)
++	platform_set_drvdata(_dev, dwc_otg_device);
++#elif defined(PCI_INTERFACE)
++	pci_set_drvdata(_dev, dwc_otg_device);
++	dwc_otg_device->os_dep.pcidev = _dev;
++#endif
++
++	/*
++	 * Enable the global interrupt after all the interrupt
++	 * handlers are installed if there is no ADP support else
++	 * perform initial actions required for Internal ADP logic.
++	 */
++	if (!dwc_otg_get_param_adp_enable(dwc_otg_device->core_if)) {
++	        dev_dbg(&_dev->dev, "Calling enable_global_interrupts\n");
++		dwc_otg_enable_global_interrupts(dwc_otg_device->core_if);
++	        dev_dbg(&_dev->dev, "Done\n");
++	} else
++		dwc_otg_adp_start(dwc_otg_device->core_if,
++							dwc_otg_is_host_mode(dwc_otg_device->core_if));
++
++	return 0;
++
++fail:
++	dwc_otg_driver_remove(_dev);
++	return retval;
++}
++
++/**
++ * This structure defines the methods to be called by a bus driver
++ * during the lifecycle of a device on that bus. Both drivers and
++ * devices are registered with a bus driver. The bus driver matches
++ * devices to drivers based on information in the device and driver
++ * structures.
++ *
++ * The probe function is called when the bus driver matches a device
++ * to this driver. The remove function is called when a device is
++ * unregistered with the bus driver.
++ */
++#ifdef LM_INTERFACE
++static struct lm_driver dwc_otg_driver = {
++	.drv = {.name = (char *)dwc_driver_name,},
++	.probe = dwc_otg_driver_probe,
++	.remove = dwc_otg_driver_remove,
++        // 'suspend' and 'resume' absent
++};
++#elif defined(PCI_INTERFACE)
++static const struct pci_device_id pci_ids[] = { {
++						 PCI_DEVICE(0x16c3, 0xabcd),
++						 .driver_data =
++						 (unsigned long)0xdeadbeef,
++						 }, { /* end: all zeroes */ }
++};
++
++MODULE_DEVICE_TABLE(pci, pci_ids);
++
++/* pci driver glue; this is a "new style" PCI driver module */
++static struct pci_driver dwc_otg_driver = {
++	.name = "dwc_otg",
++	.id_table = pci_ids,
++
++	.probe = dwc_otg_driver_probe,
++	.remove = dwc_otg_driver_remove,
++
++	.driver = {
++		   .name = (char *)dwc_driver_name,
++		   },
++};
++#elif defined(PLATFORM_INTERFACE)
++static struct platform_device_id platform_ids[] = {
++        {
++              .name = "bcm2708_usb",
++              .driver_data = (kernel_ulong_t) 0xdeadbeef,
++        },
++        { /* end: all zeroes */ }
++};
++MODULE_DEVICE_TABLE(platform, platform_ids);
++
++static const struct of_device_id dwc_otg_of_match_table[] = {
++	{ .compatible = "brcm,bcm2708-usb", },
++	{},
++};
++MODULE_DEVICE_TABLE(of, dwc_otg_of_match_table);
++
++static struct platform_driver dwc_otg_driver = {
++	.driver = {
++		.name = (char *)dwc_driver_name,
++		.of_match_table = dwc_otg_of_match_table,
++		},
++        .id_table = platform_ids,
++
++	.probe = dwc_otg_driver_probe,
++	.remove = dwc_otg_driver_remove,
++        // no 'shutdown', 'suspend', 'resume', 'suspend_late' or 'resume_early'
++};
++#endif
++
++/**
++ * This function is called when the dwc_otg_driver is installed with the
++ * insmod command. It registers the dwc_otg_driver structure with the
++ * appropriate bus driver. This will cause the dwc_otg_driver_probe function
++ * to be called. In addition, the bus driver will automatically expose
++ * attributes defined for the device and driver in the special sysfs file
++ * system.
++ *
++ * @return
++ */
++static int __init dwc_otg_driver_init(void)
++{
++	int retval = 0;
++	int error;
++        struct device_driver *drv;
++
++	if(fiq_fsm_enable && !fiq_enable) {
++		printk(KERN_WARNING "dwc_otg: fiq_fsm_enable was set without fiq_enable! Correcting.\n");
++		fiq_enable = 1;
++	}
++
++	printk(KERN_INFO "%s: version %s (%s bus)\n", dwc_driver_name,
++	       DWC_DRIVER_VERSION,
++#ifdef LM_INTERFACE
++               "logicmodule");
++	retval = lm_driver_register(&dwc_otg_driver);
++        drv = &dwc_otg_driver.drv;
++#elif defined(PCI_INTERFACE)
++               "pci");
++	retval = pci_register_driver(&dwc_otg_driver);
++        drv = &dwc_otg_driver.driver;
++#elif defined(PLATFORM_INTERFACE)
++               "platform");
++	retval = platform_driver_register(&dwc_otg_driver);
++        drv = &dwc_otg_driver.driver;
++#endif
++	if (retval < 0) {
++		printk(KERN_ERR "%s retval=%d\n", __func__, retval);
++		return retval;
++	}
++	printk(KERN_DEBUG "dwc_otg: FIQ %s\n", fiq_enable ? "enabled":"disabled");
++	printk(KERN_DEBUG "dwc_otg: NAK holdoff %s\n", nak_holdoff ? "enabled":"disabled");
++	printk(KERN_DEBUG "dwc_otg: FIQ split-transaction FSM %s\n", fiq_fsm_enable ? "enabled":"disabled");
++
++	error = driver_create_file(drv, &driver_attr_version);
++#ifdef DEBUG
++	error = driver_create_file(drv, &driver_attr_debuglevel);
++#endif
++	return retval;
++}
++
++module_init(dwc_otg_driver_init);
++
++/**
++ * This function is called when the driver is removed from the kernel
++ * with the rmmod command. The driver unregisters itself with its bus
++ * driver.
++ *
++ */
++static void __exit dwc_otg_driver_cleanup(void)
++{
++	printk(KERN_DEBUG "dwc_otg_driver_cleanup()\n");
++
++#ifdef LM_INTERFACE
++	driver_remove_file(&dwc_otg_driver.drv, &driver_attr_debuglevel);
++	driver_remove_file(&dwc_otg_driver.drv, &driver_attr_version);
++	lm_driver_unregister(&dwc_otg_driver);
++#elif defined(PCI_INTERFACE)
++	driver_remove_file(&dwc_otg_driver.driver, &driver_attr_debuglevel);
++	driver_remove_file(&dwc_otg_driver.driver, &driver_attr_version);
++	pci_unregister_driver(&dwc_otg_driver);
++#elif defined(PLATFORM_INTERFACE)
++	driver_remove_file(&dwc_otg_driver.driver, &driver_attr_debuglevel);
++	driver_remove_file(&dwc_otg_driver.driver, &driver_attr_version);
++	platform_driver_unregister(&dwc_otg_driver);
++#endif
++
++	printk(KERN_INFO "%s module removed\n", dwc_driver_name);
++}
++
++module_exit(dwc_otg_driver_cleanup);
++
++MODULE_DESCRIPTION(DWC_DRIVER_DESC);
++MODULE_AUTHOR("Synopsys Inc.");
++MODULE_LICENSE("GPL");
++
++module_param_named(otg_cap, dwc_otg_module_params.otg_cap, int, 0444);
++MODULE_PARM_DESC(otg_cap, "OTG Capabilities 0=HNP&SRP 1=SRP Only 2=None");
++module_param_named(opt, dwc_otg_module_params.opt, int, 0444);
++MODULE_PARM_DESC(opt, "OPT Mode");
++module_param_named(dma_enable, dwc_otg_module_params.dma_enable, int, 0444);
++MODULE_PARM_DESC(dma_enable, "DMA Mode 0=Slave 1=DMA enabled");
++
++module_param_named(dma_desc_enable, dwc_otg_module_params.dma_desc_enable, int,
++		   0444);
++MODULE_PARM_DESC(dma_desc_enable,
++		 "DMA Desc Mode 0=Address DMA 1=DMA Descriptor enabled");
++
++module_param_named(dma_burst_size, dwc_otg_module_params.dma_burst_size, int,
++		   0444);
++MODULE_PARM_DESC(dma_burst_size,
++		 "DMA Burst Size 1, 4, 8, 16, 32, 64, 128, 256");
++module_param_named(speed, dwc_otg_module_params.speed, int, 0444);
++MODULE_PARM_DESC(speed, "Speed 0=High Speed 1=Full Speed");
++module_param_named(host_support_fs_ls_low_power,
++		   dwc_otg_module_params.host_support_fs_ls_low_power, int,
++		   0444);
++MODULE_PARM_DESC(host_support_fs_ls_low_power,
++		 "Support Low Power w/FS or LS 0=Support 1=Don't Support");
++module_param_named(host_ls_low_power_phy_clk,
++		   dwc_otg_module_params.host_ls_low_power_phy_clk, int, 0444);
++MODULE_PARM_DESC(host_ls_low_power_phy_clk,
++		 "Low Speed Low Power Clock 0=48Mhz 1=6Mhz");
++module_param_named(enable_dynamic_fifo,
++		   dwc_otg_module_params.enable_dynamic_fifo, int, 0444);
++MODULE_PARM_DESC(enable_dynamic_fifo, "0=cC Setting 1=Allow Dynamic Sizing");
++module_param_named(data_fifo_size, dwc_otg_module_params.data_fifo_size, int,
++		   0444);
++MODULE_PARM_DESC(data_fifo_size,
++		 "Total number of words in the data FIFO memory 32-32768");
++module_param_named(dev_rx_fifo_size, dwc_otg_module_params.dev_rx_fifo_size,
++		   int, 0444);
++MODULE_PARM_DESC(dev_rx_fifo_size, "Number of words in the Rx FIFO 16-32768");
++module_param_named(dev_nperio_tx_fifo_size,
++		   dwc_otg_module_params.dev_nperio_tx_fifo_size, int, 0444);
++MODULE_PARM_DESC(dev_nperio_tx_fifo_size,
++		 "Number of words in the non-periodic Tx FIFO 16-32768");
++module_param_named(dev_perio_tx_fifo_size_1,
++		   dwc_otg_module_params.dev_perio_tx_fifo_size[0], int, 0444);
++MODULE_PARM_DESC(dev_perio_tx_fifo_size_1,
++		 "Number of words in the periodic Tx FIFO 4-768");
++module_param_named(dev_perio_tx_fifo_size_2,
++		   dwc_otg_module_params.dev_perio_tx_fifo_size[1], int, 0444);
++MODULE_PARM_DESC(dev_perio_tx_fifo_size_2,
++		 "Number of words in the periodic Tx FIFO 4-768");
++module_param_named(dev_perio_tx_fifo_size_3,
++		   dwc_otg_module_params.dev_perio_tx_fifo_size[2], int, 0444);
++MODULE_PARM_DESC(dev_perio_tx_fifo_size_3,
++		 "Number of words in the periodic Tx FIFO 4-768");
++module_param_named(dev_perio_tx_fifo_size_4,
++		   dwc_otg_module_params.dev_perio_tx_fifo_size[3], int, 0444);
++MODULE_PARM_DESC(dev_perio_tx_fifo_size_4,
++		 "Number of words in the periodic Tx FIFO 4-768");
++module_param_named(dev_perio_tx_fifo_size_5,
++		   dwc_otg_module_params.dev_perio_tx_fifo_size[4], int, 0444);
++MODULE_PARM_DESC(dev_perio_tx_fifo_size_5,
++		 "Number of words in the periodic Tx FIFO 4-768");
++module_param_named(dev_perio_tx_fifo_size_6,
++		   dwc_otg_module_params.dev_perio_tx_fifo_size[5], int, 0444);
++MODULE_PARM_DESC(dev_perio_tx_fifo_size_6,
++		 "Number of words in the periodic Tx FIFO 4-768");
++module_param_named(dev_perio_tx_fifo_size_7,
++		   dwc_otg_module_params.dev_perio_tx_fifo_size[6], int, 0444);
++MODULE_PARM_DESC(dev_perio_tx_fifo_size_7,
++		 "Number of words in the periodic Tx FIFO 4-768");
++module_param_named(dev_perio_tx_fifo_size_8,
++		   dwc_otg_module_params.dev_perio_tx_fifo_size[7], int, 0444);
++MODULE_PARM_DESC(dev_perio_tx_fifo_size_8,
++		 "Number of words in the periodic Tx FIFO 4-768");
++module_param_named(dev_perio_tx_fifo_size_9,
++		   dwc_otg_module_params.dev_perio_tx_fifo_size[8], int, 0444);
++MODULE_PARM_DESC(dev_perio_tx_fifo_size_9,
++		 "Number of words in the periodic Tx FIFO 4-768");
++module_param_named(dev_perio_tx_fifo_size_10,
++		   dwc_otg_module_params.dev_perio_tx_fifo_size[9], int, 0444);
++MODULE_PARM_DESC(dev_perio_tx_fifo_size_10,
++		 "Number of words in the periodic Tx FIFO 4-768");
++module_param_named(dev_perio_tx_fifo_size_11,
++		   dwc_otg_module_params.dev_perio_tx_fifo_size[10], int, 0444);
++MODULE_PARM_DESC(dev_perio_tx_fifo_size_11,
++		 "Number of words in the periodic Tx FIFO 4-768");
++module_param_named(dev_perio_tx_fifo_size_12,
++		   dwc_otg_module_params.dev_perio_tx_fifo_size[11], int, 0444);
++MODULE_PARM_DESC(dev_perio_tx_fifo_size_12,
++		 "Number of words in the periodic Tx FIFO 4-768");
++module_param_named(dev_perio_tx_fifo_size_13,
++		   dwc_otg_module_params.dev_perio_tx_fifo_size[12], int, 0444);
++MODULE_PARM_DESC(dev_perio_tx_fifo_size_13,
++		 "Number of words in the periodic Tx FIFO 4-768");
++module_param_named(dev_perio_tx_fifo_size_14,
++		   dwc_otg_module_params.dev_perio_tx_fifo_size[13], int, 0444);
++MODULE_PARM_DESC(dev_perio_tx_fifo_size_14,
++		 "Number of words in the periodic Tx FIFO 4-768");
++module_param_named(dev_perio_tx_fifo_size_15,
++		   dwc_otg_module_params.dev_perio_tx_fifo_size[14], int, 0444);
++MODULE_PARM_DESC(dev_perio_tx_fifo_size_15,
++		 "Number of words in the periodic Tx FIFO 4-768");
++module_param_named(host_rx_fifo_size, dwc_otg_module_params.host_rx_fifo_size,
++		   int, 0444);
++MODULE_PARM_DESC(host_rx_fifo_size, "Number of words in the Rx FIFO 16-32768");
++module_param_named(host_nperio_tx_fifo_size,
++		   dwc_otg_module_params.host_nperio_tx_fifo_size, int, 0444);
++MODULE_PARM_DESC(host_nperio_tx_fifo_size,
++		 "Number of words in the non-periodic Tx FIFO 16-32768");
++module_param_named(host_perio_tx_fifo_size,
++		   dwc_otg_module_params.host_perio_tx_fifo_size, int, 0444);
++MODULE_PARM_DESC(host_perio_tx_fifo_size,
++		 "Number of words in the host periodic Tx FIFO 16-32768");
++module_param_named(max_transfer_size, dwc_otg_module_params.max_transfer_size,
++		   int, 0444);
++/** @todo Set the max to 512K, modify checks */
++MODULE_PARM_DESC(max_transfer_size,
++		 "The maximum transfer size supported in bytes 2047-65535");
++module_param_named(max_packet_count, dwc_otg_module_params.max_packet_count,
++		   int, 0444);
++MODULE_PARM_DESC(max_packet_count,
++		 "The maximum number of packets in a transfer 15-511");
++module_param_named(host_channels, dwc_otg_module_params.host_channels, int,
++		   0444);
++MODULE_PARM_DESC(host_channels,
++		 "The number of host channel registers to use 1-16");
++module_param_named(dev_endpoints, dwc_otg_module_params.dev_endpoints, int,
++		   0444);
++MODULE_PARM_DESC(dev_endpoints,
++		 "The number of endpoints in addition to EP0 available for device mode 1-15");
++module_param_named(phy_type, dwc_otg_module_params.phy_type, int, 0444);
++MODULE_PARM_DESC(phy_type, "0=Reserved 1=UTMI+ 2=ULPI");
++module_param_named(phy_utmi_width, dwc_otg_module_params.phy_utmi_width, int,
++		   0444);
++MODULE_PARM_DESC(phy_utmi_width, "Specifies the UTMI+ Data Width 8 or 16 bits");
++module_param_named(phy_ulpi_ddr, dwc_otg_module_params.phy_ulpi_ddr, int, 0444);
++MODULE_PARM_DESC(phy_ulpi_ddr,
++		 "ULPI at double or single data rate 0=Single 1=Double");
++module_param_named(phy_ulpi_ext_vbus, dwc_otg_module_params.phy_ulpi_ext_vbus,
++		   int, 0444);
++MODULE_PARM_DESC(phy_ulpi_ext_vbus,
++		 "ULPI PHY using internal or external vbus 0=Internal");
++module_param_named(i2c_enable, dwc_otg_module_params.i2c_enable, int, 0444);
++MODULE_PARM_DESC(i2c_enable, "FS PHY Interface");
++module_param_named(ulpi_fs_ls, dwc_otg_module_params.ulpi_fs_ls, int, 0444);
++MODULE_PARM_DESC(ulpi_fs_ls, "ULPI PHY FS/LS mode only");
++module_param_named(ts_dline, dwc_otg_module_params.ts_dline, int, 0444);
++MODULE_PARM_DESC(ts_dline, "Term select Dline pulsing for all PHYs");
++module_param_named(debug, g_dbg_lvl, int, 0444);
++MODULE_PARM_DESC(debug, "");
++
++module_param_named(en_multiple_tx_fifo,
++		   dwc_otg_module_params.en_multiple_tx_fifo, int, 0444);
++MODULE_PARM_DESC(en_multiple_tx_fifo,
++		 "Dedicated Non Periodic Tx FIFOs 0=disabled 1=enabled");
++module_param_named(dev_tx_fifo_size_1,
++		   dwc_otg_module_params.dev_tx_fifo_size[0], int, 0444);
++MODULE_PARM_DESC(dev_tx_fifo_size_1, "Number of words in the Tx FIFO 4-768");
++module_param_named(dev_tx_fifo_size_2,
++		   dwc_otg_module_params.dev_tx_fifo_size[1], int, 0444);
++MODULE_PARM_DESC(dev_tx_fifo_size_2, "Number of words in the Tx FIFO 4-768");
++module_param_named(dev_tx_fifo_size_3,
++		   dwc_otg_module_params.dev_tx_fifo_size[2], int, 0444);
++MODULE_PARM_DESC(dev_tx_fifo_size_3, "Number of words in the Tx FIFO 4-768");
++module_param_named(dev_tx_fifo_size_4,
++		   dwc_otg_module_params.dev_tx_fifo_size[3], int, 0444);
++MODULE_PARM_DESC(dev_tx_fifo_size_4, "Number of words in the Tx FIFO 4-768");
++module_param_named(dev_tx_fifo_size_5,
++		   dwc_otg_module_params.dev_tx_fifo_size[4], int, 0444);
++MODULE_PARM_DESC(dev_tx_fifo_size_5, "Number of words in the Tx FIFO 4-768");
++module_param_named(dev_tx_fifo_size_6,
++		   dwc_otg_module_params.dev_tx_fifo_size[5], int, 0444);
++MODULE_PARM_DESC(dev_tx_fifo_size_6, "Number of words in the Tx FIFO 4-768");
++module_param_named(dev_tx_fifo_size_7,
++		   dwc_otg_module_params.dev_tx_fifo_size[6], int, 0444);
++MODULE_PARM_DESC(dev_tx_fifo_size_7, "Number of words in the Tx FIFO 4-768");
++module_param_named(dev_tx_fifo_size_8,
++		   dwc_otg_module_params.dev_tx_fifo_size[7], int, 0444);
++MODULE_PARM_DESC(dev_tx_fifo_size_8, "Number of words in the Tx FIFO 4-768");
++module_param_named(dev_tx_fifo_size_9,
++		   dwc_otg_module_params.dev_tx_fifo_size[8], int, 0444);
++MODULE_PARM_DESC(dev_tx_fifo_size_9, "Number of words in the Tx FIFO 4-768");
++module_param_named(dev_tx_fifo_size_10,
++		   dwc_otg_module_params.dev_tx_fifo_size[9], int, 0444);
++MODULE_PARM_DESC(dev_tx_fifo_size_10, "Number of words in the Tx FIFO 4-768");
++module_param_named(dev_tx_fifo_size_11,
++		   dwc_otg_module_params.dev_tx_fifo_size[10], int, 0444);
++MODULE_PARM_DESC(dev_tx_fifo_size_11, "Number of words in the Tx FIFO 4-768");
++module_param_named(dev_tx_fifo_size_12,
++		   dwc_otg_module_params.dev_tx_fifo_size[11], int, 0444);
++MODULE_PARM_DESC(dev_tx_fifo_size_12, "Number of words in the Tx FIFO 4-768");
++module_param_named(dev_tx_fifo_size_13,
++		   dwc_otg_module_params.dev_tx_fifo_size[12], int, 0444);
++MODULE_PARM_DESC(dev_tx_fifo_size_13, "Number of words in the Tx FIFO 4-768");
++module_param_named(dev_tx_fifo_size_14,
++		   dwc_otg_module_params.dev_tx_fifo_size[13], int, 0444);
++MODULE_PARM_DESC(dev_tx_fifo_size_14, "Number of words in the Tx FIFO 4-768");
++module_param_named(dev_tx_fifo_size_15,
++		   dwc_otg_module_params.dev_tx_fifo_size[14], int, 0444);
++MODULE_PARM_DESC(dev_tx_fifo_size_15, "Number of words in the Tx FIFO 4-768");
++
++module_param_named(thr_ctl, dwc_otg_module_params.thr_ctl, int, 0444);
++MODULE_PARM_DESC(thr_ctl,
++		 "Thresholding enable flag bit 0 - non ISO Tx thr., 1 - ISO Tx thr., 2 - Rx thr.- bit 0=disabled 1=enabled");
++module_param_named(tx_thr_length, dwc_otg_module_params.tx_thr_length, int,
++		   0444);
++MODULE_PARM_DESC(tx_thr_length, "Tx Threshold length in 32 bit DWORDs");
++module_param_named(rx_thr_length, dwc_otg_module_params.rx_thr_length, int,
++		   0444);
++MODULE_PARM_DESC(rx_thr_length, "Rx Threshold length in 32 bit DWORDs");
++
++module_param_named(pti_enable, dwc_otg_module_params.pti_enable, int, 0444);
++module_param_named(mpi_enable, dwc_otg_module_params.mpi_enable, int, 0444);
++module_param_named(lpm_enable, dwc_otg_module_params.lpm_enable, int, 0444);
++MODULE_PARM_DESC(lpm_enable, "LPM Enable 0=LPM Disabled 1=LPM Enabled");
++module_param_named(ic_usb_cap, dwc_otg_module_params.ic_usb_cap, int, 0444);
++MODULE_PARM_DESC(ic_usb_cap,
++		 "IC_USB Capability 0=IC_USB Disabled 1=IC_USB Enabled");
++module_param_named(ahb_thr_ratio, dwc_otg_module_params.ahb_thr_ratio, int,
++		   0444);
++MODULE_PARM_DESC(ahb_thr_ratio, "AHB Threshold Ratio");
++module_param_named(power_down, dwc_otg_module_params.power_down, int, 0444);
++MODULE_PARM_DESC(power_down, "Power Down Mode");
++module_param_named(reload_ctl, dwc_otg_module_params.reload_ctl, int, 0444);
++MODULE_PARM_DESC(reload_ctl, "HFIR Reload Control");
++module_param_named(dev_out_nak, dwc_otg_module_params.dev_out_nak, int, 0444);
++MODULE_PARM_DESC(dev_out_nak, "Enable Device OUT NAK");
++module_param_named(cont_on_bna, dwc_otg_module_params.cont_on_bna, int, 0444);
++MODULE_PARM_DESC(cont_on_bna, "Enable Continue on BNA");
++module_param_named(ahb_single, dwc_otg_module_params.ahb_single, int, 0444);
++MODULE_PARM_DESC(ahb_single, "Enable AHB Single Support");
++module_param_named(adp_enable, dwc_otg_module_params.adp_enable, int, 0444);
++MODULE_PARM_DESC(adp_enable, "ADP Enable 0=ADP Disabled 1=ADP Enabled");
++module_param_named(otg_ver, dwc_otg_module_params.otg_ver, int, 0444);
++MODULE_PARM_DESC(otg_ver, "OTG revision supported 0=OTG 1.3 1=OTG 2.0");
++module_param(microframe_schedule, bool, 0444);
++MODULE_PARM_DESC(microframe_schedule, "Enable the microframe scheduler");
++
++module_param(fiq_enable, bool, 0444);
++MODULE_PARM_DESC(fiq_enable, "Enable the FIQ");
++module_param(nak_holdoff, ushort, 0644);
++MODULE_PARM_DESC(nak_holdoff, "Throttle duration for bulk split-transaction endpoints on a NAK. Default 8");
++module_param(fiq_fsm_enable, bool, 0444);
++MODULE_PARM_DESC(fiq_fsm_enable, "Enable the FIQ to perform split transactions as defined by fiq_fsm_mask");
++module_param(fiq_fsm_mask, ushort, 0444);
++MODULE_PARM_DESC(fiq_fsm_mask, "Bitmask of transactions to perform in the FIQ.\n"
++					"Bit 0 : Non-periodic split transactions\n"
++					"Bit 1 : Periodic split transactions\n"
++					"Bit 2 : High-speed multi-transfer isochronous\n"
++					"All other bits should be set 0.");
++
++
++/** @page "Module Parameters"
++ *
++ * The following parameters may be specified when starting the module.
++ * These parameters define how the DWC_otg controller should be
++ * configured. Parameter values are passed to the CIL initialization
++ * function dwc_otg_cil_init
++ *
++ * Example: <code>modprobe dwc_otg speed=1 otg_cap=1</code>
++ *
++
++ <table>
++ <tr><td>Parameter Name</td><td>Meaning</td></tr>
++
++ <tr>
++ <td>otg_cap</td>
++ <td>Specifies the OTG capabilities. The driver will automatically detect the
++ value for this parameter if none is specified.
++ - 0: HNP and SRP capable (default, if available)
++ - 1: SRP Only capable
++ - 2: No HNP/SRP capable
++ </td></tr>
++
++ <tr>
++ <td>dma_enable</td>
++ <td>Specifies whether to use slave or DMA mode for accessing the data FIFOs.
++ The driver will automatically detect the value for this parameter if none is
++ specified.
++ - 0: Slave
++ - 1: DMA (default, if available)
++ </td></tr>
++
++ <tr>
++ <td>dma_burst_size</td>
++ <td>The DMA Burst size (applicable only for External DMA Mode).
++ - Values: 1, 4, 8, 16, 32, 64, 128, 256 (default 32)
++ </td></tr>
++
++ <tr>
++ <td>speed</td>
++ <td>Specifies the maximum speed of operation in host and device mode. The
++ actual speed depends on the speed of the attached device and the value of
++ phy_type.
++ - 0: High Speed (default)
++ - 1: Full Speed
++ </td></tr>
++
++ <tr>
++ <td>host_support_fs_ls_low_power</td>
++ <td>Specifies whether low power mode is supported when attached to a Full
++ Speed or Low Speed device in host mode.
++ - 0: Don't support low power mode (default)
++ - 1: Support low power mode
++ </td></tr>
++
++ <tr>
++ <td>host_ls_low_power_phy_clk</td>
++ <td>Specifies the PHY clock rate in low power mode when connected to a Low
++ Speed device in host mode. This parameter is applicable only if
++ HOST_SUPPORT_FS_LS_LOW_POWER is enabled.
++ - 0: 48 MHz (default)
++ - 1: 6 MHz
++ </td></tr>
++
++ <tr>
++ <td>enable_dynamic_fifo</td>
++ <td> Specifies whether FIFOs may be resized by the driver software.
++ - 0: Use cC FIFO size parameters
++ - 1: Allow dynamic FIFO sizing (default)
++ </td></tr>
++
++ <tr>
++ <td>data_fifo_size</td>
++ <td>Total number of 4-byte words in the data FIFO memory. This memory
++ includes the Rx FIFO, non-periodic Tx FIFO, and periodic Tx FIFOs.
++ - Values: 32 to 32768 (default 8192)
++
++ Note: The total FIFO memory depth in the FPGA configuration is 8192.
++ </td></tr>
++
++ <tr>
++ <td>dev_rx_fifo_size</td>
++ <td>Number of 4-byte words in the Rx FIFO in device mode when dynamic
++ FIFO sizing is enabled.
++ - Values: 16 to 32768 (default 1064)
++ </td></tr>
++
++ <tr>
++ <td>dev_nperio_tx_fifo_size</td>
++ <td>Number of 4-byte words in the non-periodic Tx FIFO in device mode when
++ dynamic FIFO sizing is enabled.
++ - Values: 16 to 32768 (default 1024)
++ </td></tr>
++
++ <tr>
++ <td>dev_perio_tx_fifo_size_n (n = 1 to 15)</td>
++ <td>Number of 4-byte words in each of the periodic Tx FIFOs in device mode
++ when dynamic FIFO sizing is enabled.
++ - Values: 4 to 768 (default 256)
++ </td></tr>
++
++ <tr>
++ <td>host_rx_fifo_size</td>
++ <td>Number of 4-byte words in the Rx FIFO in host mode when dynamic FIFO
++ sizing is enabled.
++ - Values: 16 to 32768 (default 1024)
++ </td></tr>
++
++ <tr>
++ <td>host_nperio_tx_fifo_size</td>
++ <td>Number of 4-byte words in the non-periodic Tx FIFO in host mode when
++ dynamic FIFO sizing is enabled in the core.
++ - Values: 16 to 32768 (default 1024)
++ </td></tr>
++
++ <tr>
++ <td>host_perio_tx_fifo_size</td>
++ <td>Number of 4-byte words in the host periodic Tx FIFO when dynamic FIFO
++ sizing is enabled.
++ - Values: 16 to 32768 (default 1024)
++ </td></tr>
++
++ <tr>
++ <td>max_transfer_size</td>
++ <td>The maximum transfer size supported in bytes.
++ - Values: 2047 to 65,535 (default 65,535)
++ </td></tr>
++
++ <tr>
++ <td>max_packet_count</td>
++ <td>The maximum number of packets in a transfer.
++ - Values: 15 to 511 (default 511)
++ </td></tr>
++
++ <tr>
++ <td>host_channels</td>
++ <td>The number of host channel registers to use.
++ - Values: 1 to 16 (default 12)
++
++ Note: The FPGA configuration supports a maximum of 12 host channels.
++ </td></tr>
++
++ <tr>
++ <td>dev_endpoints</td>
++ <td>The number of endpoints in addition to EP0 available for device mode
++ operations.
++ - Values: 1 to 15 (default 6 IN and OUT)
++
++ Note: The FPGA configuration supports a maximum of 6 IN and OUT endpoints in
++ addition to EP0.
++ </td></tr>
++
++ <tr>
++ <td>phy_type</td>
++ <td>Specifies the type of PHY interface to use. By default, the driver will
++ automatically detect the phy_type.
++ - 0: Full Speed
++ - 1: UTMI+ (default, if available)
++ - 2: ULPI
++ </td></tr>
++
++ <tr>
++ <td>phy_utmi_width</td>
++ <td>Specifies the UTMI+ Data Width. This parameter is applicable for a
++ phy_type of UTMI+. Also, this parameter is applicable only if the
++ OTG_HSPHY_WIDTH cC parameter was set to "8 and 16 bits", meaning that the
++ core has been configured to work at either data path width.
++ - Values: 8 or 16 bits (default 16)
++ </td></tr>
++
++ <tr>
++ <td>phy_ulpi_ddr</td>
++ <td>Specifies whether the ULPI operates at double or single data rate. This
++ parameter is only applicable if phy_type is ULPI.
++ - 0: single data rate ULPI interface with 8 bit wide data bus (default)
++ - 1: double data rate ULPI interface with 4 bit wide data bus
++ </td></tr>
++
++ <tr>
++ <td>i2c_enable</td>
++ <td>Specifies whether to use the I2C interface for full speed PHY. This
++ parameter is only applicable if PHY_TYPE is FS.
++ - 0: Disabled (default)
++ - 1: Enabled
++ </td></tr>
++
++ <tr>
++ <td>ulpi_fs_ls</td>
++ <td>Specifies whether to use ULPI FS/LS mode only.
++ - 0: Disabled (default)
++ - 1: Enabled
++ </td></tr>
++
++ <tr>
++ <td>ts_dline</td>
++ <td>Specifies whether term select D-Line pulsing for all PHYs is enabled.
++ - 0: Disabled (default)
++ - 1: Enabled
++ </td></tr>
++
++ <tr>
++ <td>en_multiple_tx_fifo</td>
++ <td>Specifies whether dedicated Tx FIFOs are enabled for non-periodic IN EPs.
++ The driver will automatically detect the value for this parameter if none is
++ specified.
++ - 0: Disabled
++ - 1: Enabled (default, if available)
++ </td></tr>
++
++ <tr>
++ <td>dev_tx_fifo_size_n (n = 1 to 15)</td>
++ <td>Number of 4-byte words in each of the Tx FIFOs in device mode
++ when dynamic FIFO sizing is enabled.
++ - Values: 4 to 768 (default 256)
++ </td></tr>
++
++ <tr>
++ <td>tx_thr_length</td>
++ <td>Transmit Threshold length in 32 bit double words
++ - Values: 8 to 128 (default 64)
++ </td></tr>
++
++ <tr>
++ <td>rx_thr_length</td>
++ <td>Receive Threshold length in 32 bit double words
++ - Values: 8 to 128 (default 64)
++ </td></tr>
++
++<tr>
++ <td>thr_ctl</td>
++ <td>Specifies whether to enable Thresholding for Device mode. Bits 0, 1 and 2 of
++ this parameter specify whether thresholding is enabled for non-ISO Tx, ISO Tx and
++ Rx transfers respectively.
++ The driver will automatically detect the value for this parameter if none is
++ specified.
++ - Values: 0 to 7 (default 0)
++ Bit values indicate:
++ - 0: Thresholding disabled
++ - 1: Thresholding enabled
++ </td></tr>
++
++<tr>
++ <td>dma_desc_enable</td>
++ <td>Specifies whether to enable Descriptor DMA mode.
++ The driver will automatically detect the value for this parameter if none is
++ specified.
++ - 0: Descriptor DMA disabled
++ - 1: Descriptor DMA (default, if available)
++ </td></tr>
++
++<tr>
++ <td>mpi_enable</td>
++ <td>Specifies whether to enable MPI enhancement mode.
++ The driver will automatically detect the value for this parameter if none is
++ specified.
++ - 0: MPI disabled (default)
++ - 1: MPI enable
++ </td></tr>
++
++<tr>
++ <td>pti_enable</td>
++ <td>Specifies whether to enable PTI enhancement support.
++ The driver will automatically detect the value for this parameter if none is
++ specified.
++ - 0: PTI disabled (default)
++ - 1: PTI enable
++ </td></tr>
++
++<tr>
++ <td>lpm_enable</td>
++ <td>Specifies whether to enable LPM support.
++ The driver will automatically detect the value for this parameter if none is
++ specified.
++ - 0: LPM disabled
++ - 1: LPM enable (default, if available)
++ </td></tr>
++
++<tr>
++ <td>ic_usb_cap</td>
++ <td>Specifies whether to enable IC_USB capability.
++ The driver will automatically detect the value for this parameter if none is
++ specified.
++ - 0: IC_USB disabled (default, if available)
++ - 1: IC_USB enable
++ </td></tr>
++
++<tr>
++ <td>ahb_thr_ratio</td>
++ <td>Specifies AHB Threshold ratio.
++ - Values: 0 to 3 (default 0)
++ </td></tr>
++
++<tr>
++ <td>power_down</td>
++ <td>Specifies Power Down (Hibernation) Mode.
++ The driver will automatically detect the value for this parameter if none is
++ specified.
++ - 0: Power Down disabled (default)
++ - 2: Power Down enabled
++ </td></tr>
++
++ <tr>
++ <td>reload_ctl</td>
++ <td>Specifies whether dynamic reloading of the HFIR register is allowed during
++ run time. The driver will automatically detect the value for this parameter if
++ none is specified. In case the HFIR value is reloaded when HFIR.RldCtrl == 1'b0
++ the core might misbehave.
++ - 0: Reload Control disabled (default)
++ - 1: Reload Control enabled
++ </td></tr>
++
++ <tr>
++ <td>dev_out_nak</td>
++ <td>Specifies whether the Device OUT NAK enhancement is enabled or not.
++ The driver will automatically detect the value for this parameter if
++ none is specified. This parameter is valid only when OTG_EN_DESC_DMA == 1b1.
++ - 0: The core does not set NAK after Bulk OUT transfer complete (default)
++ - 1: The core sets NAK after Bulk OUT transfer complete
++ </td></tr>
++
++ <tr>
++ <td>cont_on_bna</td>
++ <td>Specifies whether Continue on BNA is enabled or not.
++ After receiving a BNA interrupt the core disables the endpoint; when the
++ endpoint is re-enabled by the application:
++ - 0: Core starts processing from the DOEPDMA descriptor (default)
++ - 1: Core starts processing from the descriptor which received the BNA.
++ This parameter is valid only when OTG_EN_DESC_DMA == 1b1.
++ </td></tr>
++
++ <tr>
++ <td>ahb_single</td>
++ <td>When set, this bit enables SINGLE transfers for the remainder data of a
++ transfer in DMA mode of operation.
++ - 0: The remainder data will be sent using INCR burst size (default)
++ - 1: The remainder data will be sent using SINGLE burst size.
++ </td></tr>
++
++<tr>
++ <td>adp_enable</td>
++ <td>Specifies whether ADP feature is enabled.
++ The driver will automatically detect the value for this parameter if none is
++ specified.
++ - 0: ADP feature disabled (default)
++ - 1: ADP feature enabled
++ </td></tr>
++
++  <tr>
++ <td>otg_ver</td>
++ <td>Specifies whether OTG is performing as USB OTG Revision 2.0 or Revision 1.3
++ USB OTG device.
++ - 0: OTG 2.0 support disabled (default)
++ - 1: OTG 2.0 support enabled
++ </td></tr>
++
++*/
+--- /dev/null
++++ b/drivers/usb/host/dwc_otg/dwc_otg_driver.h
+@@ -0,0 +1,86 @@
++/* ==========================================================================
++ * $File: //dwh/usb_iip/dev/software/otg/linux/drivers/dwc_otg_driver.h $
++ * $Revision: #19 $
++ * $Date: 2010/11/15 $
++ * $Change: 1627671 $
++ *
++ * Synopsys HS OTG Linux Software Driver and documentation (hereinafter,
++ * "Software") is an Unsupported proprietary work of Synopsys, Inc. unless
++ * otherwise expressly agreed to in writing between Synopsys and you.
++ *
++ * The Software IS NOT an item of Licensed Software or Licensed Product under
++ * any End User Software License Agreement or Agreement for Licensed Product
++ * with Synopsys or any supplement thereto. You are permitted to use and
++ * redistribute this Software in source and binary forms, with or without
++ * modification, provided that redistributions of source code must retain this
++ * notice. You may not view, use, disclose, copy or distribute this file or
++ * any information contained herein except pursuant to this license grant from
++ * Synopsys. If you do not agree with this notice, including the disclaimer
++ * below, then you are not authorized to use the Software.
++ *
++ * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS" BASIS
++ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
++ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
++ * ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS BE LIABLE FOR ANY DIRECT,
++ * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
++ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
++ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
++ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
++ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
++ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
++ * DAMAGE.
++ * ========================================================================== */
++
++#ifndef __DWC_OTG_DRIVER_H__
++#define __DWC_OTG_DRIVER_H__
++
++/** @file
++ * This file contains the interface to the Linux driver.
++ */
++#include "dwc_otg_os_dep.h"
++#include "dwc_otg_core_if.h"
++
++/* Type declarations */
++struct dwc_otg_pcd;
++struct dwc_otg_hcd;
++
++/**
++ * This structure is a wrapper that encapsulates the driver components used to
++ * manage a single DWC_otg controller.
++ */
++typedef struct dwc_otg_device {
++	/** Structure containing OS-dependent stuff. KEEP THIS STRUCT AT THE
++	 * VERY BEGINNING OF THE DEVICE STRUCT. OSes such as FreeBSD and NetBSD
++	 * require this. */
++	struct os_dependent os_dep;
++
++	/** Pointer to the core interface structure. */
++	dwc_otg_core_if_t *core_if;
++
++	/** Pointer to the PCD structure. */
++	struct dwc_otg_pcd *pcd;
++
++	/** Pointer to the HCD structure. */
++	struct dwc_otg_hcd *hcd;
++
++	/** Flag to indicate whether the common IRQ handler is installed. */
++	uint8_t common_irq_installed;
++
++} dwc_otg_device_t;
++
++/* We must clear the S3C24XX_EINTPEND external interrupt register because,
++ * owing to timing latencies and the low-level IRQ type, a triggered IRQ from
++ * the H/W core can occur again in kernel interrupt context before the OTG
++ * handlers have cleared all IRQ sources in the core registers.
++ */
++#ifdef CONFIG_MACH_IPMATE
++#define  S3C2410X_CLEAR_EINTPEND()   \
++do { \
++	__raw_writel(1UL << 11,S3C24XX_EINTPEND); \
++} while (0)
++#else
++#define  S3C2410X_CLEAR_EINTPEND()   do { } while (0)
++#endif
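++/* Note: CONFIG_MACH_IPMATE is not set on BCM2708/BCM2709 builds, so the
++ * macro above expands to a no-op there. */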
++
++#endif
+--- /dev/null
++++ b/drivers/usb/host/dwc_otg/dwc_otg_fiq_fsm.c
+@@ -0,0 +1,1355 @@
++/*
++ * dwc_otg_fiq_fsm.c - The finite state machine FIQ
++ *
++ * Copyright (c) 2013 Raspberry Pi Foundation
++ *
++ * Author: Jonathan Bell <jonathan at raspberrypi.org>
++ * All rights reserved.
++ *
++ * Redistribution and use in source and binary forms, with or without
++ * modification, are permitted provided that the following conditions are met:
++ *	* Redistributions of source code must retain the above copyright
++ *	  notice, this list of conditions and the following disclaimer.
++ *	* Redistributions in binary form must reproduce the above copyright
++ *	  notice, this list of conditions and the following disclaimer in the
++ *	  documentation and/or other materials provided with the distribution.
++ *	* Neither the name of Raspberry Pi nor the
++ *	  names of its contributors may be used to endorse or promote products
++ *	  derived from this software without specific prior written permission.
++ *
++ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
++ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
++ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
++ * DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY
++ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
++ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
++ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
++ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
++ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
++ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
++ *
++ * This FIQ implements functionality that performs split transactions on
++ * the dwc_otg hardware without any outside intervention. A split transaction
++ * is "queued" by nominating a specific host channel to perform the entirety
++ * of a split transaction. This FIQ will then perform the microframe-precise
++ * scheduling required in each phase of the transaction until completion.
++ *
++ * The FIQ functionality is glued into the Synopsys driver via the entry point
++ * in the FSM enqueue function, and at the exit point in handling a HC interrupt
++ * for a FSM-enabled channel.
++ *
++ * NB: Large parts of this implementation have architecture-specific code.
++ * For porting this functionality to other ARM machines, the minimum is required:
++ * - An interrupt controller allowing the top-level dwc USB interrupt to be routed
++ *   to the FIQ
++ * - A method of forcing a software generated interrupt from FIQ mode that then
++ *   triggers an IRQ entry (with the dwc USB handler called by this IRQ number)
++ * - Guaranteed interrupt routing such that both the FIQ and SGI occur on the same
++ *   processor core - there is no locking between the FIQ and IRQ (aside from
++ *   local_fiq_disable)
++ *
++ */
++
++#include "dwc_otg_fiq_fsm.h"
++
++
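++/* Debug scratch ring: 1000 fixed-size 16-byte text records, written by
++ * _fiq_print() below. wptr is the current write offset and wraps back to
++ * the start of the buffer. */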
++char buffer[1000*16];
++int wptr;
++void notrace _fiq_print(enum fiq_debug_level dbg_lvl, volatile struct fiq_state *state, char *fmt, ...)
++{
++	enum fiq_debug_level dbg_lvl_req = FIQDBG_ERR;
++	va_list args;
++	char text[17];
++	hfnum_data_t hfnum = { .d32 = FIQ_READ(state->dwc_regs_base + 0x408) };
++
++	if((dbg_lvl & dbg_lvl_req) || dbg_lvl == FIQDBG_ERR)
++	{
++		snprintf(text, 9, " %4d:%1u  ", hfnum.b.frnum/8, hfnum.b.frnum & 7);
++		va_start(args, fmt);
++		vsnprintf(text+8, 9, fmt, args);
++		va_end(args);
++
++		memcpy(buffer + wptr, text, 16);
++		wptr = (wptr + 16) % sizeof(buffer);
++	}
++}
++
++/**
++ * fiq_fsm_spin_lock() - ARMv6+ bare bones spinlock
++ * Must be called with local interrupts and FIQ disabled.
++ */
++#if defined(CONFIG_ARCH_BCM2709) && defined(CONFIG_SMP)
++inline void fiq_fsm_spin_lock(fiq_lock_t *lock)
++{
++	unsigned long tmp;
++	uint32_t newval;
++	fiq_lock_t lockval;
++	smp_mb__before_spinlock();
++	/* Nested locking, yay. If we are on the same CPU as the fiq, then the disable
++	 * will be sufficient. If we are on a different CPU, then the lock protects us. */
++	prefetchw(&lock->slock);
++	asm volatile (
++	"1:     ldrex   %0, [%3]\n"
++	"       add     %1, %0, %4\n"
++	"       strex   %2, %1, [%3]\n"
++	"       teq     %2, #0\n"
++	"       bne     1b"
++	: "=&r" (lockval), "=&r" (newval), "=&r" (tmp)
++	: "r" (&lock->slock), "I" (1 << 16)
++	: "cc");
++
++	while (lockval.tickets.next != lockval.tickets.owner) {
++		wfe();
++		lockval.tickets.owner = ACCESS_ONCE(lock->tickets.owner);
++	}
++	smp_mb();
++}
++#else
++inline void fiq_fsm_spin_lock(fiq_lock_t *lock) { }
++#endif
++
++/**
++ * fiq_fsm_spin_unlock() - ARMv6+ bare bones spinunlock
++ */
++#if defined(CONFIG_ARCH_BCM2709) && defined(CONFIG_SMP)
++inline void fiq_fsm_spin_unlock(fiq_lock_t *lock)
++{
++	smp_mb();
++	lock->tickets.owner++;
++	dsb_sev();
++}
++#else
++inline void fiq_fsm_spin_unlock(fiq_lock_t *lock) { }
++#endif
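++/* Usage sketch: the FIQ and IRQ paths bracket access to the shared fiq_state
++ * with fiq_fsm_spin_lock(&state->lock) / fiq_fsm_spin_unlock(&state->lock),
++ * with FIQs already disabled locally, as dwc_otg_fiq_fsm() below does. */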
++
++/**
++ * fiq_fsm_restart_channel() - Poke channel enable bit for a split transaction
++ * @channel: channel to re-enable
++ */
++static void fiq_fsm_restart_channel(struct fiq_state *st, int n, int force)
++{
++	hcchar_data_t hcchar = { .d32 = FIQ_READ(st->dwc_regs_base + HC_START + (HC_OFFSET * n) + HCCHAR) };
++
++	hcchar.b.chen = 0;
++	if (st->channel[n].hcchar_copy.b.eptype & 0x1) {
++		hfnum_data_t hfnum = { .d32 = FIQ_READ(st->dwc_regs_base + HFNUM) };
++		/* Hardware bug workaround: update the ssplit index */
++		if (st->channel[n].hcsplt_copy.b.spltena)
++			st->channel[n].expected_uframe = (hfnum.b.frnum + 1) & 0x3FFF;
++
++		hcchar.b.oddfrm = (hfnum.b.frnum & 0x1) ? 0	: 1;
++	}
++
++	FIQ_WRITE(st->dwc_regs_base + HC_START + (HC_OFFSET * n) + HCCHAR, hcchar.d32);
++	hcchar.d32 = FIQ_READ(st->dwc_regs_base + HC_START + (HC_OFFSET * n) + HCCHAR);
++	hcchar.b.chen = 1;
++
++	FIQ_WRITE(st->dwc_regs_base + HC_START + (HC_OFFSET * n) + HCCHAR, hcchar.d32);
++	fiq_print(FIQDBG_INT, st, "HCGO %01d %01d", n, force);
++}
++
++/**
++ * fiq_fsm_setup_csplit() - Prepare a host channel for a CSplit transaction stage
++ * @st: Pointer to the channel's state
++ * @n : channel number
++ *
++ * Change host channel registers to perform a complete-split transaction. Being mindful of the
++ * endpoint direction, set control regs up correctly.
++ */
++static void notrace fiq_fsm_setup_csplit(struct fiq_state *st, int n)
++{
++	hcsplt_data_t hcsplt = { .d32 = FIQ_READ(st->dwc_regs_base + HC_START + (HC_OFFSET * n) + HCSPLT) };
++	hctsiz_data_t hctsiz = { .d32 = FIQ_READ(st->dwc_regs_base + HC_START + (HC_OFFSET * n) + HCTSIZ) };
++
++	hcsplt.b.compsplt = 1;
++	if (st->channel[n].hcchar_copy.b.epdir == 1) {
++		// If IN, the CSPLIT result contains the data or a hub handshake. hctsiz = maxpacket.
++		hctsiz.b.xfersize = st->channel[n].hctsiz_copy.b.xfersize;
++	} else {
++		// If OUT, the CSPLIT result contains handshake only.
++		hctsiz.b.xfersize = 0;
++	}
++	FIQ_WRITE(st->dwc_regs_base + HC_START + (HC_OFFSET * n) + HCSPLT, hcsplt.d32);
++	FIQ_WRITE(st->dwc_regs_base + HC_START + (HC_OFFSET * n) + HCTSIZ, hctsiz.d32);
++	mb();
++}
++
++static inline int notrace fiq_get_xfer_len(struct fiq_state *st, int n)
++{
++	/* The xfersize register is a bit wonky. For IN transfers, it decrements by the packet size. */
++	hctsiz_data_t hctsiz = { .d32 = FIQ_READ(st->dwc_regs_base + HC_START + (HC_OFFSET * n) + HCTSIZ) };
++
++	if (st->channel[n].hcchar_copy.b.epdir == 0) {
++		return st->channel[n].hctsiz_copy.b.xfersize;
++	} else {
++		return st->channel[n].hctsiz_copy.b.xfersize - hctsiz.b.xfersize;
++	}
++
++}
++
++
++/**
++ * fiq_increment_dma_buf() - update DMA address for bounce buffers after a CSPLIT
++ *
++ * Of use only for IN periodic transfers.
++ */
++static int notrace fiq_increment_dma_buf(struct fiq_state *st, int num_channels, int n)
++{
++	hcdma_data_t hcdma;
++	int i = st->channel[n].dma_info.index;
++	int len;
++	struct fiq_dma_blob *blob = (struct fiq_dma_blob *) st->dma_base;
++
++	len = fiq_get_xfer_len(st, n);
++	fiq_print(FIQDBG_INT, st, "LEN: %03d", len);
++	st->channel[n].dma_info.slot_len[i] = len;
++	i++;
++	if (i > 6)
++		BUG();
++
++	hcdma.d32 = (dma_addr_t) &blob->channel[n].index[i].buf[0];
++	FIQ_WRITE(st->dwc_regs_base + HC_DMA + (HC_OFFSET * n), hcdma.d32);
++	st->channel[n].dma_info.index = i;
++	return 0;
++}
++
++/**
++ * fiq_reload_hctsiz() - for IN transactions, reset HCTSIZ
++ */
++static void notrace fiq_fsm_reload_hctsiz(struct fiq_state *st, int n)
++{
++	hctsiz_data_t hctsiz = { .d32 = FIQ_READ(st->dwc_regs_base + HC_START + (HC_OFFSET * n) + HCTSIZ) };
++	hctsiz.b.xfersize = st->channel[n].hctsiz_copy.b.xfersize;
++	hctsiz.b.pktcnt = 1;
++	FIQ_WRITE(st->dwc_regs_base + HC_START + (HC_OFFSET * n) + HCTSIZ, hctsiz.d32);
++}
++
++/**
++ * fiq_iso_out_advance() - update DMA address and split position bits
++ * for isochronous OUT transactions.
++ *
++ * Returns 1 if this is the last packet queued, 0 otherwise. Split-ALL and
++ * Split-BEGIN states are not handled - this is done when the transaction was queued.
++ *
++ * This function must only be called from the FIQ_ISO_OUT_ACTIVE state.
++ */
++static int notrace fiq_iso_out_advance(struct fiq_state *st, int num_channels, int n)
++{
++	hcsplt_data_t hcsplt;
++	hctsiz_data_t hctsiz;
++	hcdma_data_t hcdma;
++	struct fiq_dma_blob *blob = (struct fiq_dma_blob *) st->dma_base;
++	int last = 0;
++	int i = st->channel[n].dma_info.index;
++
++	fiq_print(FIQDBG_INT, st, "ADV %01d %01d ", n, i);
++	i++;
++	if (i == 4)
++		last = 1;
++	if (st->channel[n].dma_info.slot_len[i+1] == 255)
++		last = 1;
++
++	/* New DMA address - address of bounce buffer referred to in index */
++	hcdma.d32 = (uint32_t) &blob->channel[n].index[i].buf[0];
++	//hcdma.d32 = FIQ_READ(st->dwc_regs_base + HC_DMA + (HC_OFFSET * n));
++	//hcdma.d32 += st->channel[n].dma_info.slot_len[i];
++	fiq_print(FIQDBG_INT, st, "LAST: %01d ", last);
++	fiq_print(FIQDBG_INT, st, "LEN: %03d", st->channel[n].dma_info.slot_len[i]);
++	hcsplt.d32 = FIQ_READ(st->dwc_regs_base + HC_START + (HC_OFFSET * n) + HCSPLT);
++	hctsiz.d32 = FIQ_READ(st->dwc_regs_base + HC_START + (HC_OFFSET * n) + HCTSIZ);
++	hcsplt.b.xactpos = (last) ? ISOC_XACTPOS_END : ISOC_XACTPOS_MID;
++	/* Set up new packet length */
++	hctsiz.b.pktcnt = 1;
++	hctsiz.b.xfersize = st->channel[n].dma_info.slot_len[i];
++	fiq_print(FIQDBG_INT, st, "%08x", hctsiz.d32);
++
++	st->channel[n].dma_info.index++;
++	FIQ_WRITE(st->dwc_regs_base + HC_START + (HC_OFFSET * n) + HCSPLT, hcsplt.d32);
++	FIQ_WRITE(st->dwc_regs_base + HC_START + (HC_OFFSET * n) + HCTSIZ, hctsiz.d32);
++	FIQ_WRITE(st->dwc_regs_base + HC_DMA + (HC_OFFSET * n), hcdma.d32);
++	return last;
++}
++
++/**
++ * fiq_fsm_tt_next_isoc() - queue next pending isochronous out start-split on a TT
++ *
++ * Despite the limitations of the DWC core, we can force a microframe pipeline of
++ * isochronous OUT start-split transactions while waiting for a corresponding other-type
++ * of endpoint to finish its CSPLITs. TTs have big periodic buffers, therefore it
++ * is very unlikely that filling the start-split FIFO will cause data loss.
++ * This allows much better interleaving of transactions in an order-independent
++ * way - there is no requirement to prioritise isochronous transfers; a
++ * state-space search just has to be performed on each periodic start-split
++ * complete interrupt.
++ */
++static int notrace fiq_fsm_tt_next_isoc(struct fiq_state *st, int num_channels, int n)
++{
++	int hub_addr = st->channel[n].hub_addr;
++	int port_addr = st->channel[n].port_addr;
++	int i, poked = 0;
++	for (i = 0; i < num_channels; i++) {
++		if (i == n || st->channel[i].fsm == FIQ_PASSTHROUGH)
++			continue;
++		if (st->channel[i].hub_addr == hub_addr &&
++			st->channel[i].port_addr == port_addr) {
++			switch (st->channel[i].fsm) {
++			case FIQ_PER_ISO_OUT_PENDING:
++				if (st->channel[i].nrpackets == 1) {
++					st->channel[i].fsm = FIQ_PER_ISO_OUT_LAST;
++				} else {
++					st->channel[i].fsm = FIQ_PER_ISO_OUT_ACTIVE;
++				}
++				fiq_fsm_restart_channel(st, i, 0);
++				poked = 1;
++				break;
++
++			default:
++				break;
++			}
++		}
++		if (poked)
++			break;
++	}
++	return poked;
++}
++
++/**
++ * fiq_fsm_tt_in_use() - search for host channels using this TT
++ * @n: Channel to use as reference
++ *
++ */
++int notrace noinline fiq_fsm_tt_in_use(struct fiq_state *st, int num_channels, int n)
++{
++	int hub_addr = st->channel[n].hub_addr;
++	int port_addr = st->channel[n].port_addr;
++	int i, in_use = 0;
++	for (i = 0; i < num_channels; i++) {
++		if (i == n || st->channel[i].fsm == FIQ_PASSTHROUGH)
++			continue;
++		switch (st->channel[i].fsm) {
++		/* TT is reserved for channels that are in the middle of a periodic
++		 * split transaction.
++		 */
++		case FIQ_PER_SSPLIT_STARTED:
++		case FIQ_PER_CSPLIT_WAIT:
++		case FIQ_PER_CSPLIT_NYET1:
++		//case FIQ_PER_CSPLIT_POLL:
++		case FIQ_PER_ISO_OUT_ACTIVE:
++		case FIQ_PER_ISO_OUT_LAST:
++			if (st->channel[i].hub_addr == hub_addr &&
++				st->channel[i].port_addr == port_addr) {
++				in_use = 1;
++			}
++			break;
++		default:
++			break;
++		}
++		if (in_use)
++			break;
++	}
++	return in_use;
++}
++
++/**
++ * fiq_fsm_more_csplits() - determine whether additional CSPLITs need
++ * 			to be issued for this IN transaction.
++ *
++ * We cannot tell the inbound PID of a data packet due to hardware limitations,
++ * so we need to make an educated guess as to whether we need to queue another CSPLIT
++ * or not. A no-brainer is when we have received enough data to fill the endpoint
++ * size, but for endpoints that give variable-length data then we have to resort
++ * to heuristics.
++ *
++ * We also return whether this is the last CSPLIT to be queued, again based on
++ * heuristics. This is to allow a 1-uframe overlap of periodic split transactions.
++ * Note: requires at least 1 CSPLIT to have been performed prior to being called.
++ */
++
++/*
++ * We need some way of guaranteeing if a returned periodic packet of size X
++ * has a DATA0 PID.
++ * The heuristic value of 144 bytes assumes that the received data has maximal
++ * bit-stuffing and the clock frequency of the transmitting device is at the lowest
++ * permissible limit. If the transfer length results in a final packet size
++ * 144 < p <= 188, then an erroneous CSPLIT will be issued.
++ * Also used to ensure that an endpoint will nominally only return a single
++ * complete-split worth of data.
++ */
++#define DATA0_PID_HEURISTIC 144
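++/* Worked example (hypothetical figures): for an isochronous IN transfer, if
++ * the most recent CSPLIT returned a 100-byte chunk (<= 144) after at least
++ * two CSPLITs have completed, fiq_fsm_more_csplits() below assumes the final
++ * DATA0 packet has arrived and queues no further CSPLITs. */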
++
++static int notrace noinline fiq_fsm_more_csplits(struct fiq_state *state, int n, int *probably_last)
++{
++
++	int i;
++	int total_len = 0;
++	int more_needed = 1;
++	struct fiq_channel_state *st = &state->channel[n];
++
++	for (i = 0; i < st->dma_info.index; i++) {
++			total_len += st->dma_info.slot_len[i];
++	}
++
++	*probably_last = 0;
++
++	if (st->hcchar_copy.b.eptype == 0x3) {
++		/*
++		 * An interrupt endpoint will take max 2 CSPLITs. if we are receiving data
++		 * then this is definitely the last CSPLIT.
++		 */
++		*probably_last = 1;
++	} else {
++		/* Isoc IN. This is a bit risky if we are the first transaction:
++		 * we may have been held off slightly. */
++		if (i > 1 && st->dma_info.slot_len[st->dma_info.index-1] <= DATA0_PID_HEURISTIC) {
++			more_needed = 0;
++		}
++		/* If in the next uframe we will receive enough data to fill the endpoint,
++		 * then only issue 1 more csplit.
++		 */
++		if (st->hctsiz_copy.b.xfersize - total_len <= DATA0_PID_HEURISTIC)
++			*probably_last = 1;
++	}
++
++	if (total_len >= st->hctsiz_copy.b.xfersize ||
++		i == 6 || total_len == 0)
++		/* Note: due to bit stuffing it is possible to have > 6 CSPLITs for
++		 * a single endpoint. Accepting more would completely break our scheduling mechanism though
++		 * - in these extreme cases we will pass through a truncated packet.
++		 */
++		more_needed = 0;
++
++	return more_needed;
++}
++
++/**
++ * fiq_fsm_too_late() - Test transaction for lateness
++ *
++ * If a SSPLIT for a large IN transaction is issued too late in a frame,
++ * the hub will disable the port to the device and respond with ERR handshakes.
++ * The hub status endpoint will not reflect this change.
++ * Returns 1 if we will issue a SSPLIT that will result in a device babble.
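++ * Illustrative example (hypothetical values): an SSPLIT queued in uframe 3
++ * with nrpackets = 4 gives 4 + 1 + 3 = 8 > 7, so it would spill past the
++ * frame boundary and is reported as too late.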
++ */
++int notrace fiq_fsm_too_late(struct fiq_state *st, int n)
++{
++	int uframe;
++	hfnum_data_t hfnum = { .d32 = FIQ_READ(st->dwc_regs_base + HFNUM) };
++	uframe = hfnum.b.frnum & 0x7;
++	if ((uframe < 6) && (st->channel[n].nrpackets + 1 + uframe > 7)) {
++		return 1;
++	} else {
++		return 0;
++	}
++}
++
++
++/**
++ * fiq_fsm_start_next_periodic() - A half-arsed attempt at a microframe pipeline
++ *
++ * Search pending transactions in the start-split pending state and queue them.
++ * Don't queue packets in uframe .5 (comes out in .6) (USB2.0 11.18.4).
++ * Note: we specifically don't do isochronous OUT transactions first because better
++ * use of the TT's start-split fifo can be achieved by pipelining an IN before an OUT.
++ */
++static void notrace noinline fiq_fsm_start_next_periodic(struct fiq_state *st, int num_channels)
++{
++	int n;
++	hfnum_data_t hfnum = { .d32 = FIQ_READ(st->dwc_regs_base + HFNUM) };
++	if ((hfnum.b.frnum & 0x7) == 5)
++		return;
++	for (n = 0; n < num_channels; n++) {
++		if (st->channel[n].fsm == FIQ_PER_SSPLIT_QUEUED) {
++			/* Check to see if any other transactions are using this TT */
++			if(!fiq_fsm_tt_in_use(st, num_channels, n)) {
++				if (!fiq_fsm_too_late(st, n)) {
++					st->channel[n].fsm = FIQ_PER_SSPLIT_STARTED;
++					fiq_print(FIQDBG_INT, st, "NEXTPER ");
++					fiq_fsm_restart_channel(st, n, 0);
++				} else {
++					st->channel[n].fsm = FIQ_PER_SPLIT_TIMEOUT;
++				}
++				break;
++			}
++		}
++	}
++	for (n = 0; n < num_channels; n++) {
++		if (st->channel[n].fsm == FIQ_PER_ISO_OUT_PENDING) {
++			if (!fiq_fsm_tt_in_use(st, num_channels, n)) {
++				fiq_print(FIQDBG_INT, st, "NEXTISO ");
++				st->channel[n].fsm = FIQ_PER_ISO_OUT_ACTIVE;
++				fiq_fsm_restart_channel(st, n, 0);
++				break;
++			}
++		}
++	}
++}
++
++/**
++ * fiq_fsm_update_hs_isoc() - update isochronous frame and transfer data
++ * @state:	Pointer to fiq_state
++ * @n:		Channel transaction is active on
++ * @hcint:	Copy of host channel interrupt register
++ *
++ * Returns 0 if there are no more transactions for this HC to do, 1
++ * otherwise.
++ */
++static int notrace noinline fiq_fsm_update_hs_isoc(struct fiq_state *state, int n, hcint_data_t hcint)
++{
++	struct fiq_channel_state *st = &state->channel[n];
++	int xfer_len = 0, nrpackets = 0;
++	hcdma_data_t hcdma;
++	fiq_print(FIQDBG_INT, state, "HSISO %02d", n);
++
++	xfer_len = fiq_get_xfer_len(state, n);
++	st->hs_isoc_info.iso_desc[st->hs_isoc_info.index].actual_length = xfer_len;
++
++	st->hs_isoc_info.iso_desc[st->hs_isoc_info.index].status = hcint.d32;
++
++	st->hs_isoc_info.index++;
++	if (st->hs_isoc_info.index == st->hs_isoc_info.nrframes) {
++		return 0;
++	}
++
++	/* grab the next DMA address offset from the array */
++	hcdma.d32 = st->hcdma_copy.d32 + st->hs_isoc_info.iso_desc[st->hs_isoc_info.index].offset;
++	FIQ_WRITE(state->dwc_regs_base + HC_DMA + (HC_OFFSET * n), hcdma.d32);
++
++	/* We need to set multi_count. This is a bit tricky - has to be set per-transaction as
++	 * the core needs to be told to send the correct number. Caution: for IN transfers,
++	 * this is always set to the maximum size of the endpoint. */
++	xfer_len = st->hs_isoc_info.iso_desc[st->hs_isoc_info.index].length;
++	/* Integer divide in a FIQ: fun. FIXME: make this not suck */
++	nrpackets = (xfer_len + st->hcchar_copy.b.mps - 1) / st->hcchar_copy.b.mps;
++	if (nrpackets == 0)
++		nrpackets = 1;
++	st->hcchar_copy.b.multicnt = nrpackets;
++	st->hctsiz_copy.b.pktcnt = nrpackets;
++
++	/* Initial PID also needs to be set */
++	if (st->hcchar_copy.b.epdir == 0) {
++		st->hctsiz_copy.b.xfersize = xfer_len;
++		switch (st->hcchar_copy.b.multicnt) {
++		case 1:
++			st->hctsiz_copy.b.pid = DWC_PID_DATA0;
++			break;
++		case 2:
++		case 3:
++			st->hctsiz_copy.b.pid = DWC_PID_MDATA;
++			break;
++		}
++
++	} else {
++		st->hctsiz_copy.b.xfersize = nrpackets * st->hcchar_copy.b.mps;
++		switch (st->hcchar_copy.b.multicnt) {
++		case 1:
++			st->hctsiz_copy.b.pid = DWC_PID_DATA0;
++			break;
++		case 2:
++			st->hctsiz_copy.b.pid = DWC_PID_DATA1;
++			break;
++		case 3:
++			st->hctsiz_copy.b.pid = DWC_PID_DATA2;
++			break;
++		}
++	}
++	FIQ_WRITE(state->dwc_regs_base + HC_START + (HC_OFFSET * n) + HCTSIZ, st->hctsiz_copy.d32);
++	FIQ_WRITE(state->dwc_regs_base + HC_START + (HC_OFFSET * n) + HCCHAR, st->hcchar_copy.d32);
++	/* Channel is enabled on hcint handler exit */
++	fiq_print(FIQDBG_INT, state, "HSISOOUT");
++	return 1;
++}
++
++
++/**
++ * fiq_fsm_do_sof() - FSM start-of-frame interrupt handler
++ * @state:	Pointer to the state struct passed from banked FIQ mode registers.
++ * @num_channels:	set according to the DWC hardware configuration
++ *
++ * The SOF handler in FSM mode has two functions
++ * 1. Hold off SOF from causing schedule advancement in IRQ context if there's
++ *    nothing to do
++ * 2. Advance certain FSM states that require either a microframe delay, or a microframe
++ *    of holdoff.
++ *
++ * The second part is architecture-specific to mach-bcm2835 -
++ * a sane interrupt controller would have a mask register for ARM interrupt sources
++ * to be promoted to the nFIQ line, but it doesn't. Instead a single interrupt
++ * number (USB) can be enabled. This means that certain parts of the USB specification
++ * that require "wait a little while, then issue another packet" cannot be fulfilled with
++ * the timing granularity required to achieve optimal throughput. The workaround is to use
++ * the SOF "timer" (125uS) to perform this task.
++ */
++static int notrace noinline fiq_fsm_do_sof(struct fiq_state *state, int num_channels)
++{
++	hfnum_data_t hfnum = { .d32 = FIQ_READ(state->dwc_regs_base + HFNUM) };
++	int n;
++	int kick_irq = 0;
++
++	if ((hfnum.b.frnum & 0x7) == 1) {
++		/* We cannot issue csplits for transactions in the last frame past (n+1).1
++		 * Check to see if there are any transactions that are stale.
++		 * Boot them out.
++		 */
++		for (n = 0; n < num_channels; n++) {
++			switch (state->channel[n].fsm) {
++			case FIQ_PER_CSPLIT_WAIT:
++			case FIQ_PER_CSPLIT_NYET1:
++			case FIQ_PER_CSPLIT_POLL:
++			case FIQ_PER_CSPLIT_LAST:
++				/* Check if we are no longer in the same full-speed frame. */
++				if (((state->channel[n].expected_uframe & 0x3FFF) & ~0x7) <
++						(hfnum.b.frnum & ~0x7))
++					state->channel[n].fsm = FIQ_PER_SPLIT_TIMEOUT;
++				break;
++			default:
++				break;
++			}
++		}
++	}
++
++	for (n = 0; n < num_channels; n++) {
++		switch (state->channel[n].fsm) {
++
++		case FIQ_NP_SSPLIT_RETRY:
++		case FIQ_NP_IN_CSPLIT_RETRY:
++		case FIQ_NP_OUT_CSPLIT_RETRY:
++			fiq_fsm_restart_channel(state, n, 0);
++			break;
++
++		case FIQ_HS_ISOC_SLEEPING:
++			/* Is it time to wake this channel yet? */
++			if (--state->channel[n].uframe_sleeps == 0) {
++				state->channel[n].fsm = FIQ_HS_ISOC_TURBO;
++				fiq_fsm_restart_channel(state, n, 0);
++			}
++			break;
++
++		case FIQ_PER_SSPLIT_QUEUED:
++			if ((hfnum.b.frnum & 0x7) == 5)
++				break;
++			if(!fiq_fsm_tt_in_use(state, num_channels, n)) {
++				if (!fiq_fsm_too_late(state, n)) {
++					fiq_print(FIQDBG_INT, state, "SOF GO %01d", n);
++					fiq_fsm_restart_channel(state, n, 0);
++					state->channel[n].fsm = FIQ_PER_SSPLIT_STARTED;
++				} else {
++					/* Transaction cannot be started without risking a device babble error */
++					state->channel[n].fsm = FIQ_PER_SPLIT_TIMEOUT;
++					state->haintmsk_saved.b2.chint &= ~(1 << n);
++					FIQ_WRITE(state->dwc_regs_base + HC_START + (HC_OFFSET * n) + HCINTMSK, 0);
++					kick_irq |= 1;
++				}
++			}
++			break;
++
++		case FIQ_PER_ISO_OUT_PENDING:
++			/* Ordinarily, this should be poked after the SSPLIT
++			 * complete interrupt for a competing transfer on the same
++			 * TT. Doesn't happen for aborted transactions though.
++			 */
++			if ((hfnum.b.frnum & 0x7) >= 5)
++				break;
++			if (!fiq_fsm_tt_in_use(state, num_channels, n)) {
++				/* Hardware bug. SOF can sometimes occur after the channel halt interrupt
++				 * that caused this.
++				 */
++					fiq_fsm_restart_channel(state, n, 0);
++					fiq_print(FIQDBG_INT, state, "SOF ISOC");
++					if (state->channel[n].nrpackets == 1) {
++						state->channel[n].fsm = FIQ_PER_ISO_OUT_LAST;
++					} else {
++						state->channel[n].fsm = FIQ_PER_ISO_OUT_ACTIVE;
++					}
++			}
++			break;
++
++		case FIQ_PER_CSPLIT_WAIT:
++			/* we are guaranteed to be in this state if and only if the SSPLIT interrupt
++			 * occurred when the bus transaction occurred. The SOF interrupt reversal bug
++			 * will utterly bugger this up though.
++			 */
++			if (hfnum.b.frnum != state->channel[n].expected_uframe) {
++				fiq_print(FIQDBG_INT, state, "SOFCS %d ", n);
++				state->channel[n].fsm = FIQ_PER_CSPLIT_POLL;
++				fiq_fsm_restart_channel(state, n, 0);
++				fiq_fsm_start_next_periodic(state, num_channels);
++
++			}
++			break;
++
++		case FIQ_PER_SPLIT_TIMEOUT:
++		case FIQ_DEQUEUE_ISSUED:
++			/* Ugly: we have to force a HCD interrupt.
++			 * Poke the mask for the channel in question.
++			 * We will take a fake SOF because of this, but
++			 * that's OK.
++			 */
++			state->haintmsk_saved.b2.chint &= ~(1 << n);
++			FIQ_WRITE(state->dwc_regs_base + HC_START + (HC_OFFSET * n) + HCINTMSK, 0);
++			kick_irq |= 1;
++			break;
++
++		default:
++			break;
++		}
++	}
++
++	if (state->kick_np_queues ||
++			dwc_frame_num_le(state->next_sched_frame, hfnum.b.frnum))
++		kick_irq |= 1;
++
++	return !kick_irq;
++}
++
++
++/**
++ * fiq_fsm_do_hcintr() - FSM host channel interrupt handler
++ * @state: Pointer to the FIQ state struct
++ * @num_channels: Number of channels as per hardware config
++ * @n: channel for which HAINT(i) was raised
++ *
++ * An important property is that only the CHHLT interrupt is unmasked. Unfortunately, AHBerr is as well.
++ */
++static int notrace noinline fiq_fsm_do_hcintr(struct fiq_state *state, int num_channels, int n)
++{
++	hcint_data_t hcint;
++	hcintmsk_data_t hcintmsk;
++	hcint_data_t hcint_probe;
++	hcchar_data_t hcchar;
++	int handled = 0;
++	int restart = 0;
++	int last_csplit = 0;
++	int start_next_periodic = 0;
++	struct fiq_channel_state *st = &state->channel[n];
++	hfnum_data_t hfnum;
++
++	hcint.d32 = FIQ_READ(state->dwc_regs_base + HC_START + (HC_OFFSET * n) + HCINT);
++	hcintmsk.d32 = FIQ_READ(state->dwc_regs_base + HC_START + (HC_OFFSET * n) + HCINTMSK);
++	hcint_probe.d32 = hcint.d32 & hcintmsk.d32;
++
++	if (st->fsm != FIQ_PASSTHROUGH) {
++		fiq_print(FIQDBG_INT, state, "HC%01d ST%02d", n, st->fsm);
++		fiq_print(FIQDBG_INT, state, "%08x", hcint.d32);
++	}
++
++	switch (st->fsm) {
++
++	case FIQ_PASSTHROUGH:
++	case FIQ_DEQUEUE_ISSUED:
++		/* doesn't belong to us, kick it upstairs */
++		break;
++
++	case FIQ_PASSTHROUGH_ERRORSTATE:
++		/* We are here to emulate the error recovery mechanism of the dwc HCD.
++		 * Several interrupts are unmasked if a previous transaction failed - it's
++		 * death for the FIQ to attempt to handle them as the channel isn't halted.
++		 * Emulate what the HCD does in this situation: mask and continue.
++		 * The FSM has no other state setup so this has to be handled out-of-band.
++		 */
++		fiq_print(FIQDBG_ERR, state, "ERRST %02d", n);
++		if (hcint_probe.b.nak || hcint_probe.b.ack || hcint_probe.b.datatglerr) {
++			fiq_print(FIQDBG_ERR, state, "RESET %02d", n);
++			/* In some random cases we can get a NAK interrupt coincident with a Xacterr
++			 * interrupt, after the device has disappeared.
++			 */
++			if (!hcint.b.xacterr)
++				st->nr_errors = 0;
++			hcintmsk.b.nak = 0;
++			hcintmsk.b.ack = 0;
++			hcintmsk.b.datatglerr = 0;
++			FIQ_WRITE(state->dwc_regs_base + HC_START + (HC_OFFSET * n) + HCINTMSK, hcintmsk.d32);
++			return 1;
++		}
++		if (hcint_probe.b.chhltd) {
++			fiq_print(FIQDBG_ERR, state, "CHHLT %02d", n);
++			fiq_print(FIQDBG_ERR, state, "%08x", hcint.d32);
++			return 0;
++		}
++		break;
++
++	/* Non-periodic state groups */
++	case FIQ_NP_SSPLIT_STARTED:
++	case FIQ_NP_SSPLIT_RETRY:
++		/* Got a HCINT for a NP SSPLIT. Expected ACK / NAK / fail */
++		if (hcint.b.ack) {
++			/* SSPLIT complete. For OUT, the data has been sent. For IN, the LS transaction
++			 * will start shortly. SOF needs to kick the transaction to prevent a NYET flood.
++			 */
++			if(st->hcchar_copy.b.epdir == 1)
++				st->fsm = FIQ_NP_IN_CSPLIT_RETRY;
++			else
++				st->fsm = FIQ_NP_OUT_CSPLIT_RETRY;
++			st->nr_errors = 0;
++			handled = 1;
++			fiq_fsm_setup_csplit(state, n);
++		} else if (hcint.b.nak) {
++			// No buffer space in TT. Retry on a uframe boundary.
++			st->fsm = FIQ_NP_SSPLIT_RETRY;
++			handled = 1;
++		} else if (hcint.b.xacterr) {
++			// The only other one we care about is xacterr. This implies HS bus error - retry.
++			st->nr_errors++;
++			st->fsm = FIQ_NP_SSPLIT_RETRY;
++			if (st->nr_errors >= 3) {
++				st->fsm = FIQ_NP_SPLIT_HS_ABORTED;
++			} else {
++				handled = 1;
++				restart = 1;
++			}
++		} else {
++			st->fsm = FIQ_NP_SPLIT_LS_ABORTED;
++			handled = 0;
++			restart = 0;
++		}
++		break;
++
++	case FIQ_NP_IN_CSPLIT_RETRY:
++		/* Received a CSPLIT done interrupt.
++		 * Expected Data/NAK/STALL/NYET for IN.
++		 */
++		if (hcint.b.xfercomp) {
++			/* For IN, data is present. */
++			st->fsm = FIQ_NP_SPLIT_DONE;
++		} else if (hcint.b.nak) {
++			/* no endpoint data. Punt it upstairs */
++			st->fsm = FIQ_NP_SPLIT_DONE;
++		} else if (hcint.b.nyet) {
++			/* CSPLIT NYET - retry on a uframe boundary. */
++			handled = 1;
++			st->nr_errors = 0;
++		} else if (hcint.b.datatglerr) {
++			/* data toggle errors do not set the xfercomp bit. */
++			st->fsm = FIQ_NP_SPLIT_LS_ABORTED;
++		} else if (hcint.b.xacterr) {
++			/* HS error. Retry immediate */
++			st->fsm = FIQ_NP_IN_CSPLIT_RETRY;
++			st->nr_errors++;
++			if (st->nr_errors >= 3) {
++				st->fsm = FIQ_NP_SPLIT_HS_ABORTED;
++			} else {
++				handled = 1;
++				restart = 1;
++			}
++		} else if (hcint.b.stall || hcint.b.bblerr) {
++			/* A STALL implies either a LS bus error or a genuine STALL. */
++			st->fsm = FIQ_NP_SPLIT_LS_ABORTED;
++		} else {
++			/*  Hardware bug. It's possible in some cases to
++			 *  get a channel halt with nothing else set when
++			 *  the response was a NYET. Treat as local 3-strikes retry.
++			 */
++			hcint_data_t hcint_test = hcint;
++			hcint_test.b.chhltd = 0;
++			if (!hcint_test.d32) {
++				st->nr_errors++;
++				if (st->nr_errors >= 3) {
++					st->fsm = FIQ_NP_SPLIT_HS_ABORTED;
++				} else {
++					handled = 1;
++				}
++			} else {
++				/* Bail out if something unexpected happened */
++				st->fsm = FIQ_NP_SPLIT_HS_ABORTED;
++			}
++		}
++		break;
++
++	case FIQ_NP_OUT_CSPLIT_RETRY:
++		/* Received a CSPLIT done interrupt.
++		 * Expected ACK/NAK/STALL/NYET/XFERCOMP for OUT.*/
++		if (hcint.b.xfercomp) {
++			st->fsm = FIQ_NP_SPLIT_DONE;
++		} else if (hcint.b.nak) {
++			// The HCD will implement the holdoff on frame boundaries.
++			st->fsm = FIQ_NP_SPLIT_DONE;
++		} else if (hcint.b.nyet) {
++			// Hub still processing.
++			st->fsm = FIQ_NP_OUT_CSPLIT_RETRY;
++			handled = 1;
++			st->nr_errors = 0;
++			//restart = 1;
++		} else if (hcint.b.xacterr) {
++			/* HS error. retry immediate */
++			st->fsm = FIQ_NP_OUT_CSPLIT_RETRY;
++			st->nr_errors++;
++			if (st->nr_errors >= 3) {
++				st->fsm = FIQ_NP_SPLIT_HS_ABORTED;
++			} else {
++				handled = 1;
++				restart = 1;
++			}
++		} else if (hcint.b.stall) {
++			/* LS bus error or genuine stall */
++			st->fsm = FIQ_NP_SPLIT_LS_ABORTED;
++		} else {
++			/*
++			 * Hardware bug. It's possible in some cases to get a
++			 * channel halt with nothing else set when the response was a NYET.
++			 * Treat as local 3-strikes retry.
++			 */
++			hcint_data_t hcint_test = hcint;
++			hcint_test.b.chhltd = 0;
++			if (!hcint_test.d32) {
++				st->nr_errors++;
++				if (st->nr_errors >= 3) {
++					st->fsm = FIQ_NP_SPLIT_HS_ABORTED;
++				} else {
++					handled = 1;
++				}
++			} else {
++				// Something unexpected happened. AHBerror or babble perhaps. Let the IRQ deal with it.
++				st->fsm = FIQ_NP_SPLIT_HS_ABORTED;
++			}
++		}
++		break;
++
++	/* Periodic split states (except isoc out) */
++	case FIQ_PER_SSPLIT_STARTED:
++		/* Expect an ACK or failure for SSPLIT */
++		if (hcint.b.ack) {
++			/*
++			 * SSPLIT transfer complete interrupt - the generation of this interrupt is fraught with bugs.
++			 * For a packet queued in microframe n-3 to appear in n-2, if the channel is enabled near the EOF1
++			 * point for microframe n-3, the packet will not appear on the bus until microframe n.
++			 * Additionally, the generation of the actual interrupt is dodgy. For a packet appearing on the bus
++			 * in microframe n, sometimes the interrupt is generated immediately. Sometimes, it appears in n+1
++			 * coincident with SOF for n+1.
++			 * SOF is also buggy. It can sometimes be raised AFTER the first bus transaction has taken place.
++			 * These appear to be caused by timing/clock crossing bugs within the core itself.
++			 * State machine workaround.
++			 */
++			hfnum.d32 = FIQ_READ(state->dwc_regs_base + HFNUM);
++			hcchar.d32 = FIQ_READ(state->dwc_regs_base + HC_START + (HC_OFFSET * n) + HCCHAR);
++			fiq_fsm_setup_csplit(state, n);
++			/* Poke the oddfrm bit. If we are equivalent, we received the interrupt at the correct
++			 * time. If not, then we're in the next SOF.
++			 */
++			if ((hfnum.b.frnum & 0x1) == hcchar.b.oddfrm) {
++				fiq_print(FIQDBG_INT, state, "CSWAIT %01d", n);
++				st->expected_uframe = hfnum.b.frnum;
++				st->fsm = FIQ_PER_CSPLIT_WAIT;
++			} else {
++				fiq_print(FIQDBG_INT, state, "CSPOL  %01d", n);
++				/* For isochronous IN endpoints,
++				 * we need to hold off if we are expecting a lot of data */
++				if (st->hcchar_copy.b.mps < DATA0_PID_HEURISTIC) {
++					start_next_periodic = 1;
++				}
++				/* Danger will robinson: we are in a broken state. If our first interrupt after
++				 * this is a NYET, it will be delayed by 1 uframe and result in an unrecoverable
++				 * lag. Unmask the NYET interrupt.
++				 */
++				st->expected_uframe = (hfnum.b.frnum + 1) & 0x3FFF;
++				st->fsm = FIQ_PER_CSPLIT_BROKEN_NYET1;
++				restart = 1;
++			}
++			handled = 1;
++		} else if (hcint.b.xacterr) {
++			/* 3-strikes retry is enabled, we have hit our max nr_errors */
++			st->fsm = FIQ_PER_SPLIT_HS_ABORTED;
++			start_next_periodic = 1;
++		} else {
++			st->fsm = FIQ_PER_SPLIT_HS_ABORTED;
++			start_next_periodic = 1;
++		}
++		/* We can now queue the next isochronous OUT transaction, if one is pending. */
++		if(fiq_fsm_tt_next_isoc(state, num_channels, n)) {
++			fiq_print(FIQDBG_INT, state, "NEXTISO ");
++		}
++		break;
++
++	case FIQ_PER_CSPLIT_NYET1:
++		/* First CSPLIT attempt was a NYET. If we get a subsequent NYET,
++		 * we are too late and the TT has dropped its CSPLIT fifo.
++		 */
++		hfnum.d32 = FIQ_READ(state->dwc_regs_base + HFNUM);
++		hcchar.d32 = FIQ_READ(state->dwc_regs_base + HC_START + (HC_OFFSET * n) + HCCHAR);
++		start_next_periodic = 1;
++		if (hcint.b.nak) {
++			st->fsm = FIQ_PER_SPLIT_DONE;
++		} else if (hcint.b.xfercomp) {
++			fiq_increment_dma_buf(state, num_channels, n);
++			st->fsm = FIQ_PER_CSPLIT_POLL;
++			st->nr_errors = 0;
++			if (fiq_fsm_more_csplits(state, n, &last_csplit)) {
++				handled = 1;
++				restart = 1;
++				if (!last_csplit)
++					start_next_periodic = 0;
++			} else {
++				st->fsm = FIQ_PER_SPLIT_DONE;
++			}
++		} else if (hcint.b.nyet) {
++			/* Doh. Data lost. */
++			st->fsm = FIQ_PER_SPLIT_NYET_ABORTED;
++		} else if (hcint.b.xacterr || hcint.b.stall || hcint.b.bblerr) {
++			st->fsm = FIQ_PER_SPLIT_LS_ABORTED;
++		} else {
++			st->fsm = FIQ_PER_SPLIT_HS_ABORTED;
++		}
++		break;
++
++	case FIQ_PER_CSPLIT_BROKEN_NYET1:
++		/*
++		 * we got here because our host channel is in the delayed-interrupt
++		 * state and we cannot take a NYET interrupt any later than when it
++		 * occurred. Disable then re-enable the channel if this happens to force
++		 * CSPLITs to occur at the right time.
++		 */
++		hfnum.d32 = FIQ_READ(state->dwc_regs_base + HFNUM);
++		hcchar.d32 = FIQ_READ(state->dwc_regs_base + HC_START + (HC_OFFSET * n) + HCCHAR);
++		fiq_print(FIQDBG_INT, state, "BROK: %01d ", n);
++		if (hcint.b.nak) {
++			st->fsm = FIQ_PER_SPLIT_DONE;
++			start_next_periodic = 1;
++		} else if (hcint.b.xfercomp) {
++			fiq_increment_dma_buf(state, num_channels, n);
++			if (fiq_fsm_more_csplits(state, n, &last_csplit)) {
++				st->fsm = FIQ_PER_CSPLIT_POLL;
++				handled = 1;
++				restart = 1;
++				start_next_periodic = 1;
++				/* Reload HCTSIZ for the next transfer */
++				fiq_fsm_reload_hctsiz(state, n);
++				if (!last_csplit)
++					start_next_periodic = 0;
++			} else {
++				st->fsm = FIQ_PER_SPLIT_DONE;
++			}
++		} else if (hcint.b.nyet) {
++			st->fsm = FIQ_PER_SPLIT_NYET_ABORTED;
++			start_next_periodic = 1;
++		} else if (hcint.b.xacterr || hcint.b.stall || hcint.b.bblerr) {
++			/* Local 3-strikes retry is handled by the core. This is a ERR response.*/
++			st->fsm = FIQ_PER_SPLIT_LS_ABORTED;
++		} else {
++			st->fsm = FIQ_PER_SPLIT_HS_ABORTED;
++		}
++		break;
++
++	case FIQ_PER_CSPLIT_POLL:
++		hfnum.d32 = FIQ_READ(state->dwc_regs_base + HFNUM);
++		hcchar.d32 = FIQ_READ(state->dwc_regs_base + HC_START + (HC_OFFSET * n) + HCCHAR);
++		start_next_periodic = 1;
++		if (hcint.b.nak) {
++			st->fsm = FIQ_PER_SPLIT_DONE;
++		} else if (hcint.b.xfercomp) {
++			fiq_increment_dma_buf(state, num_channels, n);
++			if (fiq_fsm_more_csplits(state, n, &last_csplit)) {
++				handled = 1;
++				restart = 1;
++				/* Reload HCTSIZ for the next transfer */
++				fiq_fsm_reload_hctsiz(state, n);
++				if (!last_csplit)
++					start_next_periodic = 0;
++			} else {
++				st->fsm = FIQ_PER_SPLIT_DONE;
++			}
++		} else if (hcint.b.nyet) {
++			/* Are we a NYET after the first data packet? */
++			if (st->nrpackets == 0) {
++				st->fsm = FIQ_PER_CSPLIT_NYET1;
++				handled = 1;
++				restart = 1;
++			} else {
++				/* We got a NYET when polling CSPLITs. Can happen
++				 * if our heuristic fails, or if someone disables us
++				 * for any significant length of time.
++				 */
++				if (st->nr_errors >= 3) {
++					st->fsm = FIQ_PER_SPLIT_NYET_ABORTED;
++				} else {
++					st->fsm = FIQ_PER_SPLIT_DONE;
++				}
++			}
++		} else if (hcint.b.xacterr || hcint.b.stall || hcint.b.bblerr) {
++			/* For xacterr, Local 3-strikes retry is handled by the core. This is a ERR response.*/
++			st->fsm = FIQ_PER_SPLIT_LS_ABORTED;
++		} else {
++			st->fsm = FIQ_PER_SPLIT_HS_ABORTED;
++		}
++		break;
++
++	case FIQ_HS_ISOC_TURBO:
++		if (fiq_fsm_update_hs_isoc(state, n, hcint)) {
++			/* more transactions to come */
++			handled = 1;
++			fiq_print(FIQDBG_INT, state, "HSISO M ");
++			/* For strided transfers, put ourselves to sleep */
++			if (st->hs_isoc_info.stride > 1) {
++				st->uframe_sleeps = st->hs_isoc_info.stride - 1;
++				st->fsm = FIQ_HS_ISOC_SLEEPING;
++			} else {
++				restart = 1;
++			}
++		} else {
++			st->fsm = FIQ_HS_ISOC_DONE;
++			fiq_print(FIQDBG_INT, state, "HSISO F ");
++		}
++		break;
++
++	case FIQ_HS_ISOC_ABORTED:
++		/* This abort is called by the driver rewriting the state mid-transaction
++		 * which allows the dequeue mechanism to work more effectively.
++		 */
++		break;
++
++	case FIQ_PER_ISO_OUT_ACTIVE:
++		if (hcint.b.ack) {
++			if(fiq_iso_out_advance(state, num_channels, n)) {
++				/* last OUT transfer */
++				st->fsm = FIQ_PER_ISO_OUT_LAST;
++				/*
++				 * Assuming the periodic FIFO in the dwc core
++				 * actually does its job properly, we can queue
++				 * the next ssplit now and in theory, the wire
++				 * transactions will be in-order.
++				 */
++				// No it doesn't. It appears to process requests in host channel order.
++				//start_next_periodic = 1;
++			}
++			handled = 1;
++			restart = 1;
++		} else {
++			/*
++			 * Isochronous transactions carry on regardless. Log the error
++			 * and continue.
++			 */
++			//explode += 1;
++			st->nr_errors++;
++			if(fiq_iso_out_advance(state, num_channels, n)) {
++				st->fsm = FIQ_PER_ISO_OUT_LAST;
++				//start_next_periodic = 1;
++			}
++			handled = 1;
++			restart = 1;
++		}
++		break;
++
++	case FIQ_PER_ISO_OUT_LAST:
++		if (hcint.b.ack) {
++			/* All done here */
++			st->fsm = FIQ_PER_ISO_OUT_DONE;
++		} else {
++			st->fsm = FIQ_PER_ISO_OUT_DONE;
++			st->nr_errors++;
++		}
++		start_next_periodic = 1;
++		break;
++
++	case FIQ_PER_SPLIT_TIMEOUT:
++		/* SOF kicked us because we overran. */
++		start_next_periodic = 1;
++		break;
++
++	default:
++		break;
++	}
++
++	if (handled) {
++		FIQ_WRITE(state->dwc_regs_base + HC_START + (HC_OFFSET * n) + HCINT, hcint.d32);
++	} else {
++		/* Copy the regs into the state so the IRQ knows what to do */
++		st->hcint_copy.d32 = hcint.d32;
++	}
++
++	if (restart) {
++		/* Restart always implies handled. */
++		if (restart == 2) {
++			/* For complete-split INs, the show must go on.
++			 * Force a channel restart */
++			fiq_fsm_restart_channel(state, n, 1);
++		} else {
++			fiq_fsm_restart_channel(state, n, 0);
++		}
++	}
++	if (start_next_periodic) {
++		fiq_fsm_start_next_periodic(state, num_channels);
++	}
++	if (st->fsm != FIQ_PASSTHROUGH)
++		fiq_print(FIQDBG_INT, state, "FSMOUT%02d", st->fsm);
++
++	return handled;
++}
++
++
++/**
++ * dwc_otg_fiq_fsm() - Flying State Machine (monster) FIQ
++ * @state:		pointer to state struct passed from the banked FIQ mode registers.
++ * @num_channels:	set according to the DWC hardware configuration
++ * @dma:		pointer to DMA bounce buffers for split transaction slots
++ *
++ * The FSM FIQ performs the low-level tasks that normally would be performed by the microcode
++ * inside an EHCI or similar host controller regarding split transactions. The DWC core
++ * interrupts each and every time a split transaction packet is received or sent successfully.
++ * This results in either an interrupt storm when everything is working "properly", or
++ * the interrupt latency of the system in general breaks time-sensitive periodic split
++ * transactions. Pushing the low-level, but relatively easy state machine work into the FIQ
++ * solves these problems.
++ *
++ * Return: void
++ */
++void notrace dwc_otg_fiq_fsm(struct fiq_state *state, int num_channels)
++{
++	gintsts_data_t gintsts, gintsts_handled;
++	gintmsk_data_t gintmsk;
++	//hfnum_data_t hfnum;
++	haint_data_t haint, haint_handled;
++	haintmsk_data_t haintmsk;
++	int kick_irq = 0;
++
++	gintsts_handled.d32 = 0;
++	haint_handled.d32 = 0;
++
++	fiq_fsm_spin_lock(&state->lock);
++	gintsts.d32 = FIQ_READ(state->dwc_regs_base + GINTSTS);
++	gintmsk.d32 = FIQ_READ(state->dwc_regs_base + GINTMSK);
++	gintsts.d32 &= gintmsk.d32;
++
++	if (gintsts.b.sofintr) {
++		/* For FSM mode, SOF is required to keep the state machine advancing through
++		 * certain stages of the periodic pipeline. It's death to mask this
++		 * interrupt in that case.
++		 */
++
++		if (!fiq_fsm_do_sof(state, num_channels)) {
++			/* Kick IRQ once. Queue advancement means that all pending transactions
++			 * will get serviced when the IRQ finally executes.
++			 */
++			if (state->gintmsk_saved.b.sofintr == 1)
++				kick_irq |= 1;
++			state->gintmsk_saved.b.sofintr = 0;
++		}
++		gintsts_handled.b.sofintr = 1;
++	}
++
++	if (gintsts.b.hcintr) {
++		int i;
++		haint.d32 = FIQ_READ(state->dwc_regs_base + HAINT);
++		haintmsk.d32 = FIQ_READ(state->dwc_regs_base + HAINTMSK);
++		haint.d32 &= haintmsk.d32;
++		haint_handled.d32 = 0;
++		for (i=0; i<num_channels; i++) {
++			if (haint.b2.chint & (1 << i)) {
++				if(!fiq_fsm_do_hcintr(state, num_channels, i)) {
++					/* HCINT was not handled in FIQ
++					 * HAINT is level-sensitive, leading to a level-sensitive gintsts.b.hcintr bit.
++					 * Mask HAINT(i) but keep the top-level hcintr unmasked.
++					 */
++					state->haintmsk_saved.b2.chint &= ~(1 << i);
++				} else {
++					/* do_hcintr cleaned up after itself, but clear haint */
++					haint_handled.b2.chint |= (1 << i);
++				}
++			}
++		}
++
++		if (haint_handled.b2.chint) {
++			FIQ_WRITE(state->dwc_regs_base + HAINT, haint_handled.d32);
++		}
++
++		if (haintmsk.d32 != (haintmsk.d32 & state->haintmsk_saved.d32)) {
++			/*
++			 * This is necessary to avoid multiple retriggers of the MPHI in the case
++			 * where interrupts are held off and HCINTs start to pile up.
++			 * Only wake up the IRQ if a new interrupt came in, was not handled and was
++			 * masked.
++			 */
++			haintmsk.d32 &= state->haintmsk_saved.d32;
++			FIQ_WRITE(state->dwc_regs_base + HAINTMSK, haintmsk.d32);
++			kick_irq |= 1;
++		}
++		/* Top-Level interrupt - always handled because it's level-sensitive */
++		gintsts_handled.b.hcintr = 1;
++	}
++
++
++	/* Clear the bits in the saved register that were not handled but were triggered. */
++	state->gintmsk_saved.d32 &= ~(gintsts.d32 & ~gintsts_handled.d32);
++
++	/* FIQ didn't handle something - mask has changed - write new mask */
++	if (gintmsk.d32 != (gintmsk.d32 & state->gintmsk_saved.d32)) {
++		gintmsk.d32 &= state->gintmsk_saved.d32;
++		gintmsk.b.sofintr = 1;
++		FIQ_WRITE(state->dwc_regs_base + GINTMSK, gintmsk.d32);
++//		fiq_print(FIQDBG_INT, state, "KICKGINT");
++//		fiq_print(FIQDBG_INT, state, "%08x", gintmsk.d32);
++//		fiq_print(FIQDBG_INT, state, "%08x", state->gintmsk_saved.d32);
++		kick_irq |= 1;
++	}
++
++	if (gintsts_handled.d32) {
++		/* Only applies to edge-sensitive bits in GINTSTS */
++		FIQ_WRITE(state->dwc_regs_base + GINTSTS, gintsts_handled.d32);
++	}
++
++	/* We got an interrupt, didn't handle it. */
++	if (kick_irq) {
++		state->mphi_int_count++;
++		FIQ_WRITE(state->mphi_regs.outdda, (int) state->dummy_send);
++		FIQ_WRITE(state->mphi_regs.outddb, (1<<29));
++
++	}
++	state->fiq_done++;
++	mb();
++	fiq_fsm_spin_unlock(&state->lock);
++}
++
++
++/**
++ * dwc_otg_fiq_nop() - FIQ "lite"
++ * @state:	pointer to state struct passed from the banked FIQ mode registers.
++ *
++ * The "nop" handler does not intervene on any interrupts other than SOF.
++ * It is limited in scope to deciding at each SOF if the IRQ SOF handler (which deals
++ * with non-periodic/periodic queues) needs to be kicked.
++ *
++ * This is done to hold off the SOF interrupt, which occurs at a rate of 8000 per second.
++ *
++ * Return: void
++ */
++void notrace dwc_otg_fiq_nop(struct fiq_state *state)
++{
++	gintsts_data_t gintsts, gintsts_handled;
++	gintmsk_data_t gintmsk;
++	hfnum_data_t hfnum;
++
++	fiq_fsm_spin_lock(&state->lock);
++	hfnum.d32 = FIQ_READ(state->dwc_regs_base + HFNUM);
++	gintsts.d32 = FIQ_READ(state->dwc_regs_base + GINTSTS);
++	gintmsk.d32 = FIQ_READ(state->dwc_regs_base + GINTMSK);
++	gintsts.d32 &= gintmsk.d32;
++	gintsts_handled.d32 = 0;
++
++	if (gintsts.b.sofintr) {
++		if (!state->kick_np_queues &&
++				dwc_frame_num_gt(state->next_sched_frame, hfnum.b.frnum)) {
++			/* SOF handled, no work to do, just ACK interrupt */
++			gintsts_handled.b.sofintr = 1;
++		} else {
++			/* Kick IRQ */
++			state->gintmsk_saved.b.sofintr = 0;
++		}
++	}
++
++	/* Reset handled interrupts */
++	if(gintsts_handled.d32) {
++		FIQ_WRITE(state->dwc_regs_base + GINTSTS, gintsts_handled.d32);
++	}
++
++	/* Clear the bits in the saved register that were not handled but were triggered. */
++	state->gintmsk_saved.d32 &= ~(gintsts.d32 & ~gintsts_handled.d32);
++
++	/* We got an interrupt, didn't handle it and want to mask it */
++	if (~(state->gintmsk_saved.d32)) {
++		state->mphi_int_count++;
++		gintmsk.d32 &= state->gintmsk_saved.d32;
++		FIQ_WRITE(state->dwc_regs_base + GINTMSK, gintmsk.d32);
++		/* Force a clear before another dummy send */
++		FIQ_WRITE(state->mphi_regs.intstat, (1<<29));
++		FIQ_WRITE(state->mphi_regs.outdda, (int) state->dummy_send);
++		FIQ_WRITE(state->mphi_regs.outddb, (1<<29));
++
++	}
++	state->fiq_done++;
++	mb();
++	fiq_fsm_spin_unlock(&state->lock);
++}
+--- /dev/null
++++ b/drivers/usb/host/dwc_otg/dwc_otg_fiq_fsm.h
+@@ -0,0 +1,370 @@
++/*
++ * dwc_otg_fiq_fsm.h - Finite state machine FIQ header definitions
++ *
++ * Copyright (c) 2013 Raspberry Pi Foundation
++ *
++ * Author: Jonathan Bell <jonathan at raspberrypi.org>
++ * All rights reserved.
++ *
++ * Redistribution and use in source and binary forms, with or without
++ * modification, are permitted provided that the following conditions are met:
++ *	* Redistributions of source code must retain the above copyright
++ *	  notice, this list of conditions and the following disclaimer.
++ *	* Redistributions in binary form must reproduce the above copyright
++ *	  notice, this list of conditions and the following disclaimer in the
++ *	  documentation and/or other materials provided with the distribution.
++ *	* Neither the name of Raspberry Pi nor the
++ *	  names of its contributors may be used to endorse or promote products
++ *	  derived from this software without specific prior written permission.
++ *
++ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
++ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
++ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
++ * DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY
++ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
++ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
++ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
++ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
++ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
++ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
++ *
++ * This FIQ implements functionality that performs split transactions on
++ * the dwc_otg hardware without any outside intervention. A split transaction
++ * is "queued" by nominating a specific host channel to perform the entirety
++ * of a split transaction. This FIQ will then perform the microframe-precise
++ * scheduling required in each phase of the transaction until completion.
++ *
++ * The FIQ functionality has been surgically implanted into the Synopsys
++ * vendor-provided driver.
++ *
++ */
++
++#ifndef DWC_OTG_FIQ_FSM_H_
++#define DWC_OTG_FIQ_FSM_H_
++
++#include "dwc_otg_regs.h"
++#include "dwc_otg_cil.h"
++#include "dwc_otg_hcd.h"
++#include <linux/kernel.h>
++#include <linux/irqflags.h>
++#include <linux/string.h>
++#include <asm/barrier.h>
++
++#if 0
++#define FLAME_ON(x)					\
++do {							\
++	int gpioreg;                                    \
++							\
++	gpioreg = readl(__io_address(0x20200000+0x8));	\
++	gpioreg &= ~(7 << (x-20)*3);			\
++	gpioreg |= 0x1 << (x-20)*3;			\
++	writel(gpioreg, __io_address(0x20200000+0x8));	\
++							\
++	writel(1<<x, __io_address(0x20200000+(0x1C)));	\
++} while (0)
++
++#define FLAME_OFF(x)					\
++do {							\
++	writel(1<<x, __io_address(0x20200000+(0x28)));	\
++} while (0)
++#else
++#define FLAME_ON(x) do { } while (0)
++#define FLAME_OFF(X) do { } while (0)
++#endif
++
++/* This is a quick-and-dirty arch-specific register read/write. We know that
++ * writes to a peripheral on BCM2835 will always arrive in order, and that
++ * reads and writes are executed in order, so the need for memory barriers
++ * is obviated if we're only talking to USB.
++ */
++#define FIQ_WRITE(_addr_,_data_) (*(volatile unsigned int *) (_addr_) = (_data_))
++#define FIQ_READ(_addr_) (*(volatile unsigned int *) (_addr_))
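++
++/*
++ * Usage sketch (illustrative only, mirroring what dwc_otg_fiq_fsm() does at the
++ * top of each invocation): a masked read of the global interrupt status via
++ * these helpers looks like
++ *
++ *   gintsts.d32 = FIQ_READ(state->dwc_regs_base + GINTSTS);
++ *   gintsts.d32 &= FIQ_READ(state->dwc_regs_base + GINTMSK);
++ *
++ * and an acknowledging write-back of handled bits is
++ *
++ *   FIQ_WRITE(state->dwc_regs_base + GINTSTS, gintsts_handled.d32);
++ */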
++
++/* FIQ-ified register definitions. Offsets are from dwc_regs_base. */
++#define GINTSTS		0x014
++#define GINTMSK		0x018
++/* Debug register. Poll the top of the received packets FIFO. */
++#define GRXSTSR		0x01C
++#define HFNUM		0x408
++#define HAINT		0x414
++#define HAINTMSK	0x418
++#define HPRT0		0x440
++
++/* HC_regs start from an offset of 0x500 */
++#define HC_START	0x500
++#define HC_OFFSET	0x020
++
++#define HC_DMA		0x514
++
++#define HCCHAR		0x00
++#define HCSPLT		0x04
++#define HCINT		0x08
++#define HCINTMSK	0x0C
++#define HCTSIZ		0x10
++
++#define ISOC_XACTPOS_ALL 	0b11
++#define ISOC_XACTPOS_BEGIN	0b10
++#define ISOC_XACTPOS_MID	0b00
++#define ISOC_XACTPOS_END	0b01
++
++#define DWC_PID_DATA2	0b01
++#define DWC_PID_MDATA	0b11
++#define DWC_PID_DATA1	0b10
++#define DWC_PID_DATA0	0b00
++
++typedef struct {
++	volatile void* base;
++	volatile void* ctrl;
++	volatile void* outdda;
++	volatile void* outddb;
++	volatile void* intstat;
++} mphi_regs_t;
++
++enum fiq_debug_level {
++	FIQDBG_SCHED = (1 << 0),
++	FIQDBG_INT   = (1 << 1),
++	FIQDBG_ERR   = (1 << 2),
++	FIQDBG_PORTHUB = (1 << 3),
++};
++
++typedef struct {
++	union {
++		uint32_t slock;
++		struct _tickets {
++			uint16_t owner;
++			uint16_t next;
++		} tickets;
++	};
++} fiq_lock_t;
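++
++/*
++ * The owner/next pair above is laid out as a ticket lock. As a rough sketch of
++ * the intended semantics (the real fiq_fsm_spin_lock()/fiq_fsm_spin_unlock()
++ * are implemented elsewhere and may differ in detail):
++ *
++ *   acquire: ticket = lock->tickets.next++;        // atomically take a ticket
++ *            while (lock->tickets.owner != ticket)
++ *                ;                                 // spin until it is our turn
++ *   release: lock->tickets.owner++;                // serve the next waiter
++ *
++ * This gives FIFO ordering between the FIQ and the IRQ/process contexts that
++ * share the fiq_state.
++ */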
++
++struct fiq_state;
++
++extern void _fiq_print (enum fiq_debug_level dbg_lvl, volatile struct fiq_state *state, char *fmt, ...);
++#if 0
++#define fiq_print _fiq_print
++#else
++#define fiq_print(x, y, ...)
++#endif
++
++extern bool fiq_enable, fiq_fsm_enable;
++extern ushort nak_holdoff;
++
++/**
++ * enum fiq_fsm_state - The FIQ FSM states.
++ *
++ * This is the "core" of the FIQ FSM. Broadly, the FSM states follow the
++ * USB2.0 specification for host responses to various transaction states.
++ * There are modifications to this host state machine because of a variety of
++ * quirks and limitations in the dwc_otg hardware.
++ *
++ * The fsm state is also used to communicate back to the driver on completion of
++ * a split transaction. The end states are used in conjunction with the interrupts
++ * raised by the final transaction.
++ */
++enum fiq_fsm_state {
++	/* FIQ isn't enabled for this host channel */
++	FIQ_PASSTHROUGH = 0,
++	/* For the first interrupt received for this channel,
++	 * the FIQ has to ack any interrupts indicating success. */
++	FIQ_PASSTHROUGH_ERRORSTATE = 31,
++	/* Nonperiodic state groups */
++	FIQ_NP_SSPLIT_STARTED = 1,
++	FIQ_NP_SSPLIT_RETRY = 2,
++	FIQ_NP_OUT_CSPLIT_RETRY = 3,
++	FIQ_NP_IN_CSPLIT_RETRY = 4,
++	FIQ_NP_SPLIT_DONE = 5,
++	FIQ_NP_SPLIT_LS_ABORTED = 6,
++	/* This differentiates a HS transaction error from a LS one
++	 * (handling the hub state is different) */
++	FIQ_NP_SPLIT_HS_ABORTED = 7,
++
++	/* Periodic state groups */
++	/* Periodic transactions are either started directly by the IRQ handler
++	 * or deferred if the TT is already in use.
++	 */
++	FIQ_PER_SSPLIT_QUEUED = 8,
++	FIQ_PER_SSPLIT_STARTED = 9,
++	FIQ_PER_SSPLIT_LAST = 10,
++
++
++	FIQ_PER_ISO_OUT_PENDING = 11,
++	FIQ_PER_ISO_OUT_ACTIVE = 12,
++	FIQ_PER_ISO_OUT_LAST = 13,
++	FIQ_PER_ISO_OUT_DONE = 27,
++
++	FIQ_PER_CSPLIT_WAIT = 14,
++	FIQ_PER_CSPLIT_NYET1 = 15,
++	FIQ_PER_CSPLIT_BROKEN_NYET1 = 28,
++	FIQ_PER_CSPLIT_NYET_FAFF = 29,
++	/* For multiple CSPLITs (large isoc IN, or delayed interrupt) */
++	FIQ_PER_CSPLIT_POLL = 16,
++	/* The last CSPLIT for a transaction has been issued; this lets
++	 * the state machine know when to queue the next packet.
++	 */
++	FIQ_PER_CSPLIT_LAST = 17,
++
++	FIQ_PER_SPLIT_DONE = 18,
++	FIQ_PER_SPLIT_LS_ABORTED = 19,
++	FIQ_PER_SPLIT_HS_ABORTED = 20,
++	FIQ_PER_SPLIT_NYET_ABORTED = 21,
++	/* Frame rollover has occurred without the transaction finishing. */
++	FIQ_PER_SPLIT_TIMEOUT = 22,
++
++	/* FIQ-accelerated HS Isochronous state groups */
++	FIQ_HS_ISOC_TURBO = 23,
++	/* For interval > 1, SOF wakes up the isochronous FSM */
++	FIQ_HS_ISOC_SLEEPING = 24,
++	FIQ_HS_ISOC_DONE = 25,
++	FIQ_HS_ISOC_ABORTED = 26,
++	FIQ_DEQUEUE_ISSUED = 30,
++	FIQ_TEST = 32,
++};
++
++struct fiq_stack {
++	int magic1;
++	uint8_t stack[2048];
++	int magic2;
++};
++
++
++/**
++ * struct fiq_dma_info - DMA bounce buffer utilisation information (per-channel)
++ * @index:	Number of slots reported used for IN transactions / number of slots
++ *			transmitted for an OUT transaction
++ * @slot_len[6]: Number of actual transfer bytes in each slot (255 if unused)
++ *
++ * Split transaction transfers can have variable length depending on other bus
++ * traffic. The OTG core DMA engine requires 4-byte aligned addresses, so
++ * each transaction needs a guaranteed aligned address. A maximum of 6 split transfers
++ * can happen per frame.
++ */
++struct fiq_dma_info {
++	u8 index;
++	u8 slot_len[6];
++};
++
++struct __attribute__((packed)) fiq_split_dma_slot {
++	u8 buf[188];
++};
++
++struct fiq_dma_channel {
++	struct __attribute__((packed)) fiq_split_dma_slot index[6];
++};
++
++struct fiq_dma_blob {
++	struct __attribute__((packed)) fiq_dma_channel channel[0];
++};
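++
++/*
++ * Addressing sketch for the bounce buffers above (hypothetical helper, for
++ * illustration only): with six packed 188-byte slots per channel, slot s of
++ * channel n in a fiq_dma_blob is simply
++ *
++ *   static inline void *fiq_slot_buf(struct fiq_dma_blob *blob, int n, int s)
++ *   {
++ *       return blob->channel[n].index[s].buf;
++ *   }
++ *
++ * Since 188 is a multiple of 4, every slot keeps the 4-byte DMA alignment the
++ * OTG core requires, provided the blob itself is allocated 4-byte aligned.
++ */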
++
++/**
++ * struct fiq_hs_isoc_info - USB2.0 isochronous data
++ * @iso_frame:	Pointer to the array of OTG URB iso_frame_descs.
++ * @nrframes:	Total length of iso_frame_desc array
++ * @index:	Current index (FIQ-maintained)
++ * @stride:	Interval in uframes between HS isoc transactions
++ */
++struct fiq_hs_isoc_info {
++	struct dwc_otg_hcd_iso_packet_desc *iso_desc;
++	unsigned int nrframes;
++	unsigned int index;
++	unsigned int stride;
++};
++
++/**
++ * struct fiq_channel_state - FIQ state machine storage
++ * @fsm:	Current state of the channel as understood by the FIQ
++ * @nr_errors:	Number of transaction errors on this split-transaction
++ * @hub_addr:   SSPLIT/CSPLIT destination hub
++ * @port_addr:  SSPLIT/CSPLIT destination port - always 1 if single TT hub
++ * @nrpackets:  For isoc OUT, the number of split-OUT packets to transmit. For
++ * 		split-IN, number of CSPLIT data packets that were received.
++ * @hcchar_copy:
++ * @hcsplt_copy:
++ * @hcintmsk_copy:
++ * @hctsiz_copy:	Copies of the host channel registers.
++ * 			For use as scratch, or for returning state.
++ *
++ * The fiq_channel_state is state storage between interrupts for a host channel. The
++ * FSM state is stored here. Members of this structure must only be set up by the
++ * driver prior to enabling the FIQ for this host channel, and not touched until the FIQ
++ * has updated the state to either a COMPLETE state group or ABORT state group.
++ */
++
++struct fiq_channel_state {
++	enum fiq_fsm_state fsm;
++	unsigned int nr_errors;
++	unsigned int hub_addr;
++	unsigned int port_addr;
++	/* Hardware bug workaround: sometimes channel halt interrupts are
++	 * delayed until the next SOF. Keep track of when we expected to get interrupted. */
++	unsigned int expected_uframe;
++	/* number of uframes remaining (for interval > 1 HS isoc transfers) before next transfer */
++	unsigned int uframe_sleeps;
++	/* in/out for communicating number of dma buffers used, or number of ISOC to do */
++	unsigned int nrpackets;
++	struct fiq_dma_info dma_info;
++	struct fiq_hs_isoc_info hs_isoc_info;
++	/* Copies of HC registers - in/out communication from/to IRQ handler
++	 * and for ease of channel setup. A bit of mungeing is performed - for
++	 * example the hctsiz.b.maxp is _always_ the max packet size of the endpoint.
++	 */
++	hcchar_data_t hcchar_copy;
++	hcsplt_data_t hcsplt_copy;
++	hcint_data_t hcint_copy;
++	hcintmsk_data_t hcintmsk_copy;
++	hctsiz_data_t hctsiz_copy;
++	hcdma_data_t hcdma_copy;
++};
++
++/**
++ * struct fiq_state - top-level FIQ state machine storage
++ * @mphi_regs:		virtual address of the MPHI peripheral register file
++ * @dwc_regs_base:	virtual address of the base of the DWC core register file
++ * @dma_base:		physical address for the base of the DMA bounce buffers
++ * @dummy_send:		Scratch area for sending a fake message to the MPHI peripheral
++ * @gintmsk_saved:	Top-level mask of interrupts that the FIQ has not handled.
++ * 			Used for determining which interrupts fired to set off the IRQ handler.
++ * @haintmsk_saved:	Mask of interrupts from host channels that the FIQ did not handle internally.
++ * @np_count:		Non-periodic transactions in the active queue
++ * @np_sent:		Count of non-periodic transactions that have completed
++ * @next_sched_frame:	For periodic transactions handled by the driver's SOF-driven queuing mechanism,
++ * 			this is the next frame on which a SOF interrupt is required. Used to hold off
++ * 			passing SOF through to the driver until necessary.
++ * @channel[n]:		Per-channel FIQ state. Allocated during init depending on the number of host
++ * 			channels configured into the core logic.
++ *
++ * This is passed as the first argument to the dwc_otg_fiq_fsm top-level FIQ handler from the asm stub.
++ * It contains top-level state information.
++ */
++struct fiq_state {
++	fiq_lock_t lock;
++	mphi_regs_t mphi_regs;
++	void *dwc_regs_base;
++	dma_addr_t dma_base;
++	struct fiq_dma_blob *fiq_dmab;
++	void *dummy_send;
++	gintmsk_data_t gintmsk_saved;
++	haintmsk_data_t haintmsk_saved;
++	int mphi_int_count;
++	unsigned int fiq_done;
++	unsigned int kick_np_queues;
++	unsigned int next_sched_frame;
++#ifdef FIQ_DEBUG
++	char * buffer;
++	unsigned int bufsiz;
++#endif
++	struct fiq_channel_state channel[0];
++};
++
++extern void fiq_fsm_spin_lock(fiq_lock_t *lock);
++
++extern void fiq_fsm_spin_unlock(fiq_lock_t *lock);
++
++extern int fiq_fsm_too_late(struct fiq_state *st, int n);
++
++extern int fiq_fsm_tt_in_use(struct fiq_state *st, int num_channels, int n);
++
++extern void dwc_otg_fiq_fsm(struct fiq_state *state, int num_channels);
++
++extern void dwc_otg_fiq_nop(struct fiq_state *state);
++
++#endif /* DWC_OTG_FIQ_FSM_H_ */
+--- /dev/null
++++ b/drivers/usb/host/dwc_otg/dwc_otg_fiq_stub.S
+@@ -0,0 +1,80 @@
++/*
++ * dwc_otg_fiq_fsm.S - assembly stub for the FSM FIQ
++ *
++ * Copyright (c) 2013 Raspberry Pi Foundation
++ *
++ * Author: Jonathan Bell <jonathan at raspberrypi.org>
++ * All rights reserved.
++ *
++ * Redistribution and use in source and binary forms, with or without
++ * modification, are permitted provided that the following conditions are met:
++ *	* Redistributions of source code must retain the above copyright
++ *	  notice, this list of conditions and the following disclaimer.
++ *	* Redistributions in binary form must reproduce the above copyright
++ *	  notice, this list of conditions and the following disclaimer in the
++ *	  documentation and/or other materials provided with the distribution.
++ *	* Neither the name of Raspberry Pi nor the
++ *	  names of its contributors may be used to endorse or promote products
++ *	  derived from this software without specific prior written permission.
++ *
++ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
++ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
++ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
++ * DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY
++ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
++ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
++ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
++ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
++ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
++ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
++ */
++
++
++#include <asm/assembler.h>
++#include <linux/linkage.h>
++
++
++.text
++
++.global _dwc_otg_fiq_stub_end;
++
++/**
++  * _dwc_otg_fiq_stub() - entry copied to the FIQ vector page to allow
++  * a C-style function call with arguments from the FIQ banked registers.
++  * r0 = &hcd->fiq_state
++  * r1 = &hcd->num_channels
++  * r2 = &hcd->dma_buffers
++  * Tramples: r0, r1, r2, r4, fp, ip
++  */
++
++ENTRY(_dwc_otg_fiq_stub)
++	/* Stash unbanked regs - SP will have been set up for us */
++	mov ip, sp;
++	stmdb sp!, {r0-r12, lr};
++#ifdef FIQ_DEBUG
++	// Cycle profiling - read cycle counter at start
++	mrc p15, 0, r5, c15, c12, 1;
++#endif
++	/* r11 = fp, don't trample it */
++	mov r4, fp;
++	/* set EABI frame size */
++	sub fp, ip, #512;
++
++	/* for fiq NOP mode - just need state */
++	mov r0, r8;
++	/* r9 = num_channels */
++	mov r1, r9;
++	/* r10 = struct *dma_bufs */
++//	mov r2, r10;
++
++	/* r4 = &fiq_c_function */
++	blx r4;
++#ifdef FIQ_DEBUG
++	mrc p15, 0, r4, c15, c12, 1;
++	subs r5, r5, r4;
++	// r5 is now the cycle count time for executing the FIQ. Store it somewhere?
++#endif
++	ldmia sp!, {r0-r12, lr};
++	subs pc, lr, #4;
++_dwc_otg_fiq_stub_end:
++END(_dwc_otg_fiq_stub)
+--- /dev/null
++++ b/drivers/usb/host/dwc_otg/dwc_otg_hcd.c
+@@ -0,0 +1,4257 @@
++
++/* ==========================================================================
++ * $File: //dwh/usb_iip/dev/software/otg/linux/drivers/dwc_otg_hcd.c $
++ * $Revision: #104 $
++ * $Date: 2011/10/24 $
++ * $Change: 1871159 $
++ *
++ * Synopsys HS OTG Linux Software Driver and documentation (hereinafter,
++ * "Software") is an Unsupported proprietary work of Synopsys, Inc. unless
++ * otherwise expressly agreed to in writing between Synopsys and you.
++ *
++ * The Software IS NOT an item of Licensed Software or Licensed Product under
++ * any End User Software License Agreement or Agreement for Licensed Product
++ * with Synopsys or any supplement thereto. You are permitted to use and
++ * redistribute this Software in source and binary forms, with or without
++ * modification, provided that redistributions of source code must retain this
++ * notice. You may not view, use, disclose, copy or distribute this file or
++ * any information contained herein except pursuant to this license grant from
++ * Synopsys. If you do not agree with this notice, including the disclaimer
++ * below, then you are not authorized to use the Software.
++ *
++ * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS" BASIS
++ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
++ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
++ * ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS BE LIABLE FOR ANY DIRECT,
++ * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
++ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
++ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
++ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
++ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
++ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
++ * DAMAGE.
++ * ========================================================================== */
++#ifndef DWC_DEVICE_ONLY
++
++/** @file
++ * This file implements HCD Core. All code in this file is portable and doesn't
++ * use any OS specific functions.
++ * Interface provided by HCD Core is defined in <code><hcd_if.h></code>
++ * header file.
++ */
++
++#include <linux/usb.h>
++#include <linux/usb/hcd.h>
++
++#include "dwc_otg_hcd.h"
++#include "dwc_otg_regs.h"
++#include "dwc_otg_fiq_fsm.h"
++
++extern bool microframe_schedule;
++extern uint16_t fiq_fsm_mask, nak_holdoff;
++
++//#define DEBUG_HOST_CHANNELS
++#ifdef DEBUG_HOST_CHANNELS
++static int last_sel_trans_num_per_scheduled = 0;
++static int last_sel_trans_num_nonper_scheduled = 0;
++static int last_sel_trans_num_avail_hc_at_start = 0;
++static int last_sel_trans_num_avail_hc_at_end = 0;
++#endif /* DEBUG_HOST_CHANNELS */
++
++
++dwc_otg_hcd_t *dwc_otg_hcd_alloc_hcd(void)
++{
++	return DWC_ALLOC(sizeof(dwc_otg_hcd_t));
++}
++
++/**
++ * Connection timeout function.  An OTG host is required to display a
++ * message if the device does not connect within 10 seconds.
++ */
++void dwc_otg_hcd_connect_timeout(void *ptr)
++{
++	DWC_DEBUGPL(DBG_HCDV, "%s(%p)\n", __func__, ptr);
++	DWC_PRINTF("Connect Timeout\n");
++	__DWC_ERROR("Device Not Connected/Responding\n");
++}
++
++#if defined(DEBUG)
++static void dump_channel_info(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh)
++{
++	if (qh->channel != NULL) {
++		dwc_hc_t *hc = qh->channel;
++		dwc_list_link_t *item;
++		dwc_otg_qh_t *qh_item;
++		int num_channels = hcd->core_if->core_params->host_channels;
++		int i;
++
++		dwc_otg_hc_regs_t *hc_regs;
++		hcchar_data_t hcchar;
++		hcsplt_data_t hcsplt;
++		hctsiz_data_t hctsiz;
++		uint32_t hcdma;
++
++		hc_regs = hcd->core_if->host_if->hc_regs[hc->hc_num];
++		hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
++		hcsplt.d32 = DWC_READ_REG32(&hc_regs->hcsplt);
++		hctsiz.d32 = DWC_READ_REG32(&hc_regs->hctsiz);
++		hcdma = DWC_READ_REG32(&hc_regs->hcdma);
++
++		DWC_PRINTF("  Assigned to channel %p:\n", hc);
++		DWC_PRINTF("    hcchar 0x%08x, hcsplt 0x%08x\n", hcchar.d32,
++			   hcsplt.d32);
++		DWC_PRINTF("    hctsiz 0x%08x, hcdma 0x%08x\n", hctsiz.d32,
++			   hcdma);
++		DWC_PRINTF("    dev_addr: %d, ep_num: %d, ep_is_in: %d\n",
++			   hc->dev_addr, hc->ep_num, hc->ep_is_in);
++		DWC_PRINTF("    ep_type: %d\n", hc->ep_type);
++		DWC_PRINTF("    max_packet: %d\n", hc->max_packet);
++		DWC_PRINTF("    data_pid_start: %d\n", hc->data_pid_start);
++		DWC_PRINTF("    xfer_started: %d\n", hc->xfer_started);
++		DWC_PRINTF("    halt_status: %d\n", hc->halt_status);
++		DWC_PRINTF("    xfer_buff: %p\n", hc->xfer_buff);
++		DWC_PRINTF("    xfer_len: %d\n", hc->xfer_len);
++		DWC_PRINTF("    qh: %p\n", hc->qh);
++		DWC_PRINTF("  NP inactive sched:\n");
++		DWC_LIST_FOREACH(item, &hcd->non_periodic_sched_inactive) {
++			qh_item =
++			    DWC_LIST_ENTRY(item, dwc_otg_qh_t, qh_list_entry);
++			DWC_PRINTF("    %p\n", qh_item);
++		}
++		DWC_PRINTF("  NP active sched:\n");
++		DWC_LIST_FOREACH(item, &hcd->non_periodic_sched_active) {
++			qh_item =
++			    DWC_LIST_ENTRY(item, dwc_otg_qh_t, qh_list_entry);
++			DWC_PRINTF("    %p\n", qh_item);
++		}
++		DWC_PRINTF("  Channels: \n");
++		for (i = 0; i < num_channels; i++) {
++			dwc_hc_t *hc = hcd->hc_ptr_array[i];
++			DWC_PRINTF("    %2d: %p\n", i, hc);
++		}
++	}
++}
++#else
++#define dump_channel_info(hcd, qh)
++#endif /* DEBUG */
++
++/**
++ * Work queue function for starting the HCD when A-Cable is connected.
++ * The hcd_start() must be called in a process context.
++ */
++static void hcd_start_func(void *_vp)
++{
++	dwc_otg_hcd_t *hcd = (dwc_otg_hcd_t *) _vp;
++
++	DWC_DEBUGPL(DBG_HCDV, "%s() %p\n", __func__, hcd);
++	if (hcd) {
++		hcd->fops->start(hcd);
++	}
++}
++
++static void del_xfer_timers(dwc_otg_hcd_t * hcd)
++{
++#ifdef DEBUG
++	int i;
++	int num_channels = hcd->core_if->core_params->host_channels;
++	for (i = 0; i < num_channels; i++) {
++		DWC_TIMER_CANCEL(hcd->core_if->hc_xfer_timer[i]);
++	}
++#endif
++}
++
++static void del_timers(dwc_otg_hcd_t * hcd)
++{
++	del_xfer_timers(hcd);
++	DWC_TIMER_CANCEL(hcd->conn_timer);
++}
++
++/**
++ * Processes all the URBs in a single list of QHs. Completes them with
++ * -ESHUTDOWN and frees the QTD.
++ */
++static void kill_urbs_in_qh_list(dwc_otg_hcd_t * hcd, dwc_list_link_t * qh_list)
++{
++	dwc_list_link_t *qh_item, *qh_tmp;
++	dwc_otg_qh_t *qh;
++	dwc_otg_qtd_t *qtd, *qtd_tmp;
++
++	DWC_LIST_FOREACH_SAFE(qh_item, qh_tmp, qh_list) {
++		qh = DWC_LIST_ENTRY(qh_item, dwc_otg_qh_t, qh_list_entry);
++		DWC_CIRCLEQ_FOREACH_SAFE(qtd, qtd_tmp,
++					 &qh->qtd_list, qtd_list_entry) {
++			qtd = DWC_CIRCLEQ_FIRST(&qh->qtd_list);
++			if (qtd->urb != NULL) {
++				hcd->fops->complete(hcd, qtd->urb->priv,
++						    qtd->urb, -DWC_E_SHUTDOWN);
++				dwc_otg_hcd_qtd_remove_and_free(hcd, qtd, qh);
++			}
++
++		}
++		if(qh->channel) {
++			/* Using hcchar.chen == 1 is not a reliable test.
++			 * It is possible that the channel has already halted
++			 * but not yet been through the IRQ handler.
++			 */
++			dwc_otg_hc_halt(hcd->core_if, qh->channel,
++				DWC_OTG_HC_XFER_URB_DEQUEUE);
++			if(microframe_schedule)
++				hcd->available_host_channels++;
++			qh->channel = NULL;
++		}
++		dwc_otg_hcd_qh_remove(hcd, qh);
++	}
++}
++
++/**
++ * Responds with an error status of ESHUTDOWN to all URBs in the non-periodic
++ * and periodic schedules. The QTD associated with each URB is removed from
++ * the schedule and freed. This function may be called when a disconnect is
++ * detected or when the HCD is being stopped.
++ */
++static void kill_all_urbs(dwc_otg_hcd_t * hcd)
++{
++	kill_urbs_in_qh_list(hcd, &hcd->non_periodic_sched_inactive);
++	kill_urbs_in_qh_list(hcd, &hcd->non_periodic_sched_active);
++	kill_urbs_in_qh_list(hcd, &hcd->periodic_sched_inactive);
++	kill_urbs_in_qh_list(hcd, &hcd->periodic_sched_ready);
++	kill_urbs_in_qh_list(hcd, &hcd->periodic_sched_assigned);
++	kill_urbs_in_qh_list(hcd, &hcd->periodic_sched_queued);
++}
++
++/**
++ * Start the connection timer.  An OTG host is required to display a
++ * message if the device does not connect within 10 seconds.  The
++ * timer is deleted if a port connect interrupt occurs before the
++ * timer expires.
++ */
++static void dwc_otg_hcd_start_connect_timer(dwc_otg_hcd_t * hcd)
++{
++	DWC_TIMER_SCHEDULE(hcd->conn_timer, 10000 /* 10 secs */ );
++}
++
++/**
++ * HCD Callback function for session start of the HCD.
++ *
++ * @param p void pointer to the <code>struct usb_hcd</code>
++ */
++static int32_t dwc_otg_hcd_session_start_cb(void *p)
++{
++	dwc_otg_hcd_t *dwc_otg_hcd;
++	DWC_DEBUGPL(DBG_HCDV, "%s(%p)\n", __func__, p);
++	dwc_otg_hcd = p;
++	dwc_otg_hcd_start_connect_timer(dwc_otg_hcd);
++	return 1;
++}
++
++/**
++ * HCD Callback function for starting the HCD when A-Cable is
++ * connected.
++ *
++ * @param p void pointer to the <code>struct usb_hcd</code>
++ */
++static int32_t dwc_otg_hcd_start_cb(void *p)
++{
++	dwc_otg_hcd_t *dwc_otg_hcd = p;
++	dwc_otg_core_if_t *core_if;
++	hprt0_data_t hprt0;
++
++	core_if = dwc_otg_hcd->core_if;
++
++	if (core_if->op_state == B_HOST) {
++		/*
++		 * Reset the port.  During a HNP mode switch the reset
++		 * needs to occur within 1ms and have a duration of at
++		 * least 50ms.
++		 */
++		hprt0.d32 = dwc_otg_read_hprt0(core_if);
++		hprt0.b.prtrst = 1;
++		DWC_WRITE_REG32(core_if->host_if->hprt0, hprt0.d32);
++	}
++	DWC_WORKQ_SCHEDULE_DELAYED(core_if->wq_otg,
++				   hcd_start_func, dwc_otg_hcd, 50,
++				   "start hcd");
++
++	return 1;
++}
++
++/**
++ * HCD Callback function for disconnect of the HCD.
++ *
++ * @param p void pointer to the <code>struct usb_hcd</code>
++ */
++static int32_t dwc_otg_hcd_disconnect_cb(void *p)
++{
++	gintsts_data_t intr;
++	dwc_otg_hcd_t *dwc_otg_hcd = p;
++
++	/*
++	 * Set status flags for the hub driver.
++	 */
++	dwc_otg_hcd->flags.b.port_connect_status_change = 1;
++	dwc_otg_hcd->flags.b.port_connect_status = 0;
++	if(fiq_enable)
++		local_fiq_disable();
++	/*
++	 * Shutdown any transfers in process by clearing the Tx FIFO Empty
++	 * interrupt mask and status bits and disabling subsequent host
++	 * channel interrupts.
++	 */
++	intr.d32 = 0;
++	intr.b.nptxfempty = 1;
++	intr.b.ptxfempty = 1;
++	intr.b.hcintr = 1;
++	DWC_MODIFY_REG32(&dwc_otg_hcd->core_if->core_global_regs->gintmsk,
++			 intr.d32, 0);
++	DWC_MODIFY_REG32(&dwc_otg_hcd->core_if->core_global_regs->gintsts,
++			 intr.d32, 0);
++
++	del_timers(dwc_otg_hcd);
++
++	/*
++	 * Turn off the vbus power only if the core has transitioned to device
++	 * mode. If still in host mode, need to keep power on to detect a
++	 * reconnection.
++	 */
++	if (dwc_otg_is_device_mode(dwc_otg_hcd->core_if)) {
++		if (dwc_otg_hcd->core_if->op_state != A_SUSPEND) {
++			hprt0_data_t hprt0 = {.d32 = 0 };
++			DWC_PRINTF("Disconnect: PortPower off\n");
++			hprt0.b.prtpwr = 0;
++			DWC_WRITE_REG32(dwc_otg_hcd->core_if->host_if->hprt0,
++					hprt0.d32);
++		}
++
++		dwc_otg_disable_host_interrupts(dwc_otg_hcd->core_if);
++	}
++
++	/* Respond with an error status to all URBs in the schedule. */
++	kill_all_urbs(dwc_otg_hcd);
++
++	if (dwc_otg_is_host_mode(dwc_otg_hcd->core_if)) {
++		/* Clean up any host channels that were in use. */
++		int num_channels;
++		int i;
++		dwc_hc_t *channel;
++		dwc_otg_hc_regs_t *hc_regs;
++		hcchar_data_t hcchar;
++
++		num_channels = dwc_otg_hcd->core_if->core_params->host_channels;
++
++		if (!dwc_otg_hcd->core_if->dma_enable) {
++			/* Flush out any channel requests in slave mode. */
++			for (i = 0; i < num_channels; i++) {
++				channel = dwc_otg_hcd->hc_ptr_array[i];
++				if (DWC_CIRCLEQ_EMPTY_ENTRY
++				    (channel, hc_list_entry)) {
++					hc_regs =
++					    dwc_otg_hcd->core_if->
++					    host_if->hc_regs[i];
++					hcchar.d32 =
++					    DWC_READ_REG32(&hc_regs->hcchar);
++					if (hcchar.b.chen) {
++						hcchar.b.chen = 0;
++						hcchar.b.chdis = 1;
++						hcchar.b.epdir = 0;
++						DWC_WRITE_REG32
++						    (&hc_regs->hcchar,
++						     hcchar.d32);
++					}
++				}
++			}
++		}
++
++		for (i = 0; i < num_channels; i++) {
++			channel = dwc_otg_hcd->hc_ptr_array[i];
++			if (DWC_CIRCLEQ_EMPTY_ENTRY(channel, hc_list_entry)) {
++				hc_regs =
++				    dwc_otg_hcd->core_if->host_if->hc_regs[i];
++				hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
++				if (hcchar.b.chen) {
++					/* Halt the channel. */
++					hcchar.b.chdis = 1;
++					DWC_WRITE_REG32(&hc_regs->hcchar,
++							hcchar.d32);
++				}
++
++				dwc_otg_hc_cleanup(dwc_otg_hcd->core_if,
++						   channel);
++				DWC_CIRCLEQ_INSERT_TAIL
++				    (&dwc_otg_hcd->free_hc_list, channel,
++				     hc_list_entry);
++				/*
++				 * Added for Descriptor DMA to prevent channel double cleanup
++				 * in release_channel_ddma(), which is called from ep_disable
++				 * when the device disconnects.
++				 */
++				channel->qh = NULL;
++			}
++		}
++		if(fiq_fsm_enable) {
++			for(i=0; i < 128; i++) {
++				dwc_otg_hcd->hub_port[i] = 0;
++			}
++		}
++
++	}
++
++	if(fiq_enable)
++		local_fiq_enable();
++
++	if (dwc_otg_hcd->fops->disconnect) {
++		dwc_otg_hcd->fops->disconnect(dwc_otg_hcd);
++	}
++
++	return 1;
++}
++
++/**
++ * HCD Callback function for stopping the HCD.
++ *
++ * @param p void pointer to the <code>struct usb_hcd</code>
++ */
++static int32_t dwc_otg_hcd_stop_cb(void *p)
++{
++	dwc_otg_hcd_t *dwc_otg_hcd = p;
++
++	DWC_DEBUGPL(DBG_HCDV, "%s(%p)\n", __func__, p);
++	dwc_otg_hcd_stop(dwc_otg_hcd);
++	return 1;
++}
++
++#ifdef CONFIG_USB_DWC_OTG_LPM
++/**
++ * HCD Callback function for sleep of HCD.
++ *
++ * @param p void pointer to the <code>struct usb_hcd</code>
++ */
++static int dwc_otg_hcd_sleep_cb(void *p)
++{
++	dwc_otg_hcd_t *hcd = p;
++
++	dwc_otg_hcd_free_hc_from_lpm(hcd);
++
++	return 0;
++}
++#endif
++
++
++/**
++ * HCD Callback function for Remote Wakeup.
++ *
++ * @param p void pointer to the <code>struct usb_hcd</code>
++ */
++static int dwc_otg_hcd_rem_wakeup_cb(void *p)
++{
++	dwc_otg_hcd_t *hcd = p;
++
++	if (hcd->core_if->lx_state == DWC_OTG_L2) {
++		hcd->flags.b.port_suspend_change = 1;
++	}
++#ifdef CONFIG_USB_DWC_OTG_LPM
++	else {
++		hcd->flags.b.port_l1_change = 1;
++	}
++#endif
++	return 0;
++}
++
++/**
++ * Halts the DWC_otg host mode operations in a clean manner. USB transfers are
++ * stopped.
++ */
++void dwc_otg_hcd_stop(dwc_otg_hcd_t * hcd)
++{
++	hprt0_data_t hprt0 = {.d32 = 0 };
++
++	DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD STOP\n");
++
++	/*
++	 * The root hub should be disconnected before this function is called.
++	 * The disconnect will clear the QTD lists (via ..._hcd_urb_dequeue)
++	 * and the QH lists (via ..._hcd_endpoint_disable).
++	 */
++
++	/* Turn off all host-specific interrupts. */
++	dwc_otg_disable_host_interrupts(hcd->core_if);
++
++	/* Turn off the vbus power */
++	DWC_PRINTF("PortPower off\n");
++	hprt0.b.prtpwr = 0;
++	DWC_WRITE_REG32(hcd->core_if->host_if->hprt0, hprt0.d32);
++	dwc_mdelay(1);
++}
++
++int dwc_otg_hcd_urb_enqueue(dwc_otg_hcd_t * hcd,
++			    dwc_otg_hcd_urb_t * dwc_otg_urb, void **ep_handle,
++			    int atomic_alloc)
++{
++	int retval = 0;
++	uint8_t needs_scheduling = 0;
++	dwc_otg_transaction_type_e tr_type;
++	dwc_otg_qtd_t *qtd;
++	gintmsk_data_t intr_mask = {.d32 = 0 };
++	hprt0_data_t hprt0 = { .d32 = 0 };
++
++#ifdef DEBUG /* integrity checks (Broadcom) */
++	if (NULL == hcd->core_if) {
++		DWC_ERROR("**** DWC OTG HCD URB Enqueue - HCD has NULL core_if\n");
++		/* No longer connected. */
++		return -DWC_E_INVALID;
++	}
++#endif
++	if (!hcd->flags.b.port_connect_status) {
++		/* No longer connected. */
++		DWC_ERROR("Not connected\n");
++		return -DWC_E_NO_DEVICE;
++	}
++
++	/* Some core configurations cannot support LS traffic on a FS root port */
++	if ((hcd->fops->speed(hcd, dwc_otg_urb->priv) == USB_SPEED_LOW) &&
++		(hcd->core_if->hwcfg2.b.fs_phy_type == 1) &&
++		(hcd->core_if->hwcfg2.b.hs_phy_type == 1)) {
++			hprt0.d32 = DWC_READ_REG32(hcd->core_if->host_if->hprt0);
++			if (hprt0.b.prtspd == DWC_HPRT0_PRTSPD_FULL_SPEED) {
++				return -DWC_E_NO_DEVICE;
++			}
++	}
++
++	qtd = dwc_otg_hcd_qtd_create(dwc_otg_urb, atomic_alloc);
++	if (qtd == NULL) {
++		DWC_ERROR("DWC OTG HCD URB Enqueue failed creating QTD\n");
++		return -DWC_E_NO_MEMORY;
++	}
++#ifdef DEBUG /* integrity checks (Broadcom) */
++	if (qtd->urb == NULL) {
++		DWC_ERROR("**** DWC OTG HCD URB Enqueue created QTD with no URBs\n");
++		return -DWC_E_NO_MEMORY;
++	}
++	if (qtd->urb->priv == NULL) {
++		DWC_ERROR("**** DWC OTG HCD URB Enqueue created QTD URB with no URB handle\n");
++		return -DWC_E_NO_MEMORY;
++	}
++#endif
++	intr_mask.d32 = DWC_READ_REG32(&hcd->core_if->core_global_regs->gintmsk);
++	if(!intr_mask.b.sofintr || fiq_enable) needs_scheduling = 1;
++	if((((dwc_otg_qh_t *)ep_handle)->ep_type == UE_BULK) && !(qtd->urb->flags & URB_GIVEBACK_ASAP))
++		/* Do not schedule SG transactions until qtd has URB_GIVEBACK_ASAP set */
++		needs_scheduling = 0;
++
++	retval = dwc_otg_hcd_qtd_add(qtd, hcd, (dwc_otg_qh_t **) ep_handle, atomic_alloc);
++            // creates a new queue in ep_handle if it doesn't exist already
++	if (retval < 0) {
++		DWC_ERROR("DWC OTG HCD URB Enqueue failed adding QTD. "
++			  "Error status %d\n", retval);
++		dwc_otg_hcd_qtd_free(qtd);
++		return retval;
++	}
++
++	if(needs_scheduling) {
++		tr_type = dwc_otg_hcd_select_transactions(hcd);
++		if (tr_type != DWC_OTG_TRANSACTION_NONE) {
++			dwc_otg_hcd_queue_transactions(hcd, tr_type);
++		}
++	}
++	return retval;
++}
++
++int dwc_otg_hcd_urb_dequeue(dwc_otg_hcd_t * hcd,
++			    dwc_otg_hcd_urb_t * dwc_otg_urb)
++{
++	dwc_otg_qh_t *qh;
++	dwc_otg_qtd_t *urb_qtd;
++	BUG_ON(!hcd);
++	BUG_ON(!dwc_otg_urb);
++
++#ifdef DEBUG /* integrity checks (Broadcom) */
++
++	if (hcd == NULL) {
++		DWC_ERROR("**** DWC OTG HCD URB Dequeue has NULL HCD\n");
++		return -DWC_E_INVALID;
++	}
++	if (dwc_otg_urb == NULL) {
++		DWC_ERROR("**** DWC OTG HCD URB Dequeue has NULL URB\n");
++		return -DWC_E_INVALID;
++	}
++	if (dwc_otg_urb->qtd == NULL) {
++		DWC_ERROR("**** DWC OTG HCD URB Dequeue with NULL QTD\n");
++		return -DWC_E_INVALID;
++	}
++	urb_qtd = dwc_otg_urb->qtd;
++	BUG_ON(!urb_qtd);
++	if (urb_qtd->qh == NULL) {
++		DWC_ERROR("**** DWC OTG HCD URB Dequeue with QTD with NULL Q handler\n");
++		return -DWC_E_INVALID;
++	}
++#else
++	urb_qtd = dwc_otg_urb->qtd;
++	BUG_ON(!urb_qtd);
++#endif
++	qh = urb_qtd->qh;
++	BUG_ON(!qh);
++	if (CHK_DEBUG_LEVEL(DBG_HCDV | DBG_HCD_URB)) {
++		if (urb_qtd->in_process) {
++			dump_channel_info(hcd, qh);
++		}
++	}
++#ifdef DEBUG /* integrity checks (Broadcom) */
++	if (hcd->core_if == NULL) {
++		DWC_ERROR("**** DWC OTG HCD URB Dequeue HCD has NULL core_if\n");
++		return -DWC_E_INVALID;
++	}
++#endif
++	if (urb_qtd->in_process && qh->channel) {
++		/* The QTD is in process (it has been assigned to a channel). */
++		if (hcd->flags.b.port_connect_status) {
++			int n = qh->channel->hc_num;
++			/*
++			 * If still connected (i.e. in host mode), halt the
++			 * channel so it can be used for other transfers. If
++			 * no longer connected, the host registers can't be
++			 * written to halt the channel since the core is in
++			 * device mode.
++			 */
++			/* In FIQ FSM mode, we need to shut down carefully.
++			 * The FIQ may attempt to restart a disabled channel */
++			if (fiq_fsm_enable && (hcd->fiq_state->channel[n].fsm != FIQ_PASSTHROUGH)) {
++				qh->channel->halt_status = DWC_OTG_HC_XFER_URB_DEQUEUE;
++				qh->channel->halt_pending = 1;
++				hcd->fiq_state->channel[n].fsm = FIQ_DEQUEUE_ISSUED;
++			} else {
++				dwc_otg_hc_halt(hcd->core_if, qh->channel,
++						DWC_OTG_HC_XFER_URB_DEQUEUE);
++			}
++		}
++	}
++
++	/*
++	 * Free the QTD and clean up the associated QH. Leave the QH in the
++	 * schedule if it has any remaining QTDs.
++	 */
++
++	DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD URB Dequeue - "
++                    "delete %sQueue handler\n",
++                    hcd->core_if->dma_desc_enable?"DMA ":"");
++	if (!hcd->core_if->dma_desc_enable) {
++		uint8_t b = urb_qtd->in_process;
++		dwc_otg_hcd_qtd_remove_and_free(hcd, urb_qtd, qh);
++		if (b) {
++			dwc_otg_hcd_qh_deactivate(hcd, qh, 0);
++			qh->channel = NULL;
++		} else if (DWC_CIRCLEQ_EMPTY(&qh->qtd_list)) {
++			dwc_otg_hcd_qh_remove(hcd, qh);
++		}
++	} else {
++		dwc_otg_hcd_qtd_remove_and_free(hcd, urb_qtd, qh);
++	}
++	return 0;
++}
++
++int dwc_otg_hcd_endpoint_disable(dwc_otg_hcd_t * hcd, void *ep_handle,
++				 int retry)
++{
++	dwc_otg_qh_t *qh = (dwc_otg_qh_t *) ep_handle;
++	int retval = 0;
++	dwc_irqflags_t flags;
++
++	if (retry < 0) {
++		retval = -DWC_E_INVALID;
++		goto done;
++	}
++
++	if (!qh) {
++		retval = -DWC_E_INVALID;
++		goto done;
++	}
++
++	DWC_SPINLOCK_IRQSAVE(hcd->lock, &flags);
++
++	while (!DWC_CIRCLEQ_EMPTY(&qh->qtd_list) && retry) {
++		DWC_SPINUNLOCK_IRQRESTORE(hcd->lock, flags);
++		retry--;
++		dwc_msleep(5);
++		DWC_SPINLOCK_IRQSAVE(hcd->lock, &flags);
++	}
++
++	dwc_otg_hcd_qh_remove(hcd, qh);
++
++	DWC_SPINUNLOCK_IRQRESTORE(hcd->lock, flags);
++	/*
++	 * Split dwc_otg_hcd_qh_remove_and_free() into qh_remove
++	 * and qh_free to prevent stack dump on DWC_DMA_FREE() with
++	 * irq_disabled (spinlock_irqsave) in dwc_otg_hcd_desc_list_free()
++	 * and dwc_otg_hcd_frame_list_alloc().
++	 */
++	dwc_otg_hcd_qh_free(hcd, qh);
++
++done:
++	return retval;
++}
++
++#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,30)
++int dwc_otg_hcd_endpoint_reset(dwc_otg_hcd_t * hcd, void *ep_handle)
++{
++	int retval = 0;
++	dwc_otg_qh_t *qh = (dwc_otg_qh_t *) ep_handle;
++	if (!qh)
++		return -DWC_E_INVALID;
++
++	qh->data_toggle = DWC_OTG_HC_PID_DATA0;
++	return retval;
++}
++#endif
++
++/**
++ * HCD Callback structure for handling mode switching.
++ */
++static dwc_otg_cil_callbacks_t hcd_cil_callbacks = {
++	.start = dwc_otg_hcd_start_cb,
++	.stop = dwc_otg_hcd_stop_cb,
++	.disconnect = dwc_otg_hcd_disconnect_cb,
++	.session_start = dwc_otg_hcd_session_start_cb,
++	.resume_wakeup = dwc_otg_hcd_rem_wakeup_cb,
++#ifdef CONFIG_USB_DWC_OTG_LPM
++	.sleep = dwc_otg_hcd_sleep_cb,
++#endif
++	.p = 0,
++};
++
++/**
++ * Reset tasklet function
++ */
++static void reset_tasklet_func(void *data)
++{
++	dwc_otg_hcd_t *dwc_otg_hcd = (dwc_otg_hcd_t *) data;
++	dwc_otg_core_if_t *core_if = dwc_otg_hcd->core_if;
++	hprt0_data_t hprt0;
++
++	DWC_DEBUGPL(DBG_HCDV, "USB RESET tasklet called\n");
++
++	hprt0.d32 = dwc_otg_read_hprt0(core_if);
++	hprt0.b.prtrst = 1;
++	DWC_WRITE_REG32(core_if->host_if->hprt0, hprt0.d32);
++	dwc_mdelay(60);
++
++	hprt0.b.prtrst = 0;
++	DWC_WRITE_REG32(core_if->host_if->hprt0, hprt0.d32);
++	dwc_otg_hcd->flags.b.port_reset_change = 1;
++}
++
++static void completion_tasklet_func(void *ptr)
++{
++	dwc_otg_hcd_t *hcd = (dwc_otg_hcd_t *) ptr;
++	struct urb *urb;
++	urb_tq_entry_t *item;
++	dwc_irqflags_t flags;
++
++	/* This could just be spin_lock_irq */
++	DWC_SPINLOCK_IRQSAVE(hcd->lock, &flags);
++	while (!DWC_TAILQ_EMPTY(&hcd->completed_urb_list)) {
++		item = DWC_TAILQ_FIRST(&hcd->completed_urb_list);
++		urb = item->urb;
++		DWC_TAILQ_REMOVE(&hcd->completed_urb_list, item,
++				urb_tq_entries);
++		DWC_SPINUNLOCK_IRQRESTORE(hcd->lock, flags);
++		DWC_FREE(item);
++
++		usb_hcd_giveback_urb(hcd->priv, urb, urb->status);
++
++
++		DWC_SPINLOCK_IRQSAVE(hcd->lock, &flags);
++	}
++	DWC_SPINUNLOCK_IRQRESTORE(hcd->lock, flags);
++	return;
++}
++
++static void qh_list_free(dwc_otg_hcd_t * hcd, dwc_list_link_t * qh_list)
++{
++	dwc_list_link_t *item;
++	dwc_otg_qh_t *qh;
++	dwc_irqflags_t flags;
++
++	if (!qh_list->next) {
++		/* The list hasn't been initialized yet. */
++		return;
++	}
++	/*
++	 * Hold the spinlock here. This is not needed if the function
++	 * below is being called from the ISR.
++	 */
++	DWC_SPINLOCK_IRQSAVE(hcd->lock, &flags);
++	/* Ensure there are no QTDs or URBs left. */
++	kill_urbs_in_qh_list(hcd, qh_list);
++	DWC_SPINUNLOCK_IRQRESTORE(hcd->lock, flags);
++
++	DWC_LIST_FOREACH(item, qh_list) {
++		qh = DWC_LIST_ENTRY(item, dwc_otg_qh_t, qh_list_entry);
++		dwc_otg_hcd_qh_remove_and_free(hcd, qh);
++	}
++}
++
++/**
++ * Exit from hibernation (by powering the host back up) if the host did not
++ * detect SRP from a connected SRP-capable device within the SRP time.
++ */
++void dwc_otg_hcd_power_up(void *ptr)
++{
++	gpwrdn_data_t gpwrdn = {.d32 = 0 };
++	dwc_otg_core_if_t *core_if = (dwc_otg_core_if_t *) ptr;
++
++	DWC_PRINTF("%s called\n", __FUNCTION__);
++
++	if (!core_if->hibernation_suspend) {
++		DWC_PRINTF("Already exited from Hibernation\n");
++		return;
++	}
++
++	/* Switch on the voltage to the core */
++	gpwrdn.b.pwrdnswtch = 1;
++	DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
++	dwc_udelay(10);
++
++	/* Reset the core */
++	gpwrdn.d32 = 0;
++	gpwrdn.b.pwrdnrstn = 1;
++	DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
++	dwc_udelay(10);
++
++	/* Disable power clamps */
++	gpwrdn.d32 = 0;
++	gpwrdn.b.pwrdnclmp = 1;
++	DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
++
++	/* Remove the reset signal from the core */
++	gpwrdn.d32 = 0;
++	gpwrdn.b.pwrdnrstn = 1;
++	DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, 0, gpwrdn.d32);
++	dwc_udelay(10);
++
++	/* Disable PMU interrupt */
++	gpwrdn.d32 = 0;
++	gpwrdn.b.pmuintsel = 1;
++	DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
++
++	core_if->hibernation_suspend = 0;
++
++	/* Disable PMU */
++	gpwrdn.d32 = 0;
++	gpwrdn.b.pmuactv = 1;
++	DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
++	dwc_udelay(10);
++
++	/* Enable VBUS */
++	gpwrdn.d32 = 0;
++	gpwrdn.b.dis_vbus = 1;
++	DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
++
++	core_if->op_state = A_HOST;
++	dwc_otg_core_init(core_if);
++	dwc_otg_enable_global_interrupts(core_if);
++	cil_hcd_start(core_if);
++}
++
++void dwc_otg_cleanup_fiq_channel(dwc_otg_hcd_t *hcd, uint32_t num)
++{
++	struct fiq_channel_state *st = &hcd->fiq_state->channel[num];
++	struct fiq_dma_blob *blob = hcd->fiq_dmab;
++	int i;
++
++	st->fsm = FIQ_PASSTHROUGH;
++	st->hcchar_copy.d32 = 0;
++	st->hcsplt_copy.d32 = 0;
++	st->hcint_copy.d32 = 0;
++	st->hcintmsk_copy.d32 = 0;
++	st->hctsiz_copy.d32 = 0;
++	st->hcdma_copy.d32 = 0;
++	st->nr_errors = 0;
++	st->hub_addr = 0;
++	st->port_addr = 0;
++	st->expected_uframe = 0;
++	st->nrpackets = 0;
++	st->dma_info.index = 0;
++	for (i = 0; i < 6; i++)
++		st->dma_info.slot_len[i] = 255;
++	st->hs_isoc_info.index = 0;
++	st->hs_isoc_info.iso_desc = NULL;
++	st->hs_isoc_info.nrframes = 0;
++
++	DWC_MEMSET(&blob->channel[num].index[0], 0x6b, 1128);
++}
++
++/**
++ * Frees secondary storage associated with the dwc_otg_hcd structure contained
++ * in the struct usb_hcd field.
++ */
++static void dwc_otg_hcd_free(dwc_otg_hcd_t * dwc_otg_hcd)
++{
++	int i;
++
++	DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD FREE\n");
++
++	del_timers(dwc_otg_hcd);
++
++	/* Free memory for QH/QTD lists */
++	qh_list_free(dwc_otg_hcd, &dwc_otg_hcd->non_periodic_sched_inactive);
++	qh_list_free(dwc_otg_hcd, &dwc_otg_hcd->non_periodic_sched_active);
++	qh_list_free(dwc_otg_hcd, &dwc_otg_hcd->periodic_sched_inactive);
++	qh_list_free(dwc_otg_hcd, &dwc_otg_hcd->periodic_sched_ready);
++	qh_list_free(dwc_otg_hcd, &dwc_otg_hcd->periodic_sched_assigned);
++	qh_list_free(dwc_otg_hcd, &dwc_otg_hcd->periodic_sched_queued);
++
++	/* Free memory for the host channels. */
++	for (i = 0; i < MAX_EPS_CHANNELS; i++) {
++		dwc_hc_t *hc = dwc_otg_hcd->hc_ptr_array[i];
++
++#ifdef DEBUG
++		if (dwc_otg_hcd->core_if->hc_xfer_timer[i]) {
++			DWC_TIMER_FREE(dwc_otg_hcd->core_if->hc_xfer_timer[i]);
++		}
++#endif
++		if (hc != NULL) {
++			DWC_DEBUGPL(DBG_HCDV, "HCD Free channel #%i, hc=%p\n",
++				    i, hc);
++			DWC_FREE(hc);
++		}
++	}
++
++	if (dwc_otg_hcd->core_if->dma_enable) {
++		if (dwc_otg_hcd->status_buf_dma) {
++			DWC_DMA_FREE(DWC_OTG_HCD_STATUS_BUF_SIZE,
++				     dwc_otg_hcd->status_buf,
++				     dwc_otg_hcd->status_buf_dma);
++		}
++	} else if (dwc_otg_hcd->status_buf != NULL) {
++		DWC_FREE(dwc_otg_hcd->status_buf);
++	}
++	DWC_SPINLOCK_FREE(dwc_otg_hcd->channel_lock);
++	DWC_SPINLOCK_FREE(dwc_otg_hcd->lock);
++	/* Set core_if's lock pointer to NULL */
++	dwc_otg_hcd->core_if->lock = NULL;
++
++	DWC_TIMER_FREE(dwc_otg_hcd->conn_timer);
++	DWC_TASK_FREE(dwc_otg_hcd->reset_tasklet);
++	DWC_TASK_FREE(dwc_otg_hcd->completion_tasklet);
++	DWC_FREE(dwc_otg_hcd->fiq_state);
++
++#ifdef DWC_DEV_SRPCAP
++	if (dwc_otg_hcd->core_if->power_down == 2 &&
++	    dwc_otg_hcd->core_if->pwron_timer) {
++		DWC_TIMER_FREE(dwc_otg_hcd->core_if->pwron_timer);
++	}
++#endif
++	DWC_FREE(dwc_otg_hcd);
++}
++
++int init_hcd_usecs(dwc_otg_hcd_t *_hcd);
++
++int dwc_otg_hcd_init(dwc_otg_hcd_t * hcd, dwc_otg_core_if_t * core_if)
++{
++	int retval = 0;
++	int num_channels;
++	int i;
++	dwc_hc_t *channel;
++
++#if (defined(DWC_LINUX) && defined(CONFIG_DEBUG_SPINLOCK))
++	DWC_SPINLOCK_ALLOC_LINUX_DEBUG(hcd->lock);
++	DWC_SPINLOCK_ALLOC_LINUX_DEBUG(hcd->channel_lock);
++#else
++	hcd->lock = DWC_SPINLOCK_ALLOC();
++	hcd->channel_lock = DWC_SPINLOCK_ALLOC();
++#endif
++        DWC_DEBUGPL(DBG_HCDV, "init of HCD %p given core_if %p\n",
++                    hcd, core_if);
++	if (!hcd->lock) {
++		DWC_ERROR("Could not allocate lock for pcd");
++		DWC_FREE(hcd);
++		retval = -DWC_E_NO_MEMORY;
++		goto out;
++	}
++	hcd->core_if = core_if;
++
++	/* Register the HCD CIL Callbacks */
++	dwc_otg_cil_register_hcd_callbacks(hcd->core_if,
++					   &hcd_cil_callbacks, hcd);
++
++	/* Initialize the non-periodic schedule. */
++	DWC_LIST_INIT(&hcd->non_periodic_sched_inactive);
++	DWC_LIST_INIT(&hcd->non_periodic_sched_active);
++
++	/* Initialize the periodic schedule. */
++	DWC_LIST_INIT(&hcd->periodic_sched_inactive);
++	DWC_LIST_INIT(&hcd->periodic_sched_ready);
++	DWC_LIST_INIT(&hcd->periodic_sched_assigned);
++	DWC_LIST_INIT(&hcd->periodic_sched_queued);
++	DWC_TAILQ_INIT(&hcd->completed_urb_list);
++	/*
++	 * Create a host channel descriptor for each host channel implemented
++	 * in the controller. Initialize the channel descriptor array.
++	 */
++	DWC_CIRCLEQ_INIT(&hcd->free_hc_list);
++	num_channels = hcd->core_if->core_params->host_channels;
++	DWC_MEMSET(hcd->hc_ptr_array, 0, sizeof(hcd->hc_ptr_array));
++	for (i = 0; i < num_channels; i++) {
++		channel = DWC_ALLOC(sizeof(dwc_hc_t));
++		if (channel == NULL) {
++			retval = -DWC_E_NO_MEMORY;
++			DWC_ERROR("%s: host channel allocation failed\n",
++				  __func__);
++			dwc_otg_hcd_free(hcd);
++			goto out;
++		}
++		channel->hc_num = i;
++		hcd->hc_ptr_array[i] = channel;
++#ifdef DEBUG
++		hcd->core_if->hc_xfer_timer[i] =
++		    DWC_TIMER_ALLOC("hc timer", hc_xfer_timeout,
++				    &hcd->core_if->hc_xfer_info[i]);
++#endif
++		DWC_DEBUGPL(DBG_HCDV, "HCD Added channel #%d, hc=%p\n", i,
++			    channel);
++	}
++
++	if (fiq_enable) {
++		hcd->fiq_state = DWC_ALLOC(sizeof(struct fiq_state) + (sizeof(struct fiq_channel_state) * num_channels));
++		if (!hcd->fiq_state) {
++			retval = -DWC_E_NO_MEMORY;
++			DWC_ERROR("%s: cannot allocate fiq_state structure\n", __func__);
++			dwc_otg_hcd_free(hcd);
++			goto out;
++		}
++		DWC_MEMSET(hcd->fiq_state, 0, (sizeof(struct fiq_state) + (sizeof(struct fiq_channel_state) * num_channels)));
++
++		for (i = 0; i < num_channels; i++) {
++			hcd->fiq_state->channel[i].fsm = FIQ_PASSTHROUGH;
++		}
++		hcd->fiq_state->dummy_send = DWC_ALLOC_ATOMIC(16);
++
++		hcd->fiq_stack = DWC_ALLOC(sizeof(struct fiq_stack));
++		if (!hcd->fiq_stack) {
++			retval = -DWC_E_NO_MEMORY;
++			DWC_ERROR("%s: cannot allocate fiq_stack structure\n", __func__);
++			dwc_otg_hcd_free(hcd);
++			goto out;
++		}
++		hcd->fiq_stack->magic1 = 0xDEADBEEF;
++		hcd->fiq_stack->magic2 = 0xD00DFEED;
++		hcd->fiq_state->gintmsk_saved.d32 = ~0;
++		hcd->fiq_state->haintmsk_saved.b2.chint = ~0;
++
++		/* This bit is terrible and uses no API, but necessary. The FIQ has no concept of DMA pools
++		 * (and if it did, would be a lot slower). This allocates a chunk of memory (~9kiB for 8 host channels)
++		 * for use as transaction bounce buffers in a 2-D array. Our access into this chunk is done by some
++		 * moderately readable array casts.
++		 */
++		hcd->fiq_dmab = DWC_DMA_ALLOC((sizeof(struct fiq_dma_channel) * num_channels), &hcd->fiq_state->dma_base);
++		DWC_WARN("FIQ DMA bounce buffers: virt = 0x%08x dma = 0x%08x len=%d",
++				(unsigned int)hcd->fiq_dmab, (unsigned int)hcd->fiq_state->dma_base,
++				sizeof(struct fiq_dma_channel) * num_channels);
++
++		DWC_MEMSET(hcd->fiq_dmab, 0x6b, 9024);
++
++		/* pointer for debug in fiq_print */
++		hcd->fiq_state->fiq_dmab = hcd->fiq_dmab;
++		if (fiq_fsm_enable) {
++			int i;
++			for (i=0; i < hcd->core_if->core_params->host_channels; i++) {
++				dwc_otg_cleanup_fiq_channel(hcd, i);
++			}
++			DWC_PRINTF("FIQ FSM acceleration enabled for :\n%s%s%s%s",
++				(fiq_fsm_mask & 0x1) ? "Non-periodic Split Transactions\n" : "",
++				(fiq_fsm_mask & 0x2) ? "Periodic Split Transactions\n" : "",
++				(fiq_fsm_mask & 0x4) ? "High-Speed Isochronous Endpoints\n" : "",
++				(fiq_fsm_mask & 0x8) ? "Interrupt/Control Split Transaction hack enabled\n" : "");
++		}
++	}
++
++	/* Initialize the Connection timeout timer. */
++	hcd->conn_timer = DWC_TIMER_ALLOC("Connection timer",
++					  dwc_otg_hcd_connect_timeout, 0);
++
++	printk(KERN_DEBUG "dwc_otg: Microframe scheduler %s\n", microframe_schedule ? "enabled":"disabled");
++	if (microframe_schedule)
++		init_hcd_usecs(hcd);
++
++	/* Initialize reset tasklet. */
++	hcd->reset_tasklet = DWC_TASK_ALLOC("reset_tasklet", reset_tasklet_func, hcd);
++
++	hcd->completion_tasklet = DWC_TASK_ALLOC("completion_tasklet",
++						completion_tasklet_func, hcd);
++#ifdef DWC_DEV_SRPCAP
++	if (hcd->core_if->power_down == 2) {
++		/* Initialize the power-on timer for host power up in case of hibernation */
++		hcd->core_if->pwron_timer = DWC_TIMER_ALLOC("PWRON TIMER",
++									dwc_otg_hcd_power_up, core_if);
++	}
++#endif
++
++	/*
++	 * Allocate space for storing data on status transactions. Normally no
++	 * data is sent, but this space acts as a bit bucket. This must be
++	 * done after usb_add_hcd since that function allocates the DMA buffer
++	 * pool.
++	 */
++	if (hcd->core_if->dma_enable) {
++		hcd->status_buf =
++		    DWC_DMA_ALLOC(DWC_OTG_HCD_STATUS_BUF_SIZE,
++				  &hcd->status_buf_dma);
++	} else {
++		hcd->status_buf = DWC_ALLOC(DWC_OTG_HCD_STATUS_BUF_SIZE);
++	}
++	if (!hcd->status_buf) {
++		retval = -DWC_E_NO_MEMORY;
++		DWC_ERROR("%s: status_buf allocation failed\n", __func__);
++		dwc_otg_hcd_free(hcd);
++		goto out;
++	}
++
++	hcd->otg_port = 1;
++	hcd->frame_list = NULL;
++	hcd->frame_list_dma = 0;
++	hcd->periodic_qh_count = 0;
++
++	DWC_MEMSET(hcd->hub_port, 0, sizeof(hcd->hub_port));
++#ifdef FIQ_DEBUG
++	DWC_MEMSET(hcd->hub_port_alloc, -1, sizeof(hcd->hub_port_alloc));
++#endif
++
++out:
++	return retval;
++}
++
++void dwc_otg_hcd_remove(dwc_otg_hcd_t * hcd)
++{
++	/* Turn off all host-specific interrupts. */
++	dwc_otg_disable_host_interrupts(hcd->core_if);
++
++	dwc_otg_hcd_free(hcd);
++}
++
++/**
++ * Initializes dynamic portions of the DWC_otg HCD state.
++ */
++static void dwc_otg_hcd_reinit(dwc_otg_hcd_t * hcd)
++{
++	int num_channels;
++	int i;
++	dwc_hc_t *channel;
++	dwc_hc_t *channel_tmp;
++
++	hcd->flags.d32 = 0;
++
++	hcd->non_periodic_qh_ptr = &hcd->non_periodic_sched_active;
++	if (!microframe_schedule) {
++		hcd->non_periodic_channels = 0;
++		hcd->periodic_channels = 0;
++	} else {
++		hcd->available_host_channels = hcd->core_if->core_params->host_channels;
++	}
++	/*
++	 * Put all channels in the free channel list and clean up channel
++	 * states.
++	 */
++	DWC_CIRCLEQ_FOREACH_SAFE(channel, channel_tmp,
++				 &hcd->free_hc_list, hc_list_entry) {
++		DWC_CIRCLEQ_REMOVE(&hcd->free_hc_list, channel, hc_list_entry);
++	}
++
++	num_channels = hcd->core_if->core_params->host_channels;
++	for (i = 0; i < num_channels; i++) {
++		channel = hcd->hc_ptr_array[i];
++		DWC_CIRCLEQ_INSERT_TAIL(&hcd->free_hc_list, channel,
++					hc_list_entry);
++		dwc_otg_hc_cleanup(hcd->core_if, channel);
++	}
++
++	/* Initialize the DWC core for host mode operation. */
++	dwc_otg_core_host_init(hcd->core_if);
++
++	/* Set core_if's lock pointer to the hcd->lock */
++	hcd->core_if->lock = hcd->lock;
++}
++
++/**
++ * Assigns transactions from a QTD to a free host channel and initializes the
++ * host channel to perform the transactions. The host channel is removed from
++ * the free list.
++ *
++ * @param hcd The HCD state structure.
++ * @param qh Transactions from the first QTD for this QH are selected and
++ * assigned to a free host channel.
++ */
++static void assign_and_init_hc(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh)
++{
++	dwc_hc_t *hc;
++	dwc_otg_qtd_t *qtd;
++	dwc_otg_hcd_urb_t *urb;
++	void* ptr = NULL;
++	uint32_t intr_enable;
++	unsigned long flags;
++	gintmsk_data_t gintmsk = { .d32 = 0, };
++
++	qtd = DWC_CIRCLEQ_FIRST(&qh->qtd_list);
++
++	urb = qtd->urb;
++
++	DWC_DEBUGPL(DBG_HCDV, "%s(%p,%p) - urb %x, actual_length %d\n", __func__, hcd, qh, (unsigned int)urb, urb->actual_length);
++
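++	/* Presumably a sanity clamp: for OUT URBs an out-of-range actual_length
++	 * is forced to the full length so the xfer_len computed below cannot
++	 * go negative. */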
++	if (((urb->actual_length < 0) || (urb->actual_length > urb->length)) && !dwc_otg_hcd_is_pipe_in(&urb->pipe_info))
++		urb->actual_length = urb->length;
++
++
++	hc = DWC_CIRCLEQ_FIRST(&hcd->free_hc_list);
++
++	/* Remove the host channel from the free list. */
++	DWC_CIRCLEQ_REMOVE_INIT(&hcd->free_hc_list, hc, hc_list_entry);
++
++	qh->channel = hc;
++
++	qtd->in_process = 1;
++
++	/*
++	 * Use usb_pipedevice to determine device address. This address is
++	 * 0 before the SET_ADDRESS command and the correct address afterward.
++	 */
++	hc->dev_addr = dwc_otg_hcd_get_dev_addr(&urb->pipe_info);
++	hc->ep_num = dwc_otg_hcd_get_ep_num(&urb->pipe_info);
++	hc->speed = qh->dev_speed;
++	hc->max_packet = dwc_max_packet(qh->maxp);
++
++	hc->xfer_started = 0;
++	hc->halt_status = DWC_OTG_HC_XFER_NO_HALT_STATUS;
++	hc->error_state = (qtd->error_count > 0);
++	hc->halt_on_queue = 0;
++	hc->halt_pending = 0;
++	hc->requests = 0;
++
++	/*
++	 * The following values may be modified in the transfer type section
++	 * below. The xfer_len value may be reduced when the transfer is
++	 * started to accommodate the max widths of the XferSize and PktCnt
++	 * fields in the HCTSIZn register.
++	 */
++
++	hc->ep_is_in = (dwc_otg_hcd_is_pipe_in(&urb->pipe_info) != 0);
++	if (hc->ep_is_in) {
++		hc->do_ping = 0;
++	} else {
++		hc->do_ping = qh->ping_state;
++	}
++
++	hc->data_pid_start = qh->data_toggle;
++	hc->multi_count = 1;
++
++	if (hcd->core_if->dma_enable) {
++		hc->xfer_buff = (uint8_t *) urb->dma + urb->actual_length;
++
++		/* For non-dword aligned case */
++		if (((unsigned long)hc->xfer_buff & 0x3)
++		    && !hcd->core_if->dma_desc_enable) {
++			ptr = (uint8_t *) urb->buf + urb->actual_length;
++		}
++	} else {
++		hc->xfer_buff = (uint8_t *) urb->buf + urb->actual_length;
++	}
++	hc->xfer_len = urb->length - urb->actual_length;
++	hc->xfer_count = 0;
++
++	/*
++	 * Set the split attributes
++	 */
++	hc->do_split = 0;
++	if (qh->do_split) {
++		uint32_t hub_addr, port_addr;
++		hc->do_split = 1;
++		hc->xact_pos = qtd->isoc_split_pos;
++		/* We don't need to do complete splits anymore */
++//		if(fiq_fsm_enable)
++		if (0)
++			hc->complete_split = qtd->complete_split = 0;
++		else
++			hc->complete_split = qtd->complete_split;
++
++		hcd->fops->hub_info(hcd, urb->priv, &hub_addr, &port_addr);
++		hc->hub_addr = (uint8_t) hub_addr;
++		hc->port_addr = (uint8_t) port_addr;
++	}
++
++	switch (dwc_otg_hcd_get_pipe_type(&urb->pipe_info)) {
++	case UE_CONTROL:
++		hc->ep_type = DWC_OTG_EP_TYPE_CONTROL;
++		switch (qtd->control_phase) {
++		case DWC_OTG_CONTROL_SETUP:
++			DWC_DEBUGPL(DBG_HCDV, "  Control setup transaction\n");
++			hc->do_ping = 0;
++			hc->ep_is_in = 0;
++			hc->data_pid_start = DWC_OTG_HC_PID_SETUP;
++			if (hcd->core_if->dma_enable) {
++				hc->xfer_buff = (uint8_t *) urb->setup_dma;
++			} else {
++				hc->xfer_buff = (uint8_t *) urb->setup_packet;
++			}
++			hc->xfer_len = 8;
++			ptr = NULL;
++			break;
++		case DWC_OTG_CONTROL_DATA:
++			DWC_DEBUGPL(DBG_HCDV, "  Control data transaction\n");
++			hc->data_pid_start = qtd->data_toggle;
++			break;
++		case DWC_OTG_CONTROL_STATUS:
++			/*
++			 * Direction is opposite of data direction or IN if no
++			 * data.
++			 */
++			DWC_DEBUGPL(DBG_HCDV, "  Control status transaction\n");
++			if (urb->length == 0) {
++				hc->ep_is_in = 1;
++			} else {
++				hc->ep_is_in =
++				    dwc_otg_hcd_is_pipe_out(&urb->pipe_info);
++			}
++			if (hc->ep_is_in) {
++				hc->do_ping = 0;
++			}
++
++			hc->data_pid_start = DWC_OTG_HC_PID_DATA1;
++
++			hc->xfer_len = 0;
++			if (hcd->core_if->dma_enable) {
++				hc->xfer_buff = (uint8_t *) hcd->status_buf_dma;
++			} else {
++				hc->xfer_buff = (uint8_t *) hcd->status_buf;
++			}
++			ptr = NULL;
++			break;
++		}
++		break;
++	case UE_BULK:
++		hc->ep_type = DWC_OTG_EP_TYPE_BULK;
++		break;
++	case UE_INTERRUPT:
++		hc->ep_type = DWC_OTG_EP_TYPE_INTR;
++		break;
++	case UE_ISOCHRONOUS:
++		{
++			struct dwc_otg_hcd_iso_packet_desc *frame_desc;
++
++			hc->ep_type = DWC_OTG_EP_TYPE_ISOC;
++
++			if (hcd->core_if->dma_desc_enable)
++				break;
++
++			frame_desc = &urb->iso_descs[qtd->isoc_frame_index];
++
++			frame_desc->status = 0;
++
++			if (hcd->core_if->dma_enable) {
++				hc->xfer_buff = (uint8_t *) urb->dma;
++			} else {
++				hc->xfer_buff = (uint8_t *) urb->buf;
++			}
++			hc->xfer_buff +=
++			    frame_desc->offset + qtd->isoc_split_offset;
++			hc->xfer_len =
++			    frame_desc->length - qtd->isoc_split_offset;
++
++			/* For non-dword aligned buffers */
++			if (((unsigned long)hc->xfer_buff & 0x3)
++			    && hcd->core_if->dma_enable) {
++				ptr =
++				    (uint8_t *) urb->buf + frame_desc->offset +
++				    qtd->isoc_split_offset;
++			} else
++				ptr = NULL;
++
++			if (hc->xact_pos == DWC_HCSPLIT_XACTPOS_ALL) {
++				if (hc->xfer_len <= 188) {
++					hc->xact_pos = DWC_HCSPLIT_XACTPOS_ALL;
++				} else {
++					hc->xact_pos =
++					    DWC_HCSPLIT_XACTPOS_BEGIN;
++				}
++			}
++		}
++		break;
++	}
++	/* non DWORD-aligned buffer case */
++	if (ptr) {
++		uint32_t buf_size;
++		if (hc->ep_type != DWC_OTG_EP_TYPE_ISOC) {
++			buf_size = hcd->core_if->core_params->max_transfer_size;
++		} else {
++			buf_size = 4096;
++		}
++		if (!qh->dw_align_buf) {
++			qh->dw_align_buf = DWC_DMA_ALLOC_ATOMIC(buf_size,
++							 &qh->dw_align_buf_dma);
++			if (!qh->dw_align_buf) {
++				DWC_ERROR
++				    ("%s: Failed to allocate memory to handle "
++				     "non-dword aligned buffer case\n",
++				     __func__);
++				return;
++			}
++		}
++		if (!hc->ep_is_in) {
++			dwc_memcpy(qh->dw_align_buf, ptr, hc->xfer_len);
++		}
++		hc->align_buff = qh->dw_align_buf_dma;
++	} else {
++		hc->align_buff = 0;
++	}
++
++	if (hc->ep_type == DWC_OTG_EP_TYPE_INTR ||
++	    hc->ep_type == DWC_OTG_EP_TYPE_ISOC) {
++		/*
++		 * This value may be modified when the transfer is started to
++		 * reflect the actual transfer length.
++		 */
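++		/* dwc_hb_mult() presumably extracts the high-bandwidth
++		 * "additional transactions per microframe" bits (11..12) of
++		 * wMaxPacketSize and adds one. */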
++		hc->multi_count = dwc_hb_mult(qh->maxp);
++	}
++
++	if (hcd->core_if->dma_desc_enable)
++		hc->desc_list_addr = qh->desc_list_dma;
++
++	dwc_otg_hc_init(hcd->core_if, hc);
++
++	local_irq_save(flags);
++
++	if (fiq_enable) {
++		local_fiq_disable();
++		fiq_fsm_spin_lock(&hcd->fiq_state->lock);
++	}
++
++	/* Enable the top level host channel interrupt. */
++	intr_enable = (1 << hc->hc_num);
++	DWC_MODIFY_REG32(&hcd->core_if->host_if->host_global_regs->haintmsk, 0, intr_enable);
++
++	/* Make sure host channel interrupts are enabled. */
++	gintmsk.b.hcintr = 1;
++	DWC_MODIFY_REG32(&hcd->core_if->core_global_regs->gintmsk, 0, gintmsk.d32);
++
++	if (fiq_enable) {
++		fiq_fsm_spin_unlock(&hcd->fiq_state->lock);
++		local_fiq_enable();
++	}
++	
++	local_irq_restore(flags);
++	hc->qh = qh;
++}
++
++
++/**
++ * fiq_fsm_transaction_suitable() - Test a QH for compatibility with the FIQ
++ * @qh:	pointer to the endpoint's queue head
++ *
++ * Transaction start/end control flow is grafted onto the existing dwc_otg
++ * mechanisms, to avoid spaghettifying the functions more than they already are.
++ * This function's eligibility check is altered by the fiq_fsm_mask debug parameter.
++ *
++ * Returns: 0 for unsuitable, 1 implies the FIQ can be enabled for this transaction.
++ */
++
++int fiq_fsm_transaction_suitable(dwc_otg_qh_t *qh)
++{
++	if (qh->do_split) {
++		switch (qh->ep_type) {
++		case UE_CONTROL:
++		case UE_BULK:
++			if (fiq_fsm_mask & (1 << 0))
++				return 1;
++			break;
++		case UE_INTERRUPT:
++		case UE_ISOCHRONOUS:
++			if (fiq_fsm_mask & (1 << 1))
++				return 1;
++			break;
++		default:
++			break;
++		}
++	} else if (qh->ep_type == UE_ISOCHRONOUS) {
++		if (fiq_fsm_mask & (1 << 2)) {
++			/* HS ISOCH support. We test for compatibility:
++			 * - DWORD aligned buffers
++			 * - Must be at least 2 transfers (otherwise pointless to use the FIQ)
++			 * If yes, then the fsm enqueue function will handle the state machine setup.
++			 */
++			dwc_otg_qtd_t *qtd = DWC_CIRCLEQ_FIRST(&qh->qtd_list);
++			dwc_otg_hcd_urb_t *urb = qtd->urb;
++			struct dwc_otg_hcd_iso_packet_desc (*iso_descs)[0] = &urb->iso_descs;
++			int nr_iso_frames = urb->packet_count;
++			int i;
++			uint32_t ptr;
++
++			if (nr_iso_frames < 2)
++				return 0;
++			for (i = 0; i < nr_iso_frames; i++) {
++				ptr = urb->dma + iso_descs[i]->offset;
++				if (ptr & 0x3) {
++					printk_ratelimited("%s: Non-Dword aligned isochronous frame offset."
++							" Cannot queue FIQ-accelerated transfer to device %d endpoint %d\n",
++							__FUNCTION__, qh->channel->dev_addr, qh->channel->ep_num);
++					return 0;
++				}
++			}
++			return 1;
++		}
++	}
++	return 0;
++}
++
++/**
++ * fiq_fsm_setup_periodic_dma() - Set up DMA bounce buffers
++ * @hcd: Pointer to the dwc_otg_hcd struct
++ * @qh: Pointer to the endpoint's queue head
++ *
++ * Periodic split transactions are transmitted modulo 188 bytes.
++ * This necessitates slicing data up into buckets for isochronous out
++ * and fixing up the DMA address for all IN transfers.
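++ * (188 bytes is one microframe's worth of full-speed bus bandwidth:
++ * 12 Mbit/s x 125 us / 8 bits per byte = 187.5, rounded up to 188.)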
++ *
++ * Returns 1 if the DMA bounce buffers have been used, 0 if the default
++ * HC buffer has been used.
++ */
++int fiq_fsm_setup_periodic_dma(dwc_otg_hcd_t *hcd, struct fiq_channel_state *st, dwc_otg_qh_t *qh)
++ {
++	int frame_length, i = 0;
++	uint8_t *ptr = NULL;
++	dwc_hc_t *hc = qh->channel;
++	struct fiq_dma_blob *blob;
++	struct dwc_otg_hcd_iso_packet_desc *frame_desc;
++
++	for (i = 0; i < 6; i++) {
++		st->dma_info.slot_len[i] = 255;
++	}
++	st->dma_info.index = 0;
++	i = 0;
++	if (hc->ep_is_in) {
++		/*
++		 * Set dma_regs to bounce buffer. FIQ will update the
++		 * state depending on transaction progress.
++		 */
++		blob = (struct fiq_dma_blob *) hcd->fiq_state->dma_base;
++		st->hcdma_copy.d32 = (uint32_t) &blob->channel[hc->hc_num].index[0].buf[0];
++		/* Calculate the max number of CSPLITS such that the FIQ can time out
++		 * a transaction if it fails.
++		 */
++		frame_length = st->hcchar_copy.b.mps;
++		do {
++			i++;
++			frame_length -= 188;
++		} while (frame_length >= 0);
++		st->nrpackets = i;
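++		/* e.g. mps = 1023 gives nrpackets = 6; exact multiples of 188
++		 * get one extra CSPLIT slot (mps = 188 gives 2), which errs on
++		 * the generous side for the timeout. */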
++		return 1;
++	} else {
++		if (qh->ep_type == UE_ISOCHRONOUS) {
++
++			dwc_otg_qtd_t *qtd = DWC_CIRCLEQ_FIRST(&qh->qtd_list);
++
++			frame_desc = &qtd->urb->iso_descs[qtd->isoc_frame_index];
++			frame_length = frame_desc->length;
++
++			/* Virtual address for bounce buffers */
++			blob = hcd->fiq_dmab;
++
++			ptr = qtd->urb->buf + frame_desc->offset;
++			if (frame_length == 0) {
++				/*
++				 * for isochronous transactions, we must still transmit a packet
++				 * even if the length is zero.
++				 */
++				st->dma_info.slot_len[0] = 0;
++				st->nrpackets = 1;
++			} else {
++				do {
++					if (frame_length <= 188) {
++						dwc_memcpy(&blob->channel[hc->hc_num].index[i].buf[0], ptr, frame_length);
++						st->dma_info.slot_len[i] = frame_length;
++						ptr += frame_length;
++					} else {
++						dwc_memcpy(&blob->channel[hc->hc_num].index[i].buf[0], ptr, 188);
++						st->dma_info.slot_len[i] = 188;
++						ptr += 188;
++					}
++					i++;
++					frame_length -= 188;
++				} while (frame_length > 0);
++				st->nrpackets = i;
++			}
++			ptr = qtd->urb->buf + frame_desc->offset;
++			/* Point the HC at the DMA address of the bounce buffers */
++			blob = (struct fiq_dma_blob *) hcd->fiq_state->dma_base;
++			st->hcdma_copy.d32 = (uint32_t) &blob->channel[hc->hc_num].index[0].buf[0];
++
++			/* fixup xfersize to the actual packet size */
++			st->hctsiz_copy.b.pid = 0;
++			st->hctsiz_copy.b.xfersize = st->dma_info.slot_len[0];
++			return 1;
++		} else {
++			/* For interrupt, single OUT packet required, goes in the SSPLIT from hc_buff. */
++			return 0;
++		}
++	}
++}
++
++/*
++ * Pushing a periodic request into the queue near the EOF1 point
++ * in a microframe causes an erroneous frame-overrun (frmovrun) interrupt.
++ * Usually the request still goes out on the bus and causes a transfer, but
++ * the core does not transfer the data to memory.
++ * A guard interval (in 60MHz clocks) is therefore required, and it must
++ * also cover the CPU latency between reading the frame remainder and
++ * enabling the channel.
++ */
++#define PERIODIC_FRREM_BACKOFF 1000
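++/* 1000 clocks at 60MHz is roughly 16.7us, i.e. the last ~13% of a 125us
++ * microframe (7500 clocks). The exact figure is presumably empirical. */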
++
++int fiq_fsm_queue_isoc_transaction(dwc_otg_hcd_t *hcd, dwc_otg_qh_t *qh)
++{
++	dwc_hc_t *hc = qh->channel;
++	dwc_otg_hc_regs_t *hc_regs = hcd->core_if->host_if->hc_regs[hc->hc_num];
++	dwc_otg_qtd_t *qtd = DWC_CIRCLEQ_FIRST(&qh->qtd_list);
++	int frame;
++	struct fiq_channel_state *st = &hcd->fiq_state->channel[hc->hc_num];
++	int xfer_len, nrpackets;
++	hcdma_data_t hcdma;
++	hfnum_data_t hfnum;
++
++	if (st->fsm != FIQ_PASSTHROUGH)
++		return 0;
++
++	st->nr_errors = 0;
++
++	st->hcchar_copy.d32 = 0;
++	st->hcchar_copy.b.mps = hc->max_packet;
++	st->hcchar_copy.b.epdir = hc->ep_is_in;
++	st->hcchar_copy.b.devaddr = hc->dev_addr;
++	st->hcchar_copy.b.epnum = hc->ep_num;
++	st->hcchar_copy.b.eptype = hc->ep_type;
++
++	st->hcintmsk_copy.b.chhltd = 1;
++
++	frame = dwc_otg_hcd_get_frame_number(hcd);
++	st->hcchar_copy.b.oddfrm = (frame & 0x1) ? 0 : 1;
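++	/* The transfer is queued for the next microframe, so oddfrm is the
++	 * inverse of the current frame parity. */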
++
++	st->hcchar_copy.b.lspddev = 0;
++	/* Enable the channel later as a final register write. */
++
++	st->hcsplt_copy.d32 = 0;
++
++	st->hs_isoc_info.iso_desc = (struct dwc_otg_hcd_iso_packet_desc *) &qtd->urb->iso_descs;
++	st->hs_isoc_info.nrframes = qtd->urb->packet_count;
++	/* grab the next DMA address offset from the array */
++	st->hcdma_copy.d32 = qtd->urb->dma;
++	hcdma.d32 = st->hcdma_copy.d32 + st->hs_isoc_info.iso_desc[0].offset;
++
++	/* We need to set multi_count. This is a bit tricky - it has to be set per-transaction,
++	 * as the core needs to be told the correct number of packets to send. Caution: for IN
++	 * transfers this is always set to the maximum packet size of the endpoint. */
++	xfer_len = st->hs_isoc_info.iso_desc[0].length;
++	nrpackets = (xfer_len + st->hcchar_copy.b.mps - 1) / st->hcchar_copy.b.mps;
++	if (nrpackets == 0)
++		nrpackets = 1;
++	st->hcchar_copy.b.multicnt = nrpackets;
++	st->hctsiz_copy.b.pktcnt = nrpackets;
++
++	/* Initial PID also needs to be set */
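++	/* For high-bandwidth isochronous endpoints the starting PID encodes the
++	 * number of packets per microframe (USB 2.0 PID sequencing): OUT starts
++	 * with DATA0 or MDATA, IN expects DATA0/DATA1/DATA2 for 1/2/3 packets. */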
++	if (st->hcchar_copy.b.epdir == 0) {
++		st->hctsiz_copy.b.xfersize = xfer_len;
++		switch (st->hcchar_copy.b.multicnt) {
++		case 1:
++			st->hctsiz_copy.b.pid = DWC_PID_DATA0;
++			break;
++		case 2:
++		case 3:
++			st->hctsiz_copy.b.pid = DWC_PID_MDATA;
++			break;
++		}
++
++	} else {
++		st->hctsiz_copy.b.xfersize = nrpackets * st->hcchar_copy.b.mps;
++		switch (st->hcchar_copy.b.multicnt) {
++		case 1:
++			st->hctsiz_copy.b.pid = DWC_PID_DATA0;
++			break;
++		case 2:
++			st->hctsiz_copy.b.pid = DWC_PID_DATA1;
++			break;
++		case 3:
++			st->hctsiz_copy.b.pid = DWC_PID_DATA2;
++			break;
++		}
++	}
++
++	st->hs_isoc_info.stride = qh->interval;
++	st->uframe_sleeps = 0;
++
++	fiq_print(FIQDBG_INT, hcd->fiq_state, "FSMQ  %01d ", hc->hc_num);
++	fiq_print(FIQDBG_INT, hcd->fiq_state, "%08x", st->hcchar_copy.d32);
++	fiq_print(FIQDBG_INT, hcd->fiq_state, "%08x", st->hctsiz_copy.d32);
++	fiq_print(FIQDBG_INT, hcd->fiq_state, "%08x", st->hcdma_copy.d32);
++	hfnum.d32 = DWC_READ_REG32(&hcd->core_if->host_if->host_global_regs->hfnum);
++	local_fiq_disable();
++	fiq_fsm_spin_lock(&hcd->fiq_state->lock);
++	DWC_WRITE_REG32(&hc_regs->hctsiz, st->hctsiz_copy.d32);
++	DWC_WRITE_REG32(&hc_regs->hcsplt, st->hcsplt_copy.d32);
++	DWC_WRITE_REG32(&hc_regs->hcdma, st->hcdma_copy.d32);
++	DWC_WRITE_REG32(&hc_regs->hcchar, st->hcchar_copy.d32);
++	DWC_WRITE_REG32(&hc_regs->hcintmsk, st->hcintmsk_copy.d32);
++	if (hfnum.b.frrem < PERIODIC_FRREM_BACKOFF) {
++		/* Prevent queueing near EOF1. Bad things happen if a periodic
++		 * split transaction is queued very close to EOF. SOF interrupt handler
++		 * will wake this channel at the next interrupt.
++		 */
++		st->fsm = FIQ_HS_ISOC_SLEEPING;
++		st->uframe_sleeps = 1;
++	} else {
++		st->fsm = FIQ_HS_ISOC_TURBO;
++		st->hcchar_copy.b.chen = 1;
++		DWC_WRITE_REG32(&hc_regs->hcchar, st->hcchar_copy.d32);
++	}
++	mb();
++	st->hcchar_copy.b.chen = 0;
++	fiq_fsm_spin_unlock(&hcd->fiq_state->lock);
++	local_fiq_enable();
++	return 0;
++}
++
++
++/**
++ * fiq_fsm_queue_split_transaction() - Set up a host channel and FIQ state
++ * @hcd: Pointer to the dwc_otg_hcd struct
++ * @qh: Pointer to the endpoint's queue head
++ *
++ * This overrides the dwc_otg driver's normal method of queueing a transaction.
++ * Called from dwc_otg_hcd_queue_transactions(), this performs specific setup
++ * for the nominated host channel.
++ *
++ * For periodic transfers, it also peeks at the FIQ state to see if an immediate
++ * start is possible. If not, then the FIQ is left to start the transfer.
++ */
++int fiq_fsm_queue_split_transaction(dwc_otg_hcd_t *hcd, dwc_otg_qh_t *qh)
++{
++	int start_immediate = 1, i;
++	hfnum_data_t hfnum;
++	dwc_hc_t *hc = qh->channel;
++	dwc_otg_hc_regs_t *hc_regs = hcd->core_if->host_if->hc_regs[hc->hc_num];
++	/* Program HC registers, setup FIQ_state, examine FIQ if periodic, start transfer (not if uframe 5) */
++	int hub_addr, port_addr, frame, uframe;
++	struct fiq_channel_state *st = &hcd->fiq_state->channel[hc->hc_num];
++
++	if (st->fsm != FIQ_PASSTHROUGH)
++		return 0;
++	st->nr_errors = 0;
++
++	st->hcchar_copy.d32 = 0;
++	st->hcchar_copy.b.mps = hc->max_packet;
++	st->hcchar_copy.b.epdir = hc->ep_is_in;
++	st->hcchar_copy.b.devaddr = hc->dev_addr;
++	st->hcchar_copy.b.epnum = hc->ep_num;
++	st->hcchar_copy.b.eptype = hc->ep_type;
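++	/* ep_type & 0x1 is set only for the periodic endpoint types
++	 * (UE_ISOCHRONOUS = 1, UE_INTERRUPT = 3). */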
++	if (hc->ep_type & 0x1) {
++		if (hc->ep_is_in)
++			st->hcchar_copy.b.multicnt = 3;
++		else
++			/* Docs say set this to 1, but driver sets to 0! */
++			st->hcchar_copy.b.multicnt = 0;
++	} else {
++		st->hcchar_copy.b.multicnt = 1;
++		st->hcchar_copy.b.oddfrm = 0;
++	}
++	st->hcchar_copy.b.lspddev = (hc->speed == DWC_OTG_EP_SPEED_LOW) ? 1 : 0;
++	/* Enable the channel later as a final register write. */
++
++	st->hcsplt_copy.d32 = 0;
++	if(qh->do_split) {
++		hcd->fops->hub_info(hcd, DWC_CIRCLEQ_FIRST(&qh->qtd_list)->urb->priv, &hub_addr, &port_addr);
++		st->hcsplt_copy.b.compsplt = 0;
++		st->hcsplt_copy.b.spltena = 1;
++		// XACTPOS is for isoc-out only but needs initialising anyway.
++		st->hcsplt_copy.b.xactpos = ISOC_XACTPOS_ALL;
++		if((qh->ep_type == DWC_OTG_EP_TYPE_ISOC) && (!qh->ep_is_in)) {
++			/* For packet sizes 0 < L <= 188, use ISOC_XACTPOS_ALL.
++			 * For anything longer, use ISOC_XACTPOS_BEGIN; the FIQ
++			 * will update the position as necessary.
++			 */
++			if (hc->xfer_len > 188) {
++				st->hcsplt_copy.b.xactpos = ISOC_XACTPOS_BEGIN;
++			}
++		}
++		st->hcsplt_copy.b.hubaddr = (uint8_t) hub_addr;
++		st->hcsplt_copy.b.prtaddr = (uint8_t) port_addr;
++		st->hub_addr = hub_addr;
++		st->port_addr = port_addr;
++	}
++
++	st->hctsiz_copy.d32 = 0;
++	st->hctsiz_copy.b.dopng = 0;
++	st->hctsiz_copy.b.pid = hc->data_pid_start;
++
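++	/* Presumably: a split transaction carries at most one packet's worth of
++	 * data for IN, or one 188-byte chunk for a periodic OUT SSPLIT, hence
++	 * the clamp below. */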
++	if (hc->ep_is_in || (hc->xfer_len > hc->max_packet)) {
++		hc->xfer_len = hc->max_packet;
++	} else if (!hc->ep_is_in && (hc->xfer_len > 188)) {
++		hc->xfer_len = 188;
++	}
++	st->hctsiz_copy.b.xfersize = hc->xfer_len;
++
++	st->hctsiz_copy.b.pktcnt = 1;
++
++	if (hc->ep_type & 0x1) {
++		/*
++		 * For potentially multi-packet transfers, must use the DMA bounce buffers. For IN transfers,
++		 * the DMA address is the address of the first 188byte slot buffer in the bounce buffer array.
++		 * For multi-packet OUT transfers, we need to copy the data into the bounce buffer array so the FIQ can punt
++		 * the right address out as necessary. hc->xfer_buff and hc->xfer_len have already been set
++		 * in assign_and_init_hc(), but this is for the eventual transaction completion only. The FIQ
++		 * must not touch internal driver state.
++		 */
++		if(!fiq_fsm_setup_periodic_dma(hcd, st, qh)) {
++			if (hc->align_buff) {
++				st->hcdma_copy.d32 = hc->align_buff;
++			} else {
++				st->hcdma_copy.d32 = ((unsigned long) hc->xfer_buff & 0xFFFFFFFF);
++			}
++		}
++	} else {
++		if (hc->align_buff) {
++			st->hcdma_copy.d32 = hc->align_buff;
++		} else {
++			st->hcdma_copy.d32 = ((unsigned long) hc->xfer_buff & 0xFFFFFFFF);
++		}
++	}
++	/* The FIQ depends upon no interrupts other than channel halt being enabled.
++	 * Fix up the channel interrupt mask accordingly. */
++	st->hcintmsk_copy.d32 = 0;
++	st->hcintmsk_copy.b.chhltd = 1;
++	st->hcintmsk_copy.b.ahberr = 1;
++
++	/* Hack courtesy of FreeBSD: apparently forcing Interrupt split transactions
++	 * to be treated as Control puts the transfer into the non-periodic request
++	 * queue and the hub's non-periodic handler, which makes things much easier.
++	 */
++	if ((fiq_fsm_mask & 0x8) && hc->ep_type == UE_INTERRUPT) {
++		st->hcchar_copy.b.multicnt = 0;
++		st->hcchar_copy.b.oddfrm = 0;
++		st->hcchar_copy.b.eptype = UE_CONTROL;
++		if (hc->align_buff) {
++			st->hcdma_copy.d32 = hc->align_buff;
++		} else {
++			st->hcdma_copy.d32 = ((unsigned long) hc->xfer_buff & 0xFFFFFFFF);
++		}
++	}
++	DWC_WRITE_REG32(&hc_regs->hcdma, st->hcdma_copy.d32);
++	DWC_WRITE_REG32(&hc_regs->hctsiz, st->hctsiz_copy.d32);
++	DWC_WRITE_REG32(&hc_regs->hcsplt, st->hcsplt_copy.d32);
++	DWC_WRITE_REG32(&hc_regs->hcchar, st->hcchar_copy.d32);
++	DWC_WRITE_REG32(&hc_regs->hcintmsk, st->hcintmsk_copy.d32);
++
++	local_fiq_disable();
++	fiq_fsm_spin_lock(&hcd->fiq_state->lock);
++
++	if (hc->ep_type & 0x1) {
++		hfnum.d32 = DWC_READ_REG32(&hcd->core_if->host_if->host_global_regs->hfnum);
++		frame = (hfnum.b.frnum & ~0x7) >> 3;
++		uframe = hfnum.b.frnum & 0x7;
++		if (hfnum.b.frrem < PERIODIC_FRREM_BACKOFF) {
++			/* Prevent queueing near EOF1. Bad things happen if a periodic
++			 * split transaction is queued very close to EOF.
++			 */
++			start_immediate = 0;
++		} else if (uframe == 5) {
++			start_immediate = 0;
++		} else if (hc->ep_type == UE_ISOCHRONOUS && !hc->ep_is_in) {
++			start_immediate = 0;
++		} else if (hc->ep_is_in && fiq_fsm_too_late(hcd->fiq_state, hc->hc_num)) {
++			start_immediate = 0;
++		} else {
++			/* Search through all host channels to determine if a transaction
++			 * is currently in progress */
++			for (i = 0; i < hcd->core_if->core_params->host_channels; i++) {
++				if (i == hc->hc_num || hcd->fiq_state->channel[i].fsm == FIQ_PASSTHROUGH)
++					continue;
++				switch (hcd->fiq_state->channel[i].fsm) {
++				/* TT is reserved for channels that are in the middle of a periodic
++				 * split transaction.
++				 */
++				case FIQ_PER_SSPLIT_STARTED:
++				case FIQ_PER_CSPLIT_WAIT:
++				case FIQ_PER_CSPLIT_NYET1:
++				case FIQ_PER_CSPLIT_POLL:
++				case FIQ_PER_ISO_OUT_ACTIVE:
++				case FIQ_PER_ISO_OUT_LAST:
++					if (hcd->fiq_state->channel[i].hub_addr == hub_addr &&
++							hcd->fiq_state->channel[i].port_addr == port_addr) {
++						start_immediate = 0;
++					}
++					break;
++				default:
++					break;
++				}
++				if (!start_immediate)
++					break;
++			}
++		}
++	}
++	if ((fiq_fsm_mask & 0x8) && hc->ep_type == UE_INTERRUPT)
++		start_immediate = 1;
++
++	fiq_print(FIQDBG_INT, hcd->fiq_state, "FSMQ %01d %01d", hc->hc_num, start_immediate);
++	fiq_print(FIQDBG_INT, hcd->fiq_state, "%08d", hfnum.b.frrem);
++	//fiq_print(FIQDBG_INT, hcd->fiq_state, "H:%02dP:%02d", hub_addr, port_addr);
++	//fiq_print(FIQDBG_INT, hcd->fiq_state, "%08x", st->hctsiz_copy.d32);
++	//fiq_print(FIQDBG_INT, hcd->fiq_state, "%08x", st->hcdma_copy.d32);
++	switch (hc->ep_type) {
++		case UE_CONTROL:
++		case UE_BULK:
++			st->fsm = FIQ_NP_SSPLIT_STARTED;
++			break;
++		case UE_ISOCHRONOUS:
++			if (hc->ep_is_in) {
++				if (start_immediate) {
++					st->fsm = FIQ_PER_SSPLIT_STARTED;
++				} else {
++					st->fsm = FIQ_PER_SSPLIT_QUEUED;
++				}
++			} else {
++				if (start_immediate) {
++					/* Single-isoc OUT packets don't require FIQ involvement */
++					if (st->nrpackets == 1) {
++						st->fsm = FIQ_PER_ISO_OUT_LAST;
++					} else {
++						st->fsm = FIQ_PER_ISO_OUT_ACTIVE;
++					}
++				} else {
++					st->fsm = FIQ_PER_ISO_OUT_PENDING;
++				}
++			}
++			break;
++		case UE_INTERRUPT:
++			if (fiq_fsm_mask & 0x8) {
++				st->fsm = FIQ_NP_SSPLIT_STARTED;
++			} else if (start_immediate) {
++				st->fsm = FIQ_PER_SSPLIT_STARTED;
++			} else {
++				st->fsm = FIQ_PER_SSPLIT_QUEUED;
++			}
++			break;
++		default:
++			break;
++	}
++	if (start_immediate) {
++		/* Set the oddfrm bit as close as possible to actual queueing */
++		frame = dwc_otg_hcd_get_frame_number(hcd);
++		st->expected_uframe = (frame + 1) & 0x3FFF;
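++		/* The microframe counter is 14 bits wide (11-bit frame number
++		 * plus 3 microframe bits), hence the 0x3FFF wrap. */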
++		st->hcchar_copy.b.oddfrm = (frame & 0x1) ? 0 : 1;
++		st->hcchar_copy.b.chen = 1;
++		DWC_WRITE_REG32(&hc_regs->hcchar, st->hcchar_copy.d32);
++	}
++	mb();
++	fiq_fsm_spin_unlock(&hcd->fiq_state->lock);
++	local_fiq_enable();
++	return 0;
++}
++
++
++/**
++ * This function selects transactions from the HCD transfer schedule and
++ * assigns them to available host channels. It is called from HCD interrupt
++ * handler functions.
++ *
++ * @param hcd The HCD state structure.
++ *
++ * @return The types of new transactions that were assigned to host channels.
++ */
++dwc_otg_transaction_type_e dwc_otg_hcd_select_transactions(dwc_otg_hcd_t * hcd)
++{
++	dwc_list_link_t *qh_ptr;
++	dwc_otg_qh_t *qh;
++	int num_channels;
++	dwc_irqflags_t flags;
++	dwc_spinlock_t *channel_lock = hcd->channel_lock;
++	dwc_otg_transaction_type_e ret_val = DWC_OTG_TRANSACTION_NONE;
++
++#ifdef DEBUG_HOST_CHANNELS
++	last_sel_trans_num_per_scheduled = 0;
++	last_sel_trans_num_nonper_scheduled = 0;
++	last_sel_trans_num_avail_hc_at_start = hcd->available_host_channels;
++#endif /* DEBUG_HOST_CHANNELS */
++
++	/* Process entries in the periodic ready list. */
++	qh_ptr = DWC_LIST_FIRST(&hcd->periodic_sched_ready);
++
++	while (qh_ptr != &hcd->periodic_sched_ready &&
++	       !DWC_CIRCLEQ_EMPTY(&hcd->free_hc_list)) {
++
++		qh = DWC_LIST_ENTRY(qh_ptr, dwc_otg_qh_t, qh_list_entry);
++
++		if (microframe_schedule) {
++			// Make sure we leave one channel for non periodic transactions.
++			DWC_SPINLOCK_IRQSAVE(channel_lock, &flags);
++			if (hcd->available_host_channels <= 1) {
++				DWC_SPINUNLOCK_IRQRESTORE(channel_lock, flags);
++				break;
++			}
++			hcd->available_host_channels--;
++			DWC_SPINUNLOCK_IRQRESTORE(channel_lock, flags);
++#ifdef DEBUG_HOST_CHANNELS
++			last_sel_trans_num_per_scheduled++;
++#endif /* DEBUG_HOST_CHANNELS */
++		}
++		qh = DWC_LIST_ENTRY(qh_ptr, dwc_otg_qh_t, qh_list_entry);
++		assign_and_init_hc(hcd, qh);
++
++		/*
++		 * Move the QH from the periodic ready schedule to the
++		 * periodic assigned schedule.
++		 */
++		qh_ptr = DWC_LIST_NEXT(qh_ptr);
++		DWC_SPINLOCK_IRQSAVE(channel_lock, &flags);
++		DWC_LIST_MOVE_HEAD(&hcd->periodic_sched_assigned,
++				   &qh->qh_list_entry);
++		DWC_SPINUNLOCK_IRQRESTORE(channel_lock, flags);
++	}
++
++	/*
++	 * Process entries in the inactive portion of the non-periodic
++	 * schedule. Some free host channels may not be used if they are
++	 * reserved for periodic transfers.
++	 */
++	qh_ptr = hcd->non_periodic_sched_inactive.next;
++	num_channels = hcd->core_if->core_params->host_channels;
++	while (qh_ptr != &hcd->non_periodic_sched_inactive &&
++	       (microframe_schedule || hcd->non_periodic_channels <
++		num_channels - hcd->periodic_channels) &&
++	       !DWC_CIRCLEQ_EMPTY(&hcd->free_hc_list)) {
++
++		qh = DWC_LIST_ENTRY(qh_ptr, dwc_otg_qh_t, qh_list_entry);
++		/*
++		 * Check to see if this is a NAK'd retransmit, in which case ignore it for now:
++		 * we hold off on bulk retransmissions to reduce the NAK interrupt overhead caused
++		 * by full-speed devices that simply hold the host off with NAKs.
++		 */
++		if (fiq_enable && nak_holdoff && qh->do_split) {
++			if (qh->nak_frame != 0xffff) {
++				uint16_t next_frame = dwc_frame_num_inc(qh->nak_frame, (qh->ep_type == UE_BULK) ? nak_holdoff : 8);
++				uint16_t frame = dwc_otg_hcd_get_frame_number(hcd);
++				if (dwc_frame_num_le(frame, next_frame)) {
++					if(dwc_frame_num_le(next_frame, hcd->fiq_state->next_sched_frame)) {
++						hcd->fiq_state->next_sched_frame = next_frame;
++					}
++					qh_ptr = DWC_LIST_NEXT(qh_ptr);
++					continue;
++				} else {
++					qh->nak_frame = 0xFFFF;
++				}
++			}
++		}
++
++		if (microframe_schedule) {
++				DWC_SPINLOCK_IRQSAVE(channel_lock, &flags);
++				if (hcd->available_host_channels < 1) {
++					DWC_SPINUNLOCK_IRQRESTORE(channel_lock, flags);
++					break;
++				}
++				hcd->available_host_channels--;
++				DWC_SPINUNLOCK_IRQRESTORE(channel_lock, flags);
++#ifdef DEBUG_HOST_CHANNELS
++				last_sel_trans_num_nonper_scheduled++;
++#endif /* DEBUG_HOST_CHANNELS */
++		}
++
++		assign_and_init_hc(hcd, qh);
++
++		/*
++		 * Move the QH from the non-periodic inactive schedule to the
++		 * non-periodic active schedule.
++		 */
++		qh_ptr = DWC_LIST_NEXT(qh_ptr);
++		DWC_SPINLOCK_IRQSAVE(channel_lock, &flags);
++		DWC_LIST_MOVE_HEAD(&hcd->non_periodic_sched_active,
++				   &qh->qh_list_entry);
++		DWC_SPINUNLOCK_IRQRESTORE(channel_lock, flags);
++
++
++		if (!microframe_schedule)
++			hcd->non_periodic_channels++;
++	}
++	/* We moved a non-periodic QH to the active schedule. If the inactive queue is empty,
++	 * stop the FIQ from kicking us. We could potentially still have elements here if we
++	 * ran out of host channels.
++	 */
++	if (fiq_enable) {
++		if (DWC_LIST_EMPTY(&hcd->non_periodic_sched_inactive)) {
++			hcd->fiq_state->kick_np_queues = 0;
++		} else {
++			/* For each entry remaining in the NP inactive queue,
++			 * if this is a NAK'd retransmit then don't set the kick flag.
++			 */
++			if(nak_holdoff) {
++				DWC_LIST_FOREACH(qh_ptr, &hcd->non_periodic_sched_inactive) {
++					qh = DWC_LIST_ENTRY(qh_ptr, dwc_otg_qh_t, qh_list_entry);
++					if (qh->nak_frame == 0xFFFF) {
++						hcd->fiq_state->kick_np_queues = 1;
++					}
++				}
++			}
++		}
++	}
++	if(!DWC_LIST_EMPTY(&hcd->periodic_sched_assigned))
++		ret_val |= DWC_OTG_TRANSACTION_PERIODIC;
++
++	if(!DWC_LIST_EMPTY(&hcd->non_periodic_sched_active))
++		ret_val |= DWC_OTG_TRANSACTION_NON_PERIODIC;
++
++
++#ifdef DEBUG_HOST_CHANNELS
++	last_sel_trans_num_avail_hc_at_end = hcd->available_host_channels;
++#endif /* DEBUG_HOST_CHANNELS */
++	return ret_val;
++}
++
++/**
++ * Attempts to queue a single transaction request for a host channel
++ * associated with either a periodic or non-periodic transfer. This function
++ * assumes that there is space available in the appropriate request queue. For
++ * an OUT transfer or SETUP transaction in Slave mode, it checks whether space
++ * is available in the appropriate Tx FIFO.
++ *
++ * @param hcd The HCD state structure.
++ * @param hc Host channel descriptor associated with either a periodic or
++ * non-periodic transfer.
++ * @param fifo_dwords_avail Number of DWORDs available in the periodic Tx
++ * FIFO for periodic transfers or the non-periodic Tx FIFO for non-periodic
++ * transfers.
++ *
++ * @return 1 if a request is queued and more requests may be needed to
++ * complete the transfer, 0 if no more requests are required for this
++ * transfer, -1 if there is insufficient space in the Tx FIFO.
++ */
++static int queue_transaction(dwc_otg_hcd_t * hcd,
++			     dwc_hc_t * hc, uint16_t fifo_dwords_avail)
++{
++	int retval;
++
++	if (hcd->core_if->dma_enable) {
++		if (hcd->core_if->dma_desc_enable) {
++			if (!hc->xfer_started
++			    || (hc->ep_type == DWC_OTG_EP_TYPE_ISOC)) {
++				dwc_otg_hcd_start_xfer_ddma(hcd, hc->qh);
++				hc->qh->ping_state = 0;
++			}
++		} else if (!hc->xfer_started) {
++			if (fiq_fsm_enable && hc->error_state) {
++				hcd->fiq_state->channel[hc->hc_num].nr_errors =
++					DWC_CIRCLEQ_FIRST(&hc->qh->qtd_list)->error_count;
++				hcd->fiq_state->channel[hc->hc_num].fsm =
++					FIQ_PASSTHROUGH_ERRORSTATE;
++			}
++			dwc_otg_hc_start_transfer(hcd->core_if, hc);
++			hc->qh->ping_state = 0;
++		}
++		retval = 0;
++	} else if (hc->halt_pending) {
++		/* Don't queue a request if the channel has been halted. */
++		retval = 0;
++	} else if (hc->halt_on_queue) {
++		dwc_otg_hc_halt(hcd->core_if, hc, hc->halt_status);
++		retval = 0;
++	} else if (hc->do_ping) {
++		if (!hc->xfer_started) {
++			dwc_otg_hc_start_transfer(hcd->core_if, hc);
++		}
++		retval = 0;
++	} else if (!hc->ep_is_in || hc->data_pid_start == DWC_OTG_HC_PID_SETUP) {
++		if ((fifo_dwords_avail * 4) >= hc->max_packet) {
++			if (!hc->xfer_started) {
++				dwc_otg_hc_start_transfer(hcd->core_if, hc);
++				retval = 1;
++			} else {
++				retval =
++				    dwc_otg_hc_continue_transfer(hcd->core_if,
++								 hc);
++			}
++		} else {
++			retval = -1;
++		}
++	} else {
++		if (!hc->xfer_started) {
++			dwc_otg_hc_start_transfer(hcd->core_if, hc);
++			retval = 1;
++		} else {
++			retval = dwc_otg_hc_continue_transfer(hcd->core_if, hc);
++		}
++	}
++
++	return retval;
++}
++
++/**
++ * Processes periodic channels for the next frame and queues transactions for
++ * these channels to the DWC_otg controller. After queueing transactions, the
++ * Periodic Tx FIFO Empty interrupt is enabled if there are more transactions
++ * to queue as Periodic Tx FIFO or request queue space becomes available.
++ * Otherwise, the Periodic Tx FIFO Empty interrupt is disabled.
++ */
++static void process_periodic_channels(dwc_otg_hcd_t * hcd)
++{
++	hptxsts_data_t tx_status;
++	dwc_list_link_t *qh_ptr;
++	dwc_otg_qh_t *qh;
++	int status = 0;
++	int no_queue_space = 0;
++	int no_fifo_space = 0;
++
++	dwc_otg_host_global_regs_t *host_regs;
++	host_regs = hcd->core_if->host_if->host_global_regs;
++
++	DWC_DEBUGPL(DBG_HCDV, "Queue periodic transactions\n");
++#ifdef DEBUG
++	tx_status.d32 = DWC_READ_REG32(&host_regs->hptxsts);
++	DWC_DEBUGPL(DBG_HCDV,
++		    "  P Tx Req Queue Space Avail (before queue): %d\n",
++		    tx_status.b.ptxqspcavail);
++	DWC_DEBUGPL(DBG_HCDV, "  P Tx FIFO Space Avail (before queue): %d\n",
++		    tx_status.b.ptxfspcavail);
++#endif
++
++	qh_ptr = hcd->periodic_sched_assigned.next;
++	while (qh_ptr != &hcd->periodic_sched_assigned) {
++		tx_status.d32 = DWC_READ_REG32(&host_regs->hptxsts);
++		if (tx_status.b.ptxqspcavail == 0) {
++			no_queue_space = 1;
++			break;
++		}
++
++		qh = DWC_LIST_ENTRY(qh_ptr, dwc_otg_qh_t, qh_list_entry);
++
++		// Do not send a split start transaction any later than microframe .6
++		// Note: we have to schedule a periodic transfer in .5 to make it go out in .6
++		if(fiq_fsm_enable && qh->do_split && ((dwc_otg_hcd_get_frame_number(hcd) + 1) & 7) > 6)
++		{
++			qh_ptr = qh_ptr->next;
++			hcd->fiq_state->next_sched_frame = dwc_otg_hcd_get_frame_number(hcd) | 7;
++			continue;
++		}
++
++		if (fiq_fsm_enable && fiq_fsm_transaction_suitable(qh)) {
++			if (qh->do_split)
++				fiq_fsm_queue_split_transaction(hcd, qh);
++			else
++				fiq_fsm_queue_isoc_transaction(hcd, qh);
++		} else {
++
++			/*
++			 * Set a flag if we're queueing high-bandwidth in slave mode.
++			 * The flag prevents any halts to get into the request queue in
++			 * the middle of multiple high-bandwidth packets getting queued.
++			 */
++			if (!hcd->core_if->dma_enable && qh->channel->multi_count > 1) {
++				hcd->core_if->queuing_high_bandwidth = 1;
++			}
++			status = queue_transaction(hcd, qh->channel,
++							tx_status.b.ptxfspcavail);
++			if (status < 0) {
++				no_fifo_space = 1;
++				break;
++			}
++		}
++
++		/*
++		 * In Slave mode, stay on the current transfer until there is
++		 * nothing more to do or the high-bandwidth request count is
++		 * reached. In DMA mode, only need to queue one request. The
++		 * controller automatically handles multiple packets for
++		 * high-bandwidth transfers.
++		 */
++		if (hcd->core_if->dma_enable || status == 0 ||
++		    qh->channel->requests == qh->channel->multi_count) {
++			qh_ptr = qh_ptr->next;
++			/*
++			 * Move the QH from the periodic assigned schedule to
++			 * the periodic queued schedule.
++			 */
++			DWC_LIST_MOVE_HEAD(&hcd->periodic_sched_queued,
++					   &qh->qh_list_entry);
++
++			/* done queuing high bandwidth */
++			hcd->core_if->queuing_high_bandwidth = 0;
++		}
++	}
++
++	if (!hcd->core_if->dma_enable) {
++		dwc_otg_core_global_regs_t *global_regs;
++		gintmsk_data_t intr_mask = {.d32 = 0 };
++
++		global_regs = hcd->core_if->core_global_regs;
++		intr_mask.b.ptxfempty = 1;
++#ifdef DEBUG
++		tx_status.d32 = DWC_READ_REG32(&host_regs->hptxsts);
++		DWC_DEBUGPL(DBG_HCDV,
++			    "  P Tx Req Queue Space Avail (after queue): %d\n",
++			    tx_status.b.ptxqspcavail);
++		DWC_DEBUGPL(DBG_HCDV,
++			    "  P Tx FIFO Space Avail (after queue): %d\n",
++			    tx_status.b.ptxfspcavail);
++#endif
++		if (!DWC_LIST_EMPTY(&hcd->periodic_sched_assigned) ||
++		    no_queue_space || no_fifo_space) {
++			/*
++			 * May need to queue more transactions as the request
++			 * queue or Tx FIFO empties. Enable the periodic Tx
++			 * FIFO empty interrupt. (Always use the half-empty
++			 * level to ensure that new requests are loaded as
++			 * soon as possible.)
++			 */
++			DWC_MODIFY_REG32(&global_regs->gintmsk, 0,
++					 intr_mask.d32);
++		} else {
++			/*
++			 * Disable the Tx FIFO empty interrupt since there are
++			 * no more transactions that need to be queued right
++			 * now. This function is called from interrupt
++			 * handlers to queue more transactions as transfer
++			 * states change.
++			 */
++			DWC_MODIFY_REG32(&global_regs->gintmsk, intr_mask.d32,
++					 0);
++		}
++	}
++}
++
++/**
++ * Processes active non-periodic channels and queues transactions for these
++ * channels to the DWC_otg controller. After queueing transactions, the NP Tx
++ * FIFO Empty interrupt is enabled if there are more transactions to queue as
++ * NP Tx FIFO or request queue space becomes available. Otherwise, the NP Tx
++ * FIFO Empty interrupt is disabled.
++ */
++static void process_non_periodic_channels(dwc_otg_hcd_t * hcd)
++{
++	gnptxsts_data_t tx_status;
++	dwc_list_link_t *orig_qh_ptr;
++	dwc_otg_qh_t *qh;
++	int status;
++	int no_queue_space = 0;
++	int no_fifo_space = 0;
++	int more_to_do = 0;
++
++	dwc_otg_core_global_regs_t *global_regs =
++	    hcd->core_if->core_global_regs;
++
++	DWC_DEBUGPL(DBG_HCDV, "Queue non-periodic transactions\n");
++#ifdef DEBUG
++	tx_status.d32 = DWC_READ_REG32(&global_regs->gnptxsts);
++	DWC_DEBUGPL(DBG_HCDV,
++		    "  NP Tx Req Queue Space Avail (before queue): %d\n",
++		    tx_status.b.nptxqspcavail);
++	DWC_DEBUGPL(DBG_HCDV, "  NP Tx FIFO Space Avail (before queue): %d\n",
++		    tx_status.b.nptxfspcavail);
++#endif
++	/*
++	 * Keep track of the starting point. Skip over the start-of-list
++	 * entry.
++	 */
++	if (hcd->non_periodic_qh_ptr == &hcd->non_periodic_sched_active) {
++		hcd->non_periodic_qh_ptr = hcd->non_periodic_qh_ptr->next;
++	}
++	orig_qh_ptr = hcd->non_periodic_qh_ptr;
++
++	/*
++	 * Process once through the active list or until no more space is
++	 * available in the request queue or the Tx FIFO.
++	 */
++	do {
++		tx_status.d32 = DWC_READ_REG32(&global_regs->gnptxsts);
++		if (!hcd->core_if->dma_enable && tx_status.b.nptxqspcavail == 0) {
++			no_queue_space = 1;
++			break;
++		}
++
++		qh = DWC_LIST_ENTRY(hcd->non_periodic_qh_ptr, dwc_otg_qh_t,
++				    qh_list_entry);
++
++		if(fiq_fsm_enable && fiq_fsm_transaction_suitable(qh)) {
++			fiq_fsm_queue_split_transaction(hcd, qh);
++		} else {
++			status = queue_transaction(hcd, qh->channel,
++						tx_status.b.nptxfspcavail);
++
++			if (status > 0) {
++				more_to_do = 1;
++			} else if (status < 0) {
++				no_fifo_space = 1;
++				break;
++			}
++		}
++		/* Advance to next QH, skipping start-of-list entry. */
++		hcd->non_periodic_qh_ptr = hcd->non_periodic_qh_ptr->next;
++		if (hcd->non_periodic_qh_ptr == &hcd->non_periodic_sched_active) {
++			hcd->non_periodic_qh_ptr =
++			    hcd->non_periodic_qh_ptr->next;
++		}
++
++	} while (hcd->non_periodic_qh_ptr != orig_qh_ptr);
++
++	if (!hcd->core_if->dma_enable) {
++		gintmsk_data_t intr_mask = {.d32 = 0 };
++		intr_mask.b.nptxfempty = 1;
++
++#ifdef DEBUG
++		tx_status.d32 = DWC_READ_REG32(&global_regs->gnptxsts);
++		DWC_DEBUGPL(DBG_HCDV,
++			    "  NP Tx Req Queue Space Avail (after queue): %d\n",
++			    tx_status.b.nptxqspcavail);
++		DWC_DEBUGPL(DBG_HCDV,
++			    "  NP Tx FIFO Space Avail (after queue): %d\n",
++			    tx_status.b.nptxfspcavail);
++#endif
++		if (more_to_do || no_queue_space || no_fifo_space) {
++			/*
++			 * May need to queue more transactions as the request
++			 * queue or Tx FIFO empties. Enable the non-periodic
++			 * Tx FIFO empty interrupt. (Always use the half-empty
++			 * level to ensure that new requests are loaded as
++			 * soon as possible.)
++			 */
++			DWC_MODIFY_REG32(&global_regs->gintmsk, 0,
++					 intr_mask.d32);
++		} else {
++			/*
++			 * Disable the Tx FIFO empty interrupt since there are
++			 * no more transactions that need to be queued right
++			 * now. This function is called from interrupt
++			 * handlers to queue more transactions as transfer
++			 * states change.
++			 */
++			DWC_MODIFY_REG32(&global_regs->gintmsk, intr_mask.d32,
++					 0);
++		}
++	}
++}
++
++/**
++ * This function processes the currently active host channels and queues
++ * transactions for these channels to the DWC_otg controller. It is called
++ * from HCD interrupt handler functions.
++ *
++ * @param hcd The HCD state structure.
++ * @param tr_type The type(s) of transactions to queue (non-periodic,
++ * periodic, or both).
++ */
++void dwc_otg_hcd_queue_transactions(dwc_otg_hcd_t * hcd,
++				    dwc_otg_transaction_type_e tr_type)
++{
++#ifdef DEBUG_SOF
++	DWC_DEBUGPL(DBG_HCD, "Queue Transactions\n");
++#endif
++	/* Process host channels associated with periodic transfers. */
++	if ((tr_type == DWC_OTG_TRANSACTION_PERIODIC ||
++	     tr_type == DWC_OTG_TRANSACTION_ALL) &&
++	    !DWC_LIST_EMPTY(&hcd->periodic_sched_assigned)) {
++
++		process_periodic_channels(hcd);
++	}
++
++	/* Process host channels associated with non-periodic transfers. */
++	if (tr_type == DWC_OTG_TRANSACTION_NON_PERIODIC ||
++	    tr_type == DWC_OTG_TRANSACTION_ALL) {
++		if (!DWC_LIST_EMPTY(&hcd->non_periodic_sched_active)) {
++			process_non_periodic_channels(hcd);
++		} else {
++			/*
++			 * Ensure NP Tx FIFO empty interrupt is disabled when
++			 * there are no non-periodic transfers to process.
++			 */
++			gintmsk_data_t gintmsk = {.d32 = 0 };
++			gintmsk.b.nptxfempty = 1;
++
++			if (fiq_enable) {
++				local_fiq_disable();
++				fiq_fsm_spin_lock(&hcd->fiq_state->lock);
++				DWC_MODIFY_REG32(&hcd->core_if->core_global_regs->gintmsk, gintmsk.d32, 0);
++				fiq_fsm_spin_unlock(&hcd->fiq_state->lock);
++				local_fiq_enable();
++			} else {
++				DWC_MODIFY_REG32(&hcd->core_if->core_global_regs->gintmsk, gintmsk.d32, 0);
++			}
++		}
++	}
++}
++
++#ifdef DWC_HS_ELECT_TST
++/*
++ * Quick and dirty hack to implement the HS Electrical Test
++ * SINGLE_STEP_GET_DEVICE_DESCRIPTOR feature.
++ *
++ * This code was copied from our userspace app "hset". It sends a
++ * Get Device Descriptor control sequence in two parts, first the
++ * Setup packet by itself, followed some time later by the In and
++ * Ack packets. Rather than trying to figure out how to add this
++ * functionality to the normal driver code, we just hijack the
++ * hardware, using these two functions to drive the hardware
++ * directly.
++ */
++
++static dwc_otg_core_global_regs_t *global_regs;
++static dwc_otg_host_global_regs_t *hc_global_regs;
++static dwc_otg_hc_regs_t *hc_regs;
++static uint32_t *data_fifo;
++
++static void do_setup(void)
++{
++	gintsts_data_t gintsts;
++	hctsiz_data_t hctsiz;
++	hcchar_data_t hcchar;
++	haint_data_t haint;
++	hcint_data_t hcint;
++
++	/* Enable HAINTs */
++	DWC_WRITE_REG32(&hc_global_regs->haintmsk, 0x0001);
++
++	/* Enable HCINTs */
++	DWC_WRITE_REG32(&hc_regs->hcintmsk, 0x04a3);
++
++	/* Read GINTSTS */
++	gintsts.d32 = DWC_READ_REG32(&global_regs->gintsts);
++
++	/* Read HAINT */
++	haint.d32 = DWC_READ_REG32(&hc_global_regs->haint);
++
++	/* Read HCINT */
++	hcint.d32 = DWC_READ_REG32(&hc_regs->hcint);
++
++	/* Read HCCHAR */
++	hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
++
++	/* Clear HCINT */
++	DWC_WRITE_REG32(&hc_regs->hcint, hcint.d32);
++
++	/* Clear HAINT */
++	DWC_WRITE_REG32(&hc_global_regs->haint, haint.d32);
++
++	/* Clear GINTSTS */
++	DWC_WRITE_REG32(&global_regs->gintsts, gintsts.d32);
++
++	/* Read GINTSTS */
++	gintsts.d32 = DWC_READ_REG32(&global_regs->gintsts);
++
++	/*
++	 * Send Setup packet (Get Device Descriptor)
++	 */
++
++	/* Make sure channel is disabled */
++	hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
++	if (hcchar.b.chen) {
++		hcchar.b.chdis = 1;
++//              hcchar.b.chen = 1;
++		DWC_WRITE_REG32(&hc_regs->hcchar, hcchar.d32);
++		//sleep(1);
++		dwc_mdelay(1000);
++
++		/* Read GINTSTS */
++		gintsts.d32 = DWC_READ_REG32(&global_regs->gintsts);
++
++		/* Read HAINT */
++		haint.d32 = DWC_READ_REG32(&hc_global_regs->haint);
++
++		/* Read HCINT */
++		hcint.d32 = DWC_READ_REG32(&hc_regs->hcint);
++
++		/* Read HCCHAR */
++		hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
++
++		/* Clear HCINT */
++		DWC_WRITE_REG32(&hc_regs->hcint, hcint.d32);
++
++		/* Clear HAINT */
++		DWC_WRITE_REG32(&hc_global_regs->haint, haint.d32);
++
++		/* Clear GINTSTS */
++		DWC_WRITE_REG32(&global_regs->gintsts, gintsts.d32);
++
++		hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
++	}
++
++	/* Set HCTSIZ */
++	hctsiz.d32 = 0;
++	hctsiz.b.xfersize = 8;
++	hctsiz.b.pktcnt = 1;
++	hctsiz.b.pid = DWC_OTG_HC_PID_SETUP;
++	DWC_WRITE_REG32(&hc_regs->hctsiz, hctsiz.d32);
++
++	/* Set HCCHAR */
++	hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
++	hcchar.b.eptype = DWC_OTG_EP_TYPE_CONTROL;
++	hcchar.b.epdir = 0;
++	hcchar.b.epnum = 0;
++	hcchar.b.mps = 8;
++	hcchar.b.chen = 1;
++	DWC_WRITE_REG32(&hc_regs->hcchar, hcchar.d32);
++
++	/* Fill FIFO with Setup data for Get Device Descriptor */
++	data_fifo = (uint32_t *) ((char *)global_regs + 0x1000);
++	DWC_WRITE_REG32(data_fifo++, 0x01000680);
++	DWC_WRITE_REG32(data_fifo++, 0x00080000);
++
++	gintsts.d32 = DWC_READ_REG32(&global_regs->gintsts);
++
++	/* Wait for host channel interrupt */
++	do {
++		gintsts.d32 = DWC_READ_REG32(&global_regs->gintsts);
++	} while (gintsts.b.hcintr == 0);
++
++	/* Disable HCINTs */
++	DWC_WRITE_REG32(&hc_regs->hcintmsk, 0x0000);
++
++	/* Disable HAINTs */
++	DWC_WRITE_REG32(&hc_global_regs->haintmsk, 0x0000);
++
++	/* Read HAINT */
++	haint.d32 = DWC_READ_REG32(&hc_global_regs->haint);
++
++	/* Read HCINT */
++	hcint.d32 = DWC_READ_REG32(&hc_regs->hcint);
++
++	/* Read HCCHAR */
++	hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
++
++	/* Clear HCINT */
++	DWC_WRITE_REG32(&hc_regs->hcint, hcint.d32);
++
++	/* Clear HAINT */
++	DWC_WRITE_REG32(&hc_global_regs->haint, haint.d32);
++
++	/* Clear GINTSTS */
++	DWC_WRITE_REG32(&global_regs->gintsts, gintsts.d32);
++
++	/* Read GINTSTS */
++	gintsts.d32 = DWC_READ_REG32(&global_regs->gintsts);
++}
++
++static void do_in_ack(void)
++{
++	gintsts_data_t gintsts;
++	hctsiz_data_t hctsiz;
++	hcchar_data_t hcchar;
++	haint_data_t haint;
++	hcint_data_t hcint;
++	host_grxsts_data_t grxsts;
++
++	/* Enable HAINTs */
++	DWC_WRITE_REG32(&hc_global_regs->haintmsk, 0x0001);
++
++	/* Enable HCINTs */
++	DWC_WRITE_REG32(&hc_regs->hcintmsk, 0x04a3);
++
++	/* Read GINTSTS */
++	gintsts.d32 = DWC_READ_REG32(&global_regs->gintsts);
++
++	/* Read HAINT */
++	haint.d32 = DWC_READ_REG32(&hc_global_regs->haint);
++
++	/* Read HCINT */
++	hcint.d32 = DWC_READ_REG32(&hc_regs->hcint);
++
++	/* Read HCCHAR */
++	hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
++
++	/* Clear HCINT */
++	DWC_WRITE_REG32(&hc_regs->hcint, hcint.d32);
++
++	/* Clear HAINT */
++	DWC_WRITE_REG32(&hc_global_regs->haint, haint.d32);
++
++	/* Clear GINTSTS */
++	DWC_WRITE_REG32(&global_regs->gintsts, gintsts.d32);
++
++	/* Read GINTSTS */
++	gintsts.d32 = DWC_READ_REG32(&global_regs->gintsts);
++
++	/*
++	 * Receive Control In packet
++	 */
++
++	/* Make sure channel is disabled */
++	hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
++	if (hcchar.b.chen) {
++		hcchar.b.chdis = 1;
++		hcchar.b.chen = 1;
++		DWC_WRITE_REG32(&hc_regs->hcchar, hcchar.d32);
++		//sleep(1);
++		dwc_mdelay(1000);
++
++		/* Read GINTSTS */
++		gintsts.d32 = DWC_READ_REG32(&global_regs->gintsts);
++
++		/* Read HAINT */
++		haint.d32 = DWC_READ_REG32(&hc_global_regs->haint);
++
++		/* Read HCINT */
++		hcint.d32 = DWC_READ_REG32(&hc_regs->hcint);
++
++		/* Read HCCHAR */
++		hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
++
++		/* Clear HCINT */
++		DWC_WRITE_REG32(&hc_regs->hcint, hcint.d32);
++
++		/* Clear HAINT */
++		DWC_WRITE_REG32(&hc_global_regs->haint, haint.d32);
++
++		/* Clear GINTSTS */
++		DWC_WRITE_REG32(&global_regs->gintsts, gintsts.d32);
++
++		hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
++	}
++
++	/* Set HCTSIZ */
++	hctsiz.d32 = 0;
++	hctsiz.b.xfersize = 8;
++	hctsiz.b.pktcnt = 1;
++	hctsiz.b.pid = DWC_OTG_HC_PID_DATA1;
++	DWC_WRITE_REG32(&hc_regs->hctsiz, hctsiz.d32);
++
++	/* Set HCCHAR */
++	hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
++	hcchar.b.eptype = DWC_OTG_EP_TYPE_CONTROL;
++	hcchar.b.epdir = 1;
++	hcchar.b.epnum = 0;
++	hcchar.b.mps = 8;
++	hcchar.b.chen = 1;
++	DWC_WRITE_REG32(&hc_regs->hcchar, hcchar.d32);
++
++	gintsts.d32 = DWC_READ_REG32(&global_regs->gintsts);
++
++	/* Wait for receive status queue interrupt */
++	do {
++		gintsts.d32 = DWC_READ_REG32(&global_regs->gintsts);
++	} while (gintsts.b.rxstsqlvl == 0);
++
++	/* Read RXSTS */
++	grxsts.d32 = DWC_READ_REG32(&global_regs->grxstsp);
++
++	/* Clear RXSTSQLVL in GINTSTS */
++	gintsts.d32 = 0;
++	gintsts.b.rxstsqlvl = 1;
++	DWC_WRITE_REG32(&global_regs->gintsts, gintsts.d32);
++
++	switch (grxsts.b.pktsts) {
++	case DWC_GRXSTS_PKTSTS_IN:
++		/* Read the data into the host buffer */
++		if (grxsts.b.bcnt > 0) {
++			int i;
++			int word_count = (grxsts.b.bcnt + 3) / 4;
++
++			data_fifo = (uint32_t *) ((char *)global_regs + 0x1000);
++
++			for (i = 0; i < word_count; i++) {
++				(void)DWC_READ_REG32(data_fifo++);
++			}
++		}
++		break;
++
++	default:
++		break;
++	}
++
++	gintsts.d32 = DWC_READ_REG32(&global_regs->gintsts);
++
++	/* Wait for receive status queue interrupt */
++	do {
++		gintsts.d32 = DWC_READ_REG32(&global_regs->gintsts);
++	} while (gintsts.b.rxstsqlvl == 0);
++
++	/* Read RXSTS */
++	grxsts.d32 = DWC_READ_REG32(&global_regs->grxstsp);
++
++	/* Clear RXSTSQLVL in GINTSTS */
++	gintsts.d32 = 0;
++	gintsts.b.rxstsqlvl = 1;
++	DWC_WRITE_REG32(&global_regs->gintsts, gintsts.d32);
++
++	switch (grxsts.b.pktsts) {
++	case DWC_GRXSTS_PKTSTS_IN_XFER_COMP:
++		break;
++
++	default:
++		break;
++	}
++
++	gintsts.d32 = DWC_READ_REG32(&global_regs->gintsts);
++
++	/* Wait for host channel interrupt */
++	do {
++		gintsts.d32 = DWC_READ_REG32(&global_regs->gintsts);
++	} while (gintsts.b.hcintr == 0);
++
++	/* Read HAINT */
++	haint.d32 = DWC_READ_REG32(&hc_global_regs->haint);
++
++	/* Read HCINT */
++	hcint.d32 = DWC_READ_REG32(&hc_regs->hcint);
++
++	/* Read HCCHAR */
++	hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
++
++	/* Clear HCINT */
++	DWC_WRITE_REG32(&hc_regs->hcint, hcint.d32);
++
++	/* Clear HAINT */
++	DWC_WRITE_REG32(&hc_global_regs->haint, haint.d32);
++
++	/* Clear GINTSTS */
++	DWC_WRITE_REG32(&global_regs->gintsts, gintsts.d32);
++
++	/* Read GINTSTS */
++	gintsts.d32 = DWC_READ_REG32(&global_regs->gintsts);
++
++//      usleep(100000);
++//      mdelay(100);
++	dwc_mdelay(1);
++
++	/*
++	 * Send handshake packet
++	 */
++
++	/* Read HAINT */
++	haint.d32 = DWC_READ_REG32(&hc_global_regs->haint);
++
++	/* Read HCINT */
++	hcint.d32 = DWC_READ_REG32(&hc_regs->hcint);
++
++	/* Read HCCHAR */
++	hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
++
++	/* Clear HCINT */
++	DWC_WRITE_REG32(&hc_regs->hcint, hcint.d32);
++
++	/* Clear HAINT */
++	DWC_WRITE_REG32(&hc_global_regs->haint, haint.d32);
++
++	/* Clear GINTSTS */
++	DWC_WRITE_REG32(&global_regs->gintsts, gintsts.d32);
++
++	/* Read GINTSTS */
++	gintsts.d32 = DWC_READ_REG32(&global_regs->gintsts);
++
++	/* Make sure channel is disabled */
++	hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
++	if (hcchar.b.chen) {
++		hcchar.b.chdis = 1;
++		hcchar.b.chen = 1;
++		DWC_WRITE_REG32(&hc_regs->hcchar, hcchar.d32);
++		//sleep(1);
++		dwc_mdelay(1000);
++
++		/* Read GINTSTS */
++		gintsts.d32 = DWC_READ_REG32(&global_regs->gintsts);
++
++		/* Read HAINT */
++		haint.d32 = DWC_READ_REG32(&hc_global_regs->haint);
++
++		/* Read HCINT */
++		hcint.d32 = DWC_READ_REG32(&hc_regs->hcint);
++
++		/* Read HCCHAR */
++		hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
++
++		/* Clear HCINT */
++		DWC_WRITE_REG32(&hc_regs->hcint, hcint.d32);
++
++		/* Clear HAINT */
++		DWC_WRITE_REG32(&hc_global_regs->haint, haint.d32);
++
++		/* Clear GINTSTS */
++		DWC_WRITE_REG32(&global_regs->gintsts, gintsts.d32);
++
++		hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
++	}
++
++	/* Set HCTSIZ */
++	hctsiz.d32 = 0;
++	hctsiz.b.xfersize = 0;
++	hctsiz.b.pktcnt = 1;
++	hctsiz.b.pid = DWC_OTG_HC_PID_DATA1;
++	DWC_WRITE_REG32(&hc_regs->hctsiz, hctsiz.d32);
++
++	/* Set HCCHAR */
++	hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
++	hcchar.b.eptype = DWC_OTG_EP_TYPE_CONTROL;
++	hcchar.b.epdir = 0;
++	hcchar.b.epnum = 0;
++	hcchar.b.mps = 8;
++	hcchar.b.chen = 1;
++	DWC_WRITE_REG32(&hc_regs->hcchar, hcchar.d32);
++
++	gintsts.d32 = DWC_READ_REG32(&global_regs->gintsts);
++
++	/* Wait for host channel interrupt */
++	do {
++		gintsts.d32 = DWC_READ_REG32(&global_regs->gintsts);
++	} while (gintsts.b.hcintr == 0);
++
++	/* Disable HCINTs */
++	DWC_WRITE_REG32(&hc_regs->hcintmsk, 0x0000);
++
++	/* Disable HAINTs */
++	DWC_WRITE_REG32(&hc_global_regs->haintmsk, 0x0000);
++
++	/* Read HAINT */
++	haint.d32 = DWC_READ_REG32(&hc_global_regs->haint);
++
++	/* Read HCINT */
++	hcint.d32 = DWC_READ_REG32(&hc_regs->hcint);
++
++	/* Read HCCHAR */
++	hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
++
++	/* Clear HCINT */
++	DWC_WRITE_REG32(&hc_regs->hcint, hcint.d32);
++
++	/* Clear HAINT */
++	DWC_WRITE_REG32(&hc_global_regs->haint, haint.d32);
++
++	/* Clear GINTSTS */
++	DWC_WRITE_REG32(&global_regs->gintsts, gintsts.d32);
++
++	/* Read GINTSTS */
++	gintsts.d32 = DWC_READ_REG32(&global_regs->gintsts);
++}
++#endif
++
++/** Handles hub class-specific requests. */
++int dwc_otg_hcd_hub_control(dwc_otg_hcd_t * dwc_otg_hcd,
++			    uint16_t typeReq,
++			    uint16_t wValue,
++			    uint16_t wIndex, uint8_t * buf, uint16_t wLength)
++{
++	int retval = 0;
++
++	dwc_otg_core_if_t *core_if = dwc_otg_hcd->core_if;
++	usb_hub_descriptor_t *hub_desc;
++	hprt0_data_t hprt0 = {.d32 = 0 };
++
++	uint32_t port_status;
++
++	switch (typeReq) {
++	case UCR_CLEAR_HUB_FEATURE:
++		DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD HUB CONTROL - "
++			    "ClearHubFeature 0x%x\n", wValue);
++		switch (wValue) {
++		case UHF_C_HUB_LOCAL_POWER:
++		case UHF_C_HUB_OVER_CURRENT:
++			/* Nothing required here */
++			break;
++		default:
++			retval = -DWC_E_INVALID;
++			DWC_ERROR("DWC OTG HCD - "
++				  "ClearHubFeature request %xh unknown\n",
++				  wValue);
++		}
++		break;
++	case UCR_CLEAR_PORT_FEATURE:
++#ifdef CONFIG_USB_DWC_OTG_LPM
++		if (wValue != UHF_PORT_L1)
++#endif
++			if (!wIndex || wIndex > 1)
++				goto error;
++
++		switch (wValue) {
++		case UHF_PORT_ENABLE:
++			DWC_DEBUGPL(DBG_ANY, "DWC OTG HCD HUB CONTROL - "
++				    "ClearPortFeature USB_PORT_FEAT_ENABLE\n");
++			hprt0.d32 = dwc_otg_read_hprt0(core_if);
++			hprt0.b.prtena = 1;
++			DWC_WRITE_REG32(core_if->host_if->hprt0, hprt0.d32);
++			break;
++		case UHF_PORT_SUSPEND:
++			DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD HUB CONTROL - "
++				    "ClearPortFeature USB_PORT_FEAT_SUSPEND\n");
++
++			if (core_if->power_down == 2) {
++				dwc_otg_host_hibernation_restore(core_if, 0, 0);
++			} else {
++				DWC_WRITE_REG32(core_if->pcgcctl, 0);
++				dwc_mdelay(5);
++
++				hprt0.d32 = dwc_otg_read_hprt0(core_if);
++				hprt0.b.prtres = 1;
++				DWC_WRITE_REG32(core_if->host_if->hprt0, hprt0.d32);
++				hprt0.b.prtsusp = 0;
++				/* Clear Resume bit */
++				dwc_mdelay(100);
++				hprt0.b.prtres = 0;
++				DWC_WRITE_REG32(core_if->host_if->hprt0, hprt0.d32);
++			}
++			break;
++#ifdef CONFIG_USB_DWC_OTG_LPM
++		case UHF_PORT_L1:
++			{
++				pcgcctl_data_t pcgcctl = {.d32 = 0 };
++				glpmcfg_data_t lpmcfg = {.d32 = 0 };
++
++				lpmcfg.d32 =
++				    DWC_READ_REG32(&core_if->
++						   core_global_regs->glpmcfg);
++				lpmcfg.b.en_utmi_sleep = 0;
++				lpmcfg.b.hird_thres &= (~(1 << 4));
++				lpmcfg.b.prt_sleep_sts = 1;
++				DWC_WRITE_REG32(&core_if->
++						core_global_regs->glpmcfg,
++						lpmcfg.d32);
++
++				/* Clear Enbl_L1Gating bit. */
++				pcgcctl.b.enbl_sleep_gating = 1;
++				DWC_MODIFY_REG32(core_if->pcgcctl, pcgcctl.d32,
++						 0);
++
++				dwc_mdelay(5);
++
++				hprt0.d32 = dwc_otg_read_hprt0(core_if);
++				hprt0.b.prtres = 1;
++				DWC_WRITE_REG32(core_if->host_if->hprt0,
++						hprt0.d32);
++				/* This bit will be cleared in wakeup interrupt handle */
++				break;
++			}
++#endif
++		case UHF_PORT_POWER:
++			DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD HUB CONTROL - "
++				    "ClearPortFeature USB_PORT_FEAT_POWER\n");
++			hprt0.d32 = dwc_otg_read_hprt0(core_if);
++			hprt0.b.prtpwr = 0;
++			DWC_WRITE_REG32(core_if->host_if->hprt0, hprt0.d32);
++			break;
++		case UHF_PORT_INDICATOR:
++			DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD HUB CONTROL - "
++				    "ClearPortFeature USB_PORT_FEAT_INDICATOR\n");
++			/* Port indicator not supported */
++			break;
++		case UHF_C_PORT_CONNECTION:
++			/* Clears the driver's internal connect status change
++			 * flag */
++			DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD HUB CONTROL - "
++				    "ClearPortFeature USB_PORT_FEAT_C_CONNECTION\n");
++			dwc_otg_hcd->flags.b.port_connect_status_change = 0;
++			break;
++		case UHF_C_PORT_RESET:
++			/* Clears the driver's internal Port Reset Change
++			 * flag */
++			DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD HUB CONTROL - "
++				    "ClearPortFeature USB_PORT_FEAT_C_RESET\n");
++			dwc_otg_hcd->flags.b.port_reset_change = 0;
++			break;
++		case UHF_C_PORT_ENABLE:
++			/* Clears the driver's internal Port
++			 * Enable/Disable Change flag */
++			DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD HUB CONTROL - "
++				    "ClearPortFeature USB_PORT_FEAT_C_ENABLE\n");
++			dwc_otg_hcd->flags.b.port_enable_change = 0;
++			break;
++		case UHF_C_PORT_SUSPEND:
++			/* Clears the driver's internal Port Suspend
++			 * Change flag, which is set when resume signaling on
++			 * the host port is complete */
++			DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD HUB CONTROL - "
++				    "ClearPortFeature USB_PORT_FEAT_C_SUSPEND\n");
++			dwc_otg_hcd->flags.b.port_suspend_change = 0;
++			break;
++#ifdef CONFIG_USB_DWC_OTG_LPM
++		case UHF_C_PORT_L1:
++			dwc_otg_hcd->flags.b.port_l1_change = 0;
++			break;
++#endif
++		case UHF_C_PORT_OVER_CURRENT:
++			DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD HUB CONTROL - "
++				    "ClearPortFeature USB_PORT_FEAT_C_OVER_CURRENT\n");
++			dwc_otg_hcd->flags.b.port_over_current_change = 0;
++			break;
++		default:
++			retval = -DWC_E_INVALID;
++			DWC_ERROR("DWC OTG HCD - "
++				  "ClearPortFeature request %xh "
++				  "unknown or unsupported\n", wValue);
++		}
++		break;
++	case UCR_GET_HUB_DESCRIPTOR:
++		DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD HUB CONTROL - "
++			    "GetHubDescriptor\n");
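++		/* The root port is reported as a single-port hub; 0x29 is the USB hub descriptor type. */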
++		hub_desc = (usb_hub_descriptor_t *) buf;
++		hub_desc->bDescLength = 9;
++		hub_desc->bDescriptorType = 0x29;
++		hub_desc->bNbrPorts = 1;
++		USETW(hub_desc->wHubCharacteristics, 0x08);
++		hub_desc->bPwrOn2PwrGood = 1;
++		hub_desc->bHubContrCurrent = 0;
++		hub_desc->DeviceRemovable[0] = 0;
++		hub_desc->DeviceRemovable[1] = 0xff;
++		break;
++	case UCR_GET_HUB_STATUS:
++		DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD HUB CONTROL - "
++			    "GetHubStatus\n");
++		DWC_MEMSET(buf, 0, 4);
++		break;
++	case UCR_GET_PORT_STATUS:
++		DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD HUB CONTROL - "
++			    "GetPortStatus wIndex = 0x%04x FLAGS=0x%08x\n",
++			    wIndex, dwc_otg_hcd->flags.d32);
++		if (!wIndex || wIndex > 1)
++			goto error;
++
++		port_status = 0;
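++		/* The 32-bit result holds wPortStatus (low half) and wPortChange (high half); change bits are set first, current status is read from HPRT0 below. */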
++
++		if (dwc_otg_hcd->flags.b.port_connect_status_change)
++			port_status |= (1 << UHF_C_PORT_CONNECTION);
++
++		if (dwc_otg_hcd->flags.b.port_enable_change)
++			port_status |= (1 << UHF_C_PORT_ENABLE);
++
++		if (dwc_otg_hcd->flags.b.port_suspend_change)
++			port_status |= (1 << UHF_C_PORT_SUSPEND);
++
++		if (dwc_otg_hcd->flags.b.port_l1_change)
++			port_status |= (1 << UHF_C_PORT_L1);
++
++		if (dwc_otg_hcd->flags.b.port_reset_change) {
++			port_status |= (1 << UHF_C_PORT_RESET);
++		}
++
++		if (dwc_otg_hcd->flags.b.port_over_current_change) {
++			DWC_WARN("Overcurrent change detected\n");
++			port_status |= (1 << UHF_C_PORT_OVER_CURRENT);
++		}
++
++		if (!dwc_otg_hcd->flags.b.port_connect_status) {
++			/*
++			 * The port is disconnected, which means the core is
++			 * either in device mode or it soon will be. Just
++			 * return 0's for the remainder of the port status
++			 * since the port register can't be read if the core
++			 * is in device mode.
++			 */
++			*((__le32 *) buf) = dwc_cpu_to_le32(&port_status);
++			break;
++		}
++
++		hprt0.d32 = DWC_READ_REG32(core_if->host_if->hprt0);
++		DWC_DEBUGPL(DBG_HCDV, "  HPRT0: 0x%08x\n", hprt0.d32);
++
++		if (hprt0.b.prtconnsts)
++			port_status |= (1 << UHF_PORT_CONNECTION);
++
++		if (hprt0.b.prtena)
++			port_status |= (1 << UHF_PORT_ENABLE);
++
++		if (hprt0.b.prtsusp)
++			port_status |= (1 << UHF_PORT_SUSPEND);
++
++		if (hprt0.b.prtovrcurract)
++			port_status |= (1 << UHF_PORT_OVER_CURRENT);
++
++		if (hprt0.b.prtrst)
++			port_status |= (1 << UHF_PORT_RESET);
++
++		if (hprt0.b.prtpwr)
++			port_status |= (1 << UHF_PORT_POWER);
++
++		if (hprt0.b.prtspd == DWC_HPRT0_PRTSPD_HIGH_SPEED)
++			port_status |= (1 << UHF_PORT_HIGH_SPEED);
++		else if (hprt0.b.prtspd == DWC_HPRT0_PRTSPD_LOW_SPEED)
++			port_status |= (1 << UHF_PORT_LOW_SPEED);
++
++		if (hprt0.b.prttstctl)
++			port_status |= (1 << UHF_PORT_TEST);
++		if (dwc_otg_get_lpm_portsleepstatus(dwc_otg_hcd->core_if)) {
++			port_status |= (1 << UHF_PORT_L1);
++		}
++		/*
++		   For Synopsys HW emulation of Power down, wkup_control asserts
++		   hreset_n and prst_n on suspend. This causes HPRT0 to read as zero.
++		   We intentionally tell the software that the port is in the L2/Suspend state.
++		   Only for STE.
++		*/
++		if ((core_if->power_down == 2)
++		    && (core_if->hibernation_suspend == 1)) {
++			port_status |= (1 << UHF_PORT_SUSPEND);
++		}
++		/* USB_PORT_FEAT_INDICATOR unsupported always 0 */
++
++		*((__le32 *) buf) = dwc_cpu_to_le32(&port_status);
++
++		break;
++	case UCR_SET_HUB_FEATURE:
++		DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD HUB CONTROL - "
++			    "SetHubFeature\n");
++		/* No HUB features supported */
++		break;
++	case UCR_SET_PORT_FEATURE:
++		if (wValue != UHF_PORT_TEST && (!wIndex || wIndex > 1))
++			goto error;
++
++		if (!dwc_otg_hcd->flags.b.port_connect_status) {
++			/*
++			 * The port is disconnected, which means the core is
++			 * either in device mode or it soon will be. Just
++			 * return without doing anything since the port
++			 * register can't be written if the core is in device
++			 * mode.
++			 */
++			break;
++		}
++
++		switch (wValue) {
++		case UHF_PORT_SUSPEND:
++			DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD HUB CONTROL - "
++				    "SetPortFeature - USB_PORT_FEAT_SUSPEND\n");
++			if (dwc_otg_hcd_otg_port(dwc_otg_hcd) != wIndex) {
++				goto error;
++			}
++			if (core_if->power_down == 2) {
++				int timeout = 300;
++				dwc_irqflags_t flags;
++				pcgcctl_data_t pcgcctl = {.d32 = 0 };
++				gpwrdn_data_t gpwrdn = {.d32 = 0 };
++				gusbcfg_data_t gusbcfg = {.d32 = 0 };
++#ifdef DWC_DEV_SRPCAP
++				int32_t otg_cap_param = core_if->core_params->otg_cap;
++#endif
++				DWC_PRINTF("Preparing for complete power-off\n");
++
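++				/* Hibernation entry sequence: save core/host state, suspend the port, activate the PMU and stop the PHY clock, then clamp and switch off core power via GPWRDN. */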
++				/* Save registers before hibernation */
++				dwc_otg_save_global_regs(core_if);
++				dwc_otg_save_host_regs(core_if);
++
++				hprt0.d32 = dwc_otg_read_hprt0(core_if);
++				hprt0.b.prtsusp = 1;
++				hprt0.b.prtena = 0;
++				DWC_WRITE_REG32(core_if->host_if->hprt0, hprt0.d32);
++				/* Spin until hprt0.b.prtsusp becomes 1 */
++				do {
++					hprt0.d32 = dwc_otg_read_hprt0(core_if);
++					if (hprt0.b.prtsusp) {
++						break;
++					}
++					dwc_mdelay(1);
++				} while (--timeout);
++				if (!timeout) {
++					DWC_WARN("Suspend wasn't generated\n");
++				}
++				dwc_udelay(10);
++
++				/*
++				 * We need to disable interrupts to prevent any IRQ from being
++				 * serviced while entering hibernation
++				 */
++				DWC_SPINLOCK_IRQSAVE(dwc_otg_hcd->lock, &flags);
++				core_if->lx_state = DWC_OTG_L2;
++#ifdef DWC_DEV_SRPCAP
++				hprt0.d32 = dwc_otg_read_hprt0(core_if);
++				hprt0.b.prtpwr = 0;
++				hprt0.b.prtena = 0;
++				DWC_WRITE_REG32(core_if->host_if->hprt0,
++						hprt0.d32);
++#endif
++				gusbcfg.d32 =
++				    DWC_READ_REG32(&core_if->core_global_regs->
++						   gusbcfg);
++				if (gusbcfg.b.ulpi_utmi_sel == 1) {
++					/* ULPI interface */
++					/* Suspend the Phy Clock */
++					pcgcctl.d32 = 0;
++					pcgcctl.b.stoppclk = 1;
++					DWC_MODIFY_REG32(core_if->pcgcctl, 0,
++							 pcgcctl.d32);
++					dwc_udelay(10);
++					gpwrdn.b.pmuactv = 1;
++					DWC_MODIFY_REG32(&core_if->
++							 core_global_regs->
++							 gpwrdn, 0, gpwrdn.d32);
++				} else {
++					/* UTMI+ Interface */
++					gpwrdn.b.pmuactv = 1;
++					DWC_MODIFY_REG32(&core_if->
++							 core_global_regs->
++							 gpwrdn, 0, gpwrdn.d32);
++					dwc_udelay(10);
++					pcgcctl.b.stoppclk = 1;
++					DWC_MODIFY_REG32(core_if->pcgcctl, 0, pcgcctl.d32);
++					dwc_udelay(10);
++				}
++#ifdef DWC_DEV_SRPCAP
++				gpwrdn.d32 = 0;
++				gpwrdn.b.dis_vbus = 1;
++				DWC_MODIFY_REG32(&core_if->core_global_regs->
++						 gpwrdn, 0, gpwrdn.d32);
++#endif
++				gpwrdn.d32 = 0;
++				gpwrdn.b.pmuintsel = 1;
++				DWC_MODIFY_REG32(&core_if->core_global_regs->
++						 gpwrdn, 0, gpwrdn.d32);
++				dwc_udelay(10);
++
++				gpwrdn.d32 = 0;
++#ifdef DWC_DEV_SRPCAP
++				gpwrdn.b.srp_det_msk = 1;
++#endif
++				gpwrdn.b.disconn_det_msk = 1;
++				gpwrdn.b.lnstchng_msk = 1;
++				gpwrdn.b.sts_chngint_msk = 1;
++				DWC_MODIFY_REG32(&core_if->core_global_regs->
++						 gpwrdn, 0, gpwrdn.d32);
++				dwc_udelay(10);
++
++				/* Enable Power Down Clamp and all interrupts in GPWRDN */
++				gpwrdn.d32 = 0;
++				gpwrdn.b.pwrdnclmp = 1;
++				DWC_MODIFY_REG32(&core_if->core_global_regs->
++						 gpwrdn, 0, gpwrdn.d32);
++				dwc_udelay(10);
++
++				/* Switch off VDD */
++				gpwrdn.d32 = 0;
++				gpwrdn.b.pwrdnswtch = 1;
++				DWC_MODIFY_REG32(&core_if->core_global_regs->
++						 gpwrdn, 0, gpwrdn.d32);
++
++#ifdef DWC_DEV_SRPCAP
++				if (otg_cap_param == DWC_OTG_CAP_PARAM_HNP_SRP_CAPABLE)
++				{
++					core_if->pwron_timer_started = 1;
++					DWC_TIMER_SCHEDULE(core_if->pwron_timer, 6000 /* 6 secs */ );
++				}
++#endif
++				/* Save the gpwrdn register for later use by the stschng interrupt handler */
++				core_if->gr_backup->gpwrdn_local =
++						DWC_READ_REG32(&core_if->core_global_regs->gpwrdn);
++
++				/* Set flag to indicate that we are in hibernation */
++				core_if->hibernation_suspend = 1;
++				DWC_SPINUNLOCK_IRQRESTORE(dwc_otg_hcd->lock,flags);
++
++				DWC_PRINTF("Host hibernation completed\n");
++				// Exit from case statement
++				break;
++
++			}
++			if (dwc_otg_hcd_otg_port(dwc_otg_hcd) == wIndex &&
++			    dwc_otg_hcd->fops->get_b_hnp_enable(dwc_otg_hcd)) {
++				gotgctl_data_t gotgctl = {.d32 = 0 };
++				gotgctl.b.hstsethnpen = 1;
++				DWC_MODIFY_REG32(&core_if->core_global_regs->
++						 gotgctl, 0, gotgctl.d32);
++				core_if->op_state = A_SUSPEND;
++			}
++			hprt0.d32 = dwc_otg_read_hprt0(core_if);
++			hprt0.b.prtsusp = 1;
++			DWC_WRITE_REG32(core_if->host_if->hprt0, hprt0.d32);
++			{
++				dwc_irqflags_t flags;
++				/* Update lx_state */
++				DWC_SPINLOCK_IRQSAVE(dwc_otg_hcd->lock, &flags);
++				core_if->lx_state = DWC_OTG_L2;
++				DWC_SPINUNLOCK_IRQRESTORE(dwc_otg_hcd->lock, flags);
++			}
++			/* Suspend the Phy Clock */
++			{
++				pcgcctl_data_t pcgcctl = {.d32 = 0 };
++				pcgcctl.b.stoppclk = 1;
++				DWC_MODIFY_REG32(core_if->pcgcctl, 0,
++						 pcgcctl.d32);
++				dwc_udelay(10);
++			}
++
++			/* For HNP the bus must be suspended for at least 200ms. */
++			if (dwc_otg_hcd->fops->get_b_hnp_enable(dwc_otg_hcd)) {
++				pcgcctl_data_t pcgcctl = {.d32 = 0 };
++				pcgcctl.b.stoppclk = 1;
++				DWC_MODIFY_REG32(core_if->pcgcctl, pcgcctl.d32, 0);
++				dwc_mdelay(200);
++			}
++
++			/** @todo - check how sw can wait for 1 sec to check asesvld??? */
++#if 0 //vahrama !!!!!!!!!!!!!!!!!!
++			if (core_if->adp_enable) {
++				gotgctl_data_t gotgctl = {.d32 = 0 };
++				gpwrdn_data_t gpwrdn;
++
++				while (gotgctl.b.asesvld == 1) {
++					gotgctl.d32 =
++					    DWC_READ_REG32(&core_if->
++							   core_global_regs->
++							   gotgctl);
++					dwc_mdelay(100);
++				}
++
++				/* Enable Power Down Logic */
++				gpwrdn.d32 = 0;
++				gpwrdn.b.pmuactv = 1;
++				DWC_MODIFY_REG32(&core_if->core_global_regs->
++						 gpwrdn, 0, gpwrdn.d32);
++
++				/* Unmask SRP detected interrupt from Power Down Logic */
++				gpwrdn.d32 = 0;
++				gpwrdn.b.srp_det_msk = 1;
++				DWC_MODIFY_REG32(&core_if->core_global_regs->
++						 gpwrdn, 0, gpwrdn.d32);
++
++				dwc_otg_adp_probe_start(core_if);
++			}
++#endif
++			break;
++		case UHF_PORT_POWER:
++			DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD HUB CONTROL - "
++				    "SetPortFeature - USB_PORT_FEAT_POWER\n");
++			hprt0.d32 = dwc_otg_read_hprt0(core_if);
++			hprt0.b.prtpwr = 1;
++			DWC_WRITE_REG32(core_if->host_if->hprt0, hprt0.d32);
++			break;
++		case UHF_PORT_RESET:
++			if ((core_if->power_down == 2)
++			    && (core_if->hibernation_suspend == 1)) {
++				/* If we are going to exit from Hibernated
++				 * state via USB RESET.
++				 */
++				dwc_otg_host_hibernation_restore(core_if, 0, 1);
++			} else {
++				hprt0.d32 = dwc_otg_read_hprt0(core_if);
++
++				DWC_DEBUGPL(DBG_HCD,
++					    "DWC OTG HCD HUB CONTROL - "
++					    "SetPortFeature - USB_PORT_FEAT_RESET\n");
++				{
++					pcgcctl_data_t pcgcctl = {.d32 = 0 };
++					pcgcctl.b.enbl_sleep_gating = 1;
++					pcgcctl.b.stoppclk = 1;
++					DWC_MODIFY_REG32(core_if->pcgcctl, pcgcctl.d32, 0);
++					DWC_WRITE_REG32(core_if->pcgcctl, 0);
++				}
++#ifdef CONFIG_USB_DWC_OTG_LPM
++				{
++					glpmcfg_data_t lpmcfg;
++					lpmcfg.d32 =
++						DWC_READ_REG32(&core_if->core_global_regs->glpmcfg);
++					if (lpmcfg.b.prt_sleep_sts) {
++						lpmcfg.b.en_utmi_sleep = 0;
++						lpmcfg.b.hird_thres &= (~(1 << 4));
++						DWC_WRITE_REG32
++						    (&core_if->core_global_regs->glpmcfg,
++						     lpmcfg.d32);
++						dwc_mdelay(1);
++					}
++				}
++#endif
++				hprt0.d32 = dwc_otg_read_hprt0(core_if);
++				/* Clear suspend bit if resetting from suspended state. */
++				hprt0.b.prtsusp = 0;
++				/* When acting as B-Host, the port reset bit is set in
++				 * the Start HCD Callback function, so that
++				 * the reset is started within 1ms of the HNP
++				 * success interrupt. */
++				if (!dwc_otg_hcd_is_b_host(dwc_otg_hcd)) {
++					hprt0.b.prtpwr = 1;
++					hprt0.b.prtrst = 1;
++					DWC_PRINTF("Indeed it is in host mode hprt0 = %08x\n",hprt0.d32);
++					DWC_WRITE_REG32(core_if->host_if->hprt0,
++							hprt0.d32);
++				}
++				/* Clear reset bit in 10ms (FS/LS) or 50ms (HS) */
++				dwc_mdelay(60);
++				hprt0.b.prtrst = 0;
++				DWC_WRITE_REG32(core_if->host_if->hprt0, hprt0.d32);
++				core_if->lx_state = DWC_OTG_L0;	/* Now back to the on state */
++			}
++			break;
++#ifdef DWC_HS_ELECT_TST
++		case UHF_PORT_TEST:
++			{
++				uint32_t t;
++				gintmsk_data_t gintmsk;
++
++				t = (wIndex >> 8);	/* test selector is in the MSB of wIndex */
++				DWC_DEBUGPL(DBG_HCD,
++					    "DWC OTG HCD HUB CONTROL - "
++					    "SetPortFeature - USB_PORT_FEAT_TEST %d\n",
++					    t);
++				DWC_WARN("USB_PORT_FEAT_TEST %d\n", t);
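++				/* Test selectors below 6 are driven directly via HPRT0.PrtTstCtl; values 6-8 run the HS electrical test helper sequences below. */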
++				if (t < 6) {
++					hprt0.d32 = dwc_otg_read_hprt0(core_if);
++					hprt0.b.prttstctl = t;
++					DWC_WRITE_REG32(core_if->host_if->hprt0,
++							hprt0.d32);
++				} else {
++					/* Setup global vars with reg addresses (quick and
++					 * dirty hack, should be cleaned up)
++					 */
++					global_regs = core_if->core_global_regs;
++					hc_global_regs =
++					    core_if->host_if->host_global_regs;
++					hc_regs =
++					    (dwc_otg_hc_regs_t *) ((char *)
++								   global_regs +
++								   0x500);
++					data_fifo =
++					    (uint32_t *) ((char *)global_regs +
++							  0x1000);
++
++					if (t == 6) {	/* HS_HOST_PORT_SUSPEND_RESUME */
++						/* Save current interrupt mask */
++						gintmsk.d32 =
++						    DWC_READ_REG32
++						    (&global_regs->gintmsk);
++
++						/* Disable all interrupts while we muck with
++						 * the hardware directly
++						 */
++						DWC_WRITE_REG32(&global_regs->gintmsk, 0);
++
++						/* 15 second delay per the test spec */
++						dwc_mdelay(15000);
++
++						/* Drive suspend on the root port */
++						hprt0.d32 =
++						    dwc_otg_read_hprt0(core_if);
++						hprt0.b.prtsusp = 1;
++						hprt0.b.prtres = 0;
++						DWC_WRITE_REG32(core_if->host_if->hprt0, hprt0.d32);
++
++						/* 15 second delay per the test spec */
++						dwc_mdelay(15000);
++
++						/* Drive resume on the root port */
++						hprt0.d32 =
++						    dwc_otg_read_hprt0(core_if);
++						hprt0.b.prtsusp = 0;
++						hprt0.b.prtres = 1;
++						DWC_WRITE_REG32(core_if->host_if->hprt0, hprt0.d32);
++						dwc_mdelay(100);
++
++						/* Clear the resume bit */
++						hprt0.b.prtres = 0;
++						DWC_WRITE_REG32(core_if->host_if->hprt0, hprt0.d32);
++
++						/* Restore interrupts */
++						DWC_WRITE_REG32(&global_regs->gintmsk, gintmsk.d32);
++					} else if (t == 7) {	/* SINGLE_STEP_GET_DEVICE_DESCRIPTOR setup */
++						/* Save current interrupt mask */
++						gintmsk.d32 =
++						    DWC_READ_REG32
++						    (&global_regs->gintmsk);
++
++						/* Disable all interrupts while we muck with
++						 * the hardware directly
++						 */
++						DWC_WRITE_REG32(&global_regs->gintmsk, 0);
++
++						/* 15 second delay per the test spec */
++						dwc_mdelay(15000);
++
++						/* Send the Setup packet */
++						do_setup();
++
++						/* 15 second delay so nothing else happens for a while */
++						dwc_mdelay(15000);
++
++						/* Restore interrupts */
++						DWC_WRITE_REG32(&global_regs->gintmsk, gintmsk.d32);
++					} else if (t == 8) {	/* SINGLE_STEP_GET_DEVICE_DESCRIPTOR execute */
++						/* Save current interrupt mask */
++						gintmsk.d32 =
++						    DWC_READ_REG32
++						    (&global_regs->gintmsk);
++
++						/* Disable all interrupts while we muck with
++						 * the hardware directly
++						 */
++						DWC_WRITE_REG32(&global_regs->gintmsk, 0);
++
++						/* Send the Setup packet */
++						do_setup();
++
++						/* 15 second delay so nothing else happens for a while */
++						dwc_mdelay(15000);
++
++						/* Send the In and Ack packets */
++						do_in_ack();
++
++						/* 15 second delay so nothing else happens for a while */
++						dwc_mdelay(15000);
++
++						/* Restore interrupts */
++						DWC_WRITE_REG32(&global_regs->gintmsk, gintmsk.d32);
++					}
++				}
++				break;
++			}
++#endif /* DWC_HS_ELECT_TST */
++
++		case UHF_PORT_INDICATOR:
++			DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD HUB CONTROL - "
++				    "SetPortFeature - USB_PORT_FEAT_INDICATOR\n");
++			/* Not supported */
++			break;
++		default:
++			retval = -DWC_E_INVALID;
++			DWC_ERROR("DWC OTG HCD - "
++				  "SetPortFeature request %xh "
++				  "unknown or unsupported\n", wValue);
++			break;
++		}
++		break;
++#ifdef CONFIG_USB_DWC_OTG_LPM
++	case UCR_SET_AND_TEST_PORT_FEATURE:
++		if (wValue != UHF_PORT_L1) {
++			goto error;
++		}
++		{
++			int portnum, hird, devaddr, remwake;
++			glpmcfg_data_t lpmcfg;
++			uint32_t time_usecs;
++			gintsts_data_t gintsts;
++			gintmsk_data_t gintmsk;
++
++			if (!dwc_otg_get_param_lpm_enable(core_if)) {
++				goto error;
++			}
++			if (wValue != UHF_PORT_L1 || wLength != 1) {
++				goto error;
++			}
++			/* Check if the port currently is in SLEEP state */
++			lpmcfg.d32 =
++			    DWC_READ_REG32(&core_if->core_global_regs->glpmcfg);
++			if (lpmcfg.b.prt_sleep_sts) {
++				DWC_INFO("Port is already in sleep mode\n");
++				buf[0] = 0;	/* Return success */
++				break;
++			}
++
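++			/* wIndex encodes the port number (bits 3:0), HIRD (7:4), device address (14:8) and remote-wakeup enable (bit 15). */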
++			portnum = wIndex & 0xf;
++			hird = (wIndex >> 4) & 0xf;
++			devaddr = (wIndex >> 8) & 0x7f;
++			remwake = (wIndex >> 15);
++
++			if (portnum != 1) {
++				retval = -DWC_E_INVALID;
++				DWC_WARN
++				    ("Wrong port number(%d) in SetandTestPortFeature request\n",
++				     portnum);
++				break;
++			}
++
++			DWC_PRINTF
++			    ("SetandTestPortFeature request: portnum = %d, hird = %d, devaddr = %d, remwake = %d\n",
++			     portnum, hird, devaddr, remwake);
++			/* Disable LPM interrupt */
++			gintmsk.d32 = 0;
++			gintmsk.b.lpmtranrcvd = 1;
++			DWC_MODIFY_REG32(&core_if->core_global_regs->gintmsk,
++					 gintmsk.d32, 0);
++
++			if (dwc_otg_hcd_send_lpm
++			    (dwc_otg_hcd, devaddr, hird, remwake)) {
++				retval = -DWC_E_INVALID;
++				break;
++			}
++
++			time_usecs = 10 * (lpmcfg.b.retry_count + 1);
++			/* We will consider it a timeout if time_usecs microseconds pass
++			 * without receiving the LPM transaction status.
++			 * After receiving a non-error response (ACK/NYET/STALL) from the
++			 * device, the core will set the lpmtranrcvd bit.
++			 */
++			do {
++				gintsts.d32 =
++				    DWC_READ_REG32(&core_if->core_global_regs->gintsts);
++				if (gintsts.b.lpmtranrcvd) {
++					break;
++				}
++				dwc_udelay(1);
++			} while (--time_usecs);
++			/* lpm_int bit will be cleared in LPM interrupt handler */
++
++			/* Now fill in the status byte:
++			 * 0x0 - Success
++			 * 0x2 - NYET
++			 * 0x3 - Timeout
++			 */
++			if (!gintsts.b.lpmtranrcvd) {
++				buf[0] = 0x3;	/* Completion code is Timeout */
++				dwc_otg_hcd_free_hc_from_lpm(dwc_otg_hcd);
++			} else {
++				lpmcfg.d32 =
++				    DWC_READ_REG32(&core_if->core_global_regs->glpmcfg);
++				if (lpmcfg.b.lpm_resp == 0x3) {
++					/* ACK response from the device */
++					buf[0] = 0x00;	/* Success */
++				} else if (lpmcfg.b.lpm_resp == 0x2) {
++					/* NYET response from the device */
++					buf[0] = 0x2;
++				} else {
++					/* Otherwise report Timeout */
++					buf[0] = 0x3;
++				}
++			}
++			DWC_PRINTF("Device response to LPM transaction is %x\n",
++				   lpmcfg.b.lpm_resp);
++			DWC_MODIFY_REG32(&core_if->core_global_regs->gintmsk, 0,
++					 gintmsk.d32);
++
++			break;
++		}
++#endif /* CONFIG_USB_DWC_OTG_LPM */
++	default:
++error:
++		retval = -DWC_E_INVALID;
++		DWC_WARN("DWC OTG HCD - "
++			 "Unknown hub control request type or invalid typeReq: %xh wIndex: %xh wValue: %xh\n",
++			 typeReq, wIndex, wValue);
++		break;
++	}
++
++	return retval;
++}
++
++#ifdef CONFIG_USB_DWC_OTG_LPM
++/** Returns index of host channel to perform LPM transaction. */
++int dwc_otg_hcd_get_hc_for_lpm_tran(dwc_otg_hcd_t * hcd, uint8_t devaddr)
++{
++	dwc_otg_core_if_t *core_if = hcd->core_if;
++	dwc_hc_t *hc;
++	hcchar_data_t hcchar = {.d32 = 0 };
++	gintmsk_data_t gintmsk = {.d32 = 0 };
++
++	if (DWC_CIRCLEQ_EMPTY(&hcd->free_hc_list)) {
++		DWC_PRINTF("No free channel to select for LPM transaction\n");
++		return -1;
++	}
++
++	hc = DWC_CIRCLEQ_FIRST(&hcd->free_hc_list);
++
++	/* Mask host channel interrupts. */
++	gintmsk.b.hcintr = 1;
++	DWC_MODIFY_REG32(&core_if->core_global_regs->gintmsk, gintmsk.d32, 0);
++
++	/* Fill fields that core needs for LPM transaction */
++	hcchar.b.devaddr = devaddr;
++	hcchar.b.epnum = 0;
++	hcchar.b.eptype = DWC_OTG_EP_TYPE_CONTROL;
++	hcchar.b.mps = 64;
++	hcchar.b.lspddev = (hc->speed == DWC_OTG_EP_SPEED_LOW);
++	hcchar.b.epdir = 0;	/* OUT */
++	DWC_WRITE_REG32(&core_if->host_if->hc_regs[hc->hc_num]->hcchar,
++			hcchar.d32);
++
++	/* Remove the host channel from the free list. */
++	DWC_CIRCLEQ_REMOVE_INIT(&hcd->free_hc_list, hc, hc_list_entry);
++
++	DWC_PRINTF("hcnum = %d devaddr = %d\n", hc->hc_num, devaddr);
++
++	return hc->hc_num;
++}
++
++/** Release hc after performing LPM transaction */
++void dwc_otg_hcd_free_hc_from_lpm(dwc_otg_hcd_t * hcd)
++{
++	dwc_hc_t *hc;
++	glpmcfg_data_t lpmcfg;
++	uint8_t hc_num;
++
++	lpmcfg.d32 = DWC_READ_REG32(&hcd->core_if->core_global_regs->glpmcfg);
++	hc_num = lpmcfg.b.lpm_chan_index;
++
++	hc = hcd->hc_ptr_array[hc_num];
++
++	DWC_PRINTF("Freeing channel %d after LPM\n", hc_num);
++	/* Return host channel to free list */
++	DWC_CIRCLEQ_INSERT_TAIL(&hcd->free_hc_list, hc, hc_list_entry);
++}
++
++int dwc_otg_hcd_send_lpm(dwc_otg_hcd_t * hcd, uint8_t devaddr, uint8_t hird,
++			 uint8_t bRemoteWake)
++{
++	glpmcfg_data_t lpmcfg;
++	pcgcctl_data_t pcgcctl = {.d32 = 0 };
++	int channel;
++
++	channel = dwc_otg_hcd_get_hc_for_lpm_tran(hcd, devaddr);
++	if (channel < 0) {
++		return channel;
++	}
++
++	pcgcctl.b.enbl_sleep_gating = 1;
++	DWC_MODIFY_REG32(hcd->core_if->pcgcctl, 0, pcgcctl.d32);
++
++	/* Read LPM config register */
++	lpmcfg.d32 = DWC_READ_REG32(&hcd->core_if->core_global_regs->glpmcfg);
++
++	/* Program LPM transaction fields */
++	lpmcfg.b.rem_wkup_en = bRemoteWake;
++	lpmcfg.b.hird = hird;
++	lpmcfg.b.hird_thres = 0x1c;
++	lpmcfg.b.lpm_chan_index = channel;
++	lpmcfg.b.en_utmi_sleep = 1;
++	/* Program LPM config register */
++	DWC_WRITE_REG32(&hcd->core_if->core_global_regs->glpmcfg, lpmcfg.d32);
++
++	/* Send LPM transaction */
++	lpmcfg.b.send_lpm = 1;
++	DWC_WRITE_REG32(&hcd->core_if->core_global_regs->glpmcfg, lpmcfg.d32);
++
++	return 0;
++}
++
++#endif /* CONFIG_USB_DWC_OTG_LPM */
++
++int dwc_otg_hcd_is_status_changed(dwc_otg_hcd_t * hcd, int port)
++{
++	int retval;
++
++	if (port != 1) {
++		return -DWC_E_INVALID;
++	}
++
++	retval = (hcd->flags.b.port_connect_status_change ||
++		  hcd->flags.b.port_reset_change ||
++		  hcd->flags.b.port_enable_change ||
++		  hcd->flags.b.port_suspend_change ||
++		  hcd->flags.b.port_over_current_change);
++#ifdef DEBUG
++	if (retval) {
++		DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD HUB STATUS DATA:"
++			    " Root port status changed\n");
++		DWC_DEBUGPL(DBG_HCDV, "  port_connect_status_change: %d\n",
++			    hcd->flags.b.port_connect_status_change);
++		DWC_DEBUGPL(DBG_HCDV, "  port_reset_change: %d\n",
++			    hcd->flags.b.port_reset_change);
++		DWC_DEBUGPL(DBG_HCDV, "  port_enable_change: %d\n",
++			    hcd->flags.b.port_enable_change);
++		DWC_DEBUGPL(DBG_HCDV, "  port_suspend_change: %d\n",
++			    hcd->flags.b.port_suspend_change);
++		DWC_DEBUGPL(DBG_HCDV, "  port_over_current_change: %d\n",
++			    hcd->flags.b.port_over_current_change);
++	}
++#endif
++	return retval;
++}
++
++int dwc_otg_hcd_get_frame_number(dwc_otg_hcd_t * dwc_otg_hcd)
++{
++	hfnum_data_t hfnum;
++	hfnum.d32 =
++	    DWC_READ_REG32(&dwc_otg_hcd->core_if->host_if->host_global_regs->
++			   hfnum);
++
++#ifdef DEBUG_SOF
++	DWC_DEBUGPL(DBG_HCDV, "DWC OTG HCD GET FRAME NUMBER %d\n",
++		    hfnum.b.frnum);
++#endif
++	return hfnum.b.frnum;
++}
++
++int dwc_otg_hcd_start(dwc_otg_hcd_t * hcd,
++		      struct dwc_otg_hcd_function_ops *fops)
++{
++	int retval = 0;
++
++	hcd->fops = fops;
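++	/* Only (re)initialize host state when the core is in host mode and ADP probing, if enabled, has already been started. */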
++	if (!dwc_otg_is_device_mode(hcd->core_if) &&
++		(!hcd->core_if->adp_enable || hcd->core_if->adp.adp_started)) {
++		dwc_otg_hcd_reinit(hcd);
++	} else {
++		retval = -DWC_E_NO_DEVICE;
++	}
++
++	return retval;
++}
++
++void *dwc_otg_hcd_get_priv_data(dwc_otg_hcd_t * hcd)
++{
++	return hcd->priv;
++}
++
++void dwc_otg_hcd_set_priv_data(dwc_otg_hcd_t * hcd, void *priv_data)
++{
++	hcd->priv = priv_data;
++}
++
++uint32_t dwc_otg_hcd_otg_port(dwc_otg_hcd_t * hcd)
++{
++	return hcd->otg_port;
++}
++
++uint32_t dwc_otg_hcd_is_b_host(dwc_otg_hcd_t * hcd)
++{
++	uint32_t is_b_host;
++	if (hcd->core_if->op_state == B_HOST) {
++		is_b_host = 1;
++	} else {
++		is_b_host = 0;
++	}
++
++	return is_b_host;
++}
++
++dwc_otg_hcd_urb_t *dwc_otg_hcd_urb_alloc(dwc_otg_hcd_t * hcd,
++					 int iso_desc_count, int atomic_alloc)
++{
++	dwc_otg_hcd_urb_t *dwc_otg_urb;
++	uint32_t size;
++
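++	/* Allocate the URB together with iso_desc_count trailing iso_descs[] entries (flexible array at the end of struct dwc_otg_hcd_urb). */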
++	size =
++	    sizeof(*dwc_otg_urb) +
++	    iso_desc_count * sizeof(struct dwc_otg_hcd_iso_packet_desc);
++	if (atomic_alloc)
++		dwc_otg_urb = DWC_ALLOC_ATOMIC(size);
++	else
++		dwc_otg_urb = DWC_ALLOC(size);
++
++	if (dwc_otg_urb)
++		dwc_otg_urb->packet_count = iso_desc_count;
++	else {
++		DWC_ERROR("**** DWC OTG HCD URB alloc - "
++			"%salloc of %db failed\n",
++			atomic_alloc?"atomic ":"", size);
++	}
++	return dwc_otg_urb;
++}
++
++void dwc_otg_hcd_urb_set_pipeinfo(dwc_otg_hcd_urb_t * dwc_otg_urb,
++				  uint8_t dev_addr, uint8_t ep_num,
++				  uint8_t ep_type, uint8_t ep_dir, uint16_t mps)
++{
++	dwc_otg_hcd_fill_pipe(&dwc_otg_urb->pipe_info, dev_addr, ep_num,
++			      ep_type, ep_dir, mps);
++#if 0
++	DWC_PRINTF
++	    ("addr = %d, ep_num = %d, ep_dir = 0x%x, ep_type = 0x%x, mps = %d\n",
++	     dev_addr, ep_num, ep_dir, ep_type, mps);
++#endif
++}
++
++void dwc_otg_hcd_urb_set_params(dwc_otg_hcd_urb_t * dwc_otg_urb,
++				void *urb_handle, void *buf, dwc_dma_t dma,
++				uint32_t buflen, void *setup_packet,
++				dwc_dma_t setup_dma, uint32_t flags,
++				uint16_t interval)
++{
++	dwc_otg_urb->priv = urb_handle;
++	dwc_otg_urb->buf = buf;
++	dwc_otg_urb->dma = dma;
++	dwc_otg_urb->length = buflen;
++	dwc_otg_urb->setup_packet = setup_packet;
++	dwc_otg_urb->setup_dma = setup_dma;
++	dwc_otg_urb->flags = flags;
++	dwc_otg_urb->interval = interval;
++	dwc_otg_urb->status = -DWC_E_IN_PROGRESS;
++}
++
++uint32_t dwc_otg_hcd_urb_get_status(dwc_otg_hcd_urb_t * dwc_otg_urb)
++{
++	return dwc_otg_urb->status;
++}
++
++uint32_t dwc_otg_hcd_urb_get_actual_length(dwc_otg_hcd_urb_t * dwc_otg_urb)
++{
++	return dwc_otg_urb->actual_length;
++}
++
++uint32_t dwc_otg_hcd_urb_get_error_count(dwc_otg_hcd_urb_t * dwc_otg_urb)
++{
++	return dwc_otg_urb->error_count;
++}
++
++void dwc_otg_hcd_urb_set_iso_desc_params(dwc_otg_hcd_urb_t * dwc_otg_urb,
++					 int desc_num, uint32_t offset,
++					 uint32_t length)
++{
++	dwc_otg_urb->iso_descs[desc_num].offset = offset;
++	dwc_otg_urb->iso_descs[desc_num].length = length;
++}
++
++uint32_t dwc_otg_hcd_urb_get_iso_desc_status(dwc_otg_hcd_urb_t * dwc_otg_urb,
++					     int desc_num)
++{
++	return dwc_otg_urb->iso_descs[desc_num].status;
++}
++
++uint32_t dwc_otg_hcd_urb_get_iso_desc_actual_length(dwc_otg_hcd_urb_t *
++						    dwc_otg_urb, int desc_num)
++{
++	return dwc_otg_urb->iso_descs[desc_num].actual_length;
++}
++
++int dwc_otg_hcd_is_bandwidth_allocated(dwc_otg_hcd_t * hcd, void *ep_handle)
++{
++	int allocated = 0;
++	dwc_otg_qh_t *qh = (dwc_otg_qh_t *) ep_handle;
++
++	if (qh) {
++		if (!DWC_LIST_EMPTY(&qh->qh_list_entry)) {
++			allocated = 1;
++		}
++	}
++	return allocated;
++}
++
++int dwc_otg_hcd_is_bandwidth_freed(dwc_otg_hcd_t * hcd, void *ep_handle)
++{
++	dwc_otg_qh_t *qh = (dwc_otg_qh_t *) ep_handle;
++	int freed = 0;
++	DWC_ASSERT(qh, "qh is not allocated\n");
++
++	if (DWC_LIST_EMPTY(&qh->qh_list_entry)) {
++		freed = 1;
++	}
++
++	return freed;
++}
++
++uint8_t dwc_otg_hcd_get_ep_bandwidth(dwc_otg_hcd_t * hcd, void *ep_handle)
++{
++	dwc_otg_qh_t *qh = (dwc_otg_qh_t *) ep_handle;
++	DWC_ASSERT(qh, "qh is not allocated\n");
++	return qh->usecs;
++}
++
++void dwc_otg_hcd_dump_state(dwc_otg_hcd_t * hcd)
++{
++#ifdef DEBUG
++	int num_channels;
++	int i;
++	gnptxsts_data_t np_tx_status;
++	hptxsts_data_t p_tx_status;
++
++	num_channels = hcd->core_if->core_params->host_channels;
++	DWC_PRINTF("\n");
++	DWC_PRINTF
++	    ("************************************************************\n");
++	DWC_PRINTF("HCD State:\n");
++	DWC_PRINTF("  Num channels: %d\n", num_channels);
++	for (i = 0; i < num_channels; i++) {
++		dwc_hc_t *hc = hcd->hc_ptr_array[i];
++		DWC_PRINTF("  Channel %d:\n", i);
++		DWC_PRINTF("    dev_addr: %d, ep_num: %d, ep_is_in: %d\n",
++			   hc->dev_addr, hc->ep_num, hc->ep_is_in);
++		DWC_PRINTF("    speed: %d\n", hc->speed);
++		DWC_PRINTF("    ep_type: %d\n", hc->ep_type);
++		DWC_PRINTF("    max_packet: %d\n", hc->max_packet);
++		DWC_PRINTF("    data_pid_start: %d\n", hc->data_pid_start);
++		DWC_PRINTF("    multi_count: %d\n", hc->multi_count);
++		DWC_PRINTF("    xfer_started: %d\n", hc->xfer_started);
++		DWC_PRINTF("    xfer_buff: %p\n", hc->xfer_buff);
++		DWC_PRINTF("    xfer_len: %d\n", hc->xfer_len);
++		DWC_PRINTF("    xfer_count: %d\n", hc->xfer_count);
++		DWC_PRINTF("    halt_on_queue: %d\n", hc->halt_on_queue);
++		DWC_PRINTF("    halt_pending: %d\n", hc->halt_pending);
++		DWC_PRINTF("    halt_status: %d\n", hc->halt_status);
++		DWC_PRINTF("    do_split: %d\n", hc->do_split);
++		DWC_PRINTF("    complete_split: %d\n", hc->complete_split);
++		DWC_PRINTF("    hub_addr: %d\n", hc->hub_addr);
++		DWC_PRINTF("    port_addr: %d\n", hc->port_addr);
++		DWC_PRINTF("    xact_pos: %d\n", hc->xact_pos);
++		DWC_PRINTF("    requests: %d\n", hc->requests);
++		DWC_PRINTF("    qh: %p\n", hc->qh);
++		if (hc->xfer_started) {
++			hfnum_data_t hfnum;
++			hcchar_data_t hcchar;
++			hctsiz_data_t hctsiz;
++			hcint_data_t hcint;
++			hcintmsk_data_t hcintmsk;
++			hfnum.d32 =
++			    DWC_READ_REG32(&hcd->core_if->
++					   host_if->host_global_regs->hfnum);
++			hcchar.d32 =
++			    DWC_READ_REG32(&hcd->core_if->host_if->
++					   hc_regs[i]->hcchar);
++			hctsiz.d32 =
++			    DWC_READ_REG32(&hcd->core_if->host_if->
++					   hc_regs[i]->hctsiz);
++			hcint.d32 =
++			    DWC_READ_REG32(&hcd->core_if->host_if->
++					   hc_regs[i]->hcint);
++			hcintmsk.d32 =
++			    DWC_READ_REG32(&hcd->core_if->host_if->
++					   hc_regs[i]->hcintmsk);
++			DWC_PRINTF("    hfnum: 0x%08x\n", hfnum.d32);
++			DWC_PRINTF("    hcchar: 0x%08x\n", hcchar.d32);
++			DWC_PRINTF("    hctsiz: 0x%08x\n", hctsiz.d32);
++			DWC_PRINTF("    hcint: 0x%08x\n", hcint.d32);
++			DWC_PRINTF("    hcintmsk: 0x%08x\n", hcintmsk.d32);
++		}
++		if (hc->xfer_started && hc->qh) {
++			dwc_otg_qtd_t *qtd;
++			dwc_otg_hcd_urb_t *urb;
++
++			DWC_CIRCLEQ_FOREACH(qtd, &hc->qh->qtd_list, qtd_list_entry) {
++				if (!qtd->in_process)
++					break;
++
++				urb = qtd->urb;
++			DWC_PRINTF("    URB Info:\n");
++			DWC_PRINTF("      qtd: %p, urb: %p\n", qtd, urb);
++			if (urb) {
++				DWC_PRINTF("      Dev: %d, EP: %d %s\n",
++					   dwc_otg_hcd_get_dev_addr(&urb->
++								    pipe_info),
++					   dwc_otg_hcd_get_ep_num(&urb->
++								  pipe_info),
++					   dwc_otg_hcd_is_pipe_in(&urb->
++								  pipe_info) ?
++					   "IN" : "OUT");
++				DWC_PRINTF("      Max packet size: %d\n",
++					   dwc_otg_hcd_get_mps(&urb->
++							       pipe_info));
++				DWC_PRINTF("      transfer_buffer: %p\n",
++					   urb->buf);
++				DWC_PRINTF("      transfer_dma: %p\n",
++					   (void *)urb->dma);
++				DWC_PRINTF("      transfer_buffer_length: %d\n",
++					   urb->length);
++					DWC_PRINTF("      actual_length: %d\n",
++						   urb->actual_length);
++				}
++			}
++		}
++	}
++	DWC_PRINTF("  non_periodic_channels: %d\n", hcd->non_periodic_channels);
++	DWC_PRINTF("  periodic_channels: %d\n", hcd->periodic_channels);
++	DWC_PRINTF("  periodic_usecs: %d\n", hcd->periodic_usecs);
++	np_tx_status.d32 =
++	    DWC_READ_REG32(&hcd->core_if->core_global_regs->gnptxsts);
++	DWC_PRINTF("  NP Tx Req Queue Space Avail: %d\n",
++		   np_tx_status.b.nptxqspcavail);
++	DWC_PRINTF("  NP Tx FIFO Space Avail: %d\n",
++		   np_tx_status.b.nptxfspcavail);
++	p_tx_status.d32 =
++	    DWC_READ_REG32(&hcd->core_if->host_if->host_global_regs->hptxsts);
++	DWC_PRINTF("  P Tx Req Queue Space Avail: %d\n",
++		   p_tx_status.b.ptxqspcavail);
++	DWC_PRINTF("  P Tx FIFO Space Avail: %d\n", p_tx_status.b.ptxfspcavail);
++	dwc_otg_hcd_dump_frrem(hcd);
++	dwc_otg_dump_global_registers(hcd->core_if);
++	dwc_otg_dump_host_registers(hcd->core_if);
++	DWC_PRINTF
++	    ("************************************************************\n");
++	DWC_PRINTF("\n");
++#endif
++}
++
++#ifdef DEBUG
++void dwc_print_setup_data(uint8_t * setup)
++{
++	int i;
++	if (CHK_DEBUG_LEVEL(DBG_HCD)) {
++		DWC_PRINTF("Setup Data = MSB ");
++		for (i = 7; i >= 0; i--)
++			DWC_PRINTF("%02x ", setup[i]);
++		DWC_PRINTF("\n");
++		DWC_PRINTF("  bmRequestType Transfer = %s\n",
++			   (setup[0] & 0x80) ? "Device-to-Host" :
++			   "Host-to-Device");
++		DWC_PRINTF("  bmRequestType Type = ");
++		switch ((setup[0] & 0x60) >> 5) {
++		case 0:
++			DWC_PRINTF("Standard\n");
++			break;
++		case 1:
++			DWC_PRINTF("Class\n");
++			break;
++		case 2:
++			DWC_PRINTF("Vendor\n");
++			break;
++		case 3:
++			DWC_PRINTF("Reserved\n");
++			break;
++		}
++		DWC_PRINTF("  bmRequestType Recipient = ");
++		switch (setup[0] & 0x1f) {
++		case 0:
++			DWC_PRINTF("Device\n");
++			break;
++		case 1:
++			DWC_PRINTF("Interface\n");
++			break;
++		case 2:
++			DWC_PRINTF("Endpoint\n");
++			break;
++		case 3:
++			DWC_PRINTF("Other\n");
++			break;
++		default:
++			DWC_PRINTF("Reserved\n");
++			break;
++		}
++		DWC_PRINTF("  bRequest = 0x%0x\n", setup[1]);
++		DWC_PRINTF("  wValue = 0x%0x\n", *((uint16_t *) & setup[2]));
++		DWC_PRINTF("  wIndex = 0x%0x\n", *((uint16_t *) & setup[4]));
++		DWC_PRINTF("  wLength = 0x%0x\n\n", *((uint16_t *) & setup[6]));
++	}
++}
++#endif
++
++void dwc_otg_hcd_dump_frrem(dwc_otg_hcd_t * hcd)
++{
++#if 0
++	DWC_PRINTF("Frame remaining at SOF:\n");
++	DWC_PRINTF("  samples %u, accum %llu, avg %llu\n",
++		   hcd->frrem_samples, hcd->frrem_accum,
++		   (hcd->frrem_samples > 0) ?
++		   hcd->frrem_accum / hcd->frrem_samples : 0);
++
++	DWC_PRINTF("\n");
++	DWC_PRINTF("Frame remaining at start_transfer (uframe 7):\n");
++	DWC_PRINTF("  samples %u, accum %llu, avg %llu\n",
++		   hcd->core_if->hfnum_7_samples,
++		   hcd->core_if->hfnum_7_frrem_accum,
++		   (hcd->core_if->hfnum_7_samples >
++		    0) ? hcd->core_if->hfnum_7_frrem_accum /
++		   hcd->core_if->hfnum_7_samples : 0);
++	DWC_PRINTF("Frame remaining at start_transfer (uframe 0):\n");
++	DWC_PRINTF("  samples %u, accum %llu, avg %llu\n",
++		   hcd->core_if->hfnum_0_samples,
++		   hcd->core_if->hfnum_0_frrem_accum,
++		   (hcd->core_if->hfnum_0_samples >
++		    0) ? hcd->core_if->hfnum_0_frrem_accum /
++		   hcd->core_if->hfnum_0_samples : 0);
++	DWC_PRINTF("Frame remaining at start_transfer (uframe 1-6):\n");
++	DWC_PRINTF("  samples %u, accum %llu, avg %llu\n",
++		   hcd->core_if->hfnum_other_samples,
++		   hcd->core_if->hfnum_other_frrem_accum,
++		   (hcd->core_if->hfnum_other_samples >
++		    0) ? hcd->core_if->hfnum_other_frrem_accum /
++		   hcd->core_if->hfnum_other_samples : 0);
++
++	DWC_PRINTF("\n");
++	DWC_PRINTF("Frame remaining at sample point A (uframe 7):\n");
++	DWC_PRINTF("  samples %u, accum %llu, avg %llu\n",
++		   hcd->hfnum_7_samples_a, hcd->hfnum_7_frrem_accum_a,
++		   (hcd->hfnum_7_samples_a > 0) ?
++		   hcd->hfnum_7_frrem_accum_a / hcd->hfnum_7_samples_a : 0);
++	DWC_PRINTF("Frame remaining at sample point A (uframe 0):\n");
++	DWC_PRINTF("  samples %u, accum %llu, avg %llu\n",
++		   hcd->hfnum_0_samples_a, hcd->hfnum_0_frrem_accum_a,
++		   (hcd->hfnum_0_samples_a > 0) ?
++		   hcd->hfnum_0_frrem_accum_a / hcd->hfnum_0_samples_a : 0);
++	DWC_PRINTF("Frame remaining at sample point A (uframe 1-6):\n");
++	DWC_PRINTF("  samples %u, accum %llu, avg %llu\n",
++		   hcd->hfnum_other_samples_a, hcd->hfnum_other_frrem_accum_a,
++		   (hcd->hfnum_other_samples_a > 0) ?
++		   hcd->hfnum_other_frrem_accum_a /
++		   hcd->hfnum_other_samples_a : 0);
++
++	DWC_PRINTF("\n");
++	DWC_PRINTF("Frame remaining at sample point B (uframe 7):\n");
++	DWC_PRINTF("  samples %u, accum %llu, avg %llu\n",
++		   hcd->hfnum_7_samples_b, hcd->hfnum_7_frrem_accum_b,
++		   (hcd->hfnum_7_samples_b > 0) ?
++		   hcd->hfnum_7_frrem_accum_b / hcd->hfnum_7_samples_b : 0);
++	DWC_PRINTF("Frame remaining at sample point B (uframe 0):\n");
++	DWC_PRINTF("  samples %u, accum %llu, avg %llu\n",
++		   hcd->hfnum_0_samples_b, hcd->hfnum_0_frrem_accum_b,
++		   (hcd->hfnum_0_samples_b > 0) ?
++		   hcd->hfnum_0_frrem_accum_b / hcd->hfnum_0_samples_b : 0);
++	DWC_PRINTF("Frame remaining at sample point B (uframe 1-6):\n");
++	DWC_PRINTF("  samples %u, accum %llu, avg %llu\n",
++		   hcd->hfnum_other_samples_b, hcd->hfnum_other_frrem_accum_b,
++		   (hcd->hfnum_other_samples_b > 0) ?
++		   hcd->hfnum_other_frrem_accum_b /
++		   hcd->hfnum_other_samples_b : 0);
++#endif
++}
++
++#endif /* DWC_DEVICE_ONLY */
+--- /dev/null
++++ b/drivers/usb/host/dwc_otg/dwc_otg_hcd.h
+@@ -0,0 +1,862 @@
++/* ==========================================================================
++ * $File: //dwh/usb_iip/dev/software/otg/linux/drivers/dwc_otg_hcd.h $
++ * $Revision: #58 $
++ * $Date: 2011/09/15 $
++ * $Change: 1846647 $
++ *
++ * Synopsys HS OTG Linux Software Driver and documentation (hereinafter,
++ * "Software") is an Unsupported proprietary work of Synopsys, Inc. unless
++ * otherwise expressly agreed to in writing between Synopsys and you.
++ *
++ * The Software IS NOT an item of Licensed Software or Licensed Product under
++ * any End User Software License Agreement or Agreement for Licensed Product
++ * with Synopsys or any supplement thereto. You are permitted to use and
++ * redistribute this Software in source and binary forms, with or without
++ * modification, provided that redistributions of source code must retain this
++ * notice. You may not view, use, disclose, copy or distribute this file or
++ * any information contained herein except pursuant to this license grant from
++ * Synopsys. If you do not agree with this notice, including the disclaimer
++ * below, then you are not authorized to use the Software.
++ *
++ * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS" BASIS
++ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
++ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
++ * ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS BE LIABLE FOR ANY DIRECT,
++ * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
++ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
++ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
++ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
++ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
++ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
++ * DAMAGE.
++ * ========================================================================== */
++#ifndef DWC_DEVICE_ONLY
++#ifndef __DWC_HCD_H__
++#define __DWC_HCD_H__
++
++#include "dwc_otg_os_dep.h"
++#include "usb.h"
++#include "dwc_otg_hcd_if.h"
++#include "dwc_otg_core_if.h"
++#include "dwc_list.h"
++#include "dwc_otg_cil.h"
++#include "dwc_otg_fiq_fsm.h"
++
++
++/**
++ * @file
++ *
++ * This file contains the structures, constants, and interfaces for
++ * the Host Controller Driver (HCD).
++ *
++ * The Host Controller Driver (HCD) is responsible for translating requests
++ * from the USB Driver into the appropriate actions on the DWC_otg controller.
++ * It isolates the USBD from the specifics of the controller by providing an
++ * API to the USBD.
++ */
++
++struct dwc_otg_hcd_pipe_info {
++	uint8_t dev_addr;
++	uint8_t ep_num;
++	uint8_t pipe_type;
++	uint8_t pipe_dir;
++	uint16_t mps;
++};
++
++struct dwc_otg_hcd_iso_packet_desc {
++	uint32_t offset;
++	uint32_t length;
++	uint32_t actual_length;
++	uint32_t status;
++};
++
++struct dwc_otg_qtd;
++
++struct dwc_otg_hcd_urb {
++	void *priv;
++	struct dwc_otg_qtd *qtd;
++	void *buf;
++	dwc_dma_t dma;
++	void *setup_packet;
++	dwc_dma_t setup_dma;
++	uint32_t length;
++	uint32_t actual_length;
++	uint32_t status;
++	uint32_t error_count;
++	uint32_t packet_count;
++	uint32_t flags;
++	uint16_t interval;
++	struct dwc_otg_hcd_pipe_info pipe_info;
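++	/** ISO packet descriptors; sized at allocation time by dwc_otg_hcd_urb_alloc(). */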
++	struct dwc_otg_hcd_iso_packet_desc iso_descs[0];
++};
++
++static inline uint8_t dwc_otg_hcd_get_ep_num(struct dwc_otg_hcd_pipe_info *pipe)
++{
++	return pipe->ep_num;
++}
++
++static inline uint8_t dwc_otg_hcd_get_pipe_type(struct dwc_otg_hcd_pipe_info
++						*pipe)
++{
++	return pipe->pipe_type;
++}
++
++static inline uint16_t dwc_otg_hcd_get_mps(struct dwc_otg_hcd_pipe_info *pipe)
++{
++	return pipe->mps;
++}
++
++static inline uint8_t dwc_otg_hcd_get_dev_addr(struct dwc_otg_hcd_pipe_info
++					       *pipe)
++{
++	return pipe->dev_addr;
++}
++
++static inline uint8_t dwc_otg_hcd_is_pipe_isoc(struct dwc_otg_hcd_pipe_info
++					       *pipe)
++{
++	return (pipe->pipe_type == UE_ISOCHRONOUS);
++}
++
++static inline uint8_t dwc_otg_hcd_is_pipe_int(struct dwc_otg_hcd_pipe_info
++					      *pipe)
++{
++	return (pipe->pipe_type == UE_INTERRUPT);
++}
++
++static inline uint8_t dwc_otg_hcd_is_pipe_bulk(struct dwc_otg_hcd_pipe_info
++					       *pipe)
++{
++	return (pipe->pipe_type == UE_BULK);
++}
++
++static inline uint8_t dwc_otg_hcd_is_pipe_control(struct dwc_otg_hcd_pipe_info
++						  *pipe)
++{
++	return (pipe->pipe_type == UE_CONTROL);
++}
++
++static inline uint8_t dwc_otg_hcd_is_pipe_in(struct dwc_otg_hcd_pipe_info *pipe)
++{
++	return (pipe->pipe_dir == UE_DIR_IN);
++}
++
++static inline uint8_t dwc_otg_hcd_is_pipe_out(struct dwc_otg_hcd_pipe_info
++					      *pipe)
++{
++	return (!dwc_otg_hcd_is_pipe_in(pipe));
++}
++
++static inline void dwc_otg_hcd_fill_pipe(struct dwc_otg_hcd_pipe_info *pipe,
++					 uint8_t devaddr, uint8_t ep_num,
++					 uint8_t pipe_type, uint8_t pipe_dir,
++					 uint16_t mps)
++{
++	pipe->dev_addr = devaddr;
++	pipe->ep_num = ep_num;
++	pipe->pipe_type = pipe_type;
++	pipe->pipe_dir = pipe_dir;
++	pipe->mps = mps;
++}
++
++/**
++ * Phases for control transfers.
++ */
++typedef enum dwc_otg_control_phase {
++	DWC_OTG_CONTROL_SETUP,
++	DWC_OTG_CONTROL_DATA,
++	DWC_OTG_CONTROL_STATUS
++} dwc_otg_control_phase_e;
++
++/** Transaction types. */
++typedef enum dwc_otg_transaction_type {
++	DWC_OTG_TRANSACTION_NONE          = 0,
++	DWC_OTG_TRANSACTION_PERIODIC      = 1,
++	DWC_OTG_TRANSACTION_NON_PERIODIC  = 2,
++	DWC_OTG_TRANSACTION_ALL           = DWC_OTG_TRANSACTION_PERIODIC + DWC_OTG_TRANSACTION_NON_PERIODIC
++} dwc_otg_transaction_type_e;
++
++struct dwc_otg_qh;
++
++/**
++ * A Queue Transfer Descriptor (QTD) holds the state of a bulk, control,
++ * interrupt, or isochronous transfer. A single QTD is created for each URB
++ * (of one of these types) submitted to the HCD. The transfer associated with
++ * a QTD may require one or multiple transactions.
++ *
++ * A QTD is linked to a Queue Head, which is entered in either the
++ * non-periodic or periodic schedule for execution. When a QTD is chosen for
++ * execution, some or all of its transactions may be executed. After
++ * execution, the state of the QTD is updated. The QTD may be retired if all
++ * its transactions are complete or if an error occurred. Otherwise, it
++ * remains in the schedule so more transactions can be executed later.
++ */
++typedef struct dwc_otg_qtd {
++	/**
++	 * Determines the PID of the next data packet for the data phase of
++	 * control transfers. Ignored for other transfer types.<br>
++	 * One of the following values:
++	 *	- DWC_OTG_HC_PID_DATA0
++	 *	- DWC_OTG_HC_PID_DATA1
++	 */
++	uint8_t data_toggle;
++
++	/** Current phase for control transfers (Setup, Data, or Status). */
++	dwc_otg_control_phase_e control_phase;
++
++	/** Keep track of the current split type
++	 * for FS/LS endpoints on a HS Hub */
++	uint8_t complete_split;
++
++	/** How many bytes transferred during SSPLIT OUT */
++	uint32_t ssplit_out_xfer_count;
++
++	/**
++	 * Holds the number of bus errors that have occurred for a transaction
++	 * within this transfer.
++	 */
++	uint8_t error_count;
++
++	/**
++	 * Index of the next frame descriptor for an isochronous transfer. A
++	 * frame descriptor describes the buffer position and length of the
++	 * data to be transferred in the next scheduled (micro)frame of an
++	 * isochronous transfer. It also holds status for that transaction.
++	 * The frame index starts at 0.
++	 */
++	uint16_t isoc_frame_index;
++
++	/** Position of the ISOC split on full/low speed */
++	uint8_t isoc_split_pos;
++
++	/** Position of the ISOC split in the buffer for the current frame */
++	uint16_t isoc_split_offset;
++
++	/** URB for this transfer */
++	struct dwc_otg_hcd_urb *urb;
++
++	struct dwc_otg_qh *qh;
++
++	/** Entry in the QH's list of QTDs. */
++	 DWC_CIRCLEQ_ENTRY(dwc_otg_qtd) qtd_list_entry;
++
++	/** Indicates if this QTD is currently being processed by HW. */
++	uint8_t in_process;
++
++	/** Number of DMA descriptors for this QTD */
++	uint8_t n_desc;
++
++	/**
++	 * Last activated frame(packet) index.
++	 * Used in Descriptor DMA mode only.
++	 */
++	uint16_t isoc_frame_index_last;
++
++} dwc_otg_qtd_t;
++
++DWC_CIRCLEQ_HEAD(dwc_otg_qtd_list, dwc_otg_qtd);
++
++/**
++ * A Queue Head (QH) holds the static characteristics of an endpoint and
++ * maintains a list of transfers (QTDs) for that endpoint. A QH structure may
++ * be entered in either the non-periodic or periodic schedule.
++ */
++typedef struct dwc_otg_qh {
++	/**
++	 * Endpoint type.
++	 * One of the following values:
++	 *	- UE_CONTROL
++	 *	- UE_BULK
++	 *	- UE_INTERRUPT
++	 *	- UE_ISOCHRONOUS
++	 */
++	uint8_t ep_type;
++	uint8_t ep_is_in;
++
++	/** wMaxPacketSize Field of Endpoint Descriptor. */
++	uint16_t maxp;
++
++	/**
++	 * Device speed.
++	 * One of the following values:
++	 *	- DWC_OTG_EP_SPEED_LOW
++	 *	- DWC_OTG_EP_SPEED_FULL
++	 *	- DWC_OTG_EP_SPEED_HIGH
++	 */
++	uint8_t dev_speed;
++
++	/**
++	 * Determines the PID of the next data packet for non-control
++	 * transfers. Ignored for control transfers.<br>
++	 * One of the following values:
++	 *	- DWC_OTG_HC_PID_DATA0
++	 *	- DWC_OTG_HC_PID_DATA1
++	 */
++	uint8_t data_toggle;
++
++	/** Ping state if 1. */
++	uint8_t ping_state;
++
++	/**
++	 * List of QTDs for this QH.
++	 */
++	struct dwc_otg_qtd_list qtd_list;
++
++	/** Host channel currently processing transfers for this QH. */
++	struct dwc_hc *channel;
++
++	/** Full/low speed endpoint on high-speed hub requires split. */
++	uint8_t do_split;
++
++	/** @name Periodic schedule information */
++	/** @{ */
++
++	/** Bandwidth in microseconds per (micro)frame. */
++	uint16_t usecs;
++
++	/** Interval between transfers in (micro)frames. */
++	uint16_t interval;
++
++	/**
++	 * (micro)frame to initialize a periodic transfer. The transfer
++	 * executes in the following (micro)frame.
++	 */
++	uint16_t sched_frame;
++
++	/**
++	 * Frame on which a NAK was received on this queue head; used to
++	 * minimise NAK retransmission.
++	 */
++	uint16_t nak_frame;
++
++	/** (micro)frame at which last start split was initialized. */
++	uint16_t start_split_frame;
++
++	/** @} */
++
++	/**
++	 * Used instead of the original buffer if its
++	 * physical address is not dword-aligned.
++	 */
++	uint8_t *dw_align_buf;
++	dwc_dma_t dw_align_buf_dma;
++
++	/** Entry for QH in either the periodic or non-periodic schedule. */
++	dwc_list_link_t qh_list_entry;
++
++	/** @name Descriptor DMA support */
++	/** @{ */
++
++	/** Descriptor List. */
++	dwc_otg_host_dma_desc_t *desc_list;
++
++	/** Descriptor List physical address. */
++	dwc_dma_t desc_list_dma;
++
++	/**
++	 * Xfer Bytes array.
++	 * Each element corresponds to a descriptor and indicates
++	 * the original XferSize value for the descriptor.
++	 */
++	uint32_t *n_bytes;
++
++	/** Actual number of transfer descriptors in a list. */
++	uint16_t ntd;
++
++	/** First activated isochronous transfer descriptor index. */
++	uint8_t td_first;
++	/** Last activated isochronous transfer descriptor index. */
++	uint8_t td_last;
++
++	/** @} */
++
++
++	uint16_t speed;
++	uint16_t frame_usecs[8];
++
++	uint32_t skip_count;
++} dwc_otg_qh_t;
++
++DWC_CIRCLEQ_HEAD(hc_list, dwc_hc);
++
++typedef struct urb_tq_entry {
++	struct urb *urb;
++	DWC_TAILQ_ENTRY(urb_tq_entry) urb_tq_entries;
++} urb_tq_entry_t;
++
++DWC_TAILQ_HEAD(urb_list, urb_tq_entry);
++
++/**
++ * This structure holds the state of the HCD, including the non-periodic and
++ * periodic schedules.
++ */
++struct dwc_otg_hcd {
++	/** The DWC otg device pointer */
++	struct dwc_otg_device *otg_dev;
++	/** DWC OTG Core Interface Layer */
++	dwc_otg_core_if_t *core_if;
++
++	/** Function HCD driver callbacks */
++	struct dwc_otg_hcd_function_ops *fops;
++
++	/** Internal DWC HCD Flags */
++	volatile union dwc_otg_hcd_internal_flags {
++		uint32_t d32;
++		struct {
++			unsigned port_connect_status_change:1;
++			unsigned port_connect_status:1;
++			unsigned port_reset_change:1;
++			unsigned port_enable_change:1;
++			unsigned port_suspend_change:1;
++			unsigned port_over_current_change:1;
++			unsigned port_l1_change:1;
++			unsigned reserved:25;
++		} b;
++	} flags;
++
++	/**
++	 * Inactive items in the non-periodic schedule. This is a list of
++	 * Queue Heads. Transfers associated with these Queue Heads are not
++	 * currently assigned to a host channel.
++	 */
++	dwc_list_link_t non_periodic_sched_inactive;
++
++	/**
++	 * Active items in the non-periodic schedule. This is a list of
++	 * Queue Heads. Transfers associated with these Queue Heads are
++	 * currently assigned to a host channel.
++	 */
++	dwc_list_link_t non_periodic_sched_active;
++
++	/**
++	 * Pointer to the next Queue Head to process in the active
++	 * non-periodic schedule.
++	 */
++	dwc_list_link_t *non_periodic_qh_ptr;
++
++	/**
++	 * Inactive items in the periodic schedule. This is a list of QHs for
++	 * periodic transfers that are _not_ scheduled for the next frame.
++	 * Each QH in the list has an interval counter that determines when it
++	 * needs to be scheduled for execution. This scheduling mechanism
++	 * allows only a simple calculation for periodic bandwidth used (i.e.
++	 * must assume that all periodic transfers may need to execute in the
++	 * same frame). However, it greatly simplifies scheduling and should
++	 * be sufficient for the vast majority of OTG hosts, which need to
++	 * connect to a small number of peripherals at one time.
++	 *
++	 * Items move from this list to periodic_sched_ready when the QH
++	 * interval counter is 0 at SOF.
++	 */
++	dwc_list_link_t periodic_sched_inactive;
++
++	/**
++	 * List of periodic QHs that are ready for execution in the next
++	 * frame, but have not yet been assigned to host channels.
++	 *
++	 * Items move from this list to periodic_sched_assigned as host
++	 * channels become available during the current frame.
++	 */
++	dwc_list_link_t periodic_sched_ready;
++
++	/**
++	 * List of periodic QHs to be executed in the next frame that are
++	 * assigned to host channels.
++	 *
++	 * Items move from this list to periodic_sched_queued as the
++	 * transactions for the QH are queued to the DWC_otg controller.
++	 */
++	dwc_list_link_t periodic_sched_assigned;
++
++	/**
++	 * List of periodic QHs that have been queued for execution.
++	 *
++	 * Items move from this list to either periodic_sched_inactive or
++	 * periodic_sched_ready when the channel associated with the transfer
++	 * is released. If the interval for the QH is 1, the item moves to
++	 * periodic_sched_ready because it must be rescheduled for the next
++	 * frame. Otherwise, the item moves to periodic_sched_inactive.
++	 */
++	dwc_list_link_t periodic_sched_queued;
++
++	/**
++	 * Total bandwidth claimed so far for periodic transfers. This value
++	 * is in microseconds per (micro)frame. The assumption is that all
++	 * periodic transfers may occur in the same (micro)frame.
++	 */
++	uint16_t periodic_usecs;
++
++	/**
++	 * Total bandwidth claimed so far for all periodic transfers
++	 * in a frame.
++	 * This will include a mixture of HS and FS transfers.
++	 * Units are microseconds per (micro)frame.
++	 * We have a budget per frame and have to schedule
++	 * transactions accordingly.
++	 * Watch out for the fact that things are actually scheduled for the
++	 * "next frame".
++	 */
++	uint16_t                frame_usecs[8];
++
++
++	/**
++	 * Frame number read from the core at SOF. The value ranges from 0 to
++	 * DWC_HFNUM_MAX_FRNUM.
++	 */
++	uint16_t frame_number;
++
++	/**
++	 * Count of periodic QHs; used to enable/disable the SOF interrupt when several endpoints are in use.
++	 */
++	uint16_t periodic_qh_count;
++
++	/**
++	 * Free host channels in the controller. This is a list of
++	 * dwc_hc_t items.
++	 */
++	struct hc_list free_hc_list;
++	/**
++	 * Number of host channels assigned to periodic transfers. Currently
++	 * assuming that there is a dedicated host channel for each periodic
++	 * transaction and at least one host channel available for
++	 * non-periodic transactions.
++	 */
++	int periodic_channels; /* microframe_schedule==0 */
++
++	/**
++	 * Number of host channels assigned to non-periodic transfers.
++	 */
++	int non_periodic_channels; /* microframe_schedule==0 */
++
++	/**
++	 * Number of host channels currently available for new transfers
++	 * (used when the microframe scheduler is enabled).
++	 */
++	int available_host_channels;
++
++	/**
++	 * Array of pointers to the host channel descriptors. Allows accessing
++	 * a host channel descriptor given the host channel number. This is
++	 * useful in interrupt handlers.
++	 */
++	struct dwc_hc *hc_ptr_array[MAX_EPS_CHANNELS];
++
++	/**
++	 * Buffer to use for any data received during the status phase of a
++	 * control transfer. Normally no data is transferred during the status
++	 * phase. This buffer is used as a bit bucket.
++	 */
++	uint8_t *status_buf;
++
++	/**
++	 * DMA address for status_buf.
++	 */
++	dma_addr_t status_buf_dma;
++#define DWC_OTG_HCD_STATUS_BUF_SIZE 64
++
++	/**
++	 * Connection timer. An OTG host must display a message if the device
++	 * does not connect. Started when the VBus power is turned on via
++	 * sysfs attribute "buspower".
++	 */
++	dwc_timer_t *conn_timer;
++
++	/* Tasklet to do a reset */
++	dwc_tasklet_t *reset_tasklet;
++
++	dwc_tasklet_t *completion_tasklet;
++	struct urb_list completed_urb_list;
++
++	/* HCD state lock; channel_lock protects host channel bookkeeping */
++	dwc_spinlock_t *lock;
++	dwc_spinlock_t *channel_lock;
++	/**
++	 * Private data that could be used by OS wrapper.
++	 */
++	void *priv;
++
++	uint8_t otg_port;
++
++	/** Frame List */
++	uint32_t *frame_list;
++
++	/** Hub - Port assignment */
++	int hub_port[128];
++#ifdef FIQ_DEBUG
++	int hub_port_alloc[2048];
++#endif
++
++	/** Frame List DMA address */
++	dma_addr_t frame_list_dma;
++
++	struct fiq_stack *fiq_stack;
++	struct fiq_state *fiq_state;
++
++	/** Virtual address for split transaction DMA bounce buffers */
++	struct fiq_dma_blob *fiq_dmab;
++
++#ifdef DEBUG
++	uint32_t frrem_samples;
++	uint64_t frrem_accum;
++
++	uint32_t hfnum_7_samples_a;
++	uint64_t hfnum_7_frrem_accum_a;
++	uint32_t hfnum_0_samples_a;
++	uint64_t hfnum_0_frrem_accum_a;
++	uint32_t hfnum_other_samples_a;
++	uint64_t hfnum_other_frrem_accum_a;
++
++	uint32_t hfnum_7_samples_b;
++	uint64_t hfnum_7_frrem_accum_b;
++	uint32_t hfnum_0_samples_b;
++	uint64_t hfnum_0_frrem_accum_b;
++	uint32_t hfnum_other_samples_b;
++	uint64_t hfnum_other_frrem_accum_b;
++#endif
++};
++
++/** @name Transaction Execution Functions */
++/** @{ */
++extern dwc_otg_transaction_type_e dwc_otg_hcd_select_transactions(dwc_otg_hcd_t
++								  * hcd);
++extern void dwc_otg_hcd_queue_transactions(dwc_otg_hcd_t * hcd,
++					   dwc_otg_transaction_type_e tr_type);
++
++int dwc_otg_hcd_allocate_port(dwc_otg_hcd_t * hcd, dwc_otg_qh_t *qh);
++void dwc_otg_hcd_release_port(dwc_otg_hcd_t * dwc_otg_hcd, dwc_otg_qh_t *qh);
++
++extern int fiq_fsm_queue_transaction(dwc_otg_hcd_t *hcd, dwc_otg_qh_t *qh);
++extern int fiq_fsm_transaction_suitable(dwc_otg_qh_t *qh);
++extern void dwc_otg_cleanup_fiq_channel(dwc_otg_hcd_t *hcd, uint32_t num);
++
++/** @} */
++
++/** @name Interrupt Handler Functions */
++/** @{ */
++extern int32_t dwc_otg_hcd_handle_intr(dwc_otg_hcd_t * dwc_otg_hcd);
++extern int32_t dwc_otg_hcd_handle_sof_intr(dwc_otg_hcd_t * dwc_otg_hcd);
++extern int32_t dwc_otg_hcd_handle_rx_status_q_level_intr(dwc_otg_hcd_t *
++							 dwc_otg_hcd);
++extern int32_t dwc_otg_hcd_handle_np_tx_fifo_empty_intr(dwc_otg_hcd_t *
++							dwc_otg_hcd);
++extern int32_t dwc_otg_hcd_handle_perio_tx_fifo_empty_intr(dwc_otg_hcd_t *
++							   dwc_otg_hcd);
++extern int32_t dwc_otg_hcd_handle_incomplete_periodic_intr(dwc_otg_hcd_t *
++							   dwc_otg_hcd);
++extern int32_t dwc_otg_hcd_handle_port_intr(dwc_otg_hcd_t * dwc_otg_hcd);
++extern int32_t dwc_otg_hcd_handle_conn_id_status_change_intr(dwc_otg_hcd_t *
++							     dwc_otg_hcd);
++extern int32_t dwc_otg_hcd_handle_disconnect_intr(dwc_otg_hcd_t * dwc_otg_hcd);
++extern int32_t dwc_otg_hcd_handle_hc_intr(dwc_otg_hcd_t * dwc_otg_hcd);
++extern int32_t dwc_otg_hcd_handle_hc_n_intr(dwc_otg_hcd_t * dwc_otg_hcd,
++					    uint32_t num);
++extern int32_t dwc_otg_hcd_handle_session_req_intr(dwc_otg_hcd_t * dwc_otg_hcd);
++extern int32_t dwc_otg_hcd_handle_wakeup_detected_intr(dwc_otg_hcd_t *
++						       dwc_otg_hcd);
++/** @} */
++
++/** @name Schedule Queue Functions */
++/** @{ */
++
++/* Implemented in dwc_otg_hcd_queue.c */
++extern dwc_otg_qh_t *dwc_otg_hcd_qh_create(dwc_otg_hcd_t * hcd,
++					   dwc_otg_hcd_urb_t * urb, int atomic_alloc);
++extern void dwc_otg_hcd_qh_free(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh);
++extern int dwc_otg_hcd_qh_add(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh);
++extern void dwc_otg_hcd_qh_remove(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh);
++extern void dwc_otg_hcd_qh_deactivate(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh,
++				      int sched_csplit);
++
++/** Remove and free a QH */
++static inline void dwc_otg_hcd_qh_remove_and_free(dwc_otg_hcd_t * hcd,
++						  dwc_otg_qh_t * qh)
++{
++	dwc_irqflags_t flags;
++	DWC_SPINLOCK_IRQSAVE(hcd->lock, &flags);
++	dwc_otg_hcd_qh_remove(hcd, qh);
++	DWC_SPINUNLOCK_IRQRESTORE(hcd->lock, flags);
++	dwc_otg_hcd_qh_free(hcd, qh);
++}
++
++/** Allocates memory for a QH structure.
++ * @return Returns the allocated memory or NULL on error. */
++static inline dwc_otg_qh_t *dwc_otg_hcd_qh_alloc(int atomic_alloc)
++{
++	if (atomic_alloc)
++		return (dwc_otg_qh_t *) DWC_ALLOC_ATOMIC(sizeof(dwc_otg_qh_t));
++	else
++		return (dwc_otg_qh_t *) DWC_ALLOC(sizeof(dwc_otg_qh_t));
++}
++
++extern dwc_otg_qtd_t *dwc_otg_hcd_qtd_create(dwc_otg_hcd_urb_t * urb,
++					     int atomic_alloc);
++extern void dwc_otg_hcd_qtd_init(dwc_otg_qtd_t * qtd, dwc_otg_hcd_urb_t * urb);
++extern int dwc_otg_hcd_qtd_add(dwc_otg_qtd_t * qtd, dwc_otg_hcd_t * dwc_otg_hcd,
++			       dwc_otg_qh_t ** qh, int atomic_alloc);
++
++/** Allocates memory for a QTD structure.
++ * @return Returns the allocated memory or NULL on error. */
++static inline dwc_otg_qtd_t *dwc_otg_hcd_qtd_alloc(int atomic_alloc)
++{
++	if (atomic_alloc)
++		return (dwc_otg_qtd_t *) DWC_ALLOC_ATOMIC(sizeof(dwc_otg_qtd_t));
++	else
++		return (dwc_otg_qtd_t *) DWC_ALLOC(sizeof(dwc_otg_qtd_t));
++}
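++
++/*
++ * Note on dwc_otg_hcd_qh_alloc() and dwc_otg_hcd_qtd_alloc() above:
++ * atomic_alloc selects the non-blocking DWC_ALLOC_ATOMIC() variant, intended
++ * for callers that cannot sleep (e.g. interrupt context); otherwise the
++ * regular DWC_ALLOC() is used.
++ */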
++
++/** Frees the memory for a QTD structure. The QTD should already have been
++ * removed from its list.
++ * @param qtd QTD to free.*/
++static inline void dwc_otg_hcd_qtd_free(dwc_otg_qtd_t * qtd)
++{
++	DWC_FREE(qtd);
++}
++
++/** Removes a QTD from list.
++ * @param hcd HCD instance.
++ * @param qtd QTD to remove from list.
++ * @param qh QH to which the QTD belongs.
++ */
++static inline void dwc_otg_hcd_qtd_remove(dwc_otg_hcd_t * hcd,
++					  dwc_otg_qtd_t * qtd,
++					  dwc_otg_qh_t * qh)
++{
++	DWC_CIRCLEQ_REMOVE(&qh->qtd_list, qtd, qtd_list_entry);
++}
++
++/** Removes and frees a QTD.
++  * IRQs must be disabled and the HCD lock held when calling this function
++  * outside of the interrupt servicing chain. */
++static inline void dwc_otg_hcd_qtd_remove_and_free(dwc_otg_hcd_t * hcd,
++						   dwc_otg_qtd_t * qtd,
++						   dwc_otg_qh_t * qh)
++{
++	dwc_otg_hcd_qtd_remove(hcd, qtd, qh);
++	dwc_otg_hcd_qtd_free(qtd);
++}
++
++/** @} */
++
++/** @name Descriptor DMA Supporting Functions */
++/** @{ */
++
++extern void dwc_otg_hcd_start_xfer_ddma(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh);
++extern void dwc_otg_hcd_complete_xfer_ddma(dwc_otg_hcd_t * hcd,
++					   dwc_hc_t * hc,
++					   dwc_otg_hc_regs_t * hc_regs,
++					   dwc_otg_halt_status_e halt_status);
++
++extern int dwc_otg_hcd_qh_init_ddma(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh);
++extern void dwc_otg_hcd_qh_free_ddma(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh);
++
++/** @} */
++
++/** @name Internal Functions */
++/** @{ */
++dwc_otg_qh_t *dwc_urb_to_qh(dwc_otg_hcd_urb_t * urb);
++/** @} */
++
++#ifdef CONFIG_USB_DWC_OTG_LPM
++extern int dwc_otg_hcd_get_hc_for_lpm_tran(dwc_otg_hcd_t * hcd,
++					   uint8_t devaddr);
++extern void dwc_otg_hcd_free_hc_from_lpm(dwc_otg_hcd_t * hcd);
++#endif
++
++/** Gets the QH that contains the list_head */
++#define dwc_list_to_qh(_list_head_ptr_) container_of(_list_head_ptr_, dwc_otg_qh_t, qh_list_entry)
++
++/** Gets the QTD that contains the list_head */
++#define dwc_list_to_qtd(_list_head_ptr_) container_of(_list_head_ptr_, dwc_otg_qtd_t, qtd_list_entry)
++
++/** Check if QH is non-periodic  */
++#define dwc_qh_is_non_per(_qh_ptr_) ((_qh_ptr_->ep_type == UE_BULK) || \
++				     (_qh_ptr_->ep_type == UE_CONTROL))
++
++/** High bandwidth multiplier as encoded in highspeed endpoint descriptors */
++#define dwc_hb_mult(wMaxPacketSize) (1 + (((wMaxPacketSize) >> 11) & 0x03))
++
++/** Packet size for any kind of endpoint descriptor */
++#define dwc_max_packet(wMaxPacketSize) ((wMaxPacketSize) & 0x07ff)
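++
++/*
++ * Illustrative example (values not taken from this driver): for a high-speed,
++ * high-bandwidth isochronous endpoint reporting wMaxPacketSize 0x1400,
++ * dwc_max_packet() yields 1024 bytes and dwc_hb_mult() yields 3 transactions
++ * per microframe.
++ */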
++
++/**
++ * Returns true if _frame1 is less than or equal to _frame2. The comparison is
++ * done modulo DWC_HFNUM_MAX_FRNUM. This accounts for the rollover of the
++ * frame number when the max frame number is reached.
++ */
++static inline int dwc_frame_num_le(uint16_t frame1, uint16_t frame2)
++{
++	return ((frame2 - frame1) & DWC_HFNUM_MAX_FRNUM) <=
++	    (DWC_HFNUM_MAX_FRNUM >> 1);
++}
++
++/**
++ * Returns true if _frame1 is greater than _frame2. The comparison is done
++ * modulo DWC_HFNUM_MAX_FRNUM. This accounts for the rollover of the frame
++ * number when the max frame number is reached.
++ */
++static inline int dwc_frame_num_gt(uint16_t frame1, uint16_t frame2)
++{
++	return (frame1 != frame2) &&
++	    (((frame1 - frame2) & DWC_HFNUM_MAX_FRNUM) <
++	     (DWC_HFNUM_MAX_FRNUM >> 1));
++}
++
++/**
++ * Increments _frame by the amount specified by _inc. The addition is done
++ * modulo DWC_HFNUM_MAX_FRNUM. Returns the incremented value.
++ */
++static inline uint16_t dwc_frame_num_inc(uint16_t frame, uint16_t inc)
++{
++	return (frame + inc) & DWC_HFNUM_MAX_FRNUM;
++}
++
++static inline uint16_t dwc_full_frame_num(uint16_t frame)
++{
++	return (frame & DWC_HFNUM_MAX_FRNUM) >> 3;
++}
++
++static inline uint16_t dwc_micro_frame_num(uint16_t frame)
++{
++	return frame & 0x7;
++}
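++
++/*
++ * Worked example of the modulo-frame arithmetic above, assuming
++ * DWC_HFNUM_MAX_FRNUM is 0x3FFF: just before rollover,
++ * dwc_frame_num_le(0x3FFE, 0x0001) is true because
++ * ((0x0001 - 0x3FFE) & 0x3FFF) == 0x0003, which is <= 0x1FFF, and
++ * dwc_frame_num_inc(0x3FFE, 3) wraps around to 0x0001.
++ */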
++
++void dwc_otg_hcd_save_data_toggle(dwc_hc_t * hc,
++				  dwc_otg_hc_regs_t * hc_regs,
++				  dwc_otg_qtd_t * qtd);
++
++#ifdef DEBUG
++/**
++ * Macro to sample the remaining PHY clocks left in the current frame. This
++ * may be used during debugging to determine the average time it takes to
++ * execute sections of code. There are two possible sample points, "a" and
++ * "b", so the _letter argument must be one of these values.
++ *
++ * To dump the average sample times, read the "hcd_frrem" sysfs attribute. For
++ * example, "cat /sys/devices/lm0/hcd_frrem".
++ */
++#define dwc_sample_frrem(_hcd, _qh, _letter) \
++{ \
++	hfnum_data_t hfnum; \
++	dwc_otg_qtd_t *qtd; \
++	qtd = list_entry(_qh->qtd_list.next, dwc_otg_qtd_t, qtd_list_entry); \
++	if (usb_pipeint(qtd->urb->pipe) && _qh->start_split_frame != 0 && !qtd->complete_split) { \
++		hfnum.d32 = DWC_READ_REG32(&_hcd->core_if->host_if->host_global_regs->hfnum); \
++		switch (hfnum.b.frnum & 0x7) { \
++		case 7: \
++			_hcd->hfnum_7_samples_##_letter++; \
++			_hcd->hfnum_7_frrem_accum_##_letter += hfnum.b.frrem; \
++			break; \
++		case 0: \
++			_hcd->hfnum_0_samples_##_letter++; \
++			_hcd->hfnum_0_frrem_accum_##_letter += hfnum.b.frrem; \
++			break; \
++		default: \
++			_hcd->hfnum_other_samples_##_letter++; \
++			_hcd->hfnum_other_frrem_accum_##_letter += hfnum.b.frrem; \
++			break; \
++		} \
++	} \
++}
++#else
++#define dwc_sample_frrem(_hcd, _qh, _letter)
++#endif
++#endif
++#endif /* DWC_DEVICE_ONLY */
+--- /dev/null
++++ b/drivers/usb/host/dwc_otg/dwc_otg_hcd_ddma.c
+@@ -0,0 +1,1132 @@
++/*==========================================================================
++ * $File: //dwh/usb_iip/dev/software/otg/linux/drivers/dwc_otg_hcd_ddma.c $
++ * $Revision: #10 $
++ * $Date: 2011/10/20 $
++ * $Change: 1869464 $
++ *
++ * Synopsys HS OTG Linux Software Driver and documentation (hereinafter,
++ * "Software") is an Unsupported proprietary work of Synopsys, Inc. unless
++ * otherwise expressly agreed to in writing between Synopsys and you.
++ *
++ * The Software IS NOT an item of Licensed Software or Licensed Product under
++ * any End User Software License Agreement or Agreement for Licensed Product
++ * with Synopsys or any supplement thereto. You are permitted to use and
++ * redistribute this Software in source and binary forms, with or without
++ * modification, provided that redistributions of source code must retain this
++ * notice. You may not view, use, disclose, copy or distribute this file or
++ * any information contained herein except pursuant to this license grant from
++ * Synopsys. If you do not agree with this notice, including the disclaimer
++ * below, then you are not authorized to use the Software.
++ *
++ * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS" BASIS
++ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
++ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
++ * ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS BE LIABLE FOR ANY DIRECT,
++ * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
++ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
++ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
++ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
++ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
++ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
++ * DAMAGE.
++ * ========================================================================== */
++#ifndef DWC_DEVICE_ONLY
++
++/** @file
++ * This file contains Descriptor DMA support implementation for host mode.
++ */
++
++#include "dwc_otg_hcd.h"
++#include "dwc_otg_regs.h"
++
++extern bool microframe_schedule;
++
++static inline uint8_t frame_list_idx(uint16_t frame)
++{
++	return (frame & (MAX_FRLIST_EN_NUM - 1));
++}
++
++static inline uint16_t desclist_idx_inc(uint16_t idx, uint16_t inc, uint8_t speed)
++{
++	return (idx + inc) &
++	    (((speed ==
++	       DWC_OTG_EP_SPEED_HIGH) ? MAX_DMA_DESC_NUM_HS_ISOC :
++	      MAX_DMA_DESC_NUM_GENERIC) - 1);
++}
++
++static inline uint16_t desclist_idx_dec(uint16_t idx, uint16_t inc, uint8_t speed)
++{
++	return (idx - inc) &
++	    (((speed ==
++	       DWC_OTG_EP_SPEED_HIGH) ? MAX_DMA_DESC_NUM_HS_ISOC :
++	      MAX_DMA_DESC_NUM_GENERIC) - 1);
++}
++
++static inline uint16_t max_desc_num(dwc_otg_qh_t * qh)
++{
++	return (((qh->ep_type == UE_ISOCHRONOUS)
++		 && (qh->dev_speed == DWC_OTG_EP_SPEED_HIGH))
++		? MAX_DMA_DESC_NUM_HS_ISOC : MAX_DMA_DESC_NUM_GENERIC);
++}
++static inline uint16_t frame_incr_val(dwc_otg_qh_t * qh)
++{
++	return ((qh->dev_speed == DWC_OTG_EP_SPEED_HIGH)
++		? ((qh->interval + 8 - 1) / 8)
++		: qh->interval);
++}
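++
++/*
++ * Example of frame_incr_val() (illustrative values): for a high-speed
++ * endpoint with a (micro)frame interval of 16, it returns (16 + 7) / 8 = 2
++ * full frames, while for a full-speed endpoint with an interval of 4 it
++ * returns 4 frames unchanged.
++ */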
++
++static int desc_list_alloc(dwc_otg_qh_t * qh)
++{
++	int retval = 0;
++
++	qh->desc_list = (dwc_otg_host_dma_desc_t *)
++	    DWC_DMA_ALLOC(sizeof(dwc_otg_host_dma_desc_t) * max_desc_num(qh),
++			  &qh->desc_list_dma);
++
++	if (!qh->desc_list) {
++		/* Bail out early to avoid clearing a NULL descriptor list. */
++		DWC_ERROR("%s: DMA descriptor list allocation failed\n", __func__);
++		return -DWC_E_NO_MEMORY;
++	}
++
++	dwc_memset(qh->desc_list, 0x00,
++		   sizeof(dwc_otg_host_dma_desc_t) * max_desc_num(qh));
++
++	qh->n_bytes =
++	    (uint32_t *) DWC_ALLOC(sizeof(uint32_t) * max_desc_num(qh));
++
++	if (!qh->n_bytes) {
++		retval = -DWC_E_NO_MEMORY;
++		DWC_ERROR
++		    ("%s: Failed to allocate array for descriptors' size actual values\n",
++		     __func__);
++
++	}
++	return retval;
++
++}
++
++static void desc_list_free(dwc_otg_qh_t * qh)
++{
++	if (qh->desc_list) {
++		DWC_DMA_FREE(max_desc_num(qh), qh->desc_list,
++			     qh->desc_list_dma);
++		qh->desc_list = NULL;
++	}
++
++	if (qh->n_bytes) {
++		DWC_FREE(qh->n_bytes);
++		qh->n_bytes = NULL;
++	}
++}
++
++static int frame_list_alloc(dwc_otg_hcd_t * hcd)
++{
++	int retval = 0;
++	if (hcd->frame_list)
++		return 0;
++
++	hcd->frame_list = DWC_DMA_ALLOC(4 * MAX_FRLIST_EN_NUM,
++					&hcd->frame_list_dma);
++	if (!hcd->frame_list) {
++		/* Bail out early to avoid clearing a NULL frame list. */
++		DWC_ERROR("%s: Frame List allocation failed\n", __func__);
++		return -DWC_E_NO_MEMORY;
++	}
++
++	dwc_memset(hcd->frame_list, 0x00, 4 * MAX_FRLIST_EN_NUM);
++
++	return retval;
++}
++
++static void frame_list_free(dwc_otg_hcd_t * hcd)
++{
++	if (!hcd->frame_list)
++		return;
++
++	DWC_DMA_FREE(4 * MAX_FRLIST_EN_NUM, hcd->frame_list, hcd->frame_list_dma);
++	hcd->frame_list = NULL;
++}
++
++static void per_sched_enable(dwc_otg_hcd_t * hcd, uint16_t fr_list_en)
++{
++
++	hcfg_data_t hcfg;
++
++	hcfg.d32 = DWC_READ_REG32(&hcd->core_if->host_if->host_global_regs->hcfg);
++
++	if (hcfg.b.perschedena) {
++		/* already enabled */
++		return;
++	}
++
++	DWC_WRITE_REG32(&hcd->core_if->host_if->host_global_regs->hflbaddr,
++			hcd->frame_list_dma);
++
++	switch (fr_list_en) {
++	case 64:
++		hcfg.b.frlisten = 3;
++		break;
++	case 32:
++		hcfg.b.frlisten = 2;
++		break;
++	case 16:
++		hcfg.b.frlisten = 1;
++		break;
++	case 8:
++		hcfg.b.frlisten = 0;
++		break;
++	default:
++		break;
++	}
++
++	hcfg.b.perschedena = 1;
++
++	DWC_DEBUGPL(DBG_HCD, "Enabling Periodic schedule\n");
++	DWC_WRITE_REG32(&hcd->core_if->host_if->host_global_regs->hcfg, hcfg.d32);
++
++}
++
++static void per_sched_disable(dwc_otg_hcd_t * hcd)
++{
++	hcfg_data_t hcfg;
++
++	hcfg.d32 = DWC_READ_REG32(&hcd->core_if->host_if->host_global_regs->hcfg);
++
++	if (!hcfg.b.perschedena) {
++		/* already disabled */
++		return;
++	}
++	hcfg.b.perschedena = 0;
++
++	DWC_DEBUGPL(DBG_HCD, "Disabling Periodic schedule\n");
++	DWC_WRITE_REG32(&hcd->core_if->host_if->host_global_regs->hcfg, hcfg.d32);
++}
++
++/*
++ * Activates/Deactivates FrameList entries for the channel
++ * based on endpoint servicing period.
++ */
++void update_frame_list(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh, uint8_t enable)
++{
++	uint16_t i, j, inc;
++	dwc_hc_t *hc = NULL;
++
++	if (!qh->channel) {
++		DWC_ERROR("qh->channel = %p", qh->channel);
++		return;
++	}
++
++	if (!hcd) {
++		DWC_ERROR("------hcd = %p", hcd);
++		return;
++	}
++
++	if (!hcd->frame_list) {
++		DWC_ERROR("-------hcd->frame_list = %p", hcd->frame_list);
++		return;
++	}
++
++	hc = qh->channel;
++	inc = frame_incr_val(qh);
++	if (qh->ep_type == UE_ISOCHRONOUS)
++		i = frame_list_idx(qh->sched_frame);
++	else
++		i = 0;
++
++	j = i;
++	do {
++		if (enable)
++			hcd->frame_list[j] |= (1 << hc->hc_num);
++		else
++			hcd->frame_list[j] &= ~(1 << hc->hc_num);
++		j = (j + inc) & (MAX_FRLIST_EN_NUM - 1);
++	}
++	while (j != i);
++	if (!enable)
++		return;
++	hc->schinfo = 0;
++	if (qh->channel->speed == DWC_OTG_EP_SPEED_HIGH) {
++		j = 1;
++		/* TODO - check this */
++		inc = (8 + qh->interval - 1) / qh->interval;
++		for (i = 0; i < inc; i++) {
++			hc->schinfo |= j;
++			j = j << qh->interval;
++		}
++	} else {
++		hc->schinfo = 0xff;
++	}
++}
++
++#if 1
++void dump_frame_list(dwc_otg_hcd_t * hcd)
++{
++	int i = 0;
++	DWC_PRINTF("--FRAME LIST (hex) --\n");
++	for (i = 0; i < MAX_FRLIST_EN_NUM; i++) {
++		DWC_PRINTF("%x\t", hcd->frame_list[i]);
++		if (!(i % 8) && i)
++			DWC_PRINTF("\n");
++	}
++	DWC_PRINTF("\n----\n");
++
++}
++#endif
++
++static void release_channel_ddma(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh)
++{
++	dwc_irqflags_t flags;
++	dwc_spinlock_t *channel_lock = hcd->channel_lock;
++
++	dwc_hc_t *hc = qh->channel;
++	if (dwc_qh_is_non_per(qh)) {
++		DWC_SPINLOCK_IRQSAVE(channel_lock, &flags);
++		if (!microframe_schedule)
++			hcd->non_periodic_channels--;
++		else
++			hcd->available_host_channels++;
++		DWC_SPINUNLOCK_IRQRESTORE(channel_lock, flags);
++	} else
++		update_frame_list(hcd, qh, 0);
++
++	/*
++	 * This condition prevents attempting a double cleanup in case of a
++	 * device disconnect. See channel cleanup in dwc_otg_hcd_disconnect_cb().
++	 */
++	if (hc->qh) {
++		dwc_otg_hc_cleanup(hcd->core_if, hc);
++		DWC_CIRCLEQ_INSERT_TAIL(&hcd->free_hc_list, hc, hc_list_entry);
++		hc->qh = NULL;
++	}
++
++	qh->channel = NULL;
++	qh->ntd = 0;
++
++	if (qh->desc_list) {
++		dwc_memset(qh->desc_list, 0x00,
++			   sizeof(dwc_otg_host_dma_desc_t) * max_desc_num(qh));
++	}
++}
++
++/**
++ * Initializes a QH structure's Descriptor DMA related members.
++ * Allocates memory for descriptor list.
++ * On first periodic QH, allocates memory for FrameList
++ * and enables periodic scheduling.
++ *
++ * @param hcd The HCD state structure for the DWC OTG controller.
++ * @param qh The QH to init.
++ *
++ * @return 0 if successful, negative error code otherwise.
++ */
++int dwc_otg_hcd_qh_init_ddma(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh)
++{
++	int retval = 0;
++
++	if (qh->do_split) {
++		DWC_ERROR("SPLIT Transfers are not supported in Descriptor DMA.\n");
++		return -1;
++	}
++
++	retval = desc_list_alloc(qh);
++
++	if ((retval == 0)
++	    && (qh->ep_type == UE_ISOCHRONOUS || qh->ep_type == UE_INTERRUPT)) {
++		if (!hcd->frame_list) {
++			retval = frame_list_alloc(hcd);
++			/* Enable periodic schedule on first periodic QH */
++			if (retval == 0)
++				per_sched_enable(hcd, MAX_FRLIST_EN_NUM);
++		}
++	}
++
++	qh->ntd = 0;
++
++	return retval;
++}
++
++/**
++ * Frees descriptor list memory associated with the QH.
++ * If the QH is periodic and is the last one, frees the FrameList memory
++ * and disables periodic scheduling.
++ *
++ * @param hcd The HCD state structure for the DWC OTG controller.
++ * @param qh The QH to free.
++ */
++void dwc_otg_hcd_qh_free_ddma(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh)
++{
++	desc_list_free(qh);
++
++	/*
++	 * The channel may still be assigned in some cases, e.g. on an Isoc
++	 * URB dequeue: the channel is halted but no subsequent ChHalted
++	 * interrupt arrives to release it, so when this function is later
++	 * reached from the endpoint disable routine the channel remains
++	 * assigned.
++	 */
++	if (qh->channel)
++		release_channel_ddma(hcd, qh);
++
++	if ((qh->ep_type == UE_ISOCHRONOUS || qh->ep_type == UE_INTERRUPT)
++	    && (microframe_schedule || !hcd->periodic_channels) && hcd->frame_list) {
++
++		per_sched_disable(hcd);
++		frame_list_free(hcd);
++	}
++}
++
++static uint8_t frame_to_desc_idx(dwc_otg_qh_t * qh, uint16_t frame_idx)
++{
++	if (qh->dev_speed == DWC_OTG_EP_SPEED_HIGH) {
++		/*
++		 * Descriptor set(8 descriptors) index
++		 * which is 8-aligned.
++		 */
++		return (frame_idx & ((MAX_DMA_DESC_NUM_HS_ISOC / 8) - 1)) * 8;
++	} else {
++		return (frame_idx & (MAX_DMA_DESC_NUM_GENERIC - 1));
++	}
++}
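++
++/*
++ * Example of frame_to_desc_idx() (assuming MAX_DMA_DESC_NUM_HS_ISOC is 256
++ * and MAX_DMA_DESC_NUM_GENERIC is 64): for a high-speed endpoint, frame
++ * index 5 maps to descriptor set index (5 & 31) * 8 = 40, i.e. the start of
++ * the 8-descriptor set for that frame; for other speeds frame index 70 maps
++ * to descriptor index 70 & 63 = 6.
++ */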
++
++/*
++ * Determines the starting frame for an Isochronous transfer.
++ * A few frames are skipped to prevent a race condition with the HC.
++ */
++static uint8_t calc_starting_frame(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh,
++				   uint8_t * skip_frames)
++{
++	uint16_t frame = 0;
++	hcd->frame_number = dwc_otg_hcd_get_frame_number(hcd);
++
++	/* sched_frame is always frame number(not uFrame) both in FS and HS !! */
++
++	/*
++	 * skip_frames is used to limit the number of activated descriptors in
++	 * order to avoid the situation where the HC services the last
++	 * activated descriptor first.
++	 * Example for FS:
++	 * The current frame is 1 and the scheduled frame is 3. Since the HC
++	 * always fetches the descriptor corresponding to curr_frame+1, the
++	 * descriptor corresponding to frame 2 will be fetched. If the number
++	 * of descriptors is max=64 (or greater), the list will be fully
++	 * programmed with Active descriptors and it is possible (though rare)
++	 * that the latest descriptor (considering list rollover) corresponding
++	 * to frame 2 will be serviced first. The HS case is more probable
++	 * because up to 11 uframes (16 in the code) may be skipped.
++	 */
++	if (qh->dev_speed == DWC_OTG_EP_SPEED_HIGH) {
++		/*
++		 * Also consider the uframe counter, to start the transfer as
++		 * soon as possible. If more than half of the frame has
++		 * elapsed, skip 2 frames, otherwise skip just 1.
++		 * The starting descriptor index must be 8-aligned, so if the
++		 * current frame is nearly complete the next one is skipped as
++		 * well.
++		 */
++
++		if (dwc_micro_frame_num(hcd->frame_number) >= 5) {
++			*skip_frames = 2 * 8;
++			frame = dwc_frame_num_inc(hcd->frame_number, *skip_frames);
++		} else {
++			*skip_frames = 1 * 8;
++			frame = dwc_frame_num_inc(hcd->frame_number, *skip_frames);
++		}
++
++		frame = dwc_full_frame_num(frame);
++	} else {
++		/*
++		 * Two frames are skipped for FS - the current and the next.
++		 * But for descriptor programming, 1 frame(descriptor) is enough,
++		 * see example above.
++		 */
++		*skip_frames = 1;
++		frame = dwc_frame_num_inc(hcd->frame_number, 2);
++	}
++
++	return frame;
++}
++
++/*
++ * Calculate initial descriptor index for isochronous transfer
++ * based on scheduled frame.
++ */
++static uint8_t recalc_initial_desc_idx(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh)
++{
++	uint16_t frame = 0, fr_idx, fr_idx_tmp;
++	uint8_t skip_frames = 0;
++	/*
++	 * With the current ISOC processing algorithm the channel is released
++	 * when there are no more QTDs in the list (qh->ntd == 0).
++	 * Thus this function is called only when qh->ntd == 0 and
++	 * qh->channel == 0.
++	 *
++	 * So the qh->channel != NULL branch is not used and is deliberately
++	 * left in the source file. It is required for another possible
++	 * approach: do not disable and release the channel when the ISOC
++	 * session completes, just move the QH to the inactive schedule until
++	 * a new QTD arrives. On a new QTD, the QH is moved back to the
++	 * 'ready' schedule, and the starting frame and therefore the starting
++	 * desc_index are recalculated. In this case the channel is released
++	 * only on ep_disable.
++	 */
++
++	/* Calculate starting descriptor index. For INTERRUPT endpoint it is always 0. */
++	if (qh->channel) {
++		frame = calc_starting_frame(hcd, qh, &skip_frames);
++		/*
++		 * Calculate initial descriptor index based on FrameList current bitmap
++		 * and servicing period.
++		 */
++		fr_idx_tmp = frame_list_idx(frame);
++		fr_idx =
++		    (MAX_FRLIST_EN_NUM + frame_list_idx(qh->sched_frame) -
++		     fr_idx_tmp)
++		    % frame_incr_val(qh);
++		fr_idx = (fr_idx + fr_idx_tmp) % MAX_FRLIST_EN_NUM;
++	} else {
++		qh->sched_frame = calc_starting_frame(hcd, qh, &skip_frames);
++		fr_idx = frame_list_idx(qh->sched_frame);
++	}
++
++	qh->td_first = qh->td_last = frame_to_desc_idx(qh, fr_idx);
++
++	return skip_frames;
++}
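++
++/*
++ * Worked example of the fr_idx calculation above (assuming MAX_FRLIST_EN_NUM
++ * is 64): with a servicing period of 8 frames,
++ * frame_list_idx(qh->sched_frame) == 2 and a starting frame index
++ * fr_idx_tmp == 5, the first step gives (64 + 2 - 5) % 8 == 5 and the second
++ * gives (5 + 5) % 64 == 10, i.e. the first frame-list entry at or after the
++ * starting frame that keeps the same phase (10 % 8 == 2) as sched_frame.
++ */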
++
++#define	ISOC_URB_GIVEBACK_ASAP
++
++#define MAX_ISOC_XFER_SIZE_FS 1023
++#define MAX_ISOC_XFER_SIZE_HS 3072
++#define DESCNUM_THRESHOLD 4
++
++static void init_isoc_dma_desc(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh,
++			       uint8_t skip_frames)
++{
++	struct dwc_otg_hcd_iso_packet_desc *frame_desc;
++	dwc_otg_qtd_t *qtd;
++	dwc_otg_host_dma_desc_t *dma_desc;
++	uint16_t idx, inc, n_desc, ntd_max, max_xfer_size;
++
++	idx = qh->td_last;
++	inc = qh->interval;
++	n_desc = 0;
++
++	ntd_max = (max_desc_num(qh) + qh->interval - 1) / qh->interval;
++	if (skip_frames && !qh->channel)
++		ntd_max = ntd_max - skip_frames / qh->interval;
++
++	max_xfer_size =
++	    (qh->dev_speed ==
++	     DWC_OTG_EP_SPEED_HIGH) ? MAX_ISOC_XFER_SIZE_HS :
++	    MAX_ISOC_XFER_SIZE_FS;
++
++	DWC_CIRCLEQ_FOREACH(qtd, &qh->qtd_list, qtd_list_entry) {
++		while ((qh->ntd < ntd_max)
++		       && (qtd->isoc_frame_index_last <
++			   qtd->urb->packet_count)) {
++
++			dma_desc = &qh->desc_list[idx];
++			dwc_memset(dma_desc, 0x00, sizeof(dwc_otg_host_dma_desc_t));
++
++			frame_desc = &qtd->urb->iso_descs[qtd->isoc_frame_index_last];
++
++			if (frame_desc->length > max_xfer_size)
++				qh->n_bytes[idx] = max_xfer_size;
++			else
++				qh->n_bytes[idx] = frame_desc->length;
++			dma_desc->status.b_isoc.n_bytes = qh->n_bytes[idx];
++			dma_desc->status.b_isoc.a = 1;
++			dma_desc->status.b_isoc.sts = 0;
++
++			dma_desc->buf = qtd->urb->dma + frame_desc->offset;
++
++			qh->ntd++;
++
++			qtd->isoc_frame_index_last++;
++
++#ifdef	ISOC_URB_GIVEBACK_ASAP
++			/*
++			 * Set IOC for each descriptor corresponding to the
++			 * last frame of the URB.
++			 */
++			if (qtd->isoc_frame_index_last ==
++			    qtd->urb->packet_count)
++				dma_desc->status.b_isoc.ioc = 1;
++
++#endif
++			idx = desclist_idx_inc(idx, inc, qh->dev_speed);
++			n_desc++;
++
++		}
++		qtd->in_process = 1;
++	}
++
++	qh->td_last = idx;
++
++#ifdef	ISOC_URB_GIVEBACK_ASAP
++	/* Set IOC for the last descriptor if descriptor list is full */
++	if (qh->ntd == ntd_max) {
++		idx = desclist_idx_dec(qh->td_last, inc, qh->dev_speed);
++		qh->desc_list[idx].status.b_isoc.ioc = 1;
++	}
++#else
++	/*
++	 * Set the IOC bit for only one descriptor.
++	 * Always try to be ahead of HW processing, i.e. on IOC generation the
++	 * driver activates the next descriptors while the core continues to
++	 * process descriptors following the one with IOC set.
++	 */
++
++	if (n_desc > DESCNUM_THRESHOLD) {
++		/*
++		 * Move IOC "up". Required even if there is only one QTD
++		 * in the list, cause QTDs migth continue to be queued,
++		 * but during the activation it was only one queued.
++		 * Actually more than one QTD might be in the list if this function called
++		 * from XferCompletion - QTDs was queued during HW processing of the previous
++		 * descriptor chunk.
++		 */
++		idx = dwc_desclist_idx_dec(idx, inc * ((qh->ntd + 1) / 2), qh->dev_speed);
++	} else {
++		/*
++		 * Set the IOC on the latest descriptor if either the number
++		 * of descriptors is not greater than the threshold or no more
++		 * new descriptors were activated.
++		 */
++		idx = dwc_desclist_idx_dec(qh->td_last, inc, qh->dev_speed);
++	}
++
++	qh->desc_list[idx].status.b_isoc.ioc = 1;
++#endif
++}
++
++static void init_non_isoc_dma_desc(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh)
++{
++
++	dwc_hc_t *hc;
++	dwc_otg_host_dma_desc_t *dma_desc;
++	dwc_otg_qtd_t *qtd;
++	int num_packets, len, n_desc = 0;
++
++	hc = qh->channel;
++
++	/*
++	 * Start with hc->xfer_buff as initialized in assign_and_init_hc();
++	 * if an SG transfer consists of multiple URBs, this pointer is
++	 * re-assigned to the buffer of the currently processed QTD.
++	 * For a non-SG request there is always one QTD active.
++	 */
++
++	DWC_CIRCLEQ_FOREACH(qtd, &qh->qtd_list, qtd_list_entry) {
++
++		if (n_desc) {
++			/* SG request - more than 1 QTDs */
++			hc->xfer_buff = (uint8_t *)qtd->urb->dma + qtd->urb->actual_length;
++			hc->xfer_len = qtd->urb->length - qtd->urb->actual_length;
++		}
++
++		qtd->n_desc = 0;
++
++		do {
++			dma_desc = &qh->desc_list[n_desc];
++			len = hc->xfer_len;
++
++			if (len > MAX_DMA_DESC_SIZE)
++				len = MAX_DMA_DESC_SIZE - hc->max_packet + 1;
++
++			if (hc->ep_is_in) {
++				if (len > 0) {
++					num_packets = (len + hc->max_packet - 1) / hc->max_packet;
++				} else {
++					/* Need 1 packet for transfer length of 0. */
++					num_packets = 1;
++				}
++				/* Always program an integral # of max packets for IN transfers. */
++				len = num_packets * hc->max_packet;
++			}
++
++			dma_desc->status.b.n_bytes = len;
++
++			qh->n_bytes[n_desc] = len;
++
++			if ((qh->ep_type == UE_CONTROL)
++			    && (qtd->control_phase == DWC_OTG_CONTROL_SETUP))
++				dma_desc->status.b.sup = 1;	/* Setup Packet */
++
++			dma_desc->status.b.a = 1;	/* Active descriptor */
++			dma_desc->status.b.sts = 0;
++
++			dma_desc->buf =
++			    ((unsigned long)hc->xfer_buff & 0xffffffff);
++
++			/*
++			 * Last descriptor (or the only one) of an IN transfer
++			 * with actual size less than MaxPacket.
++			 */
++			if (len > hc->xfer_len) {
++				hc->xfer_len = 0;
++			} else {
++				hc->xfer_buff += len;
++				hc->xfer_len -= len;
++			}
++
++			qtd->n_desc++;
++			n_desc++;
++		}
++		while ((hc->xfer_len > 0) && (n_desc != MAX_DMA_DESC_NUM_GENERIC));
++
++
++		qtd->in_process = 1;
++
++		if (qh->ep_type == UE_CONTROL)
++			break;
++
++		if (n_desc == MAX_DMA_DESC_NUM_GENERIC)
++			break;
++	}
++
++	if (n_desc) {
++		/* Request Transfer Complete interrupt for the last descriptor */
++		qh->desc_list[n_desc - 1].status.b.ioc = 1;
++		/* End of List indicator */
++		qh->desc_list[n_desc - 1].status.b.eol = 1;
++
++		hc->ntd = n_desc;
++	}
++}
++
++/**
++ * For Control and Bulk endpoints initializes descriptor list
++ * and starts the transfer.
++ *
++ * For Interrupt and Isochronous endpoints initializes descriptor list
++ * then updates FrameList, marking appropriate entries as active.
++ * In case of Isochronous, the starting descriptor index is calculated based
++ * on the scheduled frame, but only on the first transfer descriptor within a session.
++ * Then starts the transfer via enabling the channel.
++ * For Isochronous endpoint the channel is not halted on XferComplete
++ * interrupt so remains assigned to the endpoint(QH) until session is done.
++ *
++ * @param hcd The HCD state structure for the DWC OTG controller.
++ * @param qh The QH whose transfer to start.
++ */
++void dwc_otg_hcd_start_xfer_ddma(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh)
++{
++	/* Channel is already assigned */
++	dwc_hc_t *hc = qh->channel;
++	uint8_t skip_frames = 0;
++
++	switch (hc->ep_type) {
++	case DWC_OTG_EP_TYPE_CONTROL:
++	case DWC_OTG_EP_TYPE_BULK:
++		init_non_isoc_dma_desc(hcd, qh);
++
++		dwc_otg_hc_start_transfer_ddma(hcd->core_if, hc);
++		break;
++	case DWC_OTG_EP_TYPE_INTR:
++		init_non_isoc_dma_desc(hcd, qh);
++
++		update_frame_list(hcd, qh, 1);
++
++		dwc_otg_hc_start_transfer_ddma(hcd->core_if, hc);
++		break;
++	case DWC_OTG_EP_TYPE_ISOC:
++
++		if (!qh->ntd)
++			skip_frames = recalc_initial_desc_idx(hcd, qh);
++
++		init_isoc_dma_desc(hcd, qh, skip_frames);
++
++		if (!hc->xfer_started) {
++
++			update_frame_list(hcd, qh, 1);
++
++			/*
++			 * Always set ntd to the maximum, instead of the actual
++			 * size, since otherwise ntd would have to change while
++			 * the channel is enabled, which is not recommended.
++			 */
++			hc->ntd = max_desc_num(qh);
++			/* Enable channel only once for ISOC */
++			dwc_otg_hc_start_transfer_ddma(hcd->core_if, hc);
++		}
++
++		break;
++	default:
++
++		break;
++	}
++}
++
++static void complete_isoc_xfer_ddma(dwc_otg_hcd_t * hcd,
++				    dwc_hc_t * hc,
++				    dwc_otg_hc_regs_t * hc_regs,
++				    dwc_otg_halt_status_e halt_status)
++{
++	struct dwc_otg_hcd_iso_packet_desc *frame_desc;
++	dwc_otg_qtd_t *qtd, *qtd_tmp;
++	dwc_otg_qh_t *qh;
++	dwc_otg_host_dma_desc_t *dma_desc;
++	uint16_t idx, remain;
++	uint8_t urb_compl;
++
++	qh = hc->qh;
++	idx = qh->td_first;
++
++	if (hc->halt_status == DWC_OTG_HC_XFER_URB_DEQUEUE) {
++		DWC_CIRCLEQ_FOREACH_SAFE(qtd, qtd_tmp, &hc->qh->qtd_list, qtd_list_entry)
++		    qtd->in_process = 0;
++		return;
++	} else if ((halt_status == DWC_OTG_HC_XFER_AHB_ERR) ||
++		   (halt_status == DWC_OTG_HC_XFER_BABBLE_ERR)) {
++		/*
++		 * The channel is halted in these error cases, which are
++		 * considered serious issues.
++		 * Complete all URBs, marking all frames as failed,
++		 * irrespective of whether some of the descriptors (frames)
++		 * succeeded or not.
++		 * Pass the error code to the completion routine as well, to
++		 * update urb->status; some class drivers might use it to stop
++		 * queuing transfer requests.
++		 */
++		int err = (halt_status == DWC_OTG_HC_XFER_AHB_ERR)
++		    ? (-DWC_E_IO)
++		    : (-DWC_E_OVERFLOW);
++
++		DWC_CIRCLEQ_FOREACH_SAFE(qtd, qtd_tmp, &hc->qh->qtd_list, qtd_list_entry) {
++			for (idx = 0; idx < qtd->urb->packet_count; idx++) {
++				frame_desc = &qtd->urb->iso_descs[idx];
++				frame_desc->status = err;
++			}
++			hcd->fops->complete(hcd, qtd->urb->priv, qtd->urb, err);
++			dwc_otg_hcd_qtd_remove_and_free(hcd, qtd, qh);
++		}
++		return;
++	}
++
++	DWC_CIRCLEQ_FOREACH_SAFE(qtd, qtd_tmp, &hc->qh->qtd_list, qtd_list_entry) {
++
++		if (!qtd->in_process)
++			break;
++
++		urb_compl = 0;
++
++		do {
++
++			dma_desc = &qh->desc_list[idx];
++
++			frame_desc = &qtd->urb->iso_descs[qtd->isoc_frame_index];
++			remain = hc->ep_is_in ? dma_desc->status.b_isoc.n_bytes : 0;
++
++			if (dma_desc->status.b_isoc.sts == DMA_DESC_STS_PKTERR) {
++				/*
++				 * Either a XactError occurred or not all
++				 * transactions could be completed in the
++				 * scheduled micro-frame/frame; both are
++				 * indicated by DMA_DESC_STS_PKTERR.
++				 */
++				qtd->urb->error_count++;
++				frame_desc->actual_length = qh->n_bytes[idx] - remain;
++				frame_desc->status = -DWC_E_PROTOCOL;
++			} else {
++				/* Success */
++
++				frame_desc->actual_length = qh->n_bytes[idx] - remain;
++				frame_desc->status = 0;
++			}
++
++			if (++qtd->isoc_frame_index == qtd->urb->packet_count) {
++				/*
++				 * urb->status is not used for isoc transfers here.
++				 * The individual frame_desc statuses are used instead.
++				 */
++
++				hcd->fops->complete(hcd, qtd->urb->priv, qtd->urb, 0);
++				dwc_otg_hcd_qtd_remove_and_free(hcd, qtd, qh);
++
++				/*
++				 * This check is necessary because urb_dequeue can be called
++				 * from the URB complete callback (e.g. by a sound driver).
++				 * All pending URBs are dequeued there, so no need for
++				 * further processing.
++				 */
++				if (hc->halt_status == DWC_OTG_HC_XFER_URB_DEQUEUE) {
++					return;
++				}
++
++				urb_compl = 1;
++
++			}
++
++			qh->ntd--;
++
++			/* Stop if IOC requested descriptor reached */
++			if (dma_desc->status.b_isoc.ioc) {
++				idx = desclist_idx_inc(idx, qh->interval, hc->speed);
++				goto stop_scan;
++			}
++
++			idx = desclist_idx_inc(idx, qh->interval, hc->speed);
++
++			if (urb_compl)
++				break;
++		}
++		while (idx != qh->td_first);
++	}
++stop_scan:
++	qh->td_first = idx;
++}
++
++uint8_t update_non_isoc_urb_state_ddma(dwc_otg_hcd_t * hcd,
++				       dwc_hc_t * hc,
++				       dwc_otg_qtd_t * qtd,
++				       dwc_otg_host_dma_desc_t * dma_desc,
++				       dwc_otg_halt_status_e halt_status,
++				       uint32_t n_bytes, uint8_t * xfer_done)
++{
++
++	uint16_t remain = hc->ep_is_in ? dma_desc->status.b.n_bytes : 0;
++	dwc_otg_hcd_urb_t *urb = qtd->urb;
++
++	if (halt_status == DWC_OTG_HC_XFER_AHB_ERR) {
++		urb->status = -DWC_E_IO;
++		return 1;
++	}
++	if (dma_desc->status.b.sts == DMA_DESC_STS_PKTERR) {
++		switch (halt_status) {
++		case DWC_OTG_HC_XFER_STALL:
++			urb->status = -DWC_E_PIPE;
++			break;
++		case DWC_OTG_HC_XFER_BABBLE_ERR:
++			urb->status = -DWC_E_OVERFLOW;
++			break;
++		case DWC_OTG_HC_XFER_XACT_ERR:
++			urb->status = -DWC_E_PROTOCOL;
++			break;
++		default:
++			DWC_ERROR("%s: Unhandled descriptor error status (%d)\n", __func__,
++				  halt_status);
++			break;
++		}
++		return 1;
++	}
++
++	if (dma_desc->status.b.a == 1) {
++		DWC_DEBUGPL(DBG_HCDV,
++			    "Active descriptor encountered on channel %d\n",
++			    hc->hc_num);
++		return 0;
++	}
++
++	if (hc->ep_type == DWC_OTG_EP_TYPE_CONTROL) {
++		if (qtd->control_phase == DWC_OTG_CONTROL_DATA) {
++			urb->actual_length += n_bytes - remain;
++			if (remain || urb->actual_length == urb->length) {
++				/*
++				 * For the Control Data stage do not set
++				 * urb->status = 0, to prevent the URB callback.
++				 * Set it when the Status phase is done. See below.
++				 */
++				*xfer_done = 1;
++			}
++
++		} else if (qtd->control_phase == DWC_OTG_CONTROL_STATUS) {
++			urb->status = 0;
++			*xfer_done = 1;
++		}
++		/* No handling for SETUP stage */
++	} else {
++		/* BULK and INTR */
++		urb->actual_length += n_bytes - remain;
++		if (remain || urb->actual_length == urb->length) {
++			urb->status = 0;
++			*xfer_done = 1;
++		}
++	}
++
++	return 0;
++}
++
++static void complete_non_isoc_xfer_ddma(dwc_otg_hcd_t * hcd,
++					dwc_hc_t * hc,
++					dwc_otg_hc_regs_t * hc_regs,
++					dwc_otg_halt_status_e halt_status)
++{
++	dwc_otg_hcd_urb_t *urb = NULL;
++	dwc_otg_qtd_t *qtd, *qtd_tmp;
++	dwc_otg_qh_t *qh;
++	dwc_otg_host_dma_desc_t *dma_desc;
++	uint32_t n_bytes, n_desc, i;
++	uint8_t failed = 0, xfer_done;
++
++	n_desc = 0;
++
++	qh = hc->qh;
++
++	if (hc->halt_status == DWC_OTG_HC_XFER_URB_DEQUEUE) {
++		DWC_CIRCLEQ_FOREACH_SAFE(qtd, qtd_tmp, &hc->qh->qtd_list, qtd_list_entry) {
++			qtd->in_process = 0;
++		}
++		return;
++	}
++
++	DWC_CIRCLEQ_FOREACH_SAFE(qtd, qtd_tmp, &qh->qtd_list, qtd_list_entry) {
++
++		urb = qtd->urb;
++
++		n_bytes = 0;
++		xfer_done = 0;
++
++		for (i = 0; i < qtd->n_desc; i++) {
++			dma_desc = &qh->desc_list[n_desc];
++
++			n_bytes = qh->n_bytes[n_desc];
++
++			failed =
++			    update_non_isoc_urb_state_ddma(hcd, hc, qtd,
++							   dma_desc,
++							   halt_status, n_bytes,
++							   &xfer_done);
++
++			if (failed
++			    || (xfer_done
++				&& (urb->status != -DWC_E_IN_PROGRESS))) {
++
++				hcd->fops->complete(hcd, urb->priv, urb,
++						    urb->status);
++				dwc_otg_hcd_qtd_remove_and_free(hcd, qtd, qh);
++
++				if (failed)
++					goto stop_scan;
++			} else if (qh->ep_type == UE_CONTROL) {
++				if (qtd->control_phase == DWC_OTG_CONTROL_SETUP) {
++					if (urb->length > 0) {
++						qtd->control_phase = DWC_OTG_CONTROL_DATA;
++					} else {
++						qtd->control_phase = DWC_OTG_CONTROL_STATUS;
++					}
++					DWC_DEBUGPL(DBG_HCDV, "  Control setup transaction done\n");
++				} else if (qtd->control_phase == DWC_OTG_CONTROL_DATA) {
++					if (xfer_done) {
++						qtd->control_phase = DWC_OTG_CONTROL_STATUS;
++						DWC_DEBUGPL(DBG_HCDV, "  Control data transfer done\n");
++					} else if (i + 1 == qtd->n_desc) {
++						/*
++						 * Last descriptor for Control data stage which is
++						 * not completed yet.
++						 */
++						dwc_otg_hcd_save_data_toggle(hc, hc_regs, qtd);
++					}
++				}
++			}
++
++			n_desc++;
++		}
++
++	}
++
++stop_scan:
++
++	if (qh->ep_type != UE_CONTROL) {
++		/*
++		 * Resetting the data toggle for bulk
++		 * and interrupt endpoints in case of stall. See handle_hc_stall_intr()
++		 */
++		if (halt_status == DWC_OTG_HC_XFER_STALL)
++			qh->data_toggle = DWC_OTG_HC_PID_DATA0;
++		else
++			dwc_otg_hcd_save_data_toggle(hc, hc_regs, qtd);
++	}
++
++	if (halt_status == DWC_OTG_HC_XFER_COMPLETE) {
++		hcint_data_t hcint;
++		hcint.d32 = DWC_READ_REG32(&hc_regs->hcint);
++		if (hcint.b.nyet) {
++			/*
++			 * Got a NYET on the last transaction of the transfer. It
++			 * means that the endpoint should be in the PING state at the
++			 * beginning of the next transfer.
++			 */
++			qh->ping_state = 1;
++			clear_hc_int(hc_regs, nyet);
++		}
++
++	}
++
++}
++
++/**
++ * This function is called from interrupt handlers.
++ * Scans the descriptor list, updates the URB's status and
++ * calls the completion routine for the URB if it is done.
++ * Releases the channel to be used by other transfers.
++ * For an Isochronous endpoint the channel is not halted until
++ * the end of the session, i.e. until the QTD list is empty.
++ * If a periodic channel is released, the FrameList is updated accordingly.
++ *
++ * Calls transaction selection routines to activate pending transfers.
++ *
++ * @param hcd The HCD state structure for the DWC OTG controller.
++ * @param hc Host channel, the transfer is completed on.
++ * @param hc_regs Host channel registers.
++ * @param halt_status Reason the channel is being halted,
++ *		      or just XferComplete for isochronous transfer
++ */
++void dwc_otg_hcd_complete_xfer_ddma(dwc_otg_hcd_t * hcd,
++				    dwc_hc_t * hc,
++				    dwc_otg_hc_regs_t * hc_regs,
++				    dwc_otg_halt_status_e halt_status)
++{
++	uint8_t continue_isoc_xfer = 0;
++	dwc_otg_transaction_type_e tr_type;
++	dwc_otg_qh_t *qh = hc->qh;
++
++	if (hc->ep_type == DWC_OTG_EP_TYPE_ISOC) {
++
++		complete_isoc_xfer_ddma(hcd, hc, hc_regs, halt_status);
++
++		/* Release the channel if halted or session completed */
++		if (halt_status != DWC_OTG_HC_XFER_COMPLETE ||
++		    DWC_CIRCLEQ_EMPTY(&qh->qtd_list)) {
++
++			/* Halt the channel if session completed */
++			if (halt_status == DWC_OTG_HC_XFER_COMPLETE) {
++				dwc_otg_hc_halt(hcd->core_if, hc, halt_status);
++			}
++
++			release_channel_ddma(hcd, qh);
++			dwc_otg_hcd_qh_remove(hcd, qh);
++		} else {
++			/* Keep in assigned schedule to continue transfer */
++			DWC_LIST_MOVE_HEAD(&hcd->periodic_sched_assigned,
++					   &qh->qh_list_entry);
++			continue_isoc_xfer = 1;
++
++		}
++		/** @todo Consider the case when period exceeds FrameList size.
++		 *  Frame Rollover interrupt should be used.
++		 */
++	} else {
++		/* Scan descriptor list to complete the URB(s), then release the channel */
++		complete_non_isoc_xfer_ddma(hcd, hc, hc_regs, halt_status);
++
++		release_channel_ddma(hcd, qh);
++		dwc_otg_hcd_qh_remove(hcd, qh);
++
++		if (!DWC_CIRCLEQ_EMPTY(&qh->qtd_list)) {
++			/* Add back to inactive non-periodic schedule on normal completion */
++			dwc_otg_hcd_qh_add(hcd, qh);
++		}
++
++	}
++	tr_type = dwc_otg_hcd_select_transactions(hcd);
++	if (tr_type != DWC_OTG_TRANSACTION_NONE || continue_isoc_xfer) {
++		if (continue_isoc_xfer) {
++			if (tr_type == DWC_OTG_TRANSACTION_NONE) {
++				tr_type = DWC_OTG_TRANSACTION_PERIODIC;
++			} else if (tr_type == DWC_OTG_TRANSACTION_NON_PERIODIC) {
++				tr_type = DWC_OTG_TRANSACTION_ALL;
++			}
++		}
++		dwc_otg_hcd_queue_transactions(hcd, tr_type);
++	}
++}
++
++#endif /* DWC_DEVICE_ONLY */
+--- /dev/null
++++ b/drivers/usb/host/dwc_otg/dwc_otg_hcd_if.h
+@@ -0,0 +1,417 @@
++/* ==========================================================================
++ * $File: //dwh/usb_iip/dev/software/otg/linux/drivers/dwc_otg_hcd_if.h $
++ * $Revision: #12 $
++ * $Date: 2011/10/26 $
++ * $Change: 1873028 $
++ *
++ * Synopsys HS OTG Linux Software Driver and documentation (hereinafter,
++ * "Software") is an Unsupported proprietary work of Synopsys, Inc. unless
++ * otherwise expressly agreed to in writing between Synopsys and you.
++ *
++ * The Software IS NOT an item of Licensed Software or Licensed Product under
++ * any End User Software License Agreement or Agreement for Licensed Product
++ * with Synopsys or any supplement thereto. You are permitted to use and
++ * redistribute this Software in source and binary forms, with or without
++ * modification, provided that redistributions of source code must retain this
++ * notice. You may not view, use, disclose, copy or distribute this file or
++ * any information contained herein except pursuant to this license grant from
++ * Synopsys. If you do not agree with this notice, including the disclaimer
++ * below, then you are not authorized to use the Software.
++ *
++ * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS" BASIS
++ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
++ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
++ * ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS BE LIABLE FOR ANY DIRECT,
++ * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
++ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
++ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
++ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
++ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
++ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
++ * DAMAGE.
++ * ========================================================================== */
++#ifndef DWC_DEVICE_ONLY
++#ifndef __DWC_HCD_IF_H__
++#define __DWC_HCD_IF_H__
++
++#include "dwc_otg_core_if.h"
++
++/** @file
++ * This file defines DWC_OTG HCD Core API.
++ */
++
++struct dwc_otg_hcd;
++typedef struct dwc_otg_hcd dwc_otg_hcd_t;
++
++struct dwc_otg_hcd_urb;
++typedef struct dwc_otg_hcd_urb dwc_otg_hcd_urb_t;
++
++/** @name HCD Function Driver Callbacks */
++/** @{ */
++
++/** This function is called whenever core switches to host mode. */
++typedef int (*dwc_otg_hcd_start_cb_t) (dwc_otg_hcd_t * hcd);
++
++/** This function is called when device has been disconnected */
++typedef int (*dwc_otg_hcd_disconnect_cb_t) (dwc_otg_hcd_t * hcd);
++
++/** The OS wrapper provides this function to the HCD core so it can get the hub and port to which a device is connected */
++typedef int (*dwc_otg_hcd_hub_info_from_urb_cb_t) (dwc_otg_hcd_t * hcd,
++						   void *urb_handle,
++						   uint32_t * hub_addr,
++						   uint32_t * port_addr);
++/** Via this function the HCD core gets the device speed */
++typedef int (*dwc_otg_hcd_speed_from_urb_cb_t) (dwc_otg_hcd_t * hcd,
++						void *urb_handle);
++
++/** This function is called when a URB is completed */
++typedef int (*dwc_otg_hcd_complete_urb_cb_t) (dwc_otg_hcd_t * hcd,
++					      void *urb_handle,
++					      dwc_otg_hcd_urb_t * dwc_otg_urb,
++					      int32_t status);
++
++/** Via this function HCD core gets b_hnp_enable parameter */
++typedef int (*dwc_otg_hcd_get_b_hnp_enable) (dwc_otg_hcd_t * hcd);
++
++struct dwc_otg_hcd_function_ops {
++	dwc_otg_hcd_start_cb_t start;
++	dwc_otg_hcd_disconnect_cb_t disconnect;
++	dwc_otg_hcd_hub_info_from_urb_cb_t hub_info;
++	dwc_otg_hcd_speed_from_urb_cb_t speed;
++	dwc_otg_hcd_complete_urb_cb_t complete;
++	dwc_otg_hcd_get_b_hnp_enable get_b_hnp_enable;
++};
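++
++/*
++ * Usage sketch (hypothetical callback names, not part of this driver): an OS
++ * wrapper is expected to fill in these callbacks and pass them to
++ * dwc_otg_hcd_start(), e.g.:
++ *
++ *	static struct dwc_otg_hcd_function_ops my_fops = {
++ *		.start            = my_start_cb,
++ *		.disconnect       = my_disconnect_cb,
++ *		.hub_info         = my_hub_info_cb,
++ *		.speed            = my_speed_cb,
++ *		.complete         = my_complete_cb,
++ *		.get_b_hnp_enable = my_get_b_hnp_enable_cb,
++ *	};
++ *	dwc_otg_hcd_start(hcd, &my_fops);
++ */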
++/** @} */
++
++/** @name HCD Core API */
++/** @{ */
++/** This function allocates the dwc_otg_hcd structure and returns a pointer to it. */
++extern dwc_otg_hcd_t *dwc_otg_hcd_alloc_hcd(void);
++
++/** This function should be called to initialize the HCD Core.
++ *
++ * @param hcd The HCD
++ * @param core_if The DWC_OTG Core
++ *
++ * Returns -DWC_E_NO_MEMORY if there is not enough memory.
++ * Returns 0 on success
++ */
++extern int dwc_otg_hcd_init(dwc_otg_hcd_t * hcd, dwc_otg_core_if_t * core_if);
++
++/** Frees HCD
++ *
++ * @param hcd The HCD
++ */
++extern void dwc_otg_hcd_remove(dwc_otg_hcd_t * hcd);
++
++/** This function should be called on every hardware interrupt.
++ *
++ * @param dwc_otg_hcd The HCD
++ *
++ * Returns non-zero if the interrupt is handled.
++ * Returns 0 if the interrupt is not handled.
++ */
++extern int32_t dwc_otg_hcd_handle_intr(dwc_otg_hcd_t * dwc_otg_hcd);
++
++/** This function is used to handle the fast interrupt (FIQ). */
++extern void __attribute__ ((naked)) dwc_otg_hcd_handle_fiq(void);
++
++/**
++ * Returns private data set by
++ * dwc_otg_hcd_set_priv_data function.
++ *
++ * @param hcd The HCD
++ */
++extern void *dwc_otg_hcd_get_priv_data(dwc_otg_hcd_t * hcd);
++
++/**
++ * Set private data.
++ *
++ * @param hcd The HCD
++ * @param priv_data pointer to be stored in private data
++ */
++extern void dwc_otg_hcd_set_priv_data(dwc_otg_hcd_t * hcd, void *priv_data);
++
++/**
++ * This function initializes the HCD Core.
++ *
++ * @param hcd The HCD
++ * @param fops The Function Driver Operations data structure containing pointers to all callbacks.
++ *
++ * Returns -DWC_E_NO_DEVICE if the core is currently in device mode.
++ * Returns 0 on success
++ */
++extern int dwc_otg_hcd_start(dwc_otg_hcd_t * hcd,
++			     struct dwc_otg_hcd_function_ops *fops);
++
++/**
++ * Halts the DWC_otg host mode operations in a clean manner. USB transfers are
++ * stopped.
++ *
++ * @param hcd The HCD
++ */
++extern void dwc_otg_hcd_stop(dwc_otg_hcd_t * hcd);
++
++/**
++ * Handles hub class-specific requests.
++ *
++ * @param dwc_otg_hcd The HCD
++ * @param typeReq Request Type
++ * @param wValue wValue from control request
++ * @param wIndex wIndex from control request
++ * @param buf data buffer
++ * @param wLength data buffer length
++ *
++ * Returns -DWC_E_INVALID if invalid argument is passed
++ * Returns 0 on success
++ */
++extern int dwc_otg_hcd_hub_control(dwc_otg_hcd_t * dwc_otg_hcd,
++				   uint16_t typeReq, uint16_t wValue,
++				   uint16_t wIndex, uint8_t * buf,
++				   uint16_t wLength);
++
++/**
++ * Returns otg port number.
++ *
++ * @param hcd The HCD
++ */
++extern uint32_t dwc_otg_hcd_otg_port(dwc_otg_hcd_t * hcd);
++
++/**
++ * Returns OTG version - either 1.3 or 2.0.
++ *
++ * @param core_if The core_if structure pointer
++ */
++extern uint16_t dwc_otg_get_otg_version(dwc_otg_core_if_t * core_if);
++
++/**
++ * Returns 1 if currently core is acting as B host, and 0 otherwise.
++ *
++ * @param hcd The HCD
++ */
++extern uint32_t dwc_otg_hcd_is_b_host(dwc_otg_hcd_t * hcd);
++
++/**
++ * Returns current frame number.
++ *
++ * @param hcd The HCD
++ */
++extern int dwc_otg_hcd_get_frame_number(dwc_otg_hcd_t * hcd);
++
++/**
++ * Dumps hcd state.
++ *
++ * @param hcd The HCD
++ */
++extern void dwc_otg_hcd_dump_state(dwc_otg_hcd_t * hcd);
++
++/**
++ * Dump the average frame remaining at SOF. This can be used to
++ * determine average interrupt latency. Frame remaining is also shown for
++ * start transfer and two additional sample points.
++ * Currently this function is not implemented.
++ *
++ * @param hcd The HCD
++ */
++extern void dwc_otg_hcd_dump_frrem(dwc_otg_hcd_t * hcd);
++
++/**
++ * Sends LPM transaction to the local device.
++ *
++ * @param hcd The HCD
++ * @param devaddr Device Address
++ * @param hird Host initiated resume duration
++ * @param bRemoteWake Value of bRemoteWake field in LPM transaction
++ *
++ * Returns a negative value if sending the LPM transaction did not succeed.
++ * Returns 0 on success.
++ */
++extern int dwc_otg_hcd_send_lpm(dwc_otg_hcd_t * hcd, uint8_t devaddr,
++				uint8_t hird, uint8_t bRemoteWake);
++
++/* URB interface */
++
++/**
++ * Allocates memory for dwc_otg_hcd_urb structure.
++ * Allocated memory should be freed by call of DWC_FREE.
++ *
++ * @param hcd The HCD
++ * @param iso_desc_count Count of ISOC descriptors
++ * @param atomic_alloc Specifies whether to perform atomic allocation.
++ */
++extern dwc_otg_hcd_urb_t *dwc_otg_hcd_urb_alloc(dwc_otg_hcd_t * hcd,
++						int iso_desc_count,
++						int atomic_alloc);
++
++/**
++ * Set pipe information in URB.
++ *
++ * @param hcd_urb DWC_OTG URB
++ * @param devaddr Device Address
++ * @param ep_num Endpoint Number
++ * @param ep_type Endpoint Type
++ * @param ep_dir Endpoint Direction
++ * @param mps Max Packet Size
++ */
++extern void dwc_otg_hcd_urb_set_pipeinfo(dwc_otg_hcd_urb_t * hcd_urb,
++					 uint8_t devaddr, uint8_t ep_num,
++					 uint8_t ep_type, uint8_t ep_dir,
++					 uint16_t mps);
++
++/* Transfer flags */
++#define URB_GIVEBACK_ASAP 0x1
++#define URB_SEND_ZERO_PACKET 0x2
++
++/**
++ * Sets dwc_otg_hcd_urb parameters.
++ *
++ * @param urb DWC_OTG URB allocated by dwc_otg_hcd_urb_alloc function.
++ * @param urb_handle Unique handle for request, this will be passed back
++ * to function driver in completion callback.
++ * @param buf The buffer for the data
++ * @param dma The DMA buffer for the data
++ * @param buflen Transfer length
++ * @param sp Buffer for setup data
++ * @param sp_dma DMA address of setup data buffer
++ * @param flags Transfer flags
++ * @param interval Polling interval for interrupt or isochronous transfers.
++ */
++extern void dwc_otg_hcd_urb_set_params(dwc_otg_hcd_urb_t * urb,
++				       void *urb_handle, void *buf,
++				       dwc_dma_t dma, uint32_t buflen, void *sp,
++				       dwc_dma_t sp_dma, uint32_t flags,
++				       uint16_t interval);
++
++/** Gets status from dwc_otg_hcd_urb
++ *
++ * @param dwc_otg_urb DWC_OTG URB
++ */
++extern uint32_t dwc_otg_hcd_urb_get_status(dwc_otg_hcd_urb_t * dwc_otg_urb);
++
++/** Gets actual length from dwc_otg_hcd_urb
++ *
++ * @param dwc_otg_urb DWC_OTG URB
++ */
++extern uint32_t dwc_otg_hcd_urb_get_actual_length(dwc_otg_hcd_urb_t *
++						  dwc_otg_urb);
++
++/** Gets error count from dwc_otg_hcd_urb. Only for ISOC URBs
++ *
++ * @param dwc_otg_urb DWC_OTG URB
++ */
++extern uint32_t dwc_otg_hcd_urb_get_error_count(dwc_otg_hcd_urb_t *
++						dwc_otg_urb);
++
++/** Set ISOC descriptor offset and length
++ *
++ * @param dwc_otg_urb DWC_OTG URB
++ * @param desc_num ISOC descriptor number
++ * @param offset Offset from the beginning of the buffer.
++ * @param length Transaction length
++ */
++extern void dwc_otg_hcd_urb_set_iso_desc_params(dwc_otg_hcd_urb_t * dwc_otg_urb,
++						int desc_num, uint32_t offset,
++						uint32_t length);
++
++/** Get status of ISOC descriptor, specified by desc_num
++ *
++ * @param dwc_otg_urb DWC_OTG URB
++ * @param desc_num ISOC descriptor number
++ */
++extern uint32_t dwc_otg_hcd_urb_get_iso_desc_status(dwc_otg_hcd_urb_t *
++						    dwc_otg_urb, int desc_num);
++
++/** Get actual length of ISOC descriptor, specified by desc_num
++ *
++ * @param dwc_otg_urb DWC_OTG URB
++ * @param desc_num ISOC descriptor number
++ */
++extern uint32_t dwc_otg_hcd_urb_get_iso_desc_actual_length(dwc_otg_hcd_urb_t *
++							   dwc_otg_urb,
++							   int desc_num);
++
++/** Queues a URB. After the transfer completes, the complete callback will be called with the URB status
++ *
++ * @param dwc_otg_hcd The HCD
++ * @param dwc_otg_urb DWC_OTG URB
++ * @param ep_handle Out parameter for returning endpoint handle
++ * @param atomic_alloc Flag to do atomic allocation if needed
++ *
++ * Returns -DWC_E_NO_DEVICE if no device is connected.
++ * Returns -DWC_E_NO_MEMORY if there is not enough memory.
++ * Returns 0 on success.
++ */
++extern int dwc_otg_hcd_urb_enqueue(dwc_otg_hcd_t * dwc_otg_hcd,
++				   dwc_otg_hcd_urb_t * dwc_otg_urb,
++				   void **ep_handle, int atomic_alloc);
++
++/** De-queue the specified URB
++ *
++ * @param dwc_otg_hcd The HCD
++ * @param dwc_otg_urb DWC_OTG URB
++ */
++extern int dwc_otg_hcd_urb_dequeue(dwc_otg_hcd_t * dwc_otg_hcd,
++				   dwc_otg_hcd_urb_t * dwc_otg_urb);
++
++/** Frees resources in the DWC_otg controller related to a given endpoint.
++ * Any URBs for the endpoint must already be dequeued.
++ *
++ * @param hcd The HCD
++ * @param ep_handle Endpoint handle, returned by dwc_otg_hcd_urb_enqueue function
++ * @param retry Number of retries if there are queued transfers.
++ *
++ * Returns -DWC_E_INVALID if invalid arguments are passed.
++ * Returns 0 on success
++ */
++extern int dwc_otg_hcd_endpoint_disable(dwc_otg_hcd_t * hcd, void *ep_handle,
++					int retry);
++
++/* Resets the data toggle in qh structure. This function can be called from
++ * usb_clear_halt routine.
++ *
++ * @param hcd The HCD
++ * @param ep_handle Endpoint handle, returned by dwc_otg_hcd_urb_enqueue function
++ *
++ * Returns -DWC_E_INVALID if invalid arguments are passed.
++ * Returns 0 on success
++ */
++extern int dwc_otg_hcd_endpoint_reset(dwc_otg_hcd_t * hcd, void *ep_handle);
++
++/** Returns 1 if the status of the specified port has changed and 0 otherwise.
++ *
++ * @param hcd The HCD
++ * @param port Port number
++ */
++extern int dwc_otg_hcd_is_status_changed(dwc_otg_hcd_t * hcd, int port);
++
++/** Call this function to check if bandwidth was allocated for the specified endpoint.
++ * Only for ISOC and INTERRUPT endpoints.
++ *
++ * @param hcd The HCD
++ * @param ep_handle Endpoint handle
++ */
++extern int dwc_otg_hcd_is_bandwidth_allocated(dwc_otg_hcd_t * hcd,
++					      void *ep_handle);
++
++/** Call this function to check if bandwidth was freed for the specified endpoint.
++ *
++ * @param hcd The HCD
++ * @param ep_handle Endpoint handle
++ */
++extern int dwc_otg_hcd_is_bandwidth_freed(dwc_otg_hcd_t * hcd, void *ep_handle);
++
++/** Returns the bandwidth allocated for the specified endpoint, in microseconds.
++ * Only for ISOC and INTERRUPT endpoints.
++ *
++ * @param hcd The HCD
++ * @param ep_handle Endpoint handle
++ */
++extern uint8_t dwc_otg_hcd_get_ep_bandwidth(dwc_otg_hcd_t * hcd,
++					    void *ep_handle);
++
++/** @} */
++
++#endif /* __DWC_HCD_IF_H__ */
++#endif /* DWC_DEVICE_ONLY */
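
The interface declared above is driven in a fixed sequence: allocate a URB, describe the pipe, set the transfer parameters, then enqueue it. Below is a minimal sketch of that sequence for a bulk IN transfer, not part of the patch itself; hcd, priv, buf and buf_dma are assumed to already exist in the caller, UE_BULK is the endpoint-type constant used elsewhere in this driver, and error handling is elided.

	/* Illustrative only: submit a 512-byte bulk IN transfer to
	 * endpoint 1 of device 2 through the HCD URB interface. */
	dwc_otg_hcd_urb_t *urb;
	void *ep_handle;
	int ret;

	urb = dwc_otg_hcd_urb_alloc(hcd, 0, 1);		/* no ISOC descriptors, atomic alloc */
	dwc_otg_hcd_urb_set_pipeinfo(urb, 2, 1, UE_BULK, 1, 512);	/* devaddr, ep, type, IN, mps */
	dwc_otg_hcd_urb_set_params(urb, priv, buf, buf_dma, 512,
				   NULL, 0, URB_GIVEBACK_ASAP, 0);	/* no setup packet for bulk */
	ret = dwc_otg_hcd_urb_enqueue(hcd, urb, &ep_handle, 1);	/* 0 on success; completion callback
								 * later reports the URB status */
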
+--- /dev/null
++++ b/drivers/usb/host/dwc_otg/dwc_otg_hcd_intr.c
+@@ -0,0 +1,2714 @@
++/* ==========================================================================
++ * $File: //dwh/usb_iip/dev/software/otg/linux/drivers/dwc_otg_hcd_intr.c $
++ * $Revision: #89 $
++ * $Date: 2011/10/20 $
++ * $Change: 1869487 $
++ *
++ * Synopsys HS OTG Linux Software Driver and documentation (hereinafter,
++ * "Software") is an Unsupported proprietary work of Synopsys, Inc. unless
++ * otherwise expressly agreed to in writing between Synopsys and you.
++ *
++ * The Software IS NOT an item of Licensed Software or Licensed Product under
++ * any End User Software License Agreement or Agreement for Licensed Product
++ * with Synopsys or any supplement thereto. You are permitted to use and
++ * redistribute this Software in source and binary forms, with or without
++ * modification, provided that redistributions of source code must retain this
++ * notice. You may not view, use, disclose, copy or distribute this file or
++ * any information contained herein except pursuant to this license grant from
++ * Synopsys. If you do not agree with this notice, including the disclaimer
++ * below, then you are not authorized to use the Software.
++ *
++ * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS" BASIS
++ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
++ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
++ * ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS BE LIABLE FOR ANY DIRECT,
++ * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
++ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
++ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
++ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
++ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
++ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
++ * DAMAGE.
++ * ========================================================================== */
++#ifndef DWC_DEVICE_ONLY
++
++#include "dwc_otg_hcd.h"
++#include "dwc_otg_regs.h"
++
++#include <linux/jiffies.h>
++#include <asm/fiq.h>
++
++
++extern bool microframe_schedule;
++
++/** @file
++ * This file contains the implementation of the HCD Interrupt handlers.
++ */
++
++int fiq_done, int_done;
++
++#ifdef FIQ_DEBUG
++char buffer[1000*16];
++int wptr;
++void notrace _fiq_print(FIQDBG_T dbg_lvl, char *fmt, ...)
++{
++	FIQDBG_T dbg_lvl_req = FIQDBG_PORTHUB;
++	va_list args;
++	char text[17];
++	hfnum_data_t hfnum = { .d32 = FIQ_READ(dwc_regs_base + 0x408) };
++
++	if(dbg_lvl & dbg_lvl_req || dbg_lvl == FIQDBG_ERR)
++	{
++		local_fiq_disable();
++		snprintf(text, 9, "%4d%d:%d ", hfnum.b.frnum/8, hfnum.b.frnum%8, 8 - hfnum.b.frrem/937);
++		va_start(args, fmt);
++		vsnprintf(text+8, 9, fmt, args);
++		va_end(args);
++
++		memcpy(buffer + wptr, text, 16);
++		wptr = (wptr + 16) % sizeof(buffer);
++		local_fiq_enable();
++	}
++}
++#endif
++
++/** This function handles interrupts for the HCD. */
++int32_t dwc_otg_hcd_handle_intr(dwc_otg_hcd_t * dwc_otg_hcd)
++{
++	int retval = 0;
++	static int last_time;
++	dwc_otg_core_if_t *core_if = dwc_otg_hcd->core_if;
++	gintsts_data_t gintsts;
++	gintmsk_data_t gintmsk;
++	hfnum_data_t hfnum;
++	haintmsk_data_t haintmsk;
++
++#ifdef DEBUG
++	dwc_otg_core_global_regs_t *global_regs = core_if->core_global_regs;
++
++#endif
++
++	gintsts.d32 = DWC_READ_REG32(&core_if->core_global_regs->gintsts);
++	gintmsk.d32 = DWC_READ_REG32(&core_if->core_global_regs->gintmsk);
++
++	/* Exit from ISR if core is hibernated */
++	if (core_if->hibernation_suspend == 1) {
++		goto exit_handler_routine;
++	}
++	DWC_SPINLOCK(dwc_otg_hcd->lock);
++	/* Check if HOST Mode */
++	if (dwc_otg_is_host_mode(core_if)) {
++		if (fiq_enable) {
++			local_fiq_disable();
++			fiq_fsm_spin_lock(&dwc_otg_hcd->fiq_state->lock);
++			/* Pull in from the FIQ's disabled mask */
++			gintmsk.d32 = gintmsk.d32 | ~(dwc_otg_hcd->fiq_state->gintmsk_saved.d32);
++			dwc_otg_hcd->fiq_state->gintmsk_saved.d32 = ~0;
++		}
++
++		if (fiq_fsm_enable && ( 0x0000FFFF & ~(dwc_otg_hcd->fiq_state->haintmsk_saved.b2.chint))) {
++			gintsts.b.hcintr = 1;
++		}
++
++		/* Danger, Will Robinson: fake a SOF if necessary */
++		if (fiq_fsm_enable && (dwc_otg_hcd->fiq_state->gintmsk_saved.b.sofintr == 1)) {
++			gintsts.b.sofintr = 1;
++		}
++		gintsts.d32 &= gintmsk.d32;
++
++		if (fiq_enable) {
++			fiq_fsm_spin_unlock(&dwc_otg_hcd->fiq_state->lock);
++			local_fiq_enable();
++		}
++
++		if (!gintsts.d32) {
++			goto exit_handler_routine;
++		}
++
++#ifdef DEBUG
++		// We should be OK doing this because the common interrupts should already have been serviced
++		/* Don't print debug message in the interrupt handler on SOF */
++#ifndef DEBUG_SOF
++		if (gintsts.d32 != DWC_SOF_INTR_MASK)
++#endif
++			DWC_DEBUGPL(DBG_HCDI, "\n");
++#endif
++
++#ifdef DEBUG
++#ifndef DEBUG_SOF
++		if (gintsts.d32 != DWC_SOF_INTR_MASK)
++#endif
++			DWC_DEBUGPL(DBG_HCDI,
++				    "DWC OTG HCD Interrupt Detected gintsts&gintmsk=0x%08x core_if=%p\n",
++				    gintsts.d32, core_if);
++#endif
++		hfnum.d32 = DWC_READ_REG32(&dwc_otg_hcd->core_if->host_if->host_global_regs->hfnum);
++		if (gintsts.b.sofintr) {
++			retval |= dwc_otg_hcd_handle_sof_intr(dwc_otg_hcd);
++		}
++
++		if (gintsts.b.rxstsqlvl) {
++			retval |=
++			    dwc_otg_hcd_handle_rx_status_q_level_intr
++			    (dwc_otg_hcd);
++		}
++		if (gintsts.b.nptxfempty) {
++			retval |=
++			    dwc_otg_hcd_handle_np_tx_fifo_empty_intr
++			    (dwc_otg_hcd);
++		}
++		if (gintsts.b.i2cintr) {
++			/** @todo Implement i2cintr handler. */
++		}
++		if (gintsts.b.portintr) {
++
++			gintmsk_data_t gintmsk = { .b.portintr = 1};
++			retval |= dwc_otg_hcd_handle_port_intr(dwc_otg_hcd);
++			if (fiq_enable) {
++				local_fiq_disable();
++				fiq_fsm_spin_lock(&dwc_otg_hcd->fiq_state->lock);
++				DWC_MODIFY_REG32(&dwc_otg_hcd->core_if->core_global_regs->gintmsk, 0, gintmsk.d32);
++				fiq_fsm_spin_unlock(&dwc_otg_hcd->fiq_state->lock);
++				local_fiq_enable();
++			} else {
++				DWC_MODIFY_REG32(&dwc_otg_hcd->core_if->core_global_regs->gintmsk, 0, gintmsk.d32);
++			}
++		}
++		if (gintsts.b.hcintr) {
++			retval |= dwc_otg_hcd_handle_hc_intr(dwc_otg_hcd);
++		}
++		if (gintsts.b.ptxfempty) {
++			retval |=
++			    dwc_otg_hcd_handle_perio_tx_fifo_empty_intr
++			    (dwc_otg_hcd);
++		}
++#ifdef DEBUG
++#ifndef DEBUG_SOF
++		if (gintsts.d32 != DWC_SOF_INTR_MASK)
++#endif
++		{
++			DWC_DEBUGPL(DBG_HCDI,
++				    "DWC OTG HCD Finished Servicing Interrupts\n");
++			DWC_DEBUGPL(DBG_HCDV, "DWC OTG HCD gintsts=0x%08x\n",
++				    DWC_READ_REG32(&global_regs->gintsts));
++			DWC_DEBUGPL(DBG_HCDV, "DWC OTG HCD gintmsk=0x%08x\n",
++				    DWC_READ_REG32(&global_regs->gintmsk));
++		}
++#endif
++
++#ifdef DEBUG
++#ifndef DEBUG_SOF
++		if (gintsts.d32 != DWC_SOF_INTR_MASK)
++#endif
++			DWC_DEBUGPL(DBG_HCDI, "\n");
++#endif
++
++	}
++
++exit_handler_routine:
++	if (fiq_enable)	{
++		gintmsk_data_t gintmsk_new;
++		haintmsk_data_t haintmsk_new;
++		local_fiq_disable();
++		fiq_fsm_spin_lock(&dwc_otg_hcd->fiq_state->lock);
++		gintmsk_new.d32 = *(volatile uint32_t *)&dwc_otg_hcd->fiq_state->gintmsk_saved.d32;
++		if(fiq_fsm_enable)
++			haintmsk_new.d32 = *(volatile uint32_t *)&dwc_otg_hcd->fiq_state->haintmsk_saved.d32;
++		else
++			haintmsk_new.d32 = 0x0000FFFF;
++
++		/* The FIQ could have sneaked another interrupt in. If so, don't clear MPHI */
++		if ((gintmsk_new.d32 == ~0) && (haintmsk_new.d32 == 0x0000FFFF)) {
++				DWC_WRITE_REG32(dwc_otg_hcd->fiq_state->mphi_regs.intstat, (1<<16));
++				if (dwc_otg_hcd->fiq_state->mphi_int_count >= 50) {
++					fiq_print(FIQDBG_INT, dwc_otg_hcd->fiq_state, "MPHI CLR");
++					DWC_WRITE_REG32(dwc_otg_hcd->fiq_state->mphi_regs.ctrl, ((1<<31) + (1<<16)));
++					while (!(DWC_READ_REG32(dwc_otg_hcd->fiq_state->mphi_regs.ctrl) & (1 << 17)))
++						;
++					DWC_WRITE_REG32(dwc_otg_hcd->fiq_state->mphi_regs.ctrl, (1<<31));
++					dwc_otg_hcd->fiq_state->mphi_int_count = 0;
++				}
++				int_done++;
++		}
++		haintmsk.d32 = DWC_READ_REG32(&core_if->host_if->host_global_regs->haintmsk);
++		/* Re-enable interrupts that the FIQ masked (first time round) */
++		FIQ_WRITE(dwc_otg_hcd->fiq_state->dwc_regs_base + GINTMSK, gintmsk.d32);
++		fiq_fsm_spin_unlock(&dwc_otg_hcd->fiq_state->lock);
++		local_fiq_enable();
++
++		if ((jiffies / HZ) > last_time) {
++			//dwc_otg_qh_t *qh;
++			//dwc_list_link_t *cur;
++			/* Once a second output the fiq and irq numbers, useful for debug */
++			last_time = jiffies / HZ;
++		//	 DWC_WARN("np_kick=%d AHC=%d sched_frame=%d cur_frame=%d int_done=%d fiq_done=%d",
++		//	dwc_otg_hcd->fiq_state->kick_np_queues, dwc_otg_hcd->available_host_channels,
++		//	dwc_otg_hcd->fiq_state->next_sched_frame, hfnum.b.frnum, int_done, dwc_otg_hcd->fiq_state->fiq_done);
++			 //printk(KERN_WARNING "Periodic queues:\n");
++		}
++	}
++
++	DWC_SPINUNLOCK(dwc_otg_hcd->lock);
++	return retval;
++}
++
++#ifdef DWC_TRACK_MISSED_SOFS
++
++#warning Compiling code to track missed SOFs
++#define FRAME_NUM_ARRAY_SIZE 1000
++/**
++ * This function is for debug only.
++ */
++static inline void track_missed_sofs(uint16_t curr_frame_number)
++{
++	static uint16_t frame_num_array[FRAME_NUM_ARRAY_SIZE];
++	static uint16_t last_frame_num_array[FRAME_NUM_ARRAY_SIZE];
++	static int frame_num_idx = 0;
++	static uint16_t last_frame_num = DWC_HFNUM_MAX_FRNUM;
++	static int dumped_frame_num_array = 0;
++
++	if (frame_num_idx < FRAME_NUM_ARRAY_SIZE) {
++		if (((last_frame_num + 1) & DWC_HFNUM_MAX_FRNUM) !=
++		    curr_frame_number) {
++			frame_num_array[frame_num_idx] = curr_frame_number;
++			last_frame_num_array[frame_num_idx++] = last_frame_num;
++		}
++	} else if (!dumped_frame_num_array) {
++		int i;
++		DWC_PRINTF("Frame     Last Frame\n");
++		DWC_PRINTF("-----     ----------\n");
++		for (i = 0; i < FRAME_NUM_ARRAY_SIZE; i++) {
++			DWC_PRINTF("0x%04x    0x%04x\n",
++				   frame_num_array[i], last_frame_num_array[i]);
++		}
++		dumped_frame_num_array = 1;
++	}
++	last_frame_num = curr_frame_number;
++}
++#endif
++
++/**
++ * Handles the start-of-frame interrupt in host mode. Non-periodic
++ * transactions may be queued to the DWC_otg controller for the current
++ * (micro)frame. Periodic transactions may be queued to the controller for the
++ * next (micro)frame.
++ */
++int32_t dwc_otg_hcd_handle_sof_intr(dwc_otg_hcd_t * hcd)
++{
++	hfnum_data_t hfnum;
++	gintsts_data_t gintsts = { .d32 = 0 };
++	dwc_list_link_t *qh_entry;
++	dwc_otg_qh_t *qh;
++	dwc_otg_transaction_type_e tr_type;
++	int did_something = 0;
++	int32_t next_sched_frame = -1;
++
++	hfnum.d32 =
++	    DWC_READ_REG32(&hcd->core_if->host_if->host_global_regs->hfnum);
++
++#ifdef DEBUG_SOF
++	DWC_DEBUGPL(DBG_HCD, "--Start of Frame Interrupt--\n");
++#endif
++	hcd->frame_number = hfnum.b.frnum;
++
++#ifdef DEBUG
++	hcd->frrem_accum += hfnum.b.frrem;
++	hcd->frrem_samples++;
++#endif
++
++#ifdef DWC_TRACK_MISSED_SOFS
++	track_missed_sofs(hcd->frame_number);
++#endif
++	/* Determine whether any periodic QHs should be executed. */
++	qh_entry = DWC_LIST_FIRST(&hcd->periodic_sched_inactive);
++	while (qh_entry != &hcd->periodic_sched_inactive) {
++		qh = DWC_LIST_ENTRY(qh_entry, dwc_otg_qh_t, qh_list_entry);
++		qh_entry = qh_entry->next;
++		if (dwc_frame_num_le(qh->sched_frame, hcd->frame_number)) {
++
++			/*
++			 * Move QH to the ready list to be executed next
++			 * (micro)frame.
++			 */
++			DWC_LIST_MOVE_HEAD(&hcd->periodic_sched_ready,
++					   &qh->qh_list_entry);
++
++			did_something = 1;
++		}
++		else
++		{
++			if(next_sched_frame < 0 || dwc_frame_num_le(qh->sched_frame, next_sched_frame))
++			{
++				next_sched_frame = qh->sched_frame;
++			}
++		}
++	}
++	if (fiq_enable)
++		hcd->fiq_state->next_sched_frame = next_sched_frame;
++
++	tr_type = dwc_otg_hcd_select_transactions(hcd);
++	if (tr_type != DWC_OTG_TRANSACTION_NONE) {
++		dwc_otg_hcd_queue_transactions(hcd, tr_type);
++		did_something = 1;
++	}
++
++	/* Clear interrupt - but do not trample on the FIQ sof */
++	if (!fiq_fsm_enable) {
++		gintsts.b.sofintr = 1;
++		DWC_WRITE_REG32(&hcd->core_if->core_global_regs->gintsts, gintsts.d32);
++	}
++	return 1;
++}
++
++/** Handles the Rx Status Queue Level Interrupt, which indicates that there is at
++ * least one packet in the Rx FIFO.  The packets are moved from the FIFO to
++ * memory if the DWC_otg controller is operating in Slave mode. */
++int32_t dwc_otg_hcd_handle_rx_status_q_level_intr(dwc_otg_hcd_t * dwc_otg_hcd)
++{
++	host_grxsts_data_t grxsts;
++	dwc_hc_t *hc = NULL;
++
++	DWC_DEBUGPL(DBG_HCD, "--RxStsQ Level Interrupt--\n");
++
++	grxsts.d32 =
++	    DWC_READ_REG32(&dwc_otg_hcd->core_if->core_global_regs->grxstsp);
++
++	hc = dwc_otg_hcd->hc_ptr_array[grxsts.b.chnum];
++	if (!hc) {
++		DWC_ERROR("Unable to get corresponding channel\n");
++		return 0;
++	}
++
++	/* Packet Status */
++	DWC_DEBUGPL(DBG_HCDV, "    Ch num = %d\n", grxsts.b.chnum);
++	DWC_DEBUGPL(DBG_HCDV, "    Count = %d\n", grxsts.b.bcnt);
++	DWC_DEBUGPL(DBG_HCDV, "    DPID = %d, hc.dpid = %d\n", grxsts.b.dpid,
++		    hc->data_pid_start);
++	DWC_DEBUGPL(DBG_HCDV, "    PStatus = %d\n", grxsts.b.pktsts);
++
++	switch (grxsts.b.pktsts) {
++	case DWC_GRXSTS_PKTSTS_IN:
++		/* Read the data into the host buffer. */
++		if (grxsts.b.bcnt > 0) {
++			dwc_otg_read_packet(dwc_otg_hcd->core_if,
++					    hc->xfer_buff, grxsts.b.bcnt);
++
++			/* Update the HC fields for the next packet received. */
++			hc->xfer_count += grxsts.b.bcnt;
++			hc->xfer_buff += grxsts.b.bcnt;
++		}
++
++	case DWC_GRXSTS_PKTSTS_IN_XFER_COMP:
++	case DWC_GRXSTS_PKTSTS_DATA_TOGGLE_ERR:
++	case DWC_GRXSTS_PKTSTS_CH_HALTED:
++		/* Handled in interrupt, just ignore data */
++		break;
++	default:
++		DWC_ERROR("RX_STS_Q Interrupt: Unknown status %d\n",
++			  grxsts.b.pktsts);
++		break;
++	}
++
++	return 1;
++}
++
++/** This interrupt occurs when the non-periodic Tx FIFO is half-empty. More
++ * data packets may be written to the FIFO for OUT transfers. More requests
++ * may be written to the non-periodic request queue for IN transfers. This
++ * interrupt is enabled only in Slave mode. */
++int32_t dwc_otg_hcd_handle_np_tx_fifo_empty_intr(dwc_otg_hcd_t * dwc_otg_hcd)
++{
++	DWC_DEBUGPL(DBG_HCD, "--Non-Periodic TxFIFO Empty Interrupt--\n");
++	dwc_otg_hcd_queue_transactions(dwc_otg_hcd,
++				       DWC_OTG_TRANSACTION_NON_PERIODIC);
++	return 1;
++}
++
++/** This interrupt occurs when the periodic Tx FIFO is half-empty. More data
++ * packets may be written to the FIFO for OUT transfers. More requests may be
++ * written to the periodic request queue for IN transfers. This interrupt is
++ * enabled only in Slave mode. */
++int32_t dwc_otg_hcd_handle_perio_tx_fifo_empty_intr(dwc_otg_hcd_t * dwc_otg_hcd)
++{
++	DWC_DEBUGPL(DBG_HCD, "--Periodic TxFIFO Empty Interrupt--\n");
++	dwc_otg_hcd_queue_transactions(dwc_otg_hcd,
++				       DWC_OTG_TRANSACTION_PERIODIC);
++	return 1;
++}
++
++/** There are multiple conditions that can cause a port interrupt. This function
++ * determines which interrupt conditions have occurred and handles them
++ * appropriately. */
++int32_t dwc_otg_hcd_handle_port_intr(dwc_otg_hcd_t * dwc_otg_hcd)
++{
++	int retval = 0;
++	hprt0_data_t hprt0;
++	hprt0_data_t hprt0_modify;
++
++	hprt0.d32 = DWC_READ_REG32(dwc_otg_hcd->core_if->host_if->hprt0);
++	hprt0_modify.d32 = DWC_READ_REG32(dwc_otg_hcd->core_if->host_if->hprt0);
++
++	/* Clear appropriate bits in HPRT0 to clear the interrupt bit in
++	 * GINTSTS */
++
++	hprt0_modify.b.prtena = 0;
++	hprt0_modify.b.prtconndet = 0;
++	hprt0_modify.b.prtenchng = 0;
++	hprt0_modify.b.prtovrcurrchng = 0;
++
++	/* Port Connect Detected
++	 * Set flag and clear if detected */
++	if (dwc_otg_hcd->core_if->hibernation_suspend == 1) {
++		// Don't modify the port status if we are in hibernation state
++		hprt0_modify.b.prtconndet = 1;
++		hprt0_modify.b.prtenchng = 1;
++		DWC_WRITE_REG32(dwc_otg_hcd->core_if->host_if->hprt0, hprt0_modify.d32);
++		hprt0.d32 = DWC_READ_REG32(dwc_otg_hcd->core_if->host_if->hprt0);
++		return retval;
++	}
++
++	if (hprt0.b.prtconndet) {
++		/** @todo - check if steps performed in 'else' block should be performed regardless of ADP */
++		if (dwc_otg_hcd->core_if->adp_enable &&
++				dwc_otg_hcd->core_if->adp.vbuson_timer_started == 1) {
++			DWC_PRINTF("PORT CONNECT DETECTED ----------------\n");
++			DWC_TIMER_CANCEL(dwc_otg_hcd->core_if->adp.vbuson_timer);
++			dwc_otg_hcd->core_if->adp.vbuson_timer_started = 0;
++			/* TODO - check if this is required, as
++			 * host initialization was already performed
++			 * after initial ADP probing
++			 */
++			/*dwc_otg_hcd->core_if->adp.vbuson_timer_started = 0;
++			dwc_otg_core_init(dwc_otg_hcd->core_if);
++			dwc_otg_enable_global_interrupts(dwc_otg_hcd->core_if);
++			cil_hcd_start(dwc_otg_hcd->core_if);*/
++		} else {
++
++			DWC_DEBUGPL(DBG_HCD, "--Port Interrupt HPRT0=0x%08x "
++				    "Port Connect Detected--\n", hprt0.d32);
++			dwc_otg_hcd->flags.b.port_connect_status_change = 1;
++			dwc_otg_hcd->flags.b.port_connect_status = 1;
++			hprt0_modify.b.prtconndet = 1;
++
++			/* B-Device has connected, Delete the connection timer. */
++			DWC_TIMER_CANCEL(dwc_otg_hcd->conn_timer);
++		}
++		/* The Hub driver asserts a reset when it sees port connect
++		 * status change flag */
++		retval |= 1;
++	}
++
++	/* Port Enable Changed
++	 * Clear if detected - Set internal flag if disabled */
++	if (hprt0.b.prtenchng) {
++		DWC_DEBUGPL(DBG_HCD, "  --Port Interrupt HPRT0=0x%08x "
++			    "Port Enable Changed--\n", hprt0.d32);
++		hprt0_modify.b.prtenchng = 1;
++		if (hprt0.b.prtena == 1) {
++			hfir_data_t hfir;
++			int do_reset = 0;
++			dwc_otg_core_params_t *params =
++			    dwc_otg_hcd->core_if->core_params;
++			dwc_otg_core_global_regs_t *global_regs =
++			    dwc_otg_hcd->core_if->core_global_regs;
++			dwc_otg_host_if_t *host_if =
++			    dwc_otg_hcd->core_if->host_if;
++
++			/* Every time the port is enabled, recalculate
++			 * HFIR.FrInterval
++			 */
++			hfir.d32 = DWC_READ_REG32(&host_if->host_global_regs->hfir);
++			hfir.b.frint = calc_frame_interval(dwc_otg_hcd->core_if);
++			DWC_WRITE_REG32(&host_if->host_global_regs->hfir, hfir.d32);
++
++			/* Check if we need to adjust the PHY clock speed for
++			 * low power and adjust it */
++			if (params->host_support_fs_ls_low_power) {
++				gusbcfg_data_t usbcfg;
++
++				usbcfg.d32 =
++				    DWC_READ_REG32(&global_regs->gusbcfg);
++
++				if (hprt0.b.prtspd == DWC_HPRT0_PRTSPD_LOW_SPEED
++				    || hprt0.b.prtspd ==
++				    DWC_HPRT0_PRTSPD_FULL_SPEED) {
++					/*
++					 * Low power
++					 */
++					hcfg_data_t hcfg;
++					if (usbcfg.b.phylpwrclksel == 0) {
++						/* Set PHY low power clock select for FS/LS devices */
++						usbcfg.b.phylpwrclksel = 1;
++						DWC_WRITE_REG32
++						    (&global_regs->gusbcfg,
++						     usbcfg.d32);
++						do_reset = 1;
++					}
++
++					hcfg.d32 =
++					    DWC_READ_REG32
++					    (&host_if->host_global_regs->hcfg);
++
++					if (hprt0.b.prtspd ==
++					    DWC_HPRT0_PRTSPD_LOW_SPEED
++					    && params->host_ls_low_power_phy_clk
++					    ==
++					    DWC_HOST_LS_LOW_POWER_PHY_CLK_PARAM_6MHZ)
++					{
++						/* 6 MHZ */
++						DWC_DEBUGPL(DBG_CIL,
++							    "FS_PHY programming HCFG to 6 MHz (Low Power)\n");
++						if (hcfg.b.fslspclksel !=
++						    DWC_HCFG_6_MHZ) {
++							hcfg.b.fslspclksel =
++							    DWC_HCFG_6_MHZ;
++							DWC_WRITE_REG32
++							    (&host_if->host_global_regs->hcfg,
++							     hcfg.d32);
++							do_reset = 1;
++						}
++					} else {
++						/* 48 MHZ */
++						DWC_DEBUGPL(DBG_CIL,
++							    "FS_PHY programming HCFG to 48 MHz ()\n");
++						if (hcfg.b.fslspclksel !=
++						    DWC_HCFG_48_MHZ) {
++							hcfg.b.fslspclksel =
++							    DWC_HCFG_48_MHZ;
++							DWC_WRITE_REG32
++							    (&host_if->host_global_regs->hcfg,
++							     hcfg.d32);
++							do_reset = 1;
++						}
++					}
++				} else {
++					/*
++					 * Not low power
++					 */
++					if (usbcfg.b.phylpwrclksel == 1) {
++						usbcfg.b.phylpwrclksel = 0;
++						DWC_WRITE_REG32
++						    (&global_regs->gusbcfg,
++						     usbcfg.d32);
++						do_reset = 1;
++					}
++				}
++
++				if (do_reset) {
++					DWC_TASK_SCHEDULE(dwc_otg_hcd->reset_tasklet);
++				}
++			}
++
++			if (!do_reset) {
++				/* Port has been enabled set the reset change flag */
++				dwc_otg_hcd->flags.b.port_reset_change = 1;
++			}
++		} else {
++			dwc_otg_hcd->flags.b.port_enable_change = 1;
++		}
++		retval |= 1;
++	}
++
++	/** Overcurrent Change Interrupt */
++	if (hprt0.b.prtovrcurrchng) {
++		DWC_DEBUGPL(DBG_HCD, "  --Port Interrupt HPRT0=0x%08x "
++			    "Port Overcurrent Changed--\n", hprt0.d32);
++		dwc_otg_hcd->flags.b.port_over_current_change = 1;
++		hprt0_modify.b.prtovrcurrchng = 1;
++		retval |= 1;
++	}
++
++	/* Clear Port Interrupts */
++	DWC_WRITE_REG32(dwc_otg_hcd->core_if->host_if->hprt0, hprt0_modify.d32);
++
++	return retval;
++}
++
++/** This interrupt indicates that one or more host channels has a pending
++ * interrupt. There are multiple conditions that can cause each host channel
++ * interrupt. This function determines which conditions have occurred for each
++ * host channel interrupt and handles them appropriately. */
++int32_t dwc_otg_hcd_handle_hc_intr(dwc_otg_hcd_t * dwc_otg_hcd)
++{
++	int i;
++	int retval = 0;
++	haint_data_t haint = { .d32 = 0 } ;
++
++	/* Clear appropriate bits in HCINTn to clear the interrupt bit in
++	 * GINTSTS */
++
++	if (!fiq_fsm_enable)
++		haint.d32 = dwc_otg_read_host_all_channels_intr(dwc_otg_hcd->core_if);
++
++	// Overwrite with saved interrupts from fiq handler
++	if(fiq_fsm_enable)
++	{
++		/* check the mask? */
++		local_fiq_disable();
++		fiq_fsm_spin_lock(&dwc_otg_hcd->fiq_state->lock);
++		haint.b2.chint |= ~(dwc_otg_hcd->fiq_state->haintmsk_saved.b2.chint);
++		dwc_otg_hcd->fiq_state->haintmsk_saved.b2.chint = ~0;
++		fiq_fsm_spin_unlock(&dwc_otg_hcd->fiq_state->lock);
++		local_fiq_enable();
++	}
++
++	for (i = 0; i < dwc_otg_hcd->core_if->core_params->host_channels; i++) {
++		if (haint.b2.chint & (1 << i)) {
++			retval |= dwc_otg_hcd_handle_hc_n_intr(dwc_otg_hcd, i);
++		}
++	}
++
++	return retval;
++}
++
++/**
++ * Gets the actual length of a transfer after the transfer halts. _halt_status
++ * holds the reason for the halt.
++ *
++ * For IN transfers where halt_status is DWC_OTG_HC_XFER_COMPLETE,
++ * *short_read is set to 1 upon return if less than the requested
++ * number of bytes were transferred. Otherwise, *short_read is set to 0 upon
++ * return. short_read may also be NULL on entry, in which case it remains
++ * unchanged.
++ */
++static uint32_t get_actual_xfer_length(dwc_hc_t * hc,
++				       dwc_otg_hc_regs_t * hc_regs,
++				       dwc_otg_qtd_t * qtd,
++				       dwc_otg_halt_status_e halt_status,
++				       int *short_read)
++{
++	hctsiz_data_t hctsiz;
++	uint32_t length;
++
++	if (short_read != NULL) {
++		*short_read = 0;
++	}
++	hctsiz.d32 = DWC_READ_REG32(&hc_regs->hctsiz);
++
++	if (halt_status == DWC_OTG_HC_XFER_COMPLETE) {
++		if (hc->ep_is_in) {
++			length = hc->xfer_len - hctsiz.b.xfersize;
++			if (short_read != NULL) {
++				*short_read = (hctsiz.b.xfersize != 0);
++			}
++		} else if (hc->qh->do_split) {
++				//length = split_out_xfersize[hc->hc_num];
++				length = qtd->ssplit_out_xfer_count;
++		} else {
++			length = hc->xfer_len;
++		}
++	} else {
++		/*
++		 * Must use the hctsiz.pktcnt field to determine how much data
++		 * has been transferred. This field reflects the number of
++		 * packets that have been transferred via the USB. This is
++		 * always an integral number of packets if the transfer was
++		 * halted before its normal completion. (Can't use the
++		 * hctsiz.xfersize field because that reflects the number of
++		 * bytes transferred via the AHB, not the USB).
++		 */
++		length =
++		    (hc->start_pkt_count - hctsiz.b.pktcnt) * hc->max_packet;
++	}
++
++	return length;
++}
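
As a worked example of the halted-transfer branch above: if a channel was started with hc->start_pkt_count = 4 and halts with hctsiz.b.pktcnt = 1 and hc->max_packet = 512, the function reports (4 - 1) * 512 = 1536 bytes as transferred on the USB, independent of how many bytes actually moved across the AHB.
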
++
++/**
++ * Updates the state of the URB after a Transfer Complete interrupt on the
++ * host channel. Updates the actual_length field of the URB based on the
++ * number of bytes transferred via the host channel. Sets the URB status
++ * if the data transfer is finished.
++ *
++ * @return 1 if the data transfer specified by the URB is completely finished,
++ * 0 otherwise.
++ */
++static int update_urb_state_xfer_comp(dwc_hc_t * hc,
++				      dwc_otg_hc_regs_t * hc_regs,
++				      dwc_otg_hcd_urb_t * urb,
++				      dwc_otg_qtd_t * qtd)
++{
++	int xfer_done = 0;
++	int short_read = 0;
++
++	int xfer_length;
++
++	xfer_length = get_actual_xfer_length(hc, hc_regs, qtd,
++					     DWC_OTG_HC_XFER_COMPLETE,
++					     &short_read);
++
++	/* non DWORD-aligned buffer case handling. */
++	if (hc->align_buff && xfer_length && hc->ep_is_in) {
++		dwc_memcpy(urb->buf + urb->actual_length, hc->qh->dw_align_buf,
++			   xfer_length);
++	}
++
++	urb->actual_length += xfer_length;
++
++	if (xfer_length && (hc->ep_type == DWC_OTG_EP_TYPE_BULK) &&
++	    (urb->flags & URB_SEND_ZERO_PACKET)
++	    && (urb->actual_length == urb->length)
++	    && !(urb->length % hc->max_packet)) {
++		xfer_done = 0;
++	} else if (short_read || urb->actual_length >= urb->length) {
++		xfer_done = 1;
++		urb->status = 0;
++	}
++
++#ifdef DEBUG
++	{
++		hctsiz_data_t hctsiz;
++		hctsiz.d32 = DWC_READ_REG32(&hc_regs->hctsiz);
++		DWC_DEBUGPL(DBG_HCDV, "DWC_otg: %s: %s, channel %d\n",
++			    __func__, (hc->ep_is_in ? "IN" : "OUT"),
++			    hc->hc_num);
++		DWC_DEBUGPL(DBG_HCDV, "  hc->xfer_len %d\n", hc->xfer_len);
++		DWC_DEBUGPL(DBG_HCDV, "  hctsiz.xfersize %d\n",
++			    hctsiz.b.xfersize);
++		DWC_DEBUGPL(DBG_HCDV, "  urb->transfer_buffer_length %d\n",
++			    urb->length);
++		DWC_DEBUGPL(DBG_HCDV, "  urb->actual_length %d\n",
++			    urb->actual_length);
++		DWC_DEBUGPL(DBG_HCDV, "  short_read %d, xfer_done %d\n",
++			    short_read, xfer_done);
++	}
++#endif
++
++	return xfer_done;
++}
++
++/*
++ * Save the starting data toggle for the next transfer. The data toggle is
++ * saved in the QH for non-control transfers and it's saved in the QTD for
++ * control transfers.
++ */
++void dwc_otg_hcd_save_data_toggle(dwc_hc_t * hc,
++			     dwc_otg_hc_regs_t * hc_regs, dwc_otg_qtd_t * qtd)
++{
++	hctsiz_data_t hctsiz;
++	hctsiz.d32 = DWC_READ_REG32(&hc_regs->hctsiz);
++
++	if (hc->ep_type != DWC_OTG_EP_TYPE_CONTROL) {
++		dwc_otg_qh_t *qh = hc->qh;
++		if (hctsiz.b.pid == DWC_HCTSIZ_DATA0) {
++			qh->data_toggle = DWC_OTG_HC_PID_DATA0;
++		} else {
++			qh->data_toggle = DWC_OTG_HC_PID_DATA1;
++		}
++	} else {
++		if (hctsiz.b.pid == DWC_HCTSIZ_DATA0) {
++			qtd->data_toggle = DWC_OTG_HC_PID_DATA0;
++		} else {
++			qtd->data_toggle = DWC_OTG_HC_PID_DATA1;
++		}
++	}
++}
++
++/**
++ * Updates the state of an Isochronous URB when the transfer is stopped for
++ * any reason. The fields of the current entry in the frame descriptor array
++ * are set based on the transfer state and the input _halt_status. Completes
++ * the Isochronous URB if all the URB frames have been completed.
++ *
++ * @return DWC_OTG_HC_XFER_COMPLETE if there are more frames remaining to be
++ * transferred in the URB. Otherwise return DWC_OTG_HC_XFER_URB_COMPLETE.
++ */
++static dwc_otg_halt_status_e
++update_isoc_urb_state(dwc_otg_hcd_t * hcd,
++		      dwc_hc_t * hc,
++		      dwc_otg_hc_regs_t * hc_regs,
++		      dwc_otg_qtd_t * qtd, dwc_otg_halt_status_e halt_status)
++{
++	dwc_otg_hcd_urb_t *urb = qtd->urb;
++	dwc_otg_halt_status_e ret_val = halt_status;
++	struct dwc_otg_hcd_iso_packet_desc *frame_desc;
++
++	frame_desc = &urb->iso_descs[qtd->isoc_frame_index];
++	switch (halt_status) {
++	case DWC_OTG_HC_XFER_COMPLETE:
++		frame_desc->status = 0;
++		frame_desc->actual_length =
++		    get_actual_xfer_length(hc, hc_regs, qtd, halt_status, NULL);
++
++		/* non DWORD-aligned buffer case handling. */
++		if (hc->align_buff && frame_desc->actual_length && hc->ep_is_in) {
++			dwc_memcpy(urb->buf + frame_desc->offset + qtd->isoc_split_offset,
++				   hc->qh->dw_align_buf, frame_desc->actual_length);
++		}
++
++		break;
++	case DWC_OTG_HC_XFER_FRAME_OVERRUN:
++		urb->error_count++;
++		if (hc->ep_is_in) {
++			frame_desc->status = -DWC_E_NO_STREAM_RES;
++		} else {
++			frame_desc->status = -DWC_E_COMMUNICATION;
++		}
++		frame_desc->actual_length = 0;
++		break;
++	case DWC_OTG_HC_XFER_BABBLE_ERR:
++		urb->error_count++;
++		frame_desc->status = -DWC_E_OVERFLOW;
++		/* Don't need to update actual_length in this case. */
++		break;
++	case DWC_OTG_HC_XFER_XACT_ERR:
++		urb->error_count++;
++		frame_desc->status = -DWC_E_PROTOCOL;
++		frame_desc->actual_length =
++		    get_actual_xfer_length(hc, hc_regs, qtd, halt_status, NULL);
++
++		/* non DWORD-aligned buffer case handling. */
++		if (hc->align_buff && frame_desc->actual_length && hc->ep_is_in) {
++			dwc_memcpy(urb->buf + frame_desc->offset + qtd->isoc_split_offset,
++				   hc->qh->dw_align_buf, frame_desc->actual_length);
++		}
++		/* Skip whole frame */
++		if (hc->qh->do_split && (hc->ep_type == DWC_OTG_EP_TYPE_ISOC) &&
++		    hc->ep_is_in && hcd->core_if->dma_enable) {
++			qtd->complete_split = 0;
++			qtd->isoc_split_offset = 0;
++		}
++
++		break;
++	default:
++		DWC_ASSERT(1, "Unhandled _halt_status (%d)\n", halt_status);
++		break;
++	}
++	if (++qtd->isoc_frame_index == urb->packet_count) {
++		/*
++		 * urb->status is not used for isoc transfers.
++		 * The individual frame_desc statuses are used instead.
++		 */
++		hcd->fops->complete(hcd, urb->priv, urb, 0);
++		ret_val = DWC_OTG_HC_XFER_URB_COMPLETE;
++	} else {
++		ret_val = DWC_OTG_HC_XFER_COMPLETE;
++	}
++	return ret_val;
++}
++
++/**
++ * Frees the first QTD in the QH's list if free_qtd is 1. For non-periodic
++ * QHs, removes the QH from the active non-periodic schedule. If any QTDs are
++ * still linked to the QH, the QH is added to the end of the inactive
++ * non-periodic schedule. For periodic QHs, removes the QH from the periodic
++ * schedule if no more QTDs are linked to the QH.
++ */
++static void deactivate_qh(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh, int free_qtd)
++{
++	int continue_split = 0;
++	dwc_otg_qtd_t *qtd;
++
++	DWC_DEBUGPL(DBG_HCDV, "  %s(%p,%p,%d)\n", __func__, hcd, qh, free_qtd);
++
++	qtd = DWC_CIRCLEQ_FIRST(&qh->qtd_list);
++
++	if (qtd->complete_split) {
++		continue_split = 1;
++	} else if (qtd->isoc_split_pos == DWC_HCSPLIT_XACTPOS_MID ||
++		   qtd->isoc_split_pos == DWC_HCSPLIT_XACTPOS_END) {
++		continue_split = 1;
++	}
++
++	if (free_qtd) {
++		dwc_otg_hcd_qtd_remove_and_free(hcd, qtd, qh);
++		continue_split = 0;
++	}
++
++	qh->channel = NULL;
++	dwc_otg_hcd_qh_deactivate(hcd, qh, continue_split);
++}
++
++/**
++ * Releases a host channel for use by other transfers. Attempts to select and
++ * queue more transactions since at least one host channel is available.
++ *
++ * @param hcd The HCD state structure.
++ * @param hc The host channel to release.
++ * @param qtd The QTD associated with the host channel. This QTD may be freed
++ * if the transfer is complete or an error has occurred.
++ * @param halt_status Reason the channel is being released. This status
++ * determines the actions taken by this function.
++ */
++static void release_channel(dwc_otg_hcd_t * hcd,
++			    dwc_hc_t * hc,
++			    dwc_otg_qtd_t * qtd,
++			    dwc_otg_halt_status_e halt_status)
++{
++	dwc_otg_transaction_type_e tr_type;
++	int free_qtd;
++	dwc_irqflags_t flags;
++	dwc_spinlock_t *channel_lock = hcd->channel_lock;
++
++	int hog_port = 0;
++
++	DWC_DEBUGPL(DBG_HCDV, "  %s: channel %d, halt_status %d, xfer_len %d\n",
++		    __func__, hc->hc_num, halt_status, hc->xfer_len);
++
++	if(fiq_fsm_enable && hc->do_split) {
++		if(!hc->ep_is_in && hc->ep_type == UE_ISOCHRONOUS) {
++			if(hc->xact_pos == DWC_HCSPLIT_XACTPOS_MID ||
++					hc->xact_pos == DWC_HCSPLIT_XACTPOS_BEGIN) {
++				hog_port = 0;
++			}
++		}
++	}
++
++	switch (halt_status) {
++	case DWC_OTG_HC_XFER_URB_COMPLETE:
++		free_qtd = 1;
++		break;
++	case DWC_OTG_HC_XFER_AHB_ERR:
++	case DWC_OTG_HC_XFER_STALL:
++	case DWC_OTG_HC_XFER_BABBLE_ERR:
++		free_qtd = 1;
++		break;
++	case DWC_OTG_HC_XFER_XACT_ERR:
++		if (qtd->error_count >= 3) {
++			DWC_DEBUGPL(DBG_HCDV,
++				    "  Complete URB with transaction error\n");
++			free_qtd = 1;
++			qtd->urb->status = -DWC_E_PROTOCOL;
++			hcd->fops->complete(hcd, qtd->urb->priv,
++					    qtd->urb, -DWC_E_PROTOCOL);
++		} else {
++			free_qtd = 0;
++		}
++		break;
++	case DWC_OTG_HC_XFER_URB_DEQUEUE:
++		/*
++		 * The QTD has already been removed and the QH has been
++		 * deactivated. Don't want to do anything except release the
++		 * host channel and try to queue more transfers.
++		 */
++		goto cleanup;
++	case DWC_OTG_HC_XFER_NO_HALT_STATUS:
++		free_qtd = 0;
++		break;
++	case DWC_OTG_HC_XFER_PERIODIC_INCOMPLETE:
++		DWC_DEBUGPL(DBG_HCDV,
++			"  Complete URB with I/O error\n");
++		free_qtd = 1;
++		qtd->urb->status = -DWC_E_IO;
++		hcd->fops->complete(hcd, qtd->urb->priv,
++			qtd->urb, -DWC_E_IO);
++		break;
++	default:
++		free_qtd = 0;
++		break;
++	}
++
++	deactivate_qh(hcd, hc->qh, free_qtd);
++
++cleanup:
++	/*
++	 * Release the host channel for use by other transfers. The cleanup
++	 * function clears the channel interrupt enables and conditions, so
++	 * there's no need to clear the Channel Halted interrupt separately.
++	 */
++	if (fiq_fsm_enable && hcd->fiq_state->channel[hc->hc_num].fsm != FIQ_PASSTHROUGH)
++		dwc_otg_cleanup_fiq_channel(hcd, hc->hc_num);
++	dwc_otg_hc_cleanup(hcd->core_if, hc);
++	DWC_CIRCLEQ_INSERT_TAIL(&hcd->free_hc_list, hc, hc_list_entry);
++
++	if (!microframe_schedule) {
++		switch (hc->ep_type) {
++		case DWC_OTG_EP_TYPE_CONTROL:
++		case DWC_OTG_EP_TYPE_BULK:
++			hcd->non_periodic_channels--;
++			break;
++
++		default:
++			/*
++			 * Don't release reservations for periodic channels here.
++			 * That's done when a periodic transfer is descheduled (i.e.
++			 * when the QH is removed from the periodic schedule).
++			 */
++			break;
++		}
++	} else {
++
++		DWC_SPINLOCK_IRQSAVE(channel_lock, &flags);
++		hcd->available_host_channels++;
++		fiq_print(FIQDBG_INT, hcd->fiq_state, "AHC = %d ", hcd->available_host_channels);
++		DWC_SPINUNLOCK_IRQRESTORE(channel_lock, flags);
++	}
++
++	/* Try to queue more transfers now that there's a free channel. */
++	tr_type = dwc_otg_hcd_select_transactions(hcd);
++	if (tr_type != DWC_OTG_TRANSACTION_NONE) {
++		dwc_otg_hcd_queue_transactions(hcd, tr_type);
++	}
++}
++
++/**
++ * Halts a host channel. If the channel cannot be halted immediately because
++ * the request queue is full, this function ensures that the FIFO empty
++ * interrupt for the appropriate queue is enabled so that the halt request can
++ * be queued when there is space in the request queue.
++ *
++ * This function may also be called in DMA mode. In that case, the channel is
++ * simply released since the core always halts the channel automatically in
++ * DMA mode.
++ */
++static void halt_channel(dwc_otg_hcd_t * hcd,
++			 dwc_hc_t * hc,
++			 dwc_otg_qtd_t * qtd, dwc_otg_halt_status_e halt_status)
++{
++	if (hcd->core_if->dma_enable) {
++		release_channel(hcd, hc, qtd, halt_status);
++		return;
++	}
++
++	/* Slave mode processing... */
++	dwc_otg_hc_halt(hcd->core_if, hc, halt_status);
++
++	if (hc->halt_on_queue) {
++		gintmsk_data_t gintmsk = {.d32 = 0 };
++		dwc_otg_core_global_regs_t *global_regs;
++		global_regs = hcd->core_if->core_global_regs;
++
++		if (hc->ep_type == DWC_OTG_EP_TYPE_CONTROL ||
++		    hc->ep_type == DWC_OTG_EP_TYPE_BULK) {
++			/*
++			 * Make sure the Non-periodic Tx FIFO empty interrupt
++			 * is enabled so that the non-periodic schedule will
++			 * be processed.
++			 */
++			gintmsk.b.nptxfempty = 1;
++			if (fiq_enable) {
++				local_fiq_disable();
++				fiq_fsm_spin_lock(&hcd->fiq_state->lock);
++				DWC_MODIFY_REG32(&global_regs->gintmsk, 0, gintmsk.d32);
++				fiq_fsm_spin_unlock(&hcd->fiq_state->lock);
++				local_fiq_enable();
++			} else {
++				DWC_MODIFY_REG32(&global_regs->gintmsk, 0, gintmsk.d32);
++			}
++		} else {
++			/*
++			 * Move the QH from the periodic queued schedule to
++			 * the periodic assigned schedule. This allows the
++			 * halt to be queued when the periodic schedule is
++			 * processed.
++			 */
++			DWC_LIST_MOVE_HEAD(&hcd->periodic_sched_assigned,
++					   &hc->qh->qh_list_entry);
++
++			/*
++			 * Make sure the Periodic Tx FIFO Empty interrupt is
++			 * enabled so that the periodic schedule will be
++			 * processed.
++			 */
++			gintmsk.b.ptxfempty = 1;
++			if (fiq_enable) {
++				local_fiq_disable();
++				fiq_fsm_spin_lock(&hcd->fiq_state->lock);
++				DWC_MODIFY_REG32(&global_regs->gintmsk, 0, gintmsk.d32);
++				fiq_fsm_spin_unlock(&hcd->fiq_state->lock);
++				local_fiq_enable();
++			} else {
++				DWC_MODIFY_REG32(&global_regs->gintmsk, 0, gintmsk.d32);
++			}
++		}
++	}
++}
++
++/**
++ * Performs common cleanup for non-periodic transfers after a Transfer
++ * Complete interrupt. This function should be called after any endpoint type
++ * specific handling is finished to release the host channel.
++ */
++static void complete_non_periodic_xfer(dwc_otg_hcd_t * hcd,
++				       dwc_hc_t * hc,
++				       dwc_otg_hc_regs_t * hc_regs,
++				       dwc_otg_qtd_t * qtd,
++				       dwc_otg_halt_status_e halt_status)
++{
++	hcint_data_t hcint;
++
++	qtd->error_count = 0;
++
++	hcint.d32 = DWC_READ_REG32(&hc_regs->hcint);
++	if (hcint.b.nyet) {
++		/*
++		 * Got a NYET on the last transaction of the transfer. This
++		 * means that the endpoint should be in the PING state at the
++		 * beginning of the next transfer.
++		 */
++		hc->qh->ping_state = 1;
++		clear_hc_int(hc_regs, nyet);
++	}
++
++	/*
++	 * Always halt and release the host channel to make it available for
++	 * more transfers. There may still be more phases for a control
++	 * transfer or more data packets for a bulk transfer at this point,
++	 * but the host channel is still halted. A channel will be reassigned
++	 * to the transfer when the non-periodic schedule is processed after
++	 * the channel is released. This allows transactions to be queued
++	 * properly via dwc_otg_hcd_queue_transactions, which also enables the
++	 * Tx FIFO Empty interrupt if necessary.
++	 */
++	if (hc->ep_is_in) {
++		/*
++		 * IN transfers in Slave mode require an explicit disable to
++		 * halt the channel. (In DMA mode, this call simply releases
++		 * the channel.)
++		 */
++		halt_channel(hcd, hc, qtd, halt_status);
++	} else {
++		/*
++		 * The channel is automatically disabled by the core for OUT
++		 * transfers in Slave mode.
++		 */
++		release_channel(hcd, hc, qtd, halt_status);
++	}
++}
++
++/**
++ * Performs common cleanup for periodic transfers after a Transfer Complete
++ * interrupt. This function should be called after any endpoint type specific
++ * handling is finished to release the host channel.
++ */
++static void complete_periodic_xfer(dwc_otg_hcd_t * hcd,
++				   dwc_hc_t * hc,
++				   dwc_otg_hc_regs_t * hc_regs,
++				   dwc_otg_qtd_t * qtd,
++				   dwc_otg_halt_status_e halt_status)
++{
++	hctsiz_data_t hctsiz;
++	qtd->error_count = 0;
++
++	hctsiz.d32 = DWC_READ_REG32(&hc_regs->hctsiz);
++	if (!hc->ep_is_in || hctsiz.b.pktcnt == 0) {
++		/* Core halts channel in these cases. */
++		release_channel(hcd, hc, qtd, halt_status);
++	} else {
++		/* Flush any outstanding requests from the Tx queue. */
++		halt_channel(hcd, hc, qtd, halt_status);
++	}
++}
++
++static int32_t handle_xfercomp_isoc_split_in(dwc_otg_hcd_t * hcd,
++					     dwc_hc_t * hc,
++					     dwc_otg_hc_regs_t * hc_regs,
++					     dwc_otg_qtd_t * qtd)
++{
++	uint32_t len;
++	struct dwc_otg_hcd_iso_packet_desc *frame_desc;
++	frame_desc = &qtd->urb->iso_descs[qtd->isoc_frame_index];
++
++	len = get_actual_xfer_length(hc, hc_regs, qtd,
++				     DWC_OTG_HC_XFER_COMPLETE, NULL);
++
++	if (!len) {
++		qtd->complete_split = 0;
++		qtd->isoc_split_offset = 0;
++		return 0;
++	}
++	frame_desc->actual_length += len;
++
++	if (hc->align_buff && len)
++		dwc_memcpy(qtd->urb->buf + frame_desc->offset +
++			   qtd->isoc_split_offset, hc->qh->dw_align_buf, len);
++	qtd->isoc_split_offset += len;
++
++	if (frame_desc->length == frame_desc->actual_length) {
++		frame_desc->status = 0;
++		qtd->isoc_frame_index++;
++		qtd->complete_split = 0;
++		qtd->isoc_split_offset = 0;
++	}
++
++	if (qtd->isoc_frame_index == qtd->urb->packet_count) {
++		hcd->fops->complete(hcd, qtd->urb->priv, qtd->urb, 0);
++		release_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_URB_COMPLETE);
++	} else {
++		release_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_NO_HALT_STATUS);
++	}
++
++	return 1;		/* Indicates that channel released */
++}
++
++/**
++ * Handles a host channel Transfer Complete interrupt. This handler may be
++ * called in either DMA mode or Slave mode.
++ */
++static int32_t handle_hc_xfercomp_intr(dwc_otg_hcd_t * hcd,
++				       dwc_hc_t * hc,
++				       dwc_otg_hc_regs_t * hc_regs,
++				       dwc_otg_qtd_t * qtd)
++{
++	int urb_xfer_done;
++	dwc_otg_halt_status_e halt_status = DWC_OTG_HC_XFER_COMPLETE;
++	dwc_otg_hcd_urb_t *urb = qtd->urb;
++	int pipe_type = dwc_otg_hcd_get_pipe_type(&urb->pipe_info);
++
++	DWC_DEBUGPL(DBG_HCDI, "--Host Channel %d Interrupt: "
++		    "Transfer Complete--\n", hc->hc_num);
++
++	if (hcd->core_if->dma_desc_enable) {
++		dwc_otg_hcd_complete_xfer_ddma(hcd, hc, hc_regs, halt_status);
++		if (pipe_type == UE_ISOCHRONOUS) {
++			/* Do not disable the interrupt, just clear it */
++			clear_hc_int(hc_regs, xfercomp);
++			return 1;
++		}
++		goto handle_xfercomp_done;
++	}
++
++	/*
++	 * Handle xfer complete on CSPLIT.
++	 */
++
++	if (hc->qh->do_split) {
++		if ((hc->ep_type == DWC_OTG_EP_TYPE_ISOC) && hc->ep_is_in
++		    && hcd->core_if->dma_enable) {
++			if (qtd->complete_split
++			    && handle_xfercomp_isoc_split_in(hcd, hc, hc_regs,
++							     qtd))
++				goto handle_xfercomp_done;
++		} else {
++			qtd->complete_split = 0;
++		}
++	}
++
++	/* Update the QTD and URB states. */
++	switch (pipe_type) {
++	case UE_CONTROL:
++		switch (qtd->control_phase) {
++		case DWC_OTG_CONTROL_SETUP:
++			if (urb->length > 0) {
++				qtd->control_phase = DWC_OTG_CONTROL_DATA;
++			} else {
++				qtd->control_phase = DWC_OTG_CONTROL_STATUS;
++			}
++			DWC_DEBUGPL(DBG_HCDV,
++				    "  Control setup transaction done\n");
++			halt_status = DWC_OTG_HC_XFER_COMPLETE;
++			break;
++		case DWC_OTG_CONTROL_DATA:{
++				urb_xfer_done =
++				    update_urb_state_xfer_comp(hc, hc_regs, urb,
++							       qtd);
++				if (urb_xfer_done) {
++					qtd->control_phase =
++					    DWC_OTG_CONTROL_STATUS;
++					DWC_DEBUGPL(DBG_HCDV,
++						    "  Control data transfer done\n");
++				} else {
++					dwc_otg_hcd_save_data_toggle(hc, hc_regs, qtd);
++				}
++				halt_status = DWC_OTG_HC_XFER_COMPLETE;
++				break;
++			}
++		case DWC_OTG_CONTROL_STATUS:
++			DWC_DEBUGPL(DBG_HCDV, "  Control transfer complete\n");
++			if (urb->status == -DWC_E_IN_PROGRESS) {
++				urb->status = 0;
++			}
++			hcd->fops->complete(hcd, urb->priv, urb, urb->status);
++			halt_status = DWC_OTG_HC_XFER_URB_COMPLETE;
++			break;
++		}
++
++		complete_non_periodic_xfer(hcd, hc, hc_regs, qtd, halt_status);
++		break;
++	case UE_BULK:
++		DWC_DEBUGPL(DBG_HCDV, "  Bulk transfer complete\n");
++		urb_xfer_done =
++		    update_urb_state_xfer_comp(hc, hc_regs, urb, qtd);
++		if (urb_xfer_done) {
++			hcd->fops->complete(hcd, urb->priv, urb, urb->status);
++			halt_status = DWC_OTG_HC_XFER_URB_COMPLETE;
++		} else {
++			halt_status = DWC_OTG_HC_XFER_COMPLETE;
++		}
++
++		dwc_otg_hcd_save_data_toggle(hc, hc_regs, qtd);
++		complete_non_periodic_xfer(hcd, hc, hc_regs, qtd, halt_status);
++		break;
++	case UE_INTERRUPT:
++		DWC_DEBUGPL(DBG_HCDV, "  Interrupt transfer complete\n");
++		urb_xfer_done =
++			update_urb_state_xfer_comp(hc, hc_regs, urb, qtd);
++
++		/*
++		 * Interrupt URB is done on the first transfer complete
++		 * interrupt.
++		 */
++		if (urb_xfer_done) {
++				hcd->fops->complete(hcd, urb->priv, urb, urb->status);
++				halt_status = DWC_OTG_HC_XFER_URB_COMPLETE;
++		} else {
++				halt_status = DWC_OTG_HC_XFER_COMPLETE;
++		}
++
++		dwc_otg_hcd_save_data_toggle(hc, hc_regs, qtd);
++		complete_periodic_xfer(hcd, hc, hc_regs, qtd, halt_status);
++		break;
++	case UE_ISOCHRONOUS:
++		DWC_DEBUGPL(DBG_HCDV, "  Isochronous transfer complete\n");
++		if (qtd->isoc_split_pos == DWC_HCSPLIT_XACTPOS_ALL) {
++			halt_status =
++			    update_isoc_urb_state(hcd, hc, hc_regs, qtd,
++						  DWC_OTG_HC_XFER_COMPLETE);
++		}
++		complete_periodic_xfer(hcd, hc, hc_regs, qtd, halt_status);
++		break;
++	}
++
++handle_xfercomp_done:
++	disable_hc_int(hc_regs, xfercompl);
++
++	return 1;
++}
++
++/**
++ * Handles a host channel STALL interrupt. This handler may be called in
++ * either DMA mode or Slave mode.
++ */
++static int32_t handle_hc_stall_intr(dwc_otg_hcd_t * hcd,
++				    dwc_hc_t * hc,
++				    dwc_otg_hc_regs_t * hc_regs,
++				    dwc_otg_qtd_t * qtd)
++{
++	dwc_otg_hcd_urb_t *urb = qtd->urb;
++	int pipe_type = dwc_otg_hcd_get_pipe_type(&urb->pipe_info);
++
++	DWC_DEBUGPL(DBG_HCD, "--Host Channel %d Interrupt: "
++		    "STALL Received--\n", hc->hc_num);
++
++	if (hcd->core_if->dma_desc_enable) {
++		dwc_otg_hcd_complete_xfer_ddma(hcd, hc, hc_regs, DWC_OTG_HC_XFER_STALL);
++		goto handle_stall_done;
++	}
++
++	if (pipe_type == UE_CONTROL) {
++		hcd->fops->complete(hcd, urb->priv, urb, -DWC_E_PIPE);
++	}
++
++	if (pipe_type == UE_BULK || pipe_type == UE_INTERRUPT) {
++		hcd->fops->complete(hcd, urb->priv, urb, -DWC_E_PIPE);
++		/*
++		 * USB protocol requires resetting the data toggle for bulk
++		 * and interrupt endpoints when a CLEAR_FEATURE(ENDPOINT_HALT)
++		 * setup command is issued to the endpoint. Anticipate the
++		 * CLEAR_FEATURE command since a STALL has occurred and reset
++		 * the data toggle now.
++		 */
++		hc->qh->data_toggle = 0;
++	}
++
++	halt_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_STALL);
++
++handle_stall_done:
++	disable_hc_int(hc_regs, stall);
++
++	return 1;
++}
++
++/*
++ * Updates the state of the URB when a transfer has been stopped due to an
++ * abnormal condition before the transfer completes. Modifies the
++ * actual_length field of the URB to reflect the number of bytes that have
++ * actually been transferred via the host channel.
++ */
++static void update_urb_state_xfer_intr(dwc_hc_t * hc,
++				       dwc_otg_hc_regs_t * hc_regs,
++				       dwc_otg_hcd_urb_t * urb,
++				       dwc_otg_qtd_t * qtd,
++				       dwc_otg_halt_status_e halt_status)
++{
++	uint32_t bytes_transferred = get_actual_xfer_length(hc, hc_regs, qtd,
++							    halt_status, NULL);
++	/* non DWORD-aligned buffer case handling. */
++	if (hc->align_buff && bytes_transferred && hc->ep_is_in) {
++		dwc_memcpy(urb->buf + urb->actual_length, hc->qh->dw_align_buf,
++			   bytes_transferred);
++	}
++
++	urb->actual_length += bytes_transferred;
++
++#ifdef DEBUG
++	{
++		hctsiz_data_t hctsiz;
++		hctsiz.d32 = DWC_READ_REG32(&hc_regs->hctsiz);
++		DWC_DEBUGPL(DBG_HCDV, "DWC_otg: %s: %s, channel %d\n",
++			    __func__, (hc->ep_is_in ? "IN" : "OUT"),
++			    hc->hc_num);
++		DWC_DEBUGPL(DBG_HCDV, "  hc->start_pkt_count %d\n",
++			    hc->start_pkt_count);
++		DWC_DEBUGPL(DBG_HCDV, "  hctsiz.pktcnt %d\n", hctsiz.b.pktcnt);
++		DWC_DEBUGPL(DBG_HCDV, "  hc->max_packet %d\n", hc->max_packet);
++		DWC_DEBUGPL(DBG_HCDV, "  bytes_transferred %d\n",
++			    bytes_transferred);
++		DWC_DEBUGPL(DBG_HCDV, "  urb->actual_length %d\n",
++			    urb->actual_length);
++		DWC_DEBUGPL(DBG_HCDV, "  urb->transfer_buffer_length %d\n",
++			    urb->length);
++	}
++#endif
++}
++
++/**
++ * Handles a host channel NAK interrupt. This handler may be called in either
++ * DMA mode or Slave mode.
++ */
++static int32_t handle_hc_nak_intr(dwc_otg_hcd_t * hcd,
++				  dwc_hc_t * hc,
++				  dwc_otg_hc_regs_t * hc_regs,
++				  dwc_otg_qtd_t * qtd)
++{
++	DWC_DEBUGPL(DBG_HCDI, "--Host Channel %d Interrupt: "
++		    "NAK Received--\n", hc->hc_num);
++
++	/*
++	 * When we get bulk NAKs, remember this so we hold off on this qh until
++	 * the beginning of the next frame.
++	 */
++	switch(dwc_otg_hcd_get_pipe_type(&qtd->urb->pipe_info)) {
++		case UE_BULK:
++		case UE_CONTROL:
++		if (nak_holdoff && qtd->qh->do_split)
++			hc->qh->nak_frame = dwc_otg_hcd_get_frame_number(hcd);
++	}
++
++	/*
++	 * Handle NAK for IN/OUT SSPLIT/CSPLIT transfers, bulk, control, and
++	 * interrupt.  Re-start the SSPLIT transfer.
++	 */
++	if (hc->do_split) {
++		if (hc->complete_split) {
++			qtd->error_count = 0;
++		}
++		qtd->complete_split = 0;
++		halt_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_NAK);
++		goto handle_nak_done;
++	}
++
++	switch (dwc_otg_hcd_get_pipe_type(&qtd->urb->pipe_info)) {
++	case UE_CONTROL:
++	case UE_BULK:
++		if (hcd->core_if->dma_enable && hc->ep_is_in) {
++			/*
++			 * NAK interrupts are enabled on bulk/control IN
++			 * transfers in DMA mode for the sole purpose of
++			 * resetting the error count after a transaction error
++			 * occurs. The core will continue transferring data.
++			 * Disable other interrupts unmasked for the same
++			 * reason.
++			 */
++			disable_hc_int(hc_regs, datatglerr);
++			disable_hc_int(hc_regs, ack);
++			qtd->error_count = 0;
++			goto handle_nak_done;
++		}
++
++		/*
++		 * NAK interrupts normally occur during OUT transfers in DMA
++		 * or Slave mode. For IN transfers, more requests will be
++		 * queued as request queue space is available.
++		 */
++		qtd->error_count = 0;
++
++		if (!hc->qh->ping_state) {
++			update_urb_state_xfer_intr(hc, hc_regs,
++						   qtd->urb, qtd,
++						   DWC_OTG_HC_XFER_NAK);
++			dwc_otg_hcd_save_data_toggle(hc, hc_regs, qtd);
++
++			if (hc->speed == DWC_OTG_EP_SPEED_HIGH)
++				hc->qh->ping_state = 1;
++		}
++
++		/*
++		 * Halt the channel so the transfer can be re-started from
++		 * the appropriate point or the PING protocol will
++		 * start/continue.
++		 */
++		halt_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_NAK);
++		break;
++	case UE_INTERRUPT:
++		qtd->error_count = 0;
++		halt_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_NAK);
++		break;
++	case UE_ISOCHRONOUS:
++		/* Should never get called for isochronous transfers. */
++		DWC_ASSERT(1, "NACK interrupt for ISOC transfer\n");
++		break;
++	}
++
++handle_nak_done:
++	disable_hc_int(hc_regs, nak);
++
++	return 1;
++}
++
++/**
++ * Handles a host channel ACK interrupt. This interrupt is enabled when
++ * performing the PING protocol in Slave mode, when errors occur during
++ * either Slave mode or DMA mode, and during Start Split transactions.
++ */
++static int32_t handle_hc_ack_intr(dwc_otg_hcd_t * hcd,
++				  dwc_hc_t * hc,
++				  dwc_otg_hc_regs_t * hc_regs,
++				  dwc_otg_qtd_t * qtd)
++{
++	DWC_DEBUGPL(DBG_HCDI, "--Host Channel %d Interrupt: "
++		    "ACK Received--\n", hc->hc_num);
++
++	if (hc->do_split) {
++		/*
++		 * Handle ACK on SSPLIT.
++		 * ACK should not occur in CSPLIT.
++		 */
++		if (!hc->ep_is_in && hc->data_pid_start != DWC_OTG_HC_PID_SETUP) {
++			qtd->ssplit_out_xfer_count = hc->xfer_len;
++		}
++		if (!(hc->ep_type == DWC_OTG_EP_TYPE_ISOC && !hc->ep_is_in)) {
++			/* Don't need complete for isochronous out transfers. */
++			qtd->complete_split = 1;
++		}
++
++		/* ISOC OUT */
++		if (hc->ep_type == DWC_OTG_EP_TYPE_ISOC && !hc->ep_is_in) {
++			switch (hc->xact_pos) {
++			case DWC_HCSPLIT_XACTPOS_ALL:
++				break;
++			case DWC_HCSPLIT_XACTPOS_END:
++				qtd->isoc_split_pos = DWC_HCSPLIT_XACTPOS_ALL;
++				qtd->isoc_split_offset = 0;
++				break;
++			case DWC_HCSPLIT_XACTPOS_BEGIN:
++			case DWC_HCSPLIT_XACTPOS_MID:
++				/*
++				 * For BEGIN or MID, calculate the length for
++				 * the next microframe to determine the correct
++				 * SSPLIT token, either MID or END.
++				 */
++				{
++					struct dwc_otg_hcd_iso_packet_desc
++					*frame_desc;
++
++					frame_desc =
++					    &qtd->urb->
++					    iso_descs[qtd->isoc_frame_index];
++					qtd->isoc_split_offset += 188;
++
++					if ((frame_desc->length -
++					     qtd->isoc_split_offset) <= 188) {
++						qtd->isoc_split_pos =
++						    DWC_HCSPLIT_XACTPOS_END;
++					} else {
++						qtd->isoc_split_pos =
++						    DWC_HCSPLIT_XACTPOS_MID;
++					}
++
++				}
++				break;
++			}
++		} else {
++			halt_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_ACK);
++		}
++	} else {
++		/*
++		 * An unmasked ACK on a non-split DMA transaction is
++		 * for the sole purpose of resetting error counts. Disable other
++		 * interrupts unmasked for the same reason.
++		 */
++		if(hcd->core_if->dma_enable) {
++			disable_hc_int(hc_regs, datatglerr);
++			disable_hc_int(hc_regs, nak);
++		}
++		qtd->error_count = 0;
++
++		if (hc->qh->ping_state) {
++			hc->qh->ping_state = 0;
++			/*
++			 * Halt the channel so the transfer can be re-started
++			 * from the appropriate point. This only happens in
++			 * Slave mode. In DMA mode, the ping_state is cleared
++			 * when the transfer is started because the core
++			 * automatically executes the PING, then the transfer.
++			 */
++			halt_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_ACK);
++		}
++	}
++
++	/*
++	 * If the ACK occurred when _not_ in the PING state, let the channel
++	 * continue transferring data after clearing the error count.
++	 */
++
++	disable_hc_int(hc_regs, ack);
++
++	return 1;
++}
++
++/**
++ * Handles a host channel NYET interrupt. This interrupt should only occur on
++ * Bulk and Control OUT endpoints and for complete split transactions. If a
++ * NYET occurs at the same time as a Transfer Complete interrupt, it is
++ * handled in the xfercomp interrupt handler, not here. This handler may be
++ * called in either DMA mode or Slave mode.
++ */
++static int32_t handle_hc_nyet_intr(dwc_otg_hcd_t * hcd,
++				   dwc_hc_t * hc,
++				   dwc_otg_hc_regs_t * hc_regs,
++				   dwc_otg_qtd_t * qtd)
++{
++	DWC_DEBUGPL(DBG_HCDI, "--Host Channel %d Interrupt: "
++		    "NYET Received--\n", hc->hc_num);
++
++	/*
++	 * NYET on CSPLIT
++	 * re-do the CSPLIT immediately on non-periodic
++	 */
++	if (hc->do_split && hc->complete_split) {
++		if (hc->ep_is_in && (hc->ep_type == DWC_OTG_EP_TYPE_ISOC)
++		    && hcd->core_if->dma_enable) {
++			qtd->complete_split = 0;
++			qtd->isoc_split_offset = 0;
++			if (++qtd->isoc_frame_index == qtd->urb->packet_count) {
++				hcd->fops->complete(hcd, qtd->urb->priv, qtd->urb, 0);
++				release_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_URB_COMPLETE);
++			}
++			else
++				release_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_NO_HALT_STATUS);
++			goto handle_nyet_done;
++		}
++
++		if (hc->ep_type == DWC_OTG_EP_TYPE_INTR ||
++		    hc->ep_type == DWC_OTG_EP_TYPE_ISOC) {
++			int frnum = dwc_otg_hcd_get_frame_number(hcd);
++
++			// With the FIQ running we only ever see the failed NYET
++			if (dwc_full_frame_num(frnum) !=
++			    dwc_full_frame_num(hc->qh->sched_frame) ||
++			    fiq_fsm_enable) {
++				/*
++				 * No longer in the same full speed frame.
++				 * Treat this as a transaction error.
++				 */
++#if 0
++				/** @todo Fix system performance so this can
++				 * be treated as an error. Right now complete
++				 * splits cannot be scheduled precisely enough
++				 * due to other system activity, so this error
++				 * occurs regularly in Slave mode.
++				 */
++				qtd->error_count++;
++#endif
++				qtd->complete_split = 0;
++				halt_channel(hcd, hc, qtd,
++					     DWC_OTG_HC_XFER_XACT_ERR);
++				/** @todo add support for isoc release */
++				goto handle_nyet_done;
++			}
++		}
++
++		halt_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_NYET);
++		goto handle_nyet_done;
++	}
++
++	hc->qh->ping_state = 1;
++	qtd->error_count = 0;
++
++	update_urb_state_xfer_intr(hc, hc_regs, qtd->urb, qtd,
++				   DWC_OTG_HC_XFER_NYET);
++	dwc_otg_hcd_save_data_toggle(hc, hc_regs, qtd);
++
++	/*
++	 * Halt the channel and re-start the transfer so the PING
++	 * protocol will start.
++	 */
++	halt_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_NYET);
++
++handle_nyet_done:
++	disable_hc_int(hc_regs, nyet);
++	return 1;
++}
++
++/**
++ * Handles a host channel babble interrupt. This handler may be called in
++ * either DMA mode or Slave mode.
++ */
++static int32_t handle_hc_babble_intr(dwc_otg_hcd_t * hcd,
++				     dwc_hc_t * hc,
++				     dwc_otg_hc_regs_t * hc_regs,
++				     dwc_otg_qtd_t * qtd)
++{
++	DWC_DEBUGPL(DBG_HCDI, "--Host Channel %d Interrupt: "
++		    "Babble Error--\n", hc->hc_num);
++
++	if (hcd->core_if->dma_desc_enable) {
++		dwc_otg_hcd_complete_xfer_ddma(hcd, hc, hc_regs,
++					       DWC_OTG_HC_XFER_BABBLE_ERR);
++		goto handle_babble_done;
++	}
++
++	if (hc->ep_type != DWC_OTG_EP_TYPE_ISOC) {
++		hcd->fops->complete(hcd, qtd->urb->priv,
++				    qtd->urb, -DWC_E_OVERFLOW);
++		halt_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_BABBLE_ERR);
++	} else {
++		dwc_otg_halt_status_e halt_status;
++		halt_status = update_isoc_urb_state(hcd, hc, hc_regs, qtd,
++						    DWC_OTG_HC_XFER_BABBLE_ERR);
++		halt_channel(hcd, hc, qtd, halt_status);
++	}
++
++handle_babble_done:
++	disable_hc_int(hc_regs, bblerr);
++	return 1;
++}
++
++/**
++ * Handles a host channel AHB error interrupt. This handler is only called in
++ * DMA mode.
++ */
++static int32_t handle_hc_ahberr_intr(dwc_otg_hcd_t * hcd,
++				     dwc_hc_t * hc,
++				     dwc_otg_hc_regs_t * hc_regs,
++				     dwc_otg_qtd_t * qtd)
++{
++	hcchar_data_t hcchar;
++	hcsplt_data_t hcsplt;
++	hctsiz_data_t hctsiz;
++	uint32_t hcdma;
++	char *pipetype, *speed;
++
++	dwc_otg_hcd_urb_t *urb = qtd->urb;
++
++	DWC_DEBUGPL(DBG_HCDI, "--Host Channel %d Interrupt: "
++		    "AHB Error--\n", hc->hc_num);
++
++	hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
++	hcsplt.d32 = DWC_READ_REG32(&hc_regs->hcsplt);
++	hctsiz.d32 = DWC_READ_REG32(&hc_regs->hctsiz);
++	hcdma = DWC_READ_REG32(&hc_regs->hcdma);
++
++	DWC_ERROR("AHB ERROR, Channel %d\n", hc->hc_num);
++	DWC_ERROR("  hcchar 0x%08x, hcsplt 0x%08x\n", hcchar.d32, hcsplt.d32);
++	DWC_ERROR("  hctsiz 0x%08x, hcdma 0x%08x\n", hctsiz.d32, hcdma);
++	DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD URB Enqueue\n");
++	DWC_ERROR("  Device address: %d\n",
++		  dwc_otg_hcd_get_dev_addr(&urb->pipe_info));
++	DWC_ERROR("  Endpoint: %d, %s\n",
++		  dwc_otg_hcd_get_ep_num(&urb->pipe_info),
++		  (dwc_otg_hcd_is_pipe_in(&urb->pipe_info) ? "IN" : "OUT"));
++
++	switch (dwc_otg_hcd_get_pipe_type(&urb->pipe_info)) {
++	case UE_CONTROL:
++		pipetype = "CONTROL";
++		break;
++	case UE_BULK:
++		pipetype = "BULK";
++		break;
++	case UE_INTERRUPT:
++		pipetype = "INTERRUPT";
++		break;
++	case UE_ISOCHRONOUS:
++		pipetype = "ISOCHRONOUS";
++		break;
++	default:
++		pipetype = "UNKNOWN";
++		break;
++	}
++
++	DWC_ERROR("  Endpoint type: %s\n", pipetype);
++
++	switch (hc->speed) {
++	case DWC_OTG_EP_SPEED_HIGH:
++		speed = "HIGH";
++		break;
++	case DWC_OTG_EP_SPEED_FULL:
++		speed = "FULL";
++		break;
++	case DWC_OTG_EP_SPEED_LOW:
++		speed = "LOW";
++		break;
++	default:
++		speed = "UNKNOWN";
++		break;
++	};
++
++	DWC_ERROR("  Speed: %s\n", speed);
++
++	DWC_ERROR("  Max packet size: %d\n",
++		  dwc_otg_hcd_get_mps(&urb->pipe_info));
++	DWC_ERROR("  Data buffer length: %d\n", urb->length);
++	DWC_ERROR("  Transfer buffer: %p, Transfer DMA: %p\n",
++		  urb->buf, (void *)urb->dma);
++	DWC_ERROR("  Setup buffer: %p, Setup DMA: %p\n",
++		  urb->setup_packet, (void *)urb->setup_dma);
++	DWC_ERROR("  Interval: %d\n", urb->interval);
++
++	/* Core halts the channel for Descriptor DMA mode */
++	if (hcd->core_if->dma_desc_enable) {
++		dwc_otg_hcd_complete_xfer_ddma(hcd, hc, hc_regs,
++					       DWC_OTG_HC_XFER_AHB_ERR);
++		goto handle_ahberr_done;
++	}
++
++	hcd->fops->complete(hcd, urb->priv, urb, -DWC_E_IO);
++
++	/*
++	 * Force a channel halt. Don't call halt_channel because that won't
++	 * write to the HCCHARn register in DMA mode to force the halt.
++	 */
++	dwc_otg_hc_halt(hcd->core_if, hc, DWC_OTG_HC_XFER_AHB_ERR);
++handle_ahberr_done:
++	disable_hc_int(hc_regs, ahberr);
++	return 1;
++}
++
++/**
++ * Handles a host channel transaction error interrupt. This handler may be
++ * called in either DMA mode or Slave mode.
++ */
++static int32_t handle_hc_xacterr_intr(dwc_otg_hcd_t * hcd,
++				      dwc_hc_t * hc,
++				      dwc_otg_hc_regs_t * hc_regs,
++				      dwc_otg_qtd_t * qtd)
++{
++	DWC_DEBUGPL(DBG_HCDI, "--Host Channel %d Interrupt: "
++		    "Transaction Error--\n", hc->hc_num);
++
++	if (hcd->core_if->dma_desc_enable) {
++		dwc_otg_hcd_complete_xfer_ddma(hcd, hc, hc_regs,
++					       DWC_OTG_HC_XFER_XACT_ERR);
++		goto handle_xacterr_done;
++	}
++
++	switch (dwc_otg_hcd_get_pipe_type(&qtd->urb->pipe_info)) {
++	case UE_CONTROL:
++	case UE_BULK:
++		qtd->error_count++;
++		if (!hc->qh->ping_state) {
++
++			update_urb_state_xfer_intr(hc, hc_regs,
++						   qtd->urb, qtd,
++						   DWC_OTG_HC_XFER_XACT_ERR);
++			dwc_otg_hcd_save_data_toggle(hc, hc_regs, qtd);
++			if (!hc->ep_is_in && hc->speed == DWC_OTG_EP_SPEED_HIGH) {
++				hc->qh->ping_state = 1;
++			}
++		}
++
++		/*
++		 * Halt the channel so the transfer can be re-started from
++		 * the appropriate point or the PING protocol will start.
++		 */
++		halt_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_XACT_ERR);
++		break;
++	case UE_INTERRUPT:
++		qtd->error_count++;
++		if (hc->do_split && hc->complete_split) {
++			qtd->complete_split = 0;
++		}
++		halt_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_XACT_ERR);
++		break;
++	case UE_ISOCHRONOUS:
++		{
++			dwc_otg_halt_status_e halt_status;
++			halt_status =
++			    update_isoc_urb_state(hcd, hc, hc_regs, qtd,
++						  DWC_OTG_HC_XFER_XACT_ERR);
++
++			halt_channel(hcd, hc, qtd, halt_status);
++		}
++		break;
++	}
++handle_xacterr_done:
++	disable_hc_int(hc_regs, xacterr);
++
++	return 1;
++}
++
++/**
++ * Handles a host channel frame overrun interrupt. This handler may be called
++ * in either DMA mode or Slave mode.
++ */
++static int32_t handle_hc_frmovrun_intr(dwc_otg_hcd_t * hcd,
++				       dwc_hc_t * hc,
++				       dwc_otg_hc_regs_t * hc_regs,
++				       dwc_otg_qtd_t * qtd)
++{
++	DWC_DEBUGPL(DBG_HCDI, "--Host Channel %d Interrupt: "
++		    "Frame Overrun--\n", hc->hc_num);
++
++	switch (dwc_otg_hcd_get_pipe_type(&qtd->urb->pipe_info)) {
++	case UE_CONTROL:
++	case UE_BULK:
++		break;
++	case UE_INTERRUPT:
++		halt_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_FRAME_OVERRUN);
++		break;
++	case UE_ISOCHRONOUS:
++		{
++			dwc_otg_halt_status_e halt_status;
++			halt_status =
++			    update_isoc_urb_state(hcd, hc, hc_regs, qtd,
++						  DWC_OTG_HC_XFER_FRAME_OVERRUN);
++
++			halt_channel(hcd, hc, qtd, halt_status);
++		}
++		break;
++	}
++
++	disable_hc_int(hc_regs, frmovrun);
++
++	return 1;
++}
++
++/**
++ * Handles a host channel data toggle error interrupt. This handler may be
++ * called in either DMA mode or Slave mode.
++ */
++static int32_t handle_hc_datatglerr_intr(dwc_otg_hcd_t * hcd,
++					 dwc_hc_t * hc,
++					 dwc_otg_hc_regs_t * hc_regs,
++					 dwc_otg_qtd_t * qtd)
++{
++	DWC_DEBUGPL(DBG_HCDI, "--Host Channel %d Interrupt: "
++		"Data Toggle Error on %s transfer--\n",
++		hc->hc_num, (hc->ep_is_in ? "IN" : "OUT"));
++
++	/* A data toggle error on a split transaction causes the host channel to
++	 * halt. Restart the transfer. */
++	if(hc->qh->do_split)
++	{
++		qtd->error_count++;
++		dwc_otg_hcd_save_data_toggle(hc, hc_regs, qtd);
++		update_urb_state_xfer_intr(hc, hc_regs,
++			qtd->urb, qtd, DWC_OTG_HC_XFER_XACT_ERR);
++		halt_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_XACT_ERR);
++	} else if (hc->ep_is_in) {
++		/* On a non-split DMA transaction, the data toggle error interrupt is
++		 * unmasked solely to reset the error count. Disable the other
++		 * interrupts that were unmasked for the same reason.
++		 */
++		if(hcd->core_if->dma_enable) {
++			disable_hc_int(hc_regs, ack);
++			disable_hc_int(hc_regs, nak);
++		}
++		qtd->error_count = 0;
++	}
++
++	disable_hc_int(hc_regs, datatglerr);
++
++	return 1;
++}
++
++#ifdef DEBUG
++/**
++ * This function is for debug only. It checks that a valid halt status is set
++ * and that HCCHARn.chdis is clear. If there's a problem, corrective action is
++ * taken and a warning is issued.
++ * @return 1 if halt status is ok, 0 otherwise.
++ */
++static inline int halt_status_ok(dwc_otg_hcd_t * hcd,
++				 dwc_hc_t * hc,
++				 dwc_otg_hc_regs_t * hc_regs,
++				 dwc_otg_qtd_t * qtd)
++{
++	hcchar_data_t hcchar;
++	hctsiz_data_t hctsiz;
++	hcint_data_t hcint;
++	hcintmsk_data_t hcintmsk;
++	hcsplt_data_t hcsplt;
++
++	if (hc->halt_status == DWC_OTG_HC_XFER_NO_HALT_STATUS) {
++		/*
++		 * This code is here only as a check. This condition should
++		 * never happen. Ignore the halt if it does occur.
++		 */
++		hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
++		hctsiz.d32 = DWC_READ_REG32(&hc_regs->hctsiz);
++		hcint.d32 = DWC_READ_REG32(&hc_regs->hcint);
++		hcintmsk.d32 = DWC_READ_REG32(&hc_regs->hcintmsk);
++		hcsplt.d32 = DWC_READ_REG32(&hc_regs->hcsplt);
++		DWC_WARN
++		    ("%s: hc->halt_status == DWC_OTG_HC_XFER_NO_HALT_STATUS, "
++		     "channel %d, hcchar 0x%08x, hctsiz 0x%08x, "
++		     "hcint 0x%08x, hcintmsk 0x%08x, "
++		     "hcsplt 0x%08x, qtd->complete_split %d\n", __func__,
++		     hc->hc_num, hcchar.d32, hctsiz.d32, hcint.d32,
++		     hcintmsk.d32, hcsplt.d32, qtd->complete_split);
++
++		DWC_WARN("%s: no halt status, channel %d, ignoring interrupt\n",
++			 __func__, hc->hc_num);
++		DWC_WARN("\n");
++		clear_hc_int(hc_regs, chhltd);
++		return 0;
++	}
++
++	/*
++	 * This code is here only as a check. hcchar.chdis should
++	 * never be set when the halt interrupt occurs. Halt the
++	 * channel again if it does occur.
++	 */
++	hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
++	if (hcchar.b.chdis) {
++		DWC_WARN("%s: hcchar.chdis set unexpectedly, "
++			 "hcchar 0x%08x, trying to halt again\n",
++			 __func__, hcchar.d32);
++		clear_hc_int(hc_regs, chhltd);
++		hc->halt_pending = 0;
++		halt_channel(hcd, hc, qtd, hc->halt_status);
++		return 0;
++	}
++
++	return 1;
++}
++#endif
++
++/**
++ * Handles a host Channel Halted interrupt in DMA mode. This handler
++ * determines the reason the channel halted and proceeds accordingly.
++ */
++static void handle_hc_chhltd_intr_dma(dwc_otg_hcd_t * hcd,
++				      dwc_hc_t * hc,
++				      dwc_otg_hc_regs_t * hc_regs,
++				      dwc_otg_qtd_t * qtd)
++{
++	int out_nak_enh = 0;
++	hcint_data_t hcint;
++	hcintmsk_data_t hcintmsk;
++	/* For cores with the OUT NAK enhancement, the flow for high-
++	 * speed CONTROL/BULK OUT is handled a little differently.
++	 */
++	if (hcd->core_if->snpsid >= OTG_CORE_REV_2_71a) {
++		if (hc->speed == DWC_OTG_EP_SPEED_HIGH && !hc->ep_is_in &&
++		    (hc->ep_type == DWC_OTG_EP_TYPE_CONTROL ||
++		     hc->ep_type == DWC_OTG_EP_TYPE_BULK)) {
++			out_nak_enh = 1;
++		}
++	}
++
++	if (hc->halt_status == DWC_OTG_HC_XFER_URB_DEQUEUE ||
++	    (hc->halt_status == DWC_OTG_HC_XFER_AHB_ERR
++	     && !hcd->core_if->dma_desc_enable)) {
++		/*
++		 * Just release the channel. A dequeue can happen on a
++		 * transfer timeout. In the case of an AHB Error, the channel
++		 * was forced to halt because there's no way to gracefully
++		 * recover.
++		 */
++		if (hcd->core_if->dma_desc_enable)
++			dwc_otg_hcd_complete_xfer_ddma(hcd, hc, hc_regs,
++						       hc->halt_status);
++		else
++			release_channel(hcd, hc, qtd, hc->halt_status);
++		return;
++	}
++
++	/* Read the HCINTn register to determine the cause for the halt. */
++
++	hcint.d32 = DWC_READ_REG32(&hc_regs->hcint);
++	hcintmsk.d32 = DWC_READ_REG32(&hc_regs->hcintmsk);
++
++	if (hcint.b.xfercomp) {
++		/** @todo This is here because of a possible hardware bug.  Spec
++		 * says that on SPLIT-ISOC OUT transfers in DMA mode that a HALT
++		 * interrupt w/ACK bit set should occur, but I only see the
++		 * XFERCOMP bit, even with it masked out.  This is a workaround
++		 * for that behavior.  Should fix this when hardware is fixed.
++		 */
++		if (hc->ep_type == DWC_OTG_EP_TYPE_ISOC && !hc->ep_is_in) {
++			handle_hc_ack_intr(hcd, hc, hc_regs, qtd);
++		}
++		handle_hc_xfercomp_intr(hcd, hc, hc_regs, qtd);
++	} else if (hcint.b.stall) {
++		handle_hc_stall_intr(hcd, hc, hc_regs, qtd);
++	} else if (hcint.b.xacterr && !hcd->core_if->dma_desc_enable) {
++		if (out_nak_enh) {
++			if (hcint.b.nyet || hcint.b.nak || hcint.b.ack) {
++				DWC_DEBUGPL(DBG_HCD, "XactErr with NYET/NAK/ACK\n");
++				qtd->error_count = 0;
++			} else {
++				DWC_DEBUGPL(DBG_HCD, "XactErr without NYET/NAK/ACK\n");
++			}
++		}
++
++		/*
++		 * Must handle xacterr before nak or ack. Could get a xacterr
++		 * at the same time as either of these on a BULK/CONTROL OUT
++		 * that started with a PING. The xacterr takes precedence.
++		 */
++		handle_hc_xacterr_intr(hcd, hc, hc_regs, qtd);
++	} else if (hcint.b.xcs_xact && hcd->core_if->dma_desc_enable) {
++		handle_hc_xacterr_intr(hcd, hc, hc_regs, qtd);
++	} else if (hcint.b.ahberr && hcd->core_if->dma_desc_enable) {
++		handle_hc_ahberr_intr(hcd, hc, hc_regs, qtd);
++	} else if (hcint.b.bblerr) {
++		handle_hc_babble_intr(hcd, hc, hc_regs, qtd);
++	} else if (hcint.b.frmovrun) {
++		handle_hc_frmovrun_intr(hcd, hc, hc_regs, qtd);
++	} else if (hcint.b.datatglerr) {
++		handle_hc_datatglerr_intr(hcd, hc, hc_regs, qtd);
++	} else if (!out_nak_enh) {
++		if (hcint.b.nyet) {
++			/*
++			 * Must handle nyet before nak or ack. Could get a nyet at the
++			 * same time as either of those on a BULK/CONTROL OUT that
++			 * started with a PING. The nyet takes precedence.
++			 */
++			handle_hc_nyet_intr(hcd, hc, hc_regs, qtd);
++		} else if (hcint.b.nak && !hcintmsk.b.nak) {
++			/*
++			 * If nak is not masked, it's because a non-split IN transfer
++			 * is in an error state. In that case, the nak is handled by
++			 * the nak interrupt handler, not here. Handle nak here for
++			 * BULK/CONTROL OUT transfers, which halt on a NAK to allow
++			 * rewinding the buffer pointer.
++			 */
++			handle_hc_nak_intr(hcd, hc, hc_regs, qtd);
++		} else if (hcint.b.ack && !hcintmsk.b.ack) {
++			/*
++			 * If ack is not masked, it's because a non-split IN transfer
++			 * is in an error state. In that case, the ack is handled by
++			 * the ack interrupt handler, not here. Handle ack here for
++			 * split transfers. Start splits halt on ACK.
++			 */
++			handle_hc_ack_intr(hcd, hc, hc_regs, qtd);
++		} else {
++			if (hc->ep_type == DWC_OTG_EP_TYPE_INTR ||
++			    hc->ep_type == DWC_OTG_EP_TYPE_ISOC) {
++				/*
++				 * A periodic transfer halted with no other channel
++				 * interrupts set. Assume it was halted by the core
++				 * because it could not be completed in its scheduled
++				 * (micro)frame.
++				 */
++#ifdef DEBUG
++				DWC_PRINTF
++				    ("%s: Halt channel %d (assume incomplete periodic transfer)\n",
++				     __func__, hc->hc_num);
++#endif
++				halt_channel(hcd, hc, qtd,
++					     DWC_OTG_HC_XFER_PERIODIC_INCOMPLETE);
++			} else {
++				DWC_ERROR
++				    ("%s: Channel %d, DMA Mode -- ChHltd set, but reason "
++				     "for halting is unknown, hcint 0x%08x, intsts 0x%08x\n",
++				     __func__, hc->hc_num, hcint.d32,
++				     DWC_READ_REG32(&hcd->
++						    core_if->core_global_regs->
++						    gintsts));
++				/* Fall through: use the 3-strikes rule */
++				qtd->error_count++;
++				dwc_otg_hcd_save_data_toggle(hc, hc_regs, qtd);
++				update_urb_state_xfer_intr(hc, hc_regs,
++					   qtd->urb, qtd, DWC_OTG_HC_XFER_XACT_ERR);
++				halt_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_XACT_ERR);
++			}
++
++		}
++	} else {
++		DWC_PRINTF("NYET/NAK/ACK/other in non-error case, 0x%08x\n",
++			   hcint.d32);
++		/* Fall through: use the 3-strikes rule */
++		qtd->error_count++;
++		dwc_otg_hcd_save_data_toggle(hc, hc_regs, qtd);
++		update_urb_state_xfer_intr(hc, hc_regs,
++			   qtd->urb, qtd, DWC_OTG_HC_XFER_XACT_ERR);
++		halt_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_XACT_ERR);
++	}
++}
++
++/**
++ * Handles a host channel Channel Halted interrupt.
++ *
++ * In slave mode, this handler is called only when the driver specifically
++ * requests a halt. This occurs during handling other host channel interrupts
++ * (e.g. nak, xacterr, stall, nyet, etc.).
++ *
++ * In DMA mode, this is the interrupt that occurs when the core has finished
++ * processing a transfer on a channel. Other host channel interrupts (except
++ * ahberr) are disabled in DMA mode.
++ */
++static int32_t handle_hc_chhltd_intr(dwc_otg_hcd_t * hcd,
++				     dwc_hc_t * hc,
++				     dwc_otg_hc_regs_t * hc_regs,
++				     dwc_otg_qtd_t * qtd)
++{
++	DWC_DEBUGPL(DBG_HCDI, "--Host Channel %d Interrupt: "
++		    "Channel Halted--\n", hc->hc_num);
++
++	if (hcd->core_if->dma_enable) {
++		handle_hc_chhltd_intr_dma(hcd, hc, hc_regs, qtd);
++	} else {
++#ifdef DEBUG
++		if (!halt_status_ok(hcd, hc, hc_regs, qtd)) {
++			return 1;
++		}
++#endif
++		release_channel(hcd, hc, qtd, hc->halt_status);
++	}
++
++	return 1;
++}
++
++
++/**
++ * dwc_otg_fiq_unmangle_isoc() - Update the iso_frame_desc structure on
++ * FIQ transfer completion
++ * @hcd:	Pointer to dwc_otg_hcd struct
++ * @qh:	Queue head for the isochronous transfer
++ * @qtd:	Transfer descriptor whose URB is being completed
++ * @num:	Host channel number
++ *
++ * 1. Un-mangle the status as recorded in each iso_frame_desc status
++ * 2. Copy it from the dwc_otg_urb into the real URB
++ */
++void dwc_otg_fiq_unmangle_isoc(dwc_otg_hcd_t *hcd, dwc_otg_qh_t *qh, dwc_otg_qtd_t *qtd, uint32_t num)
++{
++	struct dwc_otg_hcd_urb *dwc_urb = qtd->urb;
++	int nr_frames = dwc_urb->packet_count;
++	int i;
++	hcint_data_t frame_hcint;
++
++	for (i = 0; i < nr_frames; i++) {
++		frame_hcint.d32 = dwc_urb->iso_descs[i].status;
++		if (frame_hcint.b.xfercomp) {
++			dwc_urb->iso_descs[i].status = 0;
++			dwc_urb->actual_length += dwc_urb->iso_descs[i].actual_length;
++		} else if (frame_hcint.b.frmovrun) {
++			if (qh->ep_is_in)
++				dwc_urb->iso_descs[i].status = -DWC_E_NO_STREAM_RES;
++			else
++				dwc_urb->iso_descs[i].status = -DWC_E_COMMUNICATION;
++			dwc_urb->error_count++;
++			dwc_urb->iso_descs[i].actual_length = 0;
++		} else if (frame_hcint.b.xacterr) {
++			dwc_urb->iso_descs[i].status = -DWC_E_PROTOCOL;
++			dwc_urb->error_count++;
++			dwc_urb->iso_descs[i].actual_length = 0;
++		} else if (frame_hcint.b.bblerr) {
++			dwc_urb->iso_descs[i].status = -DWC_E_OVERFLOW;
++			dwc_urb->error_count++;
++			dwc_urb->iso_descs[i].actual_length = 0;
++		} else {
++			/* Something went wrong */
++			dwc_urb->iso_descs[i].status = -1;
++			dwc_urb->iso_descs[i].actual_length = 0;
++			dwc_urb->error_count++;
++		}
++	}
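++	/* Advance the queue head's schedule frame past the frames just completed. */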
++	qh->sched_frame = dwc_frame_num_inc(qh->sched_frame, qh->interval * (nr_frames - 1));
++
++	//printk_ratelimited(KERN_INFO "%s: HS isochronous of %d/%d frames with %d errors complete\n",
++	//			__FUNCTION__, i, dwc_urb->packet_count, dwc_urb->error_count);
++}
++
++/**
++ * dwc_otg_fiq_unsetup_per_dma() - Remove data from bounce buffers for split transactions
++ * @hcd:	Pointer to dwc_otg_hcd struct
++ * @qh:	Queue head for the transfer
++ * @qtd:	Transfer descriptor whose URB buffer receives the data
++ * @num:	Host channel number
++ *
++ * Copies data from the FIQ bounce buffers into the URB's transfer buffer. Does not modify URB state.
++ * Returns total length of data or -1 if the buffers were not used.
++ *
++ */
++int dwc_otg_fiq_unsetup_per_dma(dwc_otg_hcd_t *hcd, dwc_otg_qh_t *qh, dwc_otg_qtd_t *qtd, uint32_t num)
++{
++	dwc_hc_t *hc = qh->channel;
++	struct fiq_dma_blob *blob = hcd->fiq_dmab;
++	struct fiq_channel_state *st = &hcd->fiq_state->channel[num];
++	uint8_t *ptr = NULL;
++	int index = 0, len = 0;
++	int i = 0;
++	if (hc->ep_is_in) {
++		/* Copy data out of the DMA bounce buffers to the URB's buffer.
++		 * The align_buf is ignored here because it is not used on FSM enqueue. */
++		ptr = qtd->urb->buf;
++		if (qh->ep_type == UE_ISOCHRONOUS) {
++			/* Isoc IN transactions - grab the offset of the iso_frame_desc into the URB transfer buffer */
++			index = qtd->isoc_frame_index;
++			ptr += qtd->urb->iso_descs[index].offset;
++		} else {
++			/* Need to increment by actual_length for interrupt IN */
++			ptr += qtd->urb->actual_length;
++		}
++
++		for (i = 0; i < st->dma_info.index; i++) {
++			len += st->dma_info.slot_len[i];
++			dwc_memcpy(ptr, &blob->channel[num].index[i].buf[0], st->dma_info.slot_len[i]);
++			ptr += st->dma_info.slot_len[i];
++		}
++		return len;
++	} else {
++		/* OUT endpoints - nothing to do. */
++		return -1;
++	}
++
++}
++
++/**
++ * dwc_otg_hcd_handle_hc_fsm() - handle an unmasked channel interrupt
++ * 				 from a channel handled in the FIQ
++ * @hcd:	Pointer to dwc_otg_hcd struct
++ * @num:	Host channel number
++ *
++ * If a host channel interrupt was received by the IRQ and this was a channel
++ * used by the FIQ, the execution flow for transfer completion is substantially
++ * different from the normal (messy) path. This function and its friends handle
++ * channel cleanup and transaction completion for a FIQ transaction.
++ */
++void dwc_otg_hcd_handle_hc_fsm(dwc_otg_hcd_t *hcd, uint32_t num)
++{
++	struct fiq_channel_state *st = &hcd->fiq_state->channel[num];
++	dwc_hc_t *hc = hcd->hc_ptr_array[num];
++	dwc_otg_qtd_t *qtd = DWC_CIRCLEQ_FIRST(&hc->qh->qtd_list);
++	dwc_otg_qh_t *qh = hc->qh;
++	dwc_otg_hc_regs_t *hc_regs = hcd->core_if->host_if->hc_regs[num];
++	hcint_data_t hcint = hcd->fiq_state->channel[num].hcint_copy;
++	int hostchannels  = 0;
++	fiq_print(FIQDBG_INT, hcd->fiq_state, "OUT %01d %01d ", num , st->fsm);
++
++	hostchannels = hcd->available_host_channels;
++	switch (st->fsm) {
++	case FIQ_TEST:
++		break;
++
++	case FIQ_DEQUEUE_ISSUED:
++		/* hc_halt was called. QTD no longer exists. */
++		/* TODO: for a nonperiodic split transaction, need to issue a
++		 * CLEAR_TT_BUFFER hub command if we were in the start-split phase.
++		 */
++		release_channel(hcd, hc, NULL, hc->halt_status);
++		break;
++
++	case FIQ_NP_SPLIT_DONE:
++		/* Nonperiodic transaction complete. */
++		if (!hc->ep_is_in) {
++			qtd->ssplit_out_xfer_count = hc->xfer_len;
++		}
++		if (hcint.b.xfercomp) {
++			handle_hc_xfercomp_intr(hcd, hc, hc_regs, qtd);
++		} else if (hcint.b.nak) {
++			handle_hc_nak_intr(hcd, hc, hc_regs, qtd);
++		}
++		break;
++
++	case FIQ_NP_SPLIT_HS_ABORTED:
++		/* A HS abort means three consecutive errors on the HS bus at some point in
++		 * the transaction. Normally a CLEAR_TT_BUFFER hub command would be required,
++		 * but we can't issue one because there is no guarantee about the order in
++		 * which non-periodic splits happened; we could end up clearing a perfectly
++		 * good transaction out of the buffer.
++		 */
++		if (hcint.b.xacterr) {
++			qtd->error_count += st->nr_errors;
++			handle_hc_xacterr_intr(hcd, hc, hc_regs, qtd);
++		} else if (hcint.b.ahberr) {
++			handle_hc_ahberr_intr(hcd, hc, hc_regs, qtd);
++		} else {
++			local_fiq_disable();
++			BUG();
++		}
++		break;
++
++	case FIQ_NP_SPLIT_LS_ABORTED:
++		/* A few cases can cause this - either an unknown state on a SSPLIT or
++		 * STALL/data toggle error response on a CSPLIT */
++		if (hcint.b.stall) {
++			handle_hc_stall_intr(hcd, hc, hc_regs, qtd);
++		} else if (hcint.b.datatglerr) {
++			handle_hc_datatglerr_intr(hcd, hc, hc_regs, qtd);
++		} else if (hcint.b.bblerr) {
++			handle_hc_babble_intr(hcd, hc, hc_regs, qtd);
++		} else if (hcint.b.ahberr) {
++			handle_hc_ahberr_intr(hcd, hc, hc_regs, qtd);
++		} else {
++			local_fiq_disable();
++			BUG();
++		}
++		break;
++
++	case FIQ_PER_SPLIT_DONE:
++		/* Isoc IN or Interrupt IN/OUT */
++
++		/* Flow control here is different from the driver's normal execution path.
++		 * We need to bypass most of the driver's split-transaction handling and
++		 * complete the transfer ourselves.
++		 */
++		if (hc->ep_type == UE_INTERRUPT) {
++			if (hcint.b.nak) {
++				handle_hc_nak_intr(hcd, hc, hc_regs, qtd);
++			} else if (hc->ep_is_in) {
++				int len;
++				len = dwc_otg_fiq_unsetup_per_dma(hcd, hc->qh, qtd, num);
++				//printk(KERN_NOTICE "FIQ Transaction: hc=%d len=%d urb_len = %d\n", num, len, qtd->urb->length);
++				qtd->urb->actual_length += len;
++				if (qtd->urb->actual_length >= qtd->urb->length) {
++					qtd->urb->status = 0;
++					hcd->fops->complete(hcd, qtd->urb->priv, qtd->urb, qtd->urb->status);
++					release_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_URB_COMPLETE);
++				} else {
++					/* Interrupt transfer not complete yet - is it a short read? */
++					if (len < hc->max_packet) {
++						/* Interrupt transaction complete */
++						qtd->urb->status = 0;
++						hcd->fops->complete(hcd, qtd->urb->priv, qtd->urb, qtd->urb->status);
++						release_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_URB_COMPLETE);
++					} else {
++						/* Further transactions required */
++						release_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_COMPLETE);
++					}
++				}
++			} else {
++				/* Interrupt OUT complete. */
++				dwc_otg_hcd_save_data_toggle(hc, hc_regs, qtd);
++				qtd->urb->actual_length += hc->xfer_len;
++				if (qtd->urb->actual_length >= qtd->urb->length) {
++					qtd->urb->status = 0;
++					hcd->fops->complete(hcd, qtd->urb->priv, qtd->urb, qtd->urb->status);
++					release_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_URB_COMPLETE);
++				} else {
++					release_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_COMPLETE);
++				}
++			}
++		} else {
++			/* ISOC IN complete. */
++			struct dwc_otg_hcd_iso_packet_desc *frame_desc = &qtd->urb->iso_descs[qtd->isoc_frame_index];
++			int len = 0;
++			/* Record errors, update qtd. */
++			if (st->nr_errors) {
++				frame_desc->actual_length = 0;
++				frame_desc->status = -DWC_E_PROTOCOL;
++			} else {
++				frame_desc->status = 0;
++				/* Unswizzle dma */
++				len = dwc_otg_fiq_unsetup_per_dma(hcd, qh, qtd, num);
++				frame_desc->actual_length = len;
++			}
++			qtd->isoc_frame_index++;
++			if (qtd->isoc_frame_index == qtd->urb->packet_count) {
++				hcd->fops->complete(hcd, qtd->urb->priv, qtd->urb, 0);
++				release_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_URB_COMPLETE);
++			} else {
++				release_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_COMPLETE);
++			}
++		}
++		break;
++
++	case FIQ_PER_ISO_OUT_DONE: {
++			struct dwc_otg_hcd_iso_packet_desc *frame_desc = &qtd->urb->iso_descs[qtd->isoc_frame_index];
++			/* Record errors, update qtd. */
++			if (st->nr_errors) {
++				frame_desc->actual_length = 0;
++				frame_desc->status = -DWC_E_PROTOCOL;
++			} else {
++				frame_desc->status = 0;
++				frame_desc->actual_length = frame_desc->length;
++			}
++			qtd->isoc_frame_index++;
++			qtd->isoc_split_offset = 0;
++			if (qtd->isoc_frame_index == qtd->urb->packet_count) {
++				hcd->fops->complete(hcd, qtd->urb->priv, qtd->urb, 0);
++				release_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_URB_COMPLETE);
++			} else {
++				release_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_COMPLETE);
++			}
++		}
++		break;
++
++	case FIQ_PER_SPLIT_NYET_ABORTED:
++		/* Doh. lost the data. */
++		printk_ratelimited(KERN_INFO "Transfer to device %d endpoint 0x%x frame %d failed "
++				"- FIQ reported NYET. Data may have been lost.\n",
++				hc->dev_addr, hc->ep_num, dwc_otg_hcd_get_frame_number(hcd) >> 3);
++		if (hc->ep_type == UE_ISOCHRONOUS) {
++			struct dwc_otg_hcd_iso_packet_desc *frame_desc = &qtd->urb->iso_descs[qtd->isoc_frame_index];
++			/* Record errors, update qtd. */
++			frame_desc->actual_length = 0;
++			frame_desc->status = -DWC_E_PROTOCOL;
++			qtd->isoc_frame_index++;
++			qtd->isoc_split_offset = 0;
++			if (qtd->isoc_frame_index == qtd->urb->packet_count) {
++				hcd->fops->complete(hcd, qtd->urb->priv, qtd->urb, 0);
++				release_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_URB_COMPLETE);
++			} else {
++				release_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_COMPLETE);
++			}
++		} else {
++			release_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_NO_HALT_STATUS);
++		}
++		break;
++
++	case FIQ_HS_ISOC_DONE:
++		/* The FIQ has performed a batch of high-speed isochronous transactions.
++		 * Each frame descriptor's status field holds the interrupt state that was
++		 * recorded in case the transaction failed.
++		 */
++		dwc_otg_fiq_unmangle_isoc(hcd, qh, qtd, num);
++		hcd->fops->complete(hcd, qtd->urb->priv, qtd->urb, 0);
++		release_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_URB_COMPLETE);
++		break;
++
++	case FIQ_PER_SPLIT_LS_ABORTED:
++		if (hcint.b.xacterr) {
++			/* Hub has responded with an ERR packet. Device
++			 * has been unplugged or the port has been disabled.
++			 * TODO: need to issue a reset to the hub port. */
++			qtd->error_count += 3;
++			handle_hc_xacterr_intr(hcd, hc, hc_regs, qtd);
++		} else if (hcint.b.stall) {
++			handle_hc_stall_intr(hcd, hc, hc_regs, qtd);
++		} else if (hcint.b.bblerr) {
++			handle_hc_babble_intr(hcd, hc, hc_regs, qtd);
++		} else {
++			printk_ratelimited(KERN_INFO "Transfer to device %d endpoint 0x%x failed "
++				"- FIQ reported FSM=%d. Data may have been lost.\n",
++				hc->dev_addr, hc->ep_num, st->fsm);
++			release_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_NO_HALT_STATUS);
++		}
++		break;
++
++	case FIQ_PER_SPLIT_HS_ABORTED:
++		/* Either the SSPLIT phase suffered transaction errors or something
++		 * unexpected happened.
++		 */
++		qtd->error_count += 3;
++		handle_hc_xacterr_intr(hcd, hc, hc_regs, qtd);
++		release_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_NO_HALT_STATUS);
++		break;
++
++	case FIQ_PER_SPLIT_TIMEOUT:
++		/* Couldn't complete in the nominated frame */
++		printk(KERN_INFO "Transfer to device %d endpoint 0x%x frame %d failed "
++				"- FIQ timed out. Data may have been lost.\n",
++				hc->dev_addr, hc->ep_num, dwc_otg_hcd_get_frame_number(hcd) >> 3);
++		if (hc->ep_type == UE_ISOCHRONOUS) {
++			struct dwc_otg_hcd_iso_packet_desc *frame_desc = &qtd->urb->iso_descs[qtd->isoc_frame_index];
++			/* Record errors, update qtd. */
++			frame_desc->actual_length = 0;
++			if (hc->ep_is_in) {
++				frame_desc->status = -DWC_E_NO_STREAM_RES;
++			} else {
++				frame_desc->status = -DWC_E_COMMUNICATION;
++			}
++			qtd->isoc_frame_index++;
++			if (qtd->isoc_frame_index == qtd->urb->packet_count) {
++				hcd->fops->complete(hcd, qtd->urb->priv, qtd->urb, 0);
++				release_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_URB_COMPLETE);
++			} else {
++				release_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_COMPLETE);
++			}
++		} else {
++			release_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_NO_HALT_STATUS);
++		}
++		break;
++
++	default:
++		DWC_WARN("Unexpected state received on hc=%d fsm=%d on transfer to device %d ep 0x%x",
++			 hc->hc_num, st->fsm, hc->dev_addr, hc->ep_num);
++		qtd->error_count++;
++		release_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_NO_HALT_STATUS);
++	}
++	return;
++}
++
++/** Handles interrupt for a specific Host Channel */
++int32_t dwc_otg_hcd_handle_hc_n_intr(dwc_otg_hcd_t * dwc_otg_hcd, uint32_t num)
++{
++	int retval = 0;
++	hcint_data_t hcint;
++	hcintmsk_data_t hcintmsk;
++	dwc_hc_t *hc;
++	dwc_otg_hc_regs_t *hc_regs;
++	dwc_otg_qtd_t *qtd;
++
++	DWC_DEBUGPL(DBG_HCDV, "--Host Channel Interrupt--, Channel %d\n", num);
++
++	hc = dwc_otg_hcd->hc_ptr_array[num];
++	hc_regs = dwc_otg_hcd->core_if->host_if->hc_regs[num];
++	if(hc->halt_status == DWC_OTG_HC_XFER_URB_DEQUEUE) {
++		/* We are responding to a channel disable. Driver
++		 * state is cleared - our qtd has gone away.
++		 */
++		release_channel(dwc_otg_hcd, hc, NULL, hc->halt_status);
++		return 1;
++	}
++	qtd = DWC_CIRCLEQ_FIRST(&hc->qh->qtd_list);
++
++	/*
++	 * FSM mode: Check to see if this is a HC interrupt from a channel handled by the FIQ.
++	 * Execution path is fundamentally different for the channels after a FIQ has completed
++	 * a split transaction.
++	 */
++	if (fiq_fsm_enable) {
++		switch (dwc_otg_hcd->fiq_state->channel[num].fsm) {
++			case FIQ_PASSTHROUGH:
++				break;
++			case FIQ_PASSTHROUGH_ERRORSTATE:
++				/* Hook into the error count */
++				fiq_print(FIQDBG_ERR, dwc_otg_hcd->fiq_state, "HCDERR%02d", num);
++				if (!dwc_otg_hcd->fiq_state->channel[num].nr_errors) {
++					qtd->error_count = 0;
++					fiq_print(FIQDBG_ERR, dwc_otg_hcd->fiq_state, "RESET   ");
++				}
++				break;
++			default:
++				dwc_otg_hcd_handle_hc_fsm(dwc_otg_hcd, num);
++				return 1;
++		}
++	}
++
++	hcint.d32 = DWC_READ_REG32(&hc_regs->hcint);
++	hcintmsk.d32 = DWC_READ_REG32(&hc_regs->hcintmsk);
++	hcint.d32 = hcint.d32 & hcintmsk.d32;
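++	/*
++	 * In Slave mode, handle the Channel Halted interrupt here only when it is
++	 * the sole unmasked interrupt pending (hcint == 0x2); otherwise the
++	 * individual handlers below request the halt themselves.
++	 */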
++	if (!dwc_otg_hcd->core_if->dma_enable) {
++		if (hcint.b.chhltd && hcint.d32 != 0x2) {
++			hcint.b.chhltd = 0;
++		}
++	}
++
++	if (hcint.b.xfercomp) {
++		retval |=
++		    handle_hc_xfercomp_intr(dwc_otg_hcd, hc, hc_regs, qtd);
++		/*
++		 * If NYET occurred at same time as Xfer Complete, the NYET is
++		 * handled by the Xfer Complete interrupt handler. Don't want
++		 * to call the NYET interrupt handler in this case.
++		 */
++		hcint.b.nyet = 0;
++	}
++	if (hcint.b.chhltd) {
++		retval |= handle_hc_chhltd_intr(dwc_otg_hcd, hc, hc_regs, qtd);
++	}
++	if (hcint.b.ahberr) {
++		retval |= handle_hc_ahberr_intr(dwc_otg_hcd, hc, hc_regs, qtd);
++	}
++	if (hcint.b.stall) {
++		retval |= handle_hc_stall_intr(dwc_otg_hcd, hc, hc_regs, qtd);
++	}
++	if (hcint.b.nak) {
++		retval |= handle_hc_nak_intr(dwc_otg_hcd, hc, hc_regs, qtd);
++	}
++	if (hcint.b.ack) {
++		if(!hcint.b.chhltd)
++			retval |= handle_hc_ack_intr(dwc_otg_hcd, hc, hc_regs, qtd);
++	}
++	if (hcint.b.nyet) {
++		retval |= handle_hc_nyet_intr(dwc_otg_hcd, hc, hc_regs, qtd);
++	}
++	if (hcint.b.xacterr) {
++		retval |= handle_hc_xacterr_intr(dwc_otg_hcd, hc, hc_regs, qtd);
++	}
++	if (hcint.b.bblerr) {
++		retval |= handle_hc_babble_intr(dwc_otg_hcd, hc, hc_regs, qtd);
++	}
++	if (hcint.b.frmovrun) {
++		retval |=
++		    handle_hc_frmovrun_intr(dwc_otg_hcd, hc, hc_regs, qtd);
++	}
++	if (hcint.b.datatglerr) {
++		retval |=
++		    handle_hc_datatglerr_intr(dwc_otg_hcd, hc, hc_regs, qtd);
++	}
++
++	return retval;
++}
++#endif /* DWC_DEVICE_ONLY */
+--- /dev/null
++++ b/drivers/usb/host/dwc_otg/dwc_otg_hcd_linux.c
+@@ -0,0 +1,1005 @@
++
++/* ==========================================================================
++ * $File: //dwh/usb_iip/dev/software/otg/linux/drivers/dwc_otg_hcd_linux.c $
++ * $Revision: #20 $
++ * $Date: 2011/10/26 $
++ * $Change: 1872981 $
++ *
++ * Synopsys HS OTG Linux Software Driver and documentation (hereinafter,
++ * "Software") is an Unsupported proprietary work of Synopsys, Inc. unless
++ * otherwise expressly agreed to in writing between Synopsys and you.
++ *
++ * The Software IS NOT an item of Licensed Software or Licensed Product under
++ * any End User Software License Agreement or Agreement for Licensed Product
++ * with Synopsys or any supplement thereto. You are permitted to use and
++ * redistribute this Software in source and binary forms, with or without
++ * modification, provided that redistributions of source code must retain this
++ * notice. You may not view, use, disclose, copy or distribute this file or
++ * any information contained herein except pursuant to this license grant from
++ * Synopsys. If you do not agree with this notice, including the disclaimer
++ * below, then you are not authorized to use the Software.
++ *
++ * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS" BASIS
++ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
++ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
++ * ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS BE LIABLE FOR ANY DIRECT,
++ * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
++ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
++ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
++ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
++ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
++ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
++ * DAMAGE.
++ * ========================================================================== */
++#ifndef DWC_DEVICE_ONLY
++
++/**
++ * @file
++ *
++ * This file contains the implementation of the HCD. In Linux, the HCD
++ * implements the hc_driver API.
++ */
++#include <linux/kernel.h>
++#include <linux/module.h>
++#include <linux/moduleparam.h>
++#include <linux/init.h>
++#include <linux/device.h>
++#include <linux/errno.h>
++#include <linux/list.h>
++#include <linux/interrupt.h>
++#include <linux/string.h>
++#include <linux/dma-mapping.h>
++#include <linux/version.h>
++#include <asm/io.h>
++#include <asm/fiq.h>
++#include <linux/usb.h>
++#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,35)
++#include <../drivers/usb/core/hcd.h>
++#else
++#include <linux/usb/hcd.h>
++#endif
++#include <asm/bug.h>
++
++#if (LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,30))
++#define USB_URB_EP_LINKING 1
++#else
++#define USB_URB_EP_LINKING 0
++#endif
++
++#include "dwc_otg_hcd_if.h"
++#include "dwc_otg_dbg.h"
++#include "dwc_otg_driver.h"
++#include "dwc_otg_hcd.h"
++
++extern unsigned char  _dwc_otg_fiq_stub, _dwc_otg_fiq_stub_end;
++
++/**
++ * Gets the endpoint number from a _bEndpointAddress argument. The endpoint is
++ * qualified with its direction (32 possible endpoints per device).
++ */
++#define dwc_ep_addr_to_endpoint(_bEndpointAddress_) ((_bEndpointAddress_ & USB_ENDPOINT_NUMBER_MASK) | \
++						     ((_bEndpointAddress_ & USB_DIR_IN) != 0) << 4)
++
++static const char dwc_otg_hcd_name[] = "dwc_otg_hcd";
++
++extern bool fiq_enable;
++
++/** @name Linux HC Driver API Functions */
++/** @{ */
++/* manage i/o requests, device state */
++static int dwc_otg_urb_enqueue(struct usb_hcd *hcd,
++#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,28)
++		       struct usb_host_endpoint *ep,
++#endif
++		       struct urb *urb, gfp_t mem_flags);
++
++#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,30)
++#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,28)
++static int dwc_otg_urb_dequeue(struct usb_hcd *hcd, struct urb *urb);
++#endif
++#else /* kernels at or post 2.6.30 */
++static int dwc_otg_urb_dequeue(struct usb_hcd *hcd,
++                               struct urb *urb, int status);
++#endif /* LINUX_VERSION_CODE < KERNEL_VERSION(2,6,30) */
++
++static void endpoint_disable(struct usb_hcd *hcd, struct usb_host_endpoint *ep);
++#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,30)
++static void endpoint_reset(struct usb_hcd *hcd, struct usb_host_endpoint *ep);
++#endif
++static irqreturn_t dwc_otg_hcd_irq(struct usb_hcd *hcd);
++extern int hcd_start(struct usb_hcd *hcd);
++extern void hcd_stop(struct usb_hcd *hcd);
++static int get_frame_number(struct usb_hcd *hcd);
++extern int hub_status_data(struct usb_hcd *hcd, char *buf);
++extern int hub_control(struct usb_hcd *hcd,
++		       u16 typeReq,
++		       u16 wValue, u16 wIndex, char *buf, u16 wLength);
++
++struct wrapper_priv_data {
++	dwc_otg_hcd_t *dwc_otg_hcd;
++};
++
++/** @} */
++
++static struct hc_driver dwc_otg_hc_driver = {
++
++	.description = dwc_otg_hcd_name,
++	.product_desc = "DWC OTG Controller",
++	.hcd_priv_size = sizeof(struct wrapper_priv_data),
++
++	.irq = dwc_otg_hcd_irq,
++
++	.flags = HCD_MEMORY | HCD_USB2,
++
++	//.reset =
++	.start = hcd_start,
++	//.suspend =
++	//.resume =
++	.stop = hcd_stop,
++
++	.urb_enqueue = dwc_otg_urb_enqueue,
++	.urb_dequeue = dwc_otg_urb_dequeue,
++	.endpoint_disable = endpoint_disable,
++#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,30)
++	.endpoint_reset = endpoint_reset,
++#endif
++	.get_frame_number = get_frame_number,
++
++	.hub_status_data = hub_status_data,
++	.hub_control = hub_control,
++	//.bus_suspend =
++	//.bus_resume =
++};
++
++/** Gets the dwc_otg_hcd from a struct usb_hcd */
++static inline dwc_otg_hcd_t *hcd_to_dwc_otg_hcd(struct usb_hcd *hcd)
++{
++	struct wrapper_priv_data *p;
++	p = (struct wrapper_priv_data *)(hcd->hcd_priv);
++	return p->dwc_otg_hcd;
++}
++
++/** Gets the struct usb_hcd that contains a dwc_otg_hcd_t. */
++static inline struct usb_hcd *dwc_otg_hcd_to_hcd(dwc_otg_hcd_t * dwc_otg_hcd)
++{
++	return dwc_otg_hcd_get_priv_data(dwc_otg_hcd);
++}
++
++/** Gets the usb_host_endpoint associated with an URB. */
++inline struct usb_host_endpoint *dwc_urb_to_endpoint(struct urb *urb)
++{
++	struct usb_device *dev = urb->dev;
++	int ep_num = usb_pipeendpoint(urb->pipe);
++
++	if (usb_pipein(urb->pipe))
++		return dev->ep_in[ep_num];
++	else
++		return dev->ep_out[ep_num];
++}
++
++static int _disconnect(dwc_otg_hcd_t * hcd)
++{
++	struct usb_hcd *usb_hcd = dwc_otg_hcd_to_hcd(hcd);
++
++	usb_hcd->self.is_b_host = 0;
++	return 0;
++}
++
++static int _start(dwc_otg_hcd_t * hcd)
++{
++	struct usb_hcd *usb_hcd = dwc_otg_hcd_to_hcd(hcd);
++
++	usb_hcd->self.is_b_host = dwc_otg_hcd_is_b_host(hcd);
++	hcd_start(usb_hcd);
++
++	return 0;
++}
++
++static int _hub_info(dwc_otg_hcd_t * hcd, void *urb_handle, uint32_t * hub_addr,
++		     uint32_t * port_addr)
++{
++   struct urb *urb = (struct urb *)urb_handle;
++   struct usb_bus *bus;
++#if 1 //GRAYG - temporary
++   if (NULL == urb_handle)
++      DWC_ERROR("**** %s - NULL URB handle\n", __func__);//GRAYG
++   if (NULL == urb->dev)
++      DWC_ERROR("**** %s - URB has no device\n", __func__);//GRAYG
++   if (NULL == port_addr)
++      DWC_ERROR("**** %s - NULL port_address\n", __func__);//GRAYG
++#endif
++   if (urb->dev->tt) {
++        if (NULL == urb->dev->tt->hub) {
++                DWC_ERROR("**** %s - (URB's transactor has no TT - giving no hub)\n",
++                           __func__); //GRAYG
++                //*hub_addr = (u8)usb_pipedevice(urb->pipe); //GRAYG
++                *hub_addr = 0; //GRAYG
++                // we probably shouldn't have a transaction translator if
++                // there's no associated hub?
++        } else {
++		bus = hcd_to_bus(dwc_otg_hcd_to_hcd(hcd));
++		if (urb->dev->tt->hub == bus->root_hub)
++			*hub_addr = 0;
++		else
++			*hub_addr = urb->dev->tt->hub->devnum;
++	}
++	*port_addr = urb->dev->tt->multi ? urb->dev->ttport : 1;
++   } else {
++        *hub_addr = 0;
++	*port_addr = urb->dev->ttport;
++   }
++   return 0;
++}
++
++static int _speed(dwc_otg_hcd_t * hcd, void *urb_handle)
++{
++	struct urb *urb = (struct urb *)urb_handle;
++	return urb->dev->speed;
++}
++
++static int _get_b_hnp_enable(dwc_otg_hcd_t * hcd)
++{
++	struct usb_hcd *usb_hcd = dwc_otg_hcd_to_hcd(hcd);
++	return usb_hcd->self.b_hnp_enable;
++}
++
++static void allocate_bus_bandwidth(struct usb_hcd *hcd, uint32_t bw,
++				   struct urb *urb)
++{
++	hcd_to_bus(hcd)->bandwidth_allocated += bw / urb->interval;
++	if (usb_pipetype(urb->pipe) == PIPE_ISOCHRONOUS) {
++		hcd_to_bus(hcd)->bandwidth_isoc_reqs++;
++	} else {
++		hcd_to_bus(hcd)->bandwidth_int_reqs++;
++	}
++}
++
++static void free_bus_bandwidth(struct usb_hcd *hcd, uint32_t bw,
++			       struct urb *urb)
++{
++	hcd_to_bus(hcd)->bandwidth_allocated -= bw / urb->interval;
++	if (usb_pipetype(urb->pipe) == PIPE_ISOCHRONOUS) {
++		hcd_to_bus(hcd)->bandwidth_isoc_reqs--;
++	} else {
++		hcd_to_bus(hcd)->bandwidth_int_reqs--;
++	}
++}
++
++/**
++ * Sets the final status of a URB and returns it to the device driver. Any
++ * required cleanup of the URB is performed. The HCD lock should be held on
++ * entry.
++ */
++static int _complete(dwc_otg_hcd_t * hcd, void *urb_handle,
++		     dwc_otg_hcd_urb_t * dwc_otg_urb, int32_t status)
++{
++	struct urb *urb = (struct urb *)urb_handle;
++	urb_tq_entry_t *new_entry;
++	int rc = 0;
++	if (CHK_DEBUG_LEVEL(DBG_HCDV | DBG_HCD_URB)) {
++		DWC_PRINTF("%s: urb %p, device %d, ep %d %s, status=%d\n",
++			   __func__, urb, usb_pipedevice(urb->pipe),
++			   usb_pipeendpoint(urb->pipe),
++			   usb_pipein(urb->pipe) ? "IN" : "OUT", status);
++		if (usb_pipetype(urb->pipe) == PIPE_ISOCHRONOUS) {
++			int i;
++			for (i = 0; i < urb->number_of_packets; i++) {
++				DWC_PRINTF("  ISO Desc %d status: %d\n",
++					   i, urb->iso_frame_desc[i].status);
++			}
++		}
++	}
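++	/* Allocate a queue entry up front: completed URBs are normally handed
++	 * back to the core from the completion tasklet rather than directly. */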
++	new_entry = DWC_ALLOC_ATOMIC(sizeof(urb_tq_entry_t));
++	urb->actual_length = dwc_otg_hcd_urb_get_actual_length(dwc_otg_urb);
++	/* Convert status value. */
++	switch (status) {
++	case -DWC_E_PROTOCOL:
++		status = -EPROTO;
++		break;
++	case -DWC_E_IN_PROGRESS:
++		status = -EINPROGRESS;
++		break;
++	case -DWC_E_PIPE:
++		status = -EPIPE;
++		break;
++	case -DWC_E_IO:
++		status = -EIO;
++		break;
++	case -DWC_E_TIMEOUT:
++		status = -ETIMEDOUT;
++		break;
++	case -DWC_E_OVERFLOW:
++		status = -EOVERFLOW;
++		break;
++	case -DWC_E_SHUTDOWN:
++		status = -ESHUTDOWN;
++		break;
++	default:
++		if (status) {
++			DWC_PRINTF("Unknown urb status %d\n", status);
++
++		}
++	}
++
++	if (usb_pipetype(urb->pipe) == PIPE_ISOCHRONOUS) {
++		int i;
++
++		urb->error_count = dwc_otg_hcd_urb_get_error_count(dwc_otg_urb);
++		for (i = 0; i < urb->number_of_packets; ++i) {
++			urb->iso_frame_desc[i].actual_length =
++			    dwc_otg_hcd_urb_get_iso_desc_actual_length
++			    (dwc_otg_urb, i);
++			urb->iso_frame_desc[i].status =
++			    dwc_otg_hcd_urb_get_iso_desc_status(dwc_otg_urb, i);
++		}
++	}
++
++	urb->status = status;
++	urb->hcpriv = NULL;
++	if (!status) {
++		if ((urb->transfer_flags & URB_SHORT_NOT_OK) &&
++		    (urb->actual_length < urb->transfer_buffer_length)) {
++			urb->status = -EREMOTEIO;
++		}
++	}
++
++	if ((usb_pipetype(urb->pipe) == PIPE_ISOCHRONOUS) ||
++	    (usb_pipetype(urb->pipe) == PIPE_INTERRUPT)) {
++		struct usb_host_endpoint *ep = dwc_urb_to_endpoint(urb);
++		if (ep) {
++			free_bus_bandwidth(dwc_otg_hcd_to_hcd(hcd),
++					   dwc_otg_hcd_get_ep_bandwidth(hcd,
++									ep->hcpriv),
++					   urb);
++		}
++	}
++	DWC_FREE(dwc_otg_urb);
++	if (!new_entry) {
++		DWC_ERROR("dwc_otg_hcd: complete: cannot allocate URB TQ entry\n");
++		urb->status = -EPROTO;
++		/* don't schedule the tasklet -
++		 * directly return the packet here with error. */
++#if USB_URB_EP_LINKING
++		usb_hcd_unlink_urb_from_ep(dwc_otg_hcd_to_hcd(hcd), urb);
++#endif
++#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,28)
++		usb_hcd_giveback_urb(dwc_otg_hcd_to_hcd(hcd), urb);
++#else
++		usb_hcd_giveback_urb(dwc_otg_hcd_to_hcd(hcd), urb, urb->status);
++#endif
++	} else {
++		new_entry->urb = urb;
++#if USB_URB_EP_LINKING
++		rc = usb_hcd_check_unlink_urb(dwc_otg_hcd_to_hcd(hcd), urb, urb->status);
++		if(0 == rc) {
++			usb_hcd_unlink_urb_from_ep(dwc_otg_hcd_to_hcd(hcd), urb);
++		}
++#endif
++		if(0 == rc) {
++			DWC_TAILQ_INSERT_TAIL(&hcd->completed_urb_list, new_entry,
++						urb_tq_entries);
++			DWC_TASK_HI_SCHEDULE(hcd->completion_tasklet);
++		}
++	}
++	return 0;
++}
++
++static struct dwc_otg_hcd_function_ops hcd_fops = {
++	.start = _start,
++	.disconnect = _disconnect,
++	.hub_info = _hub_info,
++	.speed = _speed,
++	.complete = _complete,
++	.get_b_hnp_enable = _get_b_hnp_enable,
++};
++
++static struct fiq_handler fh = {
++  .name = "usb_fiq",
++};
++
++static void hcd_init_fiq(void *cookie)
++{
++	dwc_otg_device_t *otg_dev = cookie;
++	dwc_otg_hcd_t *dwc_otg_hcd = otg_dev->hcd;
++	struct pt_regs regs;
++	int irq;
++
++	if (claim_fiq(&fh)) {
++		DWC_ERROR("Can't claim FIQ");
++		BUG();
++	}
++	DWC_WARN("FIQ on core %d at 0x%08x",
++				smp_processor_id(),
++				(fiq_fsm_enable ? (int)&dwc_otg_fiq_fsm : (int)&dwc_otg_fiq_nop));
++	DWC_WARN("FIQ ASM at 0x%08x length %d", (int)&_dwc_otg_fiq_stub, (int)(&_dwc_otg_fiq_stub_end - &_dwc_otg_fiq_stub));
++	set_fiq_handler((void *) &_dwc_otg_fiq_stub, &_dwc_otg_fiq_stub_end - &_dwc_otg_fiq_stub);
++	memset(&regs, 0, sizeof(regs));
++
++	regs.ARM_r8 = (long) dwc_otg_hcd->fiq_state;
++	if (fiq_fsm_enable) {
++		regs.ARM_r9 = dwc_otg_hcd->core_if->core_params->host_channels;
++		//regs.ARM_r10 = dwc_otg_hcd->dma;
++		regs.ARM_fp = (long) dwc_otg_fiq_fsm;
++	} else {
++		regs.ARM_fp = (long) dwc_otg_fiq_nop;
++	}
++
++	regs.ARM_sp = (long) dwc_otg_hcd->fiq_stack + (sizeof(struct fiq_stack) - 4);
++
++//		__show_regs(&regs);
++	set_fiq_regs(&regs);
++
++	// Set the MPHI peripheral register addresses used by the FIQ
++	dwc_otg_hcd->fiq_state->mphi_regs.base    = otg_dev->os_dep.mphi_base;
++	dwc_otg_hcd->fiq_state->mphi_regs.ctrl    = otg_dev->os_dep.mphi_base + 0x4c;
++	dwc_otg_hcd->fiq_state->mphi_regs.outdda  = otg_dev->os_dep.mphi_base + 0x28;
++	dwc_otg_hcd->fiq_state->mphi_regs.outddb  = otg_dev->os_dep.mphi_base + 0x2c;
++	dwc_otg_hcd->fiq_state->mphi_regs.intstat = otg_dev->os_dep.mphi_base + 0x50;
++	dwc_otg_hcd->fiq_state->dwc_regs_base = otg_dev->os_dep.base;
++	DWC_WARN("MPHI regs_base at 0x%08x", (int)dwc_otg_hcd->fiq_state->mphi_regs.base);
++	// Enable the MPHI peripheral
++	writel((1 << 31), dwc_otg_hcd->fiq_state->mphi_regs.ctrl);
++#ifdef DEBUG
++	if (readl(dwc_otg_hcd->fiq_state->mphi_regs.ctrl) & 0x80000000)
++		DWC_WARN("MPHI periph has been enabled");
++	else
++		DWC_WARN("MPHI periph has NOT been enabled");
++#endif
++	// Enable FIQ interrupt from USB peripheral
++#ifdef CONFIG_MULTI_IRQ_HANDLER
++	irq = platform_get_irq(otg_dev->os_dep.platformdev, 1);
++#else
++	irq = INTERRUPT_VC_USB;
++#endif
++	if (irq < 0) {
++		DWC_ERROR("Can't get FIQ irq");
++		return;
++	}
++	enable_fiq(irq);
++	local_fiq_enable();
++}
++
++/**
++ * Initializes the HCD. This function allocates memory for and initializes the
++ * static parts of the usb_hcd and dwc_otg_hcd structures. It also registers the
++ * USB bus with the core and calls the hc_driver->start() function. It returns
++ * a negative error on failure.
++ */
++int hcd_init(dwc_bus_dev_t *_dev)
++{
++	struct usb_hcd *hcd = NULL;
++	dwc_otg_hcd_t *dwc_otg_hcd = NULL;
++	dwc_otg_device_t *otg_dev = DWC_OTG_BUSDRVDATA(_dev);
++	int retval = 0;
++        u64 dmamask;
++
++	DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD INIT otg_dev=%p\n", otg_dev);
++
++	/* Set device flags indicating whether the HCD supports DMA. */
++	if (dwc_otg_is_dma_enable(otg_dev->core_if))
++                dmamask = DMA_BIT_MASK(32);
++        else
++                dmamask = 0;
++
++#if    defined(LM_INTERFACE) || defined(PLATFORM_INTERFACE)
++        dma_set_mask(&_dev->dev, dmamask);
++        dma_set_coherent_mask(&_dev->dev, dmamask);
++#elif  defined(PCI_INTERFACE)
++        pci_set_dma_mask(_dev, dmamask);
++        pci_set_consistent_dma_mask(_dev, dmamask);
++#endif
++
++	/*
++	 * Allocate memory for the base HCD plus the DWC OTG HCD.
++	 * Initialize the base HCD.
++	 */
++#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,30)
++	hcd = usb_create_hcd(&dwc_otg_hc_driver, &_dev->dev, _dev->dev.bus_id);
++#else
++	hcd = usb_create_hcd(&dwc_otg_hc_driver, &_dev->dev, dev_name(&_dev->dev));
++	hcd->has_tt = 1;
++//      hcd->uses_new_polling = 1;
++//      hcd->poll_rh = 0;
++#endif
++	if (!hcd) {
++		retval = -ENOMEM;
++		goto error1;
++	}
++
++	hcd->regs = otg_dev->os_dep.base;
++
++
++	/* Initialize the DWC OTG HCD. */
++	dwc_otg_hcd = dwc_otg_hcd_alloc_hcd();
++	if (!dwc_otg_hcd) {
++		goto error2;
++	}
++	((struct wrapper_priv_data *)(hcd->hcd_priv))->dwc_otg_hcd =
++	    dwc_otg_hcd;
++	otg_dev->hcd = dwc_otg_hcd;
++
++	if (dwc_otg_hcd_init(dwc_otg_hcd, otg_dev->core_if)) {
++		goto error2;
++	}
++
++	if (fiq_enable) {
++		if (num_online_cpus() > 1) {
++			/* bcm2709: run the FIQ on a core separate from the one handling IRQs */
++			smp_call_function_single(1, hcd_init_fiq, otg_dev, 1);
++		} else {
++			smp_call_function_single(0, hcd_init_fiq, otg_dev, 1);
++		}
++	}
++
++	otg_dev->hcd->otg_dev = otg_dev;
++	hcd->self.otg_port = dwc_otg_hcd_otg_port(dwc_otg_hcd);
++#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,33) //don't support for LM(with 2.6.20.1 kernel)
++#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,35) //version field absent later
++	hcd->self.otg_version = dwc_otg_get_otg_version(otg_dev->core_if);
++#endif
++	/* Don't support SG list at this point */
++	hcd->self.sg_tablesize = 0;
++#endif
++	/*
++	 * Finish generic HCD initialization and start the HCD. This function
++	 * allocates the DMA buffer pool, registers the USB bus, requests the
++	 * IRQ line, and calls hcd_start method.
++	 */
++#ifdef PLATFORM_INTERFACE
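++	/* With the FIQ enabled it uses the second platform IRQ (see hcd_init_fiq),
++	 * so the regular IRQ handler attaches to index 0; otherwise index 1 is used. */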
++	retval = usb_add_hcd(hcd, platform_get_irq(_dev, fiq_enable ? 0 : 1), IRQF_SHARED);
++#else
++	retval = usb_add_hcd(hcd, _dev->irq, IRQF_SHARED);
++#endif
++	if (retval < 0) {
++		goto error2;
++	}
++
++	dwc_otg_hcd_set_priv_data(dwc_otg_hcd, hcd);
++	return 0;
++
++error2:
++	usb_put_hcd(hcd);
++error1:
++	return retval;
++}
++
++/**
++ * Removes the HCD.
++ * Frees memory and resources associated with the HCD and deregisters the bus.
++ */
++void hcd_remove(dwc_bus_dev_t *_dev)
++{
++	dwc_otg_device_t *otg_dev = DWC_OTG_BUSDRVDATA(_dev);
++	dwc_otg_hcd_t *dwc_otg_hcd;
++	struct usb_hcd *hcd;
++
++	DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD REMOVE otg_dev=%p\n", otg_dev);
++
++	if (!otg_dev) {
++		DWC_DEBUGPL(DBG_ANY, "%s: otg_dev NULL!\n", __func__);
++		return;
++	}
++
++	dwc_otg_hcd = otg_dev->hcd;
++
++	if (!dwc_otg_hcd) {
++		DWC_DEBUGPL(DBG_ANY, "%s: otg_dev->hcd NULL!\n", __func__);
++		return;
++	}
++
++	hcd = dwc_otg_hcd_to_hcd(dwc_otg_hcd);
++
++	if (!hcd) {
++		DWC_DEBUGPL(DBG_ANY,
++			    "%s: dwc_otg_hcd_to_hcd(dwc_otg_hcd) NULL!\n",
++			    __func__);
++		return;
++	}
++	usb_remove_hcd(hcd);
++	dwc_otg_hcd_set_priv_data(dwc_otg_hcd, NULL);
++	dwc_otg_hcd_remove(dwc_otg_hcd);
++	usb_put_hcd(hcd);
++}
++
++/* =========================================================================
++ *  Linux HC Driver Functions
++ * ========================================================================= */
++
++/** Initializes the DWC_otg controller and its root hub and prepares it for host
++ * mode operation. Activates the root port. Returns 0 on success and a negative
++ * error code on failure. */
++int hcd_start(struct usb_hcd *hcd)
++{
++	dwc_otg_hcd_t *dwc_otg_hcd = hcd_to_dwc_otg_hcd(hcd);
++	struct usb_bus *bus;
++
++	DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD START\n");
++	bus = hcd_to_bus(hcd);
++
++	hcd->state = HC_STATE_RUNNING;
++	if (dwc_otg_hcd_start(dwc_otg_hcd, &hcd_fops)) {
++		return 0;
++	}
++
++	/* If a root hub is already attached, ask the hub driver to resume it */
++	if (bus->root_hub) {
++		DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD Has Root Hub\n");
++		/* Inform the HUB driver to resume. */
++		usb_hcd_resume_root_hub(hcd);
++	}
++
++	return 0;
++}
++
++/**
++ * Halts the DWC_otg host mode operations in a clean manner. USB transfers are
++ * stopped.
++ */
++void hcd_stop(struct usb_hcd *hcd)
++{
++	dwc_otg_hcd_t *dwc_otg_hcd = hcd_to_dwc_otg_hcd(hcd);
++
++	dwc_otg_hcd_stop(dwc_otg_hcd);
++}
++
++/** Returns the current frame number. */
++static int get_frame_number(struct usb_hcd *hcd)
++{
++	hprt0_data_t hprt0;
++	dwc_otg_hcd_t *dwc_otg_hcd = hcd_to_dwc_otg_hcd(hcd);
++	hprt0.d32 = DWC_READ_REG32(dwc_otg_hcd->core_if->host_if->hprt0);
++	if (hprt0.b.prtspd == DWC_HPRT0_PRTSPD_HIGH_SPEED)
++		return dwc_otg_hcd_get_frame_number(dwc_otg_hcd) >> 3;
++	else
++		return dwc_otg_hcd_get_frame_number(dwc_otg_hcd);
++}
++
++#ifdef DEBUG
++static void dump_urb_info(struct urb *urb, char *fn_name)
++{
++	DWC_PRINTF("%s, urb %p\n", fn_name, urb);
++	DWC_PRINTF("  Device address: %d\n", usb_pipedevice(urb->pipe));
++	DWC_PRINTF("  Endpoint: %d, %s\n", usb_pipeendpoint(urb->pipe),
++		   (usb_pipein(urb->pipe) ? "IN" : "OUT"));
++	DWC_PRINTF("  Endpoint type: %s\n", ({
++		char *pipetype;
++		switch (usb_pipetype(urb->pipe)) {
++		case PIPE_CONTROL:
++			pipetype = "CONTROL"; break;
++		case PIPE_BULK:
++			pipetype = "BULK"; break;
++		case PIPE_INTERRUPT:
++			pipetype = "INTERRUPT"; break;
++		case PIPE_ISOCHRONOUS:
++			pipetype = "ISOCHRONOUS"; break;
++		default:
++			pipetype = "UNKNOWN"; break;
++		}
++		pipetype;
++	}));
++	DWC_PRINTF("  Speed: %s\n", ({
++		char *speed;
++		switch (urb->dev->speed) {
++		case USB_SPEED_HIGH:
++			speed = "HIGH"; break;
++		case USB_SPEED_FULL:
++			speed = "FULL"; break;
++		case USB_SPEED_LOW:
++			speed = "LOW"; break;
++		default:
++			speed = "UNKNOWN"; break;
++		}
++		speed;
++	}));
++	DWC_PRINTF("  Max packet size: %d\n",
++		   usb_maxpacket(urb->dev, urb->pipe, usb_pipeout(urb->pipe)));
++	DWC_PRINTF("  Data buffer length: %d\n", urb->transfer_buffer_length);
++	DWC_PRINTF("  Transfer buffer: %p, Transfer DMA: %p\n",
++		   urb->transfer_buffer, (void *)urb->transfer_dma);
++	DWC_PRINTF("  Setup buffer: %p, Setup DMA: %p\n",
++		   urb->setup_packet, (void *)urb->setup_dma);
++	DWC_PRINTF("  Interval: %d\n", urb->interval);
++	if (usb_pipetype(urb->pipe) == PIPE_ISOCHRONOUS) {
++		int i;
++		for (i = 0; i < urb->number_of_packets; i++) {
++			DWC_PRINTF("  ISO Desc %d:\n", i);
++			DWC_PRINTF("    offset: %d, length %d\n",
++				   urb->iso_frame_desc[i].offset,
++				   urb->iso_frame_desc[i].length);
++		}
++	}
++}
++#endif
++
++/** Starts processing a USB transfer request specified by a USB Request Block
++ * (URB). mem_flags indicates the type of memory allocation to use while
++ * processing this URB. */
++static int dwc_otg_urb_enqueue(struct usb_hcd *hcd,
++#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,28)
++		       struct usb_host_endpoint *ep,
++#endif
++		       struct urb *urb, gfp_t mem_flags)
++{
++	int retval = 0;
++#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,28)
++	struct usb_host_endpoint *ep = urb->ep;
++#endif
++	dwc_irqflags_t irqflags;
++        void **ref_ep_hcpriv = &ep->hcpriv;
++	dwc_otg_hcd_t *dwc_otg_hcd = hcd_to_dwc_otg_hcd(hcd);
++	dwc_otg_hcd_urb_t *dwc_otg_urb;
++	int i;
++	int alloc_bandwidth = 0;
++	uint8_t ep_type = 0;
++	uint32_t flags = 0;
++	void *buf;
++
++#ifdef DEBUG
++	if (CHK_DEBUG_LEVEL(DBG_HCDV | DBG_HCD_URB)) {
++		dump_urb_info(urb, "dwc_otg_urb_enqueue");
++	}
++#endif
++
++	if (!urb->transfer_buffer && urb->transfer_buffer_length)
++		return -EINVAL;
++
++	if ((usb_pipetype(urb->pipe) == PIPE_ISOCHRONOUS)
++	    || (usb_pipetype(urb->pipe) == PIPE_INTERRUPT)) {
++		if (!dwc_otg_hcd_is_bandwidth_allocated
++		    (dwc_otg_hcd, ref_ep_hcpriv)) {
++			alloc_bandwidth = 1;
++		}
++	}
++
++	switch (usb_pipetype(urb->pipe)) {
++	case PIPE_CONTROL:
++		ep_type = USB_ENDPOINT_XFER_CONTROL;
++		break;
++	case PIPE_ISOCHRONOUS:
++		ep_type = USB_ENDPOINT_XFER_ISOC;
++		break;
++	case PIPE_BULK:
++		ep_type = USB_ENDPOINT_XFER_BULK;
++		break;
++	case PIPE_INTERRUPT:
++		ep_type = USB_ENDPOINT_XFER_INT;
++		break;
++	default:
++                DWC_WARN("Wrong EP type - %d\n", usb_pipetype(urb->pipe));
++	}
++
++        /* # of packets is often 0 - do we really need to call this then? */
++	dwc_otg_urb = dwc_otg_hcd_urb_alloc(dwc_otg_hcd,
++					    urb->number_of_packets,
++					    mem_flags == GFP_ATOMIC ? 1 : 0);
++
++	if (dwc_otg_urb == NULL)
++		return -ENOMEM;
++
++	dwc_otg_hcd_urb_set_pipeinfo(dwc_otg_urb, usb_pipedevice(urb->pipe),
++				     usb_pipeendpoint(urb->pipe), ep_type,
++				     usb_pipein(urb->pipe),
++				     usb_maxpacket(urb->dev, urb->pipe,
++						   !(usb_pipein(urb->pipe))));
++
++	buf = urb->transfer_buffer;
++	if (hcd->self.uses_dma && !buf && urb->transfer_buffer_length) {
++		/*
++		 * Calculate the virtual address from the DMA address, because
++		 * some class drivers may not fill in transfer_buffer. In
++		 * buffer DMA mode the virtual address is used when handling
++		 * non-DWORD-aligned buffers.
++		 */
++		buf = (void *)__bus_to_virt((unsigned long)urb->transfer_dma);
++		dev_warn_once(&urb->dev->dev,
++			      "USB transfer_buffer was NULL, will use __bus_to_virt(%pad)=%p\n",
++			      &urb->transfer_dma, buf);
++	}
++
++	if (!(urb->transfer_flags & URB_NO_INTERRUPT))
++		flags |= URB_GIVEBACK_ASAP;
++	if (urb->transfer_flags & URB_ZERO_PACKET)
++		flags |= URB_SEND_ZERO_PACKET;
++
++	dwc_otg_hcd_urb_set_params(dwc_otg_urb, urb, buf,
++				   urb->transfer_dma,
++				   urb->transfer_buffer_length,
++				   urb->setup_packet,
++				   urb->setup_dma, flags, urb->interval);
++
++	for (i = 0; i < urb->number_of_packets; ++i) {
++		dwc_otg_hcd_urb_set_iso_desc_params(dwc_otg_urb, i,
++						    urb->iso_frame_desc[i].offset,
++						    urb->iso_frame_desc[i].length);
++	}
++
++	DWC_SPINLOCK_IRQSAVE(dwc_otg_hcd->lock, &irqflags);
++	urb->hcpriv = dwc_otg_urb;
++#if USB_URB_EP_LINKING
++	retval = usb_hcd_link_urb_to_ep(hcd, urb);
++	if (0 == retval)
++#endif
++	{
++		retval = dwc_otg_hcd_urb_enqueue(dwc_otg_hcd, dwc_otg_urb,
++						/*(dwc_otg_qh_t **)*/
++						ref_ep_hcpriv, 1);
++		if (0 == retval) {
++			if (alloc_bandwidth) {
++				allocate_bus_bandwidth(hcd,
++						dwc_otg_hcd_get_ep_bandwidth(
++							dwc_otg_hcd, *ref_ep_hcpriv),
++						urb);
++			}
++		} else {
++			DWC_DEBUGPL(DBG_HCD, "DWC OTG dwc_otg_hcd_urb_enqueue failed rc %d\n", retval);
++#if USB_URB_EP_LINKING
++			usb_hcd_unlink_urb_from_ep(hcd, urb);
++#endif
++			DWC_FREE(dwc_otg_urb);
++			urb->hcpriv = NULL;
++			if (retval == -DWC_E_NO_DEVICE)
++				retval = -ENODEV;
++		}
++	}
++#if USB_URB_EP_LINKING
++	else
++	{
++		DWC_FREE(dwc_otg_urb);
++		urb->hcpriv = NULL;
++	}
++#endif
++	DWC_SPINUNLOCK_IRQRESTORE(dwc_otg_hcd->lock, irqflags);
++	return retval;
++}
++
++/** Aborts/cancels a USB transfer request. Always returns 0 to indicate
++ * success.  */
++#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,28)
++static int dwc_otg_urb_dequeue(struct usb_hcd *hcd, struct urb *urb)
++#else
++static int dwc_otg_urb_dequeue(struct usb_hcd *hcd, struct urb *urb, int status)
++#endif
++{
++	dwc_irqflags_t flags;
++	dwc_otg_hcd_t *dwc_otg_hcd;
++        int rc;
++
++	DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD URB Dequeue\n");
++
++	dwc_otg_hcd = hcd_to_dwc_otg_hcd(hcd);
++
++#ifdef DEBUG
++	if (CHK_DEBUG_LEVEL(DBG_HCDV | DBG_HCD_URB)) {
++		dump_urb_info(urb, "dwc_otg_urb_dequeue");
++	}
++#endif
++
++	DWC_SPINLOCK_IRQSAVE(dwc_otg_hcd->lock, &flags);
++	rc = usb_hcd_check_unlink_urb(hcd, urb, status);
++	if (0 == rc) {
++		if(urb->hcpriv != NULL) {
++	                dwc_otg_hcd_urb_dequeue(dwc_otg_hcd,
++	                                    (dwc_otg_hcd_urb_t *)urb->hcpriv);
++
++		        DWC_FREE(urb->hcpriv);
++			urb->hcpriv = NULL;
++		}
++        }
++
++        if (0 == rc) {
++		/* Higher layer software sets URB status. */
++#if USB_URB_EP_LINKING
++                usb_hcd_unlink_urb_from_ep(hcd, urb);
++#endif
++		DWC_SPINUNLOCK_IRQRESTORE(dwc_otg_hcd->lock, flags);
++
++
++#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,28)
++                usb_hcd_giveback_urb(hcd, urb);
++#else
++                usb_hcd_giveback_urb(hcd, urb, status);
++#endif
++                if (CHK_DEBUG_LEVEL(DBG_HCDV | DBG_HCD_URB)) {
++                        DWC_PRINTF("Called usb_hcd_giveback_urb() \n");
++                        DWC_PRINTF("  1urb->status = %d\n", urb->status);
++                }
++                DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD URB Dequeue OK\n");
++        } else {
++		DWC_SPINUNLOCK_IRQRESTORE(dwc_otg_hcd->lock, flags);
++                DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD URB Dequeue failed - rc %d\n",
++                            rc);
++        }
++
++	return rc;
++}
++
++/* Frees resources in the DWC_otg controller related to a given endpoint. Also
++ * clears state in the HCD related to the endpoint. Any URBs for the endpoint
++ * must already be dequeued. */
++static void endpoint_disable(struct usb_hcd *hcd, struct usb_host_endpoint *ep)
++{
++	dwc_otg_hcd_t *dwc_otg_hcd = hcd_to_dwc_otg_hcd(hcd);
++
++	DWC_DEBUGPL(DBG_HCD,
++		    "DWC OTG HCD EP DISABLE: _bEndpointAddress=0x%02x, "
++		    "endpoint=%d\n", ep->desc.bEndpointAddress,
++		    dwc_ep_addr_to_endpoint(ep->desc.bEndpointAddress));
++	dwc_otg_hcd_endpoint_disable(dwc_otg_hcd, ep->hcpriv, 250);
++	ep->hcpriv = NULL;
++}
++
++#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,30)
++/* Resets endpoint-specific parameter values; in the current version it is used
++ * to reset the data toggle (as a workaround). This function can be called from
++ * the usb_clear_halt routine. */
++static void endpoint_reset(struct usb_hcd *hcd, struct usb_host_endpoint *ep)
++{
++	dwc_irqflags_t flags;
++	struct usb_device *udev = NULL;
++	int epnum = usb_endpoint_num(&ep->desc);
++	int is_out = usb_endpoint_dir_out(&ep->desc);
++	int is_control = usb_endpoint_xfer_control(&ep->desc);
++	dwc_otg_hcd_t *dwc_otg_hcd = hcd_to_dwc_otg_hcd(hcd);
++        struct device *dev = DWC_OTG_OS_GETDEV(dwc_otg_hcd->otg_dev->os_dep);
++
++	if (dev)
++		udev = to_usb_device(dev);
++	else
++		return;
++
++	DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD EP RESET: Endpoint Num=0x%02d\n", epnum);
++
++	DWC_SPINLOCK_IRQSAVE(dwc_otg_hcd->lock, &flags);
++	usb_settoggle(udev, epnum, is_out, 0);
++	if (is_control)
++		usb_settoggle(udev, epnum, !is_out, 0);
++
++	if (ep->hcpriv) {
++		dwc_otg_hcd_endpoint_reset(dwc_otg_hcd, ep->hcpriv);
++	}
++	DWC_SPINUNLOCK_IRQRESTORE(dwc_otg_hcd->lock, flags);
++}
++#endif
++
++/** Handles host mode interrupts for the DWC_otg controller. Returns IRQ_NONE if
++ * there was no interrupt to handle. Returns IRQ_HANDLED if there was a valid
++ * interrupt.
++ *
++ * This function is called by the USB core when an interrupt occurs */
++static irqreturn_t dwc_otg_hcd_irq(struct usb_hcd *hcd)
++{
++	dwc_otg_hcd_t *dwc_otg_hcd = hcd_to_dwc_otg_hcd(hcd);
++	int32_t retval = dwc_otg_hcd_handle_intr(dwc_otg_hcd);
++	if (retval != 0) {
++		S3C2410X_CLEAR_EINTPEND();
++	}
++	return IRQ_RETVAL(retval);
++}
++
++/** Creates Status Change bitmap for the root hub and root port. The bitmap is
++ * returned in buf. Bit 0 is the status change indicator for the root hub. Bit 1
++ * is the status change indicator for the single root port. Returns 1 if either
++ * change indicator is 1, otherwise returns 0. */
++int hub_status_data(struct usb_hcd *hcd, char *buf)
++{
++	dwc_otg_hcd_t *dwc_otg_hcd = hcd_to_dwc_otg_hcd(hcd);
++
++	buf[0] = 0;
++	buf[0] |= (dwc_otg_hcd_is_status_changed(dwc_otg_hcd, 1)) << 1;
++
++	return (buf[0] != 0);
++}
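++
++/*
++ * For illustration: with the single root port of this controller, a pending
++ * port status change yields buf[0] == 0x02 (bit 1 set) and a return value of
++ * 1; with nothing pending buf[0] stays 0x00 and 0 is returned.
++ */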
++
++/** Handles hub class-specific requests. */
++int hub_control(struct usb_hcd *hcd,
++		u16 typeReq, u16 wValue, u16 wIndex, char *buf, u16 wLength)
++{
++	int retval;
++
++	retval = dwc_otg_hcd_hub_control(hcd_to_dwc_otg_hcd(hcd),
++					 typeReq, wValue, wIndex, buf, wLength);
++
++	switch (retval) {
++	case -DWC_E_INVALID:
++		retval = -EINVAL;
++		break;
++	}
++
++	return retval;
++}
++
++#endif /* DWC_DEVICE_ONLY */
+--- /dev/null
++++ b/drivers/usb/host/dwc_otg/dwc_otg_hcd_queue.c
+@@ -0,0 +1,957 @@
++/* ==========================================================================
++ * $File: //dwh/usb_iip/dev/software/otg/linux/drivers/dwc_otg_hcd_queue.c $
++ * $Revision: #44 $
++ * $Date: 2011/10/26 $
++ * $Change: 1873028 $
++ *
++ * Synopsys HS OTG Linux Software Driver and documentation (hereinafter,
++ * "Software") is an Unsupported proprietary work of Synopsys, Inc. unless
++ * otherwise expressly agreed to in writing between Synopsys and you.
++ *
++ * The Software IS NOT an item of Licensed Software or Licensed Product under
++ * any End User Software License Agreement or Agreement for Licensed Product
++ * with Synopsys or any supplement thereto. You are permitted to use and
++ * redistribute this Software in source and binary forms, with or without
++ * modification, provided that redistributions of source code must retain this
++ * notice. You may not view, use, disclose, copy or distribute this file or
++ * any information contained herein except pursuant to this license grant from
++ * Synopsys. If you do not agree with this notice, including the disclaimer
++ * below, then you are not authorized to use the Software.
++ *
++ * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS" BASIS
++ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
++ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
++ * ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS BE LIABLE FOR ANY DIRECT,
++ * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
++ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
++ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
++ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
++ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
++ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
++ * DAMAGE.
++ * ========================================================================== */
++#ifndef DWC_DEVICE_ONLY
++
++/**
++ * @file
++ *
++ * This file contains the functions to manage Queue Heads and Queue
++ * Transfer Descriptors.
++ */
++
++#include "dwc_otg_hcd.h"
++#include "dwc_otg_regs.h"
++
++extern bool microframe_schedule;
++
++/**
++ * Free each QTD in the QH's QTD-list then free the QH.  QH should already be
++ * removed from a list.  QTD list should already be empty if called from URB
++ * Dequeue.
++ *
++ * @param hcd HCD instance.
++ * @param qh The QH to free.
++ */
++void dwc_otg_hcd_qh_free(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh)
++{
++	dwc_otg_qtd_t *qtd, *qtd_tmp;
++	dwc_irqflags_t flags;
++
++	/* Free each QTD in the QTD list */
++	DWC_SPINLOCK_IRQSAVE(hcd->lock, &flags);
++	DWC_CIRCLEQ_FOREACH_SAFE(qtd, qtd_tmp, &qh->qtd_list, qtd_list_entry) {
++		DWC_CIRCLEQ_REMOVE(&qh->qtd_list, qtd, qtd_list_entry);
++		dwc_otg_hcd_qtd_free(qtd);
++	}
++
++	if (hcd->core_if->dma_desc_enable) {
++		dwc_otg_hcd_qh_free_ddma(hcd, qh);
++	} else if (qh->dw_align_buf) {
++		uint32_t buf_size;
++		if (qh->ep_type == UE_ISOCHRONOUS) {
++			buf_size = 4096;
++		} else {
++			buf_size = hcd->core_if->core_params->max_transfer_size;
++		}
++		DWC_DMA_FREE(buf_size, qh->dw_align_buf, qh->dw_align_buf_dma);
++	}
++
++	DWC_FREE(qh);
++	DWC_SPINUNLOCK_IRQRESTORE(hcd->lock, flags);
++	return;
++}
++
++#define BitStuffTime(bytecount)  ((8 * 7 * bytecount) / 6)
++#define HS_HOST_DELAY		5	/* nanoseconds */
++#define FS_LS_HOST_DELAY	1000	/* nanoseconds */
++#define HUB_LS_SETUP		333	/* nanoseconds */
++#define NS_TO_US(ns)		((ns + 500) / 1000)
++				/* convert & round nanoseconds to microseconds */
++
++static uint32_t calc_bus_time(int speed, int is_in, int is_isoc, int bytecount)
++{
++	unsigned long retval;
++
++	switch (speed) {
++	case USB_SPEED_HIGH:
++		if (is_isoc) {
++			retval =
++			    ((38 * 8 * 2083) +
++			     (2083 * (3 + BitStuffTime(bytecount)))) / 1000 +
++			    HS_HOST_DELAY;
++		} else {
++			retval =
++			    ((55 * 8 * 2083) +
++			     (2083 * (3 + BitStuffTime(bytecount)))) / 1000 +
++			    HS_HOST_DELAY;
++		}
++		break;
++	case USB_SPEED_FULL:
++		if (is_isoc) {
++			retval =
++			    (8354 * (31 + 10 * BitStuffTime(bytecount))) / 1000;
++			if (is_in) {
++				retval = 7268 + FS_LS_HOST_DELAY + retval;
++			} else {
++				retval = 6265 + FS_LS_HOST_DELAY + retval;
++			}
++		} else {
++			retval =
++			    (8354 * (31 + 10 * BitStuffTime(bytecount))) / 1000;
++			retval = 9107 + FS_LS_HOST_DELAY + retval;
++		}
++		break;
++	case USB_SPEED_LOW:
++		if (is_in) {
++			retval =
++			    (67667 * (31 + 10 * BitStuffTime(bytecount))) /
++			    1000;
++			retval =
++			    64060 + (2 * HUB_LS_SETUP) + FS_LS_HOST_DELAY +
++			    retval;
++		} else {
++			retval =
++			    (66700 * (31 + 10 * BitStuffTime(bytecount))) /
++			    1000;
++			retval =
++			    64107 + (2 * HUB_LS_SETUP) + FS_LS_HOST_DELAY +
++			    retval;
++		}
++		break;
++	default:
++		DWC_WARN("Unknown device speed\n");
++		retval = -1;
++	}
++
++	return NS_TO_US(retval);
++}
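++
++/*
++ * Worked example (for illustration): a 512-byte high-speed, non-isochronous
++ * transfer gives BitStuffTime(512) = (8 * 7 * 512) / 6 = 4778, so
++ * calc_bus_time() computes ((55 * 8 * 2083) + (2083 * (3 + 4778))) / 1000 +
++ * HS_HOST_DELAY = 10880 ns, which NS_TO_US() rounds to 11 us of claimed bus
++ * time per (micro)frame.
++ */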
++
++/**
++ * Initializes a QH structure.
++ *
++ * @param hcd The HCD state structure for the DWC OTG controller.
++ * @param qh  The QH to init.
++ * @param urb Holds the information about the device/endpoint that we need
++ * 	      to initialize the QH.
++ */
++#define SCHEDULE_SLOP 10
++void qh_init(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh, dwc_otg_hcd_urb_t * urb)
++{
++	char *speed, *type;
++	int dev_speed;
++	uint32_t hub_addr, hub_port;
++
++	dwc_memset(qh, 0, sizeof(dwc_otg_qh_t));
++
++	/* Initialize QH */
++	qh->ep_type = dwc_otg_hcd_get_pipe_type(&urb->pipe_info);
++	qh->ep_is_in = dwc_otg_hcd_is_pipe_in(&urb->pipe_info) ? 1 : 0;
++
++	qh->data_toggle = DWC_OTG_HC_PID_DATA0;
++	qh->maxp = dwc_otg_hcd_get_mps(&urb->pipe_info);
++	DWC_CIRCLEQ_INIT(&qh->qtd_list);
++	DWC_LIST_INIT(&qh->qh_list_entry);
++	qh->channel = NULL;
++
++	/* FS/LS Endpoint on HS Hub,
++	 * NOT virtual root hub */
++	dev_speed = hcd->fops->speed(hcd, urb->priv);
++
++	hcd->fops->hub_info(hcd, urb->priv, &hub_addr, &hub_port);
++	qh->do_split = 0;
++	if (microframe_schedule)
++		qh->speed = dev_speed;
++
++	qh->nak_frame = 0xffff;
++
++	if (((dev_speed == USB_SPEED_LOW) ||
++	     (dev_speed == USB_SPEED_FULL)) &&
++	    (hub_addr != 0 && hub_addr != 1)) {
++		DWC_DEBUGPL(DBG_HCD,
++			    "QH init: EP %d: TT found at hub addr %d, for port %d\n",
++			    dwc_otg_hcd_get_ep_num(&urb->pipe_info), hub_addr,
++			    hub_port);
++		qh->do_split = 1;
++		qh->skip_count = 0;
++	}
++
++	if (qh->ep_type == UE_INTERRUPT || qh->ep_type == UE_ISOCHRONOUS) {
++		/* Compute scheduling parameters once and save them. */
++		hprt0_data_t hprt;
++
++		/** @todo Account for split transfers in the bus time. */
++		int bytecount =
++		    dwc_hb_mult(qh->maxp) * dwc_max_packet(qh->maxp);
++
++		qh->usecs =
++		    calc_bus_time((qh->do_split ? USB_SPEED_HIGH : dev_speed),
++				  qh->ep_is_in, (qh->ep_type == UE_ISOCHRONOUS),
++				  bytecount);
++		/* Start in a slightly later (micro)frame. */
++		qh->sched_frame = dwc_frame_num_inc(hcd->frame_number,
++						    SCHEDULE_SLOP);
++		qh->interval = urb->interval;
++
++#if 0
++		/* Increase interrupt polling rate for debugging. */
++		if (qh->ep_type == UE_INTERRUPT) {
++			qh->interval = 8;
++		}
++#endif
++		hprt.d32 = DWC_READ_REG32(hcd->core_if->host_if->hprt0);
++		if ((hprt.b.prtspd == DWC_HPRT0_PRTSPD_HIGH_SPEED) &&
++		    ((dev_speed == USB_SPEED_LOW) ||
++		     (dev_speed == USB_SPEED_FULL))) {
++			qh->interval *= 8;
++			qh->sched_frame |= 0x7;
++			qh->start_split_frame = qh->sched_frame;
++		}
++
++	}
++
++	DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD QH Initialized\n");
++	DWC_DEBUGPL(DBG_HCDV, "DWC OTG HCD QH  - qh = %p\n", qh);
++	DWC_DEBUGPL(DBG_HCDV, "DWC OTG HCD QH  - Device Address = %d\n",
++		    dwc_otg_hcd_get_dev_addr(&urb->pipe_info));
++	DWC_DEBUGPL(DBG_HCDV, "DWC OTG HCD QH  - Endpoint %d, %s\n",
++		    dwc_otg_hcd_get_ep_num(&urb->pipe_info),
++		    dwc_otg_hcd_is_pipe_in(&urb->pipe_info) ? "IN" : "OUT");
++	switch (dev_speed) {
++	case USB_SPEED_LOW:
++		qh->dev_speed = DWC_OTG_EP_SPEED_LOW;
++		speed = "low";
++		break;
++	case USB_SPEED_FULL:
++		qh->dev_speed = DWC_OTG_EP_SPEED_FULL;
++		speed = "full";
++		break;
++	case USB_SPEED_HIGH:
++		qh->dev_speed = DWC_OTG_EP_SPEED_HIGH;
++		speed = "high";
++		break;
++	default:
++		speed = "?";
++		break;
++	}
++	DWC_DEBUGPL(DBG_HCDV, "DWC OTG HCD QH  - Speed = %s\n", speed);
++
++	switch (qh->ep_type) {
++	case UE_ISOCHRONOUS:
++		type = "isochronous";
++		break;
++	case UE_INTERRUPT:
++		type = "interrupt";
++		break;
++	case UE_CONTROL:
++		type = "control";
++		break;
++	case UE_BULK:
++		type = "bulk";
++		break;
++	default:
++		type = "?";
++		break;
++	}
++
++	DWC_DEBUGPL(DBG_HCDV, "DWC OTG HCD QH  - Type = %s\n", type);
++
++#ifdef DEBUG
++	if (qh->ep_type == UE_INTERRUPT) {
++		DWC_DEBUGPL(DBG_HCDV, "DWC OTG HCD QH - usecs = %d\n",
++			    qh->usecs);
++		DWC_DEBUGPL(DBG_HCDV, "DWC OTG HCD QH - interval = %d\n",
++			    qh->interval);
++	}
++#endif
++
++}
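++
++/*
++ * Example of the split handling above (for illustration): a full-speed
++ * interrupt endpoint behind an external high-speed hub (hub address > 1) with
++ * urb->interval = 1 gets qh->do_split = 1; with the root port running at high
++ * speed its interval is scaled to 8 microframes and the low three bits of
++ * qh->sched_frame are set (|= 0x7).
++ */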
++
++/**
++ * This function allocates and initializes a QH.
++ *
++ * @param hcd The HCD state structure for the DWC OTG controller.
++ * @param urb Holds the information about the device/endpoint that we need
++ * 	      to initialize the QH.
++ * @param atomic_alloc Flag to do atomic allocation if needed
++ *
++ * @return Returns pointer to the newly allocated QH, or NULL on error. */
++dwc_otg_qh_t *dwc_otg_hcd_qh_create(dwc_otg_hcd_t * hcd,
++				    dwc_otg_hcd_urb_t * urb, int atomic_alloc)
++{
++	dwc_otg_qh_t *qh;
++
++	/* Allocate memory */
++	/** @todo add memflags argument */
++	qh = dwc_otg_hcd_qh_alloc(atomic_alloc);
++	if (qh == NULL) {
++		DWC_ERROR("qh allocation failed");
++		return NULL;
++	}
++
++	qh_init(hcd, qh, urb);
++
++	if (hcd->core_if->dma_desc_enable
++	    && (dwc_otg_hcd_qh_init_ddma(hcd, qh) < 0)) {
++		dwc_otg_hcd_qh_free(hcd, qh);
++		return NULL;
++	}
++
++	return qh;
++}
++
++/* microframe_schedule=0 start */
++
++/**
++ * Checks that a channel is available for a periodic transfer.
++ *
++ * @return 0 if successful, negative error code otherwise.
++ */
++static int periodic_channel_available(dwc_otg_hcd_t * hcd)
++{
++	/*
++	 * Currently assuming that there is a dedicated host channel for each
++	 * periodic transaction plus at least one host channel for
++	 * non-periodic transactions.
++	 */
++	int status;
++	int num_channels;
++
++	num_channels = hcd->core_if->core_params->host_channels;
++	if ((hcd->periodic_channels + hcd->non_periodic_channels < num_channels)
++	    && (hcd->periodic_channels < num_channels - 1)) {
++		status = 0;
++	} else {
++		DWC_INFO("%s: Total channels: %d, Periodic: %d, Non-periodic: %d\n",
++			__func__, num_channels, hcd->periodic_channels, hcd->non_periodic_channels);	//NOTICE
++		status = -DWC_E_NO_SPACE;
++	}
++
++	return status;
++}
++
++/**
++ * Checks that there is sufficient bandwidth for the specified QH in the
++ * periodic schedule. For simplicity, this calculation assumes that all the
++ * transfers in the periodic schedule may occur in the same (micro)frame.
++ *
++ * @param hcd The HCD state structure for the DWC OTG controller.
++ * @param qh QH containing periodic bandwidth required.
++ *
++ * @return 0 if successful, negative error code otherwise.
++ */
++static int check_periodic_bandwidth(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh)
++{
++	int status;
++	int16_t max_claimed_usecs;
++
++	status = 0;
++
++	if ((qh->dev_speed == DWC_OTG_EP_SPEED_HIGH) || qh->do_split) {
++		/*
++		 * High speed mode.
++		 * Max periodic usecs is 80% x 125 usec = 100 usec.
++		 */
++
++		max_claimed_usecs = 100 - qh->usecs;
++	} else {
++		/*
++		 * Full speed mode.
++		 * Max periodic usecs is 90% x 1000 usec = 900 usec.
++		 */
++		max_claimed_usecs = 900 - qh->usecs;
++	}
++
++	if (hcd->periodic_usecs > max_claimed_usecs) {
++		DWC_INFO("%s: already claimed usecs %d, required usecs %d\n", __func__, hcd->periodic_usecs, qh->usecs);	//NOTICE
++		status = -DWC_E_NO_SPACE;
++	}
++
++	return status;
++}
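++
++/*
++ * For illustration: a high-speed periodic QH needing qh->usecs = 30 leaves
++ * max_claimed_usecs = 100 - 30 = 70; if hcd->periodic_usecs is already 80,
++ * the check fails with -DWC_E_NO_SPACE, otherwise the QH fits in the 80%
++ * budget of the 125 us microframe.
++ */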
++
++/* microframe_schedule=0 end */
++
++/**
++ * Microframe scheduler
++ * Tracks the total microframe usage in hcd->frame_usecs and each QH's usage
++ * in qh->frame_usecs; when a QH surrenders its slot the time is donated back.
++ */
++const unsigned short max_uframe_usecs[] = { 100, 100, 100, 100, 100, 100, 30, 0 };
++
++/*
++ * called from dwc_otg_hcd.c:dwc_otg_hcd_init
++ */
++int init_hcd_usecs(dwc_otg_hcd_t *_hcd)
++{
++	int i;
++	for (i=0; i<8; i++) {
++		_hcd->frame_usecs[i] = max_uframe_usecs[i];
++	}
++	return 0;
++}
++
++static int find_single_uframe(dwc_otg_hcd_t * _hcd, dwc_otg_qh_t * _qh)
++{
++	int i;
++	unsigned short utime;
++	int t_left;
++	int ret;
++	int done;
++
++	ret = -1;
++	utime = _qh->usecs;
++	t_left = utime;
++	i = 0;
++	done = 0;
++	while (done == 0) {
++		/* At the start _hcd->frame_usecs[i] = max_uframe_usecs[i]; */
++		if (utime <= _hcd->frame_usecs[i]) {
++			_hcd->frame_usecs[i] -= utime;
++			_qh->frame_usecs[i] += utime;
++			t_left -= utime;
++			ret = i;
++			done = 1;
++			return ret;
++		} else {
++			i++;
++			if (i == 8) {
++				done = 1;
++				ret = -1;
++			}
++		}
++	}
++	return ret;
++ }
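++
++/*
++ * Example allocation (for illustration): starting from a fresh schedule where
++ * frame_usecs[] = { 100, 100, 100, 100, 100, 100, 30, 0 }, placing a QH with
++ * _qh->usecs = 24 returns microframe 0, leaves _hcd->frame_usecs[0] = 76 and
++ * records the 24 us in _qh->frame_usecs[0]; deschedule_periodic() later
++ * donates that time back.
++ */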
++
++/*
++ * Use this for full-speed transfers that can span multiple microframes.
++ */
++static int find_multi_uframe(dwc_otg_hcd_t * _hcd, dwc_otg_qh_t * _qh)
++{
++	int i;
++	int j;
++	unsigned short utime;
++	int t_left;
++	int ret;
++	int done;
++	unsigned short xtime;
++
++	ret = -1;
++	utime = _qh->usecs;
++	t_left = utime;
++	i = 0;
++	done = 0;
++loop:
++	while (done == 0) {
++		if(_hcd->frame_usecs[i] <= 0) {
++			i++;
++			if (i == 8) {
++				done = 1;
++				ret = -1;
++			}
++			goto loop;
++		}
++
++		/*
++		 * We need n consecutive slots, so use j as the start slot;
++		 * slots j, j+1, ... must together provide enough time (for now).
++		 */
++		xtime = _hcd->frame_usecs[i];
++		for (j = i + 1; j < 8; j++) {
++			/*
++			 * if we add this frame remaining time to xtime we may
++			 * be OK, if not we need to test j for a complete frame
++			 */
++			if ((xtime + _hcd->frame_usecs[j]) < utime) {
++				if (_hcd->frame_usecs[j] < max_uframe_usecs[j]) {
++					j = 8;
++					ret = -1;
++					continue;
++				}
++			}
++			if (xtime >= utime) {
++				ret = i;
++				j = 8;	/* stop loop with a good value ret */
++				continue;
++			}
++			/* add the frame time to x time */
++			xtime += _hcd->frame_usecs[j];
++			/* we must have a fully available next frame or break */
++			if ((xtime < utime) &&
++			    (_hcd->frame_usecs[j] == max_uframe_usecs[j])) {
++				ret = -1;
++				j = 8;	/* stop loop with a bad value ret */
++				continue;
++			}
++		}
++		if (ret >= 0) {
++			t_left = utime;
++			for (j = i; (t_left>0) && (j < 8); j++ ) {
++				t_left -= _hcd->frame_usecs[j];
++				if ( t_left <= 0 ) {
++					_qh->frame_usecs[j] += _hcd->frame_usecs[j] + t_left;
++					_hcd->frame_usecs[j]= -t_left;
++					ret = i;
++					done = 1;
++				} else {
++					_qh->frame_usecs[j] += _hcd->frame_usecs[j];
++					_hcd->frame_usecs[j] = 0;
++				}
++			}
++		} else {
++			i++;
++			if (i == 8) {
++				done = 1;
++				ret = -1;
++			}
++		}
++	}
++	return ret;
++}
++
++static int find_uframe(dwc_otg_hcd_t * _hcd, dwc_otg_qh_t * _qh)
++{
++	int ret;
++	ret = -1;
++
++	if (_qh->speed == USB_SPEED_HIGH) {
++		/* if this is a hs transaction we need a full frame */
++		ret = find_single_uframe(_hcd, _qh);
++	} else {
++		/* if this is a fs transaction we may need a sequence of frames */
++		ret = find_multi_uframe(_hcd, _qh);
++	}
++	return ret;
++}
++
++/**
++ * Checks that the max transfer size allowed in a host channel is large enough
++ * to handle the maximum data transfer in a single (micro)frame for a periodic
++ * transfer.
++ *
++ * @param hcd The HCD state structure for the DWC OTG controller.
++ * @param qh QH for a periodic endpoint.
++ *
++ * @return 0 if successful, negative error code otherwise.
++ */
++static int check_max_xfer_size(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh)
++{
++	int status;
++	uint32_t max_xfer_size;
++	uint32_t max_channel_xfer_size;
++
++	status = 0;
++
++	max_xfer_size = dwc_max_packet(qh->maxp) * dwc_hb_mult(qh->maxp);
++	max_channel_xfer_size = hcd->core_if->core_params->max_transfer_size;
++
++	if (max_xfer_size > max_channel_xfer_size) {
++		DWC_INFO("%s: Periodic xfer length %d > " "max xfer length for channel %d\n",
++				__func__, max_xfer_size, max_channel_xfer_size);	//NOTICE
++		status = -DWC_E_NO_SPACE;
++	}
++
++	return status;
++}
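++
++/*
++ * For illustration (assuming dwc_max_packet()/dwc_hb_mult() decode
++ * wMaxPacketSize in the usual USB 2.0 way): a high-bandwidth isochronous
++ * endpoint advertising 1024 bytes with two additional transactions per
++ * microframe needs max_xfer_size = 3 * 1024 = 3072 bytes; if the core's
++ * max_transfer_size parameter is smaller than that, the QH is rejected
++ * with -DWC_E_NO_SPACE.
++ */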
++
++
++
++/**
++ * Schedules an interrupt or isochronous transfer in the periodic schedule.
++ *
++ * @param hcd The HCD state structure for the DWC OTG controller.
++ * @param qh QH for the periodic transfer. The QH should already contain the
++ * scheduling information.
++ *
++ * @return 0 if successful, negative error code otherwise.
++ */
++static int schedule_periodic(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh)
++{
++	int status = 0;
++
++	if (microframe_schedule) {
++		int frame;
++		status = find_uframe(hcd, qh);
++		frame = -1;
++		if (status == 0) {
++			frame = 7;
++		} else {
++			if (status > 0 )
++				frame = status-1;
++		}
++
++		/* Set the new frame up */
++		if (frame > -1) {
++			qh->sched_frame &= ~0x7;
++			qh->sched_frame |= (frame & 7);
++		}
++
++		if (status != -1)
++			status = 0;
++	} else {
++		status = periodic_channel_available(hcd);
++		if (status) {
++			DWC_INFO("%s: No host channel available for periodic " "transfer.\n", __func__);	//NOTICE
++			return status;
++		}
++
++		status = check_periodic_bandwidth(hcd, qh);
++	}
++	if (status) {
++		DWC_INFO("%s: Insufficient periodic bandwidth for "
++			    "periodic transfer.\n", __func__);
++		return status;
++	}
++	status = check_max_xfer_size(hcd, qh);
++	if (status) {
++		DWC_INFO("%s: Channel max transfer size too small "
++			    "for periodic transfer.\n", __func__);
++		return status;
++	}
++
++	if (hcd->core_if->dma_desc_enable) {
++		/* Don't rely on SOF and start in ready schedule */
++		DWC_LIST_INSERT_TAIL(&hcd->periodic_sched_ready, &qh->qh_list_entry);
++	}
++	else {
++		if (fiq_enable &&
++		    (DWC_LIST_EMPTY(&hcd->periodic_sched_inactive) ||
++		     dwc_frame_num_le(qh->sched_frame, hcd->fiq_state->next_sched_frame))) {
++			hcd->fiq_state->next_sched_frame = qh->sched_frame;
++		}
++		/* Always start in the inactive schedule. */
++		DWC_LIST_INSERT_TAIL(&hcd->periodic_sched_inactive, &qh->qh_list_entry);
++	}
++
++	if (!microframe_schedule) {
++		/* Reserve the periodic channel. */
++		hcd->periodic_channels++;
++	}
++
++	/* Update claimed usecs per (micro)frame. */
++	hcd->periodic_usecs += qh->usecs;
++
++	return status;
++}
++
++
++/**
++ * This function adds a QH to either the non periodic or periodic schedule if
++ * it is not already in the schedule. If the QH is already in the schedule, no
++ * action is taken.
++ *
++ * @return 0 if successful, negative error code otherwise.
++ */
++int dwc_otg_hcd_qh_add(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh)
++{
++	int status = 0;
++	gintmsk_data_t intr_mask = {.d32 = 0 };
++
++	if (!DWC_LIST_EMPTY(&qh->qh_list_entry)) {
++		/* QH already in a schedule. */
++		return status;
++	}
++
++	/* Add the new QH to the appropriate schedule */
++	if (dwc_qh_is_non_per(qh)) {
++		/* Always start in the inactive schedule. */
++		DWC_LIST_INSERT_TAIL(&hcd->non_periodic_sched_inactive,
++				     &qh->qh_list_entry);
++		//hcd->fiq_state->kick_np_queues = 1;
++	} else {
++		status = schedule_periodic(hcd, qh);
++		if ( !hcd->periodic_qh_count ) {
++			intr_mask.b.sofintr = 1;
++			if (fiq_enable) {
++				local_fiq_disable();
++				fiq_fsm_spin_lock(&hcd->fiq_state->lock);
++				DWC_MODIFY_REG32(&hcd->core_if->core_global_regs->gintmsk, intr_mask.d32, intr_mask.d32);
++				fiq_fsm_spin_unlock(&hcd->fiq_state->lock);
++				local_fiq_enable();
++			} else {
++				DWC_MODIFY_REG32(&hcd->core_if->core_global_regs->gintmsk, intr_mask.d32, intr_mask.d32);
++			}
++		}
++		hcd->periodic_qh_count++;
++	}
++
++	return status;
++}
++
++/**
++ * Removes an interrupt or isochronous transfer from the periodic schedule.
++ *
++ * @param hcd The HCD state structure for the DWC OTG controller.
++ * @param qh QH for the periodic transfer.
++ */
++static void deschedule_periodic(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh)
++{
++	int i;
++	DWC_LIST_REMOVE_INIT(&qh->qh_list_entry);
++
++	/* Update claimed usecs per (micro)frame. */
++	hcd->periodic_usecs -= qh->usecs;
++
++	if (!microframe_schedule) {
++		/* Release the periodic channel reservation. */
++		hcd->periodic_channels--;
++	} else {
++		for (i = 0; i < 8; i++) {
++			hcd->frame_usecs[i] += qh->frame_usecs[i];
++			qh->frame_usecs[i] = 0;
++		}
++	}
++}
++
++/**
++ * Removes a QH from either the non-periodic or periodic schedule.  Memory is
++ * not freed.
++ *
++ * @param hcd The HCD state structure.
++ * @param qh QH to remove from schedule. */
++void dwc_otg_hcd_qh_remove(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh)
++{
++	gintmsk_data_t intr_mask = {.d32 = 0 };
++
++	if (DWC_LIST_EMPTY(&qh->qh_list_entry)) {
++		/* QH is not in a schedule. */
++		return;
++	}
++
++	if (dwc_qh_is_non_per(qh)) {
++		if (hcd->non_periodic_qh_ptr == &qh->qh_list_entry) {
++			hcd->non_periodic_qh_ptr =
++			    hcd->non_periodic_qh_ptr->next;
++		}
++		DWC_LIST_REMOVE_INIT(&qh->qh_list_entry);
++		//if (!DWC_LIST_EMPTY(&hcd->non_periodic_sched_inactive))
++		//	hcd->fiq_state->kick_np_queues = 1;
++	} else {
++		deschedule_periodic(hcd, qh);
++		hcd->periodic_qh_count--;
++		if( !hcd->periodic_qh_count && !fiq_fsm_enable ) {
++			intr_mask.b.sofintr = 1;
++			if (fiq_enable) {
++				local_fiq_disable();
++				fiq_fsm_spin_lock(&hcd->fiq_state->lock);
++				DWC_MODIFY_REG32(&hcd->core_if->core_global_regs->gintmsk, intr_mask.d32, 0);
++				fiq_fsm_spin_unlock(&hcd->fiq_state->lock);
++				local_fiq_enable();
++			} else {
++				DWC_MODIFY_REG32(&hcd->core_if->core_global_regs->gintmsk, intr_mask.d32, 0);
++			}
++		}
++	}
++}
++
++/**
++ * Deactivates a QH. For non-periodic QHs, removes the QH from the active
++ * non-periodic schedule. The QH is added to the inactive non-periodic
++ * schedule if any QTDs are still attached to the QH.
++ *
++ * For periodic QHs, the QH is removed from the periodic queued schedule. If
++ * there are any QTDs still attached to the QH, the QH is added to either the
++ * periodic inactive schedule or the periodic ready schedule and its next
++ * scheduled frame is calculated. The QH is placed in the ready schedule if
++ * the scheduled frame has been reached already. Otherwise it's placed in the
++ * inactive schedule. If there are no QTDs attached to the QH, the QH is
++ * completely removed from the periodic schedule.
++ */
++void dwc_otg_hcd_qh_deactivate(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh,
++			       int sched_next_periodic_split)
++{
++	if (dwc_qh_is_non_per(qh)) {
++		dwc_otg_hcd_qh_remove(hcd, qh);
++		if (!DWC_CIRCLEQ_EMPTY(&qh->qtd_list)) {
++			/* Add back to inactive non-periodic schedule. */
++			dwc_otg_hcd_qh_add(hcd, qh);
++			//hcd->fiq_state->kick_np_queues = 1;
++		}
++	} else {
++		uint16_t frame_number = dwc_otg_hcd_get_frame_number(hcd);
++
++		if (qh->do_split) {
++			/* Schedule the next continuing periodic split transfer */
++			if (sched_next_periodic_split) {
++
++				qh->sched_frame = frame_number;
++
++				if (dwc_frame_num_le(frame_number,
++						     dwc_frame_num_inc
++						     (qh->start_split_frame,
++						      1))) {
++					/*
++					 * Allow one frame to elapse after start
++					 * split microframe before scheduling
++					 * complete split, but DONT if we are
++					 * doing the next start split in the
++					 * same frame for an ISOC out.
++					 */
++					if ((qh->ep_type != UE_ISOCHRONOUS) ||
++					    (qh->ep_is_in != 0)) {
++						qh->sched_frame =
++						    dwc_frame_num_inc(qh->sched_frame, 1);
++					}
++				}
++			} else {
++				qh->sched_frame =
++				    dwc_frame_num_inc(qh->start_split_frame,
++						      qh->interval);
++				if (dwc_frame_num_le
++				    (qh->sched_frame, frame_number)) {
++					qh->sched_frame = frame_number;
++				}
++				qh->sched_frame |= 0x7;
++				qh->start_split_frame = qh->sched_frame;
++			}
++		} else {
++			qh->sched_frame =
++			    dwc_frame_num_inc(qh->sched_frame, qh->interval);
++			if (dwc_frame_num_le(qh->sched_frame, frame_number)) {
++				qh->sched_frame = frame_number;
++			}
++		}
++
++		if (DWC_CIRCLEQ_EMPTY(&qh->qtd_list)) {
++			dwc_otg_hcd_qh_remove(hcd, qh);
++		} else {
++			/*
++			 * Remove from periodic_sched_queued and move to
++			 * appropriate queue.
++			 */
++			if ((microframe_schedule && dwc_frame_num_le(qh->sched_frame, frame_number)) ||
++			(!microframe_schedule && qh->sched_frame == frame_number)) {
++				DWC_LIST_MOVE_HEAD(&hcd->periodic_sched_ready,
++						   &qh->qh_list_entry);
++			} else {
++				if (fiq_enable &&
++				    !dwc_frame_num_le(hcd->fiq_state->next_sched_frame, qh->sched_frame)) {
++					hcd->fiq_state->next_sched_frame = qh->sched_frame;
++				}
++
++				DWC_LIST_MOVE_HEAD
++				    (&hcd->periodic_sched_inactive,
++				     &qh->qh_list_entry);
++			}
++		}
++	}
++}
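++
++/*
++ * For illustration: a non-split periodic QH with qh->interval = 8 whose next
++ * sched_frame (old frame + 8) has already passed is bumped to the current
++ * frame_number and therefore moved straight to periodic_sched_ready; a QH
++ * whose scheduled frame is still in the future is parked in
++ * periodic_sched_inactive until that frame comes around.
++ */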
++
++/**
++ * This function allocates and initializes a QTD.
++ *
++ * @param urb The URB to create a QTD from. Each URB-QTD pair ends up
++ * 	      pointing to the other, so the pairing is one-to-one.
++ * @param atomic_alloc Flag to do atomic alloc if needed
++ *
++ * @return Returns pointer to the newly allocated QTD, or NULL on error. */
++dwc_otg_qtd_t *dwc_otg_hcd_qtd_create(dwc_otg_hcd_urb_t * urb, int atomic_alloc)
++{
++	dwc_otg_qtd_t *qtd;
++
++	qtd = dwc_otg_hcd_qtd_alloc(atomic_alloc);
++	if (qtd == NULL) {
++		return NULL;
++	}
++
++	dwc_otg_hcd_qtd_init(qtd, urb);
++	return qtd;
++}
++
++/**
++ * Initializes a QTD structure.
++ *
++ * @param qtd The QTD to initialize.
++ * @param urb The URB to use for initialization.  */
++void dwc_otg_hcd_qtd_init(dwc_otg_qtd_t * qtd, dwc_otg_hcd_urb_t * urb)
++{
++	dwc_memset(qtd, 0, sizeof(dwc_otg_qtd_t));
++	qtd->urb = urb;
++	if (dwc_otg_hcd_get_pipe_type(&urb->pipe_info) == UE_CONTROL) {
++		/*
++		 * The only time the QTD data toggle is used is on the data
++		 * phase of control transfers. This phase always starts with
++		 * DATA1.
++		 */
++		qtd->data_toggle = DWC_OTG_HC_PID_DATA1;
++		qtd->control_phase = DWC_OTG_CONTROL_SETUP;
++	}
++
++	/* start split */
++	qtd->complete_split = 0;
++	qtd->isoc_split_pos = DWC_HCSPLIT_XACTPOS_ALL;
++	qtd->isoc_split_offset = 0;
++	qtd->in_process = 0;
++
++	/* Store the QTD pointer in the URB so the QTD can be referenced later. */
++	urb->qtd = qtd;
++	return;
++}
++
++/**
++ * This function adds a QTD to the QTD-list of a QH.  It will find the correct
++ * QH to place the QTD into.  If it does not find a QH, then it will create a
++ * new QH. If the QH to which the QTD is added is not currently scheduled, it
++ * is placed into the proper schedule based on its EP type.
++ * HCD lock must be held and interrupts must be disabled on entry
++ *
++ * @param[in] qtd The QTD to add
++ * @param[in] hcd The DWC HCD structure
++ * @param[out] qh out parameter to return queue head
++ * @param atomic_alloc Flag to do atomic alloc if needed
++ *
++ * @return 0 if successful, negative error code otherwise.
++ */
++int dwc_otg_hcd_qtd_add(dwc_otg_qtd_t * qtd,
++			dwc_otg_hcd_t * hcd, dwc_otg_qh_t ** qh, int atomic_alloc)
++{
++	int retval = 0;
++	dwc_otg_hcd_urb_t *urb = qtd->urb;
++
++	/*
++	 * Get the QH which holds the QTD-list to insert into. Create the QH
++	 * if it doesn't exist.
++	 */
++	if (*qh == NULL) {
++		*qh = dwc_otg_hcd_qh_create(hcd, urb, atomic_alloc);
++		if (*qh == NULL) {
++			retval = -DWC_E_NO_MEMORY;
++			goto done;
++		} else {
++			if (fiq_enable)
++				hcd->fiq_state->kick_np_queues = 1;
++		}
++	}
++	retval = dwc_otg_hcd_qh_add(hcd, *qh);
++	if (retval == 0) {
++		DWC_CIRCLEQ_INSERT_TAIL(&((*qh)->qtd_list), qtd,
++					qtd_list_entry);
++		qtd->qh = *qh;
++	}
++done:
++
++	return retval;
++}
++
++#endif /* DWC_DEVICE_ONLY */
+--- /dev/null
++++ b/drivers/usb/host/dwc_otg/dwc_otg_os_dep.h
+@@ -0,0 +1,188 @@
++#ifndef _DWC_OS_DEP_H_
++#define _DWC_OS_DEP_H_
++
++/**
++ * @file
++ *
++ * This file contains OS dependent structures.
++ *
++ */
++
++#include <linux/kernel.h>
++#include <linux/module.h>
++#include <linux/moduleparam.h>
++#include <linux/init.h>
++#include <linux/device.h>
++#include <linux/errno.h>
++#include <linux/types.h>
++#include <linux/slab.h>
++#include <linux/list.h>
++#include <linux/interrupt.h>
++#include <linux/ctype.h>
++#include <linux/string.h>
++#include <linux/dma-mapping.h>
++#include <linux/jiffies.h>
++#include <linux/delay.h>
++#include <linux/timer.h>
++#include <linux/workqueue.h>
++#include <linux/stat.h>
++#include <linux/pci.h>
++
++#include <linux/version.h>
++
++#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,20)
++# include <linux/irq.h>
++#endif
++
++#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,21)
++# include <linux/usb/ch9.h>
++#else
++# include <linux/usb_ch9.h>
++#endif
++
++#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,24)
++# include <linux/usb/gadget.h>
++#else
++# include <linux/usb_gadget.h>
++#endif
++
++#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,20)
++# include <asm/irq.h>
++#endif
++
++#ifdef PCI_INTERFACE
++# include <asm/io.h>
++#endif
++
++#ifdef LM_INTERFACE
++# include <asm/unaligned.h>
++# include <asm/sizes.h>
++# include <asm/param.h>
++# include <asm/io.h>
++# if (LINUX_VERSION_CODE < KERNEL_VERSION(2,6,30))
++#  include <asm/arch/hardware.h>
++#  include <asm/arch/lm.h>
++#  include <asm/arch/irqs.h>
++#  include <asm/arch/regs-irq.h>
++# else
++/* in 2.6.31, at least, we seem to have lost the generic LM infrastructure -
++   here we assume that the machine architecture provides definitions
++   in its own header
++*/
++#  include <mach/lm.h>
++#  include <mach/hardware.h>
++# endif
++#endif
++
++#ifdef PLATFORM_INTERFACE
++#include <linux/platform_device.h>
++#include <asm/mach/map.h>
++#endif
++
++/** The OS page size */
++#define DWC_OS_PAGE_SIZE	PAGE_SIZE
++
++#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,14)
++typedef int gfp_t;
++#endif
++
++#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,18)
++# define IRQF_SHARED SA_SHIRQ
++#endif
++
++typedef struct os_dependent {
++	/** Base address returned from ioremap() */
++	void *base;
++
++	/** Register offset for Diagnostic API */
++	uint32_t reg_offset;
++
++	/** Base address for MPHI peripheral */
++	void *mphi_base;
++
++#ifdef LM_INTERFACE
++	struct lm_device *lmdev;
++#elif  defined(PCI_INTERFACE)
++	struct pci_dev *pcidev;
++
++	/** Start address of a PCI region */
++	resource_size_t rsrc_start;
++
++	/** Length address of a PCI region */
++	resource_size_t rsrc_len;
++#elif  defined(PLATFORM_INTERFACE)
++	struct platform_device *platformdev;
++#endif
++
++} os_dependent_t;
++
++#ifdef __cplusplus
++}
++#endif
++
++
++
++/* Type for our device on the chosen bus */
++#if   defined(LM_INTERFACE)
++typedef struct lm_device       dwc_bus_dev_t;
++#elif defined(PCI_INTERFACE)
++typedef struct pci_dev         dwc_bus_dev_t;
++#elif defined(PLATFORM_INTERFACE)
++typedef struct platform_device dwc_bus_dev_t;
++#endif
++
++/* Helper macro to retrieve drvdata from the device on the chosen bus */
++#if    defined(LM_INTERFACE)
++#define DWC_OTG_BUSDRVDATA(_dev) lm_get_drvdata(_dev)
++#elif  defined(PCI_INTERFACE)
++#define DWC_OTG_BUSDRVDATA(_dev) pci_get_drvdata(_dev)
++#elif  defined(PLATFORM_INTERFACE)
++#define DWC_OTG_BUSDRVDATA(_dev) platform_get_drvdata(_dev)
++#endif
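++
++/*
++ * Usage note (for illustration, assuming the brcm2708/brcm2709 builds define
++ * PLATFORM_INTERFACE): dwc_bus_dev_t is then a struct platform_device, so a
++ * call such as DWC_OTG_BUSDRVDATA(_dev) in hcd_remove() expands to
++ * platform_get_drvdata(_dev).
++ */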
++
++/**
++ * Helper macro returning the otg_device structure of a given struct device
++ *
++ * c.f. static dwc_otg_device_t *dwc_otg_drvdev(struct device *_dev)
++ */
++#ifdef LM_INTERFACE
++#define DWC_OTG_GETDRVDEV(_var, _dev) do { \
++                struct lm_device *lm_dev = \
++                        container_of(_dev, struct lm_device, dev); \
++                _var = lm_get_drvdata(lm_dev); \
++        } while (0)
++
++#elif defined(PCI_INTERFACE)
++#define DWC_OTG_GETDRVDEV(_var, _dev) do { \
++                _var = dev_get_drvdata(_dev); \
++        } while (0)
++
++#elif defined(PLATFORM_INTERFACE)
++#define DWC_OTG_GETDRVDEV(_var, _dev) do { \
++                struct platform_device *platform_dev = \
++                        container_of(_dev, struct platform_device, dev); \
++                _var = platform_get_drvdata(platform_dev); \
++        } while (0)
++#endif
++
++
++/**
++ * Helper macro returning the struct dev of the given struct os_dependent
++ *
++ * c.f. static struct device *dwc_otg_getdev(struct os_dependent *osdep)
++ */
++#ifdef LM_INTERFACE
++#define DWC_OTG_OS_GETDEV(_osdep) \
++        ((_osdep).lmdev == NULL? NULL: &(_osdep).lmdev->dev)
++#elif defined(PCI_INTERFACE)
++#define DWC_OTG_OS_GETDEV(_osdep) \
++        ((_osdep).pcidev == NULL? NULL: &(_osdep).pcidev->dev)
++#elif defined(PLATFORM_INTERFACE)
++#define DWC_OTG_OS_GETDEV(_osdep) \
++        ((_osdep).platformdev == NULL? NULL: &(_osdep).platformdev->dev)
++#endif
++
++
++
++
++#endif /* _DWC_OS_DEP_H_ */
+--- /dev/null
++++ b/drivers/usb/host/dwc_otg/dwc_otg_pcd.c
+@@ -0,0 +1,2712 @@
++/* ==========================================================================
++ * $File: //dwh/usb_iip/dev/software/otg/linux/drivers/dwc_otg_pcd.c $
++ * $Revision: #101 $
++ * $Date: 2012/08/10 $
++ * $Change: 2047372 $
++ *
++ * Synopsys HS OTG Linux Software Driver and documentation (hereinafter,
++ * "Software") is an Unsupported proprietary work of Synopsys, Inc. unless
++ * otherwise expressly agreed to in writing between Synopsys and you.
++ *
++ * The Software IS NOT an item of Licensed Software or Licensed Product under
++ * any End User Software License Agreement or Agreement for Licensed Product
++ * with Synopsys or any supplement thereto. You are permitted to use and
++ * redistribute this Software in source and binary forms, with or without
++ * modification, provided that redistributions of source code must retain this
++ * notice. You may not view, use, disclose, copy or distribute this file or
++ * any information contained herein except pursuant to this license grant from
++ * Synopsys. If you do not agree with this notice, including the disclaimer
++ * below, then you are not authorized to use the Software.
++ *
++ * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS" BASIS
++ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
++ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
++ * ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS BE LIABLE FOR ANY DIRECT,
++ * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
++ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
++ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
++ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
++ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
++ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
++ * DAMAGE.
++ * ========================================================================== */
++#ifndef DWC_HOST_ONLY
++
++/** @file
++ * This file implements PCD Core. All code in this file is portable and doesn't
++ * use any OS specific functions.
++ * PCD Core provides Interface, defined in <code><dwc_otg_pcd_if.h></code>
++ * header file, which can be used to implement OS specific PCD interface.
++ *
++ * An important function of the PCD is managing interrupts generated
++ * by the DWC_otg controller. The implementation of the DWC_otg device
++ * mode interrupt service routines is in dwc_otg_pcd_intr.c.
++ *
++ * @todo Add Device Mode test modes (Test J mode, Test K mode, etc).
++ * @todo Does it work when the request size is greater than DEPTSIZ
++ * transfer size
++ *
++ */
++
++#include "dwc_otg_pcd.h"
++
++#ifdef DWC_UTE_CFI
++#include "dwc_otg_cfi.h"
++
++extern int init_cfi(cfiobject_t * cfiobj);
++#endif
++
++/**
++ * Choose endpoint from ep arrays using usb_ep structure.
++ */
++static dwc_otg_pcd_ep_t *get_ep_from_handle(dwc_otg_pcd_t * pcd, void *handle)
++{
++	int i;
++	if (pcd->ep0.priv == handle) {
++		return &pcd->ep0;
++	}
++	for (i = 0; i < MAX_EPS_CHANNELS - 1; i++) {
++		if (pcd->in_ep[i].priv == handle)
++			return &pcd->in_ep[i];
++		if (pcd->out_ep[i].priv == handle)
++			return &pcd->out_ep[i];
++	}
++
++	return NULL;
++}
++
++/**
++ * This function completes a request. It calls the request callback.
++ */
++void dwc_otg_request_done(dwc_otg_pcd_ep_t * ep, dwc_otg_pcd_request_t * req,
++			  int32_t status)
++{
++	unsigned stopped = ep->stopped;
++
++	DWC_DEBUGPL(DBG_PCDV, "%s(ep %p req %p)\n", __func__, ep, req);
++	DWC_CIRCLEQ_REMOVE_INIT(&ep->queue, req, queue_entry);
++
++	/* don't modify queue heads during completion callback */
++	ep->stopped = 1;
++	/* spin_unlock/spin_lock now done in fops->complete() */
++	ep->pcd->fops->complete(ep->pcd, ep->priv, req->priv, status,
++				req->actual);
++
++	if (ep->pcd->request_pending > 0) {
++		--ep->pcd->request_pending;
++	}
++
++	ep->stopped = stopped;
++	DWC_FREE(req);
++}
++
++/**
++ * This function terminates all the requests in the EP request queue.
++ */
++void dwc_otg_request_nuke(dwc_otg_pcd_ep_t * ep)
++{
++	dwc_otg_pcd_request_t *req;
++
++	ep->stopped = 1;
++
++	/* called with irqs blocked?? */
++	while (!DWC_CIRCLEQ_EMPTY(&ep->queue)) {
++		req = DWC_CIRCLEQ_FIRST(&ep->queue);
++		dwc_otg_request_done(ep, req, -DWC_E_SHUTDOWN);
++	}
++}
++
++void dwc_otg_pcd_start(dwc_otg_pcd_t * pcd,
++		       const struct dwc_otg_pcd_function_ops *fops)
++{
++	pcd->fops = fops;
++}
++
++/**
++ * PCD Callback function for initializing the PCD when switching to
++ * device mode.
++ *
++ * @param p void pointer to the <code>dwc_otg_pcd_t</code>
++ */
++static int32_t dwc_otg_pcd_start_cb(void *p)
++{
++	dwc_otg_pcd_t *pcd = (dwc_otg_pcd_t *) p;
++	dwc_otg_core_if_t *core_if = GET_CORE_IF(pcd);
++
++	/*
++	 * Initialize the core for device mode.
++	 */
++	if (dwc_otg_is_device_mode(core_if)) {
++		dwc_otg_core_dev_init(core_if);
++		/* Set core_if's lock pointer to the pcd->lock */
++		core_if->lock = pcd->lock;
++	}
++	return 1;
++}
++
++/** CFI-specific buffer allocation function for EP */
++#ifdef DWC_UTE_CFI
++uint8_t *cfiw_ep_alloc_buffer(dwc_otg_pcd_t * pcd, void *pep, dwc_dma_t * addr,
++			      size_t buflen, int flags)
++{
++	dwc_otg_pcd_ep_t *ep;
++	ep = get_ep_from_handle(pcd, pep);
++	if (!ep) {
++		DWC_WARN("bad ep\n");
++		return -DWC_E_INVALID;
++	}
++
++	return pcd->cfi->ops.ep_alloc_buf(pcd->cfi, pcd, ep, addr, buflen,
++					  flags);
++}
++#else
++uint8_t *cfiw_ep_alloc_buffer(dwc_otg_pcd_t * pcd, void *pep, dwc_dma_t * addr,
++			      size_t buflen, int flags);
++#endif
++
++/**
++ * PCD Callback function for notifying the PCD when resuming from
++ * suspend.
++ *
++ * @param p void pointer to the <code>dwc_otg_pcd_t</code>
++ */
++static int32_t dwc_otg_pcd_resume_cb(void *p)
++{
++	dwc_otg_pcd_t *pcd = (dwc_otg_pcd_t *) p;
++
++	if (pcd->fops->resume) {
++		pcd->fops->resume(pcd);
++	}
++
++	/* Stop the SRP timeout timer. */
++	if ((GET_CORE_IF(pcd)->core_params->phy_type != DWC_PHY_TYPE_PARAM_FS)
++	    || (!GET_CORE_IF(pcd)->core_params->i2c_enable)) {
++		if (GET_CORE_IF(pcd)->srp_timer_started) {
++			GET_CORE_IF(pcd)->srp_timer_started = 0;
++			DWC_TIMER_CANCEL(GET_CORE_IF(pcd)->srp_timer);
++		}
++	}
++	return 1;
++}
++
++/**
++ * PCD Callback function for notifying the PCD device is suspended.
++ *
++ * @param p void pointer to the <code>dwc_otg_pcd_t</code>
++ */
++static int32_t dwc_otg_pcd_suspend_cb(void *p)
++{
++	dwc_otg_pcd_t *pcd = (dwc_otg_pcd_t *) p;
++
++	if (pcd->fops->suspend) {
++		DWC_SPINUNLOCK(pcd->lock);
++		pcd->fops->suspend(pcd);
++		DWC_SPINLOCK(pcd->lock);
++	}
++
++	return 1;
++}
++
++/**
++ * PCD Callback function for stopping the PCD when switching to Host
++ * mode.
++ *
++ * @param p void pointer to the <code>dwc_otg_pcd_t</code>
++ */
++static int32_t dwc_otg_pcd_stop_cb(void *p)
++{
++	dwc_otg_pcd_t *pcd = (dwc_otg_pcd_t *) p;
++	extern void dwc_otg_pcd_stop(dwc_otg_pcd_t * _pcd);
++
++	dwc_otg_pcd_stop(pcd);
++	return 1;
++}
++
++/**
++ * PCD Callback structure for handling mode switching.
++ */
++static dwc_otg_cil_callbacks_t pcd_callbacks = {
++	.start = dwc_otg_pcd_start_cb,
++	.stop = dwc_otg_pcd_stop_cb,
++	.suspend = dwc_otg_pcd_suspend_cb,
++	.resume_wakeup = dwc_otg_pcd_resume_cb,
++	.p = 0,			/* Set at registration */
++};
++
++/**
++ * This function allocates a DMA Descriptor chain for the Endpoint
++ * buffer to be used for a transfer to/from the specified endpoint.
++ */
++dwc_otg_dev_dma_desc_t *dwc_otg_ep_alloc_desc_chain(dwc_dma_t * dma_desc_addr,
++						    uint32_t count)
++{
++	return DWC_DMA_ALLOC_ATOMIC(count * sizeof(dwc_otg_dev_dma_desc_t),
++							dma_desc_addr);
++}
++
++/**
++ * This function frees a DMA Descriptor chain that was allocated by ep_alloc_desc.
++ */
++void dwc_otg_ep_free_desc_chain(dwc_otg_dev_dma_desc_t * desc_addr,
++				uint32_t dma_desc_addr, uint32_t count)
++{
++	DWC_DMA_FREE(count * sizeof(dwc_otg_dev_dma_desc_t), desc_addr,
++		     dma_desc_addr);
++}
++
++#ifdef DWC_EN_ISOC
++
++/**
++ * This function initializes a descriptor chain for Isochronous transfer
++ *
++ * @param core_if Programming view of DWC_otg controller.
++ * @param dwc_ep The EP to start the transfer on.
++ *
++ */
++void dwc_otg_iso_ep_start_ddma_transfer(dwc_otg_core_if_t * core_if,
++					dwc_ep_t * dwc_ep)
++{
++
++	dsts_data_t dsts = {.d32 = 0 };
++	depctl_data_t depctl = {.d32 = 0 };
++	volatile uint32_t *addr;
++	int i, j;
++	uint32_t len;
++
++	if (dwc_ep->is_in)
++		dwc_ep->desc_cnt = dwc_ep->buf_proc_intrvl / dwc_ep->bInterval;
++	else
++		dwc_ep->desc_cnt =
++		    dwc_ep->buf_proc_intrvl * dwc_ep->pkt_per_frm /
++		    dwc_ep->bInterval;
++
++	/** Allocate descriptors for double buffering */
++	dwc_ep->iso_desc_addr =
++	    dwc_otg_ep_alloc_desc_chain(&dwc_ep->iso_dma_desc_addr,
++					dwc_ep->desc_cnt * 2);
++	if (!dwc_ep->iso_desc_addr) {
++		DWC_WARN("%s, can't allocate DMA descriptor chain\n", __func__);
++		return;
++	}
++
++	dsts.d32 = DWC_READ_REG32(&core_if->dev_if->dev_global_regs->dsts);
++
++	/** ISO OUT EP */
++	if (dwc_ep->is_in == 0) {
++		dev_dma_desc_sts_t sts = {.d32 = 0 };
++		dwc_otg_dev_dma_desc_t *dma_desc = dwc_ep->iso_desc_addr;
++		dma_addr_t dma_ad;
++		uint32_t data_per_desc;
++		dwc_otg_dev_out_ep_regs_t *out_regs =
++		    core_if->dev_if->out_ep_regs[dwc_ep->num];
++		int offset;
++
++		addr = &core_if->dev_if->out_ep_regs[dwc_ep->num]->doepctl;
++		dma_ad = (dma_addr_t) DWC_READ_REG32(&(out_regs->doepdma));
++
++		/** Buffer 0 descriptors setup */
++		dma_ad = dwc_ep->dma_addr0;
++
++		sts.b_iso_out.bs = BS_HOST_READY;
++		sts.b_iso_out.rxsts = 0;
++		sts.b_iso_out.l = 0;
++		sts.b_iso_out.sp = 0;
++		sts.b_iso_out.ioc = 0;
++		sts.b_iso_out.pid = 0;
++		sts.b_iso_out.framenum = 0;
++
++		offset = 0;
++		for (i = 0; i < dwc_ep->desc_cnt - dwc_ep->pkt_per_frm;
++		     i += dwc_ep->pkt_per_frm) {
++
++			for (j = 0; j < dwc_ep->pkt_per_frm; ++j) {
++				uint32_t len = (j + 1) * dwc_ep->maxpacket;
++				if (len > dwc_ep->data_per_frame)
++					data_per_desc =
++					    dwc_ep->data_per_frame -
++					    j * dwc_ep->maxpacket;
++				else
++					data_per_desc = dwc_ep->maxpacket;
++				len = data_per_desc % 4;
++				if (len)
++					data_per_desc += 4 - len;
++
++				sts.b_iso_out.rxbytes = data_per_desc;
++				dma_desc->buf = dma_ad;
++				dma_desc->status.d32 = sts.d32;
++
++				offset += data_per_desc;
++				dma_desc++;
++				dma_ad += data_per_desc;
++			}
++		}
++
++		for (j = 0; j < dwc_ep->pkt_per_frm - 1; ++j) {
++			uint32_t len = (j + 1) * dwc_ep->maxpacket;
++			if (len > dwc_ep->data_per_frame)
++				data_per_desc =
++				    dwc_ep->data_per_frame -
++				    j * dwc_ep->maxpacket;
++			else
++				data_per_desc = dwc_ep->maxpacket;
++			len = data_per_desc % 4;
++			if (len)
++				data_per_desc += 4 - len;
++			sts.b_iso_out.rxbytes = data_per_desc;
++			dma_desc->buf = dma_ad;
++			dma_desc->status.d32 = sts.d32;
++
++			offset += data_per_desc;
++			dma_desc++;
++			dma_ad += data_per_desc;
++		}
++
++		sts.b_iso_out.ioc = 1;
++		len = (j + 1) * dwc_ep->maxpacket;
++		if (len > dwc_ep->data_per_frame)
++			data_per_desc =
++			    dwc_ep->data_per_frame - j * dwc_ep->maxpacket;
++		else
++			data_per_desc = dwc_ep->maxpacket;
++		len = data_per_desc % 4;
++		if (len)
++			data_per_desc += 4 - len;
++		sts.b_iso_out.rxbytes = data_per_desc;
++
++		dma_desc->buf = dma_ad;
++		dma_desc->status.d32 = sts.d32;
++		dma_desc++;
++
++		/** Buffer 1 descriptors setup */
++		sts.b_iso_out.ioc = 0;
++		dma_ad = dwc_ep->dma_addr1;
++
++		offset = 0;
++		for (i = 0; i < dwc_ep->desc_cnt - dwc_ep->pkt_per_frm;
++		     i += dwc_ep->pkt_per_frm) {
++			for (j = 0; j < dwc_ep->pkt_per_frm; ++j) {
++				uint32_t len = (j + 1) * dwc_ep->maxpacket;
++				if (len > dwc_ep->data_per_frame)
++					data_per_desc =
++					    dwc_ep->data_per_frame -
++					    j * dwc_ep->maxpacket;
++				else
++					data_per_desc = dwc_ep->maxpacket;
++				len = data_per_desc % 4;
++				if (len)
++					data_per_desc += 4 - len;
++
++				data_per_desc =
++				    sts.b_iso_out.rxbytes = data_per_desc;
++				dma_desc->buf = dma_ad;
++				dma_desc->status.d32 = sts.d32;
++
++				offset += data_per_desc;
++				dma_desc++;
++				dma_ad += data_per_desc;
++			}
++		}
++		for (j = 0; j < dwc_ep->pkt_per_frm - 1; ++j) {
++			data_per_desc =
++			    ((j + 1) * dwc_ep->maxpacket >
++			     dwc_ep->data_per_frame) ? dwc_ep->data_per_frame -
++			    j * dwc_ep->maxpacket : dwc_ep->maxpacket;
++			data_per_desc +=
++			    (data_per_desc % 4) ? (4 - data_per_desc % 4) : 0;
++			sts.b_iso_out.rxbytes = data_per_desc;
++			dma_desc->buf = dma_ad;
++			dma_desc->status.d32 = sts.d32;
++
++			offset += data_per_desc;
++			dma_desc++;
++			dma_ad += data_per_desc;
++		}
++
++		sts.b_iso_out.ioc = 1;
++		sts.b_iso_out.l = 1;
++		data_per_desc =
++		    ((j + 1) * dwc_ep->maxpacket >
++		     dwc_ep->data_per_frame) ? dwc_ep->data_per_frame -
++		    j * dwc_ep->maxpacket : dwc_ep->maxpacket;
++		data_per_desc +=
++		    (data_per_desc % 4) ? (4 - data_per_desc % 4) : 0;
++		sts.b_iso_out.rxbytes = data_per_desc;
++
++		dma_desc->buf = dma_ad;
++		dma_desc->status.d32 = sts.d32;
++
++		dwc_ep->next_frame = 0;
++
++		/** Write dma_ad into DOEPDMA register */
++		DWC_WRITE_REG32(&(out_regs->doepdma),
++				(uint32_t) dwc_ep->iso_dma_desc_addr);
++
++	}
++	/** ISO IN EP */
++	else {
++		dev_dma_desc_sts_t sts = {.d32 = 0 };
++		dwc_otg_dev_dma_desc_t *dma_desc = dwc_ep->iso_desc_addr;
++		dma_addr_t dma_ad;
++		dwc_otg_dev_in_ep_regs_t *in_regs =
++		    core_if->dev_if->in_ep_regs[dwc_ep->num];
++		unsigned int frmnumber;
++		fifosize_data_t txfifosize, rxfifosize;
++
++		txfifosize.d32 =
++		    DWC_READ_REG32(&core_if->dev_if->in_ep_regs[dwc_ep->num]->
++				   dtxfsts);
++		rxfifosize.d32 =
++		    DWC_READ_REG32(&core_if->core_global_regs->grxfsiz);
++
++		addr = &core_if->dev_if->in_ep_regs[dwc_ep->num]->diepctl;
++
++		dma_ad = dwc_ep->dma_addr0;
++
++		dsts.d32 =
++		    DWC_READ_REG32(&core_if->dev_if->dev_global_regs->dsts);
++
++		sts.b_iso_in.bs = BS_HOST_READY;
++		sts.b_iso_in.txsts = 0;
++		sts.b_iso_in.sp =
++		    (dwc_ep->data_per_frame % dwc_ep->maxpacket) ? 1 : 0;
++		sts.b_iso_in.ioc = 0;
++		sts.b_iso_in.pid = dwc_ep->pkt_per_frm;
++
++		frmnumber = dwc_ep->next_frame;
++
++		sts.b_iso_in.framenum = frmnumber;
++		sts.b_iso_in.txbytes = dwc_ep->data_per_frame;
++		sts.b_iso_in.l = 0;
++
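++		/*
++		 * For the IN direction each descriptor covers one frame of
++		 * data_per_frame bytes and framenum advances by bInterval per
++		 * descriptor; IOC is set on the last descriptor of each
++		 * buffer and L only on the very last one.
++		 */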
++		/** Buffer 0 descriptors setup */
++		for (i = 0; i < dwc_ep->desc_cnt - 1; i++) {
++			dma_desc->buf = dma_ad;
++			dma_desc->status.d32 = sts.d32;
++			dma_desc++;
++
++			dma_ad += dwc_ep->data_per_frame;
++			sts.b_iso_in.framenum += dwc_ep->bInterval;
++		}
++
++		sts.b_iso_in.ioc = 1;
++		dma_desc->buf = dma_ad;
++		dma_desc->status.d32 = sts.d32;
++		++dma_desc;
++
++		/** Buffer 1 descriptors setup */
++		sts.b_iso_in.ioc = 0;
++		dma_ad = dwc_ep->dma_addr1;
++
++		for (i = 0; i < dwc_ep->desc_cnt - dwc_ep->pkt_per_frm;
++		     i += dwc_ep->pkt_per_frm) {
++			dma_desc->buf = dma_ad;
++			dma_desc->status.d32 = sts.d32;
++			dma_desc++;
++
++			dma_ad += dwc_ep->data_per_frame;
++			sts.b_iso_in.framenum += dwc_ep->bInterval;
++
++			sts.b_iso_in.ioc = 0;
++		}
++		sts.b_iso_in.ioc = 1;
++		sts.b_iso_in.l = 1;
++
++		dma_desc->buf = dma_ad;
++		dma_desc->status.d32 = sts.d32;
++
++		dwc_ep->next_frame = sts.b_iso_in.framenum + dwc_ep->bInterval;
++
++		/** Write dma_ad into diepdma register */
++		DWC_WRITE_REG32(&(in_regs->diepdma),
++				(uint32_t) dwc_ep->iso_dma_desc_addr);
++	}
++	/** Enable endpoint, clear nak  */
++	depctl.d32 = 0;
++	depctl.b.epena = 1;
++	depctl.b.usbactep = 1;
++	depctl.b.cnak = 1;
++
++	DWC_MODIFY_REG32(addr, depctl.d32, depctl.d32);
++	depctl.d32 = DWC_READ_REG32(addr);
++}
++
++/**
++ * This function starts a buffer-DMA Isochronous transfer on an EP
++ * (used when descriptor DMA is not enabled)
++ *
++ * @param core_if Programming view of DWC_otg controller.
++ * @param ep The EP to start the transfer on.
++ *
++ */
++void dwc_otg_iso_ep_start_buf_transfer(dwc_otg_core_if_t * core_if,
++				       dwc_ep_t * ep)
++{
++	depctl_data_t depctl = {.d32 = 0 };
++	volatile uint32_t *addr;
++
++	if (ep->is_in) {
++		addr = &core_if->dev_if->in_ep_regs[ep->num]->diepctl;
++	} else {
++		addr = &core_if->dev_if->out_ep_regs[ep->num]->doepctl;
++	}
++
++	if (core_if->dma_enable == 0 || core_if->dma_desc_enable != 0) {
++		return;
++	} else {
++		deptsiz_data_t deptsiz = {.d32 = 0 };
++
++		ep->xfer_len =
++		    ep->data_per_frame * ep->buf_proc_intrvl / ep->bInterval;
++		ep->pkt_cnt =
++		    (ep->xfer_len - 1 + ep->maxpacket) / ep->maxpacket;
++		ep->xfer_count = 0;
++		ep->xfer_buff =
++		    (ep->proc_buf_num) ? ep->xfer_buff1 : ep->xfer_buff0;
++		ep->dma_addr =
++		    (ep->proc_buf_num) ? ep->dma_addr1 : ep->dma_addr0;
++
++		if (ep->is_in) {
++			/* Program the transfer size and packet count
++			 * as follows:
++			 *   xfersize = N * maxpacket + short_packet
++			 *   pktcnt   = N + (short_packet exists ? 1 : 0)
++			 */
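++			/*
++			 * e.g. with maxpacket = 1024 and
++			 * xfer_len = 3 * 1024 + 100, the round-up division
++			 * below yields pktcnt = 4.
++			 */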
++			deptsiz.b.mc = ep->pkt_per_frm;
++			deptsiz.b.xfersize = ep->xfer_len;
++			deptsiz.b.pktcnt =
++			    (ep->xfer_len - 1 + ep->maxpacket) / ep->maxpacket;
++			DWC_WRITE_REG32(&core_if->dev_if->in_ep_regs[ep->num]->
++					dieptsiz, deptsiz.d32);
++
++			/* Write the DMA register */
++			DWC_WRITE_REG32(&
++					(core_if->dev_if->in_ep_regs[ep->num]->
++					 diepdma), (uint32_t) ep->dma_addr);
++
++		} else {
++			deptsiz.b.pktcnt =
++			    (ep->xfer_len + (ep->maxpacket - 1)) /
++			    ep->maxpacket;
++			deptsiz.b.xfersize = deptsiz.b.pktcnt * ep->maxpacket;
++
++			DWC_WRITE_REG32(&core_if->dev_if->out_ep_regs[ep->num]->
++					doeptsiz, deptsiz.d32);
++
++			/* Write the DMA register */
++			DWC_WRITE_REG32(&
++					(core_if->dev_if->out_ep_regs[ep->num]->
++					 doepdma), (uint32_t) ep->dma_addr);
++
++		}
++		/** Enable endpoint, clear nak  */
++		depctl.d32 = 0;
++		depctl.b.epena = 1;
++		depctl.b.cnak = 1;
++
++		DWC_MODIFY_REG32(addr, depctl.d32, depctl.d32);
++	}
++}
++
++/**
++ * This function does the setup for a data transfer for an EP and
++ * starts the transfer. For an IN transfer, the packets will be
++ * loaded into the appropriate Tx FIFO in the ISR. For OUT transfers,
++ * the packets are unloaded from the Rx FIFO in the ISR.
++ *
++ * @param core_if Programming view of DWC_otg controller.
++ * @param ep The EP to start the transfer on.
++ */
++
++static void dwc_otg_iso_ep_start_transfer(dwc_otg_core_if_t * core_if,
++					  dwc_ep_t * ep)
++{
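++	/*
++	 * Dispatch on the DMA mode: descriptor DMA builds a DDMA chain via
++	 * dwc_otg_iso_ep_start_ddma_transfer(); buffer DMA with the PTI
++	 * enhancement uses dwc_otg_iso_ep_start_buf_transfer(); otherwise
++	 * the transfer is driven one frame at a time through
++	 * dwc_otg_iso_ep_start_frm_transfer().
++	 */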
++	if (core_if->dma_enable) {
++		if (core_if->dma_desc_enable) {
++			if (ep->is_in) {
++				ep->desc_cnt = ep->pkt_cnt / ep->pkt_per_frm;
++			} else {
++				ep->desc_cnt = ep->pkt_cnt;
++			}
++			dwc_otg_iso_ep_start_ddma_transfer(core_if, ep);
++		} else {
++			if (core_if->pti_enh_enable) {
++				dwc_otg_iso_ep_start_buf_transfer(core_if, ep);
++			} else {
++				ep->cur_pkt_addr =
++				    (ep->proc_buf_num) ? ep->xfer_buff1 : ep->
++				    xfer_buff0;
++				ep->cur_pkt_dma_addr =
++				    (ep->proc_buf_num) ? ep->dma_addr1 : ep->
++				    dma_addr0;
++				dwc_otg_iso_ep_start_frm_transfer(core_if, ep);
++			}
++		}
++	} else {
++		ep->cur_pkt_addr =
++		    (ep->proc_buf_num) ? ep->xfer_buff1 : ep->xfer_buff0;
++		ep->cur_pkt_dma_addr =
++		    (ep->proc_buf_num) ? ep->dma_addr1 : ep->dma_addr0;
++		dwc_otg_iso_ep_start_frm_transfer(core_if, ep);
++	}
++}
++
++/**
++ * This function stops transfer for an EP and
++ * resets the ep's variables.
++ *
++ * @param core_if Programming view of DWC_otg controller.
++ * @param ep The EP to start the transfer on.
++ */
++
++void dwc_otg_iso_ep_stop_transfer(dwc_otg_core_if_t * core_if, dwc_ep_t * ep)
++{
++	depctl_data_t depctl = {.d32 = 0 };
++	volatile uint32_t *addr;
++
++	if (ep->is_in == 1) {
++		addr = &core_if->dev_if->in_ep_regs[ep->num]->diepctl;
++	} else {
++		addr = &core_if->dev_if->out_ep_regs[ep->num]->doepctl;
++	}
++
++	/* disable the ep */
++	depctl.d32 = DWC_READ_REG32(addr);
++
++	depctl.b.epdis = 1;
++	depctl.b.snak = 1;
++
++	DWC_WRITE_REG32(addr, depctl.d32);
++
++	if (core_if->dma_desc_enable &&
++	    ep->iso_desc_addr && ep->iso_dma_desc_addr) {
++		dwc_otg_ep_free_desc_chain(ep->iso_desc_addr,
++					   ep->iso_dma_desc_addr,
++					   ep->desc_cnt * 2);
++	}
++
++	/* reset variables */
++	ep->dma_addr0 = 0;
++	ep->dma_addr1 = 0;
++	ep->xfer_buff0 = 0;
++	ep->xfer_buff1 = 0;
++	ep->data_per_frame = 0;
++	ep->data_pattern_frame = 0;
++	ep->sync_frame = 0;
++	ep->buf_proc_intrvl = 0;
++	ep->bInterval = 0;
++	ep->proc_buf_num = 0;
++	ep->pkt_per_frm = 0;
++	ep->desc_cnt = 0;
++	ep->iso_desc_addr = 0;
++	ep->iso_dma_desc_addr = 0;
++}
++
++int dwc_otg_pcd_iso_ep_start(dwc_otg_pcd_t * pcd, void *ep_handle,
++			     uint8_t * buf0, uint8_t * buf1, dwc_dma_t dma0,
++			     dwc_dma_t dma1, int sync_frame, int dp_frame,
++			     int data_per_frame, int start_frame,
++			     int buf_proc_intrvl, void *req_handle,
++			     int atomic_alloc)
++{
++	dwc_otg_pcd_ep_t *ep;
++	dwc_irqflags_t flags = 0;
++	dwc_ep_t *dwc_ep;
++	int32_t frm_data;
++	dsts_data_t dsts;
++	dwc_otg_core_if_t *core_if;
++
++	ep = get_ep_from_handle(pcd, ep_handle);
++
++	if (!ep || !ep->desc || ep->dwc_ep.num == 0) {
++		DWC_WARN("bad ep\n");
++		return -DWC_E_INVALID;
++	}
++
++	DWC_SPINLOCK_IRQSAVE(pcd->lock, &flags);
++	core_if = GET_CORE_IF(pcd);
++	dwc_ep = &ep->dwc_ep;
++
++	if (ep->iso_req_handle) {
++		DWC_WARN("ISO request in progress\n");
++	}
++
++	dwc_ep->dma_addr0 = dma0;
++	dwc_ep->dma_addr1 = dma1;
++
++	dwc_ep->xfer_buff0 = buf0;
++	dwc_ep->xfer_buff1 = buf1;
++
++	dwc_ep->data_per_frame = data_per_frame;
++
++	/** @todo - pattern data support is to be implemented in the future */
++	dwc_ep->data_pattern_frame = dp_frame;
++	dwc_ep->sync_frame = sync_frame;
++
++	dwc_ep->buf_proc_intrvl = buf_proc_intrvl;
++
++	dwc_ep->bInterval = 1 << (ep->desc->bInterval - 1);
++
++	dwc_ep->proc_buf_num = 0;
++
++	dwc_ep->pkt_per_frm = 0;
++	frm_data = ep->dwc_ep.data_per_frame;
++	while (frm_data > 0) {
++		dwc_ep->pkt_per_frm++;
++		frm_data -= ep->dwc_ep.maxpacket;
++	}
++
++	dsts.d32 = DWC_READ_REG32(&core_if->dev_if->dev_global_regs->dsts);
++
++	if (start_frame == -1) {
++		dwc_ep->next_frame = dsts.b.soffn + 1;
++		if (dwc_ep->bInterval != 1) {
++			dwc_ep->next_frame =
++			    dwc_ep->next_frame + (dwc_ep->bInterval - 1 -
++						  dwc_ep->next_frame %
++						  dwc_ep->bInterval);
++		}
++	} else {
++		dwc_ep->next_frame = start_frame;
++	}
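++	/*
++	 * The adjustment above leaves next_frame congruent to bInterval - 1
++	 * modulo bInterval; e.g. soffn = 10 with bInterval = 8 gives
++	 * next_frame = 11 + (7 - 3) = 15.
++	 */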
++
++	if (!core_if->pti_enh_enable) {
++		dwc_ep->pkt_cnt =
++		    dwc_ep->buf_proc_intrvl * dwc_ep->pkt_per_frm /
++		    dwc_ep->bInterval;
++	} else {
++		dwc_ep->pkt_cnt =
++		    (dwc_ep->data_per_frame *
++		     (dwc_ep->buf_proc_intrvl / dwc_ep->bInterval)
++		     - 1 + dwc_ep->maxpacket) / dwc_ep->maxpacket;
++	}
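++	/*
++	 * pkt_cnt is the number of packets in one buffer processing
++	 * interval: without PTI it is pkt_per_frm packets for each of the
++	 * buf_proc_intrvl / bInterval frames; with PTI it is the total byte
++	 * count of that interval divided by maxpacket, rounded up.
++	 */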
++
++	if (core_if->dma_desc_enable) {
++		dwc_ep->desc_cnt =
++		    dwc_ep->buf_proc_intrvl * dwc_ep->pkt_per_frm /
++		    dwc_ep->bInterval;
++	}
++
++	if (atomic_alloc) {
++		dwc_ep->pkt_info =
++		    DWC_ALLOC_ATOMIC(sizeof(iso_pkt_info_t) * dwc_ep->pkt_cnt);
++	} else {
++		dwc_ep->pkt_info =
++		    DWC_ALLOC(sizeof(iso_pkt_info_t) * dwc_ep->pkt_cnt);
++	}
++	if (!dwc_ep->pkt_info) {
++		DWC_SPINUNLOCK_IRQRESTORE(pcd->lock, flags);
++		return -DWC_E_NO_MEMORY;
++	}
++	if (core_if->pti_enh_enable) {
++		dwc_memset(dwc_ep->pkt_info, 0,
++			   sizeof(iso_pkt_info_t) * dwc_ep->pkt_cnt);
++	}
++
++	dwc_ep->cur_pkt = 0;
++	ep->iso_req_handle = req_handle;
++
++	DWC_SPINUNLOCK_IRQRESTORE(pcd->lock, flags);
++	dwc_otg_iso_ep_start_transfer(core_if, dwc_ep);
++	return 0;
++}
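++
++/*
++ * Illustrative call (hypothetical values) from a gadget driver using the
++ * ISOC extension: two pre-mapped buffers buf0/buf1 with DMA handles
++ * dma0/dma1, 1024 bytes per frame, start_frame = -1 so the next SOF
++ * number is used, and a buffer processing interval of 8 frames:
++ *
++ *   ret = dwc_otg_pcd_iso_ep_start(pcd, ep_handle, buf0, buf1, dma0, dma1,
++ *                                  0, 0, 1024, -1, 8, req_handle, 1);
++ */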
++
++int dwc_otg_pcd_iso_ep_stop(dwc_otg_pcd_t * pcd, void *ep_handle,
++			    void *req_handle)
++{
++	dwc_irqflags_t flags = 0;
++	dwc_otg_pcd_ep_t *ep;
++	dwc_ep_t *dwc_ep;
++
++	ep = get_ep_from_handle(pcd, ep_handle);
++	if (!ep || !ep->desc || ep->dwc_ep.num == 0) {
++		DWC_WARN("bad ep\n");
++		return -DWC_E_INVALID;
++	}
++	dwc_ep = &ep->dwc_ep;
++
++	dwc_otg_iso_ep_stop_transfer(GET_CORE_IF(pcd), dwc_ep);
++
++	DWC_FREE(dwc_ep->pkt_info);
++	DWC_SPINLOCK_IRQSAVE(pcd->lock, &flags);
++	if (ep->iso_req_handle != req_handle) {
++		DWC_SPINUNLOCK_IRQRESTORE(pcd->lock, flags);
++		return -DWC_E_INVALID;
++	}
++
++	DWC_SPINUNLOCK_IRQRESTORE(pcd->lock, flags);
++
++	ep->iso_req_handle = 0;
++	return 0;
++}
++
++/**
++ * This function is used for periodic data exchange between the PCD and
++ * gadget drivers for Isochronous EPs.
++ *
++ *	- Every time a sync period completes this function is called to
++ *	  perform data exchange between PCD and gadget
++ */
++void dwc_otg_iso_buffer_done(dwc_otg_pcd_t * pcd, dwc_otg_pcd_ep_t * ep,
++			     void *req_handle)
++{
++	int i;
++	dwc_ep_t *dwc_ep;
++
++	dwc_ep = &ep->dwc_ep;
++
++	DWC_SPINUNLOCK(ep->pcd->lock);
++	pcd->fops->isoc_complete(pcd, ep->priv, ep->iso_req_handle,
++				 dwc_ep->proc_buf_num ^ 0x1);
++	DWC_SPINLOCK(ep->pcd->lock);
++
++	for (i = 0; i < dwc_ep->pkt_cnt; ++i) {
++		dwc_ep->pkt_info[i].status = 0;
++		dwc_ep->pkt_info[i].offset = 0;
++		dwc_ep->pkt_info[i].length = 0;
++	}
++}
++
++int dwc_otg_pcd_get_iso_packet_count(dwc_otg_pcd_t * pcd, void *ep_handle,
++				     void *iso_req_handle)
++{
++	dwc_otg_pcd_ep_t *ep;
++	dwc_ep_t *dwc_ep;
++
++	ep = get_ep_from_handle(pcd, ep_handle);
++	if (!ep || !ep->desc || ep->dwc_ep.num == 0) {
++		DWC_WARN("bad ep\n");
++		return -DWC_E_INVALID;
++	}
++	dwc_ep = &ep->dwc_ep;
++
++	return dwc_ep->pkt_cnt;
++}
++
++void dwc_otg_pcd_get_iso_packet_params(dwc_otg_pcd_t * pcd, void *ep_handle,
++				       void *iso_req_handle, int packet,
++				       int *status, int *actual, int *offset)
++{
++	dwc_otg_pcd_ep_t *ep;
++	dwc_ep_t *dwc_ep;
++
++	ep = get_ep_from_handle(pcd, ep_handle);
++	if (!ep) {
++		DWC_WARN("bad ep\n");
++		return;
++	}
++
++	dwc_ep = &ep->dwc_ep;
++
++	*status = dwc_ep->pkt_info[packet].status;
++	*actual = dwc_ep->pkt_info[packet].length;
++	*offset = dwc_ep->pkt_info[packet].offset;
++}
++
++#endif /* DWC_EN_ISOC */
++
++static void dwc_otg_pcd_init_ep(dwc_otg_pcd_t * pcd, dwc_otg_pcd_ep_t * pcd_ep,
++				uint32_t is_in, uint32_t ep_num)
++{
++	/* Init EP structure */
++	pcd_ep->desc = 0;
++	pcd_ep->pcd = pcd;
++	pcd_ep->stopped = 1;
++	pcd_ep->queue_sof = 0;
++
++	/* Init DWC ep structure */
++	pcd_ep->dwc_ep.is_in = is_in;
++	pcd_ep->dwc_ep.num = ep_num;
++	pcd_ep->dwc_ep.active = 0;
++	pcd_ep->dwc_ep.tx_fifo_num = 0;
++	/* Control until ep is activated */
++	pcd_ep->dwc_ep.type = DWC_OTG_EP_TYPE_CONTROL;
++	pcd_ep->dwc_ep.maxpacket = MAX_PACKET_SIZE;
++	pcd_ep->dwc_ep.dma_addr = 0;
++	pcd_ep->dwc_ep.start_xfer_buff = 0;
++	pcd_ep->dwc_ep.xfer_buff = 0;
++	pcd_ep->dwc_ep.xfer_len = 0;
++	pcd_ep->dwc_ep.xfer_count = 0;
++	pcd_ep->dwc_ep.sent_zlp = 0;
++	pcd_ep->dwc_ep.total_len = 0;
++	pcd_ep->dwc_ep.desc_addr = 0;
++	pcd_ep->dwc_ep.dma_desc_addr = 0;
++	DWC_CIRCLEQ_INIT(&pcd_ep->queue);
++}
++
++/**
++ * Initialize EPs
++ */
++static void dwc_otg_pcd_reinit(dwc_otg_pcd_t * pcd)
++{
++	int i;
++	uint32_t hwcfg1;
++	dwc_otg_pcd_ep_t *ep;
++	int in_ep_cntr, out_ep_cntr;
++	uint32_t num_in_eps = (GET_CORE_IF(pcd))->dev_if->num_in_eps;
++	uint32_t num_out_eps = (GET_CORE_IF(pcd))->dev_if->num_out_eps;
++
++	/**
++	 * Initialize the EP0 structure.
++	 */
++	ep = &pcd->ep0;
++	dwc_otg_pcd_init_ep(pcd, ep, 0, 0);
++
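++	/*
++	 * HWCFG1 holds two direction bits per endpoint (EP0 in the lowest
++	 * pair).  The IN scan below tests the upper bit of each pair
++	 * starting at bit 3 (EP1), the OUT scan the lower bit starting at
++	 * bit 2; a cleared bit means that direction is implemented.
++	 */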
++	in_ep_cntr = 0;
++	hwcfg1 = (GET_CORE_IF(pcd))->hwcfg1.d32 >> 3;
++	for (i = 1; in_ep_cntr < num_in_eps; i++) {
++		if ((hwcfg1 & 0x1) == 0) {
++			dwc_otg_pcd_ep_t *ep = &pcd->in_ep[in_ep_cntr];
++			in_ep_cntr++;
++			/**
++			 * @todo NGS: Add direction to EP, based on contents
++			 * of HWCFG1.  Need a copy of HWCFG1 in pcd structure?
++			 */
++			dwc_otg_pcd_init_ep(pcd, ep, 1 /* IN */ , i);
++
++			DWC_CIRCLEQ_INIT(&ep->queue);
++		}
++		hwcfg1 >>= 2;
++	}
++
++	out_ep_cntr = 0;
++	hwcfg1 = (GET_CORE_IF(pcd))->hwcfg1.d32 >> 2;
++	for (i = 1; out_ep_cntr < num_out_eps; i++) {
++		if ((hwcfg1 & 0x1) == 0) {
++			dwc_otg_pcd_ep_t *ep = &pcd->out_ep[out_ep_cntr];
++			out_ep_cntr++;
++			/**
++			 * @todo NGS: Add direction to EP, based on contents
++			 * of HWCFG1.  Need a copy of HWCFG1 in pcd structure?
++			 */
++			dwc_otg_pcd_init_ep(pcd, ep, 0 /* OUT */ , i);
++			DWC_CIRCLEQ_INIT(&ep->queue);
++		}
++		hwcfg1 >>= 2;
++	}
++
++	pcd->ep0state = EP0_DISCONNECT;
++	pcd->ep0.dwc_ep.maxpacket = MAX_EP0_SIZE;
++	pcd->ep0.dwc_ep.type = DWC_OTG_EP_TYPE_CONTROL;
++}
++
++/**
++ * This function is called when the SRP timer expires. The SRP should
++ * complete within 6 seconds.
++ */
++static void srp_timeout(void *ptr)
++{
++	gotgctl_data_t gotgctl;
++	dwc_otg_core_if_t *core_if = (dwc_otg_core_if_t *) ptr;
++	volatile uint32_t *addr = &core_if->core_global_regs->gotgctl;
++
++	gotgctl.d32 = DWC_READ_REG32(addr);
++
++	core_if->srp_timer_started = 0;
++
++	if (core_if->adp_enable) {
++		if (gotgctl.b.bsesvld == 0) {
++			gpwrdn_data_t gpwrdn = {.d32 = 0 };
++			DWC_PRINTF("SRP Timeout BSESSVLD = 0\n");
++			/* Power off the core */
++			if (core_if->power_down == 2) {
++				gpwrdn.b.pwrdnswtch = 1;
++				DWC_MODIFY_REG32(&core_if->
++						 core_global_regs->gpwrdn,
++						 gpwrdn.d32, 0);
++			}
++
++			gpwrdn.d32 = 0;
++			gpwrdn.b.pmuintsel = 1;
++			gpwrdn.b.pmuactv = 1;
++			DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, 0,
++					 gpwrdn.d32);
++			dwc_otg_adp_probe_start(core_if);
++		} else {
++			DWC_PRINTF("SRP Timeout BSESSVLD = 1\n");
++			core_if->op_state = B_PERIPHERAL;
++			dwc_otg_core_init(core_if);
++			dwc_otg_enable_global_interrupts(core_if);
++			cil_pcd_start(core_if);
++		}
++	}
++
++	if ((core_if->core_params->phy_type == DWC_PHY_TYPE_PARAM_FS) &&
++	    (core_if->core_params->i2c_enable)) {
++		DWC_PRINTF("SRP Timeout\n");
++
++		if ((core_if->srp_success) && (gotgctl.b.bsesvld)) {
++			if (core_if->pcd_cb && core_if->pcd_cb->resume_wakeup) {
++				core_if->pcd_cb->resume_wakeup(core_if->pcd_cb->p);
++			}
++
++			/* Clear Session Request */
++			gotgctl.d32 = 0;
++			gotgctl.b.sesreq = 1;
++			DWC_MODIFY_REG32(&core_if->core_global_regs->gotgctl,
++					 gotgctl.d32, 0);
++
++			core_if->srp_success = 0;
++		} else {
++			__DWC_ERROR("Device not connected/responding\n");
++			gotgctl.b.sesreq = 0;
++			DWC_WRITE_REG32(addr, gotgctl.d32);
++		}
++	} else if (gotgctl.b.sesreq) {
++		DWC_PRINTF("SRP Timeout\n");
++
++		__DWC_ERROR("Device not connected/responding\n");
++		gotgctl.b.sesreq = 0;
++		DWC_WRITE_REG32(addr, gotgctl.d32);
++	} else {
++		DWC_PRINTF(" SRP GOTGCTL=%0x\n", gotgctl.d32);
++	}
++}
++
++/**
++ * Tasklet
++ *
++ */
++extern void start_next_request(dwc_otg_pcd_ep_t * ep);
++
++static void start_xfer_tasklet_func(void *data)
++{
++	dwc_otg_pcd_t *pcd = (dwc_otg_pcd_t *) data;
++	dwc_otg_core_if_t *core_if = GET_CORE_IF(pcd);
++
++	int i;
++	depctl_data_t diepctl;
++
++	DWC_DEBUGPL(DBG_PCDV, "Start xfer tasklet\n");
++
++	diepctl.d32 = DWC_READ_REG32(&core_if->dev_if->in_ep_regs[0]->diepctl);
++
++	if (pcd->ep0.queue_sof) {
++		pcd->ep0.queue_sof = 0;
++		start_next_request(&pcd->ep0);
++		// break;
++	}
++
++	for (i = 0; i < core_if->dev_if->num_in_eps; i++) {
++		depctl_data_t diepctl;
++		diepctl.d32 =
++		    DWC_READ_REG32(&core_if->dev_if->in_ep_regs[i]->diepctl);
++
++		if (pcd->in_ep[i].queue_sof) {
++			pcd->in_ep[i].queue_sof = 0;
++			start_next_request(&pcd->in_ep[i]);
++			// break;
++		}
++	}
++
++	return;
++}
++
++/**
++ * This function initializes the PCD portion of the driver.
++ *
++ */
++dwc_otg_pcd_t *dwc_otg_pcd_init(dwc_otg_core_if_t * core_if)
++{
++	dwc_otg_pcd_t *pcd = NULL;
++	dwc_otg_dev_if_t *dev_if;
++	int i;
++
++	/*
++	 * Allocate PCD structure
++	 */
++	pcd = DWC_ALLOC(sizeof(dwc_otg_pcd_t));
++
++	if (pcd == NULL) {
++		return NULL;
++	}
++
++#if (defined(DWC_LINUX) && defined(CONFIG_DEBUG_SPINLOCK))
++	DWC_SPINLOCK_ALLOC_LINUX_DEBUG(pcd->lock);
++#else
++	pcd->lock = DWC_SPINLOCK_ALLOC();
++#endif
++        DWC_DEBUGPL(DBG_HCDV, "Init of PCD %p given core_if %p\n",
++                    pcd, core_if);//GRAYG
++	if (!pcd->lock) {
++		DWC_ERROR("Could not allocate lock for pcd");
++		DWC_FREE(pcd);
++		return NULL;
++	}
++	/* Set core_if's lock pointer to pcd->lock */
++	core_if->lock = pcd->lock;
++	pcd->core_if = core_if;
++
++	dev_if = core_if->dev_if;
++	dev_if->isoc_ep = NULL;
++
++	if (core_if->hwcfg4.b.ded_fifo_en) {
++		DWC_PRINTF("Dedicated Tx FIFOs mode\n");
++	} else {
++		DWC_PRINTF("Shared Tx FIFO mode\n");
++	}
++
++	/*
++	 * Initialize the Core for Device mode here if there is no ADP support.
++	 * Otherwise it will be done later in the dwc_otg_adp_start routine.
++	 */
++	if (dwc_otg_is_device_mode(core_if) /*&& !core_if->adp_enable*/) {
++		dwc_otg_core_dev_init(core_if);
++	}
++
++	/*
++	 * Register the PCD Callbacks.
++	 */
++	dwc_otg_cil_register_pcd_callbacks(core_if, &pcd_callbacks, pcd);
++
++	/*
++	 * Initialize the DMA buffer for SETUP packets
++	 */
++	if (GET_CORE_IF(pcd)->dma_enable) {
++		pcd->setup_pkt =
++		    DWC_DMA_ALLOC(sizeof(*pcd->setup_pkt) * 5,
++				  &pcd->setup_pkt_dma_handle);
++		if (pcd->setup_pkt == NULL) {
++			DWC_FREE(pcd);
++			return NULL;
++		}
++
++		pcd->status_buf =
++		    DWC_DMA_ALLOC(sizeof(uint16_t),
++				  &pcd->status_buf_dma_handle);
++		if (pcd->status_buf == NULL) {
++			DWC_DMA_FREE(sizeof(*pcd->setup_pkt) * 5,
++				     pcd->setup_pkt, pcd->setup_pkt_dma_handle);
++			DWC_FREE(pcd);
++			return NULL;
++		}
++
++		if (GET_CORE_IF(pcd)->dma_desc_enable) {
++			dev_if->setup_desc_addr[0] =
++			    dwc_otg_ep_alloc_desc_chain
++			    (&dev_if->dma_setup_desc_addr[0], 1);
++			dev_if->setup_desc_addr[1] =
++			    dwc_otg_ep_alloc_desc_chain
++			    (&dev_if->dma_setup_desc_addr[1], 1);
++			dev_if->in_desc_addr =
++			    dwc_otg_ep_alloc_desc_chain
++			    (&dev_if->dma_in_desc_addr, 1);
++			dev_if->out_desc_addr =
++			    dwc_otg_ep_alloc_desc_chain
++			    (&dev_if->dma_out_desc_addr, 1);
++			pcd->data_terminated = 0;
++
++			if (dev_if->setup_desc_addr[0] == 0
++			    || dev_if->setup_desc_addr[1] == 0
++			    || dev_if->in_desc_addr == 0
++			    || dev_if->out_desc_addr == 0) {
++
++				if (dev_if->out_desc_addr)
++					dwc_otg_ep_free_desc_chain
++					    (dev_if->out_desc_addr,
++					     dev_if->dma_out_desc_addr, 1);
++				if (dev_if->in_desc_addr)
++					dwc_otg_ep_free_desc_chain
++					    (dev_if->in_desc_addr,
++					     dev_if->dma_in_desc_addr, 1);
++				if (dev_if->setup_desc_addr[1])
++					dwc_otg_ep_free_desc_chain
++					    (dev_if->setup_desc_addr[1],
++					     dev_if->dma_setup_desc_addr[1], 1);
++				if (dev_if->setup_desc_addr[0])
++					dwc_otg_ep_free_desc_chain
++					    (dev_if->setup_desc_addr[0],
++					     dev_if->dma_setup_desc_addr[0], 1);
++
++				DWC_DMA_FREE(sizeof(*pcd->setup_pkt) * 5,
++					     pcd->setup_pkt,
++					     pcd->setup_pkt_dma_handle);
++				DWC_DMA_FREE(sizeof(*pcd->status_buf),
++					     pcd->status_buf,
++					     pcd->status_buf_dma_handle);
++
++				DWC_FREE(pcd);
++
++				return NULL;
++			}
++		}
++	} else {
++		pcd->setup_pkt = DWC_ALLOC(sizeof(*pcd->setup_pkt) * 5);
++		if (pcd->setup_pkt == NULL) {
++			DWC_FREE(pcd);
++			return NULL;
++		}
++
++		pcd->status_buf = DWC_ALLOC(sizeof(uint16_t));
++		if (pcd->status_buf == NULL) {
++			DWC_FREE(pcd->setup_pkt);
++			DWC_FREE(pcd);
++			return NULL;
++		}
++	}
++
++	dwc_otg_pcd_reinit(pcd);
++
++	/* Allocate the cfi object for the PCD */
++#ifdef DWC_UTE_CFI
++	pcd->cfi = DWC_ALLOC(sizeof(cfiobject_t));
++	if (NULL == pcd->cfi)
++		goto fail;
++	if (init_cfi(pcd->cfi)) {
++		CFI_INFO("%s: Failed to init the CFI object\n", __func__);
++		goto fail;
++	}
++#endif
++
++	/* Initialize tasklets */
++	pcd->start_xfer_tasklet = DWC_TASK_ALLOC("xfer_tasklet",
++						 start_xfer_tasklet_func, pcd);
++	pcd->test_mode_tasklet = DWC_TASK_ALLOC("test_mode_tasklet",
++						do_test_mode, pcd);
++
++	/* Initialize SRP timer */
++	core_if->srp_timer = DWC_TIMER_ALLOC("SRP TIMER", srp_timeout, core_if);
++
++	if (core_if->core_params->dev_out_nak) {
++		/**
++		 * Initialize xfer timeout timer. Implemented for
++		 * 2.93a feature "Device DDMA OUT NAK Enhancement"
++		 */
++		for(i = 0; i < MAX_EPS_CHANNELS; i++) {
++			pcd->core_if->ep_xfer_timer[i] =
++				DWC_TIMER_ALLOC("ep timer", ep_xfer_timeout,
++				&pcd->core_if->ep_xfer_info[i]);
++		}
++	}
++
++	return pcd;
++#ifdef DWC_UTE_CFI
++fail:
++#endif
++	if (pcd->setup_pkt)
++		DWC_FREE(pcd->setup_pkt);
++	if (pcd->status_buf)
++		DWC_FREE(pcd->status_buf);
++#ifdef DWC_UTE_CFI
++	if (pcd->cfi)
++		DWC_FREE(pcd->cfi);
++#endif
++	if (pcd)
++		DWC_FREE(pcd);
++	return NULL;
++
++}
++
++/**
++ * Remove PCD specific data
++ */
++void dwc_otg_pcd_remove(dwc_otg_pcd_t * pcd)
++{
++	dwc_otg_dev_if_t *dev_if = GET_CORE_IF(pcd)->dev_if;
++	int i;
++	if (pcd->core_if->core_params->dev_out_nak) {
++		for (i = 0; i < MAX_EPS_CHANNELS; i++) {
++			DWC_TIMER_CANCEL(pcd->core_if->ep_xfer_timer[i]);
++			pcd->core_if->ep_xfer_info[i].state = 0;
++		}
++	}
++
++	if (GET_CORE_IF(pcd)->dma_enable) {
++		DWC_DMA_FREE(sizeof(*pcd->setup_pkt) * 5, pcd->setup_pkt,
++			     pcd->setup_pkt_dma_handle);
++		DWC_DMA_FREE(sizeof(uint16_t), pcd->status_buf,
++			     pcd->status_buf_dma_handle);
++		if (GET_CORE_IF(pcd)->dma_desc_enable) {
++			dwc_otg_ep_free_desc_chain(dev_if->setup_desc_addr[0],
++						   dev_if->dma_setup_desc_addr
++						   [0], 1);
++			dwc_otg_ep_free_desc_chain(dev_if->setup_desc_addr[1],
++						   dev_if->dma_setup_desc_addr
++						   [1], 1);
++			dwc_otg_ep_free_desc_chain(dev_if->in_desc_addr,
++						   dev_if->dma_in_desc_addr, 1);
++			dwc_otg_ep_free_desc_chain(dev_if->out_desc_addr,
++						   dev_if->dma_out_desc_addr,
++						   1);
++		}
++	} else {
++		DWC_FREE(pcd->setup_pkt);
++		DWC_FREE(pcd->status_buf);
++	}
++	DWC_SPINLOCK_FREE(pcd->lock);
++	/* Set core_if's lock pointer to NULL */
++	pcd->core_if->lock = NULL;
++
++	DWC_TASK_FREE(pcd->start_xfer_tasklet);
++	DWC_TASK_FREE(pcd->test_mode_tasklet);
++	if (pcd->core_if->core_params->dev_out_nak) {
++		for (i = 0; i < MAX_EPS_CHANNELS; i++) {
++			if (pcd->core_if->ep_xfer_timer[i]) {
++					DWC_TIMER_FREE(pcd->core_if->ep_xfer_timer[i]);
++			}
++		}
++	}
++
++/* Release the CFI object's dynamic memory */
++#ifdef DWC_UTE_CFI
++	if (pcd->cfi->ops.release) {
++		pcd->cfi->ops.release(pcd->cfi);
++	}
++#endif
++
++	DWC_FREE(pcd);
++}
++
++/**
++ * Returns whether registered pcd is dual speed or not
++ */
++uint32_t dwc_otg_pcd_is_dualspeed(dwc_otg_pcd_t * pcd)
++{
++	dwc_otg_core_if_t *core_if = GET_CORE_IF(pcd);
++
++	if ((core_if->core_params->speed == DWC_SPEED_PARAM_FULL) ||
++	    ((core_if->hwcfg2.b.hs_phy_type == 2) &&
++	     (core_if->hwcfg2.b.fs_phy_type == 1) &&
++	     (core_if->core_params->ulpi_fs_ls))) {
++		return 0;
++	}
++
++	return 1;
++}
++
++/**
++ * Returns whether registered pcd is OTG capable or not
++ */
++uint32_t dwc_otg_pcd_is_otg(dwc_otg_pcd_t * pcd)
++{
++	dwc_otg_core_if_t *core_if = GET_CORE_IF(pcd);
++	gusbcfg_data_t usbcfg = {.d32 = 0 };
++
++	usbcfg.d32 = DWC_READ_REG32(&core_if->core_global_regs->gusbcfg);
++	if (!usbcfg.b.srpcap || !usbcfg.b.hnpcap) {
++		return 0;
++	}
++
++	return 1;
++}
++
++/**
++ * This function assigns a Tx FIFO to an EP
++ * in dedicated (multiple) Tx FIFO mode
++ */
++static uint32_t assign_tx_fifo(dwc_otg_core_if_t * core_if)
++{
++	uint32_t TxMsk = 1;
++	int i;
++
++	for (i = 0; i < core_if->hwcfg4.b.num_in_eps; ++i) {
++		if ((TxMsk & core_if->tx_msk) == 0) {
++			core_if->tx_msk |= TxMsk;
++			return i + 1;
++		}
++		TxMsk <<= 1;
++	}
++	return 0;
++}
++
++/**
++ * This function assigns a periodic Tx FIFO to a periodic EP
++ * in shared Tx FIFO mode
++ */
++static uint32_t assign_perio_tx_fifo(dwc_otg_core_if_t * core_if)
++{
++	uint32_t PerTxMsk = 1;
++	int i;
++	for (i = 0; i < core_if->hwcfg4.b.num_dev_perio_in_ep; ++i) {
++		if ((PerTxMsk & core_if->p_tx_msk) == 0) {
++			core_if->p_tx_msk |= PerTxMsk;
++			return i + 1;
++		}
++		PerTxMsk <<= 1;
++	}
++	return 0;
++}
++
++/**
++ * This function releases a periodic Tx FIFO
++ * in shared Tx FIFO mode
++ */
++static void release_perio_tx_fifo(dwc_otg_core_if_t * core_if,
++				  uint32_t fifo_num)
++{
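++	/*
++	 * (p_tx_msk & bit) ^ p_tx_msk clears bit (fifo_num - 1), i.e. it is
++	 * equivalent to p_tx_msk &= ~(1 << (fifo_num - 1)).
++	 */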
++	core_if->p_tx_msk =
++	    (core_if->p_tx_msk & (1 << (fifo_num - 1))) ^ core_if->p_tx_msk;
++}
++
++/**
++ * This function releases a Tx FIFO
++ * in dedicated (multiple) Tx FIFO mode
++ */
++static void release_tx_fifo(dwc_otg_core_if_t * core_if, uint32_t fifo_num)
++{
++	core_if->tx_msk =
++	    (core_if->tx_msk & (1 << (fifo_num - 1))) ^ core_if->tx_msk;
++}
++
++/**
++ * This function is being called from gadget
++ * to enable PCD endpoint.
++ */
++int dwc_otg_pcd_ep_enable(dwc_otg_pcd_t * pcd,
++			  const uint8_t * ep_desc, void *usb_ep)
++{
++	int num, dir;
++	dwc_otg_pcd_ep_t *ep = NULL;
++	const usb_endpoint_descriptor_t *desc;
++	dwc_irqflags_t flags;
++	fifosize_data_t dptxfsiz = {.d32 = 0 };
++	gdfifocfg_data_t gdfifocfg = {.d32 = 0 };
++	gdfifocfg_data_t gdfifocfgbase = {.d32 = 0 };
++	int retval = 0;
++	int i, epcount;
++
++	desc = (const usb_endpoint_descriptor_t *)ep_desc;
++
++	if (!desc) {
++		pcd->ep0.priv = usb_ep;
++		ep = &pcd->ep0;
++		retval = -DWC_E_INVALID;
++		goto out;
++	}
++
++	num = UE_GET_ADDR(desc->bEndpointAddress);
++	dir = UE_GET_DIR(desc->bEndpointAddress);
++
++	if (!desc->wMaxPacketSize) {
++		DWC_WARN("bad maxpacketsize\n");
++		retval = -DWC_E_INVALID;
++		goto out;
++	}
++
++	if (dir == UE_DIR_IN) {
++		epcount = pcd->core_if->dev_if->num_in_eps;
++		for (i = 0; i < epcount; i++) {
++			if (num == pcd->in_ep[i].dwc_ep.num) {
++				ep = &pcd->in_ep[i];
++				break;
++			}
++		}
++	} else {
++		epcount = pcd->core_if->dev_if->num_out_eps;
++		for (i = 0; i < epcount; i++) {
++			if (num == pcd->out_ep[i].dwc_ep.num) {
++				ep = &pcd->out_ep[i];
++				break;
++			}
++		}
++	}
++
++	if (!ep) {
++		DWC_WARN("bad address\n");
++		retval = -DWC_E_INVALID;
++		goto out;
++	}
++
++	DWC_SPINLOCK_IRQSAVE(pcd->lock, &flags);
++
++	ep->desc = desc;
++	ep->priv = usb_ep;
++
++	/*
++	 * Activate the EP
++	 */
++	ep->stopped = 0;
++
++	ep->dwc_ep.is_in = (dir == UE_DIR_IN);
++	ep->dwc_ep.maxpacket = UGETW(desc->wMaxPacketSize);
++
++	ep->dwc_ep.type = desc->bmAttributes & UE_XFERTYPE;
++
++	if (ep->dwc_ep.is_in) {
++		if (!GET_CORE_IF(pcd)->en_multiple_tx_fifo) {
++			ep->dwc_ep.tx_fifo_num = 0;
++
++			if (ep->dwc_ep.type == UE_ISOCHRONOUS) {
++				/*
++				 * if ISOC EP then assign a Periodic Tx FIFO.
++				 */
++				ep->dwc_ep.tx_fifo_num =
++				    assign_perio_tx_fifo(GET_CORE_IF(pcd));
++			}
++		} else {
++			/*
++			 * if Dedicated FIFOs mode is on then assign a Tx FIFO.
++			 */
++			ep->dwc_ep.tx_fifo_num =
++			    assign_tx_fifo(GET_CORE_IF(pcd));
++		}
++
++		/* Calculating EP info controller base address */
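++		/*
++		 * The new EPInfoBase is the previous base plus the depth of
++		 * the Tx FIFO just assigned (upper half of DTXFSIZn); the
++		 * result is written back to GDFIFOCFG only on core revisions
++		 * up to 2.94a.
++		 */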
++		if (ep->dwc_ep.tx_fifo_num
++		    && GET_CORE_IF(pcd)->en_multiple_tx_fifo) {
++			gdfifocfg.d32 =
++			    DWC_READ_REG32(&GET_CORE_IF(pcd)->
++					   core_global_regs->gdfifocfg);
++			gdfifocfgbase.d32 = gdfifocfg.d32 >> 16;
++			dptxfsiz.d32 =
++			    (DWC_READ_REG32
++			     (&GET_CORE_IF(pcd)->core_global_regs->
++			      dtxfsiz[ep->dwc_ep.tx_fifo_num - 1]) >> 16);
++			gdfifocfg.b.epinfobase =
++			    gdfifocfgbase.d32 + dptxfsiz.d32;
++			if (GET_CORE_IF(pcd)->snpsid <= OTG_CORE_REV_2_94a) {
++				DWC_WRITE_REG32(&GET_CORE_IF(pcd)->
++						core_global_regs->gdfifocfg,
++						gdfifocfg.d32);
++			}
++		}
++	}
++	/* Set initial data PID. */
++	if (ep->dwc_ep.type == UE_BULK) {
++		ep->dwc_ep.data_pid_start = 0;
++	}
++
++	/* Alloc DMA Descriptors */
++	if (GET_CORE_IF(pcd)->dma_desc_enable) {
++#ifndef DWC_UTE_PER_IO
++		if (ep->dwc_ep.type != UE_ISOCHRONOUS) {
++#endif
++			ep->dwc_ep.desc_addr =
++			    dwc_otg_ep_alloc_desc_chain(&ep->
++							dwc_ep.dma_desc_addr,
++							MAX_DMA_DESC_CNT);
++			if (!ep->dwc_ep.desc_addr) {
++				DWC_WARN("%s, can't allocate DMA descriptor\n",
++					 __func__);
++				retval = -DWC_E_SHUTDOWN;
++				DWC_SPINUNLOCK_IRQRESTORE(pcd->lock, flags);
++				goto out;
++			}
++#ifndef DWC_UTE_PER_IO
++		}
++#endif
++	}
++
++	DWC_DEBUGPL(DBG_PCD, "Activate %s: type=%d, mps=%d desc=%p\n",
++		    (ep->dwc_ep.is_in ? "IN" : "OUT"),
++		    ep->dwc_ep.type, ep->dwc_ep.maxpacket, ep->desc);
++#ifdef DWC_UTE_PER_IO
++	ep->dwc_ep.xiso_bInterval = 1 << (ep->desc->bInterval - 1);
++#endif
++	if (ep->dwc_ep.type == DWC_OTG_EP_TYPE_ISOC) {
++		ep->dwc_ep.bInterval = 1 << (ep->desc->bInterval - 1);
++		ep->dwc_ep.frame_num = 0xFFFFFFFF;
++	}
++
++	dwc_otg_ep_activate(GET_CORE_IF(pcd), &ep->dwc_ep);
++
++#ifdef DWC_UTE_CFI
++	if (pcd->cfi->ops.ep_enable) {
++		pcd->cfi->ops.ep_enable(pcd->cfi, pcd, ep);
++	}
++#endif
++
++	DWC_SPINUNLOCK_IRQRESTORE(pcd->lock, flags);
++
++out:
++	return retval;
++}
++
++/**
++ * This function is being called from gadget
++ * to disable PCD endpoint.
++ */
++int dwc_otg_pcd_ep_disable(dwc_otg_pcd_t * pcd, void *ep_handle)
++{
++	dwc_otg_pcd_ep_t *ep;
++	dwc_irqflags_t flags;
++	dwc_otg_dev_dma_desc_t *desc_addr;
++	dwc_dma_t dma_desc_addr;
++	gdfifocfg_data_t gdfifocfgbase = {.d32 = 0 };
++	gdfifocfg_data_t gdfifocfg = {.d32 = 0 };
++	fifosize_data_t dptxfsiz = {.d32 = 0 };
++
++	ep = get_ep_from_handle(pcd, ep_handle);
++
++	if (!ep || !ep->desc) {
++		DWC_DEBUGPL(DBG_PCD, "bad ep address\n");
++		return -DWC_E_INVALID;
++	}
++
++	DWC_SPINLOCK_IRQSAVE(pcd->lock, &flags);
++
++	dwc_otg_request_nuke(ep);
++
++	dwc_otg_ep_deactivate(GET_CORE_IF(pcd), &ep->dwc_ep);
++	if (pcd->core_if->core_params->dev_out_nak) {
++		DWC_TIMER_CANCEL(pcd->core_if->ep_xfer_timer[ep->dwc_ep.num]);
++		pcd->core_if->ep_xfer_info[ep->dwc_ep.num].state = 0;
++	}
++	ep->desc = NULL;
++	ep->stopped = 1;
++
++	gdfifocfg.d32 =
++	    DWC_READ_REG32(&GET_CORE_IF(pcd)->core_global_regs->gdfifocfg);
++	gdfifocfgbase.d32 = gdfifocfg.d32 >> 16;
++
++	if (ep->dwc_ep.is_in) {
++		if (GET_CORE_IF(pcd)->en_multiple_tx_fifo) {
++			/* Flush the Tx FIFO */
++			dwc_otg_flush_tx_fifo(GET_CORE_IF(pcd),
++					      ep->dwc_ep.tx_fifo_num);
++		}
++		release_perio_tx_fifo(GET_CORE_IF(pcd), ep->dwc_ep.tx_fifo_num);
++		release_tx_fifo(GET_CORE_IF(pcd), ep->dwc_ep.tx_fifo_num);
++		if (GET_CORE_IF(pcd)->en_multiple_tx_fifo) {
++			/* Decreasing EPinfo Base Addr */
++			dptxfsiz.d32 =
++			    (DWC_READ_REG32
++			     (&GET_CORE_IF(pcd)->
++				core_global_regs->dtxfsiz[ep->dwc_ep.tx_fifo_num-1]) >> 16);
++			gdfifocfg.b.epinfobase = gdfifocfgbase.d32 - dptxfsiz.d32;
++			if (GET_CORE_IF(pcd)->snpsid <= OTG_CORE_REV_2_94a) {
++				DWC_WRITE_REG32(&GET_CORE_IF(pcd)->core_global_regs->gdfifocfg,
++					gdfifocfg.d32);
++			}
++		}
++	}
++
++	/* Free DMA Descriptors */
++	if (GET_CORE_IF(pcd)->dma_desc_enable) {
++		if (ep->dwc_ep.type != UE_ISOCHRONOUS) {
++			desc_addr = ep->dwc_ep.desc_addr;
++			dma_desc_addr = ep->dwc_ep.dma_desc_addr;
++
++			/* Cannot call dma_free_coherent() with IRQs disabled */
++			DWC_SPINUNLOCK_IRQRESTORE(pcd->lock, flags);
++			dwc_otg_ep_free_desc_chain(desc_addr, dma_desc_addr,
++						   MAX_DMA_DESC_CNT);
++
++			goto out_unlocked;
++		}
++	}
++	DWC_SPINUNLOCK_IRQRESTORE(pcd->lock, flags);
++
++out_unlocked:
++	DWC_DEBUGPL(DBG_PCD, "%d %s disabled\n", ep->dwc_ep.num,
++		    ep->dwc_ep.is_in ? "IN" : "OUT");
++	return 0;
++
++}
++
++/******************************************************************************/
++#ifdef DWC_UTE_PER_IO
++
++/**
++ * Free the request and its extended parts
++ *
++ */
++void dwc_pcd_xiso_ereq_free(dwc_otg_pcd_ep_t * ep, dwc_otg_pcd_request_t * req)
++{
++	DWC_FREE(req->ext_req.per_io_frame_descs);
++	DWC_FREE(req);
++}
++
++/**
++ * Start the next request in the endpoint's queue.
++ *
++ */
++int dwc_otg_pcd_xiso_start_next_request(dwc_otg_pcd_t * pcd,
++					dwc_otg_pcd_ep_t * ep)
++{
++	int i;
++	dwc_otg_pcd_request_t *req = NULL;
++	dwc_ep_t *dwcep = NULL;
++	struct dwc_iso_xreq_port *ereq = NULL;
++	struct dwc_iso_pkt_desc_port *ddesc_iso;
++	uint16_t nat;
++	depctl_data_t diepctl;
++
++	dwcep = &ep->dwc_ep;
++
++	if (dwcep->xiso_active_xfers > 0) {
++#if 0	//Disable this to decrease s/w overhead that is crucial for Isoc transfers
++		DWC_WARN("There are currently active transfers for EP%d \
++				(active=%d; queued=%d)", dwcep->num, dwcep->xiso_active_xfers,
++				dwcep->xiso_queued_xfers);
++#endif
++		return 0;
++	}
++
++	nat = UGETW(ep->desc->wMaxPacketSize);
++	nat = (nat >> 11) & 0x03;
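++	/*
++	 * Bits 12:11 of wMaxPacketSize give the number of additional
++	 * transactions per microframe for high-bandwidth endpoints; the
++	 * descriptors below use nat + 1 as their PID value.
++	 */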
++
++	if (!DWC_CIRCLEQ_EMPTY(&ep->queue)) {
++		req = DWC_CIRCLEQ_FIRST(&ep->queue);
++		ereq = &req->ext_req;
++		ep->stopped = 0;
++
++		/* Get the frame number */
++		dwcep->xiso_frame_num =
++		    dwc_otg_get_frame_number(GET_CORE_IF(pcd));
++		DWC_DEBUG("FRM_NUM=%d", dwcep->xiso_frame_num);
++
++		ddesc_iso = ereq->per_io_frame_descs;
++
++		if (dwcep->is_in) {
++			/* Setup DMA Descriptor chain for IN Isoc request */
++			for (i = 0; i < ereq->pio_pkt_count; i++) {
++				//if ((i % (nat + 1)) == 0)
++				if ( i > 0 )
++					dwcep->xiso_frame_num =
++					    (dwcep->xiso_bInterval +
++										dwcep->xiso_frame_num) & 0x3FFF;
++				dwcep->desc_addr[i].buf =
++				    req->dma + ddesc_iso[i].offset;
++				dwcep->desc_addr[i].status.b_iso_in.txbytes =
++				    ddesc_iso[i].length;
++				dwcep->desc_addr[i].status.b_iso_in.framenum =
++				    dwcep->xiso_frame_num;
++				dwcep->desc_addr[i].status.b_iso_in.bs =
++				    BS_HOST_READY;
++				dwcep->desc_addr[i].status.b_iso_in.txsts = 0;
++				dwcep->desc_addr[i].status.b_iso_in.sp =
++				    (ddesc_iso[i].length %
++				     dwcep->maxpacket) ? 1 : 0;
++				dwcep->desc_addr[i].status.b_iso_in.ioc = 0;
++				dwcep->desc_addr[i].status.b_iso_in.pid = nat + 1;
++				dwcep->desc_addr[i].status.b_iso_in.l = 0;
++
++				/* Process the last descriptor */
++				if (i == ereq->pio_pkt_count - 1) {
++					dwcep->desc_addr[i].status.b_iso_in.ioc = 1;
++					dwcep->desc_addr[i].status.b_iso_in.l = 1;
++				}
++			}
++
++			/* Setup and start the transfer for this endpoint */
++			dwcep->xiso_active_xfers++;
++			DWC_WRITE_REG32(&GET_CORE_IF(pcd)->dev_if->
++					in_ep_regs[dwcep->num]->diepdma,
++					dwcep->dma_desc_addr);
++			diepctl.d32 = 0;
++			diepctl.b.epena = 1;
++			diepctl.b.cnak = 1;
++			DWC_MODIFY_REG32(&GET_CORE_IF(pcd)->dev_if->
++					 in_ep_regs[dwcep->num]->diepctl, 0,
++					 diepctl.d32);
++		} else {
++			/* Setup DMA Descriptor chain for OUT Isoc request */
++			for (i = 0; i < ereq->pio_pkt_count; i++) {
++				//if ((i % (nat + 1)) == 0)
++				dwcep->xiso_frame_num = (dwcep->xiso_bInterval +
++										dwcep->xiso_frame_num) & 0x3FFF;
++				dwcep->desc_addr[i].buf =
++				    req->dma + ddesc_iso[i].offset;
++				dwcep->desc_addr[i].status.b_iso_out.rxbytes =
++				    ddesc_iso[i].length;
++				dwcep->desc_addr[i].status.b_iso_out.framenum =
++				    dwcep->xiso_frame_num;
++				dwcep->desc_addr[i].status.b_iso_out.bs =
++				    BS_HOST_READY;
++				dwcep->desc_addr[i].status.b_iso_out.rxsts = 0;
++				dwcep->desc_addr[i].status.b_iso_out.sp =
++				    (ddesc_iso[i].length %
++				     dwcep->maxpacket) ? 1 : 0;
++				dwcep->desc_addr[i].status.b_iso_out.ioc = 0;
++				dwcep->desc_addr[i].status.b_iso_out.pid = nat + 1;
++				dwcep->desc_addr[i].status.b_iso_out.l = 0;
++
++				/* Process the last descriptor */
++				if (i == ereq->pio_pkt_count - 1) {
++					dwcep->desc_addr[i].status.b_iso_out.ioc = 1;
++					dwcep->desc_addr[i].status.b_iso_out.l = 1;
++				}
++			}
++
++			/* Setup and start the transfer for this endpoint */
++			dwcep->xiso_active_xfers++;
++			DWC_WRITE_REG32(&GET_CORE_IF(pcd)->
++					dev_if->out_ep_regs[dwcep->num]->
++					doepdma, dwcep->dma_desc_addr);
++			diepctl.d32 = 0;
++			diepctl.b.epena = 1;
++			diepctl.b.cnak = 1;
++			DWC_MODIFY_REG32(&GET_CORE_IF(pcd)->
++					 dev_if->out_ep_regs[dwcep->num]->
++					 doepctl, 0, diepctl.d32);
++		}
++
++	} else {
++		ep->stopped = 1;
++	}
++
++	return 0;
++}
++
++/**
++ *	- Remove the request from the queue
++ */
++void complete_xiso_ep(dwc_otg_pcd_ep_t * ep)
++{
++	dwc_otg_pcd_request_t *req = NULL;
++	struct dwc_iso_xreq_port *ereq = NULL;
++	struct dwc_iso_pkt_desc_port *ddesc_iso = NULL;
++	dwc_ep_t *dwcep = NULL;
++	int i;
++
++	//DWC_DEBUG();
++	dwcep = &ep->dwc_ep;
++
++	/* Get the first pending request from the queue */
++	if (!DWC_CIRCLEQ_EMPTY(&ep->queue)) {
++		req = DWC_CIRCLEQ_FIRST(&ep->queue);
++		if (!req) {
++			DWC_PRINTF("complete_ep 0x%p, req = NULL!\n", ep);
++			return;
++		}
++		dwcep->xiso_active_xfers--;
++		dwcep->xiso_queued_xfers--;
++		/* Remove this request from the queue */
++		DWC_CIRCLEQ_REMOVE_INIT(&ep->queue, req, queue_entry);
++	} else {
++		DWC_PRINTF("complete_ep 0x%p, ep->queue empty!\n", ep);
++		return;
++	}
++
++	ep->stopped = 1;
++	ereq = &req->ext_req;
++	ddesc_iso = ereq->per_io_frame_descs;
++
++	if (dwcep->xiso_active_xfers < 0) {
++		DWC_WARN("EP#%d (xiso_active_xfers=%d)", dwcep->num,
++			 dwcep->xiso_active_xfers);
++	}
++
++	/* Fill the Isoc descs of portable extended req from dma descriptors */
++	for (i = 0; i < ereq->pio_pkt_count; i++) {
++		if (dwcep->is_in) {	/* IN endpoints */
++			ddesc_iso[i].actual_length = ddesc_iso[i].length -
++			    dwcep->desc_addr[i].status.b_iso_in.txbytes;
++			ddesc_iso[i].status =
++			    dwcep->desc_addr[i].status.b_iso_in.txsts;
++		} else {	/* OUT endpoints */
++			ddesc_iso[i].actual_length = ddesc_iso[i].length -
++			    dwcep->desc_addr[i].status.b_iso_out.rxbytes;
++			ddesc_iso[i].status =
++			    dwcep->desc_addr[i].status.b_iso_out.rxsts;
++		}
++	}
++
++	DWC_SPINUNLOCK(ep->pcd->lock);
++
++	/* Call the completion function in the non-portable logic */
++	ep->pcd->fops->xisoc_complete(ep->pcd, ep->priv, req->priv, 0,
++				      &req->ext_req);
++
++	DWC_SPINLOCK(ep->pcd->lock);
++
++	/* Free the request - specific freeing needed for extended request object */
++	dwc_pcd_xiso_ereq_free(ep, req);
++
++	/* Start the next request */
++	dwc_otg_pcd_xiso_start_next_request(ep->pcd, ep);
++
++	return;
++}
++
++/**
++ * Create and initialize the Isoc pkt descriptors of the extended request.
++ *
++ */
++static int dwc_otg_pcd_xiso_create_pkt_descs(dwc_otg_pcd_request_t * req,
++					     void *ereq_nonport,
++					     int atomic_alloc)
++{
++	struct dwc_iso_xreq_port *ereq = NULL;
++	struct dwc_iso_xreq_port *req_mapped = NULL;
++	struct dwc_iso_pkt_desc_port *ipds = NULL;	/* To be created in this function */
++	uint32_t pkt_count;
++	int i;
++
++	ereq = &req->ext_req;
++	req_mapped = (struct dwc_iso_xreq_port *)ereq_nonport;
++	pkt_count = req_mapped->pio_pkt_count;
++
++	/* Create the isoc descs */
++	if (atomic_alloc) {
++		ipds = DWC_ALLOC_ATOMIC(sizeof(*ipds) * pkt_count);
++	} else {
++		ipds = DWC_ALLOC(sizeof(*ipds) * pkt_count);
++	}
++
++	if (!ipds) {
++		DWC_ERROR("Failed to allocate isoc descriptors");
++		return -DWC_E_NO_MEMORY;
++	}
++
++	/* Initialize the extended request fields */
++	ereq->per_io_frame_descs = ipds;
++	ereq->error_count = 0;
++	ereq->pio_alloc_pkt_count = pkt_count;
++	ereq->pio_pkt_count = pkt_count;
++	ereq->tr_sub_flags = req_mapped->tr_sub_flags;
++
++	/* Init the Isoc descriptors */
++	for (i = 0; i < pkt_count; i++) {
++		ipds[i].length = req_mapped->per_io_frame_descs[i].length;
++		ipds[i].offset = req_mapped->per_io_frame_descs[i].offset;
++		ipds[i].status = req_mapped->per_io_frame_descs[i].status;	/* 0 */
++		ipds[i].actual_length =
++		    req_mapped->per_io_frame_descs[i].actual_length;
++	}
++
++	return 0;
++}
++
++static void prn_ext_request(struct dwc_iso_xreq_port *ereq)
++{
++	struct dwc_iso_pkt_desc_port *xfd = NULL;
++	int i;
++
++	DWC_DEBUG("per_io_frame_descs=%p", ereq->per_io_frame_descs);
++	DWC_DEBUG("tr_sub_flags=%d", ereq->tr_sub_flags);
++	DWC_DEBUG("error_count=%d", ereq->error_count);
++	DWC_DEBUG("pio_alloc_pkt_count=%d", ereq->pio_alloc_pkt_count);
++	DWC_DEBUG("pio_pkt_count=%d", ereq->pio_pkt_count);
++	DWC_DEBUG("res=%d", ereq->res);
++
++	for (i = 0; i < ereq->pio_pkt_count; i++) {
++		xfd = &ereq->per_io_frame_descs[i];
++		DWC_DEBUG("FD #%d", i);
++
++		DWC_DEBUG("xfd->actual_length=%d", xfd->actual_length);
++		DWC_DEBUG("xfd->length=%d", xfd->length);
++		DWC_DEBUG("xfd->offset=%d", xfd->offset);
++		DWC_DEBUG("xfd->status=%d", xfd->status);
++	}
++}
++
++/**
++ * Queues an extended (per-frame) Isochronous request and, when the
++ * request's tr_sub_flags is DWC_EREQ_TF_ASAP, starts it immediately.
++ */
++int dwc_otg_pcd_xiso_ep_queue(dwc_otg_pcd_t * pcd, void *ep_handle,
++			      uint8_t * buf, dwc_dma_t dma_buf, uint32_t buflen,
++			      int zero, void *req_handle, int atomic_alloc,
++			      void *ereq_nonport)
++{
++	dwc_otg_pcd_request_t *req = NULL;
++	dwc_otg_pcd_ep_t *ep;
++	dwc_irqflags_t flags;
++	int res;
++
++	ep = get_ep_from_handle(pcd, ep_handle);
++	if (!ep) {
++		DWC_WARN("bad ep\n");
++		return -DWC_E_INVALID;
++	}
++
++	/* We support this extension only for DDMA mode */
++	if (ep->dwc_ep.type == DWC_OTG_EP_TYPE_ISOC)
++		if (!GET_CORE_IF(pcd)->dma_desc_enable)
++			return -DWC_E_INVALID;
++
++	/* Create a dwc_otg_pcd_request_t object */
++	if (atomic_alloc) {
++		req = DWC_ALLOC_ATOMIC(sizeof(*req));
++	} else {
++		req = DWC_ALLOC(sizeof(*req));
++	}
++
++	if (!req) {
++		return -DWC_E_NO_MEMORY;
++	}
++
++	/* Create the Isoc descs for this request which shall be the exact match
++	 * of the structure sent to us from the non-portable logic */
++	res =
++	    dwc_otg_pcd_xiso_create_pkt_descs(req, ereq_nonport, atomic_alloc);
++	if (res) {
++		DWC_WARN("Failed to init the Isoc descriptors");
++		DWC_FREE(req);
++		return res;
++	}
++
++	DWC_SPINLOCK_IRQSAVE(pcd->lock, &flags);
++
++	DWC_CIRCLEQ_INIT_ENTRY(req, queue_entry);
++	req->buf = buf;
++	req->dma = dma_buf;
++	req->length = buflen;
++	req->sent_zlp = zero;
++	req->priv = req_handle;
++
++	//DWC_SPINLOCK_IRQSAVE(pcd->lock, &flags);
++	ep->dwc_ep.dma_addr = dma_buf;
++	ep->dwc_ep.start_xfer_buff = buf;
++	ep->dwc_ep.xfer_buff = buf;
++	ep->dwc_ep.xfer_len = 0;
++	ep->dwc_ep.xfer_count = 0;
++	ep->dwc_ep.sent_zlp = 0;
++	ep->dwc_ep.total_len = buflen;
++
++	/* Add this request to the tail */
++	DWC_CIRCLEQ_INSERT_TAIL(&ep->queue, req, queue_entry);
++	ep->dwc_ep.xiso_queued_xfers++;
++
++//DWC_DEBUG("CP_0");
++//DWC_DEBUG("req->ext_req.tr_sub_flags=%d", req->ext_req.tr_sub_flags);
++//prn_ext_request((struct dwc_iso_xreq_port *) ereq_nonport);
++//prn_ext_request(&req->ext_req);
++
++	//DWC_SPINUNLOCK_IRQRESTORE(pcd->lock, flags);
++
++	/* If the request's tr_sub_flags == DWC_EREQ_TF_ASAP, check whether there
++	 * is any active transfer for this endpoint. If there are no active
++	 * transfers, take the first entry from the queue and start that transfer.
++	 */
++	if (req->ext_req.tr_sub_flags == DWC_EREQ_TF_ASAP) {
++		res = dwc_otg_pcd_xiso_start_next_request(pcd, ep);
++		if (res) {
++			DWC_WARN("Failed to start the next Isoc transfer");
++			DWC_SPINUNLOCK_IRQRESTORE(pcd->lock, flags);
++			DWC_FREE(req);
++			return res;
++		}
++	}
++
++	DWC_SPINUNLOCK_IRQRESTORE(pcd->lock, flags);
++	return 0;
++}
++
++#endif
++/* END ifdef DWC_UTE_PER_IO ***************************************************/
++int dwc_otg_pcd_ep_queue(dwc_otg_pcd_t * pcd, void *ep_handle,
++			 uint8_t * buf, dwc_dma_t dma_buf, uint32_t buflen,
++			 int zero, void *req_handle, int atomic_alloc)
++{
++	dwc_irqflags_t flags;
++	dwc_otg_pcd_request_t *req;
++	dwc_otg_pcd_ep_t *ep;
++	uint32_t max_transfer;
++
++	ep = get_ep_from_handle(pcd, ep_handle);
++	if (!ep || (!ep->desc && ep->dwc_ep.num != 0)) {
++		DWC_WARN("bad ep\n");
++		return -DWC_E_INVALID;
++	}
++
++	if (atomic_alloc) {
++		req = DWC_ALLOC_ATOMIC(sizeof(*req));
++	} else {
++		req = DWC_ALLOC(sizeof(*req));
++	}
++
++	if (!req) {
++		return -DWC_E_NO_MEMORY;
++	}
++	DWC_CIRCLEQ_INIT_ENTRY(req, queue_entry);
++	if (!GET_CORE_IF(pcd)->core_params->opt) {
++		if (ep->dwc_ep.num != 0) {
++			DWC_ERROR("queue req %p, len %d buf %p\n",
++				  req_handle, buflen, buf);
++		}
++	}
++
++	req->buf = buf;
++	req->dma = dma_buf;
++	req->length = buflen;
++	req->sent_zlp = zero;
++	req->priv = req_handle;
++	req->dw_align_buf = NULL;
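++	/*
++	 * When buffer DMA (non-descriptor) is used and the caller's DMA
++	 * address is not 4-byte aligned, a DWORD-aligned bounce buffer is
++	 * allocated; for IN transfers the data is copied into it before the
++	 * transfer starts (see the dw_align_buf handling below).
++	 */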
++	if ((dma_buf & 0x3) && GET_CORE_IF(pcd)->dma_enable
++			&& !GET_CORE_IF(pcd)->dma_desc_enable)
++		req->dw_align_buf = DWC_DMA_ALLOC(buflen,
++				 &req->dw_align_buf_dma);
++	DWC_SPINLOCK_IRQSAVE(pcd->lock, &flags);
++
++	/*
++	 * After adding the request to the queue: for IN ISOC, wait for the
++	 * "IN Token Received when Tx FIFO Empty" interrupt; for OUT ISOC, wait
++	 * for the "OUT Token Received when EP Disabled" interrupt. This is used
++	 * to obtain the starting (odd/even) microframe before starting the
++	 * transfer.
++	 */
++	if (ep->dwc_ep.type == DWC_OTG_EP_TYPE_ISOC) {
++		if (req != 0) {
++			depctl_data_t depctl = {.d32 =
++				    DWC_READ_REG32(&pcd->core_if->dev_if->
++						   in_ep_regs[ep->dwc_ep.num]->
++						   diepctl) };
++			++pcd->request_pending;
++
++			DWC_CIRCLEQ_INSERT_TAIL(&ep->queue, req, queue_entry);
++			if (ep->dwc_ep.is_in) {
++				depctl.b.cnak = 1;
++				DWC_WRITE_REG32(&pcd->core_if->dev_if->
++						in_ep_regs[ep->dwc_ep.num]->
++						diepctl, depctl.d32);
++			}
++
++			DWC_SPINUNLOCK_IRQRESTORE(pcd->lock, flags);
++		}
++		return 0;
++	}
++
++	/*
++	 * For EP0 IN without premature status, zlp is required?
++	 */
++	if (ep->dwc_ep.num == 0 && ep->dwc_ep.is_in) {
++		DWC_DEBUGPL(DBG_PCDV, "%d-OUT ZLP\n", ep->dwc_ep.num);
++		//_req->zero = 1;
++	}
++
++	/* Start the transfer */
++	if (DWC_CIRCLEQ_EMPTY(&ep->queue) && !ep->stopped) {
++		/* EP0 Transfer? */
++		if (ep->dwc_ep.num == 0) {
++			switch (pcd->ep0state) {
++			case EP0_IN_DATA_PHASE:
++				DWC_DEBUGPL(DBG_PCD,
++					    "%s ep0: EP0_IN_DATA_PHASE\n",
++					    __func__);
++				break;
++
++			case EP0_OUT_DATA_PHASE:
++				DWC_DEBUGPL(DBG_PCD,
++					    "%s ep0: EP0_OUT_DATA_PHASE\n",
++					    __func__);
++				if (pcd->request_config) {
++					/* Complete STATUS PHASE */
++					ep->dwc_ep.is_in = 1;
++					pcd->ep0state = EP0_IN_STATUS_PHASE;
++				}
++				break;
++
++			case EP0_IN_STATUS_PHASE:
++				DWC_DEBUGPL(DBG_PCD,
++					    "%s ep0: EP0_IN_STATUS_PHASE\n",
++					    __func__);
++				break;
++
++			default:
++				DWC_DEBUGPL(DBG_ANY, "ep0: odd state %d\n",
++					    pcd->ep0state);
++				DWC_SPINUNLOCK_IRQRESTORE(pcd->lock, flags);
++				return -DWC_E_SHUTDOWN;
++			}
++
++			ep->dwc_ep.dma_addr = dma_buf;
++			ep->dwc_ep.start_xfer_buff = buf;
++			ep->dwc_ep.xfer_buff = buf;
++			ep->dwc_ep.xfer_len = buflen;
++			ep->dwc_ep.xfer_count = 0;
++			ep->dwc_ep.sent_zlp = 0;
++			ep->dwc_ep.total_len = ep->dwc_ep.xfer_len;
++
++			if (zero) {
++				if ((ep->dwc_ep.xfer_len %
++				     ep->dwc_ep.maxpacket == 0)
++				    && (ep->dwc_ep.xfer_len != 0)) {
++					ep->dwc_ep.sent_zlp = 1;
++				}
++
++			}
++
++			dwc_otg_ep0_start_transfer(GET_CORE_IF(pcd),
++						   &ep->dwc_ep);
++		}		// non-ep0 endpoints
++		else {
++#ifdef DWC_UTE_CFI
++			if (ep->dwc_ep.buff_mode != BM_STANDARD) {
++				/* store the request length */
++				ep->dwc_ep.cfi_req_len = buflen;
++				pcd->cfi->ops.build_descriptors(pcd->cfi, pcd,
++								ep, req);
++			} else {
++#endif
++				max_transfer =
++				    GET_CORE_IF(ep->pcd)->core_params->
++				    max_transfer_size;
++
++				/* Setup and start the Transfer */
++				if (req->dw_align_buf){
++					if (ep->dwc_ep.is_in)
++						dwc_memcpy(req->dw_align_buf,
++							   buf, buflen);
++					ep->dwc_ep.dma_addr =
++					    req->dw_align_buf_dma;
++					ep->dwc_ep.start_xfer_buff =
++					    req->dw_align_buf;
++					ep->dwc_ep.xfer_buff =
++					    req->dw_align_buf;
++				} else {
++					ep->dwc_ep.dma_addr = dma_buf;
++					ep->dwc_ep.start_xfer_buff = buf;
++                                        ep->dwc_ep.xfer_buff = buf;
++				}
++				ep->dwc_ep.xfer_len = 0;
++				ep->dwc_ep.xfer_count = 0;
++				ep->dwc_ep.sent_zlp = 0;
++				ep->dwc_ep.total_len = buflen;
++
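++				/*
++				 * Clamp the per-transfer size: descriptor DMA
++				 * limits it to DDMA_MAX_TRANSFER_SIZE (rounded
++				 * down to a 4-byte multiple for OUT), and a
++				 * request larger than one transfer is trimmed
++				 * to a multiple of maxpacket.
++				 */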
++				ep->dwc_ep.maxxfer = max_transfer;
++				if (GET_CORE_IF(pcd)->dma_desc_enable) {
++					uint32_t out_max_xfer =
++					    DDMA_MAX_TRANSFER_SIZE -
++					    (DDMA_MAX_TRANSFER_SIZE % 4);
++					if (ep->dwc_ep.is_in) {
++						if (ep->dwc_ep.maxxfer >
++						    DDMA_MAX_TRANSFER_SIZE) {
++							ep->dwc_ep.maxxfer =
++							    DDMA_MAX_TRANSFER_SIZE;
++						}
++					} else {
++						if (ep->dwc_ep.maxxfer >
++						    out_max_xfer) {
++							ep->dwc_ep.maxxfer =
++							    out_max_xfer;
++						}
++					}
++				}
++				if (ep->dwc_ep.maxxfer < ep->dwc_ep.total_len) {
++					ep->dwc_ep.maxxfer -=
++					    (ep->dwc_ep.maxxfer %
++					     ep->dwc_ep.maxpacket);
++				}
++
++				if (zero) {
++					if ((ep->dwc_ep.total_len %
++					     ep->dwc_ep.maxpacket == 0)
++					    && (ep->dwc_ep.total_len != 0)) {
++						ep->dwc_ep.sent_zlp = 1;
++					}
++				}
++#ifdef DWC_UTE_CFI
++			}
++#endif
++			dwc_otg_ep_start_transfer(GET_CORE_IF(pcd),
++						  &ep->dwc_ep);
++		}
++	}
++
++	if (req != 0) {
++		++pcd->request_pending;
++		DWC_CIRCLEQ_INSERT_TAIL(&ep->queue, req, queue_entry);
++		if (ep->dwc_ep.is_in && ep->stopped
++		    && !(GET_CORE_IF(pcd)->dma_enable)) {
++			/** @todo NGS Create a function for this. */
++			diepmsk_data_t diepmsk = {.d32 = 0 };
++			diepmsk.b.intktxfemp = 1;
++			if (GET_CORE_IF(pcd)->multiproc_int_enable) {
++				DWC_MODIFY_REG32(&GET_CORE_IF(pcd)->
++						 dev_if->dev_global_regs->diepeachintmsk
++						 [ep->dwc_ep.num], 0,
++						 diepmsk.d32);
++			} else {
++				DWC_MODIFY_REG32(&GET_CORE_IF(pcd)->
++						 dev_if->dev_global_regs->
++						 diepmsk, 0, diepmsk.d32);
++			}
++
++		}
++	}
++	DWC_SPINUNLOCK_IRQRESTORE(pcd->lock, flags);
++
++	return 0;
++}
++
++int dwc_otg_pcd_ep_dequeue(dwc_otg_pcd_t * pcd, void *ep_handle,
++			   void *req_handle)
++{
++	dwc_irqflags_t flags;
++	dwc_otg_pcd_request_t *req;
++	dwc_otg_pcd_ep_t *ep;
++
++	ep = get_ep_from_handle(pcd, ep_handle);
++	if (!ep || (!ep->desc && ep->dwc_ep.num != 0)) {
++		DWC_WARN("bad argument\n");
++		return -DWC_E_INVALID;
++	}
++
++	DWC_SPINLOCK_IRQSAVE(pcd->lock, &flags);
++
++	/* make sure it's actually queued on this endpoint */
++	DWC_CIRCLEQ_FOREACH(req, &ep->queue, queue_entry) {
++		if (req->priv == (void *)req_handle) {
++			break;
++		}
++	}
++
++	if (req->priv != (void *)req_handle) {
++		DWC_SPINUNLOCK_IRQRESTORE(pcd->lock, flags);
++		return -DWC_E_INVALID;
++	}
++
++	if (!DWC_CIRCLEQ_EMPTY_ENTRY(req, queue_entry)) {
++		dwc_otg_request_done(ep, req, -DWC_E_RESTART);
++	} else {
++		req = NULL;
++	}
++
++	DWC_SPINUNLOCK_IRQRESTORE(pcd->lock, flags);
++
++	return req ? 0 : -DWC_E_SHUTDOWN;
++
++}
++
++/**
++ * dwc_otg_pcd_ep_wedge - sets the halt feature and ignores clear requests
++ *
++ * Use this to stall an endpoint and ignore CLEAR_FEATURE(HALT_ENDPOINT)
++ * requests. If the gadget driver clears the halt status, it will
++ * automatically unwedge the endpoint.
++ *
++ * Returns zero on success, else negative DWC error code.
++ */
++int dwc_otg_pcd_ep_wedge(dwc_otg_pcd_t * pcd, void *ep_handle)
++{
++	dwc_otg_pcd_ep_t *ep;
++	dwc_irqflags_t flags;
++	int retval = 0;
++
++	ep = get_ep_from_handle(pcd, ep_handle);
++
++	if (!ep || (!ep->desc && ep != &pcd->ep0) ||
++	    (ep->desc && (ep->desc->bmAttributes == UE_ISOCHRONOUS))) {
++		DWC_WARN("%s, bad ep\n", __func__);
++		return -DWC_E_INVALID;
++	}
++
++	DWC_SPINLOCK_IRQSAVE(pcd->lock, &flags);
++	if (!DWC_CIRCLEQ_EMPTY(&ep->queue)) {
++		DWC_WARN("%d %s XFer In process\n", ep->dwc_ep.num,
++			 ep->dwc_ep.is_in ? "IN" : "OUT");
++		retval = -DWC_E_AGAIN;
++	} else {
++                /* This code needs to be reviewed */
++		if (ep->dwc_ep.is_in == 1 && GET_CORE_IF(pcd)->dma_desc_enable) {
++			dtxfsts_data_t txstatus;
++			fifosize_data_t txfifosize;
++
++			txfifosize.d32 =
++			    DWC_READ_REG32(&GET_CORE_IF(pcd)->
++					   core_global_regs->dtxfsiz[ep->dwc_ep.
++								     tx_fifo_num]);
++			txstatus.d32 =
++			    DWC_READ_REG32(&GET_CORE_IF(pcd)->
++					   dev_if->in_ep_regs[ep->dwc_ep.num]->
++					   dtxfsts);
++
++			if (txstatus.b.txfspcavail < txfifosize.b.depth) {
++				DWC_WARN("%s() Data In Tx Fifo\n", __func__);
++				retval = -DWC_E_AGAIN;
++			} else {
++				if (ep->dwc_ep.num == 0) {
++					pcd->ep0state = EP0_STALL;
++				}
++
++				ep->stopped = 1;
++				dwc_otg_ep_set_stall(GET_CORE_IF(pcd),
++						     &ep->dwc_ep);
++			}
++		} else {
++			if (ep->dwc_ep.num == 0) {
++				pcd->ep0state = EP0_STALL;
++			}
++
++			ep->stopped = 1;
++			dwc_otg_ep_set_stall(GET_CORE_IF(pcd), &ep->dwc_ep);
++		}
++	}
++
++	DWC_SPINUNLOCK_IRQRESTORE(pcd->lock, flags);
++
++	return retval;
++}
++
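++/**
++ * Sets or clears the halt (stall) state of an EP.  The value argument
++ * selects the action: 0 clears the stall, 1 sets it, 2 clears the
++ * endpoint's stall_clear_flag and 3 sets it.
++ */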
++int dwc_otg_pcd_ep_halt(dwc_otg_pcd_t * pcd, void *ep_handle, int value)
++{
++	dwc_otg_pcd_ep_t *ep;
++	dwc_irqflags_t flags;
++	int retval = 0;
++
++	ep = get_ep_from_handle(pcd, ep_handle);
++
++	if (!ep || (!ep->desc && ep != &pcd->ep0) ||
++	    (ep->desc && (ep->desc->bmAttributes == UE_ISOCHRONOUS))) {
++		DWC_WARN("%s, bad ep\n", __func__);
++		return -DWC_E_INVALID;
++	}
++
++	DWC_SPINLOCK_IRQSAVE(pcd->lock, &flags);
++	if (!DWC_CIRCLEQ_EMPTY(&ep->queue)) {
++		DWC_WARN("%d %s XFer In process\n", ep->dwc_ep.num,
++			 ep->dwc_ep.is_in ? "IN" : "OUT");
++		retval = -DWC_E_AGAIN;
++	} else if (value == 0) {
++		dwc_otg_ep_clear_stall(GET_CORE_IF(pcd), &ep->dwc_ep);
++	} else if (value == 1) {
++		if (ep->dwc_ep.is_in == 1 && GET_CORE_IF(pcd)->dma_desc_enable) {
++			dtxfsts_data_t txstatus;
++			fifosize_data_t txfifosize;
++
++			txfifosize.d32 =
++			    DWC_READ_REG32(&GET_CORE_IF(pcd)->core_global_regs->
++					   dtxfsiz[ep->dwc_ep.tx_fifo_num]);
++			txstatus.d32 =
++			    DWC_READ_REG32(&GET_CORE_IF(pcd)->dev_if->
++					   in_ep_regs[ep->dwc_ep.num]->dtxfsts);
++
++			if (txstatus.b.txfspcavail < txfifosize.b.depth) {
++				DWC_WARN("%s() Data In Tx Fifo\n", __func__);
++				retval = -DWC_E_AGAIN;
++			} else {
++				if (ep->dwc_ep.num == 0) {
++					pcd->ep0state = EP0_STALL;
++				}
++
++				ep->stopped = 1;
++				dwc_otg_ep_set_stall(GET_CORE_IF(pcd),
++						     &ep->dwc_ep);
++			}
++		} else {
++			if (ep->dwc_ep.num == 0) {
++				pcd->ep0state = EP0_STALL;
++			}
++
++			ep->stopped = 1;
++			dwc_otg_ep_set_stall(GET_CORE_IF(pcd), &ep->dwc_ep);
++		}
++	} else if (value == 2) {
++		ep->dwc_ep.stall_clear_flag = 0;
++	} else if (value == 3) {
++		ep->dwc_ep.stall_clear_flag = 1;
++	}
++
++	DWC_SPINUNLOCK_IRQRESTORE(pcd->lock, flags);
++
++	return retval;
++}
++
++/**
++ * This function initiates remote wakeup of the host from suspend state.
++ */
++void dwc_otg_pcd_rem_wkup_from_suspend(dwc_otg_pcd_t * pcd, int set)
++{
++	dctl_data_t dctl = { 0 };
++	dwc_otg_core_if_t *core_if = GET_CORE_IF(pcd);
++	dsts_data_t dsts;
++
++	dsts.d32 = DWC_READ_REG32(&core_if->dev_if->dev_global_regs->dsts);
++	if (!dsts.b.suspsts) {
++		DWC_WARN("Remote wakeup while not in suspend state\n");
++	}
++	/* Check if DEVICE_REMOTE_WAKEUP feature enabled */
++	if (pcd->remote_wakeup_enable) {
++		if (set) {
++
++			if (core_if->adp_enable) {
++				gpwrdn_data_t gpwrdn;
++
++				dwc_otg_adp_probe_stop(core_if);
++
++				/* Mask SRP detected interrupt from Power Down Logic */
++				gpwrdn.d32 = 0;
++				gpwrdn.b.srp_det_msk = 1;
++				DWC_MODIFY_REG32(&core_if->
++						 core_global_regs->gpwrdn,
++						 gpwrdn.d32, 0);
++
++				/* Disable Power Down Logic */
++				gpwrdn.d32 = 0;
++				gpwrdn.b.pmuactv = 1;
++				DWC_MODIFY_REG32(&core_if->
++						 core_global_regs->gpwrdn,
++						 gpwrdn.d32, 0);
++
++				/*
++				 * Initialize the Core for Device mode.
++				 */
++				core_if->op_state = B_PERIPHERAL;
++				dwc_otg_core_init(core_if);
++				dwc_otg_enable_global_interrupts(core_if);
++				cil_pcd_start(core_if);
++
++				dwc_otg_initiate_srp(core_if);
++			}
++
++			dctl.b.rmtwkupsig = 1;
++			DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->
++					 dctl, 0, dctl.d32);
++			DWC_DEBUGPL(DBG_PCD, "Set Remote Wakeup\n");
++
++			dwc_mdelay(2);
++			DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->
++					 dctl, dctl.d32, 0);
++			DWC_DEBUGPL(DBG_PCD, "Clear Remote Wakeup\n");
++		}
++	} else {
++		DWC_DEBUGPL(DBG_PCD, "Remote Wakeup is disabled\n");
++	}
++}
++
++#ifdef CONFIG_USB_DWC_OTG_LPM
++/**
++ * This function initiates remote wakeup of the host from L1 sleep state.
++ */
++void dwc_otg_pcd_rem_wkup_from_sleep(dwc_otg_pcd_t * pcd, int set)
++{
++	glpmcfg_data_t lpmcfg;
++	dwc_otg_core_if_t *core_if = GET_CORE_IF(pcd);
++
++	lpmcfg.d32 = DWC_READ_REG32(&core_if->core_global_regs->glpmcfg);
++
++	/* Check if we are in L1 state */
++	if (!lpmcfg.b.prt_sleep_sts) {
++		DWC_DEBUGPL(DBG_PCD, "Device is not in sleep state\n");
++		return;
++	}
++
++	/* Check if host allows remote wakeup */
++	if (!lpmcfg.b.rem_wkup_en) {
++		DWC_DEBUGPL(DBG_PCD, "Host does not allow remote wakeup\n");
++		return;
++	}
++
++	/* Check if Resume OK */
++	if (!lpmcfg.b.sleep_state_resumeok) {
++		DWC_DEBUGPL(DBG_PCD, "Sleep state resume is not OK\n");
++		return;
++	}
++
++	lpmcfg.d32 = DWC_READ_REG32(&core_if->core_global_regs->glpmcfg);
++	lpmcfg.b.en_utmi_sleep = 0;
++	lpmcfg.b.hird_thres &= (~(1 << 4));
++	DWC_WRITE_REG32(&core_if->core_global_regs->glpmcfg, lpmcfg.d32);
++
++	if (set) {
++		dctl_data_t dctl = {.d32 = 0 };
++		dctl.b.rmtwkupsig = 1;
++		/* Set RmtWkUpSig bit to start remote wakup signaling.
++		 * Hardware will automatically clear this bit.
++		 */
++		DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->dctl,
++				 0, dctl.d32);
++		DWC_DEBUGPL(DBG_PCD, "Set Remote Wakeup\n");
++	}
++
++}
++#endif
++
++/**
++ * Performs remote wakeup.
++ */
++void dwc_otg_pcd_remote_wakeup(dwc_otg_pcd_t * pcd, int set)
++{
++	dwc_otg_core_if_t *core_if = GET_CORE_IF(pcd);
++	dwc_irqflags_t flags;
++	if (dwc_otg_is_device_mode(core_if)) {
++		DWC_SPINLOCK_IRQSAVE(pcd->lock, &flags);
++#ifdef CONFIG_USB_DWC_OTG_LPM
++		if (core_if->lx_state == DWC_OTG_L1) {
++			dwc_otg_pcd_rem_wkup_from_sleep(pcd, set);
++		} else {
++#endif
++			dwc_otg_pcd_rem_wkup_from_suspend(pcd, set);
++#ifdef CONFIG_USB_DWC_OTG_LPM
++		}
++#endif
++		DWC_SPINUNLOCK_IRQRESTORE(pcd->lock, flags);
++	}
++	return;
++}
++
++void dwc_otg_pcd_disconnect_us(dwc_otg_pcd_t * pcd, int no_of_usecs)
++{
++	dwc_otg_core_if_t *core_if = GET_CORE_IF(pcd);
++	dctl_data_t dctl = { 0 };
++
++	if (dwc_otg_is_device_mode(core_if)) {
++		dctl.b.sftdiscon = 1;
++		DWC_PRINTF("Soft disconnect for %d useconds\n",no_of_usecs);
++		DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->dctl, 0, dctl.d32);
++		dwc_udelay(no_of_usecs);
++		DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->dctl, dctl.d32,0);
++
++	} else{
++		DWC_PRINTF("NOT SUPPORTED IN HOST MODE\n");
++	}
++	return;
++
++}
++
++int dwc_otg_pcd_wakeup(dwc_otg_pcd_t * pcd)
++{
++	dsts_data_t dsts;
++	gotgctl_data_t gotgctl;
++
++	/*
++	 * This function starts the SRP protocol if no session is in progress. If
++	 * a session is already in progress, but the device is suspended,
++	 * remote wakeup signaling is started.
++	 */
++
++	/* Check if valid session */
++	gotgctl.d32 =
++	    DWC_READ_REG32(&(GET_CORE_IF(pcd)->core_global_regs->gotgctl));
++	if (gotgctl.b.bsesvld) {
++		/* Check if suspend state */
++		dsts.d32 =
++		    DWC_READ_REG32(&
++				   (GET_CORE_IF(pcd)->dev_if->
++				    dev_global_regs->dsts));
++		if (dsts.b.suspsts) {
++			dwc_otg_pcd_remote_wakeup(pcd, 1);
++		}
++	} else {
++		dwc_otg_pcd_initiate_srp(pcd);
++	}
++
++	return 0;
++
++}
++
++/**
++ * Start the SRP timer to detect when the SRP does not complete within
++ * 6 seconds.
++ *
++ * @param pcd the pcd structure.
++ */
++void dwc_otg_pcd_initiate_srp(dwc_otg_pcd_t * pcd)
++{
++	dwc_irqflags_t flags;
++	DWC_SPINLOCK_IRQSAVE(pcd->lock, &flags);
++	dwc_otg_initiate_srp(GET_CORE_IF(pcd));
++	DWC_SPINUNLOCK_IRQRESTORE(pcd->lock, flags);
++}
++
++int dwc_otg_pcd_get_frame_number(dwc_otg_pcd_t * pcd)
++{
++	return dwc_otg_get_frame_number(GET_CORE_IF(pcd));
++}
++
++int dwc_otg_pcd_is_lpm_enabled(dwc_otg_pcd_t * pcd)
++{
++	return GET_CORE_IF(pcd)->core_params->lpm_enable;
++}
++
++uint32_t get_b_hnp_enable(dwc_otg_pcd_t * pcd)
++{
++	return pcd->b_hnp_enable;
++}
++
++uint32_t get_a_hnp_support(dwc_otg_pcd_t * pcd)
++{
++	return pcd->a_hnp_support;
++}
++
++uint32_t get_a_alt_hnp_support(dwc_otg_pcd_t * pcd)
++{
++	return pcd->a_alt_hnp_support;
++}
++
++int dwc_otg_pcd_get_rmwkup_enable(dwc_otg_pcd_t * pcd)
++{
++	return pcd->remote_wakeup_enable;
++}
++
++#endif /* DWC_HOST_ONLY */
+--- /dev/null
++++ b/drivers/usb/host/dwc_otg/dwc_otg_pcd.h
+@@ -0,0 +1,266 @@
++/* ==========================================================================
++ * $File: //dwh/usb_iip/dev/software/otg/linux/drivers/dwc_otg_pcd.h $
++ * $Revision: #48 $
++ * $Date: 2012/08/10 $
++ * $Change: 2047372 $
++ *
++ * Synopsys HS OTG Linux Software Driver and documentation (hereinafter,
++ * "Software") is an Unsupported proprietary work of Synopsys, Inc. unless
++ * otherwise expressly agreed to in writing between Synopsys and you.
++ *
++ * The Software IS NOT an item of Licensed Software or Licensed Product under
++ * any End User Software License Agreement or Agreement for Licensed Product
++ * with Synopsys or any supplement thereto. You are permitted to use and
++ * redistribute this Software in source and binary forms, with or without
++ * modification, provided that redistributions of source code must retain this
++ * notice. You may not view, use, disclose, copy or distribute this file or
++ * any information contained herein except pursuant to this license grant from
++ * Synopsys. If you do not agree with this notice, including the disclaimer
++ * below, then you are not authorized to use the Software.
++ *
++ * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS" BASIS
++ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
++ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
++ * ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS BE LIABLE FOR ANY DIRECT,
++ * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
++ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
++ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
++ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
++ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
++ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
++ * DAMAGE.
++ * ========================================================================== */
++#ifndef DWC_HOST_ONLY
++#if !defined(__DWC_PCD_H__)
++#define __DWC_PCD_H__
++
++#include "dwc_otg_os_dep.h"
++#include "usb.h"
++#include "dwc_otg_cil.h"
++#include "dwc_otg_pcd_if.h"
++struct cfiobject;
++
++/**
++ * @file
++ *
++ * This file contains the structures, constants, and interfaces for
++ * the Peripheral Controller Driver (PCD).
++ *
++ * The Peripheral Controller Driver (PCD) for Linux will implement the
++ * Gadget API, so that the existing Gadget drivers can be used. For
++ * the Mass Storage Function driver the File-backed USB Storage Gadget
++ * (FBS) driver will be used.  The FBS driver supports the
++ * Control-Bulk (CB), Control-Bulk-Interrupt (CBI), and Bulk-Only
++ * transports.
++ *
++ */
++
++/** Invalid DMA Address */
++#define DWC_DMA_ADDR_INVALID	(~(dwc_dma_t)0)
++
++/** Max Transfer size for any EP */
++#define DDMA_MAX_TRANSFER_SIZE 65535
++
++/**
++ * Get the pointer to the core_if from the pcd pointer.
++ */
++#define GET_CORE_IF( _pcd ) (_pcd->core_if)
++
++/**
++ * States of EP0.
++ */
++typedef enum ep0_state {
++	EP0_DISCONNECT,		/* no host */
++	EP0_IDLE,
++	EP0_IN_DATA_PHASE,
++	EP0_OUT_DATA_PHASE,
++	EP0_IN_STATUS_PHASE,
++	EP0_OUT_STATUS_PHASE,
++	EP0_STALL,
++} ep0state_e;
++
++/** Forward declaration. */
++struct dwc_otg_pcd;
++
++/** DWC_otg iso request structure.
++ *
++ */
++typedef struct usb_iso_request dwc_otg_pcd_iso_request_t;
++
++#ifdef DWC_UTE_PER_IO
++
++/**
++ * This shall be the exact analogue of the structure of the same type defined
++ * in usb_gadget.h. Each descriptor describes one isochronous packet: its
++ * offset, expected length, actual length and status.
++ */
++struct dwc_iso_pkt_desc_port {
++	uint32_t offset;
++	uint32_t length;	/* expected length */
++	uint32_t actual_length;
++	uint32_t status;
++};
++
++struct dwc_iso_xreq_port {
++	/** transfer/submission flag */
++	uint32_t tr_sub_flags;
++	/** Start the request ASAP */
++#define DWC_EREQ_TF_ASAP		0x00000002
++	/** Just enqueue the request w/o initiating a transfer */
++#define DWC_EREQ_TF_ENQUEUE		0x00000004
++
++	/**
++	* count of ISO packets attached to this request - shall
++	* not exceed the pio_alloc_pkt_count
++	*/
++	uint32_t pio_pkt_count;
++	/** count of ISO packets allocated for this request */
++	uint32_t pio_alloc_pkt_count;
++	/** number of ISO packet errors */
++	uint32_t error_count;
++	/** reserved for future extension */
++	uint32_t res;
++	/** Will be allocated and freed in the UTE gadget and based on the CFC value */
++	struct dwc_iso_pkt_desc_port *per_io_frame_descs;
++};
++#endif
++/** DWC_otg request structure.
++ * This structure is a list of requests.
++ */
++typedef struct dwc_otg_pcd_request {
++	void *priv;
++	void *buf;
++	dwc_dma_t dma;
++	uint32_t length;
++	uint32_t actual;
++	unsigned sent_zlp:1;
++    /**
++     * Used instead of original buffer if
++     * it(physical address) is not dword-aligned.
++     **/
++     uint8_t *dw_align_buf;
++     dwc_dma_t dw_align_buf_dma;
++
++	 DWC_CIRCLEQ_ENTRY(dwc_otg_pcd_request) queue_entry;
++#ifdef DWC_UTE_PER_IO
++	struct dwc_iso_xreq_port ext_req;
++	//void *priv_ereq_nport; /*  */
++#endif
++} dwc_otg_pcd_request_t;
++
++DWC_CIRCLEQ_HEAD(req_list, dwc_otg_pcd_request);
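++
++/*
++ * Illustrative sketch only (not part of the driver): per-EP requests live on a
++ * circular queue, so a hypothetical traversal using the DWC list helpers could
++ * look like:
++ *
++ *	dwc_otg_pcd_request_t *req;
++ *	DWC_CIRCLEQ_FOREACH(req, &ep->queue, queue_entry) {
++ *		// inspect req->buf, req->length, req->actual
++ *	}
++ */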
++
++/**	  PCD EP structure.
++ * This structure describes an EP, there is an array of EPs in the PCD
++ * structure.
++ */
++typedef struct dwc_otg_pcd_ep {
++	/** USB EP Descriptor */
++	const usb_endpoint_descriptor_t *desc;
++
++	/** queue of dwc_otg_pcd_requests. */
++	struct req_list queue;
++	unsigned stopped:1;
++	unsigned disabling:1;
++	unsigned dma:1;
++	unsigned queue_sof:1;
++
++#ifdef DWC_EN_ISOC
++	/** ISOC req handle passed */
++	void *iso_req_handle;
++#endif				//_EN_ISOC_
++
++	/** DWC_otg ep data. */
++	dwc_ep_t dwc_ep;
++
++	/** Pointer to PCD */
++	struct dwc_otg_pcd *pcd;
++
++	void *priv;
++} dwc_otg_pcd_ep_t;
++
++/** DWC_otg PCD Structure.
++ * This structure encapsulates the data for the dwc_otg PCD.
++ */
++struct dwc_otg_pcd {
++	const struct dwc_otg_pcd_function_ops *fops;
++	/** The DWC otg device pointer */
++	struct dwc_otg_device *otg_dev;
++	/** Core Interface */
++	dwc_otg_core_if_t *core_if;
++	/** State of EP0 */
++	ep0state_e ep0state;
++	/** EP0 Request is pending */
++	unsigned ep0_pending:1;
++	/** Indicates when SET CONFIGURATION Request is in process */
++	unsigned request_config:1;
++	/** The state of the Remote Wakeup Enable. */
++	unsigned remote_wakeup_enable:1;
++	/** The state of the B-Device HNP Enable. */
++	unsigned b_hnp_enable:1;
++	/** The state of A-Device HNP Support. */
++	unsigned a_hnp_support:1;
++	/** The state of the A-Device Alt HNP support. */
++	unsigned a_alt_hnp_support:1;
++	/** Count of pending Requests */
++	unsigned request_pending;
++
++	/** SETUP packet for EP0
++	 * This structure is allocated as a DMA buffer on PCD initialization
++	 * with enough space for up to 3 setup packets.
++	 */
++	union {
++		usb_device_request_t req;
++		uint32_t d32[2];
++	} *setup_pkt;
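++	/*
++	 * Note (illustrative, based on ep0_out_start() in dwc_otg_pcd_intr.c):
++	 * each SETUP packet is 8 bytes and DOEPTSIZ0 is programmed for up to
++	 * 3 back-to-back packets, i.e. 24 bytes of this buffer.
++	 */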
++
++	dwc_dma_t setup_pkt_dma_handle;
++
++	/* Additional buffer and flag for CTRL_WR premature case */
++	uint8_t *backup_buf;
++	unsigned data_terminated;
++
++	/** 2-byte dma buffer used to return status from GET_STATUS */
++	uint16_t *status_buf;
++	dwc_dma_t status_buf_dma_handle;
++
++	/** EP0 */
++	dwc_otg_pcd_ep_t ep0;
++
++	/** Array of IN EPs. */
++	dwc_otg_pcd_ep_t in_ep[MAX_EPS_CHANNELS - 1];
++	/** Array of OUT EPs. */
++	dwc_otg_pcd_ep_t out_ep[MAX_EPS_CHANNELS - 1];
++	/** number of valid EPs in the above array. */
++//        unsigned      num_eps : 4;
++	dwc_spinlock_t *lock;
++
++	/** Tasklet to defer starting of TEST mode transmissions until
++	 *	Status Phase has been completed.
++	 */
++	dwc_tasklet_t *test_mode_tasklet;
++
++	/** Tasklet to delay starting of xfer in DMA mode */
++	dwc_tasklet_t *start_xfer_tasklet;
++
++	/** The test mode to enter when the tasklet is executed. */
++	unsigned test_mode;
++	/** The cfi_api structure that implements most of the CFI API
++	 * and OTG specific core configuration functionality
++	 */
++#ifdef DWC_UTE_CFI
++	struct cfiobject *cfi;
++#endif
++
++};
++
++//FIXME these functions should be static, and these prototypes should be removed
++extern void dwc_otg_request_nuke(dwc_otg_pcd_ep_t * ep);
++extern void dwc_otg_request_done(dwc_otg_pcd_ep_t * ep,
++				 dwc_otg_pcd_request_t * req, int32_t status);
++
++void dwc_otg_iso_buffer_done(dwc_otg_pcd_t * pcd, dwc_otg_pcd_ep_t * ep,
++			     void *req_handle);
++
++extern void do_test_mode(void *data);
++#endif
++#endif /* DWC_HOST_ONLY */
+--- /dev/null
++++ b/drivers/usb/host/dwc_otg/dwc_otg_pcd_if.h
+@@ -0,0 +1,360 @@
++/* ==========================================================================
++ * $File: //dwh/usb_iip/dev/software/otg/linux/drivers/dwc_otg_pcd_if.h $
++ * $Revision: #11 $
++ * $Date: 2011/10/26 $
++ * $Change: 1873028 $
++ *
++ * Synopsys HS OTG Linux Software Driver and documentation (hereinafter,
++ * "Software") is an Unsupported proprietary work of Synopsys, Inc. unless
++ * otherwise expressly agreed to in writing between Synopsys and you.
++ *
++ * The Software IS NOT an item of Licensed Software or Licensed Product under
++ * any End User Software License Agreement or Agreement for Licensed Product
++ * with Synopsys or any supplement thereto. You are permitted to use and
++ * redistribute this Software in source and binary forms, with or without
++ * modification, provided that redistributions of source code must retain this
++ * notice. You may not view, use, disclose, copy or distribute this file or
++ * any information contained herein except pursuant to this license grant from
++ * Synopsys. If you do not agree with this notice, including the disclaimer
++ * below, then you are not authorized to use the Software.
++ *
++ * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS" BASIS
++ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
++ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
++ * ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS BE LIABLE FOR ANY DIRECT,
++ * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
++ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
++ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
++ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
++ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
++ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
++ * DAMAGE.
++ * ========================================================================== */
++#ifndef DWC_HOST_ONLY
++
++#if !defined(__DWC_PCD_IF_H__)
++#define __DWC_PCD_IF_H__
++
++//#include "dwc_os.h"
++#include "dwc_otg_core_if.h"
++
++/** @file
++ * This file defines DWC_OTG PCD Core API.
++ */
++
++struct dwc_otg_pcd;
++typedef struct dwc_otg_pcd dwc_otg_pcd_t;
++
++/** Maxpacket size for EP0 */
++#define MAX_EP0_SIZE	64
++/** Maxpacket size for any EP */
++#define MAX_PACKET_SIZE 1024
++
++/** @name Function Driver Callbacks */
++/** @{ */
++
++/** This function will be called whenever a previously queued request has
++ * completed.  The status value will be set to -DWC_E_SHUTDOWN to indicated a
++ * failed or aborted transfer, or -DWC_E_RESTART to indicate the device was reset,
++ * or -DWC_E_TIMEOUT to indicate it timed out, or -DWC_E_INVALID to indicate invalid
++ * parameters. */
++typedef int (*dwc_completion_cb_t) (dwc_otg_pcd_t * pcd, void *ep_handle,
++				    void *req_handle, int32_t status,
++				    uint32_t actual);
++/**
++ * This function will be called whenever a previously queued ISOC request has
++ * completed. The number of ISOC packets can be read with the
++ * dwc_otg_pcd_get_iso_packet_count function, and the status of each ISOC
++ * packet with the dwc_otg_pcd_get_iso_packet_* functions.
++ */
++typedef int (*dwc_isoc_completion_cb_t) (dwc_otg_pcd_t * pcd, void *ep_handle,
++					 void *req_handle, int proc_buf_num);
++/** This function should handle any SETUP request that cannot be handled by the
++ * PCD Core.  This includes most GET_DESCRIPTORs, SET_CONFIGs, any
++ * class-specific requests, etc.  The function must be non-blocking.
++ *
++ * Returns 0 on success.
++ * Returns -DWC_E_NOT_SUPPORTED if the request is not supported.
++ * Returns -DWC_E_INVALID if the setup request had invalid parameters or bytes.
++ * Returns -DWC_E_SHUTDOWN on any other error. */
++typedef int (*dwc_setup_cb_t) (dwc_otg_pcd_t * pcd, uint8_t * bytes);
++/** This is called whenever the device has been disconnected.  The function
++ * driver should take appropriate action to clean up all pending requests in the
++ * PCD Core, remove all endpoints (except ep0), and initialize back to reset
++ * state. */
++typedef int (*dwc_disconnect_cb_t) (dwc_otg_pcd_t * pcd);
++/** This function is called when device has been connected. */
++typedef int (*dwc_connect_cb_t) (dwc_otg_pcd_t * pcd, int speed);
++/** This function is called when device has been suspended */
++typedef int (*dwc_suspend_cb_t) (dwc_otg_pcd_t * pcd);
++/** This function is called when the device has received an LPM token, i.e.
++ * the device has been put into the sleep state. */
++typedef int (*dwc_sleep_cb_t) (dwc_otg_pcd_t * pcd);
++/** This function is called when device has been resumed
++ * from suspend(L2) or L1 sleep state. */
++typedef int (*dwc_resume_cb_t) (dwc_otg_pcd_t * pcd);
++/** This function is called whenever the HNP parameters have changed.
++ * The user can call the get_b_hnp_enable, get_a_hnp_support and
++ * get_a_alt_hnp_support functions to read them. */
++typedef int (*dwc_hnp_params_changed_cb_t) (dwc_otg_pcd_t * pcd);
++/** This function is called whenever USB RESET is detected. */
++typedef int (*dwc_reset_cb_t) (dwc_otg_pcd_t * pcd);
++
++typedef int (*cfi_setup_cb_t) (dwc_otg_pcd_t * pcd, void *ctrl_req_bytes);
++
++/**
++ *
++ * @param ep_handle	Void pointer to the usb_ep structure
++ * @param ereq_port Pointer to the extended request structure created in the
++ *					portable part.
++ */
++typedef int (*xiso_completion_cb_t) (dwc_otg_pcd_t * pcd, void *ep_handle,
++				     void *req_handle, int32_t status,
++				     void *ereq_port);
++/** Function Driver Ops Data Structure */
++struct dwc_otg_pcd_function_ops {
++	dwc_connect_cb_t connect;
++	dwc_disconnect_cb_t disconnect;
++	dwc_setup_cb_t setup;
++	dwc_completion_cb_t complete;
++	dwc_isoc_completion_cb_t isoc_complete;
++	dwc_suspend_cb_t suspend;
++	dwc_sleep_cb_t sleep;
++	dwc_resume_cb_t resume;
++	dwc_reset_cb_t reset;
++	dwc_hnp_params_changed_cb_t hnp_changed;
++	cfi_setup_cb_t cfi_setup;
++#ifdef DWC_UTE_PER_IO
++	xiso_completion_cb_t xisoc_complete;
++#endif
++};
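++
++/*
++ * Illustrative sketch only (hypothetical gadget glue code, not part of this
++ * file): a function driver fills in the callbacks it implements and binds to
++ * the PCD core, for example:
++ *
++ *	static const struct dwc_otg_pcd_function_ops my_fops = {
++ *		.connect    = my_connect,
++ *		.disconnect = my_disconnect,
++ *		.setup      = my_setup,
++ *		.complete   = my_complete,
++ *	};
++ *
++ *	pcd = dwc_otg_pcd_init(core_if);
++ *	dwc_otg_pcd_start(pcd, &my_fops);
++ */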
++/** @} */
++
++/** @name Function Driver Functions */
++/** @{ */
++
++/** Call this function to get a pointer to a dwc_otg_pcd_t;
++ * this pointer is used by all PCD API functions.
++ *
++ * @param core_if The DWC_OTG Core
++ */
++extern dwc_otg_pcd_t *dwc_otg_pcd_init(dwc_otg_core_if_t * core_if);
++
++/** Frees PCD allocated by dwc_otg_pcd_init
++ *
++ * @param pcd The PCD
++ */
++extern void dwc_otg_pcd_remove(dwc_otg_pcd_t * pcd);
++
++/** Call this to bind the function driver to the PCD Core.
++ *
++ * @param pcd Pointer to the dwc_otg_pcd_t returned by dwc_otg_pcd_init.
++ * @param fops The Function Driver Ops data structure containing pointers to all callbacks.
++ */
++extern void dwc_otg_pcd_start(dwc_otg_pcd_t * pcd,
++			      const struct dwc_otg_pcd_function_ops *fops);
++
++/** Enables an endpoint for use.  This function enables an endpoint in
++ * the PCD.  The endpoint is described by the ep_desc which has the
++ * same format as a USB ep descriptor.  The ep_handle parameter is used to refer
++ * to the endpoint from other API functions and in callbacks.  Normally this
++ * should be called after a SET_CONFIGURATION/SET_INTERFACE to configure the
++ * core for that interface.
++ *
++ * Returns -DWC_E_INVALID if invalid parameters were passed.
++ * Returns -DWC_E_SHUTDOWN if any other error occurred.
++ * Returns 0 on success.
++ *
++ * @param pcd The PCD
++ * @param ep_desc Endpoint descriptor
++ * @param usb_ep Handle for the endpoint, used to identify it in later calls.
++ */
++extern int dwc_otg_pcd_ep_enable(dwc_otg_pcd_t * pcd,
++				 const uint8_t * ep_desc, void *usb_ep);
++
++/** Disable the endpoint referenced by ep_handle.
++ *
++ * Returns -DWC_E_INVALID if invalid parameters were passed.
++ * Returns -DWC_E_SHUTDOWN if any other error occurred.
++ * Returns 0 on success. */
++extern int dwc_otg_pcd_ep_disable(dwc_otg_pcd_t * pcd, void *ep_handle);
++
++/** Queue a data transfer request on the endpoint referenced by ep_handle.
++ * After the transfer completes, the complete callback will be called with
++ * the request status.
++ *
++ * @param pcd The PCD
++ * @param ep_handle The handle of the endpoint
++ * @param buf The buffer for the data
++ * @param dma_buf The DMA buffer for the data
++ * @param buflen The length of the data transfer
++ * @param zero Specifies whether to send zero length last packet.
++ * @param req_handle Set this handle to any value to use to reference this
++ * request in the ep_dequeue function or from the complete callback
++ * @param atomic_alloc Set if the driver needs to perform atomic allocations
++ * for its internal data structures.
++ *
++ * Returns -DWC_E_INVALID if invalid parameters were passed.
++ * Returns -DWC_E_SHUTDOWN if any other error occurred.
++ * Returns 0 on success. */
++extern int dwc_otg_pcd_ep_queue(dwc_otg_pcd_t * pcd, void *ep_handle,
++				uint8_t * buf, dwc_dma_t dma_buf,
++				uint32_t buflen, int zero, void *req_handle,
++				int atomic_alloc);
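++
++/*
++ * Illustrative sketch only (hypothetical caller, not part of this file):
++ * queueing a transfer whose buffer was already mapped for DMA might look like
++ *
++ *	ret = dwc_otg_pcd_ep_queue(pcd, ep_handle, buf, dma_buf,
++ *				   buflen, 0, req_handle, 1);
++ *	if (ret)
++ *		... handle -DWC_E_INVALID / -DWC_E_SHUTDOWN ...
++ */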
++#ifdef DWC_UTE_PER_IO
++/**
++ *
++ * @param ereq_nonport	Pointer to the extended request part of the
++ *						usb_request structure defined in usb_gadget.h file.
++ */
++extern int dwc_otg_pcd_xiso_ep_queue(dwc_otg_pcd_t * pcd, void *ep_handle,
++				     uint8_t * buf, dwc_dma_t dma_buf,
++				     uint32_t buflen, int zero,
++				     void *req_handle, int atomic_alloc,
++				     void *ereq_nonport);
++
++#endif
++
++/** De-queue the specified data transfer that has not yet completed.
++ *
++ * Returns -DWC_E_INVALID if invalid parameters were passed.
++ * Returns -DWC_E_SHUTDOWN if any other error occurred.
++ * Returns 0 on success. */
++extern int dwc_otg_pcd_ep_dequeue(dwc_otg_pcd_t * pcd, void *ep_handle,
++				  void *req_handle);
++
++/** Halt (STALL) an endpoint or clear it.
++ *
++ * Returns -DWC_E_INVALID if invalid parameters were passed.
++ * Returns -DWC_E_SHUTDOWN if any other error occurred.
++ * Returns -DWC_E_AGAIN if the STALL cannot be sent and must be tried again later.
++ * Returns 0 on success. */
++extern int dwc_otg_pcd_ep_halt(dwc_otg_pcd_t * pcd, void *ep_handle, int value);
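++
++/*
++ * Note (illustrative, based on the implementation in dwc_otg_pcd.c): value 0
++ * clears the STALL, value 1 sets it, and values 2 and 3 clear and set the
++ * endpoint's stall_clear_flag respectively.
++ */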
++
++/** Wedge (halt and ignore ClearFeature(HALT)) the endpoint referenced by ep_handle. */
++extern int dwc_otg_pcd_ep_wedge(dwc_otg_pcd_t * pcd, void *ep_handle);
++
++/** This function should be called on every hardware interrupt */
++extern int32_t dwc_otg_pcd_handle_intr(dwc_otg_pcd_t * pcd);
++
++/** This function returns current frame number */
++extern int dwc_otg_pcd_get_frame_number(dwc_otg_pcd_t * pcd);
++
++/**
++ * Start isochronous transfers on the endpoint referenced by ep_handle.
++ * For isochronous transfers double buffering is used.
++ * After each buffer has been processed, the complete callback is called
++ * with the status of each transaction.
++ *
++ * @param pcd The PCD
++ * @param ep_handle The handle of the endpoint
++ * @param buf0 The virtual address of first data buffer
++ * @param buf1 The virtual address of second data buffer
++ * @param dma0 The DMA address of first data buffer
++ * @param dma1 The DMA address of second data buffer
++ * @param sync_frame Data pattern frame number
++ * @param dp_frame Data size for pattern frame
++ * @param data_per_frame Data size for regular frame
++ * @param start_frame Frame number to start transfers, if -1 then start transfers ASAP.
++ * @param buf_proc_intrvl Interval of ISOC Buffer processing
++ * @param req_handle Handle of ISOC request
++ * @param atomic_alloc Specifies whether to perform atomic allocation for
++ * 			internal data structures.
++ *
++ * Returns -DWC_E_NO_MEMORY if there is not enough memory.
++ * Returns -DWC_E_INVALID if incorrect arguments are passed to the function.
++ * Returns -DWC_E_SHUTDOWN for any other error.
++ * Returns 0 on success.
++ */
++extern int dwc_otg_pcd_iso_ep_start(dwc_otg_pcd_t * pcd, void *ep_handle,
++				    uint8_t * buf0, uint8_t * buf1,
++				    dwc_dma_t dma0, dwc_dma_t dma1,
++				    int sync_frame, int dp_frame,
++				    int data_per_frame, int start_frame,
++				    int buf_proc_intrvl, void *req_handle,
++				    int atomic_alloc);
++
++/** Stop ISOC transfers on endpoint referenced by ep_handle.
++ *
++ * @param pcd The PCD
++ * @param ep_handle The handle of the endpoint
++ * @param req_handle Handle of ISOC request
++ *
++ * Returns -DWC_E_INVALID if incorrect arguments are passed to the function
++ * Returns 0 on success
++ */
++int dwc_otg_pcd_iso_ep_stop(dwc_otg_pcd_t * pcd, void *ep_handle,
++			    void *req_handle);
++
++/** Get ISOC packet status.
++ *
++ * @param pcd The PCD
++ * @param ep_handle The handle of the endpoint
++ * @param iso_req_handle Isochronous request handle
++ * @param packet Number of packet
++ * @param status Out parameter for returning status
++ * @param actual Out parameter for returning actual length
++ * @param offset Out parameter for returning offset
++ *
++ */
++extern void dwc_otg_pcd_get_iso_packet_params(dwc_otg_pcd_t * pcd,
++					      void *ep_handle,
++					      void *iso_req_handle, int packet,
++					      int *status, int *actual,
++					      int *offset);
++
++/** Get ISOC packet count.
++ *
++ * @param pcd The PCD
++ * @param ep_handle The handle of the endpoint
++ * @param iso_req_handle Isochronous request handle
++ */
++extern int dwc_otg_pcd_get_iso_packet_count(dwc_otg_pcd_t * pcd,
++					    void *ep_handle,
++					    void *iso_req_handle);
++
++/** This function starts the SRP Protocol if no session is in progress. If
++ * a session is already in progress, but the device is suspended,
++ * remote wakeup signaling is started.
++ */
++extern int dwc_otg_pcd_wakeup(dwc_otg_pcd_t * pcd);
++
++/** This function returns 1 if LPM support is enabled, and 0 otherwise. */
++extern int dwc_otg_pcd_is_lpm_enabled(dwc_otg_pcd_t * pcd);
++
++/** This function returns 1 if remote wakeup is allowed, and 0 otherwise. */
++extern int dwc_otg_pcd_get_rmwkup_enable(dwc_otg_pcd_t * pcd);
++
++/** Initiate SRP */
++extern void dwc_otg_pcd_initiate_srp(dwc_otg_pcd_t * pcd);
++
++/** Starts remote wakeup signaling. */
++extern void dwc_otg_pcd_remote_wakeup(dwc_otg_pcd_t * pcd, int set);
++
++/** Starts a soft disconnect lasting the given number of microseconds. */
++extern void dwc_otg_pcd_disconnect_us(dwc_otg_pcd_t * pcd, int no_of_usecs);
++/** This function returns whether the device is dual-speed. */
++extern uint32_t dwc_otg_pcd_is_dualspeed(dwc_otg_pcd_t * pcd);
++
++/** This function returns whether device is otg. */
++extern uint32_t dwc_otg_pcd_is_otg(dwc_otg_pcd_t * pcd);
++
++/** These functions return the HNP parameters. */
++extern uint32_t get_b_hnp_enable(dwc_otg_pcd_t * pcd);
++extern uint32_t get_a_hnp_support(dwc_otg_pcd_t * pcd);
++extern uint32_t get_a_alt_hnp_support(dwc_otg_pcd_t * pcd);
++
++/** CFI specific Interface functions */
++/** Allocate a cfi buffer */
++extern uint8_t *cfiw_ep_alloc_buffer(dwc_otg_pcd_t * pcd, void *pep,
++				     dwc_dma_t * addr, size_t buflen,
++				     int flags);
++
++/******************************************************************************/
++
++/** @} */
++
++#endif				/* __DWC_PCD_IF_H__ */
++
++#endif				/* DWC_HOST_ONLY */
+--- /dev/null
++++ b/drivers/usb/host/dwc_otg/dwc_otg_pcd_intr.c
+@@ -0,0 +1,5147 @@
++/* ==========================================================================
++ * $File: //dwh/usb_iip/dev/software/otg/linux/drivers/dwc_otg_pcd_intr.c $
++ * $Revision: #116 $
++ * $Date: 2012/08/10 $
++ * $Change: 2047372 $
++ *
++ * Synopsys HS OTG Linux Software Driver and documentation (hereinafter,
++ * "Software") is an Unsupported proprietary work of Synopsys, Inc. unless
++ * otherwise expressly agreed to in writing between Synopsys and you.
++ *
++ * The Software IS NOT an item of Licensed Software or Licensed Product under
++ * any End User Software License Agreement or Agreement for Licensed Product
++ * with Synopsys or any supplement thereto. You are permitted to use and
++ * redistribute this Software in source and binary forms, with or without
++ * modification, provided that redistributions of source code must retain this
++ * notice. You may not view, use, disclose, copy or distribute this file or
++ * any information contained herein except pursuant to this license grant from
++ * Synopsys. If you do not agree with this notice, including the disclaimer
++ * below, then you are not authorized to use the Software.
++ *
++ * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS" BASIS
++ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
++ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
++ * ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS BE LIABLE FOR ANY DIRECT,
++ * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
++ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
++ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
++ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
++ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
++ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
++ * DAMAGE.
++ * ========================================================================== */
++#ifndef DWC_HOST_ONLY
++
++#include "dwc_otg_pcd.h"
++
++#ifdef DWC_UTE_CFI
++#include "dwc_otg_cfi.h"
++#endif
++
++#ifdef DWC_UTE_PER_IO
++extern void complete_xiso_ep(dwc_otg_pcd_ep_t * ep);
++#endif
++//#define PRINT_CFI_DMA_DESCS
++
++#define DEBUG_EP0
++
++/**
++ * This function updates OTG.
++ */
++static void dwc_otg_pcd_update_otg(dwc_otg_pcd_t * pcd, const unsigned reset)
++{
++
++	if (reset) {
++		pcd->b_hnp_enable = 0;
++		pcd->a_hnp_support = 0;
++		pcd->a_alt_hnp_support = 0;
++	}
++
++	if (pcd->fops->hnp_changed) {
++		pcd->fops->hnp_changed(pcd);
++	}
++}
++
++/** @file
++ * This file contains the implementation of the PCD Interrupt handlers.
++ *
++ * The PCD handles the device interrupts.  Many conditions can cause a
++ * device interrupt. When an interrupt occurs, the device interrupt
++ * service routine determines the cause of the interrupt and
++ * dispatches handling to the appropriate function. These interrupt
++ * handling functions are described below.
++ * All interrupt registers are processed from LSB to MSB.
++ */
++
++/**
++ * This function prints the ep0 state for debug purposes.
++ */
++static inline void print_ep0_state(dwc_otg_pcd_t * pcd)
++{
++#ifdef DEBUG
++	char str[40];
++
++	switch (pcd->ep0state) {
++	case EP0_DISCONNECT:
++		dwc_strcpy(str, "EP0_DISCONNECT");
++		break;
++	case EP0_IDLE:
++		dwc_strcpy(str, "EP0_IDLE");
++		break;
++	case EP0_IN_DATA_PHASE:
++		dwc_strcpy(str, "EP0_IN_DATA_PHASE");
++		break;
++	case EP0_OUT_DATA_PHASE:
++		dwc_strcpy(str, "EP0_OUT_DATA_PHASE");
++		break;
++	case EP0_IN_STATUS_PHASE:
++		dwc_strcpy(str, "EP0_IN_STATUS_PHASE");
++		break;
++	case EP0_OUT_STATUS_PHASE:
++		dwc_strcpy(str, "EP0_OUT_STATUS_PHASE");
++		break;
++	case EP0_STALL:
++		dwc_strcpy(str, "EP0_STALL");
++		break;
++	default:
++		dwc_strcpy(str, "EP0_INVALID");
++	}
++
++	DWC_DEBUGPL(DBG_ANY, "%s(%d)\n", str, pcd->ep0state);
++#endif
++}
++
++/**
++ * This function calculates the size of the payload in memory
++ * for OUT endpoints and prints it for debug purposes (used with the
++ * 2.93a DevOutNak feature).
++ */
++static inline void print_memory_payload(dwc_otg_pcd_t * pcd,  dwc_ep_t * ep)
++{
++#ifdef DEBUG
++	deptsiz_data_t deptsiz_init = {.d32 = 0 };
++	deptsiz_data_t deptsiz_updt = {.d32 = 0 };
++	int pack_num;
++	unsigned payload;
++
++	deptsiz_init.d32 = pcd->core_if->start_doeptsiz_val[ep->num];
++	deptsiz_updt.d32 =
++		DWC_READ_REG32(&pcd->core_if->dev_if->
++						out_ep_regs[ep->num]->doeptsiz);
++	/* Payload is the programmed transfer size minus the remaining transfer size */
++	payload = deptsiz_init.b.xfersize - deptsiz_updt.b.xfersize;
++	/* The packet count is decremented every time a packet is written to
++	 * the RxFIFO, not to the external memory, so if payload == 0 it means
++	 * no packet was sent to external memory */
++	pack_num = (!payload) ? 0 : (deptsiz_init.b.pktcnt - deptsiz_updt.b.pktcnt);
++	DWC_DEBUGPL(DBG_PCDV,
++		"Payload for EP%d-%s\n",
++		ep->num, (ep->is_in ? "IN" : "OUT"));
++	DWC_DEBUGPL(DBG_PCDV,
++		"Number of transferred bytes = 0x%08x\n", payload);
++	DWC_DEBUGPL(DBG_PCDV,
++		"Number of transferred packets = %d\n", pack_num);
++#endif
++}
++
++
++#ifdef DWC_UTE_CFI
++static inline void print_desc(struct dwc_otg_dma_desc *ddesc,
++			      const uint8_t * epname, int descnum)
++{
++	CFI_INFO
++	    ("%s DMA_DESC(%d) buf=0x%08x bytes=0x%04x; sp=0x%x; l=0x%x; sts=0x%02x; bs=0x%02x\n",
++	     epname, descnum, ddesc->buf, ddesc->status.b.bytes,
++	     ddesc->status.b.sp, ddesc->status.b.l, ddesc->status.b.sts,
++	     ddesc->status.b.bs);
++}
++#endif
++
++/**
++ * This function returns a pointer to the IN EP struct with number ep_num
++ */
++static inline dwc_otg_pcd_ep_t *get_in_ep(dwc_otg_pcd_t * pcd, uint32_t ep_num)
++{
++	int i;
++	int num_in_eps = GET_CORE_IF(pcd)->dev_if->num_in_eps;
++	if (ep_num == 0) {
++		return &pcd->ep0;
++	} else {
++		for (i = 0; i < num_in_eps; ++i) {
++			if (pcd->in_ep[i].dwc_ep.num == ep_num)
++				return &pcd->in_ep[i];
++		}
++		return 0;
++	}
++}
++
++/**
++ * This function returns a pointer to the OUT EP struct with number ep_num
++ */
++static inline dwc_otg_pcd_ep_t *get_out_ep(dwc_otg_pcd_t * pcd, uint32_t ep_num)
++{
++	int i;
++	int num_out_eps = GET_CORE_IF(pcd)->dev_if->num_out_eps;
++	if (ep_num == 0) {
++		return &pcd->ep0;
++	} else {
++		for (i = 0; i < num_out_eps; ++i) {
++			if (pcd->out_ep[i].dwc_ep.num == ep_num)
++				return &pcd->out_ep[i];
++		}
++		return 0;
++	}
++}
++
++/**
++ * This function gets a pointer to an EP from the wIndex address
++ * value of the control request.
++ */
++dwc_otg_pcd_ep_t *get_ep_by_addr(dwc_otg_pcd_t * pcd, u16 wIndex)
++{
++	dwc_otg_pcd_ep_t *ep;
++	uint32_t ep_num = UE_GET_ADDR(wIndex);
++
++	if (ep_num == 0) {
++		ep = &pcd->ep0;
++	} else if (UE_GET_DIR(wIndex) == UE_DIR_IN) {	/* in ep */
++		ep = &pcd->in_ep[ep_num - 1];
++	} else {
++		ep = &pcd->out_ep[ep_num - 1];
++	}
++
++	return ep;
++}
++
++/**
++ * This function checks the EP request queue; if the queue is not
++ * empty, the next request is started.
++ */
++void start_next_request(dwc_otg_pcd_ep_t * ep)
++{
++	dwc_otg_pcd_request_t *req = 0;
++	uint32_t max_transfer =
++	    GET_CORE_IF(ep->pcd)->core_params->max_transfer_size;
++
++#ifdef DWC_UTE_CFI
++	struct dwc_otg_pcd *pcd;
++	pcd = ep->pcd;
++#endif
++
++	if (!DWC_CIRCLEQ_EMPTY(&ep->queue)) {
++		req = DWC_CIRCLEQ_FIRST(&ep->queue);
++
++#ifdef DWC_UTE_CFI
++		if (ep->dwc_ep.buff_mode != BM_STANDARD) {
++			ep->dwc_ep.cfi_req_len = req->length;
++			pcd->cfi->ops.build_descriptors(pcd->cfi, pcd, ep, req);
++		} else {
++#endif
++			/* Setup and start the Transfer */
++			if (req->dw_align_buf) {
++				ep->dwc_ep.dma_addr = req->dw_align_buf_dma;
++				ep->dwc_ep.start_xfer_buff = req->dw_align_buf;
++				ep->dwc_ep.xfer_buff = req->dw_align_buf;
++			} else {
++				ep->dwc_ep.dma_addr = req->dma;
++				ep->dwc_ep.start_xfer_buff = req->buf;
++				ep->dwc_ep.xfer_buff = req->buf;
++			}
++			ep->dwc_ep.sent_zlp = 0;
++			ep->dwc_ep.total_len = req->length;
++			ep->dwc_ep.xfer_len = 0;
++			ep->dwc_ep.xfer_count = 0;
++
++			ep->dwc_ep.maxxfer = max_transfer;
++			if (GET_CORE_IF(ep->pcd)->dma_desc_enable) {
++				uint32_t out_max_xfer = DDMA_MAX_TRANSFER_SIZE
++				    - (DDMA_MAX_TRANSFER_SIZE % 4);
++				if (ep->dwc_ep.is_in) {
++					if (ep->dwc_ep.maxxfer >
++					    DDMA_MAX_TRANSFER_SIZE) {
++						ep->dwc_ep.maxxfer =
++						    DDMA_MAX_TRANSFER_SIZE;
++					}
++				} else {
++					if (ep->dwc_ep.maxxfer > out_max_xfer) {
++						ep->dwc_ep.maxxfer =
++						    out_max_xfer;
++					}
++				}
++			}
++			if (ep->dwc_ep.maxxfer < ep->dwc_ep.total_len) {
++				ep->dwc_ep.maxxfer -=
++				    (ep->dwc_ep.maxxfer % ep->dwc_ep.maxpacket);
++			}
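++			/*
++			 * Illustrative note: maxxfer is rounded down to a
++			 * multiple of maxpacket, e.g. with descriptor DMA an
++			 * OUT EP is first clamped to 65532 and then, for a
++			 * 512-byte bulk EP, rounded down to 65024.
++			 */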
++			if (req->sent_zlp) {
++				if ((ep->dwc_ep.total_len %
++				     ep->dwc_ep.maxpacket == 0)
++				    && (ep->dwc_ep.total_len != 0)) {
++					ep->dwc_ep.sent_zlp = 1;
++				}
++
++			}
++#ifdef DWC_UTE_CFI
++		}
++#endif
++		dwc_otg_ep_start_transfer(GET_CORE_IF(ep->pcd), &ep->dwc_ep);
++	} else if (ep->dwc_ep.type == DWC_OTG_EP_TYPE_ISOC) {
++		DWC_PRINTF("There are no more ISOC requests \n");
++		ep->dwc_ep.frame_num = 0xFFFFFFFF;
++	}
++}
++
++/**
++ * This function handles the SOF Interrupts. At this time the SOF
++ * Interrupt is disabled.
++ */
++int32_t dwc_otg_pcd_handle_sof_intr(dwc_otg_pcd_t * pcd)
++{
++	dwc_otg_core_if_t *core_if = GET_CORE_IF(pcd);
++
++	gintsts_data_t gintsts;
++
++	DWC_DEBUGPL(DBG_PCD, "SOF\n");
++
++	/* Clear interrupt */
++	gintsts.d32 = 0;
++	gintsts.b.sofintr = 1;
++	DWC_WRITE_REG32(&core_if->core_global_regs->gintsts, gintsts.d32);
++
++	return 1;
++}
++
++/**
++ * This function handles the Rx Status Queue Level Interrupt, which
++ * indicates that there is at least one packet in the Rx FIFO.  The
++ * packets are moved from the FIFO to memory, where they will be
++ * processed when the Endpoint Interrupt Register indicates Transfer
++ * Complete or SETUP Phase Done.
++ *
++ * Repeat the following until the Rx Status Queue is empty:
++ *	 -# Read the Receive Status Pop Register (GRXSTSP) to get Packet
++ *		info
++ *	 -# If the Receive FIFO is empty, skip to the "Clear the interrupt"
++ *		step and exit
++ *	 -# If SETUP Packet call dwc_otg_read_setup_packet to copy the
++ *		SETUP data to the buffer
++ *	 -# If OUT Data Packet call dwc_otg_read_packet to copy the data
++ *		to the destination buffer
++ */
++int32_t dwc_otg_pcd_handle_rx_status_q_level_intr(dwc_otg_pcd_t * pcd)
++{
++	dwc_otg_core_if_t *core_if = GET_CORE_IF(pcd);
++	dwc_otg_core_global_regs_t *global_regs = core_if->core_global_regs;
++	gintmsk_data_t gintmask = {.d32 = 0 };
++	device_grxsts_data_t status;
++	dwc_otg_pcd_ep_t *ep;
++	gintsts_data_t gintsts;
++#ifdef DEBUG
++	static char *dpid_str[] = { "D0", "D2", "D1", "MDATA" };
++#endif
++
++	//DWC_DEBUGPL(DBG_PCDV, "%s(%p)\n", __func__, _pcd);
++	/* Disable the Rx Status Queue Level interrupt */
++	gintmask.b.rxstsqlvl = 1;
++	DWC_MODIFY_REG32(&global_regs->gintmsk, gintmask.d32, 0);
++
++	/* Get the Status from the top of the FIFO */
++	status.d32 = DWC_READ_REG32(&global_regs->grxstsp);
++
++	DWC_DEBUGPL(DBG_PCD, "EP:%d BCnt:%d DPID:%s "
++		    "pktsts:%x Frame:%d(0x%0x)\n",
++		    status.b.epnum, status.b.bcnt,
++		    dpid_str[status.b.dpid],
++		    status.b.pktsts, status.b.fn, status.b.fn);
++	/* Get pointer to EP structure */
++	ep = get_out_ep(pcd, status.b.epnum);
++
++	switch (status.b.pktsts) {
++	case DWC_DSTS_GOUT_NAK:
++		DWC_DEBUGPL(DBG_PCDV, "Global OUT NAK\n");
++		break;
++	case DWC_STS_DATA_UPDT:
++		DWC_DEBUGPL(DBG_PCDV, "OUT Data Packet\n");
++		if (status.b.bcnt && ep->dwc_ep.xfer_buff) {
++			/** @todo NGS Check for buffer overflow? */
++			dwc_otg_read_packet(core_if,
++					    ep->dwc_ep.xfer_buff,
++					    status.b.bcnt);
++			ep->dwc_ep.xfer_count += status.b.bcnt;
++			ep->dwc_ep.xfer_buff += status.b.bcnt;
++		}
++		break;
++	case DWC_STS_XFER_COMP:
++		DWC_DEBUGPL(DBG_PCDV, "OUT Complete\n");
++		break;
++	case DWC_DSTS_SETUP_COMP:
++#ifdef DEBUG_EP0
++		DWC_DEBUGPL(DBG_PCDV, "Setup Complete\n");
++#endif
++		break;
++	case DWC_DSTS_SETUP_UPDT:
++		dwc_otg_read_setup_packet(core_if, pcd->setup_pkt->d32);
++#ifdef DEBUG_EP0
++		DWC_DEBUGPL(DBG_PCD,
++			    "SETUP PKT: %02x.%02x v%04x i%04x l%04x\n",
++			    pcd->setup_pkt->req.bmRequestType,
++			    pcd->setup_pkt->req.bRequest,
++			    UGETW(pcd->setup_pkt->req.wValue),
++			    UGETW(pcd->setup_pkt->req.wIndex),
++			    UGETW(pcd->setup_pkt->req.wLength));
++#endif
++		ep->dwc_ep.xfer_count += status.b.bcnt;
++		break;
++	default:
++		DWC_DEBUGPL(DBG_PCDV, "Invalid Packet Status (0x%0x)\n",
++			    status.b.pktsts);
++		break;
++	}
++
++	/* Enable the Rx Status Queue Level interrupt */
++	DWC_MODIFY_REG32(&global_regs->gintmsk, 0, gintmask.d32);
++	/* Clear interrupt */
++	gintsts.d32 = 0;
++	gintsts.b.rxstsqlvl = 1;
++	DWC_WRITE_REG32(&global_regs->gintsts, gintsts.d32);
++
++	//DWC_DEBUGPL(DBG_PCDV, "EXIT: %s\n", __func__);
++	return 1;
++}
++
++/**
++ * This function examines the Device IN Token Learning Queue to
++ * determine the EP number of the last IN token received.  This
++ * implementation is for the Mass Storage device where there are only
++ * 2 IN EPs (Control-IN and BULK-IN).
++ *
++ * The EP numbers for the first six IN Tokens are in DTKNQR1 and there
++ * are 8 EP Numbers in each of the other possible DTKNQ Registers.
++ *
++ * @param core_if Programming view of DWC_otg controller.
++ *
++ */
++static inline int get_ep_of_last_in_token(dwc_otg_core_if_t * core_if)
++{
++	dwc_otg_device_global_regs_t *dev_global_regs =
++	    core_if->dev_if->dev_global_regs;
++	const uint32_t TOKEN_Q_DEPTH = core_if->hwcfg2.b.dev_token_q_depth;
++	/* Number of Token Queue Registers */
++	const int DTKNQ_REG_CNT = (TOKEN_Q_DEPTH + 7) / 8;
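++	/*
++	 * Illustrative note: e.g. a token queue depth of 30 gives
++	 * (30 + 7) / 8 = 4 registers (6 entries in DTKNQR1, 8 in each of
++	 * the others).
++	 */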
++	dtknq1_data_t dtknqr1;
++	uint32_t in_tkn_epnums[4];
++	int ndx = 0;
++	int i = 0;
++	volatile uint32_t *addr = &dev_global_regs->dtknqr1;
++	int epnum = 0;
++
++	//DWC_DEBUGPL(DBG_PCD,"dev_token_q_depth=%d\n",TOKEN_Q_DEPTH);
++
++	/* Read the DTKNQ Registers */
++	for (i = 0; i < DTKNQ_REG_CNT; i++) {
++		in_tkn_epnums[i] = DWC_READ_REG32(addr);
++		DWC_DEBUGPL(DBG_PCDV, "DTKNQR%d=0x%08x\n", i + 1,
++			    in_tkn_epnums[i]);
++		if (addr == &dev_global_regs->dvbusdis) {
++			addr = &dev_global_regs->dtknqr3_dthrctl;
++		} else {
++			++addr;
++		}
++
++	}
++
++	/* Copy the DTKNQR1 data to the bit field. */
++	dtknqr1.d32 = in_tkn_epnums[0];
++	/* Get the EP numbers */
++	in_tkn_epnums[0] = dtknqr1.b.epnums0_5;
++	ndx = dtknqr1.b.intknwptr - 1;
++
++	//DWC_DEBUGPL(DBG_PCDV,"ndx=%d\n",ndx);
++	if (ndx == -1) {
++		/** @todo Find a simpler way to calculate the max
++		 * queue position.*/
++		int cnt = TOKEN_Q_DEPTH;
++		if (TOKEN_Q_DEPTH <= 6) {
++			cnt = TOKEN_Q_DEPTH - 1;
++		} else if (TOKEN_Q_DEPTH <= 14) {
++			cnt = TOKEN_Q_DEPTH - 7;
++		} else if (TOKEN_Q_DEPTH <= 22) {
++			cnt = TOKEN_Q_DEPTH - 15;
++		} else {
++			cnt = TOKEN_Q_DEPTH - 23;
++		}
++		epnum = (in_tkn_epnums[DTKNQ_REG_CNT - 1] >> (cnt * 4)) & 0xF;
++	} else {
++		if (ndx <= 5) {
++			epnum = (in_tkn_epnums[0] >> (ndx * 4)) & 0xF;
++		} else if (ndx <= 13) {
++			ndx -= 6;
++			epnum = (in_tkn_epnums[1] >> (ndx * 4)) & 0xF;
++		} else if (ndx <= 21) {
++			ndx -= 14;
++			epnum = (in_tkn_epnums[2] >> (ndx * 4)) & 0xF;
++		} else if (ndx <= 29) {
++			ndx -= 22;
++			epnum = (in_tkn_epnums[3] >> (ndx * 4)) & 0xF;
++		}
++	}
++	//DWC_DEBUGPL(DBG_PCD,"epnum=%d\n",epnum);
++	return epnum;
++}
++
++/**
++ * This interrupt occurs when the non-periodic Tx FIFO is half-empty.
++ * The active request is checked for the next packet to be loaded into
++ * the non-periodic Tx FIFO.
++ */
++int32_t dwc_otg_pcd_handle_np_tx_fifo_empty_intr(dwc_otg_pcd_t * pcd)
++{
++	dwc_otg_core_if_t *core_if = GET_CORE_IF(pcd);
++	dwc_otg_core_global_regs_t *global_regs = core_if->core_global_regs;
++	dwc_otg_dev_in_ep_regs_t *ep_regs;
++	gnptxsts_data_t txstatus = {.d32 = 0 };
++	gintsts_data_t gintsts;
++
++	int epnum = 0;
++	dwc_otg_pcd_ep_t *ep = 0;
++	uint32_t len = 0;
++	int dwords;
++
++	/* Get the epnum from the IN Token Learning Queue. */
++	epnum = get_ep_of_last_in_token(core_if);
++	ep = get_in_ep(pcd, epnum);
++
++	DWC_DEBUGPL(DBG_PCD, "NP TxFifo Empty: %d \n", epnum);
++
++	ep_regs = core_if->dev_if->in_ep_regs[epnum];
++
++	len = ep->dwc_ep.xfer_len - ep->dwc_ep.xfer_count;
++	if (len > ep->dwc_ep.maxpacket) {
++		len = ep->dwc_ep.maxpacket;
++	}
++	dwords = (len + 3) / 4;
++
++	/* While there is space in the queue, space in the FIFO and
++	 * more data to transfer, write packets to the Tx FIFO */
++	txstatus.d32 = DWC_READ_REG32(&global_regs->gnptxsts);
++	DWC_DEBUGPL(DBG_PCDV, "b4 GNPTXSTS=0x%08x\n", txstatus.d32);
++
++	while (txstatus.b.nptxqspcavail > 0 &&
++	       txstatus.b.nptxfspcavail > dwords &&
++	       ep->dwc_ep.xfer_count < ep->dwc_ep.xfer_len) {
++		/* Write the FIFO */
++		dwc_otg_ep_write_packet(core_if, &ep->dwc_ep, 0);
++		len = ep->dwc_ep.xfer_len - ep->dwc_ep.xfer_count;
++
++		if (len > ep->dwc_ep.maxpacket) {
++			len = ep->dwc_ep.maxpacket;
++		}
++
++		dwords = (len + 3) / 4;
++		txstatus.d32 = DWC_READ_REG32(&global_regs->gnptxsts);
++		DWC_DEBUGPL(DBG_PCDV, "GNPTXSTS=0x%08x\n", txstatus.d32);
++	}
++
++	DWC_DEBUGPL(DBG_PCDV, "GNPTXSTS=0x%08x\n",
++		    DWC_READ_REG32(&global_regs->gnptxsts));
++
++	/* Clear interrupt */
++	gintsts.d32 = 0;
++	gintsts.b.nptxfempty = 1;
++	DWC_WRITE_REG32(&global_regs->gintsts, gintsts.d32);
++
++	return 1;
++}
++
++/**
++ * This function is called when a dedicated Tx FIFO Empty interrupt occurs.
++ * The active request is checked for the next packet to be loaded into
++ * the appropriate Tx FIFO.
++ */
++static int32_t write_empty_tx_fifo(dwc_otg_pcd_t * pcd, uint32_t epnum)
++{
++	dwc_otg_core_if_t *core_if = GET_CORE_IF(pcd);
++	dwc_otg_dev_if_t *dev_if = core_if->dev_if;
++	dwc_otg_dev_in_ep_regs_t *ep_regs;
++	dtxfsts_data_t txstatus = {.d32 = 0 };
++	dwc_otg_pcd_ep_t *ep = 0;
++	uint32_t len = 0;
++	int dwords;
++
++	ep = get_in_ep(pcd, epnum);
++
++	DWC_DEBUGPL(DBG_PCD, "Dedicated TxFifo Empty: %d \n", epnum);
++
++	ep_regs = core_if->dev_if->in_ep_regs[epnum];
++
++	len = ep->dwc_ep.xfer_len - ep->dwc_ep.xfer_count;
++
++	if (len > ep->dwc_ep.maxpacket) {
++		len = ep->dwc_ep.maxpacket;
++	}
++
++	dwords = (len + 3) / 4;
++
++	/* While there is space in the queue, space in the FIFO and
++	 * more data to transfer, write packets to the Tx FIFO */
++	txstatus.d32 = DWC_READ_REG32(&dev_if->in_ep_regs[epnum]->dtxfsts);
++	DWC_DEBUGPL(DBG_PCDV, "b4 dtxfsts[%d]=0x%08x\n", epnum, txstatus.d32);
++
++	while (txstatus.b.txfspcavail > dwords &&
++	       ep->dwc_ep.xfer_count < ep->dwc_ep.xfer_len &&
++	       ep->dwc_ep.xfer_len != 0) {
++		/* Write the FIFO */
++		dwc_otg_ep_write_packet(core_if, &ep->dwc_ep, 0);
++
++		len = ep->dwc_ep.xfer_len - ep->dwc_ep.xfer_count;
++		if (len > ep->dwc_ep.maxpacket) {
++			len = ep->dwc_ep.maxpacket;
++		}
++
++		dwords = (len + 3) / 4;
++		txstatus.d32 =
++		    DWC_READ_REG32(&dev_if->in_ep_regs[epnum]->dtxfsts);
++		DWC_DEBUGPL(DBG_PCDV, "dtxfsts[%d]=0x%08x\n", epnum,
++			    txstatus.d32);
++	}
++
++	DWC_DEBUGPL(DBG_PCDV, "b4 dtxfsts[%d]=0x%08x\n", epnum,
++		    DWC_READ_REG32(&dev_if->in_ep_regs[epnum]->dtxfsts));
++
++	return 1;
++}
++
++/**
++ * This function is called when the Device is disconnected. It stops
++ * any active requests and informs the Gadget driver of the
++ * disconnect.
++ */
++void dwc_otg_pcd_stop(dwc_otg_pcd_t * pcd)
++{
++	int i, num_in_eps, num_out_eps;
++	dwc_otg_pcd_ep_t *ep;
++
++	gintmsk_data_t intr_mask = {.d32 = 0 };
++
++	DWC_SPINLOCK(pcd->lock);
++
++	num_in_eps = GET_CORE_IF(pcd)->dev_if->num_in_eps;
++	num_out_eps = GET_CORE_IF(pcd)->dev_if->num_out_eps;
++
++	DWC_DEBUGPL(DBG_PCDV, "%s() \n", __func__);
++	/* don't disconnect drivers more than once */
++	if (pcd->ep0state == EP0_DISCONNECT) {
++		DWC_DEBUGPL(DBG_ANY, "%s() Already Disconnected\n", __func__);
++		DWC_SPINUNLOCK(pcd->lock);
++		return;
++	}
++	pcd->ep0state = EP0_DISCONNECT;
++
++	/* Reset the OTG state. */
++	dwc_otg_pcd_update_otg(pcd, 1);
++
++	/* Disable the NP Tx Fifo Empty Interrupt. */
++	intr_mask.b.nptxfempty = 1;
++	DWC_MODIFY_REG32(&GET_CORE_IF(pcd)->core_global_regs->gintmsk,
++			 intr_mask.d32, 0);
++
++	/* Flush the FIFOs */
++	/**@todo NGS Flush Periodic FIFOs */
++	dwc_otg_flush_tx_fifo(GET_CORE_IF(pcd), 0x10);
++	dwc_otg_flush_rx_fifo(GET_CORE_IF(pcd));
++
++	/* prevent new request submissions, kill any outstanding requests  */
++	ep = &pcd->ep0;
++	dwc_otg_request_nuke(ep);
++	/* prevent new request submissions, kill any outstanding requests  */
++	for (i = 0; i < num_in_eps; i++) {
++		dwc_otg_pcd_ep_t *ep = &pcd->in_ep[i];
++		dwc_otg_request_nuke(ep);
++	}
++	/* prevent new request submissions, kill any outstanding requests  */
++	for (i = 0; i < num_out_eps; i++) {
++		dwc_otg_pcd_ep_t *ep = &pcd->out_ep[i];
++		dwc_otg_request_nuke(ep);
++	}
++
++	/* report disconnect; the driver is already quiesced */
++	if (pcd->fops->disconnect) {
++		DWC_SPINUNLOCK(pcd->lock);
++		pcd->fops->disconnect(pcd);
++		DWC_SPINLOCK(pcd->lock);
++	}
++	DWC_SPINUNLOCK(pcd->lock);
++}
++
++/**
++ * This interrupt indicates an I2C interrupt; the handler is not implemented
++ * and only masks and clears the interrupt.
++ */
++int32_t dwc_otg_pcd_handle_i2c_intr(dwc_otg_pcd_t * pcd)
++{
++	gintmsk_data_t intr_mask = {.d32 = 0 };
++	gintsts_data_t gintsts;
++
++	DWC_PRINTF("INTERRUPT Handler not implemented for %s\n", "i2cintr");
++	intr_mask.b.i2cintr = 1;
++	DWC_MODIFY_REG32(&GET_CORE_IF(pcd)->core_global_regs->gintmsk,
++			 intr_mask.d32, 0);
++
++	/* Clear interrupt */
++	gintsts.d32 = 0;
++	gintsts.b.i2cintr = 1;
++	DWC_WRITE_REG32(&GET_CORE_IF(pcd)->core_global_regs->gintsts,
++			gintsts.d32);
++	return 1;
++}
++
++/**
++ * This interrupt indicates that an Early Suspend condition has been detected
++ * on the USB bus.
++ */
++int32_t dwc_otg_pcd_handle_early_suspend_intr(dwc_otg_pcd_t * pcd)
++{
++	gintsts_data_t gintsts;
++#if defined(VERBOSE)
++	DWC_PRINTF("Early Suspend Detected\n");
++#endif
++
++	/* Clear interrupt */
++	gintsts.d32 = 0;
++	gintsts.b.erlysuspend = 1;
++	DWC_WRITE_REG32(&GET_CORE_IF(pcd)->core_global_regs->gintsts,
++			gintsts.d32);
++	return 1;
++}
++
++/**
++ * This function configures EP0 to receive SETUP packets.
++ *
++ * @todo NGS: Update the comments from the HW FS.
++ *
++ *	-# Program the following fields in the endpoint specific registers
++ *	for Control OUT EP 0, in order to receive a setup packet
++ *	- DOEPTSIZ0.Packet Count = 3 (To receive up to 3 back to back
++ *	  setup packets)
++ *	- DOEPTSIZE0.Transfer Size = 24 Bytes (To receive up to 3 back
++ *	  to back setup packets)
++ *		- In DMA mode, DOEPDMA0 Register with a memory address to
++ *		  store any setup packets received
++ *
++ * @param core_if Programming view of DWC_otg controller.
++ * @param pcd	  Programming view of the PCD.
++ */
++static inline void ep0_out_start(dwc_otg_core_if_t * core_if,
++				 dwc_otg_pcd_t * pcd)
++{
++	dwc_otg_dev_if_t *dev_if = core_if->dev_if;
++	deptsiz0_data_t doeptsize0 = {.d32 = 0 };
++	dwc_otg_dev_dma_desc_t *dma_desc;
++	depctl_data_t doepctl = {.d32 = 0 };
++
++#ifdef VERBOSE
++	DWC_DEBUGPL(DBG_PCDV, "%s() doepctl0=%0x\n", __func__,
++		    DWC_READ_REG32(&dev_if->out_ep_regs[0]->doepctl));
++#endif
++	if (core_if->snpsid >= OTG_CORE_REV_3_00a) {
++		doepctl.d32 = DWC_READ_REG32(&dev_if->out_ep_regs[0]->doepctl);
++		if (doepctl.b.epena) {
++			return;
++		}
++	}
++
++	doeptsize0.b.supcnt = 3;
++	doeptsize0.b.pktcnt = 1;
++	doeptsize0.b.xfersize = 8 * 3;
++
++	if (core_if->dma_enable) {
++		if (!core_if->dma_desc_enable) {
++			/** written here because in Hermes mode the deptsiz register should not be written */
++			DWC_WRITE_REG32(&dev_if->out_ep_regs[0]->doeptsiz,
++					doeptsize0.d32);
++
++			/** @todo dma needs to handle multiple setup packets (up to 3) */
++			DWC_WRITE_REG32(&dev_if->out_ep_regs[0]->doepdma,
++					pcd->setup_pkt_dma_handle);
++		} else {
++			dev_if->setup_desc_index =
++			    (dev_if->setup_desc_index + 1) & 1;
++			dma_desc =
++			    dev_if->setup_desc_addr[dev_if->setup_desc_index];
++
++			/** DMA Descriptor Setup */
++			dma_desc->status.b.bs = BS_HOST_BUSY;
++			if (core_if->snpsid >= OTG_CORE_REV_3_00a) {
++				dma_desc->status.b.sr = 0;
++				dma_desc->status.b.mtrf = 0;
++			}
++			dma_desc->status.b.l = 1;
++			dma_desc->status.b.ioc = 1;
++			dma_desc->status.b.bytes = pcd->ep0.dwc_ep.maxpacket;
++			dma_desc->buf = pcd->setup_pkt_dma_handle;
++			dma_desc->status.b.sts = 0;
++			dma_desc->status.b.bs = BS_HOST_READY;
++
++			/** DOEPDMA0 Register write */
++			DWC_WRITE_REG32(&dev_if->out_ep_regs[0]->doepdma,
++					dev_if->dma_setup_desc_addr
++					[dev_if->setup_desc_index]);
++		}
++
++	} else {
++		/** written here because in Hermes mode the deptsiz register should not be written */
++		DWC_WRITE_REG32(&dev_if->out_ep_regs[0]->doeptsiz,
++				doeptsize0.d32);
++	}
++
++	/** DOEPCTL0 Register write cnak will be set after setup interrupt */
++	doepctl.d32 = 0;
++	doepctl.b.epena = 1;
++	if (core_if->snpsid <= OTG_CORE_REV_2_94a) {
++		doepctl.b.cnak = 1;
++		DWC_WRITE_REG32(&dev_if->out_ep_regs[0]->doepctl, doepctl.d32);
++	} else {
++		DWC_MODIFY_REG32(&dev_if->out_ep_regs[0]->doepctl, 0, doepctl.d32);
++	}
++
++#ifdef VERBOSE
++	DWC_DEBUGPL(DBG_PCDV, "doepctl0=%0x\n",
++		    DWC_READ_REG32(&dev_if->out_ep_regs[0]->doepctl));
++	DWC_DEBUGPL(DBG_PCDV, "diepctl0=%0x\n",
++		    DWC_READ_REG32(&dev_if->in_ep_regs[0]->diepctl));
++#endif
++}
++
++/**
++ * This interrupt occurs when a USB Reset is detected. When the USB
++ * Reset Interrupt occurs the device state is set to DEFAULT and the
++ * EP0 state is set to IDLE.
++ *	-#	Set the NAK bit for all OUT endpoints (DOEPCTLn.SNAK = 1)
++ *	-#	Unmask the following interrupt bits
++ *		- DAINTMSK.INEP0 = 1 (Control 0 IN endpoint)
++ *	- DAINTMSK.OUTEP0 = 1 (Control 0 OUT endpoint)
++ *	- DOEPMSK.SETUP = 1
++ *	- DOEPMSK.XferCompl = 1
++ *	- DIEPMSK.XferCompl = 1
++ *	- DIEPMSK.TimeOut = 1
++ *	-# Program the following fields in the endpoint specific registers
++ *	for Control OUT EP 0, in order to receive a setup packet
++ *	- DOEPTSIZ0.Packet Count = 3 (To receive up to 3 back to back
++ *	  setup packets)
++ *	- DOEPTSIZE0.Transfer Size = 24 Bytes (To receive up to 3 back
++ *	  to back setup packets)
++ *		- In DMA mode, DOEPDMA0 Register with a memory address to
++ *		  store any setup packets received
++ * At this point, all the required initialization, except for enabling
++ * the control 0 OUT endpoint is done, for receiving SETUP packets.
++ */
++int32_t dwc_otg_pcd_handle_usb_reset_intr(dwc_otg_pcd_t * pcd)
++{
++	dwc_otg_core_if_t *core_if = GET_CORE_IF(pcd);
++	dwc_otg_dev_if_t *dev_if = core_if->dev_if;
++	depctl_data_t doepctl = {.d32 = 0 };
++	depctl_data_t diepctl = {.d32 = 0 };
++	daint_data_t daintmsk = {.d32 = 0 };
++	doepmsk_data_t doepmsk = {.d32 = 0 };
++	diepmsk_data_t diepmsk = {.d32 = 0 };
++	dcfg_data_t dcfg = {.d32 = 0 };
++	grstctl_t resetctl = {.d32 = 0 };
++	dctl_data_t dctl = {.d32 = 0 };
++	int i = 0;
++	gintsts_data_t gintsts;
++	pcgcctl_data_t power = {.d32 = 0 };
++
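++	/*
++	 * If the PHY clock was stopped (partial power-down), restore the
++	 * clock, remove the power clamp and reset the power-down module
++	 * before handling the reset.
++	 */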
++	power.d32 = DWC_READ_REG32(core_if->pcgcctl);
++	if (power.b.stoppclk) {
++		power.d32 = 0;
++		power.b.stoppclk = 1;
++		DWC_MODIFY_REG32(core_if->pcgcctl, power.d32, 0);
++
++		power.b.pwrclmp = 1;
++		DWC_MODIFY_REG32(core_if->pcgcctl, power.d32, 0);
++
++		power.b.rstpdwnmodule = 1;
++		DWC_MODIFY_REG32(core_if->pcgcctl, power.d32, 0);
++	}
++
++	core_if->lx_state = DWC_OTG_L0;
++
++	DWC_PRINTF("USB RESET\n");
++#ifdef DWC_EN_ISOC
++	for (i = 1; i < 16; ++i) {
++		dwc_otg_pcd_ep_t *ep;
++		dwc_ep_t *dwc_ep;
++		ep = get_in_ep(pcd, i);
++		if (ep != 0) {
++			dwc_ep = &ep->dwc_ep;
++			dwc_ep->next_frame = 0xffffffff;
++		}
++	}
++#endif /* DWC_EN_ISOC */
++
++	/* reset the HNP settings */
++	dwc_otg_pcd_update_otg(pcd, 1);
++
++	/* Clear the Remote Wakeup Signalling */
++	dctl.b.rmtwkupsig = 1;
++	DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->dctl, dctl.d32, 0);
++
++	/* Set NAK for all OUT EPs */
++	doepctl.b.snak = 1;
++	for (i = 0; i <= dev_if->num_out_eps; i++) {
++		DWC_WRITE_REG32(&dev_if->out_ep_regs[i]->doepctl, doepctl.d32);
++	}
++
++	/* Flush the NP Tx FIFO */
++	dwc_otg_flush_tx_fifo(core_if, 0x10);
++	/* Flush the Learning Queue */
++	resetctl.b.intknqflsh = 1;
++	DWC_WRITE_REG32(&core_if->core_global_regs->grstctl, resetctl.d32);
++
++	if (!core_if->core_params->en_multiple_tx_fifo && core_if->dma_enable) {
++		core_if->start_predict = 0;
++		for (i = 0; i <= core_if->dev_if->num_in_eps; ++i) {
++			core_if->nextep_seq[i] = 0xff;	// 0xff - EP not active
++		}
++		core_if->nextep_seq[0] = 0;
++		core_if->first_in_nextep_seq = 0;
++		diepctl.d32 = DWC_READ_REG32(&dev_if->in_ep_regs[0]->diepctl);
++		diepctl.b.nextep = 0;
++		DWC_WRITE_REG32(&dev_if->in_ep_regs[0]->diepctl, diepctl.d32);
++
++		/* Update IN Endpoint Mismatch Count by active IN NP EP count + 1 */
++		dcfg.d32 = DWC_READ_REG32(&dev_if->dev_global_regs->dcfg);
++		dcfg.b.epmscnt = 2;
++		DWC_WRITE_REG32(&dev_if->dev_global_regs->dcfg, dcfg.d32);
++
++		DWC_DEBUGPL(DBG_PCDV,
++			    "%s first_in_nextep_seq= %2d; nextep_seq[]:\n",
++			__func__, core_if->first_in_nextep_seq);
++		for (i = 0; i <= core_if->dev_if->num_in_eps; i++) {
++			DWC_DEBUGPL(DBG_PCDV, "%2d\n", core_if->nextep_seq[i]);
++		}
++	}
++
++	if (core_if->multiproc_int_enable) {
++		daintmsk.b.inep0 = 1;
++		daintmsk.b.outep0 = 1;
++		DWC_WRITE_REG32(&dev_if->dev_global_regs->deachintmsk,
++				daintmsk.d32);
++
++		doepmsk.b.setup = 1;
++		doepmsk.b.xfercompl = 1;
++		doepmsk.b.ahberr = 1;
++		doepmsk.b.epdisabled = 1;
++
++		if ((core_if->dma_desc_enable) ||
++		    (core_if->dma_enable
++		     && core_if->snpsid >= OTG_CORE_REV_3_00a)) {
++			doepmsk.b.stsphsercvd = 1;
++		}
++		if (core_if->dma_desc_enable)
++			doepmsk.b.bna = 1;
++/*
++		doepmsk.b.babble = 1;
++		doepmsk.b.nyet = 1;
++
++		if (core_if->dma_enable) {
++			doepmsk.b.nak = 1;
++		}
++*/
++		DWC_WRITE_REG32(&dev_if->dev_global_regs->doepeachintmsk[0],
++				doepmsk.d32);
++
++		diepmsk.b.xfercompl = 1;
++		diepmsk.b.timeout = 1;
++		diepmsk.b.epdisabled = 1;
++		diepmsk.b.ahberr = 1;
++		diepmsk.b.intknepmis = 1;
++		if (!core_if->en_multiple_tx_fifo && core_if->dma_enable)
++			diepmsk.b.intknepmis = 0;
++
++/*		if (core_if->dma_desc_enable) {
++			diepmsk.b.bna = 1;
++		}
++*/
++/*
++		if (core_if->dma_enable) {
++			diepmsk.b.nak = 1;
++		}
++*/
++		DWC_WRITE_REG32(&dev_if->dev_global_regs->diepeachintmsk[0],
++				diepmsk.d32);
++	} else {
++		daintmsk.b.inep0 = 1;
++		daintmsk.b.outep0 = 1;
++		DWC_WRITE_REG32(&dev_if->dev_global_regs->daintmsk,
++				daintmsk.d32);
++
++		doepmsk.b.setup = 1;
++		doepmsk.b.xfercompl = 1;
++		doepmsk.b.ahberr = 1;
++		doepmsk.b.epdisabled = 1;
++
++		if ((core_if->dma_desc_enable) ||
++		    (core_if->dma_enable
++		     && core_if->snpsid >= OTG_CORE_REV_3_00a)) {
++			doepmsk.b.stsphsercvd = 1;
++		}
++		if (core_if->dma_desc_enable)
++			doepmsk.b.bna = 1;
++		DWC_WRITE_REG32(&dev_if->dev_global_regs->doepmsk, doepmsk.d32);
++
++		diepmsk.b.xfercompl = 1;
++		diepmsk.b.timeout = 1;
++		diepmsk.b.epdisabled = 1;
++		diepmsk.b.ahberr = 1;
++		if (!core_if->en_multiple_tx_fifo && core_if->dma_enable)
++			diepmsk.b.intknepmis = 0;
++/*
++		if (core_if->dma_desc_enable) {
++			diepmsk.b.bna = 1;
++		}
++*/
++
++		DWC_WRITE_REG32(&dev_if->dev_global_regs->diepmsk, diepmsk.d32);
++	}
++
++	/* Reset Device Address */
++	dcfg.d32 = DWC_READ_REG32(&dev_if->dev_global_regs->dcfg);
++	dcfg.b.devaddr = 0;
++	DWC_WRITE_REG32(&dev_if->dev_global_regs->dcfg, dcfg.d32);
++
++	/* setup EP0 to receive SETUP packets */
++	if (core_if->snpsid <= OTG_CORE_REV_2_94a)
++		ep0_out_start(core_if, pcd);
++
++	/* Clear interrupt */
++	gintsts.d32 = 0;
++	gintsts.b.usbreset = 1;
++	DWC_WRITE_REG32(&core_if->core_global_regs->gintsts, gintsts.d32);
++
++	return 1;
++}
++
++/**
++ * Get the device speed from the device status register and convert it
++ * to USB speed constant.
++ *
++ * @param core_if Programming view of DWC_otg controller.
++ */
++static int get_device_speed(dwc_otg_core_if_t * core_if)
++{
++	dsts_data_t dsts;
++	int speed = 0;
++	dsts.d32 = DWC_READ_REG32(&core_if->dev_if->dev_global_regs->dsts);
++
++	switch (dsts.b.enumspd) {
++	case DWC_DSTS_ENUMSPD_HS_PHY_30MHZ_OR_60MHZ:
++		speed = USB_SPEED_HIGH;
++		break;
++	case DWC_DSTS_ENUMSPD_FS_PHY_30MHZ_OR_60MHZ:
++	case DWC_DSTS_ENUMSPD_FS_PHY_48MHZ:
++		speed = USB_SPEED_FULL;
++		break;
++
++	case DWC_DSTS_ENUMSPD_LS_PHY_6MHZ:
++		speed = USB_SPEED_LOW;
++		break;
++	}
++
++	return speed;
++}
++
++/**
++ * Read the device status register and set the device speed in the
++ * data structure.
++ * Set up EP0 to receive SETUP packets by calling dwc_ep0_activate.
++ */
++int32_t dwc_otg_pcd_handle_enum_done_intr(dwc_otg_pcd_t * pcd)
++{
++	dwc_otg_pcd_ep_t *ep0 = &pcd->ep0;
++	gintsts_data_t gintsts;
++	gusbcfg_data_t gusbcfg;
++	dwc_otg_core_global_regs_t *global_regs =
++	    GET_CORE_IF(pcd)->core_global_regs;
++	uint8_t utmi16b, utmi8b;
++	int speed;
++	DWC_DEBUGPL(DBG_PCD, "SPEED ENUM\n");
++
++	if (GET_CORE_IF(pcd)->snpsid >= OTG_CORE_REV_2_60a) {
++		utmi16b = 6;	//vahrama old value was 6;
++		utmi8b = 9;
++	} else {
++		utmi16b = 4;
++		utmi8b = 8;
++	}
++	dwc_otg_ep0_activate(GET_CORE_IF(pcd), &ep0->dwc_ep);
++	if (GET_CORE_IF(pcd)->snpsid >= OTG_CORE_REV_3_00a) {
++		ep0_out_start(GET_CORE_IF(pcd), pcd);
++	}
++
++#ifdef DEBUG_EP0
++	print_ep0_state(pcd);
++#endif
++
++	if (pcd->ep0state == EP0_DISCONNECT) {
++		pcd->ep0state = EP0_IDLE;
++	} else if (pcd->ep0state == EP0_STALL) {
++		pcd->ep0state = EP0_IDLE;
++	}
++
++	pcd->ep0state = EP0_IDLE;
++
++	ep0->stopped = 0;
++
++	speed = get_device_speed(GET_CORE_IF(pcd));
++	pcd->fops->connect(pcd, speed);
++
++	/* Set USB turnaround time based on device speed and PHY interface. */
++	gusbcfg.d32 = DWC_READ_REG32(&global_regs->gusbcfg);
++	if (speed == USB_SPEED_HIGH) {
++		if (GET_CORE_IF(pcd)->hwcfg2.b.hs_phy_type ==
++		    DWC_HWCFG2_HS_PHY_TYPE_ULPI) {
++			/* ULPI interface */
++			gusbcfg.b.usbtrdtim = 9;
++		}
++		if (GET_CORE_IF(pcd)->hwcfg2.b.hs_phy_type ==
++		    DWC_HWCFG2_HS_PHY_TYPE_UTMI) {
++			/* UTMI+ interface */
++			if (GET_CORE_IF(pcd)->hwcfg4.b.utmi_phy_data_width == 0) {
++				gusbcfg.b.usbtrdtim = utmi8b;
++			} else if (GET_CORE_IF(pcd)->hwcfg4.
++				   b.utmi_phy_data_width == 1) {
++				gusbcfg.b.usbtrdtim = utmi16b;
++			} else if (GET_CORE_IF(pcd)->
++				   core_params->phy_utmi_width == 8) {
++				gusbcfg.b.usbtrdtim = utmi8b;
++			} else {
++				gusbcfg.b.usbtrdtim = utmi16b;
++			}
++		}
++		if (GET_CORE_IF(pcd)->hwcfg2.b.hs_phy_type ==
++		    DWC_HWCFG2_HS_PHY_TYPE_UTMI_ULPI) {
++			/* UTMI+  OR  ULPI interface */
++			if (gusbcfg.b.ulpi_utmi_sel == 1) {
++				/* ULPI interface */
++				gusbcfg.b.usbtrdtim = 9;
++			} else {
++				/* UTMI+ interface */
++				if (GET_CORE_IF(pcd)->
++				    core_params->phy_utmi_width == 16) {
++					gusbcfg.b.usbtrdtim = utmi16b;
++				} else {
++					gusbcfg.b.usbtrdtim = utmi8b;
++				}
++			}
++		}
++	} else {
++		/* Full or low speed */
++		gusbcfg.b.usbtrdtim = 9;
++	}
++	DWC_WRITE_REG32(&global_regs->gusbcfg, gusbcfg.d32);
++
++	/* Clear interrupt */
++	gintsts.d32 = 0;
++	gintsts.b.enumdone = 1;
++	DWC_WRITE_REG32(&GET_CORE_IF(pcd)->core_global_regs->gintsts,
++			gintsts.d32);
++	return 1;
++}
++
++/**
++ * This interrupt indicates that the ISO OUT Packet was dropped due to
++ * Rx FIFO full or Rx Status Queue Full.  If this interrupt occurs
++ * read all the data from the Rx FIFO.
++ */
++int32_t dwc_otg_pcd_handle_isoc_out_packet_dropped_intr(dwc_otg_pcd_t * pcd)
++{
++	gintmsk_data_t intr_mask = {.d32 = 0 };
++	gintsts_data_t gintsts;
++
++	DWC_WARN("INTERRUPT Handler not implemented for %s\n",
++		 "ISOC Out Dropped");
++
++	intr_mask.b.isooutdrop = 1;
++	DWC_MODIFY_REG32(&GET_CORE_IF(pcd)->core_global_regs->gintmsk,
++			 intr_mask.d32, 0);
++
++	/* Clear interrupt */
++	gintsts.d32 = 0;
++	gintsts.b.isooutdrop = 1;
++	DWC_WRITE_REG32(&GET_CORE_IF(pcd)->core_global_regs->gintsts,
++			gintsts.d32);
++
++	return 1;
++}
++
++/**
++ * This interrupt indicates the end of the portion of the micro-frame
++ * for periodic transactions.  If there is a periodic transaction for
++ * the next frame, load the packets into the EP periodic Tx FIFO.
++ */
++int32_t dwc_otg_pcd_handle_end_periodic_frame_intr(dwc_otg_pcd_t * pcd)
++{
++	gintmsk_data_t intr_mask = {.d32 = 0 };
++	gintsts_data_t gintsts;
++	DWC_PRINTF("INTERRUPT Handler not implemented for %s\n", "EOP");
++
++	intr_mask.b.eopframe = 1;
++	DWC_MODIFY_REG32(&GET_CORE_IF(pcd)->core_global_regs->gintmsk,
++			 intr_mask.d32, 0);
++
++	/* Clear interrupt */
++	gintsts.d32 = 0;
++	gintsts.b.eopframe = 1;
++	DWC_WRITE_REG32(&GET_CORE_IF(pcd)->core_global_regs->gintsts,
++			gintsts.d32);
++
++	return 1;
++}
++
++/**
++ * This interrupt indicates that the EP of the packet at the top of the
++ * non-periodic Tx FIFO does not match the EP of the IN Token received.
++ *
++ * The "Device IN Token Queue" Registers are read to determine the
++ * order the IN Tokens have been received. The non-periodic Tx FIFO
++ * is flushed, so it can be reloaded in the order seen in the IN Token
++ * Queue.
++ */
++int32_t dwc_otg_pcd_handle_ep_mismatch_intr(dwc_otg_pcd_t * pcd)
++{
++	gintsts_data_t gintsts;
++	dwc_otg_core_if_t *core_if = GET_CORE_IF(pcd);
++	dctl_data_t dctl;
++	gintmsk_data_t intr_mask = {.d32 = 0 };
++
++	if (!core_if->en_multiple_tx_fifo && core_if->dma_enable) {
++		core_if->start_predict = 1;
++
++		DWC_DEBUGPL(DBG_PCDV, "%s(%p)\n", __func__, core_if);
++
++		gintsts.d32 = DWC_READ_REG32(&core_if->core_global_regs->gintsts);
++		if (!gintsts.b.ginnakeff) {
++			/* Disable EP Mismatch interrupt */
++			intr_mask.d32 = 0;
++			intr_mask.b.epmismatch = 1;
++			DWC_MODIFY_REG32(&core_if->core_global_regs->gintmsk, intr_mask.d32, 0);
++			/* Enable the Global IN NAK Effective Interrupt */
++			intr_mask.d32 = 0;
++			intr_mask.b.ginnakeff = 1;
++			DWC_MODIFY_REG32(&core_if->core_global_regs->gintmsk, 0, intr_mask.d32);
++			/* Set the global non-periodic IN NAK handshake */
++			dctl.d32 = DWC_READ_REG32(&core_if->dev_if->dev_global_regs->dctl);
++			dctl.b.sgnpinnak = 1;
++			DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->dctl, dctl.d32);
++		} else {
++			DWC_PRINTF("gintsts.b.ginnakeff = 1! dctl.b.sgnpinnak not set\n");
++		}
++		/* Disabling of all EP's will be done in dwc_otg_pcd_handle_in_nak_effective()
++		 * handler after Global IN NAK Effective interrupt will be asserted */
++	}
++	/* Clear interrupt */
++	gintsts.d32 = 0;
++	gintsts.b.epmismatch = 1;
++	DWC_WRITE_REG32(&core_if->core_global_regs->gintsts, gintsts.d32);
++
++	return 1;
++}
++
++/**
++ * This interrupt is valid only in DMA mode. This interrupt indicates that the
++ * core has stopped fetching data for IN endpoints due to the unavailability of
++ * TxFIFO space or Request Queue space. This interrupt is used by the
++ * application for an endpoint mismatch algorithm.
++ *
++ * @param pcd The PCD
++ */
++int32_t dwc_otg_pcd_handle_ep_fetsusp_intr(dwc_otg_pcd_t * pcd)
++{
++	gintsts_data_t gintsts;
++	gintmsk_data_t gintmsk_data;
++	dctl_data_t dctl;
++	dwc_otg_core_if_t *core_if = GET_CORE_IF(pcd);
++	DWC_DEBUGPL(DBG_PCDV, "%s(%p)\n", __func__, core_if);
++
++	/* Clear the global non-periodic IN NAK handshake */
++	dctl.d32 = 0;
++	dctl.b.cgnpinnak = 1;
++	DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->dctl, dctl.d32, dctl.d32);
++
++	/* Mask GINTSTS.FETSUSP interrupt */
++	gintmsk_data.d32 = DWC_READ_REG32(&core_if->core_global_regs->gintmsk);
++	gintmsk_data.b.fetsusp = 0;
++	DWC_WRITE_REG32(&core_if->core_global_regs->gintmsk, gintmsk_data.d32);
++
++	/* Clear interrupt */
++	gintsts.d32 = 0;
++	gintsts.b.fetsusp = 1;
++	DWC_WRITE_REG32(&core_if->core_global_regs->gintsts, gintsts.d32);
++
++	return 1;
++}
++
++/**
++ * This function stalls EP0.
++ */
++static inline void ep0_do_stall(dwc_otg_pcd_t * pcd, const int err_val)
++{
++	dwc_otg_pcd_ep_t *ep0 = &pcd->ep0;
++	usb_device_request_t *ctrl = &pcd->setup_pkt->req;
++	DWC_WARN("req %02x.%02x protocol STALL; err %d\n",
++		 ctrl->bmRequestType, ctrl->bRequest, err_val);
++
++	ep0->dwc_ep.is_in = 1;
++	dwc_otg_ep_set_stall(GET_CORE_IF(pcd), &ep0->dwc_ep);
++	pcd->ep0.stopped = 1;
++	pcd->ep0state = EP0_IDLE;
++	ep0_out_start(GET_CORE_IF(pcd), pcd);
++}
++
++/**
++ * This function delegates the SETUP command to the gadget driver.
++ */
++static inline void do_gadget_setup(dwc_otg_pcd_t * pcd,
++				   usb_device_request_t * ctrl)
++{
++	int ret = 0;
++	DWC_SPINUNLOCK(pcd->lock);
++	ret = pcd->fops->setup(pcd, (uint8_t *) ctrl);
++	DWC_SPINLOCK(pcd->lock);
++	if (ret < 0) {
++		ep0_do_stall(pcd, ret);
++	}
++
++	/** @todo This is a g_file_storage gadget driver specific
++	 * workaround: a DELAYED_STATUS result from the fsg_setup
++	 * routine will result in the gadget queueing a EP0 IN status
++	 * phase for a two-stage control transfer. Exactly the same as
++	 * a SET_CONFIGURATION/SET_INTERFACE except that this is a class
++	 * specific request.  Need a generic way to know when the gadget
++	 * driver will queue the status phase. Can we assume when we
++	 * call the gadget driver setup() function that it will always
++	 * queue and require the following flag? Need to look into
++	 * this.
++	 */
++
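++	/*
++	 * Note: 256 + 999 is assumed to correspond to the g_file_storage
++	 * DELAYED_STATUS value (EP0_BUFSIZE + 999); see the @todo above.
++	 */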
++	if (ret == 256 + 999) {
++		pcd->request_config = 1;
++	}
++}
++
++#ifdef DWC_UTE_CFI
++/**
++ * This function delegates the CFI setup commands to the gadget driver.
++ * This function will return a negative value to indicate a failure.
++ */
++static inline int cfi_gadget_setup(dwc_otg_pcd_t * pcd,
++				   struct cfi_usb_ctrlrequest *ctrl_req)
++{
++	int ret = 0;
++
++	if (pcd->fops && pcd->fops->cfi_setup) {
++		DWC_SPINUNLOCK(pcd->lock);
++		ret = pcd->fops->cfi_setup(pcd, ctrl_req);
++		DWC_SPINLOCK(pcd->lock);
++		if (ret < 0) {
++			ep0_do_stall(pcd, ret);
++			return ret;
++		}
++	}
++
++	return ret;
++}
++#endif
++
++/**
++ * This function starts the Zero-Length Packet for the IN status phase
++ * of a 2 stage control transfer.
++ */
++static inline void do_setup_in_status_phase(dwc_otg_pcd_t * pcd)
++{
++	dwc_otg_pcd_ep_t *ep0 = &pcd->ep0;
++	if (pcd->ep0state == EP0_STALL) {
++		return;
++	}
++
++	pcd->ep0state = EP0_IN_STATUS_PHASE;
++
++	/* Prepare for more SETUP Packets */
++	DWC_DEBUGPL(DBG_PCD, "EP0 IN ZLP\n");
++	if ((GET_CORE_IF(pcd)->snpsid >= OTG_CORE_REV_3_00a)
++	    && (pcd->core_if->dma_desc_enable)
++	    && (ep0->dwc_ep.xfer_count < ep0->dwc_ep.total_len)) {
++		DWC_DEBUGPL(DBG_PCDV,
++			    "Data terminated wait next packet in out_desc_addr\n");
++		pcd->backup_buf = phys_to_virt(ep0->dwc_ep.dma_addr);
++		pcd->data_terminated = 1;
++	}
++	ep0->dwc_ep.xfer_len = 0;
++	ep0->dwc_ep.xfer_count = 0;
++	ep0->dwc_ep.is_in = 1;
++	ep0->dwc_ep.dma_addr = pcd->setup_pkt_dma_handle;
++	dwc_otg_ep0_start_transfer(GET_CORE_IF(pcd), &ep0->dwc_ep);
++
++	/* Prepare for more SETUP Packets */
++	//ep0_out_start(GET_CORE_IF(pcd), pcd);
++}
++
++/**
++ * This function starts the Zero-Length Packet for the OUT status phase
++ * of a 2 stage control transfer.
++ */
++static inline void do_setup_out_status_phase(dwc_otg_pcd_t * pcd)
++{
++	dwc_otg_pcd_ep_t *ep0 = &pcd->ep0;
++	if (pcd->ep0state == EP0_STALL) {
++		DWC_DEBUGPL(DBG_PCD, "EP0 STALLED\n");
++		return;
++	}
++	pcd->ep0state = EP0_OUT_STATUS_PHASE;
++
++	DWC_DEBUGPL(DBG_PCD, "EP0 OUT ZLP\n");
++	ep0->dwc_ep.xfer_len = 0;
++	ep0->dwc_ep.xfer_count = 0;
++	ep0->dwc_ep.is_in = 0;
++	ep0->dwc_ep.dma_addr = pcd->setup_pkt_dma_handle;
++	dwc_otg_ep0_start_transfer(GET_CORE_IF(pcd), &ep0->dwc_ep);
++
++	/* Prepare for more SETUP Packets */
++	if (GET_CORE_IF(pcd)->dma_enable == 0) {
++		ep0_out_start(GET_CORE_IF(pcd), pcd);
++	}
++}
++
++/**
++ * Clear the EP halt (STALL) and if pending requests start the
++ * transfer.
++ */
++static inline void pcd_clear_halt(dwc_otg_pcd_t * pcd, dwc_otg_pcd_ep_t * ep)
++{
++	if (ep->dwc_ep.stall_clear_flag == 0)
++		dwc_otg_ep_clear_stall(GET_CORE_IF(pcd), &ep->dwc_ep);
++
++	/* Reactivate the EP */
++	dwc_otg_ep_activate(GET_CORE_IF(pcd), &ep->dwc_ep);
++	if (ep->stopped) {
++		ep->stopped = 0;
++		/* If there is a request in the EP queue start it */
++
++		/** @todo FIXME: this causes an EP mismatch in DMA mode.
++		 * epmismatch not yet implemented. */
++
++		/*
++		 * The fixme above is addressed by implementing a tasklet that
++		 * calls start_next_request() outside of interrupt context,
++		 * some time after a clear-halt setup packet has been handled.
++		 * EP mismatch handling still needs to be implemented if a
++		 * gadget ever uses more than one endpoint at once.
++		 */
++		ep->queue_sof = 1;
++		DWC_TASK_SCHEDULE(pcd->start_xfer_tasklet);
++	}
++	/* Start Control Status Phase */
++	do_setup_in_status_phase(pcd);
++}
++
++/**
++ * This function is called when the SET_FEATURE TEST_MODE Setup packet
++ * is sent from the host.  The Device Control register is written with
++ * the Test Mode bits set to the specified Test Mode.  This is done as
++ * a tasklet so that the "Status" phase of the control transfer
++ * completes before transmitting the TEST packets.
++ *
++ * @todo This has not been tested since the tasklet struct was put
++ * into the PCD struct!
++ *
++ */
++void do_test_mode(void *data)
++{
++	dctl_data_t dctl;
++	dwc_otg_pcd_t *pcd = (dwc_otg_pcd_t *) data;
++	dwc_otg_core_if_t *core_if = GET_CORE_IF(pcd);
++	int test_mode = pcd->test_mode;
++
++//        DWC_WARN("%s() has not been tested since being rewritten!\n", __func__);
++
++	dctl.d32 = DWC_READ_REG32(&core_if->dev_if->dev_global_regs->dctl);
++	switch (test_mode) {
++	case 1:		// TEST_J
++		dctl.b.tstctl = 1;
++		break;
++
++	case 2:		// TEST_K
++		dctl.b.tstctl = 2;
++		break;
++
++	case 3:		// TEST_SE0_NAK
++		dctl.b.tstctl = 3;
++		break;
++
++	case 4:		// TEST_PACKET
++		dctl.b.tstctl = 4;
++		break;
++
++	case 5:		// TEST_FORCE_ENABLE
++		dctl.b.tstctl = 5;
++		break;
++	}
++	DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->dctl, dctl.d32);
++}
++
++/**
++ * This function processes the GET_STATUS Setup Commands.
++ */
++static inline void do_get_status(dwc_otg_pcd_t * pcd)
++{
++	usb_device_request_t ctrl = pcd->setup_pkt->req;
++	dwc_otg_pcd_ep_t *ep;
++	dwc_otg_pcd_ep_t *ep0 = &pcd->ep0;
++	uint16_t *status = pcd->status_buf;
++	dwc_otg_core_if_t *core_if = GET_CORE_IF(pcd);
++
++#ifdef DEBUG_EP0
++	DWC_DEBUGPL(DBG_PCD,
++		    "GET_STATUS %02x.%02x v%04x i%04x l%04x\n",
++		    ctrl.bmRequestType, ctrl.bRequest,
++		    UGETW(ctrl.wValue), UGETW(ctrl.wIndex),
++		    UGETW(ctrl.wLength));
++#endif
++
++	switch (UT_GET_RECIPIENT(ctrl.bmRequestType)) {
++	case UT_DEVICE:
++		if(UGETW(ctrl.wIndex) == 0xF000) { /* OTG Status selector */
++			DWC_PRINTF("wIndex - %d\n", UGETW(ctrl.wIndex));
++			DWC_PRINTF("OTG VERSION - %d\n", core_if->otg_ver);
++			DWC_PRINTF("OTG CAP - %d, %d\n",
++				   core_if->core_params->otg_cap,
++						DWC_OTG_CAP_PARAM_HNP_SRP_CAPABLE);
++			if (core_if->otg_ver == 1
++			    && core_if->core_params->otg_cap ==
++			    DWC_OTG_CAP_PARAM_HNP_SRP_CAPABLE) {
++				uint8_t *otgsts = (uint8_t*)pcd->status_buf;
++				*otgsts = (core_if->otg_sts & 0x1);
++				pcd->ep0_pending = 1;
++				ep0->dwc_ep.start_xfer_buff =
++				    (uint8_t *) otgsts;
++				ep0->dwc_ep.xfer_buff = (uint8_t *) otgsts;
++				ep0->dwc_ep.dma_addr =
++				    pcd->status_buf_dma_handle;
++				ep0->dwc_ep.xfer_len = 1;
++				ep0->dwc_ep.xfer_count = 0;
++				ep0->dwc_ep.total_len = ep0->dwc_ep.xfer_len;
++				dwc_otg_ep0_start_transfer(GET_CORE_IF(pcd),
++							   &ep0->dwc_ep);
++				return;
++			} else {
++				ep0_do_stall(pcd, -DWC_E_NOT_SUPPORTED);
++				return;
++			}
++			break;
++		} else {
++			*status = 0x1;	/* Self powered */
++			*status |= pcd->remote_wakeup_enable << 1;
++			break;
++		}
++	case UT_INTERFACE:
++		*status = 0;
++		break;
++
++	case UT_ENDPOINT:
++		ep = get_ep_by_addr(pcd, UGETW(ctrl.wIndex));
++		if (ep == 0 || UGETW(ctrl.wLength) > 2) {
++			ep0_do_stall(pcd, -DWC_E_NOT_SUPPORTED);
++			return;
++		}
++		/** @todo check for EP stall */
++		*status = ep->stopped;
++		break;
++	}
++	pcd->ep0_pending = 1;
++	ep0->dwc_ep.start_xfer_buff = (uint8_t *) status;
++	ep0->dwc_ep.xfer_buff = (uint8_t *) status;
++	ep0->dwc_ep.dma_addr = pcd->status_buf_dma_handle;
++	ep0->dwc_ep.xfer_len = 2;
++	ep0->dwc_ep.xfer_count = 0;
++	ep0->dwc_ep.total_len = ep0->dwc_ep.xfer_len;
++	dwc_otg_ep0_start_transfer(GET_CORE_IF(pcd), &ep0->dwc_ep);
++}
++
++/**
++ * This function processes the SET_FEATURE Setup Commands.
++ */
++static inline void do_set_feature(dwc_otg_pcd_t * pcd)
++{
++	dwc_otg_core_if_t *core_if = GET_CORE_IF(pcd);
++	dwc_otg_core_global_regs_t *global_regs = core_if->core_global_regs;
++	usb_device_request_t ctrl = pcd->setup_pkt->req;
++	dwc_otg_pcd_ep_t *ep = 0;
++	int32_t otg_cap_param = core_if->core_params->otg_cap;
++	gotgctl_data_t gotgctl = {.d32 = 0 };
++
++	DWC_DEBUGPL(DBG_PCD, "SET_FEATURE:%02x.%02x v%04x i%04x l%04x\n",
++		    ctrl.bmRequestType, ctrl.bRequest,
++		    UGETW(ctrl.wValue), UGETW(ctrl.wIndex),
++		    UGETW(ctrl.wLength));
++	DWC_DEBUGPL(DBG_PCD, "otg_cap=%d\n", otg_cap_param);
++
++	switch (UT_GET_RECIPIENT(ctrl.bmRequestType)) {
++	case UT_DEVICE:
++		switch (UGETW(ctrl.wValue)) {
++		case UF_DEVICE_REMOTE_WAKEUP:
++			pcd->remote_wakeup_enable = 1;
++			break;
++
++		case UF_TEST_MODE:
++			/* Setup the Test Mode tasklet to do the Test
++			 * Packet generation after the SETUP Status
++			 * phase has completed. */
++
++			/** @todo This has not been tested since the
++			 * tasklet struct was put into the PCD
++			 * struct! */
++			pcd->test_mode = UGETW(ctrl.wIndex) >> 8;
++			DWC_TASK_SCHEDULE(pcd->test_mode_tasklet);
++			break;
++
++		case UF_DEVICE_B_HNP_ENABLE:
++			DWC_DEBUGPL(DBG_PCDV,
++				    "SET_FEATURE: USB_DEVICE_B_HNP_ENABLE\n");
++
++			/* dev may initiate HNP */
++			if (otg_cap_param == DWC_OTG_CAP_PARAM_HNP_SRP_CAPABLE) {
++				pcd->b_hnp_enable = 1;
++				dwc_otg_pcd_update_otg(pcd, 0);
++				DWC_DEBUGPL(DBG_PCD, "Request B HNP\n");
++				/**@todo Is the gotgctl.devhnpen cleared
++				 * by a USB Reset? */
++				gotgctl.b.devhnpen = 1;
++				gotgctl.b.hnpreq = 1;
++				DWC_WRITE_REG32(&global_regs->gotgctl,
++						gotgctl.d32);
++			} else {
++				ep0_do_stall(pcd, -DWC_E_NOT_SUPPORTED);
++				return;
++			}
++			break;
++
++		case UF_DEVICE_A_HNP_SUPPORT:
++			/* RH port supports HNP */
++			DWC_DEBUGPL(DBG_PCDV,
++				    "SET_FEATURE: USB_DEVICE_A_HNP_SUPPORT\n");
++			if (otg_cap_param == DWC_OTG_CAP_PARAM_HNP_SRP_CAPABLE) {
++				pcd->a_hnp_support = 1;
++				dwc_otg_pcd_update_otg(pcd, 0);
++			} else {
++				ep0_do_stall(pcd, -DWC_E_NOT_SUPPORTED);
++				return;
++			}
++			break;
++
++		case UF_DEVICE_A_ALT_HNP_SUPPORT:
++			/* other RH port does */
++			DWC_DEBUGPL(DBG_PCDV,
++				    "SET_FEATURE: USB_DEVICE_A_ALT_HNP_SUPPORT\n");
++			if (otg_cap_param == DWC_OTG_CAP_PARAM_HNP_SRP_CAPABLE) {
++				pcd->a_alt_hnp_support = 1;
++				dwc_otg_pcd_update_otg(pcd, 0);
++			} else {
++				ep0_do_stall(pcd, -DWC_E_NOT_SUPPORTED);
++				return;
++			}
++			break;
++
++		default:
++			ep0_do_stall(pcd, -DWC_E_NOT_SUPPORTED);
++			return;
++
++		}
++		do_setup_in_status_phase(pcd);
++		break;
++
++	case UT_INTERFACE:
++		do_gadget_setup(pcd, &ctrl);
++		break;
++
++	case UT_ENDPOINT:
++		if (UGETW(ctrl.wValue) == UF_ENDPOINT_HALT) {
++			ep = get_ep_by_addr(pcd, UGETW(ctrl.wIndex));
++			if (ep == 0) {
++				ep0_do_stall(pcd, -DWC_E_NOT_SUPPORTED);
++				return;
++			}
++			ep->stopped = 1;
++			dwc_otg_ep_set_stall(core_if, &ep->dwc_ep);
++		}
++		do_setup_in_status_phase(pcd);
++		break;
++	}
++}
++
++/**
++ * This function processes the CLEAR_FEATURE Setup Commands.
++ */
++static inline void do_clear_feature(dwc_otg_pcd_t * pcd)
++{
++	usb_device_request_t ctrl = pcd->setup_pkt->req;
++	dwc_otg_pcd_ep_t *ep = 0;
++
++	DWC_DEBUGPL(DBG_PCD,
++		    "CLEAR_FEATURE:%02x.%02x v%04x i%04x l%04x\n",
++		    ctrl.bmRequestType, ctrl.bRequest,
++		    UGETW(ctrl.wValue), UGETW(ctrl.wIndex),
++		    UGETW(ctrl.wLength));
++
++	switch (UT_GET_RECIPIENT(ctrl.bmRequestType)) {
++	case UT_DEVICE:
++		switch (UGETW(ctrl.wValue)) {
++		case UF_DEVICE_REMOTE_WAKEUP:
++			pcd->remote_wakeup_enable = 0;
++			break;
++
++		case UF_TEST_MODE:
++			/** @todo Add CLEAR_FEATURE for TEST modes. */
++			break;
++
++		default:
++			ep0_do_stall(pcd, -DWC_E_NOT_SUPPORTED);
++			return;
++		}
++		do_setup_in_status_phase(pcd);
++		break;
++
++	case UT_ENDPOINT:
++		ep = get_ep_by_addr(pcd, UGETW(ctrl.wIndex));
++		if (ep == 0) {
++			ep0_do_stall(pcd, -DWC_E_NOT_SUPPORTED);
++			return;
++		}
++
++		pcd_clear_halt(pcd, ep);
++
++		break;
++	}
++}
++
++/**
++ * This function processes the SET_ADDRESS Setup Commands.
++ */
++static inline void do_set_address(dwc_otg_pcd_t * pcd)
++{
++	dwc_otg_dev_if_t *dev_if = GET_CORE_IF(pcd)->dev_if;
++	usb_device_request_t ctrl = pcd->setup_pkt->req;
++
++	if (ctrl.bmRequestType == UT_DEVICE) {
++		dcfg_data_t dcfg = {.d32 = 0 };
++
++#ifdef DEBUG_EP0
++//                      DWC_DEBUGPL(DBG_PCDV, "SET_ADDRESS:%d\n", ctrl.wValue);
++#endif
++		dcfg.b.devaddr = UGETW(ctrl.wValue);
++		DWC_MODIFY_REG32(&dev_if->dev_global_regs->dcfg, 0, dcfg.d32);
++		do_setup_in_status_phase(pcd);
++	}
++}
++
++/**
++ *	This function processes SETUP commands. In Linux, the USB Command
++ *	processing is done in two places - the first being the PCD and the
++ *	second in the Gadget Driver (for example, the File-Backed Storage
++ *	Gadget Driver).
++ *
++ * <table>
++ * <tr><td>Command	</td><td>Driver </td><td>Description</td></tr>
++ *
++ * <tr><td>GET_STATUS </td><td>PCD </td><td>Command is processed as
++ * defined in chapter 9 of the USB 2.0 Specification.
++ * </td></tr>
++ *
++ * <tr><td>CLEAR_FEATURE </td><td>PCD </td><td>The Device and Endpoint
++ * requests are processed by the PCD; only the ENDPOINT_HALT feature is
++ * handled, all other and interface requests are ignored.</td></tr>
++ *
++ * <tr><td>SET_FEATURE </td><td>PCD </td><td>The Device and Endpoint
++ * requests are processed by the PCD.  Interface requests are passed
++ * to the Gadget Driver.</td></tr>
++ *
++ * <tr><td>SET_ADDRESS </td><td>PCD </td><td>Program the DCFG reg,
++ * with device address received </td></tr>
++ *
++ * <tr><td>GET_DESCRIPTOR </td><td>Gadget Driver </td><td>Return the
++ * requested descriptor</td></tr>
++ *
++ * <tr><td>SET_DESCRIPTOR </td><td>Gadget Driver </td><td>Optional -
++ * not implemented by any of the existing Gadget Drivers.</td></tr>
++ *
++ * <tr><td>SET_CONFIGURATION </td><td>Gadget Driver </td><td>Disable
++ * all EPs and enable EPs for new configuration.</td></tr>
++ *
++ * <tr><td>GET_CONFIGURATION </td><td>Gadget Driver </td><td>Return
++ * the current configuration</td></tr>
++ *
++ * <tr><td>SET_INTERFACE </td><td>Gadget Driver </td><td>Disable all
++ * EPs and enable EPs for new configuration.</td></tr>
++ *
++ * <tr><td>GET_INTERFACE </td><td>Gadget Driver </td><td>Return the
++ * current interface.</td></tr>
++ *
++ * <tr><td>SYNC_FRAME </td><td>PCD </td><td>Display debug
++ * message.</td></tr>
++ * </table>
++ *
++ * When the SETUP Phase Done interrupt occurs, the PCD SETUP commands are
++ * processed by pcd_setup. Calling the Function Driver's setup function from
++ * pcd_setup processes the gadget SETUP commands.
++ */
++static inline void pcd_setup(dwc_otg_pcd_t * pcd)
++{
++	dwc_otg_core_if_t *core_if = GET_CORE_IF(pcd);
++	dwc_otg_dev_if_t *dev_if = core_if->dev_if;
++	usb_device_request_t ctrl = pcd->setup_pkt->req;
++	dwc_otg_pcd_ep_t *ep0 = &pcd->ep0;
++
++	deptsiz0_data_t doeptsize0 = {.d32 = 0 };
++
++#ifdef DWC_UTE_CFI
++	int retval = 0;
++	struct cfi_usb_ctrlrequest cfi_req;
++#endif
++
++	doeptsize0.d32 = DWC_READ_REG32(&dev_if->out_ep_regs[0]->doeptsiz);
++
++	/** In Buffer DMA mode more than 1 setup packet is not supported until 3.00a */
++	if (core_if->dma_enable && core_if->dma_desc_enable == 0
++	    && (doeptsize0.b.supcnt < 2)
++	    && (core_if->snpsid < OTG_CORE_REV_2_94a)) {
++		DWC_ERROR
++		    ("\n\n-----------	 CANNOT handle > 1 setup packet in DMA mode\n\n");
++	}
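++	/*
++	 * For cores >= 3.00a in Buffer DMA mode the SETUP packet to use is
++	 * derived from the remaining SUPCnt (which counts down as SETUP
++	 * packets are received) and the rollover flag.
++	 */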
++	if ((core_if->snpsid >= OTG_CORE_REV_3_00a)
++	    && (core_if->dma_enable == 1) && (core_if->dma_desc_enable == 0)) {
++		ctrl =
++		    (pcd->setup_pkt +
++		     (3 - doeptsize0.b.supcnt - 1 +
++		      ep0->dwc_ep.stp_rollover))->req;
++	}
++#ifdef DEBUG_EP0
++	DWC_DEBUGPL(DBG_PCD, "SETUP %02x.%02x v%04x i%04x l%04x\n",
++		    ctrl.bmRequestType, ctrl.bRequest,
++		    UGETW(ctrl.wValue), UGETW(ctrl.wIndex),
++		    UGETW(ctrl.wLength));
++#endif
++
++	/* Clean up the request queue */
++	dwc_otg_request_nuke(ep0);
++	ep0->stopped = 0;
++
++	if (ctrl.bmRequestType & UE_DIR_IN) {
++		ep0->dwc_ep.is_in = 1;
++		pcd->ep0state = EP0_IN_DATA_PHASE;
++	} else {
++		ep0->dwc_ep.is_in = 0;
++		pcd->ep0state = EP0_OUT_DATA_PHASE;
++	}
++
++	if (UGETW(ctrl.wLength) == 0) {
++		ep0->dwc_ep.is_in = 1;
++		pcd->ep0state = EP0_IN_STATUS_PHASE;
++	}
++
++	if (UT_GET_TYPE(ctrl.bmRequestType) != UT_STANDARD) {
++
++#ifdef DWC_UTE_CFI
++		DWC_MEMCPY(&cfi_req, &ctrl, sizeof(usb_device_request_t));
++
++		//printk(KERN_ALERT "CFI: req_type=0x%02x; req=0x%02x\n",
++		//	ctrl.bRequestType, ctrl.bRequest);
++		if (UT_GET_TYPE(cfi_req.bRequestType) == UT_VENDOR) {
++			if (cfi_req.bRequest > 0xB0 && cfi_req.bRequest < 0xBF) {
++				retval = cfi_setup(pcd, &cfi_req);
++				if (retval < 0) {
++					ep0_do_stall(pcd, retval);
++					pcd->ep0_pending = 0;
++					return;
++				}
++
++				/* if need gadget setup then call it and check the retval */
++				if (pcd->cfi->need_gadget_att) {
++					retval =
++					    cfi_gadget_setup(pcd,
++							     &pcd->
++							     cfi->ctrl_req);
++					if (retval < 0) {
++						pcd->ep0_pending = 0;
++						return;
++					}
++				}
++
++				if (pcd->cfi->need_status_in_complete) {
++					do_setup_in_status_phase(pcd);
++				}
++				return;
++			}
++		}
++#endif
++
++		/* handle non-standard (class/vendor) requests in the gadget driver */
++		do_gadget_setup(pcd, &ctrl);
++		return;
++	}
++
++	/** @todo NGS: Handle bad setup packet? */
++
++///////////////////////////////////////////
++//// --- Standard Request handling --- ////
++
++	switch (ctrl.bRequest) {
++	case UR_GET_STATUS:
++		do_get_status(pcd);
++		break;
++
++	case UR_CLEAR_FEATURE:
++		do_clear_feature(pcd);
++		break;
++
++	case UR_SET_FEATURE:
++		do_set_feature(pcd);
++		break;
++
++	case UR_SET_ADDRESS:
++		do_set_address(pcd);
++		break;
++
++	case UR_SET_INTERFACE:
++	case UR_SET_CONFIG:
++//              _pcd->request_config = 1;       /* Configuration changed */
++		do_gadget_setup(pcd, &ctrl);
++		break;
++
++	case UR_SYNCH_FRAME:
++		do_gadget_setup(pcd, &ctrl);
++		break;
++
++	default:
++		/* Call the Gadget Driver's setup functions */
++		do_gadget_setup(pcd, &ctrl);
++		break;
++	}
++}
++
++/**
++ * This function completes the ep0 control transfer.
++ */
++static int32_t ep0_complete_request(dwc_otg_pcd_ep_t * ep)
++{
++	dwc_otg_core_if_t *core_if = GET_CORE_IF(ep->pcd);
++	dwc_otg_dev_if_t *dev_if = core_if->dev_if;
++	dwc_otg_dev_in_ep_regs_t *in_ep_regs =
++	    dev_if->in_ep_regs[ep->dwc_ep.num];
++#ifdef DEBUG_EP0
++	dwc_otg_dev_out_ep_regs_t *out_ep_regs =
++	    dev_if->out_ep_regs[ep->dwc_ep.num];
++#endif
++	deptsiz0_data_t deptsiz;
++	dev_dma_desc_sts_t desc_sts;
++	dwc_otg_pcd_request_t *req;
++	int is_last = 0;
++	dwc_otg_pcd_t *pcd = ep->pcd;
++
++#ifdef DWC_UTE_CFI
++	struct cfi_usb_ctrlrequest *ctrlreq;
++	int retval = -DWC_E_NOT_SUPPORTED;
++#endif
++
++	desc_sts.b.bytes = 0;
++
++	if (pcd->ep0_pending && DWC_CIRCLEQ_EMPTY(&ep->queue)) {
++		if (ep->dwc_ep.is_in) {
++#ifdef DEBUG_EP0
++			DWC_DEBUGPL(DBG_PCDV, "Do setup OUT status phase\n");
++#endif
++			do_setup_out_status_phase(pcd);
++		} else {
++#ifdef DEBUG_EP0
++			DWC_DEBUGPL(DBG_PCDV, "Do setup IN status phase\n");
++#endif
++
++#ifdef DWC_UTE_CFI
++			ctrlreq = &pcd->cfi->ctrl_req;
++
++			if (UT_GET_TYPE(ctrlreq->bRequestType) == UT_VENDOR) {
++				if (ctrlreq->bRequest > 0xB0
++				    && ctrlreq->bRequest < 0xBF) {
++
++					/* Return if the PCD failed to handle the request */
++					if ((retval =
++					     pcd->cfi->ops.
++					     ctrl_write_complete(pcd->cfi,
++								 pcd)) < 0) {
++						CFI_INFO
++						    ("ERROR setting a new value in the PCD(%d)\n",
++						     retval);
++						ep0_do_stall(pcd, retval);
++						pcd->ep0_pending = 0;
++						return 0;
++					}
++
++					/* If the gadget needs to be notified on the request */
++					if (pcd->cfi->need_gadget_att == 1) {
++						//retval = do_gadget_setup(pcd, &pcd->cfi->ctrl_req);
++						retval =
++						    cfi_gadget_setup(pcd,
++								     &pcd->cfi->
++								     ctrl_req);
++
++						/* Return from the function if the gadget failed to process
++						 * the request properly - this should never happen !!!
++						 */
++						if (retval < 0) {
++							CFI_INFO
++							    ("ERROR setting a new value in the gadget(%d)\n",
++							     retval);
++							pcd->ep0_pending = 0;
++							return 0;
++						}
++					}
++
++					CFI_INFO("%s: RETVAL=%d\n", __func__,
++						 retval);
++					/* If we hit here then the PCD and the gadget has properly
++					 * handled the request - so send the ZLP IN to the host.
++					 */
++					/* @todo: MAS - decide whether we need to start the setup
++					 * stage based on the need_setup value of the cfi object
++					 */
++					do_setup_in_status_phase(pcd);
++					pcd->ep0_pending = 0;
++					return 1;
++				}
++			}
++#endif
++
++			do_setup_in_status_phase(pcd);
++		}
++		pcd->ep0_pending = 0;
++		return 1;
++	}
++
++	if (DWC_CIRCLEQ_EMPTY(&ep->queue)) {
++		return 0;
++	}
++	req = DWC_CIRCLEQ_FIRST(&ep->queue);
++
++	if (pcd->ep0state == EP0_OUT_STATUS_PHASE
++	    || pcd->ep0state == EP0_IN_STATUS_PHASE) {
++		is_last = 1;
++	} else if (ep->dwc_ep.is_in) {
++		deptsiz.d32 = DWC_READ_REG32(&in_ep_regs->dieptsiz);
++		if (core_if->dma_desc_enable != 0)
++			desc_sts = dev_if->in_desc_addr->status;
++#ifdef DEBUG_EP0
++		DWC_DEBUGPL(DBG_PCDV, "%d len=%d  xfersize=%d pktcnt=%d\n",
++			    ep->dwc_ep.num, ep->dwc_ep.xfer_len,
++			    deptsiz.b.xfersize, deptsiz.b.pktcnt);
++#endif
++
++		if (((core_if->dma_desc_enable == 0)
++		     && (deptsiz.b.xfersize == 0))
++		    || ((core_if->dma_desc_enable != 0)
++			&& (desc_sts.b.bytes == 0))) {
++			req->actual = ep->dwc_ep.xfer_count;
++			/* Is a Zero Len Packet needed? */
++			if (req->sent_zlp) {
++#ifdef DEBUG_EP0
++				DWC_DEBUGPL(DBG_PCD, "Setup Rx ZLP\n");
++#endif
++				req->sent_zlp = 0;
++			}
++			do_setup_out_status_phase(pcd);
++		}
++	} else {
++		/* ep0-OUT */
++#ifdef DEBUG_EP0
++		deptsiz.d32 = DWC_READ_REG32(&out_ep_regs->doeptsiz);
++		DWC_DEBUGPL(DBG_PCDV, "%d len=%d xsize=%d pktcnt=%d\n",
++			    ep->dwc_ep.num, ep->dwc_ep.xfer_len,
++			    deptsiz.b.xfersize, deptsiz.b.pktcnt);
++#endif
++		req->actual = ep->dwc_ep.xfer_count;
++
++		/* Is a Zero Len Packet needed? */
++		if (req->sent_zlp) {
++#ifdef DEBUG_EP0
++			DWC_DEBUGPL(DBG_PCDV, "Setup Tx ZLP\n");
++#endif
++			req->sent_zlp = 0;
++		}
++		/* For older cores do the setup-in status phase in Slave/Buffer DMA
++		 * modes; starting from 3.00a do that only in Slave mode, and for
++		 * DMA modes just re-enable EP0 OUT here */
++		if (core_if->dma_enable == 0
++		    || (core_if->dma_desc_enable == 0
++			&& core_if->snpsid <= OTG_CORE_REV_2_94a)) {
++			do_setup_in_status_phase(pcd);
++		} else if (core_if->snpsid >= OTG_CORE_REV_3_00a) {
++			DWC_DEBUGPL(DBG_PCDV,
++				    "Enable out ep before in status phase\n");
++			ep0_out_start(core_if, pcd);
++		}
++	}
++
++	/* Complete the request */
++	if (is_last) {
++		dwc_otg_request_done(ep, req, 0);
++		ep->dwc_ep.start_xfer_buff = 0;
++		ep->dwc_ep.xfer_buff = 0;
++		ep->dwc_ep.xfer_len = 0;
++		return 1;
++	}
++	return 0;
++}
++
++#ifdef DWC_UTE_CFI
++/**
++ * This function traverses all the CFI DMA descriptors and accumulates
++ * the bytes that are left to be transferred.
++ *
++ * @return The total bytes left to be transferred, or a negative value on failure
++ */
++static inline int cfi_calc_desc_residue(dwc_otg_pcd_ep_t * ep)
++{
++	int32_t ret = 0;
++	int i;
++	struct dwc_otg_dma_desc *ddesc = NULL;
++	struct cfi_ep *cfiep;
++
++	/* See if the pcd_ep has its respective cfi_ep mapped */
++	cfiep = get_cfi_ep_by_pcd_ep(ep->pcd->cfi, ep);
++	if (!cfiep) {
++		CFI_INFO("%s: Failed to find ep\n", __func__);
++		return -1;
++	}
++
++	ddesc = ep->dwc_ep.descs;
++
++	for (i = 0; (i < cfiep->desc_count) && (i < MAX_DMA_DESCS_PER_EP); i++) {
++
++#if defined(PRINT_CFI_DMA_DESCS)
++		print_desc(ddesc, ep->ep.name, i);
++#endif
++		ret += ddesc->status.b.bytes;
++		ddesc++;
++	}
++
++	if (ret)
++		CFI_INFO("!!!!!!!!!! WARNING (%s) - residue=%d\n", __func__,
++			 ret);
++
++	return ret;
++}
++#endif
++
++/**
++ * This function completes the request for the EP. If there are
++ * additional requests for the EP in the queue they will be started.
++ */
++static void complete_ep(dwc_otg_pcd_ep_t * ep)
++{
++	dwc_otg_core_if_t *core_if = GET_CORE_IF(ep->pcd);
++	dwc_otg_dev_if_t *dev_if = core_if->dev_if;
++	dwc_otg_dev_in_ep_regs_t *in_ep_regs =
++	    dev_if->in_ep_regs[ep->dwc_ep.num];
++	deptsiz_data_t deptsiz;
++	dev_dma_desc_sts_t desc_sts;
++	dwc_otg_pcd_request_t *req = 0;
++	dwc_otg_dev_dma_desc_t *dma_desc;
++	uint32_t byte_count = 0;
++	int is_last = 0;
++	int i;
++
++	DWC_DEBUGPL(DBG_PCDV, "%s() %d-%s\n", __func__, ep->dwc_ep.num,
++		    (ep->dwc_ep.is_in ? "IN" : "OUT"));
++
++	/* Get any pending requests */
++	if (!DWC_CIRCLEQ_EMPTY(&ep->queue)) {
++		req = DWC_CIRCLEQ_FIRST(&ep->queue);
++		if (!req) {
++			DWC_PRINTF("complete_ep 0x%p, req = NULL!\n", ep);
++			return;
++		}
++	} else {
++		DWC_PRINTF("complete_ep 0x%p, ep->queue empty!\n", ep);
++		return;
++	}
++
++	DWC_DEBUGPL(DBG_PCD, "Requests %d\n", ep->pcd->request_pending);
++
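++	/*
++	 * For IN endpoints the completed byte count is derived from DIEPTSIZ
++	 * (Slave/Buffer DMA mode) or from the DMA descriptor status
++	 * (Descriptor DMA mode); OUT endpoints are handled in the else branch.
++	 */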
++	if (ep->dwc_ep.is_in) {
++		deptsiz.d32 = DWC_READ_REG32(&in_ep_regs->dieptsiz);
++
++		if (core_if->dma_enable) {
++			if (core_if->dma_desc_enable == 0) {
++				if (deptsiz.b.xfersize == 0
++				    && deptsiz.b.pktcnt == 0) {
++					byte_count =
++					    ep->dwc_ep.xfer_len -
++					    ep->dwc_ep.xfer_count;
++
++					ep->dwc_ep.xfer_buff += byte_count;
++					ep->dwc_ep.dma_addr += byte_count;
++					ep->dwc_ep.xfer_count += byte_count;
++
++					DWC_DEBUGPL(DBG_PCDV,
++						    "%d-%s len=%d  xfersize=%d pktcnt=%d\n",
++						    ep->dwc_ep.num,
++						    (ep->dwc_ep.
++						     is_in ? "IN" : "OUT"),
++						    ep->dwc_ep.xfer_len,
++						    deptsiz.b.xfersize,
++						    deptsiz.b.pktcnt);
++
++					if (ep->dwc_ep.xfer_len <
++					    ep->dwc_ep.total_len) {
++						dwc_otg_ep_start_transfer
++						    (core_if, &ep->dwc_ep);
++					} else if (ep->dwc_ep.sent_zlp) {
++						/*
++						 * If the queued transfer size is a
++						 * multiple of the EP's max packet size
++						 * and the usb_request zero flag is set,
++						 * a zero-length packet must follow the
++						 * data. In Slave and Buffer DMA modes
++						 * SW has to initiate two transfers: one
++						 * with the transfer size and a second
++						 * one of length 0. In Descriptor DMA
++						 * mode SW can initiate a single transfer
++						 * that handles all packets, including
++						 * the trailing zero-length packet.
++						 */
++						ep->dwc_ep.sent_zlp = 0;
++						dwc_otg_ep_start_zl_transfer
++						    (core_if, &ep->dwc_ep);
++					} else {
++						is_last = 1;
++					}
++				} else {
++					if (ep->dwc_ep.type ==
++					    DWC_OTG_EP_TYPE_ISOC) {
++						req->actual = 0;
++						dwc_otg_request_done(ep, req, 0);
++
++						ep->dwc_ep.start_xfer_buff = 0;
++						ep->dwc_ep.xfer_buff = 0;
++						ep->dwc_ep.xfer_len = 0;
++
++						/* If there is a request in the queue start it. */
++						start_next_request(ep);
++					} else
++						DWC_WARN
++						("Incomplete transfer (%d - %s [siz=%d pkt=%d])\n",
++						ep->dwc_ep.num,
++						(ep->dwc_ep.is_in ? "IN" : "OUT"),
++						deptsiz.b.xfersize,
++						deptsiz.b.pktcnt);
++				}
++			} else {
++				dma_desc = ep->dwc_ep.desc_addr;
++				byte_count = 0;
++				ep->dwc_ep.sent_zlp = 0;
++
++#ifdef DWC_UTE_CFI
++				CFI_INFO("%s: BUFFER_MODE=%d\n", __func__,
++					 ep->dwc_ep.buff_mode);
++				if (ep->dwc_ep.buff_mode != BM_STANDARD) {
++					int residue;
++
++					residue = cfi_calc_desc_residue(ep);
++					if (residue < 0)
++						return;
++
++					byte_count = residue;
++				} else {
++#endif
++					for (i = 0; i < ep->dwc_ep.desc_cnt;
++					     ++i) {
++						desc_sts = dma_desc->status;
++						byte_count += desc_sts.b.bytes;
++						dma_desc++;
++					}
++#ifdef DWC_UTE_CFI
++				}
++#endif
++				if (byte_count == 0) {
++					ep->dwc_ep.xfer_count =
++					    ep->dwc_ep.total_len;
++					is_last = 1;
++				} else {
++					DWC_WARN("Incomplete transfer\n");
++				}
++			}
++		} else {
++			if (deptsiz.b.xfersize == 0 && deptsiz.b.pktcnt == 0) {
++				DWC_DEBUGPL(DBG_PCDV,
++					    "%d-%s len=%d  xfersize=%d pktcnt=%d\n",
++					    ep->dwc_ep.num,
++					    ep->dwc_ep.is_in ? "IN" : "OUT",
++					    ep->dwc_ep.xfer_len,
++					    deptsiz.b.xfersize,
++					    deptsiz.b.pktcnt);
++
++				/* Check if the whole transfer was completed;
++				 * if not, set up a transfer for the next portion of data.
++				 */
++				if (ep->dwc_ep.xfer_len < ep->dwc_ep.total_len) {
++					dwc_otg_ep_start_transfer(core_if,
++								  &ep->dwc_ep);
++				} else if (ep->dwc_ep.sent_zlp) {
++					/*
++					 * If the queued transfer size is a multiple of
++					 * the EP's max packet size and the usb_request
++					 * zero flag is set, a zero-length packet must
++					 * follow the data. In Slave and Buffer DMA modes
++					 * SW has to initiate two transfers: one with the
++					 * transfer size and a second one of length 0.
++					 * In Descriptor DMA mode SW can initiate a single
++					 * transfer that handles all packets, including
++					 * the trailing zero-length packet.
++					 */
++					ep->dwc_ep.sent_zlp = 0;
++					dwc_otg_ep_start_zl_transfer(core_if,
++								     &ep->dwc_ep);
++				} else {
++					is_last = 1;
++				}
++			} else {
++				DWC_WARN
++				    ("Incomplete transfer (%d-%s [siz=%d pkt=%d])\n",
++				     ep->dwc_ep.num,
++				     (ep->dwc_ep.is_in ? "IN" : "OUT"),
++				     deptsiz.b.xfersize, deptsiz.b.pktcnt);
++			}
++		}
++	} else {
++		dwc_otg_dev_out_ep_regs_t *out_ep_regs =
++		    dev_if->out_ep_regs[ep->dwc_ep.num];
++		desc_sts.d32 = 0;
++		if (core_if->dma_enable) {
++			if (core_if->dma_desc_enable) {
++				dma_desc = ep->dwc_ep.desc_addr;
++				byte_count = 0;
++				ep->dwc_ep.sent_zlp = 0;
++
++#ifdef DWC_UTE_CFI
++				CFI_INFO("%s: BUFFER_MODE=%d\n", __func__,
++					 ep->dwc_ep.buff_mode);
++				if (ep->dwc_ep.buff_mode != BM_STANDARD) {
++					int residue;
++					residue = cfi_calc_desc_residue(ep);
++					if (residue < 0)
++						return;
++					byte_count = residue;
++				} else {
++#endif
++
++					for (i = 0; i < ep->dwc_ep.desc_cnt;
++					     ++i) {
++						desc_sts = dma_desc->status;
++						byte_count += desc_sts.b.bytes;
++						dma_desc++;
++					}
++
++#ifdef DWC_UTE_CFI
++				}
++#endif
++				/* Check for interrupt OUT transfers whose max
++				 * packet size is not dword-aligned
++				 */
++				if (ep->dwc_ep.type == DWC_OTG_EP_TYPE_INTR &&
++							(ep->dwc_ep.maxpacket%4)) {
++					ep->dwc_ep.xfer_count =
++					    ep->dwc_ep.total_len - byte_count;
++					if ((ep->dwc_ep.xfer_len %
++					     ep->dwc_ep.maxpacket)
++					    && (ep->dwc_ep.xfer_len /
++						ep->dwc_ep.maxpacket <
++						MAX_DMA_DESC_CNT))
++						ep->dwc_ep.xfer_len -=
++						    (ep->dwc_ep.desc_cnt -
++						     1) * ep->dwc_ep.maxpacket +
++						    ep->dwc_ep.xfer_len %
++						    ep->dwc_ep.maxpacket;
++					else
++						ep->dwc_ep.xfer_len -=
++						    ep->dwc_ep.desc_cnt *
++						    ep->dwc_ep.maxpacket;
++					if (ep->dwc_ep.xfer_len > 0) {
++						dwc_otg_ep_start_transfer
++						    (core_if, &ep->dwc_ep);
++					} else {
++						is_last = 1;
++					}
++				} else {
++					ep->dwc_ep.xfer_count =
++					    ep->dwc_ep.total_len - byte_count +
++					    ((4 -
++					      (ep->dwc_ep.
++					       total_len & 0x3)) & 0x3);
++					is_last = 1;
++				}
++			} else {
++				deptsiz.d32 = 0;
++				deptsiz.d32 =
++				    DWC_READ_REG32(&out_ep_regs->doeptsiz);
++
++				byte_count = (ep->dwc_ep.xfer_len -
++					      ep->dwc_ep.xfer_count -
++					      deptsiz.b.xfersize);
++				ep->dwc_ep.xfer_buff += byte_count;
++				ep->dwc_ep.dma_addr += byte_count;
++				ep->dwc_ep.xfer_count += byte_count;
++
++				/* Check if the whole transfer was completed;
++				 * if not, set up a transfer for the next portion of data.
++				 */
++				if (ep->dwc_ep.xfer_len < ep->dwc_ep.total_len) {
++					dwc_otg_ep_start_transfer(core_if,
++								  &ep->dwc_ep);
++				} else if (ep->dwc_ep.sent_zlp) {
++					/*
++					 * If the queued transfer size is a multiple of
++					 * the EP's max packet size and the usb_request
++					 * zero flag is set, a zero-length packet must
++					 * follow the data. In Slave and Buffer DMA modes
++					 * SW has to initiate two transfers: one with the
++					 * transfer size and a second one of length 0.
++					 * In Descriptor DMA mode SW can initiate a single
++					 * transfer that handles all packets, including
++					 * the trailing zero-length packet.
++					 */
++					ep->dwc_ep.sent_zlp = 0;
++					dwc_otg_ep_start_zl_transfer(core_if,
++								     &ep->dwc_ep);
++				} else {
++					is_last = 1;
++				}
++			}
++		} else {
++			/* Check if the whole transfer was completed;
++			 * if not, set up a transfer for the next portion of data.
++			 */
++			if (ep->dwc_ep.xfer_len < ep->dwc_ep.total_len) {
++				dwc_otg_ep_start_transfer(core_if, &ep->dwc_ep);
++			} else if (ep->dwc_ep.sent_zlp) {
++				/*
++				 * If the queued transfer size is a multiple of the EP's
++				 * max packet size and the usb_request zero flag is set,
++				 * a zero-length packet must follow the data. In Slave and
++				 * Buffer DMA modes SW has to initiate two transfers: one
++				 * with the transfer size and a second one of length 0.
++				 * In Descriptor DMA mode SW can initiate a single transfer
++				 * that handles all packets, including the trailing
++				 * zero-length packet.
++				 */
++				ep->dwc_ep.sent_zlp = 0;
++				dwc_otg_ep_start_zl_transfer(core_if,
++							     &ep->dwc_ep);
++			} else {
++				is_last = 1;
++			}
++		}
++
++		DWC_DEBUGPL(DBG_PCDV,
++			    "addr %p,	 %d-%s len=%d cnt=%d xsize=%d pktcnt=%d\n",
++			    &out_ep_regs->doeptsiz, ep->dwc_ep.num,
++			    ep->dwc_ep.is_in ? "IN" : "OUT",
++			    ep->dwc_ep.xfer_len, ep->dwc_ep.xfer_count,
++			    deptsiz.b.xfersize, deptsiz.b.pktcnt);
++	}
++
++	/* Complete the request */
++	if (is_last) {
++#ifdef DWC_UTE_CFI
++		if (ep->dwc_ep.buff_mode != BM_STANDARD) {
++			req->actual = ep->dwc_ep.cfi_req_len - byte_count;
++		} else {
++#endif
++			req->actual = ep->dwc_ep.xfer_count;
++#ifdef DWC_UTE_CFI
++		}
++#endif
++		if (req->dw_align_buf) {
++			if (!ep->dwc_ep.is_in) {
++				dwc_memcpy(req->buf, req->dw_align_buf, req->length);
++			}
++			DWC_DMA_FREE(req->length, req->dw_align_buf,
++				     req->dw_align_buf_dma);
++		}
++
++		dwc_otg_request_done(ep, req, 0);
++
++		ep->dwc_ep.start_xfer_buff = 0;
++		ep->dwc_ep.xfer_buff = 0;
++		ep->dwc_ep.xfer_len = 0;
++
++		/* If there is a request in the queue start it. */
++		start_next_request(ep);
++	}
++}
++
++#ifdef DWC_EN_ISOC
++
++/**
++ * This function handles the BNA interrupt for Isochronous EPs.
++ *
++ */
++static void dwc_otg_pcd_handle_iso_bna(dwc_otg_pcd_ep_t * ep)
++{
++	dwc_ep_t *dwc_ep = &ep->dwc_ep;
++	volatile uint32_t *addr;
++	depctl_data_t depctl = {.d32 = 0 };
++	dwc_otg_pcd_t *pcd = ep->pcd;
++	dwc_otg_dev_dma_desc_t *dma_desc;
++	int i;
++
++	dma_desc =
++	    dwc_ep->iso_desc_addr + dwc_ep->desc_cnt * (dwc_ep->proc_buf_num);
++
++	if (dwc_ep->is_in) {
++		dev_dma_desc_sts_t sts = {.d32 = 0 };
++		for (i = 0; i < dwc_ep->desc_cnt; ++i, ++dma_desc) {
++			sts.d32 = dma_desc->status.d32;
++			sts.b_iso_in.bs = BS_HOST_READY;
++			dma_desc->status.d32 = sts.d32;
++		}
++	} else {
++		dev_dma_desc_sts_t sts = {.d32 = 0 };
++		for (i = 0; i < dwc_ep->desc_cnt; ++i, ++dma_desc) {
++			sts.d32 = dma_desc->status.d32;
++			sts.b_iso_out.bs = BS_HOST_READY;
++			dma_desc->status.d32 = sts.d32;
++		}
++	}
++
++	if (dwc_ep->is_in == 0) {
++		addr =
++		    &GET_CORE_IF(pcd)->dev_if->out_ep_regs[dwc_ep->
++							   num]->doepctl;
++	} else {
++		addr =
++		    &GET_CORE_IF(pcd)->dev_if->in_ep_regs[dwc_ep->num]->diepctl;
++	}
++	depctl.b.epena = 1;
++	DWC_MODIFY_REG32(addr, depctl.d32, depctl.d32);
++}
++
++/**
++ * This function sets the latest iso packet information (non-PTI mode).
++ *
++ * @param core_if Programming view of DWC_otg controller.
++ * @param ep The EP to start the transfer on.
++ *
++ */
++void set_current_pkt_info(dwc_otg_core_if_t * core_if, dwc_ep_t * ep)
++{
++	deptsiz_data_t deptsiz = {.d32 = 0 };
++	dma_addr_t dma_addr;
++	uint32_t offset;
++
++	if (ep->proc_buf_num)
++		dma_addr = ep->dma_addr1;
++	else
++		dma_addr = ep->dma_addr0;
++
++	if (ep->is_in) {
++		deptsiz.d32 =
++		    DWC_READ_REG32(&core_if->dev_if->
++				   in_ep_regs[ep->num]->dieptsiz);
++		offset = ep->data_per_frame;
++	} else {
++		deptsiz.d32 =
++		    DWC_READ_REG32(&core_if->dev_if->
++				   out_ep_regs[ep->num]->doeptsiz);
++		offset =
++		    ep->data_per_frame +
++		    (0x4 & (0x4 - (ep->data_per_frame & 0x3)));
++	}
++
++	if (!deptsiz.b.xfersize) {
++		ep->pkt_info[ep->cur_pkt].length = ep->data_per_frame;
++		ep->pkt_info[ep->cur_pkt].offset =
++		    ep->cur_pkt_dma_addr - dma_addr;
++		ep->pkt_info[ep->cur_pkt].status = 0;
++	} else {
++		ep->pkt_info[ep->cur_pkt].length = ep->data_per_frame;
++		ep->pkt_info[ep->cur_pkt].offset =
++		    ep->cur_pkt_dma_addr - dma_addr;
++		ep->pkt_info[ep->cur_pkt].status = -DWC_E_NO_DATA;
++	}
++	ep->cur_pkt_addr += offset;
++	ep->cur_pkt_dma_addr += offset;
++	ep->cur_pkt++;
++}
++
++/**
++ * This function sets the latest iso packet information (DDMA mode).
++ *
++ * @param core_if Programming view of DWC_otg controller.
++ * @param dwc_ep The EP to start the transfer on.
++ *
++ */
++static void set_ddma_iso_pkts_info(dwc_otg_core_if_t * core_if,
++				   dwc_ep_t * dwc_ep)
++{
++	dwc_otg_dev_dma_desc_t *dma_desc;
++	dev_dma_desc_sts_t sts = {.d32 = 0 };
++	iso_pkt_info_t *iso_packet;
++	uint32_t data_per_desc;
++	uint32_t offset;
++	int i, j;
++
++	iso_packet = dwc_ep->pkt_info;
++
++	/** Reinit closed DMA Descriptors*/
++	/** ISO OUT EP */
++	if (dwc_ep->is_in == 0) {
++		dma_desc =
++		    dwc_ep->iso_desc_addr +
++		    dwc_ep->desc_cnt * dwc_ep->proc_buf_num;
++		offset = 0;
++
++		for (i = 0; i < dwc_ep->desc_cnt - dwc_ep->pkt_per_frm;
++		     i += dwc_ep->pkt_per_frm) {
++			for (j = 0; j < dwc_ep->pkt_per_frm; ++j) {
++				data_per_desc =
++				    ((j + 1) * dwc_ep->maxpacket >
++				     dwc_ep->
++				     data_per_frame) ? dwc_ep->data_per_frame -
++				    j * dwc_ep->maxpacket : dwc_ep->maxpacket;
++				data_per_desc +=
++				    (data_per_desc % 4) ? (4 -
++							   data_per_desc %
++							   4) : 0;
++
++				sts.d32 = dma_desc->status.d32;
++
++				/* Write status in iso_packet_descriptor */
++				iso_packet->status =
++				    sts.b_iso_out.rxsts +
++				    (sts.b_iso_out.bs ^ BS_DMA_DONE);
++				if (iso_packet->status) {
++					iso_packet->status = -DWC_E_NO_DATA;
++				}
++
++				/* Received data length */
++				if (!sts.b_iso_out.rxbytes) {
++					iso_packet->length =
++					    data_per_desc -
++					    sts.b_iso_out.rxbytes;
++				} else {
++					iso_packet->length =
++					    data_per_desc -
++					    sts.b_iso_out.rxbytes + (4 -
++								     dwc_ep->data_per_frame
++								     % 4);
++				}
++
++				iso_packet->offset = offset;
++
++				offset += data_per_desc;
++				dma_desc++;
++				iso_packet++;
++			}
++		}
++
++		for (j = 0; j < dwc_ep->pkt_per_frm - 1; ++j) {
++			data_per_desc =
++			    ((j + 1) * dwc_ep->maxpacket >
++			     dwc_ep->data_per_frame) ? dwc_ep->data_per_frame -
++			    j * dwc_ep->maxpacket : dwc_ep->maxpacket;
++			data_per_desc +=
++			    (data_per_desc % 4) ? (4 - data_per_desc % 4) : 0;
++
++			sts.d32 = dma_desc->status.d32;
++
++			/* Write status in iso_packet_descriptor */
++			iso_packet->status =
++			    sts.b_iso_out.rxsts +
++			    (sts.b_iso_out.bs ^ BS_DMA_DONE);
++			if (iso_packet->status) {
++				iso_packet->status = -DWC_E_NO_DATA;
++			}
++
++			/* Received data length */
++			iso_packet->length =
++			    dwc_ep->data_per_frame - sts.b_iso_out.rxbytes;
++
++			iso_packet->offset = offset;
++
++			offset += data_per_desc;
++			iso_packet++;
++			dma_desc++;
++		}
++
++		sts.d32 = dma_desc->status.d32;
++
++		/* Write status in iso_packet_descriptor */
++		iso_packet->status =
++		    sts.b_iso_out.rxsts + (sts.b_iso_out.bs ^ BS_DMA_DONE);
++		if (iso_packet->status) {
++			iso_packet->status = -DWC_E_NO_DATA;
++		}
++		/* Received data length */
++		if (!sts.b_iso_out.rxbytes) {
++			iso_packet->length =
++			    dwc_ep->data_per_frame - sts.b_iso_out.rxbytes;
++		} else {
++			iso_packet->length =
++			    dwc_ep->data_per_frame - sts.b_iso_out.rxbytes +
++			    (4 - dwc_ep->data_per_frame % 4);
++		}
++
++		iso_packet->offset = offset;
++	} else {
++		/** ISO IN EP */
++
++		dma_desc =
++		    dwc_ep->iso_desc_addr +
++		    dwc_ep->desc_cnt * dwc_ep->proc_buf_num;
++
++		for (i = 0; i < dwc_ep->desc_cnt - 1; i++) {
++			sts.d32 = dma_desc->status.d32;
++
++			/* Write status in iso packet descriptor */
++			iso_packet->status =
++			    sts.b_iso_in.txsts +
++			    (sts.b_iso_in.bs ^ BS_DMA_DONE);
++			if (iso_packet->status != 0) {
++				iso_packet->status = -DWC_E_NO_DATA;
++			}
++			/* Bytes have been transferred */
++			iso_packet->length =
++			    dwc_ep->data_per_frame - sts.b_iso_in.txbytes;
++
++			dma_desc++;
++			iso_packet++;
++		}
++
++		sts.d32 = dma_desc->status.d32;
++		while (sts.b_iso_in.bs == BS_DMA_BUSY) {
++			sts.d32 = dma_desc->status.d32;
++		}
++
++		/* Write status in iso packet descriptor; @todo to be done with ERROR codes */
++		iso_packet->status =
++		    sts.b_iso_in.txsts + (sts.b_iso_in.bs ^ BS_DMA_DONE);
++		if (iso_packet->status != 0) {
++			iso_packet->status = -DWC_E_NO_DATA;
++		}
++
++		/* Bytes have been transferred */
++		iso_packet->length =
++		    dwc_ep->data_per_frame - sts.b_iso_in.txbytes;
++	}
++}
++
++/**
++ * This function reinitializes DMA Descriptors for Isochronous transfers.
++ *
++ * @param core_if Programming view of DWC_otg controller.
++ * @param dwc_ep The EP to start the transfer on.
++ *
++ */
++static void reinit_ddma_iso_xfer(dwc_otg_core_if_t * core_if, dwc_ep_t * dwc_ep)
++{
++	int i, j;
++	dwc_otg_dev_dma_desc_t *dma_desc;
++	dma_addr_t dma_ad;
++	volatile uint32_t *addr;
++	dev_dma_desc_sts_t sts = {.d32 = 0 };
++	uint32_t data_per_desc;
++
++	if (dwc_ep->is_in == 0) {
++		addr = &core_if->dev_if->out_ep_regs[dwc_ep->num]->doepctl;
++	} else {
++		addr = &core_if->dev_if->in_ep_regs[dwc_ep->num]->diepctl;
++	}
++
++	if (dwc_ep->proc_buf_num == 0) {
++		/** Buffer 0 descriptors setup */
++		dma_ad = dwc_ep->dma_addr0;
++	} else {
++		/** Buffer 1 descriptors setup */
++		dma_ad = dwc_ep->dma_addr1;
++	}
++
++	/** Reinit closed DMA Descriptors */
++	/** ISO OUT EP */
++	if (dwc_ep->is_in == 0) {
++		dma_desc =
++		    dwc_ep->iso_desc_addr +
++		    dwc_ep->desc_cnt * dwc_ep->proc_buf_num;
++
++		sts.b_iso_out.bs = BS_HOST_READY;
++		sts.b_iso_out.rxsts = 0;
++		sts.b_iso_out.l = 0;
++		sts.b_iso_out.sp = 0;
++		sts.b_iso_out.ioc = 0;
++		sts.b_iso_out.pid = 0;
++		sts.b_iso_out.framenum = 0;
++
++		for (i = 0; i < dwc_ep->desc_cnt - dwc_ep->pkt_per_frm;
++		     i += dwc_ep->pkt_per_frm) {
++			for (j = 0; j < dwc_ep->pkt_per_frm; ++j) {
++				data_per_desc =
++				    ((j + 1) * dwc_ep->maxpacket >
++				     dwc_ep->
++				     data_per_frame) ? dwc_ep->data_per_frame -
++				    j * dwc_ep->maxpacket : dwc_ep->maxpacket;
++				data_per_desc +=
++				    (data_per_desc % 4) ? (4 -
++							   data_per_desc %
++							   4) : 0;
++				sts.b_iso_out.rxbytes = data_per_desc;
++				dma_desc->buf = dma_ad;
++				dma_desc->status.d32 = sts.d32;
++
++				dma_ad += data_per_desc;
++				dma_desc++;
++			}
++		}
++
++		for (j = 0; j < dwc_ep->pkt_per_frm - 1; ++j) {
++
++			data_per_desc =
++			    ((j + 1) * dwc_ep->maxpacket >
++			     dwc_ep->data_per_frame) ? dwc_ep->data_per_frame -
++			    j * dwc_ep->maxpacket : dwc_ep->maxpacket;
++			data_per_desc +=
++			    (data_per_desc % 4) ? (4 - data_per_desc % 4) : 0;
++			sts.b_iso_out.rxbytes = data_per_desc;
++
++			dma_desc->buf = dma_ad;
++			dma_desc->status.d32 = sts.d32;
++
++			dma_desc++;
++			dma_ad += data_per_desc;
++		}
++
++		sts.b_iso_out.ioc = 1;
++		sts.b_iso_out.l = dwc_ep->proc_buf_num;
++
++		data_per_desc =
++		    ((j + 1) * dwc_ep->maxpacket >
++		     dwc_ep->data_per_frame) ? dwc_ep->data_per_frame -
++		    j * dwc_ep->maxpacket : dwc_ep->maxpacket;
++		data_per_desc +=
++		    (data_per_desc % 4) ? (4 - data_per_desc % 4) : 0;
++		sts.b_iso_out.rxbytes = data_per_desc;
++
++		dma_desc->buf = dma_ad;
++		dma_desc->status.d32 = sts.d32;
++	} else {
++/** ISO IN EP */
++
++		dma_desc =
++		    dwc_ep->iso_desc_addr +
++		    dwc_ep->desc_cnt * dwc_ep->proc_buf_num;
++
++		sts.b_iso_in.bs = BS_HOST_READY;
++		sts.b_iso_in.txsts = 0;
++		sts.b_iso_in.sp = 0;
++		sts.b_iso_in.ioc = 0;
++		sts.b_iso_in.pid = dwc_ep->pkt_per_frm;
++		sts.b_iso_in.framenum = dwc_ep->next_frame;
++		sts.b_iso_in.txbytes = dwc_ep->data_per_frame;
++		sts.b_iso_in.l = 0;
++
++		for (i = 0; i < dwc_ep->desc_cnt - 1; i++) {
++			dma_desc->buf = dma_ad;
++			dma_desc->status.d32 = sts.d32;
++
++			sts.b_iso_in.framenum += dwc_ep->bInterval;
++			dma_ad += dwc_ep->data_per_frame;
++			dma_desc++;
++		}
++
++		sts.b_iso_in.ioc = 1;
++		sts.b_iso_in.l = dwc_ep->proc_buf_num;
++
++		dma_desc->buf = dma_ad;
++		dma_desc->status.d32 = sts.d32;
++
++		dwc_ep->next_frame =
++		    sts.b_iso_in.framenum + dwc_ep->bInterval * 1;
++	}
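++	/* Toggle proc_buf_num so the next transfer uses the other half of the
++	 * double-buffered (ping-pong) ISO buffer pair. */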
++	dwc_ep->proc_buf_num = (dwc_ep->proc_buf_num ^ 1) & 0x1;
++}
++
++/**
++ * This function handles the Iso EP transfer complete interrupt
++ * in case an Iso OUT packet was dropped
++ *
++ * @param core_if Programming view of DWC_otg controller.
++ * @param dwc_ep The EP for which transfer complete was asserted
++ *
++ */
++static uint32_t handle_iso_out_pkt_dropped(dwc_otg_core_if_t * core_if,
++					   dwc_ep_t * dwc_ep)
++{
++	uint32_t dma_addr;
++	uint32_t drp_pkt;
++	uint32_t drp_pkt_cnt;
++	deptsiz_data_t deptsiz = {.d32 = 0 };
++	depctl_data_t depctl = {.d32 = 0 };
++	int i;
++
++	deptsiz.d32 =
++	    DWC_READ_REG32(&core_if->dev_if->
++			   out_ep_regs[dwc_ep->num]->doeptsiz);
++
++	drp_pkt = dwc_ep->pkt_cnt - deptsiz.b.pktcnt;
++	drp_pkt_cnt = dwc_ep->pkt_per_frm - (drp_pkt % dwc_ep->pkt_per_frm);
++
++	/* Setting dropped packets status */
++	for (i = 0; i < drp_pkt_cnt; ++i) {
++		dwc_ep->pkt_info[drp_pkt].status = -DWC_E_NO_DATA;
++		drp_pkt++;
++		deptsiz.b.pktcnt--;
++	}
++
++	if (deptsiz.b.pktcnt > 0) {
++		deptsiz.b.xfersize =
++		    dwc_ep->xfer_len - (dwc_ep->pkt_cnt -
++					deptsiz.b.pktcnt) * dwc_ep->maxpacket;
++	} else {
++		deptsiz.b.xfersize = 0;
++		deptsiz.b.pktcnt = 0;
++	}
++
++	DWC_WRITE_REG32(&core_if->dev_if->out_ep_regs[dwc_ep->num]->doeptsiz,
++			deptsiz.d32);
++
++	if (deptsiz.b.pktcnt > 0) {
++		if (dwc_ep->proc_buf_num) {
++			dma_addr =
++			    dwc_ep->dma_addr1 + dwc_ep->xfer_len -
++			    deptsiz.b.xfersize;
++		} else {
++			dma_addr =
++			    dwc_ep->dma_addr0 + dwc_ep->xfer_len -
++			    deptsiz.b.xfersize;
++		}
++
++		DWC_WRITE_REG32(&core_if->dev_if->
++				out_ep_regs[dwc_ep->num]->doepdma, dma_addr);
++
++		/** Re-enable endpoint, clear nak  */
++		depctl.d32 = 0;
++		depctl.b.epena = 1;
++		depctl.b.cnak = 1;
++
++		DWC_MODIFY_REG32(&core_if->dev_if->
++				 out_ep_regs[dwc_ep->num]->doepctl, depctl.d32,
++				 depctl.d32);
++		return 0;
++	} else {
++		return 1;
++	}
++}
++
++/**
++ * This function sets ISO packets information (PTI mode)
++ *
++ * @param core_if Programming view of DWC_otg controller.
++ * @param ep The EP to start the transfer on.
++ *
++ */
++static uint32_t set_iso_pkts_info(dwc_otg_core_if_t * core_if, dwc_ep_t * ep)
++{
++	int i, j;
++	dma_addr_t dma_ad;
++	iso_pkt_info_t *packet_info = ep->pkt_info;
++	uint32_t offset;
++	uint32_t frame_data;
++	deptsiz_data_t deptsiz;
++
++	if (ep->proc_buf_num == 0) {
++		/** Buffer 0 descriptors setup */
++		dma_ad = ep->dma_addr0;
++	} else {
++		/** Buffer 1 descriptors setup */
++		dma_ad = ep->dma_addr1;
++	}
++
++	if (ep->is_in) {
++		deptsiz.d32 =
++		    DWC_READ_REG32(&core_if->dev_if->in_ep_regs[ep->num]->
++				   dieptsiz);
++	} else {
++		deptsiz.d32 =
++		    DWC_READ_REG32(&core_if->dev_if->out_ep_regs[ep->num]->
++				   doeptsiz);
++	}
++
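++	/* xfersize == 0 means the whole buffer completed normally; otherwise
++	 * fall through to the dropped-packet handling below. */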
++	if (!deptsiz.b.xfersize) {
++		offset = 0;
++		for (i = 0; i < ep->pkt_cnt; i += ep->pkt_per_frm) {
++			frame_data = ep->data_per_frame;
++			for (j = 0; j < ep->pkt_per_frm; ++j) {
++
++				/* Packet status is not set here: it is
++				 * initialized to 0 and, if the packet was sent
++				 * successfully, it remains 0. */
++
++				/* Bytes have been transferred */
++				packet_info->length =
++				    (ep->maxpacket <
++				     frame_data) ? ep->maxpacket : frame_data;
++
++				/* Received packet offset */
++				packet_info->offset = offset;
++				offset += packet_info->length;
++				frame_data -= packet_info->length;
++
++				packet_info++;
++			}
++		}
++		return 1;
++	} else {
++		/* This is a workaround for the case of Transfer Complete and
++		 * PktDrpSts interrupts merging - the Transfer Complete
++		 * interrupt for an Isoc OUT endpoint is asserted without PktDrpSts
++		 * set and with a non-zero DOEPTSIZ register. Investigation showed
++		 * that this happens when an OUT packet is dropped but, because of
++		 * interrupt merging, PktDrpSts is cleared during the first
++		 * interrupt handling and is not set again for the merged ones.
++		 * In this case SW handles the interrupt as if PktDrpSts were set.
++		 */
++		if (ep->is_in) {
++			return 1;
++		} else {
++			return handle_iso_out_pkt_dropped(core_if, ep);
++		}
++	}
++}
++
++/**
++ * This function handles the Iso EP transfer complete interrupt
++ *
++ * @param pcd The PCD
++ * @param ep The EP for which transfer complete was asserted
++ *
++ */
++static void complete_iso_ep(dwc_otg_pcd_t * pcd, dwc_otg_pcd_ep_t * ep)
++{
++	dwc_otg_core_if_t *core_if = GET_CORE_IF(ep->pcd);
++	dwc_ep_t *dwc_ep = &ep->dwc_ep;
++	uint8_t is_last = 0;
++
++	if (ep->dwc_ep.next_frame == 0xffffffff) {
++		DWC_WARN("Next frame is not set!\n");
++		return;
++	}
++
++	if (core_if->dma_enable) {
++		if (core_if->dma_desc_enable) {
++			set_ddma_iso_pkts_info(core_if, dwc_ep);
++			reinit_ddma_iso_xfer(core_if, dwc_ep);
++			is_last = 1;
++		} else {
++			if (core_if->pti_enh_enable) {
++				if (set_iso_pkts_info(core_if, dwc_ep)) {
++					dwc_ep->proc_buf_num =
++					    (dwc_ep->proc_buf_num ^ 1) & 0x1;
++					dwc_otg_iso_ep_start_buf_transfer
++					    (core_if, dwc_ep);
++					is_last = 1;
++				}
++			} else {
++				set_current_pkt_info(core_if, dwc_ep);
++				if (dwc_ep->cur_pkt >= dwc_ep->pkt_cnt) {
++					is_last = 1;
++					dwc_ep->cur_pkt = 0;
++					dwc_ep->proc_buf_num =
++					    (dwc_ep->proc_buf_num ^ 1) & 0x1;
++					if (dwc_ep->proc_buf_num) {
++						dwc_ep->cur_pkt_addr =
++						    dwc_ep->xfer_buff1;
++						dwc_ep->cur_pkt_dma_addr =
++						    dwc_ep->dma_addr1;
++					} else {
++						dwc_ep->cur_pkt_addr =
++						    dwc_ep->xfer_buff0;
++						dwc_ep->cur_pkt_dma_addr =
++						    dwc_ep->dma_addr0;
++					}
++
++				}
++				dwc_otg_iso_ep_start_frm_transfer(core_if,
++								  dwc_ep);
++			}
++		}
++	} else {
++		set_current_pkt_info(core_if, dwc_ep);
++		if (dwc_ep->cur_pkt >= dwc_ep->pkt_cnt) {
++			is_last = 1;
++			dwc_ep->cur_pkt = 0;
++			dwc_ep->proc_buf_num = (dwc_ep->proc_buf_num ^ 1) & 0x1;
++			if (dwc_ep->proc_buf_num) {
++				dwc_ep->cur_pkt_addr = dwc_ep->xfer_buff1;
++				dwc_ep->cur_pkt_dma_addr = dwc_ep->dma_addr1;
++			} else {
++				dwc_ep->cur_pkt_addr = dwc_ep->xfer_buff0;
++				dwc_ep->cur_pkt_dma_addr = dwc_ep->dma_addr0;
++			}
++
++		}
++		dwc_otg_iso_ep_start_frm_transfer(core_if, dwc_ep);
++	}
++	if (is_last)
++		dwc_otg_iso_buffer_done(pcd, ep, ep->iso_req_handle);
++}
++#endif /* DWC_EN_ISOC */
++
++/**
++ * This function handles the BNA interrupt for non-Isochronous EPs
++ *
++ */
++static void dwc_otg_pcd_handle_noniso_bna(dwc_otg_pcd_ep_t * ep)
++{
++	dwc_ep_t *dwc_ep = &ep->dwc_ep;
++	volatile uint32_t *addr;
++	depctl_data_t depctl = {.d32 = 0 };
++	dwc_otg_pcd_t *pcd = ep->pcd;
++	dwc_otg_dev_dma_desc_t *dma_desc;
++	dev_dma_desc_sts_t sts = {.d32 = 0 };
++	dwc_otg_core_if_t *core_if = ep->pcd->core_if;
++	int i, start;
++
++	if (!dwc_ep->desc_cnt)
++		DWC_WARN("Ep%d %s Descriptor count = %d \n", dwc_ep->num,
++			 (dwc_ep->is_in ? "IN" : "OUT"), dwc_ep->desc_cnt);
++
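++	/* With cont_on_bna set for a non-control OUT EP, resume from the
++	 * descriptor the core stopped at (derived from DOEPDMA) instead of
++	 * re-arming the whole chain from descriptor 0. */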
++	if (core_if->core_params->cont_on_bna && !dwc_ep->is_in
++							&& dwc_ep->type != DWC_OTG_EP_TYPE_CONTROL) {
++		uint32_t doepdma;
++		dwc_otg_dev_out_ep_regs_t *out_regs =
++			core_if->dev_if->out_ep_regs[dwc_ep->num];
++		doepdma = DWC_READ_REG32(&(out_regs->doepdma));
++		start = (doepdma - dwc_ep->dma_desc_addr)/sizeof(dwc_otg_dev_dma_desc_t);
++		dma_desc = &(dwc_ep->desc_addr[start]);
++	} else {
++		start = 0;
++		dma_desc = dwc_ep->desc_addr;
++	}
++
++
++	for (i = start; i < dwc_ep->desc_cnt; ++i, ++dma_desc) {
++		sts.d32 = dma_desc->status.d32;
++		sts.b.bs = BS_HOST_READY;
++		dma_desc->status.d32 = sts.d32;
++	}
++
++	if (dwc_ep->is_in == 0) {
++		addr =
++		    &GET_CORE_IF(pcd)->dev_if->out_ep_regs[dwc_ep->num]->
++		    doepctl;
++	} else {
++		addr =
++		    &GET_CORE_IF(pcd)->dev_if->in_ep_regs[dwc_ep->num]->diepctl;
++	}
++	depctl.b.epena = 1;
++	depctl.b.cnak = 1;
++	DWC_MODIFY_REG32(addr, 0, depctl.d32);
++}
++
++/**
++ * This function handles EP0 Control transfers.
++ *
++ * The state of the control transfers are tracked in
++ * <code>ep0state</code>.
++ */
++static void handle_ep0(dwc_otg_pcd_t * pcd)
++{
++	dwc_otg_core_if_t *core_if = GET_CORE_IF(pcd);
++	dwc_otg_pcd_ep_t *ep0 = &pcd->ep0;
++	dev_dma_desc_sts_t desc_sts;
++	deptsiz0_data_t deptsiz;
++	uint32_t byte_count;
++
++#ifdef DEBUG_EP0
++	DWC_DEBUGPL(DBG_PCDV, "%s()\n", __func__);
++	print_ep0_state(pcd);
++#endif
++
++//      DWC_PRINTF("HANDLE EP0\n");
++
++	switch (pcd->ep0state) {
++	case EP0_DISCONNECT:
++		break;
++
++	case EP0_IDLE:
++		pcd->request_config = 0;
++
++		pcd_setup(pcd);
++		break;
++
++	case EP0_IN_DATA_PHASE:
++#ifdef DEBUG_EP0
++		DWC_DEBUGPL(DBG_PCD, "DATA_IN EP%d-%s: type=%d, mps=%d\n",
++			    ep0->dwc_ep.num, (ep0->dwc_ep.is_in ? "IN" : "OUT"),
++			    ep0->dwc_ep.type, ep0->dwc_ep.maxpacket);
++#endif
++
++		if (core_if->dma_enable != 0) {
++			/*
++			 * For EP0 we can only program 1 packet at a time so we
++			 * need to do the calculations after each complete.
++			 * Call write_packet to make the calculations, as in
++			 * slave mode, and use those values to determine if we
++			 * can complete.
++			 */
++			if (core_if->dma_desc_enable == 0) {
++				deptsiz.d32 =
++				    DWC_READ_REG32(&core_if->
++						   dev_if->in_ep_regs[0]->
++						   dieptsiz);
++				byte_count =
++				    ep0->dwc_ep.xfer_len - deptsiz.b.xfersize;
++			} else {
++				desc_sts =
++				    core_if->dev_if->in_desc_addr->status;
++				byte_count =
++				    ep0->dwc_ep.xfer_len - desc_sts.b.bytes;
++			}
++			ep0->dwc_ep.xfer_count += byte_count;
++			ep0->dwc_ep.xfer_buff += byte_count;
++			ep0->dwc_ep.dma_addr += byte_count;
++		}
++		if (ep0->dwc_ep.xfer_count < ep0->dwc_ep.total_len) {
++			dwc_otg_ep0_continue_transfer(GET_CORE_IF(pcd),
++						      &ep0->dwc_ep);
++			DWC_DEBUGPL(DBG_PCD, "CONTINUE TRANSFER\n");
++		} else if (ep0->dwc_ep.sent_zlp) {
++			dwc_otg_ep0_continue_transfer(GET_CORE_IF(pcd),
++						      &ep0->dwc_ep);
++			ep0->dwc_ep.sent_zlp = 0;
++			DWC_DEBUGPL(DBG_PCD, "CONTINUE TRANSFER sent zlp\n");
++		} else {
++			ep0_complete_request(ep0);
++			DWC_DEBUGPL(DBG_PCD, "COMPLETE TRANSFER\n");
++		}
++		break;
++	case EP0_OUT_DATA_PHASE:
++#ifdef DEBUG_EP0
++		DWC_DEBUGPL(DBG_PCD, "DATA_OUT EP%d-%s: type=%d, mps=%d\n",
++			    ep0->dwc_ep.num, (ep0->dwc_ep.is_in ? "IN" : "OUT"),
++			    ep0->dwc_ep.type, ep0->dwc_ep.maxpacket);
++#endif
++		if (core_if->dma_enable != 0) {
++			if (core_if->dma_desc_enable == 0) {
++				deptsiz.d32 =
++				    DWC_READ_REG32(&core_if->
++						   dev_if->out_ep_regs[0]->
++						   doeptsiz);
++				byte_count =
++				    ep0->dwc_ep.maxpacket - deptsiz.b.xfersize;
++			} else {
++				desc_sts =
++				    core_if->dev_if->out_desc_addr->status;
++				byte_count =
++				    ep0->dwc_ep.maxpacket - desc_sts.b.bytes;
++			}
++			ep0->dwc_ep.xfer_count += byte_count;
++			ep0->dwc_ep.xfer_buff += byte_count;
++			ep0->dwc_ep.dma_addr += byte_count;
++		}
++		if (ep0->dwc_ep.xfer_count < ep0->dwc_ep.total_len) {
++			dwc_otg_ep0_continue_transfer(GET_CORE_IF(pcd),
++						      &ep0->dwc_ep);
++			DWC_DEBUGPL(DBG_PCD, "CONTINUE TRANSFER\n");
++		} else if (ep0->dwc_ep.sent_zlp) {
++			dwc_otg_ep0_continue_transfer(GET_CORE_IF(pcd),
++						      &ep0->dwc_ep);
++			ep0->dwc_ep.sent_zlp = 0;
++			DWC_DEBUGPL(DBG_PCD, "CONTINUE TRANSFER sent zlp\n");
++		} else {
++			ep0_complete_request(ep0);
++			DWC_DEBUGPL(DBG_PCD, "COMPLETE TRANSFER\n");
++		}
++		break;
++
++	case EP0_IN_STATUS_PHASE:
++	case EP0_OUT_STATUS_PHASE:
++		DWC_DEBUGPL(DBG_PCD, "CASE: EP0_STATUS\n");
++		ep0_complete_request(ep0);
++		pcd->ep0state = EP0_IDLE;
++		ep0->stopped = 1;
++		ep0->dwc_ep.is_in = 0;	/* OUT for next SETUP */
++
++		/* Prepare for more SETUP Packets */
++		if (core_if->dma_enable) {
++			ep0_out_start(core_if, pcd);
++		}
++		break;
++
++	case EP0_STALL:
++		DWC_ERROR("EP0 STALLed, should not get here pcd_setup()\n");
++		break;
++	}
++#ifdef DEBUG_EP0
++	print_ep0_state(pcd);
++#endif
++}
++
++/**
++ * Restart transfer
++ */
++static void restart_transfer(dwc_otg_pcd_t * pcd, const uint32_t epnum)
++{
++	dwc_otg_core_if_t *core_if;
++	dwc_otg_dev_if_t *dev_if;
++	deptsiz_data_t dieptsiz = {.d32 = 0 };
++	dwc_otg_pcd_ep_t *ep;
++
++	ep = get_in_ep(pcd, epnum);
++
++#ifdef DWC_EN_ISOC
++	if (ep->dwc_ep.type == DWC_OTG_EP_TYPE_ISOC) {
++		return;
++	}
++#endif /* DWC_EN_ISOC  */
++
++	core_if = GET_CORE_IF(pcd);
++	dev_if = core_if->dev_if;
++
++	dieptsiz.d32 = DWC_READ_REG32(&dev_if->in_ep_regs[epnum]->dieptsiz);
++
++	DWC_DEBUGPL(DBG_PCD, "xfer_buff=%p xfer_count=%0x xfer_len=%0x"
++		    " stopped=%d\n", ep->dwc_ep.xfer_buff,
++		    ep->dwc_ep.xfer_count, ep->dwc_ep.xfer_len, ep->stopped);
++	/*
++	 * If xfersize is 0 and pktcnt is not 0, resend the last packet.
++	 */
++	if (dieptsiz.b.pktcnt && dieptsiz.b.xfersize == 0 &&
++	    ep->dwc_ep.start_xfer_buff != 0) {
++		if (ep->dwc_ep.total_len <= ep->dwc_ep.maxpacket) {
++			ep->dwc_ep.xfer_count = 0;
++			ep->dwc_ep.xfer_buff = ep->dwc_ep.start_xfer_buff;
++			ep->dwc_ep.xfer_len = ep->dwc_ep.xfer_count;
++		} else {
++			ep->dwc_ep.xfer_count -= ep->dwc_ep.maxpacket;
++			/* Rewind the buffer pointer by one max-packet. */
++			ep->dwc_ep.xfer_buff -= ep->dwc_ep.maxpacket;
++			ep->dwc_ep.xfer_len = ep->dwc_ep.xfer_count;
++		}
++		ep->stopped = 0;
++		DWC_DEBUGPL(DBG_PCD, "xfer_buff=%p xfer_count=%0x "
++			    "xfer_len=%0x stopped=%d\n",
++			    ep->dwc_ep.xfer_buff,
++			    ep->dwc_ep.xfer_count, ep->dwc_ep.xfer_len,
++			    ep->stopped);
++		if (epnum == 0) {
++			dwc_otg_ep0_start_transfer(core_if, &ep->dwc_ep);
++		} else {
++			dwc_otg_ep_start_transfer(core_if, &ep->dwc_ep);
++		}
++	}
++}
++
++/*
++ * This function creates a new nextep sequence based on the Learn Queue.
++ *
++ * @param core_if Programming view of DWC_otg controller
++ */
++void predict_nextep_seq( dwc_otg_core_if_t * core_if)
++{
++	dwc_otg_device_global_regs_t *dev_global_regs =
++	    core_if->dev_if->dev_global_regs;
++	const uint32_t TOKEN_Q_DEPTH = core_if->hwcfg2.b.dev_token_q_depth;
++	/* Number of Token Queue Registers */
++	const int DTKNQ_REG_CNT = (TOKEN_Q_DEPTH + 7) / 8;
++	dtknq1_data_t dtknqr1;
++	uint32_t in_tkn_epnums[4];
++	uint8_t seqnum[MAX_EPS_CHANNELS];
++	uint8_t intkn_seq[TOKEN_Q_DEPTH];
++	grstctl_t resetctl = {.d32 = 0 };
++	uint8_t temp;
++	int ndx = 0;
++	int start = 0;
++	int end = 0;
++	int sort_done = 0;
++	int i = 0;
++	volatile uint32_t *addr = &dev_global_regs->dtknqr1;
++
++
++	DWC_DEBUGPL(DBG_PCD,"dev_token_q_depth=%d\n",TOKEN_Q_DEPTH);
++
++	/* Read the DTKNQ Registers */
++	for (i = 0; i < DTKNQ_REG_CNT; i++) {
++		in_tkn_epnums[i] = DWC_READ_REG32(addr);
++		DWC_DEBUGPL(DBG_PCDV, "DTKNQR%d=0x%08x\n", i + 1,
++			    in_tkn_epnums[i]);
++		if (addr == &dev_global_regs->dvbusdis) {
++			addr = &dev_global_regs->dtknqr3_dthrctl;
++		} else {
++			++addr;
++		}
++
++	}
++
++	/* Copy the DTKNQR1 data to the bit field. */
++	dtknqr1.d32 = in_tkn_epnums[0];
++	if (dtknqr1.b.wrap_bit) {
++		ndx = dtknqr1.b.intknwptr;
++		end = ndx -1;
++		if (end < 0)
++			end = TOKEN_Q_DEPTH -1;
++	} else {
++		ndx = 0;
++		end = dtknqr1.b.intknwptr -1;
++		if (end < 0)
++			end = 0;
++	}
++	start = ndx;
++
++	/* Fill seqnum[] by initial values: EP number + 31 */
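++	/* The +31 offset makes endpoints that never appear in the token queue
++	 * sort after every real queue position (this code handles a queue
++	 * depth of up to 30). */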
++	for (i=0; i <= core_if->dev_if->num_in_eps; i++) {
++		seqnum[i] = i +31;
++	}
++
++	/* Fill intkn_seq[] from in_tkn_epnums[0] */
++	for (i=0; i < 6; i++)
++		intkn_seq[i] = (in_tkn_epnums[0] >> ((7-i) * 4)) & 0xf;
++
++	if (TOKEN_Q_DEPTH > 6) {
++		/* Fill intkn_seq[] from in_tkn_epnums[1] */
++		for (i=6; i < 14; i++)
++			intkn_seq[i] =
++			    (in_tkn_epnums[1] >> ((7 - (i - 6)) * 4)) & 0xf;
++	}
++
++	if (TOKEN_Q_DEPTH > 14) {
++		/* Fill intkn_seq[] from in_tkn_epnums[2] */
++		for (i=14; i < 22; i++)
++			intkn_seq[i] =
++			    (in_tkn_epnums[2] >> ((7 - (i - 14)) * 4)) & 0xf;
++	}
++
++	if (TOKEN_Q_DEPTH > 22) {
++		/* Fill intkn_seq[] from in_tkn_epnums[3] */
++		for (i=22; i < 30; i++)
++			intkn_seq[i] =
++			    (in_tkn_epnums[3] >> ((7 - (i - 22)) * 4)) & 0xf;
++	}
++
++	DWC_DEBUGPL(DBG_PCDV, "%s start=%d end=%d intkn_seq[]:\n", __func__,
++		    start, end);
++	for (i=0; i<TOKEN_Q_DEPTH; i++)
++		DWC_DEBUGPL(DBG_PCDV,"%d\n", intkn_seq[i]);
++
++	/* Update seqnum based on intkn_seq[] */
++	i = 0;
++	do {
++		seqnum[intkn_seq[ndx]] = i;
++		ndx++;
++		i++;
++		if (ndx == TOKEN_Q_DEPTH)
++			ndx = 0;
++	} while ( i < TOKEN_Q_DEPTH );
++
++	/* Mark inactive EPs in seqnum[] with 0xff */
++	for (i=0; i<=core_if->dev_if->num_in_eps; i++) {
++		if (core_if->nextep_seq[i] == 0xff )
++			seqnum[i] = 0xff;
++	}
++
++	/* Sort seqnum[] */
++	sort_done = 0;
++	while (!sort_done) {
++		sort_done = 1;
++		for (i=0; i<core_if->dev_if->num_in_eps; i++) {
++			if (seqnum[i] > seqnum[i+1]) {
++				temp = seqnum[i];
++				seqnum[i] = seqnum[i+1];
++				seqnum[i+1] = temp;
++				sort_done = 0;
++			}
++		}
++	}
++
++	ndx = start + seqnum[0];
++	if (ndx >= TOKEN_Q_DEPTH)
++		ndx = ndx % TOKEN_Q_DEPTH;
++	core_if->first_in_nextep_seq = intkn_seq[ndx];
++
++	/* Update seqnum[] by EP numbers  */
++	for (i=0; i<=core_if->dev_if->num_in_eps; i++) {
++		ndx = start + i;
++		if (seqnum[i] < 31) {
++			ndx = start + seqnum[i];
++			if (ndx >= TOKEN_Q_DEPTH)
++				ndx = ndx % TOKEN_Q_DEPTH;
++			seqnum[i] = intkn_seq[ndx];
++		} else {
++			if (seqnum[i] < 0xff) {
++				seqnum[i] = seqnum[i] - 31;
++			} else {
++				break;
++			}
++		}
++	}
++
++	/* Update nextep_seq[] based on seqnum[] */
++	for (i=0; i<core_if->dev_if->num_in_eps; i++) {
++		if (seqnum[i] != 0xff) {
++			if (seqnum[i+1] != 0xff) {
++				core_if->nextep_seq[seqnum[i]] = seqnum[i+1];
++			} else {
++				core_if->nextep_seq[seqnum[i]] = core_if->first_in_nextep_seq;
++				break;
++			}
++		} else {
++			break;
++		}
++	}
++
++	DWC_DEBUGPL(DBG_PCDV, "%s first_in_nextep_seq= %2d; nextep_seq[]:\n",
++		__func__, core_if->first_in_nextep_seq);
++	for (i=0; i <= core_if->dev_if->num_in_eps; i++) {
++		DWC_DEBUGPL(DBG_PCDV,"%2d\n", core_if->nextep_seq[i]);
++	}
++
++	/* Flush the Learning Queue */
++	resetctl.d32 = DWC_READ_REG32(&core_if->core_global_regs->grstctl);
++	resetctl.b.intknqflsh = 1;
++	DWC_WRITE_REG32(&core_if->core_global_regs->grstctl, resetctl.d32);
++
++
++}
++
++/**
++ * handle the IN EP disable interrupt.
++ */
++static inline void handle_in_ep_disable_intr(dwc_otg_pcd_t * pcd,
++					     const uint32_t epnum)
++{
++	dwc_otg_core_if_t *core_if = GET_CORE_IF(pcd);
++	dwc_otg_dev_if_t *dev_if = core_if->dev_if;
++	deptsiz_data_t dieptsiz = {.d32 = 0 };
++	dctl_data_t dctl = {.d32 = 0 };
++	dwc_otg_pcd_ep_t *ep;
++	dwc_ep_t *dwc_ep;
++	gintmsk_data_t gintmsk_data;
++	depctl_data_t depctl;
++	uint32_t diepdma;
++	uint32_t remain_to_transfer = 0;
++	uint8_t i;
++	uint32_t xfer_size;
++
++	ep = get_in_ep(pcd, epnum);
++	dwc_ep = &ep->dwc_ep;
++
++	if (dwc_ep->type == DWC_OTG_EP_TYPE_ISOC) {
++		dwc_otg_flush_tx_fifo(core_if, dwc_ep->tx_fifo_num);
++		complete_ep(ep);
++		return;
++	}
++
++	DWC_DEBUGPL(DBG_PCD, "diepctl%d=%0x\n", epnum,
++		    DWC_READ_REG32(&dev_if->in_ep_regs[epnum]->diepctl));
++	dieptsiz.d32 = DWC_READ_REG32(&dev_if->in_ep_regs[epnum]->dieptsiz);
++	depctl.d32 = DWC_READ_REG32(&dev_if->in_ep_regs[epnum]->diepctl);
++
++	DWC_DEBUGPL(DBG_ANY, "pktcnt=%d size=%d\n",
++		    dieptsiz.b.pktcnt, dieptsiz.b.xfersize);
++
++	if ((core_if->start_predict == 0) || (depctl.b.eptype & 1)) {
++		if (ep->stopped) {
++			if (core_if->en_multiple_tx_fifo)
++				/* Flush the Tx FIFO */
++				dwc_otg_flush_tx_fifo(core_if, dwc_ep->tx_fifo_num);
++			/* Clear the Global IN NP NAK */
++			dctl.d32 = 0;
++			dctl.b.cgnpinnak = 1;
++			DWC_MODIFY_REG32(&dev_if->dev_global_regs->dctl, dctl.d32, dctl.d32);
++			/* Restart the transaction */
++			if (dieptsiz.b.pktcnt != 0 || dieptsiz.b.xfersize != 0) {
++				restart_transfer(pcd, epnum);
++			}
++		} else {
++			/* Restart the transaction */
++			if (dieptsiz.b.pktcnt != 0 || dieptsiz.b.xfersize != 0) {
++				restart_transfer(pcd, epnum);
++			}
++			DWC_DEBUGPL(DBG_ANY, "STOPPED!!!\n");
++		}
++		return;
++	}
++
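++	/* start_predict counts down as the remaining non-periodic IN EPs get
++	 * disabled; once it reaches 1, every NP IN EP is disabled and the
++	 * next-endpoint sequence can be re-learned below. */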
++	if (core_if->start_predict > 2) {	// NP IN EP
++		core_if->start_predict--;
++		return;
++	}
++
++	core_if->start_predict--;
++
++	if (core_if->start_predict == 1) {	// All NP IN Ep's disabled now
++
++		predict_nextep_seq(core_if);
++
++		/* Update all active IN EPs' NextEP field based on nextep_seq[] */
++		for ( i = 0; i <= core_if->dev_if->num_in_eps; i++) {
++			depctl.d32 =
++			    DWC_READ_REG32(&dev_if->in_ep_regs[i]->diepctl);
++			if (core_if->nextep_seq[i] != 0xff) {	// Active NP IN EP
++				depctl.b.nextep = core_if->nextep_seq[i];
++				DWC_WRITE_REG32(&dev_if->in_ep_regs[i]->diepctl, depctl.d32);
++			}
++		}
++		/* Flush Shared NP TxFIFO */
++		dwc_otg_flush_tx_fifo(core_if, 0);
++		/* Rewind buffers */
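++		/* For each EP in the predicted sequence, recompute the bytes
++		 * still outstanding from the pending packet count and move
++		 * DIEPDMA back so the rewound data is sent again. */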
++		if (!core_if->dma_desc_enable) {
++			i = core_if->first_in_nextep_seq;
++			do {
++				ep = get_in_ep(pcd, i);
++				dieptsiz.d32 = DWC_READ_REG32(&dev_if->in_ep_regs[i]->dieptsiz);
++				xfer_size = ep->dwc_ep.total_len - ep->dwc_ep.xfer_count;
++				if (xfer_size > ep->dwc_ep.maxxfer)
++					xfer_size = ep->dwc_ep.maxxfer;
++				depctl.d32 = DWC_READ_REG32(&dev_if->in_ep_regs[i]->diepctl);
++				if (dieptsiz.b.pktcnt != 0) {
++					if (xfer_size == 0) {
++						remain_to_transfer = 0;
++					} else {
++						if ((xfer_size % ep->dwc_ep.maxpacket) == 0) {
++							remain_to_transfer =
++								dieptsiz.b.pktcnt * ep->dwc_ep.maxpacket;
++						} else {
++							remain_to_transfer = ((dieptsiz.b.pktcnt -1) * ep->dwc_ep.maxpacket)
++								+ (xfer_size % ep->dwc_ep.maxpacket);
++						}
++					}
++					diepdma = DWC_READ_REG32(&dev_if->in_ep_regs[i]->diepdma);
++					dieptsiz.b.xfersize = remain_to_transfer;
++					DWC_WRITE_REG32(&dev_if->in_ep_regs[i]->dieptsiz, dieptsiz.d32);
++					diepdma = ep->dwc_ep.dma_addr + (xfer_size - remain_to_transfer);
++					DWC_WRITE_REG32(&dev_if->in_ep_regs[i]->diepdma, diepdma);
++				}
++				i = core_if->nextep_seq[i];
++			} while (i != core_if->first_in_nextep_seq);
++		} else { // dma_desc_enable
++				DWC_PRINTF("%s Learning Queue not supported in DDMA\n", __func__);
++		}
++
++		/* Restart transfers in predicted sequences */
++		i = core_if->first_in_nextep_seq;
++		do {
++			dieptsiz.d32 = DWC_READ_REG32(&dev_if->in_ep_regs[i]->dieptsiz);
++			depctl.d32 = DWC_READ_REG32(&dev_if->in_ep_regs[i]->diepctl);
++			if (dieptsiz.b.pktcnt != 0) {
++				depctl.d32 = DWC_READ_REG32(&dev_if->in_ep_regs[i]->diepctl);
++				depctl.b.epena = 1;
++				depctl.b.cnak = 1;
++				DWC_WRITE_REG32(&dev_if->in_ep_regs[i]->diepctl, depctl.d32);
++			}
++			i = core_if->nextep_seq[i];
++		} while (i != core_if->first_in_nextep_seq);
++
++		/* Clear the global non-periodic IN NAK handshake */
++		dctl.d32 = 0;
++		dctl.b.cgnpinnak = 1;
++		DWC_MODIFY_REG32(&dev_if->dev_global_regs->dctl, dctl.d32, dctl.d32);
++
++		/* Unmask EP Mismatch interrupt */
++		gintmsk_data.d32 = 0;
++		gintmsk_data.b.epmismatch = 1;
++		DWC_MODIFY_REG32(&core_if->core_global_regs->gintmsk, 0, gintmsk_data.d32);
++
++		core_if->start_predict = 0;
++
++	}
++}
++
++/**
++ * Handler for the IN EP timeout handshake interrupt.
++ */
++static inline void handle_in_ep_timeout_intr(dwc_otg_pcd_t * pcd,
++					     const uint32_t epnum)
++{
++	dwc_otg_core_if_t *core_if = GET_CORE_IF(pcd);
++	dwc_otg_dev_if_t *dev_if = core_if->dev_if;
++
++#ifdef DEBUG
++	deptsiz_data_t dieptsiz = {.d32 = 0 };
++	uint32_t num = 0;
++#endif
++	dctl_data_t dctl = {.d32 = 0 };
++	dwc_otg_pcd_ep_t *ep;
++
++	gintmsk_data_t intr_mask = {.d32 = 0 };
++
++	ep = get_in_ep(pcd, epnum);
++
++	/* Disable the NP Tx FIFO Empty Interrupt */
++	if (!core_if->dma_enable) {
++		intr_mask.b.nptxfempty = 1;
++		DWC_MODIFY_REG32(&core_if->core_global_regs->gintmsk,
++				 intr_mask.d32, 0);
++	}
++	/** @todo NGS Check EP type.
++	 * Implement for Periodic EPs */
++	/*
++	 * Non-periodic EP
++	 */
++	/* Enable the Global IN NAK Effective Interrupt */
++	intr_mask.b.ginnakeff = 1;
++	DWC_MODIFY_REG32(&core_if->core_global_regs->gintmsk, 0, intr_mask.d32);
++
++	/* Set Global IN NAK */
++	dctl.b.sgnpinnak = 1;
++	DWC_MODIFY_REG32(&dev_if->dev_global_regs->dctl, dctl.d32, dctl.d32);
++
++	ep->stopped = 1;
++
++#ifdef DEBUG
++	dieptsiz.d32 = DWC_READ_REG32(&dev_if->in_ep_regs[num]->dieptsiz);
++	DWC_DEBUGPL(DBG_ANY, "pktcnt=%d size=%d\n",
++		    dieptsiz.b.pktcnt, dieptsiz.b.xfersize);
++#endif
++
++#ifdef DISABLE_PERIODIC_EP
++	/*
++	 * Set the NAK bit for this EP to
++	 * start the disable process.
++	 */
++	diepctl.d32 = 0;
++	diepctl.b.snak = 1;
++	DWC_MODIFY_REG32(&dev_if->in_ep_regs[num]->diepctl, diepctl.d32,
++			 diepctl.d32);
++	ep->disabling = 1;
++	ep->stopped = 1;
++#endif
++}
++
++/**
++ * Handler for the IN EP NAK interrupt.
++ */
++static inline int32_t handle_in_ep_nak_intr(dwc_otg_pcd_t * pcd,
++					    const uint32_t epnum)
++{
++	/** @todo implement ISR */
++	dwc_otg_core_if_t *core_if;
++	diepmsk_data_t intr_mask = {.d32 = 0 };
++
++	DWC_PRINTF("INTERRUPT Handler not implemented for %s\n", "IN EP NAK");
++	core_if = GET_CORE_IF(pcd);
++	intr_mask.b.nak = 1;
++
++	if (core_if->multiproc_int_enable) {
++		DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->
++				 diepeachintmsk[epnum], intr_mask.d32, 0);
++	} else {
++		DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->diepmsk,
++				 intr_mask.d32, 0);
++	}
++
++	return 1;
++}
++
++/**
++ * Handler for the OUT EP Babble interrupt.
++ */
++static inline int32_t handle_out_ep_babble_intr(dwc_otg_pcd_t * pcd,
++						const uint32_t epnum)
++{
++	/** @todo implement ISR */
++	dwc_otg_core_if_t *core_if;
++	doepmsk_data_t intr_mask = {.d32 = 0 };
++
++	DWC_PRINTF("INTERRUPT Handler not implemented for %s\n",
++		   "OUT EP Babble");
++	core_if = GET_CORE_IF(pcd);
++	intr_mask.b.babble = 1;
++
++	if (core_if->multiproc_int_enable) {
++		DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->
++				 doepeachintmsk[epnum], intr_mask.d32, 0);
++	} else {
++		DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->doepmsk,
++				 intr_mask.d32, 0);
++	}
++
++	return 1;
++}
++
++/**
++ * Handler for the OUT EP NAK interrupt.
++ */
++static inline int32_t handle_out_ep_nak_intr(dwc_otg_pcd_t * pcd,
++					     const uint32_t epnum)
++{
++	/** @todo implement ISR */
++	dwc_otg_core_if_t *core_if;
++	doepmsk_data_t intr_mask = {.d32 = 0 };
++
++	DWC_DEBUGPL(DBG_ANY, "INTERRUPT Handler not implemented for %s\n", "OUT EP NAK");
++	core_if = GET_CORE_IF(pcd);
++	intr_mask.b.nak = 1;
++
++	if (core_if->multiproc_int_enable) {
++		DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->
++				 doepeachintmsk[epnum], intr_mask.d32, 0);
++	} else {
++		DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->doepmsk,
++				 intr_mask.d32, 0);
++	}
++
++	return 1;
++}
++
++/**
++ * Handler for the OUT EP NYET interrupt.
++ */
++static inline int32_t handle_out_ep_nyet_intr(dwc_otg_pcd_t * pcd,
++					      const uint32_t epnum)
++{
++	/** @todo implement ISR */
++	dwc_otg_core_if_t *core_if;
++	doepmsk_data_t intr_mask = {.d32 = 0 };
++
++	DWC_PRINTF("INTERRUPT Handler not implemented for %s\n", "OUT EP NYET");
++	core_if = GET_CORE_IF(pcd);
++	intr_mask.b.nyet = 1;
++
++	if (core_if->multiproc_int_enable) {
++		DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->
++				 doepeachintmsk[epnum], intr_mask.d32, 0);
++	} else {
++		DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->doepmsk,
++				 intr_mask.d32, 0);
++	}
++
++	return 1;
++}
++
++/**
++ * This interrupt indicates that an IN EP has a pending Interrupt.
++ * The sequence for handling the IN EP interrupt is shown below:
++ * -#	Read the Device All Endpoint Interrupt register
++ * -#	Repeat the following for each IN EP interrupt bit set (from
++ *		LSB to MSB).
++ * -#	Read the Device Endpoint Interrupt (DIEPINTn) register
++ * -#	If "Transfer Complete" call the request complete function
++ * -#	If "Endpoint Disabled" complete the EP disable procedure.
++ * -#	If "AHB Error Interrupt" log error
++ * -#	If "Time-out Handshake" log error
++ * -#	If "IN Token Received when TxFIFO Empty" write packet to Tx
++ *		FIFO.
++ * -#	If "IN Token EP Mismatch" (disable, this is handled by EP
++ *		Mismatch Interrupt)
++ */
++static int32_t dwc_otg_pcd_handle_in_ep_intr(dwc_otg_pcd_t * pcd)
++{
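++/* Acknowledge a single DIEPINTn interrupt bit; the endpoint interrupt
++ * bits are write-1-to-clear. */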
++#define CLEAR_IN_EP_INTR(__core_if,__epnum,__intr) \
++do { \
++		diepint_data_t diepint = {.d32=0}; \
++		diepint.b.__intr = 1; \
++		DWC_WRITE_REG32(&__core_if->dev_if->in_ep_regs[__epnum]->diepint, \
++		diepint.d32); \
++} while (0)
++
++	dwc_otg_core_if_t *core_if = GET_CORE_IF(pcd);
++	dwc_otg_dev_if_t *dev_if = core_if->dev_if;
++	diepint_data_t diepint = {.d32 = 0 };
++	depctl_data_t depctl = {.d32 = 0 };
++	uint32_t ep_intr;
++	uint32_t epnum = 0;
++	dwc_otg_pcd_ep_t *ep;
++	dwc_ep_t *dwc_ep;
++	gintmsk_data_t intr_mask = {.d32 = 0 };
++
++	DWC_DEBUGPL(DBG_PCDV, "%s(%p)\n", __func__, pcd);
++
++	/* Read in the device interrupt bits */
++	ep_intr = dwc_otg_read_dev_all_in_ep_intr(core_if);
++
++	/* Service the Device IN interrupts for each endpoint */
++	while (ep_intr) {
++		if (ep_intr & 0x1) {
++			uint32_t empty_msk;
++			/* Get EP pointer */
++			ep = get_in_ep(pcd, epnum);
++			dwc_ep = &ep->dwc_ep;
++
++			depctl.d32 =
++			    DWC_READ_REG32(&dev_if->in_ep_regs[epnum]->diepctl);
++			empty_msk =
++			    DWC_READ_REG32(&dev_if->
++					   dev_global_regs->dtknqr4_fifoemptymsk);
++
++			DWC_DEBUGPL(DBG_PCDV,
++				    "IN EP INTERRUPT - %d\nempty_msk - %8x  diepctl - %8x\n",
++				    epnum, empty_msk, depctl.d32);
++
++			DWC_DEBUGPL(DBG_PCD,
++				    "EP%d-%s: type=%d, mps=%d\n",
++				    dwc_ep->num, (dwc_ep->is_in ? "IN" : "OUT"),
++				    dwc_ep->type, dwc_ep->maxpacket);
++
++			diepint.d32 =
++			    dwc_otg_read_dev_in_ep_intr(core_if, dwc_ep);
++
++			DWC_DEBUGPL(DBG_PCDV,
++				    "EP %d Interrupt Register - 0x%x\n", epnum,
++				    diepint.d32);
++			/* Transfer complete */
++			if (diepint.b.xfercompl) {
++				/* Disable the NP Tx FIFO Empty
++				 * Interrupt */
++				if (core_if->en_multiple_tx_fifo == 0) {
++					intr_mask.b.nptxfempty = 1;
++					DWC_MODIFY_REG32
++					    (&core_if->core_global_regs->gintmsk,
++					     intr_mask.d32, 0);
++				} else {
++					/* Disable the Tx FIFO Empty Interrupt for this EP */
++					uint32_t fifoemptymsk =
++					    0x1 << dwc_ep->num;
++					DWC_MODIFY_REG32(&core_if->
++							 dev_if->dev_global_regs->dtknqr4_fifoemptymsk,
++							 fifoemptymsk, 0);
++				}
++				/* Clear the bit in DIEPINTn for this interrupt */
++				CLEAR_IN_EP_INTR(core_if, epnum, xfercompl);
++
++				/* Complete the transfer */
++				if (epnum == 0) {
++					handle_ep0(pcd);
++				}
++#ifdef DWC_EN_ISOC
++				else if (dwc_ep->type == DWC_OTG_EP_TYPE_ISOC) {
++					if (!ep->stopped)
++						complete_iso_ep(pcd, ep);
++				}
++#endif /* DWC_EN_ISOC */
++#ifdef DWC_UTE_PER_IO
++				else if (dwc_ep->type == DWC_OTG_EP_TYPE_ISOC) {
++					if (!ep->stopped)
++						complete_xiso_ep(ep);
++				}
++#endif /* DWC_UTE_PER_IO */
++				else {
++					if (dwc_ep->type == DWC_OTG_EP_TYPE_ISOC &&
++							dwc_ep->bInterval > 1) {
++						dwc_ep->frame_num += dwc_ep->bInterval;
++						if (dwc_ep->frame_num > 0x3FFF)
++						{
++							dwc_ep->frm_overrun = 1;
++							dwc_ep->frame_num &= 0x3FFF;
++						} else
++							dwc_ep->frm_overrun = 0;
++					}
++					complete_ep(ep);
++					if(diepint.b.nak)
++						CLEAR_IN_EP_INTR(core_if, epnum, nak);
++				}
++			}
++			/* Endpoint disable      */
++			if (diepint.b.epdisabled) {
++				DWC_DEBUGPL(DBG_ANY, "EP%d IN disabled\n",
++					    epnum);
++				handle_in_ep_disable_intr(pcd, epnum);
++
++				/* Clear the bit in DIEPINTn for this interrupt */
++				CLEAR_IN_EP_INTR(core_if, epnum, epdisabled);
++			}
++			/* AHB Error */
++			if (diepint.b.ahberr) {
++				DWC_ERROR("EP%d IN AHB Error\n", epnum);
++				/* Clear the bit in DIEPINTn for this interrupt */
++				CLEAR_IN_EP_INTR(core_if, epnum, ahberr);
++			}
++			/* TimeOUT Handshake (non-ISOC IN EPs) */
++			if (diepint.b.timeout) {
++				DWC_ERROR("EP%d IN Time-out\n", epnum);
++				handle_in_ep_timeout_intr(pcd, epnum);
++
++				CLEAR_IN_EP_INTR(core_if, epnum, timeout);
++			}
++			/** IN Token received with TxF Empty */
++			if (diepint.b.intktxfemp) {
++				DWC_DEBUGPL(DBG_ANY,
++					    "EP%d IN TKN TxFifo Empty\n",
++					    epnum);
++				if (!ep->stopped && epnum != 0) {
++
++					diepmsk_data_t diepmsk = {.d32 = 0 };
++					diepmsk.b.intktxfemp = 1;
++
++					if (core_if->multiproc_int_enable) {
++						DWC_MODIFY_REG32
++						    (&dev_if->dev_global_regs->diepeachintmsk
++						     [epnum], diepmsk.d32, 0);
++					} else {
++						DWC_MODIFY_REG32
++						    (&dev_if->dev_global_regs->diepmsk,
++						     diepmsk.d32, 0);
++					}
++				} else if (core_if->dma_desc_enable
++					   && epnum == 0
++					   && pcd->ep0state ==
++					   EP0_OUT_STATUS_PHASE) {
++					// EP0 IN set STALL
++					depctl.d32 =
++					    DWC_READ_REG32(&dev_if->in_ep_regs
++							   [epnum]->diepctl);
++
++					/* set the disable and stall bits */
++					if (depctl.b.epena) {
++						depctl.b.epdis = 1;
++					}
++					depctl.b.stall = 1;
++					DWC_WRITE_REG32(&dev_if->in_ep_regs
++							[epnum]->diepctl,
++							depctl.d32);
++				}
++				CLEAR_IN_EP_INTR(core_if, epnum, intktxfemp);
++			}
++			/** IN Token Received with EP mismatch */
++			if (diepint.b.intknepmis) {
++				DWC_DEBUGPL(DBG_ANY,
++					    "EP%d IN TKN EP Mismatch\n", epnum);
++				CLEAR_IN_EP_INTR(core_if, epnum, intknepmis);
++			}
++			/** IN Endpoint NAK Effective */
++			if (diepint.b.inepnakeff) {
++				DWC_DEBUGPL(DBG_ANY,
++					    "EP%d IN EP NAK Effective\n",
++					    epnum);
++				/* Periodic EP */
++				if (ep->disabling) {
++					depctl.d32 = 0;
++					depctl.b.snak = 1;
++					depctl.b.epdis = 1;
++					DWC_MODIFY_REG32(&dev_if->in_ep_regs
++							 [epnum]->diepctl,
++							 depctl.d32,
++							 depctl.d32);
++				}
++				CLEAR_IN_EP_INTR(core_if, epnum, inepnakeff);
++
++			}
++
++			/** IN EP Tx FIFO Empty Intr */
++			if (diepint.b.emptyintr) {
++				DWC_DEBUGPL(DBG_ANY,
++					    "EP%d Tx FIFO Empty Intr \n",
++					    epnum);
++				write_empty_tx_fifo(pcd, epnum);
++
++				CLEAR_IN_EP_INTR(core_if, epnum, emptyintr);
++
++			}
++
++			/** IN EP BNA Intr */
++			if (diepint.b.bna) {
++				CLEAR_IN_EP_INTR(core_if, epnum, bna);
++				if (core_if->dma_desc_enable) {
++#ifdef DWC_EN_ISOC
++					if (dwc_ep->type ==
++					    DWC_OTG_EP_TYPE_ISOC) {
++						/*
++						 * This checking is performed to prevent first "false" BNA
++						 * handling occurring right after reconnect
++						 */
++						if (dwc_ep->next_frame !=
++						    0xffffffff)
++							dwc_otg_pcd_handle_iso_bna(ep);
++					} else
++#endif				/* DWC_EN_ISOC */
++					{
++						dwc_otg_pcd_handle_noniso_bna(ep);
++					}
++				}
++			}
++			/* NAK Interrupt */
++			if (diepint.b.nak) {
++				DWC_DEBUGPL(DBG_ANY, "EP%d IN NAK Interrupt\n",
++					    epnum);
++				if (ep->dwc_ep.type == DWC_OTG_EP_TYPE_ISOC) {
++					depctl_data_t depctl;
++					if (ep->dwc_ep.frame_num == 0xFFFFFFFF) {
++						ep->dwc_ep.frame_num = core_if->frame_num;
++						if (ep->dwc_ep.bInterval > 1) {
++							depctl.d32 = 0;
++							depctl.d32 = DWC_READ_REG32(&dev_if->in_ep_regs[epnum]->diepctl);
++							if (ep->dwc_ep.frame_num & 0x1) {
++								depctl.b.setd1pid = 1;
++								depctl.b.setd0pid = 0;
++							} else {
++								depctl.b.setd0pid = 1;
++								depctl.b.setd1pid = 0;
++							}
++							DWC_WRITE_REG32(&dev_if->in_ep_regs[epnum]->diepctl, depctl.d32);
++						}
++						start_next_request(ep);
++					}
++					ep->dwc_ep.frame_num += ep->dwc_ep.bInterval;
++					if (dwc_ep->frame_num > 0x3FFF)	{
++						dwc_ep->frm_overrun = 1;
++						dwc_ep->frame_num &= 0x3FFF;
++					} else
++						dwc_ep->frm_overrun = 0;
++				}
++
++				CLEAR_IN_EP_INTR(core_if, epnum, nak);
++			}
++		}
++		epnum++;
++		ep_intr >>= 1;
++	}
++
++	return 1;
++#undef CLEAR_IN_EP_INTR
++}
++
++/**
++ * This interrupt indicates that an OUT EP has a pending Interrupt.
++ * The sequence for handling the OUT EP interrupt is shown below:
++ * -#	Read the Device All Endpoint Interrupt register
++ * -#	Repeat the following for each OUT EP interrupt bit set (from
++ *		LSB to MSB).
++ * -#	Read the Device Endpoint Interrupt (DOEPINTn) register
++ * -#	If "Transfer Complete" call the request complete function
++ * -#	If "Endpoint Disabled" complete the EP disable procedure.
++ * -#	If "AHB Error Interrupt" log error
++ * -#	If "Setup Phase Done" process Setup Packet (See Standard USB
++ *		Command Processing)
++ */
++static int32_t dwc_otg_pcd_handle_out_ep_intr(dwc_otg_pcd_t * pcd)
++{
++#define CLEAR_OUT_EP_INTR(__core_if,__epnum,__intr) \
++do { \
++		doepint_data_t doepint = {.d32=0}; \
++		doepint.b.__intr = 1; \
++		DWC_WRITE_REG32(&__core_if->dev_if->out_ep_regs[__epnum]->doepint, \
++		doepint.d32); \
++} while (0)
++
++	dwc_otg_core_if_t *core_if = GET_CORE_IF(pcd);
++	uint32_t ep_intr;
++	doepint_data_t doepint = {.d32 = 0 };
++	uint32_t epnum = 0;
++	dwc_otg_pcd_ep_t *ep;
++	dwc_ep_t *dwc_ep;
++	dctl_data_t dctl = {.d32 = 0 };
++	gintmsk_data_t gintmsk = {.d32 = 0 };
++
++
++	DWC_DEBUGPL(DBG_PCDV, "%s()\n", __func__);
++
++	/* Read in the device interrupt bits */
++	ep_intr = dwc_otg_read_dev_all_out_ep_intr(core_if);
++
++	while (ep_intr) {
++		if (ep_intr & 0x1) {
++			/* Get EP pointer */
++			ep = get_out_ep(pcd, epnum);
++			dwc_ep = &ep->dwc_ep;
++
++#ifdef VERBOSE
++			DWC_DEBUGPL(DBG_PCDV,
++				    "EP%d-%s: type=%d, mps=%d\n",
++				    dwc_ep->num, (dwc_ep->is_in ? "IN" : "OUT"),
++				    dwc_ep->type, dwc_ep->maxpacket);
++#endif
++			doepint.d32 =
++			    dwc_otg_read_dev_out_ep_intr(core_if, dwc_ep);
++			/* Moved this interrupt handling up due to a core defect that
++			 * asserts OUT EP 0 xfercompl along with stsphsrcvd in BDMA */
++			if (doepint.b.stsphsercvd) {
++				deptsiz0_data_t deptsiz;
++				CLEAR_OUT_EP_INTR(core_if, epnum, stsphsercvd);
++				deptsiz.d32 =
++				    DWC_READ_REG32(&core_if->dev_if->
++						   out_ep_regs[0]->doeptsiz);
++				if (core_if->snpsid >= OTG_CORE_REV_3_00a
++				    && core_if->dma_enable
++				    && core_if->dma_desc_enable == 0
++				    && doepint.b.xfercompl
++				    && deptsiz.b.xfersize == 24) {
++					CLEAR_OUT_EP_INTR(core_if, epnum,
++							  xfercompl);
++					doepint.b.xfercompl = 0;
++					ep0_out_start(core_if, pcd);
++				}
++				if ((core_if->dma_desc_enable) ||
++				    (core_if->dma_enable
++				     && core_if->snpsid >=
++				     OTG_CORE_REV_3_00a)) {
++					do_setup_in_status_phase(pcd);
++				}
++			}
++			/* Transfer complete */
++			if (doepint.b.xfercompl) {
++
++				if (epnum == 0) {
++					/* Clear the bit in DOEPINTn for this interrupt */
++					CLEAR_OUT_EP_INTR(core_if, epnum, xfercompl);
++					if (core_if->snpsid >= OTG_CORE_REV_3_00a) {
++						DWC_DEBUGPL(DBG_PCDV, "DOEPINT=%x doepint=%x\n",
++							DWC_READ_REG32(&core_if->dev_if->out_ep_regs[0]->doepint),
++							doepint.d32);
++						DWC_DEBUGPL(DBG_PCDV, "DOEPCTL=%x \n",
++							DWC_READ_REG32(&core_if->dev_if->out_ep_regs[0]->doepctl));
++
++						if (core_if->snpsid >= OTG_CORE_REV_3_00a
++							&& core_if->dma_enable == 0) {
++							doepint_data_t doepint;
++							doepint.d32 = DWC_READ_REG32(&core_if->dev_if->
++														out_ep_regs[0]->doepint);
++							if (pcd->ep0state == EP0_IDLE && doepint.b.sr) {
++								CLEAR_OUT_EP_INTR(core_if, epnum, sr);
++								goto exit_xfercompl;
++							}
++						}
++						/* In case of DDMA  look at SR bit to go to the Data Stage */
++						if (core_if->dma_desc_enable) {
++							dev_dma_desc_sts_t status = {.d32 = 0};
++							if (pcd->ep0state == EP0_IDLE) {
++								status.d32 = core_if->dev_if->setup_desc_addr[core_if->
++											dev_if->setup_desc_index]->status.d32;
++								if(pcd->data_terminated) {
++									 pcd->data_terminated = 0;
++									 status.d32 = core_if->dev_if->out_desc_addr->status.d32;
++									 dwc_memcpy(&pcd->setup_pkt->req, pcd->backup_buf, 8);
++								}
++								if (status.b.sr) {
++									if (doepint.b.setup) {
++										DWC_DEBUGPL(DBG_PCDV, "DMA DESC EP0_IDLE SR=1 setup=1\n");
++										/* Already started data stage, clear setup */
++										CLEAR_OUT_EP_INTR(core_if, epnum, setup);
++										doepint.b.setup = 0;
++										handle_ep0(pcd);
++										/* Prepare for more setup packets */
++										if (pcd->ep0state == EP0_IN_STATUS_PHASE ||
++											pcd->ep0state == EP0_IN_DATA_PHASE) {
++											ep0_out_start(core_if, pcd);
++										}
++
++										goto exit_xfercompl;
++									} else {
++										/* Prepare for more setup packets */
++										DWC_DEBUGPL(DBG_PCDV,
++											"EP0_IDLE SR=1 setup=0 new setup comes\n");
++										ep0_out_start(core_if, pcd);
++									}
++								}
++							} else {
++								dwc_otg_pcd_request_t *req;
++								dev_dma_desc_sts_t status = {.d32 = 0};
++								diepint_data_t diepint0;
++								diepint0.d32 = DWC_READ_REG32(&core_if->dev_if->
++															in_ep_regs[0]->diepint);
++
++								if (pcd->ep0state == EP0_STALL || pcd->ep0state == EP0_DISCONNECT) {
++									DWC_ERROR("EP0 is stalled/disconnected\n");
++								}
++
++								/* Clear IN xfercompl if set */
++								if (diepint0.b.xfercompl && (pcd->ep0state == EP0_IN_STATUS_PHASE
++									|| pcd->ep0state == EP0_IN_DATA_PHASE)) {
++									DWC_WRITE_REG32(&core_if->dev_if->
++										in_ep_regs[0]->diepint, diepint0.d32);
++								}
++
++								status.d32 = core_if->dev_if->setup_desc_addr[core_if->
++									dev_if->setup_desc_index]->status.d32;
++
++								if (ep->dwc_ep.xfer_count != ep->dwc_ep.total_len
++									&& (pcd->ep0state == EP0_OUT_DATA_PHASE))
++									status.d32 = core_if->dev_if->out_desc_addr->status.d32;
++								if (pcd->ep0state == EP0_OUT_STATUS_PHASE)
++									status.d32 = core_if->dev_if->
++									out_desc_addr->status.d32;
++
++								if (status.b.sr) {
++									if (DWC_CIRCLEQ_EMPTY(&ep->queue)) {
++										DWC_DEBUGPL(DBG_PCDV, "Request queue empty!!\n");
++									} else {
++										DWC_DEBUGPL(DBG_PCDV, "complete req!!\n");
++										req = DWC_CIRCLEQ_FIRST(&ep->queue);
++										if (ep->dwc_ep.xfer_count != ep->dwc_ep.total_len &&
++											pcd->ep0state == EP0_OUT_DATA_PHASE) {
++												/* Read arrived setup packet from req->buf */
++												dwc_memcpy(&pcd->setup_pkt->req,
++													req->buf + ep->dwc_ep.xfer_count, 8);
++										}
++										req->actual = ep->dwc_ep.xfer_count;
++										dwc_otg_request_done(ep, req, -ECONNRESET);
++										ep->dwc_ep.start_xfer_buff = 0;
++										ep->dwc_ep.xfer_buff = 0;
++										ep->dwc_ep.xfer_len = 0;
++									}
++									pcd->ep0state = EP0_IDLE;
++									if (doepint.b.setup) {
++										DWC_DEBUGPL(DBG_PCDV, "EP0_IDLE SR=1 setup=1\n");
++										/* Data stage started, clear setup */
++										CLEAR_OUT_EP_INTR(core_if, epnum, setup);
++										doepint.b.setup = 0;
++										handle_ep0(pcd);
++										/* Prepare for setup packets if ep0in was enabled*/
++										if (pcd->ep0state == EP0_IN_STATUS_PHASE) {
++											ep0_out_start(core_if, pcd);
++										}
++
++										goto exit_xfercompl;
++									} else {
++										/* Prepare for more setup packets */
++										DWC_DEBUGPL(DBG_PCDV,
++											"EP0_IDLE SR=1 setup=0 new setup comes 2\n");
++										ep0_out_start(core_if, pcd);
++									}
++								}
++							}
++						}
++						if (core_if->snpsid >= OTG_CORE_REV_2_94a && core_if->dma_enable
++							&& core_if->dma_desc_enable == 0) {
++							doepint_data_t doepint_temp = {.d32 = 0};
++							deptsiz0_data_t doeptsize0 = {.d32 = 0 };
++							doepint_temp.d32 = DWC_READ_REG32(&core_if->dev_if->
++															out_ep_regs[ep->dwc_ep.num]->doepint);
++							doeptsize0.d32 = DWC_READ_REG32(&core_if->dev_if->
++															out_ep_regs[ep->dwc_ep.num]->doeptsiz);
++							if (pcd->ep0state == EP0_IDLE) {
++								if (doepint_temp.b.sr) {
++									CLEAR_OUT_EP_INTR(core_if, epnum, sr);
++								}
++									doepint.d32 = DWC_READ_REG32(&core_if->dev_if->
++																	out_ep_regs[0]->doepint);
++									if (doeptsize0.b.supcnt == 3) {
++										DWC_DEBUGPL(DBG_ANY, "Rolling over!!!!!!!\n");
++										ep->dwc_ep.stp_rollover = 1;
++									}
++									if (doepint.b.setup) {
++retry:
++										/* Already started data stage, clear setup */
++										CLEAR_OUT_EP_INTR(core_if, epnum, setup);
++										doepint.b.setup = 0;
++										handle_ep0(pcd);
++										ep->dwc_ep.stp_rollover = 0;
++										/* Prepare for more setup packets */
++										if (pcd->ep0state == EP0_IN_STATUS_PHASE ||
++											pcd->ep0state == EP0_IN_DATA_PHASE) {
++											ep0_out_start(core_if, pcd);
++										}
++										goto exit_xfercompl;
++									} else {
++										/* Prepare for more setup packets */
++										DWC_DEBUGPL(DBG_ANY,
++											"EP0_IDLE SR=1 setup=0 new setup comes\n");
++										doepint.d32 = DWC_READ_REG32(&core_if->dev_if->
++																	out_ep_regs[0]->doepint);
++										if(doepint.b.setup)
++											goto retry;
++										ep0_out_start(core_if, pcd);
++									}
++							} else {
++								dwc_otg_pcd_request_t *req;
++								diepint_data_t diepint0 = {.d32 = 0};
++								doepint_data_t doepint_temp = {.d32 = 0};
++								depctl_data_t diepctl0;
++								diepint0.d32 = DWC_READ_REG32(&core_if->dev_if->
++																in_ep_regs[0]->diepint);
++								diepctl0.d32 = DWC_READ_REG32(&core_if->dev_if->
++																in_ep_regs[0]->diepctl);
++
++								if (pcd->ep0state == EP0_IN_DATA_PHASE
++									|| pcd->ep0state == EP0_IN_STATUS_PHASE) {
++									if (diepint0.b.xfercompl) {
++										DWC_WRITE_REG32(&core_if->dev_if->
++											in_ep_regs[0]->diepint, diepint0.d32);
++									}
++									if (diepctl0.b.epena) {
++										diepint_data_t diepint = {.d32 = 0};
++										diepctl0.b.snak = 1;
++										DWC_WRITE_REG32(&core_if->dev_if->
++														in_ep_regs[0]->diepctl, diepctl0.d32);
++										do {
++											dwc_udelay(10);
++											diepint.d32 = DWC_READ_REG32(&core_if->dev_if->
++												in_ep_regs[0]->diepint);
++										} while (!diepint.b.inepnakeff);
++										diepint.b.inepnakeff = 1;
++										DWC_WRITE_REG32(&core_if->dev_if->
++											in_ep_regs[0]->diepint, diepint.d32);
++										diepctl0.d32 = 0;
++										diepctl0.b.epdis = 1;
++										DWC_WRITE_REG32(&core_if->dev_if->in_ep_regs[0]->diepctl,
++														diepctl0.d32);
++										do {
++											dwc_udelay(10);
++											diepint.d32 = DWC_READ_REG32(&core_if->dev_if->
++												in_ep_regs[0]->diepint);
++										} while (!diepint.b.epdisabled);
++										diepint.b.epdisabled = 1;
++										DWC_WRITE_REG32(&core_if->dev_if->in_ep_regs[0]->diepint,
++															diepint.d32);
++									}
++								}
++								doepint_temp.d32 = DWC_READ_REG32(&core_if->dev_if->
++																out_ep_regs[ep->dwc_ep.num]->doepint);
++								if (doepint_temp.b.sr) {
++									CLEAR_OUT_EP_INTR(core_if, epnum, sr);
++									if (DWC_CIRCLEQ_EMPTY(&ep->queue)) {
++										DWC_DEBUGPL(DBG_PCDV, "Request queue empty!!\n");
++									} else {
++										DWC_DEBUGPL(DBG_PCDV, "complete req!!\n");
++										req = DWC_CIRCLEQ_FIRST(&ep->queue);
++										if (ep->dwc_ep.xfer_count != ep->dwc_ep.total_len &&
++											pcd->ep0state == EP0_OUT_DATA_PHASE) {
++												/* Read arrived setup packet from req->buf */
++												dwc_memcpy(&pcd->setup_pkt->req,
++													req->buf + ep->dwc_ep.xfer_count, 8);
++										}
++										req->actual = ep->dwc_ep.xfer_count;
++										dwc_otg_request_done(ep, req, -ECONNRESET);
++										ep->dwc_ep.start_xfer_buff = 0;
++										ep->dwc_ep.xfer_buff = 0;
++										ep->dwc_ep.xfer_len = 0;
++									}
++									pcd->ep0state = EP0_IDLE;
++									if (doepint.b.setup) {
++										DWC_DEBUGPL(DBG_PCDV, "EP0_IDLE SR=1 setup=1\n");
++										/* Data stage started, clear setup */
++										CLEAR_OUT_EP_INTR(core_if, epnum, setup);
++										doepint.b.setup = 0;
++										handle_ep0(pcd);
++										/* Prepare for setup packets if ep0in was enabled*/
++										if (pcd->ep0state == EP0_IN_STATUS_PHASE) {
++											ep0_out_start(core_if, pcd);
++										}
++										goto exit_xfercompl;
++									} else {
++										/* Prepare for more setup packets */
++										DWC_DEBUGPL(DBG_PCDV,
++											"EP0_IDLE SR=1 setup=0 new setup comes 2\n");
++										ep0_out_start(core_if, pcd);
++									}
++								}
++							}
++						}
++						if (core_if->dma_enable == 0 || pcd->ep0state != EP0_IDLE)
++							handle_ep0(pcd);
++exit_xfercompl:
++						DWC_DEBUGPL(DBG_PCDV, "DOEPINT=%x doepint=%x\n",
++							dwc_otg_read_dev_out_ep_intr(core_if, dwc_ep), doepint.d32);
++					} else {
++					if (core_if->dma_desc_enable == 0
++					    || pcd->ep0state != EP0_IDLE)
++						handle_ep0(pcd);
++					}
++#ifdef DWC_EN_ISOC
++				} else if (dwc_ep->type == DWC_OTG_EP_TYPE_ISOC) {
++					if (doepint.b.pktdrpsts == 0) {
++						/* Clear the bit in DOEPINTn for this interrupt */
++						CLEAR_OUT_EP_INTR(core_if,
++								  epnum,
++								  xfercompl);
++						complete_iso_ep(pcd, ep);
++					} else {
++
++						doepint_data_t doepint = {.d32 = 0 };
++						doepint.b.xfercompl = 1;
++						doepint.b.pktdrpsts = 1;
++						DWC_WRITE_REG32
++						    (&core_if->dev_if->out_ep_regs
++						     [epnum]->doepint,
++						     doepint.d32);
++						if (handle_iso_out_pkt_dropped
++						    (core_if, dwc_ep)) {
++							complete_iso_ep(pcd,
++									ep);
++						}
++					}
++#endif /* DWC_EN_ISOC */
++#ifdef DWC_UTE_PER_IO
++				} else if (dwc_ep->type == DWC_OTG_EP_TYPE_ISOC) {
++					CLEAR_OUT_EP_INTR(core_if, epnum, xfercompl);
++					if (!ep->stopped)
++						complete_xiso_ep(ep);
++#endif /* DWC_UTE_PER_IO */
++				} else {
++					/* Clear the bit in DOEPINTn for this interrupt */
++					CLEAR_OUT_EP_INTR(core_if, epnum,
++							  xfercompl);
++
++					if (core_if->core_params->dev_out_nak) {
++						DWC_TIMER_CANCEL(pcd->core_if->ep_xfer_timer[epnum]);
++						pcd->core_if->ep_xfer_info[epnum].state = 0;
++#ifdef DEBUG
++						print_memory_payload(pcd, dwc_ep);
++#endif
++					}
++					complete_ep(ep);
++				}
++
++			}
++
++			/* Endpoint disable      */
++			if (doepint.b.epdisabled) {
++
++				/* Clear the bit in DOEPINTn for this interrupt */
++				CLEAR_OUT_EP_INTR(core_if, epnum, epdisabled);
++				if (core_if->core_params->dev_out_nak) {
++#ifdef DEBUG
++					print_memory_payload(pcd, dwc_ep);
++#endif
++					/* In case of timeout condition */
++					if (core_if->ep_xfer_info[epnum].state == 2) {
++						dctl.d32 = DWC_READ_REG32(&core_if->dev_if->
++										dev_global_regs->dctl);
++						dctl.b.cgoutnak = 1;
++						DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->dctl,
++																dctl.d32);
++						/* Unmask goutnakeff interrupt which was masked
++						 * during handle nak out interrupt */
++						gintmsk.b.goutnakeff = 1;
++						DWC_MODIFY_REG32(&core_if->core_global_regs->gintmsk,
++																0, gintmsk.d32);
++
++						complete_ep(ep);
++					}
++				}
++				if (ep->dwc_ep.type == DWC_OTG_EP_TYPE_ISOC)
++				{
++					dctl_data_t dctl;
++					gintmsk_data_t intr_mask = {.d32 = 0};
++					dwc_otg_pcd_request_t *req = 0;
++
++					dctl.d32 = DWC_READ_REG32(&core_if->dev_if->
++						dev_global_regs->dctl);
++					dctl.b.cgoutnak = 1;
++					DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->dctl,
++						dctl.d32);
++
++					intr_mask.d32 = 0;
++					intr_mask.b.incomplisoout = 1;
++
++					/* Get any pending requests */
++					if (!DWC_CIRCLEQ_EMPTY(&ep->queue)) {
++						req = DWC_CIRCLEQ_FIRST(&ep->queue);
++						if (!req) {
++							DWC_PRINTF("complete_ep 0x%p, req = NULL!\n", ep);
++						} else {
++							dwc_otg_request_done(ep, req, 0);
++							start_next_request(ep);
++						}
++					} else {
++						DWC_PRINTF("complete_ep 0x%p, ep->queue empty!\n", ep);
++					}
++				}
++			}
++			/* AHB Error */
++			if (doepint.b.ahberr) {
++				DWC_ERROR("EP%d OUT AHB Error\n", epnum);
++				DWC_ERROR("EP%d DEPDMA=0x%08x \n",
++					  epnum, core_if->dev_if->out_ep_regs[epnum]->doepdma);
++				CLEAR_OUT_EP_INTR(core_if, epnum, ahberr);
++			}
++			/* Setup Phase Done (control EPs) */
++			if (doepint.b.setup) {
++#ifdef DEBUG_EP0
++				DWC_DEBUGPL(DBG_PCD, "EP%d SETUP Done\n", epnum);
++#endif
++				CLEAR_OUT_EP_INTR(core_if, epnum, setup);
++
++				handle_ep0(pcd);
++			}
++
++			/** OUT EP BNA Intr */
++			if (doepint.b.bna) {
++				CLEAR_OUT_EP_INTR(core_if, epnum, bna);
++				if (core_if->dma_desc_enable) {
++#ifdef DWC_EN_ISOC
++					if (dwc_ep->type ==
++					    DWC_OTG_EP_TYPE_ISOC) {
++						/*
++						 * This checking is performed to prevent first "false" BNA
++						 * handling occurring right after reconnect
++						 */
++						if (dwc_ep->next_frame !=
++						    0xffffffff)
++							dwc_otg_pcd_handle_iso_bna(ep);
++					} else
++#endif				/* DWC_EN_ISOC */
++					{
++						dwc_otg_pcd_handle_noniso_bna(ep);
++					}
++				}
++			}
++			/* Babble Interrupt */
++			if (doepint.b.babble) {
++				DWC_DEBUGPL(DBG_ANY, "EP%d OUT Babble\n",
++					    epnum);
++				handle_out_ep_babble_intr(pcd, epnum);
++
++				CLEAR_OUT_EP_INTR(core_if, epnum, babble);
++			}
++			if (doepint.b.outtknepdis) {
++				DWC_DEBUGPL(DBG_ANY,
++					    "EP%d OUT Token received when EP is disabled\n",
++					    epnum);
++				if (ep->dwc_ep.type == DWC_OTG_EP_TYPE_ISOC) {
++					doepmsk_data_t doepmsk = {.d32 = 0};
++					ep->dwc_ep.frame_num = core_if->frame_num;
++					if (ep->dwc_ep.bInterval > 1) {
++						depctl_data_t depctl;
++						depctl.d32 = DWC_READ_REG32(&core_if->dev_if->
++													out_ep_regs[epnum]->doepctl);
++						if (ep->dwc_ep.frame_num & 0x1) {
++							depctl.b.setd1pid = 1;
++							depctl.b.setd0pid = 0;
++						} else {
++							depctl.b.setd0pid = 1;
++							depctl.b.setd1pid = 0;
++						}
++						DWC_WRITE_REG32(&core_if->dev_if->
++										out_ep_regs[epnum]->doepctl, depctl.d32);
++					}
++					start_next_request(ep);
++					doepmsk.b.outtknepdis = 1;
++					DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->doepmsk,
++								 doepmsk.d32, 0);
++				}
++				CLEAR_OUT_EP_INTR(core_if, epnum, outtknepdis);
++			}
++
++			/* NAK Interrupt */
++			if (doepint.b.nak) {
++				DWC_DEBUGPL(DBG_ANY, "EP%d OUT NAK\n", epnum);
++				handle_out_ep_nak_intr(pcd, epnum);
++
++				CLEAR_OUT_EP_INTR(core_if, epnum, nak);
++			}
++			/* NYET Interrupt */
++			if (doepint.b.nyet) {
++				DWC_DEBUGPL(DBG_ANY, "EP%d OUT NYET\n", epnum);
++				handle_out_ep_nyet_intr(pcd, epnum);
++
++				CLEAR_OUT_EP_INTR(core_if, epnum, nyet);
++			}
++		}
++
++		epnum++;
++		ep_intr >>= 1;
++	}
++
++	return 1;
++
++#undef CLEAR_OUT_EP_INTR
++}
++static int drop_transfer(uint32_t trgt_fr, uint32_t curr_fr, uint8_t frm_overrun)
++{
++	int retval = 0;
++	if(!frm_overrun && curr_fr >= trgt_fr)
++		retval = 1;
++	else if (frm_overrun
++		 && (curr_fr >= trgt_fr && ((curr_fr - trgt_fr) < 0x3FFF / 2)))
++		retval = 1;
++	return retval;
++}
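++/*
++ * Note on drop_transfer() above: DSTS.SOFFN is a 14-bit (micro)frame counter,
++ * so the plain ">=" comparison is only meaningful while no wrap is pending.
++ * With frm_overrun set, the transfer is dropped only when the current frame
++ * is ahead of the target by less than half of the 0x3FFF counter range,
++ * which filters out apparent "late" frames caused by counter wrap-around.
++ */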
++/**
++ * Incomplete ISO IN Transfer Interrupt.
++ * This interrupt indicates one of the following conditions occurred
++ * while transmitting an ISOC transaction.
++ * - Corrupted IN Token for ISOC EP.
++ * - Packet not complete in FIFO.
++ * The following actions will be taken:
++ *	-#	Determine the EP
++ *	-#	Set the incomplete flag in the dwc_ep structure
++ *	-#	Disable the EP; when the "Endpoint Disabled" interrupt is
++ *		received, flush the FIFO
++ */
++int32_t dwc_otg_pcd_handle_incomplete_isoc_in_intr(dwc_otg_pcd_t * pcd)
++{
++	gintsts_data_t gintsts;
++
++#ifdef DWC_EN_ISOC
++	dwc_otg_dev_if_t *dev_if;
++	deptsiz_data_t deptsiz = {.d32 = 0 };
++	depctl_data_t depctl = {.d32 = 0 };
++	dsts_data_t dsts = {.d32 = 0 };
++	dwc_ep_t *dwc_ep;
++	int i;
++
++	dev_if = GET_CORE_IF(pcd)->dev_if;
++
++	for (i = 1; i <= dev_if->num_in_eps; ++i) {
++		dwc_ep = &pcd->in_ep[i].dwc_ep;
++		if (dwc_ep->active && dwc_ep->type == DWC_OTG_EP_TYPE_ISOC) {
++			deptsiz.d32 =
++			    DWC_READ_REG32(&dev_if->in_ep_regs[i]->dieptsiz);
++			depctl.d32 =
++			    DWC_READ_REG32(&dev_if->in_ep_regs[i]->diepctl);
++
++			if (depctl.b.epdis && deptsiz.d32) {
++				set_current_pkt_info(GET_CORE_IF(pcd), dwc_ep);
++				if (dwc_ep->cur_pkt >= dwc_ep->pkt_cnt) {
++					dwc_ep->cur_pkt = 0;
++					dwc_ep->proc_buf_num =
++					    (dwc_ep->proc_buf_num ^ 1) & 0x1;
++
++					if (dwc_ep->proc_buf_num) {
++						dwc_ep->cur_pkt_addr =
++						    dwc_ep->xfer_buff1;
++						dwc_ep->cur_pkt_dma_addr =
++						    dwc_ep->dma_addr1;
++					} else {
++						dwc_ep->cur_pkt_addr =
++						    dwc_ep->xfer_buff0;
++						dwc_ep->cur_pkt_dma_addr =
++						    dwc_ep->dma_addr0;
++					}
++
++				}
++
++				dsts.d32 =
++				    DWC_READ_REG32(&GET_CORE_IF(pcd)->dev_if->
++						   dev_global_regs->dsts);
++				dwc_ep->next_frame = dsts.b.soffn;
++
++				dwc_otg_iso_ep_start_frm_transfer(GET_CORE_IF
++								  (pcd),
++								  dwc_ep);
++			}
++		}
++	}
++
++#else
++	depctl_data_t depctl = {.d32 = 0 };
++	dwc_ep_t *dwc_ep;
++	dwc_otg_dev_if_t *dev_if;
++	int i;
++	dev_if = GET_CORE_IF(pcd)->dev_if;
++
++	DWC_DEBUGPL(DBG_PCD,"Incomplete ISO IN \n");
++
++	for (i = 1; i <= dev_if->num_in_eps; ++i) {
++		dwc_ep = &pcd->in_ep[i-1].dwc_ep;
++		depctl.d32 =
++			DWC_READ_REG32(&dev_if->in_ep_regs[i]->diepctl);
++		if (depctl.b.epena && dwc_ep->type == DWC_OTG_EP_TYPE_ISOC) {
++			if (drop_transfer(dwc_ep->frame_num, GET_CORE_IF(pcd)->frame_num,
++							dwc_ep->frm_overrun))
++			{
++				depctl.d32 =
++					DWC_READ_REG32(&dev_if->in_ep_regs[i]->diepctl);
++				depctl.b.snak = 1;
++				depctl.b.epdis = 1;
++				DWC_MODIFY_REG32(&dev_if->in_ep_regs[i]->diepctl, depctl.d32, depctl.d32);
++			}
++		}
++	}
++
++	/*intr_mask.b.incomplisoin = 1;
++	DWC_MODIFY_REG32(&GET_CORE_IF(pcd)->core_global_regs->gintmsk,
++			 intr_mask.d32, 0);	 */
++#endif				//DWC_EN_ISOC
++
++	/* Clear interrupt */
++	gintsts.d32 = 0;
++	gintsts.b.incomplisoin = 1;
++	DWC_WRITE_REG32(&GET_CORE_IF(pcd)->core_global_regs->gintsts,
++			gintsts.d32);
++
++	return 1;
++}
++
++/**
++ * Incomplete ISO OUT Transfer Interrupt.
++ *
++ * This interrupt indicates that the core has dropped an ISO OUT
++ * packet. The following conditions can be the cause:
++ * - FIFO Full, the entire packet would not fit in the FIFO.
++ * - CRC Error
++ * - Corrupted Token
++ * The following actions will be taken:
++ *	-#	Determine the EP
++ *	-#	Set the incomplete flag in the dwc_ep structure
++ *	-#	Read any data from the FIFO
++ *	-#	Disable the EP. When the "Endpoint Disabled" interrupt is
++ *		received, re-enable the EP.
++ */
++int32_t dwc_otg_pcd_handle_incomplete_isoc_out_intr(dwc_otg_pcd_t * pcd)
++{
++
++	gintsts_data_t gintsts;
++
++#ifdef DWC_EN_ISOC
++	dwc_otg_dev_if_t *dev_if;
++	deptsiz_data_t deptsiz = {.d32 = 0 };
++	depctl_data_t depctl = {.d32 = 0 };
++	dsts_data_t dsts = {.d32 = 0 };
++	dwc_ep_t *dwc_ep;
++	int i;
++
++	dev_if = GET_CORE_IF(pcd)->dev_if;
++
++	for (i = 1; i <= dev_if->num_out_eps; ++i) {
++		dwc_ep = &pcd->out_ep[i].dwc_ep;
++		if (pcd->out_ep[i].dwc_ep.active &&
++		    pcd->out_ep[i].dwc_ep.type == DWC_OTG_EP_TYPE_ISOC) {
++			deptsiz.d32 =
++			    DWC_READ_REG32(&dev_if->out_ep_regs[i]->doeptsiz);
++			depctl.d32 =
++			    DWC_READ_REG32(&dev_if->out_ep_regs[i]->doepctl);
++
++			if (depctl.b.epdis && deptsiz.d32) {
++				set_current_pkt_info(GET_CORE_IF(pcd),
++						     &pcd->out_ep[i].dwc_ep);
++				if (dwc_ep->cur_pkt >= dwc_ep->pkt_cnt) {
++					dwc_ep->cur_pkt = 0;
++					dwc_ep->proc_buf_num =
++					    (dwc_ep->proc_buf_num ^ 1) & 0x1;
++
++					if (dwc_ep->proc_buf_num) {
++						dwc_ep->cur_pkt_addr =
++						    dwc_ep->xfer_buff1;
++						dwc_ep->cur_pkt_dma_addr =
++						    dwc_ep->dma_addr1;
++					} else {
++						dwc_ep->cur_pkt_addr =
++						    dwc_ep->xfer_buff0;
++						dwc_ep->cur_pkt_dma_addr =
++						    dwc_ep->dma_addr0;
++					}
++
++				}
++
++				dsts.d32 =
++				    DWC_READ_REG32(&GET_CORE_IF(pcd)->dev_if->
++						   dev_global_regs->dsts);
++				dwc_ep->next_frame = dsts.b.soffn;
++
++				dwc_otg_iso_ep_start_frm_transfer(GET_CORE_IF
++								  (pcd),
++								  dwc_ep);
++			}
++		}
++	}
++#else
++	/** @todo implement ISR */
++	gintmsk_data_t intr_mask = {.d32 = 0 };
++	dwc_otg_core_if_t *core_if;
++	deptsiz_data_t deptsiz = {.d32 = 0 };
++	depctl_data_t depctl = {.d32 = 0 };
++	dctl_data_t dctl = {.d32 = 0 };
++	dwc_ep_t *dwc_ep = NULL;
++	int i;
++	core_if = GET_CORE_IF(pcd);
++
++	for (i = 0; i < core_if->dev_if->num_out_eps; ++i) {
++		dwc_ep = &pcd->out_ep[i].dwc_ep;
++		depctl.d32 =
++			DWC_READ_REG32(&core_if->dev_if->out_ep_regs[dwc_ep->num]->doepctl);
++		if (depctl.b.epena && depctl.b.dpid == (core_if->frame_num & 0x1)) {
++			core_if->dev_if->isoc_ep = dwc_ep;
++			deptsiz.d32 =
++					DWC_READ_REG32(&core_if->dev_if->out_ep_regs[dwc_ep->num]->doeptsiz);
++			break;
++		}
++	}
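++	/*
++	 * The matching endpoint (if any) is remembered in dev_if->isoc_ep so
++	 * that the Global OUT NAK Effective handler can finish disabling it
++	 * once the core reports that all OUT endpoints are NAKed.
++	 */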
++	dctl.d32 = DWC_READ_REG32(&core_if->dev_if->dev_global_regs->dctl);
++	gintsts.d32 = DWC_READ_REG32(&core_if->core_global_regs->gintsts);
++	intr_mask.d32 = DWC_READ_REG32(&core_if->core_global_regs->gintmsk);
++
++	if (!intr_mask.b.goutnakeff) {
++		/* Unmask it */
++		intr_mask.b.goutnakeff = 1;
++		DWC_WRITE_REG32(&core_if->core_global_regs->gintmsk, intr_mask.d32);
++	}
++	if (!gintsts.b.goutnakeff) {
++		dctl.b.sgoutnak = 1;
++	}
++	DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->dctl, dctl.d32);
++
++	depctl.d32 = DWC_READ_REG32(&core_if->dev_if->out_ep_regs[dwc_ep->num]->doepctl);
++	if (depctl.b.epena) {
++		depctl.b.epdis = 1;
++		depctl.b.snak = 1;
++	}
++	DWC_WRITE_REG32(&core_if->dev_if->out_ep_regs[dwc_ep->num]->doepctl, depctl.d32);
++
++	intr_mask.d32 = 0;
++	intr_mask.b.incomplisoout = 1;
++
++#endif /* DWC_EN_ISOC */
++
++	/* Clear interrupt */
++	gintsts.d32 = 0;
++	gintsts.b.incomplisoout = 1;
++	DWC_WRITE_REG32(&GET_CORE_IF(pcd)->core_global_regs->gintsts,
++			gintsts.d32);
++
++	return 1;
++}
++
++/**
++ * This function handles the Global IN NAK Effective interrupt.
++ *
++ */
++int32_t dwc_otg_pcd_handle_in_nak_effective(dwc_otg_pcd_t * pcd)
++{
++	dwc_otg_dev_if_t *dev_if = GET_CORE_IF(pcd)->dev_if;
++	depctl_data_t diepctl = {.d32 = 0 };
++	gintmsk_data_t intr_mask = {.d32 = 0 };
++	gintsts_data_t gintsts;
++	dwc_otg_core_if_t *core_if = GET_CORE_IF(pcd);
++	int i;
++
++	DWC_DEBUGPL(DBG_PCD, "Global IN NAK Effective\n");
++
++	/* Disable all active IN EPs */
++	for (i = 0; i <= dev_if->num_in_eps; i++) {
++		diepctl.d32 = DWC_READ_REG32(&dev_if->in_ep_regs[i]->diepctl);
++		if (!(diepctl.b.eptype & 1) && diepctl.b.epena) {
++			if (core_if->start_predict > 0)
++				core_if->start_predict++;
++			diepctl.b.epdis = 1;
++			diepctl.b.snak = 1;
++			DWC_WRITE_REG32(&dev_if->in_ep_regs[i]->diepctl, diepctl.d32);
++		}
++	}
++
++
++	/* Disable the Global IN NAK Effective Interrupt */
++	intr_mask.b.ginnakeff = 1;
++	DWC_MODIFY_REG32(&GET_CORE_IF(pcd)->core_global_regs->gintmsk,
++			 intr_mask.d32, 0);
++
++	/* Clear interrupt */
++	gintsts.d32 = 0;
++	gintsts.b.ginnakeff = 1;
++	DWC_WRITE_REG32(&GET_CORE_IF(pcd)->core_global_regs->gintsts,
++			gintsts.d32);
++
++	return 1;
++}
++
++/**
++ * OUT NAK Effective.
++ *
++ */
++int32_t dwc_otg_pcd_handle_out_nak_effective(dwc_otg_pcd_t * pcd)
++{
++	dwc_otg_dev_if_t *dev_if = GET_CORE_IF(pcd)->dev_if;
++	gintmsk_data_t intr_mask = {.d32 = 0 };
++	gintsts_data_t gintsts;
++	depctl_data_t doepctl;
++	int i;
++
++	/* Disable the Global OUT NAK Effective Interrupt */
++	intr_mask.b.goutnakeff = 1;
++	DWC_MODIFY_REG32(&GET_CORE_IF(pcd)->core_global_regs->gintmsk,
++		intr_mask.d32, 0);
++
++	/* If DEV OUT NAK enabled*/
++	if (pcd->core_if->core_params->dev_out_nak) {
++		/* Run over all out endpoints to determine the ep number on
++		 * which the timeout has happened
++		 */
++		for (i = 0; i <= dev_if->num_out_eps; i++) {
++			if ( pcd->core_if->ep_xfer_info[i].state == 2 )
++				break;
++		}
++		if (i > dev_if->num_out_eps) {
++			dctl_data_t dctl;
++			dctl.d32 =
++			    DWC_READ_REG32(&dev_if->dev_global_regs->dctl);
++			dctl.b.cgoutnak = 1;
++			DWC_WRITE_REG32(&dev_if->dev_global_regs->dctl,
++				dctl.d32);
++			goto out;
++		}
++
++		/* Disable the endpoint */
++		doepctl.d32 = DWC_READ_REG32(&dev_if->out_ep_regs[i]->doepctl);
++		if (doepctl.b.epena) {
++			doepctl.b.epdis = 1;
++			doepctl.b.snak = 1;
++		}
++		DWC_WRITE_REG32(&dev_if->out_ep_regs[i]->doepctl, doepctl.d32);
++		return 1;
++	}
++	/* We come here from Incomplete ISO OUT handler */
++	if (dev_if->isoc_ep) {
++		dwc_ep_t *dwc_ep = (dwc_ep_t *)dev_if->isoc_ep;
++		uint32_t epnum = dwc_ep->num;
++		doepint_data_t doepint;
++		doepint.d32 =
++		    DWC_READ_REG32(&dev_if->out_ep_regs[dwc_ep->num]->doepint);
++		dev_if->isoc_ep = NULL;
++		doepctl.d32 =
++		    DWC_READ_REG32(&dev_if->out_ep_regs[epnum]->doepctl);
++		DWC_PRINTF("Before disable DOEPCTL = %08x\n", doepctl.d32);
++		if (doepctl.b.epena) {
++			doepctl.b.epdis = 1;
++			doepctl.b.snak = 1;
++		}
++		DWC_WRITE_REG32(&dev_if->out_ep_regs[epnum]->doepctl,
++				doepctl.d32);
++		return 1;
++	} else
++		DWC_PRINTF("INTERRUPT Handler not implemented for %s\n",
++			   "Global OUT NAK Effective\n");
++
++out:
++	/* Clear interrupt */
++	gintsts.d32 = 0;
++	gintsts.b.goutnakeff = 1;
++	DWC_WRITE_REG32(&GET_CORE_IF(pcd)->core_global_regs->gintsts,
++			gintsts.d32);
++
++	return 1;
++}
++
++/**
++ * PCD interrupt handler.
++ *
++ * The PCD handles the device interrupts.  Many conditions can cause a
++ * device interrupt. When an interrupt occurs, the device interrupt
++ * service routine determines the cause of the interrupt and
++ * dispatches handling to the appropriate function. These interrupt
++ * handling functions are described below.
++ *
++ * All interrupt registers are processed from LSB to MSB.
++ *
++ */
++int32_t dwc_otg_pcd_handle_intr(dwc_otg_pcd_t * pcd)
++{
++	dwc_otg_core_if_t *core_if = GET_CORE_IF(pcd);
++#ifdef VERBOSE
++	dwc_otg_core_global_regs_t *global_regs = core_if->core_global_regs;
++#endif
++	gintsts_data_t gintr_status;
++	int32_t retval = 0;
++
++	/* Exit from ISR if core is hibernated */
++	if (core_if->hibernation_suspend == 1) {
++		return retval;
++	}
++#ifdef VERBOSE
++	DWC_DEBUGPL(DBG_ANY, "%s() gintsts=%08x	 gintmsk=%08x\n",
++		    __func__,
++		    DWC_READ_REG32(&global_regs->gintsts),
++		    DWC_READ_REG32(&global_regs->gintmsk));
++#endif
++
++	if (dwc_otg_is_device_mode(core_if)) {
++		DWC_SPINLOCK(pcd->lock);
++#ifdef VERBOSE
++		DWC_DEBUGPL(DBG_PCDV, "%s() gintsts=%08x  gintmsk=%08x\n",
++			    __func__,
++			    DWC_READ_REG32(&global_regs->gintsts),
++			    DWC_READ_REG32(&global_regs->gintmsk));
++#endif
++
++		gintr_status.d32 = dwc_otg_read_core_intr(core_if);
++
++		DWC_DEBUGPL(DBG_PCDV, "%s: gintsts&gintmsk=%08x\n",
++			    __func__, gintr_status.d32);
++
++		if (gintr_status.b.sofintr) {
++			retval |= dwc_otg_pcd_handle_sof_intr(pcd);
++		}
++		if (gintr_status.b.rxstsqlvl) {
++			retval |=
++			    dwc_otg_pcd_handle_rx_status_q_level_intr(pcd);
++		}
++		if (gintr_status.b.nptxfempty) {
++			retval |= dwc_otg_pcd_handle_np_tx_fifo_empty_intr(pcd);
++		}
++		if (gintr_status.b.goutnakeff) {
++			retval |= dwc_otg_pcd_handle_out_nak_effective(pcd);
++		}
++		if (gintr_status.b.i2cintr) {
++			retval |= dwc_otg_pcd_handle_i2c_intr(pcd);
++		}
++		if (gintr_status.b.erlysuspend) {
++			retval |= dwc_otg_pcd_handle_early_suspend_intr(pcd);
++		}
++		if (gintr_status.b.usbreset) {
++			retval |= dwc_otg_pcd_handle_usb_reset_intr(pcd);
++		}
++		if (gintr_status.b.enumdone) {
++			retval |= dwc_otg_pcd_handle_enum_done_intr(pcd);
++		}
++		if (gintr_status.b.isooutdrop) {
++			retval |=
++			    dwc_otg_pcd_handle_isoc_out_packet_dropped_intr
++			    (pcd);
++		}
++		if (gintr_status.b.eopframe) {
++			retval |=
++			    dwc_otg_pcd_handle_end_periodic_frame_intr(pcd);
++		}
++		if (gintr_status.b.inepint) {
++			if (!core_if->multiproc_int_enable) {
++				retval |= dwc_otg_pcd_handle_in_ep_intr(pcd);
++			}
++		}
++		if (gintr_status.b.outepintr) {
++			if (!core_if->multiproc_int_enable) {
++				retval |= dwc_otg_pcd_handle_out_ep_intr(pcd);
++			}
++		}
++		if (gintr_status.b.epmismatch) {
++			retval |= dwc_otg_pcd_handle_ep_mismatch_intr(pcd);
++		}
++		if (gintr_status.b.fetsusp) {
++			retval |= dwc_otg_pcd_handle_ep_fetsusp_intr(pcd);
++		}
++		if (gintr_status.b.ginnakeff) {
++			retval |= dwc_otg_pcd_handle_in_nak_effective(pcd);
++		}
++		if (gintr_status.b.incomplisoin) {
++			retval |=
++			    dwc_otg_pcd_handle_incomplete_isoc_in_intr(pcd);
++		}
++		if (gintr_status.b.incomplisoout) {
++			retval |=
++			    dwc_otg_pcd_handle_incomplete_isoc_out_intr(pcd);
++		}
++
++		/* In MPI mode the Device Endpoint interrupts are asserted
++		 * without the outepintr and inepint bits being set, so these
++		 * interrupt handlers are called without checking those bit-fields
++		 */
++		if (core_if->multiproc_int_enable) {
++			retval |= dwc_otg_pcd_handle_in_ep_intr(pcd);
++			retval |= dwc_otg_pcd_handle_out_ep_intr(pcd);
++		}
++#ifdef VERBOSE
++		DWC_DEBUGPL(DBG_PCDV, "%s() gintsts=%0x\n", __func__,
++			    DWC_READ_REG32(&global_regs->gintsts));
++#endif
++		DWC_SPINUNLOCK(pcd->lock);
++	}
++	return retval;
++}
++
++#endif /* DWC_HOST_ONLY */
+--- /dev/null
++++ b/drivers/usb/host/dwc_otg/dwc_otg_pcd_linux.c
+@@ -0,0 +1,1280 @@
++ /* ==========================================================================
++  * $File: //dwh/usb_iip/dev/software/otg/linux/drivers/dwc_otg_pcd_linux.c $
++  * $Revision: #21 $
++  * $Date: 2012/08/10 $
++  * $Change: 2047372 $
++  *
++  * Synopsys HS OTG Linux Software Driver and documentation (hereinafter,
++  * "Software") is an Unsupported proprietary work of Synopsys, Inc. unless
++  * otherwise expressly agreed to in writing between Synopsys and you.
++  *
++  * The Software IS NOT an item of Licensed Software or Licensed Product under
++  * any End User Software License Agreement or Agreement for Licensed Product
++  * with Synopsys or any supplement thereto. You are permitted to use and
++  * redistribute this Software in source and binary forms, with or without
++  * modification, provided that redistributions of source code must retain this
++  * notice. You may not view, use, disclose, copy or distribute this file or
++  * any information contained herein except pursuant to this license grant from
++  * Synopsys. If you do not agree with this notice, including the disclaimer
++  * below, then you are not authorized to use the Software.
++  *
++  * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS" BASIS
++  * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
++  * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
++  * ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS BE LIABLE FOR ANY DIRECT,
++  * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
++  * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
++  * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
++  * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
++  * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
++  * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
++  * DAMAGE.
++  * ========================================================================== */
++#ifndef DWC_HOST_ONLY
++
++/** @file
++ * This file implements the Peripheral Controller Driver.
++ *
++ * The Peripheral Controller Driver (PCD) is responsible for
++ * translating requests from the Function Driver into the appropriate
++ * actions on the DWC_otg controller. It isolates the Function Driver
++ * from the specifics of the controller by providing an API to the
++ * Function Driver.
++ *
++ * The Peripheral Controller Driver for Linux will implement the
++ * Gadget API, so that the existing Gadget drivers can be used.
++ * (Gadget Driver is the Linux terminology for a Function Driver.)
++ *
++ * The Linux Gadget API is defined in the header file
++ * <code><linux/usb_gadget.h></code>.  The USB EP operations API is
++ * defined in the structure <code>usb_ep_ops</code> and the USB
++ * Controller API is defined in the structure
++ * <code>usb_gadget_ops</code>.
++ *
++ */
++
++#include "dwc_otg_os_dep.h"
++#include "dwc_otg_pcd_if.h"
++#include "dwc_otg_pcd.h"
++#include "dwc_otg_driver.h"
++#include "dwc_otg_dbg.h"
++
++extern bool fiq_enable;
++
++static struct gadget_wrapper {
++	dwc_otg_pcd_t *pcd;
++
++	struct usb_gadget gadget;
++	struct usb_gadget_driver *driver;
++
++	struct usb_ep ep0;
++	struct usb_ep in_ep[16];
++	struct usb_ep out_ep[16];
++
++} *gadget_wrapper;
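++/*
++ * The wrapper ties the portable PCD to the Linux gadget framework: it holds
++ * the dwc_otg_pcd_t, the struct usb_gadget exposed to gadget drivers, and
++ * the usb_ep entries (ep0 plus up to 15 IN and 15 OUT endpoints) that
++ * gadget_add_eps() links into gadget.ep_list.
++ */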
++
++/* Display the contents of the buffer */
++extern void dump_msg(const u8 * buf, unsigned int length);
++/**
++ * Get the dwc_otg_pcd_ep_t* from usb_ep* pointer - NULL in case
++ * if the endpoint is not found
++ */
++static struct dwc_otg_pcd_ep *ep_from_handle(dwc_otg_pcd_t * pcd, void *handle)
++{
++	int i;
++	if (pcd->ep0.priv == handle) {
++		return &pcd->ep0;
++	}
++
++	for (i = 0; i < MAX_EPS_CHANNELS - 1; i++) {
++		if (pcd->in_ep[i].priv == handle)
++			return &pcd->in_ep[i];
++		if (pcd->out_ep[i].priv == handle)
++			return &pcd->out_ep[i];
++	}
++
++	return NULL;
++}
++
++/* USB Endpoint Operations */
++/*
++ * The following sections briefly describe the behavior of the Gadget
++ * API endpoint operations implemented in the DWC_otg driver
++ * software. Detailed descriptions of the generic behavior of each of
++ * these functions can be found in the Linux header file
++ * include/linux/usb_gadget.h.
++ *
++ * The Gadget API provides wrapper functions for each of the function
++ * pointers defined in usb_ep_ops. The Gadget Driver calls the wrapper
++ * function, which then calls the underlying PCD function. The
++ * following sections are named according to the wrapper
++ * functions. Within each section, the corresponding DWC_otg PCD
++ * function name is specified.
++ *
++ */
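++/*
++ * For example, a gadget driver calling usb_ep_enable() on one of the
++ * endpoints exported below ends up in ep_enable() through the
++ * dwc_otg_pcd_ep_ops table, which in turn calls dwc_otg_pcd_ep_enable() in
++ * the portable PCD code; queue, dequeue and halt follow the same pattern.
++ */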
++
++/**
++ * This function is called by the Gadget Driver for each EP to be
++ * configured for the current configuration (SET_CONFIGURATION).
++ *
++ * This function initializes the dwc_otg_ep_t data structure, and then
++ * calls dwc_otg_ep_activate.
++ */
++static int ep_enable(struct usb_ep *usb_ep,
++		     const struct usb_endpoint_descriptor *ep_desc)
++{
++	int retval;
++
++	DWC_DEBUGPL(DBG_PCDV, "%s(%p,%p)\n", __func__, usb_ep, ep_desc);
++
++	if (!usb_ep || !ep_desc || ep_desc->bDescriptorType != USB_DT_ENDPOINT) {
++		DWC_WARN("%s, bad ep or descriptor\n", __func__);
++		return -EINVAL;
++	}
++	if (usb_ep == &gadget_wrapper->ep0) {
++		DWC_WARN("%s, bad ep(0)\n", __func__);
++		return -EINVAL;
++	}
++
++	/* Check FIFO size? */
++	if (!ep_desc->wMaxPacketSize) {
++		DWC_WARN("%s, bad %s maxpacket\n", __func__, usb_ep->name);
++		return -ERANGE;
++	}
++
++	if (!gadget_wrapper->driver ||
++	    gadget_wrapper->gadget.speed == USB_SPEED_UNKNOWN) {
++		DWC_WARN("%s, bogus device state\n", __func__);
++		return -ESHUTDOWN;
++	}
++
++	/* Delete after check - MAS */
++#if 0
++	nat = (uint32_t) ep_desc->wMaxPacketSize;
++	printk(KERN_ALERT "%s: nat (before) =%d\n", __func__, nat);
++	nat = (nat >> 11) & 0x03;
++	printk(KERN_ALERT "%s: nat (after) =%d\n", __func__, nat);
++#endif
++	retval = dwc_otg_pcd_ep_enable(gadget_wrapper->pcd,
++				       (const uint8_t *)ep_desc,
++				       (void *)usb_ep);
++	if (retval) {
++		DWC_WARN("dwc_otg_pcd_ep_enable failed\n");
++		return -EINVAL;
++	}
++
++	usb_ep->maxpacket = le16_to_cpu(ep_desc->wMaxPacketSize);
++
++	return 0;
++}
++
++/**
++ * This function is called when an EP is disabled due to disconnect or
++ * change in configuration. Any pending requests will terminate with a
++ * status of -ESHUTDOWN.
++ *
++ * This function modifies the dwc_otg_ep_t data structure for this EP,
++ * and then calls dwc_otg_ep_deactivate.
++ */
++static int ep_disable(struct usb_ep *usb_ep)
++{
++	int retval;
++
++	DWC_DEBUGPL(DBG_PCDV, "%s(%p)\n", __func__, usb_ep);
++	if (!usb_ep) {
++		DWC_DEBUGPL(DBG_PCD, "%s, %s not enabled\n", __func__,
++			    usb_ep ? usb_ep->name : NULL);
++		return -EINVAL;
++	}
++
++	retval = dwc_otg_pcd_ep_disable(gadget_wrapper->pcd, usb_ep);
++	if (retval) {
++		retval = -EINVAL;
++	}
++
++	return retval;
++}
++
++/**
++ * This function allocates a request object to use with the specified
++ * endpoint.
++ *
++ * @param ep The endpoint to be used with the request
++ * @param gfp_flags the GFP_* flags to use.
++ */
++static struct usb_request *dwc_otg_pcd_alloc_request(struct usb_ep *ep,
++						     gfp_t gfp_flags)
++{
++	struct usb_request *usb_req;
++
++	DWC_DEBUGPL(DBG_PCDV, "%s(%p,%d)\n", __func__, ep, gfp_flags);
++	if (0 == ep) {
++		DWC_WARN("%s() %s\n", __func__, "Invalid EP!\n");
++		return 0;
++	}
++	usb_req = kmalloc(sizeof(*usb_req), gfp_flags);
++	if (0 == usb_req) {
++		DWC_WARN("%s() %s\n", __func__, "request allocation failed!\n");
++		return 0;
++	}
++	memset(usb_req, 0, sizeof(*usb_req));
++	usb_req->dma = DWC_DMA_ADDR_INVALID;
++
++	return usb_req;
++}
++
++/**
++ * This function frees a request object.
++ *
++ * @param ep The endpoint associated with the request
++ * @param req The request being freed
++ */
++static void dwc_otg_pcd_free_request(struct usb_ep *ep, struct usb_request *req)
++{
++	DWC_DEBUGPL(DBG_PCDV, "%s(%p,%p)\n", __func__, ep, req);
++
++	if (0 == ep || 0 == req) {
++		DWC_WARN("%s() %s\n", __func__,
++			 "Invalid ep or req argument!\n");
++		return;
++	}
++
++	kfree(req);
++}
++
++#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,28)
++/**
++ * This function allocates an I/O buffer to be used for a transfer
++ * to/from the specified endpoint.
++ *
++ * @param usb_ep The endpoint to be used with the request
++ * @param bytes The desired number of bytes for the buffer
++ * @param dma Pointer to the buffer's DMA address; must be valid
++ * @param gfp_flags the GFP_* flags to use.
++ * @return address of a new buffer, or NULL if the buffer could not be allocated.
++ */
++static void *dwc_otg_pcd_alloc_buffer(struct usb_ep *usb_ep, unsigned bytes,
++				      dma_addr_t * dma, gfp_t gfp_flags)
++{
++	void *buf;
++	dwc_otg_pcd_t *pcd = 0;
++
++	pcd = gadget_wrapper->pcd;
++
++	DWC_DEBUGPL(DBG_PCDV, "%s(%p,%d,%p,%0x)\n", __func__, usb_ep, bytes,
++		    dma, gfp_flags);
++
++	/* Check dword alignment */
++	if ((bytes & 0x3UL) != 0) {
++		DWC_WARN("%s() Buffer size is not a multiple of "
++			 "DWORD size (%d)", __func__, bytes);
++	}
++
++	buf = dma_alloc_coherent(NULL, bytes, dma, gfp_flags);
++
++	/* Check dword alignment */
++	if (((int)buf & 0x3UL) != 0) {
++		DWC_WARN("%s() Buffer is not DWORD aligned (%p)",
++			 __func__, buf);
++	}
++
++	return buf;
++}
++
++/**
++ * This function frees an I/O buffer that was allocated by alloc_buffer.
++ *
++ * @param usb_ep the endpoint associated with the buffer
++ * @param buf address of the buffer
++ * @param dma The buffer's DMA address
++ * @param bytes The number of bytes of the buffer
++ */
++static void dwc_otg_pcd_free_buffer(struct usb_ep *usb_ep, void *buf,
++				    dma_addr_t dma, unsigned bytes)
++{
++	dwc_otg_pcd_t *pcd = 0;
++
++	pcd = gadget_wrapper->pcd;
++
++	DWC_DEBUGPL(DBG_PCDV, "%s(%p,%0x,%d)\n", __func__, buf, dma, bytes);
++
++	dma_free_coherent(NULL, bytes, buf, dma);
++}
++#endif
++
++/**
++ * This function is used to submit an I/O Request to an EP.
++ *
++ *	- When the request completes the request's completion callback
++ *	  is called to return the request to the driver.
++ *	- An EP, except control EPs, may have multiple requests
++ *	  pending.
++ *	- Once submitted the request cannot be examined or modified.
++ *	- Each request is turned into one or more packets.
++ *	- A BULK EP can queue any amount of data; the transfer is
++ *	  packetized.
++ *	- Zero length Packets are specified with the request 'zero'
++ *	  flag.
++ */
++static int ep_queue(struct usb_ep *usb_ep, struct usb_request *usb_req,
++		    gfp_t gfp_flags)
++{
++	dwc_otg_pcd_t *pcd;
++	struct dwc_otg_pcd_ep *ep = NULL;
++	int retval = 0, is_isoc_ep = 0;
++	dma_addr_t dma_addr = DWC_DMA_ADDR_INVALID;
++
++	DWC_DEBUGPL(DBG_PCDV, "%s(%p,%p,%d)\n",
++		    __func__, usb_ep, usb_req, gfp_flags);
++
++	if (!usb_req || !usb_req->complete || !usb_req->buf) {
++		DWC_WARN("bad params\n");
++		return -EINVAL;
++	}
++
++	if (!usb_ep) {
++		DWC_WARN("bad ep\n");
++		return -EINVAL;
++	}
++
++	pcd = gadget_wrapper->pcd;
++	if (!gadget_wrapper->driver ||
++	    gadget_wrapper->gadget.speed == USB_SPEED_UNKNOWN) {
++		DWC_DEBUGPL(DBG_PCDV, "gadget.speed=%d\n",
++			    gadget_wrapper->gadget.speed);
++		DWC_WARN("bogus device state\n");
++		return -ESHUTDOWN;
++	}
++
++	DWC_DEBUGPL(DBG_PCD, "%s queue req %p, len %d buf %p\n",
++		    usb_ep->name, usb_req, usb_req->length, usb_req->buf);
++
++	usb_req->status = -EINPROGRESS;
++	usb_req->actual = 0;
++
++	ep = ep_from_handle(pcd, usb_ep);
++	if (ep == NULL)
++		is_isoc_ep = 0;
++	else
++		is_isoc_ep = (ep->dwc_ep.type == DWC_OTG_EP_TYPE_ISOC) ? 1 : 0;
++#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,28)
++	dma_addr = usb_req->dma;
++#else
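++	/*
++	 * On newer kernels there is no per-endpoint alloc_buffer hook, so when
++	 * the core is in DMA mode and the gadget driver did not supply a mapped
++	 * buffer (usb_req->dma == DWC_DMA_ADDR_INVALID), the request buffer is
++	 * streaming-mapped here and unmapped again in _complete().
++	 */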
++	if (GET_CORE_IF(pcd)->dma_enable) {
++                dwc_otg_device_t *otg_dev = gadget_wrapper->pcd->otg_dev;
++                struct device *dev = NULL;
++
++                if (otg_dev != NULL)
++                        dev = DWC_OTG_OS_GETDEV(otg_dev->os_dep);
++
++		if (usb_req->length != 0 &&
++                    usb_req->dma == DWC_DMA_ADDR_INVALID) {
++                        dma_addr = dma_map_single(dev, usb_req->buf,
++                                                  usb_req->length,
++                                                  ep->dwc_ep.is_in ?
++                                                        DMA_TO_DEVICE:
++                                                        DMA_FROM_DEVICE);
++		}
++	}
++#endif
++
++#ifdef DWC_UTE_PER_IO
++	if (is_isoc_ep == 1) {
++		retval = dwc_otg_pcd_xiso_ep_queue(pcd, usb_ep, usb_req->buf, dma_addr,
++			usb_req->length, usb_req->zero, usb_req,
++			gfp_flags == GFP_ATOMIC ? 1 : 0, &usb_req->ext_req);
++		if (retval)
++			return -EINVAL;
++
++		return 0;
++	}
++#endif
++	retval = dwc_otg_pcd_ep_queue(pcd, usb_ep, usb_req->buf, dma_addr,
++				      usb_req->length, usb_req->zero, usb_req,
++				      gfp_flags == GFP_ATOMIC ? 1 : 0);
++	if (retval) {
++		return -EINVAL;
++	}
++
++	return 0;
++}
++
++/**
++ * This function cancels an I/O request from an EP.
++ */
++static int ep_dequeue(struct usb_ep *usb_ep, struct usb_request *usb_req)
++{
++	DWC_DEBUGPL(DBG_PCDV, "%s(%p,%p)\n", __func__, usb_ep, usb_req);
++
++	if (!usb_ep || !usb_req) {
++		DWC_WARN("bad argument\n");
++		return -EINVAL;
++	}
++	if (!gadget_wrapper->driver ||
++	    gadget_wrapper->gadget.speed == USB_SPEED_UNKNOWN) {
++		DWC_WARN("bogus device state\n");
++		return -ESHUTDOWN;
++	}
++	if (dwc_otg_pcd_ep_dequeue(gadget_wrapper->pcd, usb_ep, usb_req)) {
++		return -EINVAL;
++	}
++
++	return 0;
++}
++
++/**
++ * usb_ep_set_halt stalls an endpoint.
++ *
++ * usb_ep_clear_halt clears an endpoint halt and resets its data
++ * toggle.
++ *
++ * Both of these functions are implemented with the same underlying
++ * function. The behavior depends on the value argument.
++ *
++ * @param[in] usb_ep the Endpoint to halt or clear halt.
++ * @param[in] value
++ *	- 0 means clear_halt.
++ *	- 1 means set_halt,
++ *	- 2 means clear stall lock flag.
++ *	- 3 means set  stall lock flag.
++ */
++static int ep_halt(struct usb_ep *usb_ep, int value)
++{
++	int retval = 0;
++
++	DWC_DEBUGPL(DBG_PCD, "HALT %s %d\n", usb_ep->name, value);
++
++	if (!usb_ep) {
++		DWC_WARN("bad ep\n");
++		return -EINVAL;
++	}
++
++	retval = dwc_otg_pcd_ep_halt(gadget_wrapper->pcd, usb_ep, value);
++	if (retval == -DWC_E_AGAIN) {
++		return -EAGAIN;
++	} else if (retval) {
++		retval = -EINVAL;
++	}
++
++	return retval;
++}
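++/*
++ * Illustration: usb_ep_set_halt(ep) from a gadget driver reaches
++ * ep_halt(ep, 1) through the ops table and usb_ep_clear_halt(ep) maps to
++ * ep_halt(ep, 0); values 2 and 3 (the stall lock flag) are not reachable
++ * through the standard gadget API.
++ */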
++
++//#if (LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,30))
++#if 0
++/**
++ * ep_wedge: sets the halt feature and ignores clear requests
++ *
++ * @usb_ep: the endpoint being wedged
++ *
++ * Use this to stall an endpoint and ignore CLEAR_FEATURE(HALT_ENDPOINT)
++ * requests. If the gadget driver clears the halt status, it will
++ * automatically unwedge the endpoint.
++ *
++ * Returns zero on success, else negative errno. *
++ * Check usb_ep_set_wedge() at "usb_gadget.h" for details
++ */
++static int ep_wedge(struct usb_ep *usb_ep)
++{
++	int retval = 0;
++
++	DWC_DEBUGPL(DBG_PCD, "WEDGE %s\n", usb_ep->name);
++
++	if (!usb_ep) {
++		DWC_WARN("bad ep\n");
++		return -EINVAL;
++	}
++
++	retval = dwc_otg_pcd_ep_wedge(gadget_wrapper->pcd, usb_ep);
++	if (retval == -DWC_E_AGAIN) {
++		retval = -EAGAIN;
++	} else if (retval) {
++		retval = -EINVAL;
++	}
++
++	return retval;
++}
++#endif
++
++#ifdef DWC_EN_ISOC
++/**
++ * This function is used to submit an ISOC Transfer Request to an EP.
++ *
++ *	- Every time a sync period completes the request's completion callback
++ *	  is called to provide data to the gadget driver.
++ *	- Once submitted the request cannot be modified.
++ *	- Each request is turned into periodic data packets until ISO
++ *	  Transfer is stopped.
++ */
++static int iso_ep_start(struct usb_ep *usb_ep, struct usb_iso_request *req,
++			gfp_t gfp_flags)
++{
++	int retval = 0;
++
++	if (!req || !req->process_buffer || !req->buf0 || !req->buf1) {
++		DWC_WARN("bad params\n");
++		return -EINVAL;
++	}
++
++	if (!usb_ep) {
++		DWC_PRINTF("bad params\n");
++		return -EINVAL;
++	}
++
++	req->status = -EINPROGRESS;
++
++	retval =
++	    dwc_otg_pcd_iso_ep_start(gadget_wrapper->pcd, usb_ep, req->buf0,
++				     req->buf1, req->dma0, req->dma1,
++				     req->sync_frame, req->data_pattern_frame,
++				     req->data_per_frame,
++				     req->
++				     flags & USB_REQ_ISO_ASAP ? -1 :
++				     req->start_frame, req->buf_proc_intrvl,
++				     req, gfp_flags == GFP_ATOMIC ? 1 : 0);
++
++	if (retval) {
++		return -EINVAL;
++	}
++
++	return retval;
++}
++
++/**
++ * This function stops ISO EP Periodic Data Transfer.
++ */
++static int iso_ep_stop(struct usb_ep *usb_ep, struct usb_iso_request *req)
++{
++	int retval = 0;
++	if (!usb_ep) {
++		DWC_WARN("bad ep\n");
++	}
++
++	if (!gadget_wrapper->driver ||
++	    gadget_wrapper->gadget.speed == USB_SPEED_UNKNOWN) {
++		DWC_DEBUGPL(DBG_PCDV, "gadget.speed=%d\n",
++			    gadget_wrapper->gadget.speed);
++		DWC_WARN("bogus device state\n");
++	}
++
++	dwc_otg_pcd_iso_ep_stop(gadget_wrapper->pcd, usb_ep, req);
++	if (retval) {
++		retval = -EINVAL;
++	}
++
++	return retval;
++}
++
++static struct usb_iso_request *alloc_iso_request(struct usb_ep *ep,
++						 int packets, gfp_t gfp_flags)
++{
++	struct usb_iso_request *pReq = NULL;
++	uint32_t req_size;
++
++	req_size = sizeof(struct usb_iso_request);
++	req_size +=
++	    (2 * packets * (sizeof(struct usb_gadget_iso_packet_descriptor)));
++
++	pReq = kmalloc(req_size, gfp_flags);
++	if (!pReq) {
++		DWC_WARN("Can't allocate Iso Request\n");
++		return 0;
++	}
++	pReq->iso_packet_desc0 = (void *)(pReq + 1);
++
++	pReq->iso_packet_desc1 = pReq->iso_packet_desc0 + packets;
++
++	return pReq;
++}
++
++static void free_iso_request(struct usb_ep *ep, struct usb_iso_request *req)
++{
++	kfree(req);
++}
++
++static struct usb_isoc_ep_ops dwc_otg_pcd_ep_ops = {
++	.ep_ops = {
++		   .enable = ep_enable,
++		   .disable = ep_disable,
++
++		   .alloc_request = dwc_otg_pcd_alloc_request,
++		   .free_request = dwc_otg_pcd_free_request,
++
++#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,28)
++		   .alloc_buffer = dwc_otg_pcd_alloc_buffer,
++		   .free_buffer = dwc_otg_pcd_free_buffer,
++#endif
++
++		   .queue = ep_queue,
++		   .dequeue = ep_dequeue,
++
++		   .set_halt = ep_halt,
++		   .fifo_status = 0,
++		   .fifo_flush = 0,
++		   },
++	.iso_ep_start = iso_ep_start,
++	.iso_ep_stop = iso_ep_stop,
++	.alloc_iso_request = alloc_iso_request,
++	.free_iso_request = free_iso_request,
++};
++
++#else
++
++	int (*enable) (struct usb_ep *ep,
++		const struct usb_endpoint_descriptor *desc);
++	int (*disable) (struct usb_ep *ep);
++
++	struct usb_request *(*alloc_request) (struct usb_ep *ep,
++		gfp_t gfp_flags);
++	void (*free_request) (struct usb_ep *ep, struct usb_request *req);
++
++	int (*queue) (struct usb_ep *ep, struct usb_request *req,
++		gfp_t gfp_flags);
++	int (*dequeue) (struct usb_ep *ep, struct usb_request *req);
++
++	int (*set_halt) (struct usb_ep *ep, int value);
++	int (*set_wedge) (struct usb_ep *ep);
++
++	int (*fifo_status) (struct usb_ep *ep);
++	void (*fifo_flush) (struct usb_ep *ep);
++static struct usb_ep_ops dwc_otg_pcd_ep_ops = {
++	.enable = ep_enable,
++	.disable = ep_disable,
++
++	.alloc_request = dwc_otg_pcd_alloc_request,
++	.free_request = dwc_otg_pcd_free_request,
++
++#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,28)
++	.alloc_buffer = dwc_otg_pcd_alloc_buffer,
++	.free_buffer = dwc_otg_pcd_free_buffer,
++#else
++	/* .set_wedge = ep_wedge, */
++        .set_wedge = NULL, /* uses set_halt instead */
++#endif
++
++	.queue = ep_queue,
++	.dequeue = ep_dequeue,
++
++	.set_halt = ep_halt,
++	.fifo_status = 0,
++	.fifo_flush = 0,
++
++};
++
++#endif /* DWC_EN_ISOC */
++/*	Gadget Operations */
++/**
++ * The following gadget operations will be implemented in the DWC_otg
++ * PCD. Functions in the API that are not described below are not
++ * implemented.
++ *
++ * The Gadget API provides wrapper functions for each of the function
++ * pointers defined in usb_gadget_ops. The Gadget Driver calls the
++ * wrapper function, which then calls the underlying PCD function. The
++ * following sections are named according to the wrapper functions
++ * (except for ioctl, which doesn't have a wrapper function). Within
++ * each section, the corresponding DWC_otg PCD function name is
++ * specified.
++ *
++ */
++
++/**
++ * Gets the USB frame number of the last SOF.
++ */
++static int get_frame_number(struct usb_gadget *gadget)
++{
++	struct gadget_wrapper *d;
++
++	DWC_DEBUGPL(DBG_PCDV, "%s(%p)\n", __func__, gadget);
++
++	if (gadget == 0) {
++		return -ENODEV;
++	}
++
++	d = container_of(gadget, struct gadget_wrapper, gadget);
++	return dwc_otg_pcd_get_frame_number(d->pcd);
++}
++
++#ifdef CONFIG_USB_DWC_OTG_LPM
++static int test_lpm_enabled(struct usb_gadget *gadget)
++{
++	struct gadget_wrapper *d;
++
++	d = container_of(gadget, struct gadget_wrapper, gadget);
++
++	return dwc_otg_pcd_is_lpm_enabled(d->pcd);
++}
++#endif
++
++/**
++ * Initiates Session Request Protocol (SRP) to wakeup the host if no
++ * session is in progress. If a session is already in progress, but
++ * the device is suspended, remote wakeup signaling is started.
++ *
++ */
++static int wakeup(struct usb_gadget *gadget)
++{
++	struct gadget_wrapper *d;
++
++	DWC_DEBUGPL(DBG_PCDV, "%s(%p)\n", __func__, gadget);
++
++	if (gadget == 0) {
++		return -ENODEV;
++	} else {
++		d = container_of(gadget, struct gadget_wrapper, gadget);
++	}
++	dwc_otg_pcd_wakeup(d->pcd);
++	return 0;
++}
++
++static const struct usb_gadget_ops dwc_otg_pcd_ops = {
++	.get_frame = get_frame_number,
++	.wakeup = wakeup,
++#ifdef CONFIG_USB_DWC_OTG_LPM
++	.lpm_support = test_lpm_enabled,
++#endif
++	// current versions must always be self-powered
++};
++
++static int _setup(dwc_otg_pcd_t * pcd, uint8_t * bytes)
++{
++	int retval = -DWC_E_NOT_SUPPORTED;
++	if (gadget_wrapper->driver && gadget_wrapper->driver->setup) {
++		retval = gadget_wrapper->driver->setup(&gadget_wrapper->gadget,
++						       (struct usb_ctrlrequest
++							*)bytes);
++	}
++
++	if (retval == -ENOTSUPP) {
++		retval = -DWC_E_NOT_SUPPORTED;
++	} else if (retval < 0) {
++		retval = -DWC_E_INVALID;
++	}
++
++	return retval;
++}
++
++#ifdef DWC_EN_ISOC
++static int _isoc_complete(dwc_otg_pcd_t * pcd, void *ep_handle,
++			  void *req_handle, int proc_buf_num)
++{
++	int i, packet_count;
++	struct usb_gadget_iso_packet_descriptor *iso_packet = 0;
++	struct usb_iso_request *iso_req = req_handle;
++
++	if (proc_buf_num) {
++		iso_packet = iso_req->iso_packet_desc1;
++	} else {
++		iso_packet = iso_req->iso_packet_desc0;
++	}
++	packet_count =
++	    dwc_otg_pcd_get_iso_packet_count(pcd, ep_handle, req_handle);
++	for (i = 0; i < packet_count; ++i) {
++		int status;
++		int actual;
++		int offset;
++		dwc_otg_pcd_get_iso_packet_params(pcd, ep_handle, req_handle,
++						  i, &status, &actual, &offset);
++		switch (status) {
++		case -DWC_E_NO_DATA:
++			status = -ENODATA;
++			break;
++		default:
++			if (status) {
++				DWC_PRINTF("unknown status in isoc packet\n");
++			}
++
++		}
++		iso_packet[i].status = status;
++		iso_packet[i].offset = offset;
++		iso_packet[i].actual_length = actual;
++	}
++
++	iso_req->status = 0;
++	iso_req->process_buffer(ep_handle, iso_req);
++
++	return 0;
++}
++#endif /* DWC_EN_ISOC */
++
++#ifdef DWC_UTE_PER_IO
++/**
++ * Copy the contents of the extended request to the Linux usb_request's
++ * extended part and call the gadget's completion.
++ *
++ * @param pcd			Pointer to the pcd structure
++ * @param ep_handle		Void pointer to the usb_ep structure
++ * @param req_handle	Void pointer to the usb_request structure
++ * @param status		Request status returned from the portable logic
++ * @param ereq_port		Void pointer to the extended request structure
++ *						created in the portable part that contains the
++ *						results of the processed iso packets.
++ */
++static int _xisoc_complete(dwc_otg_pcd_t * pcd, void *ep_handle,
++			   void *req_handle, int32_t status, void *ereq_port)
++{
++	struct dwc_ute_iso_req_ext *ereqorg = NULL;
++	struct dwc_iso_xreq_port *ereqport = NULL;
++	struct dwc_ute_iso_packet_descriptor *desc_org = NULL;
++	int i;
++	struct usb_request *req;
++	//struct dwc_ute_iso_packet_descriptor *
++	//int status = 0;
++
++	req = (struct usb_request *)req_handle;
++	ereqorg = &req->ext_req;
++	ereqport = (struct dwc_iso_xreq_port *)ereq_port;
++	desc_org = ereqorg->per_io_frame_descs;
++
++	if (req && req->complete) {
++		/* Copy the request data from the portable logic to our request */
++		for (i = 0; i < ereqport->pio_pkt_count; i++) {
++			desc_org[i].actual_length =
++			    ereqport->per_io_frame_descs[i].actual_length;
++			desc_org[i].status =
++			    ereqport->per_io_frame_descs[i].status;
++		}
++
++		switch (status) {
++		case -DWC_E_SHUTDOWN:
++			req->status = -ESHUTDOWN;
++			break;
++		case -DWC_E_RESTART:
++			req->status = -ECONNRESET;
++			break;
++		case -DWC_E_INVALID:
++			req->status = -EINVAL;
++			break;
++		case -DWC_E_TIMEOUT:
++			req->status = -ETIMEDOUT;
++			break;
++		default:
++			req->status = status;
++		}
++
++		/* And call the gadget's completion */
++		req->complete(ep_handle, req);
++	}
++
++	return 0;
++}
++#endif /* DWC_UTE_PER_IO */
++
++static int _complete(dwc_otg_pcd_t * pcd, void *ep_handle,
++		     void *req_handle, int32_t status, uint32_t actual)
++{
++	struct usb_request *req = (struct usb_request *)req_handle;
++#if LINUX_VERSION_CODE > KERNEL_VERSION(2,6,27)
++	struct dwc_otg_pcd_ep *ep = NULL;
++#endif
++
++	if (req && req->complete) {
++		switch (status) {
++		case -DWC_E_SHUTDOWN:
++			req->status = -ESHUTDOWN;
++			break;
++		case -DWC_E_RESTART:
++			req->status = -ECONNRESET;
++			break;
++		case -DWC_E_INVALID:
++			req->status = -EINVAL;
++			break;
++		case -DWC_E_TIMEOUT:
++			req->status = -ETIMEDOUT;
++			break;
++		default:
++			req->status = status;
++
++		}
++
++		req->actual = actual;
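++		/*
++		 * Drop the PCD lock around the completion callback: the gadget
++		 * driver's complete() handler commonly queues the next request,
++		 * which re-enters the PCD and would otherwise deadlock on this
++		 * lock.
++		 */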
++		DWC_SPINUNLOCK(pcd->lock);
++		req->complete(ep_handle, req);
++		DWC_SPINLOCK(pcd->lock);
++	}
++#if LINUX_VERSION_CODE > KERNEL_VERSION(2,6,27)
++	ep = ep_from_handle(pcd, ep_handle);
++	if (GET_CORE_IF(pcd)->dma_enable) {
++                if (req->length != 0) {
++                        dwc_otg_device_t *otg_dev = gadget_wrapper->pcd->otg_dev;
++                        struct device *dev = NULL;
++
++                        if (otg_dev != NULL)
++                                  dev = DWC_OTG_OS_GETDEV(otg_dev->os_dep);
++
++			dma_unmap_single(dev, req->dma, req->length,
++                                         ep->dwc_ep.is_in ?
++                                                DMA_TO_DEVICE: DMA_FROM_DEVICE);
++                }
++	}
++#endif
++
++	return 0;
++}
++
++static int _connect(dwc_otg_pcd_t * pcd, int speed)
++{
++	gadget_wrapper->gadget.speed = speed;
++	return 0;
++}
++
++static int _disconnect(dwc_otg_pcd_t * pcd)
++{
++	if (gadget_wrapper->driver && gadget_wrapper->driver->disconnect) {
++		gadget_wrapper->driver->disconnect(&gadget_wrapper->gadget);
++	}
++	return 0;
++}
++
++static int _resume(dwc_otg_pcd_t * pcd)
++{
++	if (gadget_wrapper->driver && gadget_wrapper->driver->resume) {
++		gadget_wrapper->driver->resume(&gadget_wrapper->gadget);
++	}
++
++	return 0;
++}
++
++static int _suspend(dwc_otg_pcd_t * pcd)
++{
++	if (gadget_wrapper->driver && gadget_wrapper->driver->suspend) {
++		gadget_wrapper->driver->suspend(&gadget_wrapper->gadget);
++	}
++	return 0;
++}
++
++/**
++ * This function updates the otg values in the gadget structure.
++ */
++static int _hnp_changed(dwc_otg_pcd_t * pcd)
++{
++
++	if (!gadget_wrapper->gadget.is_otg)
++		return 0;
++
++	gadget_wrapper->gadget.b_hnp_enable = get_b_hnp_enable(pcd);
++	gadget_wrapper->gadget.a_hnp_support = get_a_hnp_support(pcd);
++	gadget_wrapper->gadget.a_alt_hnp_support = get_a_alt_hnp_support(pcd);
++	return 0;
++}
++
++static int _reset(dwc_otg_pcd_t * pcd)
++{
++	return 0;
++}
++
++#ifdef DWC_UTE_CFI
++static int _cfi_setup(dwc_otg_pcd_t * pcd, void *cfi_req)
++{
++	int retval = -DWC_E_INVALID;
++	if (gadget_wrapper->driver->cfi_feature_setup) {
++		retval =
++		    gadget_wrapper->driver->
++		    cfi_feature_setup(&gadget_wrapper->gadget,
++				      (struct cfi_usb_ctrlrequest *)cfi_req);
++	}
++
++	return retval;
++}
++#endif
++
++static const struct dwc_otg_pcd_function_ops fops = {
++	.complete = _complete,
++#ifdef DWC_EN_ISOC
++	.isoc_complete = _isoc_complete,
++#endif
++	.setup = _setup,
++	.disconnect = _disconnect,
++	.connect = _connect,
++	.resume = _resume,
++	.suspend = _suspend,
++	.hnp_changed = _hnp_changed,
++	.reset = _reset,
++#ifdef DWC_UTE_CFI
++	.cfi_setup = _cfi_setup,
++#endif
++#ifdef DWC_UTE_PER_IO
++	.xisoc_complete = _xisoc_complete,
++#endif
++};
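++/*
++ * These function ops are the portable PCD's way back into the Linux glue:
++ * for instance _setup() is invoked when a SETUP packet arrives and
++ * _complete() when a transfer finishes. pcd_init() below registers them via
++ * dwc_otg_pcd_start().
++ */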
++
++/**
++ * This function is the top level PCD interrupt handler.
++ */
++static irqreturn_t dwc_otg_pcd_irq(int irq, void *dev)
++{
++	dwc_otg_pcd_t *pcd = dev;
++	int32_t retval = IRQ_NONE;
++
++	retval = dwc_otg_pcd_handle_intr(pcd);
++	if (retval != 0) {
++		S3C2410X_CLEAR_EINTPEND();
++	}
++	return IRQ_RETVAL(retval);
++}
++
++/**
++ * This function initializes the usb_ep structures to their default
++ * state.
++ *
++ * @param d Pointer to the gadget_wrapper.
++ */
++void gadget_add_eps(struct gadget_wrapper *d)
++{
++	static const char *names[] = {
++
++		"ep0",
++		"ep1in",
++		"ep2in",
++		"ep3in",
++		"ep4in",
++		"ep5in",
++		"ep6in",
++		"ep7in",
++		"ep8in",
++		"ep9in",
++		"ep10in",
++		"ep11in",
++		"ep12in",
++		"ep13in",
++		"ep14in",
++		"ep15in",
++		"ep1out",
++		"ep2out",
++		"ep3out",
++		"ep4out",
++		"ep5out",
++		"ep6out",
++		"ep7out",
++		"ep8out",
++		"ep9out",
++		"ep10out",
++		"ep11out",
++		"ep12out",
++		"ep13out",
++		"ep14out",
++		"ep15out"
++	};
++
++	int i;
++	struct usb_ep *ep;
++	int8_t dev_endpoints;
++
++	DWC_DEBUGPL(DBG_PCDV, "%s\n", __func__);
++
++	INIT_LIST_HEAD(&d->gadget.ep_list);
++	d->gadget.ep0 = &d->ep0;
++	d->gadget.speed = USB_SPEED_UNKNOWN;
++
++	INIT_LIST_HEAD(&d->gadget.ep0->ep_list);
++
++	/**
++	 * Initialize the EP0 structure.
++	 */
++	ep = &d->ep0;
++
++	/* Init the usb_ep structure. */
++	ep->name = names[0];
++	ep->ops = (struct usb_ep_ops *)&dwc_otg_pcd_ep_ops;
++
++	/**
++	 * @todo NGS: What should the max packet size be set to
++	 * here?  Before EP type is set?
++	 */
++	ep->maxpacket = MAX_PACKET_SIZE;
++	dwc_otg_pcd_ep_enable(d->pcd, NULL, ep);
++
++	list_add_tail(&ep->ep_list, &d->gadget.ep_list);
++
++	/**
++	 * Initialize the EP structures.
++	 */
++	dev_endpoints = d->pcd->core_if->dev_if->num_in_eps;
++
++	for (i = 0; i < dev_endpoints; i++) {
++		ep = &d->in_ep[i];
++
++		/* Init the usb_ep structure. */
++		ep->name = names[d->pcd->in_ep[i].dwc_ep.num];
++		ep->ops = (struct usb_ep_ops *)&dwc_otg_pcd_ep_ops;
++
++		/**
++		 * @todo NGS: What should the max packet size be set to
++		 * here?  Before EP type is set?
++		 */
++		ep->maxpacket = MAX_PACKET_SIZE;
++		list_add_tail(&ep->ep_list, &d->gadget.ep_list);
++	}
++
++	dev_endpoints = d->pcd->core_if->dev_if->num_out_eps;
++
++	for (i = 0; i < dev_endpoints; i++) {
++		ep = &d->out_ep[i];
++
++		/* Init the usb_ep structure. */
++		ep->name = names[15 + d->pcd->out_ep[i].dwc_ep.num];
++		ep->ops = (struct usb_ep_ops *)&dwc_otg_pcd_ep_ops;
++
++		/**
++		 * @todo NGS: What should the max packet size be set to
++		 * here?  Before EP type is set?
++		 */
++		ep->maxpacket = MAX_PACKET_SIZE;
++
++		list_add_tail(&ep->ep_list, &d->gadget.ep_list);
++	}
++
++	/* Remove ep0 from the list; there is a separate ep0 pointer. */
++	list_del_init(&d->ep0.ep_list);
++
++	d->ep0.maxpacket = MAX_EP0_SIZE;
++}
++
++/**
++ * This function releases the Gadget device.
++ * It is required by device_unregister().
++ *
++ * @todo Should this do something?	Should it free the PCD?
++ */
++static void dwc_otg_pcd_gadget_release(struct device *dev)
++{
++	DWC_DEBUGPL(DBG_PCDV, "%s(%p)\n", __func__, dev);
++}
++
++static struct gadget_wrapper *alloc_wrapper(dwc_bus_dev_t *_dev)
++{
++	static char pcd_name[] = "dwc_otg_pcd";
++	dwc_otg_device_t *otg_dev = DWC_OTG_BUSDRVDATA(_dev);
++	struct gadget_wrapper *d;
++	int retval;
++
++	d = DWC_ALLOC(sizeof(*d));
++	if (d == NULL) {
++		return NULL;
++	}
++
++	memset(d, 0, sizeof(*d));
++
++	d->gadget.name = pcd_name;
++	d->pcd = otg_dev->pcd;
++
++#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,30)
++	strcpy(d->gadget.dev.bus_id, "gadget");
++#else
++	dev_set_name(&d->gadget.dev, "%s", "gadget");
++#endif
++
++	d->gadget.dev.parent = &_dev->dev;
++	d->gadget.dev.release = dwc_otg_pcd_gadget_release;
++	d->gadget.ops = &dwc_otg_pcd_ops;
++	d->gadget.max_speed = dwc_otg_pcd_is_dualspeed(otg_dev->pcd) ? USB_SPEED_HIGH:USB_SPEED_FULL;
++	d->gadget.is_otg = dwc_otg_pcd_is_otg(otg_dev->pcd);
++
++	d->driver = 0;
++	/* Register the gadget device */
++	retval = device_register(&d->gadget.dev);
++	if (retval != 0) {
++		DWC_ERROR("device_register failed\n");
++		DWC_FREE(d);
++		return NULL;
++	}
++
++	return d;
++}
++
++static void free_wrapper(struct gadget_wrapper *d)
++{
++	if (d->driver) {
++		/* should have been done already by driver model core */
++		DWC_WARN("driver '%s' is still registered\n",
++			 d->driver->driver.name);
++#ifdef CONFIG_USB_GADGET
++		usb_gadget_unregister_driver(d->driver);
++#endif
++	}
++
++	device_unregister(&d->gadget.dev);
++	DWC_FREE(d);
++}
++
++/**
++ * This function initializes the PCD portion of the driver.
++ *
++ */
++int pcd_init(dwc_bus_dev_t *_dev)
++{
++	dwc_otg_device_t *otg_dev = DWC_OTG_BUSDRVDATA(_dev);
++	int retval = 0;
++
++	DWC_DEBUGPL(DBG_PCDV, "%s(%p) otg_dev=%p\n", __func__, _dev, otg_dev);
++
++	otg_dev->pcd = dwc_otg_pcd_init(otg_dev->core_if);
++
++	if (!otg_dev->pcd) {
++		DWC_ERROR("dwc_otg_pcd_init failed\n");
++		return -ENOMEM;
++	}
++
++	otg_dev->pcd->otg_dev = otg_dev;
++	gadget_wrapper = alloc_wrapper(_dev);
++
++	/*
++	 * Initialize EP structures
++	 */
++	gadget_add_eps(gadget_wrapper);
++	/*
++	 * Set up the interrupt handler
++	 */
++#ifdef PLATFORM_INTERFACE
++	DWC_DEBUGPL(DBG_ANY, "registering handler for irq%d\n",
++                    platform_get_irq(_dev, fiq_enable ? 0 : 1));
++	retval = request_irq(platform_get_irq(_dev, fiq_enable ? 0 : 1), dwc_otg_pcd_irq,
++			     IRQF_SHARED, gadget_wrapper->gadget.name,
++			     otg_dev->pcd);
++	if (retval != 0) {
++		DWC_ERROR("request of irq%d failed\n",
++                          platform_get_irq(_dev, fiq_enable ? 0 : 1));
++		free_wrapper(gadget_wrapper);
++		return -EBUSY;
++	}
++#else
++	DWC_DEBUGPL(DBG_ANY, "registering handler for irq%d\n",
++                    _dev->irq);
++	retval = request_irq(_dev->irq, dwc_otg_pcd_irq,
++			     IRQF_SHARED | IRQF_DISABLED,
++			     gadget_wrapper->gadget.name, otg_dev->pcd);
++	if (retval != 0) {
++		DWC_ERROR("request of irq%d failed\n", _dev->irq);
++		free_wrapper(gadget_wrapper);
++		return -EBUSY;
++	}
++#endif
++
++	dwc_otg_pcd_start(gadget_wrapper->pcd, &fops);
++
++	return retval;
++}
++
++/**
++ * Cleanup the PCD.
++ */
++void pcd_remove(dwc_bus_dev_t *_dev)
++{
++	dwc_otg_device_t *otg_dev = DWC_OTG_BUSDRVDATA(_dev);
++	dwc_otg_pcd_t *pcd = otg_dev->pcd;
++
++	DWC_DEBUGPL(DBG_PCDV, "%s(%p) otg_dev %p\n", __func__, _dev, otg_dev);
++
++	/*
++	 * Free the IRQ
++	 */
++#ifdef PLATFORM_INTERFACE
++	free_irq(platform_get_irq(_dev, 0), pcd);
++#else
++	free_irq(_dev->irq, pcd);
++#endif
++	dwc_otg_pcd_remove(otg_dev->pcd);
++	free_wrapper(gadget_wrapper);
++	otg_dev->pcd = 0;
++}
++
++#endif /* DWC_HOST_ONLY */
+--- /dev/null
++++ b/drivers/usb/host/dwc_otg/dwc_otg_regs.h
+@@ -0,0 +1,2550 @@
++/* ==========================================================================
++ * $File: //dwh/usb_iip/dev/software/otg/linux/drivers/dwc_otg_regs.h $
++ * $Revision: #98 $
++ * $Date: 2012/08/10 $
++ * $Change: 2047372 $
++ *
++ * Synopsys HS OTG Linux Software Driver and documentation (hereinafter,
++ * "Software") is an Unsupported proprietary work of Synopsys, Inc. unless
++ * otherwise expressly agreed to in writing between Synopsys and you.
++ *
++ * The Software IS NOT an item of Licensed Software or Licensed Product under
++ * any End User Software License Agreement or Agreement for Licensed Product
++ * with Synopsys or any supplement thereto. You are permitted to use and
++ * redistribute this Software in source and binary forms, with or without
++ * modification, provided that redistributions of source code must retain this
++ * notice. You may not view, use, disclose, copy or distribute this file or
++ * any information contained herein except pursuant to this license grant from
++ * Synopsys. If you do not agree with this notice, including the disclaimer
++ * below, then you are not authorized to use the Software.
++ *
++ * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS" BASIS
++ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
++ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
++ * ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS BE LIABLE FOR ANY DIRECT,
++ * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
++ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
++ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
++ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
++ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
++ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
++ * DAMAGE.
++ * ========================================================================== */
++
++#ifndef __DWC_OTG_REGS_H__
++#define __DWC_OTG_REGS_H__
++
++#include "dwc_otg_core_if.h"
++
++/**
++ * @file
++ *
++ * This file contains the data structures for accessing the DWC_otg core registers.
++ *
++ * The application interfaces with the HS OTG core by reading from and
++ * writing to the Control and Status Register (CSR) space through the
++ * AHB Slave interface. These registers are 32 bits wide, and the
++ * addresses are 32-bit-block aligned.
++ * CSRs are classified as follows:
++ * - Core Global Registers
++ * - Device Mode Registers
++ * - Device Global Registers
++ * - Device Endpoint Specific Registers
++ * - Host Mode Registers
++ * - Host Global Registers
++ * - Host Port CSRs
++ * - Host Channel Specific Registers
++ *
++ * Only the Core Global registers can be accessed in both Device and
++ * Host modes. When the HS OTG core is operating in one mode, either
++ * Device or Host, the application must not access registers from the
++ * other mode. When the core switches from one mode to another, the
++ * registers in the new mode of operation must be reprogrammed as they
++ * would be after a power-on reset.
++ */
++
++/****************************************************************************/
++/** DWC_otg Core registers .
++ * The dwc_otg_core_global_regs structure defines the size
++ * and relative field offsets for the Core Global registers.
++ */
++typedef struct dwc_otg_core_global_regs {
++	/** OTG Control and Status Register.  <i>Offset: 000h</i> */
++	volatile uint32_t gotgctl;
++	/** OTG Interrupt Register.	 <i>Offset: 004h</i> */
++	volatile uint32_t gotgint;
++	/**Core AHB Configuration Register.	 <i>Offset: 008h</i> */
++	volatile uint32_t gahbcfg;
++
++#define DWC_GLBINTRMASK		0x0001
++#define DWC_DMAENABLE		0x0020
++#define DWC_NPTXEMPTYLVL_EMPTY	0x0080
++#define DWC_NPTXEMPTYLVL_HALFEMPTY	0x0000
++#define DWC_PTXEMPTYLVL_EMPTY	0x0100
++#define DWC_PTXEMPTYLVL_HALFEMPTY	0x0000
++
++	/**Core USB Configuration Register.	 <i>Offset: 00Ch</i> */
++	volatile uint32_t gusbcfg;
++	/**Core Reset Register.	 <i>Offset: 010h</i> */
++	volatile uint32_t grstctl;
++	/**Core Interrupt Register.	 <i>Offset: 014h</i> */
++	volatile uint32_t gintsts;
++	/**Core Interrupt Mask Register.  <i>Offset: 018h</i> */
++	volatile uint32_t gintmsk;
++	/**Receive Status Queue Read Register (Read Only).	<i>Offset: 01Ch</i> */
++	volatile uint32_t grxstsr;
++	/**Receive Status Queue Read & POP Register (Read Only).  <i>Offset: 020h</i>*/
++	volatile uint32_t grxstsp;
++	/**Receive FIFO Size Register.	<i>Offset: 024h</i> */
++	volatile uint32_t grxfsiz;
++	/**Non Periodic Transmit FIFO Size Register.  <i>Offset: 028h</i> */
++	volatile uint32_t gnptxfsiz;
++	/**Non Periodic Transmit FIFO/Queue Status Register (Read
++	 * Only). <i>Offset: 02Ch</i> */
++	volatile uint32_t gnptxsts;
++	/**I2C Access Register.	 <i>Offset: 030h</i> */
++	volatile uint32_t gi2cctl;
++	/**PHY Vendor Control Register.	 <i>Offset: 034h</i> */
++	volatile uint32_t gpvndctl;
++	/**General Purpose Input/Output Register.  <i>Offset: 038h</i> */
++	volatile uint32_t ggpio;
++	/**User ID Register.  <i>Offset: 03Ch</i> */
++	volatile uint32_t guid;
++	/**Synopsys ID Register (Read Only).  <i>Offset: 040h</i> */
++	volatile uint32_t gsnpsid;
++	/**User HW Config1 Register (Read Only).  <i>Offset: 044h</i> */
++	volatile uint32_t ghwcfg1;
++	/**User HW Config2 Register (Read Only).  <i>Offset: 048h</i> */
++	volatile uint32_t ghwcfg2;
++#define DWC_SLAVE_ONLY_ARCH 0
++#define DWC_EXT_DMA_ARCH 1
++#define DWC_INT_DMA_ARCH 2
++
++#define DWC_MODE_HNP_SRP_CAPABLE	0
++#define DWC_MODE_SRP_ONLY_CAPABLE	1
++#define DWC_MODE_NO_HNP_SRP_CAPABLE		2
++#define DWC_MODE_SRP_CAPABLE_DEVICE		3
++#define DWC_MODE_NO_SRP_CAPABLE_DEVICE	4
++#define DWC_MODE_SRP_CAPABLE_HOST	5
++#define DWC_MODE_NO_SRP_CAPABLE_HOST	6
++
++	/**User HW Config3 Register (Read Only).  <i>Offset: 04Ch</i> */
++	volatile uint32_t ghwcfg3;
++	/**User HW Config4 Register (Read Only).  <i>Offset: 050h</i>*/
++	volatile uint32_t ghwcfg4;
++	/** Core LPM Configuration register <i>Offset: 054h</i>*/
++	volatile uint32_t glpmcfg;
++	/** Global PowerDn Register <i>Offset: 058h</i> */
++	volatile uint32_t gpwrdn;
++	/** Global DFIFO SW Config Register  <i>Offset: 05Ch</i> */
++	volatile uint32_t gdfifocfg;
++	/** ADP Control Register  <i>Offset: 060h</i> */
++	volatile uint32_t adpctl;
++	/** Reserved  <i>Offset: 064h-0FFh</i> */
++	volatile uint32_t reserved39[39];
++	/** Host Periodic Transmit FIFO Size Register. <i>Offset: 100h</i> */
++	volatile uint32_t hptxfsiz;
++	/** Device Periodic Transmit FIFO#n Register if dedicated fifos are disabled,
++		otherwise Device Transmit FIFO#n Register.
++	 * <i>Offset: 104h + (FIFO_Number-1)*04h, 1 <= FIFO Number <= 15 (1<=n<=15).</i> */
++	volatile uint32_t dtxfsiz[15];
++} dwc_otg_core_global_regs_t;
++
++/**
++ * This union represents the bit fields of the Core OTG Control
++ * and Status Register (GOTGCTL).  Set the bits using the bit
++ * fields then write the <i>d32</i> value to the register.
++ */
++typedef union gotgctl_data {
++	/** raw register data */
++	uint32_t d32;
++	/** register bits */
++	struct {
++		unsigned sesreqscs:1;
++		unsigned sesreq:1;
++		unsigned vbvalidoven:1;
++		unsigned vbvalidovval:1;
++		unsigned avalidoven:1;
++		unsigned avalidovval:1;
++		unsigned bvalidoven:1;
++		unsigned bvalidovval:1;
++		unsigned hstnegscs:1;
++		unsigned hnpreq:1;
++		unsigned hstsethnpen:1;
++		unsigned devhnpen:1;
++		unsigned reserved12_15:4;
++		unsigned conidsts:1;
++		unsigned dbnctime:1;
++		unsigned asesvld:1;
++		unsigned bsesvld:1;
++		unsigned otgver:1;
++		unsigned reserved1:1;
++		unsigned multvalidbc:5;
++		unsigned chirpen:1;
++		unsigned reserved28_31:4;
++	} b;
++} gotgctl_data_t;
++
++/**
++ * This union represents the bit fields of the Core OTG Interrupt Register
++ * (GOTGINT).  Set/clear the bits using the bit fields then write the <i>d32</i>
++ * value to the register.
++ */
++typedef union gotgint_data {
++	/** raw register data */
++	uint32_t d32;
++	/** register bits */
++	struct {
++		/** Current Mode */
++		unsigned reserved0_1:2;
++
++		/** Session End Detected */
++		unsigned sesenddet:1;
++
++		unsigned reserved3_7:5;
++
++		/** Session Request Success Status Change */
++		unsigned sesreqsucstschng:1;
++		/** Host Negotiation Success Status Change */
++		unsigned hstnegsucstschng:1;
++
++		unsigned reserved10_16:7;
++
++		/** Host Negotiation Detected */
++		unsigned hstnegdet:1;
++		/** A-Device Timeout Change */
++		unsigned adevtoutchng:1;
++		/** Debounce Done */
++		unsigned debdone:1;
++		/** Multi-Valued input changed */
++		unsigned mvic:1;
++
++		unsigned reserved31_21:11;
++
++	} b;
++} gotgint_data_t;
++
++/**
++ * This union represents the bit fields of the Core AHB Configuration
++ * Register (GAHBCFG). Set/clear the bits using the bit fields then
++ * write the <i>d32</i> value to the register.
++ */
++typedef union gahbcfg_data {
++	/** raw register data */
++	uint32_t d32;
++	/** register bits */
++	struct {
++		unsigned glblintrmsk:1;
++#define DWC_GAHBCFG_GLBINT_ENABLE		1
++
++		unsigned hburstlen:4;
++#define DWC_GAHBCFG_INT_DMA_BURST_SINGLE	0
++#define DWC_GAHBCFG_INT_DMA_BURST_INCR		1
++#define DWC_GAHBCFG_INT_DMA_BURST_INCR4		3
++#define DWC_GAHBCFG_INT_DMA_BURST_INCR8		5
++#define DWC_GAHBCFG_INT_DMA_BURST_INCR16	7
++
++		unsigned dmaenable:1;
++#define DWC_GAHBCFG_DMAENABLE			1
++		unsigned reserved:1;
++		unsigned nptxfemplvl_txfemplvl:1;
++		unsigned ptxfemplvl:1;
++#define DWC_GAHBCFG_TXFEMPTYLVL_EMPTY		1
++#define DWC_GAHBCFG_TXFEMPTYLVL_HALFEMPTY	0
++		unsigned reserved9_20:12;
++		unsigned remmemsupp:1;
++		unsigned notialldmawrit:1;
++		unsigned ahbsingle:1;
++		unsigned reserved24_31:8;
++	} b;
++} gahbcfg_data_t;
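++
++/*
++ * Illustrative sketch only (not part of the driver): the gahbcfg_data_t
++ * union above is meant to be used by reading the raw register into .d32,
++ * adjusting the fields through .b with the DWC_GAHBCFG_* values, and
++ * writing .d32 back.  `global_regs` is an assumed pointer to the
++ * memory-mapped dwc_otg_core_global_regs_t block defined earlier.
++ */
++static inline void example_gahbcfg_enable_int_dma(dwc_otg_core_global_regs_t *global_regs)
++{
++	gahbcfg_data_t ahbcfg;
++
++	ahbcfg.d32 = global_regs->gahbcfg;			/* read current value */
++	ahbcfg.b.hburstlen = DWC_GAHBCFG_INT_DMA_BURST_INCR4;	/* INCR4 AHB bursts */
++	ahbcfg.b.dmaenable = DWC_GAHBCFG_DMAENABLE;		/* use internal DMA */
++	ahbcfg.b.glblintrmsk = DWC_GAHBCFG_GLBINT_ENABLE;	/* unmask the global interrupt */
++	global_regs->gahbcfg = ahbcfg.d32;			/* write the value back */
++}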
++
++/**
++ * This union represents the bit fields of the Core USB Configuration
++ * Register (GUSBCFG). Set the bits using the bit fields then write
++ * the <i>d32</i> value to the register.
++ */
++typedef union gusbcfg_data {
++	/** raw register data */
++	uint32_t d32;
++	/** register bits */
++	struct {
++		unsigned toutcal:3;
++		unsigned phyif:1;
++		unsigned ulpi_utmi_sel:1;
++		unsigned fsintf:1;
++		unsigned physel:1;
++		unsigned ddrsel:1;
++		unsigned srpcap:1;
++		unsigned hnpcap:1;
++		unsigned usbtrdtim:4;
++		unsigned reserved1:1;
++		unsigned phylpwrclksel:1;
++		unsigned otgutmifssel:1;
++		unsigned ulpi_fsls:1;
++		unsigned ulpi_auto_res:1;
++		unsigned ulpi_clk_sus_m:1;
++		unsigned ulpi_ext_vbus_drv:1;
++		unsigned ulpi_int_vbus_indicator:1;
++		unsigned term_sel_dl_pulse:1;
++		unsigned indicator_complement:1;
++		unsigned indicator_pass_through:1;
++		unsigned ulpi_int_prot_dis:1;
++		unsigned ic_usb_cap:1;
++		unsigned ic_traffic_pull_remove:1;
++		unsigned tx_end_delay:1;
++		unsigned force_host_mode:1;
++		unsigned force_dev_mode:1;
++		unsigned reserved31:1;
++	} b;
++} gusbcfg_data_t;
++
++/**
++ * This union represents the bit fields of the Core Reset Register
++ * (GRSTCTL).  Set/clear the bits using the bit fields then write the
++ * <i>d32</i> value to the register.
++ */
++typedef union grstctl_data {
++	/** raw register data */
++	uint32_t d32;
++	/** register bits */
++	struct {
++		/** Core Soft Reset (CSftRst) (Device and Host)
++		 *
++		 * The application can flush the control logic in the
++		 * entire core using this bit. This bit resets the
++		 * pipelines in the AHB Clock domain as well as the
++		 * PHY Clock domain.
++		 *
++		 * The state machines are reset to an IDLE state, the
++		 * control bits in the CSRs are cleared, all the
++		 * transmit FIFOs and the receive FIFO are flushed.
++		 *
++		 * The status mask bits that control the generation of
++		 * the interrupt are cleared, to clear the
++		 * interrupt. The interrupt status bits are not
++		 * cleared, so the application can get the status of
++		 * any events that occurred in the core after it has
++		 * set this bit.
++		 *
++		 * Any transactions on the AHB are terminated as soon
++		 * as possible following the protocol. Any
++		 * transactions on the USB are terminated immediately.
++		 *
++		 * The configuration settings in the CSRs are
++		 * unchanged, so the software doesn't have to
++		 * reprogram these registers (Device
++		 * Configuration/Host Configuration/Core System
++		 * Configuration/Core PHY Configuration).
++		 *
++		 * The application can write to this bit, any time it
++		 * wants to reset the core. This is a self clearing
++		 * bit and the core clears this bit after all the
++		 * necessary logic is reset in the core, which may
++		 * take several clocks, depending on the current state
++		 * of the core.
++		 */
++		unsigned csftrst:1;
++		/** Hclk Soft Reset
++		 *
++		 * The application uses this bit to reset the control logic in
++		 * the AHB clock domain. Only AHB clock domain pipelines are
++		 * reset.
++		 */
++		unsigned hsftrst:1;
++		/** Host Frame Counter Reset (Host Only)<br>
++		 *
++		 * The application can reset the (micro)frame number
++		 * counter inside the core, using this bit. When the
++		 * (micro)frame counter is reset, the subsequent SOF
++		 * sent out by the core, will have a (micro)frame
++		 * number of 0.
++		 */
++		unsigned hstfrm:1;
++		/** In Token Sequence Learning Queue Flush
++		 * (INTknQFlsh) (Device Only)
++		 */
++		unsigned intknqflsh:1;
++		/** RxFIFO Flush (RxFFlsh) (Device and Host)
++		 *
++		 * The application can flush the entire Receive FIFO
++		 * using this bit. The application must first
++		 * ensure that the core is not in the middle of a
++		 * transaction. The application should write into
++		 * this bit, only after making sure that neither the
++		 * DMA engine is reading from the RxFIFO nor the MAC
++		 * is writing the data in to the FIFO. The
++		 * application should wait until the bit is cleared
++		 * before performing any other operations. This bit
++		 * will take 8 clocks (slowest of PHY or AHB clock)
++		 * to clear.
++		 */
++		unsigned rxfflsh:1;
++		/** TxFIFO Flush (TxFFlsh) (Device and Host).
++		 *
++		 * This bit is used to selectively flush a single or
++		 * all transmit FIFOs. The application must first
++		 * ensure that the core is not in the middle of a
++		 * transaction. The application should write into
++		 * this bit, only after making sure that neither the
++		 * DMA engine is writing into the TxFIFO nor the MAC
++		 * is reading the data out of the FIFO. The
++		 * application should wait until the core clears this
++		 * bit, before performing any operations. This bit
++		 * will take 8 clocks (slowest of PHY or AHB clock)
++		 * to clear.
++		 */
++		unsigned txfflsh:1;
++
++		/** TxFIFO Number (TxFNum) (Device and Host).
++		 *
++		 * This is the FIFO number which needs to be flushed,
++		 * using the TxFIFO Flush bit. This field should not
++		 * be changed until the TxFIFO Flush bit is cleared by
++		 * the core.
++		 *	 - 0x0 : Non Periodic TxFIFO Flush
++		 *	 - 0x1 : Periodic TxFIFO #1 Flush in device mode
++		 *	   or Periodic TxFIFO in host mode
++		 *	 - 0x2 : Periodic TxFIFO #2 Flush in device mode.
++		 *	 - ...
++		 *	 - 0xF : Periodic TxFIFO #15 Flush in device mode
++		 *	 - 0x10: Flush all the Transmit NonPeriodic and
++		 *	   Transmit Periodic FIFOs in the core
++		 */
++		unsigned txfnum:5;
++		/** Reserved */
++		unsigned reserved11_29:19;
++		/** DMA Request Signal.  Indicates that a DMA request is in
++		 * progress. Used for debug purposes. */
++		unsigned dmareq:1;
++		/** AHB Master Idle.  Indicates the AHB Master State
++		 * Machine is in IDLE condition. */
++		unsigned ahbidle:1;
++	} b;
++} grstctl_t;
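++
++/*
++ * Minimal sketch (assumptions, not driver code) of the CSftRst usage
++ * described above: set the self-clearing bit and poll until the core
++ * clears it again.  `global_regs` and the loop bound are illustrative.
++ */
++static inline int example_core_soft_reset(dwc_otg_core_global_regs_t *global_regs)
++{
++	grstctl_t greset = { .d32 = 0 };
++	int count = 0;
++
++	greset.b.csftrst = 1;			/* request the core soft reset */
++	global_regs->grstctl = greset.d32;
++
++	do {					/* self-clearing: poll until done */
++		greset.d32 = global_regs->grstctl;
++		if (++count > 10000)
++			return -1;		/* core never cleared the bit */
++	} while (greset.b.csftrst);
++
++	return 0;
++}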
++
++/**
++ * This union represents the bit fields of the Core Interrupt Mask
++ * Register (GINTMSK). Set/clear the bits using the bit fields then
++ * write the <i>d32</i> value to the register.
++ */
++typedef union gintmsk_data {
++	/** raw register data */
++	uint32_t d32;
++	/** register bits */
++	struct {
++		unsigned reserved0:1;
++		unsigned modemismatch:1;
++		unsigned otgintr:1;
++		unsigned sofintr:1;
++		unsigned rxstsqlvl:1;
++		unsigned nptxfempty:1;
++		unsigned ginnakeff:1;
++		unsigned goutnakeff:1;
++		unsigned ulpickint:1;
++		unsigned i2cintr:1;
++		unsigned erlysuspend:1;
++		unsigned usbsuspend:1;
++		unsigned usbreset:1;
++		unsigned enumdone:1;
++		unsigned isooutdrop:1;
++		unsigned eopframe:1;
++		unsigned restoredone:1;
++		unsigned epmismatch:1;
++		unsigned inepintr:1;
++		unsigned outepintr:1;
++		unsigned incomplisoin:1;
++		unsigned incomplisoout:1;
++		unsigned fetsusp:1;
++		unsigned resetdet:1;
++		unsigned portintr:1;
++		unsigned hcintr:1;
++		unsigned ptxfempty:1;
++		unsigned lpmtranrcvd:1;
++		unsigned conidstschng:1;
++		unsigned disconnect:1;
++		unsigned sessreqintr:1;
++		unsigned wkupintr:1;
++	} b;
++} gintmsk_data_t;
++/**
++ * This union represents the bit fields of the Core Interrupt Register
++ * (GINTSTS).  Set/clear the bits using the bit fields then write the
++ * <i>d32</i> value to the register.
++ */
++typedef union gintsts_data {
++	/** raw register data */
++	uint32_t d32;
++#define DWC_SOF_INTR_MASK 0x0008
++	/** register bits */
++	struct {
++#define DWC_HOST_MODE 1
++		unsigned curmode:1;
++		unsigned modemismatch:1;
++		unsigned otgintr:1;
++		unsigned sofintr:1;
++		unsigned rxstsqlvl:1;
++		unsigned nptxfempty:1;
++		unsigned ginnakeff:1;
++		unsigned goutnakeff:1;
++		unsigned ulpickint:1;
++		unsigned i2cintr:1;
++		unsigned erlysuspend:1;
++		unsigned usbsuspend:1;
++		unsigned usbreset:1;
++		unsigned enumdone:1;
++		unsigned isooutdrop:1;
++		unsigned eopframe:1;
++		unsigned restoredone:1;
++		unsigned epmismatch:1;
++		unsigned inepint:1;
++		unsigned outepintr:1;
++		unsigned incomplisoin:1;
++		unsigned incomplisoout:1;
++		unsigned fetsusp:1;
++		unsigned resetdet:1;
++		unsigned portintr:1;
++		unsigned hcintr:1;
++		unsigned ptxfempty:1;
++		unsigned lpmtranrcvd:1;
++		unsigned conidstschng:1;
++		unsigned disconnect:1;
++		unsigned sessreqintr:1;
++		unsigned wkupintr:1;
++	} b;
++} gintsts_data_t;
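++
++/*
++ * Illustrative sketch of the read/mask/acknowledge pattern suggested by
++ * the GINTSTS/GINTMSK comments above: read both registers, act only on
++ * unmasked bits, then write the handled bit back to GINTSTS.  The
++ * write-1-to-clear behaviour and `global_regs` pointer are assumptions
++ * for this example, not statements from this header.
++ */
++static inline void example_handle_sof_intr(dwc_otg_core_global_regs_t *global_regs)
++{
++	gintsts_data_t gintsts;
++	gintmsk_data_t gintmsk;
++
++	gintsts.d32 = global_regs->gintsts;
++	gintmsk.d32 = global_regs->gintmsk;
++
++	if (gintsts.b.sofintr && gintmsk.b.sofintr) {
++		/* ... handle the Start-of-Frame interrupt here ... */
++
++		gintsts.d32 = 0;
++		gintsts.b.sofintr = 1;		/* acknowledge only this status bit */
++		global_regs->gintsts = gintsts.d32;
++	}
++}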
++
++/**
++ * This union represents the bit fields in the Device Receive Status Read and
++ * Pop Registers (GRXSTSR, GRXSTSP) Read the register into the <i>d32</i>
++ * element then read out the bits using the <i>b</i>it elements.
++ */
++typedef union device_grxsts_data {
++	/** raw register data */
++	uint32_t d32;
++	/** register bits */
++	struct {
++		unsigned epnum:4;
++		unsigned bcnt:11;
++		unsigned dpid:2;
++
++#define DWC_STS_DATA_UPDT		0x2	// OUT Data Packet
++#define DWC_STS_XFER_COMP		0x3	// OUT Data Transfer Complete
++
++#define DWC_DSTS_GOUT_NAK		0x1	// Global OUT NAK
++#define DWC_DSTS_SETUP_COMP		0x4	// Setup Phase Complete
++#define DWC_DSTS_SETUP_UPDT 0x6	// SETUP Packet
++		unsigned pktsts:4;
++		unsigned fn:4;
++		unsigned reserved25_31:7;
++	} b;
++} device_grxsts_data_t;
++
++/**
++ * This union represents the bit fields in the Host Receive Status Read and
++ * Pop Registers (GRXSTSR, GRXSTSP) Read the register into the <i>d32</i>
++ * element then read out the bits using the <i>b</i>it elements.
++ */
++typedef union host_grxsts_data {
++	/** raw register data */
++	uint32_t d32;
++	/** register bits */
++	struct {
++		unsigned chnum:4;
++		unsigned bcnt:11;
++		unsigned dpid:2;
++
++		unsigned pktsts:4;
++#define DWC_GRXSTS_PKTSTS_IN			  0x2
++#define DWC_GRXSTS_PKTSTS_IN_XFER_COMP	  0x3
++#define DWC_GRXSTS_PKTSTS_DATA_TOGGLE_ERR 0x5
++#define DWC_GRXSTS_PKTSTS_CH_HALTED		  0x7
++
++		unsigned reserved21_31:11;
++	} b;
++} host_grxsts_data_t;
++
++/**
++ * This union represents the bit fields in the FIFO Size Registers (HPTXFSIZ,
++ * GNPTXFSIZ, DPTXFSIZn, DIEPTXFn). Read the register into the <i>d32</i> element
++ * then read out the bits using the <i>b</i>it elements.
++ */
++typedef union fifosize_data {
++	/** raw register data */
++	uint32_t d32;
++	/** register bits */
++	struct {
++		unsigned startaddr:16;
++		unsigned depth:16;
++	} b;
++} fifosize_data_t;
++
++/**
++ * This union represents the bit fields in the Non-Periodic Transmit
++ * FIFO/Queue Status Register (GNPTXSTS). Read the register into the
++ * <i>d32</i> element then read out the bits using the <i>b</i>it
++ * elements.
++ */
++typedef union gnptxsts_data {
++	/** raw register data */
++	uint32_t d32;
++	/** register bits */
++	struct {
++		unsigned nptxfspcavail:16;
++		unsigned nptxqspcavail:8;
++		/** Top of the Non-Periodic Transmit Request Queue
++		 *	- bit 24 - Terminate (Last entry for the selected
++		 *	  channel/EP)
++		 *	- bits 26:25 - Token Type
++		 *	  - 2'b00 - IN/OUT
++		 *	  - 2'b01 - Zero Length OUT
++		 *	  - 2'b10 - PING/Complete Split
++		 *	  - 2'b11 - Channel Halt
++		 *	- bits 30:27 - Channel/EP Number
++		 */
++		unsigned nptxqtop_terminate:1;
++		unsigned nptxqtop_token:2;
++		unsigned nptxqtop_chnep:4;
++		unsigned reserved:1;
++	} b;
++} gnptxsts_data_t;
++
++/**
++ * This union represents the bit fields in the Transmit
++ * FIFO Status Register (DTXFSTS). Read the register into the
++ * <i>d32</i> element then read out the bits using the <i>b</i>it
++ * elements.
++ */
++typedef union dtxfsts_data {
++	/** raw register data */
++	uint32_t d32;
++	/** register bits */
++	struct {
++		unsigned txfspcavail:16;
++		unsigned reserved:16;
++	} b;
++} dtxfsts_data_t;
++
++/**
++ * This union represents the bit fields in the I2C Control Register
++ * (I2CCTL). Read the register into the <i>d32</i> element then read out the
++ * bits using the <i>b</i>it elements.
++ */
++typedef union gi2cctl_data {
++	/** raw register data */
++	uint32_t d32;
++	/** register bits */
++	struct {
++		unsigned rwdata:8;
++		unsigned regaddr:8;
++		unsigned addr:7;
++		unsigned i2cen:1;
++		unsigned ack:1;
++		unsigned i2csuspctl:1;
++		unsigned i2cdevaddr:2;
++		unsigned i2cdatse0:1;
++		unsigned reserved:1;
++		unsigned rw:1;
++		unsigned bsydne:1;
++	} b;
++} gi2cctl_data_t;
++
++/**
++ * This union represents the bit fields in the PHY Vendor Control Register
++ * (GPVNDCTL). Read the register into the <i>d32</i> element then read out the
++ * bits using the <i>b</i>it elements.
++ */
++typedef union gpvndctl_data {
++	/** raw register data */
++	uint32_t d32;
++	/** register bits */
++	struct {
++		unsigned regdata:8;
++		unsigned vctrl:8;
++		unsigned regaddr16_21:6;
++		unsigned regwr:1;
++		unsigned reserved23_24:2;
++		unsigned newregreq:1;
++		unsigned vstsbsy:1;
++		unsigned vstsdone:1;
++		unsigned reserved28_30:3;
++		unsigned disulpidrvr:1;
++	} b;
++} gpvndctl_data_t;
++
++/**
++ * This union represents the bit fields in the General Purpose
++ * Input/Output Register (GGPIO).
++ * Read the register into the <i>d32</i> element then read out the
++ * bits using the <i>b</i>it elements.
++ */
++typedef union ggpio_data {
++	/** raw register data */
++	uint32_t d32;
++	/** register bits */
++	struct {
++		unsigned gpi:16;
++		unsigned gpo:16;
++	} b;
++} ggpio_data_t;
++
++/**
++ * This union represents the bit fields in the User ID Register
++ * (GUID). Read the register into the <i>d32</i> element then read out the
++ * bits using the <i>b</i>it elements.
++ */
++typedef union guid_data {
++	/** raw register data */
++	uint32_t d32;
++	/** register bits */
++	struct {
++		unsigned rwdata:32;
++	} b;
++} guid_data_t;
++
++/**
++ * This union represents the bit fields in the Synopsys ID Register
++ * (GSNPSID). Read the register into the <i>d32</i> element then read out the
++ * bits using the <i>b</i>it elements.
++ */
++typedef union gsnpsid_data {
++	/** raw register data */
++	uint32_t d32;
++	/** register bits */
++	struct {
++		unsigned rwdata:32;
++	} b;
++} gsnpsid_data_t;
++
++/**
++ * This union represents the bit fields in the User HW Config1
++ * Register.  Read the register into the <i>d32</i> element then read
++ * out the bits using the <i>b</i>it elements.
++ */
++typedef union hwcfg1_data {
++	/** raw register data */
++	uint32_t d32;
++	/** register bits */
++	struct {
++		unsigned ep_dir0:2;
++		unsigned ep_dir1:2;
++		unsigned ep_dir2:2;
++		unsigned ep_dir3:2;
++		unsigned ep_dir4:2;
++		unsigned ep_dir5:2;
++		unsigned ep_dir6:2;
++		unsigned ep_dir7:2;
++		unsigned ep_dir8:2;
++		unsigned ep_dir9:2;
++		unsigned ep_dir10:2;
++		unsigned ep_dir11:2;
++		unsigned ep_dir12:2;
++		unsigned ep_dir13:2;
++		unsigned ep_dir14:2;
++		unsigned ep_dir15:2;
++	} b;
++} hwcfg1_data_t;
++
++/**
++ * This union represents the bit fields in the User HW Config2
++ * Register.  Read the register into the <i>d32</i> element then read
++ * out the bits using the <i>b</i>it elements.
++ */
++typedef union hwcfg2_data {
++	/** raw register data */
++	uint32_t d32;
++	/** register bits */
++	struct {
++		/* GHWCFG2 */
++		unsigned op_mode:3;
++#define DWC_HWCFG2_OP_MODE_HNP_SRP_CAPABLE_OTG 0
++#define DWC_HWCFG2_OP_MODE_SRP_ONLY_CAPABLE_OTG 1
++#define DWC_HWCFG2_OP_MODE_NO_HNP_SRP_CAPABLE_OTG 2
++#define DWC_HWCFG2_OP_MODE_SRP_CAPABLE_DEVICE 3
++#define DWC_HWCFG2_OP_MODE_NO_SRP_CAPABLE_DEVICE 4
++#define DWC_HWCFG2_OP_MODE_SRP_CAPABLE_HOST 5
++#define DWC_HWCFG2_OP_MODE_NO_SRP_CAPABLE_HOST 6
++
++		unsigned architecture:2;
++		unsigned point2point:1;
++		unsigned hs_phy_type:2;
++#define DWC_HWCFG2_HS_PHY_TYPE_NOT_SUPPORTED 0
++#define DWC_HWCFG2_HS_PHY_TYPE_UTMI 1
++#define DWC_HWCFG2_HS_PHY_TYPE_ULPI 2
++#define DWC_HWCFG2_HS_PHY_TYPE_UTMI_ULPI 3
++
++		unsigned fs_phy_type:2;
++		unsigned num_dev_ep:4;
++		unsigned num_host_chan:4;
++		unsigned perio_ep_supported:1;
++		unsigned dynamic_fifo:1;
++		unsigned multi_proc_int:1;
++		unsigned reserved21:1;
++		unsigned nonperio_tx_q_depth:2;
++		unsigned host_perio_tx_q_depth:2;
++		unsigned dev_token_q_depth:5;
++		unsigned otg_enable_ic_usb:1;
++	} b;
++} hwcfg2_data_t;
++
++/**
++ * This union represents the bit fields in the User HW Config3
++ * Register.  Read the register into the <i>d32</i> element then read
++ * out the bits using the <i>b</i>it elements.
++ */
++typedef union hwcfg3_data {
++	/** raw register data */
++	uint32_t d32;
++	/** register bits */
++	struct {
++		/* GHWCFG3 */
++		unsigned xfer_size_cntr_width:4;
++		unsigned packet_size_cntr_width:3;
++		unsigned otg_func:1;
++		unsigned i2c:1;
++		unsigned vendor_ctrl_if:1;
++		unsigned optional_features:1;
++		unsigned synch_reset_type:1;
++		unsigned adp_supp:1;
++		unsigned otg_enable_hsic:1;
++		unsigned bc_support:1;
++		unsigned otg_lpm_en:1;
++		unsigned dfifo_depth:16;
++	} b;
++} hwcfg3_data_t;
++
++/**
++ * This union represents the bit fields in the User HW Config4
++ * Register.  Read the register into the <i>d32</i> element then read
++ * out the bits using the <i>b</i>it elements.
++ */
++typedef union hwcfg4_data {
++	/** raw register data */
++	uint32_t d32;
++	/** register bits */
++	struct {
++		unsigned num_dev_perio_in_ep:4;
++		unsigned power_optimiz:1;
++		unsigned min_ahb_freq:1;
++		unsigned hiber:1;
++		unsigned xhiber:1;
++		unsigned reserved:6;
++		unsigned utmi_phy_data_width:2;
++		unsigned num_dev_mode_ctrl_ep:4;
++		unsigned iddig_filt_en:1;
++		unsigned vbus_valid_filt_en:1;
++		unsigned a_valid_filt_en:1;
++		unsigned b_valid_filt_en:1;
++		unsigned session_end_filt_en:1;
++		unsigned ded_fifo_en:1;
++		unsigned num_in_eps:4;
++		unsigned desc_dma:1;
++		unsigned desc_dma_dyn:1;
++	} b;
++} hwcfg4_data_t;
++
++/**
++ * This union represents the bit fields of the Core LPM Configuration
++ * Register (GLPMCFG). Set the bits using bit fields then write
++ * the <i>d32</i> value to the register.
++ */
++typedef union glpmctl_data {
++	/** raw register data */
++	uint32_t d32;
++	/** register bits */
++	struct {
++		/** LPM-Capable (LPMCap) (Device and Host)
++		 * The application uses this bit to control
++		 * the DWC_otg core LPM capabilities.
++		 */
++		unsigned lpm_cap_en:1;
++		/** LPM response programmed by application (AppL1Res) (Device)
++		 * Handshake response to LPM token pre-programmed
++		 * by device application software.
++		 */
++		unsigned appl_resp:1;
++		/** Host Initiated Resume Duration (HIRD) (Device and Host)
++		 * In Host mode this field indicates the value of HIRD
++		 * to be sent in an LPM transaction.
++		 * In Device mode this field is updated with the
++		 * Received LPM Token HIRD bmAttribute
++		 * when an ACK/NYET/STALL response is sent
++		 * to an LPM transaction.
++		 */
++		unsigned hird:4;
++		/** RemoteWakeEnable (bRemoteWake) (Device and Host)
++		 * In Host mode this bit indicates the value of remote
++		 * wake up to be sent in wIndex field of LPM transaction.
++		 * In Device mode this field is updated with the
++		 * Received LPM Token bRemoteWake bmAttribute
++		 * when an ACK/NYET/STALL response is sent
++		 * to an LPM transaction.
++		 */
++		unsigned rem_wkup_en:1;
++		/** Enable utmi_sleep_n (EnblSlpM) (Device and Host)
++		 * The application uses this bit to control
++		 * the utmi_sleep_n assertion to the PHY when in L1 state.
++		 */
++		unsigned en_utmi_sleep:1;
++		/** HIRD Threshold (HIRD_Thres) (Device and Host)
++		 */
++		unsigned hird_thres:5;
++		/** LPM Response (CoreL1Res) (Device and Host)
++		 * In Host mode this bit contains the handshake response to
++		 * LPM transaction.
++		 * In Device mode the response of the core to
++		 * LPM transaction received is reflected in these two bits.
++			- 0x0 : ERROR (No handshake response)
++			- 0x1 : STALL
++			- 0x2 : NYET
++			- 0x3 : ACK
++		 */
++		unsigned lpm_resp:2;
++		/** Port Sleep Status (SlpSts) (Device and Host)
++		 * This bit is set as long as a Sleep condition
++		 * is present on the USB bus.
++		 */
++		unsigned prt_sleep_sts:1;
++		/** Sleep State Resume OK (L1ResumeOK) (Device and Host)
++		 * Indicates that the application or host
++		 * can start resume from Sleep state.
++		 */
++		unsigned sleep_state_resumeok:1;
++		/** LPM channel Index (LPM_Chnl_Indx) (Host)
++		 * The channel number on which the LPM transaction
++		 * has to be applied while sending
++		 * an LPM transaction to the local device.
++		 */
++		unsigned lpm_chan_index:4;
++		/** LPM Retry Count (LPM_Retry_Cnt) (Host)
++		 * Number of host retries that would be performed
++		 * if the device response was not a valid response.
++		 */
++		unsigned retry_count:3;
++		/** Send LPM Transaction (SndLPM) (Host)
++		 * When set by application software,
++		 * an LPM transaction containing two tokens
++		 * is sent.
++		 */
++		unsigned send_lpm:1;
++		/** LPM Retry status (LPM_RetryCnt_Sts) (Host)
++		 * Number of LPM Host Retries still remaining
++		 * to be transmitted for the current LPM sequence
++		 */
++		unsigned retry_count_sts:3;
++		unsigned reserved28_29:2;
++		/** In host mode once this bit is set, the host
++		 * configures to drive the HSIC Idle state on the bus.
++		 * It then waits for the  device to initiate the Connect sequence.
++		 * In device mode once this bit is set, the device waits for
++		 * the HSIC Idle line state on the bus. Upon receiving the Idle
++		 * line state, it initiates the HSIC Connect sequence.
++		 */
++		unsigned hsic_connect:1;
++		/** This bit overrides and functionally inverts
++		 * the if_select_hsic input port signal.
++		 */
++		unsigned inv_sel_hsic:1;
++	} b;
++} glpmcfg_data_t;
++
++/**
++ * This union represents the bit fields of the Core ADP Timer, Control and
++ * Status Register (ADPTIMCTLSTS). Set the bits using bit fields then write
++ * the <i>d32</i> value to the register.
++ */
++typedef union adpctl_data {
++	/** raw register data */
++	uint32_t d32;
++	/** register bits */
++	struct {
++		/** Probe Discharge (PRB_DSCHG)
++		 *  These bits set the times for TADP_DSCHG.
++		 *  These bits are defined as follows:
++		 *  2'b00 - 4 msec
++		 *  2'b01 - 8 msec
++		 *  2'b10 - 16 msec
++		 *  2'b11 - 32 msec
++		 */
++		unsigned prb_dschg:2;
++		/** Probe Delta (PRB_DELTA)
++		 *  These bits set the resolution for the RTIM value.
++		 *  The bits are defined in units of 32 kHz clock cycles as follows:
++		 *  2'b00  -  1 cycle
++		 *  2'b01  -  2 cycles
++		 *  2'b10  -  3 cycles
++		 *  2'b11  -  4 cycles
++		 *  For example, if this value is set to 2'b01, RTIM
++		 *  increments once every 2 (two) 32 kHz clock cycles.
++		 */
++		unsigned prb_delta:2;
++		/** Probe Period (PRB_PER)
++		 *  These bits set the TADP_PRD as shown in Figure 4 as follows:
++		 *  2'b00  -  0.625 to 0.925 sec (typical 0.775 sec)
++		 *  2'b01  -  1.25 to 1.85 sec (typical 1.55 sec)
++		 *  2'b10  -  1.9 to 2.6 sec (typical 2.275 sec)
++		 *  2'b11  -  Reserved
++		 */
++		unsigned prb_per:2;
++		/** These bits capture the latest time it took for VBUS to ramp from
++		 *  VADP_SINK to VADP_PRB.
++		 *  0x000  -  1 cycles
++		 *  0x001  -  2 cycles
++		 *  0x002  -  3 cycles
++		 *  etc
++		 *  0x7FF  -  2048 cycles
++		 *  A time of 1024 cycles at 32 kHz corresponds to a time of 32 msec.
++		*/
++		unsigned rtim:11;
++		/** Enable Probe (EnaPrb)
++		 *  When programmed to 1'b1, the core performs a probe operation.
++		 *  This bit is valid only if OTG_Ver = 1'b1.
++		 */
++		unsigned enaprb:1;
++		/** Enable Sense (EnaSns)
++		 *  When programmed to 1'b1, the core performs a Sense operation.
++		 *  This bit is valid only if OTG_Ver = 1'b1.
++		 */
++		unsigned enasns:1;
++		/** ADP Reset (ADPRes)
++		 *  When set, ADP controller is reset.
++		 *  This bit is valid only if OTG_Ver = 1'b1.
++		 */
++		unsigned adpres:1;
++		/** ADP Enable (ADPEn)
++		 *  When set, the core performs either ADP probing or sensing
++		 *  based on EnaPrb or EnaSns.
++		 *  This bit is valid only if OTG_Ver = 1'b1.
++		 */
++		unsigned adpen:1;
++		/** ADP Probe Interrupt (ADP_PRB_INT)
++		 *  When this bit is set, it means that the VBUS
++		 *  voltage is greater than VADP_PRB or VADP_PRB is reached.
++		 *  This bit is valid only if OTG_Ver = 1'b1.
++		 */
++		unsigned adp_prb_int:1;
++		/**
++		 *  ADP Sense Interrupt (ADP_SNS_INT)
++		 *  When this bit is set, it means that the VBUS voltage is greater than
++		 *  VADP_SNS value or VADP_SNS is reached.
++		 *  This bit is valid only if OTG_Ver = 1'b1.
++		 */
++		unsigned adp_sns_int:1;
++		/** ADP Timeout Interrupt (ADP_TMOUT_INT)
++		 *  This bit is relevant only for an ADP probe.
++		 *  When this bit is set, it means that the ramp time has
++		 *  completed, i.e. ADPCTL.RTIM has reached its terminal value
++		 *  of 0x7FF.  This is a debug feature that allows software
++		 *  to read the ramp time after each cycle.
++		 *  This bit is valid only if OTG_Ver = 1'b1.
++		 */
++		unsigned adp_tmout_int:1;
++		/** ADP Probe Interrupt Mask (ADP_PRB_INT_MSK)
++		 *  When this bit is set, it unmasks the interrupt due to ADP_PRB_INT.
++		 *  This bit is valid only if OTG_Ver = 1'b1.
++		 */
++		unsigned adp_prb_int_msk:1;
++		/** ADP Sense Interrupt Mask (ADP_SNS_INT_MSK)
++		 *  When this bit is set, it unmasks the interrupt due to ADP_SNS_INT.
++		 *  This bit is valid only if OTG_Ver = 1'b1.
++		 */
++		unsigned adp_sns_int_msk:1;
++		/** ADP Timeout Interrupt Mask (ADP_TMOUT_MSK)
++		 *  When this bit is set, it unmasks the interrupt due to ADP_TMOUT_INT.
++		 *  This bit is valid only if OTG_Ver = 1'b1.
++		 */
++		unsigned adp_tmout_int_msk:1;
++		/** Access Request
++		 * 2'b00 - Read/Write Valid (updated by the core)
++		 * 2'b01 - Read
++		 * 2'b10 - Write
++		 * 2'b11 - Reserved
++		 */
++		unsigned ar:2;
++		 /** Reserved */
++		unsigned reserved29_31:3;
++	} b;
++} adpctl_data_t;
++
++////////////////////////////////////////////
++// Device Registers
++/**
++ * Device Global Registers. <i>Offsets 800h-BFFh</i>
++ *
++ * The following structures define the size and relative field offsets
++ * for the Device Mode Registers.
++ *
++ * <i>These registers are visible only in Device mode and must not be
++ * accessed in Host mode, as the results are unknown.</i>
++ */
++typedef struct dwc_otg_dev_global_regs {
++	/** Device Configuration Register. <i>Offset 800h</i> */
++	volatile uint32_t dcfg;
++	/** Device Control Register. <i>Offset: 804h</i> */
++	volatile uint32_t dctl;
++	/** Device Status Register (Read Only). <i>Offset: 808h</i> */
++	volatile uint32_t dsts;
++	/** Reserved. <i>Offset: 80Ch</i> */
++	uint32_t unused;
++	/** Device IN Endpoint Common Interrupt Mask
++	 * Register. <i>Offset: 810h</i> */
++	volatile uint32_t diepmsk;
++	/** Device OUT Endpoint Common Interrupt Mask
++	 * Register. <i>Offset: 814h</i> */
++	volatile uint32_t doepmsk;
++	/** Device All Endpoints Interrupt Register.  <i>Offset: 818h</i> */
++	volatile uint32_t daint;
++	/** Device All Endpoints Interrupt Mask Register.  <i>Offset:
++	 * 81Ch</i> */
++	volatile uint32_t daintmsk;
++	/** Device IN Token Queue Read Register-1 (Read Only).
++	 * <i>Offset: 820h</i> */
++	volatile uint32_t dtknqr1;
++	/** Device IN Token Queue Read Register-2 (Read Only).
++	 * <i>Offset: 824h</i> */
++	volatile uint32_t dtknqr2;
++	/** Device VBUS	 discharge Register.  <i>Offset: 828h</i> */
++	volatile uint32_t dvbusdis;
++	/** Device VBUS Pulse Register.	 <i>Offset: 82Ch</i> */
++	volatile uint32_t dvbuspulse;
++	/** Device IN Token Queue Read Register-3 (Read Only). /
++	 *	Device Thresholding control register (Read/Write)
++	 * <i>Offset: 830h</i> */
++	volatile uint32_t dtknqr3_dthrctl;
++	/** Device IN Token Queue Read Register-4 (Read Only). /
++	 *	Device IN EPs empty Inr. Mask Register (Read/Write)
++	 * <i>Offset: 834h</i> */
++	volatile uint32_t dtknqr4_fifoemptymsk;
++	/** Device Each Endpoint Interrupt Register (Read Only). /
++	 * <i>Offset: 838h</i> */
++	volatile uint32_t deachint;
++	/** Device Each Endpoint Interrupt mask Register (Read/Write). /
++	 * <i>Offset: 83Ch</i> */
++	volatile uint32_t deachintmsk;
++	/** Device Each In Endpoint Interrupt mask Register (Read/Write). /
++	 * <i>Offset: 840h</i> */
++	volatile uint32_t diepeachintmsk[MAX_EPS_CHANNELS];
++	/** Device Each Out Endpoint Interrupt mask Register (Read/Write). /
++	 * <i>Offset: 880h</i> */
++	volatile uint32_t doepeachintmsk[MAX_EPS_CHANNELS];
++} dwc_otg_device_global_regs_t;
++
++/**
++ * This union represents the bit fields in the Device Configuration
++ * Register.  Read the register into the <i>d32</i> member then
++ * set/clear the bits using the <i>b</i>it elements.  Write the
++ * <i>d32</i> member to the dcfg register.
++ */
++typedef union dcfg_data {
++	/** raw register data */
++	uint32_t d32;
++	/** register bits */
++	struct {
++		/** Device Speed */
++		unsigned devspd:2;
++		/** Non Zero Length Status OUT Handshake */
++		unsigned nzstsouthshk:1;
++#define DWC_DCFG_SEND_STALL 1
++
++		unsigned ena32khzs:1;
++		/** Device Addresses */
++		unsigned devaddr:7;
++		/** Periodic Frame Interval */
++		unsigned perfrint:2;
++#define DWC_DCFG_FRAME_INTERVAL_80 0
++#define DWC_DCFG_FRAME_INTERVAL_85 1
++#define DWC_DCFG_FRAME_INTERVAL_90 2
++#define DWC_DCFG_FRAME_INTERVAL_95 3
++
++		/** Enable Device OUT NAK for bulk in DDMA mode */
++		unsigned endevoutnak:1;
++
++		unsigned reserved14_17:4;
++		/** In Endpoint Mis-match count */
++		unsigned epmscnt:5;
++		/** Enable Descriptor DMA in Device mode */
++		unsigned descdma:1;
++		unsigned perschintvl:2;
++		unsigned resvalid:6;
++	} b;
++} dcfg_data_t;
++
++/**
++ * This union represents the bit fields in the Device Control
++ * Register.  Read the register into the <i>d32</i> member then
++ * set/clear the bits using the <i>b</i>it elements.
++ */
++typedef union dctl_data {
++	/** raw register data */
++	uint32_t d32;
++	/** register bits */
++	struct {
++		/** Remote Wakeup */
++		unsigned rmtwkupsig:1;
++		/** Soft Disconnect */
++		unsigned sftdiscon:1;
++		/** Global Non-Periodic IN NAK Status */
++		unsigned gnpinnaksts:1;
++		/** Global OUT NAK Status */
++		unsigned goutnaksts:1;
++		/** Test Control */
++		unsigned tstctl:3;
++		/** Set Global Non-Periodic IN NAK */
++		unsigned sgnpinnak:1;
++		/** Clear Global Non-Periodic IN NAK */
++		unsigned cgnpinnak:1;
++		/** Set Global OUT NAK */
++		unsigned sgoutnak:1;
++		/** Clear Global OUT NAK */
++		unsigned cgoutnak:1;
++		/** Power-On Programming Done */
++		unsigned pwronprgdone:1;
++		/** Reserved */
++		unsigned reserved:1;
++		/** Global Multi Count */
++		unsigned gmc:2;
++		/** Ignore Frame Number for ISOC EPs */
++		unsigned ifrmnum:1;
++		/** NAK on Babble */
++		unsigned nakonbble:1;
++		/** Enable Continue on BNA */
++		unsigned encontonbna:1;
++
++		unsigned reserved18_31:14;
++	} b;
++} dctl_data_t;
++
++/**
++ * This union represents the bit fields in the Device Status
++ * Register.  Read the register into the <i>d32</i> member then
++ * set/clear the bits using the <i>b</i>it elements.
++ */
++typedef union dsts_data {
++	/** raw register data */
++	uint32_t d32;
++	/** register bits */
++	struct {
++		/** Suspend Status */
++		unsigned suspsts:1;
++		/** Enumerated Speed */
++		unsigned enumspd:2;
++#define DWC_DSTS_ENUMSPD_HS_PHY_30MHZ_OR_60MHZ 0
++#define DWC_DSTS_ENUMSPD_FS_PHY_30MHZ_OR_60MHZ 1
++#define DWC_DSTS_ENUMSPD_LS_PHY_6MHZ		   2
++#define DWC_DSTS_ENUMSPD_FS_PHY_48MHZ		   3
++		/** Erratic Error */
++		unsigned errticerr:1;
++		unsigned reserved4_7:4;
++		/** Frame or Microframe Number of the received SOF */
++		unsigned soffn:14;
++		unsigned reserved22_31:10;
++	} b;
++} dsts_data_t;
++
++/**
++ * This union represents the bit fields in the Device IN EP Interrupt
++ * Register and the Device IN EP Common Mask Register.
++ *
++ * - Read the register into the <i>d32</i> member then set/clear the
++ *	 bits using the <i>b</i>it elements.
++ */
++typedef union diepint_data {
++	/** raw register data */
++	uint32_t d32;
++	/** register bits */
++	struct {
++		/** Transfer complete mask */
++		unsigned xfercompl:1;
++		/** Endpoint disable mask */
++		unsigned epdisabled:1;
++		/** AHB Error mask */
++		unsigned ahberr:1;
++		/** TimeOUT Handshake mask (non-ISOC EPs) */
++		unsigned timeout:1;
++		/** IN Token received with TxF Empty mask */
++		unsigned intktxfemp:1;
++		/** IN Token Received with EP mismatch mask */
++		unsigned intknepmis:1;
++		/** IN Endpoint NAK Effective mask */
++		unsigned inepnakeff:1;
++		/** Transmit FIFO Empty interrupt */
++		unsigned emptyintr:1;
++
++		unsigned txfifoundrn:1;
++
++		/** BNA Interrupt mask */
++		unsigned bna:1;
++
++		unsigned reserved10_12:3;
++		/** NAK Interrupt mask */
++		unsigned nak:1;
++
++		unsigned reserved14_31:18;
++	} b;
++} diepint_data_t;
++
++/**
++ * This union represents the bit fields in the Device IN EP
++ * Common/Dedicated Interrupt Mask Register.
++ */
++typedef union diepint_data diepmsk_data_t;
++
++/**
++ * This union represents the bit fields in the Device OUT EP Interrupt
++ * Register and Device OUT EP Common Interrupt Mask Register.
++ *
++ * - Read the register into the <i>d32</i> member then set/clear the
++ *	 bits using the <i>b</i>it elements.
++ */
++typedef union doepint_data {
++	/** raw register data */
++	uint32_t d32;
++	/** register bits */
++	struct {
++		/** Transfer complete */
++		unsigned xfercompl:1;
++		/** Endpoint disable  */
++		unsigned epdisabled:1;
++		/** AHB Error */
++		unsigned ahberr:1;
++		/** Setup Phase Done (control EPs) */
++		unsigned setup:1;
++		/** OUT Token Received when Endpoint Disabled */
++		unsigned outtknepdis:1;
++
++		unsigned stsphsercvd:1;
++		/** Back-to-Back SETUP Packets Received */
++		unsigned back2backsetup:1;
++
++		unsigned reserved7:1;
++		/** OUT packet Error */
++		unsigned outpkterr:1;
++		/** BNA Interrupt */
++		unsigned bna:1;
++
++		unsigned reserved10:1;
++		/** Packet Drop Status */
++		unsigned pktdrpsts:1;
++		/** Babble Interrupt */
++		unsigned babble:1;
++		/** NAK Interrupt */
++		unsigned nak:1;
++		/** NYET Interrupt */
++		unsigned nyet:1;
++		/** Bit indicating setup packet received */
++		unsigned sr:1;
++
++		unsigned reserved16_31:16;
++	} b;
++} doepint_data_t;
++
++/**
++ * This union represents the bit fields in the Device OUT EP
++ * Common/Dedicated Interrupt Mask Register.
++ */
++typedef union doepint_data doepmsk_data_t;
++
++/**
++ * This union represents the bit fields in the Device All EP Interrupt
++ * and Mask Registers.
++ * - Read the register into the <i>d32</i> member then set/clear the
++ *	 bits using the <i>b</i>it elements.
++ */
++typedef union daint_data {
++	/** raw register data */
++	uint32_t d32;
++	/** register bits */
++	struct {
++		/** IN Endpoint bits */
++		unsigned in:16;
++		/** OUT Endpoint bits */
++		unsigned out:16;
++	} ep;
++	struct {
++		/** IN Endpoint bits */
++		unsigned inep0:1;
++		unsigned inep1:1;
++		unsigned inep2:1;
++		unsigned inep3:1;
++		unsigned inep4:1;
++		unsigned inep5:1;
++		unsigned inep6:1;
++		unsigned inep7:1;
++		unsigned inep8:1;
++		unsigned inep9:1;
++		unsigned inep10:1;
++		unsigned inep11:1;
++		unsigned inep12:1;
++		unsigned inep13:1;
++		unsigned inep14:1;
++		unsigned inep15:1;
++		/** OUT Endpoint bits */
++		unsigned outep0:1;
++		unsigned outep1:1;
++		unsigned outep2:1;
++		unsigned outep3:1;
++		unsigned outep4:1;
++		unsigned outep5:1;
++		unsigned outep6:1;
++		unsigned outep7:1;
++		unsigned outep8:1;
++		unsigned outep9:1;
++		unsigned outep10:1;
++		unsigned outep11:1;
++		unsigned outep12:1;
++		unsigned outep13:1;
++		unsigned outep14:1;
++		unsigned outep15:1;
++	} b;
++} daint_data_t;
++
++/**
++ * This union represents the bit fields in the Device IN Token Queue
++ * Read Registers.
++ * - Read the register into the <i>d32</i> member.
++ * - READ-ONLY Register
++ */
++typedef union dtknq1_data {
++	/** raw register data */
++	uint32_t d32;
++	/** register bits */
++	struct {
++		/** In Token Queue Write Pointer */
++		unsigned intknwptr:5;
++		/** Reserved */
++		unsigned reserved05_06:2;
++		/** write pointer has wrapped. */
++		unsigned wrap_bit:1;
++		/** EP Numbers of IN Tokens 0 ... 5 */
++		unsigned epnums0_5:24;
++	} b;
++} dtknq1_data_t;
++
++/**
++ * This union represents Threshold control Register
++ * - Read and write the register into the <i>d32</i> member.
++ * - READ-WRITABLE Register
++ */
++typedef union dthrctl_data {
++	/** raw register data */
++	uint32_t d32;
++	/** register bits */
++	struct {
++		/** non ISO Tx Thr. Enable */
++		unsigned non_iso_thr_en:1;
++		/** ISO Tx Thr. Enable */
++		unsigned iso_thr_en:1;
++		/** Tx Thr. Length */
++		unsigned tx_thr_len:9;
++		/** AHB Threshold ratio */
++		unsigned ahb_thr_ratio:2;
++		/** Reserved */
++		unsigned reserved13_15:3;
++		/** Rx Thr. Enable */
++		unsigned rx_thr_en:1;
++		/** Rx Thr. Length */
++		unsigned rx_thr_len:9;
++		unsigned reserved26:1;
++		/** Arbiter Parking Enable*/
++		unsigned arbprken:1;
++		/** Reserved */
++		unsigned reserved28_31:4;
++	} b;
++} dthrctl_data_t;
++
++/**
++ * Device Logical IN Endpoint-Specific Registers. <i>Offsets
++ * 900h-AFCh</i>
++ *
++ * There will be one set of endpoint registers per logical endpoint
++ * implemented.
++ *
++ * <i>These registers are visible only in Device mode and must not be
++ * accessed in Host mode, as the results are unknown.</i>
++ */
++typedef struct dwc_otg_dev_in_ep_regs {
++	/** Device IN Endpoint Control Register. <i>Offset:900h +
++	 * (ep_num * 20h) + 00h</i> */
++	volatile uint32_t diepctl;
++	/** Reserved. <i>Offset:900h + (ep_num * 20h) + 04h</i> */
++	uint32_t reserved04;
++	/** Device IN Endpoint Interrupt Register. <i>Offset:900h +
++	 * (ep_num * 20h) + 08h</i> */
++	volatile uint32_t diepint;
++	/** Reserved. <i>Offset:900h + (ep_num * 20h) + 0Ch</i> */
++	uint32_t reserved0C;
++	/** Device IN Endpoint Transfer Size
++	 * Register. <i>Offset:900h + (ep_num * 20h) + 10h</i> */
++	volatile uint32_t dieptsiz;
++	/** Device IN Endpoint DMA Address Register. <i>Offset:900h +
++	 * (ep_num * 20h) + 14h</i> */
++	volatile uint32_t diepdma;
++	/** Device IN Endpoint Transmit FIFO Status Register. <i>Offset:900h +
++	 * (ep_num * 20h) + 18h</i> */
++	volatile uint32_t dtxfsts;
++	/** Device IN Endpoint DMA Buffer Register. <i>Offset:900h +
++	 * (ep_num * 20h) + 1Ch</i> */
++	volatile uint32_t diepdmab;
++} dwc_otg_dev_in_ep_regs_t;
++
++/**
++ * Device Logical OUT Endpoint-Specific Registers. <i>Offsets:
++ * B00h-CFCh</i>
++ *
++ * There will be one set of endpoint registers per logical endpoint
++ * implemented.
++ *
++ * <i>These registers are visible only in Device mode and must not be
++ * accessed in Host mode, as the results are unknown.</i>
++ */
++typedef struct dwc_otg_dev_out_ep_regs {
++	/** Device OUT Endpoint Control Register. <i>Offset:B00h +
++	 * (ep_num * 20h) + 00h</i> */
++	volatile uint32_t doepctl;
++	/** Reserved. <i>Offset:B00h + (ep_num * 20h) + 04h</i> */
++	uint32_t reserved04;
++	/** Device OUT Endpoint Interrupt Register. <i>Offset:B00h +
++	 * (ep_num * 20h) + 08h</i> */
++	volatile uint32_t doepint;
++	/** Reserved. <i>Offset:B00h + (ep_num * 20h) + 0Ch</i> */
++	uint32_t reserved0C;
++	/** Device OUT Endpoint Transfer Size Register. <i>Offset:
++	 * B00h + (ep_num * 20h) + 10h</i> */
++	volatile uint32_t doeptsiz;
++	/** Device OUT Endpoint DMA Address Register. <i>Offset:B00h
++	 * + (ep_num * 20h) + 14h</i> */
++	volatile uint32_t doepdma;
++	/** Reserved. <i>Offset:B00h + (ep_num * 20h) + 18h</i> */
++	uint32_t unused;
++	/** Device OUT Endpoint DMA Buffer Register. <i>Offset:B00h
++	 * + (ep_num * 20h) + 1Ch</i> */
++	uint32_t doepdmab;
++} dwc_otg_dev_out_ep_regs_t;
++
++/**
++ * This union represents the bit fields in the Device EP Control
++ * Register.  Read the register into the <i>d32</i> member then
++ * set/clear the bits using the <i>b</i>it elements.
++ */
++typedef union depctl_data {
++	/** raw register data */
++	uint32_t d32;
++	/** register bits */
++	struct {
++		/** Maximum Packet Size
++		 * IN/OUT EPn
++		 * IN/OUT EP0 - 2 bits
++		 *	 2'b00: 64 Bytes
++		 *	 2'b01: 32
++		 *	 2'b10: 16
++		 *	 2'b11: 8 */
++		unsigned mps:11;
++#define DWC_DEP0CTL_MPS_64	 0
++#define DWC_DEP0CTL_MPS_32	 1
++#define DWC_DEP0CTL_MPS_16	 2
++#define DWC_DEP0CTL_MPS_8	 3
++
++		/** Next Endpoint
++		 * IN EPn/IN EP0
++		 * OUT EPn/OUT EP0 - reserved */
++		unsigned nextep:4;
++
++		/** USB Active Endpoint */
++		unsigned usbactep:1;
++
++		/** Endpoint DPID (INTR/Bulk IN and OUT endpoints)
++		 * This field contains the PID of the packet going to
++		 * be received or transmitted on this endpoint. The
++		 * application should program the PID of the first
++		 * packet going to be received or transmitted on this
++		 * endpoint, after the endpoint is
++		 * activated. The application uses the SetD1PID and
++		 * SetD0PID fields of this register to program either
++		 * D0 or D1 PID.
++		 *
++		 * The encoding for this field is
++		 *	 - 0: D0
++		 *	 - 1: D1
++		 */
++		unsigned dpid:1;
++
++		/** NAK Status */
++		unsigned naksts:1;
++
++		/** Endpoint Type
++		 *	2'b00: Control
++		 *	2'b01: Isochronous
++		 *	2'b10: Bulk
++		 *	2'b11: Interrupt */
++		unsigned eptype:2;
++
++		/** Snoop Mode
++		 * OUT EPn/OUT EP0
++		 * IN EPn/IN EP0 - reserved */
++		unsigned snp:1;
++
++		/** Stall Handshake */
++		unsigned stall:1;
++
++		/** Tx Fifo Number
++		 * IN EPn/IN EP0
++		 * OUT EPn/OUT EP0 - reserved */
++		unsigned txfnum:4;
++
++		/** Clear NAK */
++		unsigned cnak:1;
++		/** Set NAK */
++		unsigned snak:1;
++		/** Set DATA0 PID (INTR/Bulk IN and OUT endpoints)
++		 * Writing to this field sets the Endpoint DPID (DPID)
++		 * field in this register to DATA0. Set Even
++		 * (micro)frame (SetEvenFr) (ISO IN and OUT Endpoints)
++		 * Writing to this field sets the Even/Odd
++		 * (micro)frame (EO_FrNum) field to even (micro)
++		 * frame.
++		 */
++		unsigned setd0pid:1;
++		/** Set DATA1 PID (INTR/Bulk IN and OUT endpoints)
++		 * Writing to this field sets the Endpoint DPID (DPID)
++		 * field in this register to DATA1. Set Odd
++		 * (micro)frame (SetOddFr) (ISO IN and OUT Endpoints)
++		 * Writing to this field sets the Even/Odd
++		 * (micro)frame (EO_FrNum) field to odd (micro) frame.
++		 */
++		unsigned setd1pid:1;
++
++		/** Endpoint Disable */
++		unsigned epdis:1;
++		/** Endpoint Enable */
++		unsigned epena:1;
++	} b;
++} depctl_data_t;
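++
++/*
++ * Illustrative sketch only: programming a control EP0 IN endpoint with
++ * the DWC_DEP0CTL_MPS_* values above.  `in_ep` is an assumed pointer to
++ * the endpoint's dwc_otg_dev_in_ep_regs_t block; real driver code also
++ * programs the transfer size and interrupt masks.
++ */
++static inline void example_activate_ep0_in(dwc_otg_dev_in_ep_regs_t *in_ep)
++{
++	depctl_data_t diepctl;
++
++	diepctl.d32 = in_ep->diepctl;
++	diepctl.b.mps = DWC_DEP0CTL_MPS_64;	/* 64-byte maximum packet size */
++	diepctl.b.usbactep = 1;			/* mark the endpoint active */
++	in_ep->diepctl = diepctl.d32;
++}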
++
++/**
++ * This union represents the bit fields in the Device EP Transfer
++ * Size Register.  Read the register into the <i>d32</i> member then
++ * set/clear the bits using the <i>b</i>it elements.
++ */
++typedef union deptsiz_data {
++		/** raw register data */
++	uint32_t d32;
++		/** register bits */
++	struct {
++		/** Transfer size */
++		unsigned xfersize:19;
++/** Max packet count for EP (pow(2,10)-1) */
++#define MAX_PKT_CNT 1023
++		/** Packet Count */
++		unsigned pktcnt:10;
++		/** Multi Count - Periodic IN endpoints */
++		unsigned mc:2;
++		unsigned reserved:1;
++	} b;
++} deptsiz_data_t;
++
++/**
++ * This union represents the bit fields in the Device EP 0 Transfer
++ * Size Register.  Read the register into the <i>d32</i> member then
++ * set/clear the bits using the <i>b</i>it elements.
++ */
++typedef union deptsiz0_data {
++		/** raw register data */
++	uint32_t d32;
++		/** register bits */
++	struct {
++		/** Transfer size */
++		unsigned xfersize:7;
++				/** Reserved */
++		unsigned reserved7_18:12;
++		/** Packet Count */
++		unsigned pktcnt:2;
++				/** Reserved */
++		unsigned reserved21_28:8;
++				/**Setup Packet Count (DOEPTSIZ0 Only) */
++		unsigned supcnt:2;
++		unsigned reserved31:1;
++	} b;
++} deptsiz0_data_t;
++
++/////////////////////////////////////////////////
++// DMA Descriptor Specific Structures
++//
++
++/** Buffer status definitions */
++
++#define BS_HOST_READY	0x0
++#define BS_DMA_BUSY		0x1
++#define BS_DMA_DONE		0x2
++#define BS_HOST_BUSY	0x3
++
++/** Receive/Transmit status definitions */
++
++#define RTS_SUCCESS		0x0
++#define RTS_BUFFLUSH	0x1
++#define RTS_RESERVED	0x2
++#define RTS_BUFERR		0x3
++
++/**
++ * This union represents the bit fields in the DMA Descriptor
++ * status quadlet. Read the quadlet into the <i>d32</i> member then
++ * set/clear the bits using the <i>b</i>it, <i>b_iso_out</i> and
++ * <i>b_iso_in</i> elements.
++ */
++typedef union dev_dma_desc_sts {
++		/** raw register data */
++	uint32_t d32;
++		/** quadlet bits */
++	struct {
++		/** Received number of bytes */
++		unsigned bytes:16;
++		/** NAK bit - only for OUT EPs */
++		unsigned nak:1;
++		unsigned reserved17_22:6;
++		/** Multiple Transfer - only for OUT EPs */
++		unsigned mtrf:1;
++		/** Setup Packet received - only for OUT EPs */
++		unsigned sr:1;
++		/** Interrupt On Complete */
++		unsigned ioc:1;
++		/** Short Packet */
++		unsigned sp:1;
++		/** Last */
++		unsigned l:1;
++		/** Receive Status */
++		unsigned sts:2;
++		/** Buffer Status */
++		unsigned bs:2;
++	} b;
++
++//#ifdef DWC_EN_ISOC
++		/** iso out quadlet bits */
++	struct {
++		/** Received number of bytes */
++		unsigned rxbytes:11;
++
++		unsigned reserved11:1;
++		/** Frame Number */
++		unsigned framenum:11;
++		/** Received ISO Data PID */
++		unsigned pid:2;
++		/** Interrupt On Complete */
++		unsigned ioc:1;
++		/** Short Packet */
++		unsigned sp:1;
++		/** Last */
++		unsigned l:1;
++		/** Receive Status */
++		unsigned rxsts:2;
++		/** Buffer Status */
++		unsigned bs:2;
++	} b_iso_out;
++
++		/** iso in quadlet bits */
++	struct {
++		/** Transmitted number of bytes */
++		unsigned txbytes:12;
++		/** Frame Number */
++		unsigned framenum:11;
++		/** Transmitted ISO Data PID */
++		unsigned pid:2;
++		/** Interrupt On Complete */
++		unsigned ioc:1;
++		/** Short Packet */
++		unsigned sp:1;
++		/** Last */
++		unsigned l:1;
++		/** Transmit Status */
++		unsigned txsts:2;
++		/** Buffer Status */
++		unsigned bs:2;
++	} b_iso_in;
++//#endif                                /* DWC_EN_ISOC */
++} dev_dma_desc_sts_t;
++
++/**
++ * DMA Descriptor structure
++ *
++ * DMA Descriptor structure contains two quadlets:
++ * Status quadlet and Data buffer pointer.
++ */
++typedef struct dwc_otg_dev_dma_desc {
++	/** DMA Descriptor status quadlet */
++	dev_dma_desc_sts_t status;
++	/** DMA Descriptor data buffer pointer */
++	uint32_t buf;
++} dwc_otg_dev_dma_desc_t;
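++
++/*
++ * Sketch (illustrative only) of filling one DMA descriptor for a SETUP
++ * packet using the BS_* buffer-status values above.  `desc` and
++ * `buf_dma` (the DMA address of an 8-byte setup buffer) are assumptions
++ * made for this example.
++ */
++static inline void example_init_setup_desc(dwc_otg_dev_dma_desc_t *desc, uint32_t buf_dma)
++{
++	desc->status.d32 = 0;
++	desc->status.b.bs = BS_HOST_READY;	/* hand the descriptor to the DMA engine */
++	desc->status.b.l = 1;			/* last descriptor in the chain */
++	desc->status.b.ioc = 1;			/* interrupt on completion */
++	desc->status.b.bytes = 8;		/* SETUP packets are 8 bytes long */
++	desc->buf = buf_dma;
++}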
++
++/**
++ * The dwc_otg_dev_if structure contains information needed to manage
++ * the DWC_otg controller acting in device mode. It represents the
++ * programming view of the device-specific aspects of the controller.
++ */
++typedef struct dwc_otg_dev_if {
++	/** Pointer to device Global registers.
++	 * Device Global Registers starting at offset 800h
++	 */
++	dwc_otg_device_global_regs_t *dev_global_regs;
++#define DWC_DEV_GLOBAL_REG_OFFSET 0x800
++
++	/**
++	 * Device Logical IN Endpoint-Specific Registers 900h-AFCh
++	 */
++	dwc_otg_dev_in_ep_regs_t *in_ep_regs[MAX_EPS_CHANNELS];
++#define DWC_DEV_IN_EP_REG_OFFSET 0x900
++#define DWC_EP_REG_OFFSET 0x20
++
++	/** Device Logical OUT Endpoint-Specific Registers B00h-CFCh */
++	dwc_otg_dev_out_ep_regs_t *out_ep_regs[MAX_EPS_CHANNELS];
++#define DWC_DEV_OUT_EP_REG_OFFSET 0xB00
++
++	/* Device configuration information */
++	uint8_t speed;				 /**< Device Speed	0: Unknown, 1: LS, 2:FS, 3: HS */
++	uint8_t num_in_eps;		 /**< Number of Tx EPs, range: 0-15 except EP0 */
++	uint8_t num_out_eps;		 /**< Number of Rx EPs, range: 0-15 except EP0 */
++
++	/** Size of periodic FIFOs (Bytes) */
++	uint16_t perio_tx_fifo_size[MAX_PERIO_FIFOS];
++
++	/** Size of Tx FIFOs (Bytes) */
++	uint16_t tx_fifo_size[MAX_TX_FIFOS];
++
++	/** Thresholding enable flags and length variables */
++	uint16_t rx_thr_en;
++	uint16_t iso_tx_thr_en;
++	uint16_t non_iso_tx_thr_en;
++
++	uint16_t rx_thr_length;
++	uint16_t tx_thr_length;
++
++	/**
++	 * Pointers to the DMA Descriptors for EP0 Control
++	 * transfers (virtual and physical)
++	 */
++
++	/** 2 descriptors for SETUP packets */
++	dwc_dma_t dma_setup_desc_addr[2];
++	dwc_otg_dev_dma_desc_t *setup_desc_addr[2];
++
++	/** Pointer to Descriptor with latest SETUP packet */
++	dwc_otg_dev_dma_desc_t *psetup;
++
++	/** Index of current SETUP handler descriptor */
++	uint32_t setup_desc_index;
++
++	/** Descriptor for Data In or Status In phases */
++	dwc_dma_t dma_in_desc_addr;
++	dwc_otg_dev_dma_desc_t *in_desc_addr;
++
++	/** Descriptor for Data Out or Status Out phases */
++	dwc_dma_t dma_out_desc_addr;
++	dwc_otg_dev_dma_desc_t *out_desc_addr;
++
++	/** Setup Packet Detected - if set clear NAK when queueing */
++	uint32_t spd;
++	/** Isoc ep pointer on which incomplete happens */
++	void *isoc_ep;
++
++} dwc_otg_dev_if_t;
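++
++/*
++ * Sketch (not driver code) of how the DWC_DEV_*_REG_OFFSET and
++ * DWC_EP_REG_OFFSET values above map onto the per-endpoint register
++ * pointers in dwc_otg_dev_if_t.  `base` is an assumed pointer to the
++ * start of the core's memory-mapped register space.
++ */
++static inline void example_map_dev_regs(dwc_otg_dev_if_t *dev_if, uint8_t *base)
++{
++	int i;
++
++	dev_if->dev_global_regs =
++	    (dwc_otg_device_global_regs_t *)(base + DWC_DEV_GLOBAL_REG_OFFSET);
++
++	for (i = 0; i < MAX_EPS_CHANNELS; i++) {
++		dev_if->in_ep_regs[i] = (dwc_otg_dev_in_ep_regs_t *)
++		    (base + DWC_DEV_IN_EP_REG_OFFSET + i * DWC_EP_REG_OFFSET);
++		dev_if->out_ep_regs[i] = (dwc_otg_dev_out_ep_regs_t *)
++		    (base + DWC_DEV_OUT_EP_REG_OFFSET + i * DWC_EP_REG_OFFSET);
++	}
++}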
++
++/////////////////////////////////////////////////
++// Host Mode Register Structures
++//
++/**
++ * The Host Global Registers structure defines the size and relative
++ * field offsets for the Host Mode Global Registers.  Host Global
++ * Registers offsets 400h-7FFh.
++*/
++typedef struct dwc_otg_host_global_regs {
++	/** Host Configuration Register.   <i>Offset: 400h</i> */
++	volatile uint32_t hcfg;
++	/** Host Frame Interval Register.	<i>Offset: 404h</i> */
++	volatile uint32_t hfir;
++	/** Host Frame Number / Frame Remaining Register. <i>Offset: 408h</i> */
++	volatile uint32_t hfnum;
++	/** Reserved.	<i>Offset: 40Ch</i> */
++	uint32_t reserved40C;
++	/** Host Periodic Transmit FIFO/ Queue Status Register. <i>Offset: 410h</i> */
++	volatile uint32_t hptxsts;
++	/** Host All Channels Interrupt Register. <i>Offset: 414h</i> */
++	volatile uint32_t haint;
++	/** Host All Channels Interrupt Mask Register. <i>Offset: 418h</i> */
++	volatile uint32_t haintmsk;
++	/** Host Frame List Base Address Register . <i>Offset: 41Ch</i> */
++	volatile uint32_t hflbaddr;
++} dwc_otg_host_global_regs_t;
++
++/**
++ * This union represents the bit fields in the Host Configuration Register.
++ * Read the register into the <i>d32</i> member then set/clear the bits using
++ * the <i>b</i>it elements. Write the <i>d32</i> member to the hcfg register.
++ */
++typedef union hcfg_data {
++	/** raw register data */
++	uint32_t d32;
++
++	/** register bits */
++	struct {
++		/** FS/LS Phy Clock Select */
++		unsigned fslspclksel:2;
++#define DWC_HCFG_30_60_MHZ 0
++#define DWC_HCFG_48_MHZ	   1
++#define DWC_HCFG_6_MHZ	   2
++
++		/** FS/LS Only Support */
++		unsigned fslssupp:1;
++		unsigned reserved3_6:4;
++		/** Enable 32-KHz Suspend Mode */
++		unsigned ena32khzs:1;
++		/** Resume Validation Period */
++		unsigned resvalid:8;
++		unsigned reserved16_22:7;
++		/** Enable Scatter/gather DMA in Host mode */
++		unsigned descdma:1;
++		/** Frame List Entries */
++		unsigned frlisten:2;
++		/** Enable Periodic Scheduling */
++		unsigned perschedena:1;
++		unsigned reserved27_30:4;
++		unsigned modechtimen:1;
++	} b;
++} hcfg_data_t;
++
++/**
++ * This union represents the bit fields in the Host Frame Interval
++ * Register.
++ */
++typedef union hfir_data {
++	/** raw register data */
++	uint32_t d32;
++
++	/** register bits */
++	struct {
++		unsigned frint:16;
++		unsigned hfirrldctrl:1;
++		unsigned reserved:15;
++	} b;
++} hfir_data_t;
++
++/**
++ * This union represents the bit fields in the Host Frame Number /
++ * Frame Remaining Register.
++ */
++typedef union hfnum_data {
++	/** raw register data */
++	uint32_t d32;
++
++	/** register bits */
++	struct {
++		unsigned frnum:16;
++#define DWC_HFNUM_MAX_FRNUM 0x3FFF
++		unsigned frrem:16;
++	} b;
++} hfnum_data_t;
++
++typedef union hptxsts_data {
++	/** raw register data */
++	uint32_t d32;
++
++	/** register bits */
++	struct {
++		unsigned ptxfspcavail:16;
++		unsigned ptxqspcavail:8;
++		/** Top of the Periodic Transmit Request Queue
++		 *	- bit 24 - Terminate (last entry for the selected channel)
++		 *	- bits 26:25 - Token Type
++		 *	  - 2'b00 - Zero length
++		 *	  - 2'b01 - Ping
++		 *	  - 2'b10 - Disable
++		 *	- bits 30:27 - Channel Number
++		 *	- bit 31 - Odd/even microframe
++		 */
++		unsigned ptxqtop_terminate:1;
++		unsigned ptxqtop_token:2;
++		unsigned ptxqtop_chnum:4;
++		unsigned ptxqtop_odd:1;
++	} b;
++} hptxsts_data_t;
++
++/**
++ * This union represents the bit fields in the Host Port Control and Status
++ * Register. Read the register into the <i>d32</i> member then set/clear the
++ * bits using the <i>b</i>it elements. Write the <i>d32</i> member to the
++ * hprt0 register.
++ */
++typedef union hprt0_data {
++	/** raw register data */
++	uint32_t d32;
++	/** register bits */
++	struct {
++		unsigned prtconnsts:1;
++		unsigned prtconndet:1;
++		unsigned prtena:1;
++		unsigned prtenchng:1;
++		unsigned prtovrcurract:1;
++		unsigned prtovrcurrchng:1;
++		unsigned prtres:1;
++		unsigned prtsusp:1;
++		unsigned prtrst:1;
++		unsigned reserved9:1;
++		unsigned prtlnsts:2;
++		unsigned prtpwr:1;
++		unsigned prttstctl:4;
++		unsigned prtspd:2;
++#define DWC_HPRT0_PRTSPD_HIGH_SPEED 0
++#define DWC_HPRT0_PRTSPD_FULL_SPEED 1
++#define DWC_HPRT0_PRTSPD_LOW_SPEED	2
++		unsigned reserved19_31:13;
++	} b;
++} hprt0_data_t;
++
++/**
++ * This union represents the bit fields in the Host All Channels Interrupt
++ * Register.
++ */
++typedef union haint_data {
++	/** raw register data */
++	uint32_t d32;
++	/** register bits */
++	struct {
++		unsigned ch0:1;
++		unsigned ch1:1;
++		unsigned ch2:1;
++		unsigned ch3:1;
++		unsigned ch4:1;
++		unsigned ch5:1;
++		unsigned ch6:1;
++		unsigned ch7:1;
++		unsigned ch8:1;
++		unsigned ch9:1;
++		unsigned ch10:1;
++		unsigned ch11:1;
++		unsigned ch12:1;
++		unsigned ch13:1;
++		unsigned ch14:1;
++		unsigned ch15:1;
++		unsigned reserved:16;
++	} b;
++
++	struct {
++		unsigned chint:16;
++		unsigned reserved:16;
++	} b2;
++} haint_data_t;
++
++/**
++ * This union represents the bit fields in the Host All Channels Interrupt
++ * Mask Register.
++ */
++typedef union haintmsk_data {
++	/** raw register data */
++	uint32_t d32;
++	/** register bits */
++	struct {
++		unsigned ch0:1;
++		unsigned ch1:1;
++		unsigned ch2:1;
++		unsigned ch3:1;
++		unsigned ch4:1;
++		unsigned ch5:1;
++		unsigned ch6:1;
++		unsigned ch7:1;
++		unsigned ch8:1;
++		unsigned ch9:1;
++		unsigned ch10:1;
++		unsigned ch11:1;
++		unsigned ch12:1;
++		unsigned ch13:1;
++		unsigned ch14:1;
++		unsigned ch15:1;
++		unsigned reserved:16;
++	} b;
++
++	struct {
++		unsigned chint:16;
++		unsigned reserved:16;
++	} b2;
++} haintmsk_data_t;
++
++/**
++ * Host Channel Specific Registers. <i>500h-5FCh</i>
++ */
++typedef struct dwc_otg_hc_regs {
++	/** Host Channel 0 Characteristic Register. <i>Offset: 500h + (chan_num * 20h) + 00h</i> */
++	volatile uint32_t hcchar;
++	/** Host Channel 0 Split Control Register. <i>Offset: 500h + (chan_num * 20h) + 04h</i> */
++	volatile uint32_t hcsplt;
++	/** Host Channel 0 Interrupt Register. <i>Offset: 500h + (chan_num * 20h) + 08h</i> */
++	volatile uint32_t hcint;
++	/** Host Channel 0 Interrupt Mask Register. <i>Offset: 500h + (chan_num * 20h) + 0Ch</i> */
++	volatile uint32_t hcintmsk;
++	/** Host Channel 0 Transfer Size Register. <i>Offset: 500h + (chan_num * 20h) + 10h</i> */
++	volatile uint32_t hctsiz;
++	/** Host Channel 0 DMA Address Register. <i>Offset: 500h + (chan_num * 20h) + 14h</i> */
++	volatile uint32_t hcdma;
++	volatile uint32_t reserved;
++	/** Host Channel 0 DMA Buffer Address Register. <i>Offset: 500h + (chan_num * 20h) + 1Ch</i> */
++	volatile uint32_t hcdmab;
++} dwc_otg_hc_regs_t;
++
++/**
++ * This union represents the bit fields in the Host Channel Characteristics
++ * Register. Read the register into the <i>d32</i> member then set/clear the
++ * bits using the <i>b</i>it elements. Write the <i>d32</i> member to the
++ * hcchar register.
++ */
++typedef union hcchar_data {
++	/** raw register data */
++	uint32_t d32;
++
++	/** register bits */
++	struct {
++		/** Maximum packet size in bytes */
++		unsigned mps:11;
++
++		/** Endpoint number */
++		unsigned epnum:4;
++
++		/** 0: OUT, 1: IN */
++		unsigned epdir:1;
++
++		unsigned reserved:1;
++
++		/** 0: Full/high speed device, 1: Low speed device */
++		unsigned lspddev:1;
++
++		/** 0: Control, 1: Isoc, 2: Bulk, 3: Intr */
++		unsigned eptype:2;
++
++		/** Packets per frame for periodic transfers. 0 is reserved. */
++		unsigned multicnt:2;
++
++		/** Device address */
++		unsigned devaddr:7;
++
++		/**
++		 * Frame to transmit periodic transaction.
++		 * 0: even, 1: odd
++		 */
++		unsigned oddfrm:1;
++
++		/** Channel disable */
++		unsigned chdis:1;
++
++		/** Channel enable */
++		unsigned chen:1;
++	} b;
++} hcchar_data_t;
++
++typedef union hcsplt_data {
++	/** raw register data */
++	uint32_t d32;
++
++	/** register bits */
++	struct {
++		/** Port Address */
++		unsigned prtaddr:7;
++
++		/** Hub Address */
++		unsigned hubaddr:7;
++
++		/** Transaction Position */
++		unsigned xactpos:2;
++#define DWC_HCSPLIT_XACTPOS_MID 0
++#define DWC_HCSPLIT_XACTPOS_END 1
++#define DWC_HCSPLIT_XACTPOS_BEGIN 2
++#define DWC_HCSPLIT_XACTPOS_ALL 3
++
++		/** Do Complete Split */
++		unsigned compsplt:1;
++
++		/** Reserved */
++		unsigned reserved:14;
++
++		/** Split Enable */
++		unsigned spltena:1;
++	} b;
++} hcsplt_data_t;
++
++/**
++ * This union represents the bit fields in the Host Channel Interrupt
++ * Register.
++ */
++typedef union hcint_data {
++	/** raw register data */
++	uint32_t d32;
++	/** register bits */
++	struct {
++		/** Transfer Complete */
++		unsigned xfercomp:1;
++		/** Channel Halted */
++		unsigned chhltd:1;
++		/** AHB Error */
++		unsigned ahberr:1;
++		/** STALL Response Received */
++		unsigned stall:1;
++		/** NAK Response Received */
++		unsigned nak:1;
++		/** ACK Response Received */
++		unsigned ack:1;
++		/** NYET Response Received */
++		unsigned nyet:1;
++		/** Transaction Err */
++		unsigned xacterr:1;
++		/** Babble Error */
++		unsigned bblerr:1;
++		/** Frame Overrun */
++		unsigned frmovrun:1;
++		/** Data Toggle Error */
++		unsigned datatglerr:1;
++		/** Buffer Not Available (only for DDMA mode) */
++		unsigned bna:1;
++		/** Excessive transaction error (only for DDMA mode) */
++		unsigned xcs_xact:1;
++		/** Frame List Rollover interrupt */
++		unsigned frm_list_roll:1;
++		/** Reserved */
++		unsigned reserved14_31:18;
++	} b;
++} hcint_data_t;
++
++/**
++ * This union represents the bit fields in the Host Channel Interrupt Mask
++ * Register. Read the register into the <i>d32</i> member then set/clear the
++ * bits using the <i>b</i>it elements. Write the <i>d32</i> member to the
++ * hcintmsk register.
++ */
++typedef union hcintmsk_data {
++	/** raw register data */
++	uint32_t d32;
++
++	/** register bits */
++	struct {
++		unsigned xfercompl:1;
++		unsigned chhltd:1;
++		unsigned ahberr:1;
++		unsigned stall:1;
++		unsigned nak:1;
++		unsigned ack:1;
++		unsigned nyet:1;
++		unsigned xacterr:1;
++		unsigned bblerr:1;
++		unsigned frmovrun:1;
++		unsigned datatglerr:1;
++		unsigned bna:1;
++		unsigned xcs_xact:1;
++		unsigned frm_list_roll:1;
++		unsigned reserved14_31:18;
++	} b;
++} hcintmsk_data_t;
++
++/**
++ * This union represents the bit fields in the Host Channel Transfer Size
++ * Register. Read the register into the <i>d32</i> member then set/clear the
++ * bits using the <i>b</i>it elements. Write the <i>d32</i> member to the
++ * hctsiz register.
++ */
++
++typedef union hctsiz_data {
++	/** raw register data */
++	uint32_t d32;
++
++	/** register bits */
++	struct {
++		/** Total transfer size in bytes */
++		unsigned xfersize:19;
++
++		/** Data packets to transfer */
++		unsigned pktcnt:10;
++
++		/**
++		 * Packet ID for next data packet
++		 * 0: DATA0
++		 * 1: DATA2
++		 * 2: DATA1
++		 * 3: MDATA (non-Control), SETUP (Control)
++		 */
++		unsigned pid:2;
++#define DWC_HCTSIZ_DATA0 0
++#define DWC_HCTSIZ_DATA1 2
++#define DWC_HCTSIZ_DATA2 1
++#define DWC_HCTSIZ_MDATA 3
++#define DWC_HCTSIZ_SETUP 3
++
++		/** Do PING protocol when 1 */
++		unsigned dopng:1;
++	} b;
++
++	/** register bits */
++	struct {
++		/** Scheduling information */
++		unsigned schinfo:8;
++
++		/** Number of transfer descriptors.
++		 * Max value:
++		 * 64 in general,
++		 * 256 only for HS isochronous endpoint.
++		 */
++		unsigned ntd:8;
++
++		/** Reserved */
++		unsigned reserved16_28:13;
++
++		/**
++		 * Packet ID for next data packet
++		 * 0: DATA0
++		 * 1: DATA2
++		 * 2: DATA1
++		 * 3: MDATA (non-Control)
++		 */
++		unsigned pid:2;
++
++		/** Do PING protocol when 1 */
++		unsigned dopng:1;
++	} b_ddma;
++} hctsiz_data_t;
++
++/**
++ * This union represents the bit fields in the Host DMA Address
++ * Register used in Descriptor DMA mode.
++ */
++typedef union hcdma_data {
++	/** raw register data */
++	uint32_t d32;
++	/** register bits */
++	struct {
++		unsigned reserved0_2:3;
++		/** Current Transfer Descriptor. Not used for ISOC */
++		unsigned ctd:8;
++		/** Start Address of Descriptor List */
++		unsigned dma_addr:21;
++	} b;
++} hcdma_data_t;
++
++/**
++ * This union represents the bit fields in the DMA Descriptor
++ * status quadlet for host mode. Read the quadlet into the <i>d32</i> member then
++ * set/clear the bits using the <i>b</i>it elements.
++ */
++typedef union host_dma_desc_sts {
++	/** raw register data */
++	uint32_t d32;
++	/** quadlet bits */
++
++	/* for non-isochronous  */
++	struct {
++		/** Number of bytes */
++		unsigned n_bytes:17;
++		/** QTD offset to jump when Short Packet received - only for IN EPs */
++		unsigned qtd_offset:6;
++		/**
++		 * Set to request the core to jump to alternate QTD if
++		 * Short Packet received - only for IN EPs
++		 */
++		unsigned a_qtd:1;
++		 /**
++		  * Setup Packet bit. When set indicates that buffer contains
++		  * setup packet.
++		  */
++		unsigned sup:1;
++		/** Interrupt On Complete */
++		unsigned ioc:1;
++		/** End of List */
++		unsigned eol:1;
++		unsigned reserved27:1;
++		/** Rx/Tx Status */
++		unsigned sts:2;
++#define DMA_DESC_STS_PKTERR	1
++		unsigned reserved30:1;
++		/** Active Bit */
++		unsigned a:1;
++	} b;
++	/* for isochronous */
++	struct {
++		/** Number of bytes */
++		unsigned n_bytes:12;
++		unsigned reserved12_24:13;
++		/** Interrupt On Complete */
++		unsigned ioc:1;
++		unsigned reserved26_27:2;
++		/** Rx/Tx Status */
++		unsigned sts:2;
++		unsigned reserved30:1;
++		/** Active Bit */
++		unsigned a:1;
++	} b_isoc;
++} host_dma_desc_sts_t;
++
++#define	MAX_DMA_DESC_SIZE		131071
++#define MAX_DMA_DESC_NUM_GENERIC	64
++#define MAX_DMA_DESC_NUM_HS_ISOC	256
++#define MAX_FRLIST_EN_NUM		64
++/**
++ * Host-mode DMA Descriptor structure
++ *
++ * The DMA descriptor contains two quadlets:
++ * a status quadlet and a data buffer pointer.
++ */
++typedef struct dwc_otg_host_dma_desc {
++	/** DMA Descriptor status quadlet */
++	host_dma_desc_sts_t status;
++	/** DMA Descriptor data buffer pointer */
++	uint32_t buf;
++} dwc_otg_host_dma_desc_t;
++
++/** OTG Host Interface Structure.
++ *
++ * The OTG Host Interface structure contains information
++ * needed to manage the DWC_otg controller acting in host mode. It
++ * represents the programming view of the host-specific aspects of the
++ * controller.
++ */
++typedef struct dwc_otg_host_if {
++	/** Host Global Registers starting at offset 400h.*/
++	dwc_otg_host_global_regs_t *host_global_regs;
++#define DWC_OTG_HOST_GLOBAL_REG_OFFSET 0x400
++
++	/** Host Port 0 Control and Status Register */
++	volatile uint32_t *hprt0;
++#define DWC_OTG_HOST_PORT_REGS_OFFSET 0x440
++
++	/** Host Channel Specific Registers at offsets 500h-5FCh. */
++	dwc_otg_hc_regs_t *hc_regs[MAX_EPS_CHANNELS];
++#define DWC_OTG_HOST_CHAN_REGS_OFFSET 0x500
++#define DWC_OTG_CHAN_REGS_OFFSET 0x20
++
++	/* Host configuration information */
++	/** Number of Host Channels (range: 1-16) */
++	uint8_t num_host_channels;
++	/** Periodic EPs supported (0: no, 1: yes) */
++	uint8_t perio_eps_supported;
++	/** Periodic Tx FIFO Size (Only 1 host periodic Tx FIFO) */
++	uint16_t perio_tx_fifo_size;
++
++} dwc_otg_host_if_t;
++
++/**
++ * This union represents the bit fields in the Power and Clock Gating Control
++ * Register. Read the register into the <i>d32</i> member then set/clear the
++ * bits using the <i>b</i>it elements.
++ */
++typedef union pcgcctl_data {
++	/** raw register data */
++	uint32_t d32;
++
++	/** register bits */
++	struct {
++		/** Stop Pclk */
++		unsigned stoppclk:1;
++		/** Gate Hclk */
++		unsigned gatehclk:1;
++		/** Power Clamp */
++		unsigned pwrclmp:1;
++		/** Reset Power Down Modules */
++		unsigned rstpdwnmodule:1;
++		/** Reserved */
++		unsigned reserved:1;
++		/** Enable Sleep Clock Gating (Enbl_L1Gating) */
++		unsigned enbl_sleep_gating:1;
++		/** PHY In Sleep (PhySleep) */
++		unsigned phy_in_sleep:1;
++		/** Deep Sleep */
++		unsigned deep_sleep:1;
++		unsigned resetaftsusp:1;
++		unsigned restoremode:1;
++		unsigned enbl_extnd_hiber:1;
++		unsigned extnd_hiber_pwrclmp:1;
++		unsigned extnd_hiber_switch:1;
++		unsigned ess_reg_restored:1;
++		unsigned prt_clk_sel:2;
++		unsigned port_power:1;
++		unsigned max_xcvrselect:2;
++		unsigned max_termsel:1;
++		unsigned mac_dev_addr:7;
++		unsigned p2hd_dev_enum_spd:2;
++		unsigned p2hd_prt_spd:2;
++		unsigned if_dev_mode:1;
++	} b;
++} pcgcctl_data_t;
++
++/**
++ * This union represents the bit fields in the Global Data FIFO Software
++ * Configuration Register. Read the register into the <i>d32</i> member then
++ * set/clear the bits using the <i>b</i>it elements.
++ */
++typedef union gdfifocfg_data {
++	/* raw register data */
++	uint32_t d32;
++	/** register bits */
++	struct {
++		/** OTG Data FIFO depth */
++		unsigned gdfifocfg:16;
++		/** Start address of EP info controller */
++		unsigned epinfobase:16;
++	} b;
++} gdfifocfg_data_t;
++
++/**
++ * This union represents the bit fields in the Global Power Down
++ * Register. Read the register into the <i>d32</i> member then set/clear the
++ * bits using the <i>b</i>it elements.
++ */
++typedef union gpwrdn_data {
++	/* raw register data */
++	uint32_t d32;
++
++	/** register bits */
++	struct {
++		/** PMU Interrupt Select */
++		unsigned pmuintsel:1;
++		/** PMU Active */
++		unsigned pmuactv:1;
++		/** Restore */
++		unsigned restore:1;
++		/** Power Down Clamp */
++		unsigned pwrdnclmp:1;
++		/** Power Down Reset */
++		unsigned pwrdnrstn:1;
++		/** Power Down Switch */
++		unsigned pwrdnswtch:1;
++		/** Disable VBUS */
++		unsigned dis_vbus:1;
++		/** Line State Change */
++		unsigned lnstschng:1;
++		/** Line state change mask */
++		unsigned lnstchng_msk:1;
++		/** Reset Detected */
++		unsigned rst_det:1;
++		/** Reset Detect mask */
++		unsigned rst_det_msk:1;
++		/** Disconnect Detected */
++		unsigned disconn_det:1;
++		/** Disconnect Detect mask */
++		unsigned disconn_det_msk:1;
++		/** Connect Detected */
++		unsigned connect_det:1;
++		/** Connect Detected Mask */
++		unsigned connect_det_msk:1;
++		/** SRP Detected */
++		unsigned srp_det:1;
++		/** SRP Detect mask */
++		unsigned srp_det_msk:1;
++		/** Status Change Interrupt */
++		unsigned sts_chngint:1;
++		/** Status Change Interrupt Mask */
++		unsigned sts_chngint_msk:1;
++		/** Line State */
++		unsigned linestate:2;
++		/** Indicates current mode (status of IDDIG signal) */
++		unsigned idsts:1;
++		/** B Session Valid signal status */
++		unsigned bsessvld:1;
++		/** ADP Event Detected */
++		unsigned adp_int:1;
++		/** Multi Valued ID pin */
++		unsigned mult_val_id_bc:5;
++		/** Reserved 29_31 */
++		unsigned reserved29_31:3;
++	} b;
++} gpwrdn_data_t;
++
++#endif
+--- /dev/null
++++ b/drivers/usb/host/dwc_otg/test/Makefile
+@@ -0,0 +1,16 @@
++
++PERL=/usr/bin/perl
++PL_TESTS=test_sysfs.pl test_mod_param.pl
++
++.PHONY : test
++test : perl_tests
++
++perl_tests :
++	@echo
++	@echo Running perl tests
++	@for test in $(PL_TESTS); do \
++	  if $(PERL) ./$$test ; then \
++	    echo "=======> $$test, PASSED" ; \
++	  else echo "=======> $$test, FAILED" ; \
++	  fi \
++	done
+--- /dev/null
++++ b/drivers/usb/host/dwc_otg/test/dwc_otg_test.pm
+@@ -0,0 +1,337 @@
++package dwc_otg_test;
++
++use strict;
++use Exporter ();
++
++use vars qw(@ISA @EXPORT
++$sysfsdir $paramdir $errors $params
++);
++
++@ISA = qw(Exporter);
++
++#
++# Globals
++#
++$sysfsdir = "/sys/devices/lm0";
++$paramdir = "/sys/module/dwc_otg";
++$errors = 0;
++
++$params = [
++	   {
++	    NAME => "otg_cap",
++	    DEFAULT => 0,
++	    ENUM => [],
++	    LOW => 0,
++	    HIGH => 2
++	   },
++	   {
++	    NAME => "dma_enable",
++	    DEFAULT => 0,
++	    ENUM => [],
++	    LOW => 0,
++	    HIGH => 1
++	   },
++	   {
++	    NAME => "dma_burst_size",
++	    DEFAULT => 32,
++	    ENUM => [1, 4, 8, 16, 32, 64, 128, 256],
++	    LOW => 1,
++	    HIGH => 256
++	   },
++	   {
++	    NAME => "host_speed",
++	    DEFAULT => 0,
++	    ENUM => [],
++	    LOW => 0,
++	    HIGH => 1
++	   },
++	   {
++	    NAME => "host_support_fs_ls_low_power",
++	    DEFAULT => 0,
++	    ENUM => [],
++	    LOW => 0,
++	    HIGH => 1
++	   },
++	   {
++	    NAME => "host_ls_low_power_phy_clk",
++	    DEFAULT => 0,
++	    ENUM => [],
++	    LOW => 0,
++	    HIGH => 1
++	   },
++	   {
++	    NAME => "dev_speed",
++	    DEFAULT => 0,
++	    ENUM => [],
++	    LOW => 0,
++	    HIGH => 1
++	   },
++	   {
++	    NAME => "enable_dynamic_fifo",
++	    DEFAULT => 1,
++	    ENUM => [],
++	    LOW => 0,
++	    HIGH => 1
++	   },
++	   {
++	    NAME => "data_fifo_size",
++	    DEFAULT => 8192,
++	    ENUM => [],
++	    LOW => 32,
++	    HIGH => 32768
++	   },
++	   {
++	    NAME => "dev_rx_fifo_size",
++	    DEFAULT => 1064,
++	    ENUM => [],
++	    LOW => 16,
++	    HIGH => 32768
++	   },
++	   {
++	    NAME => "dev_nperio_tx_fifo_size",
++	    DEFAULT => 1024,
++	    ENUM => [],
++	    LOW => 16,
++	    HIGH => 32768
++	   },
++	   {
++	    NAME => "dev_perio_tx_fifo_size_1",
++	    DEFAULT => 256,
++	    ENUM => [],
++	    LOW => 4,
++	    HIGH => 768
++	   },
++	   {
++	    NAME => "dev_perio_tx_fifo_size_2",
++	    DEFAULT => 256,
++	    ENUM => [],
++	    LOW => 4,
++	    HIGH => 768
++	   },
++	   {
++	    NAME => "dev_perio_tx_fifo_size_3",
++	    DEFAULT => 256,
++	    ENUM => [],
++	    LOW => 4,
++	    HIGH => 768
++	   },
++	   {
++	    NAME => "dev_perio_tx_fifo_size_4",
++	    DEFAULT => 256,
++	    ENUM => [],
++	    LOW => 4,
++	    HIGH => 768
++	   },
++	   {
++	    NAME => "dev_perio_tx_fifo_size_5",
++	    DEFAULT => 256,
++	    ENUM => [],
++	    LOW => 4,
++	    HIGH => 768
++	   },
++	   {
++	    NAME => "dev_perio_tx_fifo_size_6",
++	    DEFAULT => 256,
++	    ENUM => [],
++	    LOW => 4,
++	    HIGH => 768
++	   },
++	   {
++	    NAME => "dev_perio_tx_fifo_size_7",
++	    DEFAULT => 256,
++	    ENUM => [],
++	    LOW => 4,
++	    HIGH => 768
++	   },
++	   {
++	    NAME => "dev_perio_tx_fifo_size_8",
++	    DEFAULT => 256,
++	    ENUM => [],
++	    LOW => 4,
++	    HIGH => 768
++	   },
++	   {
++	    NAME => "dev_perio_tx_fifo_size_9",
++	    DEFAULT => 256,
++	    ENUM => [],
++	    LOW => 4,
++	    HIGH => 768
++	   },
++	   {
++	    NAME => "dev_perio_tx_fifo_size_10",
++	    DEFAULT => 256,
++	    ENUM => [],
++	    LOW => 4,
++	    HIGH => 768
++	   },
++	   {
++	    NAME => "dev_perio_tx_fifo_size_11",
++	    DEFAULT => 256,
++	    ENUM => [],
++	    LOW => 4,
++	    HIGH => 768
++	   },
++	   {
++	    NAME => "dev_perio_tx_fifo_size_12",
++	    DEFAULT => 256,
++	    ENUM => [],
++	    LOW => 4,
++	    HIGH => 768
++	   },
++	   {
++	    NAME => "dev_perio_tx_fifo_size_13",
++	    DEFAULT => 256,
++	    ENUM => [],
++	    LOW => 4,
++	    HIGH => 768
++	   },
++	   {
++	    NAME => "dev_perio_tx_fifo_size_14",
++	    DEFAULT => 256,
++	    ENUM => [],
++	    LOW => 4,
++	    HIGH => 768
++	   },
++	   {
++	    NAME => "dev_perio_tx_fifo_size_15",
++	    DEFAULT => 256,
++	    ENUM => [],
++	    LOW => 4,
++	    HIGH => 768
++	   },
++	   {
++	    NAME => "host_rx_fifo_size",
++	    DEFAULT => 1024,
++	    ENUM => [],
++	    LOW => 16,
++	    HIGH => 32768
++	   },
++	   {
++	    NAME => "host_nperio_tx_fifo_size",
++	    DEFAULT => 1024,
++	    ENUM => [],
++	    LOW => 16,
++	    HIGH => 32768
++	   },
++	   {
++	    NAME => "host_perio_tx_fifo_size",
++	    DEFAULT => 1024,
++	    ENUM => [],
++	    LOW => 16,
++	    HIGH => 32768
++	   },
++	   {
++	    NAME => "max_transfer_size",
++	    DEFAULT => 65535,
++	    ENUM => [],
++	    LOW => 2047,
++	    HIGH => 65535
++	   },
++	   {
++	    NAME => "max_packet_count",
++	    DEFAULT => 511,
++	    ENUM => [],
++	    LOW => 15,
++	    HIGH => 511
++	   },
++	   {
++	    NAME => "host_channels",
++	    DEFAULT => 12,
++	    ENUM => [],
++	    LOW => 1,
++	    HIGH => 16
++	   },
++	   {
++	    NAME => "dev_endpoints",
++	    DEFAULT => 6,
++	    ENUM => [],
++	    LOW => 1,
++	    HIGH => 15
++	   },
++	   {
++	    NAME => "phy_type",
++	    DEFAULT => 1,
++	    ENUM => [],
++	    LOW => 0,
++	    HIGH => 2
++	   },
++	   {
++	    NAME => "phy_utmi_width",
++	    DEFAULT => 16,
++	    ENUM => [8, 16],
++	    LOW => 8,
++	    HIGH => 16
++	   },
++	   {
++	    NAME => "phy_ulpi_ddr",
++	    DEFAULT => 0,
++	    ENUM => [],
++	    LOW => 0,
++	    HIGH => 1
++	   },
++	  ];
++
++
++#
++#
++sub check_arch {
++  $_ = `uname -m`;
++  chomp;
++  unless (m/armv4tl/) {
++    warn "# \n# Can't execute on $_.  Run on integrator platform.\n# \n";
++    return 0;
++  }
++  return 1;
++}
++
++#
++#
++sub load_module {
++  my $params = shift;
++  print "\nRemoving Module\n";
++  system "rmmod dwc_otg";
++  print "Loading Module\n";
++  if ($params ne "") {
++    print "Module Parameters: $params\n";
++  }
++  if (system("modprobe dwc_otg $params")) {
++    warn "Unable to load module\n";
++    return 0;
++  }
++  return 1;
++}
++
++#
++#
++sub test_status {
++  my $arg = shift;
++
++  print "\n";
++
++  if (defined $arg) {
++    warn "WARNING: $arg\n";
++  }
++
++  if ($errors > 0) {
++    warn "TEST FAILED with $errors errors\n";
++    return 0;
++  } else {
++    print "TEST PASSED\n";
++    return 0 if (defined $arg);
++  }
++  return 1;
++}
++
++#
++#
++@EXPORT = qw(
++$sysfsdir
++$paramdir
++$params
++$errors
++check_arch
++load_module
++test_status
++);
++
++1;
+--- /dev/null
++++ b/drivers/usb/host/dwc_otg/test/test_mod_param.pl
+@@ -0,0 +1,133 @@
++#!/usr/bin/perl -w
++#
++# Run this program on the integrator.
++#
++# - Tests module parameter default values.
++# - Tests setting of valid module parameter values via modprobe.
++# - Tests invalid module parameter values.
++# -----------------------------------------------------------------------------
++use strict;
++use dwc_otg_test;
++
++check_arch() or die;
++
++#
++#
++sub test {
++  my ($param,$expected) = @_;
++  my $value = get($param);
++
++  if ($value == $expected) {
++    print "$param = $value, okay\n";
++  }
++
++  else {
++    warn "ERROR: value of $param != $expected, $value\n";
++    $errors ++;
++  }
++}
++
++#
++#
++sub get {
++  my $param = shift;
++  my $tmp = `cat $paramdir/$param`;
++  chomp $tmp;
++  return $tmp;
++}
++
++#
++#
++sub test_main {
++
++  print "\nTesting Module Parameters\n";
++
++  load_module("") or die;
++
++  # Test initial values
++  print "\nTesting Default Values\n";
++  foreach (@{$params}) {
++    test ($_->{NAME}, $_->{DEFAULT});
++  }
++
++  # Test low value
++  print "\nTesting Low Value\n";
++  my $cmd_params = "";
++  foreach (@{$params}) {
++    $cmd_params = $cmd_params . "$_->{NAME}=$_->{LOW} ";
++  }
++  load_module($cmd_params) or die;
++
++  foreach (@{$params}) {
++    test ($_->{NAME}, $_->{LOW});
++  }
++
++  # Test high value
++  print "\nTesting High Value\n";
++  $cmd_params = "";
++  foreach (@{$params}) {
++    $cmd_params = $cmd_params . "$_->{NAME}=$_->{HIGH} ";
++  }
++  load_module($cmd_params) or die;
++
++  foreach (@{$params}) {
++    test ($_->{NAME}, $_->{HIGH});
++  }
++
++  # Test Enum
++  print "\nTesting Enumerated\n";
++  foreach (@{$params}) {
++    if (defined $_->{ENUM}) {
++      my $value;
++      foreach $value (@{$_->{ENUM}}) {
++	$cmd_params = "$_->{NAME}=$value";
++	load_module($cmd_params) or die;
++	test ($_->{NAME}, $value);
++      }
++    }
++  }
++
++  # Test Invalid Values
++  print "\nTesting Invalid Values\n";
++  $cmd_params = "";
++  foreach (@{$params}) {
++    $cmd_params = $cmd_params . sprintf "$_->{NAME}=%d ", $_->{LOW}-1;
++  }
++  load_module($cmd_params) or die;
++
++  foreach (@{$params}) {
++    test ($_->{NAME}, $_->{DEFAULT});
++  }
++
++  $cmd_params = "";
++  foreach (@{$params}) {
++    $cmd_params = $cmd_params . sprintf "$_->{NAME}=%d ", $_->{HIGH}+1;
++  }
++  load_module($cmd_params) or die;
++
++  foreach (@{$params}) {
++    test ($_->{NAME}, $_->{DEFAULT});
++  }
++
++  print "\nTesting Enumerated\n";
++  foreach (@{$params}) {
++    if (defined $_->{ENUM}) {
++      my $value;
++      foreach $value (@{$_->{ENUM}}) {
++	$value = $value + 1;
++	$cmd_params = "$_->{NAME}=$value";
++	load_module($cmd_params) or die;
++	test ($_->{NAME}, $_->{DEFAULT});
++	$value = $value - 2;
++	$cmd_params = "$_->{NAME}=$value";
++	load_module($cmd_params) or die;
++	test ($_->{NAME}, $_->{DEFAULT});
++      }
++    }
++  }
++
++  test_status() or die;
++}
++
++test_main();
++0;
+--- /dev/null
++++ b/drivers/usb/host/dwc_otg/test/test_sysfs.pl
+@@ -0,0 +1,193 @@
++#!/usr/bin/perl -w
++#
++# Run this program on the integrator
++# - Tests select sysfs attributes.
++# - Todo ... test more attributes, hnp/srp, buspower/bussuspend, etc.
++# -----------------------------------------------------------------------------
++use strict;
++use dwc_otg_test;
++
++check_arch() or die;
++
++#
++#
++sub test {
++  my ($attr,$expected) = @_;
++  my $string = get($attr);
++
++  if ($string eq $expected) {
++    printf("$attr = $string, okay\n");
++  }
++  else {
++    warn "ERROR: value of $attr != $expected, $string\n";
++    $errors ++;
++  }
++}
++
++#
++#
++sub set {
++  my ($reg, $value) = @_;
++  system "echo $value > $sysfsdir/$reg";
++}
++
++#
++#
++sub get {
++  my $attr = shift;
++  my $string = `cat $sysfsdir/$attr`;
++  chomp $string;
++  if ($string =~ m/\s\=\s/) {
++    my $tmp;
++    ($tmp, $string) = split /\s=\s/, $string;
++  }
++  return $string;
++}
++
++#
++#
++sub test_main {
++  print("\nTesting Sysfs Attributes\n");
++
++  load_module("") or die;
++
++  # Test initial values of regoffset/regvalue/guid/gsnpsid
++  print("\nTesting Default Values\n");
++
++  test("regoffset", "0xffffffff");
++  test("regvalue", "invalid offset");
++  test("guid", "0x12345678");	# this will fail if it has been changed
++  test("gsnpsid", "0x4f54200a");
++
++  # Test operation of regoffset/regvalue
++  print("\nTesting regoffset\n");
++  set('regoffset', '5a5a5a5a');
++  test("regoffset", "0xffffffff");
++
++  set('regoffset', '0');
++  test("regoffset", "0x00000000");
++
++  set('regoffset', '40000');
++  test("regoffset", "0x00000000");
++
++  set('regoffset', '3ffff');
++  test("regoffset", "0x0003ffff");
++
++  set('regoffset', '1');
++  test("regoffset", "0x00000001");
++
++  print("\nTesting regvalue\n");
++  set('regoffset', '3c');
++  test("regvalue", "0x12345678");
++  set('regvalue', '5a5a5a5a');
++  test("regvalue", "0x5a5a5a5a");
++  set('regvalue','a5a5a5a5');
++  test("regvalue", "0xa5a5a5a5");
++  set('guid','12345678');
++
++  # Test HNP Capable
++  print("\nTesting HNP Capable bit\n");
++  set('hnpcapable', '1');
++  test("hnpcapable", "0x1");
++  set('hnpcapable','0');
++  test("hnpcapable", "0x0");
++
++  set('regoffset','0c');
++
++  my $old = get('gusbcfg');
++  print("setting hnpcapable\n");
++  set('hnpcapable', '1');
++  test("hnpcapable", "0x1");
++  test('gusbcfg', sprintf "0x%08x", (oct ($old) | (1<<9)));
++  test('regvalue', sprintf "0x%08x", (oct ($old) | (1<<9)));
++
++  $old = get('gusbcfg');
++  print("clearing hnpcapable\n");
++  set('hnpcapable', '0');
++  test("hnpcapable", "0x0");
++  test ('gusbcfg', sprintf "0x%08x", oct ($old) & (~(1<<9)));
++  test ('regvalue', sprintf "0x%08x", oct ($old) & (~(1<<9)));
++
++  # Test SRP Capable
++  print("\nTesting SRP Capable bit\n");
++  set('srpcapable', '1');
++  test("srpcapable", "0x1");
++  set('srpcapable','0');
++  test("srpcapable", "0x0");
++
++  set('regoffset','0c');
++
++  $old = get('gusbcfg');
++  print("setting srpcapable\n");
++  set('srpcapable', '1');
++  test("srpcapable", "0x1");
++  test('gusbcfg', sprintf "0x%08x", (oct ($old) | (1<<8)));
++  test('regvalue', sprintf "0x%08x", (oct ($old) | (1<<8)));
++
++  $old = get('gusbcfg');
++  print("clearing srpcapable\n");
++  set('srpcapable', '0');
++  test("srpcapable", "0x0");
++  test('gusbcfg', sprintf "0x%08x", oct ($old) & (~(1<<8)));
++  test('regvalue', sprintf "0x%08x", oct ($old) & (~(1<<8)));
++
++  # Test GGPIO
++  print("\nTesting GGPIO\n");
++  set('ggpio','5a5a5a5a');
++  test('ggpio','0x5a5a0000');
++  set('ggpio','a5a5a5a5');
++  test('ggpio','0xa5a50000');
++  set('ggpio','11110000');
++  test('ggpio','0x11110000');
++  set('ggpio','00001111');
++  test('ggpio','0x00000000');
++
++  # Test DEVSPEED
++  print("\nTesting DEVSPEED\n");
++  set('regoffset','800');
++  $old = get('regvalue');
++  set('devspeed','0');
++  test('devspeed','0x0');
++  test('regvalue',sprintf("0x%08x", oct($old) & ~(0x3)));
++  set('devspeed','1');
++  test('devspeed','0x1');
++  test('regvalue',sprintf("0x%08x", oct($old) & ~(0x3) | 1));
++  set('devspeed','2');
++  test('devspeed','0x2');
++  test('regvalue',sprintf("0x%08x", oct($old) & ~(0x3) | 2));
++  set('devspeed','3');
++  test('devspeed','0x3');
++  test('regvalue',sprintf("0x%08x", oct($old) & ~(0x3) | 3));
++  set('devspeed','4');
++  test('devspeed','0x0');
++  test('regvalue',sprintf("0x%08x", oct($old) & ~(0x3)));
++  set('devspeed','5');
++  test('devspeed','0x1');
++  test('regvalue',sprintf("0x%08x", oct($old) & ~(0x3) | 1));
++
++
++  #  mode	Returns the current mode: 0 for device mode, 1 for host mode	Read
++  #  hnp	Initiate the Host Negotiation Protocol.  Read returns the status.	Read/Write
++  #  srp	Initiate the Session Request Protocol.  Read returns the status.	Read/Write
++  #  buspower	Get or Set the Power State of the bus (0 - Off or 1 - On) 	Read/Write
++  #  bussuspend	Suspend the USB bus.	Read/Write
++  #  busconnected	Get the connection status of the bus 	Read
++
++  #  gotgctl	Get or set the Core Control Status Register.	Read/Write
++  ##  gusbcfg	Get or set the Core USB Configuration Register	Read/Write
++  #  grxfsiz	Get or set the Receive FIFO Size Register	Read/Write
++  #  gnptxfsiz	Get or set the non-periodic Transmit Size Register	Read/Write
++  #  gpvndctl	Get or set the PHY Vendor Control Register	Read/Write
++  ##  ggpio	Get the value in the lower 16-bits of the General Purpose IO Register or Set the upper 16 bits.	Read/Write
++  ##  guid	Get or set the value of the User ID Register	Read/Write
++  ##  gsnpsid	Get the value of the Synopsys ID Register	Read
++  ##  devspeed	Get or set the device speed setting in the DCFG register	Read/Write
++  #  enumspeed	Gets the device enumeration Speed.	Read
++  #  hptxfsiz	Get the value of the Host Periodic Transmit FIFO	Read
++  #  hprt0	Get or Set the value in the Host Port Control and Status Register	Read/Write
++
++  test_status("TEST NYI") or die;
++}
++
++test_main();
++0;
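The register unions in the dwc_otg header above all follow the same access pattern spelled out in their comments: read the register into d32, update individual fields through b, then write d32 back. A minimal sketch of that pattern, shown for HCFG (illustrative only, not part of the patch; it assumes an already-initialised dwc_otg_host_if_t named host_if, and the driver itself goes through its own register accessor macros rather than direct volatile stores):

	static void example_select_48mhz_phy_clock(dwc_otg_host_if_t *host_if)
	{
		hcfg_data_t hcfg;

		hcfg.d32 = host_if->host_global_regs->hcfg;	/* read HCFG */
		hcfg.b.fslspclksel = DWC_HCFG_48_MHZ;		/* modify one field */
		host_if->host_global_regs->hcfg = hcfg.d32;	/* write it back */
	}

The same read-modify-write sequence applies to hprt0, hcchar, hcintmsk and the other d32/b unions defined above.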
diff --git a/target/linux/brcm2708/patches-4.4/0030-bcm2708-framebuffer-driver.patch b/target/linux/brcm2708/patches-4.4/0030-bcm2708-framebuffer-driver.patch
new file mode 100644
index 0000000..cfebca2
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0030-bcm2708-framebuffer-driver.patch
@@ -0,0 +1,3455 @@
+From fdf40ab8630e6a5a370b8d938957709f6f8f8324 Mon Sep 17 00:00:00 2001
+From: popcornmix <popcornmix at gmail.com>
+Date: Wed, 17 Jun 2015 17:06:34 +0100
+Subject: [PATCH 030/127] bcm2708 framebuffer driver
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+Signed-off-by: popcornmix <popcornmix at gmail.com>
+
+bcm2708_fb : Implement blanking support using the mailbox property interface
+
+bcm2708_fb: Add pan and vsync controls
+
+bcm2708_fb: DMA acceleration for fb_copyarea
+
+Based on http://www.raspberrypi.org/phpBB3/viewtopic.php?p=62425#p62425
+Also used Simon's dmaer_master module as a reference for tweaking DMA
+settings for better performance.
+
+For now busylooping only. IRQ support might be added later.
+With non-overclocked Raspberry Pi, the performance is ~360 MB/s
+for simple copy or ~260 MB/s for two-pass copy (used when dragging
+windows to the right).
+
+In the case of using DMA channel 0, the performance improves
+to ~440 MB/s.
+
+For comparison, VFP optimized CPU copy can only do ~114 MB/s in
+the same conditions (hindered by reading uncached source buffer).
+
+Signed-off-by: Siarhei Siamashka <siarhei.siamashka at gmail.com>
+
+bcm2708_fb: report number of dma copies
+
+Add a counter (exported via debugfs) reporting the
+number of dma copies that the framebuffer driver
+has done, in order to help evaluate different
+optimization strategies.
+
+Signed-off-by: Luke Diamand <luked at broadcom.com>
+
+bcm2708_fb: use IRQ for DMA copies
+
+The copyarea ioctl() uses DMA to speed things along. This
+was busy-waiting for completion. This change supports using
+an interrupt instead for larger transfers. For small
+transfers, busy-waiting is still likely to be faster.
+
+Signed-off-by: Luke Diamand <luke at diamand.org>
+
+bcm2708: Make ioctl logging quieter
+
+video: fbdev: bcm2708_fb: Don't panic on error
+
+No need to panic the kernel if the video driver fails.
+Just print a message and return an error.
+
+Signed-off-by: Noralf Trønnes <noralf at tronnes.org>
+
+fbdev: bcm2708_fb: Add ARCH_BCM2835 support
+
+Add Device Tree support.
+Pass the device to dma_alloc_coherent() in order to get the
+correct bus address on ARCH_BCM2835.
+Use the new DMA legacy API header file.
+Including <mach/platform.h> is not necessary.
+
+Signed-off-by: Noralf Trønnes <noralf at tronnes.org>
+
+BCM270x_DT: Add bcm2708-fb device
+
+Add bcm2708-fb to Device Tree and don't add the
+platform device when booting in DT mode.
+
+Signed-off-by: Noralf Trønnes <noralf at tronnes.org>
+---
+ drivers/video/fbdev/Kconfig               |   14 +
+ drivers/video/fbdev/Makefile              |    1 +
+ drivers/video/fbdev/bcm2708_fb.c          |  847 ++++++++++
+ drivers/video/logo/logo_linux_clut224.ppm | 2483 ++++++++++-------------------
+ 4 files changed, 1743 insertions(+), 1602 deletions(-)
+ create mode 100644 drivers/video/fbdev/bcm2708_fb.c
+
+--- a/drivers/video/fbdev/Kconfig
++++ b/drivers/video/fbdev/Kconfig
+@@ -224,6 +224,20 @@ config FB_TILEBLITTING
+ comment "Frame buffer hardware drivers"
+ 	depends on FB
+ 
++config FB_BCM2708
++	tristate "BCM2708 framebuffer support"
++	depends on FB && RASPBERRYPI_FIRMWARE
++	select FB_CFB_FILLRECT
++	select FB_CFB_COPYAREA
++	select FB_CFB_IMAGEBLIT
++	help
++	  This framebuffer device driver is for the BCM2708 framebuffer.
++
++	  If you want to compile this as a module (=code which can be
++	  inserted into and removed from the running kernel), say M
++	  here and read <file:Documentation/kbuild/modules.txt>.  The module
++	  will be called bcm2708_fb.
++
+ config FB_GRVGA
+ 	tristate "Aeroflex Gaisler framebuffer support"
+ 	depends on FB && SPARC
+--- a/drivers/video/fbdev/Makefile
++++ b/drivers/video/fbdev/Makefile
+@@ -12,6 +12,7 @@ obj-$(CONFIG_FB_MACMODES)      += macmod
+ obj-$(CONFIG_FB_WMT_GE_ROPS)   += wmt_ge_rops.o
+ 
+ # Hardware specific drivers go first
++obj-$(CONFIG_FB_BCM2708)	  += bcm2708_fb.o
+ obj-$(CONFIG_FB_AMIGA)            += amifb.o c2p_planar.o
+ obj-$(CONFIG_FB_ARC)              += arcfb.o
+ obj-$(CONFIG_FB_CLPS711X)	  += clps711x-fb.o
+--- /dev/null
++++ b/drivers/video/fbdev/bcm2708_fb.c
+@@ -0,0 +1,847 @@
++/*
++ *  linux/drivers/video/bcm2708_fb.c
++ *
++ * Copyright (C) 2010 Broadcom
++ *
++ * This file is subject to the terms and conditions of the GNU General Public
++ * License.  See the file COPYING in the main directory of this archive
++ * for more details.
++ *
++ * Broadcom simple framebuffer driver
++ *
++ * This file is derived from cirrusfb.c
++ * Copyright 1999-2001 Jeff Garzik <jgarzik at pobox.com>
++ *
++ */
++#include <linux/module.h>
++#include <linux/kernel.h>
++#include <linux/errno.h>
++#include <linux/string.h>
++#include <linux/slab.h>
++#include <linux/mm.h>
++#include <linux/fb.h>
++#include <linux/init.h>
++#include <linux/interrupt.h>
++#include <linux/ioport.h>
++#include <linux/list.h>
++#include <linux/platform_data/dma-bcm2708.h>
++#include <linux/platform_device.h>
++#include <linux/clk.h>
++#include <linux/printk.h>
++#include <linux/console.h>
++#include <linux/debugfs.h>
++#include <asm/sizes.h>
++#include <linux/io.h>
++#include <linux/dma-mapping.h>
++#include <soc/bcm2835/raspberrypi-firmware.h>
++
++//#define BCM2708_FB_DEBUG
++#define MODULE_NAME "bcm2708_fb"
++
++#ifdef BCM2708_FB_DEBUG
++#define print_debug(fmt,...) pr_debug("%s:%s:%d: "fmt, MODULE_NAME, __func__, __LINE__, ##__VA_ARGS__)
++#else
++#define print_debug(fmt,...)
++#endif
++
++/* This is limited to 16 characters when displayed by X startup */
++static const char *bcm2708_name = "BCM2708 FB";
++
++#define DRIVER_NAME "bcm2708_fb"
++
++static int fbwidth = 800;  /* module parameter */
++static int fbheight = 480; /* module parameter */
++static int fbdepth = 16;   /* module parameter */
++static int fbswap = 0;     /* module parameter */
++
++static u32 dma_busy_wait_threshold = 1<<15;
++module_param(dma_busy_wait_threshold, int, 0644);
++MODULE_PARM_DESC(dma_busy_wait_threshold, "Busy-wait for DMA completion below this area");
++
++struct fb_alloc_tags {
++	struct rpi_firmware_property_tag_header tag1;
++	u32 xres, yres;
++	struct rpi_firmware_property_tag_header tag2;
++	u32 xres_virtual, yres_virtual;
++	struct rpi_firmware_property_tag_header tag3;
++	u32 bpp;
++	struct rpi_firmware_property_tag_header tag4;
++	u32 xoffset, yoffset;
++	struct rpi_firmware_property_tag_header tag5;
++	u32 base, screen_size;
++	struct rpi_firmware_property_tag_header tag6;
++	u32 pitch;
++};
++
++struct bcm2708_fb_stats {
++	struct debugfs_regset32 regset;
++	u32 dma_copies;
++	u32 dma_irqs;
++};
++
++struct bcm2708_fb {
++	struct fb_info fb;
++	struct platform_device *dev;
++	struct rpi_firmware *fw;
++	u32 cmap[16];
++	u32 gpu_cmap[256];
++	int dma_chan;
++	int dma_irq;
++	void __iomem *dma_chan_base;
++	void *cb_base;		/* DMA control blocks */
++	dma_addr_t cb_handle;
++	struct dentry *debugfs_dir;
++	wait_queue_head_t dma_waitq;
++	struct bcm2708_fb_stats stats;
++	unsigned long fb_bus_address;
++};
++
++#define to_bcm2708(info)	container_of(info, struct bcm2708_fb, fb)
++
++static void bcm2708_fb_debugfs_deinit(struct bcm2708_fb *fb)
++{
++	debugfs_remove_recursive(fb->debugfs_dir);
++	fb->debugfs_dir = NULL;
++}
++
++static int bcm2708_fb_debugfs_init(struct bcm2708_fb *fb)
++{
++	static struct debugfs_reg32 stats_registers[] = {
++		{
++			"dma_copies",
++			offsetof(struct bcm2708_fb_stats, dma_copies)
++		},
++		{
++			"dma_irqs",
++			offsetof(struct bcm2708_fb_stats, dma_irqs)
++		},
++	};
++
++	fb->debugfs_dir = debugfs_create_dir(DRIVER_NAME, NULL);
++	if (!fb->debugfs_dir) {
++		pr_warn("%s: could not create debugfs entry\n",
++			__func__);
++		return -EFAULT;
++	}
++
++	fb->stats.regset.regs = stats_registers;
++	fb->stats.regset.nregs = ARRAY_SIZE(stats_registers);
++	fb->stats.regset.base = &fb->stats;
++
++	if (!debugfs_create_regset32(
++		"stats", 0444, fb->debugfs_dir, &fb->stats.regset)) {
++		pr_warn("%s: could not create statistics registers\n",
++			__func__);
++		goto fail;
++	}
++	return 0;
++
++fail:
++	bcm2708_fb_debugfs_deinit(fb);
++	return -EFAULT;
++}
++
++static int bcm2708_fb_set_bitfields(struct fb_var_screeninfo *var)
++{
++	int ret = 0;
++
++	memset(&var->transp, 0, sizeof(var->transp));
++
++	var->red.msb_right = 0;
++	var->green.msb_right = 0;
++	var->blue.msb_right = 0;
++
++	switch (var->bits_per_pixel) {
++	case 1:
++	case 2:
++	case 4:
++	case 8:
++		var->red.length = var->bits_per_pixel;
++		var->red.offset = 0;
++		var->green.length = var->bits_per_pixel;
++		var->green.offset = 0;
++		var->blue.length = var->bits_per_pixel;
++		var->blue.offset = 0;
++		break;
++	case 16:
++		var->red.length = 5;
++		var->blue.length = 5;
++		/*
++		 * Green length can be 5 or 6 depending on whether
++		 * we're operating in RGB555 or RGB565 mode.
++		 */
++		if (var->green.length != 5 && var->green.length != 6)
++			var->green.length = 6;
++		break;
++	case 24:
++		var->red.length = 8;
++		var->blue.length = 8;
++		var->green.length = 8;
++		break;
++	case 32:
++		var->red.length = 8;
++		var->green.length = 8;
++		var->blue.length = 8;
++		var->transp.length = 8;
++		break;
++	default:
++		ret = -EINVAL;
++		break;
++	}
++
++	/*
++	 * >= 16bpp displays have separate colour component bitfields
++	 * encoded in the pixel data.  Calculate their position from
++	 * the bitfield length defined above.
++	 */
++	if (ret == 0 && var->bits_per_pixel >= 24 && fbswap) {
++		var->blue.offset = 0;
++		var->green.offset = var->blue.offset + var->blue.length;
++		var->red.offset = var->green.offset + var->green.length;
++		var->transp.offset = var->red.offset + var->red.length;
++	} else if (ret == 0 && var->bits_per_pixel >= 24) {
++		var->red.offset = 0;
++		var->green.offset = var->red.offset + var->red.length;
++		var->blue.offset = var->green.offset + var->green.length;
++		var->transp.offset = var->blue.offset + var->blue.length;
++	} else if (ret == 0 && var->bits_per_pixel >= 16) {
++		var->blue.offset = 0;
++		var->green.offset = var->blue.offset + var->blue.length;
++		var->red.offset = var->green.offset + var->green.length;
++		var->transp.offset = var->red.offset + var->red.length;
++	}
++
++	return ret;
++}
++
++static int bcm2708_fb_check_var(struct fb_var_screeninfo *var,
++				struct fb_info *info)
++{
++	/* info input, var output */
++	int yres;
++
++	/* info input, var output */
++	print_debug("bcm2708_fb_check_var info(%p) %dx%d (%dx%d), %d, %d\n", info,
++		info->var.xres, info->var.yres, info->var.xres_virtual,
++		info->var.yres_virtual, (int)info->screen_size,
++		info->var.bits_per_pixel);
++	print_debug("bcm2708_fb_check_var var(%p) %dx%d (%dx%d), %d\n", var,
++		var->xres, var->yres, var->xres_virtual, var->yres_virtual,
++		var->bits_per_pixel);
++
++	if (!var->bits_per_pixel)
++		var->bits_per_pixel = 16;
++
++	if (bcm2708_fb_set_bitfields(var) != 0) {
++		pr_err("bcm2708_fb_check_var: invalid bits_per_pixel %d\n",
++		     var->bits_per_pixel);
++		return -EINVAL;
++	}
++
++
++	if (var->xres_virtual < var->xres)
++		var->xres_virtual = var->xres;
++	/* use highest possible virtual resolution */
++	if (var->yres_virtual == -1) {
++		var->yres_virtual = 480;
++
++		pr_err
++		    ("bcm2708_fb_check_var: virtual resolution set to maximum of %dx%d\n",
++		     var->xres_virtual, var->yres_virtual);
++	}
++	if (var->yres_virtual < var->yres)
++		var->yres_virtual = var->yres;
++
++	if (var->xoffset < 0)
++		var->xoffset = 0;
++	if (var->yoffset < 0)
++		var->yoffset = 0;
++
++	/* truncate xoffset and yoffset to maximum if too high */
++	if (var->xoffset > var->xres_virtual - var->xres)
++		var->xoffset = var->xres_virtual - var->xres - 1;
++	if (var->yoffset > var->yres_virtual - var->yres)
++		var->yoffset = var->yres_virtual - var->yres - 1;
++
++	return 0;
++}
++
++static int bcm2708_fb_set_par(struct fb_info *info)
++{
++	struct bcm2708_fb *fb = to_bcm2708(info);
++	struct fb_alloc_tags fbinfo = {
++		.tag1 = { RPI_FIRMWARE_FRAMEBUFFER_SET_PHYSICAL_WIDTH_HEIGHT,
++			  8, 0, },
++			.xres = info->var.xres,
++			.yres = info->var.yres,
++		.tag2 = { RPI_FIRMWARE_FRAMEBUFFER_SET_VIRTUAL_WIDTH_HEIGHT,
++			  8, 0, },
++			.xres_virtual = info->var.xres_virtual,
++			.yres_virtual = info->var.yres_virtual,
++		.tag3 = { RPI_FIRMWARE_FRAMEBUFFER_SET_DEPTH, 4, 0 },
++			.bpp = info->var.bits_per_pixel,
++		.tag4 = { RPI_FIRMWARE_FRAMEBUFFER_SET_VIRTUAL_OFFSET, 8, 0 },
++			.xoffset = info->var.xoffset,
++			.yoffset = info->var.yoffset,
++		.tag5 = { RPI_FIRMWARE_FRAMEBUFFER_ALLOCATE, 8, 0 },
++			.base = 0,
++			.screen_size = 0,
++		.tag6 = { RPI_FIRMWARE_FRAMEBUFFER_GET_PITCH, 4, 0 },
++			.pitch = 0,
++	};
++	int ret;
++
++	print_debug("bcm2708_fb_set_par info(%p) %dx%d (%dx%d), %d, %d\n", info,
++		info->var.xres, info->var.yres, info->var.xres_virtual,
++		info->var.yres_virtual, (int)info->screen_size,
++		info->var.bits_per_pixel);
++
++	ret = rpi_firmware_property_list(fb->fw, &fbinfo, sizeof(fbinfo));
++	if (ret) {
++		dev_err(info->device,
++			"Failed to allocate GPU framebuffer (%d)\n", ret);
++		return ret;
++	}
++
++	if (info->var.bits_per_pixel <= 8)
++		fb->fb.fix.visual = FB_VISUAL_PSEUDOCOLOR;
++	else
++		fb->fb.fix.visual = FB_VISUAL_TRUECOLOR;
++
++	fb->fb.fix.line_length = fbinfo.pitch;
++	fbinfo.base |= 0x40000000;
++	fb->fb_bus_address = fbinfo.base;
++	fbinfo.base &= ~0xc0000000;
++	fb->fb.fix.smem_start = fbinfo.base;
++	fb->fb.fix.smem_len = fbinfo.pitch * fbinfo.yres_virtual;
++	fb->fb.screen_size = fbinfo.screen_size;
++	if (fb->fb.screen_base)
++		iounmap(fb->fb.screen_base);
++	fb->fb.screen_base = ioremap_wc(fbinfo.base, fb->fb.screen_size);
++	if (!fb->fb.screen_base) {
++		/* the console may currently be locked */
++		console_trylock();
++		console_unlock();
++		dev_err(info->device, "Failed to set screen_base\n");
++		return -ENOMEM;
++	}
++
++	print_debug
++	    ("BCM2708FB: start = %p,%p width=%d, height=%d, bpp=%d, pitch=%d size=%d\n",
++	     (void *)fb->fb.screen_base, (void *)fb->fb_bus_address,
++	     fbinfo.xres, fbinfo.yres, fbinfo.bpp,
++	     fbinfo.pitch, (int)fb->fb.screen_size);
++
++	return 0;
++}
++
++static inline u32 convert_bitfield(int val, struct fb_bitfield *bf)
++{
++	unsigned int mask = (1 << bf->length) - 1;
++
++	return (val >> (16 - bf->length) & mask) << bf->offset;
++}
++
++
++static int bcm2708_fb_setcolreg(unsigned int regno, unsigned int red,
++				unsigned int green, unsigned int blue,
++				unsigned int transp, struct fb_info *info)
++{
++	struct bcm2708_fb *fb = to_bcm2708(info);
++
++	/*print_debug("BCM2708FB: setcolreg %d:(%02x,%02x,%02x,%02x) %x\n", regno, red, green, blue, transp, fb->fb.fix.visual);*/
++	if (fb->fb.var.bits_per_pixel <= 8) {
++		if (regno < 256) {
++			/* blue [23:16], green [15:8], red [7:0] */
++			fb->gpu_cmap[regno] = ((red   >> 8) & 0xff) << 0 |
++					      ((green >> 8) & 0xff) << 8 |
++					      ((blue  >> 8) & 0xff) << 16;
++		}
++		/* Hack: we need to tell the GPU the palette has changed, but currently bcm2708_fb_set_par takes noticeable time when called for every (256) colour */
++		/* So just call it for what looks like the last colour in a list for now. */
++		if (regno == 15 || regno == 255) {
++			struct packet {
++				u32 offset;
++				u32 length;
++				u32 cmap[256];
++			} *packet;
++			int ret;
++
++			packet = kmalloc(sizeof(*packet), GFP_KERNEL);
++			if (!packet)
++				return -ENOMEM;
++			packet->offset = 0;
++			packet->length = regno + 1;
++			memcpy(packet->cmap, fb->gpu_cmap, sizeof(packet->cmap));
++			ret = rpi_firmware_property(fb->fw, RPI_FIRMWARE_FRAMEBUFFER_SET_PALETTE,
++						    packet, (2 + packet->length) * sizeof(u32));
++			if (ret || packet->offset)
++				dev_err(info->device, "Failed to set palette (%d,%u)\n",
++					ret, packet->offset);
++			kfree(packet);
++		}
++        } else if (regno < 16) {
++		fb->cmap[regno] = convert_bitfield(transp, &fb->fb.var.transp) |
++		    convert_bitfield(blue, &fb->fb.var.blue) |
++		    convert_bitfield(green, &fb->fb.var.green) |
++		    convert_bitfield(red, &fb->fb.var.red);
++	}
++	return regno > 255;
++}
++
++static int bcm2708_fb_blank(int blank_mode, struct fb_info *info)
++{
++	struct bcm2708_fb *fb = to_bcm2708(info);
++	u32 value;
++	int ret;
++
++	switch (blank_mode) {
++	case FB_BLANK_UNBLANK:
++		value = 0;
++		break;
++	case FB_BLANK_NORMAL:
++	case FB_BLANK_VSYNC_SUSPEND:
++	case FB_BLANK_HSYNC_SUSPEND:
++	case FB_BLANK_POWERDOWN:
++		value = 1;
++		break;
++	default:
++		return -EINVAL;
++	}
++
++	ret = rpi_firmware_property(fb->fw, RPI_FIRMWARE_FRAMEBUFFER_BLANK,
++				    &value, sizeof(value));
++	if (ret)
++		dev_err(info->device, "bcm2708_fb_blank(%d) failed: %d\n",
++			blank_mode, ret);
++
++	return ret;
++}
++
++static int bcm2708_fb_pan_display(struct fb_var_screeninfo *var, struct fb_info *info)
++{
++	s32 result;
++	info->var.xoffset = var->xoffset;
++	info->var.yoffset = var->yoffset;
++	result = bcm2708_fb_set_par(info);
++	if (result != 0)
++		pr_err("bcm2708_fb_pan_display(%d,%d) returns=%d\n", var->xoffset, var->yoffset, result);
++	return result;
++}
++
++static int bcm2708_ioctl(struct fb_info *info, unsigned int cmd, unsigned long arg)
++{
++	struct bcm2708_fb *fb = to_bcm2708(info);
++	u32 dummy = 0;
++	int ret;
++
++	switch (cmd) {
++	case FBIO_WAITFORVSYNC:
++		ret = rpi_firmware_property(fb->fw,
++					    RPI_FIRMWARE_FRAMEBUFFER_SET_VSYNC,
++					    &dummy, sizeof(dummy));
++		break;
++	default:
++		dev_dbg(info->device, "Unknown ioctl 0x%x\n", cmd);
++		return -ENOTTY;
++	}
++
++	if (ret)
++		dev_err(info->device, "ioctl 0x%x failed (%d)\n", cmd, ret);
++
++	return ret;
++}
++static void bcm2708_fb_fillrect(struct fb_info *info,
++				const struct fb_fillrect *rect)
++{
++	/* (is called) print_debug("bcm2708_fb_fillrect\n"); */
++	cfb_fillrect(info, rect);
++}
++
++/* A helper function for configuring dma control block */
++static void set_dma_cb(struct bcm2708_dma_cb *cb,
++		       int        burst_size,
++		       dma_addr_t dst,
++		       int        dst_stride,
++		       dma_addr_t src,
++		       int        src_stride,
++		       int        w,
++		       int        h)
++{
++	cb->info = BCM2708_DMA_BURST(burst_size) | BCM2708_DMA_S_WIDTH |
++		   BCM2708_DMA_S_INC | BCM2708_DMA_D_WIDTH |
++		   BCM2708_DMA_D_INC | BCM2708_DMA_TDMODE;
++	cb->dst = dst;
++	cb->src = src;
++	/*
++	 * This is not really obvious from the DMA documentation,
++	 * but the top 16 bits must be programmed to "height - 1"
++	 * and not "height" in 2D mode.
++	 */
++	cb->length = ((h - 1) << 16) | w;
++	cb->stride = ((dst_stride - w) << 16) | (u16)(src_stride - w);
++	cb->pad[0] = 0;
++	cb->pad[1] = 0;
++}
++
++static void bcm2708_fb_copyarea(struct fb_info *info,
++				const struct fb_copyarea *region)
++{
++	struct bcm2708_fb *fb = to_bcm2708(info);
++	struct bcm2708_dma_cb *cb = fb->cb_base;
++	int bytes_per_pixel = (info->var.bits_per_pixel + 7) >> 3;
++	/* Channel 0 supports larger bursts and is a bit faster */
++	int burst_size = (fb->dma_chan == 0) ? 8 : 2;
++	int pixels = region->width * region->height;
++
++	/* Fallback to cfb_copyarea() if we don't like something */
++	if (in_atomic() ||
++	    bytes_per_pixel > 4 ||
++	    info->var.xres * info->var.yres > 1920 * 1200 ||
++	    region->width <= 0 || region->width > info->var.xres ||
++	    region->height <= 0 || region->height > info->var.yres ||
++	    region->sx < 0 || region->sx >= info->var.xres ||
++	    region->sy < 0 || region->sy >= info->var.yres ||
++	    region->dx < 0 || region->dx >= info->var.xres ||
++	    region->dy < 0 || region->dy >= info->var.yres ||
++	    region->sx + region->width > info->var.xres ||
++	    region->dx + region->width > info->var.xres ||
++	    region->sy + region->height > info->var.yres ||
++	    region->dy + region->height > info->var.yres) {
++		cfb_copyarea(info, region);
++		return;
++	}
++
++	if (region->dy == region->sy && region->dx > region->sx) {
++		/*
++		 * A difficult case of overlapped copy. Because DMA can't
++		 * copy individual scanlines in backwards direction, we need
++		 * two-pass processing. We do it by programming a chain of dma
++		 * control blocks in the first 16K part of the buffer and use
++		 * the remaining 48K as the intermediate temporary scratch
++		 * buffer. The buffer size is sufficient to handle up to
++		 * 1920x1200 resolution at 32bpp pixel depth.
++		 */
++		int y;
++		dma_addr_t control_block_pa = fb->cb_handle;
++		dma_addr_t scratchbuf = fb->cb_handle + 16 * 1024;
++		int scanline_size = bytes_per_pixel * region->width;
++		int scanlines_per_cb = (64 * 1024 - 16 * 1024) / scanline_size;
++
++		for (y = 0; y < region->height; y += scanlines_per_cb) {
++			dma_addr_t src =
++				fb->fb_bus_address +
++				bytes_per_pixel * region->sx +
++				(region->sy + y) * fb->fb.fix.line_length;
++			dma_addr_t dst =
++				fb->fb_bus_address +
++				bytes_per_pixel * region->dx +
++				(region->dy + y) * fb->fb.fix.line_length;
++
++			if (region->height - y < scanlines_per_cb)
++				scanlines_per_cb = region->height - y;
++
++			set_dma_cb(cb, burst_size, scratchbuf, scanline_size,
++				   src, fb->fb.fix.line_length,
++				   scanline_size, scanlines_per_cb);
++			control_block_pa += sizeof(struct bcm2708_dma_cb);
++			cb->next = control_block_pa;
++			cb++;
++
++			set_dma_cb(cb, burst_size, dst, fb->fb.fix.line_length,
++				   scratchbuf, scanline_size,
++				   scanline_size, scanlines_per_cb);
++			control_block_pa += sizeof(struct bcm2708_dma_cb);
++			cb->next = control_block_pa;
++			cb++;
++		}
++		/* move the pointer back to the last dma control block */
++		cb--;
++	} else {
++		/* A single dma control block is enough. */
++		int sy, dy, stride;
++		if (region->dy <= region->sy) {
++			/* processing from top to bottom */
++			dy = region->dy;
++			sy = region->sy;
++			stride = fb->fb.fix.line_length;
++		} else {
++			/* processing from bottom to top */
++			dy = region->dy + region->height - 1;
++			sy = region->sy + region->height - 1;
++			stride = -fb->fb.fix.line_length;
++		}
++		set_dma_cb(cb, burst_size,
++			   fb->fb_bus_address + dy * fb->fb.fix.line_length +
++						   bytes_per_pixel * region->dx,
++			   stride,
++			   fb->fb_bus_address + sy * fb->fb.fix.line_length +
++						   bytes_per_pixel * region->sx,
++			   stride,
++			   region->width * bytes_per_pixel,
++			   region->height);
++	}
++
++	/* end of dma control blocks chain */
++	cb->next = 0;
++
++
++	if (pixels < dma_busy_wait_threshold) {
++		bcm_dma_start(fb->dma_chan_base, fb->cb_handle);
++		bcm_dma_wait_idle(fb->dma_chan_base);
++	} else {
++		void __iomem *dma_chan = fb->dma_chan_base;
++		cb->info |= BCM2708_DMA_INT_EN;
++		bcm_dma_start(fb->dma_chan_base, fb->cb_handle);
++		while (bcm_dma_is_busy(dma_chan)) {
++			wait_event_interruptible(
++				fb->dma_waitq,
++				!bcm_dma_is_busy(dma_chan));
++		}
++		fb->stats.dma_irqs++;
++	}
++	fb->stats.dma_copies++;
++}
++
++static void bcm2708_fb_imageblit(struct fb_info *info,
++				 const struct fb_image *image)
++{
++	/* (is called) print_debug("bcm2708_fb_imageblit\n"); */
++	cfb_imageblit(info, image);
++}
++
++static irqreturn_t bcm2708_fb_dma_irq(int irq, void *cxt)
++{
++	struct bcm2708_fb *fb = cxt;
++
++	/* FIXME: should read status register to check if this is
++	 * actually interrupting us or not, in case this interrupt
++	 * ever becomes shared amongst several DMA channels
++	 *
++	 * readl(dma_chan_base + BCM2708_DMA_CS) & BCM2708_DMA_IRQ;
++	 */
++
++	/* acknowledge the interrupt */
++	writel(BCM2708_DMA_INT, fb->dma_chan_base + BCM2708_DMA_CS);
++
++	wake_up(&fb->dma_waitq);
++	return IRQ_HANDLED;
++}
++
++static struct fb_ops bcm2708_fb_ops = {
++	.owner = THIS_MODULE,
++	.fb_check_var = bcm2708_fb_check_var,
++	.fb_set_par = bcm2708_fb_set_par,
++	.fb_setcolreg = bcm2708_fb_setcolreg,
++	.fb_blank = bcm2708_fb_blank,
++	.fb_fillrect = bcm2708_fb_fillrect,
++	.fb_copyarea = bcm2708_fb_copyarea,
++	.fb_imageblit = bcm2708_fb_imageblit,
++	.fb_pan_display = bcm2708_fb_pan_display,
++	.fb_ioctl = bcm2708_ioctl,
++};
++
++static int bcm2708_fb_register(struct bcm2708_fb *fb)
++{
++	int ret;
++
++	fb->fb.fbops = &bcm2708_fb_ops;
++	fb->fb.flags = FBINFO_FLAG_DEFAULT | FBINFO_HWACCEL_COPYAREA;
++	fb->fb.pseudo_palette = fb->cmap;
++
++	strncpy(fb->fb.fix.id, bcm2708_name, sizeof(fb->fb.fix.id));
++	fb->fb.fix.type = FB_TYPE_PACKED_PIXELS;
++	fb->fb.fix.type_aux = 0;
++	fb->fb.fix.xpanstep = 1;
++	fb->fb.fix.ypanstep = 1;
++	fb->fb.fix.ywrapstep = 0;
++	fb->fb.fix.accel = FB_ACCEL_NONE;
++
++	fb->fb.var.xres = fbwidth;
++	fb->fb.var.yres = fbheight;
++	fb->fb.var.xres_virtual = fbwidth;
++	fb->fb.var.yres_virtual = fbheight;
++	fb->fb.var.bits_per_pixel = fbdepth;
++	fb->fb.var.vmode = FB_VMODE_NONINTERLACED;
++	fb->fb.var.activate = FB_ACTIVATE_NOW;
++	fb->fb.var.nonstd = 0;
++	fb->fb.var.height = -1;		/* height of picture in mm    */
++	fb->fb.var.width = -1;		/* width of picture in mm    */
++	fb->fb.var.accel_flags = 0;
++
++	fb->fb.monspecs.hfmin = 0;
++	fb->fb.monspecs.hfmax = 100000;
++	fb->fb.monspecs.vfmin = 0;
++	fb->fb.monspecs.vfmax = 400;
++	fb->fb.monspecs.dclkmin = 1000000;
++	fb->fb.monspecs.dclkmax = 100000000;
++
++	bcm2708_fb_set_bitfields(&fb->fb.var);
++	init_waitqueue_head(&fb->dma_waitq);
++
++	/*
++	 * Allocate colourmap.
++	 */
++
++	fb_set_var(&fb->fb, &fb->fb.var);
++	ret = bcm2708_fb_set_par(&fb->fb);
++	if (ret)
++		return ret;
++
++	print_debug("BCM2708FB: registering framebuffer (%dx%d@%d) (%d)\n", fbwidth,
++		fbheight, fbdepth, fbswap);
++
++	ret = register_framebuffer(&fb->fb);
++	print_debug("BCM2708FB: register framebuffer (%d)\n", ret);
++	if (ret == 0)
++		goto out;
++
++	print_debug("BCM2708FB: cannot register framebuffer (%d)\n", ret);
++out:
++	return ret;
++}
++
++static int bcm2708_fb_probe(struct platform_device *dev)
++{
++	struct device_node *fw_np;
++	struct rpi_firmware *fw;
++	struct bcm2708_fb *fb;
++	int ret;
++
++	fw_np = of_parse_phandle(dev->dev.of_node, "firmware", 0);
++/* Remove comment when booting without Device Tree is no longer supported
++	if (!fw_np) {
++		dev_err(&dev->dev, "Missing firmware node\n");
++		return -ENOENT;
++	}
++*/
++	fw = rpi_firmware_get(fw_np);
++	if (!fw)
++		return -EPROBE_DEFER;
++
++	fb = kzalloc(sizeof(struct bcm2708_fb), GFP_KERNEL);
++	if (!fb) {
++		dev_err(&dev->dev,
++			"could not allocate new bcm2708_fb struct\n");
++		ret = -ENOMEM;
++		goto free_region;
++	}
++
++	fb->fw = fw;
++	bcm2708_fb_debugfs_init(fb);
++
++	fb->cb_base = dma_alloc_writecombine(&dev->dev, SZ_64K,
++					     &fb->cb_handle, GFP_KERNEL);
++	if (!fb->cb_base) {
++		dev_err(&dev->dev, "cannot allocate DMA CBs\n");
++		ret = -ENOMEM;
++		goto free_fb;
++	}
++
++	pr_info("BCM2708FB: allocated DMA memory %08x\n",
++	       fb->cb_handle);
++
++	ret = bcm_dma_chan_alloc(BCM_DMA_FEATURE_BULK,
++				 &fb->dma_chan_base, &fb->dma_irq);
++	if (ret < 0) {
++		dev_err(&dev->dev, "couldn't allocate a DMA channel\n");
++		goto free_cb;
++	}
++	fb->dma_chan = ret;
++
++	ret = request_irq(fb->dma_irq, bcm2708_fb_dma_irq,
++			  0, "bcm2708_fb dma", fb);
++	if (ret) {
++		pr_err("%s: failed to request DMA irq\n", __func__);
++		goto free_dma_chan;
++	}
++
++
++	pr_info("BCM2708FB: allocated DMA channel %d @ %p\n",
++	       fb->dma_chan, fb->dma_chan_base);
++
++	fb->dev = dev;
++	fb->fb.device = &dev->dev;
++
++	ret = bcm2708_fb_register(fb);
++	if (ret == 0) {
++		platform_set_drvdata(dev, fb);
++		goto out;
++	}
++
++free_dma_chan:
++	bcm_dma_chan_free(fb->dma_chan);
++free_cb:
++	dma_free_writecombine(&dev->dev, SZ_64K, fb->cb_base, fb->cb_handle);
++free_fb:
++	kfree(fb);
++free_region:
++	dev_err(&dev->dev, "probe failed, err %d\n", ret);
++out:
++	return ret;
++}
++
++static int bcm2708_fb_remove(struct platform_device *dev)
++{
++	struct bcm2708_fb *fb = platform_get_drvdata(dev);
++
++	platform_set_drvdata(dev, NULL);
++
++	if (fb->fb.screen_base)
++		iounmap(fb->fb.screen_base);
++	unregister_framebuffer(&fb->fb);
++
++	dma_free_writecombine(&dev->dev, SZ_64K, fb->cb_base, fb->cb_handle);
++	bcm_dma_chan_free(fb->dma_chan);
++
++	bcm2708_fb_debugfs_deinit(fb);
++
++	free_irq(fb->dma_irq, fb);
++
++	kfree(fb);
++
++	return 0;
++}
++
++static const struct of_device_id bcm2708_fb_of_match_table[] = {
++	{ .compatible = "brcm,bcm2708-fb", },
++	{},
++};
++MODULE_DEVICE_TABLE(of, bcm2708_fb_of_match_table);
++
++static struct platform_driver bcm2708_fb_driver = {
++	.probe = bcm2708_fb_probe,
++	.remove = bcm2708_fb_remove,
++	.driver = {
++		   .name = DRIVER_NAME,
++		   .owner = THIS_MODULE,
++		   .of_match_table = bcm2708_fb_of_match_table,
++		   },
++};
++
++static int __init bcm2708_fb_init(void)
++{
++	return platform_driver_register(&bcm2708_fb_driver);
++}
++
++module_init(bcm2708_fb_init);
++
++static void __exit bcm2708_fb_exit(void)
++{
++	platform_driver_unregister(&bcm2708_fb_driver);
++}
++
++module_exit(bcm2708_fb_exit);
++
++module_param(fbwidth, int, 0644);
++module_param(fbheight, int, 0644);
++module_param(fbdepth, int, 0644);
++module_param(fbswap, int, 0644);
++
++MODULE_DESCRIPTION("BCM2708 framebuffer driver");
++MODULE_LICENSE("GPL");
++
++MODULE_PARM_DESC(fbwidth, "Width of ARM Framebuffer");
++MODULE_PARM_DESC(fbheight, "Height of ARM Framebuffer");
++MODULE_PARM_DESC(fbdepth, "Bit depth of ARM Framebuffer");
++MODULE_PARM_DESC(fbswap, "Swap order of red and blue in 24 and 32 bit modes");
+--- a/drivers/video/logo/logo_linux_clut224.ppm
++++ b/drivers/video/logo/logo_linux_clut224.ppm
+@@ -1,1604 +1,883 @@
+ P3
+-# Standard 224-color Linux logo
+-80 80
++63 80
+ 255
 [remaining pixel rows omitted: per the hunk header above, the 1,604 lines of the original 80x80, 224-color Linux logo are replaced by 883 lines for the new 63x80 logo]
+-  2   2   6   6   6   6  30  30  30  26  26  26
+-203 166  17 154 142  90  66  66  66  26  26  26
+-  6   6   6   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  6   6   6  18  18  18  38  38  38  58  58  58
+- 78  78  78  86  86  86 101 101 101 123 123 123
+-175 146  61 210 150  10 234 174  13 246 186  14
+-246 190  14 246 190  14 246 190  14 238 190  10
+-102  78  10   2   2   6  46  46  46 198 198 198
+-253 253 253 253 253 253 253 253 253 253 253 253
+-253 253 253 253 253 253 234 234 234 242 242 242
+-253 253 253 253 253 253 253 253 253 253 253 253
+-253 253 253 253 253 253 253 253 253 253 253 253
+-253 253 253 253 253 253 253 253 253 253 253 253
+-253 253 253 253 253 253 253 253 253 224 178  62
+-242 186  14 241 196  14 210 166  10  22  18   6
+-  2   2   6   2   2   6   2   2   6   2   2   6
+-  2   2   6   2   2   6   6   6   6 121  92   8
+-238 202  15 232 195  16  82  82  82  34  34  34
+- 10  10  10   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+- 14  14  14  38  38  38  70  70  70 154 122  46
+-190 142  34 200 144  11 197 138  11 197 138  11
+-213 154  11 226 170  11 242 186  14 246 190  14
+-246 190  14 246 190  14 246 190  14 246 190  14
+-225 175  15  46  32   6   2   2   6  22  22  22
+-158 158 158 250 250 250 253 253 253 253 253 253
+-253 253 253 253 253 253 253 253 253 253 253 253
+-253 253 253 253 253 253 253 253 253 253 253 253
+-253 253 253 253 253 253 253 253 253 253 253 253
+-253 253 253 253 253 253 253 253 253 253 253 253
+-253 253 253 250 250 250 242 242 242 224 178  62
+-239 182  13 236 186  11 213 154  11  46  32   6
+-  2   2   6   2   2   6   2   2   6   2   2   6
+-  2   2   6   2   2   6  61  42   6 225 175  15
+-238 190  10 236 186  11 112 100  78  42  42  42
+- 14  14  14   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   6   6   6
+- 22  22  22  54  54  54 154 122  46 213 154  11
+-226 170  11 230 174  11 226 170  11 226 170  11
+-236 178  12 242 186  14 246 190  14 246 190  14
+-246 190  14 246 190  14 246 190  14 246 190  14
+-241 196  14 184 144  12  10  10  10   2   2   6
+-  6   6   6 116 116 116 242 242 242 253 253 253
+-253 253 253 253 253 253 253 253 253 253 253 253
+-253 253 253 253 253 253 253 253 253 253 253 253
+-253 253 253 253 253 253 253 253 253 253 253 253
+-253 253 253 253 253 253 253 253 253 253 253 253
+-253 253 253 231 231 231 198 198 198 214 170  54
+-236 178  12 236 178  12 210 150  10 137  92   6
+- 18  14   6   2   2   6   2   2   6   2   2   6
+-  6   6   6  70  47   6 200 144  11 236 178  12
+-239 182  13 239 182  13 124 112  88  58  58  58
+- 22  22  22   6   6   6   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0  10  10  10
+- 30  30  30  70  70  70 180 133  36 226 170  11
+-239 182  13 242 186  14 242 186  14 246 186  14
+-246 190  14 246 190  14 246 190  14 246 190  14
+-246 190  14 246 190  14 246 190  14 246 190  14
+-246 190  14 232 195  16  98  70   6   2   2   6
+-  2   2   6   2   2   6  66  66  66 221 221 221
+-253 253 253 253 253 253 253 253 253 253 253 253
+-253 253 253 253 253 253 253 253 253 253 253 253
+-253 253 253 253 253 253 253 253 253 253 253 253
+-253 253 253 253 253 253 253 253 253 253 253 253
+-253 253 253 206 206 206 198 198 198 214 166  58
+-230 174  11 230 174  11 216 158  10 192 133   9
+-163 110   8 116  81   8 102  78  10 116  81   8
+-167 114   7 197 138  11 226 170  11 239 182  13
+-242 186  14 242 186  14 162 146  94  78  78  78
+- 34  34  34  14  14  14   6   6   6   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   6   6   6
+- 30  30  30  78  78  78 190 142  34 226 170  11
+-239 182  13 246 190  14 246 190  14 246 190  14
+-246 190  14 246 190  14 246 190  14 246 190  14
+-246 190  14 246 190  14 246 190  14 246 190  14
+-246 190  14 241 196  14 203 166  17  22  18   6
+-  2   2   6   2   2   6   2   2   6  38  38  38
+-218 218 218 253 253 253 253 253 253 253 253 253
+-253 253 253 253 253 253 253 253 253 253 253 253
+-253 253 253 253 253 253 253 253 253 253 253 253
+-253 253 253 253 253 253 253 253 253 253 253 253
+-250 250 250 206 206 206 198 198 198 202 162  69
+-226 170  11 236 178  12 224 166  10 210 150  10
+-200 144  11 197 138  11 192 133   9 197 138  11
+-210 150  10 226 170  11 242 186  14 246 190  14
+-246 190  14 246 186  14 225 175  15 124 112  88
+- 62  62  62  30  30  30  14  14  14   6   6   6
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0  10  10  10
+- 30  30  30  78  78  78 174 135  50 224 166  10
+-239 182  13 246 190  14 246 190  14 246 190  14
+-246 190  14 246 190  14 246 190  14 246 190  14
+-246 190  14 246 190  14 246 190  14 246 190  14
+-246 190  14 246 190  14 241 196  14 139 102  15
+-  2   2   6   2   2   6   2   2   6   2   2   6
+- 78  78  78 250 250 250 253 253 253 253 253 253
+-253 253 253 253 253 253 253 253 253 253 253 253
+-253 253 253 253 253 253 253 253 253 253 253 253
+-253 253 253 253 253 253 253 253 253 253 253 253
+-250 250 250 214 214 214 198 198 198 190 150  46
+-219 162  10 236 178  12 234 174  13 224 166  10
+-216 158  10 213 154  11 213 154  11 216 158  10
+-226 170  11 239 182  13 246 190  14 246 190  14
+-246 190  14 246 190  14 242 186  14 206 162  42
+-101 101 101  58  58  58  30  30  30  14  14  14
+-  6   6   6   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0  10  10  10
+- 30  30  30  74  74  74 174 135  50 216 158  10
+-236 178  12 246 190  14 246 190  14 246 190  14
+-246 190  14 246 190  14 246 190  14 246 190  14
+-246 190  14 246 190  14 246 190  14 246 190  14
+-246 190  14 246 190  14 241 196  14 226 184  13
+- 61  42   6   2   2   6   2   2   6   2   2   6
+- 22  22  22 238 238 238 253 253 253 253 253 253
+-253 253 253 253 253 253 253 253 253 253 253 253
+-253 253 253 253 253 253 253 253 253 253 253 253
+-253 253 253 253 253 253 253 253 253 253 253 253
+-253 253 253 226 226 226 187 187 187 180 133  36
+-216 158  10 236 178  12 239 182  13 236 178  12
+-230 174  11 226 170  11 226 170  11 230 174  11
+-236 178  12 242 186  14 246 190  14 246 190  14
+-246 190  14 246 190  14 246 186  14 239 182  13
+-206 162  42 106 106 106  66  66  66  34  34  34
+- 14  14  14   6   6   6   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   6   6   6
+- 26  26  26  70  70  70 163 133  67 213 154  11
+-236 178  12 246 190  14 246 190  14 246 190  14
+-246 190  14 246 190  14 246 190  14 246 190  14
+-246 190  14 246 190  14 246 190  14 246 190  14
+-246 190  14 246 190  14 246 190  14 241 196  14
+-190 146  13  18  14   6   2   2   6   2   2   6
+- 46  46  46 246 246 246 253 253 253 253 253 253
+-253 253 253 253 253 253 253 253 253 253 253 253
+-253 253 253 253 253 253 253 253 253 253 253 253
+-253 253 253 253 253 253 253 253 253 253 253 253
+-253 253 253 221 221 221  86  86  86 156 107  11
+-216 158  10 236 178  12 242 186  14 246 186  14
+-242 186  14 239 182  13 239 182  13 242 186  14
+-242 186  14 246 186  14 246 190  14 246 190  14
+-246 190  14 246 190  14 246 190  14 246 190  14
+-242 186  14 225 175  15 142 122  72  66  66  66
+- 30  30  30  10  10  10   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   6   6   6
+- 26  26  26  70  70  70 163 133  67 210 150  10
+-236 178  12 246 190  14 246 190  14 246 190  14
+-246 190  14 246 190  14 246 190  14 246 190  14
+-246 190  14 246 190  14 246 190  14 246 190  14
+-246 190  14 246 190  14 246 190  14 246 190  14
+-232 195  16 121  92   8  34  34  34 106 106 106
+-221 221 221 253 253 253 253 253 253 253 253 253
+-253 253 253 253 253 253 253 253 253 253 253 253
+-253 253 253 253 253 253 253 253 253 253 253 253
+-253 253 253 253 253 253 253 253 253 253 253 253
+-242 242 242  82  82  82  18  14   6 163 110   8
+-216 158  10 236 178  12 242 186  14 246 190  14
+-246 190  14 246 190  14 246 190  14 246 190  14
+-246 190  14 246 190  14 246 190  14 246 190  14
+-246 190  14 246 190  14 246 190  14 246 190  14
+-246 190  14 246 190  14 242 186  14 163 133  67
+- 46  46  46  18  18  18   6   6   6   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0  10  10  10
+- 30  30  30  78  78  78 163 133  67 210 150  10
+-236 178  12 246 186  14 246 190  14 246 190  14
+-246 190  14 246 190  14 246 190  14 246 190  14
+-246 190  14 246 190  14 246 190  14 246 190  14
+-246 190  14 246 190  14 246 190  14 246 190  14
+-241 196  14 215 174  15 190 178 144 253 253 253
+-253 253 253 253 253 253 253 253 253 253 253 253
+-253 253 253 253 253 253 253 253 253 253 253 253
+-253 253 253 253 253 253 253 253 253 253 253 253
+-253 253 253 253 253 253 253 253 253 218 218 218
+- 58  58  58   2   2   6  22  18   6 167 114   7
+-216 158  10 236 178  12 246 186  14 246 190  14
+-246 190  14 246 190  14 246 190  14 246 190  14
+-246 190  14 246 190  14 246 190  14 246 190  14
+-246 190  14 246 190  14 246 190  14 246 190  14
+-246 190  14 246 186  14 242 186  14 190 150  46
+- 54  54  54  22  22  22   6   6   6   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0  14  14  14
+- 38  38  38  86  86  86 180 133  36 213 154  11
+-236 178  12 246 186  14 246 190  14 246 190  14
+-246 190  14 246 190  14 246 190  14 246 190  14
+-246 190  14 246 190  14 246 190  14 246 190  14
+-246 190  14 246 190  14 246 190  14 246 190  14
+-246 190  14 232 195  16 190 146  13 214 214 214
+-253 253 253 253 253 253 253 253 253 253 253 253
+-253 253 253 253 253 253 253 253 253 253 253 253
+-253 253 253 253 253 253 253 253 253 253 253 253
+-253 253 253 250 250 250 170 170 170  26  26  26
+-  2   2   6   2   2   6  37  26   9 163 110   8
+-219 162  10 239 182  13 246 186  14 246 190  14
+-246 190  14 246 190  14 246 190  14 246 190  14
+-246 190  14 246 190  14 246 190  14 246 190  14
+-246 190  14 246 190  14 246 190  14 246 190  14
+-246 186  14 236 178  12 224 166  10 142 122  72
+- 46  46  46  18  18  18   6   6   6   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   6   6   6  18  18  18
+- 50  50  50 109 106  95 192 133   9 224 166  10
+-242 186  14 246 190  14 246 190  14 246 190  14
+-246 190  14 246 190  14 246 190  14 246 190  14
+-246 190  14 246 190  14 246 190  14 246 190  14
+-246 190  14 246 190  14 246 190  14 246 190  14
+-242 186  14 226 184  13 210 162  10 142 110  46
+-226 226 226 253 253 253 253 253 253 253 253 253
+-253 253 253 253 253 253 253 253 253 253 253 253
+-253 253 253 253 253 253 253 253 253 253 253 253
+-198 198 198  66  66  66   2   2   6   2   2   6
+-  2   2   6   2   2   6  50  34   6 156 107  11
+-219 162  10 239 182  13 246 186  14 246 190  14
+-246 190  14 246 190  14 246 190  14 246 190  14
+-246 190  14 246 190  14 246 190  14 246 190  14
+-246 190  14 246 190  14 246 190  14 242 186  14
+-234 174  13 213 154  11 154 122  46  66  66  66
+- 30  30  30  10  10  10   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   6   6   6  22  22  22
+- 58  58  58 154 121  60 206 145  10 234 174  13
+-242 186  14 246 186  14 246 190  14 246 190  14
+-246 190  14 246 190  14 246 190  14 246 190  14
+-246 190  14 246 190  14 246 190  14 246 190  14
+-246 190  14 246 190  14 246 190  14 246 190  14
+-246 186  14 236 178  12 210 162  10 163 110   8
+- 61  42   6 138 138 138 218 218 218 250 250 250
+-253 253 253 253 253 253 253 253 253 250 250 250
+-242 242 242 210 210 210 144 144 144  66  66  66
+-  6   6   6   2   2   6   2   2   6   2   2   6
+-  2   2   6   2   2   6  61  42   6 163 110   8
+-216 158  10 236 178  12 246 190  14 246 190  14
+-246 190  14 246 190  14 246 190  14 246 190  14
+-246 190  14 246 190  14 246 190  14 246 190  14
+-246 190  14 239 182  13 230 174  11 216 158  10
+-190 142  34 124 112  88  70  70  70  38  38  38
+- 18  18  18   6   6   6   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   6   6   6  22  22  22
+- 62  62  62 168 124  44 206 145  10 224 166  10
+-236 178  12 239 182  13 242 186  14 242 186  14
+-246 186  14 246 190  14 246 190  14 246 190  14
+-246 190  14 246 190  14 246 190  14 246 190  14
+-246 190  14 246 190  14 246 190  14 246 190  14
+-246 190  14 236 178  12 216 158  10 175 118   6
+- 80  54   7   2   2   6   6   6   6  30  30  30
+- 54  54  54  62  62  62  50  50  50  38  38  38
+- 14  14  14   2   2   6   2   2   6   2   2   6
+-  2   2   6   2   2   6   2   2   6   2   2   6
+-  2   2   6   6   6   6  80  54   7 167 114   7
+-213 154  11 236 178  12 246 190  14 246 190  14
+-246 190  14 246 190  14 246 190  14 246 190  14
+-246 190  14 242 186  14 239 182  13 239 182  13
+-230 174  11 210 150  10 174 135  50 124 112  88
+- 82  82  82  54  54  54  34  34  34  18  18  18
+-  6   6   6   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   6   6   6  18  18  18
+- 50  50  50 158 118  36 192 133   9 200 144  11
+-216 158  10 219 162  10 224 166  10 226 170  11
+-230 174  11 236 178  12 239 182  13 239 182  13
+-242 186  14 246 186  14 246 190  14 246 190  14
+-246 190  14 246 190  14 246 190  14 246 190  14
+-246 186  14 230 174  11 210 150  10 163 110   8
+-104  69   6  10  10  10   2   2   6   2   2   6
+-  2   2   6   2   2   6   2   2   6   2   2   6
+-  2   2   6   2   2   6   2   2   6   2   2   6
+-  2   2   6   2   2   6   2   2   6   2   2   6
+-  2   2   6   6   6   6  91  60   6 167 114   7
+-206 145  10 230 174  11 242 186  14 246 190  14
+-246 190  14 246 190  14 246 186  14 242 186  14
+-239 182  13 230 174  11 224 166  10 213 154  11
+-180 133  36 124 112  88  86  86  86  58  58  58
+- 38  38  38  22  22  22  10  10  10   6   6   6
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0  14  14  14
+- 34  34  34  70  70  70 138 110  50 158 118  36
+-167 114   7 180 123   7 192 133   9 197 138  11
+-200 144  11 206 145  10 213 154  11 219 162  10
+-224 166  10 230 174  11 239 182  13 242 186  14
+-246 186  14 246 186  14 246 186  14 246 186  14
+-239 182  13 216 158  10 185 133  11 152  99   6
+-104  69   6  18  14   6   2   2   6   2   2   6
+-  2   2   6   2   2   6   2   2   6   2   2   6
+-  2   2   6   2   2   6   2   2   6   2   2   6
+-  2   2   6   2   2   6   2   2   6   2   2   6
+-  2   2   6   6   6   6  80  54   7 152  99   6
+-192 133   9 219 162  10 236 178  12 239 182  13
+-246 186  14 242 186  14 239 182  13 236 178  12
+-224 166  10 206 145  10 192 133   9 154 121  60
+- 94  94  94  62  62  62  42  42  42  22  22  22
+- 14  14  14   6   6   6   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   6   6   6
+- 18  18  18  34  34  34  58  58  58  78  78  78
+-101  98  89 124 112  88 142 110  46 156 107  11
+-163 110   8 167 114   7 175 118   6 180 123   7
+-185 133  11 197 138  11 210 150  10 219 162  10
+-226 170  11 236 178  12 236 178  12 234 174  13
+-219 162  10 197 138  11 163 110   8 130  83   6
+- 91  60   6  10  10  10   2   2   6   2   2   6
+- 18  18  18  38  38  38  38  38  38  38  38  38
+- 38  38  38  38  38  38  38  38  38  38  38  38
+- 38  38  38  38  38  38  26  26  26   2   2   6
+-  2   2   6   6   6   6  70  47   6 137  92   6
+-175 118   6 200 144  11 219 162  10 230 174  11
+-234 174  13 230 174  11 219 162  10 210 150  10
+-192 133   9 163 110   8 124 112  88  82  82  82
+- 50  50  50  30  30  30  14  14  14   6   6   6
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  6   6   6  14  14  14  22  22  22  34  34  34
+- 42  42  42  58  58  58  74  74  74  86  86  86
+-101  98  89 122 102  70 130  98  46 121  87  25
+-137  92   6 152  99   6 163 110   8 180 123   7
+-185 133  11 197 138  11 206 145  10 200 144  11
+-180 123   7 156 107  11 130  83   6 104  69   6
+- 50  34   6  54  54  54 110 110 110 101  98  89
+- 86  86  86  82  82  82  78  78  78  78  78  78
+- 78  78  78  78  78  78  78  78  78  78  78  78
+- 78  78  78  82  82  82  86  86  86  94  94  94
+-106 106 106 101 101 101  86  66  34 124  80   6
+-156 107  11 180 123   7 192 133   9 200 144  11
+-206 145  10 200 144  11 192 133   9 175 118   6
+-139 102  15 109 106  95  70  70  70  42  42  42
+- 22  22  22  10  10  10   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   6   6   6  10  10  10
+- 14  14  14  22  22  22  30  30  30  38  38  38
+- 50  50  50  62  62  62  74  74  74  90  90  90
+-101  98  89 112 100  78 121  87  25 124  80   6
+-137  92   6 152  99   6 152  99   6 152  99   6
+-138  86   6 124  80   6  98  70   6  86  66  30
+-101  98  89  82  82  82  58  58  58  46  46  46
+- 38  38  38  34  34  34  34  34  34  34  34  34
+- 34  34  34  34  34  34  34  34  34  34  34  34
+- 34  34  34  34  34  34  38  38  38  42  42  42
+- 54  54  54  82  82  82  94  86  76  91  60   6
+-134  86   6 156 107  11 167 114   7 175 118   6
+-175 118   6 167 114   7 152  99   6 121  87  25
+-101  98  89  62  62  62  34  34  34  18  18  18
+-  6   6   6   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   6   6   6   6   6   6  10  10  10
+- 18  18  18  22  22  22  30  30  30  42  42  42
+- 50  50  50  66  66  66  86  86  86 101  98  89
+-106  86  58  98  70   6 104  69   6 104  69   6
+-104  69   6  91  60   6  82  62  34  90  90  90
+- 62  62  62  38  38  38  22  22  22  14  14  14
+- 10  10  10  10  10  10  10  10  10  10  10  10
+- 10  10  10  10  10  10   6   6   6  10  10  10
+- 10  10  10  10  10  10  10  10  10  14  14  14
+- 22  22  22  42  42  42  70  70  70  89  81  66
+- 80  54   7 104  69   6 124  80   6 137  92   6
+-134  86   6 116  81   8 100  82  52  86  86  86
+- 58  58  58  30  30  30  14  14  14   6   6   6
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   6   6   6  10  10  10  14  14  14
+- 18  18  18  26  26  26  38  38  38  54  54  54
+- 70  70  70  86  86  86  94  86  76  89  81  66
+- 89  81  66  86  86  86  74  74  74  50  50  50
+- 30  30  30  14  14  14   6   6   6   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  6   6   6  18  18  18  34  34  34  58  58  58
+- 82  82  82  89  81  66  89  81  66  89  81  66
+- 94  86  66  94  86  76  74  74  74  50  50  50
+- 26  26  26  14  14  14   6   6   6   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  6   6   6   6   6   6  14  14  14  18  18  18
+- 30  30  30  38  38  38  46  46  46  54  54  54
+- 50  50  50  42  42  42  30  30  30  18  18  18
+- 10  10  10   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   6   6   6  14  14  14  26  26  26
+- 38  38  38  50  50  50  58  58  58  58  58  58
+- 54  54  54  42  42  42  30  30  30  18  18  18
+- 10  10  10   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   6   6   6
+-  6   6   6  10  10  10  14  14  14  18  18  18
+- 18  18  18  14  14  14  10  10  10   6   6   6
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   6   6   6
+- 14  14  14  18  18  18  22  22  22  22  22  22
+- 18  18  18  14  14  14  10  10  10   6   6   6
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
+-  0   0   0   0   0   0   0   0   0   0   0   0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 1 0  0 0 0  0 0 0  1 1 0
++0 1 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  1 1 0  0 0 0  0 0 0
++0 1 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  1 1 0
++10 15 3  2 3 1  12 18 4  42 61 14  19 27 6  11 16 4
++38 55 13  10 15 3  3 4 1  10 15 3  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  2 3 1
++12 18 4  1 1 0  23 34 8  31 45 11  10 15 3  32 47 11
++34 49 12  3 4 1  3 4 1  3 4 1  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  10 15 3  29 42 10  26 37 9  12 18 4
++55 80 19  81 118 28  55 80 19  92 132 31  106 153 36  69 100 23
++100 144 34  80 116 27  42 61 14  81 118 28  23 34 8  27 40 9
++15 21 5  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  1 1 0  29 42 10  15 21 5  50 72 17
++74 107 25  45 64 15  102 148 35  80 116 27  84 121 28  111 160 38
++69 100 23  65 94 22  81 118 28  29 42 10  17 25 6  29 42 10
++23 34 8  2 3 1  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  3 4 1
++15 21 5  15 21 5  34 49 12  101 146 34  111 161 38  97 141 33
++97 141 33  119 172 41  117 170 40  116 167 40  118 170 40  118 171 40
++117 169 40  118 170 40  111 160 38  118 170 40  96 138 32  89 128 30
++81 118 28  11 16 4  10 15 3  1 1 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++3 4 1  3 4 1  34 49 12  101 146 34  79 115 27  111 160 38
++114 165 39  113 163 39  118 170 40  117 169 40  118 171 40  117 169 40
++116 167 40  119 172 41  113 163 39  92 132 31  105 151 36  113 163 39
++75 109 26  19 27 6  16 23 5  11 16 4  0 1 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  10 15 3
++80 116 27  106 153 36  105 151 36  114 165 39  118 170 40  118 171 40
++118 171 40  117 169 40  117 169 40  117 169 40  117 169 40  117 169 40
++117 169 40  117 169 40  117 170 40  117 169 40  118 170 40  118 170 40
++117 170 40  75 109 26  75 109 26  34 49 12  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  3 4 1
++64 92 22  65 94 22  100 144 34  118 171 40  118 170 40  117 169 40
++117 169 40  117 169 40  117 169 40  117 169 40  117 169 40  117 169 40
++117 169 40  117 169 40  117 169 40  118 171 41  118 170 40  117 169 40
++109 158 37  105 151 36  104 150 35  47 69 16  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++42 61 14  115 167 39  118 170 40  117 169 40  117 169 40  117 169 40
++117 170 40  117 170 40  117 169 40  117 169 40  117 169 40  117 169 40
++117 169 40  117 169 40  117 169 40  117 169 40  117 169 40  117 169 40
++117 169 40  117 169 40  118 170 40  96 138 32  17 25 6  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  47 69 16
++114 165 39  117 168 40  117 170 40  117 169 40  117 169 40  117 169 40
++117 169 40  117 169 40  117 169 40  117 169 40  117 169 40  117 169 40
++117 169 40  117 169 40  118 170 40  117 169 40  117 169 40  117 169 40
++117 170 40  119 172 41  96 138 32  12 18 4  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  10 15 3
++32 47 11  105 151 36  118 170 40  117 169 40  117 169 40  116 168 40
++109 157 37  111 160 38  117 169 40  118 171 40  117 169 40  117 169 40
++117 169 40  117 169 40  117 169 40  117 169 40  117 169 40  117 169 40
++117 169 40  117 169 40  117 169 40  118 171 40  69 100 23  2 3 1
++0 0 0  0 0 0  0 0 0  0 0 0  19 27 6  101 146 34
++118 171 40  117 169 40  117 169 40  117 169 40  117 169 40  117 169 40
++117 169 40  117 169 40  117 169 40  117 169 40  117 169 40  117 170 40
++118 171 40  115 166 39  107 154 36  111 161 38  117 169 40  117 169 40
++117 169 40  118 171 40  75 109 26  19 27 6  2 3 1  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  16 23 5
++89 128 30  117 169 40  117 169 40  117 169 40  117 169 40  117 169 40
++111 160 38  92 132 31  79 115 27  96 138 32  115 166 39  119 171 41
++117 169 40  117 169 40  117 169 40  117 169 40  117 169 40  117 169 40
++117 169 40  117 169 40  117 169 40  118 170 40  109 157 37  26 37 9
++0 0 0  0 0 0  0 0 0  0 0 0  64 92 22  118 171 40
++117 169 40  117 169 40  117 169 40  117 169 40  117 169 40  117 169 40
++117 169 40  117 169 40  117 169 40  118 170 40  118 171 40  109 157 37
++89 128 30  81 118 28  100 144 34  115 166 39  117 169 40  117 169 40
++117 169 40  117 170 40  113 163 39  60 86 20  1 1 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++27 40 9  96 138 32  118 170 40  117 169 40  117 169 40  117 169 40
++117 170 40  117 169 40  101 146 34  67 96 23  55 80 19  84 121 28
++113 163 39  119 171 41  117 169 40  117 169 40  117 169 40  117 169 40
++117 169 40  117 169 40  117 169 40  117 169 40  119 171 41  65 94 22
++0 0 0  0 0 0  0 0 0  15 21 5  101 146 34  118 171 40
++117 169 40  117 169 40  117 169 40  117 169 40  117 169 40  117 169 40
++117 169 40  118 170 40  118 171 40  104 150 35  69 100 23  53 76 18
++81 118 28  111 160 38  118 170 40  117 169 40  117 169 40  117 169 40
++117 169 40  114 165 39  69 100 23  10 15 3  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  1 1 0
++31 45 11  77 111 26  117 169 40  117 169 40  117 169 40  117 169 40
++117 169 40  117 169 40  118 170 40  116 168 40  92 132 31  47 69 16
++38 55 13  81 118 28  113 163 39  119 171 41  117 169 40  117 169 40
++117 169 40  117 169 40  117 169 40  117 169 40  118 171 41  92 132 31
++10 15 3  0 0 0  0 0 0  36 52 12  115 166 39  117 169 40
++117 169 40  117 169 40  117 169 40  117 169 40  117 169 40  118 170 40
++118 171 40  102 148 35  64 92 22  34 49 12  65 94 22  106 153 36
++118 171 40  117 170 40  117 169 40  117 169 40  117 169 40  117 169 40
++118 170 40  107 154 36  55 80 19  15 21 5  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++29 42 10  101 146 34  118 171 40  117 169 40  117 169 40  117 169 40
++117 169 40  117 169 40  117 169 40  117 169 40  118 171 40  113 163 39
++75 109 26  27 40 9  36 52 12  89 128 30  116 167 40  118 171 40
++117 169 40  117 169 40  117 169 40  117 169 40  118 170 40  104 150 35
++16 23 5  0 0 0  0 0 0  53 76 18  118 171 40  117 169 40
++117 169 40  117 169 40  117 169 40  117 169 40  119 171 41  109 157 37
++67 96 23  23 34 8  42 61 14  96 138 32  118 170 40  118 170 40
++117 169 40  117 169 40  117 169 40  117 169 40  117 169 40  117 169 40
++117 169 40  117 169 40  74 107 25  10 15 3  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  31 45 11  101 146 34  118 170 40  117 169 40  117 169 40
++117 169 40  117 169 40  117 169 40  117 169 40  117 169 40  117 169 40
++119 171 41  102 148 35  47 69 16  14 20 5  50 72 17  102 148 35
++118 171 40  117 169 40  117 169 40  117 169 40  118 170 40  102 148 35
++15 21 5  0 0 0  0 0 0  50 72 17  118 170 40  117 169 40
++117 169 40  117 169 40  118 170 40  116 167 40  84 121 28  27 40 9
++19 27 6  74 107 25  114 165 39  118 171 40  117 169 40  117 169 40
++117 169 40  117 169 40  117 169 40  117 169 40  117 169 40  117 169 40
++117 169 40  75 109 26  10 15 4  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  38 55 13  102 148 35  118 171 40  117 169 40  117 169 40
++117 169 40  117 169 40  117 169 40  117 169 40  117 169 40  117 169 40
++117 169 40  118 170 40  115 167 39  77 111 26  17 25 6  19 27 6
++77 111 26  115 166 39  118 170 40  117 169 40  119 172 41  81 118 28
++3 4 1  0 0 0  0 0 0  27 40 9  111 160 38  118 170 40
++117 169 40  118 171 40  105 151 36  50 72 17  10 15 3  38 55 13
++100 144 34  118 171 40  117 169 40  117 169 40  117 169 40  117 169 40
++117 169 40  117 169 40  117 169 40  117 169 40  117 169 40  117 169 40
++117 169 40  79 115 27  15 21 5  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  10 15 3  64 92 22  111 160 38  117 169 40  117 169 40
++117 169 40  117 169 40  117 169 40  117 169 40  117 169 40  117 169 40
++117 169 40  117 169 40  117 169 40  118 171 40  96 138 32  32 47 11
++3 4 1  50 72 17  107 154 36  120 173 41  105 151 36  31 45 11
++0 0 0  0 0 0  0 0 0  3 4 1  65 94 22  117 169 40
++118 170 40  89 128 30  26 37 9  3 4 1  60 86 20  111 161 38
++118 171 40  117 169 40  117 169 40  117 169 40  117 169 40  117 169 40
++117 169 40  117 169 40  117 169 40  117 169 40  117 169 40  117 169 40
++97 141 33  36 52 12  1 1 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  14 20 5  75 109 26  117 168 40  117 169 40
++117 169 40  117 169 40  117 169 40  117 169 40  117 169 40  117 169 40
++117 169 40  117 169 40  117 169 40  117 169 40  118 171 40  107 154 36
++45 64 15  2 3 1  31 45 11  75 109 26  32 47 11  0 1 0
++0 0 0  0 0 0  0 0 0  0 0 0  10 15 3  55 80 19
++65 94 22  11 16 4  11 16 4  75 109 26  116 168 40  118 170 40
++117 169 40  117 169 40  117 169 40  117 169 40  117 169 40  117 169 40
++117 169 40  117 169 40  117 169 40  117 169 40  118 170 40  107 154 36
++47 69 16  3 4 1  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  12 18 4  69 100 23  111 161 38  118 171 40
++117 169 40  117 169 40  117 169 40  117 169 40  117 169 40  117 169 40
++117 169 40  117 169 40  117 169 40  117 169 40  117 169 40  118 170 40
++111 160 38  50 72 17  2 3 1  2 3 1  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  1 1 0
++1 1 0  12 18 4  81 118 28  118 170 40  117 169 40  117 169 40
++117 169 40  117 169 40  117 169 40  117 169 40  117 169 40  117 169 40
++117 169 40  117 169 40  117 169 40  117 170 40  118 171 40  101 146 34
++42 61 14  2 3 1  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  3 4 1  36 52 12  89 128 30
++117 169 40  117 169 40  117 169 40  117 169 40  117 169 40  117 169 40
++117 169 40  117 169 40  117 169 40  117 169 40  117 169 40  117 169 40
++118 171 41  101 146 34  14 20 5  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  47 69 16  118 170 40  117 169 40  117 169 40  117 169 40
++117 169 40  117 169 40  117 169 40  117 169 40  117 169 40  117 169 40
++117 169 40  117 169 40  117 170 40  111 160 38  69 100 23  19 27 6
++0 1 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  11 16 4  69 100 23
++115 167 39  119 172 41  117 169 40  117 169 40  117 169 40  117 169 40
++117 169 40  117 169 40  117 169 40  117 169 40  117 169 40  117 169 40
++119 172 41  75 109 26  3 4 1  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  23 34 8  106 153 36  118 170 40  117 169 40  117 169 40
++117 169 40  117 169 40  117 169 40  117 169 40  117 169 40  117 169 40
++117 169 40  118 170 40  119 172 41  105 151 36  42 61 14  2 3 1
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  1 1 0  15 21 5
++45 64 15  80 116 27  114 165 39  118 170 40  117 169 40  117 169 40
++117 169 40  117 169 40  117 169 40  117 169 40  117 169 40  119 172 41
++97 141 33  20 30 7  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  1 1 0  53 76 18  114 165 39  118 171 40  117 169 40
++117 169 40  117 169 40  117 169 40  117 169 40  117 169 40  117 169 40
++118 171 40  104 150 35  64 92 22  31 45 11  10 15 3  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  36 52 12  97 141 33  109 158 37  113 163 39  116 168 40
++117 169 40  117 170 40  118 170 40  119 172 41  115 167 39  84 121 28
++23 34 8  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  3 4 1  50 72 17  102 148 35  118 171 40
++119 171 41  118 170 40  117 169 40  117 169 40  115 166 39  111 161 38
++109 157 37  79 115 27  12 18 4  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  3 4 1  15 21 5  23 34 8  45 64 15  106 153 36
++116 167 40  111 160 38  101 146 34  79 115 27  42 61 14  10 15 3
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  1 1 0  20 30 7  60 86 20
++89 128 30  106 153 36  113 163 39  117 169 40  84 121 28  29 42 10
++19 27 6  10 15 3  2 3 1  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  16 23 5  38 55 13
++36 52 12  26 37 9  12 18 4  2 3 1  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  1 0 0  19 2 7  52 5 18
++78 7 27  88 8 31  81 7 29  56 5 19  25 2 9  3 0 1
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++3 4 1  19 27 6  31 45 11  38 55 13  32 47 11  3 4 1
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  3 0 1
++9 0 3  12 1 4  9 0 3  4 0 1  0 0 0  0 0 0
++0 0 0  0 0 0  28 3 10  99 9 35  156 14 55  182 16 64
++189 17 66  190 17 67  189 17 66  184 17 65  166 15 58  118 13 41
++45 4 16  3 0 1  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  11 1 4  52 5 18  101 9 35  134 12 47
++151 14 53  154 14 54  151 14 53  113 10 40  11 1 4  0 0 0
++3 0 1  67 6 24  159 14 56  190 17 67  190 17 67  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  190 17 67  191 17 67
++174 16 61  101 9 35  14 1 5  0 0 0  35 3 12  108 10 38
++122 11 43  122 11 43  112 10 39  87 8 30  50 5 17  13 1 5
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++3 0 1  56 5 19  141 13 49  182 16 64  191 17 67  191 17 67
++190 17 67  190 17 67  191 17 67  113 10 40  3 0 1  1 0 0
++79 7 28  180 16 63  190 17 67  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++189 17 66  188 17 66  122 11 43  11 1 4  41 4 14  176 16 62
++191 17 67  191 17 67  191 17 67  190 17 67  181 16 63  146 13 51
++75 7 26  10 1 4  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  7 1 2
++90 8 32  178 16 62  191 17 67  188 17 66  188 17 66  188 17 66
++188 17 66  190 17 67  141 13 49  22 2 8  0 0 0  41 4 14
++173 16 61  190 17 67  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  88 8 31  1 0 0  89 8 31
++185 17 65  189 17 66  188 17 66  188 17 66  189 17 66  191 17 67
++186 17 65  124 11 43  25 2 9  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  2 0 1  89 8 31
++184 17 65  189 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++190 17 67  151 14 53  34 3 12  0 0 0  0 0 0  79 7 28
++190 17 67  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  191 17 67  146 13 51  9 1 3  7 1 2
++108 10 38  187 17 66  189 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  190 17 67  141 13 49  22 2 8  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  52 5 18  176 16 62
++189 17 66  188 17 66  188 17 66  188 17 66  188 17 66  190 17 67
++151 14 53  38 3 13  0 0 0  0 0 0  0 0 0  50 5 17
++180 16 63  189 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  191 17 67  141 13 49  7 1 3  0 0 0
++11 1 4  112 10 39  187 17 66  189 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  190 17 67  113 10 40  5 0 2  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  7 1 3  132 12 46  191 17 67
++188 17 66  188 17 66  188 17 66  188 17 66  190 17 67  146 13 51
++35 3 12  0 0 0  0 0 0  0 0 0  0 0 0  5 0 2
++101 9 35  185 17 65  190 17 67  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  190 17 67  180 16 63  67 6 24  0 0 0  0 0 0
++0 0 0  11 1 4  108 10 38  186 17 65  189 17 66  188 17 66
++188 17 66  188 17 66  189 17 66  180 16 63  56 5 19  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  44 4 15  177 16 62  189 17 66
++188 17 66  188 17 66  189 17 66  189 17 66  134 12 47  28 3 10
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++8 1 3  79 7 28  159 14 56  188 17 66  191 17 67  190 17 67
++189 17 66  189 17 66  189 17 66  189 17 66  190 17 67  191 17 67
++188 17 66  158 14 55  72 7 25  4 0 1  0 0 0  0 0 0
++0 0 0  0 0 0  8 1 3  95 9 33  182 16 64  189 17 67
++188 17 66  188 17 66  188 17 66  191 17 67  122 11 43  3 0 1
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  88 8 31  190 17 67  188 17 66
++188 17 66  189 17 66  185 17 65  113 10 40  18 2 6  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  1 0 0  24 2 8  77 7 27  124 11 43  154 14 54
++168 15 59  173 16 61  173 16 61  168 15 59  154 14 54  124 11 43
++77 7 27  22 2 8  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  5 0 2  77 7 27  173 16 61
++190 17 67  188 17 66  188 17 66  190 17 67  164 15 57  23 2 8
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  1 0 0  118 13 41  191 17 67  188 17 66
++190 17 67  174 16 61  87 8 30  8 1 3  0 0 0  0 0 0
++0 0 0  0 0 0  10 1 4  29 3 10  40 4 14  36 3 13
++18 2 6  2 0 1  0 0 0  0 0 0  3 0 1  14 1 5
++26 2 9  33 3 11  32 3 11  25 2 9  13 1 5  3 0 1
++0 0 0  14 1 5  56 5 19  95 9 33  109 10 38  101 9 35
++77 7 27  35 3 12  5 0 2  0 0 0  1 0 0  56 5 19
++156 14 55  190 17 67  188 17 66  188 17 66  182 16 64  50 5 17
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  5 0 2  134 12 47  191 17 67  189 17 66
++151 14 53  52 5 18  2 0 1  0 0 0  0 0 0  1 0 0
++28 3 10  90 8 32  146 13 51  170 15 60  178 16 62  174 16 61
++158 14 55  112 10 39  40 4 14  1 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  3 0 1
++56 5 19  146 13 51  183 17 64  191 17 67  191 17 67  191 17 67
++188 17 66  173 16 61  122 11 43  41 4 14  1 0 0  0 0 0
++30 3 10  124 11 43  185 17 65  190 17 67  187 17 66  67 6 24
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  6 1 2  134 12 47  168 15 59  99 9 35
++21 2 7  0 0 0  0 0 0  0 0 0  6 1 2  77 7 27
++162 15 57  190 17 67  191 17 67  189 17 66  189 17 66  189 17 66
++190 17 67  191 17 67  169 15 59  75 7 26  3 0 1  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  2 0 1  79 7 28
++178 16 62  191 17 67  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  189 17 66  191 17 67  170 15 60  79 7 28  5 0 2
++0 0 0  10 1 3  78 7 27  159 14 56  188 17 66  75 7 26
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  1 0 0  35 3 12  29 3 10  2 0 1
++0 0 0  0 0 0  0 0 0  9 1 3  101 9 35  183 17 64
++190 17 67  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  190 17 67  178 16 63  67 6 23  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  52 5 18  174 16 61
++190 17 67  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  190 17 67  182 16 64  89 8 31
++4 0 1  0 0 0  0 0 0  25 2 9  73 7 26  31 3 11
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  4 0 1  98 9 34  187 17 66  189 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  190 17 67  158 14 55  25 2 9
++0 0 0  0 0 0  0 0 0  8 1 3  134 12 47  191 17 67
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  189 17 66  180 16 63
++68 6 24  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  6 1 2  19 2 7  3 0 1  0 0 0  0 0 0
++0 0 0  0 0 0  65 6 23  180 16 63  189 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  189 17 66  83 8 29
++0 0 0  0 0 0  0 0 0  41 4 14  177 16 62  189 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  190 17 67
++159 14 56  28 3 10  0 0 0  0 0 0  0 0 0  23 2 8
++41 4 14  5 0 2  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++23 2 8  113 10 40  159 14 56  65 6 23  0 0 0  0 0 0
++0 0 0  16 1 6  146 13 51  191 17 67  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  191 17 67  132 12 46
++5 0 2  0 0 0  0 0 0  77 7 27  189 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++190 17 67  98 9 34  0 0 0  0 0 0  12 1 4  134 12 47
++178 16 63  108 10 38  16 1 6  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  30 3 10
++141 13 49  190 17 67  191 17 67  134 12 47  6 1 2  0 0 0
++0 0 0  68 6 24  186 17 65  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  190 17 67  156 14 55
++14 1 5  0 0 0  0 0 0  98 9 34  191 17 67  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++190 17 67  156 14 55  19 2 7  0 0 0  47 4 16  181 16 63
++190 17 67  189 17 66  126 14 44  17 2 6  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  16 1 6  134 12 47
++191 17 67  188 17 66  190 17 67  162 15 57  19 2 7  0 0 0
++3 0 1  123 11 43  191 17 67  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  190 17 67  163 15 57
++20 2 7  0 0 0  0 0 0  101 9 35  191 17 67  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  182 16 64  52 5 18  0 0 0  73 7 26  188 17 66
++188 17 66  188 17 66  189 17 66  109 10 38  5 0 2  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  95 9 33  189 17 66
++188 17 66  188 17 66  189 17 66  171 15 60  29 3 10  0 0 0
++16 1 6  156 14 55  190 17 67  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  190 17 67  158 14 55
++17 2 6  0 0 0  0 0 0  85 8 30  190 17 67  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  189 17 66  81 7 29  0 0 0  85 8 30  190 17 67
++188 17 66  188 17 66  189 17 66  180 16 63  56 5 19  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  25 2 9  162 15 57  190 17 67
++188 17 66  188 17 66  189 17 66  173 16 61  31 3 11  0 0 0
++30 3 10  171 15 60  189 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  191 17 67  141 13 49
++7 1 2  0 0 0  0 0 0  56 5 19  183 17 64  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  191 17 67  98 9 34  0 0 0  88 8 31  190 17 67
++188 17 66  188 17 66  188 17 66  191 17 67  124 11 43  5 0 2
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  68 6 24  187 17 66  188 17 66
++188 17 66  188 17 66  189 17 66  170 15 60  28 3 10  0 0 0
++34 3 12  174 16 61  189 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  191 17 67  101 9 35
++0 0 0  0 0 0  0 0 0  21 2 7  159 14 56  190 17 67
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  191 17 67  98 9 34  0 0 0  81 7 29  189 17 66
++188 17 66  188 17 66  188 17 66  189 17 66  168 15 59  28 3 10
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  109 10 38  191 17 67  188 17 66
++188 17 66  188 17 66  190 17 67  163 15 57  21 2 7  0 0 0
++26 2 9  168 15 59  189 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  189 17 66  180 16 63  47 4 16
++0 0 0  0 0 0  0 0 0  0 0 0  108 10 38  190 17 67
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  189 17 66  78 7 27  0 0 0  68 6 24  187 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  183 17 64  56 5 19
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  3 0 1  131 12 46  191 17 67  188 17 66
++188 17 66  188 17 66  190 17 67  151 14 53  12 1 4  0 0 0
++11 1 4  146 13 51  190 17 67  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  191 17 67  126 14 44  7 1 2
++0 0 0  0 0 0  0 0 0  0 0 0  32 3 11  164 15 58
++190 17 67  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++189 17 66  178 16 62  44 4 15  0 0 0  50 5 17  182 16 64
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  72 7 25
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  5 0 2  134 12 47  191 17 67  188 17 66
++188 17 66  188 17 66  191 17 67  131 12 46  3 0 1  0 0 0
++0 0 0  101 9 35  190 17 67  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  190 17 67  170 15 60  44 4 15  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  77 7 27
++183 17 64  189 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++191 17 67  134 12 47  9 1 3  0 0 0  31 3 11  171 15 60
++189 17 66  188 17 66  188 17 66  188 17 66  188 17 66  72 7 25
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  2 0 1  124 11 43  191 17 67  188 17 66
++188 17 66  188 17 66  191 17 67  101 9 35  0 0 0  0 0 0
++0 0 0  35 3 12  168 15 59  190 17 67  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  189 17 66  182 16 64  77 7 27  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  6 1 2
++99 9 35  185 17 65  189 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  189 17 66
++177 16 62  56 5 19  0 0 0  0 0 0  13 1 5  151 14 53
++190 17 67  188 17 66  188 17 66  188 17 66  185 17 65  56 5 19
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  99 9 35  191 17 67  188 17 66
++188 17 66  188 17 66  186 17 65  65 6 23  0 0 0  0 0 0
++0 0 0  0 0 0  79 7 28  182 16 64  190 17 67  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++191 17 67  177 16 62  83 8 29  4 0 1  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++8 1 3  89 8 31  175 16 62  191 17 67  189 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  190 17 67  181 16 63
++85 8 30  3 0 1  0 0 0  0 0 0  1 0 0  118 13 41
++191 17 67  188 17 66  188 17 66  189 17 66  173 16 61  34 3 12
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  56 5 19  183 17 64  188 17 66
++188 17 66  189 17 66  169 15 59  30 3 10  0 0 0  0 0 0
++0 0 0  0 0 0  5 0 2  83 8 29  173 16 61  191 17 67
++190 17 67  189 17 66  189 17 66  190 17 67  191 17 67  187 17 66
++151 14 53  56 5 19  3 0 1  0 0 0  16 1 6  50 5 17
++79 7 28  95 9 33  95 9 33  75 7 26  41 4 14  10 1 4
++0 0 0  2 0 1  50 5 17  132 12 46  178 16 62  190 17 67
++191 17 67  191 17 67  191 17 67  186 17 65  154 14 54  68 6 24
++4 0 1  0 0 0  0 0 0  0 0 0  0 0 0  72 7 25
++187 17 66  188 17 66  188 17 66  191 17 67  141 13 49  9 1 3
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  14 1 5  151 14 53  190 17 67
++188 17 66  191 17 67  131 12 46  5 0 2  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  2 0 1  44 4 15  113 10 40
++156 14 55  173 16 61  174 16 61  164 15 58  134 12 47  77 7 27
++18 2 6  0 0 0  16 1 6  85 8 30  151 14 53  182 16 64
++189 17 66  191 17 67  190 17 67  188 17 66  177 16 62  141 13 49
++68 6 24  8 1 3  0 0 0  8 1 3  44 4 15  88 8 31
++113 10 40  122 11 43  108 10 38  67 6 24  20 2 7  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  28 3 10
++166 15 58  190 17 67  188 17 66  187 17 66  79 7 28  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  73 7 26  185 17 65
++189 17 66  184 17 65  65 6 23  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  2 0 1
++17 2 6  32 3 11  34 3 12  22 2 8  6 1 2  0 0 0
++0 0 0  38 3 13  141 13 49  188 17 66  190 17 67  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  189 17 66  191 17 67
++184 17 65  122 11 43  21 2 7  0 0 0  0 0 0  0 0 0
++0 0 0  1 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  1 0 0
++108 10 38  191 17 67  191 17 67  141 13 49  16 1 6  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  8 1 3  112 10 39
++186 17 65  124 11 43  10 1 4  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++36 3 13  156 14 55  191 17 67  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++189 17 66  190 17 67  134 12 47  18 2 6  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  7 1 2  41 4 14  75 7 26  66 5 23  19 2 7
++26 2 9  144 13 50  154 14 54  40 4 14  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  13 1 5
++56 5 19  19 2 7  0 0 0  7 1 2  29 3 10  35 3 12
++19 2 7  2 0 1  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  13 1 5
++134 12 47  191 17 67  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  189 17 67  108 10 38  3 0 1  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  1 0 0
++40 4 14  124 11 43  177 16 62  188 17 66  187 17 66  144 13 50
++24 2 8  17 2 6  22 2 8  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  19 2 7  122 11 43  171 15 60  175 16 62
++159 14 56  112 10 39  40 4 14  2 0 1  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  72 7 25
++186 17 65  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  189 17 66  174 16 61  41 4 14  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  3 0 1  72 7 25
++168 15 59  191 17 67  189 17 66  188 17 66  188 17 66  190 17 67
++95 9 33  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  95 9 33  191 17 67  189 17 66  189 17 66
++190 17 67  191 17 67  171 15 60  90 8 32  12 1 4  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  5 0 2  132 12 46
++191 17 67  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  190 17 67  98 9 34  0 0 0
++0 0 0  0 0 0  0 0 0  5 0 2  88 8 31  180 16 63
++190 17 67  188 17 66  188 17 66  188 17 66  188 17 66  191 17 67
++146 13 51  11 1 4  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  9 1 3  144 13 50  191 17 67  188 17 66  188 17 66
++188 17 66  188 17 66  189 17 66  187 17 66  123 11 43  20 2 7
++0 0 0  0 0 0  0 0 0  0 0 0  21 2 7  163 15 57
++190 17 67  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  191 17 67  134 12 47  5 0 2
++0 0 0  0 0 0  3 0 1  88 8 31  182 16 64  189 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  189 17 66
++171 15 60  31 3 11  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  20 2 7  162 15 57  190 17 67  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  190 17 67  132 12 46
++20 2 7  0 0 0  0 0 0  0 0 0  32 3 11  173 16 61
++189 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  190 17 67  151 14 53  12 1 4
++0 0 0  0 0 0  72 7 25  180 16 63  189 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++181 16 63  47 4 16  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  21 2 7  163 15 57  190 17 67  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  190 17 67
++122 11 43  9 1 3  0 0 0  0 0 0  30 3 10  171 15 60
++189 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  190 17 67  146 13 51  10 1 4
++0 0 0  38 3 13  166 15 58  190 17 67  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++183 17 64  52 5 18  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  13 1 5  154 14 54  190 17 67  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++186 17 65  79 7 28  0 0 0  0 0 0  14 1 5  156 14 54
++190 17 67  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  191 17 67  124 11 43  2 0 1
++5 0 2  122 11 43  191 17 67  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++182 16 64  47 4 16  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  3 0 1  126 14 44  191 17 67  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++190 17 67  158 14 55  23 2 8  0 0 0  1 0 0  113 10 40
++191 17 67  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  78 7 27  0 0 0
++47 4 16  177 16 62  189 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  189 17 66
++173 16 61  34 3 12  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  85 8 30  189 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  79 7 28  0 0 0  0 0 0  47 4 16
++175 16 62  189 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  190 17 67  156 14 55  22 2 8  0 0 0
++109 10 38  191 17 67  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  190 17 67
++151 14 53  13 1 5  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  35 3 12  173 16 61  189 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  191 17 67  134 12 47  7 1 2  0 0 0  3 0 1
++99 9 35  188 17 66  189 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  189 17 66  181 16 63  68 6 24  0 0 0  18 2 6
++156 14 55  190 17 67  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  190 17 67
++101 9 35  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  3 0 1  118 13 41  191 17 67  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  189 17 66  168 15 59  28 3 10  0 0 0  0 0 0
++12 1 4  113 10 40  187 17 66  189 17 67  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++190 17 67  180 16 63  88 8 31  4 0 1  0 0 0  47 4 16
++180 16 63  189 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  190 17 67  168 15 59
++36 3 13  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  38 3 13  164 15 58  190 17 67
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  182 16 64  50 5 17  0 0 0  0 0 0
++0 0 0  11 1 4  90 8 32  169 15 59  190 17 67  190 17 67
++189 17 66  189 17 66  189 17 66  189 17 66  191 17 67  189 17 66
++158 14 55  68 6 24  4 0 1  0 0 0  0 0 0  73 7 26
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  189 17 66  185 17 65  83 8 29
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  65 6 23  174 16 61
++190 17 67  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  185 17 65  56 5 19  0 0 0  0 0 0
++0 0 0  0 0 0  2 0 1  35 3 12  99 9 35  146 13 51
++170 15 60  177 16 62  177 16 62  166 15 58  141 13 49  85 8 30
++24 2 8  0 0 0  0 0 0  0 0 0  0 0 0  85 8 30
++190 17 67  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  189 17 66  112 10 39  8 1 3
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  1 0 0  68 6 24
++170 15 60  191 17 67  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  182 16 64  50 5 17  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  1 0 0  11 1 4
++28 3 10  40 4 14  38 3 13  25 2 9  8 1 3  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  78 7 27
++189 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  189 17 66  187 17 66  113 10 40  14 1 5  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  1 0 0
++47 4 16  141 13 49  186 17 65  191 17 67  190 17 67  189 17 66
++189 17 66  191 17 67  156 14 55  20 2 7  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  44 4 15
++178 16 62  190 17 67  188 17 66  188 17 66  188 17 66  190 17 67
++191 17 67  173 16 61  90 8 32  10 1 4  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  14 1 5  68 6 24  131 12 46  162 15 57  174 16 61
++171 15 60  146 13 51  56 5 19  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  3 0 1  14 1 5  29 3 10
++41 4 14  47 4 16  50 5 17  45 4 16  34 3 12  18 2 6
++5 0 2  0 0 0  0 0 0  0 0 0  0 0 0  5 0 2
++90 8 32  169 15 59  185 17 65  187 17 66  182 16 64  163 15 57
++113 10 40  41 4 14  2 0 1  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  5 0 2  21 2 7  34 3 12
++29 3 10  11 1 4  0 0 0  0 0 0  0 0 0  0 0 0
++3 0 1  32 3 11  79 7 28  124 11 43  154 14 54  171 15 60
++180 16 63  182 16 64  182 16 64  180 16 63  174 16 61  159 14 56
++132 12 46  88 8 31  34 3 12  3 0 1  0 0 0  0 0 0
++3 0 1  29 3 10  56 5 19  65 6 23  50 5 17  23 2 8
++3 0 1  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  25 2 9
++109 10 38  169 15 59  189 17 66  191 17 67  190 17 67  189 17 66
++189 17 66  188 17 66  188 17 66  188 17 66  189 17 66  190 17 67
++191 17 67  190 17 67  171 15 60  98 9 34  10 1 3  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  14 1 5  141 13 49
++191 17 67  189 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  189 17 67  186 17 65  65 6 23  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  23 2 8  166 15 58
++190 17 67  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  189 17 66  176 16 62  45 4 16  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  1 0 0  83 8 29
++183 17 64  189 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++188 17 66  189 17 66  185 17 65  95 9 33  3 0 1  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  5 0 2
++85 8 30  176 16 62  191 17 67  188 17 66  188 17 66  188 17 66
++188 17 66  188 17 66  188 17 66  188 17 66  188 17 66  188 17 66
++191 17 67  180 16 63  95 9 33  7 1 3  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++2 0 1  52 5 18  141 13 49  185 17 65  191 17 67  189 17 67
++189 17 66  188 17 66  188 17 66  189 17 66  191 17 67  187 17 66
++146 13 51  56 5 19  4 0 1  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  14 1 5  68 6 24  131 12 46  166 15 58
++180 16 63  183 17 64  180 16 63  168 15 59  134 12 47  75 7 26
++17 2 6  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  5 0 2  24 2 8
++44 4 15  52 5 18  45 4 16  26 2 9  6 1 2  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0
++0 0 0  0 0 0  0 0 0
diff --git a/target/linux/brcm2708/patches-4.4/0031-dmaengine-Add-support-for-BCM2708.patch b/target/linux/brcm2708/patches-4.4/0031-dmaengine-Add-support-for-BCM2708.patch
new file mode 100644
index 0000000..38d70d9
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0031-dmaengine-Add-support-for-BCM2708.patch
@@ -0,0 +1,612 @@
+From 51d9f11052d8e51931b1c5f816d737e64e1caa22 Mon Sep 17 00:00:00 2001
+From: Florian Meier <florian.meier at koalo.de>
+Date: Fri, 22 Nov 2013 14:22:53 +0100
+Subject: [PATCH 031/127] dmaengine: Add support for BCM2708
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+Add support for DMA controller of BCM2708 as used in the Raspberry Pi.
+Currently it only supports cyclic DMA.
+
+Signed-off-by: Florian Meier <florian.meier at koalo.de>
+
+dmaengine: expand functionality by supporting scatter/gather transfers; sdhci-bcm2708 and dma.c: fix for LITE channels
+
+DMA: fix cyclic LITE length overflow bug
+
+dmaengine: bcm2708: Remove chancnt affectations
+
+Mirror bcm2835-dma.c commit 9eba5536a7434c69d8c185d4bd1c70734d92287d:
+chancnt is already filled by dma_async_device_register, which uses the channel
+list to know how many channels there are.
+
+Since it's already filled, we can safely remove it from the drivers' probe
+function.
+
+Signed-off-by: Noralf Trønnes <noralf at tronnes.org>
+
+dmaengine: bcm2708: overwrite dreq only if it is not set
+
+dreq is set when the DMA channel is fetched from Device Tree.
+slave_id is set using dmaengine_slave_config().
+Only overwrite dreq with slave_id if it is not set.
+
+dreq/slave_id in the cyclic DMA case is not touched, because I don't
+have hardware to test with.
+
+Signed-off-by: Noralf Trønnes <noralf at tronnes.org>
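A minimal sketch of that rule, with hypothetical structure and helper names
(the real channel structure in bcm2708-dmaengine may differ):

	/* hypothetical: keep a DT-provided dreq, fall back to slave_id otherwise */
	static int bcm2708_dma_slave_config(struct dma_chan *chan,
					    struct dma_slave_config *cfg)
	{
		struct bcm2708_dmachan *c = to_bcm2708_dmachan(chan); /* made-up helper */

		c->cfg = *cfg;
		if (!c->dreq)			/* dreq == 0: not set from Device Tree */
			c->dreq = cfg->slave_id;

		return 0;
	}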
+
+dmaengine: bcm2708: do device registration in the board file
+
+Don't register the device in the driver. Do it in the board file.
+
+Signed-off-by: Noralf Trønnes <noralf at tronnes.org>
+
+dmaengine: bcm2708: don't restrict DT support to ARCH_BCM2835
+
+Both ARCH_BCM2835 and ARCH_BCM270x are built with OF now.
+Add Device Tree support to the non-ARCH_BCM2835 case.
+Use the same driver name regardless of architecture.
+
+Signed-off-by: Noralf Trønnes <noralf at tronnes.org>
+
+BCM270x_DT: add bcm2835-dma entry
+
+Add Device Tree entry for bcm2835-dma.
+The entry doesn't contain any resources since they are handled
+by the arch/arm/mach-bcm270x/dma.c driver.
+In non-DT mode, don't add the device in the board file.
+
+Signed-off-by: Noralf Trønnes <noralf at tronnes.org>
+
+bcm2708-dmaengine: Add debug options
+
+BCM270x: Add memory and irq resources to dmaengine device and DT
+
+Prepare for merging the legacy DMA API arch driver dma.c
+with bcm2708-dmaengine by adding memory and irq resources to both
+the platform device (board file) and the Device Tree node.
+Don't use BCM_DMAMAN_DRIVER_NAME, so we don't have to include mach/dma.h.
+
+Signed-off-by: Noralf Trønnes <noralf at tronnes.org>
+
+dmaengine: bcm2708: Merge with arch dma.c driver and disable dma.c
+
+Merge the legacy DMA API driver with bcm2708-dmaengine.
+This is done so we can use bcm2708_fb on ARCH_BCM2835 (mailbox
+driver is also needed).
+
+Changes to the dma.c code:
+- Use BIT() macro.
+- Cut down some comments to one line.
+- Add mutex to vc_dmaman and use this, since the dev lock is locked
+  during probing of the engine part.
+- Add global g_dmaman variable since drvdata is used by the engine part.
+- Restructure for readability:
+  vc_dmaman_chan_alloc()
+  vc_dmaman_chan_free()
+  bcm_dma_chan_free()
+- Restructure bcm_dma_chan_alloc() to simplify error handling.
+- Use device irq resources instead of hardcoded bcm_dma_irqs table.
+- Remove dev_dmaman_register() and code it directly.
+- Remove dev_dmaman_deregister() and code it directly.
+- Simplify bcm_dmaman_probe() using devm_* functions.
+- Get dmachans from DT if available.
+- Keep 'dma.dmachans' module argument name for backwards compatibility.
+
+Make it available on ARCH_BCM2835 as well.
+
+Signed-off-by: Noralf Trønnes <noralf at tronnes.org>
+
+dmaengine: bcm2708: set residue_granularity field
+
+bcm2708-dmaengine supports residue reporting at burst level
+but didn't report this via the residue_granularity field.
+
+Without this field set properly we get playback issues with I2S cards.
+
+dmaengine: bcm2708-dmaengine: Fix memory leak when stopping a running transfer
+
+bcm2708-dmaengine: Use more DMA channels (but not 12)
+
+1) Only the bcm2708_fb drivers uses the legacy DMA API, and
+it requires a BULK-capable channel, so all other types
+(FAST, NORMAL and LITE) can be made available to the regular
+DMA API.
+
+2) DMA channels 11-14 share an interrupt. The driver can't
+handle this, so don't use channels 12-14 (12 was used, probably
+because it appears to have an interrupt, but in reality that
+interrupt is for activity on ANY channel). This may explain
+a lockup encountered when running out of DMA channels.
+
+The combined effect of this patch is to leave 7 DMA channels
+available + channel 0 for bcm2708_fb via the legacy API.
+
+See: https://github.com/raspberrypi/linux/issues/1110
+     https://github.com/raspberrypi/linux/issues/1108
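As a rough picture of the resulting channel split (the masks below are
illustrative only; the real set is further restricted by the firmware/DT
"dmachans" bitmap handed to the DMA manager):

	/* channel 0: BULK-capable, reserved for bcm2708_fb via the legacy API */
	#define FB_LEGACY_CHAN_MASK	BIT(0)
	/* channels 12-14 are skipped: 11-14 share a single interrupt line */
	#define SHARED_IRQ_CHAN_MASK	GENMASK(14, 12)
	/* everything else is a candidate for the regular dmaengine API */
	#define DMAENGINE_CANDIDATES \
		(GENMASK(14, 0) & ~(FB_LEGACY_CHAN_MASK | SHARED_IRQ_CHAN_MASK))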
+
+dmaengine: bcm2708: Make legacy API available for bcm2835-dma
+
+bcm2708_fb uses the legacy DMA API, so in order to start using
+bcm2835-dma, bcm2835-dma has to support the legacy API. Make this
+possible by exporting bcm_dmaman_probe() and bcm_dmaman_remove().
+
+Signed-off-by: Noralf Trønnes <noralf at tronnes.org>
+
+dmaengine: bcm2708: Change DT compatible string
+
+Both bcm2835-dma and bcm2708-dmaengine have the same compatible string.
+So change compatible to "brcm,bcm2708-dma".
+
+Signed-off-by: Noralf Trønnes <noralf at tronnes.org>
+
+dmaengine: bcm2708: Remove driver but keep legacy API
+
+Dropping non-DT support means we don't need this driver,
+but we still need the legacy DMA API.
+
+Signed-off-by: Noralf Trønnes <noralf at tronnes.org>
+---
+ drivers/dma/Kconfig                       |   4 +
+ drivers/dma/Makefile                      |   1 +
+ drivers/dma/bcm2708-dmaengine.c           | 281 ++++++++++++++++++++++++++++++
+ include/linux/platform_data/dma-bcm2708.h | 143 +++++++++++++++
+ 4 files changed, 429 insertions(+)
+ create mode 100644 drivers/dma/bcm2708-dmaengine.c
+ create mode 100644 include/linux/platform_data/dma-bcm2708.h
+
+--- a/drivers/dma/Kconfig
++++ b/drivers/dma/Kconfig
+@@ -470,6 +470,10 @@ config TIMB_DMA
+ 	help
+ 	  Enable support for the Timberdale FPGA DMA engine.
+ 
++config DMA_BCM2708
++	tristate "BCM2708 DMA legacy API support"
++	depends on DMA_BCM2835
++
+ config TI_CPPI41
+ 	tristate "AM33xx CPPI41 DMA support"
+ 	depends on ARCH_OMAP
+--- a/drivers/dma/Makefile
++++ b/drivers/dma/Makefile
+@@ -18,6 +18,7 @@ obj-$(CONFIG_AT_HDMAC) += at_hdmac.o
+ obj-$(CONFIG_AT_XDMAC) += at_xdmac.o
+ obj-$(CONFIG_AXI_DMAC) += dma-axi-dmac.o
+ obj-$(CONFIG_COH901318) += coh901318.o coh901318_lli.o
++obj-$(CONFIG_DMA_BCM2708) += bcm2708-dmaengine.o
+ obj-$(CONFIG_DMA_BCM2835) += bcm2835-dma.o
+ obj-$(CONFIG_DMA_JZ4740) += dma-jz4740.o
+ obj-$(CONFIG_DMA_JZ4780) += dma-jz4780.o
+--- /dev/null
++++ b/drivers/dma/bcm2708-dmaengine.c
+@@ -0,0 +1,281 @@
++/*
++ * BCM2708 legacy DMA API
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License as published by
++ * the Free Software Foundation; either version 2 of the License, or
++ * (at your option) any later version.
++ *
++ * This program is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
++ * GNU General Public License for more details.
++ */
++
++#include <linux/init.h>
++#include <linux/interrupt.h>
++#include <linux/list.h>
++#include <linux/module.h>
++#include <linux/platform_data/dma-bcm2708.h>
++#include <linux/platform_device.h>
++#include <linux/slab.h>
++#include <linux/io.h>
++#include <linux/spinlock.h>
++
++#include "virt-dma.h"
++
++#define CACHE_LINE_MASK 31
++#define DEFAULT_DMACHAN_BITMAP 0x10  /* channel 4 only */
++
++/* valid only for channels 0 - 14, 15 has its own base address */
++#define BCM2708_DMA_CHAN(n)	((n) << 8) /* base address */
++#define BCM2708_DMA_CHANIO(dma_base, n) \
++	((void __iomem *)((char *)(dma_base) + BCM2708_DMA_CHAN(n)))
++
++struct vc_dmaman {
++	void __iomem *dma_base;
++	u32 chan_available; /* bitmap of available channels */
++	u32 has_feature[BCM_DMA_FEATURE_COUNT]; /* bitmap of feature presence */
++	struct mutex lock;
++};
++
++static struct device *dmaman_dev;	/* we assume there's only one! */
++static struct vc_dmaman *g_dmaman;	/* DMA manager */
++
++/* DMA Auxiliary Functions */
++
++/* A DMA buffer on an arbitrary boundary may separate a cache line into a
++   section inside the DMA buffer and another section outside it.
++   Even if we flush DMA buffers from the cache there is always the chance that
++   during a DMA someone will access the part of a cache line that is outside
++   the DMA buffer - which will then bring in unwelcome data.
++   Without being able to dictate our own buffer pools we must insist that
++   DMA buffers consist of a whole number of cache lines.
++*/
++extern int bcm_sg_suitable_for_dma(struct scatterlist *sg_ptr, int sg_len)
++{
++	int i;
++
++	for (i = 0; i < sg_len; i++) {
++		if (sg_ptr[i].offset & CACHE_LINE_MASK ||
++		    sg_ptr[i].length & CACHE_LINE_MASK)
++			return 0;
++	}
++
++	return 1;
++}
++EXPORT_SYMBOL_GPL(bcm_sg_suitable_for_dma);
++
++extern void bcm_dma_start(void __iomem *dma_chan_base,
++			  dma_addr_t control_block)
++{
++	dsb();	/* ARM data synchronization (push) operation */
++
++	writel(control_block, dma_chan_base + BCM2708_DMA_ADDR);
++	writel(BCM2708_DMA_ACTIVE, dma_chan_base + BCM2708_DMA_CS);
++}
++EXPORT_SYMBOL_GPL(bcm_dma_start);
++
++extern void bcm_dma_wait_idle(void __iomem *dma_chan_base)
++{
++	dsb();
++
++	/* ugly busy-wait is the only option for now */
++	while (readl(dma_chan_base + BCM2708_DMA_CS) & BCM2708_DMA_ACTIVE)
++		cpu_relax();
++}
++EXPORT_SYMBOL_GPL(bcm_dma_wait_idle);
++
++extern bool bcm_dma_is_busy(void __iomem *dma_chan_base)
++{
++	dsb();
++
++	return readl(dma_chan_base + BCM2708_DMA_CS) & BCM2708_DMA_ACTIVE;
++}
++EXPORT_SYMBOL_GPL(bcm_dma_is_busy);
++
++/* Complete an ongoing DMA (assuming its results are to be ignored)
++   Does nothing if there is no DMA in progress.
++   This routine waits for the current AXI transfer to complete before
++   terminating the current DMA. If the current transfer is hung on a DREQ used
++   by an uncooperative peripheral, the AXI transfer may never complete.  In this
++   case the routine times out and returns a non-zero error code.
++   Use of this routine doesn't guarantee that the ongoing or aborted DMA
++   does not produce an interrupt.
++*/
++extern int bcm_dma_abort(void __iomem *dma_chan_base)
++{
++	unsigned long int cs;
++	int rc = 0;
++
++	cs = readl(dma_chan_base + BCM2708_DMA_CS);
++
++	if (BCM2708_DMA_ACTIVE & cs) {
++		long int timeout = 10000;
++
++		/* write 0 to the active bit - pause the DMA */
++		writel(0, dma_chan_base + BCM2708_DMA_CS);
++
++		/* wait for any current AXI transfer to complete */
++		while (0 != (cs & BCM2708_DMA_ISPAUSED) && --timeout >= 0)
++			cs = readl(dma_chan_base + BCM2708_DMA_CS);
++
++		if (0 != (cs & BCM2708_DMA_ISPAUSED)) {
++			/* we'll un-pause when we set off our next DMA */
++			rc = -ETIMEDOUT;
++
++		} else if (BCM2708_DMA_ACTIVE & cs) {
++			/* terminate the control block chain */
++			writel(0, dma_chan_base + BCM2708_DMA_NEXTCB);
++
++			/* abort the whole DMA */
++			writel(BCM2708_DMA_ABORT | BCM2708_DMA_ACTIVE,
++			       dma_chan_base + BCM2708_DMA_CS);
++		}
++	}
++
++	return rc;
++}
++EXPORT_SYMBOL_GPL(bcm_dma_abort);
++
++ /* DMA Manager Device Methods */
++
++static void vc_dmaman_init(struct vc_dmaman *dmaman, void __iomem *dma_base,
++			   u32 chans_available)
++{
++	dmaman->dma_base = dma_base;
++	dmaman->chan_available = chans_available;
++	dmaman->has_feature[BCM_DMA_FEATURE_FAST_ORD] = 0x0c;  /* 2 & 3 */
++	dmaman->has_feature[BCM_DMA_FEATURE_BULK_ORD] = 0x01;  /* 0 */
++	dmaman->has_feature[BCM_DMA_FEATURE_NORMAL_ORD] = 0xfe;  /* 1 to 7 */
++	dmaman->has_feature[BCM_DMA_FEATURE_LITE_ORD] = 0x7f00;  /* 8 to 14 */
++}
++
++static int vc_dmaman_chan_alloc(struct vc_dmaman *dmaman,
++				unsigned required_feature_set)
++{
++	u32 chans;
++	int chan = 0;
++	int feature;
++
++	chans = dmaman->chan_available;
++	for (feature = 0; feature < BCM_DMA_FEATURE_COUNT; feature++)
++		/* select the subset of available channels with the desired
++		   features */
++		if (required_feature_set & (1 << feature))
++			chans &= dmaman->has_feature[feature];
++
++	if (!chans)
++		return -ENOENT;
++
++	/* return the ordinal of the first channel in the bitmap */
++	while (chans != 0 && (chans & 1) == 0) {
++		chans >>= 1;
++		chan++;
++	}
++	/* claim the channel */
++	dmaman->chan_available &= ~(1 << chan);
++
++	return chan;
++}
++
++static int vc_dmaman_chan_free(struct vc_dmaman *dmaman, int chan)
++{
++	if (chan < 0)
++		return -EINVAL;
++
++	if ((1 << chan) & dmaman->chan_available)
++		return -EIDRM;
++
++	dmaman->chan_available |= (1 << chan);
++
++	return 0;
++}
++
++/* DMA Manager Monitor */
++
++extern int bcm_dma_chan_alloc(unsigned required_feature_set,
++			      void __iomem **out_dma_base, int *out_dma_irq)
++{
++	struct vc_dmaman *dmaman = g_dmaman;
++	struct platform_device *pdev = to_platform_device(dmaman_dev);
++	struct resource *r;
++	int chan;
++
++	if (!dmaman_dev)
++		return -ENODEV;
++
++	mutex_lock(&dmaman->lock);
++	chan = vc_dmaman_chan_alloc(dmaman, required_feature_set);
++	if (chan < 0)
++		goto out;
++
++	r = platform_get_resource(pdev, IORESOURCE_IRQ, (unsigned int)chan);
++	if (!r) {
++		dev_err(dmaman_dev, "failed to get irq for DMA channel %d\n",
++			chan);
++		vc_dmaman_chan_free(dmaman, chan);
++		chan = -ENOENT;
++		goto out;
++	}
++
++	*out_dma_base = BCM2708_DMA_CHANIO(dmaman->dma_base, chan);
++	*out_dma_irq = r->start;
++	dev_dbg(dmaman_dev,
++		"Legacy API allocated channel=%d, base=%p, irq=%i\n",
++		chan, *out_dma_base, *out_dma_irq);
++
++out:
++	mutex_unlock(&dmaman->lock);
++
++	return chan;
++}
++EXPORT_SYMBOL_GPL(bcm_dma_chan_alloc);
++
++extern int bcm_dma_chan_free(int channel)
++{
++	struct vc_dmaman *dmaman = g_dmaman;
++	int rc;
++
++	if (!dmaman_dev)
++		return -ENODEV;
++
++	mutex_lock(&dmaman->lock);
++	rc = vc_dmaman_chan_free(dmaman, channel);
++	mutex_unlock(&dmaman->lock);
++
++	return rc;
++}
++EXPORT_SYMBOL_GPL(bcm_dma_chan_free);
++
++int bcm_dmaman_probe(struct platform_device *pdev, void __iomem *base,
++		     u32 chans_available)
++{
++	struct device *dev = &pdev->dev;
++	struct vc_dmaman *dmaman;
++
++	dmaman = devm_kzalloc(dev, sizeof(*dmaman), GFP_KERNEL);
++	if (!dmaman)
++		return -ENOMEM;
++
++	mutex_init(&dmaman->lock);
++	vc_dmaman_init(dmaman, base, chans_available);
++	g_dmaman = dmaman;
++	dmaman_dev = dev;
++
++	dev_info(dev, "DMA legacy API manager at %p, dmachans=0x%x\n",
++		 base, chans_available);
++
++	return 0;
++}
++EXPORT_SYMBOL(bcm_dmaman_probe);
++
++int bcm_dmaman_remove(struct platform_device *pdev)
++{
++	dmaman_dev = NULL;
++
++	return 0;
++}
++EXPORT_SYMBOL(bcm_dmaman_remove);
++
++MODULE_LICENSE("GPL");
+--- /dev/null
++++ b/include/linux/platform_data/dma-bcm2708.h
+@@ -0,0 +1,143 @@
++/*
++ *  Copyright (C) 2010 Broadcom
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License version 2 as
++ * published by the Free Software Foundation.
++ */
++
++#ifndef _PLAT_BCM2708_DMA_H
++#define _PLAT_BCM2708_DMA_H
++
++/* DMA CS Control and Status bits */
++#define BCM2708_DMA_ACTIVE	BIT(0)
++#define BCM2708_DMA_INT		BIT(2)
++#define BCM2708_DMA_ISPAUSED	BIT(4)  /* Pause requested or not active */
++#define BCM2708_DMA_ISHELD	BIT(5)  /* Is held by DREQ flow control */
++#define BCM2708_DMA_ERR		BIT(8)
++#define BCM2708_DMA_ABORT	BIT(30) /* stop current CB, go to next, WO */
++#define BCM2708_DMA_RESET	BIT(31) /* WO, self clearing */
++
++/* DMA control block "info" field bits */
++#define BCM2708_DMA_INT_EN	BIT(0)
++#define BCM2708_DMA_TDMODE	BIT(1)
++#define BCM2708_DMA_WAIT_RESP	BIT(3)
++#define BCM2708_DMA_D_INC	BIT(4)
++#define BCM2708_DMA_D_WIDTH	BIT(5)
++#define BCM2708_DMA_D_DREQ	BIT(6)
++#define BCM2708_DMA_S_INC	BIT(8)
++#define BCM2708_DMA_S_WIDTH	BIT(9)
++#define BCM2708_DMA_S_DREQ	BIT(10)
++
++#define	BCM2708_DMA_BURST(x)	(((x) & 0xf) << 12)
++#define	BCM2708_DMA_PER_MAP(x)	((x) << 16)
++#define	BCM2708_DMA_WAITS(x)	(((x) & 0x1f) << 21)
++
++#define BCM2708_DMA_DREQ_EMMC	11
++#define BCM2708_DMA_DREQ_SDHOST	13
++
++#define BCM2708_DMA_CS		0x00 /* Control and Status */
++#define BCM2708_DMA_ADDR	0x04
++/* the current control block appears in the following registers - read only */
++#define BCM2708_DMA_INFO	0x08
++#define BCM2708_DMA_SOURCE_AD	0x0c
++#define BCM2708_DMA_DEST_AD	0x10
++#define BCM2708_DMA_NEXTCB	0x1C
++#define BCM2708_DMA_DEBUG	0x20
++
++#define BCM2708_DMA4_CS		(BCM2708_DMA_CHAN(4) + BCM2708_DMA_CS)
++#define BCM2708_DMA4_ADDR	(BCM2708_DMA_CHAN(4) + BCM2708_DMA_ADDR)
++
++#define BCM2708_DMA_TDMODE_LEN(w, h) ((h) << 16 | (w))
++
++/* When listing features we can ask for when allocating DMA channels, give
++   those with higher priority smaller ordinal numbers */
++#define BCM_DMA_FEATURE_FAST_ORD	0
++#define BCM_DMA_FEATURE_BULK_ORD	1
++#define BCM_DMA_FEATURE_NORMAL_ORD	2
++#define BCM_DMA_FEATURE_LITE_ORD	3
++#define BCM_DMA_FEATURE_FAST		BIT(BCM_DMA_FEATURE_FAST_ORD)
++#define BCM_DMA_FEATURE_BULK		BIT(BCM_DMA_FEATURE_BULK_ORD)
++#define BCM_DMA_FEATURE_NORMAL		BIT(BCM_DMA_FEATURE_NORMAL_ORD)
++#define BCM_DMA_FEATURE_LITE		BIT(BCM_DMA_FEATURE_LITE_ORD)
++#define BCM_DMA_FEATURE_COUNT		4
++
++struct bcm2708_dma_cb {
++	unsigned long info;
++	unsigned long src;
++	unsigned long dst;
++	unsigned long length;
++	unsigned long stride;
++	unsigned long next;
++	unsigned long pad[2];
++};
++
++struct scatterlist;
++struct platform_device;
++
++#ifdef CONFIG_DMA_BCM2708
++
++int bcm_sg_suitable_for_dma(struct scatterlist *sg_ptr, int sg_len);
++void bcm_dma_start(void __iomem *dma_chan_base, dma_addr_t control_block);
++void bcm_dma_wait_idle(void __iomem *dma_chan_base);
++bool bcm_dma_is_busy(void __iomem *dma_chan_base);
++int bcm_dma_abort(void __iomem *dma_chan_base);
++
++/* return channel no or -ve error */
++int bcm_dma_chan_alloc(unsigned preferred_feature_set,
++		       void __iomem **out_dma_base, int *out_dma_irq);
++int bcm_dma_chan_free(int channel);
++
++int bcm_dmaman_probe(struct platform_device *pdev, void __iomem *base,
++		     u32 chans_available);
++int bcm_dmaman_remove(struct platform_device *pdev);
++
++#else /* CONFIG_DMA_BCM2708 */
++
++static inline int bcm_sg_suitable_for_dma(struct scatterlist *sg_ptr,
++					  int sg_len)
++{
++	return 0;
++}
++
++static inline void bcm_dma_start(void __iomem *dma_chan_base,
++				 dma_addr_t control_block) { }
++
++static inline void bcm_dma_wait_idle(void __iomem *dma_chan_base) { }
++
++static inline bool bcm_dma_is_busy(void __iomem *dma_chan_base)
++{
++	return false;
++}
++
++static inline int bcm_dma_abort(void __iomem *dma_chan_base)
++{
++	return -EINVAL;
++}
++
++static inline int bcm_dma_chan_alloc(unsigned preferred_feature_set,
++				     void __iomem **out_dma_base,
++				     int *out_dma_irq)
++{
++	return -EINVAL;
++}
++
++static inline int bcm_dma_chan_free(int channel)
++{
++	return -EINVAL;
++}
++
++static inline int bcm_dmaman_probe(struct platform_device *pdev,
++				   void __iomem *base, u32 chans_available)
++{
++	return 0;
++}
++
++static inline int bcm_dmaman_remove(struct platform_device *pdev)
++{
++	return 0;
++}
++
++#endif /* CONFIG_DMA_BCM2708 */
++
++#endif /* _PLAT_BCM2708_DMA_H */
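For reference, a short hypothetical caller showing how the exported legacy
API above fits together (assumes <linux/platform_data/dma-bcm2708.h> and a
control block already built in DMA-coherent memory):

	/* cb_phys: bus address of a prepared struct bcm2708_dma_cb */
	static int example_bulk_transfer(dma_addr_t cb_phys)
	{
		void __iomem *chan_base;
		int irq, chan;

		chan = bcm_dma_chan_alloc(BCM_DMA_FEATURE_BULK, &chan_base, &irq);
		if (chan < 0)
			return chan;		/* no suitable channel free */

		bcm_dma_start(chan_base, cb_phys);
		bcm_dma_wait_idle(chan_base);	/* busy-waits until the CB chain is done */

		return bcm_dma_chan_free(chan);
	}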
diff --git a/target/linux/brcm2708/patches-4.4/0032-Add-blk_pos-parameter-to-mmc-multi_io_quirk-callback.patch b/target/linux/brcm2708/patches-4.4/0032-Add-blk_pos-parameter-to-mmc-multi_io_quirk-callback.patch
new file mode 100644
index 0000000..9e697e8
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0032-Add-blk_pos-parameter-to-mmc-multi_io_quirk-callback.patch
@@ -0,0 +1,75 @@
+From 664c65e9a1191f4123b3d88595735d9ab56839e6 Mon Sep 17 00:00:00 2001
+From: Phil Elwell <phil at raspberrypi.org>
+Date: Fri, 17 Apr 2015 19:30:22 +0100
+Subject: [PATCH 032/127] Add blk_pos parameter to mmc multi_io_quirk callback
+
+---
+ drivers/mmc/card/block.c          | 1 +
+ drivers/mmc/host/omap_hsmmc.c     | 4 +++-
+ drivers/mmc/host/sh_mobile_sdhi.c | 4 +++-
+ drivers/mmc/host/tmio_mmc_pio.c   | 4 +++-
+ include/linux/mmc/host.h          | 4 +++-
+ 5 files changed, 13 insertions(+), 4 deletions(-)
+
+--- a/drivers/mmc/card/block.c
++++ b/drivers/mmc/card/block.c
+@@ -1510,6 +1510,7 @@ static void mmc_blk_rw_rq_prep(struct mm
+ 			brq->data.blocks = card->host->ops->multi_io_quirk(card,
+ 						(rq_data_dir(req) == READ) ?
+ 						MMC_DATA_READ : MMC_DATA_WRITE,
++						blk_rq_pos(req),
+ 						brq->data.blocks);
+ 	}
+ 
+--- a/drivers/mmc/host/omap_hsmmc.c
++++ b/drivers/mmc/host/omap_hsmmc.c
+@@ -1832,7 +1832,9 @@ static void omap_hsmmc_conf_bus_power(st
+ }
+ 
+ static int omap_hsmmc_multi_io_quirk(struct mmc_card *card,
+-				     unsigned int direction, int blk_size)
++				     unsigned int direction,
++				     u32 blk_pos,
++				     int blk_size)
+ {
+ 	/* This controller can't do multiblock reads due to hw bugs */
+ 	if (direction == MMC_DATA_READ)
+--- a/drivers/mmc/host/sh_mobile_sdhi.c
++++ b/drivers/mmc/host/sh_mobile_sdhi.c
+@@ -170,7 +170,9 @@ static int sh_mobile_sdhi_write16_hook(s
+ }
+ 
+ static int sh_mobile_sdhi_multi_io_quirk(struct mmc_card *card,
+-					 unsigned int direction, int blk_size)
++					 unsigned int direction,
++					 u32 blk_pos,
++					 int blk_size)
+ {
+ 	/*
+ 	 * In Renesas controllers, when performing a
+--- a/drivers/mmc/host/tmio_mmc_pio.c
++++ b/drivers/mmc/host/tmio_mmc_pio.c
+@@ -1003,7 +1003,9 @@ static int tmio_mmc_get_ro(struct mmc_ho
+ }
+ 
+ static int tmio_multi_io_quirk(struct mmc_card *card,
+-			       unsigned int direction, int blk_size)
++			       unsigned int direction,
++			       u32 blk_pos,
++			       int blk_size)
+ {
+ 	struct tmio_mmc_host *host = mmc_priv(card->host);
+ 
+--- a/include/linux/mmc/host.h
++++ b/include/linux/mmc/host.h
+@@ -143,7 +143,9 @@ struct mmc_host_ops {
+ 	 * I/O. Returns the number of supported blocks for the request.
+ 	 */
+ 	int	(*multi_io_quirk)(struct mmc_card *card,
+-				  unsigned int direction, int blk_size);
++				  unsigned int direction,
++				  u32 blk_pos,
++				  int blk_size);
+ };
+ 
+ struct mmc_card;
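The extra argument lets a host reject or trim multi-block I/O based on where
it starts, not only on its size; a hypothetical callback using the new
signature could look like this (FAULTY_REGION_START/END are made-up bounds):

	static int example_multi_io_quirk(struct mmc_card *card,
					  unsigned int direction,
					  u32 blk_pos, int blk_size)
	{
		/* fall back to single-block reads inside a problematic region */
		if (direction == MMC_DATA_READ &&
		    blk_pos >= FAULTY_REGION_START && blk_pos < FAULTY_REGION_END)
			return 1;

		return blk_size;		/* otherwise leave the request alone */
	}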
diff --git a/target/linux/brcm2708/patches-4.4/0033-MMC-added-alternative-MMC-driver.patch b/target/linux/brcm2708/patches-4.4/0033-MMC-added-alternative-MMC-driver.patch
new file mode 100644
index 0000000..c0ae1a2
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0033-MMC-added-alternative-MMC-driver.patch
@@ -0,0 +1,1691 @@
+From d44fe69d1a40ccd48ca963476ae6a1d4378349fd Mon Sep 17 00:00:00 2001
+From: gellert <gellert at raspberrypi.org>
+Date: Fri, 15 Aug 2014 16:35:06 +0100
+Subject: [PATCH 033/127] MMC: added alternative MMC driver
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+mmc: Disable CMD23 transfers on all cards
+
+Pending wire-level investigation of these types of transfers
+and associated errors on bcm2835-mmc, disable for now. Fallback of
+CMD18/CMD25 transfers will be used automatically by the MMC layer.
+
+Reported/Tested-by: Gellert Weisz <gellert at raspberrypi.org>
+
+mmc: bcm2835-mmc: enable DT support for all architectures
+
+Both ARCH_BCM2835 and ARCH_BCM270x are built with OF now.
+Enable Device Tree support for all architectures.
+
+Signed-off-by: Noralf Trønnes <noralf at tronnes.org>
+
+mmc: bcm2835-mmc: fix probe error handling
+
+Probe error handling is broken in several places.
+Simplify error handling by using device managed functions.
+Replace pr_{err,info} with dev_{err,info}.
+
+Signed-off-by: Noralf Trønnes <noralf at tronnes.org>
+
+bcm2835-mmc: Add locks when accessing sdhost registers
+
+bcm2835-mmc: Add range of debug options for slowing things down
+
+bcm2835-mmc: Add option to disable some delays
+
+bcm2835-mmc: Add option to disable MMC_QUIRK_BLK_NO_CMD23
+
+bcm2835-mmc: Default to disabling MMC_QUIRK_BLK_NO_CMD23
+
+bcm2835-mmc: Adding overclocking option
+
+Allow a different clock speed to be substituted for a requested 50MHz.
+This option is exposed using the "overclock_50" DT parameter.
+Note that the mmc interface is restricted to EVEN integer divisions of
+250MHz, and the highest sensible option is 63 (250/4 = 62.5), the
+next being 125 (250/2), which is much too high.
+
+Use at your own risk.
+
+bcm2835-mmc: Round up the overclock, so 62 works for 62.5 MHz
+
+Also only warn once for each overclock setting.
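The divider arithmetic behind those numbers, as a standalone sketch (the
real selection happens in the driver's clock-setting path, which also warns
once per overclock value):

	#define CORE_CLOCK_HZ	250000000U

	/* illustrative only: the interface divides 250 MHz by an even integer,
	 * and the request is rounded up to the next achievable step, so 62 and
	 * 63 both select 250/4 = 62.5 MHz; the next step up is 250/2 = 125 MHz.
	 */
	static unsigned int example_overclock_hz(unsigned int overclock_50_mhz)
	{
		unsigned int div = DIV_ROUND_UP(CORE_CLOCK_HZ,
						overclock_50_mhz * 1000000U);

		div &= ~1U;	/* odd divisors aren't achievable; rounding the
				 * divisor down rounds the frequency up */
		if (div < 2)
			div = 2;
		return CORE_CLOCK_HZ / div;
	}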
+
+mmc: bcm2835-mmc: Make available on ARCH_BCM2835
+
+Make the bcm2835-mmc driver available for use on ARCH_BCM2835.
+
+Signed-off-by: Noralf Trønnes <noralf at tronnes.org>
+
+BCM270x_DT: add bcm2835-mmc entry
+
+Add Device Tree entry for bcm2835-mmc.
+In non-DT mode, don't add the device in the board file.
+
+Signed-off-by: Noralf Trønnes <noralf at tronnes.org>
+
+bcm2835-mmc: Don't overwrite MMC capabilities from DT
+---
+ drivers/mmc/core/quirks.c      |    6 +
+ drivers/mmc/host/Kconfig       |   29 +
+ drivers/mmc/host/Makefile      |    1 +
+ drivers/mmc/host/bcm2835-mmc.c | 1542 ++++++++++++++++++++++++++++++++++++++++
+ 4 files changed, 1578 insertions(+)
+ create mode 100644 drivers/mmc/host/bcm2835-mmc.c
+
+--- a/drivers/mmc/core/quirks.c
++++ b/drivers/mmc/core/quirks.c
+@@ -53,6 +53,7 @@ static const struct mmc_fixup mmc_fixup_
+ 
+ void mmc_fixup_device(struct mmc_card *card, const struct mmc_fixup *table)
+ {
++	extern unsigned mmc_debug;
+ 	const struct mmc_fixup *f;
+ 	u64 rev = cid_rev_card(card);
+ 
+@@ -77,5 +78,10 @@ void mmc_fixup_device(struct mmc_card *c
+ 			f->vendor_fixup(card, f->data);
+ 		}
+ 	}
++	/* SDHCI on BCM2708 - bug causes a certain sequence of CMD23 operations to fail.
++	 * Disable this flag for all cards (fall-back to CMD25/CMD18 multi-block transfers).
++	 */
++	if (mmc_debug & (1<<13))
++		card->quirks |= MMC_QUIRK_BLK_NO_CMD23;
+ }
+ EXPORT_SYMBOL(mmc_fixup_device);
+--- a/drivers/mmc/host/Kconfig
++++ b/drivers/mmc/host/Kconfig
+@@ -4,6 +4,35 @@
+ 
+ comment "MMC/SD/SDIO Host Controller Drivers"
+ 
++config MMC_BCM2835
++	tristate "MMC support on BCM2835"
++	depends on MACH_BCM2708 || MACH_BCM2709 || ARCH_BCM2835
++	help
++	  This selects the MMC Interface on BCM2835.
++
++	  If you have a controller with this interface, say Y or M here.
++
++	  If unsure, say N.
++
++config MMC_BCM2835_DMA
++	bool "DMA support on BCM2835 Arasan controller"
++	depends on MMC_BCM2835
++	help
++	  Enable DMA support on the Arasan SDHCI controller in Broadcom 2708
++	  based chips.
++
++	  If unsure, say N.
++
++config MMC_BCM2835_PIO_DMA_BARRIER
++	int "Block count limit for PIO transfers"
++	depends on MMC_BCM2835 && MMC_BCM2835_DMA
++	range 0 256
++	default 2
++	help
+	  The inclusive limit, in blocks, under which PIO will be used instead of DMA.
++
++	  If unsure, say 2 here.
++
+ config MMC_ARMMMCI
+ 	tristate "ARM AMBA Multimedia Card Interface support"
+ 	depends on ARM_AMBA
+--- a/drivers/mmc/host/Makefile
++++ b/drivers/mmc/host/Makefile
+@@ -18,6 +18,7 @@ obj-$(CONFIG_MMC_SDHCI_S3C)	+= sdhci-s3c
+ obj-$(CONFIG_MMC_SDHCI_SIRF)   	+= sdhci-sirf.o
+ obj-$(CONFIG_MMC_SDHCI_F_SDH30)	+= sdhci_f_sdh30.o
+ obj-$(CONFIG_MMC_SDHCI_SPEAR)	+= sdhci-spear.o
++obj-$(CONFIG_MMC_BCM2835)	+= bcm2835-mmc.o
+ obj-$(CONFIG_MMC_WBSD)		+= wbsd.o
+ obj-$(CONFIG_MMC_AU1X)		+= au1xmmc.o
+ obj-$(CONFIG_MMC_MTK)		+= mtk-sd.o
+--- /dev/null
++++ b/drivers/mmc/host/bcm2835-mmc.c
+@@ -0,0 +1,1542 @@
++/*
++ * BCM2835 MMC host driver.
++ *
++ * Author:      Gellert Weisz <gellert at raspberrypi.org>
++ *              Copyright 2014
++ *
++ * Based on
++ *  sdhci-bcm2708.c by Broadcom
++ *  sdhci-bcm2835.c by Stephen Warren and Oleksandr Tymoshenko
++ *  sdhci.c and sdhci-pci.c by Pierre Ossman
++ *
++ * This program is free software; you can redistribute it and/or modify it
++ * under the terms and conditions of the GNU General Public License,
++ * version 2, as published by the Free Software Foundation.
++ *
++ * This program is distributed in the hope it will be useful, but WITHOUT
++ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
++ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
++ * more details.
++ *
++ * You should have received a copy of the GNU General Public License
++ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
++ */
++
++#include <linux/delay.h>
++#include <linux/module.h>
++#include <linux/io.h>
++#include <linux/mmc/mmc.h>
++#include <linux/mmc/host.h>
++#include <linux/mmc/sd.h>
++#include <linux/scatterlist.h>
++#include <linux/of_address.h>
++#include <linux/of_irq.h>
++#include <linux/clk.h>
++#include <linux/platform_device.h>
++#include <linux/err.h>
++#include <linux/blkdev.h>
++#include <linux/dmaengine.h>
++#include <linux/dma-mapping.h>
++#include <linux/of_dma.h>
++
++#include "sdhci.h"
++
++
++#define DRIVER_NAME "mmc-bcm2835"
++
++#define DBG(f, x...) \
++pr_debug(DRIVER_NAME " [%s()]: " f, __func__, ## x)
++
++#ifndef CONFIG_MMC_BCM2835_DMA
++ #define FORCE_PIO
++#endif
++
++
++/* the inclusive limit, in blocks, under which PIO will be used instead of DMA */
++#ifdef CONFIG_MMC_BCM2835_PIO_DMA_BARRIER
++#define PIO_DMA_BARRIER CONFIG_MMC_BCM2835_PIO_DMA_BARRIER
++#else
++#define PIO_DMA_BARRIER 00
++#endif
++
++#define MIN_FREQ 400000
++#define TIMEOUT_VAL 0xE
++#define BCM2835_SDHCI_WRITE_DELAY(f)	(((2 * 1000000) / f) + 1)
++
++
++unsigned mmc_debug;
++unsigned mmc_debug2;
++
++struct bcm2835_host {
++	spinlock_t				lock;
++
++	void __iomem			*ioaddr;
++	u32						bus_addr;
++
++	struct mmc_host			*mmc;
++
++	u32						timeout;
++
++	int						clock;	/* Current clock speed */
++	u8						pwr;	/* Current voltage */
++
++	unsigned int			max_clk;		/* Max possible freq */
++	unsigned int			timeout_clk;	/* Timeout freq (KHz) */
++	unsigned int			clk_mul;		/* Clock Multiplier value */
++
++	struct tasklet_struct	finish_tasklet;		/* Tasklet structures */
++
++	struct timer_list		timer;			/* Timer for timeouts */
++
++	struct sg_mapping_iter	sg_miter;		/* SG state for PIO */
++	unsigned int			blocks;			/* remaining PIO blocks */
++
++	int						irq;			/* Device IRQ */
++
++
++	u32						ier;			/* cached registers */
++
++	struct mmc_request		*mrq;			/* Current request */
++	struct mmc_command		*cmd;			/* Current command */
++	struct mmc_data			*data;			/* Current data request */
++	unsigned int			data_early:1;		/* Data finished before cmd */
++
++	wait_queue_head_t		buf_ready_int;		/* Waitqueue for Buffer Read Ready interrupt */
++
++	u32						thread_isr;
++
++	u32						shadow;
++
++	/*DMA part*/
++	struct dma_chan			*dma_chan_rx;		/* DMA channel for reads */
++	struct dma_chan			*dma_chan_tx;		/* DMA channel for writes */
++	struct dma_async_tx_descriptor	*tx_desc;	/* descriptor */
++
++	bool					have_dma;
++	bool					use_dma;
++	/*end of DMA part*/
++
++	int						max_delay;	/* maximum length of time spent waiting */
++
++	int						flags;				/* Host attributes */
++#define SDHCI_REQ_USE_DMA	(1<<2)	/* Use DMA for this req. */
++#define SDHCI_DEVICE_DEAD	(1<<3)	/* Device unresponsive */
++#define SDHCI_AUTO_CMD12	(1<<6)	/* Auto CMD12 support */
++#define SDHCI_AUTO_CMD23	(1<<7)	/* Auto CMD23 support */
++#define SDHCI_SDIO_IRQ_ENABLED	(1<<9)	/* SDIO irq enabled */
++
++	u32				overclock_50;	/* frequency to use when 50MHz is requested (in MHz) */
++	u32				max_overclock;	/* Highest reported */
++};
++
++
++static inline void bcm2835_mmc_writel(struct bcm2835_host *host, u32 val, int reg, int from)
++{
++	unsigned delay;
++	lockdep_assert_held_once(&host->lock);
++	writel(val, host->ioaddr + reg);
++	udelay(BCM2835_SDHCI_WRITE_DELAY(max(host->clock, MIN_FREQ)));
++
++	delay = ((mmc_debug >> 16) & 0xf) << ((mmc_debug >> 20) & 0xf);
++	if (delay && !((1<<from) & mmc_debug2))
++		udelay(delay);
++}
++
++static inline void mmc_raw_writel(struct bcm2835_host *host, u32 val, int reg)
++{
++	unsigned delay;
++	lockdep_assert_held_once(&host->lock);
++	writel(val, host->ioaddr + reg);
++
++	delay = ((mmc_debug >> 24) & 0xf) << ((mmc_debug >> 28) & 0xf);
++	if (delay)
++		udelay(delay);
++}
++
++static inline u32 bcm2835_mmc_readl(struct bcm2835_host *host, int reg)
++{
++	lockdep_assert_held_once(&host->lock);
++	return readl(host->ioaddr + reg);
++}
++
++static inline void bcm2835_mmc_writew(struct bcm2835_host *host, u16 val, int reg)
++{
++	u32 oldval = (reg == SDHCI_COMMAND) ? host->shadow :
++		bcm2835_mmc_readl(host, reg & ~3);
++	u32 word_num = (reg >> 1) & 1;
++	u32 word_shift = word_num * 16;
++	u32 mask = 0xffff << word_shift;
++	u32 newval = (oldval & ~mask) | (val << word_shift);
++
++	if (reg == SDHCI_TRANSFER_MODE)
++		host->shadow = newval;
++	else
++		bcm2835_mmc_writel(host, newval, reg & ~3, 0);
++
++}
++
++static inline void bcm2835_mmc_writeb(struct bcm2835_host *host, u8 val, int reg)
++{
++	u32 oldval = bcm2835_mmc_readl(host, reg & ~3);
++	u32 byte_num = reg & 3;
++	u32 byte_shift = byte_num * 8;
++	u32 mask = 0xff << byte_shift;
++	u32 newval = (oldval & ~mask) | (val << byte_shift);
++
++	bcm2835_mmc_writel(host, newval, reg & ~3, 1);
++}
++
++
++static inline u16 bcm2835_mmc_readw(struct bcm2835_host *host, int reg)
++{
++	u32 val = bcm2835_mmc_readl(host, (reg & ~3));
++	u32 word_num = (reg >> 1) & 1;
++	u32 word_shift = word_num * 16;
++	u32 word = (val >> word_shift) & 0xffff;
++
++	return word;
++}
++
++static inline u8 bcm2835_mmc_readb(struct bcm2835_host *host, int reg)
++{
++	u32 val = bcm2835_mmc_readl(host, (reg & ~3));
++	u32 byte_num = reg & 3;
++	u32 byte_shift = byte_num * 8;
++	u32 byte = (val >> byte_shift) & 0xff;
++
++	return byte;
++}
++
++static void bcm2835_mmc_unsignal_irqs(struct bcm2835_host *host, u32 clear)
++{
++	u32 ier;
++
++	ier = bcm2835_mmc_readl(host, SDHCI_SIGNAL_ENABLE);
++	ier &= ~clear;
++	/* change which requests generate IRQs - makes no difference to
++	   the content of SDHCI_INT_STATUS, or the need to acknowledge IRQs */
++	bcm2835_mmc_writel(host, ier, SDHCI_SIGNAL_ENABLE, 2);
++}
++
++
++static void bcm2835_mmc_dumpregs(struct bcm2835_host *host)
++{
++	pr_debug(DRIVER_NAME ": =========== REGISTER DUMP (%s)===========\n",
++		mmc_hostname(host->mmc));
++
++	pr_debug(DRIVER_NAME ": Sys addr: 0x%08x | Version:  0x%08x\n",
++		bcm2835_mmc_readl(host, SDHCI_DMA_ADDRESS),
++		bcm2835_mmc_readw(host, SDHCI_HOST_VERSION));
++	pr_debug(DRIVER_NAME ": Blk size: 0x%08x | Blk cnt:  0x%08x\n",
++		bcm2835_mmc_readw(host, SDHCI_BLOCK_SIZE),
++		bcm2835_mmc_readw(host, SDHCI_BLOCK_COUNT));
++	pr_debug(DRIVER_NAME ": Argument: 0x%08x | Trn mode: 0x%08x\n",
++		bcm2835_mmc_readl(host, SDHCI_ARGUMENT),
++		bcm2835_mmc_readw(host, SDHCI_TRANSFER_MODE));
++	pr_debug(DRIVER_NAME ": Present:  0x%08x | Host ctl: 0x%08x\n",
++		bcm2835_mmc_readl(host, SDHCI_PRESENT_STATE),
++		bcm2835_mmc_readb(host, SDHCI_HOST_CONTROL));
++	pr_debug(DRIVER_NAME ": Power:    0x%08x | Blk gap:  0x%08x\n",
++		bcm2835_mmc_readb(host, SDHCI_POWER_CONTROL),
++		bcm2835_mmc_readb(host, SDHCI_BLOCK_GAP_CONTROL));
++	pr_debug(DRIVER_NAME ": Wake-up:  0x%08x | Clock:    0x%08x\n",
++		bcm2835_mmc_readb(host, SDHCI_WAKE_UP_CONTROL),
++		bcm2835_mmc_readw(host, SDHCI_CLOCK_CONTROL));
++	pr_debug(DRIVER_NAME ": Timeout:  0x%08x | Int stat: 0x%08x\n",
++		bcm2835_mmc_readb(host, SDHCI_TIMEOUT_CONTROL),
++		bcm2835_mmc_readl(host, SDHCI_INT_STATUS));
++	pr_debug(DRIVER_NAME ": Int enab: 0x%08x | Sig enab: 0x%08x\n",
++		bcm2835_mmc_readl(host, SDHCI_INT_ENABLE),
++		bcm2835_mmc_readl(host, SDHCI_SIGNAL_ENABLE));
++	pr_debug(DRIVER_NAME ": AC12 err: 0x%08x | Slot int: 0x%08x\n",
++		bcm2835_mmc_readw(host, SDHCI_ACMD12_ERR),
++		bcm2835_mmc_readw(host, SDHCI_SLOT_INT_STATUS));
++	pr_debug(DRIVER_NAME ": Caps:     0x%08x | Caps_1:   0x%08x\n",
++		bcm2835_mmc_readl(host, SDHCI_CAPABILITIES),
++		bcm2835_mmc_readl(host, SDHCI_CAPABILITIES_1));
++	pr_debug(DRIVER_NAME ": Cmd:      0x%08x | Max curr: 0x%08x\n",
++		bcm2835_mmc_readw(host, SDHCI_COMMAND),
++		bcm2835_mmc_readl(host, SDHCI_MAX_CURRENT));
++	pr_debug(DRIVER_NAME ": Host ctl2: 0x%08x\n",
++		bcm2835_mmc_readw(host, SDHCI_HOST_CONTROL2));
++
++	pr_debug(DRIVER_NAME ": ===========================================\n");
++}
++
++
++static void bcm2835_mmc_reset(struct bcm2835_host *host, u8 mask)
++{
++	unsigned long timeout;
++	unsigned long flags;
++
++	spin_lock_irqsave(&host->lock, flags);
++	bcm2835_mmc_writeb(host, mask, SDHCI_SOFTWARE_RESET);
++
++	if (mask & SDHCI_RESET_ALL)
++		host->clock = 0;
++
++	/* Wait max 100 ms */
++	timeout = 100;
++
++	/* hw clears the bit when it's done */
++	while (bcm2835_mmc_readb(host, SDHCI_SOFTWARE_RESET) & mask) {
++		if (timeout == 0) {
++			pr_err("%s: Reset 0x%x never completed.\n",
++				mmc_hostname(host->mmc), (int)mask);
++			bcm2835_mmc_dumpregs(host);
++			return;
++		}
++		timeout--;
++		spin_unlock_irqrestore(&host->lock, flags);
++		mdelay(1);
++		spin_lock_irqsave(&host->lock, flags);
++	}
++
++	if (100-timeout > 10 && 100-timeout > host->max_delay) {
++		host->max_delay = 100-timeout;
++		pr_warning("Warning: MMC controller hung for %d ms\n", host->max_delay);
++	}
++	spin_unlock_irqrestore(&host->lock, flags);
++}
++
++static void bcm2835_mmc_set_ios(struct mmc_host *mmc, struct mmc_ios *ios);
++
++static void bcm2835_mmc_init(struct bcm2835_host *host, int soft)
++{
++	unsigned long flags;
++	if (soft)
++		bcm2835_mmc_reset(host, SDHCI_RESET_CMD|SDHCI_RESET_DATA);
++	else
++		bcm2835_mmc_reset(host, SDHCI_RESET_ALL);
++
++	host->ier = SDHCI_INT_BUS_POWER | SDHCI_INT_DATA_END_BIT |
++		    SDHCI_INT_DATA_CRC | SDHCI_INT_DATA_TIMEOUT |
++		    SDHCI_INT_INDEX | SDHCI_INT_END_BIT | SDHCI_INT_CRC |
++		    SDHCI_INT_TIMEOUT | SDHCI_INT_DATA_END |
++		    SDHCI_INT_RESPONSE;
++
++	spin_lock_irqsave(&host->lock, flags);
++	bcm2835_mmc_writel(host, host->ier, SDHCI_INT_ENABLE, 3);
++	bcm2835_mmc_writel(host, host->ier, SDHCI_SIGNAL_ENABLE, 3);
++	spin_unlock_irqrestore(&host->lock, flags);
++
++	if (soft) {
++		/* force clock reconfiguration */
++		host->clock = 0;
++		bcm2835_mmc_set_ios(host->mmc, &host->mmc->ios);
++	}
++}
++
++
++
++static void bcm2835_mmc_finish_data(struct bcm2835_host *host);
++
++static void bcm2835_mmc_dma_complete(void *param)
++{
++	struct bcm2835_host *host = param;
++	struct dma_chan *dma_chan;
++	unsigned long flags;
++	u32 dir_data;
++
++	spin_lock_irqsave(&host->lock, flags);
++
++	if (host->data && !(host->data->flags & MMC_DATA_WRITE)) {
++		/* otherwise handled in SDHCI IRQ */
++		dma_chan = host->dma_chan_rx;
++		dir_data = DMA_FROM_DEVICE;
++
++		dma_unmap_sg(dma_chan->device->dev,
++		     host->data->sg, host->data->sg_len,
++		     dir_data);
++
++		bcm2835_mmc_finish_data(host);
++	}
++
++	spin_unlock_irqrestore(&host->lock, flags);
++}
++
++static void bcm2835_bcm2835_mmc_read_block_pio(struct bcm2835_host *host)
++{
++	unsigned long flags;
++	size_t blksize, len, chunk;
++
++	u32 uninitialized_var(scratch);
++	u8 *buf;
++
++	blksize = host->data->blksz;
++	chunk = 0;
++
++	local_irq_save(flags);
++
++	while (blksize) {
++		if (!sg_miter_next(&host->sg_miter))
++			BUG();
++
++		len = min(host->sg_miter.length, blksize);
++
++		blksize -= len;
++		host->sg_miter.consumed = len;
++
++		buf = host->sg_miter.addr;
++
++		while (len) {
++			if (chunk == 0) {
++				scratch = bcm2835_mmc_readl(host, SDHCI_BUFFER);
++				chunk = 4;
++			}
++
++			*buf = scratch & 0xFF;
++
++			buf++;
++			scratch >>= 8;
++			chunk--;
++			len--;
++		}
++	}
++
++	sg_miter_stop(&host->sg_miter);
++
++	local_irq_restore(flags);
++}
++
++static void bcm2835_bcm2835_mmc_write_block_pio(struct bcm2835_host *host)
++{
++	unsigned long flags;
++	size_t blksize, len, chunk;
++	u32 scratch;
++	u8 *buf;
++
++	blksize = host->data->blksz;
++	chunk = 0;
++	chunk = 0;
++	scratch = 0;
++
++	local_irq_save(flags);
++
++	while (blksize) {
++		if (!sg_miter_next(&host->sg_miter))
++			BUG();
++
++		len = min(host->sg_miter.length, blksize);
++
++		blksize -= len;
++		host->sg_miter.consumed = len;
++
++		buf = host->sg_miter.addr;
++
++		while (len) {
++			scratch |= (u32)*buf << (chunk * 8);
++
++			buf++;
++			chunk++;
++			len--;
++
++			if ((chunk == 4) || ((len == 0) && (blksize == 0))) {
++				mmc_raw_writel(host, scratch, SDHCI_BUFFER);
++				chunk = 0;
++				scratch = 0;
++			}
++		}
++	}
++
++	sg_miter_stop(&host->sg_miter);
++
++	local_irq_restore(flags);
++}
++
++
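++/*
++ * Transfer blocks by PIO for as long as the controller reports data or
++ * space available in the present-state register.
++ */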
++static void bcm2835_mmc_transfer_pio(struct bcm2835_host *host)
++{
++	u32 mask;
++
++	BUG_ON(!host->data);
++
++	if (host->blocks == 0)
++		return;
++
++	if (host->data->flags & MMC_DATA_READ)
++		mask = SDHCI_DATA_AVAILABLE;
++	else
++		mask = SDHCI_SPACE_AVAILABLE;
++
++	while (bcm2835_mmc_readl(host, SDHCI_PRESENT_STATE) & mask) {
++
++		if (host->data->flags & MMC_DATA_READ)
++			bcm2835_bcm2835_mmc_read_block_pio(host);
++		else
++			bcm2835_bcm2835_mmc_write_block_pio(host);
++
++		host->blocks--;
++
++		/* QUIRK used in sdhci.c removes the 'if' */
++		/* but it seems this is unnecessary */
++		if (host->blocks == 0)
++			break;
++
++
++	}
++}
++
++
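++/*
++ * Map the request's scatterlist and issue a slave-sg transfer on the rx
++ * or tx DMA channel; reads complete via the descriptor callback, writes
++ * via the SDHCI data interrupt.
++ */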
++static void bcm2835_mmc_transfer_dma(struct bcm2835_host *host)
++{
++	u32 len, dir_data, dir_slave;
++	struct dma_async_tx_descriptor *desc = NULL;
++	struct dma_chan *dma_chan;
++
++
++	WARN_ON(!host->data);
++
++	if (!host->data)
++		return;
++
++	if (host->blocks == 0)
++		return;
++
++	if (host->data->flags & MMC_DATA_READ) {
++		dma_chan = host->dma_chan_rx;
++		dir_data = DMA_FROM_DEVICE;
++		dir_slave = DMA_DEV_TO_MEM;
++	} else {
++		dma_chan = host->dma_chan_tx;
++		dir_data = DMA_TO_DEVICE;
++		dir_slave = DMA_MEM_TO_DEV;
++	}
++
++	BUG_ON(!dma_chan->device);
++	BUG_ON(!dma_chan->device->dev);
++	BUG_ON(!host->data->sg);
++
++	len = dma_map_sg(dma_chan->device->dev, host->data->sg,
++			 host->data->sg_len, dir_data);
++	if (len > 0) {
++		desc = dmaengine_prep_slave_sg(dma_chan, host->data->sg,
++					       len, dir_slave,
++					       DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
++	} else {
++		dev_err(mmc_dev(host->mmc), "dma_map_sg returned zero length\n");
++	}
++	if (desc) {
++		unsigned long flags;
++		spin_lock_irqsave(&host->lock, flags);
++		bcm2835_mmc_unsignal_irqs(host, SDHCI_INT_DATA_AVAIL |
++						    SDHCI_INT_SPACE_AVAIL);
++		host->tx_desc = desc;
++		desc->callback = bcm2835_mmc_dma_complete;
++		desc->callback_param = host;
++		spin_unlock_irqrestore(&host->lock, flags);
++		dmaengine_submit(desc);
++		dma_async_issue_pending(dma_chan);
++	}
++
++}
++
++
++
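++/*
++ * Select either the PIO (data/space available) or the DMA (DMA end/ADMA
++ * error) interrupt sources, depending on how this transfer will be driven.
++ */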
++static void bcm2835_mmc_set_transfer_irqs(struct bcm2835_host *host)
++{
++	u32 pio_irqs = SDHCI_INT_DATA_AVAIL | SDHCI_INT_SPACE_AVAIL;
++	u32 dma_irqs = SDHCI_INT_DMA_END | SDHCI_INT_ADMA_ERROR;
++
++	if (host->use_dma)
++		host->ier = (host->ier & ~pio_irqs) | dma_irqs;
++	else
++		host->ier = (host->ier & ~dma_irqs) | pio_irqs;
++
++	bcm2835_mmc_writel(host, host->ier, SDHCI_INT_ENABLE, 4);
++	bcm2835_mmc_writel(host, host->ier, SDHCI_SIGNAL_ENABLE, 4);
++}
++
++
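++/*
++ * Program the data timeout, choose DMA or PIO (based on PIO_DMA_BARRIER),
++ * set up the sg_miter for PIO and write the block size/count registers.
++ */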
++static void bcm2835_mmc_prepare_data(struct bcm2835_host *host, struct mmc_command *cmd)
++{
++	u8 count;
++	struct mmc_data *data = cmd->data;
++
++	WARN_ON(host->data);
++
++	if (data || (cmd->flags & MMC_RSP_BUSY)) {
++		count = TIMEOUT_VAL;
++		bcm2835_mmc_writeb(host, count, SDHCI_TIMEOUT_CONTROL);
++	}
++
++	if (!data)
++		return;
++
++	/* Sanity checks */
++	BUG_ON(data->blksz * data->blocks > 524288);
++	BUG_ON(data->blksz > host->mmc->max_blk_size);
++	BUG_ON(data->blocks > 65535);
++
++	host->data = data;
++	host->data_early = 0;
++	host->data->bytes_xfered = 0;
++
++
++	if (!(host->flags & SDHCI_REQ_USE_DMA)) {
++		int flags;
++
++		flags = SG_MITER_ATOMIC;
++		if (host->data->flags & MMC_DATA_READ)
++			flags |= SG_MITER_TO_SG;
++		else
++			flags |= SG_MITER_FROM_SG;
++		sg_miter_start(&host->sg_miter, data->sg, data->sg_len, flags);
++		host->blocks = data->blocks;
++	}
++
++	host->use_dma = host->have_dma && data->blocks > PIO_DMA_BARRIER;
++
++	bcm2835_mmc_set_transfer_irqs(host);
++
++	/* Set the DMA boundary value and block size */
++	bcm2835_mmc_writew(host, SDHCI_MAKE_BLKSZ(SDHCI_DEFAULT_BOUNDARY_ARG,
++		data->blksz), SDHCI_BLOCK_SIZE);
++	bcm2835_mmc_writew(host, data->blocks, SDHCI_BLOCK_COUNT);
++
++	BUG_ON(!host->data);
++}
++
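++/*
++ * Build the SDHCI transfer-mode word: multi-block, Auto-CMD12/CMD23,
++ * read direction and DMA enable.
++ */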
++static void bcm2835_mmc_set_transfer_mode(struct bcm2835_host *host,
++	struct mmc_command *cmd)
++{
++	u16 mode;
++	struct mmc_data *data = cmd->data;
++
++	if (data == NULL) {
++		/* clear Auto CMD settings for no data CMDs */
++		mode = bcm2835_mmc_readw(host, SDHCI_TRANSFER_MODE);
++		bcm2835_mmc_writew(host, mode & ~(SDHCI_TRNS_AUTO_CMD12 |
++				SDHCI_TRNS_AUTO_CMD23), SDHCI_TRANSFER_MODE);
++		return;
++	}
++
++	WARN_ON(!host->data);
++
++	mode = SDHCI_TRNS_BLK_CNT_EN;
++
++	if ((mmc_op_multi(cmd->opcode) || data->blocks > 1)) {
++		mode |= SDHCI_TRNS_MULTI;
++
++		/*
++		 * If we are sending CMD23, CMD12 never gets sent
++		 * on successful completion (so no Auto-CMD12).
++		 */
++		if (!host->mrq->sbc && (host->flags & SDHCI_AUTO_CMD12))
++			mode |= SDHCI_TRNS_AUTO_CMD12;
++		else if (host->mrq->sbc && (host->flags & SDHCI_AUTO_CMD23)) {
++			mode |= SDHCI_TRNS_AUTO_CMD23;
++			bcm2835_mmc_writel(host, host->mrq->sbc->arg, SDHCI_ARGUMENT2, 5);
++		}
++	}
++
++	if (data->flags & MMC_DATA_READ)
++		mode |= SDHCI_TRNS_READ;
++	if (host->flags & SDHCI_REQ_USE_DMA)
++		mode |= SDHCI_TRNS_DMA;
++
++	bcm2835_mmc_writew(host, mode, SDHCI_TRANSFER_MODE);
++}
++
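++/*
++ * Issue a command: wait for the CMD/DATA inhibit bits to clear, arm the
++ * request timer, prepare any data transfer and write the argument,
++ * transfer mode and command registers.
++ */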
++void bcm2835_mmc_send_command(struct bcm2835_host *host, struct mmc_command *cmd)
++{
++	int flags;
++	u32 mask;
++	unsigned long timeout;
++
++	WARN_ON(host->cmd);
++
++	/* Wait max 10 ms */
++	timeout = 1000;
++
++	mask = SDHCI_CMD_INHIBIT;
++	if ((cmd->data != NULL) || (cmd->flags & MMC_RSP_BUSY))
++		mask |= SDHCI_DATA_INHIBIT;
++
++	/* We shouldn't wait for data inhibit for stop commands, even
++	   though they might use busy signaling */
++	if (host->mrq->data && (cmd == host->mrq->data->stop))
++		mask &= ~SDHCI_DATA_INHIBIT;
++
++	while (bcm2835_mmc_readl(host, SDHCI_PRESENT_STATE) & mask) {
++		if (timeout == 0) {
++			pr_err("%s: Controller never released inhibit bit(s).\n",
++				mmc_hostname(host->mmc));
++			bcm2835_mmc_dumpregs(host);
++			cmd->error = -EIO;
++			tasklet_schedule(&host->finish_tasklet);
++			return;
++		}
++		timeout--;
++		udelay(10);
++	}
++
++	if ((1000-timeout)/100 > 1 && (1000-timeout)/100 > host->max_delay) {
++		host->max_delay = (1000-timeout)/100;
++		pr_warning("Warning: MMC controller hung for %d ms\n", host->max_delay);
++	}
++
++	timeout = jiffies;
++	if (!cmd->data && cmd->busy_timeout > 9000)
++		timeout += DIV_ROUND_UP(cmd->busy_timeout, 1000) * HZ + HZ;
++	else
++		timeout += 10 * HZ;
++	mod_timer(&host->timer, timeout);
++
++	host->cmd = cmd;
++
++	bcm2835_mmc_prepare_data(host, cmd);
++
++	bcm2835_mmc_writel(host, cmd->arg, SDHCI_ARGUMENT, 6);
++
++	bcm2835_mmc_set_transfer_mode(host, cmd);
++
++	if ((cmd->flags & MMC_RSP_136) && (cmd->flags & MMC_RSP_BUSY)) {
++		pr_err("%s: Unsupported response type!\n",
++			mmc_hostname(host->mmc));
++		cmd->error = -EINVAL;
++		tasklet_schedule(&host->finish_tasklet);
++		return;
++	}
++
++	if (!(cmd->flags & MMC_RSP_PRESENT))
++		flags = SDHCI_CMD_RESP_NONE;
++	else if (cmd->flags & MMC_RSP_136)
++		flags = SDHCI_CMD_RESP_LONG;
++	else if (cmd->flags & MMC_RSP_BUSY)
++		flags = SDHCI_CMD_RESP_SHORT_BUSY;
++	else
++		flags = SDHCI_CMD_RESP_SHORT;
++
++	if (cmd->flags & MMC_RSP_CRC)
++		flags |= SDHCI_CMD_CRC;
++	if (cmd->flags & MMC_RSP_OPCODE)
++		flags |= SDHCI_CMD_INDEX;
++
++	if (cmd->data)
++		flags |= SDHCI_CMD_DATA;
++
++	bcm2835_mmc_writew(host, SDHCI_MAKE_CMD(cmd->opcode, flags), SDHCI_COMMAND);
++}
++
++
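++/*
++ * Complete the data phase: account the bytes transferred and either send
++ * the stop command (CMD12) or schedule the finish tasklet.
++ */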
++static void bcm2835_mmc_finish_data(struct bcm2835_host *host)
++{
++	struct mmc_data *data;
++
++	BUG_ON(!host->data);
++
++	data = host->data;
++	host->data = NULL;
++
++	if (data->error)
++		data->bytes_xfered = 0;
++	else
++		data->bytes_xfered = data->blksz * data->blocks;
++
++	/*
++	 * Need to send CMD12 if -
++	 * a) open-ended multiblock transfer (no CMD23)
++	 * b) error in multiblock transfer
++	 */
++	if (data->stop &&
++	    (data->error ||
++	     !host->mrq->sbc)) {
++
++		/*
++		 * The controller needs a reset of internal state machines
++		 * upon error conditions.
++		 */
++		if (data->error) {
++			bcm2835_mmc_reset(host, SDHCI_RESET_CMD);
++			bcm2835_mmc_reset(host, SDHCI_RESET_DATA);
++		}
++
++		bcm2835_mmc_send_command(host, data->stop);
++	} else
++		tasklet_schedule(&host->finish_tasklet);
++}
++
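++/*
++ * Read back the response registers, then either send the real command
++ * after CMD23 (sbc) or finish the request.
++ */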
++static void bcm2835_mmc_finish_command(struct bcm2835_host *host)
++{
++	int i;
++
++	BUG_ON(host->cmd == NULL);
++
++	if (host->cmd->flags & MMC_RSP_PRESENT) {
++		if (host->cmd->flags & MMC_RSP_136) {
++			/* CRC is stripped so we need to do some shifting. */
++			for (i = 0; i < 4; i++) {
++				host->cmd->resp[i] = bcm2835_mmc_readl(host,
++					SDHCI_RESPONSE + (3-i)*4) << 8;
++				if (i != 3)
++					host->cmd->resp[i] |=
++						bcm2835_mmc_readb(host,
++						SDHCI_RESPONSE + (3-i)*4-1);
++			}
++		} else {
++			host->cmd->resp[0] = bcm2835_mmc_readl(host, SDHCI_RESPONSE);
++		}
++	}
++
++	host->cmd->error = 0;
++
++	/* Finished CMD23, now send actual command. */
++	if (host->cmd == host->mrq->sbc) {
++		host->cmd = NULL;
++		bcm2835_mmc_send_command(host, host->mrq->cmd);
++
++		if (host->mrq->cmd->data && host->use_dma) {
++			/* DMA transfer starts now, PIO starts after interrupt */
++			bcm2835_mmc_transfer_dma(host);
++		}
++	} else {
++
++		/* Processed actual command. */
++		if (host->data && host->data_early)
++			bcm2835_mmc_finish_data(host);
++
++		if (!host->cmd->data)
++			tasklet_schedule(&host->finish_tasklet);
++
++		host->cmd = NULL;
++	}
++}
++
++
++static void bcm2835_mmc_timeout_timer(unsigned long data)
++{
++	struct bcm2835_host *host;
++	unsigned long flags;
++
++	host = (struct bcm2835_host *)data;
++
++	spin_lock_irqsave(&host->lock, flags);
++
++	if (host->mrq) {
++		pr_err("%s: Timeout waiting for hardware interrupt.\n",
++			mmc_hostname(host->mmc));
++		bcm2835_mmc_dumpregs(host);
++
++		if (host->data) {
++			host->data->error = -ETIMEDOUT;
++			bcm2835_mmc_finish_data(host);
++		} else {
++			if (host->cmd)
++				host->cmd->error = -ETIMEDOUT;
++			else
++				host->mrq->cmd->error = -ETIMEDOUT;
++
++			tasklet_schedule(&host->finish_tasklet);
++		}
++	}
++
++	mmiowb();
++	spin_unlock_irqrestore(&host->lock, flags);
++}
++
++
++static void bcm2835_mmc_enable_sdio_irq_nolock(struct bcm2835_host *host, int enable)
++{
++	if (!(host->flags & SDHCI_DEVICE_DEAD)) {
++		if (enable)
++			host->ier |= SDHCI_INT_CARD_INT;
++		else
++			host->ier &= ~SDHCI_INT_CARD_INT;
++
++		bcm2835_mmc_writel(host, host->ier, SDHCI_INT_ENABLE, 7);
++		bcm2835_mmc_writel(host, host->ier, SDHCI_SIGNAL_ENABLE, 7);
++		mmiowb();
++	}
++}
++
++static void bcm2835_mmc_enable_sdio_irq(struct mmc_host *mmc, int enable)
++{
++	struct bcm2835_host *host = mmc_priv(mmc);
++	unsigned long flags;
++
++	spin_lock_irqsave(&host->lock, flags);
++	if (enable)
++		host->flags |= SDHCI_SDIO_IRQ_ENABLED;
++	else
++		host->flags &= ~SDHCI_SDIO_IRQ_ENABLED;
++
++	bcm2835_mmc_enable_sdio_irq_nolock(host, enable);
++	spin_unlock_irqrestore(&host->lock, flags);
++}
++
++static void bcm2835_mmc_cmd_irq(struct bcm2835_host *host, u32 intmask)
++{
++
++	BUG_ON(intmask == 0);
++
++	if (!host->cmd) {
++		pr_err("%s: Got command interrupt 0x%08x even "
++			"though no command operation was in progress.\n",
++			mmc_hostname(host->mmc), (unsigned)intmask);
++		bcm2835_mmc_dumpregs(host);
++		return;
++	}
++
++	if (intmask & SDHCI_INT_TIMEOUT)
++		host->cmd->error = -ETIMEDOUT;
++	else if (intmask & (SDHCI_INT_CRC | SDHCI_INT_END_BIT |
++			SDHCI_INT_INDEX)) {
++			host->cmd->error = -EILSEQ;
++	}
++
++	if (host->cmd->error) {
++		tasklet_schedule(&host->finish_tasklet);
++		return;
++	}
++
++	if (intmask & SDHCI_INT_RESPONSE)
++		bcm2835_mmc_finish_command(host);
++
++}
++
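++/*
++ * Data-phase interrupt: translate error bits, finish DMA writes, drive
++ * PIO transfers and handle data completing before the command.
++ */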
++static void bcm2835_mmc_data_irq(struct bcm2835_host *host, u32 intmask)
++{
++	struct dma_chan *dma_chan;
++	u32 dir_data;
++
++	BUG_ON(intmask == 0);
++
++	if (!host->data) {
++		/*
++		 * The "data complete" interrupt is also used to
++		 * indicate that a busy state has ended. See comment
++		 * above in sdhci_cmd_irq().
++		 */
++		if (host->cmd && (host->cmd->flags & MMC_RSP_BUSY)) {
++			if (intmask & SDHCI_INT_DATA_END) {
++				bcm2835_mmc_finish_command(host);
++				return;
++			}
++		}
++
++		pr_debug("%s: Got data interrupt 0x%08x even "
++			"though no data operation was in progress.\n",
++			mmc_hostname(host->mmc), (unsigned)intmask);
++		bcm2835_mmc_dumpregs(host);
++
++		return;
++	}
++
++	if (intmask & SDHCI_INT_DATA_TIMEOUT)
++		host->data->error = -ETIMEDOUT;
++	else if (intmask & SDHCI_INT_DATA_END_BIT)
++		host->data->error = -EILSEQ;
++	else if ((intmask & SDHCI_INT_DATA_CRC) &&
++		SDHCI_GET_CMD(bcm2835_mmc_readw(host, SDHCI_COMMAND))
++			!= MMC_BUS_TEST_R)
++		host->data->error = -EILSEQ;
++
++	if (host->use_dma) {
++		if  (host->data->flags & MMC_DATA_WRITE) {
++			/* IRQ handled here */
++
++			dma_chan = host->dma_chan_tx;
++			dir_data = DMA_TO_DEVICE;
++			dma_unmap_sg(dma_chan->device->dev,
++				 host->data->sg, host->data->sg_len,
++				 dir_data);
++
++			bcm2835_mmc_finish_data(host);
++		}
++
++	} else {
++		if (host->data->error)
++			bcm2835_mmc_finish_data(host);
++		else {
++			if (intmask & (SDHCI_INT_DATA_AVAIL | SDHCI_INT_SPACE_AVAIL))
++				bcm2835_mmc_transfer_pio(host);
++
++			if (intmask & SDHCI_INT_DATA_END) {
++				if (host->cmd) {
++					/*
++					 * Data managed to finish before the
++					 * command completed. Make sure we do
++					 * things in the proper order.
++					 */
++					host->data_early = 1;
++				} else {
++					bcm2835_mmc_finish_data(host);
++				}
++			}
++		}
++	}
++}
++
++
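++/*
++ * Hard interrupt handler: acknowledge and dispatch command/data interrupts,
++ * re-reading the status register for up to 16 iterations, and defer SDIO
++ * card interrupts to the threaded handler.
++ */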
++static irqreturn_t bcm2835_mmc_irq(int irq, void *dev_id)
++{
++	irqreturn_t result = IRQ_NONE;
++	struct bcm2835_host *host = dev_id;
++	u32 intmask, mask, unexpected = 0;
++	int max_loops = 16;
++
++	spin_lock(&host->lock);
++
++	intmask = bcm2835_mmc_readl(host, SDHCI_INT_STATUS);
++
++	if (!intmask || intmask == 0xffffffff) {
++		result = IRQ_NONE;
++		goto out;
++	}
++
++	do {
++		/* Clear selected interrupts. */
++		mask = intmask & (SDHCI_INT_CMD_MASK | SDHCI_INT_DATA_MASK |
++				  SDHCI_INT_BUS_POWER);
++		bcm2835_mmc_writel(host, mask, SDHCI_INT_STATUS, 8);
++
++
++		if (intmask & SDHCI_INT_CMD_MASK)
++			bcm2835_mmc_cmd_irq(host, intmask & SDHCI_INT_CMD_MASK);
++
++		if (intmask & SDHCI_INT_DATA_MASK)
++			bcm2835_mmc_data_irq(host, intmask & SDHCI_INT_DATA_MASK);
++
++		if (intmask & SDHCI_INT_BUS_POWER)
++			pr_err("%s: Card is consuming too much power!\n",
++				mmc_hostname(host->mmc));
++
++		if (intmask & SDHCI_INT_CARD_INT) {
++			bcm2835_mmc_enable_sdio_irq_nolock(host, false);
++			host->thread_isr |= SDHCI_INT_CARD_INT;
++			result = IRQ_WAKE_THREAD;
++		}
++
++		intmask &= ~(SDHCI_INT_CARD_INSERT | SDHCI_INT_CARD_REMOVE |
++			     SDHCI_INT_CMD_MASK | SDHCI_INT_DATA_MASK |
++			     SDHCI_INT_ERROR | SDHCI_INT_BUS_POWER |
++			     SDHCI_INT_CARD_INT);
++
++		if (intmask) {
++			unexpected |= intmask;
++			bcm2835_mmc_writel(host, intmask, SDHCI_INT_STATUS, 9);
++		}
++
++		if (result == IRQ_NONE)
++			result = IRQ_HANDLED;
++
++		intmask = bcm2835_mmc_readl(host, SDHCI_INT_STATUS);
++	} while (intmask && --max_loops);
++out:
++	spin_unlock(&host->lock);
++
++	if (unexpected) {
++		pr_err("%s: Unexpected interrupt 0x%08x.\n",
++			   mmc_hostname(host->mmc), unexpected);
++		bcm2835_mmc_dumpregs(host);
++	}
++
++	return result;
++}
++
++static irqreturn_t bcm2835_mmc_thread_irq(int irq, void *dev_id)
++{
++	struct bcm2835_host *host = dev_id;
++	unsigned long flags;
++	u32 isr;
++
++	spin_lock_irqsave(&host->lock, flags);
++	isr = host->thread_isr;
++	host->thread_isr = 0;
++	spin_unlock_irqrestore(&host->lock, flags);
++
++	if (isr & SDHCI_INT_CARD_INT) {
++		sdio_run_irqs(host->mmc);
++
++		spin_lock_irqsave(&host->lock, flags);
++		if (host->flags & SDHCI_SDIO_IRQ_ENABLED)
++			bcm2835_mmc_enable_sdio_irq_nolock(host, true);
++		spin_unlock_irqrestore(&host->lock, flags);
++	}
++
++	return isr ? IRQ_HANDLED : IRQ_NONE;
++}
++
++
++
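++/*
++ * Program an SDHCI 3.0 style clock divider (even divisors of max_clk),
++ * honouring the overclock_50 override, then wait up to 20 ms for the
++ * internal clock to stabilise before enabling it to the card.
++ */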
++void bcm2835_mmc_set_clock(struct bcm2835_host *host, unsigned int clock)
++{
++	int div = 0; /* Initialized for compiler warning */
++	int real_div = div, clk_mul = 1;
++	u16 clk = 0;
++	unsigned long timeout;
++	unsigned int input_clock = clock;
++
++	if (host->overclock_50 && (clock == 50000000))
++		clock = host->overclock_50 * 1000000 + 999999;
++
++	host->mmc->actual_clock = 0;
++
++	bcm2835_mmc_writew(host, 0, SDHCI_CLOCK_CONTROL);
++
++	if (clock == 0)
++		return;
++
++	/* Version 3.00 divisors must be a multiple of 2. */
++	if (host->max_clk <= clock)
++		div = 1;
++	else {
++		for (div = 2; div < SDHCI_MAX_DIV_SPEC_300;
++			 div += 2) {
++			if ((host->max_clk / div) <= clock)
++				break;
++		}
++	}
++
++	real_div = div;
++	div >>= 1;
++
++	if (real_div)
++		clock = (host->max_clk * clk_mul) / real_div;
++	host->mmc->actual_clock = clock;
++
++	if ((clock > input_clock) && (clock > host->max_overclock)) {
++		pr_warn("%s: Overclocking to %dHz\n",
++			mmc_hostname(host->mmc), clock);
++		host->max_overclock = clock;
++	}
++
++	clk |= (div & SDHCI_DIV_MASK) << SDHCI_DIVIDER_SHIFT;
++	clk |= ((div & SDHCI_DIV_HI_MASK) >> SDHCI_DIV_MASK_LEN)
++		<< SDHCI_DIVIDER_HI_SHIFT;
++	clk |= SDHCI_CLOCK_INT_EN;
++	bcm2835_mmc_writew(host, clk, SDHCI_CLOCK_CONTROL);
++
++	/* Wait max 20 ms */
++	timeout = 20;
++	while (!((clk = bcm2835_mmc_readw(host, SDHCI_CLOCK_CONTROL))
++		& SDHCI_CLOCK_INT_STABLE)) {
++		if (timeout == 0) {
++			pr_err("%s: Internal clock never "
++				"stabilised.\n", mmc_hostname(host->mmc));
++			bcm2835_mmc_dumpregs(host);
++			return;
++		}
++		timeout--;
++		mdelay(1);
++	}
++
++	if (20-timeout > 10 && 20-timeout > host->max_delay) {
++		host->max_delay = 20-timeout;
++		pr_warning("Warning: MMC controller hung for %d ms\n", host->max_delay);
++	}
++
++	clk |= SDHCI_CLOCK_CARD_EN;
++	bcm2835_mmc_writew(host, clk, SDHCI_CLOCK_CONTROL);
++}
++
++static void bcm2835_mmc_request(struct mmc_host *mmc, struct mmc_request *mrq)
++{
++	struct bcm2835_host *host;
++	unsigned long flags;
++
++	host = mmc_priv(mmc);
++
++	spin_lock_irqsave(&host->lock, flags);
++
++	WARN_ON(host->mrq != NULL);
++
++	host->mrq = mrq;
++
++	if (mrq->sbc && !(host->flags & SDHCI_AUTO_CMD23))
++		bcm2835_mmc_send_command(host, mrq->sbc);
++	else
++		bcm2835_mmc_send_command(host, mrq->cmd);
++
++	mmiowb();
++	spin_unlock_irqrestore(&host->lock, flags);
++
++	if (!(mrq->sbc && !(host->flags & SDHCI_AUTO_CMD23)) && mrq->cmd->data && host->use_dma) {
++		/* DMA transfer starts now, PIO starts after interrupt */
++		bcm2835_mmc_transfer_dma(host);
++	}
++}
++
++
++static void bcm2835_mmc_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
++{
++
++	struct bcm2835_host *host = mmc_priv(mmc);
++	unsigned long flags;
++	u8 ctrl;
++	u16 clk, ctrl_2;
++
++	pr_debug("bcm2835_mmc_set_ios: clock %d, pwr %d, bus_width %d, timing %d, vdd %d, drv_type %d\n",
++		 ios->clock, ios->power_mode, ios->bus_width,
++		 ios->timing, ios->signal_voltage, ios->drv_type);
++
++	spin_lock_irqsave(&host->lock, flags);
++
++	if (!ios->clock || ios->clock != host->clock) {
++		bcm2835_mmc_set_clock(host, ios->clock);
++		host->clock = ios->clock;
++	}
++
++	if (host->pwr != SDHCI_POWER_330) {
++		host->pwr = SDHCI_POWER_330;
++		bcm2835_mmc_writeb(host, SDHCI_POWER_330 | SDHCI_POWER_ON, SDHCI_POWER_CONTROL);
++	}
++
++	ctrl = bcm2835_mmc_readb(host, SDHCI_HOST_CONTROL);
++
++	/* set bus width */
++	ctrl &= ~SDHCI_CTRL_8BITBUS;
++	if (ios->bus_width == MMC_BUS_WIDTH_4)
++		ctrl |= SDHCI_CTRL_4BITBUS;
++	else
++		ctrl &= ~SDHCI_CTRL_4BITBUS;
++
++	ctrl &= ~SDHCI_CTRL_HISPD; /* NO_HISPD_BIT */
++
++
++	bcm2835_mmc_writeb(host, ctrl, SDHCI_HOST_CONTROL);
++	/*
++	 * We only need to set Driver Strength if the
++	 * preset value enable is not set.
++	 */
++	ctrl_2 = bcm2835_mmc_readw(host, SDHCI_HOST_CONTROL2);
++	ctrl_2 &= ~SDHCI_CTRL_DRV_TYPE_MASK;
++	if (ios->drv_type == MMC_SET_DRIVER_TYPE_A)
++		ctrl_2 |= SDHCI_CTRL_DRV_TYPE_A;
++	else if (ios->drv_type == MMC_SET_DRIVER_TYPE_C)
++		ctrl_2 |= SDHCI_CTRL_DRV_TYPE_C;
++
++	bcm2835_mmc_writew(host, ctrl_2, SDHCI_HOST_CONTROL2);
++
++	/* Reset SD Clock Enable */
++	clk = bcm2835_mmc_readw(host, SDHCI_CLOCK_CONTROL);
++	clk &= ~SDHCI_CLOCK_CARD_EN;
++	bcm2835_mmc_writew(host, clk, SDHCI_CLOCK_CONTROL);
++
++	/* Re-enable SD Clock */
++	bcm2835_mmc_set_clock(host, host->clock);
++	bcm2835_mmc_writeb(host, ctrl, SDHCI_HOST_CONTROL);
++
++	mmiowb();
++
++	spin_unlock_irqrestore(&host->lock, flags);
++}
++
++
++static struct mmc_host_ops bcm2835_ops = {
++	.request = bcm2835_mmc_request,
++	.set_ios = bcm2835_mmc_set_ios,
++	.enable_sdio_irq = bcm2835_mmc_enable_sdio_irq,
++};
++
++
++static void bcm2835_mmc_tasklet_finish(unsigned long param)
++{
++	struct bcm2835_host *host;
++	unsigned long flags;
++	struct mmc_request *mrq;
++
++	host = (struct bcm2835_host *)param;
++
++	spin_lock_irqsave(&host->lock, flags);
++
++	/*
++	 * If this tasklet gets rescheduled while running, it will
++	 * be run again afterwards but without any active request.
++	 */
++	if (!host->mrq) {
++		spin_unlock_irqrestore(&host->lock, flags);
++		return;
++	}
++
++	del_timer(&host->timer);
++
++	mrq = host->mrq;
++
++	/*
++	 * The controller needs a reset of internal state machines
++	 * upon error conditions.
++	 */
++	if (!(host->flags & SDHCI_DEVICE_DEAD) &&
++	    ((mrq->cmd && mrq->cmd->error) ||
++		 (mrq->data && (mrq->data->error ||
++		  (mrq->data->stop && mrq->data->stop->error))))) {
++
++		spin_unlock_irqrestore(&host->lock, flags);
++		bcm2835_mmc_reset(host, SDHCI_RESET_CMD);
++		bcm2835_mmc_reset(host, SDHCI_RESET_DATA);
++		spin_lock_irqsave(&host->lock, flags);
++	}
++
++	host->mrq = NULL;
++	host->cmd = NULL;
++	host->data = NULL;
++
++	mmiowb();
++
++	spin_unlock_irqrestore(&host->lock, flags);
++	mmc_request_done(host->mmc, mrq);
++}
++
++
++
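++/*
++ * Final host setup: reset the controller, advertise capabilities and
++ * limits, configure the DMA slave channels (when available) towards the
++ * data FIFO, set up the tasklet, timer and threaded IRQ, then register
++ * with the MMC core.
++ */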
++static int bcm2835_mmc_add_host(struct bcm2835_host *host)
++{
++	struct mmc_host *mmc = host->mmc;
++	struct device *dev = mmc->parent;
++#ifndef FORCE_PIO
++	struct dma_slave_config cfg;
++#endif
++	int ret;
++
++	bcm2835_mmc_reset(host, SDHCI_RESET_ALL);
++
++	host->clk_mul = 0;
++
++	mmc->f_max = host->max_clk;
++	mmc->f_min = host->max_clk / SDHCI_MAX_DIV_SPEC_300;
++
++	/* SDHCI_QUIRK_DATA_TIMEOUT_USES_SDCLK */
++	host->timeout_clk = mmc->f_max / 1000;
++	mmc->max_busy_timeout = (1 << 27) / host->timeout_clk;
++
++	/* host controller capabilities */
++	mmc->caps |= MMC_CAP_CMD23 | MMC_CAP_ERASE | MMC_CAP_NEEDS_POLL |
++		MMC_CAP_SDIO_IRQ | MMC_CAP_SD_HIGHSPEED |
++		MMC_CAP_MMC_HIGHSPEED | MMC_CAP_4_BIT_DATA;
++
++	mmc->caps2 |= MMC_CAP2_SDIO_IRQ_NOTHREAD;
++
++	host->flags = SDHCI_AUTO_CMD23;
++
++	dev_info(dev, "mmc_debug:%x mmc_debug2:%x\n", mmc_debug, mmc_debug2);
++#ifdef FORCE_PIO
++	dev_info(dev, "Forcing PIO mode\n");
++	host->have_dma = false;
++#else
++	if (IS_ERR_OR_NULL(host->dma_chan_tx) ||
++	    IS_ERR_OR_NULL(host->dma_chan_rx)) {
++		dev_err(dev, "%s: Unable to initialise DMA channels. Falling back to PIO\n",
++			DRIVER_NAME);
++		host->have_dma = false;
++	} else {
++		dev_info(dev, "DMA channels allocated\n");
++		host->have_dma = true;
++
++		cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
++		cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
++		cfg.slave_id = 11;		/* DREQ channel */
++
++		cfg.direction = DMA_MEM_TO_DEV;
++		cfg.src_addr = 0;
++		cfg.dst_addr = host->bus_addr + SDHCI_BUFFER;
++		ret = dmaengine_slave_config(host->dma_chan_tx, &cfg);
++
++		cfg.direction = DMA_DEV_TO_MEM;
++		cfg.src_addr = host->bus_addr + SDHCI_BUFFER;
++		cfg.dst_addr = 0;
++		ret = dmaengine_slave_config(host->dma_chan_rx, &cfg);
++	}
++#endif
++	mmc->max_segs = 128;
++	mmc->max_req_size = 524288;
++	mmc->max_seg_size = mmc->max_req_size;
++	mmc->max_blk_size = 512;
++	mmc->max_blk_count =  65535;
++
++	/* report supported voltage ranges */
++	mmc->ocr_avail = MMC_VDD_32_33 | MMC_VDD_33_34;
++
++	tasklet_init(&host->finish_tasklet,
++		bcm2835_mmc_tasklet_finish, (unsigned long)host);
++
++	setup_timer(&host->timer, bcm2835_mmc_timeout_timer, (unsigned long)host);
++	init_waitqueue_head(&host->buf_ready_int);
++
++	bcm2835_mmc_init(host, 0);
++	ret = devm_request_threaded_irq(dev, host->irq, bcm2835_mmc_irq,
++					bcm2835_mmc_thread_irq, IRQF_SHARED,
++					mmc_hostname(mmc), host);
++	if (ret) {
++		dev_err(dev, "Failed to request IRQ %d: %d\n", host->irq, ret);
++		goto untasklet;
++	}
++
++	mmiowb();
++	mmc_add_host(mmc);
++
++	return 0;
++
++untasklet:
++	tasklet_kill(&host->finish_tasklet);
++
++	return ret;
++}
++
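++/*
++ * Platform probe: map the registers, look up the bus address used for DMA,
++ * request the DMA channels, obtain the clock and IRQ, parse DT properties
++ * and add the host.
++ */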
++static int bcm2835_mmc_probe(struct platform_device *pdev)
++{
++	struct device *dev = &pdev->dev;
++	struct device_node *node = dev->of_node;
++	struct clk *clk;
++	struct resource *iomem;
++	struct bcm2835_host *host;
++	struct mmc_host *mmc;
++	const __be32 *addr;
++	int ret;
++
++	mmc = mmc_alloc_host(sizeof(*host), dev);
++	if (!mmc)
++		return -ENOMEM;
++
++	mmc->ops = &bcm2835_ops;
++	host = mmc_priv(mmc);
++	host->mmc = mmc;
++	host->timeout = msecs_to_jiffies(1000);
++	spin_lock_init(&host->lock);
++
++	iomem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++	host->ioaddr = devm_ioremap_resource(dev, iomem);
++	if (IS_ERR(host->ioaddr)) {
++		ret = PTR_ERR(host->ioaddr);
++		goto err;
++	}
++
++	addr = of_get_address(node, 0, NULL, NULL);
++	if (!addr) {
++		dev_err(dev, "could not get DMA-register address\n");
++		ret = -ENODEV;
++		goto err;
++	}
++	host->bus_addr = be32_to_cpup(addr);
++	pr_debug(" - ioaddr %lx, iomem->start %lx, bus_addr %lx\n",
++		 (unsigned long)host->ioaddr,
++		 (unsigned long)iomem->start,
++		 (unsigned long)host->bus_addr);
++
++#ifndef FORCE_PIO
++	if (node) {
++		host->dma_chan_tx = dma_request_slave_channel(dev, "tx");
++		host->dma_chan_rx = dma_request_slave_channel(dev, "rx");
++	} else {
++		dma_cap_mask_t mask;
++
++		dma_cap_zero(mask);
++		/* we don't care about the channel, any would work */
++		dma_cap_set(DMA_SLAVE, mask);
++		host->dma_chan_tx = dma_request_channel(mask, NULL, NULL);
++		host->dma_chan_rx = dma_request_channel(mask, NULL, NULL);
++	}
++#endif
++	clk = devm_clk_get(dev, NULL);
++	if (IS_ERR(clk)) {
++		dev_err(dev, "could not get clk\n");
++		ret = PTR_ERR(clk);
++		goto err;
++	}
++
++	host->max_clk = clk_get_rate(clk);
++
++	host->irq = platform_get_irq(pdev, 0);
++	if (host->irq <= 0) {
++		dev_err(dev, "get IRQ failed\n");
++		ret = -EINVAL;
++		goto err;
++	}
++
++	if (node) {
++		mmc_of_parse(mmc);
++
++		/* Read any custom properties */
++		of_property_read_u32(node,
++				     "brcm,overclock-50",
++				     &host->overclock_50);
++	} else {
++		mmc->caps |= MMC_CAP_4_BIT_DATA;
++	}
++
++	ret = bcm2835_mmc_add_host(host);
++	if (ret)
++		goto err;
++
++	platform_set_drvdata(pdev, host);
++
++	return 0;
++err:
++	mmc_free_host(mmc);
++
++	return ret;
++}
++
++static int bcm2835_mmc_remove(struct platform_device *pdev)
++{
++	struct bcm2835_host *host = platform_get_drvdata(pdev);
++	unsigned long flags;
++	int dead;
++	u32 scratch;
++
++	dead = 0;
++	scratch = bcm2835_mmc_readl(host, SDHCI_INT_STATUS);
++	if (scratch == (u32)-1)
++		dead = 1;
++
++
++	if (dead) {
++		spin_lock_irqsave(&host->lock, flags);
++
++		host->flags |= SDHCI_DEVICE_DEAD;
++
++		if (host->mrq) {
++			pr_err("%s: Controller removed during "
++				"transfer!\n", mmc_hostname(host->mmc));
++
++			host->mrq->cmd->error = -ENOMEDIUM;
++			tasklet_schedule(&host->finish_tasklet);
++		}
++
++		spin_unlock_irqrestore(&host->lock, flags);
++	}
++
++	mmc_remove_host(host->mmc);
++
++	if (!dead)
++		bcm2835_mmc_reset(host, SDHCI_RESET_ALL);
++
++	free_irq(host->irq, host);
++
++	del_timer_sync(&host->timer);
++
++	tasklet_kill(&host->finish_tasklet);
++
++	mmc_free_host(host->mmc);
++	platform_set_drvdata(pdev, NULL);
++
++	return 0;
++}
++
++
++static const struct of_device_id bcm2835_mmc_match[] = {
++	{ .compatible = "brcm,bcm2835-mmc" },
++	{ }
++};
++MODULE_DEVICE_TABLE(of, bcm2835_mmc_match);
++
++
++
++static struct platform_driver bcm2835_mmc_driver = {
++	.probe      = bcm2835_mmc_probe,
++	.remove     = bcm2835_mmc_remove,
++	.driver     = {
++		.name		= DRIVER_NAME,
++		.owner		= THIS_MODULE,
++		.of_match_table	= bcm2835_mmc_match,
++	},
++};
++module_platform_driver(bcm2835_mmc_driver);
++
++module_param(mmc_debug, uint, 0644);
++module_param(mmc_debug2, uint, 0644);
++MODULE_ALIAS("platform:mmc-bcm2835");
++MODULE_DESCRIPTION("BCM2835 SDHCI driver");
++MODULE_LICENSE("GPL v2");
++MODULE_AUTHOR("Gellert Weisz");
diff --git a/target/linux/brcm2708/patches-4.4/0034-Adding-bcm2835-sdhost-driver-and-an-overlay-to-enabl.patch b/target/linux/brcm2708/patches-4.4/0034-Adding-bcm2835-sdhost-driver-and-an-overlay-to-enabl.patch
new file mode 100644
index 0000000..c53bea3
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0034-Adding-bcm2835-sdhost-driver-and-an-overlay-to-enabl.patch
@@ -0,0 +1,2022 @@
+From 9fccc853c4977f9a125de717e7eee0f25895362e Mon Sep 17 00:00:00 2001
+From: Phil Elwell <phil at raspberrypi.org>
+Date: Wed, 25 Mar 2015 17:49:47 +0000
+Subject: [PATCH 034/127] Adding bcm2835-sdhost driver, and an overlay to
+ enable it
+
+BCM2835 has two SD card interfaces. This driver uses the other one.
+
+bcm2835-sdhost: Error handling fix, and code clarification
+
+bcm2835-sdhost: Adding overclocking option
+
+Allow a different clock speed to be substituted for a requested 50MHz.
+This option is exposed using the "overclock_50" DT parameter.
+Note that the sdhost interface is restricted to integer divisions of
+core_freq, and the highest sensible option for a core_freq of 250MHz
+is 84 (250/3 = 83.3MHz), the next being 125 (250/2) which is much too
+high.
+
+Use at your own risk.
+
+bcm2835-sdhost: Round up the overclock, so 62 works for 62.5MHz
+
+Also only warn once for each overclock setting.
+
+bcm2835-sdhost: Improve error handling and recovery
+
+1) Expose the hw_reset method to the MMC framework, removing many
+   internal calls by the driver.
+
+2) Reduce overclock setting on error.
+
+3) Increase timeout to cope with high capacity cards.
+
+4) Add properties and parameters to control pio_limit and debug.
+
+5) Reduce messages at probe time.
+
+bcm2835-sdhost: Further improve overclock back-off
+
+bcm2835-sdhost: Clear HBLC for PIO mode
+
+Also update pio_limit default in overlay README.
+
+bcm2835-sdhost: Add the ERASE capability
+
+See: https://github.com/raspberrypi/linux/issues/1076
+
+bcm2835-sdhost: Ignore CRC7 for MMC CMD1
+
+It seems that the sdhost interface returns CRC7 errors for CMD1,
+which is the MMC-specific SEND_OP_COND. Returning these errors to
+the MMC layer causes a downward spiral, but ignoring them seems
+to be harmless.
+
+bcm2835-mmc/sdhost: Remove ARCH_BCM2835 differences
+
+The bcm2835-mmc driver (and -sdhost driver that copied from it)
+contains code to handle SDIO interrupts in a threaded interrupt
+handler rather than waking the MMC framework thread. The change
+follows a patch from Russell King that adds the facility as the
+preferred way of working.
+
+However, the new code path is only present in ARCH_BCM2835
+builds, which I have taken to be a way of testing the waters
+rather than making the change across the board; I can't see
+any technical reason why it wouldn't be enabled for MACH_BCM270X
+builds. So this patch standardises on the ARCH_BCM2835 code,
+removing the old code paths.
+
+bcm2835-sdhost: Don't log timeout errors unless debug=1
+
+The MMC card-discovery process generates timeouts. This is
+expected behaviour, so reporting it to the user serves no purpose.
+Suppress the reporting of timeout errors unless the debug flag
+is on.
+---
+ drivers/mmc/host/Kconfig          |   10 +
+ drivers/mmc/host/Makefile         |    1 +
+ drivers/mmc/host/bcm2835-sdhost.c | 1907 +++++++++++++++++++++++++++++++++++++
+ 3 files changed, 1918 insertions(+)
+ create mode 100644 drivers/mmc/host/bcm2835-sdhost.c
+
+--- a/drivers/mmc/host/Kconfig
++++ b/drivers/mmc/host/Kconfig
+@@ -33,6 +33,16 @@ config MMC_BCM2835_PIO_DMA_BARRIER
+ 
+ 	  If unsure, say 2 here.
+ 
++config MMC_BCM2835_SDHOST
++	tristate "Support for the SDHost controller on BCM2708/9"
++	depends on MACH_BCM2708 || MACH_BCM2709 || ARCH_BCM2835
++	help
++	  This selects the SDHost controller on BCM2835/6.
++
++	  If you have a controller with this interface, say Y or M here.
++
++	  If unsure, say N.
++
+ config MMC_ARMMMCI
+ 	tristate "ARM AMBA Multimedia Card Interface support"
+ 	depends on ARM_AMBA
+--- a/drivers/mmc/host/Makefile
++++ b/drivers/mmc/host/Makefile
+@@ -18,6 +18,7 @@ obj-$(CONFIG_MMC_SDHCI_S3C)	+= sdhci-s3c
+ obj-$(CONFIG_MMC_SDHCI_SIRF)   	+= sdhci-sirf.o
+ obj-$(CONFIG_MMC_SDHCI_F_SDH30)	+= sdhci_f_sdh30.o
+ obj-$(CONFIG_MMC_SDHCI_SPEAR)	+= sdhci-spear.o
++obj-$(CONFIG_MMC_BCM2835_SDHOST)	+= bcm2835-sdhost.o
+ obj-$(CONFIG_MMC_BCM2835)	+= bcm2835-mmc.o
+ obj-$(CONFIG_MMC_WBSD)		+= wbsd.o
+ obj-$(CONFIG_MMC_AU1X)		+= au1xmmc.o
+--- /dev/null
++++ b/drivers/mmc/host/bcm2835-sdhost.c
+@@ -0,0 +1,1907 @@
++/*
++ * BCM2835 SD host driver.
++ *
++ * Author:      Phil Elwell <phil at raspberrypi.org>
++ *              Copyright 2015
++ *
++ * Based on
++ *  mmc-bcm2835.c by Gellert Weisz
++ * which is, in turn, based on
++ *  sdhci-bcm2708.c by Broadcom
++ *  sdhci-bcm2835.c by Stephen Warren and Oleksandr Tymoshenko
++ *  sdhci.c and sdhci-pci.c by Pierre Ossman
++ *
++ * This program is free software; you can redistribute it and/or modify it
++ * under the terms and conditions of the GNU General Public License,
++ * version 2, as published by the Free Software Foundation.
++ *
++ * This program is distributed in the hope it will be useful, but WITHOUT
++ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
++ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
++ * more details.
++ *
++ * You should have received a copy of the GNU General Public License
++ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
++ */
++
++#define SAFE_READ_THRESHOLD     4
++#define SAFE_WRITE_THRESHOLD    4
++#define ALLOW_DMA               1
++#define ALLOW_CMD23             0
++#define ALLOW_FAST              1
++#define USE_BLOCK_IRQ           1
++
++#include <linux/delay.h>
++#include <linux/module.h>
++#include <linux/io.h>
++#include <linux/mmc/mmc.h>
++#include <linux/mmc/host.h>
++#include <linux/mmc/sd.h>
++#include <linux/scatterlist.h>
++#include <linux/of_address.h>
++#include <linux/of_irq.h>
++#include <linux/clk.h>
++#include <linux/platform_device.h>
++#include <linux/err.h>
++#include <linux/blkdev.h>
++#include <linux/dmaengine.h>
++#include <linux/dma-mapping.h>
++#include <linux/of_dma.h>
++#include <linux/time.h>
++
++#define DRIVER_NAME "sdhost-bcm2835"
++
++#define SDCMD  0x00 /* Command to SD card              - 16 R/W */
++#define SDARG  0x04 /* Argument to SD card             - 32 R/W */
++#define SDTOUT 0x08 /* Start value for timeout counter - 32 R/W */
++#define SDCDIV 0x0c /* Start value for clock divider   - 11 R/W */
++#define SDRSP0 0x10 /* SD card response (31:0)         - 32 R   */
++#define SDRSP1 0x14 /* SD card response (63:32)        - 32 R   */
++#define SDRSP2 0x18 /* SD card response (95:64)        - 32 R   */
++#define SDRSP3 0x1c /* SD card response (127:96)       - 32 R   */
++#define SDHSTS 0x20 /* SD host status                  - 11 R   */
++#define SDVDD  0x30 /* SD card power control           -  1 R/W */
++#define SDEDM  0x34 /* Emergency Debug Mode            - 13 R/W */
++#define SDHCFG 0x38 /* Host configuration              -  2 R/W */
++#define SDHBCT 0x3c /* Host byte count (debug)         - 32 R/W */
++#define SDDATA 0x40 /* Data to/from SD card            - 32 R/W */
++#define SDHBLC 0x50 /* Host block count (SDIO/SDHC)    -  9 R/W */
++
++#define SDCMD_NEW_FLAG                  0x8000
++#define SDCMD_FAIL_FLAG                 0x4000
++#define SDCMD_BUSYWAIT                  0x800
++#define SDCMD_NO_RESPONSE               0x400
++#define SDCMD_LONG_RESPONSE             0x200
++#define SDCMD_WRITE_CMD                 0x80
++#define SDCMD_READ_CMD                  0x40
++#define SDCMD_CMD_MASK                  0x3f
++
++#define SDCDIV_MAX_CDIV                 0x7ff
++
++#define SDHSTS_BUSY_IRPT                0x400
++#define SDHSTS_BLOCK_IRPT               0x200
++#define SDHSTS_SDIO_IRPT                0x100
++#define SDHSTS_REW_TIME_OUT             0x80
++#define SDHSTS_CMD_TIME_OUT             0x40
++#define SDHSTS_CRC16_ERROR              0x20
++#define SDHSTS_CRC7_ERROR               0x10
++#define SDHSTS_FIFO_ERROR               0x08
++/* Reserved */
++/* Reserved */
++#define SDHSTS_DATA_FLAG                0x01
++
++#define SDHSTS_TRANSFER_ERROR_MASK      (SDHSTS_CRC7_ERROR|SDHSTS_CRC16_ERROR|SDHSTS_REW_TIME_OUT|SDHSTS_FIFO_ERROR)
++#define SDHSTS_ERROR_MASK               (SDHSTS_CMD_TIME_OUT|SDHSTS_TRANSFER_ERROR_MASK)
++
++#define SDHCFG_BUSY_IRPT_EN     (1<<10)
++#define SDHCFG_BLOCK_IRPT_EN    (1<<8)
++#define SDHCFG_SDIO_IRPT_EN     (1<<5)
++#define SDHCFG_DATA_IRPT_EN     (1<<4)
++#define SDHCFG_SLOW_CARD        (1<<3)
++#define SDHCFG_WIDE_EXT_BUS     (1<<2)
++#define SDHCFG_WIDE_INT_BUS     (1<<1)
++#define SDHCFG_REL_CMD_LINE     (1<<0)
++
++#define SDEDM_FORCE_DATA_MODE   (1<<19)
++#define SDEDM_CLOCK_PULSE       (1<<20)
++#define SDEDM_BYPASS            (1<<21)
++
++#define SDEDM_WRITE_THRESHOLD_SHIFT 9
++#define SDEDM_READ_THRESHOLD_SHIFT 14
++#define SDEDM_THRESHOLD_MASK     0x1f
++
++#define MHZ 1000000
++
++
++struct bcm2835_host {
++	spinlock_t		lock;
++
++	void __iomem		*ioaddr;
++	u32			bus_addr;
++
++	struct mmc_host		*mmc;
++
++	u32			pio_timeout;	/* In jiffies */
++
++	int			clock;		/* Current clock speed */
++
++	bool			slow_card;	/* Force 11-bit divisor */
++
++	unsigned int		max_clk;	/* Max possible freq */
++
++	struct tasklet_struct	finish_tasklet;	/* Tasklet structures */
++
++	struct timer_list	timer;		/* Timer for timeouts */
++
++	struct timer_list	pio_timer;	/* PIO error detection timer */
++
++	struct sg_mapping_iter	sg_miter;	/* SG state for PIO */
++	unsigned int		blocks;		/* remaining PIO blocks */
++
++	int			irq;		/* Device IRQ */
++
++
++	/* cached registers */
++	u32			hcfg;
++	u32			cdiv;
++
++	struct mmc_request		*mrq;			/* Current request */
++	struct mmc_command		*cmd;			/* Current command */
++	struct mmc_data			*data;			/* Current data request */
++	unsigned int			data_complete:1;	/* Data finished before cmd */
++
++	unsigned int			flush_fifo:1;		/* Drain the fifo when finishing */
++
++	unsigned int			use_busy:1;		/* Wait for busy interrupt */
++
++	unsigned int			debug:1;		/* Enable debug output */
++
++	u32				thread_isr;
++
++	/*DMA part*/
++	struct dma_chan			*dma_chan_rx;		/* DMA channel for reads */
++	struct dma_chan			*dma_chan_tx;		/* DMA channel for writes */
++
++	bool				allow_dma;
++	bool				have_dma;
++	bool				use_dma;
++	/*end of DMA part*/
++
++	int				max_delay;	/* maximum length of time spent waiting */
++	struct timeval			stop_time;	/* when the last stop was issued */
++	u32				delay_after_stop; /* minimum time between stop and subsequent data transfer */
++	u32				overclock_50;	/* frequency to use when 50MHz is requested (in MHz) */
++	u32				overclock;	/* Current frequency if overclocked, else zero */
++	u32				pio_limit;	/* Maximum block count for PIO (0 = always DMA) */
++};
++
++
++static inline void bcm2835_sdhost_write(struct bcm2835_host *host, u32 val, int reg)
++{
++	writel(val, host->ioaddr + reg);
++}
++
++static inline u32 bcm2835_sdhost_read(struct bcm2835_host *host, int reg)
++{
++	return readl(host->ioaddr + reg);
++}
++
++static inline u32 bcm2835_sdhost_read_relaxed(struct bcm2835_host *host, int reg)
++{
++	return readl_relaxed(host->ioaddr + reg);
++}
++
++static void bcm2835_sdhost_dumpcmd(struct bcm2835_host *host,
++				   struct mmc_command *cmd,
++				   const char *label)
++{
++	if (cmd)
++		pr_info("%s:%c%s op %d arg 0x%x flags 0x%x - resp %08x %08x %08x %08x, err %d\n",
++			mmc_hostname(host->mmc),
++			(cmd == host->cmd) ? '>' : ' ',
++			label, cmd->opcode, cmd->arg, cmd->flags,
++			cmd->resp[0], cmd->resp[1], cmd->resp[2], cmd->resp[3],
++			cmd->error);
++}
++
++static void bcm2835_sdhost_dumpregs(struct bcm2835_host *host)
++{
++	bcm2835_sdhost_dumpcmd(host, host->mrq->sbc, "sbc");
++	bcm2835_sdhost_dumpcmd(host, host->mrq->cmd, "cmd");
++	if (host->mrq->data)
++		pr_err("%s: data blocks %x blksz %x - err %d\n",
++		       mmc_hostname(host->mmc),
++		       host->mrq->data->blocks,
++		       host->mrq->data->blksz,
++		       host->mrq->data->error);
++	bcm2835_sdhost_dumpcmd(host, host->mrq->stop, "stop");
++
++	pr_info("%s: =========== REGISTER DUMP ===========\n",
++		mmc_hostname(host->mmc));
++
++	pr_info("%s: SDCMD  0x%08x\n",
++		mmc_hostname(host->mmc),
++		bcm2835_sdhost_read(host, SDCMD));
++	pr_info("%s: SDARG  0x%08x\n",
++		mmc_hostname(host->mmc),
++		bcm2835_sdhost_read(host, SDARG));
++	pr_info("%s: SDTOUT 0x%08x\n",
++		mmc_hostname(host->mmc),
++		bcm2835_sdhost_read(host, SDTOUT));
++	pr_info("%s: SDCDIV 0x%08x\n",
++		mmc_hostname(host->mmc),
++		bcm2835_sdhost_read(host, SDCDIV));
++	pr_info("%s: SDRSP0 0x%08x\n",
++		mmc_hostname(host->mmc),
++		bcm2835_sdhost_read(host, SDRSP0));
++	pr_info("%s: SDRSP1 0x%08x\n",
++		mmc_hostname(host->mmc),
++		bcm2835_sdhost_read(host, SDRSP1));
++	pr_info("%s: SDRSP2 0x%08x\n",
++		mmc_hostname(host->mmc),
++		bcm2835_sdhost_read(host, SDRSP2));
++	pr_info("%s: SDRSP3 0x%08x\n",
++		mmc_hostname(host->mmc),
++		bcm2835_sdhost_read(host, SDRSP3));
++	pr_info("%s: SDHSTS 0x%08x\n",
++		mmc_hostname(host->mmc),
++		bcm2835_sdhost_read(host, SDHSTS));
++	pr_info("%s: SDVDD  0x%08x\n",
++		mmc_hostname(host->mmc),
++		bcm2835_sdhost_read(host, SDVDD));
++	pr_info("%s: SDEDM  0x%08x\n",
++		mmc_hostname(host->mmc),
++		bcm2835_sdhost_read(host, SDEDM));
++	pr_info("%s: SDHCFG 0x%08x\n",
++		mmc_hostname(host->mmc),
++		bcm2835_sdhost_read(host, SDHCFG));
++	pr_info("%s: SDHBCT 0x%08x\n",
++		mmc_hostname(host->mmc),
++		bcm2835_sdhost_read(host, SDHBCT));
++	pr_info("%s: SDHBLC 0x%08x\n",
++		mmc_hostname(host->mmc),
++		bcm2835_sdhost_read(host, SDHBLC));
++
++	pr_info("%s: ===========================================\n",
++		mmc_hostname(host->mmc));
++}
++
++
++static void bcm2835_sdhost_set_power(struct bcm2835_host *host, bool on)
++{
++	bcm2835_sdhost_write(host, on ? 1 : 0, SDVDD);
++}
++
++
++static void bcm2835_sdhost_reset_internal(struct bcm2835_host *host)
++{
++	u32 temp;
++
++	bcm2835_sdhost_set_power(host, false);
++
++	bcm2835_sdhost_write(host, 0, SDCMD);
++	bcm2835_sdhost_write(host, 0, SDARG);
++	bcm2835_sdhost_write(host, 0xf00000, SDTOUT);
++	bcm2835_sdhost_write(host, 0, SDCDIV);
++	bcm2835_sdhost_write(host, 0x7f8, SDHSTS); /* Write 1s to clear */
++	bcm2835_sdhost_write(host, 0, SDHCFG);
++	bcm2835_sdhost_write(host, 0, SDHBCT);
++	bcm2835_sdhost_write(host, 0, SDHBLC);
++
++	/* Limit fifo usage due to silicon bug */
++	temp = bcm2835_sdhost_read(host, SDEDM);
++	temp &= ~((SDEDM_THRESHOLD_MASK<<SDEDM_READ_THRESHOLD_SHIFT) |
++		  (SDEDM_THRESHOLD_MASK<<SDEDM_WRITE_THRESHOLD_SHIFT));
++	temp |= (SAFE_READ_THRESHOLD << SDEDM_READ_THRESHOLD_SHIFT) |
++		(SAFE_WRITE_THRESHOLD << SDEDM_WRITE_THRESHOLD_SHIFT);
++	bcm2835_sdhost_write(host, temp, SDEDM);
++	mdelay(10);
++	bcm2835_sdhost_set_power(host, true);
++	mdelay(10);
++	host->clock = 0;
++	bcm2835_sdhost_write(host, host->hcfg, SDHCFG);
++	bcm2835_sdhost_write(host, host->cdiv, SDCDIV);
++	mmiowb();
++}
++
++
++static void bcm2835_sdhost_reset(struct mmc_host *mmc)
++{
++	struct bcm2835_host *host = mmc_priv(mmc);
++	unsigned long flags;
++	if (host->debug)
++		pr_info("%s: reset\n", mmc_hostname(mmc));
++	spin_lock_irqsave(&host->lock, flags);
++
++	bcm2835_sdhost_reset_internal(host);
++
++	spin_unlock_irqrestore(&host->lock, flags);
++}
++
++static void bcm2835_sdhost_set_ios(struct mmc_host *mmc, struct mmc_ios *ios);
++
++static void bcm2835_sdhost_init(struct bcm2835_host *host, int soft)
++{
++	pr_debug("bcm2835_sdhost_init(%d)\n", soft);
++
++	/* Set interrupt enables */
++	host->hcfg = SDHCFG_BUSY_IRPT_EN;
++
++	bcm2835_sdhost_reset_internal(host);
++
++	if (soft) {
++		/* force clock reconfiguration */
++		host->clock = 0;
++		bcm2835_sdhost_set_ios(host->mmc, &host->mmc->ios);
++	}
++}
++
++static bool bcm2835_sdhost_is_write_complete(struct bcm2835_host *host)
++{
++	bool write_complete = ((bcm2835_sdhost_read(host, SDEDM) & 0xf) == 1);
++
++	if (!write_complete) {
++		/* Request an IRQ for the last block */
++		host->hcfg |= SDHCFG_BLOCK_IRPT_EN;
++		bcm2835_sdhost_write(host, host->hcfg, SDHCFG);
++		if ((bcm2835_sdhost_read(host, SDEDM) & 0xf) == 1) {
++			/* The write has now completed. Disable the interrupt
++			   and clear the status flag */
++			host->hcfg &= ~SDHCFG_BLOCK_IRPT_EN;
++			bcm2835_sdhost_write(host, host->hcfg, SDHCFG);
++			bcm2835_sdhost_write(host, SDHSTS_BLOCK_IRPT, SDHSTS);
++			write_complete = true;
++		}
++	}
++
++	return write_complete;
++}
++
++static void bcm2835_sdhost_wait_write_complete(struct bcm2835_host *host)
++{
++	int timediff;
++#ifdef DEBUG
++	static struct timeval start_time;
++	static int max_stall_time = 0;
++	static int total_stall_time = 0;
++	struct timeval before, after;
++
++	do_gettimeofday(&before);
++	if (max_stall_time == 0)
++		start_time = before;
++#endif
++
++	timediff = 0;
++
++	while (1) {
++		u32 edm = bcm2835_sdhost_read(host, SDEDM);
++		if ((edm & 0xf) == 1)
++			break;
++		timediff++;
++		if (timediff > 5000000) {
++#ifdef DEBUG
++			do_gettimeofday(&after);
++			timediff = (after.tv_sec - before.tv_sec)*1000000 +
++				(after.tv_usec - before.tv_usec);
++
++			pr_err(" wait_write_complete - still waiting after %dus\n",
++			       timediff);
++#else
++			pr_err(" wait_write_complete - still waiting after %d retries\n",
++			       timediff);
++#endif
++			bcm2835_sdhost_dumpregs(host);
++			host->data->error = -ETIMEDOUT;
++			return;
++		}
++	}
++
++#ifdef DEBUG
++	do_gettimeofday(&after);
++	timediff = (after.tv_sec - before.tv_sec)*1000000 + (after.tv_usec - before.tv_usec);
++
++	total_stall_time += timediff;
++	if (timediff > max_stall_time)
++		max_stall_time = timediff;
++
++	if ((after.tv_sec - start_time.tv_sec) > 10) {
++		pr_debug(" wait_write_complete - max wait %dus, total %dus\n",
++			 max_stall_time, total_stall_time);
++		start_time = after;
++		max_stall_time = 0;
++		total_stall_time = 0;
++	}
++#endif
++}
++
++static void bcm2835_sdhost_finish_data(struct bcm2835_host *host);
++
++static void bcm2835_sdhost_dma_complete(void *param)
++{
++	struct bcm2835_host *host = param;
++	struct dma_chan *dma_chan;
++	unsigned long flags;
++	u32 dir_data;
++
++	spin_lock_irqsave(&host->lock, flags);
++
++	if (host->data) {
++		bool write_complete;
++		if (USE_BLOCK_IRQ)
++			write_complete = bcm2835_sdhost_is_write_complete(host);
++		else {
++			bcm2835_sdhost_wait_write_complete(host);
++			write_complete = true;
++		}
++		pr_debug("dma_complete() - write_complete=%d\n",
++			 write_complete);
++
++		if (write_complete || (host->data->flags & MMC_DATA_READ))
++		{
++			if (write_complete) {
++				dma_chan = host->dma_chan_tx;
++				dir_data = DMA_TO_DEVICE;
++			} else {
++				dma_chan = host->dma_chan_rx;
++				dir_data = DMA_FROM_DEVICE;
++			}
++
++			dma_unmap_sg(dma_chan->device->dev,
++				     host->data->sg, host->data->sg_len,
++				     dir_data);
++
++			bcm2835_sdhost_finish_data(host);
++		}
++	}
++
++	spin_unlock_irqrestore(&host->lock, flags);
++}
++
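++/*
++ * Busy-wait (bounded) for the data flag that indicates the FIFO can be
++ * read or written; flags a data timeout if it never appears.
++ */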
++static bool data_transfer_wait(struct bcm2835_host *host)
++{
++	unsigned long timeout = 1000000;
++	while (timeout)
++	{
++		u32 sdhsts = bcm2835_sdhost_read(host, SDHSTS);
++		if (sdhsts & SDHSTS_DATA_FLAG) {
++			bcm2835_sdhost_write(host, SDHSTS_DATA_FLAG, SDHSTS);
++			break;
++		}
++		timeout--;
++	}
++	if (timeout == 0) {
++	    pr_err("%s: Data %s timeout\n",
++		   mmc_hostname(host->mmc),
++		   (host->data->flags & MMC_DATA_READ) ? "read" : "write");
++	    bcm2835_sdhost_dumpregs(host);
++	    host->data->error = -ETIMEDOUT;
++	    return false;
++	}
++	return true;
++}
++
++static void bcm2835_sdhost_read_block_pio(struct bcm2835_host *host)
++{
++	unsigned long flags;
++	size_t blksize, len;
++	u32 *buf;
++
++	blksize = host->data->blksz;
++
++	local_irq_save(flags);
++
++	while (blksize) {
++		if (!sg_miter_next(&host->sg_miter))
++			BUG();
++
++		len = min(host->sg_miter.length, blksize);
++		BUG_ON(len % 4);
++
++		blksize -= len;
++		host->sg_miter.consumed = len;
++
++		buf = (u32 *)host->sg_miter.addr;
++
++		while (len) {
++			if (!data_transfer_wait(host))
++				break;
++
++			*(buf++) = bcm2835_sdhost_read(host, SDDATA);
++			len -= 4;
++		}
++
++		if (host->data->error)
++			break;
++	}
++
++	sg_miter_stop(&host->sg_miter);
++
++	local_irq_restore(flags);
++}
++
++static void bcm2835_sdhost_write_block_pio(struct bcm2835_host *host)
++{
++	unsigned long flags;
++	size_t blksize, len;
++	u32 *buf;
++
++	blksize = host->data->blksz;
++
++	local_irq_save(flags);
++
++	while (blksize) {
++		if (!sg_miter_next(&host->sg_miter))
++			BUG();
++
++		len = min(host->sg_miter.length, blksize);
++		BUG_ON(len % 4);
++
++		blksize -= len;
++		host->sg_miter.consumed = len;
++
++		buf = host->sg_miter.addr;
++
++		while (len) {
++			if (!data_transfer_wait(host))
++				break;
++
++			bcm2835_sdhost_write(host, *(buf++), SDDATA);
++			len -= 4;
++		}
++
++		if (host->data->error)
++			break;
++	}
++
++	sg_miter_stop(&host->sg_miter);
++
++	local_irq_restore(flags);
++}
++
++
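++/*
++ * Perform one block of PIO and check SDHSTS for CRC/timeout errors; for
++ * writes, arm the PIO error-detection timer.
++ */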
++static void bcm2835_sdhost_transfer_pio(struct bcm2835_host *host)
++{
++	u32 sdhsts;
++	bool is_read;
++	BUG_ON(!host->data);
++
++	is_read = (host->data->flags & MMC_DATA_READ) != 0;
++	if (is_read)
++		bcm2835_sdhost_read_block_pio(host);
++	else
++		bcm2835_sdhost_write_block_pio(host);
++
++	sdhsts = bcm2835_sdhost_read(host, SDHSTS);
++	if (sdhsts & (SDHSTS_CRC16_ERROR |
++		      SDHSTS_CRC7_ERROR |
++		      SDHSTS_FIFO_ERROR)) {
++		pr_err("%s: %s transfer error - HSTS %x\n",
++		       mmc_hostname(host->mmc),
++		       is_read ? "read" : "write",
++		       sdhsts);
++		host->data->error = -EILSEQ;
++	} else if ((sdhsts & (SDHSTS_CMD_TIME_OUT |
++			      SDHSTS_REW_TIME_OUT))) {
++		pr_err("%s: %s timeout error - HSTS %x\n",
++		       mmc_hostname(host->mmc),
++		       is_read ? "read" : "write",
++		       sdhsts);
++		host->data->error = -ETIMEDOUT;
++	} else if (!is_read && !host->data->error) {
++		/* Start a timer in case a transfer error occurs because
++		   there is no error interrupt */
++		mod_timer(&host->pio_timer, jiffies + host->pio_timeout);
++	}
++}
++
++
++static void bcm2835_sdhost_transfer_dma(struct bcm2835_host *host)
++{
++	u32 len, dir_data, dir_slave;
++	struct dma_async_tx_descriptor *desc = NULL;
++	struct dma_chan *dma_chan;
++
++	pr_debug("bcm2835_sdhost_transfer_dma()\n");
++
++	WARN_ON(!host->data);
++
++	if (!host->data)
++		return;
++
++	if (host->data->flags & MMC_DATA_READ) {
++		dma_chan = host->dma_chan_rx;
++		dir_data = DMA_FROM_DEVICE;
++		dir_slave = DMA_DEV_TO_MEM;
++	} else {
++		dma_chan = host->dma_chan_tx;
++		dir_data = DMA_TO_DEVICE;
++		dir_slave = DMA_MEM_TO_DEV;
++	}
++
++	BUG_ON(!dma_chan->device);
++	BUG_ON(!dma_chan->device->dev);
++	BUG_ON(!host->data->sg);
++
++	len = dma_map_sg(dma_chan->device->dev, host->data->sg,
++			 host->data->sg_len, dir_data);
++	if (len > 0) {
++		desc = dmaengine_prep_slave_sg(dma_chan, host->data->sg,
++					       len, dir_slave,
++					       DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
++	} else {
++		dev_err(mmc_dev(host->mmc), "dma_map_sg returned zero length\n");
++	}
++	if (desc) {
++		desc->callback = bcm2835_sdhost_dma_complete;
++		desc->callback_param = host;
++		dmaengine_submit(desc);
++		dma_async_issue_pending(dma_chan);
++	}
++
++}
++
++
++static void bcm2835_sdhost_set_transfer_irqs(struct bcm2835_host *host)
++{
++	u32 all_irqs = SDHCFG_DATA_IRPT_EN | SDHCFG_BLOCK_IRPT_EN |
++		SDHCFG_BUSY_IRPT_EN;
++	if (host->use_dma)
++		host->hcfg = (host->hcfg & ~all_irqs) |
++			SDHCFG_BUSY_IRPT_EN;
++	else
++		host->hcfg = (host->hcfg & ~all_irqs) |
++			SDHCFG_DATA_IRPT_EN |
++			SDHCFG_BUSY_IRPT_EN;
++
++	bcm2835_sdhost_write(host, host->hcfg, SDHCFG);
++}
++
++
++static void bcm2835_sdhost_prepare_data(struct bcm2835_host *host, struct mmc_command *cmd)
++{
++	struct mmc_data *data = cmd->data;
++
++	WARN_ON(host->data);
++
++	if (!data)
++		return;
++
++	/* Sanity checks */
++	BUG_ON(data->blksz * data->blocks > 524288);
++	BUG_ON(data->blksz > host->mmc->max_blk_size);
++	BUG_ON(data->blocks > 65535);
++
++	host->data = data;
++	host->data_complete = 0;
++	host->flush_fifo = 0;
++	host->data->bytes_xfered = 0;
++
++	host->use_dma = host->have_dma && (data->blocks > host->pio_limit);
++	if (!host->use_dma) {
++		int flags;
++
++		flags = SG_MITER_ATOMIC;
++		if (data->flags & MMC_DATA_READ)
++			flags |= SG_MITER_TO_SG;
++		else
++			flags |= SG_MITER_FROM_SG;
++		sg_miter_start(&host->sg_miter, data->sg, data->sg_len, flags);
++		host->blocks = data->blocks;
++	}
++
++	bcm2835_sdhost_set_transfer_irqs(host);
++
++	bcm2835_sdhost_write(host, data->blksz, SDHBCT);
++	bcm2835_sdhost_write(host, host->use_dma ? data->blocks : 0, SDHBLC);
++
++	BUG_ON(!host->data);
++}
++
++
++void bcm2835_sdhost_send_command(struct bcm2835_host *host, struct mmc_command *cmd)
++{
++	u32 sdcmd, sdhsts;
++	unsigned long timeout;
++	int delay;
++
++	WARN_ON(host->cmd);
++
++	if (cmd->data)
++		pr_debug("%s: send_command %d 0x%x "
++			 "(flags 0x%x) - %s %d*%d\n",
++			 mmc_hostname(host->mmc),
++			 cmd->opcode, cmd->arg, cmd->flags,
++			 (cmd->data->flags & MMC_DATA_READ) ?
++			 "read" : "write", cmd->data->blocks,
++			 cmd->data->blksz);
++	else
++		pr_debug("%s: send_command %d 0x%x (flags 0x%x)\n",
++			 mmc_hostname(host->mmc),
++			 cmd->opcode, cmd->arg, cmd->flags);
++
++	/* Wait max 100 ms */
++	timeout = 10000;
++
++	while (bcm2835_sdhost_read(host, SDCMD) & SDCMD_NEW_FLAG) {
++		if (timeout == 0) {
++			pr_err("%s: previous command never completed.\n",
++				mmc_hostname(host->mmc));
++			bcm2835_sdhost_dumpregs(host);
++			cmd->error = -EIO;
++			tasklet_schedule(&host->finish_tasklet);
++			return;
++		}
++		timeout--;
++		udelay(10);
++	}
++
++	delay = (10000 - timeout)/100;
++	if (delay > host->max_delay) {
++		host->max_delay = delay;
++		pr_warning("%s: controller hung for %d ms\n",
++			   mmc_hostname(host->mmc),
++			   host->max_delay);
++	}
++
++	timeout = jiffies;
++	if (!cmd->data && cmd->busy_timeout > 9000)
++		timeout += DIV_ROUND_UP(cmd->busy_timeout, 1000) * HZ + HZ;
++	else
++		timeout += 10 * HZ;
++	mod_timer(&host->timer, timeout);
++
++	host->cmd = cmd;
++
++	/* Clear any error flags */
++	sdhsts = bcm2835_sdhost_read(host, SDHSTS);
++	if (sdhsts & SDHSTS_ERROR_MASK)
++		bcm2835_sdhost_write(host, sdhsts, SDHSTS);
++
++	bcm2835_sdhost_prepare_data(host, cmd);
++
++	bcm2835_sdhost_write(host, cmd->arg, SDARG);
++
++	if ((cmd->flags & MMC_RSP_136) && (cmd->flags & MMC_RSP_BUSY)) {
++		pr_err("%s: unsupported response type!\n",
++			mmc_hostname(host->mmc));
++		cmd->error = -EINVAL;
++		tasklet_schedule(&host->finish_tasklet);
++		return;
++	}
++
++	sdcmd = cmd->opcode & SDCMD_CMD_MASK;
++
++	if (!(cmd->flags & MMC_RSP_PRESENT))
++		sdcmd |= SDCMD_NO_RESPONSE;
++	else {
++		if (cmd->flags & MMC_RSP_136)
++			sdcmd |= SDCMD_LONG_RESPONSE;
++		if (cmd->flags & MMC_RSP_BUSY) {
++			sdcmd |= SDCMD_BUSYWAIT;
++			host->use_busy = 1;
++		}
++	}
++
++	if (cmd->data) {
++		if (host->delay_after_stop) {
++			struct timeval now;
++			int time_since_stop;
++			do_gettimeofday(&now);
++			time_since_stop = (now.tv_sec - host->stop_time.tv_sec);
++			if (time_since_stop < 2) {
++				/* Possibly less than one second */
++				time_since_stop = time_since_stop * 1000000 +
++					(now.tv_usec - host->stop_time.tv_usec);
++				if (time_since_stop < host->delay_after_stop)
++					udelay(host->delay_after_stop -
++					       time_since_stop);
++			}
++		}
++
++		if (cmd->data->flags & MMC_DATA_WRITE)
++			sdcmd |= SDCMD_WRITE_CMD;
++		if (cmd->data->flags & MMC_DATA_READ)
++			sdcmd |= SDCMD_READ_CMD;
++	}
++
++	bcm2835_sdhost_write(host, sdcmd | SDCMD_NEW_FLAG, SDCMD);
++}
++
++
++static void bcm2835_sdhost_finish_command(struct bcm2835_host *host);
++static void bcm2835_sdhost_transfer_complete(struct bcm2835_host *host);
++
++static void bcm2835_sdhost_finish_data(struct bcm2835_host *host)
++{
++	struct mmc_data *data;
++
++	data = host->data;
++	BUG_ON(!data);
++
++	pr_debug("finish_data(error %d, stop %d, sbc %d)\n",
++	       data->error, data->stop ? 1 : 0,
++	       host->mrq->sbc ? 1 : 0);
++
++	host->hcfg &= ~(SDHCFG_DATA_IRPT_EN | SDHCFG_BLOCK_IRPT_EN);
++	bcm2835_sdhost_write(host, host->hcfg, SDHCFG);
++
++	if (data->error) {
++		data->bytes_xfered = 0;
++	} else
++		data->bytes_xfered = data->blksz * data->blocks;
++
++	host->data_complete = 1;
++
++	if (host->cmd) {
++		/*
++		 * Data managed to finish before the
++		 * command completed. Make sure we do
++		 * things in the proper order.
++		 */
++		pr_debug("Finished early - HSTS %x\n",
++			 bcm2835_sdhost_read(host, SDHSTS));
++	}
++	else
++		bcm2835_sdhost_transfer_complete(host);
++}
++
++
++static void bcm2835_sdhost_transfer_complete(struct bcm2835_host *host)
++{
++	struct mmc_data *data;
++
++	BUG_ON(host->cmd);
++	BUG_ON(!host->data);
++	BUG_ON(!host->data_complete);
++
++	data = host->data;
++	host->data = NULL;
++
++	pr_debug("transfer_complete(error %d, stop %d)\n",
++	       data->error, data->stop ? 1 : 0);
++
++	/*
++	 * Need to send CMD12 if -
++	 * a) open-ended multiblock transfer (no CMD23)
++	 * b) error in multiblock transfer
++	 */
++	if (data->stop &&
++	    (data->error ||
++	     !host->mrq->sbc)) {
++		host->flush_fifo = 1;
++		bcm2835_sdhost_send_command(host, data->stop);
++		if (host->delay_after_stop)
++			do_gettimeofday(&host->stop_time);
++		if (!host->use_busy)
++			bcm2835_sdhost_finish_command(host);
++	} else {
++		tasklet_schedule(&host->finish_tasklet);
++	}
++}
++
++static void bcm2835_sdhost_finish_command(struct bcm2835_host *host)
++{
++	u32 sdcmd;
++	unsigned long timeout;
++#ifdef DEBUG
++	struct timeval before, after;
++	int timediff = 0;
++#endif
++
++	pr_debug("finish_command(%x)\n", bcm2835_sdhost_read(host, SDCMD));
++
++	BUG_ON(!host->cmd || !host->mrq);
++
++#ifdef DEBUG
++	do_gettimeofday(&before);
++#endif
++	/* Wait max 100 ms */
++	timeout = 10000;
++	for (sdcmd = bcm2835_sdhost_read(host, SDCMD);
++	     (sdcmd & SDCMD_NEW_FLAG) && timeout;
++	     timeout--) {
++		if (host->flush_fifo) {
++			while (bcm2835_sdhost_read(host, SDHSTS) &
++			       SDHSTS_DATA_FLAG)
++				(void)bcm2835_sdhost_read(host, SDDATA);
++		}
++		udelay(10);
++		sdcmd = bcm2835_sdhost_read(host, SDCMD);
++	}
++#ifdef DEBUG
++	do_gettimeofday(&after);
++	timediff = (after.tv_sec - before.tv_sec)*1000000 +
++		(after.tv_usec - before.tv_usec);
++
++	pr_debug(" finish_command - waited %dus\n", timediff);
++#endif
++
++	if (timeout == 0) {
++		pr_err("%s: command never completed.\n",
++		       mmc_hostname(host->mmc));
++		bcm2835_sdhost_dumpregs(host);
++		host->cmd->error = -EIO;
++		tasklet_schedule(&host->finish_tasklet);
++		return;
++	}
++
++	if (host->flush_fifo) {
++		for (timeout = 100;
++		     (bcm2835_sdhost_read(host, SDHSTS) & SDHSTS_DATA_FLAG) && timeout;
++		     timeout--) {
++			(void)bcm2835_sdhost_read(host, SDDATA);
++		}
++		host->flush_fifo = 0;
++		if (timeout == 0) {
++			pr_err("%s: FIFO never drained.\n",
++			       mmc_hostname(host->mmc));
++			bcm2835_sdhost_dumpregs(host);
++			host->cmd->error = -EIO;
++			tasklet_schedule(&host->finish_tasklet);
++			return;
++		}
++	}
++
++	/* Check for errors */
++	if (sdcmd & SDCMD_FAIL_FLAG)
++	{
++		u32 sdhsts = bcm2835_sdhost_read(host, SDHSTS);
++
++		if (host->debug)
++			pr_info("%s: error detected - CMD %x, HSTS %03x, EDM %x\n",
++				mmc_hostname(host->mmc), sdcmd, sdhsts,
++				bcm2835_sdhost_read(host, SDEDM));
++
++		if ((sdhsts & SDHSTS_CRC7_ERROR) &&
++		    (host->cmd->opcode == 1)) {
++			if (host->debug)
++				pr_info("%s: ignoring CRC7 error for CMD1\n",
++					mmc_hostname(host->mmc));
++		} else {
++			if (sdhsts & SDHSTS_CMD_TIME_OUT) {
++				if (host->debug)
++					pr_err("%s: command %d timeout\n",
++					       mmc_hostname(host->mmc),
++					       host->cmd->opcode);
++				host->cmd->error = -ETIMEDOUT;
++			} else {
++				pr_err("%s: unexpected command %d error\n",
++				       mmc_hostname(host->mmc),
++				       host->cmd->opcode);
++				bcm2835_sdhost_dumpregs(host);
++				host->cmd->error = -EIO;
++			}
++			tasklet_schedule(&host->finish_tasklet);
++			return;
++		}
++	}
++
++	if (host->cmd->flags & MMC_RSP_PRESENT) {
++		if (host->cmd->flags & MMC_RSP_136) {
++			int i;
++			for (i = 0; i < 4; i++)
++				host->cmd->resp[3 - i] = bcm2835_sdhost_read(host, SDRSP0 + i*4);
++			pr_debug("%s: finish_command %08x %08x %08x %08x\n",
++				 mmc_hostname(host->mmc),
++				 host->cmd->resp[0], host->cmd->resp[1], host->cmd->resp[2], host->cmd->resp[3]);
++		} else {
++			host->cmd->resp[0] = bcm2835_sdhost_read(host, SDRSP0);
++			pr_debug("%s: finish_command %08x\n",
++				 mmc_hostname(host->mmc),
++				 host->cmd->resp[0]);
++		}
++	}
++
++	host->cmd->error = 0;
++
++	if (host->cmd == host->mrq->sbc) {
++		/* Finished CMD23, now send actual command. */
++		host->cmd = NULL;
++		bcm2835_sdhost_send_command(host, host->mrq->cmd);
++
++		if (host->cmd && host->cmd->data && host->use_dma)
++			/* DMA transfer starts now, PIO starts after irq */
++			bcm2835_sdhost_transfer_dma(host);
++
++		if (!host->use_busy)
++			bcm2835_sdhost_finish_command(host);
++	} else if (host->cmd == host->mrq->stop)
++		/* Finished CMD12 */
++		tasklet_schedule(&host->finish_tasklet);
++	else {
++		/* Processed actual command. */
++		host->cmd = NULL;
++		if (!host->data)
++			tasklet_schedule(&host->finish_tasklet);
++		else if (host->data_complete)
++			bcm2835_sdhost_transfer_complete(host);
++	}
++}
++
++static void bcm2835_sdhost_timeout(unsigned long data)
++{
++	struct bcm2835_host *host;
++	unsigned long flags;
++
++	host = (struct bcm2835_host *)data;
++
++	spin_lock_irqsave(&host->lock, flags);
++
++	if (host->mrq) {
++		pr_err("%s: timeout waiting for hardware interrupt.\n",
++			mmc_hostname(host->mmc));
++		bcm2835_sdhost_dumpregs(host);
++
++		if (host->data) {
++			host->data->error = -ETIMEDOUT;
++			bcm2835_sdhost_finish_data(host);
++		} else {
++			if (host->cmd)
++				host->cmd->error = -ETIMEDOUT;
++			else
++				host->mrq->cmd->error = -ETIMEDOUT;
++
++			pr_debug("timeout_timer tasklet_schedule\n");
++			tasklet_schedule(&host->finish_tasklet);
++		}
++	}
++
++	mmiowb();
++	spin_unlock_irqrestore(&host->lock, flags);
++}
++
++static void bcm2835_sdhost_pio_timeout(unsigned long data)
++{
++	struct bcm2835_host *host;
++	unsigned long flags;
++
++	host = (struct bcm2835_host *)data;
++
++	spin_lock_irqsave(&host->lock, flags);
++
++	if (host->data) {
++		u32 sdhsts = bcm2835_sdhost_read(host, SDHSTS);
++
++		if (sdhsts & SDHSTS_REW_TIME_OUT) {
++			pr_err("%s: transfer timeout\n",
++			       mmc_hostname(host->mmc));
++			if (host->debug)
++				bcm2835_sdhost_dumpregs(host);
++		} else {
++			pr_err("%s: unexpected transfer timeout\n",
++			       mmc_hostname(host->mmc));
++			bcm2835_sdhost_dumpregs(host);
++		}
++
++		bcm2835_sdhost_write(host, SDHSTS_TRANSFER_ERROR_MASK,
++				     SDHSTS);
++
++		host->data->error = -ETIMEDOUT;
++
++		bcm2835_sdhost_finish_data(host);
++	}
++
++	mmiowb();
++	spin_unlock_irqrestore(&host->lock, flags);
++}
++
++static void bcm2835_sdhost_enable_sdio_irq_nolock(struct bcm2835_host *host, int enable)
++{
++	if (enable)
++		host->hcfg |= SDHCFG_SDIO_IRPT_EN;
++	else
++		host->hcfg &= ~SDHCFG_SDIO_IRPT_EN;
++	bcm2835_sdhost_write(host, host->hcfg, SDHCFG);
++	mmiowb();
++}
++
++static void bcm2835_sdhost_enable_sdio_irq(struct mmc_host *mmc, int enable)
++{
++	struct bcm2835_host *host = mmc_priv(mmc);
++	unsigned long flags;
++
++	pr_debug("%s: enable_sdio_irq(%d)\n", mmc_hostname(mmc), enable);
++	spin_lock_irqsave(&host->lock, flags);
++	bcm2835_sdhost_enable_sdio_irq_nolock(host, enable);
++	spin_unlock_irqrestore(&host->lock, flags);
++}
++
++static u32 bcm2835_sdhost_busy_irq(struct bcm2835_host *host, u32 intmask)
++{
++	const u32 handled = (SDHSTS_REW_TIME_OUT | SDHSTS_CMD_TIME_OUT |
++			     SDHSTS_CRC16_ERROR | SDHSTS_CRC7_ERROR |
++			     SDHSTS_FIFO_ERROR);
++
++	if (!host->cmd) {
++		pr_err("%s: got command busy interrupt 0x%08x even "
++			"though no command operation was in progress.\n",
++			mmc_hostname(host->mmc), (unsigned)intmask);
++		bcm2835_sdhost_dumpregs(host);
++		return 0;
++	}
++
++	if (!host->use_busy) {
++		pr_err("%s: got command busy interrupt 0x%08x even "
++			"though not expecting one.\n",
++			mmc_hostname(host->mmc), (unsigned)intmask);
++		bcm2835_sdhost_dumpregs(host);
++		return 0;
++	}
++	host->use_busy = 0;
++
++	if (intmask & SDHSTS_ERROR_MASK)
++	{
++		pr_err("sdhost_busy_irq: intmask %x, data %p\n", intmask, host->mrq->data);
++		if (intmask & SDHSTS_CRC7_ERROR)
++			host->cmd->error = -EILSEQ;
++		else if (intmask & (SDHSTS_CRC16_ERROR |
++				    SDHSTS_FIFO_ERROR)) {
++			if (host->mrq->data)
++				host->mrq->data->error = -EILSEQ;
++			else
++				host->cmd->error = -EILSEQ;
++		} else if (intmask & SDHSTS_REW_TIME_OUT) {
++			if (host->mrq->data)
++				host->mrq->data->error = -ETIMEDOUT;
++			else
++				host->cmd->error = -ETIMEDOUT;
++		} else if (intmask & SDHSTS_CMD_TIME_OUT)
++			host->cmd->error = -ETIMEDOUT;
++
++		bcm2835_sdhost_dumpregs(host);
++		tasklet_schedule(&host->finish_tasklet);
++	}
++	else
++		bcm2835_sdhost_finish_command(host);
++
++	return handled;
++}
++
++static u32 bcm2835_sdhost_data_irq(struct bcm2835_host *host, u32 intmask)
++{
++	const u32 handled = (SDHSTS_REW_TIME_OUT |
++			     SDHSTS_CRC16_ERROR |
++			     SDHSTS_FIFO_ERROR);
++
++	/* There are no dedicated data/space available interrupt
++	   status bits, so it is necessary to use the single shared
++	   data/space available FIFO status bits. It is therefore not
++	   an error to get here when there is no data transfer in
++	   progress. */
++	if (!host->data)
++		return 0;
++
++	if (intmask & (SDHSTS_CRC16_ERROR |
++		       SDHSTS_FIFO_ERROR |
++		       SDHSTS_REW_TIME_OUT)) {
++		if (intmask & (SDHSTS_CRC16_ERROR |
++			       SDHSTS_FIFO_ERROR))
++			host->data->error = -EILSEQ;
++		else
++			host->data->error = -ETIMEDOUT;
++
++		bcm2835_sdhost_dumpregs(host);
++		tasklet_schedule(&host->finish_tasklet);
++		return handled;
++	}
++
++	/* Use the block interrupt for writes after the first block */
++	if (host->data->flags & MMC_DATA_WRITE) {
++		host->hcfg &= ~(SDHCFG_DATA_IRPT_EN);
++		host->hcfg |= SDHCFG_BLOCK_IRPT_EN;
++		bcm2835_sdhost_write(host, host->hcfg, SDHCFG);
++		if (host->data->error)
++			bcm2835_sdhost_finish_data(host);
++		else
++			bcm2835_sdhost_transfer_pio(host);
++	} else {
++		if (!host->data->error) {
++			bcm2835_sdhost_transfer_pio(host);
++			host->blocks--;
++		}
++		if ((host->blocks == 0) || host->data->error)
++			bcm2835_sdhost_finish_data(host);
++	}
++
++	return handled;
++}
++
++static u32 bcm2835_sdhost_block_irq(struct bcm2835_host *host, u32 intmask)
++{
++	struct dma_chan *dma_chan;
++	u32 dir_data;
++	const u32 handled = (SDHSTS_REW_TIME_OUT |
++			     SDHSTS_CRC16_ERROR |
++			     SDHSTS_FIFO_ERROR);
++
++	if (!host->data) {
++		pr_err("%s: got block interrupt 0x%08x even "
++			"though no data operation was in progress.\n",
++			mmc_hostname(host->mmc), (unsigned)intmask);
++		bcm2835_sdhost_dumpregs(host);
++		return handled;
++	}
++
++	if (intmask & (SDHSTS_CRC16_ERROR |
++		       SDHSTS_FIFO_ERROR |
++		       SDHSTS_REW_TIME_OUT)) {
++		if (intmask & (SDHSTS_CRC16_ERROR |
++			       SDHSTS_FIFO_ERROR))
++			host->data->error = -EILSEQ;
++		else
++			host->data->error = -ETIMEDOUT;
++
++		if (host->debug)
++			bcm2835_sdhost_dumpregs(host);
++		tasklet_schedule(&host->finish_tasklet);
++		return handled;
++	}
++
++	if (!host->use_dma) {
++		BUG_ON(!host->blocks);
++		host->blocks--;
++		if ((host->blocks == 0) || host->data->error) {
++			/* Cancel the timer */
++			del_timer(&host->pio_timer);
++
++			bcm2835_sdhost_finish_data(host);
++		} else {
++			bcm2835_sdhost_transfer_pio(host);
++
++			/* Reset the timer */
++			mod_timer(&host->pio_timer,
++				  jiffies + host->pio_timeout);
++		}
++	} else if (host->data->flags & MMC_DATA_WRITE) {
++		dma_chan = host->dma_chan_tx;
++		dir_data = DMA_TO_DEVICE;
++		dma_unmap_sg(dma_chan->device->dev,
++			     host->data->sg, host->data->sg_len,
++			     dir_data);
++
++		bcm2835_sdhost_finish_data(host);
++	}
++
++	return handled;
++}
++
++
++static irqreturn_t bcm2835_sdhost_irq(int irq, void *dev_id)
++{
++	irqreturn_t result = IRQ_NONE;
++	struct bcm2835_host *host = dev_id;
++	u32 unexpected = 0, early = 0;
++	int loops = 0;
++
++	spin_lock(&host->lock);
++
++	for (loops = 0; loops < 1; loops++) {
++		u32 intmask, handled;
++
++		intmask = bcm2835_sdhost_read(host, SDHSTS);
++		handled = intmask & (SDHSTS_BUSY_IRPT |
++				     SDHSTS_BLOCK_IRPT |
++				     SDHSTS_SDIO_IRPT |
++				     SDHSTS_DATA_FLAG);
++		if ((handled == SDHSTS_DATA_FLAG) &&
++		    (loops == 0) && !host->data) {
++			pr_err("%s: sdhost_irq data interrupt 0x%08x even "
++			       "though no data operation was in progress.\n",
++			       mmc_hostname(host->mmc),
++			       (unsigned)intmask);
++
++			bcm2835_sdhost_dumpregs(host);
++		}
++
++		if (!handled)
++			break;
++
++		if (loops)
++			early |= handled;
++
++		result = IRQ_HANDLED;
++
++		/* Clear all interrupts and notifications */
++		bcm2835_sdhost_write(host, intmask, SDHSTS);
++
++		if (intmask & SDHSTS_BUSY_IRPT)
++			handled |= bcm2835_sdhost_busy_irq(host, intmask);
++
++		/* There is no true data interrupt status bit, so it is
++		   necessary to qualify the data flag with the interrupt
++		   enable bit */
++		if ((intmask & SDHSTS_DATA_FLAG) &&
++		    (host->hcfg & SDHCFG_DATA_IRPT_EN))
++			handled |= bcm2835_sdhost_data_irq(host, intmask);
++
++		if (intmask & SDHSTS_BLOCK_IRPT)
++			handled |= bcm2835_sdhost_block_irq(host, intmask);
++
++		if (intmask & SDHSTS_SDIO_IRPT) {
++			bcm2835_sdhost_enable_sdio_irq_nolock(host, false);
++			host->thread_isr |= SDHSTS_SDIO_IRPT;
++			result = IRQ_WAKE_THREAD;
++		}
++
++		unexpected |= (intmask & ~handled);
++	}
++
++	mmiowb();
++
++	spin_unlock(&host->lock);
++
++	if (early)
++		pr_debug("%s: early %x (loops %d)\n",
++			 mmc_hostname(host->mmc), early, loops);
++
++	if (unexpected) {
++		pr_err("%s: unexpected interrupt 0x%08x.\n",
++			   mmc_hostname(host->mmc), unexpected);
++		bcm2835_sdhost_dumpregs(host);
++	}
++
++	return result;
++}
++
++static irqreturn_t bcm2835_sdhost_thread_irq(int irq, void *dev_id)
++{
++	struct bcm2835_host *host = dev_id;
++	unsigned long flags;
++	u32 isr;
++
++	spin_lock_irqsave(&host->lock, flags);
++	isr = host->thread_isr;
++	host->thread_isr = 0;
++	spin_unlock_irqrestore(&host->lock, flags);
++
++	if (isr & SDHSTS_SDIO_IRPT) {
++		sdio_run_irqs(host->mmc);
++
++/* Is this necessary? Why re-enable an interrupt which is enabled?
++		spin_lock_irqsave(&host->lock, flags);
++		if (host->flags & SDHSTS_SDIO_IRPT_ENABLED)
++			bcm2835_sdhost_enable_sdio_irq_nolock(host, true);
++		spin_unlock_irqrestore(&host->lock, flags);
++*/
++	}
++
++	return isr ? IRQ_HANDLED : IRQ_NONE;
++}
++
++
++
++void bcm2835_sdhost_set_clock(struct bcm2835_host *host, unsigned int clock)
++{
++	int div = 0; /* Initialized for compiler warning */
++	unsigned int input_clock = clock;
++
++	if (host->debug)
++		pr_info("%s: set_clock(%d)\n", mmc_hostname(host->mmc), clock);
++
++	if ((host->overclock_50 > 50) &&
++	    (clock == 50*MHZ)) {
++		clock = host->overclock_50 * MHZ + (MHZ - 1);
++	}
++
++	/* The SDCDIV register has 11 bits, and holds (div - 2).
++	   But in data mode the max is 50MHz without a minimum, and only the
++	   bottom 3 bits are used. Since the switchover is automatic (unless
++	   we have marked the card as slow...), chosen values have to make
++	   sense in both modes.
++	   Ident mode must be 100-400KHz, so we can range-check the requested
++	   clock. CMD15 must be used to return to data mode, so this can be
++	   monitored.
++
++	   clock 250MHz -> 0->125MHz, 1->83.3MHz, 2->62.5MHz, 3->50.0MHz
++			   4->41.7MHz, 5->35.7MHz, 6->31.3MHz, 7->27.8MHz
++
++			 623 -> 400KHz ident / 27.8MHz data
++			 reset value (507) -> 491159Hz ident / 50MHz data
++
++	   BUT, the 3-bit clock divisor in data mode is too small if the
++	   core clock is higher than 250MHz, so instead use the SLOW_CARD
++	   configuration bit to force the use of the ident clock divisor
++	   at all times.
++	*/
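++	/* Worked example (250MHz core clock, 50MHz requested):
++	   div = 250/50 = 5, minus 2 gives an SDCDIV value of 3, and the
++	   actual clock becomes 250MHz / (3 + 2) = 50MHz. */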
++
++	host->mmc->actual_clock = 0;
++
++	if (clock < 100000) {
++	    /* Can't stop the clock, but make it as slow as possible
++	     * to show willing
++	     */
++	    host->cdiv = SDCDIV_MAX_CDIV;
++	    bcm2835_sdhost_write(host, host->cdiv, SDCDIV);
++	    return;
++	}
++
++	div = host->max_clk / clock;
++	if (div < 2)
++		div = 2;
++	if ((host->max_clk / div) > clock)
++		div++;
++	div -= 2;
++
++	if (div > SDCDIV_MAX_CDIV)
++	    div = SDCDIV_MAX_CDIV;
++
++	clock = host->max_clk / (div + 2);
++	host->mmc->actual_clock = clock;
++
++	if (clock > input_clock) {
++		/* Save the closest value, to make it easier
++		   to reduce in the event of error */
++		host->overclock_50 = (clock/MHZ);
++
++		if (clock != host->overclock) {
++			pr_warn("%s: overclocking to %dHz\n",
++				mmc_hostname(host->mmc), clock);
++			host->overclock = clock;
++		}
++	}
++	else if (host->overclock)
++	{
++		host->overclock = 0;
++		if (clock == 50 * MHZ)
++			pr_warn("%s: cancelling overclock\n",
++				mmc_hostname(host->mmc));
++	}
++
++	host->cdiv = div;
++	bcm2835_sdhost_write(host, host->cdiv, SDCDIV);
++
++	/* Set the timeout to 500ms */
++	bcm2835_sdhost_write(host, host->mmc->actual_clock/2, SDTOUT);
++
++	if (host->debug)
++		pr_info("%s: clock=%d -> max_clk=%d, cdiv=%x (actual clock %d)\n",
++			mmc_hostname(host->mmc), input_clock,
++			host->max_clk, host->cdiv, host->mmc->actual_clock);
++}
++
++static void bcm2835_sdhost_request(struct mmc_host *mmc, struct mmc_request *mrq)
++{
++	struct bcm2835_host *host;
++	unsigned long flags;
++
++	host = mmc_priv(mmc);
++
++	if (host->debug) {
++		struct mmc_command *cmd = mrq->cmd;
++		BUG_ON(!cmd);
++		if (cmd->data)
++			pr_info("%s: cmd %d 0x%x (flags 0x%x) - %s %d*%d\n",
++				mmc_hostname(mmc),
++				cmd->opcode, cmd->arg, cmd->flags,
++				(cmd->data->flags & MMC_DATA_READ) ?
++				"read" : "write", cmd->data->blocks,
++				cmd->data->blksz);
++		else
++			pr_info("%s: cmd %d 0x%x (flags 0x%x)\n",
++				mmc_hostname(mmc),
++				cmd->opcode, cmd->arg, cmd->flags);
++	}
++
++	/* Reset the error statuses in case this is a retry */
++	if (mrq->cmd)
++		mrq->cmd->error = 0;
++	if (mrq->data)
++		mrq->data->error = 0;
++	if (mrq->stop)
++		mrq->stop->error = 0;
++
++	if (mrq->data && !is_power_of_2(mrq->data->blksz)) {
++		pr_err("%s: unsupported block size (%d bytes)\n",
++		       mmc_hostname(mmc), mrq->data->blksz);
++		mrq->cmd->error = -EINVAL;
++		mmc_request_done(mmc, mrq);
++		return;
++	}
++
++	spin_lock_irqsave(&host->lock, flags);
++
++	WARN_ON(host->mrq != NULL);
++
++	host->mrq = mrq;
++
++	if (mrq->sbc)
++		bcm2835_sdhost_send_command(host, mrq->sbc);
++	else
++		bcm2835_sdhost_send_command(host, mrq->cmd);
++
++	mmiowb();
++	spin_unlock_irqrestore(&host->lock, flags);
++
++	if (!mrq->sbc && mrq->cmd->data && host->use_dma)
++		/* DMA transfer starts now, PIO starts after irq */
++		bcm2835_sdhost_transfer_dma(host);
++
++	if (!host->use_busy)
++		bcm2835_sdhost_finish_command(host);
++}
++
++
++static void bcm2835_sdhost_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
++{
++
++	struct bcm2835_host *host = mmc_priv(mmc);
++	unsigned long flags;
++
++	if (host->debug)
++		pr_info("%s: ios clock %d, pwr %d, bus_width %d, "
++			"timing %d, signal_voltage %d, drv_type %d\n",
++			mmc_hostname(mmc),
++			ios->clock, ios->power_mode, ios->bus_width,
++			ios->timing, ios->signal_voltage, ios->drv_type);
++
++	spin_lock_irqsave(&host->lock, flags);
++
++	if (!ios->clock || ios->clock != host->clock) {
++		bcm2835_sdhost_set_clock(host, ios->clock);
++		host->clock = ios->clock;
++	}
++
++	/* set bus width */
++	host->hcfg &= ~SDHCFG_WIDE_EXT_BUS;
++	if (ios->bus_width == MMC_BUS_WIDTH_4)
++		host->hcfg |= SDHCFG_WIDE_EXT_BUS;
++
++	host->hcfg |= SDHCFG_WIDE_INT_BUS;
++
++	/* Disable clever clock switching, to cope with fast core clocks */
++	host->hcfg |= SDHCFG_SLOW_CARD;
++
++	bcm2835_sdhost_write(host, host->hcfg, SDHCFG);
++
++	mmiowb();
++
++	spin_unlock_irqrestore(&host->lock, flags);
++}
++
++static int bcm2835_sdhost_multi_io_quirk(struct mmc_card *card,
++					 unsigned int direction,
++					 u32 blk_pos, int blk_size)
++{
++	/* There is a bug in the host controller hardware that makes
++	   reading the final sector of the card as part of a multiple read
++	   problematic. Detect that case and shorten the read accordingly.
++	*/
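++	/* e.g. a 16-block read that would end exactly on the card's final
++	   sector is trimmed to 15 blocks here. */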
++	/* csd.capacity is in units of 2^read_blkbits-byte blocks - convert
++	   to 512-byte sectors */
++	u32 card_sectors = (card->csd.capacity << (card->csd.read_blkbits - 9));
++
++	if ((direction == MMC_DATA_READ) &&
++	    ((blk_pos + blk_size) == card_sectors))
++		blk_size--;
++
++	return blk_size;
++}
++
++
++static struct mmc_host_ops bcm2835_sdhost_ops = {
++	.request = bcm2835_sdhost_request,
++	.set_ios = bcm2835_sdhost_set_ios,
++	.enable_sdio_irq = bcm2835_sdhost_enable_sdio_irq,
++	.hw_reset = bcm2835_sdhost_reset,
++	.multi_io_quirk = bcm2835_sdhost_multi_io_quirk,
++};
++
++
++static void bcm2835_sdhost_tasklet_finish(unsigned long param)
++{
++	struct bcm2835_host *host;
++	unsigned long flags;
++	struct mmc_request *mrq;
++
++	host = (struct bcm2835_host *)param;
++
++	spin_lock_irqsave(&host->lock, flags);
++
++	/*
++	 * If this tasklet gets rescheduled while running, it will
++	 * be run again afterwards but without any active request.
++	 */
++	if (!host->mrq) {
++		spin_unlock_irqrestore(&host->lock, flags);
++		return;
++	}
++
++	del_timer(&host->timer);
++
++	mrq = host->mrq;
++
++	/* Drop the overclock after any data corruption, or after any
++	   error while overclocked */
++	if (host->overclock) {
++		if ((mrq->cmd && mrq->cmd->error) ||
++		    (mrq->data && mrq->data->error) ||
++		    (mrq->stop && mrq->stop->error)) {
++			host->overclock_50--;
++			pr_warn("%s: reducing overclock due to errors\n",
++				mmc_hostname(host->mmc));
++			bcm2835_sdhost_set_clock(host, 50*MHZ);
++			mrq->cmd->error = -EILSEQ;
++			mrq->cmd->retries = 1;
++		}
++	}
++
++	host->mrq = NULL;
++	host->cmd = NULL;
++	host->data = NULL;
++
++	mmiowb();
++
++	spin_unlock_irqrestore(&host->lock, flags);
++	mmc_request_done(host->mmc, mrq);
++}
++
++
++
++int bcm2835_sdhost_add_host(struct bcm2835_host *host)
++{
++	struct mmc_host *mmc;
++	struct dma_slave_config cfg;
++	char pio_limit_string[20];
++	int ret;
++
++	mmc = host->mmc;
++
++	bcm2835_sdhost_reset_internal(host);
++
++	mmc->f_max = host->max_clk;
++	mmc->f_min = host->max_clk / SDCDIV_MAX_CDIV;
++
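++	/* The longest busy wait (in ms) whose cycle count at f_max still
++	   fits in a 32-bit value */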
++	mmc->max_busy_timeout = (~(unsigned int)0) / (mmc->f_max / 1000);
++
++	pr_debug("f_max %d, f_min %d, max_busy_timeout %d\n",
++		 mmc->f_max, mmc->f_min, mmc->max_busy_timeout);
++
++	/* host controller capabilities */
++	mmc->caps |= /* MMC_CAP_SDIO_IRQ |*/ MMC_CAP_4_BIT_DATA |
++		MMC_CAP_SD_HIGHSPEED | MMC_CAP_MMC_HIGHSPEED |
++		MMC_CAP_NEEDS_POLL | MMC_CAP_HW_RESET | MMC_CAP_ERASE |
++		(ALLOW_CMD23 * MMC_CAP_CMD23);
++
++	spin_lock_init(&host->lock);
++
++	if (host->allow_dma) {
++		if (IS_ERR_OR_NULL(host->dma_chan_tx) ||
++		    IS_ERR_OR_NULL(host->dma_chan_rx)) {
++			pr_err("%s: unable to initialise DMA channels. "
++			       "Falling back to PIO\n",
++			       mmc_hostname(mmc));
++			host->have_dma = false;
++		} else {
++			host->have_dma = true;
++
++			cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
++			cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
++			cfg.slave_id = 13;		/* DREQ channel */
++
++			cfg.direction = DMA_MEM_TO_DEV;
++			cfg.src_addr = 0;
++			cfg.dst_addr = host->bus_addr + SDDATA;
++			ret = dmaengine_slave_config(host->dma_chan_tx, &cfg);
++
++			cfg.direction = DMA_DEV_TO_MEM;
++			cfg.src_addr = host->bus_addr + SDDATA;
++			cfg.dst_addr = 0;
++			ret = dmaengine_slave_config(host->dma_chan_rx, &cfg);
++		}
++	} else {
++		host->have_dma = false;
++	}
++
++	mmc->max_segs = 128;
++	mmc->max_req_size = 524288;
++	mmc->max_seg_size = mmc->max_req_size;
++	mmc->max_blk_size = 512;
++	mmc->max_blk_count = 65535;
++
++	/* report supported voltage ranges */
++	mmc->ocr_avail = MMC_VDD_32_33 | MMC_VDD_33_34;
++
++	tasklet_init(&host->finish_tasklet,
++		bcm2835_sdhost_tasklet_finish, (unsigned long)host);
++
++	setup_timer(&host->timer, bcm2835_sdhost_timeout,
++		    (unsigned long)host);
++
++	setup_timer(&host->pio_timer, bcm2835_sdhost_pio_timeout,
++		    (unsigned long)host);
++
++	bcm2835_sdhost_init(host, 0);
++	ret = request_threaded_irq(host->irq, bcm2835_sdhost_irq,
++				   bcm2835_sdhost_thread_irq,
++				   IRQF_SHARED,	mmc_hostname(mmc), host);
++	if (ret) {
++		pr_err("%s: failed to request IRQ %d: %d\n",
++		       mmc_hostname(mmc), host->irq, ret);
++		goto untasklet;
++	}
++
++	mmiowb();
++	mmc_add_host(mmc);
++
++	pio_limit_string[0] = '\0';
++	if (host->have_dma && (host->pio_limit > 0))
++		sprintf(pio_limit_string, " (>%d)", host->pio_limit);
++	pr_info("%s: %s loaded - DMA %s%s\n",
++		mmc_hostname(mmc), DRIVER_NAME,
++		host->have_dma ? "enabled" : "disabled",
++		pio_limit_string);
++
++	return 0;
++
++untasklet:
++	tasklet_kill(&host->finish_tasklet);
++
++	return ret;
++}
++
++static int bcm2835_sdhost_probe(struct platform_device *pdev)
++{
++	struct device *dev = &pdev->dev;
++	struct device_node *node = dev->of_node;
++	struct clk *clk;
++	struct resource *iomem;
++	struct bcm2835_host *host;
++	struct mmc_host *mmc;
++	const __be32 *addr;
++	int ret;
++
++	pr_debug("bcm2835_sdhost_probe\n");
++	mmc = mmc_alloc_host(sizeof(*host), dev);
++	if (!mmc)
++		return -ENOMEM;
++
++	mmc->ops = &bcm2835_sdhost_ops;
++	host = mmc_priv(mmc);
++	host->mmc = mmc;
++	host->pio_timeout = msecs_to_jiffies(500);
++	host->max_delay = 1; /* Warn if over 1ms */
++	spin_lock_init(&host->lock);
++
++	iomem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++	host->ioaddr = devm_ioremap_resource(dev, iomem);
++	if (IS_ERR(host->ioaddr)) {
++		ret = PTR_ERR(host->ioaddr);
++		goto err;
++	}
++
++	addr = of_get_address(node, 0, NULL, NULL);
++	if (!addr) {
++		dev_err(dev, "could not get DMA-register address\n");
++		ret = -ENODEV;
++		goto err;
++	}
++	host->bus_addr = be32_to_cpup(addr);
++	pr_debug(" - ioaddr %lx, iomem->start %lx, bus_addr %lx\n",
++		 (unsigned long)host->ioaddr,
++		 (unsigned long)iomem->start,
++		 (unsigned long)host->bus_addr);
++
++	host->allow_dma = ALLOW_DMA;
++
++	if (node) {
++		/* Read any custom properties */
++		of_property_read_u32(node,
++				     "brcm,delay-after-stop",
++				     &host->delay_after_stop);
++		of_property_read_u32(node,
++				     "brcm,overclock-50",
++				     &host->overclock_50);
++		of_property_read_u32(node,
++				     "brcm,pio-limit",
++				     &host->pio_limit);
++		host->allow_dma = ALLOW_DMA &&
++			!of_property_read_bool(node, "brcm,force-pio");
++		host->debug = of_property_read_bool(node, "brcm,debug");
++	}
++
++	if (host->allow_dma) {
++		if (node) {
++			host->dma_chan_tx =
++				dma_request_slave_channel(dev, "tx");
++			host->dma_chan_rx =
++				dma_request_slave_channel(dev, "rx");
++		} else {
++			dma_cap_mask_t mask;
++
++			dma_cap_zero(mask);
++			/* we don't care about the channel, any would work */
++			dma_cap_set(DMA_SLAVE, mask);
++			host->dma_chan_tx =
++				dma_request_channel(mask, NULL, NULL);
++			host->dma_chan_rx =
++				dma_request_channel(mask, NULL, NULL);
++		}
++	}
++
++	clk = devm_clk_get(dev, NULL);
++	if (IS_ERR(clk)) {
++		dev_err(dev, "could not get clk\n");
++		ret = PTR_ERR(clk);
++		goto err;
++	}
++
++	host->max_clk = clk_get_rate(clk);
++
++	host->irq = platform_get_irq(pdev, 0);
++	if (host->irq <= 0) {
++		dev_err(dev, "get IRQ failed\n");
++		ret = -EINVAL;
++		goto err;
++	}
++
++	pr_debug(" - max_clk %lx, irq %d\n",
++		 (unsigned long)host->max_clk,
++		 (int)host->irq);
++
++	if (node)
++		mmc_of_parse(mmc);
++	else
++		mmc->caps |= MMC_CAP_4_BIT_DATA;
++
++	ret = bcm2835_sdhost_add_host(host);
++	if (ret)
++		goto err;
++
++	platform_set_drvdata(pdev, host);
++
++	pr_debug("bcm2835_sdhost_probe -> OK\n");
++
++	return 0;
++
++err:
++	pr_debug("bcm2835_sdhost_probe -> err %d\n", ret);
++	mmc_free_host(mmc);
++
++	return ret;
++}
++
++static int bcm2835_sdhost_remove(struct platform_device *pdev)
++{
++	struct bcm2835_host *host = platform_get_drvdata(pdev);
++
++	pr_debug("bcm2835_sdhost_remove\n");
++
++	mmc_remove_host(host->mmc);
++
++	bcm2835_sdhost_set_power(host, false);
++
++	free_irq(host->irq, host);
++
++	del_timer_sync(&host->timer);
++
++	tasklet_kill(&host->finish_tasklet);
++
++	mmc_free_host(host->mmc);
++	platform_set_drvdata(pdev, NULL);
++
++	pr_debug("bcm2835_sdhost_remove - OK\n");
++	return 0;
++}
++
++
++static const struct of_device_id bcm2835_sdhost_match[] = {
++	{ .compatible = "brcm,bcm2835-sdhost" },
++	{ }
++};
++MODULE_DEVICE_TABLE(of, bcm2835_sdhost_match);
++
++
++
++static struct platform_driver bcm2835_sdhost_driver = {
++	.probe      = bcm2835_sdhost_probe,
++	.remove     = bcm2835_sdhost_remove,
++	.driver     = {
++		.name		= DRIVER_NAME,
++		.owner		= THIS_MODULE,
++		.of_match_table	= bcm2835_sdhost_match,
++	},
++};
++module_platform_driver(bcm2835_sdhost_driver);
++
++MODULE_ALIAS("platform:sdhost-bcm2835");
++MODULE_DESCRIPTION("BCM2835 SDHost driver");
++MODULE_LICENSE("GPL v2");
++MODULE_AUTHOR("Phil Elwell");
diff --git a/target/linux/brcm2708/patches-4.4/0035-cma-Add-vc_cma-driver-to-enable-use-of-CMA.patch b/target/linux/brcm2708/patches-4.4/0035-cma-Add-vc_cma-driver-to-enable-use-of-CMA.patch
new file mode 100644
index 0000000..f41db1a
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0035-cma-Add-vc_cma-driver-to-enable-use-of-CMA.patch
@@ -0,0 +1,1326 @@
+From 596a45736af7c7721bd7b8a9cbd04cbcd6c11b53 Mon Sep 17 00:00:00 2001
+From: popcornmix <popcornmix at gmail.com>
+Date: Wed, 3 Jul 2013 00:31:47 +0100
+Subject: [PATCH 035/127] cma: Add vc_cma driver to enable use of CMA
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+Signed-off-by: popcornmix <popcornmix at gmail.com>
+
+vc_cma: Make the vc_cma area the default contiguous DMA area
+
+vc_cma: Provide empty functions when module is not built
+
+Providing empty functions saves the users from guarding the
+function call with an #if clause.
+Move __init markings from prototypes to functions.
+
+Signed-off-by: Noralf Trønnes <noralf at tronnes.org>
+---
+ drivers/char/Kconfig                  |    2 +
+ drivers/char/Makefile                 |    1 +
+ drivers/char/broadcom/Kconfig         |   15 +
+ drivers/char/broadcom/Makefile        |    1 +
+ drivers/char/broadcom/vc_cma/Makefile |   14 +
+ drivers/char/broadcom/vc_cma/vc_cma.c | 1193 +++++++++++++++++++++++++++++++++
+ include/linux/broadcom/vc_cma.h       |   36 +
+ 7 files changed, 1262 insertions(+)
+ create mode 100644 drivers/char/broadcom/Kconfig
+ create mode 100644 drivers/char/broadcom/Makefile
+ create mode 100644 drivers/char/broadcom/vc_cma/Makefile
+ create mode 100644 drivers/char/broadcom/vc_cma/vc_cma.c
+ create mode 100644 include/linux/broadcom/vc_cma.h
+
+--- a/drivers/char/Kconfig
++++ b/drivers/char/Kconfig
+@@ -4,6 +4,8 @@
+ 
+ menu "Character devices"
+ 
++source "drivers/char/broadcom/Kconfig"
++
+ source "drivers/tty/Kconfig"
+ 
+ config DEVMEM
+--- a/drivers/char/Makefile
++++ b/drivers/char/Makefile
+@@ -60,3 +60,4 @@ js-rtc-y = rtc.o
+ 
+ obj-$(CONFIG_TILE_SROM)		+= tile-srom.o
+ obj-$(CONFIG_XILLYBUS)		+= xillybus/
++obj-$(CONFIG_BRCM_CHAR_DRIVERS) += broadcom/
+--- /dev/null
++++ b/drivers/char/broadcom/Kconfig
+@@ -0,0 +1,15 @@
++#
++# Broadcom char driver config
++#
++
++menuconfig BRCM_CHAR_DRIVERS
++	bool "Broadcom Char Drivers"
++	help
++	  Broadcom's char drivers
++
++config BCM_VC_CMA
++	bool "Videocore CMA"
++	depends on CMA && BRCM_CHAR_DRIVERS && BCM2708_VCHIQ
++	default n
++	help
++	  Helper for videocore CMA access.
+--- /dev/null
++++ b/drivers/char/broadcom/Makefile
+@@ -0,0 +1 @@
++obj-$(CONFIG_BCM_VC_CMA)	+= vc_cma/
+--- /dev/null
++++ b/drivers/char/broadcom/vc_cma/Makefile
+@@ -0,0 +1,14 @@
++ccflags-y  += -Wall -Wstrict-prototypes -Wno-trigraphs
++ccflags-y  += -Werror
++ccflags-y  += -Iinclude/linux/broadcom
++ccflags-y  += -Idrivers/misc/vc04_services
++ccflags-y  += -Idrivers/misc/vc04_services/interface/vchi
++ccflags-y  += -Idrivers/misc/vc04_services/interface/vchiq_arm
++
++ccflags-y  += -D__KERNEL__
++ccflags-y  += -D__linux__
++ccflags-y  += -Werror
++
++obj-$(CONFIG_BCM_VC_CMA) += vc-cma.o
++
++vc-cma-objs := vc_cma.o
+--- /dev/null
++++ b/drivers/char/broadcom/vc_cma/vc_cma.c
+@@ -0,0 +1,1193 @@
++/**
++ * Copyright (c) 2010-2012 Broadcom. All rights reserved.
++ *
++ * Redistribution and use in source and binary forms, with or without
++ * modification, are permitted provided that the following conditions
++ * are met:
++ * 1. Redistributions of source code must retain the above copyright
++ *    notice, this list of conditions, and the following disclaimer,
++ *    without modification.
++ * 2. Redistributions in binary form must reproduce the above copyright
++ *    notice, this list of conditions and the following disclaimer in the
++ *    documentation and/or other materials provided with the distribution.
++ * 3. The names of the above-listed copyright holders may not be used
++ *    to endorse or promote products derived from this software without
++ *    specific prior written permission.
++ *
++ * ALTERNATIVELY, this software may be distributed under the terms of the
++ * GNU General Public License ("GPL") version 2, as published by the Free
++ * Software Foundation.
++ *
++ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
++ * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
++ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
++ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
++ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
++ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
++ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
++ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
++ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
++ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
++ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
++ */
++
++#include <linux/kernel.h>
++#include <linux/module.h>
++#include <linux/kthread.h>
++#include <linux/fs.h>
++#include <linux/device.h>
++#include <linux/cdev.h>
++#include <linux/mm.h>
++#include <linux/proc_fs.h>
++#include <linux/seq_file.h>
++#include <linux/dma-mapping.h>
++#include <linux/dma-contiguous.h>
++#include <linux/platform_device.h>
++#include <linux/uaccess.h>
++#include <asm/cacheflush.h>
++
++#include "vc_cma.h"
++
++#include "vchiq_util.h"
++#include "vchiq_connected.h"
++//#include "debug_sym.h"
++//#include "vc_mem.h"
++
++#define DRIVER_NAME  "vc-cma"
++
++#define LOG_DBG(fmt, ...) \
++	do { \
++		if (vc_cma_debug) \
++			printk(KERN_INFO fmt "\n", ##__VA_ARGS__); \
++	} while (0)
++#define LOG_INFO(fmt, ...) \
++	printk(KERN_INFO fmt "\n", ##__VA_ARGS__)
++#define LOG_ERR(fmt, ...) \
++	printk(KERN_ERR fmt "\n", ##__VA_ARGS__)
++
++#define VC_CMA_FOURCC VCHIQ_MAKE_FOURCC('C', 'M', 'A', ' ')
++#define VC_CMA_VERSION 2
++
++#define VC_CMA_CHUNK_ORDER 6	/* 256K */
++#define VC_CMA_CHUNK_SIZE (4096 << VC_CMA_CHUNK_ORDER)
++#define VC_CMA_MAX_PARAMS_PER_MSG \
++	((VCHIQ_MAX_MSG_SIZE - sizeof(unsigned short))/sizeof(unsigned short))
++#define VC_CMA_RESERVE_COUNT_MAX 16
++
++#define PAGES_PER_CHUNK (VC_CMA_CHUNK_SIZE / PAGE_SIZE)
++
++#define VCADDR_TO_PHYSADDR(vcaddr) (mm_vc_mem_phys_addr + vcaddr)
++
++#define loud_error(...) \
++	LOG_ERR("===== " __VA_ARGS__)
++
++enum {
++	VC_CMA_MSG_QUIT,
++	VC_CMA_MSG_OPEN,
++	VC_CMA_MSG_TICK,
++	VC_CMA_MSG_ALLOC,	/* chunk count */
++	VC_CMA_MSG_FREE,	/* chunk, chunk, ... */
++	VC_CMA_MSG_ALLOCATED,	/* chunk, chunk, ... */
++	VC_CMA_MSG_REQUEST_ALLOC,	/* chunk count */
++	VC_CMA_MSG_REQUEST_FREE,	/* chunk count */
++	VC_CMA_MSG_RESERVE,	/* bytes lo, bytes hi */
++	VC_CMA_MSG_UPDATE_RESERVE,
++	VC_CMA_MSG_MAX
++};
++
++struct cma_msg {
++	unsigned short type;
++	unsigned short params[VC_CMA_MAX_PARAMS_PER_MSG];
++};
++
++struct vc_cma_reserve_user {
++	unsigned int pid;
++	unsigned int reserve;
++};
++
++/* Device (/dev) related variables */
++static dev_t vc_cma_devnum;
++static struct class *vc_cma_class;
++static struct cdev vc_cma_cdev;
++static int vc_cma_inited;
++static int vc_cma_debug;
++
++/* Proc entry */
++static struct proc_dir_entry *vc_cma_proc_entry;
++
++phys_addr_t vc_cma_base;
++struct page *vc_cma_base_page;
++unsigned int vc_cma_size;
++EXPORT_SYMBOL(vc_cma_size);
++unsigned int vc_cma_initial;
++unsigned int vc_cma_chunks;
++unsigned int vc_cma_chunks_used;
++unsigned int vc_cma_chunks_reserved;
++
++
++void *vc_cma_dma_alloc;
++unsigned int vc_cma_dma_size;
++
++static int in_loud_error;
++
++unsigned int vc_cma_reserve_total;
++unsigned int vc_cma_reserve_count;
++struct vc_cma_reserve_user vc_cma_reserve_users[VC_CMA_RESERVE_COUNT_MAX];
++static DEFINE_SEMAPHORE(vc_cma_reserve_mutex);
++static DEFINE_SEMAPHORE(vc_cma_worker_queue_push_mutex);
++
++static u64 vc_cma_dma_mask = DMA_BIT_MASK(32);
++static struct platform_device vc_cma_device = {
++	.name = "vc-cma",
++	.id = 0,
++	.dev = {
++		.dma_mask = &vc_cma_dma_mask,
++		.coherent_dma_mask = DMA_BIT_MASK(32),
++		},
++};
++
++static VCHIQ_INSTANCE_T cma_instance;
++static VCHIQ_SERVICE_HANDLE_T cma_service;
++static VCHIU_QUEUE_T cma_msg_queue;
++static struct task_struct *cma_worker;
++
++static int vc_cma_set_reserve(unsigned int reserve, unsigned int pid);
++static int vc_cma_alloc_chunks(int num_chunks, struct cma_msg *reply);
++static VCHIQ_STATUS_T cma_service_callback(VCHIQ_REASON_T reason,
++					   VCHIQ_HEADER_T * header,
++					   VCHIQ_SERVICE_HANDLE_T service,
++					   void *bulk_userdata);
++static void send_vc_msg(unsigned short type,
++			unsigned short param1, unsigned short param2);
++static bool send_worker_msg(VCHIQ_HEADER_T * msg);
++
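++/* Parse the "vc-cma-mem=<initial>[/<size>][@<base>]" kernel command line
++   option; a single value sets both the initial and total size */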
++static int early_vc_cma_mem(char *p)
++{
++	unsigned int new_size;
++	printk(KERN_NOTICE "early_vc_cma_mem(%s)", p);
++	vc_cma_size = memparse(p, &p);
++	vc_cma_initial = vc_cma_size;
++	if (*p == '/')
++		vc_cma_size = memparse(p + 1, &p);
++	if (*p == '@')
++		vc_cma_base = memparse(p + 1, &p);
++
++	new_size = (vc_cma_size - ((-vc_cma_base) & (VC_CMA_CHUNK_SIZE - 1)))
++	    & ~(VC_CMA_CHUNK_SIZE - 1);
++	if (new_size > vc_cma_size)
++		vc_cma_size = 0;
++	vc_cma_initial = (vc_cma_initial + VC_CMA_CHUNK_SIZE - 1)
++	    & ~(VC_CMA_CHUNK_SIZE - 1);
++	if (vc_cma_initial > vc_cma_size)
++		vc_cma_initial = vc_cma_size;
++	vc_cma_base = (vc_cma_base + VC_CMA_CHUNK_SIZE - 1)
++	    & ~(VC_CMA_CHUNK_SIZE - 1);
++
++	printk(KERN_NOTICE " -> initial %x, size %x, base %x", vc_cma_initial,
++	       vc_cma_size, (unsigned int)vc_cma_base);
++
++	return 0;
++}
++
++early_param("vc-cma-mem", early_vc_cma_mem);
++
++void __init vc_cma_early_init(void)
++{
++	LOG_DBG("vc_cma_early_init - vc_cma_chunks = %d", vc_cma_chunks);
++	if (vc_cma_size) {
++		int rc = platform_device_register(&vc_cma_device);
++		LOG_DBG("platform_device_register -> %d", rc);
++	}
++}
++
++void __init vc_cma_reserve(void)
++{
++	/* if vc_cma_size is set, then declare vc CMA area of the same
++	 * size from the end of memory
++	 */
++	if (vc_cma_size) {
++		if (dma_declare_contiguous(&vc_cma_device.dev, vc_cma_size,
++					   vc_cma_base, 0) == 0) {
++			if (!dev_get_cma_area(NULL)) {
++				/* There is no default CMA area - make this
++				   the default */
++				struct cma *vc_cma_area = dev_get_cma_area(
++					&vc_cma_device.dev);
++				dma_contiguous_set_default(vc_cma_area);
++				LOG_INFO("vc_cma_reserve - using vc_cma as "
++					 "the default contiguous DMA area");
++			}
++		} else {
++			LOG_ERR("vc_cma: dma_declare_contiguous(%x,%x) failed",
++				vc_cma_size, (unsigned int)vc_cma_base);
++			vc_cma_size = 0;
++		}
++	}
++	vc_cma_chunks = vc_cma_size / VC_CMA_CHUNK_SIZE;
++}
++
++/****************************************************************************
++*
++*   vc_cma_open
++*
++***************************************************************************/
++
++static int vc_cma_open(struct inode *inode, struct file *file)
++{
++	(void)inode;
++	(void)file;
++
++	return 0;
++}
++
++/****************************************************************************
++*
++*   vc_cma_release
++*
++***************************************************************************/
++
++static int vc_cma_release(struct inode *inode, struct file *file)
++{
++	(void)inode;
++	(void)file;
++
++	vc_cma_set_reserve(0, current->tgid);
++
++	return 0;
++}
++
++/****************************************************************************
++*
++*   vc_cma_ioctl
++*
++***************************************************************************/
++
++static long vc_cma_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
++{
++	int rc = 0;
++
++	(void)cmd;
++	(void)arg;
++
++	switch (cmd) {
++	case VC_CMA_IOC_RESERVE:
++		rc = vc_cma_set_reserve((unsigned int)arg, current->tgid);
++		if (rc >= 0)
++			rc = 0;
++		break;
++	default:
++		LOG_ERR("vc-cma: Unknown ioctl %x", cmd);
++		return -ENOTTY;
++	}
++
++	return rc;
++}
++
++/****************************************************************************
++*
++*   File Operations for the driver.
++*
++***************************************************************************/
++
++static const struct file_operations vc_cma_fops = {
++	.owner = THIS_MODULE,
++	.open = vc_cma_open,
++	.release = vc_cma_release,
++	.unlocked_ioctl = vc_cma_ioctl,
++};
++
++/****************************************************************************
++*
++*   vc_cma_proc_open
++*
++***************************************************************************/
++
++static int vc_cma_show_info(struct seq_file *m, void *v)
++{
++	int i;
++
++	seq_printf(m, "Videocore CMA:\n");
++	seq_printf(m, "   Base       : %08x\n", (unsigned int)vc_cma_base);
++	seq_printf(m, "   Length     : %08x\n", vc_cma_size);
++	seq_printf(m, "   Initial    : %08x\n", vc_cma_initial);
++	seq_printf(m, "   Chunk size : %08x\n", VC_CMA_CHUNK_SIZE);
++	seq_printf(m, "   Chunks     : %4d (%d bytes)\n",
++		   (int)vc_cma_chunks,
++		   (int)(vc_cma_chunks * VC_CMA_CHUNK_SIZE));
++	seq_printf(m, "   Used       : %4d (%d bytes)\n",
++		   (int)vc_cma_chunks_used,
++		   (int)(vc_cma_chunks_used * VC_CMA_CHUNK_SIZE));
++	seq_printf(m, "   Reserved   : %4d (%d bytes)\n",
++		   (unsigned int)vc_cma_chunks_reserved,
++		   (int)(vc_cma_chunks_reserved * VC_CMA_CHUNK_SIZE));
++
++	for (i = 0; i < vc_cma_reserve_count; i++) {
++		struct vc_cma_reserve_user *user = &vc_cma_reserve_users[i];
++		seq_printf(m, "     PID %5d: %d bytes\n", user->pid,
++			   user->reserve);
++	}
++	seq_printf(m, "   dma_alloc  : %p (%d pages)\n",
++		   vc_cma_dma_alloc ? page_address(vc_cma_dma_alloc) : 0,
++		   vc_cma_dma_size);
++
++	seq_printf(m, "\n");
++
++	return 0;
++}
++
++static int vc_cma_proc_open(struct inode *inode, struct file *file)
++{
++	return single_open(file, vc_cma_show_info, NULL);
++}
++
++/****************************************************************************
++*
++*   vc_cma_proc_write
++*
++***************************************************************************/
++
++static int vc_cma_proc_write(struct file *file,
++			     const char __user *buffer,
++			     size_t size, loff_t *ppos)
++{
++	int rc = -EFAULT;
++	char input_str[20];
++
++	memset(input_str, 0, sizeof(input_str));
++
++	if (size > sizeof(input_str)) {
++		LOG_ERR("%s: input string length too long", __func__);
++		goto out;
++	}
++
++	if (copy_from_user(input_str, buffer, size - 1)) {
++		LOG_ERR("%s: failed to get input string", __func__);
++		goto out;
++	}
++#define ALLOC_STR "alloc"
++#define FREE_STR "free"
++#define DEBUG_STR "debug"
++#define RESERVE_STR "reserve"
++#define DMA_ALLOC_STR "dma_alloc"
++#define DMA_FREE_STR "dma_free"
++	if (strncmp(input_str, ALLOC_STR, strlen(ALLOC_STR)) == 0) {
++		int alloc_size;
++		char *p = input_str + strlen(ALLOC_STR);
++
++		while (*p == ' ')
++			p++;
++		alloc_size = memparse(p, NULL);
++		LOG_INFO("/proc/vc-cma: alloc %d", alloc_size);
++		if (alloc_size)
++			send_vc_msg(VC_CMA_MSG_REQUEST_FREE,
++				    alloc_size / VC_CMA_CHUNK_SIZE, 0);
++		else
++			LOG_ERR("invalid size '%s'", p);
++		rc = size;
++	} else if (strncmp(input_str, FREE_STR, strlen(FREE_STR)) == 0) {
++		int alloc_size;
++		char *p = input_str + strlen(FREE_STR);
++
++		while (*p == ' ')
++			p++;
++		alloc_size = memparse(p, NULL);
++		LOG_INFO("/proc/vc-cma: free %d", alloc_size);
++		if (alloc_size)
++			send_vc_msg(VC_CMA_MSG_REQUEST_ALLOC,
++				    alloc_size / VC_CMA_CHUNK_SIZE, 0);
++		else
++			LOG_ERR("invalid size '%s'", p);
++		rc = size;
++	} else if (strncmp(input_str, DEBUG_STR, strlen(DEBUG_STR)) == 0) {
++		char *p = input_str + strlen(DEBUG_STR);
++		while (*p == ' ')
++			p++;
++		if ((strcmp(p, "on") == 0) || (strcmp(p, "1") == 0))
++			vc_cma_debug = 1;
++		else if ((strcmp(p, "off") == 0) || (strcmp(p, "0") == 0))
++			vc_cma_debug = 0;
++		LOG_INFO("/proc/vc-cma: debug %s", vc_cma_debug ? "on" : "off");
++		rc = size;
++	} else if (strncmp(input_str, RESERVE_STR, strlen(RESERVE_STR)) == 0) {
++		int alloc_size;
++		int reserved;
++		char *p = input_str + strlen(RESERVE_STR);
++		while (*p == ' ')
++			p++;
++		alloc_size = memparse(p, NULL);
++
++		reserved = vc_cma_set_reserve(alloc_size, current->tgid);
++		rc = (reserved >= 0) ? size : reserved;
++	} else if (strncmp(input_str, DMA_ALLOC_STR, strlen(DMA_ALLOC_STR)) == 0) {
++		int alloc_size;
++		char *p = input_str + strlen(DMA_ALLOC_STR);
++		while (*p == ' ')
++			p++;
++		alloc_size = memparse(p, NULL);
++
++		if (vc_cma_dma_alloc) {
++		    dma_release_from_contiguous(NULL, vc_cma_dma_alloc,
++						vc_cma_dma_size);
++		    vc_cma_dma_alloc = NULL;
++		    vc_cma_dma_size = 0;
++		}
++		vc_cma_dma_alloc = dma_alloc_from_contiguous(NULL, alloc_size, 0);
++		vc_cma_dma_size = (vc_cma_dma_alloc ? alloc_size : 0);
++		if (vc_cma_dma_alloc)
++			LOG_INFO("dma_alloc(%d pages) -> %p", alloc_size, page_address(vc_cma_dma_alloc));
++		else
++			LOG_ERR("dma_alloc(%d pages) failed", alloc_size);
++		rc = size;
++	} else if (strncmp(input_str, DMA_FREE_STR, strlen(DMA_FREE_STR)) == 0) {
++		if (vc_cma_dma_alloc) {
++		    dma_release_from_contiguous(NULL, vc_cma_dma_alloc,
++						vc_cma_dma_size);
++		    vc_cma_dma_alloc = NULL;
++		    vc_cma_dma_size = 0;
++		}
++		rc = size;
++	}
++
++out:
++	return rc;
++}
++
++/****************************************************************************
++*
++*   File Operations for /proc interface.
++*
++***************************************************************************/
++
++static const struct file_operations vc_cma_proc_fops = {
++	.open = vc_cma_proc_open,
++	.read = seq_read,
++	.write = vc_cma_proc_write,
++	.llseek = seq_lseek,
++	.release = single_release
++};
++
++static int vc_cma_set_reserve(unsigned int reserve, unsigned int pid)
++{
++	struct vc_cma_reserve_user *user = NULL;
++	int delta = 0;
++	int i;
++
++	if (down_interruptible(&vc_cma_reserve_mutex))
++		return -ERESTARTSYS;
++
++	for (i = 0; i < vc_cma_reserve_count; i++) {
++		if (pid == vc_cma_reserve_users[i].pid) {
++			user = &vc_cma_reserve_users[i];
++			delta = reserve - user->reserve;
++			if (reserve)
++				user->reserve = reserve;
++			else {
++				/* Remove this entry by copying downwards */
++				while ((i + 1) < vc_cma_reserve_count) {
++					user[0].pid = user[1].pid;
++					user[0].reserve = user[1].reserve;
++					user++;
++					i++;
++				}
++				vc_cma_reserve_count--;
++				user = NULL;
++			}
++			break;
++		}
++	}
++
++	if (reserve && !user) {
++		if (vc_cma_reserve_count == VC_CMA_RESERVE_COUNT_MAX) {
++			LOG_ERR("vc-cma: Too many reservations - "
++				"increase VC_CMA_RESERVE_COUNT_MAX");
++			up(&vc_cma_reserve_mutex);
++			return -EBUSY;
++		}
++		user = &vc_cma_reserve_users[vc_cma_reserve_count];
++		user->pid = pid;
++		user->reserve = reserve;
++		delta = reserve;
++		vc_cma_reserve_count++;
++	}
++
++	vc_cma_reserve_total += delta;
++
++	send_vc_msg(VC_CMA_MSG_RESERVE,
++		    vc_cma_reserve_total & 0xffff, vc_cma_reserve_total >> 16);
++
++	send_worker_msg((VCHIQ_HEADER_T *) VC_CMA_MSG_UPDATE_RESERVE);
++
++	LOG_DBG("/proc/vc-cma: reserve %d (PID %d) - total %u",
++		reserve, pid, vc_cma_reserve_total);
++
++	up(&vc_cma_reserve_mutex);
++
++	return vc_cma_reserve_total;
++}
++
++static VCHIQ_STATUS_T cma_service_callback(VCHIQ_REASON_T reason,
++					   VCHIQ_HEADER_T * header,
++					   VCHIQ_SERVICE_HANDLE_T service,
++					   void *bulk_userdata)
++{
++	switch (reason) {
++	case VCHIQ_MESSAGE_AVAILABLE:
++		if (!send_worker_msg(header))
++			return VCHIQ_RETRY;
++		break;
++	case VCHIQ_SERVICE_CLOSED:
++		LOG_DBG("CMA service closed");
++		break;
++	default:
++		LOG_ERR("Unexpected CMA callback reason %d", reason);
++		break;
++	}
++	return VCHIQ_SUCCESS;
++}
++
++static void send_vc_msg(unsigned short type,
++			unsigned short param1, unsigned short param2)
++{
++	unsigned short msg[] = { type, param1, param2 };
++	VCHIQ_ELEMENT_T elem = { &msg, sizeof(msg) };
++	VCHIQ_STATUS_T ret;
++	vchiq_use_service(cma_service);
++	ret = vchiq_queue_message(cma_service, &elem, 1);
++	vchiq_release_service(cma_service);
++	if (ret != VCHIQ_SUCCESS)
++		LOG_ERR("vchiq_queue_message returned %x", ret);
++}
++
++static bool send_worker_msg(VCHIQ_HEADER_T * msg)
++{
++	if (down_interruptible(&vc_cma_worker_queue_push_mutex))
++		return false;
++	vchiu_queue_push(&cma_msg_queue, msg);
++	up(&vc_cma_worker_queue_push_mutex);
++	return true;
++}
++
++static int vc_cma_alloc_chunks(int num_chunks, struct cma_msg *reply)
++{
++	int i;
++	for (i = 0; i < num_chunks; i++) {
++		struct page *chunk;
++		unsigned int chunk_num;
++		uint8_t *chunk_addr;
++		size_t chunk_size = PAGES_PER_CHUNK << PAGE_SHIFT;
++
++		chunk = dma_alloc_from_contiguous(&vc_cma_device.dev,
++						  PAGES_PER_CHUNK,
++						  VC_CMA_CHUNK_ORDER);
++		if (!chunk)
++			break;
++
++		chunk_addr = page_address(chunk);
++		dmac_flush_range(chunk_addr, chunk_addr + chunk_size);
++		outer_inv_range(__pa(chunk_addr), __pa(chunk_addr) +
++			chunk_size);
++
++		chunk_num =
++		    (page_to_phys(chunk) - vc_cma_base) / VC_CMA_CHUNK_SIZE;
++		BUG_ON(((page_to_phys(chunk) - vc_cma_base) %
++			VC_CMA_CHUNK_SIZE) != 0);
++		if (chunk_num >= vc_cma_chunks) {
++			phys_addr_t _pa = vc_cma_base + vc_cma_size - 1;
++			LOG_ERR("%s: ===============================",
++				__func__);
++			LOG_ERR("%s: chunk phys %x, vc_cma %pa-%pa - "
++				"bad SPARSEMEM configuration?",
++				__func__, (unsigned int)page_to_phys(chunk),
++				&vc_cma_base, &_pa);
++			LOG_ERR("%s: dev->cma_area = %p", __func__,
++				(void*)0/*vc_cma_device.dev.cma_area*/);
++			LOG_ERR("%s: ===============================",
++				__func__);
++			break;
++		}
++		reply->params[i] = chunk_num;
++		vc_cma_chunks_used++;
++	}
++
++	if (i < num_chunks) {
++		LOG_ERR("%s: dma_alloc_from_contiguous failed "
++			"for %x bytes (alloc %d of %d, %d free)",
++			__func__, VC_CMA_CHUNK_SIZE, i,
++			num_chunks, vc_cma_chunks - vc_cma_chunks_used);
++		num_chunks = i;
++	}
++
++	LOG_DBG("CMA allocated %d chunks -> %d used",
++		num_chunks, vc_cma_chunks_used);
++	reply->type = VC_CMA_MSG_ALLOCATED;
++
++	{
++		VCHIQ_ELEMENT_T elem = {
++			reply,
++			offsetof(struct cma_msg, params[0]) +
++			    num_chunks * sizeof(reply->params[0])
++		};
++		VCHIQ_STATUS_T ret;
++		vchiq_use_service(cma_service);
++		ret = vchiq_queue_message(cma_service, &elem, 1);
++		vchiq_release_service(cma_service);
++		if (ret != VCHIQ_SUCCESS)
++			LOG_ERR("vchiq_queue_message returned %x", ret);
++	}
++
++	return num_chunks;
++}
++
++static int cma_worker_proc(void *param)
++{
++	static struct cma_msg reply;
++	(void)param;
++
++	while (1) {
++		VCHIQ_HEADER_T *msg;
++		static struct cma_msg msg_copy;
++		struct cma_msg *cma_msg = &msg_copy;
++		int type, msg_size;
++
++		msg = vchiu_queue_pop(&cma_msg_queue);
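++		/* The queue carries both bare control codes (values below
++		   VC_CMA_MSG_MAX, pushed by send_worker_msg) and real VCHIQ
++		   message pointers; tell them apart by value */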
++		if ((unsigned int)msg >= VC_CMA_MSG_MAX) {
++			msg_size = msg->size;
++			memcpy(&msg_copy, msg->data, msg_size);
++			type = cma_msg->type;
++			vchiq_release_message(cma_service, msg);
++		} else {
++			msg_size = 0;
++			type = (int)msg;
++			if (type == VC_CMA_MSG_QUIT)
++				break;
++			else if (type == VC_CMA_MSG_UPDATE_RESERVE) {
++				msg = NULL;
++				cma_msg = NULL;
++			} else {
++				BUG();
++				continue;
++			}
++		}
++
++		switch (type) {
++		case VC_CMA_MSG_ALLOC:{
++				int num_chunks, free_chunks;
++				num_chunks = cma_msg->params[0];
++				free_chunks =
++				    vc_cma_chunks - vc_cma_chunks_used;
++				LOG_DBG("CMA_MSG_ALLOC(%d chunks)", num_chunks);
++				if (num_chunks > VC_CMA_MAX_PARAMS_PER_MSG) {
++					LOG_ERR
++					    ("CMA_MSG_ALLOC - chunk count (%d) "
++					     "exceeds VC_CMA_MAX_PARAMS_PER_MSG (%d)",
++					     num_chunks,
++					     VC_CMA_MAX_PARAMS_PER_MSG);
++					num_chunks = VC_CMA_MAX_PARAMS_PER_MSG;
++				}
++
++				if (num_chunks > free_chunks) {
++					LOG_ERR
++					    ("CMA_MSG_ALLOC - chunk count (%d) "
++					     "exceeds free chunks (%d)",
++					     num_chunks, free_chunks);
++					num_chunks = free_chunks;
++				}
++
++				vc_cma_alloc_chunks(num_chunks, &reply);
++			}
++			break;
++
++		case VC_CMA_MSG_FREE:{
++				int chunk_count =
++				    (msg_size -
++				     offsetof(struct cma_msg,
++					      params)) /
++				    sizeof(cma_msg->params[0]);
++				int i;
++				BUG_ON(chunk_count <= 0);
++
++				LOG_DBG("CMA_MSG_FREE(%d chunks - %x, ...)",
++					chunk_count, cma_msg->params[0]);
++				for (i = 0; i < chunk_count; i++) {
++					int chunk_num = cma_msg->params[i];
++					struct page *page = vc_cma_base_page +
++					    chunk_num * PAGES_PER_CHUNK;
++					if (chunk_num >= vc_cma_chunks) {
++						LOG_ERR
++						    ("CMA_MSG_FREE - chunk %d of %d"
++						     " (value %x) exceeds maximum "
++						     "(%x)", i, chunk_count,
++						     chunk_num,
++						     vc_cma_chunks - 1);
++						break;
++					}
++
++					if (!dma_release_from_contiguous
++					    (&vc_cma_device.dev, page,
++					     PAGES_PER_CHUNK)) {
++						phys_addr_t _pa = page_to_phys(page);
++						LOG_ERR
++						    ("CMA_MSG_FREE - failed to "
++						     "release chunk %d (phys %pa, "
++						     "page %x)", chunk_num,
++						     &_pa,
++						     (unsigned int)page);
++					}
++					vc_cma_chunks_used--;
++				}
++				LOG_DBG("CMA released %d chunks -> %d used",
++					i, vc_cma_chunks_used);
++			}
++			break;
++
++		case VC_CMA_MSG_UPDATE_RESERVE:{
++				int chunks_needed =
++				    ((vc_cma_reserve_total + VC_CMA_CHUNK_SIZE -
++				      1)
++				     / VC_CMA_CHUNK_SIZE) -
++				    vc_cma_chunks_reserved;
++
++				LOG_DBG
++				    ("CMA_MSG_UPDATE_RESERVE(%d chunks needed)",
++				     chunks_needed);
++
++				/* Cap the reservations to what is available */
++				if (chunks_needed > 0) {
++					if (chunks_needed >
++					    (vc_cma_chunks -
++					     vc_cma_chunks_used))
++						chunks_needed =
++						    (vc_cma_chunks -
++						     vc_cma_chunks_used);
++
++					chunks_needed =
++					    vc_cma_alloc_chunks(chunks_needed,
++								&reply);
++				}
++
++				LOG_DBG
++				    ("CMA_MSG_UPDATE_RESERVE(%d chunks allocated)",
++				     chunks_needed);
++				vc_cma_chunks_reserved += chunks_needed;
++			}
++			break;
++
++		default:
++			LOG_ERR("unexpected msg type %d", type);
++			break;
++		}
++	}
++
++	LOG_DBG("quitting...");
++	return 0;
++}
++
++/****************************************************************************
++*
++*   vc_cma_connected_init
++*
++*   This function is called once the videocore has been connected.
++*
++***************************************************************************/
++
++static void vc_cma_connected_init(void)
++{
++	VCHIQ_SERVICE_PARAMS_T service_params;
++
++	LOG_DBG("vc_cma_connected_init");
++
++	if (!vchiu_queue_init(&cma_msg_queue, 16)) {
++		LOG_ERR("could not create CMA msg queue");
++		goto fail_queue;
++	}
++
++	if (vchiq_initialise(&cma_instance) != VCHIQ_SUCCESS)
++		goto fail_vchiq_init;
++
++	vchiq_connect(cma_instance);
++
++	service_params.fourcc = VC_CMA_FOURCC;
++	service_params.callback = cma_service_callback;
++	service_params.userdata = NULL;
++	service_params.version = VC_CMA_VERSION;
++	service_params.version_min = VC_CMA_VERSION;
++
++	if (vchiq_open_service(cma_instance, &service_params,
++			       &cma_service) != VCHIQ_SUCCESS) {
++		LOG_ERR("failed to open service - already in use?");
++		goto fail_vchiq_open;
++	}
++
++	vchiq_release_service(cma_service);
++
++	cma_worker = kthread_create(cma_worker_proc, NULL, "cma_worker");
++	if (IS_ERR(cma_worker)) {
++		LOG_ERR("could not create CMA worker thread");
++		goto fail_worker;
++	}
++	set_user_nice(cma_worker, -20);
++	wake_up_process(cma_worker);
++
++	return;
++
++fail_worker:
++	vchiq_close_service(cma_service);
++fail_vchiq_open:
++	vchiq_shutdown(cma_instance);
++fail_vchiq_init:
++	vchiu_queue_delete(&cma_msg_queue);
++fail_queue:
++	return;
++}
++
++void
++loud_error_header(void)
++{
++	if (in_loud_error)
++		return;
++
++	LOG_ERR("============================================================"
++		"================");
++	LOG_ERR("============================================================"
++		"================");
++	LOG_ERR("=====");
++
++	in_loud_error = 1;
++}
++
++void
++loud_error_footer(void)
++{
++	if (!in_loud_error)
++		return;
++
++	LOG_ERR("=====");
++	LOG_ERR("============================================================"
++		"================");
++	LOG_ERR("============================================================"
++		"================");
++
++	in_loud_error = 0;
++}
++
++#if 1
++static int check_cma_config(void) { return 1; }
++#else
++static int
++read_vc_debug_var(VC_MEM_ACCESS_HANDLE_T handle,
++	const char *symbol,
++	void *buf, size_t bufsize)
++{
++	VC_MEM_ADDR_T vcMemAddr;
++	size_t vcMemSize;
++	uint8_t *mapAddr;
++	off_t  vcMapAddr;
++
++	if (!LookupVideoCoreSymbol(handle, symbol,
++		&vcMemAddr,
++		&vcMemSize)) {
++		loud_error_header();
++		loud_error(
++			"failed to find VC symbol \"%s\".",
++			symbol);
++		loud_error_footer();
++		return 0;
++	}
++
++	if (vcMemSize != bufsize) {
++		loud_error_header();
++		loud_error(
++			"VC symbol \"%s\" is the wrong size.",
++			symbol);
++		loud_error_footer();
++		return 0;
++	}
++
++	vcMapAddr = (off_t)vcMemAddr & VC_MEM_TO_ARM_ADDR_MASK;
++	vcMapAddr += mm_vc_mem_phys_addr;
++	mapAddr = ioremap_nocache(vcMapAddr, vcMemSize);
++	if (mapAddr == 0) {
++		loud_error_header();
++		loud_error(
++			"failed to ioremap \"%s\" @ 0x%x "
++			"(phys: 0x%x, size: %u).",
++			symbol,
++			(unsigned int)vcMapAddr,
++			(unsigned int)vcMemAddr,
++			(unsigned int)vcMemSize);
++		loud_error_footer();
++		return 0;
++	}
++
++	memcpy(buf, mapAddr, bufsize);
++	iounmap(mapAddr);
++
++	return 1;
++}
++
++
++static int
++check_cma_config(void)
++{
++	VC_MEM_ACCESS_HANDLE_T mem_hndl;
++	VC_MEM_ADDR_T mempool_start;
++	VC_MEM_ADDR_T mempool_end;
++	VC_MEM_ADDR_T mempool_offline_start;
++	VC_MEM_ADDR_T mempool_offline_end;
++	VC_MEM_ADDR_T cam_alloc_base;
++	VC_MEM_ADDR_T cam_alloc_size;
++	VC_MEM_ADDR_T cam_alloc_end;
++	int success = 0;
++
++	if (OpenVideoCoreMemory(&mem_hndl) != 0)
++		goto out;
++
++	/* Read the relevant VideoCore variables */
++	if (!read_vc_debug_var(mem_hndl, "__MEMPOOL_START",
++		&mempool_start,
++		sizeof(mempool_start)))
++		goto close;
++
++	if (!read_vc_debug_var(mem_hndl, "__MEMPOOL_END",
++		&mempool_end,
++		sizeof(mempool_end)))
++		goto close;
++
++	if (!read_vc_debug_var(mem_hndl, "__MEMPOOL_OFFLINE_START",
++		&mempool_offline_start,
++		sizeof(mempool_offline_start)))
++		goto close;
++
++	if (!read_vc_debug_var(mem_hndl, "__MEMPOOL_OFFLINE_END",
++		&mempool_offline_end,
++		sizeof(mempool_offline_end)))
++		goto close;
++
++	if (!read_vc_debug_var(mem_hndl, "cam_alloc_base",
++		&cam_alloc_base,
++		sizeof(cam_alloc_base)))
++		goto close;
++
++	if (!read_vc_debug_var(mem_hndl, "cam_alloc_size",
++		&cam_alloc_size,
++		sizeof(cam_alloc_size)))
++		goto close;
++
++	cam_alloc_end = cam_alloc_base + cam_alloc_size;
++
++	success = 1;
++
++	/* Now the sanity checks */
++	if (!mempool_offline_start)
++		mempool_offline_start = mempool_start;
++	if (!mempool_offline_end)
++		mempool_offline_end = mempool_end;
++
++	if (VCADDR_TO_PHYSADDR(mempool_offline_start) != vc_cma_base) {
++		loud_error_header();
++		loud_error(
++			"__MEMPOOL_OFFLINE_START(%x -> %lx) doesn't match "
++			"vc_cma_base(%x)",
++			mempool_offline_start,
++			VCADDR_TO_PHYSADDR(mempool_offline_start),
++			vc_cma_base);
++		success = 0;
++	}
++
++	if (VCADDR_TO_PHYSADDR(mempool_offline_end) !=
++		(vc_cma_base + vc_cma_size)) {
++		loud_error_header();
++		loud_error(
++			"__MEMPOOL_OFFLINE_END(%x -> %lx) doesn't match "
++			"vc_cma_base(%x) + vc_cma_size(%x) = %x",
++			mempool_offline_end,
++			VCADDR_TO_PHYSADDR(mempool_offline_end),
++			vc_cma_base, vc_cma_size, vc_cma_base + vc_cma_size);
++		success = 0;
++	}
++
++	if (mempool_end < mempool_start) {
++		loud_error_header();
++		loud_error(
++			"__MEMPOOL_END(%x) must not be before "
++			"__MEMPOOL_START(%x)",
++			mempool_end,
++			mempool_start);
++		success = 0;
++	}
++
++	if (mempool_offline_end < mempool_offline_start) {
++		loud_error_header();
++		loud_error(
++			"__MEMPOOL_OFFLINE_END(%x) must not be before "
++			"__MEMPOOL_OFFLINE_START(%x)",
++			mempool_offline_end,
++			mempool_offline_start);
++		success = 0;
++	}
++
++	if (mempool_offline_start < mempool_start) {
++		loud_error_header();
++		loud_error(
++			"__MEMPOOL_OFFLINE_START(%x) must not be before "
++			"__MEMPOOL_START(%x)",
++			mempool_offline_start,
++			mempool_start);
++		success = 0;
++	}
++
++	if (mempool_offline_end > mempool_end) {
++		loud_error_header();
++		loud_error(
++			"__MEMPOOL_OFFLINE_END(%x) must not be after "
++			"__MEMPOOL_END(%x)",
++			mempool_offline_end,
++			mempool_end);
++		success = 0;
++	}
++
++	if ((cam_alloc_base < mempool_end) &&
++		(cam_alloc_end > mempool_start)) {
++		loud_error_header();
++		loud_error(
++			"cam_alloc pool(%x-%x) overlaps "
++			"mempool(%x-%x)",
++			cam_alloc_base, cam_alloc_end,
++			mempool_start, mempool_end);
++		success = 0;
++	}
++
++	loud_error_footer();
++
++close:
++	CloseVideoCoreMemory(mem_hndl);
++
++out:
++	return success;
++}
++#endif
++
++static int vc_cma_init(void)
++{
++	int rc = -EFAULT;
++	struct device *dev;
++
++	if (!check_cma_config())
++		goto out_release;
++
++	LOG_INFO("vc-cma: Videocore CMA driver");
++	LOG_INFO("vc-cma: vc_cma_base      = %pa", &vc_cma_base);
++	LOG_INFO("vc-cma: vc_cma_size      = 0x%08x (%u MiB)",
++		 vc_cma_size, vc_cma_size / (1024 * 1024));
++	LOG_INFO("vc-cma: vc_cma_initial   = 0x%08x (%u MiB)",
++		 vc_cma_initial, vc_cma_initial / (1024 * 1024));
++
++	vc_cma_base_page = phys_to_page(vc_cma_base);
++
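++	/* Re-allocate the chunks VideoCore is already using at boot
++	 * (vc_cma_initial bytes) so the kernel's CMA allocator marks them
++	 * as taken and cannot hand them out to anyone else. */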
++	if (vc_cma_chunks) {
++		int chunks_needed = vc_cma_initial / VC_CMA_CHUNK_SIZE;
++
++		for (vc_cma_chunks_used = 0;
++		     vc_cma_chunks_used < chunks_needed; vc_cma_chunks_used++) {
++			struct page *chunk;
++			chunk = dma_alloc_from_contiguous(&vc_cma_device.dev,
++							  PAGES_PER_CHUNK,
++							  VC_CMA_CHUNK_ORDER);
++			if (!chunk)
++				break;
++			BUG_ON(((page_to_phys(chunk) - vc_cma_base) %
++				VC_CMA_CHUNK_SIZE) != 0);
++		}
++		if (vc_cma_chunks_used != chunks_needed) {
++			LOG_ERR("%s: dma_alloc_from_contiguous failed (%d "
++				"bytes, allocation %d of %d)",
++				__func__, VC_CMA_CHUNK_SIZE,
++				vc_cma_chunks_used, chunks_needed);
++			goto out_release;
++		}
++
++		vchiq_add_connected_callback(vc_cma_connected_init);
++	}
++
++	rc = alloc_chrdev_region(&vc_cma_devnum, 0, 1, DRIVER_NAME);
++	if (rc < 0) {
++		LOG_ERR("%s: alloc_chrdev_region failed (rc=%d)", __func__, rc);
++		goto out_release;
++	}
++
++	cdev_init(&vc_cma_cdev, &vc_cma_fops);
++	rc = cdev_add(&vc_cma_cdev, vc_cma_devnum, 1);
++	if (rc != 0) {
++		LOG_ERR("%s: cdev_add failed (rc=%d)", __func__, rc);
++		goto out_unregister;
++	}
++
++	vc_cma_class = class_create(THIS_MODULE, DRIVER_NAME);
++	if (IS_ERR(vc_cma_class)) {
++		rc = PTR_ERR(vc_cma_class);
++		LOG_ERR("%s: class_create failed (rc=%d)", __func__, rc);
++		goto out_cdev_del;
++	}
++
++	dev = device_create(vc_cma_class, NULL, vc_cma_devnum, NULL,
++			    DRIVER_NAME);
++	if (IS_ERR(dev)) {
++		rc = PTR_ERR(dev);
++		LOG_ERR("%s: device_create failed (rc=%d)", __func__, rc);
++		goto out_class_destroy;
++	}
++
++	vc_cma_proc_entry = proc_create(DRIVER_NAME, 0444, NULL, &vc_cma_proc_fops);
++	if (vc_cma_proc_entry == NULL) {
++		rc = -EFAULT;
++		LOG_ERR("%s: proc_create failed", __func__);
++		goto out_device_destroy;
++	}
++
++	vc_cma_inited = 1;
++	return 0;
++
++out_device_destroy:
++	device_destroy(vc_cma_class, vc_cma_devnum);
++
++out_class_destroy:
++	class_destroy(vc_cma_class);
++	vc_cma_class = NULL;
++
++out_cdev_del:
++	cdev_del(&vc_cma_cdev);
++
++out_unregister:
++	unregister_chrdev_region(vc_cma_devnum, 1);
++
++out_release:
++	/* It is tempting to try to clean up by calling
++	   dma_release_from_contiguous for all allocated chunks, but it isn't
++	   a very safe thing to do. If vc_cma_initial is non-zero it is because
++	   VideoCore is already using that memory, so giving it back to Linux
++	   is likely to be fatal.
++	 */
++	return -1;
++}
++
++/****************************************************************************
++*
++*   vc_cma_exit
++*
++***************************************************************************/
++
++static void __exit vc_cma_exit(void)
++{
++	LOG_DBG("%s: called", __func__);
++
++	if (vc_cma_inited) {
++		remove_proc_entry(DRIVER_NAME, NULL);
++		device_destroy(vc_cma_class, vc_cma_devnum);
++		class_destroy(vc_cma_class);
++		cdev_del(&vc_cma_cdev);
++		unregister_chrdev_region(vc_cma_devnum, 1);
++	}
++}
++
++module_init(vc_cma_init);
++module_exit(vc_cma_exit);
++MODULE_LICENSE("GPL");
++MODULE_AUTHOR("Broadcom Corporation");
+--- /dev/null
++++ b/include/linux/broadcom/vc_cma.h
+@@ -0,0 +1,36 @@
++/*****************************************************************************
++* Copyright 2012 Broadcom Corporation.  All rights reserved.
++*
++* Unless you and Broadcom execute a separate written software license
++* agreement governing use of this software, this software is licensed to you
++* under the terms of the GNU General Public License version 2, available at
++* http://www.broadcom.com/licenses/GPLv2.php (the "GPL").
++*
++* Notwithstanding the above, under no circumstances may you combine this
++* software in any way with any other Broadcom software provided under a
++* license other than the GPL, without Broadcom's express prior written
++* consent.
++*****************************************************************************/
++
++#if !defined( VC_CMA_H )
++#define VC_CMA_H
++
++#include <linux/ioctl.h>
++
++#define VC_CMA_IOC_MAGIC 0xc5
++
++#define VC_CMA_IOC_RESERVE _IO(VC_CMA_IOC_MAGIC, 0)
++
++#ifdef __KERNEL__
++
++#ifdef CONFIG_BCM_VC_CMA
++void vc_cma_early_init(void);
++void vc_cma_reserve(void);
++#else
++static inline void vc_cma_early_init(void) { }
++static inline void vc_cma_reserve(void) { }
++#endif
++
++#endif
++
++#endif /* VC_CMA_H */
diff --git a/target/linux/brcm2708/patches-4.4/0036-bcm2708-alsa-sound-driver.patch b/target/linux/brcm2708/patches-4.4/0036-bcm2708-alsa-sound-driver.patch
new file mode 100644
index 0000000..47cb912
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0036-bcm2708-alsa-sound-driver.patch
@@ -0,0 +1,2678 @@
+From d3e64892070a0399dad4cdae9e72bb9c7801b86d Mon Sep 17 00:00:00 2001
+From: popcornmix <popcornmix at gmail.com>
+Date: Mon, 26 Mar 2012 22:15:50 +0100
+Subject: [PATCH 036/127] bcm2708: alsa sound driver
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+Signed-off-by: popcornmix <popcornmix at gmail.com>
+
+alsa: add mmap support and some cleanups to bcm2835 ALSA driver
+
+snd-bcm2835: Add support for spdif/hdmi passthrough
+
+This adds a dedicated subdevice which can be used for passthrough of non-audio
+formats (i.e. encoded A52) through the HDMI audio link. In addition to this
+driver extension, an appropriate card config is required to make alsa-lib
+support the AES parameters for this device.
+
+snd-bcm2708: Add mutex, improve logging
+
+Fix for ALSA driver crash
+
+Avoids an issue when closing and opening vchiq where a message can arrive before the service handle has been written
+
+alsa: reduce severity of expected warning message
+
+snd-bcm2708: Fix dmesg spam for non-error case
+
+alsa: Ensure mutexes are released through error paths
+
+alsa: Make interrupted close paths quieter
+
+BCM270x: Add onboard sound device to Device Tree
+
+Add Device Tree support to alsa driver.
+Add device to Device Tree.
+Don't add platform devices when booting in DT mode.
+
+Signed-off-by: Noralf Trønnes <noralf at tronnes.org>
+---
+ sound/arm/Kconfig                  |   8 +
+ sound/arm/Makefile                 |   5 +
+ sound/arm/bcm2835-ctl.c            | 323 +++++++++++++
+ sound/arm/bcm2835-pcm.c            | 557 +++++++++++++++++++++++
+ sound/arm/bcm2835-vchiq.c          | 902 +++++++++++++++++++++++++++++++++++++
+ sound/arm/bcm2835.c                | 511 +++++++++++++++++++++
+ sound/arm/bcm2835.h                | 167 +++++++
+ sound/arm/vc_vchi_audioserv_defs.h | 116 +++++
+ 8 files changed, 2589 insertions(+)
+ create mode 100755 sound/arm/bcm2835-ctl.c
+ create mode 100755 sound/arm/bcm2835-pcm.c
+ create mode 100755 sound/arm/bcm2835-vchiq.c
+ create mode 100644 sound/arm/bcm2835.c
+ create mode 100755 sound/arm/bcm2835.h
+ create mode 100644 sound/arm/vc_vchi_audioserv_defs.h
+
+--- a/sound/arm/Kconfig
++++ b/sound/arm/Kconfig
+@@ -40,5 +40,13 @@ config SND_PXA2XX_AC97
+ 	  Say Y or M if you want to support any AC97 codec attached to
+ 	  the PXA2xx AC97 interface.
+ 
++config SND_BCM2835
++	tristate "BCM2835 ALSA driver"
++	depends on (ARCH_BCM2708 || ARCH_BCM2709 || ARCH_BCM2835) \
++		   && BCM2708_VCHIQ && SND
++	select SND_PCM
++	help
++	  Say Y or M if you want to support the BCM2835 ALSA PCM card driver.
++
+ endif	# SND_ARM
+ 
+--- a/sound/arm/Makefile
++++ b/sound/arm/Makefile
+@@ -14,3 +14,8 @@ snd-pxa2xx-lib-$(CONFIG_SND_PXA2XX_LIB_A
+ 
+ obj-$(CONFIG_SND_PXA2XX_AC97)	+= snd-pxa2xx-ac97.o
+ snd-pxa2xx-ac97-objs		:= pxa2xx-ac97.o
++
++obj-$(CONFIG_SND_BCM2835)	+= snd-bcm2835.o
++snd-bcm2835-objs		:= bcm2835.o bcm2835-ctl.o bcm2835-pcm.o bcm2835-vchiq.o
++
++ccflags-y += -Idrivers/misc/vc04_services -Idrivers/misc/vc04_services/interface/vcos/linuxkernel -D__VCCOREVER__=0x04000000
+--- /dev/null
++++ b/sound/arm/bcm2835-ctl.c
+@@ -0,0 +1,323 @@
++/*****************************************************************************
++* Copyright 2011 Broadcom Corporation.  All rights reserved.
++*
++* Unless you and Broadcom execute a separate written software license
++* agreement governing use of this software, this software is licensed to you
++* under the terms of the GNU General Public License version 2, available at
++* http://www.broadcom.com/licenses/GPLv2.php (the "GPL").
++*
++* Notwithstanding the above, under no circumstances may you combine this
++* software in any way with any other Broadcom software provided under a
++* license other than the GPL, without Broadcom's express prior written
++* consent.
++*****************************************************************************/
++
++#include <linux/platform_device.h>
++#include <linux/init.h>
++#include <linux/io.h>
++#include <linux/jiffies.h>
++#include <linux/slab.h>
++#include <linux/time.h>
++#include <linux/wait.h>
++#include <linux/delay.h>
++#include <linux/moduleparam.h>
++#include <linux/sched.h>
++
++#include <sound/core.h>
++#include <sound/control.h>
++#include <sound/pcm.h>
++#include <sound/pcm_params.h>
++#include <sound/rawmidi.h>
++#include <sound/initval.h>
++#include <sound/tlv.h>
++#include <sound/asoundef.h>
++
++#include "bcm2835.h"
++
++/* volume maximum and minimum in terms of 0.01dB */
++#define CTRL_VOL_MAX 400
++#define CTRL_VOL_MIN -10239 /* originally -10240 */
++
++
++static int snd_bcm2835_ctl_info(struct snd_kcontrol *kcontrol,
++				struct snd_ctl_elem_info *uinfo)
++{
++	audio_info(" ... IN\n");
++	if (kcontrol->private_value == PCM_PLAYBACK_VOLUME) {
++		uinfo->type = SNDRV_CTL_ELEM_TYPE_INTEGER;
++		uinfo->count = 1;
++		uinfo->value.integer.min = CTRL_VOL_MIN;
++		uinfo->value.integer.max = CTRL_VOL_MAX;      /* 2303 */
++	} else if (kcontrol->private_value == PCM_PLAYBACK_MUTE) {
++		uinfo->type = SNDRV_CTL_ELEM_TYPE_BOOLEAN;
++		uinfo->count = 1;
++		uinfo->value.integer.min = 0;
++		uinfo->value.integer.max = 1;
++	} else if (kcontrol->private_value == PCM_PLAYBACK_DEVICE) {
++		uinfo->type = SNDRV_CTL_ELEM_TYPE_INTEGER;
++		uinfo->count = 1;
++		uinfo->value.integer.min = 0;
++		uinfo->value.integer.max = AUDIO_DEST_MAX-1;
++	}
++	audio_info(" ... OUT\n");
++	return 0;
++}
++
++/* toggles mute on or off depending on the value of nmute, and returns
++ * 1 if the mute value was changed, otherwise 0
++ */
++static int toggle_mute(struct bcm2835_chip *chip, int nmute)
++{
++	/* if settings are ok, just return 0 */
++	if(chip->mute == nmute)
++		return 0;
++
++	/* if the sound is muted then we need to unmute */
++	if(chip->mute == CTRL_VOL_MUTE)
++	{
++		chip->volume = chip->old_volume; /* copy the old volume back */
++		audio_info("Unmuting, old_volume = %d, volume = %d ...\n", chip->old_volume, chip->volume);
++	}
++	else /* otherwise we mute */
++	{
++		chip->old_volume = chip->volume;
++		chip->volume = 26214; /* set volume to minimum level AKA mute */
++		audio_info("Muting, old_volume = %d, volume = %d ...\n", chip->old_volume, chip->volume);
++	}
++
++	chip->mute = nmute;
++	return 1;
++}
++
++static int snd_bcm2835_ctl_get(struct snd_kcontrol *kcontrol,
++			       struct snd_ctl_elem_value *ucontrol)
++{
++	struct bcm2835_chip *chip = snd_kcontrol_chip(kcontrol);
++
++	BUG_ON(!chip && !(chip->avail_substreams & AVAIL_SUBSTREAMS_MASK));
++
++	if (kcontrol->private_value == PCM_PLAYBACK_VOLUME)
++		ucontrol->value.integer.value[0] = chip2alsa(chip->volume);
++	else if (kcontrol->private_value == PCM_PLAYBACK_MUTE)
++		ucontrol->value.integer.value[0] = chip->mute;
++	else if (kcontrol->private_value == PCM_PLAYBACK_DEVICE)
++		ucontrol->value.integer.value[0] = chip->dest;
++
++	return 0;
++}
++
++static int snd_bcm2835_ctl_put(struct snd_kcontrol *kcontrol,
++			       struct snd_ctl_elem_value *ucontrol)
++{
++	struct bcm2835_chip *chip = snd_kcontrol_chip(kcontrol);
++	int changed = 0;
++
++	if (kcontrol->private_value == PCM_PLAYBACK_VOLUME) {
++		audio_info("Volume change attempted.. volume = %d new_volume = %d\n", chip->volume, (int)ucontrol->value.integer.value[0]);
++		if (chip->mute == CTRL_VOL_MUTE) {
++			/* changed = toggle_mute(chip, CTRL_VOL_UNMUTE); */
++			return 1; /* should return 0 to signify no change but the mixer takes this as the opposite sign (no idea why) */
++		}
++		if (changed
++		    || (ucontrol->value.integer.value[0] != chip2alsa(chip->volume))) {
++
++			chip->volume = alsa2chip(ucontrol->value.integer.value[0]);
++			changed = 1;
++		}
++
++	} else if (kcontrol->private_value == PCM_PLAYBACK_MUTE) {
++		/* Now implemented */
++		audio_info(" Mute attempted\n");
++		changed = toggle_mute(chip, ucontrol->value.integer.value[0]);
++
++	} else if (kcontrol->private_value == PCM_PLAYBACK_DEVICE) {
++		if (ucontrol->value.integer.value[0] != chip->dest) {
++			chip->dest = ucontrol->value.integer.value[0];
++			changed = 1;
++		}
++	}
++
++	if (changed) {
++		if (bcm2835_audio_set_ctls(chip))
++			printk(KERN_ERR "Failed to set ALSA controls..\n");
++	}
++
++	return changed;
++}
++
++static DECLARE_TLV_DB_SCALE(snd_bcm2835_db_scale, CTRL_VOL_MIN, 1, 1);
++
++static struct snd_kcontrol_new snd_bcm2835_ctl[] = {
++	{
++	 .iface = SNDRV_CTL_ELEM_IFACE_MIXER,
++	 .name = "PCM Playback Volume",
++	 .index = 0,
++	 .access = SNDRV_CTL_ELEM_ACCESS_READWRITE | SNDRV_CTL_ELEM_ACCESS_TLV_READ,
++	 .private_value = PCM_PLAYBACK_VOLUME,
++	 .info = snd_bcm2835_ctl_info,
++	 .get = snd_bcm2835_ctl_get,
++	 .put = snd_bcm2835_ctl_put,
++	 .count = 1,
++	 .tlv = {.p = snd_bcm2835_db_scale}
++	},
++	{
++	 .iface = SNDRV_CTL_ELEM_IFACE_MIXER,
++	 .name = "PCM Playback Switch",
++	 .index = 0,
++	 .access = SNDRV_CTL_ELEM_ACCESS_READWRITE,
++	 .private_value = PCM_PLAYBACK_MUTE,
++	 .info = snd_bcm2835_ctl_info,
++	 .get = snd_bcm2835_ctl_get,
++	 .put = snd_bcm2835_ctl_put,
++	 .count = 1,
++	 },
++	{
++	 .iface = SNDRV_CTL_ELEM_IFACE_MIXER,
++	 .name = "PCM Playback Route",
++	 .index = 0,
++	 .access = SNDRV_CTL_ELEM_ACCESS_READWRITE,
++	 .private_value = PCM_PLAYBACK_DEVICE,
++	 .info = snd_bcm2835_ctl_info,
++	 .get = snd_bcm2835_ctl_get,
++	 .put = snd_bcm2835_ctl_put,
++	 .count = 1,
++	},
++};
++
++static int snd_bcm2835_spdif_default_info(struct snd_kcontrol *kcontrol,
++					  struct snd_ctl_elem_info *uinfo)
++{
++	uinfo->type = SNDRV_CTL_ELEM_TYPE_IEC958;
++	uinfo->count = 1;
++	return 0;
++}
++
++static int snd_bcm2835_spdif_default_get(struct snd_kcontrol *kcontrol,
++					 struct snd_ctl_elem_value *ucontrol)
++{
++	struct bcm2835_chip *chip = snd_kcontrol_chip(kcontrol);
++	int i;
++
++	for (i = 0; i < 4; i++)
++		ucontrol->value.iec958.status[i] =
++			(chip->spdif_status >> (i * 8)) & 0xff;
++
++	return 0;
++}
++
++static int snd_bcm2835_spdif_default_put(struct snd_kcontrol *kcontrol,
++					 struct snd_ctl_elem_value *ucontrol)
++{
++	struct bcm2835_chip *chip = snd_kcontrol_chip(kcontrol);
++	unsigned int val = 0;
++	int i, change;
++
++	for (i = 0; i < 4; i++)
++		val |= (unsigned int)ucontrol->value.iec958.status[i] << (i * 8);
++
++	change = val != chip->spdif_status;
++	chip->spdif_status = val;
++
++	return change;
++}
++
++static int snd_bcm2835_spdif_mask_info(struct snd_kcontrol *kcontrol,
++				       struct snd_ctl_elem_info *uinfo)
++{
++	uinfo->type = SNDRV_CTL_ELEM_TYPE_IEC958;
++	uinfo->count = 1;
++	return 0;
++}
++
++static int snd_bcm2835_spdif_mask_get(struct snd_kcontrol *kcontrol,
++				      struct snd_ctl_elem_value *ucontrol)
++{
++	/* bcm2835 supports only consumer mode and sets all other format flags
++	 * automatically. So the only thing left is signalling non-audio
++	 * content */
++	ucontrol->value.iec958.status[0] = IEC958_AES0_NONAUDIO;
++	return 0;
++}
++
++static int snd_bcm2835_spdif_stream_info(struct snd_kcontrol *kcontrol,
++					 struct snd_ctl_elem_info *uinfo)
++{
++	uinfo->type = SNDRV_CTL_ELEM_TYPE_IEC958;
++	uinfo->count = 1;
++	return 0;
++}
++
++static int snd_bcm2835_spdif_stream_get(struct snd_kcontrol *kcontrol,
++					struct snd_ctl_elem_value *ucontrol)
++{
++	struct bcm2835_chip *chip = snd_kcontrol_chip(kcontrol);
++	int i;
++
++	for (i = 0; i < 4; i++)
++		ucontrol->value.iec958.status[i] =
++			(chip->spdif_status >> (i * 8)) & 0xff;
++	return 0;
++}
++
++static int snd_bcm2835_spdif_stream_put(struct snd_kcontrol *kcontrol,
++					struct snd_ctl_elem_value *ucontrol)
++{
++	struct bcm2835_chip *chip = snd_kcontrol_chip(kcontrol);
++	unsigned int val = 0;
++	int i, change;
++
++	for (i = 0; i < 4; i++)
++		val |= (unsigned int)ucontrol->value.iec958.status[i] << (i * 8);
++	change = val != chip->spdif_status;
++	chip->spdif_status = val;
++
++	return change;
++}
++
++static struct snd_kcontrol_new snd_bcm2835_spdif[] = {
++	{
++		.iface = SNDRV_CTL_ELEM_IFACE_PCM,
++		.name = SNDRV_CTL_NAME_IEC958("", PLAYBACK, DEFAULT),
++		.info = snd_bcm2835_spdif_default_info,
++		.get = snd_bcm2835_spdif_default_get,
++		.put = snd_bcm2835_spdif_default_put
++	},
++	{
++		.access = SNDRV_CTL_ELEM_ACCESS_READ,
++		.iface = SNDRV_CTL_ELEM_IFACE_PCM,
++		.name = SNDRV_CTL_NAME_IEC958("", PLAYBACK, CON_MASK),
++		.info = snd_bcm2835_spdif_mask_info,
++		.get = snd_bcm2835_spdif_mask_get,
++	},
++	{
++		.access = SNDRV_CTL_ELEM_ACCESS_READWRITE |
++			SNDRV_CTL_ELEM_ACCESS_INACTIVE,
++		.iface = SNDRV_CTL_ELEM_IFACE_PCM,
++		.name = SNDRV_CTL_NAME_IEC958("", PLAYBACK, PCM_STREAM),
++		.info = snd_bcm2835_spdif_stream_info,
++		.get = snd_bcm2835_spdif_stream_get,
++		.put = snd_bcm2835_spdif_stream_put,
++	},
++};
++
++int snd_bcm2835_new_ctl(bcm2835_chip_t * chip)
++{
++	int err;
++	unsigned int idx;
++
++	strcpy(chip->card->mixername, "Broadcom Mixer");
++	for (idx = 0; idx < ARRAY_SIZE(snd_bcm2835_ctl); idx++) {
++		err =
++		    snd_ctl_add(chip->card,
++				snd_ctl_new1(&snd_bcm2835_ctl[idx], chip));
++		if (err < 0)
++			return err;
++	}
++	for (idx = 0; idx < ARRAY_SIZE(snd_bcm2835_spdif); idx++) {
++		err = snd_ctl_add(chip->card,
++				snd_ctl_new1(&snd_bcm2835_spdif[idx], chip));
++		if (err < 0)
++			return err;
++	}
++	return 0;
++}
+--- /dev/null
++++ b/sound/arm/bcm2835-pcm.c
+@@ -0,0 +1,557 @@
++/*****************************************************************************
++* Copyright 2011 Broadcom Corporation.  All rights reserved.
++*
++* Unless you and Broadcom execute a separate written software license
++* agreement governing use of this software, this software is licensed to you
++* under the terms of the GNU General Public License version 2, available at
++* http://www.broadcom.com/licenses/GPLv2.php (the "GPL").
++*
++* Notwithstanding the above, under no circumstances may you combine this
++* software in any way with any other Broadcom software provided under a
++* license other than the GPL, without Broadcom's express prior written
++* consent.
++*****************************************************************************/
++
++#include <linux/interrupt.h>
++#include <linux/slab.h>
++
++#include <sound/asoundef.h>
++
++#include "bcm2835.h"
++
++/* hardware definition */
++static struct snd_pcm_hardware snd_bcm2835_playback_hw = {
++	.info = (SNDRV_PCM_INFO_INTERLEAVED | SNDRV_PCM_INFO_BLOCK_TRANSFER |
++		 SNDRV_PCM_INFO_MMAP | SNDRV_PCM_INFO_MMAP_VALID),
++	.formats = SNDRV_PCM_FMTBIT_U8 | SNDRV_PCM_FMTBIT_S16_LE,
++	.rates = SNDRV_PCM_RATE_CONTINUOUS | SNDRV_PCM_RATE_8000_48000,
++	.rate_min = 8000,
++	.rate_max = 48000,
++	.channels_min = 1,
++	.channels_max = 2,
++	.buffer_bytes_max = 128 * 1024,
++	.period_bytes_min =   1 * 1024,
++	.period_bytes_max = 128 * 1024,
++	.periods_min = 1,
++	.periods_max = 128,
++};
++
++static struct snd_pcm_hardware snd_bcm2835_playback_spdif_hw = {
++	.info = (SNDRV_PCM_INFO_INTERLEAVED | SNDRV_PCM_INFO_BLOCK_TRANSFER |
++		 SNDRV_PCM_INFO_MMAP | SNDRV_PCM_INFO_MMAP_VALID),
++	.formats = SNDRV_PCM_FMTBIT_S16_LE,
++	.rates = SNDRV_PCM_RATE_CONTINUOUS | SNDRV_PCM_RATE_44100 |
++		SNDRV_PCM_RATE_48000,
++	.rate_min = 44100,
++	.rate_max = 48000,
++	.channels_min = 2,
++	.channels_max = 2,
++	.buffer_bytes_max = 128 * 1024,
++	.period_bytes_min =   1 * 1024,
++	.period_bytes_max = 128 * 1024,
++	.periods_min = 1,
++	.periods_max = 128,
++};
++
++static void snd_bcm2835_playback_free(struct snd_pcm_runtime *runtime)
++{
++	audio_info("Freeing up alsa stream here ..\n");
++	if (runtime->private_data)
++		kfree(runtime->private_data);
++	runtime->private_data = NULL;
++}
++
++static irqreturn_t bcm2835_playback_fifo_irq(int irq, void *dev_id)
++{
++	bcm2835_alsa_stream_t *alsa_stream = (bcm2835_alsa_stream_t *) dev_id;
++	uint32_t consumed = 0;
++	int new_period = 0;
++
++	audio_info(" .. IN\n");
++
++	audio_info("alsa_stream=%p substream=%p\n", alsa_stream,
++		   alsa_stream ? alsa_stream->substream : 0);
++
++	if (alsa_stream->open)
++		consumed = bcm2835_audio_retrieve_buffers(alsa_stream);
++
++	/* We only get called when playback has been triggered, so the buffers we
++	 * retrieve in each iteration are the ones that have already been played out.
++	 */
++
++	if (alsa_stream->period_size) {
++		if ((alsa_stream->pos / alsa_stream->period_size) !=
++		    ((alsa_stream->pos + consumed) / alsa_stream->period_size))
++			new_period = 1;
++	}
++	audio_debug("updating pos cur: %d + %d max:%d period_bytes:%d, hw_ptr: %d new_period:%d\n",
++		      alsa_stream->pos,
++		      consumed,
++		      alsa_stream->buffer_size,
++			  (int)(alsa_stream->period_size*alsa_stream->substream->runtime->periods),
++			  frames_to_bytes(alsa_stream->substream->runtime, alsa_stream->substream->runtime->status->hw_ptr),
++			  new_period);
++	if (alsa_stream->buffer_size) {
++		alsa_stream->pos += consumed &~ (1<<30);
++		alsa_stream->pos %= alsa_stream->buffer_size;
++	}
++
++	if (alsa_stream->substream) {
++		if (new_period)
++			snd_pcm_period_elapsed(alsa_stream->substream);
++	} else {
++		audio_warning(" unexpected NULL substream\n");
++	}
++	audio_info(" .. OUT\n");
++
++	return IRQ_HANDLED;
++}
++
++/* open callback */
++static int snd_bcm2835_playback_open_generic(
++		struct snd_pcm_substream *substream, int spdif)
++{
++	bcm2835_chip_t *chip = snd_pcm_substream_chip(substream);
++	struct snd_pcm_runtime *runtime = substream->runtime;
++	bcm2835_alsa_stream_t *alsa_stream;
++	int idx;
++	int err;
++
++	audio_info(" .. IN (%d)\n", substream->number);
++
++	if(mutex_lock_interruptible(&chip->audio_mutex))
++	{
++		audio_error("Interrupted whilst waiting for lock\n");
++		return -EINTR;
++	}
++	audio_info("Alsa open (%d)\n", substream->number);
++	idx = substream->number;
++
++	if (spdif && chip->opened != 0) {
++		err = -EBUSY;
++		goto out;
++	}
++	else if (!spdif && (chip->opened & (1 << idx))) {
++		err = -EBUSY;
++		goto out;
++	}
++	if (idx > MAX_SUBSTREAMS) {
++		audio_error
++		    ("substream(%d) device doesn't exist max(%d) substreams allowed\n",
++		     idx, MAX_SUBSTREAMS);
++		err = -ENODEV;
++		goto out;
++	}
++
++	/* Check if we are ready */
++	if (!(chip->avail_substreams & (1 << idx))) {
++		/* We are not ready yet */
++		audio_error("substream(%d) device is not ready yet\n", idx);
++		err = -EAGAIN;
++		goto out;
++	}
++
++	alsa_stream = kzalloc(sizeof(bcm2835_alsa_stream_t), GFP_KERNEL);
++	if (alsa_stream == NULL) {
++		err = -ENOMEM;
++		goto out;
++	}
++
++	/* Initialise alsa_stream */
++	alsa_stream->chip = chip;
++	alsa_stream->substream = substream;
++	alsa_stream->idx = idx;
++
++	sema_init(&alsa_stream->buffers_update_sem, 0);
++	sema_init(&alsa_stream->control_sem, 0);
++	spin_lock_init(&alsa_stream->lock);
++
++	/* Enabled in start trigger, called on each "fifo irq" after that */
++	alsa_stream->enable_fifo_irq = 0;
++	alsa_stream->fifo_irq_handler = bcm2835_playback_fifo_irq;
++
++	err = bcm2835_audio_open(alsa_stream);
++	if (err != 0) {
++		kfree(alsa_stream);
++		goto out;
++	}
++	runtime->private_data = alsa_stream;
++	runtime->private_free = snd_bcm2835_playback_free;
++	if (spdif) {
++		runtime->hw = snd_bcm2835_playback_spdif_hw;
++	} else {
++		/* clear spdif status, as we are not in spdif mode */
++		chip->spdif_status = 0;
++		runtime->hw = snd_bcm2835_playback_hw;
++	}
++	/* minimum 16 bytes alignment (for vchiq bulk transfers) */
++	snd_pcm_hw_constraint_step(runtime, 0, SNDRV_PCM_HW_PARAM_PERIOD_BYTES,
++				   16);
++
++	chip->alsa_stream[idx] = alsa_stream;
++
++	chip->opened |= (1 << idx);
++	alsa_stream->open = 1;
++	alsa_stream->draining = 1;
++
++out:
++	mutex_unlock(&chip->audio_mutex);
++
++	audio_info(" .. OUT =%d\n", err);
++
++	return err;
++}
++
++static int snd_bcm2835_playback_open(struct snd_pcm_substream *substream)
++{
++	return snd_bcm2835_playback_open_generic(substream, 0);
++}
++
++static int snd_bcm2835_playback_spdif_open(struct snd_pcm_substream *substream)
++{
++	return snd_bcm2835_playback_open_generic(substream, 1);
++}
++
++/* close callback */
++static int snd_bcm2835_playback_close(struct snd_pcm_substream *substream)
++{
++	/* the hardware-specific codes will be here */
++
++	bcm2835_chip_t *chip;
++	struct snd_pcm_runtime *runtime;
++	bcm2835_alsa_stream_t *alsa_stream;
++
++	audio_info(" .. IN\n");
++
++	chip = snd_pcm_substream_chip(substream);
++	if(mutex_lock_interruptible(&chip->audio_mutex))
++	{
++		audio_error("Interrupted whilst waiting for lock\n");
++		return -EINTR;
++	}
++	runtime = substream->runtime;
++	alsa_stream = runtime->private_data;
++
++	audio_info("Alsa close\n");
++
++	/*
++	 * Call stop if it's still running. This happens when app
++	 * is force killed and we don't get a stop trigger.
++	 */
++	if (alsa_stream->running) {
++		int err;
++		err = bcm2835_audio_stop(alsa_stream);
++		alsa_stream->running = 0;
++		if (err != 0)
++			audio_error(" Failed to STOP alsa device\n");
++	}
++
++	alsa_stream->period_size = 0;
++	alsa_stream->buffer_size = 0;
++
++	if (alsa_stream->open) {
++		alsa_stream->open = 0;
++		bcm2835_audio_close(alsa_stream);
++	}
++	if (alsa_stream->chip)
++		alsa_stream->chip->alsa_stream[alsa_stream->idx] = NULL;
++	/*
++	 * Do not free up alsa_stream here, it will be freed up by
++	 * runtime->private_free callback we registered in *_open above
++	 */
++
++	chip->opened &= ~(1 << substream->number);
++
++	mutex_unlock(&chip->audio_mutex);
++	audio_info(" .. OUT\n");
++
++	return 0;
++}
++
++/* hw_params callback */
++static int snd_bcm2835_pcm_hw_params(struct snd_pcm_substream *substream,
++				     struct snd_pcm_hw_params *params)
++{
++	struct snd_pcm_runtime *runtime = substream->runtime;
++	bcm2835_alsa_stream_t *alsa_stream = runtime->private_data;
++	int err;
++
++	audio_info(" .. IN\n");
++
++	err = snd_pcm_lib_malloc_pages(substream, params_buffer_bytes(params));
++	if (err < 0) {
++		audio_error
++		    (" pcm_lib_malloc failed to allocate pages for buffers\n");
++		return err;
++	}
++
++	alsa_stream->channels = params_channels(params);
++	alsa_stream->params_rate = params_rate(params);
++	alsa_stream->pcm_format_width = snd_pcm_format_width(params_format (params));
++	audio_info(" .. OUT\n");
++
++	return err;
++}
++
++/* hw_free callback */
++static int snd_bcm2835_pcm_hw_free(struct snd_pcm_substream *substream)
++{
++	audio_info(" .. IN\n");
++	return snd_pcm_lib_free_pages(substream);
++}
++
++/* prepare callback */
++static int snd_bcm2835_pcm_prepare(struct snd_pcm_substream *substream)
++{
++	bcm2835_chip_t *chip = snd_pcm_substream_chip(substream);
++	struct snd_pcm_runtime *runtime = substream->runtime;
++	bcm2835_alsa_stream_t *alsa_stream = runtime->private_data;
++	int channels;
++	int err;
++
++	audio_info(" .. IN\n");
++
++	/* notify the vchiq that it should enter spdif passthrough mode by
++	 * setting channels=0 (see
++	 * https://github.com/raspberrypi/linux/issues/528) */
++	if (chip->spdif_status & IEC958_AES0_NONAUDIO)
++		channels = 0;
++	else
++		channels = alsa_stream->channels;
++
++	err = bcm2835_audio_set_params(alsa_stream, channels,
++				       alsa_stream->params_rate,
++				       alsa_stream->pcm_format_width);
++	if (err < 0) {
++		audio_error(" error setting hw params\n");
++	}
++
++	bcm2835_audio_setup(alsa_stream);
++
++	/* in preparation of the stream, set the controls (volume level) of the stream */
++	bcm2835_audio_set_ctls(alsa_stream->chip);
++
++
++	memset(&alsa_stream->pcm_indirect, 0, sizeof(alsa_stream->pcm_indirect));
++
++	alsa_stream->pcm_indirect.hw_buffer_size =
++	alsa_stream->pcm_indirect.sw_buffer_size =
++		snd_pcm_lib_buffer_bytes(substream);
++
++	alsa_stream->buffer_size = snd_pcm_lib_buffer_bytes(substream);
++	alsa_stream->period_size = snd_pcm_lib_period_bytes(substream);
++	alsa_stream->pos = 0;
++
++	audio_debug("buffer_size=%d, period_size=%d pos=%d frame_bits=%d\n",
++		      alsa_stream->buffer_size, alsa_stream->period_size,
++		      alsa_stream->pos, runtime->frame_bits);
++
++	audio_info(" .. OUT\n");
++	return 0;
++}
++
++static void snd_bcm2835_pcm_transfer(struct snd_pcm_substream *substream,
++				    struct snd_pcm_indirect *rec, size_t bytes)
++{
++	struct snd_pcm_runtime *runtime = substream->runtime;
++	bcm2835_alsa_stream_t *alsa_stream = runtime->private_data;
++	void *src = (void *)(substream->runtime->dma_area + rec->sw_data);
++	int err;
++
++	err = bcm2835_audio_write(alsa_stream, bytes, src);
++	if (err)
++		audio_error(" Failed to transfer to alsa device (%d)\n", err);
++
++}
++
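++/* ack callback: let the pcm_indirect helper copy any newly queued data from
++ * the ALSA buffer towards VideoCore, calling snd_bcm2835_pcm_transfer() for
++ * each contiguous region. */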
++static int snd_bcm2835_pcm_ack(struct snd_pcm_substream *substream)
++{
++	struct snd_pcm_runtime *runtime = substream->runtime;
++	bcm2835_alsa_stream_t *alsa_stream = runtime->private_data;
++	struct snd_pcm_indirect *pcm_indirect = &alsa_stream->pcm_indirect;
++
++	pcm_indirect->hw_queue_size = runtime->hw.buffer_bytes_max;
++	snd_pcm_indirect_playback_transfer(substream, pcm_indirect,
++					   snd_bcm2835_pcm_transfer);
++	return 0;
++}
++
++/* trigger callback */
++static int snd_bcm2835_pcm_trigger(struct snd_pcm_substream *substream, int cmd)
++{
++	struct snd_pcm_runtime *runtime = substream->runtime;
++	bcm2835_alsa_stream_t *alsa_stream = runtime->private_data;
++	int err = 0;
++
++	audio_info(" .. IN\n");
++
++	switch (cmd) {
++	case SNDRV_PCM_TRIGGER_START:
++		audio_debug("bcm2835_AUDIO_TRIGGER_START running=%d\n",
++			      alsa_stream->running);
++		if (!alsa_stream->running) {
++			err = bcm2835_audio_start(alsa_stream);
++			if (err == 0) {
++				alsa_stream->pcm_indirect.hw_io =
++				alsa_stream->pcm_indirect.hw_data =
++					bytes_to_frames(runtime,
++							alsa_stream->pos);
++				substream->ops->ack(substream);
++				alsa_stream->running = 1;
++				alsa_stream->draining = 1;
++			} else {
++				audio_error(" Failed to START alsa device (%d)\n", err);
++			}
++		}
++		break;
++	case SNDRV_PCM_TRIGGER_STOP:
++		audio_debug
++		    ("bcm2835_AUDIO_TRIGGER_STOP running=%d draining=%d\n",
++			     alsa_stream->running, runtime->status->state == SNDRV_PCM_STATE_DRAINING);
++		if (runtime->status->state == SNDRV_PCM_STATE_DRAINING) {
++			audio_info("DRAINING\n");
++			alsa_stream->draining = 1;
++		} else {
++			audio_info("DROPPING\n");
++			alsa_stream->draining = 0;
++		}
++		if (alsa_stream->running) {
++			err = bcm2835_audio_stop(alsa_stream);
++			if (err != 0)
++				audio_error(" Failed to STOP alsa device (%d)\n", err);
++			alsa_stream->running = 0;
++		}
++		break;
++	default:
++		err = -EINVAL;
++	}
++
++	audio_info(" .. OUT\n");
++	return err;
++}
++
++/* pointer callback */
++static snd_pcm_uframes_t
++snd_bcm2835_pcm_pointer(struct snd_pcm_substream *substream)
++{
++	struct snd_pcm_runtime *runtime = substream->runtime;
++	bcm2835_alsa_stream_t *alsa_stream = runtime->private_data;
++
++	audio_info(" .. IN\n");
++
++	audio_debug("pcm_pointer... (%d) hwptr=%d appl=%d pos=%d\n", 0,
++		      frames_to_bytes(runtime, runtime->status->hw_ptr),
++		      frames_to_bytes(runtime, runtime->control->appl_ptr),
++		      alsa_stream->pos);
++
++	audio_info(" .. OUT\n");
++	return snd_pcm_indirect_playback_pointer(substream,
++						 &alsa_stream->pcm_indirect,
++						 alsa_stream->pos);
++}
++
++static int snd_bcm2835_pcm_lib_ioctl(struct snd_pcm_substream *substream,
++				     unsigned int cmd, void *arg)
++{
++	int ret = snd_pcm_lib_ioctl(substream, cmd, arg);
++	audio_info(" .. substream=%p, cmd=%d, arg=%p (%x) ret=%d\n", substream,
++		    cmd, arg, arg ? *(unsigned *)arg : 0, ret);
++	return ret;
++}
++
++/* operators */
++static struct snd_pcm_ops snd_bcm2835_playback_ops = {
++	.open = snd_bcm2835_playback_open,
++	.close = snd_bcm2835_playback_close,
++	.ioctl = snd_bcm2835_pcm_lib_ioctl,
++	.hw_params = snd_bcm2835_pcm_hw_params,
++	.hw_free = snd_bcm2835_pcm_hw_free,
++	.prepare = snd_bcm2835_pcm_prepare,
++	.trigger = snd_bcm2835_pcm_trigger,
++	.pointer = snd_bcm2835_pcm_pointer,
++	.ack = snd_bcm2835_pcm_ack,
++};
++
++static struct snd_pcm_ops snd_bcm2835_playback_spdif_ops = {
++	.open = snd_bcm2835_playback_spdif_open,
++	.close = snd_bcm2835_playback_close,
++	.ioctl = snd_bcm2835_pcm_lib_ioctl,
++	.hw_params = snd_bcm2835_pcm_hw_params,
++	.hw_free = snd_bcm2835_pcm_hw_free,
++	.prepare = snd_bcm2835_pcm_prepare,
++	.trigger = snd_bcm2835_pcm_trigger,
++	.pointer = snd_bcm2835_pcm_pointer,
++	.ack = snd_bcm2835_pcm_ack,
++};
++
++/* create a pcm device */
++int snd_bcm2835_new_pcm(bcm2835_chip_t * chip)
++{
++	struct snd_pcm *pcm;
++	int err;
++
++	audio_info(" .. IN\n");
++	mutex_init(&chip->audio_mutex);
++	if(mutex_lock_interruptible(&chip->audio_mutex))
++	{
++		audio_error("Interrupted whilst waiting for lock\n");
++		return -EINTR;
++	}
++	err =
++	    snd_pcm_new(chip->card, "bcm2835 ALSA", 0, MAX_SUBSTREAMS, 0, &pcm);
++	if (err < 0)
++		goto out;
++	pcm->private_data = chip;
++	strcpy(pcm->name, "bcm2835 ALSA");
++	chip->pcm = pcm;
++	chip->dest = AUDIO_DEST_AUTO;
++	chip->volume = alsa2chip(0);
++	chip->mute = CTRL_VOL_UNMUTE;	/*disable mute on startup */
++	/* set operators */
++	snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_PLAYBACK,
++			&snd_bcm2835_playback_ops);
++
++	/* pre-allocation of buffers */
++	/* NOTE: this may fail */
++	snd_pcm_lib_preallocate_pages_for_all(pcm, SNDRV_DMA_TYPE_CONTINUOUS,
++					      snd_dma_continuous_data
++					      (GFP_KERNEL), 64 * 1024,
++					      64 * 1024);
++
++out:
++	mutex_unlock(&chip->audio_mutex);
++	audio_info(" .. OUT\n");
++
++	return 0;
++}
++
++int snd_bcm2835_new_spdif_pcm(bcm2835_chip_t * chip)
++{
++	struct snd_pcm *pcm;
++	int err;
++
++	audio_info(" .. IN\n");
++	if(mutex_lock_interruptible(&chip->audio_mutex))
++	{
++		audio_error("Interrupted whilst waiting for lock\n");
++		return -EINTR;
++	}
++	err = snd_pcm_new(chip->card, "bcm2835 ALSA", 1, 1, 0, &pcm);
++	if (err < 0)
++		goto out;
++
++	pcm->private_data = chip;
++	strcpy(pcm->name, "bcm2835 IEC958/HDMI");
++	chip->pcm_spdif = pcm;
++	snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_PLAYBACK,
++			&snd_bcm2835_playback_spdif_ops);
++
++	snd_pcm_lib_preallocate_pages_for_all(pcm, SNDRV_DMA_TYPE_CONTINUOUS,
++					      snd_dma_continuous_data (GFP_KERNEL),
++					      64 * 1024, 64 * 1024);
++out:
++	mutex_unlock(&chip->audio_mutex);
++	audio_info(" .. OUT\n");
++
++	return 0;
++}
+--- /dev/null
++++ b/sound/arm/bcm2835-vchiq.c
+@@ -0,0 +1,902 @@
++/*****************************************************************************
++* Copyright 2011 Broadcom Corporation.  All rights reserved.
++*
++* Unless you and Broadcom execute a separate written software license
++* agreement governing use of this software, this software is licensed to you
++* under the terms of the GNU General Public License version 2, available at
++* http://www.broadcom.com/licenses/GPLv2.php (the "GPL").
++*
++* Notwithstanding the above, under no circumstances may you combine this
++* software in any way with any other Broadcom software provided under a
++* license other than the GPL, without Broadcom's express prior written
++* consent.
++*****************************************************************************/
++
++#include <linux/device.h>
++#include <sound/core.h>
++#include <sound/initval.h>
++#include <sound/pcm.h>
++#include <linux/io.h>
++#include <linux/interrupt.h>
++#include <linux/fs.h>
++#include <linux/file.h>
++#include <linux/mm.h>
++#include <linux/syscalls.h>
++#include <asm/uaccess.h>
++#include <linux/slab.h>
++#include <linux/delay.h>
++#include <linux/atomic.h>
++#include <linux/module.h>
++#include <linux/completion.h>
++
++#include "bcm2835.h"
++
++/* ---- Include Files -------------------------------------------------------- */
++
++#include "interface/vchi/vchi.h"
++#include "vc_vchi_audioserv_defs.h"
++
++/* ---- Private Constants and Types ------------------------------------------ */
++
++#define BCM2835_AUDIO_STOP           0
++#define BCM2835_AUDIO_START          1
++#define BCM2835_AUDIO_WRITE          2
++
++/* Logging macros (for remapping to other logging mechanisms, e.g. printf) */
++#ifdef AUDIO_DEBUG_ENABLE
++	#define LOG_ERR( fmt, arg... )   pr_err( "%s:%d " fmt, __func__, __LINE__, ##arg)
++	#define LOG_WARN( fmt, arg... )  pr_info( "%s:%d " fmt, __func__, __LINE__, ##arg)
++	#define LOG_INFO( fmt, arg... )  pr_info( "%s:%d " fmt, __func__, __LINE__, ##arg)
++	#define LOG_DBG( fmt, arg... )   pr_info( "%s:%d " fmt, __func__, __LINE__, ##arg)
++#else
++	#define LOG_ERR( fmt, arg... )   pr_err( "%s:%d " fmt, __func__, __LINE__, ##arg)
++	#define LOG_WARN( fmt, arg... )
++	#define LOG_INFO( fmt, arg... )
++	#define LOG_DBG( fmt, arg... )
++#endif
++
++typedef struct opaque_AUDIO_INSTANCE_T {
++	uint32_t num_connections;
++	VCHI_SERVICE_HANDLE_T vchi_handle[VCHI_MAX_NUM_CONNECTIONS];
++	struct completion msg_avail_comp;
++	struct mutex vchi_mutex;
++	bcm2835_alsa_stream_t *alsa_stream;
++	int32_t result;
++	short peer_version;
++} AUDIO_INSTANCE_T;
++
++bool force_bulk = false;
++
++/* ---- Private Variables ---------------------------------------------------- */
++
++/* ---- Private Function Prototypes ------------------------------------------ */
++
++/* ---- Private Functions ---------------------------------------------------- */
++
++static int bcm2835_audio_stop_worker(bcm2835_alsa_stream_t * alsa_stream);
++static int bcm2835_audio_start_worker(bcm2835_alsa_stream_t * alsa_stream);
++static int bcm2835_audio_write_worker(bcm2835_alsa_stream_t *alsa_stream,
++				      uint32_t count, void *src);
++
++typedef struct {
++	struct work_struct my_work;
++	bcm2835_alsa_stream_t *alsa_stream;
++	int cmd;
++	void *src;
++	uint32_t count;
++} my_work_t;
++
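++/* Start/stop/write requests are deferred to a workqueue because the ALSA
++ * trigger/ack callbacks may run in atomic context, while the VCHI operations
++ * performed by the workers below can sleep. */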
++static void my_wq_function(struct work_struct *work)
++{
++	my_work_t *w = (my_work_t *) work;
++	int ret = -9;
++	LOG_DBG(" .. IN %p:%d\n", w->alsa_stream, w->cmd);
++	switch (w->cmd) {
++	case BCM2835_AUDIO_START:
++		ret = bcm2835_audio_start_worker(w->alsa_stream);
++		break;
++	case BCM2835_AUDIO_STOP:
++		ret = bcm2835_audio_stop_worker(w->alsa_stream);
++		break;
++	case BCM2835_AUDIO_WRITE:
++		ret = bcm2835_audio_write_worker(w->alsa_stream, w->count,
++						 w->src);
++		break;
++	default:
++		LOG_ERR(" Unexpected work: %p:%d\n", w->alsa_stream, w->cmd);
++		break;
++	}
++	kfree((void *)work);
++	LOG_DBG(" .. OUT %d\n", ret);
++}
++
++int bcm2835_audio_start(bcm2835_alsa_stream_t * alsa_stream)
++{
++	int ret = -1;
++	LOG_DBG(" .. IN\n");
++	if (alsa_stream->my_wq) {
++		my_work_t *work = kmalloc(sizeof(my_work_t), GFP_ATOMIC);
++		/*--- Queue some work (item 1) ---*/
++		if (work) {
++			INIT_WORK((struct work_struct *)work, my_wq_function);
++			work->alsa_stream = alsa_stream;
++			work->cmd = BCM2835_AUDIO_START;
++			if (queue_work
++			    (alsa_stream->my_wq, (struct work_struct *)work))
++				ret = 0;
++		} else
++			LOG_ERR(" .. Error: NULL work kmalloc\n");
++	}
++	LOG_DBG(" .. OUT %d\n", ret);
++	return ret;
++}
++
++int bcm2835_audio_stop(bcm2835_alsa_stream_t * alsa_stream)
++{
++	int ret = -1;
++	LOG_DBG(" .. IN\n");
++	if (alsa_stream->my_wq) {
++		my_work_t *work = kmalloc(sizeof(my_work_t), GFP_ATOMIC);
++		 /*--- Queue some work (item 1) ---*/
++		if (work) {
++			INIT_WORK((struct work_struct *)work, my_wq_function);
++			work->alsa_stream = alsa_stream;
++			work->cmd = BCM2835_AUDIO_STOP;
++			if (queue_work
++			    (alsa_stream->my_wq, (struct work_struct *)work))
++				ret = 0;
++		} else
++			LOG_ERR(" .. Error: NULL work kmalloc\n");
++	}
++	LOG_DBG(" .. OUT %d\n", ret);
++	return ret;
++}
++
++int bcm2835_audio_write(bcm2835_alsa_stream_t *alsa_stream,
++			uint32_t count, void *src)
++{
++	int ret = -1;
++	LOG_DBG(" .. IN\n");
++	if (alsa_stream->my_wq) {
++		my_work_t *work = kmalloc(sizeof(my_work_t), GFP_ATOMIC);
++		 /*--- Queue some work (item 1) ---*/
++		if (work) {
++			INIT_WORK((struct work_struct *)work, my_wq_function);
++			work->alsa_stream = alsa_stream;
++			work->cmd = BCM2835_AUDIO_WRITE;
++			work->src = src;
++			work->count = count;
++			if (queue_work
++			    (alsa_stream->my_wq, (struct work_struct *)work))
++				ret = 0;
++		} else
++			LOG_ERR(" .. Error: NULL work kmalloc\n");
++	}
++	LOG_DBG(" .. OUT %d\n", ret);
++	return ret;
++}
++
++void my_workqueue_init(bcm2835_alsa_stream_t * alsa_stream)
++{
++	alsa_stream->my_wq = alloc_workqueue("my_queue", WQ_HIGHPRI, 1);
++	return;
++}
++
++void my_workqueue_quit(bcm2835_alsa_stream_t * alsa_stream)
++{
++	if (alsa_stream->my_wq) {
++		flush_workqueue(alsa_stream->my_wq);
++		destroy_workqueue(alsa_stream->my_wq);
++		alsa_stream->my_wq = NULL;
++	}
++	return;
++}
++
++static void audio_vchi_callback(void *param,
++				const VCHI_CALLBACK_REASON_T reason,
++				void *msg_handle)
++{
++	AUDIO_INSTANCE_T *instance = (AUDIO_INSTANCE_T *) param;
++	int32_t status;
++	int32_t msg_len;
++	VC_AUDIO_MSG_T m;
++	LOG_DBG(" .. IN instance=%p, handle=%p, alsa=%p, reason=%d, handle=%p\n",
++		instance, instance ? instance->vchi_handle[0] : NULL, instance ? instance->alsa_stream : NULL, reason, msg_handle);
++
++	if (reason != VCHI_CALLBACK_MSG_AVAILABLE) {
++		return;
++	}
++	if (!instance) {
++		LOG_ERR(" .. instance is null\n");
++		BUG();
++		return;
++	}
++	if (!instance->vchi_handle[0]) {
++		LOG_ERR(" .. instance->vchi_handle[0] is null\n");
++		BUG();
++		return;
++	}
++	status = vchi_msg_dequeue(instance->vchi_handle[0],
++				  &m, sizeof m, &msg_len, VCHI_FLAGS_NONE);
++	if (m.type == VC_AUDIO_MSG_TYPE_RESULT) {
++		LOG_DBG
++		    (" .. instance=%p, m.type=VC_AUDIO_MSG_TYPE_RESULT, success=%d\n",
++		     instance, m.u.result.success);
++		instance->result = m.u.result.success;
++		complete(&instance->msg_avail_comp);
++	} else if (m.type == VC_AUDIO_MSG_TYPE_COMPLETE) {
++		bcm2835_alsa_stream_t *alsa_stream = instance->alsa_stream;
++		irq_handler_t callback = (irq_handler_t) m.u.complete.callback;
++		LOG_DBG
++		    (" .. instance=%p, m.type=VC_AUDIO_MSG_TYPE_COMPLETE, complete=%d\n",
++		     instance, m.u.complete.count);
++		if (alsa_stream && callback) {
++			atomic_add(m.u.complete.count, &alsa_stream->retrieved);
++			callback(0, alsa_stream);
++		} else {
++			LOG_ERR(" .. unexpected alsa_stream=%p, callback=%p\n",
++				alsa_stream, callback);
++		}
++	} else {
++		LOG_ERR(" .. unexpected m.type=%d\n", m.type);
++	}
++	LOG_DBG(" .. OUT\n");
++}
++
++static AUDIO_INSTANCE_T *vc_vchi_audio_init(VCHI_INSTANCE_T vchi_instance,
++					    VCHI_CONNECTION_T **
++					    vchi_connections,
++					    uint32_t num_connections)
++{
++	uint32_t i;
++	AUDIO_INSTANCE_T *instance;
++	int status;
++
++	LOG_DBG("%s: start", __func__);
++
++	if (num_connections > VCHI_MAX_NUM_CONNECTIONS) {
++		LOG_ERR("%s: unsupported number of connections %u (max=%u)\n",
++			__func__, num_connections, VCHI_MAX_NUM_CONNECTIONS);
++
++		return NULL;
++	}
++	/* Allocate memory for this instance */
++	instance = kmalloc(sizeof(*instance), GFP_KERNEL);
++	if (!instance)
++		return NULL;
++
++	memset(instance, 0, sizeof(*instance));
++	instance->num_connections = num_connections;
++
++	/* Create a lock for exclusive, serialized VCHI connection access */
++	mutex_init(&instance->vchi_mutex);
++	/* Open the VCHI service connections */
++	for (i = 0; i < num_connections; i++) {
++		SERVICE_CREATION_T params = {
++			VCHI_VERSION_EX(VC_AUDIOSERV_VER, VC_AUDIOSERV_MIN_VER),
++			VC_AUDIO_SERVER_NAME,	// 4cc service code
++			vchi_connections[i],	// passed in fn pointers
++			0,	// rx fifo size (unused)
++			0,	// tx fifo size (unused)
++			audio_vchi_callback,	// service callback
++			instance,	// service callback parameter
++			1,	//TODO: remove VCOS_FALSE,   // unaligned bulk receives
++			1,	//TODO: remove VCOS_FALSE,   // unaligned bulk transmits
++			0	// want crc check on bulk transfers
++		};
++
++		LOG_DBG("%s: about to open %i\n", __func__, i);
++		status = vchi_service_open(vchi_instance, &params,
++					   &instance->vchi_handle[i]);
++		LOG_DBG("%s: opened %i: %p=%d\n", __func__, i, instance->vchi_handle[i], status);
++		if (status) {
++			LOG_ERR
++			    ("%s: failed to open VCHI service connection (status=%d)\n",
++			     __func__, status);
++
++			goto err_close_services;
++		}
++		/* Finished with the service for now */
++		vchi_service_release(instance->vchi_handle[i]);
++	}
++
++	LOG_DBG("%s: okay\n", __func__);
++	return instance;
++
++err_close_services:
++	for (i = 0; i < instance->num_connections; i++) {
++		LOG_ERR("%s: closing %i: %p\n", __func__, i, instance->vchi_handle[i]);
++		if (instance->vchi_handle[i])
++			vchi_service_close(instance->vchi_handle[i]);
++	}
++
++	kfree(instance);
++	LOG_ERR("%s: error\n", __func__);
++
++	return NULL;
++}
++
++static int32_t vc_vchi_audio_deinit(AUDIO_INSTANCE_T * instance)
++{
++	uint32_t i;
++
++	LOG_DBG(" .. IN\n");
++
++	if (instance == NULL) {
++		LOG_ERR("%s: invalid handle %p\n", __func__, instance);
++
++		return -1;
++	}
++
++	LOG_DBG(" .. about to lock (%d)\n", instance->num_connections);
++	if(mutex_lock_interruptible(&instance->vchi_mutex))
++	{
++		LOG_DBG("Interrupted whilst waiting for lock on (%d)\n",instance->num_connections);
++		return -EINTR;
++	}
++
++	/* Close all VCHI service connections */
++	for (i = 0; i < instance->num_connections; i++) {
++		int32_t success;
++		LOG_DBG(" .. %i:closing %p\n", i, instance->vchi_handle[i]);
++		vchi_service_use(instance->vchi_handle[i]);
++
++		success = vchi_service_close(instance->vchi_handle[i]);
++		if (success != 0) {
++			LOG_DBG
++			    ("%s: failed to close VCHI service connection (status=%d)\n",
++			     __func__, success);
++		}
++	}
++
++	mutex_unlock(&instance->vchi_mutex);
++
++	kfree(instance);
++
++	LOG_DBG(" .. OUT\n");
++
++	return 0;
++}
++
++static int bcm2835_audio_open_connection(bcm2835_alsa_stream_t * alsa_stream)
++{
++	static VCHI_INSTANCE_T vchi_instance;
++	static VCHI_CONNECTION_T *vchi_connection;
++	static int initted;
++	AUDIO_INSTANCE_T *instance = alsa_stream->instance;
++	int ret;
++	LOG_DBG(" .. IN\n");
++
++	LOG_INFO("%s: start\n", __func__);
++	BUG_ON(instance);
++	if (instance) {
++		LOG_ERR("%s: VCHI instance already open (%p)\n",
++			__func__, instance);
++		instance->alsa_stream = alsa_stream;
++		alsa_stream->instance = instance;
++		ret = 0;	// xxx todo -1;
++		goto err_free_mem;
++	}
++
++	/* Initialize and create a VCHI connection */
++	if (!initted) {
++	  ret = vchi_initialise(&vchi_instance);
++	  if (ret != 0) {
++		  LOG_ERR("%s: failed to initialise VCHI instance (ret=%d)\n",
++			  __func__, ret);
++
++		  ret = -EIO;
++		  goto err_free_mem;
++	  }
++	  ret = vchi_connect(NULL, 0, vchi_instance);
++	  if (ret != 0) {
++		  LOG_ERR("%s: failed to connect VCHI instance (ret=%d)\n",
++			  __func__, ret);
++
++		  ret = -EIO;
++		  goto err_free_mem;
++	  }
++	initted = 1;
++	}
++
++	/* Initialize an instance of the audio service */
++	instance = vc_vchi_audio_init(vchi_instance, &vchi_connection, 1);
++
++	if (instance == NULL) {
++		LOG_ERR("%s: failed to initialize audio service\n", __func__);
++
++		ret = -EPERM;
++		goto err_free_mem;
++	}
++
++	instance->alsa_stream = alsa_stream;
++	alsa_stream->instance = instance;
++
++	LOG_DBG(" success !\n");
++err_free_mem:
++	LOG_DBG(" .. OUT\n");
++
++	return ret;
++}
++
++int bcm2835_audio_open(bcm2835_alsa_stream_t * alsa_stream)
++{
++	AUDIO_INSTANCE_T *instance;
++	VC_AUDIO_MSG_T m;
++	int32_t success;
++	int ret;
++	LOG_DBG(" .. IN\n");
++
++	my_workqueue_init(alsa_stream);
++
++	ret = bcm2835_audio_open_connection(alsa_stream);
++	if (ret != 0) {
++		ret = -1;
++		goto exit;
++	}
++	instance = alsa_stream->instance;
++	LOG_DBG(" instance (%p)\n", instance);
++
++	if(mutex_lock_interruptible(&instance->vchi_mutex))
++	{
++		LOG_DBG("Interrupted whilst waiting for lock on (%d)\n",instance->num_connections);
++		return -EINTR;
++	}
++	vchi_service_use(instance->vchi_handle[0]);
++
++	m.type = VC_AUDIO_MSG_TYPE_OPEN;
++
++	/* Send the message to the videocore */
++	success = vchi_msg_queue(instance->vchi_handle[0],
++				 &m, sizeof m,
++				 VCHI_FLAGS_BLOCK_UNTIL_QUEUED, NULL);
++
++	if (success != 0) {
++		LOG_ERR("%s: failed on vchi_msg_queue (status=%d)\n",
++			__func__, success);
++
++		ret = -1;
++		goto unlock;
++	}
++
++	ret = 0;
++
++unlock:
++	vchi_service_release(instance->vchi_handle[0]);
++	mutex_unlock(&instance->vchi_mutex);
++exit:
++	LOG_DBG(" .. OUT\n");
++	return ret;
++}
++
++static int bcm2835_audio_set_ctls_chan(bcm2835_alsa_stream_t * alsa_stream,
++				       bcm2835_chip_t * chip)
++{
++	VC_AUDIO_MSG_T m;
++	AUDIO_INSTANCE_T *instance = alsa_stream->instance;
++	int32_t success;
++	int ret;
++	LOG_DBG(" .. IN\n");
++
++	LOG_INFO
++	    (" Setting ALSA dest(%d), volume(%d)\n", chip->dest, chip->volume);
++
++	if(mutex_lock_interruptible(&instance->vchi_mutex))
++	{
++		LOG_DBG("Interrupted whilst waiting for lock on (%d)\n",instance->num_connections);
++		return -EINTR;
++	}
++	vchi_service_use(instance->vchi_handle[0]);
++
++	instance->result = -1;
++
++	m.type = VC_AUDIO_MSG_TYPE_CONTROL;
++	m.u.control.dest = chip->dest;
++	m.u.control.volume = chip->volume;
++
++	/* Create the message available completion */
++	init_completion(&instance->msg_avail_comp);
++
++	/* Send the message to the videocore */
++	success = vchi_msg_queue(instance->vchi_handle[0],
++				 &m, sizeof m,
++				 VCHI_FLAGS_BLOCK_UNTIL_QUEUED, NULL);
++
++	if (success != 0) {
++		LOG_ERR("%s: failed on vchi_msg_queue (status=%d)\n",
++			__func__, success);
++
++		ret = -1;
++		goto unlock;
++	}
++
++	/* We are expecting a reply from the videocore */
++	ret = wait_for_completion_interruptible(&instance->msg_avail_comp);
++	if (ret) {
++		LOG_DBG("%s: failed on waiting for event (status=%d)\n",
++			__func__, success);
++		goto unlock;
++	}
++
++	if (instance->result != 0) {
++		LOG_ERR("%s: result=%d\n", __func__, instance->result);
++
++		ret = -1;
++		goto unlock;
++	}
++
++	ret = 0;
++
++unlock:
++	vchi_service_release(instance->vchi_handle[0]);
++	mutex_unlock(&instance->vchi_mutex);
++
++	LOG_DBG(" .. OUT\n");
++	return ret;
++}
++
++int bcm2835_audio_set_ctls(bcm2835_chip_t * chip)
++{
++	int i;
++	int ret = 0;
++	LOG_DBG(" .. IN\n");
++	LOG_DBG(" Setting ALSA dest(%d), volume(%d)\n", chip->dest, chip->volume);
++
++	/* change ctls for all substreams */
++	for (i = 0; i < MAX_SUBSTREAMS; i++) {
++		if (chip->avail_substreams & (1 << i)) {
++			if (!chip->alsa_stream[i]) {
++				LOG_DBG(" No ALSA stream available?! %i:%p (%x)\n",
++					i, chip->alsa_stream[i],
++					chip->avail_substreams);
++				ret = 0;
++			} else if (bcm2835_audio_set_ctls_chan(chip->alsa_stream[i],
++							       chip) != 0) {
++				/* bcm2835_audio_set_ctls_chan() returns 0 on success */
++				LOG_ERR("Couldn't set the controls for stream %d\n", i);
++				ret = -1;
++			} else {
++				LOG_DBG(" Controls set for stream %d\n", i);
++			}
++		}
++	}
++	LOG_DBG(" .. OUT ret=%d\n", ret);
++	return ret;
++}
++
++int bcm2835_audio_set_params(bcm2835_alsa_stream_t * alsa_stream,
++			     uint32_t channels, uint32_t samplerate,
++			     uint32_t bps)
++{
++	VC_AUDIO_MSG_T m;
++	AUDIO_INSTANCE_T *instance = alsa_stream->instance;
++	int32_t success;
++	int ret;
++	LOG_DBG(" .. IN\n");
++
++	LOG_INFO
++	    (" Setting ALSA channels(%d), samplerate(%d), bits-per-sample(%d)\n",
++	     channels, samplerate, bps);
++
++	/* resend ctls - alsa_stream may not have been open when first send */
++	ret = bcm2835_audio_set_ctls_chan(alsa_stream, alsa_stream->chip);
++	if (ret != 0) {
++		LOG_ERR(" Alsa controls not supported\n");
++		return -EINVAL;
++	}
++
++	if (mutex_lock_interruptible(&instance->vchi_mutex)) {
++		LOG_DBG("Interrupted whilst waiting for lock on (%d)\n",
++			instance->num_connections);
++		return -EINTR;
++	}
++	vchi_service_use(instance->vchi_handle[0]);
++
++	instance->result = -1;
++
++	m.type = VC_AUDIO_MSG_TYPE_CONFIG;
++	m.u.config.channels = channels;
++	m.u.config.samplerate = samplerate;
++	m.u.config.bps = bps;
++
++	/* Create the message available completion */
++	init_completion(&instance->msg_avail_comp);
++
++	/* Send the message to the videocore */
++	success = vchi_msg_queue(instance->vchi_handle[0],
++				 &m, sizeof m,
++				 VCHI_FLAGS_BLOCK_UNTIL_QUEUED, NULL);
++
++	if (success != 0) {
++		LOG_ERR("%s: failed on vchi_msg_queue (status=%d)\n",
++			__func__, success);
++
++		ret = -1;
++		goto unlock;
++	}
++
++	/* We are expecting a reply from the videocore */
++	ret = wait_for_completion_interruptible(&instance->msg_avail_comp);
++	if (ret) {
++		LOG_DBG("%s: failed on waiting for event (status=%d)\n",
++			__func__, ret);
++		goto unlock;
++	}
++
++	if (instance->result != 0) {
++		LOG_ERR("%s: result=%d\n", __func__, instance->result);
++
++		ret = -1;
++		goto unlock;
++	}
++
++	ret = 0;
++
++unlock:
++	vchi_service_release(instance->vchi_handle[0]);
++	mutex_unlock(&instance->vchi_mutex);
++
++	LOG_DBG(" .. OUT\n");
++	return ret;
++}
++
++int bcm2835_audio_setup(bcm2835_alsa_stream_t * alsa_stream)
++{
++	LOG_DBG(" .. IN\n");
++
++	LOG_DBG(" .. OUT\n");
++
++	return 0;
++}
++
++static int bcm2835_audio_start_worker(bcm2835_alsa_stream_t * alsa_stream)
++{
++	VC_AUDIO_MSG_T m;
++	AUDIO_INSTANCE_T *instance = alsa_stream->instance;
++	int32_t success;
++	int ret;
++	LOG_DBG(" .. IN\n");
++
++	if (mutex_lock_interruptible(&instance->vchi_mutex)) {
++		LOG_DBG("Interrupted whilst waiting for lock on (%d)\n",
++			instance->num_connections);
++		return -EINTR;
++	}
++	vchi_service_use(instance->vchi_handle[0]);
++
++	m.type = VC_AUDIO_MSG_TYPE_START;
++
++	/* Send the message to the videocore */
++	success = vchi_msg_queue(instance->vchi_handle[0],
++				 &m, sizeof m,
++				 VCHI_FLAGS_BLOCK_UNTIL_QUEUED, NULL);
++
++	if (success != 0) {
++		LOG_ERR("%s: failed on vchi_msg_queue (status=%d)\n",
++			__func__, success);
++
++		ret = -1;
++		goto unlock;
++	}
++
++	ret = 0;
++
++unlock:
++	vchi_service_release(instance->vchi_handle[0]);
++	mutex_unlock(&instance->vchi_mutex);
++	LOG_DBG(" .. OUT\n");
++	return ret;
++}
++
++static int bcm2835_audio_stop_worker(bcm2835_alsa_stream_t * alsa_stream)
++{
++	VC_AUDIO_MSG_T m;
++	AUDIO_INSTANCE_T *instance = alsa_stream->instance;
++	int32_t success;
++	int ret;
++	LOG_DBG(" .. IN\n");
++
++	if (mutex_lock_interruptible(&instance->vchi_mutex)) {
++		LOG_DBG("Interrupted whilst waiting for lock on (%d)\n",
++			instance->num_connections);
++		return -EINTR;
++	}
++	vchi_service_use(instance->vchi_handle[0]);
++
++	m.type = VC_AUDIO_MSG_TYPE_STOP;
++	m.u.stop.draining = alsa_stream->draining;
++
++	/* Send the message to the videocore */
++	success = vchi_msg_queue(instance->vchi_handle[0],
++				 &m, sizeof m,
++				 VCHI_FLAGS_BLOCK_UNTIL_QUEUED, NULL);
++
++	if (success != 0) {
++		LOG_ERR("%s: failed on vchi_msg_queue (status=%d)\n",
++			__func__, success);
++
++		ret = -1;
++		goto unlock;
++	}
++
++	ret = 0;
++
++unlock:
++	vchi_service_release(instance->vchi_handle[0]);
++	mutex_unlock(&instance->vchi_mutex);
++	LOG_DBG(" .. OUT\n");
++	return ret;
++}
++
++int bcm2835_audio_close(bcm2835_alsa_stream_t * alsa_stream)
++{
++	VC_AUDIO_MSG_T m;
++	AUDIO_INSTANCE_T *instance = alsa_stream->instance;
++	int32_t success;
++	int ret;
++	LOG_DBG(" .. IN\n");
++
++	my_workqueue_quit(alsa_stream);
++
++	if (mutex_lock_interruptible(&instance->vchi_mutex)) {
++		LOG_DBG("Interrupted whilst waiting for lock on (%d)\n",
++			instance->num_connections);
++		return -EINTR;
++	}
++	vchi_service_use(instance->vchi_handle[0]);
++
++	m.type = VC_AUDIO_MSG_TYPE_CLOSE;
++
++	/* Create the message available completion */
++	init_completion(&instance->msg_avail_comp);
++
++	/* Send the message to the videocore */
++	success = vchi_msg_queue(instance->vchi_handle[0],
++				 &m, sizeof m,
++				 VCHI_FLAGS_BLOCK_UNTIL_QUEUED, NULL);
++
++	if (success != 0) {
++		LOG_ERR("%s: failed on vchi_msg_queue (status=%d)\n",
++			__func__, success);
++		ret = -1;
++		goto unlock;
++	}
++
++	ret = wait_for_completion_interruptible(&instance->msg_avail_comp);
++	if (ret) {
++		LOG_DBG("%s: failed on waiting for event (status=%d)\n",
++			__func__, ret);
++		goto unlock;
++	}
++	if (instance->result != 0) {
++		LOG_ERR("%s: failed result (status=%d)\n",
++			__func__, instance->result);
++
++		ret = -1;
++		goto unlock;
++	}
++
++	ret = 0;
++
++unlock:
++	vchi_service_release(instance->vchi_handle[0]);
++	mutex_unlock(&instance->vchi_mutex);
++
++	/* Stop the audio service */
++	if (instance) {
++		vc_vchi_audio_deinit(instance);
++		alsa_stream->instance = NULL;
++	}
++	LOG_DBG(" .. OUT\n");
++	return ret;
++}
++
++int bcm2835_audio_write_worker(bcm2835_alsa_stream_t *alsa_stream,
++			       uint32_t count, void *src)
++{
++	VC_AUDIO_MSG_T m;
++	AUDIO_INSTANCE_T *instance = alsa_stream->instance;
++	int32_t success;
++	int ret;
++
++	LOG_DBG(" .. IN\n");
++
++	LOG_INFO(" Writing %d bytes from %p\n", count, src);
++
++	if (mutex_lock_interruptible(&instance->vchi_mutex)) {
++		LOG_DBG("Interrupted whilst waiting for lock on (%d)\n",
++			instance->num_connections);
++		return -EINTR;
++	}
++	vchi_service_use(instance->vchi_handle[0]);
++
++	if ( instance->peer_version==0 && vchi_get_peer_version(instance->vchi_handle[0], &instance->peer_version) == 0 ) {
++		LOG_DBG("%s: client version %d connected\n", __func__, instance->peer_version);
++	}
++	m.type = VC_AUDIO_MSG_TYPE_WRITE;
++	m.u.write.count = count;
++	// old version uses bulk, new version uses control
++	m.u.write.max_packet = (instance->peer_version < 2 || force_bulk) ? 0 : 4000;
++	m.u.write.callback = alsa_stream->fifo_irq_handler;
++	m.u.write.cookie = alsa_stream;
++	m.u.write.silence = src == NULL;
++
++	/* Send the message to the videocore */
++	success = vchi_msg_queue(instance->vchi_handle[0],
++				 &m, sizeof m,
++				 VCHI_FLAGS_BLOCK_UNTIL_QUEUED, NULL);
++
++	if (success != 0) {
++		LOG_ERR("%s: failed on vchi_msg_queue (status=%d)\n",
++			__func__, success);
++
++		ret = -1;
++		goto unlock;
++	}
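++	/*
++	 * The message above only describes the write; the payload follows
++	 * here, either as a single bulk transfer (max_packet == 0) or as a
++	 * run of control messages of at most max_packet bytes each. Silence
++	 * writes carry no payload at all.
++	 */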
++	if (!m.u.write.silence) {
++		if (m.u.write.max_packet == 0) {
++			/* Send the message to the videocore */
++			success = vchi_bulk_queue_transmit(instance->vchi_handle[0],
++							   src, count,
++							   0 *
++							   VCHI_FLAGS_BLOCK_UNTIL_QUEUED
++							   +
++							   1 *
++							   VCHI_FLAGS_BLOCK_UNTIL_DATA_READ,
++							   NULL);
++		} else {
++			while (count > 0) {
++				int bytes = min((int)m.u.write.max_packet, (int)count);
++				success = vchi_msg_queue(instance->vchi_handle[0],
++							 src, bytes,
++							 VCHI_FLAGS_BLOCK_UNTIL_QUEUED, NULL);
++				src = (char *)src + bytes;
++				count -= bytes;
++			}
++		}
++		if (success != 0) {
++			LOG_ERR
++			    ("%s: failed to queue audio data (status=%d)\n",
++			     __func__, success);
++
++			ret = -1;
++			goto unlock;
++		}
++	}
++	ret = 0;
++
++unlock:
++	vchi_service_release(instance->vchi_handle[0]);
++	mutex_unlock(&instance->vchi_mutex);
++	LOG_DBG(" .. OUT\n");
++	return ret;
++}
++
++/**
++  * Returns all buffers from arm->vc
++  */
++void bcm2835_audio_flush_buffers(bcm2835_alsa_stream_t * alsa_stream)
++{
++	LOG_DBG(" .. IN\n");
++	LOG_DBG(" .. OUT\n");
++	return;
++}
++
++/**
++  * Forces VC to flush (drop) its filled playback buffers and
++  * return them to us. (VC->ARM)
++  */
++void bcm2835_audio_flush_playback_buffers(bcm2835_alsa_stream_t * alsa_stream)
++{
++	LOG_DBG(" .. IN\n");
++	LOG_DBG(" .. OUT\n");
++}
++
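++/* Report how many bytes the VPU has consumed since the last call. The
++ * count is accumulated in alsa_stream->retrieved by the audio service
++ * callback as the VPU reports completed writes.
++ */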
++uint32_t bcm2835_audio_retrieve_buffers(bcm2835_alsa_stream_t * alsa_stream)
++{
++	uint32_t count = atomic_read(&alsa_stream->retrieved);
++	atomic_sub(count, &alsa_stream->retrieved);
++	return count;
++}
++
++module_param(force_bulk, bool, 0444);
++MODULE_PARM_DESC(force_bulk, "Force use of vchiq bulk for audio");
+--- /dev/null
++++ b/sound/arm/bcm2835.c
+@@ -0,0 +1,511 @@
++/*****************************************************************************
++* Copyright 2011 Broadcom Corporation.  All rights reserved.
++*
++* Unless you and Broadcom execute a separate written software license
++* agreement governing use of this software, this software is licensed to you
++* under the terms of the GNU General Public License version 2, available at
++* http://www.broadcom.com/licenses/GPLv2.php (the "GPL").
++*
++* Notwithstanding the above, under no circumstances may you combine this
++* software in any way with any other Broadcom software provided under a
++* license other than the GPL, without Broadcom's express prior written
++* consent.
++*****************************************************************************/
++
++#include <linux/platform_device.h>
++
++#include <linux/init.h>
++#include <linux/slab.h>
++#include <linux/module.h>
++#include <linux/of.h>
++
++#include "bcm2835.h"
++
++/* module parameters (see "Module Parameters") */
++/* SNDRV_CARDS: maximum number of cards supported by this module */
++static int index[MAX_SUBSTREAMS] = {[0 ... (MAX_SUBSTREAMS - 1)] = -1 };
++static char *id[MAX_SUBSTREAMS] = {[0 ... (MAX_SUBSTREAMS - 1)] = NULL };
++static int enable[MAX_SUBSTREAMS] = {[0 ... (MAX_SUBSTREAMS - 1)] = 1 };
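++/* One slot per possible substream; the index/id/enable triplet mirrors the
++ * usual ALSA card module-parameter convention (defaults: -1, NULL, 1).
++ */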
++
++/* HACKY global pointers needed for successive probes to work : ssp
++ * But compared against the changes we will have to do in VC audio_ipc code
++ * to export 8 audio_ipc devices as a single IPC device and then monitor all
++ * four devices in a thread, this gets things done quickly and should be easier
++ * to debug if we run into issues
++ */
++
++static struct snd_card *g_card = NULL;
++static bcm2835_chip_t *g_chip = NULL;
++
++static int snd_bcm2835_free(bcm2835_chip_t * chip)
++{
++	kfree(chip);
++	return 0;
++}
++
++/* component-destructor
++ * (see "Management of Cards and Components")
++ */
++static int snd_bcm2835_dev_free(struct snd_device *device)
++{
++	return snd_bcm2835_free(device->device_data);
++}
++
++/* chip-specific constructor
++ * (see "Management of Cards and Components")
++ */
++static int snd_bcm2835_create(struct snd_card *card,
++					struct platform_device *pdev,
++					bcm2835_chip_t ** rchip)
++{
++	bcm2835_chip_t *chip;
++	int err;
++	static struct snd_device_ops ops = {
++		.dev_free = snd_bcm2835_dev_free,
++	};
++
++	*rchip = NULL;
++
++	chip = kzalloc(sizeof(*chip), GFP_KERNEL);
++	if (chip == NULL)
++		return -ENOMEM;
++
++	chip->card = card;
++
++	err = snd_device_new(card, SNDRV_DEV_LOWLEVEL, chip, &ops);
++	if (err < 0) {
++		snd_bcm2835_free(chip);
++		return err;
++	}
++
++	*rchip = chip;
++	return 0;
++}
++
++static int snd_bcm2835_alsa_probe_dt(struct platform_device *pdev)
++{
++	struct device *dev = &pdev->dev;
++	bcm2835_chip_t *chip;
++	struct snd_card *card;
++	u32 numchans;
++	int err, i;
++
++	err = of_property_read_u32(dev->of_node, "brcm,pwm-channels",
++				   &numchans);
++	if (err) {
++		dev_err(dev, "Failed to get DT property 'brcm,pwm-channels'\n");
++		return err;
++	}
++
++	if (numchans == 0 || numchans > MAX_SUBSTREAMS) {
++		numchans = MAX_SUBSTREAMS;
++		dev_warn(dev, "Illegal 'brcm,pwm-channels' value, will use %u\n",
++			 numchans);
++	}
++
++	err = snd_card_new(NULL, -1, NULL, THIS_MODULE, 0, &card);
++	if (err) {
++		dev_err(dev, "Failed to create soundcard structure\n");
++		return err;
++	}
++
++	snd_card_set_dev(card, dev);
++	strcpy(card->driver, "bcm2835");
++	strcpy(card->shortname, "bcm2835 ALSA");
++	sprintf(card->longname, "%s", card->shortname);
++
++	err = snd_bcm2835_create(card, pdev, &chip);
++	if (err < 0) {
++		dev_err(dev, "Failed to create bcm2835 chip\n");
++		goto err_free;
++	}
++
++	err = snd_bcm2835_new_pcm(chip);
++	if (err < 0) {
++		dev_err(dev, "Failed to create new bcm2835 pcm device\n");
++		goto err_free;
++	}
++
++	err = snd_bcm2835_new_spdif_pcm(chip);
++	if (err < 0) {
++		dev_err(dev, "Failed to create new bcm2835 spdif pcm device\n");
++		goto err_free;
++	}
++
++	err = snd_bcm2835_new_ctl(chip);
++	if (err < 0) {
++		dev_err(dev, "Failed to create new bcm2835 ctl\n");
++		goto err_free;
++	}
++
++	for (i = 0; i < numchans; i++) {
++		chip->avail_substreams |= (1 << i);
++		chip->pdev[i] = pdev;
++	}
++
++	err = snd_card_register(card);
++	if (err) {
++		dev_err(dev, "Failed to register bcm2835 ALSA card\n");
++		goto err_free;
++	}
++
++	g_card = card;
++	g_chip = chip;
++	platform_set_drvdata(pdev, card);
++	audio_info("bcm2835 ALSA card created with %u channels\n", numchans);
++
++	return 0;
++
++err_free:
++	snd_card_free(card);
++
++	return err;
++}
++
++static int snd_bcm2835_alsa_probe(struct platform_device *pdev)
++{
++	static int dev;
++	bcm2835_chip_t *chip;
++	struct snd_card *card;
++	int err;
++
++	if (pdev->dev.of_node)
++		return snd_bcm2835_alsa_probe_dt(pdev);
++
++	if (dev >= MAX_SUBSTREAMS)
++		return -ENODEV;
++
++	if (!enable[dev]) {
++		dev++;
++		return -ENOENT;
++	}
++
++	if (dev > 0)
++		goto add_register_map;
++
++	err = snd_card_new(NULL, index[dev], id[dev], THIS_MODULE, 0, &g_card);
++	if (err < 0)
++		goto out;
++
++	snd_card_set_dev(g_card, &pdev->dev);
++	strcpy(g_card->driver, "bcm2835");
++	strcpy(g_card->shortname, "bcm2835 ALSA");
++	sprintf(g_card->longname, "%s", g_card->shortname);
++
++	err = snd_bcm2835_create(g_card, pdev, &chip);
++	if (err < 0) {
++		dev_err(&pdev->dev, "Failed to create bcm2835 chip\n");
++		goto out_bcm2835_create;
++	}
++
++	g_chip = chip;
++	err = snd_bcm2835_new_pcm(chip);
++	if (err < 0) {
++		dev_err(&pdev->dev, "Failed to create new BCM2835 pcm device\n");
++		goto out_bcm2835_new_pcm;
++	}
++
++	err = snd_bcm2835_new_spdif_pcm(chip);
++	if (err < 0) {
++		dev_err(&pdev->dev, "Failed to create new BCM2835 spdif pcm device\n");
++		goto out_bcm2835_new_spdif;
++	}
++
++	err = snd_bcm2835_new_ctl(chip);
++	if (err < 0) {
++		dev_err(&pdev->dev, "Failed to create new BCM2835 ctl\n");
++		goto out_bcm2835_new_ctl;
++	}
++
++add_register_map:
++	card = g_card;
++	chip = g_chip;
++
++	BUG_ON(!(card && chip));
++
++	chip->avail_substreams |= (1 << dev);
++	chip->pdev[dev] = pdev;
++
++	if (dev == 0) {
++		err = snd_card_register(card);
++		if (err < 0) {
++			dev_err(&pdev->dev,
++				"Failed to register bcm2835 ALSA card\n");
++			goto out_card_register;
++		}
++		platform_set_drvdata(pdev, card);
++		audio_info("bcm2835 ALSA card created!\n");
++	} else {
++		audio_info("bcm2835 ALSA chip created!\n");
++		platform_set_drvdata(pdev, (void *)dev);
++	}
++
++	dev++;
++
++	return 0;
++
++out_card_register:
++out_bcm2835_new_ctl:
++out_bcm2835_new_spdif:
++out_bcm2835_new_pcm:
++out_bcm2835_create:
++	BUG_ON(!g_card);
++	if (snd_card_free(g_card))
++		dev_err(&pdev->dev, "Failed to free Registered alsa card\n");
++	g_card = NULL;
++out:
++	dev = SNDRV_CARDS;	/* stop more avail_substreams from being probed */
++	dev_err(&pdev->dev, "BCM2835 ALSA Probe failed !!\n");
++	return err;
++}
++
++static int snd_bcm2835_alsa_remove(struct platform_device *pdev)
++{
++	uint32_t idx;
++	void *drv_data;
++
++	drv_data = platform_get_drvdata(pdev);
++
++	if (drv_data == (void *)g_card) {
++		/* This is the card device */
++		snd_card_free((struct snd_card *)drv_data);
++		g_card = NULL;
++		g_chip = NULL;
++	} else {
++		idx = (uint32_t) drv_data;
++		if (g_card != NULL) {
++			BUG_ON(!g_chip);
++			/* We pass chip device numbers in audio ipc devices
++			 * other than the one we registered our card with
++			 */
++			idx = (uint32_t) drv_data;
++			BUG_ON(!idx || idx > MAX_SUBSTREAMS);
++			g_chip->avail_substreams &= ~(1 << idx);
++			/* There should be at least one substream registered
++			 * after we are done here, as it will be removed when
++			 * the *remove* is called for the card device
++			 */
++			BUG_ON(!g_chip->avail_substreams);
++		}
++	}
++
++	platform_set_drvdata(pdev, NULL);
++
++	return 0;
++}
++
++#ifdef CONFIG_PM
++static int snd_bcm2835_alsa_suspend(struct platform_device *pdev,
++				    pm_message_t state)
++{
++	return 0;
++}
++
++static int snd_bcm2835_alsa_resume(struct platform_device *pdev)
++{
++	return 0;
++}
++
++#endif
++
++static const struct of_device_id snd_bcm2835_of_match_table[] = {
++	{ .compatible = "brcm,bcm2835-audio", },
++	{},
++};
++MODULE_DEVICE_TABLE(of, snd_bcm2835_of_match_table);
++
++static struct platform_driver bcm2835_alsa0_driver = {
++	.probe = snd_bcm2835_alsa_probe,
++	.remove = snd_bcm2835_alsa_remove,
++#ifdef CONFIG_PM
++	.suspend = snd_bcm2835_alsa_suspend,
++	.resume = snd_bcm2835_alsa_resume,
++#endif
++	.driver = {
++		   .name = "bcm2835_AUD0",
++		   .owner = THIS_MODULE,
++		   .of_match_table = snd_bcm2835_of_match_table,
++		   },
++};
++
++static struct platform_driver bcm2835_alsa1_driver = {
++	.probe = snd_bcm2835_alsa_probe,
++	.remove = snd_bcm2835_alsa_remove,
++#ifdef CONFIG_PM
++	.suspend = snd_bcm2835_alsa_suspend,
++	.resume = snd_bcm2835_alsa_resume,
++#endif
++	.driver = {
++		   .name = "bcm2835_AUD1",
++		   .owner = THIS_MODULE,
++		   },
++};
++
++static struct platform_driver bcm2835_alsa2_driver = {
++	.probe = snd_bcm2835_alsa_probe,
++	.remove = snd_bcm2835_alsa_remove,
++#ifdef CONFIG_PM
++	.suspend = snd_bcm2835_alsa_suspend,
++	.resume = snd_bcm2835_alsa_resume,
++#endif
++	.driver = {
++		   .name = "bcm2835_AUD2",
++		   .owner = THIS_MODULE,
++		   },
++};
++
++static struct platform_driver bcm2835_alsa3_driver = {
++	.probe = snd_bcm2835_alsa_probe,
++	.remove = snd_bcm2835_alsa_remove,
++#ifdef CONFIG_PM
++	.suspend = snd_bcm2835_alsa_suspend,
++	.resume = snd_bcm2835_alsa_resume,
++#endif
++	.driver = {
++		   .name = "bcm2835_AUD3",
++		   .owner = THIS_MODULE,
++		   },
++};
++
++static struct platform_driver bcm2835_alsa4_driver = {
++	.probe = snd_bcm2835_alsa_probe,
++	.remove = snd_bcm2835_alsa_remove,
++#ifdef CONFIG_PM
++	.suspend = snd_bcm2835_alsa_suspend,
++	.resume = snd_bcm2835_alsa_resume,
++#endif
++	.driver = {
++		   .name = "bcm2835_AUD4",
++		   .owner = THIS_MODULE,
++		   },
++};
++
++static struct platform_driver bcm2835_alsa5_driver = {
++	.probe = snd_bcm2835_alsa_probe,
++	.remove = snd_bcm2835_alsa_remove,
++#ifdef CONFIG_PM
++	.suspend = snd_bcm2835_alsa_suspend,
++	.resume = snd_bcm2835_alsa_resume,
++#endif
++	.driver = {
++		   .name = "bcm2835_AUD5",
++		   .owner = THIS_MODULE,
++		   },
++};
++
++static struct platform_driver bcm2835_alsa6_driver = {
++	.probe = snd_bcm2835_alsa_probe,
++	.remove = snd_bcm2835_alsa_remove,
++#ifdef CONFIG_PM
++	.suspend = snd_bcm2835_alsa_suspend,
++	.resume = snd_bcm2835_alsa_resume,
++#endif
++	.driver = {
++		   .name = "bcm2835_AUD6",
++		   .owner = THIS_MODULE,
++		   },
++};
++
++static struct platform_driver bcm2835_alsa7_driver = {
++	.probe = snd_bcm2835_alsa_probe,
++	.remove = snd_bcm2835_alsa_remove,
++#ifdef CONFIG_PM
++	.suspend = snd_bcm2835_alsa_suspend,
++	.resume = snd_bcm2835_alsa_resume,
++#endif
++	.driver = {
++		   .name = "bcm2835_AUD7",
++		   .owner = THIS_MODULE,
++		   },
++};
++
++static int bcm2835_alsa_device_init(void)
++{
++	int err;
++	err = platform_driver_register(&bcm2835_alsa0_driver);
++	if (err) {
++		pr_err("Error registering bcm2835_alsa0_driver %d\n", err);
++		goto out;
++	}
++
++	err = platform_driver_register(&bcm2835_alsa1_driver);
++	if (err) {
++		pr_err("Error registering bcm2835_alsa1_driver %d\n", err);
++		goto unregister_0;
++	}
++
++	err = platform_driver_register(&bcm2835_alsa2_driver);
++	if (err) {
++		pr_err("Error registering bcm2835_alsa2_driver %d\n", err);
++		goto unregister_1;
++	}
++
++	err = platform_driver_register(&bcm2835_alsa3_driver);
++	if (err) {
++		pr_err("Error registering bcm2835_alsa3_driver %d\n", err);
++		goto unregister_2;
++	}
++
++	err = platform_driver_register(&bcm2835_alsa4_driver);
++	if (err) {
++		pr_err("Error registering bcm2835_alsa4_driver %d\n", err);
++		goto unregister_3;
++	}
++
++	err = platform_driver_register(&bcm2835_alsa5_driver);
++	if (err) {
++		pr_err("Error registering bcm2835_alsa5_driver %d\n", err);
++		goto unregister_4;
++	}
++
++	err = platform_driver_register(&bcm2835_alsa6_driver);
++	if (err) {
++		pr_err("Error registering bcm2835_alsa6_driver %d\n", err);
++		goto unregister_5;
++	}
++
++	err = platform_driver_register(&bcm2835_alsa7_driver);
++	if (err) {
++		pr_err("Error registering bcm2835_alsa7_driver %d\n", err);
++		goto unregister_6;
++	}
++
++	return 0;
++
++unregister_6:
++	platform_driver_unregister(&bcm2835_alsa6_driver);
++unregister_5:
++	platform_driver_unregister(&bcm2835_alsa5_driver);
++unregister_4:
++	platform_driver_unregister(&bcm2835_alsa4_driver);
++unregister_3:
++	platform_driver_unregister(&bcm2835_alsa3_driver);
++unregister_2:
++	platform_driver_unregister(&bcm2835_alsa2_driver);
++unregister_1:
++	platform_driver_unregister(&bcm2835_alsa1_driver);
++unregister_0:
++	platform_driver_unregister(&bcm2835_alsa0_driver);
++out:
++	return err;
++}
++
++static void bcm2835_alsa_device_exit(void)
++{
++	platform_driver_unregister(&bcm2835_alsa0_driver);
++	platform_driver_unregister(&bcm2835_alsa1_driver);
++	platform_driver_unregister(&bcm2835_alsa2_driver);
++	platform_driver_unregister(&bcm2835_alsa3_driver);
++	platform_driver_unregister(&bcm2835_alsa4_driver);
++	platform_driver_unregister(&bcm2835_alsa5_driver);
++	platform_driver_unregister(&bcm2835_alsa6_driver);
++	platform_driver_unregister(&bcm2835_alsa7_driver);
++}
++
++late_initcall(bcm2835_alsa_device_init);
++module_exit(bcm2835_alsa_device_exit);
++
++MODULE_AUTHOR("Dom Cobley");
++MODULE_DESCRIPTION("ALSA driver for BCM2835 chip");
++MODULE_LICENSE("GPL");
++MODULE_ALIAS("platform:bcm2835_alsa");
+--- /dev/null
++++ b/sound/arm/bcm2835.h
+@@ -0,0 +1,167 @@
++/*****************************************************************************
++* Copyright 2011 Broadcom Corporation.  All rights reserved.
++*
++* Unless you and Broadcom execute a separate written software license
++* agreement governing use of this software, this software is licensed to you
++* under the terms of the GNU General Public License version 2, available at
++* http://www.broadcom.com/licenses/GPLv2.php (the "GPL").
++*
++* Notwithstanding the above, under no circumstances may you combine this
++* software in any way with any other Broadcom software provided under a
++* license other than the GPL, without Broadcom's express prior written
++* consent.
++*****************************************************************************/
++
++#ifndef __SOUND_ARM_BCM2835_H
++#define __SOUND_ARM_BCM2835_H
++
++#include <linux/device.h>
++#include <linux/list.h>
++#include <linux/interrupt.h>
++#include <linux/wait.h>
++#include <sound/core.h>
++#include <sound/initval.h>
++#include <sound/pcm.h>
++#include <sound/pcm_params.h>
++#include <sound/pcm-indirect.h>
++#include <linux/workqueue.h>
++
++/*
++#define AUDIO_DEBUG_ENABLE
++#define AUDIO_VERBOSE_DEBUG_ENABLE
++*/
++
++/* Debug macros */
++
++#ifdef AUDIO_DEBUG_ENABLE
++#ifdef AUDIO_VERBOSE_DEBUG_ENABLE
++
++#define audio_debug(fmt, arg...)	\
++	printk(KERN_INFO"%s:%d " fmt, __func__, __LINE__, ##arg)
++
++#define audio_info(fmt, arg...)	\
++	printk(KERN_INFO"%s:%d " fmt, __func__, __LINE__, ##arg)
++
++#else
++
++#define audio_debug(fmt, arg...)
++
++#define audio_info(fmt, arg...)
++
++#endif /* AUDIO_VERBOSE_DEBUG_ENABLE */
++
++#else
++
++#define audio_debug(fmt, arg...)
++
++#define audio_info(fmt, arg...)
++
++#endif /* AUDIO_DEBUG_ENABLE */
++
++#define audio_error(fmt, arg...)	\
++	printk(KERN_ERR"%s:%d " fmt, __func__, __LINE__, ##arg)
++
++#define audio_warning(fmt, arg...)	\
++	printk(KERN_WARNING"%s:%d " fmt, __func__, __LINE__, ##arg)
++
++#define audio_alert(fmt, arg...)	\
++	printk(KERN_ALERT"%s:%d " fmt, __func__, __LINE__, ##arg)
++
++#define MAX_SUBSTREAMS			(8)
++#define AVAIL_SUBSTREAMS_MASK		(0xff)
++enum {
++	CTRL_VOL_MUTE,
++	CTRL_VOL_UNMUTE
++};
++
++/* macros for alsa2chip and chip2alsa, instead of functions */
++
++#define alsa2chip(vol) (uint)(-((vol << 8) / 100))	/* convert alsa to chip volume (defined as macro rather than function call) */
++#define chip2alsa(vol) -((vol * 100) >> 8)			/* convert chip to alsa volume */
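++/* Worked example: an ALSA control value of -600 gives
++ * alsa2chip(-600) = -((-600 << 8) / 100) = 1536, and
++ * chip2alsa(1536) = -((1536 * 100) >> 8) = -600 again.
++ */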
++
++/* Some constants for values .. */
++typedef enum {
++	AUDIO_DEST_AUTO = 0,
++	AUDIO_DEST_HEADPHONES = 1,
++	AUDIO_DEST_HDMI = 2,
++	AUDIO_DEST_MAX,
++} SND_BCM2835_ROUTE_T;
++
++typedef enum {
++	PCM_PLAYBACK_VOLUME,
++	PCM_PLAYBACK_MUTE,
++	PCM_PLAYBACK_DEVICE,
++} SND_BCM2835_CTRL_T;
++
++/* definition of the chip-specific record */
++typedef struct bcm2835_chip {
++	struct snd_card *card;
++	struct snd_pcm *pcm;
++	struct snd_pcm *pcm_spdif;
++	/* Bitmap for valid reg_base and irq numbers */
++	uint32_t avail_substreams;
++	struct platform_device *pdev[MAX_SUBSTREAMS];
++	struct bcm2835_alsa_stream *alsa_stream[MAX_SUBSTREAMS];
++
++	int volume;
++	int old_volume; /* stores the volume value whilst muted */
++	int dest;
++	int mute;
++
++	unsigned int opened;
++	unsigned int spdif_status;
++	struct mutex audio_mutex;
++} bcm2835_chip_t;
++
++typedef struct bcm2835_alsa_stream {
++	bcm2835_chip_t *chip;
++	struct snd_pcm_substream *substream;
++	struct snd_pcm_indirect pcm_indirect;
++
++	struct semaphore buffers_update_sem;
++	struct semaphore control_sem;
++	spinlock_t lock;
++	volatile uint32_t control;
++	volatile uint32_t status;
++
++	int open;
++	int running;
++	int draining;
++
++	int channels;
++	int params_rate;
++	int pcm_format_width;
++
++	unsigned int pos;
++	unsigned int buffer_size;
++	unsigned int period_size;
++
++	uint32_t enable_fifo_irq;
++	irq_handler_t fifo_irq_handler;
++
++	atomic_t retrieved;
++	struct opaque_AUDIO_INSTANCE_T *instance;
++	struct workqueue_struct *my_wq;
++	int idx;
++} bcm2835_alsa_stream_t;
++
++int snd_bcm2835_new_ctl(bcm2835_chip_t * chip);
++int snd_bcm2835_new_pcm(bcm2835_chip_t * chip);
++int snd_bcm2835_new_spdif_pcm(bcm2835_chip_t * chip);
++
++int bcm2835_audio_open(bcm2835_alsa_stream_t * alsa_stream);
++int bcm2835_audio_close(bcm2835_alsa_stream_t * alsa_stream);
++int bcm2835_audio_set_params(bcm2835_alsa_stream_t * alsa_stream,
++			     uint32_t channels, uint32_t samplerate,
++			     uint32_t bps);
++int bcm2835_audio_setup(bcm2835_alsa_stream_t * alsa_stream);
++int bcm2835_audio_start(bcm2835_alsa_stream_t * alsa_stream);
++int bcm2835_audio_stop(bcm2835_alsa_stream_t * alsa_stream);
++int bcm2835_audio_set_ctls(bcm2835_chip_t * chip);
++int bcm2835_audio_write(bcm2835_alsa_stream_t * alsa_stream, uint32_t count,
++			void *src);
++uint32_t bcm2835_audio_retrieve_buffers(bcm2835_alsa_stream_t * alsa_stream);
++void bcm2835_audio_flush_buffers(bcm2835_alsa_stream_t * alsa_stream);
++void bcm2835_audio_flush_playback_buffers(bcm2835_alsa_stream_t * alsa_stream);
++
++#endif /* __SOUND_ARM_BCM2835_H */
+--- /dev/null
++++ b/sound/arm/vc_vchi_audioserv_defs.h
+@@ -0,0 +1,116 @@
++/*****************************************************************************
++* Copyright 2011 Broadcom Corporation.  All rights reserved.
++*
++* Unless you and Broadcom execute a separate written software license
++* agreement governing use of this software, this software is licensed to you
++* under the terms of the GNU General Public License version 2, available at
++* http://www.broadcom.com/licenses/GPLv2.php (the "GPL").
++*
++* Notwithstanding the above, under no circumstances may you combine this
++* software in any way with any other Broadcom software provided under a
++* license other than the GPL, without Broadcom's express prior written
++* consent.
++*****************************************************************************/
++
++#ifndef _VC_AUDIO_DEFS_H_
++#define _VC_AUDIO_DEFS_H_
++
++#define VC_AUDIOSERV_MIN_VER 1
++#define VC_AUDIOSERV_VER 2
++
++// FourCC code used for VCHI connection
++#define VC_AUDIO_SERVER_NAME  MAKE_FOURCC("AUDS")
++
++// Maximum message length
++#define VC_AUDIO_MAX_MSG_LEN  (sizeof( VC_AUDIO_MSG_T ))
++
++// All message types supported for the HOST->VC direction
++typedef enum {
++	VC_AUDIO_MSG_TYPE_RESULT,	// Generic result (VC->HOST reply)
++	VC_AUDIO_MSG_TYPE_COMPLETE,	// Write completion (VC->HOST)
++	VC_AUDIO_MSG_TYPE_CONFIG,	// Configure audio (channels/rate/bps)
++	VC_AUDIO_MSG_TYPE_CONTROL,	// Set volume and destination
++	VC_AUDIO_MSG_TYPE_OPEN,	// Open the audio stream
++	VC_AUDIO_MSG_TYPE_CLOSE,	// Close the audio stream
++	VC_AUDIO_MSG_TYPE_START,	// Start playback
++	VC_AUDIO_MSG_TYPE_STOP,	// Stop playback
++	VC_AUDIO_MSG_TYPE_WRITE,	// Write audio samples
++	VC_AUDIO_MSG_TYPE_MAX
++} VC_AUDIO_MSG_TYPE;
++
++// configure the audio
++typedef struct {
++	uint32_t channels;
++	uint32_t samplerate;
++	uint32_t bps;
++
++} VC_AUDIO_CONFIG_T;
++
++typedef struct {
++	uint32_t volume;
++	uint32_t dest;
++
++} VC_AUDIO_CONTROL_T;
++
++// audio
++typedef struct {
++	uint32_t dummy;
++
++} VC_AUDIO_OPEN_T;
++
++// audio
++typedef struct {
++	uint32_t dummy;
++
++} VC_AUDIO_CLOSE_T;
++// audio
++typedef struct {
++	uint32_t dummy;
++
++} VC_AUDIO_START_T;
++// audio
++typedef struct {
++	uint32_t draining;
++
++} VC_AUDIO_STOP_T;
++
++// configure the write audio samples
++typedef struct {
++	uint32_t count;		// in bytes
++	void *callback;
++	void *cookie;
++	uint16_t silence;
++	uint16_t max_packet;
++} VC_AUDIO_WRITE_T;
++
++// Generic result for a request (VC->HOST)
++typedef struct {
++	int32_t success;	// Success value
++
++} VC_AUDIO_RESULT_T;
++
++// Generic result for a request (VC->HOST)
++typedef struct {
++	int32_t count;		// Number of bytes consumed
++	void *callback;
++	void *cookie;
++} VC_AUDIO_COMPLETE_T;
++
++// Message header for all messages in HOST->VC direction
++typedef struct {
++	int32_t type;		// Message type (VC_AUDIO_MSG_TYPE)
++	union {
++		VC_AUDIO_CONFIG_T config;
++		VC_AUDIO_CONTROL_T control;
++		VC_AUDIO_OPEN_T open;
++		VC_AUDIO_CLOSE_T close;
++		VC_AUDIO_START_T start;
++		VC_AUDIO_STOP_T stop;
++		VC_AUDIO_WRITE_T write;
++		VC_AUDIO_RESULT_T result;
++		VC_AUDIO_COMPLETE_T complete;
++	} u;
++} VC_AUDIO_MSG_T;
++
++#endif // _VC_AUDIO_DEFS_H_
diff --git a/target/linux/brcm2708/patches-4.4/0037-bcm2708-vchiq-driver.patch b/target/linux/brcm2708/patches-4.4/0037-bcm2708-vchiq-driver.patch
new file mode 100644
index 0000000..b2c3ee0
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0037-bcm2708-vchiq-driver.patch
@@ -0,0 +1,13200 @@
+From 01551caa3e4d71d3a75703b063a871f18541f38d Mon Sep 17 00:00:00 2001
+From: popcornmix <popcornmix at gmail.com>
+Date: Tue, 2 Jul 2013 23:42:01 +0100
+Subject: [PATCH 037/127] bcm2708 vchiq driver
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+Signed-off-by: popcornmix <popcornmix at gmail.com>
+
+vchiq: create_pagelist copes with vmalloc memory
+
+Signed-off-by: Daniel Stone <daniels at collabora.com>
+
+vchiq: fix the shim message release
+
+Signed-off-by: Daniel Stone <daniels at collabora.com>
+
+vchiq: export additional symbols
+
+Signed-off-by: Daniel Stone <daniels at collabora.com>
+
+VCHIQ: Make service closure fully synchronous (drv)
+
+This is one half of a two-part patch, the other half of which is to
+the vchiq_lib user library. With these patches, calls to
+vchiq_close_service and vchiq_remove_service won't return until any
+associated callbacks have been delivered to the callback thread.
+
+VCHIQ: Add per-service tracing
+
+The new service option VCHIQ_SERVICE_OPTION_TRACE is a boolean that
+toggles tracing for the specified service.
+
+This commit also introduces vchi_service_set_option and the associated
+option VCHI_SERVICE_OPTION_TRACE.
+
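+A minimal usage sketch (the exact signature is an assumption, not taken from
+this patch): calling vchi_service_set_option(handle, VCHI_SERVICE_OPTION_TRACE, 1)
+on an open service handle would turn tracing on for that service, and passing
+0 would turn it off again.
+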
+vchiq: Make the synchronous-CLOSE logic more tolerant
+
+vchiq: Move logging control into debugfs
+
+vchiq: Take care of a corner case tickled by VCSM
+
+Closing a connection that isn't fully open requires care, since one
+side does not know the other side's port number. Code was present to
+handle the case where a CLOSE is sent immediately after an OPEN, i.e.
+before the OPENACK has been received, but this was incorrectly being
+used when an OPEN from a client using port 0 was rejected.
+
+(In the observed failure, the host was attempting to use the VCSM
+service, which isn't present in the 'cutdown' firmware. The failure
+was intermittent because sometimes the keepalive service would
+grab port 0.)
+
+This case can be distinguished because the client's remoteport will
+still be VCHIQ_PORT_FREE, and the srvstate will be OPENING. Either
+condition is sufficient to differentiate it from the special case
+described above.
+
+vchiq: Avoid high load when blocked and unkillable
+
+vchiq: Include SIGSTOP and SIGCONT in list of signals not-masked by vchiq to allow gdb to work
+
+vchiq_arm: Complete support for SYNCHRONOUS mode
+
+vchiq: Remove inline from suspend/resume
+
+vchiq: Allocation does not need to be atomic
+
+vchiq: Fix wrong condition check
+
+The log level is checked from within the log call. Remove the check in the call.
+
+Signed-off-by: Pranith Kumar <bobby.prani at gmail.com>
+
+BCM270x: Add vchiq device to platform file and Device Tree
+
+Prepare to turn the vchiq module into a driver.
+
+Signed-off-by: Noralf Trønnes <noralf at tronnes.org>
+
+bcm2708: vchiq: Add Device Tree support
+
+Turn vchiq into a driver and stop hardcoding resources.
+Use devm_* functions in probe path to simplify cleanup.
+A global variable is used to hold the register address. This is done
+to keep this patch as small as possible.
+Also make available on ARCH_BCM2835.
+Based on work by Lubomir Rintel.
+
+Signed-off-by: Noralf Trønnes <noralf at tronnes.org>
+
+vchiq: Change logging level for inbound data
+
+vchiq_arm: Two cacheing fixes
+
+1) Make fragment size vary with cache line size
+Without this patch, non-cache-line-aligned transfers may corrupt
+(or be corrupted by) adjacent data structures.
+
+Both ARM and VC need to be updated to enable this feature. This is
+ensured by having the loader apply a new DT parameter -
+cache-line-size. The existence of this parameter guarantees that the
+kernel is capable, and the parameter will only be modified from the
+safe default if the loader is capable.
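+As an illustration only (the node placement and value are assumptions, not
+part of this patch), the loader would set something like:
+
+        cache-line-size = <64>;
+
+on the vchiq node, while the driver keeps its safe default whenever the
+property is absent.
+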
+
+2) Flush/invalidate vmalloc'd memory, and invalidate after reads
+
+vchiq: fix NULL pointer dereference when closing driver
+
+The following code run as root will cause a null pointer dereference oops:
+
+        int fd = open("/dev/vc-cma", O_RDONLY);
+        if (fd < 0)
+                err(1, "open failed");
+        (void)close(fd);
+
+[ 1704.877721] Unable to handle kernel NULL pointer dereference at virtual address 00000000
+[ 1704.877725] pgd = b899c000
+[ 1704.877736] [00000000] *pgd=37fab831, *pte=00000000, *ppte=00000000
+[ 1704.877748] Internal error: Oops: 817 [#1] PREEMPT SMP ARM
+[ 1704.877765] Modules linked in: evdev i2c_bcm2708 uio_pdrv_genirq uio
+[ 1704.877774] CPU: 2 PID: 3656 Comm: stress-ng-fstat Not tainted 3.19.1-12-generic-bcm2709 #12-Ubuntu
+[ 1704.877777] Hardware name: BCM2709
+[ 1704.877783] task: b8ab9b00 ti: b7e68000 task.ti: b7e68000
+[ 1704.877798] PC is at __down_interruptible+0x50/0xec
+[ 1704.877806] LR is at down_interruptible+0x5c/0x68
+[ 1704.877813] pc : [<80630ee8>]    lr : [<800704b0>]    psr: 60080093
+sp : b7e69e50  ip : b7e69e88  fp : b7e69e84
+[ 1704.877817] r10: b88123c8  r9 : 00000010  r8 : 00000001
+[ 1704.877822] r7 : b8ab9b00  r6 : 7fffffff  r5 : 80a1cc34  r4 : 80a1cc34
+[ 1704.877826] r3 : b7e69e50  r2 : 00000000  r1 : 00000000  r0 : 80a1cc34
+[ 1704.877833] Flags: nZCv  IRQs off  FIQs on  Mode SVC_32  ISA ARM  Segment user
+[ 1704.877838] Control: 10c5387d  Table: 3899c06a  DAC: 00000015
+[ 1704.877843] Process do-oops (pid: 3656, stack limit = 0xb7e68238)
+[ 1704.877848] Stack: (0xb7e69e50 to 0xb7e6a000)
+[ 1704.877856] 9e40:                                     80a1cc3c 00000000 00000010 b88123c8
+[ 1704.877865] 9e60: b7e69e84 80a1cc34 fff9fee9 ffffffff b7e68000 00000009 b7e69ea4 b7e69e88
+[ 1704.877874] 9e80: 800704b0 80630ea4 fff9fee9 60080013 80a1cc28 fff9fee9 b7e69edc b7e69ea8
+[ 1704.877884] 9ea0: 8040f558 80070460 fff9fee9 ffffffff 00000000 00000000 00000009 80a1cb7c
+[ 1704.877893] 9ec0: 00000000 80a1cb7c 00000000 00000010 b7e69ef4 b7e69ee0 803e1ba4 8040f514
+[ 1704.877902] 9ee0: 00000e48 80a1cb7c b7e69f14 b7e69ef8 803e1c9c 803e1b74 b88123c0 b92acb18
+[ 1704.877911] 9f00: b8812790 b8d815d8 b7e69f24 b7e69f18 803e2250 803e1bc8 b7e69f5c b7e69f28
+[ 1704.877921] 9f20: 80167bac 803e222c 00000000 00000000 b7e69f54 b8ab9ffc 00000000 8098c794
+[ 1704.877930] 9f40: b8ab9b00 8000efc4 b7e68000 00000000 b7e69f6c b7e69f60 80167d6c 80167b28
+[ 1704.877939] 9f60: b7e69f8c b7e69f70 80047d38 80167d60 b7e68000 b7e68010 8000efc4 b7e69fb0
+[ 1704.877949] 9f80: b7e69fac b7e69f90 80012820 80047c84 01155490 011549a8 00000001 00000006
+[ 1704.877957] 9fa0: 00000000 b7e69fb0 8000ee5c 80012790 00000000 353d8c0f 7efc4308 00000000
+[ 1704.877966] 9fc0: 01155490 011549a8 00000001 00000006 00000000 00000000 76cf3ba0 00000003
+[ 1704.877975] 9fe0: 00000000 7efc42e4 0002272f 76e2ed66 60080030 00000003 00000000 00000000
+[ 1704.877998] [<80630ee8>] (__down_interruptible) from [<800704b0>] (down_interruptible+0x5c/0x68)
+[ 1704.878015] [<800704b0>] (down_interruptible) from [<8040f558>] (vchiu_queue_push+0x50/0xd8)
+[ 1704.878032] [<8040f558>] (vchiu_queue_push) from [<803e1ba4>] (send_worker_msg+0x3c/0x54)
+[ 1704.878045] [<803e1ba4>] (send_worker_msg) from [<803e1c9c>] (vc_cma_set_reserve+0xe0/0x1c4)
+[ 1704.878057] [<803e1c9c>] (vc_cma_set_reserve) from [<803e2250>] (vc_cma_release+0x30/0x38)
+[ 1704.878069] [<803e2250>] (vc_cma_release) from [<80167bac>] (__fput+0x90/0x1e0)
+[ 1704.878082] [<80167bac>] (__fput) from [<80167d6c>] (____fput+0x18/0x1c)
+[ 1704.878094] [<80167d6c>] (____fput) from [<80047d38>] (task_work_run+0xc0/0xf8)
+[ 1704.878109] [<80047d38>] (task_work_run) from [<80012820>] (do_work_pending+0x9c/0xc4)
+[ 1704.878123] [<80012820>] (do_work_pending) from [<8000ee5c>] (work_pending+0xc/0x20)
+[ 1704.878133] Code: e50b1034 e3a01000 e50b2030 e580300c (e5823000)
+
+...the fix is to ensure that we have actually initialized the queue before we attempt
+to push any items onto it.  This occurs if we do an open() followed by a close() without
+any activity in between.
+
+Signed-off-by: Colin Ian King <colin.king at canonical.com>
+
+vchiq_arm: Sort out the vmalloc case
+
+See: https://github.com/raspberrypi/linux/issues/1055
+
+vchiq: hack: Include deprecated dma include file
+---
+ arch/arm/mach-bcm2708/include/mach/platform.h      |    2 +
+ arch/arm/mach-bcm2709/include/mach/platform.h      |    2 +
+ drivers/misc/Kconfig                               |    1 +
+ drivers/misc/Makefile                              |    1 +
+ drivers/misc/vc04_services/Kconfig                 |    9 +
+ drivers/misc/vc04_services/Makefile                |   14 +
+ .../interface/vchi/connections/connection.h        |  328 ++
+ .../interface/vchi/message_drivers/message.h       |  204 +
+ drivers/misc/vc04_services/interface/vchi/vchi.h   |  378 ++
+ .../misc/vc04_services/interface/vchi/vchi_cfg.h   |  224 ++
+ .../interface/vchi/vchi_cfg_internal.h             |   71 +
+ .../vc04_services/interface/vchi/vchi_common.h     |  175 +
+ .../misc/vc04_services/interface/vchi/vchi_mh.h    |   42 +
+ .../misc/vc04_services/interface/vchiq_arm/vchiq.h |   40 +
+ .../vc04_services/interface/vchiq_arm/vchiq_2835.h |   42 +
+ .../interface/vchiq_arm/vchiq_2835_arm.c           |  586 +++
+ .../vc04_services/interface/vchiq_arm/vchiq_arm.c  | 2903 +++++++++++++++
+ .../vc04_services/interface/vchiq_arm/vchiq_arm.h  |  220 ++
+ .../interface/vchiq_arm/vchiq_build_info.h         |   37 +
+ .../vc04_services/interface/vchiq_arm/vchiq_cfg.h  |   69 +
+ .../interface/vchiq_arm/vchiq_connected.c          |  120 +
+ .../interface/vchiq_arm/vchiq_connected.h          |   50 +
+ .../vc04_services/interface/vchiq_arm/vchiq_core.c | 3934 ++++++++++++++++++++
+ .../vc04_services/interface/vchiq_arm/vchiq_core.h |  712 ++++
+ .../interface/vchiq_arm/vchiq_debugfs.c            |  383 ++
+ .../interface/vchiq_arm/vchiq_debugfs.h            |   52 +
+ .../interface/vchiq_arm/vchiq_genversion           |   87 +
+ .../vc04_services/interface/vchiq_arm/vchiq_if.h   |  189 +
+ .../interface/vchiq_arm/vchiq_ioctl.h              |  131 +
+ .../interface/vchiq_arm/vchiq_kern_lib.c           |  458 +++
+ .../interface/vchiq_arm/vchiq_killable.h           |   69 +
+ .../interface/vchiq_arm/vchiq_memdrv.h             |   71 +
+ .../interface/vchiq_arm/vchiq_pagelist.h           |   58 +
+ .../vc04_services/interface/vchiq_arm/vchiq_shim.c |  860 +++++
+ .../vc04_services/interface/vchiq_arm/vchiq_util.c |  156 +
+ .../vc04_services/interface/vchiq_arm/vchiq_util.h |   82 +
+ .../interface/vchiq_arm/vchiq_version.c            |   59 +
+ 37 files changed, 12819 insertions(+)
+ create mode 100644 drivers/misc/vc04_services/Kconfig
+ create mode 100644 drivers/misc/vc04_services/Makefile
+ create mode 100644 drivers/misc/vc04_services/interface/vchi/connections/connection.h
+ create mode 100644 drivers/misc/vc04_services/interface/vchi/message_drivers/message.h
+ create mode 100644 drivers/misc/vc04_services/interface/vchi/vchi.h
+ create mode 100644 drivers/misc/vc04_services/interface/vchi/vchi_cfg.h
+ create mode 100644 drivers/misc/vc04_services/interface/vchi/vchi_cfg_internal.h
+ create mode 100644 drivers/misc/vc04_services/interface/vchi/vchi_common.h
+ create mode 100644 drivers/misc/vc04_services/interface/vchi/vchi_mh.h
+ create mode 100644 drivers/misc/vc04_services/interface/vchiq_arm/vchiq.h
+ create mode 100644 drivers/misc/vc04_services/interface/vchiq_arm/vchiq_2835.h
+ create mode 100644 drivers/misc/vc04_services/interface/vchiq_arm/vchiq_2835_arm.c
+ create mode 100644 drivers/misc/vc04_services/interface/vchiq_arm/vchiq_arm.c
+ create mode 100644 drivers/misc/vc04_services/interface/vchiq_arm/vchiq_arm.h
+ create mode 100644 drivers/misc/vc04_services/interface/vchiq_arm/vchiq_build_info.h
+ create mode 100644 drivers/misc/vc04_services/interface/vchiq_arm/vchiq_cfg.h
+ create mode 100644 drivers/misc/vc04_services/interface/vchiq_arm/vchiq_connected.c
+ create mode 100644 drivers/misc/vc04_services/interface/vchiq_arm/vchiq_connected.h
+ create mode 100644 drivers/misc/vc04_services/interface/vchiq_arm/vchiq_core.c
+ create mode 100644 drivers/misc/vc04_services/interface/vchiq_arm/vchiq_core.h
+ create mode 100644 drivers/misc/vc04_services/interface/vchiq_arm/vchiq_debugfs.c
+ create mode 100644 drivers/misc/vc04_services/interface/vchiq_arm/vchiq_debugfs.h
+ create mode 100644 drivers/misc/vc04_services/interface/vchiq_arm/vchiq_genversion
+ create mode 100644 drivers/misc/vc04_services/interface/vchiq_arm/vchiq_if.h
+ create mode 100644 drivers/misc/vc04_services/interface/vchiq_arm/vchiq_ioctl.h
+ create mode 100644 drivers/misc/vc04_services/interface/vchiq_arm/vchiq_kern_lib.c
+ create mode 100644 drivers/misc/vc04_services/interface/vchiq_arm/vchiq_killable.h
+ create mode 100644 drivers/misc/vc04_services/interface/vchiq_arm/vchiq_memdrv.h
+ create mode 100644 drivers/misc/vc04_services/interface/vchiq_arm/vchiq_pagelist.h
+ create mode 100644 drivers/misc/vc04_services/interface/vchiq_arm/vchiq_shim.c
+ create mode 100644 drivers/misc/vc04_services/interface/vchiq_arm/vchiq_util.c
+ create mode 100644 drivers/misc/vc04_services/interface/vchiq_arm/vchiq_util.h
+ create mode 100644 drivers/misc/vc04_services/interface/vchiq_arm/vchiq_version.c
+
+--- a/arch/arm/mach-bcm2708/include/mach/platform.h
++++ b/arch/arm/mach-bcm2708/include/mach/platform.h
+@@ -78,6 +78,8 @@
+ #define ARMCTRL_IC_BASE          (ARM_BASE + 0x200)           /* ARM interrupt controller */
+ #define ARMCTRL_TIMER0_1_BASE    (ARM_BASE + 0x400)           /* Timer 0 and 1 */
+ #define ARMCTRL_0_SBM_BASE       (ARM_BASE + 0x800)           /* User 0 (ARM)'s Semaphores Doorbells and Mailboxes */
++#define ARMCTRL_0_BELL_BASE      (ARMCTRL_0_SBM_BASE + 0x40)  /* User 0 (ARM)'s Doorbell */
++#define ARMCTRL_0_MAIL0_BASE     (ARMCTRL_0_SBM_BASE + 0x80)  /* User 0 (ARM)'s Mailbox 0 */
+ 
+ /*
+  * Watchdog
+--- a/arch/arm/mach-bcm2709/include/mach/platform.h
++++ b/arch/arm/mach-bcm2709/include/mach/platform.h
+@@ -78,6 +78,8 @@
+ #define ARMCTRL_IC_BASE          (ARM_BASE + 0x200)           /* ARM interrupt controller */
+ #define ARMCTRL_TIMER0_1_BASE    (ARM_BASE + 0x400)           /* Timer 0 and 1 */
+ #define ARMCTRL_0_SBM_BASE       (ARM_BASE + 0x800)           /* User 0 (ARM)'s Semaphores Doorbells and Mailboxes */
++#define ARMCTRL_0_BELL_BASE      (ARMCTRL_0_SBM_BASE + 0x40)  /* User 0 (ARM)'s Doorbell */
++#define ARMCTRL_0_MAIL0_BASE     (ARMCTRL_0_SBM_BASE + 0x80)  /* User 0 (ARM)'s Mailbox 0 */
+ 
+ /*
+  * Watchdog
+--- a/drivers/misc/Kconfig
++++ b/drivers/misc/Kconfig
+@@ -533,6 +533,7 @@ source "drivers/misc/lis3lv02d/Kconfig"
+ source "drivers/misc/altera-stapl/Kconfig"
+ source "drivers/misc/mei/Kconfig"
+ source "drivers/misc/vmw_vmci/Kconfig"
++source "drivers/misc/vc04_services/Kconfig"
+ source "drivers/misc/mic/Kconfig"
+ source "drivers/misc/genwqe/Kconfig"
+ source "drivers/misc/echo/Kconfig"
+--- a/drivers/misc/Makefile
++++ b/drivers/misc/Makefile
+@@ -51,6 +51,7 @@ obj-$(CONFIG_INTEL_MEI)		+= mei/
+ obj-$(CONFIG_VMWARE_VMCI)	+= vmw_vmci/
+ obj-$(CONFIG_LATTICE_ECP3_CONFIG)	+= lattice-ecp3-config.o
+ obj-$(CONFIG_SRAM)		+= sram.o
++obj-$(CONFIG_BCM2708_VCHIQ)	+= vc04_services/
+ obj-y				+= mic/
+ obj-$(CONFIG_GENWQE)		+= genwqe/
+ obj-$(CONFIG_ECHO)		+= echo/
+--- /dev/null
++++ b/drivers/misc/vc04_services/Kconfig
+@@ -0,0 +1,9 @@
++config BCM2708_VCHIQ
++	tristate "Videocore VCHIQ"
++	depends on RASPBERRYPI_FIRMWARE
++	default y
++	help
++		Kernel to VideoCore communication interface for the
++		BCM2708 family of products.
++		Defaults to Y when the Broadcom Videocore services
++		are included in the build, N otherwise.
+--- /dev/null
++++ b/drivers/misc/vc04_services/Makefile
+@@ -0,0 +1,14 @@
++obj-$(CONFIG_BCM2708_VCHIQ)	+= vchiq.o
++
++vchiq-objs := \
++   interface/vchiq_arm/vchiq_core.o  \
++   interface/vchiq_arm/vchiq_arm.o \
++   interface/vchiq_arm/vchiq_kern_lib.o \
++   interface/vchiq_arm/vchiq_2835_arm.o \
++   interface/vchiq_arm/vchiq_debugfs.o \
++   interface/vchiq_arm/vchiq_shim.o \
++   interface/vchiq_arm/vchiq_util.o \
++   interface/vchiq_arm/vchiq_connected.o \
++
++ccflags-y += -DVCOS_VERIFY_BKPTS=1 -Idrivers/misc/vc04_services -DUSE_VCHIQ_ARM -D__VCCOREVER__=0x04000000
++
+--- /dev/null
++++ b/drivers/misc/vc04_services/interface/vchi/connections/connection.h
+@@ -0,0 +1,328 @@
++/**
++ * Copyright (c) 2010-2012 Broadcom. All rights reserved.
++ *
++ * Redistribution and use in source and binary forms, with or without
++ * modification, are permitted provided that the following conditions
++ * are met:
++ * 1. Redistributions of source code must retain the above copyright
++ *    notice, this list of conditions, and the following disclaimer,
++ *    without modification.
++ * 2. Redistributions in binary form must reproduce the above copyright
++ *    notice, this list of conditions and the following disclaimer in the
++ *    documentation and/or other materials provided with the distribution.
++ * 3. The names of the above-listed copyright holders may not be used
++ *    to endorse or promote products derived from this software without
++ *    specific prior written permission.
++ *
++ * ALTERNATIVELY, this software may be distributed under the terms of the
++ * GNU General Public License ("GPL") version 2, as published by the Free
++ * Software Foundation.
++ *
++ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
++ * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
++ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
++ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
++ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
++ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
++ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
++ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
++ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
++ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
++ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
++ */
++
++#ifndef CONNECTION_H_
++#define CONNECTION_H_
++
++#include <linux/kernel.h>
++#include <linux/types.h>
++#include <linux/semaphore.h>
++
++#include "interface/vchi/vchi_cfg_internal.h"
++#include "interface/vchi/vchi_common.h"
++#include "interface/vchi/message_drivers/message.h"
++
++/******************************************************************************
++ Global defs
++ *****************************************************************************/
++
++// Opaque handle for a connection / service pair
++typedef struct opaque_vchi_connection_connected_service_handle_t *VCHI_CONNECTION_SERVICE_HANDLE_T;
++
++// opaque handle to the connection state information
++typedef struct opaque_vchi_connection_info_t VCHI_CONNECTION_STATE_T;
++
++typedef struct vchi_connection_t VCHI_CONNECTION_T;
++
++
++/******************************************************************************
++ API
++ *****************************************************************************/
++
++// Routine to init a connection with a particular low level driver
++typedef VCHI_CONNECTION_STATE_T * (*VCHI_CONNECTION_INIT_T)( struct vchi_connection_t * connection,
++                                                             const VCHI_MESSAGE_DRIVER_T * driver );
++
++// Routine to control CRC enabling at a connection level
++typedef int32_t (*VCHI_CONNECTION_CRC_CONTROL_T)( VCHI_CONNECTION_STATE_T *state_handle,
++                                                  VCHI_CRC_CONTROL_T control );
++
++// Routine to create a service
++typedef int32_t (*VCHI_CONNECTION_SERVICE_CONNECT_T)( VCHI_CONNECTION_STATE_T *state_handle,
++                                                      int32_t service_id,
++                                                      uint32_t rx_fifo_size,
++                                                      uint32_t tx_fifo_size,
++                                                      int server,
++                                                      VCHI_CALLBACK_T callback,
++                                                      void *callback_param,
++                                                      int32_t want_crc,
++                                                      int32_t want_unaligned_bulk_rx,
++                                                      int32_t want_unaligned_bulk_tx,
++                                                      VCHI_CONNECTION_SERVICE_HANDLE_T *service_handle );
++
++// Routine to close a service
++typedef int32_t (*VCHI_CONNECTION_SERVICE_DISCONNECT_T)( VCHI_CONNECTION_SERVICE_HANDLE_T service_handle );
++
++// Routine to queue a message
++typedef int32_t (*VCHI_CONNECTION_SERVICE_QUEUE_MESSAGE_T)( VCHI_CONNECTION_SERVICE_HANDLE_T service_handle,
++                                                            const void *data,
++                                                            uint32_t data_size,
++                                                            VCHI_FLAGS_T flags,
++                                                            void *msg_handle );
++
++// scatter-gather (vector) message queueing
++typedef int32_t (*VCHI_CONNECTION_SERVICE_QUEUE_MESSAGEV_T)( VCHI_CONNECTION_SERVICE_HANDLE_T service_handle,
++                                                             VCHI_MSG_VECTOR_T *vector,
++                                                             uint32_t count,
++                                                             VCHI_FLAGS_T flags,
++                                                             void *msg_handle );
++
++// Routine to dequeue a message
++typedef int32_t (*VCHI_CONNECTION_SERVICE_DEQUEUE_MESSAGE_T)( VCHI_CONNECTION_SERVICE_HANDLE_T service_handle,
++                                                              void *data,
++                                                              uint32_t max_data_size_to_read,
++                                                              uint32_t *actual_msg_size,
++                                                              VCHI_FLAGS_T flags );
++
++// Routine to peek at a message
++typedef int32_t (*VCHI_CONNECTION_SERVICE_PEEK_MESSAGE_T)( VCHI_CONNECTION_SERVICE_HANDLE_T service_handle,
++                                                           void **data,
++                                                           uint32_t *msg_size,
++                                                           VCHI_FLAGS_T flags );
++
++// Routine to hold a message
++typedef int32_t (*VCHI_CONNECTION_SERVICE_HOLD_MESSAGE_T)( VCHI_CONNECTION_SERVICE_HANDLE_T service_handle,
++                                                           void **data,
++                                                           uint32_t *msg_size,
++                                                           VCHI_FLAGS_T flags,
++                                                           void **message_handle );
++
++// Routine to initialise a received message iterator
++typedef int32_t (*VCHI_CONNECTION_SERVICE_LOOKAHEAD_MESSAGE_T)( VCHI_CONNECTION_SERVICE_HANDLE_T service_handle,
++                                                                VCHI_MSG_ITER_T *iter,
++                                                                VCHI_FLAGS_T flags );
++
++// Routine to release a held message
++typedef int32_t (*VCHI_CONNECTION_HELD_MSG_RELEASE_T)( VCHI_CONNECTION_SERVICE_HANDLE_T service_handle,
++                                                       void *message_handle );
++
++// Routine to get info on a held message
++typedef int32_t (*VCHI_CONNECTION_HELD_MSG_INFO_T)( VCHI_CONNECTION_SERVICE_HANDLE_T service_handle,
++                                                    void *message_handle,
++                                                    void **data,
++                                                    int32_t *msg_size,
++                                                    uint32_t *tx_timestamp,
++                                                    uint32_t *rx_timestamp );
++
++// Routine to check whether the iterator has a next message
++typedef int32_t (*VCHI_CONNECTION_MSG_ITER_HAS_NEXT_T)( VCHI_CONNECTION_SERVICE_HANDLE_T service,
++                                                       const VCHI_MSG_ITER_T *iter );
++
++// Routine to advance the iterator
++typedef int32_t (*VCHI_CONNECTION_MSG_ITER_NEXT_T)( VCHI_CONNECTION_SERVICE_HANDLE_T service,
++                                                    VCHI_MSG_ITER_T *iter,
++                                                    void **data,
++                                                    uint32_t *msg_size );
++
++// Routine to remove the last message returned by the iterator
++typedef int32_t (*VCHI_CONNECTION_MSG_ITER_REMOVE_T)( VCHI_CONNECTION_SERVICE_HANDLE_T service,
++                                                      VCHI_MSG_ITER_T *iter );
++
++// Routine to hold the last message returned by the iterator
++typedef int32_t (*VCHI_CONNECTION_MSG_ITER_HOLD_T)( VCHI_CONNECTION_SERVICE_HANDLE_T service,
++                                                    VCHI_MSG_ITER_T *iter,
++                                                    void **msg_handle );
++
++// Routine to transmit bulk data
++typedef int32_t (*VCHI_CONNECTION_BULK_QUEUE_TRANSMIT_T)( VCHI_CONNECTION_SERVICE_HANDLE_T service_handle,
++                                                          const void *data_src,
++                                                          uint32_t data_size,
++                                                          VCHI_FLAGS_T flags,
++                                                          void *bulk_handle );
++
++// Routine to receive data
++typedef int32_t (*VCHI_CONNECTION_BULK_QUEUE_RECEIVE_T)( VCHI_CONNECTION_SERVICE_HANDLE_T service_handle,
++                                                         void *data_dst,
++                                                         uint32_t data_size,
++                                                         VCHI_FLAGS_T flags,
++                                                         void *bulk_handle );
++
++// Routine to report if a server is available
++typedef int32_t (*VCHI_CONNECTION_SERVER_PRESENT)( VCHI_CONNECTION_STATE_T *state, int32_t service_id, int32_t peer_flags );
++
++// Routine to report the number of RX slots available
++typedef int (*VCHI_CONNECTION_RX_SLOTS_AVAILABLE)( const VCHI_CONNECTION_STATE_T *state );
++
++// Routine to report the RX slot size
++typedef uint32_t (*VCHI_CONNECTION_RX_SLOT_SIZE)( const VCHI_CONNECTION_STATE_T *state );
++
++// Callback to indicate that the other side has added a buffer to the rx bulk DMA FIFO
++typedef void (*VCHI_CONNECTION_RX_BULK_BUFFER_ADDED)(VCHI_CONNECTION_STATE_T *state,
++                                                     int32_t service,
++                                                     uint32_t length,
++                                                     MESSAGE_TX_CHANNEL_T channel,
++                                                     uint32_t channel_params,
++                                                     uint32_t data_length,
++                                                     uint32_t data_offset);
++
++// Callback to inform a service that an Xon or Xoff message has been received
++typedef void (*VCHI_CONNECTION_FLOW_CONTROL)(VCHI_CONNECTION_STATE_T *state, int32_t service_id, int32_t xoff);
++
++// Callback to inform a service that a server available reply message has been received
++typedef void (*VCHI_CONNECTION_SERVER_AVAILABLE_REPLY)(VCHI_CONNECTION_STATE_T *state, int32_t service_id, uint32_t flags);
++
++// Callback to indicate that bulk auxiliary messages have arrived
++typedef void (*VCHI_CONNECTION_BULK_AUX_RECEIVED)(VCHI_CONNECTION_STATE_T *state);
++
++// Callback to indicate that a bulk auxiliary message has been transmitted
++typedef void (*VCHI_CONNECTION_BULK_AUX_TRANSMITTED)(VCHI_CONNECTION_STATE_T *state, void *handle);
++
++// Callback with all the connection info you require
++typedef void (*VCHI_CONNECTION_INFO)(VCHI_CONNECTION_STATE_T *state, uint32_t protocol_version, uint32_t slot_size, uint32_t num_slots, uint32_t min_bulk_size);
++
++// Callback to inform of a disconnect
++typedef void (*VCHI_CONNECTION_DISCONNECT)(VCHI_CONNECTION_STATE_T *state, uint32_t flags);
++
++// Callback to inform of a power control request
++typedef void (*VCHI_CONNECTION_POWER_CONTROL)(VCHI_CONNECTION_STATE_T *state, MESSAGE_TX_CHANNEL_T channel, int32_t enable);
++
++// allocate memory suitably aligned for this connection
++typedef void * (*VCHI_BUFFER_ALLOCATE)(VCHI_CONNECTION_SERVICE_HANDLE_T service_handle, uint32_t * length);
++
++// free memory allocated by buffer_allocate
++typedef void   (*VCHI_BUFFER_FREE)(VCHI_CONNECTION_SERVICE_HANDLE_T service_handle, void * address);
++
++
++/******************************************************************************
++ System driver struct
++ *****************************************************************************/
++
++struct opaque_vchi_connection_api_t
++{
++   // Routine to init the connection
++   VCHI_CONNECTION_INIT_T                      init;
++
++   // Connection-level CRC control
++   VCHI_CONNECTION_CRC_CONTROL_T               crc_control;
++
++   // Routine to connect to or create service
++   VCHI_CONNECTION_SERVICE_CONNECT_T           service_connect;
++
++   // Routine to disconnect from a service
++   VCHI_CONNECTION_SERVICE_DISCONNECT_T        service_disconnect;
++
++   // Routine to queue a message
++   VCHI_CONNECTION_SERVICE_QUEUE_MESSAGE_T     service_queue_msg;
++
++   // scatter-gather (vector) message queue
++   VCHI_CONNECTION_SERVICE_QUEUE_MESSAGEV_T    service_queue_msgv;
++
++   // Routine to dequeue a message
++   VCHI_CONNECTION_SERVICE_DEQUEUE_MESSAGE_T   service_dequeue_msg;
++
++   // Routine to peek at a message
++   VCHI_CONNECTION_SERVICE_PEEK_MESSAGE_T      service_peek_msg;
++
++   // Routine to hold a message
++   VCHI_CONNECTION_SERVICE_HOLD_MESSAGE_T      service_hold_msg;
++
++   // Routine to initialise a received message iterator
++   VCHI_CONNECTION_SERVICE_LOOKAHEAD_MESSAGE_T service_look_ahead_msg;
++
++   // Routine to release a message
++   VCHI_CONNECTION_HELD_MSG_RELEASE_T          held_msg_release;
++
++   // Routine to get information on a held message
++   VCHI_CONNECTION_HELD_MSG_INFO_T             held_msg_info;
++
++   // Routine to check for next message on iterator
++   VCHI_CONNECTION_MSG_ITER_HAS_NEXT_T         msg_iter_has_next;
++
++   // Routine to get next message on iterator
++   VCHI_CONNECTION_MSG_ITER_NEXT_T             msg_iter_next;
++
++   // Routine to remove the last message returned by iterator
++   VCHI_CONNECTION_MSG_ITER_REMOVE_T           msg_iter_remove;
++
++   // Routine to hold the last message returned by iterator
++   VCHI_CONNECTION_MSG_ITER_HOLD_T             msg_iter_hold;
++
++   // Routine to transmit bulk data
++   VCHI_CONNECTION_BULK_QUEUE_TRANSMIT_T       bulk_queue_transmit;
++
++   // Routine to receive data
++   VCHI_CONNECTION_BULK_QUEUE_RECEIVE_T        bulk_queue_receive;
++
++   // Routine to report the available servers
++   VCHI_CONNECTION_SERVER_PRESENT              server_present;
++
++   // Routine to report the number of RX slots available
++   VCHI_CONNECTION_RX_SLOTS_AVAILABLE          connection_rx_slots_available;
++
++   // Routine to report the RX slot size
++   VCHI_CONNECTION_RX_SLOT_SIZE                connection_rx_slot_size;
++
++   // Callback to indicate that the other side has added a buffer to the rx bulk DMA FIFO
++   VCHI_CONNECTION_RX_BULK_BUFFER_ADDED        rx_bulk_buffer_added;
++
++   // Callback to inform a service that an Xon or Xoff message has been received
++   VCHI_CONNECTION_FLOW_CONTROL                flow_control;
++
++   // Callback to inform a service that a server available reply message has been received
++   VCHI_CONNECTION_SERVER_AVAILABLE_REPLY      server_available_reply;
++
++   // Callback to indicate that bulk auxiliary messages have arrived
++   VCHI_CONNECTION_BULK_AUX_RECEIVED           bulk_aux_received;
++
++   // Callback to indicate that a bulk auxiliary message has been transmitted
++   VCHI_CONNECTION_BULK_AUX_TRANSMITTED        bulk_aux_transmitted;
++
++   // Callback to provide information about the connection
++   VCHI_CONNECTION_INFO                        connection_info;
++
++   // Callback to notify that peer has requested disconnect
++   VCHI_CONNECTION_DISCONNECT                  disconnect;
++
++   // Callback to notify that peer has requested power change
++   VCHI_CONNECTION_POWER_CONTROL               power_control;
++
++   // allocate memory suitably aligned for this connection
++   VCHI_BUFFER_ALLOCATE                        buffer_allocate;
++
++   // free memory allocated by buffer_allocate
++   VCHI_BUFFER_FREE                            buffer_free;
++
++};
++
++struct vchi_connection_t {
++   const VCHI_CONNECTION_API_T *api;
++   VCHI_CONNECTION_STATE_T     *state;
++#ifdef VCHI_COARSE_LOCKING
++   struct semaphore             sem;
++#endif
++};
++
++
++#endif /* CONNECTION_H_ */
++
++/****************************** End of file **********************************/
+--- /dev/null
++++ b/drivers/misc/vc04_services/interface/vchi/message_drivers/message.h
+@@ -0,0 +1,204 @@
++/**
++ * Copyright (c) 2010-2012 Broadcom. All rights reserved.
++ *
++ * Redistribution and use in source and binary forms, with or without
++ * modification, are permitted provided that the following conditions
++ * are met:
++ * 1. Redistributions of source code must retain the above copyright
++ *    notice, this list of conditions, and the following disclaimer,
++ *    without modification.
++ * 2. Redistributions in binary form must reproduce the above copyright
++ *    notice, this list of conditions and the following disclaimer in the
++ *    documentation and/or other materials provided with the distribution.
++ * 3. The names of the above-listed copyright holders may not be used
++ *    to endorse or promote products derived from this software without
++ *    specific prior written permission.
++ *
++ * ALTERNATIVELY, this software may be distributed under the terms of the
++ * GNU General Public License ("GPL") version 2, as published by the Free
++ * Software Foundation.
++ *
++ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
++ * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
++ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
++ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
++ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
++ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
++ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
++ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
++ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
++ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
++ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
++ */
++
++#ifndef _VCHI_MESSAGE_H_
++#define _VCHI_MESSAGE_H_
++
++#include <linux/kernel.h>
++#include <linux/types.h>
++#include <linux/semaphore.h>
++
++#include "interface/vchi/vchi_cfg_internal.h"
++#include "interface/vchi/vchi_common.h"
++
++
++typedef enum message_event_type {
++   MESSAGE_EVENT_NONE,
++   MESSAGE_EVENT_NOP,
++   MESSAGE_EVENT_MESSAGE,
++   MESSAGE_EVENT_SLOT_COMPLETE,
++   MESSAGE_EVENT_RX_BULK_PAUSED,
++   MESSAGE_EVENT_RX_BULK_COMPLETE,
++   MESSAGE_EVENT_TX_COMPLETE,
++   MESSAGE_EVENT_MSG_DISCARDED
++} MESSAGE_EVENT_TYPE_T;
++
++typedef enum vchi_msg_flags
++{
++   VCHI_MSG_FLAGS_NONE                  = 0x0,
++   VCHI_MSG_FLAGS_TERMINATE_DMA         = 0x1
++} VCHI_MSG_FLAGS_T;
++
++typedef enum message_tx_channel
++{
++   MESSAGE_TX_CHANNEL_MESSAGE           = 0,
++   MESSAGE_TX_CHANNEL_BULK              = 1 // drivers may provide multiple bulk channels, from 1 upwards
++} MESSAGE_TX_CHANNEL_T;
++
++// Macros used for cycling through bulk channels
++#define MESSAGE_TX_CHANNEL_BULK_PREV(c) (MESSAGE_TX_CHANNEL_BULK+((c)-MESSAGE_TX_CHANNEL_BULK+VCHI_MAX_BULK_TX_CHANNELS_PER_CONNECTION-1)%VCHI_MAX_BULK_TX_CHANNELS_PER_CONNECTION)
++#define MESSAGE_TX_CHANNEL_BULK_NEXT(c) (MESSAGE_TX_CHANNEL_BULK+((c)-MESSAGE_TX_CHANNEL_BULK+1)%VCHI_MAX_BULK_TX_CHANNELS_PER_CONNECTION)
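++// Illustrative note (assumes the default VCHI_MAX_BULK_TX_CHANNELS_PER_CONNECTION
++// of 9 from vchi_cfg.h): with bulk channels MESSAGE_TX_CHANNEL_BULK..MESSAGE_TX_CHANNEL_BULK+8,
++//    MESSAGE_TX_CHANNEL_BULK_NEXT(MESSAGE_TX_CHANNEL_BULK + 8) == MESSAGE_TX_CHANNEL_BULK
++//    MESSAGE_TX_CHANNEL_BULK_PREV(MESSAGE_TX_CHANNEL_BULK)     == MESSAGE_TX_CHANNEL_BULK + 8
++// i.e. the macros walk the bulk channels with wrap-around, never touching the
++// message channel (channel 0).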
++
++typedef enum message_rx_channel
++{
++   MESSAGE_RX_CHANNEL_MESSAGE           = 0,
++   MESSAGE_RX_CHANNEL_BULK              = 1 // drivers may provide multiple bulk channels, from 1 upwards
++} MESSAGE_RX_CHANNEL_T;
++
++// Message receive slot information
++typedef struct rx_msg_slot_info {
++
++   struct rx_msg_slot_info *next;
++   //struct slot_info *prev;
++#if !defined VCHI_COARSE_LOCKING
++   struct semaphore   sem;
++#endif
++
++   uint8_t           *addr;               // base address of slot
++   uint32_t           len;                // length of slot in bytes
++
++   uint32_t           write_ptr;          // hardware causes this to advance
++   uint32_t           read_ptr;           // this module does the reading
++   int                active;             // is this slot in the hardware dma fifo?
++   uint32_t           msgs_parsed;        // count how many messages are in this slot
++   uint32_t           msgs_released;      // how many messages have been released
++   void              *state;              // connection state information
++   uint8_t            ref_count[VCHI_MAX_SERVICES_PER_CONNECTION];          // reference count for slots held by services
++} RX_MSG_SLOTINFO_T;
++
++// The message driver no longer needs to know about the fields of RX_BULK_SLOTINFO_T - sort this out.
++// In particular, it mustn't use addr and len - they're the client buffer, but the message
++// driver will be tasked with sending the aligned core section.
++typedef struct rx_bulk_slotinfo_t {
++   struct rx_bulk_slotinfo_t *next;
++
++   struct semaphore *blocking;
++
++   // needed by DMA
++   void        *addr;
++   uint32_t     len;
++
++   // needed for the callback
++   void        *service;
++   void        *handle;
++   VCHI_FLAGS_T flags;
++} RX_BULK_SLOTINFO_T;
++
++
++/* ----------------------------------------------------------------------
++ * each connection driver will have a pool of the following struct.
++ *
++ * the pool will be managed by vchi_qman_*
++ * this means there will be multiple queues (single linked lists)
++ * a given struct message_info will be on exactly one of these queues
++ * at any one time
++ * -------------------------------------------------------------------- */
++typedef struct rx_message_info {
++
++   struct message_info *next;
++   //struct message_info *prev;
++
++   uint8_t    *addr;
++   uint32_t   len;
++   RX_MSG_SLOTINFO_T *slot; // points to whichever slot contains this message
++   uint32_t   tx_timestamp;
++   uint32_t   rx_timestamp;
++
++} RX_MESSAGE_INFO_T;
++
++typedef struct {
++   MESSAGE_EVENT_TYPE_T type;
++
++   struct {
++      // for messages
++      void    *addr;           // address of message
++      uint16_t slot_delta;     // whether this message indicated slot delta
++      uint32_t len;            // length of message
++      RX_MSG_SLOTINFO_T *slot; // slot this message is in
++      int32_t  service;   // service id this message is destined for
++      uint32_t tx_timestamp;   // timestamp from the header
++      uint32_t rx_timestamp;   // timestamp when we parsed it
++   } message;
++
++   // FIXME: cleanup slot reporting...
++   RX_MSG_SLOTINFO_T *rx_msg;
++   RX_BULK_SLOTINFO_T *rx_bulk;
++   void *tx_handle;
++   MESSAGE_TX_CHANNEL_T tx_channel;
++
++} MESSAGE_EVENT_T;
++
++
++// callbacks
++typedef void VCHI_MESSAGE_DRIVER_EVENT_CALLBACK_T( void *state );
++
++typedef struct {
++   VCHI_MESSAGE_DRIVER_EVENT_CALLBACK_T *event_callback;
++} VCHI_MESSAGE_DRIVER_OPEN_T;
++
++
++// handle to this instance of message driver (as returned by ->open)
++typedef struct opaque_mhandle_t *VCHI_MDRIVER_HANDLE_T;
++
++struct opaque_vchi_message_driver_t {
++   VCHI_MDRIVER_HANDLE_T *(*open)( VCHI_MESSAGE_DRIVER_OPEN_T *params, void *state );
++   int32_t (*suspending)( VCHI_MDRIVER_HANDLE_T *handle );
++   int32_t (*resumed)( VCHI_MDRIVER_HANDLE_T *handle );
++   int32_t (*power_control)( VCHI_MDRIVER_HANDLE_T *handle, MESSAGE_TX_CHANNEL_T, int32_t enable );
++   int32_t (*add_msg_rx_slot)( VCHI_MDRIVER_HANDLE_T *handle, RX_MSG_SLOTINFO_T *slot );      // rx message
++   int32_t (*add_bulk_rx)( VCHI_MDRIVER_HANDLE_T *handle, void *data, uint32_t len, RX_BULK_SLOTINFO_T *slot );  // rx data (bulk)
++   int32_t (*send)( VCHI_MDRIVER_HANDLE_T *handle, MESSAGE_TX_CHANNEL_T channel, const void *data, uint32_t len, VCHI_MSG_FLAGS_T flags, void *send_handle );      // tx (message & bulk)
++   void    (*next_event)( VCHI_MDRIVER_HANDLE_T *handle, MESSAGE_EVENT_T *event );     // get the next event from message_driver
++   int32_t (*enable)( VCHI_MDRIVER_HANDLE_T *handle );
++   int32_t (*form_message)( VCHI_MDRIVER_HANDLE_T *handle, int32_t service_id, VCHI_MSG_VECTOR_T *vector, uint32_t count, void
++                            *address, uint32_t length_avail, uint32_t max_total_length, int32_t pad_to_fill, int32_t allow_partial );
++
++   int32_t (*update_message)( VCHI_MDRIVER_HANDLE_T *handle, void *dest, int16_t *slot_count );
++   int32_t (*buffer_aligned)( VCHI_MDRIVER_HANDLE_T *handle, int tx, int uncached, const void *address, const uint32_t length );
++   void *  (*allocate_buffer)( VCHI_MDRIVER_HANDLE_T *handle, uint32_t *length );
++   void    (*free_buffer)( VCHI_MDRIVER_HANDLE_T *handle, void *address );
++   int     (*rx_slot_size)( VCHI_MDRIVER_HANDLE_T *handle, int msg_size );
++   int     (*tx_slot_size)( VCHI_MDRIVER_HANDLE_T *handle, int msg_size );
++
++   int32_t  (*tx_supports_terminate)( const VCHI_MDRIVER_HANDLE_T *handle, MESSAGE_TX_CHANNEL_T channel );
++   uint32_t (*tx_bulk_chunk_size)( const VCHI_MDRIVER_HANDLE_T *handle, MESSAGE_TX_CHANNEL_T channel );
++   int     (*tx_alignment)( const VCHI_MDRIVER_HANDLE_T *handle, MESSAGE_TX_CHANNEL_T channel );
++   int     (*rx_alignment)( const VCHI_MDRIVER_HANDLE_T *handle, MESSAGE_RX_CHANNEL_T channel );
++   void    (*form_bulk_aux)( VCHI_MDRIVER_HANDLE_T *handle, MESSAGE_TX_CHANNEL_T channel, const void *data, uint32_t len, uint32_t chunk_size, const void **aux_data, int32_t *aux_len );
++   void    (*debug)( VCHI_MDRIVER_HANDLE_T *handle );
++};
++
++
++#endif // _VCHI_MESSAGE_H_
++
++/****************************** End of file ***********************************/
+--- /dev/null
++++ b/drivers/misc/vc04_services/interface/vchi/vchi.h
+@@ -0,0 +1,378 @@
++/**
++ * Copyright (c) 2010-2012 Broadcom. All rights reserved.
++ *
++ * Redistribution and use in source and binary forms, with or without
++ * modification, are permitted provided that the following conditions
++ * are met:
++ * 1. Redistributions of source code must retain the above copyright
++ *    notice, this list of conditions, and the following disclaimer,
++ *    without modification.
++ * 2. Redistributions in binary form must reproduce the above copyright
++ *    notice, this list of conditions and the following disclaimer in the
++ *    documentation and/or other materials provided with the distribution.
++ * 3. The names of the above-listed copyright holders may not be used
++ *    to endorse or promote products derived from this software without
++ *    specific prior written permission.
++ *
++ * ALTERNATIVELY, this software may be distributed under the terms of the
++ * GNU General Public License ("GPL") version 2, as published by the Free
++ * Software Foundation.
++ *
++ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
++ * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
++ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
++ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
++ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
++ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
++ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
++ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
++ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
++ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
++ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
++ */
++
++#ifndef VCHI_H_
++#define VCHI_H_
++
++#include "interface/vchi/vchi_cfg.h"
++#include "interface/vchi/vchi_common.h"
++#include "interface/vchi/connections/connection.h"
++#include "vchi_mh.h"
++
++
++/******************************************************************************
++ Global defs
++ *****************************************************************************/
++
++#define VCHI_BULK_ROUND_UP(x)     ((((unsigned long)(x))+VCHI_BULK_ALIGN-1) & ~(VCHI_BULK_ALIGN-1))
++#define VCHI_BULK_ROUND_DOWN(x)   (((unsigned long)(x)) & ~(VCHI_BULK_ALIGN-1))
++#define VCHI_BULK_ALIGN_NBYTES(x) (VCHI_BULK_ALIGNED(x) ? 0 : (VCHI_BULK_ALIGN - ((unsigned long)(x) & (VCHI_BULK_ALIGN-1))))
++
++#ifdef USE_VCHIQ_ARM
++#define VCHI_BULK_ALIGNED(x)      1
++#else
++#define VCHI_BULK_ALIGNED(x)      (((unsigned long)(x) & (VCHI_BULK_ALIGN-1)) == 0)
++#endif
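++
++/* Illustrative example (assuming VCHI_BULK_ALIGN is 16 and USE_VCHIQ_ARM is not
++ * defined):
++ *    VCHI_BULK_ROUND_UP(0x1001)     == 0x1010
++ *    VCHI_BULK_ROUND_DOWN(0x1001)   == 0x1000
++ *    VCHI_BULK_ALIGN_NBYTES(0x1001) == 15   // bytes needed to reach alignment
++ * With USE_VCHIQ_ARM defined, VCHI_BULK_ALIGNED() is always true and
++ * VCHI_BULK_ALIGN_NBYTES() is therefore always 0.
++ */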
++
++struct vchi_version {
++	uint32_t version;
++	uint32_t version_min;
++};
++#define VCHI_VERSION(v_) { v_, v_ }
++#define VCHI_VERSION_EX(v_, m_) { v_, m_ }
++
++typedef enum
++{
++   VCHI_VEC_POINTER,
++   VCHI_VEC_HANDLE,
++   VCHI_VEC_LIST
++} VCHI_MSG_VECTOR_TYPE_T;
++
++typedef struct vchi_msg_vector_ex {
++
++   VCHI_MSG_VECTOR_TYPE_T type;
++   union
++   {
++      // a memory handle
++      struct
++      {
++         VCHI_MEM_HANDLE_T handle;
++         uint32_t offset;
++         int32_t vec_len;
++      } handle;
++
++      // an ordinary data pointer
++      struct
++      {
++         const void *vec_base;
++         int32_t vec_len;
++      } ptr;
++
++      // a nested vector list
++      struct
++      {
++         struct vchi_msg_vector_ex *vec;
++         uint32_t vec_len;
++      } list;
++   } u;
++} VCHI_MSG_VECTOR_EX_T;
++
++
++// Construct an entry in a msg vector for a pointer (p) of length (l)
++#define VCHI_VEC_POINTER(p,l)  VCHI_VEC_POINTER, { { (VCHI_MEM_HANDLE_T)(p), (l) } }
++
++// Construct an entry in a msg vector for a message handle (h), starting at offset (o) of length (l)
++#define VCHI_VEC_HANDLE(h,o,l) VCHI_VEC_HANDLE,  { { (h), (o), (l) } }
++
++// Macros to manipulate 'FOURCC' values
++#define MAKE_FOURCC(x) ((int32_t)( (x[0] << 24) | (x[1] << 16) | (x[2] << 8) | x[3] ))
++#define FOURCC_TO_CHAR(x) (x >> 24) & 0xFF,(x >> 16) & 0xFF,(x >> 8) & 0xFF, x & 0xFF
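++
++// Illustrative example (the "DEMO" name is hypothetical):
++//    int32_t fourcc = MAKE_FOURCC("DEMO");
++//    printk("service %c%c%c%c\n", FOURCC_TO_CHAR(fourcc));
++// MAKE_FOURCC packs four characters into an int32_t with the first character in
++// the most significant byte; FOURCC_TO_CHAR expands to the four comma-separated
++// bytes for a printf-style argument list.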
++
++
++// Opaque service information
++struct opaque_vchi_service_t;
++
++// Descriptor for a held message. Allocated by client, initialised by vchi_msg_hold,
++// vchi_msg_iter_hold or vchi_msg_iter_hold_next. Fields are for internal VCHI use only.
++typedef struct
++{
++   struct opaque_vchi_service_t *service;
++   void *message;
++} VCHI_HELD_MSG_T;
++
++
++
++// structure used to provide the information needed to open a server or a client
++typedef struct {
++	struct vchi_version version;
++	int32_t service_id;
++	VCHI_CONNECTION_T *connection;
++	uint32_t rx_fifo_size;
++	uint32_t tx_fifo_size;
++	VCHI_CALLBACK_T callback;
++	void *callback_param;
++	/* client intends to receive bulk transfers of
++		odd lengths or into unaligned buffers */
++	int32_t want_unaligned_bulk_rx;
++	/* client intends to transmit bulk transfers of
++		odd lengths or out of unaligned buffers */
++	int32_t want_unaligned_bulk_tx;
++	/* client wants to check CRCs on (bulk) xfers.
++		Only needs to be set at 1 end - will do both directions. */
++	int32_t want_crc;
++} SERVICE_CREATION_T;
++
++// Opaque handle for a VCHI instance
++typedef struct opaque_vchi_instance_handle_t *VCHI_INSTANCE_T;
++
++// Opaque handle for a server or client
++typedef struct opaque_vchi_service_handle_t *VCHI_SERVICE_HANDLE_T;
++
++// Service registration & startup
++typedef void (*VCHI_SERVICE_INIT)(VCHI_INSTANCE_T initialise_instance, VCHI_CONNECTION_T **connections, uint32_t num_connections);
++
++typedef struct service_info_tag {
++   const char * const vll_filename; /* VLL to load to start this service. This is an empty string if VLL is "static" */
++   VCHI_SERVICE_INIT init;          /* Service initialisation function */
++   void *vll_handle;                /* VLL handle; NULL when unloaded or a "static VLL" in build */
++} SERVICE_INFO_T;
++
++/******************************************************************************
++ Global funcs - implementation is specific to which side you are on (local / remote)
++ *****************************************************************************/
++
++#ifdef __cplusplus
++extern "C" {
++#endif
++
++extern /*@observer@*/ VCHI_CONNECTION_T * vchi_create_connection( const VCHI_CONNECTION_API_T * function_table,
++                                                   const VCHI_MESSAGE_DRIVER_T * low_level);
++
++
++// Routine used to initialise the vchi on both local + remote connections
++extern int32_t vchi_initialise( VCHI_INSTANCE_T *instance_handle );
++
++extern int32_t vchi_exit( void );
++
++extern int32_t vchi_connect( VCHI_CONNECTION_T **connections,
++                             const uint32_t num_connections,
++                             VCHI_INSTANCE_T instance_handle );
++
++//When this is called, ensure that all services have no data pending.
++//Bulk transfers can remain 'queued'
++extern int32_t vchi_disconnect( VCHI_INSTANCE_T instance_handle );
++
++// Global control over bulk CRC checking
++extern int32_t vchi_crc_control( VCHI_CONNECTION_T *connection,
++                                 VCHI_CRC_CONTROL_T control );
++
++// helper functions
++extern void * vchi_allocate_buffer(VCHI_SERVICE_HANDLE_T handle, uint32_t *length);
++extern void vchi_free_buffer(VCHI_SERVICE_HANDLE_T handle, void *address);
++extern uint32_t vchi_current_time(VCHI_INSTANCE_T instance_handle);
++
++
++/******************************************************************************
++ Global service API
++ *****************************************************************************/
++// Routine to create a named service
++extern int32_t vchi_service_create( VCHI_INSTANCE_T instance_handle,
++                                    SERVICE_CREATION_T *setup,
++                                    VCHI_SERVICE_HANDLE_T *handle );
++
++// Routine to destroy a service
++extern int32_t vchi_service_destroy( const VCHI_SERVICE_HANDLE_T handle );
++
++// Routine to open a named service
++extern int32_t vchi_service_open( VCHI_INSTANCE_T instance_handle,
++                                  SERVICE_CREATION_T *setup,
++                                  VCHI_SERVICE_HANDLE_T *handle);
++
++extern int32_t vchi_get_peer_version( const VCHI_SERVICE_HANDLE_T handle,
++                                      short *peer_version );
++
++// Routine to close a named service
++extern int32_t vchi_service_close( const VCHI_SERVICE_HANDLE_T handle );
++
++// Routine to increment ref count on a named service
++extern int32_t vchi_service_use( const VCHI_SERVICE_HANDLE_T handle );
++
++// Routine to decrement ref count on a named service
++extern int32_t vchi_service_release( const VCHI_SERVICE_HANDLE_T handle );
++
++// Routine to set a control option for a named service
++extern int32_t vchi_service_set_option( const VCHI_SERVICE_HANDLE_T handle,
++					VCHI_SERVICE_OPTION_T option,
++					int value);
++
++// Routine to send a message across a service
++extern int32_t vchi_msg_queue( VCHI_SERVICE_HANDLE_T handle,
++                               const void *data,
++                               uint32_t data_size,
++                               VCHI_FLAGS_T flags,
++                               void *msg_handle );
++
++// scatter-gather (vector) and send message
++int32_t vchi_msg_queuev_ex( VCHI_SERVICE_HANDLE_T handle,
++                            VCHI_MSG_VECTOR_EX_T *vector,
++                            uint32_t count,
++                            VCHI_FLAGS_T flags,
++                            void *msg_handle );
++
++// legacy scatter-gather (vector) and send message, only handles pointers
++int32_t vchi_msg_queuev( VCHI_SERVICE_HANDLE_T handle,
++                         VCHI_MSG_VECTOR_T *vector,
++                         uint32_t count,
++                         VCHI_FLAGS_T flags,
++                         void *msg_handle );
++
++// Routine to receive a msg from a service
++// Dequeue is equivalent to hold, copy into client buffer, release
++extern int32_t vchi_msg_dequeue( VCHI_SERVICE_HANDLE_T handle,
++                                 void *data,
++                                 uint32_t max_data_size_to_read,
++                                 uint32_t *actual_msg_size,
++                                 VCHI_FLAGS_T flags );
++
++// Routine to look at a message in place.
++// The message is not dequeued, so a subsequent call to peek or dequeue
++// will return the same message.
++extern int32_t vchi_msg_peek( VCHI_SERVICE_HANDLE_T handle,
++                              void **data,
++                              uint32_t *msg_size,
++                              VCHI_FLAGS_T flags );
++
++// Routine to remove a message after it has been read in place with peek
++// The first message on the queue is dequeued.
++extern int32_t vchi_msg_remove( VCHI_SERVICE_HANDLE_T handle );
++
++// Routine to look at a message in place.
++// The message is dequeued, so the caller is left holding it; the descriptor is
++// filled in and must be released when the user has finished with the message.
++extern int32_t vchi_msg_hold( VCHI_SERVICE_HANDLE_T handle,
++                              void **data,        // } may be NULL, as info can be
++                              uint32_t *msg_size, // } obtained from HELD_MSG_T
++                              VCHI_FLAGS_T flags,
++                              VCHI_HELD_MSG_T *message_descriptor );
++
++// Initialise an iterator to look through messages in place
++extern int32_t vchi_msg_look_ahead( VCHI_SERVICE_HANDLE_T handle,
++                                    VCHI_MSG_ITER_T *iter,
++                                    VCHI_FLAGS_T flags );
++
++/******************************************************************************
++ Global service support API - operations on held messages and message iterators
++ *****************************************************************************/
++
++// Routine to get the address of a held message
++extern void *vchi_held_msg_ptr( const VCHI_HELD_MSG_T *message );
++
++// Routine to get the size of a held message
++extern int32_t vchi_held_msg_size( const VCHI_HELD_MSG_T *message );
++
++// Routine to get the transmit timestamp as written into the header by the peer
++extern uint32_t vchi_held_msg_tx_timestamp( const VCHI_HELD_MSG_T *message );
++
++// Routine to get the reception timestamp, written as we parsed the header
++extern uint32_t vchi_held_msg_rx_timestamp( const VCHI_HELD_MSG_T *message );
++
++// Routine to release a held message after it has been processed
++extern int32_t vchi_held_msg_release( VCHI_HELD_MSG_T *message );
++
++// Indicates whether the iterator has a next message.
++extern int32_t vchi_msg_iter_has_next( const VCHI_MSG_ITER_T *iter );
++
++// Return the pointer and length for the next message and advance the iterator.
++extern int32_t vchi_msg_iter_next( VCHI_MSG_ITER_T *iter,
++                                   void **data,
++                                   uint32_t *msg_size );
++
++// Remove the last message returned by vchi_msg_iter_next.
++// Can only be called once after each call to vchi_msg_iter_next.
++extern int32_t vchi_msg_iter_remove( VCHI_MSG_ITER_T *iter );
++
++// Hold the last message returned by vchi_msg_iter_next.
++// Can only be called once after each call to vchi_msg_iter_next.
++extern int32_t vchi_msg_iter_hold( VCHI_MSG_ITER_T *iter,
++                                   VCHI_HELD_MSG_T *message );
++
++// Return information for the next message, and hold it, advancing the iterator.
++extern int32_t vchi_msg_iter_hold_next( VCHI_MSG_ITER_T *iter,
++                                        void **data,        // } may be NULL
++                                        uint32_t *msg_size, // }
++                                        VCHI_HELD_MSG_T *message );
++
++
++/******************************************************************************
++ Global bulk API
++ *****************************************************************************/
++
++// Routine to prepare interface for a transfer from the other side
++extern int32_t vchi_bulk_queue_receive( VCHI_SERVICE_HANDLE_T handle,
++                                        void *data_dst,
++                                        uint32_t data_size,
++                                        VCHI_FLAGS_T flags,
++                                        void *transfer_handle );
++
++
++// Prepare interface for a transfer from the other side into relocatable memory.
++int32_t vchi_bulk_queue_receive_reloc( const VCHI_SERVICE_HANDLE_T handle,
++                                       VCHI_MEM_HANDLE_T h_dst,
++                                       uint32_t offset,
++                                       uint32_t data_size,
++                                       const VCHI_FLAGS_T flags,
++                                       void * const bulk_handle );
++
++// Routine to queue up data ready for transfer to the other side (once it has signalled it is ready)
++extern int32_t vchi_bulk_queue_transmit( VCHI_SERVICE_HANDLE_T handle,
++                                         const void *data_src,
++                                         uint32_t data_size,
++                                         VCHI_FLAGS_T flags,
++                                         void *transfer_handle );
++
++
++/******************************************************************************
++ Configuration plumbing
++ *****************************************************************************/
++
++// function prototypes for the different mid layers (the state info gives the different physical connections)
++extern const VCHI_CONNECTION_API_T *single_get_func_table( void );
++//extern const VCHI_CONNECTION_API_T *local_server_get_func_table( void );
++//extern const VCHI_CONNECTION_API_T *local_client_get_func_table( void );
++
++// declare all message drivers here
++const VCHI_MESSAGE_DRIVER_T *vchi_mphi_message_driver_func_table( void );
++
++#ifdef __cplusplus
++}
++#endif
++
++extern int32_t vchi_bulk_queue_transmit_reloc( VCHI_SERVICE_HANDLE_T handle,
++                                               VCHI_MEM_HANDLE_T h_src,
++                                               uint32_t offset,
++                                               uint32_t data_size,
++                                               VCHI_FLAGS_T flags,
++                                               void *transfer_handle );
++#endif /* VCHI_H_ */
++
++/****************************** End of file **********************************/
+--- /dev/null
++++ b/drivers/misc/vc04_services/interface/vchi/vchi_cfg.h
+@@ -0,0 +1,224 @@
++/**
++ * Copyright (c) 2010-2012 Broadcom. All rights reserved.
++ *
++ * Redistribution and use in source and binary forms, with or without
++ * modification, are permitted provided that the following conditions
++ * are met:
++ * 1. Redistributions of source code must retain the above copyright
++ *    notice, this list of conditions, and the following disclaimer,
++ *    without modification.
++ * 2. Redistributions in binary form must reproduce the above copyright
++ *    notice, this list of conditions and the following disclaimer in the
++ *    documentation and/or other materials provided with the distribution.
++ * 3. The names of the above-listed copyright holders may not be used
++ *    to endorse or promote products derived from this software without
++ *    specific prior written permission.
++ *
++ * ALTERNATIVELY, this software may be distributed under the terms of the
++ * GNU General Public License ("GPL") version 2, as published by the Free
++ * Software Foundation.
++ *
++ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
++ * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
++ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
++ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
++ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
++ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
++ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
++ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
++ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
++ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
++ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
++ */
++
++#ifndef VCHI_CFG_H_
++#define VCHI_CFG_H_
++
++/****************************************************************************************
++ * Defines in this first section are part of the VCHI API and may be examined by VCHI
++ * services.
++ ***************************************************************************************/
++
++/* Required alignment of base addresses for bulk transfer, if unaligned transfers are not enabled */
++/* Really determined by the message driver, and should be available from a run-time call. */
++#ifndef VCHI_BULK_ALIGN
++#   if __VCCOREVER__ >= 0x04000000
++#       define VCHI_BULK_ALIGN 32 // Allows for the need to do cache cleans
++#   else
++#       define VCHI_BULK_ALIGN 16
++#   endif
++#endif
++
++/* Required length multiple for bulk transfers, if unaligned transfers are not enabled */
++/* May be less than or greater than VCHI_BULK_ALIGN */
++/* Really determined by the message driver, and should be available from a run-time call. */
++#ifndef VCHI_BULK_GRANULARITY
++#   if __VCCOREVER__ >= 0x04000000
++#       define VCHI_BULK_GRANULARITY 32 // Allows for the need to do cache cleans
++#   else
++#       define VCHI_BULK_GRANULARITY 16
++#   endif
++#endif
++
++/* The largest possible message to be queued with vchi_msg_queue. */
++#ifndef VCHI_MAX_MSG_SIZE
++#   if defined VCHI_LOCAL_HOST_PORT
++#       define VCHI_MAX_MSG_SIZE     16384         // makes file transfers fast, but should they be using bulk?
++#   else
++#       define VCHI_MAX_MSG_SIZE      4096 // NOTE: THIS MUST BE LARGER THAN OR EQUAL TO THE SIZE OF THE KHRONOS MERGE BUFFER!!
++#   endif
++#endif
++
++/******************************************************************************************
++ * Defines below are system configuration options, and should not be used by VCHI services.
++ *****************************************************************************************/
++
++/* How many connections can we support? A localhost implementation uses 2 connections,
++ * 1 for host-app, 1 for VMCS, and these are hooked together by a loopback MPHI VCFW
++ * driver. */
++#ifndef VCHI_MAX_NUM_CONNECTIONS
++#   define VCHI_MAX_NUM_CONNECTIONS 3
++#endif
++
++/* How many services can we open per connection? Extending this doesn't cost processing time, just a small
++ * amount of static memory. */
++#ifndef VCHI_MAX_SERVICES_PER_CONNECTION
++#  define VCHI_MAX_SERVICES_PER_CONNECTION 36
++#endif
++
++/* Adjust if using a message driver that supports more logical TX channels */
++#ifndef VCHI_MAX_BULK_TX_CHANNELS_PER_CONNECTION
++#   define VCHI_MAX_BULK_TX_CHANNELS_PER_CONNECTION 9 // 1 MPHI + 8 CCP2 logical channels
++#endif
++
++/* Adjust if using a message driver that supports more logical RX channels */
++#ifndef VCHI_MAX_BULK_RX_CHANNELS_PER_CONNECTION
++#   define VCHI_MAX_BULK_RX_CHANNELS_PER_CONNECTION 1 // 1 MPHI
++#endif
++
++/* How many receive slots do we use. This times VCHI_MAX_MSG_SIZE gives the effective
++ * receive queue space, less message headers. */
++#ifndef VCHI_NUM_READ_SLOTS
++#  if defined(VCHI_LOCAL_HOST_PORT)
++#     define VCHI_NUM_READ_SLOTS 4
++#  else
++#     define VCHI_NUM_READ_SLOTS 48
++#  endif
++#endif
++
++/* Do we utilise overrun facility for receive message slots? Can aid peer transmit
++ * performance. Only define on VideoCore end, talking to host.
++ */
++//#define VCHI_MSG_RX_OVERRUN
++
++/* How many transmit slots do we use. Generally don't need many, as the hardware driver
++ * underneath VCHI will usually have its own buffering. */
++#ifndef VCHI_NUM_WRITE_SLOTS
++#  define VCHI_NUM_WRITE_SLOTS 4
++#endif
++
++/* If a service has held or queued received messages in VCHI_XOFF_THRESHOLD or more slots,
++ * then it's taking up too much buffer space, and the peer service will be told to stop
++ * transmitting with an XOFF message. For this to be effective, the VCHI_NUM_READ_SLOTS
++ * needs to be considerably bigger than VCHI_NUM_WRITE_SLOTS, or the transmit latency
++ * is too high. */
++#ifndef VCHI_XOFF_THRESHOLD
++#  define VCHI_XOFF_THRESHOLD (VCHI_NUM_READ_SLOTS / 2)
++#endif
++
++/* After we've sent an XOFF, the peer will be told to resume transmission once the local
++ * service has dequeued/released enough messages that it's now occupying
++ * VCHI_XON_THRESHOLD slots or fewer. */
++#ifndef VCHI_XON_THRESHOLD
++#  define VCHI_XON_THRESHOLD (VCHI_NUM_READ_SLOTS / 4)
++#endif
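++
++/* Worked example with the defaults above (illustrative only): for a non-localhost
++ * build, VCHI_NUM_READ_SLOTS is 48, so a service holding or queueing messages in
++ * 24 or more slots causes an XOFF to be sent to the peer, and transmission is
++ * re-enabled (XON) once it has dropped back to 12 slots or fewer. */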
++
++/* A size below which a bulk transfer omits the handshake completely and always goes
++ * via the message channel, if bulk auxiliary is being sent on that service. (The user
++ * can guarantee this by enabling unaligned transmits).
++ * Not API. */
++#ifndef VCHI_MIN_BULK_SIZE
++#  define VCHI_MIN_BULK_SIZE    ( VCHI_MAX_MSG_SIZE / 2 < 4096 ? VCHI_MAX_MSG_SIZE / 2 : 4096 )
++#endif
++
++/* Maximum size of bulk transmission chunks, for each interface type. A trade-off between
++ * speed and latency; the smaller the chunk size the better chance of messages and other
++ * bulk transmissions getting in when big bulk transfers are happening. Set to 0 to not
++ * break transmissions into chunks.
++ */
++#ifndef VCHI_MAX_BULK_CHUNK_SIZE_MPHI
++#  define VCHI_MAX_BULK_CHUNK_SIZE_MPHI (16 * 1024)
++#endif
++
++/* NB Chunked CCP2 transmissions violate the letter of the CCP2 spec by using "JPEG8" mode
++ * with multiple-line frames. Only use if the receiver can cope. */
++#ifndef VCHI_MAX_BULK_CHUNK_SIZE_CCP2
++#  define VCHI_MAX_BULK_CHUNK_SIZE_CCP2 0
++#endif
++
++/* How many TX messages can we have pending in our transmit slots. Once exhausted,
++ * vchi_msg_queue will be blocked. */
++#ifndef VCHI_TX_MSG_QUEUE_SIZE
++#  define VCHI_TX_MSG_QUEUE_SIZE           256
++#endif
++
++/* How many RX messages can we have parsed in the receive slots. Once exhausted, parsing
++ * will be suspended until older messages are dequeued/released. */
++#ifndef VCHI_RX_MSG_QUEUE_SIZE
++#  define VCHI_RX_MSG_QUEUE_SIZE           256
++#endif
++
++/* Really should be able to cope if we run out of received message descriptors, by
++ * suspending parsing as the comment above says, but we don't. This sweeps the issue
++ * under the carpet. */
++#if VCHI_RX_MSG_QUEUE_SIZE < (VCHI_MAX_MSG_SIZE/16 + 1) * VCHI_NUM_READ_SLOTS
++#  undef VCHI_RX_MSG_QUEUE_SIZE
++#  define VCHI_RX_MSG_QUEUE_SIZE (VCHI_MAX_MSG_SIZE/16 + 1) * VCHI_NUM_READ_SLOTS
++#endif
++
++/* How many bulk transmits can we have pending. Once exhausted, vchi_bulk_queue_transmit
++ * will be blocked. */
++#ifndef VCHI_TX_BULK_QUEUE_SIZE
++#  define VCHI_TX_BULK_QUEUE_SIZE           64
++#endif
++
++/* How many bulk receives can we have pending. Once exhausted, vchi_bulk_queue_receive
++ * will be blocked. */
++#ifndef VCHI_RX_BULK_QUEUE_SIZE
++#  define VCHI_RX_BULK_QUEUE_SIZE           64
++#endif
++
++/* A limit on how many outstanding bulk requests we expect the peer to give us. If
++ * the peer asks for more than this, VCHI will fail and assert. The number is determined
++ * by the peer's hardware - it's the number of outstanding requests that can be queued
++ * on all bulk channels. VC3's MPHI peripheral allows 16. */
++#ifndef VCHI_MAX_PEER_BULK_REQUESTS
++#  define VCHI_MAX_PEER_BULK_REQUESTS       32
++#endif
++
++/* Define VCHI_CCP2TX_MANUAL_POWER if the host tells us when to turn the CCP2
++ * transmitter on and off.
++ */
++/*#define VCHI_CCP2TX_MANUAL_POWER*/
++
++#ifndef VCHI_CCP2TX_MANUAL_POWER
++
++/* Timeout (in milliseconds) for putting the CCP2TX interface into IDLE state. Set
++ * negative for no IDLE.
++ */
++#  ifndef VCHI_CCP2TX_IDLE_TIMEOUT
++#    define VCHI_CCP2TX_IDLE_TIMEOUT        5
++#  endif
++
++/* Timeout (in milliseconds) for putting the CCP2TX interface into OFF state. Set
++ * negative for no OFF.
++ */
++#  ifndef VCHI_CCP2TX_OFF_TIMEOUT
++#    define VCHI_CCP2TX_OFF_TIMEOUT         1000
++#  endif
++
++#endif /* VCHI_CCP2TX_MANUAL_POWER */
++
++#endif /* VCHI_CFG_H_ */
++
++/****************************** End of file **********************************/
+--- /dev/null
++++ b/drivers/misc/vc04_services/interface/vchi/vchi_cfg_internal.h
+@@ -0,0 +1,71 @@
++/**
++ * Copyright (c) 2010-2012 Broadcom. All rights reserved.
++ *
++ * Redistribution and use in source and binary forms, with or without
++ * modification, are permitted provided that the following conditions
++ * are met:
++ * 1. Redistributions of source code must retain the above copyright
++ *    notice, this list of conditions, and the following disclaimer,
++ *    without modification.
++ * 2. Redistributions in binary form must reproduce the above copyright
++ *    notice, this list of conditions and the following disclaimer in the
++ *    documentation and/or other materials provided with the distribution.
++ * 3. The names of the above-listed copyright holders may not be used
++ *    to endorse or promote products derived from this software without
++ *    specific prior written permission.
++ *
++ * ALTERNATIVELY, this software may be distributed under the terms of the
++ * GNU General Public License ("GPL") version 2, as published by the Free
++ * Software Foundation.
++ *
++ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
++ * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
++ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
++ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
++ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
++ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
++ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
++ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
++ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
++ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
++ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
++ */
++
++#ifndef VCHI_CFG_INTERNAL_H_
++#define VCHI_CFG_INTERNAL_H_
++
++/****************************************************************************************
++ * Control optimisation attempts.
++ ***************************************************************************************/
++
++// Don't use lots of short-term locks - use great long ones, reducing the overall locks-per-second
++#define VCHI_COARSE_LOCKING
++
++// Avoid lock then unlock on exit from blocking queue operations (msg tx, bulk rx/tx)
++// (only relevant if VCHI_COARSE_LOCKING)
++#define VCHI_ELIDE_BLOCK_EXIT_LOCK
++
++// Avoid lock on non-blocking peek
++// (only relevant if VCHI_COARSE_LOCKING)
++#define VCHI_AVOID_PEEK_LOCK
++
++// Use one slot-handler thread per connection, rather than 1 thread dealing with all connections in rotation.
++#define VCHI_MULTIPLE_HANDLER_THREADS
++
++// Put free descriptors onto the head of the free queue, rather than the tail, so that we don't thrash
++// our way through the pool of descriptors.
++#define VCHI_PUSH_FREE_DESCRIPTORS_ONTO_HEAD
++
++// Don't issue a MSG_AVAILABLE callback for every single message. Possibly only safe if VCHI_COARSE_LOCKING.
++#define VCHI_FEWER_MSG_AVAILABLE_CALLBACKS
++
++// Don't use message descriptors for TX messages that don't need them
++#define VCHI_MINIMISE_TX_MSG_DESCRIPTORS
++
++// Nano-locks for multiqueue
++//#define VCHI_MQUEUE_NANOLOCKS
++
++// Lock-free(er) dequeuing
++//#define VCHI_RX_NANOLOCKS
++
++#endif /*VCHI_CFG_INTERNAL_H_*/
+--- /dev/null
++++ b/drivers/misc/vc04_services/interface/vchi/vchi_common.h
+@@ -0,0 +1,175 @@
++/**
++ * Copyright (c) 2010-2012 Broadcom. All rights reserved.
++ *
++ * Redistribution and use in source and binary forms, with or without
++ * modification, are permitted provided that the following conditions
++ * are met:
++ * 1. Redistributions of source code must retain the above copyright
++ *    notice, this list of conditions, and the following disclaimer,
++ *    without modification.
++ * 2. Redistributions in binary form must reproduce the above copyright
++ *    notice, this list of conditions and the following disclaimer in the
++ *    documentation and/or other materials provided with the distribution.
++ * 3. The names of the above-listed copyright holders may not be used
++ *    to endorse or promote products derived from this software without
++ *    specific prior written permission.
++ *
++ * ALTERNATIVELY, this software may be distributed under the terms of the
++ * GNU General Public License ("GPL") version 2, as published by the Free
++ * Software Foundation.
++ *
++ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
++ * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
++ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
++ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
++ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
++ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
++ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
++ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
++ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
++ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
++ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
++ */
++
++#ifndef VCHI_COMMON_H_
++#define VCHI_COMMON_H_
++
++
++//flags used when sending messages (must be bitmapped)
++typedef enum
++{
++   VCHI_FLAGS_NONE                      = 0x0,
++   VCHI_FLAGS_BLOCK_UNTIL_OP_COMPLETE   = 0x1,   // waits for message to be received, or sent (NB. not the same as being seen on other side)
++   VCHI_FLAGS_CALLBACK_WHEN_OP_COMPLETE = 0x2,   // run a callback when message sent
++   VCHI_FLAGS_BLOCK_UNTIL_QUEUED        = 0x4,   // return once the transfer is in a queue ready to go
++   VCHI_FLAGS_ALLOW_PARTIAL             = 0x8,
++   VCHI_FLAGS_BLOCK_UNTIL_DATA_READ     = 0x10,
++   VCHI_FLAGS_CALLBACK_WHEN_DATA_READ   = 0x20,
++
++   VCHI_FLAGS_ALIGN_SLOT            = 0x000080,  // internal use only
++   VCHI_FLAGS_BULK_AUX_QUEUED       = 0x010000,  // internal use only
++   VCHI_FLAGS_BULK_AUX_COMPLETE     = 0x020000,  // internal use only
++   VCHI_FLAGS_BULK_DATA_QUEUED      = 0x040000,  // internal use only
++   VCHI_FLAGS_BULK_DATA_COMPLETE    = 0x080000,  // internal use only
++   VCHI_FLAGS_INTERNAL              = 0xFF0000
++} VCHI_FLAGS_T;
++
++// constants for vchi_crc_control()
++typedef enum {
++   VCHI_CRC_NOTHING = -1,
++   VCHI_CRC_PER_SERVICE = 0,
++   VCHI_CRC_EVERYTHING = 1,
++} VCHI_CRC_CONTROL_T;
++
++//callback reasons when an event occurs on a service
++typedef enum
++{
++   VCHI_CALLBACK_REASON_MIN,
++
++   //This indicates that there is data available
++   //handle is the msg id that was transmitted with the data
++   //    When a message is received and there was no FULL message available previously, send callback
++   //    Tasks get kicked by the callback, reset their event and try and read from the fifo until it fails
++   VCHI_CALLBACK_MSG_AVAILABLE,
++   VCHI_CALLBACK_MSG_SENT,
++   VCHI_CALLBACK_MSG_SPACE_AVAILABLE, // XXX not yet implemented
++
++   // This indicates that a transfer from the other side has completed
++   VCHI_CALLBACK_BULK_RECEIVED,
++   //This indicates that data queued up to be sent has now gone
++   //handle is the msg id that was used when sending the data
++   VCHI_CALLBACK_BULK_SENT,
++   VCHI_CALLBACK_BULK_RX_SPACE_AVAILABLE, // XXX not yet implemented
++   VCHI_CALLBACK_BULK_TX_SPACE_AVAILABLE, // XXX not yet implemented
++
++   VCHI_CALLBACK_SERVICE_CLOSED,
++
++   // this side has sent XOFF to peer due to lack of data consumption by service
++   // (suggests the service may need to take some recovery action if it has
++   // been deliberately holding off consuming data)
++   VCHI_CALLBACK_SENT_XOFF,
++   VCHI_CALLBACK_SENT_XON,
++
++   // indicates that a bulk transfer has finished reading the source buffer
++   VCHI_CALLBACK_BULK_DATA_READ,
++
++   // power notification events (currently host side only)
++   VCHI_CALLBACK_PEER_OFF,
++   VCHI_CALLBACK_PEER_SUSPENDED,
++   VCHI_CALLBACK_PEER_ON,
++   VCHI_CALLBACK_PEER_RESUMED,
++   VCHI_CALLBACK_FORCED_POWER_OFF,
++
++#ifdef USE_VCHIQ_ARM
++   // some extra notifications provided by vchiq_arm
++   VCHI_CALLBACK_SERVICE_OPENED,
++   VCHI_CALLBACK_BULK_RECEIVE_ABORTED,
++   VCHI_CALLBACK_BULK_TRANSMIT_ABORTED,
++#endif
++
++   VCHI_CALLBACK_REASON_MAX
++} VCHI_CALLBACK_REASON_T;
++
++// service control options
++typedef enum
++{
++   VCHI_SERVICE_OPTION_MIN,
++
++   VCHI_SERVICE_OPTION_TRACE,
++   VCHI_SERVICE_OPTION_SYNCHRONOUS,
++
++   VCHI_SERVICE_OPTION_MAX
++} VCHI_SERVICE_OPTION_T;
++
++
++//Callback used by all services / bulk transfers
++typedef void (*VCHI_CALLBACK_T)( void *callback_param, //my service local param
++                                 VCHI_CALLBACK_REASON_T reason,
++                                 void *handle ); //for transmitting msg's only
++
++
++
++/*
++ * Define vector struct for scatter-gather (vector) operations
++ * Vectors can be nested - if a vector element has negative length, then
++ * the data pointer is treated as pointing to another vector array, with
++ * '-vec_len' elements. Thus to append a header onto an existing vector,
++ * you can do this:
++ *
++ * void foo(const VCHI_MSG_VECTOR_T *v, int n)
++ * {
++ *    VCHI_MSG_VECTOR_T nv[2];
++ *    nv[0].vec_base = my_header;
++ *    nv[0].vec_len = sizeof my_header;
++ *    nv[1].vec_base = v;
++ *    nv[1].vec_len = -n;
++ *    ...
++ *
++ */
++typedef struct vchi_msg_vector {
++   const void *vec_base;
++   int32_t vec_len;
++} VCHI_MSG_VECTOR_T;
++
++// Opaque type for a connection API
++typedef struct opaque_vchi_connection_api_t VCHI_CONNECTION_API_T;
++
++// Opaque type for a message driver
++typedef struct opaque_vchi_message_driver_t VCHI_MESSAGE_DRIVER_T;
++
++
++// Iterator structure for reading ahead through received message queue. Allocated by client,
++// initialised by vchi_msg_look_ahead. Fields are for internal VCHI use only.
++// Iterates over messages in queue at the instant of the call to vchi_msg_look_ahead -
++// will not proceed to messages received since. Behaviour is undefined if an iterator
++// is used again after messages for that service are removed/dequeued by any
++// means other than vchi_msg_iter_... calls on the iterator itself.
++typedef struct {
++   struct opaque_vchi_service_t *service;
++   void *last;
++   void *next;
++   void *remove;
++} VCHI_MSG_ITER_T;
++
++
++#endif // VCHI_COMMON_H_
+--- /dev/null
++++ b/drivers/misc/vc04_services/interface/vchi/vchi_mh.h
+@@ -0,0 +1,42 @@
++/**
++ * Copyright (c) 2010-2012 Broadcom. All rights reserved.
++ *
++ * Redistribution and use in source and binary forms, with or without
++ * modification, are permitted provided that the following conditions
++ * are met:
++ * 1. Redistributions of source code must retain the above copyright
++ *    notice, this list of conditions, and the following disclaimer,
++ *    without modification.
++ * 2. Redistributions in binary form must reproduce the above copyright
++ *    notice, this list of conditions and the following disclaimer in the
++ *    documentation and/or other materials provided with the distribution.
++ * 3. The names of the above-listed copyright holders may not be used
++ *    to endorse or promote products derived from this software without
++ *    specific prior written permission.
++ *
++ * ALTERNATIVELY, this software may be distributed under the terms of the
++ * GNU General Public License ("GPL") version 2, as published by the Free
++ * Software Foundation.
++ *
++ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
++ * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
++ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
++ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
++ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
++ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
++ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
++ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
++ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
++ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
++ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
++ */
++
++#ifndef VCHI_MH_H_
++#define VCHI_MH_H_
++
++#include <linux/types.h>
++
++typedef int32_t VCHI_MEM_HANDLE_T;
++#define VCHI_MEM_HANDLE_INVALID 0
++
++#endif
+--- /dev/null
++++ b/drivers/misc/vc04_services/interface/vchiq_arm/vchiq.h
+@@ -0,0 +1,40 @@
++/**
++ * Copyright (c) 2010-2012 Broadcom. All rights reserved.
++ *
++ * Redistribution and use in source and binary forms, with or without
++ * modification, are permitted provided that the following conditions
++ * are met:
++ * 1. Redistributions of source code must retain the above copyright
++ *    notice, this list of conditions, and the following disclaimer,
++ *    without modification.
++ * 2. Redistributions in binary form must reproduce the above copyright
++ *    notice, this list of conditions and the following disclaimer in the
++ *    documentation and/or other materials provided with the distribution.
++ * 3. The names of the above-listed copyright holders may not be used
++ *    to endorse or promote products derived from this software without
++ *    specific prior written permission.
++ *
++ * ALTERNATIVELY, this software may be distributed under the terms of the
++ * GNU General Public License ("GPL") version 2, as published by the Free
++ * Software Foundation.
++ *
++ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
++ * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
++ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
++ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
++ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
++ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
++ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
++ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
++ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
++ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
++ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
++ */
++
++#ifndef VCHIQ_VCHIQ_H
++#define VCHIQ_VCHIQ_H
++
++#include "vchiq_if.h"
++#include "vchiq_util.h"
++
++#endif
+--- /dev/null
++++ b/drivers/misc/vc04_services/interface/vchiq_arm/vchiq_2835.h
+@@ -0,0 +1,42 @@
++/**
++ * Copyright (c) 2010-2012 Broadcom. All rights reserved.
++ *
++ * Redistribution and use in source and binary forms, with or without
++ * modification, are permitted provided that the following conditions
++ * are met:
++ * 1. Redistributions of source code must retain the above copyright
++ *    notice, this list of conditions, and the following disclaimer,
++ *    without modification.
++ * 2. Redistributions in binary form must reproduce the above copyright
++ *    notice, this list of conditions and the following disclaimer in the
++ *    documentation and/or other materials provided with the distribution.
++ * 3. The names of the above-listed copyright holders may not be used
++ *    to endorse or promote products derived from this software without
++ *    specific prior written permission.
++ *
++ * ALTERNATIVELY, this software may be distributed under the terms of the
++ * GNU General Public License ("GPL") version 2, as published by the Free
++ * Software Foundation.
++ *
++ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
++ * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
++ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
++ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
++ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
++ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
++ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
++ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
++ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
++ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
++ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
++ */
++
++#ifndef VCHIQ_2835_H
++#define VCHIQ_2835_H
++
++#include "vchiq_pagelist.h"
++
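++/*
++ * Indices into slot_zero->platform_data used by the 2835 port: the bus
++ * address of the fragments pool and the number of fragments (see
++ * vchiq_platform_init() in vchiq_2835_arm.c).
++ */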
++#define VCHIQ_PLATFORM_FRAGMENTS_OFFSET_IDX 0
++#define VCHIQ_PLATFORM_FRAGMENTS_COUNT_IDX  1
++
++#endif /* VCHIQ_2835_H */
+--- /dev/null
++++ b/drivers/misc/vc04_services/interface/vchiq_arm/vchiq_2835_arm.c
+@@ -0,0 +1,586 @@
++/**
++ * Copyright (c) 2010-2012 Broadcom. All rights reserved.
++ *
++ * Redistribution and use in source and binary forms, with or without
++ * modification, are permitted provided that the following conditions
++ * are met:
++ * 1. Redistributions of source code must retain the above copyright
++ *    notice, this list of conditions, and the following disclaimer,
++ *    without modification.
++ * 2. Redistributions in binary form must reproduce the above copyright
++ *    notice, this list of conditions and the following disclaimer in the
++ *    documentation and/or other materials provided with the distribution.
++ * 3. The names of the above-listed copyright holders may not be used
++ *    to endorse or promote products derived from this software without
++ *    specific prior written permission.
++ *
++ * ALTERNATIVELY, this software may be distributed under the terms of the
++ * GNU General Public License ("GPL") version 2, as published by the Free
++ * Software Foundation.
++ *
++ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
++ * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
++ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
++ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
++ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
++ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
++ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
++ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
++ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
++ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
++ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
++ */
++
++#include <linux/kernel.h>
++#include <linux/types.h>
++#include <linux/errno.h>
++#include <linux/interrupt.h>
++#include <linux/pagemap.h>
++#include <linux/dma-mapping.h>
++#include <linux/version.h>
++#include <linux/io.h>
++#include <linux/platform_device.h>
++#include <linux/uaccess.h>
++#include <linux/of.h>
++#include <asm/pgtable.h>
++#include <soc/bcm2835/raspberrypi-firmware.h>
++
++#define dmac_map_area			__glue(_CACHE,_dma_map_area)
++#define dmac_unmap_area 		__glue(_CACHE,_dma_unmap_area)
++
++extern void dmac_map_area(const void *, size_t, int);
++extern void dmac_unmap_area(const void *, size_t, int);
++
++#define TOTAL_SLOTS (VCHIQ_SLOT_ZERO_SLOTS + 2 * 32)
++
++#define VCHIQ_ARM_ADDRESS(x) ((void *)((char *)x + g_virt_to_bus_offset))
++
++#include "vchiq_arm.h"
++#include "vchiq_2835.h"
++#include "vchiq_connected.h"
++#include "vchiq_killable.h"
++
++#define MAX_FRAGMENTS (VCHIQ_NUM_CURRENT_BULKS * 2)
++
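++/* Doorbell registers: BELL0 is read (and cleared) in the ARM interrupt
++ * handler, BELL2 is written to ring the VideoCore's doorbell - see
++ * vchiq_doorbell_irq() and remote_event_signal() below.
++ */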
++#define BELL0	0x00
++#define BELL2	0x08
++
++typedef struct vchiq_2835_state_struct {
++   int inited;
++   VCHIQ_ARM_STATE_T arm_state;
++} VCHIQ_2835_ARM_STATE_T;
++
++static void __iomem *g_regs;
++/* Default; may be overridden by the "cache-line-size" DT property below */
++static unsigned int g_cache_line_size = CACHE_LINE_SIZE;
++static unsigned int g_fragments_size;
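++/* The fragment pool is handed out through a singly-linked free list
++ * threaded through the free fragments themselves: g_free_fragments points
++ * at the first free fragment, g_free_fragments_sema counts free fragments
++ * and g_free_fragments_mutex (below) serialises updates of the list head.
++ */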
++static char *g_fragments_base;
++static char *g_free_fragments;
++static struct semaphore g_free_fragments_sema;
++static unsigned long g_virt_to_bus_offset;
++
++extern int vchiq_arm_log_level;
++
++static DEFINE_SEMAPHORE(g_free_fragments_mutex);
++
++static irqreturn_t
++vchiq_doorbell_irq(int irq, void *dev_id);
++
++static int
++create_pagelist(char __user *buf, size_t count, unsigned short type,
++                struct task_struct *task, PAGELIST_T ** ppagelist);
++
++static void
++free_pagelist(PAGELIST_T *pagelist, int actual);
++
++int vchiq_platform_init(struct platform_device *pdev, VCHIQ_STATE_T *state)
++{
++	struct device *dev = &pdev->dev;
++	struct rpi_firmware *fw = platform_get_drvdata(pdev);
++	VCHIQ_SLOT_ZERO_T *vchiq_slot_zero;
++	struct resource *res;
++	void *slot_mem;
++	dma_addr_t slot_phys;
++	u32 channelbase;
++	int slot_mem_size, frag_mem_size;
++	int err, irq, i;
++
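++	/* Cache the constant virtual-to-bus delta used by VCHIQ_ARM_ADDRESS()
++	 * when converting kernel addresses for the VideoCore.
++	 */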
++	g_virt_to_bus_offset = virt_to_dma(dev, (void *)0);
++
++	(void)of_property_read_u32(dev->of_node, "cache-line-size",
++				   &g_cache_line_size);
++	g_fragments_size = 2 * g_cache_line_size;
++
++	/* Allocate space for the channels in coherent memory */
++	slot_mem_size = PAGE_ALIGN(TOTAL_SLOTS * VCHIQ_SLOT_SIZE);
++	frag_mem_size = PAGE_ALIGN(g_fragments_size * MAX_FRAGMENTS);
++
++	slot_mem = dmam_alloc_coherent(dev, slot_mem_size + frag_mem_size,
++				       &slot_phys, GFP_KERNEL);
++	if (!slot_mem) {
++		dev_err(dev, "could not allocate DMA memory\n");
++		return -ENOMEM;
++	}
++
++	WARN_ON(((int)slot_mem & (PAGE_SIZE - 1)) != 0);
++
++	vchiq_slot_zero = vchiq_init_slots(slot_mem, slot_mem_size);
++	if (!vchiq_slot_zero)
++		return -EINVAL;
++
++	vchiq_slot_zero->platform_data[VCHIQ_PLATFORM_FRAGMENTS_OFFSET_IDX] =
++		(int)slot_phys + slot_mem_size;
++	vchiq_slot_zero->platform_data[VCHIQ_PLATFORM_FRAGMENTS_COUNT_IDX] =
++		MAX_FRAGMENTS;
++
++	g_fragments_base = (char *)slot_mem + slot_mem_size;
++	slot_mem_size += frag_mem_size;
++
++	g_free_fragments = g_fragments_base;
++	for (i = 0; i < (MAX_FRAGMENTS - 1); i++) {
++		*(char **)&g_fragments_base[i*g_fragments_size] =
++			&g_fragments_base[(i + 1)*g_fragments_size];
++	}
++	*(char **)&g_fragments_base[i * g_fragments_size] = NULL;
++	sema_init(&g_free_fragments_sema, MAX_FRAGMENTS);
++
++	if (vchiq_init_state(state, vchiq_slot_zero, 0) != VCHIQ_SUCCESS)
++		return -EINVAL;
++
++	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++	g_regs = devm_ioremap_resource(&pdev->dev, res);
++	if (IS_ERR(g_regs))
++		return PTR_ERR(g_regs);
++
++	irq = platform_get_irq(pdev, 0);
++	if (irq <= 0) {
++		dev_err(dev, "failed to get IRQ\n");
++		return irq;
++	}
++
++	err = devm_request_irq(dev, irq, vchiq_doorbell_irq, IRQF_IRQPOLL,
++			       "VCHIQ doorbell", state);
++	if (err) {
++		dev_err(dev, "failed to register irq=%d\n", irq);
++		return err;
++	}
++
++	/* Send the base address of the slots to VideoCore */
++	channelbase = slot_phys;
++	err = rpi_firmware_property(fw, RPI_FIRMWARE_VCHIQ_INIT,
++				    &channelbase, sizeof(channelbase));
++	if (err || channelbase) {
++		dev_err(dev, "failed to set channelbase\n");
++		return err ? : -ENXIO;
++	}
++
++	vchiq_log_info(vchiq_arm_log_level,
++		"vchiq_init - done (slots %x, phys %pad)",
++		(unsigned int)vchiq_slot_zero, &slot_phys);
++
++	vchiq_call_connected_callbacks();
++
++	return 0;
++}
++
++VCHIQ_STATUS_T
++vchiq_platform_init_state(VCHIQ_STATE_T *state)
++{
++	VCHIQ_2835_ARM_STATE_T *platform_state;
++	VCHIQ_STATUS_T status;
++
++	platform_state = kzalloc(sizeof(*platform_state), GFP_KERNEL);
++	if (!platform_state)
++		return VCHIQ_ERROR;
++
++	state->platform_state = platform_state;
++	platform_state->inited = 1;
++	status = vchiq_arm_init_state(state, &platform_state->arm_state);
++	if (status != VCHIQ_SUCCESS)
++		platform_state->inited = 0;
++
++	return status;
++}
++
++VCHIQ_ARM_STATE_T *
++vchiq_platform_get_arm_state(VCHIQ_STATE_T *state)
++{
++	VCHIQ_2835_ARM_STATE_T *platform_state =
++		(VCHIQ_2835_ARM_STATE_T *)state->platform_state;
++
++	BUG_ON(!platform_state->inited);
++
++	return &platform_state->arm_state;
++}
++
++void
++remote_event_signal(REMOTE_EVENT_T *event)
++{
++	wmb();
++
++	event->fired = 1;
++
++	dsb();         /* data barrier operation */
++
++	if (event->armed)
++		writel(0, g_regs + BELL2); /* trigger vc interrupt */
++}
++
++int
++vchiq_copy_from_user(void *dst, const void *src, int size)
++{
++	if ((uint32_t)src < TASK_SIZE) {
++		return copy_from_user(dst, src, size);
++	} else {
++		memcpy(dst, src, size);
++		return 0;
++	}
++}
++
++VCHIQ_STATUS_T
++vchiq_prepare_bulk_data(VCHIQ_BULK_T *bulk, VCHI_MEM_HANDLE_T memhandle,
++	void *offset, int size, int dir)
++{
++	PAGELIST_T *pagelist;
++	int ret;
++
++	WARN_ON(memhandle != VCHI_MEM_HANDLE_INVALID);
++
++	ret = create_pagelist((char __user *)offset, size,
++			(dir == VCHIQ_BULK_RECEIVE)
++			? PAGELIST_READ
++			: PAGELIST_WRITE,
++			current,
++			&pagelist);
++	if (ret != 0)
++		return VCHIQ_ERROR;
++
++	bulk->handle = memhandle;
++	bulk->data = VCHIQ_ARM_ADDRESS(pagelist);
++
++	/* Store the pagelist address in remote_data, which isn't used by the
++	   slave. */
++	bulk->remote_data = pagelist;
++
++	return VCHIQ_SUCCESS;
++}
++
++void
++vchiq_complete_bulk(VCHIQ_BULK_T *bulk)
++{
++	if (bulk && bulk->remote_data && bulk->actual)
++		free_pagelist((PAGELIST_T *)bulk->remote_data, bulk->actual);
++}
++
++void
++vchiq_transfer_bulk(VCHIQ_BULK_T *bulk)
++{
++	/*
++	 * This should only be called on the master (VideoCore) side, but
++	 * provide an implementation to avoid the need for ifdefery.
++	 */
++	BUG();
++}
++
++void
++vchiq_dump_platform_state(void *dump_context)
++{
++	char buf[80];
++	int len;
++	len = snprintf(buf, sizeof(buf),
++		"  Platform: 2835 (VC master)");
++	vchiq_dump(dump_context, buf, len + 1);
++}
++
++VCHIQ_STATUS_T
++vchiq_platform_suspend(VCHIQ_STATE_T *state)
++{
++	return VCHIQ_ERROR;
++}
++
++VCHIQ_STATUS_T
++vchiq_platform_resume(VCHIQ_STATE_T *state)
++{
++	return VCHIQ_SUCCESS;
++}
++
++void
++vchiq_platform_paused(VCHIQ_STATE_T *state)
++{
++}
++
++void
++vchiq_platform_resumed(VCHIQ_STATE_T *state)
++{
++}
++
++int
++vchiq_platform_videocore_wanted(VCHIQ_STATE_T *state)
++{
++	return 1; /* autosuspend not supported - videocore always wanted */
++}
++
++int
++vchiq_platform_use_suspend_timer(void)
++{
++	return 0;
++}
++
++void
++vchiq_dump_platform_use_state(VCHIQ_STATE_T *state)
++{
++	vchiq_log_info(vchiq_arm_log_level, "Suspend timer not in use");
++}
++
++void
++vchiq_platform_handle_timeout(VCHIQ_STATE_T *state)
++{
++	(void)state;
++}
++
++/*
++ * Local functions
++ */
++
++static irqreturn_t
++vchiq_doorbell_irq(int irq, void *dev_id)
++{
++	VCHIQ_STATE_T *state = dev_id;
++	irqreturn_t ret = IRQ_NONE;
++	unsigned int status;
++
++	/* Read (and clear) the doorbell */
++	status = readl(g_regs + BELL0);
++
++	if (status & 0x4) {  /* Was the doorbell rung? */
++		remote_event_pollall(state);
++		ret = IRQ_HANDLED;
++	}
++
++	return ret;
++}
++
++/* There is a potential problem with partial cache lines (pages?)
++** at the ends of the block when reading. If the CPU accessed anything in
++** the same line (page?) then it may have pulled old data into the cache,
++** obscuring the new data underneath. We can solve this by transferring the
++** partial cache lines separately, and allowing the ARM to copy into the
++** cached area.
++
++** N.B. This implementation plays slightly fast and loose with the Linux
++** driver programming rules, e.g. its use of dmac_map_area instead of
++** dma_map_single, but it isn't a multi-platform driver and it benefits
++** from increased speed as a result.
++*/
++
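++/* Each fragment is a pair of cache lines (g_fragments_size bytes): the
++** first receives the head partial line and the second the tail partial
++** line. The fragment's index is encoded into pagelist->type
++** (PAGELIST_READ_WITH_FRAGMENTS + index) so that free_pagelist() can copy
++** the bytes back into the user buffer and return the fragment to the pool.
++*/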
++static int
++create_pagelist(char __user *buf, size_t count, unsigned short type,
++	struct task_struct *task, PAGELIST_T ** ppagelist)
++{
++	PAGELIST_T *pagelist;
++	struct page **pages;
++	unsigned long *addrs;
++	unsigned int num_pages, offset, i;
++	char *addr, *base_addr, *next_addr;
++	int run, addridx, actual_pages;
++        unsigned long *need_release;
++
++	offset = (unsigned int)buf & (PAGE_SIZE - 1);
++	num_pages = (count + offset + PAGE_SIZE - 1) / PAGE_SIZE;
++
++	*ppagelist = NULL;
++
++	/* Allocate enough storage to hold the page pointers and the page
++	** list
++	*/
++	pagelist = kmalloc(sizeof(PAGELIST_T) +
++                           (num_pages * sizeof(unsigned long)) +
++                           sizeof(unsigned long) +
++                           (num_pages * sizeof(pages[0])),
++                           GFP_KERNEL);
++
++	vchiq_log_trace(vchiq_arm_log_level,
++		"create_pagelist - %x", (unsigned int)pagelist);
++	if (!pagelist)
++		return -ENOMEM;
++
++	addrs = pagelist->addrs;
++        need_release = (unsigned long *)(addrs + num_pages);
++	pages = (struct page **)(addrs + num_pages + 1);
++
++	if (is_vmalloc_addr(buf)) {
++		int dir = (type == PAGELIST_WRITE) ?
++			DMA_TO_DEVICE : DMA_FROM_DEVICE;
++		unsigned long length = count;
++		unsigned int off = offset;
++
++		for (actual_pages = 0; actual_pages < num_pages;
++		     actual_pages++) {
++			struct page *pg = vmalloc_to_page(buf + (actual_pages *
++								 PAGE_SIZE));
++			size_t bytes = PAGE_SIZE - off;
++
++			if (bytes > length)
++				bytes = length;
++			pages[actual_pages] = pg;
++			dmac_map_area(page_address(pg) + off, bytes, dir);
++			length -= bytes;
++			off = 0;
++		}
++		*need_release = 0; /* do not try and release vmalloc pages */
++	} else {
++		down_read(&task->mm->mmap_sem);
++		actual_pages = get_user_pages(task, task->mm,
++				          (unsigned long)buf & ~(PAGE_SIZE - 1),
++					  num_pages,
++					  (type == PAGELIST_READ) /*Write */ ,
++					  0 /*Force */ ,
++					  pages,
++					  NULL /*vmas */);
++		up_read(&task->mm->mmap_sem);
++
++		if (actual_pages != num_pages) {
++			vchiq_log_info(vchiq_arm_log_level,
++				       "create_pagelist - only %d/%d pages locked",
++				       actual_pages,
++				       num_pages);
++
++			/* This is probably due to the process being killed */
++			while (actual_pages > 0)
++			{
++				actual_pages--;
++				page_cache_release(pages[actual_pages]);
++			}
++			kfree(pagelist);
++			if (actual_pages == 0)
++				actual_pages = -ENOMEM;
++			return actual_pages;
++		}
++		*need_release = 1; /* release user pages */
++	}
++
++	pagelist->length = count;
++	pagelist->type = type;
++	pagelist->offset = offset;
++
++	/* Group the pages into runs of contiguous pages */
++
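++	/* Each entry packs the run length into the low bits of the
++	 * page-aligned bus address: (address of first page) + (number of
++	 * additional contiguous pages), for consumption by the VideoCore.
++	 */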
++	base_addr = VCHIQ_ARM_ADDRESS(page_address(pages[0]));
++	next_addr = base_addr + PAGE_SIZE;
++	addridx = 0;
++	run = 0;
++
++	for (i = 1; i < num_pages; i++) {
++		addr = VCHIQ_ARM_ADDRESS(page_address(pages[i]));
++		if ((addr == next_addr) && (run < (PAGE_SIZE - 1))) {
++			next_addr += PAGE_SIZE;
++			run++;
++		} else {
++			addrs[addridx] = (unsigned long)base_addr + run;
++			addridx++;
++			base_addr = addr;
++			next_addr = addr + PAGE_SIZE;
++			run = 0;
++		}
++	}
++
++	addrs[addridx] = (unsigned long)base_addr + run;
++	addridx++;
++
++	/* Partial cache lines (fragments) require special measures */
++	if ((type == PAGELIST_READ) &&
++		((pagelist->offset & (g_cache_line_size - 1)) ||
++		((pagelist->offset + pagelist->length) &
++		(g_cache_line_size - 1)))) {
++		char *fragments;
++
++		if (down_interruptible(&g_free_fragments_sema) != 0) {
++			kfree(pagelist);
++			return -EINTR;
++		}
++
++		WARN_ON(g_free_fragments == NULL);
++
++		down(&g_free_fragments_mutex);
++		fragments = g_free_fragments;
++		WARN_ON(fragments == NULL);
++		g_free_fragments = *(char **) g_free_fragments;
++		up(&g_free_fragments_mutex);
++		pagelist->type = PAGELIST_READ_WITH_FRAGMENTS +
++			(fragments - g_fragments_base) / g_fragments_size;
++	}
++
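++	/* The pagelist itself was kmalloc'd (cached), so flush it to the
++	 * point of coherency before its bus address is handed to the
++	 * VideoCore.
++	 */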
++	dmac_flush_range(pagelist, addrs + num_pages);
++
++	*ppagelist = pagelist;
++
++	return 0;
++}
++
++static void
++free_pagelist(PAGELIST_T *pagelist, int actual)
++{
++        unsigned long *need_release;
++	struct page **pages;
++	unsigned int num_pages, i;
++
++	vchiq_log_trace(vchiq_arm_log_level,
++		"free_pagelist - %x, %d", (unsigned int)pagelist, actual);
++
++	num_pages =
++		(pagelist->length + pagelist->offset + PAGE_SIZE - 1) /
++		PAGE_SIZE;
++
++        need_release = (unsigned long *)(pagelist->addrs + num_pages);
++	pages = (struct page **)(pagelist->addrs + num_pages + 1);
++
++	/* Deal with any partial cache lines (fragments) */
++	if (pagelist->type >= PAGELIST_READ_WITH_FRAGMENTS) {
++		char *fragments = g_fragments_base +
++			(pagelist->type - PAGELIST_READ_WITH_FRAGMENTS) *
++			g_fragments_size;
++		int head_bytes, tail_bytes;
++		head_bytes = (g_cache_line_size - pagelist->offset) &
++			(g_cache_line_size - 1);
++		tail_bytes = (pagelist->offset + actual) &
++			(g_cache_line_size - 1);
++
++		if ((actual >= 0) && (head_bytes != 0)) {
++			if (head_bytes > actual)
++				head_bytes = actual;
++
++			memcpy((char *)page_address(pages[0]) +
++				pagelist->offset,
++				fragments,
++				head_bytes);
++		}
++		if ((actual >= 0) && (head_bytes < actual) &&
++			(tail_bytes != 0)) {
++			memcpy((char *)page_address(pages[num_pages - 1]) +
++				((pagelist->offset + actual) &
++				(PAGE_SIZE - 1) & ~(g_cache_line_size - 1)),
++				fragments + g_cache_line_size,
++				tail_bytes);
++		}
++
++		down(&g_free_fragments_mutex);
++		*(char **)fragments = g_free_fragments;
++		g_free_fragments = fragments;
++		up(&g_free_fragments_mutex);
++		up(&g_free_fragments_sema);
++	}
++
++	if (*need_release) {
++		unsigned int length = pagelist->length;
++		unsigned int offset = pagelist->offset;
++
++		for (i = 0; i < num_pages; i++) {
++			struct page *pg = pages[i];
++
++			if (pagelist->type != PAGELIST_WRITE) {
++				unsigned int bytes = PAGE_SIZE - offset;
++
++				if (bytes > length)
++					bytes = length;
++				dmac_unmap_area(page_address(pg) + offset,
++						bytes, DMA_FROM_DEVICE);
++				length -= bytes;
++				offset = 0;
++				set_page_dirty(pg);
++			}
++			page_cache_release(pg);
++		}
++	}
++
++	kfree(pagelist);
++}
+--- /dev/null
++++ b/drivers/misc/vc04_services/interface/vchiq_arm/vchiq_arm.c
+@@ -0,0 +1,2903 @@
++/**
++ * Copyright (c) 2014 Raspberry Pi (Trading) Ltd. All rights reserved.
++ * Copyright (c) 2010-2012 Broadcom. All rights reserved.
++ *
++ * Redistribution and use in source and binary forms, with or without
++ * modification, are permitted provided that the following conditions
++ * are met:
++ * 1. Redistributions of source code must retain the above copyright
++ *    notice, this list of conditions, and the following disclaimer,
++ *    without modification.
++ * 2. Redistributions in binary form must reproduce the above copyright
++ *    notice, this list of conditions and the following disclaimer in the
++ *    documentation and/or other materials provided with the distribution.
++ * 3. The names of the above-listed copyright holders may not be used
++ *    to endorse or promote products derived from this software without
++ *    specific prior written permission.
++ *
++ * ALTERNATIVELY, this software may be distributed under the terms of the
++ * GNU General Public License ("GPL") version 2, as published by the Free
++ * Software Foundation.
++ *
++ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
++ * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
++ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
++ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
++ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
++ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
++ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
++ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
++ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
++ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
++ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
++ */
++
++#include <linux/kernel.h>
++#include <linux/module.h>
++#include <linux/types.h>
++#include <linux/errno.h>
++#include <linux/cdev.h>
++#include <linux/fs.h>
++#include <linux/device.h>
++#include <linux/mm.h>
++#include <linux/highmem.h>
++#include <linux/pagemap.h>
++#include <linux/bug.h>
++#include <linux/semaphore.h>
++#include <linux/list.h>
++#include <linux/of.h>
++#include <linux/platform_device.h>
++#include <soc/bcm2835/raspberrypi-firmware.h>
++
++#include "vchiq_core.h"
++#include "vchiq_ioctl.h"
++#include "vchiq_arm.h"
++#include "vchiq_debugfs.h"
++#include "vchiq_killable.h"
++
++#define DEVICE_NAME "vchiq"
++
++/* Override the default prefix, which would be vchiq_arm (from the filename) */
++#undef MODULE_PARAM_PREFIX
++#define MODULE_PARAM_PREFIX DEVICE_NAME "."
++
++#define VCHIQ_MINOR 0
++
++/* Some per-instance constants */
++#define MAX_COMPLETIONS 16
++#define MAX_SERVICES 64
++#define MAX_ELEMENTS 8
++#define MSG_QUEUE_SIZE 64
++
++#define KEEPALIVE_VER 1
++#define KEEPALIVE_VER_MIN KEEPALIVE_VER
++
++/* Run time control of log level, based on KERN_XXX level. */
++int vchiq_arm_log_level = VCHIQ_LOG_DEFAULT;
++int vchiq_susp_log_level = VCHIQ_LOG_ERROR;
++
++#define SUSPEND_TIMER_TIMEOUT_MS 100
++#define SUSPEND_RETRY_TIMER_TIMEOUT_MS 1000
++
++#define VC_SUSPEND_NUM_OFFSET 3 /* number of values before idle which are -ve */
++static const char *const suspend_state_names[] = {
++	"VC_SUSPEND_FORCE_CANCELED",
++	"VC_SUSPEND_REJECTED",
++	"VC_SUSPEND_FAILED",
++	"VC_SUSPEND_IDLE",
++	"VC_SUSPEND_REQUESTED",
++	"VC_SUSPEND_IN_PROGRESS",
++	"VC_SUSPEND_SUSPENDED"
++};
++#define VC_RESUME_NUM_OFFSET 1 /* number of values before idle which are -ve */
++static const char *const resume_state_names[] = {
++	"VC_RESUME_FAILED",
++	"VC_RESUME_IDLE",
++	"VC_RESUME_REQUESTED",
++	"VC_RESUME_IN_PROGRESS",
++	"VC_RESUME_RESUMED"
++};
++/* The number of times we allow force suspend to timeout before actually
++** _forcing_ suspend.  This is to cater for SW which fails to release vchiq
++** correctly - we don't want to prevent ARM suspend indefinitely in this case.
++*/
++#define FORCE_SUSPEND_FAIL_MAX 8
++
++/* The time in ms allowed for videocore to go idle when force suspend has been
++ * requested */
++#define FORCE_SUSPEND_TIMEOUT_MS 200
++
++
++static void suspend_timer_callback(unsigned long context);
++
++
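++/* Per-service state for a userspace client. msg_insert/msg_remove are
++ * free-running indices into msg_queue; MSG_QUEUE_SIZE is a power of two,
++ * so an index is reduced with a simple mask whenever the queue is
++ * accessed.
++ */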
++typedef struct user_service_struct {
++	VCHIQ_SERVICE_T *service;
++	void *userdata;
++	VCHIQ_INSTANCE_T instance;
++	char is_vchi;
++	char dequeue_pending;
++	char close_pending;
++	int message_available_pos;
++	int msg_insert;
++	int msg_remove;
++	struct semaphore insert_event;
++	struct semaphore remove_event;
++	struct semaphore close_event;
++	VCHIQ_HEADER_T * msg_queue[MSG_QUEUE_SIZE];
++} USER_SERVICE_T;
++
++struct bulk_waiter_node {
++	struct bulk_waiter bulk_waiter;
++	int pid;
++	struct list_head list;
++};
++
++struct vchiq_instance_struct {
++	VCHIQ_STATE_T *state;
++	VCHIQ_COMPLETION_DATA_T completions[MAX_COMPLETIONS];
++	int completion_insert;
++	int completion_remove;
++	struct semaphore insert_event;
++	struct semaphore remove_event;
++	struct mutex completion_mutex;
++
++	int connected;
++	int closing;
++	int pid;
++	int mark;
++	int use_close_delivered;
++	int trace;
++
++	struct list_head bulk_waiter_list;
++	struct mutex bulk_waiter_list_mutex;
++
++	VCHIQ_DEBUGFS_NODE_T debugfs_node;
++};
++
++typedef struct dump_context_struct {
++	char __user *buf;
++	size_t actual;
++	size_t space;
++	loff_t offset;
++} DUMP_CONTEXT_T;
++
++static struct cdev    vchiq_cdev;
++static dev_t          vchiq_devid;
++static VCHIQ_STATE_T g_state;
++static struct class  *vchiq_class;
++static struct device *vchiq_dev;
++static DEFINE_SPINLOCK(msg_queue_spinlock);
++
++static const char *const ioctl_names[] = {
++	"CONNECT",
++	"SHUTDOWN",
++	"CREATE_SERVICE",
++	"REMOVE_SERVICE",
++	"QUEUE_MESSAGE",
++	"QUEUE_BULK_TRANSMIT",
++	"QUEUE_BULK_RECEIVE",
++	"AWAIT_COMPLETION",
++	"DEQUEUE_MESSAGE",
++	"GET_CLIENT_ID",
++	"GET_CONFIG",
++	"CLOSE_SERVICE",
++	"USE_SERVICE",
++	"RELEASE_SERVICE",
++	"SET_SERVICE_OPTION",
++	"DUMP_PHYS_MEM",
++	"LIB_VERSION",
++	"CLOSE_DELIVERED"
++};
++
++vchiq_static_assert((sizeof(ioctl_names)/sizeof(ioctl_names[0])) ==
++	(VCHIQ_IOC_MAX + 1));
++
++static void
++dump_phys_mem(void *virt_addr, uint32_t num_bytes);
++
++/****************************************************************************
++*
++*   add_completion
++*
++***************************************************************************/
++
++static VCHIQ_STATUS_T
++add_completion(VCHIQ_INSTANCE_T instance, VCHIQ_REASON_T reason,
++	VCHIQ_HEADER_T *header, USER_SERVICE_T *user_service,
++	void *bulk_userdata)
++{
++	VCHIQ_COMPLETION_DATA_T *completion;
++	DEBUG_INITIALISE(g_state.local)
++
++	while (instance->completion_insert ==
++		(instance->completion_remove + MAX_COMPLETIONS)) {
++		/* Out of space - wait for the client */
++		DEBUG_TRACE(SERVICE_CALLBACK_LINE);
++		vchiq_log_trace(vchiq_arm_log_level,
++			"add_completion - completion queue full");
++		DEBUG_COUNT(COMPLETION_QUEUE_FULL_COUNT);
++		if (down_interruptible(&instance->remove_event) != 0) {
++			vchiq_log_info(vchiq_arm_log_level,
++				"service_callback interrupted");
++			return VCHIQ_RETRY;
++		} else if (instance->closing) {
++			vchiq_log_info(vchiq_arm_log_level,
++				"service_callback closing");
++			return VCHIQ_ERROR;
++		}
++		DEBUG_TRACE(SERVICE_CALLBACK_LINE);
++	}
++
++	completion =
++		 &instance->completions[instance->completion_insert &
++		 (MAX_COMPLETIONS - 1)];
++
++	completion->header = header;
++	completion->reason = reason;
++	/* N.B. service_userdata is updated while processing AWAIT_COMPLETION */
++	completion->service_userdata = user_service->service;
++	completion->bulk_userdata = bulk_userdata;
++
++	if (reason == VCHIQ_SERVICE_CLOSED) {
++		/* Take an extra reference, to be held until
++		   this CLOSED notification is delivered. */
++		lock_service(user_service->service);
++		if (instance->use_close_delivered)
++			user_service->close_pending = 1;
++	}
++
++	/* A write barrier is needed here to ensure that the entire completion
++		record is written out before the insert point. */
++	wmb();
++
++	if (reason == VCHIQ_MESSAGE_AVAILABLE)
++		user_service->message_available_pos =
++			instance->completion_insert;
++	instance->completion_insert++;
++
++	up(&instance->insert_event);
++
++	return VCHIQ_SUCCESS;
++}
++
++/****************************************************************************
++*
++*   service_callback
++*
++***************************************************************************/
++
++static VCHIQ_STATUS_T
++service_callback(VCHIQ_REASON_T reason, VCHIQ_HEADER_T *header,
++	VCHIQ_SERVICE_HANDLE_T handle, void *bulk_userdata)
++{
++	/* How do we ensure the callback goes to the right client?
++	** The service userdata points to a USER_SERVICE_T record containing
++	** the original callback and the user state structure, which contains a
++	** circular buffer for completion records.
++	*/
++	USER_SERVICE_T *user_service;
++	VCHIQ_SERVICE_T *service;
++	VCHIQ_INSTANCE_T instance;
++	DEBUG_INITIALISE(g_state.local)
++
++	DEBUG_TRACE(SERVICE_CALLBACK_LINE);
++
++	service = handle_to_service(handle);
++	BUG_ON(!service);
++	user_service = (USER_SERVICE_T *)service->base.userdata;
++	instance = user_service->instance;
++
++	if (!instance || instance->closing)
++		return VCHIQ_SUCCESS;
++
++	vchiq_log_trace(vchiq_arm_log_level,
++		"service_callback - service %lx(%d,%p), reason %d, header %lx, "
++		"instance %lx, bulk_userdata %lx",
++		(unsigned long)user_service,
++		service->localport, user_service->userdata,
++		reason, (unsigned long)header,
++		(unsigned long)instance, (unsigned long)bulk_userdata);
++
++	if (header && user_service->is_vchi) {
++		spin_lock(&msg_queue_spinlock);
++		while (user_service->msg_insert ==
++			(user_service->msg_remove + MSG_QUEUE_SIZE)) {
++			spin_unlock(&msg_queue_spinlock);
++			DEBUG_TRACE(SERVICE_CALLBACK_LINE);
++			DEBUG_COUNT(MSG_QUEUE_FULL_COUNT);
++			vchiq_log_trace(vchiq_arm_log_level,
++				"service_callback - msg queue full");
++			/* If there is no MESSAGE_AVAILABLE in the completion
++			** queue, add one
++			*/
++			if ((user_service->message_available_pos -
++				instance->completion_remove) < 0) {
++				VCHIQ_STATUS_T status;
++				vchiq_log_info(vchiq_arm_log_level,
++					"Inserting extra MESSAGE_AVAILABLE");
++				DEBUG_TRACE(SERVICE_CALLBACK_LINE);
++				status = add_completion(instance, reason,
++					NULL, user_service, bulk_userdata);
++				if (status != VCHIQ_SUCCESS) {
++					DEBUG_TRACE(SERVICE_CALLBACK_LINE);
++					return status;
++				}
++			}
++
++			DEBUG_TRACE(SERVICE_CALLBACK_LINE);
++			if (down_interruptible(&user_service->remove_event)
++				!= 0) {
++				vchiq_log_info(vchiq_arm_log_level,
++					"service_callback interrupted");
++				DEBUG_TRACE(SERVICE_CALLBACK_LINE);
++				return VCHIQ_RETRY;
++			} else if (instance->closing) {
++				vchiq_log_info(vchiq_arm_log_level,
++					"service_callback closing");
++				DEBUG_TRACE(SERVICE_CALLBACK_LINE);
++				return VCHIQ_ERROR;
++			}
++			DEBUG_TRACE(SERVICE_CALLBACK_LINE);
++			spin_lock(&msg_queue_spinlock);
++		}
++
++		user_service->msg_queue[user_service->msg_insert &
++			(MSG_QUEUE_SIZE - 1)] = header;
++		user_service->msg_insert++;
++		spin_unlock(&msg_queue_spinlock);
++
++		up(&user_service->insert_event);
++
++		/* If there is a thread waiting in DEQUEUE_MESSAGE, or if
++		** there is a MESSAGE_AVAILABLE in the completion queue then
++		** bypass the completion queue.
++		*/
++		if (((user_service->message_available_pos -
++			instance->completion_remove) >= 0) ||
++			user_service->dequeue_pending) {
++			DEBUG_TRACE(SERVICE_CALLBACK_LINE);
++			user_service->dequeue_pending = 0;
++			return VCHIQ_SUCCESS;
++		}
++
++		header = NULL;
++	}
++	DEBUG_TRACE(SERVICE_CALLBACK_LINE);
++
++	return add_completion(instance, reason, header, user_service,
++		bulk_userdata);
++}
++
++/****************************************************************************
++*
++*   user_service_free
++*
++***************************************************************************/
++static void
++user_service_free(void *userdata)
++{
++	kfree(userdata);
++}
++
++/****************************************************************************
++*
++*   close_delivered
++*
++***************************************************************************/
++static void close_delivered(USER_SERVICE_T *user_service)
++{
++	vchiq_log_info(vchiq_arm_log_level,
++		"close_delivered(handle=%x)",
++		user_service->service->handle);
++
++	if (user_service->close_pending) {
++		/* Allow the underlying service to be culled */
++		unlock_service(user_service->service);
++
++		/* Wake the user-thread blocked in close_ or remove_service */
++		up(&user_service->close_event);
++
++		user_service->close_pending = 0;
++	}
++}
++
++/****************************************************************************
++*
++*   vchiq_ioctl
++*
++***************************************************************************/
++static long
++vchiq_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
++{
++	VCHIQ_INSTANCE_T instance = file->private_data;
++	VCHIQ_STATUS_T status = VCHIQ_SUCCESS;
++	VCHIQ_SERVICE_T *service = NULL;
++	long ret = 0;
++	int i, rc;
++	DEBUG_INITIALISE(g_state.local)
++
++	vchiq_log_trace(vchiq_arm_log_level,
++		 "vchiq_ioctl - instance %x, cmd %s, arg %lx",
++		(unsigned int)instance,
++		((_IOC_TYPE(cmd) == VCHIQ_IOC_MAGIC) &&
++		(_IOC_NR(cmd) <= VCHIQ_IOC_MAX)) ?
++		ioctl_names[_IOC_NR(cmd)] : "<invalid>", arg);
++
++	switch (cmd) {
++	case VCHIQ_IOC_SHUTDOWN:
++		if (!instance->connected)
++			break;
++
++		/* Remove all services */
++		i = 0;
++		while ((service = next_service_by_instance(instance->state,
++			instance, &i)) != NULL) {
++			status = vchiq_remove_service(service->handle);
++			unlock_service(service);
++			if (status != VCHIQ_SUCCESS)
++				break;
++		}
++		service = NULL;
++
++		if (status == VCHIQ_SUCCESS) {
++			/* Wake the completion thread and ask it to exit */
++			instance->closing = 1;
++			up(&instance->insert_event);
++		}
++
++		break;
++
++	case VCHIQ_IOC_CONNECT:
++		if (instance->connected) {
++			ret = -EINVAL;
++			break;
++		}
++		rc = mutex_lock_interruptible(&instance->state->mutex);
++		if (rc != 0) {
++			vchiq_log_error(vchiq_arm_log_level,
++				"vchiq: connect: could not lock mutex for "
++				"state %d: %d",
++				instance->state->id, rc);
++			ret = -EINTR;
++			break;
++		}
++		status = vchiq_connect_internal(instance->state, instance);
++		mutex_unlock(&instance->state->mutex);
++
++		if (status == VCHIQ_SUCCESS)
++			instance->connected = 1;
++		else
++			vchiq_log_error(vchiq_arm_log_level,
++				"vchiq: could not connect: %d", status);
++		break;
++
++	case VCHIQ_IOC_CREATE_SERVICE: {
++		VCHIQ_CREATE_SERVICE_T args;
++		USER_SERVICE_T *user_service = NULL;
++		void *userdata;
++		int srvstate;
++
++		if (copy_from_user
++			 (&args, (const void __user *)arg,
++			  sizeof(args)) != 0) {
++			ret = -EFAULT;
++			break;
++		}
++
++		user_service = kmalloc(sizeof(USER_SERVICE_T), GFP_KERNEL);
++		if (!user_service) {
++			ret = -ENOMEM;
++			break;
++		}
++
++		if (args.is_open) {
++			if (!instance->connected) {
++				ret = -ENOTCONN;
++				kfree(user_service);
++				break;
++			}
++			srvstate = VCHIQ_SRVSTATE_OPENING;
++		} else {
++			srvstate =
++				 instance->connected ?
++				 VCHIQ_SRVSTATE_LISTENING :
++				 VCHIQ_SRVSTATE_HIDDEN;
++		}
++
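++		/* Keep the client's userdata in the USER_SERVICE_T and point
++		** params.userdata at that structure instead, so that
++		** service_callback() can find its way back to this service.
++		*/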
++		userdata = args.params.userdata;
++		args.params.callback = service_callback;
++		args.params.userdata = user_service;
++		service = vchiq_add_service_internal(
++				instance->state,
++				&args.params, srvstate,
++				instance, user_service_free);
++
++		if (service != NULL) {
++			user_service->service = service;
++			user_service->userdata = userdata;
++			user_service->instance = instance;
++			user_service->is_vchi = (args.is_vchi != 0);
++			user_service->dequeue_pending = 0;
++			user_service->close_pending = 0;
++			user_service->message_available_pos =
++				instance->completion_remove - 1;
++			user_service->msg_insert = 0;
++			user_service->msg_remove = 0;
++			sema_init(&user_service->insert_event, 0);
++			sema_init(&user_service->remove_event, 0);
++			sema_init(&user_service->close_event, 0);
++
++			if (args.is_open) {
++				status = vchiq_open_service_internal
++					(service, instance->pid);
++				if (status != VCHIQ_SUCCESS) {
++					vchiq_remove_service(service->handle);
++					service = NULL;
++					ret = (status == VCHIQ_RETRY) ?
++						-EINTR : -EIO;
++					break;
++				}
++			}
++
++			if (copy_to_user((void __user *)
++				&(((VCHIQ_CREATE_SERVICE_T __user *)
++					arg)->handle),
++				(const void *)&service->handle,
++				sizeof(service->handle)) != 0) {
++				ret = -EFAULT;
++				vchiq_remove_service(service->handle);
++			}
++
++			service = NULL;
++		} else {
++			ret = -EEXIST;
++			kfree(user_service);
++		}
++	} break;
++
++	case VCHIQ_IOC_CLOSE_SERVICE: {
++		VCHIQ_SERVICE_HANDLE_T handle = (VCHIQ_SERVICE_HANDLE_T)arg;
++
++		service = find_service_for_instance(instance, handle);
++		if (service != NULL) {
++			USER_SERVICE_T *user_service =
++				(USER_SERVICE_T *)service->base.userdata;
++			/* close_pending is false on first entry, and when the
++                           wait in vchiq_close_service has been interrupted. */
++			if (!user_service->close_pending) {
++				status = vchiq_close_service(service->handle);
++				if (status != VCHIQ_SUCCESS)
++					break;
++			}
++
++			/* close_pending is true once the underlying service
++			   has been closed until the client library calls the
++			   CLOSE_DELIVERED ioctl, signalling close_event. */
++			if (user_service->close_pending &&
++				down_interruptible(&user_service->close_event))
++				status = VCHIQ_RETRY;
++		}
++		else
++			ret = -EINVAL;
++	} break;
++
++	case VCHIQ_IOC_REMOVE_SERVICE: {
++		VCHIQ_SERVICE_HANDLE_T handle = (VCHIQ_SERVICE_HANDLE_T)arg;
++
++		service = find_service_for_instance(instance, handle);
++		if (service != NULL) {
++			USER_SERVICE_T *user_service =
++				(USER_SERVICE_T *)service->base.userdata;
++			/* close_pending is false on first entry, and when the
++                           wait in vchiq_close_service has been interrupted. */
++			if (!user_service->close_pending) {
++				status = vchiq_remove_service(service->handle);
++				if (status != VCHIQ_SUCCESS)
++					break;
++			}
++
++			/* close_pending is true once the underlying service
++			   has been closed until the client library calls the
++			   CLOSE_DELIVERED ioctl, signalling close_event. */
++			if (user_service->close_pending &&
++				down_interruptible(&user_service->close_event))
++				status = VCHIQ_RETRY;
++		}
++		else
++			ret = -EINVAL;
++	} break;
++
++	case VCHIQ_IOC_USE_SERVICE:
++	case VCHIQ_IOC_RELEASE_SERVICE:	{
++		VCHIQ_SERVICE_HANDLE_T handle = (VCHIQ_SERVICE_HANDLE_T)arg;
++
++		service = find_service_for_instance(instance, handle);
++		if (service != NULL) {
++			status = (cmd == VCHIQ_IOC_USE_SERVICE)	?
++				vchiq_use_service_internal(service) :
++				vchiq_release_service_internal(service);
++			if (status != VCHIQ_SUCCESS) {
++				vchiq_log_error(vchiq_susp_log_level,
++					"%s: cmd %s returned error %d for "
++					"service %c%c%c%c:%03d",
++					__func__,
++					(cmd == VCHIQ_IOC_USE_SERVICE) ?
++						"VCHIQ_IOC_USE_SERVICE" :
++						"VCHIQ_IOC_RELEASE_SERVICE",
++					status,
++					VCHIQ_FOURCC_AS_4CHARS(
++						service->base.fourcc),
++					service->client_id);
++				ret = -EINVAL;
++			}
++		} else
++			ret = -EINVAL;
++	} break;
++
++	case VCHIQ_IOC_QUEUE_MESSAGE: {
++		VCHIQ_QUEUE_MESSAGE_T args;
++		if (copy_from_user
++			 (&args, (const void __user *)arg,
++			  sizeof(args)) != 0) {
++			ret = -EFAULT;
++			break;
++		}
++
++		service = find_service_for_instance(instance, args.handle);
++
++		if ((service != NULL) && (args.count <= MAX_ELEMENTS)) {
++			/* Copy elements into kernel space */
++			VCHIQ_ELEMENT_T elements[MAX_ELEMENTS];
++			if (copy_from_user(elements, args.elements,
++				args.count * sizeof(VCHIQ_ELEMENT_T)) == 0)
++				status = vchiq_queue_message
++					(args.handle,
++					elements, args.count);
++			else
++				ret = -EFAULT;
++		} else {
++			ret = -EINVAL;
++		}
++	} break;
++
++	case VCHIQ_IOC_QUEUE_BULK_TRANSMIT:
++	case VCHIQ_IOC_QUEUE_BULK_RECEIVE: {
++		VCHIQ_QUEUE_BULK_TRANSFER_T args;
++		struct bulk_waiter_node *waiter = NULL;
++		VCHIQ_BULK_DIR_T dir =
++			(cmd == VCHIQ_IOC_QUEUE_BULK_TRANSMIT) ?
++			VCHIQ_BULK_TRANSMIT : VCHIQ_BULK_RECEIVE;
++
++		if (copy_from_user
++			(&args, (const void __user *)arg,
++			sizeof(args)) != 0) {
++			ret = -EFAULT;
++			break;
++		}
++
++		service = find_service_for_instance(instance, args.handle);
++		if (!service) {
++			ret = -EINVAL;
++			break;
++		}
++
++		if (args.mode == VCHIQ_BULK_MODE_BLOCKING) {
++			waiter = kzalloc(sizeof(struct bulk_waiter_node),
++				GFP_KERNEL);
++			if (!waiter) {
++				ret = -ENOMEM;
++				break;
++			}
++			args.userdata = &waiter->bulk_waiter;
++		} else if (args.mode == VCHIQ_BULK_MODE_WAITING) {
++			struct list_head *pos;
++			mutex_lock(&instance->bulk_waiter_list_mutex);
++			list_for_each(pos, &instance->bulk_waiter_list) {
++				if (list_entry(pos, struct bulk_waiter_node,
++					list)->pid == current->pid) {
++					waiter = list_entry(pos,
++						struct bulk_waiter_node,
++						list);
++					list_del(pos);
++					break;
++				}
++
++			}
++			mutex_unlock(&instance->bulk_waiter_list_mutex);
++			if (!waiter) {
++				vchiq_log_error(vchiq_arm_log_level,
++					"no bulk_waiter found for pid %d",
++					current->pid);
++				ret = -ESRCH;
++				break;
++			}
++			vchiq_log_info(vchiq_arm_log_level,
++				"found bulk_waiter %x for pid %d",
++				(unsigned int)waiter, current->pid);
++			args.userdata = &waiter->bulk_waiter;
++		}
++		status = vchiq_bulk_transfer
++			(args.handle,
++			 VCHI_MEM_HANDLE_INVALID,
++			 args.data, args.size,
++			 args.userdata, args.mode,
++			 dir);
++		if (!waiter)
++			break;
++		if ((status != VCHIQ_RETRY) || fatal_signal_pending(current) ||
++			!waiter->bulk_waiter.bulk) {
++			if (waiter->bulk_waiter.bulk) {
++				/* Cancel the signal when the transfer
++				** completes. */
++				spin_lock(&bulk_waiter_spinlock);
++				waiter->bulk_waiter.bulk->userdata = NULL;
++				spin_unlock(&bulk_waiter_spinlock);
++			}
++			kfree(waiter);
++		} else {
++			const VCHIQ_BULK_MODE_T mode_waiting =
++				VCHIQ_BULK_MODE_WAITING;
++			waiter->pid = current->pid;
++			mutex_lock(&instance->bulk_waiter_list_mutex);
++			list_add(&waiter->list, &instance->bulk_waiter_list);
++			mutex_unlock(&instance->bulk_waiter_list_mutex);
++			vchiq_log_info(vchiq_arm_log_level,
++				"saved bulk_waiter %x for pid %d",
++				(unsigned int)waiter, current->pid);
++
++			if (copy_to_user((void __user *)
++				&(((VCHIQ_QUEUE_BULK_TRANSFER_T __user *)
++					arg)->mode),
++				(const void *)&mode_waiting,
++				sizeof(mode_waiting)) != 0)
++				ret = -EFAULT;
++		}
++	} break;
++
++	case VCHIQ_IOC_AWAIT_COMPLETION: {
++		VCHIQ_AWAIT_COMPLETION_T args;
++
++		DEBUG_TRACE(AWAIT_COMPLETION_LINE);
++		if (!instance->connected) {
++			ret = -ENOTCONN;
++			break;
++		}
++
++		if (copy_from_user(&args, (const void __user *)arg,
++			sizeof(args)) != 0) {
++			ret = -EFAULT;
++			break;
++		}
++
++		mutex_lock(&instance->completion_mutex);
++
++		DEBUG_TRACE(AWAIT_COMPLETION_LINE);
++		while ((instance->completion_remove ==
++			instance->completion_insert)
++			&& !instance->closing) {
++			int rc;
++			DEBUG_TRACE(AWAIT_COMPLETION_LINE);
++			mutex_unlock(&instance->completion_mutex);
++			rc = down_interruptible(&instance->insert_event);
++			mutex_lock(&instance->completion_mutex);
++			if (rc != 0) {
++				DEBUG_TRACE(AWAIT_COMPLETION_LINE);
++				vchiq_log_info(vchiq_arm_log_level,
++					"AWAIT_COMPLETION interrupted");
++				ret = -EINTR;
++				break;
++			}
++		}
++		DEBUG_TRACE(AWAIT_COMPLETION_LINE);
++
++		/* A read memory barrier is needed to stop prefetch of a stale
++		** completion record (this pairs with the wmb() in
++		** add_completion()).
++		*/
++		rmb();
++
++		if (ret == 0) {
++			int msgbufcount = args.msgbufcount;
++			for (ret = 0; ret < args.count; ret++) {
++				VCHIQ_COMPLETION_DATA_T *completion;
++				VCHIQ_SERVICE_T *service;
++				USER_SERVICE_T *user_service;
++				VCHIQ_HEADER_T *header;
++				if (instance->completion_remove ==
++					instance->completion_insert)
++					break;
++				completion = &instance->completions[
++					instance->completion_remove &
++					(MAX_COMPLETIONS - 1)];
++
++				service = completion->service_userdata;
++				user_service = service->base.userdata;
++				completion->service_userdata =
++					user_service->userdata;
++
++				header = completion->header;
++				if (header) {
++					void __user *msgbuf;
++					int msglen;
++
++					msglen = header->size +
++						sizeof(VCHIQ_HEADER_T);
++					/* This must be a VCHIQ-style service */
++					if (args.msgbufsize < msglen) {
++						vchiq_log_error(
++							vchiq_arm_log_level,
++							"header %x: msgbufsize"
++							" %x < msglen %x",
++							(unsigned int)header,
++							args.msgbufsize,
++							msglen);
++						WARN(1, "invalid message "
++							"size\n");
++						if (ret == 0)
++							ret = -EMSGSIZE;
++						break;
++					}
++					if (msgbufcount <= 0)
++						/* Stall here for lack of a
++						** buffer for the message. */
++						break;
++					/* Get the pointer from user space */
++					msgbufcount--;
++					if (copy_from_user(&msgbuf,
++						(const void __user *)
++						&args.msgbufs[msgbufcount],
++						sizeof(msgbuf)) != 0) {
++						if (ret == 0)
++							ret = -EFAULT;
++						break;
++					}
++
++					/* Copy the message to user space */
++					if (copy_to_user(msgbuf, header,
++						msglen) != 0) {
++						if (ret == 0)
++							ret = -EFAULT;
++						break;
++					}
++
++					/* Now it has been copied, the message
++					** can be released. */
++					vchiq_release_message(service->handle,
++						header);
++
++					/* The completion must point to the
++					** msgbuf. */
++					completion->header = msgbuf;
++				}
++
++				if ((completion->reason ==
++					VCHIQ_SERVICE_CLOSED) &&
++					!instance->use_close_delivered)
++					unlock_service(service);
++
++				if (copy_to_user((void __user *)(
++					(size_t)args.buf +
++					ret * sizeof(VCHIQ_COMPLETION_DATA_T)),
++					completion,
++					sizeof(VCHIQ_COMPLETION_DATA_T)) != 0) {
++						if (ret == 0)
++							ret = -EFAULT;
++					break;
++				}
++
++				instance->completion_remove++;
++			}
++
++			if (msgbufcount != args.msgbufcount) {
++				if (copy_to_user((void __user *)
++					&((VCHIQ_AWAIT_COMPLETION_T *)arg)->
++						msgbufcount,
++					&msgbufcount,
++					sizeof(msgbufcount)) != 0) {
++					ret = -EFAULT;
++				}
++			}
++		}
++
++		if (ret != 0)
++			up(&instance->remove_event);
++		mutex_unlock(&instance->completion_mutex);
++		DEBUG_TRACE(AWAIT_COMPLETION_LINE);
++	} break;
++
++	case VCHIQ_IOC_DEQUEUE_MESSAGE: {
++		VCHIQ_DEQUEUE_MESSAGE_T args;
++		USER_SERVICE_T *user_service;
++		VCHIQ_HEADER_T *header;
++
++		DEBUG_TRACE(DEQUEUE_MESSAGE_LINE);
++		if (copy_from_user
++			 (&args, (const void __user *)arg,
++			  sizeof(args)) != 0) {
++			ret = -EFAULT;
++			break;
++		}
++		service = find_service_for_instance(instance, args.handle);
++		if (!service) {
++			ret = -EINVAL;
++			break;
++		}
++		user_service = (USER_SERVICE_T *)service->base.userdata;
++		if (user_service->is_vchi == 0) {
++			ret = -EINVAL;
++			break;
++		}
++
++		spin_lock(&msg_queue_spinlock);
++		if (user_service->msg_remove == user_service->msg_insert) {
++			if (!args.blocking) {
++				spin_unlock(&msg_queue_spinlock);
++				DEBUG_TRACE(DEQUEUE_MESSAGE_LINE);
++				ret = -EWOULDBLOCK;
++				break;
++			}
++			user_service->dequeue_pending = 1;
++			do {
++				spin_unlock(&msg_queue_spinlock);
++				DEBUG_TRACE(DEQUEUE_MESSAGE_LINE);
++				if (down_interruptible(
++					&user_service->insert_event) != 0) {
++					vchiq_log_info(vchiq_arm_log_level,
++						"DEQUEUE_MESSAGE interrupted");
++					ret = -EINTR;
++					break;
++				}
++				spin_lock(&msg_queue_spinlock);
++			} while (user_service->msg_remove ==
++				user_service->msg_insert);
++
++			if (ret)
++				break;
++		}
++
++		BUG_ON((int)(user_service->msg_insert -
++			user_service->msg_remove) < 0);
++
++		header = user_service->msg_queue[user_service->msg_remove &
++			(MSG_QUEUE_SIZE - 1)];
++		user_service->msg_remove++;
++		spin_unlock(&msg_queue_spinlock);
++
++		up(&user_service->remove_event);
++		if (header == NULL)
++			ret = -ENOTCONN;
++		else if (header->size <= args.bufsize) {
++			/* Copy to user space if msgbuf is not NULL */
++			if ((args.buf == NULL) ||
++				(copy_to_user((void __user *)args.buf,
++				header->data,
++				header->size) == 0)) {
++				ret = header->size;
++				vchiq_release_message(
++					service->handle,
++					header);
++			} else
++				ret = -EFAULT;
++		} else {
++			vchiq_log_error(vchiq_arm_log_level,
++				"header %x: bufsize %x < size %x",
++				(unsigned int)header, args.bufsize,
++				header->size);
++			WARN(1, "invalid size\n");
++			ret = -EMSGSIZE;
++		}
++		DEBUG_TRACE(DEQUEUE_MESSAGE_LINE);
++	} break;
++
++	case VCHIQ_IOC_GET_CLIENT_ID: {
++		VCHIQ_SERVICE_HANDLE_T handle = (VCHIQ_SERVICE_HANDLE_T)arg;
++
++		ret = vchiq_get_client_id(handle);
++	} break;
++
++	case VCHIQ_IOC_GET_CONFIG: {
++		VCHIQ_GET_CONFIG_T args;
++		VCHIQ_CONFIG_T config;
++
++		if (copy_from_user(&args, (const void __user *)arg,
++			sizeof(args)) != 0) {
++			ret = -EFAULT;
++			break;
++		}
++		if (args.config_size > sizeof(config)) {
++			ret = -EINVAL;
++			break;
++		}
++		status = vchiq_get_config(instance, args.config_size, &config);
++		if (status == VCHIQ_SUCCESS) {
++			if (copy_to_user((void __user *)args.pconfig,
++				    &config, args.config_size) != 0) {
++				ret = -EFAULT;
++				break;
++			}
++		}
++	} break;
++
++	case VCHIQ_IOC_SET_SERVICE_OPTION: {
++		VCHIQ_SET_SERVICE_OPTION_T args;
++
++		if (copy_from_user(
++			&args, (const void __user *)arg,
++			sizeof(args)) != 0) {
++			ret = -EFAULT;
++			break;
++		}
++
++		service = find_service_for_instance(instance, args.handle);
++		if (!service) {
++			ret = -EINVAL;
++			break;
++		}
++
++		status = vchiq_set_service_option(
++				args.handle, args.option, args.value);
++	} break;
++
++	case VCHIQ_IOC_DUMP_PHYS_MEM: {
++		VCHIQ_DUMP_MEM_T  args;
++
++		if (copy_from_user
++			 (&args, (const void __user *)arg,
++			  sizeof(args)) != 0) {
++			ret = -EFAULT;
++			break;
++		}
++		dump_phys_mem(args.virt_addr, args.num_bytes);
++	} break;
++
++	case VCHIQ_IOC_LIB_VERSION: {
++		unsigned int lib_version = (unsigned int)arg;
++
++		if (lib_version < VCHIQ_VERSION_MIN)
++			ret = -EINVAL;
++		else if (lib_version >= VCHIQ_VERSION_CLOSE_DELIVERED)
++			instance->use_close_delivered = 1;
++	} break;
++
++	case VCHIQ_IOC_CLOSE_DELIVERED: {
++		VCHIQ_SERVICE_HANDLE_T handle = (VCHIQ_SERVICE_HANDLE_T)arg;
++
++		service = find_closed_service_for_instance(instance, handle);
++		if (service != NULL) {
++			USER_SERVICE_T *user_service =
++				(USER_SERVICE_T *)service->base.userdata;
++			close_delivered(user_service);
++		}
++		else
++			ret = -EINVAL;
++	} break;
++
++	default:
++		ret = -ENOTTY;
++		break;
++	}
++
++	if (service)
++		unlock_service(service);
++
++	if (ret == 0) {
++		if (status == VCHIQ_ERROR)
++			ret = -EIO;
++		else if (status == VCHIQ_RETRY)
++			ret = -EINTR;
++	}
++
++	if ((status == VCHIQ_SUCCESS) && (ret < 0) && (ret != -EINTR) &&
++		(ret != -EWOULDBLOCK))
++		vchiq_log_info(vchiq_arm_log_level,
++			"  ioctl instance %lx, cmd %s -> status %d, %ld",
++			(unsigned long)instance,
++			(_IOC_NR(cmd) <= VCHIQ_IOC_MAX) ?
++				ioctl_names[_IOC_NR(cmd)] :
++				"<invalid>",
++			status, ret);
++	else
++		vchiq_log_trace(vchiq_arm_log_level,
++			"  ioctl instance %lx, cmd %s -> status %d, %ld",
++			(unsigned long)instance,
++			(_IOC_NR(cmd) <= VCHIQ_IOC_MAX) ?
++				ioctl_names[_IOC_NR(cmd)] :
++				"<invalid>",
++			status, ret);
++
++	return ret;
++}
++
++/****************************************************************************
++*
++*   vchiq_open
++*
++***************************************************************************/
++
++static int
++vchiq_open(struct inode *inode, struct file *file)
++{
++	int dev = iminor(inode) & 0x0f;
++	vchiq_log_info(vchiq_arm_log_level, "vchiq_open");
++	switch (dev) {
++	case VCHIQ_MINOR: {
++		int ret;
++		VCHIQ_STATE_T *state = vchiq_get_state();
++		VCHIQ_INSTANCE_T instance;
++
++		if (!state) {
++			vchiq_log_error(vchiq_arm_log_level,
++				"vchiq has no connection to VideoCore");
++			return -ENOTCONN;
++		}
++
++		instance = kzalloc(sizeof(*instance), GFP_KERNEL);
++		if (!instance)
++			return -ENOMEM;
++
++		instance->state = state;
++		instance->pid = current->tgid;
++
++		ret = vchiq_debugfs_add_instance(instance);
++		if (ret != 0) {
++			kfree(instance);
++			return ret;
++		}
++
++		sema_init(&instance->insert_event, 0);
++		sema_init(&instance->remove_event, 0);
++		mutex_init(&instance->completion_mutex);
++		mutex_init(&instance->bulk_waiter_list_mutex);
++		INIT_LIST_HEAD(&instance->bulk_waiter_list);
++
++		file->private_data = instance;
++	} break;
++
++	default:
++		vchiq_log_error(vchiq_arm_log_level,
++			"Unknown minor device: %d", dev);
++		return -ENXIO;
++	}
++
++	return 0;
++}
++
++/****************************************************************************
++*
++*   vchiq_release
++*
++***************************************************************************/
++
++static int
++vchiq_release(struct inode *inode, struct file *file)
++{
++	int dev = iminor(inode) & 0x0f;
++	int ret = 0;
++	switch (dev) {
++	case VCHIQ_MINOR: {
++		VCHIQ_INSTANCE_T instance = file->private_data;
++		VCHIQ_STATE_T *state = vchiq_get_state();
++		VCHIQ_SERVICE_T *service;
++		int i;
++
++		vchiq_log_info(vchiq_arm_log_level,
++			"vchiq_release: instance=%lx",
++			(unsigned long)instance);
++
++		if (!state) {
++			ret = -EPERM;
++			goto out;
++		}
++
++		/* Ensure videocore is awake to allow termination. */
++		vchiq_use_internal(instance->state, NULL,
++				USE_TYPE_VCHIQ);
++
++		mutex_lock(&instance->completion_mutex);
++
++		/* Wake the completion thread and ask it to exit */
++		instance->closing = 1;
++		up(&instance->insert_event);
++
++		mutex_unlock(&instance->completion_mutex);
++
++		/* Wake the slot handler if the completion queue is full. */
++		up(&instance->remove_event);
++
++		/* Mark all services for termination... */
++		i = 0;
++		while ((service = next_service_by_instance(state, instance,
++			&i)) !=	NULL) {
++			USER_SERVICE_T *user_service = service->base.userdata;
++
++			/* Wake the slot handler if the msg queue is full. */
++			up(&user_service->remove_event);
++
++			vchiq_terminate_service_internal(service);
++			unlock_service(service);
++		}
++
++		/* ...and wait for them to die */
++		i = 0;
++		while ((service = next_service_by_instance(state, instance, &i))
++			!= NULL) {
++			USER_SERVICE_T *user_service = service->base.userdata;
++
++			down(&service->remove_event);
++
++			BUG_ON(service->srvstate != VCHIQ_SRVSTATE_FREE);
++
++			spin_lock(&msg_queue_spinlock);
++
++			while (user_service->msg_remove !=
++				user_service->msg_insert) {
++				VCHIQ_HEADER_T *header = user_service->
++					msg_queue[user_service->msg_remove &
++						(MSG_QUEUE_SIZE - 1)];
++				user_service->msg_remove++;
++				spin_unlock(&msg_queue_spinlock);
++
++				if (header)
++					vchiq_release_message(
++						service->handle,
++						header);
++				spin_lock(&msg_queue_spinlock);
++			}
++
++			spin_unlock(&msg_queue_spinlock);
++
++			unlock_service(service);
++		}
++
++		/* Release any closed services */
++		while (instance->completion_remove !=
++			instance->completion_insert) {
++			VCHIQ_COMPLETION_DATA_T *completion;
++			VCHIQ_SERVICE_T *service;
++			completion = &instance->completions[
++				instance->completion_remove &
++				(MAX_COMPLETIONS - 1)];
++			service = completion->service_userdata;
++			if (completion->reason == VCHIQ_SERVICE_CLOSED)
++			{
++				USER_SERVICE_T *user_service =
++					service->base.userdata;
++
++				/* Wake any blocked user-thread */
++				if (instance->use_close_delivered)
++					up(&user_service->close_event);
++				unlock_service(service);
++			}
++			instance->completion_remove++;
++		}
++
++		/* Release the PEER service count. */
++		vchiq_release_internal(instance->state, NULL);
++
++		{
++			struct list_head *pos, *next;
++			list_for_each_safe(pos, next,
++				&instance->bulk_waiter_list) {
++				struct bulk_waiter_node *waiter;
++				waiter = list_entry(pos,
++					struct bulk_waiter_node,
++					list);
++				list_del(pos);
++				vchiq_log_info(vchiq_arm_log_level,
++					"bulk_waiter - cleaned up %x "
++					"for pid %d",
++					(unsigned int)waiter, waiter->pid);
++				kfree(waiter);
++			}
++		}
++
++		vchiq_debugfs_remove_instance(instance);
++
++		kfree(instance);
++		file->private_data = NULL;
++	} break;
++
++	default:
++		vchiq_log_error(vchiq_arm_log_level,
++			"Unknown minor device: %d", dev);
++		ret = -ENXIO;
++	}
++
++out:
++	return ret;
++}
++
++/****************************************************************************
++*
++*   vchiq_dump
++*
++***************************************************************************/
++
++void
++vchiq_dump(void *dump_context, const char *str, int len)
++{
++	DUMP_CONTEXT_T *context = (DUMP_CONTEXT_T *)dump_context;
++
++	if ((context->actual >= 0) && (context->actual < context->space)) {
++		int copy_bytes;
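++		/* Skip over any bytes already returned by a previous read -
++		** context->offset carries the caller's file position. */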
++		if (context->offset > 0) {
++			int skip_bytes = min(len, (int)context->offset);
++			str += skip_bytes;
++			len -= skip_bytes;
++			context->offset -= skip_bytes;
++			if (context->offset > 0)
++				return;
++		}
++		copy_bytes = min(len, (int)(context->space - context->actual));
++		if (copy_bytes == 0)
++			return;
++		if (copy_to_user(context->buf + context->actual, str,
++			copy_bytes)) {
++			/* Report the fault to the reader and stop dumping. */
++			context->actual = -EFAULT;
++			return;
++		}
++		context->actual += copy_bytes;
++		len -= copy_bytes;
++
++		/* If the terminating NUL is included in the length, then it
++		** marks the end of a line and should be replaced with a
++		** carriage return. */
++		if ((len == 0) && (str[copy_bytes - 1] == '\0')) {
++			char cr = '\n';
++			if (copy_to_user(context->buf + context->actual - 1,
++				&cr, 1))
++				context->actual = -EFAULT;
++		}
++	}
++}
++
++/****************************************************************************
++*
++*   vchiq_dump_platform_instances
++*
++***************************************************************************/
++
++void
++vchiq_dump_platform_instances(void *dump_context)
++{
++	VCHIQ_STATE_T *state = vchiq_get_state();
++	char buf[80];
++	int len;
++	int i;
++
++	/* There is no list of instances, so instead scan all services,
++		marking those that have been dumped. */
++
++	for (i = 0; i < state->unused_service; i++) {
++		VCHIQ_SERVICE_T *service = state->services[i];
++		VCHIQ_INSTANCE_T instance;
++
++		if (service && (service->base.callback == service_callback)) {
++			instance = service->instance;
++			if (instance)
++				instance->mark = 0;
++		}
++	}
++
++	for (i = 0; i < state->unused_service; i++) {
++		VCHIQ_SERVICE_T *service = state->services[i];
++		VCHIQ_INSTANCE_T instance;
++
++		if (service && (service->base.callback == service_callback)) {
++			instance = service->instance;
++			if (instance && !instance->mark) {
++				len = snprintf(buf, sizeof(buf),
++					"Instance %x: pid %d,%s completions "
++						"%d/%d",
++					(unsigned int)instance, instance->pid,
++					instance->connected ? " connected, " :
++						"",
++					instance->completion_insert -
++						instance->completion_remove,
++					MAX_COMPLETIONS);
++
++				vchiq_dump(dump_context, buf, len + 1);
++
++				instance->mark = 1;
++			}
++		}
++	}
++}
++
++/****************************************************************************
++*
++*   vchiq_dump_platform_service_state
++*
++***************************************************************************/
++
++void
++vchiq_dump_platform_service_state(void *dump_context, VCHIQ_SERVICE_T *service)
++{
++	USER_SERVICE_T *user_service = (USER_SERVICE_T *)service->base.userdata;
++	char buf[80];
++	int len;
++
++	len = snprintf(buf, sizeof(buf), "  instance %x",
++		(unsigned int)service->instance);
++
++	if ((service->base.callback == service_callback) &&
++		user_service->is_vchi) {
++		len += snprintf(buf + len, sizeof(buf) - len,
++			", %d/%d messages",
++			user_service->msg_insert - user_service->msg_remove,
++			MSG_QUEUE_SIZE);
++
++		if (user_service->dequeue_pending)
++			len += snprintf(buf + len, sizeof(buf) - len,
++				" (dequeue pending)");
++	}
++
++	vchiq_dump(dump_context, buf, len + 1);
++}
++
++/****************************************************************************
++*
++*   dump_phys_mem
++*
++***************************************************************************/
++
++static void
++dump_phys_mem(void *virt_addr, uint32_t num_bytes)
++{
++	int            rc;
++	uint8_t       *end_virt_addr = virt_addr + num_bytes;
++	int            num_pages;
++	int            offset;
++	int            end_offset;
++	int            page_idx;
++	int            prev_idx;
++	struct page   *page;
++	struct page  **pages;
++	uint8_t       *kmapped_virt_ptr;
++
++	/* Align virtAddr and endVirtAddr to 16 byte boundaries. */
++
++	virt_addr = (void *)((unsigned long)virt_addr & ~0x0fuL);
++	end_virt_addr = (void *)(((unsigned long)end_virt_addr + 15uL) &
++		~0x0fuL);
++
++	offset = (int)(long)virt_addr & (PAGE_SIZE - 1);
++	end_offset = (int)(long)end_virt_addr & (PAGE_SIZE - 1);
++
++	num_pages = (offset + num_bytes + PAGE_SIZE - 1) / PAGE_SIZE;
++
++	pages = kmalloc(sizeof(struct page *) * num_pages, GFP_KERNEL);
++	if (pages == NULL) {
++		vchiq_log_error(vchiq_arm_log_level,
++			"Unable to allocation memory for %d pages\n",
++			num_pages);
++		return;
++	}
++
++	down_read(&current->mm->mmap_sem);
++	rc = get_user_pages(current,      /* task */
++		current->mm,              /* mm */
++		(unsigned long)virt_addr, /* start */
++		num_pages,                /* len */
++		0,                        /* write */
++		0,                        /* force */
++		pages,                    /* pages (array of page pointers) */
++		NULL);                    /* vmas */
++	up_read(&current->mm->mmap_sem);
++
++	prev_idx = -1;
++	page = NULL;
++
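++	/* Walk the pinned pages 16 bytes at a time, kmapping each page on
++	 * first use and dumping its contents at trace log level. */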
++	while (offset < end_offset) {
++
++		int page_offset = offset % PAGE_SIZE;
++		page_idx = offset / PAGE_SIZE;
++
++		if (page_idx != prev_idx) {
++
++			if (page != NULL)
++				kunmap(page);
++			page = pages[page_idx];
++			kmapped_virt_ptr = kmap(page);
++
++			prev_idx = page_idx;
++		}
++
++		if (vchiq_arm_log_level >= VCHIQ_LOG_TRACE)
++			vchiq_log_dump_mem("ph",
++				(uint32_t)(unsigned long)&kmapped_virt_ptr[
++					page_offset],
++				&kmapped_virt_ptr[page_offset], 16);
++
++		offset += 16;
++	}
++	if (page != NULL)
++		kunmap(page);
++
++	for (page_idx = 0; page_idx < num_pages; page_idx++)
++		page_cache_release(pages[page_idx]);
++
++	kfree(pages);
++}
++
++/****************************************************************************
++*
++*   vchiq_read
++*
++***************************************************************************/
++
++static ssize_t
++vchiq_read(struct file *file, char __user *buf,
++	size_t count, loff_t *ppos)
++{
++	DUMP_CONTEXT_T context;
++	context.buf = buf;
++	context.actual = 0;
++	context.space = count;
++	context.offset = *ppos;
++
++	vchiq_dump_state(&context, &g_state);
++
++	*ppos += context.actual;
++
++	return context.actual;
++}
++
++VCHIQ_STATE_T *
++vchiq_get_state(void)
++{
++
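++	/* Only hand out the global state once the remote (VideoCore) side
++	 * has marked itself initialised. */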
++	if (g_state.remote == NULL)
++		printk(KERN_ERR "%s: g_state.remote == NULL\n", __func__);
++	else if (g_state.remote->initialised != 1)
++		printk(KERN_NOTICE "%s: g_state.remote->initialised != 1 (%d)\n",
++			__func__, g_state.remote->initialised);
++
++	return ((g_state.remote != NULL) &&
++		(g_state.remote->initialised == 1)) ? &g_state : NULL;
++}
++
++static const struct file_operations
++vchiq_fops = {
++	.owner = THIS_MODULE,
++	.unlocked_ioctl = vchiq_ioctl,
++	.open = vchiq_open,
++	.release = vchiq_release,
++	.read = vchiq_read
++};
++
++/*
++ * Autosuspend related functionality
++ */
++
++int
++vchiq_videocore_wanted(VCHIQ_STATE_T *state)
++{
++	VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
++	if (!arm_state)
++		/* autosuspend not supported - always return wanted */
++		return 1;
++	else if (arm_state->blocked_count)
++		return 1;
++	else if (!arm_state->videocore_use_count)
++		/* usage count zero - check for override unless we're forcing */
++		if (arm_state->resume_blocked)
++			return 0;
++		else
++			return vchiq_platform_videocore_wanted(state);
++	else
++		/* non-zero usage count - videocore still required */
++		return 1;
++}
++
++static VCHIQ_STATUS_T
++vchiq_keepalive_vchiq_callback(VCHIQ_REASON_T reason,
++	VCHIQ_HEADER_T *header,
++	VCHIQ_SERVICE_HANDLE_T service_user,
++	void *bulk_user)
++{
++	vchiq_log_error(vchiq_susp_log_level,
++		"%s callback reason %d", __func__, reason);
++	return 0;
++}
++
++static int
++vchiq_keepalive_thread_func(void *v)
++{
++	VCHIQ_STATE_T *state = (VCHIQ_STATE_T *) v;
++	VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
++
++	VCHIQ_STATUS_T status;
++	VCHIQ_INSTANCE_T instance;
++	VCHIQ_SERVICE_HANDLE_T ka_handle;
++
++	VCHIQ_SERVICE_PARAMS_T params = {
++		.fourcc      = VCHIQ_MAKE_FOURCC('K', 'E', 'E', 'P'),
++		.callback    = vchiq_keepalive_vchiq_callback,
++		.version     = KEEPALIVE_VER,
++		.version_min = KEEPALIVE_VER_MIN
++	};
++
++	status = vchiq_initialise(&instance);
++	if (status != VCHIQ_SUCCESS) {
++		vchiq_log_error(vchiq_susp_log_level,
++			"%s vchiq_initialise failed %d", __func__, status);
++		goto exit;
++	}
++
++	status = vchiq_connect(instance);
++	if (status != VCHIQ_SUCCESS) {
++		vchiq_log_error(vchiq_susp_log_level,
++			"%s vchiq_connect failed %d", __func__, status);
++		goto shutdown;
++	}
++
++	status = vchiq_add_service(instance, &params, &ka_handle);
++	if (status != VCHIQ_SUCCESS) {
++		vchiq_log_error(vchiq_susp_log_level,
++			"%s vchiq_open_service failed %d", __func__, status);
++		goto shutdown;
++	}
++
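++	/* Main loop: wait for ka_evt and forward queued use/release
++	 * requests from VideoCore to the local vchiq instance. */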
++	while (1) {
++		long rc = 0, uc = 0;
++		if (wait_for_completion_interruptible(&arm_state->ka_evt)
++				!= 0) {
++			vchiq_log_error(vchiq_susp_log_level,
++				"%s interrupted", __func__);
++			flush_signals(current);
++			continue;
++		}
++
++		/* read and clear counters.  Do release_count then use_count to
++		 * prevent getting more releases than uses */
++		rc = atomic_xchg(&arm_state->ka_release_count, 0);
++		uc = atomic_xchg(&arm_state->ka_use_count, 0);
++
++		/* Call use/release service the requisite number of times.
++		 * Process use before release so use counts don't go negative */
++		while (uc--) {
++			atomic_inc(&arm_state->ka_use_ack_count);
++			status = vchiq_use_service(ka_handle);
++			if (status != VCHIQ_SUCCESS) {
++				vchiq_log_error(vchiq_susp_log_level,
++					"%s vchiq_use_service error %d",
++					__func__, status);
++			}
++		}
++		while (rc--) {
++			status = vchiq_release_service(ka_handle);
++			if (status != VCHIQ_SUCCESS) {
++				vchiq_log_error(vchiq_susp_log_level,
++					"%s vchiq_release_service error %d",
++					__func__, status);
++			}
++		}
++	}
++
++shutdown:
++	vchiq_shutdown(instance);
++exit:
++	return 0;
++}
++
++
++
++VCHIQ_STATUS_T
++vchiq_arm_init_state(VCHIQ_STATE_T *state, VCHIQ_ARM_STATE_T *arm_state)
++{
++	VCHIQ_STATUS_T status = VCHIQ_SUCCESS;
++
++	if (arm_state) {
++		rwlock_init(&arm_state->susp_res_lock);
++
++		init_completion(&arm_state->ka_evt);
++		atomic_set(&arm_state->ka_use_count, 0);
++		atomic_set(&arm_state->ka_use_ack_count, 0);
++		atomic_set(&arm_state->ka_release_count, 0);
++
++		init_completion(&arm_state->vc_suspend_complete);
++
++		init_completion(&arm_state->vc_resume_complete);
++		/* Initialise to 'done' state.  We only want to block on resume
++		 * completion while videocore is suspended. */
++		set_resume_state(arm_state, VC_RESUME_RESUMED);
++
++		init_completion(&arm_state->resume_blocker);
++		/* Initialise to 'done' state.  We only want to block on this
++		 * completion while resume is blocked */
++		complete_all(&arm_state->resume_blocker);
++
++		init_completion(&arm_state->blocked_blocker);
++		/* Initialise to 'done' state.  We only want to block on this
++		 * completion while things are waiting on the resume blocker */
++		complete_all(&arm_state->blocked_blocker);
++
++		arm_state->suspend_timer_timeout = SUSPEND_TIMER_TIMEOUT_MS;
++		arm_state->suspend_timer_running = 0;
++		init_timer(&arm_state->suspend_timer);
++		arm_state->suspend_timer.data = (unsigned long)(state);
++		arm_state->suspend_timer.function = suspend_timer_callback;
++
++		arm_state->first_connect = 0;
++
++	}
++	return status;
++}
++
++/*
++** Functions to modify the state variables;
++**	set_suspend_state
++**	set_resume_state
++**
++** There are more state variables than we might like, so ensure they remain in
++** step.  Suspend and resume state are maintained separately, since most of
++** these state machines can operate independently.  However, there are a few
++** states where state transitions in one state machine cause a reset to the
++** other state machine.  In addition, there are some completion events which
++** need to occur on state machine reset and end-state(s), so these are also
++** dealt with in these functions.
++**
++** In all states we set the state variable according to the input, but in some
++** cases we perform additional steps outlined below;
++**
++** VC_SUSPEND_IDLE - Initialise the suspend completion at the same time.
++**			The suspend completion is completed after any suspend
++**			attempt.  When we reset the state machine we also reset
++**			the completion.  This reset occurs when videocore is
++**			resumed, and also if we initiate suspend after a suspend
++**			failure.
++**
++** VC_SUSPEND_IN_PROGRESS - This state is considered the point of no return for
++**			suspend - ie from this point on we must try to suspend
++**			before resuming can occur.  We therefore also reset the
++**			resume state machine to VC_RESUME_IDLE in this state.
++**
++** VC_SUSPEND_SUSPENDED - Suspend has completed successfully. Also call
++**			complete_all on the suspend completion to notify
++**			anything waiting for suspend to happen.
++**
++** VC_SUSPEND_REJECTED - Videocore rejected suspend. Videocore will also
++**			initiate resume, so no need to alter resume state.
++**			We call complete_all on the suspend completion to notify
++**			of suspend rejection.
++**
++** VC_SUSPEND_FAILED - We failed to initiate videocore suspend.  We notify the
++**			suspend completion and reset the resume state machine.
++**
++** VC_RESUME_IDLE - Initialise the resume completion at the same time.  The
++**			resume completion is in its 'done' state whenever
++**			videocore is running.  Therefore, the VC_RESUME_IDLE state
++**			implies that videocore is suspended.
++**			Hence, any thread which needs to wait until videocore is
++**			running can wait on this completion - it will only block
++**			if videocore is suspended.
++**
++** VC_RESUME_RESUMED - Resume has completed successfully.  Videocore is running.
++**			Call complete_all on the resume completion to unblock
++**			any threads waiting for resume.	 Also reset the suspend
++**			state machine to its idle state.
++**
++** VC_RESUME_FAILED - Currently unused - no mechanism to fail resume exists.
++*/
++
++void
++set_suspend_state(VCHIQ_ARM_STATE_T *arm_state,
++	enum vc_suspend_status new_state)
++{
++	/* set the state in all cases */
++	arm_state->vc_suspend_state = new_state;
++
++	/* state specific additional actions */
++	switch (new_state) {
++	case VC_SUSPEND_FORCE_CANCELED:
++		complete_all(&arm_state->vc_suspend_complete);
++		break;
++	case VC_SUSPEND_REJECTED:
++		complete_all(&arm_state->vc_suspend_complete);
++		break;
++	case VC_SUSPEND_FAILED:
++		complete_all(&arm_state->vc_suspend_complete);
++		arm_state->vc_resume_state = VC_RESUME_RESUMED;
++		complete_all(&arm_state->vc_resume_complete);
++		break;
++	case VC_SUSPEND_IDLE:
++		reinit_completion(&arm_state->vc_suspend_complete);
++		break;
++	case VC_SUSPEND_REQUESTED:
++		break;
++	case VC_SUSPEND_IN_PROGRESS:
++		set_resume_state(arm_state, VC_RESUME_IDLE);
++		break;
++	case VC_SUSPEND_SUSPENDED:
++		complete_all(&arm_state->vc_suspend_complete);
++		break;
++	default:
++		BUG();
++		break;
++	}
++}
++
++void
++set_resume_state(VCHIQ_ARM_STATE_T *arm_state,
++	enum vc_resume_status new_state)
++{
++	/* set the state in all cases */
++	arm_state->vc_resume_state = new_state;
++
++	/* state specific additional actions */
++	switch (new_state) {
++	case VC_RESUME_FAILED:
++		break;
++	case VC_RESUME_IDLE:
++		reinit_completion(&arm_state->vc_resume_complete);
++		break;
++	case VC_RESUME_REQUESTED:
++		break;
++	case VC_RESUME_IN_PROGRESS:
++		break;
++	case VC_RESUME_RESUMED:
++		complete_all(&arm_state->vc_resume_complete);
++		set_suspend_state(arm_state, VC_SUSPEND_IDLE);
++		break;
++	default:
++		BUG();
++		break;
++	}
++}
++
++
++/* should be called with the write lock held */
++inline void
++start_suspend_timer(VCHIQ_ARM_STATE_T *arm_state)
++{
++	del_timer(&arm_state->suspend_timer);
++	arm_state->suspend_timer.expires = jiffies +
++		msecs_to_jiffies(arm_state->
++			suspend_timer_timeout);
++	add_timer(&arm_state->suspend_timer);
++	arm_state->suspend_timer_running = 1;
++}
++
++/* should be called with the write lock held */
++static inline void
++stop_suspend_timer(VCHIQ_ARM_STATE_T *arm_state)
++{
++	if (arm_state->suspend_timer_running) {
++		del_timer(&arm_state->suspend_timer);
++		arm_state->suspend_timer_running = 0;
++	}
++}
++
++static inline int
++need_resume(VCHIQ_STATE_T *state)
++{
++	VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
++	return (arm_state->vc_suspend_state > VC_SUSPEND_IDLE) &&
++			(arm_state->vc_resume_state < VC_RESUME_REQUESTED) &&
++			vchiq_videocore_wanted(state);
++}
++
++static int
++block_resume(VCHIQ_ARM_STATE_T *arm_state)
++{
++	int status = VCHIQ_SUCCESS;
++	const unsigned long timeout_val =
++				msecs_to_jiffies(FORCE_SUSPEND_TIMEOUT_MS);
++	int resume_count = 0;
++
++	/* Allow any threads which were blocked by the last force suspend to
++	 * complete if they haven't already.  Only give this one shot; if
++	 * blocked_count is incremented after blocked_blocker is completed
++	 * (which only happens when blocked_count hits 0) then those threads
++	 * will have to wait until next time around */
++	if (arm_state->blocked_count) {
++		reinit_completion(&arm_state->blocked_blocker);
++		write_unlock_bh(&arm_state->susp_res_lock);
++		vchiq_log_info(vchiq_susp_log_level, "%s wait for previously "
++			"blocked clients", __func__);
++		if (wait_for_completion_interruptible_timeout(
++				&arm_state->blocked_blocker, timeout_val)
++					<= 0) {
++			vchiq_log_error(vchiq_susp_log_level, "%s wait for "
++				"previously blocked clients failed" , __func__);
++			status = VCHIQ_ERROR;
++			write_lock_bh(&arm_state->susp_res_lock);
++			goto out;
++		}
++		vchiq_log_info(vchiq_susp_log_level, "%s previously blocked "
++			"clients resumed", __func__);
++		write_lock_bh(&arm_state->susp_res_lock);
++	}
++
++	/* We need to wait for resume to complete if it's in process */
++	while (arm_state->vc_resume_state != VC_RESUME_RESUMED &&
++			arm_state->vc_resume_state > VC_RESUME_IDLE) {
++		if (resume_count > 1) {
++			status = VCHIQ_ERROR;
++			vchiq_log_error(vchiq_susp_log_level, "%s waited too "
++				"many times for resume" , __func__);
++			goto out;
++		}
++		write_unlock_bh(&arm_state->susp_res_lock);
++		vchiq_log_info(vchiq_susp_log_level, "%s wait for resume",
++			__func__);
++		if (wait_for_completion_interruptible_timeout(
++				&arm_state->vc_resume_complete, timeout_val)
++					<= 0) {
++			vchiq_log_error(vchiq_susp_log_level, "%s wait for "
++				"resume failed (%s)", __func__,
++				resume_state_names[arm_state->vc_resume_state +
++							VC_RESUME_NUM_OFFSET]);
++			status = VCHIQ_ERROR;
++			write_lock_bh(&arm_state->susp_res_lock);
++			goto out;
++		}
++		vchiq_log_info(vchiq_susp_log_level, "%s resumed", __func__);
++		write_lock_bh(&arm_state->susp_res_lock);
++		resume_count++;
++	}
++	reinit_completion(&arm_state->resume_blocker);
++	arm_state->resume_blocked = 1;
++
++out:
++	return status;
++}
++
++static inline void
++unblock_resume(VCHIQ_ARM_STATE_T *arm_state)
++{
++	complete_all(&arm_state->resume_blocker);
++	arm_state->resume_blocked = 0;
++}
++
++/* Initiate suspend via slot handler. Should be called with the write lock
++ * held */
++VCHIQ_STATUS_T
++vchiq_arm_vcsuspend(VCHIQ_STATE_T *state)
++{
++	VCHIQ_STATUS_T status = VCHIQ_ERROR;
++	VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
++
++	if (!arm_state)
++		goto out;
++
++	vchiq_log_trace(vchiq_susp_log_level, "%s", __func__);
++	status = VCHIQ_SUCCESS;
++
++
++	switch (arm_state->vc_suspend_state) {
++	case VC_SUSPEND_REQUESTED:
++		vchiq_log_info(vchiq_susp_log_level, "%s: suspend already "
++			"requested", __func__);
++		break;
++	case VC_SUSPEND_IN_PROGRESS:
++		vchiq_log_info(vchiq_susp_log_level, "%s: suspend already in "
++			"progress", __func__);
++		break;
++
++	default:
++		/* We don't expect to be in other states, so log but continue
++		 * anyway */
++		vchiq_log_error(vchiq_susp_log_level,
++			"%s unexpected suspend state %s", __func__,
++			suspend_state_names[arm_state->vc_suspend_state +
++						VC_SUSPEND_NUM_OFFSET]);
++		/* fall through */
++	case VC_SUSPEND_REJECTED:
++	case VC_SUSPEND_FAILED:
++		/* Ensure any idle state actions have been run */
++		set_suspend_state(arm_state, VC_SUSPEND_IDLE);
++		/* fall through */
++	case VC_SUSPEND_IDLE:
++		vchiq_log_info(vchiq_susp_log_level,
++			"%s: suspending", __func__);
++		set_suspend_state(arm_state, VC_SUSPEND_REQUESTED);
++		/* kick the slot handler thread to initiate suspend */
++		request_poll(state, NULL, 0);
++		break;
++	}
++
++out:
++	vchiq_log_trace(vchiq_susp_log_level, "%s exit %d", __func__, status);
++	return status;
++}
++
++void
++vchiq_platform_check_suspend(VCHIQ_STATE_T *state)
++{
++	VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
++	int susp = 0;
++
++	if (!arm_state)
++		goto out;
++
++	vchiq_log_trace(vchiq_susp_log_level, "%s", __func__);
++
++	write_lock_bh(&arm_state->susp_res_lock);
++	if (arm_state->vc_suspend_state == VC_SUSPEND_REQUESTED &&
++			arm_state->vc_resume_state == VC_RESUME_RESUMED) {
++		set_suspend_state(arm_state, VC_SUSPEND_IN_PROGRESS);
++		susp = 1;
++	}
++	write_unlock_bh(&arm_state->susp_res_lock);
++
++	if (susp)
++		vchiq_platform_suspend(state);
++
++out:
++	vchiq_log_trace(vchiq_susp_log_level, "%s exit", __func__);
++	return;
++}
++
++
++static void
++output_timeout_error(VCHIQ_STATE_T *state)
++{
++	VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
++	char service_err[50] = "";
++	int vc_use_count = arm_state->videocore_use_count;
++	int active_services = state->unused_service;
++	int i;
++
++	if (!arm_state->videocore_use_count) {
++		snprintf(service_err, 50, " Videocore usecount is 0");
++		goto output_msg;
++	}
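++	/* Otherwise name one service that still holds a use count. */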
++	for (i = 0; i < active_services; i++) {
++		VCHIQ_SERVICE_T *service_ptr = state->services[i];
++		if (service_ptr && service_ptr->service_use_count &&
++			(service_ptr->srvstate != VCHIQ_SRVSTATE_FREE)) {
++			snprintf(service_err, 50, " %c%c%c%c(%d) service has "
++				"use count %d%s", VCHIQ_FOURCC_AS_4CHARS(
++					service_ptr->base.fourcc),
++				 service_ptr->client_id,
++				 service_ptr->service_use_count,
++				 service_ptr->service_use_count ==
++					 vc_use_count ? "" : " (+ more)");
++			break;
++		}
++	}
++
++output_msg:
++	vchiq_log_error(vchiq_susp_log_level,
++		"timed out waiting for vc suspend (%d).%s",
++		 arm_state->autosuspend_override, service_err);
++
++}
++
++/* Try to get videocore into suspended state, regardless of autosuspend state.
++** We don't actually force suspend, since videocore may get into a bad state
++** if we force suspend at a bad time.  Instead, we wait for autosuspend to
++** determine a good point to suspend.  If this doesn't happen within 100ms we
++** report failure.
++**
++** Returns VCHIQ_SUCCESS if videocore suspended successfully, VCHIQ_RETRY if
++** videocore failed to suspend in time or VCHIQ_ERROR if interrupted.
++*/
++VCHIQ_STATUS_T
++vchiq_arm_force_suspend(VCHIQ_STATE_T *state)
++{
++	VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
++	VCHIQ_STATUS_T status = VCHIQ_ERROR;
++	long rc = 0;
++	int repeat = -1;
++
++	if (!arm_state)
++		goto out;
++
++	vchiq_log_trace(vchiq_susp_log_level, "%s", __func__);
++
++	write_lock_bh(&arm_state->susp_res_lock);
++
++	status = block_resume(arm_state);
++	if (status != VCHIQ_SUCCESS)
++		goto unlock;
++	if (arm_state->vc_suspend_state == VC_SUSPEND_SUSPENDED) {
++		/* Already suspended - just block resume and exit */
++		vchiq_log_info(vchiq_susp_log_level, "%s already suspended",
++			__func__);
++		status = VCHIQ_SUCCESS;
++		goto unlock;
++	} else if (arm_state->vc_suspend_state <= VC_SUSPEND_IDLE) {
++		/* initiate suspend immediately in the case that we're waiting
++		 * for the timeout */
++		stop_suspend_timer(arm_state);
++		if (!vchiq_videocore_wanted(state)) {
++			vchiq_log_info(vchiq_susp_log_level, "%s videocore "
++				"idle, initiating suspend", __func__);
++			status = vchiq_arm_vcsuspend(state);
++		} else if (arm_state->autosuspend_override <
++						FORCE_SUSPEND_FAIL_MAX) {
++			vchiq_log_info(vchiq_susp_log_level, "%s letting "
++				"videocore go idle", __func__);
++			status = VCHIQ_SUCCESS;
++		} else {
++			vchiq_log_warning(vchiq_susp_log_level, "%s failed too "
++				"many times - attempting suspend", __func__);
++			status = vchiq_arm_vcsuspend(state);
++		}
++	} else {
++		vchiq_log_info(vchiq_susp_log_level, "%s videocore suspend "
++			"in progress - wait for completion", __func__);
++		status = VCHIQ_SUCCESS;
++	}
++
++	/* Wait for suspend to happen due to system idle (not forced..) */
++	if (status != VCHIQ_SUCCESS)
++		goto unblock_resume;
++
++	do {
++		write_unlock_bh(&arm_state->susp_res_lock);
++
++		rc = wait_for_completion_interruptible_timeout(
++				&arm_state->vc_suspend_complete,
++				msecs_to_jiffies(FORCE_SUSPEND_TIMEOUT_MS));
++
++		write_lock_bh(&arm_state->susp_res_lock);
++		if (rc < 0) {
++			vchiq_log_warning(vchiq_susp_log_level, "%s "
++				"interrupted waiting for suspend", __func__);
++			status = VCHIQ_ERROR;
++			goto unblock_resume;
++		} else if (rc == 0) {
++			if (arm_state->vc_suspend_state > VC_SUSPEND_IDLE) {
++				/* Repeat timeout once if in progress */
++				if (repeat < 0) {
++					repeat = 1;
++					continue;
++				}
++			}
++			arm_state->autosuspend_override++;
++			output_timeout_error(state);
++
++			status = VCHIQ_RETRY;
++			goto unblock_resume;
++		}
++	} while (0 < (repeat--));
++
++	/* Check and report state in case we need to abort ARM suspend */
++	if (arm_state->vc_suspend_state != VC_SUSPEND_SUSPENDED) {
++		status = VCHIQ_RETRY;
++		vchiq_log_error(vchiq_susp_log_level,
++			"%s videocore suspend failed (state %s)", __func__,
++			suspend_state_names[arm_state->vc_suspend_state +
++						VC_SUSPEND_NUM_OFFSET]);
++		/* Reset the state only if it's still in an error state.
++		 * Something could have already initiated another suspend. */
++		if (arm_state->vc_suspend_state < VC_SUSPEND_IDLE)
++			set_suspend_state(arm_state, VC_SUSPEND_IDLE);
++
++		goto unblock_resume;
++	}
++
++	/* successfully suspended - unlock and exit */
++	goto unlock;
++
++unblock_resume:
++	/* all error states need to unblock resume before exit */
++	unblock_resume(arm_state);
++
++unlock:
++	write_unlock_bh(&arm_state->susp_res_lock);
++
++out:
++	vchiq_log_trace(vchiq_susp_log_level, "%s exit %d", __func__, status);
++	return status;
++}
++
++void
++vchiq_check_suspend(VCHIQ_STATE_T *state)
++{
++	VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
++
++	if (!arm_state)
++		goto out;
++
++	vchiq_log_trace(vchiq_susp_log_level, "%s", __func__);
++
++	write_lock_bh(&arm_state->susp_res_lock);
++	if (arm_state->vc_suspend_state != VC_SUSPEND_SUSPENDED &&
++			arm_state->first_connect &&
++			!vchiq_videocore_wanted(state)) {
++		vchiq_arm_vcsuspend(state);
++	}
++	write_unlock_bh(&arm_state->susp_res_lock);
++
++out:
++	vchiq_log_trace(vchiq_susp_log_level, "%s exit", __func__);
++	return;
++}
++
++
++int
++vchiq_arm_allow_resume(VCHIQ_STATE_T *state)
++{
++	VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
++	int resume = 0;
++	int ret = -1;
++
++	if (!arm_state)
++		goto out;
++
++	vchiq_log_trace(vchiq_susp_log_level, "%s", __func__);
++
++	write_lock_bh(&arm_state->susp_res_lock);
++	unblock_resume(arm_state);
++	resume = vchiq_check_resume(state);
++	write_unlock_bh(&arm_state->susp_res_lock);
++
++	if (resume) {
++		if (wait_for_completion_interruptible(
++			&arm_state->vc_resume_complete) < 0) {
++			vchiq_log_error(vchiq_susp_log_level,
++				"%s interrupted", __func__);
++			/* failed, cannot accurately derive suspend
++			 * state, so exit early. */
++			goto out;
++		}
++	}
++
++	read_lock_bh(&arm_state->susp_res_lock);
++	if (arm_state->vc_suspend_state == VC_SUSPEND_SUSPENDED) {
++		vchiq_log_info(vchiq_susp_log_level,
++				"%s: Videocore remains suspended", __func__);
++	} else {
++		vchiq_log_info(vchiq_susp_log_level,
++				"%s: Videocore resumed", __func__);
++		ret = 0;
++	}
++	read_unlock_bh(&arm_state->susp_res_lock);
++out:
++	vchiq_log_trace(vchiq_susp_log_level, "%s exit %d", __func__, ret);
++	return ret;
++}
++
++/* This function should be called with the write lock held */
++int
++vchiq_check_resume(VCHIQ_STATE_T *state)
++{
++	VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
++	int resume = 0;
++
++	if (!arm_state)
++		goto out;
++
++	vchiq_log_trace(vchiq_susp_log_level, "%s", __func__);
++
++	if (need_resume(state)) {
++		set_resume_state(arm_state, VC_RESUME_REQUESTED);
++		request_poll(state, NULL, 0);
++		resume = 1;
++	}
++
++out:
++	vchiq_log_trace(vchiq_susp_log_level, "%s exit", __func__);
++	return resume;
++}
++
++void
++vchiq_platform_check_resume(VCHIQ_STATE_T *state)
++{
++	VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
++	int res = 0;
++
++	if (!arm_state)
++		goto out;
++
++	vchiq_log_trace(vchiq_susp_log_level, "%s", __func__);
++
++	write_lock_bh(&arm_state->susp_res_lock);
++	if (arm_state->wake_address == 0) {
++		vchiq_log_info(vchiq_susp_log_level,
++					"%s: already awake", __func__);
++		goto unlock;
++	}
++	if (arm_state->vc_resume_state == VC_RESUME_IN_PROGRESS) {
++		vchiq_log_info(vchiq_susp_log_level,
++					"%s: already resuming", __func__);
++		goto unlock;
++	}
++
++	if (arm_state->vc_resume_state == VC_RESUME_REQUESTED) {
++		set_resume_state(arm_state, VC_RESUME_IN_PROGRESS);
++		res = 1;
++	} else
++		vchiq_log_trace(vchiq_susp_log_level,
++				"%s: not resuming (resume state %s)", __func__,
++				resume_state_names[arm_state->vc_resume_state +
++							VC_RESUME_NUM_OFFSET]);
++
++unlock:
++	write_unlock_bh(&arm_state->susp_res_lock);
++
++	if (res)
++		vchiq_platform_resume(state);
++
++out:
++	vchiq_log_trace(vchiq_susp_log_level, "%s exit", __func__);
++	return;
++
++}
++
++
++
++VCHIQ_STATUS_T
++vchiq_use_internal(VCHIQ_STATE_T *state, VCHIQ_SERVICE_T *service,
++		enum USE_TYPE_E use_type)
++{
++	VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
++	VCHIQ_STATUS_T ret = VCHIQ_SUCCESS;
++	char entity[16];
++	int *entity_uc;
++	int local_uc, local_entity_uc;
++
++	if (!arm_state)
++		goto out;
++
++	vchiq_log_trace(vchiq_susp_log_level, "%s", __func__);
++
++	if (use_type == USE_TYPE_VCHIQ) {
++		sprintf(entity, "VCHIQ:   ");
++		entity_uc = &arm_state->peer_use_count;
++	} else if (service) {
++		sprintf(entity, "%c%c%c%c:%03d",
++			VCHIQ_FOURCC_AS_4CHARS(service->base.fourcc),
++			service->client_id);
++		entity_uc = &service->service_use_count;
++	} else {
++		vchiq_log_error(vchiq_susp_log_level, "%s null service "
++				"ptr", __func__);
++		ret = VCHIQ_ERROR;
++		goto out;
++	}
++
++	write_lock_bh(&arm_state->susp_res_lock);
++	while (arm_state->resume_blocked) {
++		/* If we call 'use' while force suspend is waiting for suspend,
++		 * then we're about to block the thread which the force is
++		 * waiting to complete, so we're bound to just time out. In this
++		 * case, set the suspend state such that the wait will be
++		 * canceled, so we can complete as quickly as possible. */
++		if (arm_state->resume_blocked && arm_state->vc_suspend_state ==
++				VC_SUSPEND_IDLE) {
++			set_suspend_state(arm_state, VC_SUSPEND_FORCE_CANCELED);
++			break;
++		}
++		/* If suspend is already in progress then we need to block */
++		if (!try_wait_for_completion(&arm_state->resume_blocker)) {
++			/* Indicate that there are threads waiting on the resume
++			 * blocker.  These need to be allowed to complete before
++			 * a _second_ call to force suspend can complete,
++			 * otherwise low priority threads might never actually
++			 * continue */
++			arm_state->blocked_count++;
++			write_unlock_bh(&arm_state->susp_res_lock);
++			vchiq_log_info(vchiq_susp_log_level, "%s %s resume "
++				"blocked - waiting...", __func__, entity);
++			if (wait_for_completion_killable(
++					&arm_state->resume_blocker) != 0) {
++				vchiq_log_error(vchiq_susp_log_level, "%s %s "
++					"wait for resume blocker interrupted",
++					__func__, entity);
++				ret = VCHIQ_ERROR;
++				write_lock_bh(&arm_state->susp_res_lock);
++				arm_state->blocked_count--;
++				write_unlock_bh(&arm_state->susp_res_lock);
++				goto out;
++			}
++			vchiq_log_info(vchiq_susp_log_level, "%s %s resume "
++				"unblocked", __func__, entity);
++			write_lock_bh(&arm_state->susp_res_lock);
++			if (--arm_state->blocked_count == 0)
++				complete_all(&arm_state->blocked_blocker);
++		}
++	}
++
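++	/* This use makes any pending idle-suspend timer stale - cancel it. */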
++	stop_suspend_timer(arm_state);
++
++	local_uc = ++arm_state->videocore_use_count;
++	local_entity_uc = ++(*entity_uc);
++
++	/* If there's a pending request which hasn't yet been serviced then
++	 * just clear it.  If we're past VC_SUSPEND_REQUESTED state then
++	 * vc_resume_complete will block until we either resume or fail to
++	 * suspend */
++	if (arm_state->vc_suspend_state <= VC_SUSPEND_REQUESTED)
++		set_suspend_state(arm_state, VC_SUSPEND_IDLE);
++
++	if ((use_type != USE_TYPE_SERVICE_NO_RESUME) && need_resume(state)) {
++		set_resume_state(arm_state, VC_RESUME_REQUESTED);
++		vchiq_log_info(vchiq_susp_log_level,
++			"%s %s count %d, state count %d",
++			__func__, entity, local_entity_uc, local_uc);
++		request_poll(state, NULL, 0);
++	} else
++		vchiq_log_trace(vchiq_susp_log_level,
++			"%s %s count %d, state count %d",
++			__func__, entity, *entity_uc, local_uc);
++
++
++	write_unlock_bh(&arm_state->susp_res_lock);
++
++	/* Completion is in a done state when we're not suspended, so this won't
++	 * block for the non-suspended case. */
++	if (!try_wait_for_completion(&arm_state->vc_resume_complete)) {
++		vchiq_log_info(vchiq_susp_log_level, "%s %s wait for resume",
++			__func__, entity);
++		if (wait_for_completion_killable(
++				&arm_state->vc_resume_complete) != 0) {
++			vchiq_log_error(vchiq_susp_log_level, "%s %s wait for "
++				"resume interrupted", __func__, entity);
++			ret = VCHIQ_ERROR;
++			goto out;
++		}
++		vchiq_log_info(vchiq_susp_log_level, "%s %s resumed", __func__,
++			entity);
++	}
++
++	if (ret == VCHIQ_SUCCESS) {
++		VCHIQ_STATUS_T status = VCHIQ_SUCCESS;
++		long ack_cnt = atomic_xchg(&arm_state->ka_use_ack_count, 0);
++		while (ack_cnt && (status == VCHIQ_SUCCESS)) {
++			/* Send the use notify to videocore */
++			status = vchiq_send_remote_use_active(state);
++			if (status == VCHIQ_SUCCESS)
++				ack_cnt--;
++			else
++				atomic_add(ack_cnt,
++					&arm_state->ka_use_ack_count);
++		}
++	}
++
++out:
++	vchiq_log_trace(vchiq_susp_log_level, "%s exit %d", __func__, ret);
++	return ret;
++}
++
++VCHIQ_STATUS_T
++vchiq_release_internal(VCHIQ_STATE_T *state, VCHIQ_SERVICE_T *service)
++{
++	VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
++	VCHIQ_STATUS_T ret = VCHIQ_SUCCESS;
++	char entity[16];
++	int *entity_uc;
++	int local_uc, local_entity_uc;
++
++	if (!arm_state)
++		goto out;
++
++	vchiq_log_trace(vchiq_susp_log_level, "%s", __func__);
++
++	if (service) {
++		sprintf(entity, "%c%c%c%c:%03d",
++			VCHIQ_FOURCC_AS_4CHARS(service->base.fourcc),
++			service->client_id);
++		entity_uc = &service->service_use_count;
++	} else {
++		sprintf(entity, "PEER:   ");
++		entity_uc = &arm_state->peer_use_count;
++	}
++
++	write_lock_bh(&arm_state->susp_res_lock);
++	if (!arm_state->videocore_use_count || !(*entity_uc)) {
++		/* Don't use BUG_ON - don't allow user thread to crash kernel */
++		WARN_ON(!arm_state->videocore_use_count);
++		WARN_ON(!(*entity_uc));
++		ret = VCHIQ_ERROR;
++		goto unlock;
++	}
++	local_uc = --arm_state->videocore_use_count;
++	local_entity_uc = --(*entity_uc);
++
++	if (!vchiq_videocore_wanted(state)) {
++		if (vchiq_platform_use_suspend_timer() &&
++				!arm_state->resume_blocked) {
++			/* Only use the timer if we're not trying to force
++			 * suspend (=> resume_blocked) */
++			start_suspend_timer(arm_state);
++		} else {
++			vchiq_log_info(vchiq_susp_log_level,
++				"%s %s count %d, state count %d - suspending",
++				__func__, entity, *entity_uc,
++				arm_state->videocore_use_count);
++			vchiq_arm_vcsuspend(state);
++		}
++	} else
++		vchiq_log_trace(vchiq_susp_log_level,
++			"%s %s count %d, state count %d",
++			__func__, entity, *entity_uc,
++			arm_state->videocore_use_count);
++
++unlock:
++	write_unlock_bh(&arm_state->susp_res_lock);
++
++out:
++	vchiq_log_trace(vchiq_susp_log_level, "%s exit %d", __func__, ret);
++	return ret;
++}
++
++void
++vchiq_on_remote_use(VCHIQ_STATE_T *state)
++{
++	VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
++	vchiq_log_trace(vchiq_susp_log_level, "%s", __func__);
++	atomic_inc(&arm_state->ka_use_count);
++	complete(&arm_state->ka_evt);
++}
++
++void
++vchiq_on_remote_release(VCHIQ_STATE_T *state)
++{
++	VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
++	vchiq_log_trace(vchiq_susp_log_level, "%s", __func__);
++	atomic_inc(&arm_state->ka_release_count);
++	complete(&arm_state->ka_evt);
++}
++
++VCHIQ_STATUS_T
++vchiq_use_service_internal(VCHIQ_SERVICE_T *service)
++{
++	return vchiq_use_internal(service->state, service, USE_TYPE_SERVICE);
++}
++
++VCHIQ_STATUS_T
++vchiq_release_service_internal(VCHIQ_SERVICE_T *service)
++{
++	return vchiq_release_internal(service->state, service);
++}
++
++VCHIQ_DEBUGFS_NODE_T *
++vchiq_instance_get_debugfs_node(VCHIQ_INSTANCE_T instance)
++{
++	return &instance->debugfs_node;
++}
++
++int
++vchiq_instance_get_use_count(VCHIQ_INSTANCE_T instance)
++{
++	VCHIQ_SERVICE_T *service;
++	int use_count = 0, i;
++	i = 0;
++	while ((service = next_service_by_instance(instance->state,
++		instance, &i)) != NULL) {
++		use_count += service->service_use_count;
++		unlock_service(service);
++	}
++	return use_count;
++}
++
++int
++vchiq_instance_get_pid(VCHIQ_INSTANCE_T instance)
++{
++	return instance->pid;
++}
++
++int
++vchiq_instance_get_trace(VCHIQ_INSTANCE_T instance)
++{
++	return instance->trace;
++}
++
++void
++vchiq_instance_set_trace(VCHIQ_INSTANCE_T instance, int trace)
++{
++	VCHIQ_SERVICE_T *service;
++	int i;
++	i = 0;
++	while ((service = next_service_by_instance(instance->state,
++		instance, &i)) != NULL) {
++		service->trace = trace;
++		unlock_service(service);
++	}
++	instance->trace = (trace != 0);
++}
++
++static void suspend_timer_callback(unsigned long context)
++{
++	VCHIQ_STATE_T *state = (VCHIQ_STATE_T *)context;
++	VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
++	if (!arm_state)
++		goto out;
++	vchiq_log_info(vchiq_susp_log_level,
++		"%s - suspend timer expired - check suspend", __func__);
++	vchiq_check_suspend(state);
++out:
++	return;
++}
++
++VCHIQ_STATUS_T
++vchiq_use_service_no_resume(VCHIQ_SERVICE_HANDLE_T handle)
++{
++	VCHIQ_STATUS_T ret = VCHIQ_ERROR;
++	VCHIQ_SERVICE_T *service = find_service_by_handle(handle);
++	if (service) {
++		ret = vchiq_use_internal(service->state, service,
++				USE_TYPE_SERVICE_NO_RESUME);
++		unlock_service(service);
++	}
++	return ret;
++}
++
++VCHIQ_STATUS_T
++vchiq_use_service(VCHIQ_SERVICE_HANDLE_T handle)
++{
++	VCHIQ_STATUS_T ret = VCHIQ_ERROR;
++	VCHIQ_SERVICE_T *service = find_service_by_handle(handle);
++	if (service) {
++		ret = vchiq_use_internal(service->state, service,
++				USE_TYPE_SERVICE);
++		unlock_service(service);
++	}
++	return ret;
++}
++
++VCHIQ_STATUS_T
++vchiq_release_service(VCHIQ_SERVICE_HANDLE_T handle)
++{
++	VCHIQ_STATUS_T ret = VCHIQ_ERROR;
++	VCHIQ_SERVICE_T *service = find_service_by_handle(handle);
++	if (service) {
++		ret = vchiq_release_internal(service->state, service);
++		unlock_service(service);
++	}
++	return ret;
++}
++
++void
++vchiq_dump_service_use_state(VCHIQ_STATE_T *state)
++{
++	VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
++	int i, j = 0;
++	/* Only dump 64 services */
++	static const int local_max_services = 64;
++	/* If there are more than 64 services, only dump ones with
++	 * non-zero counts */
++	int only_nonzero = 0;
++	static const char *nz = "<-- preventing suspend";
++
++	enum vc_suspend_status vc_suspend_state;
++	enum vc_resume_status  vc_resume_state;
++	int peer_count;
++	int vc_use_count;
++	int active_services;
++	struct service_data_struct {
++		int fourcc;
++		int clientid;
++		int use_count;
++	} service_data[local_max_services];
++
++	if (!arm_state)
++		return;
++
++	read_lock_bh(&arm_state->susp_res_lock);
++	vc_suspend_state = arm_state->vc_suspend_state;
++	vc_resume_state  = arm_state->vc_resume_state;
++	peer_count = arm_state->peer_use_count;
++	vc_use_count = arm_state->videocore_use_count;
++	active_services = state->unused_service;
++	if (active_services > local_max_services)
++		only_nonzero = 1;
++
++	for (i = 0; (i < active_services) && (j < local_max_services); i++) {
++		VCHIQ_SERVICE_T *service_ptr = state->services[i];
++		if (!service_ptr)
++			continue;
++
++		if (only_nonzero && !service_ptr->service_use_count)
++			continue;
++
++		if (service_ptr->srvstate != VCHIQ_SRVSTATE_FREE) {
++			service_data[j].fourcc = service_ptr->base.fourcc;
++			service_data[j].clientid = service_ptr->client_id;
++			service_data[j++].use_count = service_ptr->
++							service_use_count;
++		}
++	}
++
++	read_unlock_bh(&arm_state->susp_res_lock);
++
++	vchiq_log_warning(vchiq_susp_log_level,
++		"-- Videcore suspend state: %s --",
++		suspend_state_names[vc_suspend_state + VC_SUSPEND_NUM_OFFSET]);
++	vchiq_log_warning(vchiq_susp_log_level,
++		"-- Videcore resume state: %s --",
++		resume_state_names[vc_resume_state + VC_RESUME_NUM_OFFSET]);
++
++	if (only_nonzero)
++		vchiq_log_warning(vchiq_susp_log_level, "Too many active "
++			"services (%d).  Only dumping up to first %d services "
++			"with non-zero use-count", active_services,
++			local_max_services);
++
++	for (i = 0; i < j; i++) {
++		vchiq_log_warning(vchiq_susp_log_level,
++			"----- %c%c%c%c:%d service count %d %s",
++			VCHIQ_FOURCC_AS_4CHARS(service_data[i].fourcc),
++			service_data[i].clientid,
++			service_data[i].use_count,
++			service_data[i].use_count ? nz : "");
++	}
++	vchiq_log_warning(vchiq_susp_log_level,
++		"----- VCHIQ use count count %d", peer_count);
++	vchiq_log_warning(vchiq_susp_log_level,
++		"--- Overall vchiq instance use count %d", vc_use_count);
++
++	vchiq_dump_platform_use_state(state);
++}
++
++VCHIQ_STATUS_T
++vchiq_check_service(VCHIQ_SERVICE_T *service)
++{
++	VCHIQ_ARM_STATE_T *arm_state;
++	VCHIQ_STATUS_T ret = VCHIQ_ERROR;
++
++	if (!service || !service->state)
++		goto out;
++
++	vchiq_log_trace(vchiq_susp_log_level, "%s", __func__);
++
++	arm_state = vchiq_platform_get_arm_state(service->state);
++
++	read_lock_bh(&arm_state->susp_res_lock);
++	if (service->service_use_count)
++		ret = VCHIQ_SUCCESS;
++	read_unlock_bh(&arm_state->susp_res_lock);
++
++	if (ret == VCHIQ_ERROR) {
++		vchiq_log_error(vchiq_susp_log_level,
++			"%s ERROR - %c%c%c%c:%d service count %d, "
++			"state count %d, videocore suspend state %s", __func__,
++			VCHIQ_FOURCC_AS_4CHARS(service->base.fourcc),
++			service->client_id, service->service_use_count,
++			arm_state->videocore_use_count,
++			suspend_state_names[arm_state->vc_suspend_state +
++						VC_SUSPEND_NUM_OFFSET]);
++		vchiq_dump_service_use_state(service->state);
++	}
++out:
++	return ret;
++}
++
++/* stub functions */
++void vchiq_on_remote_use_active(VCHIQ_STATE_T *state)
++{
++	(void)state;
++}
++
++void vchiq_platform_conn_state_changed(VCHIQ_STATE_T *state,
++	VCHIQ_CONNSTATE_T oldstate, VCHIQ_CONNSTATE_T newstate)
++{
++	VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
++	vchiq_log_info(vchiq_susp_log_level, "%d: %s->%s", state->id,
++		get_conn_state_name(oldstate), get_conn_state_name(newstate));
++	if (state->conn_state == VCHIQ_CONNSTATE_CONNECTED) {
++		write_lock_bh(&arm_state->susp_res_lock);
++		if (!arm_state->first_connect) {
++			char threadname[10];
++			arm_state->first_connect = 1;
++			write_unlock_bh(&arm_state->susp_res_lock);
++			snprintf(threadname, sizeof(threadname), "VCHIQka-%d",
++				state->id);
++			arm_state->ka_thread = kthread_create(
++				&vchiq_keepalive_thread_func,
++				(void *)state,
++				threadname);
++			if (IS_ERR(arm_state->ka_thread)) {
++				vchiq_log_error(vchiq_susp_log_level,
++					"vchiq: FATAL: couldn't create thread %s",
++					threadname);
++			} else {
++				wake_up_process(arm_state->ka_thread);
++			}
++		} else
++			write_unlock_bh(&arm_state->susp_res_lock);
++	}
++}
++
++static int vchiq_probe(struct platform_device *pdev)
++{
++	struct device_node *fw_node;
++	struct rpi_firmware *fw;
++	int err;
++	void *ptr_err;
++
++	fw_node = of_parse_phandle(pdev->dev.of_node, "firmware", 0);
++/* Remove comment when booting without Device Tree is no longer supported
++	if (!fw_node) {
++		dev_err(&pdev->dev, "Missing firmware node\n");
++		return -ENOENT;
++	}
++*/
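++	/* The firmware interface may not have probed yet - defer if so. */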
++	fw = rpi_firmware_get(fw_node);
++	if (!fw)
++		return -EPROBE_DEFER;
++
++	platform_set_drvdata(pdev, fw);
++
++	/* create debugfs entries */
++	err = vchiq_debugfs_init();
++	if (err != 0)
++		goto failed_debugfs_init;
++
++	err = alloc_chrdev_region(&vchiq_devid, VCHIQ_MINOR, 1, DEVICE_NAME);
++	if (err != 0) {
++		vchiq_log_error(vchiq_arm_log_level,
++			"Unable to allocate device number");
++		goto failed_alloc_chrdev;
++	}
++	cdev_init(&vchiq_cdev, &vchiq_fops);
++	vchiq_cdev.owner = THIS_MODULE;
++	err = cdev_add(&vchiq_cdev, vchiq_devid, 1);
++	if (err != 0) {
++		vchiq_log_error(vchiq_arm_log_level,
++			"Unable to register device");
++		goto failed_cdev_add;
++	}
++
++	/* create sysfs entries */
++	vchiq_class = class_create(THIS_MODULE, DEVICE_NAME);
++	ptr_err = vchiq_class;
++	if (IS_ERR(ptr_err))
++		goto failed_class_create;
++
++	vchiq_dev = device_create(vchiq_class, NULL,
++		vchiq_devid, NULL, "vchiq");
++	ptr_err = vchiq_dev;
++	if (IS_ERR(ptr_err))
++		goto failed_device_create;
++
++	err = vchiq_platform_init(pdev, &g_state);
++	if (err != 0)
++		goto failed_platform_init;
++
++	vchiq_log_info(vchiq_arm_log_level,
++		"vchiq: initialised - version %d (min %d), device %d.%d",
++		VCHIQ_VERSION, VCHIQ_VERSION_MIN,
++		MAJOR(vchiq_devid), MINOR(vchiq_devid));
++
++	return 0;
++
++failed_platform_init:
++	device_destroy(vchiq_class, vchiq_devid);
++failed_device_create:
++	class_destroy(vchiq_class);
++failed_class_create:
++	cdev_del(&vchiq_cdev);
++	err = PTR_ERR(ptr_err);
++failed_cdev_add:
++	unregister_chrdev_region(vchiq_devid, 1);
++failed_alloc_chrdev:
++	vchiq_debugfs_deinit();
++failed_debugfs_init:
++	vchiq_log_warning(vchiq_arm_log_level, "could not load vchiq");
++	return err;
++}
++
++static int vchiq_remove(struct platform_device *pdev)
++{
++	device_destroy(vchiq_class, vchiq_devid);
++	class_destroy(vchiq_class);
++	cdev_del(&vchiq_cdev);
++	unregister_chrdev_region(vchiq_devid, 1);
++
++	return 0;
++}
++
++static const struct of_device_id vchiq_of_match[] = {
++	{ .compatible = "brcm,bcm2835-vchiq", },
++	{},
++};
++MODULE_DEVICE_TABLE(of, vchiq_of_match);
++
++static struct platform_driver vchiq_driver = {
++	.driver = {
++		.name = "bcm2835_vchiq",
++		.owner = THIS_MODULE,
++		.of_match_table = vchiq_of_match,
++	},
++	.probe = vchiq_probe,
++	.remove = vchiq_remove,
++};
++module_platform_driver(vchiq_driver);
++
++MODULE_LICENSE("GPL");
++MODULE_AUTHOR("Broadcom Corporation");
+--- /dev/null
++++ b/drivers/misc/vc04_services/interface/vchiq_arm/vchiq_arm.h
+@@ -0,0 +1,220 @@
++/**
++ * Copyright (c) 2014 Raspberry Pi (Trading) Ltd. All rights reserved.
++ * Copyright (c) 2010-2012 Broadcom. All rights reserved.
++ *
++ * Redistribution and use in source and binary forms, with or without
++ * modification, are permitted provided that the following conditions
++ * are met:
++ * 1. Redistributions of source code must retain the above copyright
++ *    notice, this list of conditions, and the following disclaimer,
++ *    without modification.
++ * 2. Redistributions in binary form must reproduce the above copyright
++ *    notice, this list of conditions and the following disclaimer in the
++ *    documentation and/or other materials provided with the distribution.
++ * 3. The names of the above-listed copyright holders may not be used
++ *    to endorse or promote products derived from this software without
++ *    specific prior written permission.
++ *
++ * ALTERNATIVELY, this software may be distributed under the terms of the
++ * GNU General Public License ("GPL") version 2, as published by the Free
++ * Software Foundation.
++ *
++ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
++ * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
++ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
++ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
++ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
++ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
++ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
++ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
++ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
++ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
++ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
++ */
++
++#ifndef VCHIQ_ARM_H
++#define VCHIQ_ARM_H
++
++#include <linux/mutex.h>
++#include <linux/platform_device.h>
++#include <linux/semaphore.h>
++#include <linux/atomic.h>
++#include "vchiq_core.h"
++#include "vchiq_debugfs.h"
++
++
++enum vc_suspend_status {
++	VC_SUSPEND_FORCE_CANCELED = -3, /* Force suspend canceled, too busy */
++	VC_SUSPEND_REJECTED = -2,  /* Videocore rejected suspend request */
++	VC_SUSPEND_FAILED = -1,    /* Videocore suspend failed */
++	VC_SUSPEND_IDLE = 0,       /* VC active, no suspend actions */
++	VC_SUSPEND_REQUESTED,      /* User has requested suspend */
++	VC_SUSPEND_IN_PROGRESS,    /* Slot handler has recvd suspend request */
++	VC_SUSPEND_SUSPENDED       /* Videocore suspend succeeded */
++};
++
++enum vc_resume_status {
++	VC_RESUME_FAILED = -1, /* Videocore resume failed */
++	VC_RESUME_IDLE = 0,    /* VC suspended, no resume actions */
++	VC_RESUME_REQUESTED,   /* User has requested resume */
++	VC_RESUME_IN_PROGRESS, /* Slot handler has received resume request */
++	VC_RESUME_RESUMED      /* Videocore resumed successfully (active) */
++};
++
++
++enum USE_TYPE_E {
++	USE_TYPE_SERVICE,
++	USE_TYPE_SERVICE_NO_RESUME,
++	USE_TYPE_VCHIQ
++};
++
++
++
++typedef struct vchiq_arm_state_struct {
++	/* Keepalive-related data */
++	struct task_struct *ka_thread;
++	struct completion ka_evt;
++	atomic_t ka_use_count;
++	atomic_t ka_use_ack_count;
++	atomic_t ka_release_count;
++
++	struct completion vc_suspend_complete;
++	struct completion vc_resume_complete;
++
++	rwlock_t susp_res_lock;
++	enum vc_suspend_status vc_suspend_state;
++	enum vc_resume_status vc_resume_state;
++
++	unsigned int wake_address;
++
++	struct timer_list suspend_timer;
++	int suspend_timer_timeout;
++	int suspend_timer_running;
++
++	/* Global use count for videocore.
++	** This is equal to the sum of the use counts for all services.  When
++	** this hits zero the videocore suspend procedure will be initiated.
++	*/
++	int videocore_use_count;
++
++	/* Use count to track requests from videocore peer.
++	** This use count is not associated with a service, so needs to be
++	** tracked separately with the state.
++	*/
++	int peer_use_count;
++
++	/* Flag to indicate whether resume is blocked.  This happens when the
++	** ARM is suspending.
++	*/
++	struct completion resume_blocker;
++	int resume_blocked;
++	struct completion blocked_blocker;
++	int blocked_count;
++
++	int autosuspend_override;
++
++	/* Flag to indicate that the first vchiq connect has made it through.
++	** This means that both sides should be fully ready, and we should
++	** be able to suspend after this point.
++	*/
++	int first_connect;
++
++	unsigned long long suspend_start_time;
++	unsigned long long sleep_start_time;
++	unsigned long long resume_start_time;
++	unsigned long long last_wake_time;
++
++} VCHIQ_ARM_STATE_T;
++
++extern int vchiq_arm_log_level;
++extern int vchiq_susp_log_level;
++
++int vchiq_platform_init(struct platform_device *pdev, VCHIQ_STATE_T *state);
++
++extern VCHIQ_STATE_T *
++vchiq_get_state(void);
++
++extern VCHIQ_STATUS_T
++vchiq_arm_vcsuspend(VCHIQ_STATE_T *state);
++
++extern VCHIQ_STATUS_T
++vchiq_arm_force_suspend(VCHIQ_STATE_T *state);
++
++extern int
++vchiq_arm_allow_resume(VCHIQ_STATE_T *state);
++
++extern VCHIQ_STATUS_T
++vchiq_arm_vcresume(VCHIQ_STATE_T *state);
++
++extern VCHIQ_STATUS_T
++vchiq_arm_init_state(VCHIQ_STATE_T *state, VCHIQ_ARM_STATE_T *arm_state);
++
++extern int
++vchiq_check_resume(VCHIQ_STATE_T *state);
++
++extern void
++vchiq_check_suspend(VCHIQ_STATE_T *state);
++extern VCHIQ_STATUS_T
++vchiq_use_service(VCHIQ_SERVICE_HANDLE_T handle);
++
++extern VCHIQ_STATUS_T
++vchiq_release_service(VCHIQ_SERVICE_HANDLE_T handle);
++
++extern VCHIQ_STATUS_T
++vchiq_check_service(VCHIQ_SERVICE_T *service);
++
++extern VCHIQ_STATUS_T
++vchiq_platform_suspend(VCHIQ_STATE_T *state);
++
++extern int
++vchiq_platform_videocore_wanted(VCHIQ_STATE_T *state);
++
++extern int
++vchiq_platform_use_suspend_timer(void);
++
++extern void
++vchiq_dump_platform_use_state(VCHIQ_STATE_T *state);
++
++extern void
++vchiq_dump_service_use_state(VCHIQ_STATE_T *state);
++
++extern VCHIQ_ARM_STATE_T*
++vchiq_platform_get_arm_state(VCHIQ_STATE_T *state);
++
++extern int
++vchiq_videocore_wanted(VCHIQ_STATE_T *state);
++
++extern VCHIQ_STATUS_T
++vchiq_use_internal(VCHIQ_STATE_T *state, VCHIQ_SERVICE_T *service,
++		enum USE_TYPE_E use_type);
++extern VCHIQ_STATUS_T
++vchiq_release_internal(VCHIQ_STATE_T *state, VCHIQ_SERVICE_T *service);
++
++extern VCHIQ_DEBUGFS_NODE_T *
++vchiq_instance_get_debugfs_node(VCHIQ_INSTANCE_T instance);
++
++extern int
++vchiq_instance_get_use_count(VCHIQ_INSTANCE_T instance);
++
++extern int
++vchiq_instance_get_pid(VCHIQ_INSTANCE_T instance);
++
++extern int
++vchiq_instance_get_trace(VCHIQ_INSTANCE_T instance);
++
++extern void
++vchiq_instance_set_trace(VCHIQ_INSTANCE_T instance, int trace);
++
++extern void
++set_suspend_state(VCHIQ_ARM_STATE_T *arm_state,
++	enum vc_suspend_status new_state);
++
++extern void
++set_resume_state(VCHIQ_ARM_STATE_T *arm_state,
++	enum vc_resume_status new_state);
++
++extern void
++start_suspend_timer(VCHIQ_ARM_STATE_T *arm_state);
++
++
++#endif /* VCHIQ_ARM_H */
+--- /dev/null
++++ b/drivers/misc/vc04_services/interface/vchiq_arm/vchiq_build_info.h
+@@ -0,0 +1,37 @@
++/**
++ * Copyright (c) 2010-2012 Broadcom. All rights reserved.
++ *
++ * Redistribution and use in source and binary forms, with or without
++ * modification, are permitted provided that the following conditions
++ * are met:
++ * 1. Redistributions of source code must retain the above copyright
++ *    notice, this list of conditions, and the following disclaimer,
++ *    without modification.
++ * 2. Redistributions in binary form must reproduce the above copyright
++ *    notice, this list of conditions and the following disclaimer in the
++ *    documentation and/or other materials provided with the distribution.
++ * 3. The names of the above-listed copyright holders may not be used
++ *    to endorse or promote products derived from this software without
++ *    specific prior written permission.
++ *
++ * ALTERNATIVELY, this software may be distributed under the terms of the
++ * GNU General Public License ("GPL") version 2, as published by the Free
++ * Software Foundation.
++ *
++ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
++ * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
++ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
++ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
++ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
++ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
++ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
++ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
++ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
++ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
++ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
++ */
++
++const char *vchiq_get_build_hostname(void);
++const char *vchiq_get_build_version(void);
++const char *vchiq_get_build_time(void);
++const char *vchiq_get_build_date(void);
+--- /dev/null
++++ b/drivers/misc/vc04_services/interface/vchiq_arm/vchiq_cfg.h
+@@ -0,0 +1,69 @@
++/**
++ * Copyright (c) 2010-2014 Broadcom. All rights reserved.
++ *
++ * Redistribution and use in source and binary forms, with or without
++ * modification, are permitted provided that the following conditions
++ * are met:
++ * 1. Redistributions of source code must retain the above copyright
++ *    notice, this list of conditions, and the following disclaimer,
++ *    without modification.
++ * 2. Redistributions in binary form must reproduce the above copyright
++ *    notice, this list of conditions and the following disclaimer in the
++ *    documentation and/or other materials provided with the distribution.
++ * 3. The names of the above-listed copyright holders may not be used
++ *    to endorse or promote products derived from this software without
++ *    specific prior written permission.
++ *
++ * ALTERNATIVELY, this software may be distributed under the terms of the
++ * GNU General Public License ("GPL") version 2, as published by the Free
++ * Software Foundation.
++ *
++ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
++ * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
++ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
++ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
++ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
++ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
++ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
++ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
++ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
++ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
++ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
++ */
++
++#ifndef VCHIQ_CFG_H
++#define VCHIQ_CFG_H
++
++#define VCHIQ_MAGIC              VCHIQ_MAKE_FOURCC('V', 'C', 'H', 'I')
++/* The version of VCHIQ - change with any non-trivial change */
++#define VCHIQ_VERSION            8
++/* The minimum compatible version - update to match VCHIQ_VERSION with any
++** incompatible change */
++#define VCHIQ_VERSION_MIN        3
++
++/* The version that introduced the VCHIQ_IOC_LIB_VERSION ioctl */
++#define VCHIQ_VERSION_LIB_VERSION 7
++
++/* The version that introduced the VCHIQ_IOC_CLOSE_DELIVERED ioctl */
++#define VCHIQ_VERSION_CLOSE_DELIVERED 7
++
++/* The version that made it safe to use SYNCHRONOUS mode */
++#define VCHIQ_VERSION_SYNCHRONOUS_MODE 8
++
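++/* How these are used (brief sketch): VCHIQ_VERSION is exchanged in the
++** CONNECT handshake and the lower of the two sides' values becomes the
++** common version for the connection; features such as the ioctls and the
++** synchronous mode above are only enabled once that common version reaches
++** the corresponding VCHIQ_VERSION_* threshold, e.g. vchiq_core.c drops a
++** service back to non-sync operation when
++**
++**	state->version_common < VCHIQ_VERSION_SYNCHRONOUS_MODE
++*/
++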
++#define VCHIQ_MAX_STATES         1
++#define VCHIQ_MAX_SERVICES       4096
++#define VCHIQ_MAX_SLOTS          128
++#define VCHIQ_MAX_SLOTS_PER_SIDE 64
++
++#define VCHIQ_NUM_CURRENT_BULKS        32
++#define VCHIQ_NUM_SERVICE_BULKS        4
++
++#ifndef VCHIQ_ENABLE_DEBUG
++#define VCHIQ_ENABLE_DEBUG             1
++#endif
++
++#ifndef VCHIQ_ENABLE_STATS
++#define VCHIQ_ENABLE_STATS             1
++#endif
++
++#endif /* VCHIQ_CFG_H */
+--- /dev/null
++++ b/drivers/misc/vc04_services/interface/vchiq_arm/vchiq_connected.c
+@@ -0,0 +1,120 @@
++/**
++ * Copyright (c) 2010-2012 Broadcom. All rights reserved.
++ *
++ * Redistribution and use in source and binary forms, with or without
++ * modification, are permitted provided that the following conditions
++ * are met:
++ * 1. Redistributions of source code must retain the above copyright
++ *    notice, this list of conditions, and the following disclaimer,
++ *    without modification.
++ * 2. Redistributions in binary form must reproduce the above copyright
++ *    notice, this list of conditions and the following disclaimer in the
++ *    documentation and/or other materials provided with the distribution.
++ * 3. The names of the above-listed copyright holders may not be used
++ *    to endorse or promote products derived from this software without
++ *    specific prior written permission.
++ *
++ * ALTERNATIVELY, this software may be distributed under the terms of the
++ * GNU General Public License ("GPL") version 2, as published by the Free
++ * Software Foundation.
++ *
++ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
++ * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
++ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
++ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
++ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
++ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
++ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
++ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
++ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
++ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
++ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
++ */
++
++#include "vchiq_connected.h"
++#include "vchiq_core.h"
++#include "vchiq_killable.h"
++#include <linux/module.h>
++#include <linux/mutex.h>
++
++#define  MAX_CALLBACKS  10
++
++static   int                        g_connected;
++static   int                        g_num_deferred_callbacks;
++static   VCHIQ_CONNECTED_CALLBACK_T g_deferred_callback[MAX_CALLBACKS];
++static   int                        g_once_init;
++static   struct mutex               g_connected_mutex;
++
++/****************************************************************************
++*
++* Function to initialize our lock.
++*
++***************************************************************************/
++
++static void connected_init(void)
++{
++	if (!g_once_init) {
++		mutex_init(&g_connected_mutex);
++		g_once_init = 1;
++	}
++}
++
++/****************************************************************************
++*
++* This function is used to defer initialization until the vchiq stack is
++* initialized. If the stack is already initialized, then the callback will
++* be made immediately; otherwise it will be deferred until
++* vchiq_call_connected_callbacks is called.
++*
++***************************************************************************/
++
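++/* Usage sketch (illustrative only - my_client_init() is a hypothetical
++** caller, not part of this driver): a dependent module defers its own
++** setup until VCHIQ is connected:
++**
++**	static void my_client_connected(void)
++**	{
++**		my_client_init();	- safe to open services from here
++**	}
++**
++**	vchiq_add_connected_callback(my_client_connected);   (at module init)
++*/
++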
++void vchiq_add_connected_callback(VCHIQ_CONNECTED_CALLBACK_T callback)
++{
++	connected_init();
++
++	if (mutex_lock_interruptible(&g_connected_mutex) != 0)
++		return;
++
++	if (g_connected)
++		/* We're already connected. Call the callback immediately. */
++		callback();
++	else {
++		if (g_num_deferred_callbacks >= MAX_CALLBACKS)
++			vchiq_log_error(vchiq_core_log_level,
++				"There are already %d callbacks registered - "
++				"please increase MAX_CALLBACKS",
++				g_num_deferred_callbacks);
++		else {
++			g_deferred_callback[g_num_deferred_callbacks] =
++				callback;
++			g_num_deferred_callbacks++;
++		}
++	}
++	mutex_unlock(&g_connected_mutex);
++}
++
++/****************************************************************************
++*
++* This function is called by the vchiq stack once it has been connected to
++* the videocore and clients can start to use the stack.
++*
++***************************************************************************/
++
++void vchiq_call_connected_callbacks(void)
++{
++	int i;
++
++	connected_init();
++
++	if (mutex_lock_interruptible(&g_connected_mutex) != 0)
++		return;
++
++	for (i = 0; i < g_num_deferred_callbacks; i++)
++		g_deferred_callback[i]();
++
++	g_num_deferred_callbacks = 0;
++	g_connected = 1;
++	mutex_unlock(&g_connected_mutex);
++}
++EXPORT_SYMBOL(vchiq_add_connected_callback);
+--- /dev/null
++++ b/drivers/misc/vc04_services/interface/vchiq_arm/vchiq_connected.h
+@@ -0,0 +1,50 @@
++/**
++ * Copyright (c) 2010-2012 Broadcom. All rights reserved.
++ *
++ * Redistribution and use in source and binary forms, with or without
++ * modification, are permitted provided that the following conditions
++ * are met:
++ * 1. Redistributions of source code must retain the above copyright
++ *    notice, this list of conditions, and the following disclaimer,
++ *    without modification.
++ * 2. Redistributions in binary form must reproduce the above copyright
++ *    notice, this list of conditions and the following disclaimer in the
++ *    documentation and/or other materials provided with the distribution.
++ * 3. The names of the above-listed copyright holders may not be used
++ *    to endorse or promote products derived from this software without
++ *    specific prior written permission.
++ *
++ * ALTERNATIVELY, this software may be distributed under the terms of the
++ * GNU General Public License ("GPL") version 2, as published by the Free
++ * Software Foundation.
++ *
++ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
++ * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
++ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
++ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
++ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
++ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
++ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
++ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
++ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
++ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
++ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
++ */
++
++#ifndef VCHIQ_CONNECTED_H
++#define VCHIQ_CONNECTED_H
++
++/* ---- Include Files ----------------------------------------------------- */
++
++/* ---- Constants and Types ---------------------------------------------- */
++
++typedef void (*VCHIQ_CONNECTED_CALLBACK_T)(void);
++
++/* ---- Variable Externs ------------------------------------------------- */
++
++/* ---- Function Prototypes ---------------------------------------------- */
++
++void vchiq_add_connected_callback(VCHIQ_CONNECTED_CALLBACK_T callback);
++void vchiq_call_connected_callbacks(void);
++
++#endif /* VCHIQ_CONNECTED_H */
+--- /dev/null
++++ b/drivers/misc/vc04_services/interface/vchiq_arm/vchiq_core.c
+@@ -0,0 +1,3934 @@
++/**
++ * Copyright (c) 2010-2012 Broadcom. All rights reserved.
++ *
++ * Redistribution and use in source and binary forms, with or without
++ * modification, are permitted provided that the following conditions
++ * are met:
++ * 1. Redistributions of source code must retain the above copyright
++ *    notice, this list of conditions, and the following disclaimer,
++ *    without modification.
++ * 2. Redistributions in binary form must reproduce the above copyright
++ *    notice, this list of conditions and the following disclaimer in the
++ *    documentation and/or other materials provided with the distribution.
++ * 3. The names of the above-listed copyright holders may not be used
++ *    to endorse or promote products derived from this software without
++ *    specific prior written permission.
++ *
++ * ALTERNATIVELY, this software may be distributed under the terms of the
++ * GNU General Public License ("GPL") version 2, as published by the Free
++ * Software Foundation.
++ *
++ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
++ * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
++ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
++ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
++ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
++ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
++ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
++ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
++ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
++ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
++ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
++ */
++
++#include "vchiq_core.h"
++#include "vchiq_killable.h"
++
++#define VCHIQ_SLOT_HANDLER_STACK 8192
++
++#define HANDLE_STATE_SHIFT 12
++
++#define SLOT_INFO_FROM_INDEX(state, index) (state->slot_info + (index))
++#define SLOT_DATA_FROM_INDEX(state, index) (state->slot_data + (index))
++#define SLOT_INDEX_FROM_DATA(state, data) \
++	(((unsigned int)((char *)data - (char *)state->slot_data)) / \
++	VCHIQ_SLOT_SIZE)
++#define SLOT_INDEX_FROM_INFO(state, info) \
++	((unsigned int)(info - state->slot_info))
++#define SLOT_QUEUE_INDEX_FROM_POS(pos) \
++	((int)((unsigned int)(pos) / VCHIQ_SLOT_SIZE))
++
++#define BULK_INDEX(x) (x & (VCHIQ_NUM_SERVICE_BULKS - 1))
++
++#define SRVTRACE_LEVEL(srv) \
++	(((srv) && (srv)->trace) ? VCHIQ_LOG_TRACE : vchiq_core_msg_log_level)
++#define SRVTRACE_ENABLED(srv, lev) \
++	(((srv) && (srv)->trace) || (vchiq_core_msg_log_level >= (lev)))
++
++struct vchiq_open_payload {
++	int fourcc;
++	int client_id;
++	short version;
++	short version_min;
++};
++
++struct vchiq_openack_payload {
++	short version;
++};
++
++enum
++{
++	QMFLAGS_IS_BLOCKING     = (1 << 0),
++	QMFLAGS_NO_MUTEX_LOCK   = (1 << 1),
++	QMFLAGS_NO_MUTEX_UNLOCK = (1 << 2)
++};
++
++/* we require this for consistency between endpoints */
++vchiq_static_assert(sizeof(VCHIQ_HEADER_T) == 8);
++vchiq_static_assert(IS_POW2(sizeof(VCHIQ_HEADER_T)));
++vchiq_static_assert(IS_POW2(VCHIQ_NUM_CURRENT_BULKS));
++vchiq_static_assert(IS_POW2(VCHIQ_NUM_SERVICE_BULKS));
++vchiq_static_assert(IS_POW2(VCHIQ_MAX_SERVICES));
++vchiq_static_assert(VCHIQ_VERSION >= VCHIQ_VERSION_MIN);
++
++/* Run time control of log level, based on KERN_XXX level. */
++int vchiq_core_log_level = VCHIQ_LOG_DEFAULT;
++int vchiq_core_msg_log_level = VCHIQ_LOG_DEFAULT;
++int vchiq_sync_log_level = VCHIQ_LOG_DEFAULT;
++
++static atomic_t pause_bulks_count = ATOMIC_INIT(0);
++
++static DEFINE_SPINLOCK(service_spinlock);
++DEFINE_SPINLOCK(bulk_waiter_spinlock);
++DEFINE_SPINLOCK(quota_spinlock);
++
++VCHIQ_STATE_T *vchiq_states[VCHIQ_MAX_STATES];
++static unsigned int handle_seq;
++
++static const char *const srvstate_names[] = {
++	"FREE",
++	"HIDDEN",
++	"LISTENING",
++	"OPENING",
++	"OPEN",
++	"OPENSYNC",
++	"CLOSESENT",
++	"CLOSERECVD",
++	"CLOSEWAIT",
++	"CLOSED"
++};
++
++static const char *const reason_names[] = {
++	"SERVICE_OPENED",
++	"SERVICE_CLOSED",
++	"MESSAGE_AVAILABLE",
++	"BULK_TRANSMIT_DONE",
++	"BULK_RECEIVE_DONE",
++	"BULK_TRANSMIT_ABORTED",
++	"BULK_RECEIVE_ABORTED"
++};
++
++static const char *const conn_state_names[] = {
++	"DISCONNECTED",
++	"CONNECTING",
++	"CONNECTED",
++	"PAUSING",
++	"PAUSE_SENT",
++	"PAUSED",
++	"RESUMING",
++	"PAUSE_TIMEOUT",
++	"RESUME_TIMEOUT"
++};
++
++
++static void
++release_message_sync(VCHIQ_STATE_T *state, VCHIQ_HEADER_T *header);
++
++static const char *msg_type_str(unsigned int msg_type)
++{
++	switch (msg_type) {
++	case VCHIQ_MSG_PADDING:       return "PADDING";
++	case VCHIQ_MSG_CONNECT:       return "CONNECT";
++	case VCHIQ_MSG_OPEN:          return "OPEN";
++	case VCHIQ_MSG_OPENACK:       return "OPENACK";
++	case VCHIQ_MSG_CLOSE:         return "CLOSE";
++	case VCHIQ_MSG_DATA:          return "DATA";
++	case VCHIQ_MSG_BULK_RX:       return "BULK_RX";
++	case VCHIQ_MSG_BULK_TX:       return "BULK_TX";
++	case VCHIQ_MSG_BULK_RX_DONE:  return "BULK_RX_DONE";
++	case VCHIQ_MSG_BULK_TX_DONE:  return "BULK_TX_DONE";
++	case VCHIQ_MSG_PAUSE:         return "PAUSE";
++	case VCHIQ_MSG_RESUME:        return "RESUME";
++	case VCHIQ_MSG_REMOTE_USE:    return "REMOTE_USE";
++	case VCHIQ_MSG_REMOTE_RELEASE:      return "REMOTE_RELEASE";
++	case VCHIQ_MSG_REMOTE_USE_ACTIVE:   return "REMOTE_USE_ACTIVE";
++	}
++	return "???";
++}
++
++static inline void
++vchiq_set_service_state(VCHIQ_SERVICE_T *service, int newstate)
++{
++	vchiq_log_info(vchiq_core_log_level, "%d: srv:%d %s->%s",
++		service->state->id, service->localport,
++		srvstate_names[service->srvstate],
++		srvstate_names[newstate]);
++	service->srvstate = newstate;
++}
++
++VCHIQ_SERVICE_T *
++find_service_by_handle(VCHIQ_SERVICE_HANDLE_T handle)
++{
++	VCHIQ_SERVICE_T *service;
++
++	spin_lock(&service_spinlock);
++	service = handle_to_service(handle);
++	if (service && (service->srvstate != VCHIQ_SRVSTATE_FREE) &&
++		(service->handle == handle)) {
++		BUG_ON(service->ref_count == 0);
++		service->ref_count++;
++	} else
++		service = NULL;
++	spin_unlock(&service_spinlock);
++
++	if (!service)
++		vchiq_log_info(vchiq_core_log_level,
++			"Invalid service handle 0x%x", handle);
++
++	return service;
++}
++
++VCHIQ_SERVICE_T *
++find_service_by_port(VCHIQ_STATE_T *state, int localport)
++{
++	VCHIQ_SERVICE_T *service = NULL;
++	if ((unsigned int)localport <= VCHIQ_PORT_MAX) {
++		spin_lock(&service_spinlock);
++		service = state->services[localport];
++		if (service && (service->srvstate != VCHIQ_SRVSTATE_FREE)) {
++			BUG_ON(service->ref_count == 0);
++			service->ref_count++;
++		} else
++			service = NULL;
++		spin_unlock(&service_spinlock);
++	}
++
++	if (!service)
++		vchiq_log_info(vchiq_core_log_level,
++			"Invalid port %d", localport);
++
++	return service;
++}
++
++VCHIQ_SERVICE_T *
++find_service_for_instance(VCHIQ_INSTANCE_T instance,
++	VCHIQ_SERVICE_HANDLE_T handle) {
++	VCHIQ_SERVICE_T *service;
++
++	spin_lock(&service_spinlock);
++	service = handle_to_service(handle);
++	if (service && (service->srvstate != VCHIQ_SRVSTATE_FREE) &&
++		(service->handle == handle) &&
++		(service->instance == instance)) {
++		BUG_ON(service->ref_count == 0);
++		service->ref_count++;
++	} else
++		service = NULL;
++	spin_unlock(&service_spinlock);
++
++	if (!service)
++		vchiq_log_info(vchiq_core_log_level,
++			"Invalid service handle 0x%x", handle);
++
++	return service;
++}
++
++VCHIQ_SERVICE_T *
++find_closed_service_for_instance(VCHIQ_INSTANCE_T instance,
++	VCHIQ_SERVICE_HANDLE_T handle) {
++	VCHIQ_SERVICE_T *service;
++
++	spin_lock(&service_spinlock);
++	service = handle_to_service(handle);
++	if (service &&
++		((service->srvstate == VCHIQ_SRVSTATE_FREE) ||
++		 (service->srvstate == VCHIQ_SRVSTATE_CLOSED)) &&
++		(service->handle == handle) &&
++		(service->instance == instance)) {
++		BUG_ON(service->ref_count == 0);
++		service->ref_count++;
++	} else
++		service = NULL;
++	spin_unlock(&service_spinlock);
++
++	if (!service)
++		vchiq_log_info(vchiq_core_log_level,
++			"Invalid service handle 0x%x", handle);
++
++	return service;
++}
++
++VCHIQ_SERVICE_T *
++next_service_by_instance(VCHIQ_STATE_T *state, VCHIQ_INSTANCE_T instance,
++	int *pidx)
++{
++	VCHIQ_SERVICE_T *service = NULL;
++	int idx = *pidx;
++
++	spin_lock(&service_spinlock);
++	while (idx < state->unused_service) {
++		VCHIQ_SERVICE_T *srv = state->services[idx++];
++		if (srv && (srv->srvstate != VCHIQ_SRVSTATE_FREE) &&
++			(srv->instance == instance)) {
++			service = srv;
++			BUG_ON(service->ref_count == 0);
++			service->ref_count++;
++			break;
++		}
++	}
++	spin_unlock(&service_spinlock);
++
++	*pidx = idx;
++
++	return service;
++}
++
++void
++lock_service(VCHIQ_SERVICE_T *service)
++{
++	spin_lock(&service_spinlock);
++	BUG_ON(!service || (service->ref_count == 0));
++	if (service)
++		service->ref_count++;
++	spin_unlock(&service_spinlock);
++}
++
++void
++unlock_service(VCHIQ_SERVICE_T *service)
++{
++	VCHIQ_STATE_T *state = service->state;
++	spin_lock(&service_spinlock);
++	BUG_ON(!service || (service->ref_count == 0));
++	if (service && service->ref_count) {
++		service->ref_count--;
++		if (!service->ref_count) {
++			BUG_ON(service->srvstate != VCHIQ_SRVSTATE_FREE);
++			state->services[service->localport] = NULL;
++		} else
++			service = NULL;
++	}
++	spin_unlock(&service_spinlock);
++
++	if (service && service->userdata_term)
++		service->userdata_term(service->base.userdata);
++
++	kfree(service);
++}
++
++int
++vchiq_get_client_id(VCHIQ_SERVICE_HANDLE_T handle)
++{
++	VCHIQ_SERVICE_T *service = find_service_by_handle(handle);
++	int id;
++
++	id = service ? service->client_id : 0;
++	if (service)
++		unlock_service(service);
++
++	return id;
++}
++
++void *
++vchiq_get_service_userdata(VCHIQ_SERVICE_HANDLE_T handle)
++{
++	VCHIQ_SERVICE_T *service = handle_to_service(handle);
++
++	return service ? service->base.userdata : NULL;
++}
++
++int
++vchiq_get_service_fourcc(VCHIQ_SERVICE_HANDLE_T handle)
++{
++	VCHIQ_SERVICE_T *service = handle_to_service(handle);
++
++	return service ? service->base.fourcc : 0;
++}
++
++static void
++mark_service_closing_internal(VCHIQ_SERVICE_T *service, int sh_thread)
++{
++	VCHIQ_STATE_T *state = service->state;
++	VCHIQ_SERVICE_QUOTA_T *service_quota;
++
++	service->closing = 1;
++
++	/* Synchronise with other threads. */
++	mutex_lock(&state->recycle_mutex);
++	mutex_unlock(&state->recycle_mutex);
++	if (!sh_thread || (state->conn_state != VCHIQ_CONNSTATE_PAUSE_SENT)) {
++		/* If we're pausing then the slot_mutex is held until resume
++		 * by the slot handler.  Therefore don't try to acquire this
++		 * mutex if we're the slot handler and in the pause sent state.
++		 * We don't need to in this case anyway. */
++		mutex_lock(&state->slot_mutex);
++		mutex_unlock(&state->slot_mutex);
++	}
++
++	/* Unblock any sending thread. */
++	service_quota = &state->service_quotas[service->localport];
++	up(&service_quota->quota_event);
++}
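++
++/* Note: the lock/unlock pairs above (recycle_mutex and, when applicable,
++** slot_mutex) are used purely as barriers - taking and immediately
++** dropping each mutex waits out any thread currently inside those
++** critical sections, so they observe service->closing before we
++** continue. */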
++
++static void
++mark_service_closing(VCHIQ_SERVICE_T *service)
++{
++	mark_service_closing_internal(service, 0);
++}
++
++static inline VCHIQ_STATUS_T
++make_service_callback(VCHIQ_SERVICE_T *service, VCHIQ_REASON_T reason,
++	VCHIQ_HEADER_T *header, void *bulk_userdata)
++{
++	VCHIQ_STATUS_T status;
++	vchiq_log_trace(vchiq_core_log_level, "%d: callback:%d (%s, %x, %x)",
++		service->state->id, service->localport, reason_names[reason],
++		(unsigned int)header, (unsigned int)bulk_userdata);
++	status = service->base.callback(reason, header, service->handle,
++		bulk_userdata);
++	if (status == VCHIQ_ERROR) {
++		vchiq_log_warning(vchiq_core_log_level,
++			"%d: ignoring ERROR from callback to service %x",
++			service->state->id, service->handle);
++		status = VCHIQ_SUCCESS;
++	}
++	return status;
++}
++
++inline void
++vchiq_set_conn_state(VCHIQ_STATE_T *state, VCHIQ_CONNSTATE_T newstate)
++{
++	VCHIQ_CONNSTATE_T oldstate = state->conn_state;
++	vchiq_log_info(vchiq_core_log_level, "%d: %s->%s", state->id,
++		conn_state_names[oldstate],
++		conn_state_names[newstate]);
++	state->conn_state = newstate;
++	vchiq_platform_conn_state_changed(state, oldstate, newstate);
++}
++
++static inline void
++remote_event_create(REMOTE_EVENT_T *event)
++{
++	event->armed = 0;
++	/* Don't clear the 'fired' flag because it may already have been set
++	** by the other side. */
++	sema_init(event->event, 0);
++}
++
++static inline void
++remote_event_destroy(REMOTE_EVENT_T *event)
++{
++	(void)event;
++}
++
++static inline int
++remote_event_wait(REMOTE_EVENT_T *event)
++{
++	if (!event->fired) {
++		event->armed = 1;
++		dsb();
++		if (!event->fired) {
++			if (down_interruptible(event->event) != 0) {
++				event->armed = 0;
++				return 0;
++			}
++		}
++		event->armed = 0;
++		wmb();
++	}
++
++	event->fired = 0;
++	return 1;
++}
++
++static inline void
++remote_event_signal_local(REMOTE_EVENT_T *event)
++{
++	event->armed = 0;
++	up(event->event);
++}
++
++static inline void
++remote_event_poll(REMOTE_EVENT_T *event)
++{
++	if (event->fired && event->armed)
++		remote_event_signal_local(event);
++}
++
++void
++remote_event_pollall(VCHIQ_STATE_T *state)
++{
++	remote_event_poll(&state->local->sync_trigger);
++	remote_event_poll(&state->local->sync_release);
++	remote_event_poll(&state->local->trigger);
++	remote_event_poll(&state->local->recycle);
++}
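++
++/* Note on the handshake above: remote_event_wait() only sleeps after
++** setting ->armed and re-checking ->fired, so a signal from the peer
++** (which is expected to set ->fired before ringing the doorbell) cannot
++** be lost in that window, and remote_event_poll() re-delivers any event
++** that fired while armed by waking the local semaphore. */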
++
++/* Round up message sizes so that any space at the end of a slot is always big
++** enough for a header. This relies on header size being a power of two, which
++** has been verified earlier by a static assertion. */
++
++static inline unsigned int
++calc_stride(unsigned int size)
++{
++	/* Allow room for the header */
++	size += sizeof(VCHIQ_HEADER_T);
++
++	/* Round up */
++	return (size + sizeof(VCHIQ_HEADER_T) - 1) & ~(sizeof(VCHIQ_HEADER_T)
++		- 1);
++}
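++
++/* Worked example: with sizeof(VCHIQ_HEADER_T) == 8 (asserted above), a
++** 5-byte payload gives calc_stride(5) = (5 + 8 + 7) & ~7 = 16 and a
++** 24-byte payload gives (24 + 8 + 7) & ~7 = 32, so every message occupies
++** a multiple of the header size and the tail of a slot can always hold at
++** least a padding header. */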
++
++/* Called by the slot handler thread */
++static VCHIQ_SERVICE_T *
++get_listening_service(VCHIQ_STATE_T *state, int fourcc)
++{
++	int i;
++
++	WARN_ON(fourcc == VCHIQ_FOURCC_INVALID);
++
++	for (i = 0; i < state->unused_service; i++) {
++		VCHIQ_SERVICE_T *service = state->services[i];
++		if (service &&
++			(service->public_fourcc == fourcc) &&
++			((service->srvstate == VCHIQ_SRVSTATE_LISTENING) ||
++			((service->srvstate == VCHIQ_SRVSTATE_OPEN) &&
++			(service->remoteport == VCHIQ_PORT_FREE)))) {
++			lock_service(service);
++			return service;
++		}
++	}
++
++	return NULL;
++}
++
++/* Called by the slot handler thread */
++static VCHIQ_SERVICE_T *
++get_connected_service(VCHIQ_STATE_T *state, unsigned int port)
++{
++	int i;
++	for (i = 0; i < state->unused_service; i++) {
++		VCHIQ_SERVICE_T *service = state->services[i];
++		if (service && (service->srvstate == VCHIQ_SRVSTATE_OPEN)
++			&& (service->remoteport == port)) {
++			lock_service(service);
++			return service;
++		}
++	}
++	return NULL;
++}
++
++inline void
++request_poll(VCHIQ_STATE_T *state, VCHIQ_SERVICE_T *service, int poll_type)
++{
++	uint32_t value;
++
++	if (service) {
++		do {
++			value = atomic_read(&service->poll_flags);
++		} while (atomic_cmpxchg(&service->poll_flags, value,
++			value | (1 << poll_type)) != value);
++
++		do {
++			value = atomic_read(&state->poll_services[
++				service->localport>>5]);
++		} while (atomic_cmpxchg(
++			&state->poll_services[service->localport>>5],
++			value, value | (1 << (service->localport & 0x1f)))
++			!= value);
++	}
++
++	state->poll_needed = 1;
++	wmb();
++
++	/* ... and ensure the slot handler runs. */
++	remote_event_signal_local(&state->local->trigger);
++}
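++
++/* The poll bookkeeping above is a two-level bitmap: each word of
++** state->poll_services covers 32 services (localport >> 5 selects the
++** word, localport & 0x1f the bit within it), while the per-service
++** poll_flags word records which VCHIQ_POLL_* actions are pending.
++** poll_services() later consumes both levels with atomic_xchg(), so each
++** request is handled exactly once. */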
++
++/* Called from queue_message, by the slot handler and application threads,
++** with slot_mutex held */
++static VCHIQ_HEADER_T *
++reserve_space(VCHIQ_STATE_T *state, int space, int is_blocking)
++{
++	VCHIQ_SHARED_STATE_T *local = state->local;
++	int tx_pos = state->local_tx_pos;
++	int slot_space = VCHIQ_SLOT_SIZE - (tx_pos & VCHIQ_SLOT_MASK);
++
++	if (space > slot_space) {
++		VCHIQ_HEADER_T *header;
++		/* Fill the remaining space with padding */
++		WARN_ON(state->tx_data == NULL);
++		header = (VCHIQ_HEADER_T *)
++			(state->tx_data + (tx_pos & VCHIQ_SLOT_MASK));
++		header->msgid = VCHIQ_MSGID_PADDING;
++		header->size = slot_space - sizeof(VCHIQ_HEADER_T);
++
++		tx_pos += slot_space;
++	}
++
++	/* If necessary, get the next slot. */
++	if ((tx_pos & VCHIQ_SLOT_MASK) == 0) {
++		int slot_index;
++
++		/* If there is no free slot... */
++
++		if (down_trylock(&state->slot_available_event) != 0) {
++			/* ...wait for one. */
++
++			VCHIQ_STATS_INC(state, slot_stalls);
++
++			/* But first, flush through the last slot. */
++			state->local_tx_pos = tx_pos;
++			local->tx_pos = tx_pos;
++			remote_event_signal(&state->remote->trigger);
++
++			if (!is_blocking ||
++				(down_interruptible(
++				&state->slot_available_event) != 0))
++				return NULL; /* No space available */
++		}
++
++		BUG_ON(tx_pos ==
++			(state->slot_queue_available * VCHIQ_SLOT_SIZE));
++
++		slot_index = local->slot_queue[
++			SLOT_QUEUE_INDEX_FROM_POS(tx_pos) &
++			VCHIQ_SLOT_QUEUE_MASK];
++		state->tx_data =
++			(char *)SLOT_DATA_FROM_INDEX(state, slot_index);
++	}
++
++	state->local_tx_pos = tx_pos + space;
++
++	return (VCHIQ_HEADER_T *)(state->tx_data + (tx_pos & VCHIQ_SLOT_MASK));
++}
++
++/* Called by the recycle thread. */
++static void
++process_free_queue(VCHIQ_STATE_T *state)
++{
++	VCHIQ_SHARED_STATE_T *local = state->local;
++	BITSET_T service_found[BITSET_SIZE(VCHIQ_MAX_SERVICES)];
++	int slot_queue_available;
++
++	/* Use a read memory barrier to ensure that any state that may have
++	** been modified by another thread is not masked by stale prefetched
++	** values. */
++	rmb();
++
++	/* Find slots which have been freed by the other side, and return them
++	** to the available queue. */
++	slot_queue_available = state->slot_queue_available;
++
++	while (slot_queue_available != local->slot_queue_recycle) {
++		unsigned int pos;
++		int slot_index = local->slot_queue[slot_queue_available++ &
++			VCHIQ_SLOT_QUEUE_MASK];
++		char *data = (char *)SLOT_DATA_FROM_INDEX(state, slot_index);
++		int data_found = 0;
++
++		vchiq_log_trace(vchiq_core_log_level, "%d: pfq %d=%x %x %x",
++			state->id, slot_index, (unsigned int)data,
++			local->slot_queue_recycle, slot_queue_available);
++
++		/* Initialise the bitmask for services which have used this
++		** slot */
++		BITSET_ZERO(service_found);
++
++		pos = 0;
++
++		while (pos < VCHIQ_SLOT_SIZE) {
++			VCHIQ_HEADER_T *header =
++				(VCHIQ_HEADER_T *)(data + pos);
++			int msgid = header->msgid;
++			if (VCHIQ_MSG_TYPE(msgid) == VCHIQ_MSG_DATA) {
++				int port = VCHIQ_MSG_SRCPORT(msgid);
++				VCHIQ_SERVICE_QUOTA_T *service_quota =
++					&state->service_quotas[port];
++				int count;
++				spin_lock(&quota_spinlock);
++				count = service_quota->message_use_count;
++				if (count > 0)
++					service_quota->message_use_count =
++						count - 1;
++				spin_unlock(&quota_spinlock);
++
++				if (count == service_quota->message_quota)
++					/* Signal the service that it
++					** has dropped below its quota
++					*/
++					up(&service_quota->quota_event);
++				else if (count == 0) {
++					vchiq_log_error(vchiq_core_log_level,
++						"service %d "
++						"message_use_count=%d "
++						"(header %x, msgid %x, "
++						"header->msgid %x, "
++						"header->size %x)",
++						port,
++						service_quota->
++							message_use_count,
++						(unsigned int)header, msgid,
++						header->msgid,
++						header->size);
++					WARN(1, "invalid message use count\n");
++				}
++				if (!BITSET_IS_SET(service_found, port)) {
++					/* Set the found bit for this service */
++					BITSET_SET(service_found, port);
++
++					spin_lock(&quota_spinlock);
++					count = service_quota->slot_use_count;
++					if (count > 0)
++						service_quota->slot_use_count =
++							count - 1;
++					spin_unlock(&quota_spinlock);
++
++					if (count > 0) {
++						/* Signal the service in case
++						** it has dropped below its
++						** quota */
++						up(&service_quota->quota_event);
++						vchiq_log_trace(
++							vchiq_core_log_level,
++							"%d: pfq:%d %x@%x - "
++							"slot_use->%d",
++							state->id, port,
++							header->size,
++							(unsigned int)header,
++							count - 1);
++					} else {
++						vchiq_log_error(
++							vchiq_core_log_level,
++								"service %d "
++								"slot_use_count"
++								"=%d (header %x"
++								", msgid %x, "
++								"header->msgid"
++								" %x, header->"
++								"size %x)",
++							port, count,
++							(unsigned int)header,
++							msgid,
++							header->msgid,
++							header->size);
++						WARN(1, "bad slot use count\n");
++					}
++				}
++
++				data_found = 1;
++			}
++
++			pos += calc_stride(header->size);
++			if (pos > VCHIQ_SLOT_SIZE) {
++				vchiq_log_error(vchiq_core_log_level,
++					"pfq - pos %x: header %x, msgid %x, "
++					"header->msgid %x, header->size %x",
++					pos, (unsigned int)header, msgid,
++					header->msgid, header->size);
++				WARN(1, "invalid slot position\n");
++			}
++		}
++
++		if (data_found) {
++			int count;
++			spin_lock(&quota_spinlock);
++			count = state->data_use_count;
++			if (count > 0)
++				state->data_use_count =
++					count - 1;
++			spin_unlock(&quota_spinlock);
++			if (count == state->data_quota)
++				up(&state->data_quota_event);
++		}
++
++		state->slot_queue_available = slot_queue_available;
++		up(&state->slot_available_event);
++	}
++}
++
++/* Called by the slot handler and application threads */
++static VCHIQ_STATUS_T
++queue_message(VCHIQ_STATE_T *state, VCHIQ_SERVICE_T *service,
++	int msgid, const VCHIQ_ELEMENT_T *elements,
++	int count, int size, int flags)
++{
++	VCHIQ_SHARED_STATE_T *local;
++	VCHIQ_SERVICE_QUOTA_T *service_quota = NULL;
++	VCHIQ_HEADER_T *header;
++	int type = VCHIQ_MSG_TYPE(msgid);
++
++	unsigned int stride;
++
++	local = state->local;
++
++	stride = calc_stride(size);
++
++	WARN_ON(!(stride <= VCHIQ_SLOT_SIZE));
++
++	if (!(flags & QMFLAGS_NO_MUTEX_LOCK) &&
++		(mutex_lock_interruptible(&state->slot_mutex) != 0))
++		return VCHIQ_RETRY;
++
++	if (type == VCHIQ_MSG_DATA) {
++		int tx_end_index;
++
++		BUG_ON(!service);
++		BUG_ON((flags & (QMFLAGS_NO_MUTEX_LOCK |
++				 QMFLAGS_NO_MUTEX_UNLOCK)) != 0);
++
++		if (service->closing) {
++			/* The service has been closed */
++			mutex_unlock(&state->slot_mutex);
++			return VCHIQ_ERROR;
++		}
++
++		service_quota = &state->service_quotas[service->localport];
++
++		spin_lock(&quota_spinlock);
++
++		/* Ensure this service doesn't use more than its quota of
++		** messages or slots */
++		tx_end_index = SLOT_QUEUE_INDEX_FROM_POS(
++			state->local_tx_pos + stride - 1);
++
++		/* Ensure data messages don't use more than their quota of
++		** slots */
++		while ((tx_end_index != state->previous_data_index) &&
++			(state->data_use_count == state->data_quota)) {
++			VCHIQ_STATS_INC(state, data_stalls);
++			spin_unlock(&quota_spinlock);
++			mutex_unlock(&state->slot_mutex);
++
++			if (down_interruptible(&state->data_quota_event)
++				!= 0)
++				return VCHIQ_RETRY;
++
++			mutex_lock(&state->slot_mutex);
++			spin_lock(&quota_spinlock);
++			tx_end_index = SLOT_QUEUE_INDEX_FROM_POS(
++				state->local_tx_pos + stride - 1);
++			if ((tx_end_index == state->previous_data_index) ||
++				(state->data_use_count < state->data_quota)) {
++				/* Pass the signal on to other waiters */
++				up(&state->data_quota_event);
++				break;
++			}
++		}
++
++		while ((service_quota->message_use_count ==
++				service_quota->message_quota) ||
++			((tx_end_index != service_quota->previous_tx_index) &&
++			(service_quota->slot_use_count ==
++				service_quota->slot_quota))) {
++			spin_unlock(&quota_spinlock);
++			vchiq_log_trace(vchiq_core_log_level,
++				"%d: qm:%d %s,%x - quota stall "
++				"(msg %d, slot %d)",
++				state->id, service->localport,
++				msg_type_str(type), size,
++				service_quota->message_use_count,
++				service_quota->slot_use_count);
++			VCHIQ_SERVICE_STATS_INC(service, quota_stalls);
++			mutex_unlock(&state->slot_mutex);
++			if (down_interruptible(&service_quota->quota_event)
++				!= 0)
++				return VCHIQ_RETRY;
++			if (service->closing)
++				return VCHIQ_ERROR;
++			if (mutex_lock_interruptible(&state->slot_mutex) != 0)
++				return VCHIQ_RETRY;
++			if (service->srvstate != VCHIQ_SRVSTATE_OPEN) {
++				/* The service has been closed */
++				mutex_unlock(&state->slot_mutex);
++				return VCHIQ_ERROR;
++			}
++			spin_lock(&quota_spinlock);
++			tx_end_index = SLOT_QUEUE_INDEX_FROM_POS(
++				state->local_tx_pos + stride - 1);
++		}
++
++		spin_unlock(&quota_spinlock);
++	}
++
++	header = reserve_space(state, stride, flags & QMFLAGS_IS_BLOCKING);
++
++	if (!header) {
++		if (service)
++			VCHIQ_SERVICE_STATS_INC(service, slot_stalls);
++		/* In the event of a failure, return the mutex to the
++		   state it was in */
++		if (!(flags & QMFLAGS_NO_MUTEX_LOCK))
++			mutex_unlock(&state->slot_mutex);
++		return VCHIQ_RETRY;
++	}
++
++	if (type == VCHIQ_MSG_DATA) {
++		int i, pos;
++		int tx_end_index;
++		int slot_use_count;
++
++		vchiq_log_info(vchiq_core_log_level,
++			"%d: qm %s@%x,%x (%d->%d)",
++			state->id,
++			msg_type_str(VCHIQ_MSG_TYPE(msgid)),
++			(unsigned int)header, size,
++			VCHIQ_MSG_SRCPORT(msgid),
++			VCHIQ_MSG_DSTPORT(msgid));
++
++		BUG_ON(!service);
++		BUG_ON((flags & (QMFLAGS_NO_MUTEX_LOCK |
++				 QMFLAGS_NO_MUTEX_UNLOCK)) != 0);
++
++		for (i = 0, pos = 0; i < (unsigned int)count;
++			pos += elements[i++].size)
++			if (elements[i].size) {
++				if (vchiq_copy_from_user
++					(header->data + pos, elements[i].data,
++					(size_t) elements[i].size) !=
++					VCHIQ_SUCCESS) {
++					mutex_unlock(&state->slot_mutex);
++					VCHIQ_SERVICE_STATS_INC(service,
++						error_count);
++					return VCHIQ_ERROR;
++				}
++				if (i == 0) {
++					if (SRVTRACE_ENABLED(service,
++							VCHIQ_LOG_INFO))
++						vchiq_log_dump_mem("Sent", 0,
++							header->data + pos,
++							min(64u,
++							elements[0].size));
++				}
++			}
++
++		spin_lock(&quota_spinlock);
++		service_quota->message_use_count++;
++
++		tx_end_index =
++			SLOT_QUEUE_INDEX_FROM_POS(state->local_tx_pos - 1);
++
++		/* If this transmission can't fit in the last slot used by any
++		** service, the data_use_count must be increased. */
++		if (tx_end_index != state->previous_data_index) {
++			state->previous_data_index = tx_end_index;
++			state->data_use_count++;
++		}
++
++		/* If this isn't the same slot last used by this service,
++		** the service's slot_use_count must be increased. */
++		if (tx_end_index != service_quota->previous_tx_index) {
++			service_quota->previous_tx_index = tx_end_index;
++			slot_use_count = ++service_quota->slot_use_count;
++		} else {
++			slot_use_count = 0;
++		}
++
++		spin_unlock(&quota_spinlock);
++
++		if (slot_use_count)
++			vchiq_log_trace(vchiq_core_log_level,
++				"%d: qm:%d %s,%x - slot_use->%d (hdr %p)",
++				state->id, service->localport,
++				msg_type_str(VCHIQ_MSG_TYPE(msgid)), size,
++				slot_use_count, header);
++
++		VCHIQ_SERVICE_STATS_INC(service, ctrl_tx_count);
++		VCHIQ_SERVICE_STATS_ADD(service, ctrl_tx_bytes, size);
++	} else {
++		vchiq_log_info(vchiq_core_log_level,
++			"%d: qm %s@%x,%x (%d->%d)", state->id,
++			msg_type_str(VCHIQ_MSG_TYPE(msgid)),
++			(unsigned int)header, size,
++			VCHIQ_MSG_SRCPORT(msgid),
++			VCHIQ_MSG_DSTPORT(msgid));
++		if (size != 0) {
++			WARN_ON(!((count == 1) && (size == elements[0].size)));
++			memcpy(header->data, elements[0].data,
++				elements[0].size);
++		}
++		VCHIQ_STATS_INC(state, ctrl_tx_count);
++	}
++
++	header->msgid = msgid;
++	header->size = size;
++
++	{
++		int svc_fourcc;
++
++		svc_fourcc = service
++			? service->base.fourcc
++			: VCHIQ_MAKE_FOURCC('?', '?', '?', '?');
++
++		vchiq_log_info(SRVTRACE_LEVEL(service),
++			"Sent Msg %s(%u) to %c%c%c%c s:%u d:%d len:%d",
++			msg_type_str(VCHIQ_MSG_TYPE(msgid)),
++			VCHIQ_MSG_TYPE(msgid),
++			VCHIQ_FOURCC_AS_4CHARS(svc_fourcc),
++			VCHIQ_MSG_SRCPORT(msgid),
++			VCHIQ_MSG_DSTPORT(msgid),
++			size);
++	}
++
++	/* Make sure the new header is visible to the peer. */
++	wmb();
++
++	/* Make the new tx_pos visible to the peer. */
++	local->tx_pos = state->local_tx_pos;
++	wmb();
++
++	if (service && (type == VCHIQ_MSG_CLOSE))
++		vchiq_set_service_state(service, VCHIQ_SRVSTATE_CLOSESENT);
++
++	if (!(flags & QMFLAGS_NO_MUTEX_UNLOCK))
++		mutex_unlock(&state->slot_mutex);
++
++	remote_event_signal(&state->remote->trigger);
++
++	return VCHIQ_SUCCESS;
++}
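++
++/* Quota summary for the path above: a service may have at most
++** message_quota messages and slot_quota slots in flight, and data
++** messages as a whole may occupy at most data_quota slots; senders block
++** on quota_event / data_quota_event until process_free_queue() recycles
++** slots and drops the matching use counts. */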
++
++/* Called by the slot handler and application threads */
++static VCHIQ_STATUS_T
++queue_message_sync(VCHIQ_STATE_T *state, VCHIQ_SERVICE_T *service,
++	int msgid, const VCHIQ_ELEMENT_T *elements,
++	int count, int size, int is_blocking)
++{
++	VCHIQ_SHARED_STATE_T *local;
++	VCHIQ_HEADER_T *header;
++
++	local = state->local;
++
++	if ((VCHIQ_MSG_TYPE(msgid) != VCHIQ_MSG_RESUME) &&
++		(mutex_lock_interruptible(&state->sync_mutex) != 0))
++		return VCHIQ_RETRY;
++
++	remote_event_wait(&local->sync_release);
++
++	rmb();
++
++	header = (VCHIQ_HEADER_T *)SLOT_DATA_FROM_INDEX(state,
++		local->slot_sync);
++
++	{
++		int oldmsgid = header->msgid;
++		if (oldmsgid != VCHIQ_MSGID_PADDING)
++			vchiq_log_error(vchiq_core_log_level,
++				"%d: qms - msgid %x, not PADDING",
++				state->id, oldmsgid);
++	}
++
++	if (service) {
++		int i, pos;
++
++		vchiq_log_info(vchiq_sync_log_level,
++			"%d: qms %s@%x,%x (%d->%d)", state->id,
++			msg_type_str(VCHIQ_MSG_TYPE(msgid)),
++			(unsigned int)header, size,
++			VCHIQ_MSG_SRCPORT(msgid),
++			VCHIQ_MSG_DSTPORT(msgid));
++
++		for (i = 0, pos = 0; i < (unsigned int)count;
++			pos += elements[i++].size)
++			if (elements[i].size) {
++				if (vchiq_copy_from_user
++					(header->data + pos, elements[i].data,
++					(size_t) elements[i].size) !=
++					VCHIQ_SUCCESS) {
++					mutex_unlock(&state->sync_mutex);
++					VCHIQ_SERVICE_STATS_INC(service,
++						error_count);
++					return VCHIQ_ERROR;
++				}
++				if (i == 0) {
++					if (vchiq_sync_log_level >=
++						VCHIQ_LOG_TRACE)
++						vchiq_log_dump_mem("Sent Sync",
++							0, header->data + pos,
++							min(64u,
++							elements[0].size));
++				}
++			}
++
++		VCHIQ_SERVICE_STATS_INC(service, ctrl_tx_count);
++		VCHIQ_SERVICE_STATS_ADD(service, ctrl_tx_bytes, size);
++	} else {
++		vchiq_log_info(vchiq_sync_log_level,
++			"%d: qms %s@%x,%x (%d->%d)", state->id,
++			msg_type_str(VCHIQ_MSG_TYPE(msgid)),
++			(unsigned int)header, size,
++			VCHIQ_MSG_SRCPORT(msgid),
++			VCHIQ_MSG_DSTPORT(msgid));
++		if (size != 0) {
++			WARN_ON(!((count == 1) && (size == elements[0].size)));
++			memcpy(header->data, elements[0].data,
++				elements[0].size);
++		}
++		VCHIQ_STATS_INC(state, ctrl_tx_count);
++	}
++
++	header->size = size;
++	header->msgid = msgid;
++
++	if (vchiq_sync_log_level >= VCHIQ_LOG_TRACE) {
++		int svc_fourcc;
++
++		svc_fourcc = service
++			? service->base.fourcc
++			: VCHIQ_MAKE_FOURCC('?', '?', '?', '?');
++
++		vchiq_log_trace(vchiq_sync_log_level,
++			"Sent Sync Msg %s(%u) to %c%c%c%c s:%u d:%d len:%d",
++			msg_type_str(VCHIQ_MSG_TYPE(msgid)),
++			VCHIQ_MSG_TYPE(msgid),
++			VCHIQ_FOURCC_AS_4CHARS(svc_fourcc),
++			VCHIQ_MSG_SRCPORT(msgid),
++			VCHIQ_MSG_DSTPORT(msgid),
++			size);
++	}
++
++	/* Make sure the new header is visible to the peer. */
++	wmb();
++
++	remote_event_signal(&state->remote->sync_trigger);
++
++	if (VCHIQ_MSG_TYPE(msgid) != VCHIQ_MSG_PAUSE)
++		mutex_unlock(&state->sync_mutex);
++
++	return VCHIQ_SUCCESS;
++}
++
++static inline void
++claim_slot(VCHIQ_SLOT_INFO_T *slot)
++{
++	slot->use_count++;
++}
++
++static void
++release_slot(VCHIQ_STATE_T *state, VCHIQ_SLOT_INFO_T *slot_info,
++	VCHIQ_HEADER_T *header, VCHIQ_SERVICE_T *service)
++{
++	int release_count;
++
++	mutex_lock(&state->recycle_mutex);
++
++	if (header) {
++		int msgid = header->msgid;
++		if (((msgid & VCHIQ_MSGID_CLAIMED) == 0) ||
++			(service && service->closing)) {
++			mutex_unlock(&state->recycle_mutex);
++			return;
++		}
++
++		/* Rewrite the message header to prevent a double
++		** release */
++		header->msgid = msgid & ~VCHIQ_MSGID_CLAIMED;
++	}
++
++	release_count = slot_info->release_count;
++	slot_info->release_count = ++release_count;
++
++	if (release_count == slot_info->use_count) {
++		int slot_queue_recycle;
++		/* Add to the freed queue */
++
++		/* A read barrier is necessary here to prevent speculative
++		** fetches of remote->slot_queue_recycle from overtaking the
++		** mutex. */
++		rmb();
++
++		slot_queue_recycle = state->remote->slot_queue_recycle;
++		state->remote->slot_queue[slot_queue_recycle &
++			VCHIQ_SLOT_QUEUE_MASK] =
++			SLOT_INDEX_FROM_INFO(state, slot_info);
++		state->remote->slot_queue_recycle = slot_queue_recycle + 1;
++		vchiq_log_info(vchiq_core_log_level,
++			"%d: release_slot %d - recycle->%x",
++			state->id, SLOT_INDEX_FROM_INFO(state, slot_info),
++			state->remote->slot_queue_recycle);
++
++		/* A write barrier is necessary, but remote_event_signal
++		** contains one. */
++		remote_event_signal(&state->remote->recycle);
++	}
++
++	mutex_unlock(&state->recycle_mutex);
++}
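++
++/* Slot lifetime in brief: claim_slot() raises use_count while a message in
++** the slot remains referenced, release_slot() raises release_count as each
++** reference is dropped, and once the two counts match the slot index is
++** appended to the remote recycle queue and the peer is signalled via
++** remote->recycle so it can reuse the slot. */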
++
++/* Called by the slot handler - don't hold the bulk mutex */
++static VCHIQ_STATUS_T
++notify_bulks(VCHIQ_SERVICE_T *service, VCHIQ_BULK_QUEUE_T *queue,
++	int retry_poll)
++{
++	VCHIQ_STATUS_T status = VCHIQ_SUCCESS;
++
++	vchiq_log_trace(vchiq_core_log_level,
++		"%d: nb:%d %cx - p=%x rn=%x r=%x",
++		service->state->id, service->localport,
++		(queue == &service->bulk_tx) ? 't' : 'r',
++		queue->process, queue->remote_notify, queue->remove);
++
++	if (service->state->is_master) {
++		while (queue->remote_notify != queue->process) {
++			VCHIQ_BULK_T *bulk =
++				&queue->bulks[BULK_INDEX(queue->remote_notify)];
++			int msgtype = (bulk->dir == VCHIQ_BULK_TRANSMIT) ?
++				VCHIQ_MSG_BULK_RX_DONE : VCHIQ_MSG_BULK_TX_DONE;
++			int msgid = VCHIQ_MAKE_MSG(msgtype, service->localport,
++				service->remoteport);
++			VCHIQ_ELEMENT_T element = { &bulk->actual, 4 };
++			/* Only reply to non-dummy bulk requests */
++			if (bulk->remote_data) {
++				status = queue_message(service->state, NULL,
++					msgid, &element, 1, 4, 0);
++				if (status != VCHIQ_SUCCESS)
++					break;
++			}
++			queue->remote_notify++;
++		}
++	} else {
++		queue->remote_notify = queue->process;
++	}
++
++	if (status == VCHIQ_SUCCESS) {
++		while (queue->remove != queue->remote_notify) {
++			VCHIQ_BULK_T *bulk =
++				&queue->bulks[BULK_INDEX(queue->remove)];
++
++			/* Only generate callbacks for non-dummy bulk
++			** requests, and non-terminated services */
++			if (bulk->data && service->instance) {
++				if (bulk->actual != VCHIQ_BULK_ACTUAL_ABORTED) {
++					if (bulk->dir == VCHIQ_BULK_TRANSMIT) {
++						VCHIQ_SERVICE_STATS_INC(service,
++							bulk_tx_count);
++						VCHIQ_SERVICE_STATS_ADD(service,
++							bulk_tx_bytes,
++							bulk->actual);
++					} else {
++						VCHIQ_SERVICE_STATS_INC(service,
++							bulk_rx_count);
++						VCHIQ_SERVICE_STATS_ADD(service,
++							bulk_rx_bytes,
++							bulk->actual);
++					}
++				} else {
++					VCHIQ_SERVICE_STATS_INC(service,
++						bulk_aborted_count);
++				}
++				if (bulk->mode == VCHIQ_BULK_MODE_BLOCKING) {
++					struct bulk_waiter *waiter;
++					spin_lock(&bulk_waiter_spinlock);
++					waiter = bulk->userdata;
++					if (waiter) {
++						waiter->actual = bulk->actual;
++						up(&waiter->event);
++					}
++					spin_unlock(&bulk_waiter_spinlock);
++				} else if (bulk->mode ==
++					VCHIQ_BULK_MODE_CALLBACK) {
++					VCHIQ_REASON_T reason = (bulk->dir ==
++						VCHIQ_BULK_TRANSMIT) ?
++						((bulk->actual ==
++						VCHIQ_BULK_ACTUAL_ABORTED) ?
++						VCHIQ_BULK_TRANSMIT_ABORTED :
++						VCHIQ_BULK_TRANSMIT_DONE) :
++						((bulk->actual ==
++						VCHIQ_BULK_ACTUAL_ABORTED) ?
++						VCHIQ_BULK_RECEIVE_ABORTED :
++						VCHIQ_BULK_RECEIVE_DONE);
++					status = make_service_callback(service,
++						reason,	NULL, bulk->userdata);
++					if (status == VCHIQ_RETRY)
++						break;
++				}
++			}
++
++			queue->remove++;
++			up(&service->bulk_remove_event);
++		}
++		if (!retry_poll)
++			status = VCHIQ_SUCCESS;
++	}
++
++	if (status == VCHIQ_RETRY)
++		request_poll(service->state, service,
++			(queue == &service->bulk_tx) ?
++			VCHIQ_POLL_TXNOTIFY : VCHIQ_POLL_RXNOTIFY);
++
++	return status;
++}
++
++/* Called by the slot handler thread */
++static void
++poll_services(VCHIQ_STATE_T *state)
++{
++	int group, i;
++
++	for (group = 0; group < BITSET_SIZE(state->unused_service); group++) {
++		uint32_t flags;
++		flags = atomic_xchg(&state->poll_services[group], 0);
++		for (i = 0; flags; i++) {
++			if (flags & (1 << i)) {
++				VCHIQ_SERVICE_T *service =
++					find_service_by_port(state,
++						(group<<5) + i);
++				uint32_t service_flags;
++				flags &= ~(1 << i);
++				if (!service)
++					continue;
++				service_flags =
++					atomic_xchg(&service->poll_flags, 0);
++				if (service_flags &
++					(1 << VCHIQ_POLL_REMOVE)) {
++					vchiq_log_info(vchiq_core_log_level,
++						"%d: ps - remove %d<->%d",
++						state->id, service->localport,
++						service->remoteport);
++
++					/* Make it look like a client, because
++					   it must be removed and not left in
++					   the LISTENING state. */
++					service->public_fourcc =
++						VCHIQ_FOURCC_INVALID;
++
++					if (vchiq_close_service_internal(
++						service, 0/*!close_recvd*/) !=
++						VCHIQ_SUCCESS)
++						request_poll(state, service,
++							VCHIQ_POLL_REMOVE);
++				} else if (service_flags &
++					(1 << VCHIQ_POLL_TERMINATE)) {
++					vchiq_log_info(vchiq_core_log_level,
++						"%d: ps - terminate %d<->%d",
++						state->id, service->localport,
++						service->remoteport);
++					if (vchiq_close_service_internal(
++						service, 0/*!close_recvd*/) !=
++						VCHIQ_SUCCESS)
++						request_poll(state, service,
++							VCHIQ_POLL_TERMINATE);
++				}
++				if (service_flags & (1 << VCHIQ_POLL_TXNOTIFY))
++					notify_bulks(service,
++						&service->bulk_tx,
++						1/*retry_poll*/);
++				if (service_flags & (1 << VCHIQ_POLL_RXNOTIFY))
++					notify_bulks(service,
++						&service->bulk_rx,
++						1/*retry_poll*/);
++				unlock_service(service);
++			}
++		}
++	}
++}
++
++/* Called by the slot handler or application threads, holding the bulk mutex. */
++static int
++resolve_bulks(VCHIQ_SERVICE_T *service, VCHIQ_BULK_QUEUE_T *queue)
++{
++	VCHIQ_STATE_T *state = service->state;
++	int resolved = 0;
++	int rc;
++
++	while ((queue->process != queue->local_insert) &&
++		(queue->process != queue->remote_insert)) {
++		VCHIQ_BULK_T *bulk = &queue->bulks[BULK_INDEX(queue->process)];
++
++		vchiq_log_trace(vchiq_core_log_level,
++			"%d: rb:%d %cx - li=%x ri=%x p=%x",
++			state->id, service->localport,
++			(queue == &service->bulk_tx) ? 't' : 'r',
++			queue->local_insert, queue->remote_insert,
++			queue->process);
++
++		WARN_ON(!((int)(queue->local_insert - queue->process) > 0));
++		WARN_ON(!((int)(queue->remote_insert - queue->process) > 0));
++
++		rc = mutex_lock_interruptible(&state->bulk_transfer_mutex);
++		if (rc != 0)
++			break;
++
++		vchiq_transfer_bulk(bulk);
++		mutex_unlock(&state->bulk_transfer_mutex);
++
++		if (SRVTRACE_ENABLED(service, VCHIQ_LOG_INFO)) {
++			const char *header = (queue == &service->bulk_tx) ?
++				"Send Bulk to" : "Recv Bulk from";
++			if (bulk->actual != VCHIQ_BULK_ACTUAL_ABORTED)
++				vchiq_log_info(SRVTRACE_LEVEL(service),
++					"%s %c%c%c%c d:%d len:%d %x<->%x",
++					header,
++					VCHIQ_FOURCC_AS_4CHARS(
++						service->base.fourcc),
++					service->remoteport,
++					bulk->size,
++					(unsigned int)bulk->data,
++					(unsigned int)bulk->remote_data);
++			else
++				vchiq_log_info(SRVTRACE_LEVEL(service),
++					"%s %c%c%c%c d:%d ABORTED - tx len:%d,"
++					" rx len:%d %x<->%x",
++					header,
++					VCHIQ_FOURCC_AS_4CHARS(
++						service->base.fourcc),
++					service->remoteport,
++					bulk->size,
++					bulk->remote_size,
++					(unsigned int)bulk->data,
++					(unsigned int)bulk->remote_data);
++		}
++
++		vchiq_complete_bulk(bulk);
++		queue->process++;
++		resolved++;
++	}
++	return resolved;
++}
++
++/* Called with the bulk_mutex held */
++static void
++abort_outstanding_bulks(VCHIQ_SERVICE_T *service, VCHIQ_BULK_QUEUE_T *queue)
++{
++	int is_tx = (queue == &service->bulk_tx);
++	vchiq_log_trace(vchiq_core_log_level,
++		"%d: aob:%d %cx - li=%x ri=%x p=%x",
++		service->state->id, service->localport, is_tx ? 't' : 'r',
++		queue->local_insert, queue->remote_insert, queue->process);
++
++	WARN_ON(!((int)(queue->local_insert - queue->process) >= 0));
++	WARN_ON(!((int)(queue->remote_insert - queue->process) >= 0));
++
++	while ((queue->process != queue->local_insert) ||
++		(queue->process != queue->remote_insert)) {
++		VCHIQ_BULK_T *bulk = &queue->bulks[BULK_INDEX(queue->process)];
++
++		if (queue->process == queue->remote_insert) {
++			/* fabricate a matching dummy bulk */
++			bulk->remote_data = NULL;
++			bulk->remote_size = 0;
++			queue->remote_insert++;
++		}
++
++		if (queue->process != queue->local_insert) {
++			vchiq_complete_bulk(bulk);
++
++			vchiq_log_info(SRVTRACE_LEVEL(service),
++				"%s %c%c%c%c d:%d ABORTED - tx len:%d, "
++				"rx len:%d",
++				is_tx ? "Send Bulk to" : "Recv Bulk from",
++				VCHIQ_FOURCC_AS_4CHARS(service->base.fourcc),
++				service->remoteport,
++				bulk->size,
++				bulk->remote_size);
++		} else {
++			/* fabricate a matching dummy bulk */
++			bulk->data = NULL;
++			bulk->size = 0;
++			bulk->actual = VCHIQ_BULK_ACTUAL_ABORTED;
++			bulk->dir = is_tx ? VCHIQ_BULK_TRANSMIT :
++				VCHIQ_BULK_RECEIVE;
++			queue->local_insert++;
++		}
++
++		queue->process++;
++	}
++}
++
++/* Called from the slot handler thread */
++static void
++pause_bulks(VCHIQ_STATE_T *state)
++{
++	if (unlikely(atomic_inc_return(&pause_bulks_count) != 1)) {
++		WARN_ON_ONCE(1);
++		atomic_set(&pause_bulks_count, 1);
++		return;
++	}
++
++	/* Block bulk transfers from all services */
++	mutex_lock(&state->bulk_transfer_mutex);
++}
++
++/* Called from the slot handler thread */
++static void
++resume_bulks(VCHIQ_STATE_T *state)
++{
++	int i;
++	if (unlikely(atomic_dec_return(&pause_bulks_count) != 0)) {
++		WARN_ON_ONCE(1);
++		atomic_set(&pause_bulks_count, 0);
++		return;
++	}
++
++	/* Allow bulk transfers from all services */
++	mutex_unlock(&state->bulk_transfer_mutex);
++
++	if (state->deferred_bulks == 0)
++		return;
++
++	/* Deal with any bulks which had to be deferred due to being in
++	 * paused state.  Don't try to match up to number of deferred bulks
++	 * in case we've had something come and close the service in the
++	 * interim - just process all bulk queues for all services */
++	vchiq_log_info(vchiq_core_log_level, "%s: processing %d deferred bulks",
++		__func__, state->deferred_bulks);
++
++	for (i = 0; i < state->unused_service; i++) {
++		VCHIQ_SERVICE_T *service = state->services[i];
++		int resolved_rx = 0;
++		int resolved_tx = 0;
++		if (!service || (service->srvstate != VCHIQ_SRVSTATE_OPEN))
++			continue;
++
++		mutex_lock(&service->bulk_mutex);
++		resolved_rx = resolve_bulks(service, &service->bulk_rx);
++		resolved_tx = resolve_bulks(service, &service->bulk_tx);
++		mutex_unlock(&service->bulk_mutex);
++		if (resolved_rx)
++			notify_bulks(service, &service->bulk_rx, 1);
++		if (resolved_tx)
++			notify_bulks(service, &service->bulk_tx, 1);
++	}
++	state->deferred_bulks = 0;
++}
++
++static int
++parse_open(VCHIQ_STATE_T *state, VCHIQ_HEADER_T *header)
++{
++	VCHIQ_SERVICE_T *service = NULL;
++	int msgid, size;
++	int type;
++	unsigned int localport, remoteport;
++
++	msgid = header->msgid;
++	size = header->size;
++	type = VCHIQ_MSG_TYPE(msgid);
++	localport = VCHIQ_MSG_DSTPORT(msgid);
++	remoteport = VCHIQ_MSG_SRCPORT(msgid);
++	if (size >= sizeof(struct vchiq_open_payload)) {
++		const struct vchiq_open_payload *payload =
++			(struct vchiq_open_payload *)header->data;
++		unsigned int fourcc;
++
++		fourcc = payload->fourcc;
++		vchiq_log_info(vchiq_core_log_level,
++			"%d: prs OPEN@%x (%d->'%c%c%c%c')",
++			state->id, (unsigned int)header,
++			localport,
++			VCHIQ_FOURCC_AS_4CHARS(fourcc));
++
++		service = get_listening_service(state, fourcc);
++
++		if (service) {
++			/* A matching service exists */
++			short version = payload->version;
++			short version_min = payload->version_min;
++			if ((service->version < version_min) ||
++				(version < service->version_min)) {
++				/* Version mismatch */
++				vchiq_loud_error_header();
++				vchiq_loud_error("%d: service %d (%c%c%c%c) "
++					"version mismatch - local (%d, min %d)"
++					" vs. remote (%d, min %d)",
++					state->id, service->localport,
++					VCHIQ_FOURCC_AS_4CHARS(fourcc),
++					service->version, service->version_min,
++					version, version_min);
++				vchiq_loud_error_footer();
++				unlock_service(service);
++				service = NULL;
++				goto fail_open;
++			}
++			service->peer_version = version;
++
++			if (service->srvstate == VCHIQ_SRVSTATE_LISTENING) {
++				struct vchiq_openack_payload ack_payload = {
++					service->version
++				};
++				VCHIQ_ELEMENT_T body = {
++					&ack_payload,
++					sizeof(ack_payload)
++				};
++
++				if (state->version_common <
++				    VCHIQ_VERSION_SYNCHRONOUS_MODE)
++					service->sync = 0;
++
++				/* Acknowledge the OPEN */
++				if (service->sync &&
++				    (state->version_common >=
++				     VCHIQ_VERSION_SYNCHRONOUS_MODE)) {
++					if (queue_message_sync(state, NULL,
++						VCHIQ_MAKE_MSG(
++							VCHIQ_MSG_OPENACK,
++							service->localport,
++							remoteport),
++						&body, 1, sizeof(ack_payload),
++						0) == VCHIQ_RETRY)
++						goto bail_not_ready;
++				} else {
++					if (queue_message(state, NULL,
++						VCHIQ_MAKE_MSG(
++							VCHIQ_MSG_OPENACK,
++							service->localport,
++							remoteport),
++						&body, 1, sizeof(ack_payload),
++						0) == VCHIQ_RETRY)
++						goto bail_not_ready;
++				}
++
++				/* The service is now open */
++				vchiq_set_service_state(service,
++					service->sync ? VCHIQ_SRVSTATE_OPENSYNC
++					: VCHIQ_SRVSTATE_OPEN);
++			}
++
++			service->remoteport = remoteport;
++			service->client_id = ((int *)header->data)[1];
++			if (make_service_callback(service, VCHIQ_SERVICE_OPENED,
++				NULL, NULL) == VCHIQ_RETRY) {
++				/* Bail out if not ready */
++				service->remoteport = VCHIQ_PORT_FREE;
++				goto bail_not_ready;
++			}
++
++			/* Success - the message has been dealt with */
++			unlock_service(service);
++			return 1;
++		}
++	}
++
++fail_open:
++	/* No available service, or an invalid request - send a CLOSE */
++	if (queue_message(state, NULL,
++		VCHIQ_MAKE_MSG(VCHIQ_MSG_CLOSE, 0, VCHIQ_MSG_SRCPORT(msgid)),
++		NULL, 0, 0, 0) == VCHIQ_RETRY)
++		goto bail_not_ready;
++
++	return 1;
++
++bail_not_ready:
++	if (service)
++		unlock_service(service);
++
++	return 0;
++}
++
++/* Called by the slot handler thread */
++static void
++parse_rx_slots(VCHIQ_STATE_T *state)
++{
++	VCHIQ_SHARED_STATE_T *remote = state->remote;
++	VCHIQ_SERVICE_T *service = NULL;
++	int tx_pos;
++	DEBUG_INITIALISE(state->local)
++
++	tx_pos = remote->tx_pos;
++
++	while (state->rx_pos != tx_pos) {
++		VCHIQ_HEADER_T *header;
++		int msgid, size;
++		int type;
++		unsigned int localport, remoteport;
++
++		DEBUG_TRACE(PARSE_LINE);
++		if (!state->rx_data) {
++			int rx_index;
++			WARN_ON(!((state->rx_pos & VCHIQ_SLOT_MASK) == 0));
++			rx_index = remote->slot_queue[
++				SLOT_QUEUE_INDEX_FROM_POS(state->rx_pos) &
++				VCHIQ_SLOT_QUEUE_MASK];
++			state->rx_data = (char *)SLOT_DATA_FROM_INDEX(state,
++				rx_index);
++			state->rx_info = SLOT_INFO_FROM_INDEX(state, rx_index);
++
++			/* Initialise use_count to one, and increment
++			** release_count at the end of the slot to avoid
++			** releasing the slot prematurely. */
++			state->rx_info->use_count = 1;
++			state->rx_info->release_count = 0;
++		}
++
++		header = (VCHIQ_HEADER_T *)(state->rx_data +
++			(state->rx_pos & VCHIQ_SLOT_MASK));
++		DEBUG_VALUE(PARSE_HEADER, (int)header);
++		msgid = header->msgid;
++		DEBUG_VALUE(PARSE_MSGID, msgid);
++		size = header->size;
++		type = VCHIQ_MSG_TYPE(msgid);
++		localport = VCHIQ_MSG_DSTPORT(msgid);
++		remoteport = VCHIQ_MSG_SRCPORT(msgid);
++
++		if (type != VCHIQ_MSG_DATA)
++			VCHIQ_STATS_INC(state, ctrl_rx_count);
++
++		switch (type) {
++		case VCHIQ_MSG_OPENACK:
++		case VCHIQ_MSG_CLOSE:
++		case VCHIQ_MSG_DATA:
++		case VCHIQ_MSG_BULK_RX:
++		case VCHIQ_MSG_BULK_TX:
++		case VCHIQ_MSG_BULK_RX_DONE:
++		case VCHIQ_MSG_BULK_TX_DONE:
++			service = find_service_by_port(state, localport);
++			if ((!service ||
++			     ((service->remoteport != remoteport) &&
++			      (service->remoteport != VCHIQ_PORT_FREE))) &&
++			    (localport == 0) &&
++			    (type == VCHIQ_MSG_CLOSE)) {
++				/* This could be a CLOSE from a client which
++				   hadn't yet received the OPENACK - look for
++				   the connected service */
++				if (service)
++					unlock_service(service);
++				service = get_connected_service(state,
++					remoteport);
++				if (service)
++					vchiq_log_warning(vchiq_core_log_level,
++						"%d: prs %s@%x (%d->%d) - "
++						"found connected service %d",
++						state->id, msg_type_str(type),
++						(unsigned int)header,
++						remoteport, localport,
++						service->localport);
++			}
++
++			if (!service) {
++				vchiq_log_error(vchiq_core_log_level,
++					"%d: prs %s@%x (%d->%d) - "
++					"invalid/closed service %d",
++					state->id, msg_type_str(type),
++					(unsigned int)header,
++					remoteport, localport, localport);
++				goto skip_message;
++			}
++			break;
++		default:
++			break;
++		}
++
++		if (SRVTRACE_ENABLED(service, VCHIQ_LOG_INFO)) {
++			int svc_fourcc;
++
++			svc_fourcc = service
++				? service->base.fourcc
++				: VCHIQ_MAKE_FOURCC('?', '?', '?', '?');
++			vchiq_log_info(SRVTRACE_LEVEL(service),
++				"Rcvd Msg %s(%u) from %c%c%c%c s:%d d:%d "
++				"len:%d",
++				msg_type_str(type), type,
++				VCHIQ_FOURCC_AS_4CHARS(svc_fourcc),
++				remoteport, localport, size);
++			if (size > 0)
++				vchiq_log_dump_mem("Rcvd", 0, header->data,
++					min(64, size));
++		}
++
++		if (((unsigned int)header & VCHIQ_SLOT_MASK) + calc_stride(size)
++			> VCHIQ_SLOT_SIZE) {
++			vchiq_log_error(vchiq_core_log_level,
++				"header %x (msgid %x) - size %x too big for "
++				"slot",
++				(unsigned int)header, (unsigned int)msgid,
++				(unsigned int)size);
++			WARN(1, "oversized for slot\n");
++		}
++
++		switch (type) {
++		case VCHIQ_MSG_OPEN:
++			WARN_ON(!(VCHIQ_MSG_DSTPORT(msgid) == 0));
++			if (!parse_open(state, header))
++				goto bail_not_ready;
++			break;
++		case VCHIQ_MSG_OPENACK:
++			if (size >= sizeof(struct vchiq_openack_payload)) {
++				const struct vchiq_openack_payload *payload =
++					(struct vchiq_openack_payload *)
++					header->data;
++				service->peer_version = payload->version;
++			}
++			vchiq_log_info(vchiq_core_log_level,
++				"%d: prs OPENACK@%x,%x (%d->%d) v:%d",
++				state->id, (unsigned int)header, size,
++				remoteport, localport, service->peer_version);
++			if (service->srvstate ==
++				VCHIQ_SRVSTATE_OPENING) {
++				service->remoteport = remoteport;
++				vchiq_set_service_state(service,
++					VCHIQ_SRVSTATE_OPEN);
++				up(&service->remove_event);
++			} else
++				vchiq_log_error(vchiq_core_log_level,
++					"OPENACK received in state %s",
++					srvstate_names[service->srvstate]);
++			break;
++		case VCHIQ_MSG_CLOSE:
++			WARN_ON(size != 0); /* There should be no data */
++
++			vchiq_log_info(vchiq_core_log_level,
++				"%d: prs CLOSE@%x (%d->%d)",
++				state->id, (unsigned int)header,
++				remoteport, localport);
++
++			mark_service_closing_internal(service, 1);
++
++			if (vchiq_close_service_internal(service,
++				1/*close_recvd*/) == VCHIQ_RETRY)
++				goto bail_not_ready;
++
++			vchiq_log_info(vchiq_core_log_level,
++				"Close Service %c%c%c%c s:%u d:%d",
++				VCHIQ_FOURCC_AS_4CHARS(service->base.fourcc),
++				service->localport,
++				service->remoteport);
++			break;
++		case VCHIQ_MSG_DATA:
++			vchiq_log_info(vchiq_core_log_level,
++				"%d: prs DATA@%x,%x (%d->%d)",
++				state->id, (unsigned int)header, size,
++				remoteport, localport);
++
++			if ((service->remoteport == remoteport)
++				&& (service->srvstate ==
++				VCHIQ_SRVSTATE_OPEN)) {
++				header->msgid = msgid | VCHIQ_MSGID_CLAIMED;
++				claim_slot(state->rx_info);
++				DEBUG_TRACE(PARSE_LINE);
++				if (make_service_callback(service,
++					VCHIQ_MESSAGE_AVAILABLE, header,
++					NULL) == VCHIQ_RETRY) {
++					DEBUG_TRACE(PARSE_LINE);
++					goto bail_not_ready;
++				}
++				VCHIQ_SERVICE_STATS_INC(service, ctrl_rx_count);
++				VCHIQ_SERVICE_STATS_ADD(service, ctrl_rx_bytes,
++					size);
++			} else {
++				VCHIQ_STATS_INC(state, error_count);
++			}
++			break;
++		case VCHIQ_MSG_CONNECT:
++			vchiq_log_info(vchiq_core_log_level,
++				"%d: prs CONNECT@%x",
++				state->id, (unsigned int)header);
++			state->version_common = ((VCHIQ_SLOT_ZERO_T *)
++						 state->slot_data)->version;
++			up(&state->connect);
++			break;
++		case VCHIQ_MSG_BULK_RX:
++		case VCHIQ_MSG_BULK_TX: {
++			VCHIQ_BULK_QUEUE_T *queue;
++			WARN_ON(!state->is_master);
++			queue = (type == VCHIQ_MSG_BULK_RX) ?
++				&service->bulk_tx : &service->bulk_rx;
++			if ((service->remoteport == remoteport)
++				&& (service->srvstate ==
++				VCHIQ_SRVSTATE_OPEN)) {
++				VCHIQ_BULK_T *bulk;
++				int resolved = 0;
++
++				DEBUG_TRACE(PARSE_LINE);
++				if (mutex_lock_interruptible(
++					&service->bulk_mutex) != 0) {
++					DEBUG_TRACE(PARSE_LINE);
++					goto bail_not_ready;
++				}
++
++				WARN_ON(!(queue->remote_insert < queue->remove +
++					VCHIQ_NUM_SERVICE_BULKS));
++				bulk = &queue->bulks[
++					BULK_INDEX(queue->remote_insert)];
++				bulk->remote_data =
++					(void *)((int *)header->data)[0];
++				bulk->remote_size = ((int *)header->data)[1];
++				wmb();
++
++				vchiq_log_info(vchiq_core_log_level,
++					"%d: prs %s@%x (%d->%d) %x@%x",
++					state->id, msg_type_str(type),
++					(unsigned int)header,
++					remoteport, localport,
++					bulk->remote_size,
++					(unsigned int)bulk->remote_data);
++
++				queue->remote_insert++;
++
++				if (atomic_read(&pause_bulks_count)) {
++					state->deferred_bulks++;
++					vchiq_log_info(vchiq_core_log_level,
++						"%s: deferring bulk (%d)",
++						__func__,
++						state->deferred_bulks);
++					if (state->conn_state !=
++						VCHIQ_CONNSTATE_PAUSE_SENT)
++						vchiq_log_error(
++							vchiq_core_log_level,
++							"%s: bulks paused in "
++							"unexpected state %s",
++							__func__,
++							conn_state_names[
++							state->conn_state]);
++				} else if (state->conn_state ==
++					VCHIQ_CONNSTATE_CONNECTED) {
++					DEBUG_TRACE(PARSE_LINE);
++					resolved = resolve_bulks(service,
++						queue);
++				}
++
++				mutex_unlock(&service->bulk_mutex);
++				if (resolved)
++					notify_bulks(service, queue,
++						1/*retry_poll*/);
++			}
++		} break;
++		case VCHIQ_MSG_BULK_RX_DONE:
++		case VCHIQ_MSG_BULK_TX_DONE:
++			WARN_ON(state->is_master);
++			if ((service->remoteport == remoteport)
++				&& (service->srvstate !=
++				VCHIQ_SRVSTATE_FREE)) {
++				VCHIQ_BULK_QUEUE_T *queue;
++				VCHIQ_BULK_T *bulk;
++
++				queue = (type == VCHIQ_MSG_BULK_RX_DONE) ?
++					&service->bulk_rx : &service->bulk_tx;
++
++				DEBUG_TRACE(PARSE_LINE);
++				if (mutex_lock_interruptible(
++					&service->bulk_mutex) != 0) {
++					DEBUG_TRACE(PARSE_LINE);
++					goto bail_not_ready;
++				}
++				if ((int)(queue->remote_insert -
++					queue->local_insert) >= 0) {
++					vchiq_log_error(vchiq_core_log_level,
++						"%d: prs %s@%x (%d->%d) "
++						"unexpected (ri=%d,li=%d)",
++						state->id, msg_type_str(type),
++						(unsigned int)header,
++						remoteport, localport,
++						queue->remote_insert,
++						queue->local_insert);
++					mutex_unlock(&service->bulk_mutex);
++					break;
++				}
++
++				BUG_ON(queue->process == queue->local_insert);
++				BUG_ON(queue->process != queue->remote_insert);
++
++				bulk = &queue->bulks[
++					BULK_INDEX(queue->remote_insert)];
++				bulk->actual = *(int *)header->data;
++				queue->remote_insert++;
++
++				vchiq_log_info(vchiq_core_log_level,
++					"%d: prs %s@%x (%d->%d) %x@%x",
++					state->id, msg_type_str(type),
++					(unsigned int)header,
++					remoteport, localport,
++					bulk->actual, (unsigned int)bulk->data);
++
++				vchiq_log_trace(vchiq_core_log_level,
++					"%d: prs:%d %cx li=%x ri=%x p=%x",
++					state->id, localport,
++					(type == VCHIQ_MSG_BULK_RX_DONE) ?
++						'r' : 't',
++					queue->local_insert,
++					queue->remote_insert, queue->process);
++
++				DEBUG_TRACE(PARSE_LINE);
++				WARN_ON(queue->process == queue->local_insert);
++				vchiq_complete_bulk(bulk);
++				queue->process++;
++				mutex_unlock(&service->bulk_mutex);
++				DEBUG_TRACE(PARSE_LINE);
++				notify_bulks(service, queue, 1/*retry_poll*/);
++				DEBUG_TRACE(PARSE_LINE);
++			}
++			break;
++		case VCHIQ_MSG_PADDING:
++			vchiq_log_trace(vchiq_core_log_level,
++				"%d: prs PADDING@%x,%x",
++				state->id, (unsigned int)header, size);
++			break;
++		case VCHIQ_MSG_PAUSE:
++			/* If initiated, signal the application thread */
++			vchiq_log_trace(vchiq_core_log_level,
++				"%d: prs PAUSE@%x,%x",
++				state->id, (unsigned int)header, size);
++			if (state->conn_state == VCHIQ_CONNSTATE_PAUSED) {
++				vchiq_log_error(vchiq_core_log_level,
++					"%d: PAUSE received in state PAUSED",
++					state->id);
++				break;
++			}
++			if (state->conn_state != VCHIQ_CONNSTATE_PAUSE_SENT) {
++				/* Send a PAUSE in response */
++				if (queue_message(state, NULL,
++					VCHIQ_MAKE_MSG(VCHIQ_MSG_PAUSE, 0, 0),
++					NULL, 0, 0, QMFLAGS_NO_MUTEX_UNLOCK)
++				    == VCHIQ_RETRY)
++					goto bail_not_ready;
++				if (state->is_master)
++					pause_bulks(state);
++			}
++			/* At this point slot_mutex is held */
++			vchiq_set_conn_state(state, VCHIQ_CONNSTATE_PAUSED);
++			vchiq_platform_paused(state);
++			break;
++		case VCHIQ_MSG_RESUME:
++			vchiq_log_trace(vchiq_core_log_level,
++				"%d: prs RESUME@%x,%x",
++				state->id, (unsigned int)header, size);
++			/* Release the slot mutex */
++			mutex_unlock(&state->slot_mutex);
++			if (state->is_master)
++				resume_bulks(state);
++			vchiq_set_conn_state(state, VCHIQ_CONNSTATE_CONNECTED);
++			vchiq_platform_resumed(state);
++			break;
++
++		case VCHIQ_MSG_REMOTE_USE:
++			vchiq_on_remote_use(state);
++			break;
++		case VCHIQ_MSG_REMOTE_RELEASE:
++			vchiq_on_remote_release(state);
++			break;
++		case VCHIQ_MSG_REMOTE_USE_ACTIVE:
++			vchiq_on_remote_use_active(state);
++			break;
++
++		default:
++			vchiq_log_error(vchiq_core_log_level,
++				"%d: prs invalid msgid %x@%x,%x",
++				state->id, msgid, (unsigned int)header, size);
++			WARN(1, "invalid message\n");
++			break;
++		}
++
++skip_message:
++		if (service) {
++			unlock_service(service);
++			service = NULL;
++		}
++
++		state->rx_pos += calc_stride(size);
++
++		DEBUG_TRACE(PARSE_LINE);
++		/* Perform some housekeeping when the end of the slot is
++		** reached. */
++		if ((state->rx_pos & VCHIQ_SLOT_MASK) == 0) {
++			/* Remove the extra reference count. */
++			release_slot(state, state->rx_info, NULL, NULL);
++			state->rx_data = NULL;
++		}
++	}
++
++bail_not_ready:
++	if (service)
++		unlock_service(service);
++}
++
++/* Called by the slot handler thread */
++static int
++slot_handler_func(void *v)
++{
++	VCHIQ_STATE_T *state = (VCHIQ_STATE_T *) v;
++	VCHIQ_SHARED_STATE_T *local = state->local;
++	DEBUG_INITIALISE(local)
++
++	while (1) {
++		DEBUG_COUNT(SLOT_HANDLER_COUNT);
++		DEBUG_TRACE(SLOT_HANDLER_LINE);
++		remote_event_wait(&local->trigger);
++
++		rmb();
++
++		DEBUG_TRACE(SLOT_HANDLER_LINE);
++		if (state->poll_needed) {
++			/* Check if we need to suspend - may change our
++			 * conn_state */
++			vchiq_platform_check_suspend(state);
++
++			state->poll_needed = 0;
++
++			/* Handle service polling and other rare conditions here
++			** out of the mainline code */
++			switch (state->conn_state) {
++			case VCHIQ_CONNSTATE_CONNECTED:
++				/* Poll the services as requested */
++				poll_services(state);
++				break;
++
++			case VCHIQ_CONNSTATE_PAUSING:
++				if (state->is_master)
++					pause_bulks(state);
++				if (queue_message(state, NULL,
++					VCHIQ_MAKE_MSG(VCHIQ_MSG_PAUSE, 0, 0),
++					NULL, 0, 0,
++					QMFLAGS_NO_MUTEX_UNLOCK)
++				    != VCHIQ_RETRY) {
++					vchiq_set_conn_state(state,
++						VCHIQ_CONNSTATE_PAUSE_SENT);
++				} else {
++					if (state->is_master)
++						resume_bulks(state);
++					/* Retry later */
++					state->poll_needed = 1;
++				}
++				break;
++
++			case VCHIQ_CONNSTATE_PAUSED:
++				vchiq_platform_resume(state);
++				break;
++
++			case VCHIQ_CONNSTATE_RESUMING:
++				if (queue_message(state, NULL,
++					VCHIQ_MAKE_MSG(VCHIQ_MSG_RESUME, 0, 0),
++					NULL, 0, 0, QMFLAGS_NO_MUTEX_LOCK)
++					!= VCHIQ_RETRY) {
++					if (state->is_master)
++						resume_bulks(state);
++					vchiq_set_conn_state(state,
++						VCHIQ_CONNSTATE_CONNECTED);
++					vchiq_platform_resumed(state);
++				} else {
++					/* This should really be impossible,
++					** since the PAUSE should have flushed
++					** through outstanding messages. */
++					vchiq_log_error(vchiq_core_log_level,
++						"Failed to send RESUME "
++						"message");
++					BUG();
++				}
++				break;
++
++			case VCHIQ_CONNSTATE_PAUSE_TIMEOUT:
++			case VCHIQ_CONNSTATE_RESUME_TIMEOUT:
++				vchiq_platform_handle_timeout(state);
++				break;
++			default:
++				break;
++			}
++
++
++		}
++
++		DEBUG_TRACE(SLOT_HANDLER_LINE);
++		parse_rx_slots(state);
++	}
++	return 0;
++}
++
++
++/* Called by the recycle thread */
++static int
++recycle_func(void *v)
++{
++	VCHIQ_STATE_T *state = (VCHIQ_STATE_T *) v;
++	VCHIQ_SHARED_STATE_T *local = state->local;
++
++	while (1) {
++		remote_event_wait(&local->recycle);
++
++		process_free_queue(state);
++	}
++	return 0;
++}
++
++
++/* Called by the sync thread */
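++/* All incoming synchronous messages arrive in a single dedicated slot
++ * (state->remote->slot_sync); each message must be released with
++ * release_message_sync() before the remote side can send the next one.
++ */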
++static int
++sync_func(void *v)
++{
++	VCHIQ_STATE_T *state = (VCHIQ_STATE_T *) v;
++	VCHIQ_SHARED_STATE_T *local = state->local;
++	VCHIQ_HEADER_T *header = (VCHIQ_HEADER_T *)SLOT_DATA_FROM_INDEX(state,
++		state->remote->slot_sync);
++
++	while (1) {
++		VCHIQ_SERVICE_T *service;
++		int msgid, size;
++		int type;
++		unsigned int localport, remoteport;
++
++		remote_event_wait(&local->sync_trigger);
++
++		rmb();
++
++		msgid = header->msgid;
++		size = header->size;
++		type = VCHIQ_MSG_TYPE(msgid);
++		localport = VCHIQ_MSG_DSTPORT(msgid);
++		remoteport = VCHIQ_MSG_SRCPORT(msgid);
++
++		service = find_service_by_port(state, localport);
++
++		if (!service) {
++			vchiq_log_error(vchiq_sync_log_level,
++				"%d: sf %s@%x (%d->%d) - "
++				"invalid/closed service %d",
++				state->id, msg_type_str(type),
++				(unsigned int)header,
++				remoteport, localport, localport);
++			release_message_sync(state, header);
++			continue;
++		}
++
++		if (vchiq_sync_log_level >= VCHIQ_LOG_TRACE) {
++			int svc_fourcc;
++
++			svc_fourcc = service
++				? service->base.fourcc
++				: VCHIQ_MAKE_FOURCC('?', '?', '?', '?');
++			vchiq_log_trace(vchiq_sync_log_level,
++				"Rcvd Msg %s from %c%c%c%c s:%d d:%d len:%d",
++				msg_type_str(type),
++				VCHIQ_FOURCC_AS_4CHARS(svc_fourcc),
++				remoteport, localport, size);
++			if (size > 0)
++				vchiq_log_dump_mem("Rcvd", 0, header->data,
++					min(64, size));
++		}
++
++		switch (type) {
++		case VCHIQ_MSG_OPENACK:
++			if (size >= sizeof(struct vchiq_openack_payload)) {
++				const struct vchiq_openack_payload *payload =
++					(struct vchiq_openack_payload *)
++					header->data;
++				service->peer_version = payload->version;
++			}
++			vchiq_log_info(vchiq_sync_log_level,
++				"%d: sf OPENACK@%x,%x (%d->%d) v:%d",
++				state->id, (unsigned int)header, size,
++				remoteport, localport, service->peer_version);
++			if (service->srvstate == VCHIQ_SRVSTATE_OPENING) {
++				service->remoteport = remoteport;
++				vchiq_set_service_state(service,
++					VCHIQ_SRVSTATE_OPENSYNC);
++				service->sync = 1;
++				up(&service->remove_event);
++			}
++			release_message_sync(state, header);
++			break;
++
++		case VCHIQ_MSG_DATA:
++			vchiq_log_trace(vchiq_sync_log_level,
++				"%d: sf DATA@%x,%x (%d->%d)",
++				state->id, (unsigned int)header, size,
++				remoteport, localport);
++
++			if ((service->remoteport == remoteport) &&
++				(service->srvstate ==
++				VCHIQ_SRVSTATE_OPENSYNC)) {
++				if (make_service_callback(service,
++					VCHIQ_MESSAGE_AVAILABLE, header,
++					NULL) == VCHIQ_RETRY)
++					vchiq_log_error(vchiq_sync_log_level,
++						"synchronous callback to "
++						"service %d returns "
++						"VCHIQ_RETRY",
++						localport);
++			}
++			break;
++
++		default:
++			vchiq_log_error(vchiq_sync_log_level,
++				"%d: sf unexpected msgid %x@%x,%x",
++				state->id, msgid, (unsigned int)header, size);
++			release_message_sync(state, header);
++			break;
++		}
++
++		unlock_service(service);
++	}
++
++	return 0;
++}
++
++
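++/* Reset a bulk queue. The indices only ever increase; BULK_INDEX() maps
++ * them onto the fixed-size bulks array whenever an entry is accessed.
++ */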
++static void
++init_bulk_queue(VCHIQ_BULK_QUEUE_T *queue)
++{
++	queue->local_insert = 0;
++	queue->remote_insert = 0;
++	queue->process = 0;
++	queue->remote_notify = 0;
++	queue->remove = 0;
++}
++
++
++inline const char *
++get_conn_state_name(VCHIQ_CONNSTATE_T conn_state)
++{
++	return conn_state_names[conn_state];
++}
++
++
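++/* Carve the shared memory into VCHIQ_SLOT_SIZE slots. The first
++ * VCHIQ_SLOT_ZERO_SLOTS slots hold the shared control state (slot zero);
++ * the remaining slots are split evenly between master and slave, with the
++ * first slot of each half reserved for synchronous messages.
++ */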
++VCHIQ_SLOT_ZERO_T *
++vchiq_init_slots(void *mem_base, int mem_size)
++{
++	int mem_align = (VCHIQ_SLOT_SIZE - (int)mem_base) & VCHIQ_SLOT_MASK;
++	VCHIQ_SLOT_ZERO_T *slot_zero =
++		(VCHIQ_SLOT_ZERO_T *)((char *)mem_base + mem_align);
++	int num_slots = (mem_size - mem_align)/VCHIQ_SLOT_SIZE;
++	int first_data_slot = VCHIQ_SLOT_ZERO_SLOTS;
++
++	/* Ensure there is enough memory to run an absolutely minimal system */
++	num_slots -= first_data_slot;
++
++	if (num_slots < 4) {
++		vchiq_log_error(vchiq_core_log_level,
++			"vchiq_init_slots - insufficient memory %x bytes",
++			mem_size);
++		return NULL;
++	}
++
++	memset(slot_zero, 0, sizeof(VCHIQ_SLOT_ZERO_T));
++
++	slot_zero->magic = VCHIQ_MAGIC;
++	slot_zero->version = VCHIQ_VERSION;
++	slot_zero->version_min = VCHIQ_VERSION_MIN;
++	slot_zero->slot_zero_size = sizeof(VCHIQ_SLOT_ZERO_T);
++	slot_zero->slot_size = VCHIQ_SLOT_SIZE;
++	slot_zero->max_slots = VCHIQ_MAX_SLOTS;
++	slot_zero->max_slots_per_side = VCHIQ_MAX_SLOTS_PER_SIDE;
++
++	slot_zero->master.slot_sync = first_data_slot;
++	slot_zero->master.slot_first = first_data_slot + 1;
++	slot_zero->master.slot_last = first_data_slot + (num_slots/2) - 1;
++	slot_zero->slave.slot_sync = first_data_slot + (num_slots/2);
++	slot_zero->slave.slot_first = first_data_slot + (num_slots/2) + 1;
++	slot_zero->slave.slot_last = first_data_slot + num_slots - 1;
++
++	return slot_zero;
++}
++
++VCHIQ_STATUS_T
++vchiq_init_state(VCHIQ_STATE_T *state, VCHIQ_SLOT_ZERO_T *slot_zero,
++		 int is_master)
++{
++	VCHIQ_SHARED_STATE_T *local;
++	VCHIQ_SHARED_STATE_T *remote;
++	VCHIQ_STATUS_T status;
++	char threadname[10];
++	static int id;
++	int i;
++
++	vchiq_log_warning(vchiq_core_log_level,
++		"%s: slot_zero = 0x%08lx, is_master = %d",
++		__func__, (unsigned long)slot_zero, is_master);
++
++	/* Check the input configuration */
++
++	if (slot_zero->magic != VCHIQ_MAGIC) {
++		vchiq_loud_error_header();
++		vchiq_loud_error("Invalid VCHIQ magic value found.");
++		vchiq_loud_error("slot_zero=%x: magic=%x (expected %x)",
++			(unsigned int)slot_zero, slot_zero->magic, VCHIQ_MAGIC);
++		vchiq_loud_error_footer();
++		return VCHIQ_ERROR;
++	}
++
++	if (slot_zero->version < VCHIQ_VERSION_MIN) {
++		vchiq_loud_error_header();
++		vchiq_loud_error("Incompatible VCHIQ versions found.");
++		vchiq_loud_error("slot_zero=%x: VideoCore version=%d "
++			"(minimum %d)",
++			(unsigned int)slot_zero, slot_zero->version,
++			VCHIQ_VERSION_MIN);
++		vchiq_loud_error("Restart with a newer VideoCore image.");
++		vchiq_loud_error_footer();
++		return VCHIQ_ERROR;
++	}
++
++	if (VCHIQ_VERSION < slot_zero->version_min) {
++		vchiq_loud_error_header();
++		vchiq_loud_error("Incompatible VCHIQ versions found.");
++		vchiq_loud_error("slot_zero=%x: version=%d (VideoCore "
++			"minimum %d)",
++			(unsigned int)slot_zero, VCHIQ_VERSION,
++			slot_zero->version_min);
++		vchiq_loud_error("Restart with a newer kernel.");
++		vchiq_loud_error_footer();
++		return VCHIQ_ERROR;
++	}
++
++	if ((slot_zero->slot_zero_size != sizeof(VCHIQ_SLOT_ZERO_T)) ||
++		 (slot_zero->slot_size != VCHIQ_SLOT_SIZE) ||
++		 (slot_zero->max_slots != VCHIQ_MAX_SLOTS) ||
++		 (slot_zero->max_slots_per_side != VCHIQ_MAX_SLOTS_PER_SIDE)) {
++		vchiq_loud_error_header();
++		if (slot_zero->slot_zero_size != sizeof(VCHIQ_SLOT_ZERO_T))
++			vchiq_loud_error("slot_zero=%x: slot_zero_size=%x "
++				"(expected %x)",
++				(unsigned int)slot_zero,
++				slot_zero->slot_zero_size,
++				sizeof(VCHIQ_SLOT_ZERO_T));
++		if (slot_zero->slot_size != VCHIQ_SLOT_SIZE)
++			vchiq_loud_error("slot_zero=%x: slot_size=%d "
++				"(expected %d)",
++				(unsigned int)slot_zero, slot_zero->slot_size,
++				VCHIQ_SLOT_SIZE);
++		if (slot_zero->max_slots != VCHIQ_MAX_SLOTS)
++			vchiq_loud_error("slot_zero=%x: max_slots=%d "
++				"(expected %d)",
++				(unsigned int)slot_zero, slot_zero->max_slots,
++				VCHIQ_MAX_SLOTS);
++		if (slot_zero->max_slots_per_side != VCHIQ_MAX_SLOTS_PER_SIDE)
++			vchiq_loud_error("slot_zero=%x: max_slots_per_side=%d "
++				"(expected %d)",
++				(unsigned int)slot_zero,
++				slot_zero->max_slots_per_side,
++				VCHIQ_MAX_SLOTS_PER_SIDE);
++		vchiq_loud_error_footer();
++		return VCHIQ_ERROR;
++	}
++
++	if (VCHIQ_VERSION < slot_zero->version)
++		slot_zero->version = VCHIQ_VERSION;
++
++	if (is_master) {
++		local = &slot_zero->master;
++		remote = &slot_zero->slave;
++	} else {
++		local = &slot_zero->slave;
++		remote = &slot_zero->master;
++	}
++
++	if (local->initialised) {
++		vchiq_loud_error_header();
++		if (remote->initialised)
++			vchiq_loud_error("local state has already been "
++				"initialised");
++		else
++			vchiq_loud_error("master/slave mismatch - two %ss",
++				is_master ? "master" : "slave");
++		vchiq_loud_error_footer();
++		return VCHIQ_ERROR;
++	}
++
++	memset(state, 0, sizeof(VCHIQ_STATE_T));
++
++	state->id = id++;
++	state->is_master = is_master;
++
++	/*
++		initialize shared state pointers
++	 */
++
++	state->local = local;
++	state->remote = remote;
++	state->slot_data = (VCHIQ_SLOT_T *)slot_zero;
++
++	/*
++		initialize events and mutexes
++	 */
++
++	sema_init(&state->connect, 0);
++	mutex_init(&state->mutex);
++	sema_init(&state->trigger_event, 0);
++	sema_init(&state->recycle_event, 0);
++	sema_init(&state->sync_trigger_event, 0);
++	sema_init(&state->sync_release_event, 0);
++
++	mutex_init(&state->slot_mutex);
++	mutex_init(&state->recycle_mutex);
++	mutex_init(&state->sync_mutex);
++	mutex_init(&state->bulk_transfer_mutex);
++
++	sema_init(&state->slot_available_event, 0);
++	sema_init(&state->slot_remove_event, 0);
++	sema_init(&state->data_quota_event, 0);
++
++	state->slot_queue_available = 0;
++
++	for (i = 0; i < VCHIQ_MAX_SERVICES; i++) {
++		VCHIQ_SERVICE_QUOTA_T *service_quota =
++			&state->service_quotas[i];
++		sema_init(&service_quota->quota_event, 0);
++	}
++
++	for (i = local->slot_first; i <= local->slot_last; i++) {
++		local->slot_queue[state->slot_queue_available++] = i;
++		up(&state->slot_available_event);
++	}
++
++	state->default_slot_quota = state->slot_queue_available/2;
++	state->default_message_quota =
++		min((unsigned short)(state->default_slot_quota * 256),
++		(unsigned short)~0);
++
++	state->previous_data_index = -1;
++	state->data_use_count = 0;
++	state->data_quota = state->slot_queue_available - 1;
++
++	local->trigger.event = &state->trigger_event;
++	remote_event_create(&local->trigger);
++	local->tx_pos = 0;
++
++	local->recycle.event = &state->recycle_event;
++	remote_event_create(&local->recycle);
++	local->slot_queue_recycle = state->slot_queue_available;
++
++	local->sync_trigger.event = &state->sync_trigger_event;
++	remote_event_create(&local->sync_trigger);
++
++	local->sync_release.event = &state->sync_release_event;
++	remote_event_create(&local->sync_release);
++
++	/* At start-of-day, the slot is empty and available */
++	((VCHIQ_HEADER_T *)SLOT_DATA_FROM_INDEX(state, local->slot_sync))->msgid
++		= VCHIQ_MSGID_PADDING;
++	remote_event_signal_local(&local->sync_release);
++
++	local->debug[DEBUG_ENTRIES] = DEBUG_MAX;
++
++	status = vchiq_platform_init_state(state);
++
++	/*
++		bring up slot handler thread
++	 */
++	snprintf(threadname, sizeof(threadname), "VCHIQ-%d", state->id);
++	state->slot_handler_thread = kthread_create(&slot_handler_func,
++		(void *)state,
++		threadname);
++
++	if (IS_ERR(state->slot_handler_thread)) {
++		vchiq_loud_error_header();
++		vchiq_loud_error("couldn't create thread %s", threadname);
++		vchiq_loud_error_footer();
++		return VCHIQ_ERROR;
++	}
++	set_user_nice(state->slot_handler_thread, -19);
++	wake_up_process(state->slot_handler_thread);
++
++	snprintf(threadname, sizeof(threadname), "VCHIQr-%d", state->id);
++	state->recycle_thread = kthread_create(&recycle_func,
++		(void *)state,
++		threadname);
++	if (IS_ERR(state->recycle_thread)) {
++		vchiq_loud_error_header();
++		vchiq_loud_error("couldn't create thread %s", threadname);
++		vchiq_loud_error_footer();
++		return VCHIQ_ERROR;
++	}
++	set_user_nice(state->recycle_thread, -19);
++	wake_up_process(state->recycle_thread);
++
++	snprintf(threadname, sizeof(threadname), "VCHIQs-%d", state->id);
++	state->sync_thread = kthread_create(&sync_func,
++		(void *)state,
++		threadname);
++	if (IS_ERR(state->sync_thread)) {
++		vchiq_loud_error_header();
++		vchiq_loud_error("couldn't create thread %s", threadname);
++		vchiq_loud_error_footer();
++		return VCHIQ_ERROR;
++	}
++	set_user_nice(state->sync_thread, -20);
++	wake_up_process(state->sync_thread);
++
++	BUG_ON(state->id >= VCHIQ_MAX_STATES);
++	vchiq_states[state->id] = state;
++
++	/* Indicate readiness to the other side */
++	local->initialised = 1;
++
++	return status;
++}
++
++/* Called from application thread when a client or server service is created. */
++VCHIQ_SERVICE_T *
++vchiq_add_service_internal(VCHIQ_STATE_T *state,
++	const VCHIQ_SERVICE_PARAMS_T *params, int srvstate,
++	VCHIQ_INSTANCE_T instance, VCHIQ_USERDATA_TERM_T userdata_term)
++{
++	VCHIQ_SERVICE_T *service;
++
++	service = kmalloc(sizeof(VCHIQ_SERVICE_T), GFP_KERNEL);
++	if (service) {
++		service->base.fourcc   = params->fourcc;
++		service->base.callback = params->callback;
++		service->base.userdata = params->userdata;
++		service->handle        = VCHIQ_SERVICE_HANDLE_INVALID;
++		service->ref_count     = 1;
++		service->srvstate      = VCHIQ_SRVSTATE_FREE;
++		service->userdata_term = userdata_term;
++		service->localport     = VCHIQ_PORT_FREE;
++		service->remoteport    = VCHIQ_PORT_FREE;
++
++		service->public_fourcc = (srvstate == VCHIQ_SRVSTATE_OPENING) ?
++			VCHIQ_FOURCC_INVALID : params->fourcc;
++		service->client_id     = 0;
++		service->auto_close    = 1;
++		service->sync          = 0;
++		service->closing       = 0;
++		service->trace         = 0;
++		atomic_set(&service->poll_flags, 0);
++		service->version       = params->version;
++		service->version_min   = params->version_min;
++		service->state         = state;
++		service->instance      = instance;
++		service->service_use_count = 0;
++		init_bulk_queue(&service->bulk_tx);
++		init_bulk_queue(&service->bulk_rx);
++		sema_init(&service->remove_event, 0);
++		sema_init(&service->bulk_remove_event, 0);
++		mutex_init(&service->bulk_mutex);
++		memset(&service->stats, 0, sizeof(service->stats));
++	} else {
++		vchiq_log_error(vchiq_core_log_level,
++			"Out of memory");
++	}
++
++	if (service) {
++		VCHIQ_SERVICE_T **pservice = NULL;
++		int i;
++
++		/* Although it is perfectly possible to use service_spinlock
++		** to protect the creation of services, it is overkill as it
++		** disables interrupts while the array is searched.
++		** The only danger is of another thread trying to create a
++		** service - service deletion is safe.
++		** Therefore it is preferable to use state->mutex which,
++		** although slower to claim, doesn't block interrupts while
++		** it is held.
++		*/
++
++		mutex_lock(&state->mutex);
++
++		/* Prepare to use a previously unused service */
++		if (state->unused_service < VCHIQ_MAX_SERVICES)
++			pservice = &state->services[state->unused_service];
++
++		if (srvstate == VCHIQ_SRVSTATE_OPENING) {
++			for (i = 0; i < state->unused_service; i++) {
++				VCHIQ_SERVICE_T *srv = state->services[i];
++				if (!srv) {
++					pservice = &state->services[i];
++					break;
++				}
++			}
++		} else {
++			for (i = (state->unused_service - 1); i >= 0; i--) {
++				VCHIQ_SERVICE_T *srv = state->services[i];
++				if (!srv)
++					pservice = &state->services[i];
++				else if ((srv->public_fourcc == params->fourcc)
++					&& ((srv->instance != instance) ||
++					(srv->base.callback !=
++					params->callback))) {
++					/* Another server is already using this
++					** fourcc with a different instance or
++					** callback. */
++					pservice = NULL;
++					break;
++				}
++			}
++		}
++
++		if (pservice) {
++			service->localport = (pservice - state->services);
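++			/* Compose a handle from the local port, the state id
++			** and a sequence count which advances with every
++			** allocation, so that stale handles do not match
++			** services which later reuse the same slot. */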
++			if (!handle_seq)
++				handle_seq = VCHIQ_MAX_STATES *
++					 VCHIQ_MAX_SERVICES;
++			service->handle = handle_seq |
++				(state->id * VCHIQ_MAX_SERVICES) |
++				service->localport;
++			handle_seq += VCHIQ_MAX_STATES * VCHIQ_MAX_SERVICES;
++			*pservice = service;
++			if (pservice == &state->services[state->unused_service])
++				state->unused_service++;
++		}
++
++		mutex_unlock(&state->mutex);
++
++		if (!pservice) {
++			kfree(service);
++			service = NULL;
++		}
++	}
++
++	if (service) {
++		VCHIQ_SERVICE_QUOTA_T *service_quota =
++			&state->service_quotas[service->localport];
++		service_quota->slot_quota = state->default_slot_quota;
++		service_quota->message_quota = state->default_message_quota;
++		if (service_quota->slot_use_count == 0)
++			service_quota->previous_tx_index =
++				SLOT_QUEUE_INDEX_FROM_POS(state->local_tx_pos)
++				- 1;
++
++		/* Bring this service online */
++		vchiq_set_service_state(service, srvstate);
++
++		vchiq_log_info(vchiq_core_msg_log_level,
++			"%s Service %c%c%c%c SrcPort:%d",
++			(srvstate == VCHIQ_SRVSTATE_OPENING)
++			? "Open" : "Add",
++			VCHIQ_FOURCC_AS_4CHARS(params->fourcc),
++			service->localport);
++	}
++
++	/* Don't unlock the service - leave it with a ref_count of 1. */
++
++	return service;
++}
++
++VCHIQ_STATUS_T
++vchiq_open_service_internal(VCHIQ_SERVICE_T *service, int client_id)
++{
++	struct vchiq_open_payload payload = {
++		service->base.fourcc,
++		client_id,
++		service->version,
++		service->version_min
++	};
++	VCHIQ_ELEMENT_T body = { &payload, sizeof(payload) };
++	VCHIQ_STATUS_T status = VCHIQ_SUCCESS;
++
++	service->client_id = client_id;
++	vchiq_use_service_internal(service);
++	status = queue_message(service->state, NULL,
++		VCHIQ_MAKE_MSG(VCHIQ_MSG_OPEN, service->localport, 0),
++		&body, 1, sizeof(payload), QMFLAGS_IS_BLOCKING);
++	if (status == VCHIQ_SUCCESS) {
++		/* Wait for the ACK/NAK */
++		if (down_interruptible(&service->remove_event) != 0) {
++			status = VCHIQ_RETRY;
++			vchiq_release_service_internal(service);
++		} else if ((service->srvstate != VCHIQ_SRVSTATE_OPEN) &&
++			(service->srvstate != VCHIQ_SRVSTATE_OPENSYNC)) {
++			if (service->srvstate != VCHIQ_SRVSTATE_CLOSEWAIT)
++				vchiq_log_error(vchiq_core_log_level,
++					"%d: osi - srvstate = %s (ref %d)",
++					service->state->id,
++					srvstate_names[service->srvstate],
++					service->ref_count);
++			status = VCHIQ_ERROR;
++			VCHIQ_SERVICE_STATS_INC(service, error_count);
++			vchiq_release_service_internal(service);
++		}
++	}
++	return status;
++}
++
++static void
++release_service_messages(VCHIQ_SERVICE_T *service)
++{
++	VCHIQ_STATE_T *state = service->state;
++	int slot_last = state->remote->slot_last;
++	int i;
++
++	/* Release any claimed messages aimed at this service */
++
++	if (service->sync) {
++		VCHIQ_HEADER_T *header =
++			(VCHIQ_HEADER_T *)SLOT_DATA_FROM_INDEX(state,
++						state->remote->slot_sync);
++		if (VCHIQ_MSG_DSTPORT(header->msgid) == service->localport)
++			release_message_sync(state, header);
++
++		return;
++	}
++
++	for (i = state->remote->slot_first; i <= slot_last; i++) {
++		VCHIQ_SLOT_INFO_T *slot_info =
++			SLOT_INFO_FROM_INDEX(state, i);
++		if (slot_info->release_count != slot_info->use_count) {
++			char *data =
++				(char *)SLOT_DATA_FROM_INDEX(state, i);
++			unsigned int pos, end;
++
++			end = VCHIQ_SLOT_SIZE;
++			if (data == state->rx_data)
++				/* This buffer is still being read from - stop
++				** at the current read position */
++				end = state->rx_pos & VCHIQ_SLOT_MASK;
++
++			pos = 0;
++
++			while (pos < end) {
++				VCHIQ_HEADER_T *header =
++					(VCHIQ_HEADER_T *)(data + pos);
++				int msgid = header->msgid;
++				int port = VCHIQ_MSG_DSTPORT(msgid);
++				if ((port == service->localport) &&
++					(msgid & VCHIQ_MSGID_CLAIMED)) {
++					vchiq_log_info(vchiq_core_log_level,
++						"  fsi - hdr %x",
++						(unsigned int)header);
++					release_slot(state, slot_info, header,
++						NULL);
++				}
++				pos += calc_stride(header->size);
++				if (pos > VCHIQ_SLOT_SIZE) {
++					vchiq_log_error(vchiq_core_log_level,
++						"fsi - pos %x: header %x, "
++						"msgid %x, header->msgid %x, "
++						"header->size %x",
++						pos, (unsigned int)header,
++						msgid, header->msgid,
++						header->size);
++					WARN(1, "invalid slot position\n");
++				}
++			}
++		}
++	}
++}
++
++static int
++do_abort_bulks(VCHIQ_SERVICE_T *service)
++{
++	VCHIQ_STATUS_T status;
++
++	/* Abort any outstanding bulk transfers */
++	if (mutex_lock_interruptible(&service->bulk_mutex) != 0)
++		return 0;
++	abort_outstanding_bulks(service, &service->bulk_tx);
++	abort_outstanding_bulks(service, &service->bulk_rx);
++	mutex_unlock(&service->bulk_mutex);
++
++	status = notify_bulks(service, &service->bulk_tx, 0/*!retry_poll*/);
++	if (status == VCHIQ_SUCCESS)
++		status = notify_bulks(service, &service->bulk_rx,
++			0/*!retry_poll*/);
++	return (status == VCHIQ_SUCCESS);
++}
++
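++/* Perform the final stage of closing a service: move it to its new state,
++ * deliver the VCHIQ_SERVICE_CLOSED callback and drop any outstanding use
++ * counts. Returns VCHIQ_RETRY if the callback could not be delivered.
++ */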
++static VCHIQ_STATUS_T
++close_service_complete(VCHIQ_SERVICE_T *service, int failstate)
++{
++	VCHIQ_STATUS_T status;
++	int is_server = (service->public_fourcc != VCHIQ_FOURCC_INVALID);
++	int newstate;
++
++	switch (service->srvstate) {
++	case VCHIQ_SRVSTATE_OPEN:
++	case VCHIQ_SRVSTATE_CLOSESENT:
++	case VCHIQ_SRVSTATE_CLOSERECVD:
++		if (is_server) {
++			if (service->auto_close) {
++				service->client_id = 0;
++				service->remoteport = VCHIQ_PORT_FREE;
++				newstate = VCHIQ_SRVSTATE_LISTENING;
++			} else
++				newstate = VCHIQ_SRVSTATE_CLOSEWAIT;
++		} else
++			newstate = VCHIQ_SRVSTATE_CLOSED;
++		vchiq_set_service_state(service, newstate);
++		break;
++	case VCHIQ_SRVSTATE_LISTENING:
++		break;
++	default:
++		vchiq_log_error(vchiq_core_log_level,
++			"close_service_complete(%x) called in state %s",
++			service->handle, srvstate_names[service->srvstate]);
++		WARN(1, "close_service_complete in unexpected state\n");
++		return VCHIQ_ERROR;
++	}
++
++	status = make_service_callback(service,
++		VCHIQ_SERVICE_CLOSED, NULL, NULL);
++
++	if (status != VCHIQ_RETRY) {
++		int uc = service->service_use_count;
++		int i;
++		/* Complete the close process */
++		for (i = 0; i < uc; i++)
++			/* cater for cases where close is forced and the
++			** client may not close all its handles */
++			vchiq_release_service_internal(service);
++
++		service->client_id = 0;
++		service->remoteport = VCHIQ_PORT_FREE;
++
++		if (service->srvstate == VCHIQ_SRVSTATE_CLOSED)
++			vchiq_free_service_internal(service);
++		else if (service->srvstate != VCHIQ_SRVSTATE_CLOSEWAIT) {
++			if (is_server)
++				service->closing = 0;
++
++			up(&service->remove_event);
++		}
++	} else
++		vchiq_set_service_state(service, failstate);
++
++	return status;
++}
++
++/* Called by the slot handler */
++VCHIQ_STATUS_T
++vchiq_close_service_internal(VCHIQ_SERVICE_T *service, int close_recvd)
++{
++	VCHIQ_STATE_T *state = service->state;
++	VCHIQ_STATUS_T status = VCHIQ_SUCCESS;
++	int is_server = (service->public_fourcc != VCHIQ_FOURCC_INVALID);
++
++	vchiq_log_info(vchiq_core_log_level, "%d: csi:%d,%d (%s)",
++		service->state->id, service->localport, close_recvd,
++		srvstate_names[service->srvstate]);
++
++	switch (service->srvstate) {
++	case VCHIQ_SRVSTATE_CLOSED:
++	case VCHIQ_SRVSTATE_HIDDEN:
++	case VCHIQ_SRVSTATE_LISTENING:
++	case VCHIQ_SRVSTATE_CLOSEWAIT:
++		if (close_recvd)
++			vchiq_log_error(vchiq_core_log_level,
++				"vchiq_close_service_internal(1) called "
++				"in state %s",
++				srvstate_names[service->srvstate]);
++		else if (is_server) {
++			if (service->srvstate == VCHIQ_SRVSTATE_LISTENING) {
++				status = VCHIQ_ERROR;
++			} else {
++				service->client_id = 0;
++				service->remoteport = VCHIQ_PORT_FREE;
++				if (service->srvstate ==
++					VCHIQ_SRVSTATE_CLOSEWAIT)
++					vchiq_set_service_state(service,
++						VCHIQ_SRVSTATE_LISTENING);
++			}
++			up(&service->remove_event);
++		} else
++			vchiq_free_service_internal(service);
++		break;
++	case VCHIQ_SRVSTATE_OPENING:
++		if (close_recvd) {
++			/* The open was rejected - tell the user */
++			vchiq_set_service_state(service,
++				VCHIQ_SRVSTATE_CLOSEWAIT);
++			up(&service->remove_event);
++		} else {
++			/* Shutdown mid-open - let the other side know */
++			status = queue_message(state, service,
++				VCHIQ_MAKE_MSG
++				(VCHIQ_MSG_CLOSE,
++				service->localport,
++				VCHIQ_MSG_DSTPORT(service->remoteport)),
++				NULL, 0, 0, 0);
++		}
++		break;
++
++	case VCHIQ_SRVSTATE_OPENSYNC:
++		mutex_lock(&state->sync_mutex);
++		/* Drop through */
++
++	case VCHIQ_SRVSTATE_OPEN:
++		if (state->is_master || close_recvd) {
++			if (!do_abort_bulks(service))
++				status = VCHIQ_RETRY;
++		}
++
++		release_service_messages(service);
++
++		if (status == VCHIQ_SUCCESS)
++			status = queue_message(state, service,
++				VCHIQ_MAKE_MSG
++				(VCHIQ_MSG_CLOSE,
++				service->localport,
++				VCHIQ_MSG_DSTPORT(service->remoteport)),
++				NULL, 0, 0, QMFLAGS_NO_MUTEX_UNLOCK);
++
++		if (status == VCHIQ_SUCCESS) {
++			if (!close_recvd) {
++				/* Change the state while the mutex is
++				   still held */
++				vchiq_set_service_state(service,
++							VCHIQ_SRVSTATE_CLOSESENT);
++				mutex_unlock(&state->slot_mutex);
++				if (service->sync)
++					mutex_unlock(&state->sync_mutex);
++				break;
++			}
++		} else if (service->srvstate == VCHIQ_SRVSTATE_OPENSYNC) {
++			mutex_unlock(&state->sync_mutex);
++			break;
++		} else
++			break;
++
++		/* Change the state while the mutex is still held */
++		vchiq_set_service_state(service, VCHIQ_SRVSTATE_CLOSERECVD);
++		mutex_unlock(&state->slot_mutex);
++		if (service->sync)
++			mutex_unlock(&state->sync_mutex);
++
++		status = close_service_complete(service,
++				VCHIQ_SRVSTATE_CLOSERECVD);
++		break;
++
++	case VCHIQ_SRVSTATE_CLOSESENT:
++		if (!close_recvd)
++			/* This happens when a process is killed mid-close */
++			break;
++
++		if (!state->is_master) {
++			if (!do_abort_bulks(service)) {
++				status = VCHIQ_RETRY;
++				break;
++			}
++		}
++
++		if (status == VCHIQ_SUCCESS)
++			status = close_service_complete(service,
++				VCHIQ_SRVSTATE_CLOSERECVD);
++		break;
++
++	case VCHIQ_SRVSTATE_CLOSERECVD:
++		if (!close_recvd && is_server)
++			/* Force into LISTENING mode */
++			vchiq_set_service_state(service,
++				VCHIQ_SRVSTATE_LISTENING);
++		status = close_service_complete(service,
++			VCHIQ_SRVSTATE_CLOSERECVD);
++		break;
++
++	default:
++		vchiq_log_error(vchiq_core_log_level,
++			"vchiq_close_service_internal(%d) called in state %s",
++			close_recvd, srvstate_names[service->srvstate]);
++		break;
++	}
++
++	return status;
++}
++
++/* Called from the application process upon process death */
++void
++vchiq_terminate_service_internal(VCHIQ_SERVICE_T *service)
++{
++	VCHIQ_STATE_T *state = service->state;
++
++	vchiq_log_info(vchiq_core_log_level, "%d: tsi - (%d<->%d)",
++		state->id, service->localport, service->remoteport);
++
++	mark_service_closing(service);
++
++	/* Mark the service for removal by the slot handler */
++	request_poll(state, service, VCHIQ_POLL_REMOVE);
++}
++
++/* Called from the slot handler */
++void
++vchiq_free_service_internal(VCHIQ_SERVICE_T *service)
++{
++	VCHIQ_STATE_T *state = service->state;
++
++	vchiq_log_info(vchiq_core_log_level, "%d: fsi - (%d)",
++		state->id, service->localport);
++
++	switch (service->srvstate) {
++	case VCHIQ_SRVSTATE_OPENING:
++	case VCHIQ_SRVSTATE_CLOSED:
++	case VCHIQ_SRVSTATE_HIDDEN:
++	case VCHIQ_SRVSTATE_LISTENING:
++	case VCHIQ_SRVSTATE_CLOSEWAIT:
++		break;
++	default:
++		vchiq_log_error(vchiq_core_log_level,
++			"%d: fsi - (%d) in state %s",
++			state->id, service->localport,
++			srvstate_names[service->srvstate]);
++		return;
++	}
++
++	vchiq_set_service_state(service, VCHIQ_SRVSTATE_FREE);
++
++	up(&service->remove_event);
++
++	/* Release the initial lock */
++	unlock_service(service);
++}
++
++VCHIQ_STATUS_T
++vchiq_connect_internal(VCHIQ_STATE_T *state, VCHIQ_INSTANCE_T instance)
++{
++	VCHIQ_SERVICE_T *service;
++	int i;
++
++	/* Find all services registered to this client and enable them. */
++	i = 0;
++	while ((service = next_service_by_instance(state, instance,
++		&i)) !=	NULL) {
++		if (service->srvstate == VCHIQ_SRVSTATE_HIDDEN)
++			vchiq_set_service_state(service,
++				VCHIQ_SRVSTATE_LISTENING);
++		unlock_service(service);
++	}
++
++	if (state->conn_state == VCHIQ_CONNSTATE_DISCONNECTED) {
++		if (queue_message(state, NULL,
++			VCHIQ_MAKE_MSG(VCHIQ_MSG_CONNECT, 0, 0), NULL, 0,
++			0, QMFLAGS_IS_BLOCKING) == VCHIQ_RETRY)
++			return VCHIQ_RETRY;
++
++		vchiq_set_conn_state(state, VCHIQ_CONNSTATE_CONNECTING);
++	}
++
++	if (state->conn_state == VCHIQ_CONNSTATE_CONNECTING) {
++		if (down_interruptible(&state->connect) != 0)
++			return VCHIQ_RETRY;
++
++		vchiq_set_conn_state(state, VCHIQ_CONNSTATE_CONNECTED);
++		up(&state->connect);
++	}
++
++	return VCHIQ_SUCCESS;
++}
++
++VCHIQ_STATUS_T
++vchiq_shutdown_internal(VCHIQ_STATE_T *state, VCHIQ_INSTANCE_T instance)
++{
++	VCHIQ_SERVICE_T *service;
++	int i;
++
++	/* Find all services registered to this client and enable them. */
++	i = 0;
++	while ((service = next_service_by_instance(state, instance,
++		&i)) !=	NULL) {
++		(void)vchiq_remove_service(service->handle);
++		unlock_service(service);
++	}
++
++	return VCHIQ_SUCCESS;
++}
++
++VCHIQ_STATUS_T
++vchiq_pause_internal(VCHIQ_STATE_T *state)
++{
++	VCHIQ_STATUS_T status = VCHIQ_SUCCESS;
++
++	switch (state->conn_state) {
++	case VCHIQ_CONNSTATE_CONNECTED:
++		/* Request a pause */
++		vchiq_set_conn_state(state, VCHIQ_CONNSTATE_PAUSING);
++		request_poll(state, NULL, 0);
++		break;
++	default:
++		vchiq_log_error(vchiq_core_log_level,
++			"vchiq_pause_internal in state %s\n",
++			conn_state_names[state->conn_state]);
++		status = VCHIQ_ERROR;
++		VCHIQ_STATS_INC(state, error_count);
++		break;
++	}
++
++	return status;
++}
++
++VCHIQ_STATUS_T
++vchiq_resume_internal(VCHIQ_STATE_T *state)
++{
++	VCHIQ_STATUS_T status = VCHIQ_SUCCESS;
++
++	if (state->conn_state == VCHIQ_CONNSTATE_PAUSED) {
++		vchiq_set_conn_state(state, VCHIQ_CONNSTATE_RESUMING);
++		request_poll(state, NULL, 0);
++	} else {
++		status = VCHIQ_ERROR;
++		VCHIQ_STATS_INC(state, error_count);
++	}
++
++	return status;
++}
++
++VCHIQ_STATUS_T
++vchiq_close_service(VCHIQ_SERVICE_HANDLE_T handle)
++{
++	/* Unregister the service */
++	VCHIQ_SERVICE_T *service = find_service_by_handle(handle);
++	VCHIQ_STATUS_T status = VCHIQ_SUCCESS;
++
++	if (!service)
++		return VCHIQ_ERROR;
++
++	vchiq_log_info(vchiq_core_log_level,
++		"%d: close_service:%d",
++		service->state->id, service->localport);
++
++	if ((service->srvstate == VCHIQ_SRVSTATE_FREE) ||
++		(service->srvstate == VCHIQ_SRVSTATE_LISTENING) ||
++		(service->srvstate == VCHIQ_SRVSTATE_HIDDEN)) {
++		unlock_service(service);
++		return VCHIQ_ERROR;
++	}
++
++	mark_service_closing(service);
++
++	if (current == service->state->slot_handler_thread) {
++		status = vchiq_close_service_internal(service,
++			0/*!close_recvd*/);
++		BUG_ON(status == VCHIQ_RETRY);
++	} else {
++		/* Mark the service for termination by the slot handler */
++		request_poll(service->state, service, VCHIQ_POLL_TERMINATE);
++	}
++
++	while (1) {
++		if (down_interruptible(&service->remove_event) != 0) {
++			status = VCHIQ_RETRY;
++			break;
++		}
++
++		if ((service->srvstate == VCHIQ_SRVSTATE_FREE) ||
++			(service->srvstate == VCHIQ_SRVSTATE_LISTENING) ||
++			(service->srvstate == VCHIQ_SRVSTATE_OPEN))
++			break;
++
++		vchiq_log_warning(vchiq_core_log_level,
++			"%d: close_service:%d - waiting in state %s",
++			service->state->id, service->localport,
++			srvstate_names[service->srvstate]);
++	}
++
++	if ((status == VCHIQ_SUCCESS) &&
++		(service->srvstate != VCHIQ_SRVSTATE_FREE) &&
++		(service->srvstate != VCHIQ_SRVSTATE_LISTENING))
++		status = VCHIQ_ERROR;
++
++	unlock_service(service);
++
++	return status;
++}
++
++VCHIQ_STATUS_T
++vchiq_remove_service(VCHIQ_SERVICE_HANDLE_T handle)
++{
++	/* Unregister the service */
++	VCHIQ_SERVICE_T *service = find_service_by_handle(handle);
++	VCHIQ_STATUS_T status = VCHIQ_SUCCESS;
++
++	if (!service)
++		return VCHIQ_ERROR;
++
++	vchiq_log_info(vchiq_core_log_level,
++		"%d: remove_service:%d",
++		service->state->id, service->localport);
++
++	if (service->srvstate == VCHIQ_SRVSTATE_FREE) {
++		unlock_service(service);
++		return VCHIQ_ERROR;
++	}
++
++	mark_service_closing(service);
++
++	if ((service->srvstate == VCHIQ_SRVSTATE_HIDDEN) ||
++		(current == service->state->slot_handler_thread)) {
++		/* Make it look like a client, because it must be removed and
++		   not left in the LISTENING state. */
++		service->public_fourcc = VCHIQ_FOURCC_INVALID;
++
++		status = vchiq_close_service_internal(service,
++			0/*!close_recvd*/);
++		BUG_ON(status == VCHIQ_RETRY);
++	} else {
++		/* Mark the service for removal by the slot handler */
++		request_poll(service->state, service, VCHIQ_POLL_REMOVE);
++	}
++	while (1) {
++		if (down_interruptible(&service->remove_event) != 0) {
++			status = VCHIQ_RETRY;
++			break;
++		}
++
++		if ((service->srvstate == VCHIQ_SRVSTATE_FREE) ||
++			(service->srvstate == VCHIQ_SRVSTATE_OPEN))
++			break;
++
++		vchiq_log_warning(vchiq_core_log_level,
++			"%d: remove_service:%d - waiting in state %s",
++			service->state->id, service->localport,
++			srvstate_names[service->srvstate]);
++	}
++
++	if ((status == VCHIQ_SUCCESS) &&
++		(service->srvstate != VCHIQ_SRVSTATE_FREE))
++		status = VCHIQ_ERROR;
++
++	unlock_service(service);
++
++	return status;
++}
++
++
++/* This function may be called by kernel threads or user threads.
++ * User threads may receive VCHIQ_RETRY to indicate that a signal has been
++ * received and the call should be retried after being returned to user
++ * context.
++ * When called in blocking mode, the userdata field points to a bulk_waiter
++ * structure.
++ */
++VCHIQ_STATUS_T
++vchiq_bulk_transfer(VCHIQ_SERVICE_HANDLE_T handle,
++	VCHI_MEM_HANDLE_T memhandle, void *offset, int size, void *userdata,
++	VCHIQ_BULK_MODE_T mode, VCHIQ_BULK_DIR_T dir)
++{
++	VCHIQ_SERVICE_T *service = find_service_by_handle(handle);
++	VCHIQ_BULK_QUEUE_T *queue;
++	VCHIQ_BULK_T *bulk;
++	VCHIQ_STATE_T *state;
++	struct bulk_waiter *bulk_waiter = NULL;
++	const char dir_char = (dir == VCHIQ_BULK_TRANSMIT) ? 't' : 'r';
++	const int dir_msgtype = (dir == VCHIQ_BULK_TRANSMIT) ?
++		VCHIQ_MSG_BULK_TX : VCHIQ_MSG_BULK_RX;
++	VCHIQ_STATUS_T status = VCHIQ_ERROR;
++
++	if (!service ||
++		 (service->srvstate != VCHIQ_SRVSTATE_OPEN) ||
++		 ((memhandle == VCHI_MEM_HANDLE_INVALID) && (offset == NULL)) ||
++		 (vchiq_check_service(service) != VCHIQ_SUCCESS))
++		goto error_exit;
++
++	switch (mode) {
++	case VCHIQ_BULK_MODE_NOCALLBACK:
++	case VCHIQ_BULK_MODE_CALLBACK:
++		break;
++	case VCHIQ_BULK_MODE_BLOCKING:
++		bulk_waiter = (struct bulk_waiter *)userdata;
++		sema_init(&bulk_waiter->event, 0);
++		bulk_waiter->actual = 0;
++		bulk_waiter->bulk = NULL;
++		break;
++	case VCHIQ_BULK_MODE_WAITING:
++		bulk_waiter = (struct bulk_waiter *)userdata;
++		bulk = bulk_waiter->bulk;
++		goto waiting;
++	default:
++		goto error_exit;
++	}
++
++	state = service->state;
++
++	queue = (dir == VCHIQ_BULK_TRANSMIT) ?
++		&service->bulk_tx : &service->bulk_rx;
++
++	if (mutex_lock_interruptible(&service->bulk_mutex) != 0) {
++		status = VCHIQ_RETRY;
++		goto error_exit;
++	}
++
++	if (queue->local_insert == queue->remove + VCHIQ_NUM_SERVICE_BULKS) {
++		VCHIQ_SERVICE_STATS_INC(service, bulk_stalls);
++		do {
++			mutex_unlock(&service->bulk_mutex);
++			if (down_interruptible(&service->bulk_remove_event)
++				!= 0) {
++				status = VCHIQ_RETRY;
++				goto error_exit;
++			}
++			if (mutex_lock_interruptible(&service->bulk_mutex)
++				!= 0) {
++				status = VCHIQ_RETRY;
++				goto error_exit;
++			}
++		} while (queue->local_insert == queue->remove +
++				VCHIQ_NUM_SERVICE_BULKS);
++	}
++
++	bulk = &queue->bulks[BULK_INDEX(queue->local_insert)];
++
++	bulk->mode = mode;
++	bulk->dir = dir;
++	bulk->userdata = userdata;
++	bulk->size = size;
++	bulk->actual = VCHIQ_BULK_ACTUAL_ABORTED;
++
++	if (vchiq_prepare_bulk_data(bulk, memhandle, offset, size, dir) !=
++		VCHIQ_SUCCESS)
++		goto unlock_error_exit;
++
++	wmb();
++
++	vchiq_log_info(vchiq_core_log_level,
++		"%d: bt (%d->%d) %cx %x@%x %x",
++		state->id,
++		service->localport, service->remoteport, dir_char,
++		size, (unsigned int)bulk->data, (unsigned int)userdata);
++
++	/* The slot mutex must be held when the service is being closed, so
++	   claim it here to ensure that isn't happening */
++	if (mutex_lock_interruptible(&state->slot_mutex) != 0) {
++		status = VCHIQ_RETRY;
++		goto cancel_bulk_error_exit;
++	}
++
++	if (service->srvstate != VCHIQ_SRVSTATE_OPEN)
++		goto unlock_both_error_exit;
++
++	if (state->is_master) {
++		queue->local_insert++;
++		if (resolve_bulks(service, queue))
++			request_poll(state, service,
++				(dir == VCHIQ_BULK_TRANSMIT) ?
++				VCHIQ_POLL_TXNOTIFY : VCHIQ_POLL_RXNOTIFY);
++	} else {
++		int payload[2] = { (int)bulk->data, bulk->size };
++		VCHIQ_ELEMENT_T element = { payload, sizeof(payload) };
++
++		status = queue_message(state, NULL,
++			VCHIQ_MAKE_MSG(dir_msgtype,
++				service->localport, service->remoteport),
++			&element, 1, sizeof(payload),
++			QMFLAGS_IS_BLOCKING |
++			QMFLAGS_NO_MUTEX_LOCK |
++			QMFLAGS_NO_MUTEX_UNLOCK);
++		if (status != VCHIQ_SUCCESS) {
++			goto unlock_both_error_exit;
++		}
++		queue->local_insert++;
++	}
++
++	mutex_unlock(&state->slot_mutex);
++	mutex_unlock(&service->bulk_mutex);
++
++	vchiq_log_trace(vchiq_core_log_level,
++		"%d: bt:%d %cx li=%x ri=%x p=%x",
++		state->id,
++		service->localport, dir_char,
++		queue->local_insert, queue->remote_insert, queue->process);
++
++waiting:
++	unlock_service(service);
++
++	status = VCHIQ_SUCCESS;
++
++	if (bulk_waiter) {
++		bulk_waiter->bulk = bulk;
++		if (down_interruptible(&bulk_waiter->event) != 0)
++			status = VCHIQ_RETRY;
++		else if (bulk_waiter->actual == VCHIQ_BULK_ACTUAL_ABORTED)
++			status = VCHIQ_ERROR;
++	}
++
++	return status;
++
++unlock_both_error_exit:
++	mutex_unlock(&state->slot_mutex);
++cancel_bulk_error_exit:
++	vchiq_complete_bulk(bulk);
++unlock_error_exit:
++	mutex_unlock(&service->bulk_mutex);
++
++error_exit:
++	if (service)
++		unlock_service(service);
++	return status;
++}
++
++VCHIQ_STATUS_T
++vchiq_queue_message(VCHIQ_SERVICE_HANDLE_T handle,
++	const VCHIQ_ELEMENT_T *elements, unsigned int count)
++{
++	VCHIQ_SERVICE_T *service = find_service_by_handle(handle);
++	VCHIQ_STATUS_T status = VCHIQ_ERROR;
++
++	unsigned int size = 0;
++	unsigned int i;
++
++	if (!service ||
++		(vchiq_check_service(service) != VCHIQ_SUCCESS))
++		goto error_exit;
++
++	for (i = 0; i < (unsigned int)count; i++) {
++		if (elements[i].size) {
++			if (elements[i].data == NULL) {
++				VCHIQ_SERVICE_STATS_INC(service, error_count);
++				goto error_exit;
++			}
++			size += elements[i].size;
++		}
++	}
++
++	if (size > VCHIQ_MAX_MSG_SIZE) {
++		VCHIQ_SERVICE_STATS_INC(service, error_count);
++		goto error_exit;
++	}
++
++	switch (service->srvstate) {
++	case VCHIQ_SRVSTATE_OPEN:
++		status = queue_message(service->state, service,
++				VCHIQ_MAKE_MSG(VCHIQ_MSG_DATA,
++					service->localport,
++					service->remoteport),
++				elements, count, size, 1);
++		break;
++	case VCHIQ_SRVSTATE_OPENSYNC:
++		status = queue_message_sync(service->state, service,
++				VCHIQ_MAKE_MSG(VCHIQ_MSG_DATA,
++					service->localport,
++					service->remoteport),
++				elements, count, size, 1);
++		break;
++	default:
++		status = VCHIQ_ERROR;
++		break;
++	}
++
++error_exit:
++	if (service)
++		unlock_service(service);
++
++	return status;
++}
++
++void
++vchiq_release_message(VCHIQ_SERVICE_HANDLE_T handle, VCHIQ_HEADER_T *header)
++{
++	VCHIQ_SERVICE_T *service = find_service_by_handle(handle);
++	VCHIQ_SHARED_STATE_T *remote;
++	VCHIQ_STATE_T *state;
++	int slot_index;
++
++	if (!service)
++		return;
++
++	state = service->state;
++	remote = state->remote;
++
++	slot_index = SLOT_INDEX_FROM_DATA(state, (void *)header);
++
++	if ((slot_index >= remote->slot_first) &&
++		(slot_index <= remote->slot_last)) {
++		int msgid = header->msgid;
++		if (msgid & VCHIQ_MSGID_CLAIMED) {
++			VCHIQ_SLOT_INFO_T *slot_info =
++				SLOT_INFO_FROM_INDEX(state, slot_index);
++
++			release_slot(state, slot_info, header, service);
++		}
++	} else if (slot_index == remote->slot_sync)
++		release_message_sync(state, header);
++
++	unlock_service(service);
++}
++
++static void
++release_message_sync(VCHIQ_STATE_T *state, VCHIQ_HEADER_T *header)
++{
++	header->msgid = VCHIQ_MSGID_PADDING;
++	wmb();
++	remote_event_signal(&state->remote->sync_release);
++}
++
++VCHIQ_STATUS_T
++vchiq_get_peer_version(VCHIQ_SERVICE_HANDLE_T handle, short *peer_version)
++{
++   VCHIQ_STATUS_T status = VCHIQ_ERROR;
++   VCHIQ_SERVICE_T *service = find_service_by_handle(handle);
++
++   if (!service ||
++      (vchiq_check_service(service) != VCHIQ_SUCCESS) ||
++      !peer_version)
++      goto exit;
++   *peer_version = service->peer_version;
++   status = VCHIQ_SUCCESS;
++
++exit:
++   if (service)
++      unlock_service(service);
++   return status;
++}
++
++VCHIQ_STATUS_T
++vchiq_get_config(VCHIQ_INSTANCE_T instance,
++	int config_size, VCHIQ_CONFIG_T *pconfig)
++{
++	VCHIQ_CONFIG_T config;
++
++	(void)instance;
++
++	config.max_msg_size           = VCHIQ_MAX_MSG_SIZE;
++	config.bulk_threshold         = VCHIQ_MAX_MSG_SIZE;
++	config.max_outstanding_bulks  = VCHIQ_NUM_SERVICE_BULKS;
++	config.max_services           = VCHIQ_MAX_SERVICES;
++	config.version                = VCHIQ_VERSION;
++	config.version_min            = VCHIQ_VERSION_MIN;
++
++	if (config_size > sizeof(VCHIQ_CONFIG_T))
++		return VCHIQ_ERROR;
++
++	memcpy(pconfig, &config,
++		min(config_size, (int)(sizeof(VCHIQ_CONFIG_T))));
++
++	return VCHIQ_SUCCESS;
++}
++
++VCHIQ_STATUS_T
++vchiq_set_service_option(VCHIQ_SERVICE_HANDLE_T handle,
++	VCHIQ_SERVICE_OPTION_T option, int value)
++{
++	VCHIQ_SERVICE_T *service = find_service_by_handle(handle);
++	VCHIQ_STATUS_T status = VCHIQ_ERROR;
++
++	if (service) {
++		switch (option) {
++		case VCHIQ_SERVICE_OPTION_AUTOCLOSE:
++			service->auto_close = value;
++			status = VCHIQ_SUCCESS;
++			break;
++
++		case VCHIQ_SERVICE_OPTION_SLOT_QUOTA: {
++			VCHIQ_SERVICE_QUOTA_T *service_quota =
++				&service->state->service_quotas[
++					service->localport];
++			if (value == 0)
++				value = service->state->default_slot_quota;
++			if ((value >= service_quota->slot_use_count) &&
++				 (value < (unsigned short)~0)) {
++				service_quota->slot_quota = value;
++				if ((value >= service_quota->slot_use_count) &&
++					(service_quota->message_quota >=
++					 service_quota->message_use_count)) {
++					/* Signal the service that it may have
++					** dropped below its quota */
++					up(&service_quota->quota_event);
++				}
++				status = VCHIQ_SUCCESS;
++			}
++		} break;
++
++		case VCHIQ_SERVICE_OPTION_MESSAGE_QUOTA: {
++			VCHIQ_SERVICE_QUOTA_T *service_quota =
++				&service->state->service_quotas[
++					service->localport];
++			if (value == 0)
++				value = service->state->default_message_quota;
++			if ((value >= service_quota->message_use_count) &&
++				 (value < (unsigned short)~0)) {
++				service_quota->message_quota = value;
++				if ((value >=
++					service_quota->message_use_count) &&
++					(service_quota->slot_quota >=
++					service_quota->slot_use_count))
++					/* Signal the service that it may have
++					** dropped below its quota */
++					up(&service_quota->quota_event);
++				status = VCHIQ_SUCCESS;
++			}
++		} break;
++
++		case VCHIQ_SERVICE_OPTION_SYNCHRONOUS:
++			if ((service->srvstate == VCHIQ_SRVSTATE_HIDDEN) ||
++				(service->srvstate ==
++				VCHIQ_SRVSTATE_LISTENING)) {
++				service->sync = value;
++				status = VCHIQ_SUCCESS;
++			}
++			break;
++
++		case VCHIQ_SERVICE_OPTION_TRACE:
++			service->trace = value;
++			status = VCHIQ_SUCCESS;
++			break;
++
++		default:
++			break;
++		}
++		unlock_service(service);
++	}
++
++	return status;
++}
++
++void
++vchiq_dump_shared_state(void *dump_context, VCHIQ_STATE_T *state,
++	VCHIQ_SHARED_STATE_T *shared, const char *label)
++{
++	static const char *const debug_names[] = {
++		"<entries>",
++		"SLOT_HANDLER_COUNT",
++		"SLOT_HANDLER_LINE",
++		"PARSE_LINE",
++		"PARSE_HEADER",
++		"PARSE_MSGID",
++		"AWAIT_COMPLETION_LINE",
++		"DEQUEUE_MESSAGE_LINE",
++		"SERVICE_CALLBACK_LINE",
++		"MSG_QUEUE_FULL_COUNT",
++		"COMPLETION_QUEUE_FULL_COUNT"
++	};
++	int i;
++
++	char buf[80];
++	int len;
++	len = snprintf(buf, sizeof(buf),
++		"  %s: slots %d-%d tx_pos=%x recycle=%x",
++		label, shared->slot_first, shared->slot_last,
++		shared->tx_pos, shared->slot_queue_recycle);
++	vchiq_dump(dump_context, buf, len + 1);
++
++	len = snprintf(buf, sizeof(buf),
++		"    Slots claimed:");
++	vchiq_dump(dump_context, buf, len + 1);
++
++	for (i = shared->slot_first; i <= shared->slot_last; i++) {
++		VCHIQ_SLOT_INFO_T slot_info = *SLOT_INFO_FROM_INDEX(state, i);
++		if (slot_info.use_count != slot_info.release_count) {
++			len = snprintf(buf, sizeof(buf),
++				"      %d: %d/%d", i, slot_info.use_count,
++				slot_info.release_count);
++			vchiq_dump(dump_context, buf, len + 1);
++		}
++	}
++
++	for (i = 1; i < shared->debug[DEBUG_ENTRIES]; i++) {
++		len = snprintf(buf, sizeof(buf), "    DEBUG: %s = %d(%x)",
++			debug_names[i], shared->debug[i], shared->debug[i]);
++		vchiq_dump(dump_context, buf, len + 1);
++	}
++}
++
++void
++vchiq_dump_state(void *dump_context, VCHIQ_STATE_T *state)
++{
++	char buf[80];
++	int len;
++	int i;
++
++	len = snprintf(buf, sizeof(buf), "State %d: %s", state->id,
++		conn_state_names[state->conn_state]);
++	vchiq_dump(dump_context, buf, len + 1);
++
++	len = snprintf(buf, sizeof(buf),
++		"  tx_pos=%x(@%x), rx_pos=%x(@%x)",
++		state->local->tx_pos,
++		(uint32_t)state->tx_data +
++			(state->local_tx_pos & VCHIQ_SLOT_MASK),
++		state->rx_pos,
++		(uint32_t)state->rx_data +
++			(state->rx_pos & VCHIQ_SLOT_MASK));
++	vchiq_dump(dump_context, buf, len + 1);
++
++	len = snprintf(buf, sizeof(buf),
++		"  Version: %d (min %d)",
++		VCHIQ_VERSION, VCHIQ_VERSION_MIN);
++	vchiq_dump(dump_context, buf, len + 1);
++
++	if (VCHIQ_ENABLE_STATS) {
++		len = snprintf(buf, sizeof(buf),
++			"  Stats: ctrl_tx_count=%d, ctrl_rx_count=%d, "
++			"error_count=%d",
++			state->stats.ctrl_tx_count, state->stats.ctrl_rx_count,
++			state->stats.error_count);
++		vchiq_dump(dump_context, buf, len + 1);
++	}
++
++	len = snprintf(buf, sizeof(buf),
++		"  Slots: %d available (%d data), %d recyclable, %d stalls "
++		"(%d data)",
++		((state->slot_queue_available * VCHIQ_SLOT_SIZE) -
++			state->local_tx_pos) / VCHIQ_SLOT_SIZE,
++		state->data_quota - state->data_use_count,
++		state->local->slot_queue_recycle - state->slot_queue_available,
++		state->stats.slot_stalls, state->stats.data_stalls);
++	vchiq_dump(dump_context, buf, len + 1);
++
++	vchiq_dump_platform_state(dump_context);
++
++	vchiq_dump_shared_state(dump_context, state, state->local, "Local");
++	vchiq_dump_shared_state(dump_context, state, state->remote, "Remote");
++
++	vchiq_dump_platform_instances(dump_context);
++
++	for (i = 0; i < state->unused_service; i++) {
++		VCHIQ_SERVICE_T *service = find_service_by_port(state, i);
++
++		if (service) {
++			vchiq_dump_service_state(dump_context, service);
++			unlock_service(service);
++		}
++	}
++}
++
++void
++vchiq_dump_service_state(void *dump_context, VCHIQ_SERVICE_T *service)
++{
++	char buf[80];
++	int len;
++
++	len = snprintf(buf, sizeof(buf), "Service %d: %s (ref %u)",
++		service->localport, srvstate_names[service->srvstate],
++		service->ref_count - 1); /* Don't include the lock just taken */
++
++	if (service->srvstate != VCHIQ_SRVSTATE_FREE) {
++		char remoteport[30];
++		VCHIQ_SERVICE_QUOTA_T *service_quota =
++			&service->state->service_quotas[service->localport];
++		int fourcc = service->base.fourcc;
++		int tx_pending, rx_pending;
++		if (service->remoteport != VCHIQ_PORT_FREE) {
++			int len2 = snprintf(remoteport, sizeof(remoteport),
++				"%d", service->remoteport);
++			if (service->public_fourcc != VCHIQ_FOURCC_INVALID)
++				snprintf(remoteport + len2,
++					sizeof(remoteport) - len2,
++					" (client %x)", service->client_id);
++		} else
++			strcpy(remoteport, "n/a");
++
++		len += snprintf(buf + len, sizeof(buf) - len,
++			" '%c%c%c%c' remote %s (msg use %d/%d, slot use %d/%d)",
++			VCHIQ_FOURCC_AS_4CHARS(fourcc),
++			remoteport,
++			service_quota->message_use_count,
++			service_quota->message_quota,
++			service_quota->slot_use_count,
++			service_quota->slot_quota);
++
++		vchiq_dump(dump_context, buf, len + 1);
++
++		tx_pending = service->bulk_tx.local_insert -
++			service->bulk_tx.remote_insert;
++
++		rx_pending = service->bulk_rx.local_insert -
++			service->bulk_rx.remote_insert;
++
++		len = snprintf(buf, sizeof(buf),
++			"  Bulk: tx_pending=%d (size %d),"
++			" rx_pending=%d (size %d)",
++			tx_pending,
++			tx_pending ? service->bulk_tx.bulks[
++			BULK_INDEX(service->bulk_tx.remove)].size : 0,
++			rx_pending,
++			rx_pending ? service->bulk_rx.bulks[
++			BULK_INDEX(service->bulk_rx.remove)].size : 0);
++
++		if (VCHIQ_ENABLE_STATS) {
++			vchiq_dump(dump_context, buf, len + 1);
++
++			len = snprintf(buf, sizeof(buf),
++				"  Ctrl: tx_count=%d, tx_bytes=%llu, "
++				"rx_count=%d, rx_bytes=%llu",
++				service->stats.ctrl_tx_count,
++				service->stats.ctrl_tx_bytes,
++				service->stats.ctrl_rx_count,
++				service->stats.ctrl_rx_bytes);
++			vchiq_dump(dump_context, buf, len + 1);
++
++			len = snprintf(buf, sizeof(buf),
++				"  Bulk: tx_count=%d, tx_bytes=%llu, "
++				"rx_count=%d, rx_bytes=%llu",
++				service->stats.bulk_tx_count,
++				service->stats.bulk_tx_bytes,
++				service->stats.bulk_rx_count,
++				service->stats.bulk_rx_bytes);
++			vchiq_dump(dump_context, buf, len + 1);
++
++			len = snprintf(buf, sizeof(buf),
++				"  %d quota stalls, %d slot stalls, "
++				"%d bulk stalls, %d aborted, %d errors",
++				service->stats.quota_stalls,
++				service->stats.slot_stalls,
++				service->stats.bulk_stalls,
++				service->stats.bulk_aborted_count,
++				service->stats.error_count);
++		 }
++	}
++
++	vchiq_dump(dump_context, buf, len + 1);
++
++	if (service->srvstate != VCHIQ_SRVSTATE_FREE)
++		vchiq_dump_platform_service_state(dump_context, service);
++}
++
++
++void
++vchiq_loud_error_header(void)
++{
++	vchiq_log_error(vchiq_core_log_level,
++		"============================================================"
++		"================");
++	vchiq_log_error(vchiq_core_log_level,
++		"============================================================"
++		"================");
++	vchiq_log_error(vchiq_core_log_level, "=====");
++}
++
++void
++vchiq_loud_error_footer(void)
++{
++	vchiq_log_error(vchiq_core_log_level, "=====");
++	vchiq_log_error(vchiq_core_log_level,
++		"============================================================"
++		"================");
++	vchiq_log_error(vchiq_core_log_level,
++		"============================================================"
++		"================");
++}
++
++
++VCHIQ_STATUS_T vchiq_send_remote_use(VCHIQ_STATE_T *state)
++{
++	VCHIQ_STATUS_T status = VCHIQ_RETRY;
++	if (state->conn_state != VCHIQ_CONNSTATE_DISCONNECTED)
++		status = queue_message(state, NULL,
++			VCHIQ_MAKE_MSG(VCHIQ_MSG_REMOTE_USE, 0, 0),
++			NULL, 0, 0, 0);
++	return status;
++}
++
++VCHIQ_STATUS_T vchiq_send_remote_release(VCHIQ_STATE_T *state)
++{
++	VCHIQ_STATUS_T status = VCHIQ_RETRY;
++	if (state->conn_state != VCHIQ_CONNSTATE_DISCONNECTED)
++		status = queue_message(state, NULL,
++			VCHIQ_MAKE_MSG(VCHIQ_MSG_REMOTE_RELEASE, 0, 0),
++			NULL, 0, 0, 0);
++	return status;
++}
++
++VCHIQ_STATUS_T vchiq_send_remote_use_active(VCHIQ_STATE_T *state)
++{
++	VCHIQ_STATUS_T status = VCHIQ_RETRY;
++	if (state->conn_state != VCHIQ_CONNSTATE_DISCONNECTED)
++		status = queue_message(state, NULL,
++			VCHIQ_MAKE_MSG(VCHIQ_MSG_REMOTE_USE_ACTIVE, 0, 0),
++			NULL, 0, 0, 0);
++	return status;
++}
++
++void vchiq_log_dump_mem(const char *label, uint32_t addr, const void *voidMem,
++	size_t numBytes)
++{
++	const uint8_t  *mem = (const uint8_t *)voidMem;
++	size_t          offset;
++	char            lineBuf[100];
++	char           *s;
++
++	while (numBytes > 0) {
++		s = lineBuf;
++
++		for (offset = 0; offset < 16; offset++) {
++			if (offset < numBytes)
++				s += snprintf(s, 4, "%02x ", mem[offset]);
++			else
++				s += snprintf(s, 4, "   ");
++		}
++
++		for (offset = 0; offset < 16; offset++) {
++			if (offset < numBytes) {
++				uint8_t ch = mem[offset];
++
++				if ((ch < ' ') || (ch > '~'))
++					ch = '.';
++				*s++ = (char)ch;
++			}
++		}
++		*s++ = '\0';
++
++		if ((label != NULL) && (*label != '\0'))
++			vchiq_log_trace(VCHIQ_LOG_TRACE,
++				"%s: %08x: %s", label, addr, lineBuf);
++		else
++			vchiq_log_trace(VCHIQ_LOG_TRACE,
++				"%08x: %s", addr, lineBuf);
++
++		addr += 16;
++		mem += 16;
++		if (numBytes > 16)
++			numBytes -= 16;
++		else
++			numBytes = 0;
++	}
++}
+--- /dev/null
++++ b/drivers/misc/vc04_services/interface/vchiq_arm/vchiq_core.h
+@@ -0,0 +1,712 @@
++/**
++ * Copyright (c) 2010-2012 Broadcom. All rights reserved.
++ *
++ * Redistribution and use in source and binary forms, with or without
++ * modification, are permitted provided that the following conditions
++ * are met:
++ * 1. Redistributions of source code must retain the above copyright
++ *    notice, this list of conditions, and the following disclaimer,
++ *    without modification.
++ * 2. Redistributions in binary form must reproduce the above copyright
++ *    notice, this list of conditions and the following disclaimer in the
++ *    documentation and/or other materials provided with the distribution.
++ * 3. The names of the above-listed copyright holders may not be used
++ *    to endorse or promote products derived from this software without
++ *    specific prior written permission.
++ *
++ * ALTERNATIVELY, this software may be distributed under the terms of the
++ * GNU General Public License ("GPL") version 2, as published by the Free
++ * Software Foundation.
++ *
++ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
++ * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
++ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
++ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
++ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
++ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
++ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
++ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
++ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
++ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
++ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
++ */
++
++#ifndef VCHIQ_CORE_H
++#define VCHIQ_CORE_H
++
++#include <linux/mutex.h>
++#include <linux/semaphore.h>
++#include <linux/kthread.h>
++
++#include "vchiq_cfg.h"
++
++#include "vchiq.h"
++
++/* Run time control of log level, based on KERN_XXX level. */
++#define VCHIQ_LOG_DEFAULT  4
++#define VCHIQ_LOG_ERROR    3
++#define VCHIQ_LOG_WARNING  4
++#define VCHIQ_LOG_INFO     6
++#define VCHIQ_LOG_TRACE    7
++
++#define VCHIQ_LOG_PREFIX   KERN_INFO "vchiq: "
++
++#ifndef vchiq_log_error
++#define vchiq_log_error(cat, fmt, ...) \
++	do { if (cat >= VCHIQ_LOG_ERROR) \
++		printk(VCHIQ_LOG_PREFIX fmt "\n", ##__VA_ARGS__); } while (0)
++#endif
++#ifndef vchiq_log_warning
++#define vchiq_log_warning(cat, fmt, ...) \
++	do { if (cat >= VCHIQ_LOG_WARNING) \
++		 printk(VCHIQ_LOG_PREFIX fmt "\n", ##__VA_ARGS__); } while (0)
++#endif
++#ifndef vchiq_log_info
++#define vchiq_log_info(cat, fmt, ...) \
++	do { if (cat >= VCHIQ_LOG_INFO) \
++		printk(VCHIQ_LOG_PREFIX fmt "\n", ##__VA_ARGS__); } while (0)
++#endif
++#ifndef vchiq_log_trace
++#define vchiq_log_trace(cat, fmt, ...) \
++	do { if (cat >= VCHIQ_LOG_TRACE) \
++		printk(VCHIQ_LOG_PREFIX fmt "\n", ##__VA_ARGS__); } while (0)
++#endif
++
++#define vchiq_loud_error(...) \
++	vchiq_log_error(vchiq_core_log_level, "===== " __VA_ARGS__)
++
++#ifndef vchiq_static_assert
++#define vchiq_static_assert(cond) __attribute__((unused)) \
++	extern int vchiq_static_assert[(cond) ? 1 : -1]
++#endif
++
++#define IS_POW2(x) (x && ((x & (x - 1)) == 0))
++
++/* Ensure that the slot size and maximum number of slots are powers of 2 */
++vchiq_static_assert(IS_POW2(VCHIQ_SLOT_SIZE));
++vchiq_static_assert(IS_POW2(VCHIQ_MAX_SLOTS));
++vchiq_static_assert(IS_POW2(VCHIQ_MAX_SLOTS_PER_SIDE));
++
++#define VCHIQ_SLOT_MASK        (VCHIQ_SLOT_SIZE - 1)
++#define VCHIQ_SLOT_QUEUE_MASK  (VCHIQ_MAX_SLOTS_PER_SIDE - 1)
++#define VCHIQ_SLOT_ZERO_SLOTS  ((sizeof(VCHIQ_SLOT_ZERO_T) + \
++	VCHIQ_SLOT_SIZE - 1) / VCHIQ_SLOT_SIZE)
++
++#define VCHIQ_MSG_PADDING            0  /* -                                 */
++#define VCHIQ_MSG_CONNECT            1  /* -                                 */
++#define VCHIQ_MSG_OPEN               2  /* + (srcport, -), fourcc, client_id */
++#define VCHIQ_MSG_OPENACK            3  /* + (srcport, dstport)              */
++#define VCHIQ_MSG_CLOSE              4  /* + (srcport, dstport)              */
++#define VCHIQ_MSG_DATA               5  /* + (srcport, dstport)              */
++#define VCHIQ_MSG_BULK_RX            6  /* + (srcport, dstport), data, size  */
++#define VCHIQ_MSG_BULK_TX            7  /* + (srcport, dstport), data, size  */
++#define VCHIQ_MSG_BULK_RX_DONE       8  /* + (srcport, dstport), actual      */
++#define VCHIQ_MSG_BULK_TX_DONE       9  /* + (srcport, dstport), actual      */
++#define VCHIQ_MSG_PAUSE             10  /* -                                 */
++#define VCHIQ_MSG_RESUME            11  /* -                                 */
++#define VCHIQ_MSG_REMOTE_USE        12  /* -                                 */
++#define VCHIQ_MSG_REMOTE_RELEASE    13  /* -                                 */
++#define VCHIQ_MSG_REMOTE_USE_ACTIVE 14  /* -                                 */
++
++#define VCHIQ_PORT_MAX                 (VCHIQ_MAX_SERVICES - 1)
++#define VCHIQ_PORT_FREE                0x1000
++#define VCHIQ_PORT_IS_VALID(port)      (port < VCHIQ_PORT_FREE)
++#define VCHIQ_MAKE_MSG(type, srcport, dstport) \
++	((type<<24) | (srcport<<12) | (dstport<<0))
++#define VCHIQ_MSG_TYPE(msgid)          ((unsigned int)msgid >> 24)
++#define VCHIQ_MSG_SRCPORT(msgid) \
++	(unsigned short)(((unsigned int)msgid >> 12) & 0xfff)
++#define VCHIQ_MSG_DSTPORT(msgid) \
++	((unsigned short)msgid & 0xfff)
++
++#define VCHIQ_FOURCC_AS_4CHARS(fourcc)	\
++	((fourcc) >> 24) & 0xff, \
++	((fourcc) >> 16) & 0xff, \
++	((fourcc) >>  8) & 0xff, \
++	(fourcc) & 0xff
++
++/* Ensure the fields are wide enough */
++vchiq_static_assert(VCHIQ_MSG_SRCPORT(VCHIQ_MAKE_MSG(0, 0, VCHIQ_PORT_MAX))
++	== 0);
++vchiq_static_assert(VCHIQ_MSG_TYPE(VCHIQ_MAKE_MSG(0, VCHIQ_PORT_MAX, 0)) == 0);
++vchiq_static_assert((unsigned int)VCHIQ_PORT_MAX <
++	(unsigned int)VCHIQ_PORT_FREE);
++
++#define VCHIQ_MSGID_PADDING            VCHIQ_MAKE_MSG(VCHIQ_MSG_PADDING, 0, 0)
++#define VCHIQ_MSGID_CLAIMED            0x40000000
++
++#define VCHIQ_FOURCC_INVALID           0x00000000
++#define VCHIQ_FOURCC_IS_LEGAL(fourcc)  (fourcc != VCHIQ_FOURCC_INVALID)
++
++#define VCHIQ_BULK_ACTUAL_ABORTED -1
++
++typedef uint32_t BITSET_T;
++
++vchiq_static_assert((sizeof(BITSET_T) * 8) == 32);
++
++#define BITSET_SIZE(b)        ((b + 31) >> 5)
++#define BITSET_WORD(b)        (b >> 5)
++#define BITSET_BIT(b)         (1 << (b & 31))
++#define BITSET_ZERO(bs)       memset(bs, 0, sizeof(bs))
++#define BITSET_IS_SET(bs, b)  (bs[BITSET_WORD(b)] & BITSET_BIT(b))
++#define BITSET_SET(bs, b)     (bs[BITSET_WORD(b)] |= BITSET_BIT(b))
++#define BITSET_CLR(bs, b)     (bs[BITSET_WORD(b)] &= ~BITSET_BIT(b))
++
++#if VCHIQ_ENABLE_STATS
++#define VCHIQ_STATS_INC(state, stat) (state->stats. stat++)
++#define VCHIQ_SERVICE_STATS_INC(service, stat) (service->stats. stat++)
++#define VCHIQ_SERVICE_STATS_ADD(service, stat, addend) \
++	(service->stats. stat += addend)
++#else
++#define VCHIQ_STATS_INC(state, stat) ((void)0)
++#define VCHIQ_SERVICE_STATS_INC(service, stat) ((void)0)
++#define VCHIQ_SERVICE_STATS_ADD(service, stat, addend) ((void)0)
++#endif
++
++enum {
++	DEBUG_ENTRIES,
++#if VCHIQ_ENABLE_DEBUG
++	DEBUG_SLOT_HANDLER_COUNT,
++	DEBUG_SLOT_HANDLER_LINE,
++	DEBUG_PARSE_LINE,
++	DEBUG_PARSE_HEADER,
++	DEBUG_PARSE_MSGID,
++	DEBUG_AWAIT_COMPLETION_LINE,
++	DEBUG_DEQUEUE_MESSAGE_LINE,
++	DEBUG_SERVICE_CALLBACK_LINE,
++	DEBUG_MSG_QUEUE_FULL_COUNT,
++	DEBUG_COMPLETION_QUEUE_FULL_COUNT,
++#endif
++	DEBUG_MAX
++};
++
++#if VCHIQ_ENABLE_DEBUG
++
++#define DEBUG_INITIALISE(local) int *debug_ptr = (local)->debug;
++#define DEBUG_TRACE(d) \
++	do { debug_ptr[DEBUG_ ## d] = __LINE__; dsb(); } while (0)
++#define DEBUG_VALUE(d, v) \
++	do { debug_ptr[DEBUG_ ## d] = (v); dsb(); } while (0)
++#define DEBUG_COUNT(d) \
++	do { debug_ptr[DEBUG_ ## d]++; dsb(); } while (0)
++
++#else /* VCHIQ_ENABLE_DEBUG */
++
++#define DEBUG_INITIALISE(local)
++#define DEBUG_TRACE(d)
++#define DEBUG_VALUE(d, v)
++#define DEBUG_COUNT(d)
++
++#endif /* VCHIQ_ENABLE_DEBUG */
++
++typedef enum {
++	VCHIQ_CONNSTATE_DISCONNECTED,
++	VCHIQ_CONNSTATE_CONNECTING,
++	VCHIQ_CONNSTATE_CONNECTED,
++	VCHIQ_CONNSTATE_PAUSING,
++	VCHIQ_CONNSTATE_PAUSE_SENT,
++	VCHIQ_CONNSTATE_PAUSED,
++	VCHIQ_CONNSTATE_RESUMING,
++	VCHIQ_CONNSTATE_PAUSE_TIMEOUT,
++	VCHIQ_CONNSTATE_RESUME_TIMEOUT
++} VCHIQ_CONNSTATE_T;
++
++enum {
++	VCHIQ_SRVSTATE_FREE,
++	VCHIQ_SRVSTATE_HIDDEN,
++	VCHIQ_SRVSTATE_LISTENING,
++	VCHIQ_SRVSTATE_OPENING,
++	VCHIQ_SRVSTATE_OPEN,
++	VCHIQ_SRVSTATE_OPENSYNC,
++	VCHIQ_SRVSTATE_CLOSESENT,
++	VCHIQ_SRVSTATE_CLOSERECVD,
++	VCHIQ_SRVSTATE_CLOSEWAIT,
++	VCHIQ_SRVSTATE_CLOSED
++};
++
++enum {
++	VCHIQ_POLL_TERMINATE,
++	VCHIQ_POLL_REMOVE,
++	VCHIQ_POLL_TXNOTIFY,
++	VCHIQ_POLL_RXNOTIFY,
++	VCHIQ_POLL_COUNT
++};
++
++typedef enum {
++	VCHIQ_BULK_TRANSMIT,
++	VCHIQ_BULK_RECEIVE
++} VCHIQ_BULK_DIR_T;
++
++typedef void (*VCHIQ_USERDATA_TERM_T)(void *userdata);
++
++typedef struct vchiq_bulk_struct {
++	short mode;
++	short dir;
++	void *userdata;
++	VCHI_MEM_HANDLE_T handle;
++	void *data;
++	int size;
++	void *remote_data;
++	int remote_size;
++	int actual;
++} VCHIQ_BULK_T;
++
++typedef struct vchiq_bulk_queue_struct {
++	int local_insert;  /* Where to insert the next local bulk */
++	int remote_insert; /* Where to insert the next remote bulk (master) */
++	int process;       /* Bulk to transfer next */
++	int remote_notify; /* Bulk to notify the remote client of next (mstr) */
++	int remove;        /* Bulk to notify the local client of, and remove,
++			   ** next */
++	VCHIQ_BULK_T bulks[VCHIQ_NUM_SERVICE_BULKS];
++} VCHIQ_BULK_QUEUE_T;
++
++typedef struct remote_event_struct {
++	int armed;
++	int fired;
++	struct semaphore *event;
++} REMOTE_EVENT_T;
++
++typedef struct opaque_platform_state_t *VCHIQ_PLATFORM_STATE_T;
++
++typedef struct vchiq_state_struct VCHIQ_STATE_T;
++
++typedef struct vchiq_slot_struct {
++	char data[VCHIQ_SLOT_SIZE];
++} VCHIQ_SLOT_T;
++
++typedef struct vchiq_slot_info_struct {
++	/* Use two counters rather than one to avoid the need for a mutex. */
++	short use_count;
++	short release_count;
++} VCHIQ_SLOT_INFO_T;
++
++typedef struct vchiq_service_struct {
++	VCHIQ_SERVICE_BASE_T base;
++	VCHIQ_SERVICE_HANDLE_T handle;
++	unsigned int ref_count;
++	int srvstate;
++	VCHIQ_USERDATA_TERM_T userdata_term;
++	unsigned int localport;
++	unsigned int remoteport;
++	int public_fourcc;
++	int client_id;
++	char auto_close;
++	char sync;
++	char closing;
++	char trace;
++	atomic_t poll_flags;
++	short version;
++	short version_min;
++	short peer_version;
++
++	VCHIQ_STATE_T *state;
++	VCHIQ_INSTANCE_T instance;
++
++	int service_use_count;
++
++	VCHIQ_BULK_QUEUE_T bulk_tx;
++	VCHIQ_BULK_QUEUE_T bulk_rx;
++
++	struct semaphore remove_event;
++	struct semaphore bulk_remove_event;
++	struct mutex bulk_mutex;
++
++	struct service_stats_struct {
++		int quota_stalls;
++		int slot_stalls;
++		int bulk_stalls;
++		int error_count;
++		int ctrl_tx_count;
++		int ctrl_rx_count;
++		int bulk_tx_count;
++		int bulk_rx_count;
++		int bulk_aborted_count;
++		uint64_t ctrl_tx_bytes;
++		uint64_t ctrl_rx_bytes;
++		uint64_t bulk_tx_bytes;
++		uint64_t bulk_rx_bytes;
++	} stats;
++} VCHIQ_SERVICE_T;
++
++/* The quota information is outside VCHIQ_SERVICE_T so that it can be
++	statically allocated, since for accounting reasons a service's slot
++	usage is carried over between users of the same port number.
++ */
++typedef struct vchiq_service_quota_struct {
++	unsigned short slot_quota;
++	unsigned short slot_use_count;
++	unsigned short message_quota;
++	unsigned short message_use_count;
++	struct semaphore quota_event;
++	int previous_tx_index;
++} VCHIQ_SERVICE_QUOTA_T;
++
++typedef struct vchiq_shared_state_struct {
++
++	/* A non-zero value here indicates that the content is valid. */
++	int initialised;
++
++	/* The first and last (inclusive) slots allocated to the owner. */
++	int slot_first;
++	int slot_last;
++
++	/* The slot allocated to synchronous messages from the owner. */
++	int slot_sync;
++
++	/* Signalling this event indicates that the owner's slot handler
++	** thread should run. */
++	REMOTE_EVENT_T trigger;
++
++	/* Indicates the byte position within the stream where the next message
++	** will be written. The least significant bits are an index into the
++	** slot. The next bits are the index of the slot in slot_queue. */
++	int tx_pos;
++
++	/* This event should be signalled when a slot is recycled. */
++	REMOTE_EVENT_T recycle;
++
++	/* The slot_queue index where the next recycled slot will be written. */
++	int slot_queue_recycle;
++
++	/* This event should be signalled when a synchronous message is sent. */
++	REMOTE_EVENT_T sync_trigger;
++
++	/* This event should be signalled when a synchronous message has been
++	** released. */
++	REMOTE_EVENT_T sync_release;
++
++	/* A circular buffer of slot indexes. */
++	int slot_queue[VCHIQ_MAX_SLOTS_PER_SIDE];
++
++	/* Debugging state */
++	int debug[DEBUG_MAX];
++} VCHIQ_SHARED_STATE_T;
++
++typedef struct vchiq_slot_zero_struct {
++	int magic;
++	short version;
++	short version_min;
++	int slot_zero_size;
++	int slot_size;
++	int max_slots;
++	int max_slots_per_side;
++	int platform_data[2];
++	VCHIQ_SHARED_STATE_T master;
++	VCHIQ_SHARED_STATE_T slave;
++	VCHIQ_SLOT_INFO_T slots[VCHIQ_MAX_SLOTS];
++} VCHIQ_SLOT_ZERO_T;
++
++struct vchiq_state_struct {
++	int id;
++	int initialised;
++	VCHIQ_CONNSTATE_T conn_state;
++	int is_master;
++	short version_common;
++
++	VCHIQ_SHARED_STATE_T *local;
++	VCHIQ_SHARED_STATE_T *remote;
++	VCHIQ_SLOT_T *slot_data;
++
++	unsigned short default_slot_quota;
++	unsigned short default_message_quota;
++
++	/* Event indicating connect message received */
++	struct semaphore connect;
++
++	/* Mutex protecting services */
++	struct mutex mutex;
++	VCHIQ_INSTANCE_T *instance;
++
++	/* Processes incoming messages */
++	struct task_struct *slot_handler_thread;
++
++	/* Processes recycled slots */
++	struct task_struct *recycle_thread;
++
++	/* Processes synchronous messages */
++	struct task_struct *sync_thread;
++
++	/* Local implementation of the trigger remote event */
++	struct semaphore trigger_event;
++
++	/* Local implementation of the recycle remote event */
++	struct semaphore recycle_event;
++
++	/* Local implementation of the sync trigger remote event */
++	struct semaphore sync_trigger_event;
++
++	/* Local implementation of the sync release remote event */
++	struct semaphore sync_release_event;
++
++	char *tx_data;
++	char *rx_data;
++	VCHIQ_SLOT_INFO_T *rx_info;
++
++	struct mutex slot_mutex;
++
++	struct mutex recycle_mutex;
++
++	struct mutex sync_mutex;
++
++	struct mutex bulk_transfer_mutex;
++
++	/* Indicates the byte position within the stream from where the next
++	** message will be read. The least significant bits are an index into
++	** the slot. The next bits are the index of the slot in
++	** remote->slot_queue. */
++	int rx_pos;
++
++	/* A cached copy of local->tx_pos. Only write to local->tx_pos, and read
++		from remote->tx_pos. */
++	int local_tx_pos;
++
++	/* The slot_queue index of the slot to become available next. */
++	int slot_queue_available;
++
++	/* A flag to indicate if any poll has been requested */
++	int poll_needed;
++
++	/* The index of the previous slot used for data messages. */
++	int previous_data_index;
++
++	/* The number of slots occupied by data messages. */
++	unsigned short data_use_count;
++
++	/* The maximum number of slots to be occupied by data messages. */
++	unsigned short data_quota;
++
++	/* An array of bit sets indicating which services must be polled. */
++	atomic_t poll_services[BITSET_SIZE(VCHIQ_MAX_SERVICES)];
++
++	/* The number of the first unused service */
++	int unused_service;
++
++	/* Signalled when a free slot becomes available. */
++	struct semaphore slot_available_event;
++
++	struct semaphore slot_remove_event;
++
++	/* Signalled when a free data slot becomes available. */
++	struct semaphore data_quota_event;
++
++	/* Incremented when there are bulk transfers which cannot be processed
++	 * whilst paused and must be processed on resume */
++	int deferred_bulks;
++
++	struct state_stats_struct {
++		int slot_stalls;
++		int data_stalls;
++		int ctrl_tx_count;
++		int ctrl_rx_count;
++		int error_count;
++	} stats;
++
++	VCHIQ_SERVICE_T * services[VCHIQ_MAX_SERVICES];
++	VCHIQ_SERVICE_QUOTA_T service_quotas[VCHIQ_MAX_SERVICES];
++	VCHIQ_SLOT_INFO_T slot_info[VCHIQ_MAX_SLOTS];
++
++	VCHIQ_PLATFORM_STATE_T platform_state;
++};
++
++struct bulk_waiter {
++	VCHIQ_BULK_T *bulk;
++	struct semaphore event;
++	int actual;
++};
++
++extern spinlock_t bulk_waiter_spinlock;
++
++extern int vchiq_core_log_level;
++extern int vchiq_core_msg_log_level;
++extern int vchiq_sync_log_level;
++
++extern VCHIQ_STATE_T *vchiq_states[VCHIQ_MAX_STATES];
++
++extern const char *
++get_conn_state_name(VCHIQ_CONNSTATE_T conn_state);
++
++extern VCHIQ_SLOT_ZERO_T *
++vchiq_init_slots(void *mem_base, int mem_size);
++
++extern VCHIQ_STATUS_T
++vchiq_init_state(VCHIQ_STATE_T *state, VCHIQ_SLOT_ZERO_T *slot_zero,
++	int is_master);
++
++extern VCHIQ_STATUS_T
++vchiq_connect_internal(VCHIQ_STATE_T *state, VCHIQ_INSTANCE_T instance);
++
++extern VCHIQ_SERVICE_T *
++vchiq_add_service_internal(VCHIQ_STATE_T *state,
++	const VCHIQ_SERVICE_PARAMS_T *params, int srvstate,
++	VCHIQ_INSTANCE_T instance, VCHIQ_USERDATA_TERM_T userdata_term);
++
++extern VCHIQ_STATUS_T
++vchiq_open_service_internal(VCHIQ_SERVICE_T *service, int client_id);
++
++extern VCHIQ_STATUS_T
++vchiq_close_service_internal(VCHIQ_SERVICE_T *service, int close_recvd);
++
++extern void
++vchiq_terminate_service_internal(VCHIQ_SERVICE_T *service);
++
++extern void
++vchiq_free_service_internal(VCHIQ_SERVICE_T *service);
++
++extern VCHIQ_STATUS_T
++vchiq_shutdown_internal(VCHIQ_STATE_T *state, VCHIQ_INSTANCE_T instance);
++
++extern VCHIQ_STATUS_T
++vchiq_pause_internal(VCHIQ_STATE_T *state);
++
++extern VCHIQ_STATUS_T
++vchiq_resume_internal(VCHIQ_STATE_T *state);
++
++extern void
++remote_event_pollall(VCHIQ_STATE_T *state);
++
++extern VCHIQ_STATUS_T
++vchiq_bulk_transfer(VCHIQ_SERVICE_HANDLE_T handle,
++	VCHI_MEM_HANDLE_T memhandle, void *offset, int size, void *userdata,
++	VCHIQ_BULK_MODE_T mode, VCHIQ_BULK_DIR_T dir);
++
++extern void
++vchiq_dump_state(void *dump_context, VCHIQ_STATE_T *state);
++
++extern void
++vchiq_dump_service_state(void *dump_context, VCHIQ_SERVICE_T *service);
++
++extern void
++vchiq_loud_error_header(void);
++
++extern void
++vchiq_loud_error_footer(void);
++
++extern void
++request_poll(VCHIQ_STATE_T *state, VCHIQ_SERVICE_T *service, int poll_type);
++
++static inline VCHIQ_SERVICE_T *
++handle_to_service(VCHIQ_SERVICE_HANDLE_T handle)
++{
++	VCHIQ_STATE_T *state = vchiq_states[(handle / VCHIQ_MAX_SERVICES) &
++		(VCHIQ_MAX_STATES - 1)];
++	if (!state)
++		return NULL;
++
++	return state->services[handle & (VCHIQ_MAX_SERVICES - 1)];
++}
++
++extern VCHIQ_SERVICE_T *
++find_service_by_handle(VCHIQ_SERVICE_HANDLE_T handle);
++
++extern VCHIQ_SERVICE_T *
++find_service_by_port(VCHIQ_STATE_T *state, int localport);
++
++extern VCHIQ_SERVICE_T *
++find_service_for_instance(VCHIQ_INSTANCE_T instance,
++	VCHIQ_SERVICE_HANDLE_T handle);
++
++extern VCHIQ_SERVICE_T *
++find_closed_service_for_instance(VCHIQ_INSTANCE_T instance,
++	VCHIQ_SERVICE_HANDLE_T handle);
++
++extern VCHIQ_SERVICE_T *
++next_service_by_instance(VCHIQ_STATE_T *state, VCHIQ_INSTANCE_T instance,
++	int *pidx);
++
++extern void
++lock_service(VCHIQ_SERVICE_T *service);
++
++extern void
++unlock_service(VCHIQ_SERVICE_T *service);
++
++/* The following functions are called from vchiq_core, and external
++** implementations must be provided. */
++
++extern VCHIQ_STATUS_T
++vchiq_prepare_bulk_data(VCHIQ_BULK_T *bulk,
++	VCHI_MEM_HANDLE_T memhandle, void *offset, int size, int dir);
++
++extern void
++vchiq_transfer_bulk(VCHIQ_BULK_T *bulk);
++
++extern void
++vchiq_complete_bulk(VCHIQ_BULK_T *bulk);
++
++extern VCHIQ_STATUS_T
++vchiq_copy_from_user(void *dst, const void *src, int size);
++
++extern void
++remote_event_signal(REMOTE_EVENT_T *event);
++
++void
++vchiq_platform_check_suspend(VCHIQ_STATE_T *state);
++
++extern void
++vchiq_platform_paused(VCHIQ_STATE_T *state);
++
++extern VCHIQ_STATUS_T
++vchiq_platform_resume(VCHIQ_STATE_T *state);
++
++extern void
++vchiq_platform_resumed(VCHIQ_STATE_T *state);
++
++extern void
++vchiq_dump(void *dump_context, const char *str, int len);
++
++extern void
++vchiq_dump_platform_state(void *dump_context);
++
++extern void
++vchiq_dump_platform_instances(void *dump_context);
++
++extern void
++vchiq_dump_platform_service_state(void *dump_context,
++	VCHIQ_SERVICE_T *service);
++
++extern VCHIQ_STATUS_T
++vchiq_use_service_internal(VCHIQ_SERVICE_T *service);
++
++extern VCHIQ_STATUS_T
++vchiq_release_service_internal(VCHIQ_SERVICE_T *service);
++
++extern void
++vchiq_on_remote_use(VCHIQ_STATE_T *state);
++
++extern void
++vchiq_on_remote_release(VCHIQ_STATE_T *state);
++
++extern VCHIQ_STATUS_T
++vchiq_platform_init_state(VCHIQ_STATE_T *state);
++
++extern VCHIQ_STATUS_T
++vchiq_check_service(VCHIQ_SERVICE_T *service);
++
++extern void
++vchiq_on_remote_use_active(VCHIQ_STATE_T *state);
++
++extern VCHIQ_STATUS_T
++vchiq_send_remote_use(VCHIQ_STATE_T *state);
++
++extern VCHIQ_STATUS_T
++vchiq_send_remote_release(VCHIQ_STATE_T *state);
++
++extern VCHIQ_STATUS_T
++vchiq_send_remote_use_active(VCHIQ_STATE_T *state);
++
++extern void
++vchiq_platform_conn_state_changed(VCHIQ_STATE_T *state,
++	VCHIQ_CONNSTATE_T oldstate, VCHIQ_CONNSTATE_T newstate);
++
++extern void
++vchiq_platform_handle_timeout(VCHIQ_STATE_T *state);
++
++extern void
++vchiq_set_conn_state(VCHIQ_STATE_T *state, VCHIQ_CONNSTATE_T newstate);
++
++
++extern void
++vchiq_log_dump_mem(const char *label, uint32_t addr, const void *voidMem,
++	size_t numBytes);
++
++#endif
+--- /dev/null
++++ b/drivers/misc/vc04_services/interface/vchiq_arm/vchiq_debugfs.c
+@@ -0,0 +1,383 @@
++/**
++ * Copyright (c) 2014 Raspberry Pi (Trading) Ltd. All rights reserved.
++ * Copyright (c) 2010-2012 Broadcom. All rights reserved.
++ *
++ * Redistribution and use in source and binary forms, with or without
++ * modification, are permitted provided that the following conditions
++ * are met:
++ * 1. Redistributions of source code must retain the above copyright
++ *    notice, this list of conditions, and the following disclaimer,
++ *    without modification.
++ * 2. Redistributions in binary form must reproduce the above copyright
++ *    notice, this list of conditions and the following disclaimer in the
++ *    documentation and/or other materials provided with the distribution.
++ * 3. The names of the above-listed copyright holders may not be used
++ *    to endorse or promote products derived from this software without
++ *    specific prior written permission.
++ *
++ * ALTERNATIVELY, this software may be distributed under the terms of the
++ * GNU General Public License ("GPL") version 2, as published by the Free
++ * Software Foundation.
++ *
++ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
++ * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
++ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
++ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
++ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
++ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
++ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
++ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
++ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
++ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
++ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
++ */
++
++
++#include <linux/debugfs.h>
++#include "vchiq_core.h"
++#include "vchiq_arm.h"
++#include "vchiq_debugfs.h"
++
++#ifdef CONFIG_DEBUG_FS
++
++/****************************************************************************
++*
++*   log category entries
++*
++***************************************************************************/
++#define DEBUGFS_WRITE_BUF_SIZE 256
++
++#define VCHIQ_LOG_ERROR_STR   "error"
++#define VCHIQ_LOG_WARNING_STR "warning"
++#define VCHIQ_LOG_INFO_STR    "info"
++#define VCHIQ_LOG_TRACE_STR   "trace"
++
++
++/* Top-level debug info */
++struct vchiq_debugfs_info {
++	/* Global 'vchiq' debugfs entry used by all instances */
++	struct dentry *vchiq_cfg_dir;
++
++	/* one entry per client process */
++	struct dentry *clients;
++
++	/* log categories */
++	struct dentry *log_categories;
++};
++
++static struct vchiq_debugfs_info debugfs_info;
++
++/* Log category debugfs entries */
++struct vchiq_debugfs_log_entry {
++	const char *name;
++	int *plevel;
++	struct dentry *dir;
++};
++
++static struct vchiq_debugfs_log_entry vchiq_debugfs_log_entries[] = {
++	{ "core", &vchiq_core_log_level },
++	{ "msg",  &vchiq_core_msg_log_level },
++	{ "sync", &vchiq_sync_log_level },
++	{ "susp", &vchiq_susp_log_level },
++	{ "arm",  &vchiq_arm_log_level },
++};
++static int n_log_entries =
++	sizeof(vchiq_debugfs_log_entries)/sizeof(vchiq_debugfs_log_entries[0]);
++
++
++static struct dentry *vchiq_clients_top(void);
++static struct dentry *vchiq_debugfs_top(void);
++
++static int debugfs_log_show(struct seq_file *f, void *offset)
++{
++	int *levp = f->private;
++	char *log_value = NULL;
++
++	switch (*levp) {
++	case VCHIQ_LOG_ERROR:
++		log_value = VCHIQ_LOG_ERROR_STR;
++		break;
++	case VCHIQ_LOG_WARNING:
++		log_value = VCHIQ_LOG_WARNING_STR;
++		break;
++	case VCHIQ_LOG_INFO:
++		log_value = VCHIQ_LOG_INFO_STR;
++		break;
++	case VCHIQ_LOG_TRACE:
++		log_value = VCHIQ_LOG_TRACE_STR;
++		break;
++	default:
++		break;
++	}
++
++	seq_printf(f, "%s\n", log_value ? log_value : "(null)");
++
++	return 0;
++}
++
++static int debugfs_log_open(struct inode *inode, struct file *file)
++{
++	return single_open(file, debugfs_log_show, inode->i_private);
++}
++
++static int debugfs_log_write(struct file *file,
++	const char __user *buffer,
++	size_t count, loff_t *ppos)
++{
++	struct seq_file *f = (struct seq_file *)file->private_data;
++	int *levp = f->private;
++	char kbuf[DEBUGFS_WRITE_BUF_SIZE + 1];
++
++	memset(kbuf, 0, DEBUGFS_WRITE_BUF_SIZE + 1);
++	if (count >= DEBUGFS_WRITE_BUF_SIZE)
++		count = DEBUGFS_WRITE_BUF_SIZE;
++
++	if (copy_from_user(kbuf, buffer, count) != 0)
++		return -EFAULT;
++	kbuf[count - 1] = 0;
++
++	if (strncmp("error", kbuf, strlen("error")) == 0)
++		*levp = VCHIQ_LOG_ERROR;
++	else if (strncmp("warning", kbuf, strlen("warning")) == 0)
++		*levp = VCHIQ_LOG_WARNING;
++	else if (strncmp("info", kbuf, strlen("info")) == 0)
++		*levp = VCHIQ_LOG_INFO;
++	else if (strncmp("trace", kbuf, strlen("trace")) == 0)
++		*levp = VCHIQ_LOG_TRACE;
++	else
++		*levp = VCHIQ_LOG_DEFAULT;
++
++	*ppos += count;
++
++	return count;
++}
++
++static const struct file_operations debugfs_log_fops = {
++	.owner		= THIS_MODULE,
++	.open		= debugfs_log_open,
++	.write		= debugfs_log_write,
++	.read		= seq_read,
++	.llseek		= seq_lseek,
++	.release	= single_release,
++};
++
++/* create an entry under <debugfs>/vchiq/log for each log category */
++static int vchiq_debugfs_create_log_entries(struct dentry *top)
++{
++	struct dentry *dir;
++	size_t i;
++	int ret = 0;
++	dir = debugfs_create_dir("log", vchiq_debugfs_top());
++	if (!dir)
++		return -ENOMEM;
++	debugfs_info.log_categories = dir;
++
++	for (i = 0; i < n_log_entries; i++) {
++		void *levp = (void *)vchiq_debugfs_log_entries[i].plevel;
++		dir = debugfs_create_file(vchiq_debugfs_log_entries[i].name,
++					  0644,
++					  debugfs_info.log_categories,
++					  levp,
++					  &debugfs_log_fops);
++		if (!dir) {
++			ret = -ENOMEM;
++			break;
++		}
++
++		vchiq_debugfs_log_entries[i].dir = dir;
++	}
++	return ret;
++}
++
++static int debugfs_usecount_show(struct seq_file *f, void *offset)
++{
++	VCHIQ_INSTANCE_T instance = f->private;
++	int use_count;
++
++	use_count = vchiq_instance_get_use_count(instance);
++	seq_printf(f, "%d\n", use_count);
++
++	return 0;
++}
++
++static int debugfs_usecount_open(struct inode *inode, struct file *file)
++{
++	return single_open(file, debugfs_usecount_show, inode->i_private);
++}
++
++static const struct file_operations debugfs_usecount_fops = {
++	.owner		= THIS_MODULE,
++	.open		= debugfs_usecount_open,
++	.read		= seq_read,
++	.llseek		= seq_lseek,
++	.release	= single_release,
++};
++
++static int debugfs_trace_show(struct seq_file *f, void *offset)
++{
++	VCHIQ_INSTANCE_T instance = f->private;
++	int trace;
++
++	trace = vchiq_instance_get_trace(instance);
++	seq_printf(f, "%s\n", trace ? "Y" : "N");
++
++	return 0;
++}
++
++static int debugfs_trace_open(struct inode *inode, struct file *file)
++{
++	return single_open(file, debugfs_trace_show, inode->i_private);
++}
++
++static int debugfs_trace_write(struct file *file,
++	const char __user *buffer,
++	size_t count, loff_t *ppos)
++{
++	struct seq_file *f = (struct seq_file *)file->private_data;
++	VCHIQ_INSTANCE_T instance = f->private;
++	char firstchar;
++
++	if (copy_from_user(&firstchar, buffer, 1) != 0)
++		return -EFAULT;
++
++	switch (firstchar) {
++	case 'Y':
++	case 'y':
++	case '1':
++		vchiq_instance_set_trace(instance, 1);
++		break;
++	case 'N':
++	case 'n':
++	case '0':
++		vchiq_instance_set_trace(instance, 0);
++		break;
++	default:
++		break;
++	}
++
++	*ppos += count;
++
++	return count;
++}
++
++static const struct file_operations debugfs_trace_fops = {
++	.owner		= THIS_MODULE,
++	.open		= debugfs_trace_open,
++	.write		= debugfs_trace_write,
++	.read		= seq_read,
++	.llseek		= seq_lseek,
++	.release	= single_release,
++};
++
++/* add an instance (process) to the debugfs entries */
++int vchiq_debugfs_add_instance(VCHIQ_INSTANCE_T instance)
++{
++	char pidstr[16];
++	struct dentry *top, *use_count, *trace;
++	struct dentry *clients = vchiq_clients_top();
++
++	snprintf(pidstr, sizeof(pidstr), "%d",
++		 vchiq_instance_get_pid(instance));
++
++	top = debugfs_create_dir(pidstr, clients);
++	if (!top)
++		goto fail_top;
++
++	use_count = debugfs_create_file("use_count",
++					0444, top,
++					instance,
++					&debugfs_usecount_fops);
++	if (!use_count)
++		goto fail_use_count;
++
++	trace = debugfs_create_file("trace",
++				    0644, top,
++				    instance,
++				    &debugfs_trace_fops);
++	if (!trace)
++		goto fail_trace;
++
++	vchiq_instance_get_debugfs_node(instance)->dentry = top;
++
++	return 0;
++
++fail_trace:
++	debugfs_remove(use_count);
++fail_use_count:
++	debugfs_remove(top);
++fail_top:
++	return -ENOMEM;
++}
++
++void vchiq_debugfs_remove_instance(VCHIQ_INSTANCE_T instance)
++{
++	VCHIQ_DEBUGFS_NODE_T *node = vchiq_instance_get_debugfs_node(instance);
++	debugfs_remove_recursive(node->dentry);
++}
++
++
++int vchiq_debugfs_init(void)
++{
++	BUG_ON(debugfs_info.vchiq_cfg_dir != NULL);
++
++	debugfs_info.vchiq_cfg_dir = debugfs_create_dir("vchiq", NULL);
++	if (debugfs_info.vchiq_cfg_dir == NULL)
++		goto fail;
++
++	debugfs_info.clients = debugfs_create_dir("clients",
++				vchiq_debugfs_top());
++	if (!debugfs_info.clients)
++		goto fail;
++
++	if (vchiq_debugfs_create_log_entries(vchiq_debugfs_top()) != 0)
++		goto fail;
++
++	return 0;
++
++fail:
++	vchiq_debugfs_deinit();
++	vchiq_log_error(vchiq_arm_log_level,
++		"%s: failed to create debugfs directory",
++		__func__);
++
++	return -ENOMEM;
++}
++
++/* remove all the debugfs entries */
++void vchiq_debugfs_deinit(void)
++{
++	debugfs_remove_recursive(vchiq_debugfs_top());
++}
++
++static struct dentry *vchiq_clients_top(void)
++{
++	return debugfs_info.clients;
++}
++
++static struct dentry *vchiq_debugfs_top(void)
++{
++	BUG_ON(debugfs_info.vchiq_cfg_dir == NULL);
++	return debugfs_info.vchiq_cfg_dir;
++}
++
++#else /* CONFIG_DEBUG_FS */
++
++int vchiq_debugfs_init(void)
++{
++	return 0;
++}
++
++void vchiq_debugfs_deinit(void)
++{
++}
++
++int vchiq_debugfs_add_instance(VCHIQ_INSTANCE_T instance)
++{
++	return 0;
++}
++
++void vchiq_debugfs_remove_instance(VCHIQ_INSTANCE_T instance)
++{
++}
++
++#endif /* CONFIG_DEBUG_FS */
+--- /dev/null
++++ b/drivers/misc/vc04_services/interface/vchiq_arm/vchiq_debugfs.h
+@@ -0,0 +1,52 @@
++/**
++ * Copyright (c) 2014 Raspberry Pi (Trading) Ltd. All rights reserved.
++ *
++ * Redistribution and use in source and binary forms, with or without
++ * modification, are permitted provided that the following conditions
++ * are met:
++ * 1. Redistributions of source code must retain the above copyright
++ *    notice, this list of conditions, and the following disclaimer,
++ *    without modification.
++ * 2. Redistributions in binary form must reproduce the above copyright
++ *    notice, this list of conditions and the following disclaimer in the
++ *    documentation and/or other materials provided with the distribution.
++ * 3. The names of the above-listed copyright holders may not be used
++ *    to endorse or promote products derived from this software without
++ *    specific prior written permission.
++ *
++ * ALTERNATIVELY, this software may be distributed under the terms of the
++ * GNU General Public License ("GPL") version 2, as published by the Free
++ * Software Foundation.
++ *
++ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
++ * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
++ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
++ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
++ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
++ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
++ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
++ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
++ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
++ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
++ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
++ */
++
++#ifndef VCHIQ_DEBUGFS_H
++#define VCHIQ_DEBUGFS_H
++
++#include "vchiq_core.h"
++
++typedef struct vchiq_debugfs_node_struct
++{
++    struct dentry *dentry;
++} VCHIQ_DEBUGFS_NODE_T;
++
++int vchiq_debugfs_init(void);
++
++void vchiq_debugfs_deinit(void);
++
++int vchiq_debugfs_add_instance(VCHIQ_INSTANCE_T instance);
++
++void vchiq_debugfs_remove_instance(VCHIQ_INSTANCE_T instance);
++
++#endif /* VCHIQ_DEBUGFS_H */
+--- /dev/null
++++ b/drivers/misc/vc04_services/interface/vchiq_arm/vchiq_genversion
+@@ -0,0 +1,87 @@
++#!/usr/bin/perl -w
++
++use strict;
++
++#
++# Generate a version from available information
++#
++
++my $prefix = shift @ARGV;
++my $root = shift @ARGV;
++
++
++if ( not defined $root ) {
++	die "usage: $0 prefix root-dir\n";
++}
++
++if ( ! -d $root ) {
++	die "root directory $root not found\n";
++}
++
++my $version = "unknown";
++my $tainted = "";
++
++if ( -d "$root/.git" ) {
++	# attempt to work out git version. only do so
++	# on a linux build host, as cygwin builds are
++	# already slow enough
++
++	if ( -f "/usr/bin/git" || -f "/usr/local/bin/git" ) {
++		if (not open(F, "git --git-dir $root/.git rev-parse --verify HEAD|")) {
++			$version = "no git version";
++		}
++		else {
++			$version = <F>;
++			$version =~ s/[ \r\n]*$//;     # chomp may not be enough (cygwin).
++			$version =~ s/^[ \r\n]*//;     # chomp may not be enough (cygwin).
++		}
++
++		if (open(G, "git --git-dir $root/.git status --porcelain|")) {
++			$tainted = <G>;
++			$tainted =~ s/[ \r\n]*$//;     # chomp may not be enough (cygwin).
++			$tainted =~ s/^[ \r\n]*//;     # chomp may not be enough (cygwin).
++			if (length $tainted) {
++			$version = join ' ', $version, "(tainted)";
++		}
++		else {
++			$version = join ' ', $version, "(clean)";
++         }
++		}
++	}
++}
++
++my $hostname = `hostname`;
++$hostname =~ s/[ \r\n]*$//;     # chomp may not be enough (cygwin).
++$hostname =~ s/^[ \r\n]*//;     # chomp may not be enough (cygwin).
++
++
++print STDERR "Version $version\n";
++print <<EOF;
++#include "${prefix}_build_info.h"
++#include <linux/broadcom/vc_debug_sym.h>
++
++VC_DEBUG_DECLARE_STRING_VAR( ${prefix}_build_hostname, "$hostname" );
++VC_DEBUG_DECLARE_STRING_VAR( ${prefix}_build_version, "$version" );
++VC_DEBUG_DECLARE_STRING_VAR( ${prefix}_build_time,    __TIME__ );
++VC_DEBUG_DECLARE_STRING_VAR( ${prefix}_build_date,    __DATE__ );
++
++const char *vchiq_get_build_hostname( void )
++{
++   return vchiq_build_hostname;
++}
++
++const char *vchiq_get_build_version( void )
++{
++   return vchiq_build_version;
++}
++
++const char *vchiq_get_build_date( void )
++{
++   return vchiq_build_date;
++}
++
++const char *vchiq_get_build_time( void )
++{
++   return vchiq_build_time;
++}
++EOF
+--- /dev/null
++++ b/drivers/misc/vc04_services/interface/vchiq_arm/vchiq_if.h
+@@ -0,0 +1,189 @@
++/**
++ * Copyright (c) 2010-2012 Broadcom. All rights reserved.
++ *
++ * Redistribution and use in source and binary forms, with or without
++ * modification, are permitted provided that the following conditions
++ * are met:
++ * 1. Redistributions of source code must retain the above copyright
++ *    notice, this list of conditions, and the following disclaimer,
++ *    without modification.
++ * 2. Redistributions in binary form must reproduce the above copyright
++ *    notice, this list of conditions and the following disclaimer in the
++ *    documentation and/or other materials provided with the distribution.
++ * 3. The names of the above-listed copyright holders may not be used
++ *    to endorse or promote products derived from this software without
++ *    specific prior written permission.
++ *
++ * ALTERNATIVELY, this software may be distributed under the terms of the
++ * GNU General Public License ("GPL") version 2, as published by the Free
++ * Software Foundation.
++ *
++ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
++ * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
++ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
++ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
++ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
++ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
++ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
++ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
++ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
++ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
++ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
++ */
++
++#ifndef VCHIQ_IF_H
++#define VCHIQ_IF_H
++
++#include "interface/vchi/vchi_mh.h"
++
++#define VCHIQ_SERVICE_HANDLE_INVALID 0
++
++#define VCHIQ_SLOT_SIZE     4096
++#define VCHIQ_MAX_MSG_SIZE  (VCHIQ_SLOT_SIZE - sizeof(VCHIQ_HEADER_T))
++#define VCHIQ_CHANNEL_SIZE  VCHIQ_MAX_MSG_SIZE /* For backwards compatibility */
++
++#define VCHIQ_MAKE_FOURCC(x0, x1, x2, x3) \
++			(((x0) << 24) | ((x1) << 16) | ((x2) << 8) | (x3))
++#define VCHIQ_GET_SERVICE_USERDATA(service) vchiq_get_service_userdata(service)
++#define VCHIQ_GET_SERVICE_FOURCC(service)   vchiq_get_service_fourcc(service)
++
++typedef enum {
++	VCHIQ_SERVICE_OPENED,         /* service, -, -             */
++	VCHIQ_SERVICE_CLOSED,         /* service, -, -             */
++	VCHIQ_MESSAGE_AVAILABLE,      /* service, header, -        */
++	VCHIQ_BULK_TRANSMIT_DONE,     /* service, -, bulk_userdata */
++	VCHIQ_BULK_RECEIVE_DONE,      /* service, -, bulk_userdata */
++	VCHIQ_BULK_TRANSMIT_ABORTED,  /* service, -, bulk_userdata */
++	VCHIQ_BULK_RECEIVE_ABORTED    /* service, -, bulk_userdata */
++} VCHIQ_REASON_T;
++
++typedef enum {
++	VCHIQ_ERROR   = -1,
++	VCHIQ_SUCCESS = 0,
++	VCHIQ_RETRY   = 1
++} VCHIQ_STATUS_T;
++
++typedef enum {
++	VCHIQ_BULK_MODE_CALLBACK,
++	VCHIQ_BULK_MODE_BLOCKING,
++	VCHIQ_BULK_MODE_NOCALLBACK,
++	VCHIQ_BULK_MODE_WAITING		/* Reserved for internal use */
++} VCHIQ_BULK_MODE_T;
++
++typedef enum {
++	VCHIQ_SERVICE_OPTION_AUTOCLOSE,
++	VCHIQ_SERVICE_OPTION_SLOT_QUOTA,
++	VCHIQ_SERVICE_OPTION_MESSAGE_QUOTA,
++	VCHIQ_SERVICE_OPTION_SYNCHRONOUS,
++	VCHIQ_SERVICE_OPTION_TRACE
++} VCHIQ_SERVICE_OPTION_T;
++
++typedef struct vchiq_header_struct {
++	/* The message identifier - opaque to applications. */
++	int msgid;
++
++	/* Size of message data. */
++	unsigned int size;
++
++	char data[0];           /* message */
++} VCHIQ_HEADER_T;
++
++typedef struct {
++	const void *data;
++	unsigned int size;
++} VCHIQ_ELEMENT_T;
++
++typedef unsigned int VCHIQ_SERVICE_HANDLE_T;
++
++typedef VCHIQ_STATUS_T (*VCHIQ_CALLBACK_T)(VCHIQ_REASON_T, VCHIQ_HEADER_T *,
++	VCHIQ_SERVICE_HANDLE_T, void *);
++
++typedef struct vchiq_service_base_struct {
++	int fourcc;
++	VCHIQ_CALLBACK_T callback;
++	void *userdata;
++} VCHIQ_SERVICE_BASE_T;
++
++typedef struct vchiq_service_params_struct {
++	int fourcc;
++	VCHIQ_CALLBACK_T callback;
++	void *userdata;
++	short version;       /* Increment for non-trivial changes */
++	short version_min;   /* Update for incompatible changes */
++} VCHIQ_SERVICE_PARAMS_T;
++
++typedef struct vchiq_config_struct {
++	unsigned int max_msg_size;
++	unsigned int bulk_threshold; /* The message size above which it
++					is better to use a bulk transfer
++					(<= max_msg_size) */
++	unsigned int max_outstanding_bulks;
++	unsigned int max_services;
++	short version;      /* The version of VCHIQ */
++	short version_min;  /* The minimum compatible version of VCHIQ */
++} VCHIQ_CONFIG_T;
++
++typedef struct vchiq_instance_struct *VCHIQ_INSTANCE_T;
++typedef void (*VCHIQ_REMOTE_USE_CALLBACK_T)(void *cb_arg);
++
++extern VCHIQ_STATUS_T vchiq_initialise(VCHIQ_INSTANCE_T *pinstance);
++extern VCHIQ_STATUS_T vchiq_shutdown(VCHIQ_INSTANCE_T instance);
++extern VCHIQ_STATUS_T vchiq_connect(VCHIQ_INSTANCE_T instance);
++extern VCHIQ_STATUS_T vchiq_add_service(VCHIQ_INSTANCE_T instance,
++	const VCHIQ_SERVICE_PARAMS_T *params,
++	VCHIQ_SERVICE_HANDLE_T *pservice);
++extern VCHIQ_STATUS_T vchiq_open_service(VCHIQ_INSTANCE_T instance,
++	const VCHIQ_SERVICE_PARAMS_T *params,
++	VCHIQ_SERVICE_HANDLE_T *pservice);
++extern VCHIQ_STATUS_T vchiq_close_service(VCHIQ_SERVICE_HANDLE_T service);
++extern VCHIQ_STATUS_T vchiq_remove_service(VCHIQ_SERVICE_HANDLE_T service);
++extern VCHIQ_STATUS_T vchiq_use_service(VCHIQ_SERVICE_HANDLE_T service);
++extern VCHIQ_STATUS_T vchiq_use_service_no_resume(
++	VCHIQ_SERVICE_HANDLE_T service);
++extern VCHIQ_STATUS_T vchiq_release_service(VCHIQ_SERVICE_HANDLE_T service);
++
++extern VCHIQ_STATUS_T vchiq_queue_message(VCHIQ_SERVICE_HANDLE_T service,
++	const VCHIQ_ELEMENT_T *elements, unsigned int count);
++extern void           vchiq_release_message(VCHIQ_SERVICE_HANDLE_T service,
++	VCHIQ_HEADER_T *header);
++extern VCHIQ_STATUS_T vchiq_queue_bulk_transmit(VCHIQ_SERVICE_HANDLE_T service,
++	const void *data, unsigned int size, void *userdata);
++extern VCHIQ_STATUS_T vchiq_queue_bulk_receive(VCHIQ_SERVICE_HANDLE_T service,
++	void *data, unsigned int size, void *userdata);
++extern VCHIQ_STATUS_T vchiq_queue_bulk_transmit_handle(
++	VCHIQ_SERVICE_HANDLE_T service, VCHI_MEM_HANDLE_T handle,
++	const void *offset, unsigned int size, void *userdata);
++extern VCHIQ_STATUS_T vchiq_queue_bulk_receive_handle(
++	VCHIQ_SERVICE_HANDLE_T service, VCHI_MEM_HANDLE_T handle,
++	void *offset, unsigned int size, void *userdata);
++extern VCHIQ_STATUS_T vchiq_bulk_transmit(VCHIQ_SERVICE_HANDLE_T service,
++	const void *data, unsigned int size, void *userdata,
++	VCHIQ_BULK_MODE_T mode);
++extern VCHIQ_STATUS_T vchiq_bulk_receive(VCHIQ_SERVICE_HANDLE_T service,
++	void *data, unsigned int size, void *userdata,
++	VCHIQ_BULK_MODE_T mode);
++extern VCHIQ_STATUS_T vchiq_bulk_transmit_handle(VCHIQ_SERVICE_HANDLE_T service,
++	VCHI_MEM_HANDLE_T handle, const void *offset, unsigned int size,
++	void *userdata, VCHIQ_BULK_MODE_T mode);
++extern VCHIQ_STATUS_T vchiq_bulk_receive_handle(VCHIQ_SERVICE_HANDLE_T service,
++	VCHI_MEM_HANDLE_T handle, void *offset, unsigned int size,
++	void *userdata, VCHIQ_BULK_MODE_T mode);
++extern int   vchiq_get_client_id(VCHIQ_SERVICE_HANDLE_T service);
++extern void *vchiq_get_service_userdata(VCHIQ_SERVICE_HANDLE_T service);
++extern int   vchiq_get_service_fourcc(VCHIQ_SERVICE_HANDLE_T service);
++extern VCHIQ_STATUS_T vchiq_get_config(VCHIQ_INSTANCE_T instance,
++	int config_size, VCHIQ_CONFIG_T *pconfig);
++extern VCHIQ_STATUS_T vchiq_set_service_option(VCHIQ_SERVICE_HANDLE_T service,
++	VCHIQ_SERVICE_OPTION_T option, int value);
++
++extern VCHIQ_STATUS_T vchiq_remote_use(VCHIQ_INSTANCE_T instance,
++	VCHIQ_REMOTE_USE_CALLBACK_T callback, void *cb_arg);
++extern VCHIQ_STATUS_T vchiq_remote_release(VCHIQ_INSTANCE_T instance);
++
++extern VCHIQ_STATUS_T vchiq_dump_phys_mem(VCHIQ_SERVICE_HANDLE_T service,
++	void *ptr, size_t num_bytes);
++
++extern VCHIQ_STATUS_T vchiq_get_peer_version(VCHIQ_SERVICE_HANDLE_T handle,
++	short *peer_version);
++
++#endif /* VCHIQ_IF_H */
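++
++/*
++ * Illustrative sketch (not part of the driver): a minimal kernel-side
++ * client of the API above might look roughly like the fragment below.
++ * The "DEMO" fourcc, demo_callback and demo_open names are invented for
++ * this example only; VCHIQ_MAKE_FOURCC is assumed to be the fourcc
++ * helper defined earlier in this header.
++ *
++ *	static VCHIQ_STATUS_T demo_callback(VCHIQ_REASON_T reason,
++ *		VCHIQ_HEADER_T *header, VCHIQ_SERVICE_HANDLE_T handle,
++ *		void *userdata)
++ *	{
++ *		if (reason == VCHIQ_MESSAGE_AVAILABLE)
++ *			vchiq_release_message(handle, header);
++ *		return VCHIQ_SUCCESS;
++ *	}
++ *
++ *	static VCHIQ_STATUS_T demo_open(VCHIQ_SERVICE_HANDLE_T *service)
++ *	{
++ *		VCHIQ_INSTANCE_T instance;
++ *		VCHIQ_SERVICE_PARAMS_T params = {
++ *			.fourcc      = VCHIQ_MAKE_FOURCC('D', 'E', 'M', 'O'),
++ *			.callback    = demo_callback,
++ *			.userdata    = NULL,
++ *			.version     = 1,
++ *			.version_min = 1,
++ *		};
++ *
++ *		if (vchiq_initialise(&instance) != VCHIQ_SUCCESS)
++ *			return VCHIQ_ERROR;
++ *		if (vchiq_connect(instance) != VCHIQ_SUCCESS)
++ *			return VCHIQ_ERROR;
++ *		return vchiq_open_service(instance, &params, service);
++ *	}
++ */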
+--- /dev/null
++++ b/drivers/misc/vc04_services/interface/vchiq_arm/vchiq_ioctl.h
+@@ -0,0 +1,131 @@
++/**
++ * Copyright (c) 2010-2012 Broadcom. All rights reserved.
++ *
++ * Redistribution and use in source and binary forms, with or without
++ * modification, are permitted provided that the following conditions
++ * are met:
++ * 1. Redistributions of source code must retain the above copyright
++ *    notice, this list of conditions, and the following disclaimer,
++ *    without modification.
++ * 2. Redistributions in binary form must reproduce the above copyright
++ *    notice, this list of conditions and the following disclaimer in the
++ *    documentation and/or other materials provided with the distribution.
++ * 3. The names of the above-listed copyright holders may not be used
++ *    to endorse or promote products derived from this software without
++ *    specific prior written permission.
++ *
++ * ALTERNATIVELY, this software may be distributed under the terms of the
++ * GNU General Public License ("GPL") version 2, as published by the Free
++ * Software Foundation.
++ *
++ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
++ * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
++ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
++ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
++ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
++ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
++ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
++ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
++ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
++ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
++ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
++ */
++
++#ifndef VCHIQ_IOCTLS_H
++#define VCHIQ_IOCTLS_H
++
++#include <linux/ioctl.h>
++#include "vchiq_if.h"
++
++#define VCHIQ_IOC_MAGIC 0xc4
++#define VCHIQ_INVALID_HANDLE (~0)
++
++typedef struct {
++	VCHIQ_SERVICE_PARAMS_T params;
++	int is_open;
++	int is_vchi;
++	unsigned int handle;       /* OUT */
++} VCHIQ_CREATE_SERVICE_T;
++
++typedef struct {
++	unsigned int handle;
++	unsigned int count;
++	const VCHIQ_ELEMENT_T *elements;
++} VCHIQ_QUEUE_MESSAGE_T;
++
++typedef struct {
++	unsigned int handle;
++	void *data;
++	unsigned int size;
++	void *userdata;
++	VCHIQ_BULK_MODE_T mode;
++} VCHIQ_QUEUE_BULK_TRANSFER_T;
++
++typedef struct {
++	VCHIQ_REASON_T reason;
++	VCHIQ_HEADER_T *header;
++	void *service_userdata;
++	void *bulk_userdata;
++} VCHIQ_COMPLETION_DATA_T;
++
++typedef struct {
++	unsigned int count;
++	VCHIQ_COMPLETION_DATA_T *buf;
++	unsigned int msgbufsize;
++	unsigned int msgbufcount; /* IN/OUT */
++	void **msgbufs;
++} VCHIQ_AWAIT_COMPLETION_T;
++
++typedef struct {
++	unsigned int handle;
++	int blocking;
++	unsigned int bufsize;
++	void *buf;
++} VCHIQ_DEQUEUE_MESSAGE_T;
++
++typedef struct {
++	unsigned int config_size;
++	VCHIQ_CONFIG_T *pconfig;
++} VCHIQ_GET_CONFIG_T;
++
++typedef struct {
++	unsigned int handle;
++	VCHIQ_SERVICE_OPTION_T option;
++	int value;
++} VCHIQ_SET_SERVICE_OPTION_T;
++
++typedef struct {
++	void     *virt_addr;
++	size_t    num_bytes;
++} VCHIQ_DUMP_MEM_T;
++
++#define VCHIQ_IOC_CONNECT              _IO(VCHIQ_IOC_MAGIC,   0)
++#define VCHIQ_IOC_SHUTDOWN             _IO(VCHIQ_IOC_MAGIC,   1)
++#define VCHIQ_IOC_CREATE_SERVICE \
++	_IOWR(VCHIQ_IOC_MAGIC, 2, VCHIQ_CREATE_SERVICE_T)
++#define VCHIQ_IOC_REMOVE_SERVICE       _IO(VCHIQ_IOC_MAGIC,   3)
++#define VCHIQ_IOC_QUEUE_MESSAGE \
++	_IOW(VCHIQ_IOC_MAGIC,  4, VCHIQ_QUEUE_MESSAGE_T)
++#define VCHIQ_IOC_QUEUE_BULK_TRANSMIT \
++	_IOWR(VCHIQ_IOC_MAGIC, 5, VCHIQ_QUEUE_BULK_TRANSFER_T)
++#define VCHIQ_IOC_QUEUE_BULK_RECEIVE \
++	_IOWR(VCHIQ_IOC_MAGIC, 6, VCHIQ_QUEUE_BULK_TRANSFER_T)
++#define VCHIQ_IOC_AWAIT_COMPLETION \
++	_IOWR(VCHIQ_IOC_MAGIC, 7, VCHIQ_AWAIT_COMPLETION_T)
++#define VCHIQ_IOC_DEQUEUE_MESSAGE \
++	_IOWR(VCHIQ_IOC_MAGIC, 8, VCHIQ_DEQUEUE_MESSAGE_T)
++#define VCHIQ_IOC_GET_CLIENT_ID        _IO(VCHIQ_IOC_MAGIC,   9)
++#define VCHIQ_IOC_GET_CONFIG \
++	_IOWR(VCHIQ_IOC_MAGIC, 10, VCHIQ_GET_CONFIG_T)
++#define VCHIQ_IOC_CLOSE_SERVICE        _IO(VCHIQ_IOC_MAGIC,   11)
++#define VCHIQ_IOC_USE_SERVICE          _IO(VCHIQ_IOC_MAGIC,   12)
++#define VCHIQ_IOC_RELEASE_SERVICE      _IO(VCHIQ_IOC_MAGIC,   13)
++#define VCHIQ_IOC_SET_SERVICE_OPTION \
++	_IOW(VCHIQ_IOC_MAGIC,  14, VCHIQ_SET_SERVICE_OPTION_T)
++#define VCHIQ_IOC_DUMP_PHYS_MEM \
++	_IOW(VCHIQ_IOC_MAGIC,  15, VCHIQ_DUMP_MEM_T)
++#define VCHIQ_IOC_LIB_VERSION          _IO(VCHIQ_IOC_MAGIC,   16)
++#define VCHIQ_IOC_CLOSE_DELIVERED      _IO(VCHIQ_IOC_MAGIC,   17)
++#define VCHIQ_IOC_MAX                  17
++
++#endif
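++
++/*
++ * Illustrative sketch (not part of the driver): user space normally
++ * reaches these ioctls through the vchiq userland library, but a raw
++ * caller would look roughly as below. The /dev/vchiq node name and the
++ * "DEMO" fourcc value are assumptions made only for this example.
++ *
++ *	int fd = open("/dev/vchiq", O_RDWR);
++ *	VCHIQ_CREATE_SERVICE_T args;
++ *
++ *	memset(&args, 0, sizeof(args));
++ *	args.params.fourcc = ('D' << 24) | ('E' << 16) | ('M' << 8) | 'O';
++ *	args.is_open = 1;
++ *	args.is_vchi = 0;
++ *
++ *	if (fd >= 0 &&
++ *	    ioctl(fd, VCHIQ_IOC_CONNECT, 0) == 0 &&
++ *	    ioctl(fd, VCHIQ_IOC_CREATE_SERVICE, &args) == 0)
++ *		printf("service handle %u\n", args.handle);
++ */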
+--- /dev/null
++++ b/drivers/misc/vc04_services/interface/vchiq_arm/vchiq_kern_lib.c
+@@ -0,0 +1,458 @@
++/**
++ * Copyright (c) 2010-2012 Broadcom. All rights reserved.
++ *
++ * Redistribution and use in source and binary forms, with or without
++ * modification, are permitted provided that the following conditions
++ * are met:
++ * 1. Redistributions of source code must retain the above copyright
++ *    notice, this list of conditions, and the following disclaimer,
++ *    without modification.
++ * 2. Redistributions in binary form must reproduce the above copyright
++ *    notice, this list of conditions and the following disclaimer in the
++ *    documentation and/or other materials provided with the distribution.
++ * 3. The names of the above-listed copyright holders may not be used
++ *    to endorse or promote products derived from this software without
++ *    specific prior written permission.
++ *
++ * ALTERNATIVELY, this software may be distributed under the terms of the
++ * GNU General Public License ("GPL") version 2, as published by the Free
++ * Software Foundation.
++ *
++ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
++ * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
++ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
++ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
++ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
++ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
++ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
++ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
++ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
++ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
++ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
++ */
++
++/* ---- Include Files ---------------------------------------------------- */
++
++#include <linux/kernel.h>
++#include <linux/module.h>
++#include <linux/mutex.h>
++
++#include "vchiq_core.h"
++#include "vchiq_arm.h"
++#include "vchiq_killable.h"
++
++/* ---- Public Variables ------------------------------------------------- */
++
++/* ---- Private Constants and Types -------------------------------------- */
++
++struct bulk_waiter_node {
++	struct bulk_waiter bulk_waiter;
++	int pid;
++	struct list_head list;
++};
++
++struct vchiq_instance_struct {
++	VCHIQ_STATE_T *state;
++
++	int connected;
++
++	struct list_head bulk_waiter_list;
++	struct mutex bulk_waiter_list_mutex;
++};
++
++static VCHIQ_STATUS_T
++vchiq_blocking_bulk_transfer(VCHIQ_SERVICE_HANDLE_T handle, void *data,
++	unsigned int size, VCHIQ_BULK_DIR_T dir);
++
++/****************************************************************************
++*
++*   vchiq_initialise
++*
++***************************************************************************/
++#define VCHIQ_INIT_RETRIES 10
++VCHIQ_STATUS_T vchiq_initialise(VCHIQ_INSTANCE_T *instanceOut)
++{
++	VCHIQ_STATUS_T status = VCHIQ_ERROR;
++	VCHIQ_STATE_T *state;
++	VCHIQ_INSTANCE_T instance = NULL;
++	int i;
++
++	vchiq_log_trace(vchiq_core_log_level, "%s called", __func__);
++
++	/* VideoCore may not be ready due to boot up timing.
++	   It may never be ready if kernel and firmware are mismatched,
++	   so don't block forever. */
++	for (i = 0; i < VCHIQ_INIT_RETRIES; i++) {
++		state = vchiq_get_state();
++		if (state)
++			break;
++		udelay(500);
++	}
++	if (i == VCHIQ_INIT_RETRIES) {
++		vchiq_log_error(vchiq_core_log_level,
++			"%s: videocore not initialized\n", __func__);
++		goto failed;
++	} else if (i > 0) {
++		vchiq_log_warning(vchiq_core_log_level,
++			"%s: videocore initialized after %d retries\n", __func__, i);
++	}
++
++	instance = kzalloc(sizeof(*instance), GFP_KERNEL);
++	if (!instance) {
++		vchiq_log_error(vchiq_core_log_level,
++			"%s: error allocating vchiq instance\n", __func__);
++		goto failed;
++	}
++
++	instance->connected = 0;
++	instance->state = state;
++	mutex_init(&instance->bulk_waiter_list_mutex);
++	INIT_LIST_HEAD(&instance->bulk_waiter_list);
++
++	*instanceOut = instance;
++
++	status = VCHIQ_SUCCESS;
++
++failed:
++	vchiq_log_trace(vchiq_core_log_level,
++		"%s(%p): returning %d", __func__, instance, status);
++
++	return status;
++}
++EXPORT_SYMBOL(vchiq_initialise);
++
++/****************************************************************************
++*
++*   vchiq_shutdown
++*
++***************************************************************************/
++
++VCHIQ_STATUS_T vchiq_shutdown(VCHIQ_INSTANCE_T instance)
++{
++	VCHIQ_STATUS_T status;
++	VCHIQ_STATE_T *state = instance->state;
++
++	vchiq_log_trace(vchiq_core_log_level,
++		"%s(%p) called", __func__, instance);
++
++	if (mutex_lock_interruptible(&state->mutex) != 0)
++		return VCHIQ_RETRY;
++
++	/* Remove all services */
++	status = vchiq_shutdown_internal(state, instance);
++
++	mutex_unlock(&state->mutex);
++
++	vchiq_log_trace(vchiq_core_log_level,
++		"%s(%p): returning %d", __func__, instance, status);
++
++	if (status == VCHIQ_SUCCESS) {
++		struct list_head *pos, *next;
++		list_for_each_safe(pos, next,
++				&instance->bulk_waiter_list) {
++			struct bulk_waiter_node *waiter;
++			waiter = list_entry(pos,
++					struct bulk_waiter_node,
++					list);
++			list_del(pos);
++			vchiq_log_info(vchiq_arm_log_level,
++					"bulk_waiter - cleaned up %x "
++					"for pid %d",
++					(unsigned int)waiter, waiter->pid);
++			kfree(waiter);
++		}
++		kfree(instance);
++	}
++
++	return status;
++}
++EXPORT_SYMBOL(vchiq_shutdown);
++
++/****************************************************************************
++*
++*   vchiq_is_connected
++*
++***************************************************************************/
++
++int vchiq_is_connected(VCHIQ_INSTANCE_T instance)
++{
++	return instance->connected;
++}
++
++/****************************************************************************
++*
++*   vchiq_connect
++*
++***************************************************************************/
++
++VCHIQ_STATUS_T vchiq_connect(VCHIQ_INSTANCE_T instance)
++{
++	VCHIQ_STATUS_T status;
++	VCHIQ_STATE_T *state = instance->state;
++
++	vchiq_log_trace(vchiq_core_log_level,
++		"%s(%p) called", __func__, instance);
++
++	if (mutex_lock_interruptible(&state->mutex) != 0) {
++		vchiq_log_trace(vchiq_core_log_level,
++			"%s: call to mutex_lock failed", __func__);
++		status = VCHIQ_RETRY;
++		goto failed;
++	}
++	status = vchiq_connect_internal(state, instance);
++
++	if (status == VCHIQ_SUCCESS)
++		instance->connected = 1;
++
++	mutex_unlock(&state->mutex);
++
++failed:
++	vchiq_log_trace(vchiq_core_log_level,
++		"%s(%p): returning %d", __func__, instance, status);
++
++	return status;
++}
++EXPORT_SYMBOL(vchiq_connect);
++
++/****************************************************************************
++*
++*   vchiq_add_service
++*
++***************************************************************************/
++
++VCHIQ_STATUS_T vchiq_add_service(
++	VCHIQ_INSTANCE_T              instance,
++	const VCHIQ_SERVICE_PARAMS_T *params,
++	VCHIQ_SERVICE_HANDLE_T       *phandle)
++{
++	VCHIQ_STATUS_T status;
++	VCHIQ_STATE_T *state = instance->state;
++	VCHIQ_SERVICE_T *service = NULL;
++	int srvstate;
++
++	vchiq_log_trace(vchiq_core_log_level,
++		"%s(%p) called", __func__, instance);
++
++	*phandle = VCHIQ_SERVICE_HANDLE_INVALID;
++
++	srvstate = vchiq_is_connected(instance)
++		? VCHIQ_SRVSTATE_LISTENING
++		: VCHIQ_SRVSTATE_HIDDEN;
++
++	service = vchiq_add_service_internal(
++		state,
++		params,
++		srvstate,
++		instance,
++		NULL);
++
++	if (service) {
++		*phandle = service->handle;
++		status = VCHIQ_SUCCESS;
++	} else
++		status = VCHIQ_ERROR;
++
++	vchiq_log_trace(vchiq_core_log_level,
++		"%s(%p): returning %d", __func__, instance, status);
++
++	return status;
++}
++EXPORT_SYMBOL(vchiq_add_service);
++
++/****************************************************************************
++*
++*   vchiq_open_service
++*
++***************************************************************************/
++
++VCHIQ_STATUS_T vchiq_open_service(
++	VCHIQ_INSTANCE_T              instance,
++	const VCHIQ_SERVICE_PARAMS_T *params,
++	VCHIQ_SERVICE_HANDLE_T       *phandle)
++{
++	VCHIQ_STATUS_T   status = VCHIQ_ERROR;
++	VCHIQ_STATE_T   *state = instance->state;
++	VCHIQ_SERVICE_T *service = NULL;
++
++	vchiq_log_trace(vchiq_core_log_level,
++		"%s(%p) called", __func__, instance);
++
++	*phandle = VCHIQ_SERVICE_HANDLE_INVALID;
++
++	if (!vchiq_is_connected(instance))
++		goto failed;
++
++	service = vchiq_add_service_internal(state,
++		params,
++		VCHIQ_SRVSTATE_OPENING,
++		instance,
++		NULL);
++
++	if (service) {
++		*phandle = service->handle;
++		status = vchiq_open_service_internal(service, current->pid);
++		if (status != VCHIQ_SUCCESS) {
++			vchiq_remove_service(service->handle);
++			*phandle = VCHIQ_SERVICE_HANDLE_INVALID;
++		}
++	}
++
++failed:
++	vchiq_log_trace(vchiq_core_log_level,
++		"%s(%p): returning %d", __func__, instance, status);
++
++	return status;
++}
++EXPORT_SYMBOL(vchiq_open_service);
++
++VCHIQ_STATUS_T
++vchiq_queue_bulk_transmit(VCHIQ_SERVICE_HANDLE_T handle,
++	const void *data, unsigned int size, void *userdata)
++{
++	return vchiq_bulk_transfer(handle,
++		VCHI_MEM_HANDLE_INVALID, (void *)data, size, userdata,
++		VCHIQ_BULK_MODE_CALLBACK, VCHIQ_BULK_TRANSMIT);
++}
++EXPORT_SYMBOL(vchiq_queue_bulk_transmit);
++
++VCHIQ_STATUS_T
++vchiq_queue_bulk_receive(VCHIQ_SERVICE_HANDLE_T handle, void *data,
++	unsigned int size, void *userdata)
++{
++	return vchiq_bulk_transfer(handle,
++		VCHI_MEM_HANDLE_INVALID, data, size, userdata,
++		VCHIQ_BULK_MODE_CALLBACK, VCHIQ_BULK_RECEIVE);
++}
++EXPORT_SYMBOL(vchiq_queue_bulk_receive);
++
++VCHIQ_STATUS_T
++vchiq_bulk_transmit(VCHIQ_SERVICE_HANDLE_T handle, const void *data,
++	unsigned int size, void *userdata, VCHIQ_BULK_MODE_T mode)
++{
++	VCHIQ_STATUS_T status;
++
++	switch (mode) {
++	case VCHIQ_BULK_MODE_NOCALLBACK:
++	case VCHIQ_BULK_MODE_CALLBACK:
++		status = vchiq_bulk_transfer(handle,
++			VCHI_MEM_HANDLE_INVALID, (void *)data, size, userdata,
++			mode, VCHIQ_BULK_TRANSMIT);
++		break;
++	case VCHIQ_BULK_MODE_BLOCKING:
++		status = vchiq_blocking_bulk_transfer(handle,
++			(void *)data, size, VCHIQ_BULK_TRANSMIT);
++		break;
++	default:
++		return VCHIQ_ERROR;
++	}
++
++	return status;
++}
++EXPORT_SYMBOL(vchiq_bulk_transmit);
++
++VCHIQ_STATUS_T
++vchiq_bulk_receive(VCHIQ_SERVICE_HANDLE_T handle, void *data,
++	unsigned int size, void *userdata, VCHIQ_BULK_MODE_T mode)
++{
++	VCHIQ_STATUS_T status;
++
++	switch (mode) {
++	case VCHIQ_BULK_MODE_NOCALLBACK:
++	case VCHIQ_BULK_MODE_CALLBACK:
++		status = vchiq_bulk_transfer(handle,
++			VCHI_MEM_HANDLE_INVALID, data, size, userdata,
++			mode, VCHIQ_BULK_RECEIVE);
++		break;
++	case VCHIQ_BULK_MODE_BLOCKING:
++		status = vchiq_blocking_bulk_transfer(handle,
++			(void *)data, size, VCHIQ_BULK_RECEIVE);
++		break;
++	default:
++		return VCHIQ_ERROR;
++	}
++
++	return status;
++}
++EXPORT_SYMBOL(vchiq_bulk_receive);
++
++static VCHIQ_STATUS_T
++vchiq_blocking_bulk_transfer(VCHIQ_SERVICE_HANDLE_T handle, void *data,
++	unsigned int size, VCHIQ_BULK_DIR_T dir)
++{
++	VCHIQ_INSTANCE_T instance;
++	VCHIQ_SERVICE_T *service;
++	VCHIQ_STATUS_T status;
++	struct bulk_waiter_node *waiter = NULL;
++	struct list_head *pos;
++
++	service = find_service_by_handle(handle);
++	if (!service)
++		return VCHIQ_ERROR;
++
++	instance = service->instance;
++
++	unlock_service(service);
++
++	mutex_lock(&instance->bulk_waiter_list_mutex);
++	list_for_each(pos, &instance->bulk_waiter_list) {
++		if (list_entry(pos, struct bulk_waiter_node,
++				list)->pid == current->pid) {
++			waiter = list_entry(pos,
++				struct bulk_waiter_node,
++				list);
++			list_del(pos);
++			break;
++		}
++	}
++	mutex_unlock(&instance->bulk_waiter_list_mutex);
++
++	if (waiter) {
++		VCHIQ_BULK_T *bulk = waiter->bulk_waiter.bulk;
++		if (bulk) {
++			/* This thread has an outstanding bulk transfer. */
++			if ((bulk->data != data) ||
++				(bulk->size != size)) {
++				/* This is not a retry of the previous one.
++				** Cancel the signal when the transfer
++				** completes. */
++				spin_lock(&bulk_waiter_spinlock);
++				bulk->userdata = NULL;
++				spin_unlock(&bulk_waiter_spinlock);
++			}
++		}
++	}
++
++	if (!waiter) {
++		waiter = kzalloc(sizeof(struct bulk_waiter_node), GFP_KERNEL);
++		if (!waiter) {
++			vchiq_log_error(vchiq_core_log_level,
++				"%s - out of memory", __func__);
++			return VCHIQ_ERROR;
++		}
++	}
++
++	status = vchiq_bulk_transfer(handle, VCHI_MEM_HANDLE_INVALID,
++		data, size, &waiter->bulk_waiter, VCHIQ_BULK_MODE_BLOCKING,
++		dir);
++	if ((status != VCHIQ_RETRY) || fatal_signal_pending(current) ||
++		!waiter->bulk_waiter.bulk) {
++		VCHIQ_BULK_T *bulk = waiter->bulk_waiter.bulk;
++		if (bulk) {
++			/* Cancel the signal when the transfer
++			 ** completes. */
++			spin_lock(&bulk_waiter_spinlock);
++			bulk->userdata = NULL;
++			spin_unlock(&bulk_waiter_spinlock);
++		}
++		kfree(waiter);
++	} else {
++		waiter->pid = current->pid;
++		mutex_lock(&instance->bulk_waiter_list_mutex);
++		list_add(&waiter->list, &instance->bulk_waiter_list);
++		mutex_unlock(&instance->bulk_waiter_list_mutex);
++		vchiq_log_info(vchiq_arm_log_level,
++				"saved bulk_waiter %x for pid %d",
++				(unsigned int)waiter, current->pid);
++	}
++
++	return status;
++}
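++
++/*
++ * Note on the blocking path above (added commentary, not original to the
++ * driver): when vchiq_bulk_transfer() returns VCHIQ_RETRY the waiter is
++ * parked on instance->bulk_waiter_list keyed by the caller's pid, so a
++ * retried call from the same thread with the same data and size can pick
++ * the waiter up again and resume waiting on the same transfer; a retry
++ * with different parameters instead clears bulk->userdata so the stale
++ * completion cannot signal the new waiter.
++ */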
+--- /dev/null
++++ b/drivers/misc/vc04_services/interface/vchiq_arm/vchiq_killable.h
+@@ -0,0 +1,69 @@
++/**
++ * Copyright (c) 2010-2012 Broadcom. All rights reserved.
++ *
++ * Redistribution and use in source and binary forms, with or without
++ * modification, are permitted provided that the following conditions
++ * are met:
++ * 1. Redistributions of source code must retain the above copyright
++ *    notice, this list of conditions, and the following disclaimer,
++ *    without modification.
++ * 2. Redistributions in binary form must reproduce the above copyright
++ *    notice, this list of conditions and the following disclaimer in the
++ *    documentation and/or other materials provided with the distribution.
++ * 3. The names of the above-listed copyright holders may not be used
++ *    to endorse or promote products derived from this software without
++ *    specific prior written permission.
++ *
++ * ALTERNATIVELY, this software may be distributed under the terms of the
++ * GNU General Public License ("GPL") version 2, as published by the Free
++ * Software Foundation.
++ *
++ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
++ * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
++ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
++ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
++ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
++ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
++ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
++ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
++ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
++ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
++ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
++ */
++
++#ifndef VCHIQ_KILLABLE_H
++#define VCHIQ_KILLABLE_H
++
++#include <linux/mutex.h>
++#include <linux/semaphore.h>
++
++#define SHUTDOWN_SIGS   (sigmask(SIGKILL) | sigmask(SIGINT) | sigmask(SIGQUIT) | sigmask(SIGTRAP) | sigmask(SIGSTOP) | sigmask(SIGCONT))
++
++static inline int __must_check down_interruptible_killable(struct semaphore *sem)
++{
++	/* Allow interception of killable signals only. We don't want to be
++	   interrupted by harmless signals like SIGALRM */
++	int ret;
++	sigset_t blocked, oldset;
++	siginitsetinv(&blocked, SHUTDOWN_SIGS);
++	sigprocmask(SIG_SETMASK, &blocked, &oldset);
++	ret = down_interruptible(sem);
++	sigprocmask(SIG_SETMASK, &oldset, NULL);
++	return ret;
++}
++#define down_interruptible down_interruptible_killable
++
++
++static inline int __must_check mutex_lock_interruptible_killable(struct mutex *lock)
++{
++	/* Allow interception of killable signals only. We don't want to be
++	   interrupted by harmless signals like SIGALRM */
++	int ret;
++	sigset_t blocked, oldset;
++	siginitsetinv(&blocked, SHUTDOWN_SIGS);
++	sigprocmask(SIG_SETMASK, &blocked, &oldset);
++	ret = mutex_lock_interruptible(lock);
++	sigprocmask(SIG_SETMASK, &oldset, NULL);
++	return ret;
++}
++#define mutex_lock_interruptible mutex_lock_interruptible_killable
++
++#endif
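++
++/*
++ * Illustrative note (not part of the driver): files that include this
++ * header after <linux/semaphore.h> and <linux/mutex.h> transparently get
++ * the killable variants, so an ordinary-looking wait such as the one in
++ * vchiq_util.c,
++ *
++ *	if (down_interruptible(&queue->push) != 0)
++ *		flush_signals(current);
++ *
++ * is only woken early by the signals in SHUTDOWN_SIGS rather than by any
++ * pending signal (e.g. SIGALRM).
++ */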
+--- /dev/null
++++ b/drivers/misc/vc04_services/interface/vchiq_arm/vchiq_memdrv.h
+@@ -0,0 +1,71 @@
++/**
++ * Copyright (c) 2010-2012 Broadcom. All rights reserved.
++ *
++ * Redistribution and use in source and binary forms, with or without
++ * modification, are permitted provided that the following conditions
++ * are met:
++ * 1. Redistributions of source code must retain the above copyright
++ *    notice, this list of conditions, and the following disclaimer,
++ *    without modification.
++ * 2. Redistributions in binary form must reproduce the above copyright
++ *    notice, this list of conditions and the following disclaimer in the
++ *    documentation and/or other materials provided with the distribution.
++ * 3. The names of the above-listed copyright holders may not be used
++ *    to endorse or promote products derived from this software without
++ *    specific prior written permission.
++ *
++ * ALTERNATIVELY, this software may be distributed under the terms of the
++ * GNU General Public License ("GPL") version 2, as published by the Free
++ * Software Foundation.
++ *
++ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
++ * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
++ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
++ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
++ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
++ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
++ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
++ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
++ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
++ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
++ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
++ */
++
++#ifndef VCHIQ_MEMDRV_H
++#define VCHIQ_MEMDRV_H
++
++/* ---- Include Files ----------------------------------------------------- */
++
++#include <linux/kernel.h>
++#include "vchiq_if.h"
++
++/* ---- Constants and Types ---------------------------------------------- */
++
++typedef struct {
++	 void                   *armSharedMemVirt;
++	 dma_addr_t              armSharedMemPhys;
++	 size_t                  armSharedMemSize;
++
++	 void                   *vcSharedMemVirt;
++	 dma_addr_t              vcSharedMemPhys;
++	 size_t                  vcSharedMemSize;
++} VCHIQ_SHARED_MEM_INFO_T;
++
++/* ---- Variable Externs ------------------------------------------------- */
++
++/* ---- Function Prototypes ---------------------------------------------- */
++
++void vchiq_get_shared_mem_info(VCHIQ_SHARED_MEM_INFO_T *info);
++
++VCHIQ_STATUS_T vchiq_memdrv_initialise(void);
++
++VCHIQ_STATUS_T vchiq_userdrv_create_instance(
++	const VCHIQ_PLATFORM_DATA_T * platform_data);
++
++VCHIQ_STATUS_T vchiq_userdrv_suspend(
++	const VCHIQ_PLATFORM_DATA_T * platform_data);
++
++VCHIQ_STATUS_T vchiq_userdrv_resume(
++	const VCHIQ_PLATFORM_DATA_T * platform_data);
++
++#endif
+--- /dev/null
++++ b/drivers/misc/vc04_services/interface/vchiq_arm/vchiq_pagelist.h
+@@ -0,0 +1,58 @@
++/**
++ * Copyright (c) 2010-2012 Broadcom. All rights reserved.
++ *
++ * Redistribution and use in source and binary forms, with or without
++ * modification, are permitted provided that the following conditions
++ * are met:
++ * 1. Redistributions of source code must retain the above copyright
++ *    notice, this list of conditions, and the following disclaimer,
++ *    without modification.
++ * 2. Redistributions in binary form must reproduce the above copyright
++ *    notice, this list of conditions and the following disclaimer in the
++ *    documentation and/or other materials provided with the distribution.
++ * 3. The names of the above-listed copyright holders may not be used
++ *    to endorse or promote products derived from this software without
++ *    specific prior written permission.
++ *
++ * ALTERNATIVELY, this software may be distributed under the terms of the
++ * GNU General Public License ("GPL") version 2, as published by the Free
++ * Software Foundation.
++ *
++ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
++ * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
++ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
++ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
++ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
++ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
++ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
++ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
++ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
++ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
++ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
++ */
++
++#ifndef VCHIQ_PAGELIST_H
++#define VCHIQ_PAGELIST_H
++
++#ifndef PAGE_SIZE
++#define PAGE_SIZE 4096
++#endif
++#define CACHE_LINE_SIZE 32
++#define PAGELIST_WRITE 0
++#define PAGELIST_READ 1
++#define PAGELIST_READ_WITH_FRAGMENTS 2
++
++typedef struct pagelist_struct {
++	unsigned long length;
++	unsigned short type;
++	unsigned short offset;
++	unsigned long addrs[1];	/* N.B. 12 LSBs hold the number of following
++				   pages at consecutive addresses. */
++} PAGELIST_T;
++
++typedef struct fragments_struct {
++	char headbuf[CACHE_LINE_SIZE];
++	char tailbuf[CACHE_LINE_SIZE];
++} FRAGMENTS_T;
++
++#endif /* VCHIQ_PAGELIST_H */
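++
++/*
++ * Illustrative note (not part of the driver): with 4096-byte pages the
++ * low 12 bits of a page-aligned bus address are always zero, so each
++ * addrs[] entry reuses them as a run length. For example
++ *
++ *	0x12345003  =  page base 0x12345000, run length 3
++ *
++ * describes the page at 0x12345000 plus the 3 physically consecutive
++ * pages that follow it (0x12346000, 0x12347000, 0x12348000), letting a
++ * single entry cover a four-page run.
++ */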
+--- /dev/null
++++ b/drivers/misc/vc04_services/interface/vchiq_arm/vchiq_shim.c
+@@ -0,0 +1,860 @@
++/**
++ * Copyright (c) 2010-2012 Broadcom. All rights reserved.
++ *
++ * Redistribution and use in source and binary forms, with or without
++ * modification, are permitted provided that the following conditions
++ * are met:
++ * 1. Redistributions of source code must retain the above copyright
++ *    notice, this list of conditions, and the following disclaimer,
++ *    without modification.
++ * 2. Redistributions in binary form must reproduce the above copyright
++ *    notice, this list of conditions and the following disclaimer in the
++ *    documentation and/or other materials provided with the distribution.
++ * 3. The names of the above-listed copyright holders may not be used
++ *    to endorse or promote products derived from this software without
++ *    specific prior written permission.
++ *
++ * ALTERNATIVELY, this software may be distributed under the terms of the
++ * GNU General Public License ("GPL") version 2, as published by the Free
++ * Software Foundation.
++ *
++ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
++ * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
++ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
++ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
++ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
++ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
++ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
++ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
++ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
++ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
++ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
++ */
++#include <linux/module.h>
++#include <linux/types.h>
++
++#include "interface/vchi/vchi.h"
++#include "vchiq.h"
++#include "vchiq_core.h"
++
++#include "vchiq_util.h"
++
++#include <stddef.h>
++
++#define vchiq_status_to_vchi(status) ((int32_t)status)
++
++typedef struct {
++	VCHIQ_SERVICE_HANDLE_T handle;
++
++	VCHIU_QUEUE_T queue;
++
++	VCHI_CALLBACK_T callback;
++	void *callback_param;
++} SHIM_SERVICE_T;
++
++/* ----------------------------------------------------------------------
++ * return pointer to the mphi message driver function table
++ * -------------------------------------------------------------------- */
++const VCHI_MESSAGE_DRIVER_T *
++vchi_mphi_message_driver_func_table(void)
++{
++	return NULL;
++}
++
++/* ----------------------------------------------------------------------
++ * return a pointer to the 'single' connection driver fops
++ * -------------------------------------------------------------------- */
++const VCHI_CONNECTION_API_T *
++single_get_func_table(void)
++{
++	return NULL;
++}
++
++VCHI_CONNECTION_T *vchi_create_connection(
++	const VCHI_CONNECTION_API_T *function_table,
++	const VCHI_MESSAGE_DRIVER_T *low_level)
++{
++	(void)function_table;
++	(void)low_level;
++	return NULL;
++}
++
++/***********************************************************
++ * Name: vchi_msg_peek
++ *
++ * Arguments:  const VCHI_SERVICE_HANDLE_T handle,
++ *             void **data,
++ *             uint32_t *msg_size,
++ *             VCHI_FLAGS_T flags
++ *
++ * Description: Routine to return a pointer to the current message (to allow in
++ *              place processing). The message can be removed using
++ *              vchi_msg_remove when you're finished
++ *
++ * Returns: int32_t - success == 0
++ *
++ ***********************************************************/
++int32_t vchi_msg_peek(VCHI_SERVICE_HANDLE_T handle,
++	void **data,
++	uint32_t *msg_size,
++	VCHI_FLAGS_T flags)
++{
++	SHIM_SERVICE_T *service = (SHIM_SERVICE_T *)handle;
++	VCHIQ_HEADER_T *header;
++
++	WARN_ON((flags != VCHI_FLAGS_NONE) &&
++		(flags != VCHI_FLAGS_BLOCK_UNTIL_OP_COMPLETE));
++
++	if (flags == VCHI_FLAGS_NONE)
++		if (vchiu_queue_is_empty(&service->queue))
++			return -1;
++
++	header = vchiu_queue_peek(&service->queue);
++
++	*data = header->data;
++	*msg_size = header->size;
++
++	return 0;
++}
++EXPORT_SYMBOL(vchi_msg_peek);
++
++/***********************************************************
++ * Name: vchi_msg_remove
++ *
++ * Arguments:  const VCHI_SERVICE_HANDLE_T handle,
++ *
++ * Description: Routine to remove a message (after it has been read with
++ *              vchi_msg_peek)
++ *
++ * Returns: int32_t - success == 0
++ *
++ ***********************************************************/
++int32_t vchi_msg_remove(VCHI_SERVICE_HANDLE_T handle)
++{
++	SHIM_SERVICE_T *service = (SHIM_SERVICE_T *)handle;
++	VCHIQ_HEADER_T *header;
++
++	header = vchiu_queue_pop(&service->queue);
++
++	vchiq_release_message(service->handle, header);
++
++	return 0;
++}
++EXPORT_SYMBOL(vchi_msg_remove);
++
++/***********************************************************
++ * Name: vchi_msg_queue
++ *
++ * Arguments:  VCHI_SERVICE_HANDLE_T handle,
++ *             const void *data,
++ *             uint32_t data_size,
++ *             VCHI_FLAGS_T flags,
++ *             void *msg_handle,
++ *
++ * Description: Thin wrapper to queue a message onto a connection
++ *
++ * Returns: int32_t - success == 0
++ *
++ ***********************************************************/
++int32_t vchi_msg_queue(VCHI_SERVICE_HANDLE_T handle,
++	const void *data,
++	uint32_t data_size,
++	VCHI_FLAGS_T flags,
++	void *msg_handle)
++{
++	SHIM_SERVICE_T *service = (SHIM_SERVICE_T *)handle;
++	VCHIQ_ELEMENT_T element = {data, data_size};
++	VCHIQ_STATUS_T status;
++
++	(void)msg_handle;
++
++	WARN_ON(flags != VCHI_FLAGS_BLOCK_UNTIL_QUEUED);
++
++	status = vchiq_queue_message(service->handle, &element, 1);
++
++	/* vchiq_queue_message() may return VCHIQ_RETRY, so we need to
++	** implement a retry mechanism since this function is supposed
++	** to block until queued
++	*/
++	while (status == VCHIQ_RETRY) {
++		msleep(1);
++		status = vchiq_queue_message(service->handle, &element, 1);
++	}
++
++	return vchiq_status_to_vchi(status);
++}
++EXPORT_SYMBOL(vchi_msg_queue);
++
++/***********************************************************
++ * Name: vchi_bulk_queue_receive
++ *
++ * Arguments:  VCHI_BULK_HANDLE_T handle,
++ *             void *data_dst,
++ *             const uint32_t data_size,
++ *             VCHI_FLAGS_T flags
++ *             void *bulk_handle
++ *
++ * Description: Routine to set up a receive buffer
++ *
++ * Returns: int32_t - success == 0
++ *
++ ***********************************************************/
++int32_t vchi_bulk_queue_receive(VCHI_SERVICE_HANDLE_T handle,
++	void *data_dst,
++	uint32_t data_size,
++	VCHI_FLAGS_T flags,
++	void *bulk_handle)
++{
++	SHIM_SERVICE_T *service = (SHIM_SERVICE_T *)handle;
++	VCHIQ_BULK_MODE_T mode;
++	VCHIQ_STATUS_T status;
++
++	switch ((int)flags) {
++	case VCHI_FLAGS_CALLBACK_WHEN_OP_COMPLETE
++		| VCHI_FLAGS_BLOCK_UNTIL_QUEUED:
++		WARN_ON(!service->callback);
++		mode = VCHIQ_BULK_MODE_CALLBACK;
++		break;
++	case VCHI_FLAGS_BLOCK_UNTIL_OP_COMPLETE:
++		mode = VCHIQ_BULK_MODE_BLOCKING;
++		break;
++	case VCHI_FLAGS_BLOCK_UNTIL_QUEUED:
++	case VCHI_FLAGS_NONE:
++		mode = VCHIQ_BULK_MODE_NOCALLBACK;
++		break;
++	default:
++		WARN(1, "unsupported message\n");
++		return vchiq_status_to_vchi(VCHIQ_ERROR);
++	}
++
++	status = vchiq_bulk_receive(service->handle, data_dst, data_size,
++		bulk_handle, mode);
++
++	/* vchiq_bulk_receive() may return VCHIQ_RETRY, so we need to
++	** implement a retry mechanism since this function is supposed
++	** to block until queued
++	*/
++	while (status == VCHIQ_RETRY) {
++		msleep(1);
++		status = vchiq_bulk_receive(service->handle, data_dst,
++			data_size, bulk_handle, mode);
++	}
++
++	return vchiq_status_to_vchi(status);
++}
++EXPORT_SYMBOL(vchi_bulk_queue_receive);
++
++/***********************************************************
++ * Name: vchi_bulk_queue_transmit
++ *
++ * Arguments:  VCHI_BULK_HANDLE_T handle,
++ *             const void *data_src,
++ *             uint32_t data_size,
++ *             VCHI_FLAGS_T flags,
++ *             void *bulk_handle
++ *
++ * Description: Routine to transmit some data
++ *
++ * Returns: int32_t - success == 0
++ *
++ ***********************************************************/
++int32_t vchi_bulk_queue_transmit(VCHI_SERVICE_HANDLE_T handle,
++	const void *data_src,
++	uint32_t data_size,
++	VCHI_FLAGS_T flags,
++	void *bulk_handle)
++{
++	SHIM_SERVICE_T *service = (SHIM_SERVICE_T *)handle;
++	VCHIQ_BULK_MODE_T mode;
++	VCHIQ_STATUS_T status;
++
++	switch ((int)flags) {
++	case VCHI_FLAGS_CALLBACK_WHEN_OP_COMPLETE
++		| VCHI_FLAGS_BLOCK_UNTIL_QUEUED:
++		WARN_ON(!service->callback);
++		mode = VCHIQ_BULK_MODE_CALLBACK;
++		break;
++	case VCHI_FLAGS_BLOCK_UNTIL_DATA_READ:
++	case VCHI_FLAGS_BLOCK_UNTIL_OP_COMPLETE:
++		mode = VCHIQ_BULK_MODE_BLOCKING;
++		break;
++	case VCHI_FLAGS_BLOCK_UNTIL_QUEUED:
++	case VCHI_FLAGS_NONE:
++		mode = VCHIQ_BULK_MODE_NOCALLBACK;
++		break;
++	default:
++		WARN(1, "unsupported message\n");
++		return vchiq_status_to_vchi(VCHIQ_ERROR);
++	}
++
++	status = vchiq_bulk_transmit(service->handle, data_src, data_size,
++		bulk_handle, mode);
++
++	/* vchiq_bulk_transmit() may return VCHIQ_RETRY, so we need to
++	** implement a retry mechanism since this function is supposed
++	** to block until queued
++	*/
++	while (status == VCHIQ_RETRY) {
++		msleep(1);
++		status = vchiq_bulk_transmit(service->handle, data_src,
++			data_size, bulk_handle, mode);
++	}
++
++	return vchiq_status_to_vchi(status);
++}
++EXPORT_SYMBOL(vchi_bulk_queue_transmit);
++
++/***********************************************************
++ * Name: vchi_msg_dequeue
++ *
++ * Arguments:  VCHI_SERVICE_HANDLE_T handle,
++ *             void *data,
++ *             uint32_t max_data_size_to_read,
++ *             uint32_t *actual_msg_size
++ *             VCHI_FLAGS_T flags
++ *
++ * Description: Routine to dequeue a message into the supplied buffer
++ *
++ * Returns: int32_t - success == 0
++ *
++ ***********************************************************/
++int32_t vchi_msg_dequeue(VCHI_SERVICE_HANDLE_T handle,
++	void *data,
++	uint32_t max_data_size_to_read,
++	uint32_t *actual_msg_size,
++	VCHI_FLAGS_T flags)
++{
++	SHIM_SERVICE_T *service = (SHIM_SERVICE_T *)handle;
++	VCHIQ_HEADER_T *header;
++
++	WARN_ON((flags != VCHI_FLAGS_NONE) &&
++		(flags != VCHI_FLAGS_BLOCK_UNTIL_OP_COMPLETE));
++
++	if (flags == VCHI_FLAGS_NONE)
++		if (vchiu_queue_is_empty(&service->queue))
++			return -1;
++
++	header = vchiu_queue_pop(&service->queue);
++
++	memcpy(data, header->data, header->size < max_data_size_to_read ?
++		header->size : max_data_size_to_read);
++
++	*actual_msg_size = header->size;
++
++	vchiq_release_message(service->handle, header);
++
++	return 0;
++}
++EXPORT_SYMBOL(vchi_msg_dequeue);
++
++/***********************************************************
++ * Name: vchi_msg_queuev
++ *
++ * Arguments:  VCHI_SERVICE_HANDLE_T handle,
++ *             VCHI_MSG_VECTOR_T *vector,
++ *             uint32_t count,
++ *             VCHI_FLAGS_T flags,
++ *             void *msg_handle
++ *
++ * Description: Thin wrapper to queue a message onto a connection
++ *
++ * Returns: int32_t - success == 0
++ *
++ ***********************************************************/
++
++vchiq_static_assert(sizeof(VCHI_MSG_VECTOR_T) == sizeof(VCHIQ_ELEMENT_T));
++vchiq_static_assert(offsetof(VCHI_MSG_VECTOR_T, vec_base) ==
++	offsetof(VCHIQ_ELEMENT_T, data));
++vchiq_static_assert(offsetof(VCHI_MSG_VECTOR_T, vec_len) ==
++	offsetof(VCHIQ_ELEMENT_T, size));
++
++int32_t vchi_msg_queuev(VCHI_SERVICE_HANDLE_T handle,
++	VCHI_MSG_VECTOR_T *vector,
++	uint32_t count,
++	VCHI_FLAGS_T flags,
++	void *msg_handle)
++{
++	SHIM_SERVICE_T *service = (SHIM_SERVICE_T *)handle;
++
++	(void)msg_handle;
++
++	WARN_ON(flags != VCHI_FLAGS_BLOCK_UNTIL_QUEUED);
++
++	return vchiq_status_to_vchi(vchiq_queue_message(service->handle,
++		(const VCHIQ_ELEMENT_T *)vector, count));
++}
++EXPORT_SYMBOL(vchi_msg_queuev);
++
++/***********************************************************
++ * Name: vchi_held_msg_release
++ *
++ * Arguments:  VCHI_HELD_MSG_T *message
++ *
++ * Description: Routine to release a held message (after it has been read with
++ *              vchi_msg_hold)
++ *
++ * Returns: int32_t - success == 0
++ *
++ ***********************************************************/
++int32_t vchi_held_msg_release(VCHI_HELD_MSG_T *message)
++{
++	vchiq_release_message((VCHIQ_SERVICE_HANDLE_T)message->service,
++		(VCHIQ_HEADER_T *)message->message);
++
++	return 0;
++}
++EXPORT_SYMBOL(vchi_held_msg_release);
++
++/***********************************************************
++ * Name: vchi_msg_hold
++ *
++ * Arguments:  VCHI_SERVICE_HANDLE_T handle,
++ *             void **data,
++ *             uint32_t *msg_size,
++ *             VCHI_FLAGS_T flags,
++ *             VCHI_HELD_MSG_T *message_handle
++ *
++ * Description: Routine to return a pointer to the current message (to allow
++ *              in place processing). The message is dequeued - don't forget
++ *              to release the message using vchi_held_msg_release when you're
++ *              finished.
++ *
++ * Returns: int32_t - success == 0
++ *
++ ***********************************************************/
++int32_t vchi_msg_hold(VCHI_SERVICE_HANDLE_T handle,
++	void **data,
++	uint32_t *msg_size,
++	VCHI_FLAGS_T flags,
++	VCHI_HELD_MSG_T *message_handle)
++{
++	SHIM_SERVICE_T *service = (SHIM_SERVICE_T *)handle;
++	VCHIQ_HEADER_T *header;
++
++	WARN_ON((flags != VCHI_FLAGS_NONE) &&
++		(flags != VCHI_FLAGS_BLOCK_UNTIL_OP_COMPLETE));
++
++	if (flags == VCHI_FLAGS_NONE)
++		if (vchiu_queue_is_empty(&service->queue))
++			return -1;
++
++	header = vchiu_queue_pop(&service->queue);
++
++	*data = header->data;
++	*msg_size = header->size;
++
++	message_handle->service =
++		(struct opaque_vchi_service_t *)service->handle;
++	message_handle->message = header;
++
++	return 0;
++}
++EXPORT_SYMBOL(vchi_msg_hold);
++
++/***********************************************************
++ * Name: vchi_initialise
++ *
++ * Arguments: VCHI_INSTANCE_T *instance_handle
++ *
++ * Description: Initialises the hardware but does not transmit anything.
++ *              When run as a host app this will be called twice, hence the
++ *              need to malloc the state information.
++ *
++ * Returns: 0 if successful, failure otherwise
++ *
++ ***********************************************************/
++
++int32_t vchi_initialise(VCHI_INSTANCE_T *instance_handle)
++{
++	VCHIQ_INSTANCE_T instance;
++	VCHIQ_STATUS_T status;
++
++	status = vchiq_initialise(&instance);
++
++	*instance_handle = (VCHI_INSTANCE_T)instance;
++
++	return vchiq_status_to_vchi(status);
++}
++EXPORT_SYMBOL(vchi_initialise);
++
++/***********************************************************
++ * Name: vchi_connect
++ *
++ * Arguments: VCHI_CONNECTION_T **connections
++ *            const uint32_t num_connections
++ *            VCHI_INSTANCE_T instance_handle)
++ *
++ * Description: Starts the command service on each connection,
++ *              causing INIT messages to be pinged back and forth
++ *
++ * Returns: 0 if successful, failure otherwise
++ *
++ ***********************************************************/
++int32_t vchi_connect(VCHI_CONNECTION_T **connections,
++	const uint32_t num_connections,
++	VCHI_INSTANCE_T instance_handle)
++{
++	VCHIQ_INSTANCE_T instance = (VCHIQ_INSTANCE_T)instance_handle;
++
++	(void)connections;
++	(void)num_connections;
++
++	return vchiq_connect(instance);
++}
++EXPORT_SYMBOL(vchi_connect);
++
++
++/***********************************************************
++ * Name: vchi_disconnect
++ *
++ * Arguments: VCHI_INSTANCE_T instance_handle
++ *
++ * Description: Stops the command service on each connection,
++ *              causing DE-INIT messages to be pinged back and forth
++ *
++ * Returns: 0 if successful, failure otherwise
++ *
++ ***********************************************************/
++int32_t vchi_disconnect(VCHI_INSTANCE_T instance_handle)
++{
++	VCHIQ_INSTANCE_T instance = (VCHIQ_INSTANCE_T)instance_handle;
++	return vchiq_status_to_vchi(vchiq_shutdown(instance));
++}
++EXPORT_SYMBOL(vchi_disconnect);
++
++
++/***********************************************************
++ * Name: vchi_service_open
++ * Name: vchi_service_create
++ *
++ * Arguments: VCHI_INSTANCE_T *instance_handle
++ *            SERVICE_CREATION_T *setup,
++ *            VCHI_SERVICE_HANDLE_T *handle
++ *
++ * Description: Routine to open a service
++ *
++ * Returns: int32_t - success == 0
++ *
++ ***********************************************************/
++
++static VCHIQ_STATUS_T shim_callback(VCHIQ_REASON_T reason,
++	VCHIQ_HEADER_T *header, VCHIQ_SERVICE_HANDLE_T handle, void *bulk_user)
++{
++	SHIM_SERVICE_T *service =
++		(SHIM_SERVICE_T *)VCHIQ_GET_SERVICE_USERDATA(handle);
++
++	if (!service->callback)
++		goto release;
++
++	switch (reason) {
++	case VCHIQ_MESSAGE_AVAILABLE:
++		vchiu_queue_push(&service->queue, header);
++
++		service->callback(service->callback_param,
++				  VCHI_CALLBACK_MSG_AVAILABLE, NULL);
++
++		goto done;
++
++	case VCHIQ_BULK_TRANSMIT_DONE:
++		service->callback(service->callback_param,
++				  VCHI_CALLBACK_BULK_SENT, bulk_user);
++		break;
++
++	case VCHIQ_BULK_RECEIVE_DONE:
++		service->callback(service->callback_param,
++				  VCHI_CALLBACK_BULK_RECEIVED, bulk_user);
++		break;
++
++	case VCHIQ_SERVICE_CLOSED:
++		service->callback(service->callback_param,
++				  VCHI_CALLBACK_SERVICE_CLOSED, NULL);
++		break;
++
++	case VCHIQ_SERVICE_OPENED:
++		/* No equivalent VCHI reason */
++		break;
++
++	case VCHIQ_BULK_TRANSMIT_ABORTED:
++		service->callback(service->callback_param,
++				  VCHI_CALLBACK_BULK_TRANSMIT_ABORTED,
++				  bulk_user);
++		break;
++
++	case VCHIQ_BULK_RECEIVE_ABORTED:
++		service->callback(service->callback_param,
++				  VCHI_CALLBACK_BULK_RECEIVE_ABORTED,
++				  bulk_user);
++		break;
++
++	default:
++		WARN(1, "not supported\n");
++		break;
++	}
++
++release:
++	vchiq_release_message(service->handle, header);
++done:
++	return VCHIQ_SUCCESS;
++}
++
++static SHIM_SERVICE_T *service_alloc(VCHIQ_INSTANCE_T instance,
++	SERVICE_CREATION_T *setup)
++{
++	SHIM_SERVICE_T *service = kzalloc(sizeof(SHIM_SERVICE_T), GFP_KERNEL);
++
++	(void)instance;
++
++	if (service) {
++		if (vchiu_queue_init(&service->queue, 64)) {
++			service->callback = setup->callback;
++			service->callback_param = setup->callback_param;
++		} else {
++			kfree(service);
++			service = NULL;
++		}
++	}
++
++	return service;
++}
++
++static void service_free(SHIM_SERVICE_T *service)
++{
++	if (service) {
++		vchiu_queue_delete(&service->queue);
++		kfree(service);
++	}
++}
++
++int32_t vchi_service_open(VCHI_INSTANCE_T instance_handle,
++	SERVICE_CREATION_T *setup,
++	VCHI_SERVICE_HANDLE_T *handle)
++{
++	VCHIQ_INSTANCE_T instance = (VCHIQ_INSTANCE_T)instance_handle;
++	SHIM_SERVICE_T *service = service_alloc(instance, setup);
++
++	*handle = (VCHI_SERVICE_HANDLE_T)service;
++
++	if (service) {
++		VCHIQ_SERVICE_PARAMS_T params;
++		VCHIQ_STATUS_T status;
++
++		memset(&params, 0, sizeof(params));
++		params.fourcc = setup->service_id;
++		params.callback = shim_callback;
++		params.userdata = service;
++		params.version = setup->version.version;
++		params.version_min = setup->version.version_min;
++
++		status = vchiq_open_service(instance, &params,
++			&service->handle);
++		if (status != VCHIQ_SUCCESS) {
++			service_free(service);
++			service = NULL;
++			*handle = NULL;
++		}
++	}
++
++	return (service != NULL) ? 0 : -1;
++}
++EXPORT_SYMBOL(vchi_service_open);
++
++int32_t vchi_service_create(VCHI_INSTANCE_T instance_handle,
++	SERVICE_CREATION_T *setup,
++	VCHI_SERVICE_HANDLE_T *handle)
++{
++	VCHIQ_INSTANCE_T instance = (VCHIQ_INSTANCE_T)instance_handle;
++	SHIM_SERVICE_T *service = service_alloc(instance, setup);
++
++	*handle = (VCHI_SERVICE_HANDLE_T)service;
++
++	if (service) {
++		VCHIQ_SERVICE_PARAMS_T params;
++		VCHIQ_STATUS_T status;
++
++		memset(&params, 0, sizeof(params));
++		params.fourcc = setup->service_id;
++		params.callback = shim_callback;
++		params.userdata = service;
++		params.version = setup->version.version;
++		params.version_min = setup->version.version_min;
++		status = vchiq_add_service(instance, &params, &service->handle);
++
++		if (status != VCHIQ_SUCCESS) {
++			service_free(service);
++			service = NULL;
++			*handle = NULL;
++		}
++	}
++
++	return (service != NULL) ? 0 : -1;
++}
++EXPORT_SYMBOL(vchi_service_create);
++
++int32_t vchi_service_close(const VCHI_SERVICE_HANDLE_T handle)
++{
++	int32_t ret = -1;
++	SHIM_SERVICE_T *service = (SHIM_SERVICE_T *)handle;
++	if (service) {
++		VCHIQ_STATUS_T status = vchiq_close_service(service->handle);
++		if (status == VCHIQ_SUCCESS) {
++			service_free(service);
++			service = NULL;
++		}
++
++		ret = vchiq_status_to_vchi(status);
++	}
++	return ret;
++}
++EXPORT_SYMBOL(vchi_service_close);
++
++int32_t vchi_service_destroy(const VCHI_SERVICE_HANDLE_T handle)
++{
++	int32_t ret = -1;
++	SHIM_SERVICE_T *service = (SHIM_SERVICE_T *)handle;
++	if (service) {
++		VCHIQ_STATUS_T status = vchiq_remove_service(service->handle);
++		if (status == VCHIQ_SUCCESS) {
++			service_free(service);
++			service = NULL;
++		}
++
++		ret = vchiq_status_to_vchi(status);
++	}
++	return ret;
++}
++EXPORT_SYMBOL(vchi_service_destroy);
++
++int32_t vchi_service_set_option(const VCHI_SERVICE_HANDLE_T handle,
++				VCHI_SERVICE_OPTION_T option,
++				int value)
++{
++	int32_t ret = -1;
++	SHIM_SERVICE_T *service = (SHIM_SERVICE_T *)handle;
++	VCHIQ_SERVICE_OPTION_T vchiq_option;
++	switch (option) {
++	case VCHI_SERVICE_OPTION_TRACE:
++		vchiq_option = VCHIQ_SERVICE_OPTION_TRACE;
++		break;
++	case VCHI_SERVICE_OPTION_SYNCHRONOUS:
++		vchiq_option = VCHIQ_SERVICE_OPTION_SYNCHRONOUS;
++		break;
++	default:
++		service = NULL;
++		break;
++	}
++	if (service) {
++		VCHIQ_STATUS_T status =
++			vchiq_set_service_option(service->handle,
++						vchiq_option,
++						value);
++
++		ret = vchiq_status_to_vchi(status);
++	}
++	return ret;
++}
++EXPORT_SYMBOL(vchi_service_set_option);
++
++int32_t vchi_get_peer_version(const VCHI_SERVICE_HANDLE_T handle,
++	short *peer_version)
++{
++	int32_t ret = -1;
++	SHIM_SERVICE_T *service = (SHIM_SERVICE_T *)handle;
++
++	if (service) {
++		VCHIQ_STATUS_T status =
++			vchiq_get_peer_version(service->handle, peer_version);
++
++		ret = vchiq_status_to_vchi(status);
++	}
++	return ret;
++}
++EXPORT_SYMBOL(vchi_get_peer_version);
++
++/* ----------------------------------------------------------------------
++ * read a uint32_t from buffer.
++ * network format is defined to be little endian
++ * -------------------------------------------------------------------- */
++uint32_t
++vchi_readbuf_uint32(const void *_ptr)
++{
++	const unsigned char *ptr = _ptr;
++	return ptr[0] | (ptr[1] << 8) | (ptr[2] << 16) | (ptr[3] << 24);
++}
++
++/* ----------------------------------------------------------------------
++ * write a uint32_t to buffer.
++ * network format is defined to be little endian
++ * -------------------------------------------------------------------- */
++void
++vchi_writebuf_uint32(void *_ptr, uint32_t value)
++{
++	unsigned char *ptr = _ptr;
++	ptr[0] = (unsigned char)((value >> 0)  & 0xFF);
++	ptr[1] = (unsigned char)((value >> 8)  & 0xFF);
++	ptr[2] = (unsigned char)((value >> 16) & 0xFF);
++	ptr[3] = (unsigned char)((value >> 24) & 0xFF);
++}
++
++/* ----------------------------------------------------------------------
++ * read a uint16_t from buffer.
++ * network format is defined to be little endian
++ * -------------------------------------------------------------------- */
++uint16_t
++vchi_readbuf_uint16(const void *_ptr)
++{
++	const unsigned char *ptr = _ptr;
++	return ptr[0] | (ptr[1] << 8);
++}
++
++/* ----------------------------------------------------------------------
++ * write a uint16_t into the buffer.
++ * network format is defined to be little endian
++ * -------------------------------------------------------------------- */
++void
++vchi_writebuf_uint16(void *_ptr, uint16_t value)
++{
++	unsigned char *ptr = _ptr;
++	ptr[0] = (value >> 0)  & 0xFF;
++	ptr[1] = (value >> 8)  & 0xFF;
++}
++
++/***********************************************************
++ * Name: vchi_service_use
++ *
++ * Arguments: const VCHI_SERVICE_HANDLE_T handle
++ *
++ * Description: Routine to increment refcount on a service
++ *
++ * Returns: void
++ *
++ ***********************************************************/
++int32_t vchi_service_use(const VCHI_SERVICE_HANDLE_T handle)
++{
++	int32_t ret = -1;
++	SHIM_SERVICE_T *service = (SHIM_SERVICE_T *)handle;
++	if (service)
++		ret = vchiq_status_to_vchi(vchiq_use_service(service->handle));
++	return ret;
++}
++EXPORT_SYMBOL(vchi_service_use);
++
++/***********************************************************
++ * Name: vchi_service_release
++ *
++ * Arguments: const VCHI_SERVICE_HANDLE_T handle
++ *
++ * Description: Routine to decrement refcount on a service
++ *
++ * Returns: void
++ *
++ ***********************************************************/
++int32_t vchi_service_release(const VCHI_SERVICE_HANDLE_T handle)
++{
++	int32_t ret = -1;
++	SHIM_SERVICE_T *service = (SHIM_SERVICE_T *)handle;
++	if (service)
++		ret = vchiq_status_to_vchi(
++			vchiq_release_service(service->handle));
++	return ret;
++}
++EXPORT_SYMBOL(vchi_service_release);
+--- /dev/null
++++ b/drivers/misc/vc04_services/interface/vchiq_arm/vchiq_util.c
+@@ -0,0 +1,156 @@
++/**
++ * Copyright (c) 2010-2012 Broadcom. All rights reserved.
++ *
++ * Redistribution and use in source and binary forms, with or without
++ * modification, are permitted provided that the following conditions
++ * are met:
++ * 1. Redistributions of source code must retain the above copyright
++ *    notice, this list of conditions, and the following disclaimer,
++ *    without modification.
++ * 2. Redistributions in binary form must reproduce the above copyright
++ *    notice, this list of conditions and the following disclaimer in the
++ *    documentation and/or other materials provided with the distribution.
++ * 3. The names of the above-listed copyright holders may not be used
++ *    to endorse or promote products derived from this software without
++ *    specific prior written permission.
++ *
++ * ALTERNATIVELY, this software may be distributed under the terms of the
++ * GNU General Public License ("GPL") version 2, as published by the Free
++ * Software Foundation.
++ *
++ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
++ * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
++ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
++ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
++ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
++ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
++ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
++ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
++ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
++ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
++ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
++ */
++
++#include "vchiq_util.h"
++#include "vchiq_killable.h"
++
++static inline int is_pow2(int i)
++{
++	return i && !(i & (i - 1));
++}
++
++int vchiu_queue_init(VCHIU_QUEUE_T *queue, int size)
++{
++	WARN_ON(!is_pow2(size));
++
++	queue->size = size;
++	queue->read = 0;
++	queue->write = 0;
++	queue->initialized = 1;
++
++	sema_init(&queue->pop, 0);
++	sema_init(&queue->push, 0);
++
++	queue->storage = kzalloc(size * sizeof(VCHIQ_HEADER_T *), GFP_KERNEL);
++	if (queue->storage == NULL) {
++		vchiu_queue_delete(queue);
++		return 0;
++	}
++	return 1;
++}
++
++void vchiu_queue_delete(VCHIU_QUEUE_T *queue)
++{
++	if (queue->storage != NULL)
++		kfree(queue->storage);
++}
++
++int vchiu_queue_is_empty(VCHIU_QUEUE_T *queue)
++{
++	return queue->read == queue->write;
++}
++
++int vchiu_queue_is_full(VCHIU_QUEUE_T *queue)
++{
++	return queue->write == queue->read + queue->size;
++}
++
++void vchiu_queue_push(VCHIU_QUEUE_T *queue, VCHIQ_HEADER_T *header)
++{
++	if (!queue->initialized)
++		return;
++
++	while (queue->write == queue->read + queue->size) {
++		if (down_interruptible(&queue->pop) != 0) {
++			flush_signals(current);
++		}
++	}
++
++	/*
++	 * Write to queue->storage must be visible after read from
++	 * queue->read
++	 */
++	smp_mb();
++
++	queue->storage[queue->write & (queue->size - 1)] = header;
++
++	/*
++	 * Write to queue->storage must be visible before write to
++	 * queue->write
++	 */
++	smp_wmb();
++
++	queue->write++;
++
++	up(&queue->push);
++}
++
++VCHIQ_HEADER_T *vchiu_queue_peek(VCHIU_QUEUE_T *queue)
++{
++	while (queue->write == queue->read) {
++		if (down_interruptible(&queue->push) != 0) {
++			flush_signals(current);
++		}
++	}
++
++	up(&queue->push); // We haven't removed anything from the queue.
++
++	/*
++	 * Read from queue->storage must be visible after read from
++	 * queue->write
++	 */
++	smp_rmb();
++
++	return queue->storage[queue->read & (queue->size - 1)];
++}
++
++VCHIQ_HEADER_T *vchiu_queue_pop(VCHIU_QUEUE_T *queue)
++{
++	VCHIQ_HEADER_T *header;
++
++	while (queue->write == queue->read) {
++		if (down_interruptible(&queue->push) != 0) {
++			flush_signals(current);
++		}
++	}
++
++	/*
++	 * Read from queue->storage must be visible after read from
++	 * queue->write
++	 */
++	smp_rmb();
++
++	header = queue->storage[queue->read & (queue->size - 1)];
++
++	/*
++	 * Read from queue->storage must be visible before write to
++	 * queue->read
++	 */
++	smp_mb();
++
++	queue->read++;
++
++	up(&queue->pop);
++
++	return header;
++}
+--- /dev/null
++++ b/drivers/misc/vc04_services/interface/vchiq_arm/vchiq_util.h
+@@ -0,0 +1,82 @@
++/**
++ * Copyright (c) 2010-2012 Broadcom. All rights reserved.
++ *
++ * Redistribution and use in source and binary forms, with or without
++ * modification, are permitted provided that the following conditions
++ * are met:
++ * 1. Redistributions of source code must retain the above copyright
++ *    notice, this list of conditions, and the following disclaimer,
++ *    without modification.
++ * 2. Redistributions in binary form must reproduce the above copyright
++ *    notice, this list of conditions and the following disclaimer in the
++ *    documentation and/or other materials provided with the distribution.
++ * 3. The names of the above-listed copyright holders may not be used
++ *    to endorse or promote products derived from this software without
++ *    specific prior written permission.
++ *
++ * ALTERNATIVELY, this software may be distributed under the terms of the
++ * GNU General Public License ("GPL") version 2, as published by the Free
++ * Software Foundation.
++ *
++ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
++ * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
++ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
++ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
++ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
++ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
++ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
++ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
++ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
++ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
++ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
++ */
++
++#ifndef VCHIQ_UTIL_H
++#define VCHIQ_UTIL_H
++
++#include <linux/types.h>
++#include <linux/semaphore.h>
++#include <linux/mutex.h>
++#include <linux/bitops.h>
++#include <linux/kthread.h>
++#include <linux/wait.h>
++#include <linux/vmalloc.h>
++#include <linux/jiffies.h>
++#include <linux/delay.h>
++#include <linux/string.h>
++#include <linux/types.h>
++#include <linux/interrupt.h>
++#include <linux/random.h>
++#include <linux/sched.h>
++#include <linux/ctype.h>
++#include <linux/uaccess.h>
++#include <linux/time.h>  /* for time_t */
++#include <linux/slab.h>
++#include <linux/vmalloc.h>
++
++#include "vchiq_if.h"
++
++typedef struct {
++	int size;
++	int read;
++	int write;
++	int initialized;
++
++	struct semaphore pop;
++	struct semaphore push;
++
++	VCHIQ_HEADER_T **storage;
++} VCHIU_QUEUE_T;
++
++extern int  vchiu_queue_init(VCHIU_QUEUE_T *queue, int size);
++extern void vchiu_queue_delete(VCHIU_QUEUE_T *queue);
++
++extern int vchiu_queue_is_empty(VCHIU_QUEUE_T *queue);
++extern int vchiu_queue_is_full(VCHIU_QUEUE_T *queue);
++
++extern void vchiu_queue_push(VCHIU_QUEUE_T *queue, VCHIQ_HEADER_T *header);
++
++extern VCHIQ_HEADER_T *vchiu_queue_peek(VCHIU_QUEUE_T *queue);
++extern VCHIQ_HEADER_T *vchiu_queue_pop(VCHIU_QUEUE_T *queue);
++
++#endif
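
A minimal usage sketch for the queue declared above, assuming one producer context and one consumer context as in the VCHIQ code; my_queue and the wrapper names are placeholders. The size passed to vchiu_queue_init() must be a power of two.

/*
 * Illustrative only: a 16-slot queue handing VCHIQ_HEADER_T pointers from
 * a producer to a consumer.
 */
static VCHIU_QUEUE_T my_queue;

static int my_queue_setup(void)
{
	/* vchiu_queue_init() returns 1 on success, 0 on allocation failure */
	return vchiu_queue_init(&my_queue, 16) ? 0 : -ENOMEM;
}

static void my_producer(VCHIQ_HEADER_T *header)
{
	vchiu_queue_push(&my_queue, header);	/* blocks while the queue is full */
}

static VCHIQ_HEADER_T *my_consumer(void)
{
	return vchiu_queue_pop(&my_queue);	/* blocks while the queue is empty */
}

static void my_queue_teardown(void)
{
	vchiu_queue_delete(&my_queue);		/* frees the storage array */
}
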
+--- /dev/null
++++ b/drivers/misc/vc04_services/interface/vchiq_arm/vchiq_version.c
+@@ -0,0 +1,59 @@
++/**
++ * Copyright (c) 2010-2012 Broadcom. All rights reserved.
++ *
++ * Redistribution and use in source and binary forms, with or without
++ * modification, are permitted provided that the following conditions
++ * are met:
++ * 1. Redistributions of source code must retain the above copyright
++ *    notice, this list of conditions, and the following disclaimer,
++ *    without modification.
++ * 2. Redistributions in binary form must reproduce the above copyright
++ *    notice, this list of conditions and the following disclaimer in the
++ *    documentation and/or other materials provided with the distribution.
++ * 3. The names of the above-listed copyright holders may not be used
++ *    to endorse or promote products derived from this software without
++ *    specific prior written permission.
++ *
++ * ALTERNATIVELY, this software may be distributed under the terms of the
++ * GNU General Public License ("GPL") version 2, as published by the Free
++ * Software Foundation.
++ *
++ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
++ * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
++ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
++ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
++ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
++ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
++ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
++ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
++ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
++ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
++ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
++ */
++#include "vchiq_build_info.h"
++#include <linux/broadcom/vc_debug_sym.h>
++
++VC_DEBUG_DECLARE_STRING_VAR( vchiq_build_hostname, "dc4-arm-01" );
++VC_DEBUG_DECLARE_STRING_VAR( vchiq_build_version, "9245b4c35b99b3870e1f7dc598c5692b3c66a6f0 (tainted)" );
++VC_DEBUG_DECLARE_STRING_VAR( vchiq_build_time,    __TIME__ );
++VC_DEBUG_DECLARE_STRING_VAR( vchiq_build_date,    __DATE__ );
++
++const char *vchiq_get_build_hostname( void )
++{
++   return vchiq_build_hostname;
++}
++
++const char *vchiq_get_build_version( void )
++{
++   return vchiq_build_version;
++}
++
++const char *vchiq_get_build_date( void )
++{
++   return vchiq_build_date;
++}
++
++const char *vchiq_get_build_time( void )
++{
++   return vchiq_build_time;
++}
diff --git a/target/linux/brcm2708/patches-4.4/0038-vc_mem-Add-vc_mem-driver.patch b/target/linux/brcm2708/patches-4.4/0038-vc_mem-Add-vc_mem-driver.patch
new file mode 100644
index 0000000..f7cc213
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0038-vc_mem-Add-vc_mem-driver.patch
@@ -0,0 +1,991 @@
+From bb0a865b1cbbb1dd887378111af03727884e3476 Mon Sep 17 00:00:00 2001
+From: popcornmix <popcornmix at gmail.com>
+Date: Wed, 17 Jun 2015 16:07:06 +0100
+Subject: [PATCH 038/127] vc_mem: Add vc_mem driver
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+Signed-off-by: popcornmix <popcornmix at gmail.com>
+
+BCM270x: Move vc_mem
+
+Make the vc_mem module available for ARCH_BCM2835 by moving it.
+
+Signed-off-by: Noralf Trønnes <noralf at tronnes.org>
+---
+ arch/arm/mach-bcm2709/include/mach/vc_mem.h |  35 ---
+ arch/arm/mach-bcm2709/vc_mem.c              | 431 ----------------------------
+ drivers/char/broadcom/Kconfig               |  12 +-
+ drivers/char/broadcom/Makefile              |   1 +
+ drivers/char/broadcom/vc_mem.c              | 422 +++++++++++++++++++++++++++
+ include/linux/broadcom/vc_mem.h             |  35 +++
+ 6 files changed, 469 insertions(+), 467 deletions(-)
+ delete mode 100644 arch/arm/mach-bcm2709/include/mach/vc_mem.h
+ delete mode 100644 arch/arm/mach-bcm2709/vc_mem.c
+ create mode 100644 drivers/char/broadcom/vc_mem.c
+ create mode 100644 include/linux/broadcom/vc_mem.h
+
+--- a/arch/arm/mach-bcm2709/include/mach/vc_mem.h
++++ /dev/null
+@@ -1,35 +0,0 @@
+-/*****************************************************************************
+-* Copyright 2010 - 2011 Broadcom Corporation.  All rights reserved.
+-*
+-* Unless you and Broadcom execute a separate written software license
+-* agreement governing use of this software, this software is licensed to you
+-* under the terms of the GNU General Public License version 2, available at
+-* http://www.broadcom.com/licenses/GPLv2.php (the "GPL").
+-*
+-* Notwithstanding the above, under no circumstances may you combine this
+-* software in any way with any other Broadcom software provided under a
+-* license other than the GPL, without Broadcom's express prior written
+-* consent.
+-*****************************************************************************/
+-
+-#if !defined( VC_MEM_H )
+-#define VC_MEM_H
+-
+-#include <linux/ioctl.h>
+-
+-#define VC_MEM_IOC_MAGIC  'v'
+-
+-#define VC_MEM_IOC_MEM_PHYS_ADDR    _IOR( VC_MEM_IOC_MAGIC, 0, unsigned long )
+-#define VC_MEM_IOC_MEM_SIZE         _IOR( VC_MEM_IOC_MAGIC, 1, unsigned int )
+-#define VC_MEM_IOC_MEM_BASE         _IOR( VC_MEM_IOC_MAGIC, 2, unsigned int )
+-#define VC_MEM_IOC_MEM_LOAD         _IOR( VC_MEM_IOC_MAGIC, 3, unsigned int )
+-
+-#if defined( __KERNEL__ )
+-#define VC_MEM_TO_ARM_ADDR_MASK 0x3FFFFFFF
+-
+-extern unsigned long mm_vc_mem_phys_addr;
+-extern unsigned int  mm_vc_mem_size;
+-extern int vc_mem_get_current_size( void );
+-#endif
+-
+-#endif  /* VC_MEM_H */
+--- a/arch/arm/mach-bcm2709/vc_mem.c
++++ /dev/null
+@@ -1,431 +0,0 @@
+-/*****************************************************************************
+-* Copyright 2010 - 2011 Broadcom Corporation.  All rights reserved.
+-*
+-* Unless you and Broadcom execute a separate written software license
+-* agreement governing use of this software, this software is licensed to you
+-* under the terms of the GNU General Public License version 2, available at
+-* http://www.broadcom.com/licenses/GPLv2.php (the "GPL").
+-*
+-* Notwithstanding the above, under no circumstances may you combine this
+-* software in any way with any other Broadcom software provided under a
+-* license other than the GPL, without Broadcom's express prior written
+-* consent.
+-*****************************************************************************/
+-
+-#include <linux/kernel.h>
+-#include <linux/module.h>
+-#include <linux/fs.h>
+-#include <linux/device.h>
+-#include <linux/cdev.h>
+-#include <linux/mm.h>
+-#include <linux/slab.h>
+-#include <linux/debugfs.h>
+-#include <asm/uaccess.h>
+-#include <linux/dma-mapping.h>
+-#include <linux/platform_data/mailbox-bcm2708.h>
+-
+-#ifdef CONFIG_ARCH_KONA
+-#include <chal/chal_ipc.h>
+-#elif defined(CONFIG_ARCH_BCM2708) || defined(CONFIG_ARCH_BCM2709)
+-#else
+-#include <csp/chal_ipc.h>
+-#endif
+-
+-#include "mach/vc_mem.h"
+-
+-#define DRIVER_NAME  "vc-mem"
+-
+-// Device (/dev) related variables
+-static dev_t vc_mem_devnum = 0;
+-static struct class *vc_mem_class = NULL;
+-static struct cdev vc_mem_cdev;
+-static int vc_mem_inited = 0;
+-
+-#ifdef CONFIG_DEBUG_FS
+-static struct dentry *vc_mem_debugfs_entry;
+-#endif
+-
+-/*
+- * Videocore memory addresses and size
+- *
+- * Drivers that wish to know the videocore memory addresses and sizes should
+- * use these variables instead of the MM_IO_BASE and MM_ADDR_IO defines in
+- * headers. This allows the other drivers to not be tied down to a a certain
+- * address/size at compile time.
+- *
+- * In the future, the goal is to have the videocore memory virtual address and
+- * size be calculated at boot time rather than at compile time. The decision of
+- * where the videocore memory resides and its size would be in the hands of the
+- * bootloader (and/or kernel). When that happens, the values of these variables
+- * would be calculated and assigned in the init function.
+- */
+-// in the 2835 VC in mapped above ARM, but ARM has full access to VC space
+-unsigned long mm_vc_mem_phys_addr = 0x00000000;
+-unsigned int mm_vc_mem_size = 0;
+-unsigned int mm_vc_mem_base = 0;
+-
+-EXPORT_SYMBOL(mm_vc_mem_phys_addr);
+-EXPORT_SYMBOL(mm_vc_mem_size);
+-EXPORT_SYMBOL(mm_vc_mem_base);
+-
+-static uint phys_addr = 0;
+-static uint mem_size = 0;
+-static uint mem_base = 0;
+-
+-
+-/****************************************************************************
+-*
+-*   vc_mem_open
+-*
+-***************************************************************************/
+-
+-static int
+-vc_mem_open(struct inode *inode, struct file *file)
+-{
+-	(void) inode;
+-	(void) file;
+-
+-	pr_debug("%s: called file = 0x%p\n", __func__, file);
+-
+-	return 0;
+-}
+-
+-/****************************************************************************
+-*
+-*   vc_mem_release
+-*
+-***************************************************************************/
+-
+-static int
+-vc_mem_release(struct inode *inode, struct file *file)
+-{
+-	(void) inode;
+-	(void) file;
+-
+-	pr_debug("%s: called file = 0x%p\n", __func__, file);
+-
+-	return 0;
+-}
+-
+-/****************************************************************************
+-*
+-*   vc_mem_get_size
+-*
+-***************************************************************************/
+-
+-static void
+-vc_mem_get_size(void)
+-{
+-}
+-
+-/****************************************************************************
+-*
+-*   vc_mem_get_base
+-*
+-***************************************************************************/
+-
+-static void
+-vc_mem_get_base(void)
+-{
+-}
+-
+-/****************************************************************************
+-*
+-*   vc_mem_get_current_size
+-*
+-***************************************************************************/
+-
+-int
+-vc_mem_get_current_size(void)
+-{
+-	return mm_vc_mem_size;
+-}
+-
+-EXPORT_SYMBOL_GPL(vc_mem_get_current_size);
+-
+-/****************************************************************************
+-*
+-*   vc_mem_ioctl
+-*
+-***************************************************************************/
+-
+-static long
+-vc_mem_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+-{
+-	int rc = 0;
+-
+-	(void) cmd;
+-	(void) arg;
+-
+-	pr_debug("%s: called file = 0x%p\n", __func__, file);
+-
+-	switch (cmd) {
+-	case VC_MEM_IOC_MEM_PHYS_ADDR:
+-		{
+-			pr_debug("%s: VC_MEM_IOC_MEM_PHYS_ADDR=0x%p\n",
+-				__func__, (void *) mm_vc_mem_phys_addr);
+-
+-			if (copy_to_user((void *) arg, &mm_vc_mem_phys_addr,
+-					 sizeof (mm_vc_mem_phys_addr)) != 0) {
+-				rc = -EFAULT;
+-			}
+-			break;
+-		}
+-	case VC_MEM_IOC_MEM_SIZE:
+-		{
+-			// Get the videocore memory size first
+-			vc_mem_get_size();
+-
+-			pr_debug("%s: VC_MEM_IOC_MEM_SIZE=%u\n", __func__,
+-				mm_vc_mem_size);
+-
+-			if (copy_to_user((void *) arg, &mm_vc_mem_size,
+-					 sizeof (mm_vc_mem_size)) != 0) {
+-				rc = -EFAULT;
+-			}
+-			break;
+-		}
+-	case VC_MEM_IOC_MEM_BASE:
+-		{
+-			// Get the videocore memory base
+-			vc_mem_get_base();
+-
+-			pr_debug("%s: VC_MEM_IOC_MEM_BASE=%u\n", __func__,
+-				mm_vc_mem_base);
+-
+-			if (copy_to_user((void *) arg, &mm_vc_mem_base,
+-					 sizeof (mm_vc_mem_base)) != 0) {
+-				rc = -EFAULT;
+-			}
+-			break;
+-		}
+-	case VC_MEM_IOC_MEM_LOAD:
+-		{
+-			// Get the videocore memory base
+-			vc_mem_get_base();
+-
+-			pr_debug("%s: VC_MEM_IOC_MEM_LOAD=%u\n", __func__,
+-				mm_vc_mem_base);
+-
+-			if (copy_to_user((void *) arg, &mm_vc_mem_base,
+-					 sizeof (mm_vc_mem_base)) != 0) {
+-				rc = -EFAULT;
+-			}
+-			break;
+-		}
+-	default:
+-		{
+-			return -ENOTTY;
+-		}
+-	}
+-	pr_debug("%s: file = 0x%p returning %d\n", __func__, file, rc);
+-
+-	return rc;
+-}
+-
+-/****************************************************************************
+-*
+-*   vc_mem_mmap
+-*
+-***************************************************************************/
+-
+-static int
+-vc_mem_mmap(struct file *filp, struct vm_area_struct *vma)
+-{
+-	int rc = 0;
+-	unsigned long length = vma->vm_end - vma->vm_start;
+-	unsigned long offset = vma->vm_pgoff << PAGE_SHIFT;
+-
+-	pr_debug("%s: vm_start = 0x%08lx vm_end = 0x%08lx vm_pgoff = 0x%08lx\n",
+-		__func__, (long) vma->vm_start, (long) vma->vm_end,
+-		(long) vma->vm_pgoff);
+-
+-	if (offset + length > mm_vc_mem_size) {
+-		pr_err("%s: length %ld is too big\n", __func__, length);
+-		return -EINVAL;
+-	}
+-	// Do not cache the memory map
+-	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
+-
+-	rc = remap_pfn_range(vma, vma->vm_start,
+-			     (mm_vc_mem_phys_addr >> PAGE_SHIFT) +
+-			     vma->vm_pgoff, length, vma->vm_page_prot);
+-	if (rc != 0) {
+-		pr_err("%s: remap_pfn_range failed (rc=%d)\n", __func__, rc);
+-	}
+-
+-	return rc;
+-}
+-
+-/****************************************************************************
+-*
+-*   File Operations for the driver.
+-*
+-***************************************************************************/
+-
+-static const struct file_operations vc_mem_fops = {
+-	.owner = THIS_MODULE,
+-	.open = vc_mem_open,
+-	.release = vc_mem_release,
+-	.unlocked_ioctl = vc_mem_ioctl,
+-	.mmap = vc_mem_mmap,
+-};
+-
+-#ifdef CONFIG_DEBUG_FS
+-static void vc_mem_debugfs_deinit(void)
+-{
+-	debugfs_remove_recursive(vc_mem_debugfs_entry);
+-	vc_mem_debugfs_entry = NULL;
+-}
+-
+-
+-static int vc_mem_debugfs_init(
+-	struct device *dev)
+-{
+-	vc_mem_debugfs_entry = debugfs_create_dir(DRIVER_NAME, NULL);
+-	if (!vc_mem_debugfs_entry) {
+-		dev_warn(dev, "could not create debugfs entry\n");
+-		return -EFAULT;
+-	}
+-
+-	if (!debugfs_create_x32("vc_mem_phys_addr",
+-				0444,
+-				vc_mem_debugfs_entry,
+-				(u32 *)&mm_vc_mem_phys_addr)) {
+-		dev_warn(dev, "%s:could not create vc_mem_phys entry\n",
+-			__func__);
+-		goto fail;
+-	}
+-
+-	if (!debugfs_create_x32("vc_mem_size",
+-				0444,
+-				vc_mem_debugfs_entry,
+-				(u32 *)&mm_vc_mem_size)) {
+-		dev_warn(dev, "%s:could not create vc_mem_size entry\n",
+-			__func__);
+-		goto fail;
+-	}
+-
+-	if (!debugfs_create_x32("vc_mem_base",
+-				0444,
+-				vc_mem_debugfs_entry,
+-				(u32 *)&mm_vc_mem_base)) {
+-		dev_warn(dev, "%s:could not create vc_mem_base entry\n",
+-			 __func__);
+-		goto fail;
+-	}
+-
+-	return 0;
+-
+-fail:
+-	vc_mem_debugfs_deinit();
+-	return -EFAULT;
+-}
+-
+-#endif /* CONFIG_DEBUG_FS */
+-
+-
+-/****************************************************************************
+-*
+-*   vc_mem_init
+-*
+-***************************************************************************/
+-
+-static int __init
+-vc_mem_init(void)
+-{
+-	int rc = -EFAULT;
+-	struct device *dev;
+-
+-	pr_debug("%s: called\n", __func__);
+-
+-	mm_vc_mem_phys_addr = phys_addr;
+-	mm_vc_mem_size = mem_size;
+-	mm_vc_mem_base = mem_base;
+-
+-	vc_mem_get_size();
+-
+-	pr_info("vc-mem: phys_addr:0x%08lx mem_base=0x%08x mem_size:0x%08x(%u MiB)\n",
+-		mm_vc_mem_phys_addr, mm_vc_mem_base, mm_vc_mem_size, mm_vc_mem_size / (1024 * 1024));
+-
+-	if ((rc = alloc_chrdev_region(&vc_mem_devnum, 0, 1, DRIVER_NAME)) < 0) {
+-		pr_err("%s: alloc_chrdev_region failed (rc=%d)\n",
+-		       __func__, rc);
+-		goto out_err;
+-	}
+-
+-	cdev_init(&vc_mem_cdev, &vc_mem_fops);
+-	if ((rc = cdev_add(&vc_mem_cdev, vc_mem_devnum, 1)) != 0) {
+-		pr_err("%s: cdev_add failed (rc=%d)\n", __func__, rc);
+-		goto out_unregister;
+-	}
+-
+-	vc_mem_class = class_create(THIS_MODULE, DRIVER_NAME);
+-	if (IS_ERR(vc_mem_class)) {
+-		rc = PTR_ERR(vc_mem_class);
+-		pr_err("%s: class_create failed (rc=%d)\n", __func__, rc);
+-		goto out_cdev_del;
+-	}
+-
+-	dev = device_create(vc_mem_class, NULL, vc_mem_devnum, NULL,
+-			    DRIVER_NAME);
+-	if (IS_ERR(dev)) {
+-		rc = PTR_ERR(dev);
+-		pr_err("%s: device_create failed (rc=%d)\n", __func__, rc);
+-		goto out_class_destroy;
+-	}
+-
+-#ifdef CONFIG_DEBUG_FS
+-	/* don't fail if the debug entries cannot be created */
+-	vc_mem_debugfs_init(dev);
+-#endif
+-
+-	vc_mem_inited = 1;
+-	return 0;
+-
+-	device_destroy(vc_mem_class, vc_mem_devnum);
+-
+-      out_class_destroy:
+-	class_destroy(vc_mem_class);
+-	vc_mem_class = NULL;
+-
+-      out_cdev_del:
+-	cdev_del(&vc_mem_cdev);
+-
+-      out_unregister:
+-	unregister_chrdev_region(vc_mem_devnum, 1);
+-
+-      out_err:
+-	return -1;
+-}
+-
+-/****************************************************************************
+-*
+-*   vc_mem_exit
+-*
+-***************************************************************************/
+-
+-static void __exit
+-vc_mem_exit(void)
+-{
+-	pr_debug("%s: called\n", __func__);
+-
+-	if (vc_mem_inited) {
+-#if CONFIG_DEBUG_FS
+-		vc_mem_debugfs_deinit();
+-#endif
+-		device_destroy(vc_mem_class, vc_mem_devnum);
+-		class_destroy(vc_mem_class);
+-		cdev_del(&vc_mem_cdev);
+-		unregister_chrdev_region(vc_mem_devnum, 1);
+-	}
+-}
+-
+-module_init(vc_mem_init);
+-module_exit(vc_mem_exit);
+-MODULE_LICENSE("GPL");
+-MODULE_AUTHOR("Broadcom Corporation");
+-
+-module_param(phys_addr, uint, 0644);
+-module_param(mem_size, uint, 0644);
+-module_param(mem_base, uint, 0644);
+--- a/drivers/char/broadcom/Kconfig
++++ b/drivers/char/broadcom/Kconfig
+@@ -7,9 +7,19 @@ menuconfig BRCM_CHAR_DRIVERS
+ 	help
+ 	  Broadcom's char drivers
+ 
++if BRCM_CHAR_DRIVERS
++
+ config BCM_VC_CMA
+ 	bool "Videocore CMA"
+-	depends on CMA && BRCM_CHAR_DRIVERS && BCM2708_VCHIQ
++	depends on CMA && BCM2708_VCHIQ
+ 	default n
+         help
+           Helper for videocore CMA access.
++
++config BCM2708_VCMEM
++	bool "Videocore Memory"
++        default y
++        help
++          Helper for videocore memory access and total size allocation.
++
++endif
+--- a/drivers/char/broadcom/Makefile
++++ b/drivers/char/broadcom/Makefile
+@@ -1 +1,2 @@
+ obj-$(CONFIG_BCM_VC_CMA)	+= vc_cma/
++obj-$(CONFIG_BCM2708_VCMEM)	+= vc_mem.o
+--- /dev/null
++++ b/drivers/char/broadcom/vc_mem.c
+@@ -0,0 +1,422 @@
++/*****************************************************************************
++* Copyright 2010 - 2011 Broadcom Corporation.  All rights reserved.
++*
++* Unless you and Broadcom execute a separate written software license
++* agreement governing use of this software, this software is licensed to you
++* under the terms of the GNU General Public License version 2, available at
++* http://www.broadcom.com/licenses/GPLv2.php (the "GPL").
++*
++* Notwithstanding the above, under no circumstances may you combine this
++* software in any way with any other Broadcom software provided under a
++* license other than the GPL, without Broadcom's express prior written
++* consent.
++*****************************************************************************/
++
++#include <linux/kernel.h>
++#include <linux/module.h>
++#include <linux/fs.h>
++#include <linux/device.h>
++#include <linux/cdev.h>
++#include <linux/mm.h>
++#include <linux/slab.h>
++#include <linux/debugfs.h>
++#include <asm/uaccess.h>
++#include <linux/dma-mapping.h>
++#include <linux/broadcom/vc_mem.h>
++
++#define DRIVER_NAME  "vc-mem"
++
++// Device (/dev) related variables
++static dev_t vc_mem_devnum = 0;
++static struct class *vc_mem_class = NULL;
++static struct cdev vc_mem_cdev;
++static int vc_mem_inited = 0;
++
++#ifdef CONFIG_DEBUG_FS
++static struct dentry *vc_mem_debugfs_entry;
++#endif
++
++/*
++ * Videocore memory addresses and size
++ *
++ * Drivers that wish to know the videocore memory addresses and sizes should
++ * use these variables instead of the MM_IO_BASE and MM_ADDR_IO defines in
++ * headers. This allows the other drivers to not be tied down to a certain
++ * address/size at compile time.
++ *
++ * In the future, the goal is to have the videocore memory virtual address and
++ * size be calculated at boot time rather than at compile time. The decision of
++ * where the videocore memory resides and its size would be in the hands of the
++ * bootloader (and/or kernel). When that happens, the values of these variables
++ * would be calculated and assigned in the init function.
++ */
++// in the 2835 VC is mapped above ARM, but ARM has full access to VC space
++unsigned long mm_vc_mem_phys_addr = 0x00000000;
++unsigned int mm_vc_mem_size = 0;
++unsigned int mm_vc_mem_base = 0;
++
++EXPORT_SYMBOL(mm_vc_mem_phys_addr);
++EXPORT_SYMBOL(mm_vc_mem_size);
++EXPORT_SYMBOL(mm_vc_mem_base);
++
++static uint phys_addr = 0;
++static uint mem_size = 0;
++static uint mem_base = 0;
++
++
++/****************************************************************************
++*
++*   vc_mem_open
++*
++***************************************************************************/
++
++static int
++vc_mem_open(struct inode *inode, struct file *file)
++{
++	(void) inode;
++	(void) file;
++
++	pr_debug("%s: called file = 0x%p\n", __func__, file);
++
++	return 0;
++}
++
++/****************************************************************************
++*
++*   vc_mem_release
++*
++***************************************************************************/
++
++static int
++vc_mem_release(struct inode *inode, struct file *file)
++{
++	(void) inode;
++	(void) file;
++
++	pr_debug("%s: called file = 0x%p\n", __func__, file);
++
++	return 0;
++}
++
++/****************************************************************************
++*
++*   vc_mem_get_size
++*
++***************************************************************************/
++
++static void
++vc_mem_get_size(void)
++{
++}
++
++/****************************************************************************
++*
++*   vc_mem_get_base
++*
++***************************************************************************/
++
++static void
++vc_mem_get_base(void)
++{
++}
++
++/****************************************************************************
++*
++*   vc_mem_get_current_size
++*
++***************************************************************************/
++
++int
++vc_mem_get_current_size(void)
++{
++	return mm_vc_mem_size;
++}
++
++EXPORT_SYMBOL_GPL(vc_mem_get_current_size);
++
++/****************************************************************************
++*
++*   vc_mem_ioctl
++*
++***************************************************************************/
++
++static long
++vc_mem_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
++{
++	int rc = 0;
++
++	(void) cmd;
++	(void) arg;
++
++	pr_debug("%s: called file = 0x%p\n", __func__, file);
++
++	switch (cmd) {
++	case VC_MEM_IOC_MEM_PHYS_ADDR:
++		{
++			pr_debug("%s: VC_MEM_IOC_MEM_PHYS_ADDR=0x%p\n",
++				__func__, (void *) mm_vc_mem_phys_addr);
++
++			if (copy_to_user((void *) arg, &mm_vc_mem_phys_addr,
++					 sizeof (mm_vc_mem_phys_addr)) != 0) {
++				rc = -EFAULT;
++			}
++			break;
++		}
++	case VC_MEM_IOC_MEM_SIZE:
++		{
++			// Get the videocore memory size first
++			vc_mem_get_size();
++
++			pr_debug("%s: VC_MEM_IOC_MEM_SIZE=%u\n", __func__,
++				mm_vc_mem_size);
++
++			if (copy_to_user((void *) arg, &mm_vc_mem_size,
++					 sizeof (mm_vc_mem_size)) != 0) {
++				rc = -EFAULT;
++			}
++			break;
++		}
++	case VC_MEM_IOC_MEM_BASE:
++		{
++			// Get the videocore memory base
++			vc_mem_get_base();
++
++			pr_debug("%s: VC_MEM_IOC_MEM_BASE=%u\n", __func__,
++				mm_vc_mem_base);
++
++			if (copy_to_user((void *) arg, &mm_vc_mem_base,
++					 sizeof (mm_vc_mem_base)) != 0) {
++				rc = -EFAULT;
++			}
++			break;
++		}
++	case VC_MEM_IOC_MEM_LOAD:
++		{
++			// Get the videocore memory base
++			vc_mem_get_base();
++
++			pr_debug("%s: VC_MEM_IOC_MEM_LOAD=%u\n", __func__,
++				mm_vc_mem_base);
++
++			if (copy_to_user((void *) arg, &mm_vc_mem_base,
++					 sizeof (mm_vc_mem_base)) != 0) {
++				rc = -EFAULT;
++			}
++			break;
++		}
++	default:
++		{
++			return -ENOTTY;
++		}
++	}
++	pr_debug("%s: file = 0x%p returning %d\n", __func__, file, rc);
++
++	return rc;
++}
++
++/****************************************************************************
++*
++*   vc_mem_mmap
++*
++***************************************************************************/
++
++static int
++vc_mem_mmap(struct file *filp, struct vm_area_struct *vma)
++{
++	int rc = 0;
++	unsigned long length = vma->vm_end - vma->vm_start;
++	unsigned long offset = vma->vm_pgoff << PAGE_SHIFT;
++
++	pr_debug("%s: vm_start = 0x%08lx vm_end = 0x%08lx vm_pgoff = 0x%08lx\n",
++		__func__, (long) vma->vm_start, (long) vma->vm_end,
++		(long) vma->vm_pgoff);
++
++	if (offset + length > mm_vc_mem_size) {
++		pr_err("%s: length %lu is too big\n", __func__, length);
++		return -EINVAL;
++	}
++	// Do not cache the memory map
++	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
++
++	rc = remap_pfn_range(vma, vma->vm_start,
++			     (mm_vc_mem_phys_addr >> PAGE_SHIFT) +
++			     vma->vm_pgoff, length, vma->vm_page_prot);
++	if (rc != 0) {
++		pr_err("%s: remap_pfn_range failed (rc=%d)\n", __func__, rc);
++	}
++
++	return rc;
++}
++
++/****************************************************************************
++*
++*   File Operations for the driver.
++*
++***************************************************************************/
++
++static const struct file_operations vc_mem_fops = {
++	.owner = THIS_MODULE,
++	.open = vc_mem_open,
++	.release = vc_mem_release,
++	.unlocked_ioctl = vc_mem_ioctl,
++	.mmap = vc_mem_mmap,
++};
++
++#ifdef CONFIG_DEBUG_FS
++static void vc_mem_debugfs_deinit(void)
++{
++	debugfs_remove_recursive(vc_mem_debugfs_entry);
++	vc_mem_debugfs_entry = NULL;
++}
++
++
++static int vc_mem_debugfs_init(
++	struct device *dev)
++{
++	vc_mem_debugfs_entry = debugfs_create_dir(DRIVER_NAME, NULL);
++	if (!vc_mem_debugfs_entry) {
++		dev_warn(dev, "could not create debugfs entry\n");
++		return -EFAULT;
++	}
++
++	if (!debugfs_create_x32("vc_mem_phys_addr",
++				0444,
++				vc_mem_debugfs_entry,
++				(u32 *)&mm_vc_mem_phys_addr)) {
++		dev_warn(dev, "%s:could not create vc_mem_phys entry\n",
++			__func__);
++		goto fail;
++	}
++
++	if (!debugfs_create_x32("vc_mem_size",
++				0444,
++				vc_mem_debugfs_entry,
++				(u32 *)&mm_vc_mem_size)) {
++		dev_warn(dev, "%s:could not create vc_mem_size entry\n",
++			__func__);
++		goto fail;
++	}
++
++	if (!debugfs_create_x32("vc_mem_base",
++				0444,
++				vc_mem_debugfs_entry,
++				(u32 *)&mm_vc_mem_base)) {
++		dev_warn(dev, "%s:could not create vc_mem_base entry\n",
++			 __func__);
++		goto fail;
++	}
++
++	return 0;
++
++fail:
++	vc_mem_debugfs_deinit();
++	return -EFAULT;
++}
++
++#endif /* CONFIG_DEBUG_FS */
++
++
++/****************************************************************************
++*
++*   vc_mem_init
++*
++***************************************************************************/
++
++static int __init
++vc_mem_init(void)
++{
++	int rc = -EFAULT;
++	struct device *dev;
++
++	pr_debug("%s: called\n", __func__);
++
++	mm_vc_mem_phys_addr = phys_addr;
++	mm_vc_mem_size = mem_size;
++	mm_vc_mem_base = mem_base;
++
++	vc_mem_get_size();
++
++	pr_info("vc-mem: phys_addr:0x%08lx mem_base=0x%08x mem_size:0x%08x(%u MiB)\n",
++		mm_vc_mem_phys_addr, mm_vc_mem_base, mm_vc_mem_size, mm_vc_mem_size / (1024 * 1024));
++
++	if ((rc = alloc_chrdev_region(&vc_mem_devnum, 0, 1, DRIVER_NAME)) < 0) {
++		pr_err("%s: alloc_chrdev_region failed (rc=%d)\n",
++		       __func__, rc);
++		goto out_err;
++	}
++
++	cdev_init(&vc_mem_cdev, &vc_mem_fops);
++	if ((rc = cdev_add(&vc_mem_cdev, vc_mem_devnum, 1)) != 0) {
++		pr_err("%s: cdev_add failed (rc=%d)\n", __func__, rc);
++		goto out_unregister;
++	}
++
++	vc_mem_class = class_create(THIS_MODULE, DRIVER_NAME);
++	if (IS_ERR(vc_mem_class)) {
++		rc = PTR_ERR(vc_mem_class);
++		pr_err("%s: class_create failed (rc=%d)\n", __func__, rc);
++		goto out_cdev_del;
++	}
++
++	dev = device_create(vc_mem_class, NULL, vc_mem_devnum, NULL,
++			    DRIVER_NAME);
++	if (IS_ERR(dev)) {
++		rc = PTR_ERR(dev);
++		pr_err("%s: device_create failed (rc=%d)\n", __func__, rc);
++		goto out_class_destroy;
++	}
++
++#ifdef CONFIG_DEBUG_FS
++	/* don't fail if the debug entries cannot be created */
++	vc_mem_debugfs_init(dev);
++#endif
++
++	vc_mem_inited = 1;
++	return 0;
++
++	device_destroy(vc_mem_class, vc_mem_devnum);
++
++      out_class_destroy:
++	class_destroy(vc_mem_class);
++	vc_mem_class = NULL;
++
++      out_cdev_del:
++	cdev_del(&vc_mem_cdev);
++
++      out_unregister:
++	unregister_chrdev_region(vc_mem_devnum, 1);
++
++      out_err:
++	return -1;
++}
++
++/****************************************************************************
++*
++*   vc_mem_exit
++*
++***************************************************************************/
++
++static void __exit
++vc_mem_exit(void)
++{
++	pr_debug("%s: called\n", __func__);
++
++	if (vc_mem_inited) {
++#ifdef CONFIG_DEBUG_FS
++		vc_mem_debugfs_deinit();
++#endif
++		device_destroy(vc_mem_class, vc_mem_devnum);
++		class_destroy(vc_mem_class);
++		cdev_del(&vc_mem_cdev);
++		unregister_chrdev_region(vc_mem_devnum, 1);
++	}
++}
++
++module_init(vc_mem_init);
++module_exit(vc_mem_exit);
++MODULE_LICENSE("GPL");
++MODULE_AUTHOR("Broadcom Corporation");
++
++module_param(phys_addr, uint, 0644);
++module_param(mem_size, uint, 0644);
++module_param(mem_base, uint, 0644);
+--- /dev/null
++++ b/include/linux/broadcom/vc_mem.h
+@@ -0,0 +1,35 @@
++/*****************************************************************************
++* Copyright 2010 - 2011 Broadcom Corporation.  All rights reserved.
++*
++* Unless you and Broadcom execute a separate written software license
++* agreement governing use of this software, this software is licensed to you
++* under the terms of the GNU General Public License version 2, available at
++* http://www.broadcom.com/licenses/GPLv2.php (the "GPL").
++*
++* Notwithstanding the above, under no circumstances may you combine this
++* software in any way with any other Broadcom software provided under a
++* license other than the GPL, without Broadcom's express prior written
++* consent.
++*****************************************************************************/
++
++#ifndef _VC_MEM_H
++#define _VC_MEM_H
++
++#include <linux/ioctl.h>
++
++#define VC_MEM_IOC_MAGIC  'v'
++
++#define VC_MEM_IOC_MEM_PHYS_ADDR    _IOR( VC_MEM_IOC_MAGIC, 0, unsigned long )
++#define VC_MEM_IOC_MEM_SIZE         _IOR( VC_MEM_IOC_MAGIC, 1, unsigned int )
++#define VC_MEM_IOC_MEM_BASE         _IOR( VC_MEM_IOC_MAGIC, 2, unsigned int )
++#define VC_MEM_IOC_MEM_LOAD         _IOR( VC_MEM_IOC_MAGIC, 3, unsigned int )
++
++#if defined( __KERNEL__ )
++#define VC_MEM_TO_ARM_ADDR_MASK 0x3FFFFFFF
++
++extern unsigned long mm_vc_mem_phys_addr;
++extern unsigned int  mm_vc_mem_size;
++extern int vc_mem_get_current_size( void );
++#endif
++
++#endif  /* _VC_MEM_H */
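
A hedged user-space sketch of the ioctl interface above. It assumes the character device created by the driver shows up as /dev/vc-mem and that this header is visible to user space; both are assumptions rather than something the patch guarantees.

/* Illustrative only: query the VideoCore memory physical address and size. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/broadcom/vc_mem.h>

int main(void)
{
	unsigned long phys = 0;	/* matches _IOR(..., unsigned long) above */
	unsigned int size = 0;	/* matches _IOR(..., unsigned int) above */
	int fd = open("/dev/vc-mem", O_RDONLY);	/* device name is an assumption */

	if (fd < 0)
		return 1;

	if (ioctl(fd, VC_MEM_IOC_MEM_PHYS_ADDR, &phys) == 0 &&
	    ioctl(fd, VC_MEM_IOC_MEM_SIZE, &size) == 0)
		printf("vc-mem: phys=0x%lx size=%u bytes\n", phys, size);

	close(fd);
	return 0;
}
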
diff --git a/target/linux/brcm2708/patches-4.4/0039-vcsm-VideoCore-shared-memory-service-for-BCM2835.patch b/target/linux/brcm2708/patches-4.4/0039-vcsm-VideoCore-shared-memory-service-for-BCM2835.patch
new file mode 100644
index 0000000..a67c9b2
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0039-vcsm-VideoCore-shared-memory-service-for-BCM2835.patch
@@ -0,0 +1,4393 @@
+From 40793263e733bd6fa4b2a063891661ae59076430 Mon Sep 17 00:00:00 2001
+From: Tim Gover <tgover at broadcom.com>
+Date: Tue, 22 Jul 2014 15:41:04 +0100
+Subject: [PATCH 039/127] vcsm: VideoCore shared memory service for BCM2835
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+Add experimental support for the VideoCore shared memory service.
+This allows user processes to allocate memory from VideoCore's
+GPU relocatable heap and mmap the buffers. Additionally, the memory
+handles can be passed to other VideoCore services such as MMAL, OpenMax
+and DispmanX.
+
+TODO
+* This driver was originally released for BCM28155 which has a different
+  cache architecture to BCM2835. Consequently, in this release only
+  uncached mappings are supported. However, there's no fundamental
+  reason why cached mappings cannot be supported on BCM2835.
+* More refactoring is required to remove the typedefs.
+* Re-enable some of the commented-out debug-fs statistics which were
+  disabled when migrating code from proc-fs.
+* There's a lot of code to support sharing of VCSM in order to support
+  Android. This could probably be done more cleanly or perhaps just
+  removed.
+
+Signed-off-by: Tim Gover <timgover at gmail.com>
+
+config: Disable VC_SM for now to fix hang with cutdown kernel
+
+vcsm: Use boolean as it cannot be built as module
+
+On building the bcm_vc_sm as a module we get the following error:
+
+v7_dma_flush_range and do_munmap are undefined in vc-sm.ko.
+
+Fix by making it not an option to build as a module.
+
+vcsm: Add ioctl for custom cache flushing
+
+vc-sm: Move headers out of arch directory
+
+Signed-off-by: Noralf Trønnes <noralf at tronnes.org>
+---
+ drivers/char/broadcom/Kconfig            |    9 +
+ drivers/char/broadcom/Makefile           |    1 +
+ drivers/char/broadcom/vc_sm/Makefile     |   20 +
+ drivers/char/broadcom/vc_sm/vc_sm_defs.h |  181 ++
+ drivers/char/broadcom/vc_sm/vc_sm_knl.h  |   55 +
+ drivers/char/broadcom/vc_sm/vc_vchi_sm.c |  492 +++++
+ drivers/char/broadcom/vc_sm/vc_vchi_sm.h |   82 +
+ drivers/char/broadcom/vc_sm/vmcs_sm.c    | 3211 ++++++++++++++++++++++++++++++
+ include/linux/broadcom/vmcs_sm_ioctl.h   |  248 +++
+ 9 files changed, 4299 insertions(+)
+ create mode 100644 drivers/char/broadcom/vc_sm/Makefile
+ create mode 100644 drivers/char/broadcom/vc_sm/vc_sm_defs.h
+ create mode 100644 drivers/char/broadcom/vc_sm/vc_sm_knl.h
+ create mode 100644 drivers/char/broadcom/vc_sm/vc_vchi_sm.c
+ create mode 100644 drivers/char/broadcom/vc_sm/vc_vchi_sm.h
+ create mode 100644 drivers/char/broadcom/vc_sm/vmcs_sm.c
+ create mode 100644 include/linux/broadcom/vmcs_sm_ioctl.h
+
+--- a/drivers/char/broadcom/Kconfig
++++ b/drivers/char/broadcom/Kconfig
+@@ -23,3 +23,12 @@ config BCM2708_VCMEM
+           Helper for videocore memory access and total size allocation.
+ 
+ endif
++
++config BCM_VC_SM
++	bool "VMCS Shared Memory"
++	depends on BCM2708_VCHIQ
++	select BCM2708_VCMEM
++	default n
++	help
++	Support for the VC shared memory on the Broadcom reference
++	design. Uses the VCHIQ stack.
+--- a/drivers/char/broadcom/Makefile
++++ b/drivers/char/broadcom/Makefile
+@@ -1,2 +1,3 @@
+ obj-$(CONFIG_BCM_VC_CMA)	+= vc_cma/
+ obj-$(CONFIG_BCM2708_VCMEM)	+= vc_mem.o
++obj-$(CONFIG_BCM_VC_SM)         += vc_sm/
+--- /dev/null
++++ b/drivers/char/broadcom/vc_sm/Makefile
+@@ -0,0 +1,20 @@
++EXTRA_CFLAGS  += -Wall -Wstrict-prototypes -Wno-trigraphs -O2
++
++EXTRA_CFLAGS  += -I"drivers/misc/vc04_services"
++EXTRA_CFLAGS  += -I"drivers/misc/vc04_services/interface/vchi"
++EXTRA_CFLAGS  += -I"drivers/misc/vc04_services/interface/vchiq_arm"
++EXTRA_CFLAGS  += -I"$(srctree)/fs/"
++
++EXTRA_CFLAGS  += -DOS_ASSERT_FAILURE
++EXTRA_CFLAGS  += -D__STDC_VERSION=199901L
++EXTRA_CFLAGS  += -D__STDC_VERSION__=199901L
++EXTRA_CFLAGS  += -D__VCCOREVER__=0
++EXTRA_CFLAGS  += -D__KERNEL__
++EXTRA_CFLAGS  += -D__linux__
++EXTRA_CFLAGS  += -Werror
++
++obj-$(CONFIG_BCM_VC_SM) := vc-sm.o
++
++vc-sm-objs := \
++    vmcs_sm.o \
++    vc_vchi_sm.o
+--- /dev/null
++++ b/drivers/char/broadcom/vc_sm/vc_sm_defs.h
+@@ -0,0 +1,181 @@
++/*****************************************************************************
++* Copyright 2011 Broadcom Corporation.  All rights reserved.
++*
++* Unless you and Broadcom execute a separate written software license
++* agreement governing use of this software, this software is licensed to you
++* under the terms of the GNU General Public License version 2, available at
++* http://www.broadcom.com/licenses/GPLv2.php (the "GPL").
++*
++* Notwithstanding the above, under no circumstances may you combine this
++* software in any way with any other Broadcom software provided under a
++* license other than the GPL, without Broadcom's express prior written
++* consent.
++*****************************************************************************/
++
++#ifndef __VC_SM_DEFS_H__INCLUDED__
++#define __VC_SM_DEFS_H__INCLUDED__
++
++/* FourCC code used for VCHI connection */
++#define VC_SM_SERVER_NAME MAKE_FOURCC("SMEM")
++
++/* Maximum message length */
++#define VC_SM_MAX_MSG_LEN (sizeof(VC_SM_MSG_UNION_T) + \
++	sizeof(VC_SM_MSG_HDR_T))
++#define VC_SM_MAX_RSP_LEN (sizeof(VC_SM_MSG_UNION_T))
++
++/* Resource name maximum size */
++#define VC_SM_RESOURCE_NAME 32
++
++/* All message types supported for HOST->VC direction */
++typedef enum {
++	/* Allocate shared memory block */
++	VC_SM_MSG_TYPE_ALLOC,
++	/* Lock allocated shared memory block */
++	VC_SM_MSG_TYPE_LOCK,
++	/* Unlock allocated shared memory block */
++	VC_SM_MSG_TYPE_UNLOCK,
++	/* Unlock allocated shared memory block, do not answer command */
++	VC_SM_MSG_TYPE_UNLOCK_NOANS,
++	/* Free shared memory block */
++	VC_SM_MSG_TYPE_FREE,
++	/* Resize a shared memory block */
++	VC_SM_MSG_TYPE_RESIZE,
++	/* Walk the allocated shared memory block(s) */
++	VC_SM_MSG_TYPE_WALK_ALLOC,
++
++	/* A previously applied action will need to be reverted */
++	VC_SM_MSG_TYPE_ACTION_CLEAN,
++	VC_SM_MSG_TYPE_MAX
++} VC_SM_MSG_TYPE;
++
++/* Type of memory to be allocated */
++typedef enum {
++	VC_SM_ALLOC_CACHED,
++	VC_SM_ALLOC_NON_CACHED,
++
++} VC_SM_ALLOC_TYPE_T;
++
++/* Message header for all messages in HOST->VC direction */
++typedef struct {
++	int32_t type;
++	uint32_t trans_id;
++	uint8_t body[0];
++
++} VC_SM_MSG_HDR_T;
++
++/* Request to allocate memory (HOST->VC) */
++typedef struct {
++	/* type of memory to allocate */
++	VC_SM_ALLOC_TYPE_T type;
++	/* byte amount of data to allocate per unit */
++	uint32_t base_unit;
++	/* number of unit to allocate */
++	uint32_t num_unit;
++	/* alignement to be applied on allocation */
++	uint32_t alignement;
++	/* identity of who allocated this block */
++	uint32_t allocator;
++	/* resource name (for easier tracking on vc side) */
++	char name[VC_SM_RESOURCE_NAME];
++
++} VC_SM_ALLOC_T;
++
++/* Result of a requested memory allocation (VC->HOST) */
++typedef struct {
++	/* Transaction identifier */
++	uint32_t trans_id;
++
++	/* Resource handle */
++	uint32_t res_handle;
++	/* Pointer to resource buffer */
++	void *res_mem;
++	/* Resource base size (bytes) */
++	uint32_t res_base_size;
++	/* Resource number */
++	uint32_t res_num;
++
++} VC_SM_ALLOC_RESULT_T;
++
++/* Request to free a previously allocated memory (HOST->VC) */
++typedef struct {
++	/* Resource handle (returned from alloc) */
++	uint32_t res_handle;
++	/* Resource buffer (returned from alloc) */
++	void *res_mem;
++
++} VC_SM_FREE_T;
++
++/* Request to lock a previously allocated memory (HOST->VC) */
++typedef struct {
++	/* Resource handle (returned from alloc) */
++	uint32_t res_handle;
++	/* Resource buffer (returned from alloc) */
++	void *res_mem;
++
++} VC_SM_LOCK_UNLOCK_T;
++
++/* Request to resize a previously allocated memory (HOST->VC) */
++typedef struct {
++	/* Resource handle (returned from alloc) */
++	uint32_t res_handle;
++	/* Resource buffer (returned from alloc) */
++	void *res_mem;
++	/* Resource *new* size requested (bytes) */
++	uint32_t res_new_size;
++
++} VC_SM_RESIZE_T;
++
++/* Result of a requested memory lock (VC->HOST) */
++typedef struct {
++	/* Transaction identifier */
++	uint32_t trans_id;
++
++	/* Resource handle */
++	uint32_t res_handle;
++	/* Pointer to resource buffer */
++	void *res_mem;
++	/* Pointer to former resource buffer if the memory
++	 * was reallocated */
++	void *res_old_mem;
++
++} VC_SM_LOCK_RESULT_T;
++
++/* Generic result for a request (VC->HOST) */
++typedef struct {
++	/* Transaction identifier */
++	uint32_t trans_id;
++
++	int32_t success;
++
++} VC_SM_RESULT_T;
++
++/* Request to revert a previously applied action (HOST->VC) */
++typedef struct {
++	/* Action of interest */
++	VC_SM_MSG_TYPE res_action;
++	/* Transaction identifier for the action of interest */
++	uint32_t action_trans_id;
++
++} VC_SM_ACTION_CLEAN_T;
++
++/* Request to remove all data associated with a given allocator (HOST->VC) */
++typedef struct {
++	/* Allocator identifier */
++	uint32_t allocator;
++
++} VC_SM_FREE_ALL_T;
++
++/* Union of ALL messages */
++typedef union {
++	VC_SM_ALLOC_T alloc;
++	VC_SM_ALLOC_RESULT_T alloc_result;
++	VC_SM_FREE_T free;
++	VC_SM_ACTION_CLEAN_T action_clean;
++	VC_SM_RESIZE_T resize;
++	VC_SM_LOCK_RESULT_T lock_result;
++	VC_SM_RESULT_T result;
++	VC_SM_FREE_ALL_T free_all;
++
++} VC_SM_MSG_UNION_T;
++
++#endif /* __VC_SM_DEFS_H__INCLUDED__ */
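
The driver frames each request as a VC_SM_MSG_HDR_T immediately followed by one of the bodies above; vc_vchi_cmd_create() in vc_vchi_sm.c below does exactly this. A minimal sketch of that framing for an allocation request, with buf assumed to be at least VC_SM_MAX_MSG_LEN bytes and linux/string.h available:

/* Illustrative only: lay out an allocation request for vchi_msg_queue(). */
static uint32_t my_frame_alloc_msg(uint8_t *buf, uint32_t trans_id,
				   const VC_SM_ALLOC_T *req)
{
	VC_SM_MSG_HDR_T *hdr = (VC_SM_MSG_HDR_T *)buf;

	hdr->type = VC_SM_MSG_TYPE_ALLOC;	/* message id from the enum above */
	hdr->trans_id = trans_id;		/* echoed back in the reply */
	memcpy(hdr->body, req, sizeof(*req));	/* body follows the header */

	return sizeof(*hdr) + sizeof(*req);	/* length handed to VCHI */
}
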
+--- /dev/null
++++ b/drivers/char/broadcom/vc_sm/vc_sm_knl.h
+@@ -0,0 +1,55 @@
++/*****************************************************************************
++* Copyright 2011 Broadcom Corporation.  All rights reserved.
++*
++* Unless you and Broadcom execute a separate written software license
++* agreement governing use of this software, this software is licensed to you
++* under the terms of the GNU General Public License version 2, available at
++* http://www.broadcom.com/licenses/GPLv2.php (the "GPL").
++*
++* Notwithstanding the above, under no circumstances may you combine this
++* software in any way with any other Broadcom software provided under a
++* license other than the GPL, without Broadcom's express prior written
++* consent.
++*****************************************************************************/
++
++#ifndef __VC_SM_KNL_H__INCLUDED__
++#define __VC_SM_KNL_H__INCLUDED__
++
++#if !defined(__KERNEL__)
++#error "This interface is for kernel use only..."
++#endif
++
++/* Type of memory to be locked (ie mapped) */
++typedef enum {
++	VC_SM_LOCK_CACHED,
++	VC_SM_LOCK_NON_CACHED,
++
++} VC_SM_LOCK_CACHE_MODE_T;
++
++/* Allocate a shared memory handle and block.
++*/
++int vc_sm_alloc(VC_SM_ALLOC_T *alloc, int *handle);
++
++/* Free a previously allocated shared memory handle and block.
++*/
++int vc_sm_free(int handle);
++
++/* Lock a memory handle for use by kernel.
++*/
++int vc_sm_lock(int handle, VC_SM_LOCK_CACHE_MODE_T mode,
++	       long unsigned int *data);
++
++/* Unlock a memory handle in use by kernel.
++*/
++int vc_sm_unlock(int handle, int flush, int no_vc_unlock);
++
++/* Get an internal resource handle mapped from the external one.
++*/
++int vc_sm_int_handle(int handle);
++
++/* Map a shared memory region for use by kernel.
++*/
++int vc_sm_map(int handle, unsigned int sm_addr, VC_SM_LOCK_CACHE_MODE_T mode,
++	      long unsigned int *data);
++
++#endif /* __VC_SM_KNL_H__INCLUDED__ */
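
A hedged kernel-side sketch of the allocate/lock/unlock/free cycle declared above, assuming vc_sm_defs.h and linux/string.h are included. The allocator id, resource name and sizes are placeholders, and what the address returned through vc_sm_lock()'s data argument refers to is defined by vmcs_sm.c rather than by this header.

/* Illustrative only: one uncached 4 KiB block, mapped and released again. */
static int my_vcsm_roundtrip(void)
{
	VC_SM_ALLOC_T alloc = {
		.type      = VC_SM_ALLOC_NON_CACHED,
		.base_unit = 4096,	/* bytes per unit */
		.num_unit  = 1,		/* number of units */
		.allocator = 0,		/* placeholder identity */
	};
	unsigned long addr = 0;
	int handle, ret;

	strcpy(alloc.name, "my-test-buffer");	/* fits in VC_SM_RESOURCE_NAME */

	ret = vc_sm_alloc(&alloc, &handle);
	if (ret)
		return ret;

	ret = vc_sm_lock(handle, VC_SM_LOCK_NON_CACHED, &addr);
	if (!ret) {
		/* addr is now usable by the kernel until we unlock */
		vc_sm_unlock(handle, 0 /* flush */, 0 /* no_vc_unlock */);
	}

	vc_sm_free(handle);
	return ret;
}
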
+--- /dev/null
++++ b/drivers/char/broadcom/vc_sm/vc_vchi_sm.c
+@@ -0,0 +1,492 @@
++/*****************************************************************************
++* Copyright 2011-2012 Broadcom Corporation.  All rights reserved.
++*
++* Unless you and Broadcom execute a separate written software license
++* agreement governing use of this software, this software is licensed to you
++* under the terms of the GNU General Public License version 2, available at
++* http://www.broadcom.com/licenses/GPLv2.php (the "GPL").
++*
++* Notwithstanding the above, under no circumstances may you combine this
++* software in any way with any other Broadcom software provided under a
++* license other than the GPL, without Broadcom's express prior written
++* consent.
++*****************************************************************************/
++
++/* ---- Include Files ----------------------------------------------------- */
++#include <linux/types.h>
++#include <linux/kernel.h>
++#include <linux/list.h>
++#include <linux/semaphore.h>
++#include <linux/mutex.h>
++#include <linux/slab.h>
++#include <linux/kthread.h>
++
++#include "vc_vchi_sm.h"
++
++#define VC_SM_VER  1
++#define VC_SM_MIN_VER 0
++
++/* ---- Private Constants and Types -------------------------------------- */
++
++/* Command blocks come from a pool */
++#define SM_MAX_NUM_CMD_RSP_BLKS 32
++
++struct sm_cmd_rsp_blk {
++	struct list_head head;	/* To create lists */
++	struct semaphore sema;	/* To be signaled when the response is there */
++
++	uint16_t id;
++	uint16_t length;
++
++	uint8_t msg[VC_SM_MAX_MSG_LEN];
++
++	uint32_t wait:1;
++	uint32_t sent:1;
++	uint32_t alloc:1;
++
++};
++
++struct sm_instance {
++	uint32_t num_connections;
++	VCHI_SERVICE_HANDLE_T vchi_handle[VCHI_MAX_NUM_CONNECTIONS];
++	struct task_struct *io_thread;
++	struct semaphore io_sema;
++
++	uint32_t trans_id;
++
++	struct mutex lock;
++	struct list_head cmd_list;
++	struct list_head rsp_list;
++	struct list_head dead_list;
++
++	struct sm_cmd_rsp_blk free_blk[SM_MAX_NUM_CMD_RSP_BLKS];
++	struct list_head free_list;
++	struct mutex free_lock;
++	struct semaphore free_sema;
++
++};
++
++/* ---- Private Variables ------------------------------------------------ */
++
++/* ---- Private Function Prototypes -------------------------------------- */
++
++/* ---- Private Functions ------------------------------------------------ */
++static struct
++sm_cmd_rsp_blk *vc_vchi_cmd_create(struct sm_instance *instance,
++		VC_SM_MSG_TYPE id, void *msg,
++		uint32_t size, int wait)
++{
++	struct sm_cmd_rsp_blk *blk;
++	VC_SM_MSG_HDR_T *hdr;
++
++	if (down_interruptible(&instance->free_sema)) {
++		blk = kmalloc(sizeof(*blk), GFP_KERNEL);
++		if (!blk)
++			return NULL;
++
++		blk->alloc = 1;
++		sema_init(&blk->sema, 0);
++	} else {
++		mutex_lock(&instance->free_lock);
++		blk =
++		    list_first_entry(&instance->free_list,
++				    struct sm_cmd_rsp_blk, head);
++		list_del(&blk->head);
++		mutex_unlock(&instance->free_lock);
++	}
++
++	blk->sent = 0;
++	blk->wait = wait;
++	blk->length = sizeof(*hdr) + size;
++
++	hdr = (VC_SM_MSG_HDR_T *) blk->msg;
++	hdr->type = id;
++	mutex_lock(&instance->lock);
++	hdr->trans_id = blk->id = ++instance->trans_id;
++	mutex_unlock(&instance->lock);
++
++	if (size)
++		memcpy(hdr->body, msg, size);
++
++	return blk;
++}
++
++static void
++vc_vchi_cmd_delete(struct sm_instance *instance, struct sm_cmd_rsp_blk *blk)
++{
++	if (blk->alloc) {
++		kfree(blk);
++		return;
++	}
++
++	mutex_lock(&instance->free_lock);
++	list_add(&blk->head, &instance->free_list);
++	mutex_unlock(&instance->free_lock);
++	up(&instance->free_sema);
++}
++
++static int vc_vchi_sm_videocore_io(void *arg)
++{
++	struct sm_instance *instance = arg;
++	struct sm_cmd_rsp_blk *cmd = NULL, *cmd_tmp;
++	VC_SM_RESULT_T *reply;
++	uint32_t reply_len;
++	int32_t status;
++	int svc_use = 1;
++
++	while (1) {
++		if (svc_use)
++			vchi_service_release(instance->vchi_handle[0]);
++		svc_use = 0;
++		if (!down_interruptible(&instance->io_sema)) {
++			vchi_service_use(instance->vchi_handle[0]);
++			svc_use = 1;
++
++			do {
++				unsigned int flags;
++				/*
++				 * Get new command and move it to response list
++				 */
++				mutex_lock(&instance->lock);
++				if (list_empty(&instance->cmd_list)) {
++					/* no more commands to process */
++					mutex_unlock(&instance->lock);
++					break;
++				}
++				cmd =
++				    list_first_entry(&instance->cmd_list,
++						     struct sm_cmd_rsp_blk,
++						     head);
++				list_move(&cmd->head, &instance->rsp_list);
++				cmd->sent = 1;
++				mutex_unlock(&instance->lock);
++
++				/* Send the command */
++				flags = VCHI_FLAGS_BLOCK_UNTIL_QUEUED;
++				status = vchi_msg_queue(
++						instance->vchi_handle[0],
++						cmd->msg, cmd->length,
++						flags, NULL);
++				if (status) {
++					pr_err("%s: failed to queue message (%d)",
++					     __func__, status);
++				}
++
++				/* If no reply is needed then we're done */
++				if (!cmd->wait) {
++					mutex_lock(&instance->lock);
++					list_del(&cmd->head);
++					mutex_unlock(&instance->lock);
++					vc_vchi_cmd_delete(instance, cmd);
++					continue;
++				}
++
++				if (status) {
++					up(&cmd->sema);
++					continue;
++				}
++
++			} while (1);
++
++			while (!vchi_msg_peek
++			       (instance->vchi_handle[0], (void **)&reply,
++				&reply_len, VCHI_FLAGS_NONE)) {
++				mutex_lock(&instance->lock);
++				list_for_each_entry(cmd, &instance->rsp_list,
++						    head) {
++					if (cmd->id == reply->trans_id)
++						break;
++				}
++				mutex_unlock(&instance->lock);
++
++				if (&cmd->head == &instance->rsp_list) {
++					pr_debug("%s: received response %u, throw away...",
++					     __func__, reply->trans_id);
++				} else if (reply_len > sizeof(cmd->msg)) {
++					pr_err("%s: reply too big (%u) %u, throw away...",
++					     __func__, reply_len,
++					     reply->trans_id);
++				} else {
++					memcpy(cmd->msg, reply, reply_len);
++					up(&cmd->sema);
++				}
++
++				vchi_msg_remove(instance->vchi_handle[0]);
++			}
++
++			/* Go through the dead list and free them */
++			mutex_lock(&instance->lock);
++			list_for_each_entry_safe(cmd, cmd_tmp,
++						 &instance->dead_list, head) {
++				list_del(&cmd->head);
++				vc_vchi_cmd_delete(instance, cmd);
++			}
++			mutex_unlock(&instance->lock);
++		}
++	}
++
++	return 0;
++}
++
++static void vc_sm_vchi_callback(void *param,
++				const VCHI_CALLBACK_REASON_T reason,
++				void *msg_handle)
++{
++	struct sm_instance *instance = param;
++
++	(void)msg_handle;
++
++	switch (reason) {
++	case VCHI_CALLBACK_MSG_AVAILABLE:
++		up(&instance->io_sema);
++		break;
++
++	case VCHI_CALLBACK_SERVICE_CLOSED:
++		pr_info("%s: service CLOSED!!", __func__);
++	default:
++		break;
++	}
++}
++
++VC_VCHI_SM_HANDLE_T vc_vchi_sm_init(VCHI_INSTANCE_T vchi_instance,
++				    VCHI_CONNECTION_T **vchi_connections,
++				    uint32_t num_connections)
++{
++	uint32_t i;
++	struct sm_instance *instance;
++	int status;
++
++	pr_debug("%s: start", __func__);
++
++	if (num_connections > VCHI_MAX_NUM_CONNECTIONS) {
++		pr_err("%s: unsupported number of connections %u (max=%u)",
++			__func__, num_connections, VCHI_MAX_NUM_CONNECTIONS);
++
++		goto err_null;
++	}
++	/* Allocate memory for this instance */
++	instance = kzalloc(sizeof(*instance), GFP_KERNEL);
++
++	/* Misc initialisations */
++	mutex_init(&instance->lock);
++	sema_init(&instance->io_sema, 0);
++	INIT_LIST_HEAD(&instance->cmd_list);
++	INIT_LIST_HEAD(&instance->rsp_list);
++	INIT_LIST_HEAD(&instance->dead_list);
++	INIT_LIST_HEAD(&instance->free_list);
++	sema_init(&instance->free_sema, SM_MAX_NUM_CMD_RSP_BLKS);
++	mutex_init(&instance->free_lock);
++	for (i = 0; i < SM_MAX_NUM_CMD_RSP_BLKS; i++) {
++		sema_init(&instance->free_blk[i].sema, 0);
++		list_add(&instance->free_blk[i].head, &instance->free_list);
++	}
++
++	/* Open the VCHI service connections */
++	instance->num_connections = num_connections;
++	for (i = 0; i < num_connections; i++) {
++		SERVICE_CREATION_T params = {
++			VCHI_VERSION_EX(VC_SM_VER, VC_SM_MIN_VER),
++			VC_SM_SERVER_NAME,
++			vchi_connections[i],
++			0,
++			0,
++			vc_sm_vchi_callback,
++			instance,
++			0,
++			0,
++			0,
++		};
++
++		status = vchi_service_open(vchi_instance,
++					   &params, &instance->vchi_handle[i]);
++		if (status) {
++			pr_err("%s: failed to open VCHI service (%d)",
++					__func__, status);
++
++			goto err_close_services;
++		}
++	}
++
++	/* Create the thread which takes care of all io to/from videocore. */
++	instance->io_thread = kthread_create(&vc_vchi_sm_videocore_io,
++					     (void *)instance, "SMIO");
++	if (instance->io_thread == NULL) {
++		pr_err("%s: failed to create SMIO thread", __func__);
++
++		goto err_close_services;
++	}
++	set_user_nice(instance->io_thread, -10);
++	wake_up_process(instance->io_thread);
++
++	pr_debug("%s: success - instance 0x%x", __func__, (unsigned)instance);
++	return instance;
++
++err_close_services:
++	for (i = 0; i < instance->num_connections; i++) {
++		if (instance->vchi_handle[i] != NULL)
++			vchi_service_close(instance->vchi_handle[i]);
++	}
++	kfree(instance);
++err_null:
++	pr_debug("%s: FAILED", __func__);
++	return NULL;
++}
++
++int vc_vchi_sm_stop(VC_VCHI_SM_HANDLE_T *handle)
++{
++	struct sm_instance *instance;
++	uint32_t i;
++
++	if (handle == NULL) {
++		pr_err("%s: invalid pointer to handle %p", __func__, handle);
++		goto lock;
++	}
++
++	if (*handle == NULL) {
++		pr_err("%s: invalid handle %p", __func__, *handle);
++		goto lock;
++	}
++
++	instance = *handle;
++
++	/* Close all VCHI service connections */
++	for (i = 0; i < instance->num_connections; i++) {
++		int32_t success;
++		vchi_service_use(instance->vchi_handle[i]);
++
++		success = vchi_service_close(instance->vchi_handle[i]);
++	}
++
++	kfree(instance);
++
++	*handle = NULL;
++	return 0;
++
++lock:
++	return -EINVAL;
++}
++
++int vc_vchi_sm_send_msg(VC_VCHI_SM_HANDLE_T handle,
++			VC_SM_MSG_TYPE msg_id,
++			void *msg, uint32_t msg_size,
++			void *result, uint32_t result_size,
++			uint32_t *cur_trans_id, uint8_t wait_reply)
++{
++	int status = 0;
++	struct sm_instance *instance = handle;
++	struct sm_cmd_rsp_blk *cmd_blk;
++
++	if (handle == NULL) {
++		pr_err("%s: invalid handle", __func__);
++		return -EINVAL;
++	}
++	if (msg == NULL) {
++		pr_err("%s: invalid msg pointer", __func__);
++		return -EINVAL;
++	}
++
++	cmd_blk =
++	    vc_vchi_cmd_create(instance, msg_id, msg, msg_size, wait_reply);
++	if (cmd_blk == NULL) {
++		pr_err("[%s]: failed to allocate global tracking resource",
++			__func__);
++		return -ENOMEM;
++	}
++
++	if (cur_trans_id != NULL)
++		*cur_trans_id = cmd_blk->id;
++
++	mutex_lock(&instance->lock);
++	list_add_tail(&cmd_blk->head, &instance->cmd_list);
++	mutex_unlock(&instance->lock);
++	up(&instance->io_sema);
++
++	if (!wait_reply)
++		/* We're done */
++		return 0;
++
++	/* Wait for the response */
++	if (down_interruptible(&cmd_blk->sema)) {
++		mutex_lock(&instance->lock);
++		if (!cmd_blk->sent) {
++			list_del(&cmd_blk->head);
++			mutex_unlock(&instance->lock);
++			vc_vchi_cmd_delete(instance, cmd_blk);
++			return -ENXIO;
++		}
++		mutex_unlock(&instance->lock);
++
++		mutex_lock(&instance->lock);
++		list_move(&cmd_blk->head, &instance->dead_list);
++		mutex_unlock(&instance->lock);
++		up(&instance->io_sema);
++		return -EINTR;	/* We're done */
++	}
++
++	if (result && result_size) {
++		memcpy(result, cmd_blk->msg, result_size);
++	} else {
++		VC_SM_RESULT_T *res = (VC_SM_RESULT_T *) cmd_blk->msg;
++		status = (res->success == 0) ? 0 : -ENXIO;
++	}
++
++	mutex_lock(&instance->lock);
++	list_del(&cmd_blk->head);
++	mutex_unlock(&instance->lock);
++	vc_vchi_cmd_delete(instance, cmd_blk);
++	return status;
++}
++
++int vc_vchi_sm_alloc(VC_VCHI_SM_HANDLE_T handle, VC_SM_ALLOC_T *msg,
++		VC_SM_ALLOC_RESULT_T *result, uint32_t *cur_trans_id)
++{
++	return vc_vchi_sm_send_msg(handle, VC_SM_MSG_TYPE_ALLOC,
++				   msg, sizeof(*msg), result, sizeof(*result),
++				   cur_trans_id, 1);
++}
++
++int vc_vchi_sm_free(VC_VCHI_SM_HANDLE_T handle,
++		    VC_SM_FREE_T *msg, uint32_t *cur_trans_id)
++{
++	return vc_vchi_sm_send_msg(handle, VC_SM_MSG_TYPE_FREE,
++				   msg, sizeof(*msg), 0, 0, cur_trans_id, 0);
++}
++
++int vc_vchi_sm_lock(VC_VCHI_SM_HANDLE_T handle,
++		    VC_SM_LOCK_UNLOCK_T *msg,
++		    VC_SM_LOCK_RESULT_T *result, uint32_t *cur_trans_id)
++{
++	return vc_vchi_sm_send_msg(handle, VC_SM_MSG_TYPE_LOCK,
++				   msg, sizeof(*msg), result, sizeof(*result),
++				   cur_trans_id, 1);
++}
++
++int vc_vchi_sm_unlock(VC_VCHI_SM_HANDLE_T handle,
++		      VC_SM_LOCK_UNLOCK_T *msg,
++		      uint32_t *cur_trans_id, uint8_t wait_reply)
++{
++	return vc_vchi_sm_send_msg(handle, wait_reply ?
++				   VC_SM_MSG_TYPE_UNLOCK :
++				   VC_SM_MSG_TYPE_UNLOCK_NOANS, msg,
++				   sizeof(*msg), 0, 0, cur_trans_id,
++				   wait_reply);
++}
++
++int vc_vchi_sm_resize(VC_VCHI_SM_HANDLE_T handle, VC_SM_RESIZE_T *msg,
++		uint32_t *cur_trans_id)
++{
++	return vc_vchi_sm_send_msg(handle, VC_SM_MSG_TYPE_RESIZE,
++				   msg, sizeof(*msg), 0, 0, cur_trans_id, 1);
++}
++
++int vc_vchi_sm_walk_alloc(VC_VCHI_SM_HANDLE_T handle)
++{
++	return vc_vchi_sm_send_msg(handle, VC_SM_MSG_TYPE_WALK_ALLOC,
++				   0, 0, 0, 0, 0, 0);
++}
++
++int vc_vchi_sm_clean_up(VC_VCHI_SM_HANDLE_T handle, VC_SM_ACTION_CLEAN_T *msg)
++{
++	return vc_vchi_sm_send_msg(handle, VC_SM_MSG_TYPE_ACTION_CLEAN,
++				   msg, sizeof(*msg), 0, 0, 0, 0);
++}
+--- /dev/null
++++ b/drivers/char/broadcom/vc_sm/vc_vchi_sm.h
+@@ -0,0 +1,82 @@
++/*****************************************************************************
++* Copyright 2011 Broadcom Corporation.  All rights reserved.
++*
++* Unless you and Broadcom execute a separate written software license
++* agreement governing use of this software, this software is licensed to you
++* under the terms of the GNU General Public License version 2, available at
++* http://www.broadcom.com/licenses/GPLv2.php (the "GPL").
++*
++* Notwithstanding the above, under no circumstances may you combine this
++* software in any way with any other Broadcom software provided under a
++* license other than the GPL, without Broadcom's express prior written
++* consent.
++*****************************************************************************/
++
++#ifndef __VC_VCHI_SM_H__INCLUDED__
++#define __VC_VCHI_SM_H__INCLUDED__
++
++#include "interface/vchi/vchi.h"
++
++#include "vc_sm_defs.h"
++
++/* Forward declare.
++*/
++typedef struct sm_instance *VC_VCHI_SM_HANDLE_T;
++
++/* Initialize the shared memory service and open the VCHI connection used to
++** talk to it.
++*/
++VC_VCHI_SM_HANDLE_T vc_vchi_sm_init(VCHI_INSTANCE_T vchi_instance,
++				    VCHI_CONNECTION_T **vchi_connections,
++				    uint32_t num_connections);
++
++/* Terminates the shared memory service.
++*/
++int vc_vchi_sm_stop(VC_VCHI_SM_HANDLE_T *handle);
++
++/* Ask the shared memory service to allocate some memory on videocore and
++** return the result of this allocation (which upon success will be a pointer
++** to some memory in videocore space).
++*/
++int vc_vchi_sm_alloc(VC_VCHI_SM_HANDLE_T handle,
++		     VC_SM_ALLOC_T *alloc,
++		     VC_SM_ALLOC_RESULT_T *alloc_result, uint32_t *trans_id);
++
++/* Ask the shared memory service to free up some memory that was previously
++** allocated by the vc_vchi_sm_alloc function call.
++*/
++int vc_vchi_sm_free(VC_VCHI_SM_HANDLE_T handle,
++		    VC_SM_FREE_T *free, uint32_t *trans_id);
++
++/* Ask the shared memory service to lock up some memory that was previously
++** allocated by the vc_vchi_sm_alloc function call.
++*/
++int vc_vchi_sm_lock(VC_VCHI_SM_HANDLE_T handle,
++		    VC_SM_LOCK_UNLOCK_T *lock_unlock,
++		    VC_SM_LOCK_RESULT_T *lock_result, uint32_t *trans_id);
++
++/* Ask the shared memory service to unlock some memory that was previously
++** allocated by the vc_vchi_sm_alloc function call.
++*/
++int vc_vchi_sm_unlock(VC_VCHI_SM_HANDLE_T handle,
++		      VC_SM_LOCK_UNLOCK_T *lock_unlock,
++		      uint32_t *trans_id, uint8_t wait_reply);
++
++/* Ask the shared memory service to resize some memory that was previously
++** allocated by the vc_vchi_sm_alloc function call.
++*/
++int vc_vchi_sm_resize(VC_VCHI_SM_HANDLE_T handle,
++		      VC_SM_RESIZE_T *resize, uint32_t *trans_id);
++
++/* Walk the allocated resources on the videocore side; the allocations will
++** show up in the log.  This is purely for debug/information and takes no
++** specific action.
++*/
++int vc_vchi_sm_walk_alloc(VC_VCHI_SM_HANDLE_T handle);
++
++/* Clean up following a previously interrupted action which left the system
++** in a bad state of some sort.
++*/
++int vc_vchi_sm_clean_up(VC_VCHI_SM_HANDLE_T handle,
++			VC_SM_ACTION_CLEAN_T *action_clean);
++
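++/* Illustrative call-sequence sketch, not part of this interface: the
++** caller-side variable names below and the immediate free are assumptions
++** made for the example only; fill in alloc.type, alloc.base_unit,
++** alloc.num_unit and alloc.name before calling.
++**
++**	VC_VCHI_SM_HANDLE_T sm;
++**	VC_SM_ALLOC_T alloc = { 0 };
++**	VC_SM_ALLOC_RESULT_T result = { 0 };
++**	uint32_t trans_id;
++**
++**	sm = vc_vchi_sm_init(vchi_instance, connections, num_connections);
++**	if (sm != NULL &&
++**	    vc_vchi_sm_alloc(sm, &alloc, &result, &trans_id) == 0) {
++**		VC_SM_FREE_T free = { result.res_handle, result.res_mem };
++**
++**		vc_vchi_sm_free(sm, &free, &trans_id);
++**	}
++**	vc_vchi_sm_stop(&sm);
++*/
++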
++#endif /* __VC_VCHI_SM_H__INCLUDED__ */
+--- /dev/null
++++ b/drivers/char/broadcom/vc_sm/vmcs_sm.c
+@@ -0,0 +1,3211 @@
++/*****************************************************************************
++* Copyright 2011-2012 Broadcom Corporation.  All rights reserved.
++*
++* Unless you and Broadcom execute a separate written software license
++* agreement governing use of this software, this software is licensed to you
++* under the terms of the GNU General Public License version 2, available at
++* http://www.broadcom.com/licenses/GPLv2.php (the "GPL").
++*
++* Notwithstanding the above, under no circumstances may you combine this
++* software in any way with any other Broadcom software provided under a
++* license other than the GPL, without Broadcom's express prior written
++* consent.
++*****************************************************************************/
++
++/* ---- Include Files ----------------------------------------------------- */
++
++#include <linux/cdev.h>
++#include <linux/broadcom/vc_mem.h>
++#include <linux/device.h>
++#include <linux/debugfs.h>
++#include <linux/dma-mapping.h>
++#include <linux/errno.h>
++#include <linux/fs.h>
++#include <linux/hugetlb.h>
++#include <linux/ioctl.h>
++#include <linux/kernel.h>
++#include <linux/list.h>
++#include <linux/module.h>
++#include <linux/mm.h>
++#include <linux/pfn.h>
++#include <linux/proc_fs.h>
++#include <linux/pagemap.h>
++#include <linux/semaphore.h>
++#include <linux/slab.h>
++#include <linux/seq_file.h>
++#include <linux/types.h>
++#include <asm/cacheflush.h>
++
++#include "vchiq_connected.h"
++#include "vc_vchi_sm.h"
++
++#include <linux/broadcom/vmcs_sm_ioctl.h>
++#include "vc_sm_knl.h"
++
++/* ---- Private Constants and Types --------------------------------------- */
++
++#define DEVICE_NAME              "vcsm"
++#define DEVICE_MINOR             0
++
++#define VC_SM_DIR_ROOT_NAME       "vc-smem"
++#define VC_SM_DIR_ALLOC_NAME      "alloc"
++#define VC_SM_STATE               "state"
++#define VC_SM_STATS               "statistics"
++#define VC_SM_RESOURCES           "resources"
++#define VC_SM_DEBUG               "debug"
++#define VC_SM_WRITE_BUF_SIZE      128
++
++/* Statistics tracked per resource and globally.
++*/
++enum SM_STATS_T {
++	/* Attempt. */
++	ALLOC,
++	FREE,
++	LOCK,
++	UNLOCK,
++	MAP,
++	FLUSH,
++	INVALID,
++
++	END_ATTEMPT,
++
++	/* Failure. */
++	ALLOC_FAIL,
++	FREE_FAIL,
++	LOCK_FAIL,
++	UNLOCK_FAIL,
++	MAP_FAIL,
++	FLUSH_FAIL,
++	INVALID_FAIL,
++
++	END_ALL,
++
++};
++
++static const char *const sm_stats_human_read[] = {
++	"Alloc",
++	"Free",
++	"Lock",
++	"Unlock",
++	"Map",
++	"Cache Flush",
++	"Cache Invalidate",
++};
++
++typedef int (*VC_SM_SHOW) (struct seq_file *s, void *v);
++struct SM_PDE_T {
++	VC_SM_SHOW show;          /* Debug fs function hookup. */
++	struct dentry *dir_entry; /* Debug fs directory entry. */
++	void *priv_data;          /* Private data */
++
++};
++
++/* Single mapping of an allocated resource, tracked globally across all
++** devices.
++*/
++struct sm_mmap {
++	struct list_head map_list;	/* Linked list of maps. */
++
++	struct SM_RESOURCE_T *resource;	/* Pointer to the resource. */
++
++	pid_t res_pid;		/* PID owning that resource. */
++	unsigned int res_vc_hdl;	/* Resource handle (videocore). */
++	unsigned int res_usr_hdl;	/* Resource handle (user). */
++
++	long unsigned int res_addr;	/* Mapped virtual address. */
++	struct vm_area_struct *vma;	/* VM area for this mapping. */
++	unsigned int ref_count;	/* Reference count to this vma. */
++
++	/* Used to link maps associated with a resource. */
++	struct list_head resource_map_list;
++};
++
++/* Single resource allocation tracked for each opened device.
++*/
++struct SM_RESOURCE_T {
++	struct list_head resource_list;	/* List of resources. */
++	struct list_head global_resource_list;	/* Global list of resources. */
++
++	pid_t pid;		/* PID owning that resource. */
++	uint32_t res_guid;	/* Unique identifier. */
++	uint32_t lock_count;	/* Lock count for this resource. */
++	uint32_t ref_count;	/* Ref count for this resource. */
++
++	uint32_t res_handle;	/* Resource allocation handle. */
++	void *res_base_mem;	/* Resource base memory address. */
++	uint32_t res_size;	/* Resource size allocated. */
++	enum vmcs_sm_cache_e res_cached;	/* Resource cache type. */
++	struct SM_RESOURCE_T *res_shared;	/* Shared resource */
++
++	enum SM_STATS_T res_stats[END_ALL];	/* Resource statistics. */
++
++	uint8_t map_count;	/* Counter of mappings for this resource. */
++	struct list_head map_list;	/* Maps associated with a resource. */
++
++	struct SM_PRIV_DATA_T *private;
++};
++
++/* Private file data associated with each opened device.
++*/
++struct SM_PRIV_DATA_T {
++	struct list_head resource_list; /* List of resources. */
++
++	pid_t pid;                      /* PID of creator. */
++
++	struct dentry *dir_pid;	   /* Debug fs entries root. */
++	struct SM_PDE_T dir_stats; /* Debug fs entries statistics sub-tree. */
++	struct SM_PDE_T dir_res;   /* Debug fs resource sub-tree. */
++
++	int restart_sys;           /* Tracks restart on interrupt. */
++	VC_SM_MSG_TYPE int_action; /* Interrupted action. */
++	uint32_t int_trans_id;     /* Interrupted transaction. */
++
++};
++
++/* Global state information.
++*/
++struct SM_STATE_T {
++	VC_VCHI_SM_HANDLE_T sm_handle;	/* Handle for videocore service. */
++	struct dentry *dir_root;   /* Debug fs entries root. */
++	struct dentry *dir_alloc;  /* Debug fs entries allocations. */
++	struct SM_PDE_T dir_stats; /* Debug fs entries statistics sub-tree. */
++	struct SM_PDE_T dir_state; /* Debug fs entries state sub-tree. */
++	struct dentry *debug;      /* Debug fs entries debug. */
++
++	struct mutex map_lock;          /* Global map lock. */
++	struct list_head map_list;      /* List of maps. */
++	struct list_head resource_list;	/* List of resources. */
++
++	enum SM_STATS_T deceased[END_ALL];    /* Natural termination stats. */
++	enum SM_STATS_T terminated[END_ALL];  /* Forced termination stats. */
++	uint32_t res_deceased_cnt;	      /* Natural termination counter. */
++	uint32_t res_terminated_cnt;	      /* Forced termination counter. */
++
++	struct cdev sm_cdev;	/* Device. */
++	dev_t sm_devid;		/* Device identifier. */
++	struct class *sm_class;	/* Class. */
++	struct device *sm_dev;	/* Device. */
++
++	struct SM_PRIV_DATA_T *data_knl;    /* Kernel internal data tracking. */
++
++	struct mutex lock;	/* Global lock. */
++	uint32_t guid;		/* GUID (next) tracker. */
++
++};
++
++/* ---- Private Variables ----------------------------------------------- */
++
++static struct SM_STATE_T *sm_state;
++static int sm_inited;
++
++static const char *const sm_cache_map_vector[] = {
++	"(null)",
++	"host",
++	"videocore",
++	"host+videocore",
++};
++
++/* ---- Private Function Prototypes -------------------------------------- */
++
++/* ---- Private Functions ------------------------------------------------ */
++
++static inline unsigned vcaddr_to_pfn(unsigned long vc_addr)
++{
++	unsigned long pfn = vc_addr & 0x3FFFFFFF;
++	pfn += mm_vc_mem_phys_addr;
++	pfn >>= PAGE_SHIFT;
++	return pfn;
++}
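++
++/* Worked example (hypothetical values, for illustration only): with
++** mm_vc_mem_phys_addr == 0 and 4 KiB pages (PAGE_SHIFT == 12), a videocore
++** bus address of 0xC0100000 masks to 0x00100000 and shifts down to PFN 0x100.
++*/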
++
++/* Carry over to the global state statistics the statistics that were owned
++** by a now-deceased resource.
++*/
++static void vc_sm_resource_deceased(struct SM_RESOURCE_T *p_res, int terminated)
++{
++	if (sm_state != NULL) {
++		if (p_res != NULL) {
++			int ix;
++
++			if (terminated)
++				sm_state->res_terminated_cnt++;
++			else
++				sm_state->res_deceased_cnt++;
++
++			for (ix = 0; ix < END_ALL; ix++) {
++				if (terminated)
++					sm_state->terminated[ix] +=
++					    p_res->res_stats[ix];
++				else
++					sm_state->deceased[ix] +=
++					    p_res->res_stats[ix];
++			}
++		}
++	}
++}
++
++/* Fetch a videocore handle corresponding to a mapping of the pid+address;
++** returns 0 (i.e. NULL) if no such handle exists in the global map.
++*/
++static unsigned int vmcs_sm_vc_handle_from_pid_and_address(unsigned int pid,
++							   unsigned int addr)
++{
++	struct sm_mmap *map = NULL;
++	unsigned int handle = 0;
++
++	if (!sm_state || addr == 0)
++		goto out;
++
++	mutex_lock(&(sm_state->map_lock));
++
++	/* Lookup the resource.
++	 */
++	if (!list_empty(&sm_state->map_list)) {
++		list_for_each_entry(map, &sm_state->map_list, map_list) {
++			if (map->res_pid != pid || map->res_addr != addr)
++				continue;
++
++			pr_debug("[%s]: global map %p (pid %u, addr %lx) -> vc-hdl %x (usr-hdl %x)\n",
++				__func__, map, map->res_pid, map->res_addr,
++				map->res_vc_hdl, map->res_usr_hdl);
++
++			handle = map->res_vc_hdl;
++			break;
++		}
++	}
++
++	mutex_unlock(&(sm_state->map_lock));
++
++out:
++	/* Use a debug log here as it may be a valid situation that we query
++	 ** for something that is not mapped; we do not want a kernel log each
++	 ** time around.
++	 **
++	 ** There are other error logs that would pop up accordingly if someone
++	 ** subsequently tries to use something invalid after being told not to
++	 ** use it...
++	 */
++	if (handle == 0) {
++		pr_debug("[%s]: not a valid map (pid %u, addr %x)\n",
++			__func__, pid, addr);
++	}
++
++	return handle;
++}
++
++/* Fetch a user handle corresponding to a mapping of the pid+address;
++** returns 0 (i.e. NULL) if no such handle exists in the global map.
++*/
++static unsigned int vmcs_sm_usr_handle_from_pid_and_address(unsigned int pid,
++							    unsigned int addr)
++{
++	struct sm_mmap *map = NULL;
++	unsigned int handle = 0;
++
++	if (!sm_state || addr == 0)
++		goto out;
++
++	mutex_lock(&(sm_state->map_lock));
++
++	/* Lookup the resource.
++	 */
++	if (!list_empty(&sm_state->map_list)) {
++		list_for_each_entry(map, &sm_state->map_list, map_list) {
++			if (map->res_pid != pid || map->res_addr != addr)
++				continue;
++
++			pr_debug("[%s]: global map %p (pid %u, addr %lx) -> usr-hdl %x (vc-hdl %x)\n",
++				__func__, map, map->res_pid, map->res_addr,
++				map->res_usr_hdl, map->res_vc_hdl);
++
++			handle = map->res_usr_hdl;
++			break;
++		}
++	}
++
++	mutex_unlock(&(sm_state->map_lock));
++
++out:
++	/* Use a debug log here as it may be a valid situation that we query
++	 * for something that is not mapped yet.
++	 *
++	 * There are other error logs that would pop up accordingly if someone
++	 * subsequently tries to use something invalid after being told not to
++	 * use it...
++	 */
++	if (handle == 0)
++		pr_debug("[%s]: not a valid map (pid %u, addr %x)\n",
++			__func__, pid, addr);
++
++	return handle;
++}
++
++#if defined(DO_NOT_USE)
++/* Fetch an address corresponding to a mapping of the pid+handle;
++** returns 0 (i.e. NULL) if no such address exists in the global map.
++*/
++static unsigned int vmcs_sm_usr_address_from_pid_and_vc_handle(unsigned int pid,
++							       unsigned int hdl)
++{
++	struct sm_mmap *map = NULL;
++	unsigned int addr = 0;
++
++	if (sm_state == NULL || hdl == 0)
++		goto out;
++
++	mutex_lock(&(sm_state->map_lock));
++
++	/* Lookup the resource.
++	 */
++	if (!list_empty(&sm_state->map_list)) {
++		list_for_each_entry(map, &sm_state->map_list, map_list) {
++			if (map->res_pid != pid || map->res_vc_hdl != hdl)
++				continue;
++
++			pr_debug("[%s]: global map %p (pid %u, vc-hdl %x, usr-hdl %x) -> addr %lx\n",
++				__func__, map, map->res_pid, map->res_vc_hdl,
++				map->res_usr_hdl, map->res_addr);
++
++			addr = map->res_addr;
++			break;
++		}
++	}
++
++	mutex_unlock(&(sm_state->map_lock));
++
++out:
++	/* Use a debug log here as it may be a valid situation that we query
++	 ** for something that is not mapped; we do not want a kernel log each
++	 ** time around.
++	 **
++	 ** There are other error logs that would pop up accordingly if someone
++	 ** subsequently tries to use something invalid after being told not to
++	 ** use it...
++	 */
++	if (addr == 0)
++		pr_debug("[%s]: not a valid map (pid %u, hdl %x)\n",
++			__func__, pid, hdl);
++
++	return addr;
++}
++#endif
++
++/* Fetch an address corresponding to a mapping of the pid+handle;
++** returns 0 (i.e. NULL) if no such address exists in the global map.
++*/
++static unsigned int vmcs_sm_usr_address_from_pid_and_usr_handle(
++					unsigned int pid, unsigned int hdl)
++{
++	struct sm_mmap *map = NULL;
++	unsigned int addr = 0;
++
++	if (sm_state == NULL || hdl == 0)
++		goto out;
++
++	mutex_lock(&(sm_state->map_lock));
++
++	/* Lookup the resource.
++	 */
++	if (!list_empty(&sm_state->map_list)) {
++		list_for_each_entry(map, &sm_state->map_list, map_list) {
++			if (map->res_pid != pid || map->res_usr_hdl != hdl)
++				continue;
++
++			pr_debug("[%s]: global map %p (pid %u, vc-hdl %x, usr-hdl %x) -> addr %lx\n",
++				__func__, map, map->res_pid, map->res_vc_hdl,
++				map->res_usr_hdl, map->res_addr);
++
++			addr = map->res_addr;
++			break;
++		}
++	}
++
++	mutex_unlock(&(sm_state->map_lock));
++
++out:
++	/* Use a debug log here as it may be a valid situation that we query
++	 * for something that is not mapped; we do not want a kernel log each
++	 * time around.
++	 *
++	 * There are other error logs that would pop up accordingly if someone
++	 * subsequently tries to use something invalid after being told not to
++	 * use it...
++	 */
++	if (addr == 0)
++		pr_debug("[%s]: not a valid map (pid %u, hdl %x)\n", __func__,
++				pid, hdl);
++
++	return addr;
++}
++
++/* Adds a resource mapping to the global data list.
++*/
++static void vmcs_sm_add_map(struct SM_STATE_T *state,
++			    struct SM_RESOURCE_T *resource, struct sm_mmap *map)
++{
++	mutex_lock(&(state->map_lock));
++
++	/* Add to the global list of mappings
++	 */
++	list_add(&map->map_list, &state->map_list);
++
++	/* Add to the list of mappings for this resource
++	 */
++	list_add(&map->resource_map_list, &resource->map_list);
++	resource->map_count++;
++
++	mutex_unlock(&(state->map_lock));
++
++	pr_debug("[%s]: added map %p (pid %u, vc-hdl %x, usr-hdl %x, addr %lx)\n",
++		__func__, map, map->res_pid, map->res_vc_hdl,
++		map->res_usr_hdl, map->res_addr);
++}
++
++/* Removes a resource mapping from the global data list.
++*/
++static void vmcs_sm_remove_map(struct SM_STATE_T *state,
++			       struct SM_RESOURCE_T *resource,
++			       struct sm_mmap *map)
++{
++	mutex_lock(&(state->map_lock));
++
++	/* Remove from the global list of mappings
++	 */
++	list_del(&map->map_list);
++
++	/* Remove from the list of mapping for this resource
++	 */
++	list_del(&map->resource_map_list);
++	if (resource->map_count > 0)
++		resource->map_count--;
++
++	mutex_unlock(&(state->map_lock));
++
++	pr_debug("[%s]: removed map %p (pid %d, vc-hdl %x, usr-hdl %x, addr %lx)\n",
++		__func__, map, map->res_pid, map->res_vc_hdl, map->res_usr_hdl,
++		map->res_addr);
++
++	kfree(map);
++}
++
++/* Read callback for the global state debugfs entry.
++*/
++static int vc_sm_global_state_show(struct seq_file *s, void *v)
++{
++	struct sm_mmap *map = NULL;
++	int map_count = 0;
++
++	if (sm_state == NULL)
++		return 0;
++
++	seq_printf(s, "\nVC-ServiceHandle     0x%x\n",
++		   (unsigned int)sm_state->sm_handle);
++
++	/* Log all applicable mapping(s).
++	 */
++
++	mutex_lock(&(sm_state->map_lock));
++
++	if (!list_empty(&sm_state->map_list)) {
++		list_for_each_entry(map, &sm_state->map_list, map_list) {
++			map_count++;
++
++			seq_printf(s, "\nMapping                0x%x\n",
++				   (unsigned int)map);
++			seq_printf(s, "           TGID        %u\n",
++				   map->res_pid);
++			seq_printf(s, "           VC-HDL      0x%x\n",
++				   map->res_vc_hdl);
++			seq_printf(s, "           USR-HDL     0x%x\n",
++				   map->res_usr_hdl);
++			seq_printf(s, "           USR-ADDR    0x%lx\n",
++				   map->res_addr);
++		}
++	}
++
++	mutex_unlock(&(sm_state->map_lock));
++	seq_printf(s, "\n\nTotal map count:   %d\n\n", map_count);
++
++	return 0;
++}
++
++static int vc_sm_global_statistics_show(struct seq_file *s, void *v)
++{
++	int ix;
++
++	/* Global state tracked statistics.
++	 */
++	if (sm_state != NULL) {
++		seq_puts(s, "\nDeceased Resources Statistics\n");
++
++		seq_printf(s, "\nNatural Cause (%u occurrences)\n",
++			   sm_state->res_deceased_cnt);
++		for (ix = 0; ix < END_ATTEMPT; ix++) {
++			if (sm_state->deceased[ix] > 0) {
++				seq_printf(s, "                %u\t%s\n",
++					   sm_state->deceased[ix],
++					   sm_stats_human_read[ix]);
++			}
++		}
++		seq_puts(s, "\n");
++		for (ix = 0; ix < END_ATTEMPT; ix++) {
++			if (sm_state->deceased[ix + END_ATTEMPT] > 0) {
++				seq_printf(s, "                %u\tFAILED %s\n",
++					   sm_state->deceased[ix + END_ATTEMPT],
++					   sm_stats_human_read[ix]);
++			}
++		}
++
++		seq_printf(s, "\nForceful (%u occurrences)\n",
++			   sm_state->res_terminated_cnt);
++		for (ix = 0; ix < END_ATTEMPT; ix++) {
++			if (sm_state->terminated[ix] > 0) {
++				seq_printf(s, "                %u\t%s\n",
++					   sm_state->terminated[ix],
++					   sm_stats_human_read[ix]);
++			}
++		}
++		seq_puts(s, "\n");
++		for (ix = 0; ix < END_ATTEMPT; ix++) {
++			if (sm_state->terminated[ix + END_ATTEMPT] > 0) {
++				seq_printf(s, "                %u\tFAILED %s\n",
++					   sm_state->terminated[ix +
++								END_ATTEMPT],
++					   sm_stats_human_read[ix]);
++			}
++		}
++	}
++
++	return 0;
++}
++
++#if 0
++/* Read callback for the statistics proc entry.
++*/
++static int vc_sm_statistics_show(struct seq_file *s, void *v)
++{
++	int ix;
++	struct SM_PRIV_DATA_T *file_data;
++	struct SM_RESOURCE_T *resource;
++	int res_count = 0;
++	struct SM_PDE_T *p_pde;
++
++	p_pde = (struct SM_PDE_T *)(s->private);
++	file_data = (struct SM_PRIV_DATA_T *)(p_pde->priv_data);
++
++	if (file_data == NULL)
++		return 0;
++
++	/* Per process statistics.
++	 */
++
++	seq_printf(s, "\nStatistics for TGID %d\n", file_data->pid);
++
++	mutex_lock(&(sm_state->map_lock));
++
++	if (!list_empty(&file_data->resource_list)) {
++		list_for_each_entry(resource, &file_data->resource_list,
++				    resource_list) {
++			res_count++;
++
++			seq_printf(s, "\nGUID:         0x%x\n\n",
++				   resource->res_guid);
++			for (ix = 0; ix < END_ATTEMPT; ix++) {
++				if (resource->res_stats[ix] > 0) {
++					seq_printf(s,
++						   "                %u\t%s\n",
++						   resource->res_stats[ix],
++						   sm_stats_human_read[ix]);
++				}
++			}
++			seq_puts(s, "\n");
++			for (ix = 0; ix < END_ATTEMPT; ix++) {
++				if (resource->res_stats[ix + END_ATTEMPT] > 0) {
++					seq_printf(s,
++						   "                %u\tFAILED %s\n",
++						   resource->res_stats[
++						   ix + END_ATTEMPT],
++						   sm_stats_human_read[ix]);
++				}
++			}
++		}
++	}
++
++	mutex_unlock(&(sm_state->map_lock));
++
++	seq_printf(s, "\nResources Count %d\n", res_count);
++
++	return 0;
++}
++#endif
++
++#if 0
++/* Read callback for the allocation proc entry.  */
++static int vc_sm_alloc_show(struct seq_file *s, void *v)
++{
++	struct SM_PRIV_DATA_T *file_data;
++	struct SM_RESOURCE_T *resource;
++	int alloc_count = 0;
++	struct SM_PDE_T *p_pde;
++
++	p_pde = (struct SM_PDE_T *)(s->private);
++	file_data = (struct SM_PRIV_DATA_T *)(p_pde->priv_data);
++
++	if (!file_data)
++		return 0;
++
++	/* Per process statistics.  */
++	seq_printf(s, "\nAllocation for TGID %d\n", file_data->pid);
++
++	mutex_lock(&(sm_state->map_lock));
++
++	if (!list_empty(&file_data->resource_list)) {
++		list_for_each_entry(resource, &file_data->resource_list,
++				    resource_list) {
++			alloc_count++;
++
++			seq_printf(s, "\nGUID:              0x%x\n",
++				   resource->res_guid);
++			seq_printf(s, "Lock Count:        %u\n",
++				   resource->lock_count);
++			seq_printf(s, "Mapped:            %s\n",
++				   (resource->map_count ? "yes" : "no"));
++			seq_printf(s, "VC-handle:         0x%x\n",
++				   resource->res_handle);
++			seq_printf(s, "VC-address:        0x%p\n",
++				   resource->res_base_mem);
++			seq_printf(s, "VC-size (bytes):   %u\n",
++				   resource->res_size);
++			seq_printf(s, "Cache:             %s\n",
++				   sm_cache_map_vector[resource->res_cached]);
++		}
++	}
++
++	mutex_unlock(&(sm_state->map_lock));
++
++	seq_printf(s, "\n\nTotal allocation count: %d\n\n", alloc_count);
++
++	return 0;
++}
++#endif
++
++static int vc_sm_seq_file_show(struct seq_file *s, void *v)
++{
++	struct SM_PDE_T *sm_pde;
++
++	sm_pde = (struct SM_PDE_T *)(s->private);
++
++	if (sm_pde && sm_pde->show)
++		sm_pde->show(s, v);
++
++	return 0;
++}
++
++static int vc_sm_single_open(struct inode *inode, struct file *file)
++{
++	return single_open(file, vc_sm_seq_file_show, inode->i_private);
++}
++
++static const struct file_operations vc_sm_debug_fs_fops = {
++	.open = vc_sm_single_open,
++	.read = seq_read,
++	.llseek = seq_lseek,
++	.release = single_release,
++};
++
++/* Adds a resource to the private data list which tracks all the allocated
++** data.
++*/
++static void vmcs_sm_add_resource(struct SM_PRIV_DATA_T *privdata,
++				 struct SM_RESOURCE_T *resource)
++{
++	mutex_lock(&(sm_state->map_lock));
++	list_add(&resource->resource_list, &privdata->resource_list);
++	list_add(&resource->global_resource_list, &sm_state->resource_list);
++	mutex_unlock(&(sm_state->map_lock));
++
++	pr_debug("[%s]: added resource %p (base addr %p, hdl %x, size %u, cache %u)\n",
++		__func__, resource, resource->res_base_mem,
++		resource->res_handle, resource->res_size, resource->res_cached);
++}
++
++/* Locates a resource and acquires a reference on it.
++** The resource won't be deleted while there is a reference on it.
++*/
++static struct SM_RESOURCE_T *vmcs_sm_acquire_resource(struct SM_PRIV_DATA_T
++						      *private,
++						      unsigned int res_guid)
++{
++	struct SM_RESOURCE_T *resource, *ret = NULL;
++
++	mutex_lock(&(sm_state->map_lock));
++
++	list_for_each_entry(resource, &private->resource_list, resource_list) {
++		if (resource->res_guid != res_guid)
++			continue;
++
++		pr_debug("[%s]: located resource %p (guid: %x, base addr %p, hdl %x, size %u, cache %u)\n",
++			__func__, resource, resource->res_guid,
++			resource->res_base_mem, resource->res_handle,
++			resource->res_size, resource->res_cached);
++		resource->ref_count++;
++		ret = resource;
++		break;
++	}
++
++	mutex_unlock(&(sm_state->map_lock));
++
++	return ret;
++}
++
++/* Locates a resource and acquires a reference on it.
++** The resource won't be deleted while there is a reference on it.
++*/
++static struct SM_RESOURCE_T *vmcs_sm_acquire_first_resource(
++		struct SM_PRIV_DATA_T *private)
++{
++	struct SM_RESOURCE_T *resource, *ret = NULL;
++
++	mutex_lock(&(sm_state->map_lock));
++
++	list_for_each_entry(resource, &private->resource_list, resource_list) {
++		pr_debug("[%s]: located resource %p (guid: %x, base addr %p, hdl %x, size %u, cache %u)\n",
++			__func__, resource, resource->res_guid,
++			resource->res_base_mem, resource->res_handle,
++			resource->res_size, resource->res_cached);
++		resource->ref_count++;
++		ret = resource;
++		break;
++	}
++
++	mutex_unlock(&(sm_state->map_lock));
++
++	return ret;
++}
++
++/* Locates a resource and acquires a reference on it.
++** The resource won't be deleted while there is a reference on it.
++*/
++static struct SM_RESOURCE_T *vmcs_sm_acquire_global_resource(unsigned int
++							     res_guid)
++{
++	struct SM_RESOURCE_T *resource, *ret = NULL;
++
++	mutex_lock(&(sm_state->map_lock));
++
++	list_for_each_entry(resource, &sm_state->resource_list,
++			    global_resource_list) {
++		if (resource->res_guid != res_guid)
++			continue;
++
++		pr_debug("[%s]: located resource %p (guid: %x, base addr %p, hdl %x, size %u, cache %u)\n",
++			__func__, resource, resource->res_guid,
++			resource->res_base_mem, resource->res_handle,
++			resource->res_size, resource->res_cached);
++		resource->ref_count++;
++		ret = resource;
++		break;
++	}
++
++	mutex_unlock(&(sm_state->map_lock));
++
++	return ret;
++}
++
++/* Release a previously acquired resource.
++** The resource will be deleted when its refcount reaches 0.
++*/
++static void vmcs_sm_release_resource(struct SM_RESOURCE_T *resource, int force)
++{
++	struct SM_PRIV_DATA_T *private = resource->private;
++	struct sm_mmap *map, *map_tmp;
++	struct SM_RESOURCE_T *res_tmp;
++	int ret;
++
++	mutex_lock(&(sm_state->map_lock));
++
++	if (--resource->ref_count) {
++		if (force)
++			pr_err("[%s]: resource %p in use\n", __func__, resource);
++
++		mutex_unlock(&(sm_state->map_lock));
++		return;
++	}
++
++	/* Time to free the resource. Start by removing it from the list */
++	list_del(&resource->resource_list);
++	list_del(&resource->global_resource_list);
++
++	/* Walk the global resource list to find out whether the resource is
++	 * used somewhere else, in which case we don't want to delete it.
++	 */
++	list_for_each_entry(res_tmp, &sm_state->resource_list,
++			    global_resource_list) {
++		if (res_tmp->res_handle == resource->res_handle) {
++			resource->res_handle = 0;
++			break;
++		}
++	}
++
++	mutex_unlock(&(sm_state->map_lock));
++
++	pr_debug("[%s]: freeing data - guid %x, hdl %x, base address %p\n",
++		__func__, resource->res_guid, resource->res_handle,
++		resource->res_base_mem);
++	resource->res_stats[FREE]++;
++
++	/* Make sure the resource we're removing is unmapped first */
++	if (resource->map_count && !list_empty(&resource->map_list)) {
++		down_write(&current->mm->mmap_sem);
++		list_for_each_entry_safe(map, map_tmp, &resource->map_list,
++					 resource_map_list) {
++			ret =
++			    do_munmap(current->mm, map->res_addr,
++				      resource->res_size);
++			if (ret) {
++				pr_err("[%s]: could not unmap resource %p\n",
++					__func__, resource);
++			}
++		}
++		up_write(&current->mm->mmap_sem);
++	}
++
++	/* Free up the videocore allocated resource.
++	 */
++	if (resource->res_handle) {
++		VC_SM_FREE_T free = {
++			resource->res_handle, resource->res_base_mem
++		};
++		int status = vc_vchi_sm_free(sm_state->sm_handle, &free,
++					     &private->int_trans_id);
++		if (status != 0 && status != -EINTR) {
++			pr_err("[%s]: failed to free memory on videocore (status: %u, trans_id: %u)\n",
++			     __func__, status, private->int_trans_id);
++			resource->res_stats[FREE_FAIL]++;
++			ret = -EPERM;
++		}
++	}
++
++	/* Free up the shared resource.
++	 */
++	if (resource->res_shared)
++		vmcs_sm_release_resource(resource->res_shared, 0);
++
++	/* Free up the local resource tracking this allocation.
++	 */
++	vc_sm_resource_deceased(resource, force);
++	kfree(resource);
++}
++
++/* Dump the map table for the driver.  If pid is -1, dump the whole table;
++** if pid is a valid pid (non -1), dump only the entries associated with the
++** pid of interest.
++*/
++static void vmcs_sm_host_walk_map_per_pid(int pid)
++{
++	struct sm_mmap *map = NULL;
++
++	/* Make sure the device was started properly.
++	 */
++	if (sm_state == NULL) {
++		pr_err("[%s]: invalid device\n", __func__);
++		return;
++	}
++
++	mutex_lock(&(sm_state->map_lock));
++
++	/* Log all applicable mapping(s).
++	 */
++	if (!list_empty(&sm_state->map_list)) {
++		list_for_each_entry(map, &sm_state->map_list, map_list) {
++			if (pid == -1 || map->res_pid == pid) {
++				pr_info("[%s]: tgid: %u - vc-hdl: %x, usr-hdl: %x, usr-addr: %lx\n",
++				     __func__, map->res_pid, map->res_vc_hdl,
++				     map->res_usr_hdl, map->res_addr);
++			}
++		}
++	}
++
++	mutex_unlock(&(sm_state->map_lock));
++
++	return;
++}
++
++/* Dump the allocation table from the host's point of view.  This only dumps
++** the data allocated for the process/device referenced by file_data.
++*/
++static void vmcs_sm_host_walk_alloc(struct SM_PRIV_DATA_T *file_data)
++{
++	struct SM_RESOURCE_T *resource = NULL;
++
++	/* Make sure the device was started properly.
++	 */
++	if ((sm_state == NULL) || (file_data == NULL)) {
++		pr_err("[%s]: invalid device\n", __func__);
++		return;
++	}
++
++	mutex_lock(&(sm_state->map_lock));
++
++	if (!list_empty(&file_data->resource_list)) {
++		list_for_each_entry(resource, &file_data->resource_list,
++				    resource_list) {
++			pr_info("[%s]: guid: %x - hdl: %x, vc-mem: %p, size: %u, cache: %u\n",
++			     __func__, resource->res_guid, resource->res_handle,
++			     resource->res_base_mem, resource->res_size,
++			     resource->res_cached);
++		}
++	}
++
++	mutex_unlock(&(sm_state->map_lock));
++
++	return;
++}
++
++/* Create support for private data tracking.
++*/
++static struct SM_PRIV_DATA_T *vc_sm_create_priv_data(pid_t id)
++{
++	char alloc_name[32];
++	struct SM_PRIV_DATA_T *file_data = NULL;
++
++	/* Allocate private structure. */
++	file_data = kzalloc(sizeof(*file_data), GFP_KERNEL);
++
++	if (!file_data) {
++		pr_err("[%s]: cannot allocate file data\n", __func__);
++		goto out;
++	}
++
++	snprintf(alloc_name, sizeof(alloc_name), "%d", id);
++
++	INIT_LIST_HEAD(&file_data->resource_list);
++	file_data->pid = id;
++	file_data->dir_pid = debugfs_create_dir(alloc_name,
++			sm_state->dir_alloc);
++#if 0
++  /* TODO: fix this to support querying statistics per pid */
++
++	if (IS_ERR_OR_NULL(file_data->dir_pid)) {
++		file_data->dir_pid = NULL;
++	} else {
++		struct dentry *dir_entry;
++
++		dir_entry = debugfs_create_file(VC_SM_RESOURCES, S_IRUGO,
++				file_data->dir_pid, file_data,
++				vc_sm_debug_fs_fops);
++
++		file_data->dir_res.dir_entry = dir_entry;
++		file_data->dir_res.priv_data = file_data;
++		file_data->dir_res.show = &vc_sm_alloc_show;
++
++		dir_entry = debugfs_create_file(VC_SM_STATS, S_IRUGO,
++				file_data->dir_pid, file_data,
++				vc_sm_debug_fs_fops);
++
++		file_data->dir_res.dir_entry = dir_entry;
++		file_data->dir_res.priv_data = file_data;
++		file_data->dir_res.show = &vc_sm_statistics_show;
++	}
++	pr_debug("[%s]: private data allocated %p\n", __func__, file_data);
++
++#endif
++out:
++	return file_data;
++}
++
++/* Open the device.  Creates private state to help track all allocations
++** associated with this device.
++*/
++static int vc_sm_open(struct inode *inode, struct file *file)
++{
++	int ret = 0;
++
++	/* Make sure the device was started properly.
++	 */
++	if (!sm_state) {
++		pr_err("[%s]: invalid device\n", __func__);
++		ret = -EPERM;
++		goto out;
++	}
++
++	file->private_data = vc_sm_create_priv_data(current->tgid);
++	if (file->private_data == NULL) {
++		pr_err("[%s]: failed to create data tracker\n", __func__);
++
++		ret = -ENOMEM;
++		goto out;
++	}
++
++out:
++	return ret;
++}
++
++/* Close the device.  Free up all resources still associated with this device
++** at the time.
++*/
++static int vc_sm_release(struct inode *inode, struct file *file)
++{
++	struct SM_PRIV_DATA_T *file_data =
++	    (struct SM_PRIV_DATA_T *)file->private_data;
++	struct SM_RESOURCE_T *resource;
++	int ret = 0;
++
++	/* Make sure the device was started properly.
++	 */
++	if (sm_state == NULL || file_data == NULL) {
++		pr_err("[%s]: invalid device\n", __func__);
++		ret = -EPERM;
++		goto out;
++	}
++
++	pr_debug("[%s]: using private data %p\n", __func__, file_data);
++
++	if (file_data->restart_sys == -EINTR) {
++		VC_SM_ACTION_CLEAN_T action_clean;
++
++		pr_debug("[%s]: releasing following EINTR on %u (trans_id: %u) (likely due to signal)...\n",
++			__func__, file_data->int_action,
++			file_data->int_trans_id);
++
++		action_clean.res_action = file_data->int_action;
++		action_clean.action_trans_id = file_data->int_trans_id;
++
++		vc_vchi_sm_clean_up(sm_state->sm_handle, &action_clean);
++	}
++
++	while ((resource = vmcs_sm_acquire_first_resource(file_data)) != NULL) {
++		vmcs_sm_release_resource(resource, 0);
++		vmcs_sm_release_resource(resource, 1);
++	}
++
++	/* Remove the corresponding debugfs entries. */
++	debugfs_remove_recursive(file_data->dir_pid);
++
++	/* Terminate the private data.
++	 */
++	kfree(file_data);
++
++out:
++	return ret;
++}
++
++static void vcsm_vma_open(struct vm_area_struct *vma)
++{
++	struct sm_mmap *map = (struct sm_mmap *)vma->vm_private_data;
++
++	pr_debug("[%s]: virt %lx-%lx, pid %i, pfn %i\n",
++		__func__, vma->vm_start, vma->vm_end, (int)current->tgid,
++		(int)vma->vm_pgoff);
++
++	map->ref_count++;
++}
++
++static void vcsm_vma_close(struct vm_area_struct *vma)
++{
++	struct sm_mmap *map = (struct sm_mmap *)vma->vm_private_data;
++
++	pr_debug("[%s]: virt %lx-%lx, pid %i, pfn %i\n",
++		__func__, vma->vm_start, vma->vm_end, (int)current->tgid,
++		(int)vma->vm_pgoff);
++
++	map->ref_count--;
++
++	/* Remove from the map table.
++	 */
++	if (map->ref_count == 0)
++		vmcs_sm_remove_map(sm_state, map->resource, map);
++}
++
++static int vcsm_vma_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
++{
++	struct sm_mmap *map = (struct sm_mmap *)vma->vm_private_data;
++	struct SM_RESOURCE_T *resource = map->resource;
++	pgoff_t page_offset;
++	unsigned long pfn;
++	int ret = 0;
++
++	/* Lock the resource if necessary.
++	 */
++	if (!resource->lock_count) {
++		VC_SM_LOCK_UNLOCK_T lock_unlock;
++		VC_SM_LOCK_RESULT_T lock_result;
++		int status;
++
++		lock_unlock.res_handle = resource->res_handle;
++		lock_unlock.res_mem = resource->res_base_mem;
++
++		pr_debug("[%s]: attempt to lock data - hdl %x, base address %p\n",
++			__func__, lock_unlock.res_handle, lock_unlock.res_mem);
++
++		/* Lock the videocore allocated resource.
++		 */
++		status = vc_vchi_sm_lock(sm_state->sm_handle,
++					 &lock_unlock, &lock_result, 0);
++		if ((status != 0) ||
++		    ((status == 0) && (lock_result.res_mem == NULL))) {
++			pr_err("[%s]: failed to lock memory on videocore (status: %u)\n",
++					__func__, status);
++			resource->res_stats[LOCK_FAIL]++;
++			return VM_FAULT_SIGBUS;
++		}
++
++		pfn = vcaddr_to_pfn((unsigned long)resource->res_base_mem);
++		outer_inv_range(__pfn_to_phys(pfn),
++				__pfn_to_phys(pfn) + resource->res_size);
++
++		resource->res_stats[LOCK]++;
++		resource->lock_count++;
++
++		/* Keep track of the new base memory.
++		 */
++		if ((lock_result.res_mem != NULL) &&
++		    (lock_result.res_old_mem != NULL) &&
++		    (lock_result.res_mem != lock_result.res_old_mem)) {
++			resource->res_base_mem = lock_result.res_mem;
++		}
++	}
++
++	/* We don't use vmf->pgoff since that has the fake offset */
++	page_offset = ((unsigned long)vmf->virtual_address - vma->vm_start);
++	pfn = (uint32_t)resource->res_base_mem & 0x3FFFFFFF;
++	pfn += mm_vc_mem_phys_addr;
++	pfn += page_offset;
++	pfn >>= PAGE_SHIFT;
++
++	/* Finally, remap it */
++	ret = vm_insert_pfn(vma, (unsigned long)vmf->virtual_address, pfn);
++
++	switch (ret) {
++	case 0:
++	case -ERESTARTSYS:
++		return VM_FAULT_NOPAGE;
++	case -ENOMEM:
++	case -EAGAIN:
++		return VM_FAULT_OOM;
++	default:
++		return VM_FAULT_SIGBUS;
++	}
++}
++
++static struct vm_operations_struct vcsm_vm_ops = {
++	.open = vcsm_vma_open,
++	.close = vcsm_vma_close,
++	.fault = vcsm_vma_fault,
++};
++
++/* Walks a VMA and cleans each valid page from the cache */
++static void vcsm_vma_cache_clean_page_range(unsigned long addr,
++					    unsigned long end)
++{
++	pgd_t *pgd;
++	pud_t *pud;
++	pmd_t *pmd;
++	pte_t *pte;
++	unsigned long pgd_next, pud_next, pmd_next;
++
++	if (addr >= end)
++		return;
++
++	/* Walk PGD */
++	pgd = pgd_offset(current->mm, addr);
++	do {
++		pgd_next = pgd_addr_end(addr, end);
++
++		if (pgd_none(*pgd) || pgd_bad(*pgd))
++			continue;
++
++		/* Walk PUD */
++		pud = pud_offset(pgd, addr);
++		do {
++			pud_next = pud_addr_end(addr, pgd_next);
++			if (pud_none(*pud) || pud_bad(*pud))
++				continue;
++
++			/* Walk PMD */
++			pmd = pmd_offset(pud, addr);
++			do {
++				pmd_next = pmd_addr_end(addr, pud_next);
++				if (pmd_none(*pmd) || pmd_bad(*pmd))
++					continue;
++
++				/* Walk PTE */
++				pte = pte_offset_map(pmd, addr);
++				do {
++					if (pte_none(*pte)
++					    || !pte_present(*pte))
++						continue;
++
++					/* Clean + invalidate */
++					dmac_flush_range((const void *) addr,
++							 (const void *)
++							 (addr + PAGE_SIZE));
++
++				} while (pte++, addr += PAGE_SIZE,
++					 addr != pmd_next);
++				pte_unmap(pte);
++
++			} while (pmd++, addr = pmd_next, addr != pud_next);
++
++		} while (pud++, addr = pud_next, addr != pgd_next);
++	} while (pgd++, addr = pgd_next, addr != end);
++}
++
++/* Map allocated data into user space.
++*/
++static int vc_sm_mmap(struct file *file, struct vm_area_struct *vma)
++{
++	int ret = 0;
++	struct SM_PRIV_DATA_T *file_data =
++	    (struct SM_PRIV_DATA_T *)file->private_data;
++	struct SM_RESOURCE_T *resource = NULL;
++	struct sm_mmap *map = NULL;
++
++	/* Make sure the device was started properly.
++	 */
++	if ((sm_state == NULL) || (file_data == NULL)) {
++		pr_err("[%s]: invalid device\n", __func__);
++		return -EPERM;
++	}
++
++	pr_debug("[%s]: private data %p, guid %x\n", __func__, file_data,
++		((unsigned int)vma->vm_pgoff << PAGE_SHIFT));
++
++	/* We look up the data we are being asked to mmap to make sure it is
++	 ** something that we allocated.
++	 **
++	 ** We use the offset information as the key to tell us which resource
++	 ** we are mapping.
++	 */
++	resource = vmcs_sm_acquire_resource(file_data,
++					    ((unsigned int)vma->vm_pgoff <<
++					     PAGE_SHIFT));
++	if (resource == NULL) {
++		pr_err("[%s]: failed to locate resource for guid %x\n", __func__,
++			((unsigned int)vma->vm_pgoff << PAGE_SHIFT));
++		return -ENOMEM;
++	}
++
++	pr_debug("[%s]: guid %x, tgid %u, %u, %u\n",
++		__func__, resource->res_guid, current->tgid, resource->pid,
++		file_data->pid);
++
++	/* Check permissions.
++	 */
++	if (resource->pid && (resource->pid != current->tgid)) {
++		pr_err("[%s]: current tgid %u != %u owner\n",
++			__func__, current->tgid, resource->pid);
++		ret = -EPERM;
++		goto error;
++	}
++
++	/* Verify that what we are asked to mmap is proper.
++	 */
++	if (resource->res_size != (unsigned int)(vma->vm_end - vma->vm_start)) {
++		pr_err("[%s]: size inconsistency (resource: %u - mmap: %u)\n",
++			__func__,
++			resource->res_size,
++			(unsigned int)(vma->vm_end - vma->vm_start));
++
++		ret = -EINVAL;
++		goto error;
++	}
++
++	/* Keep track of the tuple in the global resource list such that one
++	 * can do a mapping lookup for address/memory handle.
++	 */
++	map = kzalloc(sizeof(*map), GFP_KERNEL);
++	if (map == NULL) {
++		pr_err("[%s]: failed to allocate global tracking resource\n",
++			__func__);
++		ret = -ENOMEM;
++		goto error;
++	}
++
++	map->res_pid = current->tgid;
++	map->res_vc_hdl = resource->res_handle;
++	map->res_usr_hdl = resource->res_guid;
++	map->res_addr = (long unsigned int)vma->vm_start;
++	map->resource = resource;
++	map->vma = vma;
++	vmcs_sm_add_map(sm_state, resource, map);
++
++	/* We are not actually mapping the pages, we just provide a fault
++	 ** handler to allow pages to be mapped when accessed
++	 */
++	vma->vm_flags |=
++	    VM_IO | VM_PFNMAP | VM_DONTCOPY | VM_DONTEXPAND;
++	vma->vm_ops = &vcsm_vm_ops;
++	vma->vm_private_data = map;
++
++	/* vm_pgoff is the first PFN of the mapped memory */
++	vma->vm_pgoff = (unsigned long)resource->res_base_mem & 0x3FFFFFFF;
++	vma->vm_pgoff += mm_vc_mem_phys_addr;
++	vma->vm_pgoff >>= PAGE_SHIFT;
++
++	if ((resource->res_cached == VMCS_SM_CACHE_NONE) ||
++	    (resource->res_cached == VMCS_SM_CACHE_VC)) {
++		/* Allocated non host cached memory, honour it.
++		 */
++		vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
++	}
++
++	pr_debug("[%s]: resource %p (guid %x) - cnt %u, base address %p, handle %x, size %u (%u), cache %u\n",
++		__func__,
++		resource, resource->res_guid, resource->lock_count,
++		resource->res_base_mem, resource->res_handle,
++		resource->res_size, (unsigned int)(vma->vm_end - vma->vm_start),
++		resource->res_cached);
++
++	pr_debug("[%s]: resource %p (base address %p, handle %x) - map-count %d, usr-addr %x\n",
++		__func__, resource, resource->res_base_mem,
++		resource->res_handle, resource->map_count,
++		(unsigned int)vma->vm_start);
++
++	vcsm_vma_open(vma);
++	resource->res_stats[MAP]++;
++	vmcs_sm_release_resource(resource, 0);
++	return 0;
++
++error:
++	resource->res_stats[MAP_FAIL]++;
++	vmcs_sm_release_resource(resource, 0);
++	return ret;
++}
++
++/* Allocate a shared memory handle and block.
++*/
++int vc_sm_ioctl_alloc(struct SM_PRIV_DATA_T *private,
++		      struct vmcs_sm_ioctl_alloc *ioparam)
++{
++	int ret = 0;
++	int status;
++	struct SM_RESOURCE_T *resource;
++	VC_SM_ALLOC_T alloc = { 0 };
++	VC_SM_ALLOC_RESULT_T result = { 0 };
++
++	/* Setup our allocation parameters */
++	alloc.type = (ioparam->cached == VMCS_SM_CACHE_VC ||
++		      ioparam->cached == VMCS_SM_CACHE_BOTH) ?
++			VC_SM_ALLOC_CACHED : VC_SM_ALLOC_NON_CACHED;
++	alloc.base_unit = ioparam->size;
++	alloc.num_unit = ioparam->num;
++	alloc.allocator = current->tgid;
++	/* Align to kernel page size */
++	alloc.alignement = 4096;
++	/* Align the size to the kernel page size */
++	alloc.base_unit =
++	    (alloc.base_unit + alloc.alignement - 1) & ~(alloc.alignement - 1);
++	if (*ioparam->name) {
++		memcpy(alloc.name, ioparam->name, sizeof(alloc.name) - 1);
++	} else {
++		memcpy(alloc.name, VMCS_SM_RESOURCE_NAME_DEFAULT,
++		       sizeof(VMCS_SM_RESOURCE_NAME_DEFAULT));
++	}
++
++	pr_debug("[%s]: attempt to allocate \"%s\" data - type %u, base %u (%u), num %u, alignment %u\n",
++		__func__, alloc.name, alloc.type, ioparam->size,
++		alloc.base_unit, alloc.num_unit, alloc.alignement);
++
++	/* Allocate local resource to track this allocation.
++	 */
++	resource = kzalloc(sizeof(*resource), GFP_KERNEL);
++	if (!resource) {
++		ret = -ENOMEM;
++		goto error;
++	}
++	INIT_LIST_HEAD(&resource->map_list);
++	resource->ref_count++;
++	resource->pid = current->tgid;
++
++	/* Allocate the videocore resource.
++	 */
++	status = vc_vchi_sm_alloc(sm_state->sm_handle, &alloc, &result,
++				  &private->int_trans_id);
++	if (status == -EINTR) {
++		pr_debug("[%s]: requesting allocate memory action restart (trans_id: %u)\n",
++			__func__, private->int_trans_id);
++		ret = -ERESTARTSYS;
++		private->restart_sys = -EINTR;
++		private->int_action = VC_SM_MSG_TYPE_ALLOC;
++		goto error;
++	} else if (status != 0 || (status == 0 && result.res_mem == NULL)) {
++		pr_err("[%s]: failed to allocate memory on videocore (status: %u, trans_id: %u)\n",
++		     __func__, status, private->int_trans_id);
++		ret = -ENOMEM;
++		resource->res_stats[ALLOC_FAIL]++;
++		goto error;
++	}
++
++	/* Keep track of the resource we created.
++	 */
++	resource->private = private;
++	resource->res_handle = result.res_handle;
++	resource->res_base_mem = result.res_mem;
++	resource->res_size = alloc.base_unit * alloc.num_unit;
++	resource->res_cached = ioparam->cached;
++
++	/* Kernel/user GUID.  This global identifier is used for mmap'ing the
++	 * allocated region from user space; it is passed as the mmap offset
++	 * and we use it to 'hide' the videocore handle/address.
++	 */
++	mutex_lock(&sm_state->lock);
++	resource->res_guid = ++sm_state->guid;
++	mutex_unlock(&sm_state->lock);
++	resource->res_guid <<= PAGE_SHIFT;
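++
++	/* Illustrative user-space sketch (the file descriptor name below is
++	 * an assumption, not defined here): once the alloc ioctl has filled
++	 * in ioparam->handle, the region is mapped by passing that handle as
++	 * the mmap offset, e.g.
++	 *
++	 *	void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
++	 *		       MAP_SHARED, vcsm_fd, ioparam.handle);
++	 *
++	 * vc_sm_mmap() then uses vma->vm_pgoff to locate this resource.
++	 */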
++
++	vmcs_sm_add_resource(private, resource);
++
++	pr_debug("[%s]: allocated data - guid %x, hdl %x, base address %p, size %d, cache %d\n",
++		__func__, resource->res_guid, resource->res_handle,
++		resource->res_base_mem, resource->res_size,
++		resource->res_cached);
++
++	/* We're done */
++	resource->res_stats[ALLOC]++;
++	ioparam->handle = resource->res_guid;
++	return 0;
++
++error:
++	pr_err("[%s]: failed to allocate \"%s\" data (%i) - type %u, base %u (%u), num %u, alignment %u\n",
++	     __func__, alloc.name, ret, alloc.type, ioparam->size,
++	     alloc.base_unit, alloc.num_unit, alloc.alignement);
++	if (resource != NULL) {
++		vc_sm_resource_deceased(resource, 1);
++		kfree(resource);
++	}
++	return ret;
++}
++
++/* Share an allocated memory handle and block.
++*/
++int vc_sm_ioctl_alloc_share(struct SM_PRIV_DATA_T *private,
++			    struct vmcs_sm_ioctl_alloc_share *ioparam)
++{
++	struct SM_RESOURCE_T *resource, *shared_resource;
++	int ret = 0;
++
++	pr_debug("[%s]: attempt to share resource %u\n", __func__,
++			ioparam->handle);
++
++	shared_resource = vmcs_sm_acquire_global_resource(ioparam->handle);
++	if (shared_resource == NULL) {
++		ret = -ENOMEM;
++		goto error;
++	}
++
++	/* Allocate local resource to track this allocation.
++	 */
++	resource = kzalloc(sizeof(*resource), GFP_KERNEL);
++	if (resource == NULL) {
++		pr_err("[%s]: failed to allocate local tracking resource\n",
++			__func__);
++		ret = -ENOMEM;
++		goto error;
++	}
++	INIT_LIST_HEAD(&resource->map_list);
++	resource->ref_count++;
++	resource->pid = current->tgid;
++
++	/* Keep track of the resource we created.
++	 */
++	resource->private = private;
++	resource->res_handle = shared_resource->res_handle;
++	resource->res_base_mem = shared_resource->res_base_mem;
++	resource->res_size = shared_resource->res_size;
++	resource->res_cached = shared_resource->res_cached;
++	resource->res_shared = shared_resource;
++
++	mutex_lock(&sm_state->lock);
++	resource->res_guid = ++sm_state->guid;
++	mutex_unlock(&sm_state->lock);
++	resource->res_guid <<= PAGE_SHIFT;
++
++	vmcs_sm_add_resource(private, resource);
++
++	pr_debug("[%s]: allocated data - guid %x, hdl %x, base address %p, size %d, cache %d\n",
++		__func__, resource->res_guid, resource->res_handle,
++		resource->res_base_mem, resource->res_size,
++		resource->res_cached);
++
++	/* We're done */
++	resource->res_stats[ALLOC]++;
++	ioparam->handle = resource->res_guid;
++	ioparam->size = resource->res_size;
++	return 0;
++
++error:
++	pr_err("[%s]: failed to share %u\n", __func__, ioparam->handle);
++	if (shared_resource != NULL)
++		vmcs_sm_release_resource(shared_resource, 0);
++
++	return ret;
++}
++
++/* Free a previously allocated shared memory handle and block.
++*/
++static int vc_sm_ioctl_free(struct SM_PRIV_DATA_T *private,
++			    struct vmcs_sm_ioctl_free *ioparam)
++{
++	struct SM_RESOURCE_T *resource =
++	    vmcs_sm_acquire_resource(private, ioparam->handle);
++
++	if (resource == NULL) {
++		pr_err("[%s]: resource for guid %u does not exist\n", __func__,
++			ioparam->handle);
++		return -EINVAL;
++	}
++
++	/* Check permissions.
++	 */
++	if (resource->pid && (resource->pid != current->tgid)) {
++		pr_err("[%s]: current tgid %u != %u owner\n",
++			__func__, current->tgid, resource->pid);
++		vmcs_sm_release_resource(resource, 0);
++		return -EPERM;
++	}
++
++	vmcs_sm_release_resource(resource, 0);
++	vmcs_sm_release_resource(resource, 0);
++	return 0;
++}
++
++/* Resize a previously allocated shared memory handle and block.
++*/
++static int vc_sm_ioctl_resize(struct SM_PRIV_DATA_T *private,
++			      struct vmcs_sm_ioctl_resize *ioparam)
++{
++	int ret = 0;
++	int status;
++	VC_SM_RESIZE_T resize;
++	struct SM_RESOURCE_T *resource;
++
++	/* Locate resource from GUID.
++	 */
++	resource = vmcs_sm_acquire_resource(private, ioparam->handle);
++	if (!resource) {
++		pr_err("[%s]: failed resource - guid %x\n",
++				__func__, ioparam->handle);
++		ret = -EFAULT;
++		goto error;
++	}
++
++	/* If the resource is locked, its lock count will not be zero, in
++	 ** which case we will not be allowed to resize it anyway, so
++	 ** reject the attempt here.
++	 */
++	if (resource->lock_count != 0) {
++		pr_err("[%s]: cannot resize - guid %x, lock-cnt %d\n",
++		     __func__, ioparam->handle, resource->lock_count);
++		ret = -EFAULT;
++		goto error;
++	}
++
++	/* Check permissions.
++	 */
++	if (resource->pid && (resource->pid != current->tgid)) {
++		pr_err("[%s]: current tgid %u != %u owner\n", __func__,
++				current->tgid, resource->pid);
++		ret = -EPERM;
++		goto error;
++	}
++
++	if (resource->map_count != 0) {
++		pr_err("[%s]: cannot resize - guid %x, map-cnt %d\n",
++		     __func__, ioparam->handle, resource->map_count);
++		ret = -EFAULT;
++		goto error;
++	}
++
++	resize.res_handle = resource->res_handle;
++	resize.res_mem = resource->res_base_mem;
++	resize.res_new_size = ioparam->new_size;
++
++	pr_debug("[%s]: attempt to resize data - guid %x, hdl %x, base address %p\n",
++		__func__, ioparam->handle, resize.res_handle, resize.res_mem);
++
++	/* Resize the videocore allocated resource.
++	 */
++	status = vc_vchi_sm_resize(sm_state->sm_handle, &resize,
++				   &private->int_trans_id);
++	if (status == -EINTR) {
++		pr_debug("[%s]: requesting resize memory action restart (trans_id: %u)\n",
++			__func__, private->int_trans_id);
++		ret = -ERESTARTSYS;
++		private->restart_sys = -EINTR;
++		private->int_action = VC_SM_MSG_TYPE_RESIZE;
++		goto error;
++	} else if (status != 0) {
++		pr_err("[%s]: failed to resize memory on videocore (status: %u, trans_id: %u)\n",
++		     __func__, status, private->int_trans_id);
++		ret = -EPERM;
++		goto error;
++	}
++
++	pr_debug("[%s]: successfully resized data - hdl %x, size %d -> %d\n",
++		__func__, resize.res_handle, resource->res_size,
++		resize.res_new_size);
++
++	/* Successfully resized, save the information and inform the user.
++	 */
++	ioparam->old_size = resource->res_size;
++	resource->res_size = resize.res_new_size;
++
++error:
++	if (resource)
++		vmcs_sm_release_resource(resource, 0);
++
++	return ret;
++}
++
++/* Lock a previously allocated shared memory handle and block.
++*/
++static int vc_sm_ioctl_lock(struct SM_PRIV_DATA_T *private,
++			    struct vmcs_sm_ioctl_lock_unlock *ioparam,
++			    int change_cache, enum vmcs_sm_cache_e cache_type,
++			    unsigned int vc_addr)
++{
++	int status;
++	VC_SM_LOCK_UNLOCK_T lock;
++	VC_SM_LOCK_RESULT_T result;
++	struct SM_RESOURCE_T *resource;
++	int ret = 0;
++	struct sm_mmap *map, *map_tmp;
++	long unsigned int phys_addr;
++
++	map = NULL;
++
++	/* Locate resource from GUID.
++	 */
++	resource = vmcs_sm_acquire_resource(private, ioparam->handle);
++	if (resource == NULL) {
++		ret = -EINVAL;
++		goto error;
++	}
++
++	/* Check permissions.
++	 */
++	if (resource->pid && (resource->pid != current->tgid)) {
++		pr_err("[%s]: current tgid %u != %u owner\n", __func__,
++				current->tgid, resource->pid);
++		ret = -EPERM;
++		goto error;
++	}
++
++	lock.res_handle = resource->res_handle;
++	lock.res_mem = resource->res_base_mem;
++
++	/* Take the lock and get the address to be mapped.
++	 */
++	if (vc_addr == 0) {
++		pr_debug("[%s]: attempt to lock data - guid %x, hdl %x, base address %p\n",
++			__func__, ioparam->handle, lock.res_handle,
++			lock.res_mem);
++
++		/* Lock the videocore allocated resource.
++		 */
++		status = vc_vchi_sm_lock(sm_state->sm_handle, &lock, &result,
++					 &private->int_trans_id);
++		if (status == -EINTR) {
++			pr_debug("[%s]: requesting lock memory action restart (trans_id: %u)\n",
++				__func__, private->int_trans_id);
++			ret = -ERESTARTSYS;
++			private->restart_sys = -EINTR;
++			private->int_action = VC_SM_MSG_TYPE_LOCK;
++			goto error;
++		} else if (status != 0 || result.res_mem == NULL) {
++			pr_err("[%s]: failed to lock memory on videocore (status: %u, trans_id: %u)\n",
++			     __func__, status, private->int_trans_id);
++			ret = -EPERM;
++			resource->res_stats[LOCK_FAIL]++;
++			goto error;
++		}
++
++		pr_debug("[%s]: succeed to lock data - hdl %x, base address %p (%p), ref-cnt %d\n",
++			__func__, lock.res_handle, result.res_mem,
++			lock.res_mem, resource->lock_count);
++	}
++	/* Lock assumed taken already, address to be mapped is known.
++	 */
++	else
++		resource->res_base_mem = (void *)vc_addr;
++
++	resource->res_stats[LOCK]++;
++	resource->lock_count++;
++
++	/* Keep track of the new base memory allocation if it has changed.
++	 */
++	if ((vc_addr == 0) &&
++	    (result.res_mem != NULL) &&
++	    (result.res_old_mem != NULL) &&
++	    (result.res_mem != result.res_old_mem)) {
++		resource->res_base_mem = result.res_mem;
++
++		/* Kernel allocated resources.
++		 */
++		if (resource->pid == 0) {
++			if (!list_empty(&resource->map_list)) {
++				list_for_each_entry_safe(map, map_tmp,
++							 &resource->map_list,
++							 resource_map_list) {
++					if (map->res_addr) {
++						iounmap((void *)map->res_addr);
++						map->res_addr = 0;
++
++						vmcs_sm_remove_map(sm_state,
++								map->resource,
++								map);
++						break;
++					}
++				}
++			}
++		}
++	}
++
++	if (change_cache)
++		resource->res_cached = cache_type;
++
++	if (resource->map_count) {
++		ioparam->addr =
++		    vmcs_sm_usr_address_from_pid_and_usr_handle(
++				    current->tgid, ioparam->handle);
++
++		pr_debug("[%s] map_count %d private->pid %d current->tgid %d hnd %x addr %u\n",
++			__func__, resource->map_count, private->pid,
++			current->tgid, ioparam->handle, ioparam->addr);
++	} else {
++		/* Kernel allocated resources.
++		 */
++		if (resource->pid == 0) {
++			pr_debug("[%s]: attempt mapping kernel resource - guid %x, hdl %x\n",
++				__func__, ioparam->handle, lock.res_handle);
++
++			ioparam->addr = 0;
++
++			map = kzalloc(sizeof(*map), GFP_KERNEL);
++			if (map == NULL) {
++				pr_err("[%s]: failed allocating tracker\n",
++						__func__);
++				ret = -ENOMEM;
++				goto error;
++			} else {
++				phys_addr = (uint32_t)resource->res_base_mem &
++				    0x3FFFFFFF;
++				phys_addr += mm_vc_mem_phys_addr;
++				if (resource->res_cached
++						== VMCS_SM_CACHE_HOST) {
++					ioparam->addr = (long unsigned int)
++					/* TODO - make cached work */
++					    ioremap_nocache(phys_addr,
++							   resource->res_size);
++
++					pr_debug("[%s]: mapping kernel - guid %x, hdl %x - cached mapping %u\n",
++						__func__, ioparam->handle,
++						lock.res_handle, ioparam->addr);
++				} else {
++					ioparam->addr = (long unsigned int)
++					    ioremap_nocache(phys_addr,
++							    resource->res_size);
++
++					pr_debug("[%s]: mapping kernel- guid %x, hdl %x - non cached mapping %u\n",
++						__func__, ioparam->handle,
++						lock.res_handle, ioparam->addr);
++				}
++
++				map->res_pid = 0;
++				map->res_vc_hdl = resource->res_handle;
++				map->res_usr_hdl = resource->res_guid;
++				map->res_addr = ioparam->addr;
++				map->resource = resource;
++				map->vma = NULL;
++
++				vmcs_sm_add_map(sm_state, resource, map);
++			}
++		} else
++			ioparam->addr = 0;
++	}
++
++error:
++	if (resource)
++		vmcs_sm_release_resource(resource, 0);
++
++	return ret;
++}
++
++/* Unlock a previously allocated shared memory handle and block.
++*/
++static int vc_sm_ioctl_unlock(struct SM_PRIV_DATA_T *private,
++			      struct vmcs_sm_ioctl_lock_unlock *ioparam,
++			      int flush, int wait_reply, int no_vc_unlock)
++{
++	int status;
++	VC_SM_LOCK_UNLOCK_T unlock;
++	struct sm_mmap *map, *map_tmp;
++	struct SM_RESOURCE_T *resource;
++	int ret = 0;
++
++	map = NULL;
++
++	/* Locate resource from GUID.
++	 */
++	resource = vmcs_sm_acquire_resource(private, ioparam->handle);
++	if (resource == NULL) {
++		ret = -EINVAL;
++		goto error;
++	}
++
++	/* Check permissions.
++	 */
++	if (resource->pid && (resource->pid != current->tgid)) {
++		pr_err("[%s]: current tgid %u != %u owner\n",
++			__func__, current->tgid, resource->pid);
++		ret = -EPERM;
++		goto error;
++	}
++
++	unlock.res_handle = resource->res_handle;
++	unlock.res_mem = resource->res_base_mem;
++
++	pr_debug("[%s]: attempt to unlock data - guid %x, hdl %x, base address %p\n",
++		__func__, ioparam->handle, unlock.res_handle, unlock.res_mem);
++
++	/* User space allocated resources.
++	 */
++	if (resource->pid) {
++		/* Flush if requested */
++		if (resource->res_cached && flush) {
++			dma_addr_t phys_addr = 0;
++			resource->res_stats[FLUSH]++;
++
++			phys_addr =
++			    (dma_addr_t)((uint32_t)resource->res_base_mem &
++					 0x3FFFFFFF);
++			phys_addr += (dma_addr_t)mm_vc_mem_phys_addr;
++
++			/* L1 cache flush */
++			down_read(&current->mm->mmap_sem);
++			list_for_each_entry(map, &resource->map_list,
++					    resource_map_list) {
++				if (map->vma) {
++					unsigned long start;
++					unsigned long end;
++					start = map->vma->vm_start;
++					end = map->vma->vm_end;
++
++					vcsm_vma_cache_clean_page_range(
++							start, end);
++				}
++			}
++			up_read(&current->mm->mmap_sem);
++
++			/* L2 cache flush */
++			outer_clean_range(phys_addr,
++					  phys_addr +
++					  (size_t) resource->res_size);
++		}
++
++		/* We need to zap all the vmas associated with this resource */
++		if (resource->lock_count == 1) {
++			down_read(&current->mm->mmap_sem);
++			list_for_each_entry(map, &resource->map_list,
++					    resource_map_list) {
++				if (map->vma) {
++					zap_vma_ptes(map->vma,
++						     map->vma->vm_start,
++						     map->vma->vm_end -
++						     map->vma->vm_start);
++				}
++			}
++			up_read(&current->mm->mmap_sem);
++		}
++	}
++	/* Kernel allocated resources. */
++	else {
++		/* Global + Taken in this context */
++		if (resource->ref_count == 2) {
++			if (!list_empty(&resource->map_list)) {
++				list_for_each_entry_safe(map, map_tmp,
++						&resource->map_list,
++						resource_map_list) {
++					if (map->res_addr) {
++						if (flush &&
++						    (resource->res_cached == VMCS_SM_CACHE_HOST)) {
++							unsigned long phys_addr;
++
++							phys_addr = (uint32_t)resource->res_base_mem &
++								    0x3FFFFFFF;
++							phys_addr += mm_vc_mem_phys_addr;
++
++							/* L1 cache flush */
++							dmac_flush_range((const void *)map->res_addr,
++									 (const void *)(map->res_addr +
++											resource->res_size));
++
++							/* L2 cache flush */
++							outer_clean_range(phys_addr,
++									  phys_addr +
++									  (size_t)resource->res_size);
++						}
++
++						iounmap((void *)map->res_addr);
++						map->res_addr = 0;
++
++						vmcs_sm_remove_map(sm_state,
++								map->resource,
++								map);
++						break;
++					}
++				}
++			}
++		}
++	}
++
++	if (resource->lock_count) {
++		/* Bypass the videocore unlock.
++		 */
++		if (no_vc_unlock)
++			status = 0;
++		/* Unlock the videocore allocated resource.
++		 */
++		else {
++			status =
++			    vc_vchi_sm_unlock(sm_state->sm_handle, &unlock,
++					      &private->int_trans_id,
++					      wait_reply);
++			if (status == -EINTR) {
++				pr_debug("[%s]: requesting unlock memory action restart (trans_id: %u)\n",
++					__func__, private->int_trans_id);
++
++				ret = -ERESTARTSYS;
++				resource->res_stats[UNLOCK]--;
++				private->restart_sys = -EINTR;
++				private->int_action = VC_SM_MSG_TYPE_UNLOCK;
++				goto error;
++			} else if (status != 0) {
++				pr_err("[%s]: failed to unlock vc mem (status: %u, trans_id: %u)\n",
++				     __func__, status, private->int_trans_id);
++
++				ret = -EPERM;
++				resource->res_stats[UNLOCK_FAIL]++;
++				goto error;
++			}
++		}
++
++		resource->res_stats[UNLOCK]++;
++		resource->lock_count--;
++	}
++
++	pr_debug("[%s]: success to unlock data - hdl %x, base address %p, ref-cnt %d\n",
++		__func__, unlock.res_handle, unlock.res_mem,
++		resource->lock_count);
++
++error:
++	if (resource)
++		vmcs_sm_release_resource(resource, 0);
++
++	return ret;
++}
++
++/* Handle control from host. */
++static long vc_sm_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
++{
++	int ret = 0;
++	unsigned int cmdnr = _IOC_NR(cmd);
++	struct SM_PRIV_DATA_T *file_data =
++	    (struct SM_PRIV_DATA_T *)file->private_data;
++	struct SM_RESOURCE_T *resource = NULL;
++
++	/* Validate we can work with this device. */
++	if ((sm_state == NULL) || (file_data == NULL)) {
++		pr_err("[%s]: invalid device\n", __func__);
++		ret = -EPERM;
++		goto out;
++	}
++
++	pr_debug("[%s]: cmd %x tgid %u, owner %u\n", __func__, cmdnr,
++			current->tgid, file_data->pid);
++
++	/* Action is a re-post of a previously interrupted action? */
++	if (file_data->restart_sys == -EINTR) {
++		VC_SM_ACTION_CLEAN_T action_clean;
++
++		pr_debug("[%s]: clean up of action %u (trans_id: %u) following EINTR\n",
++			__func__, file_data->int_action,
++			file_data->int_trans_id);
++
++		action_clean.res_action = file_data->int_action;
++		action_clean.action_trans_id = file_data->int_trans_id;
++
++		vc_vchi_sm_clean_up(sm_state->sm_handle, &action_clean);
++
++		file_data->restart_sys = 0;
++	}
++
++	/* Now process the command.
++	 */
++	switch (cmdnr) {
++		/* New memory allocation.
++		 */
++	case VMCS_SM_CMD_ALLOC:
++		{
++			struct vmcs_sm_ioctl_alloc ioparam;
++
++			/* Get the parameter data.
++			 */
++			if (copy_from_user
++			    (&ioparam, (void *)arg, sizeof(ioparam)) != 0) {
++				pr_err("[%s]: failed to copy-from-user for cmd %x\n",
++						__func__, cmdnr);
++				ret = -EFAULT;
++				goto out;
++			}
++
++			ret = vc_sm_ioctl_alloc(file_data, &ioparam);
++			if (!ret &&
++			    (copy_to_user((void *)arg,
++					  &ioparam, sizeof(ioparam)) != 0)) {
++				struct vmcs_sm_ioctl_free freeparam = {
++					ioparam.handle
++				};
++				pr_err("[%s]: failed to copy-to-user for cmd %x\n",
++						__func__, cmdnr);
++				vc_sm_ioctl_free(file_data, &freeparam);
++				ret = -EFAULT;
++			}
++
++			/* Done.
++			 */
++			goto out;
++		}
++		break;
++
++		/* Share existing memory allocation.
++		 */
++	case VMCS_SM_CMD_ALLOC_SHARE:
++		{
++			struct vmcs_sm_ioctl_alloc_share ioparam;
++
++			/* Get the parameter data.
++			 */
++			if (copy_from_user
++			    (&ioparam, (void *)arg, sizeof(ioparam)) != 0) {
++				pr_err("[%s]: failed to copy-from-user for cmd %x\n",
++						__func__, cmdnr);
++				ret = -EFAULT;
++				goto out;
++			}
++
++			ret = vc_sm_ioctl_alloc_share(file_data, &ioparam);
++
++			/* Copy result back to user.
++			 */
++			if (!ret
++			    && copy_to_user((void *)arg, &ioparam,
++					    sizeof(ioparam)) != 0) {
++				struct vmcs_sm_ioctl_free freeparam = {
++					ioparam.handle
++				};
++				pr_err("[%s]: failed to copy-to-user for cmd %x\n",
++						__func__, cmdnr);
++				vc_sm_ioctl_free(file_data, &freeparam);
++				ret = -EFAULT;
++			}
++
++			/* Done.
++			 */
++			goto out;
++		}
++		break;
++
++		/* Lock (attempt to) *and* register a cache behavior change.
++		 */
++	case VMCS_SM_CMD_LOCK_CACHE:
++		{
++			struct vmcs_sm_ioctl_lock_cache ioparam;
++			struct vmcs_sm_ioctl_lock_unlock lock;
++
++			/* Get parameter data.
++			 */
++			if (copy_from_user
++			    (&ioparam, (void *)arg, sizeof(ioparam)) != 0) {
++				pr_err("[%s]: failed to copy-from-user for cmd %x\n",
++						__func__, cmdnr);
++				ret = -EFAULT;
++				goto out;
++			}
++
++			lock.handle = ioparam.handle;
++			ret =
++			    vc_sm_ioctl_lock(file_data, &lock, 1,
++					     ioparam.cached, 0);
++
++			/* Done.
++			 */
++			goto out;
++		}
++		break;
++
++		/* Lock (attempt to) existing memory allocation.
++		 */
++	case VMCS_SM_CMD_LOCK:
++		{
++			struct vmcs_sm_ioctl_lock_unlock ioparam;
++
++			/* Get parameter data.
++			 */
++			if (copy_from_user
++			    (&ioparam, (void *)arg, sizeof(ioparam)) != 0) {
++				pr_err("[%s]: failed to copy-from-user for cmd %x\n",
++						__func__, cmdnr);
++				ret = -EFAULT;
++				goto out;
++			}
++
++			ret = vc_sm_ioctl_lock(file_data, &ioparam, 0, 0, 0);
++
++			/* Copy result back to user.
++			 */
++			if (copy_to_user((void *)arg, &ioparam, sizeof(ioparam))
++			    != 0) {
++				pr_err("[%s]: failed to copy-to-user for cmd %x\n",
++				     __func__, cmdnr);
++				ret = -EFAULT;
++			}
++
++			/* Done.
++			 */
++			goto out;
++		}
++		break;
++
++		/* Unlock (attempt to) existing memory allocation.
++		 */
++	case VMCS_SM_CMD_UNLOCK:
++		{
++			struct vmcs_sm_ioctl_lock_unlock ioparam;
++
++			/* Get parameter data.
++			 */
++			if (copy_from_user
++			    (&ioparam, (void *)arg, sizeof(ioparam)) != 0) {
++				pr_err("[%s]: failed to copy-from-user for cmd %x\n",
++						__func__, cmdnr);
++				ret = -EFAULT;
++				goto out;
++			}
++
++			ret = vc_sm_ioctl_unlock(file_data, &ioparam, 0, 1, 0);
++
++			/* Done.
++			 */
++			goto out;
++		}
++		break;
++
++		/* Resize (attempt to) existing memory allocation.
++		 */
++	case VMCS_SM_CMD_RESIZE:
++		{
++			struct vmcs_sm_ioctl_resize ioparam;
++
++			/* Get parameter data.
++			 */
++			if (copy_from_user
++			    (&ioparam, (void *)arg, sizeof(ioparam)) != 0) {
++				pr_err("[%s]: failed to copy-from-user for cmd %x\n",
++						__func__, cmdnr);
++				ret = -EFAULT;
++				goto out;
++			}
++
++			ret = vc_sm_ioctl_resize(file_data, &ioparam);
++
++			/* Copy result back to user.
++			 */
++			if (copy_to_user((void *)arg, &ioparam, sizeof(ioparam))
++			    != 0) {
++				pr_err("[%s]: failed to copy-to-user for cmd %x\n",
++				     __func__, cmdnr);
++				ret = -EFAULT;
++			}
++
++			/* Done.
++			 */
++			goto out;
++		}
++		break;
++
++		/* Terminate existing memory allocation.
++		 */
++	case VMCS_SM_CMD_FREE:
++		{
++			struct vmcs_sm_ioctl_free ioparam;
++
++			/* Get parameter data.
++			 */
++			if (copy_from_user
++			    (&ioparam, (void *)arg, sizeof(ioparam)) != 0) {
++				pr_err("[%s]: failed to copy-from-user for cmd %x\n",
++						__func__, cmdnr);
++				ret = -EFAULT;
++				goto out;
++			}
++
++			ret = vc_sm_ioctl_free(file_data, &ioparam);
++
++			/* Done.
++			 */
++			goto out;
++		}
++		break;
++
++		/* Walk allocation on videocore, information shows up in the
++		 ** videocore log.
++		 */
++	case VMCS_SM_CMD_VC_WALK_ALLOC:
++		{
++			pr_debug("[%s]: invoking walk alloc\n", __func__);
++
++			if (vc_vchi_sm_walk_alloc(sm_state->sm_handle) != 0)
++				pr_err("[%s]: failed to walk-alloc on videocore\n",
++				     __func__);
++
++			/* Done.
++			 */
++			goto out;
++		}
++		break;
++		/* Walk mapping table on host, information shows up in the
++		 ** kernel log.
++		 */
++	case VMCS_SM_CMD_HOST_WALK_MAP:
++		{
++			/* Use pid of -1 to tell to walk the whole map. */
++			vmcs_sm_host_walk_map_per_pid(-1);
++
++			/* Done. */
++			goto out;
++		}
++		break;
++
++		/* Walk mapping table per process on host.  */
++	case VMCS_SM_CMD_HOST_WALK_PID_ALLOC:
++		{
++			struct vmcs_sm_ioctl_walk ioparam;
++
++			/* Get parameter data.  */
++			if (copy_from_user(&ioparam,
++					   (void *)arg, sizeof(ioparam)) != 0) {
++				pr_err("[%s]: failed to copy-from-user for cmd %x\n",
++						__func__, cmdnr);
++				ret = -EFAULT;
++				goto out;
++			}
++
++			vmcs_sm_host_walk_alloc(file_data);
++
++			/* Done. */
++			goto out;
++		}
++		break;
++
++		/* Walk allocation per process on host.  */
++	case VMCS_SM_CMD_HOST_WALK_PID_MAP:
++		{
++			struct vmcs_sm_ioctl_walk ioparam;
++
++			/* Get parameter data.  */
++			if (copy_from_user(&ioparam,
++					   (void *)arg, sizeof(ioparam)) != 0) {
++				pr_err("[%s]: failed to copy-from-user for cmd %x\n",
++						__func__, cmdnr);
++				ret = -EFAULT;
++				goto out;
++			}
++
++			vmcs_sm_host_walk_map_per_pid(ioparam.pid);
++
++			/* Done. */
++			goto out;
++		}
++		break;
++
++		/* Gets the size of the memory associated with a user handle. */
++	case VMCS_SM_CMD_SIZE_USR_HANDLE:
++		{
++			struct vmcs_sm_ioctl_size ioparam;
++
++			/* Get parameter data. */
++			if (copy_from_user(&ioparam,
++					   (void *)arg, sizeof(ioparam)) != 0) {
++				pr_err("[%s]: failed to copy-from-user for cmd %x\n",
++						__func__, cmdnr);
++				ret = -EFAULT;
++				goto out;
++			}
++
++			/* Locate resource from GUID. */
++			resource =
++			    vmcs_sm_acquire_resource(file_data, ioparam.handle);
++			if (resource != NULL) {
++				ioparam.size = resource->res_size;
++				vmcs_sm_release_resource(resource, 0);
++			} else {
++				ioparam.size = 0;
++			}
++
++			if (copy_to_user((void *)arg,
++					 &ioparam, sizeof(ioparam)) != 0) {
++				pr_err("[%s]: failed to copy-to-user for cmd %x\n",
++				     __func__, cmdnr);
++				ret = -EFAULT;
++			}
++
++			/* Done. */
++			goto out;
++		}
++		break;
++
++		/* Verify we are dealing with a valid resource. */
++	case VMCS_SM_CMD_CHK_USR_HANDLE:
++		{
++			struct vmcs_sm_ioctl_chk ioparam;
++
++			/* Get parameter data.
++			 */
++			if (copy_from_user(&ioparam,
++					   (void *)arg, sizeof(ioparam)) != 0) {
++				pr_err("[%s]: failed to copy-from-user for cmd %x\n",
++						__func__, cmdnr);
++
++				ret = -EFAULT;
++				goto out;
++			}
++
++			/* Locate resource from GUID. */
++			resource =
++			    vmcs_sm_acquire_resource(file_data, ioparam.handle);
++			if (resource == NULL)
++				ret = -EINVAL;
++			/* If the resource is cacheable, return additional
++			 * information that may be needed to flush the cache.
++			 */
++			else if ((resource->res_cached == VMCS_SM_CACHE_HOST) ||
++				 (resource->res_cached == VMCS_SM_CACHE_BOTH)) {
++				ioparam.addr =
++				    vmcs_sm_usr_address_from_pid_and_usr_handle
++				    (current->tgid, ioparam.handle);
++				ioparam.size = resource->res_size;
++				ioparam.cache = resource->res_cached;
++			} else {
++				ioparam.addr = 0;
++				ioparam.size = 0;
++				ioparam.cache = resource->res_cached;
++			}
++
++			if (resource)
++				vmcs_sm_release_resource(resource, 0);
++
++			if (copy_to_user((void *)arg,
++					 &ioparam, sizeof(ioparam)) != 0) {
++				pr_err("[%s]: failed to copy-to-user for cmd %x\n",
++						__func__, cmdnr);
++				ret = -EFAULT;
++			}
++
++			/* Done.
++			 */
++			goto out;
++		}
++		break;
++
++		/*
++		 * Maps a user handle given the process and the virtual address.
++		 */
++	case VMCS_SM_CMD_MAPPED_USR_HANDLE:
++		{
++			struct vmcs_sm_ioctl_map ioparam;
++
++			/* Get parameter data. */
++			if (copy_from_user(&ioparam,
++					   (void *)arg, sizeof(ioparam)) != 0) {
++				pr_err("[%s]: failed to copy-from-user for cmd %x\n",
++						__func__, cmdnr);
++
++				ret = -EFAULT;
++				goto out;
++			}
++
++			ioparam.handle =
++			    vmcs_sm_usr_handle_from_pid_and_address(
++					    ioparam.pid, ioparam.addr);
++
++			resource =
++			    vmcs_sm_acquire_resource(file_data, ioparam.handle);
++			if ((resource != NULL)
++			    && ((resource->res_cached == VMCS_SM_CACHE_HOST)
++				|| (resource->res_cached ==
++				    VMCS_SM_CACHE_BOTH))) {
++				ioparam.size = resource->res_size;
++			} else {
++				ioparam.size = 0;
++			}
++
++			if (resource)
++				vmcs_sm_release_resource(resource, 0);
++
++			if (copy_to_user((void *)arg,
++					 &ioparam, sizeof(ioparam)) != 0) {
++				pr_err("[%s]: failed to copy-to-user for cmd %x\n",
++				     __func__, cmdnr);
++				ret = -EFAULT;
++			}
++
++			/* Done. */
++			goto out;
++		}
++		break;
++
++		/*
++		 * Maps a videocore handle given process and virtual address.
++		 */
++	case VMCS_SM_CMD_MAPPED_VC_HDL_FROM_ADDR:
++		{
++			struct vmcs_sm_ioctl_map ioparam;
++
++			/* Get parameter data. */
++			if (copy_from_user(&ioparam,
++					   (void *)arg, sizeof(ioparam)) != 0) {
++				pr_err("[%s]: failed to copy-from-user for cmd %x\n",
++						__func__, cmdnr);
++				ret = -EFAULT;
++				goto out;
++			}
++
++			ioparam.handle = vmcs_sm_vc_handle_from_pid_and_address(
++					    ioparam.pid, ioparam.addr);
++
++			if (copy_to_user((void *)arg,
++					 &ioparam, sizeof(ioparam)) != 0) {
++				pr_err("[%s]: failed to copy-to-user for cmd %x\n",
++				     __func__, cmdnr);
++
++				ret = -EFAULT;
++			}
++
++			/* Done.
++			 */
++			goto out;
++		}
++		break;
++
++		/* Maps a videocore handle given process and user handle. */
++	case VMCS_SM_CMD_MAPPED_VC_HDL_FROM_HDL:
++		{
++			struct vmcs_sm_ioctl_map ioparam;
++
++			/* Get parameter data. */
++			if (copy_from_user(&ioparam,
++					   (void *)arg, sizeof(ioparam)) != 0) {
++				pr_err("[%s]: failed to copy-from-user for cmd %x\n",
++						__func__, cmdnr);
++				ret = -EFAULT;
++				goto out;
++			}
++
++			/* Locate resource from GUID. */
++			resource =
++			    vmcs_sm_acquire_resource(file_data, ioparam.handle);
++			if (resource != NULL) {
++				ioparam.handle = resource->res_handle;
++				vmcs_sm_release_resource(resource, 0);
++			} else {
++				ioparam.handle = 0;
++			}
++
++			if (copy_to_user((void *)arg,
++					 &ioparam, sizeof(ioparam)) != 0) {
++				pr_err("[%s]: failed to copy-to-user for cmd %x\n",
++				     __func__, cmdnr);
++
++				ret = -EFAULT;
++			}
++
++			/* Done. */
++			goto out;
++		}
++		break;
++
++		/*
++		 * Maps a videocore address given process and videocore handle.
++		 */
++	case VMCS_SM_CMD_MAPPED_VC_ADDR_FROM_HDL:
++		{
++			struct vmcs_sm_ioctl_map ioparam;
++
++			/* Get parameter data. */
++			if (copy_from_user(&ioparam,
++					   (void *)arg, sizeof(ioparam)) != 0) {
++				pr_err("[%s]: failed to copy-from-user for cmd %x\n",
++						__func__, cmdnr);
++
++				ret = -EFAULT;
++				goto out;
++			}
++
++			/* Locate resource from GUID. */
++			resource =
++			    vmcs_sm_acquire_resource(file_data, ioparam.handle);
++			if (resource != NULL) {
++				ioparam.addr =
++					(unsigned int)resource->res_base_mem;
++				vmcs_sm_release_resource(resource, 0);
++			} else {
++				ioparam.addr = 0;
++			}
++
++			if (copy_to_user((void *)arg,
++					 &ioparam, sizeof(ioparam)) != 0) {
++				pr_err("[%s]: failed to copy-to-user for cmd %x\n",
++				     __func__, cmdnr);
++				ret = -EFAULT;
++			}
++
++			/* Done. */
++			goto out;
++		}
++		break;
++
++		/* Maps a user address given process and vc handle.
++		 */
++	case VMCS_SM_CMD_MAPPED_USR_ADDRESS:
++		{
++			struct vmcs_sm_ioctl_map ioparam;
++
++			/* Get parameter data. */
++			if (copy_from_user(&ioparam,
++					   (void *)arg, sizeof(ioparam)) != 0) {
++				pr_err("[%s]: failed to copy-from-user for cmd %x\n",
++						__func__, cmdnr);
++				ret = -EFAULT;
++				goto out;
++			}
++
++			/*
++			 * Return the address information from the mapping,
++			 * 0 (ie NULL) if it cannot locate the actual mapping.
++			 */
++			ioparam.addr =
++			    vmcs_sm_usr_address_from_pid_and_usr_handle
++			    (ioparam.pid, ioparam.handle);
++
++			if (copy_to_user((void *)arg,
++					 &ioparam, sizeof(ioparam)) != 0) {
++				pr_err("[%s]: failed to copy-to-user for cmd %x\n",
++				     __func__, cmdnr);
++				ret = -EFAULT;
++			}
++
++			/* Done. */
++			goto out;
++		}
++		break;
++
++		/* Flush the cache for a given mapping. */
++	case VMCS_SM_CMD_FLUSH:
++		{
++			struct vmcs_sm_ioctl_cache ioparam;
++
++			/* Get parameter data. */
++			if (copy_from_user(&ioparam,
++					   (void *)arg, sizeof(ioparam)) != 0) {
++				pr_err("[%s]: failed to copy-from-user for cmd %x\n",
++						__func__, cmdnr);
++				ret = -EFAULT;
++				goto out;
++			}
++
++			/* Locate resource from GUID. */
++			resource =
++			    vmcs_sm_acquire_resource(file_data, ioparam.handle);
++
++			if ((resource != NULL) && resource->res_cached) {
++				dma_addr_t phys_addr = 0;
++
++				resource->res_stats[FLUSH]++;
++
++				phys_addr =
++				    (dma_addr_t)((uint32_t)
++						 resource->res_base_mem &
++						 0x3FFFFFFF);
++				phys_addr += (dma_addr_t)mm_vc_mem_phys_addr;
++
++				/* L1 cache flush */
++				down_read(&current->mm->mmap_sem);
++				vcsm_vma_cache_clean_page_range((unsigned long)
++								ioparam.addr,
++								(unsigned long)
++								ioparam.addr +
++								ioparam.size);
++				up_read(&current->mm->mmap_sem);
++
++				/* L2 cache flush */
++				outer_clean_range(phys_addr,
++						  phys_addr +
++						  (size_t) ioparam.size);
++			} else if (resource == NULL) {
++				ret = -EINVAL;
++				goto out;
++			}
++
++			if (resource)
++				vmcs_sm_release_resource(resource, 0);
++
++			/* Done. */
++			goto out;
++		}
++		break;
++
++		/* Invalidate the cache for a given mapping. */
++	case VMCS_SM_CMD_INVALID:
++		{
++			struct vmcs_sm_ioctl_cache ioparam;
++
++			/* Get parameter data. */
++			if (copy_from_user(&ioparam,
++					   (void *)arg, sizeof(ioparam)) != 0) {
++				pr_err("[%s]: failed to copy-from-user for cmd %x\n",
++						__func__, cmdnr);
++				ret = -EFAULT;
++				goto out;
++			}
++
++			/* Locate resource from GUID.
++			 */
++			resource =
++			    vmcs_sm_acquire_resource(file_data, ioparam.handle);
++
++			if ((resource != NULL) && resource->res_cached) {
++				dma_addr_t phys_addr = 0;
++
++				resource->res_stats[INVALID]++;
++
++				phys_addr =
++				    (dma_addr_t)((uint32_t)
++						 resource->res_base_mem &
++						 0x3FFFFFFF);
++				phys_addr += (dma_addr_t)mm_vc_mem_phys_addr;
++
++				/* L2 cache invalidate */
++				outer_inv_range(phys_addr,
++						phys_addr +
++						(size_t) ioparam.size);
++
++				/* L1 cache invalidate */
++				down_read(&current->mm->mmap_sem);
++				vcsm_vma_cache_clean_page_range((unsigned long)
++								ioparam.addr,
++								(unsigned long)
++								ioparam.addr +
++								ioparam.size);
++				up_read(&current->mm->mmap_sem);
++			} else if (resource == NULL) {
++				ret = -EINVAL;
++				goto out;
++			}
++
++			if (resource)
++				vmcs_sm_release_resource(resource, 0);
++
++			/* Done.
++			 */
++			goto out;
++		}
++		break;
++
++	/* Flush/Invalidate the cache for a given mapping. */
++	case VMCS_SM_CMD_CLEAN_INVALID:
++		{
++			int i;
++			struct vmcs_sm_ioctl_clean_invalid ioparam;
++
++			/* Get parameter data. */
++			if (copy_from_user(&ioparam,
++					   (void *)arg, sizeof(ioparam)) != 0) {
++				pr_err("[%s]: failed to copy-from-user for cmd %x\n",
++						__func__, cmdnr);
++				ret = -EFAULT;
++				goto out;
++			}
++			for (i = 0; i < ARRAY_SIZE(ioparam.s); i++) {
++				switch (ioparam.s[i].cmd) {
++					default: case 0: break; /* NOOP */
++					case 1:	/* L1/L2 invalidate virtual range */
++					case 2: /* L1/L2 clean physical range */
++					case 3: /* L1/L2 clean+invalidate all */
++					{
++						/* Locate resource from GUID.
++						 */
++						resource =
++						    vmcs_sm_acquire_resource(file_data, ioparam.s[i].handle);
++
++						if ((resource != NULL) && resource->res_cached) {
++							unsigned long base = ioparam.s[i].addr & ~(PAGE_SIZE-1);
++							unsigned long end = (ioparam.s[i].addr + ioparam.s[i].size + PAGE_SIZE-1) & ~(PAGE_SIZE-1);
++							resource->res_stats[ioparam.s[i].cmd == 1 ? INVALID:FLUSH]++;
++
++							/* L1/L2 cache flush */
++							down_read(&current->mm->mmap_sem);
++							vcsm_vma_cache_clean_page_range(base, end);
++							up_read(&current->mm->mmap_sem);
++						} else if (resource == NULL) {
++							ret = -EINVAL;
++							goto out;
++						}
++
++						if (resource)
++							vmcs_sm_release_resource(resource, 0);
++					}
++					break;
++				}
++			}
++		}
++		break;
++
++	default:
++		{
++			ret = -EINVAL;
++			goto out;
++		}
++		break;
++	}
++
++out:
++	return ret;
++}
++
++/* Device operations that we managed in this driver.
++*/
++static const struct file_operations vmcs_sm_ops = {
++	.owner = THIS_MODULE,
++	.unlocked_ioctl = vc_sm_ioctl,
++	.open = vc_sm_open,
++	.release = vc_sm_release,
++	.mmap = vc_sm_mmap,
++};
++
++/* Creation of device.
++*/
++static int vc_sm_create_sharedmemory(void)
++{
++	int ret;
++
++	if (sm_state == NULL) {
++		ret = -ENOMEM;
++		goto out;
++	}
++
++	/* Create a device class for creating dev nodes.
++	 */
++	sm_state->sm_class = class_create(THIS_MODULE, "vc-sm");
++	if (IS_ERR(sm_state->sm_class)) {
++		pr_err("[%s]: unable to create device class\n", __func__);
++		ret = PTR_ERR(sm_state->sm_class);
++		goto out;
++	}
++
++	/* Create a character driver.
++	 */
++	ret = alloc_chrdev_region(&sm_state->sm_devid,
++				  DEVICE_MINOR, 1, DEVICE_NAME);
++	if (ret != 0) {
++		pr_err("[%s]: unable to allocate device number\n", __func__);
++		goto out_dev_class_destroy;
++	}
++
++	cdev_init(&sm_state->sm_cdev, &vmcs_sm_ops);
++	ret = cdev_add(&sm_state->sm_cdev, sm_state->sm_devid, 1);
++	if (ret != 0) {
++		pr_err("[%s]: unable to register device\n", __func__);
++		goto out_chrdev_unreg;
++	}
++
++	/* Create a device node.
++	 */
++	sm_state->sm_dev = device_create(sm_state->sm_class,
++					 NULL,
++					 MKDEV(MAJOR(sm_state->sm_devid),
++					       DEVICE_MINOR), NULL,
++					 DEVICE_NAME);
++	if (IS_ERR(sm_state->sm_dev)) {
++		pr_err("[%s]: unable to create device node\n", __func__);
++		ret = PTR_ERR(sm_state->sm_dev);
++		goto out_chrdev_del;
++	}
++
++	goto out;
++
++out_chrdev_del:
++	cdev_del(&sm_state->sm_cdev);
++out_chrdev_unreg:
++	unregister_chrdev_region(sm_state->sm_devid, 1);
++out_dev_class_destroy:
++	class_destroy(sm_state->sm_class);
++	sm_state->sm_class = NULL;
++out:
++	return ret;
++}
++
++/* Termination of the device.
++*/
++static int vc_sm_remove_sharedmemory(void)
++{
++	int ret;
++
++	if (sm_state == NULL) {
++		/* Nothing to do.
++		 */
++		ret = 0;
++		goto out;
++	}
++
++	/* Remove the sharedmemory character driver.
++	 */
++	cdev_del(&sm_state->sm_cdev);
++
++	/* Unregister region.
++	 */
++	unregister_chrdev_region(sm_state->sm_devid, 1);
++
++	ret = 0;
++	goto out;
++
++out:
++	return ret;
++}
++
++/* Videocore connected.  */
++static void vc_sm_connected_init(void)
++{
++	int ret;
++	VCHI_INSTANCE_T vchi_instance;
++	VCHI_CONNECTION_T *vchi_connection = NULL;
++
++	pr_info("[%s]: start\n", __func__);
++
++	/* Allocate memory for the state structure.
++	 */
++	sm_state = kzalloc(sizeof(struct SM_STATE_T), GFP_KERNEL);
++	if (sm_state == NULL) {
++		pr_err("[%s]: failed to allocate memory\n", __func__);
++		ret = -ENOMEM;
++		goto out;
++	}
++
++	mutex_init(&sm_state->lock);
++	mutex_init(&sm_state->map_lock);
++
++	/* Initialize and create a VCHI connection for the shared memory service
++	 ** running on videocore.
++	 */
++	ret = vchi_initialise(&vchi_instance);
++	if (ret != 0) {
++		pr_err("[%s]: failed to initialise VCHI instance (ret=%d)\n",
++			__func__, ret);
++
++		ret = -EIO;
++		goto err_free_mem;
++	}
++
++	ret = vchi_connect(NULL, 0, vchi_instance);
++	if (ret != 0) {
++		pr_err("[%s]: failed to connect VCHI instance (ret=%d)\n",
++			__func__, ret);
++
++		ret = -EIO;
++		goto err_free_mem;
++	}
++
++	/* Initialize an instance of the shared memory service. */
++	sm_state->sm_handle =
++	    vc_vchi_sm_init(vchi_instance, &vchi_connection, 1);
++	if (sm_state->sm_handle == NULL) {
++		pr_err("[%s]: failed to initialize shared memory service\n",
++			__func__);
++
++		ret = -EPERM;
++		goto err_free_mem;
++	}
++
++	/* Create a debug fs directory entry (root). */
++	sm_state->dir_root = debugfs_create_dir(VC_SM_DIR_ROOT_NAME, NULL);
++	if (!sm_state->dir_root) {
++		pr_err("[%s]: failed to create \'%s\' directory entry\n",
++			__func__, VC_SM_DIR_ROOT_NAME);
++
++		ret = -EPERM;
++		goto err_stop_sm_service;
++	}
++
++	sm_state->dir_state.show = &vc_sm_global_state_show;
++	sm_state->dir_state.dir_entry = debugfs_create_file(VC_SM_STATE,
++			S_IRUGO, sm_state->dir_root, &sm_state->dir_state,
++			&vc_sm_debug_fs_fops);
++
++	sm_state->dir_stats.show = &vc_sm_global_statistics_show;
++	sm_state->dir_stats.dir_entry = debugfs_create_file(VC_SM_STATS,
++			S_IRUGO, sm_state->dir_root, &sm_state->dir_stats,
++			&vc_sm_debug_fs_fops);
++
++	/* Create the proc entry children. */
++	sm_state->dir_alloc = debugfs_create_dir(VC_SM_DIR_ALLOC_NAME,
++			sm_state->dir_root);
++
++	/* Create a shared memory device. */
++	ret = vc_sm_create_sharedmemory();
++	if (ret != 0) {
++		pr_err("[%s]: failed to create shared memory device\n",
++			__func__);
++		goto err_remove_debugfs;
++	}
++
++	INIT_LIST_HEAD(&sm_state->map_list);
++	INIT_LIST_HEAD(&sm_state->resource_list);
++
++	sm_state->data_knl = vc_sm_create_priv_data(0);
++	if (sm_state->data_knl == NULL) {
++		pr_err("[%s]: failed to create kernel private data tracker\n",
++			__func__);
++		goto err_remove_shared_memory;
++	}
++
++	/* Done!
++	 */
++	sm_inited = 1;
++	goto out;
++
++err_remove_shared_memory:
++	vc_sm_remove_sharedmemory();
++err_remove_debugfs:
++	debugfs_remove_recursive(sm_state->dir_root);
++err_stop_sm_service:
++	vc_vchi_sm_stop(&sm_state->sm_handle);
++err_free_mem:
++	kfree(sm_state);
++out:
++	pr_info("[%s]: end - returning %d\n", __func__, ret);
++}
++
++/* Driver loading. */
++static int __init vc_sm_init(void)
++{
++	pr_info("vc-sm: Videocore shared memory driver\n");
++	vchiq_add_connected_callback(vc_sm_connected_init);
++	return 0;
++}
++
++/* Driver unloading. */
++static void __exit vc_sm_exit(void)
++{
++	pr_debug("[%s]: start\n", __func__);
++	if (sm_inited) {
++		/* Remove shared memory device.
++		 */
++		vc_sm_remove_sharedmemory();
++
++		/* Remove all proc entries.
++		 */
++		debugfs_remove_recursive(sm_state->dir_root);
++
++		/* Stop the videocore shared memory service.
++		 */
++		vc_vchi_sm_stop(&sm_state->sm_handle);
++
++		/* Free the memory for the state structure.
++		 */
++		mutex_destroy(&(sm_state->map_lock));
++		kfree(sm_state);
++	}
++
++	pr_debug("[%s]: end\n", __func__);
++}
++
++#if defined(__KERNEL__)
++/* Allocate a shared memory handle and block. */
++int vc_sm_alloc(VC_SM_ALLOC_T *alloc, int *handle)
++{
++	struct vmcs_sm_ioctl_alloc ioparam = { 0 };
++	int ret;
++	struct SM_RESOURCE_T *resource;
++
++	/* Validate we can work with this device.
++	 */
++	if (sm_state == NULL || alloc == NULL || handle == NULL) {
++		pr_err("[%s]: invalid input\n", __func__);
++		return -EPERM;
++	}
++
++	ioparam.size = alloc->base_unit;
++	ioparam.num = alloc->num_unit;
++	ioparam.cached =
++	    alloc->type == VC_SM_ALLOC_CACHED ? VMCS_SM_CACHE_VC : 0;
++
++	ret = vc_sm_ioctl_alloc(sm_state->data_knl, &ioparam);
++
++	if (ret == 0) {
++		resource =
++		    vmcs_sm_acquire_resource(sm_state->data_knl,
++					     ioparam.handle);
++		if (resource) {
++			resource->pid = 0;
++			vmcs_sm_release_resource(resource, 0);
++
++			/* Assign valid handle at this time.
++			 */
++			*handle = ioparam.handle;
++		} else {
++			ret = -ENOMEM;
++		}
++	}
++
++	return ret;
++}
++EXPORT_SYMBOL_GPL(vc_sm_alloc);
++
++/* Get an internal resource handle mapped from the external one.
++*/
++int vc_sm_int_handle(int handle)
++{
++	struct SM_RESOURCE_T *resource;
++	int ret = 0;
++
++	/* Validate we can work with this device.
++	 */
++	if (sm_state == NULL || handle == 0) {
++		pr_err("[%s]: invalid input\n", __func__);
++		return 0;
++	}
++
++	/* Locate resource from GUID.
++	 */
++	resource = vmcs_sm_acquire_resource(sm_state->data_knl, handle);
++	if (resource) {
++		ret = resource->res_handle;
++		vmcs_sm_release_resource(resource, 0);
++	}
++
++	return ret;
++}
++EXPORT_SYMBOL_GPL(vc_sm_int_handle);
++
++/* Free a previously allocated shared memory handle and block.
++*/
++int vc_sm_free(int handle)
++{
++	struct vmcs_sm_ioctl_free ioparam = { handle };
++
++	/* Validate we can work with this device.
++	 */
++	if (sm_state == NULL || handle == 0) {
++		pr_err("[%s]: invalid input\n", __func__);
++		return -EPERM;
++	}
++
++	return vc_sm_ioctl_free(sm_state->data_knl, &ioparam);
++}
++EXPORT_SYMBOL_GPL(vc_sm_free);
++
++/* Lock a memory handle for use by kernel.
++*/
++int vc_sm_lock(int handle, VC_SM_LOCK_CACHE_MODE_T mode,
++	       long unsigned int *data)
++{
++	struct vmcs_sm_ioctl_lock_unlock ioparam;
++	int ret;
++
++	/* Validate we can work with this device.
++	 */
++	if (sm_state == NULL || handle == 0 || data == NULL) {
++		pr_err("[%s]: invalid input\n", __func__);
++		return -EPERM;
++	}
++
++	*data = 0;
++
++	ioparam.handle = handle;
++	ret = vc_sm_ioctl_lock(sm_state->data_knl,
++			       &ioparam,
++			       1,
++			       ((mode ==
++				 VC_SM_LOCK_CACHED) ? VMCS_SM_CACHE_HOST :
++				VMCS_SM_CACHE_NONE), 0);
++
++	*data = ioparam.addr;
++	return ret;
++}
++EXPORT_SYMBOL_GPL(vc_sm_lock);
++
++/* Unlock a memory handle in use by kernel.
++*/
++int vc_sm_unlock(int handle, int flush, int no_vc_unlock)
++{
++	struct vmcs_sm_ioctl_lock_unlock ioparam;
++
++	/* Validate we can work with this device.
++	 */
++	if (sm_state == NULL || handle == 0) {
++		pr_err("[%s]: invalid input\n", __func__);
++		return -EPERM;
++	}
++
++	ioparam.handle = handle;
++	return vc_sm_ioctl_unlock(sm_state->data_knl,
++				  &ioparam, flush, 0, no_vc_unlock);
++}
++EXPORT_SYMBOL_GPL(vc_sm_unlock);
++
++/* Map a shared memory region for use by kernel.
++*/
++int vc_sm_map(int handle, unsigned int sm_addr, VC_SM_LOCK_CACHE_MODE_T mode,
++	      long unsigned int *data)
++{
++	struct vmcs_sm_ioctl_lock_unlock ioparam;
++	int ret;
++
++	/* Validate we can work with this device.
++	 */
++	if (sm_state == NULL || handle == 0 || data == NULL || sm_addr == 0) {
++		pr_err("[%s]: invalid input\n", __func__);
++		return -EPERM;
++	}
++
++	*data = 0;
++
++	ioparam.handle = handle;
++	ret = vc_sm_ioctl_lock(sm_state->data_knl,
++			       &ioparam,
++			       1,
++			       ((mode ==
++				 VC_SM_LOCK_CACHED) ? VMCS_SM_CACHE_HOST :
++				VMCS_SM_CACHE_NONE), sm_addr);
++
++	*data = ioparam.addr;
++	return ret;
++}
++EXPORT_SYMBOL_GPL(vc_sm_map);
++#endif
++
++late_initcall(vc_sm_init);
++module_exit(vc_sm_exit);
++
++MODULE_AUTHOR("Broadcom");
++MODULE_DESCRIPTION("VideoCore SharedMemory Driver");
++MODULE_LICENSE("GPL v2");
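
The functions exported above (vc_sm_alloc, vc_sm_lock, vc_sm_unlock, vc_sm_free)
form the in-kernel API of this driver. Below is a minimal sketch of how another
kernel module might use them; it assumes VC_SM_ALLOC_T carries at least the
base_unit, num_unit and type fields read by vc_sm_alloc(), leaves any other
fields zeroed, and takes the prototypes from the driver's kernel header (not
shown here).

	static int vcsm_kernel_example(void)
	{
		VC_SM_ALLOC_T alloc = { 0 };
		long unsigned int kaddr = 0;
		int handle = 0;
		int ret;

		alloc.base_unit = 4096;		/* bytes per unit */
		alloc.num_unit = 1;		/* number of units */
		alloc.type = VC_SM_ALLOC_CACHED;

		ret = vc_sm_alloc(&alloc, &handle);
		if (ret != 0)
			return ret;

		/* Lock the block to obtain a kernel-usable address. */
		ret = vc_sm_lock(handle, VC_SM_LOCK_CACHED, &kaddr);
		if (ret == 0) {
			/* ... use the memory at kaddr ... */
			vc_sm_unlock(handle, 1 /* flush */, 0);
		}

		vc_sm_free(handle);
		return ret;
	}
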
+--- /dev/null
++++ b/include/linux/broadcom/vmcs_sm_ioctl.h
+@@ -0,0 +1,248 @@
++/*****************************************************************************
++*  Copyright 2011 Broadcom Corporation.  All rights reserved.
++*
++*  Unless you and Broadcom execute a separate written software license
++*  agreement governing use of this software, this software is licensed to you
++*  under the terms of the GNU General Public License version 2, available at
++*  http://www.broadcom.com/licenses/GPLv2.php (the "GPL").
++*
++*  Notwithstanding the above, under no circumstances may you combine this
++*  software in any way with any other Broadcom software provided under a
++*  license other than the GPL, without Broadcom's express prior written
++*  consent.
++*
++*****************************************************************************/
++
++#if !defined(__VMCS_SM_IOCTL_H__INCLUDED__)
++#define __VMCS_SM_IOCTL_H__INCLUDED__
++
++/* ---- Include Files ---------------------------------------------------- */
++
++#if defined(__KERNEL__)
++#include <linux/types.h>	/* Needed for standard types */
++#else
++#include <stdint.h>
++#endif
++
++#include <linux/ioctl.h>
++
++/* ---- Constants and Types ---------------------------------------------- */
++
++#define VMCS_SM_RESOURCE_NAME               32
++#define VMCS_SM_RESOURCE_NAME_DEFAULT       "sm-host-resource"
++
++/* Type define used to create unique IOCTL number */
++#define VMCS_SM_MAGIC_TYPE                  'I'
++
++/* IOCTL commands */
++enum vmcs_sm_cmd_e {
++	VMCS_SM_CMD_ALLOC = 0x5A,	/* Start at 0x5A arbitrarily */
++	VMCS_SM_CMD_ALLOC_SHARE,
++	VMCS_SM_CMD_LOCK,
++	VMCS_SM_CMD_LOCK_CACHE,
++	VMCS_SM_CMD_UNLOCK,
++	VMCS_SM_CMD_RESIZE,
++	VMCS_SM_CMD_UNMAP,
++	VMCS_SM_CMD_FREE,
++	VMCS_SM_CMD_FLUSH,
++	VMCS_SM_CMD_INVALID,
++
++	VMCS_SM_CMD_SIZE_USR_HANDLE,
++	VMCS_SM_CMD_CHK_USR_HANDLE,
++
++	VMCS_SM_CMD_MAPPED_USR_HANDLE,
++	VMCS_SM_CMD_MAPPED_USR_ADDRESS,
++	VMCS_SM_CMD_MAPPED_VC_HDL_FROM_ADDR,
++	VMCS_SM_CMD_MAPPED_VC_HDL_FROM_HDL,
++	VMCS_SM_CMD_MAPPED_VC_ADDR_FROM_HDL,
++
++	VMCS_SM_CMD_VC_WALK_ALLOC,
++	VMCS_SM_CMD_HOST_WALK_MAP,
++	VMCS_SM_CMD_HOST_WALK_PID_ALLOC,
++	VMCS_SM_CMD_HOST_WALK_PID_MAP,
++
++	VMCS_SM_CMD_CLEAN_INVALID,
++
++	VMCS_SM_CMD_LAST	/* Do not delete */
++};
++
++/* Cache type supported, conveniently matches the user space definition in
++** user-vcsm.h.
++*/
++enum vmcs_sm_cache_e {
++	VMCS_SM_CACHE_NONE,
++	VMCS_SM_CACHE_HOST,
++	VMCS_SM_CACHE_VC,
++	VMCS_SM_CACHE_BOTH,
++};
++
++/* IOCTL Data structures */
++struct vmcs_sm_ioctl_alloc {
++	/* user -> kernel */
++	unsigned int size;
++	unsigned int num;
++	enum vmcs_sm_cache_e cached;
++	char name[VMCS_SM_RESOURCE_NAME];
++
++	/* kernel -> user */
++	unsigned int handle;
++	/* unsigned int base_addr; */
++};
++
++struct vmcs_sm_ioctl_alloc_share {
++	/* user -> kernel */
++	unsigned int handle;
++	unsigned int size;
++};
++
++struct vmcs_sm_ioctl_free {
++	/* user -> kernel */
++	unsigned int handle;
++	/* unsigned int base_addr; */
++};
++
++struct vmcs_sm_ioctl_lock_unlock {
++	/* user -> kernel */
++	unsigned int handle;
++
++	/* kernel -> user */
++	unsigned int addr;
++};
++
++struct vmcs_sm_ioctl_lock_cache {
++	/* user -> kernel */
++	unsigned int handle;
++	enum vmcs_sm_cache_e cached;
++};
++
++struct vmcs_sm_ioctl_resize {
++	/* user -> kernel */
++	unsigned int handle;
++	unsigned int new_size;
++
++	/* kernel -> user */
++	unsigned int old_size;
++};
++
++struct vmcs_sm_ioctl_map {
++	/* user -> kernel */
++	/* and kernel -> user */
++	unsigned int pid;
++	unsigned int handle;
++	unsigned int addr;
++
++	/* kernel -> user */
++	unsigned int size;
++};
++
++struct vmcs_sm_ioctl_walk {
++	/* user -> kernel */
++	unsigned int pid;
++};
++
++struct vmcs_sm_ioctl_chk {
++	/* user -> kernel */
++	unsigned int handle;
++
++	/* kernel -> user */
++	unsigned int addr;
++	unsigned int size;
++	enum vmcs_sm_cache_e cache;
++};
++
++struct vmcs_sm_ioctl_size {
++	/* user -> kernel */
++	unsigned int handle;
++
++	/* kernel -> user */
++	unsigned int size;
++};
++
++struct vmcs_sm_ioctl_cache {
++	/* user -> kernel */
++	unsigned int handle;
++	unsigned int addr;
++	unsigned int size;
++};
++
++struct vmcs_sm_ioctl_clean_invalid {
++	/* user -> kernel */
++	struct {
++		unsigned int cmd;
++		unsigned int handle;
++		unsigned int addr;
++		unsigned int size;
++	} s[8];
++};
++
++/* IOCTL numbers */
++#define VMCS_SM_IOCTL_MEM_ALLOC\
++	_IOR(VMCS_SM_MAGIC_TYPE, VMCS_SM_CMD_ALLOC,\
++	 struct vmcs_sm_ioctl_alloc)
++#define VMCS_SM_IOCTL_MEM_ALLOC_SHARE\
++	_IOR(VMCS_SM_MAGIC_TYPE, VMCS_SM_CMD_ALLOC_SHARE,\
++	 struct vmcs_sm_ioctl_alloc_share)
++#define VMCS_SM_IOCTL_MEM_LOCK\
++	_IOR(VMCS_SM_MAGIC_TYPE, VMCS_SM_CMD_LOCK,\
++	 struct vmcs_sm_ioctl_lock_unlock)
++#define VMCS_SM_IOCTL_MEM_LOCK_CACHE\
++	_IOR(VMCS_SM_MAGIC_TYPE, VMCS_SM_CMD_LOCK_CACHE,\
++	 struct vmcs_sm_ioctl_lock_cache)
++#define VMCS_SM_IOCTL_MEM_UNLOCK\
++	_IOR(VMCS_SM_MAGIC_TYPE, VMCS_SM_CMD_UNLOCK,\
++	 struct vmcs_sm_ioctl_lock_unlock)
++#define VMCS_SM_IOCTL_MEM_RESIZE\
++	_IOR(VMCS_SM_MAGIC_TYPE, VMCS_SM_CMD_RESIZE,\
++	 struct vmcs_sm_ioctl_resize)
++#define VMCS_SM_IOCTL_MEM_FREE\
++	_IOR(VMCS_SM_MAGIC_TYPE, VMCS_SM_CMD_FREE,\
++	 struct vmcs_sm_ioctl_free)
++#define VMCS_SM_IOCTL_MEM_FLUSH\
++	_IOR(VMCS_SM_MAGIC_TYPE, VMCS_SM_CMD_FLUSH,\
++	 struct vmcs_sm_ioctl_cache)
++#define VMCS_SM_IOCTL_MEM_INVALID\
++	_IOR(VMCS_SM_MAGIC_TYPE, VMCS_SM_CMD_INVALID,\
++	 struct vmcs_sm_ioctl_cache)
++#define VMCS_SM_IOCTL_MEM_CLEAN_INVALID\
++	_IOR(VMCS_SM_MAGIC_TYPE, VMCS_SM_CMD_CLEAN_INVALID,\
++	 struct vmcs_sm_ioctl_clean_invalid)
++
++#define VMCS_SM_IOCTL_SIZE_USR_HDL\
++	_IOR(VMCS_SM_MAGIC_TYPE, VMCS_SM_CMD_SIZE_USR_HANDLE,\
++	 struct vmcs_sm_ioctl_size)
++#define VMCS_SM_IOCTL_CHK_USR_HDL\
++	_IOR(VMCS_SM_MAGIC_TYPE, VMCS_SM_CMD_CHK_USR_HANDLE,\
++	 struct vmcs_sm_ioctl_chk)
++
++#define VMCS_SM_IOCTL_MAP_USR_HDL\
++	_IOR(VMCS_SM_MAGIC_TYPE, VMCS_SM_CMD_MAPPED_USR_HANDLE,\
++	 struct vmcs_sm_ioctl_map)
++#define VMCS_SM_IOCTL_MAP_USR_ADDRESS\
++	_IOR(VMCS_SM_MAGIC_TYPE, VMCS_SM_CMD_MAPPED_USR_ADDRESS,\
++	 struct vmcs_sm_ioctl_map)
++#define VMCS_SM_IOCTL_MAP_VC_HDL_FR_ADDR\
++	_IOR(VMCS_SM_MAGIC_TYPE, VMCS_SM_CMD_MAPPED_VC_HDL_FROM_ADDR,\
++	 struct vmcs_sm_ioctl_map)
++#define VMCS_SM_IOCTL_MAP_VC_HDL_FR_HDL\
++	_IOR(VMCS_SM_MAGIC_TYPE, VMCS_SM_CMD_MAPPED_VC_HDL_FROM_HDL,\
++	 struct vmcs_sm_ioctl_map)
++#define VMCS_SM_IOCTL_MAP_VC_ADDR_FR_HDL\
++	_IOR(VMCS_SM_MAGIC_TYPE, VMCS_SM_CMD_MAPPED_VC_ADDR_FROM_HDL,\
++	 struct vmcs_sm_ioctl_map)
++
++#define VMCS_SM_IOCTL_VC_WALK_ALLOC\
++	_IO(VMCS_SM_MAGIC_TYPE, VMCS_SM_CMD_VC_WALK_ALLOC)
++#define VMCS_SM_IOCTL_HOST_WALK_MAP\
++	_IO(VMCS_SM_MAGIC_TYPE, VMCS_SM_CMD_HOST_WALK_MAP)
++#define VMCS_SM_IOCTL_HOST_WALK_PID_ALLOC\
++	_IOR(VMCS_SM_MAGIC_TYPE, VMCS_SM_CMD_HOST_WALK_PID_ALLOC,\
++	 struct vmcs_sm_ioctl_walk)
++#define VMCS_SM_IOCTL_HOST_WALK_PID_MAP\
++	_IOR(VMCS_SM_MAGIC_TYPE, VMCS_SM_CMD_HOST_WALK_PID_MAP,\
++	 struct vmcs_sm_ioctl_walk)
++
++/* ---- Variable Externs ------------------------------------------------- */
++
++/* ---- Function Prototypes ---------------------------------------------- */
++
++#endif /* __VMCS_SM_IOCTL_H__INCLUDED__ */
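
For reference, a minimal user-space sketch of the ioctl interface defined in
this header. It assumes the character device created by the driver appears as
/dev/vcsm and that this header is visible to user space; mapping an allocation
into the process is done with mmap() on the same file descriptor (the offset
convention is defined by vc_sm_mmap() in the driver) and is not shown here.

	#include <fcntl.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <unistd.h>
	#include <linux/broadcom/vmcs_sm_ioctl.h>

	int main(void)
	{
		struct vmcs_sm_ioctl_alloc alloc;
		struct vmcs_sm_ioctl_size size;
		struct vmcs_sm_ioctl_free freeparam;
		int fd = open("/dev/vcsm", O_RDWR);

		if (fd < 0)
			return 1;

		memset(&alloc, 0, sizeof(alloc));
		alloc.size = 4096;			/* bytes per unit */
		alloc.num = 1;				/* number of units */
		alloc.cached = VMCS_SM_CACHE_NONE;
		strncpy(alloc.name, "example", sizeof(alloc.name) - 1);

		if (ioctl(fd, VMCS_SM_IOCTL_MEM_ALLOC, &alloc) == 0) {
			memset(&size, 0, sizeof(size));
			size.handle = alloc.handle;
			if (ioctl(fd, VMCS_SM_IOCTL_SIZE_USR_HDL, &size) == 0)
				printf("handle %u -> %u bytes\n",
				       alloc.handle, size.size);

			freeparam.handle = alloc.handle;
			ioctl(fd, VMCS_SM_IOCTL_MEM_FREE, &freeparam);
		}

		close(fd);
		return 0;
	}
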
diff --git a/target/linux/brcm2708/patches-4.4/0040-Add-dev-gpiomem-device-for-rootless-user-GPIO-access.patch b/target/linux/brcm2708/patches-4.4/0040-Add-dev-gpiomem-device-for-rootless-user-GPIO-access.patch
new file mode 100644
index 0000000..cf964e1
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0040-Add-dev-gpiomem-device-for-rootless-user-GPIO-access.patch
@@ -0,0 +1,306 @@
+From 6ddd3a6f0a8a3e4506b4fe4e1b410d9d945c5df2 Mon Sep 17 00:00:00 2001
+From: Luke Wren <luke at raspberrypi.org>
+Date: Fri, 21 Aug 2015 23:14:48 +0100
+Subject: [PATCH 040/127] Add /dev/gpiomem device for rootless user GPIO access
+
+Signed-off-by: Luke Wren <luke at raspberrypi.org>
+
+bcm2835-gpiomem: Fix for ARCH_BCM2835 builds
+
+Build on ARCH_BCM2835, and fail to probe if no IO resource.
+
+See: https://github.com/raspberrypi/linux/issues/1154
+---
+ drivers/char/broadcom/Kconfig           |   9 ++
+ drivers/char/broadcom/Makefile          |   3 +
+ drivers/char/broadcom/bcm2835-gpiomem.c | 260 ++++++++++++++++++++++++++++++++
+ 3 files changed, 272 insertions(+)
+ create mode 100644 drivers/char/broadcom/bcm2835-gpiomem.c
+
+--- a/drivers/char/broadcom/Kconfig
++++ b/drivers/char/broadcom/Kconfig
+@@ -32,3 +32,12 @@ config BCM_VC_SM
+ 	help
+ 	Support for the VC shared memory on the Broadcom reference
+ 	design. Uses the VCHIQ stack.
++
++config BCM2835_DEVGPIOMEM
++	tristate "/dev/gpiomem rootless GPIO access via mmap() on the BCM2835"
++	default m
++	help
++		Provides users with root-free access to the GPIO registers
++		on the 2835. Calling mmap() on /dev/gpiomem maps the GPIO
++		register page into the user's address space.
++
+--- a/drivers/char/broadcom/Makefile
++++ b/drivers/char/broadcom/Makefile
+@@ -1,3 +1,6 @@
+ obj-$(CONFIG_BCM_VC_CMA)	+= vc_cma/
+ obj-$(CONFIG_BCM2708_VCMEM)	+= vc_mem.o
+ obj-$(CONFIG_BCM_VC_SM)         += vc_sm/
++
++obj-$(CONFIG_BCM2835_DEVGPIOMEM)+= bcm2835-gpiomem.o
++
+--- /dev/null
++++ b/drivers/char/broadcom/bcm2835-gpiomem.c
+@@ -0,0 +1,260 @@
++/**
++ * GPIO memory device driver
++ *
++ * Creates a chardev /dev/gpiomem which will provide user access to
++ * the BCM2835's GPIO registers when it is mmap()'d.
++ * This removes the need for root privileges for user GPIO access, without
++ * relaxing the permissions on /dev/mem.
++ *
++ * Written by Luke Wren <luke at raspberrypi.org>
++ * Copyright (c) 2015, Raspberry Pi (Trading) Ltd.
++ *
++ * Redistribution and use in source and binary forms, with or without
++ * modification, are permitted provided that the following conditions
++ * are met:
++ * 1. Redistributions of source code must retain the above copyright
++ *    notice, this list of conditions, and the following disclaimer,
++ *    without modification.
++ * 2. Redistributions in binary form must reproduce the above copyright
++ *    notice, this list of conditions and the following disclaimer in the
++ *    documentation and/or other materials provided with the distribution.
++ * 3. The names of the above-listed copyright holders may not be used
++ *    to endorse or promote products derived from this software without
++ *    specific prior written permission.
++ *
++ * ALTERNATIVELY, this software may be distributed under the terms of the
++ * GNU General Public License ("GPL") version 2, as published by the Free
++ * Software Foundation.
++ *
++ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
++ * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
++ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
++ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
++ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
++ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
++ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
++ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
++ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
++ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
++ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
++ */
++
++#include <linux/kernel.h>
++#include <linux/module.h>
++#include <linux/of.h>
++#include <linux/platform_device.h>
++#include <linux/mm.h>
++#include <linux/slab.h>
++#include <linux/cdev.h>
++#include <linux/pagemap.h>
++#include <linux/io.h>
++
++#define DEVICE_NAME "bcm2835-gpiomem"
++#define DRIVER_NAME "gpiomem-bcm2835"
++#define DEVICE_MINOR 0
++
++struct bcm2835_gpiomem_instance {
++	unsigned long gpio_regs_phys;
++	struct device *dev;
++};
++
++static struct cdev bcm2835_gpiomem_cdev;
++static dev_t bcm2835_gpiomem_devid;
++static struct class *bcm2835_gpiomem_class;
++static struct device *bcm2835_gpiomem_dev;
++static struct bcm2835_gpiomem_instance *inst;
++
++
++/****************************************************************************
++*
++*   GPIO mem chardev file ops
++*
++***************************************************************************/
++
++static int bcm2835_gpiomem_open(struct inode *inode, struct file *file)
++{
++	int dev = iminor(inode);
++	int ret = 0;
++
++	dev_info(inst->dev, "gpiomem device opened.");
++
++	if (dev != DEVICE_MINOR) {
++		dev_err(inst->dev, "Unknown minor device: %d", dev);
++		ret = -ENXIO;
++	}
++	return ret;
++}
++
++static int bcm2835_gpiomem_release(struct inode *inode, struct file *file)
++{
++	int dev = iminor(inode);
++	int ret = 0;
++
++	if (dev != DEVICE_MINOR) {
++		dev_err(inst->dev, "Unknown minor device %d", dev);
++		ret = -ENXIO;
++	}
++	return ret;
++}
++
++static const struct vm_operations_struct bcm2835_gpiomem_vm_ops = {
++#ifdef CONFIG_HAVE_IOREMAP_PROT
++	.access = generic_access_phys
++#endif
++};
++
++static int bcm2835_gpiomem_mmap(struct file *file, struct vm_area_struct *vma)
++{
++	/* Ignore what the user says - they're getting the GPIO regs
++	   whether they like it or not! */
++	unsigned long gpio_page = inst->gpio_regs_phys >> PAGE_SHIFT;
++
++	vma->vm_page_prot = phys_mem_access_prot(file, gpio_page,
++						 PAGE_SIZE,
++						 vma->vm_page_prot);
++	vma->vm_ops = &bcm2835_gpiomem_vm_ops;
++	if (remap_pfn_range(vma, vma->vm_start,
++			gpio_page,
++			PAGE_SIZE,
++			vma->vm_page_prot)) {
++		return -EAGAIN;
++	}
++	return 0;
++}
++
++static const struct file_operations
++bcm2835_gpiomem_fops = {
++	.owner = THIS_MODULE,
++	.open = bcm2835_gpiomem_open,
++	.release = bcm2835_gpiomem_release,
++	.mmap = bcm2835_gpiomem_mmap,
++};
++
++
++ /****************************************************************************
++*
++*   Probe and remove functions
++*
++***************************************************************************/
++
++
++static int bcm2835_gpiomem_probe(struct platform_device *pdev)
++{
++	int err;
++	void *ptr_err;
++	struct device *dev = &pdev->dev;
++	struct resource *ioresource;
++
++	/* Allocate buffers and instance data */
++
++	inst = kzalloc(sizeof(struct bcm2835_gpiomem_instance), GFP_KERNEL);
++
++	if (!inst) {
++		err = -ENOMEM;
++		goto failed_inst_alloc;
++	}
++
++	inst->dev = dev;
++
++	ioresource = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++	if (ioresource) {
++		inst->gpio_regs_phys = ioresource->start;
++	} else {
++		dev_err(inst->dev, "failed to get IO resource");
++		err = -ENOENT;
++		goto failed_get_resource;
++	}
++
++	/* Create character device entries */
++
++	err = alloc_chrdev_region(&bcm2835_gpiomem_devid,
++				  DEVICE_MINOR, 1, DEVICE_NAME);
++	if (err != 0) {
++		dev_err(inst->dev, "unable to allocate device number");
++		goto failed_alloc_chrdev;
++	}
++	cdev_init(&bcm2835_gpiomem_cdev, &bcm2835_gpiomem_fops);
++	bcm2835_gpiomem_cdev.owner = THIS_MODULE;
++	err = cdev_add(&bcm2835_gpiomem_cdev, bcm2835_gpiomem_devid, 1);
++	if (err != 0) {
++		dev_err(inst->dev, "unable to register device");
++		goto failed_cdev_add;
++	}
++
++	/* Create sysfs entries */
++
++	bcm2835_gpiomem_class = class_create(THIS_MODULE, DEVICE_NAME);
++	ptr_err = bcm2835_gpiomem_class;
++	if (IS_ERR(ptr_err))
++		goto failed_class_create;
++
++	bcm2835_gpiomem_dev = device_create(bcm2835_gpiomem_class, NULL,
++					bcm2835_gpiomem_devid, NULL,
++					"gpiomem");
++	ptr_err = bcm2835_gpiomem_dev;
++	if (IS_ERR(ptr_err))
++		goto failed_device_create;
++
++	dev_info(inst->dev, "Initialised: Registers at 0x%08lx",
++		inst->gpio_regs_phys);
++
++	return 0;
++
++failed_device_create:
++	class_destroy(bcm2835_gpiomem_class);
++failed_class_create:
++	cdev_del(&bcm2835_gpiomem_cdev);
++	err = PTR_ERR(ptr_err);
++failed_cdev_add:
++	unregister_chrdev_region(bcm2835_gpiomem_devid, 1);
++failed_alloc_chrdev:
++failed_get_resource:
++	kfree(inst);
++failed_inst_alloc:
++	dev_err(dev, "could not load bcm2835_gpiomem");	/* inst may be NULL/freed */
++	return err;
++}
++
++static int bcm2835_gpiomem_remove(struct platform_device *pdev)
++{
++	struct device *dev = inst->dev;
++
++	kfree(inst);
++	device_destroy(bcm2835_gpiomem_class, bcm2835_gpiomem_devid);
++	class_destroy(bcm2835_gpiomem_class);
++	cdev_del(&bcm2835_gpiomem_cdev);
++	unregister_chrdev_region(bcm2835_gpiomem_devid, 1);
++
++	dev_info(dev, "GPIO mem driver removed - OK");
++	return 0;
++}
++
++ /****************************************************************************
++*
++*   Register the driver with device tree
++*
++***************************************************************************/
++
++static const struct of_device_id bcm2835_gpiomem_of_match[] = {
++	{.compatible = "brcm,bcm2835-gpiomem",},
++	{ /* sentinel */ },
++};
++
++MODULE_DEVICE_TABLE(of, bcm2835_gpiomem_of_match);
++
++static struct platform_driver bcm2835_gpiomem_driver = {
++	.probe = bcm2835_gpiomem_probe,
++	.remove = bcm2835_gpiomem_remove,
++	.driver = {
++		   .name = DRIVER_NAME,
++		   .owner = THIS_MODULE,
++		   .of_match_table = bcm2835_gpiomem_of_match,
++		   },
++};
++
++module_platform_driver(bcm2835_gpiomem_driver);
++
++MODULE_ALIAS("platform:gpiomem-bcm2835");
++MODULE_LICENSE("GPL");
++MODULE_DESCRIPTION("gpiomem driver for accessing GPIO from userspace");
++MODULE_AUTHOR("Luke Wren <luke at raspberrypi.org>");
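
As a usage note, the sketch below shows the intended access pattern from user
space: open /dev/gpiomem, mmap() one page (the driver maps the GPIO register
block regardless of the requested offset), then program a pin through the
standard BCM2835 GPIO registers (GPFSELn at 0x00, GPSET0 at 0x1C, GPCLR0 at
0x28). The pin number is arbitrary.

	#include <fcntl.h>
	#include <stdint.h>
	#include <sys/mman.h>
	#include <unistd.h>

	#define GPIO_PIN 17

	int main(void)
	{
		volatile uint32_t *gpio;
		int fd = open("/dev/gpiomem", O_RDWR | O_SYNC);

		if (fd < 0)
			return 1;

		gpio = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			    MAP_SHARED, fd, 0);
		close(fd);
		if (gpio == MAP_FAILED)
			return 1;

		/* Set the 3-bit function field for GPIO_PIN to 001 (output). */
		gpio[GPIO_PIN / 10] = (gpio[GPIO_PIN / 10] &
				       ~(7u << ((GPIO_PIN % 10) * 3))) |
				      (1u << ((GPIO_PIN % 10) * 3));

		gpio[7] = 1u << GPIO_PIN;	/* GPSET0: drive the pin high */
		sleep(1);
		gpio[10] = 1u << GPIO_PIN;	/* GPCLR0: drive the pin low */

		munmap((void *)gpio, 4096);
		return 0;
	}
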
diff --git a/target/linux/brcm2708/patches-4.4/0041-Add-SMI-driver.patch b/target/linux/brcm2708/patches-4.4/0041-Add-SMI-driver.patch
new file mode 100644
index 0000000..7075ddb
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0041-Add-SMI-driver.patch
@@ -0,0 +1,1930 @@
+From 9e50bbd976d3c77770fe98e7e1465012aa2d19d4 Mon Sep 17 00:00:00 2001
+From: Luke Wren <wren6991 at gmail.com>
+Date: Sat, 5 Sep 2015 01:14:45 +0100
+Subject: [PATCH 041/127] Add SMI driver
+
+Signed-off-by: Luke Wren <wren6991 at gmail.com>
+---
+ .../bindings/misc/brcm,bcm2835-smi-dev.txt         |  17 +
+ .../devicetree/bindings/misc/brcm,bcm2835-smi.txt  |  48 +
+ drivers/char/broadcom/Kconfig                      |   8 +
+ drivers/char/broadcom/Makefile                     |   2 +-
+ drivers/char/broadcom/bcm2835_smi_dev.c            | 402 +++++++++
+ drivers/misc/Kconfig                               |   8 +
+ drivers/misc/Makefile                              |   1 +
+ drivers/misc/bcm2835_smi.c                         | 985 +++++++++++++++++++++
+ include/linux/broadcom/bcm2835_smi.h               | 391 ++++++++
+ 9 files changed, 1861 insertions(+), 1 deletion(-)
+ create mode 100644 Documentation/devicetree/bindings/misc/brcm,bcm2835-smi-dev.txt
+ create mode 100644 Documentation/devicetree/bindings/misc/brcm,bcm2835-smi.txt
+ create mode 100644 drivers/char/broadcom/bcm2835_smi_dev.c
+ create mode 100644 drivers/misc/bcm2835_smi.c
+ create mode 100644 include/linux/broadcom/bcm2835_smi.h
+
+--- /dev/null
++++ b/Documentation/devicetree/bindings/misc/brcm,bcm2835-smi-dev.txt
+@@ -0,0 +1,17 @@
++* Broadcom BCM2835 SMI character device driver.
++
++SMI, or secondary memory interface, is a peripheral specific to certain
++Broadcom SoCs. It is useful for talking to devices with a parallel register
++interface, such as parallel-interface displays and NAND flash.
++
++This driver adds a character device which provides a user-space interface to
++an instance of the SMI driver.
++
++Required properties:
++- compatible: "brcm,bcm2835-smi-dev"
++- smi_handle: a phandle to the smi node.
++
++Optional properties:
++- None.
++
++
+--- /dev/null
++++ b/Documentation/devicetree/bindings/misc/brcm,bcm2835-smi.txt
+@@ -0,0 +1,48 @@
++* Broadcom BCM2835 SMI driver.
++
++SMI, or secondary memory interface, is a peripheral specific to certain
++Broadcom SoCs. It is useful for talking to devices with a parallel register
++interface, such as parallel-interface displays and NAND flash.
++
++Required properties:
++- compatible: "brcm,bcm2835-smi"
++- reg: Should contain location and length of SMI registers and SMI clkman regs
++- interrupts: the SMI interrupt.
++- pinctrl-names: should be "default".
++- pinctrl-0: the phandle of the gpio pin node.
++- brcm,smi-clock-source: the clock source for clkman
++- brcm,smi-clock-divisor: the integer clock divisor for clkman
++- dmas: the dma controller phandle and the DREQ number (4 on a 2835)
++- dma-names: the name used by the driver to request its channel.
++  Should be "rx-tx".
++
++Optional properties:
++- None.
++
++Examples:
++
++8 data pin configuration:
++
++smi: smi at 7e600000 {
++	compatible = "brcm,bcm2835-smi";
++	reg = <0x7e600000 0x44>, <0x7e1010b0 0x8>;
++	interrupts = <2 16>;
++	pinctrl-names = "default";
++	pinctrl-0 = <&smi_pins>;
++	brcm,smi-clock-source = <6>;
++	brcm,smi-clock-divisor = <4>;
++	dmas = <&dma 4>;
++	dma-names = "rx-tx";
++
++	status = "okay";
++};
++
++smi_pins: smi_pins {
++	brcm,pins = <2 3 4 5 6 7 8 9 10 11 12 13 14 15>;
++	/* Alt 1: SMI */
++	brcm,function = <5 5 5 5 5 5 5 5 5 5 5 5 5 5>;
++	/* /CS, /WE and /OE are pulled high, as they are
++	   generally active low signals */
++	brcm,pull = <2 2 2 2 2 2 0 0 0 0 0 0 0 0>;
++};
++
+--- a/drivers/char/broadcom/Kconfig
++++ b/drivers/char/broadcom/Kconfig
+@@ -41,3 +41,11 @@ config BCM2835_DEVGPIOMEM
+ 		on the 2835. Calling mmap(/dev/gpiomem) will map the GPIO
+ 		register page to the user's pointer.
+ 
++config BCM2835_SMI_DEV
++	tristate "Character device driver for BCM2835 Secondary Memory Interface"
++	depends on (MACH_BCM2708 || MACH_BCM2709 || ARCH_BCM2835) && BCM2835_SMI
++	default m
++	help
++		This driver provides a character device interface (ioctl + read/write) to
++		Broadcom's Secondary Memory interface. The low-level functionality is provided
++		by the SMI driver itself.
+--- a/drivers/char/broadcom/Makefile
++++ b/drivers/char/broadcom/Makefile
+@@ -3,4 +3,4 @@ obj-$(CONFIG_BCM2708_VCMEM)	+= vc_mem.o
+ obj-$(CONFIG_BCM_VC_SM)         += vc_sm/
+ 
+ obj-$(CONFIG_BCM2835_DEVGPIOMEM)+= bcm2835-gpiomem.o
+-
++obj-$(CONFIG_BCM2835_SMI_DEV)	+= bcm2835_smi_dev.o
+--- /dev/null
++++ b/drivers/char/broadcom/bcm2835_smi_dev.c
+@@ -0,0 +1,402 @@
++/**
++ * Character device driver for Broadcom Secondary Memory Interface
++ *
++ * Written by Luke Wren <luke at raspberrypi.org>
++ * Copyright (c) 2015, Raspberry Pi (Trading) Ltd.
++ *
++ * Redistribution and use in source and binary forms, with or without
++ * modification, are permitted provided that the following conditions
++ * are met:
++ * 1. Redistributions of source code must retain the above copyright
++ *    notice, this list of conditions, and the following disclaimer,
++ *    without modification.
++ * 2. Redistributions in binary form must reproduce the above copyright
++ *    notice, this list of conditions and the following disclaimer in the
++ *    documentation and/or other materials provided with the distribution.
++ * 3. The names of the above-listed copyright holders may not be used
++ *    to endorse or promote products derived from this software without
++ *    specific prior written permission.
++ *
++ * ALTERNATIVELY, this software may be distributed under the terms of the
++ * GNU General Public License ("GPL") version 2, as published by the Free
++ * Software Foundation.
++ *
++ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
++ * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
++ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
++ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
++ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
++ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
++ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
++ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
++ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
++ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
++ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
++ */
++
++#include <linux/kernel.h>
++#include <linux/module.h>
++#include <linux/of.h>
++#include <linux/platform_device.h>
++#include <linux/slab.h>
++#include <linux/mm.h>
++#include <linux/pagemap.h>
++#include <linux/fs.h>
++#include <linux/cdev.h>
++#include <linux/fs.h>
++
++#include <linux/broadcom/bcm2835_smi.h>
++
++#define DEVICE_NAME "bcm2835-smi-dev"
++#define DRIVER_NAME "smi-dev-bcm2835"
++#define DEVICE_MINOR 0
++
++static struct cdev bcm2835_smi_cdev;
++static dev_t bcm2835_smi_devid;
++static struct class *bcm2835_smi_class;
++static struct device *bcm2835_smi_dev;
++
++struct bcm2835_smi_dev_instance {
++	struct device *dev;
++};
++
++static struct bcm2835_smi_instance *smi_inst;
++static struct bcm2835_smi_dev_instance *inst;
++
++static const char *const ioctl_names[] = {
++	"READ_SETTINGS",
++	"WRITE_SETTINGS",
++	"ADDRESS"
++};
++
++/****************************************************************************
++*
++*   SMI chardev file ops
++*
++***************************************************************************/
++static long
++bcm2835_smi_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
++{
++	long ret = 0;
++
++	dev_info(inst->dev, "serving ioctl...");
++
++	switch (cmd) {
++	case BCM2835_SMI_IOC_GET_SETTINGS:{
++		struct smi_settings *settings;
++
++		dev_info(inst->dev, "Reading SMI settings to user.");
++		settings = bcm2835_smi_get_settings_from_regs(smi_inst);
++		if (copy_to_user((void *)arg, settings,
++				 sizeof(struct smi_settings)))
++			dev_err(inst->dev, "settings copy failed.");
++		break;
++	}
++	case BCM2835_SMI_IOC_WRITE_SETTINGS:{
++		struct smi_settings *settings;
++
++		dev_info(inst->dev, "Setting user's SMI settings.");
++		settings = bcm2835_smi_get_settings_from_regs(smi_inst);
++		if (copy_from_user(settings, (void *)arg,
++				   sizeof(struct smi_settings)))
++			dev_err(inst->dev, "settings copy failed.");
++		else
++			bcm2835_smi_set_regs_from_settings(smi_inst);
++		break;
++	}
++	case BCM2835_SMI_IOC_ADDRESS:
++		dev_info(inst->dev, "SMI address set: 0x%02x", (int)arg);
++		bcm2835_smi_set_address(smi_inst, arg);
++		break;
++	default:
++		dev_err(inst->dev, "invalid ioctl cmd: %d", cmd);
++		ret = -ENOTTY;
++		break;
++	}
++
++	return ret;
++}
++
++static int bcm2835_smi_open(struct inode *inode, struct file *file)
++{
++	int dev = iminor(inode);
++
++	dev_dbg(inst->dev, "SMI device opened.");
++
++	if (dev != DEVICE_MINOR) {
++		dev_err(inst->dev,
++			"bcm2835_smi_open: Unknown minor device: %d",
++			dev);
++		return -ENXIO;
++	}
++
++	return 0;
++}
++
++static int bcm2835_smi_release(struct inode *inode, struct file *file)
++{
++	int dev = iminor(inode);
++
++	if (dev != DEVICE_MINOR) {
++		dev_err(inst->dev,
++			"bcm2835_smi_release: Unknown minor device %d", dev);
++		return -ENXIO;
++	}
++
++	return 0;
++}
++
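++/*
++ * Copy data between the user buffer and the ring of DMA bounce buffers, one
++ * chunk at a time. The DMA completion callback posts bounce->callback_sem for
++ * each finished chunk; a 1 second timeout guards against a stalled transfer.
++ * Returns the number of bytes actually copied.
++ */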
++static ssize_t dma_bounce_user(
++	enum dma_transfer_direction dma_dir,
++	char __user *user_ptr,
++	size_t count,
++	struct bcm2835_smi_bounce_info *bounce)
++{
++	int chunk_size;
++	int chunk_no = 0;
++	int count_left = count;
++
++	while (count_left) {
++		int rv;
++		void *buf;
++
++		/* Wait for current chunk to complete: */
++		if (down_timeout(&bounce->callback_sem,
++			msecs_to_jiffies(1000))) {
++			dev_err(inst->dev, "DMA bounce timed out");
++			count -= (count_left);
++			break;
++		}
++
++		if (bounce->callback_sem.count >= DMA_BOUNCE_BUFFER_COUNT - 1)
++			dev_err(inst->dev, "WARNING: Ring buffer overflow");
++		chunk_size = count_left > DMA_BOUNCE_BUFFER_SIZE ?
++			DMA_BOUNCE_BUFFER_SIZE : count_left;
++		buf = bounce->buffer[chunk_no % DMA_BOUNCE_BUFFER_COUNT];
++		if (dma_dir == DMA_DEV_TO_MEM)
++			rv = copy_to_user(user_ptr, buf, chunk_size);
++		else
++			rv = copy_from_user(buf, user_ptr, chunk_size);
++		if (rv)
++			dev_err(inst->dev, "copy_*_user() failed!: %d", rv);
++		user_ptr += chunk_size;
++		count_left -= chunk_size;
++		chunk_no++;
++	}
++	return count;
++}
++
++static ssize_t
++bcm2835_read_file(struct file *f, char __user *user_ptr,
++		  size_t count, loff_t *offs)
++{
++	int odd_bytes;
++
++	dev_dbg(inst->dev, "User reading %zu bytes from SMI.", count);
++	/* We don't want to DMA a number of bytes % 4 != 0 (32 bit FIFO) */
++	if (count > DMA_THRESHOLD_BYTES)
++		odd_bytes = count & 0x3;
++	else
++		odd_bytes = count;
++	count -= odd_bytes;
++	if (count) {
++		struct bcm2835_smi_bounce_info *bounce;
++
++		count = bcm2835_smi_user_dma(smi_inst,
++			DMA_DEV_TO_MEM, user_ptr, count,
++			&bounce);
++		if (count)
++			count = dma_bounce_user(DMA_DEV_TO_MEM, user_ptr,
++				count, bounce);
++	}
++	if (odd_bytes) {
++		/* Read from FIFO directly if not using DMA */
++		uint8_t buf[DMA_THRESHOLD_BYTES];
++
++		bcm2835_smi_read_buf(smi_inst, buf, odd_bytes);
++		if (copy_to_user(user_ptr, buf, odd_bytes))
++			dev_err(inst->dev, "copy_to_user() failed.");
++		count += odd_bytes;
++
++	}
++	return count;
++}
++
++static ssize_t
++bcm2835_write_file(struct file *f, const char __user *user_ptr,
++		   size_t count, loff_t *offs)
++{
++	int odd_bytes;
++
++	dev_dbg(inst->dev, "User writing %zu bytes to SMI.", count);
++	if (count > DMA_THRESHOLD_BYTES)
++		odd_bytes = count & 0x3;
++	else
++		odd_bytes = count;
++	count -= odd_bytes;
++	if (count) {
++		struct bcm2835_smi_bounce_info *bounce;
++
++		count = bcm2835_smi_user_dma(smi_inst,
++			DMA_MEM_TO_DEV, (char __user *)user_ptr, count,
++			&bounce);
++		if (count)
++			count = dma_bounce_user(DMA_MEM_TO_DEV,
++				(char __user *)user_ptr,
++				count, bounce);
++	}
++	if (odd_bytes) {
++		uint8_t buf[DMA_THRESHOLD_BYTES];
++
++		if (copy_from_user(buf, user_ptr, odd_bytes))
++			dev_err(inst->dev, "copy_from_user() failed.");
++		else
++			bcm2835_smi_write_buf(smi_inst, buf, odd_bytes);
++		count += odd_bytes;
++	}
++	return count;
++}
++
++static const struct file_operations
++bcm2835_smi_fops = {
++	.owner = THIS_MODULE,
++	.unlocked_ioctl = bcm2835_smi_ioctl,
++	.open = bcm2835_smi_open,
++	.release = bcm2835_smi_release,
++	.read = bcm2835_read_file,
++	.write = bcm2835_write_file,
++};
++
++
++/****************************************************************************
++*
++*   bcm2835_smi_probe - called when the driver is loaded.
++*
++***************************************************************************/
++
++static int bcm2835_smi_dev_probe(struct platform_device *pdev)
++{
++	int err;
++	void *ptr_err;
++	struct device *dev = &pdev->dev;
++	struct device_node *node = dev->of_node, *smi_node;
++
++	if (!node) {
++		dev_err(dev, "No device tree node supplied!");
++		return -EINVAL;
++	}
++
++	smi_node = of_parse_phandle(node, "smi_handle", 0);
++
++	if (!smi_node) {
++		dev_err(dev, "No such property: smi_handle");
++		return -ENXIO;
++	}
++
++	smi_inst = bcm2835_smi_get(smi_node);
++
++	if (!smi_inst)
++		return -EPROBE_DEFER;
++
++	/* Allocate buffers and instance data */
++
++	inst = devm_kzalloc(dev, sizeof(*inst), GFP_KERNEL);
++
++	if (!inst)
++		return -ENOMEM;
++
++	inst->dev = dev;
++
++	/* Create character device entries */
++
++	err = alloc_chrdev_region(&bcm2835_smi_devid,
++				  DEVICE_MINOR, 1, DEVICE_NAME);
++	if (err != 0) {
++		dev_err(inst->dev, "unable to allocate device number");
++		return -ENOMEM;
++	}
++	cdev_init(&bcm2835_smi_cdev, &bcm2835_smi_fops);
++	bcm2835_smi_cdev.owner = THIS_MODULE;
++	err = cdev_add(&bcm2835_smi_cdev, bcm2835_smi_devid, 1);
++	if (err != 0) {
++		dev_err(inst->dev, "unable to register device");
++		err = -ENOMEM;
++		goto failed_cdev_add;
++	}
++
++	/* Create sysfs entries */
++
++	bcm2835_smi_class = class_create(THIS_MODULE, DEVICE_NAME);
++	ptr_err = bcm2835_smi_class;
++	if (IS_ERR(ptr_err))
++		goto failed_class_create;
++
++	bcm2835_smi_dev = device_create(bcm2835_smi_class, NULL,
++					bcm2835_smi_devid, NULL,
++					"smi");
++	ptr_err = bcm2835_smi_dev;
++	if (IS_ERR(ptr_err))
++		goto failed_device_create;
++
++	dev_info(inst->dev, "initialised");
++
++	return 0;
++
++failed_device_create:
++	class_destroy(bcm2835_smi_class);
++failed_class_create:
++	cdev_del(&bcm2835_smi_cdev);
++	err = PTR_ERR(ptr_err);
++failed_cdev_add:
++	unregister_chrdev_region(bcm2835_smi_devid, 1);
++	dev_err(dev, "could not load bcm2835_smi_dev");
++	return err;
++}
++
++/****************************************************************************
++*
++*   bcm2835_smi_remove - called when the driver is unloaded.
++*
++***************************************************************************/
++
++static int bcm2835_smi_dev_remove(struct platform_device *pdev)
++{
++	device_destroy(bcm2835_smi_class, bcm2835_smi_devid);
++	class_destroy(bcm2835_smi_class);
++	cdev_del(&bcm2835_smi_cdev);
++	unregister_chrdev_region(bcm2835_smi_devid, 1);
++
++	dev_info(inst->dev, "SMI character dev removed - OK");
++	return 0;
++}
++
++/****************************************************************************
++*
++*   Register the driver with device tree
++*
++***************************************************************************/
++
++static const struct of_device_id bcm2835_smi_dev_of_match[] = {
++	{.compatible = "brcm,bcm2835-smi-dev",},
++	{ /* sentinel */ },
++};
++
++MODULE_DEVICE_TABLE(of, bcm2835_smi_dev_of_match);
++
++static struct platform_driver bcm2835_smi_dev_driver = {
++	.probe = bcm2835_smi_dev_probe,
++	.remove = bcm2835_smi_dev_remove,
++	.driver = {
++		   .name = DRIVER_NAME,
++		   .owner = THIS_MODULE,
++		   .of_match_table = bcm2835_smi_dev_of_match,
++		   },
++};
++
++module_platform_driver(bcm2835_smi_dev_driver);
++
++MODULE_ALIAS("platform:smi-dev-bcm2835");
++MODULE_LICENSE("GPL");
++MODULE_DESCRIPTION(
++	"Character device driver for BCM2835's secondary memory interface");
++MODULE_AUTHOR("Luke Wren <luke at raspberrypi.org>");
+--- a/drivers/misc/Kconfig
++++ b/drivers/misc/Kconfig
+@@ -10,6 +10,14 @@ config SENSORS_LIS3LV02D
+ 	select INPUT_POLLDEV
+ 	default n
+ 
++config BCM2835_SMI
++	tristate "Broadcom 283x Secondary Memory Interface driver"
++	depends on MACH_BCM2708 || MACH_BCM2709 || ARCH_BCM2835
++	default m
++	help
++		Driver for enabling and using Broadcom's Secondary/Slow Memory Interface.
++		A character device interface is provided separately by BCM2835_SMI_DEV.
++		For the ioctl interface see include/linux/broadcom/bcm2835_smi.h
++
+ config AD525X_DPOT
+ 	tristate "Analog Devices Digital Potentiometers"
+ 	depends on (I2C || SPI) && SYSFS
+--- a/drivers/misc/Makefile
++++ b/drivers/misc/Makefile
+@@ -9,6 +9,7 @@ obj-$(CONFIG_AD525X_DPOT_SPI)	+= ad525x_
+ obj-$(CONFIG_INTEL_MID_PTI)	+= pti.o
+ obj-$(CONFIG_ATMEL_SSC)		+= atmel-ssc.o
+ obj-$(CONFIG_ATMEL_TCLIB)	+= atmel_tclib.o
++obj-$(CONFIG_BCM2835_SMI)	+= bcm2835_smi.o
+ obj-$(CONFIG_BMP085)		+= bmp085.o
+ obj-$(CONFIG_BMP085_I2C)	+= bmp085-i2c.o
+ obj-$(CONFIG_BMP085_SPI)	+= bmp085-spi.o
+--- /dev/null
++++ b/drivers/misc/bcm2835_smi.c
+@@ -0,0 +1,985 @@
++/**
++ * Broadcom Secondary Memory Interface driver
++ *
++ * Written by Luke Wren <luke at raspberrypi.org>
++ * Copyright (c) 2015, Raspberry Pi (Trading) Ltd.
++ *
++ * Redistribution and use in source and binary forms, with or without
++ * modification, are permitted provided that the following conditions
++ * are met:
++ * 1. Redistributions of source code must retain the above copyright
++ *    notice, this list of conditions, and the following disclaimer,
++ *    without modification.
++ * 2. Redistributions in binary form must reproduce the above copyright
++ *    notice, this list of conditions and the following disclaimer in the
++ *    documentation and/or other materials provided with the distribution.
++ * 3. The names of the above-listed copyright holders may not be used
++ *    to endorse or promote products derived from this software without
++ *    specific prior written permission.
++ *
++ * ALTERNATIVELY, this software may be distributed under the terms of the
++ * GNU General Public License ("GPL") version 2, as published by the Free
++ * Software Foundation.
++ *
++ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
++ * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
++ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
++ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
++ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
++ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
++ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
++ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
++ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
++ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
++ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
++ */
++
++#include <linux/kernel.h>
++#include <linux/module.h>
++#include <linux/of.h>
++#include <linux/platform_device.h>
++#include <linux/of_address.h>
++#include <linux/of_platform.h>
++#include <linux/mm.h>
++#include <linux/slab.h>
++#include <linux/pagemap.h>
++#include <linux/dma-mapping.h>
++#include <linux/dmaengine.h>
++#include <linux/semaphore.h>
++#include <linux/spinlock.h>
++#include <linux/io.h>
++
++#define BCM2835_SMI_IMPLEMENTATION
++#include <linux/broadcom/bcm2835_smi.h>
++
++#define DRIVER_NAME "smi-bcm2835"
++
++#define N_PAGES_FROM_BYTES(n) ((n + PAGE_SIZE-1) / PAGE_SIZE)
++
++#define DMA_WRITE_TO_MEM true
++#define DMA_READ_FROM_MEM false
++
++struct bcm2835_smi_instance {
++	struct device *dev;
++	struct smi_settings settings;
++	__iomem void *smi_regs_ptr, *cm_smi_regs_ptr;
++	dma_addr_t smi_regs_busaddr;
++
++	struct dma_chan *dma_chan;
++	struct dma_slave_config dma_config;
++
++	struct bcm2835_smi_bounce_info bounce;
++
++	struct scatterlist buffer_sgl;
++
++	int clock_source;
++	int clock_divisor;
++
++	/* Sometimes we are called into in an atomic context (e.g. by
++	   JFFS2 + MTD) so we can't use a mutex */
++	spinlock_t transaction_lock;
++};
++
++/****************************************************************************
++*
++*   SMI clock manager setup
++*
++***************************************************************************/
++
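++/* Writes to the clock manager registers must include the CM_PWD password
++   (0x5a << 24); see the register definitions in bcm2835_smi.h. */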
++static inline void write_smi_cm_reg(struct bcm2835_smi_instance *inst,
++	u32 val, unsigned reg)
++{
++	writel(CM_PWD | val, inst->cm_smi_regs_ptr + reg);
++}
++
++static inline u32 read_smi_cm_reg(struct bcm2835_smi_instance *inst,
++	unsigned reg)
++{
++	return readl(inst->cm_smi_regs_ptr + reg);
++}
++
++static void smi_setup_clock(struct bcm2835_smi_instance *inst)
++{
++	dev_dbg(inst->dev, "Setting up clock...");
++	/* Disable SMI clock and wait for it to stop. */
++	write_smi_cm_reg(inst, 0, CM_SMI_CTL);
++	while (read_smi_cm_reg(inst, CM_SMI_CTL) & CM_SMI_CTL_BUSY)
++		;
++
++	write_smi_cm_reg(inst, (inst->clock_divisor << CM_SMI_DIV_DIVI_OFFS),
++	       CM_SMI_DIV);
++	write_smi_cm_reg(inst, (inst->clock_source << CM_SMI_CTL_SRC_OFFS),
++	       CM_SMI_CTL);
++
++	/* Enable the clock */
++	write_smi_cm_reg(inst, (inst->clock_source << CM_SMI_CTL_SRC_OFFS) |
++	       CM_SMI_CTL_ENAB, CM_SMI_CTL);
++}
++
++/****************************************************************************
++*
++*   SMI peripheral setup
++*
++***************************************************************************/
++
++static inline void write_smi_reg(struct bcm2835_smi_instance *inst,
++	u32 val, unsigned reg)
++{
++	writel(val, inst->smi_regs_ptr + reg);
++}
++
++static inline u32 read_smi_reg(struct bcm2835_smi_instance *inst, unsigned reg)
++{
++	return readl(inst->smi_regs_ptr + reg);
++}
++
++/* Token-pasting macros: e.g. SMIDSR_RSTROBE -> value of SMIDSR_RSTROBE_MASK */
++#define _CONCAT(x, y) x##y
++#define CONCAT(x, y) _CONCAT(x, y)
++
++#define SET_BIT_FIELD(dest, field, bits) ((dest) = \
++	((dest) & ~CONCAT(field, _MASK)) | (((bits) << CONCAT(field, _OFFS))& \
++	 CONCAT(field, _MASK)))
++#define GET_BIT_FIELD(src, field) (((src) & \
++	CONCAT(field, _MASK)) >> CONCAT(field, _OFFS))
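++/* e.g. SET_BIT_FIELD(reg, SMIDSR_RSTROBE, 3) clears SMIDSR_RSTROBE_MASK in
++   reg and ORs in (3 << SMIDSR_RSTROBE_OFFS) masked to the field width;
++   GET_BIT_FIELD(reg, SMIDSR_RSTROBE) reads the same field back. */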
++
++static void smi_dump_context_labelled(struct bcm2835_smi_instance *inst,
++	const char *label)
++{
++	dev_err(inst->dev, "SMI context dump: %s", label);
++	dev_err(inst->dev, "SMICS:  0x%08x", read_smi_reg(inst, SMICS));
++	dev_err(inst->dev, "SMIL:   0x%08x", read_smi_reg(inst, SMIL));
++	dev_err(inst->dev, "SMIDSR: 0x%08x", read_smi_reg(inst, SMIDSR0));
++	dev_err(inst->dev, "SMIDSW: 0x%08x", read_smi_reg(inst, SMIDSW0));
++	dev_err(inst->dev, "SMIDC:  0x%08x", read_smi_reg(inst, SMIDC));
++	dev_err(inst->dev, "SMIFD:  0x%08x", read_smi_reg(inst, SMIFD));
++	dev_err(inst->dev, " ");
++}
++
++static inline void smi_dump_context(struct bcm2835_smi_instance *inst)
++{
++	smi_dump_context_labelled(inst, "");
++}
++
++static void smi_get_default_settings(struct bcm2835_smi_instance *inst)
++{
++	struct smi_settings *settings = &inst->settings;
++
++	settings->data_width = SMI_WIDTH_16BIT;
++	settings->pack_data = true;
++
++	settings->read_setup_time = 1;
++	settings->read_hold_time = 1;
++	settings->read_pace_time = 1;
++	settings->read_strobe_time = 3;
++
++	settings->write_setup_time = settings->read_setup_time;
++	settings->write_hold_time = settings->read_hold_time;
++	settings->write_pace_time = settings->read_pace_time;
++	settings->write_strobe_time = settings->read_strobe_time;
++
++	settings->dma_enable = true;
++	settings->dma_passthrough_enable = false;
++	settings->dma_read_thresh = 0x01;
++	settings->dma_write_thresh = 0x3f;
++	settings->dma_panic_read_thresh = 0x20;
++	settings->dma_panic_write_thresh = 0x20;
++}
++
++void bcm2835_smi_set_regs_from_settings(struct bcm2835_smi_instance *inst)
++{
++	struct smi_settings *settings = &inst->settings;
++	int smidsr_temp = 0, smidsw_temp = 0, smics_temp,
++	    smidcs_temp, smidc_temp = 0;
++
++	spin_lock(&inst->transaction_lock);
++
++	/* temporarily disable the peripheral: */
++	smics_temp = read_smi_reg(inst, SMICS);
++	write_smi_reg(inst, 0, SMICS);
++	smidcs_temp = read_smi_reg(inst, SMIDCS);
++	write_smi_reg(inst, 0, SMIDCS);
++
++	if (settings->pack_data)
++		smics_temp |= SMICS_PXLDAT;
++	else
++		smics_temp &= ~SMICS_PXLDAT;
++
++	SET_BIT_FIELD(smidsr_temp, SMIDSR_RWIDTH, settings->data_width);
++	SET_BIT_FIELD(smidsr_temp, SMIDSR_RSETUP, settings->read_setup_time);
++	SET_BIT_FIELD(smidsr_temp, SMIDSR_RHOLD, settings->read_hold_time);
++	SET_BIT_FIELD(smidsr_temp, SMIDSR_RPACE, settings->read_pace_time);
++	SET_BIT_FIELD(smidsr_temp, SMIDSR_RSTROBE, settings->read_strobe_time);
++	write_smi_reg(inst, smidsr_temp, SMIDSR0);
++
++	SET_BIT_FIELD(smidsw_temp, SMIDSW_WWIDTH, settings->data_width);
++	if (settings->data_width == SMI_WIDTH_8BIT)
++		smidsw_temp |= SMIDSW_WSWAP;
++	else
++		smidsw_temp &= ~SMIDSW_WSWAP;
++	SET_BIT_FIELD(smidsw_temp, SMIDSW_WSETUP, settings->write_setup_time);
++	SET_BIT_FIELD(smidsw_temp, SMIDSW_WHOLD, settings->write_hold_time);
++	SET_BIT_FIELD(smidsw_temp, SMIDSW_WPACE, settings->write_pace_time);
++	SET_BIT_FIELD(smidsw_temp, SMIDSW_WSTROBE,
++			settings->write_strobe_time);
++	write_smi_reg(inst, smidsw_temp, SMIDSW0);
++
++	SET_BIT_FIELD(smidc_temp, SMIDC_REQR, settings->dma_read_thresh);
++	SET_BIT_FIELD(smidc_temp, SMIDC_REQW, settings->dma_write_thresh);
++	SET_BIT_FIELD(smidc_temp, SMIDC_PANICR,
++		      settings->dma_panic_read_thresh);
++	SET_BIT_FIELD(smidc_temp, SMIDC_PANICW,
++		      settings->dma_panic_write_thresh);
++	if (settings->dma_passthrough_enable) {
++		smidc_temp |= SMIDC_DMAP;
++		smidsr_temp |= SMIDSR_RDREQ;
++		write_smi_reg(inst, smidsr_temp, SMIDSR0);
++		smidsw_temp |= SMIDSW_WDREQ;
++		write_smi_reg(inst, smidsw_temp, SMIDSW0);
++	} else
++		smidc_temp &= ~SMIDC_DMAP;
++	if (settings->dma_enable)
++		smidc_temp |= SMIDC_DMAEN;
++	else
++		smidc_temp &= ~SMIDC_DMAEN;
++
++	write_smi_reg(inst, smidc_temp, SMIDC);
++
++	/* re-enable (if was previously enabled) */
++	write_smi_reg(inst, smics_temp, SMICS);
++	write_smi_reg(inst, smidcs_temp, SMIDCS);
++
++	spin_unlock(&inst->transaction_lock);
++}
++EXPORT_SYMBOL(bcm2835_smi_set_regs_from_settings);
++
++struct smi_settings *bcm2835_smi_get_settings_from_regs
++	(struct bcm2835_smi_instance *inst)
++{
++	struct smi_settings *settings = &inst->settings;
++	int smidsr, smidsw, smidc;
++
++	spin_lock(&inst->transaction_lock);
++
++	smidsr = read_smi_reg(inst, SMIDSR0);
++	smidsw = read_smi_reg(inst, SMIDSW0);
++	smidc = read_smi_reg(inst, SMIDC);
++
++	settings->pack_data = (read_smi_reg(inst, SMICS) & SMICS_PXLDAT) ?
++	    true : false;
++
++	settings->data_width = GET_BIT_FIELD(smidsr, SMIDSR_RWIDTH);
++	settings->read_setup_time = GET_BIT_FIELD(smidsr, SMIDSR_RSETUP);
++	settings->read_hold_time = GET_BIT_FIELD(smidsr, SMIDSR_RHOLD);
++	settings->read_pace_time = GET_BIT_FIELD(smidsr, SMIDSR_RPACE);
++	settings->read_strobe_time = GET_BIT_FIELD(smidsr, SMIDSR_RSTROBE);
++
++	settings->write_setup_time = GET_BIT_FIELD(smidsw, SMIDSW_WSETUP);
++	settings->write_hold_time = GET_BIT_FIELD(smidsw, SMIDSW_WHOLD);
++	settings->write_pace_time = GET_BIT_FIELD(smidsw, SMIDSW_WPACE);
++	settings->write_strobe_time = GET_BIT_FIELD(smidsw, SMIDSW_WSTROBE);
++
++	settings->dma_read_thresh = GET_BIT_FIELD(smidc, SMIDC_REQR);
++	settings->dma_write_thresh = GET_BIT_FIELD(smidc, SMIDC_REQW);
++	settings->dma_panic_read_thresh = GET_BIT_FIELD(smidc, SMIDC_PANICR);
++	settings->dma_panic_write_thresh = GET_BIT_FIELD(smidc, SMIDC_PANICW);
++	settings->dma_passthrough_enable = (smidc & SMIDC_DMAP) ? true : false;
++	settings->dma_enable = (smidc & SMIDC_DMAEN) ? true : false;
++
++	spin_unlock(&inst->transaction_lock);
++
++	return settings;
++}
++EXPORT_SYMBOL(bcm2835_smi_get_settings_from_regs);
++
++static inline void smi_set_address(struct bcm2835_smi_instance *inst,
++	unsigned int address)
++{
++	int smia_temp = 0, smida_temp = 0;
++
++	SET_BIT_FIELD(smia_temp, SMIA_ADDR, address);
++	SET_BIT_FIELD(smida_temp, SMIDA_ADDR, address);
++
++	/* Write to both address registers - user doesn't care whether we're
++	   doing programmed or direct transfers. */
++	write_smi_reg(inst, smia_temp, SMIA);
++	write_smi_reg(inst, smida_temp, SMIDA);
++}
++
++static void smi_setup_regs(struct bcm2835_smi_instance *inst)
++{
++
++	dev_dbg(inst->dev, "Initialising SMI registers...");
++	/* Disable the peripheral if already enabled */
++	write_smi_reg(inst, 0, SMICS);
++	write_smi_reg(inst, 0, SMIDCS);
++
++	smi_get_default_settings(inst);
++	bcm2835_smi_set_regs_from_settings(inst);
++	smi_set_address(inst, 0);
++
++	write_smi_reg(inst, read_smi_reg(inst, SMICS) | SMICS_ENABLE, SMICS);
++	write_smi_reg(inst, read_smi_reg(inst, SMIDCS) | SMIDCS_ENABLE,
++		SMIDCS);
++}
++
++/****************************************************************************
++*
++*   Low-level SMI access functions
++*   Other modules should use the exported higher-level functions e.g.
++*   bcm2835_smi_write_buf() unless they have a good reason to use these
++*
++***************************************************************************/
++
++static inline uint32_t smi_read_single_word(struct bcm2835_smi_instance *inst)
++{
++	int timeout = 0;
++
++	write_smi_reg(inst, SMIDCS_ENABLE, SMIDCS);
++	write_smi_reg(inst, SMIDCS_ENABLE | SMIDCS_START, SMIDCS);
++	/* Make sure things happen in the right order...*/
++	mb();
++	while (!(read_smi_reg(inst, SMIDCS) & SMIDCS_DONE) &&
++		++timeout < 10000)
++		;
++	if (timeout < 10000)
++		return read_smi_reg(inst, SMIDD);
++
++	dev_err(inst->dev,
++		"SMI direct read timed out (is the clock set up correctly?)");
++	return 0;
++}
++
++static inline void smi_write_single_word(struct bcm2835_smi_instance *inst,
++	uint32_t data)
++{
++	int timeout = 0;
++
++	write_smi_reg(inst, SMIDCS_ENABLE | SMIDCS_WRITE, SMIDCS);
++	write_smi_reg(inst, data, SMIDD);
++	write_smi_reg(inst, SMIDCS_ENABLE | SMIDCS_WRITE | SMIDCS_START,
++		SMIDCS);
++
++	while (!(read_smi_reg(inst, SMIDCS) & SMIDCS_DONE) &&
++		++timeout < 10000)
++		;
++	if (timeout >= 10000)
++		dev_err(inst->dev,
++		"SMI direct write timed out (is the clock set up correctly?)");
++}
++
++/* Initiates a programmed read into the read FIFO. It is up to the caller to
++ * read data from the FIFO -  either via paced DMA transfer,
++ * or polling SMICS_RXD to check whether data is available.
++ * SMICS_ACTIVE will go low upon completion. */
++static void smi_init_programmed_read(struct bcm2835_smi_instance *inst,
++	int num_transfers)
++{
++	int smics_temp;
++
++	/* Disable the peripheral: */
++	smics_temp = read_smi_reg(inst, SMICS) & ~(SMICS_ENABLE | SMICS_WRITE);
++	write_smi_reg(inst, smics_temp, SMICS);
++	while (read_smi_reg(inst, SMICS) & SMICS_ENABLE)
++		;
++
++	/* Program the transfer count: */
++	write_smi_reg(inst, num_transfers, SMIL);
++
++	/* re-enable and start: */
++	smics_temp |= SMICS_ENABLE;
++	write_smi_reg(inst, smics_temp, SMICS);
++	smics_temp |= SMICS_CLEAR;
++	/* Just to be certain: */
++	mb();
++	while (read_smi_reg(inst, SMICS) & SMICS_ACTIVE)
++		;
++	write_smi_reg(inst, smics_temp, SMICS);
++	smics_temp |= SMICS_START;
++	write_smi_reg(inst, smics_temp, SMICS);
++}
++
++/* Initiates a programmed write sequence, using data from the write FIFO.
++ * It is up to the caller to initiate a DMA transfer before calling,
++ * or use another method to keep the write FIFO topped up.
++ * SMICS_ACTIVE will go low upon completion.
++ */
++static void smi_init_programmed_write(struct bcm2835_smi_instance *inst,
++	int num_transfers)
++{
++	int smics_temp;
++
++	/* Disable the peripheral: */
++	smics_temp = read_smi_reg(inst, SMICS) & ~SMICS_ENABLE;
++	write_smi_reg(inst, smics_temp, SMICS);
++	while (read_smi_reg(inst, SMICS) & SMICS_ENABLE)
++		;
++
++	/* Program the transfer count: */
++	write_smi_reg(inst, num_transfers, SMIL);
++
++	/* setup, re-enable and start: */
++	smics_temp |= SMICS_WRITE | SMICS_ENABLE;
++	write_smi_reg(inst, smics_temp, SMICS);
++	smics_temp |= SMICS_START;
++	write_smi_reg(inst, smics_temp, SMICS);
++}
++
++/* Initiate a read and then poll FIFO for data, reading out as it appears. */
++static void smi_read_fifo(struct bcm2835_smi_instance *inst,
++	uint32_t *dest, int n_bytes)
++{
++	if (read_smi_reg(inst, SMICS) & SMICS_RXD) {
++		smi_dump_context_labelled(inst,
++			"WARNING: read FIFO not empty at start of read call.");
++		while (read_smi_reg(inst, SMICS))
++			;
++	}
++
++	/* Dispatch the read: */
++	if (inst->settings.data_width == SMI_WIDTH_8BIT)
++		smi_init_programmed_read(inst, n_bytes);
++	else if (inst->settings.data_width == SMI_WIDTH_16BIT)
++		smi_init_programmed_read(inst, n_bytes / 2);
++	else {
++		dev_err(inst->dev, "Unsupported data width for read.");
++		return;
++	}
++
++	/* Poll FIFO to keep it empty */
++	while (!(read_smi_reg(inst, SMICS) & SMICS_DONE))
++		if (read_smi_reg(inst, SMICS) & SMICS_RXD)
++			*dest++ = read_smi_reg(inst, SMID);
++
++	/* Ensure that the FIFO is emptied */
++	if (read_smi_reg(inst, SMICS) & SMICS_RXD) {
++		int fifo_count;
++
++		fifo_count = GET_BIT_FIELD(read_smi_reg(inst, SMIFD),
++			SMIFD_FCNT);
++		while (fifo_count--)
++			*dest++ = read_smi_reg(inst, SMID);
++	}
++
++	if (!(read_smi_reg(inst, SMICS) & SMICS_DONE))
++		smi_dump_context_labelled(inst,
++			"WARNING: transaction finished but done bit not set.");
++
++	if (read_smi_reg(inst, SMICS) & SMICS_RXD)
++		smi_dump_context_labelled(inst,
++			"WARNING: read FIFO not empty at end of read call.");
++
++}
++
++/* Initiate a write, and then keep the FIFO topped up. */
++static void smi_write_fifo(struct bcm2835_smi_instance *inst,
++	uint32_t *src, int n_bytes)
++{
++	int i, timeout = 0;
++
++	/* Empty FIFOs if not already so */
++	if (!(read_smi_reg(inst, SMICS) & SMICS_TXE)) {
++		smi_dump_context_labelled(inst,
++		    "WARNING: write fifo not empty at start of write call.");
++		write_smi_reg(inst, read_smi_reg(inst, SMICS) | SMICS_CLEAR,
++			SMICS);
++	}
++
++	/* Initiate the transfer */
++	if (inst->settings.data_width == SMI_WIDTH_8BIT)
++		smi_init_programmed_write(inst, n_bytes);
++	else if (inst->settings.data_width == SMI_WIDTH_16BIT)
++		smi_init_programmed_write(inst, n_bytes / 2);
++	else {
++		dev_err(inst->dev, "Unsupported data width for write.");
++		return;
++	}
++	/* Fill the FIFO: */
++	for (i = 0; i < (n_bytes - 1) / 4 + 1; ++i) {
++		while (!(read_smi_reg(inst, SMICS) & SMICS_TXD))
++			;
++		write_smi_reg(inst, *src++, SMID);
++	}
++	/* Busy wait... */
++	while (!(read_smi_reg(inst, SMICS) & SMICS_DONE) && ++timeout <
++		1000000)
++		;
++	if (timeout >= 1000000)
++		smi_dump_context_labelled(inst,
++			"Timed out on write operation!");
++	if (!(read_smi_reg(inst, SMICS) & SMICS_TXE))
++		smi_dump_context_labelled(inst,
++			"WARNING: FIFO not empty at end of write operation.");
++}
++
++/****************************************************************************
++*
++*   SMI DMA operations
++*
++***************************************************************************/
++
++/* Disable SMI and put it into the correct direction before doing DMA setup.
++   Stops spurious DREQs during setup. Peripheral is re-enabled by init_*() */
++static void smi_disable(struct bcm2835_smi_instance *inst,
++	enum dma_transfer_direction direction)
++{
++	int smics_temp = read_smi_reg(inst, SMICS) & ~SMICS_ENABLE;
++
++	if (direction == DMA_DEV_TO_MEM)
++		smics_temp &= ~SMICS_WRITE;
++	else
++		smics_temp |= SMICS_WRITE;
++	write_smi_reg(inst, smics_temp, SMICS);
++	while (read_smi_reg(inst, SMICS) & SMICS_ACTIVE)
++		;
++}
++
++static struct scatterlist *smi_scatterlist_from_buffer(
++	struct bcm2835_smi_instance *inst,
++	dma_addr_t buf,
++	size_t len,
++	struct scatterlist *sg)
++{
++	sg_init_table(sg, 1);
++	sg_dma_address(sg) = buf;
++	sg_dma_len(sg) = len;
++	return sg;
++}
++
++static void smi_dma_callback_user_copy(void *param)
++{
++	/* Notify the bottom half that a chunk is ready for user copy */
++	struct bcm2835_smi_instance *inst =
++		(struct bcm2835_smi_instance *)param;
++
++	up(&inst->bounce.callback_sem);
++}
++
++/* Creates a descriptor, assigns the given callback, and submits the
++   descriptor to dmaengine. Does not block - can queue up multiple
++   descriptors and then wait for them all to complete.
++   sg_len is the number of control blocks, NOT the number of bytes.
++   dir can be DMA_MEM_TO_DEV or DMA_DEV_TO_MEM.
++   callback can be NULL - in this case it is not called. */
++static inline struct dma_async_tx_descriptor *smi_dma_submit_sgl(
++	struct bcm2835_smi_instance *inst,
++	struct scatterlist *sgl,
++	size_t sg_len,
++	enum dma_transfer_direction dir,
++	dma_async_tx_callback callback)
++{
++	struct dma_async_tx_descriptor *desc;
++
++	desc = dmaengine_prep_slave_sg(inst->dma_chan,
++				       sgl,
++				       sg_len,
++				       dir,
++				       DMA_PREP_INTERRUPT | DMA_CTRL_ACK |
++				       DMA_PREP_FENCE);
++	if (!desc) {
++		dev_err(inst->dev, "read_sgl: dma slave preparation failed!");
++		write_smi_reg(inst, read_smi_reg(inst, SMICS) & ~SMICS_ACTIVE,
++			SMICS);
++		while (read_smi_reg(inst, SMICS) & SMICS_ACTIVE)
++			cpu_relax();
++		write_smi_reg(inst, read_smi_reg(inst, SMICS) | SMICS_ACTIVE,
++			SMICS);
++		return NULL;
++	}
++	desc->callback = callback;
++	desc->callback_param = inst;
++	if (dmaengine_submit(desc) < 0)
++		return NULL;
++	return desc;
++}
++
++/* NB this function blocks until the transfer is complete */
++static void
++smi_dma_read_sgl(struct bcm2835_smi_instance *inst,
++	struct scatterlist *sgl, size_t sg_len, size_t n_bytes)
++{
++	struct dma_async_tx_descriptor *desc;
++
++	/* Disable SMI and set to read before dispatching DMA - if SMI is in
++	 * write mode and TX fifo is empty, it will generate a DREQ which may
++	 * cause the read DMA to complete before the SMI read command is even
++	 * dispatched! We want to dispatch DMA before SMI read so that reading
++	 * is gapless, for logic analyser.
++	 */
++
++	smi_disable(inst, DMA_DEV_TO_MEM);
++
++	desc = smi_dma_submit_sgl(inst, sgl, sg_len, DMA_DEV_TO_MEM, NULL);
++	dma_async_issue_pending(inst->dma_chan);
++
++	if (inst->settings.data_width == SMI_WIDTH_8BIT)
++		smi_init_programmed_read(inst, n_bytes);
++	else
++		smi_init_programmed_read(inst, n_bytes / 2);
++
++	if (dma_wait_for_async_tx(desc) == DMA_ERROR)
++		smi_dump_context_labelled(inst, "DMA timeout!");
++}
++
++static void
++smi_dma_write_sgl(struct bcm2835_smi_instance *inst,
++	struct scatterlist *sgl, size_t sg_len, size_t n_bytes)
++{
++	struct dma_async_tx_descriptor *desc;
++
++	if (inst->settings.data_width == SMI_WIDTH_8BIT)
++		smi_init_programmed_write(inst, n_bytes);
++	else
++		smi_init_programmed_write(inst, n_bytes / 2);
++
++	desc = smi_dma_submit_sgl(inst, sgl, sg_len, DMA_MEM_TO_DEV, NULL);
++	dma_async_issue_pending(inst->dma_chan);
++
++	if (dma_wait_for_async_tx(desc) == DMA_ERROR)
++		smi_dump_context_labelled(inst, "DMA timeout!");
++	else
++		/* Wait for SMI to finish our writes */
++		while (!(read_smi_reg(inst, SMICS) & SMICS_DONE))
++			cpu_relax();
++}
++
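++/* Queue a user transfer as a series of DMA_BOUNCE_BUFFER_SIZE chunks through
++   the bounce-buffer ring and start the programmed SMI transfer. Callers (e.g.
++   the bcm2835_smi_dev character device) then drain or fill each chunk as the
++   per-chunk callback posts bounce->callback_sem. */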
++ssize_t bcm2835_smi_user_dma(
++	struct bcm2835_smi_instance *inst,
++	enum dma_transfer_direction dma_dir,
++	char __user *user_ptr, size_t count,
++	struct bcm2835_smi_bounce_info **bounce)
++{
++	int chunk_no = 0, chunk_size, count_left = count;
++	struct scatterlist *sgl;
++	void (*init_trans_func)(struct bcm2835_smi_instance *, int);
++
++	spin_lock(&inst->transaction_lock);
++
++	if (dma_dir == DMA_DEV_TO_MEM)
++		init_trans_func = smi_init_programmed_read;
++	else
++		init_trans_func = smi_init_programmed_write;
++
++	smi_disable(inst, dma_dir);
++
++	sema_init(&inst->bounce.callback_sem, 0);
++	if (bounce)
++		*bounce = &inst->bounce;
++	while (count_left) {
++		chunk_size = count_left > DMA_BOUNCE_BUFFER_SIZE ?
++			DMA_BOUNCE_BUFFER_SIZE : count_left;
++		if (chunk_size == DMA_BOUNCE_BUFFER_SIZE) {
++			sgl =
++			&inst->bounce.sgl[chunk_no % DMA_BOUNCE_BUFFER_COUNT];
++		} else {
++			sgl = smi_scatterlist_from_buffer(
++				inst,
++				inst->bounce.phys[
++					chunk_no % DMA_BOUNCE_BUFFER_COUNT],
++				chunk_size,
++				&inst->buffer_sgl);
++		}
++
++		if (!smi_dma_submit_sgl(inst, sgl, 1, dma_dir,
++			smi_dma_callback_user_copy
++		)) {
++			dev_err(inst->dev, "sgl submit failed");
++			count = 0;
++			goto out;
++		}
++		count_left -= chunk_size;
++		chunk_no++;
++	}
++	dma_async_issue_pending(inst->dma_chan);
++
++	if (inst->settings.data_width == SMI_WIDTH_8BIT)
++		init_trans_func(inst, count);
++	else if (inst->settings.data_width == SMI_WIDTH_16BIT)
++		init_trans_func(inst, count / 2);
++out:
++	spin_unlock(&inst->transaction_lock);
++	return count;
++}
++EXPORT_SYMBOL(bcm2835_smi_user_dma);
++
++
++/****************************************************************************
++*
++*   High level buffer transfer functions - for use by other drivers
++*
++***************************************************************************/
++
++/* Buffer must be physically contiguous - i.e. kmalloc, not vmalloc! */
++void bcm2835_smi_write_buf(
++	struct bcm2835_smi_instance *inst,
++	const void *buf, size_t n_bytes)
++{
++	int odd_bytes = n_bytes & 0x3;
++
++	n_bytes -= odd_bytes;
++
++	spin_lock(&inst->transaction_lock);
++
++	if (n_bytes > DMA_THRESHOLD_BYTES) {
++		dma_addr_t phy_addr = dma_map_single(
++			inst->dev,
++			(void *)buf,
++			n_bytes,
++			DMA_MEM_TO_DEV);
++		struct scatterlist *sgl =
++			smi_scatterlist_from_buffer(inst, phy_addr, n_bytes,
++				&inst->buffer_sgl);
++
++		if (!sgl) {
++			smi_dump_context_labelled(inst,
++			"Error: could not create scatterlist for write!");
++			goto out;
++		}
++		smi_dma_write_sgl(inst, sgl, 1, n_bytes);
++
++		dma_unmap_single
++			(inst->dev, phy_addr, n_bytes, DMA_MEM_TO_DEV);
++	} else if (n_bytes) {
++		smi_write_fifo(inst, (uint32_t *) buf, n_bytes);
++	}
++	buf += n_bytes;
++
++	if (inst->settings.data_width == SMI_WIDTH_8BIT) {
++		while (odd_bytes--)
++			smi_write_single_word(inst, *(uint8_t *) (buf++));
++	} else {
++		while (odd_bytes >= 2) {
++			smi_write_single_word(inst, *(uint16_t *)buf);
++			buf += 2;
++			odd_bytes -= 2;
++		}
++		if (odd_bytes) {
++			/* Reading an odd number of bytes on a 16 bit bus is
++			   a user bug. It's kinder to fail early and tell them
++			   than to e.g. transparently give them the bottom byte
++			   of a 16 bit transfer. */
++			dev_err(inst->dev,
++		"WARNING: odd number of bytes specified for wide transfer.");
++			dev_err(inst->dev,
++		"At least one byte dropped as a result.");
++			dump_stack();
++		}
++	}
++out:
++	spin_unlock(&inst->transaction_lock);
++}
++EXPORT_SYMBOL(bcm2835_smi_write_buf);
++
++void bcm2835_smi_read_buf(struct bcm2835_smi_instance *inst,
++	void *buf, size_t n_bytes)
++{
++
++	/* SMI is inherently 32-bit, which causes surprising amounts of mess
++	   for bytes % 4 != 0. Easiest to avoid this mess altogether
++	   by handling remainder separately. */
++	int odd_bytes = n_bytes & 0x3;
++
++	spin_lock(&inst->transaction_lock);
++	n_bytes -= odd_bytes;
++	if (n_bytes > DMA_THRESHOLD_BYTES) {
++		dma_addr_t phy_addr = dma_map_single(inst->dev,
++						     buf, n_bytes,
++						     DMA_DEV_TO_MEM);
++		struct scatterlist *sgl = smi_scatterlist_from_buffer(
++			inst, phy_addr, n_bytes,
++			&inst->buffer_sgl);
++		if (!sgl) {
++			smi_dump_context_labelled(inst,
++			"Error: could not create scatterlist for read!");
++			goto out;
++		}
++		smi_dma_read_sgl(inst, sgl, 1, n_bytes);
++		dma_unmap_single(inst->dev, phy_addr, n_bytes, DMA_DEV_TO_MEM);
++	} else if (n_bytes) {
++		smi_read_fifo(inst, (uint32_t *)buf, n_bytes);
++	}
++	buf += n_bytes;
++
++	if (inst->settings.data_width == SMI_WIDTH_8BIT) {
++		while (odd_bytes--)
++			*((uint8_t *) (buf++)) = smi_read_single_word(inst);
++	} else {
++		while (odd_bytes >= 2) {
++			*(uint16_t *) buf = smi_read_single_word(inst);
++			buf += 2;
++			odd_bytes -= 2;
++		}
++		if (odd_bytes) {
++			dev_err(inst->dev,
++		"WARNING: odd number of bytes specified for wide transfer.");
++			dev_err(inst->dev,
++		"At least one byte dropped as a result.");
++			dump_stack();
++		}
++	}
++out:
++	spin_unlock(&inst->transaction_lock);
++}
++EXPORT_SYMBOL(bcm2835_smi_read_buf);
++
++void bcm2835_smi_set_address(struct bcm2835_smi_instance *inst,
++	unsigned int address)
++{
++	spin_lock(&inst->transaction_lock);
++	smi_set_address(inst, address);
++	spin_unlock(&inst->transaction_lock);
++}
++EXPORT_SYMBOL(bcm2835_smi_set_address);
++
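++/* Resolve an SMI instance from its device tree node. Client drivers pass the
++   node obtained from their "smi_handle" phandle; a NULL return means the SMI
++   driver has not probed yet, so callers typically defer probing. */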
++struct bcm2835_smi_instance *bcm2835_smi_get(struct device_node *node)
++{
++	struct platform_device *pdev;
++
++	if (!node)
++		return NULL;
++
++	pdev = of_find_device_by_node(node);
++	if (!pdev)
++		return NULL;
++
++	return platform_get_drvdata(pdev);
++}
++EXPORT_SYMBOL(bcm2835_smi_get);
++
++/****************************************************************************
++*
++*   bcm2835_smi_probe - called when the driver is loaded.
++*
++***************************************************************************/
++
++static int bcm2835_smi_dma_setup(struct bcm2835_smi_instance *inst)
++{
++	int i, rv = 0;
++
++	inst->dma_chan = dma_request_slave_channel(inst->dev, "rx-tx");
++	if (!inst->dma_chan) {
++		dev_err(inst->dev, "Failed to request DMA channel rx-tx");
++		return -ENODEV;
++	}
++
++	inst->dma_config.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
++	inst->dma_config.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
++	inst->dma_config.src_addr = inst->smi_regs_busaddr + SMID;
++	inst->dma_config.dst_addr = inst->dma_config.src_addr;
++	/* Direction unimportant - always overridden by prep_slave_sg */
++	inst->dma_config.direction = DMA_DEV_TO_MEM;
++	dmaengine_slave_config(inst->dma_chan, &inst->dma_config);
++	/* Alloc and map bounce buffers */
++	for (i = 0; i < DMA_BOUNCE_BUFFER_COUNT; ++i) {
++		inst->bounce.buffer[i] =
++		dmam_alloc_coherent(inst->dev, DMA_BOUNCE_BUFFER_SIZE,
++				&inst->bounce.phys[i],
++				GFP_KERNEL);
++		if (!inst->bounce.buffer[i]) {
++			dev_err(inst->dev, "Could not allocate buffer!");
++			rv = -ENOMEM;
++			break;
++		}
++		smi_scatterlist_from_buffer(
++			inst,
++			inst->bounce.phys[i],
++			DMA_BOUNCE_BUFFER_SIZE,
++			&inst->bounce.sgl[i]
++		);
++	}
++
++	return rv;
++}
++
++static int bcm2835_smi_probe(struct platform_device *pdev)
++{
++	int err;
++	struct device *dev = &pdev->dev;
++	struct device_node *node = dev->of_node;
++	struct resource *ioresource;
++	struct bcm2835_smi_instance *inst;
++
++	/* Allocate buffers and instance data */
++
++	inst = devm_kzalloc(dev, sizeof(struct bcm2835_smi_instance),
++		GFP_KERNEL);
++
++	if (!inst)
++		return -ENOMEM;
++
++	inst->dev = dev;
++	spin_lock_init(&inst->transaction_lock);
++
++	/* We require device tree support */
++	if (!node)
++		return -EINVAL;
++
++	ioresource = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++	inst->smi_regs_ptr = devm_ioremap_resource(dev, ioresource);
++	ioresource = platform_get_resource(pdev, IORESOURCE_MEM, 1);
++	inst->cm_smi_regs_ptr = devm_ioremap_resource(dev, ioresource);
++	inst->smi_regs_busaddr = be32_to_cpu(
++		*of_get_address(node, 0, NULL, NULL));
++	of_property_read_u32(node,
++			     "brcm,smi-clock-source",
++			     &inst->clock_source);
++	of_property_read_u32(node,
++			     "brcm,smi-clock-divisor",
++			     &inst->clock_divisor);
++
++	err = bcm2835_smi_dma_setup(inst);
++	if (err)
++		return err;
++
++	/* Finally, do peripheral setup */
++
++	smi_setup_clock(inst);
++	smi_setup_regs(inst);
++
++	platform_set_drvdata(pdev, inst);
++
++	dev_info(inst->dev, "initialised");
++
++	return 0;
++}
++
++/****************************************************************************
++*
++*   bcm2835_smi_remove - called when the driver is unloaded.
++*
++***************************************************************************/
++
++static int bcm2835_smi_remove(struct platform_device *pdev)
++{
++	struct bcm2835_smi_instance *inst = platform_get_drvdata(pdev);
++	struct device *dev = inst->dev;
++
++	dev_info(dev, "SMI device removed - OK");
++	return 0;
++}
++
++/****************************************************************************
++*
++*   Register the driver with device tree
++*
++***************************************************************************/
++
++static const struct of_device_id bcm2835_smi_of_match[] = {
++	{.compatible = "brcm,bcm2835-smi",},
++	{ /* sentinel */ },
++};
++
++MODULE_DEVICE_TABLE(of, bcm2835_smi_of_match);
++
++static struct platform_driver bcm2835_smi_driver = {
++	.probe = bcm2835_smi_probe,
++	.remove = bcm2835_smi_remove,
++	.driver = {
++		   .name = DRIVER_NAME,
++		   .owner = THIS_MODULE,
++		   .of_match_table = bcm2835_smi_of_match,
++		   },
++};
++
++module_platform_driver(bcm2835_smi_driver);
++
++MODULE_ALIAS("platform:smi-bcm2835");
++MODULE_LICENSE("GPL");
++MODULE_DESCRIPTION("Device driver for BCM2835's secondary memory interface");
++MODULE_AUTHOR("Luke Wren <luke at raspberrypi.org>");
+--- /dev/null
++++ b/include/linux/broadcom/bcm2835_smi.h
+@@ -0,0 +1,391 @@
++/**
++ * Declarations and definitions for Broadcom's Secondary Memory Interface
++ *
++ * Written by Luke Wren <luke at raspberrypi.org>
++ * Copyright (c) 2015, Raspberry Pi (Trading) Ltd.
++ * Copyright (c) 2010-2012 Broadcom. All rights reserved.
++ *
++ * Redistribution and use in source and binary forms, with or without
++ * modification, are permitted provided that the following conditions
++ * are met:
++ * 1. Redistributions of source code must retain the above copyright
++ *    notice, this list of conditions, and the following disclaimer,
++ *    without modification.
++ * 2. Redistributions in binary form must reproduce the above copyright
++ *    notice, this list of conditions and the following disclaimer in the
++ *    documentation and/or other materials provided with the distribution.
++ * 3. The names of the above-listed copyright holders may not be used
++ *    to endorse or promote products derived from this software without
++ *    specific prior written permission.
++ *
++ * ALTERNATIVELY, this software may be distributed under the terms of the
++ * GNU General Public License ("GPL") version 2, as published by the Free
++ * Software Foundation.
++ *
++ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
++ * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
++ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
++ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
++ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
++ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
++ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
++ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
++ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
++ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
++ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
++ */
++
++#ifndef BCM2835_SMI_H
++#define BCM2835_SMI_H
++
++#include <linux/ioctl.h>
++
++#ifndef __KERNEL__
++#include <stdint.h>
++#include <stdbool.h>
++#endif
++
++#define BCM2835_SMI_IOC_MAGIC 0x1
++#define BCM2835_SMI_INVALID_HANDLE (~0)
++
++/* IOCTLs 0x100...0x1ff are not device-specific - we can use them */
++#define BCM2835_SMI_IOC_GET_SETTINGS    _IO(BCM2835_SMI_IOC_MAGIC, 0)
++#define BCM2835_SMI_IOC_WRITE_SETTINGS  _IO(BCM2835_SMI_IOC_MAGIC, 1)
++#define BCM2835_SMI_IOC_ADDRESS	 _IO(BCM2835_SMI_IOC_MAGIC, 2)
++#define BCM2835_SMI_IOC_MAX	     2
++
++#define SMI_WIDTH_8BIT 0
++#define SMI_WIDTH_16BIT 1
++#define SMI_WIDTH_9BIT 2
++#define SMI_WIDTH_18BIT 3
++
++/* max number of bytes where DMA will not be used */
++#define DMA_THRESHOLD_BYTES 128
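++/* size and number of bounce buffers used by bcm2835_smi_user_dma() */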
++#define DMA_BOUNCE_BUFFER_SIZE (1024 * 1024 / 2)
++#define DMA_BOUNCE_BUFFER_COUNT 3
++
++
++struct smi_settings {
++	int data_width;
++	/* Whether or not to pack multiple SMI transfers into a
++	   single 32 bit FIFO word */
++	bool pack_data;
++
++	/* Timing for reads (writes the same but for WE)
++	 *
++	 * OE ----------+	   +--------------------
++	 *		|	   |
++	 *		+----------+
++	 * SD -<==============================>-----------
++	 * SA -<=========================================>-
++	 *    <-setup->  <-strobe ->  <-hold ->  <- pace ->
++	 */
++
++	int read_setup_time;
++	int read_hold_time;
++	int read_pace_time;
++	int read_strobe_time;
++
++	int write_setup_time;
++	int write_hold_time;
++	int write_pace_time;
++	int write_strobe_time;
++
++	bool dma_enable;		/* DREQs */
++	bool dma_passthrough_enable;	/* External DREQs */
++	int dma_read_thresh;
++	int dma_write_thresh;
++	int dma_panic_read_thresh;
++	int dma_panic_write_thresh;
++};
++
++/****************************************************************************
++*
++*   Declare exported SMI functions
++*
++***************************************************************************/
++
++#ifdef __KERNEL__
++
++#include <linux/dmaengine.h> /* for enum dma_transfer_direction */
++#include <linux/of.h>
++#include <linux/semaphore.h>
++
++struct bcm2835_smi_instance;
++
++struct bcm2835_smi_bounce_info {
++	struct semaphore callback_sem;
++	void *buffer[DMA_BOUNCE_BUFFER_COUNT];
++	dma_addr_t phys[DMA_BOUNCE_BUFFER_COUNT];
++	struct scatterlist sgl[DMA_BOUNCE_BUFFER_COUNT];
++};
++
++
++void bcm2835_smi_set_regs_from_settings(struct bcm2835_smi_instance *);
++
++struct smi_settings *bcm2835_smi_get_settings_from_regs(
++	struct bcm2835_smi_instance *inst);
++
++void bcm2835_smi_write_buf(
++	struct bcm2835_smi_instance *inst,
++	const void *buf,
++	size_t n_bytes);
++
++void bcm2835_smi_read_buf(
++	struct bcm2835_smi_instance *inst,
++	void *buf,
++	size_t n_bytes);
++
++void bcm2835_smi_set_address(struct bcm2835_smi_instance *inst,
++	unsigned int address);
++
++ssize_t bcm2835_smi_user_dma(
++	struct bcm2835_smi_instance *inst,
++	enum dma_transfer_direction dma_dir,
++	char __user *user_ptr,
++	size_t count,
++	struct bcm2835_smi_bounce_info **bounce);
++
++struct bcm2835_smi_instance *bcm2835_smi_get(struct device_node *node);
++
++#endif /* __KERNEL__ */
++
++/****************************************************************
++*
++*	Implementation-only declarations
++*
++****************************************************************/
++
++#ifdef BCM2835_SMI_IMPLEMENTATION
++
++/* Clock manager registers for SMI clock: */
++#define CM_SMI_BASE_ADDRESS ((BCM2708_PERI_BASE) + 0x1010b0)
++/* Clock manager "password" to protect registers from spurious writes */
++#define CM_PWD (0x5a << 24)
++
++#define CM_SMI_CTL	0x00
++#define CM_SMI_DIV	0x04
++
++#define CM_SMI_CTL_FLIP (1 << 8)
++#define CM_SMI_CTL_BUSY (1 << 7)
++#define CM_SMI_CTL_KILL (1 << 5)
++#define CM_SMI_CTL_ENAB (1 << 4)
++#define CM_SMI_CTL_SRC_MASK (0xf)
++#define CM_SMI_CTL_SRC_OFFS (0)
++
++#define CM_SMI_DIV_DIVI_MASK (0xf <<  12)
++#define CM_SMI_DIV_DIVI_OFFS (12)
++#define CM_SMI_DIV_DIVF_MASK (0xff << 4)
++#define CM_SMI_DIV_DIVF_OFFS (4)
++
++/* SMI register mapping:*/
++#define SMI_BASE_ADDRESS ((BCM2708_PERI_BASE) + 0x600000)
++
++#define SMICS	0x00	/* control + status register		*/
++#define SMIL	0x04	/* length/count (n external txfers)	*/
++#define SMIA	0x08	/* address register			*/
++#define SMID	0x0c	/* data register			*/
++#define SMIDSR0	0x10	/* device 0 read settings		*/
++#define SMIDSW0	0x14	/* device 0 write settings		*/
++#define SMIDSR1	0x18	/* device 1 read settings		*/
++#define SMIDSW1	0x1c	/* device 1 write settings		*/
++#define SMIDSR2	0x20	/* device 2 read settings		*/
++#define SMIDSW2	0x24	/* device 2 write settings		*/
++#define SMIDSR3	0x28	/* device 3 read settings		*/
++#define SMIDSW3	0x2c	/* device 3 write settings		*/
++#define SMIDC	0x30	/* DMA control registers		*/
++#define SMIDCS	0x34	/* direct control/status register	*/
++#define SMIDA	0x38	/* direct address register		*/
++#define SMIDD	0x3c	/* direct data registers		*/
++#define SMIFD	0x40	/* FIFO debug register			*/
++
++
++
++/* Control and Status register bits:
++ * SMICS_RXF	: RX fifo full: 1 when RX fifo is full
++ * SMICS_TXE	: TX fifo empty: 1 when empty.
++ * SMICS_RXD	: RX fifo contains data: 1 when there is data.
++ * SMICS_TXD	: TX fifo can accept data: 1 when true.
++ * SMICS_RXR	: RX fifo needs reading: 1 when fifo more than 3/4 full, or
++ *		  when "DONE" and fifo not emptied.
++ * SMICS_TXW	: TX fifo needs writing: 1 when less than 1/4 full.
++ * SMICS_AFERR	: AXI FIFO error: 1 when fifo read when empty or written
++ *		  when full. Write 1 to clear.
++ * SMICS_EDREQ	: 1 when external DREQ received.
++ * SMICS_PXLDAT	:  Pixel data:	write 1 to enable pixel transfer modes.
++ * SMICS_SETERR	: 1 if there was an error writing to setup regs (e.g.
++ *		  tx was in progress). Write 1 to clear.
++ * SMICS_PVMODE	: Set to 1 to enable pixel valve mode.
++ * SMICS_INTR	: Set to 1 to enable interrupt on RX.
++ * SMICS_INTT	: Set to 1 to enable interrupt on TX.
++ * SMICS_INTD	: Set to 1 to enable interrupt on DONE condition.
++ * SMICS_TEEN	: Tear effect mode enabled: Programmed transfers will wait
++ *		  for a TE trigger before writing.
++ * SMICS_PAD1	: Padding settings for external transfers. For writes: the
++ *		  number of bytes initially written to  the TX fifo that
++ * SMICS_PAD0	: should be ignored. For reads: the number of bytes that will
++ *		  be read before the data, and should be dropped.
++ * SMICS_WRITE	: Transfer direction: 1 = write to external device, 0 = read
++ * SMICS_CLEAR	: Write 1 to clear the FIFOs.
++ * SMICS_START	: Write 1 to start the programmed transfer.
++ * SMICS_ACTIVE	: Reads as 1 when a programmed transfer is underway.
++ * SMICS_DONE	: Reads as 1 when transfer finished. For RX, not set until
++ *		  FIFO emptied.
++ * SMICS_ENABLE	: Set to 1 to enable the SMI peripheral, 0 to disable.
++ */
++
++#define SMICS_RXF	(1 << 31)
++#define SMICS_TXE	(1 << 30)
++#define SMICS_RXD	(1 << 29)
++#define SMICS_TXD	(1 << 28)
++#define SMICS_RXR	(1 << 27)
++#define SMICS_TXW	(1 << 26)
++#define SMICS_AFERR	(1 << 25)
++#define SMICS_EDREQ	(1 << 15)
++#define SMICS_PXLDAT	(1 << 14)
++#define SMICS_SETERR	(1 << 13)
++#define SMICS_PVMODE	(1 << 12)
++#define SMICS_INTR	(1 << 11)
++#define SMICS_INTT	(1 << 10)
++#define SMICS_INTD	(1 << 9)
++#define SMICS_TEEN	(1 << 8)
++#define SMICS_PAD1	(1 << 7)
++#define SMICS_PAD0	(1 << 6)
++#define SMICS_WRITE	(1 << 5)
++#define SMICS_CLEAR	(1 << 4)
++#define SMICS_START	(1 << 3)
++#define SMICS_ACTIVE	(1 << 2)
++#define SMICS_DONE	(1 << 1)
++#define SMICS_ENABLE	(1 << 0)
++
++/* Address register bits: */
++
++#define SMIA_DEVICE_MASK ((1 << 9) | (1 << 8))
++#define SMIA_DEVICE_OFFS (8)
++#define SMIA_ADDR_MASK (0x3f)	/* bits 5 -> 0 */
++#define SMIA_ADDR_OFFS (0)
++
++/* DMA control register bits:
++ * SMIDC_DMAEN	: DMA enable: set 1: DMA requests will be issued.
++ * SMIDC_DMAP	: DMA passthrough: when set to 0, top two data pins are used by
++ *		  SMI as usual. When set to 1, the top two pins are used for
++ *		  external DREQs: pin 16 read request, 17 write.
++ * SMIDC_PANIC*	: Threshold at which DMA will panic during read/write.
++ * SMIDC_REQ*	: Threshold at which DMA will generate a DREQ.
++ */
++
++#define SMIDC_DMAEN		(1 << 28)
++#define SMIDC_DMAP		(1 << 24)
++#define SMIDC_PANICR_MASK	(0x3f << 18)
++#define SMIDC_PANICR_OFFS	(18)
++#define SMIDC_PANICW_MASK	(0x3f << 12)
++#define SMIDC_PANICW_OFFS	(12)
++#define SMIDC_REQR_MASK		(0x3f << 6)
++#define SMIDC_REQR_OFFS		(6)
++#define SMIDC_REQW_MASK		(0x3f)
++#define SMIDC_REQW_OFFS		(0)
++
++/* Device settings register bits: same for all 4 device register sets.
++ * Device read settings:
++ * SMIDSR_RWIDTH	: Read transfer width. 00 = 8bit, 01 = 16bit,
++ *			  10 = 18bit, 11 = 9bit.
++ * SMIDSR_RSETUP	: Read setup time: number of core cycles between chip
++ *			  select/address and read strobe. Min 1, max 64.
++ * SMIDSR_MODE68	: 1 for System 68 mode (i.e. enable + direction pins,
++ *			  rather than OE + WE pin)
++ * SMIDSR_FSETUP	: If set to 1, setup time only applies to first
++ *			  transfer after address change.
++ * SMIDSR_RHOLD		: Number of core cycles between read strobe going
++ *			  inactive and CS/address going inactive. Min 1, max 64
++ * SMIDSR_RPACEALL	: When set to 1, this device's RPACE value will always
++ *			  be used for the next transaction, even if it is not
++ *			  to this device.
++ * SMIDSR_RPACE		: Number of core cycles spent waiting between CS
++ *			  deassert and start of next transfer. Min 1, max 128
++ * SMIDSR_RDREQ		: 1 = use external DMA request on SD16 to pace reads
++ *			  from device. Must also set DMAP in SMICS.
++ * SMIDSR_RSTROBE	: Number of cycles to assert the read strobe.
++ *			  min 1, max 128.
++ */
++#define SMIDSR_RWIDTH_MASK	((1<<31)|(1<<30))
++#define SMIDSR_RWIDTH_OFFS	(30)
++#define SMIDSR_RSETUP_MASK	(0x3f << 24)
++#define SMIDSR_RSETUP_OFFS	(24)
++#define SMIDSR_MODE68		(1 << 23)
++#define SMIDSR_FSETUP		(1 << 22)
++#define SMIDSR_RHOLD_MASK	(0x3f << 16)
++#define SMIDSR_RHOLD_OFFS	(16)
++#define SMIDSR_RPACEALL		(1 << 15)
++#define SMIDSR_RPACE_MASK	(0x7f << 8)
++#define SMIDSR_RPACE_OFFS	(8)
++#define SMIDSR_RDREQ		(1 << 7)
++#define SMIDSR_RSTROBE_MASK	(0x7f)
++#define SMIDSR_RSTROBE_OFFS	(0)
++
++/* Device write settings:
++ * SMIDSW_WWIDTH	: Write transfer width. 00 = 8bit, 01 = 16bit,
++ *			  10 = 18bit, 11 = 9bit.
++ * SMIDSW_WSETUP	: Number of cycles between CS assert and write strobe.
++ *			  Min 1, max 64.
++ * SMIDSW_WFORMAT	: Pixel format of input. 0 = 16bit RGB 565,
++ *			  1 = 32bit RGBA 8888
++ * SMIDSW_WSWAP		: 1 = swap pixel data bits. (Use with SMICS_PXLDAT)
++ * SMIDSW_WHOLD		: Time between WE deassert and CS deassert. 1 to 64
++ * SMIDSW_WPACEALL	: 1: this device's WPACE will be used for the next
++ *			  transfer, regardless of that transfer's device.
++ * SMIDSW_WPACE		: Cycles between CS deassert and next CS assert.
++ *			  Min 1, max 128
++ * SMIDSW_WDREQ		: Use external DREQ on pin 17 to pace writes. DMAP must
++ *			  be set in SMICS.
++ * SMIDSW_WSTROBE	: Number of cycles to assert the write strobe.
++ *			  Min 1, max 128
++ */
++#define SMIDSW_WWIDTH_MASK	 ((1<<31)|(1<<30))
++#define SMIDSW_WWIDTH_OFFS	(30)
++#define SMIDSW_WSETUP_MASK	(0x3f << 24)
++#define SMIDSW_WSETUP_OFFS	(24)
++#define SMIDSW_WFORMAT		(1 << 23)
++#define SMIDSW_WSWAP		(1 << 22)
++#define SMIDSW_WHOLD_MASK	(0x3f << 16)
++#define SMIDSW_WHOLD_OFFS	(16)
++#define SMIDSW_WPACEALL		(1 << 15)
++#define SMIDSW_WPACE_MASK	(0x7f << 8)
++#define SMIDSW_WPACE_OFFS	(8)
++#define SMIDSW_WDREQ		(1 << 7)
++#define SMIDSW_WSTROBE_MASK	 (0x7f)
++#define SMIDSW_WSTROBE_OFFS	 (0)
++
++/* Direct transfer control + status register
++ * SMIDCS_WRITE	: Direction of transfer: 1 -> write, 0 -> read
++ * SMIDCS_DONE	: 1 when a transfer has finished. Write 1 to clear.
++ * SMIDCS_START	: Write 1 to start a transfer, if one is not already underway.
++ * SMIDCS_ENABLE: Write 1 to enable SMI in direct mode.
++ */
++
++#define SMIDCS_WRITE		(1 << 3)
++#define SMIDCS_DONE		(1 << 2)
++#define SMIDCS_START		(1 << 1)
++#define SMIDCS_ENABLE		(1 << 0)
++
++/* Direct transfer address register
++ * SMIDA_DEVICE	: Indicates which of the device settings banks should be used.
++ * SMIDA_ADDR	: The value to be asserted on the address pins.
++ */
++
++#define SMIDA_DEVICE_MASK	((1<<9)|(1<<8))
++#define SMIDA_DEVICE_OFFS	(8)
++#define SMIDA_ADDR_MASK		(0x3f)
++#define SMIDA_ADDR_OFFS		(0)
++
++/* FIFO debug register
++ * SMIFD_FLVL	: The high-tide mark of FIFO count during the most recent txfer
++ * SMIFD_FCNT	: The current FIFO count.
++ */
++#define SMIFD_FLVL_MASK		(0x3f << 8)
++#define SMIFD_FLVL_OFFS		(8)
++#define SMIFD_FCNT_MASK		(0x3f)
++#define SMIFD_FCNT_OFFS		(0)
++
++#endif /* BCM2835_SMI_IMPLEMENTATION */
++
++#endif /* BCM2835_SMI_H */
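
For reviewers unfamiliar with this peripheral: the direct-mode registers documented
above (SMIDA, SMIDCS, SMIDD) are enough for a single polled transfer. The snippet
below is a minimal sketch only, assuming an ioremapped register base and device
index 0; the driver added by this patch wraps such accesses behind
bcm2835_smi_read_buf()/bcm2835_smi_write_buf() and the DMA path instead.

#include <linux/io.h>
#include <linux/types.h>

/* Sketch only: polled direct-mode read of one value from SMI device 0.
 * "smi_regs" is assumed to be an ioremap() of SMI_BASE_ADDRESS. */
static u32 smi_direct_read_sketch(void __iomem *smi_regs, unsigned int addr)
{
	u32 val;

	/* Put device select 0 and the external address on the bus. */
	writel((0 << SMIDA_DEVICE_OFFS) | (addr & SMIDA_ADDR_MASK),
	       smi_regs + SMIDA);
	/* Enable direct mode and start; leaving SMIDCS_WRITE clear selects a read. */
	writel(SMIDCS_ENABLE | SMIDCS_START, smi_regs + SMIDCS);
	/* Poll for completion; a real driver would sleep or use the interrupt. */
	while (!(readl(smi_regs + SMIDCS) & SMIDCS_DONE))
		cpu_relax();
	val = readl(smi_regs + SMIDD);
	/* DONE is write-one-to-clear. */
	writel(SMIDCS_ENABLE | SMIDCS_DONE, smi_regs + SMIDCS);

	return val;
}
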
diff --git a/target/linux/brcm2708/patches-4.4/0042-Add-SMI-NAND-driver.patch b/target/linux/brcm2708/patches-4.4/0042-Add-SMI-NAND-driver.patch
new file mode 100644
index 0000000..091e333
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0042-Add-SMI-NAND-driver.patch
@@ -0,0 +1,358 @@
+From 142af590ca0a7fc36905f5bd123103c730d22a07 Mon Sep 17 00:00:00 2001
+From: Luke Wren <wren6991 at gmail.com>
+Date: Sat, 5 Sep 2015 01:16:10 +0100
+Subject: [PATCH 042/127] Add SMI NAND driver
+
+Signed-off-by: Luke Wren <wren6991 at gmail.com>
+---
+ .../bindings/mtd/brcm,bcm2835-smi-nand.txt         |  42 ++++
+ drivers/mtd/nand/Kconfig                           |   7 +
+ drivers/mtd/nand/Makefile                          |   1 +
+ drivers/mtd/nand/bcm2835_smi_nand.c                | 268 +++++++++++++++++++++
+ 4 files changed, 318 insertions(+)
+ create mode 100644 Documentation/devicetree/bindings/mtd/brcm,bcm2835-smi-nand.txt
+ create mode 100644 drivers/mtd/nand/bcm2835_smi_nand.c
+
+--- /dev/null
++++ b/Documentation/devicetree/bindings/mtd/brcm,bcm2835-smi-nand.txt
+@@ -0,0 +1,42 @@
++* BCM2835 SMI NAND flash
++
++This driver is a shim between the BCM2835 SMI driver (SMI is a peripheral for
++talking to parallel register interfaces) and Linux's MTD layer.
++
++Required properties:
++- compatible: "brcm,bcm2835-smi-nand"
++- status: "okay"
++
++Optional properties:
++- partition@n, where n is an integer from a consecutive sequence starting at 0
++	- Difficult to store partition table on NAND device - normally put it
++	in the source code, kernel bootparams, or device tree (the best way!)
++	- Sub-properties:
++		- label: the partition name, as shown by mtdinfo /dev/mtd*
++		- reg: the size and offset of this partition.
++		- (optional) read-only: an empty property flagging as read only
++
++Example:
++
++nand: flash@0 {
++	compatible = "brcm,bcm2835-smi-nand";
++	status = "okay";
++
++	partition@0 {
++		label = "stage2";
++		// 128k
++		reg = <0 0x20000>;
++		read-only;
++	};
++	partition@1 {
++		label = "firmware";
++		// 16M
++		reg = <0x20000 0x1000000>;
++		read-only;
++	};
++	partition@2 {
++		label = "root";
++		// 2G
++		reg = <0x1020000 0x80000000>;
++	};
++};
+\ No newline at end of file
+--- a/drivers/mtd/nand/Kconfig
++++ b/drivers/mtd/nand/Kconfig
+@@ -41,6 +41,13 @@ config MTD_SM_COMMON
+ 	tristate
+ 	default n
+ 
++config MTD_NAND_BCM2835_SMI
++	tristate "Use Broadcom's Secondary Memory Interface as a NAND controller (BCM283x)"
++	depends on (MACH_BCM2708 || MACH_BCM2709 || ARCH_BCM2835) && BCM2835_SMI && MTD_NAND
++	default m
++	help
++	  Uses the BCM2835's SMI peripheral as a NAND controller.
++
+ config MTD_NAND_DENALI
+ 	tristate
+ 
+--- a/drivers/mtd/nand/Makefile
++++ b/drivers/mtd/nand/Makefile
+@@ -14,6 +14,7 @@ obj-$(CONFIG_MTD_NAND_DENALI)		+= denali
+ obj-$(CONFIG_MTD_NAND_DENALI_PCI)	+= denali_pci.o
+ obj-$(CONFIG_MTD_NAND_DENALI_DT)	+= denali_dt.o
+ obj-$(CONFIG_MTD_NAND_AU1550)		+= au1550nd.o
++obj-$(CONFIG_MTD_NAND_BCM2835_SMI)	+= bcm2835_smi_nand.o
+ obj-$(CONFIG_MTD_NAND_BF5XX)		+= bf5xx_nand.o
+ obj-$(CONFIG_MTD_NAND_S3C2410)		+= s3c2410.o
+ obj-$(CONFIG_MTD_NAND_DAVINCI)		+= davinci_nand.o
+--- /dev/null
++++ b/drivers/mtd/nand/bcm2835_smi_nand.c
+@@ -0,0 +1,268 @@
++/**
++ * NAND flash driver for Broadcom Secondary Memory Interface
++ *
++ * Written by Luke Wren <luke at raspberrypi.org>
++ * Copyright (c) 2015, Raspberry Pi (Trading) Ltd.
++ *
++ * Redistribution and use in source and binary forms, with or without
++ * modification, are permitted provided that the following conditions
++ * are met:
++ * 1. Redistributions of source code must retain the above copyright
++ *    notice, this list of conditions, and the following disclaimer,
++ *    without modification.
++ * 2. Redistributions in binary form must reproduce the above copyright
++ *    notice, this list of conditions and the following disclaimer in the
++ *    documentation and/or other materials provided with the distribution.
++ * 3. The names of the above-listed copyright holders may not be used
++ *    to endorse or promote products derived from this software without
++ *    specific prior written permission.
++ *
++ * ALTERNATIVELY, this software may be distributed under the terms of the
++ * GNU General Public License ("GPL") version 2, as published by the Free
++ * Software Foundation.
++ *
++ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
++ * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
++ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
++ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
++ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
++ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
++ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
++ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
++ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
++ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
++ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
++ */
++
++#include <linux/kernel.h>
++#include <linux/module.h>
++#include <linux/of.h>
++#include <linux/platform_device.h>
++#include <linux/slab.h>
++#include <linux/mtd/nand.h>
++#include <linux/mtd/partitions.h>
++
++#include <linux/broadcom/bcm2835_smi.h>
++
++#define DEVICE_NAME "bcm2835-smi-nand"
++#define DRIVER_NAME "smi-nand-bcm2835"
++
++struct bcm2835_smi_nand_host {
++	struct bcm2835_smi_instance *smi_inst;
++	struct nand_chip nand_chip;
++	struct mtd_info mtd;
++	struct device *dev;
++};
++
++/****************************************************************************
++*
++*   NAND functionality implementation
++*
++****************************************************************************/
++
++#define SMI_NAND_CLE_PIN 0x01
++#define SMI_NAND_ALE_PIN 0x02
++
++static inline void bcm2835_smi_nand_cmd_ctrl(struct mtd_info *mtd, int cmd,
++					     unsigned int ctrl)
++{
++	uint32_t cmd32 = cmd;
++	uint32_t addr = ~(SMI_NAND_CLE_PIN | SMI_NAND_ALE_PIN);
++	struct bcm2835_smi_nand_host *host = dev_get_drvdata(mtd->dev.parent);
++	struct bcm2835_smi_instance *inst = host->smi_inst;
++
++	if (ctrl & NAND_CLE)
++		addr |= SMI_NAND_CLE_PIN;
++	if (ctrl & NAND_ALE)
++		addr |= SMI_NAND_ALE_PIN;
++	/* Lower ALL the CS pins! */
++	if (ctrl & NAND_NCE)
++		addr &= (SMI_NAND_CLE_PIN | SMI_NAND_ALE_PIN);
++
++	bcm2835_smi_set_address(inst, addr);
++
++	if (cmd != NAND_CMD_NONE)
++		bcm2835_smi_write_buf(inst, &cmd32, 1);
++}
++
++static inline uint8_t bcm2835_smi_nand_read_byte(struct mtd_info *mtd)
++{
++	uint8_t byte;
++	struct bcm2835_smi_nand_host *host = dev_get_drvdata(mtd->dev.parent);
++	struct bcm2835_smi_instance *inst = host->smi_inst;
++
++	bcm2835_smi_read_buf(inst, &byte, 1);
++	return byte;
++}
++
++static inline void bcm2835_smi_nand_write_byte(struct mtd_info *mtd,
++					       uint8_t byte)
++{
++	struct bcm2835_smi_nand_host *host = dev_get_drvdata(mtd->dev.parent);
++	struct bcm2835_smi_instance *inst = host->smi_inst;
++
++	bcm2835_smi_write_buf(inst, &byte, 1);
++}
++
++static inline void bcm2835_smi_nand_write_buf(struct mtd_info *mtd,
++					      const uint8_t *buf, int len)
++{
++	struct bcm2835_smi_nand_host *host = dev_get_drvdata(mtd->dev.parent);
++	struct bcm2835_smi_instance *inst = host->smi_inst;
++
++	bcm2835_smi_write_buf(inst, buf, len);
++}
++
++static inline void bcm2835_smi_nand_read_buf(struct mtd_info *mtd,
++					     uint8_t *buf, int len)
++{
++	struct bcm2835_smi_nand_host *host = dev_get_drvdata(mtd->dev.parent);
++	struct bcm2835_smi_instance *inst = host->smi_inst;
++
++	bcm2835_smi_read_buf(inst, buf, len);
++}
++
++/****************************************************************************
++*
++*   Probe and remove functions
++*
++***************************************************************************/
++
++static int bcm2835_smi_nand_probe(struct platform_device *pdev)
++{
++	struct bcm2835_smi_nand_host *host;
++	struct nand_chip *this;
++	struct mtd_info *mtd;
++	struct device *dev = &pdev->dev;
++	struct device_node *node = dev->of_node, *smi_node;
++	struct mtd_part_parser_data ppdata;
++	struct smi_settings *smi_settings;
++	struct bcm2835_smi_instance *smi_inst;
++	int ret = -ENXIO;
++
++	if (!node) {
++		dev_err(dev, "No device tree node supplied!");
++		return -EINVAL;
++	}
++
++	smi_node = of_parse_phandle(node, "smi_handle", 0);
++
++	/* Request use of SMI peripheral: */
++	smi_inst = bcm2835_smi_get(smi_node);
++
++	if (!smi_inst) {
++		dev_err(dev, "Could not register with SMI.");
++		return -EPROBE_DEFER;
++	}
++
++	/* Set SMI timing and bus width */
++
++	smi_settings = bcm2835_smi_get_settings_from_regs(smi_inst);
++
++	smi_settings->data_width = SMI_WIDTH_8BIT;
++	smi_settings->read_setup_time = 2;
++	smi_settings->read_hold_time = 1;
++	smi_settings->read_pace_time = 1;
++	smi_settings->read_strobe_time = 3;
++
++	smi_settings->write_setup_time = 2;
++	smi_settings->write_hold_time = 1;
++	smi_settings->write_pace_time = 1;
++	smi_settings->write_strobe_time = 3;
++
++	bcm2835_smi_set_regs_from_settings(smi_inst);
++
++	host = devm_kzalloc(dev, sizeof(struct bcm2835_smi_nand_host),
++		GFP_KERNEL);
++	if (!host)
++		return -ENOMEM;
++
++	host->dev = dev;
++	host->smi_inst = smi_inst;
++
++	platform_set_drvdata(pdev, host);
++
++	/* Link the structures together */
++
++	this = &host->nand_chip;
++	mtd = &host->mtd;
++	mtd->priv = this;
++	mtd->owner = THIS_MODULE;
++	mtd->dev.parent = dev;
++	mtd->name = DRIVER_NAME;
++	ppdata.of_node = node;
++
++	/* 20 us command delay time... */
++	this->chip_delay = 20;
++
++	this->priv = host;
++	this->cmd_ctrl = bcm2835_smi_nand_cmd_ctrl;
++	this->read_byte = bcm2835_smi_nand_read_byte;
++	this->write_byte = bcm2835_smi_nand_write_byte;
++	this->write_buf = bcm2835_smi_nand_write_buf;
++	this->read_buf = bcm2835_smi_nand_read_buf;
++
++	this->ecc.mode = NAND_ECC_SOFT;
++
++	/* Should never be accessed directly: */
++
++	this->IO_ADDR_R = (void *)0xdeadbeef;
++	this->IO_ADDR_W = (void *)0xdeadbeef;
++
++	/* First scan to find the device and get the page size */
++
++	if (nand_scan_ident(mtd, 1, NULL))
++		return -ENXIO;
++
++	/* Second phase scan */
++
++	if (nand_scan_tail(mtd))
++		return -ENXIO;
++
++	ret = mtd_device_parse_register(mtd, NULL, &ppdata, NULL, 0);
++	if (!ret)
++		return 0;
++
++	nand_release(mtd);
++	return -EINVAL;
++}
++
++static int bcm2835_smi_nand_remove(struct platform_device *pdev)
++{
++	struct bcm2835_smi_nand_host *host = platform_get_drvdata(pdev);
++
++	nand_release(&host->mtd);
++
++	return 0;
++}
++
++/****************************************************************************
++*
++*   Register the driver with device tree
++*
++***************************************************************************/
++
++static const struct of_device_id bcm2835_smi_nand_of_match[] = {
++	{.compatible = "brcm,bcm2835-smi-nand",},
++	{ /* sentinel */ }
++};
++
++MODULE_DEVICE_TABLE(of, bcm2835_smi_nand_of_match);
++
++static struct platform_driver bcm2835_smi_nand_driver = {
++	.probe = bcm2835_smi_nand_probe,
++	.remove = bcm2835_smi_nand_remove,
++	.driver = {
++		.name = DRIVER_NAME,
++		.owner = THIS_MODULE,
++		.of_match_table = bcm2835_smi_nand_of_match,
++	},
++};
++
++module_platform_driver(bcm2835_smi_nand_driver);
++
++MODULE_ALIAS("platform:smi-nand-bcm2835");
++MODULE_LICENSE("GPL");
++MODULE_DESCRIPTION
++	("Driver for NAND chips using Broadcom Secondary Memory Interface");
++MODULE_AUTHOR("Luke Wren <luke at raspberrypi.org>");
diff --git a/target/linux/brcm2708/patches-4.4/0043-lirc-added-support-for-RaspberryPi-GPIO.patch b/target/linux/brcm2708/patches-4.4/0043-lirc-added-support-for-RaspberryPi-GPIO.patch
new file mode 100644
index 0000000..728ac37
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0043-lirc-added-support-for-RaspberryPi-GPIO.patch
@@ -0,0 +1,841 @@
+From b9a3e6cbf575d0463af4b47ce48f76624bae10a1 Mon Sep 17 00:00:00 2001
+From: Aron Szabo <aron at aron.ws>
+Date: Sat, 16 Jun 2012 12:15:55 +0200
+Subject: [PATCH 043/127] lirc: added support for RaspberryPi GPIO
+
+lirc_rpi: Use read_current_timer to determine transmitter delay. Thanks to jjmz and others
+See: https://github.com/raspberrypi/linux/issues/525
+
+lirc: Remove restriction on gpio pins that can be used with lirc
+
+Compute Module, for example could use different pins
+
+lirc_rpi: Add parameter to specify input pin pull
+
+Depending on the connected IR circuitry it might be desirable to change the
+gpio's internal pull from its default pull-down behaviour. Add a module
+parameter to allow the user to set it explicitly.
+
+Signed-off-by: Julian Scheel <julian at jusst.de>
+
+lirc-rpi: Use the higher-level irq control functions
+
+This module used to access the irq_chip methods of the
+gpio controller directly, rather than going through the
+standard enable_irq/irq_set_irq_type functions. This
+caused problems on pinctrl-bcm2835 which only implements
+the irq_enable/disable methods and not irq_unmask/mask.
+
+lirc-rpi: Correct the interrupt usage
+
+1) Correct the use of enable_irq (i.e. don't call it so often)
+2) Correct the shutdown sequence.
+3) Avoid a bcm2708_gpio driver quirk by setting the irq flags earlier
+
+lirc-rpi: use getnstimeofday instead of read_current_timer
+
+read_current_timer isn't guaranteed to return values in
+microseconds, and indeed it doesn't on a Pi2.
+
+Issue: linux#827
+
+lirc-rpi: Add device tree support, and a suitable overlay
+
+The overlay supports DT parameters that match the old module
+parameters, except that gpio_in_pull should be set using the
+strings "up", "down" or "off".
+
+lirc-rpi: Also support pinctrl-bcm2835 in non-DT mode
+---
+ drivers/staging/media/lirc/Kconfig    |   6 +
+ drivers/staging/media/lirc/Makefile   |   1 +
+ drivers/staging/media/lirc/lirc_rpi.c | 730 ++++++++++++++++++++++++++++++++++
+ include/linux/platform_data/bcm2708.h |  23 ++
+ 4 files changed, 760 insertions(+)
+ create mode 100644 drivers/staging/media/lirc/lirc_rpi.c
+ create mode 100644 include/linux/platform_data/bcm2708.h
+
+--- a/drivers/staging/media/lirc/Kconfig
++++ b/drivers/staging/media/lirc/Kconfig
+@@ -32,6 +32,12 @@ config LIRC_PARALLEL
+ 	help
+ 	  Driver for Homebrew Parallel Port Receivers
+ 
++config LIRC_RPI
++	tristate "Homebrew GPIO Port Receiver/Transmitter for the RaspberryPi"
++	depends on LIRC
++	help
++	  Driver for Homebrew GPIO Port Receiver/Transmitter for the RaspberryPi
++
+ config LIRC_SASEM
+ 	tristate "Sasem USB IR Remote"
+ 	depends on LIRC && USB
+--- a/drivers/staging/media/lirc/Makefile
++++ b/drivers/staging/media/lirc/Makefile
+@@ -6,6 +6,7 @@
+ obj-$(CONFIG_LIRC_BT829)	+= lirc_bt829.o
+ obj-$(CONFIG_LIRC_IMON)		+= lirc_imon.o
+ obj-$(CONFIG_LIRC_PARALLEL)	+= lirc_parallel.o
++obj-$(CONFIG_LIRC_RPI)		+= lirc_rpi.o
+ obj-$(CONFIG_LIRC_SASEM)	+= lirc_sasem.o
+ obj-$(CONFIG_LIRC_SERIAL)	+= lirc_serial.o
+ obj-$(CONFIG_LIRC_SIR)		+= lirc_sir.o
+--- /dev/null
++++ b/drivers/staging/media/lirc/lirc_rpi.c
+@@ -0,0 +1,730 @@
++/*
++ * lirc_rpi.c
++ *
++ * lirc_rpi - Device driver that records pulse- and pause-lengths
++ *	      (space-lengths) (just like the lirc_serial driver does)
++ *	      between GPIO interrupt events on the Raspberry Pi.
++ *	      Lots of code has been taken from the lirc_serial module,
++ *	      so I would like to say thanks to the authors.
++ *
++ * Copyright (C) 2012 Aron Robert Szabo <aron at reon.hu>,
++ *		      Michael Bishop <cleverca22 at gmail.com>
++ *  This program is free software; you can redistribute it and/or modify
++ *  it under the terms of the GNU General Public License as published by
++ *  the Free Software Foundation; either version 2 of the License, or
++ *  (at your option) any later version.
++ *
++ *  This program is distributed in the hope that it will be useful,
++ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
++ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
++ *  GNU General Public License for more details.
++ *
++ *  You should have received a copy of the GNU General Public License
++ *  along with this program; if not, write to the Free Software
++ *  Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
++ */
++
++#include <linux/module.h>
++#include <linux/errno.h>
++#include <linux/interrupt.h>
++#include <linux/sched.h>
++#include <linux/kernel.h>
++#include <linux/time.h>
++#include <linux/timex.h>
++#include <linux/timekeeping.h>
++#include <linux/string.h>
++#include <linux/delay.h>
++#include <linux/platform_device.h>
++#include <linux/irq.h>
++#include <linux/spinlock.h>
++#include <media/lirc.h>
++#include <media/lirc_dev.h>
++#include <linux/gpio.h>
++#include <linux/of_platform.h>
++#include <linux/platform_data/bcm2708.h>
++
++#define LIRC_DRIVER_NAME "lirc_rpi"
++#define RBUF_LEN 256
++#define LIRC_TRANSMITTER_LATENCY 50
++
++#ifndef MAX_UDELAY_MS
++#define MAX_UDELAY_US 5000
++#else
++#define MAX_UDELAY_US (MAX_UDELAY_MS*1000)
++#endif
++
++#define dprintk(fmt, args...)					\
++	do {							\
++		if (debug)					\
++			printk(KERN_DEBUG LIRC_DRIVER_NAME ": "	\
++			       fmt, ## args);			\
++	} while (0)
++
++/* module parameters */
++
++/* set the default GPIO input pin */
++static int gpio_in_pin = 18;
++/* set the default pull behaviour for input pin */
++static int gpio_in_pull = BCM2708_PULL_DOWN;
++/* set the default GPIO output pin */
++static int gpio_out_pin = 17;
++/* enable debugging messages */
++static bool debug;
++/* -1 = auto, 0 = active high, 1 = active low */
++static int sense = -1;
++/* use softcarrier by default */
++static bool softcarrier = 1;
++/* 0 = do not invert output, 1 = invert output */
++static bool invert = 0;
++
++struct gpio_chip *gpiochip;
++static int irq_num;
++
++/* forward declarations */
++static long send_pulse(unsigned long length);
++static void send_space(long length);
++static void lirc_rpi_exit(void);
++
++static struct platform_device *lirc_rpi_dev;
++static struct timeval lasttv = { 0, 0 };
++static struct lirc_buffer rbuf;
++static spinlock_t lock;
++
++/* initialized/set in init_timing_params() */
++static unsigned int freq = 38000;
++static unsigned int duty_cycle = 50;
++static unsigned long period;
++static unsigned long pulse_width;
++static unsigned long space_width;
++
++static void safe_udelay(unsigned long usecs)
++{
++	while (usecs > MAX_UDELAY_US) {
++		udelay(MAX_UDELAY_US);
++		usecs -= MAX_UDELAY_US;
++	}
++	udelay(usecs);
++}
++
++static unsigned long read_current_us(void)
++{
++	struct timespec now;
++	getnstimeofday(&now);
++	return (now.tv_sec * 1000000) + (now.tv_nsec/1000);
++}
++
++static int init_timing_params(unsigned int new_duty_cycle,
++	unsigned int new_freq)
++{
++	if (1000 * 1000000L / new_freq * new_duty_cycle / 100 <=
++	    LIRC_TRANSMITTER_LATENCY)
++		return -EINVAL;
++	if (1000 * 1000000L / new_freq * (100 - new_duty_cycle) / 100 <=
++	    LIRC_TRANSMITTER_LATENCY)
++		return -EINVAL;
++	duty_cycle = new_duty_cycle;
++	freq = new_freq;
++	period = 1000 * 1000000L / freq;
++	pulse_width = period * duty_cycle / 100;
++	space_width = period - pulse_width;
++	dprintk("in init_timing_params, freq=%d pulse=%ld, "
++		"space=%ld\n", freq, pulse_width, space_width);
++	return 0;
++}
++
++static long send_pulse_softcarrier(unsigned long length)
++{
++	int flag;
++	unsigned long actual, target;
++	unsigned long actual_us, initial_us, target_us;
++
++	length *= 1000;
++
++	actual = 0; target = 0; flag = 0;
++	actual_us = read_current_us();
++
++	while (actual < length) {
++		if (flag) {
++			gpiochip->set(gpiochip, gpio_out_pin, invert);
++			target += space_width;
++		} else {
++			gpiochip->set(gpiochip, gpio_out_pin, !invert);
++			target += pulse_width;
++		}
++		initial_us = actual_us;
++		target_us = actual_us + (target - actual) / 1000;
++		/*
++		 * Note - we've checked in ioctl that the pulse/space
++		 * widths are big enough so that d is > 0
++		 */
++		if  ((int)(target_us - actual_us) > 0)
++			udelay(target_us - actual_us);
++		actual_us = read_current_us();
++		actual += (actual_us - initial_us) * 1000;
++		flag = !flag;
++	}
++	return (actual-length) / 1000;
++}
++
++static long send_pulse(unsigned long length)
++{
++	if (length <= 0)
++		return 0;
++
++	if (softcarrier) {
++		return send_pulse_softcarrier(length);
++	} else {
++		gpiochip->set(gpiochip, gpio_out_pin, !invert);
++		safe_udelay(length);
++		return 0;
++	}
++}
++
++static void send_space(long length)
++{
++	gpiochip->set(gpiochip, gpio_out_pin, invert);
++	if (length <= 0)
++		return;
++	safe_udelay(length);
++}
++
++static void rbwrite(int l)
++{
++	if (lirc_buffer_full(&rbuf)) {
++		/* no new signals will be accepted */
++		dprintk("Buffer overrun\n");
++		return;
++	}
++	lirc_buffer_write(&rbuf, (void *)&l);
++}
++
++static void frbwrite(int l)
++{
++	/* simple noise filter */
++	static int pulse, space;
++	static unsigned int ptr;
++
++	if (ptr > 0 && (l & PULSE_BIT)) {
++		pulse += l & PULSE_MASK;
++		if (pulse > 250) {
++			rbwrite(space);
++			rbwrite(pulse | PULSE_BIT);
++			ptr = 0;
++			pulse = 0;
++		}
++		return;
++	}
++	if (!(l & PULSE_BIT)) {
++		if (ptr == 0) {
++			if (l > 20000) {
++				space = l;
++				ptr++;
++				return;
++			}
++		} else {
++			if (l > 20000) {
++				space += pulse;
++				if (space > PULSE_MASK)
++					space = PULSE_MASK;
++				space += l;
++				if (space > PULSE_MASK)
++					space = PULSE_MASK;
++				pulse = 0;
++				return;
++			}
++			rbwrite(space);
++			rbwrite(pulse | PULSE_BIT);
++			ptr = 0;
++			pulse = 0;
++		}
++	}
++	rbwrite(l);
++}
++
++static irqreturn_t irq_handler(int i, void *blah, struct pt_regs *regs)
++{
++	struct timeval tv;
++	long deltv;
++	int data;
++	int signal;
++
++	/* use the GPIO signal level */
++	signal = gpiochip->get(gpiochip, gpio_in_pin);
++
++	if (sense != -1) {
++		/* get current time */
++		do_gettimeofday(&tv);
++
++		/* calc time since last interrupt in microseconds */
++		deltv = tv.tv_sec-lasttv.tv_sec;
++		if (tv.tv_sec < lasttv.tv_sec ||
++		    (tv.tv_sec == lasttv.tv_sec &&
++		     tv.tv_usec < lasttv.tv_usec)) {
++			printk(KERN_WARNING LIRC_DRIVER_NAME
++			       ": AIEEEE: your clock just jumped backwards\n");
++			printk(KERN_WARNING LIRC_DRIVER_NAME
++			       ": %d %d %lx %lx %lx %lx\n", signal, sense,
++			       tv.tv_sec, lasttv.tv_sec,
++			       tv.tv_usec, lasttv.tv_usec);
++			data = PULSE_MASK;
++		} else if (deltv > 15) {
++			data = PULSE_MASK; /* really long time */
++			if (!(signal^sense)) {
++				/* sanity check */
++				printk(KERN_WARNING LIRC_DRIVER_NAME
++				       ": AIEEEE: %d %d %lx %lx %lx %lx\n",
++				       signal, sense, tv.tv_sec, lasttv.tv_sec,
++				       tv.tv_usec, lasttv.tv_usec);
++				/*
++				 * detecting pulse while this
++				 * MUST be a space!
++				 */
++				sense = sense ? 0 : 1;
++			}
++		} else {
++			data = (int) (deltv*1000000 +
++				      (tv.tv_usec - lasttv.tv_usec));
++		}
++		frbwrite(signal^sense ? data : (data|PULSE_BIT));
++		lasttv = tv;
++		wake_up_interruptible(&rbuf.wait_poll);
++	}
++
++	return IRQ_HANDLED;
++}
++
++static int is_right_chip(struct gpio_chip *chip, void *data)
++{
++	dprintk("is_right_chip %s %d\n", chip->label, strcmp(data, chip->label));
++
++	if (strcmp(data, chip->label) == 0)
++		return 1;
++	return 0;
++}
++
++static inline int read_bool_property(const struct device_node *np,
++				     const char *propname,
++				     bool *out_value)
++{
++	u32 value = 0;
++	int err = of_property_read_u32(np, propname, &value);
++	if (err == 0)
++		*out_value = (value != 0);
++	return err;
++}
++
++static void read_pin_settings(struct device_node *node)
++{
++	u32 pin;
++	int index;
++
++	for (index = 0;
++	     of_property_read_u32_index(
++		     node,
++		     "brcm,pins",
++		     index,
++		     &pin) == 0;
++	     index++) {
++		u32 function;
++		int err;
++		err = of_property_read_u32_index(
++			node,
++			"brcm,function",
++			index,
++			&function);
++		if (err == 0) {
++			if (function == 1) /* Output */
++				gpio_out_pin = pin;
++			else if (function == 0) /* Input */
++				gpio_in_pin = pin;
++		}
++	}
++}
++
++static int init_port(void)
++{
++	int i, nlow, nhigh;
++	struct device_node *node;
++
++	node = lirc_rpi_dev->dev.of_node;
++
++	gpiochip = gpiochip_find("bcm2708_gpio", is_right_chip);
++
++	/*
++	 * Because of the lack of a setpull function, only support
++	 * pinctrl-bcm2835 if using device tree.
++	*/
++	if (!gpiochip && node)
++		gpiochip = gpiochip_find("pinctrl-bcm2835", is_right_chip);
++
++	if (!gpiochip) {
++		pr_err(LIRC_DRIVER_NAME ": gpio chip not found!\n");
++		return -ENODEV;
++	}
++
++	if (node) {
++		struct device_node *pins_node;
++
++		pins_node = of_parse_phandle(node, "pinctrl-0", 0);
++		if (!pins_node) {
++			printk(KERN_ERR LIRC_DRIVER_NAME
++			       ": pinctrl settings not found!\n");
++			return -EINVAL;
++		}
++
++		read_pin_settings(pins_node);
++
++		of_property_read_u32(node, "rpi,sense", &sense);
++
++		read_bool_property(node, "rpi,softcarrier", &softcarrier);
++
++		read_bool_property(node, "rpi,invert", &invert);
++
++		read_bool_property(node, "rpi,debug", &debug);
++
++	} else {
++		return -EINVAL;
++	}
++
++	gpiochip->set(gpiochip, gpio_out_pin, invert);
++
++	irq_num = gpiochip->to_irq(gpiochip, gpio_in_pin);
++	dprintk("to_irq %d\n", irq_num);
++
++	/* if pin is high, then this must be an active low receiver. */
++	if (sense == -1) {
++		/* wait 1/2 sec for the power supply */
++		msleep(500);
++
++		/*
++		 * probe 9 times every 0.04s, collect "votes" for
++		 * active high/low
++		 */
++		nlow = 0;
++		nhigh = 0;
++		for (i = 0; i < 9; i++) {
++			if (gpiochip->get(gpiochip, gpio_in_pin))
++				nlow++;
++			else
++				nhigh++;
++			msleep(40);
++		}
++		sense = (nlow >= nhigh ? 1 : 0);
++		printk(KERN_INFO LIRC_DRIVER_NAME
++		       ": auto-detected active %s receiver on GPIO pin %d\n",
++		       sense ? "low" : "high", gpio_in_pin);
++	} else {
++		printk(KERN_INFO LIRC_DRIVER_NAME
++		       ": manually using active %s receiver on GPIO pin %d\n",
++		       sense ? "low" : "high", gpio_in_pin);
++	}
++
++	return 0;
++}
++
++// called when the character device is opened
++static int set_use_inc(void *data)
++{
++	int result;
++
++	/* initialize timestamp */
++	do_gettimeofday(&lasttv);
++
++	result = request_irq(irq_num,
++			     (irq_handler_t) irq_handler,
++			     IRQ_TYPE_EDGE_RISING | IRQ_TYPE_EDGE_FALLING,
++			     LIRC_DRIVER_NAME, (void*) 0);
++
++	switch (result) {
++	case -EBUSY:
++		printk(KERN_ERR LIRC_DRIVER_NAME
++		       ": IRQ %d is busy\n",
++		       irq_num);
++		return -EBUSY;
++	case -EINVAL:
++		printk(KERN_ERR LIRC_DRIVER_NAME
++		       ": Bad irq number or handler\n");
++		return -EINVAL;
++	default:
++		dprintk("Interrupt %d obtained\n",
++			irq_num);
++		break;
++	};
++
++	/* initialize pulse/space widths */
++	init_timing_params(duty_cycle, freq);
++
++	return 0;
++}
++
++static void set_use_dec(void *data)
++{
++	/* GPIO Pin Falling/Rising Edge Detect Disable */
++	irq_set_irq_type(irq_num, 0);
++	disable_irq(irq_num);
++
++	free_irq(irq_num, (void *) 0);
++
++	dprintk(KERN_INFO LIRC_DRIVER_NAME
++		": freed IRQ %d\n", irq_num);
++}
++
++static ssize_t lirc_write(struct file *file, const char *buf,
++	size_t n, loff_t *ppos)
++{
++	int i, count;
++	unsigned long flags;
++	long delta = 0;
++	int *wbuf;
++
++	count = n / sizeof(int);
++	if (n % sizeof(int) || count % 2 == 0)
++		return -EINVAL;
++	wbuf = memdup_user(buf, n);
++	if (IS_ERR(wbuf))
++		return PTR_ERR(wbuf);
++	spin_lock_irqsave(&lock, flags);
++
++	for (i = 0; i < count; i++) {
++		if (i%2)
++			send_space(wbuf[i] - delta);
++		else
++			delta = send_pulse(wbuf[i]);
++	}
++	gpiochip->set(gpiochip, gpio_out_pin, invert);
++
++	spin_unlock_irqrestore(&lock, flags);
++	kfree(wbuf);
++	return n;
++}
++
++static long lirc_ioctl(struct file *filep, unsigned int cmd, unsigned long arg)
++{
++	int result;
++	__u32 value;
++
++	switch (cmd) {
++	case LIRC_GET_SEND_MODE:
++		return -ENOIOCTLCMD;
++		break;
++
++	case LIRC_SET_SEND_MODE:
++		result = get_user(value, (__u32 *) arg);
++		if (result)
++			return result;
++		/* only LIRC_MODE_PULSE supported */
++		if (value != LIRC_MODE_PULSE)
++			return -ENOSYS;
++		break;
++
++	case LIRC_GET_LENGTH:
++		return -ENOSYS;
++		break;
++
++	case LIRC_SET_SEND_DUTY_CYCLE:
++		dprintk("SET_SEND_DUTY_CYCLE\n");
++		result = get_user(value, (__u32 *) arg);
++		if (result)
++			return result;
++		if (value <= 0 || value > 100)
++			return -EINVAL;
++		return init_timing_params(value, freq);
++		break;
++
++	case LIRC_SET_SEND_CARRIER:
++		dprintk("SET_SEND_CARRIER\n");
++		result = get_user(value, (__u32 *) arg);
++		if (result)
++			return result;
++		if (value > 500000 || value < 20000)
++			return -EINVAL;
++		return init_timing_params(duty_cycle, value);
++		break;
++
++	default:
++		return lirc_dev_fop_ioctl(filep, cmd, arg);
++	}
++	return 0;
++}
++
++static const struct file_operations lirc_fops = {
++	.owner		= THIS_MODULE,
++	.write		= lirc_write,
++	.unlocked_ioctl	= lirc_ioctl,
++	.read		= lirc_dev_fop_read,
++	.poll		= lirc_dev_fop_poll,
++	.open		= lirc_dev_fop_open,
++	.release	= lirc_dev_fop_close,
++	.llseek		= no_llseek,
++};
++
++static struct lirc_driver driver = {
++	.name		= LIRC_DRIVER_NAME,
++	.minor		= -1,
++	.code_length	= 1,
++	.sample_rate	= 0,
++	.data		= NULL,
++	.add_to_buf	= NULL,
++	.rbuf		= &rbuf,
++	.set_use_inc	= set_use_inc,
++	.set_use_dec	= set_use_dec,
++	.fops		= &lirc_fops,
++	.dev		= NULL,
++	.owner		= THIS_MODULE,
++};
++
++static const struct of_device_id lirc_rpi_of_match[] = {
++	{ .compatible = "rpi,lirc-rpi", },
++	{},
++};
++MODULE_DEVICE_TABLE(of, lirc_rpi_of_match);
++
++static struct platform_driver lirc_rpi_driver = {
++	.driver = {
++		.name   = LIRC_DRIVER_NAME,
++		.owner  = THIS_MODULE,
++		.of_match_table = of_match_ptr(lirc_rpi_of_match),
++	},
++};
++
++static int __init lirc_rpi_init(void)
++{
++	struct device_node *node;
++	int result;
++
++	/* Init read buffer. */
++	result = lirc_buffer_init(&rbuf, sizeof(int), RBUF_LEN);
++	if (result < 0)
++		return -ENOMEM;
++
++	result = platform_driver_register(&lirc_rpi_driver);
++	if (result) {
++		printk(KERN_ERR LIRC_DRIVER_NAME
++		       ": lirc register returned %d\n", result);
++		goto exit_buffer_free;
++	}
++
++	node = of_find_compatible_node(NULL, NULL,
++				       lirc_rpi_of_match[0].compatible);
++
++	if (node) {
++		/* DT-enabled */
++		lirc_rpi_dev = of_find_device_by_node(node);
++		WARN_ON(lirc_rpi_dev->dev.of_node != node);
++		of_node_put(node);
++	}
++	else {
++		lirc_rpi_dev = platform_device_alloc(LIRC_DRIVER_NAME, 0);
++		if (!lirc_rpi_dev) {
++			result = -ENOMEM;
++			goto exit_driver_unregister;
++		}
++
++		result = platform_device_add(lirc_rpi_dev);
++		if (result)
++			goto exit_device_put;
++	}
++
++	return 0;
++
++	exit_device_put:
++	platform_device_put(lirc_rpi_dev);
++
++	exit_driver_unregister:
++	platform_driver_unregister(&lirc_rpi_driver);
++
++	exit_buffer_free:
++	lirc_buffer_free(&rbuf);
++
++	return result;
++}
++
++static void lirc_rpi_exit(void)
++{
++	if (!lirc_rpi_dev->dev.of_node)
++		platform_device_unregister(lirc_rpi_dev);
++	platform_driver_unregister(&lirc_rpi_driver);
++	lirc_buffer_free(&rbuf);
++}
++
++static int __init lirc_rpi_init_module(void)
++{
++	int result;
++
++	result = lirc_rpi_init();
++	if (result)
++		return result;
++
++	result = init_port();
++	if (result < 0)
++		goto exit_rpi;
++
++	driver.features = LIRC_CAN_SET_SEND_DUTY_CYCLE |
++			  LIRC_CAN_SET_SEND_CARRIER |
++			  LIRC_CAN_SEND_PULSE |
++			  LIRC_CAN_REC_MODE2;
++
++	driver.dev = &lirc_rpi_dev->dev;
++	driver.minor = lirc_register_driver(&driver);
++
++	if (driver.minor < 0) {
++		printk(KERN_ERR LIRC_DRIVER_NAME
++		       ": device registration failed with %d\n", result);
++		result = -EIO;
++		goto exit_rpi;
++	}
++
++	printk(KERN_INFO LIRC_DRIVER_NAME ": driver registered!\n");
++
++	return 0;
++
++	exit_rpi:
++	lirc_rpi_exit();
++
++	return result;
++}
++
++static void __exit lirc_rpi_exit_module(void)
++{
++	lirc_unregister_driver(driver.minor);
++
++	gpio_free(gpio_out_pin);
++	gpio_free(gpio_in_pin);
++
++	lirc_rpi_exit();
++
++	printk(KERN_INFO LIRC_DRIVER_NAME ": cleaned up module\n");
++}
++
++module_init(lirc_rpi_init_module);
++module_exit(lirc_rpi_exit_module);
++
++MODULE_DESCRIPTION("Infra-red receiver and blaster driver for Raspberry Pi GPIO.");
++MODULE_AUTHOR("Aron Robert Szabo <aron at reon.hu>");
++MODULE_AUTHOR("Michael Bishop <cleverca22 at gmail.com>");
++MODULE_LICENSE("GPL");
++
++module_param(gpio_out_pin, int, S_IRUGO);
++MODULE_PARM_DESC(gpio_out_pin, "GPIO output/transmitter pin number of the BCM"
++		 " processor. (default 17)");
++
++module_param(gpio_in_pin, int, S_IRUGO);
++MODULE_PARM_DESC(gpio_in_pin, "GPIO input pin number of the BCM processor."
++		 " (default 18)");
++
++module_param(gpio_in_pull, int, S_IRUGO);
++MODULE_PARM_DESC(gpio_in_pull, "GPIO input pin pull configuration."
++		 " (0 = off, 1 = up, 2 = down, default down)");
++
++module_param(sense, int, S_IRUGO);
++MODULE_PARM_DESC(sense, "Override autodetection of IR receiver circuit"
++		 " (0 = active high, 1 = active low)");
++
++module_param(softcarrier, bool, S_IRUGO);
++MODULE_PARM_DESC(softcarrier, "Software carrier (0 = off, 1 = on, default on)");
++
++module_param(invert, bool, S_IRUGO);
++MODULE_PARM_DESC(invert, "Invert output (0 = off, 1 = on, default off)");
++
++module_param(debug, bool, S_IRUGO | S_IWUSR);
++MODULE_PARM_DESC(debug, "Enable debugging messages");
+--- /dev/null
++++ b/include/linux/platform_data/bcm2708.h
+@@ -0,0 +1,23 @@
++/*
++ * include/linux/platform_data/bcm2708.h
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License version 2 as
++ * published by the Free Software Foundation.
++ *
++ * (C) 2014 Julian Scheel <julian at jusst.de>
++ *
++ */
++#ifndef __BCM2708_H_
++#define __BCM2708_H_
++
++typedef enum {
++	BCM2708_PULL_OFF,
++	BCM2708_PULL_UP,
++	BCM2708_PULL_DOWN
++} bcm2708_gpio_pull_t;
++
++extern int bcm2708_gpio_setpull(struct gpio_chip *gc, unsigned offset,
++		bcm2708_gpio_pull_t value);
++
++#endif
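
A quick usage note on this new platform_data header (sketch only, not taken from
the patch): non-DT users of the bcm2708_gpio driver can set a pin's internal pull
directly, which is the facility the gpio_in_pull module parameter relies on; with
pinctrl-bcm2835 the pull has to come from the device tree overlay instead, as the
commit message notes.

#include <linux/gpio.h>
#include <linux/platform_data/bcm2708.h>

/* Sketch only: enable the internal pull-up on an IR receiver pin.
 * "chip" and "pin" are assumed to come from gpiochip_find(), as in lirc_rpi. */
static int set_ir_pull_sketch(struct gpio_chip *chip, unsigned int pin)
{
	return bcm2708_gpio_setpull(chip, pin, BCM2708_PULL_UP);
}
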
diff --git a/target/linux/brcm2708/patches-4.4/0044-Add-cpufreq-driver.patch b/target/linux/brcm2708/patches-4.4/0044-Add-cpufreq-driver.patch
new file mode 100644
index 0000000..00be38f
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0044-Add-cpufreq-driver.patch
@@ -0,0 +1,257 @@
+From 21e72e1f417fe2c097fe8f8ba4963abdde88a6ca Mon Sep 17 00:00:00 2001
+From: popcornmix <popcornmix at gmail.com>
+Date: Wed, 3 Jul 2013 00:49:20 +0100
+Subject: [PATCH 044/127] Add cpufreq driver
+
+Signed-off-by: popcornmix <popcornmix at gmail.com>
+---
+ drivers/cpufreq/Kconfig.arm       |   9 ++
+ drivers/cpufreq/Makefile          |   1 +
+ drivers/cpufreq/bcm2835-cpufreq.c | 213 ++++++++++++++++++++++++++++++++++++++
+ 3 files changed, 223 insertions(+)
+ create mode 100644 drivers/cpufreq/bcm2835-cpufreq.c
+
+--- a/drivers/cpufreq/Kconfig.arm
++++ b/drivers/cpufreq/Kconfig.arm
+@@ -217,6 +217,15 @@ config ARM_SPEAR_CPUFREQ
+ 	help
+ 	  This adds the CPUFreq driver support for SPEAr SOCs.
+ 
++config ARM_BCM2835_CPUFREQ
++	depends on RASPBERRYPI_FIRMWARE
++	bool "BCM2835 Driver"
++	default y
++	help
++	  This adds the CPUFreq driver for BCM2835
++
++	  If in doubt, say N.
++
+ config ARM_TEGRA20_CPUFREQ
+ 	bool "Tegra20 CPUFreq support"
+ 	depends on ARCH_TEGRA
+--- a/drivers/cpufreq/Makefile
++++ b/drivers/cpufreq/Makefile
+@@ -73,6 +73,7 @@ obj-$(CONFIG_ARM_SA1100_CPUFREQ)	+= sa11
+ obj-$(CONFIG_ARM_SA1110_CPUFREQ)	+= sa1110-cpufreq.o
+ obj-$(CONFIG_ARM_SCPI_CPUFREQ)		+= scpi-cpufreq.o
+ obj-$(CONFIG_ARM_SPEAR_CPUFREQ)		+= spear-cpufreq.o
++obj-$(CONFIG_ARM_BCM2835_CPUFREQ)	+= bcm2835-cpufreq.o
+ obj-$(CONFIG_ARM_TEGRA20_CPUFREQ)	+= tegra20-cpufreq.o
+ obj-$(CONFIG_ARM_TEGRA124_CPUFREQ)	+= tegra124-cpufreq.o
+ obj-$(CONFIG_ARM_VEXPRESS_SPC_CPUFREQ)	+= vexpress-spc-cpufreq.o
+--- /dev/null
++++ b/drivers/cpufreq/bcm2835-cpufreq.c
+@@ -0,0 +1,213 @@
++/*****************************************************************************
++* Copyright 2011 Broadcom Corporation.  All rights reserved.
++*
++* Unless you and Broadcom execute a separate written software license
++* agreement governing use of this software, this software is licensed to you
++* under the terms of the GNU General Public License version 2, available at
++* http://www.broadcom.com/licenses/GPLv2.php (the "GPL").
++*
++* Notwithstanding the above, under no circumstances may you combine this
++* software in any way with any other Broadcom software provided under a
++* license other than the GPL, without Broadcom's express prior written
++* consent.
++*****************************************************************************/
++
++/*****************************************************************************
++* FILENAME: bcm2835-cpufreq.c
++* DESCRIPTION: This driver dynamically manages the CPU Frequency of the ARM
++* processor. Messages are sent to Videocore either setting or requesting the
++* frequency of the ARM in order to match an appropiate frequency to the current
++* usage of the processor. The policy which selects the frequency to use is
++* defined in the kernel .config file, but can be changed during runtime.
++*****************************************************************************/
++
++/* ---------- INCLUDES ---------- */
++#include <linux/kernel.h>
++#include <linux/init.h>
++#include <linux/module.h>
++#include <linux/cpufreq.h>
++#include <soc/bcm2835/raspberrypi-firmware.h>
++
++/* ---------- DEFINES ---------- */
++/*#define CPUFREQ_DEBUG_ENABLE*/		/* enable debugging */
++#define MODULE_NAME "bcm2835-cpufreq"
++
++#define VCMSG_ID_ARM_CLOCK 0x000000003		/* Clock/Voltage ID's */
++
++/* debug printk macros */
++#ifdef CPUFREQ_DEBUG_ENABLE
++#define print_debug(fmt,...) pr_debug("%s:%s:%d: "fmt, MODULE_NAME, __func__, __LINE__, ##__VA_ARGS__)
++#else
++#define print_debug(fmt,...)
++#endif
++#define print_err(fmt,...) pr_err("%s:%s:%d: "fmt, MODULE_NAME, __func__,__LINE__, ##__VA_ARGS__)
++#define print_info(fmt,...) pr_info("%s: "fmt, MODULE_NAME, ##__VA_ARGS__)
++
++/* ---------- GLOBALS ---------- */
++static struct cpufreq_driver bcm2835_cpufreq_driver;	/* the cpufreq driver global */
++
++static struct cpufreq_frequency_table bcm2835_freq_table[] = {
++	{0, 0, 0},
++	{0, 0, 0},
++	{0, 0, CPUFREQ_TABLE_END},
++};
++
++/*
++ ===============================================
++  clk_rate either gets or sets the clock rates.
++ ===============================================
++*/
++
++static int bcm2835_cpufreq_clock_property(u32 tag, u32 id, u32 *val)
++{
++	struct rpi_firmware *fw = rpi_firmware_get(NULL);
++	struct {
++		u32 id;
++		u32 val;
++	} packet;
++	int ret;
++
++	packet.id = id;
++	packet.val = *val;
++	ret = rpi_firmware_property(fw, tag, &packet, sizeof(packet));
++	if (ret)
++		return ret;
++
++	*val = packet.val;
++
++	return 0;
++}
++
++static uint32_t bcm2835_cpufreq_set_clock(int cur_rate, int arm_rate)
++{
++	u32 rate = arm_rate * 1000;
++	int ret;
++
++	ret = bcm2835_cpufreq_clock_property(RPI_FIRMWARE_SET_CLOCK_RATE, VCMSG_ID_ARM_CLOCK, &rate);
++	if (ret) {
++		print_err("Failed to set clock: %d (%d)\n", arm_rate, ret);
++		return 0;
++	}
++
++	rate /= 1000;
++	print_debug("Setting new frequency = %d -> %d (actual %d)\n", cur_rate, arm_rate, rate);
++
++	return rate;
++}
++
++static uint32_t bcm2835_cpufreq_get_clock(int tag)
++{
++	u32 rate;
++	int ret;
++
++	ret = bcm2835_cpufreq_clock_property(tag, VCMSG_ID_ARM_CLOCK, &rate);
++	if (ret) {
++		print_err("Failed to get clock (%d)\n", ret);
++		return 0;
++	}
++
++	rate /= 1000;
++	print_debug("%s frequency = %u\n",
++		tag == RPI_FIRMWARE_GET_CLOCK_RATE ? "Current":
++		tag == RPI_FIRMWARE_GET_MIN_CLOCK_RATE ? "Min":
++		tag == RPI_FIRMWARE_GET_MAX_CLOCK_RATE ? "Max":
++		"Unexpected", rate);
++
++	return rate;
++}
++
++/*
++ ====================================================
++  Module Initialisation registers the cpufreq driver
++ ====================================================
++*/
++static int __init bcm2835_cpufreq_module_init(void)
++{
++	print_debug("IN\n");
++	return cpufreq_register_driver(&bcm2835_cpufreq_driver);
++}
++
++/*
++ =============
++  Module exit
++ =============
++*/
++static void __exit bcm2835_cpufreq_module_exit(void)
++{
++	print_debug("IN\n");
++	cpufreq_unregister_driver(&bcm2835_cpufreq_driver);
++	return;
++}
++
++/*
++ ==============================================================
++  Initialisation function sets up the CPU policy for first use
++ ==============================================================
++*/
++static int bcm2835_cpufreq_driver_init(struct cpufreq_policy *policy)
++{
++	/* measured value of how long it takes to change frequency */
++	const unsigned int transition_latency = 355000; /* ns */
++
++	if (!rpi_firmware_get(NULL)) {
++		print_err("Firmware is not available\n");
++		return -ENODEV;
++	}
++
++	/* now find out what the maximum and minimum frequencies are */
++	bcm2835_freq_table[0].frequency = bcm2835_cpufreq_get_clock(RPI_FIRMWARE_GET_MIN_CLOCK_RATE);
++	bcm2835_freq_table[1].frequency = bcm2835_cpufreq_get_clock(RPI_FIRMWARE_GET_MAX_CLOCK_RATE);
++
++	print_info("min=%d max=%d\n", bcm2835_freq_table[0].frequency, bcm2835_freq_table[1].frequency);
++	return cpufreq_generic_init(policy, bcm2835_freq_table, transition_latency);
++}
++
++/*
++ =====================================================================
++  Target index function chooses the requested frequency from the table
++ =====================================================================
++*/
++
++static int bcm2835_cpufreq_driver_target_index(struct cpufreq_policy *policy, unsigned int state)
++{
++	unsigned int target_freq = bcm2835_freq_table[state].frequency;
++	unsigned int cur = bcm2835_cpufreq_set_clock(policy->cur, target_freq);
++
++	if (!cur)
++	{
++		print_err("Error occurred setting a new frequency (%d)\n", target_freq);
++		return -EINVAL;
++	}
++	print_debug("%s: %i: freq %d->%d\n", policy->governor->name, state, policy->cur, cur);
++	return 0;
++}
++
++/*
++ ======================================================
++  Get function returns the current frequency from table
++ ======================================================
++*/
++
++static unsigned int bcm2835_cpufreq_driver_get(unsigned int cpu)
++{
++	unsigned int actual_rate = bcm2835_cpufreq_get_clock(RPI_FIRMWARE_GET_CLOCK_RATE);
++	print_debug("cpu%d: freq=%d\n", cpu, actual_rate);
++	return actual_rate <= bcm2835_freq_table[0].frequency ? bcm2835_freq_table[0].frequency : bcm2835_freq_table[1].frequency;
++}
++
++/* the CPUFreq driver */
++static struct cpufreq_driver bcm2835_cpufreq_driver = {
++	.name         = "BCM2835 CPUFreq",
++	.init         = bcm2835_cpufreq_driver_init,
++	.verify       = cpufreq_generic_frequency_table_verify,
++	.target_index = bcm2835_cpufreq_driver_target_index,
++	.get          = bcm2835_cpufreq_driver_get,
++	.attr         = cpufreq_generic_attr,
++};
++
++MODULE_AUTHOR("Dorian Peake and Dom Cobley");
++MODULE_DESCRIPTION("CPU frequency driver for BCM2835 chip");
++MODULE_LICENSE("GPL");
++
++module_init(bcm2835_cpufreq_module_init);
++module_exit(bcm2835_cpufreq_module_exit);
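
For readers new to the firmware mailbox interface used above: every clock query or
update is a two-word { id, val } property packet, and the firmware reports Hz while
cpufreq frequency tables hold kHz. A minimal sketch, reusing the driver's own
VCMSG_ID_ARM_CLOCK define and the firmware calls it already makes:

#include <linux/types.h>
#include <soc/bcm2835/raspberrypi-firmware.h>

/* Sketch only: one-off query of the maximum ARM clock, returned in kHz. */
static u32 max_arm_khz_sketch(void)
{
	struct rpi_firmware *fw = rpi_firmware_get(NULL);
	struct {
		u32 id;
		u32 val;
	} packet = { VCMSG_ID_ARM_CLOCK, 0 };

	if (!fw || rpi_firmware_property(fw, RPI_FIRMWARE_GET_MAX_CLOCK_RATE,
					 &packet, sizeof(packet)))
		return 0;

	return packet.val / 1000;	/* firmware reports Hz, cpufreq uses kHz */
}
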
diff --git a/target/linux/brcm2708/patches-4.4/0045-Added-hwmon-thermal-driver-for-reporting-core-temper.patch b/target/linux/brcm2708/patches-4.4/0045-Added-hwmon-thermal-driver-for-reporting-core-temper.patch
new file mode 100644
index 0000000..bcb7550
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0045-Added-hwmon-thermal-driver-for-reporting-core-temper.patch
@@ -0,0 +1,193 @@
+From ffe7f669c4106fafe86597774697048258cedbc9 Mon Sep 17 00:00:00 2001
+From: popcornmix <popcornmix at gmail.com>
+Date: Tue, 26 Mar 2013 19:24:24 +0000
+Subject: [PATCH 045/127] Added hwmon/thermal driver for reporting core
+ temperature. Thanks Dorian
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+BCM270x: Move thermal sensor to Device Tree
+
+Add Device Tree support to bcm2835-thermal driver.
+Add thermal sensor device to Device Tree.
+Don't add platform device when booting in DT mode.
+
+Signed-off-by: Noralf Trønnes <noralf at tronnes.org>
+---
+ drivers/thermal/Kconfig           |   7 ++
+ drivers/thermal/Makefile          |   1 +
+ drivers/thermal/bcm2835-thermal.c | 141 ++++++++++++++++++++++++++++++++++++++
+ 3 files changed, 149 insertions(+)
+ create mode 100644 drivers/thermal/bcm2835-thermal.c
+
+--- a/drivers/thermal/Kconfig
++++ b/drivers/thermal/Kconfig
+@@ -285,6 +285,13 @@ config INTEL_POWERCLAMP
+ 	  enforce idle time which results in more package C-state residency. The
+ 	  user interface is exposed via generic thermal framework.
+ 
++config THERMAL_BCM2835
++	depends on RASPBERRYPI_FIRMWARE
++	tristate "BCM2835 Thermal Driver"
++	help
++	  This will enable temperature monitoring for the Broadcom BCM2835
++	  chip. If built as a module, it will be called 'bcm2835-thermal'.
++
+ config X86_PKG_TEMP_THERMAL
+ 	tristate "X86 package temperature thermal driver"
+ 	depends on X86_THERMAL_VECTOR
+--- a/drivers/thermal/Makefile
++++ b/drivers/thermal/Makefile
+@@ -38,6 +38,7 @@ obj-$(CONFIG_ARMADA_THERMAL)	+= armada_t
+ obj-$(CONFIG_IMX_THERMAL)	+= imx_thermal.o
+ obj-$(CONFIG_DB8500_CPUFREQ_COOLING)	+= db8500_cpufreq_cooling.o
+ obj-$(CONFIG_INTEL_POWERCLAMP)	+= intel_powerclamp.o
++obj-$(CONFIG_THERMAL_BCM2835)	+= bcm2835-thermal.o
+ obj-$(CONFIG_X86_PKG_TEMP_THERMAL)	+= x86_pkg_temp_thermal.o
+ obj-$(CONFIG_INTEL_SOC_DTS_IOSF_CORE)	+= intel_soc_dts_iosf.o
+ obj-$(CONFIG_INTEL_SOC_DTS_THERMAL)	+= intel_soc_dts_thermal.o
+--- /dev/null
++++ b/drivers/thermal/bcm2835-thermal.c
+@@ -0,0 +1,141 @@
++/*****************************************************************************
++* Copyright 2011 Broadcom Corporation.  All rights reserved.
++*
++* Unless you and Broadcom execute a separate written software license
++* agreement governing use of this software, this software is licensed to you
++* under the terms of the GNU General Public License version 2, available at
++* http://www.broadcom.com/licenses/GPLv2.php (the "GPL").
++*
++* Notwithstanding the above, under no circumstances may you combine this
++* software in any way with any other Broadcom software provided under a
++* license other than the GPL, without Broadcom's express prior written
++* consent.
++*****************************************************************************/
++
++#include <linux/module.h>
++#include <linux/platform_device.h>
++#include <linux/thermal.h>
++#include <soc/bcm2835/raspberrypi-firmware.h>
++
++static int bcm2835_thermal_get_property(struct thermal_zone_device *tz,
++					int *temp, u32 tag)
++{
++	struct rpi_firmware *fw = tz->devdata;
++	struct {
++		u32 id;
++		u32 val;
++	} packet;
++	int ret;
++
++	*temp = 0;
++	packet.id = 0;
++	ret = rpi_firmware_property(fw, tag, &packet, sizeof(packet));
++	if (ret) {
++		dev_err(&tz->device, "Failed to get temperature\n");
++		return ret;
++	}
++
++	*temp = packet.val;
++	dev_dbg(&tz->device, "%stemp=%d\n",
++		tag == RPI_FIRMWARE_GET_MAX_TEMPERATURE ? "max" : "", *temp);
++
++	return 0;
++}
++
++static int bcm2835_thermal_get_temp(struct thermal_zone_device *tz,
++				    int *temp)
++{
++	return bcm2835_thermal_get_property(tz, temp,
++					    RPI_FIRMWARE_GET_TEMPERATURE);
++}
++
++static int bcm2835_thermal_get_max_temp(struct thermal_zone_device *tz,
++					int trip, int *temp)
++{
++	/*
++	 * The maximum safe temperature of the SoC.
++	 * Overclock may be disabled above this temperature.
++	 */
++	return bcm2835_thermal_get_property(tz, temp,
++					    RPI_FIRMWARE_GET_MAX_TEMPERATURE);
++}
++
++static int bcm2835_thermal_get_trip_type(struct thermal_zone_device *tz,
++					 int trip, enum thermal_trip_type *type)
++{
++	*type = THERMAL_TRIP_HOT;
++
++	return 0;
++}
++
++static int bcm2835_thermal_get_mode(struct thermal_zone_device *tz,
++				    enum thermal_device_mode *mode)
++{
++	*mode = THERMAL_DEVICE_ENABLED;
++
++	return 0;
++}
++
++static struct thermal_zone_device_ops ops  = {
++	.get_temp = bcm2835_thermal_get_temp,
++	.get_trip_temp = bcm2835_thermal_get_max_temp,
++	.get_trip_type = bcm2835_thermal_get_trip_type,
++	.get_mode = bcm2835_thermal_get_mode,
++};
++
++static int bcm2835_thermal_probe(struct platform_device *pdev)
++{
++	struct device_node *fw_np;
++	struct rpi_firmware *fw;
++	struct thermal_zone_device *tz;
++
++	fw_np = of_parse_phandle(pdev->dev.of_node, "firmware", 0);
++/* Remove comment when booting without Device Tree is no longer supported
++	if (!fw_np) {
++		dev_err(&pdev->dev, "Missing firmware node\n");
++		return -ENOENT;
++	}
++*/
++	fw = rpi_firmware_get(fw_np);
++	if (!fw)
++		return -EPROBE_DEFER;
++
++	tz = thermal_zone_device_register("bcm2835_thermal", 1, 0, fw, &ops,
++					  NULL, 0, 0);
++	if (IS_ERR(tz)) {
++		dev_err(&pdev->dev, "Failed to register the thermal device\n");
++		return PTR_ERR(tz);
++	}
++
++	platform_set_drvdata(pdev, tz);
++
++	return 0;
++}
++
++static int bcm2835_thermal_remove(struct platform_device *pdev)
++{
++	thermal_zone_device_unregister(platform_get_drvdata(pdev));
++
++	return 0;
++}
++
++static const struct of_device_id bcm2835_thermal_of_match_table[] = {
++	{ .compatible = "brcm,bcm2835-thermal", },
++	{},
++};
++MODULE_DEVICE_TABLE(of, bcm2835_thermal_of_match_table);
++
++static struct platform_driver bcm2835_thermal_driver = {
++	.probe = bcm2835_thermal_probe,
++	.remove = bcm2835_thermal_remove,
++	.driver = {
++		.name = "bcm2835_thermal",
++		.of_match_table = bcm2835_thermal_of_match_table,
++	},
++};
++module_platform_driver(bcm2835_thermal_driver);
++
++MODULE_AUTHOR("Dorian Peake");
++MODULE_AUTHOR("Noralf Trønnes");
++MODULE_DESCRIPTION("Thermal driver for bcm2835 chip");
++MODULE_LICENSE("GPL");
diff --git a/target/linux/brcm2708/patches-4.4/0046-Add-Chris-Boot-s-i2c-driver.patch b/target/linux/brcm2708/patches-4.4/0046-Add-Chris-Boot-s-i2c-driver.patch
new file mode 100644
index 0000000..c3e8b0c
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0046-Add-Chris-Boot-s-i2c-driver.patch
@@ -0,0 +1,635 @@
+From 052aa5593d1cd52bbabc0985e9630f8272f3226a Mon Sep 17 00:00:00 2001
+From: popcornmix <popcornmix at gmail.com>
+Date: Wed, 17 Jun 2015 15:44:08 +0100
+Subject: [PATCH 046/127] Add Chris Boot's i2c driver
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+i2c-bcm2708: fixed baudrate
+
+Fixed an issue where the wrong CDIV value was set for baudrates below 3815 Hz (for a 250 MHz bus clock).
+In that case the computed CDIV value exceeded 0xffff, but the CDIV register is only 16 bits wide.
+This resulted in an incorrect CDIV setting and a higher baudrate than intended.
+Example: 3500Hz -> CDIV=0x11704 -> CDIV(16bit)=0x1704 -> 42430Hz
+After correction: 3500Hz -> CDIV=0x11704 -> CDIV(16bit)=0xffff -> 3815Hz
+The correct baudrate is shown in the log after the cdiv > 0xffff correction.
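+
+A standalone sketch of the arithmetic described above (illustrative only, not
+part of this patch; assumes the 250 MHz core clock from the example):
+
+	#include <stdio.h>
+
+	int main(void)
+	{
+		unsigned int bus_hz = 250000000;	/* assumed core clock */
+		unsigned int baudrate = 3500;
+		unsigned int cdiv = bus_hz / baudrate;	/* 0x11704: wider than 16 bits */
+
+		if (cdiv > 0xffff) {
+			cdiv = 0xffff;			/* clamp instead of truncating */
+			baudrate = bus_hz / cdiv;	/* ~3815 Hz, as reported in the log */
+		}
+		printf("cdiv=0x%x effective baudrate=%u\n", cdiv, baudrate);
+		return 0;
+	}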
+
+Perform I2C combined transactions when possible
+
+Perform I2C combined transactions whenever possible, within the
+restrictions of the Broadcom Serial Controller.
+
+Disable DONE interrupt during TA poll
+
+Prevent interrupt from being triggered if poll is missed and transfer
+starts and finishes.
+
+i2c: Make combined transactions optional and disabled by default
+
+i2c: bcm2708: add device tree support
+
+Add DT support to driver and add to .dtsi file.
+Setup pins in .dts file.
+i2c is disabled by default.
+
+Signed-off-by: Noralf Tronnes <notro at tronnes.org>
+
+bcm2708: don't register i2c controllers when using DT
+
+The devices for the i2c controllers are in the Device Tree.
+Only register devices when not using DT.
+
+Signed-off-by: Noralf Tronnes <notro at tronnes.org>
+
+I2C: Only register the I2C device for the current board revision
+
+i2c_bcm2708: Fix clock reference counting
+
+Fix grabbing lock from atomic context in i2c driver
+
+2 main changes:
+- check for timeouts in the bcm2708_bsc_setup function as indicated by this comment:
+      /* poll for transfer start bit (should only take 1-20 polls) */
+  This implies that the setup function can now fail so account for this everywhere it's called
+- Removed the clk_get_rate call from inside the setup function as it locks a mutex and that's not ok since we call it from under a spin lock.
+
+i2c-bcm2708: When using DT, leave the GPIO setup to pinctrl
+
+i2c-bcm2708: Increase timeouts to allow larger transfers
+
+Use the timeout value provided by the I2C_TIMEOUT ioctl when waiting
+for completion. The default timeout is 1 second.
+
+See: https://github.com/raspberrypi/linux/issues/260
+
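+A hypothetical userspace sketch (not part of this patch) of raising that
+timeout through the standard i2c-dev I2C_TIMEOUT ioctl, which takes units of
+10 ms; the bus number and timeout value below are illustrative:
+
+	#include <fcntl.h>
+	#include <linux/i2c-dev.h>
+	#include <sys/ioctl.h>
+	#include <unistd.h>
+
+	int main(void)
+	{
+		int fd = open("/dev/i2c-1", O_RDWR);	/* assumed bus number */
+
+		if (fd < 0)
+			return 1;
+		ioctl(fd, I2C_TIMEOUT, 300);	/* 3 s instead of the 1 s default */
+		/* ... perform large transfers here ... */
+		close(fd);
+		return 0;
+	}
+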
+i2c-bcm2708/BCM270X_DT: Add support for I2C2
+
+The third I2C bus (I2C2) is normally reserved for HDMI use. Careless
+use of this bus can break an attached display - use with caution.
+
+It is recommended to disable accesses by VideoCore by setting
+hdmi_ignore_edid=1 or hdmi_edid_file=1 in config.txt.
+
+The interface is disabled by default - enable using the
+i2c2_iknowwhatimdoing DT parameter.
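+
+A minimal usage sketch (assuming the firmware-provided base Device Tree
+exposes this parameter as a config.txt dtparam):
+
+    dtparam=i2c2_iknowwhatimdoing=on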
+
+bcm2708-spi: Don't use static pin configuration with DT
+
+Also remove superfluous error checking - the SPI framework ensures the
+validity of the chip_select value.
+
+i2c-bcm2708: Remove non-DT support
+
+Signed-off-by: Noralf Trønnes <noralf at tronnes.org>
+---
+ drivers/i2c/busses/Kconfig       |  21 +-
+ drivers/i2c/busses/Makefile      |   2 +
+ drivers/i2c/busses/i2c-bcm2708.c | 493 +++++++++++++++++++++++++++++++++++++++
+ 3 files changed, 515 insertions(+), 1 deletion(-)
+ create mode 100644 drivers/i2c/busses/i2c-bcm2708.c
+
+--- a/drivers/i2c/busses/Kconfig
++++ b/drivers/i2c/busses/Kconfig
+@@ -8,6 +8,25 @@ menu "I2C Hardware Bus support"
+ comment "PC SMBus host controller drivers"
+ 	depends on PCI
+ 
++config I2C_BCM2708
++	tristate "BCM2708 BSC"
++	depends on MACH_BCM2708 || MACH_BCM2709 || ARCH_BCM2835
++	help
++	  Enabling this option will add BSC (Broadcom Serial Controller)
++	  support for the BCM2708. BSC is a Broadcom proprietary bus compatible
++	  with I2C/TWI/SMBus.
++
++config I2C_BCM2708_BAUDRATE
++	prompt "BCM2708 I2C baudrate"
++	depends on I2C_BCM2708
++	int
++	default 100000
++	help
++	  Set the I2C baudrate. This will alter the default value. A
++	  different baudrate can be set by using a module parameter as well. If
++	  no parameter is provided when loading, this is the value that will be
++	  used.
++
+ config I2C_ALI1535
+ 	tristate "ALI 1535"
+ 	depends on PCI
+@@ -365,7 +384,7 @@ config I2C_AXXIA
+ 
+ config I2C_BCM2835
+ 	tristate "Broadcom BCM2835 I2C controller"
+-	depends on ARCH_BCM2835
++	depends on ARCH_BCM2835 || ARCH_BCM2708 || ARCH_BCM2709
+ 	help
+ 	  If you say yes to this option, support will be included for the
+ 	  BCM2835 I2C controller.
+--- a/drivers/i2c/busses/Makefile
++++ b/drivers/i2c/busses/Makefile
+@@ -2,6 +2,8 @@
+ # Makefile for the i2c bus drivers.
+ #
+ 
++obj-$(CONFIG_I2C_BCM2708)	+= i2c-bcm2708.o
++
+ # ACPI drivers
+ obj-$(CONFIG_I2C_SCMI)		+= i2c-scmi.o
+ 
+--- /dev/null
++++ b/drivers/i2c/busses/i2c-bcm2708.c
+@@ -0,0 +1,493 @@
++/*
++ * Driver for Broadcom BCM2708 BSC Controllers
++ *
++ * Copyright (C) 2012 Chris Boot & Frank Buss
++ *
++ * This driver is inspired by:
++ * i2c-ocores.c, by Peter Korsgaard <jacmet at sunsite.dk>
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License as published by
++ * the Free Software Foundation; either version 2 of the License, or
++ * (at your option) any later version.
++ *
++ * This program is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
++ * GNU General Public License for more details.
++ *
++ * You should have received a copy of the GNU General Public License
++ * along with this program; if not, write to the Free Software
++ * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
++ */
++
++#include <linux/kernel.h>
++#include <linux/module.h>
++#include <linux/spinlock.h>
++#include <linux/clk.h>
++#include <linux/err.h>
++#include <linux/of.h>
++#include <linux/platform_device.h>
++#include <linux/io.h>
++#include <linux/slab.h>
++#include <linux/i2c.h>
++#include <linux/interrupt.h>
++#include <linux/sched.h>
++#include <linux/wait.h>
++
++/* BSC register offsets */
++#define BSC_C			0x00
++#define BSC_S			0x04
++#define BSC_DLEN		0x08
++#define BSC_A			0x0c
++#define BSC_FIFO		0x10
++#define BSC_DIV			0x14
++#define BSC_DEL			0x18
++#define BSC_CLKT		0x1c
++
++/* Bitfields in BSC_C */
++#define BSC_C_I2CEN		0x00008000
++#define BSC_C_INTR		0x00000400
++#define BSC_C_INTT		0x00000200
++#define BSC_C_INTD		0x00000100
++#define BSC_C_ST		0x00000080
++#define BSC_C_CLEAR_1		0x00000020
++#define BSC_C_CLEAR_2		0x00000010
++#define BSC_C_READ		0x00000001
++
++/* Bitfields in BSC_S */
++#define BSC_S_CLKT		0x00000200
++#define BSC_S_ERR		0x00000100
++#define BSC_S_RXF		0x00000080
++#define BSC_S_TXE		0x00000040
++#define BSC_S_RXD		0x00000020
++#define BSC_S_TXD		0x00000010
++#define BSC_S_RXR		0x00000008
++#define BSC_S_TXW		0x00000004
++#define BSC_S_DONE		0x00000002
++#define BSC_S_TA		0x00000001
++
++#define I2C_WAIT_LOOP_COUNT	200
++
++#define DRV_NAME		"bcm2708_i2c"
++
++static unsigned int baudrate = CONFIG_I2C_BCM2708_BAUDRATE;
++module_param(baudrate, uint, S_IRUSR | S_IWUSR | S_IRGRP | S_IWGRP);
++MODULE_PARM_DESC(baudrate, "The I2C baudrate");
++
++static bool combined = false;
++module_param(combined, bool, 0644);
++MODULE_PARM_DESC(combined, "Use combined transactions");
++
++struct bcm2708_i2c {
++	struct i2c_adapter adapter;
++
++	spinlock_t lock;
++	void __iomem *base;
++	int irq;
++	struct clk *clk;
++	u32 cdiv;
++
++	struct completion done;
++
++	struct i2c_msg *msg;
++	int pos;
++	int nmsgs;
++	bool error;
++};
++
++static inline u32 bcm2708_rd(struct bcm2708_i2c *bi, unsigned reg)
++{
++	return readl(bi->base + reg);
++}
++
++static inline void bcm2708_wr(struct bcm2708_i2c *bi, unsigned reg, u32 val)
++{
++	writel(val, bi->base + reg);
++}
++
++static inline void bcm2708_bsc_reset(struct bcm2708_i2c *bi)
++{
++	bcm2708_wr(bi, BSC_C, 0);
++	bcm2708_wr(bi, BSC_S, BSC_S_CLKT | BSC_S_ERR | BSC_S_DONE);
++}
++
++static inline void bcm2708_bsc_fifo_drain(struct bcm2708_i2c *bi)
++{
++	while ((bcm2708_rd(bi, BSC_S) & BSC_S_RXD) && (bi->pos < bi->msg->len))
++		bi->msg->buf[bi->pos++] = bcm2708_rd(bi, BSC_FIFO);
++}
++
++static inline void bcm2708_bsc_fifo_fill(struct bcm2708_i2c *bi)
++{
++	while ((bcm2708_rd(bi, BSC_S) & BSC_S_TXD) && (bi->pos < bi->msg->len))
++		bcm2708_wr(bi, BSC_FIFO, bi->msg->buf[bi->pos++]);
++}
++
++static inline int bcm2708_bsc_setup(struct bcm2708_i2c *bi)
++{
++	u32 cdiv, s;
++	u32 c = BSC_C_I2CEN | BSC_C_INTD | BSC_C_ST | BSC_C_CLEAR_1;
++	int wait_loops = I2C_WAIT_LOOP_COUNT;
++
++	/* Can't call clk_get_rate as it locks a mutex and here we are spinlocked.
++	 * Use the value that we cached in the probe.
++	 */
++	cdiv = bi->cdiv;
++
++	if (bi->msg->flags & I2C_M_RD)
++		c |= BSC_C_INTR | BSC_C_READ;
++	else
++		c |= BSC_C_INTT;
++
++	bcm2708_wr(bi, BSC_DIV, cdiv);
++	bcm2708_wr(bi, BSC_A, bi->msg->addr);
++	bcm2708_wr(bi, BSC_DLEN, bi->msg->len);
++	if (combined)
++	{
++		/* Do the next two messages meet combined transaction criteria?
++		   - Current message is a write, next message is a read
++		   - Both messages to same slave address
++		   - Write message can fit inside FIFO (16 bytes or less) */
++		if ( (bi->nmsgs > 1) &&
++			!(bi->msg[0].flags & I2C_M_RD) && (bi->msg[1].flags & I2C_M_RD) &&
++			 (bi->msg[0].addr == bi->msg[1].addr) && (bi->msg[0].len <= 16)) {
++			/* Fill FIFO with entire write message (16 byte FIFO) */
++			while (bi->pos < bi->msg->len) {
++				bcm2708_wr(bi, BSC_FIFO, bi->msg->buf[bi->pos++]);
++			}
++			/* Start write transfer (no interrupts, don't clear FIFO) */
++			bcm2708_wr(bi, BSC_C, BSC_C_I2CEN | BSC_C_ST);
++
++			/* poll for transfer start bit (should only take 1-20 polls) */
++			do {
++				s = bcm2708_rd(bi, BSC_S);
++			} while (!(s & (BSC_S_TA | BSC_S_ERR | BSC_S_CLKT | BSC_S_DONE)) && --wait_loops >= 0);
++
++			/* did we time out or did some error occur? */
++			if (wait_loops < 0 || (s & (BSC_S_ERR | BSC_S_CLKT))) {
++				return -1;
++			}
++
++			/* Send next read message before the write transfer finishes. */
++			bi->nmsgs--;
++			bi->msg++;
++			bi->pos = 0;
++			bcm2708_wr(bi, BSC_DLEN, bi->msg->len);
++			c = BSC_C_I2CEN | BSC_C_INTD | BSC_C_INTR | BSC_C_ST | BSC_C_READ;
++		}
++	}
++	bcm2708_wr(bi, BSC_C, c);
++
++	return 0;
++}
++
++static irqreturn_t bcm2708_i2c_interrupt(int irq, void *dev_id)
++{
++	struct bcm2708_i2c *bi = dev_id;
++	bool handled = true;
++	u32 s;
++	int ret;
++
++	spin_lock(&bi->lock);
++
++	/* We may see camera interrupts on the "other" I2C channel.
++	 * Just return if we've not sent anything. */
++	if (!bi->nmsgs || !bi->msg) {
++		goto early_exit;
++	}
++
++	s = bcm2708_rd(bi, BSC_S);
++
++	if (s & (BSC_S_CLKT | BSC_S_ERR)) {
++		bcm2708_bsc_reset(bi);
++		bi->error = true;
++
++		bi->msg = 0; /* to indicate that all work is done */
++		bi->nmsgs = 0;
++		/* wake up our bh */
++		complete(&bi->done);
++	} else if (s & BSC_S_DONE) {
++		bi->nmsgs--;
++
++		if (bi->msg->flags & I2C_M_RD) {
++			bcm2708_bsc_fifo_drain(bi);
++		}
++
++		bcm2708_bsc_reset(bi);
++
++		if (bi->nmsgs) {
++			/* advance to next message */
++			bi->msg++;
++			bi->pos = 0;
++			ret = bcm2708_bsc_setup(bi);
++			if (ret < 0) {
++				bcm2708_bsc_reset(bi);
++				bi->error = true;
++				bi->msg = 0; /* to indicate that all work is done */
++				bi->nmsgs = 0;
++				/* wake up our bh */
++				complete(&bi->done);
++				goto early_exit;
++			}
++		} else {
++			bi->msg = 0; /* to indicate that all work is done */
++			bi->nmsgs = 0;
++			/* wake up our bh */
++			complete(&bi->done);
++		}
++	} else if (s & BSC_S_TXW) {
++		bcm2708_bsc_fifo_fill(bi);
++	} else if (s & BSC_S_RXR) {
++		bcm2708_bsc_fifo_drain(bi);
++	} else {
++		handled = false;
++	}
++
++early_exit:
++	spin_unlock(&bi->lock);
++
++	return handled ? IRQ_HANDLED : IRQ_NONE;
++}
++
++static int bcm2708_i2c_master_xfer(struct i2c_adapter *adap,
++	struct i2c_msg *msgs, int num)
++{
++	struct bcm2708_i2c *bi = adap->algo_data;
++	unsigned long flags;
++	int ret;
++
++	spin_lock_irqsave(&bi->lock, flags);
++
++	reinit_completion(&bi->done);
++	bi->msg = msgs;
++	bi->pos = 0;
++	bi->nmsgs = num;
++	bi->error = false;
++
++	ret = bcm2708_bsc_setup(bi);
++
++	spin_unlock_irqrestore(&bi->lock, flags);
++
++	/* check the result of the setup */
++	if (ret < 0)
++	{
++		dev_err(&adap->dev, "transfer setup timed out\n");
++		goto error_timeout;
++	}
++
++	ret = wait_for_completion_timeout(&bi->done, adap->timeout);
++	if (ret == 0) {
++		dev_err(&adap->dev, "transfer timed out\n");
++		goto error_timeout;
++	}
++
++	ret = bi->error ? -EIO : num;
++	return ret;
++
++error_timeout:
++	spin_lock_irqsave(&bi->lock, flags);
++	bcm2708_bsc_reset(bi);
++	bi->msg = 0; /* to inform the interrupt handler that there's nothing else to be done */
++	bi->nmsgs = 0;
++	spin_unlock_irqrestore(&bi->lock, flags);
++	return -ETIMEDOUT;
++}
++
++static u32 bcm2708_i2c_functionality(struct i2c_adapter *adap)
++{
++	return I2C_FUNC_I2C | /*I2C_FUNC_10BIT_ADDR |*/ I2C_FUNC_SMBUS_EMUL;
++}
++
++static struct i2c_algorithm bcm2708_i2c_algorithm = {
++	.master_xfer = bcm2708_i2c_master_xfer,
++	.functionality = bcm2708_i2c_functionality,
++};
++
++static int bcm2708_i2c_probe(struct platform_device *pdev)
++{
++	struct resource *regs;
++	int irq, err = -ENOMEM;
++	struct clk *clk;
++	struct bcm2708_i2c *bi;
++	struct i2c_adapter *adap;
++	unsigned long bus_hz;
++	u32 cdiv;
++
++	if (pdev->dev.of_node) {
++		u32 bus_clk_rate;
++		pdev->id = of_alias_get_id(pdev->dev.of_node, "i2c");
++		if (pdev->id < 0) {
++			dev_err(&pdev->dev, "alias is missing\n");
++			return -EINVAL;
++		}
++		if (!of_property_read_u32(pdev->dev.of_node,
++					"clock-frequency", &bus_clk_rate))
++			baudrate = bus_clk_rate;
++		else
++			dev_warn(&pdev->dev,
++				"Could not read clock-frequency property\n");
++	}
++
++	regs = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++	if (!regs) {
++		dev_err(&pdev->dev, "could not get IO memory\n");
++		return -ENXIO;
++	}
++
++	irq = platform_get_irq(pdev, 0);
++	if (irq < 0) {
++		dev_err(&pdev->dev, "could not get IRQ\n");
++		return irq;
++	}
++
++	clk = clk_get(&pdev->dev, NULL);
++	if (IS_ERR(clk)) {
++		dev_err(&pdev->dev, "could not find clk: %ld\n", PTR_ERR(clk));
++		return PTR_ERR(clk);
++	}
++
++	err = clk_prepare_enable(clk);
++	if (err) {
++		dev_err(&pdev->dev, "could not enable clk: %d\n", err);
++		goto out_clk_put;
++	}
++
++	bi = kzalloc(sizeof(*bi), GFP_KERNEL);
++	if (!bi)
++		goto out_clk_disable;
++
++	platform_set_drvdata(pdev, bi);
++
++	adap = &bi->adapter;
++	adap->class = I2C_CLASS_HWMON | I2C_CLASS_DDC;
++	adap->algo = &bcm2708_i2c_algorithm;
++	adap->algo_data = bi;
++	adap->dev.parent = &pdev->dev;
++	adap->nr = pdev->id;
++	strlcpy(adap->name, dev_name(&pdev->dev), sizeof(adap->name));
++	adap->dev.of_node = pdev->dev.of_node;
++
++	switch (pdev->id) {
++	case 0:
++		adap->class = I2C_CLASS_HWMON;
++		break;
++	case 1:
++		adap->class = I2C_CLASS_DDC;
++		break;
++	case 2:
++		adap->class = I2C_CLASS_DDC;
++		break;
++	default:
++		dev_err(&pdev->dev, "can only bind to BSC 0, 1 or 2\n");
++		err = -ENXIO;
++		goto out_free_bi;
++	}
++
++	spin_lock_init(&bi->lock);
++	init_completion(&bi->done);
++
++	bi->base = ioremap(regs->start, resource_size(regs));
++	if (!bi->base) {
++		dev_err(&pdev->dev, "could not remap memory\n");
++		goto out_free_bi;
++	}
++
++	bi->irq = irq;
++	bi->clk = clk;
++
++	err = request_irq(irq, bcm2708_i2c_interrupt, IRQF_SHARED,
++			dev_name(&pdev->dev), bi);
++	if (err) {
++		dev_err(&pdev->dev, "could not request IRQ: %d\n", err);
++		goto out_iounmap;
++	}
++
++	bcm2708_bsc_reset(bi);
++
++	err = i2c_add_numbered_adapter(adap);
++	if (err < 0) {
++		dev_err(&pdev->dev, "could not add I2C adapter: %d\n", err);
++		goto out_free_irq;
++	}
++
++	bus_hz = clk_get_rate(bi->clk);
++	cdiv = bus_hz / baudrate;
++	if (cdiv > 0xffff) {
++		cdiv = 0xffff;
++		baudrate = bus_hz / cdiv;
++	}
++	bi->cdiv = cdiv;
++
++	dev_info(&pdev->dev, "BSC%d Controller at 0x%08lx (irq %d) (baudrate %d)\n",
++		pdev->id, (unsigned long)regs->start, irq, baudrate);
++
++	return 0;
++
++out_free_irq:
++	free_irq(bi->irq, bi);
++out_iounmap:
++	iounmap(bi->base);
++out_free_bi:
++	kfree(bi);
++out_clk_disable:
++	clk_disable_unprepare(clk);
++out_clk_put:
++	clk_put(clk);
++	return err;
++}
++
++static int bcm2708_i2c_remove(struct platform_device *pdev)
++{
++	struct bcm2708_i2c *bi = platform_get_drvdata(pdev);
++
++	platform_set_drvdata(pdev, NULL);
++
++	i2c_del_adapter(&bi->adapter);
++	free_irq(bi->irq, bi);
++	iounmap(bi->base);
++	clk_disable_unprepare(bi->clk);
++	clk_put(bi->clk);
++	kfree(bi);
++
++	return 0;
++}
++
++static const struct of_device_id bcm2708_i2c_of_match[] = {
++	{ .compatible = "brcm,bcm2708-i2c" },
++	{},
++};
++MODULE_DEVICE_TABLE(of, bcm2708_i2c_of_match);
++
++static struct platform_driver bcm2708_i2c_driver = {
++	.driver		= {
++		.name	= DRV_NAME,
++		.owner	= THIS_MODULE,
++		.of_match_table = bcm2708_i2c_of_match,
++	},
++	.probe		= bcm2708_i2c_probe,
++	.remove		= bcm2708_i2c_remove,
++};
++
++static int __init bcm2708_i2c_init(void)
++{
++	return platform_driver_register(&bcm2708_i2c_driver);
++}
++
++static void __exit bcm2708_i2c_exit(void)
++{
++	platform_driver_unregister(&bcm2708_i2c_driver);
++}
++
++module_init(bcm2708_i2c_init);
++module_exit(bcm2708_i2c_exit);
++
++MODULE_DESCRIPTION("BSC controller driver for Broadcom BCM2708");
++MODULE_AUTHOR("Chris Boot <bootc at bootc.net>");
++MODULE_LICENSE("GPL v2");
++MODULE_ALIAS("platform:" DRV_NAME);
diff --git a/target/linux/brcm2708/patches-4.4/0047-char-broadcom-Add-vcio-module.patch b/target/linux/brcm2708/patches-4.4/0047-char-broadcom-Add-vcio-module.patch
new file mode 100644
index 0000000..b9a5c24
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0047-char-broadcom-Add-vcio-module.patch
@@ -0,0 +1,221 @@
+From 18ec30d7e1571af21c41b687bd12e22f05ffd0b2 Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?Noralf=20Tr=C3=B8nnes?= <noralf at tronnes.org>
+Date: Fri, 26 Jun 2015 14:27:06 +0200
+Subject: [PATCH 047/127] char: broadcom: Add vcio module
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+Add module for accessing the mailbox property channel through
+/dev/vcio. Was previously in bcm2708-vcio.
+
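+A hypothetical userspace sketch (not part of this patch) that queries the
+firmware revision through /dev/vcio; the ioctl number mirrors the driver
+below, and the buffer layout follows the VideoCore mailbox property
+interface (total size, request code, tags, end tag):
+
+	#include <fcntl.h>
+	#include <stdint.h>
+	#include <stdio.h>
+	#include <sys/ioctl.h>
+	#include <unistd.h>
+
+	#define VCIO_IOC_MAGIC 100
+	#define IOCTL_MBOX_PROPERTY _IOWR(VCIO_IOC_MAGIC, 0, char *)
+
+	int main(void)
+	{
+		uint32_t buf[7] = {
+			sizeof(buf),	/* total buffer size in bytes */
+			0,		/* process request */
+			0x00000001,	/* tag: get firmware revision */
+			4,		/* value buffer size in bytes */
+			0,		/* request/response code */
+			0,		/* value, filled in by the firmware */
+			0,		/* end tag */
+		};
+		int fd = open("/dev/vcio", 0);
+
+		if (fd < 0 || ioctl(fd, IOCTL_MBOX_PROPERTY, buf) < 0) {
+			perror("vcio");
+			return 1;
+		}
+		printf("firmware revision: 0x%08x\n", (unsigned int)buf[5]);
+		close(fd);
+		return 0;
+	}
+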
+Signed-off-by: Noralf Trønnes <noralf at tronnes.org>
+---
+ drivers/char/broadcom/Kconfig  |   6 ++
+ drivers/char/broadcom/Makefile |   1 +
+ drivers/char/broadcom/vcio.c   | 175 +++++++++++++++++++++++++++++++++++++++++
+ 3 files changed, 182 insertions(+)
+ create mode 100644 drivers/char/broadcom/vcio.c
+
+--- a/drivers/char/broadcom/Kconfig
++++ b/drivers/char/broadcom/Kconfig
+@@ -22,6 +22,12 @@ config BCM2708_VCMEM
+         help
+           Helper for videocore memory access and total size allocation.
+ 
++config BCM_VCIO
++	tristate "Mailbox userspace access"
++	depends on BCM2835_MBOX
++	help
++	  Gives access to the mailbox property channel from userspace.
++
+ endif
+ 
+ config BCM_VC_SM
+--- a/drivers/char/broadcom/Makefile
++++ b/drivers/char/broadcom/Makefile
+@@ -1,5 +1,6 @@
+ obj-$(CONFIG_BCM_VC_CMA)	+= vc_cma/
+ obj-$(CONFIG_BCM2708_VCMEM)	+= vc_mem.o
++obj-$(CONFIG_BCM_VCIO)		+= vcio.o
+ obj-$(CONFIG_BCM_VC_SM)         += vc_sm/
+ 
+ obj-$(CONFIG_BCM2835_DEVGPIOMEM)+= bcm2835-gpiomem.o
+--- /dev/null
++++ b/drivers/char/broadcom/vcio.c
+@@ -0,0 +1,175 @@
++/*
++ *  Copyright (C) 2010 Broadcom
++ *  Copyright (C) 2015 Noralf Trønnes
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License version 2 as
++ * published by the Free Software Foundation.
++ *
++ */
++
++#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
++
++#include <linux/cdev.h>
++#include <linux/device.h>
++#include <linux/fs.h>
++#include <linux/init.h>
++#include <linux/ioctl.h>
++#include <linux/module.h>
++#include <linux/slab.h>
++#include <linux/uaccess.h>
++#include <soc/bcm2835/raspberrypi-firmware.h>
++
++#define MBOX_CHAN_PROPERTY 8
++
++#define VCIO_IOC_MAGIC 100
++#define IOCTL_MBOX_PROPERTY _IOWR(VCIO_IOC_MAGIC, 0, char *)
++
++static struct {
++	dev_t devt;
++	struct cdev cdev;
++	struct class *class;
++	struct rpi_firmware *fw;
++} vcio;
++
++static int vcio_user_property_list(void *user)
++{
++	u32 *buf, size;
++	int ret;
++
++	/* The first 32-bit word is the size of the buffer */
++	if (copy_from_user(&size, user, sizeof(size)))
++		return -EFAULT;
++
++	buf = kmalloc(size, GFP_KERNEL);
++	if (!buf)
++		return -ENOMEM;
++
++	if (copy_from_user(buf, user, size)) {
++		kfree(buf);
++		return -EFAULT;
++	}
++
++	/* Strip off protocol encapsulation */
++	ret = rpi_firmware_property_list(vcio.fw, &buf[2], size - 12);
++	if (ret) {
++		kfree(buf);
++		return ret;
++	}
++
++	buf[1] = RPI_FIRMWARE_STATUS_SUCCESS;
++	if (copy_to_user(user, buf, size))
++		ret = -EFAULT;
++
++	kfree(buf);
++
++	return ret;
++}
++
++static int vcio_device_open(struct inode *inode, struct file *file)
++{
++	try_module_get(THIS_MODULE);
++
++	return 0;
++}
++
++static int vcio_device_release(struct inode *inode, struct file *file)
++{
++	module_put(THIS_MODULE);
++
++	return 0;
++}
++
++static long vcio_device_ioctl(struct file *file, unsigned int ioctl_num,
++			      unsigned long ioctl_param)
++{
++	switch (ioctl_num) {
++	case IOCTL_MBOX_PROPERTY:
++		return vcio_user_property_list((void *)ioctl_param);
++	default:
++		pr_err("unknown ioctl: %d\n", ioctl_num);
++		return -EINVAL;
++	}
++}
++
++const struct file_operations vcio_fops = {
++	.unlocked_ioctl = vcio_device_ioctl,
++	.open = vcio_device_open,
++	.release = vcio_device_release,
++};
++
++static int __init vcio_init(void)
++{
++	struct device_node *np;
++	static struct device *dev;
++	int ret;
++
++	np = of_find_compatible_node(NULL, NULL,
++				     "raspberrypi,bcm2835-firmware");
++/* Uncomment this when we only boot with Device Tree
++	if (!of_device_is_available(np))
++		return -ENODEV;
++*/
++	vcio.fw = rpi_firmware_get(np);
++	if (!vcio.fw)
++		return -ENODEV;
++
++	ret = alloc_chrdev_region(&vcio.devt, 0, 1, "vcio");
++	if (ret) {
++		pr_err("failed to allocate device number\n");
++		return ret;
++	}
++
++	cdev_init(&vcio.cdev, &vcio_fops);
++	vcio.cdev.owner = THIS_MODULE;
++	ret = cdev_add(&vcio.cdev, vcio.devt, 1);
++	if (ret) {
++		pr_err("failed to register device\n");
++		goto err_unregister_chardev;
++	}
++
++	/*
++	 * Create sysfs entries
++	 * 'bcm2708_vcio' is used for backwards compatibility so we don't break
++	 * userspace. Raspbian has a udev rule that changes the permissions.
++	 */
++	vcio.class = class_create(THIS_MODULE, "bcm2708_vcio");
++	if (IS_ERR(vcio.class)) {
++		ret = PTR_ERR(vcio.class);
++		pr_err("failed to create class\n");
++		goto err_cdev_del;
++	}
++
++	dev = device_create(vcio.class, NULL, vcio.devt, NULL, "vcio");
++	if (IS_ERR(dev)) {
++		ret = PTR_ERR(dev);
++		pr_err("failed to create device\n");
++		goto err_class_destroy;
++	}
++
++	return 0;
++
++err_class_destroy:
++	class_destroy(vcio.class);
++err_cdev_del:
++	cdev_del(&vcio.cdev);
++err_unregister_chardev:
++	unregister_chrdev_region(vcio.devt, 1);
++
++	return ret;
++}
++module_init(vcio_init);
++
++static void __exit vcio_exit(void)
++{
++	device_destroy(vcio.class, vcio.devt);
++	class_destroy(vcio.class);
++	cdev_del(&vcio.cdev);
++	unregister_chrdev_region(vcio.devt, 1);
++}
++module_exit(vcio_exit);
++
++MODULE_AUTHOR("Gray Girling");
++MODULE_AUTHOR("Noralf Trønnes");
++MODULE_DESCRIPTION("Mailbox userspace access");
++MODULE_LICENSE("GPL");
diff --git a/target/linux/brcm2708/patches-4.4/0048-firmware-bcm2835-Support-ARCH_BCM270x.patch b/target/linux/brcm2708/patches-4.4/0048-firmware-bcm2835-Support-ARCH_BCM270x.patch
new file mode 100644
index 0000000..78e8605
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0048-firmware-bcm2835-Support-ARCH_BCM270x.patch
@@ -0,0 +1,106 @@
+From 1d1c3e9b18717f6510b67c582e93051a3a948eb6 Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?Noralf=20Tr=C3=B8nnes?= <noralf at tronnes.org>
+Date: Fri, 26 Jun 2015 14:25:01 +0200
+Subject: [PATCH 048/127] firmware: bcm2835: Support ARCH_BCM270x
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+Support booting without Device Tree.
+Turn on USB power.
+Load the driver early because many drivers lack support for
+deferred probing.
+
+Signed-off-by: Noralf Trønnes <noralf at tronnes.org>
+---
+ drivers/firmware/raspberrypi.c | 41 +++++++++++++++++++++++++++++++++++++++--
+ 1 file changed, 39 insertions(+), 2 deletions(-)
+
+--- a/drivers/firmware/raspberrypi.c
++++ b/drivers/firmware/raspberrypi.c
+@@ -28,6 +28,8 @@ struct rpi_firmware {
+ 	u32 enabled;
+ };
+ 
++static struct platform_device *g_pdev;
++
+ static DEFINE_MUTEX(transaction_lock);
+ 
+ static void response_callback(struct mbox_client *cl, void *msg)
+@@ -183,6 +185,25 @@ rpi_firmware_print_firmware_revision(str
+ 	}
+ }
+ 
++static int raspberrypi_firmware_set_power(struct rpi_firmware *fw,
++					  u32 domain, bool on)
++{
++	struct {
++		u32 domain;
++		u32 on;
++	} packet;
++	int ret;
++
++	packet.domain = domain;
++	packet.on = on;
++	ret = rpi_firmware_property(fw, RPI_FIRMWARE_SET_POWER_STATE,
++				    &packet, sizeof(packet));
++	if (!ret && packet.on != on)
++		ret = -EINVAL;
++
++	return ret;
++}
++
+ static int rpi_firmware_probe(struct platform_device *pdev)
+ {
+ 	struct device *dev = &pdev->dev;
+@@ -207,9 +228,13 @@ static int rpi_firmware_probe(struct pla
+ 	init_completion(&fw->c);
+ 
+ 	platform_set_drvdata(pdev, fw);
++	g_pdev = pdev;
+ 
+ 	rpi_firmware_print_firmware_revision(fw);
+ 
++	if (raspberrypi_firmware_set_power(fw, 3, true))
++		dev_err(dev, "failed to turn on USB power\n");
++
+ 	return 0;
+ }
+ 
+@@ -218,6 +243,7 @@ static int rpi_firmware_remove(struct pl
+ 	struct rpi_firmware *fw = platform_get_drvdata(pdev);
+ 
+ 	mbox_free_channel(fw->chan);
++	g_pdev = NULL;
+ 
+ 	return 0;
+ }
+@@ -230,7 +256,7 @@ static int rpi_firmware_remove(struct pl
+  */
+ struct rpi_firmware *rpi_firmware_get(struct device_node *firmware_node)
+ {
+-	struct platform_device *pdev = of_find_device_by_node(firmware_node);
++	struct platform_device *pdev = g_pdev;
+ 
+ 	if (!pdev)
+ 		return NULL;
+@@ -253,7 +279,18 @@ static struct platform_driver rpi_firmwa
+ 	.probe		= rpi_firmware_probe,
+ 	.remove		= rpi_firmware_remove,
+ };
+-module_platform_driver(rpi_firmware_driver);
++
++static int __init rpi_firmware_init(void)
++{
++	return platform_driver_register(&rpi_firmware_driver);
++}
++subsys_initcall(rpi_firmware_init);
++
++static void __init rpi_firmware_exit(void)
++{
++	platform_driver_unregister(&rpi_firmware_driver);
++}
++module_exit(rpi_firmware_exit);
+ 
+ MODULE_AUTHOR("Eric Anholt <eric at anholt.net>");
+ MODULE_DESCRIPTION("Raspberry Pi firmware driver");
diff --git a/target/linux/brcm2708/patches-4.4/0049-bcm2835-add-v4l2-camera-device.patch b/target/linux/brcm2708/patches-4.4/0049-bcm2835-add-v4l2-camera-device.patch
new file mode 100644
index 0000000..b9f1746
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0049-bcm2835-add-v4l2-camera-device.patch
@@ -0,0 +1,7338 @@
+From 1e7cf5d5acfaecbc92766c179dba544064fb94cc Mon Sep 17 00:00:00 2001
+From: Vincent Sanders <vincent.sanders at collabora.co.uk>
+Date: Wed, 30 Jan 2013 12:45:18 +0000
+Subject: [PATCH 049/127] bcm2835: add v4l2 camera device
+
+- Supports raw YUV capture, preview, JPEG and H264.
+- Uses videobuf2 for data transfer, using dma_buf.
+- Uses 3.6.10 timestamping
+- Camera power based on use
+- Uses immutable input mode on video encoder
+
+Signed-off-by: Daniel Stone <daniels at collabora.com>
+Signed-off-by: Luke Diamand <luked at broadcom.com>
+
+V4L2: Fixes from 6by9
+
+V4L2: Fix EV values. Add manual shutter speed control
+
+V4L2 EV values should be in units of 1/1000. Corrected.
+Add support for V4L2_CID_EXPOSURE_ABSOLUTE which should
+give manual shutter control. Requires manual exposure mode
+to be selected first.
+
+Signed-off-by: Dave Stevenson <dsteve at broadcom.com>
+
+V4L2: Correct JPEG Q-factor range
+
+Should be 1-100, not 0-100
+
+Signed-off-by: Dave Stevenson <dsteve at broadcom.com>
+
+V4L2: Fix issue of driver jamming if STREAMON failed.
+
+Fix issue where the driver was left in a partially enabled
+state if STREAMON failed, and would then reject many IOCTLs
+as it thought it was streaming.
+
+Signed-off-by: Dave Stevenson <dsteve at broadcom.com>
+
+V4L2: Fix ISO controls.
+
+Driver was passing the index to the GPU, and not the desired
+ISO value.
+
+Signed-off-by: Dave Stevenson <dsteve at broadcom.com>
+
+V4L2: Add flicker avoidance controls
+
+Add support for V4L2_CID_POWER_LINE_FREQUENCY to set flicker
+avoidance frequencies.
+
+Signed-off-by: Dave Stevenson <dsteve at broadcom.com>
+
+V4L2: Add support for frame rate control.
+
+Add support for frame rate (or time per frame as V4L2
+inverts it) control via s_parm.
+
+Signed-off-by: Dave Stevenson <dsteve at broadcom.com>
+
+V4L2: Improve G_FBUF handling so we pass conformance
+
+Return some sane numbers for get framebuffer so that
+we pass conformance.
+
+Signed-off-by: Dave Stevenson <dsteve at broadcom.com>
+
+V4L2: Fix information advertised through g_vidfmt
+
+Width and height were being stored based on incorrect
+values.
+
+Signed-off-by: Dave Stevenson <dsteve at broadcom.com>
+
+V4L2: Add support for inline H264 headers
+
+Add support for V4L2_CID_MPEG_VIDEO_REPEAT_SEQ_HEADER
+to control H264 inline headers.
+Requires firmware fix to work correctly, otherwise format
+has to be set to H264 before this parameter is set.
+
+Signed-off-by: Dave Stevenson <dsteve at broadcom.com>
+
+V4L2: Fix JPEG timestamp issue
+
+JPEG images were coming through from the GPU with timestamp
+of 0. Detect this and give current system time instead
+of some invalid value.
+
+Signed-off-by: Dave Stevenson <dsteve at broadcom.com>
+
+V4L2: Fix issue when switching down JPEG resolution.
+
+JPEG buffer size calculation is based on input resolution.
+Input resolution was being configured after output port
+format. Caused failures if switching from one JPEG resolution
+to a smaller one.
+
+Signed-off-by: Dave Stevenson <dsteve at broadcom.com>
+
+V4L2: Enable MJPEG encoding
+
+Requires GPU firmware update to support MJPEG encoder.
+
+Signed-off-by: Dave Stevenson <dsteve at broadcom.com>
+
+V4L2: Correct flag settings for compressed formats
+
+Set flags field correctly on enum_fmt_vid_cap for compressed
+image formats.
+
+Signed-off-by: Dave Stevenson <dsteve at broadcom.com>
+
+V4L2: H264 profile & level ctrls, FPS control and auto exp pri
+
+Several control handling updates.
+H264 profile and level controls.
+Timeperframe/FPS reworked to add V4L2_CID_EXPOSURE_AUTO_PRIORITY to
+select whether AE is allowed to override the framerate specified.
+
+Signed-off-by: Dave Stevenson <dsteve at broadcom.com>
+
+V4L2: Correct BGR24 to RGB24 in format table
+
+Signed-off-by: Dave Stevenson <dsteve at broadcom.com>
+
+V4L2: Add additional pixel formats. Correct colourspace
+
+Adds the other flavours of YUYV, and NV12.
+Corrects the overlay advertised colourspace.
+
+Signed-off-by: Dave Stevenson <dsteve at broadcom.com>
+
+V4L2: Drop logging msg from info to debug
+
+Signed-off-by: Dave Stevenson <dsteve at broadcom.com>
+
+V4L2: Initial pass at scene modes.
+
+Only supports exposure mode and metering modes.
+
+Signed-off-by: Dave Stevenson <dsteve at broadcom.com>
+
+V4L2: Add manual white balance control.
+
+Adds support for V4L2_CID_RED_BALANCE and
+V4L2_CID_BLUE_BALANCE. Only has an effect if
+V4L2_CID_AUTO_N_PRESET_WHITE_BALANCE has
+V4L2_WHITE_BALANCE_MANUAL selected.
+
+Signed-off-by: Dave Stevenson <dsteve at broadcom.com>
+
+config: Enable V4L / MMAL driver
+
+V4L2: Increase the MMAL timeout to 3sec
+
+MJPEG codec flush is now taking longer and results
+in a kernel panic if the driver has stopped waiting for
+the result when it finally completes.
+Increase the timeout value from 1 to 3 seconds.
+
+Signed-off-by: Dave Stevenson <dsteve at broadcom.com>
+
+V4L2: Add support for setting H264_I_PERIOD
+
+Adds support for the parameter V4L2_CID_MPEG_VIDEO_H264_I_PERIOD
+to set the frequency with which I frames are produced.
+
+Signed-off-by: Dave Stevenson <dsteve at broadcom.com>
+
+V4L2: Enable GPU function for removing padding from images.
+
+GPU can now support arbitrary strides, although may require
+additional processing to achieve it. Enable this feature
+so that the images delivered are the size requested.
+
+Signed-off-by: Dave Stevenson <dsteve at broadcom.com>
+
+V4L2: Add support for V4L2_PIX_FMT_BGR32
+
+Signed-off-by: Dave Stevenson <dsteve at broadcom.com>
+
+V4L2: Set the colourspace to avoid odd YUV-RGB conversions
+
+Removes the ambiguity from the conversion routines and stops
+them dropping back to the SD vs HD choice of coeffs.
+
+Signed-off-by: Dave Stevenson <dsteve at broadcom.com>
+
+V4L2: Make video/still threshold a run-time param
+
+Move the define for at what resolution the driver
+switches from a video mode capture to a stills mode
+capture to module parameters.
+
+Signed-off-by: Dave Stevenson <dsteve at broadcom.com>
+
+V4L2: Fix incorrect pool sizing
+
+Signed-off-by: Dave Stevenson <dsteve at broadcom.com>
+
+V4L2: Add option to disable enum_framesizes.
+
+Gstreamer's handling of a driver that advertises
+V4L2_FRMSIZE_TYPE_STEPWISE to define the supported
+resolutions is broken. See bug
+https://bugzilla.gnome.org/show_bug.cgi?id=726521
+
+Optional parameter of gst_v4l2src_is_broken added.
+If non-zero, the driver claims not to support that
+ioctl, and gstreamer should be happy again (it
+guesses a set of defaults for itself).
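+
+A minimal usage sketch (parameter value illustrative; module name as in the
+Quick Start documentation below):
+
+$ sudo modprobe bcm2835-v4l2 gst_v4l2src_is_broken=1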
+
+Signed-off-by: Dave Stevenson <dsteve at broadcom.com>
+
+V4L2: Add support for more image formats
+
+Adds YVU420 (YV12), YVU420SP (NV21), and BGR888.
+
+Signed-off-by: Dave Stevenson <dsteve at broadcom.com>
+
+V4L2: Extend range for V4L2_CID_MPEG_VIDEO_H264_I_PERIOD
+
+Request to extend the range from the fairly arbitrary
+1000 frames (33 seconds at 30fps). Extend out to the
+max range supported (int32 value).
+Also allow 0, which the codec handles by sending an I-frame only on
+the first frame and never again.
+There may be an exception if it detects a significant
+scene change, but there's no easy way around that.
+
+Signed-off-by: Dave Stevenson <dsteve at broadcom.com>
+
+bcm2835-camera: stop_streaming now has a void return
+
+BCM2835-V4L2: Fix compliance test failures
+
+VIDIOC_TRY_FMT and VIDIOC_S_FMT tests were faling due
+to reporting V4L2_COLORSPACE_JPEG when the colour
+format wasn't V4L2_PIX_FMT_JPEG.
+Now reports V4L2_COLORSPACE_SMPTE170M for YUV formats.
+
+bcm2835 camera planar/packed stride length
+
+Added a field to the mmal_fmt struct used to compute the bytes per line
+when using a particular format. This results in the correct stride being
+calculated even when the format is planar.
+
+Signed-off-by: Garrett Wilson <g at floft.net>
+
+bcm2835: camera: check for scene not being found
+
+static analysis by cppcheck detected some potential NULL pointer
+dereference issues:
+
+[drivers/media/platform/bcm2835/controls.c:854]: (error) Possible null
+  pointer dereference: scene
+  (and lines 858, 859 too)
+
+It is possible that scene is not found because of an invalid ctrl->val
+and is therefore NULL, causing a null pointer dereference.
+
+Signed-off-by: Colin Ian King <colin.king at canonical.com>
+
+bcm2835: memcpy port data to m rather than rmsg
+
+static analysis by cppcheck detected a memcpy to rmsg which is
+not actually initialized at that point.  The memcpy should be copying
+to variable m instead.
+
+Signed-off-by: Colin Ian King <colin.king at canonical.com>
+
+BCM2835-V4L2: Return buffers to videobuf2 on shutdown
+
+https://github.com/raspberrypi/linux/issues/817
+Fixes the kernel warning from videobuf2 as buffers
+are now returned as they are being flushed on
+stop_streaming.
+
+squash: Fixup bcm2835-camera for changes in kernel 4.4 api
+---
+ Documentation/video4linux/bcm2835-v4l2.txt       |   60 +
+ drivers/media/platform/Kconfig                   |    2 +
+ drivers/media/platform/Makefile                  |    2 +
+ drivers/media/platform/bcm2835/Kconfig           |   25 +
+ drivers/media/platform/bcm2835/Makefile          |    5 +
+ drivers/media/platform/bcm2835/bcm2835-camera.c  | 1844 +++++++++++++++++++++
+ drivers/media/platform/bcm2835/bcm2835-camera.h  |  126 ++
+ drivers/media/platform/bcm2835/controls.c        | 1324 +++++++++++++++
+ drivers/media/platform/bcm2835/mmal-common.h     |   53 +
+ drivers/media/platform/bcm2835/mmal-encodings.h  |  127 ++
+ drivers/media/platform/bcm2835/mmal-msg-common.h |   50 +
+ drivers/media/platform/bcm2835/mmal-msg-format.h |   81 +
+ drivers/media/platform/bcm2835/mmal-msg-port.h   |  107 ++
+ drivers/media/platform/bcm2835/mmal-msg.h        |  404 +++++
+ drivers/media/platform/bcm2835/mmal-parameters.h |  656 ++++++++
+ drivers/media/platform/bcm2835/mmal-vchiq.c      | 1916 ++++++++++++++++++++++
+ drivers/media/platform/bcm2835/mmal-vchiq.h      |  178 ++
+ 17 files changed, 6960 insertions(+)
+ create mode 100644 Documentation/video4linux/bcm2835-v4l2.txt
+ create mode 100644 drivers/media/platform/bcm2835/Kconfig
+ create mode 100644 drivers/media/platform/bcm2835/Makefile
+ create mode 100644 drivers/media/platform/bcm2835/bcm2835-camera.c
+ create mode 100644 drivers/media/platform/bcm2835/bcm2835-camera.h
+ create mode 100644 drivers/media/platform/bcm2835/controls.c
+ create mode 100644 drivers/media/platform/bcm2835/mmal-common.h
+ create mode 100644 drivers/media/platform/bcm2835/mmal-encodings.h
+ create mode 100644 drivers/media/platform/bcm2835/mmal-msg-common.h
+ create mode 100644 drivers/media/platform/bcm2835/mmal-msg-format.h
+ create mode 100644 drivers/media/platform/bcm2835/mmal-msg-port.h
+ create mode 100644 drivers/media/platform/bcm2835/mmal-msg.h
+ create mode 100644 drivers/media/platform/bcm2835/mmal-parameters.h
+ create mode 100644 drivers/media/platform/bcm2835/mmal-vchiq.c
+ create mode 100644 drivers/media/platform/bcm2835/mmal-vchiq.h
+
+--- /dev/null
++++ b/Documentation/video4linux/bcm2835-v4l2.txt
+@@ -0,0 +1,60 @@
++
++BCM2835 (aka Raspberry Pi) V4L2 driver
++======================================
++
++1. Copyright
++============
++
++Copyright © 2013 Raspberry Pi (Trading) Ltd.
++
++2. License
++==========
++
++This program is free software; you can redistribute it and/or modify
++it under the terms of the GNU General Public License as published by
++the Free Software Foundation; either version 2 of the License, or
++(at your option) any later version.
++
++This program is distributed in the hope that it will be useful,
++but WITHOUT ANY WARRANTY; without even the implied warranty of
++MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
++GNU General Public License for more details.
++
++You should have received a copy of the GNU General Public License
++along with this program; if not, write to the Free Software
++Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
++
++3. Quick Start
++==============
++
++You need a version 1.0 or later of v4l2-ctl, available from:
++	git://git.linuxtv.org/v4l-utils.git
++
++$ sudo modprobe bcm2835-v4l2
++
++Turn on the overlay:
++
++$ v4l2-ctl --overlay=1
++
++Turn off the overlay:
++
++$ v4l2-ctl --overlay=0
++
++Set the capture format for video:
++
++$ v4l2-ctl  --set-fmt-video=width=1920,height=1088,pixelformat=4
++
++(Note: 1088 not 1080).
++
++Capture:
++
++$ v4l2-ctl --stream-mmap=3 --stream-count=100 --stream-to=somefile.h264
++
++Stills capture:
++
++$ v4l2-ctl  --set-fmt-video=width=2592,height=1944,pixelformat=3
++$ v4l2-ctl --stream-mmap=3 --stream-count=1 --stream-to=somefile.jpg
++
++List of available formats:
++
++$ v4l2-ctl --list-formats
+--- a/drivers/media/platform/Kconfig
++++ b/drivers/media/platform/Kconfig
+@@ -11,6 +11,8 @@ menuconfig V4L_PLATFORM_DRIVERS
+ 
+ if V4L_PLATFORM_DRIVERS
+ 
++source "drivers/media/platform/bcm2835/Kconfig"
++
+ source "drivers/media/platform/marvell-ccic/Kconfig"
+ 
+ config VIDEO_VIA_CAMERA
+--- a/drivers/media/platform/Makefile
++++ b/drivers/media/platform/Makefile
+@@ -2,6 +2,8 @@
+ # Makefile for the video capture/playback device drivers.
+ #
+ 
++obj-$(CONFIG_VIDEO_BCM2835)		+= bcm2835/
++
+ obj-$(CONFIG_VIDEO_TIMBERDALE)	+= timblogiw.o
+ obj-$(CONFIG_VIDEO_M32R_AR_M64278) += arv.o
+ 
+--- /dev/null
++++ b/drivers/media/platform/bcm2835/Kconfig
+@@ -0,0 +1,25 @@
++# Broadcom VideoCore IV v4l2 camera support
++
++config VIDEO_BCM2835
++	bool "Broadcom BCM2835 camera interface driver"
++	depends on VIDEO_V4L2 && (ARCH_BCM2708 || ARCH_BCM2709 || ARCH_BCM2835)
++	---help---
++	  Say Y here to enable camera host interface devices for
++	  Broadcom BCM2835 SoC. This operates over the VCHIQ interface
++	  to a service running on VideoCore.
++
++
++if VIDEO_BCM2835
++
++config VIDEO_BCM2835_MMAL
++	tristate "Broadcom BCM2835 MMAL camera interface driver"
++	depends on BCM2708_VCHIQ
++	select VIDEOBUF2_VMALLOC
++	---help---
++	  This is a V4L2 driver for the Broadcom BCM2835 MMAL camera host interface
++
++	  To compile this driver as a module, choose M here: the
++	  module will be called bcm2835-v4l2.o
++
++
++endif # VIDEO_BCM2835
+--- /dev/null
++++ b/drivers/media/platform/bcm2835/Makefile
+@@ -0,0 +1,5 @@
++bcm2835-v4l2-objs := bcm2835-camera.o controls.o mmal-vchiq.o
++
++obj-$(CONFIG_VIDEO_BCM2835_MMAL) += bcm2835-v4l2.o
++
++ccflags-$(CONFIG_VIDEO_BCM2835) += -Idrivers/misc/vc04_services -Idrivers/misc/vc04_services/interface/vcos/linuxkernel -D__VCCOREVER__=0x04000000
+--- /dev/null
++++ b/drivers/media/platform/bcm2835/bcm2835-camera.c
+@@ -0,0 +1,1844 @@
++/*
++ * Broadcom BCM2835 V4L2 driver
++ *
++ * Copyright © 2013 Raspberry Pi (Trading) Ltd.
++ *
++ * This file is subject to the terms and conditions of the GNU General Public
++ * License.  See the file COPYING in the main directory of this archive
++ * for more details.
++ *
++ * Authors: Vincent Sanders <vincent.sanders at collabora.co.uk>
++ *          Dave Stevenson <dsteve at broadcom.com>
++ *          Simon Mellor <simellor at broadcom.com>
++ *          Luke Diamand <luked at broadcom.com>
++ */
++
++#include <linux/errno.h>
++#include <linux/kernel.h>
++#include <linux/module.h>
++#include <linux/slab.h>
++#include <media/videobuf2-vmalloc.h>
++#include <media/videobuf2-dma-contig.h>
++#include <media/v4l2-device.h>
++#include <media/v4l2-ioctl.h>
++#include <media/v4l2-ctrls.h>
++#include <media/v4l2-fh.h>
++#include <media/v4l2-event.h>
++#include <media/v4l2-common.h>
++#include <linux/delay.h>
++
++#include "mmal-common.h"
++#include "mmal-encodings.h"
++#include "mmal-vchiq.h"
++#include "mmal-msg.h"
++#include "mmal-parameters.h"
++#include "bcm2835-camera.h"
++
++#define BM2835_MMAL_VERSION "0.0.2"
++#define BM2835_MMAL_MODULE_NAME "bcm2835-v4l2"
++#define MIN_WIDTH 16
++#define MIN_HEIGHT 16
++#define MAX_WIDTH 2592
++#define MAX_HEIGHT 1944
++#define MIN_BUFFER_SIZE (80*1024)
++
++#define MAX_VIDEO_MODE_WIDTH 1280
++#define MAX_VIDEO_MODE_HEIGHT 720
++
++MODULE_DESCRIPTION("Broadcom 2835 MMAL video capture");
++MODULE_AUTHOR("Vincent Sanders");
++MODULE_LICENSE("GPL");
++MODULE_VERSION(BM2835_MMAL_VERSION);
++
++int bcm2835_v4l2_debug;
++module_param_named(debug, bcm2835_v4l2_debug, int, 0644);
++MODULE_PARM_DESC(bcm2835_v4l2_debug, "Debug level 0-2");
++
++int max_video_width = MAX_VIDEO_MODE_WIDTH;
++int max_video_height = MAX_VIDEO_MODE_HEIGHT;
++module_param(max_video_width, int, S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH);
++MODULE_PARM_DESC(max_video_width, "Threshold for video mode");
++module_param(max_video_height, int, S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH);
++MODULE_PARM_DESC(max_video_height, "Threshold for video mode");
++
++/* Gstreamer bug https://bugzilla.gnome.org/show_bug.cgi?id=726521
++ * v4l2src does bad (and actually wrong) things when the vidioc_enum_framesizes
++ * function says type V4L2_FRMSIZE_TYPE_STEPWISE, which we do by default.
++ * It's happier if we just don't say anything at all, when it then
++ * sets up a load of defaults that it thinks might work.
++ * If gst_v4l2src_is_broken is non-zero, then we remove the function from
++ * our function table list (actually switch to an alternate set, but same
++ * result).
++ */
++int gst_v4l2src_is_broken = 0;
++module_param(gst_v4l2src_is_broken, int, S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH);
++MODULE_PARM_DESC(gst_v4l2src_is_broken, "If non-zero, enable workaround for Gstreamer");
++
++static struct bm2835_mmal_dev *gdev;	/* global device data */
++
++#define FPS_MIN 1
++#define FPS_MAX 90
++
++/* timeperframe: min/max and default */
++static const struct v4l2_fract
++	tpf_min     = {.numerator = 1,		.denominator = FPS_MAX},
++	tpf_max     = {.numerator = 1,	        .denominator = FPS_MIN},
++	tpf_default = {.numerator = 1000,	.denominator = 30000};
++
++/* video formats */
++static struct mmal_fmt formats[] = {
++	{
++	 .name = "4:2:0, planar, YUV",
++	 .fourcc = V4L2_PIX_FMT_YUV420,
++	 .flags = 0,
++	 .mmal = MMAL_ENCODING_I420,
++	 .depth = 12,
++	 .mmal_component = MMAL_COMPONENT_CAMERA,
++	 .ybbp = 1,
++	 },
++	{
++	 .name = "4:2:2, packed, YUYV",
++	 .fourcc = V4L2_PIX_FMT_YUYV,
++	 .flags = 0,
++	 .mmal = MMAL_ENCODING_YUYV,
++	 .depth = 16,
++	 .mmal_component = MMAL_COMPONENT_CAMERA,
++	 .ybbp = 2,
++	 },
++	{
++	 .name = "RGB24 (LE)",
++	 .fourcc = V4L2_PIX_FMT_RGB24,
++	 .flags = 0,
++	 .mmal = MMAL_ENCODING_BGR24,
++	 .depth = 24,
++	 .mmal_component = MMAL_COMPONENT_CAMERA,
++	 .ybbp = 3,
++	 },
++	{
++	 .name = "JPEG",
++	 .fourcc = V4L2_PIX_FMT_JPEG,
++	 .flags = V4L2_FMT_FLAG_COMPRESSED,
++	 .mmal = MMAL_ENCODING_JPEG,
++	 .depth = 8,
++	 .mmal_component = MMAL_COMPONENT_IMAGE_ENCODE,
++	 .ybbp = 0,
++	 },
++	{
++	 .name = "H264",
++	 .fourcc = V4L2_PIX_FMT_H264,
++	 .flags = V4L2_FMT_FLAG_COMPRESSED,
++	 .mmal = MMAL_ENCODING_H264,
++	 .depth = 8,
++	 .mmal_component = MMAL_COMPONENT_VIDEO_ENCODE,
++	 .ybbp = 0,
++	 },
++	{
++	 .name = "MJPEG",
++	 .fourcc = V4L2_PIX_FMT_MJPEG,
++	 .flags = V4L2_FMT_FLAG_COMPRESSED,
++	 .mmal = MMAL_ENCODING_MJPEG,
++	 .depth = 8,
++	 .mmal_component = MMAL_COMPONENT_VIDEO_ENCODE,
++	 .ybbp = 0,
++	 },
++	{
++	 .name = "4:2:2, packed, YVYU",
++	 .fourcc = V4L2_PIX_FMT_YVYU,
++	 .flags = 0,
++	 .mmal = MMAL_ENCODING_YVYU,
++	 .depth = 16,
++	 .mmal_component = MMAL_COMPONENT_CAMERA,
++	 .ybbp = 2,
++	 },
++	{
++	 .name = "4:2:2, packed, VYUY",
++	 .fourcc = V4L2_PIX_FMT_VYUY,
++	 .flags = 0,
++	 .mmal = MMAL_ENCODING_VYUY,
++	 .depth = 16,
++	 .mmal_component = MMAL_COMPONENT_CAMERA,
++	 .ybbp = 2,
++	 },
++	{
++	 .name = "4:2:2, packed, UYVY",
++	 .fourcc = V4L2_PIX_FMT_UYVY,
++	 .flags = 0,
++	 .mmal = MMAL_ENCODING_UYVY,
++	 .depth = 16,
++	 .mmal_component = MMAL_COMPONENT_CAMERA,
++	 .ybbp = 2,
++	 },
++	{
++	 .name = "4:2:0, planar, NV12",
++	 .fourcc = V4L2_PIX_FMT_NV12,
++	 .flags = 0,
++	 .mmal = MMAL_ENCODING_NV12,
++	 .depth = 12,
++	 .mmal_component = MMAL_COMPONENT_CAMERA,
++	 .ybbp = 1,
++	 },
++	{
++	 .name = "RGB24 (BE)",
++	 .fourcc = V4L2_PIX_FMT_BGR24,
++	 .flags = 0,
++	 .mmal = MMAL_ENCODING_RGB24,
++	 .depth = 24,
++	 .mmal_component = MMAL_COMPONENT_CAMERA,
++	 .ybbp = 3,
++	 },
++	{
++	 .name = "4:2:0, planar, YVU",
++	 .fourcc = V4L2_PIX_FMT_YVU420,
++	 .flags = 0,
++	 .mmal = MMAL_ENCODING_YV12,
++	 .depth = 12,
++	 .mmal_component = MMAL_COMPONENT_CAMERA,
++	 .ybbp = 1,
++	 },
++	{
++	 .name = "4:2:0, planar, NV21",
++	 .fourcc = V4L2_PIX_FMT_NV21,
++	 .flags = 0,
++	 .mmal = MMAL_ENCODING_NV21,
++	 .depth = 12,
++	 .mmal_component = MMAL_COMPONENT_CAMERA,
++	 .ybbp = 1,
++	 },
++	{
++	 .name = "RGB32 (BE)",
++	 .fourcc = V4L2_PIX_FMT_BGR32,
++	 .flags = 0,
++	 .mmal = MMAL_ENCODING_BGRA,
++	 .depth = 32,
++	 .mmal_component = MMAL_COMPONENT_CAMERA,
++	 .ybbp = 4,
++	 },
++};
++
++static struct mmal_fmt *get_format(struct v4l2_format *f)
++{
++	struct mmal_fmt *fmt;
++	unsigned int k;
++
++	for (k = 0; k < ARRAY_SIZE(formats); k++) {
++		fmt = &formats[k];
++		if (fmt->fourcc == f->fmt.pix.pixelformat)
++			break;
++	}
++
++	if (k == ARRAY_SIZE(formats))
++		return NULL;
++
++	return &formats[k];
++}
++
++/* ------------------------------------------------------------------
++	Videobuf queue operations
++   ------------------------------------------------------------------*/
++
++static int queue_setup(struct vb2_queue *vq, const void *parg,
++		       unsigned int *nbuffers, unsigned int *nplanes,
++		       unsigned int sizes[], void *alloc_ctxs[])
++{
++	struct bm2835_mmal_dev *dev = vb2_get_drv_priv(vq);
++	unsigned long size;
++
++	/* refuse queue setup if port is not configured */
++	if (dev->capture.port == NULL) {
++		v4l2_err(&dev->v4l2_dev,
++			 "%s: capture port not configured\n", __func__);
++		return -EINVAL;
++	}
++
++	size = dev->capture.port->current_buffer.size;
++	if (size == 0) {
++		v4l2_err(&dev->v4l2_dev,
++			 "%s: capture port buffer size is zero\n", __func__);
++		return -EINVAL;
++	}
++
++	if (*nbuffers < (dev->capture.port->current_buffer.num + 2))
++		*nbuffers = (dev->capture.port->current_buffer.num + 2);
++
++	*nplanes = 1;
++
++	sizes[0] = size;
++
++	/*
++	 * videobuf2-vmalloc allocator is context-less so no need to set
++	 * alloc_ctxs array.
++	 */
++
++	v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev, "%s: dev:%p\n",
++		 __func__, dev);
++
++	return 0;
++}
++
++static int buffer_prepare(struct vb2_buffer *vb)
++{
++	struct bm2835_mmal_dev *dev = vb2_get_drv_priv(vb->vb2_queue);
++	unsigned long size;
++
++	v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev, "%s: dev:%p\n",
++		 __func__, dev);
++
++	BUG_ON(dev->capture.port == NULL);
++	BUG_ON(dev->capture.fmt == NULL);
++
++	size = dev->capture.stride * dev->capture.height;
++	if (vb2_plane_size(vb, 0) < size) {
++		v4l2_err(&dev->v4l2_dev,
++			 "%s data will not fit into plane (%lu < %lu)\n",
++			 __func__, vb2_plane_size(vb, 0), size);
++		return -EINVAL;
++	}
++
++	return 0;
++}
++
++static inline bool is_capturing(struct bm2835_mmal_dev *dev)
++{
++	return dev->capture.camera_port ==
++	    &dev->
++	    component[MMAL_COMPONENT_CAMERA]->output[MMAL_CAMERA_PORT_CAPTURE];
++}
++
++static void buffer_cb(struct vchiq_mmal_instance *instance,
++		      struct vchiq_mmal_port *port,
++		      int status,
++		      struct mmal_buffer *buf,
++		      unsigned long length, u32 mmal_flags, s64 dts, s64 pts)
++{
++	struct bm2835_mmal_dev *dev = port->cb_ctx;
++
++	v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev,
++		 "%s: status:%d, buf:%p, length:%lu, flags %u, pts %lld\n",
++		 __func__, status, buf, length, mmal_flags, pts);
++
++	if (status != 0) {
++		/* error in transfer */
++		if (buf != NULL) {
++			/* there was a buffer with the error so return it */
++			vb2_buffer_done(&buf->vb.vb2_buf, VB2_BUF_STATE_ERROR);
++		}
++		return;
++	} else if (length == 0) {
++		/* stream ended */
++		if (buf != NULL) {
++			/* this should only ever happen if the port is
++			 * disabled and there are buffers still queued
++			 */
++			vb2_buffer_done(&buf->vb.vb2_buf, VB2_BUF_STATE_ERROR);
++			pr_debug("Empty buffer");
++		} else if (dev->capture.frame_count) {
++			/* grab another frame */
++			if (is_capturing(dev)) {
++				pr_debug("Grab another frame");
++				vchiq_mmal_port_parameter_set(
++					instance,
++					dev->capture.
++					camera_port,
++					MMAL_PARAMETER_CAPTURE,
++					&dev->capture.
++					frame_count,
++					sizeof(dev->capture.frame_count));
++			}
++		} else {
++			/* signal frame completion */
++			complete(&dev->capture.frame_cmplt);
++		}
++	} else {
++		if (dev->capture.frame_count) {
++			if (dev->capture.vc_start_timestamp != -1 &&
++			    pts != 0) {
++				s64 runtime_us = pts -
++				    dev->capture.vc_start_timestamp;
++				u32 div = 0;
++				u32 rem = 0;
++
++				div =
++				    div_u64_rem(runtime_us, USEC_PER_SEC, &rem);
++				buf->vb.timestamp.tv_sec =
++				    dev->capture.kernel_start_ts.tv_sec - 1 +
++				    div;
++				buf->vb.timestamp.tv_usec =
++				    dev->capture.kernel_start_ts.tv_usec + rem;
++
++				if (buf->vb.timestamp.tv_usec >=
++				    USEC_PER_SEC) {
++					buf->vb.timestamp.tv_sec++;
++					buf->vb.timestamp.tv_usec -=
++					    USEC_PER_SEC;
++				}
++				v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev,
++					 "Convert start time %d.%06d and %llu "
++					 "with offset %llu to %d.%06d\n",
++					 (int)dev->capture.kernel_start_ts.
++					 tv_sec,
++					 (int)dev->capture.kernel_start_ts.
++					 tv_usec,
++					 dev->capture.vc_start_timestamp, pts,
++					 (int)buf->vb.timestamp.tv_sec,
++					 (int)buf->vb.timestamp.
++					 tv_usec);
++			} else {
++				v4l2_get_timestamp(&buf->vb.timestamp);
++			}
++
++			vb2_set_plane_payload(&buf->vb.vb2_buf, 0, length);
++			vb2_buffer_done(&buf->vb.vb2_buf, VB2_BUF_STATE_DONE);
++
++			if (mmal_flags & MMAL_BUFFER_HEADER_FLAG_EOS &&
++			    is_capturing(dev)) {
++				v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev,
++					 "Grab another frame as buffer has EOS");
++				vchiq_mmal_port_parameter_set(
++					instance,
++					dev->capture.
++					camera_port,
++					MMAL_PARAMETER_CAPTURE,
++					&dev->capture.
++					frame_count,
++					sizeof(dev->capture.frame_count));
++			}
++		} else {
++			/* signal frame completion */
++			vb2_buffer_done(&buf->vb.vb2_buf, VB2_BUF_STATE_ERROR);
++			complete(&dev->capture.frame_cmplt);
++		}
++	}
++}
++
++static int enable_camera(struct bm2835_mmal_dev *dev)
++{
++	int ret;
++	if (!dev->camera_use_count) {
++		ret = vchiq_mmal_component_enable(
++				dev->instance,
++				dev->component[MMAL_COMPONENT_CAMERA]);
++		if (ret < 0) {
++			v4l2_err(&dev->v4l2_dev,
++				 "Failed enabling camera, ret %d\n", ret);
++			return -EINVAL;
++		}
++	}
++	dev->camera_use_count++;
++	v4l2_dbg(1, bcm2835_v4l2_debug,
++		 &dev->v4l2_dev, "enabled camera (refcount %d)\n",
++			dev->camera_use_count);
++	return 0;
++}
++
++static int disable_camera(struct bm2835_mmal_dev *dev)
++{
++	int ret;
++	if (!dev->camera_use_count) {
++		v4l2_err(&dev->v4l2_dev,
++			 "Disabled the camera when already disabled\n");
++		return -EINVAL;
++	}
++	dev->camera_use_count--;
++	if (!dev->camera_use_count) {
++		unsigned int i = 0xFFFFFFFF;
++		v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev,
++			 "Disabling camera\n");
++		ret =
++		    vchiq_mmal_component_disable(
++				dev->instance,
++				dev->component[MMAL_COMPONENT_CAMERA]);
++		if (ret < 0) {
++			v4l2_err(&dev->v4l2_dev,
++				 "Failed disabling camera, ret %d\n", ret);
++			return -EINVAL;
++		}
++		vchiq_mmal_port_parameter_set(
++			dev->instance,
++			&dev->component[MMAL_COMPONENT_CAMERA]->control,
++			MMAL_PARAMETER_CAMERA_NUM, &i,
++			sizeof(i));
++	}
++	v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev,
++		 "Camera refcount now %d\n", dev->camera_use_count);
++	return 0;
++}
++
++static void buffer_queue(struct vb2_buffer *vb)
++{
++	struct bm2835_mmal_dev *dev = vb2_get_drv_priv(vb->vb2_queue);
++	struct vb2_v4l2_buffer *vb2 = to_vb2_v4l2_buffer(vb);
++	struct mmal_buffer *buf = container_of(vb2, struct mmal_buffer, vb);
++	int ret;
++
++	v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev,
++		 "%s: dev:%p buf:%p\n", __func__, dev, buf);
++
++	buf->buffer = vb2_plane_vaddr(&buf->vb.vb2_buf, 0);
++	buf->buffer_size = vb2_plane_size(&buf->vb.vb2_buf, 0);
++
++	ret = vchiq_mmal_submit_buffer(dev->instance, dev->capture.port, buf);
++	if (ret < 0)
++		v4l2_err(&dev->v4l2_dev, "%s: error submitting buffer\n",
++			 __func__);
++}
++
++static int start_streaming(struct vb2_queue *vq, unsigned int count)
++{
++	struct bm2835_mmal_dev *dev = vb2_get_drv_priv(vq);
++	int ret;
++	int parameter_size;
++
++	v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev, "%s: dev:%p\n",
++		 __func__, dev);
++
++	/* ensure a format has actually been set */
++	if (dev->capture.port == NULL)
++		return -EINVAL;
++
++	if (enable_camera(dev) < 0) {
++		v4l2_err(&dev->v4l2_dev, "Failed to enable camera\n");
++		return -EINVAL;
++	}
++
++	/*init_completion(&dev->capture.frame_cmplt); */
++
++	/* enable frame capture */
++	dev->capture.frame_count = 1;
++
++	/* if the preview is not already running, wait for a few frames for AGC
++	 * to settle down.
++	 */
++	if (!dev->component[MMAL_COMPONENT_PREVIEW]->enabled)
++		msleep(300);
++
++	/* enable the connection from camera to encoder (if applicable) */
++	if (dev->capture.camera_port != dev->capture.port
++	    && dev->capture.camera_port) {
++		ret = vchiq_mmal_port_enable(dev->instance,
++					     dev->capture.camera_port, NULL);
++		if (ret) {
++			v4l2_err(&dev->v4l2_dev,
++				 "Failed to enable encode tunnel - error %d\n",
++				 ret);
++			return -1;
++		}
++	}
++
++	/* Get VC timestamp at this point in time */
++	parameter_size = sizeof(dev->capture.vc_start_timestamp);
++	if (vchiq_mmal_port_parameter_get(dev->instance,
++					  dev->capture.camera_port,
++					  MMAL_PARAMETER_SYSTEM_TIME,
++					  &dev->capture.vc_start_timestamp,
++					  &parameter_size)) {
++		v4l2_err(&dev->v4l2_dev,
++			 "Failed to get VC start time - update your VC f/w\n");
++
++		/* Flag to indicate just to rely on kernel timestamps */
++		dev->capture.vc_start_timestamp = -1;
++	} else
++		v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev,
++			 "Start time %lld size %d\n",
++			 dev->capture.vc_start_timestamp, parameter_size);
++
++	v4l2_get_timestamp(&dev->capture.kernel_start_ts);
++
++	/* enable the camera port */
++	dev->capture.port->cb_ctx = dev;
++	ret = vchiq_mmal_port_enable(dev->instance, dev->capture.port,
++				     buffer_cb);
++	if (ret) {
++		v4l2_err(&dev->v4l2_dev,
++			"Failed to enable capture port - error %d. "
++			"Disabling camera port again\n", ret);
++
++		vchiq_mmal_port_disable(dev->instance,
++					dev->capture.camera_port);
++		if (disable_camera(dev) < 0) {
++			v4l2_err(&dev->v4l2_dev, "Failed to disable camera\n");
++			return -EINVAL;
++		}
++		return -1;
++	}
++
++	/* capture the first frame */
++	vchiq_mmal_port_parameter_set(dev->instance,
++				      dev->capture.camera_port,
++				      MMAL_PARAMETER_CAPTURE,
++				      &dev->capture.frame_count,
++				      sizeof(dev->capture.frame_count));
++	return 0;
++}
++
++/* abort streaming and wait for last buffer */
++static void stop_streaming(struct vb2_queue *vq)
++{
++	int ret;
++	struct bm2835_mmal_dev *dev = vb2_get_drv_priv(vq);
++
++	v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev, "%s: dev:%p\n",
++		 __func__, dev);
++
++	init_completion(&dev->capture.frame_cmplt);
++	dev->capture.frame_count = 0;
++
++	/* ensure a format has actually been set */
++	if (dev->capture.port == NULL) {
++		v4l2_err(&dev->v4l2_dev,
++			 "no capture port - stream not started?\n");
++		return;
++	}
++
++	v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev, "stopping capturing\n");
++
++	/* stop capturing frames */
++	vchiq_mmal_port_parameter_set(dev->instance,
++				      dev->capture.camera_port,
++				      MMAL_PARAMETER_CAPTURE,
++				      &dev->capture.frame_count,
++				      sizeof(dev->capture.frame_count));
++
++	/* wait for last frame to complete */
++	ret = wait_for_completion_timeout(&dev->capture.frame_cmplt, HZ);
++	if (ret <= 0)
++		v4l2_err(&dev->v4l2_dev,
++			 "error %d waiting for frame completion\n", ret);
++
++	v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev,
++		 "disabling connection\n");
++
++	/* disable the connection from camera to encoder */
++	ret = vchiq_mmal_port_disable(dev->instance, dev->capture.camera_port);
++	if (!ret && dev->capture.camera_port != dev->capture.port) {
++		v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev,
++			 "disabling port\n");
++		ret = vchiq_mmal_port_disable(dev->instance, dev->capture.port);
++	} else if (dev->capture.camera_port != dev->capture.port) {
++		v4l2_err(&dev->v4l2_dev, "port_disable failed, error %d\n",
++			 ret);
++	}
++
++	if (disable_camera(dev) < 0)
++		v4l2_err(&dev->v4l2_dev, "Failed to disable camera\n");
++}
++
++static void bm2835_mmal_lock(struct vb2_queue *vq)
++{
++	struct bm2835_mmal_dev *dev = vb2_get_drv_priv(vq);
++	mutex_lock(&dev->mutex);
++}
++
++static void bm2835_mmal_unlock(struct vb2_queue *vq)
++{
++	struct bm2835_mmal_dev *dev = vb2_get_drv_priv(vq);
++	mutex_unlock(&dev->mutex);
++}
++
++static struct vb2_ops bm2835_mmal_video_qops = {
++	.queue_setup = queue_setup,
++	.buf_prepare = buffer_prepare,
++	.buf_queue = buffer_queue,
++	.start_streaming = start_streaming,
++	.stop_streaming = stop_streaming,
++	.wait_prepare = bm2835_mmal_unlock,
++	.wait_finish = bm2835_mmal_lock,
++};
++
++/* ------------------------------------------------------------------
++	IOCTL operations
++   ------------------------------------------------------------------*/
++
++/* overlay ioctl */
++static int vidioc_enum_fmt_vid_overlay(struct file *file, void *priv,
++				       struct v4l2_fmtdesc *f)
++{
++	struct mmal_fmt *fmt;
++
++	if (f->index >= ARRAY_SIZE(formats))
++		return -EINVAL;
++
++	fmt = &formats[f->index];
++
++	strlcpy(f->description, fmt->name, sizeof(f->description));
++	f->pixelformat = fmt->fourcc;
++	f->flags = fmt->flags;
++
++	return 0;
++}
++
++static int vidioc_g_fmt_vid_overlay(struct file *file, void *priv,
++				    struct v4l2_format *f)
++{
++	struct bm2835_mmal_dev *dev = video_drvdata(file);
++
++	f->fmt.win = dev->overlay;
++
++	return 0;
++}
++
++static int vidioc_try_fmt_vid_overlay(struct file *file, void *priv,
++				      struct v4l2_format *f)
++{
++	/* Only support one format so get the current one. */
++	vidioc_g_fmt_vid_overlay(file, priv, f);
++
++	/* todo: allow the size and/or offset to be changed. */
++	return 0;
++}
++
++static int vidioc_s_fmt_vid_overlay(struct file *file, void *priv,
++				    struct v4l2_format *f)
++{
++	struct bm2835_mmal_dev *dev = video_drvdata(file);
++
++	vidioc_try_fmt_vid_overlay(file, priv, f);
++
++	dev->overlay = f->fmt.win;
++
++	/* todo: program the preview port parameters */
++	return 0;
++}
++
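++/* Enable or disable the preview overlay: build (or tear down) the tunnel
++ * between the camera preview port and the video_render component.
++ */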
++static int vidioc_overlay(struct file *file, void *f, unsigned int on)
++{
++	int ret;
++	struct bm2835_mmal_dev *dev = video_drvdata(file);
++	struct vchiq_mmal_port *src;
++	struct vchiq_mmal_port *dst;
++	struct mmal_parameter_displayregion prev_config = {
++		.set = MMAL_DISPLAY_SET_LAYER | MMAL_DISPLAY_SET_ALPHA |
++		    MMAL_DISPLAY_SET_DEST_RECT | MMAL_DISPLAY_SET_FULLSCREEN,
++		.layer = PREVIEW_LAYER,
++		.alpha = 255,
++		.fullscreen = 0,
++		.dest_rect = {
++			      .x = dev->overlay.w.left,
++			      .y = dev->overlay.w.top,
++			      .width = dev->overlay.w.width,
++			      .height = dev->overlay.w.height,
++			      },
++	};
++
++	if ((on && dev->component[MMAL_COMPONENT_PREVIEW]->enabled) ||
++	    (!on && !dev->component[MMAL_COMPONENT_PREVIEW]->enabled))
++		return 0;	/* already in requested state */
++
++	src =
++	    &dev->component[MMAL_COMPONENT_CAMERA]->
++	    output[MMAL_CAMERA_PORT_PREVIEW];
++
++	if (!on) {
++		/* disconnect preview ports and disable component */
++		ret = vchiq_mmal_port_disable(dev->instance, src);
++		if (!ret)
++			ret =
++			    vchiq_mmal_port_connect_tunnel(dev->instance, src,
++							   NULL);
++		if (ret >= 0)
++			ret = vchiq_mmal_component_disable(
++					dev->instance,
++					dev->component[MMAL_COMPONENT_PREVIEW]);
++
++		disable_camera(dev);
++		return ret;
++	}
++
++	/* set preview port format and connect it to output */
++	dst = &dev->component[MMAL_COMPONENT_PREVIEW]->input[0];
++
++	ret = vchiq_mmal_port_set_format(dev->instance, src);
++	if (ret < 0)
++		goto error;
++
++	ret = vchiq_mmal_port_parameter_set(dev->instance, dst,
++					    MMAL_PARAMETER_DISPLAYREGION,
++					    &prev_config, sizeof(prev_config));
++	if (ret < 0)
++		goto error;
++
++	if (enable_camera(dev) < 0)
++		goto error;
++
++	ret = vchiq_mmal_component_enable(
++			dev->instance,
++			dev->component[MMAL_COMPONENT_PREVIEW]);
++	if (ret < 0)
++		goto error;
++
++	v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev, "connecting %p to %p\n",
++		 src, dst);
++	ret = vchiq_mmal_port_connect_tunnel(dev->instance, src, dst);
++	if (!ret)
++		ret = vchiq_mmal_port_enable(dev->instance, src, NULL);
++error:
++	return ret;
++}
++
++static int vidioc_g_fbuf(struct file *file, void *fh,
++			 struct v4l2_framebuffer *a)
++{
++	/* The video overlay must stay within the framebuffer and can't be
++	 * positioned independently.
++	 */
++	struct bm2835_mmal_dev *dev = video_drvdata(file);
++	struct vchiq_mmal_port *preview_port =
++		    &dev->component[MMAL_COMPONENT_CAMERA]->
++		    output[MMAL_CAMERA_PORT_PREVIEW];
++	a->flags = V4L2_FBUF_FLAG_OVERLAY;
++	a->fmt.width = preview_port->es.video.width;
++	a->fmt.height = preview_port->es.video.height;
++	a->fmt.pixelformat = V4L2_PIX_FMT_YUV420;
++	a->fmt.bytesperline = preview_port->es.video.width;
++	a->fmt.sizeimage = (preview_port->es.video.width *
++			       preview_port->es.video.height * 3)>>1;
++	a->fmt.colorspace = V4L2_COLORSPACE_SMPTE170M;
++
++	return 0;
++}
++
++/* input ioctls */
++static int vidioc_enum_input(struct file *file, void *priv,
++			     struct v4l2_input *inp)
++{
++	/* only a single camera input */
++	if (inp->index != 0)
++		return -EINVAL;
++
++	inp->type = V4L2_INPUT_TYPE_CAMERA;
++	sprintf(inp->name, "Camera %u", inp->index);
++	return 0;
++}
++
++static int vidioc_g_input(struct file *file, void *priv, unsigned int *i)
++{
++	*i = 0;
++	return 0;
++}
++
++static int vidioc_s_input(struct file *file, void *priv, unsigned int i)
++{
++	if (i != 0)
++		return -EINVAL;
++
++	return 0;
++}
++
++/* capture ioctls */
++static int vidioc_querycap(struct file *file, void *priv,
++			   struct v4l2_capability *cap)
++{
++	struct bm2835_mmal_dev *dev = video_drvdata(file);
++	u32 major;
++	u32 minor;
++
++	vchiq_mmal_version(dev->instance, &major, &minor);
++
++	strcpy(cap->driver, "bm2835 mmal");
++	snprintf(cap->card, sizeof(cap->card), "mmal service %d.%d",
++		 major, minor);
++
++	snprintf(cap->bus_info, sizeof(cap->bus_info),
++		 "platform:%s", dev->v4l2_dev.name);
++	cap->device_caps = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_VIDEO_OVERLAY |
++	    V4L2_CAP_STREAMING | V4L2_CAP_READWRITE;
++	cap->capabilities = cap->device_caps | V4L2_CAP_DEVICE_CAPS;
++
++	return 0;
++}
++
++static int vidioc_enum_fmt_vid_cap(struct file *file, void *priv,
++				   struct v4l2_fmtdesc *f)
++{
++	struct mmal_fmt *fmt;
++
++	if (f->index >= ARRAY_SIZE(formats))
++		return -EINVAL;
++
++	fmt = &formats[f->index];
++
++	strlcpy(f->description, fmt->name, sizeof(f->description));
++	f->pixelformat = fmt->fourcc;
++	f->flags = fmt->flags;
++
++	return 0;
++}
++
++static int vidioc_g_fmt_vid_cap(struct file *file, void *priv,
++				struct v4l2_format *f)
++{
++	struct bm2835_mmal_dev *dev = video_drvdata(file);
++
++	f->fmt.pix.width = dev->capture.width;
++	f->fmt.pix.height = dev->capture.height;
++	f->fmt.pix.field = V4L2_FIELD_NONE;
++	f->fmt.pix.pixelformat = dev->capture.fmt->fourcc;
++	f->fmt.pix.bytesperline = dev->capture.stride;
++	f->fmt.pix.sizeimage = dev->capture.buffersize;
++
++	if (dev->capture.fmt->fourcc == V4L2_PIX_FMT_RGB24)
++		f->fmt.pix.colorspace = V4L2_COLORSPACE_SRGB;
++	else if (dev->capture.fmt->fourcc == V4L2_PIX_FMT_JPEG)
++		f->fmt.pix.colorspace = V4L2_COLORSPACE_JPEG;
++	else
++		f->fmt.pix.colorspace = V4L2_COLORSPACE_SMPTE170M;
++	f->fmt.pix.priv = 0;
++
++	v4l2_dump_pix_format(1, bcm2835_v4l2_debug, &dev->v4l2_dev, &f->fmt.pix,
++			     __func__);
++	return 0;
++}
++
++static int vidioc_try_fmt_vid_cap(struct file *file, void *priv,
++				  struct v4l2_format *f)
++{
++	struct bm2835_mmal_dev *dev = video_drvdata(file);
++	struct mmal_fmt *mfmt;
++
++	mfmt = get_format(f);
++	if (!mfmt) {
++		v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev,
++			 "Fourcc format (0x%08x) unknown.\n",
++			 f->fmt.pix.pixelformat);
++		f->fmt.pix.pixelformat = formats[0].fourcc;
++		mfmt = get_format(f);
++	}
++
++	f->fmt.pix.field = V4L2_FIELD_NONE;
++
++	v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev,
++		"Clipping/aligning %dx%d format %08X\n",
++		f->fmt.pix.width, f->fmt.pix.height, f->fmt.pix.pixelformat);
++
++	v4l_bound_align_image(&f->fmt.pix.width, MIN_WIDTH, MAX_WIDTH, 1,
++			      &f->fmt.pix.height, MIN_HEIGHT, MAX_HEIGHT, 1, 0);
++	f->fmt.pix.bytesperline = f->fmt.pix.width * mfmt->ybbp;
++
++	/* Image buffer has to be padded to allow for alignment, even though
++	 * we then remove that padding before delivering the buffer.
++	 */
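++	/* Round the height up to a multiple of 16 and the width up to a
++	 * multiple of 32, multiply by the bits per pixel and convert to
++	 * bytes.
++	 */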
++	f->fmt.pix.sizeimage = ((f->fmt.pix.height+15)&~15) *
++			(((f->fmt.pix.width+31)&~31) * mfmt->depth) >> 3;
++
++	if ((mfmt->flags & V4L2_FMT_FLAG_COMPRESSED) &&
++	    f->fmt.pix.sizeimage < MIN_BUFFER_SIZE)
++		f->fmt.pix.sizeimage = MIN_BUFFER_SIZE;
++
++	if (f->fmt.pix.pixelformat == V4L2_PIX_FMT_RGB24)
++		f->fmt.pix.colorspace = V4L2_COLORSPACE_SRGB;
++	else if (f->fmt.pix.pixelformat == V4L2_PIX_FMT_JPEG)
++		f->fmt.pix.colorspace = V4L2_COLORSPACE_JPEG;
++	else
++		f->fmt.pix.colorspace = V4L2_COLORSPACE_SMPTE170M;
++	f->fmt.pix.priv = 0;
++
++	v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev,
++		"Now %dx%d format %08X\n",
++		f->fmt.pix.width, f->fmt.pix.height, f->fmt.pix.pixelformat);
++
++	v4l2_dump_pix_format(1, bcm2835_v4l2_debug, &dev->v4l2_dev, &f->fmt.pix,
++			     __func__);
++	return 0;
++}
++
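++/* Reconfigure the MMAL components for the requested capture format: pick
++ * the video or stills camera port based on the resolution, and connect an
++ * image or video encoder when the format requires one.
++ */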
++static int mmal_setup_components(struct bm2835_mmal_dev *dev,
++				 struct v4l2_format *f)
++{
++	int ret;
++	struct vchiq_mmal_port *port = NULL, *camera_port = NULL;
++	struct vchiq_mmal_component *encode_component = NULL;
++	struct mmal_fmt *mfmt = get_format(f);
++
++	BUG_ON(!mfmt);
++
++	if (dev->capture.encode_component) {
++		v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev,
++			 "vid_cap - disconnect previous tunnel\n");
++
++		/* Disconnect any previous connection */
++		vchiq_mmal_port_connect_tunnel(dev->instance,
++					       dev->capture.camera_port, NULL);
++		dev->capture.camera_port = NULL;
++		ret = vchiq_mmal_component_disable(
++				dev->instance,
++				dev->capture.encode_component);
++		if (ret)
++			v4l2_err(&dev->v4l2_dev,
++				 "Failed to disable encode component %d\n",
++				 ret);
++
++		dev->capture.encode_component = NULL;
++	}
++	/* format dependent port setup */
++	switch (mfmt->mmal_component) {
++	case MMAL_COMPONENT_CAMERA:
++		/* Make a further decision on port based on resolution */
++		if (f->fmt.pix.width <= max_video_width
++		    && f->fmt.pix.height <= max_video_height)
++			camera_port = port =
++			    &dev->component[MMAL_COMPONENT_CAMERA]->
++			    output[MMAL_CAMERA_PORT_VIDEO];
++		else
++			camera_port = port =
++			    &dev->component[MMAL_COMPONENT_CAMERA]->
++			    output[MMAL_CAMERA_PORT_CAPTURE];
++		break;
++	case MMAL_COMPONENT_IMAGE_ENCODE:
++		encode_component = dev->component[MMAL_COMPONENT_IMAGE_ENCODE];
++		port = &dev->component[MMAL_COMPONENT_IMAGE_ENCODE]->output[0];
++		camera_port =
++		    &dev->component[MMAL_COMPONENT_CAMERA]->
++		    output[MMAL_CAMERA_PORT_CAPTURE];
++		break;
++	case MMAL_COMPONENT_VIDEO_ENCODE:
++		encode_component = dev->component[MMAL_COMPONENT_VIDEO_ENCODE];
++		port = &dev->component[MMAL_COMPONENT_VIDEO_ENCODE]->output[0];
++		camera_port =
++		    &dev->component[MMAL_COMPONENT_CAMERA]->
++		    output[MMAL_CAMERA_PORT_VIDEO];
++		break;
++	default:
++		break;
++	}
++
++	if (!port)
++		return -EINVAL;
++
++	if (encode_component)
++		camera_port->format.encoding = MMAL_ENCODING_OPAQUE;
++	else
++		camera_port->format.encoding = mfmt->mmal;
++
++	camera_port->format.encoding_variant = 0;
++	camera_port->es.video.width = f->fmt.pix.width;
++	camera_port->es.video.height = f->fmt.pix.height;
++	camera_port->es.video.crop.x = 0;
++	camera_port->es.video.crop.y = 0;
++	camera_port->es.video.crop.width = f->fmt.pix.width;
++	camera_port->es.video.crop.height = f->fmt.pix.height;
++	camera_port->es.video.frame_rate.num = 0;
++	camera_port->es.video.frame_rate.den = 1;
++	camera_port->es.video.color_space = MMAL_COLOR_SPACE_JPEG_JFIF;
++
++	ret = vchiq_mmal_port_set_format(dev->instance, camera_port);
++
++	if (!ret
++	    && camera_port ==
++	    &dev->component[MMAL_COMPONENT_CAMERA]->
++	    output[MMAL_CAMERA_PORT_VIDEO]) {
++		bool overlay_enabled =
++		    !!dev->component[MMAL_COMPONENT_PREVIEW]->enabled;
++		struct vchiq_mmal_port *preview_port =
++		    &dev->component[MMAL_COMPONENT_CAMERA]->
++		    output[MMAL_CAMERA_PORT_PREVIEW];
++		/* Preview and encode ports need to match on resolution */
++		if (overlay_enabled) {
++			/* Need to disable the overlay before we can update
++			 * the resolution
++			 */
++			ret =
++			    vchiq_mmal_port_disable(dev->instance,
++						    preview_port);
++			if (!ret)
++				ret =
++				    vchiq_mmal_port_connect_tunnel(
++						dev->instance,
++						preview_port,
++						NULL);
++		}
++		preview_port->es.video.width = f->fmt.pix.width;
++		preview_port->es.video.height = f->fmt.pix.height;
++		preview_port->es.video.crop.x = 0;
++		preview_port->es.video.crop.y = 0;
++		preview_port->es.video.crop.width = f->fmt.pix.width;
++		preview_port->es.video.crop.height = f->fmt.pix.height;
++		preview_port->es.video.frame_rate.num =
++					  dev->capture.timeperframe.denominator;
++		preview_port->es.video.frame_rate.den =
++					  dev->capture.timeperframe.numerator;
++		ret = vchiq_mmal_port_set_format(dev->instance, preview_port);
++		if (overlay_enabled) {
++			ret = vchiq_mmal_port_connect_tunnel(
++				dev->instance,
++				preview_port,
++				&dev->component[MMAL_COMPONENT_PREVIEW]->input[0]);
++			if (!ret)
++				ret = vchiq_mmal_port_enable(dev->instance,
++							     preview_port,
++							     NULL);
++		}
++	}
++
++	if (ret) {
++		v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev,
++			 "%s failed to set format %dx%d %08X\n", __func__,
++			 f->fmt.pix.width, f->fmt.pix.height,
++			 f->fmt.pix.pixelformat);
++		/* ensure capture is not going to be tried */
++		dev->capture.port = NULL;
++	} else {
++		if (encode_component) {
++			v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev,
++				 "vid_cap - set up encode comp\n");
++
++			/* configure buffering */
++			camera_port->current_buffer.size =
++			    camera_port->recommended_buffer.size;
++			camera_port->current_buffer.num =
++			    camera_port->recommended_buffer.num;
++
++			ret =
++			    vchiq_mmal_port_connect_tunnel(
++					dev->instance,
++					camera_port,
++					&encode_component->input[0]);
++			if (ret) {
++				v4l2_dbg(1, bcm2835_v4l2_debug,
++					 &dev->v4l2_dev,
++					 "%s failed to create connection\n",
++					 __func__);
++				/* ensure capture is not going to be tried */
++				dev->capture.port = NULL;
++			} else {
++				port->es.video.width = f->fmt.pix.width;
++				port->es.video.height = f->fmt.pix.height;
++				port->es.video.crop.x = 0;
++				port->es.video.crop.y = 0;
++				port->es.video.crop.width = f->fmt.pix.width;
++				port->es.video.crop.height = f->fmt.pix.height;
++				port->es.video.frame_rate.num =
++					  dev->capture.timeperframe.denominator;
++				port->es.video.frame_rate.den =
++					  dev->capture.timeperframe.numerator;
++
++				port->format.encoding = mfmt->mmal;
++				port->format.encoding_variant = 0;
++				/* Set any encoding specific parameters */
++				switch (mfmt->mmal_component) {
++				case MMAL_COMPONENT_VIDEO_ENCODE:
++					port->format.bitrate =
++					    dev->capture.encode_bitrate;
++					break;
++				case MMAL_COMPONENT_IMAGE_ENCODE:
++					/* Could set EXIF parameters here */
++					break;
++				default:
++					break;
++				}
++				ret = vchiq_mmal_port_set_format(dev->instance,
++								 port);
++				if (ret)
++					v4l2_dbg(1, bcm2835_v4l2_debug,
++						 &dev->v4l2_dev,
++						 "%s failed to set format %dx%d fmt %08X\n",
++						 __func__,
++						 f->fmt.pix.width,
++						 f->fmt.pix.height,
++						 f->fmt.pix.pixelformat
++						 );
++			}
++
++			if (!ret) {
++				ret = vchiq_mmal_component_enable(
++						dev->instance,
++						encode_component);
++				if (ret) {
++					v4l2_dbg(1, bcm2835_v4l2_debug,
++					   &dev->v4l2_dev,
++					   "%s Failed to enable encode components\n",
++					   __func__);
++				}
++			}
++			if (!ret) {
++				/* configure buffering */
++				port->current_buffer.num = 1;
++				port->current_buffer.size =
++				    f->fmt.pix.sizeimage;
++				if (port->format.encoding ==
++				    MMAL_ENCODING_JPEG) {
++					v4l2_dbg(1, bcm2835_v4l2_debug,
++					    &dev->v4l2_dev,
++					    "JPG - buf size now %d was %d\n",
++					    f->fmt.pix.sizeimage,
++					    port->current_buffer.size);
++					port->current_buffer.size =
++					    (f->fmt.pix.sizeimage <
++					     (100 << 10)) ?
++					    (100 << 10) :
++					    f->fmt.pix.sizeimage;
++				}
++				v4l2_dbg(1, bcm2835_v4l2_debug,
++					 &dev->v4l2_dev,
++					 "vid_cap - cur_buf.size set to %d\n",
++					 f->fmt.pix.sizeimage);
++				port->current_buffer.alignment = 0;
++			}
++		} else {
++			/* configure buffering */
++			camera_port->current_buffer.num = 1;
++			camera_port->current_buffer.size = f->fmt.pix.sizeimage;
++			camera_port->current_buffer.alignment = 0;
++		}
++
++		if (!ret) {
++			dev->capture.fmt = mfmt;
++			dev->capture.stride = f->fmt.pix.bytesperline;
++			dev->capture.width = camera_port->es.video.crop.width;
++			dev->capture.height = camera_port->es.video.crop.height;
++			dev->capture.buffersize = port->current_buffer.size;
++
++			/* select port for capture */
++			dev->capture.port = port;
++			dev->capture.camera_port = camera_port;
++			dev->capture.encode_component = encode_component;
++			v4l2_dbg(1, bcm2835_v4l2_debug,
++				 &dev->v4l2_dev,
++				"Set dev->capture.fmt %08X, %dx%d, stride %d, size %d",
++				port->format.encoding,
++				dev->capture.width, dev->capture.height,
++				dev->capture.stride, dev->capture.buffersize);
++		}
++	}
++
++	/* todo: Need to convert the vchiq/mmal error into a v4l2 error. */
++	return ret;
++}
++
++static int vidioc_s_fmt_vid_cap(struct file *file, void *priv,
++				struct v4l2_format *f)
++{
++	int ret;
++	struct bm2835_mmal_dev *dev = video_drvdata(file);
++	struct mmal_fmt *mfmt;
++
++	/* try the format to set valid parameters */
++	ret = vidioc_try_fmt_vid_cap(file, priv, f);
++	if (ret) {
++		v4l2_err(&dev->v4l2_dev,
++			 "vid_cap - vidioc_try_fmt_vid_cap failed\n");
++		return ret;
++	}
++
++	/* if a capture is running refuse to set format */
++	if (vb2_is_busy(&dev->capture.vb_vidq)) {
++		v4l2_info(&dev->v4l2_dev, "%s device busy\n", __func__);
++		return -EBUSY;
++	}
++
++	/* If the format is unsupported v4l2 says we should switch to
++	 * a supported one and not return an error. */
++	mfmt = get_format(f);
++	if (!mfmt) {
++		v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev,
++			 "Fourcc format (0x%08x) unknown.\n",
++			 f->fmt.pix.pixelformat);
++		f->fmt.pix.pixelformat = formats[0].fourcc;
++		mfmt = get_format(f);
++	}
++
++	ret = mmal_setup_components(dev, f);
++	if (ret != 0) {
++		v4l2_err(&dev->v4l2_dev,
++			 "%s: failed to setup mmal components: %d\n",
++			 __func__, ret);
++		ret = -EINVAL;
++	}
++
++	return ret;
++}
++
++static int vidioc_enum_framesizes(struct file *file, void *fh,
++				  struct v4l2_frmsizeenum *fsize)
++{
++	static const struct v4l2_frmsize_stepwise sizes = {
++		MIN_WIDTH, MAX_WIDTH, 2,
++		MIN_HEIGHT, MAX_HEIGHT, 2
++	};
++	int i;
++
++	if (fsize->index)
++		return -EINVAL;
++	for (i = 0; i < ARRAY_SIZE(formats); i++)
++		if (formats[i].fourcc == fsize->pixel_format)
++			break;
++	if (i == ARRAY_SIZE(formats))
++		return -EINVAL;
++	fsize->type = V4L2_FRMSIZE_TYPE_STEPWISE;
++	fsize->stepwise = sizes;
++	return 0;
++}
++
++/* timeperframe is arbitrary and continuous */
++static int vidioc_enum_frameintervals(struct file *file, void *priv,
++					     struct v4l2_frmivalenum *fival)
++{
++	int i;
++
++	if (fival->index)
++		return -EINVAL;
++
++	for (i = 0; i < ARRAY_SIZE(formats); i++)
++		if (formats[i].fourcc == fival->pixel_format)
++			break;
++	if (i == ARRAY_SIZE(formats))
++		return -EINVAL;
++
++	/* regarding width & height - we support any within range */
++	if (fival->width < MIN_WIDTH || fival->width > MAX_WIDTH ||
++	    fival->height < MIN_HEIGHT || fival->height > MAX_HEIGHT)
++		return -EINVAL;
++
++	fival->type = V4L2_FRMIVAL_TYPE_CONTINUOUS;
++
++	/* fill in stepwise (step=1.0 is required by V4L2 spec) */
++	fival->stepwise.min  = tpf_min;
++	fival->stepwise.max  = tpf_max;
++	fival->stepwise.step = (struct v4l2_fract) {1, 1};
++
++	return 0;
++}
++
++static int vidioc_g_parm(struct file *file, void *priv,
++			  struct v4l2_streamparm *parm)
++{
++	struct bm2835_mmal_dev *dev = video_drvdata(file);
++
++	if (parm->type != V4L2_BUF_TYPE_VIDEO_CAPTURE)
++		return -EINVAL;
++
++	parm->parm.capture.capability   = V4L2_CAP_TIMEPERFRAME;
++	parm->parm.capture.timeperframe = dev->capture.timeperframe;
++	parm->parm.capture.readbuffers  = 1;
++	return 0;
++}
++
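++/* Compare two v4l2_fract values without dividing: FRACT_CMP(a, OP, b)
++ * evaluates (a.numerator/a.denominator) OP (b.numerator/b.denominator)
++ * by cross-multiplying.
++ */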
++#define FRACT_CMP(a, OP, b)	\
++	((u64)(a).numerator * (b).denominator  OP  \
++	 (u64)(b).numerator * (a).denominator)
++
++static int vidioc_s_parm(struct file *file, void *priv,
++			  struct v4l2_streamparm *parm)
++{
++	struct bm2835_mmal_dev *dev = video_drvdata(file);
++	struct v4l2_fract tpf;
++	struct mmal_parameter_rational fps_param;
++
++	if (parm->type != V4L2_BUF_TYPE_VIDEO_CAPTURE)
++		return -EINVAL;
++
++	tpf = parm->parm.capture.timeperframe;
++
++	/* tpf: {*, 0} resets to the default; clip to [tpf_min, tpf_max] */
++	tpf = tpf.denominator ? tpf : tpf_default;
++	tpf = FRACT_CMP(tpf, <, tpf_min) ? tpf_min : tpf;
++	tpf = FRACT_CMP(tpf, >, tpf_max) ? tpf_max : tpf;
++
++	dev->capture.timeperframe = tpf;
++	parm->parm.capture.timeperframe = tpf;
++	parm->parm.capture.readbuffers  = 1;
++
++	fps_param.num = 0;	/* Select variable fps, and then use
++				 * FPS_RANGE to select the actual limits.
++				 */
++	fps_param.den = 1;
++	set_framerate_params(dev);
++
++	return 0;
++}
++
++static const struct v4l2_ioctl_ops camera0_ioctl_ops = {
++	/* overlay */
++	.vidioc_enum_fmt_vid_overlay = vidioc_enum_fmt_vid_overlay,
++	.vidioc_g_fmt_vid_overlay = vidioc_g_fmt_vid_overlay,
++	.vidioc_try_fmt_vid_overlay = vidioc_try_fmt_vid_overlay,
++	.vidioc_s_fmt_vid_overlay = vidioc_s_fmt_vid_overlay,
++	.vidioc_overlay = vidioc_overlay,
++	.vidioc_g_fbuf = vidioc_g_fbuf,
++
++	/* inputs */
++	.vidioc_enum_input = vidioc_enum_input,
++	.vidioc_g_input = vidioc_g_input,
++	.vidioc_s_input = vidioc_s_input,
++
++	/* capture */
++	.vidioc_querycap = vidioc_querycap,
++	.vidioc_enum_fmt_vid_cap = vidioc_enum_fmt_vid_cap,
++	.vidioc_g_fmt_vid_cap = vidioc_g_fmt_vid_cap,
++	.vidioc_try_fmt_vid_cap = vidioc_try_fmt_vid_cap,
++	.vidioc_s_fmt_vid_cap = vidioc_s_fmt_vid_cap,
++
++	/* buffer management */
++	.vidioc_reqbufs = vb2_ioctl_reqbufs,
++	.vidioc_create_bufs = vb2_ioctl_create_bufs,
++	.vidioc_prepare_buf = vb2_ioctl_prepare_buf,
++	.vidioc_querybuf = vb2_ioctl_querybuf,
++	.vidioc_qbuf = vb2_ioctl_qbuf,
++	.vidioc_dqbuf = vb2_ioctl_dqbuf,
++	.vidioc_enum_framesizes = vidioc_enum_framesizes,
++	.vidioc_enum_frameintervals = vidioc_enum_frameintervals,
++	.vidioc_g_parm        = vidioc_g_parm,
++	.vidioc_s_parm        = vidioc_s_parm,
++	.vidioc_streamon = vb2_ioctl_streamon,
++	.vidioc_streamoff = vb2_ioctl_streamoff,
++
++	.vidioc_log_status = v4l2_ctrl_log_status,
++	.vidioc_subscribe_event = v4l2_ctrl_subscribe_event,
++	.vidioc_unsubscribe_event = v4l2_event_unsubscribe,
++};
++
++static const struct v4l2_ioctl_ops camera0_ioctl_ops_gstreamer = {
++	/* overlay */
++	.vidioc_enum_fmt_vid_overlay = vidioc_enum_fmt_vid_overlay,
++	.vidioc_g_fmt_vid_overlay = vidioc_g_fmt_vid_overlay,
++	.vidioc_try_fmt_vid_overlay = vidioc_try_fmt_vid_overlay,
++	.vidioc_s_fmt_vid_overlay = vidioc_s_fmt_vid_overlay,
++	.vidioc_overlay = vidioc_overlay,
++	.vidioc_g_fbuf = vidioc_g_fbuf,
++
++	/* inputs */
++	.vidioc_enum_input = vidioc_enum_input,
++	.vidioc_g_input = vidioc_g_input,
++	.vidioc_s_input = vidioc_s_input,
++
++	/* capture */
++	.vidioc_querycap = vidioc_querycap,
++	.vidioc_enum_fmt_vid_cap = vidioc_enum_fmt_vid_cap,
++	.vidioc_g_fmt_vid_cap = vidioc_g_fmt_vid_cap,
++	.vidioc_try_fmt_vid_cap = vidioc_try_fmt_vid_cap,
++	.vidioc_s_fmt_vid_cap = vidioc_s_fmt_vid_cap,
++
++	/* buffer management */
++	.vidioc_reqbufs = vb2_ioctl_reqbufs,
++	.vidioc_create_bufs = vb2_ioctl_create_bufs,
++	.vidioc_prepare_buf = vb2_ioctl_prepare_buf,
++	.vidioc_querybuf = vb2_ioctl_querybuf,
++	.vidioc_qbuf = vb2_ioctl_qbuf,
++	.vidioc_dqbuf = vb2_ioctl_dqbuf,
++	/* Remove this function ptr to fix gstreamer bug
++	.vidioc_enum_framesizes = vidioc_enum_framesizes, */
++	.vidioc_enum_frameintervals = vidioc_enum_frameintervals,
++	.vidioc_g_parm        = vidioc_g_parm,
++	.vidioc_s_parm        = vidioc_s_parm,
++	.vidioc_streamon = vb2_ioctl_streamon,
++	.vidioc_streamoff = vb2_ioctl_streamoff,
++
++	.vidioc_log_status = v4l2_ctrl_log_status,
++	.vidioc_subscribe_event = v4l2_ctrl_subscribe_event,
++	.vidioc_unsubscribe_event = v4l2_event_unsubscribe,
++};
++
++/* ------------------------------------------------------------------
++	Driver init/finalise
++   ------------------------------------------------------------------*/
++
++static const struct v4l2_file_operations camera0_fops = {
++	.owner = THIS_MODULE,
++	.open = v4l2_fh_open,
++	.release = vb2_fop_release,
++	.read = vb2_fop_read,
++	.poll = vb2_fop_poll,
++	.unlocked_ioctl = video_ioctl2,	/* V4L2 ioctl handler */
++	.mmap = vb2_fop_mmap,
++};
++
++static struct video_device vdev_template = {
++	.name = "camera0",
++	.fops = &camera0_fops,
++	.ioctl_ops = &camera0_ioctl_ops,
++	.release = video_device_release_empty,
++};
++
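++/* Pass the maximum stills and preview/video resolutions the driver will
++ * request to the camera component via MMAL_PARAMETER_CAMERA_CONFIG.
++ */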
++static int set_camera_parameters(struct vchiq_mmal_instance *instance,
++				 struct vchiq_mmal_component *camera)
++{
++	int ret;
++	struct mmal_parameter_camera_config cam_config = {
++		.max_stills_w = MAX_WIDTH,
++		.max_stills_h = MAX_HEIGHT,
++		.stills_yuv422 = 1,
++		.one_shot_stills = 1,
++		.max_preview_video_w = (max_video_width > 1920) ?
++						max_video_width : 1920,
++		.max_preview_video_h = (max_video_height > 1088) ?
++						max_video_height : 1088,
++		.num_preview_video_frames = 3,
++		.stills_capture_circular_buffer_height = 0,
++		.fast_preview_resume = 0,
++		.use_stc_timestamp = MMAL_PARAM_TIMESTAMP_MODE_RAW_STC
++	};
++
++	ret = vchiq_mmal_port_parameter_set(instance, &camera->control,
++					    MMAL_PARAMETER_CAMERA_CONFIG,
++					    &cam_config, sizeof(cam_config));
++	return ret;
++}
++
++/* MMAL instance and component init */
++static int __init mmal_init(struct bm2835_mmal_dev *dev)
++{
++	int ret;
++	struct mmal_es_format *format;
++	u32 bool_true = 1;
++
++	ret = vchiq_mmal_init(&dev->instance);
++	if (ret < 0)
++		return ret;
++
++	/* get the camera component ready */
++	ret = vchiq_mmal_component_init(dev->instance, "ril.camera",
++					&dev->component[MMAL_COMPONENT_CAMERA]);
++	if (ret < 0)
++		goto unreg_mmal;
++
++	if (dev->component[MMAL_COMPONENT_CAMERA]->outputs <
++	    MMAL_CAMERA_PORT_COUNT) {
++		ret = -EINVAL;
++		goto unreg_camera;
++	}
++
++	ret = set_camera_parameters(dev->instance,
++				    dev->component[MMAL_COMPONENT_CAMERA]);
++	if (ret < 0)
++		goto unreg_camera;
++
++	format =
++	    &dev->component[MMAL_COMPONENT_CAMERA]->
++	    output[MMAL_CAMERA_PORT_PREVIEW].format;
++
++	format->encoding = MMAL_ENCODING_OPAQUE;
++	format->encoding_variant = MMAL_ENCODING_I420;
++
++	format->es->video.width = 1024;
++	format->es->video.height = 768;
++	format->es->video.crop.x = 0;
++	format->es->video.crop.y = 0;
++	format->es->video.crop.width = 1024;
++	format->es->video.crop.height = 768;
++	format->es->video.frame_rate.num = 0; /* Rely on fps_range */
++	format->es->video.frame_rate.den = 1;
++
++	format =
++	    &dev->component[MMAL_COMPONENT_CAMERA]->
++	    output[MMAL_CAMERA_PORT_VIDEO].format;
++
++	format->encoding = MMAL_ENCODING_OPAQUE;
++	format->encoding_variant = MMAL_ENCODING_I420;
++
++	format->es->video.width = 1024;
++	format->es->video.height = 768;
++	format->es->video.crop.x = 0;
++	format->es->video.crop.y = 0;
++	format->es->video.crop.width = 1024;
++	format->es->video.crop.height = 768;
++	format->es->video.frame_rate.num = 0; /* Rely on fps_range */
++	format->es->video.frame_rate.den = 1;
++
++	vchiq_mmal_port_parameter_set(dev->instance,
++		&dev->component[MMAL_COMPONENT_CAMERA]->
++				output[MMAL_CAMERA_PORT_VIDEO],
++		MMAL_PARAMETER_NO_IMAGE_PADDING,
++		&bool_true, sizeof(bool_true));
++
++	format =
++	    &dev->component[MMAL_COMPONENT_CAMERA]->
++	    output[MMAL_CAMERA_PORT_CAPTURE].format;
++
++	format->encoding = MMAL_ENCODING_OPAQUE;
++
++	format->es->video.width = 2592;
++	format->es->video.height = 1944;
++	format->es->video.crop.x = 0;
++	format->es->video.crop.y = 0;
++	format->es->video.crop.width = 2592;
++	format->es->video.crop.height = 1944;
++	format->es->video.frame_rate.num = 0; /* Rely on fps_range */
++	format->es->video.frame_rate.den = 1;
++
++	dev->capture.width = format->es->video.width;
++	dev->capture.height = format->es->video.height;
++	dev->capture.fmt = &formats[0];
++	dev->capture.encode_component = NULL;
++	dev->capture.timeperframe = tpf_default;
++	dev->capture.enc_profile = V4L2_MPEG_VIDEO_H264_PROFILE_HIGH;
++	dev->capture.enc_level = V4L2_MPEG_VIDEO_H264_LEVEL_4_0;
++
++	vchiq_mmal_port_parameter_set(dev->instance,
++		&dev->component[MMAL_COMPONENT_CAMERA]->
++			output[MMAL_CAMERA_PORT_CAPTURE],
++		MMAL_PARAMETER_NO_IMAGE_PADDING,
++		&bool_true, sizeof(bool_true));
++
++	/* get the preview component ready */
++	ret = vchiq_mmal_component_init(
++			dev->instance, "ril.video_render",
++			&dev->component[MMAL_COMPONENT_PREVIEW]);
++	if (ret < 0)
++		goto unreg_camera;
++
++	if (dev->component[MMAL_COMPONENT_PREVIEW]->inputs < 1) {
++		ret = -EINVAL;
++		pr_debug("too few input ports %d needed %d\n",
++			 dev->component[MMAL_COMPONENT_PREVIEW]->inputs, 1);
++		goto unreg_preview;
++	}
++
++	/* get the image encoder component ready */
++	ret = vchiq_mmal_component_init(
++		dev->instance, "ril.image_encode",
++		&dev->component[MMAL_COMPONENT_IMAGE_ENCODE]);
++	if (ret < 0)
++		goto unreg_preview;
++
++	if (dev->component[MMAL_COMPONENT_IMAGE_ENCODE]->inputs < 1) {
++		ret = -EINVAL;
++		v4l2_err(&dev->v4l2_dev, "too few input ports %d needed %d\n",
++			 dev->component[MMAL_COMPONENT_IMAGE_ENCODE]->inputs,
++			 1);
++		goto unreg_image_encoder;
++	}
++
++	/* get the video encoder component ready */
++	ret = vchiq_mmal_component_init(
++		dev->instance, "ril.video_encode",
++		&dev->component[MMAL_COMPONENT_VIDEO_ENCODE]);
++	if (ret < 0)
++		goto unreg_image_encoder;
++
++	if (dev->component[MMAL_COMPONENT_VIDEO_ENCODE]->inputs < 1) {
++		ret = -EINVAL;
++		v4l2_err(&dev->v4l2_dev, "too few input ports %d needed %d\n",
++			 dev->component[MMAL_COMPONENT_VIDEO_ENCODE]->inputs,
++			 1);
++		goto unreg_vid_encoder;
++	}
++
++	{
++		struct vchiq_mmal_port *encoder_port =
++			&dev->component[MMAL_COMPONENT_VIDEO_ENCODE]->output[0];
++		encoder_port->format.encoding = MMAL_ENCODING_H264;
++		ret = vchiq_mmal_port_set_format(dev->instance,
++			encoder_port);
++	}
++
++	{
++		unsigned int enable = 1;
++		vchiq_mmal_port_parameter_set(
++			dev->instance,
++			&dev->component[MMAL_COMPONENT_VIDEO_ENCODE]->control,
++			MMAL_PARAMETER_VIDEO_IMMUTABLE_INPUT,
++			&enable, sizeof(enable));
++
++		vchiq_mmal_port_parameter_set(dev->instance,
++			&dev->component[MMAL_COMPONENT_VIDEO_ENCODE]->control,
++			MMAL_PARAMETER_MINIMISE_FRAGMENTATION,
++			&enable,
++			sizeof(enable));
++	}
++	ret = bm2835_mmal_set_all_camera_controls(dev);
++	if (ret < 0)
++		goto unreg_vid_encoder;
++
++	return 0;
++
++unreg_vid_encoder:
++	pr_err("Cleanup: Destroy video encoder\n");
++	vchiq_mmal_component_finalise(
++		dev->instance,
++		dev->component[MMAL_COMPONENT_VIDEO_ENCODE]);
++
++unreg_image_encoder:
++	pr_err("Cleanup: Destroy image encoder\n");
++	vchiq_mmal_component_finalise(
++		dev->instance,
++		dev->component[MMAL_COMPONENT_IMAGE_ENCODE]);
++
++unreg_preview:
++	pr_err("Cleanup: Destroy video render\n");
++	vchiq_mmal_component_finalise(dev->instance,
++				      dev->component[MMAL_COMPONENT_PREVIEW]);
++
++unreg_camera:
++	pr_err("Cleanup: Destroy camera\n");
++	vchiq_mmal_component_finalise(dev->instance,
++				      dev->component[MMAL_COMPONENT_CAMERA]);
++
++unreg_mmal:
++	vchiq_mmal_finalise(dev->instance);
++	return ret;
++}
++
++static int __init bm2835_mmal_init_device(struct bm2835_mmal_dev *dev,
++					  struct video_device *vfd)
++{
++	int ret;
++
++	*vfd = vdev_template;
++	if (gst_v4l2src_is_broken) {
++		v4l2_info(&dev->v4l2_dev,
++		  "Work-around for gstreamer issue is active.\n");
++		vfd->ioctl_ops = &camera0_ioctl_ops_gstreamer;
++	}
++
++	vfd->v4l2_dev = &dev->v4l2_dev;
++
++	vfd->lock = &dev->mutex;
++
++	vfd->queue = &dev->capture.vb_vidq;
++
++	/* video device needs to be able to access instance data */
++	video_set_drvdata(vfd, dev);
++
++	ret = video_register_device(vfd, VFL_TYPE_GRABBER, -1);
++	if (ret < 0)
++		return ret;
++
++	v4l2_info(vfd->v4l2_dev,
++		"V4L2 device registered as %s - stills mode > %dx%d\n",
++		video_device_node_name(vfd), max_video_width, max_video_height);
++
++	return 0;
++}
++
++static struct v4l2_format default_v4l2_format = {
++	.fmt.pix.pixelformat = V4L2_PIX_FMT_JPEG,
++	.fmt.pix.width = 1024,
++	.fmt.pix.bytesperline = 1024,
++	.fmt.pix.height = 768,
++	.fmt.pix.sizeimage = 1024*768,
++};
++
++static int __init bm2835_mmal_init(void)
++{
++	int ret;
++	struct bm2835_mmal_dev *dev;
++	struct vb2_queue *q;
++
++	dev = kzalloc(sizeof(*gdev), GFP_KERNEL);
++	if (!dev)
++		return -ENOMEM;
++
++	/* setup device defaults */
++	dev->overlay.w.left = 150;
++	dev->overlay.w.top = 50;
++	dev->overlay.w.width = 1024;
++	dev->overlay.w.height = 768;
++	dev->overlay.clipcount = 0;
++	dev->overlay.field = V4L2_FIELD_NONE;
++
++	dev->capture.fmt = &formats[3]; /* JPEG */
++
++	/* v4l device registration */
++	snprintf(dev->v4l2_dev.name, sizeof(dev->v4l2_dev.name),
++		 "%s", BM2835_MMAL_MODULE_NAME);
++	ret = v4l2_device_register(NULL, &dev->v4l2_dev);
++	if (ret)
++		goto free_dev;
++
++	/* setup v4l controls */
++	ret = bm2835_mmal_init_controls(dev, &dev->ctrl_handler);
++	if (ret < 0)
++		goto unreg_dev;
++	dev->v4l2_dev.ctrl_handler = &dev->ctrl_handler;
++
++	/* mmal init */
++	ret = mmal_init(dev);
++	if (ret < 0)
++		goto unreg_dev;
++
++	/* initialize queue */
++	q = &dev->capture.vb_vidq;
++	memset(q, 0, sizeof(*q));
++	q->type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
++	q->io_modes = VB2_MMAP | VB2_USERPTR | VB2_READ;
++	q->drv_priv = dev;
++	q->buf_struct_size = sizeof(struct mmal_buffer);
++	q->ops = &bm2835_mmal_video_qops;
++	q->mem_ops = &vb2_vmalloc_memops;
++	q->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC;
++	ret = vb2_queue_init(q);
++	if (ret < 0)
++		goto unreg_dev;
++
++	/* v4l2 core mutex used to protect all fops and v4l2 ioctls. */
++	mutex_init(&dev->mutex);
++
++	/* initialise video devices */
++	ret = bm2835_mmal_init_device(dev, &dev->vdev);
++	if (ret < 0)
++		goto unreg_dev;
++
++	/* Really want to call vidioc_s_fmt_vid_cap with the default
++	 * format, but currently the APIs don't join up.
++	 */
++	ret = mmal_setup_components(dev, &default_v4l2_format);
++	if (ret < 0) {
++		v4l2_err(&dev->v4l2_dev,
++			 "%s: could not setup components\n", __func__);
++		goto unreg_dev;
++	}
++
++	v4l2_info(&dev->v4l2_dev,
++		  "Broadcom 2835 MMAL video capture ver %s loaded.\n",
++		  BM2835_MMAL_VERSION);
++
++	gdev = dev;
++	return 0;
++
++unreg_dev:
++	v4l2_ctrl_handler_free(&dev->ctrl_handler);
++	v4l2_device_unregister(&dev->v4l2_dev);
++
++free_dev:
++	/* Report the failure before the device structure is freed */
++	v4l2_err(&dev->v4l2_dev,
++		 "%s: error %d while loading driver\n",
++		 BM2835_MMAL_MODULE_NAME, ret);
++
++	kfree(dev);
++
++	return ret;
++}
++
++static void __exit bm2835_mmal_exit(void)
++{
++	if (!gdev)
++		return;
++
++	v4l2_info(&gdev->v4l2_dev, "unregistering %s\n",
++		  video_device_node_name(&gdev->vdev));
++
++	video_unregister_device(&gdev->vdev);
++
++	if (gdev->capture.encode_component) {
++		v4l2_dbg(1, bcm2835_v4l2_debug, &gdev->v4l2_dev,
++			 "mmal_exit - disconnect tunnel\n");
++		vchiq_mmal_port_connect_tunnel(gdev->instance,
++					       gdev->capture.camera_port, NULL);
++		vchiq_mmal_component_disable(gdev->instance,
++					     gdev->capture.encode_component);
++	}
++	vchiq_mmal_component_disable(gdev->instance,
++				     gdev->component[MMAL_COMPONENT_CAMERA]);
++
++	vchiq_mmal_component_finalise(
++			gdev->instance,
++			gdev->component[MMAL_COMPONENT_VIDEO_ENCODE]);
++
++	vchiq_mmal_component_finalise(
++			gdev->instance,
++			gdev->component[MMAL_COMPONENT_IMAGE_ENCODE]);
++
++	vchiq_mmal_component_finalise(gdev->instance,
++				      gdev->component[MMAL_COMPONENT_PREVIEW]);
++
++	vchiq_mmal_component_finalise(gdev->instance,
++				      gdev->component[MMAL_COMPONENT_CAMERA]);
++
++	vchiq_mmal_finalise(gdev->instance);
++
++	v4l2_ctrl_handler_free(&gdev->ctrl_handler);
++
++	v4l2_device_unregister(&gdev->v4l2_dev);
++
++	kfree(gdev);
++}
++
++module_init(bm2835_mmal_init);
++module_exit(bm2835_mmal_exit);
+--- /dev/null
++++ b/drivers/media/platform/bcm2835/bcm2835-camera.h
+@@ -0,0 +1,126 @@
++/*
++ * Broadcom BM2835 V4L2 driver
++ *
++ * Copyright © 2013 Raspberry Pi (Trading) Ltd.
++ *
++ * This file is subject to the terms and conditions of the GNU General Public
++ * License.  See the file COPYING in the main directory of this archive
++ * for more details.
++ *
++ * Authors: Vincent Sanders <vincent.sanders at collabora.co.uk>
++ *          Dave Stevenson <dsteve at broadcom.com>
++ *          Simon Mellor <simellor at broadcom.com>
++ *          Luke Diamand <luked at broadcom.com>
++ *
++ * core driver device
++ */
++
++#define V4L2_CTRL_COUNT 28 /* number of v4l controls */
++
++enum {
++	MMAL_COMPONENT_CAMERA = 0,
++	MMAL_COMPONENT_PREVIEW,
++	MMAL_COMPONENT_IMAGE_ENCODE,
++	MMAL_COMPONENT_VIDEO_ENCODE,
++	MMAL_COMPONENT_COUNT
++};
++
++enum {
++	MMAL_CAMERA_PORT_PREVIEW = 0,
++	MMAL_CAMERA_PORT_VIDEO,
++	MMAL_CAMERA_PORT_CAPTURE,
++	MMAL_CAMERA_PORT_COUNT
++};
++
++#define PREVIEW_LAYER      2
++
++extern int bcm2835_v4l2_debug;
++
++struct bm2835_mmal_dev {
++	/* v4l2 devices */
++	struct v4l2_device     v4l2_dev;
++	struct video_device    vdev;
++	struct mutex           mutex;
++
++	/* controls */
++	struct v4l2_ctrl_handler  ctrl_handler;
++	struct v4l2_ctrl          *ctrls[V4L2_CTRL_COUNT];
++	enum v4l2_scene_mode	  scene_mode;
++	struct mmal_colourfx      colourfx;
++	int                       hflip;
++	int                       vflip;
++	int			  red_gain;
++	int			  blue_gain;
++	enum mmal_parameter_exposuremode exposure_mode_user;
++	enum v4l2_exposure_auto_type exposure_mode_v4l2_user;
++	/* active exposure mode may differ if selected via a scene mode */
++	enum mmal_parameter_exposuremode exposure_mode_active;
++	enum mmal_parameter_exposuremeteringmode metering_mode;
++	unsigned int		  manual_shutter_speed;
++	bool			  exp_auto_priority;
++
++	/* allocated mmal instance and components */
++	struct vchiq_mmal_instance   *instance;
++	struct vchiq_mmal_component  *component[MMAL_COMPONENT_COUNT];
++	int camera_use_count;
++
++	struct v4l2_window overlay;
++
++	struct {
++		unsigned int     width;  /* width */
++		unsigned int     height;  /* height */
++		unsigned int     stride;  /* stride */
++		unsigned int     buffersize; /* buffer size with padding */
++		struct mmal_fmt  *fmt;
++		struct v4l2_fract timeperframe;
++
++		/* H264 encode bitrate */
++		int         encode_bitrate;
++		/* H264 bitrate mode. CBR/VBR */
++		int         encode_bitrate_mode;
++		/* H264 profile */
++		enum v4l2_mpeg_video_h264_profile enc_profile;
++		/* H264 level */
++		enum v4l2_mpeg_video_h264_level enc_level;
++		/* JPEG Q-factor */
++		int         q_factor;
++
++		struct vb2_queue	vb_vidq;
++
++		/* VC start timestamp for streaming */
++		s64         vc_start_timestamp;
++		/* Kernel start timestamp for streaming */
++		struct timeval kernel_start_ts;
++
++		struct vchiq_mmal_port  *port; /* port being used for capture */
++		/* camera port being used for capture */
++		struct vchiq_mmal_port  *camera_port;
++		/* component being used for encode */
++		struct vchiq_mmal_component *encode_component;
++		/* number of frames remaining which driver should capture */
++		unsigned int  frame_count;
++		/* last frame completion */
++		struct completion  frame_cmplt;
++
++	} capture;
++
++};
++
++int bm2835_mmal_init_controls(
++			struct bm2835_mmal_dev *dev,
++			struct v4l2_ctrl_handler *hdl);
++
++int bm2835_mmal_set_all_camera_controls(struct bm2835_mmal_dev *dev);
++int set_framerate_params(struct bm2835_mmal_dev *dev);
++
++/* Debug helpers */
++
++#define v4l2_dump_pix_format(level, debug, dev, pix_fmt, desc)	\
++{	\
++	v4l2_dbg(level, debug, dev,	\
++"%s: w %u h %u field %u pfmt 0x%x bpl %u sz_img %u colorspace 0x%x priv %u\n", \
++		desc == NULL ? "" : desc,	\
++		(pix_fmt)->width, (pix_fmt)->height, (pix_fmt)->field,	\
++		(pix_fmt)->pixelformat, (pix_fmt)->bytesperline,	\
++		(pix_fmt)->sizeimage, (pix_fmt)->colorspace, (pix_fmt)->priv); \
++}
+--- /dev/null
++++ b/drivers/media/platform/bcm2835/controls.c
+@@ -0,0 +1,1324 @@
++/*
++ * Broadcom BM2835 V4L2 driver
++ *
++ * Copyright © 2013 Raspberry Pi (Trading) Ltd.
++ *
++ * This file is subject to the terms and conditions of the GNU General Public
++ * License.  See the file COPYING in the main directory of this archive
++ * for more details.
++ *
++ * Authors: Vincent Sanders <vincent.sanders at collabora.co.uk>
++ *          Dave Stevenson <dsteve at broadcom.com>
++ *          Simon Mellor <simellor at broadcom.com>
++ *          Luke Diamand <luked at broadcom.com>
++ */
++
++#include <linux/errno.h>
++#include <linux/kernel.h>
++#include <linux/module.h>
++#include <linux/slab.h>
++#include <media/videobuf2-vmalloc.h>
++#include <media/v4l2-device.h>
++#include <media/v4l2-ioctl.h>
++#include <media/v4l2-ctrls.h>
++#include <media/v4l2-fh.h>
++#include <media/v4l2-event.h>
++#include <media/v4l2-common.h>
++
++#include "mmal-common.h"
++#include "mmal-vchiq.h"
++#include "mmal-parameters.h"
++#include "bcm2835-camera.h"
++
++/* The supported V4L2_CID_AUTO_EXPOSURE_BIAS values are from -4.0 to +4.0.
++ * MMAL values are in 1/6th increments so the MMAL range is -24 to +24.
++ * V4L2 docs say value "is expressed in terms of EV, drivers should interpret
++ * the values as 0.001 EV units, where the value 1000 stands for +1 EV."
++ * V4L2 is limited to a max of 32 values in a menu, so count in 1/3rds from
++ * -4 to +4
++ */
++static const s64 ev_bias_qmenu[] = {
++	-4000, -3667, -3333,
++	-3000, -2667, -2333,
++	-2000, -1667, -1333,
++	-1000,  -667,  -333,
++	    0,   333,   667,
++	 1000,  1333,  1667,
++	 2000,  2333,  2667,
++	 3000,  3333,  3667,
++	 4000
++};
++
++/* Supported ISO values
++ * ISO 0 = auto ISO
++ */
++static const s64 iso_qmenu[] = {
++	0, 100, 200, 400, 800,
++};
++
++static const s64 mains_freq_qmenu[] = {
++	V4L2_CID_POWER_LINE_FREQUENCY_DISABLED,
++	V4L2_CID_POWER_LINE_FREQUENCY_50HZ,
++	V4L2_CID_POWER_LINE_FREQUENCY_60HZ,
++	V4L2_CID_POWER_LINE_FREQUENCY_AUTO
++};
++
++/* Supported video encode modes */
++static const s64 bitrate_mode_qmenu[] = {
++	(s64)V4L2_MPEG_VIDEO_BITRATE_MODE_VBR,
++	(s64)V4L2_MPEG_VIDEO_BITRATE_MODE_CBR,
++};
++
++enum bm2835_mmal_ctrl_type {
++	MMAL_CONTROL_TYPE_STD,
++	MMAL_CONTROL_TYPE_STD_MENU,
++	MMAL_CONTROL_TYPE_INT_MENU,
++	MMAL_CONTROL_TYPE_CLUSTER, /* special cluster entry */
++};
++
++struct bm2835_mmal_v4l2_ctrl;
++
++typedef	int(bm2835_mmal_v4l2_ctrl_cb)(
++				struct bm2835_mmal_dev *dev,
++				struct v4l2_ctrl *ctrl,
++				const struct bm2835_mmal_v4l2_ctrl *mmal_ctrl);
++
++struct bm2835_mmal_v4l2_ctrl {
++	u32 id; /* v4l2 control identifier */
++	enum bm2835_mmal_ctrl_type type;
++	/* control minimum value or
++	 * mask for MMAL_CONTROL_TYPE_STD_MENU */
++	s32 min;
++	s32 max; /* maximum value of control */
++	s32 def;  /* default value of control */
++	s32 step; /* step size of the control */
++	const s64 *imenu; /* integer menu array */
++	u32 mmal_id; /* mmal parameter id */
++	bm2835_mmal_v4l2_ctrl_cb *setter;
++	bool ignore_errors;
++};
++
++struct v4l2_to_mmal_effects_setting {
++	u32 v4l2_effect;
++	u32 mmal_effect;
++	s32 col_fx_enable;
++	s32 col_fx_fixed_cbcr;
++	u32 u;
++	u32 v;
++	u32 num_effect_params;
++	u32 effect_params[MMAL_MAX_IMAGEFX_PARAMETERS];
++};
++
++static const struct v4l2_to_mmal_effects_setting
++	v4l2_to_mmal_effects_values[] = {
++	{  V4L2_COLORFX_NONE,         MMAL_PARAM_IMAGEFX_NONE,
++		0,   0,    0,    0,   0, {0, 0, 0, 0, 0} },
++	{  V4L2_COLORFX_BW,           MMAL_PARAM_IMAGEFX_NONE,
++		1,   0,    128,  128, 0, {0, 0, 0, 0, 0} },
++	{  V4L2_COLORFX_SEPIA,        MMAL_PARAM_IMAGEFX_NONE,
++		1,   0,    87,   151, 0, {0, 0, 0, 0, 0} },
++	{  V4L2_COLORFX_NEGATIVE,     MMAL_PARAM_IMAGEFX_NEGATIVE,
++		0,   0,    0,    0,   0, {0, 0, 0, 0, 0} },
++	{  V4L2_COLORFX_EMBOSS,       MMAL_PARAM_IMAGEFX_EMBOSS,
++		0,   0,    0,    0,   0, {0, 0, 0, 0, 0} },
++	{  V4L2_COLORFX_SKETCH,       MMAL_PARAM_IMAGEFX_SKETCH,
++		0,   0,    0,    0,   0, {0, 0, 0, 0, 0} },
++	{  V4L2_COLORFX_SKY_BLUE,     MMAL_PARAM_IMAGEFX_PASTEL,
++		0,   0,    0,    0,   0, {0, 0, 0, 0, 0} },
++	{  V4L2_COLORFX_GRASS_GREEN,  MMAL_PARAM_IMAGEFX_WATERCOLOUR,
++		0,   0,    0,    0,   0, {0, 0, 0, 0, 0} },
++	{  V4L2_COLORFX_SKIN_WHITEN,  MMAL_PARAM_IMAGEFX_WASHEDOUT,
++		0,   0,    0,    0,   0, {0, 0, 0, 0, 0} },
++	{  V4L2_COLORFX_VIVID,        MMAL_PARAM_IMAGEFX_SATURATION,
++		0,   0,    0,    0,   0, {0, 0, 0, 0, 0} },
++	{  V4L2_COLORFX_AQUA,         MMAL_PARAM_IMAGEFX_NONE,
++		1,   0,    171,  121, 0, {0, 0, 0, 0, 0} },
++	{  V4L2_COLORFX_ART_FREEZE,   MMAL_PARAM_IMAGEFX_HATCH,
++		0,   0,    0,    0,   0, {0, 0, 0, 0, 0} },
++	{  V4L2_COLORFX_SILHOUETTE,   MMAL_PARAM_IMAGEFX_FILM,
++		0,   0,    0,    0,   0, {0, 0, 0, 0, 0} },
++	{  V4L2_COLORFX_SOLARIZATION, MMAL_PARAM_IMAGEFX_SOLARIZE,
++		0,   0,    0,    0,   5, {1, 128, 160, 160, 48} },
++	{  V4L2_COLORFX_ANTIQUE,      MMAL_PARAM_IMAGEFX_COLOURBALANCE,
++		0,   0,    0,    0,   3, {108, 274, 238, 0, 0} },
++	{  V4L2_COLORFX_SET_CBCR,     MMAL_PARAM_IMAGEFX_NONE,
++		1,   1,    0,    0,   0, {0, 0, 0, 0, 0} }
++};
++
++struct v4l2_mmal_scene_config {
++	enum v4l2_scene_mode			v4l2_scene;
++	enum mmal_parameter_exposuremode	exposure_mode;
++	enum mmal_parameter_exposuremeteringmode metering_mode;
++};
++
++static const struct v4l2_mmal_scene_config scene_configs[] = {
++	/* V4L2_SCENE_MODE_NONE automatically added */
++	{
++		V4L2_SCENE_MODE_NIGHT,
++		MMAL_PARAM_EXPOSUREMODE_NIGHT,
++		MMAL_PARAM_EXPOSUREMETERINGMODE_AVERAGE
++	},
++	{
++		V4L2_SCENE_MODE_SPORTS,
++		MMAL_PARAM_EXPOSUREMODE_SPORTS,
++		MMAL_PARAM_EXPOSUREMETERINGMODE_AVERAGE
++	},
++};
++
++/* control handlers*/
++
++static int ctrl_set_rational(struct bm2835_mmal_dev *dev,
++		      struct v4l2_ctrl *ctrl,
++		      const struct bm2835_mmal_v4l2_ctrl *mmal_ctrl)
++{
++	struct mmal_parameter_rational rational_value;
++	struct vchiq_mmal_port *control;
++
++	control = &dev->component[MMAL_COMPONENT_CAMERA]->control;
++
++	rational_value.num = ctrl->val;
++	rational_value.den = 100;
++
++	return vchiq_mmal_port_parameter_set(dev->instance, control,
++					     mmal_ctrl->mmal_id,
++					     &rational_value,
++					     sizeof(rational_value));
++}
++
++static int ctrl_set_value(struct bm2835_mmal_dev *dev,
++		      struct v4l2_ctrl *ctrl,
++		      const struct bm2835_mmal_v4l2_ctrl *mmal_ctrl)
++{
++	u32 u32_value;
++	struct vchiq_mmal_port *control;
++
++	control = &dev->component[MMAL_COMPONENT_CAMERA]->control;
++
++	u32_value = ctrl->val;
++
++	return vchiq_mmal_port_parameter_set(dev->instance, control,
++					     mmal_ctrl->mmal_id,
++					     &u32_value, sizeof(u32_value));
++}
++
++static int ctrl_set_value_menu(struct bm2835_mmal_dev *dev,
++		      struct v4l2_ctrl *ctrl,
++		      const struct bm2835_mmal_v4l2_ctrl *mmal_ctrl)
++{
++	u32 u32_value;
++	struct vchiq_mmal_port *control;
++
++	if (ctrl->val > mmal_ctrl->max || ctrl->val < mmal_ctrl->min)
++		return 1;
++
++	control = &dev->component[MMAL_COMPONENT_CAMERA]->control;
++
++	u32_value = mmal_ctrl->imenu[ctrl->val];
++
++	return vchiq_mmal_port_parameter_set(dev->instance, control,
++					     mmal_ctrl->mmal_id,
++					     &u32_value, sizeof(u32_value));
++}
++
++static int ctrl_set_value_ev(struct bm2835_mmal_dev *dev,
++		      struct v4l2_ctrl *ctrl,
++		      const struct bm2835_mmal_v4l2_ctrl *mmal_ctrl)
++{
++	s32 s32_value;
++	struct vchiq_mmal_port *control;
++
++	control = &dev->component[MMAL_COMPONENT_CAMERA]->control;
++
++	/* The EV bias qmenu runs from -4EV to +4EV in 1/3EV steps, so index
++	 * 12 is 0EV; MMAL expects the bias in 1/6EV units.
++	 */
++	s32_value = (ctrl->val - 12) * 2;
++
++	return vchiq_mmal_port_parameter_set(dev->instance, control,
++					     mmal_ctrl->mmal_id,
++					     &s32_value, sizeof(s32_value));
++}
++
++static int ctrl_set_rotate(struct bm2835_mmal_dev *dev,
++		      struct v4l2_ctrl *ctrl,
++		      const struct bm2835_mmal_v4l2_ctrl *mmal_ctrl)
++{
++	int ret;
++	u32 u32_value;
++	struct vchiq_mmal_component *camera;
++
++	camera = dev->component[MMAL_COMPONENT_CAMERA];
++
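++	/* Snap the requested rotation down to a multiple of 90 degrees */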
++	u32_value = ((ctrl->val % 360) / 90) * 90;
++
++	ret = vchiq_mmal_port_parameter_set(dev->instance, &camera->output[0],
++					    mmal_ctrl->mmal_id,
++					    &u32_value, sizeof(u32_value));
++	if (ret < 0)
++		return ret;
++
++	ret = vchiq_mmal_port_parameter_set(dev->instance, &camera->output[1],
++					    mmal_ctrl->mmal_id,
++					    &u32_value, sizeof(u32_value));
++	if (ret < 0)
++		return ret;
++
++	ret = vchiq_mmal_port_parameter_set(dev->instance, &camera->output[2],
++					    mmal_ctrl->mmal_id,
++					    &u32_value, sizeof(u32_value));
++
++	return ret;
++}
++
++static int ctrl_set_flip(struct bm2835_mmal_dev *dev,
++		      struct v4l2_ctrl *ctrl,
++		      const struct bm2835_mmal_v4l2_ctrl *mmal_ctrl)
++{
++	int ret;
++	u32 u32_value;
++	struct vchiq_mmal_component *camera;
++
++	if (ctrl->id == V4L2_CID_HFLIP)
++		dev->hflip = ctrl->val;
++	else
++		dev->vflip = ctrl->val;
++
++	camera = dev->component[MMAL_COMPONENT_CAMERA];
++
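++	/* Combine the stored hflip/vflip state into a single mirror value */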
++	if (dev->hflip && dev->vflip)
++		u32_value = MMAL_PARAM_MIRROR_BOTH;
++	else if (dev->hflip)
++		u32_value = MMAL_PARAM_MIRROR_HORIZONTAL;
++	else if (dev->vflip)
++		u32_value = MMAL_PARAM_MIRROR_VERTICAL;
++	else
++		u32_value = MMAL_PARAM_MIRROR_NONE;
++
++	ret = vchiq_mmal_port_parameter_set(dev->instance, &camera->output[0],
++					    mmal_ctrl->mmal_id,
++					    &u32_value, sizeof(u32_value));
++	if (ret < 0)
++		return ret;
++
++	ret = vchiq_mmal_port_parameter_set(dev->instance, &camera->output[1],
++					    mmal_ctrl->mmal_id,
++					    &u32_value, sizeof(u32_value));
++	if (ret < 0)
++		return ret;
++
++	ret = vchiq_mmal_port_parameter_set(dev->instance, &camera->output[2],
++					    mmal_ctrl->mmal_id,
++					    &u32_value, sizeof(u32_value));
++
++	return ret;
++}
++
++static int ctrl_set_exposure(struct bm2835_mmal_dev *dev,
++		      struct v4l2_ctrl *ctrl,
++		      const struct bm2835_mmal_v4l2_ctrl *mmal_ctrl)
++{
++	enum mmal_parameter_exposuremode exp_mode = dev->exposure_mode_user;
++	u32 shutter_speed = 0;
++	struct vchiq_mmal_port *control;
++	int ret = 0;
++
++	control = &dev->component[MMAL_COMPONENT_CAMERA]->control;
++
++	if (mmal_ctrl->mmal_id == MMAL_PARAMETER_SHUTTER_SPEED)	{
++		/* V4L2 is in 100usec increments.
++		 * MMAL is 1usec.
++		 */
++		dev->manual_shutter_speed = ctrl->val * 100;
++	} else if (mmal_ctrl->mmal_id == MMAL_PARAMETER_EXPOSURE_MODE) {
++		switch (ctrl->val) {
++		case V4L2_EXPOSURE_AUTO:
++			exp_mode = MMAL_PARAM_EXPOSUREMODE_AUTO;
++			break;
++
++		case V4L2_EXPOSURE_MANUAL:
++			exp_mode = MMAL_PARAM_EXPOSUREMODE_OFF;
++			break;
++		}
++		dev->exposure_mode_user = exp_mode;
++		dev->exposure_mode_v4l2_user = ctrl->val;
++	} else if (mmal_ctrl->id == V4L2_CID_EXPOSURE_AUTO_PRIORITY) {
++		dev->exp_auto_priority = ctrl->val;
++	}
++
++	if (dev->scene_mode == V4L2_SCENE_MODE_NONE) {
++		if (exp_mode == MMAL_PARAM_EXPOSUREMODE_OFF)
++			shutter_speed = dev->manual_shutter_speed;
++
++		ret = vchiq_mmal_port_parameter_set(dev->instance,
++					control,
++					MMAL_PARAMETER_SHUTTER_SPEED,
++					&shutter_speed,
++					sizeof(shutter_speed));
++		ret += vchiq_mmal_port_parameter_set(dev->instance,
++					control,
++					MMAL_PARAMETER_EXPOSURE_MODE,
++					&exp_mode,
++					sizeof(u32));
++		dev->exposure_mode_active = exp_mode;
++	}
++	/* exposure_dynamic_framerate (V4L2_CID_EXPOSURE_AUTO_PRIORITY) should
++	 * always apply irrespective of scene mode.
++	 */
++	ret += set_framerate_params(dev);
++
++	return ret;
++}
++
++static int ctrl_set_metering_mode(struct bm2835_mmal_dev *dev,
++			   struct v4l2_ctrl *ctrl,
++			   const struct bm2835_mmal_v4l2_ctrl *mmal_ctrl)
++{
++	switch (ctrl->val) {
++	case V4L2_EXPOSURE_METERING_AVERAGE:
++		dev->metering_mode = MMAL_PARAM_EXPOSUREMETERINGMODE_AVERAGE;
++		break;
++
++	case V4L2_EXPOSURE_METERING_CENTER_WEIGHTED:
++		dev->metering_mode = MMAL_PARAM_EXPOSUREMETERINGMODE_BACKLIT;
++		break;
++
++	case V4L2_EXPOSURE_METERING_SPOT:
++		dev->metering_mode = MMAL_PARAM_EXPOSUREMETERINGMODE_SPOT;
++		break;
++
++	/* todo matrix weighting not added to Linux API till 3.9
++	case V4L2_EXPOSURE_METERING_MATRIX:
++		dev->metering_mode = MMAL_PARAM_EXPOSUREMETERINGMODE_MATRIX;
++		break;
++	*/
++
++	}
++
++	if (dev->scene_mode == V4L2_SCENE_MODE_NONE) {
++		struct vchiq_mmal_port *control;
++		u32 u32_value = dev->metering_mode;
++
++		control = &dev->component[MMAL_COMPONENT_CAMERA]->control;
++
++		return vchiq_mmal_port_parameter_set(dev->instance, control,
++					     mmal_ctrl->mmal_id,
++					     &u32_value, sizeof(u32_value));
++	} else
++		return 0;
++}
++
++static int ctrl_set_flicker_avoidance(struct bm2835_mmal_dev *dev,
++			   struct v4l2_ctrl *ctrl,
++			   const struct bm2835_mmal_v4l2_ctrl *mmal_ctrl)
++{
++	u32 u32_value;
++	struct vchiq_mmal_port *control;
++
++	control = &dev->component[MMAL_COMPONENT_CAMERA]->control;
++
++	switch (ctrl->val) {
++	case V4L2_CID_POWER_LINE_FREQUENCY_DISABLED:
++		u32_value = MMAL_PARAM_FLICKERAVOID_OFF;
++		break;
++	case V4L2_CID_POWER_LINE_FREQUENCY_50HZ:
++		u32_value = MMAL_PARAM_FLICKERAVOID_50HZ;
++		break;
++	case V4L2_CID_POWER_LINE_FREQUENCY_60HZ:
++		u32_value = MMAL_PARAM_FLICKERAVOID_60HZ;
++		break;
++	case V4L2_CID_POWER_LINE_FREQUENCY_AUTO:
++		u32_value = MMAL_PARAM_FLICKERAVOID_AUTO;
++		break;
++	}
++
++	return vchiq_mmal_port_parameter_set(dev->instance, control,
++					     mmal_ctrl->mmal_id,
++					     &u32_value, sizeof(u32_value));
++}
++
++static int ctrl_set_awb_mode(struct bm2835_mmal_dev *dev,
++		      struct v4l2_ctrl *ctrl,
++		      const struct bm2835_mmal_v4l2_ctrl *mmal_ctrl)
++{
++	u32 u32_value;
++	struct vchiq_mmal_port *control;
++
++	control = &dev->component[MMAL_COMPONENT_CAMERA]->control;
++
++	switch (ctrl->val) {
++	case V4L2_WHITE_BALANCE_MANUAL:
++		u32_value = MMAL_PARAM_AWBMODE_OFF;
++		break;
++
++	case V4L2_WHITE_BALANCE_AUTO:
++		u32_value = MMAL_PARAM_AWBMODE_AUTO;
++		break;
++
++	case V4L2_WHITE_BALANCE_INCANDESCENT:
++		u32_value = MMAL_PARAM_AWBMODE_INCANDESCENT;
++		break;
++
++	case V4L2_WHITE_BALANCE_FLUORESCENT:
++		u32_value = MMAL_PARAM_AWBMODE_FLUORESCENT;
++		break;
++
++	case V4L2_WHITE_BALANCE_FLUORESCENT_H:
++		u32_value = MMAL_PARAM_AWBMODE_TUNGSTEN;
++		break;
++
++	case V4L2_WHITE_BALANCE_HORIZON:
++		u32_value = MMAL_PARAM_AWBMODE_HORIZON;
++		break;
++
++	case V4L2_WHITE_BALANCE_DAYLIGHT:
++		u32_value = MMAL_PARAM_AWBMODE_SUNLIGHT;
++		break;
++
++	case V4L2_WHITE_BALANCE_FLASH:
++		u32_value = MMAL_PARAM_AWBMODE_FLASH;
++		break;
++
++	case V4L2_WHITE_BALANCE_CLOUDY:
++		u32_value = MMAL_PARAM_AWBMODE_CLOUDY;
++		break;
++
++	case V4L2_WHITE_BALANCE_SHADE:
++		u32_value = MMAL_PARAM_AWBMODE_SHADE;
++		break;
++
++	}
++
++	return vchiq_mmal_port_parameter_set(dev->instance, control,
++					     mmal_ctrl->mmal_id,
++					     &u32_value, sizeof(u32_value));
++}
++
++static int ctrl_set_awb_gains(struct bm2835_mmal_dev *dev,
++		      struct v4l2_ctrl *ctrl,
++		      const struct bm2835_mmal_v4l2_ctrl *mmal_ctrl)
++{
++	struct vchiq_mmal_port *control;
++	struct mmal_parameter_awbgains gains;
++
++	control = &dev->component[MMAL_COMPONENT_CAMERA]->control;
++
++	if (ctrl->id == V4L2_CID_RED_BALANCE)
++		dev->red_gain = ctrl->val;
++	else if (ctrl->id == V4L2_CID_BLUE_BALANCE)
++		dev->blue_gain = ctrl->val;
++
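++	/* Gains are passed to the firmware as rationals with a fixed
++	 * denominator of 1000, so a control value of 1000 means a gain
++	 * of 1.0.
++	 */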
++	gains.r_gain.num = dev->red_gain;
++	gains.b_gain.num = dev->blue_gain;
++	gains.r_gain.den = gains.b_gain.den = 1000;
++
++	return vchiq_mmal_port_parameter_set(dev->instance, control,
++					     mmal_ctrl->mmal_id,
++					     &gains, sizeof(gains));
++}
++
++static int ctrl_set_image_effect(struct bm2835_mmal_dev *dev,
++		   struct v4l2_ctrl *ctrl,
++		   const struct bm2835_mmal_v4l2_ctrl *mmal_ctrl)
++{
++	int ret = -EINVAL;
++	int i, j;
++	struct vchiq_mmal_port *control;
++	struct mmal_parameter_imagefx_parameters imagefx;
++
++	for (i = 0; i < ARRAY_SIZE(v4l2_to_mmal_effects_values); i++) {
++		if (ctrl->val == v4l2_to_mmal_effects_values[i].v4l2_effect) {
++
++			imagefx.effect =
++				v4l2_to_mmal_effects_values[i].mmal_effect;
++			imagefx.num_effect_params =
++				v4l2_to_mmal_effects_values[i].num_effect_params;
++
++			if (imagefx.num_effect_params > MMAL_MAX_IMAGEFX_PARAMETERS)
++				imagefx.num_effect_params = MMAL_MAX_IMAGEFX_PARAMETERS;
++
++			for (j = 0; j < imagefx.num_effect_params; j++)
++				imagefx.effect_parameter[j] =
++					v4l2_to_mmal_effects_values[i].effect_params[j];
++
++			dev->colourfx.enable =
++				v4l2_to_mmal_effects_values[i].col_fx_enable;
++			if (!v4l2_to_mmal_effects_values[i].col_fx_fixed_cbcr) {
++				dev->colourfx.u =
++					v4l2_to_mmal_effects_values[i].u;
++				dev->colourfx.v =
++					v4l2_to_mmal_effects_values[i].v;
++			}
++
++			control = &dev->component[MMAL_COMPONENT_CAMERA]->control;
++
++			ret = vchiq_mmal_port_parameter_set(
++					dev->instance, control,
++					MMAL_PARAMETER_IMAGE_EFFECT_PARAMETERS,
++					&imagefx, sizeof(imagefx));
++			if (ret)
++				goto exit;
++
++			ret = vchiq_mmal_port_parameter_set(
++					dev->instance, control,
++					MMAL_PARAMETER_COLOUR_EFFECT,
++					&dev->colourfx, sizeof(dev->colourfx));
++		}
++	}
++
++exit:
++	v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev,
++		 "mmal_ctrl:%p ctrl id:0x%x ctrl val:%d imagefx:0x%x color_effect:%s u:%d v:%d ret %d(%d)\n",
++				mmal_ctrl, ctrl->id, ctrl->val, imagefx.effect,
++				dev->colourfx.enable ? "true" : "false",
++				dev->colourfx.u, dev->colourfx.v,
++				ret, (ret == 0 ? 0 : -EINVAL));
++	return (ret == 0 ? 0 : -EINVAL);
++}
++
++static int ctrl_set_colfx(struct bm2835_mmal_dev *dev,
++		   struct v4l2_ctrl *ctrl,
++		   const struct bm2835_mmal_v4l2_ctrl *mmal_ctrl)
++{
++	int ret = -EINVAL;
++	struct vchiq_mmal_port *control;
++
++	control = &dev->component[MMAL_COMPONENT_CAMERA]->control;
++
++	/* V4L2_CID_COLORFX_CBCR packs Cb in bits 15:8 and Cr in bits 7:0 */
++	dev->colourfx.u = (ctrl->val & 0xff00) >> 8;
++	dev->colourfx.v = ctrl->val & 0xff;
++
++	ret = vchiq_mmal_port_parameter_set(dev->instance, control,
++					MMAL_PARAMETER_COLOUR_EFFECT,
++					&dev->colourfx, sizeof(dev->colourfx));
++
++	v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev,
++		 "%s: After: mmal_ctrl:%p ctrl id:0x%x ctrl val:%d ret %d(%d)\n",
++			__func__, mmal_ctrl, ctrl->id, ctrl->val, ret,
++			(ret == 0 ? 0 : -EINVAL));
++	return (ret == 0 ? 0 : -EINVAL);
++}
++
++static int ctrl_set_bitrate(struct bm2835_mmal_dev *dev,
++		   struct v4l2_ctrl *ctrl,
++		   const struct bm2835_mmal_v4l2_ctrl *mmal_ctrl)
++{
++	int ret;
++	struct vchiq_mmal_port *encoder_out;
++
++	dev->capture.encode_bitrate = ctrl->val;
++
++	encoder_out = &dev->component[MMAL_COMPONENT_VIDEO_ENCODE]->output[0];
++
++	ret = vchiq_mmal_port_parameter_set(dev->instance, encoder_out,
++					    mmal_ctrl->mmal_id,
++					    &ctrl->val, sizeof(ctrl->val));
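++	/* The result of the parameter set is discarded and success is
++	 * always reported back to the V4L2 framework for this control.
++	 */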
++	ret = 0;
++	return ret;
++}
++
++static int ctrl_set_bitrate_mode(struct bm2835_mmal_dev *dev,
++		   struct v4l2_ctrl *ctrl,
++		   const struct bm2835_mmal_v4l2_ctrl *mmal_ctrl)
++{
++	u32 bitrate_mode;
++	struct vchiq_mmal_port *encoder_out;
++
++	encoder_out = &dev->component[MMAL_COMPONENT_VIDEO_ENCODE]->output[0];
++
++	dev->capture.encode_bitrate_mode = ctrl->val;
++	switch (ctrl->val) {
++	default:
++	case V4L2_MPEG_VIDEO_BITRATE_MODE_VBR:
++		bitrate_mode = MMAL_VIDEO_RATECONTROL_VARIABLE;
++		break;
++	case V4L2_MPEG_VIDEO_BITRATE_MODE_CBR:
++		bitrate_mode = MMAL_VIDEO_RATECONTROL_CONSTANT;
++		break;
++	}
++
++	vchiq_mmal_port_parameter_set(dev->instance, encoder_out,
++					     mmal_ctrl->mmal_id,
++					     &bitrate_mode,
++					     sizeof(bitrate_mode));
++	return 0;
++}
++
++static int ctrl_set_image_encode_output(struct bm2835_mmal_dev *dev,
++		      struct v4l2_ctrl *ctrl,
++		      const struct bm2835_mmal_v4l2_ctrl *mmal_ctrl)
++{
++	u32 u32_value;
++	struct vchiq_mmal_port *jpeg_out;
++
++	jpeg_out = &dev->component[MMAL_COMPONENT_IMAGE_ENCODE]->output[0];
++
++	u32_value = ctrl->val;
++
++	return vchiq_mmal_port_parameter_set(dev->instance, jpeg_out,
++					     mmal_ctrl->mmal_id,
++					     &u32_value, sizeof(u32_value));
++}
++
++static int ctrl_set_video_encode_param_output(struct bm2835_mmal_dev *dev,
++		      struct v4l2_ctrl *ctrl,
++		      const struct bm2835_mmal_v4l2_ctrl *mmal_ctrl)
++{
++	u32 u32_value;
++	struct vchiq_mmal_port *vid_enc_ctl;
++
++	vid_enc_ctl = &dev->component[MMAL_COMPONENT_VIDEO_ENCODE]->output[0];
++
++	u32_value = ctrl->val;
++
++	return vchiq_mmal_port_parameter_set(dev->instance, vid_enc_ctl,
++					     mmal_ctrl->mmal_id,
++					     &u32_value, sizeof(u32_value));
++}
++
++static int ctrl_set_video_encode_profile_level(struct bm2835_mmal_dev *dev,
++		      struct v4l2_ctrl *ctrl,
++		      const struct bm2835_mmal_v4l2_ctrl *mmal_ctrl)
++{
++	struct mmal_parameter_video_profile param;
++	int ret = 0;
++
++	if (ctrl->id == V4L2_CID_MPEG_VIDEO_H264_PROFILE) {
++		switch (ctrl->val) {
++		case V4L2_MPEG_VIDEO_H264_PROFILE_BASELINE:
++		case V4L2_MPEG_VIDEO_H264_PROFILE_CONSTRAINED_BASELINE:
++		case V4L2_MPEG_VIDEO_H264_PROFILE_MAIN:
++		case V4L2_MPEG_VIDEO_H264_PROFILE_HIGH:
++			dev->capture.enc_profile = ctrl->val;
++			break;
++		default:
++			ret = -EINVAL;
++			break;
++		}
++	} else if (ctrl->id == V4L2_CID_MPEG_VIDEO_H264_LEVEL) {
++		switch (ctrl->val) {
++		case V4L2_MPEG_VIDEO_H264_LEVEL_1_0:
++		case V4L2_MPEG_VIDEO_H264_LEVEL_1B:
++		case V4L2_MPEG_VIDEO_H264_LEVEL_1_1:
++		case V4L2_MPEG_VIDEO_H264_LEVEL_1_2:
++		case V4L2_MPEG_VIDEO_H264_LEVEL_1_3:
++		case V4L2_MPEG_VIDEO_H264_LEVEL_2_0:
++		case V4L2_MPEG_VIDEO_H264_LEVEL_2_1:
++		case V4L2_MPEG_VIDEO_H264_LEVEL_2_2:
++		case V4L2_MPEG_VIDEO_H264_LEVEL_3_0:
++		case V4L2_MPEG_VIDEO_H264_LEVEL_3_1:
++		case V4L2_MPEG_VIDEO_H264_LEVEL_3_2:
++		case V4L2_MPEG_VIDEO_H264_LEVEL_4_0:
++			dev->capture.enc_level = ctrl->val;
++			break;
++		default:
++			ret = -EINVAL;
++			break;
++		}
++	}
++
++	if (!ret) {
++		switch (dev->capture.enc_profile) {
++		case V4L2_MPEG_VIDEO_H264_PROFILE_BASELINE:
++			param.profile = MMAL_VIDEO_PROFILE_H264_BASELINE;
++			break;
++		case V4L2_MPEG_VIDEO_H264_PROFILE_CONSTRAINED_BASELINE:
++			param.profile =
++				MMAL_VIDEO_PROFILE_H264_CONSTRAINED_BASELINE;
++			break;
++		case V4L2_MPEG_VIDEO_H264_PROFILE_MAIN:
++			param.profile = MMAL_VIDEO_PROFILE_H264_MAIN;
++			break;
++		case V4L2_MPEG_VIDEO_H264_PROFILE_HIGH:
++			param.profile = MMAL_VIDEO_PROFILE_H264_HIGH;
++			break;
++		default:
++			/* Should never get here */
++			break;
++		}
++
++		switch (dev->capture.enc_level) {
++		case V4L2_MPEG_VIDEO_H264_LEVEL_1_0:
++			param.level = MMAL_VIDEO_LEVEL_H264_1;
++			break;
++		case V4L2_MPEG_VIDEO_H264_LEVEL_1B:
++			param.level = MMAL_VIDEO_LEVEL_H264_1b;
++			break;
++		case V4L2_MPEG_VIDEO_H264_LEVEL_1_1:
++			param.level = MMAL_VIDEO_LEVEL_H264_11;
++			break;
++		case V4L2_MPEG_VIDEO_H264_LEVEL_1_2:
++			param.level = MMAL_VIDEO_LEVEL_H264_12;
++			break;
++		case V4L2_MPEG_VIDEO_H264_LEVEL_1_3:
++			param.level = MMAL_VIDEO_LEVEL_H264_13;
++			break;
++		case V4L2_MPEG_VIDEO_H264_LEVEL_2_0:
++			param.level = MMAL_VIDEO_LEVEL_H264_2;
++			break;
++		case V4L2_MPEG_VIDEO_H264_LEVEL_2_1:
++			param.level = MMAL_VIDEO_LEVEL_H264_21;
++			break;
++		case V4L2_MPEG_VIDEO_H264_LEVEL_2_2:
++			param.level = MMAL_VIDEO_LEVEL_H264_22;
++			break;
++		case V4L2_MPEG_VIDEO_H264_LEVEL_3_0:
++			param.level = MMAL_VIDEO_LEVEL_H264_3;
++			break;
++		case V4L2_MPEG_VIDEO_H264_LEVEL_3_1:
++			param.level = MMAL_VIDEO_LEVEL_H264_31;
++			break;
++		case V4L2_MPEG_VIDEO_H264_LEVEL_3_2:
++			param.level = MMAL_VIDEO_LEVEL_H264_32;
++			break;
++		case V4L2_MPEG_VIDEO_H264_LEVEL_4_0:
++			param.level = MMAL_VIDEO_LEVEL_H264_4;
++			break;
++		default:
++			/* Should never get here */
++			break;
++		}
++
++		ret = vchiq_mmal_port_parameter_set(dev->instance,
++			&dev->component[MMAL_COMPONENT_VIDEO_ENCODE]->output[0],
++			mmal_ctrl->mmal_id,
++			&param, sizeof(param));
++	}
++	return ret;
++}
++
++static int ctrl_set_scene_mode(struct bm2835_mmal_dev *dev,
++		      struct v4l2_ctrl *ctrl,
++		      const struct bm2835_mmal_v4l2_ctrl *mmal_ctrl)
++{
++	int ret = 0;
++	int shutter_speed;
++	struct vchiq_mmal_port *control;
++
++	v4l2_dbg(0, bcm2835_v4l2_debug, &dev->v4l2_dev,
++		"scene mode selected %d, was %d\n", ctrl->val,
++		dev->scene_mode);
++	control = &dev->component[MMAL_COMPONENT_CAMERA]->control;
++
++	if (ctrl->val == dev->scene_mode)
++		return 0;
++
++	if (ctrl->val == V4L2_SCENE_MODE_NONE) {
++		/* Restore all user selections */
++		dev->scene_mode = V4L2_SCENE_MODE_NONE;
++
++		if (dev->exposure_mode_user == MMAL_PARAM_EXPOSUREMODE_OFF)
++			shutter_speed = dev->manual_shutter_speed;
++		else
++			shutter_speed = 0;
++
++		v4l2_dbg(0, bcm2835_v4l2_debug, &dev->v4l2_dev,
++			"%s: scene mode none: shut_speed %d, exp_mode %d, metering %d\n",
++			__func__, shutter_speed, dev->exposure_mode_user,
++			dev->metering_mode);
++		ret = vchiq_mmal_port_parameter_set(dev->instance,
++					control,
++					MMAL_PARAMETER_SHUTTER_SPEED,
++					&shutter_speed,
++					sizeof(shutter_speed));
++		ret += vchiq_mmal_port_parameter_set(dev->instance,
++					control,
++					MMAL_PARAMETER_EXPOSURE_MODE,
++					&dev->exposure_mode_user,
++					sizeof(u32));
++		dev->exposure_mode_active = dev->exposure_mode_user;
++		ret += vchiq_mmal_port_parameter_set(dev->instance,
++					control,
++					MMAL_PARAMETER_EXP_METERING_MODE,
++					&dev->metering_mode,
++					sizeof(u32));
++		ret += set_framerate_params(dev);
++	} else {
++		/* Set up scene mode */
++		int i;
++		const struct v4l2_mmal_scene_config *scene = NULL;
++		int shutter_speed;
++		enum mmal_parameter_exposuremode exposure_mode;
++		enum mmal_parameter_exposuremeteringmode metering_mode;
++
++		for (i = 0; i < ARRAY_SIZE(scene_configs); i++) {
++			if (scene_configs[i].v4l2_scene ==
++				ctrl->val) {
++				scene = &scene_configs[i];
++				break;
++			}
++		}
++		if (!scene)
++			return -EINVAL;
++		if (i >= ARRAY_SIZE(scene_configs))
++			return -EINVAL;
++
++		/* Set all the values */
++		dev->scene_mode = ctrl->val;
++
++		if (scene->exposure_mode == MMAL_PARAM_EXPOSUREMODE_OFF)
++			shutter_speed = dev->manual_shutter_speed;
++		else
++			shutter_speed = 0;
++		exposure_mode = scene->exposure_mode;
++		metering_mode = scene->metering_mode;
++
++		v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev,
++			"%s: scene mode set: shut_speed %d, exp_mode %d, metering %d\n",
++			__func__, shutter_speed, exposure_mode, metering_mode);
++
++		ret = vchiq_mmal_port_parameter_set(dev->instance, control,
++					MMAL_PARAMETER_SHUTTER_SPEED,
++					&shutter_speed,
++					sizeof(shutter_speed));
++		ret += vchiq_mmal_port_parameter_set(dev->instance,
++					control,
++					MMAL_PARAMETER_EXPOSURE_MODE,
++					&exposure_mode,
++					sizeof(u32));
++		dev->exposure_mode_active = exposure_mode;
++		ret += vchiq_mmal_port_parameter_set(dev->instance, control,
++					MMAL_PARAMETER_EXPOSURE_MODE,
++					&exposure_mode,
++					sizeof(u32));
++		ret += vchiq_mmal_port_parameter_set(dev->instance, control,
++					MMAL_PARAMETER_EXP_METERING_MODE,
++					&metering_mode,
++					sizeof(u32));
++		ret += set_framerate_params(dev);
++	}
++	if (ret) {
++		v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev,
++			"%s: Setting scene to %d, ret=%d\n",
++			__func__, ctrl->val, ret);
++		ret = -EINVAL;
++	}
++	return ret;
++}
++
++static int bm2835_mmal_s_ctrl(struct v4l2_ctrl *ctrl)
++{
++	struct bm2835_mmal_dev *dev =
++		container_of(ctrl->handler, struct bm2835_mmal_dev,
++			     ctrl_handler);
++	const struct bm2835_mmal_v4l2_ctrl *mmal_ctrl = ctrl->priv;
++	int ret;
++
++	if ((mmal_ctrl == NULL) ||
++	    (mmal_ctrl->id != ctrl->id) ||
++	    (mmal_ctrl->setter == NULL)) {
++		pr_warn("mmal_ctrl:%p ctrl id:%d\n", mmal_ctrl, ctrl->id);
++		return -EINVAL;
++	}
++
++	ret = mmal_ctrl->setter(dev, ctrl, mmal_ctrl);
++	if (ret)
++		pr_warn("ctrl id:%d/MMAL param %08X- returned ret %d\n",
++				ctrl->id, mmal_ctrl->mmal_id, ret);
++	if (mmal_ctrl->ignore_errors)
++		ret = 0;
++	return ret;
++}
++
++static const struct v4l2_ctrl_ops bm2835_mmal_ctrl_ops = {
++	.s_ctrl = bm2835_mmal_s_ctrl,
++};
++
++
++
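++/* Each entry lists, in order: V4L2 control id, control type, minimum
++ * (for STD_MENU entries this holds the inverted mask of valid menu
++ * items), maximum, default, step, integer-menu table, MMAL parameter
++ * id, setter callback and whether errors from the setter are ignored.
++ */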
++static const struct bm2835_mmal_v4l2_ctrl v4l2_ctrls[V4L2_CTRL_COUNT] = {
++	{
++		V4L2_CID_SATURATION, MMAL_CONTROL_TYPE_STD,
++		-100, 100, 0, 1, NULL,
++		MMAL_PARAMETER_SATURATION,
++		&ctrl_set_rational,
++		false
++	},
++	{
++		V4L2_CID_SHARPNESS, MMAL_CONTROL_TYPE_STD,
++		-100, 100, 0, 1, NULL,
++		MMAL_PARAMETER_SHARPNESS,
++		&ctrl_set_rational,
++		false
++	},
++	{
++		V4L2_CID_CONTRAST, MMAL_CONTROL_TYPE_STD,
++		-100, 100, 0, 1, NULL,
++		MMAL_PARAMETER_CONTRAST,
++		&ctrl_set_rational,
++		false
++	},
++	{
++		V4L2_CID_BRIGHTNESS, MMAL_CONTROL_TYPE_STD,
++		0, 100, 50, 1, NULL,
++		MMAL_PARAMETER_BRIGHTNESS,
++		&ctrl_set_rational,
++		false
++	},
++	{
++		V4L2_CID_ISO_SENSITIVITY, MMAL_CONTROL_TYPE_INT_MENU,
++		0, ARRAY_SIZE(iso_qmenu) - 1, 0, 1, iso_qmenu,
++		MMAL_PARAMETER_ISO,
++		&ctrl_set_value_menu,
++		false
++	},
++	{
++		V4L2_CID_IMAGE_STABILIZATION, MMAL_CONTROL_TYPE_STD,
++		0, 1, 0, 1, NULL,
++		MMAL_PARAMETER_VIDEO_STABILISATION,
++		&ctrl_set_value,
++		false
++	},
++/*	{
++		0, MMAL_CONTROL_TYPE_CLUSTER, 3, 1, 0, NULL, 0, NULL
++	}, */
++	{
++		V4L2_CID_EXPOSURE_AUTO, MMAL_CONTROL_TYPE_STD_MENU,
++		~0x03, 3, V4L2_EXPOSURE_AUTO, 0, NULL,
++		MMAL_PARAMETER_EXPOSURE_MODE,
++		&ctrl_set_exposure,
++		false
++	},
++/* todo this needs mixing in with set exposure
++	{
++	       V4L2_CID_SCENE_MODE, MMAL_CONTROL_TYPE_STD_MENU,
++	},
++ */
++	{
++		V4L2_CID_EXPOSURE_ABSOLUTE, MMAL_CONTROL_TYPE_STD,
++		/* Units of 100usecs */
++		1, 1*1000*10, 100*10, 1, NULL,
++		MMAL_PARAMETER_SHUTTER_SPEED,
++		&ctrl_set_exposure,
++		false
++	},
++	{
++		V4L2_CID_AUTO_EXPOSURE_BIAS, MMAL_CONTROL_TYPE_INT_MENU,
++		0, ARRAY_SIZE(ev_bias_qmenu) - 1,
++		(ARRAY_SIZE(ev_bias_qmenu)+1)/2 - 1, 0, ev_bias_qmenu,
++		MMAL_PARAMETER_EXPOSURE_COMP,
++		&ctrl_set_value_ev,
++		false
++	},
++	{
++		V4L2_CID_EXPOSURE_AUTO_PRIORITY, MMAL_CONTROL_TYPE_STD,
++		0, 1,
++		0, 1, NULL,
++		0,	/* Dummy MMAL ID as it gets mapped into FPS range*/
++		&ctrl_set_exposure,
++		false
++	},
++	{
++		V4L2_CID_EXPOSURE_METERING,
++		MMAL_CONTROL_TYPE_STD_MENU,
++		~0x7, 2, V4L2_EXPOSURE_METERING_AVERAGE, 0, NULL,
++		MMAL_PARAMETER_EXP_METERING_MODE,
++		&ctrl_set_metering_mode,
++		false
++	},
++	{
++		V4L2_CID_AUTO_N_PRESET_WHITE_BALANCE,
++		MMAL_CONTROL_TYPE_STD_MENU,
++		~0x3ff, 9, V4L2_WHITE_BALANCE_AUTO, 0, NULL,
++		MMAL_PARAMETER_AWB_MODE,
++		&ctrl_set_awb_mode,
++		false
++	},
++	{
++		V4L2_CID_RED_BALANCE, MMAL_CONTROL_TYPE_STD,
++		1, 7999, 1000, 1, NULL,
++		MMAL_PARAMETER_CUSTOM_AWB_GAINS,
++		&ctrl_set_awb_gains,
++		false
++	},
++	{
++		V4L2_CID_BLUE_BALANCE, MMAL_CONTROL_TYPE_STD,
++		1, 7999, 1000, 1, NULL,
++		MMAL_PARAMETER_CUSTOM_AWB_GAINS,
++		&ctrl_set_awb_gains,
++		false
++	},
++	{
++		V4L2_CID_COLORFX, MMAL_CONTROL_TYPE_STD_MENU,
++		0, 15, V4L2_COLORFX_NONE, 0, NULL,
++		MMAL_PARAMETER_IMAGE_EFFECT,
++		&ctrl_set_image_effect,
++		false
++	},
++	{
++		V4L2_CID_COLORFX_CBCR, MMAL_CONTROL_TYPE_STD,
++		0, 0xffff, 0x8080, 1, NULL,
++		MMAL_PARAMETER_COLOUR_EFFECT,
++		&ctrl_set_colfx,
++		false
++	},
++	{
++		V4L2_CID_ROTATE, MMAL_CONTROL_TYPE_STD,
++		0, 360, 0, 90, NULL,
++		MMAL_PARAMETER_ROTATION,
++		&ctrl_set_rotate,
++		false
++	},
++	{
++		V4L2_CID_HFLIP, MMAL_CONTROL_TYPE_STD,
++		0, 1, 0, 1, NULL,
++		MMAL_PARAMETER_MIRROR,
++		&ctrl_set_flip,
++		false
++	},
++	{
++		V4L2_CID_VFLIP, MMAL_CONTROL_TYPE_STD,
++		0, 1, 0, 1, NULL,
++		MMAL_PARAMETER_MIRROR,
++		&ctrl_set_flip,
++		false
++	},
++	{
++		V4L2_CID_MPEG_VIDEO_BITRATE_MODE, MMAL_CONTROL_TYPE_STD_MENU,
++		0, ARRAY_SIZE(bitrate_mode_qmenu) - 1,
++		0, 0, bitrate_mode_qmenu,
++		MMAL_PARAMETER_RATECONTROL,
++		&ctrl_set_bitrate_mode,
++		false
++	},
++	{
++		V4L2_CID_MPEG_VIDEO_BITRATE, MMAL_CONTROL_TYPE_STD,
++		25*1000, 25*1000*1000, 10*1000*1000, 25*1000, NULL,
++		MMAL_PARAMETER_VIDEO_BIT_RATE,
++		&ctrl_set_bitrate,
++		false
++	},
++	{
++		V4L2_CID_JPEG_COMPRESSION_QUALITY, MMAL_CONTROL_TYPE_STD,
++		1, 100,
++		30, 1, NULL,
++		MMAL_PARAMETER_JPEG_Q_FACTOR,
++		&ctrl_set_image_encode_output,
++		false
++	},
++	{
++		V4L2_CID_POWER_LINE_FREQUENCY, MMAL_CONTROL_TYPE_STD_MENU,
++		0, ARRAY_SIZE(mains_freq_qmenu) - 1,
++		1, 1, NULL,
++		MMAL_PARAMETER_FLICKER_AVOID,
++		&ctrl_set_flicker_avoidance,
++		false
++	},
++	{
++		V4L2_CID_MPEG_VIDEO_REPEAT_SEQ_HEADER, MMAL_CONTROL_TYPE_STD,
++		0, 1,
++		0, 1, NULL,
++		MMAL_PARAMETER_VIDEO_ENCODE_INLINE_HEADER,
++		&ctrl_set_video_encode_param_output,
++		true	/* Errors ignored as requires latest firmware to work */
++	},
++	{
++		V4L2_CID_MPEG_VIDEO_H264_PROFILE,
++		MMAL_CONTROL_TYPE_STD_MENU,
++		~((1<<V4L2_MPEG_VIDEO_H264_PROFILE_BASELINE) |
++			(1<<V4L2_MPEG_VIDEO_H264_PROFILE_CONSTRAINED_BASELINE) |
++			(1<<V4L2_MPEG_VIDEO_H264_PROFILE_MAIN) |
++			(1<<V4L2_MPEG_VIDEO_H264_PROFILE_HIGH)),
++		V4L2_MPEG_VIDEO_H264_PROFILE_HIGH,
++		V4L2_MPEG_VIDEO_H264_PROFILE_HIGH, 1, NULL,
++		MMAL_PARAMETER_PROFILE,
++		&ctrl_set_video_encode_profile_level,
++		false
++	},
++	{
++		V4L2_CID_MPEG_VIDEO_H264_LEVEL, MMAL_CONTROL_TYPE_STD_MENU,
++		~((1<<V4L2_MPEG_VIDEO_H264_LEVEL_1_0) |
++			(1<<V4L2_MPEG_VIDEO_H264_LEVEL_1B) |
++			(1<<V4L2_MPEG_VIDEO_H264_LEVEL_1_1) |
++			(1<<V4L2_MPEG_VIDEO_H264_LEVEL_1_2) |
++			(1<<V4L2_MPEG_VIDEO_H264_LEVEL_1_3) |
++			(1<<V4L2_MPEG_VIDEO_H264_LEVEL_2_0) |
++			(1<<V4L2_MPEG_VIDEO_H264_LEVEL_2_1) |
++			(1<<V4L2_MPEG_VIDEO_H264_LEVEL_2_2) |
++			(1<<V4L2_MPEG_VIDEO_H264_LEVEL_3_0) |
++			(1<<V4L2_MPEG_VIDEO_H264_LEVEL_3_1) |
++			(1<<V4L2_MPEG_VIDEO_H264_LEVEL_3_2) |
++			(1<<V4L2_MPEG_VIDEO_H264_LEVEL_4_0)),
++		V4L2_MPEG_VIDEO_H264_LEVEL_4_0,
++		V4L2_MPEG_VIDEO_H264_LEVEL_4_0, 1, NULL,
++		MMAL_PARAMETER_PROFILE,
++		&ctrl_set_video_encode_profile_level,
++		false
++	},
++	{
++		V4L2_CID_SCENE_MODE, MMAL_CONTROL_TYPE_STD_MENU,
++		-1,	/* Min is computed at runtime */
++		V4L2_SCENE_MODE_TEXT,
++		V4L2_SCENE_MODE_NONE, 1, NULL,
++		MMAL_PARAMETER_PROFILE,
++		&ctrl_set_scene_mode,
++		false
++	},
++	{
++		V4L2_CID_MPEG_VIDEO_H264_I_PERIOD, MMAL_CONTROL_TYPE_STD,
++		0, 0x7FFFFFFF, 60, 1, NULL,
++		MMAL_PARAMETER_INTRAPERIOD,
++		&ctrl_set_video_encode_param_output,
++		false
++	},
++};
++
++int bm2835_mmal_set_all_camera_controls(struct bm2835_mmal_dev *dev)
++{
++	int c;
++	int ret = 0;
++
++	for (c = 0; c < V4L2_CTRL_COUNT; c++) {
++		if ((dev->ctrls[c]) && (v4l2_ctrls[c].setter)) {
++			ret = v4l2_ctrls[c].setter(dev, dev->ctrls[c],
++						   &v4l2_ctrls[c]);
++			if (!v4l2_ctrls[c].ignore_errors && ret) {
++				v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev,
++					"Failed when setting default values for ctrl %d\n",
++					c);
++				break;
++			}
++		}
++	}
++	return ret;
++}
++
++int set_framerate_params(struct bm2835_mmal_dev *dev)
++{
++	struct mmal_parameter_fps_range fps_range;
++	int ret;
++
++	if ((dev->exposure_mode_active != MMAL_PARAM_EXPOSUREMODE_OFF) &&
++	     (dev->exp_auto_priority)) {
++		/* Variable FPS. Define min FPS as 1fps.
++		 * Max as max defined FPS.
++		 */
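++		/* For example, a timeperframe of 1/30s gives an fps
++		 * range of 1/1 to 30/1 here.
++		 */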
++		fps_range.fps_low.num = 1;
++		fps_range.fps_low.den = 1;
++		fps_range.fps_high.num = dev->capture.timeperframe.denominator;
++		fps_range.fps_high.den = dev->capture.timeperframe.numerator;
++	} else {
++		/* Fixed FPS - set min and max to be the same */
++		fps_range.fps_low.num = fps_range.fps_high.num =
++			dev->capture.timeperframe.denominator;
++		fps_range.fps_low.den = fps_range.fps_high.den =
++			dev->capture.timeperframe.numerator;
++	}
++
++	v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev,
++			 "Set fps range to %d/%d to %d/%d\n",
++			 fps_range.fps_low.num,
++			 fps_range.fps_low.den,
++			 fps_range.fps_high.num,
++			 fps_range.fps_high.den
++		 );
++
++	ret = vchiq_mmal_port_parameter_set(dev->instance,
++				      &dev->component[MMAL_COMPONENT_CAMERA]->
++					output[MMAL_CAMERA_PORT_PREVIEW],
++				      MMAL_PARAMETER_FPS_RANGE,
++				      &fps_range, sizeof(fps_range));
++	ret += vchiq_mmal_port_parameter_set(dev->instance,
++				      &dev->component[MMAL_COMPONENT_CAMERA]->
++					output[MMAL_CAMERA_PORT_VIDEO],
++				      MMAL_PARAMETER_FPS_RANGE,
++				      &fps_range, sizeof(fps_range));
++	ret += vchiq_mmal_port_parameter_set(dev->instance,
++				      &dev->component[MMAL_COMPONENT_CAMERA]->
++					output[MMAL_CAMERA_PORT_CAPTURE],
++				      MMAL_PARAMETER_FPS_RANGE,
++				      &fps_range, sizeof(fps_range));
++	if (ret)
++		v4l2_dbg(0, bcm2835_v4l2_debug, &dev->v4l2_dev,
++		 "Failed to set fps ret %d\n",
++		 ret);
++
++	return ret;
++}
++
++int bm2835_mmal_init_controls(struct bm2835_mmal_dev *dev,
++			      struct v4l2_ctrl_handler *hdl)
++{
++	int c;
++	const struct bm2835_mmal_v4l2_ctrl *ctrl;
++
++	v4l2_ctrl_handler_init(hdl, V4L2_CTRL_COUNT);
++
++	for (c = 0; c < V4L2_CTRL_COUNT; c++) {
++		ctrl = &v4l2_ctrls[c];
++
++		switch (ctrl->type) {
++		case MMAL_CONTROL_TYPE_STD:
++			dev->ctrls[c] = v4l2_ctrl_new_std(hdl,
++				&bm2835_mmal_ctrl_ops, ctrl->id,
++				ctrl->min, ctrl->max, ctrl->step, ctrl->def);
++			break;
++
++		case MMAL_CONTROL_TYPE_STD_MENU:
++		{
++			int mask = ctrl->min;
++
++			if (ctrl->id == V4L2_CID_SCENE_MODE) {
++				/* Special handling to work out the mask
++				 * value based on the scene_configs array
++				 * at runtime. Reduces the chance of
++				 * mismatches.
++				 */
++				int i;
++				mask = 1<<V4L2_SCENE_MODE_NONE;
++				for (i = 0;
++				     i < ARRAY_SIZE(scene_configs);
++				     i++) {
++					mask |= 1<<scene_configs[i].v4l2_scene;
++				}
++				mask = ~mask;
++			}
++
++			dev->ctrls[c] = v4l2_ctrl_new_std_menu(hdl,
++			&bm2835_mmal_ctrl_ops, ctrl->id,
++			ctrl->max, mask, ctrl->def);
++			break;
++		}
++
++		case MMAL_CONTROL_TYPE_INT_MENU:
++			dev->ctrls[c] = v4l2_ctrl_new_int_menu(hdl,
++				&bm2835_mmal_ctrl_ops, ctrl->id,
++				ctrl->max, ctrl->def, ctrl->imenu);
++			break;
++
++		case MMAL_CONTROL_TYPE_CLUSTER:
++			/* skip this entry when constructing controls */
++			continue;
++		}
++
++		if (hdl->error)
++			break;
++
++		dev->ctrls[c]->priv = (void *)ctrl;
++	}
++
++	if (hdl->error) {
++		pr_err("error adding control %d/%d id 0x%x\n", c,
++			 V4L2_CTRL_COUNT, ctrl->id);
++		return hdl->error;
++	}
++
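++	/* Second pass: now that every control exists, set up any
++	 * control clusters described by the table.
++	 */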
++	for (c = 0; c < V4L2_CTRL_COUNT; c++) {
++		ctrl = &v4l2_ctrls[c];
++
++		switch (ctrl->type) {
++		case MMAL_CONTROL_TYPE_CLUSTER:
++			v4l2_ctrl_auto_cluster(ctrl->min,
++					       &dev->ctrls[c+1],
++					       ctrl->max,
++					       ctrl->def);
++			break;
++
++		case MMAL_CONTROL_TYPE_STD:
++		case MMAL_CONTROL_TYPE_STD_MENU:
++		case MMAL_CONTROL_TYPE_INT_MENU:
++			break;
++		}
++
++	}
++
++	return 0;
++}
+--- /dev/null
++++ b/drivers/media/platform/bcm2835/mmal-common.h
+@@ -0,0 +1,53 @@
++/*
++ * Broadcom BM2835 V4L2 driver
++ *
++ * Copyright © 2013 Raspberry Pi (Trading) Ltd.
++ *
++ * This file is subject to the terms and conditions of the GNU General Public
++ * License.  See the file COPYING in the main directory of this archive
++ * for more details.
++ *
++ * Authors: Vincent Sanders <vincent.sanders at collabora.co.uk>
++ *          Dave Stevenson <dsteve at broadcom.com>
++ *          Simon Mellor <simellor at broadcom.com>
++ *          Luke Diamand <luked at broadcom.com>
++ *
++ * MMAL structures
++ *
++ */
++
++#define MMAL_FOURCC(a, b, c, d) ((a) | ((b) << 8) | ((c) << 16) | ((d) << 24))
++#define MMAL_MAGIC MMAL_FOURCC('m', 'm', 'a', 'l')
++
++/** Special value signalling that time is not known */
++#define MMAL_TIME_UNKNOWN (1LL<<63)
++
++/* mapping between v4l and mmal video modes */
++struct mmal_fmt {
++	char  *name;
++	u32   fourcc;          /* v4l2 format id */
++	int   flags;           /* v4l2 flags field */
++	u32   mmal;
++	int   depth;
++	u32   mmal_component;  /* MMAL component index to be used to encode */
++	u32   ybbp;            /* depth of first Y plane for planar formats */
++};
++
++/* buffer for one video frame */
++struct mmal_buffer {
++	/* v4l buffer data -- must be first */
++	struct vb2_v4l2_buffer	vb;
++
++	/* list of buffers available */
++	struct list_head	list;
++
++	void *buffer; /* buffer pointer */
++	unsigned long buffer_size; /* size of allocated buffer */
++};
++
++/* colour effect parameters */
++struct mmal_colourfx {
++	s32 enable;
++	u32 u;
++	u32 v;
++};
+--- /dev/null
++++ b/drivers/media/platform/bcm2835/mmal-encodings.h
+@@ -0,0 +1,127 @@
++/*
++ * Broadcom BM2835 V4L2 driver
++ *
++ * Copyright © 2013 Raspberry Pi (Trading) Ltd.
++ *
++ * This file is subject to the terms and conditions of the GNU General Public
++ * License.  See the file COPYING in the main directory of this archive
++ * for more details.
++ *
++ * Authors: Vincent Sanders <vincent.sanders at collabora.co.uk>
++ *          Dave Stevenson <dsteve at broadcom.com>
++ *          Simon Mellor <simellor at broadcom.com>
++ *          Luke Diamand <luked at broadcom.com>
++ */
++#ifndef MMAL_ENCODINGS_H
++#define MMAL_ENCODINGS_H
++
++#define MMAL_ENCODING_H264             MMAL_FOURCC('H', '2', '6', '4')
++#define MMAL_ENCODING_H263             MMAL_FOURCC('H', '2', '6', '3')
++#define MMAL_ENCODING_MP4V             MMAL_FOURCC('M', 'P', '4', 'V')
++#define MMAL_ENCODING_MP2V             MMAL_FOURCC('M', 'P', '2', 'V')
++#define MMAL_ENCODING_MP1V             MMAL_FOURCC('M', 'P', '1', 'V')
++#define MMAL_ENCODING_WMV3             MMAL_FOURCC('W', 'M', 'V', '3')
++#define MMAL_ENCODING_WMV2             MMAL_FOURCC('W', 'M', 'V', '2')
++#define MMAL_ENCODING_WMV1             MMAL_FOURCC('W', 'M', 'V', '1')
++#define MMAL_ENCODING_WVC1             MMAL_FOURCC('W', 'V', 'C', '1')
++#define MMAL_ENCODING_VP8              MMAL_FOURCC('V', 'P', '8', ' ')
++#define MMAL_ENCODING_VP7              MMAL_FOURCC('V', 'P', '7', ' ')
++#define MMAL_ENCODING_VP6              MMAL_FOURCC('V', 'P', '6', ' ')
++#define MMAL_ENCODING_THEORA           MMAL_FOURCC('T', 'H', 'E', 'O')
++#define MMAL_ENCODING_SPARK            MMAL_FOURCC('S', 'P', 'R', 'K')
++#define MMAL_ENCODING_MJPEG            MMAL_FOURCC('M', 'J', 'P', 'G')
++
++#define MMAL_ENCODING_JPEG             MMAL_FOURCC('J', 'P', 'E', 'G')
++#define MMAL_ENCODING_GIF              MMAL_FOURCC('G', 'I', 'F', ' ')
++#define MMAL_ENCODING_PNG              MMAL_FOURCC('P', 'N', 'G', ' ')
++#define MMAL_ENCODING_PPM              MMAL_FOURCC('P', 'P', 'M', ' ')
++#define MMAL_ENCODING_TGA              MMAL_FOURCC('T', 'G', 'A', ' ')
++#define MMAL_ENCODING_BMP              MMAL_FOURCC('B', 'M', 'P', ' ')
++
++#define MMAL_ENCODING_I420             MMAL_FOURCC('I', '4', '2', '0')
++#define MMAL_ENCODING_I420_SLICE       MMAL_FOURCC('S', '4', '2', '0')
++#define MMAL_ENCODING_YV12             MMAL_FOURCC('Y', 'V', '1', '2')
++#define MMAL_ENCODING_I422             MMAL_FOURCC('I', '4', '2', '2')
++#define MMAL_ENCODING_I422_SLICE       MMAL_FOURCC('S', '4', '2', '2')
++#define MMAL_ENCODING_YUYV             MMAL_FOURCC('Y', 'U', 'Y', 'V')
++#define MMAL_ENCODING_YVYU             MMAL_FOURCC('Y', 'V', 'Y', 'U')
++#define MMAL_ENCODING_UYVY             MMAL_FOURCC('U', 'Y', 'V', 'Y')
++#define MMAL_ENCODING_VYUY             MMAL_FOURCC('V', 'Y', 'U', 'Y')
++#define MMAL_ENCODING_NV12             MMAL_FOURCC('N', 'V', '1', '2')
++#define MMAL_ENCODING_NV21             MMAL_FOURCC('N', 'V', '2', '1')
++#define MMAL_ENCODING_ARGB             MMAL_FOURCC('A', 'R', 'G', 'B')
++#define MMAL_ENCODING_RGBA             MMAL_FOURCC('R', 'G', 'B', 'A')
++#define MMAL_ENCODING_ABGR             MMAL_FOURCC('A', 'B', 'G', 'R')
++#define MMAL_ENCODING_BGRA             MMAL_FOURCC('B', 'G', 'R', 'A')
++#define MMAL_ENCODING_RGB16            MMAL_FOURCC('R', 'G', 'B', '2')
++#define MMAL_ENCODING_RGB24            MMAL_FOURCC('R', 'G', 'B', '3')
++#define MMAL_ENCODING_RGB32            MMAL_FOURCC('R', 'G', 'B', '4')
++#define MMAL_ENCODING_BGR16            MMAL_FOURCC('B', 'G', 'R', '2')
++#define MMAL_ENCODING_BGR24            MMAL_FOURCC('B', 'G', 'R', '3')
++#define MMAL_ENCODING_BGR32            MMAL_FOURCC('B', 'G', 'R', '4')
++
++/** SAND Video (YUVUV128) format, native format understood by VideoCore.
++ * This format is *not* opaque - if requested you will receive full frames
++ * of YUV_UV video.
++ */
++#define MMAL_ENCODING_YUVUV128         MMAL_FOURCC('S', 'A', 'N', 'D')
++
++/** VideoCore opaque image format, image handles are returned to
++ * the host but not the actual image data.
++ */
++#define MMAL_ENCODING_OPAQUE           MMAL_FOURCC('O', 'P', 'Q', 'V')
++
++/** An EGL image handle
++ */
++#define MMAL_ENCODING_EGL_IMAGE        MMAL_FOURCC('E', 'G', 'L', 'I')
++
++/* @} */
++
++/** \name Pre-defined audio encodings */
++/* @{ */
++#define MMAL_ENCODING_PCM_UNSIGNED_BE  MMAL_FOURCC('P', 'C', 'M', 'U')
++#define MMAL_ENCODING_PCM_UNSIGNED_LE  MMAL_FOURCC('p', 'c', 'm', 'u')
++#define MMAL_ENCODING_PCM_SIGNED_BE    MMAL_FOURCC('P', 'C', 'M', 'S')
++#define MMAL_ENCODING_PCM_SIGNED_LE    MMAL_FOURCC('p', 'c', 'm', 's')
++#define MMAL_ENCODING_PCM_FLOAT_BE     MMAL_FOURCC('P', 'C', 'M', 'F')
++#define MMAL_ENCODING_PCM_FLOAT_LE     MMAL_FOURCC('p', 'c', 'm', 'f')
++
++/* Pre-defined H264 encoding variants */
++
++/** ISO 14496-10 Annex B byte stream format */
++#define MMAL_ENCODING_VARIANT_H264_DEFAULT   0
++/** ISO 14496-15 AVC stream format */
++#define MMAL_ENCODING_VARIANT_H264_AVC1      MMAL_FOURCC('A', 'V', 'C', '1')
++/** Implicitly delineated NAL units without emulation prevention */
++#define MMAL_ENCODING_VARIANT_H264_RAW       MMAL_FOURCC('R', 'A', 'W', ' ')
++
++
++/** \defgroup MmalColorSpace List of pre-defined video color spaces
++ * This defines a list of common color spaces. This list isn't exhaustive and
++ * is only provided as a convenience to avoid clients having to use FourCC
++ * codes directly. However components are allowed to define and use their own
++ * FourCC codes.
++ */
++/* @{ */
++
++/** Unknown color space */
++#define MMAL_COLOR_SPACE_UNKNOWN       0
++/** ITU-R BT.601-5 [SDTV] */
++#define MMAL_COLOR_SPACE_ITUR_BT601    MMAL_FOURCC('Y', '6', '0', '1')
++/** ITU-R BT.709-3 [HDTV] */
++#define MMAL_COLOR_SPACE_ITUR_BT709    MMAL_FOURCC('Y', '7', '0', '9')
++/** JPEG JFIF */
++#define MMAL_COLOR_SPACE_JPEG_JFIF     MMAL_FOURCC('Y', 'J', 'F', 'I')
++/** Title 47 Code of Federal Regulations (2003) 73.682 (a) (20) */
++#define MMAL_COLOR_SPACE_FCC           MMAL_FOURCC('Y', 'F', 'C', 'C')
++/** Society of Motion Picture and Television Engineers 240M (1999) */
++#define MMAL_COLOR_SPACE_SMPTE240M     MMAL_FOURCC('Y', '2', '4', '0')
++/** ITU-R BT.470-2 System M */
++#define MMAL_COLOR_SPACE_BT470_2_M     MMAL_FOURCC('Y', '_', '_', 'M')
++/** ITU-R BT.470-2 System BG */
++#define MMAL_COLOR_SPACE_BT470_2_BG    MMAL_FOURCC('Y', '_', 'B', 'G')
++/** JPEG JFIF, but with 16..255 luma */
++#define MMAL_COLOR_SPACE_JFIF_Y16_255  MMAL_FOURCC('Y', 'Y', '1', '6')
++/* @} MmalColorSpace List */
++
++#endif /* MMAL_ENCODINGS_H */
+--- /dev/null
++++ b/drivers/media/platform/bcm2835/mmal-msg-common.h
+@@ -0,0 +1,50 @@
++/*
++ * Broadcom BM2835 V4L2 driver
++ *
++ * Copyright © 2013 Raspberry Pi (Trading) Ltd.
++ *
++ * This file is subject to the terms and conditions of the GNU General Public
++ * License.  See the file COPYING in the main directory of this archive
++ * for more details.
++ *
++ * Authors: Vincent Sanders <vincent.sanders at collabora.co.uk>
++ *          Dave Stevenson <dsteve at broadcom.com>
++ *          Simon Mellor <simellor at broadcom.com>
++ *          Luke Diamand <luked at broadcom.com>
++ */
++
++#ifndef MMAL_MSG_COMMON_H
++#define MMAL_MSG_COMMON_H
++
++enum mmal_msg_status {
++	MMAL_MSG_STATUS_SUCCESS = 0, /**< Success */
++	MMAL_MSG_STATUS_ENOMEM,      /**< Out of memory */
++	MMAL_MSG_STATUS_ENOSPC,      /**< Out of resources other than memory */
++	MMAL_MSG_STATUS_EINVAL,      /**< Argument is invalid */
++	MMAL_MSG_STATUS_ENOSYS,      /**< Function not implemented */
++	MMAL_MSG_STATUS_ENOENT,      /**< No such file or directory */
++	MMAL_MSG_STATUS_ENXIO,       /**< No such device or address */
++	MMAL_MSG_STATUS_EIO,         /**< I/O error */
++	MMAL_MSG_STATUS_ESPIPE,      /**< Illegal seek */
++	MMAL_MSG_STATUS_ECORRUPT,    /**< Data is corrupt \attention */
++	MMAL_MSG_STATUS_ENOTREADY,   /**< Component is not ready */
++	MMAL_MSG_STATUS_ECONFIG,     /**< Component is not configured */
++	MMAL_MSG_STATUS_EISCONN,     /**< Port is already connected */
++	MMAL_MSG_STATUS_ENOTCONN,    /**< Port is disconnected */
++	MMAL_MSG_STATUS_EAGAIN,      /**< Resource temporarily unavailable. */
++	MMAL_MSG_STATUS_EFAULT,      /**< Bad address */
++};
++
++struct mmal_rect {
++	s32 x;      /**< x coordinate (from left) */
++	s32 y;      /**< y coordinate (from top) */
++	s32 width;  /**< width */
++	s32 height; /**< height */
++};
++
++struct mmal_rational {
++	s32 num;    /**< Numerator */
++	s32 den;    /**< Denominator */
++};
++
++#endif /* MMAL_MSG_COMMON_H */
+--- /dev/null
++++ b/drivers/media/platform/bcm2835/mmal-msg-format.h
+@@ -0,0 +1,81 @@
++/*
++ * Broadcom BM2835 V4L2 driver
++ *
++ * Copyright © 2013 Raspberry Pi (Trading) Ltd.
++ *
++ * This file is subject to the terms and conditions of the GNU General Public
++ * License.  See the file COPYING in the main directory of this archive
++ * for more details.
++ *
++ * Authors: Vincent Sanders <vincent.sanders at collabora.co.uk>
++ *          Dave Stevenson <dsteve at broadcom.com>
++ *          Simon Mellor <simellor at broadcom.com>
++ *          Luke Diamand <luked at broadcom.com>
++ */
++
++#ifndef MMAL_MSG_FORMAT_H
++#define MMAL_MSG_FORMAT_H
++
++#include "mmal-msg-common.h"
++
++/* MMAL_ES_FORMAT_T */
++
++
++struct mmal_audio_format {
++	u32 channels;           /**< Number of audio channels */
++	u32 sample_rate;        /**< Sample rate */
++
++	u32 bits_per_sample;    /**< Bits per sample */
++	u32 block_align;        /**< Size of a block of data */
++};
++
++struct mmal_video_format {
++	u32 width;        /**< Width of frame in pixels */
++	u32 height;       /**< Height of frame in rows of pixels */
++	struct mmal_rect crop;         /**< Visible region of the frame */
++	struct mmal_rational frame_rate;   /**< Frame rate */
++	struct mmal_rational par;          /**< Pixel aspect ratio */
++
++	/* FourCC specifying the color space of the video stream. See the
++	 * \ref MmalColorSpace "pre-defined color spaces" for some examples.
++	 */
++	u32 color_space;
++};
++
++struct mmal_subpicture_format {
++	u32 x_offset;
++	u32 y_offset;
++};
++
++union mmal_es_specific_format {
++	struct mmal_audio_format audio;
++	struct mmal_video_format video;
++	struct mmal_subpicture_format subpicture;
++};
++
++/** Definition of an elementary stream format (MMAL_ES_FORMAT_T) */
++struct mmal_es_format {
++	u32 type;      /* enum mmal_es_type */
++
++	u32 encoding;  /* FourCC specifying encoding of the elementary stream.*/
++	u32 encoding_variant; /* FourCC specifying the specific
++			       * encoding variant of the elementary
++			       * stream.
++			       */
++
++	union mmal_es_specific_format *es; /* TODO: pointers in
++					    * message serialisation?!?
++					    */
++					    /* Type specific
++					     * information for the
++					     * elementary stream
++					     */
++
++	u32 bitrate;        /**< Bitrate in bits per second */
++	u32 flags; /**< Flags describing properties of the elementary stream. */
++
++	u32 extradata_size;       /**< Size of the codec specific data */
++	u8  *extradata;           /**< Codec specific data */
++};
++
++#endif /* MMAL_MSG_FORMAT_H */
+--- /dev/null
++++ b/drivers/media/platform/bcm2835/mmal-msg-port.h
+@@ -0,0 +1,107 @@
++/*
++ * Broadcom BM2835 V4L2 driver
++ *
++ * Copyright © 2013 Raspberry Pi (Trading) Ltd.
++ *
++ * This file is subject to the terms and conditions of the GNU General Public
++ * License.  See the file COPYING in the main directory of this archive
++ * for more details.
++ *
++ * Authors: Vincent Sanders <vincent.sanders at collabora.co.uk>
++ *          Dave Stevenson <dsteve at broadcom.com>
++ *          Simon Mellor <simellor at broadcom.com>
++ *          Luke Diamand <luked at broadcom.com>
++ */
++
++/* MMAL_PORT_TYPE_T */
++enum mmal_port_type {
++	MMAL_PORT_TYPE_UNKNOWN = 0,  /**< Unknown port type */
++	MMAL_PORT_TYPE_CONTROL,      /**< Control port */
++	MMAL_PORT_TYPE_INPUT,        /**< Input port */
++	MMAL_PORT_TYPE_OUTPUT,       /**< Output port */
++	MMAL_PORT_TYPE_CLOCK,        /**< Clock port */
++};
++
++/** The port is pass-through and doesn't need buffer headers allocated */
++#define MMAL_PORT_CAPABILITY_PASSTHROUGH                       0x01
++/** The port wants to allocate the buffer payloads.
++ * This signals a preference that payload allocation should be done
++ * on this port for efficiency reasons. */
++#define MMAL_PORT_CAPABILITY_ALLOCATION                        0x02
++/** The port supports format change events.
++ * This applies to input ports and is used to let the client know
++ * whether the port supports being reconfigured via a format
++ * change event (i.e. without having to disable the port). */
++#define MMAL_PORT_CAPABILITY_SUPPORTS_EVENT_FORMAT_CHANGE      0x04
++
++/* mmal port structure (MMAL_PORT_T)
++ *
++ * Most elements are informational only; the pointer values for
++ * interrogation messages are generally provided as additional
++ * structures within the message. When used to set values, only the
++ * buffer_num, buffer_size and userdata parameters are writable.
++ */
++struct mmal_port {
++	void *priv; /* Private member used by the framework */
++	const char *name; /* Port name. Used for debugging purposes (RO) */
++
++	u32 type;      /* Type of the port (RO) enum mmal_port_type */
++	u16 index;     /* Index of the port in its type list (RO) */
++	u16 index_all; /* Index of the port in the list of all ports (RO) */
++
++	u32 is_enabled; /* Indicates whether the port is enabled or not (RO) */
++	struct mmal_es_format *format; /* Format of the elementary stream */
++
++	u32 buffer_num_min; /* Minimum number of buffers the port
++			     *   requires (RO).  This is set by the
++			     *   component.
++			     */
++
++	u32 buffer_size_min; /* Minimum size of buffers the port
++			      * requires (RO).  This is set by the
++			      * component.
++			      */
++
++	u32 buffer_alignment_min; /* Minimum alignment requirement for
++				   * the buffers (RO).  A value of
++				   * zero means no special alignment
++				   * requirements.  This is set by the
++				   * component.
++				   */
++
++	u32 buffer_num_recommended;  /* Number of buffers the port
++				      * recommends for optimal
++				      * performance (RO).  A value of
++				      * zero means no special
++				      * recommendation.  This is set
++				      * by the component.
++				      */
++
++	u32 buffer_size_recommended; /* Size of buffers the port
++				      * recommends for optimal
++				      * performance (RO).  A value of
++				      * zero means no special
++				      * recommendation.  This is set
++				      * by the component.
++				      */
++
++	u32 buffer_num; /* Actual number of buffers the port will use.
++			 * This is set by the client.
++			 */
++
++	u32 buffer_size; /* Actual maximum size of the buffers that
++			  * will be sent to the port. This is set by
++			  * the client.
++			  */
++
++	void *component; /* Component this port belongs to (Read Only) */
++
++	void *userdata; /* Field reserved for use by the client */
++
++	u32 capabilities; /* Flags describing the capabilities of a
++			   * port (RO).  Bitwise combination of \ref
++			   * portcapabilities "Port capabilities"
++			   * values.
++			   */
++
++};
+--- /dev/null
++++ b/drivers/media/platform/bcm2835/mmal-msg.h
+@@ -0,0 +1,404 @@
++/*
++ * Broadcom BM2835 V4L2 driver
++ *
++ * Copyright © 2013 Raspberry Pi (Trading) Ltd.
++ *
++ * This file is subject to the terms and conditions of the GNU General Public
++ * License.  See the file COPYING in the main directory of this archive
++ * for more details.
++ *
++ * Authors: Vincent Sanders <vincent.sanders at collabora.co.uk>
++ *          Dave Stevenson <dsteve at broadcom.com>
++ *          Simon Mellor <simellor at broadcom.com>
++ *          Luke Diamand <luked at broadcom.com>
++ */
++
++/* All the data structures which serialise the MMAL protocol. Note that
++ * these are mapped directly onto the received message data.
++ *
++ * BEWARE: they seem to *assume* pointers are u32 and that there is no
++ * structure padding!
++ *
++ * NOTE: this implementation uses kernel types to ensure sizes. Rather
++ * than assigning values to enums to force their size, the
++ * implementation uses fixed-size types and not the enums (though the
++ * comments note the actual enum type).
++ */
++
++#define VC_MMAL_VER 15
++#define VC_MMAL_MIN_VER 10
++#define VC_MMAL_SERVER_NAME  MAKE_FOURCC("mmal")
++
++/* max total message size is 512 bytes */
++#define MMAL_MSG_MAX_SIZE 512
++/* with six 32bit header elements max payload is therefore 488 bytes */
++#define MMAL_MSG_MAX_PAYLOAD 488
++
++#include "mmal-msg-common.h"
++#include "mmal-msg-format.h"
++#include "mmal-msg-port.h"
++
++enum mmal_msg_type {
++	MMAL_MSG_TYPE_QUIT = 1,
++	MMAL_MSG_TYPE_SERVICE_CLOSED,
++	MMAL_MSG_TYPE_GET_VERSION,
++	MMAL_MSG_TYPE_COMPONENT_CREATE,
++	MMAL_MSG_TYPE_COMPONENT_DESTROY, /* 5 */
++	MMAL_MSG_TYPE_COMPONENT_ENABLE,
++	MMAL_MSG_TYPE_COMPONENT_DISABLE,
++	MMAL_MSG_TYPE_PORT_INFO_GET,
++	MMAL_MSG_TYPE_PORT_INFO_SET,
++	MMAL_MSG_TYPE_PORT_ACTION, /* 10 */
++	MMAL_MSG_TYPE_BUFFER_FROM_HOST,
++	MMAL_MSG_TYPE_BUFFER_TO_HOST,
++	MMAL_MSG_TYPE_GET_STATS,
++	MMAL_MSG_TYPE_PORT_PARAMETER_SET,
++	MMAL_MSG_TYPE_PORT_PARAMETER_GET, /* 15 */
++	MMAL_MSG_TYPE_EVENT_TO_HOST,
++	MMAL_MSG_TYPE_GET_CORE_STATS_FOR_PORT,
++	MMAL_MSG_TYPE_OPAQUE_ALLOCATOR,
++	MMAL_MSG_TYPE_CONSUME_MEM,
++	MMAL_MSG_TYPE_LMK, /* 20 */
++	MMAL_MSG_TYPE_OPAQUE_ALLOCATOR_DESC,
++	MMAL_MSG_TYPE_DRM_GET_LHS32,
++	MMAL_MSG_TYPE_DRM_GET_TIME,
++	MMAL_MSG_TYPE_BUFFER_FROM_HOST_ZEROLEN,
++	MMAL_MSG_TYPE_PORT_FLUSH, /* 25 */
++	MMAL_MSG_TYPE_HOST_LOG,
++	MMAL_MSG_TYPE_MSG_LAST
++};
++
++/* port action request messages differ depending on the action type */
++enum mmal_msg_port_action_type {
++	MMAL_MSG_PORT_ACTION_TYPE_UNKNOWN = 0,      /* Unknown action */
++	MMAL_MSG_PORT_ACTION_TYPE_ENABLE,           /* Enable a port */
++	MMAL_MSG_PORT_ACTION_TYPE_DISABLE,          /* Disable a port */
++	MMAL_MSG_PORT_ACTION_TYPE_FLUSH,            /* Flush a port */
++	MMAL_MSG_PORT_ACTION_TYPE_CONNECT,          /* Connect ports */
++	MMAL_MSG_PORT_ACTION_TYPE_DISCONNECT,       /* Disconnect ports */
++	MMAL_MSG_PORT_ACTION_TYPE_SET_REQUIREMENTS, /* Set buffer requirements*/
++};
++
++struct mmal_msg_header {
++	u32 magic;
++	u32 type; /** enum mmal_msg_type */
++
++	/* Opaque handle to the control service */
++	struct mmal_control_service *control_service;
++
++	struct mmal_msg_context *context; /** a u32 per message context */
++	u32 status; /** The status of the vchiq operation */
++	u32 padding;
++};
++
++/* Send from VC to host to report version */
++struct mmal_msg_version {
++	u32 flags;
++	u32 major;
++	u32 minor;
++	u32 minimum;
++};
++
++/* request to VC to create component */
++struct mmal_msg_component_create {
++	void *client_component; /* component context */
++	char name[128];
++	u32 pid;                /* For debug */
++};
++
++/* reply from VC to component creation request */
++struct mmal_msg_component_create_reply {
++	u32 status; /** enum mmal_msg_status - how does this differ to
++		     * the one in the header?
++		     */
++	u32 component_handle; /* VideoCore handle for component */
++	u32 input_num;        /* Number of input ports */
++	u32 output_num;       /* Number of output ports */
++	u32 clock_num;        /* Number of clock ports */
++};
++
++/* request to VC to destroy a component */
++struct mmal_msg_component_destroy {
++	u32 component_handle;
++};
++
++struct mmal_msg_component_destroy_reply {
++	u32 status; /** The component destruction status */
++};
++
++
++/* request and reply to VC to enable a component */
++struct mmal_msg_component_enable {
++	u32 component_handle;
++};
++
++struct mmal_msg_component_enable_reply {
++	u32 status; /** The component enable status */
++};
++
++
++/* request and reply to VC to disable a component */
++struct mmal_msg_component_disable {
++	u32 component_handle;
++};
++
++struct mmal_msg_component_disable_reply {
++	u32 status; /** The component disable status */
++};
++
++/* request to VC to get port information */
++struct mmal_msg_port_info_get {
++	u32 component_handle;  /* component handle port is associated with */
++	u32 port_type;         /* enum mmal_msg_port_type */
++	u32 index;             /* port index to query */
++};
++
++/* reply from VC to get port info request */
++struct mmal_msg_port_info_get_reply {
++	u32 status; /** enum mmal_msg_status */
++	u32 component_handle;  /* component handle port is associated with */
++	u32 port_type;         /* enum mmal_msg_port_type */
++	u32 port_index;        /* port indexed in query */
++	s32 found;             /* unused */
++	u32 port_handle;               /**< Handle to use for this port */
++	struct mmal_port port;
++	struct mmal_es_format format; /* elementary stream format */
++	union mmal_es_specific_format es; /* es type specific data */
++	u8 extradata[MMAL_FORMAT_EXTRADATA_MAX_SIZE]; /* es extra data */
++};
++
++/* request to VC to set port information */
++struct mmal_msg_port_info_set {
++	u32 component_handle;
++	u32 port_type;         /* enum mmal_msg_port_type */
++	u32 port_index;           /* port indexed in query */
++	struct mmal_port port;
++	struct mmal_es_format format;
++	union mmal_es_specific_format es;
++	u8 extradata[MMAL_FORMAT_EXTRADATA_MAX_SIZE];
++};
++
++/* reply from VC to port info set request */
++struct mmal_msg_port_info_set_reply {
++	u32 status;
++	u32 component_handle;  /* component handle port is associated with */
++	u32 port_type;         /* enum mmal_msg_port_type */
++	u32 index;             /* port indexed in query */
++	s32 found;             /* unused */
++	u32 port_handle;               /**< Handle to use for this port */
++	struct mmal_port port;
++	struct mmal_es_format format;
++	union mmal_es_specific_format es;
++	u8 extradata[MMAL_FORMAT_EXTRADATA_MAX_SIZE];
++};
++
++
++/* port action requests that take a mmal_port as a parameter */
++struct mmal_msg_port_action_port {
++	u32 component_handle;
++	u32 port_handle;
++	u32 action; /* enum mmal_msg_port_action_type */
++	struct mmal_port port;
++};
++
++/* port action requests that take handles as a parameter */
++struct mmal_msg_port_action_handle {
++	u32 component_handle;
++	u32 port_handle;
++	u32 action; /* enum mmal_msg_port_action_type */
++	u32 connect_component_handle;
++	u32 connect_port_handle;
++};
++
++struct mmal_msg_port_action_reply {
++	u32 status; /** The port action operation status */
++};
++
++
++
++
++/* MMAL buffer transfer */
++
++/** Size of space reserved in a buffer message for short messages. */
++#define MMAL_VC_SHORT_DATA 128
++
++/** Signals that the current payload is the end of the stream of data */
++#define MMAL_BUFFER_HEADER_FLAG_EOS                    (1<<0)
++/** Signals that the start of the current payload starts a frame */
++#define MMAL_BUFFER_HEADER_FLAG_FRAME_START            (1<<1)
++/** Signals that the end of the current payload ends a frame */
++#define MMAL_BUFFER_HEADER_FLAG_FRAME_END              (1<<2)
++/** Signals that the current payload contains only complete frames (>1) */
++#define MMAL_BUFFER_HEADER_FLAG_FRAME                  \
++	(MMAL_BUFFER_HEADER_FLAG_FRAME_START|MMAL_BUFFER_HEADER_FLAG_FRAME_END)
++/** Signals that the current payload is a keyframe (i.e. self decodable) */
++#define MMAL_BUFFER_HEADER_FLAG_KEYFRAME               (1<<3)
++/** Signals a discontinuity in the stream of data (e.g. after a seek).
++ * Can be used for instance by a decoder to reset its state */
++#define MMAL_BUFFER_HEADER_FLAG_DISCONTINUITY          (1<<4)
++/** Signals a buffer containing some kind of config data for the component
++ * (e.g. codec config data) */
++#define MMAL_BUFFER_HEADER_FLAG_CONFIG                 (1<<5)
++/** Signals an encrypted payload */
++#define MMAL_BUFFER_HEADER_FLAG_ENCRYPTED              (1<<6)
++/** Signals a buffer containing side information */
++#define MMAL_BUFFER_HEADER_FLAG_CODECSIDEINFO          (1<<7)
++/** Signals a buffer which is the snapshot/postview image from a stills
++ * capture
++ */
++#define MMAL_BUFFER_HEADER_FLAGS_SNAPSHOT              (1<<8)
++/** Signals a buffer which contains data known to be corrupted */
++#define MMAL_BUFFER_HEADER_FLAG_CORRUPTED              (1<<9)
++/** Signals that a buffer failed to be transmitted */
++#define MMAL_BUFFER_HEADER_FLAG_TRANSMISSION_FAILED    (1<<10)
++
++struct mmal_driver_buffer {
++	u32 magic;
++	u32 component_handle;
++	u32 port_handle;
++	void *client_context;
++};
++
++/* buffer header */
++struct mmal_buffer_header {
++	struct mmal_buffer_header *next; /* next header */
++	void *priv; /* framework private data */
++	u32 cmd;
++	void *data;
++	u32 alloc_size;
++	u32 length;
++	u32 offset;
++	u32 flags;
++	s64 pts;
++	s64 dts;
++	void *type;
++	void *user_data;
++};
++
++struct mmal_buffer_header_type_specific {
++	union {
++		struct {
++		u32 planes;
++		u32 offset[4];
++		u32 pitch[4];
++		u32 flags;
++		} video;
++	} u;
++};
++
++struct mmal_msg_buffer_from_host {
++	/* The front 32 bytes of the buffer header are copied
++	 * back to us in the reply to allow for context. This
++	 * area is used to store two mmal_driver_buffer structures to
++	 * allow for multiple concurrent service users.
++	 */
++	/* control data */
++	struct mmal_driver_buffer drvbuf;
++
++	/* referenced control data for passthrough buffer management */
++	struct mmal_driver_buffer drvbuf_ref;
++	struct mmal_buffer_header buffer_header; /* buffer header itself */
++	struct mmal_buffer_header_type_specific buffer_header_type_specific;
++	s32 is_zero_copy;
++	s32 has_reference;
++
++	/** allows short data to be xfered in control message */
++	u32 payload_in_message;
++	u8 short_data[MMAL_VC_SHORT_DATA];
++};
++
++
++/* port parameter setting */
++
++#define MMAL_WORKER_PORT_PARAMETER_SPACE      96
++
++struct mmal_msg_port_parameter_set {
++	u32 component_handle; /* component */
++	u32 port_handle;      /* port */
++	u32 id;     /* Parameter ID  */
++	u32 size;      /* Parameter size */
++	uint32_t value[MMAL_WORKER_PORT_PARAMETER_SPACE];
++};
++
++struct mmal_msg_port_parameter_set_reply {
++	u32 status; /** enum mmal_msg_status todo: how does this
++		     * differ to the one in the header?
++		     */
++};
++
++/* port parameter getting */
++
++struct mmal_msg_port_parameter_get {
++	u32 component_handle; /* component */
++	u32 port_handle;      /* port */
++	u32 id;     /* Parameter ID  */
++	u32 size;      /* Parameter size */
++};
++
++struct mmal_msg_port_parameter_get_reply {
++	u32 status;           /* Status of mmal_port_parameter_get call */
++	u32 id;     /* Parameter ID  */
++	u32 size;      /* Parameter size */
++	uint32_t value[MMAL_WORKER_PORT_PARAMETER_SPACE];
++};
++
++/* event messages */
++#define MMAL_WORKER_EVENT_SPACE 256
++
++struct mmal_msg_event_to_host {
++	void *client_component; /* component context */
++
++	u32 port_type;
++	u32 port_num;
++
++	u32 cmd;
++	u32 length;
++	u8 data[MMAL_WORKER_EVENT_SPACE];
++	struct mmal_buffer_header *delayed_buffer;
++};
++
++/* all mmal messages are serialised through this structure */
++struct mmal_msg {
++	/* header */
++	struct mmal_msg_header h;
++	/* payload */
++	union {
++		struct mmal_msg_version version;
++
++		struct mmal_msg_component_create component_create;
++		struct mmal_msg_component_create_reply component_create_reply;
++
++		struct mmal_msg_component_destroy component_destroy;
++		struct mmal_msg_component_destroy_reply component_destroy_reply;
++
++		struct mmal_msg_component_enable component_enable;
++		struct mmal_msg_component_enable_reply component_enable_reply;
++
++		struct mmal_msg_component_disable component_disable;
++		struct mmal_msg_component_disable_reply component_disable_reply;
++
++		struct mmal_msg_port_info_get port_info_get;
++		struct mmal_msg_port_info_get_reply port_info_get_reply;
++
++		struct mmal_msg_port_info_set port_info_set;
++		struct mmal_msg_port_info_set_reply port_info_set_reply;
++
++		struct mmal_msg_port_action_port port_action_port;
++		struct mmal_msg_port_action_handle port_action_handle;
++		struct mmal_msg_port_action_reply port_action_reply;
++
++		struct mmal_msg_buffer_from_host buffer_from_host;
++
++		struct mmal_msg_port_parameter_set port_parameter_set;
++		struct mmal_msg_port_parameter_set_reply
++			port_parameter_set_reply;
++		struct mmal_msg_port_parameter_get
++			port_parameter_get;
++		struct mmal_msg_port_parameter_get_reply
++			port_parameter_get_reply;
++
++		struct mmal_msg_event_to_host event_to_host;
++
++		u8 payload[MMAL_MSG_MAX_PAYLOAD];
++	} u;
++};
+--- /dev/null
++++ b/drivers/media/platform/bcm2835/mmal-parameters.h
+@@ -0,0 +1,656 @@
++/*
++ * Broadcom BCM2835 V4L2 driver
++ *
++ * Copyright © 2013 Raspberry Pi (Trading) Ltd.
++ *
++ * This file is subject to the terms and conditions of the GNU General Public
++ * License.  See the file COPYING in the main directory of this archive
++ * for more details.
++ *
++ * Authors: Vincent Sanders <vincent.sanders at collabora.co.uk>
++ *          Dave Stevenson <dsteve at broadcom.com>
++ *          Simon Mellor <simellor at broadcom.com>
++ *          Luke Diamand <luked at broadcom.com>
++ */
++
++/* common parameters */
++
++/** @name Parameter groups
++ * Parameters are divided into groups, and then allocated sequentially within
++ * a group using an enum.
++ * @{
++ */
++
++/** Common parameter ID group, used with many types of component. */
++#define MMAL_PARAMETER_GROUP_COMMON            (0<<16)
++/** Camera-specific parameter ID group. */
++#define MMAL_PARAMETER_GROUP_CAMERA            (1<<16)
++/** Video-specific parameter ID group. */
++#define MMAL_PARAMETER_GROUP_VIDEO             (2<<16)
++/** Audio-specific parameter ID group. */
++#define MMAL_PARAMETER_GROUP_AUDIO             (3<<16)
++/** Clock-specific parameter ID group. */
++#define MMAL_PARAMETER_GROUP_CLOCK             (4<<16)
++/** Miracast-specific parameter ID group. */
++#define MMAL_PARAMETER_GROUP_MIRACAST       (5<<16)
++
++/* Common parameters */
++enum mmal_parameter_common_type {
++	MMAL_PARAMETER_UNUSED  /**< Never a valid parameter ID */
++		= MMAL_PARAMETER_GROUP_COMMON,
++	MMAL_PARAMETER_SUPPORTED_ENCODINGS, /**< MMAL_PARAMETER_ENCODING_T */
++	MMAL_PARAMETER_URI, /**< MMAL_PARAMETER_URI_T */
++
++	/** MMAL_PARAMETER_CHANGE_EVENT_REQUEST_T */
++	MMAL_PARAMETER_CHANGE_EVENT_REQUEST,
++
++	/** MMAL_PARAMETER_BOOLEAN_T */
++	MMAL_PARAMETER_ZERO_COPY,
++
++	/** MMAL_PARAMETER_BUFFER_REQUIREMENTS_T */
++	MMAL_PARAMETER_BUFFER_REQUIREMENTS,
++
++	MMAL_PARAMETER_STATISTICS, /**< MMAL_PARAMETER_STATISTICS_T */
++	MMAL_PARAMETER_CORE_STATISTICS, /**< MMAL_PARAMETER_CORE_STATISTICS_T */
++	MMAL_PARAMETER_MEM_USAGE, /**< MMAL_PARAMETER_MEM_USAGE_T */
++	MMAL_PARAMETER_BUFFER_FLAG_FILTER, /**< MMAL_PARAMETER_UINT32_T */
++	MMAL_PARAMETER_SEEK, /**< MMAL_PARAMETER_SEEK_T */
++	MMAL_PARAMETER_POWERMON_ENABLE, /**< MMAL_PARAMETER_BOOLEAN_T */
++	MMAL_PARAMETER_LOGGING, /**< MMAL_PARAMETER_LOGGING_T */
++	MMAL_PARAMETER_SYSTEM_TIME, /**< MMAL_PARAMETER_UINT64_T */
++	MMAL_PARAMETER_NO_IMAGE_PADDING  /**< MMAL_PARAMETER_BOOLEAN_T */
++};
++
++/* camera parameters */
++
++enum mmal_parameter_camera_type {
++	/* 0 */
++	/** @ref MMAL_PARAMETER_THUMBNAIL_CONFIG_T */
++	MMAL_PARAMETER_THUMBNAIL_CONFIGURATION
++		= MMAL_PARAMETER_GROUP_CAMERA,
++	MMAL_PARAMETER_CAPTURE_QUALITY, /**< Unused? */
++	MMAL_PARAMETER_ROTATION, /**< @ref MMAL_PARAMETER_INT32_T */
++	MMAL_PARAMETER_EXIF_DISABLE, /**< @ref MMAL_PARAMETER_BOOLEAN_T */
++	MMAL_PARAMETER_EXIF, /**< @ref MMAL_PARAMETER_EXIF_T */
++	MMAL_PARAMETER_AWB_MODE, /**< @ref MMAL_PARAM_AWBMODE_T */
++	MMAL_PARAMETER_IMAGE_EFFECT, /**< @ref MMAL_PARAMETER_IMAGEFX_T */
++	MMAL_PARAMETER_COLOUR_EFFECT, /**< @ref MMAL_PARAMETER_COLOURFX_T */
++	MMAL_PARAMETER_FLICKER_AVOID, /**< @ref MMAL_PARAMETER_FLICKERAVOID_T */
++	MMAL_PARAMETER_FLASH, /**< @ref MMAL_PARAMETER_FLASH_T */
++	MMAL_PARAMETER_REDEYE, /**< @ref MMAL_PARAMETER_REDEYE_T */
++	MMAL_PARAMETER_FOCUS, /**< @ref MMAL_PARAMETER_FOCUS_T */
++	MMAL_PARAMETER_FOCAL_LENGTHS, /**< Unused? */
++	MMAL_PARAMETER_EXPOSURE_COMP, /**< @ref MMAL_PARAMETER_INT32_T */
++	MMAL_PARAMETER_ZOOM, /**< @ref MMAL_PARAMETER_SCALEFACTOR_T */
++	MMAL_PARAMETER_MIRROR, /**< @ref MMAL_PARAMETER_MIRROR_T */
++
++	/* 0x10 */
++	MMAL_PARAMETER_CAMERA_NUM, /**< @ref MMAL_PARAMETER_UINT32_T */
++	MMAL_PARAMETER_CAPTURE, /**< @ref MMAL_PARAMETER_BOOLEAN_T */
++	MMAL_PARAMETER_EXPOSURE_MODE, /**< @ref MMAL_PARAMETER_EXPOSUREMODE_T */
++	MMAL_PARAMETER_EXP_METERING_MODE, /**< @ref MMAL_PARAMETER_EXPOSUREMETERINGMODE_T */
++	MMAL_PARAMETER_FOCUS_STATUS, /**< @ref MMAL_PARAMETER_FOCUS_STATUS_T */
++	MMAL_PARAMETER_CAMERA_CONFIG, /**< @ref MMAL_PARAMETER_CAMERA_CONFIG_T */
++	MMAL_PARAMETER_CAPTURE_STATUS, /**< @ref MMAL_PARAMETER_CAPTURE_STATUS_T */
++	MMAL_PARAMETER_FACE_TRACK, /**< @ref MMAL_PARAMETER_FACE_TRACK_T */
++	MMAL_PARAMETER_DRAW_BOX_FACES_AND_FOCUS, /**< @ref MMAL_PARAMETER_BOOLEAN_T */
++	MMAL_PARAMETER_JPEG_Q_FACTOR, /**< @ref MMAL_PARAMETER_UINT32_T */
++	MMAL_PARAMETER_FRAME_RATE, /**< @ref MMAL_PARAMETER_FRAME_RATE_T */
++	MMAL_PARAMETER_USE_STC, /**< @ref MMAL_PARAMETER_CAMERA_STC_MODE_T */
++	MMAL_PARAMETER_CAMERA_INFO, /**< @ref MMAL_PARAMETER_CAMERA_INFO_T */
++	MMAL_PARAMETER_VIDEO_STABILISATION, /**< @ref MMAL_PARAMETER_BOOLEAN_T */
++	MMAL_PARAMETER_FACE_TRACK_RESULTS, /**< @ref MMAL_PARAMETER_FACE_TRACK_RESULTS_T */
++	MMAL_PARAMETER_ENABLE_RAW_CAPTURE, /**< @ref MMAL_PARAMETER_BOOLEAN_T */
++
++	/* 0x20 */
++	MMAL_PARAMETER_DPF_FILE, /**< @ref MMAL_PARAMETER_URI_T */
++	MMAL_PARAMETER_ENABLE_DPF_FILE, /**< @ref MMAL_PARAMETER_BOOLEAN_T */
++	MMAL_PARAMETER_DPF_FAIL_IS_FATAL, /**< @ref MMAL_PARAMETER_BOOLEAN_T */
++	MMAL_PARAMETER_CAPTURE_MODE, /**< @ref MMAL_PARAMETER_CAPTUREMODE_T */
++	MMAL_PARAMETER_FOCUS_REGIONS, /**< @ref MMAL_PARAMETER_FOCUS_REGIONS_T */
++	MMAL_PARAMETER_INPUT_CROP, /**< @ref MMAL_PARAMETER_INPUT_CROP_T */
++	MMAL_PARAMETER_SENSOR_INFORMATION, /**< @ref MMAL_PARAMETER_SENSOR_INFORMATION_T */
++	MMAL_PARAMETER_FLASH_SELECT, /**< @ref MMAL_PARAMETER_FLASH_SELECT_T */
++	MMAL_PARAMETER_FIELD_OF_VIEW, /**< @ref MMAL_PARAMETER_FIELD_OF_VIEW_T */
++	MMAL_PARAMETER_HIGH_DYNAMIC_RANGE, /**< @ref MMAL_PARAMETER_BOOLEAN_T */
++	MMAL_PARAMETER_DYNAMIC_RANGE_COMPRESSION, /**< @ref MMAL_PARAMETER_DRC_T */
++	MMAL_PARAMETER_ALGORITHM_CONTROL, /**< @ref MMAL_PARAMETER_ALGORITHM_CONTROL_T */
++	MMAL_PARAMETER_SHARPNESS, /**< @ref MMAL_PARAMETER_RATIONAL_T */
++	MMAL_PARAMETER_CONTRAST, /**< @ref MMAL_PARAMETER_RATIONAL_T */
++	MMAL_PARAMETER_BRIGHTNESS, /**< @ref MMAL_PARAMETER_RATIONAL_T */
++	MMAL_PARAMETER_SATURATION, /**< @ref MMAL_PARAMETER_RATIONAL_T */
++
++	/* 0x30 */
++	MMAL_PARAMETER_ISO, /**< @ref MMAL_PARAMETER_UINT32_T */
++	MMAL_PARAMETER_ANTISHAKE, /**< @ref MMAL_PARAMETER_BOOLEAN_T */
++
++	/** @ref MMAL_PARAMETER_IMAGEFX_PARAMETERS_T */
++	MMAL_PARAMETER_IMAGE_EFFECT_PARAMETERS,
++
++	/** @ref MMAL_PARAMETER_BOOLEAN_T */
++	MMAL_PARAMETER_CAMERA_BURST_CAPTURE,
++
++	/** @ref MMAL_PARAMETER_UINT32_T */
++	MMAL_PARAMETER_CAMERA_MIN_ISO,
++
++	/** @ref MMAL_PARAMETER_CAMERA_USE_CASE_T */
++	MMAL_PARAMETER_CAMERA_USE_CASE,
++
++	/** @ref MMAL_PARAMETER_BOOLEAN_T */
++	MMAL_PARAMETER_CAPTURE_STATS_PASS,
++
++	/** @ref MMAL_PARAMETER_UINT32_T */
++	MMAL_PARAMETER_CAMERA_CUSTOM_SENSOR_CONFIG,
++
++	/** @ref MMAL_PARAMETER_BOOLEAN_T */
++	MMAL_PARAMETER_ENABLE_REGISTER_FILE,
++
++	/** @ref MMAL_PARAMETER_BOOLEAN_T */
++	MMAL_PARAMETER_REGISTER_FAIL_IS_FATAL,
++
++	/** @ref MMAL_PARAMETER_CONFIGFILE_T */
++	MMAL_PARAMETER_CONFIGFILE_REGISTERS,
++
++	/** @ref MMAL_PARAMETER_CONFIGFILE_CHUNK_T */
++	MMAL_PARAMETER_CONFIGFILE_CHUNK_REGISTERS,
++	MMAL_PARAMETER_JPEG_ATTACH_LOG, /**< @ref MMAL_PARAMETER_BOOLEAN_T */
++	MMAL_PARAMETER_ZERO_SHUTTER_LAG, /**< @ref MMAL_PARAMETER_ZEROSHUTTERLAG_T */
++	MMAL_PARAMETER_FPS_RANGE, /**< @ref MMAL_PARAMETER_FPS_RANGE_T */
++	MMAL_PARAMETER_CAPTURE_EXPOSURE_COMP, /**< @ref MMAL_PARAMETER_INT32_T */
++
++	/* 0x40 */
++	MMAL_PARAMETER_SW_SHARPEN_DISABLE, /**< @ref MMAL_PARAMETER_BOOLEAN_T */
++	MMAL_PARAMETER_FLASH_REQUIRED, /**< @ref MMAL_PARAMETER_BOOLEAN_T */
++	MMAL_PARAMETER_SW_SATURATION_DISABLE, /**< @ref MMAL_PARAMETER_BOOLEAN_T */
++	MMAL_PARAMETER_SHUTTER_SPEED,             /**< Takes a @ref MMAL_PARAMETER_UINT32_T */
++	MMAL_PARAMETER_CUSTOM_AWB_GAINS,          /**< Takes a @ref MMAL_PARAMETER_AWB_GAINS_T */
++};
++
++struct mmal_parameter_rational {
++	s32 num;    /**< Numerator */
++	s32 den;    /**< Denominator */
++};
++
++enum mmal_parameter_camera_config_timestamp_mode {
++	MMAL_PARAM_TIMESTAMP_MODE_ZERO = 0, /* Always timestamp frames as 0 */
++	MMAL_PARAM_TIMESTAMP_MODE_RAW_STC,  /* Use the raw STC value
++					     * for the frame timestamp
++					     */
++	MMAL_PARAM_TIMESTAMP_MODE_RESET_STC, /* Use the STC timestamp
++					      * but subtract the
++					      * timestamp of the first
++					      * frame sent to give a
++					      * zero based timestamp.
++					      */
++};
++
++struct mmal_parameter_fps_range {
++	/** Low end of the permitted framerate range */
++	struct mmal_parameter_rational	fps_low;
++	/** High end of the permitted framerate range */
++	struct mmal_parameter_rational	fps_high;
++};
++
++
++/* camera configuration parameter */
++struct mmal_parameter_camera_config {
++	/* Parameters for setting up the image pools */
++	u32 max_stills_w; /* Max size of stills capture */
++	u32 max_stills_h;
++	u32 stills_yuv422; /* Allow YUV422 stills capture */
++	u32 one_shot_stills; /* Continuous or one shot stills captures. */
++
++	u32 max_preview_video_w; /* Max size of the preview or video
++				  * capture frames
++				  */
++	u32 max_preview_video_h;
++	u32 num_preview_video_frames;
++
++	/** Sets the height of the circular buffer for stills capture. */
++	u32 stills_capture_circular_buffer_height;
++
++	/** Allows preview/encode to resume as fast as possible after the stills
++	 * input frame has been received, and then processes the still frame in
++	 * the background whilst preview/encode has resumed.
++	 * Actual mode is controlled by MMAL_PARAMETER_CAPTURE_MODE.
++	 */
++	u32 fast_preview_resume;
++
++	/** Selects algorithm for timestamping frames if
++	 * there is no clock component connected.
++	 * enum mmal_parameter_camera_config_timestamp_mode
++	 */
++	s32 use_stc_timestamp;
++};
++
++
++enum mmal_parameter_exposuremode {
++	MMAL_PARAM_EXPOSUREMODE_OFF,
++	MMAL_PARAM_EXPOSUREMODE_AUTO,
++	MMAL_PARAM_EXPOSUREMODE_NIGHT,
++	MMAL_PARAM_EXPOSUREMODE_NIGHTPREVIEW,
++	MMAL_PARAM_EXPOSUREMODE_BACKLIGHT,
++	MMAL_PARAM_EXPOSUREMODE_SPOTLIGHT,
++	MMAL_PARAM_EXPOSUREMODE_SPORTS,
++	MMAL_PARAM_EXPOSUREMODE_SNOW,
++	MMAL_PARAM_EXPOSUREMODE_BEACH,
++	MMAL_PARAM_EXPOSUREMODE_VERYLONG,
++	MMAL_PARAM_EXPOSUREMODE_FIXEDFPS,
++	MMAL_PARAM_EXPOSUREMODE_ANTISHAKE,
++	MMAL_PARAM_EXPOSUREMODE_FIREWORKS,
++};
++
++enum mmal_parameter_exposuremeteringmode {
++	MMAL_PARAM_EXPOSUREMETERINGMODE_AVERAGE,
++	MMAL_PARAM_EXPOSUREMETERINGMODE_SPOT,
++	MMAL_PARAM_EXPOSUREMETERINGMODE_BACKLIT,
++	MMAL_PARAM_EXPOSUREMETERINGMODE_MATRIX,
++};
++
++enum mmal_parameter_awbmode {
++	MMAL_PARAM_AWBMODE_OFF,
++	MMAL_PARAM_AWBMODE_AUTO,
++	MMAL_PARAM_AWBMODE_SUNLIGHT,
++	MMAL_PARAM_AWBMODE_CLOUDY,
++	MMAL_PARAM_AWBMODE_SHADE,
++	MMAL_PARAM_AWBMODE_TUNGSTEN,
++	MMAL_PARAM_AWBMODE_FLUORESCENT,
++	MMAL_PARAM_AWBMODE_INCANDESCENT,
++	MMAL_PARAM_AWBMODE_FLASH,
++	MMAL_PARAM_AWBMODE_HORIZON,
++};
++
++enum mmal_parameter_imagefx {
++	MMAL_PARAM_IMAGEFX_NONE,
++	MMAL_PARAM_IMAGEFX_NEGATIVE,
++	MMAL_PARAM_IMAGEFX_SOLARIZE,
++	MMAL_PARAM_IMAGEFX_POSTERIZE,
++	MMAL_PARAM_IMAGEFX_WHITEBOARD,
++	MMAL_PARAM_IMAGEFX_BLACKBOARD,
++	MMAL_PARAM_IMAGEFX_SKETCH,
++	MMAL_PARAM_IMAGEFX_DENOISE,
++	MMAL_PARAM_IMAGEFX_EMBOSS,
++	MMAL_PARAM_IMAGEFX_OILPAINT,
++	MMAL_PARAM_IMAGEFX_HATCH,
++	MMAL_PARAM_IMAGEFX_GPEN,
++	MMAL_PARAM_IMAGEFX_PASTEL,
++	MMAL_PARAM_IMAGEFX_WATERCOLOUR,
++	MMAL_PARAM_IMAGEFX_FILM,
++	MMAL_PARAM_IMAGEFX_BLUR,
++	MMAL_PARAM_IMAGEFX_SATURATION,
++	MMAL_PARAM_IMAGEFX_COLOURSWAP,
++	MMAL_PARAM_IMAGEFX_WASHEDOUT,
++	MMAL_PARAM_IMAGEFX_POSTERISE,
++	MMAL_PARAM_IMAGEFX_COLOURPOINT,
++	MMAL_PARAM_IMAGEFX_COLOURBALANCE,
++	MMAL_PARAM_IMAGEFX_CARTOON,
++};
++
++enum MMAL_PARAM_FLICKERAVOID_T {
++	MMAL_PARAM_FLICKERAVOID_OFF,
++	MMAL_PARAM_FLICKERAVOID_AUTO,
++	MMAL_PARAM_FLICKERAVOID_50HZ,
++	MMAL_PARAM_FLICKERAVOID_60HZ,
++	MMAL_PARAM_FLICKERAVOID_MAX = 0x7FFFFFFF
++};
++
++struct mmal_parameter_awbgains {
++	struct mmal_parameter_rational r_gain;	/**< Red gain */
++	struct mmal_parameter_rational b_gain;	/**< Blue gain */
++};
++
++/** Manner of video rate control */
++enum mmal_parameter_rate_control_mode {
++	MMAL_VIDEO_RATECONTROL_DEFAULT,
++	MMAL_VIDEO_RATECONTROL_VARIABLE,
++	MMAL_VIDEO_RATECONTROL_CONSTANT,
++	MMAL_VIDEO_RATECONTROL_VARIABLE_SKIP_FRAMES,
++	MMAL_VIDEO_RATECONTROL_CONSTANT_SKIP_FRAMES
++};
++
++enum mmal_video_profile {
++	MMAL_VIDEO_PROFILE_H263_BASELINE,
++	MMAL_VIDEO_PROFILE_H263_H320CODING,
++	MMAL_VIDEO_PROFILE_H263_BACKWARDCOMPATIBLE,
++	MMAL_VIDEO_PROFILE_H263_ISWV2,
++	MMAL_VIDEO_PROFILE_H263_ISWV3,
++	MMAL_VIDEO_PROFILE_H263_HIGHCOMPRESSION,
++	MMAL_VIDEO_PROFILE_H263_INTERNET,
++	MMAL_VIDEO_PROFILE_H263_INTERLACE,
++	MMAL_VIDEO_PROFILE_H263_HIGHLATENCY,
++	MMAL_VIDEO_PROFILE_MP4V_SIMPLE,
++	MMAL_VIDEO_PROFILE_MP4V_SIMPLESCALABLE,
++	MMAL_VIDEO_PROFILE_MP4V_CORE,
++	MMAL_VIDEO_PROFILE_MP4V_MAIN,
++	MMAL_VIDEO_PROFILE_MP4V_NBIT,
++	MMAL_VIDEO_PROFILE_MP4V_SCALABLETEXTURE,
++	MMAL_VIDEO_PROFILE_MP4V_SIMPLEFACE,
++	MMAL_VIDEO_PROFILE_MP4V_SIMPLEFBA,
++	MMAL_VIDEO_PROFILE_MP4V_BASICANIMATED,
++	MMAL_VIDEO_PROFILE_MP4V_HYBRID,
++	MMAL_VIDEO_PROFILE_MP4V_ADVANCEDREALTIME,
++	MMAL_VIDEO_PROFILE_MP4V_CORESCALABLE,
++	MMAL_VIDEO_PROFILE_MP4V_ADVANCEDCODING,
++	MMAL_VIDEO_PROFILE_MP4V_ADVANCEDCORE,
++	MMAL_VIDEO_PROFILE_MP4V_ADVANCEDSCALABLE,
++	MMAL_VIDEO_PROFILE_MP4V_ADVANCEDSIMPLE,
++	MMAL_VIDEO_PROFILE_H264_BASELINE,
++	MMAL_VIDEO_PROFILE_H264_MAIN,
++	MMAL_VIDEO_PROFILE_H264_EXTENDED,
++	MMAL_VIDEO_PROFILE_H264_HIGH,
++	MMAL_VIDEO_PROFILE_H264_HIGH10,
++	MMAL_VIDEO_PROFILE_H264_HIGH422,
++	MMAL_VIDEO_PROFILE_H264_HIGH444,
++	MMAL_VIDEO_PROFILE_H264_CONSTRAINED_BASELINE,
++	MMAL_VIDEO_PROFILE_DUMMY = 0x7FFFFFFF
++};
++
++enum mmal_video_level {
++	MMAL_VIDEO_LEVEL_H263_10,
++	MMAL_VIDEO_LEVEL_H263_20,
++	MMAL_VIDEO_LEVEL_H263_30,
++	MMAL_VIDEO_LEVEL_H263_40,
++	MMAL_VIDEO_LEVEL_H263_45,
++	MMAL_VIDEO_LEVEL_H263_50,
++	MMAL_VIDEO_LEVEL_H263_60,
++	MMAL_VIDEO_LEVEL_H263_70,
++	MMAL_VIDEO_LEVEL_MP4V_0,
++	MMAL_VIDEO_LEVEL_MP4V_0b,
++	MMAL_VIDEO_LEVEL_MP4V_1,
++	MMAL_VIDEO_LEVEL_MP4V_2,
++	MMAL_VIDEO_LEVEL_MP4V_3,
++	MMAL_VIDEO_LEVEL_MP4V_4,
++	MMAL_VIDEO_LEVEL_MP4V_4a,
++	MMAL_VIDEO_LEVEL_MP4V_5,
++	MMAL_VIDEO_LEVEL_MP4V_6,
++	MMAL_VIDEO_LEVEL_H264_1,
++	MMAL_VIDEO_LEVEL_H264_1b,
++	MMAL_VIDEO_LEVEL_H264_11,
++	MMAL_VIDEO_LEVEL_H264_12,
++	MMAL_VIDEO_LEVEL_H264_13,
++	MMAL_VIDEO_LEVEL_H264_2,
++	MMAL_VIDEO_LEVEL_H264_21,
++	MMAL_VIDEO_LEVEL_H264_22,
++	MMAL_VIDEO_LEVEL_H264_3,
++	MMAL_VIDEO_LEVEL_H264_31,
++	MMAL_VIDEO_LEVEL_H264_32,
++	MMAL_VIDEO_LEVEL_H264_4,
++	MMAL_VIDEO_LEVEL_H264_41,
++	MMAL_VIDEO_LEVEL_H264_42,
++	MMAL_VIDEO_LEVEL_H264_5,
++	MMAL_VIDEO_LEVEL_H264_51,
++	MMAL_VIDEO_LEVEL_DUMMY = 0x7FFFFFFF
++};
++
++struct mmal_parameter_video_profile {
++	enum mmal_video_profile profile;
++	enum mmal_video_level level;
++};
++
++/* video parameters */
++
++enum mmal_parameter_video_type {
++	/** @ref MMAL_DISPLAYREGION_T */
++	MMAL_PARAMETER_DISPLAYREGION = MMAL_PARAMETER_GROUP_VIDEO,
++
++	/** @ref MMAL_PARAMETER_VIDEO_PROFILE_T */
++	MMAL_PARAMETER_SUPPORTED_PROFILES,
++
++	/** @ref MMAL_PARAMETER_VIDEO_PROFILE_T */
++	MMAL_PARAMETER_PROFILE,
++
++	/** @ref MMAL_PARAMETER_UINT32_T */
++	MMAL_PARAMETER_INTRAPERIOD,
++
++	/** @ref MMAL_PARAMETER_VIDEO_RATECONTROL_T */
++	MMAL_PARAMETER_RATECONTROL,
++
++	/** @ref MMAL_PARAMETER_VIDEO_NALUNITFORMAT_T */
++	MMAL_PARAMETER_NALUNITFORMAT,
++
++	/** @ref MMAL_PARAMETER_BOOLEAN_T */
++	MMAL_PARAMETER_MINIMISE_FRAGMENTATION,
++
++	/** @ref MMAL_PARAMETER_UINT32_T.
++	 * Setting the value to zero resets to the default (one slice per frame).
++	 */
++	MMAL_PARAMETER_MB_ROWS_PER_SLICE,
++
++	/** @ref MMAL_PARAMETER_VIDEO_LEVEL_EXTENSION_T */
++	MMAL_PARAMETER_VIDEO_LEVEL_EXTENSION,
++
++	/** @ref MMAL_PARAMETER_VIDEO_EEDE_ENABLE_T */
++	MMAL_PARAMETER_VIDEO_EEDE_ENABLE,
++
++	/** @ref MMAL_PARAMETER_VIDEO_EEDE_LOSSRATE_T */
++	MMAL_PARAMETER_VIDEO_EEDE_LOSSRATE,
++
++	/** @ref MMAL_PARAMETER_BOOLEAN_T. Request an I-frame. */
++	MMAL_PARAMETER_VIDEO_REQUEST_I_FRAME,
++	/** @ref MMAL_PARAMETER_VIDEO_INTRA_REFRESH_T */
++	MMAL_PARAMETER_VIDEO_INTRA_REFRESH,
++
++	/** @ref MMAL_PARAMETER_BOOLEAN_T. */
++	MMAL_PARAMETER_VIDEO_IMMUTABLE_INPUT,
++
++	/** @ref MMAL_PARAMETER_UINT32_T. Run-time bit rate control */
++	MMAL_PARAMETER_VIDEO_BIT_RATE,
++
++	/** @ref MMAL_PARAMETER_FRAME_RATE_T */
++	MMAL_PARAMETER_VIDEO_FRAME_RATE,
++
++	/** @ref MMAL_PARAMETER_UINT32_T. */
++	MMAL_PARAMETER_VIDEO_ENCODE_MIN_QUANT,
++
++	/** @ref MMAL_PARAMETER_UINT32_T. */
++	MMAL_PARAMETER_VIDEO_ENCODE_MAX_QUANT,
++
++	/** @ref MMAL_PARAMETER_VIDEO_ENCODE_RC_MODEL_T. */
++	MMAL_PARAMETER_VIDEO_ENCODE_RC_MODEL,
++
++	MMAL_PARAMETER_EXTRA_BUFFERS, /**< @ref MMAL_PARAMETER_UINT32_T. */
++	/** @ref MMAL_PARAMETER_UINT32_T.
++	 * Changing this parameter from the default can reduce frame rate
++	 * because image buffers need to be re-pitched.
++	 */
++	MMAL_PARAMETER_VIDEO_ALIGN_HORIZ,
++
++	/** @ref MMAL_PARAMETER_UINT32_T.
++	 * Changing this parameter from the default can reduce frame rate
++	 * because image buffers need to be re-pitched.
++	 */
++	MMAL_PARAMETER_VIDEO_ALIGN_VERT,
++
++	/** @ref MMAL_PARAMETER_BOOLEAN_T. */
++	MMAL_PARAMETER_VIDEO_DROPPABLE_PFRAMES,
++
++	/** @ref MMAL_PARAMETER_UINT32_T. */
++	MMAL_PARAMETER_VIDEO_ENCODE_INITIAL_QUANT,
++
++	/** @ref MMAL_PARAMETER_UINT32_T. */
++	MMAL_PARAMETER_VIDEO_ENCODE_QP_P,
++
++	/** @ref MMAL_PARAMETER_UINT32_T. */
++	MMAL_PARAMETER_VIDEO_ENCODE_RC_SLICE_DQUANT,
++
++	/** @ref MMAL_PARAMETER_UINT32_T */
++	MMAL_PARAMETER_VIDEO_ENCODE_FRAME_LIMIT_BITS,
++
++	/** @ref MMAL_PARAMETER_UINT32_T. */
++	MMAL_PARAMETER_VIDEO_ENCODE_PEAK_RATE,
++
++	/* H264 specific parameters */
++
++	/** @ref MMAL_PARAMETER_BOOLEAN_T. */
++	MMAL_PARAMETER_VIDEO_ENCODE_H264_DISABLE_CABAC,
++
++	/** @ref MMAL_PARAMETER_BOOLEAN_T. */
++	MMAL_PARAMETER_VIDEO_ENCODE_H264_LOW_LATENCY,
++
++	/** @ref MMAL_PARAMETER_BOOLEAN_T. */
++	MMAL_PARAMETER_VIDEO_ENCODE_H264_AU_DELIMITERS,
++
++	/** @ref MMAL_PARAMETER_UINT32_T. */
++	MMAL_PARAMETER_VIDEO_ENCODE_H264_DEBLOCK_IDC,
++
++	/** @ref MMAL_PARAMETER_VIDEO_ENCODER_H264_MB_INTRA_MODES_T. */
++	MMAL_PARAMETER_VIDEO_ENCODE_H264_MB_INTRA_MODE,
++
++	/** @ref MMAL_PARAMETER_BOOLEAN_T */
++	MMAL_PARAMETER_VIDEO_ENCODE_HEADER_ON_OPEN,
++
++	/** @ref MMAL_PARAMETER_BOOLEAN_T */
++	MMAL_PARAMETER_VIDEO_ENCODE_PRECODE_FOR_QP,
++
++	/** @ref MMAL_PARAMETER_VIDEO_DRM_INIT_INFO_T. */
++	MMAL_PARAMETER_VIDEO_DRM_INIT_INFO,
++
++	/** @ref MMAL_PARAMETER_BOOLEAN_T */
++	MMAL_PARAMETER_VIDEO_TIMESTAMP_FIFO,
++
++	/** @ref MMAL_PARAMETER_BOOLEAN_T */
++	MMAL_PARAMETER_VIDEO_DECODE_ERROR_CONCEALMENT,
++
++	/** @ref MMAL_PARAMETER_VIDEO_DRM_PROTECT_BUFFER_T. */
++	MMAL_PARAMETER_VIDEO_DRM_PROTECT_BUFFER,
++
++	/** @ref MMAL_PARAMETER_BYTES_T */
++	MMAL_PARAMETER_VIDEO_DECODE_CONFIG_VD3,
++
++	/** @ref MMAL_PARAMETER_BOOLEAN_T */
++	MMAL_PARAMETER_VIDEO_ENCODE_H264_VCL_HRD_PARAMETERS,
++
++	/** @ref MMAL_PARAMETER_BOOLEAN_T */
++	MMAL_PARAMETER_VIDEO_ENCODE_H264_LOW_DELAY_HRD_FLAG,
++
++	/** @ref MMAL_PARAMETER_BOOLEAN_T */
++	MMAL_PARAMETER_VIDEO_ENCODE_INLINE_HEADER
++};
++
++/** Valid mirror modes */
++enum mmal_parameter_mirror {
++	MMAL_PARAM_MIRROR_NONE,
++	MMAL_PARAM_MIRROR_VERTICAL,
++	MMAL_PARAM_MIRROR_HORIZONTAL,
++	MMAL_PARAM_MIRROR_BOTH,
++};
++
++enum mmal_parameter_displaytransform {
++	MMAL_DISPLAY_ROT0 = 0,
++	MMAL_DISPLAY_MIRROR_ROT0 = 1,
++	MMAL_DISPLAY_MIRROR_ROT180 = 2,
++	MMAL_DISPLAY_ROT180 = 3,
++	MMAL_DISPLAY_MIRROR_ROT90 = 4,
++	MMAL_DISPLAY_ROT270 = 5,
++	MMAL_DISPLAY_ROT90 = 6,
++	MMAL_DISPLAY_MIRROR_ROT270 = 7,
++};
++
++enum mmal_parameter_displaymode {
++	MMAL_DISPLAY_MODE_FILL = 0,
++	MMAL_DISPLAY_MODE_LETTERBOX = 1,
++};
++
++enum mmal_parameter_displayset {
++	MMAL_DISPLAY_SET_NONE = 0,
++	MMAL_DISPLAY_SET_NUM = 1,
++	MMAL_DISPLAY_SET_FULLSCREEN = 2,
++	MMAL_DISPLAY_SET_TRANSFORM = 4,
++	MMAL_DISPLAY_SET_DEST_RECT = 8,
++	MMAL_DISPLAY_SET_SRC_RECT = 0x10,
++	MMAL_DISPLAY_SET_MODE = 0x20,
++	MMAL_DISPLAY_SET_PIXEL = 0x40,
++	MMAL_DISPLAY_SET_NOASPECT = 0x80,
++	MMAL_DISPLAY_SET_LAYER = 0x100,
++	MMAL_DISPLAY_SET_COPYPROTECT = 0x200,
++	MMAL_DISPLAY_SET_ALPHA = 0x400,
++};
++
++struct mmal_parameter_displayregion {
++	/** Bitfield that indicates which fields are set and should be
++	 * used. All other fields will maintain their current value.
++	 * \ref MMAL_DISPLAYSET_T defines the bits that can be
++	 * combined.
++	 */
++	u32 set;
++
++	/** Describes the display output device, with 0 typically
++	 * being a directly connected LCD display.  The actual values
++	 * will depend on the hardware.  Code using hard-wired numbers
++	 * (e.g. 2) is certain to fail.
++	 */
++
++	u32 display_num;
++	/** Indicates that we are using the full device screen area,
++	 * rather than a window of the display.  If zero, then
++	 * dest_rect is used to specify a region of the display to
++	 * use.
++	 */
++
++	s32 fullscreen;
++	/** Indicates any rotation or flipping used to map frames onto
++	 * the natural display orientation.
++	 */
++	u32 transform; /* enum mmal_parameter_displaytransform */
++
++	/** Where to display the frame within the screen, if
++	 * fullscreen is zero.
++	 */
++	struct vchiq_mmal_rect dest_rect;
++
++	/** Indicates which area of the frame to display. If all
++	 * values are zero, the whole frame will be used.
++	 */
++	struct vchiq_mmal_rect src_rect;
++
++	/** If set to non-zero, indicates that any display scaling
++	 * should disregard the aspect ratio of the frame region being
++	 * displayed.
++	 */
++	s32 noaspect;
++
++	/** Indicates how the image should be scaled to fit the
++	 * display. \code MMAL_DISPLAY_MODE_FILL \endcode indicates
++	 * that the image should fill the screen by potentially
++	 * cropping the frames.  Setting \code mode \endcode to \code
++	 * MMAL_DISPLAY_MODE_LETTERBOX \endcode indicates that all the
++	 * source region should be displayed and black bars added if
++	 * necessary.
++	 */
++	u32 mode; /* enum mmal_parameter_displaymode */
++
++	/** If non-zero, defines the width of a source pixel relative
++	 * to \code pixel_y \endcode.  If zero, then pixels default to
++	 * being square.
++	 */
++	u32 pixel_x;
++
++	/** If non-zero, defines the height of a source pixel relative
++	 * to \code pixel_x \endcode.  If zero, then pixels default to
++	 * being square.
++	 */
++	u32 pixel_y;
++
++	/** Sets the relative depth of the images, with greater values
++	 * being in front of smaller values.
++	 */
++	u32 layer;
++
++	/** Set to non-zero to ensure copy protection is used on
++	 * output.
++	 */
++	s32 copyprotect_required;
++
++	/** Level of opacity of the layer, where zero is fully
++	 * transparent and 255 is fully opaque.
++	 */
++	u32 alpha;
++};
++
++#define MMAL_MAX_IMAGEFX_PARAMETERS 5
++
++struct mmal_parameter_imagefx_parameters {
++	enum mmal_parameter_imagefx effect;
++	u32 num_effect_params;
++	u32 effect_parameter[MMAL_MAX_IMAGEFX_PARAMETERS];
++};
+--- /dev/null
++++ b/drivers/media/platform/bcm2835/mmal-vchiq.c
+@@ -0,0 +1,1916 @@
++/*
++ * Broadcom BCM2835 V4L2 driver
++ *
++ * Copyright © 2013 Raspberry Pi (Trading) Ltd.
++ *
++ * This file is subject to the terms and conditions of the GNU General Public
++ * License.  See the file COPYING in the main directory of this archive
++ * for more details.
++ *
++ * Authors: Vincent Sanders <vincent.sanders at collabora.co.uk>
++ *          Dave Stevenson <dsteve at broadcom.com>
++ *          Simon Mellor <simellor at broadcom.com>
++ *          Luke Diamand <luked at broadcom.com>
++ *
++ * V4L2 driver MMAL vchiq interface code
++ */
++
++#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
++
++#include <linux/errno.h>
++#include <linux/kernel.h>
++#include <linux/mutex.h>
++#include <linux/mm.h>
++#include <linux/slab.h>
++#include <linux/completion.h>
++#include <linux/vmalloc.h>
++#include <asm/cacheflush.h>
++#include <media/videobuf2-vmalloc.h>
++
++#include "mmal-common.h"
++#include "mmal-vchiq.h"
++#include "mmal-msg.h"
++
++#define USE_VCHIQ_ARM
++#include "interface/vchi/vchi.h"
++
++/* maximum number of components supported */
++#define VCHIQ_MMAL_MAX_COMPONENTS 4
++
++/*#define FULL_MSG_DUMP 1*/
++
++#ifdef DEBUG
++static const char *const msg_type_names[] = {
++	"UNKNOWN",
++	"QUIT",
++	"SERVICE_CLOSED",
++	"GET_VERSION",
++	"COMPONENT_CREATE",
++	"COMPONENT_DESTROY",
++	"COMPONENT_ENABLE",
++	"COMPONENT_DISABLE",
++	"PORT_INFO_GET",
++	"PORT_INFO_SET",
++	"PORT_ACTION",
++	"BUFFER_FROM_HOST",
++	"BUFFER_TO_HOST",
++	"GET_STATS",
++	"PORT_PARAMETER_SET",
++	"PORT_PARAMETER_GET",
++	"EVENT_TO_HOST",
++	"GET_CORE_STATS_FOR_PORT",
++	"OPAQUE_ALLOCATOR",
++	"CONSUME_MEM",
++	"LMK",
++	"OPAQUE_ALLOCATOR_DESC",
++	"DRM_GET_LHS32",
++	"DRM_GET_TIME",
++	"BUFFER_FROM_HOST_ZEROLEN",
++	"PORT_FLUSH",
++	"HOST_LOG",
++};
++#endif
++
++static const char *const port_action_type_names[] = {
++	"UNKNOWN",
++	"ENABLE",
++	"DISABLE",
++	"FLUSH",
++	"CONNECT",
++	"DISCONNECT",
++	"SET_REQUIREMENTS",
++};
++
++#if defined(DEBUG)
++#if defined(FULL_MSG_DUMP)
++#define DBG_DUMP_MSG(MSG, MSG_LEN, TITLE)				\
++	do {								\
++		pr_debug(TITLE" type:%s(%d) length:%d\n",		\
++			 msg_type_names[(MSG)->h.type],			\
++			 (MSG)->h.type, (MSG_LEN));			\
++		print_hex_dump(KERN_DEBUG, "<<h: ", DUMP_PREFIX_OFFSET,	\
++			       16, 4, (MSG),				\
++			       sizeof(struct mmal_msg_header), 1);	\
++		print_hex_dump(KERN_DEBUG, "<<p: ", DUMP_PREFIX_OFFSET,	\
++			       16, 4,					\
++			       ((u8 *)(MSG)) + sizeof(struct mmal_msg_header),\
++			       (MSG_LEN) - sizeof(struct mmal_msg_header), 1); \
++	} while (0)
++#else
++#define DBG_DUMP_MSG(MSG, MSG_LEN, TITLE)				\
++	do {								\
++		pr_debug(TITLE" type:%s(%d) length:%d\n",		\
++			 msg_type_names[(MSG)->h.type],			\
++			 (MSG)->h.type, (MSG_LEN));			\
++	} while (0)
++#endif
++#else
++#define DBG_DUMP_MSG(MSG, MSG_LEN, TITLE)
++#endif
++
++/* normal message context */
++struct mmal_msg_context {
++	union {
++		struct {
++			/* work struct for deferred callback - must come first */
++			struct work_struct work;
++			/* mmal instance */
++			struct vchiq_mmal_instance *instance;
++			/* mmal port */
++			struct vchiq_mmal_port *port;
++			/* actual buffer used to store bulk reply */
++			struct mmal_buffer *buffer;
++			/* amount of buffer used */
++			unsigned long buffer_used;
++			/* MMAL buffer flags */
++			u32 mmal_flags;
++			/* Presentation and Decode timestamps */
++			s64 pts;
++			s64 dts;
++
++			int status;	/* context status */
++
++		} bulk;		/* bulk data */
++
++		struct {
++			/* message handle to release */
++			VCHI_HELD_MSG_T msg_handle;
++			/* pointer to received message */
++			struct mmal_msg *msg;
++			/* received message length */
++			u32 msg_len;
++			/* completion upon reply */
++			struct completion cmplt;
++		} sync;		/* synchronous response */
++	} u;
++
++};
++
++struct vchiq_mmal_instance {
++	VCHI_SERVICE_HANDLE_T handle;
++
++	/* ensure serialised access to service */
++	struct mutex vchiq_mutex;
++
++	/* ensure serialised access to bulk operations */
++	struct mutex bulk_mutex;
++
++	/* vmalloc page to receive scratch bulk xfers into */
++	void *bulk_scratch;
++
++	/* component to use next */
++	int component_idx;
++	struct vchiq_mmal_component component[VCHIQ_MMAL_MAX_COMPONENTS];
++};
++
++static struct mmal_msg_context *get_msg_context(struct vchiq_mmal_instance
++						*instance)
++{
++	struct mmal_msg_context *msg_context;
++
++	/* todo: should this be allocated from a pool to avoid kmalloc */
++	msg_context = kzalloc(sizeof(*msg_context), GFP_KERNEL);
++
++	return msg_context;
++}
++
++static void release_msg_context(struct mmal_msg_context *msg_context)
++{
++	kfree(msg_context);
++}
++
++/* deals with receipt of event to host message */
++static void event_to_host_cb(struct vchiq_mmal_instance *instance,
++			     struct mmal_msg *msg, u32 msg_len)
++{
++	pr_debug("unhandled event\n");
++	pr_debug("component:%p port type:%d num:%d cmd:0x%x length:%d\n",
++		 msg->u.event_to_host.client_component,
++		 msg->u.event_to_host.port_type,
++		 msg->u.event_to_host.port_num,
++		 msg->u.event_to_host.cmd, msg->u.event_to_host.length);
++}
++
++/* workqueue scheduled callback
++ *
++ * we do this because it is important we do not make any other vchiq
++ * sync calls from within the message delivery thread
++ */
++static void buffer_work_cb(struct work_struct *work)
++{
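++	/* the work_struct is the first member of the bulk message context,
++	 * so the work pointer can be converted straight back to the context
++	 */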
++	struct mmal_msg_context *msg_context = (struct mmal_msg_context *)work;
++
++	msg_context->u.bulk.port->buffer_cb(msg_context->u.bulk.instance,
++					    msg_context->u.bulk.port,
++					    msg_context->u.bulk.status,
++					    msg_context->u.bulk.buffer,
++					    msg_context->u.bulk.buffer_used,
++					    msg_context->u.bulk.mmal_flags,
++					    msg_context->u.bulk.dts,
++					    msg_context->u.bulk.pts);
++
++	/* release message context */
++	release_msg_context(msg_context);
++}
++
++/* enqueue a bulk receive for a given message context */
++static int bulk_receive(struct vchiq_mmal_instance *instance,
++			struct mmal_msg *msg,
++			struct mmal_msg_context *msg_context)
++{
++	unsigned long rd_len;
++	unsigned long flags = 0;
++	int ret;
++
++	/* bulk mutex stops other bulk operations while we have a
++	 * receive in progress - released in callback
++	 */
++	ret = mutex_lock_interruptible(&instance->bulk_mutex);
++	if (ret != 0)
++		return ret;
++
++	rd_len = msg->u.buffer_from_host.buffer_header.length;
++
++	/* take buffer from queue */
++	spin_lock_irqsave(&msg_context->u.bulk.port->slock, flags);
++	if (list_empty(&msg_context->u.bulk.port->buffers)) {
++		spin_unlock_irqrestore(&msg_context->u.bulk.port->slock, flags);
++		pr_err("buffer list empty trying to submit bulk receive\n");
++
++		/* todo: this is a serious error, we should never have
++		 * committed a buffer_to_host operation to the mmal
++		 * port without the buffer to back it up (underflow
++		 * handling) and there is no obvious way to deal with
++		 * this - how is the mmal service going to react when
++		 * we fail to do the xfer and reschedule a buffer when
++		 * it arrives? perhaps a starved flag to indicate a
++		 * waiting bulk receive?
++		 */
++
++		mutex_unlock(&instance->bulk_mutex);
++
++		return -EINVAL;
++	}
++
++	msg_context->u.bulk.buffer =
++	    list_entry(msg_context->u.bulk.port->buffers.next,
++		       struct mmal_buffer, list);
++	list_del(&msg_context->u.bulk.buffer->list);
++
++	spin_unlock_irqrestore(&msg_context->u.bulk.port->slock, flags);
++
++	/* ensure we do not overrun the available buffer */
++	if (rd_len > msg_context->u.bulk.buffer->buffer_size) {
++		rd_len = msg_context->u.bulk.buffer->buffer_size;
++		pr_warn("short read as not enough receive buffer space\n");
++		/* todo: is this the correct response, what happens to
++		 * the rest of the message data?
++		 */
++	}
++
++	/* store length */
++	msg_context->u.bulk.buffer_used = rd_len;
++	msg_context->u.bulk.mmal_flags =
++	    msg->u.buffer_from_host.buffer_header.flags;
++	msg_context->u.bulk.dts = msg->u.buffer_from_host.buffer_header.dts;
++	msg_context->u.bulk.pts = msg->u.buffer_from_host.buffer_header.pts;
++
++	// only need to flush L1 cache here, as VCHIQ takes care of the L2
++	// cache.
++	__cpuc_flush_dcache_area(msg_context->u.bulk.buffer->buffer, rd_len);
++
++	/* queue the bulk submission */
++	vchi_service_use(instance->handle);
++	ret = vchi_bulk_queue_receive(instance->handle,
++				      msg_context->u.bulk.buffer->buffer,
++				      /* Actual receive needs to be a multiple
++				       * of 4 bytes
++				       */
++				      (rd_len + 3) & ~3,
++				      VCHI_FLAGS_CALLBACK_WHEN_OP_COMPLETE |
++				      VCHI_FLAGS_BLOCK_UNTIL_QUEUED,
++				      msg_context);
++
++	vchi_service_release(instance->handle);
++
++	if (ret != 0) {
++		/* callback will not be clearing the mutex */
++		mutex_unlock(&instance->bulk_mutex);
++	}
++
++	return ret;
++}
++
++/* enqueue a dummy bulk receive for a given message context */
++static int dummy_bulk_receive(struct vchiq_mmal_instance *instance,
++			      struct mmal_msg_context *msg_context)
++{
++	int ret;
++
++	/* bulk mutex stops other bulk operations while we have a
++	 * receive in progress - released in callback
++	 */
++	ret = mutex_lock_interruptible(&instance->bulk_mutex);
++	if (ret != 0)
++		return ret;
++
++	/* zero length indicates this was a dummy transfer */
++	msg_context->u.bulk.buffer_used = 0;
++
++	/* queue the bulk submission */
++	vchi_service_use(instance->handle);
++
++	ret = vchi_bulk_queue_receive(instance->handle,
++				      instance->bulk_scratch,
++				      8,
++				      VCHI_FLAGS_CALLBACK_WHEN_OP_COMPLETE |
++				      VCHI_FLAGS_BLOCK_UNTIL_QUEUED,
++				      msg_context);
++
++	vchi_service_release(instance->handle);
++
++	if (ret != 0) {
++		/* callback will not be clearing the mutex */
++		mutex_unlock(&instance->bulk_mutex);
++	}
++
++	return ret;
++}
++
++/* data in message, memcpy from packet into output buffer */
++static int inline_receive(struct vchiq_mmal_instance *instance,
++			  struct mmal_msg *msg,
++			  struct mmal_msg_context *msg_context)
++{
++	unsigned long flags = 0;
++
++	/* take buffer from queue */
++	spin_lock_irqsave(&msg_context->u.bulk.port->slock, flags);
++	if (list_empty(&msg_context->u.bulk.port->buffers)) {
++		spin_unlock_irqrestore(&msg_context->u.bulk.port->slock, flags);
++		pr_err("buffer list empty trying to receive inline\n");
++
++		/* todo: this is a serious error, we should never have
++		 * committed a buffer_to_host operation to the mmal
++		 * port without the buffer to back it up (with
++		 * underflow handling) and there is no obvious way to
++		 * deal with this. Less bad than the bulk case as we
++		 * can just drop this on the floor but...unhelpful
++		 */
++		return -EINVAL;
++	}
++
++	msg_context->u.bulk.buffer =
++	    list_entry(msg_context->u.bulk.port->buffers.next,
++		       struct mmal_buffer, list);
++	list_del(&msg_context->u.bulk.buffer->list);
++
++	spin_unlock_irqrestore(&msg_context->u.bulk.port->slock, flags);
++
++	memcpy(msg_context->u.bulk.buffer->buffer,
++	       msg->u.buffer_from_host.short_data,
++	       msg->u.buffer_from_host.payload_in_message);
++
++	msg_context->u.bulk.buffer_used =
++	    msg->u.buffer_from_host.payload_in_message;
++
++	return 0;
++}
++
++/* queue the buffer availability with MMAL_MSG_TYPE_BUFFER_FROM_HOST */
++static int
++buffer_from_host(struct vchiq_mmal_instance *instance,
++		 struct vchiq_mmal_port *port, struct mmal_buffer *buf)
++{
++	struct mmal_msg_context *msg_context;
++	struct mmal_msg m;
++	int ret;
++
++	pr_debug("instance:%p buffer:%p\n", instance->handle, buf);
++
++	/* bulk mutex stops other bulk operations while we
++	 * have a receive in progress
++	 */
++	if (mutex_lock_interruptible(&instance->bulk_mutex))
++		return -EINTR;
++
++	/* get context */
++	msg_context = get_msg_context(instance);
++	if (msg_context == NULL) {
++		mutex_unlock(&instance->bulk_mutex);
++		return -ENOMEM;
++	}
++
++	/* store bulk message context for when data arrives */
++	msg_context->u.bulk.instance = instance;
++	msg_context->u.bulk.port = port;
++	msg_context->u.bulk.buffer = NULL;	/* not valid until bulk xfer */
++	msg_context->u.bulk.buffer_used = 0;
++
++	/* initialise work structure ready to schedule callback */
++	INIT_WORK(&msg_context->u.bulk.work, buffer_work_cb);
++
++	/* prep the buffer from host message */
++	memset(&m, 0xbc, sizeof(m));	/* just to make debug clearer */
++
++	m.h.type = MMAL_MSG_TYPE_BUFFER_FROM_HOST;
++	m.h.magic = MMAL_MAGIC;
++	m.h.context = msg_context;
++	m.h.status = 0;
++
++	/* drvbuf is our private data passed back */
++	m.u.buffer_from_host.drvbuf.magic = MMAL_MAGIC;
++	m.u.buffer_from_host.drvbuf.component_handle = port->component->handle;
++	m.u.buffer_from_host.drvbuf.port_handle = port->handle;
++	m.u.buffer_from_host.drvbuf.client_context = msg_context;
++
++	/* buffer header */
++	m.u.buffer_from_host.buffer_header.cmd = 0;
++	m.u.buffer_from_host.buffer_header.data = buf->buffer;
++	m.u.buffer_from_host.buffer_header.alloc_size = buf->buffer_size;
++	m.u.buffer_from_host.buffer_header.length = 0;	/* nothing used yet */
++	m.u.buffer_from_host.buffer_header.offset = 0;	/* no offset */
++	m.u.buffer_from_host.buffer_header.flags = 0;	/* no flags */
++	m.u.buffer_from_host.buffer_header.pts = MMAL_TIME_UNKNOWN;
++	m.u.buffer_from_host.buffer_header.dts = MMAL_TIME_UNKNOWN;
++
++	/* clear buffer type specific data */
++	memset(&m.u.buffer_from_host.buffer_header_type_specific, 0,
++	       sizeof(m.u.buffer_from_host.buffer_header_type_specific));
++
++	/* no payload in message */
++	m.u.buffer_from_host.payload_in_message = 0;
++
++	vchi_service_use(instance->handle);
++
++	ret = vchi_msg_queue(instance->handle, &m,
++			     sizeof(struct mmal_msg_header) +
++			     sizeof(m.u.buffer_from_host),
++			     VCHI_FLAGS_BLOCK_UNTIL_QUEUED, NULL);
++
++	if (ret != 0) {
++		release_msg_context(msg_context);
++		/* todo: is this correct error value? */
++	}
++
++	vchi_service_release(instance->handle);
++
++	mutex_unlock(&instance->bulk_mutex);
++
++	return ret;
++}
++
++/* submit a buffer to the mmal service
++ *
++ * buffer_from_host uses size data from the port's next available
++ * mmal_buffer and deals with there being no buffer available by
++ * incrementing the underflow count for later
++ */
++static int port_buffer_from_host(struct vchiq_mmal_instance *instance,
++				 struct vchiq_mmal_port *port)
++{
++	int ret;
++	struct mmal_buffer *buf;
++	unsigned long flags = 0;
++
++	if (!port->enabled)
++		return -EINVAL;
++
++	/* peek buffer from queue */
++	spin_lock_irqsave(&port->slock, flags);
++	if (list_empty(&port->buffers)) {
++		port->buffer_underflow++;
++		spin_unlock_irqrestore(&port->slock, flags);
++		return -ENOSPC;
++	}
++
++	buf = list_entry(port->buffers.next, struct mmal_buffer, list);
++
++	spin_unlock_irqrestore(&port->slock, flags);
++
++	/* issue buffer to mmal service */
++	ret = buffer_from_host(instance, port, buf);
++	if (ret) {
++		pr_err("adding buffer header failed\n");
++		/* todo: how should this be dealt with */
++	}
++
++	return ret;
++}
++
++/* deals with receipt of buffer to host message */
++static void buffer_to_host_cb(struct vchiq_mmal_instance *instance,
++			      struct mmal_msg *msg, u32 msg_len)
++{
++	struct mmal_msg_context *msg_context;
++
++	pr_debug("buffer_to_host_cb: instance:%p msg:%p msg_len:%d\n",
++		 instance, msg, msg_len);
++
++	if (msg->u.buffer_from_host.drvbuf.magic == MMAL_MAGIC) {
++		msg_context = msg->u.buffer_from_host.drvbuf.client_context;
++	} else {
++		pr_err("MMAL_MSG_TYPE_BUFFER_TO_HOST with bad magic\n");
++		return;
++	}
++
++	if (msg->h.status != MMAL_MSG_STATUS_SUCCESS) {
++		/* message reception had an error */
++		pr_warn("error %d in reply\n", msg->h.status);
++
++		msg_context->u.bulk.status = msg->h.status;
++
++	} else if (msg->u.buffer_from_host.buffer_header.length == 0) {
++		/* empty buffer */
++		if (msg->u.buffer_from_host.buffer_header.flags &
++		    MMAL_BUFFER_HEADER_FLAG_EOS) {
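++			/* an EOS buffer is completed via a dummy bulk
++			 * receive into the scratch buffer so the normal
++			 * bulk-completion path delivers the callback
++			 */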
++			msg_context->u.bulk.status =
++			    dummy_bulk_receive(instance, msg_context);
++			if (msg_context->u.bulk.status == 0)
++				return;	/* successful bulk submission, bulk
++					 * completion will trigger callback
++					 */
++		} else {
++			/* do callback with empty buffer - not EOS though */
++			msg_context->u.bulk.status = 0;
++			msg_context->u.bulk.buffer_used = 0;
++		}
++	} else if (msg->u.buffer_from_host.payload_in_message == 0) {
++		/* data is not in message, queue a bulk receive */
++		msg_context->u.bulk.status =
++		    bulk_receive(instance, msg, msg_context);
++		if (msg_context->u.bulk.status == 0)
++			return;	/* successful bulk submission, bulk
++				 * completion will trigger callback
++				 */
++
++		/* failed to submit buffer, this will end badly */
++		pr_err("error %d on bulk submission\n",
++		       msg_context->u.bulk.status);
++
++	} else if (msg->u.buffer_from_host.payload_in_message <=
++		   MMAL_VC_SHORT_DATA) {
++		/* data payload within message */
++		msg_context->u.bulk.status = inline_receive(instance, msg,
++							    msg_context);
++	} else {
++		pr_err("message with invalid short payload\n");
++
++		/* signal error */
++		msg_context->u.bulk.status = -EINVAL;
++		msg_context->u.bulk.buffer_used =
++		    msg->u.buffer_from_host.payload_in_message;
++	}
++
++	/* replace the buffer header */
++	port_buffer_from_host(instance, msg_context->u.bulk.port);
++
++	/* schedule the port callback */
++	schedule_work(&msg_context->u.bulk.work);
++}
++
++static void bulk_receive_cb(struct vchiq_mmal_instance *instance,
++			    struct mmal_msg_context *msg_context)
++{
++	/* bulk receive operation complete */
++	mutex_unlock(&msg_context->u.bulk.instance->bulk_mutex);
++
++	/* replace the buffer header */
++	port_buffer_from_host(msg_context->u.bulk.instance,
++			      msg_context->u.bulk.port);
++
++	msg_context->u.bulk.status = 0;
++
++	/* schedule the port callback */
++	schedule_work(&msg_context->u.bulk.work);
++}
++
++static void bulk_abort_cb(struct vchiq_mmal_instance *instance,
++			  struct mmal_msg_context *msg_context)
++{
++	pr_err("%s: bulk ABORTED msg_context:%p\n", __func__, msg_context);
++
++	/* bulk receive operation complete */
++	mutex_unlock(&msg_context->u.bulk.instance->bulk_mutex);
++
++	/* replace the buffer header */
++	port_buffer_from_host(msg_context->u.bulk.instance,
++			      msg_context->u.bulk.port);
++
++	msg_context->u.bulk.status = -EINTR;
++
++	schedule_work(&msg_context->u.bulk.work);
++}
++
++/* incoming event service callback */
++static void service_callback(void *param,
++			     const VCHI_CALLBACK_REASON_T reason,
++			     void *bulk_ctx)
++{
++	struct vchiq_mmal_instance *instance = param;
++	int status;
++	u32 msg_len;
++	struct mmal_msg *msg;
++	VCHI_HELD_MSG_T msg_handle;
++
++	if (!instance) {
++		pr_err("Message callback passed NULL instance\n");
++		return;
++	}
++
++	switch (reason) {
++	case VCHI_CALLBACK_MSG_AVAILABLE:
++		status = vchi_msg_hold(instance->handle, (void **)&msg,
++				       &msg_len, VCHI_FLAGS_NONE, &msg_handle);
++		if (status) {
++			pr_err("Unable to dequeue a message (%d)\n", status);
++			break;
++		}
++
++		DBG_DUMP_MSG(msg, msg_len, "<<< reply message");
++
++		/* handling is different for buffer messages */
++		switch (msg->h.type) {
++
++		case MMAL_MSG_TYPE_BUFFER_FROM_HOST:
++			vchi_held_msg_release(&msg_handle);
++			break;
++
++		case MMAL_MSG_TYPE_EVENT_TO_HOST:
++			event_to_host_cb(instance, msg, msg_len);
++			vchi_held_msg_release(&msg_handle);
++
++			break;
++
++		case MMAL_MSG_TYPE_BUFFER_TO_HOST:
++			buffer_to_host_cb(instance, msg, msg_len);
++			vchi_held_msg_release(&msg_handle);
++			break;
++
++		default:
++			/* messages dependent on header context to complete */
++
++			/* todo: the msg.context really ought to be sanity
++			 * checked before we just use it, afaict it comes back
++			 * and is used raw from the videocore. Perhaps it
++			 * should be verified the address lies in the kernel
++			 * address space.
++			 */
++			if (msg->h.context == NULL) {
++				pr_err("received message context was null!\n");
++				vchi_held_msg_release(&msg_handle);
++				break;
++			}
++
++			/* fill in context values */
++			msg->h.context->u.sync.msg_handle = msg_handle;
++			msg->h.context->u.sync.msg = msg;
++			msg->h.context->u.sync.msg_len = msg_len;
++
++			/* todo: should this check (completion_done()
++			 * == 1) for no one waiting? or do we need a
++			 * flag to tell us the completion has been
++			 * interrupted so we can free the message and
++			 * its context. This probably also solves the
++			 * message arriving after interruption todo
++			 * below
++			 */
++
++			/* complete message so caller knows it happened */
++			complete(&msg->h.context->u.sync.cmplt);
++			break;
++		}
++
++		break;
++
++	case VCHI_CALLBACK_BULK_RECEIVED:
++		bulk_receive_cb(instance, bulk_ctx);
++		break;
++
++	case VCHI_CALLBACK_BULK_RECEIVE_ABORTED:
++		bulk_abort_cb(instance, bulk_ctx);
++		break;
++
++	case VCHI_CALLBACK_SERVICE_CLOSED:
++		/* TODO: consider if this requires action if received when
++		 * driver is not explicitly closing the service
++		 */
++		break;
++
++	default:
++		pr_err("Received unhandled message reason %d\n", reason);
++		break;
++	}
++}
++
++static int send_synchronous_mmal_msg(struct vchiq_mmal_instance *instance,
++				     struct mmal_msg *msg,
++				     unsigned int payload_len,
++				     struct mmal_msg **msg_out,
++				     VCHI_HELD_MSG_T *msg_handle_out)
++{
++	struct mmal_msg_context msg_context;
++	int ret;
++
++	/* payload size must not cause message to exceed max size */
++	if (payload_len >
++	    (MMAL_MSG_MAX_SIZE - sizeof(struct mmal_msg_header))) {
++		pr_err("payload length %d exceeds max:%d\n", payload_len,
++			 (MMAL_MSG_MAX_SIZE - sizeof(struct mmal_msg_header)));
++		return -EINVAL;
++	}
++
++	init_completion(&msg_context.u.sync.cmplt);
++
++	msg->h.magic = MMAL_MAGIC;
++	msg->h.context = &msg_context;
++	msg->h.status = 0;
++
++	DBG_DUMP_MSG(msg, (sizeof(struct mmal_msg_header) + payload_len),
++		     ">>> sync message");
++
++	vchi_service_use(instance->handle);
++
++	ret = vchi_msg_queue(instance->handle,
++			     msg,
++			     sizeof(struct mmal_msg_header) + payload_len,
++			     VCHI_FLAGS_BLOCK_UNTIL_QUEUED, NULL);
++
++	vchi_service_release(instance->handle);
++
++	if (ret) {
++		pr_err("error %d queuing message\n", ret);
++		return ret;
++	}
++
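++	/* wait_for_completion_timeout() returns 0 on timeout and the
++	 * remaining jiffies (>0) on completion
++	 */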
++	ret = wait_for_completion_timeout(&msg_context.u.sync.cmplt, 3*HZ);
++	if (ret <= 0) {
++		pr_err("error %d waiting for sync completion\n", ret);
++		if (ret == 0)
++			ret = -ETIME;
++		/* todo: what happens if the message arrives after aborting */
++		return ret;
++	}
++
++	*msg_out = msg_context.u.sync.msg;
++	*msg_handle_out = msg_context.u.sync.msg_handle;
++
++	return 0;
++}
++
++static void dump_port_info(struct vchiq_mmal_port *port)
++{
++	pr_debug("port handle:0x%x enabled:%d\n", port->handle, port->enabled);
++
++	pr_debug("buffer minimum num:%d size:%d align:%d\n",
++		 port->minimum_buffer.num,
++		 port->minimum_buffer.size, port->minimum_buffer.alignment);
++
++	pr_debug("buffer recommended num:%d size:%d align:%d\n",
++		 port->recommended_buffer.num,
++		 port->recommended_buffer.size,
++		 port->recommended_buffer.alignment);
++
++	pr_debug("buffer current values num:%d size:%d align:%d\n",
++		 port->current_buffer.num,
++		 port->current_buffer.size, port->current_buffer.alignment);
++
++	pr_debug("elementary stream: type:%d encoding:0x%x variant:0x%x\n",
++		 port->format.type,
++		 port->format.encoding, port->format.encoding_variant);
++
++	pr_debug("		    bitrate:%d flags:0x%x\n",
++		 port->format.bitrate, port->format.flags);
++
++	if (port->format.type == MMAL_ES_TYPE_VIDEO) {
++		pr_debug
++		    ("es video format: width:%d height:%d colourspace:0x%x\n",
++		     port->es.video.width, port->es.video.height,
++		     port->es.video.color_space);
++
++		pr_debug("		 : crop xywh %d,%d,%d,%d\n",
++			 port->es.video.crop.x,
++			 port->es.video.crop.y,
++			 port->es.video.crop.width, port->es.video.crop.height);
++		pr_debug("		 : framerate %d/%d  aspect %d/%d\n",
++			 port->es.video.frame_rate.num,
++			 port->es.video.frame_rate.den,
++			 port->es.video.par.num, port->es.video.par.den);
++	}
++}
++
++static void port_to_mmal_msg(struct vchiq_mmal_port *port, struct mmal_port *p)
++{
++
++	/* todo do readonly fields need setting at all? */
++	p->type = port->type;
++	p->index = port->index;
++	p->index_all = 0;
++	p->is_enabled = port->enabled;
++	p->buffer_num_min = port->minimum_buffer.num;
++	p->buffer_size_min = port->minimum_buffer.size;
++	p->buffer_alignment_min = port->minimum_buffer.alignment;
++	p->buffer_num_recommended = port->recommended_buffer.num;
++	p->buffer_size_recommended = port->recommended_buffer.size;
++
++	/* only three writable fields in a port */
++	p->buffer_num = port->current_buffer.num;
++	p->buffer_size = port->current_buffer.size;
++	p->userdata = port;
++}
++
++static int port_info_set(struct vchiq_mmal_instance *instance,
++			 struct vchiq_mmal_port *port)
++{
++	int ret;
++	struct mmal_msg m;
++	struct mmal_msg *rmsg;
++	VCHI_HELD_MSG_T rmsg_handle;
++
++	pr_debug("setting port info port %p\n", port);
++	if (!port)
++		return -EINVAL;
++	dump_port_info(port);
++
++	m.h.type = MMAL_MSG_TYPE_PORT_INFO_SET;
++
++	m.u.port_info_set.component_handle = port->component->handle;
++	m.u.port_info_set.port_type = port->type;
++	m.u.port_info_set.port_index = port->index;
++
++	port_to_mmal_msg(port, &m.u.port_info_set.port);
++
++	/* elementary stream format setup */
++	m.u.port_info_set.format.type = port->format.type;
++	m.u.port_info_set.format.encoding = port->format.encoding;
++	m.u.port_info_set.format.encoding_variant =
++	    port->format.encoding_variant;
++	m.u.port_info_set.format.bitrate = port->format.bitrate;
++	m.u.port_info_set.format.flags = port->format.flags;
++
++	memcpy(&m.u.port_info_set.es, &port->es,
++	       sizeof(union mmal_es_specific_format));
++
++	m.u.port_info_set.format.extradata_size = port->format.extradata_size;
++	memcpy(&m.u.port_info_set.extradata, port->format.extradata,
++	       port->format.extradata_size);
++
++	ret = send_synchronous_mmal_msg(instance, &m,
++					sizeof(m.u.port_info_set),
++					&rmsg, &rmsg_handle);
++	if (ret)
++		return ret;
++
++	if (rmsg->h.type != MMAL_MSG_TYPE_PORT_INFO_SET) {
++		/* got an unexpected message type in reply */
++		ret = -EINVAL;
++		goto release_msg;
++	}
++
++	/* return operation status */
++	ret = -rmsg->u.port_info_get_reply.status;
++
++	pr_debug("%s:result:%d component:0x%x port:%d\n", __func__, ret,
++		 port->component->handle, port->handle);
++
++release_msg:
++	vchi_held_msg_release(&rmsg_handle);
++
++	return ret;
++
++}
++
++/* use port info get message to retrieve port information */
++static int port_info_get(struct vchiq_mmal_instance *instance,
++			 struct vchiq_mmal_port *port)
++{
++	int ret;
++	struct mmal_msg m;
++	struct mmal_msg *rmsg;
++	VCHI_HELD_MSG_T rmsg_handle;
++
++	/* port info time */
++	m.h.type = MMAL_MSG_TYPE_PORT_INFO_GET;
++	m.u.port_info_get.component_handle = port->component->handle;
++	m.u.port_info_get.port_type = port->type;
++	m.u.port_info_get.index = port->index;
++
++	ret = send_synchronous_mmal_msg(instance, &m,
++					sizeof(m.u.port_info_get),
++					&rmsg, &rmsg_handle);
++	if (ret)
++		return ret;
++
++	if (rmsg->h.type != MMAL_MSG_TYPE_PORT_INFO_GET) {
++		/* got an unexpected message type in reply */
++		ret = -EINVAL;
++		goto release_msg;
++	}
++
++	/* return operation status */
++	ret = -rmsg->u.port_info_get_reply.status;
++	if (ret != MMAL_MSG_STATUS_SUCCESS)
++		goto release_msg;
++
++	if (rmsg->u.port_info_get_reply.port.is_enabled == 0)
++		port->enabled = false;
++	else
++		port->enabled = true;
++
++	/* copy the values out of the message */
++	port->handle = rmsg->u.port_info_get_reply.port_handle;
++
++	/* port type and index cached to use on port info set because
++	 * it does not use a port handle
++	 */
++	port->type = rmsg->u.port_info_get_reply.port_type;
++	port->index = rmsg->u.port_info_get_reply.port_index;
++
++	port->minimum_buffer.num =
++	    rmsg->u.port_info_get_reply.port.buffer_num_min;
++	port->minimum_buffer.size =
++	    rmsg->u.port_info_get_reply.port.buffer_size_min;
++	port->minimum_buffer.alignment =
++	    rmsg->u.port_info_get_reply.port.buffer_alignment_min;
++
++	port->recommended_buffer.alignment =
++	    rmsg->u.port_info_get_reply.port.buffer_alignment_min;
++	port->recommended_buffer.num =
++	    rmsg->u.port_info_get_reply.port.buffer_num_recommended;
++
++	port->current_buffer.num = rmsg->u.port_info_get_reply.port.buffer_num;
++	port->current_buffer.size =
++	    rmsg->u.port_info_get_reply.port.buffer_size;
++
++	/* stream format */
++	port->format.type = rmsg->u.port_info_get_reply.format.type;
++	port->format.encoding = rmsg->u.port_info_get_reply.format.encoding;
++	port->format.encoding_variant =
++	    rmsg->u.port_info_get_reply.format.encoding_variant;
++	port->format.bitrate = rmsg->u.port_info_get_reply.format.bitrate;
++	port->format.flags = rmsg->u.port_info_get_reply.format.flags;
++
++	/* elementary stream format */
++	memcpy(&port->es,
++	       &rmsg->u.port_info_get_reply.es,
++	       sizeof(union mmal_es_specific_format));
++	port->format.es = &port->es;
++
++	port->format.extradata_size =
++	    rmsg->u.port_info_get_reply.format.extradata_size;
++	memcpy(port->format.extradata,
++	       rmsg->u.port_info_get_reply.extradata,
++	       port->format.extradata_size);
++
++	pr_debug("received port info\n");
++	dump_port_info(port);
++
++release_msg:
++
++	pr_debug("%s:result:%d component:0x%x port:%d\n",
++		 __func__, ret, port->component->handle, port->handle);
++
++	vchi_held_msg_release(&rmsg_handle);
++
++	return ret;
++}
++
++/* create component on vc */
++static int create_component(struct vchiq_mmal_instance *instance,
++			    struct vchiq_mmal_component *component,
++			    const char *name)
++{
++	int ret;
++	struct mmal_msg m;
++	struct mmal_msg *rmsg;
++	VCHI_HELD_MSG_T rmsg_handle;
++
++	/* build component create message */
++	m.h.type = MMAL_MSG_TYPE_COMPONENT_CREATE;
++	m.u.component_create.client_component = component;
++	strncpy(m.u.component_create.name, name,
++		sizeof(m.u.component_create.name));
++
++	ret = send_synchronous_mmal_msg(instance, &m,
++					sizeof(m.u.component_create),
++					&rmsg, &rmsg_handle);
++	if (ret)
++		return ret;
++
++	if (rmsg->h.type != m.h.type) {
++		/* got an unexpected message type in reply */
++		ret = -EINVAL;
++		goto release_msg;
++	}
++
++	ret = -rmsg->u.component_create_reply.status;
++	if (ret != MMAL_MSG_STATUS_SUCCESS)
++		goto release_msg;
++
++	/* a valid component response received */
++	component->handle = rmsg->u.component_create_reply.component_handle;
++	component->inputs = rmsg->u.component_create_reply.input_num;
++	component->outputs = rmsg->u.component_create_reply.output_num;
++	component->clocks = rmsg->u.component_create_reply.clock_num;
++
++	pr_debug("Component handle:0x%x in:%d out:%d clock:%d\n",
++		 component->handle,
++		 component->inputs, component->outputs, component->clocks);
++
++release_msg:
++	vchi_held_msg_release(&rmsg_handle);
++
++	return ret;
++}
++
++/* destroys a component on vc */
++static int destroy_component(struct vchiq_mmal_instance *instance,
++			     struct vchiq_mmal_component *component)
++{
++	int ret;
++	struct mmal_msg m;
++	struct mmal_msg *rmsg;
++	VCHI_HELD_MSG_T rmsg_handle;
++
++	m.h.type = MMAL_MSG_TYPE_COMPONENT_DESTROY;
++	m.u.component_destroy.component_handle = component->handle;
++
++	ret = send_synchronous_mmal_msg(instance, &m,
++					sizeof(m.u.component_destroy),
++					&rmsg, &rmsg_handle);
++	if (ret)
++		return ret;
++
++	if (rmsg->h.type != m.h.type) {
++		/* got an unexpected message type in reply */
++		ret = -EINVAL;
++		goto release_msg;
++	}
++
++	ret = -rmsg->u.component_destroy_reply.status;
++
++release_msg:
++
++	vchi_held_msg_release(&rmsg_handle);
++
++	return ret;
++}
++
++/* enable a component on vc */
++static int enable_component(struct vchiq_mmal_instance *instance,
++			    struct vchiq_mmal_component *component)
++{
++	int ret;
++	struct mmal_msg m;
++	struct mmal_msg *rmsg;
++	VCHI_HELD_MSG_T rmsg_handle;
++
++	m.h.type = MMAL_MSG_TYPE_COMPONENT_ENABLE;
++	m.u.component_enable.component_handle = component->handle;
++
++	ret = send_synchronous_mmal_msg(instance, &m,
++					sizeof(m.u.component_enable),
++					&rmsg, &rmsg_handle);
++	if (ret)
++		return ret;
++
++	if (rmsg->h.type != m.h.type) {
++		/* got an unexpected message type in reply */
++		ret = -EINVAL;
++		goto release_msg;
++	}
++
++	ret = -rmsg->u.component_enable_reply.status;
++
++release_msg:
++	vchi_held_msg_release(&rmsg_handle);
++
++	return ret;
++}
++
++/* disable a component on vc */
++static int disable_component(struct vchiq_mmal_instance *instance,
++			     struct vchiq_mmal_component *component)
++{
++	int ret;
++	struct mmal_msg m;
++	struct mmal_msg *rmsg;
++	VCHI_HELD_MSG_T rmsg_handle;
++
++	m.h.type = MMAL_MSG_TYPE_COMPONENT_DISABLE;
++	m.u.component_disable.component_handle = component->handle;
++
++	ret = send_synchronous_mmal_msg(instance, &m,
++					sizeof(m.u.component_disable),
++					&rmsg, &rmsg_handle);
++	if (ret)
++		return ret;
++
++	if (rmsg->h.type != m.h.type) {
++		/* got an unexpected message type in reply */
++		ret = -EINVAL;
++		goto release_msg;
++	}
++
++	ret = -rmsg->u.component_disable_reply.status;
++
++release_msg:
++
++	vchi_held_msg_release(&rmsg_handle);
++
++	return ret;
++}
++
++/* get version of mmal implementation */
++static int get_version(struct vchiq_mmal_instance *instance,
++		       u32 *major_out, u32 *minor_out)
++{
++	int ret;
++	struct mmal_msg m;
++	struct mmal_msg *rmsg;
++	VCHI_HELD_MSG_T rmsg_handle;
++
++	m.h.type = MMAL_MSG_TYPE_GET_VERSION;
++
++	ret = send_synchronous_mmal_msg(instance, &m,
++					sizeof(m.u.version),
++					&rmsg, &rmsg_handle);
++	if (ret)
++		return ret;
++
++	if (rmsg->h.type != m.h.type) {
++		/* got an unexpected message type in reply */
++		ret = -EINVAL;
++		goto release_msg;
++	}
++
++	*major_out = rmsg->u.version.major;
++	*minor_out = rmsg->u.version.minor;
++
++release_msg:
++	vchi_held_msg_release(&rmsg_handle);
++
++	return ret;
++}
++
++/* do a port action with a port as a parameter */
++static int port_action_port(struct vchiq_mmal_instance *instance,
++			    struct vchiq_mmal_port *port,
++			    enum mmal_msg_port_action_type action_type)
++{
++	int ret;
++	struct mmal_msg m;
++	struct mmal_msg *rmsg;
++	VCHI_HELD_MSG_T rmsg_handle;
++
++	m.h.type = MMAL_MSG_TYPE_PORT_ACTION;
++	m.u.port_action_port.component_handle = port->component->handle;
++	m.u.port_action_port.port_handle = port->handle;
++	m.u.port_action_port.action = action_type;
++
++	port_to_mmal_msg(port, &m.u.port_action_port.port);
++
++	ret = send_synchronous_mmal_msg(instance, &m,
++					sizeof(m.u.port_action_port),
++					&rmsg, &rmsg_handle);
++	if (ret)
++		return ret;
++
++	if (rmsg->h.type != MMAL_MSG_TYPE_PORT_ACTION) {
++		/* got an unexpected message type in reply */
++		ret = -EINVAL;
++		goto release_msg;
++	}
++
++	ret = -rmsg->u.port_action_reply.status;
++
++	pr_debug("%s:result:%d component:0x%x port:%d action:%s(%d)\n",
++		 __func__,
++		 ret, port->component->handle, port->handle,
++		 port_action_type_names[action_type], action_type);
++
++release_msg:
++	vchi_held_msg_release(&rmsg_handle);
++
++	return ret;
++}
++
++/* do a port action with handles as parameters */
++static int port_action_handle(struct vchiq_mmal_instance *instance,
++			      struct vchiq_mmal_port *port,
++			      enum mmal_msg_port_action_type action_type,
++			      u32 connect_component_handle,
++			      u32 connect_port_handle)
++{
++	int ret;
++	struct mmal_msg m;
++	struct mmal_msg *rmsg;
++	VCHI_HELD_MSG_T rmsg_handle;
++
++	m.h.type = MMAL_MSG_TYPE_PORT_ACTION;
++
++	m.u.port_action_handle.component_handle = port->component->handle;
++	m.u.port_action_handle.port_handle = port->handle;
++	m.u.port_action_handle.action = action_type;
++
++	m.u.port_action_handle.connect_component_handle =
++	    connect_component_handle;
++	m.u.port_action_handle.connect_port_handle = connect_port_handle;
++
++	ret = send_synchronous_mmal_msg(instance, &m,
++					sizeof(m.u.port_action_handle),
++					&rmsg, &rmsg_handle);
++	if (ret)
++		return ret;
++
++	if (rmsg->h.type != MMAL_MSG_TYPE_PORT_ACTION) {
++		/* got an unexpected message type in reply */
++		ret = -EINVAL;
++		goto release_msg;
++	}
++
++	ret = -rmsg->u.port_action_reply.status;
++
++	pr_debug("%s:result:%d component:0x%x port:%d action:%s(%d)" \
++		 " connect component:0x%x connect port:%d\n",
++		 __func__,
++		 ret, port->component->handle, port->handle,
++		 port_action_type_names[action_type],
++		 action_type, connect_component_handle, connect_port_handle);
++
++release_msg:
++	vchi_held_msg_release(&rmsg_handle);
++
++	return ret;
++}
++
++static int port_parameter_set(struct vchiq_mmal_instance *instance,
++			      struct vchiq_mmal_port *port,
++			      u32 parameter_id, void *value, u32 value_size)
++{
++	int ret;
++	struct mmal_msg m;
++	struct mmal_msg *rmsg;
++	VCHI_HELD_MSG_T rmsg_handle;
++
++	m.h.type = MMAL_MSG_TYPE_PORT_PARAMETER_SET;
++
++	m.u.port_parameter_set.component_handle = port->component->handle;
++	m.u.port_parameter_set.port_handle = port->handle;
++	m.u.port_parameter_set.id = parameter_id;
++	m.u.port_parameter_set.size = (2 * sizeof(u32)) + value_size;
++	memcpy(&m.u.port_parameter_set.value, value, value_size);
++
++	ret = send_synchronous_mmal_msg(instance, &m,
++					(4 * sizeof(u32)) + value_size,
++					&rmsg, &rmsg_handle);
++	if (ret)
++		return ret;
++
++	if (rmsg->h.type != MMAL_MSG_TYPE_PORT_PARAMETER_SET) {
++		/* got an unexpected message type in reply */
++		ret = -EINVAL;
++		goto release_msg;
++	}
++
++	ret = -rmsg->u.port_parameter_set_reply.status;
++
++	pr_debug("%s:result:%d component:0x%x port:%d parameter:%d\n",
++		 __func__,
++		 ret, port->component->handle, port->handle, parameter_id);
++
++release_msg:
++	vchi_held_msg_release(&rmsg_handle);
++
++	return ret;
++}
++
++static int port_parameter_get(struct vchiq_mmal_instance *instance,
++			      struct vchiq_mmal_port *port,
++			      u32 parameter_id, void *value, u32 *value_size)
++{
++	int ret;
++	struct mmal_msg m;
++	struct mmal_msg *rmsg;
++	VCHI_HELD_MSG_T rmsg_handle;
++
++	m.h.type = MMAL_MSG_TYPE_PORT_PARAMETER_GET;
++
++	m.u.port_parameter_get.component_handle = port->component->handle;
++	m.u.port_parameter_get.port_handle = port->handle;
++	m.u.port_parameter_get.id = parameter_id;
++	m.u.port_parameter_get.size = (2 * sizeof(u32)) + *value_size;
++
++	ret = send_synchronous_mmal_msg(instance, &m,
++					sizeof(struct
++					       mmal_msg_port_parameter_get),
++					&rmsg, &rmsg_handle);
++	if (ret)
++		return ret;
++
++	if (rmsg->h.type != MMAL_MSG_TYPE_PORT_PARAMETER_GET) {
++		/* got an unexpected message type in reply */
++		pr_err("Incorrect reply type %d\n", rmsg->h.type);
++		ret = -EINVAL;
++		goto release_msg;
++	}
++
++	ret = -rmsg->u.port_parameter_get_reply.status;
++	if (ret) {
++		/* Copy only as much as we have space for
++		 * but report true size of parameter
++		 */
++		memcpy(value, &rmsg->u.port_parameter_get_reply.value,
++		       *value_size);
++		*value_size = rmsg->u.port_parameter_get_reply.size;
++	} else
++		memcpy(value, &rmsg->u.port_parameter_get_reply.value,
++		       rmsg->u.port_parameter_get_reply.size);
++
++	pr_debug("%s:result:%d component:0x%x port:%d parameter:%d\n", __func__,
++	        ret, port->component->handle, port->handle, parameter_id);
++
++release_msg:
++	vchi_held_msg_release(&rmsg_handle);
++
++	return ret;
++}
++
++/* disables a port and drains buffers from it */
++static int port_disable(struct vchiq_mmal_instance *instance,
++			struct vchiq_mmal_port *port)
++{
++	int ret;
++	struct list_head *q, *buf_head;
++	unsigned long flags = 0;
++
++	if (!port->enabled)
++		return 0;
++
++	port->enabled = false;
++
++	ret = port_action_port(instance, port,
++			       MMAL_MSG_PORT_ACTION_TYPE_DISABLE);
++	if (ret == 0) {
++
++		/* drain all queued buffers on port */
++		spin_lock_irqsave(&port->slock, flags);
++
++		list_for_each_safe(buf_head, q, &port->buffers) {
++			struct mmal_buffer *mmalbuf;
++			mmalbuf = list_entry(buf_head, struct mmal_buffer,
++					     list);
++			list_del(buf_head);
++			if (port->buffer_cb)
++				port->buffer_cb(instance,
++						port, 0, mmalbuf, 0, 0,
++						MMAL_TIME_UNKNOWN,
++						MMAL_TIME_UNKNOWN);
++		}
++
++		spin_unlock_irqrestore(&port->slock, flags);
++
++		ret = port_info_get(instance, port);
++	}
++
++	return ret;
++}
++
++/* enable a port */
++static int port_enable(struct vchiq_mmal_instance *instance,
++		       struct vchiq_mmal_port *port)
++{
++	unsigned int hdr_count;
++	struct list_head *buf_head;
++	int ret;
++
++	if (port->enabled)
++		return 0;
++
++	/* ensure there are enough buffers queued to cover the buffer headers */
++	if (port->buffer_cb != NULL) {
++		hdr_count = 0;
++		list_for_each(buf_head, &port->buffers) {
++			hdr_count++;
++		}
++		if (hdr_count < port->current_buffer.num)
++			return -ENOSPC;
++	}
++
++	ret = port_action_port(instance, port,
++			       MMAL_MSG_PORT_ACTION_TYPE_ENABLE);
++	if (ret)
++		goto done;
++
++	port->enabled = true;
++
++	if (port->buffer_cb) {
++		/* send buffer headers to videocore */
++		hdr_count = 1;
++		list_for_each(buf_head, &port->buffers) {
++			struct mmal_buffer *mmalbuf;
++			mmalbuf = list_entry(buf_head, struct mmal_buffer,
++					     list);
++			ret = buffer_from_host(instance, port, mmalbuf);
++			if (ret)
++				goto done;
++
++			hdr_count++;
++			if (hdr_count > port->current_buffer.num)
++				break;
++		}
++	}
++
++	ret = port_info_get(instance, port);
++
++done:
++	return ret;
++}
++
++/* ------------------------------------------------------------------
++ * Exported API
++ *------------------------------------------------------------------*/
++
++int vchiq_mmal_port_set_format(struct vchiq_mmal_instance *instance,
++			       struct vchiq_mmal_port *port)
++{
++	int ret;
++
++	if (mutex_lock_interruptible(&instance->vchiq_mutex))
++		return -EINTR;
++
++	ret = port_info_set(instance, port);
++	if (ret)
++		goto release_unlock;
++
++	/* read what has actually been set */
++	ret = port_info_get(instance, port);
++
++release_unlock:
++	mutex_unlock(&instance->vchiq_mutex);
++
++	return ret;
++
++}
++
++int vchiq_mmal_port_parameter_set(struct vchiq_mmal_instance *instance,
++				  struct vchiq_mmal_port *port,
++				  u32 parameter, void *value, u32 value_size)
++{
++	int ret;
++
++	if (mutex_lock_interruptible(&instance->vchiq_mutex))
++		return -EINTR;
++
++	ret = port_parameter_set(instance, port, parameter, value, value_size);
++
++	mutex_unlock(&instance->vchiq_mutex);
++
++	return ret;
++}
++
++int vchiq_mmal_port_parameter_get(struct vchiq_mmal_instance *instance,
++				  struct vchiq_mmal_port *port,
++				  u32 parameter, void *value, u32 *value_size)
++{
++	int ret;
++
++	if (mutex_lock_interruptible(&instance->vchiq_mutex))
++		return -EINTR;
++
++	ret = port_parameter_get(instance, port, parameter, value, value_size);
++
++	mutex_unlock(&instance->vchiq_mutex);
++
++	return ret;
++}
++
++/* enable a port
++ *
++ * enables a port and queues buffers for satisfying callbacks if we
++ * provide a callback handler
++ */
++int vchiq_mmal_port_enable(struct vchiq_mmal_instance *instance,
++			   struct vchiq_mmal_port *port,
++			   vchiq_mmal_buffer_cb buffer_cb)
++{
++	int ret;
++
++	if (mutex_lock_interruptible(&instance->vchiq_mutex))
++		return -EINTR;
++
++	/* already enabled - noop */
++	if (port->enabled) {
++		ret = 0;
++		goto unlock;
++	}
++
++	port->buffer_cb = buffer_cb;
++
++	ret = port_enable(instance, port);
++
++unlock:
++	mutex_unlock(&instance->vchiq_mutex);
++
++	return ret;
++}
++
++int vchiq_mmal_port_disable(struct vchiq_mmal_instance *instance,
++			    struct vchiq_mmal_port *port)
++{
++	int ret;
++
++	if (mutex_lock_interruptible(&instance->vchiq_mutex))
++		return -EINTR;
++
++	if (!port->enabled) {
++		mutex_unlock(&instance->vchiq_mutex);
++		return 0;
++	}
++
++	ret = port_disable(instance, port);
++
++	mutex_unlock(&instance->vchiq_mutex);
++
++	return ret;
++}
++
++/* ports will be connected in a tunneled manner so data buffers
++ * are not handled by client.
++ */
++int vchiq_mmal_port_connect_tunnel(struct vchiq_mmal_instance *instance,
++				   struct vchiq_mmal_port *src,
++				   struct vchiq_mmal_port *dst)
++{
++	int ret;
++
++	if (mutex_lock_interruptible(&instance->vchiq_mutex))
++		return -EINTR;
++
++	/* disconnect ports if connected */
++	if (src->connected != NULL) {
++		ret = port_disable(instance, src);
++		if (ret) {
++			pr_err("failed disabling src port(%d)\n", ret);
++			goto release_unlock;
++		}
++
++		/* the destination port does not need to be disabled; as
++		 * the ports are connected, this happens automatically
++		 */
++
++		ret = port_action_handle(instance, src,
++					 MMAL_MSG_PORT_ACTION_TYPE_DISCONNECT,
++					 src->connected->component->handle,
++					 src->connected->handle);
++		if (ret < 0) {
++			pr_err("failed disconnecting src port\n");
++			goto release_unlock;
++		}
++		src->connected->enabled = false;
++		src->connected = NULL;
++	}
++
++	if (dst == NULL) {
++		/* do not make new connection */
++		ret = 0;
++		pr_debug("not making new connection\n");
++		goto release_unlock;
++	}
++
++	/* copy src port format to dst */
++	dst->format.encoding = src->format.encoding;
++	dst->es.video.width = src->es.video.width;
++	dst->es.video.height = src->es.video.height;
++	dst->es.video.crop.x = src->es.video.crop.x;
++	dst->es.video.crop.y = src->es.video.crop.y;
++	dst->es.video.crop.width = src->es.video.crop.width;
++	dst->es.video.crop.height = src->es.video.crop.height;
++	dst->es.video.frame_rate.num = src->es.video.frame_rate.num;
++	dst->es.video.frame_rate.den = src->es.video.frame_rate.den;
++
++	/* set new format */
++	ret = port_info_set(instance, dst);
++	if (ret) {
++		pr_debug("setting port info failed\n");
++		goto release_unlock;
++	}
++
++	/* read what has actually been set */
++	ret = port_info_get(instance, dst);
++	if (ret) {
++		pr_debug("read back port info failed\n");
++		goto release_unlock;
++	}
++
++	/* connect two ports together */
++	ret = port_action_handle(instance, src,
++				 MMAL_MSG_PORT_ACTION_TYPE_CONNECT,
++				 dst->component->handle, dst->handle);
++	if (ret < 0) {
++		pr_debug("connecting port %d:%d to %d:%d failed\n",
++			 src->component->handle, src->handle,
++			 dst->component->handle, dst->handle);
++		goto release_unlock;
++	}
++	src->connected = dst;
++
++release_unlock:
++
++	mutex_unlock(&instance->vchiq_mutex);
++
++	return ret;
++}
++
++int vchiq_mmal_submit_buffer(struct vchiq_mmal_instance *instance,
++			     struct vchiq_mmal_port *port,
++			     struct mmal_buffer *buffer)
++{
++	unsigned long flags = 0;
++
++	spin_lock_irqsave(&port->slock, flags);
++	list_add_tail(&buffer->list, &port->buffers);
++	spin_unlock_irqrestore(&port->slock, flags);
++
++	/* if the port previously underflowed because it was missing an
++	 * mmal_buffer, the buffer just added can satisfy that, so submit
++	 * a buffer to the mmal service.
++	 */
++	if (port->buffer_underflow) {
++		port_buffer_from_host(instance, port);
++		port->buffer_underflow--;
++	}
++
++	return 0;
++}
++
++/* Initialise a mmal component and its ports
++ *
++ */
++int vchiq_mmal_component_init(struct vchiq_mmal_instance *instance,
++			      const char *name,
++			      struct vchiq_mmal_component **component_out)
++{
++	int ret;
++	int idx;		/* port index */
++	struct vchiq_mmal_component *component;
++
++	if (mutex_lock_interruptible(&instance->vchiq_mutex))
++		return -EINTR;
++
++	if (instance->component_idx == VCHIQ_MMAL_MAX_COMPONENTS) {
++		ret = -EINVAL;	/* todo is this correct error? */
++		goto unlock;
++	}
++
++	component = &instance->component[instance->component_idx];
++
++	ret = create_component(instance, component, name);
++	if (ret < 0)
++		goto unlock;
++
++	/* ports info needs gathering */
++	component->control.type = MMAL_PORT_TYPE_CONTROL;
++	component->control.index = 0;
++	component->control.component = component;
++	spin_lock_init(&component->control.slock);
++	INIT_LIST_HEAD(&component->control.buffers);
++	ret = port_info_get(instance, &component->control);
++	if (ret < 0)
++		goto release_component;
++
++	for (idx = 0; idx < component->inputs; idx++) {
++		component->input[idx].type = MMAL_PORT_TYPE_INPUT;
++		component->input[idx].index = idx;
++		component->input[idx].component = component;
++		spin_lock_init(&component->input[idx].slock);
++		INIT_LIST_HEAD(&component->input[idx].buffers);
++		ret = port_info_get(instance, &component->input[idx]);
++		if (ret < 0)
++			goto release_component;
++	}
++
++	for (idx = 0; idx < component->outputs; idx++) {
++		component->output[idx].type = MMAL_PORT_TYPE_OUTPUT;
++		component->output[idx].index = idx;
++		component->output[idx].component = component;
++		spin_lock_init(&component->output[idx].slock);
++		INIT_LIST_HEAD(&component->output[idx].buffers);
++		ret = port_info_get(instance, &component->output[idx]);
++		if (ret < 0)
++			goto release_component;
++	}
++
++	for (idx = 0; idx < component->clocks; idx++) {
++		component->clock[idx].type = MMAL_PORT_TYPE_CLOCK;
++		component->clock[idx].index = idx;
++		component->clock[idx].component = component;
++		spin_lock_init(&component->clock[idx].slock);
++		INIT_LIST_HEAD(&component->clock[idx].buffers);
++		ret = port_info_get(instance, &component->clock[idx]);
++		if (ret < 0)
++			goto release_component;
++	}
++
++	instance->component_idx++;
++
++	*component_out = component;
++
++	mutex_unlock(&instance->vchiq_mutex);
++
++	return 0;
++
++release_component:
++	destroy_component(instance, component);
++unlock:
++	mutex_unlock(&instance->vchiq_mutex);
++
++	return ret;
++}
++
++/*
++ * cause a mmal component to be destroyed
++ */
++int vchiq_mmal_component_finalise(struct vchiq_mmal_instance *instance,
++				  struct vchiq_mmal_component *component)
++{
++	int ret;
++
++	if (mutex_lock_interruptible(&instance->vchiq_mutex))
++		return -EINTR;
++
++	if (component->enabled)
++		ret = disable_component(instance, component);
++
++	ret = destroy_component(instance, component);
++
++	mutex_unlock(&instance->vchiq_mutex);
++
++	return ret;
++}
++
++/*
++ * cause a mmal component to be enabled
++ */
++int vchiq_mmal_component_enable(struct vchiq_mmal_instance *instance,
++				struct vchiq_mmal_component *component)
++{
++	int ret;
++
++	if (mutex_lock_interruptible(&instance->vchiq_mutex))
++		return -EINTR;
++
++	if (component->enabled) {
++		mutex_unlock(&instance->vchiq_mutex);
++		return 0;
++	}
++
++	ret = enable_component(instance, component);
++	if (ret == 0)
++		component->enabled = true;
++
++	mutex_unlock(&instance->vchiq_mutex);
++
++	return ret;
++}
++
++/*
++ * cause a mmal component to be disabled
++ */
++int vchiq_mmal_component_disable(struct vchiq_mmal_instance *instance,
++				 struct vchiq_mmal_component *component)
++{
++	int ret;
++
++	if (mutex_lock_interruptible(&instance->vchiq_mutex))
++		return -EINTR;
++
++	if (!component->enabled) {
++		mutex_unlock(&instance->vchiq_mutex);
++		return 0;
++	}
++
++	ret = disable_component(instance, component);
++	if (ret == 0)
++		component->enabled = false;
++
++	mutex_unlock(&instance->vchiq_mutex);
++
++	return ret;
++}
++
++int vchiq_mmal_version(struct vchiq_mmal_instance *instance,
++		       u32 *major_out, u32 *minor_out)
++{
++	int ret;
++
++	if (mutex_lock_interruptible(&instance->vchiq_mutex))
++		return -EINTR;
++
++	ret = get_version(instance, major_out, minor_out);
++
++	mutex_unlock(&instance->vchiq_mutex);
++
++	return ret;
++}
++
++int vchiq_mmal_finalise(struct vchiq_mmal_instance *instance)
++{
++	int status = 0;
++
++	if (instance == NULL)
++		return -EINVAL;
++
++	if (mutex_lock_interruptible(&instance->vchiq_mutex))
++		return -EINTR;
++
++	vchi_service_use(instance->handle);
++
++	status = vchi_service_close(instance->handle);
++	if (status != 0)
++		pr_err("mmal-vchiq: VCHIQ close failed");
++
++	mutex_unlock(&instance->vchiq_mutex);
++
++	vfree(instance->bulk_scratch);
++
++	kfree(instance);
++
++	return status;
++}
++
++int vchiq_mmal_init(struct vchiq_mmal_instance **out_instance)
++{
++	int status;
++	struct vchiq_mmal_instance *instance;
++	static VCHI_CONNECTION_T *vchi_connection;
++	static VCHI_INSTANCE_T vchi_instance;
++	SERVICE_CREATION_T params = {
++		VCHI_VERSION_EX(VC_MMAL_VER, VC_MMAL_MIN_VER),
++		VC_MMAL_SERVER_NAME,
++		vchi_connection,
++		0,		/* rx fifo size (unused) */
++		0,		/* tx fifo size (unused) */
++		service_callback,
++		NULL,		/* service callback parameter */
++		1,		/* unaligned bulk receives */
++		1,		/* unaligned bulk transmits */
++		0		/* want crc check on bulk transfers */
++	};
++
++	/* compile time checks to ensure structure sizes as they are
++	 * directly (de)serialised from memory.
++	 */
++
++	/* ensure the header structure has packed to the correct size */
++	BUILD_BUG_ON(sizeof(struct mmal_msg_header) != 24);
++
++	/* ensure message structure does not exceed maximum length */
++	BUILD_BUG_ON(sizeof(struct mmal_msg) > MMAL_MSG_MAX_SIZE);
++
++	/* mmal port struct is correct size */
++	BUILD_BUG_ON(sizeof(struct mmal_port) != 64);
++
++	/* create a vchi instance */
++	status = vchi_initialise(&vchi_instance);
++	if (status) {
++		pr_err("Failed to initialise VCHI instance (status=%d)\n",
++		       status);
++		return -EIO;
++	}
++
++	status = vchi_connect(NULL, 0, vchi_instance);
++	if (status) {
++		pr_err("Failed to connect VCHI instance (status=%d)\n", status);
++		return -EIO;
++	}
++
++	instance = kmalloc(sizeof(*instance), GFP_KERNEL);
++	memset(instance, 0, sizeof(*instance));
++
++	mutex_init(&instance->vchiq_mutex);
++	mutex_init(&instance->bulk_mutex);
++
++	instance->bulk_scratch = vmalloc(PAGE_SIZE);
++
++	params.callback_param = instance;
++
++	status = vchi_service_open(vchi_instance, &params, &instance->handle);
++	if (status) {
++		pr_err("Failed to open VCHI service connection (status=%d)\n",
++		       status);
++		goto err_close_services;
++	}
++
++	vchi_service_release(instance->handle);
++
++	*out_instance = instance;
++
++	return 0;
++
++err_close_services:
++
++	vchi_service_close(instance->handle);
++	vfree(instance->bulk_scratch);
++	kfree(instance);
++	return -ENODEV;
++}
+--- /dev/null
++++ b/drivers/media/platform/bcm2835/mmal-vchiq.h
+@@ -0,0 +1,178 @@
++/*
++ * Broadcom BCM2835 V4L2 driver
++ *
++ * Copyright © 2013 Raspberry Pi (Trading) Ltd.
++ *
++ * This file is subject to the terms and conditions of the GNU General Public
++ * License.  See the file COPYING in the main directory of this archive
++ * for more details.
++ *
++ * Authors: Vincent Sanders <vincent.sanders at collabora.co.uk>
++ *          Dave Stevenson <dsteve at broadcom.com>
++ *          Simon Mellor <simellor at broadcom.com>
++ *          Luke Diamand <luked at broadcom.com>
++ *
++ * MMAL interface to VCHIQ message passing
++ */
++
++#ifndef MMAL_VCHIQ_H
++#define MMAL_VCHIQ_H
++
++#include "mmal-msg-format.h"
++
++#define MAX_PORT_COUNT 4
++
++/* Maximum size of the format extradata. */
++#define MMAL_FORMAT_EXTRADATA_MAX_SIZE 128
++
++struct vchiq_mmal_instance;
++
++enum vchiq_mmal_es_type {
++	MMAL_ES_TYPE_UNKNOWN,     /**< Unknown elementary stream type */
++	MMAL_ES_TYPE_CONTROL,     /**< Elementary stream of control commands */
++	MMAL_ES_TYPE_AUDIO,       /**< Audio elementary stream */
++	MMAL_ES_TYPE_VIDEO,       /**< Video elementary stream */
++	MMAL_ES_TYPE_SUBPICTURE   /**< Sub-picture elementary stream */
++};
++
++/* rectangle, used lots so it gets its own struct */
++struct vchiq_mmal_rect {
++	s32 x;
++	s32 y;
++	s32 width;
++	s32 height;
++};
++
++struct vchiq_mmal_port_buffer {
++	unsigned int num; /* number of buffers */
++	u32 size; /* size of buffers */
++	u32 alignment; /* alignment of buffers */
++};
++
++struct vchiq_mmal_port;
++
++typedef void (*vchiq_mmal_buffer_cb)(
++		struct vchiq_mmal_instance  *instance,
++		struct vchiq_mmal_port *port,
++		int status, struct mmal_buffer *buffer,
++		unsigned long length, u32 mmal_flags, s64 dts, s64 pts);
++
++struct vchiq_mmal_port {
++	bool enabled;
++	u32 handle;
++	u32 type; /* port type, cached to use on port info set */
++	u32 index; /* port index, cached to use on port info set */
++
++	/* component port belongs to, allows simple deref */
++	struct vchiq_mmal_component *component;
++
++	struct vchiq_mmal_port *connected; /* port connected to */
++
++	/* buffer info */
++	struct vchiq_mmal_port_buffer minimum_buffer;
++	struct vchiq_mmal_port_buffer recommended_buffer;
++	struct vchiq_mmal_port_buffer current_buffer;
++
++	/* stream format */
++	struct mmal_es_format format;
++	/* elementary stream format */
++	union mmal_es_specific_format es;
++
++	/* data buffers to fill */
++	struct list_head buffers;
++	/* lock to serialise adding and removing buffers from list */
++	spinlock_t slock;
++	/* count of how many buffer header refills have failed because
++	 * there was no buffer to satisfy them
++	 */
++	int buffer_underflow;
++	/* callback on buffer completion */
++	vchiq_mmal_buffer_cb buffer_cb;
++	/* callback context */
++	void *cb_ctx;
++};
++
++struct vchiq_mmal_component {
++	bool enabled;
++	u32 handle;  /* VideoCore handle for component */
++	u32 inputs;  /* Number of input ports */
++	u32 outputs; /* Number of output ports */
++	u32 clocks;  /* Number of clock ports */
++	struct vchiq_mmal_port control; /* control port */
++	struct vchiq_mmal_port input[MAX_PORT_COUNT]; /* input ports */
++	struct vchiq_mmal_port output[MAX_PORT_COUNT]; /* output ports */
++	struct vchiq_mmal_port clock[MAX_PORT_COUNT]; /* clock ports */
++};
++
++
++int vchiq_mmal_init(struct vchiq_mmal_instance **out_instance);
++int vchiq_mmal_finalise(struct vchiq_mmal_instance *instance);
++
++/* Initialise a mmal component and its ports
++*
++*/
++int vchiq_mmal_component_init(
++		struct vchiq_mmal_instance *instance,
++		const char *name,
++		struct vchiq_mmal_component **component_out);
++
++int vchiq_mmal_component_finalise(
++		struct vchiq_mmal_instance *instance,
++		struct vchiq_mmal_component *component);
++
++int vchiq_mmal_component_enable(
++		struct vchiq_mmal_instance *instance,
++		struct vchiq_mmal_component *component);
++
++int vchiq_mmal_component_disable(
++		struct vchiq_mmal_instance *instance,
++		struct vchiq_mmal_component *component);
++
++
++
++/* enable a mmal port
++ *
++ * enables a port and, if a buffer callback is provided, enqueues
++ * buffer headers as appropriate for the port.
++ */
++int vchiq_mmal_port_enable(
++		struct vchiq_mmal_instance *instance,
++		struct vchiq_mmal_port *port,
++		vchiq_mmal_buffer_cb buffer_cb);
++
++/* disable a port
++ *
++ * disabling a port will dequeue any pending buffers
++ */
++int vchiq_mmal_port_disable(struct vchiq_mmal_instance *instance,
++			   struct vchiq_mmal_port *port);
++
++
++int vchiq_mmal_port_parameter_set(struct vchiq_mmal_instance *instance,
++				  struct vchiq_mmal_port *port,
++				  u32 parameter,
++				  void *value,
++				  u32 value_size);
++
++int vchiq_mmal_port_parameter_get(struct vchiq_mmal_instance *instance,
++				  struct vchiq_mmal_port *port,
++				  u32 parameter,
++				  void *value,
++				  u32 *value_size);
++
++int vchiq_mmal_port_set_format(struct vchiq_mmal_instance *instance,
++			       struct vchiq_mmal_port *port);
++
++int vchiq_mmal_port_connect_tunnel(struct vchiq_mmal_instance *instance,
++			    struct vchiq_mmal_port *src,
++			    struct vchiq_mmal_port *dst);
++
++int vchiq_mmal_version(struct vchiq_mmal_instance *instance,
++		       u32 *major_out,
++		       u32 *minor_out);
++
++int vchiq_mmal_submit_buffer(struct vchiq_mmal_instance *instance,
++			     struct vchiq_mmal_port *port,
++			     struct mmal_buffer *buf);
++
++#endif /* MMAL_VCHIQ_H */
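
As an aside (not part of the patch itself): a rough sketch of how a client of the interface declared above would drive it. The component name "ril.camera", the helper name and the error handling are placeholders; the intended consumer is the bcm2835 V4L2 camera driver.

#include <linux/kernel.h>

#include "mmal-vchiq.h"

/* Minimal usage sketch: bring the service up, create and enable a
 * component, then tear everything down again.
 */
static int mmal_usage_sketch(void)
{
	struct vchiq_mmal_instance *instance;
	struct vchiq_mmal_component *camera;
	u32 major, minor;
	int ret;

	ret = vchiq_mmal_init(&instance);
	if (ret)
		return ret;

	ret = vchiq_mmal_version(instance, &major, &minor);
	if (ret)
		goto out_finalise;
	pr_info("mmal version %u.%u\n", major, minor);

	/* "ril.camera" is only an example component name */
	ret = vchiq_mmal_component_init(instance, "ril.camera", &camera);
	if (ret)
		goto out_finalise;

	ret = vchiq_mmal_component_enable(instance, camera);
	if (ret)
		goto out_component;

	/* ... configure camera->output[n] with vchiq_mmal_port_set_format(),
	 * enable ports with a buffer callback via vchiq_mmal_port_enable()
	 * and feed them with vchiq_mmal_submit_buffer() ...
	 */

	vchiq_mmal_component_disable(instance, camera);
out_component:
	vchiq_mmal_component_finalise(instance, camera);
out_finalise:
	vchiq_mmal_finalise(instance);
	return ret;
}
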
diff --git a/target/linux/brcm2708/patches-4.4/0050-scripts-Add-mkknlimg-and-knlinfo-scripts-from-tools-.patch b/target/linux/brcm2708/patches-4.4/0050-scripts-Add-mkknlimg-and-knlinfo-scripts-from-tools-.patch
new file mode 100644
index 0000000..93c267c
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0050-scripts-Add-mkknlimg-and-knlinfo-scripts-from-tools-.patch
@@ -0,0 +1,461 @@
+From 4ddb3fae0a5c5b6969168134b4352bceccf51b9c Mon Sep 17 00:00:00 2001
+From: Phil Elwell <phil at raspberrypi.org>
+Date: Mon, 11 May 2015 09:00:42 +0100
+Subject: [PATCH 050/127] scripts: Add mkknlimg and knlinfo scripts from tools
+ repo
+
+The Raspberry Pi firmware looks for a trailer on the kernel image to
+determine whether it was compiled with Device Tree support enabled.
+If the firmware finds a kernel without this trailer, or which has a
+trailer indicating that it isn't DT-capable, it disables DT support
+and reverts to using ATAGs.
+
+The mkknlimg utility adds that trailer, having first analysed the
+image to look for signs of DT support and the kernel version string.
+
+knlinfo displays the contents of the trailer in the given kernel image.
+
+scripts/mkknlimg: Add support for ARCH_BCM2835
+
+Add a new trailer field indicating whether this is an ARCH_BCM2835
+build, as opposed to MACH_BCM2708/9. If the loader finds this flag
+is set it changes the default base dtb file name from bcm270x...
+to bcm283y...
+
+Also update knlinfo to show the status of the field.
+
+scripts/mkknlimg: Improve ARCH_BCM2835 detection
+
+The board support code contains sufficient strings to be able to
+distinguish 2708 vs. 2835 builds, so remove the check for
+bcm2835-pm-wdt which could exist in either.
+
+Also, since the canned configuration is no longer built in (it's
+a module), remove the config string checking.
+
+See: https://github.com/raspberrypi/linux/issues/1157
+---
+ scripts/knlinfo  | 168 ++++++++++++++++++++++++++++++++++++++
+ scripts/mkknlimg | 244 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
+ 2 files changed, 412 insertions(+)
+ create mode 100755 scripts/knlinfo
+ create mode 100755 scripts/mkknlimg
+
+--- /dev/null
++++ b/scripts/knlinfo
+@@ -0,0 +1,168 @@
++#!/usr/bin/env perl
++# ----------------------------------------------------------------------
++# knlinfo by Phil Elwell for Raspberry Pi
++#
++# (c) 2014,2015 Raspberry Pi (Trading) Limited <info at raspberrypi.org>
++#
++# Licensed under the terms of the GNU General Public License.
++# ----------------------------------------------------------------------
++
++use strict;
++use integer;
++
++use Fcntl ":seek";
++
++my $trailer_magic = 'RPTL';
++
++my %atom_formats =
++(
++    'DTOK' => \&format_bool,
++    'KVer' => \&format_string,
++    '283x' => \&format_bool,
++);
++
++if (@ARGV != 1)
++{
++	print ("Usage: knlinfo <kernel image>\n");
++	exit(1);
++}
++
++my $kernel_file = $ARGV[0];
++
++
++my ($atoms, $pos) = read_trailer($kernel_file);
++
++exit(1) if (!$atoms);
++
++printf("Kernel trailer found at %d/0x%x:\n", $pos, $pos);
++
++foreach my $atom (@$atoms)
++{
++    printf("  %s: %s\n", $atom->[0], format_atom($atom));
++}
++
++exit(0);
++
++sub read_trailer
++{
++	my ($kernel_file) = @_;
++	my $fh;
++
++	if (!open($fh, '<', $kernel_file))
++	{
++		print ("* Failed to open '$kernel_file'\n");
++		return undef;
++	}
++
++	if (!seek($fh, -12, SEEK_END))
++	{
++		print ("* seek error in '$kernel_file'\n");
++		return undef;
++	}
++
++	my $last_bytes;
++	sysread($fh, $last_bytes, 12);
++
++	my ($trailer_len, $data_len, $magic) = unpack('VVa4', $last_bytes);
++
++	if (($magic ne $trailer_magic) || ($data_len != 4))
++	{
++		print ("* no trailer\n");
++		return undef;
++	}
++	if (!seek($fh, -12, SEEK_END))
++	{
++		print ("* seek error in '$kernel_file'\n");
++		return undef;
++	}
++
++	$trailer_len -= 12;
++
++	while ($trailer_len > 0)
++	{
++		if ($trailer_len < 8)
++		{
++			print ("* truncated atom header in trailer\n");
++			return undef;
++		}
++		if (!seek($fh, -8, SEEK_CUR))
++		{
++			print ("* seek error in '$kernel_file'\n");
++			return undef;
++		}
++		$trailer_len -= 8;
++
++		my $atom_hdr;
++		sysread($fh, $atom_hdr, 8);
++		my ($atom_len, $atom_type) = unpack('Va4', $atom_hdr);
++
++		if ($trailer_len < $atom_len)
++		{
++			print ("* truncated atom data in trailer\n");
++			return undef;
++		}
++
++		my $rounded_len = (($atom_len + 3) & ~3);
++		if (!seek($fh, -(8 + $rounded_len), SEEK_CUR))
++		{
++			print ("* seek error in '$kernel_file'\n");
++			return undef;
++		}
++		$trailer_len -= $rounded_len;
++
++		my $atom_data;
++		sysread($fh, $atom_data, $atom_len);
++
++		if (!seek($fh, -$atom_len, SEEK_CUR))
++		{
++			print ("* seek error in '$kernel_file'\n");
++			return undef;
++		}
++
++		push @$atoms, [ $atom_type, $atom_data ];
++	}
++
++ 	if (($$atoms[-1][0] eq "\x00\x00\x00\x00") &&
++	    ($$atoms[-1][1] eq ""))
++	{
++		pop @$atoms;
++	}
++	else
++	{
++		print ("* end marker missing from trailer\n");
++	}
++
++	return ($atoms, tell($fh));
++}
++
++sub format_atom
++{
++    my ($atom) = @_;
++
++    my $format_func = $atom_formats{$atom->[0]} || \&format_hex;
++    return $format_func->($atom->[1]);
++}
++
++sub format_bool
++{
++    my ($data) = @_;
++    return unpack('V', $data) ? 'true' : 'false';
++}
++
++sub format_int
++{
++    my ($data) = @_;
++    return unpack('V', $data);
++}
++
++sub format_string
++{
++    my ($data) = @_;
++    return '"'.$data.'"';
++}
++
++sub format_hex
++{
++    my ($data) = @_;
++    return unpack('H*', $data);
++}
+--- /dev/null
++++ b/scripts/mkknlimg
+@@ -0,0 +1,244 @@
++#!/usr/bin/env perl
++# ----------------------------------------------------------------------
++# mkknlimg by Phil Elwell for Raspberry Pi
++# based on extract-ikconfig by Dick Streefland
++#
++# (c) 2009,2010 Dick Streefland <dick at streefland.net>
++# (c) 2014,2015 Raspberry Pi (Trading) Limited <info at raspberrypi.org>
++#
++# Licensed under the terms of the GNU General Public License.
++# ----------------------------------------------------------------------
++
++use strict;
++use warnings;
++use integer;
++
++my $trailer_magic = 'RPTL';
++
++my $tmpfile1 = "/tmp/mkknlimg_$$.1";
++my $tmpfile2 = "/tmp/mkknlimg_$$.2";
++
++my $dtok = 0;
++my $is_283x = 0;
++
++while (@ARGV && ($ARGV[0] =~ /^-/))
++{
++    my $arg = shift(@ARGV);
++    if ($arg eq '--dtok')
++    {
++	$dtok = 1;
++    }
++    elsif ($arg eq '--283x')
++    {
++	$is_283x = 1;
++    }
++    else
++    {
++	print ("* Unknown option '$arg'\n");
++	usage();
++    }
++}
++
++usage() if (@ARGV != 2);
++
++my $kernel_file = $ARGV[0];
++my $out_file = $ARGV[1];
++
++if (! -r $kernel_file)
++{
++    print ("* File '$kernel_file' not found\n");
++    usage();
++}
++
++my @wanted_strings =
++(
++	'bcm2708_fb',
++	'brcm,bcm2835-mmc',
++	'brcm,bcm2835-sdhost',
++	'brcm,bcm2708-pinctrl',
++	'brcm,bcm2835-gpio',
++	'brcm,bcm2835',
++	'brcm,bcm2836'
++);
++
++my $res = try_extract($kernel_file, $tmpfile1);
++$res = try_decompress('\037\213\010', 'xy',    'gunzip', 0,
++		      $kernel_file, $tmpfile1, $tmpfile2) if (!$res);
++$res = try_decompress('\3757zXZ\000', 'abcde', 'unxz --single-stream', -1,
++		      $kernel_file, $tmpfile1, $tmpfile2) if (!$res);
++$res = try_decompress('BZh',          'xy',    'bunzip2', 0,
++		      $kernel_file, $tmpfile1, $tmpfile2) if (!$res);
++$res = try_decompress('\135\0\0\0',   'xxx',   'unlzma', 0,
++		      $kernel_file, $tmpfile1, $tmpfile2) if (!$res);
++$res = try_decompress('\211\114\132', 'xy',    'lzop -d', 0,
++		      $kernel_file, $tmpfile1, $tmpfile2) if (!$res);
++$res = try_decompress('\002\041\114\030', 'xy',    'lz4 -d', 1,
++		      $kernel_file, $tmpfile1, $tmpfile2) if (!$res);
++
++my $append_trailer;
++my $trailer;
++my $kver = '?';
++
++$append_trailer = $dtok;
++
++if ($res)
++{
++    $kver = $res->{''} || '?';
++    print("Version: $kver\n");
++
++    $append_trailer = $dtok;
++    if (!$dtok)
++    {
++	if (config_bool($res, 'bcm2708_fb') ||
++	    config_bool($res, 'brcm,bcm2835-mmc') ||
++	    config_bool($res, 'brcm,bcm2835-sdhost'))
++	{
++	    $dtok ||= config_bool($res, 'brcm,bcm2708-pinctrl');
++	    $dtok ||= config_bool($res, 'brcm,bcm2835-gpio');
++	    $is_283x ||= config_bool($res, 'brcm,bcm2835');
++	    $is_283x ||= config_bool($res, 'brcm,bcm2836');
++	    $dtok ||= $is_283x;
++	    $append_trailer = 1;
++	}
++	else
++	{
++	    print ("* This doesn't look like a Raspberry Pi kernel. In pass-through mode.\n");
++	}
++    }
++}
++elsif (!$dtok)
++{
++    print ("* Is this a valid kernel? In pass-through mode.\n");
++}
++
++if ($append_trailer)
++{
++    printf("DT: %s\n", $dtok ? "y" : "n");
++    printf("283x: %s\n", $is_283x ? "y" : "n");
++
++    my @atoms;
++
++    push @atoms, [ $trailer_magic, pack('V', 0) ];
++    push @atoms, [ 'KVer', $kver ];
++    push @atoms, [ 'DTOK', pack('V', $dtok) ];
++    push @atoms, [ '283x', pack('V', $is_283x) ];
++
++    $trailer = pack_trailer(\@atoms);
++    $atoms[0]->[1] = pack('V', length($trailer));
++
++    $trailer = pack_trailer(\@atoms);
++}
++
++my $ofh;
++my $total_len = 0;
++
++if ($out_file eq $kernel_file)
++{
++    die "* Failed to open '$out_file' for append\n"
++	if (!open($ofh, '>>', $out_file));
++    $total_len = tell($ofh);
++}
++else
++{
++    die "* Failed to open '$kernel_file'\n"
++	if (!open(my $ifh, '<', $kernel_file));
++    die "* Failed to create '$out_file'\n"
++	if (!open($ofh, '>', $out_file));
++
++    my $copybuf;
++    while (1)
++    {
++	my $bytes = sysread($ifh, $copybuf, 64*1024);
++	last if (!$bytes);
++	syswrite($ofh, $copybuf, $bytes);
++	$total_len += $bytes;
++    }
++    close($ifh);
++}
++
++if ($trailer)
++{
++    # Pad to word-alignment
++    syswrite($ofh, "\x00\x00\x00", (-$total_len & 0x3));
++    syswrite($ofh, $trailer);
++}
++
++close($ofh);
++
++exit($trailer ? 0 : 1);
++
++END {
++	unlink($tmpfile1) if ($tmpfile1);
++	unlink($tmpfile2) if ($tmpfile2);
++}
++
++
++sub usage
++{
++	print ("Usage: mkknlimg [--dtok] [--283x] <vmlinux|zImage|bzImage> <outfile>\n");
++	exit(1);
++}
++
++sub try_extract
++{
++	my ($knl, $tmp) = @_;
++
++	my $ver = `strings "$knl" | grep -a -E "^Linux version [1-9]"`;
++
++	return undef if (!$ver);
++
++	chomp($ver);
++
++	my $res = { ''=>$ver };
++	my $string_pattern = '^('.join('|', @wanted_strings).')$';
++
++	my @matches = `strings \"$knl\" | grep -E \"$string_pattern\"`;
++	foreach my $match (@matches)
++	{
++	    chomp($match);
++	    $res->{$match} = 1;
++	}
++
++	return $res;
++}
++
++
++sub try_decompress
++{
++	my ($magic, $subst, $zcat, $idx, $knl, $tmp1, $tmp2) = @_;
++
++	my $pos = `tr "$magic\n$subst" "\n$subst=" < "$knl" | grep -abo "^$subst"`;
++	if ($pos)
++	{
++		chomp($pos);
++		$pos = (split(/[\r\n]+/, $pos))[$idx];
++		return undef if (!defined($pos));
++		$pos =~ s/:.*[\r\n]*$//s;
++		my $cmd = "tail -c+$pos \"$knl\" | $zcat > $tmp2 2> /dev/null";
++		my $err = (system($cmd) >> 8);
++		return undef if (($err != 0) && ($err != 2));
++
++		return try_extract($tmp2, $tmp1);
++	}
++
++	return undef;
++}
++
++sub pack_trailer
++{
++	my ($atoms) = @_;
++	my $trailer = pack('VV', 0, 0);
++	for (my $i = $#$atoms; $i>=0; $i--)
++	{
++		my $atom = $atoms->[$i];
++		$trailer .= pack('a*x!4Va4', $atom->[1], length($atom->[1]), $atom->[0]);
++	}
++	return $trailer;
++}
++
++sub config_bool
++{
++	my ($configs, $wanted) = @_;
++	my $val = $configs->{$wanted} || 'n';
++	return (($val eq 'y') || ($val eq '1'));
++}
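
As an aside (not part of the patch itself): the trailer layout these scripts read and write can be sketched in a few lines of C. The last 12 bytes of the image are a little-endian u32 trailer length, a u32 data length that must be 4, and the "RPTL" magic, with the word-aligned atoms stored in front of them so they can be walked backwards. This sketch only checks for the trailer's presence, mirroring the start of knlinfo's read_trailer().

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Report whether a kernel image carries the RPTL trailer appended by
 * mkknlimg. A little-endian host is assumed, matching the 'V' pack
 * format used by the scripts above.
 */
int main(int argc, char **argv)
{
	uint8_t tail[12];
	uint32_t trailer_len, data_len;
	FILE *f;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <kernel image>\n", argv[0]);
		return 1;
	}

	f = fopen(argv[1], "rb");
	if (!f || fseek(f, -12, SEEK_END) != 0 ||
	    fread(tail, 1, sizeof(tail), f) != sizeof(tail)) {
		fprintf(stderr, "failed to read the last 12 bytes\n");
		return 1;
	}
	fclose(f);

	memcpy(&trailer_len, tail, 4);
	memcpy(&data_len, tail + 4, 4);

	if (memcmp(tail + 8, "RPTL", 4) != 0 || data_len != 4)
		printf("no RPTL trailer\n");
	else
		printf("RPTL trailer present, %u bytes\n",
		       (unsigned)trailer_len);

	return 0;
}
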
diff --git a/target/linux/brcm2708/patches-4.4/0051-fdt-Add-support-for-the-CONFIG_CMDLINE_EXTEND-option.patch b/target/linux/brcm2708/patches-4.4/0051-fdt-Add-support-for-the-CONFIG_CMDLINE_EXTEND-option.patch
new file mode 100644
index 0000000..c909369
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0051-fdt-Add-support-for-the-CONFIG_CMDLINE_EXTEND-option.patch
@@ -0,0 +1,58 @@
+From 3f46d3627061688e50e24d5bfbc0adb5deb89b97 Mon Sep 17 00:00:00 2001
+From: Phil Elwell <phil at raspberrypi.org>
+Date: Fri, 5 Dec 2014 17:26:26 +0000
+Subject: [PATCH 051/127] fdt: Add support for the CONFIG_CMDLINE_EXTEND option
+
+---
+ drivers/of/fdt.c | 29 ++++++++++++++++++++++++-----
+ 1 file changed, 24 insertions(+), 5 deletions(-)
+
+--- a/drivers/of/fdt.c
++++ b/drivers/of/fdt.c
+@@ -954,22 +954,38 @@ int __init early_init_dt_scan_chosen(uns
+ 
+ 	/* Retrieve command line */
+ 	p = of_get_flat_dt_prop(node, "bootargs", &l);
+-	if (p != NULL && l > 0)
+-		strlcpy(data, p, min((int)l, COMMAND_LINE_SIZE));
+-	p = of_get_flat_dt_prop(node, "bootargs-append", &l);
+-	if (p != NULL && l > 0)
+-		strlcat(data, p, min_t(int, strlen(data) + (int)l, COMMAND_LINE_SIZE));
+ 
+ 	/*
+ 	 * CONFIG_CMDLINE is meant to be a default in case nothing else
+ 	 * managed to set the command line, unless CONFIG_CMDLINE_FORCE
+ 	 * is set in which case we override whatever was found earlier.
++	 *
++	 * However, it can be useful to be able to treat the default as
++	 * a starting point to be extended using CONFIG_CMDLINE_EXTEND.
+ 	 */
++	((char *)data)[0] = '\0';
++
+ #ifdef CONFIG_CMDLINE
+-#ifndef CONFIG_CMDLINE_FORCE
+-	if (!((char *)data)[0])
++	strlcpy(data, CONFIG_CMDLINE, COMMAND_LINE_SIZE);
++
++	if (p != NULL && l > 0)	{
++#if defined(CONFIG_CMDLINE_EXTEND)
++		int len = strlen(data);
++		if (len > 0) {
++			strlcat(data, " ", COMMAND_LINE_SIZE);
++			len++;
++		}
++		strlcpy((char *)data + len, p, min((int)l, COMMAND_LINE_SIZE - len));
++#elif defined(CONFIG_CMDLINE_FORCE)
++		pr_warning("Ignoring bootargs property (using the default kernel command line)\n");
++#else
++		/* Neither extend nor force - just override */
++		strlcpy(data, p, min((int)l, COMMAND_LINE_SIZE));
+ #endif
+-		strlcpy(data, CONFIG_CMDLINE, COMMAND_LINE_SIZE);
++	}
++#else /* CONFIG_CMDLINE */
++	if (p != NULL && l > 0)
++		strlcpy(data, p, min((int)l, COMMAND_LINE_SIZE));
+ #endif /* CONFIG_CMDLINE */
+ 
+ 	pr_debug("Command line is: %s\n", (char*)data);
diff --git a/target/linux/brcm2708/patches-4.4/0052-BCM2708-Add-core-Device-Tree-support.patch b/target/linux/brcm2708/patches-4.4/0052-BCM2708-Add-core-Device-Tree-support.patch
new file mode 100644
index 0000000..52bc6bd
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0052-BCM2708-Add-core-Device-Tree-support.patch
@@ -0,0 +1,4564 @@
+From 46e100d54a3bef01345aaffd1d722f6d2087a170 Mon Sep 17 00:00:00 2001
+From: notro <notro at tronnes.org>
+Date: Wed, 9 Jul 2014 14:46:08 +0200
+Subject: [PATCH 052/127] BCM2708: Add core Device Tree support
+
+Add the bare minimum needed to boot BCM2708 from a Device Tree.
+
+Signed-off-by: Noralf Tronnes <notro at tronnes.org>
+
+BCM2708: DT: change 'axi' nodename to 'soc'
+
+Change DT node named 'axi' to 'soc' so it matches ARCH_BCM2835.
+The VC4 bootloader fills in certain properties in the 'axi' subtree,
+but since this is part of an upstreaming effort, the name is changed.
+
+Signed-off-by: Noralf Tronnes notro at tronnes.org
+
+BCM2708_DT: Correct length of the peripheral space
+
+Use dts-dirs feature for overlays.
+
+The kernel makefiles have a dts-dirs target that is for vendor subdirectories.
+
+Using this fixes the install_dtbs target, which previously did not install the overlays.
+
+BCM270X_DT: configure I2S DMA channels
+
+Signed-off-by: Matthias Reichl <hias at horus.com>
+
+BCM270X_DT: switch to bcm2835-i2s
+
+I2S soundcard drivers with proper devicetree support (i.e. not linking
+to the cpu_dai/platform via name but to cpu/platform via of_node)
+will work out of the box without any modifications.
+
+When the kernel is compiled without devicetree support the platform
+code will instantiate the bcm2708-i2s driver and I2S soundcard drivers
+will link to it via name, as before.
+
+Signed-off-by: Matthias Reichl <hias at horus.com>
+
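As an aside (not part of the patch itself): the of_node linking described above looks roughly like the sketch below in an ASoC machine driver. The "i2s-controller" phandle property, the dummy codec and the driver name are placeholders; the point is only the cpu/platform of_node fields of the snd_soc_dai_link versus the name-based fallback.

#include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h>
#include <sound/soc.h>

static struct snd_soc_dai_link sketch_dai_link = {
	.name		= "sketch",
	.stream_name	= "sketch hifi",
	.codec_name	= "snd-soc-dummy",
	.codec_dai_name	= "snd-soc-dummy-dai",
};

static struct snd_soc_card sketch_card = {
	.name		= "sketch-card",
	.owner		= THIS_MODULE,
	.dai_link	= &sketch_dai_link,
	.num_links	= 1,
};

static int sketch_probe(struct platform_device *pdev)
{
	struct device_node *i2s_node;

	sketch_card.dev = &pdev->dev;

	/* DT boot: link to the I2S controller via its of_node */
	i2s_node = of_parse_phandle(pdev->dev.of_node, "i2s-controller", 0);
	if (i2s_node) {
		sketch_dai_link.cpu_of_node = i2s_node;
		sketch_dai_link.platform_of_node = i2s_node;
	} else {
		/* non-DT boot: fall back to linking by name */
		sketch_dai_link.cpu_dai_name = "bcm2708-i2s.0";
		sketch_dai_link.platform_name = "bcm2708-i2s.0";
	}

	return snd_soc_register_card(&sketch_card);
}

static int sketch_remove(struct platform_device *pdev)
{
	return snd_soc_unregister_card(&sketch_card);
}

static struct platform_driver sketch_driver = {
	.driver = { .name = "snd-sketch" },
	.probe  = sketch_probe,
	.remove = sketch_remove,
};
module_platform_driver(sketch_driver);
MODULE_LICENSE("GPL");
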
+SDIO-overlay: add poll_once-boolean parameter
+
+Add a parameter to toggle whether sdio-device-polling is
+done every second or once at boot-time.
+
+Signed-off-by: Patrick Boettcher <patrick.boettcher at posteo.de>
+
+BCM270X_DT: Make mmc overlay compatible with current firmware
+
+The original DT overlay logic followed a merge-then-patch procedure,
+i.e. the overlay was merged into the base DTB before any parameters
+were applied. This sequence has been changed to patch-then-merge, in
+order to support parameterised node names, and to protect against bad
+overlays. As a result, overrides (parameters) must only target labels
+in the overlay, but the overlay can obviously target nodes in the
+base DTB.
+
+mmc-overlay.dts (that switches back to the original mmc sdcard
+driver) is the only overlay violating that rule, and this patch
+fixes it.
+
+bcm270x_dt: Use the sdhost MMC controller by default
+
+The "mmc" overlay reverts to using the other controller.
+
+squash: Add cprman to dt
+
+BCM270X_DT: Use clk_core for I2C interfaces
+---
+ arch/arm/boot/dts/Makefile                         |  30 +
+ arch/arm/boot/dts/bcm2708-rpi-b-plus.dts           | 145 +++++
+ arch/arm/boot/dts/bcm2708-rpi-b.dts                | 135 +++++
+ arch/arm/boot/dts/bcm2708-rpi-cm.dts               | 102 ++++
+ arch/arm/boot/dts/bcm2708-rpi-cm.dtsi              |  40 ++
+ arch/arm/boot/dts/bcm2708.dtsi                     |  40 ++
+ arch/arm/boot/dts/bcm2708_common.dtsi              | 347 +++++++++++
+ arch/arm/boot/dts/bcm2709-rpi-2-b.dts              | 145 +++++
+ arch/arm/boot/dts/bcm2709.dtsi                     | 102 ++++
+ arch/arm/boot/dts/bcm2835-rpi-cm.dts               |  93 +++
+ arch/arm/boot/dts/bcm2835-rpi-cm.dtsi              |  30 +
+ arch/arm/boot/dts/overlays/Makefile                |  69 +++
+ arch/arm/boot/dts/overlays/README                  | 648 +++++++++++++++++++++
+ arch/arm/boot/dts/overlays/ads7846-overlay.dts     |  83 +++
+ .../dts/overlays/bmp085_i2c-sensor-overlay.dts     |  23 +
+ arch/arm/boot/dts/overlays/dht11-overlay.dts       |  39 ++
+ arch/arm/boot/dts/overlays/enc28j60-overlay.dts    |  50 ++
+ .../boot/dts/overlays/gpio-poweroff-overlay.dts    |  34 ++
+ .../boot/dts/overlays/hifiberry-amp-overlay.dts    |  39 ++
+ .../boot/dts/overlays/hifiberry-dac-overlay.dts    |  34 ++
+ .../dts/overlays/hifiberry-dacplus-overlay.dts     |  39 ++
+ .../boot/dts/overlays/hifiberry-digi-overlay.dts   |  39 ++
+ arch/arm/boot/dts/overlays/hy28a-overlay.dts       |  87 +++
+ arch/arm/boot/dts/overlays/hy28b-overlay.dts       | 142 +++++
+ arch/arm/boot/dts/overlays/i2c-rtc-overlay.dts     |  55 ++
+ arch/arm/boot/dts/overlays/i2s-mmap-overlay.dts    |  13 +
+ arch/arm/boot/dts/overlays/iqaudio-dac-overlay.dts |  39 ++
+ .../boot/dts/overlays/iqaudio-dacplus-overlay.dts  |  39 ++
+ arch/arm/boot/dts/overlays/lirc-rpi-overlay.dts    |  57 ++
+ .../arm/boot/dts/overlays/mcp2515-can0-overlay.dts |  69 +++
+ .../arm/boot/dts/overlays/mcp2515-can1-overlay.dts |  69 +++
+ arch/arm/boot/dts/overlays/mmc-overlay.dts         |  39 ++
+ arch/arm/boot/dts/overlays/mz61581-overlay.dts     | 111 ++++
+ arch/arm/boot/dts/overlays/piscreen-overlay.dts    |  96 +++
+ .../dts/overlays/pitft28-resistive-overlay.dts     | 115 ++++
+ arch/arm/boot/dts/overlays/pps-gpio-overlay.dts    |  34 ++
+ arch/arm/boot/dts/overlays/pwm-2chan-overlay.dts   |  46 ++
+ arch/arm/boot/dts/overlays/pwm-overlay.dts         |  42 ++
+ arch/arm/boot/dts/overlays/raspidac3-overlay.dts   |  45 ++
+ arch/arm/boot/dts/overlays/rpi-dac-overlay.dts     |  34 ++
+ arch/arm/boot/dts/overlays/rpi-display-overlay.dts |  82 +++
+ arch/arm/boot/dts/overlays/rpi-ft5406-overlay.dts  |  17 +
+ arch/arm/boot/dts/overlays/rpi-proto-overlay.dts   |  39 ++
+ arch/arm/boot/dts/overlays/rpi-sense-overlay.dts   |  47 ++
+ arch/arm/boot/dts/overlays/sdhost-overlay.dts      |  29 +
+ arch/arm/boot/dts/overlays/sdio-overlay.dts        |  32 +
+ arch/arm/boot/dts/overlays/smi-dev-overlay.dts     |  18 +
+ arch/arm/boot/dts/overlays/smi-nand-overlay.dts    |  69 +++
+ arch/arm/boot/dts/overlays/smi-overlay.dts         |  37 ++
+ .../boot/dts/overlays/spi-gpio35-39-overlay.dts    |  31 +
+ arch/arm/boot/dts/overlays/tinylcd35-overlay.dts   | 216 +++++++
+ arch/arm/boot/dts/overlays/uart1-overlay.dts       |  38 ++
+ arch/arm/boot/dts/overlays/vga666-overlay.dts      |  30 +
+ arch/arm/boot/dts/overlays/w1-gpio-overlay.dts     |  39 ++
+ .../boot/dts/overlays/w1-gpio-pullup-overlay.dts   |  41 ++
+ 55 files changed, 4203 insertions(+)
+ create mode 100644 arch/arm/boot/dts/bcm2708-rpi-b-plus.dts
+ create mode 100644 arch/arm/boot/dts/bcm2708-rpi-b.dts
+ create mode 100755 arch/arm/boot/dts/bcm2708-rpi-cm.dts
+ create mode 100644 arch/arm/boot/dts/bcm2708-rpi-cm.dtsi
+ create mode 100644 arch/arm/boot/dts/bcm2708.dtsi
+ create mode 100644 arch/arm/boot/dts/bcm2708_common.dtsi
+ create mode 100644 arch/arm/boot/dts/bcm2709-rpi-2-b.dts
+ create mode 100644 arch/arm/boot/dts/bcm2709.dtsi
+ create mode 100644 arch/arm/boot/dts/bcm2835-rpi-cm.dts
+ create mode 100644 arch/arm/boot/dts/bcm2835-rpi-cm.dtsi
+ create mode 100644 arch/arm/boot/dts/overlays/Makefile
+ create mode 100644 arch/arm/boot/dts/overlays/README
+ create mode 100644 arch/arm/boot/dts/overlays/ads7846-overlay.dts
+ create mode 100644 arch/arm/boot/dts/overlays/bmp085_i2c-sensor-overlay.dts
+ create mode 100644 arch/arm/boot/dts/overlays/dht11-overlay.dts
+ create mode 100644 arch/arm/boot/dts/overlays/enc28j60-overlay.dts
+ create mode 100644 arch/arm/boot/dts/overlays/gpio-poweroff-overlay.dts
+ create mode 100644 arch/arm/boot/dts/overlays/hifiberry-amp-overlay.dts
+ create mode 100644 arch/arm/boot/dts/overlays/hifiberry-dac-overlay.dts
+ create mode 100644 arch/arm/boot/dts/overlays/hifiberry-dacplus-overlay.dts
+ create mode 100644 arch/arm/boot/dts/overlays/hifiberry-digi-overlay.dts
+ create mode 100644 arch/arm/boot/dts/overlays/hy28a-overlay.dts
+ create mode 100644 arch/arm/boot/dts/overlays/hy28b-overlay.dts
+ create mode 100644 arch/arm/boot/dts/overlays/i2c-rtc-overlay.dts
+ create mode 100644 arch/arm/boot/dts/overlays/i2s-mmap-overlay.dts
+ create mode 100644 arch/arm/boot/dts/overlays/iqaudio-dac-overlay.dts
+ create mode 100644 arch/arm/boot/dts/overlays/iqaudio-dacplus-overlay.dts
+ create mode 100644 arch/arm/boot/dts/overlays/lirc-rpi-overlay.dts
+ create mode 100755 arch/arm/boot/dts/overlays/mcp2515-can0-overlay.dts
+ create mode 100644 arch/arm/boot/dts/overlays/mcp2515-can1-overlay.dts
+ create mode 100644 arch/arm/boot/dts/overlays/mmc-overlay.dts
+ create mode 100644 arch/arm/boot/dts/overlays/mz61581-overlay.dts
+ create mode 100644 arch/arm/boot/dts/overlays/piscreen-overlay.dts
+ create mode 100644 arch/arm/boot/dts/overlays/pitft28-resistive-overlay.dts
+ create mode 100644 arch/arm/boot/dts/overlays/pps-gpio-overlay.dts
+ create mode 100644 arch/arm/boot/dts/overlays/pwm-2chan-overlay.dts
+ create mode 100644 arch/arm/boot/dts/overlays/pwm-overlay.dts
+ create mode 100644 arch/arm/boot/dts/overlays/raspidac3-overlay.dts
+ create mode 100644 arch/arm/boot/dts/overlays/rpi-dac-overlay.dts
+ create mode 100644 arch/arm/boot/dts/overlays/rpi-display-overlay.dts
+ create mode 100644 arch/arm/boot/dts/overlays/rpi-ft5406-overlay.dts
+ create mode 100644 arch/arm/boot/dts/overlays/rpi-proto-overlay.dts
+ create mode 100644 arch/arm/boot/dts/overlays/rpi-sense-overlay.dts
+ create mode 100644 arch/arm/boot/dts/overlays/sdhost-overlay.dts
+ create mode 100644 arch/arm/boot/dts/overlays/sdio-overlay.dts
+ create mode 100644 arch/arm/boot/dts/overlays/smi-dev-overlay.dts
+ create mode 100644 arch/arm/boot/dts/overlays/smi-nand-overlay.dts
+ create mode 100644 arch/arm/boot/dts/overlays/smi-overlay.dts
+ create mode 100644 arch/arm/boot/dts/overlays/spi-gpio35-39-overlay.dts
+ create mode 100644 arch/arm/boot/dts/overlays/tinylcd35-overlay.dts
+ create mode 100644 arch/arm/boot/dts/overlays/uart1-overlay.dts
+ create mode 100644 arch/arm/boot/dts/overlays/vga666-overlay.dts
+ create mode 100644 arch/arm/boot/dts/overlays/w1-gpio-overlay.dts
+ create mode 100644 arch/arm/boot/dts/overlays/w1-gpio-pullup-overlay.dts
+
+--- a/arch/arm/boot/dts/Makefile
++++ b/arch/arm/boot/dts/Makefile
+@@ -1,5 +1,25 @@
+ ifeq ($(CONFIG_OF),y)
+ 
++dtb-$(CONFIG_ARCH_BCM2708) += bcm2708-rpi-b.dtb
++dtb-$(CONFIG_ARCH_BCM2708) += bcm2708-rpi-b-plus.dtb
++dtb-$(CONFIG_ARCH_BCM2708) += bcm2708-rpi-cm.dtb
++dtb-$(CONFIG_ARCH_BCM2835) += bcm2835-rpi-cm.dtb
++dtb-$(CONFIG_ARCH_BCM2709) += bcm2709-rpi-2-b.dtb
++
++# Raspberry Pi
++ifeq ($(CONFIG_ARCH_BCM2708),y)
++   RPI_DT_OVERLAYS=y
++endif
++ifeq ($(CONFIG_ARCH_BCM2709),y)
++   RPI_DT_OVERLAYS=y
++endif
++ifeq ($(CONFIG_ARCH_BCM2835),y)
++   RPI_DT_OVERLAYS=y
++endif
++ifeq ($(RPI_DT_OVERLAYS),y)
++    dts-dirs += overlays
++endif
++
+ dtb-$(CONFIG_ARCH_ALPINE) += \
+ 	alpine-db.dtb
+ dtb-$(CONFIG_MACH_ASM9260) += \
+@@ -777,10 +797,20 @@ dtb-$(CONFIG_ARCH_MEDIATEK) += \
+ 	mt8127-moose.dtb \
+ 	mt8135-evbp1.dtb
+ dtb-$(CONFIG_ARCH_ZX) += zx296702-ad1.dtb
++
++targets += dtbs dtbs_install
++targets += $(dtb-y)
++
+ endif
+ 
+ dtstree		:= $(srctree)/$(src)
+ dtb-$(CONFIG_OF_ALL_DTBS) := $(patsubst $(dtstree)/%.dts,%.dtb, $(wildcard $(dtstree)/*.dts))
+ 
+ always		:= $(dtb-y)
++subdir-y	:= $(dts-dirs)
+ clean-files	:= *.dtb
++
++# Enable fixups to support overlays on BCM2708 platforms
++ifeq ($(RPI_DT_OVERLAYS),y)
++	DTC_FLAGS ?= -@
++endif
+--- /dev/null
++++ b/arch/arm/boot/dts/bcm2708-rpi-b-plus.dts
+@@ -0,0 +1,145 @@
++/dts-v1/;
++
++#include "bcm2708.dtsi"
++
++/ {
++	compatible = "brcm,bcm2708";
++	model = "Raspberry Pi Model B+";
++};
++
++&gpio {
++	sdhost_pins: sdhost_pins {
++		brcm,pins = <48 49 50 51 52 53>;
++		brcm,function = <4>; /* alt0 */
++	};
++
++	spi0_pins: spi0_pins {
++		brcm,pins = <9 10 11>;
++		brcm,function = <4>; /* alt0 */
++	};
++
++	spi0_cs_pins: spi0_cs_pins {
++		brcm,pins = <8 7>;
++		brcm,function = <1>; /* output */
++	};
++
++	i2c0_pins: i2c0 {
++		brcm,pins = <0 1>;
++		brcm,function = <4>;
++	};
++
++	i2c1_pins: i2c1 {
++		brcm,pins = <2 3>;
++		brcm,function = <4>;
++	};
++
++	i2s_pins: i2s {
++		brcm,pins = <18 19 20 21>;
++		brcm,function = <4>; /* alt0 */
++	};
++};
++
++&sdhost {
++	pinctrl-names = "default";
++	pinctrl-0 = <&sdhost_pins>;
++	bus-width = <4>;
++	status = "okay";
++};
++
++&fb {
++	status = "okay";
++};
++
++&uart0 {
++	status = "okay";
++};
++
++&spi0 {
++	pinctrl-names = "default";
++	pinctrl-0 = <&spi0_pins &spi0_cs_pins>;
++	cs-gpios = <&gpio 8 1>, <&gpio 7 1>;
++
++	spidev@0{
++		compatible = "spidev";
++		reg = <0>;	/* CE0 */
++		#address-cells = <1>;
++		#size-cells = <0>;
++		spi-max-frequency = <500000>;
++	};
++
++	spidev@1{
++		compatible = "spidev";
++		reg = <1>;	/* CE1 */
++		#address-cells = <1>;
++		#size-cells = <0>;
++		spi-max-frequency = <500000>;
++	};
++};
++
++&i2c0 {
++	pinctrl-names = "default";
++	pinctrl-0 = <&i2c0_pins>;
++	clock-frequency = <100000>;
++};
++
++&i2c1 {
++	pinctrl-names = "default";
++	pinctrl-0 = <&i2c1_pins>;
++	clock-frequency = <100000>;
++};
++
++&i2c2 {
++	clock-frequency = <100000>;
++};
++
++&i2s {
++	#sound-dai-cells = <0>;
++	pinctrl-names = "default";
++	pinctrl-0 = <&i2s_pins>;
++};
++
++&random {
++	status = "okay";
++};
++
++&leds {
++	act_led: act {
++		label = "led0";
++		linux,default-trigger = "mmc0";
++		gpios = <&gpio 47 0>;
++	};
++
++	pwr_led: pwr {
++		label = "led1";
++		linux,default-trigger = "input";
++		gpios = <&gpio 35 0>;
++	};
++};
++
++/ {
++	__overrides__ {
++		uart0 = <&uart0>,"status";
++		uart0_clkrate = <&clk_uart0>,"clock-frequency:0";
++		i2s = <&i2s>,"status";
++		spi = <&spi0>,"status";
++		i2c0 = <&i2c0>,"status";
++		i2c1 = <&i2c1>,"status";
++		i2c2_iknowwhatimdoing = <&i2c2>,"status";
++		i2c0_baudrate = <&i2c0>,"clock-frequency:0";
++		i2c1_baudrate = <&i2c1>,"clock-frequency:0";
++		i2c2_baudrate = <&i2c2>,"clock-frequency:0";
++		core_freq = <&clk_core>,"clock-frequency:0";
++
++		act_led_gpio = <&act_led>,"gpios:4";
++		act_led_activelow = <&act_led>,"gpios:8";
++		act_led_trigger = <&act_led>,"linux,default-trigger";
++
++		pwr_led_gpio = <&pwr_led>,"gpios:4";
++		pwr_led_activelow = <&pwr_led>,"gpios:8";
++		pwr_led_trigger = <&pwr_led>,"linux,default-trigger";
++
++		audio = <&audio>,"status";
++		watchdog = <&watchdog>,"status";
++		random = <&random>,"status";
++	};
++};
+--- /dev/null
++++ b/arch/arm/boot/dts/bcm2708-rpi-b.dts
+@@ -0,0 +1,135 @@
++/dts-v1/;
++
++#include "bcm2708.dtsi"
++
++/ {
++	compatible = "brcm,bcm2708";
++	model = "Raspberry Pi Model B";
++};
++
++&gpio {
++	sdhost_pins: sdhost_pins {
++		brcm,pins = <48 49 50 51 52 53>;
++		brcm,function = <4>; /* alt0 */
++	};
++
++	spi0_pins: spi0_pins {
++		brcm,pins = <9 10 11>;
++		brcm,function = <4>; /* alt0 */
++	};
++
++	spi0_cs_pins: spi0_cs_pins {
++		brcm,pins = <8 7>;
++		brcm,function = <1>; /* output */
++	};
++
++	i2c0_pins: i2c0 {
++		brcm,pins = <0 1>;
++		brcm,function = <4>;
++	};
++
++	i2c1_pins: i2c1 {
++		brcm,pins = <2 3>;
++		brcm,function = <4>;
++	};
++
++	i2s_pins: i2s {
++		brcm,pins = <28 29 30 31>;
++		brcm,function = <6>; /* alt2 */
++	};
++};
++
++&sdhost {
++	pinctrl-names = "default";
++	pinctrl-0 = <&sdhost_pins>;
++	bus-width = <4>;
++	status = "okay";
++};
++
++&fb {
++	status = "okay";
++};
++
++&uart0 {
++	status = "okay";
++};
++
++&spi0 {
++	pinctrl-names = "default";
++	pinctrl-0 = <&spi0_pins &spi0_cs_pins>;
++	cs-gpios = <&gpio 8 1>, <&gpio 7 1>;
++
++	spidev@0{
++		compatible = "spidev";
++		reg = <0>;	/* CE0 */
++		#address-cells = <1>;
++		#size-cells = <0>;
++		spi-max-frequency = <500000>;
++	};
++
++	spidev@1{
++		compatible = "spidev";
++		reg = <1>;	/* CE1 */
++		#address-cells = <1>;
++		#size-cells = <0>;
++		spi-max-frequency = <500000>;
++	};
++};
++
++&i2c0 {
++	pinctrl-names = "default";
++	pinctrl-0 = <&i2c0_pins>;
++	clock-frequency = <100000>;
++};
++
++&i2c1 {
++	pinctrl-names = "default";
++	pinctrl-0 = <&i2c1_pins>;
++	clock-frequency = <100000>;
++};
++
++&i2c2 {
++	clock-frequency = <100000>;
++};
++
++&i2s {
++	#sound-dai-cells = <0>;
++	pinctrl-names = "default";
++	pinctrl-0 = <&i2s_pins>;
++};
++
++&random {
++	status = "okay";
++};
++
++&leds {
++	act_led: act {
++		label = "led0";
++		linux,default-trigger = "mmc0";
++		gpios = <&gpio 16 1>;
++	};
++};
++
++/ {
++	__overrides__ {
++		uart0 = <&uart0>,"status";
++		uart0_clkrate = <&clk_uart0>,"clock-frequency:0";
++		i2s = <&i2s>,"status";
++		spi = <&spi0>,"status";
++		i2c0 = <&i2c0>,"status";
++		i2c1 = <&i2c1>,"status";
++		i2c2_iknowwhatimdoing = <&i2c2>,"status";
++		i2c0_baudrate = <&i2c0>,"clock-frequency:0";
++		i2c1_baudrate = <&i2c1>,"clock-frequency:0";
++		i2c2_baudrate = <&i2c2>,"clock-frequency:0";
++		core_freq = <&clk_core>,"clock-frequency:0";
++
++		act_led_gpio = <&act_led>,"gpios:4";
++		act_led_activelow = <&act_led>,"gpios:8";
++		act_led_trigger = <&act_led>,"linux,default-trigger";
++
++		audio = <&audio>,"status";
++		watchdog = <&watchdog>,"status";
++		random = <&random>,"status";
++	};
++};
+--- /dev/null
++++ b/arch/arm/boot/dts/bcm2708-rpi-cm.dts
+@@ -0,0 +1,102 @@
++/dts-v1/;
++
++#include "bcm2708-rpi-cm.dtsi"
++
++/ {
++	model = "Raspberry Pi Compute Module";
++};
++
++&uart0 {
++	status = "okay";
++};
++
++&gpio {
++	spi0_pins: spi0_pins {
++		brcm,pins = <9 10 11>;
++		brcm,function = <4>; /* alt0 */
++	};
++
++	spi0_cs_pins: spi0_cs_pins {
++		brcm,pins = <8 7>;
++		brcm,function = <1>; /* output */
++	};
++
++	i2c0_pins: i2c0 {
++		brcm,pins = <0 1>;
++		brcm,function = <4>;
++	};
++
++	i2c1_pins: i2c1 {
++		brcm,pins = <2 3>;
++		brcm,function = <4>;
++	};
++
++	i2s_pins: i2s {
++		brcm,pins = <18 19 20 21>;
++		brcm,function = <4>; /* alt0 */
++	};
++};
++
++&spi0 {
++	pinctrl-names = "default";
++	pinctrl-0 = <&spi0_pins &spi0_cs_pins>;
++	cs-gpios = <&gpio 8 1>, <&gpio 7 1>;
++
++	spidev at 0{
++		compatible = "spidev";
++		reg = <0>;	/* CE0 */
++		#address-cells = <1>;
++		#size-cells = <0>;
++		spi-max-frequency = <500000>;
++	};
++
++	spidev at 1{
++		compatible = "spidev";
++		reg = <1>;	/* CE1 */
++		#address-cells = <1>;
++		#size-cells = <0>;
++		spi-max-frequency = <500000>;
++	};
++};
++
++&i2c0 {
++	pinctrl-names = "default";
++	pinctrl-0 = <&i2c0_pins>;
++	clock-frequency = <100000>;
++};
++
++&i2c1 {
++	pinctrl-names = "default";
++	pinctrl-0 = <&i2c1_pins>;
++	clock-frequency = <100000>;
++};
++
++&i2c2 {
++	clock-frequency = <100000>;
++};
++
++&i2s {
++	#sound-dai-cells = <0>;
++	pinctrl-names = "default";
++	pinctrl-0 = <&i2s_pins>;
++};
++
++&random {
++	status = "okay";
++};
++
++/ {
++	__overrides__ {
++		uart0 = <&uart0>,"status";
++		uart0_clkrate = <&clk_uart0>,"clock-frequency:0";
++		i2s = <&i2s>,"status";
++		spi = <&spi0>,"status";
++		i2c0 = <&i2c0>,"status";
++		i2c1 = <&i2c1>,"status";
++		i2c2_iknowwhatimdoing = <&i2c2>,"status";
++		i2c0_baudrate = <&i2c0>,"clock-frequency:0";
++		i2c1_baudrate = <&i2c1>,"clock-frequency:0";
++		i2c2_baudrate = <&i2c2>,"clock-frequency:0";
++		core_freq = <&clk_core>,"clock-frequency:0";
++	};
++};
+--- /dev/null
++++ b/arch/arm/boot/dts/bcm2708-rpi-cm.dtsi
+@@ -0,0 +1,40 @@
++#include "bcm2708.dtsi"
++
++&gpio {
++	sdhost_pins: sdhost_pins {
++		brcm,pins = <48 49 50 51 52 53>;
++		brcm,function = <4>; /* alt0 */
++	};
++};
++
++&leds {
++	act_led: act {
++		label = "led0";
++		linux,default-trigger = "mmc0";
++		gpios = <&gpio 47 0>;
++	};
++};
++
++&sdhost {
++	pinctrl-names = "default";
++	pinctrl-0 = <&sdhost_pins>;
++	bus-width = <4>;
++	non-removable;
++	status = "okay";
++};
++
++&fb {
++	status = "okay";
++};
++
++/ {
++	__overrides__ {
++		act_led_gpio = <&act_led>,"gpios:4";
++		act_led_activelow = <&act_led>,"gpios:8";
++		act_led_trigger = <&act_led>,"linux,default-trigger";
++
++		audio = <&audio>,"status";
++		watchdog = <&watchdog>,"status";
++		random = <&random>,"status";
++	};
++};
+--- /dev/null
++++ b/arch/arm/boot/dts/bcm2708.dtsi
+@@ -0,0 +1,40 @@
++#include "bcm2708_common.dtsi"
++
++/ {
++	compatible = "brcm,bcm2708";
++	model = "BCM2708";
++
++	chosen {
++		/* No padding required - the boot loader can do that. */
++		bootargs = "";
++	};
++
++	soc {
++		ranges = <0x7e000000 0x20000000 0x01000000>;
++
++		timer at 7e003000 {
++			compatible = "brcm,bcm2835-system-timer";
++			reg = <0x7e003000 0x1000>;
++			interrupts = <1 0>, <1 1>, <1 2>, <1 3>;
++			clock-frequency = <1000000>;
++		};
++
++		arm-pmu {
++			compatible = "arm,arm1176-pmu";
++		};
++
++		gpiomem {
++			compatible = "brcm,bcm2835-gpiomem";
++			reg = <0x7e200000 0x1000>;
++			status = "okay";
++		};
++	};
++};
++
++&intc {
++	compatible = "brcm,bcm2835-armctrl-ic";
++};
++
++&watchdog {
++	status = "okay";
++};
+--- /dev/null
++++ b/arch/arm/boot/dts/bcm2708_common.dtsi
+@@ -0,0 +1,347 @@
++#include "skeleton.dtsi"
++
++/ {
++	interrupt-parent = <&intc>;
++
++	aliases {
++		audio = &audio;
++		sound = &sound;
++		soc = &soc;
++		dma = &dma;
++		intc = &intc;
++		watchdog = &watchdog;
++		random = &random;
++		mailbox = &mailbox;
++		gpio = &gpio;
++		uart0 = &uart0;
++		sdhost = &sdhost;
++		i2s  = &i2s;
++		spi0 = &spi0;
++		i2c0 = &i2c0;
++		uart1 = &uart1;
++		mmc = &mmc;
++		i2c1 = &i2c1;
++		i2c2 = &i2c2;
++		usb = &usb;
++		leds = &leds;
++		fb = &fb;
++		vchiq = &vchiq;
++		thermal = &thermal;
++		clocks = &clocks;
++	};
++
++	/* Onboard audio */
++	audio: audio {
++		compatible = "brcm,bcm2835-audio";
++		brcm,pwm-channels = <8>;
++		status = "disabled";
++	};
++
++	/* External sound card */
++	sound: sound {
++	};
++
++	soc: soc {
++		compatible = "simple-bus";
++		#address-cells = <1>;
++		#size-cells = <1>;
++
++		dma: dma at 7e007000 {
++			compatible = "brcm,bcm2835-dma";
++			reg = <0x7e007000 0xf00>;
++			interrupts = <1 16>,
++				     <1 17>,
++				     <1 18>,
++				     <1 19>,
++				     <1 20>,
++				     <1 21>,
++				     <1 22>,
++				     <1 23>,
++				     <1 24>,
++				     <1 25>,
++				     <1 26>,
++				     <1 27>;
++
++			#dma-cells = <1>;
++			brcm,dma-channel-mask = <0x0f35>;
++		};
++
++		intc: interrupt-controller at 7e00b200 {
++			compatible = "brcm,bcm2708-armctrl-ic";
++			reg = <0x7e00b200 0x200>;
++			interrupt-controller;
++			#interrupt-cells = <2>;
++		};
++
++		mailbox: mailbox at 7e00b800 {
++			compatible = "brcm,bcm2835-mbox";
++			reg = <0x7e00b880 0x40>;
++			interrupts = <0 1>;
++			#mbox-cells = <0>;
++		};
++
++		watchdog: watchdog at 7e100000 {
++			compatible = "brcm,bcm2835-pm-wdt";
++			reg = <0x7e100000 0x28>;
++			status = "disabled";
++		};
++
++		cprman: cprman at 7e101000 {
++			compatible = "brcm,bcm2835-cprman";
++			#clock-cells = <1>;
++			reg = <0x7e101000 0x2000>;
++
++			/* CPRMAN derives everything from the platform's
++			 * oscillator.
++			 */
++			clocks = <&clk_osc>;
++			status = "disabled";
++		};
++
++		random: rng at 7e104000 {
++			compatible = "brcm,bcm2835-rng";
++			reg = <0x7e104000 0x10>;
++			status = "disabled";
++		};
++
++		gpio: gpio at 7e200000 {
++			compatible = "brcm,bcm2835-gpio";
++			reg = <0x7e200000 0xb4>;
++			interrupts = <2 17>, <2 18>;
++
++			gpio-controller;
++			#gpio-cells = <2>;
++
++			interrupt-controller;
++			#interrupt-cells = <2>;
++		};
++
++		uart0: uart at 7e201000 {
++			compatible = "arm,pl011", "arm,primecell";
++			reg = <0x7e201000 0x1000>;
++			interrupts = <2 25>;
++			clocks = <&clk_uart0 &clk_apb_p>;
++			clock-names = "uartclk","apb_pclk";
++			arm,primecell-periphid = <0x00241011>; // For an explanation, see
++			// https://github.com/raspberrypi/linux/commit/13731d862cf5219216533a3b0de052cee4cc5038
++			status = "disabled";
++		};
++
++		sdhost: sdhost at 7e202000 {
++			compatible = "brcm,bcm2835-sdhost";
++			reg = <0x7e202000 0x100>;
++			interrupts = <2 24>;
++			clocks = <&clk_core>;
++			dmas = <&dma 13>,
++			       <&dma 13>;
++			dma-names = "tx", "rx";
++			brcm,pio-limit = <1>;
++			status = "disabled";
++		};
++
++		i2s: i2s at 7e203000 {
++			compatible = "brcm,bcm2835-i2s";
++			reg = <0x7e203000 0x24>,
++			      <0x7e101098 0x08>;
++
++			dmas = <&dma 2>, <&dma 3>;
++			dma-names = "tx", "rx";
++			status = "disabled";
++		};
++
++		spi0: spi at 7e204000 {
++			compatible = "brcm,bcm2835-spi";
++			reg = <0x7e204000 0x1000>;
++			interrupts = <2 22>;
++			clocks = <&clk_core>;
++			#address-cells = <1>;
++			#size-cells = <0>;
++			status = "disabled";
++			/* the dma channels */
++			dmas = <&dma 6>, <&dma 7>;
++			dma-names = "tx", "rx";
++			/* the chipselects used - <0> means native GPIO
++			 * add more gpios if necessary as <&gpio 6 1>
++			 * (but do not forget to make them output!)
++			 */
++			cs-gpios = <0>, <0>;
++		};
++
++		i2c0: i2c at 7e205000 {
++			compatible = "brcm,bcm2708-i2c";
++			reg = <0x7e205000 0x1000>;
++			interrupts = <2 21>;
++			clocks = <&clk_core>;
++			#address-cells = <1>;
++			#size-cells = <0>;
++			status = "disabled";
++		};
++
++		pwm: pwm at 7e20c000 {
++			compatible = "brcm,bcm2835-pwm";
++			reg = <0x7e20c000 0x28>;
++			clocks = <&clk_pwm>;
++			#pwm-cells = <2>;
++			status = "disabled";
++		};
++
++		uart1: uart at 7e215040 {
++			compatible = "brcm,bcm2835-aux-uart", "ns16550";
++			reg = <0x7e215040 0x40>;
++			interrupts = <1 29>;
++			clocks = <&clk_uart1>;
++			reg-shift = <2>;
++			no-loopback-test;
++			status = "disabled";
++	        };
++
++		mmc: mmc at 7e300000 {
++			compatible = "brcm,bcm2835-mmc";
++			reg = <0x7e300000 0x100>;
++			interrupts = <2 30>;
++			clocks = <&clk_mmc>;
++			dmas = <&dma 11>,
++			       <&dma 11>;
++			dma-names = "tx", "rx";
++			status = "disabled";
++		};
++
++		i2c1: i2c at 7e804000 {
++			compatible = "brcm,bcm2708-i2c";
++			reg = <0x7e804000 0x1000>;
++			interrupts = <2 21>;
++			clocks = <&clk_core>;
++			#address-cells = <1>;
++			#size-cells = <0>;
++			status = "disabled";
++		};
++
++		i2c2: i2c at 7e805000 {
++			// Beware - this is shared with the HDMI module.
++			// Careless use may break (really) your display.
++			// Caveat emptor.
++			compatible = "brcm,bcm2708-i2c";
++			reg = <0x7e805000 0x1000>;
++			interrupts = <2 21>;
++			clocks = <&clk_core>;
++			#address-cells = <1>;
++			#size-cells = <0>;
++			status = "disabled";
++		};
++
++		smi: smi at 7e600000 {
++			compatible = "brcm,bcm2835-smi";
++			reg = <0x7e600000 0x44>, <0x7e1010b0 0x8>;
++			interrupts = <2 16>;
++			brcm,smi-clock-source = <6>;
++			brcm,smi-clock-divisor = <4>;
++			dmas = <&dma 4>;
++			dma-names = "rx-tx";
++			status = "disabled";
++		};
++
++		usb: usb at 7e980000 {
++			compatible = "brcm,bcm2708-usb";
++			reg = <0x7e980000 0x10000>,
++			      <0x7e006000 0x1000>;
++			interrupts = <2 0>,
++				     <1 9>;
++		};
++
++		firmware: firmware {
++			compatible = "raspberrypi,bcm2835-firmware";
++			mboxes = <&mailbox>;
++		};
++
++		leds: leds {
++			compatible = "gpio-leds";
++		};
++
++		fb: fb {
++			compatible = "brcm,bcm2708-fb";
++			firmware = <&firmware>;
++			status = "disabled";
++		};
++
++		vchiq: vchiq {
++			compatible = "brcm,bcm2835-vchiq";
++			reg = <0x7e00b840 0xf>;
++			interrupts = <0 2>;
++			cache-line-size = <32>;
++			firmware = <&firmware>;
++		};
++
++		thermal: thermal {
++			compatible = "brcm,bcm2835-thermal";
++			firmware = <&firmware>;
++		};
++	};
++
++	clocks: clocks {
++		compatible = "simple-bus";
++		#address-cells = <1>;
++		#size-cells = <0>;
++
++		clk_core: clock at 0 {
++			compatible = "fixed-clock";
++			reg = <0>;
++			#clock-cells = <0>;
++			clock-output-names = "core";
++			clock-frequency = <250000000>;
++		};
++
++		clk_mmc: clock at 1 {
++			compatible = "fixed-clock";
++			reg = <1>;
++			#clock-cells = <0>;
++			clock-output-names = "mmc";
++			clock-frequency = <250000000>;
++		};
++
++		clk_uart0: clock at 2 {
++			compatible = "fixed-clock";
++			reg = <2>;
++			#clock-cells = <0>;
++			clock-output-names = "uart0_pclk";
++			clock-frequency = <3000000>;
++		};
++
++		clk_apb_p: clock at 3 {
++			compatible = "fixed-clock";
++			reg = <3>;
++			#clock-cells = <0>;
++			clock-output-names = "apb_pclk";
++			clock-frequency = <126000000>;
++		};
++
++		clk_pwm: clock at 4 {
++			compatible = "fixed-clock";
++			reg = <4>;
++			#clock-cells = <0>;
++			clock-output-names = "pwm";
++			clock-frequency = <100000000>;
++		};
++
++		clk_uart1: clock at 5 {
++			compatible = "fixed-factor-clock";
++			reg = <5>;
++			clocks = <&clk_core>;
++			#clock-cells = <0>;
++			clock-div = <1>;
++			clock-mult = <2>;
++		};
++
++		/* The oscillator is the root of the clock tree. */
++		clk_osc: clock at 6 {
++			compatible = "fixed-clock";
++			reg = <6>;
++			#clock-cells = <0>;
++			clock-output-names = "osc";
++			clock-frequency = <19200000>;
++		};
++	};
++
++	__overrides__ {
++		cache_line_size = <&vchiq>, "cache-line-size:0";
++	};
++};
+--- /dev/null
++++ b/arch/arm/boot/dts/bcm2709-rpi-2-b.dts
+@@ -0,0 +1,145 @@
++/dts-v1/;
++
++#include "bcm2709.dtsi"
++
++/ {
++	compatible = "brcm,bcm2709";
++	model = "Raspberry Pi 2 Model B";
++};
++
++&gpio {
++	sdhost_pins: sdhost_pins {
++		brcm,pins = <48 49 50 51 52 53>;
++		brcm,function = <4>; /* alt0 */
++	};
++
++	spi0_pins: spi0_pins {
++		brcm,pins = <9 10 11>;
++		brcm,function = <4>; /* alt0 */
++	};
++
++	spi0_cs_pins: spi0_cs_pins {
++		brcm,pins = <8 7>;
++		brcm,function = <1>; /* output */
++	};
++
++	i2c0_pins: i2c0 {
++		brcm,pins = <0 1>;
++		brcm,function = <4>;
++	};
++
++	i2c1_pins: i2c1 {
++		brcm,pins = <2 3>;
++		brcm,function = <4>;
++	};
++
++	i2s_pins: i2s {
++		brcm,pins = <18 19 20 21>;
++		brcm,function = <4>; /* alt0 */
++	};
++};
++
++&sdhost {
++	pinctrl-names = "default";
++	pinctrl-0 = <&sdhost_pins>;
++	bus-width = <4>;
++	status = "okay";
++};
++
++&fb {
++	status = "okay";
++};
++
++&uart0 {
++	status = "okay";
++};
++
++&spi0 {
++	pinctrl-names = "default";
++	pinctrl-0 = <&spi0_pins &spi0_cs_pins>;
++	cs-gpios = <&gpio 8 1>, <&gpio 7 1>;
++
++	spidev at 0{
++		compatible = "spidev";
++		reg = <0>;	/* CE0 */
++		#address-cells = <1>;
++		#size-cells = <0>;
++		spi-max-frequency = <500000>;
++	};
++
++	spidev at 1{
++		compatible = "spidev";
++		reg = <1>;	/* CE1 */
++		#address-cells = <1>;
++		#size-cells = <0>;
++		spi-max-frequency = <500000>;
++	};
++};
++
++&i2c0 {
++	pinctrl-names = "default";
++	pinctrl-0 = <&i2c0_pins>;
++	clock-frequency = <100000>;
++};
++
++&i2c1 {
++	pinctrl-names = "default";
++	pinctrl-0 = <&i2c1_pins>;
++	clock-frequency = <100000>;
++};
++
++&i2c2 {
++	clock-frequency = <100000>;
++};
++
++&i2s {
++	#sound-dai-cells = <0>;
++	pinctrl-names = "default";
++	pinctrl-0 = <&i2s_pins>;
++};
++
++&random {
++	status = "okay";
++};
++
++&leds {
++	act_led: act {
++		label = "led0";
++		linux,default-trigger = "mmc0";
++		gpios = <&gpio 47 0>;
++	};
++
++	pwr_led: pwr {
++		label = "led1";
++		linux,default-trigger = "input";
++		gpios = <&gpio 35 0>;
++	};
++};
++
++/ {
++	__overrides__ {
++		uart0 = <&uart0>,"status";
++		uart0_clkrate = <&clk_uart0>,"clock-frequency:0";
++		i2s = <&i2s>,"status";
++		spi = <&spi0>,"status";
++		i2c0 = <&i2c0>,"status";
++		i2c1 = <&i2c1>,"status";
++		i2c2_iknowwhatimdoing = <&i2c2>,"status";
++		i2c0_baudrate = <&i2c0>,"clock-frequency:0";
++		i2c1_baudrate = <&i2c1>,"clock-frequency:0";
++		i2c2_baudrate = <&i2c2>,"clock-frequency:0";
++		core_freq = <&clk_core>,"clock-frequency:0";
++
++		act_led_gpio = <&act_led>,"gpios:4";
++		act_led_activelow = <&act_led>,"gpios:8";
++		act_led_trigger = <&act_led>,"linux,default-trigger";
++
++		pwr_led_gpio = <&pwr_led>,"gpios:4";
++		pwr_led_activelow = <&pwr_led>,"gpios:8";
++		pwr_led_trigger = <&pwr_led>,"linux,default-trigger";
++
++		audio = <&audio>,"status";
++		watchdog = <&watchdog>,"status";
++		random = <&random>,"status";
++	};
++};
+--- /dev/null
++++ b/arch/arm/boot/dts/bcm2709.dtsi
+@@ -0,0 +1,102 @@
++#include "bcm2708_common.dtsi"
++
++/ {
++	compatible = "brcm,bcm2709";
++	model = "BCM2709";
++
++	chosen {
++		/* No padding required - the boot loader can do that. */
++		bootargs = "";
++	};
++
++	soc {
++		ranges = <0x7e000000 0x3f000000 0x01000000>,
++		         <0x40000000 0x40000000 0x00040000>;
++
++		local_intc: local_intc {
++			compatible = "brcm,bcm2836-l1-intc";
++			reg = <0x40000000 0x100>;
++			interrupt-controller;
++			#interrupt-cells = <1>;
++			interrupt-parent = <&local_intc>;
++		};
++
++		arm-pmu {
++			compatible = "arm,cortex-a7-pmu";
++			interrupt-parent = <&local_intc>;
++			interrupts = <9>;
++		};
++
++		gpiomem {
++			compatible = "brcm,bcm2835-gpiomem";
++			reg = <0x7e200000 0x1000>;
++			status = "okay";
++		};
++
++		timer {
++			compatible = "arm,armv7-timer";
++			clock-frequency = <19200000>;
++			interrupt-parent = <&local_intc>;
++			interrupts = <0>, // PHYS_SECURE_PPI
++				     <1>, // PHYS_NONSECURE_PPI
++				     <3>, // VIRT_PPI
++				     <2>; // HYP_PPI
++			always-on;
++		};
++
++		syscon at 40000000 {
++			compatible = "brcm,bcm2836-arm-local", "syscon";
++			reg = <0x40000000 0x100>;
++		};
++	};
++
++	cpus: cpus {
++		#address-cells = <1>;
++		#size-cells = <0>;
++
++		v7_cpu0: cpu at 0 {
++			device_type = "cpu";
++			compatible = "arm,cortex-a7";
++			reg = <0xf00>;
++			clock-frequency = <800000000>;
++		};
++
++		v7_cpu1: cpu at 1 {
++			device_type = "cpu";
++			compatible = "arm,cortex-a7";
++			reg = <0xf01>;
++			clock-frequency = <800000000>;
++		};
++
++		v7_cpu2: cpu at 2 {
++			device_type = "cpu";
++			compatible = "arm,cortex-a7";
++			reg = <0xf02>;
++			clock-frequency = <800000000>;
++		};
++
++		v7_cpu3: cpu at 3 {
++			device_type = "cpu";
++			compatible = "arm,cortex-a7";
++			reg = <0xf03>;
++			clock-frequency = <800000000>;
++		};
++	};
++
++	__overrides__ {
++		arm_freq = <&v7_cpu0>, "clock-frequency:0",
++		       <&v7_cpu1>, "clock-frequency:0",
++		       <&v7_cpu2>, "clock-frequency:0",
++		       <&v7_cpu3>, "clock-frequency:0";
++	};
++};
++
++&watchdog {
++	status = "okay";
++};
++
++&intc {
++        compatible = "brcm,bcm2836-armctrl-ic";
++        interrupt-parent = <&local_intc>;
++        interrupts = <8>;
++};
+--- /dev/null
++++ b/arch/arm/boot/dts/bcm2835-rpi-cm.dts
+@@ -0,0 +1,93 @@
++/dts-v1/;
++
++#include "bcm2835-rpi-cm.dtsi"
++
++/ {
++	model = "Raspberry Pi Compute Module";
++};
++
++&uart0 {
++	status = "okay";
++};
++
++&gpio {
++	spi0_pins: spi0_pins {
++		brcm,pins = <7 8 9 10 11>;
++		brcm,function = <4>; /* alt0 */
++	};
++
++	i2c0_pins: i2c0 {
++		brcm,pins = <0 1>;
++		brcm,function = <4>;
++	};
++
++	i2c1_pins: i2c1 {
++		brcm,pins = <2 3>;
++		brcm,function = <4>;
++	};
++
++	i2s_pins: i2s {
++		brcm,pins = <18 19 20 21>;
++		brcm,function = <4>; /* alt0 */
++	};
++};
++
++&spi0 {
++	pinctrl-names = "default";
++	pinctrl-0 = <&spi0_pins>;
++
++	spidev at 0{
++		compatible = "spidev";
++		reg = <0>;	/* CE0 */
++		#address-cells = <1>;
++		#size-cells = <0>;
++		spi-max-frequency = <500000>;
++	};
++
++	spidev at 1{
++		compatible = "spidev";
++		reg = <1>;	/* CE1 */
++		#address-cells = <1>;
++		#size-cells = <0>;
++		spi-max-frequency = <500000>;
++	};
++};
++
++&i2c0 {
++	pinctrl-names = "default";
++	pinctrl-0 = <&i2c0_pins>;
++	clock-frequency = <100000>;
++};
++
++&i2c1 {
++	pinctrl-names = "default";
++	pinctrl-0 = <&i2c1_pins>;
++	clock-frequency = <100000>;
++};
++
++&i2c2 {
++	clock-frequency = <100000>;
++};
++
++&i2s {
++	#sound-dai-cells = <0>;
++	pinctrl-names = "default";
++	pinctrl-0 = <&i2s_pins>;
++};
++
++/ {
++	__overrides__ {
++		uart0 = <&uart0>,"status";
++		uart0_clkrate = <&clk_uart0>,"clock-frequency:0";
++		uart1_clkrate = <&uart1>,"clock-frequency:0";
++		i2s = <&i2s>,"status";
++		spi = <&spi0>,"status";
++		i2c0 = <&i2c0>,"status";
++		i2c1 = <&i2c1>,"status";
++		i2c2_iknowwhatimdoing = <&i2c2>,"status";
++		i2c0_baudrate = <&i2c0>,"clock-frequency:0";
++		i2c1_baudrate = <&i2c1>,"clock-frequency:0";
++		i2c2_baudrate = <&i2c2>,"clock-frequency:0";
++		core_freq = <&clk_core>,"clock-frequency:0";
++	};
++};
+--- /dev/null
++++ b/arch/arm/boot/dts/bcm2835-rpi-cm.dtsi
+@@ -0,0 +1,30 @@
++#include "bcm2835.dtsi"
++
++&leds {
++	act_led: act {
++		label = "led0";
++		linux,default-trigger = "mmc0";
++		gpios = <&gpio 47 0>;
++	};
++};
++
++&mmc {
++	status = "okay";
++	bus-width = <4>;
++};
++
++&fb {
++	status = "okay";
++};
++
++/ {
++	__overrides__ {
++		act_led_gpio = <&act_led>,"gpios:4";
++		act_led_activelow = <&act_led>,"gpios:8";
++		act_led_trigger = <&act_led>,"linux,default-trigger";
++
++		audio = <&audio>,"status";
++		watchdog = <&watchdog>,"status";
++		random = <&random>,"status";
++	};
++};
+--- /dev/null
++++ b/arch/arm/boot/dts/overlays/Makefile
+@@ -0,0 +1,69 @@
++ifeq ($(CONFIG_OF),y)
++
++# Overlays for the Raspberry Pi platform
++
++ifeq ($(CONFIG_ARCH_BCM2708),y)
++   RPI_DT_OVERLAYS=y
++endif
++ifeq ($(CONFIG_ARCH_BCM2709),y)
++   RPI_DT_OVERLAYS=y
++endif
++ifeq ($(CONFIG_ARCH_BCM2835),y)
++   RPI_DT_OVERLAYS=y
++endif
++
++dtb-$(RPI_DT_OVERLAYS) += ads7846-overlay.dtb
++dtb-$(RPI_DT_OVERLAYS) += bmp085_i2c-sensor-overlay.dtb
++dtb-$(RPI_DT_OVERLAYS) += dht11-overlay.dtb
++dtb-$(RPI_DT_OVERLAYS) += enc28j60-overlay.dtb
++dtb-$(RPI_DT_OVERLAYS) += gpio-poweroff-overlay.dtb
++dtb-$(RPI_DT_OVERLAYS) += hifiberry-amp-overlay.dtb
++dtb-$(RPI_DT_OVERLAYS) += hifiberry-dac-overlay.dtb
++dtb-$(RPI_DT_OVERLAYS) += hifiberry-dacplus-overlay.dtb
++dtb-$(RPI_DT_OVERLAYS) += hifiberry-digi-overlay.dtb
++dtb-$(RPI_DT_OVERLAYS) += hy28a-overlay.dtb
++dtb-$(RPI_DT_OVERLAYS) += hy28b-overlay.dtb
++dtb-$(RPI_DT_OVERLAYS) += i2c-rtc-overlay.dtb
++dtb-$(RPI_DT_OVERLAYS) += i2s-mmap-overlay.dtb
++dtb-$(RPI_DT_OVERLAYS) += iqaudio-dac-overlay.dtb
++dtb-$(RPI_DT_OVERLAYS) += iqaudio-dacplus-overlay.dtb
++dtb-$(RPI_DT_OVERLAYS) += lirc-rpi-overlay.dtb
++dtb-$(RPI_DT_OVERLAYS) += mcp2515-can0-overlay.dtb
++dtb-$(RPI_DT_OVERLAYS) += mcp2515-can1-overlay.dtb
++dtb-$(RPI_DT_OVERLAYS) += mmc-overlay.dtb
++dtb-$(RPI_DT_OVERLAYS) += mz61581-overlay.dtb
++dtb-$(RPI_DT_OVERLAYS) += piscreen-overlay.dtb
++dtb-$(RPI_DT_OVERLAYS) += pitft28-resistive-overlay.dtb
++dtb-$(RPI_DT_OVERLAYS) += pps-gpio-overlay.dtb
++dtb-$(RPI_DT_OVERLAYS) += pwm-overlay.dtb
++dtb-$(RPI_DT_OVERLAYS) += pwm-2chan-overlay.dtb
++dtb-$(RPI_DT_OVERLAYS) += raspidac3-overlay.dtb
++dtb-$(RPI_DT_OVERLAYS) += rpi-dac-overlay.dtb
++dtb-$(RPI_DT_OVERLAYS) += rpi-display-overlay.dtb
++dtb-$(RPI_DT_OVERLAYS) += rpi-ft5406-overlay.dtb
++dtb-$(RPI_DT_OVERLAYS) += rpi-proto-overlay.dtb
++dtb-$(RPI_DT_OVERLAYS) += rpi-sense-overlay.dtb
++dtb-$(RPI_DT_OVERLAYS) += sdhost-overlay.dtb
++dtb-$(RPI_DT_OVERLAYS) += sdio-overlay.dtb
++dtb-$(RPI_DT_OVERLAYS) += smi-dev-overlay.dtb
++dtb-$(RPI_DT_OVERLAYS) += smi-nand-overlay.dtb
++dtb-$(RPI_DT_OVERLAYS) += smi-overlay.dtb
++dtb-$(RPI_DT_OVERLAYS) += spi-gpio35-39-overlay.dtb
++dtb-$(RPI_DT_OVERLAYS) += tinylcd35-overlay.dtb
++dtb-$(RPI_DT_OVERLAYS) += uart1-overlay.dtb
++dtb-$(RPI_DT_OVERLAYS) += vga666-overlay.dtb
++dtb-$(RPI_DT_OVERLAYS) += w1-gpio-overlay.dtb
++dtb-$(RPI_DT_OVERLAYS) += w1-gpio-pullup-overlay.dtb
++
++targets += dtbs dtbs_install
++targets += $(dtb-y)
++
++endif
++
++always		:= $(dtb-y)
++clean-files	:= *.dtb
++
++# Enable fixups to support overlays on BCM2708 platforms
++ifeq ($(RPI_DT_OVERLAYS),y)
++	DTC_FLAGS ?= -@
++endif
+--- /dev/null
++++ b/arch/arm/boot/dts/overlays/README
+@@ -0,0 +1,648 @@
++Introduction
++============
++
++This directory contains Device Tree overlays. Device Tree makes it possible
++to support many hardware configurations with a single kernel and without the
++need to explicitly load or blacklist kernel modules. Note that this isn't a
++"pure" Device Tree configuration (c.f. MACH_BCM2835) - some on-board devices
++are still configured by the board support code, but the intention is to
++eventually reach that goal.
++
++On Raspberry Pi, Device Tree usage is controlled from /boot/config.txt. By
++default, the Raspberry Pi kernel boots with device tree enabled. You can
++completely disable DT usage (for now) by adding:
++
++    device_tree=
++
++to your config.txt, which should cause your Pi to revert to the old way of
++doing things after a reboot.
++
++In /boot you will find a .dtb for each base platform. This describes the
++hardware that is part of the Raspberry Pi board. The loader (start.elf and its
++siblings) selects the .dtb file appropriate for the platform by name, and reads
++it into memory. At this point, all of the optional interfaces (i2c, i2s, spi)
++are disabled, but they can be enabled using Device Tree parameters:
++
++    dtparam=i2c=on,i2s=on,spi=on
++
++However, this shouldn't be necessary in many use cases because loading an
++overlay that requires one of those interfaces will cause it to be enabled
++automatically, and it is advisable to only enable interfaces if they are
++needed.
++
++Configuring additional, optional hardware is done using Device Tree overlays
++(see below).
++
++raspi-config
++============
++
++The Advanced Options section of the raspi-config utility can enable and disable
++Device Tree use, as well as toggling the I2C and SPI interfaces. Note that it
++is possible to both enable an interface and blacklist the driver, if for some
++reason you should want to defer the loading.
++
++Modules
++=======
++
++As well as describing the hardware, Device Tree also gives enough information
++to allow suitable driver modules to be located and loaded, with the corollary
++that unneeded modules are not loaded. As a result it should be possible to
++remove lines from /etc/modules, and /etc/modprobe.d/raspi-blacklist.conf can
++have its contents deleted (or commented out).
++
++Using Overlays
++==============
++
++Overlays are loaded using the "dtoverlay" directive. As an example, consider the
++popular lirc-rpi module, the Linux Infrared Remote Control driver. In the
++pre-DT world this would be loaded from /etc/modules, with an explicit
++"modprobe lirc-rpi" command, or programmatically by lircd. With DT enabled,
++this becomes a line in config.txt:
++
++    dtoverlay=lirc-rpi
++
++This causes the file /boot/overlays/lirc-rpi-overlay.dtb to be loaded. By
++default it will use GPIOs 17 (out) and 18 (in), but this can be modified using
++DT parameters:
++
++    dtoverlay=lirc-rpi,gpio_out_pin=17,gpio_in_pin=13
++
++Parameters always have default values, although in some cases (e.g. "w1-gpio")
++it is necessary to provide multiple overlays in order to get the desired
++behaviour. See the list of overlays below for a description of the parameters and their defaults.
++
++The Overlay and Parameter Reference
++===================================
++
++N.B. When editing this file, please preserve the indentation levels to make it simple to parse
++programmatically. NO HARD TABS.
++
++
++Name:   <The base DTB>
++Info:   Configures the base Raspberry Pi hardware
++Load:   <loaded automatically>
++Params:
++        audio                    Set to "on" to enable the onboard ALSA audio
++                                 interface (default "off")
++
++        i2c_arm                  Set to "on" to enable the ARM's i2c interface
++                                 (default "off")
++
++        i2c_vc                   Set to "on" to enable the i2c interface
++                                 usually reserved for the VideoCore processor
++                                 (default "off")
++
++        i2c                      An alias for i2c_arm
++
++        i2c_arm_baudrate         Set the baudrate of the ARM's i2c interface
++                                 (default "100000")
++
++        i2c_vc_baudrate          Set the baudrate of the VideoCore i2c interface
++                                 (default "100000")
++
++        i2c_baudrate             An alias for i2c_arm_baudrate
++
++        i2s                      Set to "on" to enable the i2s interface
++                                 (default "off")
++
++        spi                      Set to "on" to enable the spi interfaces
++                                 (default "off")
++
++        random                   Set to "on" to enable the hardware random
++                                 number generator (default "off")
++
++        uart0                    Set to "off" to disable uart0 (default "on")
++
++        watchdog                 Set to "on" to enable the hardware watchdog
++                                 (default "off")
++
++        act_led_trigger          Choose which activity the LED tracks.
++                                 Use "heartbeat" for a nice load indicator.
++                                 (default "mmc")
++
++        act_led_activelow        Set to "on" to invert the sense of the LED
++                                 (default "off")
++
++        act_led_gpio             Set which GPIO to use for the activity LED
++                                 (in case you want to connect it to an external
++                                 device)
++                                 (default "16" on a non-Plus board, "47" on a
++                                 Plus or Pi 2)
++
++        pwr_led_trigger
++        pwr_led_activelow
++        pwr_led_gpio
++                                 As for act_led_*, but using the PWR LED.
++                                 Not available on Model A/B boards.
++
++        N.B. It is recommended to only enable those interfaces that are needed.
++        Leaving all interfaces enabled can lead to unwanted behaviour (i2c_vc
++        interfering with Pi Camera, I2S and SPI hogging GPIO pins, etc.)
++        Note also that i2c, i2c_arm and i2c_vc are aliases for the physical
++        interfaces i2c0 and i2c1. Use of the numeric variants is still possible
++        but deprecated because the ARM/VC assignments differ between board
++        revisions. The same board-specific mapping applies to i2c_baudrate,
++        and the other i2c baudrate parameters.
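++
++        For example, enabling the onboard audio and the ARM's i2c interface,
++        and switching the activity LED to a load indicator, could be done
++        with config.txt lines such as:
++
++            dtparam=audio=on,i2c_arm=on
++            dtparam=act_led_trigger=heartbeat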
++
++
++Name:   ads7846
++Info:   ADS7846 Touch controller
++Load:   dtoverlay=ads7846,<param>=<val>
++Params: cs                       SPI bus Chip Select (default 1)
++        speed                    SPI bus speed (default 2MHz, max 3.25MHz)
++        penirq                   GPIO used for PENIRQ. REQUIRED
++        penirq_pull              Set GPIO pull (default 0=none, 2=pullup)
++        swapxy                   Swap x and y axis
++        xmin                     Minimum value on the X axis (default 0)
++        ymin                     Minimum value on the Y axis (default 0)
++        xmax                     Maximum value on the X axis (default 4095)
++        ymax                     Maximum value on the Y axis (default 4095)
++        pmin                     Minimum reported pressure value (default 0)
++        pmax                     Maximum reported pressure value (default 65535)
++        xohms                    Touchpanel sensitivity (X-plate resistance)
++                                 (default 400)
++
++        penirq is required and usually xohms (60-100) has to be set as well.
++        Apart from that, pmax (255) and swapxy are also common.
++        The rest of the calibration can be done with xinput-calibrator.
++        See: github.com/notro/fbtft/wiki/FBTFT-on-Raspian
++        Device Tree binding document:
++        www.kernel.org/doc/Documentation/devicetree/bindings/input/ads7846.txt
++
++
++Name:   bmp085_i2c-sensor
++Info:   Configures the BMP085/BMP180 digital barometric pressure and temperature
++        sensors from Bosch Sensortec
++Load:   dtoverlay=bmp085_i2c-sensor
++Params: <None>
++
++
++Name:   dht11
++Info:   Overlay for the DHT11/DHT21/DHT22 humidity/temperature sensors
++        Also sometimes found with the part number(s) AM230x.
++Load:   dtoverlay=dht11,<param>=<val>
++Params: gpiopin                  GPIO connected to the sensor's DATA output.
++                                 (default 4)
++
++
++[ The ds1307-rtc overlay has been deleted. See i2c-rtc. ]
++
++
++Name:   enc28j60
++Info:   Overlay for the Microchip ENC28J60 Ethernet Controller (SPI)
++Load:   dtoverlay=enc28j60,<param>=<val>
++Params: int_pin                  GPIO used for INT (default 25)
++
++        speed                    SPI bus speed (default 12000000)
++
++
++Name:   gpio-poweroff
++Info:   Drives a GPIO high or low on poweroff
++Load:   dtoverlay=gpio-poweroff,<param>=<val>
++Params: gpiopin                  GPIO for signalling (default 26)
++
++        active_low               Set if the power control device requires a
++                                 high->low transition to trigger a power-down.
++                                 Note that this will require the support of a
++                                 custom dt-blob.bin to prevent a power-down
++                                 during the boot process, and that a reboot
++                                 will also cause the pin to go low.
++
++
++Name:   hifiberry-amp
++Info:   Configures the HifiBerry Amp and Amp+ audio cards
++Load:   dtoverlay=hifiberry-amp
++Params: <None>
++
++
++Name:   hifiberry-dac
++Info:   Configures the HifiBerry DAC audio card
++Load:   dtoverlay=hifiberry-dac
++Params: <None>
++
++
++Name:   hifiberry-dacplus
++Info:   Configures the HifiBerry DAC+ audio card
++Load:   dtoverlay=hifiberry-dacplus
++Params: <None>
++
++
++Name:   hifiberry-digi
++Info:   Configures the HifiBerry Digi audio card
++Load:   dtoverlay=hifiberry-digi
++Params: <None>
++
++
++Name:   hy28a
++Info:   HY28A - 2.8" TFT LCD Display Module by HAOYU Electronics
++        Default values match Texy's display shield
++Load:   dtoverlay=hy28a,<param>=<val>
++Params: speed                    Display SPI bus speed
++
++        rotate                   Display rotation {0,90,180,270}
++
++        fps                      Delay between frame updates
++
++        debug                    Debug output level {0-7}
++
++        xohms                    Touchpanel sensitivity (X-plate resistance)
++
++        resetgpio                GPIO used to reset controller
++
++        ledgpio                  GPIO used to control backlight
++
++
++Name:   hy28b
++Info:   HY28B - 2.8" TFT LCD Display Module by HAOYU Electronics
++        Default values match Texy's display shield
++Load:   dtoverlay=hy28b,<param>=<val>
++Params: speed                    Display SPI bus speed
++
++        rotate                   Display rotation {0,90,180,270}
++
++        fps                      Delay between frame updates
++
++        debug                    Debug output level {0-7}
++
++        xohms                    Touchpanel sensitivity (X-plate resistance)
++
++        resetgpio                GPIO used to reset controller
++
++        ledgpio                  GPIO used to control backlight
++
++
++Name:   i2c-rtc
++Info:   Adds support for a number of I2C Real Time Clock devices
++Load:   dtoverlay=i2c-rtc,<param>
++Params: ds1307                   Select the DS1307 device
++
++        ds3231                   Select the DS3231 device
++
++        mcp7941x                 Select the MCP7941x device
++
++        pcf2127                  Select the PCF2127 device
++
++        pcf8523                  Select the PCF8523 device
++
++        pcf8563                  Select the PCF8563 device
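++
++        For example, a DS3231 module can be selected with:
++
++            dtoverlay=i2c-rtc,ds3231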
++
++
++Name:   i2s-mmap
++Info:   Enables mmap support in the bcm2708-i2s driver
++Load:   dtoverlay=i2s-mmap
++Params: <None>
++
++
++Name:   iqaudio-dac
++Info:   Configures the IQaudio DAC audio card
++Load:   dtoverlay=iqaudio-dac
++Params: <None>
++
++
++Name:   iqaudio-dacplus
++Info:   Configures the IQaudio DAC+ audio card
++Load:   dtoverlay=iqaudio-dacplus
++Params: <None>
++
++
++Name:   lirc-rpi
++Info:   Configures lirc-rpi (Linux Infrared Remote Control for Raspberry Pi)
++        Consult the module documentation for more details.
++Load:   dtoverlay=lirc-rpi,<param>=<val>,...
++Params: gpio_out_pin             GPIO for output (default "17")
++
++        gpio_in_pin              GPIO for input (default "18")
++
++        gpio_in_pull             Pull up/down/off on the input pin
++                                 (default "down")
++
++        sense                    Override the IR receive auto-detection logic:
++                                   "0" = force active-high
++                                   "1" = force active-low
++                                   "-1" = use auto-detection
++                                 (default "-1")
++
++        softcarrier              Turn the software carrier "on" or "off"
++                                 (default "on")
++
++        invert                   "on" = invert the output pin (default "off")
++
++        debug                    "on" = enable additional debug messages
++                                 (default "off")
++
++
++Name:   mcp2515-can0
++Info:   Configures the MCP2515 CAN controller on spi0.0
++Load:   dtoverlay=mcp2515-can0,<param>=<val>
++Params: oscillator               Clock frequency for the CAN controller (Hz)
++
++        spimaxfrequency          Maximum SPI frequency (Hz)
++
++        interrupt                GPIO for interrupt signal
++
++
++Name:   mcp2515-can1
++Info:   Configures the MCP2515 CAN controller on spi0.1
++Load:   dtoverlay=mcp2515-can1,<param>=<val>
++Params: oscillator               Clock frequency for the CAN controller (Hz)
++
++        spimaxfrequency          Maximum SPI frequency (Hz)
++
++        interrupt                GPIO for interrupt signal
++
++
++Name:   mmc
++Info:   Selects the bcm2835-mmc SD/MMC driver, optionally with overclock
++Load:   dtoverlay=mmc,<param>=<val>
++Params: overclock_50             Clock (in MHz) to use when the MMC framework
++                                 requests 50MHz
++        force_pio                Disable DMA support
++
++
++Name:   mz61581
++Info:   MZ61581 display by Tontec
++Load:   dtoverlay=mz61581,<param>=<val>
++Params: speed                    Display SPI bus speed
++
++        rotate                   Display rotation {0,90,180,270}
++
++        fps                      Delay between frame updates
++
++        txbuflen                 Transmit buffer length (default 32768)
++
++        debug                    Debug output level {0-7}
++
++        xohms                    Touchpanel sensitivity (X-plate resistance)
++
++
++[ The pcf2127-rtc overlay has been deleted. See i2c-rtc. ]
++
++
++[ The pcf8523-rtc overlay has been deleted. See i2c-rtc. ]
++
++
++[ The pcf8563-rtc overlay has been deleted. See i2c-rtc. ]
++
++
++Name:   piscreen
++Info:   PiScreen display by OzzMaker.com
++Load:   dtoverlay=piscreen,<param>=<val>
++Params: speed                    Display SPI bus speed
++
++        rotate                   Display rotation {0,90,180,270}
++
++        fps                      Delay between frame updates
++
++        debug                    Debug output level {0-7}
++
++        xohms                    Touchpanel sensitivity (X-plate resistance)
++
++
++Name:   pitft28-resistive
++Info:   Adafruit PiTFT 2.8" resistive touch screen
++Load:   dtoverlay=pitft28-resistive,<param>=<val>
++Params: speed                    Display SPI bus speed
++
++        rotate                   Display rotation {0,90,180,270}
++
++        fps                      Delay between frame updates
++
++        debug                    Debug output level {0-7}
++
++
++Name:   pps-gpio
++Info:   Configures the pps-gpio (pulse-per-second time signal via GPIO).
++Load:   dtoverlay=pps-gpio,<param>=<val>
++Params: gpiopin                  Input GPIO (default "18")
++
++
++Name:   pwm
++Info:   Configures a single PWM channel
++        Legal pin,function combinations for each channel:
++          PWM0: 12,4(Alt0) 18,2(Alt5) 40,4(Alt0)            52,5(Alt1)
++          PWM1: 13,4(Alt0) 19,2(Alt5) 41,4(Alt0) 45,4(Alt0) 53,5(Alt1)
++        N.B.:
++          1) Pin 18 is the only one available on all platforms, and
++             it is the one used by the I2S audio interface.
++             Pins 12 and 13 might be better choices on an A+, B+ or Pi2.
++          2) The onboard analogue audio output uses both PWM channels.
++          3) So be careful mixing audio and PWM.
++          4) Currently the clock must have been enabled and configured
++             by other means.
++Load:   dtoverlay=pwm,<param>=<val>
++Params: pin                      Output pin (default 18) - see table
++        func                     Pin function (default 2 = Alt5) - see above
++        clock                    PWM clock frequency (informational)
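++
++        For example, to drive PWM0 on pin 12 (the 12,4(Alt0) combination from
++        the table above):
++
++            dtoverlay=pwm,pin=12,func=4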
++
++
++Name:   pwm-2chan
++Info:   Configures both PWM channels
++        Legal pin,function combinations for each channel:
++          PWM0: 12,4(Alt0) 18,2(Alt5) 40,4(Alt0)            52,5(Alt1)
++          PWM1: 13,4(Alt0) 19,2(Alt5) 41,4(Alt0) 45,4(Alt0) 53,5(Alt1)
++        N.B.:
++          1) Pin 18 is the only one available on all platforms, and
++             it is the one used by the I2S audio interface.
++             Pins 12 and 13 might be better choices on an A+, B+ or Pi2.
++          2) The onboard analogue audio output uses both PWM channels.
++          3) So be careful mixing audio and PWM.
++          4) Currently the clock must have been enabled and configured
++             by other means.
++Load:   dtoverlay=pwm-2chan,<param>=<val>
++Params: pin                      Output pin (default 18) - see table
++        pin2                     Output pin for other channel (default 19)
++        func                     Pin function (default 2 = Alt5) - see above
++        func2                    Function for pin2 (default 2 = Alt5)
++        clock                    PWM clock frequency (informational)
++
++
++Name:   raspidac3
++Info:   Configures the RaspiDAC Rev.3x audio card
++Load:   dtoverlay=raspidac3
++Params: <None>
++
++
++Name:   rpi-dac
++Info:   Configures the RPi DAC audio card
++Load:   dtoverlay=rpi-dac
++Params: <None>
++
++
++Name:   rpi-display
++Info:   RPi-Display - 2.8" Touch Display by Watterott
++Load:   dtoverlay=rpi-display,<param>=<val>
++Params: speed                    Display SPI bus speed
++
++        rotate                   Display rotation {0,90,180,270}
++
++        fps                      Delay between frame updates
++
++        debug                    Debug output level {0-7}
++
++        xohms                    Touchpanel sensitivity (X-plate resistance)
++
++
++Name:   rpi-ft5406
++Info:   Official Raspberry Pi display touchscreen
++Load:   dtoverlay=rpi-ft5406
++Params: <None>
++
++
++Name:   rpi-proto
++Info:   Configures the RPi Proto audio card
++Load:   dtoverlay=rpi-proto
++Params: <None>
++
++
++Name:   rpi-sense
++Info:   Raspberry Pi Sense HAT
++Load:   dtoverlay=rpi-sense
++Params: <None>
++
++
++Name:   sdhost
++Info:   Selects the bcm2835-sdhost SD/MMC driver, optionally with overclock
++Load:   dtoverlay=sdhost,<param>=<val>
++Params: overclock_50             Clock (in MHz) to use when the MMC framework
++                                 requests 50MHz
++
++        force_pio                Disable DMA support (default off)
++
++        pio_limit                Number of blocks above which to use DMA
++                                 (default 1)
++
++        debug                    Enable debug output (default off)
++
++
++Name:   sdio
++Info:   Selects the bcm2835-sdhost SD/MMC driver, optionally with overclock,
++        and enables SDIO via GPIOs 22-27.
++Load:   dtoverlay=sdio,<param>=<val>
++Params: overclock_50             Clock (in MHz) to use when the MMC framework
++                                 requests 50MHz
++
++        force_pio                Disable DMA support (default off)
++
++        pio_limit                Number of blocks above which to use DMA
++                                 (default 1)
++
++        debug                    Enable debug output (default off)
++
++        poll_once                Disable SDIO-device polling every second
++                                 (default on: polling once at boot-time)
++
++
++Name:   smi
++Info:   Enables the Secondary Memory Interface peripheral. Uses GPIOs 2-25!
++Load:   dtoverlay=smi
++Params: <None>
++
++
++Name:   smi-dev
++Info:   Enables the userspace interface for the SMI driver
++Load:   dtoverlay=smi-dev
++Params: <None>
++
++
++Name:   smi-nand
++Info:   Enables access to NAND flash via the SMI interface
++Load:   dtoverlay=smi-nand
++Params: <None>
++
++
++Name:   spi-gpio35-39
++Info:   Moves the SPI function block to GPIOs 35 to 39
++Load:   dtoverlay=spi-gpio35-39
++Params: <None>
++
++
++Name:   tinylcd35
++Info:   3.5" Color TFT Display by www.tinylcd.com
++        Options: Touch, RTC, keypad
++Load:   dtoverlay=tinylcd35,<param>=<val>
++Params: speed                    Display SPI bus speed
++
++        rotate                   Display rotation {0,90,180,270}
++
++        fps                      Delay between frame updates
++
++        debug                    Debug output level {0-7}
++
++        touch                    Enable touch panel
++
++        touchgpio                Touch controller IRQ GPIO
++
++        xohms                    Touchpanel: Resistance of X-plate in ohms
++
++        rtc-pcf                  PCF8563 Real Time Clock
++
++        rtc-ds                   DS1307 Real Time Clock
++
++        keypad                   Enable keypad
++
++        Examples:
++            Display with touchpanel, PCF8563 RTC and keypad:
++                dtoverlay=tinylcd35,touch,rtc-pcf,keypad
++            Old touch display:
++                dtoverlay=tinylcd35,touch,touchgpio=3
++
++
++Name:   uart1
++Info:   Enable uart1 in place of uart0
++Load:   dtoverlay=uart1,<param>=<val>
++Params: txd1_pin                 GPIO pin for TXD1 (14, 32 or 40 - default 14)
++
++        rxd1_pin                 GPIO pin for RXD1 (15, 33 or 41 - default 15)
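++
++        For example, to bring uart1 out on GPIOs 32 and 33 instead of the
++        defaults:
++
++            dtoverlay=uart1,txd1_pin=32,rxd1_pin=33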
++
++
++Name:   vga666
++Info:   Overlay for the Fen Logic VGA666 board
++        This uses GPIOs 2-21 (so no I2C), and activates the output 2-3 seconds
++        after the kernel has started.
++Load:   dtoverlay=vga666
++Params: <None>
++
++
++Name:   w1-gpio
++Info:   Configures the w1-gpio Onewire interface module.
++        Use this overlay if you *don't* need a GPIO to drive an external pullup.
++Load:   dtoverlay=w1-gpio,<param>=<val>
++Params: gpiopin                  GPIO for I/O (default "4")
++
++        pullup                   Non-zero, "on", or "y" to enable the parasitic
++                                 power (2-wire, power-on-data) feature
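++
++        For example, to use the default GPIO with the parasitic power feature
++        enabled:
++
++            dtoverlay=w1-gpio,pullup=on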
++
++
++Name:   w1-gpio-pullup
++Info:   Configures the w1-gpio Onewire interface module.
++        Use this overlay if you *do* need a GPIO to drive an external pullup.
++Load:   dtoverlay=w1-gpio-pullup,<param>=<val>
++Params: gpiopin                  GPIO for I/O (default "4")
++
++        pullup                   Non-zero, "on", or "y" to enable the parasitic
++                                 power (2-wire, power-on-data) feature
++
++        extpullup                GPIO for external pullup (default "5")
++
++
++Troubleshooting
++===============
++
++If you are experiencing problems that you think are DT-related, enable DT
++diagnostic output by adding this to /boot/config.txt:
++
++    dtdebug=on
++
++and rebooting. Then run:
++
++    sudo vcdbg log msg
++
++and look for relevant messages.
++
++Further reading
++===============
++
++This is only meant to be a quick introduction to the subject of Device Tree on
++Raspberry Pi. There is a more complete explanation here:
++
++http://www.raspberrypi.org/documentation/configuration/device-tree.md
+--- /dev/null
++++ b/arch/arm/boot/dts/overlays/ads7846-overlay.dts
+@@ -0,0 +1,83 @@
++/*
++ * Generic Device Tree overlay for the ADS7846 touch controller
++ *
++ */
++
++/dts-v1/;
++/plugin/;
++
++/ {
++	compatible = "brcm,bcm2835", "brcm,bcm2708", "brcm,bcm2709";
++
++	fragment at 0 {
++		target = <&spi0>;
++		__overlay__ {
++			status = "okay";
++
++			spidev at 0{
++				status = "disabled";
++			};
++
++			spidev at 1{
++				status = "disabled";
++			};
++		};
++	};
++
++	fragment at 1 {
++		target = <&gpio>;
++		__overlay__ {
++			ads7846_pins: ads7846_pins {
++				brcm,pins = <255>; /* illegal default value */
++				brcm,function = <0>; /* in */
++				brcm,pull = <0>; /* none */
++			};
++		};
++	};
++
++	fragment at 2 {
++		target = <&spi0>;
++		__overlay__ {
++			/* needed to avoid dtc warning */
++			#address-cells = <1>;
++			#size-cells = <0>;
++
++			ads7846: ads7846 at 1 {
++				compatible = "ti,ads7846";
++				reg = <1>;
++				pinctrl-names = "default";
++				pinctrl-0 = <&ads7846_pins>;
++
++				spi-max-frequency = <2000000>;
++				interrupts = <255 2>; /* high-to-low edge triggered */
++				interrupt-parent = <&gpio>;
++				pendown-gpio = <&gpio 255 0>;
++
++				/* driver defaults */
++				ti,x-min = /bits/ 16 <0>;
++				ti,y-min = /bits/ 16 <0>;
++				ti,x-max = /bits/ 16 <0x0FFF>;
++				ti,y-max = /bits/ 16 <0x0FFF>;
++				ti,pressure-min = /bits/ 16 <0>;
++				ti,pressure-max = /bits/ 16 <0xFFFF>;
++				ti,x-plate-ohms = /bits/ 16 <400>;
++			};
++		};
++	};
++	__overrides__ {
++		cs =     <&ads7846>,"reg:0";
++		speed =  <&ads7846>,"spi-max-frequency:0";
++		penirq = <&ads7846_pins>,"brcm,pins:0", /* REQUIRED */
++			 <&ads7846>,"interrupts:0",
++			 <&ads7846>,"pendown-gpio:4";
++		penirq_pull = <&ads7846_pins>,"brcm,pull:0";
++		swapxy = <&ads7846>,"ti,swap-xy?";
++		xmin =   <&ads7846>,"ti,x-min;0";
++		ymin =   <&ads7846>,"ti,y-min;0";
++		xmax =   <&ads7846>,"ti,x-max;0";
++		ymax =   <&ads7846>,"ti,y-max;0";
++		pmin =   <&ads7846>,"ti,pressure-min;0";
++		pmax =   <&ads7846>,"ti,pressure-max;0";
++		xohms =  <&ads7846>,"ti,x-plate-ohms;0";
++	};
++};
+--- /dev/null
++++ b/arch/arm/boot/dts/overlays/bmp085_i2c-sensor-overlay.dts
+@@ -0,0 +1,23 @@
++// Definitions for BMP085/BMP180 digital barometric pressure and temperature sensors from Bosch Sensortec
++/dts-v1/;
++/plugin/;
++
++/ {
++        compatible = "brcm,bcm2708";
++
++        fragment at 0 {
++                target = <&i2c_arm>;
++                __overlay__ {
++                        #address-cells = <1>;
++                        #size-cells = <0>;
++                        status = "okay";
++
++                        bmp085 at 77 {
++                                compatible = "bosch,bmp085";
++                                reg = <0x77>;
++                                default-oversampling = <3>;
++                                status = "okay";
++                        };
++                };
++        };
++};
+--- /dev/null
++++ b/arch/arm/boot/dts/overlays/dht11-overlay.dts
+@@ -0,0 +1,39 @@
++/*
++ * Overlay for the DHT11/21/22 humidity/temperature sensor modules.
++ */
++/dts-v1/;
++/plugin/;
++
++/ {
++	compatible = "brcm,bcm2708";
++
++	fragment at 0 {
++		target-path = "/";
++		__overlay__ {
++
++			dht11: dht11 at 0 {
++				compatible = "dht11";
++				pinctrl-names = "default";
++				pinctrl-0 = <&dht11_pins>;
++				gpios = <&gpio 4 0>;
++				status = "okay";
++			};
++		};
++	};
++
++	fragment at 1 {
++		target = <&gpio>;
++		__overlay__ {
++			dht11_pins: dht11_pins {
++				brcm,pins = <4>;
++				brcm,function = <0>; // in
++				brcm,pull = <0>; // off
++			};
++		};
++	};
++
++	__overrides__ {
++		gpiopin = <&dht11_pins>,"brcm,pins:0",
++			<&dht11>,"gpios:4";
++	};
++};
+--- /dev/null
++++ b/arch/arm/boot/dts/overlays/enc28j60-overlay.dts
+@@ -0,0 +1,50 @@
++// Overlay for the Microchip ENC28J60 Ethernet Controller
++/dts-v1/;
++/plugin/;
++
++/ {
++	compatible = "brcm,bcm2708";
++
++	fragment at 0 {
++		target = <&spi0>;
++		__overlay__ {
++			/* needed to avoid dtc warning */
++			#address-cells = <1>;
++			#size-cells = <0>;
++
++			status = "okay";
++
++			spidev at 0{
++				status = "disabled";
++			};
++
++			eth1: enc28j60 at 0{
++				compatible = "microchip,enc28j60";
++				reg = <0>; /* CE0 */
++				pinctrl-names = "default";
++				pinctrl-0 = <&eth1_pins>;
++				interrupt-parent = <&gpio>;
++				interrupts = <25 0x2>; /* falling edge */
++				spi-max-frequency = <12000000>;
++				status = "okay";
++			};
++		};
++	};
++
++	fragment at 1 {
++		target = <&gpio>;
++		__overlay__ {
++			eth1_pins: eth1_pins {
++				brcm,pins = <25>;
++				brcm,function = <0>; /* in */
++				brcm,pull = <0>; /* none */
++			};
++		};
++	};
++
++	__overrides__ {
++		int_pin = <&eth1>, "interrupts:0",
++		          <&eth1_pins>, "brcm,pins:0";
++		speed   = <&eth1>, "spi-max-frequency:0";
++	};
++};
+--- /dev/null
++++ b/arch/arm/boot/dts/overlays/gpio-poweroff-overlay.dts
+@@ -0,0 +1,34 @@
++// Definitions for gpio-poweroff module
++/dts-v1/;
++/plugin/;
++
++/ {
++	compatible = "brcm,bcm2708";
++
++	fragment at 0 {
++		target-path = "/";
++		__overlay__ {
++			power_ctrl: power_ctrl {
++				compatible = "gpio-poweroff";
++				gpios = <&gpio 26 0>;
++				force;
++			};
++		};
++	};
++
++	fragment at 1 {
++		target = <&gpio>;
++		__overlay__ {
++			power_ctrl_pins: power_ctrl_pins {
++				brcm,pins = <26>;
++				brcm,function = <1>; // out
++			};
++		};
++	};
++
++	__overrides__ {
++		gpiopin =       <&power_ctrl>,"gpios:4",
++				<&power_ctrl_pins>,"brcm,pins:0";
++		active_low =    <&power_ctrl>,"gpios:8";
++	};
++};
+--- /dev/null
++++ b/arch/arm/boot/dts/overlays/hifiberry-amp-overlay.dts
+@@ -0,0 +1,39 @@
++// Definitions for HiFiBerry Amp/Amp+
++/dts-v1/;
++/plugin/;
++
++/ {
++	compatible = "brcm,bcm2708";
++
++	fragment at 0 {
++		target = <&sound>;
++		__overlay__ {
++			compatible = "hifiberry,hifiberry-amp";
++			i2s-controller = <&i2s>;
++			status = "okay";
++		};
++	};
++
++	fragment at 1 {
++		target = <&i2s>;
++		__overlay__ {
++			status = "okay";
++		};
++	};
++
++	fragment at 2 {
++		target = <&i2c1>;
++		__overlay__ {
++			#address-cells = <1>;
++			#size-cells = <0>;
++			status = "okay";
++
++			tas5713 at 1b {
++				#sound-dai-cells = <0>;
++				compatible = "ti,tas5713";
++				reg = <0x1b>;
++				status = "okay";
++			};
++		};
++	};
++};
+--- /dev/null
++++ b/arch/arm/boot/dts/overlays/hifiberry-dac-overlay.dts
+@@ -0,0 +1,34 @@
++// Definitions for HiFiBerry DAC
++/dts-v1/;
++/plugin/;
++
++/ {
++	compatible = "brcm,bcm2708";
++
++	fragment at 0 {
++		target = <&sound>;
++		__overlay__ {
++			compatible = "hifiberry,hifiberry-dac";
++			i2s-controller = <&i2s>;
++			status = "okay";
++		};
++	};
++
++	fragment at 1 {
++		target = <&i2s>;
++		__overlay__ {
++			status = "okay";
++		};
++	};
++
++	fragment at 2 {
++		target-path = "/";
++		__overlay__ {
++			pcm5102a-codec {
++				#sound-dai-cells = <0>;
++				compatible = "ti,pcm5102a";
++				status = "okay";
++			};
++		};
++	};
++};
+--- /dev/null
++++ b/arch/arm/boot/dts/overlays/hifiberry-dacplus-overlay.dts
+@@ -0,0 +1,39 @@
++// Definitions for HiFiBerry DAC+
++/dts-v1/;
++/plugin/;
++
++/ {
++	compatible = "brcm,bcm2708";
++
++	fragment at 0 {
++		target = <&sound>;
++		__overlay__ {
++			compatible = "hifiberry,hifiberry-dacplus";
++			i2s-controller = <&i2s>;
++			status = "okay";
++		};
++	};
++
++	fragment at 1 {
++		target = <&i2s>;
++		__overlay__ {
++			status = "okay";
++		};
++	};
++
++	fragment at 2 {
++		target = <&i2c1>;
++		__overlay__ {
++			#address-cells = <1>;
++			#size-cells = <0>;
++			status = "okay";
++
++			pcm5122 at 4d {
++				#sound-dai-cells = <0>;
++				compatible = "ti,pcm5122";
++				reg = <0x4d>;
++				status = "okay";
++			};
++		};
++	};
++};
+--- /dev/null
++++ b/arch/arm/boot/dts/overlays/hifiberry-digi-overlay.dts
+@@ -0,0 +1,39 @@
++// Definitions for HiFiBerry Digi
++/dts-v1/;
++/plugin/;
++
++/ {
++	compatible = "brcm,bcm2708";
++
++	fragment at 0 {
++		target = <&sound>;
++		__overlay__ {
++			compatible = "hifiberry,hifiberry-digi";
++			i2s-controller = <&i2s>;
++			status = "okay";
++		};
++	};
++
++	fragment at 1 {
++		target = <&i2s>;
++		__overlay__ {
++			status = "okay";
++		};
++	};
++
++	fragment at 2 {
++		target = <&i2c1>;
++		__overlay__ {
++			#address-cells = <1>;
++			#size-cells = <0>;
++			status = "okay";
++
++			wm8804 at 3b {
++				#sound-dai-cells = <0>;
++				compatible = "wlf,wm8804";
++				reg = <0x3b>;
++				status = "okay";
++			};
++		};
++	};
++};
+--- /dev/null
++++ b/arch/arm/boot/dts/overlays/hy28a-overlay.dts
+@@ -0,0 +1,87 @@
++/*
++ * Device Tree overlay for HY28A display
++ *
++ */
++
++/dts-v1/;
++/plugin/;
++
++/ {
++	compatible = "brcm,bcm2835", "brcm,bcm2708", "brcm,bcm2709";
++
++	fragment at 0 {
++		target = <&spi0>;
++		__overlay__ {
++			status = "okay";
++
++			spidev at 0{
++				status = "disabled";
++			};
++
++			spidev at 1{
++				status = "disabled";
++			};
++		};
++	};
++
++	fragment at 1 {
++		target = <&gpio>;
++		__overlay__ {
++			hy28a_pins: hy28a_pins {
++				brcm,pins = <17 25 18>;
++				brcm,function = <0 1 1>; /* in out out */
++			};
++		};
++	};
++
++	fragment at 2 {
++		target = <&spi0>;
++		__overlay__ {
++			/* needed to avoid dtc warning */
++			#address-cells = <1>;
++			#size-cells = <0>;
++
++			hy28a: hy28a at 0{
++				compatible = "ilitek,ili9320";
++				reg = <0>;
++				pinctrl-names = "default";
++				pinctrl-0 = <&hy28a_pins>;
++
++				spi-max-frequency = <32000000>;
++				spi-cpol;
++				spi-cpha;
++				rotate = <270>;
++				bgr;
++				fps = <50>;
++				buswidth = <8>;
++				startbyte = <0x70>;
++				reset-gpios = <&gpio 25 0>;
++				led-gpios = <&gpio 18 1>;
++				debug = <0>;
++			};
++
++			hy28a_ts: hy28a-ts at 1 {
++				compatible = "ti,ads7846";
++				reg = <1>;
++
++				spi-max-frequency = <2000000>;
++				interrupts = <17 2>; /* high-to-low edge triggered */
++				interrupt-parent = <&gpio>;
++				pendown-gpio = <&gpio 17 0>;
++				ti,x-plate-ohms = /bits/ 16 <100>;
++				ti,pressure-max = /bits/ 16 <255>;
++			};
++		};
++	};
++	__overrides__ {
++		speed =		<&hy28a>,"spi-max-frequency:0";
++		rotate =	<&hy28a>,"rotate:0";
++		fps =		<&hy28a>,"fps:0";
++		debug =		<&hy28a>,"debug:0";
++		xohms =		<&hy28a_ts>,"ti,x-plate-ohms:0";
++		resetgpio =	<&hy28a>,"reset-gpios:4",
++				<&hy28a_pins>, "brcm,pins:1";
++		ledgpio =	<&hy28a>,"led-gpios:4",
++				<&hy28a_pins>, "brcm,pins:2";
++	};
++};
+--- /dev/null
++++ b/arch/arm/boot/dts/overlays/hy28b-overlay.dts
+@@ -0,0 +1,142 @@
++/*
++ * Device Tree overlay for HY28b display shield by Texy
++ *
++ */
++
++/dts-v1/;
++/plugin/;
++
++/ {
++	compatible = "brcm,bcm2835", "brcm,bcm2708", "brcm,bcm2709";
++
++	fragment at 0 {
++		target = <&spi0>;
++		__overlay__ {
++			status = "okay";
++
++			spidev at 0{
++				status = "disabled";
++			};
++
++			spidev at 1{
++				status = "disabled";
++			};
++		};
++	};
++
++	fragment at 1 {
++		target = <&gpio>;
++		__overlay__ {
++			hy28b_pins: hy28b_pins {
++				brcm,pins = <17 25 18>;
++				brcm,function = <0 1 1>; /* in out out */
++			};
++		};
++	};
++
++	fragment at 2 {
++		target = <&spi0>;
++		__overlay__ {
++			/* needed to avoid dtc warning */
++			#address-cells = <1>;
++			#size-cells = <0>;
++
++			hy28b: hy28b at 0{
++				compatible = "ilitek,ili9325";
++				reg = <0>;
++				pinctrl-names = "default";
++				pinctrl-0 = <&hy28b_pins>;
++
++				spi-max-frequency = <48000000>;
++				spi-cpol;
++				spi-cpha;
++				rotate = <270>;
++				bgr;
++				fps = <50>;
++				buswidth = <8>;
++				startbyte = <0x70>;
++				reset-gpios = <&gpio 25 0>;
++				led-gpios = <&gpio 18 1>;
++
++				gamma = "04 1F 4 7 7 0 7 7 6 0\n0F 00 1 7 4 0 0 0 6 7";
++
++				init = <0x10000e7 0x0010
++					0x1000000 0x0001
++					0x1000001 0x0100
++					0x1000002 0x0700
++				        0x1000003 0x1030
++					0x1000004 0x0000
++					0x1000008 0x0207
++					0x1000009 0x0000
++				        0x100000a 0x0000
++					0x100000c 0x0001
++					0x100000d 0x0000
++					0x100000f 0x0000
++				        0x1000010 0x0000
++					0x1000011 0x0007
++					0x1000012 0x0000
++					0x1000013 0x0000
++				        0x2000032
++					0x1000010 0x1590
++					0x1000011 0x0227
++				        0x2000032
++					0x1000012 0x009c
++				        0x2000032
++				        0x1000013 0x1900
++					0x1000029 0x0023
++					0x100002b 0x000e
++				        0x2000032
++				        0x1000020 0x0000
++					0x1000021 0x0000
++				        0x2000032
++					0x1000050 0x0000
++				        0x1000051 0x00ef
++					0x1000052 0x0000
++					0x1000053 0x013f
++					0x1000060 0xa700
++				        0x1000061 0x0001
++					0x100006a 0x0000
++					0x1000080 0x0000
++					0x1000081 0x0000
++				        0x1000082 0x0000
++					0x1000083 0x0000
++					0x1000084 0x0000
++					0x1000085 0x0000
++				        0x1000090 0x0010
++					0x1000092 0x0000
++					0x1000093 0x0003
++					0x1000095 0x0110
++				        0x1000097 0x0000
++					0x1000098 0x0000
++					0x1000007 0x0133
++					0x1000020 0x0000
++				        0x1000021 0x0000
++				        0x2000064>;
++				debug = <0>;
++			};
++
++			hy28b_ts: hy28b-ts at 1 {
++				compatible = "ti,ads7846";
++				reg = <1>;
++
++				spi-max-frequency = <2000000>;
++				interrupts = <17 2>; /* high-to-low edge triggered */
++				interrupt-parent = <&gpio>;
++				pendown-gpio = <&gpio 17 0>;
++				ti,x-plate-ohms = /bits/ 16 <100>;
++				ti,pressure-max = /bits/ 16 <255>;
++			};
++		};
++	};
++	__overrides__ {
++		speed = 	<&hy28b>,"spi-max-frequency:0";
++		rotate = 	<&hy28b>,"rotate:0";
++		fps = 		<&hy28b>,"fps:0";
++		debug = 	<&hy28b>,"debug:0";
++		xohms =		<&hy28b_ts>,"ti,x-plate-ohms:0";
++		resetgpio =	<&hy28b>,"reset-gpios:4",
++				<&hy28b_pins>, "brcm,pins:1";
++		ledgpio =	<&hy28b>,"led-gpios:4",
++				<&hy28b_pins>, "brcm,pins:2";
++	};
++};
+--- /dev/null
++++ b/arch/arm/boot/dts/overlays/i2c-rtc-overlay.dts
+@@ -0,0 +1,55 @@
++// Definitions for several I2C based Real Time Clocks
++/dts-v1/;
++/plugin/;
++
++/ {
++	compatible = "brcm,bcm2708";
++
++	fragment at 0 {
++		target = <&i2c_arm>;
++		__overlay__ {
++			#address-cells = <1>;
++			#size-cells = <0>;
++			status = "okay";
++
++			ds1307: ds1307 at 68 {
++				compatible = "maxim,ds1307";
++				reg = <0x68>;
++				status = "disabled";
++			};
++			mcp7941x: mcp7941x at 6f {
++				compatible = "microchip,mcp7941x";
++				reg = <0x6f>;
++				status = "disabled";
++			};
++			ds3231: ds3231 at 68 {
++				compatible = "maxim,ds3231";
++				reg = <0x68>;
++				status = "disabled";
++			};
++			pcf2127: pcf2127 at 51 {
++				compatible = "nxp,pcf2127";
++				reg = <0x51>;
++				status = "disabled";
++			};
++			pcf8523: pcf8523 at 68 {
++				compatible = "nxp,pcf8523";
++				reg = <0x68>;
++				status = "disabled";
++			};
++			pcf8563: pcf8563 at 51 {
++				compatible = "nxp,pcf8563";
++				reg = <0x51>;
++				status = "disabled";
++			};
++		};
++	};
++	__overrides__ {
++		ds1307 = <&ds1307>,"status";
++		ds3231 = <&ds3231>,"status";
++		mcp7941x = <&mcp7941x>,"status";
++		pcf2127 = <&pcf2127>,"status";
++		pcf8523 = <&pcf8523>,"status";
++		pcf8563 = <&pcf8563>,"status";
++	};
++};
+--- /dev/null
++++ b/arch/arm/boot/dts/overlays/i2s-mmap-overlay.dts
+@@ -0,0 +1,13 @@
++/dts-v1/;
++/plugin/;
++
++/{
++	compatible = "brcm,bcm2708";
++
++	fragment at 0 {
++		target = <&i2s>;
++		__overlay__ {
++			brcm,enable-mmap;
++		};
++	};
++};
+--- /dev/null
++++ b/arch/arm/boot/dts/overlays/iqaudio-dac-overlay.dts
+@@ -0,0 +1,39 @@
++// Definitions for IQaudIO DAC
++/dts-v1/;
++/plugin/;
++
++/ {
++	compatible = "brcm,bcm2708";
++
++	fragment at 0 {
++		target = <&sound>;
++		__overlay__ {
++			compatible = "iqaudio,iqaudio-dac";
++			i2s-controller = <&i2s>;
++			status = "okay";
++		};
++	};
++
++	fragment at 1 {
++		target = <&i2s>;
++		__overlay__ {
++			status = "okay";
++		};
++	};
++
++	fragment at 2 {
++		target = <&i2c1>;
++		__overlay__ {
++			#address-cells = <1>;
++			#size-cells = <0>;
++			status = "okay";
++
++			pcm5122 at 4c {
++				#sound-dai-cells = <0>;
++				compatible = "ti,pcm5122";
++				reg = <0x4c>;
++				status = "okay";
++			};
++		};
++	};
++};
+--- /dev/null
++++ b/arch/arm/boot/dts/overlays/iqaudio-dacplus-overlay.dts
+@@ -0,0 +1,39 @@
++// Definitions for IQaudIO DAC+
++/dts-v1/;
++/plugin/;
++
++/ {
++	compatible = "brcm,bcm2708";
++
++	fragment at 0 {
++		target = <&sound>;
++		__overlay__ {
++			compatible = "iqaudio,iqaudio-dac";
++			i2s-controller = <&i2s>;
++			status = "okay";
++		};
++	};
++
++	fragment at 1 {
++		target = <&i2s>;
++		__overlay__ {
++			status = "okay";
++		};
++	};
++
++	fragment at 2 {
++		target = <&i2c1>;
++		__overlay__ {
++			#address-cells = <1>;
++			#size-cells = <0>;
++			status = "okay";
++
++			pcm5122 at 4c {
++				#sound-dai-cells = <0>;
++				compatible = "ti,pcm5122";
++				reg = <0x4c>;
++				status = "okay";
++			};
++		};
++	};
++};
+--- /dev/null
++++ b/arch/arm/boot/dts/overlays/lirc-rpi-overlay.dts
+@@ -0,0 +1,57 @@
++// Definitions for lirc-rpi module
++/dts-v1/;
++/plugin/;
++
++/ {
++	compatible = "brcm,bcm2708";
++
++	fragment at 0 {
++		target-path = "/";
++		__overlay__ {
++			lirc_rpi: lirc_rpi {
++				compatible = "rpi,lirc-rpi";
++				pinctrl-names = "default";
++				pinctrl-0 = <&lirc_pins>;
++				status = "okay";
++
++				// Override autodetection of IR receiver circuit
++				// (0 = active high, 1 = active low, -1 = no override )
++				rpi,sense = <0xffffffff>;
++
++				// Software carrier
++				// (0 = off, 1 = on)
++				rpi,softcarrier = <1>;
++
++				// Invert output
++				// (0 = off, 1 = on)
++				rpi,invert = <0>;
++
++				// Enable debugging messages
++				// (0 = off, 1 = on)
++				rpi,debug = <0>;
++			};
++		};
++	};
++
++	fragment at 1 {
++		target = <&gpio>;
++		__overlay__ {
++			lirc_pins: lirc_pins {
++				brcm,pins = <17 18>;
++				brcm,function = <1 0>; // out in
++				brcm,pull = <0 1>; // off down
++			};
++		};
++	};
++
++	__overrides__ {
++		gpio_out_pin =  <&lirc_pins>,"brcm,pins:0";
++		gpio_in_pin =   <&lirc_pins>,"brcm,pins:4";
++		gpio_in_pull =  <&lirc_pins>,"brcm,pull:4";
++
++		sense =         <&lirc_rpi>,"rpi,sense:0";
++		softcarrier =   <&lirc_rpi>,"rpi,softcarrier:0";
++		invert =        <&lirc_rpi>,"rpi,invert:0";
++		debug =         <&lirc_rpi>,"rpi,debug:0";
++	};
++};
+--- /dev/null
++++ b/arch/arm/boot/dts/overlays/mcp2515-can0-overlay.dts
+@@ -0,0 +1,69 @@
++/*
++ * Device tree overlay for mcp251x/can0 on spi0.0
++ */
++
++/dts-v1/;
++/plugin/;
++
++/ {
++    compatible = "brcm,bcm2835", "brcm,bcm2836", "brcm,bcm2708", "brcm,bcm2709";
++    /* disable spi-dev for spi0.0 */
++    fragment at 0 {
++        target = <&spi0>;
++        __overlay__ {
++            status = "okay";
++            spidev at 0{
++                status = "disabled";
++            };
++        };
++    };
++
++    /* the interrupt pin of the can-controller */
++    fragment at 1 {
++        target = <&gpio>;
++        __overlay__ {
++            can0_pins: can0_pins {
++                brcm,pins = <25>;
++                brcm,function = <0>; /* input */
++            };
++        };
++    };
++
++    /* the clock/oscillator of the can-controller */
++    fragment at 2 {
++        target-path = "/clocks";
++        __overlay__ {
++            /* external oscillator of mcp2515 on SPI0.0 */
++            can0_osc: can0_osc {
++                compatible = "fixed-clock";
++                #clock-cells = <0>;
++                clock-frequency  = <16000000>;
++            };
++        };
++    };
++
++    /* the spi config of the can-controller itself binding everything together */
++    fragment at 3 {
++        target = <&spi0>;
++        __overlay__ {
++            /* needed to avoid dtc warning */
++            #address-cells = <1>;
++            #size-cells = <0>;
++            can0: mcp2515 at 0 {
++                reg = <0>;
++                compatible = "microchip,mcp2515";
++                pinctrl-names = "default";
++                pinctrl-0 = <&can0_pins>;
++                spi-max-frequency = <10000000>;
++                interrupt-parent = <&gpio>;
++                interrupts = <25 0x2>;
++                clocks = <&can0_osc>;
++            };
++        };
++    };
++    __overrides__ {
++        oscillator = <&can0_osc>,"clock-frequency:0";
++        spimaxfrequency = <&can0>,"spi-max-frequency:0";
++        interrupt = <&can0_pins>,"brcm,pins:0",<&can0>,"interrupts:0";
++    };
++};
+--- /dev/null
++++ b/arch/arm/boot/dts/overlays/mcp2515-can1-overlay.dts
+@@ -0,0 +1,69 @@
++/*
++ * Device tree overlay for mcp251x/can1 on spi0.1 edited by petit_miner
++ */
++
++/dts-v1/;
++/plugin/;
++
++/ {
++    compatible = "brcm,bcm2835", "brcm,bcm2836", "brcm,bcm2708", "brcm,bcm2709";
++    /* disable spi-dev for spi0.1 */
++    fragment at 0 {
++        target = <&spi0>;
++        __overlay__ {
++            status = "okay";
++            spidev at 1{
++                status = "disabled";
++            };
++        };
++    };
++
++    /* the interrupt pin of the can-controller */
++    fragment at 1 {
++        target = <&gpio>;
++        __overlay__ {
++            can1_pins: can1_pins {
++                brcm,pins = <25>;
++                brcm,function = <0>; /* input */
++            };
++        };
++    };
++
++    /* the clock/oscillator of the can-controller */
++    fragment at 2 {
++        target-path = "/clocks";
++        __overlay__ {
++            /* external oscillator of mcp2515 on spi0.1 */
++            can1_osc: can1_osc {
++                compatible = "fixed-clock";
++                #clock-cells = <0>;
++                clock-frequency  = <16000000>;
++            };
++        };
++    };
++
++    /* the spi config of the can-controller itself binding everything together */
++    fragment at 3 {
++        target = <&spi0>;
++        __overlay__ {
++            /* needed to avoid dtc warning */
++            #address-cells = <1>;
++            #size-cells = <0>;
++            can1: mcp2515 at 1 {
++                reg = <1>;
++                compatible = "microchip,mcp2515";
++                pinctrl-names = "default";
++                pinctrl-0 = <&can1_pins>;
++                spi-max-frequency = <10000000>;
++                interrupt-parent = <&gpio>;
++                interrupts = <25 0x2>;
++                clocks = <&can1_osc>;
++            };
++        };
++    };
++    __overrides__ {
++        oscillator = <&can1_osc>,"clock-frequency:0";
++        spimaxfrequency = <&can1>,"spi-max-frequency:0";
++        interrupt = <&can1_pins>,"brcm,pins:0",<&can1>,"interrupts:0";
++    };
++};
+--- /dev/null
++++ b/arch/arm/boot/dts/overlays/mmc-overlay.dts
+@@ -0,0 +1,39 @@
++/dts-v1/;
++/plugin/;
++
++/{
++	compatible = "brcm,bcm2708";
++
++	fragment at 0 {
++		target = <&mmc>;
++		frag0: __overlay__ {
++			pinctrl-names = "default";
++			pinctrl-0 = <&mmc_pins>;
++			bus-width = <4>;
++			brcm,overclock-50 = <0>;
++			status = "okay";
++		};
++	};
++
++	fragment at 1 {
++		target = <&gpio>;
++		__overlay__ {
++			mmc_pins: mmc_pins {
++				brcm,pins = <48 49 50 51 52 53>;
++				brcm,function = <7>; /* alt3 */
++			};
++		};
++	};
++
++	fragment at 2 {
++		target = <&sdhost>;
++		__overlay__ {
++			status = "disabled";
++		};
++	};
++
++	__overrides__ {
++		overclock_50     = <&frag0>,"brcm,overclock-50:0";
++		force_pio        = <&frag0>,"brcm,force-pio?";
++	};
++};
+--- /dev/null
++++ b/arch/arm/boot/dts/overlays/mz61581-overlay.dts
+@@ -0,0 +1,111 @@
++/*
++ * Device Tree overlay for MZ61581-PI-EXT 2014.12.28 by Tontec
++ *
++ */
++
++/dts-v1/;
++/plugin/;
++
++/ {
++	compatible = "brcm,bcm2835", "brcm,bcm2708", "brcm,bcm2709";
++
++	fragment at 0 {
++		target = <&spi0>;
++		__overlay__ {
++			status = "okay";
++
++			spidev at 0{
++				status = "disabled";
++			};
++
++			spidev at 1{
++				status = "disabled";
++			};
++		};
++	};
++
++	fragment at 1 {
++		target = <&gpio>;
++		__overlay__ {
++			mz61581_pins: mz61581_pins {
++				brcm,pins = <4 15 18 25>;
++				brcm,function = <0 1 1 1>; /* in out out out */
++			};
++		};
++	};
++
++	fragment at 2 {
++		target = <&spi0>;
++		__overlay__ {
++			/* needed to avoid dtc warning */
++			#address-cells = <1>;
++			#size-cells = <0>;
++
++			mz61581: mz61581 at 0{
++				compatible = "samsung,s6d02a1";
++				reg = <0>;
++				pinctrl-names = "default";
++				pinctrl-0 = <&mz61581_pins>;
++
++				spi-max-frequency = <128000000>;
++				spi-cpol;
++				spi-cpha;
++
++				width = <320>;
++				height = <480>;
++				rotate = <270>;
++				bgr;
++				fps = <30>;
++				buswidth = <8>;
++				txbuflen = <32768>;
++
++				reset-gpios = <&gpio 15 0>;
++				dc-gpios = <&gpio 25 0>;
++				led-gpios = <&gpio 18 0>;
++
++				init = <0x10000b0 00
++					0x1000011
++					0x20000ff
++					0x10000b3 0x02 0x00 0x00 0x00
++					0x10000c0 0x13 0x3b 0x00 0x02 0x00 0x01 0x00 0x43
++					0x10000c1 0x08 0x16 0x08 0x08
++					0x10000c4 0x11 0x07 0x03 0x03
++					0x10000c6 0x00
++					0x10000c8 0x03 0x03 0x13 0x5c 0x03 0x07 0x14 0x08 0x00 0x21 0x08 0x14 0x07 0x53 0x0c 0x13 0x03 0x03 0x21 0x00
++					0x1000035 0x00
++					0x1000036 0xa0
++					0x100003a 0x55
++					0x1000044 0x00 0x01
++					0x10000d0 0x07 0x07 0x1d 0x03
++					0x10000d1 0x03 0x30 0x10
++					0x10000d2 0x03 0x14 0x04
++					0x1000029
++					0x100002c>;
++
++				/* This is a workaround to make sure the init sequence slows down and doesn't fail */
++				debug = <3>;
++			};
++
++			mz61581_ts: mz61581_ts at 1 {
++				compatible = "ti,ads7846";
++				reg = <1>;
++
++				spi-max-frequency = <2000000>;
++				interrupts = <4 2>; /* high-to-low edge triggered */
++				interrupt-parent = <&gpio>;
++				pendown-gpio = <&gpio 4 0>;
++
++				ti,x-plate-ohms = /bits/ 16 <60>;
++				ti,pressure-max = /bits/ 16 <255>;
++			};
++		};
++	};
++	__overrides__ {
++		speed =   <&mz61581>, "spi-max-frequency:0";
++		rotate =  <&mz61581>, "rotate:0";
++		fps =     <&mz61581>, "fps:0";
++		txbuflen = <&mz61581>, "txbuflen:0";
++		debug =   <&mz61581>, "debug:0";
++		xohms =   <&mz61581_ts>,"ti,x-plate-ohms:0";
++	};
++};
+--- /dev/null
++++ b/arch/arm/boot/dts/overlays/piscreen-overlay.dts
+@@ -0,0 +1,96 @@
++/*
++ * Device Tree overlay for PiScreen 3.5" display shield by Ozzmaker
++ *
++ */
++
++/dts-v1/;
++/plugin/;
++
++/ {
++	compatible = "brcm,bcm2835", "brcm,bcm2708", "brcm,bcm2709";
++
++	fragment at 0 {
++		target = <&spi0>;
++		__overlay__ {
++			status = "okay";
++
++			spidev at 0{
++				status = "disabled";
++			};
++
++			spidev at 1{
++				status = "disabled";
++			};
++		};
++	};
++
++	fragment at 1 {
++		target = <&gpio>;
++		__overlay__ {
++			piscreen_pins: piscreen_pins {
++				brcm,pins = <17 25 24 22>;
++				brcm,function = <0 1 1 1>; /* in out out out */
++			};
++		};
++	};
++
++	fragment at 2 {
++		target = <&spi0>;
++		__overlay__ {
++			/* needed to avoid dtc warning */
++			#address-cells = <1>;
++			#size-cells = <0>;
++
++			piscreen: piscreen at 0{
++				compatible = "ilitek,ili9486";
++				reg = <0>;
++				pinctrl-names = "default";
++				pinctrl-0 = <&piscreen_pins>;
++
++				spi-max-frequency = <24000000>;
++				rotate = <270>;
++				bgr;
++				fps = <30>;
++				buswidth = <8>;
++				regwidth = <16>;
++				reset-gpios = <&gpio 25 0>;
++				dc-gpios = <&gpio 24 0>;
++				led-gpios = <&gpio 22 1>;
++				debug = <0>;
++
++				init = <0x10000b0 0x00
++				        0x1000011
++					0x20000ff
++					0x100003a 0x55
++					0x1000036 0x28
++					0x10000c2 0x44
++					0x10000c5 0x00 0x00 0x00 0x00
++					0x10000e0 0x0f 0x1f 0x1c 0x0c 0x0f 0x08 0x48 0x98 0x37 0x0a 0x13 0x04 0x11 0x0d 0x00
++					0x10000e1 0x0f 0x32 0x2e 0x0b 0x0d 0x05 0x47 0x75 0x37 0x06 0x10 0x03 0x24 0x20 0x00
++					0x10000e2 0x0f 0x32 0x2e 0x0b 0x0d 0x05 0x47 0x75 0x37 0x06 0x10 0x03 0x24 0x20 0x00
++					0x1000011
++					0x1000029>;
++			};
++
++			piscreen_ts: piscreen-ts at 1 {
++				compatible = "ti,ads7846";
++				reg = <1>;
++
++				spi-max-frequency = <2000000>;
++				interrupts = <17 2>; /* high-to-low edge triggered */
++				interrupt-parent = <&gpio>;
++				pendown-gpio = <&gpio 17 0>;
++				ti,swap-xy;
++				ti,x-plate-ohms = /bits/ 16 <100>;
++				ti,pressure-max = /bits/ 16 <255>;
++			};
++		};
++	};
++	__overrides__ {
++		speed =		<&piscreen>,"spi-max-frequency:0";
++		rotate =	<&piscreen>,"rotate:0";
++		fps =		<&piscreen>,"fps:0";
++		debug =		<&piscreen>,"debug:0";
++		xohms =		<&piscreen_ts>,"ti,x-plate-ohms:0";
++	};
++};
+--- /dev/null
++++ b/arch/arm/boot/dts/overlays/pitft28-resistive-overlay.dts
+@@ -0,0 +1,115 @@
++/*
++ * Device Tree overlay for Adafruit PiTFT 2.8" resistive touch screen
++ *
++ */
++
++/dts-v1/;
++/plugin/;
++
++/ {
++	compatible = "brcm,bcm2835", "brcm,bcm2708", "brcm,bcm2709";
++
++	fragment at 0 {
++		target = <&spi0>;
++		__overlay__ {
++			status = "okay";
++
++			spidev at 0{
++				status = "disabled";
++			};
++
++			spidev at 1{
++				status = "disabled";
++			};
++		};
++	};
++
++	fragment at 1 {
++		target = <&gpio>;
++		__overlay__ {
++			pitft_pins: pitft_pins {
++				brcm,pins = <24 25>;
++				brcm,function = <0 1>; /* in out */
++				brcm,pull = <2 0>; /* pullup none */
++			};
++		};
++	};
++
++	fragment at 2 {
++		target = <&spi0>;
++		__overlay__ {
++			/* needed to avoid dtc warning */
++			#address-cells = <1>;
++			#size-cells = <0>;
++
++			pitft: pitft at 0{
++				compatible = "ilitek,ili9340";
++				reg = <0>;
++				pinctrl-names = "default";
++				pinctrl-0 = <&pitft_pins>;
++
++				spi-max-frequency = <32000000>;
++				rotate = <90>;
++				fps = <25>;
++				bgr;
++				buswidth = <8>;
++				dc-gpios = <&gpio 25 0>;
++				debug = <0>;
++			};
++
++			pitft_ts at 1 {
++				#address-cells = <1>;
++				#size-cells = <0>;
++				compatible = "st,stmpe610";
++				reg = <1>;
++
++				spi-max-frequency = <500000>;
++				irq-gpio = <&gpio 24 0x2>; /* IRQF_TRIGGER_FALLING */
++				interrupts = <24 2>; /* high-to-low edge triggered */
++				interrupt-parent = <&gpio>;
++				interrupt-controller;
++
++				stmpe_touchscreen {
++					compatible = "st,stmpe-ts";
++					st,sample-time = <4>;
++					st,mod-12b = <1>;
++					st,ref-sel = <0>;
++					st,adc-freq = <2>;
++					st,ave-ctrl = <3>;
++					st,touch-det-delay = <4>;
++					st,settling = <2>;
++					st,fraction-z = <7>;
++					st,i-drive = <0>;
++				};
++
++				stmpe_gpio: stmpe_gpio {
++					#gpio-cells = <2>;
++					compatible = "st,stmpe-gpio";
++					/*
++					 * only GPIO2 is wired/available
++					 * and it is wired to the backlight
++					 */
++					st,norequest-mask = <0x7b>;
++				};
++			};
++		};
++	};
++
++	fragment at 3 {
++		target-path = "/soc";
++		__overlay__ {
++			backlight {
++				compatible = "gpio-backlight";
++				gpios = <&stmpe_gpio 2 0>;
++				default-on;
++			};
++		};
++	};
++
++	__overrides__ {
++		speed =   <&pitft>,"spi-max-frequency:0";
++		rotate =  <&pitft>,"rotate:0";
++		fps =     <&pitft>,"fps:0";
++		debug =   <&pitft>,"debug:0";
++	};
++};
+--- /dev/null
++++ b/arch/arm/boot/dts/overlays/pps-gpio-overlay.dts
+@@ -0,0 +1,34 @@
++/dts-v1/;
++/plugin/;
++
++/ {
++	compatible = "brcm,bcm2708";
++	fragment at 0 {
++		target-path = "/";
++		__overlay__ {
++			pps: pps {
++				compatible = "pps-gpio";
++				pinctrl-names = "default";
++				pinctrl-0 = <&pps_pins>;
++				gpios = <&gpio 18 0>;
++				status = "okay";
++			};
++		};
++	};
++
++	fragment at 1 {
++		target = <&gpio>;
++		__overlay__ {
++			pps_pins: pps_pins {
++				brcm,pins =     <18>;
++				brcm,function = <0>;    // in
++				brcm,pull =     <0>;    // off
++			};
++		};
++	};
++
++	__overrides__ {
++		gpiopin = <&pps>,"gpios:4",
++			  <&pps_pins>,"brcm,pins:0";
++	};
++};
+--- /dev/null
++++ b/arch/arm/boot/dts/overlays/pwm-2chan-overlay.dts
+@@ -0,0 +1,46 @@
++/dts-v1/;
++/plugin/;
++
++/*
++This is the 2-channel overlay - only use it if you need both channels.
++
++Legal pin,function combinations for each channel:
++  PWM0: 12,4(Alt0) 18,2(Alt5) 40,4(Alt0)            52,5(Alt1)
++  PWM1: 13,4(Alt0) 19,2(Alt5) 41,4(Alt0) 45,4(Alt0) 53,5(Alt1)
++
++N.B.:
++  1) Pin 18 is the only one available on all platforms, and
++     it is the one used by the I2S audio interface.
++     Pins 12 and 13 might be better choices on an A+, B+ or Pi2.
++  2) The onboard analogue audio output uses both PWM channels.
++  3) So be careful mixing audio and PWM.
++*/
++
++/ {
++	fragment at 0 {
++		target = <&gpio>;
++		__overlay__ {
++			pwm_pins: pwm_pins {
++				brcm,pins = <18 19>;
++				brcm,function = <2 2>; /* Alt5 */
++			};
++		};
++	};
++
++	fragment at 1 {
++		target = <&pwm>;
++		__overlay__ {
++			pinctrl-names = "default";
++			pinctrl-0 = <&pwm_pins>;
++			status = "okay";
++		};
++	};
++
++	__overrides__ {
++		pin   = <&pwm_pins>,"brcm,pins:0";
++		pin2  = <&pwm_pins>,"brcm,pins:4";
++		func  = <&pwm_pins>,"brcm,function:0";
++		func2 = <&pwm_pins>,"brcm,function:4";
++		clock = <&clk_pwm>,"clock-frequency:0";
++	};
++};
+--- /dev/null
++++ b/arch/arm/boot/dts/overlays/pwm-overlay.dts
+@@ -0,0 +1,42 @@
++/dts-v1/;
++/plugin/;
++
++/*
++Legal pin,function combinations for each channel:
++  PWM0: 12,4(Alt0) 18,2(Alt5) 40,4(Alt0)            52,5(Alt1)
++  PWM1: 13,4(Alt0) 19,2(Alt5) 41,4(Alt0) 45,4(Alt0) 53,5(Alt1)
++
++N.B.:
++  1) Pin 18 is the only one available on all platforms, and
++     it is the one used by the I2S audio interface.
++     Pins 12 and 13 might be better choices on an A+, B+ or Pi2.
++  2) The onboard analogue audio output uses both PWM channels.
++  3) So be careful mixing audio and PWM.
++*/
++
++/ {
++	fragment at 0 {
++		target = <&gpio>;
++		__overlay__ {
++			pwm_pins: pwm_pins {
++				brcm,pins = <18>;
++				brcm,function = <2>; /* Alt5 */
++			};
++		};
++	};
++
++	fragment at 1 {
++		target = <&pwm>;
++		__overlay__ {
++			pinctrl-names = "default";
++			pinctrl-0 = <&pwm_pins>;
++			status = "okay";
++		};
++	};
++
++	__overrides__ {
++		pin   = <&pwm_pins>,"brcm,pins:0";
++		func  = <&pwm_pins>,"brcm,function:0";
++		clock = <&clk_pwm>,"clock-frequency:0";
++	};
++};
+--- /dev/null
++++ b/arch/arm/boot/dts/overlays/raspidac3-overlay.dts
+@@ -0,0 +1,45 @@
++// Definitions for RaspiDACv3
++/dts-v1/;
++/plugin/;
++
++/ {
++	compatible = "brcm,bcm2708";
++
++	fragment at 0 {
++		target = <&sound>;
++		__overlay__ {
++			compatible = "jg,raspidacv3";
++			i2s-controller = <&i2s>;
++			status = "okay";
++		};
++	};
++
++	fragment at 1 {
++		target = <&i2s>;
++		__overlay__ {
++			status = "okay";
++		};
++	};
++
++	fragment at 2 {
++		target = <&i2c1>;
++		__overlay__ {
++			#address-cells = <1>;
++			#size-cells = <0>;
++			status = "okay";
++
++			pcm5122 at 4c {
++				#sound-dai-cells = <0>;
++				compatible = "ti,pcm5122";
++				reg = <0x4c>;
++				status = "okay";
++			};
++
++			tpa6130a2: tpa6130a2 at 60 {
++				compatible = "ti,tpa6130a2";
++				reg = <0x60>;
++				status = "okay";
++			};
++		};
++	};
++};
+--- /dev/null
++++ b/arch/arm/boot/dts/overlays/rpi-dac-overlay.dts
+@@ -0,0 +1,34 @@
++// Definitions for RPi DAC
++/dts-v1/;
++/plugin/;
++
++/ {
++	compatible = "brcm,bcm2708";
++
++	fragment at 0 {
++		target = <&sound>;
++		__overlay__ {
++			compatible = "rpi,rpi-dac";
++			i2s-controller = <&i2s>;
++			status = "okay";
++		};
++	};
++
++	fragment at 1 {
++		target = <&i2s>;
++		__overlay__ {
++			status = "okay";
++		};
++	};
++
++	fragment at 2 {
++		target-path = "/";
++		__overlay__ {
++			pcm1794a-codec {
++				#sound-dai-cells = <0>;
++				compatible = "ti,pcm1794a";
++				status = "okay";
++			};
++		};
++	};
++};
+--- /dev/null
++++ b/arch/arm/boot/dts/overlays/rpi-display-overlay.dts
+@@ -0,0 +1,82 @@
++/*
++ * Device Tree overlay for rpi-display by Watterott
++ *
++ */
++
++/dts-v1/;
++/plugin/;
++
++/ {
++	compatible = "brcm,bcm2835", "brcm,bcm2708", "brcm,bcm2709";
++
++	fragment at 0 {
++		target = <&spi0>;
++		__overlay__ {
++			status = "okay";
++
++			spidev at 0{
++				status = "disabled";
++			};
++
++			spidev at 1{
++				status = "disabled";
++			};
++		};
++	};
++
++	fragment at 1 {
++		target = <&gpio>;
++		__overlay__ {
++			rpi_display_pins: rpi_display_pins {
++				brcm,pins = <18 23 24 25>;
++				brcm,function = <1 1 1 0>; /* out out out in */
++				brcm,pull = <0 0 0 2>; /* - - - up */
++			};
++		};
++	};
++
++	fragment at 2 {
++		target = <&spi0>;
++		__overlay__ {
++			/* needed to avoid dtc warning */
++			#address-cells = <1>;
++			#size-cells = <0>;
++
++			rpidisplay: rpi-display at 0{
++				compatible = "ilitek,ili9341";
++				reg = <0>;
++				pinctrl-names = "default";
++				pinctrl-0 = <&rpi_display_pins>;
++
++				spi-max-frequency = <32000000>;
++				rotate = <270>;
++				bgr;
++				fps = <30>;
++				buswidth = <8>;
++				reset-gpios = <&gpio 23 0>;
++				dc-gpios = <&gpio 24 0>;
++				led-gpios = <&gpio 18 1>;
++				debug = <0>;
++			};
++
++			rpidisplay_ts: rpi-display-ts at 1 {
++				compatible = "ti,ads7846";
++				reg = <1>;
++
++				spi-max-frequency = <2000000>;
++				interrupts = <25 2>; /* high-to-low edge triggered */
++				interrupt-parent = <&gpio>;
++				pendown-gpio = <&gpio 25 0>;
++				ti,x-plate-ohms = /bits/ 16 <60>;
++				ti,pressure-max = /bits/ 16 <255>;
++			};
++		};
++	};
++	__overrides__ {
++		speed =   <&rpidisplay>,"spi-max-frequency:0";
++		rotate =  <&rpidisplay>,"rotate:0";
++		fps =     <&rpidisplay>,"fps:0";
++		debug =   <&rpidisplay>,"debug:0";
++		xohms =   <&rpidisplay_ts>,"ti,x-plate-ohms:0";
++	};
++};
+--- /dev/null
++++ b/arch/arm/boot/dts/overlays/rpi-ft5406-overlay.dts
+@@ -0,0 +1,17 @@
++/dts-v1/;
++/plugin/;
++
++/ {
++	compatible = "brcm,bcm2708";
++
++	fragment at 0 {
++		target-path = "/";
++		__overlay__ {
++			rpi_ft5406: rpi_ft5406 {
++				compatible = "rpi,rpi-ft5406";
++				firmware = <&firmware>;
++				status = "okay";
++			};
++		};
++	};
++};
+--- /dev/null
++++ b/arch/arm/boot/dts/overlays/rpi-proto-overlay.dts
+@@ -0,0 +1,39 @@
++// Definitions for Rpi-Proto
++/dts-v1/;
++/plugin/;
++
++/ {
++	compatible = "brcm,bcm2708";
++
++	fragment at 0 {
++		target = <&sound>;
++		__overlay__ {
++			compatible = "rpi,rpi-proto";
++			i2s-controller = <&i2s>;
++			status = "okay";
++		};
++	};
++
++	fragment at 1 {
++		target = <&i2s>;
++		__overlay__ {
++			status = "okay";
++		};
++	};
++
++	fragment at 2 {
++		target = <&i2c1>;
++		__overlay__ {
++			#address-cells = <1>;
++			#size-cells = <0>;
++			status = "okay";
++
++			wm8731 at 1a {
++				#sound-dai-cells = <0>;
++				compatible = "wlf,wm8731";
++				reg = <0x1a>;
++				status = "okay";
++			};
++		};
++	};
++};
+--- /dev/null
++++ b/arch/arm/boot/dts/overlays/rpi-sense-overlay.dts
+@@ -0,0 +1,47 @@
++// rpi-sense HAT
++/dts-v1/;
++/plugin/;
++
++/ {
++	compatible = "brcm,bcm2708", "brcm,bcm2709";
++
++	fragment at 0 {
++		target = <&i2c1>;
++		__overlay__ {
++			#address-cells = <1>;
++			#size-cells = <0>;
++			status = "okay";
++
++			rpi-sense at 46 {
++				compatible = "rpi,rpi-sense";
++				reg = <0x46>;
++				keys-int-gpios = <&gpio 23 1>;
++				status = "okay";
++			};
++
++			lsm9ds1-magn at 1c {
++				compatible = "st,lsm9ds1-magn";
++				reg = <0x1c>;
++				status = "okay";
++			};
++
++			lsm9ds1-accel at 6a {
++				compatible = "st,lsm9ds1-accel";
++				reg = <0x6a>;
++				status = "okay";
++			};
++
++			lps25h-press at 5c {
++				compatible = "st,lps25h-press";
++				reg = <0x5c>;
++				status = "okay";
++			};
++
++			hts221-humid at 5f {
++				compatible = "st,hts221-humid";
++				reg = <0x5f>;
++				status = "okay";
++			};
++		};
++	};
++};
+--- /dev/null
++++ b/arch/arm/boot/dts/overlays/sdhost-overlay.dts
+@@ -0,0 +1,29 @@
++/dts-v1/;
++/plugin/;
++
++/{
++	compatible = "brcm,bcm2708";
++
++	fragment at 0 {
++		target = <&mmc>;
++		__overlay__ {
++			status = "disabled";
++		};
++	};
++
++	fragment at 1 {
++		target = <&sdhost>;
++		frag1: __overlay__ {
++			brcm,overclock-50 = <0>;
++			brcm,pio-limit = <1>;
++			status = "okay";
++		};
++	};
++
++	__overrides__ {
++		overclock_50     = <&frag1>,"brcm,overclock-50:0";
++		force_pio        = <&frag1>,"brcm,force-pio?";
++		pio_limit        = <&frag1>,"brcm,pio-limit:0";
++		debug            = <&frag1>,"brcm,debug?";
++	};
++};
+--- /dev/null
++++ b/arch/arm/boot/dts/overlays/sdio-overlay.dts
+@@ -0,0 +1,32 @@
++/* Enable SDIO from MMC interface via GPIOs 22-27. Includes sdhost overlay. */
++
++/include/ "sdhost-overlay.dts"
++
++/{
++	compatible = "brcm,bcm2708";
++
++	fragment at 3 {
++		target = <&mmc>;
++		sdio_mmc: __overlay__ {
++			pinctrl-names = "default";
++			pinctrl-0 = <&sdio_pins>;
++			non-removable;
++			status = "okay";
++		};
++	};
++
++	fragment at 4 {
++		target = <&gpio>;
++		__overlay__ {
++			sdio_pins: sdio_pins {
++				brcm,pins = <22 23 24 25 26 27>;
++				brcm,function = <7 7 7 7 7 7>; /* ALT3 = SD1 */
++				brcm,pull = <0 2 2 2 2 2>;
++			};
++		};
++	};
++
++	__overrides__ {
++		poll_once = <&sdio_mmc>,"non-removable?";
++	};
++};
+--- /dev/null
++++ b/arch/arm/boot/dts/overlays/smi-dev-overlay.dts
+@@ -0,0 +1,18 @@
++// Description: Overlay to enable character device interface for SMI.
++// Author:	Luke Wren <luke at raspberrypi.org>
++
++/dts-v1/;
++/plugin/;
++
++/{
++	fragment at 0 {
++		target = <&soc>;
++		__overlay__ {
++			smi_dev {
++				compatible = "brcm,bcm2835-smi-dev";
++				smi_handle = <&smi>;
++				status = "okay";
++			};
++		};
++	};
++};
+--- /dev/null
++++ b/arch/arm/boot/dts/overlays/smi-nand-overlay.dts
+@@ -0,0 +1,69 @@
++// Description: Overlay to enable NAND flash through
++// the secondary memory interface
++// Author:	Luke Wren
++
++/dts-v1/;
++/plugin/;
++
++/{
++	compatible = "brcm,bcm2708";
++
++	fragment at 0 {
++		target = <&smi>;
++		__overlay__ {
++			pinctrl-names = "default";
++			pinctrl-0 = <&smi_pins>;
++			status = "okay";
++		};
++	};
++
++	fragment at 1 {
++		target = <&soc>;
++		__overlay__ {
++			#address-cells = <1>;
++			#size-cells = <1>;
++
++			nand: flash at 0 {
++				compatible = "brcm,bcm2835-smi-nand";
++				smi_handle = <&smi>;
++				#address-cells = <1>;
++				#size-cells = <1>;
++				status = "okay";
++
++				partition at 0 {
++					label = "stage2";
++					// 128k
++					reg = <0 0x20000>;
++					read-only;
++				};
++				partition at 1 {
++					label = "firmware";
++					// 16M
++					reg = <0x20000 0x1000000>;
++					read-only;
++				};
++				partition at 2 {
++					label = "root";
++					// 2G (will need to use 64 bit for >=4G)
++					reg = <0x1020000 0x80000000>;
++				};
++			};
++		};
++	};
++
++	fragment at 2 {
++		target = <&gpio>;
++		__overlay__ {
++			smi_pins: smi_pins {
++				brcm,pins = <0 1 2 3 4 5 6 7 8 9 10 11
++					12 13 14 15>;
++				/* Alt 1: SMI */
++				brcm,function = <5 5 5 5 5 5 5 5 5 5 5
++					5 5 5 5 5>;
++				/* /CS, /WE and /OE are pulled high, as they are
++				   generally active low signals */
++				brcm,pull = <2 2 2 2 2 2 2 2 0 0 0 0 0 0 0 0>;
++			};
++		};
++	};
++};
+--- /dev/null
++++ b/arch/arm/boot/dts/overlays/smi-overlay.dts
+@@ -0,0 +1,37 @@
++// Description:	Overlay to enable the secondary memory interface peripheral
++// Author:	Luke Wren
++
++/dts-v1/;
++/plugin/;
++
++/{
++	compatible = "brcm,bcm2708";
++
++	fragment at 0 {
++		target = <&smi>;
++		__overlay__ {
++			pinctrl-names = "default";
++			pinctrl-0 = <&smi_pins>;
++			status = "okay";
++		};
++	};
++
++	fragment at 1 {
++		target = <&gpio>;
++		__overlay__ {
++			smi_pins: smi_pins {
++				/* Don't configure the top two address bits, as
++				   these are already used as ID_SD and ID_SC */
++				brcm,pins = <2 3 4 5 6 7 8 9 10 11 12 13 14 15
++					     16 17 18 19 20 21 22 23 24 25>;
++				/* Alt 1: SMI */
++				brcm,function = <5 5 5 5 5 5 5 5 5 5 5 5 5 5 5
++						 5 5 5 5 5 5 5 5 5>;
++				/* /CS, /WE and /OE are pulled high, as they are
++				   generally active low signals */
++				brcm,pull = <2 2 2 2 2 2 0 0 0 0 0 0 0 0 0 0 0
++					     0 0 0 0 0 0 0>;
++			};
++		};
++	};
++};
+--- /dev/null
++++ b/arch/arm/boot/dts/overlays/spi-gpio35-39-overlay.dts
+@@ -0,0 +1,31 @@
++/*
++ * Device tree overlay to move spi0 to gpio 35 to 39 on CM
++ */
++
++/dts-v1/;
++/plugin/;
++
++/ {
++	compatible = "brcm,bcm2835", "brcm,bcm2836", "brcm,bcm2708", "brcm,bcm2709";
++
++	fragment at 0 {
++		target = <&spi0>;
++		__overlay__ {
++			cs-gpios = <&gpio 36 1>, <&gpio 35 1>;
++		};
++	};
++
++	fragment at 1 {
++		target = <&spi0_cs_pins>;
++		__overlay__ {
++			brcm,pins = <36 35>;
++		};
++	};
++
++	fragment at 2 {
++		target = <&spi0_pins>;
++		__overlay__ {
++			brcm,pins = <37 38 39>;
++		};
++	};
++};
+--- /dev/null
++++ b/arch/arm/boot/dts/overlays/tinylcd35-overlay.dts
+@@ -0,0 +1,216 @@
++/*
++ * tinylcd35-overlay.dts
++ *
++ * -------------------------------------------------
++ * www.tinylcd.com
++ * -------------------------------------------------
++ * Device---Driver-----BUS       GPIO's
++ * display  tinylcd35  spi0.0    25 24 18
++ * touch    ads7846    spi0.1    5
++ * rtc      ds1307     i2c1-0068
++ * rtc      pcf8563    i2c1-0051
++ * keypad   gpio-keys  --------- 17 22 27 23 4
++ *
++ *
++ * TinyLCD.com 3.5 inch TFT
++ *
++ *  Version 001
++ *  5/3/2015  -- Noralf Trønnes     Initial Device tree framework
++ *  10/3/2015 -- tinylcd at gmail.com  added ds1307 support.
++ *
++ */
++
++/dts-v1/;
++/plugin/;
++
++/ {
++	compatible = "brcm,bcm2835", "brcm,bcm2708", "brcm,bcm2709";
++
++	fragment at 0 {
++		target = <&spi0>;
++		__overlay__ {
++			status = "okay";
++
++			spidev at 0{
++				status = "disabled";
++			};
++
++			spidev at 1{
++				status = "disabled";
++			};
++		};
++	};
++
++	fragment at 1 {
++		target = <&gpio>;
++		__overlay__ {
++			tinylcd35_pins: tinylcd35_pins {
++				brcm,pins = <25 24 18>;
++				brcm,function = <1>; /* out */
++			};
++			tinylcd35_ts_pins: tinylcd35_ts_pins {
++				brcm,pins = <5>;
++				brcm,function = <0>; /* in */
++			};
++			keypad_pins: keypad_pins {
++				brcm,pins = <4 17 22 23 27>;
++				brcm,function = <0>; /* in */
++				brcm,pull = <1>; /* down */
++			};
++		};
++	};
++
++	fragment at 2 {
++		target = <&spi0>;
++		__overlay__ {
++			/* needed to avoid dtc warning */
++			#address-cells = <1>;
++			#size-cells = <0>;
++
++			tinylcd35: tinylcd35 at 0{
++				compatible = "neosec,tinylcd";
++				reg = <0>;
++				pinctrl-names = "default";
++				pinctrl-0 = <&tinylcd35_pins>,
++					    <&tinylcd35_ts_pins>;
++
++				spi-max-frequency = <48000000>;
++				rotate = <270>;
++				fps = <20>;
++				bgr;
++				buswidth = <8>;
++				reset-gpios = <&gpio 25 0>;
++				dc-gpios = <&gpio 24 0>;
++				led-gpios = <&gpio 18 1>;
++				debug = <0>;
++
++				init = <0x10000B0 0x80
++					0x10000C0 0x0A 0x0A
++					0x10000C1 0x01 0x01
++					0x10000C2 0x33
++					0x10000C5 0x00 0x42 0x80
++					0x10000B1 0xD0 0x11
++					0x10000B4 0x02
++					0x10000B6 0x00 0x22 0x3B
++					0x10000B7 0x07
++					0x1000036 0x58
++					0x10000F0 0x36 0xA5 0xD3
++					0x10000E5 0x80
++					0x10000E5 0x01
++					0x10000B3 0x00
++					0x10000E5 0x00
++					0x10000F0 0x36 0xA5 0x53
++					0x10000E0 0x00 0x35 0x33 0x00 0x00 0x00 0x00 0x35 0x33 0x00 0x00 0x00
++					0x100003A 0x55
++					0x1000011
++					0x2000001
++					0x1000029>;
++			};
++
++			tinylcd35_ts: tinylcd35_ts at 1 {
++				compatible = "ti,ads7846";
++				reg = <1>;
++				status = "disabled";
++
++				spi-max-frequency = <2000000>;
++				interrupts = <5 2>; /* high-to-low edge triggered */
++				interrupt-parent = <&gpio>;
++				pendown-gpio = <&gpio 5 0>;
++				ti,x-plate-ohms = /bits/ 16 <100>;
++				ti,pressure-max = /bits/ 16 <255>;
++			};
++		};
++	};
++
++	/*  RTC    */
++
++	fragment at 3 {
++		target = <&i2c1>;
++		__overlay__ {
++			#address-cells = <1>;
++			#size-cells = <0>;
++
++			pcf8563: pcf8563 at 51 {
++				compatible = "nxp,pcf8563";
++				reg = <0x51>;
++				status = "disabled";
++			};
++		};
++	};
++
++	fragment at 4 {
++		target = <&i2c1>;
++		__overlay__ {
++			#address-cells = <1>;
++			#size-cells = <0>;
++
++			ds1307: ds1307 at 68 {
++				compatible = "maxim,ds1307";
++				reg = <0x68>;
++				status = "disabled";
++			};
++		};
++	};
++
++	/*
++	 * Values for input event code is found under the
++	 * 'Keys and buttons' heading in include/uapi/linux/input.h
++	 */
++	fragment at 5 {
++		target-path = "/soc";
++		__overlay__ {
++			keypad: keypad {
++				compatible = "gpio-keys";
++				#address-cells = <1>;
++				#size-cells = <0>;
++				pinctrl-names = "default";
++				pinctrl-0 = <&keypad_pins>;
++				status = "disabled";
++				autorepeat;
++
++				button at 17 {
++					label = "GPIO KEY_UP";
++					linux,code = <103>;
++					gpios = <&gpio 17 0>;
++				};
++				button at 22 {
++					label = "GPIO KEY_DOWN";
++					linux,code = <108>;
++					gpios = <&gpio 22 0>;
++				};
++				button at 27 {
++					label = "GPIO KEY_LEFT";
++					linux,code = <105>;
++					gpios = <&gpio 27 0>;
++				};
++				button at 23 {
++					label = "GPIO KEY_RIGHT";
++					linux,code = <106>;
++					gpios = <&gpio 23 0>;
++				};
++				button at 4 {
++					label = "GPIO KEY_ENTER";
++					linux,code = <28>;
++					gpios = <&gpio 4 0>;
++				};
++			};
++		};
++	};
++
++	__overrides__ {
++		speed =      <&tinylcd35>,"spi-max-frequency:0";
++		rotate =     <&tinylcd35>,"rotate:0";
++		fps =        <&tinylcd35>,"fps:0";
++		debug =      <&tinylcd35>,"debug:0";
++		touch =      <&tinylcd35_ts>,"status";
++		touchgpio =  <&tinylcd35_ts_pins>,"brcm,pins:0",
++			     <&tinylcd35_ts>,"interrupts:0",
++			     <&tinylcd35_ts>,"pendown-gpio:4";
++		xohms =      <&tinylcd35_ts>,"ti,x-plate-ohms:0";
++		rtc-pcf =    <&i2c1>,"status",
++			     <&pcf8563>,"status";
++		rtc-ds =     <&i2c1>,"status",
++			     <&ds1307>,"status";
++		keypad =     <&keypad>,"status";
++	};
++};
+--- /dev/null
++++ b/arch/arm/boot/dts/overlays/uart1-overlay.dts
+@@ -0,0 +1,38 @@
++/dts-v1/;
++/plugin/;
++
++/{
++	compatible = "brcm,bcm2708";
++
++	fragment at 0 {
++		target = <&uart1>;
++		__overlay__ {
++			pinctrl-names = "default";
++			pinctrl-0 = <&uart1_pins>;
++			status = "okay";
++		};
++	};
++
++	fragment at 1 {
++		target = <&gpio>;
++		__overlay__ {
++			uart1_pins: uart1_pins {
++				brcm,pins = <14 15>;
++				brcm,function = <2>; /* alt5 */
++				brcm,pull = <0 2>;
++			};
++		};
++	};
++
++	fragment at 2 {
++		target-path = "/chosen";
++		__overlay__ {
++			bootargs = "8250.nr_uarts=1";
++		};
++	};
++
++	__overrides__ {
++		txd1_pin = <&uart1_pins>,"brcm,pins:0";
++		rxd1_pin = <&uart1_pins>,"brcm,pins:4";
++	};
++};
+--- /dev/null
++++ b/arch/arm/boot/dts/overlays/vga666-overlay.dts
+@@ -0,0 +1,30 @@
++/dts-v1/;
++/plugin/;
++
++/{
++	compatible = "brcm,bcm2708";
++
++	// There is no VGA driver module, but we need a platform device
++	// node (that doesn't already use pinctrl) to hang the pinctrl
++	// reference on - leds will do
++
++	fragment at 0 {
++		target = <&leds>;
++		__overlay__ {
++			pinctrl-names = "default";
++			pinctrl-0 = <&vga666_pins>;
++		};
++	};
++
++	fragment at 1 {
++		target = <&gpio>;
++		__overlay__ {
++			vga666_pins: vga666_pins {
++				brcm,pins = <2 3 4 5 6 7 8 9 10 11 12
++					     13 14 15 16 17 18 19 20 21>;
++				brcm,function = <6>; /* alt2 */
++				brcm,pull = <0>; /* no pull */
++			};
++		};
++	};
++};
+--- /dev/null
++++ b/arch/arm/boot/dts/overlays/w1-gpio-overlay.dts
+@@ -0,0 +1,39 @@
++// Definitions for w1-gpio module (without external pullup)
++/dts-v1/;
++/plugin/;
++
++/ {
++	compatible = "brcm,bcm2708";
++
++	fragment at 0 {
++		target-path = "/";
++		__overlay__ {
++
++			w1: onewire at 0 {
++				compatible = "w1-gpio";
++				pinctrl-names = "default";
++				pinctrl-0 = <&w1_pins>;
++				gpios = <&gpio 4 0>;
++				rpi,parasitic-power = <0>;
++				status = "okay";
++			};
++		};
++	};
++
++	fragment at 1 {
++		target = <&gpio>;
++		__overlay__ {
++			w1_pins: w1_pins {
++				brcm,pins = <4>;
++				brcm,function = <0>; // in (initially)
++				brcm,pull = <0>; // off
++			};
++		};
++	};
++
++	__overrides__ {
++		gpiopin =       <&w1>,"gpios:4",
++				<&w1_pins>,"brcm,pins:0";
++		pullup =        <&w1>,"rpi,parasitic-power:0";
++	};
++};
+--- /dev/null
++++ b/arch/arm/boot/dts/overlays/w1-gpio-pullup-overlay.dts
+@@ -0,0 +1,41 @@
++// Definitions for w1-gpio module (with external pullup)
++/dts-v1/;
++/plugin/;
++
++/ {
++	compatible = "brcm,bcm2708";
++
++	fragment at 0 {
++		target-path = "/";
++		__overlay__ {
++
++			w1: onewire at 0 {
++				compatible = "w1-gpio";
++				pinctrl-names = "default";
++				pinctrl-0 = <&w1_pins>;
++				gpios = <&gpio 4 0>, <&gpio 5 1>;
++				rpi,parasitic-power = <0>;
++				status = "okay";
++			};
++		};
++	};
++
++	fragment at 1 {
++		target = <&gpio>;
++		__overlay__ {
++			w1_pins: w1_pins {
++				brcm,pins = <4 5>;
++				brcm,function = <0 1>; // in out
++				brcm,pull = <0 0>; // off off
++			};
++		};
++	};
++
++	__overrides__ {
++		gpiopin =       <&w1>,"gpios:4",
++				<&w1_pins>,"brcm,pins:0";
++		extpullup =     <&w1>,"gpios:16",
++				<&w1_pins>,"brcm,pins:4";
++		pullup =        <&w1>,"rpi,parasitic-power:0";
++	};
++};
diff --git a/target/linux/brcm2708/patches-4.4/0053-bcm2835-Match-with-BCM2708-Device-Trees.patch b/target/linux/brcm2708/patches-4.4/0053-bcm2835-Match-with-BCM2708-Device-Trees.patch
new file mode 100644
index 0000000..6105e46
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0053-bcm2835-Match-with-BCM2708-Device-Trees.patch
@@ -0,0 +1,515 @@
+From 1cb8b3610957f9c892a7eb6e18b99eca0ac7a77d Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?Noralf=20Tr=C3=B8nnes?= <noralf at tronnes.org>
+Date: Sat, 15 Aug 2015 20:47:07 +0200
+Subject: [PATCH 053/127] bcm2835: Match with BCM2708 Device Trees
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+Signed-off-by: Noralf Trønnes <noralf at tronnes.org>
+---
+ arch/arm/boot/dts/bcm2835-rpi-b-plus.dts | 132 ++++++++++++++++++---
+ arch/arm/boot/dts/bcm2835-rpi-b.dts      | 115 ++++++++++++++++--
+ arch/arm/boot/dts/bcm2835.dtsi           | 195 +++----------------------------
+ 3 files changed, 237 insertions(+), 205 deletions(-)
+
+--- a/arch/arm/boot/dts/bcm2835-rpi-b-plus.dts
++++ b/arch/arm/boot/dts/bcm2835-rpi-b-plus.dts
+@@ -1,30 +1,128 @@
+ /dts-v1/;
+-#include "bcm2835-rpi.dtsi"
++#include "bcm2835.dtsi"
+ 
+ / {
+ 	compatible = "raspberrypi,model-b-plus", "brcm,bcm2835";
+ 	model = "Raspberry Pi Model B+";
+-
+-	leds {
+-		act {
+-			gpios = <&gpio 47 0>;
+-		};
+-
+-		pwr {
+-			label = "PWR";
+-			gpios = <&gpio 35 0>;
+-			default-state = "keep";
+-			linux,default-trigger = "default-on";
+-		};
+-	};
+ };
+ 
+ &gpio {
+-	pinctrl-0 = <&gpioout &alt0 &i2s_alt0 &alt3>;
++	spi0_pins: spi0_pins {
++		brcm,pins = <7 8 9 10 11>;
++		brcm,function = <4>; /* alt0 */
++	};
++
++	i2c0_pins: i2c0 {
++		brcm,pins = <0 1>;
++		brcm,function = <4>;
++	};
+ 
+-	/* I2S interface */
+-	i2s_alt0: i2s_alt0 {
++	i2c1_pins: i2c1 {
++		brcm,pins = <2 3>;
++		brcm,function = <4>;
++	};
++
++	i2s_pins: i2s {
+ 		brcm,pins = <18 19 20 21>;
+-		brcm,function = <BCM2835_FSEL_ALT0>;
++		brcm,function = <4>; /* alt0 */
++	};
++};
++
++&mmc {
++	status = "okay";
++	bus-width = <4>;
++};
++
++&fb {
++	status = "okay";
++};
++
++&uart0 {
++	status = "okay";
++};
++
++&spi0 {
++	pinctrl-names = "default";
++	pinctrl-0 = <&spi0_pins>;
++
++	spidev at 0{
++		compatible = "spidev";
++		reg = <0>;	/* CE0 */
++		#address-cells = <1>;
++		#size-cells = <0>;
++		spi-max-frequency = <500000>;
++	};
++
++	spidev at 1{
++		compatible = "spidev";
++		reg = <1>;	/* CE1 */
++		#address-cells = <1>;
++		#size-cells = <0>;
++		spi-max-frequency = <500000>;
++	};
++};
++
++&i2c0 {
++	pinctrl-names = "default";
++	pinctrl-0 = <&i2c0_pins>;
++	clock-frequency = <100000>;
++};
++
++&i2c1 {
++	pinctrl-names = "default";
++	pinctrl-0 = <&i2c1_pins>;
++	clock-frequency = <100000>;
++};
++
++&i2c2 {
++	clock-frequency = <100000>;
++};
++
++&i2s {
++	#sound-dai-cells = <0>;
++	pinctrl-names = "default";
++	pinctrl-0 = <&i2s_pins>;
++};
++
++&leds {
++	act_led: act {
++		label = "led0";
++		linux,default-trigger = "mmc0";
++		gpios = <&gpio 47 0>;
++	};
++
++	pwr_led: pwr {
++		label = "led1";
++		linux,default-trigger = "input";
++		gpios = <&gpio 35 0>;
++	};
++};
++
++/ {
++	__overrides__ {
++		uart0 = <&uart0>,"status";
++		uart0_clkrate = <&clk_uart0>,"clock-frequency:0";
++		uart1_clkrate = <&uart1>,"clock-frequency:0";
++		i2s = <&i2s>,"status";
++		spi = <&spi0>,"status";
++		i2c0 = <&i2c0>,"status";
++		i2c1 = <&i2c1>,"status";
++		i2c2_iknowwhatimdoing = <&i2c2>,"status";
++		i2c0_baudrate = <&i2c0>,"clock-frequency:0";
++		i2c1_baudrate = <&i2c1>,"clock-frequency:0";
++		i2c2_baudrate = <&i2c2>,"clock-frequency:0";
++		core_freq = <&clk_core>,"clock-frequency:0";
++
++		act_led_gpio = <&act_led>,"gpios:4";
++		act_led_activelow = <&act_led>,"gpios:8";
++		act_led_trigger = <&act_led>,"linux,default-trigger";
++
++		pwr_led_gpio = <&pwr_led>,"gpios:4";
++		pwr_led_activelow = <&pwr_led>,"gpios:8";
++		pwr_led_trigger = <&pwr_led>,"linux,default-trigger";
++
++		audio = <&audio>,"status";
++		watchdog = <&watchdog>,"status";
++		random = <&random>,"status";
+ 	};
+ };
+--- a/arch/arm/boot/dts/bcm2835-rpi-b.dts
++++ b/arch/arm/boot/dts/bcm2835-rpi-b.dts
+@@ -1,17 +1,118 @@
+ /dts-v1/;
+-#include "bcm2835-rpi.dtsi"
++#include "bcm2835.dtsi"
+ 
+ / {
+ 	compatible = "raspberrypi,model-b", "brcm,bcm2835";
+ 	model = "Raspberry Pi Model B";
++};
+ 
+-	leds {
+-		act {
+-			gpios = <&gpio 16 1>;
+-		};
++&gpio {
++	spi0_pins: spi0_pins {
++		brcm,pins = <7 8 9 10 11>;
++		brcm,function = <4>; /* alt0 */
++	};
++
++	i2c0_pins: i2c0 {
++		brcm,pins = <0 1>;
++		brcm,function = <4>;
++	};
++
++	i2c1_pins: i2c1 {
++		brcm,pins = <2 3>;
++		brcm,function = <4>;
++	};
++
++	i2s_pins: i2s {
++		brcm,pins = <28 29 30 31>;
++		brcm,function = <6>; /* alt2 */
+ 	};
+ };
+ 
+-&gpio {
+-	pinctrl-0 = <&gpioout &alt0 &alt3>;
++&mmc {
++	status = "okay";
++	bus-width = <4>;
++};
++
++&fb {
++	status = "okay";
++};
++
++&uart0 {
++	status = "okay";
++};
++
++&spi0 {
++	pinctrl-names = "default";
++	pinctrl-0 = <&spi0_pins>;
++
++	spidev at 0{
++		compatible = "spidev";
++		reg = <0>;	/* CE0 */
++		#address-cells = <1>;
++		#size-cells = <0>;
++		spi-max-frequency = <500000>;
++	};
++
++	spidev at 1{
++		compatible = "spidev";
++		reg = <1>;	/* CE1 */
++		#address-cells = <1>;
++		#size-cells = <0>;
++		spi-max-frequency = <500000>;
++	};
++};
++
++&i2c0 {
++	pinctrl-names = "default";
++	pinctrl-0 = <&i2c0_pins>;
++	clock-frequency = <100000>;
++};
++
++&i2c1 {
++	pinctrl-names = "default";
++	pinctrl-0 = <&i2c1_pins>;
++	clock-frequency = <100000>;
++};
++
++&i2c2 {
++	clock-frequency = <100000>;
++};
++
++&i2s {
++	#sound-dai-cells = <0>;
++	pinctrl-names = "default";
++	pinctrl-0 = <&i2s_pins>;
++};
++
++&leds {
++	act_led: act {
++		label = "led0";
++		linux,default-trigger = "mmc0";
++		gpios = <&gpio 16 1>;
++	};
++};
++
++/ {
++	__overrides__ {
++		uart0 = <&uart0>,"status";
++		uart0_clkrate = <&clk_uart0>,"clock-frequency:0";
++		uart1_clkrate = <&uart1>,"clock-frequency:0";
++		i2s = <&i2s>,"status";
++		spi = <&spi0>,"status";
++		i2c0 = <&i2c0>,"status";
++		i2c1 = <&i2c1>,"status";
++		i2c2_iknowwhatimdoing = <&i2c2>,"status";
++		i2c0_baudrate = <&i2c0>,"clock-frequency:0";
++		i2c1_baudrate = <&i2c1>,"clock-frequency:0";
++		i2c2_baudrate = <&i2c2>,"clock-frequency:0";
++		core_freq = <&clk_core>,"clock-frequency:0";
++
++		act_led_gpio = <&act_led>,"gpios:4";
++		act_led_activelow = <&act_led>,"gpios:8";
++		act_led_trigger = <&act_led>,"linux,default-trigger";
++
++		audio = <&audio>,"status";
++		watchdog = <&watchdog>,"status";
++		random = <&random>,"status";
++	};
+ };
+--- a/arch/arm/boot/dts/bcm2835.dtsi
++++ b/arch/arm/boot/dts/bcm2835.dtsi
+@@ -1,206 +1,39 @@
+-#include <dt-bindings/pinctrl/bcm2835.h>
+-#include <dt-bindings/clock/bcm2835.h>
+-#include "skeleton.dtsi"
++#include "bcm2708_common.dtsi"
+ 
+ / {
+ 	compatible = "brcm,bcm2835";
+ 	model = "BCM2835";
+-	interrupt-parent = <&intc>;
+ 
+ 	chosen {
+-		bootargs = "earlyprintk console=ttyAMA0";
++		bootargs = "";
+ 	};
+ 
+ 	soc {
+-		compatible = "simple-bus";
+-		#address-cells = <1>;
+-		#size-cells = <1>;
+-		ranges = <0x7e000000 0x20000000 0x02000000>;
++		ranges = <0x7e000000 0x20000000 0x01000000>;
+ 		dma-ranges = <0x40000000 0x00000000 0x20000000>;
+ 
+ 		timer at 7e003000 {
+ 			compatible = "brcm,bcm2835-system-timer";
+ 			reg = <0x7e003000 0x1000>;
+ 			interrupts = <1 0>, <1 1>, <1 2>, <1 3>;
+-			/* This could be a reference to BCM2835_CLOCK_TIMER,
+-			 * but we don't have the driver using the common clock
+-			 * support yet.
+-			 */
+ 			clock-frequency = <1000000>;
+ 		};
+ 
+-		dma: dma at 7e007000 {
+-			compatible = "brcm,bcm2835-dma";
+-			reg = <0x7e007000 0xf00>;
+-			interrupts = <1 16>,
+-				     <1 17>,
+-				     <1 18>,
+-				     <1 19>,
+-				     <1 20>,
+-				     <1 21>,
+-				     <1 22>,
+-				     <1 23>,
+-				     <1 24>,
+-				     <1 25>,
+-				     <1 26>,
+-				     <1 27>,
+-				     <1 28>;
+-
+-			#dma-cells = <1>;
+-			brcm,dma-channel-mask = <0x7f35>;
+-		};
+-
+-		intc: interrupt-controller at 7e00b200 {
+-			compatible = "brcm,bcm2835-armctrl-ic";
+-			reg = <0x7e00b200 0x200>;
+-			interrupt-controller;
+-			#interrupt-cells = <2>;
+-		};
+-
+-		watchdog at 7e100000 {
+-			compatible = "brcm,bcm2835-pm-wdt";
+-			reg = <0x7e100000 0x28>;
+-		};
+-
+-		clocks: cprman at 7e101000 {
+-			compatible = "brcm,bcm2835-cprman";
+-			#clock-cells = <1>;
+-			reg = <0x7e101000 0x2000>;
+-
+-			/* CPRMAN derives everything from the platform's
+-			 * oscillator.
+-			 */
+-			clocks = <&clk_osc>;
+-		};
+-
+-		rng at 7e104000 {
+-			compatible = "brcm,bcm2835-rng";
+-			reg = <0x7e104000 0x10>;
+-		};
+-
+-		mailbox: mailbox at 7e00b800 {
+-			compatible = "brcm,bcm2835-mbox";
+-			reg = <0x7e00b880 0x40>;
+-			interrupts = <0 1>;
+-			#mbox-cells = <0>;
+-		};
+-
+-		gpio: gpio at 7e200000 {
+-			compatible = "brcm,bcm2835-gpio";
+-			reg = <0x7e200000 0xb4>;
+-			/*
+-			 * The GPIO IP block is designed for 3 banks of GPIOs.
+-			 * Each bank has a GPIO interrupt for itself.
+-			 * There is an overall "any bank" interrupt.
+-			 * In order, these are GIC interrupts 17, 18, 19, 20.
+-			 * Since the BCM2835 only has 2 banks, the 2nd bank
+-			 * interrupt output appears to be mirrored onto the
+-			 * 3rd bank's interrupt signal.
+-			 * So, a bank0 interrupt shows up on 17, 20, and
+-			 * a bank1 interrupt shows up on 18, 19, 20!
+-			 */
+-			interrupts = <2 17>, <2 18>, <2 19>, <2 20>;
+-
+-			gpio-controller;
+-			#gpio-cells = <2>;
+-
+-			interrupt-controller;
+-			#interrupt-cells = <2>;
+-		};
+-
+-		uart0: uart at 7e201000 {
+-			compatible = "brcm,bcm2835-pl011", "arm,pl011", "arm,primecell";
+-			reg = <0x7e201000 0x1000>;
+-			interrupts = <2 25>;
+-			clocks = <&clocks BCM2835_CLOCK_UART>,
+-				 <&clocks BCM2835_CLOCK_VPU>;
+-			clock-names = "uartclk", "apb_pclk";
+-			arm,primecell-periphid = <0x00241011>;
+-		};
+-
+-		i2s: i2s at 7e203000 {
+-			compatible = "brcm,bcm2835-i2s";
+-			reg = <0x7e203000 0x24>,
+-			      <0x7e101098 0x08>;
+-
+-			dmas = <&dma 2>,
+-			       <&dma 3>;
+-			dma-names = "tx", "rx";
+-			status = "disabled";
+-		};
+-
+-		spi: spi at 7e204000 {
+-			compatible = "brcm,bcm2835-spi";
+-			reg = <0x7e204000 0x1000>;
+-			interrupts = <2 22>;
+-			clocks = <&clocks BCM2835_CLOCK_VPU>;
+-			#address-cells = <1>;
+-			#size-cells = <0>;
+-			status = "disabled";
+-		};
+-
+-		i2c0: i2c at 7e205000 {
+-			compatible = "brcm,bcm2835-i2c";
+-			reg = <0x7e205000 0x1000>;
+-			interrupts = <2 21>;
+-			clocks = <&clocks BCM2835_CLOCK_VPU>;
+-			#address-cells = <1>;
+-			#size-cells = <0>;
+-			status = "disabled";
+-		};
+-
+-		sdhci: sdhci at 7e300000 {
+-			compatible = "brcm,bcm2835-sdhci";
+-			reg = <0x7e300000 0x100>;
+-			interrupts = <2 30>;
+-			clocks = <&clocks BCM2835_CLOCK_EMMC>;
+-			status = "disabled";
+-		};
+-
+-		i2c1: i2c at 7e804000 {
+-			compatible = "brcm,bcm2835-i2c";
+-			reg = <0x7e804000 0x1000>;
+-			interrupts = <2 21>;
+-			clocks = <&clocks BCM2835_CLOCK_VPU>;
+-			#address-cells = <1>;
+-			#size-cells = <0>;
+-			status = "disabled";
+-		};
+-
+-		i2c2: i2c at 7e805000 {
+-			compatible = "brcm,bcm2835-i2c";
+-			reg = <0x7e805000 0x1000>;
+-			interrupts = <2 21>;
+-			clocks = <&clocks BCM2835_CLOCK_VPU>;
+-			#address-cells = <1>;
+-			#size-cells = <0>;
+-			status = "disabled";
+-		};
+-
+-		usb at 7e980000 {
+-			compatible = "brcm,bcm2835-usb";
+-			reg = <0x7e980000 0x10000>;
+-			interrupts = <1 9>;
+-		};
+-
+ 		arm-pmu {
+ 			compatible = "arm,arm1176-pmu";
+ 		};
+-	};
+ 
+-	clocks {
+-		compatible = "simple-bus";
+-		#address-cells = <1>;
+-		#size-cells = <0>;
+-
+-		/* The oscillator is the root of the clock tree. */
+-		clk_osc: clock at 3 {
+-			compatible = "fixed-clock";
+-			reg = <3>;
+-			#clock-cells = <0>;
+-			clock-output-names = "osc";
+-			clock-frequency = <19200000>;
++		aux_enable: aux_enable at 0x7e215004 {
++			compatible = "bcrm,bcm2835-aux-enable";
++			reg = <0x7e215004 0x04>;
+ 		};
+-
+ 	};
+ };
++
++&intc {
++	compatible = "brcm,bcm2835-armctrl-ic";
++};
++
++&watchdog {
++	status = "okay";
++};
diff --git a/target/linux/brcm2708/patches-4.4/0054-fbdev-add-FBIOCOPYAREA-ioctl.patch b/target/linux/brcm2708/patches-4.4/0054-fbdev-add-FBIOCOPYAREA-ioctl.patch
new file mode 100644
index 0000000..151b5cf
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0054-fbdev-add-FBIOCOPYAREA-ioctl.patch
@@ -0,0 +1,91 @@
+From 6faba148320fb8fc49b5cf5e5ef7864d9a1c0fd9 Mon Sep 17 00:00:00 2001
+From: Siarhei Siamashka <siarhei.siamashka at gmail.com>
+Date: Mon, 17 Jun 2013 13:32:11 +0300
+Subject: [PATCH 054/127] fbdev: add FBIOCOPYAREA ioctl
+
+Based on the patch authored by Ali Gholami Rudi at
+    https://lkml.org/lkml/2009/7/13/153
+
+Provide an ioctl for userspace applications, but only if this operation
+is hardware accelerated (otherwise it does not make any sense).
+
+Signed-off-by: Siarhei Siamashka <siarhei.siamashka at gmail.com>
+---
+ drivers/video/fbdev/core/fbmem.c | 30 ++++++++++++++++++++++++++++++
+ include/uapi/linux/fb.h          |  5 +++++
+ 2 files changed, 35 insertions(+)
+
+--- a/drivers/video/fbdev/core/fbmem.c
++++ b/drivers/video/fbdev/core/fbmem.c
+@@ -1084,6 +1084,25 @@ fb_blank(struct fb_info *info, int blank
+ }
+ EXPORT_SYMBOL(fb_blank);
+ 
++static int fb_copyarea_user(struct fb_info *info,
++			    struct fb_copyarea *copy)
++{
++	int ret = 0;
++	if (!lock_fb_info(info))
++		return -ENODEV;
++	if (copy->dx + copy->width > info->var.xres ||
++	    copy->sx + copy->width > info->var.xres ||
++	    copy->dy + copy->height > info->var.yres ||
++	    copy->sy + copy->height > info->var.yres) {
++		ret = -EINVAL;
++		goto out;
++	}
++	info->fbops->fb_copyarea(info, copy);
++out:
++	unlock_fb_info(info);
++	return ret;
++}
++
+ static long do_fb_ioctl(struct fb_info *info, unsigned int cmd,
+ 			unsigned long arg)
+ {
+@@ -1094,6 +1113,7 @@ static long do_fb_ioctl(struct fb_info *
+ 	struct fb_cmap cmap_from;
+ 	struct fb_cmap_user cmap;
+ 	struct fb_event event;
++	struct fb_copyarea copy;
+ 	void __user *argp = (void __user *)arg;
+ 	long ret = 0;
+ 
+@@ -1211,6 +1231,15 @@ static long do_fb_ioctl(struct fb_info *
+ 		unlock_fb_info(info);
+ 		console_unlock();
+ 		break;
++	case FBIOCOPYAREA:
++		if (info->flags & FBINFO_HWACCEL_COPYAREA) {
++			/* only provide this ioctl if it is accelerated */
++			if (copy_from_user(&copy, argp, sizeof(copy)))
++				return -EFAULT;
++			ret = fb_copyarea_user(info, &copy);
++			break;
++		}
++		/* fall through */
+ 	default:
+ 		if (!lock_fb_info(info))
+ 			return -ENODEV;
+@@ -1365,6 +1394,7 @@ static long fb_compat_ioctl(struct file
+ 	case FBIOPAN_DISPLAY:
+ 	case FBIOGET_CON2FBMAP:
+ 	case FBIOPUT_CON2FBMAP:
++	case FBIOCOPYAREA:
+ 		arg = (unsigned long) compat_ptr(arg);
+ 	case FBIOBLANK:
+ 		ret = do_fb_ioctl(info, cmd, arg);
+--- a/include/uapi/linux/fb.h
++++ b/include/uapi/linux/fb.h
+@@ -34,6 +34,11 @@
+ #define FBIOPUT_MODEINFO        0x4617
+ #define FBIOGET_DISPINFO        0x4618
+ #define FBIO_WAITFORVSYNC	_IOW('F', 0x20, __u32)
++/*
++ * HACK: use 'z' in order not to clash with any other ioctl numbers which might
++ * be concurrently added to the mainline kernel
++ */
++#define FBIOCOPYAREA		_IOW('z', 0x21, struct fb_copyarea)
+ 
+ #define FB_TYPE_PACKED_PIXELS		0	/* Packed Pixels	*/
+ #define FB_TYPE_PLANES			1	/* Non interleaved planes */
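
For reference, userspace drives the new ioctl roughly as below (a minimal
sketch, not part of the patch set; it assumes the framebuffer driver
advertises FBINFO_HWACCEL_COPYAREA, otherwise the request falls through to
the default ioctl handling):

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fb.h>	/* FBIOCOPYAREA + struct fb_copyarea after this patch */

int main(void)
{
	/* copy a 100x100 pixel block from (0,0) to (200,150) on /dev/fb0 */
	struct fb_copyarea copy = {
		.dx = 200, .dy = 150,
		.width = 100, .height = 100,
		.sx = 0, .sy = 0,
	};
	int fd = open("/dev/fb0", O_RDWR);

	if (fd < 0) {
		perror("open /dev/fb0");
		return 1;
	}
	if (ioctl(fd, FBIOCOPYAREA, &copy) < 0)
		perror("FBIOCOPYAREA");
	close(fd);
	return 0;
}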
diff --git a/target/linux/brcm2708/patches-4.4/0058-Speed-up-console-framebuffer-imageblit-function.patch b/target/linux/brcm2708/patches-4.4/0058-Speed-up-console-framebuffer-imageblit-function.patch
new file mode 100644
index 0000000..ab45773
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0058-Speed-up-console-framebuffer-imageblit-function.patch
@@ -0,0 +1,209 @@
+From a205b536eaf4693151d5f375426b470ed1eaa4ed Mon Sep 17 00:00:00 2001
+From: Harm Hanemaaijer <fgenfb at yahoo.com>
+Date: Thu, 20 Jun 2013 20:21:39 +0200
+Subject: [PATCH 058/127] Speed up console framebuffer imageblit function
+
+Especially on platforms with a slower CPU but a relatively high
+framebuffer fill bandwidth, like current ARM devices, the existing
+console monochrome imageblit function used to draw console text is
+suboptimal for common pixel depths such as 16bpp and 32bpp. The existing
+code is quite general and can deal with several pixel depths. By creating
+special case functions for 16bpp and 32bpp, by far the most common pixel
+formats used on modern systems, a significant speed-up is attained
+which can be readily felt on ARM-based devices like the Raspberry Pi
+and the Allwinner platform, but should help any platform using the
+fb layer.
+
+The special case functions allow constant folding, eliminating a number
+of instructions including divide operations, and allow the use of an
+unrolled loop, eliminating instructions with a variable shift size,
+reducing source memory access instructions, and eliminating excessive
+branching. These unrolled loops also allow much better code optimization
+by the C compiler. The code that selects which optimized variant is used
+is also simplified, eliminating integer divide instructions.
+
+The speed-up, measured by timing 'cat file.txt' in the console, varies
+between 40% and 70%, when testing on the Raspberry Pi and Allwinner
+ARM-based platforms, depending on font size and the pixel depth, with
+the greater benefit for 32bpp.
+
+Signed-off-by: Harm Hanemaaijer <fgenfb at yahoo.com>
+---
+ drivers/video/fbdev/core/cfbimgblt.c | 152 +++++++++++++++++++++++++++++++++--
+ 1 file changed, 147 insertions(+), 5 deletions(-)
+
+--- a/drivers/video/fbdev/core/cfbimgblt.c
++++ b/drivers/video/fbdev/core/cfbimgblt.c
+@@ -28,6 +28,11 @@
+  *
+  *  Also need to add code to deal with cards endians that are different than
+  *  the native cpu endians. I also need to deal with MSB position in the word.
++ *  Modified by Harm Hanemaaijer (fgenfb at yahoo.com) 2013:
++ *  - Provide optimized versions of fast_imageblit for 16 and 32bpp that are
++ *    significantly faster than the previous implementation.
++ *  - Simplify the fast/slow_imageblit selection code, avoiding integer
++ *    divides.
+  */
+ #include <linux/module.h>
+ #include <linux/string.h>
+@@ -262,6 +267,133 @@ static inline void fast_imageblit(const
+ 	}
+ }	
+ 	
++/*
++ * Optimized fast_imageblit for bpp == 16. ppw = 2, bit_mask = 3 folded
++ * into the code, main loop unrolled.
++ */
++
++static inline void fast_imageblit16(const struct fb_image *image,
++				    struct fb_info *p, u8 __iomem * dst1,
++				    u32 fgcolor, u32 bgcolor)
++{
++	u32 fgx = fgcolor, bgx = bgcolor;
++	u32 spitch = (image->width + 7) / 8;
++	u32 end_mask, eorx;
++	const char *s = image->data, *src;
++	u32 __iomem *dst;
++	const u32 *tab = NULL;
++	int i, j, k;
++
++	tab = fb_be_math(p) ? cfb_tab16_be : cfb_tab16_le;
++
++	fgx <<= 16;
++	bgx <<= 16;
++	fgx |= fgcolor;
++	bgx |= bgcolor;
++
++	eorx = fgx ^ bgx;
++	k = image->width / 2;
++
++	for (i = image->height; i--;) {
++		dst = (u32 __iomem *) dst1;
++		src = s;
++
++		j = k;
++		while (j >= 4) {
++			u8 bits = *src;
++			end_mask = tab[(bits >> 6) & 3];
++			FB_WRITEL((end_mask & eorx) ^ bgx, dst++);
++			end_mask = tab[(bits >> 4) & 3];
++			FB_WRITEL((end_mask & eorx) ^ bgx, dst++);
++			end_mask = tab[(bits >> 2) & 3];
++			FB_WRITEL((end_mask & eorx) ^ bgx, dst++);
++			end_mask = tab[bits & 3];
++			FB_WRITEL((end_mask & eorx) ^ bgx, dst++);
++			src++;
++			j -= 4;
++		}
++		if (j != 0) {
++			u8 bits = *src;
++			end_mask = tab[(bits >> 6) & 3];
++			FB_WRITEL((end_mask & eorx) ^ bgx, dst++);
++			if (j >= 2) {
++				end_mask = tab[(bits >> 4) & 3];
++				FB_WRITEL((end_mask & eorx) ^ bgx, dst++);
++				if (j == 3) {
++					end_mask = tab[(bits >> 2) & 3];
++					FB_WRITEL((end_mask & eorx) ^ bgx, dst);
++				}
++			}
++		}
++		dst1 += p->fix.line_length;
++		s += spitch;
++	}
++}
++
++/*
++ * Optimized fast_imageblit for bpp == 32. ppw = 1, bit_mask = 1 folded
++ * into the code, main loop unrolled.
++ */
++
++static inline void fast_imageblit32(const struct fb_image *image,
++				    struct fb_info *p, u8 __iomem * dst1,
++				    u32 fgcolor, u32 bgcolor)
++{
++	u32 fgx = fgcolor, bgx = bgcolor;
++	u32 spitch = (image->width + 7) / 8;
++	u32 end_mask, eorx;
++	const char *s = image->data, *src;
++	u32 __iomem *dst;
++	const u32 *tab = NULL;
++	int i, j, k;
++
++	tab = cfb_tab32;
++
++	eorx = fgx ^ bgx;
++	k = image->width;
++
++	for (i = image->height; i--;) {
++		dst = (u32 __iomem *) dst1;
++		src = s;
++
++		j = k;
++		while (j >= 8) {
++			u8 bits = *src;
++			end_mask = tab[(bits >> 7) & 1];
++			FB_WRITEL((end_mask & eorx) ^ bgx, dst++);
++			end_mask = tab[(bits >> 6) & 1];
++			FB_WRITEL((end_mask & eorx) ^ bgx, dst++);
++			end_mask = tab[(bits >> 5) & 1];
++			FB_WRITEL((end_mask & eorx) ^ bgx, dst++);
++			end_mask = tab[(bits >> 4) & 1];
++			FB_WRITEL((end_mask & eorx) ^ bgx, dst++);
++			end_mask = tab[(bits >> 3) & 1];
++			FB_WRITEL((end_mask & eorx) ^ bgx, dst++);
++			end_mask = tab[(bits >> 2) & 1];
++			FB_WRITEL((end_mask & eorx) ^ bgx, dst++);
++			end_mask = tab[(bits >> 1) & 1];
++			FB_WRITEL((end_mask & eorx) ^ bgx, dst++);
++			end_mask = tab[bits & 1];
++			FB_WRITEL((end_mask & eorx) ^ bgx, dst++);
++			src++;
++			j -= 8;
++		}
++		if (j != 0) {
++			u32 bits = (u32) * src;
++			while (j > 1) {
++				end_mask = tab[(bits >> 7) & 1];
++				FB_WRITEL((end_mask & eorx) ^ bgx, dst++);
++				bits <<= 1;
++				j--;
++			}
++			end_mask = tab[(bits >> 7) & 1];
++			FB_WRITEL((end_mask & eorx) ^ bgx, dst);
++		}
++		dst1 += p->fix.line_length;
++		s += spitch;
++	}
++}
++
+ void cfb_imageblit(struct fb_info *p, const struct fb_image *image)
+ {
+ 	u32 fgcolor, bgcolor, start_index, bitstart, pitch_index = 0;
+@@ -294,11 +426,21 @@ void cfb_imageblit(struct fb_info *p, co
+ 			bgcolor = image->bg_color;
+ 		}	
+ 		
+-		if (32 % bpp == 0 && !start_index && !pitch_index && 
+-		    ((width & (32/bpp-1)) == 0) &&
+-		    bpp >= 8 && bpp <= 32) 			
+-			fast_imageblit(image, p, dst1, fgcolor, bgcolor);
+-		else 
++		if (!start_index && !pitch_index) {
++			if (bpp == 32)
++				fast_imageblit32(image, p, dst1, fgcolor,
++						 bgcolor);
++			else if (bpp == 16 && (width & 1) == 0)
++				fast_imageblit16(image, p, dst1, fgcolor,
++						 bgcolor);
++			else if (bpp == 8 && (width & 3) == 0)
++				fast_imageblit(image, p, dst1, fgcolor,
++					       bgcolor);
++			else
++				slow_imageblit(image, p, dst1, fgcolor,
++					       bgcolor,
++					       start_index, pitch_index);
++		} else
+ 			slow_imageblit(image, p, dst1, fgcolor, bgcolor,
+ 					start_index, pitch_index);
+ 	} else
diff --git a/target/linux/brcm2708/patches-4.4/0059-Allow-mac-address-to-be-set-in-smsc95xx.patch b/target/linux/brcm2708/patches-4.4/0059-Allow-mac-address-to-be-set-in-smsc95xx.patch
new file mode 100644
index 0000000..afe937d
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0059-Allow-mac-address-to-be-set-in-smsc95xx.patch
@@ -0,0 +1,91 @@
+From 43512b5f55e891f325a7619fa9103e6d6566ce81 Mon Sep 17 00:00:00 2001
+From: popcornmix <popcornmix at gmail.com>
+Date: Tue, 26 Mar 2013 17:26:38 +0000
+Subject: [PATCH 059/127] Allow mac address to be set in smsc95xx
+
+Signed-off-by: popcornmix <popcornmix at gmail.com>
+---
+ drivers/net/usb/smsc95xx.c | 56 ++++++++++++++++++++++++++++++++++++++++++++++
+ 1 file changed, 56 insertions(+)
+
+--- a/drivers/net/usb/smsc95xx.c
++++ b/drivers/net/usb/smsc95xx.c
+@@ -59,6 +59,7 @@
+ #define SUSPEND_SUSPEND3		(0x08)
+ #define SUSPEND_ALLMODES		(SUSPEND_SUSPEND0 | SUSPEND_SUSPEND1 | \
+ 					 SUSPEND_SUSPEND2 | SUSPEND_SUSPEND3)
++#define MAC_ADDR_LEN                    (6)
+ 
+ struct smsc95xx_priv {
+ 	u32 mac_cr;
+@@ -74,6 +75,10 @@ static bool turbo_mode = false;
+ module_param(turbo_mode, bool, 0644);
+ MODULE_PARM_DESC(turbo_mode, "Enable multiple frames per Rx transaction");
+ 
++static char *macaddr = ":";
++module_param(macaddr, charp, 0);
++MODULE_PARM_DESC(macaddr, "MAC address");
++
+ static int __must_check __smsc95xx_read_reg(struct usbnet *dev, u32 index,
+ 					    u32 *data, int in_pm)
+ {
+@@ -763,8 +768,59 @@ static int smsc95xx_ioctl(struct net_dev
+ 	return generic_mii_ioctl(&dev->mii, if_mii(rq), cmd, NULL);
+ }
+ 
++/* Check the macaddr module parameter for a MAC address */
++static int smsc95xx_is_macaddr_param(struct usbnet *dev, u8 *dev_mac)
++{
++       int i, j, got_num, num;
++       u8 mtbl[MAC_ADDR_LEN];
++
++       if (macaddr[0] == ':')
++               return 0;
++
++       i = 0;
++       j = 0;
++       num = 0;
++       got_num = 0;
++       while (j < MAC_ADDR_LEN) {
++               if (macaddr[i] && macaddr[i] != ':') {
++                       got_num++;
++                       if ('0' <= macaddr[i] && macaddr[i] <= '9')
++                               num = num * 16 + macaddr[i] - '0';
++                       else if ('A' <= macaddr[i] && macaddr[i] <= 'F')
++                               num = num * 16 + 10 + macaddr[i] - 'A';
++                       else if ('a' <= macaddr[i] && macaddr[i] <= 'f')
++                               num = num * 16 + 10 + macaddr[i] - 'a';
++                       else
++                               break;
++                       i++;
++               } else if (got_num == 2) {
++                       mtbl[j++] = (u8) num;
++                       num = 0;
++                       got_num = 0;
++                       i++;
++               } else {
++                       break;
++               }
++       }
++
++       if (j == MAC_ADDR_LEN) {
++               netif_dbg(dev, ifup, dev->net, "Overriding MAC address with: "
++               "%02x:%02x:%02x:%02x:%02x:%02x\n", mtbl[0], mtbl[1], mtbl[2],
++                                               mtbl[3], mtbl[4], mtbl[5]);
++               for (i = 0; i < MAC_ADDR_LEN; i++)
++                       dev_mac[i] = mtbl[i];
++               return 1;
++       } else {
++               return 0;
++       }
++}
++
+ static void smsc95xx_init_mac_address(struct usbnet *dev)
+ {
++       /* Check module parameters */
++       if (smsc95xx_is_macaddr_param(dev, dev->net->dev_addr))
++               return;
++
+ 	/* try reading mac address from EEPROM */
+ 	if (smsc95xx_read_eeprom(dev, EEPROM_MAC_OFFSET, ETH_ALEN,
+ 			dev->net->dev_addr) == 0) {
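
In use, the override is given when the driver is loaded, e.g.
modprobe smsc95xx macaddr=b8:27:eb:12:34:56 (the address here is only an
example). With the default ":" value the parameter is ignored and the
existing EEPROM/random-address fallback is unchanged.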
diff --git a/target/linux/brcm2708/patches-4.4/0060-enabling-the-realtime-clock-1-wire-chip-DS1307-and-1.patch b/target/linux/brcm2708/patches-4.4/0060-enabling-the-realtime-clock-1-wire-chip-DS1307-and-1.patch
new file mode 100644
index 0000000..367a9bc
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0060-enabling-the-realtime-clock-1-wire-chip-DS1307-and-1.patch
@@ -0,0 +1,242 @@
+From 1dde78801877dc5280b7b1b156a2784d7e2bf982 Mon Sep 17 00:00:00 2001
+From: popcornmix <popcornmix at gmail.com>
+Date: Wed, 8 May 2013 11:46:50 +0100
+Subject: [PATCH 060/127] enabling the realtime clock 1-wire chip DS1307 and
+ 1-wire on GPIO4 (as a module)
+
+1-wire: Add support for configuring pin for w1-gpio kernel module
+See: https://github.com/raspberrypi/linux/pull/457
+
+Add bitbanging pullups, use them for w1-gpio
+
+Allows parasite power to work, uses module option pullup=1
+
+bcm2708: Ensure 1-wire pullup is disabled by default, and expose as module parameter
+
+Signed-off-by: Alex J Lennon <ajlennon at dynamicdevices.co.uk>
+
+w1-gpio: Add gpiopin module parameter and correctly free up gpio pull-up pin, if set
+
+Signed-off-by: Alex J Lennon <ajlennon at dynamicdevices.co.uk>
+
+w1-gpio: Sort out the pullup/parasitic power tangle
+---
+ drivers/w1/masters/w1-gpio.c | 69 ++++++++++++++++++++++++++++++++++++++++----
+ drivers/w1/w1.h              |  6 ++++
+ drivers/w1/w1_int.c          | 14 +++++++++
+ drivers/w1/w1_io.c           | 18 ++++++++++--
+ include/linux/w1-gpio.h      |  1 +
+ 5 files changed, 99 insertions(+), 9 deletions(-)
+
+--- a/drivers/w1/masters/w1-gpio.c
++++ b/drivers/w1/masters/w1-gpio.c
+@@ -23,6 +23,19 @@
+ #include "../w1.h"
+ #include "../w1_int.h"
+ 
++static int w1_gpio_pullup = 0;
++static int w1_gpio_pullup_orig = 0;
++module_param_named(pullup, w1_gpio_pullup, int, 0);
++MODULE_PARM_DESC(pullup, "Enable parasitic power (power on data) mode");
++static int w1_gpio_pullup_pin = -1;
++static int w1_gpio_pullup_pin_orig = -1;
++module_param_named(extpullup, w1_gpio_pullup_pin, int, 0);
++MODULE_PARM_DESC(extpullup, "GPIO external pullup pin number");
++static int w1_gpio_pin = -1;
++static int w1_gpio_pin_orig = -1;
++module_param_named(gpiopin, w1_gpio_pin, int, 0);
++MODULE_PARM_DESC(gpiopin, "GPIO pin number");
++
+ static u8 w1_gpio_set_pullup(void *data, int delay)
+ {
+ 	struct w1_gpio_platform_data *pdata = data;
+@@ -67,6 +80,16 @@ static u8 w1_gpio_read_bit(void *data)
+ 	return gpio_get_value(pdata->pin) ? 1 : 0;
+ }
+ 
++static void w1_gpio_bitbang_pullup(void *data, u8 on)
++{
++	struct w1_gpio_platform_data *pdata = data;
++
++	if (on)
++		gpio_direction_output(pdata->pin, 1);
++	else
++		gpio_direction_input(pdata->pin);
++}
++
+ #if defined(CONFIG_OF)
+ static const struct of_device_id w1_gpio_dt_ids[] = {
+ 	{ .compatible = "w1-gpio" },
+@@ -80,6 +103,7 @@ static int w1_gpio_probe_dt(struct platf
+ 	struct w1_gpio_platform_data *pdata = dev_get_platdata(&pdev->dev);
+ 	struct device_node *np = pdev->dev.of_node;
+ 	int gpio;
++	u32 value;
+ 
+ 	pdata = devm_kzalloc(&pdev->dev, sizeof(*pdata), GFP_KERNEL);
+ 	if (!pdata)
+@@ -88,6 +112,9 @@ static int w1_gpio_probe_dt(struct platf
+ 	if (of_get_property(np, "linux,open-drain", NULL))
+ 		pdata->is_open_drain = 1;
+ 
++	if (of_property_read_u32(np, "rpi,parasitic-power", &value) == 0)
++	    pdata->parasitic_power = (value != 0);
++
+ 	gpio = of_get_gpio(np, 0);
+ 	if (gpio < 0) {
+ 		if (gpio != -EPROBE_DEFER)
+@@ -103,7 +130,7 @@ static int w1_gpio_probe_dt(struct platf
+ 	if (gpio == -EPROBE_DEFER)
+ 		return gpio;
+ 	/* ignore other errors as the pullup gpio is optional */
+-	pdata->ext_pullup_enable_pin = gpio;
++	pdata->ext_pullup_enable_pin = (gpio >= 0) ? gpio : -1;
+ 
+ 	pdev->dev.platform_data = pdata;
+ 
+@@ -113,13 +140,15 @@ static int w1_gpio_probe_dt(struct platf
+ static int w1_gpio_probe(struct platform_device *pdev)
+ {
+ 	struct w1_bus_master *master;
+-	struct w1_gpio_platform_data *pdata;
++	struct w1_gpio_platform_data *pdata = pdev->dev.platform_data;
+ 	int err;
+ 
+-	if (of_have_populated_dt()) {
+-		err = w1_gpio_probe_dt(pdev);
+-		if (err < 0)
+-			return err;
++	if(pdata == NULL) {
++		if (of_have_populated_dt()) {
++			err = w1_gpio_probe_dt(pdev);
++			if (err < 0)
++				return err;
++		}
+ 	}
+ 
+ 	pdata = dev_get_platdata(&pdev->dev);
+@@ -136,6 +165,22 @@ static int w1_gpio_probe(struct platform
+ 		return -ENOMEM;
+ 	}
+ 
++	w1_gpio_pin_orig = pdata->pin;
++	w1_gpio_pullup_pin_orig = pdata->ext_pullup_enable_pin;
++	w1_gpio_pullup_orig = pdata->parasitic_power;
++
++	if(gpio_is_valid(w1_gpio_pin)) {
++		pdata->pin = w1_gpio_pin;
++		pdata->ext_pullup_enable_pin = -1;
++		pdata->parasitic_power = -1;
++	}
++	pdata->parasitic_power |= w1_gpio_pullup;
++	if(gpio_is_valid(w1_gpio_pullup_pin)) {
++		pdata->ext_pullup_enable_pin = w1_gpio_pullup_pin;
++	}
++
++	dev_info(&pdev->dev, "gpio pin %d, external pullup pin %d, parasitic power %d\n", pdata->pin, pdata->ext_pullup_enable_pin, pdata->parasitic_power);
++
+ 	err = devm_gpio_request(&pdev->dev, pdata->pin, "w1");
+ 	if (err) {
+ 		dev_err(&pdev->dev, "gpio_request (pin) failed\n");
+@@ -165,6 +210,14 @@ static int w1_gpio_probe(struct platform
+ 		master->set_pullup = w1_gpio_set_pullup;
+ 	}
+ 
++	if (pdata->parasitic_power) {
++		if (pdata->is_open_drain)
++			printk(KERN_ERR "w1-gpio 'pullup'(parasitic power) "
++			       "option doesn't work with open drain GPIO\n");
++		else
++			master->bitbang_pullup = w1_gpio_bitbang_pullup;
++	}
++
+ 	err = w1_add_master_device(master);
+ 	if (err) {
+ 		dev_err(&pdev->dev, "w1_add_master device failed\n");
+@@ -195,6 +248,10 @@ static int w1_gpio_remove(struct platfor
+ 
+ 	w1_remove_master_device(master);
+ 
++	pdata->pin = w1_gpio_pin_orig;
++	pdata->ext_pullup_enable_pin = w1_gpio_pullup_pin_orig;
++	pdata->parasitic_power = w1_gpio_pullup_orig;
++
+ 	return 0;
+ }
+ 
+--- a/drivers/w1/w1.h
++++ b/drivers/w1/w1.h
+@@ -171,6 +171,12 @@ struct w1_bus_master
+ 
+ 	u8		(*set_pullup)(void *, int);
+ 
++	/**
++	 * Turns the pullup on/off in bitbanging mode, takes an on/off argument.
++	 * @return -1=Error, 0=completed
++	 */
++	void (*bitbang_pullup) (void *, u8);
++
+ 	void		(*search)(void *, struct w1_master *,
+ 		u8, w1_slave_found_callback);
+ };
+--- a/drivers/w1/w1_int.c
++++ b/drivers/w1/w1_int.c
+@@ -122,6 +122,20 @@ int w1_add_master_device(struct w1_bus_m
+ 		return(-EINVAL);
+ 	}
+ 
++	/* bitbanging hardware uses bitbang_pullup, other hardware uses set_pullup
++	 * and takes care of timing itself */
++	if (!master->write_byte && !master->touch_bit && master->set_pullup) {
++		printk(KERN_ERR "w1_add_master_device: set_pullup requires "
++			"write_byte or touch_bit, disabling\n");
++		master->set_pullup = NULL;
++	}
++
++	if (master->set_pullup && master->bitbang_pullup) {
++		printk(KERN_ERR "w1_add_master_device: set_pullup should not "
++		       "be set when bitbang_pullup is used, disabling\n");
++		master->set_pullup = NULL;
++	}
++
+ 	/* Lock until the device is added (or not) to w1_masters. */
+ 	mutex_lock(&w1_mlock);
+ 	/* Search for the first available id (starting at 1). */
+--- a/drivers/w1/w1_io.c
++++ b/drivers/w1/w1_io.c
+@@ -134,10 +134,22 @@ static void w1_pre_write(struct w1_maste
+ static void w1_post_write(struct w1_master *dev)
+ {
+ 	if (dev->pullup_duration) {
+-		if (dev->enable_pullup && dev->bus_master->set_pullup)
+-			dev->bus_master->set_pullup(dev->bus_master->data, 0);
+-		else
++		if (dev->enable_pullup) {
++			if (dev->bus_master->set_pullup) {
++				dev->bus_master->set_pullup(dev->
++							    bus_master->data,
++							    0);
++			} else if (dev->bus_master->bitbang_pullup) {
++				dev->bus_master->
++				    bitbang_pullup(dev->bus_master->data, 1);
+ 			msleep(dev->pullup_duration);
++				dev->bus_master->
++				    bitbang_pullup(dev->bus_master->data, 0);
++			}
++		} else {
++			msleep(dev->pullup_duration);
++		}
++
+ 		dev->pullup_duration = 0;
+ 	}
+ }
+--- a/include/linux/w1-gpio.h
++++ b/include/linux/w1-gpio.h
+@@ -18,6 +18,7 @@
+ struct w1_gpio_platform_data {
+ 	unsigned int pin;
+ 	unsigned int is_open_drain:1;
++	unsigned int parasitic_power:1;
+ 	void (*enable_external_pullup)(int enable);
+ 	unsigned int ext_pullup_enable_pin;
+ 	unsigned int pullup_duration;
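
With these parameters the bus can be configured at load time, for example
modprobe w1-gpio gpiopin=4 pullup=1 to run 1-wire on GPIO4 with the
bit-banged parasitic-power pullup, plus extpullup=<gpio number> when a
dedicated external pull-up pin is wired; the probe message added above
logs the values actually used.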
diff --git a/target/linux/brcm2708/patches-4.4/0061-Added-Device-IDs-for-August-DVB-T-205.patch b/target/linux/brcm2708/patches-4.4/0061-Added-Device-IDs-for-August-DVB-T-205.patch
new file mode 100644
index 0000000..130b77e
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0061-Added-Device-IDs-for-August-DVB-T-205.patch
@@ -0,0 +1,22 @@
+From 61a3c58b47a8a2b970a1ca50db15c402874eb8fd Mon Sep 17 00:00:00 2001
+From: popcornmix <popcornmix at gmail.com>
+Date: Wed, 3 Jul 2013 00:54:08 +0100
+Subject: [PATCH 061/127] Added Device IDs for August DVB-T 205
+
+---
+ drivers/media/usb/dvb-usb-v2/rtl28xxu.c | 4 ++++
+ 1 file changed, 4 insertions(+)
+
+--- a/drivers/media/usb/dvb-usb-v2/rtl28xxu.c
++++ b/drivers/media/usb/dvb-usb-v2/rtl28xxu.c
+@@ -1900,6 +1900,10 @@ static const struct usb_device_id rtl28x
+ 		&rtl28xxu_props, "Compro VideoMate U650F", NULL) },
+ 	{ DVB_USB_DEVICE(USB_VID_KWORLD_2, 0xd394,
+ 		&rtl28xxu_props, "MaxMedia HU394-T", NULL) },
++	{ DVB_USB_DEVICE(USB_VID_GTEK, 0xb803 /*USB_PID_AUGUST_DVBT205*/,
++		&rtl28xxu_props, "August DVB-T 205", NULL) },
++	{ DVB_USB_DEVICE(USB_VID_GTEK, 0xa803 /*USB_PID_AUGUST_DVBT205*/,
++		&rtl28xxu_props, "August DVB-T 205", NULL) },
+ 	{ DVB_USB_DEVICE(USB_VID_LEADTEK, 0x6a03,
+ 		&rtl28xxu_props, "Leadtek WinFast DTV Dongle mini", NULL) },
+ 	{ DVB_USB_DEVICE(USB_VID_GTEK, USB_PID_CPYTO_REDI_PC50A,
diff --git a/target/linux/brcm2708/patches-4.4/0062-config-Enable-CONFIG_MEMCG-but-leave-it-disabled-due.patch b/target/linux/brcm2708/patches-4.4/0062-config-Enable-CONFIG_MEMCG-but-leave-it-disabled-due.patch
new file mode 100644
index 0000000..4d0f692
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0062-config-Enable-CONFIG_MEMCG-but-leave-it-disabled-due.patch
@@ -0,0 +1,49 @@
+From bbc71e816c6bf3d945e67a70db005d1f58b2b064 Mon Sep 17 00:00:00 2001
+From: popcornmix <popcornmix at gmail.com>
+Date: Wed, 18 Dec 2013 22:16:19 +0000
+Subject: [PATCH 062/127] config: Enable CONFIG_MEMCG, but leave it disabled
+ (due to memory cost). Enable with cgroup_enable=memory.
+
+---
+ kernel/cgroup.c | 23 ++++++++++++++++++++++-
+ 1 file changed, 22 insertions(+), 1 deletion(-)
+
+--- a/kernel/cgroup.c
++++ b/kernel/cgroup.c
+@@ -5273,7 +5273,7 @@ int __init cgroup_init_early(void)
+ 	return 0;
+ }
+ 
+-static unsigned long cgroup_disable_mask __initdata;
++static unsigned long cgroup_disable_mask __initdata = 1<<0;
+ 
+ /**
+  * cgroup_init - cgroup initialization
+@@ -5769,6 +5769,27 @@ static int __init cgroup_disable(char *s
+ }
+ __setup("cgroup_disable=", cgroup_disable);
+ 
++static int __init cgroup_enable(char *str)
++{
++	struct cgroup_subsys *ss;
++	char *token;
++	int i;
++
++	while ((token = strsep(&str, ",")) != NULL) {
++		if (!*token)
++			continue;
++
++		for_each_subsys(ss, i) {
++			if (strcmp(token, ss->name) &&
++			    strcmp(token, ss->legacy_name))
++				continue;
++			cgroup_disable_mask &= ~(1 << i);
++		}
++	}
++	return 1;
++}
++__setup("cgroup_enable=", cgroup_enable);
++
+ /**
+  * css_tryget_online_from_dir - get corresponding css from a cgroup dentry
+  * @dentry: directory dentry of interest
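
As the subject says, this leaves CONFIG_MEMCG built in but masked off by
default via cgroup_disable_mask; booting with cgroup_enable=memory (on the
Pi, appended to cmdline.txt) clears the bit again, and the existing
cgroup_disable= handling is left untouched.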
diff --git a/target/linux/brcm2708/patches-4.4/0063-ASoC-Add-support-for-PCM5102A-codec.patch b/target/linux/brcm2708/patches-4.4/0063-ASoC-Add-support-for-PCM5102A-codec.patch
new file mode 100644
index 0000000..cb8a5f5
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0063-ASoC-Add-support-for-PCM5102A-codec.patch
@@ -0,0 +1,128 @@
+From 7ae297c051ae9052562cd97de4b19d05afb4c9ee Mon Sep 17 00:00:00 2001
+From: Florian Meier <florian.meier at koalo.de>
+Date: Fri, 22 Nov 2013 14:59:51 +0100
+Subject: [PATCH 063/127] ASoC: Add support for PCM5102A codec
+
+Some definitions to support the PCM5102A codec
+by Texas Instruments.
+
+Signed-off-by: Florian Meier <florian.meier at koalo.de>
+---
+ sound/soc/codecs/Kconfig    |  5 ++++
+ sound/soc/codecs/Makefile   |  2 ++
+ sound/soc/codecs/pcm5102a.c | 70 +++++++++++++++++++++++++++++++++++++++++++++
+ 3 files changed, 77 insertions(+)
+ create mode 100644 sound/soc/codecs/pcm5102a.c
+
+--- a/sound/soc/codecs/Kconfig
++++ b/sound/soc/codecs/Kconfig
+@@ -89,6 +89,7 @@ config SND_SOC_ALL_CODECS
+ 	select SND_SOC_PCM512x_SPI if SPI_MASTER
+ 	select SND_SOC_RT286 if I2C
+ 	select SND_SOC_RT298 if I2C
++	select SND_SOC_PCM5102A if I2C
+ 	select SND_SOC_RT5631 if I2C
+ 	select SND_SOC_RT5640 if I2C
+ 	select SND_SOC_RT5645 if I2C
+@@ -549,6 +550,10 @@ config SND_SOC_RT298
+ 	tristate
+ 	depends on I2C
+ 
++config SND_SOC_PCM5102A
++	tristate
++	depends on I2C
++
+ config SND_SOC_RT5631
+ 	tristate "Realtek ALC5631/RT5631 CODEC"
+ 	depends on I2C
+--- a/sound/soc/codecs/Makefile
++++ b/sound/soc/codecs/Makefile
+@@ -85,6 +85,7 @@ snd-soc-rl6231-objs := rl6231.o
+ snd-soc-rl6347a-objs := rl6347a.o
+ snd-soc-rt286-objs := rt286.o
+ snd-soc-rt298-objs := rt298.o
++snd-soc-pcm5102a-objs := pcm5102a.o
+ snd-soc-rt5631-objs := rt5631.o
+ snd-soc-rt5640-objs := rt5640.o
+ snd-soc-rt5645-objs := rt5645.o
+@@ -280,6 +281,7 @@ obj-$(CONFIG_SND_SOC_RL6231)	+= snd-soc-
+ obj-$(CONFIG_SND_SOC_RL6347A)	+= snd-soc-rl6347a.o
+ obj-$(CONFIG_SND_SOC_RT286)	+= snd-soc-rt286.o
+ obj-$(CONFIG_SND_SOC_RT298)	+= snd-soc-rt298.o
++obj-$(CONFIG_SND_SOC_PCM5102A)	+= snd-soc-pcm5102a.o
+ obj-$(CONFIG_SND_SOC_RT5631)	+= snd-soc-rt5631.o
+ obj-$(CONFIG_SND_SOC_RT5640)	+= snd-soc-rt5640.o
+ obj-$(CONFIG_SND_SOC_RT5645)	+= snd-soc-rt5645.o
+--- /dev/null
++++ b/sound/soc/codecs/pcm5102a.c
+@@ -0,0 +1,70 @@
++/*
++ * Driver for the PCM5102A codec
++ *
++ * Author:	Florian Meier <florian.meier at koalo.de>
++ *		Copyright 2013
++ *
++ * This program is free software; you can redistribute it and/or
++ * modify it under the terms of the GNU General Public License
++ * version 2 as published by the Free Software Foundation.
++ *
++ * This program is distributed in the hope that it will be useful, but
++ * WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
++ * General Public License for more details.
++ */
++
++
++#include <linux/init.h>
++#include <linux/module.h>
++#include <linux/platform_device.h>
++
++#include <sound/soc.h>
++
++static struct snd_soc_dai_driver pcm5102a_dai = {
++	.name = "pcm5102a-hifi",
++	.playback = {
++		.channels_min = 2,
++		.channels_max = 2,
++		.rates = SNDRV_PCM_RATE_8000_192000,
++		.formats = SNDRV_PCM_FMTBIT_S16_LE |
++			   SNDRV_PCM_FMTBIT_S24_LE |
++			   SNDRV_PCM_FMTBIT_S32_LE
++	},
++};
++
++static struct snd_soc_codec_driver soc_codec_dev_pcm5102a;
++
++static int pcm5102a_probe(struct platform_device *pdev)
++{
++	return snd_soc_register_codec(&pdev->dev, &soc_codec_dev_pcm5102a,
++			&pcm5102a_dai, 1);
++}
++
++static int pcm5102a_remove(struct platform_device *pdev)
++{
++	snd_soc_unregister_codec(&pdev->dev);
++	return 0;
++}
++
++static const struct of_device_id pcm5102a_of_match[] = {
++	{ .compatible = "ti,pcm5102a", },
++	{ }
++};
++MODULE_DEVICE_TABLE(of, pcm5102a_of_match);
++
++static struct platform_driver pcm5102a_codec_driver = {
++	.probe 		= pcm5102a_probe,
++	.remove 	= pcm5102a_remove,
++	.driver		= {
++		.name	= "pcm5102a-codec",
++		.owner	= THIS_MODULE,
++		.of_match_table = pcm5102a_of_match,
++	},
++};
++
++module_platform_driver(pcm5102a_codec_driver);
++
++MODULE_DESCRIPTION("ASoC PCM5102A codec driver");
++MODULE_AUTHOR("Florian Meier <florian.meier at koalo.de>");
++MODULE_LICENSE("GPL v2");
diff --git a/target/linux/brcm2708/patches-4.4/0064-ASoC-Add-support-for-HifiBerry-DAC.patch b/target/linux/brcm2708/patches-4.4/0064-ASoC-Add-support-for-HifiBerry-DAC.patch
new file mode 100644
index 0000000..77507e6
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0064-ASoC-Add-support-for-HifiBerry-DAC.patch
@@ -0,0 +1,165 @@
+From 7376e0edee3df655d9176ee83f21c58b8f067207 Mon Sep 17 00:00:00 2001
+From: Florian Meier <florian.meier at koalo.de>
+Date: Fri, 22 Nov 2013 19:19:08 +0100
+Subject: [PATCH 064/127] ASoC: Add support for HifiBerry DAC
+
+This adds a machine driver for the HifiBerry DAC.
+It is a sound card that can
+be stacked onto the Raspberry Pi.
+
+Signed-off-by: Florian Meier <florian.meier at koalo.de>
+---
+ sound/soc/bcm/Kconfig         |   7 +++
+ sound/soc/bcm/Makefile        |   4 ++
+ sound/soc/bcm/hifiberry_dac.c | 122 ++++++++++++++++++++++++++++++++++++++++++
+ 3 files changed, 133 insertions(+)
+ create mode 100644 sound/soc/bcm/hifiberry_dac.c
+
+--- a/sound/soc/bcm/Kconfig
++++ b/sound/soc/bcm/Kconfig
+@@ -7,3 +7,10 @@ config SND_BCM2835_SOC_I2S
+ 	  Say Y or M if you want to add support for codecs attached to
+ 	  the BCM2835 I2S interface. You will also need
+ 	  to select the audio interfaces to support below.
++
++config SND_BCM2708_SOC_HIFIBERRY_DAC
++        tristate "Support for HifiBerry DAC"
++        depends on SND_BCM2708_SOC_I2S || SND_BCM2835_SOC_I2S
++        select SND_SOC_PCM5102A
++        help
++         Say Y or M if you want to add support for HifiBerry DAC.
+--- a/sound/soc/bcm/Makefile
++++ b/sound/soc/bcm/Makefile
+@@ -3,3 +3,7 @@ snd-soc-bcm2835-i2s-objs := bcm2835-i2s.
+ 
+ obj-$(CONFIG_SND_BCM2835_SOC_I2S) += snd-soc-bcm2835-i2s.o
+ 
++# BCM2708 Machine Support
++snd-soc-hifiberry-dac-objs := hifiberry_dac.o
++
++obj-$(CONFIG_SND_BCM2708_SOC_HIFIBERRY_DAC) += snd-soc-hifiberry-dac.o
+--- /dev/null
++++ b/sound/soc/bcm/hifiberry_dac.c
+@@ -0,0 +1,122 @@
++/*
++ * ASoC Driver for HifiBerry DAC
++ *
++ * Author:	Florian Meier <florian.meier at koalo.de>
++ *		Copyright 2013
++ *
++ * This program is free software; you can redistribute it and/or
++ * modify it under the terms of the GNU General Public License
++ * version 2 as published by the Free Software Foundation.
++ *
++ * This program is distributed in the hope that it will be useful, but
++ * WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
++ * General Public License for more details.
++ */
++
++#include <linux/module.h>
++#include <linux/platform_device.h>
++
++#include <sound/core.h>
++#include <sound/pcm.h>
++#include <sound/pcm_params.h>
++#include <sound/soc.h>
++#include <sound/jack.h>
++
++static int snd_rpi_hifiberry_dac_init(struct snd_soc_pcm_runtime *rtd)
++{
++	return 0;
++}
++
++static int snd_rpi_hifiberry_dac_hw_params(struct snd_pcm_substream *substream,
++				       struct snd_pcm_hw_params *params)
++{
++	struct snd_soc_pcm_runtime *rtd = substream->private_data;
++	struct snd_soc_dai *cpu_dai = rtd->cpu_dai;
++
++	unsigned int sample_bits =
++		snd_pcm_format_physical_width(params_format(params));
++
++	return snd_soc_dai_set_bclk_ratio(cpu_dai, sample_bits * 2);
++}
++
++/* machine stream operations */
++static struct snd_soc_ops snd_rpi_hifiberry_dac_ops = {
++	.hw_params = snd_rpi_hifiberry_dac_hw_params,
++};
++
++static struct snd_soc_dai_link snd_rpi_hifiberry_dac_dai[] = {
++{
++	.name		= "HifiBerry DAC",
++	.stream_name	= "HifiBerry DAC HiFi",
++	.cpu_dai_name	= "bcm2708-i2s.0",
++	.codec_dai_name	= "pcm5102a-hifi",
++	.platform_name	= "bcm2708-i2s.0",
++	.codec_name	= "pcm5102a-codec",
++	.dai_fmt	= SND_SOC_DAIFMT_I2S | SND_SOC_DAIFMT_NB_NF |
++				SND_SOC_DAIFMT_CBS_CFS,
++	.ops		= &snd_rpi_hifiberry_dac_ops,
++	.init		= snd_rpi_hifiberry_dac_init,
++},
++};
++
++/* audio machine driver */
++static struct snd_soc_card snd_rpi_hifiberry_dac = {
++	.name         = "snd_rpi_hifiberry_dac",
++	.dai_link     = snd_rpi_hifiberry_dac_dai,
++	.num_links    = ARRAY_SIZE(snd_rpi_hifiberry_dac_dai),
++};
++
++static int snd_rpi_hifiberry_dac_probe(struct platform_device *pdev)
++{
++	int ret = 0;
++
++	snd_rpi_hifiberry_dac.dev = &pdev->dev;
++
++	if (pdev->dev.of_node) {
++	    struct device_node *i2s_node;
++	    struct snd_soc_dai_link *dai = &snd_rpi_hifiberry_dac_dai[0];
++	    i2s_node = of_parse_phandle(pdev->dev.of_node,
++					"i2s-controller", 0);
++
++	    if (i2s_node) {
++		dai->cpu_dai_name = NULL;
++		dai->cpu_of_node = i2s_node;
++		dai->platform_name = NULL;
++		dai->platform_of_node = i2s_node;
++	    }
++	}
++
++	ret = snd_soc_register_card(&snd_rpi_hifiberry_dac);
++	if (ret)
++		dev_err(&pdev->dev, "snd_soc_register_card() failed: %d\n", ret);
++
++	return ret;
++}
++
++static int snd_rpi_hifiberry_dac_remove(struct platform_device *pdev)
++{
++	return snd_soc_unregister_card(&snd_rpi_hifiberry_dac);
++}
++
++static const struct of_device_id snd_rpi_hifiberry_dac_of_match[] = {
++	{ .compatible = "hifiberry,hifiberry-dac", },
++	{},
++};
++MODULE_DEVICE_TABLE(of, snd_rpi_hifiberry_dac_of_match);
++
++static struct platform_driver snd_rpi_hifiberry_dac_driver = {
++        .driver = {
++                .name   = "snd-hifiberry-dac",
++                .owner  = THIS_MODULE,
++		.of_match_table = snd_rpi_hifiberry_dac_of_match,
++        },
++        .probe          = snd_rpi_hifiberry_dac_probe,
++        .remove         = snd_rpi_hifiberry_dac_remove,
++};
++
++module_platform_driver(snd_rpi_hifiberry_dac_driver);
++
++MODULE_AUTHOR("Florian Meier <florian.meier at koalo.de>");
++MODULE_DESCRIPTION("ASoC Driver for HifiBerry DAC");
++MODULE_LICENSE("GPL v2");
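
Note that the hw_params callback derives the BCLK ratio from the physical
sample width, so S16_LE ends up with 16 * 2 = 32 BCLK cycles per frame and
S24_LE/S32_LE (32 physical bits) with 64, since
snd_pcm_format_physical_width() reports 32 for S24_LE.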
diff --git a/target/linux/brcm2708/patches-4.4/0065-ASoC-Add-support-for-Rpi-DAC.patch b/target/linux/brcm2708/patches-4.4/0065-ASoC-Add-support-for-Rpi-DAC.patch
new file mode 100644
index 0000000..705e796
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0065-ASoC-Add-support-for-Rpi-DAC.patch
@@ -0,0 +1,275 @@
+From e6877f42823f6acd80d27b86816a0d87f35ea004 Mon Sep 17 00:00:00 2001
+From: Florian Meier <florian.meier at koalo.de>
+Date: Fri, 22 Nov 2013 19:21:34 +0100
+Subject: [PATCH 065/127] ASoC: Add support for Rpi-DAC
+
+---
+ sound/soc/bcm/Kconfig       |   7 +++
+ sound/soc/bcm/Makefile      |   2 +
+ sound/soc/bcm/rpi-dac.c     | 118 ++++++++++++++++++++++++++++++++++++++++++++
+ sound/soc/codecs/Kconfig    |   9 ++++
+ sound/soc/codecs/Makefile   |   2 +
+ sound/soc/codecs/pcm1794a.c |  69 ++++++++++++++++++++++++++
+ 6 files changed, 207 insertions(+)
+ create mode 100644 sound/soc/bcm/rpi-dac.c
+ create mode 100644 sound/soc/codecs/pcm1794a.c
+
+--- a/sound/soc/bcm/Kconfig
++++ b/sound/soc/bcm/Kconfig
+@@ -14,3 +14,10 @@ config SND_BCM2708_SOC_HIFIBERRY_DAC
+         select SND_SOC_PCM5102A
+         help
+          Say Y or M if you want to add support for HifiBerry DAC.
++
++config SND_BCM2708_SOC_RPI_DAC
++        tristate "Support for RPi-DAC"
++        depends on SND_BCM2708_SOC_I2S || SND_BCM2835_SOC_I2S
++        select SND_SOC_PCM1794A
++        help
++         Say Y or M if you want to add support for RPi-DAC.
+--- a/sound/soc/bcm/Makefile
++++ b/sound/soc/bcm/Makefile
+@@ -5,5 +5,7 @@ obj-$(CONFIG_SND_BCM2835_SOC_I2S) += snd
+ 
+ # BCM2708 Machine Support
+ snd-soc-hifiberry-dac-objs := hifiberry_dac.o
++snd-soc-rpi-dac-objs := rpi-dac.o
+ 
+ obj-$(CONFIG_SND_BCM2708_SOC_HIFIBERRY_DAC) += snd-soc-hifiberry-dac.o
++obj-$(CONFIG_SND_BCM2708_SOC_RPI_DAC) += snd-soc-rpi-dac.o
+--- /dev/null
++++ b/sound/soc/bcm/rpi-dac.c
+@@ -0,0 +1,118 @@
++/*
++ * ASoC Driver for RPi-DAC.
++ *
++ * Author:	Florian Meier <florian.meier at koalo.de>
++ *		Copyright 2013
++ *
++ * This program is free software; you can redistribute it and/or
++ * modify it under the terms of the GNU General Public License
++ * version 2 as published by the Free Software Foundation.
++ *
++ * This program is distributed in the hope that it will be useful, but
++ * WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
++ * General Public License for more details.
++ */
++
++#include <linux/module.h>
++#include <linux/platform_device.h>
++
++#include <sound/core.h>
++#include <sound/pcm.h>
++#include <sound/pcm_params.h>
++#include <sound/soc.h>
++#include <sound/jack.h>
++
++static int snd_rpi_rpi_dac_init(struct snd_soc_pcm_runtime *rtd)
++{
++	return 0;
++}
++
++static int snd_rpi_rpi_dac_hw_params(struct snd_pcm_substream *substream,
++				       struct snd_pcm_hw_params *params)
++{
++	struct snd_soc_pcm_runtime *rtd = substream->private_data;
++	struct snd_soc_dai *cpu_dai = rtd->cpu_dai;
++
++	return snd_soc_dai_set_bclk_ratio(cpu_dai, 32*2);
++}
++
++/* machine stream operations */
++static struct snd_soc_ops snd_rpi_rpi_dac_ops = {
++	.hw_params = snd_rpi_rpi_dac_hw_params,
++};
++
++static struct snd_soc_dai_link snd_rpi_rpi_dac_dai[] = {
++{
++	.name		= "RPi-DAC",
++	.stream_name	= "RPi-DAC HiFi",
++	.cpu_dai_name	= "bcm2708-i2s.0",
++	.codec_dai_name	= "pcm1794a-hifi",
++	.platform_name	= "bcm2708-i2s.0",
++	.codec_name	= "pcm1794a-codec",
++	.dai_fmt	= SND_SOC_DAIFMT_I2S | SND_SOC_DAIFMT_NB_NF |
++				SND_SOC_DAIFMT_CBS_CFS,
++	.ops		= &snd_rpi_rpi_dac_ops,
++	.init		= snd_rpi_rpi_dac_init,
++},
++};
++
++/* audio machine driver */
++static struct snd_soc_card snd_rpi_rpi_dac = {
++	.name         = "snd_rpi_rpi_dac",
++	.dai_link     = snd_rpi_rpi_dac_dai,
++	.num_links    = ARRAY_SIZE(snd_rpi_rpi_dac_dai),
++};
++
++static int snd_rpi_rpi_dac_probe(struct platform_device *pdev)
++{
++	int ret = 0;
++
++	snd_rpi_rpi_dac.dev = &pdev->dev;
++	
++	if (pdev->dev.of_node) {
++		struct device_node *i2s_node;
++		struct snd_soc_dai_link *dai = &snd_rpi_rpi_dac_dai[0];
++		i2s_node = of_parse_phandle(pdev->dev.of_node, "i2s-controller", 0);
++
++		if (i2s_node) {
++			dai->cpu_dai_name = NULL;
++			dai->cpu_of_node = i2s_node;
++			dai->platform_name = NULL;
++			dai->platform_of_node = i2s_node;
++		}
++	}
++	
++	ret = snd_soc_register_card(&snd_rpi_rpi_dac);
++	if (ret)
++		dev_err(&pdev->dev, "snd_soc_register_card() failed: %d\n", ret);
++
++	return ret;
++}
++
++static int snd_rpi_rpi_dac_remove(struct platform_device *pdev)
++{
++	return snd_soc_unregister_card(&snd_rpi_rpi_dac);
++}
++
++static const struct of_device_id snd_rpi_rpi_dac_of_match[] = {
++	{ .compatible = "rpi,rpi-dac", },
++	{},
++};
++MODULE_DEVICE_TABLE(of, snd_rpi_rpi_dac_of_match);
++
++static struct platform_driver snd_rpi_rpi_dac_driver = {
++        .driver = {
++                .name   = "snd-rpi-dac",
++                .owner  = THIS_MODULE,
++                .of_match_table = snd_rpi_rpi_dac_of_match,
++        },
++        .probe          = snd_rpi_rpi_dac_probe,
++        .remove         = snd_rpi_rpi_dac_remove,
++};
++
++module_platform_driver(snd_rpi_rpi_dac_driver);
++
++MODULE_AUTHOR("Florian Meier <florian.meier at koalo.de>");
++MODULE_DESCRIPTION("ASoC Driver for RPi-DAC");
++MODULE_LICENSE("GPL v2");
+--- a/sound/soc/codecs/Kconfig
++++ b/sound/soc/codecs/Kconfig
+@@ -90,6 +90,7 @@ config SND_SOC_ALL_CODECS
+ 	select SND_SOC_RT286 if I2C
+ 	select SND_SOC_RT298 if I2C
+ 	select SND_SOC_PCM5102A if I2C
++	select SND_SOC_PCM1794A if I2C
+ 	select SND_SOC_RT5631 if I2C
+ 	select SND_SOC_RT5640 if I2C
+ 	select SND_SOC_RT5645 if I2C
+@@ -550,6 +551,14 @@ config SND_SOC_RT298
+ 	tristate
+ 	depends on I2C
+ 
++config SND_SOC_RT298
++	tristate
++	depends on I2C
++
++config SND_SOC_PCM1794A
++	tristate
++	depends on I2C
++
+ config SND_SOC_PCM5102A
+ 	tristate
+ 	depends on I2C
+--- a/sound/soc/codecs/Makefile
++++ b/sound/soc/codecs/Makefile
+@@ -85,6 +85,7 @@ snd-soc-rl6231-objs := rl6231.o
+ snd-soc-rl6347a-objs := rl6347a.o
+ snd-soc-rt286-objs := rt286.o
+ snd-soc-rt298-objs := rt298.o
++snd-soc-pcm1794a-objs := pcm1794a.o
+ snd-soc-pcm5102a-objs := pcm5102a.o
+ snd-soc-rt5631-objs := rt5631.o
+ snd-soc-rt5640-objs := rt5640.o
+@@ -281,6 +282,7 @@ obj-$(CONFIG_SND_SOC_RL6231)	+= snd-soc-
+ obj-$(CONFIG_SND_SOC_RL6347A)	+= snd-soc-rl6347a.o
+ obj-$(CONFIG_SND_SOC_RT286)	+= snd-soc-rt286.o
+ obj-$(CONFIG_SND_SOC_RT298)	+= snd-soc-rt298.o
++obj-$(CONFIG_SND_SOC_PCM1794A)	+= snd-soc-pcm1794a.o
+ obj-$(CONFIG_SND_SOC_PCM5102A)	+= snd-soc-pcm5102a.o
+ obj-$(CONFIG_SND_SOC_RT5631)	+= snd-soc-rt5631.o
+ obj-$(CONFIG_SND_SOC_RT5640)	+= snd-soc-rt5640.o
+--- /dev/null
++++ b/sound/soc/codecs/pcm1794a.c
+@@ -0,0 +1,69 @@
++/*
++ * Driver for the PCM1794A codec
++ *
++ * Author:	Florian Meier <florian.meier at koalo.de>
++ *		Copyright 2013
++ *
++ * This program is free software; you can redistribute it and/or
++ * modify it under the terms of the GNU General Public License
++ * version 2 as published by the Free Software Foundation.
++ *
++ * This program is distributed in the hope that it will be useful, but
++ * WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
++ * General Public License for more details.
++ */
++
++
++#include <linux/init.h>
++#include <linux/module.h>
++#include <linux/platform_device.h>
++
++#include <sound/soc.h>
++
++static struct snd_soc_dai_driver pcm1794a_dai = {
++	.name = "pcm1794a-hifi",
++	.playback = {
++		.channels_min = 2,
++		.channels_max = 2,
++		.rates = SNDRV_PCM_RATE_8000_192000,
++		.formats = SNDRV_PCM_FMTBIT_S16_LE |
++			   SNDRV_PCM_FMTBIT_S24_LE
++	},
++};
++
++static struct snd_soc_codec_driver soc_codec_dev_pcm1794a;
++
++static int pcm1794a_probe(struct platform_device *pdev)
++{
++	return snd_soc_register_codec(&pdev->dev, &soc_codec_dev_pcm1794a,
++			&pcm1794a_dai, 1);
++}
++
++static int pcm1794a_remove(struct platform_device *pdev)
++{
++	snd_soc_unregister_codec(&pdev->dev);
++	return 0;
++}
++
++static const struct of_device_id pcm1794a_of_match[] = {
++	{ .compatible = "ti,pcm1794a", },
++	{ }
++};
++MODULE_DEVICE_TABLE(of, pcm1794a_of_match);
++
++static struct platform_driver pcm1794a_codec_driver = {
++	.probe 		= pcm1794a_probe,
++	.remove 	= pcm1794a_remove,
++	.driver		= {
++		.name	= "pcm1794a-codec",
++		.owner	= THIS_MODULE,
++		.of_match_table = of_match_ptr(pcm1794a_of_match),
++	},
++};
++
++module_platform_driver(pcm1794a_codec_driver);
++
++MODULE_DESCRIPTION("ASoC PCM1794A codec driver");
++MODULE_AUTHOR("Florian Meier <florian.meier at koalo.de>");
++MODULE_LICENSE("GPL v2");
diff --git a/target/linux/brcm2708/patches-4.4/0066-ASoC-wm8804-Implement-MCLK-configuration-options-add.patch b/target/linux/brcm2708/patches-4.4/0066-ASoC-wm8804-Implement-MCLK-configuration-options-add.patch
new file mode 100644
index 0000000..b194587
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0066-ASoC-wm8804-Implement-MCLK-configuration-options-add.patch
@@ -0,0 +1,40 @@
+From 10534ba846800325cf08f6e2479dbebecda00d2d Mon Sep 17 00:00:00 2001
+From: Daniel Matuschek <info at crazy-audio.com>
+Date: Wed, 15 Jan 2014 21:41:23 +0100
+Subject: [PATCH 066/127] ASoC: wm8804: Implement MCLK configuration options,
+ add 32bit support WM8804 can run with PLL frequencies of 256xfs and 128xfs
+ for most sample rates. At 192kHz only 128xfs is supported. The existing
+ driver selects 128xfs automatically for some lower sample rates. By using an
+ additional mclk_div divider, it is now possible to control the behaviour.
+ This allows using 256xfs PLL frequency on all sample rates up to 96kHz. It
+ should allow lower jitter and better signal quality. The behavior has to be
+ controlled by the sound card driver, because some sample frequencies share the
+ same setting. e.g. 192kHz and 96kHz use 24.576MHz master clock. The only
+ difference is the MCLK divider.
+
+This also added support for 32bit data.
+
+Signed-off-by: Daniel Matuschek <daniel at matuschek.net>
+---
+ sound/soc/codecs/wm8804.c | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+--- a/sound/soc/codecs/wm8804.c
++++ b/sound/soc/codecs/wm8804.c
+@@ -304,6 +304,7 @@ static int wm8804_hw_params(struct snd_p
+ 		blen = 0x1;
+ 		break;
+ 	case 24:
++	case 32:
+ 		blen = 0x2;
+ 		break;
+ 	default:
+@@ -515,7 +516,7 @@ static const struct snd_soc_dai_ops wm88
+ };
+ 
+ #define WM8804_FORMATS (SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S20_3LE | \
+-			SNDRV_PCM_FMTBIT_S24_LE)
++			SNDRV_PCM_FMTBIT_S24_3LE | SNDRV_PCM_FMTBIT_S32_LE)
+ 
+ #define WM8804_RATES (SNDRV_PCM_RATE_32000 | SNDRV_PCM_RATE_44100 | \
+ 		      SNDRV_PCM_RATE_48000 | SNDRV_PCM_RATE_64000 | \
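
To make the divider choice concrete: at 48kHz a 256xfs PLL output is
48000 * 256 = 12.288MHz, at 96kHz it is 24.576MHz, and 192kHz can only
reach that same 24.576MHz as 128xfs (192000 * 128), which is why 96kHz and
192kHz share the clock setup and differ only in the MCLK divider.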
diff --git a/target/linux/brcm2708/patches-4.4/0067-ASoC-BCM-Add-support-for-HiFiBerry-Digi.-Driver-is-b.patch b/target/linux/brcm2708/patches-4.4/0067-ASoC-BCM-Add-support-for-HiFiBerry-Digi.-Driver-is-b.patch
new file mode 100644
index 0000000..a69667c
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0067-ASoC-BCM-Add-support-for-HiFiBerry-Digi.-Driver-is-b.patch
@@ -0,0 +1,282 @@
+From bbeffde8da0be5674342d1f2301d8d9bdc56d2d5 Mon Sep 17 00:00:00 2001
+From: Daniel Matuschek <info at crazy-audio.com>
+Date: Wed, 15 Jan 2014 21:42:08 +0100
+Subject: [PATCH 067/127] ASoC: BCM: Add support for HiFiBerry Digi. Driver is
+ based on the patched WM8804 driver.
+
+Signed-off-by: Daniel Matuschek <daniel at matuschek.net>
+
+Add a parameter to turn off SPDIF output if no audio is playing
+
+This patch adds the parameter auto_shutdown_output to the kernel module.
+Default behaviour of the module is the same, but when auto_shutdown_output
+is set to 1, the SPDIF output will shut down if no stream is playing.
+
+bugfix for 32kHz sample rate, was missing
+
+HiFiBerry Digi: set SPDIF status bits for sample rate
+
+The HiFiBerry Digi driver did not signal the sample rate in the SPDIF status bits.
+While this is optional, some DACs and receivers do not accept this signal. This patch
+adds the sample rate bits in the SPDIF status block.
+---
+ sound/soc/bcm/Kconfig          |   7 ++
+ sound/soc/bcm/Makefile         |   2 +
+ sound/soc/bcm/hifiberry_digi.c | 223 +++++++++++++++++++++++++++++++++++++++++
+ 3 files changed, 232 insertions(+)
+ create mode 100644 sound/soc/bcm/hifiberry_digi.c
+
+--- a/sound/soc/bcm/Kconfig
++++ b/sound/soc/bcm/Kconfig
+@@ -15,6 +15,13 @@ config SND_BCM2708_SOC_HIFIBERRY_DAC
+         help
+          Say Y or M if you want to add support for HifiBerry DAC.
+ 
++config SND_BCM2708_SOC_HIFIBERRY_DIGI
++        tristate "Support for HifiBerry Digi"
++        depends on SND_BCM2708_SOC_I2S || SND_BCM2835_SOC_I2S
++        select SND_SOC_WM8804
++        help
++         Say Y or M if you want to add support for HifiBerry Digi S/PDIF output board.
++
+ config SND_BCM2708_SOC_RPI_DAC
+         tristate "Support for RPi-DAC"
+         depends on SND_BCM2708_SOC_I2S || SND_BCM2835_SOC_I2S
+--- a/sound/soc/bcm/Makefile
++++ b/sound/soc/bcm/Makefile
+@@ -5,7 +5,9 @@ obj-$(CONFIG_SND_BCM2835_SOC_I2S) += snd
+ 
+ # BCM2708 Machine Support
+ snd-soc-hifiberry-dac-objs := hifiberry_dac.o
++snd-soc-hifiberry-digi-objs := hifiberry_digi.o
+ snd-soc-rpi-dac-objs := rpi-dac.o
+ 
+ obj-$(CONFIG_SND_BCM2708_SOC_HIFIBERRY_DAC) += snd-soc-hifiberry-dac.o
++obj-$(CONFIG_SND_BCM2708_SOC_HIFIBERRY_DIGI) += snd-soc-hifiberry-digi.o
+ obj-$(CONFIG_SND_BCM2708_SOC_RPI_DAC) += snd-soc-rpi-dac.o
+--- /dev/null
++++ b/sound/soc/bcm/hifiberry_digi.c
+@@ -0,0 +1,223 @@
++/*
++ * ASoC Driver for HifiBerry Digi
++ *
++ * Author: Daniel Matuschek <info at crazy-audio.com>
++ * based on the HifiBerry DAC driver by Florian Meier <florian.meier at koalo.de>
++ *	Copyright 2013
++ *
++ * This program is free software; you can redistribute it and/or
++ * modify it under the terms of the GNU General Public License
++ * version 2 as published by the Free Software Foundation.
++ *
++ * This program is distributed in the hope that it will be useful, but
++ * WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
++ * General Public License for more details.
++ */
++
++#include <linux/module.h>
++#include <linux/platform_device.h>
++
++#include <sound/core.h>
++#include <sound/pcm.h>
++#include <sound/pcm_params.h>
++#include <sound/soc.h>
++#include <sound/jack.h>
++
++#include "../codecs/wm8804.h"
++
++static short int auto_shutdown_output = 0;
++module_param(auto_shutdown_output, short, S_IRUSR | S_IWUSR | S_IRGRP | S_IWGRP);
++MODULE_PARM_DESC(auto_shutdown_output, "Shutdown SP/DIF output if playback is stopped");
++
++
++static int samplerate=44100;
++
++static int snd_rpi_hifiberry_digi_init(struct snd_soc_pcm_runtime *rtd)
++{
++	struct snd_soc_codec *codec = rtd->codec;
++
++	/* enable TX output */
++	snd_soc_update_bits(codec, WM8804_PWRDN, 0x4, 0x0);
++
++	return 0;
++}
++
++static int snd_rpi_hifiberry_digi_startup(struct snd_pcm_substream *substream) {
++	/* turn on digital output */
++	struct snd_soc_pcm_runtime *rtd = substream->private_data;
++	struct snd_soc_codec *codec = rtd->codec;
++	snd_soc_update_bits(codec, WM8804_PWRDN, 0x3c, 0x00);
++	return 0;
++}
++
++static void snd_rpi_hifiberry_digi_shutdown(struct snd_pcm_substream *substream) {
++	/* turn off output */
++	if (auto_shutdown_output) {
++		/* turn off output */
++		struct snd_soc_pcm_runtime *rtd = substream->private_data;
++		struct snd_soc_codec *codec = rtd->codec;
++		snd_soc_update_bits(codec, WM8804_PWRDN, 0x3c, 0x3c);
++	}
++}
++
++
++static int snd_rpi_hifiberry_digi_hw_params(struct snd_pcm_substream *substream,
++				       struct snd_pcm_hw_params *params)
++{
++	struct snd_soc_pcm_runtime *rtd = substream->private_data;
++	struct snd_soc_dai *codec_dai = rtd->codec_dai;
++	struct snd_soc_codec *codec = rtd->codec;
++	struct snd_soc_dai *cpu_dai = rtd->cpu_dai;
++
++	int sysclk = 27000000; /* This is fixed on this board */
++
++	long mclk_freq=0;
++	int mclk_div=1;
++	int sampling_freq=1;
++
++	int ret;
++
++	samplerate = params_rate(params);
++
++	if (samplerate<=96000) {
++		mclk_freq=samplerate*256;
++		mclk_div=WM8804_MCLKDIV_256FS;
++	} else {
++		mclk_freq=samplerate*128;
++		mclk_div=WM8804_MCLKDIV_128FS;
++	}
++	
++	switch (samplerate) {
++		case 32000:
++			sampling_freq=0x03;
++			break;
++		case 44100:
++			sampling_freq=0x00;
++			break;
++		case 48000:
++			sampling_freq=0x02;
++			break;
++		case 88200:
++			sampling_freq=0x08;
++			break;
++		case 96000:
++			sampling_freq=0x0a;
++			break;
++		case 176400:
++			sampling_freq=0x0c;
++			break;
++		case 192000:
++			sampling_freq=0x0e;
++			break;
++		default:
++			dev_err(codec->dev,
++			"Failed to set WM8804 SYSCLK, unsupported samplerate %d\n",
++			samplerate);
++	}
++
++	snd_soc_dai_set_clkdiv(codec_dai, WM8804_MCLK_DIV, mclk_div);
++	snd_soc_dai_set_pll(codec_dai, 0, 0, sysclk, mclk_freq);
++
++	ret = snd_soc_dai_set_sysclk(codec_dai, WM8804_TX_CLKSRC_PLL,
++					sysclk, SND_SOC_CLOCK_OUT);
++	if (ret < 0) {
++		dev_err(codec->dev,
++		"Failed to set WM8804 SYSCLK: %d\n", ret);
++		return ret;
++	}
++
++	/* Enable TX output */
++	snd_soc_update_bits(codec, WM8804_PWRDN, 0x4, 0x0);
++
++	/* Power on */
++	snd_soc_update_bits(codec, WM8804_PWRDN, 0x9, 0);
++
++	/* set sampling frequency status bits */
++	snd_soc_update_bits(codec, WM8804_SPDTX4, 0x0f, sampling_freq);
++
++	return snd_soc_dai_set_bclk_ratio(cpu_dai,64);
++}
++
++/* machine stream operations */
++static struct snd_soc_ops snd_rpi_hifiberry_digi_ops = {
++	.hw_params = snd_rpi_hifiberry_digi_hw_params,
++        .startup = snd_rpi_hifiberry_digi_startup,
++        .shutdown = snd_rpi_hifiberry_digi_shutdown,
++};
++
++static struct snd_soc_dai_link snd_rpi_hifiberry_digi_dai[] = {
++{
++	.name		= "HifiBerry Digi",
++	.stream_name	= "HifiBerry Digi HiFi",
++	.cpu_dai_name	= "bcm2708-i2s.0",
++	.codec_dai_name	= "wm8804-spdif",
++	.platform_name	= "bcm2708-i2s.0",
++	.codec_name	= "wm8804.1-003b",
++	.dai_fmt	= SND_SOC_DAIFMT_I2S | SND_SOC_DAIFMT_NB_NF |
++				SND_SOC_DAIFMT_CBM_CFM,
++	.ops		= &snd_rpi_hifiberry_digi_ops,
++	.init		= snd_rpi_hifiberry_digi_init,
++},
++};
++
++/* audio machine driver */
++static struct snd_soc_card snd_rpi_hifiberry_digi = {
++	.name         = "snd_rpi_hifiberry_digi",
++	.dai_link     = snd_rpi_hifiberry_digi_dai,
++	.num_links    = ARRAY_SIZE(snd_rpi_hifiberry_digi_dai),
++};
++
++static int snd_rpi_hifiberry_digi_probe(struct platform_device *pdev)
++{
++	int ret = 0;
++
++	snd_rpi_hifiberry_digi.dev = &pdev->dev;
++
++	if (pdev->dev.of_node) {
++	    struct device_node *i2s_node;
++	    struct snd_soc_dai_link *dai = &snd_rpi_hifiberry_digi_dai[0];
++	    i2s_node = of_parse_phandle(pdev->dev.of_node,
++					"i2s-controller", 0);
++
++	    if (i2s_node) {
++		dai->cpu_dai_name = NULL;
++		dai->cpu_of_node = i2s_node;
++		dai->platform_name = NULL;
++		dai->platform_of_node = i2s_node;
++	    }
++	}
++
++	ret = snd_soc_register_card(&snd_rpi_hifiberry_digi);
++	if (ret)
++		dev_err(&pdev->dev, "snd_soc_register_card() failed: %d\n", ret);
++
++	return ret;
++}
++
++static int snd_rpi_hifiberry_digi_remove(struct platform_device *pdev)
++{
++	return snd_soc_unregister_card(&snd_rpi_hifiberry_digi);
++}
++
++static const struct of_device_id snd_rpi_hifiberry_digi_of_match[] = {
++	{ .compatible = "hifiberry,hifiberry-digi", },
++	{},
++};
++MODULE_DEVICE_TABLE(of, snd_rpi_hifiberry_digi_of_match);
++
++static struct platform_driver snd_rpi_hifiberry_digi_driver = {
++	.driver = {
++		.name   = "snd-hifiberry-digi",
++		.owner  = THIS_MODULE,
++		.of_match_table = snd_rpi_hifiberry_digi_of_match,
++	},
++	.probe          = snd_rpi_hifiberry_digi_probe,
++	.remove         = snd_rpi_hifiberry_digi_remove,
++};
++
++module_platform_driver(snd_rpi_hifiberry_digi_driver);
++
++MODULE_AUTHOR("Daniel Matuschek <info at crazy-audio.com>");
++MODULE_DESCRIPTION("ASoC Driver for HifiBerry Digi");
++MODULE_LICENSE("GPL v2");
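
The new option defaults to off; loading the card driver with
modprobe snd-soc-hifiberry-digi auto_shutdown_output=1 makes the shutdown
callback above power down the WM8804 transmitter (PWRDN bits 0x3c) whenever
no stream is playing.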
diff --git a/target/linux/brcm2708/patches-4.4/0068-ASoC-wm8804-Set-idle_bias_off-to-false-Idle-bias-has.patch b/target/linux/brcm2708/patches-4.4/0068-ASoC-wm8804-Set-idle_bias_off-to-false-Idle-bias-has.patch
new file mode 100644
index 0000000..a2af8c2
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0068-ASoC-wm8804-Set-idle_bias_off-to-false-Idle-bias-has.patch
@@ -0,0 +1,22 @@
+From 931556f39a5c08c682d31b0b8c25bf1c712a909f Mon Sep 17 00:00:00 2001
+From: Daniel Matuschek <info at crazy-audio.com>
+Date: Thu, 16 Jan 2014 07:36:35 +0100
+Subject: [PATCH 068/127] ASoC: wm8804: Set idle_bias_off to false Idle bias
+ has been changed to remove a warning on driver startup
+
+Signed-off-by: Daniel Matuschek <daniel at matuschek.net>
+---
+ sound/soc/codecs/wm8804.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/sound/soc/codecs/wm8804.c
++++ b/sound/soc/codecs/wm8804.c
+@@ -544,7 +544,7 @@ static struct snd_soc_dai_driver wm8804_
+ };
+ 
+ static const struct snd_soc_codec_driver soc_codec_dev_wm8804 = {
+-	.idle_bias_off = true,
++	.idle_bias_off = false,
+ 
+ 	.dapm_widgets = wm8804_dapm_widgets,
+ 	.num_dapm_widgets = ARRAY_SIZE(wm8804_dapm_widgets),
diff --git a/target/linux/brcm2708/patches-4.4/0069-Add-IQaudIO-Sound-Card-support-for-Raspberry-Pi.patch b/target/linux/brcm2708/patches-4.4/0069-Add-IQaudIO-Sound-Card-support-for-Raspberry-Pi.patch
new file mode 100644
index 0000000..8d5d4ce
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0069-Add-IQaudIO-Sound-Card-support-for-Raspberry-Pi.patch
@@ -0,0 +1,178 @@
+From 288238307a79e6978bb475bac7098832ca3de373 Mon Sep 17 00:00:00 2001
+From: Gordon Garrity <gordon at iqaudio.com>
+Date: Sat, 8 Mar 2014 16:56:57 +0000
+Subject: [PATCH 069/127] Add IQaudIO Sound Card support for Raspberry Pi
+
+Set a limit of 0dB on Digital Volume Control
+
+The main volume control in the PCM512x DAC has a range up to
++24dB. This is dangerously loud and can potentially cause massive
+clipping in the output stages. Therefore this sets a sensible
+limit of 0dB for this control.
+---
+ sound/soc/bcm/Kconfig       |   7 +++
+ sound/soc/bcm/Makefile      |   2 +
+ sound/soc/bcm/iqaudio-dac.c | 132 ++++++++++++++++++++++++++++++++++++++++++++
+ 3 files changed, 141 insertions(+)
+ create mode 100644 sound/soc/bcm/iqaudio-dac.c
+
+--- a/sound/soc/bcm/Kconfig
++++ b/sound/soc/bcm/Kconfig
+@@ -28,3 +28,10 @@ config SND_BCM2708_SOC_RPI_DAC
+         select SND_SOC_PCM1794A
+         help
+          Say Y or M if you want to add support for RPi-DAC.
++
++config SND_BCM2708_SOC_IQAUDIO_DAC
++	tristate "Support for IQaudIO-DAC"
++	depends on SND_BCM2708_SOC_I2S || SND_BCM2835_SOC_I2S
++	select SND_SOC_PCM512x_I2C
++	help
++	  Say Y or M if you want to add support for IQaudIO-DAC.
+--- a/sound/soc/bcm/Makefile
++++ b/sound/soc/bcm/Makefile
+@@ -7,7 +7,9 @@ obj-$(CONFIG_SND_BCM2835_SOC_I2S) += snd
+ snd-soc-hifiberry-dac-objs := hifiberry_dac.o
+ snd-soc-hifiberry-digi-objs := hifiberry_digi.o
+ snd-soc-rpi-dac-objs := rpi-dac.o
++snd-soc-iqaudio-dac-objs := iqaudio-dac.o
+ 
+ obj-$(CONFIG_SND_BCM2708_SOC_HIFIBERRY_DAC) += snd-soc-hifiberry-dac.o
+ obj-$(CONFIG_SND_BCM2708_SOC_HIFIBERRY_DIGI) += snd-soc-hifiberry-digi.o
+ obj-$(CONFIG_SND_BCM2708_SOC_RPI_DAC) += snd-soc-rpi-dac.o
++obj-$(CONFIG_SND_BCM2708_SOC_IQAUDIO_DAC) += snd-soc-iqaudio-dac.o
+--- /dev/null
++++ b/sound/soc/bcm/iqaudio-dac.c
+@@ -0,0 +1,132 @@
++/*
++ * ASoC Driver for IQaudIO DAC
++ *
++ * Author:	Florian Meier <florian.meier at koalo.de>
++ *		Copyright 2013
++ *
++ * This program is free software; you can redistribute it and/or
++ * modify it under the terms of the GNU General Public License
++ * version 2 as published by the Free Software Foundation.
++ *
++ * This program is distributed in the hope that it will be useful, but
++ * WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
++ * General Public License for more details.
++ */
++
++#include <linux/module.h>
++#include <linux/platform_device.h>
++
++#include <sound/core.h>
++#include <sound/pcm.h>
++#include <sound/pcm_params.h>
++#include <sound/soc.h>
++#include <sound/jack.h>
++
++static int snd_rpi_iqaudio_dac_init(struct snd_soc_pcm_runtime *rtd)
++{
++	int ret;
++	struct snd_soc_card *card = rtd->card;
++
++	ret = snd_soc_limit_volume(card, "Digital Playback Volume", 207);
++	if (ret < 0)
++		dev_warn(card->dev, "Failed to set volume limit: %d\n", ret);
++
++	return 0;
++}
++
++static int snd_rpi_iqaudio_dac_hw_params(struct snd_pcm_substream *substream,
++				       struct snd_pcm_hw_params *params)
++{
++	struct snd_soc_pcm_runtime *rtd = substream->private_data;
++// NOT USED	struct snd_soc_dai *codec_dai = rtd->codec_dai;
++// NOT USED	struct snd_soc_codec *codec = rtd->codec;
++	struct snd_soc_dai *cpu_dai = rtd->cpu_dai;
++
++	unsigned int sample_bits =
++		snd_pcm_format_physical_width(params_format(params));
++
++	return snd_soc_dai_set_bclk_ratio(cpu_dai, sample_bits * 2);
++}
++
++/* machine stream operations */
++static struct snd_soc_ops snd_rpi_iqaudio_dac_ops = {
++	.hw_params = snd_rpi_iqaudio_dac_hw_params,
++};
++
++static struct snd_soc_dai_link snd_rpi_iqaudio_dac_dai[] = {
++{
++	.name		= "IQaudIO DAC",
++	.stream_name	= "IQaudIO DAC HiFi",
++	.cpu_dai_name	= "bcm2708-i2s.0",
++	.codec_dai_name	= "pcm512x-hifi",
++	.platform_name	= "bcm2708-i2s.0",
++	.codec_name	= "pcm512x.1-004c",
++	.dai_fmt	= SND_SOC_DAIFMT_I2S | SND_SOC_DAIFMT_NB_NF |
++				SND_SOC_DAIFMT_CBS_CFS,
++	.ops		= &snd_rpi_iqaudio_dac_ops,
++	.init		= snd_rpi_iqaudio_dac_init,
++},
++};
++
++/* audio machine driver */
++static struct snd_soc_card snd_rpi_iqaudio_dac = {
++	.name         = "IQaudIODAC",
++	.dai_link     = snd_rpi_iqaudio_dac_dai,
++	.num_links    = ARRAY_SIZE(snd_rpi_iqaudio_dac_dai),
++};
++
++static int snd_rpi_iqaudio_dac_probe(struct platform_device *pdev)
++{
++	int ret = 0;
++
++	snd_rpi_iqaudio_dac.dev = &pdev->dev;
++
++	if (pdev->dev.of_node) {
++	    struct device_node *i2s_node;
++	    struct snd_soc_dai_link *dai = &snd_rpi_iqaudio_dac_dai[0];
++	    i2s_node = of_parse_phandle(pdev->dev.of_node,
++					"i2s-controller", 0);
++
++	    if (i2s_node) {
++		dai->cpu_dai_name = NULL;
++		dai->cpu_of_node = i2s_node;
++		dai->platform_name = NULL;
++		dai->platform_of_node = i2s_node;
++	    }
++	}
++
++	ret = snd_soc_register_card(&snd_rpi_iqaudio_dac);
++	if (ret)
++		dev_err(&pdev->dev,
++			"snd_soc_register_card() failed: %d\n", ret);
++
++	return ret;
++}
++
++static int snd_rpi_iqaudio_dac_remove(struct platform_device *pdev)
++{
++	return snd_soc_unregister_card(&snd_rpi_iqaudio_dac);
++}
++
++static const struct of_device_id iqaudio_of_match[] = {
++	{ .compatible = "iqaudio,iqaudio-dac", },
++	{},
++};
++MODULE_DEVICE_TABLE(of, iqaudio_of_match);
++
++static struct platform_driver snd_rpi_iqaudio_dac_driver = {
++	.driver = {
++		.name   = "snd-rpi-iqaudio-dac",
++		.owner  = THIS_MODULE,
++		.of_match_table = iqaudio_of_match,
++	},
++	.probe          = snd_rpi_iqaudio_dac_probe,
++	.remove         = snd_rpi_iqaudio_dac_remove,
++};
++
++module_platform_driver(snd_rpi_iqaudio_dac_driver);
++
++MODULE_AUTHOR("Florian Meier <florian.meier at koalo.de>");
++MODULE_DESCRIPTION("ASoC Driver for IQAudio DAC");
++MODULE_LICENSE("GPL v2");
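For reference, the limit of 207 passed to snd_soc_limit_volume() above corresponds
to 0 dB on the assumption that the PCM512x "Digital Playback Volume" control spans
-103.5 dB to +24 dB in 0.5 dB steps over indices 0..255; that scale is not restated
in the patch itself, so the sketch below is illustrative arithmetic, not driver code:

#include <stdio.h>

/*
 * Illustrative only: map a dB limit to a control index, assuming the
 * PCM512x digital volume scale described above (-103.5 dB minimum,
 * 0.5 dB per step).
 */
int main(void)
{
        const double min_db = -103.5, step_db = 0.5, limit_db = 0.0;
        int index = (int)((limit_db - min_db) / step_db);

        printf("0 dB limit -> control index %d\n", index); /* prints 207 */
        return 0;
}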
diff --git a/target/linux/brcm2708/patches-4.4/0070-hid-Reduce-default-mouse-polling-interval-to-60Hz.patch b/target/linux/brcm2708/patches-4.4/0070-hid-Reduce-default-mouse-polling-interval-to-60Hz.patch
new file mode 100644
index 0000000..ac9ba9f
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0070-hid-Reduce-default-mouse-polling-interval-to-60Hz.patch
@@ -0,0 +1,36 @@
+From 7f78dfe7e00d426d90be5f98ac9683c83c024b28 Mon Sep 17 00:00:00 2001
+From: popcornmix <popcornmix at gmail.com>
+Date: Mon, 14 Jul 2014 22:02:09 +0100
+Subject: [PATCH 070/127] hid: Reduce default mouse polling interval to 60Hz
+
+Reduces overhead when using X
+---
+ drivers/hid/usbhid/hid-core.c | 10 +++++++---
+ 1 file changed, 7 insertions(+), 3 deletions(-)
+
+--- a/drivers/hid/usbhid/hid-core.c
++++ b/drivers/hid/usbhid/hid-core.c
+@@ -49,7 +49,7 @@
+  * Module parameters.
+  */
+ 
+-static unsigned int hid_mousepoll_interval;
++static unsigned int hid_mousepoll_interval = ~0;
+ module_param_named(mousepoll, hid_mousepoll_interval, uint, 0644);
+ MODULE_PARM_DESC(mousepoll, "Polling interval of mice");
+ 
+@@ -1091,8 +1091,12 @@ static int usbhid_start(struct hid_devic
+ 		}
+ 
+ 		/* Change the polling interval of mice. */
+-		if (hid->collection->usage == HID_GD_MOUSE && hid_mousepoll_interval > 0)
+-			interval = hid_mousepoll_interval;
++		if (hid->collection->usage == HID_GD_MOUSE) {
++				if (hid_mousepoll_interval == ~0 && interval < 16)
++						interval = 16;
++				else if (hid_mousepoll_interval != ~0 && hid_mousepoll_interval != 0)
++						interval = hid_mousepoll_interval;
++		}
+ 
+ 		ret = -ENOMEM;
+ 		if (usb_endpoint_dir_in(endpoint)) {
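With the default of ~0 ("unset"), the hunk above only clamps mice that already poll
faster than 16 ms, i.e. to roughly 60 Hz; an explicit mousepoll value still takes
precedence, and 0 keeps the device's own interval. A minimal userspace model of that
decision logic (a sketch, not the kernel code itself):

#include <stdio.h>

/* Mirror of the interval selection above: ~0 means "unset". */
static unsigned int pick_interval(unsigned int mousepoll, unsigned int dev_interval)
{
        if (mousepoll == ~0u && dev_interval < 16)
                return 16;                      /* clamp to ~60 Hz */
        if (mousepoll != ~0u && mousepoll != 0)
                return mousepoll;               /* explicit override */
        return dev_interval;                    /* keep device default */
}

int main(void)
{
        printf("%u\n", pick_interval(~0u, 8));  /* 16 */
        printf("%u\n", pick_interval(0, 8));    /* 8  */
        printf("%u\n", pick_interval(4, 8));    /* 4  */
        return 0;
}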
diff --git a/target/linux/brcm2708/patches-4.4/0071-Added-support-for-HiFiBerry-DAC.patch b/target/linux/brcm2708/patches-4.4/0071-Added-support-for-HiFiBerry-DAC.patch
new file mode 100644
index 0000000..2c2ed05
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0071-Added-support-for-HiFiBerry-DAC.patch
@@ -0,0 +1,190 @@
+From 21d1427a7563d56ca1fdcee66cb80e7f2879c6b8 Mon Sep 17 00:00:00 2001
+From: Daniel Matuschek <info at crazy-audio.com>
+Date: Mon, 4 Aug 2014 10:06:56 +0200
+Subject: [PATCH 071/127] Added support for HiFiBerry DAC+
+
+The driver is based on the HiFiBerry DAC driver. However, the HiFiBerry DAC+ uses
+a different codec chip (PCM5122), so a new driver is necessary.
+---
+ sound/soc/bcm/Kconfig             |   7 ++
+ sound/soc/bcm/Makefile            |   2 +
+ sound/soc/bcm/hifiberry_dacplus.c | 141 ++++++++++++++++++++++++++++++++++++++
+ 3 files changed, 150 insertions(+)
+ create mode 100644 sound/soc/bcm/hifiberry_dacplus.c
+
+--- a/sound/soc/bcm/Kconfig
++++ b/sound/soc/bcm/Kconfig
+@@ -15,6 +15,13 @@ config SND_BCM2708_SOC_HIFIBERRY_DAC
+         help
+          Say Y or M if you want to add support for HifiBerry DAC.
+ 
++config SND_BCM2708_SOC_HIFIBERRY_DACPLUS
++        tristate "Support for HifiBerry DAC+"
++        depends on SND_BCM2708_SOC_I2S || SND_BCM2835_SOC_I2S
++        select SND_SOC_PCM512x
++        help
++         Say Y or M if you want to add support for HifiBerry DAC+.
++
+ config SND_BCM2708_SOC_HIFIBERRY_DIGI
+         tristate "Support for HifiBerry Digi"
+         depends on SND_BCM2708_SOC_I2S || SND_BCM2835_SOC_I2S
+--- a/sound/soc/bcm/Makefile
++++ b/sound/soc/bcm/Makefile
+@@ -5,11 +5,13 @@ obj-$(CONFIG_SND_BCM2835_SOC_I2S) += snd
+ 
+ # BCM2708 Machine Support
+ snd-soc-hifiberry-dac-objs := hifiberry_dac.o
++snd-soc-hifiberry-dacplus-objs := hifiberry_dacplus.o
+ snd-soc-hifiberry-digi-objs := hifiberry_digi.o
+ snd-soc-rpi-dac-objs := rpi-dac.o
+ snd-soc-iqaudio-dac-objs := iqaudio-dac.o
+ 
+ obj-$(CONFIG_SND_BCM2708_SOC_HIFIBERRY_DAC) += snd-soc-hifiberry-dac.o
++obj-$(CONFIG_SND_BCM2708_SOC_HIFIBERRY_DACPLUS) += snd-soc-hifiberry-dacplus.o
+ obj-$(CONFIG_SND_BCM2708_SOC_HIFIBERRY_DIGI) += snd-soc-hifiberry-digi.o
+ obj-$(CONFIG_SND_BCM2708_SOC_RPI_DAC) += snd-soc-rpi-dac.o
+ obj-$(CONFIG_SND_BCM2708_SOC_IQAUDIO_DAC) += snd-soc-iqaudio-dac.o
+--- /dev/null
++++ b/sound/soc/bcm/hifiberry_dacplus.c
+@@ -0,0 +1,141 @@
++/*
++ * ASoC Driver for HiFiBerry DAC+
++ *
++ * Author:	Daniel Matuschek
++ *		Copyright 2014
++ *		based on code by Florian Meier <florian.meier at koalo.de>
++ *
++ * This program is free software; you can redistribute it and/or
++ * modify it under the terms of the GNU General Public License
++ * version 2 as published by the Free Software Foundation.
++ *
++ * This program is distributed in the hope that it will be useful, but
++ * WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
++ * General Public License for more details.
++ */
++
++#include <linux/module.h>
++#include <linux/platform_device.h>
++
++#include <sound/core.h>
++#include <sound/pcm.h>
++#include <sound/pcm_params.h>
++#include <sound/soc.h>
++#include <sound/jack.h>
++
++#include "../codecs/pcm512x.h"
++
++static int snd_rpi_hifiberry_dacplus_init(struct snd_soc_pcm_runtime *rtd)
++{
++	struct snd_soc_codec *codec = rtd->codec;
++	snd_soc_update_bits(codec, PCM512x_GPIO_EN, 0x08, 0x08);
++	snd_soc_update_bits(codec, PCM512x_GPIO_OUTPUT_4, 0xf, 0x02);
++	snd_soc_update_bits(codec, PCM512x_GPIO_CONTROL_1, 0x08,0x08);
++	return 0;
++}
++
++static int snd_rpi_hifiberry_dacplus_hw_params(struct snd_pcm_substream *substream,
++				       struct snd_pcm_hw_params *params)
++{
++	struct snd_soc_pcm_runtime *rtd = substream->private_data;
++	struct snd_soc_dai *cpu_dai = rtd->cpu_dai;
++	return snd_soc_dai_set_bclk_ratio(cpu_dai, 64);
++}
++
++static int snd_rpi_hifiberry_dacplus_startup(struct snd_pcm_substream *substream) {
++	struct snd_soc_pcm_runtime *rtd = substream->private_data;
++	struct snd_soc_codec *codec = rtd->codec;
++	snd_soc_update_bits(codec, PCM512x_GPIO_CONTROL_1, 0x08,0x08);
++	return 0;
++}
++
++static void snd_rpi_hifiberry_dacplus_shutdown(struct snd_pcm_substream *substream) {
++	struct snd_soc_pcm_runtime *rtd = substream->private_data;
++	struct snd_soc_codec *codec = rtd->codec;
++	snd_soc_update_bits(codec, PCM512x_GPIO_CONTROL_1, 0x08,0x00);
++}
++
++/* machine stream operations */
++static struct snd_soc_ops snd_rpi_hifiberry_dacplus_ops = {
++	.hw_params = snd_rpi_hifiberry_dacplus_hw_params,
++	.startup = snd_rpi_hifiberry_dacplus_startup,
++	.shutdown = snd_rpi_hifiberry_dacplus_shutdown,
++};
++
++static struct snd_soc_dai_link snd_rpi_hifiberry_dacplus_dai[] = {
++{
++	.name		= "HiFiBerry DAC+",
++	.stream_name	= "HiFiBerry DAC+ HiFi",
++	.cpu_dai_name	= "bcm2708-i2s.0",
++	.codec_dai_name	= "pcm512x-hifi",
++	.platform_name	= "bcm2708-i2s.0",
++	.codec_name	= "pcm512x.1-004d",
++	.dai_fmt	= SND_SOC_DAIFMT_I2S | SND_SOC_DAIFMT_NB_NF |
++				SND_SOC_DAIFMT_CBS_CFS,
++	.ops		= &snd_rpi_hifiberry_dacplus_ops,
++	.init		= snd_rpi_hifiberry_dacplus_init,
++},
++};
++
++/* audio machine driver */
++static struct snd_soc_card snd_rpi_hifiberry_dacplus = {
++	.name         = "snd_rpi_hifiberry_dacplus",
++	.dai_link     = snd_rpi_hifiberry_dacplus_dai,
++	.num_links    = ARRAY_SIZE(snd_rpi_hifiberry_dacplus_dai),
++};
++
++static int snd_rpi_hifiberry_dacplus_probe(struct platform_device *pdev)
++{
++	int ret = 0;
++
++	snd_rpi_hifiberry_dacplus.dev = &pdev->dev;
++
++	if (pdev->dev.of_node) {
++	    struct device_node *i2s_node;
++	    struct snd_soc_dai_link *dai = &snd_rpi_hifiberry_dacplus_dai[0];
++	    i2s_node = of_parse_phandle(pdev->dev.of_node,
++					"i2s-controller", 0);
++
++	    if (i2s_node) {
++		dai->cpu_dai_name = NULL;
++		dai->cpu_of_node = i2s_node;
++		dai->platform_name = NULL;
++		dai->platform_of_node = i2s_node;
++	    }
++	}
++
++	ret = snd_soc_register_card(&snd_rpi_hifiberry_dacplus);
++	if (ret)
++		dev_err(&pdev->dev,
++			"snd_soc_register_card() failed: %d\n", ret);
++
++	return ret;
++}
++
++static int snd_rpi_hifiberry_dacplus_remove(struct platform_device *pdev)
++{
++	return snd_soc_unregister_card(&snd_rpi_hifiberry_dacplus);
++}
++
++static const struct of_device_id snd_rpi_hifiberry_dacplus_of_match[] = {
++	{ .compatible = "hifiberry,hifiberry-dacplus", },
++	{},
++};
++MODULE_DEVICE_TABLE(of, snd_rpi_hifiberry_dacplus_of_match);
++
++static struct platform_driver snd_rpi_hifiberry_dacplus_driver = {
++	.driver = {
++		.name   = "snd-rpi-hifiberry-dacplus",
++		.owner  = THIS_MODULE,
++		.of_match_table = snd_rpi_hifiberry_dacplus_of_match,
++	},
++	.probe          = snd_rpi_hifiberry_dacplus_probe,
++	.remove         = snd_rpi_hifiberry_dacplus_remove,
++};
++
++module_platform_driver(snd_rpi_hifiberry_dacplus_driver);
++
++MODULE_AUTHOR("Daniel Matuschek <daniel at hifiberry.com>");
++MODULE_DESCRIPTION("ASoC Driver for HiFiBerry DAC+");
++MODULE_LICENSE("GPL v2");
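The fixed bclk_ratio of 64 in the hw_params hook above amounts to two 32-bit slots
per frame, independent of the sample format. A small sketch of the resulting
bit-clock rates (the sample rates are examples, not values taken from the patch):

#include <stdio.h>

/* Illustrative: BCLK = sample rate * bclk_ratio, with the ratio fixed at 64. */
int main(void)
{
        const unsigned int ratio = 64;
        const unsigned int rates[] = { 44100, 48000, 96000 };
        unsigned int i;

        for (i = 0; i < sizeof(rates) / sizeof(rates[0]); i++)
                printf("rate %u Hz -> BCLK %u Hz\n", rates[i], rates[i] * ratio);
        return 0;
}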
diff --git a/target/linux/brcm2708/patches-4.4/0072-Added-driver-for-HiFiBerry-Amp-amplifier-add-on-boar.patch b/target/linux/brcm2708/patches-4.4/0072-Added-driver-for-HiFiBerry-Amp-amplifier-add-on-boar.patch
new file mode 100644
index 0000000..34502a6
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0072-Added-driver-for-HiFiBerry-Amp-amplifier-add-on-boar.patch
@@ -0,0 +1,816 @@
+From f9201b45dcaa09fc1e85ca9fe011db801c0c12b5 Mon Sep 17 00:00:00 2001
+From: Daniel Matuschek <info at crazy-audio.com>
+Date: Mon, 4 Aug 2014 11:09:58 +0200
+Subject: [PATCH 072/127] Added driver for HiFiBerry Amp amplifier add-on board
+
+The driver contains a low-level hardware driver for the TAS5713 and the
+drivers for the Raspberry Pi I2S subsystem.
+
+TAS5713: return error if initialisation fails
+
+The existing TAS5713 driver logs errors during initialisation, but does not
+return an error code. Therefore, even if initialisation fails, the driver will
+still be loaded, but won't work. This patch fixes this: I2C communication
+errors are now reported correctly by a non-zero return code.
+
+HiFiBerry Amp: fix device-tree problems
+
+Some code needed to load the driver from device-tree overlays was missing; this patch adds it.
+---
+ sound/soc/bcm/Kconfig         |   7 +
+ sound/soc/bcm/Makefile        |   2 +
+ sound/soc/bcm/hifiberry_amp.c | 127 +++++++++++++++
+ sound/soc/codecs/Kconfig      |   4 +
+ sound/soc/codecs/Makefile     |   2 +
+ sound/soc/codecs/tas5713.c    | 369 ++++++++++++++++++++++++++++++++++++++++++
+ sound/soc/codecs/tas5713.h    | 210 ++++++++++++++++++++++++
+ 7 files changed, 721 insertions(+)
+ create mode 100644 sound/soc/bcm/hifiberry_amp.c
+ create mode 100644 sound/soc/codecs/tas5713.c
+ create mode 100644 sound/soc/codecs/tas5713.h
+
+--- a/sound/soc/bcm/Kconfig
++++ b/sound/soc/bcm/Kconfig
+@@ -29,6 +29,13 @@ config SND_BCM2708_SOC_HIFIBERRY_DIGI
+         help
+          Say Y or M if you want to add support for HifiBerry Digi S/PDIF output board.
+ 
++config SND_BCM2708_SOC_HIFIBERRY_AMP
++        tristate "Support for the HifiBerry Amp"
++        depends on SND_BCM2708_SOC_I2S || SND_BCM2835_SOC_I2S
++        select SND_SOC_TAS5713
++        help
++         Say Y or M if you want to add support for the HifiBerry Amp amplifier board.
++
+ config SND_BCM2708_SOC_RPI_DAC
+         tristate "Support for RPi-DAC"
+         depends on SND_BCM2708_SOC_I2S || SND_BCM2835_SOC_I2S
+--- a/sound/soc/bcm/Makefile
++++ b/sound/soc/bcm/Makefile
+@@ -7,11 +7,13 @@ obj-$(CONFIG_SND_BCM2835_SOC_I2S) += snd
+ snd-soc-hifiberry-dac-objs := hifiberry_dac.o
+ snd-soc-hifiberry-dacplus-objs := hifiberry_dacplus.o
+ snd-soc-hifiberry-digi-objs := hifiberry_digi.o
++snd-soc-hifiberry-amp-objs := hifiberry_amp.o
+ snd-soc-rpi-dac-objs := rpi-dac.o
+ snd-soc-iqaudio-dac-objs := iqaudio-dac.o
+ 
+ obj-$(CONFIG_SND_BCM2708_SOC_HIFIBERRY_DAC) += snd-soc-hifiberry-dac.o
+ obj-$(CONFIG_SND_BCM2708_SOC_HIFIBERRY_DACPLUS) += snd-soc-hifiberry-dacplus.o
+ obj-$(CONFIG_SND_BCM2708_SOC_HIFIBERRY_DIGI) += snd-soc-hifiberry-digi.o
++obj-$(CONFIG_SND_BCM2708_SOC_HIFIBERRY_AMP) += snd-soc-hifiberry-amp.o
+ obj-$(CONFIG_SND_BCM2708_SOC_RPI_DAC) += snd-soc-rpi-dac.o
+ obj-$(CONFIG_SND_BCM2708_SOC_IQAUDIO_DAC) += snd-soc-iqaudio-dac.o
+--- /dev/null
++++ b/sound/soc/bcm/hifiberry_amp.c
+@@ -0,0 +1,127 @@
++/*
++ * ASoC Driver for HifiBerry AMP
++ *
++ * Author:	Sebastian Eickhoff <basti.eickhoff at googlemail.com>
++ *		Copyright 2014
++ *
++ * This program is free software; you can redistribute it and/or
++ * modify it under the terms of the GNU General Public License
++ * version 2 as published by the Free Software Foundation.
++ *
++ * This program is distributed in the hope that it will be useful, but
++ * WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
++ * General Public License for more details.
++ */
++
++#include <linux/module.h>
++#include <linux/platform_device.h>
++
++#include <sound/core.h>
++#include <sound/pcm.h>
++#include <sound/pcm_params.h>
++#include <sound/soc.h>
++#include <sound/jack.h>
++
++static int snd_rpi_hifiberry_amp_init(struct snd_soc_pcm_runtime *rtd)
++{
++	// ToDo: init of the dsp-registers.
++	return 0;
++}
++
++static int snd_rpi_hifiberry_amp_hw_params( struct snd_pcm_substream *substream,
++				       struct snd_pcm_hw_params *params )
++{
++	struct snd_soc_pcm_runtime *rtd = substream->private_data;
++	struct snd_soc_dai *cpu_dai = rtd->cpu_dai;
++
++	return snd_soc_dai_set_bclk_ratio(cpu_dai, 64);
++}
++
++static struct snd_soc_ops snd_rpi_hifiberry_amp_ops = {
++	.hw_params = snd_rpi_hifiberry_amp_hw_params,
++};
++
++static struct snd_soc_dai_link snd_rpi_hifiberry_amp_dai[] = {
++    {
++		.name			= "HifiBerry AMP",
++		.stream_name	= "HifiBerry AMP HiFi",
++		.cpu_dai_name	= "bcm2708-i2s.0",
++		.codec_dai_name	= "tas5713-hifi",
++		.platform_name	= "bcm2708-i2s.0",
++		.codec_name		= "tas5713.1-001b",
++		.dai_fmt		= SND_SOC_DAIFMT_I2S |
++						  SND_SOC_DAIFMT_NB_NF |
++						  SND_SOC_DAIFMT_CBS_CFS,
++		.ops			= &snd_rpi_hifiberry_amp_ops,
++		.init			= snd_rpi_hifiberry_amp_init,
++	},
++};
++
++
++static struct snd_soc_card snd_rpi_hifiberry_amp = {
++	.name         = "snd_rpi_hifiberry_amp",
++	.dai_link     = snd_rpi_hifiberry_amp_dai,
++	.num_links    = ARRAY_SIZE(snd_rpi_hifiberry_amp_dai),
++};
++
++static const struct of_device_id snd_rpi_hifiberry_amp_of_match[] = {
++        { .compatible = "hifiberry,hifiberry-amp", },
++        {},
++};
++MODULE_DEVICE_TABLE(of, snd_rpi_hifiberry_amp_of_match);
++
++
++static int snd_rpi_hifiberry_amp_probe(struct platform_device *pdev)
++{
++	int ret = 0;
++
++	snd_rpi_hifiberry_amp.dev = &pdev->dev;
++
++        if (pdev->dev.of_node) {
++            struct device_node *i2s_node;
++            struct snd_soc_dai_link *dai = &snd_rpi_hifiberry_amp_dai[0];
++            i2s_node = of_parse_phandle(pdev->dev.of_node,
++                                        "i2s-controller", 0);
++
++            if (i2s_node) {
++                dai->cpu_dai_name = NULL;
++                dai->cpu_of_node = i2s_node;
++                dai->platform_name = NULL;
++                dai->platform_of_node = i2s_node;
++            }
++        }
++
++	ret = snd_soc_register_card(&snd_rpi_hifiberry_amp);
++
++	if (ret != 0) {
++		dev_err(&pdev->dev, "snd_soc_register_card() failed: %d\n", ret);
++	}
++
++	return ret;
++}
++
++
++static int snd_rpi_hifiberry_amp_remove(struct platform_device *pdev)
++{
++	return snd_soc_unregister_card(&snd_rpi_hifiberry_amp);
++}
++
++
++static struct platform_driver snd_rpi_hifiberry_amp_driver = {
++        .driver = {
++                .name   = "snd-hifiberry-amp",
++                .owner  = THIS_MODULE,
++		.of_match_table = snd_rpi_hifiberry_amp_of_match,
++        },
++        .probe          = snd_rpi_hifiberry_amp_probe,
++        .remove         = snd_rpi_hifiberry_amp_remove,
++};
++
++
++module_platform_driver(snd_rpi_hifiberry_amp_driver);
++
++
++MODULE_AUTHOR("Sebastian Eickhoff <basti.eickhoff at googlemail.com>");
++MODULE_DESCRIPTION("ASoC driver for HiFiBerry-AMP");
++MODULE_LICENSE("GPL v2");
+--- a/sound/soc/codecs/Kconfig
++++ b/sound/soc/codecs/Kconfig
+@@ -117,6 +117,7 @@ config SND_SOC_ALL_CODECS
+ 	select SND_SOC_TFA9879 if I2C
+ 	select SND_SOC_TLV320AIC23_I2C if I2C
+ 	select SND_SOC_TLV320AIC23_SPI if SPI_MASTER
++	select SND_SOC_TAS5713 if I2C
+ 	select SND_SOC_TLV320AIC26 if SPI_MASTER
+ 	select SND_SOC_TLV320AIC31XX if I2C
+ 	select SND_SOC_TLV320AIC32X4 if I2C
+@@ -674,6 +675,9 @@ config SND_SOC_TFA9879
+ 	tristate "NXP Semiconductors TFA9879 amplifier"
+ 	depends on I2C
+ 
++config SND_SOC_TAS5713
++	tristate
++
+ config SND_SOC_TLV320AIC23
+ 	tristate
+ 
+--- a/sound/soc/codecs/Makefile
++++ b/sound/soc/codecs/Makefile
+@@ -118,6 +118,7 @@ snd-soc-sti-sas-objs := sti-sas.o
+ snd-soc-tas5086-objs := tas5086.o
+ snd-soc-tas571x-objs := tas571x.o
+ snd-soc-tfa9879-objs := tfa9879.o
++snd-soc-tas5713-objs := tas5713.o
+ snd-soc-tlv320aic23-objs := tlv320aic23.o
+ snd-soc-tlv320aic23-i2c-objs := tlv320aic23-i2c.o
+ snd-soc-tlv320aic23-spi-objs := tlv320aic23-spi.o
+@@ -312,6 +313,7 @@ obj-$(CONFIG_SND_SOC_TAS2552)	+= snd-soc
+ obj-$(CONFIG_SND_SOC_TAS5086)	+= snd-soc-tas5086.o
+ obj-$(CONFIG_SND_SOC_TAS571X)	+= snd-soc-tas571x.o
+ obj-$(CONFIG_SND_SOC_TFA9879)	+= snd-soc-tfa9879.o
++obj-$(CONFIG_SND_SOC_TAS5713)	+= snd-soc-tas5713.o
+ obj-$(CONFIG_SND_SOC_TLV320AIC23)	+= snd-soc-tlv320aic23.o
+ obj-$(CONFIG_SND_SOC_TLV320AIC23_I2C)	+= snd-soc-tlv320aic23-i2c.o
+ obj-$(CONFIG_SND_SOC_TLV320AIC23_SPI)	+= snd-soc-tlv320aic23-spi.o
+--- /dev/null
++++ b/sound/soc/codecs/tas5713.c
+@@ -0,0 +1,369 @@
++/*
++ * ASoC Driver for TAS5713
++ *
++ * Author:	Sebastian Eickhoff <basti.eickhoff at googlemail.com>
++ *		Copyright 2014
++ *
++ * This program is free software; you can redistribute it and/or
++ * modify it under the terms of the GNU General Public License
++ * version 2 as published by the Free Software Foundation.
++ *
++ * This program is distributed in the hope that it will be useful, but
++ * WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
++ * General Public License for more details.
++ */
++
++#include <linux/module.h>
++#include <linux/moduleparam.h>
++#include <linux/init.h>
++#include <linux/delay.h>
++#include <linux/pm.h>
++#include <linux/i2c.h>
++#include <linux/of_device.h>
++#include <linux/spi/spi.h>
++#include <linux/regmap.h>
++#include <linux/regulator/consumer.h>
++#include <linux/slab.h>
++#include <sound/core.h>
++#include <sound/pcm.h>
++#include <sound/pcm_params.h>
++#include <sound/soc.h>
++#include <sound/initval.h>
++#include <sound/tlv.h>
++
++#include <linux/kernel.h>
++#include <linux/string.h>
++#include <linux/fs.h>
++#include <asm/uaccess.h>
++
++#include "tas5713.h"
++
++
++static struct i2c_client *i2c;
++
++struct tas5713_priv {
++	struct regmap *regmap;
++	int mclk_div;
++	struct snd_soc_codec *codec;
++};
++
++static struct tas5713_priv *priv_data;
++
++
++
++
++/*
++ *    _   _    ___   _      ___         _           _
++ *   /_\ | |  / __| /_\    / __|___ _ _| |_ _ _ ___| |___
++ *  / _ \| |__\__ \/ _ \  | (__/ _ \ ' \  _| '_/ _ \ (_-<
++ * /_/ \_\____|___/_/ \_\  \___\___/_||_\__|_| \___/_/__/
++ *
++ */
++
++static const DECLARE_TLV_DB_SCALE(tas5713_vol_tlv, -10000, 50, 1);
++
++
++static const struct snd_kcontrol_new tas5713_snd_controls[] = {
++	SOC_SINGLE_TLV  ("Master"    , TAS5713_VOL_MASTER, 0, 248, 1, tas5713_vol_tlv),
++	SOC_DOUBLE_R_TLV("Channels"  , TAS5713_VOL_CH1, TAS5713_VOL_CH2, 0, 248, 1, tas5713_vol_tlv)
++};
++
++
++
++
++/*
++ *  __  __         _    _            ___      _
++ * |  \/  |__ _ __| |_ (_)_ _  ___  |   \ _ _(_)_ _____ _ _
++ * | |\/| / _` / _| ' \| | ' \/ -_) | |) | '_| \ V / -_) '_|
++ * |_|  |_\__,_\__|_||_|_|_||_\___| |___/|_| |_|\_/\___|_|
++ *
++ */
++
++static int tas5713_hw_params(struct snd_pcm_substream *substream,
++			    struct snd_pcm_hw_params *params,
++			    struct snd_soc_dai *dai)
++{
++	u16 blen = 0x00;
++
++	struct snd_soc_codec *codec;
++	codec = dai->codec;
++	priv_data->codec = dai->codec;
++
++	switch (params_format(params)) {
++	case SNDRV_PCM_FORMAT_S16_LE:
++		blen = 0x03;
++		break;
++	case SNDRV_PCM_FORMAT_S20_3LE:
++		blen = 0x1;
++		break;
++	case SNDRV_PCM_FORMAT_S24_LE:
++		blen = 0x04;
++		break;
++	case SNDRV_PCM_FORMAT_S32_LE:
++		blen = 0x05;
++		break;
++	default:
++		dev_err(dai->dev, "Unsupported word length: %u\n",
++			params_format(params));
++		return -EINVAL;
++	}
++
++	// set word length
++	snd_soc_update_bits(codec, TAS5713_SERIAL_DATA_INTERFACE, 0x7, blen);
++
++	return 0;
++}
++
++
++static int tas5713_mute_stream(struct snd_soc_dai *dai, int mute, int stream)
++{
++	unsigned int val = 0;
++
++	struct tas5713_priv *tas5713;
++	struct snd_soc_codec *codec = dai->codec;
++	tas5713 = snd_soc_codec_get_drvdata(codec);
++
++	if (mute) {
++		val = TAS5713_SOFT_MUTE_ALL;
++	}
++
++	return regmap_write(tas5713->regmap, TAS5713_SOFT_MUTE, val);
++}
++
++
++static const struct snd_soc_dai_ops tas5713_dai_ops = {
++	.hw_params 		= tas5713_hw_params,
++	.mute_stream	= tas5713_mute_stream,
++};
++
++
++static struct snd_soc_dai_driver tas5713_dai = {
++	.name		= "tas5713-hifi",
++	.playback 	= {
++		.stream_name	= "Playback",
++		.channels_min	= 2,
++		.channels_max	= 2,
++		.rates		    = SNDRV_PCM_RATE_8000_48000,
++		.formats	    = (SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S24_LE | SNDRV_PCM_FMTBIT_S32_LE ),
++	},
++	.ops        = &tas5713_dai_ops,
++};
++
++
++
++
++/*
++ *   ___         _          ___      _
++ *  / __|___  __| |___ __  |   \ _ _(_)_ _____ _ _
++ * | (__/ _ \/ _` / -_) _| | |) | '_| \ V / -_) '_|
++ *  \___\___/\__,_\___\__| |___/|_| |_|\_/\___|_|
++ *
++ */
++
++static int tas5713_remove(struct snd_soc_codec *codec)
++{
++	struct tas5713_priv *tas5713;
++
++	tas5713 = snd_soc_codec_get_drvdata(codec);
++
++	return 0;
++}
++
++
++static int tas5713_probe(struct snd_soc_codec *codec)
++{
++	struct tas5713_priv *tas5713;
++	int i, ret;
++
++	i2c = container_of(codec->dev, struct i2c_client, dev);
++
++	tas5713 = snd_soc_codec_get_drvdata(codec);
++
++	// Reset error
++	ret = snd_soc_write(codec, TAS5713_ERROR_STATUS, 0x00);
++	if (ret < 0) return ret;
++
++	// Trim oscillator
++	ret = snd_soc_write(codec, TAS5713_OSC_TRIM, 0x00);
++	if (ret < 0) return ret;
++	msleep(1000);
++
++	// Reset error
++	ret = snd_soc_write(codec, TAS5713_ERROR_STATUS, 0x00);
++	if (ret < 0) return ret;
++
++	// Clock mode: 44/48kHz, MCLK=64xfs
++	ret = snd_soc_write(codec, TAS5713_CLOCK_CTRL, 0x60);
++	if (ret < 0) return ret;
++
++	// I2S 24bit
++	ret = snd_soc_write(codec, TAS5713_SERIAL_DATA_INTERFACE, 0x05);
++	if (ret < 0) return ret;
++
++	// Unmute
++	ret = snd_soc_write(codec, TAS5713_SYSTEM_CTRL2, 0x00);
++	if (ret < 0) return ret;
++	ret = snd_soc_write(codec, TAS5713_SOFT_MUTE, 0x00);
++	if (ret < 0) return ret;
++
++	// Set volume to 0db
++	ret = snd_soc_write(codec, TAS5713_VOL_MASTER, 0x00);
++	if (ret < 0) return ret;
++
++	// Now start programming the default initialization sequence
++	for (i = 0; i < ARRAY_SIZE(tas5713_init_sequence); ++i) {
++		ret = i2c_master_send(i2c,
++				     tas5713_init_sequence[i].data,
++				     tas5713_init_sequence[i].size);
++		if (ret < 0) {
++			printk(KERN_INFO "TAS5713 CODEC PROBE: InitSeq returns: %d\n", ret);
++		}
++	}
++
++	// Unmute
++	ret = snd_soc_write(codec, TAS5713_SYSTEM_CTRL2, 0x00);
++	if (ret < 0) return ret;
++
++	return 0;
++}
++
++
++static struct snd_soc_codec_driver soc_codec_dev_tas5713 = {
++	.probe = tas5713_probe,
++	.remove = tas5713_remove,
++	.controls = tas5713_snd_controls,
++	.num_controls = ARRAY_SIZE(tas5713_snd_controls),
++};
++
++
++
++
++/*
++ *   ___ ___ ___   ___      _
++ *  |_ _|_  ) __| |   \ _ _(_)_ _____ _ _
++ *   | | / / (__  | |) | '_| \ V / -_) '_|
++ *  |___/___\___| |___/|_| |_|\_/\___|_|
++ *
++ */
++
++static const struct reg_default tas5713_reg_defaults[] = {
++	{ 0x07 ,0x80 },     // R7  - VOL_MASTER    - -40dB
++	{ 0x08 ,  30 },     // R8  - VOL_CH1	   -   0dB
++	{ 0x09 ,  30 },     // R9  - VOL_CH2       -   0dB
++	{ 0x0A ,0x80 },     // R10 - VOL_HEADPHONE - -40dB
++};
++
++
++static bool tas5713_reg_volatile(struct device *dev, unsigned int reg)
++{
++	switch (reg) {
++		case TAS5713_DEVICE_ID:
++		case TAS5713_ERROR_STATUS:
++			return true;
++	default:
++			return false;
++	}
++}
++
++
++static const struct of_device_id tas5713_of_match[] = {
++	{ .compatible = "ti,tas5713", },
++	{ }
++};
++MODULE_DEVICE_TABLE(of, tas5713_of_match);
++
++
++static struct regmap_config tas5713_regmap_config = {
++	.reg_bits = 8,
++	.val_bits = 8,
++
++	.max_register = TAS5713_MAX_REGISTER,
++	.volatile_reg = tas5713_reg_volatile,
++
++	.cache_type = REGCACHE_RBTREE,
++	.reg_defaults = tas5713_reg_defaults,
++	.num_reg_defaults = ARRAY_SIZE(tas5713_reg_defaults),
++};
++
++
++static int tas5713_i2c_probe(struct i2c_client *i2c,
++			    const struct i2c_device_id *id)
++{
++	int ret;
++
++	priv_data = devm_kzalloc(&i2c->dev, sizeof *priv_data, GFP_KERNEL);
++	if (!priv_data)
++		return -ENOMEM;
++
++	priv_data->regmap = devm_regmap_init_i2c(i2c, &tas5713_regmap_config);
++	if (IS_ERR(priv_data->regmap)) {
++		ret = PTR_ERR(priv_data->regmap);
++		return ret;
++	}
++
++	i2c_set_clientdata(i2c, priv_data);
++
++	ret = snd_soc_register_codec(&i2c->dev,
++				     &soc_codec_dev_tas5713, &tas5713_dai, 1);
++
++	return ret;
++}
++
++
++static int tas5713_i2c_remove(struct i2c_client *i2c)
++{
++	snd_soc_unregister_codec(&i2c->dev);
++	i2c_set_clientdata(i2c, NULL);
++
++	kfree(priv_data);
++
++	return 0;
++}
++
++
++static const struct i2c_device_id tas5713_i2c_id[] = {
++	{ "tas5713", 0 },
++	{ }
++};
++
++MODULE_DEVICE_TABLE(i2c, tas5713_i2c_id);
++
++
++static struct i2c_driver tas5713_i2c_driver = {
++	.driver = {
++		.name = "tas5713",
++		.owner = THIS_MODULE,
++		.of_match_table = tas5713_of_match,
++	},
++	.probe = tas5713_i2c_probe,
++	.remove = tas5713_i2c_remove,
++	.id_table = tas5713_i2c_id
++};
++
++
++static int __init tas5713_modinit(void)
++{
++	int ret = 0;
++
++	ret = i2c_add_driver(&tas5713_i2c_driver);
++	if (ret) {
++		printk(KERN_ERR "Failed to register tas5713 I2C driver: %d\n",
++		       ret);
++	}
++
++	return ret;
++}
++module_init(tas5713_modinit);
++
++
++static void __exit tas5713_exit(void)
++{
++	i2c_del_driver(&tas5713_i2c_driver);
++}
++module_exit(tas5713_exit);
++
++
++MODULE_AUTHOR("Sebastian Eickhoff <basti.eickhoff at googlemail.com>");
++MODULE_DESCRIPTION("ASoC driver for TAS5713");
++MODULE_LICENSE("GPL v2");
+--- /dev/null
++++ b/sound/soc/codecs/tas5713.h
+@@ -0,0 +1,210 @@
++/*
++ * ASoC Driver for TAS5713
++ *
++ * Author:      Sebastian Eickhoff <basti.eickhoff at googlemail.com>
++ *              Copyright 2014
++ *
++ * This program is free software; you can redistribute it and/or
++ * modify it under the terms of the GNU General Public License
++ * version 2 as published by the Free Software Foundation.
++ *
++ * This program is distributed in the hope that it will be useful, but
++ * WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
++ * General Public License for more details.
++ */
++
++#ifndef _TAS5713_H
++#define _TAS5713_H
++
++
++// TAS5713 I2C-bus register addresses
++
++#define TAS5713_CLOCK_CTRL              0x00
++#define TAS5713_DEVICE_ID               0x01
++#define TAS5713_ERROR_STATUS            0x02
++#define TAS5713_SYSTEM_CTRL1            0x03
++#define TAS5713_SERIAL_DATA_INTERFACE   0x04
++#define TAS5713_SYSTEM_CTRL2            0x05
++#define TAS5713_SOFT_MUTE               0x06
++#define TAS5713_VOL_MASTER              0x07
++#define TAS5713_VOL_CH1                 0x08
++#define TAS5713_VOL_CH2                 0x09
++#define TAS5713_VOL_HEADPHONE           0x0A
++#define TAS5713_VOL_CONFIG              0x0E
++#define TAS5713_MODULATION_LIMIT        0x10
++#define TAS5713_IC_DLY_CH1              0x11
++#define TAS5713_IC_DLY_CH2              0x12
++#define TAS5713_IC_DLY_CH3              0x13
++#define TAS5713_IC_DLY_CH4              0x14
++
++#define TAS5713_START_STOP_PERIOD       0x1A
++#define TAS5713_OSC_TRIM                0x1B
++#define TAS5713_BKND_ERR                0x1C
++
++#define TAS5713_INPUT_MUX               0x20
++#define TAS5713_SRC_SELECT_CH4          0x21
++#define TAS5713_PWM_MUX                 0x25
++
++#define TAS5713_CH1_BQ0                 0x29
++#define TAS5713_CH1_BQ1                 0x2A
++#define TAS5713_CH1_BQ2                 0x2B
++#define TAS5713_CH1_BQ3                 0x2C
++#define TAS5713_CH1_BQ4                 0x2D
++#define TAS5713_CH1_BQ5                 0x2E
++#define TAS5713_CH1_BQ6                 0x2F
++#define TAS5713_CH1_BQ7                 0x58
++#define TAS5713_CH1_BQ8                 0x59
++
++#define TAS5713_CH2_BQ0                 0x30
++#define TAS5713_CH2_BQ1                 0x31
++#define TAS5713_CH2_BQ2                 0x32
++#define TAS5713_CH2_BQ3                 0x33
++#define TAS5713_CH2_BQ4                 0x34
++#define TAS5713_CH2_BQ5                 0x35
++#define TAS5713_CH2_BQ6                 0x36
++#define TAS5713_CH2_BQ7                 0x5C
++#define TAS5713_CH2_BQ8                 0x5D
++
++#define TAS5713_CH4_BQ0                 0x5A
++#define TAS5713_CH4_BQ1                 0x5B
++#define TAS5713_CH3_BQ0                 0x5E
++#define TAS5713_CH3_BQ1                 0x5F
++
++#define TAS5713_DRC1_SOFTENING_FILTER_ALPHA_OMEGA       0x3B
++#define TAS5713_DRC1_ATTACK_RELEASE_RATE                0x3C
++#define TAS5713_DRC2_SOFTENING_FILTER_ALPHA_OMEGA       0x3E
++#define TAS5713_DRC2_ATTACK_RELEASE_RATE                0x3F
++#define TAS5713_DRC1_ATTACK_RELEASE_THRES               0x40
++#define TAS5713_DRC2_ATTACK_RELEASE_THRES               0x43
++#define TAS5713_DRC_CTRL                                0x46
++
++#define TAS5713_BANK_SW_CTRL            0x50
++#define TAS5713_CH1_OUTPUT_MIXER        0x51
++#define TAS5713_CH2_OUTPUT_MIXER        0x52
++#define TAS5713_CH1_INPUT_MIXER         0x53
++#define TAS5713_CH2_INPUT_MIXER         0x54
++#define TAS5713_OUTPUT_POST_SCALE       0x56
++#define TAS5713_OUTPUT_PRESCALE         0x57
++
++#define TAS5713_IDF_POST_SCALE          0x62
++
++#define TAS5713_CH1_INLINE_MIXER        0x70
++#define TAS5713_CH1_INLINE_DRC_EN_MIXER 0x71
++#define TAS5713_CH1_R_CHANNEL_MIXER     0x72
++#define TAS5713_CH1_L_CHANNEL_MIXER     0x73
++#define TAS5713_CH2_INLINE_MIXER        0x74
++#define TAS5713_CH2_INLINE_DRC_EN_MIXER 0x75
++#define TAS5713_CH2_L_CHANNEL_MIXER     0x76
++#define TAS5713_CH2_R_CHANNEL_MIXER     0x77
++
++#define TAS5713_UPDATE_DEV_ADDR_KEY     0xF8
++#define TAS5713_UPDATE_DEV_ADDR_REG     0xF9
++
++#define TAS5713_REGISTER_COUNT          0x46
++#define TAS5713_MAX_REGISTER            0xF9
++
++
++// Bitmasks for registers
++#define TAS5713_SOFT_MUTE_ALL           0x07
++
++
++
++struct tas5713_init_command {
++        const int size;
++        const char *const data;
++};
++
++static const struct tas5713_init_command tas5713_init_sequence[] = {
++
++        // Trim oscillator
++    { .size = 2,  .data = "\x1B\x00" },
++    // System control register 1 (0x03): block DC
++    { .size = 2,  .data = "\x03\x80" },
++    // Mute everything
++    { .size = 2,  .data = "\x05\x40" },
++    // Modulation limit register (0x10): 97.7%
++    { .size = 2,  .data = "\x10\x02" },
++    // Interchannel delay registers
++    // (0x11, 0x12, 0x13, and 0x14): BD mode
++    { .size = 2,  .data = "\x11\xB8" },
++    { .size = 2,  .data = "\x12\x60" },
++    { .size = 2,  .data = "\x13\xA0" },
++    { .size = 2,  .data = "\x14\x48" },
++    // PWM shutdown group register (0x19): no shutdown
++    { .size = 2,  .data = "\x19\x00" },
++    // Input multiplexer register (0x20): BD mode
++    { .size = 2,  .data = "\x20\x00\x89\x77\x72" },
++    // PWM output mux register (0x25)
++    // Channel 1 --> OUTA, channel 1 neg --> OUTB
++    // Channel 2 --> OUTC, channel 2 neg --> OUTD
++    { .size = 5,  .data = "\x25\x01\x02\x13\x45" },
++    // DRC control (0x46): DRC off
++    { .size = 5,  .data = "\x46\x00\x00\x00\x00" },
++    // BKND_ERR register (0x1C): 299ms reset period
++    { .size = 2,  .data = "\x1C\x07" },
++    // Mute channel 3
++    { .size = 2,  .data = "\x0A\xFF" },
++    // Volume configuration register (0x0E): volume slew 512 steps
++    { .size = 2,  .data = "\x0E\x90" },
++    // Clock control register (0x00): 44/48kHz, MCLK=64xfs
++    { .size = 2,  .data = "\x00\x60" },
++    // Bank switch and eq control (0x50): no bank switching
++    { .size = 5,  .data = "\x50\x00\x00\x00\x00" },
++    // Volume registers (0x07, 0x08, 0x09, 0x0A)
++    { .size = 2,  .data = "\x07\x20" },
++    { .size = 2,  .data = "\x08\x30" },
++    { .size = 2,  .data = "\x09\x30" },
++    { .size = 2,  .data = "\x0A\xFF" },
++    // 0x72, 0x73, 0x76, 0x77 input mixer:
++    // no intermix between channels
++    { .size = 5,  .data = "\x72\x00\x00\x00\x00" },
++    { .size = 5,  .data = "\x73\x00\x80\x00\x00" },
++    { .size = 5,  .data = "\x76\x00\x00\x00\x00" },
++    { .size = 5,  .data = "\x77\x00\x80\x00\x00" },
++    // 0x70, 0x71, 0x74, 0x75 inline DRC mixer:
++    // no inline DRC inmix
++    { .size = 5,  .data = "\x70\x00\x80\x00\x00" },
++    { .size = 5,  .data = "\x71\x00\x00\x00\x00" },
++    { .size = 5,  .data = "\x74\x00\x80\x00\x00" },
++    { .size = 5,  .data = "\x75\x00\x00\x00\x00" },
++    // 0x56, 0x57 Output scale
++    { .size = 5,  .data = "\x56\x00\x80\x00\x00" },
++    { .size = 5,  .data = "\x57\x00\x02\x00\x00" },
++    // 0x3B, 0x3c
++    { .size = 9,  .data = "\x3B\x00\x08\x00\x00\x00\x78\x00\x00" },
++    { .size = 9,  .data = "\x3C\x00\x00\x01\x00\xFF\xFF\xFF\x00" },
++    { .size = 9,  .data = "\x3E\x00\x08\x00\x00\x00\x78\x00\x00" },
++    { .size = 9,  .data = "\x3F\x00\x00\x01\x00\xFF\xFF\xFF\x00" },
++    { .size = 9,  .data = "\x40\x00\x00\x01\x00\xFF\xFF\xFF\x00" },
++    { .size = 9,  .data = "\x43\x00\x00\x01\x00\xFF\xFF\xFF\x00" },
++    // 0x51, 0x52: output mixer
++    { .size = 9,  .data = "\x51\x00\x80\x00\x00\x00\x00\x00\x00" },
++    { .size = 9,  .data = "\x52\x00\x80\x00\x00\x00\x00\x00\x00" },
++    // PEQ defaults
++    { .size = 21,  .data = "\x29\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" },
++    { .size = 21,  .data = "\x2A\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" },
++    { .size = 21,  .data = "\x2B\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" },
++    { .size = 21,  .data = "\x2C\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" },
++    { .size = 21,  .data = "\x2D\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" },
++    { .size = 21,  .data = "\x2E\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" },
++    { .size = 21,  .data = "\x2F\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" },
++    { .size = 21,  .data = "\x30\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" },
++    { .size = 21,  .data = "\x31\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" },
++    { .size = 21,  .data = "\x32\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" },
++    { .size = 21,  .data = "\x33\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" },
++    { .size = 21,  .data = "\x34\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" },
++    { .size = 21,  .data = "\x35\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" },
++    { .size = 21,  .data = "\x36\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" },
++    { .size = 21,  .data = "\x58\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" },
++    { .size = 21,  .data = "\x59\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" },
++    { .size = 21,  .data = "\x5C\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" },
++    { .size = 21,  .data = "\x5D\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" },
++    { .size = 21,  .data = "\x5E\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" },
++    { .size = 21,  .data = "\x5F\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" },
++    { .size = 21,  .data = "\x5A\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" },
++    { .size = 21,  .data = "\x5B\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" },
++};
++
++
++#endif  /* _TAS5713_H */
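Each tas5713_init_sequence entry above is one raw I2C transfer: byte 0 is the
register address and the remaining size - 1 bytes are the value, written with a
single i2c_master_send() call in tas5713_probe(). A standalone sketch of that
layout, using the first entry of the table:

#include <stdio.h>

struct init_command {
        int size;
        const char *data;
};

int main(void)
{
        /* first entry of the sequence: oscillator trim, register 0x1B <- 0x00 */
        struct init_command cmd = { .size = 2, .data = "\x1B\x00" };

        printf("register 0x%02X, %d value byte(s), first value 0x%02X\n",
               (unsigned char)cmd.data[0], cmd.size - 1,
               (unsigned char)cmd.data[1]);
        return 0;
}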
diff --git a/target/linux/brcm2708/patches-4.4/0073-Update-ds1307-driver-for-device-tree-support.patch b/target/linux/brcm2708/patches-4.4/0073-Update-ds1307-driver-for-device-tree-support.patch
new file mode 100644
index 0000000..e2c5b2e
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0073-Update-ds1307-driver-for-device-tree-support.patch
@@ -0,0 +1,27 @@
+From 0a44043b0586ca2ea7f4af174ba18188c8dced82 Mon Sep 17 00:00:00 2001
+From: Ryan Coe <bluemrp9 at gmail.com>
+Date: Sat, 31 Jan 2015 18:25:49 -0700
+Subject: [PATCH 073/127] Update ds1307 driver for device-tree support
+
+Signed-off-by: Ryan Coe <bluemrp9 at gmail.com>
+---
+ drivers/rtc/rtc-ds1307.c | 8 ++++++++
+ 1 file changed, 8 insertions(+)
+
+--- a/drivers/rtc/rtc-ds1307.c
++++ b/drivers/rtc/rtc-ds1307.c
+@@ -1207,6 +1207,14 @@ static int ds1307_remove(struct i2c_clie
+ 	return 0;
+ }
+ 
++#ifdef CONFIG_OF
++static const struct of_device_id ds1307_of_match[] = {
++	{ .compatible = "maxim,ds1307" },
++	{ }
++};
++MODULE_DEVICE_TABLE(of, ds1307_of_match);
++#endif
++
+ static struct i2c_driver ds1307_driver = {
+ 	.driver = {
+ 		.name	= "rtc-ds1307",
diff --git a/target/linux/brcm2708/patches-4.4/0074-BCM270x_DT-Add-pwr_led-and-the-required-input-trigge.patch b/target/linux/brcm2708/patches-4.4/0074-BCM270x_DT-Add-pwr_led-and-the-required-input-trigge.patch
new file mode 100644
index 0000000..c9c4569
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0074-BCM270x_DT-Add-pwr_led-and-the-required-input-trigge.patch
@@ -0,0 +1,170 @@
+From 8629cfc983fbca391421dec0644a34ffe29b5c9e Mon Sep 17 00:00:00 2001
+From: Phil Elwell <phil at raspberrypi.org>
+Date: Fri, 6 Feb 2015 13:50:57 +0000
+Subject: [PATCH 074/127] BCM270x_DT: Add pwr_led, and the required "input"
+ trigger
+
+The "input" trigger makes the associated GPIO an input.  This is to support
+the Raspberry Pi PWR LED, which is driven by external hardware in normal use.
+
+N.B. pwr_led is not available on Model A or B boards.
+
+leds-gpio: Implement the brightness_get method
+
+The power LED uses some clever logic: it is driven by a voltage-measuring
+circuit when configured as an input, otherwise it is driven by the GPIO
+output value. This patch wires up the brightness_get method for leds-gpio
+so that user-space can monitor the LED value via
+/sys/class/leds/led1/brightness. Using the input trigger this returns an
+indication of the system power health;
+otherwise it is just whatever value the trigger has written most
+recently.
+
+See: https://github.com/raspberrypi/linux/issues/1064
+---
+ drivers/leds/leds-gpio.c             | 18 +++++++++++-
+ drivers/leds/trigger/Kconfig         |  7 +++++
+ drivers/leds/trigger/Makefile        |  1 +
+ drivers/leds/trigger/ledtrig-input.c | 54 ++++++++++++++++++++++++++++++++++++
+ include/linux/leds.h                 |  3 ++
+ 5 files changed, 82 insertions(+), 1 deletion(-)
+ create mode 100644 drivers/leds/trigger/ledtrig-input.c
+
+--- a/drivers/leds/leds-gpio.c
++++ b/drivers/leds/leds-gpio.c
+@@ -42,6 +42,13 @@ static void gpio_led_work(struct work_st
+ 		led_dat->platform_gpio_blink_set(led_dat->gpiod,
+ 					led_dat->new_level, NULL, NULL);
+ 		led_dat->blinking = 0;
++	} else if (led_dat->cdev.flags & SET_GPIO_INPUT) {
++		gpiod_direction_input(led_dat->gpiod);
++		led_dat->cdev.flags &= ~SET_GPIO_INPUT;
++	}
++	else if (led_dat->cdev.flags & SET_GPIO_OUTPUT) {
++		gpiod_direction_output(led_dat->gpiod, led_dat->new_level);
++		led_dat->cdev.flags &= ~SET_GPIO_OUTPUT;
+ 	} else
+ 		gpiod_set_value_cansleep(led_dat->gpiod, led_dat->new_level);
+ }
+@@ -62,7 +69,8 @@ static void gpio_led_set(struct led_clas
+ 	 * seem to have a reliable way to know if we're already in one; so
+ 	 * let's just assume the worst.
+ 	 */
+-	if (led_dat->can_sleep) {
++	if (led_dat->can_sleep ||
++	    (led_dat->cdev.flags & (SET_GPIO_INPUT | SET_GPIO_OUTPUT) )) {
+ 		led_dat->new_level = level;
+ 		schedule_work(&led_dat->work);
+ 	} else {
+@@ -75,6 +83,13 @@ static void gpio_led_set(struct led_clas
+ 	}
+ }
+ 
++static enum led_brightness gpio_led_get(struct led_classdev *led_cdev)
++{
++	struct gpio_led_data *led_dat =
++		container_of(led_cdev, struct gpio_led_data, cdev);
++	return gpiod_get_value_cansleep(led_dat->gpiod) ? LED_FULL : LED_OFF;
++}
++
+ static int gpio_blink_set(struct led_classdev *led_cdev,
+ 	unsigned long *delay_on, unsigned long *delay_off)
+ {
+@@ -131,6 +146,7 @@ static int create_gpio_led(const struct
+ 		led_dat->cdev.blink_set = gpio_blink_set;
+ 	}
+ 	led_dat->cdev.brightness_set = gpio_led_set;
++	led_dat->cdev.brightness_get = gpio_led_get;
+ 	if (template->default_state == LEDS_GPIO_DEFSTATE_KEEP)
+ 		state = !!gpiod_get_value_cansleep(led_dat->gpiod);
+ 	else
+--- a/drivers/leds/trigger/Kconfig
++++ b/drivers/leds/trigger/Kconfig
+@@ -126,4 +126,11 @@ config LEDS_TRIGGER_USBDEV
+ 	  This allows LEDs to be controlled by the presence/activity of
+ 	  an USB device. If unsure, say N.
+ 
++config LEDS_TRIGGER_INPUT
++	tristate "LED Input Trigger"
++	depends on LEDS_TRIGGERS
++	help
++	  This allows the GPIOs assigned to be LEDs to be initialised to inputs.
++	  If unsure, say Y.
++
+ endif # LEDS_TRIGGERS
+--- a/drivers/leds/trigger/Makefile
++++ b/drivers/leds/trigger/Makefile
+@@ -8,3 +8,4 @@ obj-$(CONFIG_LEDS_TRIGGER_CPU)		+= ledtr
+ obj-$(CONFIG_LEDS_TRIGGER_DEFAULT_ON)	+= ledtrig-default-on.o
+ obj-$(CONFIG_LEDS_TRIGGER_TRANSIENT)	+= ledtrig-transient.o
+ obj-$(CONFIG_LEDS_TRIGGER_CAMERA)	+= ledtrig-camera.o
++obj-$(CONFIG_LEDS_TRIGGER_INPUT)	+= ledtrig-input.o
+--- /dev/null
++++ b/drivers/leds/trigger/ledtrig-input.c
+@@ -0,0 +1,54 @@
++/*
++ * Set LED GPIO to Input "Trigger"
++ *
++ * Copyright 2015 Phil Elwell <phil at raspberrypi.org>
++ *
++ * Based on Nick Forbes's ledtrig-default-on.c.
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License version 2 as
++ * published by the Free Software Foundation.
++ *
++ */
++
++#include <linux/module.h>
++#include <linux/kernel.h>
++#include <linux/init.h>
++#include <linux/leds.h>
++#include <linux/gpio.h>
++#include "../leds.h"
++
++static void input_trig_activate(struct led_classdev *led_cdev)
++{
++	led_cdev->flags |= SET_GPIO_INPUT;
++	led_set_brightness_async(led_cdev, 0);
++}
++
++static void input_trig_deactivate(struct led_classdev *led_cdev)
++{
++	led_cdev->flags |= SET_GPIO_OUTPUT;
++	led_set_brightness_async(led_cdev, 0);
++}
++
++static struct led_trigger input_led_trigger = {
++	.name     = "input",
++	.activate = input_trig_activate,
++	.deactivate = input_trig_deactivate,
++};
++
++static int __init input_trig_init(void)
++{
++	return led_trigger_register(&input_led_trigger);
++}
++
++static void __exit input_trig_exit(void)
++{
++	led_trigger_unregister(&input_led_trigger);
++}
++
++module_init(input_trig_init);
++module_exit(input_trig_exit);
++
++MODULE_AUTHOR("Phil Elwell <phil at raspberrypi.org>");
++MODULE_DESCRIPTION("Set LED GPIO to Input \"trigger\"");
++MODULE_LICENSE("GPL");
+--- a/include/linux/leds.h
++++ b/include/linux/leds.h
+@@ -48,6 +48,9 @@ struct led_classdev {
+ #define SET_BRIGHTNESS_ASYNC	(1 << 21)
+ #define SET_BRIGHTNESS_SYNC	(1 << 22)
+ #define LED_DEV_CAP_FLASH	(1 << 23)
++	/* Additions for Raspberry Pi PWR LED */
++#define SET_GPIO_INPUT		(1 << 30)
++#define SET_GPIO_OUTPUT		(1 << 31)
+ 
+ 	/* Set LED brightness level */
+ 	/* Must not sleep, use a workqueue if needed */
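From user space the new trigger is driven through the normal LED class interface.
A minimal sketch, assuming the power LED appears as led1 (the name used in the
commit message) and that brightness reads back 255 or 0 for the externally driven
power-good state; error handling is omitted for brevity:

#include <stdio.h>

int main(void)
{
        char buf[16] = "";
        FILE *f;

        /* switch the LED GPIO to input mode via the new trigger */
        f = fopen("/sys/class/leds/led1/trigger", "w");
        if (f) {
                fputs("input", f);
                fclose(f);
        }

        /* read back the externally driven level (255 = good, 0 = low) */
        f = fopen("/sys/class/leds/led1/brightness", "r");
        if (f) {
                if (fgets(buf, sizeof(buf), f))
                        printf("power-good: %s", buf);
                fclose(f);
        }
        return 0;
}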
diff --git a/target/linux/brcm2708/patches-4.4/0075-enc28j60-Add-device-tree-compatible-string-and-an-ov.patch b/target/linux/brcm2708/patches-4.4/0075-enc28j60-Add-device-tree-compatible-string-and-an-ov.patch
new file mode 100644
index 0000000..5dba231
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0075-enc28j60-Add-device-tree-compatible-string-and-an-ov.patch
@@ -0,0 +1,29 @@
+From 5bdfbb8eafe9c92e60031502980207f55c0092ce Mon Sep 17 00:00:00 2001
+From: Phil Elwell <phil at raspberrypi.org>
+Date: Fri, 27 Feb 2015 15:10:24 +0000
+Subject: [PATCH 075/127] enc28j60: Add device tree compatible string and an
+ overlay
+
+---
+ drivers/net/ethernet/microchip/enc28j60.c | 7 +++++++
+ 1 file changed, 7 insertions(+)
+
+--- a/drivers/net/ethernet/microchip/enc28j60.c
++++ b/drivers/net/ethernet/microchip/enc28j60.c
+@@ -1630,9 +1630,16 @@ static int enc28j60_remove(struct spi_de
+ 	return 0;
+ }
+ 
++static const struct of_device_id enc28j60_of_match[] = {
++	{ .compatible = "microchip,enc28j60", },
++	{ /* sentinel */ }
++};
++MODULE_DEVICE_TABLE(of, enc28j60_of_match);
++
+ static struct spi_driver enc28j60_driver = {
+ 	.driver = {
+ 		   .name = DRV_NAME,
++		   .of_match_table = enc28j60_of_match,
+ 	 },
+ 	.probe = enc28j60_probe,
+ 	.remove = enc28j60_remove,
diff --git a/target/linux/brcm2708/patches-4.4/0076-Add-driver-for-rpi-proto.patch b/target/linux/brcm2708/patches-4.4/0076-Add-driver-for-rpi-proto.patch
new file mode 100644
index 0000000..8940e41
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0076-Add-driver-for-rpi-proto.patch
@@ -0,0 +1,210 @@
+From e3dc96afd80658ed5ad61c6f58dd45979abdcc61 Mon Sep 17 00:00:00 2001
+From: Waldemar Brodkorb <wbrodkorb at conet.de>
+Date: Wed, 25 Mar 2015 09:26:17 +0100
+Subject: [PATCH 076/127] Add driver for rpi-proto
+
+Forward port of 3.10.x driver from https://github.com/koalo
+We are using a custom board and would like to use rpi 3.18.x
+kernel. Patch works fine for our embedded system.
+
+URL to the audio chip:
+http://www.mikroe.com/add-on-boards/audio-voice/audio-codec-proto/
+
+Playback tested with devicetree enabled.
+
+Signed-off-by: Waldemar Brodkorb <wbrodkorb at conet.de>
+---
+ sound/soc/bcm/Kconfig     |   7 +++
+ sound/soc/bcm/Makefile    |   2 +
+ sound/soc/bcm/rpi-proto.c | 153 ++++++++++++++++++++++++++++++++++++++++++++++
+ 3 files changed, 162 insertions(+)
+ create mode 100644 sound/soc/bcm/rpi-proto.c
+
+--- a/sound/soc/bcm/Kconfig
++++ b/sound/soc/bcm/Kconfig
+@@ -43,6 +43,13 @@ config SND_BCM2708_SOC_RPI_DAC
+         help
+          Say Y or M if you want to add support for RPi-DAC.
+ 
++config SND_BCM2708_SOC_RPI_PROTO
++	tristate "Support for Rpi-PROTO"
++	depends on SND_BCM2708_SOC_I2S || SND_BCM2835_SOC_I2S
++	select SND_SOC_WM8731
++	help
++	  Say Y or M if you want to add support for Audio Codec Board PROTO (WM8731).
++
+ config SND_BCM2708_SOC_IQAUDIO_DAC
+ 	tristate "Support for IQaudIO-DAC"
+ 	depends on SND_BCM2708_SOC_I2S || SND_BCM2835_SOC_I2S
+--- a/sound/soc/bcm/Makefile
++++ b/sound/soc/bcm/Makefile
+@@ -9,6 +9,7 @@ snd-soc-hifiberry-dacplus-objs := hifibe
+ snd-soc-hifiberry-digi-objs := hifiberry_digi.o
+ snd-soc-hifiberry-amp-objs := hifiberry_amp.o
+ snd-soc-rpi-dac-objs := rpi-dac.o
++snd-soc-rpi-proto-objs := rpi-proto.o
+ snd-soc-iqaudio-dac-objs := iqaudio-dac.o
+ 
+ obj-$(CONFIG_SND_BCM2708_SOC_HIFIBERRY_DAC) += snd-soc-hifiberry-dac.o
+@@ -16,4 +17,5 @@ obj-$(CONFIG_SND_BCM2708_SOC_HIFIBERRY_D
+ obj-$(CONFIG_SND_BCM2708_SOC_HIFIBERRY_DIGI) += snd-soc-hifiberry-digi.o
+ obj-$(CONFIG_SND_BCM2708_SOC_HIFIBERRY_AMP) += snd-soc-hifiberry-amp.o
+ obj-$(CONFIG_SND_BCM2708_SOC_RPI_DAC) += snd-soc-rpi-dac.o
++obj-$(CONFIG_SND_BCM2708_SOC_RPI_PROTO) += snd-soc-rpi-proto.o
+ obj-$(CONFIG_SND_BCM2708_SOC_IQAUDIO_DAC) += snd-soc-iqaudio-dac.o
+--- /dev/null
++++ b/sound/soc/bcm/rpi-proto.c
+@@ -0,0 +1,153 @@
++/*
++ * ASoC driver for PROTO AudioCODEC (with a WM8731)
++ * connected to a Raspberry Pi
++ *
++ * Author:      Florian Meier, <koalo at koalo.de>
++ *	      Copyright 2013
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License version 2 as
++ * published by the Free Software Foundation.
++ */
++
++#include <linux/module.h>
++#include <linux/platform_device.h>
++
++#include <sound/core.h>
++#include <sound/pcm.h>
++#include <sound/soc.h>
++#include <sound/jack.h>
++
++#include "../codecs/wm8731.h"
++
++static const unsigned int wm8731_rates_12288000[] = {
++	8000, 32000, 48000, 96000,
++};
++
++static struct snd_pcm_hw_constraint_list wm8731_constraints_12288000 = {
++	.list = wm8731_rates_12288000,
++	.count = ARRAY_SIZE(wm8731_rates_12288000),
++};
++
++static int snd_rpi_proto_startup(struct snd_pcm_substream *substream)
++{
++	/* Setup constraints, because there is a 12.288 MHz XTAL on the board */
++	snd_pcm_hw_constraint_list(substream->runtime, 0,
++				SNDRV_PCM_HW_PARAM_RATE,
++				&wm8731_constraints_12288000);
++	return 0;
++}
++
++static int snd_rpi_proto_hw_params(struct snd_pcm_substream *substream,
++				       struct snd_pcm_hw_params *params)
++{
++	struct snd_soc_pcm_runtime *rtd = substream->private_data;
++	struct snd_soc_codec *codec = rtd->codec;
++	struct snd_soc_dai *codec_dai = rtd->codec_dai;
++	struct snd_soc_dai *cpu_dai = rtd->cpu_dai;
++	int sysclk = 12288000; /* This is fixed on this board */
++
++	/* Set proto bclk */
++	int ret = snd_soc_dai_set_bclk_ratio(cpu_dai,32*2);
++	if (ret < 0){
++		dev_err(codec->dev,
++				"Failed to set BCLK ratio %d\n", ret);
++		return ret;
++	}
++
++	/* Set proto sysclk */
++	ret = snd_soc_dai_set_sysclk(codec_dai, WM8731_SYSCLK_XTAL,
++			sysclk, SND_SOC_CLOCK_IN);
++	if (ret < 0) {
++		dev_err(codec->dev,
++				"Failed to set WM8731 SYSCLK: %d\n", ret);
++		return ret;
++	}
++
++	return 0;
++}
++
++/* machine stream operations */
++static struct snd_soc_ops snd_rpi_proto_ops = {
++	.startup = snd_rpi_proto_startup,
++	.hw_params = snd_rpi_proto_hw_params,
++};
++
++static struct snd_soc_dai_link snd_rpi_proto_dai[] = {
++{
++	.name		= "WM8731",
++	.stream_name	= "WM8731 HiFi",
++	.cpu_dai_name	= "bcm2708-i2s.0",
++	.codec_dai_name	= "wm8731-hifi",
++	.platform_name	= "bcm2708-i2s.0",
++	.codec_name	= "wm8731.1-001a",
++	.dai_fmt	= SND_SOC_DAIFMT_I2S
++				| SND_SOC_DAIFMT_NB_NF
++				| SND_SOC_DAIFMT_CBM_CFM,
++	.ops		= &snd_rpi_proto_ops,
++},
++};
++
++/* audio machine driver */
++static struct snd_soc_card snd_rpi_proto = {
++	.name		= "snd_rpi_proto",
++	.dai_link	= snd_rpi_proto_dai,
++	.num_links	= ARRAY_SIZE(snd_rpi_proto_dai),
++};
++
++static int snd_rpi_proto_probe(struct platform_device *pdev)
++{
++	int ret = 0;
++
++	snd_rpi_proto.dev = &pdev->dev;
++
++	if (pdev->dev.of_node) {
++		struct device_node *i2s_node;
++		struct snd_soc_dai_link *dai = &snd_rpi_proto_dai[0];
++		i2s_node = of_parse_phandle(pdev->dev.of_node,
++				            "i2s-controller", 0);
++
++		if (i2s_node) {
++			dai->cpu_dai_name = NULL;
++			dai->cpu_of_node = i2s_node;
++			dai->platform_name = NULL;
++			dai->platform_of_node = i2s_node;
++		}
++	}
++
++	ret = snd_soc_register_card(&snd_rpi_proto);
++	if (ret) {
++		dev_err(&pdev->dev,
++				"snd_soc_register_card() failed: %d\n", ret);
++	}
++
++	return ret;
++}
++
++
++static int snd_rpi_proto_remove(struct platform_device *pdev)
++{
++	return snd_soc_unregister_card(&snd_rpi_proto);
++}
++
++static const struct of_device_id snd_rpi_proto_of_match[] = {
++	{ .compatible = "rpi,rpi-proto", },
++	{},
++};
++MODULE_DEVICE_TABLE(of, snd_rpi_proto_of_match);
++
++static struct platform_driver snd_rpi_proto_driver = {
++	.driver = {
++		.name   = "snd-rpi-proto",
++		.owner  = THIS_MODULE,
++		.of_match_table = snd_rpi_proto_of_match,
++	},
++	.probe	  = snd_rpi_proto_probe,
++	.remove	 = snd_rpi_proto_remove,
++};
++
++module_platform_driver(snd_rpi_proto_driver);
++
++MODULE_AUTHOR("Florian Meier");
++MODULE_DESCRIPTION("ASoC Driver for Raspberry Pi connected to PROTO board (WM8731)");
++MODULE_LICENSE("GPL");
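The rate constraint above follows from the fixed 12.288 MHz crystal: only sample
rates with a usable integer MCLK/fs divider are listed. A quick arithmetic sketch
(the dividers are computed here for illustration, not quoted from the WM8731
datasheet):

#include <stdio.h>

int main(void)
{
        const unsigned int mclk = 12288000;
        const unsigned int rates[] = { 8000, 32000, 48000, 96000 };
        unsigned int i;

        for (i = 0; i < sizeof(rates) / sizeof(rates[0]); i++)
                printf("%u Hz: MCLK/fs = %u\n", rates[i], mclk / rates[i]);
        return 0;
}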
diff --git a/target/linux/brcm2708/patches-4.4/0077-config-Add-default-configs.patch b/target/linux/brcm2708/patches-4.4/0077-config-Add-default-configs.patch
new file mode 100644
index 0000000..333990d
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0077-config-Add-default-configs.patch
@@ -0,0 +1,2537 @@
+From 18e76baf8723e54c2fb7303d07e797960b61d399 Mon Sep 17 00:00:00 2001
+From: popcornmix <popcornmix at gmail.com>
+Date: Mon, 13 Apr 2015 17:16:29 +0100
+Subject: [PATCH 077/127] config: Add default configs
+
+---
+ arch/arm/configs/bcm2709_defconfig | 1254 +++++++++++++++++++++++++++++++++++
+ arch/arm/configs/bcmrpi_defconfig  | 1265 ++++++++++++++++++++++++++++++++++++
+ 2 files changed, 2519 insertions(+)
+ create mode 100644 arch/arm/configs/bcm2709_defconfig
+ create mode 100644 arch/arm/configs/bcmrpi_defconfig
+
+--- /dev/null
++++ b/arch/arm/configs/bcm2709_defconfig
+@@ -0,0 +1,1254 @@
++# CONFIG_ARM_PATCH_PHYS_VIRT is not set
++CONFIG_PHYS_OFFSET=0
++CONFIG_LOCALVERSION="-v7"
++# CONFIG_LOCALVERSION_AUTO is not set
++CONFIG_SYSVIPC=y
++CONFIG_POSIX_MQUEUE=y
++CONFIG_FHANDLE=y
++CONFIG_NO_HZ=y
++CONFIG_HIGH_RES_TIMERS=y
++CONFIG_BSD_PROCESS_ACCT=y
++CONFIG_BSD_PROCESS_ACCT_V3=y
++CONFIG_TASKSTATS=y
++CONFIG_TASK_DELAY_ACCT=y
++CONFIG_TASK_XACCT=y
++CONFIG_TASK_IO_ACCOUNTING=y
++CONFIG_IKCONFIG=m
++CONFIG_IKCONFIG_PROC=y
++CONFIG_CGROUP_FREEZER=y
++CONFIG_CGROUP_DEVICE=y
++CONFIG_CPUSETS=y
++CONFIG_CGROUP_CPUACCT=y
++CONFIG_MEMCG=y
++CONFIG_BLK_CGROUP=y
++CONFIG_NAMESPACES=y
++CONFIG_SCHED_AUTOGROUP=y
++CONFIG_BLK_DEV_INITRD=y
++CONFIG_EMBEDDED=y
++# CONFIG_COMPAT_BRK is not set
++CONFIG_PROFILING=y
++CONFIG_OPROFILE=m
++CONFIG_KPROBES=y
++CONFIG_JUMP_LABEL=y
++CONFIG_MODULES=y
++CONFIG_MODULE_UNLOAD=y
++CONFIG_MODVERSIONS=y
++CONFIG_MODULE_SRCVERSION_ALL=y
++CONFIG_BLK_DEV_THROTTLING=y
++CONFIG_PARTITION_ADVANCED=y
++CONFIG_MAC_PARTITION=y
++CONFIG_CFQ_GROUP_IOSCHED=y
++CONFIG_ARCH_BCM2709=y
++# CONFIG_CACHE_L2X0 is not set
++CONFIG_SMP=y
++CONFIG_HAVE_ARM_ARCH_TIMER=y
++CONFIG_VMSPLIT_2G=y
++CONFIG_PREEMPT_VOLUNTARY=y
++CONFIG_AEABI=y
++CONFIG_OABI_COMPAT=y
++# CONFIG_CPU_SW_DOMAIN_PAN is not set
++CONFIG_CLEANCACHE=y
++CONFIG_FRONTSWAP=y
++CONFIG_CMA=y
++CONFIG_ZSMALLOC=m
++CONFIG_PGTABLE_MAPPING=y
++CONFIG_UACCESS_WITH_MEMCPY=y
++CONFIG_SECCOMP=y
++# CONFIG_ATAGS is not set
++CONFIG_ZBOOT_ROM_TEXT=0x0
++CONFIG_ZBOOT_ROM_BSS=0x0
++CONFIG_CMDLINE="console=ttyAMA0,115200 kgdboc=ttyAMA0,115200 root=/dev/mmcblk0p2 rootfstype=ext4 rootwait"
++CONFIG_CPU_FREQ=y
++CONFIG_CPU_FREQ_STAT=m
++CONFIG_CPU_FREQ_STAT_DETAILS=y
++CONFIG_CPU_FREQ_DEFAULT_GOV_POWERSAVE=y
++CONFIG_CPU_FREQ_GOV_PERFORMANCE=y
++CONFIG_CPU_FREQ_GOV_USERSPACE=y
++CONFIG_CPU_FREQ_GOV_ONDEMAND=y
++CONFIG_CPU_FREQ_GOV_CONSERVATIVE=y
++CONFIG_VFP=y
++CONFIG_NEON=y
++CONFIG_KERNEL_MODE_NEON=y
++CONFIG_BINFMT_MISC=m
++# CONFIG_SUSPEND is not set
++CONFIG_NET=y
++CONFIG_PACKET=y
++CONFIG_UNIX=y
++CONFIG_XFRM_USER=y
++CONFIG_NET_KEY=m
++CONFIG_INET=y
++CONFIG_IP_MULTICAST=y
++CONFIG_IP_ADVANCED_ROUTER=y
++CONFIG_IP_MULTIPLE_TABLES=y
++CONFIG_IP_ROUTE_MULTIPATH=y
++CONFIG_IP_ROUTE_VERBOSE=y
++CONFIG_IP_PNP=y
++CONFIG_IP_PNP_DHCP=y
++CONFIG_IP_PNP_RARP=y
++CONFIG_NET_IPIP=m
++CONFIG_NET_IPGRE_DEMUX=m
++CONFIG_NET_IPGRE=m
++CONFIG_IP_MROUTE=y
++CONFIG_IP_MROUTE_MULTIPLE_TABLES=y
++CONFIG_IP_PIMSM_V1=y
++CONFIG_IP_PIMSM_V2=y
++CONFIG_SYN_COOKIES=y
++CONFIG_INET_AH=m
++CONFIG_INET_ESP=m
++CONFIG_INET_IPCOMP=m
++CONFIG_INET_XFRM_MODE_TRANSPORT=m
++CONFIG_INET_XFRM_MODE_TUNNEL=m
++CONFIG_INET_XFRM_MODE_BEET=m
++CONFIG_INET_LRO=m
++CONFIG_INET_DIAG=m
++CONFIG_INET6_AH=m
++CONFIG_INET6_ESP=m
++CONFIG_INET6_IPCOMP=m
++CONFIG_IPV6_TUNNEL=m
++CONFIG_IPV6_MULTIPLE_TABLES=y
++CONFIG_IPV6_MROUTE=y
++CONFIG_IPV6_MROUTE_MULTIPLE_TABLES=y
++CONFIG_IPV6_PIMSM_V2=y
++CONFIG_NETFILTER=y
++CONFIG_NF_CONNTRACK=m
++CONFIG_NF_CONNTRACK_ZONES=y
++CONFIG_NF_CONNTRACK_EVENTS=y
++CONFIG_NF_CONNTRACK_TIMESTAMP=y
++CONFIG_NF_CT_PROTO_DCCP=m
++CONFIG_NF_CT_PROTO_UDPLITE=m
++CONFIG_NF_CONNTRACK_AMANDA=m
++CONFIG_NF_CONNTRACK_FTP=m
++CONFIG_NF_CONNTRACK_H323=m
++CONFIG_NF_CONNTRACK_IRC=m
++CONFIG_NF_CONNTRACK_NETBIOS_NS=m
++CONFIG_NF_CONNTRACK_SNMP=m
++CONFIG_NF_CONNTRACK_PPTP=m
++CONFIG_NF_CONNTRACK_SANE=m
++CONFIG_NF_CONNTRACK_SIP=m
++CONFIG_NF_CONNTRACK_TFTP=m
++CONFIG_NF_CT_NETLINK=m
++CONFIG_NETFILTER_XT_SET=m
++CONFIG_NETFILTER_XT_TARGET_CHECKSUM=m
++CONFIG_NETFILTER_XT_TARGET_CLASSIFY=m
++CONFIG_NETFILTER_XT_TARGET_CONNMARK=m
++CONFIG_NETFILTER_XT_TARGET_DSCP=m
++CONFIG_NETFILTER_XT_TARGET_HMARK=m
++CONFIG_NETFILTER_XT_TARGET_IDLETIMER=m
++CONFIG_NETFILTER_XT_TARGET_LED=m
++CONFIG_NETFILTER_XT_TARGET_LOG=m
++CONFIG_NETFILTER_XT_TARGET_MARK=m
++CONFIG_NETFILTER_XT_TARGET_NFLOG=m
++CONFIG_NETFILTER_XT_TARGET_NFQUEUE=m
++CONFIG_NETFILTER_XT_TARGET_NOTRACK=m
++CONFIG_NETFILTER_XT_TARGET_TEE=m
++CONFIG_NETFILTER_XT_TARGET_TPROXY=m
++CONFIG_NETFILTER_XT_TARGET_TRACE=m
++CONFIG_NETFILTER_XT_TARGET_TCPMSS=m
++CONFIG_NETFILTER_XT_TARGET_TCPOPTSTRIP=m
++CONFIG_NETFILTER_XT_MATCH_ADDRTYPE=m
++CONFIG_NETFILTER_XT_MATCH_BPF=m
++CONFIG_NETFILTER_XT_MATCH_CLUSTER=m
++CONFIG_NETFILTER_XT_MATCH_COMMENT=m
++CONFIG_NETFILTER_XT_MATCH_CONNBYTES=m
++CONFIG_NETFILTER_XT_MATCH_CONNLABEL=m
++CONFIG_NETFILTER_XT_MATCH_CONNLIMIT=m
++CONFIG_NETFILTER_XT_MATCH_CONNMARK=m
++CONFIG_NETFILTER_XT_MATCH_CONNTRACK=m
++CONFIG_NETFILTER_XT_MATCH_CPU=m
++CONFIG_NETFILTER_XT_MATCH_DCCP=m
++CONFIG_NETFILTER_XT_MATCH_DEVGROUP=m
++CONFIG_NETFILTER_XT_MATCH_DSCP=m
++CONFIG_NETFILTER_XT_MATCH_ESP=m
++CONFIG_NETFILTER_XT_MATCH_HASHLIMIT=m
++CONFIG_NETFILTER_XT_MATCH_HELPER=m
++CONFIG_NETFILTER_XT_MATCH_IPRANGE=m
++CONFIG_NETFILTER_XT_MATCH_IPVS=m
++CONFIG_NETFILTER_XT_MATCH_LENGTH=m
++CONFIG_NETFILTER_XT_MATCH_LIMIT=m
++CONFIG_NETFILTER_XT_MATCH_MAC=m
++CONFIG_NETFILTER_XT_MATCH_MARK=m
++CONFIG_NETFILTER_XT_MATCH_MULTIPORT=m
++CONFIG_NETFILTER_XT_MATCH_NFACCT=m
++CONFIG_NETFILTER_XT_MATCH_OSF=m
++CONFIG_NETFILTER_XT_MATCH_OWNER=m
++CONFIG_NETFILTER_XT_MATCH_POLICY=m
++CONFIG_NETFILTER_XT_MATCH_PHYSDEV=m
++CONFIG_NETFILTER_XT_MATCH_PKTTYPE=m
++CONFIG_NETFILTER_XT_MATCH_QUOTA=m
++CONFIG_NETFILTER_XT_MATCH_RATEEST=m
++CONFIG_NETFILTER_XT_MATCH_REALM=m
++CONFIG_NETFILTER_XT_MATCH_RECENT=m
++CONFIG_NETFILTER_XT_MATCH_SOCKET=m
++CONFIG_NETFILTER_XT_MATCH_STATE=m
++CONFIG_NETFILTER_XT_MATCH_STATISTIC=m
++CONFIG_NETFILTER_XT_MATCH_STRING=m
++CONFIG_NETFILTER_XT_MATCH_TCPMSS=m
++CONFIG_NETFILTER_XT_MATCH_TIME=m
++CONFIG_NETFILTER_XT_MATCH_U32=m
++CONFIG_IP_SET=m
++CONFIG_IP_SET_BITMAP_IP=m
++CONFIG_IP_SET_BITMAP_IPMAC=m
++CONFIG_IP_SET_BITMAP_PORT=m
++CONFIG_IP_SET_HASH_IP=m
++CONFIG_IP_SET_HASH_IPPORT=m
++CONFIG_IP_SET_HASH_IPPORTIP=m
++CONFIG_IP_SET_HASH_IPPORTNET=m
++CONFIG_IP_SET_HASH_NET=m
++CONFIG_IP_SET_HASH_NETPORT=m
++CONFIG_IP_SET_HASH_NETIFACE=m
++CONFIG_IP_SET_LIST_SET=m
++CONFIG_IP_VS=m
++CONFIG_IP_VS_PROTO_TCP=y
++CONFIG_IP_VS_PROTO_UDP=y
++CONFIG_IP_VS_PROTO_ESP=y
++CONFIG_IP_VS_PROTO_AH=y
++CONFIG_IP_VS_PROTO_SCTP=y
++CONFIG_IP_VS_RR=m
++CONFIG_IP_VS_WRR=m
++CONFIG_IP_VS_LC=m
++CONFIG_IP_VS_WLC=m
++CONFIG_IP_VS_LBLC=m
++CONFIG_IP_VS_LBLCR=m
++CONFIG_IP_VS_DH=m
++CONFIG_IP_VS_SH=m
++CONFIG_IP_VS_SED=m
++CONFIG_IP_VS_NQ=m
++CONFIG_IP_VS_FTP=m
++CONFIG_IP_VS_PE_SIP=m
++CONFIG_NF_CONNTRACK_IPV4=m
++CONFIG_IP_NF_IPTABLES=m
++CONFIG_IP_NF_MATCH_AH=m
++CONFIG_IP_NF_MATCH_ECN=m
++CONFIG_IP_NF_MATCH_TTL=m
++CONFIG_IP_NF_FILTER=m
++CONFIG_IP_NF_TARGET_REJECT=m
++CONFIG_IP_NF_NAT=m
++CONFIG_IP_NF_TARGET_MASQUERADE=m
++CONFIG_IP_NF_TARGET_NETMAP=m
++CONFIG_IP_NF_TARGET_REDIRECT=m
++CONFIG_IP_NF_MANGLE=m
++CONFIG_IP_NF_TARGET_CLUSTERIP=m
++CONFIG_IP_NF_TARGET_ECN=m
++CONFIG_IP_NF_TARGET_TTL=m
++CONFIG_IP_NF_RAW=m
++CONFIG_IP_NF_ARPTABLES=m
++CONFIG_IP_NF_ARPFILTER=m
++CONFIG_IP_NF_ARP_MANGLE=m
++CONFIG_NF_CONNTRACK_IPV6=m
++CONFIG_IP6_NF_IPTABLES=m
++CONFIG_IP6_NF_MATCH_AH=m
++CONFIG_IP6_NF_MATCH_EUI64=m
++CONFIG_IP6_NF_MATCH_FRAG=m
++CONFIG_IP6_NF_MATCH_OPTS=m
++CONFIG_IP6_NF_MATCH_HL=m
++CONFIG_IP6_NF_MATCH_IPV6HEADER=m
++CONFIG_IP6_NF_MATCH_MH=m
++CONFIG_IP6_NF_MATCH_RT=m
++CONFIG_IP6_NF_TARGET_HL=m
++CONFIG_IP6_NF_FILTER=m
++CONFIG_IP6_NF_TARGET_REJECT=m
++CONFIG_IP6_NF_MANGLE=m
++CONFIG_IP6_NF_RAW=m
++CONFIG_IP6_NF_NAT=m
++CONFIG_IP6_NF_TARGET_MASQUERADE=m
++CONFIG_IP6_NF_TARGET_NPT=m
++CONFIG_BRIDGE_NF_EBTABLES=m
++CONFIG_BRIDGE_EBT_BROUTE=m
++CONFIG_BRIDGE_EBT_T_FILTER=m
++CONFIG_BRIDGE_EBT_T_NAT=m
++CONFIG_BRIDGE_EBT_802_3=m
++CONFIG_BRIDGE_EBT_AMONG=m
++CONFIG_BRIDGE_EBT_ARP=m
++CONFIG_BRIDGE_EBT_IP=m
++CONFIG_BRIDGE_EBT_IP6=m
++CONFIG_BRIDGE_EBT_LIMIT=m
++CONFIG_BRIDGE_EBT_MARK=m
++CONFIG_BRIDGE_EBT_PKTTYPE=m
++CONFIG_BRIDGE_EBT_STP=m
++CONFIG_BRIDGE_EBT_VLAN=m
++CONFIG_BRIDGE_EBT_ARPREPLY=m
++CONFIG_BRIDGE_EBT_DNAT=m
++CONFIG_BRIDGE_EBT_MARK_T=m
++CONFIG_BRIDGE_EBT_REDIRECT=m
++CONFIG_BRIDGE_EBT_SNAT=m
++CONFIG_BRIDGE_EBT_LOG=m
++CONFIG_BRIDGE_EBT_NFLOG=m
++CONFIG_SCTP_COOKIE_HMAC_SHA1=y
++CONFIG_ATM=m
++CONFIG_L2TP=m
++CONFIG_L2TP_V3=y
++CONFIG_L2TP_IP=m
++CONFIG_L2TP_ETH=m
++CONFIG_BRIDGE=m
++CONFIG_VLAN_8021Q=m
++CONFIG_VLAN_8021Q_GVRP=y
++CONFIG_ATALK=m
++CONFIG_6LOWPAN=m
++CONFIG_IEEE802154=m
++CONFIG_IEEE802154_6LOWPAN=m
++CONFIG_MAC802154=m
++CONFIG_NET_SCHED=y
++CONFIG_NET_SCH_CBQ=m
++CONFIG_NET_SCH_HTB=m
++CONFIG_NET_SCH_HFSC=m
++CONFIG_NET_SCH_PRIO=m
++CONFIG_NET_SCH_MULTIQ=m
++CONFIG_NET_SCH_RED=m
++CONFIG_NET_SCH_SFB=m
++CONFIG_NET_SCH_SFQ=m
++CONFIG_NET_SCH_TEQL=m
++CONFIG_NET_SCH_TBF=m
++CONFIG_NET_SCH_GRED=m
++CONFIG_NET_SCH_DSMARK=m
++CONFIG_NET_SCH_NETEM=m
++CONFIG_NET_SCH_DRR=m
++CONFIG_NET_SCH_MQPRIO=m
++CONFIG_NET_SCH_CHOKE=m
++CONFIG_NET_SCH_QFQ=m
++CONFIG_NET_SCH_CODEL=m
++CONFIG_NET_SCH_FQ_CODEL=m
++CONFIG_NET_SCH_INGRESS=m
++CONFIG_NET_SCH_PLUG=m
++CONFIG_NET_CLS_BASIC=m
++CONFIG_NET_CLS_TCINDEX=m
++CONFIG_NET_CLS_ROUTE4=m
++CONFIG_NET_CLS_FW=m
++CONFIG_NET_CLS_U32=m
++CONFIG_CLS_U32_MARK=y
++CONFIG_NET_CLS_RSVP=m
++CONFIG_NET_CLS_RSVP6=m
++CONFIG_NET_CLS_FLOW=m
++CONFIG_NET_CLS_CGROUP=m
++CONFIG_NET_EMATCH=y
++CONFIG_NET_EMATCH_CMP=m
++CONFIG_NET_EMATCH_NBYTE=m
++CONFIG_NET_EMATCH_U32=m
++CONFIG_NET_EMATCH_META=m
++CONFIG_NET_EMATCH_TEXT=m
++CONFIG_NET_EMATCH_IPSET=m
++CONFIG_NET_CLS_ACT=y
++CONFIG_NET_ACT_POLICE=m
++CONFIG_NET_ACT_GACT=m
++CONFIG_GACT_PROB=y
++CONFIG_NET_ACT_MIRRED=m
++CONFIG_NET_ACT_IPT=m
++CONFIG_NET_ACT_NAT=m
++CONFIG_NET_ACT_PEDIT=m
++CONFIG_NET_ACT_SIMP=m
++CONFIG_NET_ACT_SKBEDIT=m
++CONFIG_NET_ACT_CSUM=m
++CONFIG_BATMAN_ADV=m
++CONFIG_OPENVSWITCH=m
++CONFIG_NET_PKTGEN=m
++CONFIG_HAMRADIO=y
++CONFIG_AX25=m
++CONFIG_NETROM=m
++CONFIG_ROSE=m
++CONFIG_MKISS=m
++CONFIG_6PACK=m
++CONFIG_BPQETHER=m
++CONFIG_BAYCOM_SER_FDX=m
++CONFIG_BAYCOM_SER_HDX=m
++CONFIG_YAM=m
++CONFIG_CAN=m
++CONFIG_CAN_VCAN=m
++CONFIG_CAN_MCP251X=m
++CONFIG_IRDA=m
++CONFIG_IRLAN=m
++CONFIG_IRNET=m
++CONFIG_IRCOMM=m
++CONFIG_IRDA_ULTRA=y
++CONFIG_IRDA_CACHE_LAST_LSAP=y
++CONFIG_IRDA_FAST_RR=y
++CONFIG_IRTTY_SIR=m
++CONFIG_KINGSUN_DONGLE=m
++CONFIG_KSDAZZLE_DONGLE=m
++CONFIG_KS959_DONGLE=m
++CONFIG_USB_IRDA=m
++CONFIG_SIGMATEL_FIR=m
++CONFIG_MCS_FIR=m
++CONFIG_BT=m
++CONFIG_BT_RFCOMM=m
++CONFIG_BT_RFCOMM_TTY=y
++CONFIG_BT_BNEP=m
++CONFIG_BT_BNEP_MC_FILTER=y
++CONFIG_BT_BNEP_PROTO_FILTER=y
++CONFIG_BT_HIDP=m
++CONFIG_BT_6LOWPAN=m
++CONFIG_BT_HCIBTUSB=m
++CONFIG_BT_HCIBCM203X=m
++CONFIG_BT_HCIBPA10X=m
++CONFIG_BT_HCIBFUSB=m
++CONFIG_BT_HCIVHCI=m
++CONFIG_BT_MRVL=m
++CONFIG_BT_MRVL_SDIO=m
++CONFIG_BT_ATH3K=m
++CONFIG_BT_WILINK=m
++CONFIG_MAC80211=m
++CONFIG_MAC80211_MESH=y
++CONFIG_WIMAX=m
++CONFIG_RFKILL=m
++CONFIG_RFKILL_INPUT=y
++CONFIG_NET_9P=m
++CONFIG_NFC=m
++CONFIG_NFC_PN533=m
++CONFIG_DEVTMPFS=y
++CONFIG_DEVTMPFS_MOUNT=y
++CONFIG_DMA_CMA=y
++CONFIG_CMA_SIZE_MBYTES=5
++CONFIG_MTD=m
++CONFIG_MTD_BLOCK=m
++CONFIG_MTD_NAND=m
++CONFIG_MTD_UBI=m
++CONFIG_ZRAM=m
++CONFIG_ZRAM_LZ4_COMPRESS=y
++CONFIG_BLK_DEV_LOOP=y
++CONFIG_BLK_DEV_CRYPTOLOOP=m
++CONFIG_BLK_DEV_DRBD=m
++CONFIG_BLK_DEV_NBD=m
++CONFIG_BLK_DEV_RAM=y
++CONFIG_CDROM_PKTCDVD=m
++CONFIG_ATA_OVER_ETH=m
++CONFIG_EEPROM_AT24=m
++CONFIG_TI_ST=m
++CONFIG_SCSI=y
++# CONFIG_SCSI_PROC_FS is not set
++CONFIG_BLK_DEV_SD=y
++CONFIG_CHR_DEV_ST=m
++CONFIG_CHR_DEV_OSST=m
++CONFIG_BLK_DEV_SR=m
++CONFIG_CHR_DEV_SG=m
++CONFIG_SCSI_ISCSI_ATTRS=y
++CONFIG_ISCSI_TCP=m
++CONFIG_ISCSI_BOOT_SYSFS=m
++CONFIG_MD=y
++CONFIG_MD_LINEAR=m
++CONFIG_MD_RAID0=m
++CONFIG_BLK_DEV_DM=m
++CONFIG_DM_CRYPT=m
++CONFIG_DM_SNAPSHOT=m
++CONFIG_DM_THIN_PROVISIONING=m
++CONFIG_DM_MIRROR=m
++CONFIG_DM_LOG_USERSPACE=m
++CONFIG_DM_RAID=m
++CONFIG_DM_ZERO=m
++CONFIG_DM_DELAY=m
++CONFIG_NETDEVICES=y
++CONFIG_BONDING=m
++CONFIG_DUMMY=m
++CONFIG_IFB=m
++CONFIG_MACVLAN=m
++CONFIG_NETCONSOLE=m
++CONFIG_TUN=m
++CONFIG_VETH=m
++CONFIG_ENC28J60=m
++CONFIG_MDIO_BITBANG=m
++CONFIG_PPP=m
++CONFIG_PPP_BSDCOMP=m
++CONFIG_PPP_DEFLATE=m
++CONFIG_PPP_FILTER=y
++CONFIG_PPP_MPPE=m
++CONFIG_PPP_MULTILINK=y
++CONFIG_PPPOATM=m
++CONFIG_PPPOE=m
++CONFIG_PPPOL2TP=m
++CONFIG_PPP_ASYNC=m
++CONFIG_PPP_SYNC_TTY=m
++CONFIG_SLIP=m
++CONFIG_SLIP_COMPRESSED=y
++CONFIG_SLIP_SMART=y
++CONFIG_USB_CATC=m
++CONFIG_USB_KAWETH=m
++CONFIG_USB_PEGASUS=m
++CONFIG_USB_RTL8150=m
++CONFIG_USB_RTL8152=m
++CONFIG_USB_USBNET=y
++CONFIG_USB_NET_AX8817X=m
++CONFIG_USB_NET_AX88179_178A=m
++CONFIG_USB_NET_CDCETHER=m
++CONFIG_USB_NET_CDC_EEM=m
++CONFIG_USB_NET_CDC_NCM=m
++CONFIG_USB_NET_HUAWEI_CDC_NCM=m
++CONFIG_USB_NET_CDC_MBIM=m
++CONFIG_USB_NET_DM9601=m
++CONFIG_USB_NET_SR9700=m
++CONFIG_USB_NET_SR9800=m
++CONFIG_USB_NET_SMSC75XX=m
++CONFIG_USB_NET_SMSC95XX=y
++CONFIG_USB_NET_GL620A=m
++CONFIG_USB_NET_NET1080=m
++CONFIG_USB_NET_PLUSB=m
++CONFIG_USB_NET_MCS7830=m
++CONFIG_USB_NET_CDC_SUBSET=m
++CONFIG_USB_ALI_M5632=y
++CONFIG_USB_AN2720=y
++CONFIG_USB_EPSON2888=y
++CONFIG_USB_KC2190=y
++CONFIG_USB_NET_ZAURUS=m
++CONFIG_USB_NET_CX82310_ETH=m
++CONFIG_USB_NET_KALMIA=m
++CONFIG_USB_NET_QMI_WWAN=m
++CONFIG_USB_HSO=m
++CONFIG_USB_NET_INT51X1=m
++CONFIG_USB_IPHETH=m
++CONFIG_USB_SIERRA_NET=m
++CONFIG_USB_VL600=m
++CONFIG_LIBERTAS_THINFIRM=m
++CONFIG_LIBERTAS_THINFIRM_USB=m
++CONFIG_AT76C50X_USB=m
++CONFIG_USB_ZD1201=m
++CONFIG_USB_NET_RNDIS_WLAN=m
++CONFIG_RTL8187=m
++CONFIG_MAC80211_HWSIM=m
++CONFIG_ATH_CARDS=m
++CONFIG_ATH9K=m
++CONFIG_ATH9K_HTC=m
++CONFIG_CARL9170=m
++CONFIG_ATH6KL=m
++CONFIG_ATH6KL_USB=m
++CONFIG_AR5523=m
++CONFIG_B43=m
++# CONFIG_B43_PHY_N is not set
++CONFIG_B43LEGACY=m
++CONFIG_BRCMFMAC=m
++CONFIG_BRCMFMAC_USB=y
++CONFIG_HOSTAP=m
++CONFIG_LIBERTAS=m
++CONFIG_LIBERTAS_USB=m
++CONFIG_LIBERTAS_SDIO=m
++CONFIG_P54_COMMON=m
++CONFIG_P54_USB=m
++CONFIG_RT2X00=m
++CONFIG_RT2500USB=m
++CONFIG_RT73USB=m
++CONFIG_RT2800USB=m
++CONFIG_RT2800USB_RT3573=y
++CONFIG_RT2800USB_RT53XX=y
++CONFIG_RT2800USB_RT55XX=y
++CONFIG_RT2800USB_UNKNOWN=y
++CONFIG_WL_MEDIATEK=y
++CONFIG_MT7601U=m
++CONFIG_RTL8192CU=m
++CONFIG_ZD1211RW=m
++CONFIG_MWIFIEX=m
++CONFIG_MWIFIEX_SDIO=m
++CONFIG_WIMAX_I2400M_USB=m
++CONFIG_IEEE802154_AT86RF230=m
++CONFIG_IEEE802154_MRF24J40=m
++CONFIG_IEEE802154_CC2520=m
++CONFIG_INPUT_POLLDEV=m
++# CONFIG_INPUT_MOUSEDEV_PSAUX is not set
++CONFIG_INPUT_JOYDEV=m
++CONFIG_INPUT_EVDEV=m
++# CONFIG_KEYBOARD_ATKBD is not set
++CONFIG_KEYBOARD_GPIO=m
++# CONFIG_INPUT_MOUSE is not set
++CONFIG_INPUT_JOYSTICK=y
++CONFIG_JOYSTICK_IFORCE=m
++CONFIG_JOYSTICK_IFORCE_USB=y
++CONFIG_JOYSTICK_XPAD=m
++CONFIG_JOYSTICK_XPAD_FF=y
++CONFIG_JOYSTICK_RPISENSE=m
++CONFIG_INPUT_TOUCHSCREEN=y
++CONFIG_TOUCHSCREEN_ADS7846=m
++CONFIG_TOUCHSCREEN_EGALAX=m
++CONFIG_TOUCHSCREEN_FT6236=m
++CONFIG_TOUCHSCREEN_RPI_FT5406=m
++CONFIG_TOUCHSCREEN_USB_COMPOSITE=m
++CONFIG_TOUCHSCREEN_STMPE=m
++CONFIG_INPUT_MISC=y
++CONFIG_INPUT_AD714X=m
++CONFIG_INPUT_ATI_REMOTE2=m
++CONFIG_INPUT_KEYSPAN_REMOTE=m
++CONFIG_INPUT_POWERMATE=m
++CONFIG_INPUT_YEALINK=m
++CONFIG_INPUT_CM109=m
++CONFIG_INPUT_UINPUT=m
++CONFIG_INPUT_GPIO_ROTARY_ENCODER=m
++CONFIG_INPUT_ADXL34X=m
++CONFIG_INPUT_CMA3000=m
++CONFIG_SERIO=m
++CONFIG_SERIO_RAW=m
++CONFIG_GAMEPORT=m
++CONFIG_GAMEPORT_NS558=m
++CONFIG_GAMEPORT_L4=m
++CONFIG_BRCM_CHAR_DRIVERS=y
++CONFIG_BCM_VC_CMA=y
++CONFIG_BCM_VCIO=y
++CONFIG_BCM_VC_SM=y
++CONFIG_DEVPTS_MULTIPLE_INSTANCES=y
++# CONFIG_LEGACY_PTYS is not set
++# CONFIG_DEVKMEM is not set
++CONFIG_SERIAL_8250=y
++# CONFIG_SERIAL_8250_DEPRECATED_OPTIONS is not set
++CONFIG_SERIAL_8250_CONSOLE=y
++# CONFIG_SERIAL_8250_DMA is not set
++CONFIG_SERIAL_8250_NR_UARTS=1
++CONFIG_SERIAL_8250_RUNTIME_UARTS=0
++CONFIG_SERIAL_AMBA_PL011=y
++CONFIG_SERIAL_AMBA_PL011_CONSOLE=y
++CONFIG_SERIAL_OF_PLATFORM=y
++CONFIG_TTY_PRINTK=y
++CONFIG_HW_RANDOM=y
++CONFIG_HW_RANDOM_BCM2835=y
++CONFIG_RAW_DRIVER=y
++CONFIG_I2C=y
++CONFIG_I2C_CHARDEV=m
++CONFIG_I2C_BCM2708=m
++CONFIG_SPI=y
++CONFIG_SPI_BCM2835=m
++CONFIG_SPI_SPIDEV=y
++CONFIG_PPS=m
++CONFIG_PPS_CLIENT_LDISC=m
++CONFIG_PPS_CLIENT_GPIO=m
++CONFIG_GPIO_SYSFS=y
++CONFIG_GPIO_ARIZONA=m
++CONFIG_GPIO_STMPE=y
++CONFIG_W1=m
++CONFIG_W1_MASTER_DS2490=m
++CONFIG_W1_MASTER_DS2482=m
++CONFIG_W1_MASTER_DS1WM=m
++CONFIG_W1_MASTER_GPIO=m
++CONFIG_W1_SLAVE_THERM=m
++CONFIG_W1_SLAVE_SMEM=m
++CONFIG_W1_SLAVE_DS2408=m
++CONFIG_W1_SLAVE_DS2413=m
++CONFIG_W1_SLAVE_DS2406=m
++CONFIG_W1_SLAVE_DS2423=m
++CONFIG_W1_SLAVE_DS2431=m
++CONFIG_W1_SLAVE_DS2433=m
++CONFIG_W1_SLAVE_DS2760=m
++CONFIG_W1_SLAVE_DS2780=m
++CONFIG_W1_SLAVE_DS2781=m
++CONFIG_W1_SLAVE_DS28E04=m
++CONFIG_W1_SLAVE_BQ27000=m
++CONFIG_BATTERY_DS2760=m
++CONFIG_POWER_RESET=y
++CONFIG_POWER_RESET_GPIO=y
++CONFIG_HWMON=m
++CONFIG_SENSORS_SHT21=m
++CONFIG_SENSORS_SHTC1=m
++CONFIG_THERMAL=y
++CONFIG_THERMAL_BCM2835=y
++CONFIG_WATCHDOG=y
++CONFIG_BCM2835_WDT=m
++CONFIG_UCB1400_CORE=m
++CONFIG_MFD_STMPE=y
++CONFIG_STMPE_SPI=y
++CONFIG_MFD_ARIZONA_I2C=m
++CONFIG_MFD_ARIZONA_SPI=m
++CONFIG_MFD_WM5102=y
++CONFIG_MEDIA_SUPPORT=m
++CONFIG_MEDIA_CAMERA_SUPPORT=y
++CONFIG_MEDIA_ANALOG_TV_SUPPORT=y
++CONFIG_MEDIA_DIGITAL_TV_SUPPORT=y
++CONFIG_MEDIA_RADIO_SUPPORT=y
++CONFIG_MEDIA_RC_SUPPORT=y
++CONFIG_MEDIA_CONTROLLER=y
++CONFIG_LIRC=m
++CONFIG_RC_DEVICES=y
++CONFIG_RC_ATI_REMOTE=m
++CONFIG_IR_IMON=m
++CONFIG_IR_MCEUSB=m
++CONFIG_IR_REDRAT3=m
++CONFIG_IR_STREAMZAP=m
++CONFIG_IR_IGUANA=m
++CONFIG_IR_TTUSBIR=m
++CONFIG_RC_LOOPBACK=m
++CONFIG_IR_GPIO_CIR=m
++CONFIG_MEDIA_USB_SUPPORT=y
++CONFIG_USB_VIDEO_CLASS=m
++CONFIG_USB_M5602=m
++CONFIG_USB_STV06XX=m
++CONFIG_USB_GL860=m
++CONFIG_USB_GSPCA_BENQ=m
++CONFIG_USB_GSPCA_CONEX=m
++CONFIG_USB_GSPCA_CPIA1=m
++CONFIG_USB_GSPCA_DTCS033=m
++CONFIG_USB_GSPCA_ETOMS=m
++CONFIG_USB_GSPCA_FINEPIX=m
++CONFIG_USB_GSPCA_JEILINJ=m
++CONFIG_USB_GSPCA_JL2005BCD=m
++CONFIG_USB_GSPCA_KINECT=m
++CONFIG_USB_GSPCA_KONICA=m
++CONFIG_USB_GSPCA_MARS=m
++CONFIG_USB_GSPCA_MR97310A=m
++CONFIG_USB_GSPCA_NW80X=m
++CONFIG_USB_GSPCA_OV519=m
++CONFIG_USB_GSPCA_OV534=m
++CONFIG_USB_GSPCA_OV534_9=m
++CONFIG_USB_GSPCA_PAC207=m
++CONFIG_USB_GSPCA_PAC7302=m
++CONFIG_USB_GSPCA_PAC7311=m
++CONFIG_USB_GSPCA_SE401=m
++CONFIG_USB_GSPCA_SN9C2028=m
++CONFIG_USB_GSPCA_SN9C20X=m
++CONFIG_USB_GSPCA_SONIXB=m
++CONFIG_USB_GSPCA_SONIXJ=m
++CONFIG_USB_GSPCA_SPCA500=m
++CONFIG_USB_GSPCA_SPCA501=m
++CONFIG_USB_GSPCA_SPCA505=m
++CONFIG_USB_GSPCA_SPCA506=m
++CONFIG_USB_GSPCA_SPCA508=m
++CONFIG_USB_GSPCA_SPCA561=m
++CONFIG_USB_GSPCA_SPCA1528=m
++CONFIG_USB_GSPCA_SQ905=m
++CONFIG_USB_GSPCA_SQ905C=m
++CONFIG_USB_GSPCA_SQ930X=m
++CONFIG_USB_GSPCA_STK014=m
++CONFIG_USB_GSPCA_STK1135=m
++CONFIG_USB_GSPCA_STV0680=m
++CONFIG_USB_GSPCA_SUNPLUS=m
++CONFIG_USB_GSPCA_T613=m
++CONFIG_USB_GSPCA_TOPRO=m
++CONFIG_USB_GSPCA_TV8532=m
++CONFIG_USB_GSPCA_VC032X=m
++CONFIG_USB_GSPCA_VICAM=m
++CONFIG_USB_GSPCA_XIRLINK_CIT=m
++CONFIG_USB_GSPCA_ZC3XX=m
++CONFIG_USB_PWC=m
++CONFIG_VIDEO_CPIA2=m
++CONFIG_USB_ZR364XX=m
++CONFIG_USB_STKWEBCAM=m
++CONFIG_USB_S2255=m
++CONFIG_VIDEO_USBTV=m
++CONFIG_VIDEO_PVRUSB2=m
++CONFIG_VIDEO_HDPVR=m
++CONFIG_VIDEO_USBVISION=m
++CONFIG_VIDEO_STK1160_COMMON=m
++CONFIG_VIDEO_STK1160_AC97=y
++CONFIG_VIDEO_GO7007=m
++CONFIG_VIDEO_GO7007_USB=m
++CONFIG_VIDEO_GO7007_USB_S2250_BOARD=m
++CONFIG_VIDEO_AU0828=m
++CONFIG_VIDEO_AU0828_RC=y
++CONFIG_VIDEO_CX231XX=m
++CONFIG_VIDEO_CX231XX_ALSA=m
++CONFIG_VIDEO_CX231XX_DVB=m
++CONFIG_VIDEO_TM6000=m
++CONFIG_VIDEO_TM6000_ALSA=m
++CONFIG_VIDEO_TM6000_DVB=m
++CONFIG_DVB_USB=m
++CONFIG_DVB_USB_A800=m
++CONFIG_DVB_USB_DIBUSB_MB=m
++CONFIG_DVB_USB_DIBUSB_MB_FAULTY=y
++CONFIG_DVB_USB_DIBUSB_MC=m
++CONFIG_DVB_USB_DIB0700=m
++CONFIG_DVB_USB_UMT_010=m
++CONFIG_DVB_USB_CXUSB=m
++CONFIG_DVB_USB_M920X=m
++CONFIG_DVB_USB_DIGITV=m
++CONFIG_DVB_USB_VP7045=m
++CONFIG_DVB_USB_VP702X=m
++CONFIG_DVB_USB_GP8PSK=m
++CONFIG_DVB_USB_NOVA_T_USB2=m
++CONFIG_DVB_USB_TTUSB2=m
++CONFIG_DVB_USB_DTT200U=m
++CONFIG_DVB_USB_OPERA1=m
++CONFIG_DVB_USB_AF9005=m
++CONFIG_DVB_USB_AF9005_REMOTE=m
++CONFIG_DVB_USB_PCTV452E=m
++CONFIG_DVB_USB_DW2102=m
++CONFIG_DVB_USB_CINERGY_T2=m
++CONFIG_DVB_USB_DTV5100=m
++CONFIG_DVB_USB_FRIIO=m
++CONFIG_DVB_USB_AZ6027=m
++CONFIG_DVB_USB_TECHNISAT_USB2=m
++CONFIG_DVB_USB_V2=m
++CONFIG_DVB_USB_AF9015=m
++CONFIG_DVB_USB_AF9035=m
++CONFIG_DVB_USB_ANYSEE=m
++CONFIG_DVB_USB_AU6610=m
++CONFIG_DVB_USB_AZ6007=m
++CONFIG_DVB_USB_CE6230=m
++CONFIG_DVB_USB_EC168=m
++CONFIG_DVB_USB_GL861=m
++CONFIG_DVB_USB_LME2510=m
++CONFIG_DVB_USB_MXL111SF=m
++CONFIG_DVB_USB_RTL28XXU=m
++CONFIG_DVB_USB_DVBSKY=m
++CONFIG_SMS_USB_DRV=m
++CONFIG_DVB_B2C2_FLEXCOP_USB=m
++CONFIG_DVB_AS102=m
++CONFIG_VIDEO_EM28XX=m
++CONFIG_VIDEO_EM28XX_V4L2=m
++CONFIG_VIDEO_EM28XX_ALSA=m
++CONFIG_VIDEO_EM28XX_DVB=m
++CONFIG_V4L_PLATFORM_DRIVERS=y
++CONFIG_VIDEO_BCM2835=y
++CONFIG_VIDEO_BCM2835_MMAL=m
++CONFIG_RADIO_SI470X=y
++CONFIG_USB_SI470X=m
++CONFIG_I2C_SI470X=m
++CONFIG_RADIO_SI4713=m
++CONFIG_I2C_SI4713=m
++CONFIG_USB_MR800=m
++CONFIG_USB_DSBR=m
++CONFIG_RADIO_SHARK=m
++CONFIG_RADIO_SHARK2=m
++CONFIG_USB_KEENE=m
++CONFIG_USB_MA901=m
++CONFIG_RADIO_TEA5764=m
++CONFIG_RADIO_SAA7706H=m
++CONFIG_RADIO_TEF6862=m
++CONFIG_RADIO_WL1273=m
++CONFIG_RADIO_WL128X=m
++# CONFIG_MEDIA_SUBDRV_AUTOSELECT is not set
++CONFIG_VIDEO_UDA1342=m
++CONFIG_VIDEO_SONY_BTF_MPX=m
++CONFIG_VIDEO_TVP5150=m
++CONFIG_VIDEO_TW2804=m
++CONFIG_VIDEO_TW9903=m
++CONFIG_VIDEO_TW9906=m
++CONFIG_VIDEO_OV7640=m
++CONFIG_VIDEO_MT9V011=m
++CONFIG_FB=y
++CONFIG_FB_BCM2708=y
++CONFIG_FB_UDL=m
++CONFIG_FB_SSD1307=m
++CONFIG_FB_RPISENSE=m
++# CONFIG_BACKLIGHT_GENERIC is not set
++CONFIG_BACKLIGHT_GPIO=m
++CONFIG_FRAMEBUFFER_CONSOLE=y
++CONFIG_LOGO=y
++# CONFIG_LOGO_LINUX_MONO is not set
++# CONFIG_LOGO_LINUX_VGA16 is not set
++CONFIG_SOUND=y
++CONFIG_SND=m
++CONFIG_SND_SEQUENCER=m
++CONFIG_SND_SEQ_DUMMY=m
++CONFIG_SND_MIXER_OSS=m
++CONFIG_SND_PCM_OSS=m
++CONFIG_SND_SEQUENCER_OSS=y
++CONFIG_SND_HRTIMER=m
++CONFIG_SND_DUMMY=m
++CONFIG_SND_ALOOP=m
++CONFIG_SND_VIRMIDI=m
++CONFIG_SND_MTPAV=m
++CONFIG_SND_SERIAL_U16550=m
++CONFIG_SND_MPU401=m
++CONFIG_SND_BCM2835=m
++CONFIG_SND_USB_AUDIO=m
++CONFIG_SND_USB_UA101=m
++CONFIG_SND_USB_CAIAQ=m
++CONFIG_SND_USB_CAIAQ_INPUT=y
++CONFIG_SND_USB_6FIRE=m
++CONFIG_SND_SOC=m
++CONFIG_SND_BCM2835_SOC_I2S=m
++CONFIG_SND_BCM2708_SOC_HIFIBERRY_DAC=m
++CONFIG_SND_BCM2708_SOC_HIFIBERRY_DACPLUS=m
++CONFIG_SND_BCM2708_SOC_HIFIBERRY_DIGI=m
++CONFIG_SND_BCM2708_SOC_HIFIBERRY_AMP=m
++CONFIG_SND_BCM2708_SOC_RPI_DAC=m
++CONFIG_SND_BCM2708_SOC_RPI_PROTO=m
++CONFIG_SND_BCM2708_SOC_IQAUDIO_DAC=m
++CONFIG_SND_BCM2708_SOC_RASPIDAC3=m
++CONFIG_SND_SOC_ADAU1701=m
++CONFIG_SND_SOC_WM8804_I2C=m
++CONFIG_SND_SIMPLE_CARD=m
++CONFIG_SOUND_PRIME=m
++CONFIG_HIDRAW=y
++CONFIG_UHID=m
++CONFIG_HID_A4TECH=m
++CONFIG_HID_ACRUX=m
++CONFIG_HID_APPLE=m
++CONFIG_HID_BELKIN=m
++CONFIG_HID_CHERRY=m
++CONFIG_HID_CHICONY=m
++CONFIG_HID_CYPRESS=m
++CONFIG_HID_DRAGONRISE=m
++CONFIG_HID_EMS_FF=m
++CONFIG_HID_ELECOM=m
++CONFIG_HID_ELO=m
++CONFIG_HID_EZKEY=m
++CONFIG_HID_HOLTEK=m
++CONFIG_HID_KEYTOUCH=m
++CONFIG_HID_KYE=m
++CONFIG_HID_UCLOGIC=m
++CONFIG_HID_WALTOP=m
++CONFIG_HID_GYRATION=m
++CONFIG_HID_TWINHAN=m
++CONFIG_HID_KENSINGTON=m
++CONFIG_HID_LCPOWER=m
++CONFIG_HID_LOGITECH=m
++CONFIG_HID_MAGICMOUSE=m
++CONFIG_HID_MICROSOFT=m
++CONFIG_HID_MONTEREY=m
++CONFIG_HID_MULTITOUCH=m
++CONFIG_HID_NTRIG=m
++CONFIG_HID_ORTEK=m
++CONFIG_HID_PANTHERLORD=m
++CONFIG_HID_PETALYNX=m
++CONFIG_HID_PICOLCD=m
++CONFIG_HID_ROCCAT=m
++CONFIG_HID_SAMSUNG=m
++CONFIG_HID_SONY=m
++CONFIG_HID_SPEEDLINK=m
++CONFIG_HID_SUNPLUS=m
++CONFIG_HID_GREENASIA=m
++CONFIG_HID_SMARTJOYPLUS=m
++CONFIG_HID_TOPSEED=m
++CONFIG_HID_THINGM=m
++CONFIG_HID_THRUSTMASTER=m
++CONFIG_HID_WACOM=m
++CONFIG_HID_WIIMOTE=m
++CONFIG_HID_XINMO=m
++CONFIG_HID_ZEROPLUS=m
++CONFIG_HID_ZYDACRON=m
++CONFIG_HID_PID=y
++CONFIG_USB_HIDDEV=y
++CONFIG_USB=y
++CONFIG_USB_ANNOUNCE_NEW_DEVICES=y
++CONFIG_USB_MON=m
++CONFIG_USB_DWCOTG=y
++CONFIG_USB_PRINTER=m
++CONFIG_USB_STORAGE=y
++CONFIG_USB_STORAGE_REALTEK=m
++CONFIG_USB_STORAGE_DATAFAB=m
++CONFIG_USB_STORAGE_FREECOM=m
++CONFIG_USB_STORAGE_ISD200=m
++CONFIG_USB_STORAGE_USBAT=m
++CONFIG_USB_STORAGE_SDDR09=m
++CONFIG_USB_STORAGE_SDDR55=m
++CONFIG_USB_STORAGE_JUMPSHOT=m
++CONFIG_USB_STORAGE_ALAUDA=m
++CONFIG_USB_STORAGE_ONETOUCH=m
++CONFIG_USB_STORAGE_KARMA=m
++CONFIG_USB_STORAGE_CYPRESS_ATACB=m
++CONFIG_USB_STORAGE_ENE_UB6250=m
++CONFIG_USB_MDC800=m
++CONFIG_USB_MICROTEK=m
++CONFIG_USBIP_CORE=m
++CONFIG_USBIP_VHCI_HCD=m
++CONFIG_USBIP_HOST=m
++CONFIG_USB_SERIAL=m
++CONFIG_USB_SERIAL_GENERIC=y
++CONFIG_USB_SERIAL_AIRCABLE=m
++CONFIG_USB_SERIAL_ARK3116=m
++CONFIG_USB_SERIAL_BELKIN=m
++CONFIG_USB_SERIAL_CH341=m
++CONFIG_USB_SERIAL_WHITEHEAT=m
++CONFIG_USB_SERIAL_DIGI_ACCELEPORT=m
++CONFIG_USB_SERIAL_CP210X=m
++CONFIG_USB_SERIAL_CYPRESS_M8=m
++CONFIG_USB_SERIAL_EMPEG=m
++CONFIG_USB_SERIAL_FTDI_SIO=m
++CONFIG_USB_SERIAL_VISOR=m
++CONFIG_USB_SERIAL_IPAQ=m
++CONFIG_USB_SERIAL_IR=m
++CONFIG_USB_SERIAL_EDGEPORT=m
++CONFIG_USB_SERIAL_EDGEPORT_TI=m
++CONFIG_USB_SERIAL_F81232=m
++CONFIG_USB_SERIAL_GARMIN=m
++CONFIG_USB_SERIAL_IPW=m
++CONFIG_USB_SERIAL_IUU=m
++CONFIG_USB_SERIAL_KEYSPAN_PDA=m
++CONFIG_USB_SERIAL_KEYSPAN=m
++CONFIG_USB_SERIAL_KLSI=m
++CONFIG_USB_SERIAL_KOBIL_SCT=m
++CONFIG_USB_SERIAL_MCT_U232=m
++CONFIG_USB_SERIAL_METRO=m
++CONFIG_USB_SERIAL_MOS7720=m
++CONFIG_USB_SERIAL_MOS7840=m
++CONFIG_USB_SERIAL_NAVMAN=m
++CONFIG_USB_SERIAL_PL2303=m
++CONFIG_USB_SERIAL_OTI6858=m
++CONFIG_USB_SERIAL_QCAUX=m
++CONFIG_USB_SERIAL_QUALCOMM=m
++CONFIG_USB_SERIAL_SPCP8X5=m
++CONFIG_USB_SERIAL_SAFE=m
++CONFIG_USB_SERIAL_SIERRAWIRELESS=m
++CONFIG_USB_SERIAL_SYMBOL=m
++CONFIG_USB_SERIAL_TI=m
++CONFIG_USB_SERIAL_CYBERJACK=m
++CONFIG_USB_SERIAL_XIRCOM=m
++CONFIG_USB_SERIAL_OPTION=m
++CONFIG_USB_SERIAL_OMNINET=m
++CONFIG_USB_SERIAL_OPTICON=m
++CONFIG_USB_SERIAL_XSENS_MT=m
++CONFIG_USB_SERIAL_WISHBONE=m
++CONFIG_USB_SERIAL_SSU100=m
++CONFIG_USB_SERIAL_QT2=m
++CONFIG_USB_SERIAL_DEBUG=m
++CONFIG_USB_EMI62=m
++CONFIG_USB_EMI26=m
++CONFIG_USB_ADUTUX=m
++CONFIG_USB_SEVSEG=m
++CONFIG_USB_RIO500=m
++CONFIG_USB_LEGOTOWER=m
++CONFIG_USB_LCD=m
++CONFIG_USB_LED=m
++CONFIG_USB_CYPRESS_CY7C63=m
++CONFIG_USB_CYTHERM=m
++CONFIG_USB_IDMOUSE=m
++CONFIG_USB_FTDI_ELAN=m
++CONFIG_USB_APPLEDISPLAY=m
++CONFIG_USB_LD=m
++CONFIG_USB_TRANCEVIBRATOR=m
++CONFIG_USB_IOWARRIOR=m
++CONFIG_USB_TEST=m
++CONFIG_USB_ISIGHTFW=m
++CONFIG_USB_YUREX=m
++CONFIG_USB_ATM=m
++CONFIG_USB_SPEEDTOUCH=m
++CONFIG_USB_CXACRU=m
++CONFIG_USB_UEAGLEATM=m
++CONFIG_USB_XUSBATM=m
++CONFIG_MMC=y
++CONFIG_MMC_BLOCK_MINORS=32
++CONFIG_MMC_BCM2835=y
++CONFIG_MMC_BCM2835_DMA=y
++CONFIG_MMC_BCM2835_SDHOST=y
++CONFIG_MMC_SDHCI=y
++CONFIG_MMC_SDHCI_PLTFM=y
++CONFIG_MMC_SPI=m
++CONFIG_LEDS_CLASS=y
++CONFIG_LEDS_GPIO=y
++CONFIG_LEDS_TRIGGER_TIMER=y
++CONFIG_LEDS_TRIGGER_ONESHOT=y
++CONFIG_LEDS_TRIGGER_HEARTBEAT=y
++CONFIG_LEDS_TRIGGER_BACKLIGHT=y
++CONFIG_LEDS_TRIGGER_CPU=y
++CONFIG_LEDS_TRIGGER_GPIO=y
++CONFIG_LEDS_TRIGGER_DEFAULT_ON=y
++CONFIG_LEDS_TRIGGER_TRANSIENT=m
++CONFIG_LEDS_TRIGGER_CAMERA=m
++CONFIG_LEDS_TRIGGER_INPUT=y
++CONFIG_RTC_CLASS=y
++# CONFIG_RTC_HCTOSYS is not set
++CONFIG_RTC_DRV_DS1307=m
++CONFIG_RTC_DRV_DS1374=m
++CONFIG_RTC_DRV_DS1672=m
++CONFIG_RTC_DRV_DS3232=m
++CONFIG_RTC_DRV_MAX6900=m
++CONFIG_RTC_DRV_RS5C372=m
++CONFIG_RTC_DRV_ISL1208=m
++CONFIG_RTC_DRV_ISL12022=m
++CONFIG_RTC_DRV_ISL12057=m
++CONFIG_RTC_DRV_X1205=m
++CONFIG_RTC_DRV_PCF2127=m
++CONFIG_RTC_DRV_PCF8523=m
++CONFIG_RTC_DRV_PCF8563=m
++CONFIG_RTC_DRV_PCF8583=m
++CONFIG_RTC_DRV_M41T80=m
++CONFIG_RTC_DRV_BQ32K=m
++CONFIG_RTC_DRV_S35390A=m
++CONFIG_RTC_DRV_FM3130=m
++CONFIG_RTC_DRV_RX8581=m
++CONFIG_RTC_DRV_RX8025=m
++CONFIG_RTC_DRV_EM3027=m
++CONFIG_RTC_DRV_RV3029C2=m
++CONFIG_RTC_DRV_M41T93=m
++CONFIG_RTC_DRV_M41T94=m
++CONFIG_RTC_DRV_DS1305=m
++CONFIG_RTC_DRV_DS1390=m
++CONFIG_RTC_DRV_MAX6902=m
++CONFIG_RTC_DRV_R9701=m
++CONFIG_RTC_DRV_RS5C348=m
++CONFIG_RTC_DRV_DS3234=m
++CONFIG_RTC_DRV_PCF2123=m
++CONFIG_RTC_DRV_RX4581=m
++CONFIG_DMADEVICES=y
++CONFIG_DMA_BCM2835=y
++CONFIG_DMA_BCM2708=y
++CONFIG_UIO=m
++CONFIG_UIO_PDRV_GENIRQ=m
++CONFIG_STAGING=y
++CONFIG_PRISM2_USB=m
++CONFIG_R8712U=m
++CONFIG_R8188EU=m
++CONFIG_R8723AU=m
++CONFIG_VT6656=m
++CONFIG_SPEAKUP=m
++CONFIG_SPEAKUP_SYNTH_SOFT=m
++CONFIG_STAGING_MEDIA=y
++CONFIG_LIRC_STAGING=y
++CONFIG_LIRC_IMON=m
++CONFIG_LIRC_RPI=m
++CONFIG_LIRC_SASEM=m
++CONFIG_LIRC_SERIAL=m
++CONFIG_FB_TFT=m
++CONFIG_FB_TFT_AGM1264K_FL=m
++CONFIG_FB_TFT_BD663474=m
++CONFIG_FB_TFT_HX8340BN=m
++CONFIG_FB_TFT_HX8347D=m
++CONFIG_FB_TFT_HX8353D=m
++CONFIG_FB_TFT_ILI9163=m
++CONFIG_FB_TFT_ILI9320=m
++CONFIG_FB_TFT_ILI9325=m
++CONFIG_FB_TFT_ILI9340=m
++CONFIG_FB_TFT_ILI9341=m
++CONFIG_FB_TFT_ILI9481=m
++CONFIG_FB_TFT_ILI9486=m
++CONFIG_FB_TFT_PCD8544=m
++CONFIG_FB_TFT_RA8875=m
++CONFIG_FB_TFT_S6D02A1=m
++CONFIG_FB_TFT_S6D1121=m
++CONFIG_FB_TFT_SSD1289=m
++CONFIG_FB_TFT_SSD1306=m
++CONFIG_FB_TFT_SSD1331=m
++CONFIG_FB_TFT_SSD1351=m
++CONFIG_FB_TFT_ST7735R=m
++CONFIG_FB_TFT_TINYLCD=m
++CONFIG_FB_TFT_TLS8204=m
++CONFIG_FB_TFT_UC1701=m
++CONFIG_FB_TFT_UPD161704=m
++CONFIG_FB_TFT_WATTEROTT=m
++CONFIG_FB_FLEX=m
++CONFIG_FB_TFT_FBTFT_DEVICE=m
++CONFIG_MAILBOX=y
++CONFIG_BCM2835_MBOX=y
++# CONFIG_IOMMU_SUPPORT is not set
++CONFIG_EXTCON=m
++CONFIG_EXTCON_ARIZONA=m
++CONFIG_IIO=m
++CONFIG_IIO_BUFFER=y
++CONFIG_IIO_BUFFER_CB=y
++CONFIG_IIO_KFIFO_BUF=m
++CONFIG_MCP320X=m
++CONFIG_DHT11=m
++CONFIG_PWM_BCM2835=m
++CONFIG_RASPBERRYPI_FIRMWARE=y
++CONFIG_EXT4_FS=y
++CONFIG_EXT4_FS_POSIX_ACL=y
++CONFIG_EXT4_FS_SECURITY=y
++CONFIG_REISERFS_FS=m
++CONFIG_REISERFS_FS_XATTR=y
++CONFIG_REISERFS_FS_POSIX_ACL=y
++CONFIG_REISERFS_FS_SECURITY=y
++CONFIG_JFS_FS=m
++CONFIG_JFS_POSIX_ACL=y
++CONFIG_JFS_SECURITY=y
++CONFIG_JFS_STATISTICS=y
++CONFIG_XFS_FS=m
++CONFIG_XFS_QUOTA=y
++CONFIG_XFS_POSIX_ACL=y
++CONFIG_XFS_RT=y
++CONFIG_GFS2_FS=m
++CONFIG_OCFS2_FS=m
++CONFIG_BTRFS_FS=m
++CONFIG_BTRFS_FS_POSIX_ACL=y
++CONFIG_NILFS2_FS=m
++CONFIG_F2FS_FS=y
++CONFIG_FANOTIFY=y
++CONFIG_QFMT_V1=m
++CONFIG_QFMT_V2=m
++CONFIG_AUTOFS4_FS=y
++CONFIG_FUSE_FS=m
++CONFIG_CUSE=m
++CONFIG_OVERLAY_FS=m
++CONFIG_FSCACHE=y
++CONFIG_FSCACHE_STATS=y
++CONFIG_FSCACHE_HISTOGRAM=y
++CONFIG_CACHEFILES=y
++CONFIG_ISO9660_FS=m
++CONFIG_JOLIET=y
++CONFIG_ZISOFS=y
++CONFIG_UDF_FS=m
++CONFIG_MSDOS_FS=y
++CONFIG_VFAT_FS=y
++CONFIG_FAT_DEFAULT_IOCHARSET="ascii"
++CONFIG_NTFS_FS=m
++CONFIG_NTFS_RW=y
++CONFIG_TMPFS=y
++CONFIG_TMPFS_POSIX_ACL=y
++CONFIG_CONFIGFS_FS=y
++CONFIG_ECRYPT_FS=m
++CONFIG_HFS_FS=m
++CONFIG_HFSPLUS_FS=m
++CONFIG_JFFS2_FS=m
++CONFIG_JFFS2_SUMMARY=y
++CONFIG_UBIFS_FS=m
++CONFIG_SQUASHFS=m
++CONFIG_SQUASHFS_XATTR=y
++CONFIG_SQUASHFS_LZO=y
++CONFIG_SQUASHFS_XZ=y
++CONFIG_NFS_FS=y
++CONFIG_NFS_V3_ACL=y
++CONFIG_NFS_V4=y
++CONFIG_NFS_SWAP=y
++CONFIG_ROOT_NFS=y
++CONFIG_NFS_FSCACHE=y
++CONFIG_NFSD=m
++CONFIG_NFSD_V3_ACL=y
++CONFIG_NFSD_V4=y
++CONFIG_CIFS=m
++CONFIG_CIFS_WEAK_PW_HASH=y
++CONFIG_CIFS_UPCALL=y
++CONFIG_CIFS_XATTR=y
++CONFIG_CIFS_POSIX=y
++CONFIG_CIFS_ACL=y
++CONFIG_CIFS_DFS_UPCALL=y
++CONFIG_CIFS_SMB2=y
++CONFIG_CIFS_FSCACHE=y
++CONFIG_9P_FS=m
++CONFIG_9P_FS_POSIX_ACL=y
++CONFIG_NLS_DEFAULT="utf8"
++CONFIG_NLS_CODEPAGE_437=y
++CONFIG_NLS_CODEPAGE_737=m
++CONFIG_NLS_CODEPAGE_775=m
++CONFIG_NLS_CODEPAGE_850=m
++CONFIG_NLS_CODEPAGE_852=m
++CONFIG_NLS_CODEPAGE_855=m
++CONFIG_NLS_CODEPAGE_857=m
++CONFIG_NLS_CODEPAGE_860=m
++CONFIG_NLS_CODEPAGE_861=m
++CONFIG_NLS_CODEPAGE_862=m
++CONFIG_NLS_CODEPAGE_863=m
++CONFIG_NLS_CODEPAGE_864=m
++CONFIG_NLS_CODEPAGE_865=m
++CONFIG_NLS_CODEPAGE_866=m
++CONFIG_NLS_CODEPAGE_869=m
++CONFIG_NLS_CODEPAGE_936=m
++CONFIG_NLS_CODEPAGE_950=m
++CONFIG_NLS_CODEPAGE_932=m
++CONFIG_NLS_CODEPAGE_949=m
++CONFIG_NLS_CODEPAGE_874=m
++CONFIG_NLS_ISO8859_8=m
++CONFIG_NLS_CODEPAGE_1250=m
++CONFIG_NLS_CODEPAGE_1251=m
++CONFIG_NLS_ASCII=y
++CONFIG_NLS_ISO8859_1=m
++CONFIG_NLS_ISO8859_2=m
++CONFIG_NLS_ISO8859_3=m
++CONFIG_NLS_ISO8859_4=m
++CONFIG_NLS_ISO8859_5=m
++CONFIG_NLS_ISO8859_6=m
++CONFIG_NLS_ISO8859_7=m
++CONFIG_NLS_ISO8859_9=m
++CONFIG_NLS_ISO8859_13=m
++CONFIG_NLS_ISO8859_14=m
++CONFIG_NLS_ISO8859_15=m
++CONFIG_NLS_KOI8_R=m
++CONFIG_NLS_KOI8_U=m
++CONFIG_DLM=m
++CONFIG_PRINTK_TIME=y
++CONFIG_BOOT_PRINTK_DELAY=y
++CONFIG_DEBUG_MEMORY_INIT=y
++CONFIG_DETECT_HUNG_TASK=y
++CONFIG_TIMER_STATS=y
++CONFIG_IRQSOFF_TRACER=y
++CONFIG_SCHED_TRACER=y
++CONFIG_STACK_TRACER=y
++CONFIG_BLK_DEV_IO_TRACE=y
++# CONFIG_KPROBE_EVENT is not set
++CONFIG_FUNCTION_PROFILER=y
++CONFIG_KGDB=y
++CONFIG_KGDB_KDB=y
++CONFIG_KDB_KEYBOARD=y
++CONFIG_CRYPTO_USER=m
++CONFIG_CRYPTO_CBC=y
++CONFIG_CRYPTO_CTS=m
++CONFIG_CRYPTO_XTS=m
++CONFIG_CRYPTO_XCBC=m
++CONFIG_CRYPTO_TGR192=m
++CONFIG_CRYPTO_WP512=m
++CONFIG_CRYPTO_CAST5=m
++CONFIG_CRYPTO_DES=y
++CONFIG_CRYPTO_USER_API_SKCIPHER=m
++# CONFIG_CRYPTO_HW is not set
++CONFIG_ARM_CRYPTO=y
++CONFIG_CRYPTO_SHA1_ARM_NEON=m
++CONFIG_CRYPTO_AES_ARM_BS=m
++CONFIG_CRC_ITU_T=y
++CONFIG_LIBCRC32C=y
+--- /dev/null
++++ b/arch/arm/configs/bcmrpi_defconfig
+@@ -0,0 +1,1265 @@
++# CONFIG_ARM_PATCH_PHYS_VIRT is not set
++CONFIG_PHYS_OFFSET=0
++# CONFIG_LOCALVERSION_AUTO is not set
++CONFIG_SYSVIPC=y
++CONFIG_POSIX_MQUEUE=y
++CONFIG_FHANDLE=y
++CONFIG_NO_HZ=y
++CONFIG_HIGH_RES_TIMERS=y
++CONFIG_BSD_PROCESS_ACCT=y
++CONFIG_BSD_PROCESS_ACCT_V3=y
++CONFIG_TASKSTATS=y
++CONFIG_TASK_DELAY_ACCT=y
++CONFIG_TASK_XACCT=y
++CONFIG_TASK_IO_ACCOUNTING=y
++CONFIG_IKCONFIG=m
++CONFIG_IKCONFIG_PROC=y
++CONFIG_CGROUP_FREEZER=y
++CONFIG_CGROUP_DEVICE=y
++CONFIG_CPUSETS=y
++CONFIG_CGROUP_CPUACCT=y
++CONFIG_MEMCG=y
++CONFIG_BLK_CGROUP=y
++CONFIG_NAMESPACES=y
++CONFIG_SCHED_AUTOGROUP=y
++CONFIG_BLK_DEV_INITRD=y
++CONFIG_EMBEDDED=y
++# CONFIG_COMPAT_BRK is not set
++CONFIG_PROFILING=y
++CONFIG_OPROFILE=m
++CONFIG_KPROBES=y
++CONFIG_JUMP_LABEL=y
++CONFIG_MODULES=y
++CONFIG_MODULE_UNLOAD=y
++CONFIG_MODVERSIONS=y
++CONFIG_MODULE_SRCVERSION_ALL=y
++CONFIG_BLK_DEV_THROTTLING=y
++CONFIG_PARTITION_ADVANCED=y
++CONFIG_MAC_PARTITION=y
++CONFIG_CFQ_GROUP_IOSCHED=y
++CONFIG_ARCH_BCM2708=y
++CONFIG_PREEMPT_VOLUNTARY=y
++CONFIG_AEABI=y
++CONFIG_OABI_COMPAT=y
++# CONFIG_CPU_SW_DOMAIN_PAN is not set
++CONFIG_CLEANCACHE=y
++CONFIG_FRONTSWAP=y
++CONFIG_CMA=y
++CONFIG_ZSMALLOC=m
++CONFIG_PGTABLE_MAPPING=y
++CONFIG_UACCESS_WITH_MEMCPY=y
++CONFIG_SECCOMP=y
++# CONFIG_ATAGS is not set
++CONFIG_ZBOOT_ROM_TEXT=0x0
++CONFIG_ZBOOT_ROM_BSS=0x0
++CONFIG_CMDLINE="console=ttyAMA0,115200 kgdboc=ttyAMA0,115200 root=/dev/mmcblk0p2 rootfstype=ext4 rootwait"
++CONFIG_CPU_FREQ=y
++CONFIG_CPU_FREQ_STAT=m
++CONFIG_CPU_FREQ_STAT_DETAILS=y
++CONFIG_CPU_FREQ_DEFAULT_GOV_POWERSAVE=y
++CONFIG_CPU_FREQ_GOV_PERFORMANCE=y
++CONFIG_CPU_FREQ_GOV_USERSPACE=y
++CONFIG_CPU_FREQ_GOV_ONDEMAND=y
++CONFIG_CPU_FREQ_GOV_CONSERVATIVE=y
++CONFIG_VFP=y
++CONFIG_BINFMT_MISC=m
++# CONFIG_SUSPEND is not set
++CONFIG_NET=y
++CONFIG_PACKET=y
++CONFIG_UNIX=y
++CONFIG_XFRM_USER=y
++CONFIG_NET_KEY=m
++CONFIG_INET=y
++CONFIG_IP_MULTICAST=y
++CONFIG_IP_ADVANCED_ROUTER=y
++CONFIG_IP_MULTIPLE_TABLES=y
++CONFIG_IP_ROUTE_MULTIPATH=y
++CONFIG_IP_ROUTE_VERBOSE=y
++CONFIG_IP_PNP=y
++CONFIG_IP_PNP_DHCP=y
++CONFIG_IP_PNP_RARP=y
++CONFIG_NET_IPIP=m
++CONFIG_NET_IPGRE_DEMUX=m
++CONFIG_NET_IPGRE=m
++CONFIG_IP_MROUTE=y
++CONFIG_IP_MROUTE_MULTIPLE_TABLES=y
++CONFIG_IP_PIMSM_V1=y
++CONFIG_IP_PIMSM_V2=y
++CONFIG_SYN_COOKIES=y
++CONFIG_INET_AH=m
++CONFIG_INET_ESP=m
++CONFIG_INET_IPCOMP=m
++CONFIG_INET_XFRM_MODE_TRANSPORT=m
++CONFIG_INET_XFRM_MODE_TUNNEL=m
++CONFIG_INET_XFRM_MODE_BEET=m
++CONFIG_INET_LRO=m
++CONFIG_INET_DIAG=m
++CONFIG_INET6_AH=m
++CONFIG_INET6_ESP=m
++CONFIG_INET6_IPCOMP=m
++CONFIG_IPV6_TUNNEL=m
++CONFIG_IPV6_MULTIPLE_TABLES=y
++CONFIG_IPV6_MROUTE=y
++CONFIG_IPV6_MROUTE_MULTIPLE_TABLES=y
++CONFIG_IPV6_PIMSM_V2=y
++CONFIG_NETFILTER=y
++CONFIG_NF_CONNTRACK=m
++CONFIG_NF_CONNTRACK_ZONES=y
++CONFIG_NF_CONNTRACK_EVENTS=y
++CONFIG_NF_CONNTRACK_TIMESTAMP=y
++CONFIG_NF_CT_PROTO_DCCP=m
++CONFIG_NF_CT_PROTO_UDPLITE=m
++CONFIG_NF_CONNTRACK_AMANDA=m
++CONFIG_NF_CONNTRACK_FTP=m
++CONFIG_NF_CONNTRACK_H323=m
++CONFIG_NF_CONNTRACK_IRC=m
++CONFIG_NF_CONNTRACK_NETBIOS_NS=m
++CONFIG_NF_CONNTRACK_SNMP=m
++CONFIG_NF_CONNTRACK_PPTP=m
++CONFIG_NF_CONNTRACK_SANE=m
++CONFIG_NF_CONNTRACK_SIP=m
++CONFIG_NF_CONNTRACK_TFTP=m
++CONFIG_NF_CT_NETLINK=m
++CONFIG_NETFILTER_XT_SET=m
++CONFIG_NETFILTER_XT_TARGET_CHECKSUM=m
++CONFIG_NETFILTER_XT_TARGET_CLASSIFY=m
++CONFIG_NETFILTER_XT_TARGET_CONNMARK=m
++CONFIG_NETFILTER_XT_TARGET_DSCP=m
++CONFIG_NETFILTER_XT_TARGET_HMARK=m
++CONFIG_NETFILTER_XT_TARGET_IDLETIMER=m
++CONFIG_NETFILTER_XT_TARGET_LED=m
++CONFIG_NETFILTER_XT_TARGET_LOG=m
++CONFIG_NETFILTER_XT_TARGET_MARK=m
++CONFIG_NETFILTER_XT_TARGET_NFLOG=m
++CONFIG_NETFILTER_XT_TARGET_NFQUEUE=m
++CONFIG_NETFILTER_XT_TARGET_NOTRACK=m
++CONFIG_NETFILTER_XT_TARGET_TEE=m
++CONFIG_NETFILTER_XT_TARGET_TPROXY=m
++CONFIG_NETFILTER_XT_TARGET_TRACE=m
++CONFIG_NETFILTER_XT_TARGET_TCPMSS=m
++CONFIG_NETFILTER_XT_TARGET_TCPOPTSTRIP=m
++CONFIG_NETFILTER_XT_MATCH_ADDRTYPE=m
++CONFIG_NETFILTER_XT_MATCH_BPF=m
++CONFIG_NETFILTER_XT_MATCH_CLUSTER=m
++CONFIG_NETFILTER_XT_MATCH_COMMENT=m
++CONFIG_NETFILTER_XT_MATCH_CONNBYTES=m
++CONFIG_NETFILTER_XT_MATCH_CONNLABEL=m
++CONFIG_NETFILTER_XT_MATCH_CONNLIMIT=m
++CONFIG_NETFILTER_XT_MATCH_CONNMARK=m
++CONFIG_NETFILTER_XT_MATCH_CONNTRACK=m
++CONFIG_NETFILTER_XT_MATCH_CPU=m
++CONFIG_NETFILTER_XT_MATCH_DCCP=m
++CONFIG_NETFILTER_XT_MATCH_DEVGROUP=m
++CONFIG_NETFILTER_XT_MATCH_DSCP=m
++CONFIG_NETFILTER_XT_MATCH_ESP=m
++CONFIG_NETFILTER_XT_MATCH_HASHLIMIT=m
++CONFIG_NETFILTER_XT_MATCH_HELPER=m
++CONFIG_NETFILTER_XT_MATCH_IPRANGE=m
++CONFIG_NETFILTER_XT_MATCH_IPVS=m
++CONFIG_NETFILTER_XT_MATCH_LENGTH=m
++CONFIG_NETFILTER_XT_MATCH_LIMIT=m
++CONFIG_NETFILTER_XT_MATCH_MAC=m
++CONFIG_NETFILTER_XT_MATCH_MARK=m
++CONFIG_NETFILTER_XT_MATCH_MULTIPORT=m
++CONFIG_NETFILTER_XT_MATCH_NFACCT=m
++CONFIG_NETFILTER_XT_MATCH_OSF=m
++CONFIG_NETFILTER_XT_MATCH_OWNER=m
++CONFIG_NETFILTER_XT_MATCH_POLICY=m
++CONFIG_NETFILTER_XT_MATCH_PHYSDEV=m
++CONFIG_NETFILTER_XT_MATCH_PKTTYPE=m
++CONFIG_NETFILTER_XT_MATCH_QUOTA=m
++CONFIG_NETFILTER_XT_MATCH_RATEEST=m
++CONFIG_NETFILTER_XT_MATCH_REALM=m
++CONFIG_NETFILTER_XT_MATCH_RECENT=m
++CONFIG_NETFILTER_XT_MATCH_SOCKET=m
++CONFIG_NETFILTER_XT_MATCH_STATE=m
++CONFIG_NETFILTER_XT_MATCH_STATISTIC=m
++CONFIG_NETFILTER_XT_MATCH_STRING=m
++CONFIG_NETFILTER_XT_MATCH_TCPMSS=m
++CONFIG_NETFILTER_XT_MATCH_TIME=m
++CONFIG_NETFILTER_XT_MATCH_U32=m
++CONFIG_IP_SET=m
++CONFIG_IP_SET_BITMAP_IP=m
++CONFIG_IP_SET_BITMAP_IPMAC=m
++CONFIG_IP_SET_BITMAP_PORT=m
++CONFIG_IP_SET_HASH_IP=m
++CONFIG_IP_SET_HASH_IPPORT=m
++CONFIG_IP_SET_HASH_IPPORTIP=m
++CONFIG_IP_SET_HASH_IPPORTNET=m
++CONFIG_IP_SET_HASH_NET=m
++CONFIG_IP_SET_HASH_NETPORT=m
++CONFIG_IP_SET_HASH_NETIFACE=m
++CONFIG_IP_SET_LIST_SET=m
++CONFIG_IP_VS=m
++CONFIG_IP_VS_PROTO_TCP=y
++CONFIG_IP_VS_PROTO_UDP=y
++CONFIG_IP_VS_PROTO_ESP=y
++CONFIG_IP_VS_PROTO_AH=y
++CONFIG_IP_VS_PROTO_SCTP=y
++CONFIG_IP_VS_RR=m
++CONFIG_IP_VS_WRR=m
++CONFIG_IP_VS_LC=m
++CONFIG_IP_VS_WLC=m
++CONFIG_IP_VS_LBLC=m
++CONFIG_IP_VS_LBLCR=m
++CONFIG_IP_VS_DH=m
++CONFIG_IP_VS_SH=m
++CONFIG_IP_VS_SED=m
++CONFIG_IP_VS_NQ=m
++CONFIG_IP_VS_FTP=m
++CONFIG_IP_VS_PE_SIP=m
++CONFIG_NF_CONNTRACK_IPV4=m
++CONFIG_IP_NF_IPTABLES=m
++CONFIG_IP_NF_MATCH_AH=m
++CONFIG_IP_NF_MATCH_ECN=m
++CONFIG_IP_NF_MATCH_TTL=m
++CONFIG_IP_NF_FILTER=m
++CONFIG_IP_NF_TARGET_REJECT=m
++CONFIG_IP_NF_NAT=m
++CONFIG_IP_NF_TARGET_MASQUERADE=m
++CONFIG_IP_NF_TARGET_NETMAP=m
++CONFIG_IP_NF_TARGET_REDIRECT=m
++CONFIG_IP_NF_MANGLE=m
++CONFIG_IP_NF_TARGET_CLUSTERIP=m
++CONFIG_IP_NF_TARGET_ECN=m
++CONFIG_IP_NF_TARGET_TTL=m
++CONFIG_IP_NF_RAW=m
++CONFIG_IP_NF_ARPTABLES=m
++CONFIG_IP_NF_ARPFILTER=m
++CONFIG_IP_NF_ARP_MANGLE=m
++CONFIG_NF_CONNTRACK_IPV6=m
++CONFIG_IP6_NF_IPTABLES=m
++CONFIG_IP6_NF_MATCH_AH=m
++CONFIG_IP6_NF_MATCH_EUI64=m
++CONFIG_IP6_NF_MATCH_FRAG=m
++CONFIG_IP6_NF_MATCH_OPTS=m
++CONFIG_IP6_NF_MATCH_HL=m
++CONFIG_IP6_NF_MATCH_IPV6HEADER=m
++CONFIG_IP6_NF_MATCH_MH=m
++CONFIG_IP6_NF_MATCH_RT=m
++CONFIG_IP6_NF_TARGET_HL=m
++CONFIG_IP6_NF_FILTER=m
++CONFIG_IP6_NF_TARGET_REJECT=m
++CONFIG_IP6_NF_MANGLE=m
++CONFIG_IP6_NF_RAW=m
++CONFIG_IP6_NF_NAT=m
++CONFIG_IP6_NF_TARGET_MASQUERADE=m
++CONFIG_IP6_NF_TARGET_NPT=m
++CONFIG_BRIDGE_NF_EBTABLES=m
++CONFIG_BRIDGE_EBT_BROUTE=m
++CONFIG_BRIDGE_EBT_T_FILTER=m
++CONFIG_BRIDGE_EBT_T_NAT=m
++CONFIG_BRIDGE_EBT_802_3=m
++CONFIG_BRIDGE_EBT_AMONG=m
++CONFIG_BRIDGE_EBT_ARP=m
++CONFIG_BRIDGE_EBT_IP=m
++CONFIG_BRIDGE_EBT_IP6=m
++CONFIG_BRIDGE_EBT_LIMIT=m
++CONFIG_BRIDGE_EBT_MARK=m
++CONFIG_BRIDGE_EBT_PKTTYPE=m
++CONFIG_BRIDGE_EBT_STP=m
++CONFIG_BRIDGE_EBT_VLAN=m
++CONFIG_BRIDGE_EBT_ARPREPLY=m
++CONFIG_BRIDGE_EBT_DNAT=m
++CONFIG_BRIDGE_EBT_MARK_T=m
++CONFIG_BRIDGE_EBT_REDIRECT=m
++CONFIG_BRIDGE_EBT_SNAT=m
++CONFIG_BRIDGE_EBT_LOG=m
++CONFIG_BRIDGE_EBT_NFLOG=m
++CONFIG_SCTP_COOKIE_HMAC_SHA1=y
++CONFIG_ATM=m
++CONFIG_L2TP=m
++CONFIG_L2TP_V3=y
++CONFIG_L2TP_IP=m
++CONFIG_L2TP_ETH=m
++CONFIG_BRIDGE=m
++CONFIG_VLAN_8021Q=m
++CONFIG_VLAN_8021Q_GVRP=y
++CONFIG_ATALK=m
++CONFIG_6LOWPAN=m
++CONFIG_IEEE802154=m
++CONFIG_IEEE802154_6LOWPAN=m
++CONFIG_MAC802154=m
++CONFIG_NET_SCHED=y
++CONFIG_NET_SCH_CBQ=m
++CONFIG_NET_SCH_HTB=m
++CONFIG_NET_SCH_HFSC=m
++CONFIG_NET_SCH_PRIO=m
++CONFIG_NET_SCH_MULTIQ=m
++CONFIG_NET_SCH_RED=m
++CONFIG_NET_SCH_SFB=m
++CONFIG_NET_SCH_SFQ=m
++CONFIG_NET_SCH_TEQL=m
++CONFIG_NET_SCH_TBF=m
++CONFIG_NET_SCH_GRED=m
++CONFIG_NET_SCH_DSMARK=m
++CONFIG_NET_SCH_NETEM=m
++CONFIG_NET_SCH_DRR=m
++CONFIG_NET_SCH_MQPRIO=m
++CONFIG_NET_SCH_CHOKE=m
++CONFIG_NET_SCH_QFQ=m
++CONFIG_NET_SCH_CODEL=m
++CONFIG_NET_SCH_FQ_CODEL=m
++CONFIG_NET_SCH_INGRESS=m
++CONFIG_NET_SCH_PLUG=m
++CONFIG_NET_CLS_BASIC=m
++CONFIG_NET_CLS_TCINDEX=m
++CONFIG_NET_CLS_ROUTE4=m
++CONFIG_NET_CLS_FW=m
++CONFIG_NET_CLS_U32=m
++CONFIG_CLS_U32_MARK=y
++CONFIG_NET_CLS_RSVP=m
++CONFIG_NET_CLS_RSVP6=m
++CONFIG_NET_CLS_FLOW=m
++CONFIG_NET_CLS_CGROUP=m
++CONFIG_NET_EMATCH=y
++CONFIG_NET_EMATCH_CMP=m
++CONFIG_NET_EMATCH_NBYTE=m
++CONFIG_NET_EMATCH_U32=m
++CONFIG_NET_EMATCH_META=m
++CONFIG_NET_EMATCH_TEXT=m
++CONFIG_NET_EMATCH_IPSET=m
++CONFIG_NET_CLS_ACT=y
++CONFIG_NET_ACT_POLICE=m
++CONFIG_NET_ACT_GACT=m
++CONFIG_GACT_PROB=y
++CONFIG_NET_ACT_MIRRED=m
++CONFIG_NET_ACT_IPT=m
++CONFIG_NET_ACT_NAT=m
++CONFIG_NET_ACT_PEDIT=m
++CONFIG_NET_ACT_SIMP=m
++CONFIG_NET_ACT_SKBEDIT=m
++CONFIG_NET_ACT_CSUM=m
++CONFIG_BATMAN_ADV=m
++CONFIG_OPENVSWITCH=m
++CONFIG_NET_PKTGEN=m
++CONFIG_HAMRADIO=y
++CONFIG_AX25=m
++CONFIG_NETROM=m
++CONFIG_ROSE=m
++CONFIG_MKISS=m
++CONFIG_6PACK=m
++CONFIG_BPQETHER=m
++CONFIG_BAYCOM_SER_FDX=m
++CONFIG_BAYCOM_SER_HDX=m
++CONFIG_YAM=m
++CONFIG_CAN=m
++CONFIG_CAN_VCAN=m
++CONFIG_CAN_MCP251X=m
++CONFIG_IRDA=m
++CONFIG_IRLAN=m
++CONFIG_IRNET=m
++CONFIG_IRCOMM=m
++CONFIG_IRDA_ULTRA=y
++CONFIG_IRDA_CACHE_LAST_LSAP=y
++CONFIG_IRDA_FAST_RR=y
++CONFIG_IRTTY_SIR=m
++CONFIG_KINGSUN_DONGLE=m
++CONFIG_KSDAZZLE_DONGLE=m
++CONFIG_KS959_DONGLE=m
++CONFIG_USB_IRDA=m
++CONFIG_SIGMATEL_FIR=m
++CONFIG_MCS_FIR=m
++CONFIG_BT=m
++CONFIG_BT_RFCOMM=m
++CONFIG_BT_RFCOMM_TTY=y
++CONFIG_BT_BNEP=m
++CONFIG_BT_BNEP_MC_FILTER=y
++CONFIG_BT_BNEP_PROTO_FILTER=y
++CONFIG_BT_HIDP=m
++CONFIG_BT_6LOWPAN=m
++CONFIG_BT_HCIBTUSB=m
++CONFIG_BT_HCIUART=m
++CONFIG_BT_HCIBCM203X=m
++CONFIG_BT_HCIBPA10X=m
++CONFIG_BT_HCIBFUSB=m
++CONFIG_BT_HCIVHCI=m
++CONFIG_BT_MRVL=m
++CONFIG_BT_MRVL_SDIO=m
++CONFIG_BT_ATH3K=m
++CONFIG_BT_WILINK=m
++CONFIG_MAC80211=m
++CONFIG_MAC80211_MESH=y
++CONFIG_WIMAX=m
++CONFIG_RFKILL=m
++CONFIG_RFKILL_INPUT=y
++CONFIG_NET_9P=m
++CONFIG_NFC=m
++CONFIG_NFC_PN533=m
++CONFIG_DEVTMPFS=y
++CONFIG_DEVTMPFS_MOUNT=y
++CONFIG_DMA_CMA=y
++CONFIG_CMA_SIZE_MBYTES=5
++CONFIG_MTD=m
++CONFIG_MTD_BLOCK=m
++CONFIG_MTD_NAND=m
++CONFIG_MTD_UBI=m
++CONFIG_ZRAM=m
++CONFIG_ZRAM_LZ4_COMPRESS=y
++CONFIG_BLK_DEV_LOOP=y
++CONFIG_BLK_DEV_CRYPTOLOOP=m
++CONFIG_BLK_DEV_DRBD=m
++CONFIG_BLK_DEV_NBD=m
++CONFIG_BLK_DEV_RAM=y
++CONFIG_CDROM_PKTCDVD=m
++CONFIG_ATA_OVER_ETH=m
++CONFIG_EEPROM_AT24=m
++CONFIG_TI_ST=m
++CONFIG_SCSI=y
++# CONFIG_SCSI_PROC_FS is not set
++CONFIG_BLK_DEV_SD=y
++CONFIG_CHR_DEV_ST=m
++CONFIG_CHR_DEV_OSST=m
++CONFIG_BLK_DEV_SR=m
++CONFIG_CHR_DEV_SG=m
++CONFIG_SCSI_ISCSI_ATTRS=y
++CONFIG_ISCSI_TCP=m
++CONFIG_ISCSI_BOOT_SYSFS=m
++CONFIG_MD=y
++CONFIG_MD_LINEAR=m
++CONFIG_MD_RAID0=m
++CONFIG_BLK_DEV_DM=m
++CONFIG_DM_CRYPT=m
++CONFIG_DM_SNAPSHOT=m
++CONFIG_DM_THIN_PROVISIONING=m
++CONFIG_DM_MIRROR=m
++CONFIG_DM_LOG_USERSPACE=m
++CONFIG_DM_RAID=m
++CONFIG_DM_ZERO=m
++CONFIG_DM_DELAY=m
++CONFIG_NETDEVICES=y
++CONFIG_BONDING=m
++CONFIG_DUMMY=m
++CONFIG_IFB=m
++CONFIG_MACVLAN=m
++CONFIG_NETCONSOLE=m
++CONFIG_TUN=m
++CONFIG_VETH=m
++CONFIG_ENC28J60=m
++CONFIG_MDIO_BITBANG=m
++CONFIG_PPP=m
++CONFIG_PPP_BSDCOMP=m
++CONFIG_PPP_DEFLATE=m
++CONFIG_PPP_FILTER=y
++CONFIG_PPP_MPPE=m
++CONFIG_PPP_MULTILINK=y
++CONFIG_PPPOATM=m
++CONFIG_PPPOE=m
++CONFIG_PPPOL2TP=m
++CONFIG_PPP_ASYNC=m
++CONFIG_PPP_SYNC_TTY=m
++CONFIG_SLIP=m
++CONFIG_SLIP_COMPRESSED=y
++CONFIG_SLIP_SMART=y
++CONFIG_USB_CATC=m
++CONFIG_USB_KAWETH=m
++CONFIG_USB_PEGASUS=m
++CONFIG_USB_RTL8150=m
++CONFIG_USB_RTL8152=m
++CONFIG_USB_USBNET=y
++CONFIG_USB_NET_AX8817X=m
++CONFIG_USB_NET_AX88179_178A=m
++CONFIG_USB_NET_CDCETHER=m
++CONFIG_USB_NET_CDC_EEM=m
++CONFIG_USB_NET_CDC_NCM=m
++CONFIG_USB_NET_HUAWEI_CDC_NCM=m
++CONFIG_USB_NET_CDC_MBIM=m
++CONFIG_USB_NET_DM9601=m
++CONFIG_USB_NET_SR9700=m
++CONFIG_USB_NET_SR9800=m
++CONFIG_USB_NET_SMSC75XX=m
++CONFIG_USB_NET_SMSC95XX=y
++CONFIG_USB_NET_GL620A=m
++CONFIG_USB_NET_NET1080=m
++CONFIG_USB_NET_PLUSB=m
++CONFIG_USB_NET_MCS7830=m
++CONFIG_USB_NET_CDC_SUBSET=m
++CONFIG_USB_ALI_M5632=y
++CONFIG_USB_AN2720=y
++CONFIG_USB_EPSON2888=y
++CONFIG_USB_KC2190=y
++CONFIG_USB_NET_ZAURUS=m
++CONFIG_USB_NET_CX82310_ETH=m
++CONFIG_USB_NET_KALMIA=m
++CONFIG_USB_NET_QMI_WWAN=m
++CONFIG_USB_HSO=m
++CONFIG_USB_NET_INT51X1=m
++CONFIG_USB_IPHETH=m
++CONFIG_USB_SIERRA_NET=m
++CONFIG_USB_VL600=m
++CONFIG_LIBERTAS_THINFIRM=m
++CONFIG_LIBERTAS_THINFIRM_USB=m
++CONFIG_AT76C50X_USB=m
++CONFIG_USB_ZD1201=m
++CONFIG_USB_NET_RNDIS_WLAN=m
++CONFIG_RTL8187=m
++CONFIG_MAC80211_HWSIM=m
++CONFIG_ATH_CARDS=m
++CONFIG_ATH9K=m
++CONFIG_ATH9K_HTC=m
++CONFIG_CARL9170=m
++CONFIG_ATH6KL=m
++CONFIG_ATH6KL_USB=m
++CONFIG_AR5523=m
++CONFIG_B43=m
++# CONFIG_B43_PHY_N is not set
++CONFIG_B43LEGACY=m
++CONFIG_BRCMFMAC=m
++CONFIG_BRCMFMAC_USB=y
++CONFIG_HOSTAP=m
++CONFIG_LIBERTAS=m
++CONFIG_LIBERTAS_USB=m
++CONFIG_LIBERTAS_SDIO=m
++CONFIG_P54_COMMON=m
++CONFIG_P54_USB=m
++CONFIG_RT2X00=m
++CONFIG_RT2500USB=m
++CONFIG_RT73USB=m
++CONFIG_RT2800USB=m
++CONFIG_RT2800USB_RT3573=y
++CONFIG_RT2800USB_RT53XX=y
++CONFIG_RT2800USB_RT55XX=y
++CONFIG_RT2800USB_UNKNOWN=y
++CONFIG_WL_MEDIATEK=y
++CONFIG_MT7601U=m
++CONFIG_RTL8192CU=m
++CONFIG_ZD1211RW=m
++CONFIG_MWIFIEX=m
++CONFIG_MWIFIEX_SDIO=m
++CONFIG_WIMAX_I2400M_USB=m
++CONFIG_IEEE802154_AT86RF230=m
++CONFIG_IEEE802154_MRF24J40=m
++CONFIG_IEEE802154_CC2520=m
++CONFIG_INPUT_POLLDEV=m
++# CONFIG_INPUT_MOUSEDEV_PSAUX is not set
++CONFIG_INPUT_JOYDEV=m
++CONFIG_INPUT_EVDEV=m
++# CONFIG_KEYBOARD_ATKBD is not set
++CONFIG_KEYBOARD_GPIO=m
++# CONFIG_INPUT_MOUSE is not set
++CONFIG_INPUT_JOYSTICK=y
++CONFIG_JOYSTICK_IFORCE=m
++CONFIG_JOYSTICK_IFORCE_USB=y
++CONFIG_JOYSTICK_XPAD=m
++CONFIG_JOYSTICK_XPAD_FF=y
++CONFIG_JOYSTICK_RPISENSE=m
++CONFIG_INPUT_TOUCHSCREEN=y
++CONFIG_TOUCHSCREEN_ADS7846=m
++CONFIG_TOUCHSCREEN_EGALAX=m
++CONFIG_TOUCHSCREEN_FT6236=m
++CONFIG_TOUCHSCREEN_RPI_FT5406=m
++CONFIG_TOUCHSCREEN_USB_COMPOSITE=m
++CONFIG_TOUCHSCREEN_STMPE=m
++CONFIG_INPUT_MISC=y
++CONFIG_INPUT_AD714X=m
++CONFIG_INPUT_ATI_REMOTE2=m
++CONFIG_INPUT_KEYSPAN_REMOTE=m
++CONFIG_INPUT_POWERMATE=m
++CONFIG_INPUT_YEALINK=m
++CONFIG_INPUT_CM109=m
++CONFIG_INPUT_UINPUT=m
++CONFIG_INPUT_GPIO_ROTARY_ENCODER=m
++CONFIG_INPUT_ADXL34X=m
++CONFIG_INPUT_CMA3000=m
++CONFIG_SERIO=m
++CONFIG_SERIO_RAW=m
++CONFIG_GAMEPORT=m
++CONFIG_GAMEPORT_NS558=m
++CONFIG_GAMEPORT_L4=m
++CONFIG_BRCM_CHAR_DRIVERS=y
++CONFIG_BCM_VC_CMA=y
++CONFIG_BCM_VCIO=y
++CONFIG_BCM_VC_SM=y
++CONFIG_DEVPTS_MULTIPLE_INSTANCES=y
++# CONFIG_LEGACY_PTYS is not set
++# CONFIG_DEVKMEM is not set
++CONFIG_SERIAL_8250=y
++# CONFIG_SERIAL_8250_DEPRECATED_OPTIONS is not set
++CONFIG_SERIAL_8250_CONSOLE=y
++# CONFIG_SERIAL_8250_DMA is not set
++CONFIG_SERIAL_8250_NR_UARTS=1
++CONFIG_SERIAL_8250_RUNTIME_UARTS=0
++CONFIG_SERIAL_AMBA_PL011=y
++CONFIG_SERIAL_AMBA_PL011_CONSOLE=y
++CONFIG_SERIAL_OF_PLATFORM=y
++CONFIG_TTY_PRINTK=y
++CONFIG_HW_RANDOM=y
++CONFIG_RAW_DRIVER=y
++CONFIG_I2C=y
++CONFIG_I2C_CHARDEV=m
++CONFIG_I2C_BCM2708=m
++CONFIG_SPI=y
++CONFIG_SPI_BCM2835=m
++CONFIG_SPI_SPIDEV=y
++CONFIG_PPS=m
++CONFIG_PPS_CLIENT_LDISC=m
++CONFIG_PPS_CLIENT_GPIO=m
++CONFIG_GPIO_SYSFS=y
++CONFIG_GPIO_ARIZONA=m
++CONFIG_GPIO_STMPE=y
++CONFIG_W1=m
++CONFIG_W1_MASTER_DS2490=m
++CONFIG_W1_MASTER_DS2482=m
++CONFIG_W1_MASTER_DS1WM=m
++CONFIG_W1_MASTER_GPIO=m
++CONFIG_W1_SLAVE_THERM=m
++CONFIG_W1_SLAVE_SMEM=m
++CONFIG_W1_SLAVE_DS2408=m
++CONFIG_W1_SLAVE_DS2413=m
++CONFIG_W1_SLAVE_DS2406=m
++CONFIG_W1_SLAVE_DS2423=m
++CONFIG_W1_SLAVE_DS2431=m
++CONFIG_W1_SLAVE_DS2433=m
++CONFIG_W1_SLAVE_DS2760=m
++CONFIG_W1_SLAVE_DS2780=m
++CONFIG_W1_SLAVE_DS2781=m
++CONFIG_W1_SLAVE_DS28E04=m
++CONFIG_W1_SLAVE_BQ27000=m
++CONFIG_BATTERY_DS2760=m
++CONFIG_POWER_RESET=y
++CONFIG_POWER_RESET_GPIO=y
++CONFIG_HWMON=m
++CONFIG_SENSORS_SHT21=m
++CONFIG_SENSORS_SHTC1=m
++CONFIG_THERMAL=y
++CONFIG_THERMAL_BCM2835=y
++CONFIG_WATCHDOG=y
++CONFIG_BCM2835_WDT=m
++CONFIG_UCB1400_CORE=m
++CONFIG_MFD_STMPE=y
++CONFIG_STMPE_SPI=y
++CONFIG_MFD_ARIZONA_I2C=m
++CONFIG_MFD_ARIZONA_SPI=m
++CONFIG_MFD_WM5102=y
++CONFIG_MEDIA_SUPPORT=m
++CONFIG_MEDIA_CAMERA_SUPPORT=y
++CONFIG_MEDIA_ANALOG_TV_SUPPORT=y
++CONFIG_MEDIA_DIGITAL_TV_SUPPORT=y
++CONFIG_MEDIA_RADIO_SUPPORT=y
++CONFIG_MEDIA_RC_SUPPORT=y
++CONFIG_MEDIA_CONTROLLER=y
++CONFIG_LIRC=m
++CONFIG_RC_DEVICES=y
++CONFIG_RC_ATI_REMOTE=m
++CONFIG_IR_IMON=m
++CONFIG_IR_MCEUSB=m
++CONFIG_IR_REDRAT3=m
++CONFIG_IR_STREAMZAP=m
++CONFIG_IR_IGUANA=m
++CONFIG_IR_TTUSBIR=m
++CONFIG_RC_LOOPBACK=m
++CONFIG_IR_GPIO_CIR=m
++CONFIG_MEDIA_USB_SUPPORT=y
++CONFIG_USB_VIDEO_CLASS=m
++CONFIG_USB_M5602=m
++CONFIG_USB_STV06XX=m
++CONFIG_USB_GL860=m
++CONFIG_USB_GSPCA_BENQ=m
++CONFIG_USB_GSPCA_CONEX=m
++CONFIG_USB_GSPCA_CPIA1=m
++CONFIG_USB_GSPCA_DTCS033=m
++CONFIG_USB_GSPCA_ETOMS=m
++CONFIG_USB_GSPCA_FINEPIX=m
++CONFIG_USB_GSPCA_JEILINJ=m
++CONFIG_USB_GSPCA_JL2005BCD=m
++CONFIG_USB_GSPCA_KINECT=m
++CONFIG_USB_GSPCA_KONICA=m
++CONFIG_USB_GSPCA_MARS=m
++CONFIG_USB_GSPCA_MR97310A=m
++CONFIG_USB_GSPCA_NW80X=m
++CONFIG_USB_GSPCA_OV519=m
++CONFIG_USB_GSPCA_OV534=m
++CONFIG_USB_GSPCA_OV534_9=m
++CONFIG_USB_GSPCA_PAC207=m
++CONFIG_USB_GSPCA_PAC7302=m
++CONFIG_USB_GSPCA_PAC7311=m
++CONFIG_USB_GSPCA_SE401=m
++CONFIG_USB_GSPCA_SN9C2028=m
++CONFIG_USB_GSPCA_SN9C20X=m
++CONFIG_USB_GSPCA_SONIXB=m
++CONFIG_USB_GSPCA_SONIXJ=m
++CONFIG_USB_GSPCA_SPCA500=m
++CONFIG_USB_GSPCA_SPCA501=m
++CONFIG_USB_GSPCA_SPCA505=m
++CONFIG_USB_GSPCA_SPCA506=m
++CONFIG_USB_GSPCA_SPCA508=m
++CONFIG_USB_GSPCA_SPCA561=m
++CONFIG_USB_GSPCA_SPCA1528=m
++CONFIG_USB_GSPCA_SQ905=m
++CONFIG_USB_GSPCA_SQ905C=m
++CONFIG_USB_GSPCA_SQ930X=m
++CONFIG_USB_GSPCA_STK014=m
++CONFIG_USB_GSPCA_STK1135=m
++CONFIG_USB_GSPCA_STV0680=m
++CONFIG_USB_GSPCA_SUNPLUS=m
++CONFIG_USB_GSPCA_T613=m
++CONFIG_USB_GSPCA_TOPRO=m
++CONFIG_USB_GSPCA_TV8532=m
++CONFIG_USB_GSPCA_VC032X=m
++CONFIG_USB_GSPCA_VICAM=m
++CONFIG_USB_GSPCA_XIRLINK_CIT=m
++CONFIG_USB_GSPCA_ZC3XX=m
++CONFIG_USB_PWC=m
++CONFIG_VIDEO_CPIA2=m
++CONFIG_USB_ZR364XX=m
++CONFIG_USB_STKWEBCAM=m
++CONFIG_USB_S2255=m
++CONFIG_VIDEO_USBTV=m
++CONFIG_VIDEO_PVRUSB2=m
++CONFIG_VIDEO_HDPVR=m
++CONFIG_VIDEO_USBVISION=m
++CONFIG_VIDEO_STK1160_COMMON=m
++CONFIG_VIDEO_STK1160_AC97=y
++CONFIG_VIDEO_GO7007=m
++CONFIG_VIDEO_GO7007_USB=m
++CONFIG_VIDEO_GO7007_USB_S2250_BOARD=m
++CONFIG_VIDEO_AU0828=m
++CONFIG_VIDEO_AU0828_RC=y
++CONFIG_VIDEO_CX231XX=m
++CONFIG_VIDEO_CX231XX_ALSA=m
++CONFIG_VIDEO_CX231XX_DVB=m
++CONFIG_VIDEO_TM6000=m
++CONFIG_VIDEO_TM6000_ALSA=m
++CONFIG_VIDEO_TM6000_DVB=m
++CONFIG_DVB_USB=m
++CONFIG_DVB_USB_A800=m
++CONFIG_DVB_USB_DIBUSB_MB=m
++CONFIG_DVB_USB_DIBUSB_MB_FAULTY=y
++CONFIG_DVB_USB_DIBUSB_MC=m
++CONFIG_DVB_USB_DIB0700=m
++CONFIG_DVB_USB_UMT_010=m
++CONFIG_DVB_USB_CXUSB=m
++CONFIG_DVB_USB_M920X=m
++CONFIG_DVB_USB_DIGITV=m
++CONFIG_DVB_USB_VP7045=m
++CONFIG_DVB_USB_VP702X=m
++CONFIG_DVB_USB_GP8PSK=m
++CONFIG_DVB_USB_NOVA_T_USB2=m
++CONFIG_DVB_USB_TTUSB2=m
++CONFIG_DVB_USB_DTT200U=m
++CONFIG_DVB_USB_OPERA1=m
++CONFIG_DVB_USB_AF9005=m
++CONFIG_DVB_USB_AF9005_REMOTE=m
++CONFIG_DVB_USB_PCTV452E=m
++CONFIG_DVB_USB_DW2102=m
++CONFIG_DVB_USB_CINERGY_T2=m
++CONFIG_DVB_USB_DTV5100=m
++CONFIG_DVB_USB_FRIIO=m
++CONFIG_DVB_USB_AZ6027=m
++CONFIG_DVB_USB_TECHNISAT_USB2=m
++CONFIG_DVB_USB_V2=m
++CONFIG_DVB_USB_AF9015=m
++CONFIG_DVB_USB_AF9035=m
++CONFIG_DVB_USB_ANYSEE=m
++CONFIG_DVB_USB_AU6610=m
++CONFIG_DVB_USB_AZ6007=m
++CONFIG_DVB_USB_CE6230=m
++CONFIG_DVB_USB_EC168=m
++CONFIG_DVB_USB_GL861=m
++CONFIG_DVB_USB_LME2510=m
++CONFIG_DVB_USB_MXL111SF=m
++CONFIG_DVB_USB_RTL28XXU=m
++CONFIG_DVB_USB_DVBSKY=m
++CONFIG_SMS_USB_DRV=m
++CONFIG_DVB_B2C2_FLEXCOP_USB=m
++CONFIG_DVB_AS102=m
++CONFIG_VIDEO_EM28XX=m
++CONFIG_VIDEO_EM28XX_V4L2=m
++CONFIG_VIDEO_EM28XX_ALSA=m
++CONFIG_VIDEO_EM28XX_DVB=m
++CONFIG_V4L_PLATFORM_DRIVERS=y
++CONFIG_VIDEO_BCM2835=y
++CONFIG_VIDEO_BCM2835_MMAL=m
++CONFIG_RADIO_SI470X=y
++CONFIG_USB_SI470X=m
++CONFIG_I2C_SI470X=m
++CONFIG_RADIO_SI4713=m
++CONFIG_I2C_SI4713=m
++CONFIG_USB_MR800=m
++CONFIG_USB_DSBR=m
++CONFIG_RADIO_SHARK=m
++CONFIG_RADIO_SHARK2=m
++CONFIG_USB_KEENE=m
++CONFIG_USB_MA901=m
++CONFIG_RADIO_TEA5764=m
++CONFIG_RADIO_SAA7706H=m
++CONFIG_RADIO_TEF6862=m
++CONFIG_RADIO_WL1273=m
++CONFIG_RADIO_WL128X=m
++# CONFIG_MEDIA_SUBDRV_AUTOSELECT is not set
++CONFIG_VIDEO_UDA1342=m
++CONFIG_VIDEO_SONY_BTF_MPX=m
++CONFIG_VIDEO_TVP5150=m
++CONFIG_VIDEO_TW2804=m
++CONFIG_VIDEO_TW9903=m
++CONFIG_VIDEO_TW9906=m
++CONFIG_VIDEO_OV7640=m
++CONFIG_VIDEO_MT9V011=m
++CONFIG_FB=y
++CONFIG_FB_BCM2708=y
++CONFIG_FB_UDL=m
++CONFIG_FB_SSD1307=m
++CONFIG_FB_RPISENSE=m
++# CONFIG_BACKLIGHT_GENERIC is not set
++CONFIG_BACKLIGHT_GPIO=m
++CONFIG_FRAMEBUFFER_CONSOLE=y
++CONFIG_LOGO=y
++# CONFIG_LOGO_LINUX_MONO is not set
++# CONFIG_LOGO_LINUX_VGA16 is not set
++CONFIG_SOUND=y
++CONFIG_SND=m
++CONFIG_SND_SEQUENCER=m
++CONFIG_SND_SEQ_DUMMY=m
++CONFIG_SND_MIXER_OSS=m
++CONFIG_SND_PCM_OSS=m
++CONFIG_SND_SEQUENCER_OSS=y
++CONFIG_SND_HRTIMER=m
++CONFIG_SND_DUMMY=m
++CONFIG_SND_ALOOP=m
++CONFIG_SND_VIRMIDI=m
++CONFIG_SND_MTPAV=m
++CONFIG_SND_SERIAL_U16550=m
++CONFIG_SND_MPU401=m
++CONFIG_SND_BCM2835=m
++CONFIG_SND_USB_AUDIO=m
++CONFIG_SND_USB_UA101=m
++CONFIG_SND_USB_CAIAQ=m
++CONFIG_SND_USB_CAIAQ_INPUT=y
++CONFIG_SND_USB_6FIRE=m
++CONFIG_SND_SOC=m
++CONFIG_SND_BCM2835_SOC_I2S=m
++CONFIG_SND_BCM2708_SOC_HIFIBERRY_DAC=m
++CONFIG_SND_BCM2708_SOC_HIFIBERRY_DACPLUS=m
++CONFIG_SND_BCM2708_SOC_HIFIBERRY_DIGI=m
++CONFIG_SND_BCM2708_SOC_HIFIBERRY_AMP=m
++CONFIG_SND_BCM2708_SOC_RPI_DAC=m
++CONFIG_SND_BCM2708_SOC_RPI_PROTO=m
++CONFIG_SND_BCM2708_SOC_IQAUDIO_DAC=m
++CONFIG_SND_BCM2708_SOC_RASPIDAC3=m
++CONFIG_SND_SOC_ADAU1701=m
++CONFIG_SND_SOC_WM8804_I2C=m
++CONFIG_SND_SIMPLE_CARD=m
++CONFIG_SOUND_PRIME=m
++CONFIG_HIDRAW=y
++CONFIG_UHID=m
++CONFIG_HID_A4TECH=m
++CONFIG_HID_ACRUX=m
++CONFIG_HID_APPLE=m
++CONFIG_HID_BELKIN=m
++CONFIG_HID_CHERRY=m
++CONFIG_HID_CHICONY=m
++CONFIG_HID_CYPRESS=m
++CONFIG_HID_DRAGONRISE=m
++CONFIG_HID_EMS_FF=m
++CONFIG_HID_ELECOM=m
++CONFIG_HID_ELO=m
++CONFIG_HID_EZKEY=m
++CONFIG_HID_HOLTEK=m
++CONFIG_HID_KEYTOUCH=m
++CONFIG_HID_KYE=m
++CONFIG_HID_UCLOGIC=m
++CONFIG_HID_WALTOP=m
++CONFIG_HID_GYRATION=m
++CONFIG_HID_TWINHAN=m
++CONFIG_HID_KENSINGTON=m
++CONFIG_HID_LCPOWER=m
++CONFIG_HID_LOGITECH=m
++CONFIG_HID_MAGICMOUSE=m
++CONFIG_HID_MICROSOFT=m
++CONFIG_HID_MONTEREY=m
++CONFIG_HID_MULTITOUCH=m
++CONFIG_HID_NTRIG=m
++CONFIG_HID_ORTEK=m
++CONFIG_HID_PANTHERLORD=m
++CONFIG_HID_PETALYNX=m
++CONFIG_HID_PICOLCD=m
++CONFIG_HID_ROCCAT=m
++CONFIG_HID_SAMSUNG=m
++CONFIG_HID_SONY=m
++CONFIG_HID_SPEEDLINK=m
++CONFIG_HID_SUNPLUS=m
++CONFIG_HID_GREENASIA=m
++CONFIG_HID_SMARTJOYPLUS=m
++CONFIG_HID_TOPSEED=m
++CONFIG_HID_THINGM=m
++CONFIG_HID_THRUSTMASTER=m
++CONFIG_HID_WACOM=m
++CONFIG_HID_WIIMOTE=m
++CONFIG_HID_XINMO=m
++CONFIG_HID_ZEROPLUS=m
++CONFIG_HID_ZYDACRON=m
++CONFIG_HID_PID=y
++CONFIG_USB_HIDDEV=y
++CONFIG_USB=y
++CONFIG_USB_ANNOUNCE_NEW_DEVICES=y
++CONFIG_USB_MON=m
++CONFIG_USB_DWCOTG=y
++CONFIG_USB_PRINTER=m
++CONFIG_USB_STORAGE=y
++CONFIG_USB_STORAGE_REALTEK=m
++CONFIG_USB_STORAGE_DATAFAB=m
++CONFIG_USB_STORAGE_FREECOM=m
++CONFIG_USB_STORAGE_ISD200=m
++CONFIG_USB_STORAGE_USBAT=m
++CONFIG_USB_STORAGE_SDDR09=m
++CONFIG_USB_STORAGE_SDDR55=m
++CONFIG_USB_STORAGE_JUMPSHOT=m
++CONFIG_USB_STORAGE_ALAUDA=m
++CONFIG_USB_STORAGE_ONETOUCH=m
++CONFIG_USB_STORAGE_KARMA=m
++CONFIG_USB_STORAGE_CYPRESS_ATACB=m
++CONFIG_USB_STORAGE_ENE_UB6250=m
++CONFIG_USB_MDC800=m
++CONFIG_USB_MICROTEK=m
++CONFIG_USBIP_CORE=m
++CONFIG_USBIP_VHCI_HCD=m
++CONFIG_USBIP_HOST=m
++CONFIG_USB_DWC2=m
++CONFIG_USB_SERIAL=m
++CONFIG_USB_SERIAL_GENERIC=y
++CONFIG_USB_SERIAL_AIRCABLE=m
++CONFIG_USB_SERIAL_ARK3116=m
++CONFIG_USB_SERIAL_BELKIN=m
++CONFIG_USB_SERIAL_CH341=m
++CONFIG_USB_SERIAL_WHITEHEAT=m
++CONFIG_USB_SERIAL_DIGI_ACCELEPORT=m
++CONFIG_USB_SERIAL_CP210X=m
++CONFIG_USB_SERIAL_CYPRESS_M8=m
++CONFIG_USB_SERIAL_EMPEG=m
++CONFIG_USB_SERIAL_FTDI_SIO=m
++CONFIG_USB_SERIAL_VISOR=m
++CONFIG_USB_SERIAL_IPAQ=m
++CONFIG_USB_SERIAL_IR=m
++CONFIG_USB_SERIAL_EDGEPORT=m
++CONFIG_USB_SERIAL_EDGEPORT_TI=m
++CONFIG_USB_SERIAL_F81232=m
++CONFIG_USB_SERIAL_GARMIN=m
++CONFIG_USB_SERIAL_IPW=m
++CONFIG_USB_SERIAL_IUU=m
++CONFIG_USB_SERIAL_KEYSPAN_PDA=m
++CONFIG_USB_SERIAL_KEYSPAN=m
++CONFIG_USB_SERIAL_KLSI=m
++CONFIG_USB_SERIAL_KOBIL_SCT=m
++CONFIG_USB_SERIAL_MCT_U232=m
++CONFIG_USB_SERIAL_METRO=m
++CONFIG_USB_SERIAL_MOS7720=m
++CONFIG_USB_SERIAL_MOS7840=m
++CONFIG_USB_SERIAL_NAVMAN=m
++CONFIG_USB_SERIAL_PL2303=m
++CONFIG_USB_SERIAL_OTI6858=m
++CONFIG_USB_SERIAL_QCAUX=m
++CONFIG_USB_SERIAL_QUALCOMM=m
++CONFIG_USB_SERIAL_SPCP8X5=m
++CONFIG_USB_SERIAL_SAFE=m
++CONFIG_USB_SERIAL_SIERRAWIRELESS=m
++CONFIG_USB_SERIAL_SYMBOL=m
++CONFIG_USB_SERIAL_TI=m
++CONFIG_USB_SERIAL_CYBERJACK=m
++CONFIG_USB_SERIAL_XIRCOM=m
++CONFIG_USB_SERIAL_OPTION=m
++CONFIG_USB_SERIAL_OMNINET=m
++CONFIG_USB_SERIAL_OPTICON=m
++CONFIG_USB_SERIAL_XSENS_MT=m
++CONFIG_USB_SERIAL_WISHBONE=m
++CONFIG_USB_SERIAL_SSU100=m
++CONFIG_USB_SERIAL_QT2=m
++CONFIG_USB_SERIAL_DEBUG=m
++CONFIG_USB_EMI62=m
++CONFIG_USB_EMI26=m
++CONFIG_USB_ADUTUX=m
++CONFIG_USB_SEVSEG=m
++CONFIG_USB_RIO500=m
++CONFIG_USB_LEGOTOWER=m
++CONFIG_USB_LCD=m
++CONFIG_USB_LED=m
++CONFIG_USB_CYPRESS_CY7C63=m
++CONFIG_USB_CYTHERM=m
++CONFIG_USB_IDMOUSE=m
++CONFIG_USB_FTDI_ELAN=m
++CONFIG_USB_APPLEDISPLAY=m
++CONFIG_USB_LD=m
++CONFIG_USB_TRANCEVIBRATOR=m
++CONFIG_USB_IOWARRIOR=m
++CONFIG_USB_TEST=m
++CONFIG_USB_ISIGHTFW=m
++CONFIG_USB_YUREX=m
++CONFIG_USB_ATM=m
++CONFIG_USB_SPEEDTOUCH=m
++CONFIG_USB_CXACRU=m
++CONFIG_USB_UEAGLEATM=m
++CONFIG_USB_XUSBATM=m
++CONFIG_USB_GADGET=m
++CONFIG_USB_ZERO=m
++CONFIG_USB_AUDIO=m
++CONFIG_USB_ETH=m
++CONFIG_USB_GADGETFS=m
++CONFIG_USB_MASS_STORAGE=m
++CONFIG_USB_G_SERIAL=m
++CONFIG_USB_MIDI_GADGET=m
++CONFIG_USB_G_PRINTER=m
++CONFIG_USB_CDC_COMPOSITE=m
++CONFIG_USB_G_ACM_MS=m
++CONFIG_USB_G_MULTI=m
++CONFIG_USB_G_HID=m
++CONFIG_USB_G_WEBCAM=m
++CONFIG_MMC=y
++CONFIG_MMC_BLOCK_MINORS=32
++CONFIG_MMC_BCM2835=y
++CONFIG_MMC_BCM2835_DMA=y
++CONFIG_MMC_BCM2835_SDHOST=y
++CONFIG_MMC_SDHCI=y
++CONFIG_MMC_SDHCI_PLTFM=y
++CONFIG_MMC_SPI=m
++CONFIG_LEDS_CLASS=y
++CONFIG_LEDS_GPIO=y
++CONFIG_LEDS_TRIGGER_TIMER=y
++CONFIG_LEDS_TRIGGER_ONESHOT=y
++CONFIG_LEDS_TRIGGER_HEARTBEAT=y
++CONFIG_LEDS_TRIGGER_BACKLIGHT=y
++CONFIG_LEDS_TRIGGER_CPU=y
++CONFIG_LEDS_TRIGGER_GPIO=y
++CONFIG_LEDS_TRIGGER_DEFAULT_ON=y
++CONFIG_LEDS_TRIGGER_TRANSIENT=m
++CONFIG_LEDS_TRIGGER_CAMERA=m
++CONFIG_LEDS_TRIGGER_INPUT=y
++CONFIG_RTC_CLASS=y
++# CONFIG_RTC_HCTOSYS is not set
++CONFIG_RTC_DRV_DS1307=m
++CONFIG_RTC_DRV_DS1374=m
++CONFIG_RTC_DRV_DS1672=m
++CONFIG_RTC_DRV_DS3232=m
++CONFIG_RTC_DRV_MAX6900=m
++CONFIG_RTC_DRV_RS5C372=m
++CONFIG_RTC_DRV_ISL1208=m
++CONFIG_RTC_DRV_ISL12022=m
++CONFIG_RTC_DRV_ISL12057=m
++CONFIG_RTC_DRV_X1205=m
++CONFIG_RTC_DRV_PCF2127=m
++CONFIG_RTC_DRV_PCF8523=m
++CONFIG_RTC_DRV_PCF8563=m
++CONFIG_RTC_DRV_PCF8583=m
++CONFIG_RTC_DRV_M41T80=m
++CONFIG_RTC_DRV_BQ32K=m
++CONFIG_RTC_DRV_S35390A=m
++CONFIG_RTC_DRV_FM3130=m
++CONFIG_RTC_DRV_RX8581=m
++CONFIG_RTC_DRV_RX8025=m
++CONFIG_RTC_DRV_EM3027=m
++CONFIG_RTC_DRV_RV3029C2=m
++CONFIG_RTC_DRV_M41T93=m
++CONFIG_RTC_DRV_M41T94=m
++CONFIG_RTC_DRV_DS1305=m
++CONFIG_RTC_DRV_DS1390=m
++CONFIG_RTC_DRV_MAX6902=m
++CONFIG_RTC_DRV_R9701=m
++CONFIG_RTC_DRV_RS5C348=m
++CONFIG_RTC_DRV_DS3234=m
++CONFIG_RTC_DRV_PCF2123=m
++CONFIG_RTC_DRV_RX4581=m
++CONFIG_DMADEVICES=y
++CONFIG_DMA_BCM2835=y
++CONFIG_DMA_BCM2708=y
++CONFIG_UIO=m
++CONFIG_UIO_PDRV_GENIRQ=m
++CONFIG_STAGING=y
++CONFIG_PRISM2_USB=m
++CONFIG_R8712U=m
++CONFIG_R8188EU=m
++CONFIG_R8723AU=m
++CONFIG_VT6656=m
++CONFIG_SPEAKUP=m
++CONFIG_SPEAKUP_SYNTH_SOFT=m
++CONFIG_STAGING_MEDIA=y
++CONFIG_LIRC_STAGING=y
++CONFIG_LIRC_IMON=m
++CONFIG_LIRC_RPI=m
++CONFIG_LIRC_SASEM=m
++CONFIG_LIRC_SERIAL=m
++CONFIG_FB_TFT=m
++CONFIG_FB_TFT_AGM1264K_FL=m
++CONFIG_FB_TFT_BD663474=m
++CONFIG_FB_TFT_HX8340BN=m
++CONFIG_FB_TFT_HX8347D=m
++CONFIG_FB_TFT_HX8353D=m
++CONFIG_FB_TFT_ILI9163=m
++CONFIG_FB_TFT_ILI9320=m
++CONFIG_FB_TFT_ILI9325=m
++CONFIG_FB_TFT_ILI9340=m
++CONFIG_FB_TFT_ILI9341=m
++CONFIG_FB_TFT_ILI9481=m
++CONFIG_FB_TFT_ILI9486=m
++CONFIG_FB_TFT_PCD8544=m
++CONFIG_FB_TFT_RA8875=m
++CONFIG_FB_TFT_S6D02A1=m
++CONFIG_FB_TFT_S6D1121=m
++CONFIG_FB_TFT_SSD1289=m
++CONFIG_FB_TFT_SSD1306=m
++CONFIG_FB_TFT_SSD1331=m
++CONFIG_FB_TFT_SSD1351=m
++CONFIG_FB_TFT_ST7735R=m
++CONFIG_FB_TFT_TINYLCD=m
++CONFIG_FB_TFT_TLS8204=m
++CONFIG_FB_TFT_UC1701=m
++CONFIG_FB_TFT_UPD161704=m
++CONFIG_FB_TFT_WATTEROTT=m
++CONFIG_FB_FLEX=m
++CONFIG_FB_TFT_FBTFT_DEVICE=m
++CONFIG_MAILBOX=y
++CONFIG_BCM2835_MBOX=y
++# CONFIG_IOMMU_SUPPORT is not set
++CONFIG_EXTCON=m
++CONFIG_EXTCON_ARIZONA=m
++CONFIG_IIO=m
++CONFIG_IIO_BUFFER=y
++CONFIG_IIO_BUFFER_CB=m
++CONFIG_IIO_KFIFO_BUF=m
++CONFIG_MCP320X=m
++CONFIG_DHT11=m
++CONFIG_PWM_BCM2835=m
++CONFIG_RASPBERRYPI_FIRMWARE=y
++CONFIG_EXT4_FS=y
++CONFIG_EXT4_FS_POSIX_ACL=y
++CONFIG_EXT4_FS_SECURITY=y
++CONFIG_REISERFS_FS=m
++CONFIG_REISERFS_FS_XATTR=y
++CONFIG_REISERFS_FS_POSIX_ACL=y
++CONFIG_REISERFS_FS_SECURITY=y
++CONFIG_JFS_FS=m
++CONFIG_JFS_POSIX_ACL=y
++CONFIG_JFS_SECURITY=y
++CONFIG_JFS_STATISTICS=y
++CONFIG_XFS_FS=m
++CONFIG_XFS_QUOTA=y
++CONFIG_XFS_POSIX_ACL=y
++CONFIG_XFS_RT=y
++CONFIG_GFS2_FS=m
++CONFIG_OCFS2_FS=m
++CONFIG_BTRFS_FS=m
++CONFIG_BTRFS_FS_POSIX_ACL=y
++CONFIG_NILFS2_FS=m
++CONFIG_F2FS_FS=y
++CONFIG_FANOTIFY=y
++CONFIG_QFMT_V1=m
++CONFIG_QFMT_V2=m
++CONFIG_AUTOFS4_FS=y
++CONFIG_FUSE_FS=m
++CONFIG_CUSE=m
++CONFIG_OVERLAY_FS=m
++CONFIG_FSCACHE=y
++CONFIG_FSCACHE_STATS=y
++CONFIG_FSCACHE_HISTOGRAM=y
++CONFIG_CACHEFILES=y
++CONFIG_ISO9660_FS=m
++CONFIG_JOLIET=y
++CONFIG_ZISOFS=y
++CONFIG_UDF_FS=m
++CONFIG_MSDOS_FS=y
++CONFIG_VFAT_FS=y
++CONFIG_FAT_DEFAULT_IOCHARSET="ascii"
++CONFIG_NTFS_FS=m
++CONFIG_NTFS_RW=y
++CONFIG_TMPFS=y
++CONFIG_TMPFS_POSIX_ACL=y
++CONFIG_CONFIGFS_FS=y
++CONFIG_ECRYPT_FS=m
++CONFIG_HFS_FS=m
++CONFIG_HFSPLUS_FS=m
++CONFIG_JFFS2_FS=m
++CONFIG_JFFS2_SUMMARY=y
++CONFIG_UBIFS_FS=m
++CONFIG_SQUASHFS=m
++CONFIG_SQUASHFS_XATTR=y
++CONFIG_SQUASHFS_LZO=y
++CONFIG_SQUASHFS_XZ=y
++CONFIG_NFS_FS=y
++CONFIG_NFS_V3_ACL=y
++CONFIG_NFS_V4=y
++CONFIG_NFS_SWAP=y
++CONFIG_ROOT_NFS=y
++CONFIG_NFS_FSCACHE=y
++CONFIG_NFSD=m
++CONFIG_NFSD_V3_ACL=y
++CONFIG_NFSD_V4=y
++CONFIG_CIFS=m
++CONFIG_CIFS_WEAK_PW_HASH=y
++CONFIG_CIFS_UPCALL=y
++CONFIG_CIFS_XATTR=y
++CONFIG_CIFS_POSIX=y
++CONFIG_CIFS_ACL=y
++CONFIG_CIFS_DFS_UPCALL=y
++CONFIG_CIFS_SMB2=y
++CONFIG_CIFS_FSCACHE=y
++CONFIG_9P_FS=m
++CONFIG_9P_FS_POSIX_ACL=y
++CONFIG_NLS_DEFAULT="utf8"
++CONFIG_NLS_CODEPAGE_437=y
++CONFIG_NLS_CODEPAGE_737=m
++CONFIG_NLS_CODEPAGE_775=m
++CONFIG_NLS_CODEPAGE_850=m
++CONFIG_NLS_CODEPAGE_852=m
++CONFIG_NLS_CODEPAGE_855=m
++CONFIG_NLS_CODEPAGE_857=m
++CONFIG_NLS_CODEPAGE_860=m
++CONFIG_NLS_CODEPAGE_861=m
++CONFIG_NLS_CODEPAGE_862=m
++CONFIG_NLS_CODEPAGE_863=m
++CONFIG_NLS_CODEPAGE_864=m
++CONFIG_NLS_CODEPAGE_865=m
++CONFIG_NLS_CODEPAGE_866=m
++CONFIG_NLS_CODEPAGE_869=m
++CONFIG_NLS_CODEPAGE_936=m
++CONFIG_NLS_CODEPAGE_950=m
++CONFIG_NLS_CODEPAGE_932=m
++CONFIG_NLS_CODEPAGE_949=m
++CONFIG_NLS_CODEPAGE_874=m
++CONFIG_NLS_ISO8859_8=m
++CONFIG_NLS_CODEPAGE_1250=m
++CONFIG_NLS_CODEPAGE_1251=m
++CONFIG_NLS_ASCII=y
++CONFIG_NLS_ISO8859_1=m
++CONFIG_NLS_ISO8859_2=m
++CONFIG_NLS_ISO8859_3=m
++CONFIG_NLS_ISO8859_4=m
++CONFIG_NLS_ISO8859_5=m
++CONFIG_NLS_ISO8859_6=m
++CONFIG_NLS_ISO8859_7=m
++CONFIG_NLS_ISO8859_9=m
++CONFIG_NLS_ISO8859_13=m
++CONFIG_NLS_ISO8859_14=m
++CONFIG_NLS_ISO8859_15=m
++CONFIG_NLS_KOI8_R=m
++CONFIG_NLS_KOI8_U=m
++CONFIG_DLM=m
++CONFIG_PRINTK_TIME=y
++CONFIG_BOOT_PRINTK_DELAY=y
++CONFIG_DEBUG_MEMORY_INIT=y
++CONFIG_DETECT_HUNG_TASK=y
++CONFIG_TIMER_STATS=y
++CONFIG_LATENCYTOP=y
++CONFIG_IRQSOFF_TRACER=y
++CONFIG_SCHED_TRACER=y
++CONFIG_STACK_TRACER=y
++CONFIG_BLK_DEV_IO_TRACE=y
++# CONFIG_KPROBE_EVENT is not set
++CONFIG_FUNCTION_PROFILER=y
++CONFIG_KGDB=y
++CONFIG_KGDB_KDB=y
++CONFIG_KDB_KEYBOARD=y
++CONFIG_CRYPTO_USER=m
++CONFIG_CRYPTO_CRYPTD=m
++CONFIG_CRYPTO_CBC=y
++CONFIG_CRYPTO_CTS=m
++CONFIG_CRYPTO_XTS=m
++CONFIG_CRYPTO_XCBC=m
++CONFIG_CRYPTO_SHA512=m
++CONFIG_CRYPTO_TGR192=m
++CONFIG_CRYPTO_WP512=m
++CONFIG_CRYPTO_CAST5=m
++CONFIG_CRYPTO_DES=y
++CONFIG_CRYPTO_USER_API_SKCIPHER=m
++# CONFIG_CRYPTO_HW is not set
++CONFIG_ARM_CRYPTO=y
++CONFIG_CRYPTO_SHA1_ARM=m
++CONFIG_CRYPTO_AES_ARM=m
++CONFIG_CRC_ITU_T=y
++CONFIG_LIBCRC32C=y
diff --git a/target/linux/brcm2708/patches-4.4/0078-bcm2835-bcm2835_defconfig.patch b/target/linux/brcm2708/patches-4.4/0078-bcm2835-bcm2835_defconfig.patch
new file mode 100644
index 0000000..781be5b
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0078-bcm2835-bcm2835_defconfig.patch
@@ -0,0 +1,1426 @@
+From 8c70059859aa757f8b1ea04bc4b27e62bb038059 Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?Noralf=20Tr=C3=B8nnes?= <noralf at tronnes.org>
+Date: Wed, 29 Apr 2015 17:24:02 +0200
+Subject: [PATCH 078/127] bcm2835: bcm2835_defconfig
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+Some options in bcm2835_defconfig are now the default and
+some have changed. Update to keep functionality.
+
+No longer available: SCSI_MULTI_LUN and RESOURCE_COUNTERS.
+
+Signed-off-by: Noralf Trønnes <noralf at tronnes.org>
+
+bcm2835: bcm2835_defconfig enable MMC_BCM2835
+
+Enable the downstream bcm2835-mmc driver and DMA support.
+
+Signed-off-by: Noralf Trønnes <noralf at tronnes.org>
+
+bcm2835: bcm2835_defconfig enable BCM2708_MBOX
+
+Enable the mailbox driver.
+
+Signed-off-by: Noralf Trønnes <noralf at tronnes.org>
+
+bcm2835: bcm2835_defconfig use FB_BCM2708
+
+Enable the bcm2708 framebuffer driver.
+Disable the simple framebuffer driver, which matches the
+device handed over by u-boot.
+
+Signed-off-by: Noralf Trønnes <noralf at tronnes.org>
+
+bcm2835: Merge bcm2835_defconfig with bcmrpi_defconfig
+
+These commands were used to make this commit:
+
+./scripts/diffconfig -m arch/arm/configs/bcm2835_defconfig arch/arm/configs/bcmrpi_defconfig > merge.cfg
+
+cat << EOF > filter
+CONFIG_ARCH_BCM2708
+CONFIG_BCM2708_DT
+CONFIG_ARM_PATCH_PHYS_VIRT
+CONFIG_PHYS_OFFSET
+CONFIG_CMDLINE
+CONFIG_BCM2708_WDT
+CONFIG_HW_RANDOM_BCM2708
+CONFIG_I2C_BCM2708
+CONFIG_SPI_BCM2708
+CONFIG_SND_BCM2708_SOC_I2S
+CONFIG_USB_DWCOTG
+CONFIG_LIRC_RPI
+EOF
+
+grep -F -v -f filter merge.cfg > filtered.cfg
+
+cat << EOF > added.cfg
+CONFIG_WATCHDOG=y
+CONFIG_BCM2835_WDT=y
+CONFIG_MISC_FILESYSTEMS=y
+CONFIG_SND_BCM2835_SOC_I2S=m
+EOF
+
+ARCH=arm scripts/kconfig/merge_config.sh arch/arm/configs/bcm2835_defconfig filtered.cfg added.cfg
+ARCH=arm make oldconfig
+
+ARCH=arm make savedefconfig
+cp defconfig arch/arm/configs/bcm2835_defconfig
+
+rm merge.cfg filter filtered.cfg added.cfg defconfig
+
+ARCH=arm make bcm2835_defconfig
+ARCH=arm make oldconfig
+
+Signed-off-by: Noralf Trønnes <noralf at tronnes.org>
+
+configs: Incorporate v4.1 dependency changes
+
+Commit 78e9b7de78bb53e8bc7f4c4a60ebacb250c0c190 added a
+dependency on TI_ST instead of selecting it, disabling:
+CONFIG_BT_WILINK=m
+CONFIG_RADIO_WL128X=m
+
+Commit 652ccae5cc4e1305fb0a4619947f9ee89d8c7f5a added a
+dependency on ARM_CRYPTO, disabling:
+CONFIG_CRYPTO_SHA*_ARM*=m
+CONFIG_CRYPTO_AES_ARM*=m
+
+Signed-off-by: Noralf Trønnes <noralf at tronnes.org>
+
+Conflicts:
+	arch/arm/configs/bcm2709_defconfig
+
+bcm2835: Sync bcm2835_defconfig with bcmrpi_defconfig
+
+These commands were used to make this commit:
+
+: Get changed and new config values from a merge
+./scripts/diffconfig -m arch/arm/configs/bcm2835_defconfig arch/arm/configs/bcmrpi_defconfig > merge.cfg
+
+: Remove these options
+cat << EOF > filter
+CONFIG_ARCH_BCM2708
+CONFIG_BCM2708_DT
+CONFIG_ARM_PATCH_PHYS_VIRT
+CONFIG_PHYS_OFFSET
+CONFIG_CMDLINE
+CONFIG_BCM2708_WDT
+CONFIG_HW_RANDOM_BCM2708
+CONFIG_SPI_BCM2708
+EOF
+
+: Apply filter
+grep -F -v -f filter merge.cfg > filtered.cfg
+
+: Add these options
+: watchdog contains the restart/poweroff code.
+cat << EOF > added.cfg
+CONFIG_WATCHDOG=y
+CONFIG_BCM2835_WDT=y
+CONFIG_MISC_FILESYSTEMS=y
+CONFIG_I2C_BCM2835=m
+CONFIG_SND_BCM2835_SOC_I2S=m
+EOF
+
+: Create new config
+ARCH=arm scripts/kconfig/merge_config.sh arch/arm/configs/bcm2835_defconfig filtered.cfg added.cfg
+: Verify
+ARCH=arm make oldconfig
+
+: Update bcm2835_defconfig
+ARCH=arm make savedefconfig
+cp defconfig arch/arm/configs/bcm2835_defconfig
+
+: Clean up
+rm merge.cfg filter filtered.cfg added.cfg defconfig
+
+Signed-off-by: Noralf Trønnes <noralf at tronnes.org>
+---
+ arch/arm/configs/bcm2835_defconfig | 1166 +++++++++++++++++++++++++++++++++++-
+ 1 file changed, 1140 insertions(+), 26 deletions(-)
+
+--- a/arch/arm/configs/bcm2835_defconfig
++++ b/arch/arm/configs/bcm2835_defconfig
+@@ -1,105 +1,1103 @@
+ # CONFIG_LOCALVERSION_AUTO is not set
+ CONFIG_SYSVIPC=y
++CONFIG_POSIX_MQUEUE=y
+ CONFIG_FHANDLE=y
+ CONFIG_NO_HZ=y
+ CONFIG_HIGH_RES_TIMERS=y
+ CONFIG_BSD_PROCESS_ACCT=y
+ CONFIG_BSD_PROCESS_ACCT_V3=y
++CONFIG_TASKSTATS=y
++CONFIG_TASK_DELAY_ACCT=y
++CONFIG_TASK_XACCT=y
++CONFIG_TASK_IO_ACCOUNTING=y
++CONFIG_IKCONFIG=m
++CONFIG_IKCONFIG_PROC=y
+ CONFIG_LOG_BUF_SHIFT=18
+ CONFIG_CGROUP_FREEZER=y
+ CONFIG_CGROUP_DEVICE=y
+ CONFIG_CPUSETS=y
+ CONFIG_CGROUP_CPUACCT=y
+-CONFIG_RESOURCE_COUNTERS=y
++CONFIG_MEMCG=y
+ CONFIG_CGROUP_PERF=y
+ CONFIG_CFS_BANDWIDTH=y
+ CONFIG_RT_GROUP_SCHED=y
++CONFIG_BLK_CGROUP=y
+ CONFIG_NAMESPACES=y
+ CONFIG_SCHED_AUTOGROUP=y
+-CONFIG_RELAY=y
+ CONFIG_BLK_DEV_INITRD=y
+-CONFIG_RD_BZIP2=y
+-CONFIG_RD_LZMA=y
+-CONFIG_RD_XZ=y
+-CONFIG_RD_LZO=y
+ CONFIG_CC_OPTIMIZE_FOR_SIZE=y
+-CONFIG_KALLSYMS_ALL=y
+ CONFIG_EMBEDDED=y
+ # CONFIG_COMPAT_BRK is not set
+ CONFIG_PROFILING=y
+-CONFIG_OPROFILE=y
++CONFIG_OPROFILE=m
++CONFIG_KPROBES=y
+ CONFIG_JUMP_LABEL=y
++CONFIG_CC_STACKPROTECTOR_REGULAR=y
++CONFIG_MODULES=y
++CONFIG_MODULE_UNLOAD=y
++CONFIG_MODVERSIONS=y
++CONFIG_MODULE_SRCVERSION_ALL=y
++CONFIG_BLK_DEV_THROTTLING=y
++CONFIG_PARTITION_ADVANCED=y
++CONFIG_MAC_PARTITION=y
++CONFIG_CFQ_GROUP_IOSCHED=y
+ CONFIG_ARCH_MULTI_V6=y
+ # CONFIG_ARCH_MULTI_V7 is not set
+ CONFIG_ARCH_BCM=y
+ CONFIG_ARCH_BCM2835=y
+-CONFIG_PREEMPT_VOLUNTARY=y
++CONFIG_PREEMPT=y
+ CONFIG_AEABI=y
++CONFIG_OABI_COMPAT=y
+ CONFIG_KSM=y
+ CONFIG_CLEANCACHE=y
++CONFIG_FRONTSWAP=y
++CONFIG_CMA=y
++CONFIG_ZSMALLOC=m
++CONFIG_PGTABLE_MAPPING=y
++CONFIG_UACCESS_WITH_MEMCPY=y
+ CONFIG_SECCOMP=y
+-CONFIG_CC_STACKPROTECTOR=y
++CONFIG_ZBOOT_ROM_TEXT=0x0
++CONFIG_ZBOOT_ROM_BSS=0x0
+ CONFIG_KEXEC=y
+ CONFIG_CRASH_DUMP=y
++CONFIG_CPU_FREQ=y
++CONFIG_CPU_FREQ_STAT=m
++CONFIG_CPU_FREQ_STAT_DETAILS=y
++CONFIG_CPU_FREQ_DEFAULT_GOV_POWERSAVE=y
++CONFIG_CPU_FREQ_GOV_PERFORMANCE=y
++CONFIG_CPU_FREQ_GOV_USERSPACE=y
++CONFIG_CPU_FREQ_GOV_ONDEMAND=y
++CONFIG_CPU_FREQ_GOV_CONSERVATIVE=y
+ CONFIG_VFP=y
+ # CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
++CONFIG_BINFMT_MISC=m
+ # CONFIG_SUSPEND is not set
+ CONFIG_NET=y
+ CONFIG_PACKET=y
+ CONFIG_UNIX=y
++CONFIG_XFRM_USER=y
++CONFIG_NET_KEY=m
+ CONFIG_INET=y
++CONFIG_IP_MULTICAST=y
++CONFIG_IP_ADVANCED_ROUTER=y
++CONFIG_IP_MULTIPLE_TABLES=y
++CONFIG_IP_ROUTE_MULTIPATH=y
++CONFIG_IP_ROUTE_VERBOSE=y
++CONFIG_IP_PNP=y
++CONFIG_IP_PNP_DHCP=y
++CONFIG_IP_PNP_RARP=y
++CONFIG_NET_IPIP=m
++CONFIG_NET_IPGRE_DEMUX=m
++CONFIG_NET_IPGRE=m
++CONFIG_IP_MROUTE=y
++CONFIG_IP_MROUTE_MULTIPLE_TABLES=y
++CONFIG_IP_PIMSM_V1=y
++CONFIG_IP_PIMSM_V2=y
++CONFIG_SYN_COOKIES=y
++CONFIG_INET_AH=m
++CONFIG_INET_ESP=m
++CONFIG_INET_IPCOMP=m
++CONFIG_INET_XFRM_MODE_TRANSPORT=m
++CONFIG_INET_XFRM_MODE_TUNNEL=m
++CONFIG_INET_XFRM_MODE_BEET=m
++CONFIG_INET_LRO=m
++CONFIG_INET_DIAG=m
++CONFIG_INET6_AH=m
++CONFIG_INET6_ESP=m
++CONFIG_INET6_IPCOMP=m
++CONFIG_IPV6_TUNNEL=m
++CONFIG_IPV6_MULTIPLE_TABLES=y
++CONFIG_IPV6_MROUTE=y
++CONFIG_IPV6_MROUTE_MULTIPLE_TABLES=y
++CONFIG_IPV6_PIMSM_V2=y
+ CONFIG_NETWORK_SECMARK=y
+ CONFIG_NETFILTER=y
+-CONFIG_CFG80211=y
+-CONFIG_MAC80211=y
++CONFIG_NF_CONNTRACK=m
++CONFIG_NF_CONNTRACK_ZONES=y
++CONFIG_NF_CONNTRACK_EVENTS=y
++CONFIG_NF_CONNTRACK_TIMESTAMP=y
++CONFIG_NF_CT_PROTO_DCCP=m
++CONFIG_NF_CT_PROTO_UDPLITE=m
++CONFIG_NF_CONNTRACK_AMANDA=m
++CONFIG_NF_CONNTRACK_FTP=m
++CONFIG_NF_CONNTRACK_H323=m
++CONFIG_NF_CONNTRACK_IRC=m
++CONFIG_NF_CONNTRACK_NETBIOS_NS=m
++CONFIG_NF_CONNTRACK_SNMP=m
++CONFIG_NF_CONNTRACK_PPTP=m
++CONFIG_NF_CONNTRACK_SANE=m
++CONFIG_NF_CONNTRACK_SIP=m
++CONFIG_NF_CONNTRACK_TFTP=m
++CONFIG_NF_CT_NETLINK=m
++CONFIG_NETFILTER_XT_SET=m
++CONFIG_NETFILTER_XT_TARGET_CHECKSUM=m
++CONFIG_NETFILTER_XT_TARGET_CLASSIFY=m
++CONFIG_NETFILTER_XT_TARGET_CONNMARK=m
++CONFIG_NETFILTER_XT_TARGET_DSCP=m
++CONFIG_NETFILTER_XT_TARGET_HMARK=m
++CONFIG_NETFILTER_XT_TARGET_IDLETIMER=m
++CONFIG_NETFILTER_XT_TARGET_LED=m
++CONFIG_NETFILTER_XT_TARGET_LOG=m
++CONFIG_NETFILTER_XT_TARGET_MARK=m
++CONFIG_NETFILTER_XT_TARGET_NFLOG=m
++CONFIG_NETFILTER_XT_TARGET_NFQUEUE=m
++CONFIG_NETFILTER_XT_TARGET_NOTRACK=m
++CONFIG_NETFILTER_XT_TARGET_TEE=m
++CONFIG_NETFILTER_XT_TARGET_TPROXY=m
++CONFIG_NETFILTER_XT_TARGET_TRACE=m
++CONFIG_NETFILTER_XT_TARGET_TCPMSS=m
++CONFIG_NETFILTER_XT_TARGET_TCPOPTSTRIP=m
++CONFIG_NETFILTER_XT_MATCH_ADDRTYPE=m
++CONFIG_NETFILTER_XT_MATCH_BPF=m
++CONFIG_NETFILTER_XT_MATCH_CLUSTER=m
++CONFIG_NETFILTER_XT_MATCH_COMMENT=m
++CONFIG_NETFILTER_XT_MATCH_CONNBYTES=m
++CONFIG_NETFILTER_XT_MATCH_CONNLABEL=m
++CONFIG_NETFILTER_XT_MATCH_CONNLIMIT=m
++CONFIG_NETFILTER_XT_MATCH_CONNMARK=m
++CONFIG_NETFILTER_XT_MATCH_CONNTRACK=m
++CONFIG_NETFILTER_XT_MATCH_CPU=m
++CONFIG_NETFILTER_XT_MATCH_DCCP=m
++CONFIG_NETFILTER_XT_MATCH_DEVGROUP=m
++CONFIG_NETFILTER_XT_MATCH_DSCP=m
++CONFIG_NETFILTER_XT_MATCH_ESP=m
++CONFIG_NETFILTER_XT_MATCH_HASHLIMIT=m
++CONFIG_NETFILTER_XT_MATCH_HELPER=m
++CONFIG_NETFILTER_XT_MATCH_IPRANGE=m
++CONFIG_NETFILTER_XT_MATCH_IPVS=m
++CONFIG_NETFILTER_XT_MATCH_LENGTH=m
++CONFIG_NETFILTER_XT_MATCH_LIMIT=m
++CONFIG_NETFILTER_XT_MATCH_MAC=m
++CONFIG_NETFILTER_XT_MATCH_MARK=m
++CONFIG_NETFILTER_XT_MATCH_MULTIPORT=m
++CONFIG_NETFILTER_XT_MATCH_NFACCT=m
++CONFIG_NETFILTER_XT_MATCH_OSF=m
++CONFIG_NETFILTER_XT_MATCH_OWNER=m
++CONFIG_NETFILTER_XT_MATCH_POLICY=m
++CONFIG_NETFILTER_XT_MATCH_PHYSDEV=m
++CONFIG_NETFILTER_XT_MATCH_PKTTYPE=m
++CONFIG_NETFILTER_XT_MATCH_QUOTA=m
++CONFIG_NETFILTER_XT_MATCH_RATEEST=m
++CONFIG_NETFILTER_XT_MATCH_REALM=m
++CONFIG_NETFILTER_XT_MATCH_RECENT=m
++CONFIG_NETFILTER_XT_MATCH_SOCKET=m
++CONFIG_NETFILTER_XT_MATCH_STATE=m
++CONFIG_NETFILTER_XT_MATCH_STATISTIC=m
++CONFIG_NETFILTER_XT_MATCH_STRING=m
++CONFIG_NETFILTER_XT_MATCH_TCPMSS=m
++CONFIG_NETFILTER_XT_MATCH_TIME=m
++CONFIG_NETFILTER_XT_MATCH_U32=m
++CONFIG_IP_SET=m
++CONFIG_IP_SET_BITMAP_IP=m
++CONFIG_IP_SET_BITMAP_IPMAC=m
++CONFIG_IP_SET_BITMAP_PORT=m
++CONFIG_IP_SET_HASH_IP=m
++CONFIG_IP_SET_HASH_IPPORT=m
++CONFIG_IP_SET_HASH_IPPORTIP=m
++CONFIG_IP_SET_HASH_IPPORTNET=m
++CONFIG_IP_SET_HASH_NET=m
++CONFIG_IP_SET_HASH_NETPORT=m
++CONFIG_IP_SET_HASH_NETIFACE=m
++CONFIG_IP_SET_LIST_SET=m
++CONFIG_IP_VS=m
++CONFIG_IP_VS_PROTO_TCP=y
++CONFIG_IP_VS_PROTO_UDP=y
++CONFIG_IP_VS_PROTO_ESP=y
++CONFIG_IP_VS_PROTO_AH=y
++CONFIG_IP_VS_PROTO_SCTP=y
++CONFIG_IP_VS_RR=m
++CONFIG_IP_VS_WRR=m
++CONFIG_IP_VS_LC=m
++CONFIG_IP_VS_WLC=m
++CONFIG_IP_VS_LBLC=m
++CONFIG_IP_VS_LBLCR=m
++CONFIG_IP_VS_DH=m
++CONFIG_IP_VS_SH=m
++CONFIG_IP_VS_SED=m
++CONFIG_IP_VS_NQ=m
++CONFIG_IP_VS_FTP=m
++CONFIG_IP_VS_PE_SIP=m
++CONFIG_NF_CONNTRACK_IPV4=m
++CONFIG_IP_NF_IPTABLES=m
++CONFIG_IP_NF_MATCH_AH=m
++CONFIG_IP_NF_MATCH_ECN=m
++CONFIG_IP_NF_MATCH_TTL=m
++CONFIG_IP_NF_FILTER=m
++CONFIG_IP_NF_TARGET_REJECT=m
++CONFIG_IP_NF_NAT=m
++CONFIG_IP_NF_TARGET_MASQUERADE=m
++CONFIG_IP_NF_TARGET_NETMAP=m
++CONFIG_IP_NF_TARGET_REDIRECT=m
++CONFIG_IP_NF_MANGLE=m
++CONFIG_IP_NF_TARGET_CLUSTERIP=m
++CONFIG_IP_NF_TARGET_ECN=m
++CONFIG_IP_NF_TARGET_TTL=m
++CONFIG_IP_NF_RAW=m
++CONFIG_IP_NF_ARPTABLES=m
++CONFIG_IP_NF_ARPFILTER=m
++CONFIG_IP_NF_ARP_MANGLE=m
++CONFIG_NF_CONNTRACK_IPV6=m
++CONFIG_IP6_NF_IPTABLES=m
++CONFIG_IP6_NF_MATCH_AH=m
++CONFIG_IP6_NF_MATCH_EUI64=m
++CONFIG_IP6_NF_MATCH_FRAG=m
++CONFIG_IP6_NF_MATCH_OPTS=m
++CONFIG_IP6_NF_MATCH_HL=m
++CONFIG_IP6_NF_MATCH_IPV6HEADER=m
++CONFIG_IP6_NF_MATCH_MH=m
++CONFIG_IP6_NF_MATCH_RT=m
++CONFIG_IP6_NF_TARGET_HL=m
++CONFIG_IP6_NF_FILTER=m
++CONFIG_IP6_NF_TARGET_REJECT=m
++CONFIG_IP6_NF_MANGLE=m
++CONFIG_IP6_NF_RAW=m
++CONFIG_IP6_NF_NAT=m
++CONFIG_IP6_NF_TARGET_MASQUERADE=m
++CONFIG_IP6_NF_TARGET_NPT=m
++CONFIG_BRIDGE_NF_EBTABLES=m
++CONFIG_BRIDGE_EBT_BROUTE=m
++CONFIG_BRIDGE_EBT_T_FILTER=m
++CONFIG_BRIDGE_EBT_T_NAT=m
++CONFIG_BRIDGE_EBT_802_3=m
++CONFIG_BRIDGE_EBT_AMONG=m
++CONFIG_BRIDGE_EBT_ARP=m
++CONFIG_BRIDGE_EBT_IP=m
++CONFIG_BRIDGE_EBT_IP6=m
++CONFIG_BRIDGE_EBT_LIMIT=m
++CONFIG_BRIDGE_EBT_MARK=m
++CONFIG_BRIDGE_EBT_PKTTYPE=m
++CONFIG_BRIDGE_EBT_STP=m
++CONFIG_BRIDGE_EBT_VLAN=m
++CONFIG_BRIDGE_EBT_ARPREPLY=m
++CONFIG_BRIDGE_EBT_DNAT=m
++CONFIG_BRIDGE_EBT_MARK_T=m
++CONFIG_BRIDGE_EBT_REDIRECT=m
++CONFIG_BRIDGE_EBT_SNAT=m
++CONFIG_BRIDGE_EBT_LOG=m
++CONFIG_BRIDGE_EBT_NFLOG=m
++CONFIG_SCTP_COOKIE_HMAC_SHA1=y
++CONFIG_ATM=m
++CONFIG_L2TP=m
++CONFIG_L2TP_V3=y
++CONFIG_L2TP_IP=m
++CONFIG_L2TP_ETH=m
++CONFIG_BRIDGE=m
++CONFIG_VLAN_8021Q=m
++CONFIG_VLAN_8021Q_GVRP=y
++CONFIG_ATALK=m
++CONFIG_6LOWPAN=m
++CONFIG_NET_SCHED=y
++CONFIG_NET_SCH_CBQ=m
++CONFIG_NET_SCH_HTB=m
++CONFIG_NET_SCH_HFSC=m
++CONFIG_NET_SCH_PRIO=m
++CONFIG_NET_SCH_MULTIQ=m
++CONFIG_NET_SCH_RED=m
++CONFIG_NET_SCH_SFB=m
++CONFIG_NET_SCH_SFQ=m
++CONFIG_NET_SCH_TEQL=m
++CONFIG_NET_SCH_TBF=m
++CONFIG_NET_SCH_GRED=m
++CONFIG_NET_SCH_DSMARK=m
++CONFIG_NET_SCH_NETEM=m
++CONFIG_NET_SCH_DRR=m
++CONFIG_NET_SCH_MQPRIO=m
++CONFIG_NET_SCH_CHOKE=m
++CONFIG_NET_SCH_QFQ=m
++CONFIG_NET_SCH_CODEL=m
++CONFIG_NET_SCH_FQ_CODEL=m
++CONFIG_NET_SCH_INGRESS=m
++CONFIG_NET_SCH_PLUG=m
++CONFIG_NET_CLS_BASIC=m
++CONFIG_NET_CLS_TCINDEX=m
++CONFIG_NET_CLS_ROUTE4=m
++CONFIG_NET_CLS_FW=m
++CONFIG_NET_CLS_U32=m
++CONFIG_CLS_U32_MARK=y
++CONFIG_NET_CLS_RSVP=m
++CONFIG_NET_CLS_RSVP6=m
++CONFIG_NET_CLS_FLOW=m
++CONFIG_NET_CLS_CGROUP=m
++CONFIG_NET_EMATCH=y
++CONFIG_NET_EMATCH_CMP=m
++CONFIG_NET_EMATCH_NBYTE=m
++CONFIG_NET_EMATCH_U32=m
++CONFIG_NET_EMATCH_META=m
++CONFIG_NET_EMATCH_TEXT=m
++CONFIG_NET_EMATCH_IPSET=m
++CONFIG_NET_CLS_ACT=y
++CONFIG_NET_ACT_POLICE=m
++CONFIG_NET_ACT_GACT=m
++CONFIG_GACT_PROB=y
++CONFIG_NET_ACT_MIRRED=m
++CONFIG_NET_ACT_IPT=m
++CONFIG_NET_ACT_NAT=m
++CONFIG_NET_ACT_PEDIT=m
++CONFIG_NET_ACT_SIMP=m
++CONFIG_NET_ACT_SKBEDIT=m
++CONFIG_NET_ACT_CSUM=m
++CONFIG_BATMAN_ADV=m
++CONFIG_OPENVSWITCH=m
++CONFIG_NET_PKTGEN=m
++CONFIG_HAMRADIO=y
++CONFIG_AX25=m
++CONFIG_NETROM=m
++CONFIG_ROSE=m
++CONFIG_MKISS=m
++CONFIG_6PACK=m
++CONFIG_BPQETHER=m
++CONFIG_BAYCOM_SER_FDX=m
++CONFIG_BAYCOM_SER_HDX=m
++CONFIG_YAM=m
++CONFIG_CAN=m
++CONFIG_CAN_VCAN=m
++CONFIG_CAN_MCP251X=m
++CONFIG_IRDA=m
++CONFIG_IRLAN=m
++CONFIG_IRNET=m
++CONFIG_IRCOMM=m
++CONFIG_IRDA_ULTRA=y
++CONFIG_IRDA_CACHE_LAST_LSAP=y
++CONFIG_IRDA_FAST_RR=y
++CONFIG_IRTTY_SIR=m
++CONFIG_KINGSUN_DONGLE=m
++CONFIG_KSDAZZLE_DONGLE=m
++CONFIG_KS959_DONGLE=m
++CONFIG_USB_IRDA=m
++CONFIG_SIGMATEL_FIR=m
++CONFIG_MCS_FIR=m
++CONFIG_BT=m
++CONFIG_BT_RFCOMM=m
++CONFIG_BT_RFCOMM_TTY=y
++CONFIG_BT_BNEP=m
++CONFIG_BT_BNEP_MC_FILTER=y
++CONFIG_BT_BNEP_PROTO_FILTER=y
++CONFIG_BT_HIDP=m
++CONFIG_BT_6LOWPAN=m
++CONFIG_BT_HCIBTUSB=m
++CONFIG_BT_HCIBCM203X=m
++CONFIG_BT_HCIBPA10X=m
++CONFIG_BT_HCIBFUSB=m
++CONFIG_BT_HCIVHCI=m
++CONFIG_BT_MRVL=m
++CONFIG_BT_MRVL_SDIO=m
++CONFIG_BT_ATH3K=m
++CONFIG_BT_WILINK=m
++CONFIG_MAC80211=m
++CONFIG_MAC80211_MESH=y
++CONFIG_WIMAX=m
++CONFIG_RFKILL=m
++CONFIG_RFKILL_INPUT=y
++CONFIG_NET_9P=m
++CONFIG_NFC=m
++CONFIG_NFC_PN533=m
+ CONFIG_DEVTMPFS=y
+ CONFIG_DEVTMPFS_MOUNT=y
+ # CONFIG_STANDALONE is not set
++CONFIG_DMA_CMA=y
++CONFIG_CMA_SIZE_MBYTES=5
++CONFIG_ZRAM=m
++CONFIG_ZRAM_LZ4_COMPRESS=y
++CONFIG_BLK_DEV_LOOP=y
++CONFIG_BLK_DEV_CRYPTOLOOP=m
++CONFIG_BLK_DEV_DRBD=m
++CONFIG_BLK_DEV_NBD=m
++CONFIG_BLK_DEV_RAM=y
++CONFIG_CDROM_PKTCDVD=m
++CONFIG_ATA_OVER_ETH=m
++CONFIG_EEPROM_AT24=m
++CONFIG_TI_ST=m
+ CONFIG_SCSI=y
++# CONFIG_SCSI_PROC_FS is not set
+ CONFIG_BLK_DEV_SD=y
+-CONFIG_SCSI_MULTI_LUN=y
++CONFIG_CHR_DEV_ST=m
++CONFIG_CHR_DEV_OSST=m
++CONFIG_BLK_DEV_SR=m
++CONFIG_CHR_DEV_SG=m
+ CONFIG_SCSI_CONSTANTS=y
+ CONFIG_SCSI_SCAN_ASYNC=y
++CONFIG_SCSI_ISCSI_ATTRS=y
++CONFIG_ISCSI_TCP=m
++CONFIG_ISCSI_BOOT_SYSFS=m
++CONFIG_MD=y
++CONFIG_MD_LINEAR=m
++CONFIG_MD_RAID0=m
++CONFIG_BLK_DEV_DM=m
++CONFIG_DM_CRYPT=m
++CONFIG_DM_SNAPSHOT=m
++CONFIG_DM_MIRROR=m
++CONFIG_DM_LOG_USERSPACE=m
++CONFIG_DM_RAID=m
++CONFIG_DM_ZERO=m
++CONFIG_DM_DELAY=m
+ CONFIG_NETDEVICES=y
++CONFIG_BONDING=m
++CONFIG_DUMMY=m
++CONFIG_IFB=m
++CONFIG_MACVLAN=m
++CONFIG_NETCONSOLE=m
++CONFIG_TUN=m
++CONFIG_VETH=m
++CONFIG_ENC28J60=m
++CONFIG_MDIO_BITBANG=m
++CONFIG_PPP=m
++CONFIG_PPP_BSDCOMP=m
++CONFIG_PPP_DEFLATE=m
++CONFIG_PPP_FILTER=y
++CONFIG_PPP_MPPE=m
++CONFIG_PPP_MULTILINK=y
++CONFIG_PPPOATM=m
++CONFIG_PPPOE=m
++CONFIG_PPPOL2TP=m
++CONFIG_PPP_ASYNC=m
++CONFIG_PPP_SYNC_TTY=m
++CONFIG_SLIP=m
++CONFIG_SLIP_COMPRESSED=y
++CONFIG_SLIP_SMART=y
++CONFIG_USB_CATC=m
++CONFIG_USB_KAWETH=m
++CONFIG_USB_PEGASUS=m
++CONFIG_USB_RTL8150=m
++CONFIG_USB_RTL8152=m
+ CONFIG_USB_USBNET=y
++CONFIG_USB_NET_AX8817X=m
++CONFIG_USB_NET_AX88179_178A=m
++CONFIG_USB_NET_CDCETHER=m
++CONFIG_USB_NET_CDC_EEM=m
++CONFIG_USB_NET_CDC_NCM=m
++CONFIG_USB_NET_HUAWEI_CDC_NCM=m
++CONFIG_USB_NET_CDC_MBIM=m
++CONFIG_USB_NET_DM9601=m
++CONFIG_USB_NET_SR9700=m
++CONFIG_USB_NET_SR9800=m
++CONFIG_USB_NET_SMSC75XX=m
+ CONFIG_USB_NET_SMSC95XX=y
+-CONFIG_ZD1211RW=y
+-CONFIG_INPUT_EVDEV=y
++CONFIG_USB_NET_GL620A=m
++CONFIG_USB_NET_NET1080=m
++CONFIG_USB_NET_PLUSB=m
++CONFIG_USB_NET_MCS7830=m
++CONFIG_USB_NET_CDC_SUBSET=m
++CONFIG_USB_ALI_M5632=y
++CONFIG_USB_AN2720=y
++CONFIG_USB_EPSON2888=y
++CONFIG_USB_KC2190=y
++CONFIG_USB_NET_ZAURUS=m
++CONFIG_USB_NET_CX82310_ETH=m
++CONFIG_USB_NET_KALMIA=m
++CONFIG_USB_NET_QMI_WWAN=m
++CONFIG_USB_HSO=m
++CONFIG_USB_NET_INT51X1=m
++CONFIG_USB_IPHETH=m
++CONFIG_USB_SIERRA_NET=m
++CONFIG_USB_VL600=m
++CONFIG_LIBERTAS_THINFIRM=m
++CONFIG_LIBERTAS_THINFIRM_USB=m
++CONFIG_AT76C50X_USB=m
++CONFIG_USB_ZD1201=m
++CONFIG_USB_NET_RNDIS_WLAN=m
++CONFIG_RTL8187=m
++CONFIG_MAC80211_HWSIM=m
++CONFIG_ATH_CARDS=m
++CONFIG_ATH9K=m
++CONFIG_ATH9K_HTC=m
++CONFIG_CARL9170=m
++CONFIG_ATH6KL=m
++CONFIG_ATH6KL_USB=m
++CONFIG_AR5523=m
++CONFIG_B43=m
++# CONFIG_B43_PHY_N is not set
++CONFIG_B43LEGACY=m
++CONFIG_BRCMFMAC=m
++CONFIG_BRCMFMAC_USB=y
++CONFIG_HOSTAP=m
++CONFIG_LIBERTAS=m
++CONFIG_LIBERTAS_USB=m
++CONFIG_LIBERTAS_SDIO=m
++CONFIG_P54_COMMON=m
++CONFIG_P54_USB=m
++CONFIG_RT2X00=m
++CONFIG_RT2500USB=m
++CONFIG_RT73USB=m
++CONFIG_RT2800USB=m
++CONFIG_RT2800USB_RT3573=y
++CONFIG_RT2800USB_RT53XX=y
++CONFIG_RT2800USB_RT55XX=y
++CONFIG_RT2800USB_UNKNOWN=y
++CONFIG_WL_MEDIATEK=y
++CONFIG_MT7601U=m
++CONFIG_RTL8192CU=m
++CONFIG_ZD1211RW=m
++CONFIG_MWIFIEX=m
++CONFIG_MWIFIEX_SDIO=m
++CONFIG_WIMAX_I2400M_USB=m
++CONFIG_INPUT_POLLDEV=m
++# CONFIG_INPUT_MOUSEDEV_PSAUX is not set
++CONFIG_INPUT_JOYDEV=m
++CONFIG_INPUT_EVDEV=m
++# CONFIG_KEYBOARD_ATKBD is not set
++CONFIG_KEYBOARD_GPIO=m
++# CONFIG_INPUT_MOUSE is not set
++CONFIG_INPUT_JOYSTICK=y
++CONFIG_JOYSTICK_IFORCE=m
++CONFIG_JOYSTICK_IFORCE_USB=y
++CONFIG_JOYSTICK_XPAD=m
++CONFIG_JOYSTICK_XPAD_FF=y
++CONFIG_JOYSTICK_RPISENSE=m
++CONFIG_INPUT_TOUCHSCREEN=y
++CONFIG_TOUCHSCREEN_ADS7846=m
++CONFIG_TOUCHSCREEN_EGALAX=m
++CONFIG_TOUCHSCREEN_RPI_FT5406=m
++CONFIG_TOUCHSCREEN_USB_COMPOSITE=m
++CONFIG_TOUCHSCREEN_STMPE=m
++CONFIG_INPUT_MISC=y
++CONFIG_INPUT_AD714X=m
++CONFIG_INPUT_ATI_REMOTE2=m
++CONFIG_INPUT_KEYSPAN_REMOTE=m
++CONFIG_INPUT_POWERMATE=m
++CONFIG_INPUT_YEALINK=m
++CONFIG_INPUT_CM109=m
++CONFIG_INPUT_UINPUT=m
++CONFIG_INPUT_GPIO_ROTARY_ENCODER=m
++CONFIG_INPUT_ADXL34X=m
++CONFIG_INPUT_CMA3000=m
++CONFIG_SERIO=m
++CONFIG_SERIO_RAW=m
++CONFIG_GAMEPORT=m
++CONFIG_GAMEPORT_NS558=m
++CONFIG_GAMEPORT_L4=m
++CONFIG_BRCM_CHAR_DRIVERS=y
++CONFIG_BCM_VC_CMA=y
++CONFIG_BCM_VCIO=y
++CONFIG_BCM_VC_SM=y
++CONFIG_DEVPTS_MULTIPLE_INSTANCES=y
+ # CONFIG_LEGACY_PTYS is not set
+ # CONFIG_DEVKMEM is not set
++CONFIG_SERIAL_8250=y
++# CONFIG_SERIAL_8250_DEPRECATED_OPTIONS is not set
++CONFIG_SERIAL_8250_CONSOLE=y
++# CONFIG_SERIAL_8250_DMA is not set
++CONFIG_SERIAL_8250_NR_UARTS=1
++CONFIG_SERIAL_8250_RUNTIME_UARTS=0
+ CONFIG_SERIAL_AMBA_PL011=y
+ CONFIG_SERIAL_AMBA_PL011_CONSOLE=y
++CONFIG_SERIAL_OF_PLATFORM=y
+ CONFIG_TTY_PRINTK=y
++CONFIG_HW_RANDOM=y
++CONFIG_HW_RANDOM_BCM2835=m
++CONFIG_RAW_DRIVER=y
+ CONFIG_I2C=y
+-CONFIG_I2C_CHARDEV=y
+-CONFIG_I2C_BCM2835=y
++CONFIG_I2C_CHARDEV=m
++CONFIG_I2C_BCM2708=m
++CONFIG_I2C_BCM2835=m
+ CONFIG_SPI=y
+-CONFIG_SPI_BCM2835=y
++CONFIG_SPI_BCM2835=m
++CONFIG_SPI_SPIDEV=y
++CONFIG_PPS=m
++CONFIG_PPS_CLIENT_LDISC=m
++CONFIG_PPS_CLIENT_GPIO=m
+ CONFIG_GPIO_SYSFS=y
++CONFIG_GPIO_ARIZONA=m
++CONFIG_GPIO_STMPE=y
++CONFIG_W1=m
++CONFIG_W1_MASTER_DS2490=m
++CONFIG_W1_MASTER_DS2482=m
++CONFIG_W1_MASTER_DS1WM=m
++CONFIG_W1_MASTER_GPIO=m
++CONFIG_W1_SLAVE_THERM=m
++CONFIG_W1_SLAVE_SMEM=m
++CONFIG_W1_SLAVE_DS2408=m
++CONFIG_W1_SLAVE_DS2413=m
++CONFIG_W1_SLAVE_DS2406=m
++CONFIG_W1_SLAVE_DS2423=m
++CONFIG_W1_SLAVE_DS2431=m
++CONFIG_W1_SLAVE_DS2433=m
++CONFIG_W1_SLAVE_DS2760=m
++CONFIG_W1_SLAVE_DS2780=m
++CONFIG_W1_SLAVE_DS2781=m
++CONFIG_W1_SLAVE_DS28E04=m
++CONFIG_W1_SLAVE_BQ27000=m
++CONFIG_BATTERY_DS2760=m
++CONFIG_POWER_RESET=y
++CONFIG_POWER_RESET_GPIO=y
+ # CONFIG_HWMON is not set
++CONFIG_THERMAL=y
++CONFIG_THERMAL_BCM2835=y
++CONFIG_WATCHDOG=y
++CONFIG_BCM2835_WDT=y
++CONFIG_UCB1400_CORE=m
++CONFIG_MFD_STMPE=y
++CONFIG_STMPE_SPI=y
++CONFIG_MFD_ARIZONA_I2C=m
++CONFIG_MFD_ARIZONA_SPI=m
++CONFIG_MFD_WM5102=y
++CONFIG_MEDIA_SUPPORT=m
++CONFIG_MEDIA_CAMERA_SUPPORT=y
++CONFIG_MEDIA_ANALOG_TV_SUPPORT=y
++CONFIG_MEDIA_DIGITAL_TV_SUPPORT=y
++CONFIG_MEDIA_RADIO_SUPPORT=y
++CONFIG_MEDIA_RC_SUPPORT=y
++CONFIG_MEDIA_CONTROLLER=y
++CONFIG_LIRC=m
++CONFIG_RC_DEVICES=y
++CONFIG_RC_ATI_REMOTE=m
++CONFIG_IR_IMON=m
++CONFIG_IR_MCEUSB=m
++CONFIG_IR_REDRAT3=m
++CONFIG_IR_STREAMZAP=m
++CONFIG_IR_IGUANA=m
++CONFIG_IR_TTUSBIR=m
++CONFIG_RC_LOOPBACK=m
++CONFIG_IR_GPIO_CIR=m
++CONFIG_MEDIA_USB_SUPPORT=y
++CONFIG_USB_VIDEO_CLASS=m
++CONFIG_USB_M5602=m
++CONFIG_USB_STV06XX=m
++CONFIG_USB_GL860=m
++CONFIG_USB_GSPCA_BENQ=m
++CONFIG_USB_GSPCA_CONEX=m
++CONFIG_USB_GSPCA_CPIA1=m
++CONFIG_USB_GSPCA_DTCS033=m
++CONFIG_USB_GSPCA_ETOMS=m
++CONFIG_USB_GSPCA_FINEPIX=m
++CONFIG_USB_GSPCA_JEILINJ=m
++CONFIG_USB_GSPCA_JL2005BCD=m
++CONFIG_USB_GSPCA_KINECT=m
++CONFIG_USB_GSPCA_KONICA=m
++CONFIG_USB_GSPCA_MARS=m
++CONFIG_USB_GSPCA_MR97310A=m
++CONFIG_USB_GSPCA_NW80X=m
++CONFIG_USB_GSPCA_OV519=m
++CONFIG_USB_GSPCA_OV534=m
++CONFIG_USB_GSPCA_OV534_9=m
++CONFIG_USB_GSPCA_PAC207=m
++CONFIG_USB_GSPCA_PAC7302=m
++CONFIG_USB_GSPCA_PAC7311=m
++CONFIG_USB_GSPCA_SE401=m
++CONFIG_USB_GSPCA_SN9C2028=m
++CONFIG_USB_GSPCA_SN9C20X=m
++CONFIG_USB_GSPCA_SONIXB=m
++CONFIG_USB_GSPCA_SONIXJ=m
++CONFIG_USB_GSPCA_SPCA500=m
++CONFIG_USB_GSPCA_SPCA501=m
++CONFIG_USB_GSPCA_SPCA505=m
++CONFIG_USB_GSPCA_SPCA506=m
++CONFIG_USB_GSPCA_SPCA508=m
++CONFIG_USB_GSPCA_SPCA561=m
++CONFIG_USB_GSPCA_SPCA1528=m
++CONFIG_USB_GSPCA_SQ905=m
++CONFIG_USB_GSPCA_SQ905C=m
++CONFIG_USB_GSPCA_SQ930X=m
++CONFIG_USB_GSPCA_STK014=m
++CONFIG_USB_GSPCA_STK1135=m
++CONFIG_USB_GSPCA_STV0680=m
++CONFIG_USB_GSPCA_SUNPLUS=m
++CONFIG_USB_GSPCA_T613=m
++CONFIG_USB_GSPCA_TOPRO=m
++CONFIG_USB_GSPCA_TV8532=m
++CONFIG_USB_GSPCA_VC032X=m
++CONFIG_USB_GSPCA_VICAM=m
++CONFIG_USB_GSPCA_XIRLINK_CIT=m
++CONFIG_USB_GSPCA_ZC3XX=m
++CONFIG_USB_PWC=m
++CONFIG_VIDEO_CPIA2=m
++CONFIG_USB_ZR364XX=m
++CONFIG_USB_STKWEBCAM=m
++CONFIG_USB_S2255=m
++CONFIG_VIDEO_USBTV=m
++CONFIG_VIDEO_PVRUSB2=m
++CONFIG_VIDEO_HDPVR=m
++CONFIG_VIDEO_USBVISION=m
++CONFIG_VIDEO_STK1160_COMMON=m
++CONFIG_VIDEO_STK1160_AC97=y
++CONFIG_VIDEO_GO7007=m
++CONFIG_VIDEO_GO7007_USB=m
++CONFIG_VIDEO_GO7007_USB_S2250_BOARD=m
++CONFIG_VIDEO_AU0828=m
++CONFIG_VIDEO_AU0828_RC=y
++CONFIG_VIDEO_CX231XX=m
++CONFIG_VIDEO_CX231XX_ALSA=m
++CONFIG_VIDEO_CX231XX_DVB=m
++CONFIG_VIDEO_TM6000=m
++CONFIG_VIDEO_TM6000_ALSA=m
++CONFIG_VIDEO_TM6000_DVB=m
++CONFIG_DVB_USB=m
++CONFIG_DVB_USB_A800=m
++CONFIG_DVB_USB_DIBUSB_MB=m
++CONFIG_DVB_USB_DIBUSB_MB_FAULTY=y
++CONFIG_DVB_USB_DIBUSB_MC=m
++CONFIG_DVB_USB_DIB0700=m
++CONFIG_DVB_USB_UMT_010=m
++CONFIG_DVB_USB_CXUSB=m
++CONFIG_DVB_USB_M920X=m
++CONFIG_DVB_USB_DIGITV=m
++CONFIG_DVB_USB_VP7045=m
++CONFIG_DVB_USB_VP702X=m
++CONFIG_DVB_USB_GP8PSK=m
++CONFIG_DVB_USB_NOVA_T_USB2=m
++CONFIG_DVB_USB_TTUSB2=m
++CONFIG_DVB_USB_DTT200U=m
++CONFIG_DVB_USB_OPERA1=m
++CONFIG_DVB_USB_AF9005=m
++CONFIG_DVB_USB_AF9005_REMOTE=m
++CONFIG_DVB_USB_PCTV452E=m
++CONFIG_DVB_USB_DW2102=m
++CONFIG_DVB_USB_CINERGY_T2=m
++CONFIG_DVB_USB_DTV5100=m
++CONFIG_DVB_USB_FRIIO=m
++CONFIG_DVB_USB_AZ6027=m
++CONFIG_DVB_USB_TECHNISAT_USB2=m
++CONFIG_DVB_USB_V2=m
++CONFIG_DVB_USB_AF9015=m
++CONFIG_DVB_USB_AF9035=m
++CONFIG_DVB_USB_ANYSEE=m
++CONFIG_DVB_USB_AU6610=m
++CONFIG_DVB_USB_AZ6007=m
++CONFIG_DVB_USB_CE6230=m
++CONFIG_DVB_USB_EC168=m
++CONFIG_DVB_USB_GL861=m
++CONFIG_DVB_USB_LME2510=m
++CONFIG_DVB_USB_MXL111SF=m
++CONFIG_DVB_USB_RTL28XXU=m
++CONFIG_DVB_USB_DVBSKY=m
++CONFIG_SMS_USB_DRV=m
++CONFIG_DVB_B2C2_FLEXCOP_USB=m
++CONFIG_DVB_AS102=m
++CONFIG_VIDEO_EM28XX=m
++CONFIG_VIDEO_EM28XX_V4L2=m
++CONFIG_VIDEO_EM28XX_ALSA=m
++CONFIG_VIDEO_EM28XX_DVB=m
++CONFIG_V4L_PLATFORM_DRIVERS=y
++CONFIG_VIDEO_BCM2835=y
++CONFIG_VIDEO_BCM2835_MMAL=m
++CONFIG_RADIO_SI470X=y
++CONFIG_USB_SI470X=m
++CONFIG_I2C_SI470X=m
++CONFIG_RADIO_SI4713=m
++CONFIG_I2C_SI4713=m
++CONFIG_USB_MR800=m
++CONFIG_USB_DSBR=m
++CONFIG_RADIO_SHARK=m
++CONFIG_RADIO_SHARK2=m
++CONFIG_USB_KEENE=m
++CONFIG_USB_MA901=m
++CONFIG_RADIO_TEA5764=m
++CONFIG_RADIO_SAA7706H=m
++CONFIG_RADIO_TEF6862=m
++CONFIG_RADIO_WL1273=m
++CONFIG_RADIO_WL128X=m
++# CONFIG_MEDIA_SUBDRV_AUTOSELECT is not set
++CONFIG_VIDEO_UDA1342=m
++CONFIG_VIDEO_SONY_BTF_MPX=m
++CONFIG_VIDEO_TVP5150=m
++CONFIG_VIDEO_TW2804=m
++CONFIG_VIDEO_TW9903=m
++CONFIG_VIDEO_TW9906=m
++CONFIG_VIDEO_OV7640=m
++CONFIG_VIDEO_MT9V011=m
+ CONFIG_FB=y
+-CONFIG_FB_SIMPLE=y
++CONFIG_FB_BCM2708=y
++CONFIG_FB_SSD1307=m
++CONFIG_FB_RPISENSE=m
++# CONFIG_BACKLIGHT_GENERIC is not set
++CONFIG_BACKLIGHT_GPIO=m
+ CONFIG_FRAMEBUFFER_CONSOLE=y
+ CONFIG_FRAMEBUFFER_CONSOLE_DETECT_PRIMARY=y
++CONFIG_LOGO=y
++# CONFIG_LOGO_LINUX_MONO is not set
++# CONFIG_LOGO_LINUX_VGA16 is not set
++CONFIG_SOUND=y
++CONFIG_SND=m
++CONFIG_SND_SEQUENCER=m
++CONFIG_SND_SEQ_DUMMY=m
++CONFIG_SND_MIXER_OSS=m
++CONFIG_SND_PCM_OSS=m
++CONFIG_SND_SEQUENCER_OSS=y
++CONFIG_SND_HRTIMER=m
++CONFIG_SND_DUMMY=m
++CONFIG_SND_ALOOP=m
++CONFIG_SND_VIRMIDI=m
++CONFIG_SND_MTPAV=m
++CONFIG_SND_SERIAL_U16550=m
++CONFIG_SND_MPU401=m
++CONFIG_SND_BCM2835=m
++CONFIG_SND_USB_AUDIO=m
++CONFIG_SND_USB_UA101=m
++CONFIG_SND_USB_CAIAQ=m
++CONFIG_SND_USB_CAIAQ_INPUT=y
++CONFIG_SND_USB_6FIRE=m
++CONFIG_SND_SOC=m
++CONFIG_SND_BCM2835_SOC_I2S=m
++CONFIG_SND_BCM2708_SOC_I2S=m
++CONFIG_SND_BCM2708_SOC_HIFIBERRY_DAC=m
++CONFIG_SND_BCM2708_SOC_HIFIBERRY_DACPLUS=m
++CONFIG_SND_BCM2708_SOC_HIFIBERRY_DIGI=m
++CONFIG_SND_BCM2708_SOC_HIFIBERRY_AMP=m
++CONFIG_SND_BCM2708_SOC_RPI_DAC=m
++CONFIG_SND_BCM2708_SOC_RPI_PROTO=m
++CONFIG_SND_BCM2708_SOC_IQAUDIO_DAC=m
++CONFIG_SND_SOC_WM8804_I2C=m
++CONFIG_SND_SIMPLE_CARD=m
++CONFIG_SOUND_PRIME=m
++CONFIG_HIDRAW=y
++CONFIG_HID_A4TECH=m
++CONFIG_HID_ACRUX=m
++CONFIG_HID_APPLE=m
++CONFIG_HID_BELKIN=m
++CONFIG_HID_CHERRY=m
++CONFIG_HID_CHICONY=m
++CONFIG_HID_CYPRESS=m
++CONFIG_HID_DRAGONRISE=m
++CONFIG_HID_EMS_FF=m
++CONFIG_HID_ELECOM=m
++CONFIG_HID_ELO=m
++CONFIG_HID_EZKEY=m
++CONFIG_HID_HOLTEK=m
++CONFIG_HID_KEYTOUCH=m
++CONFIG_HID_KYE=m
++CONFIG_HID_UCLOGIC=m
++CONFIG_HID_WALTOP=m
++CONFIG_HID_GYRATION=m
++CONFIG_HID_TWINHAN=m
++CONFIG_HID_KENSINGTON=m
++CONFIG_HID_LCPOWER=m
++CONFIG_HID_LOGITECH=m
++CONFIG_HID_MAGICMOUSE=m
++CONFIG_HID_MICROSOFT=m
++CONFIG_HID_MONTEREY=m
++CONFIG_HID_MULTITOUCH=m
++CONFIG_HID_NTRIG=m
++CONFIG_HID_ORTEK=m
++CONFIG_HID_PANTHERLORD=m
++CONFIG_HID_PETALYNX=m
++CONFIG_HID_PICOLCD=m
++CONFIG_HID_ROCCAT=m
++CONFIG_HID_SAMSUNG=m
++CONFIG_HID_SONY=m
++CONFIG_HID_SPEEDLINK=m
++CONFIG_HID_SUNPLUS=m
++CONFIG_HID_GREENASIA=m
++CONFIG_HID_SMARTJOYPLUS=m
++CONFIG_HID_TOPSEED=m
++CONFIG_HID_THINGM=m
++CONFIG_HID_THRUSTMASTER=m
++CONFIG_HID_WACOM=m
++CONFIG_HID_WIIMOTE=m
++CONFIG_HID_XINMO=m
++CONFIG_HID_ZEROPLUS=m
++CONFIG_HID_ZYDACRON=m
++CONFIG_HID_PID=y
++CONFIG_USB_HIDDEV=y
+ CONFIG_USB=y
++CONFIG_USB_ANNOUNCE_NEW_DEVICES=y
++CONFIG_USB_MON=m
++CONFIG_USB_DWCOTG=y
++CONFIG_USB_PRINTER=m
+ CONFIG_USB_STORAGE=y
++CONFIG_USB_STORAGE_REALTEK=m
++CONFIG_USB_STORAGE_DATAFAB=m
++CONFIG_USB_STORAGE_FREECOM=m
++CONFIG_USB_STORAGE_ISD200=m
++CONFIG_USB_STORAGE_USBAT=m
++CONFIG_USB_STORAGE_SDDR09=m
++CONFIG_USB_STORAGE_SDDR55=m
++CONFIG_USB_STORAGE_JUMPSHOT=m
++CONFIG_USB_STORAGE_ALAUDA=m
++CONFIG_USB_STORAGE_ONETOUCH=m
++CONFIG_USB_STORAGE_KARMA=m
++CONFIG_USB_STORAGE_CYPRESS_ATACB=m
++CONFIG_USB_STORAGE_ENE_UB6250=m
++CONFIG_USB_MDC800=m
++CONFIG_USB_MICROTEK=m
++CONFIG_USBIP_CORE=m
++CONFIG_USBIP_VHCI_HCD=m
++CONFIG_USBIP_HOST=m
++CONFIG_USB_DWC2=y
++CONFIG_USB_SERIAL=m
++CONFIG_USB_SERIAL_GENERIC=y
++CONFIG_USB_SERIAL_AIRCABLE=m
++CONFIG_USB_SERIAL_ARK3116=m
++CONFIG_USB_SERIAL_BELKIN=m
++CONFIG_USB_SERIAL_CH341=m
++CONFIG_USB_SERIAL_WHITEHEAT=m
++CONFIG_USB_SERIAL_DIGI_ACCELEPORT=m
++CONFIG_USB_SERIAL_CP210X=m
++CONFIG_USB_SERIAL_CYPRESS_M8=m
++CONFIG_USB_SERIAL_EMPEG=m
++CONFIG_USB_SERIAL_FTDI_SIO=m
++CONFIG_USB_SERIAL_VISOR=m
++CONFIG_USB_SERIAL_IPAQ=m
++CONFIG_USB_SERIAL_IR=m
++CONFIG_USB_SERIAL_EDGEPORT=m
++CONFIG_USB_SERIAL_EDGEPORT_TI=m
++CONFIG_USB_SERIAL_F81232=m
++CONFIG_USB_SERIAL_GARMIN=m
++CONFIG_USB_SERIAL_IPW=m
++CONFIG_USB_SERIAL_IUU=m
++CONFIG_USB_SERIAL_KEYSPAN_PDA=m
++CONFIG_USB_SERIAL_KEYSPAN=m
++CONFIG_USB_SERIAL_KLSI=m
++CONFIG_USB_SERIAL_KOBIL_SCT=m
++CONFIG_USB_SERIAL_MCT_U232=m
++CONFIG_USB_SERIAL_METRO=m
++CONFIG_USB_SERIAL_MOS7720=m
++CONFIG_USB_SERIAL_MOS7840=m
++CONFIG_USB_SERIAL_NAVMAN=m
++CONFIG_USB_SERIAL_PL2303=m
++CONFIG_USB_SERIAL_OTI6858=m
++CONFIG_USB_SERIAL_QCAUX=m
++CONFIG_USB_SERIAL_QUALCOMM=m
++CONFIG_USB_SERIAL_SPCP8X5=m
++CONFIG_USB_SERIAL_SAFE=m
++CONFIG_USB_SERIAL_SIERRAWIRELESS=m
++CONFIG_USB_SERIAL_SYMBOL=m
++CONFIG_USB_SERIAL_TI=m
++CONFIG_USB_SERIAL_CYBERJACK=m
++CONFIG_USB_SERIAL_XIRCOM=m
++CONFIG_USB_SERIAL_OPTION=m
++CONFIG_USB_SERIAL_OMNINET=m
++CONFIG_USB_SERIAL_OPTICON=m
++CONFIG_USB_SERIAL_XSENS_MT=m
++CONFIG_USB_SERIAL_WISHBONE=m
++CONFIG_USB_SERIAL_SSU100=m
++CONFIG_USB_SERIAL_QT2=m
++CONFIG_USB_SERIAL_DEBUG=m
++CONFIG_USB_EMI62=m
++CONFIG_USB_EMI26=m
++CONFIG_USB_ADUTUX=m
++CONFIG_USB_SEVSEG=m
++CONFIG_USB_RIO500=m
++CONFIG_USB_LEGOTOWER=m
++CONFIG_USB_LCD=m
++CONFIG_USB_LED=m
++CONFIG_USB_CYPRESS_CY7C63=m
++CONFIG_USB_CYTHERM=m
++CONFIG_USB_IDMOUSE=m
++CONFIG_USB_FTDI_ELAN=m
++CONFIG_USB_APPLEDISPLAY=m
++CONFIG_USB_LD=m
++CONFIG_USB_TRANCEVIBRATOR=m
++CONFIG_USB_IOWARRIOR=m
++CONFIG_USB_TEST=m
++CONFIG_USB_ISIGHTFW=m
++CONFIG_USB_YUREX=m
++CONFIG_USB_ATM=m
++CONFIG_USB_SPEEDTOUCH=m
++CONFIG_USB_CXACRU=m
++CONFIG_USB_UEAGLEATM=m
++CONFIG_USB_XUSBATM=m
+ CONFIG_MMC=y
++CONFIG_MMC_BLOCK_MINORS=32
++CONFIG_MMC_BCM2835=y
++CONFIG_MMC_BCM2835_DMA=y
++CONFIG_MMC_BCM2835_SDHOST=y
+ CONFIG_MMC_SDHCI=y
+ CONFIG_MMC_SDHCI_PLTFM=y
+ CONFIG_MMC_SDHCI_BCM2835=y
++CONFIG_MMC_SPI=m
++CONFIG_LEDS_CLASS=y
+ CONFIG_LEDS_GPIO=y
+ CONFIG_LEDS_TRIGGER_TIMER=y
+ CONFIG_LEDS_TRIGGER_ONESHOT=y
+ CONFIG_LEDS_TRIGGER_HEARTBEAT=y
++CONFIG_LEDS_TRIGGER_BACKLIGHT=y
+ CONFIG_LEDS_TRIGGER_CPU=y
+ CONFIG_LEDS_TRIGGER_GPIO=y
+ CONFIG_LEDS_TRIGGER_DEFAULT_ON=y
+-CONFIG_LEDS_TRIGGER_TRANSIENT=y
+-CONFIG_LEDS_TRIGGER_CAMERA=y
++CONFIG_LEDS_TRIGGER_TRANSIENT=m
++CONFIG_LEDS_TRIGGER_CAMERA=m
++CONFIG_LEDS_TRIGGER_INPUT=y
++CONFIG_RTC_CLASS=y
++# CONFIG_RTC_HCTOSYS is not set
++CONFIG_RTC_DRV_DS1307=m
++CONFIG_RTC_DRV_DS1374=m
++CONFIG_RTC_DRV_DS1672=m
++CONFIG_RTC_DRV_DS3232=m
++CONFIG_RTC_DRV_MAX6900=m
++CONFIG_RTC_DRV_RS5C372=m
++CONFIG_RTC_DRV_ISL1208=m
++CONFIG_RTC_DRV_ISL12022=m
++CONFIG_RTC_DRV_ISL12057=m
++CONFIG_RTC_DRV_X1205=m
++CONFIG_RTC_DRV_PCF2127=m
++CONFIG_RTC_DRV_PCF8523=m
++CONFIG_RTC_DRV_PCF8563=m
++CONFIG_RTC_DRV_PCF8583=m
++CONFIG_RTC_DRV_M41T80=m
++CONFIG_RTC_DRV_BQ32K=m
++CONFIG_RTC_DRV_S35390A=m
++CONFIG_RTC_DRV_FM3130=m
++CONFIG_RTC_DRV_RX8581=m
++CONFIG_RTC_DRV_RX8025=m
++CONFIG_RTC_DRV_EM3027=m
++CONFIG_RTC_DRV_RV3029C2=m
++CONFIG_RTC_DRV_M41T93=m
++CONFIG_RTC_DRV_M41T94=m
++CONFIG_RTC_DRV_DS1305=m
++CONFIG_RTC_DRV_DS1390=m
++CONFIG_RTC_DRV_MAX6902=m
++CONFIG_RTC_DRV_R9701=m
++CONFIG_RTC_DRV_RS5C348=m
++CONFIG_RTC_DRV_DS3234=m
++CONFIG_RTC_DRV_PCF2123=m
++CONFIG_RTC_DRV_RX4581=m
++CONFIG_DMADEVICES=y
++CONFIG_DMA_BCM2835=y
++CONFIG_DMA_BCM2708=y
++CONFIG_UIO=m
++CONFIG_UIO_PDRV_GENIRQ=m
+ CONFIG_STAGING=y
+-CONFIG_USB_DWC2=y
+-CONFIG_USB_DWC2_HOST=y
++CONFIG_PRISM2_USB=m
++CONFIG_R8712U=m
++CONFIG_R8188EU=m
++CONFIG_R8723AU=m
++CONFIG_VT6656=m
++CONFIG_SPEAKUP=m
++CONFIG_SPEAKUP_SYNTH_SOFT=m
++CONFIG_STAGING_MEDIA=y
++CONFIG_LIRC_STAGING=y
++CONFIG_LIRC_IMON=m
++CONFIG_LIRC_RPI=m
++CONFIG_LIRC_SASEM=m
++CONFIG_LIRC_SERIAL=m
++CONFIG_FB_TFT=m
++CONFIG_FB_TFT_AGM1264K_FL=m
++CONFIG_FB_TFT_BD663474=m
++CONFIG_FB_TFT_HX8340BN=m
++CONFIG_FB_TFT_HX8347D=m
++CONFIG_FB_TFT_HX8353D=m
++CONFIG_FB_TFT_ILI9320=m
++CONFIG_FB_TFT_ILI9325=m
++CONFIG_FB_TFT_ILI9340=m
++CONFIG_FB_TFT_ILI9341=m
++CONFIG_FB_TFT_ILI9481=m
++CONFIG_FB_TFT_ILI9486=m
++CONFIG_FB_TFT_PCD8544=m
++CONFIG_FB_TFT_RA8875=m
++CONFIG_FB_TFT_S6D02A1=m
++CONFIG_FB_TFT_S6D1121=m
++CONFIG_FB_TFT_SSD1289=m
++CONFIG_FB_TFT_SSD1306=m
++CONFIG_FB_TFT_SSD1331=m
++CONFIG_FB_TFT_SSD1351=m
++CONFIG_FB_TFT_ST7735R=m
++CONFIG_FB_TFT_TINYLCD=m
++CONFIG_FB_TFT_TLS8204=m
++CONFIG_FB_TFT_UC1701=m
++CONFIG_FB_TFT_UPD161704=m
++CONFIG_FB_TFT_WATTEROTT=m
++CONFIG_FB_FLEX=m
++CONFIG_FB_TFT_FBTFT_DEVICE=m
++CONFIG_MAILBOX=y
++CONFIG_BCM2835_MBOX=y
+ # CONFIG_IOMMU_SUPPORT is not set
++CONFIG_EXTCON=m
++CONFIG_EXTCON_ARIZONA=m
++CONFIG_IIO=m
++CONFIG_IIO_BUFFER=y
++CONFIG_IIO_BUFFER_CB=y
++CONFIG_IIO_KFIFO_BUF=m
++CONFIG_DHT11=m
++CONFIG_RASPBERRYPI_FIRMWARE=y
+ CONFIG_EXT2_FS=y
+ CONFIG_EXT2_FS_XATTR=y
+ CONFIG_EXT2_FS_POSIX_ACL=y
+@@ -107,18 +1105,110 @@ CONFIG_EXT3_FS=y
+ CONFIG_EXT3_FS_POSIX_ACL=y
+ CONFIG_EXT4_FS=y
+ CONFIG_EXT4_FS_POSIX_ACL=y
++CONFIG_EXT4_FS_SECURITY=y
++CONFIG_REISERFS_FS=m
++CONFIG_REISERFS_FS_XATTR=y
++CONFIG_REISERFS_FS_POSIX_ACL=y
++CONFIG_REISERFS_FS_SECURITY=y
++CONFIG_JFS_FS=m
++CONFIG_JFS_POSIX_ACL=y
++CONFIG_JFS_SECURITY=y
++CONFIG_JFS_STATISTICS=y
++CONFIG_XFS_FS=m
++CONFIG_XFS_QUOTA=y
++CONFIG_XFS_POSIX_ACL=y
++CONFIG_XFS_RT=y
++CONFIG_GFS2_FS=m
++CONFIG_OCFS2_FS=m
++CONFIG_BTRFS_FS=m
++CONFIG_BTRFS_FS_POSIX_ACL=y
++CONFIG_NILFS2_FS=m
++CONFIG_F2FS_FS=y
+ CONFIG_FANOTIFY=y
++CONFIG_QFMT_V1=m
++CONFIG_QFMT_V2=m
++CONFIG_AUTOFS4_FS=y
++CONFIG_FUSE_FS=m
++CONFIG_CUSE=m
++CONFIG_FSCACHE=y
++CONFIG_FSCACHE_STATS=y
++CONFIG_FSCACHE_HISTOGRAM=y
++CONFIG_CACHEFILES=y
++CONFIG_ISO9660_FS=m
++CONFIG_JOLIET=y
++CONFIG_ZISOFS=y
++CONFIG_UDF_FS=m
+ CONFIG_MSDOS_FS=y
+ CONFIG_VFAT_FS=y
++CONFIG_FAT_DEFAULT_IOCHARSET="ascii"
++CONFIG_NTFS_FS=m
++CONFIG_NTFS_RW=y
+ CONFIG_TMPFS=y
+ CONFIG_TMPFS_POSIX_ACL=y
+-# CONFIG_MISC_FILESYSTEMS is not set
++CONFIG_CONFIGFS_FS=y
++CONFIG_ECRYPT_FS=m
++CONFIG_HFS_FS=m
++CONFIG_HFSPLUS_FS=m
++CONFIG_SQUASHFS=m
++CONFIG_SQUASHFS_XATTR=y
++CONFIG_SQUASHFS_LZO=y
++CONFIG_SQUASHFS_XZ=y
+ CONFIG_NFS_FS=y
+-CONFIG_NFSD=y
++CONFIG_NFS_V3_ACL=y
++CONFIG_NFS_V4=y
++CONFIG_NFS_SWAP=y
++CONFIG_ROOT_NFS=y
++CONFIG_NFS_FSCACHE=y
++CONFIG_NFSD=m
++CONFIG_NFSD_V3_ACL=y
++CONFIG_NFSD_V4=y
++CONFIG_CIFS=m
++CONFIG_CIFS_WEAK_PW_HASH=y
++CONFIG_CIFS_UPCALL=y
++CONFIG_CIFS_XATTR=y
++CONFIG_CIFS_POSIX=y
++CONFIG_9P_FS=m
++CONFIG_9P_FS_POSIX_ACL=y
++CONFIG_NLS_DEFAULT="utf8"
+ CONFIG_NLS_CODEPAGE_437=y
++CONFIG_NLS_CODEPAGE_737=m
++CONFIG_NLS_CODEPAGE_775=m
++CONFIG_NLS_CODEPAGE_850=m
++CONFIG_NLS_CODEPAGE_852=m
++CONFIG_NLS_CODEPAGE_855=m
++CONFIG_NLS_CODEPAGE_857=m
++CONFIG_NLS_CODEPAGE_860=m
++CONFIG_NLS_CODEPAGE_861=m
++CONFIG_NLS_CODEPAGE_862=m
++CONFIG_NLS_CODEPAGE_863=m
++CONFIG_NLS_CODEPAGE_864=m
++CONFIG_NLS_CODEPAGE_865=m
++CONFIG_NLS_CODEPAGE_866=m
++CONFIG_NLS_CODEPAGE_869=m
++CONFIG_NLS_CODEPAGE_936=m
++CONFIG_NLS_CODEPAGE_950=m
++CONFIG_NLS_CODEPAGE_932=m
++CONFIG_NLS_CODEPAGE_949=m
++CONFIG_NLS_CODEPAGE_874=m
++CONFIG_NLS_ISO8859_8=m
++CONFIG_NLS_CODEPAGE_1250=m
++CONFIG_NLS_CODEPAGE_1251=m
+ CONFIG_NLS_ASCII=y
+-CONFIG_NLS_ISO8859_1=y
++CONFIG_NLS_ISO8859_1=m
++CONFIG_NLS_ISO8859_2=m
++CONFIG_NLS_ISO8859_3=m
++CONFIG_NLS_ISO8859_4=m
++CONFIG_NLS_ISO8859_5=m
++CONFIG_NLS_ISO8859_6=m
++CONFIG_NLS_ISO8859_7=m
++CONFIG_NLS_ISO8859_9=m
++CONFIG_NLS_ISO8859_13=m
++CONFIG_NLS_ISO8859_14=m
++CONFIG_NLS_ISO8859_15=m
++CONFIG_NLS_KOI8_R=m
++CONFIG_NLS_KOI8_U=m
+ CONFIG_NLS_UTF8=y
++CONFIG_DLM=m
+ CONFIG_PRINTK_TIME=y
+ CONFIG_BOOT_PRINTK_DELAY=y
+ CONFIG_DYNAMIC_DEBUG=y
+@@ -128,14 +1218,38 @@ CONFIG_DEBUG_INFO=y
+ CONFIG_UNUSED_SYMBOLS=y
+ CONFIG_DEBUG_MEMORY_INIT=y
+ CONFIG_LOCKUP_DETECTOR=y
++CONFIG_TIMER_STATS=y
++# CONFIG_DEBUG_PREEMPT is not set
++CONFIG_LATENCYTOP=y
++CONFIG_IRQSOFF_TRACER=y
+ CONFIG_SCHED_TRACER=y
+ CONFIG_STACK_TRACER=y
++CONFIG_BLK_DEV_IO_TRACE=y
++# CONFIG_KPROBE_EVENT is not set
+ CONFIG_FUNCTION_PROFILER=y
+ CONFIG_TEST_KSTRTOX=y
+ CONFIG_KGDB=y
+ CONFIG_KGDB_KDB=y
++CONFIG_KDB_KEYBOARD=y
+ CONFIG_STRICT_DEVMEM=y
+ CONFIG_DEBUG_LL=y
+ CONFIG_EARLY_PRINTK=y
++CONFIG_CRYPTO_USER=m
++CONFIG_CRYPTO_CRYPTD=m
++CONFIG_CRYPTO_CBC=y
++CONFIG_CRYPTO_CTS=m
++CONFIG_CRYPTO_XTS=m
++CONFIG_CRYPTO_XCBC=m
++CONFIG_CRYPTO_SHA512=m
++CONFIG_CRYPTO_TGR192=m
++CONFIG_CRYPTO_WP512=m
++CONFIG_CRYPTO_CAST5=m
++CONFIG_CRYPTO_DES=y
++# CONFIG_CRYPTO_HW is not set
++CONFIG_ARM_CRYPTO=y
++CONFIG_CRYPTO_SHA1_ARM=m
++CONFIG_CRYPTO_AES_ARM=m
++CONFIG_CRC_ITU_T=y
++CONFIG_LIBCRC32C=y
+ # CONFIG_XZ_DEC_ARM is not set
+ # CONFIG_XZ_DEC_ARMTHUMB is not set
diff --git a/target/linux/brcm2708/patches-4.4/0079-rpi-ft5406-Add-touchscreen-driver-for-pi-LCD-display.patch b/target/linux/brcm2708/patches-4.4/0079-rpi-ft5406-Add-touchscreen-driver-for-pi-LCD-display.patch
new file mode 100644
index 0000000..f1721f8
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0079-rpi-ft5406-Add-touchscreen-driver-for-pi-LCD-display.patch
@@ -0,0 +1,290 @@
+From 5152e89db66a2004ebd3d1629b4c1656672c3c37 Mon Sep 17 00:00:00 2001
+From: Gordon Hollingworth <gordon at raspberrypi.org>
+Date: Tue, 12 May 2015 14:47:56 +0100
+Subject: [PATCH 079/127] rpi-ft5406: Add touchscreen driver for pi LCD display
+
+Fix driver detection failure: check that the buffer response is non-zero, meaning the touchscreen was detected
+
+rpi-ft5406: Use firmware API
+---
+ drivers/input/touchscreen/Kconfig      |   7 +
+ drivers/input/touchscreen/Makefile     |   1 +
+ drivers/input/touchscreen/rpi-ft5406.c | 246 +++++++++++++++++++++++++++++++++
+ 3 files changed, 254 insertions(+)
+ create mode 100644 drivers/input/touchscreen/rpi-ft5406.c
+
+--- a/drivers/input/touchscreen/Kconfig
++++ b/drivers/input/touchscreen/Kconfig
+@@ -608,6 +608,13 @@ config TOUCHSCREEN_EDT_FT5X06
+ 	  To compile this driver as a module, choose M here: the
+ 	  module will be called edt-ft5x06.
+ 
++config TOUCHSCREEN_RPI_FT5406
++	tristate "Raspberry Pi FT5406 driver"
++	depends on RASPBERRYPI_FIRMWARE
++	help
++	  Say Y here to enable the Raspberry Pi memory based FT5406 device
++
++
+ config TOUCHSCREEN_MIGOR
+ 	tristate "Renesas MIGO-R touchscreen"
+ 	depends on SH_MIGOR && I2C
+--- a/drivers/input/touchscreen/Makefile
++++ b/drivers/input/touchscreen/Makefile
+@@ -29,6 +29,7 @@ obj-$(CONFIG_TOUCHSCREEN_DA9034)	+= da90
+ obj-$(CONFIG_TOUCHSCREEN_DA9052)	+= da9052_tsi.o
+ obj-$(CONFIG_TOUCHSCREEN_DYNAPRO)	+= dynapro.o
+ obj-$(CONFIG_TOUCHSCREEN_EDT_FT5X06)	+= edt-ft5x06.o
++obj-$(CONFIG_TOUCHSCREEN_RPI_FT5406)	+= rpi-ft5406.o
+ obj-$(CONFIG_TOUCHSCREEN_HAMPSHIRE)	+= hampshire.o
+ obj-$(CONFIG_TOUCHSCREEN_GUNZE)		+= gunze.o
+ obj-$(CONFIG_TOUCHSCREEN_EETI)		+= eeti_ts.o
+--- /dev/null
++++ b/drivers/input/touchscreen/rpi-ft5406.c
+@@ -0,0 +1,246 @@
++/*
++ * Driver for memory based ft5406 touchscreen
++ *
++ * Copyright (C) 2015 Raspberry Pi
++ *
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License version 2 as
++ * published by the Free Software Foundation.
++ */
++
++
++#include <linux/module.h>
++#include <linux/interrupt.h>
++#include <linux/input.h>
++#include <linux/irq.h>
++#include <linux/delay.h>
++#include <linux/slab.h>
++#include <linux/bitops.h>
++#include <linux/input/mt.h>
++#include <linux/kthread.h>
++#include <linux/platform_device.h>
++#include <asm/io.h>
++#include <soc/bcm2835/raspberrypi-firmware.h>
++
++#define MAXIMUM_SUPPORTED_POINTS 10
++struct ft5406_regs {
++	uint8_t device_mode;
++	uint8_t gesture_id;
++	uint8_t num_points;
++	struct ft5406_touch {
++		uint8_t xh;
++		uint8_t xl;
++		uint8_t yh;
++		uint8_t yl;
++		uint8_t res1;
++		uint8_t res2;
++	} point[MAXIMUM_SUPPORTED_POINTS];
++};
++
++#define SCREEN_WIDTH  800
++#define SCREEN_HEIGHT 480
++
++struct ft5406 {
++	struct platform_device * pdev;
++	struct input_dev       * input_dev;
++	void __iomem           * ts_base;
++	struct ft5406_regs     * regs;
++	struct task_struct     * thread;
++};
++
++/* Thread to poll for touchscreen events
++ * 
++ * This thread polls the memory based register copy of the ft5406 registers
++ * using the number of points register to know whether the copy has been
++ * updated (we write 99 to the memory copy, the GPU will write between 
++ * 0 - 10 points)
++ */
++static int ft5406_thread(void *arg)
++{
++	struct ft5406 *ts = (struct ft5406 *) arg;
++	struct ft5406_regs regs;
++	int known_ids = 0;
++	
++	while(!kthread_should_stop())
++	{
++		// 60fps polling
++		msleep_interruptible(17);
++		memcpy_fromio(&regs, ts->regs, sizeof(*ts->regs));
++		writel(99, &ts->regs->num_points);
++		// Do not output if there's no new information (num_points is 99)
++		// or we have no touch points and don't need to release any
++		if(!(regs.num_points == 99 || (regs.num_points == 0 && known_ids == 0)))
++		{
++			int i;
++			int modified_ids = 0, released_ids;
++			for(i = 0; i < regs.num_points; i++)
++			{
++				int x = (((int) regs.point[i].xh & 0xf) << 8) + regs.point[i].xl;
++				int y = (((int) regs.point[i].yh & 0xf) << 8) + regs.point[i].yl;
++				int touchid = (regs.point[i].yh >> 4) & 0xf;
++				
++				modified_ids |= 1 << touchid;
++
++				if(!((1 << touchid) & known_ids))
++					dev_dbg(&ts->pdev->dev, "x = %d, y = %d, touchid = %d\n", x, y, touchid);
++				
++				input_mt_slot(ts->input_dev, touchid);
++				input_mt_report_slot_state(ts->input_dev, MT_TOOL_FINGER, 1);
++
++				input_report_abs(ts->input_dev, ABS_MT_POSITION_X, x);
++				input_report_abs(ts->input_dev, ABS_MT_POSITION_Y, y);
++
++			}
++
++			released_ids = known_ids & ~modified_ids;
++			for(i = 0; released_ids && i < MAXIMUM_SUPPORTED_POINTS; i++)
++			{
++				if(released_ids & (1<<i))
++				{
++					dev_dbg(&ts->pdev->dev, "Released %d, known = %x modified = %x\n", i, known_ids, modified_ids);
++					input_mt_slot(ts->input_dev, i);
++					input_mt_report_slot_state(ts->input_dev, MT_TOOL_FINGER, 0);
++					modified_ids &= ~(1 << i);
++				}
++			}
++			known_ids = modified_ids;
++			
++			input_mt_report_pointer_emulation(ts->input_dev, true);
++			input_sync(ts->input_dev);
++		}
++			
++	}
++	
++	return 0;
++}
++
++static int ft5406_probe(struct platform_device *pdev)
++{
++	int ret;
++	struct input_dev * input_dev = input_allocate_device();
++	struct ft5406 * ts;
++	struct device_node *fw_node;
++	struct rpi_firmware *fw;
++	u32 touchbuf;
++	
++	dev_info(&pdev->dev, "Probing device\n");
++	
++	fw_node = of_parse_phandle(pdev->dev.of_node, "firmware", 0);
++	if (!fw_node) {
++		dev_err(&pdev->dev, "Missing firmware node\n");
++		return -ENOENT;
++	}
++
++	fw = rpi_firmware_get(fw_node);
++	if (!fw)
++		return -EPROBE_DEFER;
++
++	ret = rpi_firmware_property(fw, RPI_FIRMWARE_FRAMEBUFFER_GET_TOUCHBUF,
++				    &touchbuf, sizeof(touchbuf));
++	if (ret) {
++		dev_err(&pdev->dev, "Failed to get touch buffer\n");
++		return ret;
++	}
++
++	if (!touchbuf) {
++		dev_err(&pdev->dev, "Touchscreen not detected\n");
++		return -ENODEV;
++	}
++
++	dev_dbg(&pdev->dev, "Got TS buffer 0x%x\n", touchbuf);
++
++	ts = kzalloc(sizeof(struct ft5406), GFP_KERNEL);
++
++	if (!ts || !input_dev) {
++		ret = -ENOMEM;
++		dev_err(&pdev->dev, "Failed to allocate memory\n");
++		return ret;
++	}
++	ts->input_dev = input_dev;
++	platform_set_drvdata(pdev, ts);
++	ts->pdev = pdev;
++	
++	input_dev->name = "FT5406 memory based driver";
++	
++	__set_bit(EV_KEY, input_dev->evbit);
++	__set_bit(EV_SYN, input_dev->evbit);
++	__set_bit(EV_ABS, input_dev->evbit);
++
++	input_set_abs_params(input_dev, ABS_MT_POSITION_X, 0,
++			     SCREEN_WIDTH, 0, 0);
++	input_set_abs_params(input_dev, ABS_MT_POSITION_Y, 0,
++			     SCREEN_HEIGHT, 0, 0);
++
++	input_mt_init_slots(input_dev, MAXIMUM_SUPPORTED_POINTS, INPUT_MT_DIRECT);
++
++	input_set_drvdata(input_dev, ts);
++	
++	ret = input_register_device(input_dev);
++	if (ret) {
++		dev_err(&pdev->dev, "could not register input device, %d\n",
++			ret);
++		return ret;
++	}
++	
++	// mmap the physical memory
++	touchbuf &= ~0xc0000000;
++	ts->ts_base = ioremap(touchbuf, sizeof(*ts->regs));
++	if(ts->ts_base == NULL)
++	{
++		dev_err(&pdev->dev, "Failed to map physical address\n");
++		input_unregister_device(input_dev);
++		kzfree(ts);
++		return -ENOMEM;
++	}
++	
++	ts->regs = (struct ft5406_regs *) ts->ts_base;
++
++	// create thread to poll the touch events
++	ts->thread = kthread_run(ft5406_thread, ts, "ft5406");
++	if(ts->thread == NULL)
++	{
++		dev_err(&pdev->dev, "Failed to create kernel thread");
++		iounmap(ts->ts_base);
++		input_unregister_device(input_dev);
++		kzfree(ts);
++	}
++
++	return 0;
++}
++
++static int ft5406_remove(struct platform_device *pdev)
++{
++	struct ft5406 *ts = (struct ft5406 *) platform_get_drvdata(pdev);
++	
++	dev_info(&pdev->dev, "Removing rpi-ft5406\n");
++	
++	kthread_stop(ts->thread);
++	iounmap(ts->ts_base);
++	input_unregister_device(ts->input_dev);
++	kzfree(ts);
++	
++	return 0;
++}
++
++static const struct of_device_id ft5406_match[] = {
++	{ .compatible = "rpi,rpi-ft5406", },
++	{},
++};
++MODULE_DEVICE_TABLE(of, ft5406_match);
++
++static struct platform_driver ft5406_driver = {
++	.driver = {
++		.name   = "rpi-ft5406",
++		.owner  = THIS_MODULE,
++		.of_match_table = ft5406_match,
++	},
++	.probe          = ft5406_probe,
++	.remove         = ft5406_remove,
++};
++
++module_platform_driver(ft5406_driver);
++
++MODULE_AUTHOR("Gordon Hollingworth");
++MODULE_DESCRIPTION("Touchscreen driver for memory based FT5406");
++MODULE_LICENSE("GPL");
diff --git a/target/linux/brcm2708/patches-4.4/0080-Improve-__copy_to_user-and-__copy_from_user-performa.patch b/target/linux/brcm2708/patches-4.4/0080-Improve-__copy_to_user-and-__copy_from_user-performa.patch
new file mode 100644
index 0000000..10f1184
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0080-Improve-__copy_to_user-and-__copy_from_user-performa.patch
@@ -0,0 +1,1510 @@
+From 61d24c12473972a4eb6b259f297d65a03fc09bda Mon Sep 17 00:00:00 2001
+From: Phil Elwell <phil at raspberrypi.org>
+Date: Mon, 13 Oct 2014 11:47:53 +0100
+Subject: [PATCH 080/127] Improve __copy_to_user and __copy_from_user
+ performance
+
+Provide a __copy_from_user that uses memcpy. On BCM2708, use
+optimised memcpy/memmove/memcmp/memset implementations.
+
+arch/arm: Add mmiocpy/set aliases for memcpy/set
+
+See: https://github.com/raspberrypi/linux/issues/1082
+---
+ arch/arm/include/asm/string.h      |   5 +
+ arch/arm/include/asm/uaccess.h     |   3 +
+ arch/arm/lib/Makefile              |  15 +-
+ arch/arm/lib/arm-mem.h             | 159 ++++++++++++
+ arch/arm/lib/copy_from_user.S      |   4 +-
+ arch/arm/lib/exports_rpi.c         |  37 +++
+ arch/arm/lib/memcmp_rpi.S          | 285 +++++++++++++++++++++
+ arch/arm/lib/memcpy_rpi.S          |  61 +++++
+ arch/arm/lib/memcpymove.h          | 506 +++++++++++++++++++++++++++++++++++++
+ arch/arm/lib/memmove_rpi.S         |  61 +++++
+ arch/arm/lib/memset_rpi.S          | 123 +++++++++
+ arch/arm/lib/uaccess_with_memcpy.c | 112 +++++++-
+ 12 files changed, 1365 insertions(+), 6 deletions(-)
+ create mode 100644 arch/arm/lib/arm-mem.h
+ create mode 100644 arch/arm/lib/exports_rpi.c
+ create mode 100644 arch/arm/lib/memcmp_rpi.S
+ create mode 100644 arch/arm/lib/memcpy_rpi.S
+ create mode 100644 arch/arm/lib/memcpymove.h
+ create mode 100644 arch/arm/lib/memmove_rpi.S
+ create mode 100644 arch/arm/lib/memset_rpi.S
+
+--- a/arch/arm/include/asm/string.h
++++ b/arch/arm/include/asm/string.h
+@@ -24,6 +24,11 @@ extern void * memchr(const void *, int,
+ #define __HAVE_ARCH_MEMSET
+ extern void * memset(void *, int, __kernel_size_t);
+ 
++#ifdef CONFIG_MACH_BCM2708
++#define __HAVE_ARCH_MEMCMP
++extern int memcmp(const void *, const void *, size_t);
++#endif
++
+ extern void __memzero(void *ptr, __kernel_size_t n);
+ 
+ #define memset(p,v,n)							\
+--- a/arch/arm/include/asm/uaccess.h
++++ b/arch/arm/include/asm/uaccess.h
+@@ -493,6 +493,9 @@ do {									\
+ extern unsigned long __must_check
+ arm_copy_from_user(void *to, const void __user *from, unsigned long n);
+ 
++extern unsigned long __must_check
++__copy_from_user_std(void *to, const void __user *from, unsigned long n);
++
+ static inline unsigned long __must_check
+ __copy_from_user(void *to, const void __user *from, unsigned long n)
+ {
+--- a/arch/arm/lib/Makefile
++++ b/arch/arm/lib/Makefile
+@@ -6,9 +6,8 @@
+ 
+ lib-y		:= backtrace.o changebit.o csumipv6.o csumpartial.o   \
+ 		   csumpartialcopy.o csumpartialcopyuser.o clearbit.o \
+-		   delay.o delay-loop.o findbit.o memchr.o memcpy.o   \
+-		   memmove.o memset.o memzero.o setbit.o              \
+-		   strchr.o strrchr.o                                 \
++		   delay.o delay-loop.o findbit.o memchr.o memzero.o  \
++		   setbit.o strchr.o strrchr.o                        \
+ 		   testchangebit.o testclearbit.o testsetbit.o        \
+ 		   ashldi3.o ashrdi3.o lshrdi3.o muldi3.o             \
+ 		   ucmpdi2.o lib1funcs.o div64.o                      \
+@@ -18,6 +17,16 @@ lib-y		:= backtrace.o changebit.o csumip
+ mmu-y		:= clear_user.o copy_page.o getuser.o putuser.o       \
+ 		   copy_from_user.o copy_to_user.o
+ 
++# Choose optimised implementations for Raspberry Pi
++ifeq ($(CONFIG_MACH_BCM2708),y)
++  CFLAGS_uaccess_with_memcpy.o += -DCOPY_FROM_USER_THRESHOLD=1600
++  CFLAGS_uaccess_with_memcpy.o += -DCOPY_TO_USER_THRESHOLD=672
++  obj-$(CONFIG_MODULES) += exports_rpi.o
++  lib-y        += memcpy_rpi.o memmove_rpi.o memset_rpi.o memcmp_rpi.o
++else
++  lib-y        += memcpy.o memmove.o memset.o
++endif
++
+ # using lib_ here won't override already available weak symbols
+ obj-$(CONFIG_UACCESS_WITH_MEMCPY) += uaccess_with_memcpy.o
+ 
+--- /dev/null
++++ b/arch/arm/lib/arm-mem.h
+@@ -0,0 +1,159 @@
++/*
++Copyright (c) 2013, Raspberry Pi Foundation
++Copyright (c) 2013, RISC OS Open Ltd
++All rights reserved.
++
++Redistribution and use in source and binary forms, with or without
++modification, are permitted provided that the following conditions are met:
++    * Redistributions of source code must retain the above copyright
++      notice, this list of conditions and the following disclaimer.
++    * Redistributions in binary form must reproduce the above copyright
++      notice, this list of conditions and the following disclaimer in the
++      documentation and/or other materials provided with the distribution.
++    * Neither the name of the copyright holder nor the
++      names of its contributors may be used to endorse or promote products
++      derived from this software without specific prior written permission.
++
++THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
++ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
++WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
++DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY
++DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
++(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
++LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
++ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
++(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
++SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
++*/
++
++.macro myfunc fname
++ .func fname
++ .global fname
++fname:
++.endm
++
++.macro preload_leading_step1  backwards, ptr, base
++/* If the destination is already 16-byte aligned, then we need to preload
++ * between 0 and prefetch_distance (inclusive) cache lines ahead so there
++ * are no gaps when the inner loop starts.
++ */
++ .if backwards
++        sub     ptr, base, #1
++        bic     ptr, ptr, #31
++ .else
++        bic     ptr, base, #31
++ .endif
++ .set OFFSET, 0
++ .rept prefetch_distance+1
++        pld     [ptr, #OFFSET]
++  .if backwards
++   .set OFFSET, OFFSET-32
++  .else
++   .set OFFSET, OFFSET+32
++  .endif
++ .endr
++.endm
++
++.macro preload_leading_step2  backwards, ptr, base, leading_bytes, tmp
++/* However, if the destination is not 16-byte aligned, we may need to
++ * preload one more cache line than that. The question we need to ask is:
++ * are the leading bytes more than the amount by which the source
++ * pointer will be rounded down for preloading, and if so, by how many
++ * cache lines?
++ */
++ .if backwards
++/* Here we compare against how many bytes we are into the
++ * cache line, counting down from the highest such address.
++ * Effectively, we want to calculate
++ *     leading_bytes = dst&15
++ *     cacheline_offset = 31-((src-leading_bytes-1)&31)
++ *     extra_needed = leading_bytes - cacheline_offset
++ * and test if extra_needed is <= 0, or rearranging:
++ *     leading_bytes + (src-leading_bytes-1)&31 <= 31
++ */
++        mov     tmp, base, lsl #32-5
++        sbc     tmp, tmp, leading_bytes, lsl #32-5
++        adds    tmp, tmp, leading_bytes, lsl #32-5
++        bcc     61f
++        pld     [ptr, #-32*(prefetch_distance+1)]
++ .else
++/* Effectively, we want to calculate
++ *     leading_bytes = (-dst)&15
++ *     cacheline_offset = (src+leading_bytes)&31
++ *     extra_needed = leading_bytes - cacheline_offset
++ * and test if extra_needed is <= 0.
++ */
++        mov     tmp, base, lsl #32-5
++        add     tmp, tmp, leading_bytes, lsl #32-5
++        rsbs    tmp, tmp, leading_bytes, lsl #32-5
++        bls     61f
++        pld     [ptr, #32*(prefetch_distance+1)]
++ .endif
++61:
++.endm
++
++.macro preload_trailing  backwards, base, remain, tmp
++        /* We need either 0, 1 or 2 extra preloads */
++ .if backwards
++        rsb     tmp, base, #0
++        mov     tmp, tmp, lsl #32-5
++ .else
++        mov     tmp, base, lsl #32-5
++ .endif
++        adds    tmp, tmp, remain, lsl #32-5
++        adceqs  tmp, tmp, #0
++        /* The instruction above has two effects: ensures Z is only
++         * set if C was clear (so Z indicates that both shifted quantities
++         * were 0), and clears C if Z was set (so C indicates that the sum
++         * of the shifted quantities was greater and not equal to 32) */
++        beq     82f
++ .if backwards
++        sub     tmp, base, #1
++        bic     tmp, tmp, #31
++ .else
++        bic     tmp, base, #31
++ .endif
++        bcc     81f
++ .if backwards
++        pld     [tmp, #-32*(prefetch_distance+1)]
++81:
++        pld     [tmp, #-32*prefetch_distance]
++ .else
++        pld     [tmp, #32*(prefetch_distance+2)]
++81:
++        pld     [tmp, #32*(prefetch_distance+1)]
++ .endif
++82:
++.endm
++
++.macro preload_all    backwards, narrow_case, shift, base, remain, tmp0, tmp1
++ .if backwards
++        sub     tmp0, base, #1
++        bic     tmp0, tmp0, #31
++        pld     [tmp0]
++        sub     tmp1, base, remain, lsl #shift
++ .else
++        bic     tmp0, base, #31
++        pld     [tmp0]
++        add     tmp1, base, remain, lsl #shift
++        sub     tmp1, tmp1, #1
++ .endif
++        bic     tmp1, tmp1, #31
++        cmp     tmp1, tmp0
++        beq     92f
++ .if narrow_case
++        /* In this case, all the data fits in either 1 or 2 cache lines */
++        pld     [tmp1]
++ .else
++91:
++  .if backwards
++        sub     tmp0, tmp0, #32
++  .else
++        add     tmp0, tmp0, #32
++  .endif
++        cmp     tmp0, tmp1
++        pld     [tmp0]
++        bne     91b
++ .endif
++92:
++.endm
+--- a/arch/arm/lib/copy_from_user.S
++++ b/arch/arm/lib/copy_from_user.S
+@@ -89,11 +89,13 @@
+ 
+ 	.text
+ 
+-ENTRY(arm_copy_from_user)
++ENTRY(__copy_from_user_std)
++WEAK(arm_copy_from_user)
+ 
+ #include "copy_template.S"
+ 
+ ENDPROC(arm_copy_from_user)
++ENDPROC(__copy_from_user_std)
+ 
+ 	.pushsection .fixup,"ax"
+ 	.align 0
+--- /dev/null
++++ b/arch/arm/lib/exports_rpi.c
+@@ -0,0 +1,37 @@
++/**
++ * Copyright (c) 2014, Raspberry Pi (Trading) Ltd.
++ *
++ * Redistribution and use in source and binary forms, with or without
++ * modification, are permitted provided that the following conditions
++ * are met:
++ * 1. Redistributions of source code must retain the above copyright
++ *    notice, this list of conditions, and the following disclaimer,
++ *    without modification.
++ * 2. Redistributions in binary form must reproduce the above copyright
++ *    notice, this list of conditions and the following disclaimer in the
++ *    documentation and/or other materials provided with the distribution.
++ * 3. The names of the above-listed copyright holders may not be used
++ *    to endorse or promote products derived from this software without
++ *    specific prior written permission.
++ *
++ * ALTERNATIVELY, this software may be distributed under the terms of the
++ * GNU General Public License ("GPL") version 2, as published by the Free
++ * Software Foundation.
++ *
++ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
++ * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
++ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
++ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
++ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
++ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
++ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
++ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
++ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
++ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
++ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
++ */
++
++#include <linux/kernel.h>
++#include <linux/module.h>
++
++EXPORT_SYMBOL(memcmp);
+--- /dev/null
++++ b/arch/arm/lib/memcmp_rpi.S
+@@ -0,0 +1,285 @@
++/*
++Copyright (c) 2013, Raspberry Pi Foundation
++Copyright (c) 2013, RISC OS Open Ltd
++All rights reserved.
++
++Redistribution and use in source and binary forms, with or without
++modification, are permitted provided that the following conditions are met:
++    * Redistributions of source code must retain the above copyright
++      notice, this list of conditions and the following disclaimer.
++    * Redistributions in binary form must reproduce the above copyright
++      notice, this list of conditions and the following disclaimer in the
++      documentation and/or other materials provided with the distribution.
++    * Neither the name of the copyright holder nor the
++      names of its contributors may be used to endorse or promote products
++      derived from this software without specific prior written permission.
++
++THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
++ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
++WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
++DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY
++DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
++(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
++LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
++ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
++(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
++SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
++*/
++
++#include <linux/linkage.h>
++#include "arm-mem.h"
++
++/* Prevent the stack from becoming executable */
++#if defined(__linux__) && defined(__ELF__)
++.section .note.GNU-stack,"",%progbits
++#endif
++
++    .text
++    .arch armv6
++    .object_arch armv4
++    .arm
++    .altmacro
++    .p2align 2
++
++.macro memcmp_process_head  unaligned
++ .if unaligned
++        ldr     DAT0, [S_1], #4
++        ldr     DAT1, [S_1], #4
++        ldr     DAT2, [S_1], #4
++        ldr     DAT3, [S_1], #4
++ .else
++        ldmia   S_1!, {DAT0, DAT1, DAT2, DAT3}
++ .endif
++        ldmia   S_2!, {DAT4, DAT5, DAT6, DAT7}
++.endm
++
++.macro memcmp_process_tail
++        cmp     DAT0, DAT4
++        cmpeq   DAT1, DAT5
++        cmpeq   DAT2, DAT6
++        cmpeq   DAT3, DAT7
++        bne     200f
++.endm
++
++.macro memcmp_leading_31bytes
++        movs    DAT0, OFF, lsl #31
++        ldrmib  DAT0, [S_1], #1
++        ldrcsh  DAT1, [S_1], #2
++        ldrmib  DAT4, [S_2], #1
++        ldrcsh  DAT5, [S_2], #2
++        movpl   DAT0, #0
++        movcc   DAT1, #0
++        movpl   DAT4, #0
++        movcc   DAT5, #0
++        submi   N, N, #1
++        subcs   N, N, #2
++        cmp     DAT0, DAT4
++        cmpeq   DAT1, DAT5
++        bne     200f
++        movs    DAT0, OFF, lsl #29
++        ldrmi   DAT0, [S_1], #4
++        ldrcs   DAT1, [S_1], #4
++        ldrcs   DAT2, [S_1], #4
++        ldrmi   DAT4, [S_2], #4
++        ldmcsia S_2!, {DAT5, DAT6}
++        movpl   DAT0, #0
++        movcc   DAT1, #0
++        movcc   DAT2, #0
++        movpl   DAT4, #0
++        movcc   DAT5, #0
++        movcc   DAT6, #0
++        submi   N, N, #4
++        subcs   N, N, #8
++        cmp     DAT0, DAT4
++        cmpeq   DAT1, DAT5
++        cmpeq   DAT2, DAT6
++        bne     200f
++        tst     OFF, #16
++        beq     105f
++        memcmp_process_head  1
++        sub     N, N, #16
++        memcmp_process_tail
++105:
++.endm
++
++.macro memcmp_trailing_15bytes  unaligned
++        movs    N, N, lsl #29
++ .if unaligned
++        ldrcs   DAT0, [S_1], #4
++        ldrcs   DAT1, [S_1], #4
++ .else
++        ldmcsia S_1!, {DAT0, DAT1}
++ .endif
++        ldrmi   DAT2, [S_1], #4
++        ldmcsia S_2!, {DAT4, DAT5}
++        ldrmi   DAT6, [S_2], #4
++        movcc   DAT0, #0
++        movcc   DAT1, #0
++        movpl   DAT2, #0
++        movcc   DAT4, #0
++        movcc   DAT5, #0
++        movpl   DAT6, #0
++        cmp     DAT0, DAT4
++        cmpeq   DAT1, DAT5
++        cmpeq   DAT2, DAT6
++        bne     200f
++        movs    N, N, lsl #2
++        ldrcsh  DAT0, [S_1], #2
++        ldrmib  DAT1, [S_1]
++        ldrcsh  DAT4, [S_2], #2
++        ldrmib  DAT5, [S_2]
++        movcc   DAT0, #0
++        movpl   DAT1, #0
++        movcc   DAT4, #0
++        movpl   DAT5, #0
++        cmp     DAT0, DAT4
++        cmpeq   DAT1, DAT5
++        bne     200f
++.endm
++
++.macro memcmp_long_inner_loop  unaligned
++110:
++        memcmp_process_head  unaligned
++        pld     [S_2, #prefetch_distance*32 + 16]
++        memcmp_process_tail
++        memcmp_process_head  unaligned
++        pld     [S_1, OFF]
++        memcmp_process_tail
++        subs    N, N, #32
++        bhs     110b
++        /* Just before the final (prefetch_distance+1) 32-byte blocks,
++         * deal with final preloads */
++        preload_trailing  0, S_1, N, DAT0
++        preload_trailing  0, S_2, N, DAT0
++        add     N, N, #(prefetch_distance+2)*32 - 16
++120:
++        memcmp_process_head  unaligned
++        memcmp_process_tail
++        subs    N, N, #16
++        bhs     120b
++        /* Trailing words and bytes */
++        tst     N, #15
++        beq     199f
++        memcmp_trailing_15bytes  unaligned
++199:    /* Reached end without detecting a difference */
++        mov     a1, #0
++        setend  le
++        pop     {DAT1-DAT6, pc}
++.endm
++
++.macro memcmp_short_inner_loop  unaligned
++        subs    N, N, #16     /* simplifies inner loop termination */
++        blo     122f
++120:
++        memcmp_process_head  unaligned
++        memcmp_process_tail
++        subs    N, N, #16
++        bhs     120b
++122:    /* Trailing words and bytes */
++        tst     N, #15
++        beq     199f
++        memcmp_trailing_15bytes  unaligned
++199:    /* Reached end without detecting a difference */
++        mov     a1, #0
++        setend  le
++        pop     {DAT1-DAT6, pc}
++.endm
++
++/*
++ * int memcmp(const void *s1, const void *s2, size_t n);
++ * On entry:
++ * a1 = pointer to buffer 1
++ * a2 = pointer to buffer 2
++ * a3 = number of bytes to compare (as unsigned chars)
++ * On exit:
++ * a1 = >0/=0/<0 if s1 >/=/< s2
++ */
++
++.set prefetch_distance, 2
++
++ENTRY(memcmp)
++        S_1     .req    a1
++        S_2     .req    a2
++        N       .req    a3
++        DAT0    .req    a4
++        DAT1    .req    v1
++        DAT2    .req    v2
++        DAT3    .req    v3
++        DAT4    .req    v4
++        DAT5    .req    v5
++        DAT6    .req    v6
++        DAT7    .req    ip
++        OFF     .req    lr
++
++        push    {DAT1-DAT6, lr}
++        setend  be /* lowest-addressed bytes are most significant */
++
++        /* To preload ahead as we go, we need at least (prefetch_distance+2) 32-byte blocks */
++        cmp     N, #(prefetch_distance+3)*32 - 1
++        blo     170f
++
++        /* Long case */
++        /* Adjust N so that the decrement instruction can also test for
++         * inner loop termination. We want it to stop when there are
++         * (prefetch_distance+1) complete blocks to go. */
++        sub     N, N, #(prefetch_distance+2)*32
++        preload_leading_step1  0, DAT0, S_1
++        preload_leading_step1  0, DAT1, S_2
++        tst     S_2, #31
++        beq     154f
++        rsb     OFF, S_2, #0 /* no need to AND with 15 here */
++        preload_leading_step2  0, DAT0, S_1, OFF, DAT2
++        preload_leading_step2  0, DAT1, S_2, OFF, DAT2
++        memcmp_leading_31bytes
++154:    /* Second source now cacheline (32-byte) aligned; we have at
++         * least one prefetch to go. */
++        /* Prefetch offset is best selected such that it lies in the
++         * first 8 of each 32 bytes - but it's just as easy to aim for
++         * the first one */
++        and     OFF, S_1, #31
++        rsb     OFF, OFF, #32*prefetch_distance
++        tst     S_1, #3
++        bne     140f
++        memcmp_long_inner_loop  0
++140:    memcmp_long_inner_loop  1
++
++170:    /* Short case */
++        teq     N, #0
++        beq     199f
++        preload_all 0, 0, 0, S_1, N, DAT0, DAT1
++        preload_all 0, 0, 0, S_2, N, DAT0, DAT1
++        tst     S_2, #3
++        beq     174f
++172:    subs    N, N, #1
++        blo     199f
++        ldrb    DAT0, [S_1], #1
++        ldrb    DAT4, [S_2], #1
++        cmp     DAT0, DAT4
++        bne     200f
++        tst     S_2, #3
++        bne     172b
++174:    /* Second source now 4-byte aligned; we have 0 or more bytes to go */
++        tst     S_1, #3
++        bne     140f
++        memcmp_short_inner_loop  0
++140:    memcmp_short_inner_loop  1
++
++200:    /* Difference found: determine sign. */
++        movhi   a1, #1
++        movlo   a1, #-1
++        setend  le
++        pop     {DAT1-DAT6, pc}
++
++        .unreq  S_1
++        .unreq  S_2
++        .unreq  N
++        .unreq  DAT0
++        .unreq  DAT1
++        .unreq  DAT2
++        .unreq  DAT3
++        .unreq  DAT4
++        .unreq  DAT5
++        .unreq  DAT6
++        .unreq  DAT7
++        .unreq  OFF
++ENDPROC(memcmp)
+--- /dev/null
++++ b/arch/arm/lib/memcpy_rpi.S
+@@ -0,0 +1,61 @@
++/*
++Copyright (c) 2013, Raspberry Pi Foundation
++Copyright (c) 2013, RISC OS Open Ltd
++All rights reserved.
++
++Redistribution and use in source and binary forms, with or without
++modification, are permitted provided that the following conditions are met:
++    * Redistributions of source code must retain the above copyright
++      notice, this list of conditions and the following disclaimer.
++    * Redistributions in binary form must reproduce the above copyright
++      notice, this list of conditions and the following disclaimer in the
++      documentation and/or other materials provided with the distribution.
++    * Neither the name of the copyright holder nor the
++      names of its contributors may be used to endorse or promote products
++      derived from this software without specific prior written permission.
++
++THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
++ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
++WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
++DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY
++DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
++(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
++LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
++ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
++(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
++SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
++*/
++
++#include <linux/linkage.h>
++#include "arm-mem.h"
++#include "memcpymove.h"
++
++/* Prevent the stack from becoming executable */
++#if defined(__linux__) && defined(__ELF__)
++.section .note.GNU-stack,"",%progbits
++#endif
++
++    .text
++    .arch armv6
++    .object_arch armv4
++    .arm
++    .altmacro
++    .p2align 2
++
++/*
++ * void *memcpy(void * restrict s1, const void * restrict s2, size_t n);
++ * On entry:
++ * a1 = pointer to destination
++ * a2 = pointer to source
++ * a3 = number of bytes to copy
++ * On exit:
++ * a1 preserved
++ */
++
++.set prefetch_distance, 3
++
++ENTRY(mmiocpy)
++ENTRY(memcpy)
++        memcpy  0
++ENDPROC(memcpy)
++ENDPROC(mmiocpy)
+--- /dev/null
++++ b/arch/arm/lib/memcpymove.h
+@@ -0,0 +1,506 @@
++/*
++Copyright (c) 2013, Raspberry Pi Foundation
++Copyright (c) 2013, RISC OS Open Ltd
++All rights reserved.
++
++Redistribution and use in source and binary forms, with or without
++modification, are permitted provided that the following conditions are met:
++    * Redistributions of source code must retain the above copyright
++      notice, this list of conditions and the following disclaimer.
++    * Redistributions in binary form must reproduce the above copyright
++      notice, this list of conditions and the following disclaimer in the
++      documentation and/or other materials provided with the distribution.
++    * Neither the name of the copyright holder nor the
++      names of its contributors may be used to endorse or promote products
++      derived from this software without specific prior written permission.
++
++THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
++ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
++WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
++DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY
++DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
++(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
++LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
++ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
++(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
++SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
++*/
++
++.macro unaligned_words  backwards, align, use_pld, words, r0, r1, r2, r3, r4, r5, r6, r7, r8
++ .if words == 1
++  .if backwards
++        mov     r1, r0, lsl #32-align*8
++        ldr     r0, [S, #-4]!
++        orr     r1, r1, r0, lsr #align*8
++        str     r1, [D, #-4]!
++  .else
++        mov     r0, r1, lsr #align*8
++        ldr     r1, [S, #4]!
++        orr     r0, r0, r1, lsl #32-align*8
++        str     r0, [D], #4
++  .endif
++ .elseif words == 2
++  .if backwards
++        ldr     r1, [S, #-4]!
++        mov     r2, r0, lsl #32-align*8
++        ldr     r0, [S, #-4]!
++        orr     r2, r2, r1, lsr #align*8
++        mov     r1, r1, lsl #32-align*8
++        orr     r1, r1, r0, lsr #align*8
++        stmdb   D!, {r1, r2}
++  .else
++        ldr     r1, [S, #4]!
++        mov     r0, r2, lsr #align*8
++        ldr     r2, [S, #4]!
++        orr     r0, r0, r1, lsl #32-align*8
++        mov     r1, r1, lsr #align*8
++        orr     r1, r1, r2, lsl #32-align*8
++        stmia   D!, {r0, r1}
++  .endif
++ .elseif words == 4
++  .if backwards
++        ldmdb   S!, {r2, r3}
++        mov     r4, r0, lsl #32-align*8
++        ldmdb   S!, {r0, r1}
++        orr     r4, r4, r3, lsr #align*8
++        mov     r3, r3, lsl #32-align*8
++        orr     r3, r3, r2, lsr #align*8
++        mov     r2, r2, lsl #32-align*8
++        orr     r2, r2, r1, lsr #align*8
++        mov     r1, r1, lsl #32-align*8
++        orr     r1, r1, r0, lsr #align*8
++        stmdb   D!, {r1, r2, r3, r4}
++  .else
++        ldmib   S!, {r1, r2}
++        mov     r0, r4, lsr #align*8
++        ldmib   S!, {r3, r4}
++        orr     r0, r0, r1, lsl #32-align*8
++        mov     r1, r1, lsr #align*8
++        orr     r1, r1, r2, lsl #32-align*8
++        mov     r2, r2, lsr #align*8
++        orr     r2, r2, r3, lsl #32-align*8
++        mov     r3, r3, lsr #align*8
++        orr     r3, r3, r4, lsl #32-align*8
++        stmia   D!, {r0, r1, r2, r3}
++  .endif
++ .elseif words == 8
++  .if backwards
++        ldmdb   S!, {r4, r5, r6, r7}
++        mov     r8, r0, lsl #32-align*8
++        ldmdb   S!, {r0, r1, r2, r3}
++   .if use_pld
++        pld     [S, OFF]
++   .endif
++        orr     r8, r8, r7, lsr #align*8
++        mov     r7, r7, lsl #32-align*8
++        orr     r7, r7, r6, lsr #align*8
++        mov     r6, r6, lsl #32-align*8
++        orr     r6, r6, r5, lsr #align*8
++        mov     r5, r5, lsl #32-align*8
++        orr     r5, r5, r4, lsr #align*8
++        mov     r4, r4, lsl #32-align*8
++        orr     r4, r4, r3, lsr #align*8
++        mov     r3, r3, lsl #32-align*8
++        orr     r3, r3, r2, lsr #align*8
++        mov     r2, r2, lsl #32-align*8
++        orr     r2, r2, r1, lsr #align*8
++        mov     r1, r1, lsl #32-align*8
++        orr     r1, r1, r0, lsr #align*8
++        stmdb   D!, {r5, r6, r7, r8}
++        stmdb   D!, {r1, r2, r3, r4}
++  .else
++        ldmib   S!, {r1, r2, r3, r4}
++        mov     r0, r8, lsr #align*8
++        ldmib   S!, {r5, r6, r7, r8}
++   .if use_pld
++        pld     [S, OFF]
++   .endif
++        orr     r0, r0, r1, lsl #32-align*8
++        mov     r1, r1, lsr #align*8
++        orr     r1, r1, r2, lsl #32-align*8
++        mov     r2, r2, lsr #align*8
++        orr     r2, r2, r3, lsl #32-align*8
++        mov     r3, r3, lsr #align*8
++        orr     r3, r3, r4, lsl #32-align*8
++        mov     r4, r4, lsr #align*8
++        orr     r4, r4, r5, lsl #32-align*8
++        mov     r5, r5, lsr #align*8
++        orr     r5, r5, r6, lsl #32-align*8
++        mov     r6, r6, lsr #align*8
++        orr     r6, r6, r7, lsl #32-align*8
++        mov     r7, r7, lsr #align*8
++        orr     r7, r7, r8, lsl #32-align*8
++        stmia   D!, {r0, r1, r2, r3}
++        stmia   D!, {r4, r5, r6, r7}
++  .endif
++ .endif
++.endm
++
++.macro memcpy_leading_15bytes  backwards, align
++        movs    DAT1, DAT2, lsl #31
++        sub     N, N, DAT2
++ .if backwards
++        ldrmib  DAT0, [S, #-1]!
++        ldrcsh  DAT1, [S, #-2]!
++        strmib  DAT0, [D, #-1]!
++        strcsh  DAT1, [D, #-2]!
++ .else
++        ldrmib  DAT0, [S], #1
++        ldrcsh  DAT1, [S], #2
++        strmib  DAT0, [D], #1
++        strcsh  DAT1, [D], #2
++ .endif
++        movs    DAT1, DAT2, lsl #29
++ .if backwards
++        ldrmi   DAT0, [S, #-4]!
++  .if align == 0
++        ldmcsdb S!, {DAT1, DAT2}
++  .else
++        ldrcs   DAT2, [S, #-4]!
++        ldrcs   DAT1, [S, #-4]!
++  .endif
++        strmi   DAT0, [D, #-4]!
++        stmcsdb D!, {DAT1, DAT2}
++ .else
++        ldrmi   DAT0, [S], #4
++  .if align == 0
++        ldmcsia S!, {DAT1, DAT2}
++  .else
++        ldrcs   DAT1, [S], #4
++        ldrcs   DAT2, [S], #4
++  .endif
++        strmi   DAT0, [D], #4
++        stmcsia D!, {DAT1, DAT2}
++ .endif
++.endm
++
++.macro memcpy_trailing_15bytes  backwards, align
++        movs    N, N, lsl #29
++ .if backwards
++  .if align == 0
++        ldmcsdb S!, {DAT0, DAT1}
++  .else
++        ldrcs   DAT1, [S, #-4]!
++        ldrcs   DAT0, [S, #-4]!
++  .endif
++        ldrmi   DAT2, [S, #-4]!
++        stmcsdb D!, {DAT0, DAT1}
++        strmi   DAT2, [D, #-4]!
++ .else
++  .if align == 0
++        ldmcsia S!, {DAT0, DAT1}
++  .else
++        ldrcs   DAT0, [S], #4
++        ldrcs   DAT1, [S], #4
++  .endif
++        ldrmi   DAT2, [S], #4
++        stmcsia D!, {DAT0, DAT1}
++        strmi   DAT2, [D], #4
++ .endif
++        movs    N, N, lsl #2
++ .if backwards
++        ldrcsh  DAT0, [S, #-2]!
++        ldrmib  DAT1, [S, #-1]
++        strcsh  DAT0, [D, #-2]!
++        strmib  DAT1, [D, #-1]
++ .else
++        ldrcsh  DAT0, [S], #2
++        ldrmib  DAT1, [S]
++        strcsh  DAT0, [D], #2
++        strmib  DAT1, [D]
++ .endif
++.endm
++
++.macro memcpy_long_inner_loop  backwards, align
++ .if align != 0
++  .if backwards
++        ldr     DAT0, [S, #-align]!
++  .else
++        ldr     LAST, [S, #-align]!
++  .endif
++ .endif
++110:
++ .if align == 0
++  .if backwards
++        ldmdb   S!, {DAT0, DAT1, DAT2, DAT3, DAT4, DAT5, DAT6, LAST}
++        pld     [S, OFF]
++        stmdb   D!, {DAT4, DAT5, DAT6, LAST}
++        stmdb   D!, {DAT0, DAT1, DAT2, DAT3}
++  .else
++        ldmia   S!, {DAT0, DAT1, DAT2, DAT3, DAT4, DAT5, DAT6, LAST}
++        pld     [S, OFF]
++        stmia   D!, {DAT0, DAT1, DAT2, DAT3}
++        stmia   D!, {DAT4, DAT5, DAT6, LAST}
++  .endif
++ .else
++        unaligned_words  backwards, align, 1, 8, DAT0, DAT1, DAT2, DAT3, DAT4, DAT5, DAT6, DAT7, LAST
++ .endif
++        subs    N, N, #32
++        bhs     110b
++        /* Just before the final (prefetch_distance+1) 32-byte blocks, deal with final preloads */
++        preload_trailing  backwards, S, N, OFF
++        add     N, N, #(prefetch_distance+2)*32 - 32
++120:
++ .if align == 0
++  .if backwards
++        ldmdb   S!, {DAT0, DAT1, DAT2, DAT3, DAT4, DAT5, DAT6, LAST}
++        stmdb   D!, {DAT4, DAT5, DAT6, LAST}
++        stmdb   D!, {DAT0, DAT1, DAT2, DAT3}
++  .else
++        ldmia   S!, {DAT0, DAT1, DAT2, DAT3, DAT4, DAT5, DAT6, LAST}
++        stmia   D!, {DAT0, DAT1, DAT2, DAT3}
++        stmia   D!, {DAT4, DAT5, DAT6, LAST}
++  .endif
++ .else
++        unaligned_words  backwards, align, 0, 8, DAT0, DAT1, DAT2, DAT3, DAT4, DAT5, DAT6, DAT7, LAST
++ .endif
++        subs    N, N, #32
++        bhs     120b
++        tst     N, #16
++ .if align == 0
++  .if backwards
++        ldmnedb S!, {DAT0, DAT1, DAT2, LAST}
++        stmnedb D!, {DAT0, DAT1, DAT2, LAST}
++  .else
++        ldmneia S!, {DAT0, DAT1, DAT2, LAST}
++        stmneia D!, {DAT0, DAT1, DAT2, LAST}
++  .endif
++ .else
++        beq     130f
++        unaligned_words  backwards, align, 0, 4, DAT0, DAT1, DAT2, DAT3, LAST
++130:
++ .endif
++        /* Trailing words and bytes */
++        tst      N, #15
++        beq      199f
++ .if align != 0
++        add     S, S, #align
++ .endif
++        memcpy_trailing_15bytes  backwards, align
++199:
++        pop     {DAT3, DAT4, DAT5, DAT6, DAT7}
++        pop     {D, DAT1, DAT2, pc}
++.endm
++
++.macro memcpy_medium_inner_loop  backwards, align
++120:
++ .if backwards
++  .if align == 0
++        ldmdb   S!, {DAT0, DAT1, DAT2, LAST}
++  .else
++        ldr     LAST, [S, #-4]!
++        ldr     DAT2, [S, #-4]!
++        ldr     DAT1, [S, #-4]!
++        ldr     DAT0, [S, #-4]!
++  .endif
++        stmdb   D!, {DAT0, DAT1, DAT2, LAST}
++ .else
++  .if align == 0
++        ldmia   S!, {DAT0, DAT1, DAT2, LAST}
++  .else
++        ldr     DAT0, [S], #4
++        ldr     DAT1, [S], #4
++        ldr     DAT2, [S], #4
++        ldr     LAST, [S], #4
++  .endif
++        stmia   D!, {DAT0, DAT1, DAT2, LAST}
++ .endif
++        subs     N, N, #16
++        bhs      120b
++        /* Trailing words and bytes */
++        tst      N, #15
++        beq      199f
++        memcpy_trailing_15bytes  backwards, align
++199:
++        pop     {D, DAT1, DAT2, pc}
++.endm
++
++.macro memcpy_short_inner_loop  backwards, align
++        tst     N, #16
++ .if backwards
++  .if align == 0
++        ldmnedb S!, {DAT0, DAT1, DAT2, LAST}
++  .else
++        ldrne   LAST, [S, #-4]!
++        ldrne   DAT2, [S, #-4]!
++        ldrne   DAT1, [S, #-4]!
++        ldrne   DAT0, [S, #-4]!
++  .endif
++        stmnedb D!, {DAT0, DAT1, DAT2, LAST}
++ .else
++  .if align == 0
++        ldmneia S!, {DAT0, DAT1, DAT2, LAST}
++  .else
++        ldrne   DAT0, [S], #4
++        ldrne   DAT1, [S], #4
++        ldrne   DAT2, [S], #4
++        ldrne   LAST, [S], #4
++  .endif
++        stmneia D!, {DAT0, DAT1, DAT2, LAST}
++ .endif
++        memcpy_trailing_15bytes  backwards, align
++199:
++        pop     {D, DAT1, DAT2, pc}
++.endm
++
++.macro memcpy backwards
++        D       .req    a1
++        S       .req    a2
++        N       .req    a3
++        DAT0    .req    a4
++        DAT1    .req    v1
++        DAT2    .req    v2
++        DAT3    .req    v3
++        DAT4    .req    v4
++        DAT5    .req    v5
++        DAT6    .req    v6
++        DAT7    .req    sl
++        LAST    .req    ip
++        OFF     .req    lr
++
++        .cfi_startproc
++
++        push    {D, DAT1, DAT2, lr}
++
++        .cfi_def_cfa_offset 16
++        .cfi_rel_offset D, 0
++        .cfi_undefined  S
++        .cfi_undefined  N
++        .cfi_undefined  DAT0
++        .cfi_rel_offset DAT1, 4
++        .cfi_rel_offset DAT2, 8
++        .cfi_undefined  LAST
++        .cfi_rel_offset lr, 12
++
++ .if backwards
++        add     D, D, N
++        add     S, S, N
++ .endif
++
++        /* See if we're guaranteed to have at least one 16-byte aligned 16-byte write */
++        cmp     N, #31
++        blo     170f
++        /* To preload ahead as we go, we need at least (prefetch_distance+2) 32-byte blocks */
++        cmp     N, #(prefetch_distance+3)*32 - 1
++        blo     160f
++
++        /* Long case */
++        push    {DAT3, DAT4, DAT5, DAT6, DAT7}
++
++        .cfi_def_cfa_offset 36
++        .cfi_rel_offset D, 20
++        .cfi_rel_offset DAT1, 24
++        .cfi_rel_offset DAT2, 28
++        .cfi_rel_offset DAT3, 0
++        .cfi_rel_offset DAT4, 4
++        .cfi_rel_offset DAT5, 8
++        .cfi_rel_offset DAT6, 12
++        .cfi_rel_offset DAT7, 16
++        .cfi_rel_offset lr, 32
++
++        /* Adjust N so that the decrement instruction can also test for
++         * inner loop termination. We want it to stop when there are
++         * (prefetch_distance+1) complete blocks to go. */
++        sub     N, N, #(prefetch_distance+2)*32
++        preload_leading_step1  backwards, DAT0, S
++ .if backwards
++        /* Bug in GAS: it accepts, but mis-assembles the instruction
++         * ands    DAT2, D, #60, 2
++         * which sets DAT2 to the number of leading bytes until destination is aligned and also clears C (sets borrow)
++         */
++        .word   0xE210513C
++        beq     154f
++ .else
++        ands    DAT2, D, #15
++        beq     154f
++        rsb     DAT2, DAT2, #16 /* number of leading bytes until destination aligned */
++ .endif
++        preload_leading_step2  backwards, DAT0, S, DAT2, OFF
++        memcpy_leading_15bytes backwards, 1
++154:    /* Destination now 16-byte aligned; we have at least one prefetch as well as at least one 16-byte output block */
++        /* Prefetch offset is best selected such that it lies in the first 8 of each 32 bytes - but it's just as easy to aim for the first one */
++ .if backwards
++        rsb     OFF, S, #3
++        and     OFF, OFF, #28
++        sub     OFF, OFF, #32*(prefetch_distance+1)
++ .else
++        and     OFF, S, #28
++        rsb     OFF, OFF, #32*prefetch_distance
++ .endif
++        movs    DAT0, S, lsl #31
++        bhi     157f
++        bcs     156f
++        bmi     155f
++        memcpy_long_inner_loop  backwards, 0
++155:    memcpy_long_inner_loop  backwards, 1
++156:    memcpy_long_inner_loop  backwards, 2
++157:    memcpy_long_inner_loop  backwards, 3
++
++        .cfi_def_cfa_offset 16
++        .cfi_rel_offset D, 0
++        .cfi_rel_offset DAT1, 4
++        .cfi_rel_offset DAT2, 8
++        .cfi_same_value DAT3
++        .cfi_same_value DAT4
++        .cfi_same_value DAT5
++        .cfi_same_value DAT6
++        .cfi_same_value DAT7
++        .cfi_rel_offset lr, 12
++
++160:    /* Medium case */
++        preload_all  backwards, 0, 0, S, N, DAT2, OFF
++        sub     N, N, #16     /* simplifies inner loop termination */
++ .if backwards
++        ands    DAT2, D, #15
++        beq     164f
++ .else
++        ands    DAT2, D, #15
++        beq     164f
++        rsb     DAT2, DAT2, #16
++ .endif
++        memcpy_leading_15bytes backwards, align
++164:    /* Destination now 16-byte aligned; we have at least one 16-byte output block */
++        tst     S, #3
++        bne     140f
++        memcpy_medium_inner_loop  backwards, 0
++140:    memcpy_medium_inner_loop  backwards, 1
++
++170:    /* Short case, less than 31 bytes, so no guarantee of at least one 16-byte block */
++        teq     N, #0
++        beq     199f
++        preload_all  backwards, 1, 0, S, N, DAT2, LAST
++        tst     D, #3
++        beq     174f
++172:    subs    N, N, #1
++        blo     199f
++ .if backwards
++        ldrb    DAT0, [S, #-1]!
++        strb    DAT0, [D, #-1]!
++ .else
++        ldrb    DAT0, [S], #1
++        strb    DAT0, [D], #1
++ .endif
++        tst     D, #3
++        bne     172b
++174:    /* Destination now 4-byte aligned; we have 0 or more output bytes to go */
++        tst     S, #3
++        bne     140f
++        memcpy_short_inner_loop  backwards, 0
++140:    memcpy_short_inner_loop  backwards, 1
++
++        .cfi_endproc
++
++        .unreq  D
++        .unreq  S
++        .unreq  N
++        .unreq  DAT0
++        .unreq  DAT1
++        .unreq  DAT2
++        .unreq  DAT3
++        .unreq  DAT4
++        .unreq  DAT5
++        .unreq  DAT6
++        .unreq  DAT7
++        .unreq  LAST
++        .unreq  OFF
++.endm
+--- /dev/null
++++ b/arch/arm/lib/memmove_rpi.S
+@@ -0,0 +1,61 @@
++/*
++Copyright (c) 2013, Raspberry Pi Foundation
++Copyright (c) 2013, RISC OS Open Ltd
++All rights reserved.
++
++Redistribution and use in source and binary forms, with or without
++modification, are permitted provided that the following conditions are met:
++    * Redistributions of source code must retain the above copyright
++      notice, this list of conditions and the following disclaimer.
++    * Redistributions in binary form must reproduce the above copyright
++      notice, this list of conditions and the following disclaimer in the
++      documentation and/or other materials provided with the distribution.
++    * Neither the name of the copyright holder nor the
++      names of its contributors may be used to endorse or promote products
++      derived from this software without specific prior written permission.
++
++THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
++ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
++WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
++DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY
++DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
++(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
++LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
++ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
++(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
++SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
++*/
++
++#include <linux/linkage.h>
++#include "arm-mem.h"
++#include "memcpymove.h"
++
++/* Prevent the stack from becoming executable */
++#if defined(__linux__) && defined(__ELF__)
++.section .note.GNU-stack,"",%progbits
++#endif
++
++    .text
++    .arch armv6
++    .object_arch armv4
++    .arm
++    .altmacro
++    .p2align 2
++
++/*
++ * void *memmove(void *s1, const void *s2, size_t n);
++ * On entry:
++ * a1 = pointer to destination
++ * a2 = pointer to source
++ * a3 = number of bytes to copy
++ * On exit:
++ * a1 preserved
++ */
++
++.set prefetch_distance, 3
++
++ENTRY(memmove)
++        cmp     a2, a1
++        bpl     memcpy  /* pl works even over -1 - 0 and 0x7fffffff - 0x80000000 boundaries */
++        memcpy  1
++ENDPROC(memmove)
+--- /dev/null
++++ b/arch/arm/lib/memset_rpi.S
+@@ -0,0 +1,123 @@
++/*
++Copyright (c) 2013, Raspberry Pi Foundation
++Copyright (c) 2013, RISC OS Open Ltd
++All rights reserved.
++
++Redistribution and use in source and binary forms, with or without
++modification, are permitted provided that the following conditions are met:
++    * Redistributions of source code must retain the above copyright
++      notice, this list of conditions and the following disclaimer.
++    * Redistributions in binary form must reproduce the above copyright
++      notice, this list of conditions and the following disclaimer in the
++      documentation and/or other materials provided with the distribution.
++    * Neither the name of the copyright holder nor the
++      names of its contributors may be used to endorse or promote products
++      derived from this software without specific prior written permission.
++
++THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
++ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
++WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
++DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY
++DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
++(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
++LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
++ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
++(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
++SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
++*/
++
++#include <linux/linkage.h>
++#include "arm-mem.h"
++
++/* Prevent the stack from becoming executable */
++#if defined(__linux__) && defined(__ELF__)
++.section .note.GNU-stack,"",%progbits
++#endif
++
++    .text
++    .arch armv6
++    .object_arch armv4
++    .arm
++    .altmacro
++    .p2align 2
++
++/*
++ *  void *memset(void *s, int c, size_t n);
++ *  On entry:
++ *  a1 = pointer to buffer to fill
++ *  a2 = byte pattern to fill with (caller-narrowed)
++ *  a3 = number of bytes to fill
++ *  On exit:
++ *  a1 preserved
++ */
++ENTRY(mmioset)
++ENTRY(memset)
++        S       .req    a1
++        DAT0    .req    a2
++        N       .req    a3
++        DAT1    .req    a4
++        DAT2    .req    ip
++        DAT3    .req    lr
++
++        orr     DAT0, DAT0, lsl #8
++        push    {S, lr}
++        orr     DAT0, DAT0, lsl #16
++        mov     DAT1, DAT0
++
++        /* See if we're guaranteed to have at least one 16-byte aligned 16-byte write */
++        cmp     N, #31
++        blo     170f
++
++161:    sub     N, N, #16     /* simplifies inner loop termination */
++        /* Leading words and bytes */
++        tst     S, #15
++        beq     164f
++        rsb     DAT3, S, #0   /* bits 0-3 = number of leading bytes until aligned */
++        movs    DAT2, DAT3, lsl #31
++        submi   N, N, #1
++        strmib  DAT0, [S], #1
++        subcs   N, N, #2
++        strcsh  DAT0, [S], #2
++        movs    DAT2, DAT3, lsl #29
++        submi   N, N, #4
++        strmi   DAT0, [S], #4
++        subcs   N, N, #8
++        stmcsia S!, {DAT0, DAT1}
++164:    /* Delayed set up of DAT2 and DAT3 so we could use them as scratch registers above */
++        mov     DAT2, DAT0
++        mov     DAT3, DAT0
++        /* Now the inner loop of 16-byte stores */
++165:    stmia   S!, {DAT0, DAT1, DAT2, DAT3}
++        subs    N, N, #16
++        bhs     165b
++166:    /* Trailing words and bytes */
++        movs    N, N, lsl #29
++        stmcsia S!, {DAT0, DAT1}
++        strmi   DAT0, [S], #4
++        movs    N, N, lsl #2
++        strcsh  DAT0, [S], #2
++        strmib  DAT0, [S]
++199:    pop     {S, pc}
++
++170:    /* Short case */
++        mov     DAT2, DAT0
++        mov     DAT3, DAT0
++        tst     S, #3
++        beq     174f
++172:    subs    N, N, #1
++        blo     199b
++        strb    DAT0, [S], #1
++        tst     S, #3
++        bne     172b
++174:    tst     N, #16
++        stmneia S!, {DAT0, DAT1, DAT2, DAT3}
++        b       166b
++
++        .unreq  S
++        .unreq  DAT0
++        .unreq  N
++        .unreq  DAT1
++        .unreq  DAT2
++        .unreq  DAT3
++ENDPROC(memset)
++ENDPROC(mmioset)
+--- a/arch/arm/lib/uaccess_with_memcpy.c
++++ b/arch/arm/lib/uaccess_with_memcpy.c
+@@ -22,6 +22,14 @@
+ #include <asm/current.h>
+ #include <asm/page.h>
+ 
++#ifndef COPY_FROM_USER_THRESHOLD
++#define COPY_FROM_USER_THRESHOLD 64
++#endif
++
++#ifndef COPY_TO_USER_THRESHOLD
++#define COPY_TO_USER_THRESHOLD 64
++#endif
++
+ static int
+ pin_page_for_write(const void __user *_addr, pte_t **ptep, spinlock_t **ptlp)
+ {
+@@ -85,7 +93,44 @@ pin_page_for_write(const void __user *_a
+ 	return 1;
+ }
+ 
+-static unsigned long noinline
++static int
++pin_page_for_read(const void __user *_addr, pte_t **ptep, spinlock_t **ptlp)
++{
++	unsigned long addr = (unsigned long)_addr;
++	pgd_t *pgd;
++	pmd_t *pmd;
++	pte_t *pte;
++	pud_t *pud;
++	spinlock_t *ptl;
++
++	pgd = pgd_offset(current->mm, addr);
++	if (unlikely(pgd_none(*pgd) || pgd_bad(*pgd)))
++	{
++		return 0;
++	}
++	pud = pud_offset(pgd, addr);
++	if (unlikely(pud_none(*pud) || pud_bad(*pud)))
++	{
++		return 0;
++	}
++
++	pmd = pmd_offset(pud, addr);
++	if (unlikely(pmd_none(*pmd) || pmd_bad(*pmd)))
++		return 0;
++
++	pte = pte_offset_map_lock(current->mm, pmd, addr, &ptl);
++	if (unlikely(!pte_present(*pte) || !pte_young(*pte))) {
++		pte_unmap_unlock(pte, ptl);
++		return 0;
++	}
++
++	*ptep = pte;
++	*ptlp = ptl;
++
++	return 1;
++}
++
++unsigned long noinline
+ __copy_to_user_memcpy(void __user *to, const void *from, unsigned long n)
+ {
+ 	unsigned long ua_flags;
+@@ -138,6 +183,54 @@ out:
+ 	return n;
+ }
+ 
++unsigned long noinline
++__copy_from_user_memcpy(void *to, const void __user *from, unsigned long n)
++{
++	int atomic;
++
++	if (unlikely(segment_eq(get_fs(), KERNEL_DS))) {
++		memcpy(to, (const void *)from, n);
++		return 0;
++	}
++
++	/* the mmap semaphore is taken only if not in an atomic context */
++	atomic = in_atomic();
++
++	if (!atomic)
++		down_read(&current->mm->mmap_sem);
++	while (n) {
++		pte_t *pte;
++		spinlock_t *ptl;
++		int tocopy;
++
++		while (!pin_page_for_read(from, &pte, &ptl)) {
++			char temp;
++			if (!atomic)
++				up_read(&current->mm->mmap_sem);
++			if (__get_user(temp, (char __user *)from))
++				goto out;
++			if (!atomic)
++				down_read(&current->mm->mmap_sem);
++		}
++
++		tocopy = (~(unsigned long)from & ~PAGE_MASK) + 1;
++		if (tocopy > n)
++			tocopy = n;
++
++		memcpy(to, (const void *)from, tocopy);
++		to += tocopy;
++		from += tocopy;
++		n -= tocopy;
++
++		pte_unmap_unlock(pte, ptl);
++	}
++	if (!atomic)
++		up_read(&current->mm->mmap_sem);
++
++out:
++	return n;
++}
++
+ unsigned long
+ arm_copy_to_user(void __user *to, const void *from, unsigned long n)
+ {
+@@ -148,7 +241,7 @@ arm_copy_to_user(void __user *to, const
+ 	 * With frame pointer disabled, tail call optimization kicks in
+ 	 * as well making this test almost invisible.
+ 	 */
+-	if (n < 64) {
++	if (n < COPY_TO_USER_THRESHOLD) {
+ 		unsigned long ua_flags = uaccess_save_and_enable();
+ 		n = __copy_to_user_std(to, from, n);
+ 		uaccess_restore(ua_flags);
+@@ -157,6 +250,21 @@ arm_copy_to_user(void __user *to, const
+ 	}
+ 	return n;
+ }
++
++unsigned long __must_check
++arm_copy_from_user(void *to, const void __user *from, unsigned long n)
++{
++	/*
++	 * This test is stubbed out of the main function above to keep
++	 * the overhead for small copies low by avoiding a large
++	 * register dump on the stack just to reload them right away.
++	 * With frame pointer disabled, tail call optimization kicks in
++	 * as well making this test almost invisible.
++	 */
++	if (n < COPY_FROM_USER_THRESHOLD)
++		return __copy_from_user_std(to, from, n);
++	return __copy_from_user_memcpy(to, from, n);
++}
+ 	
+ static unsigned long noinline
+ __clear_user_memset(void __user *addr, unsigned long n)
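
The patch above boils down to a size-based dispatch: copies below a threshold keep using the existing assembly routines, larger ones go through a memcpy-based path that temporarily pins the user pages, and the crossover points (672 bytes for copy_to_user, 1600 for copy_from_user on BCM2708) are injected through CFLAGS in the Makefile hunk. Below is a minimal, userspace-only sketch of that dispatch shape; the copy routines are stand-ins, not the kernel's.

/* Illustrative sketch of the threshold dispatch, not kernel code. */
#include <stddef.h>
#include <stdio.h>
#include <string.h>

#ifndef COPY_TO_USER_THRESHOLD
#define COPY_TO_USER_THRESHOLD 672	/* value passed for BCM2708 */
#endif

/* Stand-in for __copy_to_user_std(): the original word/byte copy loop. */
static size_t copy_std(void *to, const void *from, size_t n)
{
	unsigned char *d = to;
	const unsigned char *s = from;

	while (n--)
		*d++ = *s++;
	return 0;			/* 0 bytes left uncopied */
}

/* Stand-in for __copy_to_user_memcpy(): bulk path for large buffers
 * (the kernel version pins the user pages before calling memcpy). */
static size_t copy_memcpy(void *to, const void *from, size_t n)
{
	memcpy(to, from, n);
	return 0;
}

static size_t copy_to_user_sketch(void *to, const void *from, size_t n)
{
	if (n < COPY_TO_USER_THRESHOLD)
		return copy_std(to, from, n);
	return copy_memcpy(to, from, n);
}

int main(void)
{
	char src[2048] = "hello", dst[2048];

	printf("left over (short): %zu\n", copy_to_user_sketch(dst, src, 64));
	printf("left over (long):  %zu\n", copy_to_user_sketch(dst, src, 2048));
	return 0;
}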
diff --git a/target/linux/brcm2708/patches-4.4/0081-gpio-poweroff-Allow-it-to-work-on-Raspberry-Pi.patch b/target/linux/brcm2708/patches-4.4/0081-gpio-poweroff-Allow-it-to-work-on-Raspberry-Pi.patch
new file mode 100644
index 0000000..72b7697
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0081-gpio-poweroff-Allow-it-to-work-on-Raspberry-Pi.patch
@@ -0,0 +1,35 @@
+From 5a7fddeb5dfd0dd291363dcab556006a02297d13 Mon Sep 17 00:00:00 2001
+From: Phil Elwell <phil at raspberrypi.org>
+Date: Thu, 25 Jun 2015 12:16:11 +0100
+Subject: [PATCH 081/127] gpio-poweroff: Allow it to work on Raspberry Pi
+
+The Raspberry Pi firmware manages the power-down and reboot
+process. To do this it installs a pm_power_off handler, causing
+the gpio-poweroff module to abort the probe function.
+
+This patch introduces a "force" DT property that overrides that
+behaviour, and also adds a DT overlay to enable and control it.
+
+Note that running in an active-low configuration (DT parameter
+"active_low") requires a custom dt-blob.bin and probably won't
+allow a reboot without switching off, so an external inversion
+of the trigger signal may be preferable.
+---
+ drivers/power/reset/gpio-poweroff.c | 4 +++-
+ 1 file changed, 3 insertions(+), 1 deletion(-)
+
+--- a/drivers/power/reset/gpio-poweroff.c
++++ b/drivers/power/reset/gpio-poweroff.c
+@@ -49,9 +49,11 @@ static int gpio_poweroff_probe(struct pl
+ {
+ 	bool input = false;
+ 	enum gpiod_flags flags;
++	bool force = false;
+ 
+ 	/* If a pm_power_off function has already been added, leave it alone */
+-	if (pm_power_off != NULL) {
++	force = of_property_read_bool(pdev->dev.of_node, "force");
++	if (!force && (pm_power_off != NULL)) {
+ 		dev_err(&pdev->dev,
+ 			"%s: pm_power_off function already registered",
+ 		       __func__);
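
The gpio-poweroff change above is small but easy to misread: probe normally refuses to register when some other code already owns pm_power_off, and the new boolean "force" device-tree property simply skips that refusal. A standalone C sketch of the same take-over-if-forced pattern follows; the function names are placeholders, not kernel API.

/* Illustrative sketch of the "force" override, not the kernel driver. */
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

static void (*pm_power_off)(void);

static void firmware_power_off(void) { puts("firmware power-off"); }
static void gpio_power_off(void)     { puts("GPIO power-off"); }

/* force_property models of_property_read_bool(node, "force"). */
static int gpio_poweroff_probe(bool force_property)
{
	if (!force_property && pm_power_off != NULL) {
		fprintf(stderr, "pm_power_off function already registered\n");
		return -EBUSY;
	}
	pm_power_off = gpio_power_off;
	return 0;
}

int main(void)
{
	pm_power_off = firmware_power_off;	/* firmware claimed it first */

	printf("probe without force: %d\n", gpio_poweroff_probe(false));
	printf("probe with force:    %d\n", gpio_poweroff_probe(true));
	pm_power_off();
	return 0;
}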
diff --git a/target/linux/brcm2708/patches-4.4/0082-spidev-Add-spidev-compatible-string-to-silence-warni.patch b/target/linux/brcm2708/patches-4.4/0082-spidev-Add-spidev-compatible-string-to-silence-warni.patch
new file mode 100644
index 0000000..4aaa468
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0082-spidev-Add-spidev-compatible-string-to-silence-warni.patch
@@ -0,0 +1,21 @@
+From ab8626a92ca2308de52a1db26b10ab4ecc3a1be7 Mon Sep 17 00:00:00 2001
+From: Phil Elwell <phil at raspberrypi.org>
+Date: Tue, 14 Jul 2015 10:26:09 +0100
+Subject: [PATCH 082/127] spidev: Add "spidev" compatible string to silence
+ warning
+
+See: https://github.com/raspberrypi/linux/issues/1054
+---
+ drivers/spi/spidev.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+--- a/drivers/spi/spidev.c
++++ b/drivers/spi/spidev.c
+@@ -695,6 +695,7 @@ static struct class *spidev_class;
+ static const struct of_device_id spidev_dt_ids[] = {
+ 	{ .compatible = "rohm,dh2228fv" },
+ 	{ .compatible = "lineartechnology,ltc2488" },
++	{ .compatible = "spidev" },
+ 	{},
+ };
+ MODULE_DEVICE_TABLE(of, spidev_dt_ids);
diff --git a/target/linux/brcm2708/patches-4.4/0083-scripts-dtc-Add-overlay-support.patch b/target/linux/brcm2708/patches-4.4/0083-scripts-dtc-Add-overlay-support.patch
new file mode 100644
index 0000000..27f9c8a
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0083-scripts-dtc-Add-overlay-support.patch
@@ -0,0 +1,4389 @@
+From 311f1f1043c31d5e0101cd304e521a12c1acffd5 Mon Sep 17 00:00:00 2001
+From: Phil Elwell <phil at raspberrypi.org>
+Date: Tue, 14 Jul 2015 17:00:18 +0100
+Subject: [PATCH 083/127] scripts/dtc: Add overlay support
+
+---
+ scripts/dtc/checks.c                 |  119 ++-
+ scripts/dtc/dtc-lexer.l              |    5 +
+ scripts/dtc/dtc-lexer.lex.c_shipped  |  490 ++++-----
+ scripts/dtc/dtc-parser.tab.c_shipped | 1896 +++++++++++++++++++---------------
+ scripts/dtc/dtc-parser.tab.h_shipped |  107 +-
+ scripts/dtc/dtc-parser.y             |   23 +-
+ scripts/dtc/dtc.c                    |    9 +-
+ scripts/dtc/dtc.h                    |   38 +
+ scripts/dtc/flattree.c               |  141 ++-
+ scripts/dtc/version_gen.h            |    2 +-
+ 10 files changed, 1685 insertions(+), 1145 deletions(-)
+
+--- a/scripts/dtc/checks.c
++++ b/scripts/dtc/checks.c
+@@ -458,21 +458,91 @@ static void fixup_phandle_references(str
+ 				     struct node *node, struct property *prop)
+ {
+ 	struct marker *m = prop->val.markers;
++	struct fixup *f, **fp;
++	struct fixup_entry *fe, **fep;
+ 	struct node *refnode;
+ 	cell_t phandle;
++	int has_phandle_refs;
++
++	has_phandle_refs = 0;
++	for_each_marker_of_type(m, REF_PHANDLE) {
++		has_phandle_refs = 1;
++		break;
++	}
++
++	if (!has_phandle_refs)
++		return;
+ 
+ 	for_each_marker_of_type(m, REF_PHANDLE) {
+ 		assert(m->offset + sizeof(cell_t) <= prop->val.len);
+ 
+ 		refnode = get_node_by_ref(dt, m->ref);
+-		if (! refnode) {
++		if (!refnode && !symbol_fixup_support) {
+ 			FAIL(c, "Reference to non-existent node or label \"%s\"\n",
+-			     m->ref);
++				m->ref);
+ 			continue;
+ 		}
+ 
+-		phandle = get_node_phandle(dt, refnode);
+-		*((cell_t *)(prop->val.val + m->offset)) = cpu_to_fdt32(phandle);
++		if (!refnode) {
++			/* allocate fixup entry */
++			fe = xmalloc(sizeof(*fe));
++
++			fe->node = node;
++			fe->prop = prop;
++			fe->offset = m->offset;
++			fe->next = NULL;
++
++			/* search for an already existing fixup */
++			for_each_fixup(dt, f)
++				if (strcmp(f->ref, m->ref) == 0)
++					break;
++
++			/* no fixup found, add new */
++			if (f == NULL) {
++				f = xmalloc(sizeof(*f));
++				f->ref = m->ref;
++				f->entries = NULL;
++				f->next = NULL;
++
++				/* add it to the tree */
++				fp = &dt->fixups;
++				while (*fp)
++					fp = &(*fp)->next;
++				*fp = f;
++			}
++
++			/* and now append fixup entry */
++			fep = &f->entries;
++			while (*fep)
++				fep = &(*fep)->next;
++			*fep = fe;
++
++			/* mark the entry as unresolved */
++			phandle = 0xdeadbeef;
++		} else {
++			phandle = get_node_phandle(dt, refnode);
++
++			/* if it's a plugin, we need to record it */
++			if (symbol_fixup_support && dt->is_plugin) {
++
++				/* allocate a new local fixup entry */
++				fe = xmalloc(sizeof(*fe));
++
++				fe->node = node;
++				fe->prop = prop;
++				fe->offset = m->offset;
++				fe->next = NULL;
++
++				/* append it to the local fixups */
++				fep = &dt->local_fixups;
++				while (*fep)
++					fep = &(*fep)->next;
++				*fep = fe;
++			}
++		}
++
++		*((cell_t *)(prop->val.val + m->offset)) =
++			cpu_to_fdt32(phandle);
+ 	}
+ }
+ ERROR(phandle_references, NULL, NULL, fixup_phandle_references, NULL,
+@@ -652,6 +722,45 @@ static void check_obsolete_chosen_interr
+ }
+ TREE_WARNING(obsolete_chosen_interrupt_controller, NULL);
+ 
++static void check_auto_label_phandles(struct check *c, struct node *dt,
++				       struct node *node)
++{
++	struct label *l;
++	struct symbol *s, **sp;
++	int has_label;
++
++	if (!symbol_fixup_support)
++		return;
++
++	has_label = 0;
++	for_each_label(node->labels, l) {
++		has_label = 1;
++		break;
++	}
++
++	if (!has_label)
++		return;
++
++	/* force allocation of a phandle for this node */
++	(void)get_node_phandle(dt, node);
++
++	/* add the symbol */
++	for_each_label(node->labels, l) {
++
++		s = xmalloc(sizeof(*s));
++		s->label = l;
++		s->node = node;
++		s->next = NULL;
++
++		/* add it to the symbols list */
++		sp = &dt->symbols;
++		while (*sp)
++			sp = &((*sp)->next);
++		*sp = s;
++	}
++}
++NODE_WARNING(auto_label_phandles, NULL);
++
+ static struct check *check_table[] = {
+ 	&duplicate_node_names, &duplicate_property_names,
+ 	&node_name_chars, &node_name_format, &property_name_chars,
+@@ -670,6 +779,8 @@ static struct check *check_table[] = {
+ 	&avoid_default_addr_size,
+ 	&obsolete_chosen_interrupt_controller,
+ 
++	&auto_label_phandles,
++
+ 	&always_fail,
+ };
+ 
+--- a/scripts/dtc/dtc-lexer.l
++++ b/scripts/dtc/dtc-lexer.l
+@@ -113,6 +113,11 @@ static void lexical_error(const char *fm
+ 			return DT_V1;
+ 		}
+ 
++<*>"/plugin/"	{
++			DPRINT("Keyword: /plugin/\n");
++			return DT_PLUGIN;
++		}
++
+ <*>"/memreserve/"	{
+ 			DPRINT("Keyword: /memreserve/\n");
+ 			BEGIN_DEFAULT();
+--- a/scripts/dtc/dtc-lexer.lex.c_shipped
++++ b/scripts/dtc/dtc-lexer.lex.c_shipped
+@@ -9,7 +9,7 @@
+ #define FLEX_SCANNER
+ #define YY_FLEX_MAJOR_VERSION 2
+ #define YY_FLEX_MINOR_VERSION 5
+-#define YY_FLEX_SUBMINOR_VERSION 39
++#define YY_FLEX_SUBMINOR_VERSION 35
+ #if YY_FLEX_SUBMINOR_VERSION > 0
+ #define FLEX_BETA
+ #endif
+@@ -162,12 +162,7 @@ typedef unsigned int flex_uint32_t;
+ typedef struct yy_buffer_state *YY_BUFFER_STATE;
+ #endif
+ 
+-#ifndef YY_TYPEDEF_YY_SIZE_T
+-#define YY_TYPEDEF_YY_SIZE_T
+-typedef size_t yy_size_t;
+-#endif
+-
+-extern yy_size_t yyleng;
++extern int yyleng;
+ 
+ extern FILE *yyin, *yyout;
+ 
+@@ -176,7 +171,6 @@ extern FILE *yyin, *yyout;
+ #define EOB_ACT_LAST_MATCH 2
+ 
+     #define YY_LESS_LINENO(n)
+-    #define YY_LINENO_REWIND_TO(ptr)
+     
+ /* Return all but the first "n" matched characters back to the input stream. */
+ #define yyless(n) \
+@@ -194,6 +188,11 @@ extern FILE *yyin, *yyout;
+ 
+ #define unput(c) yyunput( c, (yytext_ptr)  )
+ 
++#ifndef YY_TYPEDEF_YY_SIZE_T
++#define YY_TYPEDEF_YY_SIZE_T
++typedef size_t yy_size_t;
++#endif
++
+ #ifndef YY_STRUCT_YY_BUFFER_STATE
+ #define YY_STRUCT_YY_BUFFER_STATE
+ struct yy_buffer_state
+@@ -211,7 +210,7 @@ struct yy_buffer_state
+ 	/* Number of characters read into yy_ch_buf, not including EOB
+ 	 * characters.
+ 	 */
+-	yy_size_t yy_n_chars;
++	int yy_n_chars;
+ 
+ 	/* Whether we "own" the buffer - i.e., we know we created it,
+ 	 * and can realloc() it to grow it, and should free() it to
+@@ -281,8 +280,8 @@ static YY_BUFFER_STATE * yy_buffer_stack
+ 
+ /* yy_hold_char holds the character lost when yytext is formed. */
+ static char yy_hold_char;
+-static yy_size_t yy_n_chars;		/* number of characters read into yy_ch_buf */
+-yy_size_t yyleng;
++static int yy_n_chars;		/* number of characters read into yy_ch_buf */
++int yyleng;
+ 
+ /* Points to current character in buffer. */
+ static char *yy_c_buf_p = (char *) 0;
+@@ -310,7 +309,7 @@ static void yy_init_buffer (YY_BUFFER_ST
+ 
+ YY_BUFFER_STATE yy_scan_buffer (char *base,yy_size_t size  );
+ YY_BUFFER_STATE yy_scan_string (yyconst char *yy_str  );
+-YY_BUFFER_STATE yy_scan_bytes (yyconst char *bytes,yy_size_t len  );
++YY_BUFFER_STATE yy_scan_bytes (yyconst char *bytes,int len  );
+ 
+ void *yyalloc (yy_size_t  );
+ void *yyrealloc (void *,yy_size_t  );
+@@ -342,7 +341,7 @@ void yyfree (void *  );
+ 
+ /* Begin user sect3 */
+ 
+-#define yywrap() 1
++#define yywrap(n) 1
+ #define YY_SKIP_YYWRAP
+ 
+ typedef unsigned char YY_CHAR;
+@@ -373,8 +372,8 @@ static void yy_fatal_error (yyconst char
+ 	*yy_cp = '\0'; \
+ 	(yy_c_buf_p) = yy_cp;
+ 
+-#define YY_NUM_RULES 30
+-#define YY_END_OF_BUFFER 31
++#define YY_NUM_RULES 31
++#define YY_END_OF_BUFFER 32
+ /* This struct is not used in this scanner,
+    but its presence is necessary. */
+ struct yy_trans_info
+@@ -382,25 +381,26 @@ struct yy_trans_info
+ 	flex_int32_t yy_verify;
+ 	flex_int32_t yy_nxt;
+ 	};
+-static yyconst flex_int16_t yy_accept[159] =
++static yyconst flex_int16_t yy_accept[166] =
+     {   0,
+-        0,    0,    0,    0,    0,    0,    0,    0,   31,   29,
+-       18,   18,   29,   29,   29,   29,   29,   29,   29,   29,
+-       29,   29,   29,   29,   29,   29,   15,   16,   16,   29,
+-       16,   10,   10,   18,   26,    0,    3,    0,   27,   12,
+-        0,    0,   11,    0,    0,    0,    0,    0,    0,    0,
+-       21,   23,   25,   24,   22,    0,    9,   28,    0,    0,
+-        0,   14,   14,   16,   16,   16,   10,   10,   10,    0,
+-       12,    0,   11,    0,    0,    0,   20,    0,    0,    0,
+-        0,    0,    0,    0,    0,   16,   10,   10,   10,    0,
+-       13,   19,    0,    0,    0,    0,    0,    0,    0,    0,
+-
+-        0,   16,    0,    0,    0,    0,    0,    0,    0,    0,
+-        0,   16,    6,    0,    0,    0,    0,    0,    0,    2,
+-        0,    0,    0,    0,    0,    0,    0,    0,    4,   17,
+-        0,    0,    2,    0,    0,    0,    0,    0,    0,    0,
+-        0,    0,    0,    0,    0,    1,    0,    0,    0,    0,
+-        5,    8,    0,    0,    0,    0,    7,    0
++        0,    0,    0,    0,    0,    0,    0,    0,   32,   30,
++       19,   19,   30,   30,   30,   30,   30,   30,   30,   30,
++       30,   30,   30,   30,   30,   30,   16,   17,   17,   30,
++       17,   11,   11,   19,   27,    0,    3,    0,   28,   13,
++        0,    0,   12,    0,    0,    0,    0,    0,    0,    0,
++        0,   22,   24,   26,   25,   23,    0,   10,   29,    0,
++        0,    0,   15,   15,   17,   17,   17,   11,   11,   11,
++        0,   13,    0,   12,    0,    0,    0,   21,    0,    0,
++        0,    0,    0,    0,    0,    0,    0,   17,   11,   11,
++       11,    0,   14,   20,    0,    0,    0,    0,    0,    0,
++
++        0,    0,    0,    0,   17,    0,    0,    0,    0,    0,
++        0,    0,    0,    0,    0,   17,    7,    0,    0,    0,
++        0,    0,    0,    0,    2,    0,    0,    0,    0,    0,
++        0,    0,    0,    0,    4,   18,    0,    0,    5,    2,
++        0,    0,    0,    0,    0,    0,    0,    0,    0,    0,
++        0,    0,    1,    0,    0,    0,    0,    6,    9,    0,
++        0,    0,    0,    8,    0
+     } ;
+ 
+ static yyconst flex_int32_t yy_ec[256] =
+@@ -416,9 +416,9 @@ static yyconst flex_int32_t yy_ec[256] =
+        22,   22,   22,   22,   24,   22,   22,   25,   22,   22,
+         1,   26,   27,    1,   22,    1,   21,   28,   29,   30,
+ 
+-       31,   21,   22,   22,   32,   22,   22,   33,   34,   35,
+-       36,   37,   22,   38,   39,   40,   41,   42,   22,   25,
+-       43,   22,   44,   45,   46,    1,    1,    1,    1,    1,
++       31,   21,   32,   22,   33,   22,   22,   34,   35,   36,
++       37,   38,   22,   39,   40,   41,   42,   43,   22,   25,
++       44,   22,   45,   46,   47,    1,    1,    1,    1,    1,
+         1,    1,    1,    1,    1,    1,    1,    1,    1,    1,
+         1,    1,    1,    1,    1,    1,    1,    1,    1,    1,
+         1,    1,    1,    1,    1,    1,    1,    1,    1,    1,
+@@ -435,163 +435,165 @@ static yyconst flex_int32_t yy_ec[256] =
+         1,    1,    1,    1,    1
+     } ;
+ 
+-static yyconst flex_int32_t yy_meta[47] =
++static yyconst flex_int32_t yy_meta[48] =
+     {   0,
+         1,    1,    1,    1,    1,    1,    2,    3,    1,    2,
+         2,    2,    4,    5,    5,    5,    6,    1,    1,    1,
+         7,    8,    8,    8,    8,    1,    1,    7,    7,    7,
+         7,    8,    8,    8,    8,    8,    8,    8,    8,    8,
+-        8,    8,    8,    3,    1,    4
++        8,    8,    8,    8,    3,    1,    4
+     } ;
+ 
+-static yyconst flex_int16_t yy_base[173] =
++static yyconst flex_int16_t yy_base[180] =
+     {   0,
+-        0,  383,   34,  382,   65,  381,   37,  105,  387,  391,
+-       54,  111,  367,  110,  109,  109,  112,   41,  366,  104,
+-      367,  338,  124,  117,    0,  144,  391,    0,  121,    0,
+-      135,  155,  140,  179,  391,  160,  391,  379,  391,    0,
+-      368,  141,  391,  167,  370,  376,  346,  103,  342,  345,
+-      391,  391,  391,  391,  391,  358,  391,  391,  175,  342,
+-      338,  391,  355,    0,  185,  339,  184,  347,  346,    0,
+-        0,  322,  175,  357,  175,  363,  352,  324,  330,  323,
+-      332,  326,  201,  324,  329,  322,  391,  333,  181,  309,
+-      391,  341,  340,  313,  320,  338,  178,  311,  146,  317,
+-
+-      314,  315,  335,  331,  303,  300,  309,  299,  308,  188,
+-      336,  335,  391,  305,  320,  281,  283,  271,  203,  288,
+-      281,  271,  266,  264,  245,  242,  208,  104,  391,  391,
+-      244,  218,  204,  219,  206,  224,  201,  212,  204,  229,
+-      215,  208,  207,  200,  219,  391,  233,  221,  200,  181,
+-      391,  391,  149,  122,   86,   41,  391,  391,  245,  251,
+-      259,  263,  267,  273,  280,  284,  292,  300,  304,  310,
+-      318,  326
++        0,  393,   35,  392,   66,  391,   38,  107,  397,  401,
++       55,  113,  377,  112,  111,  111,  114,   42,  376,  106,
++      377,  347,  126,  120,    0,  147,  401,    0,  124,    0,
++      137,  158,  170,  163,  401,  153,  401,  389,  401,    0,
++      378,  120,  401,  131,  380,  386,  355,  139,  351,  355,
++      351,  401,  401,  401,  401,  401,  367,  401,  401,  185,
++      350,  346,  401,  364,    0,  185,  347,  189,  356,  355,
++        0,    0,  330,  180,  366,  141,  372,  361,  332,  338,
++      331,  341,  334,  326,  205,  331,  337,  329,  401,  341,
++      167,  316,  401,  349,  348,  320,  328,  346,  180,  318,
++
++      324,  209,  324,  320,  322,  342,  338,  309,  306,  315,
++      305,  315,  312,  192,  342,  341,  401,  293,  306,  282,
++      268,  252,  255,  203,  285,  282,  272,  268,  252,  233,
++      232,  239,  208,  107,  401,  401,  238,  211,  401,  211,
++      212,  208,  228,  203,  215,  207,  233,  222,  212,  211,
++      203,  227,  401,  237,  225,  204,  185,  401,  401,  149,
++      128,   88,   42,  401,  401,  253,  259,  267,  271,  275,
++      281,  288,  292,  300,  308,  312,  318,  326,  334
+     } ;
+ 
+-static yyconst flex_int16_t yy_def[173] =
++static yyconst flex_int16_t yy_def[180] =
+     {   0,
+-      158,    1,    1,    3,  158,    5,    1,    1,  158,  158,
+-      158,  158,  158,  159,  160,  161,  158,  158,  158,  158,
+-      162,  158,  158,  158,  163,  162,  158,  164,  165,  164,
+-      164,  158,  158,  158,  158,  159,  158,  159,  158,  166,
+-      158,  161,  158,  161,  167,  168,  158,  158,  158,  158,
+-      158,  158,  158,  158,  158,  162,  158,  158,  158,  158,
+-      158,  158,  162,  164,  165,  164,  158,  158,  158,  169,
+-      166,  170,  161,  167,  167,  168,  158,  158,  158,  158,
+-      158,  158,  158,  158,  158,  164,  158,  158,  169,  170,
+-      158,  158,  158,  158,  158,  158,  158,  158,  158,  158,
+-
+-      158,  164,  158,  158,  158,  158,  158,  158,  158,  171,
+-      158,  164,  158,  158,  158,  158,  158,  158,  171,  158,
+-      171,  158,  158,  158,  158,  158,  158,  158,  158,  158,
+-      158,  158,  158,  158,  158,  158,  158,  158,  158,  158,
+-      172,  158,  158,  158,  172,  158,  172,  158,  158,  158,
+-      158,  158,  158,  158,  158,  158,  158,    0,  158,  158,
+-      158,  158,  158,  158,  158,  158,  158,  158,  158,  158,
+-      158,  158
++      165,    1,    1,    3,  165,    5,    1,    1,  165,  165,
++      165,  165,  165,  166,  167,  168,  165,  165,  165,  165,
++      169,  165,  165,  165,  170,  169,  165,  171,  172,  171,
++      171,  165,  165,  165,  165,  166,  165,  166,  165,  173,
++      165,  168,  165,  168,  174,  175,  165,  165,  165,  165,
++      165,  165,  165,  165,  165,  165,  169,  165,  165,  165,
++      165,  165,  165,  169,  171,  172,  171,  165,  165,  165,
++      176,  173,  177,  168,  174,  174,  175,  165,  165,  165,
++      165,  165,  165,  165,  165,  165,  165,  171,  165,  165,
++      176,  177,  165,  165,  165,  165,  165,  165,  165,  165,
++
++      165,  165,  165,  165,  171,  165,  165,  165,  165,  165,
++      165,  165,  165,  178,  165,  171,  165,  165,  165,  165,
++      165,  165,  165,  178,  165,  178,  165,  165,  165,  165,
++      165,  165,  165,  165,  165,  165,  165,  165,  165,  165,
++      165,  165,  165,  165,  165,  165,  165,  179,  165,  165,
++      165,  179,  165,  179,  165,  165,  165,  165,  165,  165,
++      165,  165,  165,  165,    0,  165,  165,  165,  165,  165,
++      165,  165,  165,  165,  165,  165,  165,  165,  165
+     } ;
+ 
+-static yyconst flex_int16_t yy_nxt[438] =
++static yyconst flex_int16_t yy_nxt[449] =
+     {   0,
+        10,   11,   12,   11,   13,   14,   10,   15,   16,   10,
+        10,   10,   17,   10,   10,   10,   10,   18,   19,   20,
+        21,   21,   21,   21,   21,   10,   10,   21,   21,   21,
+        21,   21,   21,   21,   21,   21,   21,   21,   21,   21,
+-       21,   21,   21,   10,   22,   10,   24,   25,   25,   25,
+-       32,   33,   33,  157,   26,   34,   34,   34,   51,   52,
+-       27,   26,   26,   26,   26,   10,   11,   12,   11,   13,
+-       14,   28,   15,   16,   28,   28,   28,   24,   28,   28,
+-       28,   10,   18,   19,   20,   29,   29,   29,   29,   29,
+-       30,   10,   29,   29,   29,   29,   29,   29,   29,   29,
+-
+-       29,   29,   29,   29,   29,   29,   29,   29,   10,   22,
+-       10,   23,   34,   34,   34,   37,   39,   43,   32,   33,
+-       33,   45,   54,   55,   46,   59,   45,   64,  156,   46,
+-       64,   64,   64,   79,   44,   38,   59,   57,  134,   47,
+-      135,   48,   80,   49,   47,   50,   48,   99,   61,   43,
+-       50,  110,   41,   67,   67,   67,   60,   63,   63,   63,
+-       57,  155,   68,   69,   63,   37,   44,   66,   67,   67,
+-       67,   63,   63,   63,   63,   73,   59,   68,   69,   70,
+-       34,   34,   34,   43,   75,   38,  154,   92,   83,   83,
+-       83,   64,   44,  120,   64,   64,   64,   67,   67,   67,
+-
+-       44,   57,   99,   68,   69,  107,   68,   69,  120,  127,
+-      108,  153,  152,  121,   83,   83,   83,  133,  133,  133,
+-      146,  133,  133,  133,  146,  140,  140,  140,  121,  141,
+-      140,  140,  140,  151,  141,  158,  150,  149,  148,  144,
+-      147,  143,  142,  139,  147,   36,   36,   36,   36,   36,
+-       36,   36,   36,   40,  138,  137,  136,   40,   40,   42,
+-       42,   42,   42,   42,   42,   42,   42,   56,   56,   56,
+-       56,   62,  132,   62,   64,  131,  130,   64,  129,   64,
+-       64,   65,  128,  158,   65,   65,   65,   65,   71,  127,
+-       71,   71,   74,   74,   74,   74,   74,   74,   74,   74,
+-
+-       76,   76,   76,   76,   76,   76,   76,   76,   89,  126,
+-       89,   90,  125,   90,   90,  124,   90,   90,  119,  119,
+-      119,  119,  119,  119,  119,  119,  145,  145,  145,  145,
+-      145,  145,  145,  145,  123,  122,   59,   59,  118,  117,
+-      116,  115,  114,  113,   45,  112,  108,  111,  109,  106,
+-      105,  104,   46,  103,   91,   87,  102,  101,  100,   98,
+-       97,   96,   95,   94,   93,   77,   75,   91,   88,   87,
+-       86,   57,   85,   84,   57,   82,   81,   78,   77,   75,
+-       72,  158,   58,   57,   53,   35,  158,   31,   23,   23,
+-        9,  158,  158,  158,  158,  158,  158,  158,  158,  158,
+-
+-      158,  158,  158,  158,  158,  158,  158,  158,  158,  158,
+-      158,  158,  158,  158,  158,  158,  158,  158,  158,  158,
+-      158,  158,  158,  158,  158,  158,  158,  158,  158,  158,
+-      158,  158,  158,  158,  158,  158,  158
++       21,   21,   21,   21,   10,   22,   10,   24,   25,   25,
++       25,   32,   33,   33,  164,   26,   34,   34,   34,   52,
++       53,   27,   26,   26,   26,   26,   10,   11,   12,   11,
++       13,   14,   28,   15,   16,   28,   28,   28,   24,   28,
++       28,   28,   10,   18,   19,   20,   29,   29,   29,   29,
++       29,   30,   10,   29,   29,   29,   29,   29,   29,   29,
++
++       29,   29,   29,   29,   29,   29,   29,   29,   29,   29,
++       10,   22,   10,   23,   34,   34,   34,   37,   39,   43,
++       32,   33,   33,   45,   55,   56,   46,   60,   43,   45,
++       65,  163,   46,   65,   65,   65,   44,   38,   60,   74,
++       58,   47,  141,   48,  142,   44,   49,   47,   50,   48,
++       76,   51,   62,   94,   50,   41,   44,   51,   37,   61,
++       64,   64,   64,   58,   34,   34,   34,   64,  162,   80,
++       67,   68,   68,   68,   64,   64,   64,   64,   38,   81,
++       69,   70,   71,   68,   68,   68,   60,  161,   43,   69,
++       70,   65,   69,   70,   65,   65,   65,  125,   85,   85,
++
++       85,   58,   68,   68,   68,   44,  102,  110,  125,  133,
++      102,   69,   70,  111,  114,  160,  159,  126,   85,   85,
++       85,  140,  140,  140,  140,  140,  140,  153,  126,  147,
++      147,  147,  153,  148,  147,  147,  147,  158,  148,  165,
++      157,  156,  155,  151,  150,  149,  146,  154,  145,  144,
++      143,  139,  154,   36,   36,   36,   36,   36,   36,   36,
++       36,   40,  138,  137,  136,   40,   40,   42,   42,   42,
++       42,   42,   42,   42,   42,   57,   57,   57,   57,   63,
++      135,   63,   65,  134,  165,   65,  133,   65,   65,   66,
++      132,  131,   66,   66,   66,   66,   72,  130,   72,   72,
++
++       75,   75,   75,   75,   75,   75,   75,   75,   77,   77,
++       77,   77,   77,   77,   77,   77,   91,  129,   91,   92,
++      128,   92,   92,  127,   92,   92,  124,  124,  124,  124,
++      124,  124,  124,  124,  152,  152,  152,  152,  152,  152,
++      152,  152,   60,   60,  123,  122,  121,  120,  119,  118,
++      117,   45,  116,  111,  115,  113,  112,  109,  108,  107,
++       46,  106,   93,   89,  105,  104,  103,  101,  100,   99,
++       98,   97,   96,   95,   78,   76,   93,   90,   89,   88,
++       58,   87,   86,   58,   84,   83,   82,   79,   78,   76,
++       73,  165,   59,   58,   54,   35,  165,   31,   23,   23,
++
++        9,  165,  165,  165,  165,  165,  165,  165,  165,  165,
++      165,  165,  165,  165,  165,  165,  165,  165,  165,  165,
++      165,  165,  165,  165,  165,  165,  165,  165,  165,  165,
++      165,  165,  165,  165,  165,  165,  165,  165,  165,  165,
++      165,  165,  165,  165,  165,  165,  165,  165
+     } ;
+ 
+-static yyconst flex_int16_t yy_chk[438] =
++static yyconst flex_int16_t yy_chk[449] =
+     {   0,
+         1,    1,    1,    1,    1,    1,    1,    1,    1,    1,
+         1,    1,    1,    1,    1,    1,    1,    1,    1,    1,
+         1,    1,    1,    1,    1,    1,    1,    1,    1,    1,
+         1,    1,    1,    1,    1,    1,    1,    1,    1,    1,
+-        1,    1,    1,    1,    1,    1,    3,    3,    3,    3,
+-        7,    7,    7,  156,    3,   11,   11,   11,   18,   18,
+-        3,    3,    3,    3,    3,    5,    5,    5,    5,    5,
++        1,    1,    1,    1,    1,    1,    1,    3,    3,    3,
++        3,    7,    7,    7,  163,    3,   11,   11,   11,   18,
++       18,    3,    3,    3,    3,    3,    5,    5,    5,    5,
+         5,    5,    5,    5,    5,    5,    5,    5,    5,    5,
+         5,    5,    5,    5,    5,    5,    5,    5,    5,    5,
+         5,    5,    5,    5,    5,    5,    5,    5,    5,    5,
+ 
+         5,    5,    5,    5,    5,    5,    5,    5,    5,    5,
+-        5,    8,   12,   12,   12,   14,   15,   16,    8,    8,
+-        8,   17,   20,   20,   17,   23,   24,   29,  155,   24,
+-       29,   29,   29,   48,   16,   14,   31,   29,  128,   17,
+-      128,   17,   48,   17,   24,   17,   24,   99,   24,   42,
+-       24,   99,   15,   33,   33,   33,   23,   26,   26,   26,
+-       26,  154,   33,   33,   26,   36,   42,   31,   32,   32,
+-       32,   26,   26,   26,   26,   44,   59,   32,   32,   32,
+-       34,   34,   34,   73,   75,   36,  153,   75,   59,   59,
+-       59,   65,   44,  110,   65,   65,   65,   67,   67,   67,
+-
+-       73,   65,   83,   89,   89,   97,   67,   67,  119,  127,
+-       97,  150,  149,  110,   83,   83,   83,  133,  133,  133,
+-      141,  127,  127,  127,  145,  136,  136,  136,  119,  136,
+-      140,  140,  140,  148,  140,  147,  144,  143,  142,  139,
+-      141,  138,  137,  135,  145,  159,  159,  159,  159,  159,
+-      159,  159,  159,  160,  134,  132,  131,  160,  160,  161,
+-      161,  161,  161,  161,  161,  161,  161,  162,  162,  162,
+-      162,  163,  126,  163,  164,  125,  124,  164,  123,  164,
+-      164,  165,  122,  121,  165,  165,  165,  165,  166,  120,
+-      166,  166,  167,  167,  167,  167,  167,  167,  167,  167,
+-
+-      168,  168,  168,  168,  168,  168,  168,  168,  169,  118,
+-      169,  170,  117,  170,  170,  116,  170,  170,  171,  171,
+-      171,  171,  171,  171,  171,  171,  172,  172,  172,  172,
+-      172,  172,  172,  172,  115,  114,  112,  111,  109,  108,
+-      107,  106,  105,  104,  103,  102,  101,  100,   98,   96,
+-       95,   94,   93,   92,   90,   88,   86,   85,   84,   82,
+-       81,   80,   79,   78,   77,   76,   74,   72,   69,   68,
+-       66,   63,   61,   60,   56,   50,   49,   47,   46,   45,
++        5,    5,    5,    8,   12,   12,   12,   14,   15,   16,
++        8,    8,    8,   17,   20,   20,   17,   23,   42,   24,
++       29,  162,   24,   29,   29,   29,   16,   14,   31,   44,
++       29,   17,  134,   17,  134,   42,   17,   24,   17,   24,
++       76,   17,   24,   76,   24,   15,   44,   24,   36,   23,
++       26,   26,   26,   26,   34,   34,   34,   26,  161,   48,
++       31,   32,   32,   32,   26,   26,   26,   26,   36,   48,
++       32,   32,   32,   33,   33,   33,   60,  160,   74,   91,
++       91,   66,   33,   33,   66,   66,   66,  114,   60,   60,
++
++       60,   66,   68,   68,   68,   74,   85,   99,  124,  133,
++      102,   68,   68,   99,  102,  157,  156,  114,   85,   85,
++       85,  133,  133,  133,  140,  140,  140,  148,  124,  143,
++      143,  143,  152,  143,  147,  147,  147,  155,  147,  154,
++      151,  150,  149,  146,  145,  144,  142,  148,  141,  138,
++      137,  132,  152,  166,  166,  166,  166,  166,  166,  166,
++      166,  167,  131,  130,  129,  167,  167,  168,  168,  168,
++      168,  168,  168,  168,  168,  169,  169,  169,  169,  170,
++      128,  170,  171,  127,  126,  171,  125,  171,  171,  172,
++      123,  122,  172,  172,  172,  172,  173,  121,  173,  173,
++
++      174,  174,  174,  174,  174,  174,  174,  174,  175,  175,
++      175,  175,  175,  175,  175,  175,  176,  120,  176,  177,
++      119,  177,  177,  118,  177,  177,  178,  178,  178,  178,
++      178,  178,  178,  178,  179,  179,  179,  179,  179,  179,
++      179,  179,  116,  115,  113,  112,  111,  110,  109,  108,
++      107,  106,  105,  104,  103,  101,  100,   98,   97,   96,
++       95,   94,   92,   90,   88,   87,   86,   84,   83,   82,
++       81,   80,   79,   78,   77,   75,   73,   70,   69,   67,
++       64,   62,   61,   57,   51,   50,   49,   47,   46,   45,
+        41,   38,   22,   21,   19,   13,    9,    6,    4,    2,
+-      158,  158,  158,  158,  158,  158,  158,  158,  158,  158,
+ 
+-      158,  158,  158,  158,  158,  158,  158,  158,  158,  158,
+-      158,  158,  158,  158,  158,  158,  158,  158,  158,  158,
+-      158,  158,  158,  158,  158,  158,  158,  158,  158,  158,
+-      158,  158,  158,  158,  158,  158,  158
++      165,  165,  165,  165,  165,  165,  165,  165,  165,  165,
++      165,  165,  165,  165,  165,  165,  165,  165,  165,  165,
++      165,  165,  165,  165,  165,  165,  165,  165,  165,  165,
++      165,  165,  165,  165,  165,  165,  165,  165,  165,  165,
++      165,  165,  165,  165,  165,  165,  165,  165
+     } ;
+ 
+ static yy_state_type yy_last_accepting_state;
+@@ -662,7 +664,7 @@ static int dts_version = 1;
+ static void push_input_file(const char *filename);
+ static bool pop_input_file(void);
+ static void lexical_error(const char *fmt, ...);
+-#line 666 "dtc-lexer.lex.c"
++#line 668 "dtc-lexer.lex.c"
+ 
+ #define INITIAL 0
+ #define BYTESTRING 1
+@@ -704,7 +706,7 @@ FILE *yyget_out (void );
+ 
+ void yyset_out  (FILE * out_str  );
+ 
+-yy_size_t yyget_leng (void );
++int yyget_leng (void );
+ 
+ char *yyget_text (void );
+ 
+@@ -853,6 +855,10 @@ YY_DECL
+ 	register char *yy_cp, *yy_bp;
+ 	register int yy_act;
+     
++#line 68 "dtc-lexer.l"
++
++#line 861 "dtc-lexer.lex.c"
++
+ 	if ( !(yy_init) )
+ 		{
+ 		(yy_init) = 1;
+@@ -879,11 +885,6 @@ YY_DECL
+ 		yy_load_buffer_state( );
+ 		}
+ 
+-	{
+-#line 68 "dtc-lexer.l"
+-
+-#line 886 "dtc-lexer.lex.c"
+-
+ 	while ( 1 )		/* loops until end-of-file is reached */
+ 		{
+ 		yy_cp = (yy_c_buf_p);
+@@ -901,7 +902,7 @@ YY_DECL
+ yy_match:
+ 		do
+ 			{
+-			register YY_CHAR yy_c = yy_ec[YY_SC_TO_UI(*yy_cp)] ;
++			register YY_CHAR yy_c = yy_ec[YY_SC_TO_UI(*yy_cp)];
+ 			if ( yy_accept[yy_current_state] )
+ 				{
+ 				(yy_last_accepting_state) = yy_current_state;
+@@ -910,13 +911,13 @@ yy_match:
+ 			while ( yy_chk[yy_base[yy_current_state] + yy_c] != yy_current_state )
+ 				{
+ 				yy_current_state = (int) yy_def[yy_current_state];
+-				if ( yy_current_state >= 159 )
++				if ( yy_current_state >= 166 )
+ 					yy_c = yy_meta[(unsigned int) yy_c];
+ 				}
+ 			yy_current_state = yy_nxt[yy_base[yy_current_state] + (unsigned int) yy_c];
+ 			++yy_cp;
+ 			}
+-		while ( yy_current_state != 158 );
++		while ( yy_current_state != 165 );
+ 		yy_cp = (yy_last_accepting_cpos);
+ 		yy_current_state = (yy_last_accepting_state);
+ 
+@@ -1007,23 +1008,31 @@ case 5:
+ YY_RULE_SETUP
+ #line 116 "dtc-lexer.l"
+ {
++			DPRINT("Keyword: /plugin/\n");
++			return DT_PLUGIN;
++		}
++	YY_BREAK
++case 6:
++YY_RULE_SETUP
++#line 121 "dtc-lexer.l"
++{
+ 			DPRINT("Keyword: /memreserve/\n");
+ 			BEGIN_DEFAULT();
+ 			return DT_MEMRESERVE;
+ 		}
+ 	YY_BREAK
+-case 6:
++case 7:
+ YY_RULE_SETUP
+-#line 122 "dtc-lexer.l"
++#line 127 "dtc-lexer.l"
+ {
+ 			DPRINT("Keyword: /bits/\n");
+ 			BEGIN_DEFAULT();
+ 			return DT_BITS;
+ 		}
+ 	YY_BREAK
+-case 7:
++case 8:
+ YY_RULE_SETUP
+-#line 128 "dtc-lexer.l"
++#line 133 "dtc-lexer.l"
+ {
+ 			DPRINT("Keyword: /delete-property/\n");
+ 			DPRINT("<PROPNODENAME>\n");
+@@ -1031,9 +1040,9 @@ YY_RULE_SETUP
+ 			return DT_DEL_PROP;
+ 		}
+ 	YY_BREAK
+-case 8:
++case 9:
+ YY_RULE_SETUP
+-#line 135 "dtc-lexer.l"
++#line 140 "dtc-lexer.l"
+ {
+ 			DPRINT("Keyword: /delete-node/\n");
+ 			DPRINT("<PROPNODENAME>\n");
+@@ -1041,9 +1050,9 @@ YY_RULE_SETUP
+ 			return DT_DEL_NODE;
+ 		}
+ 	YY_BREAK
+-case 9:
++case 10:
+ YY_RULE_SETUP
+-#line 142 "dtc-lexer.l"
++#line 147 "dtc-lexer.l"
+ {
+ 			DPRINT("Label: %s\n", yytext);
+ 			yylval.labelref = xstrdup(yytext);
+@@ -1051,9 +1060,9 @@ YY_RULE_SETUP
+ 			return DT_LABEL;
+ 		}
+ 	YY_BREAK
+-case 10:
++case 11:
+ YY_RULE_SETUP
+-#line 149 "dtc-lexer.l"
++#line 154 "dtc-lexer.l"
+ {
+ 			char *e;
+ 			DPRINT("Integer Literal: '%s'\n", yytext);
+@@ -1073,10 +1082,10 @@ YY_RULE_SETUP
+ 			return DT_LITERAL;
+ 		}
+ 	YY_BREAK
+-case 11:
+-/* rule 11 can match eol */
++case 12:
++/* rule 12 can match eol */
+ YY_RULE_SETUP
+-#line 168 "dtc-lexer.l"
++#line 173 "dtc-lexer.l"
+ {
+ 			struct data d;
+ 			DPRINT("Character literal: %s\n", yytext);
+@@ -1098,18 +1107,18 @@ YY_RULE_SETUP
+ 			return DT_CHAR_LITERAL;
+ 		}
+ 	YY_BREAK
+-case 12:
++case 13:
+ YY_RULE_SETUP
+-#line 189 "dtc-lexer.l"
++#line 194 "dtc-lexer.l"
+ {	/* label reference */
+ 			DPRINT("Ref: %s\n", yytext+1);
+ 			yylval.labelref = xstrdup(yytext+1);
+ 			return DT_REF;
+ 		}
+ 	YY_BREAK
+-case 13:
++case 14:
+ YY_RULE_SETUP
+-#line 195 "dtc-lexer.l"
++#line 200 "dtc-lexer.l"
+ {	/* new-style path reference */
+ 			yytext[yyleng-1] = '\0';
+ 			DPRINT("Ref: %s\n", yytext+2);
+@@ -1117,27 +1126,27 @@ YY_RULE_SETUP
+ 			return DT_REF;
+ 		}
+ 	YY_BREAK
+-case 14:
++case 15:
+ YY_RULE_SETUP
+-#line 202 "dtc-lexer.l"
++#line 207 "dtc-lexer.l"
+ {
+ 			yylval.byte = strtol(yytext, NULL, 16);
+ 			DPRINT("Byte: %02x\n", (int)yylval.byte);
+ 			return DT_BYTE;
+ 		}
+ 	YY_BREAK
+-case 15:
++case 16:
+ YY_RULE_SETUP
+-#line 208 "dtc-lexer.l"
++#line 213 "dtc-lexer.l"
+ {
+ 			DPRINT("/BYTESTRING\n");
+ 			BEGIN_DEFAULT();
+ 			return ']';
+ 		}
+ 	YY_BREAK
+-case 16:
++case 17:
+ YY_RULE_SETUP
+-#line 214 "dtc-lexer.l"
++#line 219 "dtc-lexer.l"
+ {
+ 			DPRINT("PropNodeName: %s\n", yytext);
+ 			yylval.propnodename = xstrdup((yytext[0] == '\\') ?
+@@ -1146,75 +1155,75 @@ YY_RULE_SETUP
+ 			return DT_PROPNODENAME;
+ 		}
+ 	YY_BREAK
+-case 17:
++case 18:
+ YY_RULE_SETUP
+-#line 222 "dtc-lexer.l"
++#line 227 "dtc-lexer.l"
+ {
+ 			DPRINT("Binary Include\n");
+ 			return DT_INCBIN;
+ 		}
+ 	YY_BREAK
+-case 18:
+-/* rule 18 can match eol */
+-YY_RULE_SETUP
+-#line 227 "dtc-lexer.l"
+-/* eat whitespace */
+-	YY_BREAK
+ case 19:
+ /* rule 19 can match eol */
+ YY_RULE_SETUP
+-#line 228 "dtc-lexer.l"
+-/* eat C-style comments */
++#line 232 "dtc-lexer.l"
++/* eat whitespace */
+ 	YY_BREAK
+ case 20:
+ /* rule 20 can match eol */
+ YY_RULE_SETUP
+-#line 229 "dtc-lexer.l"
+-/* eat C++-style comments */
++#line 233 "dtc-lexer.l"
++/* eat C-style comments */
+ 	YY_BREAK
+ case 21:
++/* rule 21 can match eol */
+ YY_RULE_SETUP
+-#line 231 "dtc-lexer.l"
+-{ return DT_LSHIFT; };
++#line 234 "dtc-lexer.l"
++/* eat C++-style comments */
+ 	YY_BREAK
+ case 22:
+ YY_RULE_SETUP
+-#line 232 "dtc-lexer.l"
+-{ return DT_RSHIFT; };
++#line 236 "dtc-lexer.l"
++{ return DT_LSHIFT; };
+ 	YY_BREAK
+ case 23:
+ YY_RULE_SETUP
+-#line 233 "dtc-lexer.l"
+-{ return DT_LE; };
++#line 237 "dtc-lexer.l"
++{ return DT_RSHIFT; };
+ 	YY_BREAK
+ case 24:
+ YY_RULE_SETUP
+-#line 234 "dtc-lexer.l"
+-{ return DT_GE; };
++#line 238 "dtc-lexer.l"
++{ return DT_LE; };
+ 	YY_BREAK
+ case 25:
+ YY_RULE_SETUP
+-#line 235 "dtc-lexer.l"
+-{ return DT_EQ; };
++#line 239 "dtc-lexer.l"
++{ return DT_GE; };
+ 	YY_BREAK
+ case 26:
+ YY_RULE_SETUP
+-#line 236 "dtc-lexer.l"
+-{ return DT_NE; };
++#line 240 "dtc-lexer.l"
++{ return DT_EQ; };
+ 	YY_BREAK
+ case 27:
+ YY_RULE_SETUP
+-#line 237 "dtc-lexer.l"
+-{ return DT_AND; };
++#line 241 "dtc-lexer.l"
++{ return DT_NE; };
+ 	YY_BREAK
+ case 28:
+ YY_RULE_SETUP
+-#line 238 "dtc-lexer.l"
+-{ return DT_OR; };
++#line 242 "dtc-lexer.l"
++{ return DT_AND; };
+ 	YY_BREAK
+ case 29:
+ YY_RULE_SETUP
+-#line 240 "dtc-lexer.l"
++#line 243 "dtc-lexer.l"
++{ return DT_OR; };
++	YY_BREAK
++case 30:
++YY_RULE_SETUP
++#line 245 "dtc-lexer.l"
+ {
+ 			DPRINT("Char: %c (\\x%02x)\n", yytext[0],
+ 				(unsigned)yytext[0]);
+@@ -1230,12 +1239,12 @@ YY_RULE_SETUP
+ 			return yytext[0];
+ 		}
+ 	YY_BREAK
+-case 30:
++case 31:
+ YY_RULE_SETUP
+-#line 255 "dtc-lexer.l"
++#line 260 "dtc-lexer.l"
+ ECHO;
+ 	YY_BREAK
+-#line 1239 "dtc-lexer.lex.c"
++#line 1248 "dtc-lexer.lex.c"
+ 
+ 	case YY_END_OF_BUFFER:
+ 		{
+@@ -1365,7 +1374,6 @@ ECHO;
+ 			"fatal flex scanner internal error--no action found" );
+ 	} /* end of action switch */
+ 		} /* end of scanning one token */
+-	} /* end of user's declarations */
+ } /* end of yylex */
+ 
+ /* yy_get_next_buffer - try to read in a new buffer
+@@ -1421,21 +1429,21 @@ static int yy_get_next_buffer (void)
+ 
+ 	else
+ 		{
+-			yy_size_t num_to_read =
++			int num_to_read =
+ 			YY_CURRENT_BUFFER_LVALUE->yy_buf_size - number_to_move - 1;
+ 
+ 		while ( num_to_read <= 0 )
+ 			{ /* Not enough room in the buffer - grow it. */
+ 
+ 			/* just a shorter name for the current buffer */
+-			YY_BUFFER_STATE b = YY_CURRENT_BUFFER_LVALUE;
++			YY_BUFFER_STATE b = YY_CURRENT_BUFFER;
+ 
+ 			int yy_c_buf_p_offset =
+ 				(int) ((yy_c_buf_p) - b->yy_ch_buf);
+ 
+ 			if ( b->yy_is_our_buffer )
+ 				{
+-				yy_size_t new_size = b->yy_buf_size * 2;
++				int new_size = b->yy_buf_size * 2;
+ 
+ 				if ( new_size <= 0 )
+ 					b->yy_buf_size += b->yy_buf_size / 8;
+@@ -1466,7 +1474,7 @@ static int yy_get_next_buffer (void)
+ 
+ 		/* Read in more data. */
+ 		YY_INPUT( (&YY_CURRENT_BUFFER_LVALUE->yy_ch_buf[number_to_move]),
+-			(yy_n_chars), num_to_read );
++			(yy_n_chars), (size_t) num_to_read );
+ 
+ 		YY_CURRENT_BUFFER_LVALUE->yy_n_chars = (yy_n_chars);
+ 		}
+@@ -1528,7 +1536,7 @@ static int yy_get_next_buffer (void)
+ 		while ( yy_chk[yy_base[yy_current_state] + yy_c] != yy_current_state )
+ 			{
+ 			yy_current_state = (int) yy_def[yy_current_state];
+-			if ( yy_current_state >= 159 )
++			if ( yy_current_state >= 166 )
+ 				yy_c = yy_meta[(unsigned int) yy_c];
+ 			}
+ 		yy_current_state = yy_nxt[yy_base[yy_current_state] + (unsigned int) yy_c];
+@@ -1556,13 +1564,13 @@ static int yy_get_next_buffer (void)
+ 	while ( yy_chk[yy_base[yy_current_state] + yy_c] != yy_current_state )
+ 		{
+ 		yy_current_state = (int) yy_def[yy_current_state];
+-		if ( yy_current_state >= 159 )
++		if ( yy_current_state >= 166 )
+ 			yy_c = yy_meta[(unsigned int) yy_c];
+ 		}
+ 	yy_current_state = yy_nxt[yy_base[yy_current_state] + (unsigned int) yy_c];
+-	yy_is_jam = (yy_current_state == 158);
++	yy_is_jam = (yy_current_state == 165);
+ 
+-		return yy_is_jam ? 0 : yy_current_state;
++	return yy_is_jam ? 0 : yy_current_state;
+ }
+ 
+ #ifndef YY_NO_INPUT
+@@ -1589,7 +1597,7 @@ static int yy_get_next_buffer (void)
+ 
+ 		else
+ 			{ /* need more input */
+-			yy_size_t offset = (yy_c_buf_p) - (yytext_ptr);
++			int offset = (yy_c_buf_p) - (yytext_ptr);
+ 			++(yy_c_buf_p);
+ 
+ 			switch ( yy_get_next_buffer(  ) )
+@@ -1863,7 +1871,7 @@ void yypop_buffer_state (void)
+  */
+ static void yyensure_buffer_stack (void)
+ {
+-	yy_size_t num_to_alloc;
++	int num_to_alloc;
+     
+ 	if (!(yy_buffer_stack)) {
+ 
+@@ -1960,12 +1968,12 @@ YY_BUFFER_STATE yy_scan_string (yyconst
+  * 
+  * @return the newly allocated buffer state object.
+  */
+-YY_BUFFER_STATE yy_scan_bytes  (yyconst char * yybytes, yy_size_t  _yybytes_len )
++YY_BUFFER_STATE yy_scan_bytes  (yyconst char * yybytes, int  _yybytes_len )
+ {
+ 	YY_BUFFER_STATE b;
+ 	char *buf;
+ 	yy_size_t n;
+-	yy_size_t i;
++	int i;
+     
+ 	/* Get memory for full buffer, including space for trailing EOB's. */
+ 	n = _yybytes_len + 2;
+@@ -2047,7 +2055,7 @@ FILE *yyget_out  (void)
+ /** Get the length of the current token.
+  * 
+  */
+-yy_size_t yyget_leng  (void)
++int yyget_leng  (void)
+ {
+         return yyleng;
+ }
+@@ -2195,7 +2203,7 @@ void yyfree (void * ptr )
+ 
+ #define YYTABLES_NAME "yytables"
+ 
+-#line 254 "dtc-lexer.l"
++#line 260 "dtc-lexer.l"
+ 
+ 
+ 
+--- a/scripts/dtc/dtc-parser.tab.c_shipped
++++ b/scripts/dtc/dtc-parser.tab.c_shipped
+@@ -1,19 +1,19 @@
+-/* A Bison parser, made by GNU Bison 3.0.2.  */
++/* A Bison parser, made by GNU Bison 2.5.  */
+ 
+ /* Bison implementation for Yacc-like parsers in C
+-
+-   Copyright (C) 1984, 1989-1990, 2000-2013 Free Software Foundation, Inc.
+-
++   
++      Copyright (C) 1984, 1989-1990, 2000-2011 Free Software Foundation, Inc.
++   
+    This program is free software: you can redistribute it and/or modify
+    it under the terms of the GNU General Public License as published by
+    the Free Software Foundation, either version 3 of the License, or
+    (at your option) any later version.
+-
++   
+    This program is distributed in the hope that it will be useful,
+    but WITHOUT ANY WARRANTY; without even the implied warranty of
+    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+    GNU General Public License for more details.
+-
++   
+    You should have received a copy of the GNU General Public License
+    along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
+ 
+@@ -26,7 +26,7 @@
+    special exception, which will cause the skeleton and the resulting
+    Bison output files to be licensed under the GNU General Public
+    License without this special exception.
+-
++   
+    This special exception was added by the Free Software Foundation in
+    version 2.2 of Bison.  */
+ 
+@@ -44,7 +44,7 @@
+ #define YYBISON 1
+ 
+ /* Bison version.  */
+-#define YYBISON_VERSION "3.0.2"
++#define YYBISON_VERSION "2.5"
+ 
+ /* Skeleton name.  */
+ #define YYSKELETON_NAME "yacc.c"
+@@ -58,13 +58,18 @@
+ /* Pull parsers.  */
+ #define YYPULL 1
+ 
++/* Using locations.  */
++#define YYLSP_NEEDED 1
+ 
+ 
+ 
+ /* Copy the first part of user declarations.  */
+-#line 20 "dtc-parser.y" /* yacc.c:339  */
++
++/* Line 268 of yacc.c  */
++#line 20 "dtc-parser.y"
+ 
+ #include <stdio.h>
++#include <inttypes.h>
+ 
+ #include "dtc.h"
+ #include "srcpos.h"
+@@ -80,15 +85,14 @@ extern void yyerror(char const *s);
+ extern struct boot_info *the_boot_info;
+ extern bool treesource_error;
+ 
+-#line 84 "dtc-parser.tab.c" /* yacc.c:339  */
+ 
+-# ifndef YY_NULLPTR
+-#  if defined __cplusplus && 201103L <= __cplusplus
+-#   define YY_NULLPTR nullptr
+-#  else
+-#   define YY_NULLPTR 0
+-#  endif
+-# endif
++/* Line 268 of yacc.c  */
++#line 91 "dtc-parser.tab.c"
++
++/* Enabling traces.  */
++#ifndef YYDEBUG
++# define YYDEBUG 0
++#endif
+ 
+ /* Enabling verbose error messages.  */
+ #ifdef YYERROR_VERBOSE
+@@ -98,53 +102,51 @@ extern bool treesource_error;
+ # define YYERROR_VERBOSE 0
+ #endif
+ 
+-/* In a future release of Bison, this section will be replaced
+-   by #include "dtc-parser.tab.h".  */
+-#ifndef YY_YY_DTC_PARSER_TAB_H_INCLUDED
+-# define YY_YY_DTC_PARSER_TAB_H_INCLUDED
+-/* Debug traces.  */
+-#ifndef YYDEBUG
+-# define YYDEBUG 0
+-#endif
+-#if YYDEBUG
+-extern int yydebug;
++/* Enabling the token table.  */
++#ifndef YYTOKEN_TABLE
++# define YYTOKEN_TABLE 0
+ #endif
+ 
+-/* Token type.  */
++
++/* Tokens.  */
+ #ifndef YYTOKENTYPE
+ # define YYTOKENTYPE
+-  enum yytokentype
+-  {
+-    DT_V1 = 258,
+-    DT_MEMRESERVE = 259,
+-    DT_LSHIFT = 260,
+-    DT_RSHIFT = 261,
+-    DT_LE = 262,
+-    DT_GE = 263,
+-    DT_EQ = 264,
+-    DT_NE = 265,
+-    DT_AND = 266,
+-    DT_OR = 267,
+-    DT_BITS = 268,
+-    DT_DEL_PROP = 269,
+-    DT_DEL_NODE = 270,
+-    DT_PROPNODENAME = 271,
+-    DT_LITERAL = 272,
+-    DT_CHAR_LITERAL = 273,
+-    DT_BYTE = 274,
+-    DT_STRING = 275,
+-    DT_LABEL = 276,
+-    DT_REF = 277,
+-    DT_INCBIN = 278
+-  };
++   /* Put the tokens into the symbol table, so that GDB and other debuggers
++      know about them.  */
++   enum yytokentype {
++     DT_V1 = 258,
++     DT_PLUGIN = 259,
++     DT_MEMRESERVE = 260,
++     DT_LSHIFT = 261,
++     DT_RSHIFT = 262,
++     DT_LE = 263,
++     DT_GE = 264,
++     DT_EQ = 265,
++     DT_NE = 266,
++     DT_AND = 267,
++     DT_OR = 268,
++     DT_BITS = 269,
++     DT_DEL_PROP = 270,
++     DT_DEL_NODE = 271,
++     DT_PROPNODENAME = 272,
++     DT_LITERAL = 273,
++     DT_CHAR_LITERAL = 274,
++     DT_BYTE = 275,
++     DT_STRING = 276,
++     DT_LABEL = 277,
++     DT_REF = 278,
++     DT_INCBIN = 279
++   };
+ #endif
+ 
+-/* Value type.  */
++
++
+ #if ! defined YYSTYPE && ! defined YYSTYPE_IS_DECLARED
+-typedef union YYSTYPE YYSTYPE;
+-union YYSTYPE
++typedef union YYSTYPE
+ {
+-#line 38 "dtc-parser.y" /* yacc.c:355  */
++
++/* Line 293 of yacc.c  */
++#line 39 "dtc-parser.y"
+ 
+ 	char *propnodename;
+ 	char *labelref;
+@@ -162,37 +164,37 @@ union YYSTYPE
+ 	struct node *nodelist;
+ 	struct reserve_info *re;
+ 	uint64_t integer;
++	int is_plugin;
+ 
+-#line 167 "dtc-parser.tab.c" /* yacc.c:355  */
+-};
++
++
++/* Line 293 of yacc.c  */
++#line 173 "dtc-parser.tab.c"
++} YYSTYPE;
+ # define YYSTYPE_IS_TRIVIAL 1
++# define yystype YYSTYPE /* obsolescent; will be withdrawn */
+ # define YYSTYPE_IS_DECLARED 1
+ #endif
+ 
+-/* Location type.  */
+ #if ! defined YYLTYPE && ! defined YYLTYPE_IS_DECLARED
+-typedef struct YYLTYPE YYLTYPE;
+-struct YYLTYPE
++typedef struct YYLTYPE
+ {
+   int first_line;
+   int first_column;
+   int last_line;
+   int last_column;
+-};
++} YYLTYPE;
++# define yyltype YYLTYPE /* obsolescent; will be withdrawn */
+ # define YYLTYPE_IS_DECLARED 1
+ # define YYLTYPE_IS_TRIVIAL 1
+ #endif
+ 
+ 
+-extern YYSTYPE yylval;
+-extern YYLTYPE yylloc;
+-int yyparse (void);
+-
+-#endif /* !YY_YY_DTC_PARSER_TAB_H_INCLUDED  */
+-
+ /* Copy the second part of user declarations.  */
+ 
+-#line 196 "dtc-parser.tab.c" /* yacc.c:358  */
++
++/* Line 343 of yacc.c  */
++#line 198 "dtc-parser.tab.c"
+ 
+ #ifdef short
+ # undef short
+@@ -206,8 +208,11 @@ typedef unsigned char yytype_uint8;
+ 
+ #ifdef YYTYPE_INT8
+ typedef YYTYPE_INT8 yytype_int8;
+-#else
++#elif (defined __STDC__ || defined __C99__FUNC__ \
++     || defined __cplusplus || defined _MSC_VER)
+ typedef signed char yytype_int8;
++#else
++typedef short int yytype_int8;
+ #endif
+ 
+ #ifdef YYTYPE_UINT16
+@@ -227,7 +232,8 @@ typedef short int yytype_int16;
+ #  define YYSIZE_T __SIZE_TYPE__
+ # elif defined size_t
+ #  define YYSIZE_T size_t
+-# elif ! defined YYSIZE_T
++# elif ! defined YYSIZE_T && (defined __STDC__ || defined __C99__FUNC__ \
++     || defined __cplusplus || defined _MSC_VER)
+ #  include <stddef.h> /* INFRINGES ON USER NAME SPACE */
+ #  define YYSIZE_T size_t
+ # else
+@@ -241,68 +247,39 @@ typedef short int yytype_int16;
+ # if defined YYENABLE_NLS && YYENABLE_NLS
+ #  if ENABLE_NLS
+ #   include <libintl.h> /* INFRINGES ON USER NAME SPACE */
+-#   define YY_(Msgid) dgettext ("bison-runtime", Msgid)
++#   define YY_(msgid) dgettext ("bison-runtime", msgid)
+ #  endif
+ # endif
+ # ifndef YY_
+-#  define YY_(Msgid) Msgid
+-# endif
+-#endif
+-
+-#ifndef YY_ATTRIBUTE
+-# if (defined __GNUC__                                               \
+-      && (2 < __GNUC__ || (__GNUC__ == 2 && 96 <= __GNUC_MINOR__)))  \
+-     || defined __SUNPRO_C && 0x5110 <= __SUNPRO_C
+-#  define YY_ATTRIBUTE(Spec) __attribute__(Spec)
+-# else
+-#  define YY_ATTRIBUTE(Spec) /* empty */
+-# endif
+-#endif
+-
+-#ifndef YY_ATTRIBUTE_PURE
+-# define YY_ATTRIBUTE_PURE   YY_ATTRIBUTE ((__pure__))
+-#endif
+-
+-#ifndef YY_ATTRIBUTE_UNUSED
+-# define YY_ATTRIBUTE_UNUSED YY_ATTRIBUTE ((__unused__))
+-#endif
+-
+-#if !defined _Noreturn \
+-     && (!defined __STDC_VERSION__ || __STDC_VERSION__ < 201112)
+-# if defined _MSC_VER && 1200 <= _MSC_VER
+-#  define _Noreturn __declspec (noreturn)
+-# else
+-#  define _Noreturn YY_ATTRIBUTE ((__noreturn__))
++#  define YY_(msgid) msgid
+ # endif
+ #endif
+ 
+ /* Suppress unused-variable warnings by "using" E.  */
+ #if ! defined lint || defined __GNUC__
+-# define YYUSE(E) ((void) (E))
++# define YYUSE(e) ((void) (e))
+ #else
+-# define YYUSE(E) /* empty */
++# define YYUSE(e) /* empty */
+ #endif
+ 
+-#if defined __GNUC__ && 407 <= __GNUC__ * 100 + __GNUC_MINOR__
+-/* Suppress an incorrect diagnostic about yylval being uninitialized.  */
+-# define YY_IGNORE_MAYBE_UNINITIALIZED_BEGIN \
+-    _Pragma ("GCC diagnostic push") \
+-    _Pragma ("GCC diagnostic ignored \"-Wuninitialized\"")\
+-    _Pragma ("GCC diagnostic ignored \"-Wmaybe-uninitialized\"")
+-# define YY_IGNORE_MAYBE_UNINITIALIZED_END \
+-    _Pragma ("GCC diagnostic pop")
++/* Identity function, used to suppress warnings about constant conditions.  */
++#ifndef lint
++# define YYID(n) (n)
+ #else
+-# define YY_INITIAL_VALUE(Value) Value
+-#endif
+-#ifndef YY_IGNORE_MAYBE_UNINITIALIZED_BEGIN
+-# define YY_IGNORE_MAYBE_UNINITIALIZED_BEGIN
+-# define YY_IGNORE_MAYBE_UNINITIALIZED_END
++#if (defined __STDC__ || defined __C99__FUNC__ \
++     || defined __cplusplus || defined _MSC_VER)
++static int
++YYID (int yyi)
++#else
++static int
++YYID (yyi)
++    int yyi;
+ #endif
+-#ifndef YY_INITIAL_VALUE
+-# define YY_INITIAL_VALUE(Value) /* Nothing. */
++{
++  return yyi;
++}
+ #endif
+ 
+-
+ #if ! defined yyoverflow || YYERROR_VERBOSE
+ 
+ /* The parser invokes alloca or malloc; define the necessary symbols.  */
+@@ -320,9 +297,9 @@ typedef short int yytype_int16;
+ #    define alloca _alloca
+ #   else
+ #    define YYSTACK_ALLOC alloca
+-#    if ! defined _ALLOCA_H && ! defined EXIT_SUCCESS
++#    if ! defined _ALLOCA_H && ! defined EXIT_SUCCESS && (defined __STDC__ || defined __C99__FUNC__ \
++     || defined __cplusplus || defined _MSC_VER)
+ #     include <stdlib.h> /* INFRINGES ON USER NAME SPACE */
+-      /* Use EXIT_SUCCESS as a witness for stdlib.h.  */
+ #     ifndef EXIT_SUCCESS
+ #      define EXIT_SUCCESS 0
+ #     endif
+@@ -332,8 +309,8 @@ typedef short int yytype_int16;
+ # endif
+ 
+ # ifdef YYSTACK_ALLOC
+-   /* Pacify GCC's 'empty if-body' warning.  */
+-#  define YYSTACK_FREE(Ptr) do { /* empty */; } while (0)
++   /* Pacify GCC's `empty if-body' warning.  */
++#  define YYSTACK_FREE(Ptr) do { /* empty */; } while (YYID (0))
+ #  ifndef YYSTACK_ALLOC_MAXIMUM
+     /* The OS might guarantee only one guard page at the bottom of the stack,
+        and a page size can be as small as 4096 bytes.  So we cannot safely
+@@ -349,7 +326,7 @@ typedef short int yytype_int16;
+ #  endif
+ #  if (defined __cplusplus && ! defined EXIT_SUCCESS \
+        && ! ((defined YYMALLOC || defined malloc) \
+-             && (defined YYFREE || defined free)))
++	     && (defined YYFREE || defined free)))
+ #   include <stdlib.h> /* INFRINGES ON USER NAME SPACE */
+ #   ifndef EXIT_SUCCESS
+ #    define EXIT_SUCCESS 0
+@@ -357,13 +334,15 @@ typedef short int yytype_int16;
+ #  endif
+ #  ifndef YYMALLOC
+ #   define YYMALLOC malloc
+-#   if ! defined malloc && ! defined EXIT_SUCCESS
++#   if ! defined malloc && ! defined EXIT_SUCCESS && (defined __STDC__ || defined __C99__FUNC__ \
++     || defined __cplusplus || defined _MSC_VER)
+ void *malloc (YYSIZE_T); /* INFRINGES ON USER NAME SPACE */
+ #   endif
+ #  endif
+ #  ifndef YYFREE
+ #   define YYFREE free
+-#   if ! defined free && ! defined EXIT_SUCCESS
++#   if ! defined free && ! defined EXIT_SUCCESS && (defined __STDC__ || defined __C99__FUNC__ \
++     || defined __cplusplus || defined _MSC_VER)
+ void free (void *); /* INFRINGES ON USER NAME SPACE */
+ #   endif
+ #  endif
+@@ -373,8 +352,8 @@ void free (void *); /* INFRINGES ON USER
+ 
+ #if (! defined yyoverflow \
+      && (! defined __cplusplus \
+-         || (defined YYLTYPE_IS_TRIVIAL && YYLTYPE_IS_TRIVIAL \
+-             && defined YYSTYPE_IS_TRIVIAL && YYSTYPE_IS_TRIVIAL)))
++	 || (defined YYLTYPE_IS_TRIVIAL && YYLTYPE_IS_TRIVIAL \
++	     && defined YYSTYPE_IS_TRIVIAL && YYSTYPE_IS_TRIVIAL)))
+ 
+ /* A type that is properly aligned for any stack member.  */
+ union yyalloc
+@@ -400,35 +379,35 @@ union yyalloc
+    elements in the stack, and YYPTR gives the new location of the
+    stack.  Advance YYPTR to a properly aligned location for the next
+    stack.  */
+-# define YYSTACK_RELOCATE(Stack_alloc, Stack)                           \
+-    do                                                                  \
+-      {                                                                 \
+-        YYSIZE_T yynewbytes;                                            \
+-        YYCOPY (&yyptr->Stack_alloc, Stack, yysize);                    \
+-        Stack = &yyptr->Stack_alloc;                                    \
+-        yynewbytes = yystacksize * sizeof (*Stack) + YYSTACK_GAP_MAXIMUM; \
+-        yyptr += yynewbytes / sizeof (*yyptr);                          \
+-      }                                                                 \
+-    while (0)
++# define YYSTACK_RELOCATE(Stack_alloc, Stack)				\
++    do									\
++      {									\
++	YYSIZE_T yynewbytes;						\
++	YYCOPY (&yyptr->Stack_alloc, Stack, yysize);			\
++	Stack = &yyptr->Stack_alloc;					\
++	yynewbytes = yystacksize * sizeof (*Stack) + YYSTACK_GAP_MAXIMUM; \
++	yyptr += yynewbytes / sizeof (*yyptr);				\
++      }									\
++    while (YYID (0))
+ 
+ #endif
+ 
+ #if defined YYCOPY_NEEDED && YYCOPY_NEEDED
+-/* Copy COUNT objects from SRC to DST.  The source and destination do
++/* Copy COUNT objects from FROM to TO.  The source and destination do
+    not overlap.  */
+ # ifndef YYCOPY
+ #  if defined __GNUC__ && 1 < __GNUC__
+-#   define YYCOPY(Dst, Src, Count) \
+-      __builtin_memcpy (Dst, Src, (Count) * sizeof (*(Src)))
++#   define YYCOPY(To, From, Count) \
++      __builtin_memcpy (To, From, (Count) * sizeof (*(From)))
+ #  else
+-#   define YYCOPY(Dst, Src, Count)              \
+-      do                                        \
+-        {                                       \
+-          YYSIZE_T yyi;                         \
+-          for (yyi = 0; yyi < (Count); yyi++)   \
+-            (Dst)[yyi] = (Src)[yyi];            \
+-        }                                       \
+-      while (0)
++#   define YYCOPY(To, From, Count)		\
++      do					\
++	{					\
++	  YYSIZE_T yyi;				\
++	  for (yyi = 0; yyi < (Count); yyi++)	\
++	    (To)[yyi] = (From)[yyi];		\
++	}					\
++      while (YYID (0))
+ #  endif
+ # endif
+ #endif /* !YYCOPY_NEEDED */
+@@ -439,39 +418,37 @@ union yyalloc
+ #define YYLAST   136
+ 
+ /* YYNTOKENS -- Number of terminals.  */
+-#define YYNTOKENS  47
++#define YYNTOKENS  48
+ /* YYNNTS -- Number of nonterminals.  */
+-#define YYNNTS  28
++#define YYNNTS  29
+ /* YYNRULES -- Number of rules.  */
+-#define YYNRULES  80
+-/* YYNSTATES -- Number of states.  */
+-#define YYNSTATES  144
++#define YYNRULES  82
++/* YYNRULES -- Number of states.  */
++#define YYNSTATES  147
+ 
+-/* YYTRANSLATE[YYX] -- Symbol number corresponding to YYX as returned
+-   by yylex, with out-of-bounds checking.  */
++/* YYTRANSLATE(YYLEX) -- Bison symbol number corresponding to YYLEX.  */
+ #define YYUNDEFTOK  2
+-#define YYMAXUTOK   278
++#define YYMAXUTOK   279
+ 
+-#define YYTRANSLATE(YYX)                                                \
++#define YYTRANSLATE(YYX)						\
+   ((unsigned int) (YYX) <= YYMAXUTOK ? yytranslate[YYX] : YYUNDEFTOK)
+ 
+-/* YYTRANSLATE[TOKEN-NUM] -- Symbol number corresponding to TOKEN-NUM
+-   as returned by yylex, without out-of-bounds checking.  */
++/* YYTRANSLATE[YYLEX] -- Bison symbol number corresponding to YYLEX.  */
+ static const yytype_uint8 yytranslate[] =
+ {
+        0,     2,     2,     2,     2,     2,     2,     2,     2,     2,
+        2,     2,     2,     2,     2,     2,     2,     2,     2,     2,
+        2,     2,     2,     2,     2,     2,     2,     2,     2,     2,
+-       2,     2,     2,    46,     2,     2,     2,    44,    40,     2,
+-      32,    34,    43,    41,    33,    42,     2,    25,     2,     2,
+-       2,     2,     2,     2,     2,     2,     2,     2,    37,    24,
+-      35,    28,    29,    36,     2,     2,     2,     2,     2,     2,
++       2,     2,     2,    47,     2,     2,     2,    45,    41,     2,
++      33,    35,    44,    42,    34,    43,     2,    26,     2,     2,
++       2,     2,     2,     2,     2,     2,     2,     2,    38,    25,
++      36,    29,    30,    37,     2,     2,     2,     2,     2,     2,
+        2,     2,     2,     2,     2,     2,     2,     2,     2,     2,
+        2,     2,     2,     2,     2,     2,     2,     2,     2,     2,
+-       2,    30,     2,    31,    39,     2,     2,     2,     2,     2,
++       2,    31,     2,    32,    40,     2,     2,     2,     2,     2,
+        2,     2,     2,     2,     2,     2,     2,     2,     2,     2,
+        2,     2,     2,     2,     2,     2,     2,     2,     2,     2,
+-       2,     2,     2,    26,    38,    27,    45,     2,     2,     2,
++       2,     2,     2,    27,    39,    28,    46,     2,     2,     2,
+        2,     2,     2,     2,     2,     2,     2,     2,     2,     2,
+        2,     2,     2,     2,     2,     2,     2,     2,     2,     2,
+        2,     2,     2,     2,     2,     2,     2,     2,     2,     2,
+@@ -486,292 +463,335 @@ static const yytype_uint8 yytranslate[]
+        2,     2,     2,     2,     2,     2,     2,     2,     2,     2,
+        2,     2,     2,     2,     2,     2,     1,     2,     3,     4,
+        5,     6,     7,     8,     9,    10,    11,    12,    13,    14,
+-      15,    16,    17,    18,    19,    20,    21,    22,    23
++      15,    16,    17,    18,    19,    20,    21,    22,    23,    24
+ };
+ 
+ #if YYDEBUG
+-  /* YYRLINE[YYN] -- Source line where rule number YYN was defined.  */
++/* YYPRHS[YYN] -- Index of the first RHS symbol of rule number YYN in
++   YYRHS.  */
++static const yytype_uint16 yyprhs[] =
++{
++       0,     0,     3,     9,    10,    13,    14,    17,    22,    25,
++      28,    32,    37,    41,    46,    52,    53,    56,    61,    64,
++      68,    71,    74,    78,    83,    86,    96,   102,   105,   106,
++     109,   112,   116,   118,   121,   124,   127,   129,   131,   135,
++     137,   139,   145,   147,   151,   153,   157,   159,   163,   165,
++     169,   171,   175,   177,   181,   185,   187,   191,   195,   199,
++     203,   207,   211,   213,   217,   221,   223,   227,   231,   235,
++     237,   239,   242,   245,   248,   249,   252,   255,   256,   259,
++     262,   265,   269
++};
++
++/* YYRHS -- A `-1'-separated list of the rules' RHS.  */
++static const yytype_int8 yyrhs[] =
++{
++      49,     0,    -1,     3,    25,    50,    51,    53,    -1,    -1,
++       4,    25,    -1,    -1,    52,    51,    -1,     5,    60,    60,
++      25,    -1,    22,    52,    -1,    26,    54,    -1,    53,    26,
++      54,    -1,    53,    22,    23,    54,    -1,    53,    23,    54,
++      -1,    53,    16,    23,    25,    -1,    27,    55,    75,    28,
++      25,    -1,    -1,    55,    56,    -1,    17,    29,    57,    25,
++      -1,    17,    25,    -1,    15,    17,    25,    -1,    22,    56,
++      -1,    58,    21,    -1,    58,    59,    30,    -1,    58,    31,
++      74,    32,    -1,    58,    23,    -1,    58,    24,    33,    21,
++      34,    60,    34,    60,    35,    -1,    58,    24,    33,    21,
++      35,    -1,    57,    22,    -1,    -1,    57,    34,    -1,    58,
++      22,    -1,    14,    18,    36,    -1,    36,    -1,    59,    60,
++      -1,    59,    23,    -1,    59,    22,    -1,    18,    -1,    19,
++      -1,    33,    61,    35,    -1,    62,    -1,    63,    -1,    63,
++      37,    61,    38,    62,    -1,    64,    -1,    63,    13,    64,
++      -1,    65,    -1,    64,    12,    65,    -1,    66,    -1,    65,
++      39,    66,    -1,    67,    -1,    66,    40,    67,    -1,    68,
++      -1,    67,    41,    68,    -1,    69,    -1,    68,    10,    69,
++      -1,    68,    11,    69,    -1,    70,    -1,    69,    36,    70,
++      -1,    69,    30,    70,    -1,    69,     8,    70,    -1,    69,
++       9,    70,    -1,    70,     6,    71,    -1,    70,     7,    71,
++      -1,    71,    -1,    71,    42,    72,    -1,    71,    43,    72,
++      -1,    72,    -1,    72,    44,    73,    -1,    72,    26,    73,
++      -1,    72,    45,    73,    -1,    73,    -1,    60,    -1,    43,
++      73,    -1,    46,    73,    -1,    47,    73,    -1,    -1,    74,
++      20,    -1,    74,    22,    -1,    -1,    76,    75,    -1,    76,
++      56,    -1,    17,    54,    -1,    16,    17,    25,    -1,    22,
++      76,    -1
++};
++
++/* YYRLINE[YYN] -- source line where rule number YYN was defined.  */
+ static const yytype_uint16 yyrline[] =
+ {
+-       0,   104,   104,   113,   116,   123,   127,   135,   139,   144,
+-     155,   165,   180,   188,   191,   198,   202,   206,   210,   218,
+-     222,   226,   230,   234,   250,   260,   268,   271,   275,   282,
+-     298,   303,   322,   336,   343,   344,   345,   352,   356,   357,
+-     361,   362,   366,   367,   371,   372,   376,   377,   381,   382,
+-     386,   387,   388,   392,   393,   394,   395,   396,   400,   401,
+-     402,   406,   407,   408,   412,   413,   414,   415,   419,   420,
+-     421,   422,   427,   430,   434,   442,   445,   449,   457,   461,
+-     465
++       0,   108,   108,   119,   122,   130,   133,   140,   144,   152,
++     156,   161,   172,   182,   197,   205,   208,   215,   219,   223,
++     227,   235,   239,   243,   247,   251,   267,   277,   285,   288,
++     292,   299,   315,   320,   339,   353,   360,   361,   362,   369,
++     373,   374,   378,   379,   383,   384,   388,   389,   393,   394,
++     398,   399,   403,   404,   405,   409,   410,   411,   412,   413,
++     417,   418,   419,   423,   424,   425,   429,   430,   431,   432,
++     436,   437,   438,   439,   444,   447,   451,   459,   462,   466,
++     474,   478,   482
+ };
+ #endif
+ 
+-#if YYDEBUG || YYERROR_VERBOSE || 0
++#if YYDEBUG || YYERROR_VERBOSE || YYTOKEN_TABLE
+ /* YYTNAME[SYMBOL-NUM] -- String name of the symbol SYMBOL-NUM.
+    First, the terminals, then, starting at YYNTOKENS, nonterminals.  */
+ static const char *const yytname[] =
+ {
+-  "$end", "error", "$undefined", "DT_V1", "DT_MEMRESERVE", "DT_LSHIFT",
+-  "DT_RSHIFT", "DT_LE", "DT_GE", "DT_EQ", "DT_NE", "DT_AND", "DT_OR",
+-  "DT_BITS", "DT_DEL_PROP", "DT_DEL_NODE", "DT_PROPNODENAME", "DT_LITERAL",
+-  "DT_CHAR_LITERAL", "DT_BYTE", "DT_STRING", "DT_LABEL", "DT_REF",
+-  "DT_INCBIN", "';'", "'/'", "'{'", "'}'", "'='", "'>'", "'['", "']'",
+-  "'('", "','", "')'", "'<'", "'?'", "':'", "'|'", "'^'", "'&'", "'+'",
+-  "'-'", "'*'", "'%'", "'~'", "'!'", "$accept", "sourcefile",
+-  "memreserves", "memreserve", "devicetree", "nodedef", "proplist",
+-  "propdef", "propdata", "propdataprefix", "arrayprefix", "integer_prim",
+-  "integer_expr", "integer_trinary", "integer_or", "integer_and",
+-  "integer_bitor", "integer_bitxor", "integer_bitand", "integer_eq",
+-  "integer_rela", "integer_shift", "integer_add", "integer_mul",
+-  "integer_unary", "bytestring", "subnodes", "subnode", YY_NULLPTR
++  "$end", "error", "$undefined", "DT_V1", "DT_PLUGIN", "DT_MEMRESERVE",
++  "DT_LSHIFT", "DT_RSHIFT", "DT_LE", "DT_GE", "DT_EQ", "DT_NE", "DT_AND",
++  "DT_OR", "DT_BITS", "DT_DEL_PROP", "DT_DEL_NODE", "DT_PROPNODENAME",
++  "DT_LITERAL", "DT_CHAR_LITERAL", "DT_BYTE", "DT_STRING", "DT_LABEL",
++  "DT_REF", "DT_INCBIN", "';'", "'/'", "'{'", "'}'", "'='", "'>'", "'['",
++  "']'", "'('", "','", "')'", "'<'", "'?'", "':'", "'|'", "'^'", "'&'",
++  "'+'", "'-'", "'*'", "'%'", "'~'", "'!'", "$accept", "sourcefile",
++  "plugindecl", "memreserves", "memreserve", "devicetree", "nodedef",
++  "proplist", "propdef", "propdata", "propdataprefix", "arrayprefix",
++  "integer_prim", "integer_expr", "integer_trinary", "integer_or",
++  "integer_and", "integer_bitor", "integer_bitxor", "integer_bitand",
++  "integer_eq", "integer_rela", "integer_shift", "integer_add",
++  "integer_mul", "integer_unary", "bytestring", "subnodes", "subnode", 0
+ };
+ #endif
+ 
+ # ifdef YYPRINT
+-/* YYTOKNUM[NUM] -- (External) token number corresponding to the
+-   (internal) symbol number NUM (which must be that of a token).  */
++/* YYTOKNUM[YYLEX-NUM] -- Internal token number corresponding to
++   token YYLEX-NUM.  */
+ static const yytype_uint16 yytoknum[] =
+ {
+        0,   256,   257,   258,   259,   260,   261,   262,   263,   264,
+      265,   266,   267,   268,   269,   270,   271,   272,   273,   274,
+-     275,   276,   277,   278,    59,    47,   123,   125,    61,    62,
+-      91,    93,    40,    44,    41,    60,    63,    58,   124,    94,
+-      38,    43,    45,    42,    37,   126,    33
++     275,   276,   277,   278,   279,    59,    47,   123,   125,    61,
++      62,    91,    93,    40,    44,    41,    60,    63,    58,   124,
++      94,    38,    43,    45,    42,    37,   126,    33
+ };
+ # endif
+ 
+-#define YYPACT_NINF -81
+-
+-#define yypact_value_is_default(Yystate) \
+-  (!!((Yystate) == (-81)))
+-
+-#define YYTABLE_NINF -1
+-
+-#define yytable_value_is_error(Yytable_value) \
+-  0
+-
+-  /* YYPACT[STATE-NUM] -- Index in YYTABLE of the portion describing
+-     STATE-NUM.  */
+-static const yytype_int8 yypact[] =
++/* YYR1[YYN] -- Symbol number of symbol that rule YYN derives.  */
++static const yytype_uint8 yyr1[] =
+ {
+-      16,   -11,    21,    10,   -81,    25,    10,    19,    10,   -81,
+-     -81,    -9,    25,   -81,     2,    51,   -81,    -9,    -9,    -9,
+-     -81,     1,   -81,    -6,    50,    14,    28,    29,    36,     3,
+-      58,    44,    -3,   -81,    47,   -81,   -81,    65,    68,     2,
+-       2,   -81,   -81,   -81,   -81,    -9,    -9,    -9,    -9,    -9,
+-      -9,    -9,    -9,    -9,    -9,    -9,    -9,    -9,    -9,    -9,
+-      -9,    -9,    -9,    -9,   -81,    63,    69,     2,   -81,   -81,
+-      50,    57,    14,    28,    29,    36,     3,     3,    58,    58,
+-      58,    58,    44,    44,    -3,    -3,   -81,   -81,   -81,    79,
+-      80,    -8,    63,   -81,    72,    63,   -81,   -81,    -9,    76,
+-      77,   -81,   -81,   -81,   -81,   -81,    78,   -81,   -81,   -81,
+-     -81,   -81,    35,     4,   -81,   -81,   -81,   -81,    86,   -81,
+-     -81,   -81,    73,   -81,   -81,    33,    71,    84,    39,   -81,
+-     -81,   -81,   -81,   -81,    41,   -81,   -81,   -81,    25,   -81,
+-      74,    25,    75,   -81
++       0,    48,    49,    50,    50,    51,    51,    52,    52,    53,
++      53,    53,    53,    53,    54,    55,    55,    56,    56,    56,
++      56,    57,    57,    57,    57,    57,    57,    57,    58,    58,
++      58,    59,    59,    59,    59,    59,    60,    60,    60,    61,
++      62,    62,    63,    63,    64,    64,    65,    65,    66,    66,
++      67,    67,    68,    68,    68,    69,    69,    69,    69,    69,
++      70,    70,    70,    71,    71,    71,    72,    72,    72,    72,
++      73,    73,    73,    73,    74,    74,    74,    75,    75,    75,
++      76,    76,    76
+ };
+ 
+-  /* YYDEFACT[STATE-NUM] -- Default reduction number in state STATE-NUM.
+-     Performed when YYTABLE does not specify something else to do.  Zero
+-     means the default is an error.  */
+-static const yytype_uint8 yydefact[] =
++/* YYR2[YYN] -- Number of symbols composing right hand side of rule YYN.  */
++static const yytype_uint8 yyr2[] =
+ {
+-       0,     0,     0,     3,     1,     0,     0,     0,     3,    34,
+-      35,     0,     0,     6,     0,     2,     4,     0,     0,     0,
+-      68,     0,    37,    38,    40,    42,    44,    46,    48,    50,
+-      53,    60,    63,    67,     0,    13,     7,     0,     0,     0,
+-       0,    69,    70,    71,    36,     0,     0,     0,     0,     0,
+-       0,     0,     0,     0,     0,     0,     0,     0,     0,     0,
+-       0,     0,     0,     0,     5,    75,     0,     0,    10,     8,
+-      41,     0,    43,    45,    47,    49,    51,    52,    56,    57,
+-      55,    54,    58,    59,    61,    62,    65,    64,    66,     0,
+-       0,     0,     0,    14,     0,    75,    11,     9,     0,     0,
+-       0,    16,    26,    78,    18,    80,     0,    77,    76,    39,
+-      17,    79,     0,     0,    12,    25,    15,    27,     0,    19,
+-      28,    22,     0,    72,    30,     0,     0,     0,     0,    33,
+-      32,    20,    31,    29,     0,    73,    74,    21,     0,    24,
+-       0,     0,     0,    23
++       0,     2,     5,     0,     2,     0,     2,     4,     2,     2,
++       3,     4,     3,     4,     5,     0,     2,     4,     2,     3,
++       2,     2,     3,     4,     2,     9,     5,     2,     0,     2,
++       2,     3,     1,     2,     2,     2,     1,     1,     3,     1,
++       1,     5,     1,     3,     1,     3,     1,     3,     1,     3,
++       1,     3,     1,     3,     3,     1,     3,     3,     3,     3,
++       3,     3,     1,     3,     3,     1,     3,     3,     3,     1,
++       1,     2,     2,     2,     0,     2,     2,     0,     2,     2,
++       2,     3,     2
+ };
+ 
+-  /* YYPGOTO[NTERM-NUM].  */
+-static const yytype_int8 yypgoto[] =
++/* YYDEFACT[STATE-NAME] -- Default reduction number in state STATE-NUM.
++   Performed when YYTABLE doesn't specify something else to do.  Zero
++   means the default is an error.  */
++static const yytype_uint8 yydefact[] =
+ {
+-     -81,   -81,   100,   104,   -81,   -38,   -81,   -80,   -81,   -81,
+-     -81,    -5,    66,    13,   -81,    70,    67,    81,    64,    82,
+-      37,    27,    34,    38,   -14,   -81,    22,    24
++       0,     0,     0,     3,     1,     0,     5,     4,     0,     0,
++       0,     5,    36,    37,     0,     0,     8,     0,     2,     6,
++       0,     0,     0,    70,     0,    39,    40,    42,    44,    46,
++      48,    50,    52,    55,    62,    65,    69,     0,    15,     9,
++       0,     0,     0,     0,    71,    72,    73,    38,     0,     0,
++       0,     0,     0,     0,     0,     0,     0,     0,     0,     0,
++       0,     0,     0,     0,     0,     0,     0,     7,    77,     0,
++       0,    12,    10,    43,     0,    45,    47,    49,    51,    53,
++      54,    58,    59,    57,    56,    60,    61,    63,    64,    67,
++      66,    68,     0,     0,     0,     0,    16,     0,    77,    13,
++      11,     0,     0,     0,    18,    28,    80,    20,    82,     0,
++      79,    78,    41,    19,    81,     0,     0,    14,    27,    17,
++      29,     0,    21,    30,    24,     0,    74,    32,     0,     0,
++       0,     0,    35,    34,    22,    33,    31,     0,    75,    76,
++      23,     0,    26,     0,     0,     0,    25
+ };
+ 
+-  /* YYDEFGOTO[NTERM-NUM].  */
++/* YYDEFGOTO[NTERM-NUM].  */
+ static const yytype_int16 yydefgoto[] =
+ {
+-      -1,     2,     7,     8,    15,    36,    65,    93,   112,   113,
+-     125,    20,    21,    22,    23,    24,    25,    26,    27,    28,
+-      29,    30,    31,    32,    33,   128,    94,    95
++      -1,     2,     6,    10,    11,    18,    39,    68,    96,   115,
++     116,   128,    23,    24,    25,    26,    27,    28,    29,    30,
++      31,    32,    33,    34,    35,    36,   131,    97,    98
+ };
+ 
+-  /* YYTABLE[YYPACT[STATE-NUM]] -- What to do in state STATE-NUM.  If
+-     positive, shift that token.  If negative, reduce the rule whose
+-     number is the opposite.  If YYTABLE_NINF, syntax error.  */
+-static const yytype_uint8 yytable[] =
++/* YYPACT[STATE-NUM] -- Index in YYTABLE of the portion describing
++   STATE-NUM.  */
++#define YYPACT_NINF -84
++static const yytype_int8 yypact[] =
+ {
+-      12,    68,    69,    41,    42,    43,    45,    34,     9,    10,
+-      53,    54,   104,     3,     5,   107,   101,   118,    35,     1,
+-     102,     4,    61,    11,   119,   120,   121,   122,    35,    97,
+-      46,     6,    55,    17,   123,    44,    18,    19,    56,   124,
+-      62,    63,     9,    10,    14,    51,    52,    86,    87,    88,
+-       9,    10,    48,   103,   129,   130,   115,    11,   135,   116,
+-     136,    47,   131,    57,    58,    11,    37,    49,   117,    50,
+-     137,    64,    38,    39,   138,   139,    40,    89,    90,    91,
+-      78,    79,    80,    81,    92,    59,    60,    66,    76,    77,
+-      67,    82,    83,    96,    98,    99,   100,    84,    85,   106,
+-     110,   111,   114,   126,   134,   127,   133,   141,    16,   143,
+-      13,   109,    71,    74,    72,    70,   105,   108,     0,     0,
+-     132,     0,     0,     0,     0,     0,     0,     0,     0,    73,
+-       0,     0,    75,   140,     0,     0,   142
++      15,   -12,    35,    42,   -84,    27,     9,   -84,    24,     9,
++      43,     9,   -84,   -84,   -10,    24,   -84,    60,    44,   -84,
++     -10,   -10,   -10,   -84,    55,   -84,    -7,    52,    53,    51,
++      54,    10,     2,    38,    37,    -4,   -84,    68,   -84,   -84,
++      71,    73,    60,    60,   -84,   -84,   -84,   -84,   -10,   -10,
++     -10,   -10,   -10,   -10,   -10,   -10,   -10,   -10,   -10,   -10,
++     -10,   -10,   -10,   -10,   -10,   -10,   -10,   -84,    56,    72,
++      60,   -84,   -84,    52,    61,    53,    51,    54,    10,     2,
++       2,    38,    38,    38,    38,    37,    37,    -4,    -4,   -84,
++     -84,   -84,    81,    83,    34,    56,   -84,    74,    56,   -84,
++     -84,   -10,    76,    78,   -84,   -84,   -84,   -84,   -84,    79,
++     -84,   -84,   -84,   -84,   -84,    -6,     3,   -84,   -84,   -84,
++     -84,    87,   -84,   -84,   -84,    75,   -84,   -84,    32,    70,
++      86,    36,   -84,   -84,   -84,   -84,   -84,    47,   -84,   -84,
++     -84,    24,   -84,    77,    24,    80,   -84
+ };
+ 
+-static const yytype_int16 yycheck[] =
++/* YYPGOTO[NTERM-NUM].  */
++static const yytype_int8 yypgoto[] =
+ {
+-       5,    39,    40,    17,    18,    19,    12,    12,    17,    18,
+-       7,     8,    92,    24,     4,    95,    24,    13,    26,     3,
+-      28,     0,    25,    32,    20,    21,    22,    23,    26,    67,
+-      36,    21,    29,    42,    30,    34,    45,    46,    35,    35,
+-      43,    44,    17,    18,    25,     9,    10,    61,    62,    63,
+-      17,    18,    38,    91,    21,    22,    21,    32,    19,    24,
+-      21,    11,    29,     5,     6,    32,    15,    39,    33,    40,
+-      31,    24,    21,    22,    33,    34,    25,    14,    15,    16,
+-      53,    54,    55,    56,    21,    41,    42,    22,    51,    52,
+-      22,    57,    58,    24,    37,    16,    16,    59,    60,    27,
+-      24,    24,    24,    17,    20,    32,    35,    33,     8,    34,
+-       6,    98,    46,    49,    47,    45,    92,    95,    -1,    -1,
+-     125,    -1,    -1,    -1,    -1,    -1,    -1,    -1,    -1,    48,
+-      -1,    -1,    50,   138,    -1,    -1,   141
++     -84,   -84,   -84,    98,   101,   -84,   -41,   -84,   -83,   -84,
++     -84,   -84,    -8,    63,    12,   -84,    66,    67,    65,    69,
++      82,    29,    18,    25,    26,   -17,   -84,    20,    28
+ };
+ 
+-  /* YYSTOS[STATE-NUM] -- The (internal number of the) accessing
+-     symbol of state STATE-NUM.  */
+-static const yytype_uint8 yystos[] =
++/* YYTABLE[YYPACT[STATE-NUM]].  What to do in state STATE-NUM.  If
++   positive, shift that token.  If negative, reduce the rule which
++   number is the opposite.  If YYTABLE_NINF, syntax error.  */
++#define YYTABLE_NINF -1
++static const yytype_uint8 yytable[] =
+ {
+-       0,     3,    48,    24,     0,     4,    21,    49,    50,    17,
+-      18,    32,    58,    50,    25,    51,    49,    42,    45,    46,
+-      58,    59,    60,    61,    62,    63,    64,    65,    66,    67,
+-      68,    69,    70,    71,    58,    26,    52,    15,    21,    22,
+-      25,    71,    71,    71,    34,    12,    36,    11,    38,    39,
+-      40,     9,    10,     7,     8,    29,    35,     5,     6,    41,
+-      42,    25,    43,    44,    24,    53,    22,    22,    52,    52,
+-      62,    59,    63,    64,    65,    66,    67,    67,    68,    68,
+-      68,    68,    69,    69,    70,    70,    71,    71,    71,    14,
+-      15,    16,    21,    54,    73,    74,    24,    52,    37,    16,
+-      16,    24,    28,    52,    54,    74,    27,    54,    73,    60,
+-      24,    24,    55,    56,    24,    21,    24,    33,    13,    20,
+-      21,    22,    23,    30,    35,    57,    17,    32,    72,    21,
+-      22,    29,    58,    35,    20,    19,    21,    31,    33,    34,
+-      58,    33,    58,    34
++      15,    71,    72,    44,    45,    46,    48,    37,    12,    13,
++      56,    57,   107,     3,     8,   110,   118,   121,     1,   119,
++      54,    55,    64,    14,   122,   123,   124,   125,   120,   100,
++      49,     9,    58,    20,   126,     4,    21,    22,    59,   127,
++      65,    66,    12,    13,    60,    61,     5,    89,    90,    91,
++      12,    13,     7,   106,   132,   133,   138,    14,   139,   104,
++      40,    38,   134,   105,    50,    14,    41,    42,   140,    17,
++      43,    92,    93,    94,    81,    82,    83,    84,    95,    62,
++      63,   141,   142,    79,    80,    85,    86,    38,    87,    88,
++      47,    52,    51,    67,    69,    53,    70,    99,   102,   101,
++     103,   113,   109,   114,   117,   129,   136,   137,   130,    19,
++      16,   144,    74,   112,    73,   146,    76,    75,   111,     0,
++     135,    77,     0,   108,     0,     0,     0,     0,     0,     0,
++       0,     0,     0,   143,     0,    78,   145
+ };
+ 
+-  /* YYR1[YYN] -- Symbol number of symbol that rule YYN derives.  */
+-static const yytype_uint8 yyr1[] =
++#define yypact_value_is_default(yystate) \
++  ((yystate) == (-84))
++
++#define yytable_value_is_error(yytable_value) \
++  YYID (0)
++
++static const yytype_int16 yycheck[] =
+ {
+-       0,    47,    48,    49,    49,    50,    50,    51,    51,    51,
+-      51,    51,    52,    53,    53,    54,    54,    54,    54,    55,
+-      55,    55,    55,    55,    55,    55,    56,    56,    56,    57,
+-      57,    57,    57,    57,    58,    58,    58,    59,    60,    60,
+-      61,    61,    62,    62,    63,    63,    64,    64,    65,    65,
+-      66,    66,    66,    67,    67,    67,    67,    67,    68,    68,
+-      68,    69,    69,    69,    70,    70,    70,    70,    71,    71,
+-      71,    71,    72,    72,    72,    73,    73,    73,    74,    74,
+-      74
++       8,    42,    43,    20,    21,    22,    13,    15,    18,    19,
++       8,     9,    95,    25,     5,    98,    22,    14,     3,    25,
++      10,    11,    26,    33,    21,    22,    23,    24,    34,    70,
++      37,    22,    30,    43,    31,     0,    46,    47,    36,    36,
++      44,    45,    18,    19,     6,     7,     4,    64,    65,    66,
++      18,    19,    25,    94,    22,    23,    20,    33,    22,    25,
++      16,    27,    30,    29,    12,    33,    22,    23,    32,    26,
++      26,    15,    16,    17,    56,    57,    58,    59,    22,    42,
++      43,    34,    35,    54,    55,    60,    61,    27,    62,    63,
++      35,    40,    39,    25,    23,    41,    23,    25,    17,    38,
++      17,    25,    28,    25,    25,    18,    36,    21,    33,    11,
++       9,    34,    49,   101,    48,    35,    51,    50,    98,    -1,
++     128,    52,    -1,    95,    -1,    -1,    -1,    -1,    -1,    -1,
++      -1,    -1,    -1,   141,    -1,    53,   144
+ };
+ 
+-  /* YYR2[YYN] -- Number of symbols on the right hand side of rule YYN.  */
+-static const yytype_uint8 yyr2[] =
++/* YYSTOS[STATE-NUM] -- The (internal number of the) accessing
++   symbol of state STATE-NUM.  */
++static const yytype_uint8 yystos[] =
+ {
+-       0,     2,     4,     0,     2,     4,     2,     2,     3,     4,
+-       3,     4,     5,     0,     2,     4,     2,     3,     2,     2,
+-       3,     4,     2,     9,     5,     2,     0,     2,     2,     3,
+-       1,     2,     2,     2,     1,     1,     3,     1,     1,     5,
+-       1,     3,     1,     3,     1,     3,     1,     3,     1,     3,
+-       1,     3,     3,     1,     3,     3,     3,     3,     3,     3,
+-       1,     3,     3,     1,     3,     3,     3,     1,     1,     2,
+-       2,     2,     0,     2,     2,     0,     2,     2,     2,     3,
+-       2
++       0,     3,    49,    25,     0,     4,    50,    25,     5,    22,
++      51,    52,    18,    19,    33,    60,    52,    26,    53,    51,
++      43,    46,    47,    60,    61,    62,    63,    64,    65,    66,
++      67,    68,    69,    70,    71,    72,    73,    60,    27,    54,
++      16,    22,    23,    26,    73,    73,    73,    35,    13,    37,
++      12,    39,    40,    41,    10,    11,     8,     9,    30,    36,
++       6,     7,    42,    43,    26,    44,    45,    25,    55,    23,
++      23,    54,    54,    64,    61,    65,    66,    67,    68,    69,
++      69,    70,    70,    70,    70,    71,    71,    72,    72,    73,
++      73,    73,    15,    16,    17,    22,    56,    75,    76,    25,
++      54,    38,    17,    17,    25,    29,    54,    56,    76,    28,
++      56,    75,    62,    25,    25,    57,    58,    25,    22,    25,
++      34,    14,    21,    22,    23,    24,    31,    36,    59,    18,
++      33,    74,    22,    23,    30,    60,    36,    21,    20,    22,
++      32,    34,    35,    60,    34,    60,    35
+ };
+ 
+-
+-#define yyerrok         (yyerrstatus = 0)
+-#define yyclearin       (yychar = YYEMPTY)
+-#define YYEMPTY         (-2)
+-#define YYEOF           0
+-
+-#define YYACCEPT        goto yyacceptlab
+-#define YYABORT         goto yyabortlab
+-#define YYERROR         goto yyerrorlab
+-
++#define yyerrok		(yyerrstatus = 0)
++#define yyclearin	(yychar = YYEMPTY)
++#define YYEMPTY		(-2)
++#define YYEOF		0
++
++#define YYACCEPT	goto yyacceptlab
++#define YYABORT		goto yyabortlab
++#define YYERROR		goto yyerrorlab
++
++
++/* Like YYERROR except do call yyerror.  This remains here temporarily
++   to ease the transition to the new meaning of YYERROR, for GCC.
++   Once GCC version 2 has supplanted version 1, this can go.  However,
++   YYFAIL appears to be in use.  Nevertheless, it is formally deprecated
++   in Bison 2.4.2's NEWS entry, where a plan to phase it out is
++   discussed.  */
++
++#define YYFAIL		goto yyerrlab
++#if defined YYFAIL
++  /* This is here to suppress warnings from the GCC cpp's
++     -Wunused-macros.  Normally we don't worry about that warning, but
++     some users do, and we want to make it easy for users to remove
++     YYFAIL uses, which will produce warnings from Bison 2.5.  */
++#endif
+ 
+ #define YYRECOVERING()  (!!yyerrstatus)
+ 
+-#define YYBACKUP(Token, Value)                                  \
+-do                                                              \
+-  if (yychar == YYEMPTY)                                        \
+-    {                                                           \
+-      yychar = (Token);                                         \
+-      yylval = (Value);                                         \
+-      YYPOPSTACK (yylen);                                       \
+-      yystate = *yyssp;                                         \
+-      goto yybackup;                                            \
+-    }                                                           \
+-  else                                                          \
+-    {                                                           \
++#define YYBACKUP(Token, Value)					\
++do								\
++  if (yychar == YYEMPTY && yylen == 1)				\
++    {								\
++      yychar = (Token);						\
++      yylval = (Value);						\
++      YYPOPSTACK (1);						\
++      goto yybackup;						\
++    }								\
++  else								\
++    {								\
+       yyerror (YY_("syntax error: cannot back up")); \
+-      YYERROR;                                                  \
+-    }                                                           \
+-while (0)
+-
+-/* Error token number */
+-#define YYTERROR        1
+-#define YYERRCODE       256
++      YYERROR;							\
++    }								\
++while (YYID (0))
++
++
++#define YYTERROR	1
++#define YYERRCODE	256
+ 
+ 
+ /* YYLLOC_DEFAULT -- Set CURRENT to span from RHS[1] to RHS[N].
+    If N is 0, then set CURRENT to the empty location which ends
+    the previous symbol: RHS[0] (always defined).  */
+ 
++#define YYRHSLOC(Rhs, K) ((Rhs)[K])
+ #ifndef YYLLOC_DEFAULT
+-# define YYLLOC_DEFAULT(Current, Rhs, N)                                \
+-    do                                                                  \
+-      if (N)                                                            \
+-        {                                                               \
+-          (Current).first_line   = YYRHSLOC (Rhs, 1).first_line;        \
+-          (Current).first_column = YYRHSLOC (Rhs, 1).first_column;      \
+-          (Current).last_line    = YYRHSLOC (Rhs, N).last_line;         \
+-          (Current).last_column  = YYRHSLOC (Rhs, N).last_column;       \
+-        }                                                               \
+-      else                                                              \
+-        {                                                               \
+-          (Current).first_line   = (Current).last_line   =              \
+-            YYRHSLOC (Rhs, 0).last_line;                                \
+-          (Current).first_column = (Current).last_column =              \
+-            YYRHSLOC (Rhs, 0).last_column;                              \
+-        }                                                               \
+-    while (0)
++# define YYLLOC_DEFAULT(Current, Rhs, N)				\
++    do									\
++      if (YYID (N))                                                    \
++	{								\
++	  (Current).first_line   = YYRHSLOC (Rhs, 1).first_line;	\
++	  (Current).first_column = YYRHSLOC (Rhs, 1).first_column;	\
++	  (Current).last_line    = YYRHSLOC (Rhs, N).last_line;		\
++	  (Current).last_column  = YYRHSLOC (Rhs, N).last_column;	\
++	}								\
++      else								\
++	{								\
++	  (Current).first_line   = (Current).last_line   =		\
++	    YYRHSLOC (Rhs, 0).last_line;				\
++	  (Current).first_column = (Current).last_column =		\
++	    YYRHSLOC (Rhs, 0).last_column;				\
++	}								\
++    while (YYID (0))
+ #endif
+ 
+-#define YYRHSLOC(Rhs, K) ((Rhs)[K])
+-
+-
+-/* Enable debugging if requested.  */
+-#if YYDEBUG
+-
+-# ifndef YYFPRINTF
+-#  include <stdio.h> /* INFRINGES ON USER NAME SPACE */
+-#  define YYFPRINTF fprintf
+-# endif
+-
+-# define YYDPRINTF(Args)                        \
+-do {                                            \
+-  if (yydebug)                                  \
+-    YYFPRINTF Args;                             \
+-} while (0)
+-
+ 
+ /* YY_LOCATION_PRINT -- Print the location on the stream.
+    This macro was not mandated originally: define only if we know
+@@ -779,73 +799,82 @@ do {
+ 
+ #ifndef YY_LOCATION_PRINT
+ # if defined YYLTYPE_IS_TRIVIAL && YYLTYPE_IS_TRIVIAL
+-
+-/* Print *YYLOCP on YYO.  Private, do not rely on its existence. */
+-
+-YY_ATTRIBUTE_UNUSED
+-static unsigned
+-yy_location_print_ (FILE *yyo, YYLTYPE const * const yylocp)
+-{
+-  unsigned res = 0;
+-  int end_col = 0 != yylocp->last_column ? yylocp->last_column - 1 : 0;
+-  if (0 <= yylocp->first_line)
+-    {
+-      res += YYFPRINTF (yyo, "%d", yylocp->first_line);
+-      if (0 <= yylocp->first_column)
+-        res += YYFPRINTF (yyo, ".%d", yylocp->first_column);
+-    }
+-  if (0 <= yylocp->last_line)
+-    {
+-      if (yylocp->first_line < yylocp->last_line)
+-        {
+-          res += YYFPRINTF (yyo, "-%d", yylocp->last_line);
+-          if (0 <= end_col)
+-            res += YYFPRINTF (yyo, ".%d", end_col);
+-        }
+-      else if (0 <= end_col && yylocp->first_column < end_col)
+-        res += YYFPRINTF (yyo, "-%d", end_col);
+-    }
+-  return res;
+- }
+-
+-#  define YY_LOCATION_PRINT(File, Loc)          \
+-  yy_location_print_ (File, &(Loc))
+-
++#  define YY_LOCATION_PRINT(File, Loc)			\
++     fprintf (File, "%d.%d-%d.%d",			\
++	      (Loc).first_line, (Loc).first_column,	\
++	      (Loc).last_line,  (Loc).last_column)
+ # else
+ #  define YY_LOCATION_PRINT(File, Loc) ((void) 0)
+ # endif
+ #endif
+ 
+ 
+-# define YY_SYMBOL_PRINT(Title, Type, Value, Location)                    \
+-do {                                                                      \
+-  if (yydebug)                                                            \
+-    {                                                                     \
+-      YYFPRINTF (stderr, "%s ", Title);                                   \
+-      yy_symbol_print (stderr,                                            \
+-                  Type, Value, Location); \
+-      YYFPRINTF (stderr, "\n");                                           \
+-    }                                                                     \
+-} while (0)
++/* YYLEX -- calling `yylex' with the right arguments.  */
+ 
++#ifdef YYLEX_PARAM
++# define YYLEX yylex (YYLEX_PARAM)
++#else
++# define YYLEX yylex ()
++#endif
+ 
+-/*----------------------------------------.
+-| Print this symbol's value on YYOUTPUT.  |
+-`----------------------------------------*/
++/* Enable debugging if requested.  */
++#if YYDEBUG
+ 
++# ifndef YYFPRINTF
++#  include <stdio.h> /* INFRINGES ON USER NAME SPACE */
++#  define YYFPRINTF fprintf
++# endif
++
++# define YYDPRINTF(Args)			\
++do {						\
++  if (yydebug)					\
++    YYFPRINTF Args;				\
++} while (YYID (0))
++
++# define YY_SYMBOL_PRINT(Title, Type, Value, Location)			  \
++do {									  \
++  if (yydebug)								  \
++    {									  \
++      YYFPRINTF (stderr, "%s ", Title);					  \
++      yy_symbol_print (stderr,						  \
++		  Type, Value, Location); \
++      YYFPRINTF (stderr, "\n");						  \
++    }									  \
++} while (YYID (0))
++
++
++/*--------------------------------.
++| Print this symbol on YYOUTPUT.  |
++`--------------------------------*/
++
++/*ARGSUSED*/
++#if (defined __STDC__ || defined __C99__FUNC__ \
++     || defined __cplusplus || defined _MSC_VER)
+ static void
+ yy_symbol_value_print (FILE *yyoutput, int yytype, YYSTYPE const * const yyvaluep, YYLTYPE const * const yylocationp)
++#else
++static void
++yy_symbol_value_print (yyoutput, yytype, yyvaluep, yylocationp)
++    FILE *yyoutput;
++    int yytype;
++    YYSTYPE const * const yyvaluep;
++    YYLTYPE const * const yylocationp;
++#endif
+ {
+-  FILE *yyo = yyoutput;
+-  YYUSE (yyo);
+-  YYUSE (yylocationp);
+   if (!yyvaluep)
+     return;
++  YYUSE (yylocationp);
+ # ifdef YYPRINT
+   if (yytype < YYNTOKENS)
+     YYPRINT (yyoutput, yytoknum[yytype], *yyvaluep);
++# else
++  YYUSE (yyoutput);
+ # endif
+-  YYUSE (yytype);
++  switch (yytype)
++    {
++      default:
++	break;
++    }
+ }
+ 
+ 
+@@ -853,11 +882,23 @@ yy_symbol_value_print (FILE *yyoutput, i
+ | Print this symbol on YYOUTPUT.  |
+ `--------------------------------*/
+ 
++#if (defined __STDC__ || defined __C99__FUNC__ \
++     || defined __cplusplus || defined _MSC_VER)
+ static void
+ yy_symbol_print (FILE *yyoutput, int yytype, YYSTYPE const * const yyvaluep, YYLTYPE const * const yylocationp)
++#else
++static void
++yy_symbol_print (yyoutput, yytype, yyvaluep, yylocationp)
++    FILE *yyoutput;
++    int yytype;
++    YYSTYPE const * const yyvaluep;
++    YYLTYPE const * const yylocationp;
++#endif
+ {
+-  YYFPRINTF (yyoutput, "%s %s (",
+-             yytype < YYNTOKENS ? "token" : "nterm", yytname[yytype]);
++  if (yytype < YYNTOKENS)
++    YYFPRINTF (yyoutput, "token %s (", yytname[yytype]);
++  else
++    YYFPRINTF (yyoutput, "nterm %s (", yytname[yytype]);
+ 
+   YY_LOCATION_PRINT (yyoutput, *yylocationp);
+   YYFPRINTF (yyoutput, ": ");
+@@ -870,8 +911,16 @@ yy_symbol_print (FILE *yyoutput, int yyt
+ | TOP (included).                                                   |
+ `------------------------------------------------------------------*/
+ 
++#if (defined __STDC__ || defined __C99__FUNC__ \
++     || defined __cplusplus || defined _MSC_VER)
+ static void
+ yy_stack_print (yytype_int16 *yybottom, yytype_int16 *yytop)
++#else
++static void
++yy_stack_print (yybottom, yytop)
++    yytype_int16 *yybottom;
++    yytype_int16 *yytop;
++#endif
+ {
+   YYFPRINTF (stderr, "Stack now");
+   for (; yybottom <= yytop; yybottom++)
+@@ -882,42 +931,50 @@ yy_stack_print (yytype_int16 *yybottom,
+   YYFPRINTF (stderr, "\n");
+ }
+ 
+-# define YY_STACK_PRINT(Bottom, Top)                            \
+-do {                                                            \
+-  if (yydebug)                                                  \
+-    yy_stack_print ((Bottom), (Top));                           \
+-} while (0)
++# define YY_STACK_PRINT(Bottom, Top)				\
++do {								\
++  if (yydebug)							\
++    yy_stack_print ((Bottom), (Top));				\
++} while (YYID (0))
+ 
+ 
+ /*------------------------------------------------.
+ | Report that the YYRULE is going to be reduced.  |
+ `------------------------------------------------*/
+ 
++#if (defined __STDC__ || defined __C99__FUNC__ \
++     || defined __cplusplus || defined _MSC_VER)
++static void
++yy_reduce_print (YYSTYPE *yyvsp, YYLTYPE *yylsp, int yyrule)
++#else
+ static void
+-yy_reduce_print (yytype_int16 *yyssp, YYSTYPE *yyvsp, YYLTYPE *yylsp, int yyrule)
++yy_reduce_print (yyvsp, yylsp, yyrule)
++    YYSTYPE *yyvsp;
++    YYLTYPE *yylsp;
++    int yyrule;
++#endif
+ {
+-  unsigned long int yylno = yyrline[yyrule];
+   int yynrhs = yyr2[yyrule];
+   int yyi;
++  unsigned long int yylno = yyrline[yyrule];
+   YYFPRINTF (stderr, "Reducing stack by rule %d (line %lu):\n",
+-             yyrule - 1, yylno);
++	     yyrule - 1, yylno);
+   /* The symbols being reduced.  */
+   for (yyi = 0; yyi < yynrhs; yyi++)
+     {
+       YYFPRINTF (stderr, "   $%d = ", yyi + 1);
+-      yy_symbol_print (stderr,
+-                       yystos[yyssp[yyi + 1 - yynrhs]],
+-                       &(yyvsp[(yyi + 1) - (yynrhs)])
+-                       , &(yylsp[(yyi + 1) - (yynrhs)])                       );
++      yy_symbol_print (stderr, yyrhs[yyprhs[yyrule] + yyi],
++		       &(yyvsp[(yyi + 1) - (yynrhs)])
++		       , &(yylsp[(yyi + 1) - (yynrhs)])		       );
+       YYFPRINTF (stderr, "\n");
+     }
+ }
+ 
+-# define YY_REDUCE_PRINT(Rule)          \
+-do {                                    \
+-  if (yydebug)                          \
+-    yy_reduce_print (yyssp, yyvsp, yylsp, Rule); \
+-} while (0)
++# define YY_REDUCE_PRINT(Rule)		\
++do {					\
++  if (yydebug)				\
++    yy_reduce_print (yyvsp, yylsp, Rule); \
++} while (YYID (0))
+ 
+ /* Nonzero means print parse trace.  It is left uninitialized so that
+    multiple parsers can coexist.  */
+@@ -931,7 +988,7 @@ int yydebug;
+ 
+ 
+ /* YYINITDEPTH -- initial size of the parser's stacks.  */
+-#ifndef YYINITDEPTH
++#ifndef	YYINITDEPTH
+ # define YYINITDEPTH 200
+ #endif
+ 
+@@ -954,8 +1011,15 @@ int yydebug;
+ #   define yystrlen strlen
+ #  else
+ /* Return the length of YYSTR.  */
++#if (defined __STDC__ || defined __C99__FUNC__ \
++     || defined __cplusplus || defined _MSC_VER)
+ static YYSIZE_T
+ yystrlen (const char *yystr)
++#else
++static YYSIZE_T
++yystrlen (yystr)
++    const char *yystr;
++#endif
+ {
+   YYSIZE_T yylen;
+   for (yylen = 0; yystr[yylen]; yylen++)
+@@ -971,8 +1035,16 @@ yystrlen (const char *yystr)
+ #  else
+ /* Copy YYSRC to YYDEST, returning the address of the terminating '\0' in
+    YYDEST.  */
++#if (defined __STDC__ || defined __C99__FUNC__ \
++     || defined __cplusplus || defined _MSC_VER)
+ static char *
+ yystpcpy (char *yydest, const char *yysrc)
++#else
++static char *
++yystpcpy (yydest, yysrc)
++    char *yydest;
++    const char *yysrc;
++#endif
+ {
+   char *yyd = yydest;
+   const char *yys = yysrc;
+@@ -1002,27 +1074,27 @@ yytnamerr (char *yyres, const char *yyst
+       char const *yyp = yystr;
+ 
+       for (;;)
+-        switch (*++yyp)
+-          {
+-          case '\'':
+-          case ',':
+-            goto do_not_strip_quotes;
+-
+-          case '\\':
+-            if (*++yyp != '\\')
+-              goto do_not_strip_quotes;
+-            /* Fall through.  */
+-          default:
+-            if (yyres)
+-              yyres[yyn] = *yyp;
+-            yyn++;
+-            break;
+-
+-          case '"':
+-            if (yyres)
+-              yyres[yyn] = '\0';
+-            return yyn;
+-          }
++	switch (*++yyp)
++	  {
++	  case '\'':
++	  case ',':
++	    goto do_not_strip_quotes;
++
++	  case '\\':
++	    if (*++yyp != '\\')
++	      goto do_not_strip_quotes;
++	    /* Fall through.  */
++	  default:
++	    if (yyres)
++	      yyres[yyn] = *yyp;
++	    yyn++;
++	    break;
++
++	  case '"':
++	    if (yyres)
++	      yyres[yyn] = '\0';
++	    return yyn;
++	  }
+     do_not_strip_quotes: ;
+     }
+ 
+@@ -1045,11 +1117,12 @@ static int
+ yysyntax_error (YYSIZE_T *yymsg_alloc, char **yymsg,
+                 yytype_int16 *yyssp, int yytoken)
+ {
+-  YYSIZE_T yysize0 = yytnamerr (YY_NULLPTR, yytname[yytoken]);
++  YYSIZE_T yysize0 = yytnamerr (0, yytname[yytoken]);
+   YYSIZE_T yysize = yysize0;
++  YYSIZE_T yysize1;
+   enum { YYERROR_VERBOSE_ARGS_MAXIMUM = 5 };
+   /* Internationalized format string. */
+-  const char *yyformat = YY_NULLPTR;
++  const char *yyformat = 0;
+   /* Arguments of yyformat. */
+   char const *yyarg[YYERROR_VERBOSE_ARGS_MAXIMUM];
+   /* Number of reported tokens (one for the "unexpected", one per
+@@ -1057,6 +1130,10 @@ yysyntax_error (YYSIZE_T *yymsg_alloc, c
+   int yycount = 0;
+ 
+   /* There are many possibilities here to consider:
++     - Assume YYFAIL is not used.  It's too flawed to consider.  See
++       <http://lists.gnu.org/archive/html/bison-patches/2009-12/msg00024.html>
++       for details.  YYERROR is fine as it does not invoke this
++       function.
+      - If this state is a consistent state with a default action, then
+        the only way this function was invoked is if the default action
+        is an error action.  In that case, don't check for expected
+@@ -1105,13 +1182,11 @@ yysyntax_error (YYSIZE_T *yymsg_alloc, c
+                     break;
+                   }
+                 yyarg[yycount++] = yytname[yyx];
+-                {
+-                  YYSIZE_T yysize1 = yysize + yytnamerr (YY_NULLPTR, yytname[yyx]);
+-                  if (! (yysize <= yysize1
+-                         && yysize1 <= YYSTACK_ALLOC_MAXIMUM))
+-                    return 2;
+-                  yysize = yysize1;
+-                }
++                yysize1 = yysize + yytnamerr (0, yytname[yyx]);
++                if (! (yysize <= yysize1
++                       && yysize1 <= YYSTACK_ALLOC_MAXIMUM))
++                  return 2;
++                yysize = yysize1;
+               }
+         }
+     }
+@@ -1131,12 +1206,10 @@ yysyntax_error (YYSIZE_T *yymsg_alloc, c
+ # undef YYCASE_
+     }
+ 
+-  {
+-    YYSIZE_T yysize1 = yysize + yystrlen (yyformat);
+-    if (! (yysize <= yysize1 && yysize1 <= YYSTACK_ALLOC_MAXIMUM))
+-      return 2;
+-    yysize = yysize1;
+-  }
++  yysize1 = yysize + yystrlen (yyformat);
++  if (! (yysize <= yysize1 && yysize1 <= YYSTACK_ALLOC_MAXIMUM))
++    return 2;
++  yysize = yysize1;
+ 
+   if (*yymsg_alloc < yysize)
+     {
+@@ -1173,21 +1246,50 @@ yysyntax_error (YYSIZE_T *yymsg_alloc, c
+ | Release the memory associated to this symbol.  |
+ `-----------------------------------------------*/
+ 
++/*ARGSUSED*/
++#if (defined __STDC__ || defined __C99__FUNC__ \
++     || defined __cplusplus || defined _MSC_VER)
+ static void
+ yydestruct (const char *yymsg, int yytype, YYSTYPE *yyvaluep, YYLTYPE *yylocationp)
++#else
++static void
++yydestruct (yymsg, yytype, yyvaluep, yylocationp)
++    const char *yymsg;
++    int yytype;
++    YYSTYPE *yyvaluep;
++    YYLTYPE *yylocationp;
++#endif
+ {
+   YYUSE (yyvaluep);
+   YYUSE (yylocationp);
++
+   if (!yymsg)
+     yymsg = "Deleting";
+   YY_SYMBOL_PRINT (yymsg, yytype, yyvaluep, yylocationp);
+ 
+-  YY_IGNORE_MAYBE_UNINITIALIZED_BEGIN
+-  YYUSE (yytype);
+-  YY_IGNORE_MAYBE_UNINITIALIZED_END
++  switch (yytype)
++    {
++
++      default:
++	break;
++    }
+ }
+ 
+ 
++/* Prevent warnings from -Wmissing-prototypes.  */
++#ifdef YYPARSE_PARAM
++#if defined __STDC__ || defined __cplusplus
++int yyparse (void *YYPARSE_PARAM);
++#else
++int yyparse ();
++#endif
++#else /* ! YYPARSE_PARAM */
++#if defined __STDC__ || defined __cplusplus
++int yyparse (void);
++#else
++int yyparse ();
++#endif
++#endif /* ! YYPARSE_PARAM */
+ 
+ 
+ /* The lookahead symbol.  */
+@@ -1195,12 +1297,10 @@ int yychar;
+ 
+ /* The semantic value of the lookahead symbol.  */
+ YYSTYPE yylval;
++
+ /* Location data for the lookahead symbol.  */
+-YYLTYPE yylloc
+-# if defined YYLTYPE_IS_TRIVIAL && YYLTYPE_IS_TRIVIAL
+-  = { 1, 1, 1, 1 }
+-# endif
+-;
++YYLTYPE yylloc;
++
+ /* Number of syntax errors so far.  */
+ int yynerrs;
+ 
+@@ -1209,19 +1309,38 @@ int yynerrs;
+ | yyparse.  |
+ `----------*/
+ 
++#ifdef YYPARSE_PARAM
++#if (defined __STDC__ || defined __C99__FUNC__ \
++     || defined __cplusplus || defined _MSC_VER)
++int
++yyparse (void *YYPARSE_PARAM)
++#else
++int
++yyparse (YYPARSE_PARAM)
++    void *YYPARSE_PARAM;
++#endif
++#else /* ! YYPARSE_PARAM */
++#if (defined __STDC__ || defined __C99__FUNC__ \
++     || defined __cplusplus || defined _MSC_VER)
+ int
+ yyparse (void)
++#else
++int
++yyparse ()
++
++#endif
++#endif
+ {
+     int yystate;
+     /* Number of tokens to shift before error messages enabled.  */
+     int yyerrstatus;
+ 
+     /* The stacks and their tools:
+-       'yyss': related to states.
+-       'yyvs': related to semantic values.
+-       'yyls': related to locations.
++       `yyss': related to states.
++       `yyvs': related to semantic values.
++       `yyls': related to locations.
+ 
+-       Refer to the stacks through separate pointers, to allow yyoverflow
++       Refer to the stacks thru separate pointers, to allow yyoverflow
+        to reallocate them elsewhere.  */
+ 
+     /* The state stack.  */
+@@ -1247,7 +1366,7 @@ yyparse (void)
+   int yyn;
+   int yyresult;
+   /* Lookahead token as an internal (translated) token number.  */
+-  int yytoken = 0;
++  int yytoken;
+   /* The variables used to return semantic value and location from the
+      action routines.  */
+   YYSTYPE yyval;
+@@ -1266,9 +1385,10 @@ yyparse (void)
+      Keep to zero when no symbol should be popped.  */
+   int yylen = 0;
+ 
+-  yyssp = yyss = yyssa;
+-  yyvsp = yyvs = yyvsa;
+-  yylsp = yyls = yylsa;
++  yytoken = 0;
++  yyss = yyssa;
++  yyvs = yyvsa;
++  yyls = yylsa;
+   yystacksize = YYINITDEPTH;
+ 
+   YYDPRINTF ((stderr, "Starting parse\n"));
+@@ -1277,7 +1397,21 @@ yyparse (void)
+   yyerrstatus = 0;
+   yynerrs = 0;
+   yychar = YYEMPTY; /* Cause a token to be read.  */
+-  yylsp[0] = yylloc;
++
++  /* Initialize stack pointers.
++     Waste one element of value and location stack
++     so that they stay on the same level as the state stack.
++     The wasted elements are never initialized.  */
++  yyssp = yyss;
++  yyvsp = yyvs;
++  yylsp = yyls;
++
++#if defined YYLTYPE_IS_TRIVIAL && YYLTYPE_IS_TRIVIAL
++  /* Initialize the default location before parsing starts.  */
++  yylloc.first_line   = yylloc.last_line   = 1;
++  yylloc.first_column = yylloc.last_column = 1;
++#endif
++
+   goto yysetstate;
+ 
+ /*------------------------------------------------------------.
+@@ -1298,26 +1432,26 @@ yyparse (void)
+ 
+ #ifdef yyoverflow
+       {
+-        /* Give user a chance to reallocate the stack.  Use copies of
+-           these so that the &'s don't force the real ones into
+-           memory.  */
+-        YYSTYPE *yyvs1 = yyvs;
+-        yytype_int16 *yyss1 = yyss;
+-        YYLTYPE *yyls1 = yyls;
+-
+-        /* Each stack pointer address is followed by the size of the
+-           data in use in that stack, in bytes.  This used to be a
+-           conditional around just the two extra args, but that might
+-           be undefined if yyoverflow is a macro.  */
+-        yyoverflow (YY_("memory exhausted"),
+-                    &yyss1, yysize * sizeof (*yyssp),
+-                    &yyvs1, yysize * sizeof (*yyvsp),
+-                    &yyls1, yysize * sizeof (*yylsp),
+-                    &yystacksize);
+-
+-        yyls = yyls1;
+-        yyss = yyss1;
+-        yyvs = yyvs1;
++	/* Give user a chance to reallocate the stack.  Use copies of
++	   these so that the &'s don't force the real ones into
++	   memory.  */
++	YYSTYPE *yyvs1 = yyvs;
++	yytype_int16 *yyss1 = yyss;
++	YYLTYPE *yyls1 = yyls;
++
++	/* Each stack pointer address is followed by the size of the
++	   data in use in that stack, in bytes.  This used to be a
++	   conditional around just the two extra args, but that might
++	   be undefined if yyoverflow is a macro.  */
++	yyoverflow (YY_("memory exhausted"),
++		    &yyss1, yysize * sizeof (*yyssp),
++		    &yyvs1, yysize * sizeof (*yyvsp),
++		    &yyls1, yysize * sizeof (*yylsp),
++		    &yystacksize);
++
++	yyls = yyls1;
++	yyss = yyss1;
++	yyvs = yyvs1;
+       }
+ #else /* no yyoverflow */
+ # ifndef YYSTACK_RELOCATE
+@@ -1325,23 +1459,23 @@ yyparse (void)
+ # else
+       /* Extend the stack our own way.  */
+       if (YYMAXDEPTH <= yystacksize)
+-        goto yyexhaustedlab;
++	goto yyexhaustedlab;
+       yystacksize *= 2;
+       if (YYMAXDEPTH < yystacksize)
+-        yystacksize = YYMAXDEPTH;
++	yystacksize = YYMAXDEPTH;
+ 
+       {
+-        yytype_int16 *yyss1 = yyss;
+-        union yyalloc *yyptr =
+-          (union yyalloc *) YYSTACK_ALLOC (YYSTACK_BYTES (yystacksize));
+-        if (! yyptr)
+-          goto yyexhaustedlab;
+-        YYSTACK_RELOCATE (yyss_alloc, yyss);
+-        YYSTACK_RELOCATE (yyvs_alloc, yyvs);
+-        YYSTACK_RELOCATE (yyls_alloc, yyls);
++	yytype_int16 *yyss1 = yyss;
++	union yyalloc *yyptr =
++	  (union yyalloc *) YYSTACK_ALLOC (YYSTACK_BYTES (yystacksize));
++	if (! yyptr)
++	  goto yyexhaustedlab;
++	YYSTACK_RELOCATE (yyss_alloc, yyss);
++	YYSTACK_RELOCATE (yyvs_alloc, yyvs);
++	YYSTACK_RELOCATE (yyls_alloc, yyls);
+ #  undef YYSTACK_RELOCATE
+-        if (yyss1 != yyssa)
+-          YYSTACK_FREE (yyss1);
++	if (yyss1 != yyssa)
++	  YYSTACK_FREE (yyss1);
+       }
+ # endif
+ #endif /* no yyoverflow */
+@@ -1351,10 +1485,10 @@ yyparse (void)
+       yylsp = yyls + yysize - 1;
+ 
+       YYDPRINTF ((stderr, "Stack size increased to %lu\n",
+-                  (unsigned long int) yystacksize));
++		  (unsigned long int) yystacksize));
+ 
+       if (yyss + yystacksize - 1 <= yyssp)
+-        YYABORT;
++	YYABORT;
+     }
+ 
+   YYDPRINTF ((stderr, "Entering state %d\n", yystate));
+@@ -1383,7 +1517,7 @@ yybackup:
+   if (yychar == YYEMPTY)
+     {
+       YYDPRINTF ((stderr, "Reading a token: "));
+-      yychar = yylex ();
++      yychar = YYLEX;
+     }
+ 
+   if (yychar <= YYEOF)
+@@ -1423,9 +1557,7 @@ yybackup:
+   yychar = YYEMPTY;
+ 
+   yystate = yyn;
+-  YY_IGNORE_MAYBE_UNINITIALIZED_BEGIN
+   *++yyvsp = yylval;
+-  YY_IGNORE_MAYBE_UNINITIALIZED_END
+   *++yylsp = yylloc;
+   goto yynewstate;
+ 
+@@ -1448,7 +1580,7 @@ yyreduce:
+   yylen = yyr2[yyn];
+ 
+   /* If YYLEN is nonzero, implement the default value of the action:
+-     '$$ = $1'.
++     `$$ = $1'.
+ 
+      Otherwise, the following line sets YYVAL to garbage.
+      This behavior is undocumented and Bison
+@@ -1463,273 +1595,322 @@ yyreduce:
+   switch (yyn)
+     {
+         case 2:
+-#line 105 "dtc-parser.y" /* yacc.c:1646  */
++
++/* Line 1806 of yacc.c  */
++#line 109 "dtc-parser.y"
+     {
+-			the_boot_info = build_boot_info((yyvsp[-1].re), (yyvsp[0].node),
+-							guess_boot_cpuid((yyvsp[0].node)));
++			(yyvsp[(5) - (5)].node)->is_plugin = (yyvsp[(3) - (5)].is_plugin);
++			(yyvsp[(5) - (5)].node)->is_root = 1;
++			the_boot_info = build_boot_info((yyvsp[(4) - (5)].re), (yyvsp[(5) - (5)].node),
++							guess_boot_cpuid((yyvsp[(5) - (5)].node)));
+ 		}
+-#line 1472 "dtc-parser.tab.c" /* yacc.c:1646  */
+     break;
+ 
+   case 3:
+-#line 113 "dtc-parser.y" /* yacc.c:1646  */
++
++/* Line 1806 of yacc.c  */
++#line 119 "dtc-parser.y"
+     {
+-			(yyval.re) = NULL;
++			(yyval.is_plugin) = 0;
+ 		}
+-#line 1480 "dtc-parser.tab.c" /* yacc.c:1646  */
+     break;
+ 
+   case 4:
+-#line 117 "dtc-parser.y" /* yacc.c:1646  */
++
++/* Line 1806 of yacc.c  */
++#line 123 "dtc-parser.y"
+     {
+-			(yyval.re) = chain_reserve_entry((yyvsp[-1].re), (yyvsp[0].re));
++			(yyval.is_plugin) = 1;
+ 		}
+-#line 1488 "dtc-parser.tab.c" /* yacc.c:1646  */
+     break;
+ 
+   case 5:
+-#line 124 "dtc-parser.y" /* yacc.c:1646  */
++
++/* Line 1806 of yacc.c  */
++#line 130 "dtc-parser.y"
+     {
+-			(yyval.re) = build_reserve_entry((yyvsp[-2].integer), (yyvsp[-1].integer));
++			(yyval.re) = NULL;
+ 		}
+-#line 1496 "dtc-parser.tab.c" /* yacc.c:1646  */
+     break;
+ 
+   case 6:
+-#line 128 "dtc-parser.y" /* yacc.c:1646  */
++
++/* Line 1806 of yacc.c  */
++#line 134 "dtc-parser.y"
+     {
+-			add_label(&(yyvsp[0].re)->labels, (yyvsp[-1].labelref));
+-			(yyval.re) = (yyvsp[0].re);
++			(yyval.re) = chain_reserve_entry((yyvsp[(1) - (2)].re), (yyvsp[(2) - (2)].re));
+ 		}
+-#line 1505 "dtc-parser.tab.c" /* yacc.c:1646  */
+     break;
+ 
+   case 7:
+-#line 136 "dtc-parser.y" /* yacc.c:1646  */
++
++/* Line 1806 of yacc.c  */
++#line 141 "dtc-parser.y"
+     {
+-			(yyval.node) = name_node((yyvsp[0].node), "");
++			(yyval.re) = build_reserve_entry((yyvsp[(2) - (4)].integer), (yyvsp[(3) - (4)].integer));
+ 		}
+-#line 1513 "dtc-parser.tab.c" /* yacc.c:1646  */
+     break;
+ 
+   case 8:
+-#line 140 "dtc-parser.y" /* yacc.c:1646  */
++
++/* Line 1806 of yacc.c  */
++#line 145 "dtc-parser.y"
+     {
+-			(yyval.node) = merge_nodes((yyvsp[-2].node), (yyvsp[0].node));
++			add_label(&(yyvsp[(2) - (2)].re)->labels, (yyvsp[(1) - (2)].labelref));
++			(yyval.re) = (yyvsp[(2) - (2)].re);
+ 		}
+-#line 1521 "dtc-parser.tab.c" /* yacc.c:1646  */
+     break;
+ 
+   case 9:
+-#line 145 "dtc-parser.y" /* yacc.c:1646  */
+-    {
+-			struct node *target = get_node_by_ref((yyvsp[-3].node), (yyvsp[-1].labelref));
+ 
+-			add_label(&target->labels, (yyvsp[-2].labelref));
+-			if (target)
+-				merge_nodes(target, (yyvsp[0].node));
+-			else
+-				ERROR(&(yylsp[-1]), "Label or path %s not found", (yyvsp[-1].labelref));
+-			(yyval.node) = (yyvsp[-3].node);
++/* Line 1806 of yacc.c  */
++#line 153 "dtc-parser.y"
++    {
++			(yyval.node) = name_node((yyvsp[(2) - (2)].node), "");
+ 		}
+-#line 1536 "dtc-parser.tab.c" /* yacc.c:1646  */
+     break;
+ 
+   case 10:
+-#line 156 "dtc-parser.y" /* yacc.c:1646  */
+-    {
+-			struct node *target = get_node_by_ref((yyvsp[-2].node), (yyvsp[-1].labelref));
+ 
+-			if (target)
+-				merge_nodes(target, (yyvsp[0].node));
+-			else
+-				ERROR(&(yylsp[-1]), "Label or path %s not found", (yyvsp[-1].labelref));
+-			(yyval.node) = (yyvsp[-2].node);
++/* Line 1806 of yacc.c  */
++#line 157 "dtc-parser.y"
++    {
++			(yyval.node) = merge_nodes((yyvsp[(1) - (3)].node), (yyvsp[(3) - (3)].node));
+ 		}
+-#line 1550 "dtc-parser.tab.c" /* yacc.c:1646  */
+     break;
+ 
+   case 11:
+-#line 166 "dtc-parser.y" /* yacc.c:1646  */
++
++/* Line 1806 of yacc.c  */
++#line 162 "dtc-parser.y"
+     {
+-			struct node *target = get_node_by_ref((yyvsp[-3].node), (yyvsp[-1].labelref));
++			struct node *target = get_node_by_ref((yyvsp[(1) - (4)].node), (yyvsp[(3) - (4)].labelref));
+ 
++			add_label(&target->labels, (yyvsp[(2) - (4)].labelref));
+ 			if (target)
+-				delete_node(target);
++				merge_nodes(target, (yyvsp[(4) - (4)].node));
+ 			else
+-				ERROR(&(yylsp[-1]), "Label or path %s not found", (yyvsp[-1].labelref));
+-
+-
+-			(yyval.node) = (yyvsp[-3].node);
++				ERROR(&(yylsp[(3) - (4)]), "Label or path %s not found", (yyvsp[(3) - (4)].labelref));
++			(yyval.node) = (yyvsp[(1) - (4)].node);
+ 		}
+-#line 1566 "dtc-parser.tab.c" /* yacc.c:1646  */
+     break;
+ 
+   case 12:
+-#line 181 "dtc-parser.y" /* yacc.c:1646  */
++
++/* Line 1806 of yacc.c  */
++#line 173 "dtc-parser.y"
+     {
+-			(yyval.node) = build_node((yyvsp[-3].proplist), (yyvsp[-2].nodelist));
++			struct node *target = get_node_by_ref((yyvsp[(1) - (3)].node), (yyvsp[(2) - (3)].labelref));
++
++			if (target)
++				merge_nodes(target, (yyvsp[(3) - (3)].node));
++			else
++				ERROR(&(yylsp[(2) - (3)]), "Label or path %s not found", (yyvsp[(2) - (3)].labelref));
++			(yyval.node) = (yyvsp[(1) - (3)].node);
+ 		}
+-#line 1574 "dtc-parser.tab.c" /* yacc.c:1646  */
+     break;
+ 
+   case 13:
+-#line 188 "dtc-parser.y" /* yacc.c:1646  */
++
++/* Line 1806 of yacc.c  */
++#line 183 "dtc-parser.y"
+     {
+-			(yyval.proplist) = NULL;
++			struct node *target = get_node_by_ref((yyvsp[(1) - (4)].node), (yyvsp[(3) - (4)].labelref));
++
++			if (target)
++				delete_node(target);
++			else
++				ERROR(&(yylsp[(3) - (4)]), "Label or path %s not found", (yyvsp[(3) - (4)].labelref));
++
++
++			(yyval.node) = (yyvsp[(1) - (4)].node);
+ 		}
+-#line 1582 "dtc-parser.tab.c" /* yacc.c:1646  */
+     break;
+ 
+   case 14:
+-#line 192 "dtc-parser.y" /* yacc.c:1646  */
++
++/* Line 1806 of yacc.c  */
++#line 198 "dtc-parser.y"
+     {
+-			(yyval.proplist) = chain_property((yyvsp[0].prop), (yyvsp[-1].proplist));
++			(yyval.node) = build_node((yyvsp[(2) - (5)].proplist), (yyvsp[(3) - (5)].nodelist));
+ 		}
+-#line 1590 "dtc-parser.tab.c" /* yacc.c:1646  */
+     break;
+ 
+   case 15:
+-#line 199 "dtc-parser.y" /* yacc.c:1646  */
++
++/* Line 1806 of yacc.c  */
++#line 205 "dtc-parser.y"
+     {
+-			(yyval.prop) = build_property((yyvsp[-3].propnodename), (yyvsp[-1].data));
++			(yyval.proplist) = NULL;
+ 		}
+-#line 1598 "dtc-parser.tab.c" /* yacc.c:1646  */
+     break;
+ 
+   case 16:
+-#line 203 "dtc-parser.y" /* yacc.c:1646  */
++
++/* Line 1806 of yacc.c  */
++#line 209 "dtc-parser.y"
+     {
+-			(yyval.prop) = build_property((yyvsp[-1].propnodename), empty_data);
++			(yyval.proplist) = chain_property((yyvsp[(2) - (2)].prop), (yyvsp[(1) - (2)].proplist));
+ 		}
+-#line 1606 "dtc-parser.tab.c" /* yacc.c:1646  */
+     break;
+ 
+   case 17:
+-#line 207 "dtc-parser.y" /* yacc.c:1646  */
++
++/* Line 1806 of yacc.c  */
++#line 216 "dtc-parser.y"
+     {
+-			(yyval.prop) = build_property_delete((yyvsp[-1].propnodename));
++			(yyval.prop) = build_property((yyvsp[(1) - (4)].propnodename), (yyvsp[(3) - (4)].data));
+ 		}
+-#line 1614 "dtc-parser.tab.c" /* yacc.c:1646  */
+     break;
+ 
+   case 18:
+-#line 211 "dtc-parser.y" /* yacc.c:1646  */
++
++/* Line 1806 of yacc.c  */
++#line 220 "dtc-parser.y"
+     {
+-			add_label(&(yyvsp[0].prop)->labels, (yyvsp[-1].labelref));
+-			(yyval.prop) = (yyvsp[0].prop);
++			(yyval.prop) = build_property((yyvsp[(1) - (2)].propnodename), empty_data);
+ 		}
+-#line 1623 "dtc-parser.tab.c" /* yacc.c:1646  */
+     break;
+ 
+   case 19:
+-#line 219 "dtc-parser.y" /* yacc.c:1646  */
++
++/* Line 1806 of yacc.c  */
++#line 224 "dtc-parser.y"
+     {
+-			(yyval.data) = data_merge((yyvsp[-1].data), (yyvsp[0].data));
++			(yyval.prop) = build_property_delete((yyvsp[(2) - (3)].propnodename));
+ 		}
+-#line 1631 "dtc-parser.tab.c" /* yacc.c:1646  */
+     break;
+ 
+   case 20:
+-#line 223 "dtc-parser.y" /* yacc.c:1646  */
++
++/* Line 1806 of yacc.c  */
++#line 228 "dtc-parser.y"
+     {
+-			(yyval.data) = data_merge((yyvsp[-2].data), (yyvsp[-1].array).data);
++			add_label(&(yyvsp[(2) - (2)].prop)->labels, (yyvsp[(1) - (2)].labelref));
++			(yyval.prop) = (yyvsp[(2) - (2)].prop);
+ 		}
+-#line 1639 "dtc-parser.tab.c" /* yacc.c:1646  */
+     break;
+ 
+   case 21:
+-#line 227 "dtc-parser.y" /* yacc.c:1646  */
++
++/* Line 1806 of yacc.c  */
++#line 236 "dtc-parser.y"
+     {
+-			(yyval.data) = data_merge((yyvsp[-3].data), (yyvsp[-1].data));
++			(yyval.data) = data_merge((yyvsp[(1) - (2)].data), (yyvsp[(2) - (2)].data));
+ 		}
+-#line 1647 "dtc-parser.tab.c" /* yacc.c:1646  */
+     break;
+ 
+   case 22:
+-#line 231 "dtc-parser.y" /* yacc.c:1646  */
++
++/* Line 1806 of yacc.c  */
++#line 240 "dtc-parser.y"
+     {
+-			(yyval.data) = data_add_marker((yyvsp[-1].data), REF_PATH, (yyvsp[0].labelref));
++			(yyval.data) = data_merge((yyvsp[(1) - (3)].data), (yyvsp[(2) - (3)].array).data);
+ 		}
+-#line 1655 "dtc-parser.tab.c" /* yacc.c:1646  */
+     break;
+ 
+   case 23:
+-#line 235 "dtc-parser.y" /* yacc.c:1646  */
++
++/* Line 1806 of yacc.c  */
++#line 244 "dtc-parser.y"
++    {
++			(yyval.data) = data_merge((yyvsp[(1) - (4)].data), (yyvsp[(3) - (4)].data));
++		}
++    break;
++
++  case 24:
++
++/* Line 1806 of yacc.c  */
++#line 248 "dtc-parser.y"
++    {
++			(yyval.data) = data_add_marker((yyvsp[(1) - (2)].data), REF_PATH, (yyvsp[(2) - (2)].labelref));
++		}
++    break;
++
++  case 25:
++
++/* Line 1806 of yacc.c  */
++#line 252 "dtc-parser.y"
+     {
+-			FILE *f = srcfile_relative_open((yyvsp[-5].data).val, NULL);
++			FILE *f = srcfile_relative_open((yyvsp[(4) - (9)].data).val, NULL);
+ 			struct data d;
+ 
+-			if ((yyvsp[-3].integer) != 0)
+-				if (fseek(f, (yyvsp[-3].integer), SEEK_SET) != 0)
++			if ((yyvsp[(6) - (9)].integer) != 0)
++				if (fseek(f, (yyvsp[(6) - (9)].integer), SEEK_SET) != 0)
+ 					die("Couldn't seek to offset %llu in \"%s\": %s",
+-					    (unsigned long long)(yyvsp[-3].integer), (yyvsp[-5].data).val,
++					    (unsigned long long)(yyvsp[(6) - (9)].integer), (yyvsp[(4) - (9)].data).val,
+ 					    strerror(errno));
+ 
+-			d = data_copy_file(f, (yyvsp[-1].integer));
++			d = data_copy_file(f, (yyvsp[(8) - (9)].integer));
+ 
+-			(yyval.data) = data_merge((yyvsp[-8].data), d);
++			(yyval.data) = data_merge((yyvsp[(1) - (9)].data), d);
+ 			fclose(f);
+ 		}
+-#line 1675 "dtc-parser.tab.c" /* yacc.c:1646  */
+     break;
+ 
+-  case 24:
+-#line 251 "dtc-parser.y" /* yacc.c:1646  */
++  case 26:
++
++/* Line 1806 of yacc.c  */
++#line 268 "dtc-parser.y"
+     {
+-			FILE *f = srcfile_relative_open((yyvsp[-1].data).val, NULL);
++			FILE *f = srcfile_relative_open((yyvsp[(4) - (5)].data).val, NULL);
+ 			struct data d = empty_data;
+ 
+ 			d = data_copy_file(f, -1);
+ 
+-			(yyval.data) = data_merge((yyvsp[-4].data), d);
++			(yyval.data) = data_merge((yyvsp[(1) - (5)].data), d);
+ 			fclose(f);
+ 		}
+-#line 1689 "dtc-parser.tab.c" /* yacc.c:1646  */
+     break;
+ 
+-  case 25:
+-#line 261 "dtc-parser.y" /* yacc.c:1646  */
++  case 27:
++
++/* Line 1806 of yacc.c  */
++#line 278 "dtc-parser.y"
+     {
+-			(yyval.data) = data_add_marker((yyvsp[-1].data), LABEL, (yyvsp[0].labelref));
++			(yyval.data) = data_add_marker((yyvsp[(1) - (2)].data), LABEL, (yyvsp[(2) - (2)].labelref));
+ 		}
+-#line 1697 "dtc-parser.tab.c" /* yacc.c:1646  */
+     break;
+ 
+-  case 26:
+-#line 268 "dtc-parser.y" /* yacc.c:1646  */
++  case 28:
++
++/* Line 1806 of yacc.c  */
++#line 285 "dtc-parser.y"
+     {
+ 			(yyval.data) = empty_data;
+ 		}
+-#line 1705 "dtc-parser.tab.c" /* yacc.c:1646  */
+     break;
+ 
+-  case 27:
+-#line 272 "dtc-parser.y" /* yacc.c:1646  */
++  case 29:
++
++/* Line 1806 of yacc.c  */
++#line 289 "dtc-parser.y"
+     {
+-			(yyval.data) = (yyvsp[-1].data);
++			(yyval.data) = (yyvsp[(1) - (2)].data);
+ 		}
+-#line 1713 "dtc-parser.tab.c" /* yacc.c:1646  */
+     break;
+ 
+-  case 28:
+-#line 276 "dtc-parser.y" /* yacc.c:1646  */
++  case 30:
++
++/* Line 1806 of yacc.c  */
++#line 293 "dtc-parser.y"
+     {
+-			(yyval.data) = data_add_marker((yyvsp[-1].data), LABEL, (yyvsp[0].labelref));
++			(yyval.data) = data_add_marker((yyvsp[(1) - (2)].data), LABEL, (yyvsp[(2) - (2)].labelref));
+ 		}
+-#line 1721 "dtc-parser.tab.c" /* yacc.c:1646  */
+     break;
+ 
+-  case 29:
+-#line 283 "dtc-parser.y" /* yacc.c:1646  */
++  case 31:
++
++/* Line 1806 of yacc.c  */
++#line 300 "dtc-parser.y"
+     {
+ 			unsigned long long bits;
+ 
+-			bits = (yyvsp[-1].integer);
++			bits = (yyvsp[(2) - (3)].integer);
+ 
+ 			if ((bits !=  8) && (bits != 16) &&
+ 			    (bits != 32) && (bits != 64)) {
+-				ERROR(&(yylsp[-1]), "Array elements must be"
++				ERROR(&(yylsp[(2) - (3)]), "Array elements must be"
+ 				      " 8, 16, 32 or 64-bits");
+ 				bits = 32;
+ 			}
+@@ -1737,23 +1918,25 @@ yyreduce:
+ 			(yyval.array).data = empty_data;
+ 			(yyval.array).bits = bits;
+ 		}
+-#line 1741 "dtc-parser.tab.c" /* yacc.c:1646  */
+     break;
+ 
+-  case 30:
+-#line 299 "dtc-parser.y" /* yacc.c:1646  */
++  case 32:
++
++/* Line 1806 of yacc.c  */
++#line 316 "dtc-parser.y"
+     {
+ 			(yyval.array).data = empty_data;
+ 			(yyval.array).bits = 32;
+ 		}
+-#line 1750 "dtc-parser.tab.c" /* yacc.c:1646  */
+     break;
+ 
+-  case 31:
+-#line 304 "dtc-parser.y" /* yacc.c:1646  */
++  case 33:
++
++/* Line 1806 of yacc.c  */
++#line 321 "dtc-parser.y"
+     {
+-			if ((yyvsp[-1].array).bits < 64) {
+-				uint64_t mask = (1ULL << (yyvsp[-1].array).bits) - 1;
++			if ((yyvsp[(1) - (2)].array).bits < 64) {
++				uint64_t mask = (1ULL << (yyvsp[(1) - (2)].array).bits) - 1;
+ 				/*
+ 				 * Bits above mask must either be all zero
+ 				 * (positive within range of mask) or all one
+@@ -1762,258 +1945,293 @@ yyreduce:
+ 				 * within the mask to one (i.e. | in the
+ 				 * mask), all bits are one.
+ 				 */
+-				if (((yyvsp[0].integer) > mask) && (((yyvsp[0].integer) | mask) != -1ULL))
+-					ERROR(&(yylsp[0]), "Value out of range for"
+-					      " %d-bit array element", (yyvsp[-1].array).bits);
++				if (((yyvsp[(2) - (2)].integer) > mask) && (((yyvsp[(2) - (2)].integer) | mask) != -1ULL))
++					ERROR(&(yylsp[(2) - (2)]), "Value out of range for"
++					      " %d-bit array element", (yyvsp[(1) - (2)].array).bits);
+ 			}
+ 
+-			(yyval.array).data = data_append_integer((yyvsp[-1].array).data, (yyvsp[0].integer), (yyvsp[-1].array).bits);
++			(yyval.array).data = data_append_integer((yyvsp[(1) - (2)].array).data, (yyvsp[(2) - (2)].integer), (yyvsp[(1) - (2)].array).bits);
+ 		}
+-#line 1773 "dtc-parser.tab.c" /* yacc.c:1646  */
+     break;
+ 
+-  case 32:
+-#line 323 "dtc-parser.y" /* yacc.c:1646  */
++  case 34:
++
++/* Line 1806 of yacc.c  */
++#line 340 "dtc-parser.y"
+     {
+-			uint64_t val = ~0ULL >> (64 - (yyvsp[-1].array).bits);
++			uint64_t val = ~0ULL >> (64 - (yyvsp[(1) - (2)].array).bits);
+ 
+-			if ((yyvsp[-1].array).bits == 32)
+-				(yyvsp[-1].array).data = data_add_marker((yyvsp[-1].array).data,
++			if ((yyvsp[(1) - (2)].array).bits == 32)
++				(yyvsp[(1) - (2)].array).data = data_add_marker((yyvsp[(1) - (2)].array).data,
+ 							  REF_PHANDLE,
+-							  (yyvsp[0].labelref));
++							  (yyvsp[(2) - (2)].labelref));
+ 			else
+-				ERROR(&(yylsp[0]), "References are only allowed in "
++				ERROR(&(yylsp[(2) - (2)]), "References are only allowed in "
+ 					    "arrays with 32-bit elements.");
+ 
+-			(yyval.array).data = data_append_integer((yyvsp[-1].array).data, val, (yyvsp[-1].array).bits);
++			(yyval.array).data = data_append_integer((yyvsp[(1) - (2)].array).data, val, (yyvsp[(1) - (2)].array).bits);
+ 		}
+-#line 1791 "dtc-parser.tab.c" /* yacc.c:1646  */
+     break;
+ 
+-  case 33:
+-#line 337 "dtc-parser.y" /* yacc.c:1646  */
++  case 35:
++
++/* Line 1806 of yacc.c  */
++#line 354 "dtc-parser.y"
+     {
+-			(yyval.array).data = data_add_marker((yyvsp[-1].array).data, LABEL, (yyvsp[0].labelref));
++			(yyval.array).data = data_add_marker((yyvsp[(1) - (2)].array).data, LABEL, (yyvsp[(2) - (2)].labelref));
+ 		}
+-#line 1799 "dtc-parser.tab.c" /* yacc.c:1646  */
+     break;
+ 
+-  case 36:
+-#line 346 "dtc-parser.y" /* yacc.c:1646  */
++  case 38:
++
++/* Line 1806 of yacc.c  */
++#line 363 "dtc-parser.y"
+     {
+-			(yyval.integer) = (yyvsp[-1].integer);
++			(yyval.integer) = (yyvsp[(2) - (3)].integer);
+ 		}
+-#line 1807 "dtc-parser.tab.c" /* yacc.c:1646  */
+-    break;
+-
+-  case 39:
+-#line 357 "dtc-parser.y" /* yacc.c:1646  */
+-    { (yyval.integer) = (yyvsp[-4].integer) ? (yyvsp[-2].integer) : (yyvsp[0].integer); }
+-#line 1813 "dtc-parser.tab.c" /* yacc.c:1646  */
+     break;
+ 
+   case 41:
+-#line 362 "dtc-parser.y" /* yacc.c:1646  */
+-    { (yyval.integer) = (yyvsp[-2].integer) || (yyvsp[0].integer); }
+-#line 1819 "dtc-parser.tab.c" /* yacc.c:1646  */
++
++/* Line 1806 of yacc.c  */
++#line 374 "dtc-parser.y"
++    { (yyval.integer) = (yyvsp[(1) - (5)].integer) ? (yyvsp[(3) - (5)].integer) : (yyvsp[(5) - (5)].integer); }
+     break;
+ 
+   case 43:
+-#line 367 "dtc-parser.y" /* yacc.c:1646  */
+-    { (yyval.integer) = (yyvsp[-2].integer) && (yyvsp[0].integer); }
+-#line 1825 "dtc-parser.tab.c" /* yacc.c:1646  */
++
++/* Line 1806 of yacc.c  */
++#line 379 "dtc-parser.y"
++    { (yyval.integer) = (yyvsp[(1) - (3)].integer) || (yyvsp[(3) - (3)].integer); }
+     break;
+ 
+   case 45:
+-#line 372 "dtc-parser.y" /* yacc.c:1646  */
+-    { (yyval.integer) = (yyvsp[-2].integer) | (yyvsp[0].integer); }
+-#line 1831 "dtc-parser.tab.c" /* yacc.c:1646  */
++
++/* Line 1806 of yacc.c  */
++#line 384 "dtc-parser.y"
++    { (yyval.integer) = (yyvsp[(1) - (3)].integer) && (yyvsp[(3) - (3)].integer); }
+     break;
+ 
+   case 47:
+-#line 377 "dtc-parser.y" /* yacc.c:1646  */
+-    { (yyval.integer) = (yyvsp[-2].integer) ^ (yyvsp[0].integer); }
+-#line 1837 "dtc-parser.tab.c" /* yacc.c:1646  */
++
++/* Line 1806 of yacc.c  */
++#line 389 "dtc-parser.y"
++    { (yyval.integer) = (yyvsp[(1) - (3)].integer) | (yyvsp[(3) - (3)].integer); }
+     break;
+ 
+   case 49:
+-#line 382 "dtc-parser.y" /* yacc.c:1646  */
+-    { (yyval.integer) = (yyvsp[-2].integer) & (yyvsp[0].integer); }
+-#line 1843 "dtc-parser.tab.c" /* yacc.c:1646  */
++
++/* Line 1806 of yacc.c  */
++#line 394 "dtc-parser.y"
++    { (yyval.integer) = (yyvsp[(1) - (3)].integer) ^ (yyvsp[(3) - (3)].integer); }
+     break;
+ 
+   case 51:
+-#line 387 "dtc-parser.y" /* yacc.c:1646  */
+-    { (yyval.integer) = (yyvsp[-2].integer) == (yyvsp[0].integer); }
+-#line 1849 "dtc-parser.tab.c" /* yacc.c:1646  */
++
++/* Line 1806 of yacc.c  */
++#line 399 "dtc-parser.y"
++    { (yyval.integer) = (yyvsp[(1) - (3)].integer) & (yyvsp[(3) - (3)].integer); }
+     break;
+ 
+-  case 52:
+-#line 388 "dtc-parser.y" /* yacc.c:1646  */
+-    { (yyval.integer) = (yyvsp[-2].integer) != (yyvsp[0].integer); }
+-#line 1855 "dtc-parser.tab.c" /* yacc.c:1646  */
++  case 53:
++
++/* Line 1806 of yacc.c  */
++#line 404 "dtc-parser.y"
++    { (yyval.integer) = (yyvsp[(1) - (3)].integer) == (yyvsp[(3) - (3)].integer); }
+     break;
+ 
+   case 54:
+-#line 393 "dtc-parser.y" /* yacc.c:1646  */
+-    { (yyval.integer) = (yyvsp[-2].integer) < (yyvsp[0].integer); }
+-#line 1861 "dtc-parser.tab.c" /* yacc.c:1646  */
+-    break;
+ 
+-  case 55:
+-#line 394 "dtc-parser.y" /* yacc.c:1646  */
+-    { (yyval.integer) = (yyvsp[-2].integer) > (yyvsp[0].integer); }
+-#line 1867 "dtc-parser.tab.c" /* yacc.c:1646  */
++/* Line 1806 of yacc.c  */
++#line 405 "dtc-parser.y"
++    { (yyval.integer) = (yyvsp[(1) - (3)].integer) != (yyvsp[(3) - (3)].integer); }
+     break;
+ 
+   case 56:
+-#line 395 "dtc-parser.y" /* yacc.c:1646  */
+-    { (yyval.integer) = (yyvsp[-2].integer) <= (yyvsp[0].integer); }
+-#line 1873 "dtc-parser.tab.c" /* yacc.c:1646  */
++
++/* Line 1806 of yacc.c  */
++#line 410 "dtc-parser.y"
++    { (yyval.integer) = (yyvsp[(1) - (3)].integer) < (yyvsp[(3) - (3)].integer); }
+     break;
+ 
+   case 57:
+-#line 396 "dtc-parser.y" /* yacc.c:1646  */
+-    { (yyval.integer) = (yyvsp[-2].integer) >= (yyvsp[0].integer); }
+-#line 1879 "dtc-parser.tab.c" /* yacc.c:1646  */
++
++/* Line 1806 of yacc.c  */
++#line 411 "dtc-parser.y"
++    { (yyval.integer) = (yyvsp[(1) - (3)].integer) > (yyvsp[(3) - (3)].integer); }
+     break;
+ 
+   case 58:
+-#line 400 "dtc-parser.y" /* yacc.c:1646  */
+-    { (yyval.integer) = (yyvsp[-2].integer) << (yyvsp[0].integer); }
+-#line 1885 "dtc-parser.tab.c" /* yacc.c:1646  */
++
++/* Line 1806 of yacc.c  */
++#line 412 "dtc-parser.y"
++    { (yyval.integer) = (yyvsp[(1) - (3)].integer) <= (yyvsp[(3) - (3)].integer); }
+     break;
+ 
+   case 59:
+-#line 401 "dtc-parser.y" /* yacc.c:1646  */
+-    { (yyval.integer) = (yyvsp[-2].integer) >> (yyvsp[0].integer); }
+-#line 1891 "dtc-parser.tab.c" /* yacc.c:1646  */
++
++/* Line 1806 of yacc.c  */
++#line 413 "dtc-parser.y"
++    { (yyval.integer) = (yyvsp[(1) - (3)].integer) >= (yyvsp[(3) - (3)].integer); }
++    break;
++
++  case 60:
++
++/* Line 1806 of yacc.c  */
++#line 417 "dtc-parser.y"
++    { (yyval.integer) = (yyvsp[(1) - (3)].integer) << (yyvsp[(3) - (3)].integer); }
+     break;
+ 
+   case 61:
+-#line 406 "dtc-parser.y" /* yacc.c:1646  */
+-    { (yyval.integer) = (yyvsp[-2].integer) + (yyvsp[0].integer); }
+-#line 1897 "dtc-parser.tab.c" /* yacc.c:1646  */
++
++/* Line 1806 of yacc.c  */
++#line 418 "dtc-parser.y"
++    { (yyval.integer) = (yyvsp[(1) - (3)].integer) >> (yyvsp[(3) - (3)].integer); }
+     break;
+ 
+-  case 62:
+-#line 407 "dtc-parser.y" /* yacc.c:1646  */
+-    { (yyval.integer) = (yyvsp[-2].integer) - (yyvsp[0].integer); }
+-#line 1903 "dtc-parser.tab.c" /* yacc.c:1646  */
++  case 63:
++
++/* Line 1806 of yacc.c  */
++#line 423 "dtc-parser.y"
++    { (yyval.integer) = (yyvsp[(1) - (3)].integer) + (yyvsp[(3) - (3)].integer); }
+     break;
+ 
+   case 64:
+-#line 412 "dtc-parser.y" /* yacc.c:1646  */
+-    { (yyval.integer) = (yyvsp[-2].integer) * (yyvsp[0].integer); }
+-#line 1909 "dtc-parser.tab.c" /* yacc.c:1646  */
+-    break;
+ 
+-  case 65:
+-#line 413 "dtc-parser.y" /* yacc.c:1646  */
+-    { (yyval.integer) = (yyvsp[-2].integer) / (yyvsp[0].integer); }
+-#line 1915 "dtc-parser.tab.c" /* yacc.c:1646  */
++/* Line 1806 of yacc.c  */
++#line 424 "dtc-parser.y"
++    { (yyval.integer) = (yyvsp[(1) - (3)].integer) - (yyvsp[(3) - (3)].integer); }
+     break;
+ 
+   case 66:
+-#line 414 "dtc-parser.y" /* yacc.c:1646  */
+-    { (yyval.integer) = (yyvsp[-2].integer) % (yyvsp[0].integer); }
+-#line 1921 "dtc-parser.tab.c" /* yacc.c:1646  */
++
++/* Line 1806 of yacc.c  */
++#line 429 "dtc-parser.y"
++    { (yyval.integer) = (yyvsp[(1) - (3)].integer) * (yyvsp[(3) - (3)].integer); }
+     break;
+ 
+-  case 69:
+-#line 420 "dtc-parser.y" /* yacc.c:1646  */
+-    { (yyval.integer) = -(yyvsp[0].integer); }
+-#line 1927 "dtc-parser.tab.c" /* yacc.c:1646  */
++  case 67:
++
++/* Line 1806 of yacc.c  */
++#line 430 "dtc-parser.y"
++    { (yyval.integer) = (yyvsp[(1) - (3)].integer) / (yyvsp[(3) - (3)].integer); }
+     break;
+ 
+-  case 70:
+-#line 421 "dtc-parser.y" /* yacc.c:1646  */
+-    { (yyval.integer) = ~(yyvsp[0].integer); }
+-#line 1933 "dtc-parser.tab.c" /* yacc.c:1646  */
++  case 68:
++
++/* Line 1806 of yacc.c  */
++#line 431 "dtc-parser.y"
++    { (yyval.integer) = (yyvsp[(1) - (3)].integer) % (yyvsp[(3) - (3)].integer); }
+     break;
+ 
+   case 71:
+-#line 422 "dtc-parser.y" /* yacc.c:1646  */
+-    { (yyval.integer) = !(yyvsp[0].integer); }
+-#line 1939 "dtc-parser.tab.c" /* yacc.c:1646  */
++
++/* Line 1806 of yacc.c  */
++#line 437 "dtc-parser.y"
++    { (yyval.integer) = -(yyvsp[(2) - (2)].integer); }
+     break;
+ 
+   case 72:
+-#line 427 "dtc-parser.y" /* yacc.c:1646  */
+-    {
+-			(yyval.data) = empty_data;
+-		}
+-#line 1947 "dtc-parser.tab.c" /* yacc.c:1646  */
++
++/* Line 1806 of yacc.c  */
++#line 438 "dtc-parser.y"
++    { (yyval.integer) = ~(yyvsp[(2) - (2)].integer); }
+     break;
+ 
+   case 73:
+-#line 431 "dtc-parser.y" /* yacc.c:1646  */
+-    {
+-			(yyval.data) = data_append_byte((yyvsp[-1].data), (yyvsp[0].byte));
+-		}
+-#line 1955 "dtc-parser.tab.c" /* yacc.c:1646  */
++
++/* Line 1806 of yacc.c  */
++#line 439 "dtc-parser.y"
++    { (yyval.integer) = !(yyvsp[(2) - (2)].integer); }
+     break;
+ 
+   case 74:
+-#line 435 "dtc-parser.y" /* yacc.c:1646  */
++
++/* Line 1806 of yacc.c  */
++#line 444 "dtc-parser.y"
+     {
+-			(yyval.data) = data_add_marker((yyvsp[-1].data), LABEL, (yyvsp[0].labelref));
++			(yyval.data) = empty_data;
+ 		}
+-#line 1963 "dtc-parser.tab.c" /* yacc.c:1646  */
+     break;
+ 
+   case 75:
+-#line 442 "dtc-parser.y" /* yacc.c:1646  */
++
++/* Line 1806 of yacc.c  */
++#line 448 "dtc-parser.y"
+     {
+-			(yyval.nodelist) = NULL;
++			(yyval.data) = data_append_byte((yyvsp[(1) - (2)].data), (yyvsp[(2) - (2)].byte));
+ 		}
+-#line 1971 "dtc-parser.tab.c" /* yacc.c:1646  */
+     break;
+ 
+   case 76:
+-#line 446 "dtc-parser.y" /* yacc.c:1646  */
++
++/* Line 1806 of yacc.c  */
++#line 452 "dtc-parser.y"
+     {
+-			(yyval.nodelist) = chain_node((yyvsp[-1].node), (yyvsp[0].nodelist));
++			(yyval.data) = data_add_marker((yyvsp[(1) - (2)].data), LABEL, (yyvsp[(2) - (2)].labelref));
+ 		}
+-#line 1979 "dtc-parser.tab.c" /* yacc.c:1646  */
+     break;
+ 
+   case 77:
+-#line 450 "dtc-parser.y" /* yacc.c:1646  */
++
++/* Line 1806 of yacc.c  */
++#line 459 "dtc-parser.y"
+     {
+-			ERROR(&(yylsp[0]), "Properties must precede subnodes");
+-			YYERROR;
++			(yyval.nodelist) = NULL;
+ 		}
+-#line 1988 "dtc-parser.tab.c" /* yacc.c:1646  */
+     break;
+ 
+   case 78:
+-#line 458 "dtc-parser.y" /* yacc.c:1646  */
++
++/* Line 1806 of yacc.c  */
++#line 463 "dtc-parser.y"
+     {
+-			(yyval.node) = name_node((yyvsp[0].node), (yyvsp[-1].propnodename));
++			(yyval.nodelist) = chain_node((yyvsp[(1) - (2)].node), (yyvsp[(2) - (2)].nodelist));
+ 		}
+-#line 1996 "dtc-parser.tab.c" /* yacc.c:1646  */
+     break;
+ 
+   case 79:
+-#line 462 "dtc-parser.y" /* yacc.c:1646  */
++
++/* Line 1806 of yacc.c  */
++#line 467 "dtc-parser.y"
+     {
+-			(yyval.node) = name_node(build_node_delete(), (yyvsp[-1].propnodename));
++			ERROR(&(yylsp[(2) - (2)]), "Properties must precede subnodes");
++			YYERROR;
+ 		}
+-#line 2004 "dtc-parser.tab.c" /* yacc.c:1646  */
+     break;
+ 
+   case 80:
+-#line 466 "dtc-parser.y" /* yacc.c:1646  */
++
++/* Line 1806 of yacc.c  */
++#line 475 "dtc-parser.y"
+     {
+-			add_label(&(yyvsp[0].node)->labels, (yyvsp[-1].labelref));
+-			(yyval.node) = (yyvsp[0].node);
++			(yyval.node) = name_node((yyvsp[(2) - (2)].node), (yyvsp[(1) - (2)].propnodename));
+ 		}
+-#line 2013 "dtc-parser.tab.c" /* yacc.c:1646  */
+     break;
+ 
++  case 81:
+ 
+-#line 2017 "dtc-parser.tab.c" /* yacc.c:1646  */
++/* Line 1806 of yacc.c  */
++#line 479 "dtc-parser.y"
++    {
++			(yyval.node) = name_node(build_node_delete(), (yyvsp[(2) - (3)].propnodename));
++		}
++    break;
++
++  case 82:
++
++/* Line 1806 of yacc.c  */
++#line 483 "dtc-parser.y"
++    {
++			add_label(&(yyvsp[(2) - (2)].node)->labels, (yyvsp[(1) - (2)].labelref));
++			(yyval.node) = (yyvsp[(2) - (2)].node);
++		}
++    break;
++
++
++
++/* Line 1806 of yacc.c  */
++#line 2235 "dtc-parser.tab.c"
+       default: break;
+     }
+   /* User semantic actions sometimes alter yychar, and that requires
+@@ -2036,7 +2254,7 @@ yyreduce:
+   *++yyvsp = yyval;
+   *++yylsp = yyloc;
+ 
+-  /* Now 'shift' the result of the reduction.  Determine what state
++  /* Now `shift' the result of the reduction.  Determine what state
+      that goes to, based on the state we popped back to and the rule
+      number reduced by.  */
+ 
+@@ -2051,9 +2269,9 @@ yyreduce:
+   goto yynewstate;
+ 
+ 
+-/*--------------------------------------.
+-| yyerrlab -- here on detecting error.  |
+-`--------------------------------------*/
++/*------------------------------------.
++| yyerrlab -- here on detecting error |
++`------------------------------------*/
+ yyerrlab:
+   /* Make sure we have latest lookahead translation.  See comments at
+      user semantic actions for why this is necessary.  */
+@@ -2104,20 +2322,20 @@ yyerrlab:
+   if (yyerrstatus == 3)
+     {
+       /* If just tried and failed to reuse lookahead token after an
+-         error, discard it.  */
++	 error, discard it.  */
+ 
+       if (yychar <= YYEOF)
+-        {
+-          /* Return failure if at end of input.  */
+-          if (yychar == YYEOF)
+-            YYABORT;
+-        }
++	{
++	  /* Return failure if at end of input.  */
++	  if (yychar == YYEOF)
++	    YYABORT;
++	}
+       else
+-        {
+-          yydestruct ("Error: discarding",
+-                      yytoken, &yylval, &yylloc);
+-          yychar = YYEMPTY;
+-        }
++	{
++	  yydestruct ("Error: discarding",
++		      yytoken, &yylval, &yylloc);
++	  yychar = YYEMPTY;
++	}
+     }
+ 
+   /* Else will try to reuse lookahead token after shifting the error
+@@ -2137,7 +2355,7 @@ yyerrorlab:
+      goto yyerrorlab;
+ 
+   yyerror_range[1] = yylsp[1-yylen];
+-  /* Do not reclaim the symbols of the rule whose action triggered
++  /* Do not reclaim the symbols of the rule which action triggered
+      this YYERROR.  */
+   YYPOPSTACK (yylen);
+   yylen = 0;
+@@ -2150,37 +2368,35 @@ yyerrorlab:
+ | yyerrlab1 -- common code for both syntax error and YYERROR.  |
+ `-------------------------------------------------------------*/
+ yyerrlab1:
+-  yyerrstatus = 3;      /* Each real token shifted decrements this.  */
++  yyerrstatus = 3;	/* Each real token shifted decrements this.  */
+ 
+   for (;;)
+     {
+       yyn = yypact[yystate];
+       if (!yypact_value_is_default (yyn))
+-        {
+-          yyn += YYTERROR;
+-          if (0 <= yyn && yyn <= YYLAST && yycheck[yyn] == YYTERROR)
+-            {
+-              yyn = yytable[yyn];
+-              if (0 < yyn)
+-                break;
+-            }
+-        }
++	{
++	  yyn += YYTERROR;
++	  if (0 <= yyn && yyn <= YYLAST && yycheck[yyn] == YYTERROR)
++	    {
++	      yyn = yytable[yyn];
++	      if (0 < yyn)
++		break;
++	    }
++	}
+ 
+       /* Pop the current state because it cannot handle the error token.  */
+       if (yyssp == yyss)
+-        YYABORT;
++	YYABORT;
+ 
+       yyerror_range[1] = *yylsp;
+       yydestruct ("Error: popping",
+-                  yystos[yystate], yyvsp, yylsp);
++		  yystos[yystate], yyvsp, yylsp);
+       YYPOPSTACK (1);
+       yystate = *yyssp;
+       YY_STACK_PRINT (yyss, yyssp);
+     }
+ 
+-  YY_IGNORE_MAYBE_UNINITIALIZED_BEGIN
+   *++yyvsp = yylval;
+-  YY_IGNORE_MAYBE_UNINITIALIZED_END
+ 
+   yyerror_range[2] = yylloc;
+   /* Using YYLLOC is tempting, but would change the location of
+@@ -2209,7 +2425,7 @@ yyabortlab:
+   yyresult = 1;
+   goto yyreturn;
+ 
+-#if !defined yyoverflow || YYERROR_VERBOSE
++#if !defined(yyoverflow) || YYERROR_VERBOSE
+ /*-------------------------------------------------.
+ | yyexhaustedlab -- memory exhaustion comes here.  |
+ `-------------------------------------------------*/
+@@ -2228,14 +2444,14 @@ yyreturn:
+       yydestruct ("Cleanup: discarding lookahead",
+                   yytoken, &yylval, &yylloc);
+     }
+-  /* Do not reclaim the symbols of the rule whose action triggered
++  /* Do not reclaim the symbols of the rule which action triggered
+      this YYABORT or YYACCEPT.  */
+   YYPOPSTACK (yylen);
+   YY_STACK_PRINT (yyss, yyssp);
+   while (yyssp != yyss)
+     {
+       yydestruct ("Cleanup: popping",
+-                  yystos[*yyssp], yyvsp, yylsp);
++		  yystos[*yyssp], yyvsp, yylsp);
+       YYPOPSTACK (1);
+     }
+ #ifndef yyoverflow
+@@ -2246,12 +2462,18 @@ yyreturn:
+   if (yymsg != yymsgbuf)
+     YYSTACK_FREE (yymsg);
+ #endif
+-  return yyresult;
++  /* Make sure YYID is used.  */
++  return YYID (yyresult);
+ }
+-#line 472 "dtc-parser.y" /* yacc.c:1906  */
++
++
++
++/* Line 2067 of yacc.c  */
++#line 489 "dtc-parser.y"
+ 
+ 
+ void yyerror(char const *s)
+ {
+ 	ERROR(&yylloc, "%s", s);
+ }
++
+--- a/scripts/dtc/dtc-parser.tab.h_shipped
++++ b/scripts/dtc/dtc-parser.tab.h_shipped
+@@ -1,19 +1,19 @@
+-/* A Bison parser, made by GNU Bison 3.0.2.  */
++/* A Bison parser, made by GNU Bison 2.5.  */
+ 
+ /* Bison interface for Yacc-like parsers in C
+-
+-   Copyright (C) 1984, 1989-1990, 2000-2013 Free Software Foundation, Inc.
+-
++   
++      Copyright (C) 1984, 1989-1990, 2000-2011 Free Software Foundation, Inc.
++   
+    This program is free software: you can redistribute it and/or modify
+    it under the terms of the GNU General Public License as published by
+    the Free Software Foundation, either version 3 of the License, or
+    (at your option) any later version.
+-
++   
+    This program is distributed in the hope that it will be useful,
+    but WITHOUT ANY WARRANTY; without even the implied warranty of
+    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+    GNU General Public License for more details.
+-
++   
+    You should have received a copy of the GNU General Public License
+    along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
+ 
+@@ -26,55 +26,50 @@
+    special exception, which will cause the skeleton and the resulting
+    Bison output files to be licensed under the GNU General Public
+    License without this special exception.
+-
++   
+    This special exception was added by the Free Software Foundation in
+    version 2.2 of Bison.  */
+ 
+-#ifndef YY_YY_DTC_PARSER_TAB_H_INCLUDED
+-# define YY_YY_DTC_PARSER_TAB_H_INCLUDED
+-/* Debug traces.  */
+-#ifndef YYDEBUG
+-# define YYDEBUG 0
+-#endif
+-#if YYDEBUG
+-extern int yydebug;
+-#endif
+ 
+-/* Token type.  */
++/* Tokens.  */
+ #ifndef YYTOKENTYPE
+ # define YYTOKENTYPE
+-  enum yytokentype
+-  {
+-    DT_V1 = 258,
+-    DT_MEMRESERVE = 259,
+-    DT_LSHIFT = 260,
+-    DT_RSHIFT = 261,
+-    DT_LE = 262,
+-    DT_GE = 263,
+-    DT_EQ = 264,
+-    DT_NE = 265,
+-    DT_AND = 266,
+-    DT_OR = 267,
+-    DT_BITS = 268,
+-    DT_DEL_PROP = 269,
+-    DT_DEL_NODE = 270,
+-    DT_PROPNODENAME = 271,
+-    DT_LITERAL = 272,
+-    DT_CHAR_LITERAL = 273,
+-    DT_BYTE = 274,
+-    DT_STRING = 275,
+-    DT_LABEL = 276,
+-    DT_REF = 277,
+-    DT_INCBIN = 278
+-  };
++   /* Put the tokens into the symbol table, so that GDB and other debuggers
++      know about them.  */
++   enum yytokentype {
++     DT_V1 = 258,
++     DT_PLUGIN = 259,
++     DT_MEMRESERVE = 260,
++     DT_LSHIFT = 261,
++     DT_RSHIFT = 262,
++     DT_LE = 263,
++     DT_GE = 264,
++     DT_EQ = 265,
++     DT_NE = 266,
++     DT_AND = 267,
++     DT_OR = 268,
++     DT_BITS = 269,
++     DT_DEL_PROP = 270,
++     DT_DEL_NODE = 271,
++     DT_PROPNODENAME = 272,
++     DT_LITERAL = 273,
++     DT_CHAR_LITERAL = 274,
++     DT_BYTE = 275,
++     DT_STRING = 276,
++     DT_LABEL = 277,
++     DT_REF = 278,
++     DT_INCBIN = 279
++   };
+ #endif
+ 
+-/* Value type.  */
++
++
+ #if ! defined YYSTYPE && ! defined YYSTYPE_IS_DECLARED
+-typedef union YYSTYPE YYSTYPE;
+-union YYSTYPE
++typedef union YYSTYPE
+ {
+-#line 38 "dtc-parser.y" /* yacc.c:1909  */
++
++/* Line 2068 of yacc.c  */
++#line 39 "dtc-parser.y"
+ 
+ 	char *propnodename;
+ 	char *labelref;
+@@ -92,30 +87,32 @@ union YYSTYPE
+ 	struct node *nodelist;
+ 	struct reserve_info *re;
+ 	uint64_t integer;
++	int is_plugin;
++
++
+ 
+-#line 97 "dtc-parser.tab.h" /* yacc.c:1909  */
+-};
++/* Line 2068 of yacc.c  */
++#line 96 "dtc-parser.tab.h"
++} YYSTYPE;
+ # define YYSTYPE_IS_TRIVIAL 1
++# define yystype YYSTYPE /* obsolescent; will be withdrawn */
+ # define YYSTYPE_IS_DECLARED 1
+ #endif
+ 
+-/* Location type.  */
++extern YYSTYPE yylval;
++
+ #if ! defined YYLTYPE && ! defined YYLTYPE_IS_DECLARED
+-typedef struct YYLTYPE YYLTYPE;
+-struct YYLTYPE
++typedef struct YYLTYPE
+ {
+   int first_line;
+   int first_column;
+   int last_line;
+   int last_column;
+-};
++} YYLTYPE;
++# define yyltype YYLTYPE /* obsolescent; will be withdrawn */
+ # define YYLTYPE_IS_DECLARED 1
+ # define YYLTYPE_IS_TRIVIAL 1
+ #endif
+ 
+-
+-extern YYSTYPE yylval;
+ extern YYLTYPE yylloc;
+-int yyparse (void);
+ 
+-#endif /* !YY_YY_DTC_PARSER_TAB_H_INCLUDED  */
+--- a/scripts/dtc/dtc-parser.y
++++ b/scripts/dtc/dtc-parser.y
+@@ -19,6 +19,7 @@
+  */
+ %{
+ #include <stdio.h>
++#include <inttypes.h>
+ 
+ #include "dtc.h"
+ #include "srcpos.h"
+@@ -52,9 +53,11 @@ extern bool treesource_error;
+ 	struct node *nodelist;
+ 	struct reserve_info *re;
+ 	uint64_t integer;
++	int is_plugin;
+ }
+ 
+ %token DT_V1
++%token DT_PLUGIN
+ %token DT_MEMRESERVE
+ %token DT_LSHIFT DT_RSHIFT DT_LE DT_GE DT_EQ DT_NE DT_AND DT_OR
+ %token DT_BITS
+@@ -71,6 +74,7 @@ extern bool treesource_error;
+ 
+ %type <data> propdata
+ %type <data> propdataprefix
++%type <is_plugin> plugindecl
+ %type <re> memreserve
+ %type <re> memreserves
+ %type <array> arrayprefix
+@@ -101,10 +105,23 @@ extern bool treesource_error;
+ %%
+ 
+ sourcefile:
+-	  DT_V1 ';' memreserves devicetree
++	  DT_V1 ';' plugindecl memreserves devicetree
+ 		{
+-			the_boot_info = build_boot_info($3, $4,
+-							guess_boot_cpuid($4));
++			$5->is_plugin = $3;
++			$5->is_root = 1;
++			the_boot_info = build_boot_info($4, $5,
++							guess_boot_cpuid($5));
++		}
++	;
++
++plugindecl:
++	/* empty */
++		{
++			$$ = 0;
++		}
++	| DT_PLUGIN ';'
++		{
++			$$ = 1;
+ 		}
+ 	;
+ 
+--- a/scripts/dtc/dtc.c
++++ b/scripts/dtc/dtc.c
+@@ -29,6 +29,7 @@ int reservenum;		/* Number of memory res
+ int minsize;		/* Minimum blob size */
+ int padsize;		/* Additional padding to blob */
+ int phandle_format = PHANDLE_BOTH;	/* Use linux,phandle or phandle properties */
++int symbol_fixup_support = 0;
+ 
+ static void fill_fullpaths(struct node *tree, const char *prefix)
+ {
+@@ -51,7 +52,7 @@ static void fill_fullpaths(struct node *
+ #define FDT_VERSION(version)	_FDT_VERSION(version)
+ #define _FDT_VERSION(version)	#version
+ static const char usage_synopsis[] = "dtc [options] <input file>";
+-static const char usage_short_opts[] = "qI:O:o:V:d:R:S:p:fb:i:H:sW:E:hv";
++static const char usage_short_opts[] = "qI:O:o:V:d:R:S:p:fb:i:H:sW:E:hv@";
+ static struct option const usage_long_opts[] = {
+ 	{"quiet",            no_argument, NULL, 'q'},
+ 	{"in-format",         a_argument, NULL, 'I'},
+@@ -69,6 +70,7 @@ static struct option const usage_long_op
+ 	{"phandle",           a_argument, NULL, 'H'},
+ 	{"warning",           a_argument, NULL, 'W'},
+ 	{"error",             a_argument, NULL, 'E'},
++	{"symbols",           a_argument, NULL, '@'},
+ 	{"help",             no_argument, NULL, 'h'},
+ 	{"version",          no_argument, NULL, 'v'},
+ 	{NULL,               no_argument, NULL, 0x0},
+@@ -99,6 +101,7 @@ static const char * const usage_opts_hel
+ 	 "\t\tboth   - Both \"linux,phandle\" and \"phandle\" properties",
+ 	"\n\tEnable/disable warnings (prefix with \"no-\")",
+ 	"\n\tEnable/disable errors (prefix with \"no-\")",
++	"\n\tSymbols and Fixups support",
+ 	"\n\tPrint this help and exit",
+ 	"\n\tPrint version and exit",
+ 	NULL,
+@@ -186,7 +189,9 @@ int main(int argc, char *argv[])
+ 		case 'E':
+ 			parse_checks_option(false, true, optarg);
+ 			break;
+-
++		case '@':
++			symbol_fixup_support = 1;
++			break;
+ 		case 'h':
+ 			usage(NULL);
+ 		default:
+--- a/scripts/dtc/dtc.h
++++ b/scripts/dtc/dtc.h
+@@ -54,6 +54,7 @@ extern int reservenum;		/* Number of mem
+ extern int minsize;		/* Minimum blob size */
+ extern int padsize;		/* Additional padding to blob */
+ extern int phandle_format;	/* Use linux,phandle or phandle properties */
++extern int symbol_fixup_support;/* enable symbols & fixup support */
+ 
+ #define PHANDLE_LEGACY	0x1
+ #define PHANDLE_EPAPR	0x2
+@@ -132,6 +133,25 @@ struct label {
+ 	struct label *next;
+ };
+ 
++struct fixup_entry {
++	int offset;
++	struct node *node;
++	struct property *prop;
++	struct fixup_entry *next;
++};
++
++struct fixup {
++	char *ref;
++	struct fixup_entry *entries;
++	struct fixup *next;
++};
++
++struct symbol {
++	struct label *label;
++	struct node *node;
++	struct symbol *next;
++};
++
+ struct property {
+ 	bool deleted;
+ 	char *name;
+@@ -158,6 +178,12 @@ struct node {
+ 	int addr_cells, size_cells;
+ 
+ 	struct label *labels;
++
++	int is_root;
++	int is_plugin;
++	struct fixup *fixups;
++	struct symbol *symbols;
++	struct fixup_entry *local_fixups;
+ };
+ 
+ #define for_each_label_withdel(l0, l) \
+@@ -181,6 +207,18 @@ struct node {
+ 	for_each_child_withdel(n, c) \
+ 		if (!(c)->deleted)
+ 
++#define for_each_fixup(n, f) \
++	for ((f) = (n)->fixups; (f); (f) = (f)->next)
++
++#define for_each_fixup_entry(f, fe) \
++	for ((fe) = (f)->entries; (fe); (fe) = (fe)->next)
++
++#define for_each_symbol(n, s) \
++	for ((s) = (n)->symbols; (s); (s) = (s)->next)
++
++#define for_each_local_fixup_entry(n, fe) \
++	for ((fe) = (n)->local_fixups; (fe); (fe) = (fe)->next)
++
+ void add_label(struct label **labels, char *label);
+ void delete_labels(struct label **labels);
+ 
+--- a/scripts/dtc/flattree.c
++++ b/scripts/dtc/flattree.c
+@@ -262,6 +262,12 @@ static void flatten_tree(struct node *tr
+ 	struct property *prop;
+ 	struct node *child;
+ 	bool seen_name_prop = false;
++	struct symbol *sym;
++	struct fixup *f;
++	struct fixup_entry *fe;
++	char *name, *s;
++	const char *fullpath;
++	int namesz, nameoff, vallen;
+ 
+ 	if (tree->deleted)
+ 		return;
+@@ -276,8 +282,6 @@ static void flatten_tree(struct node *tr
+ 	emit->align(etarget, sizeof(cell_t));
+ 
+ 	for_each_property(tree, prop) {
+-		int nameoff;
+-
+ 		if (streq(prop->name, "name"))
+ 			seen_name_prop = true;
+ 
+@@ -310,6 +314,139 @@ static void flatten_tree(struct node *tr
+ 		flatten_tree(child, emit, etarget, strbuf, vi);
+ 	}
+ 
++	if (!symbol_fixup_support)
++		goto no_symbols;
++
++	/* add the symbol nodes (if any) */
++	if (tree->symbols) {
++
++		emit->beginnode(etarget, NULL);
++		emit->string(etarget, "__symbols__", 0);
++		emit->align(etarget, sizeof(cell_t));
++
++		for_each_symbol(tree, sym) {
++
++			vallen = strlen(sym->node->fullpath);
++
++			nameoff = stringtable_insert(strbuf, sym->label->label);
++
++			emit->property(etarget, NULL);
++			emit->cell(etarget, vallen + 1);
++			emit->cell(etarget, nameoff);
++
++			if ((vi->flags & FTF_VARALIGN) && vallen >= 8)
++				emit->align(etarget, 8);
++
++			emit->string(etarget, sym->node->fullpath,
++					strlen(sym->node->fullpath));
++			emit->align(etarget, sizeof(cell_t));
++		}
++
++		emit->endnode(etarget, NULL);
++	}
++
++	/* add the fixup nodes */
++	if (tree->fixups) {
++
++		/* emit the external fixups */
++		emit->beginnode(etarget, NULL);
++		emit->string(etarget, "__fixups__", 0);
++		emit->align(etarget, sizeof(cell_t));
++
++		for_each_fixup(tree, f) {
++
++			namesz = 0;
++			for_each_fixup_entry(f, fe) {
++				fullpath = fe->node->fullpath;
++				if (fullpath[0] == '\0')
++					fullpath = "/";
++				namesz += strlen(fullpath) + 1;
++				namesz += strlen(fe->prop->name) + 1;
++				namesz += 32;	/* space for :<number> + '\0' */
++			}
++
++			name = xmalloc(namesz);
++
++			s = name;
++			for_each_fixup_entry(f, fe) {
++				fullpath = fe->node->fullpath;
++				if (fullpath[0] == '\0')
++					fullpath = "/";
++				snprintf(s, name + namesz - s, "%s:%s:%d",
++						fullpath,
++						fe->prop->name, fe->offset);
++				s += strlen(s) + 1;
++			}
++
++			nameoff = stringtable_insert(strbuf, f->ref);
++			vallen = s - name - 1;
++
++			emit->property(etarget, NULL);
++			emit->cell(etarget, vallen + 1);
++			emit->cell(etarget, nameoff);
++
++			if ((vi->flags & FTF_VARALIGN) && vallen >= 8)
++				emit->align(etarget, 8);
++
++			emit->string(etarget, name, vallen);
++			emit->align(etarget, sizeof(cell_t));
++
++			free(name);
++		}
++
++		emit->endnode(etarget, tree->labels);
++	}
++
++	/* add the local fixup property */
++	if (tree->local_fixups) {
++
++		/* emit the external fixups */
++		emit->beginnode(etarget, NULL);
++		emit->string(etarget, "__local_fixups__", 0);
++		emit->align(etarget, sizeof(cell_t));
++
++		namesz = 0;
++		for_each_local_fixup_entry(tree, fe) {
++			fullpath = fe->node->fullpath;
++			if (fullpath[0] == '\0')
++				fullpath = "/";
++			namesz += strlen(fullpath) + 1;
++			namesz += strlen(fe->prop->name) + 1;
++			namesz += 32;	/* space for :<number> + '\0' */
++		}
++
++		name = xmalloc(namesz);
++
++		s = name;
++		for_each_local_fixup_entry(tree, fe) {
++			fullpath = fe->node->fullpath;
++			if (fullpath[0] == '\0')
++				fullpath = "/";
++			snprintf(s, name + namesz - s, "%s:%s:%d",
++					fullpath, fe->prop->name,
++					fe->offset);
++			s += strlen(s) + 1;
++		}
++
++		nameoff = stringtable_insert(strbuf, "fixup");
++		vallen = s - name - 1;
++
++		emit->property(etarget, NULL);
++		emit->cell(etarget, vallen + 1);
++		emit->cell(etarget, nameoff);
++
++		if ((vi->flags & FTF_VARALIGN) && vallen >= 8)
++			emit->align(etarget, 8);
++
++		emit->string(etarget, name, vallen);
++		emit->align(etarget, sizeof(cell_t));
++
++		free(name);
++
++		emit->endnode(etarget, tree->labels);
++	}
++
++no_symbols:
+ 	emit->endnode(etarget, tree->labels);
+ }
+ 
+--- a/scripts/dtc/version_gen.h
++++ b/scripts/dtc/version_gen.h
+@@ -1 +1 @@
+-#define DTC_VERSION "DTC 1.4.1-g9d3649bd"
++#define DTC_VERSION "DTC 1.4.1-g9d3649bd-dirty"
diff --git a/target/linux/brcm2708/patches-4.4/0084-mfd-Add-Raspberry-Pi-Sense-HAT-core-driver.patch b/target/linux/brcm2708/patches-4.4/0084-mfd-Add-Raspberry-Pi-Sense-HAT-core-driver.patch
new file mode 100644
index 0000000..cbb3c48
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0084-mfd-Add-Raspberry-Pi-Sense-HAT-core-driver.patch
@@ -0,0 +1,838 @@
+From 933c347e34871491afd9d53c7a9d0bf3c283c41f Mon Sep 17 00:00:00 2001
+From: Phil Elwell <pelwell at users.noreply.github.com>
+Date: Tue, 14 Jul 2015 14:32:47 +0100
+Subject: [PATCH 084/127] mfd: Add Raspberry Pi Sense HAT core driver
+
+---
+ drivers/input/joystick/Kconfig           |   8 +
+ drivers/input/joystick/Makefile          |   1 +
+ drivers/input/joystick/rpisense-js.c     | 153 ++++++++++++++++
+ drivers/mfd/Kconfig                      |   8 +
+ drivers/mfd/Makefile                     |   2 +
+ drivers/mfd/rpisense-core.c              | 157 +++++++++++++++++
+ drivers/video/fbdev/Kconfig              |  13 ++
+ drivers/video/fbdev/Makefile             |   1 +
+ drivers/video/fbdev/rpisense-fb.c        | 293 +++++++++++++++++++++++++++++++
+ include/linux/mfd/rpisense/core.h        |  47 +++++
+ include/linux/mfd/rpisense/framebuffer.h |  32 ++++
+ include/linux/mfd/rpisense/joystick.h    |  35 ++++
+ 12 files changed, 750 insertions(+)
+ create mode 100644 drivers/input/joystick/rpisense-js.c
+ create mode 100644 drivers/mfd/rpisense-core.c
+ create mode 100644 drivers/video/fbdev/rpisense-fb.c
+ create mode 100644 include/linux/mfd/rpisense/core.h
+ create mode 100644 include/linux/mfd/rpisense/framebuffer.h
+ create mode 100644 include/linux/mfd/rpisense/joystick.h
+
+--- a/drivers/input/joystick/Kconfig
++++ b/drivers/input/joystick/Kconfig
+@@ -330,4 +330,12 @@ config JOYSTICK_MAPLE
+ 	  To compile this as a module choose M here: the module will be called
+ 	  maplecontrol.
+ 
++config JOYSTICK_RPISENSE
++	tristate "Raspberry Pi Sense HAT joystick"
++	depends on GPIOLIB && INPUT
++	select MFD_RPISENSE_CORE
++
++	help
++	  This is the joystick driver for the Raspberry Pi Sense HAT
++
+ endif
+--- a/drivers/input/joystick/Makefile
++++ b/drivers/input/joystick/Makefile
+@@ -32,4 +32,5 @@ obj-$(CONFIG_JOYSTICK_WARRIOR)		+= warri
+ obj-$(CONFIG_JOYSTICK_XPAD)		+= xpad.o
+ obj-$(CONFIG_JOYSTICK_ZHENHUA)		+= zhenhua.o
+ obj-$(CONFIG_JOYSTICK_WALKERA0701)	+= walkera0701.o
++obj-$(CONFIG_JOYSTICK_RPISENSE)		+= rpisense-js.o
+ 
+--- /dev/null
++++ b/drivers/input/joystick/rpisense-js.c
+@@ -0,0 +1,153 @@
++/*
++ * Raspberry Pi Sense HAT joystick driver
++ * http://raspberrypi.org
++ *
++ * Copyright (C) 2015 Raspberry Pi
++ *
++ * Author: Serge Schneider
++ *
++ *  This program is free software; you can redistribute  it and/or modify it
++ *  under  the terms of  the GNU General  Public License as published by the
++ *  Free Software Foundation;  either version 2 of the  License, or (at your
++ *  option) any later version.
++ *
++ */
++
++#include <linux/module.h>
++
++#include <linux/mfd/rpisense/joystick.h>
++#include <linux/mfd/rpisense/core.h>
++
++static struct rpisense *rpisense;
++static unsigned char keymap[5] = {KEY_DOWN, KEY_RIGHT, KEY_UP, KEY_ENTER, KEY_LEFT,};
++
++static void keys_work_fn(struct work_struct *work)
++{
++	int i;
++	static s32 prev_keys;
++	struct rpisense_js *rpisense_js = &rpisense->joystick;
++	s32 keys = rpisense_reg_read(rpisense, RPISENSE_KEYS);
++	s32 changes = keys ^ prev_keys;
++
++	prev_keys = keys;
++	for (i = 0; i < 5; i++) {
++		if (changes & 1) {
++			input_report_key(rpisense_js->keys_dev,
++					 keymap[i], keys & 1);
++		}
++		changes >>= 1;
++		keys >>= 1;
++	}
++	input_sync(rpisense_js->keys_dev);
++}
++
++static irqreturn_t keys_irq_handler(int irq, void *pdev)
++{
++	struct rpisense_js *rpisense_js = &rpisense->joystick;
++
++	schedule_work(&rpisense_js->keys_work_s);
++	return IRQ_HANDLED;
++}
++
++static int rpisense_js_probe(struct platform_device *pdev)
++{
++	int ret;
++	int i;
++	struct rpisense_js *rpisense_js;
++
++	rpisense = rpisense_get_dev();
++	rpisense_js = &rpisense->joystick;
++
++	INIT_WORK(&rpisense_js->keys_work_s, keys_work_fn);
++
++	rpisense_js->keys_dev = input_allocate_device();
++	if (!rpisense_js->keys_dev) {
++		dev_err(&pdev->dev, "Could not allocate input device.\n");
++		return -ENOMEM;
++	}
++
++	rpisense_js->keys_dev->evbit[0] = BIT_MASK(EV_KEY);
++	for (i = 0; i < ARRAY_SIZE(keymap); i++) {
++		set_bit(keymap[i],
++			rpisense_js->keys_dev->keybit);
++	}
++
++	rpisense_js->keys_dev->name = "Raspberry Pi Sense HAT Joystick";
++	rpisense_js->keys_dev->phys = "rpi-sense-joy/input0";
++	rpisense_js->keys_dev->id.bustype = BUS_I2C;
++	rpisense_js->keys_dev->evbit[0] = BIT_MASK(EV_KEY) | BIT_MASK(EV_REP);
++	rpisense_js->keys_dev->keycode = keymap;
++	rpisense_js->keys_dev->keycodesize = sizeof(unsigned char);
++	rpisense_js->keys_dev->keycodemax = ARRAY_SIZE(keymap);
++
++	ret = input_register_device(rpisense_js->keys_dev);
++	if (ret) {
++		dev_err(&pdev->dev, "Could not register input device.\n");
++		goto err_keys_alloc;
++	}
++
++	ret = gpiod_direction_input(rpisense_js->keys_desc);
++	if (ret) {
++		dev_err(&pdev->dev, "Could not set keys-int direction.\n");
++		goto err_keys_reg;
++	}
++
++	rpisense_js->keys_irq = gpiod_to_irq(rpisense_js->keys_desc);
++	if (rpisense_js->keys_irq < 0) {
++		dev_err(&pdev->dev, "Could not determine keys-int IRQ.\n");
++		ret = rpisense_js->keys_irq;
++		goto err_keys_reg;
++	}
++
++	ret = devm_request_irq(&pdev->dev, rpisense_js->keys_irq,
++			       keys_irq_handler, IRQF_TRIGGER_RISING,
++			       "keys", &pdev->dev);
++	if (ret) {
++		dev_err(&pdev->dev, "IRQ request failed.\n");
++		goto err_keys_reg;
++	}
++	return 0;
++err_keys_reg:
++	input_unregister_device(rpisense_js->keys_dev);
++err_keys_alloc:
++	input_free_device(rpisense_js->keys_dev);
++	return ret;
++}
++
++static int rpisense_js_remove(struct platform_device *pdev)
++{
++	struct rpisense_js *rpisense_js = &rpisense->joystick;
++
++	input_unregister_device(rpisense_js->keys_dev);
++	input_free_device(rpisense_js->keys_dev);
++	return 0;
++}
++
++#ifdef CONFIG_OF
++static const struct of_device_id rpisense_js_id[] = {
++	{ .compatible = "rpi,rpi-sense-js" },
++	{ },
++};
++MODULE_DEVICE_TABLE(of, rpisense_js_id);
++#endif
++
++static struct platform_device_id rpisense_js_device_id[] = {
++	{ .name = "rpi-sense-js" },
++	{ },
++};
++MODULE_DEVICE_TABLE(platform, rpisense_js_device_id);
++
++static struct platform_driver rpisense_js_driver = {
++	.probe = rpisense_js_probe,
++	.remove = rpisense_js_remove,
++	.driver = {
++		.name = "rpi-sense-js",
++		.owner = THIS_MODULE,
++	},
++};
++
++module_platform_driver(rpisense_js_driver);
++
++MODULE_DESCRIPTION("Raspberry Pi Sense HAT joystick driver");
++MODULE_AUTHOR("Serge Schneider <serge at raspberrypi.org>");
++MODULE_LICENSE("GPL");
+--- a/drivers/mfd/Kconfig
++++ b/drivers/mfd/Kconfig
+@@ -10,6 +10,14 @@ config MFD_CORE
+ 	select IRQ_DOMAIN
+ 	default n
+ 
++config MFD_RPISENSE_CORE
++	tristate "Raspberry Pi Sense HAT core functions"
++	depends on I2C
++	select MFD_CORE
++	help
++	  This is the core driver for the Raspberry Pi Sense HAT. This provides
++	  the necessary functions to communicate with the hardware.
++
+ config MFD_CS5535
+ 	tristate "AMD CS5535 and CS5536 southbridge core functions"
+ 	select MFD_CORE
+--- a/drivers/mfd/Makefile
++++ b/drivers/mfd/Makefile
+@@ -194,3 +194,5 @@ intel-soc-pmic-objs		:= intel_soc_pmic_c
+ intel-soc-pmic-$(CONFIG_INTEL_PMC_IPC)	+= intel_soc_pmic_bxtwc.o
+ obj-$(CONFIG_INTEL_SOC_PMIC)	+= intel-soc-pmic.o
+ obj-$(CONFIG_MFD_MT6397)	+= mt6397-core.o
++
++obj-$(CONFIG_MFD_RPISENSE_CORE)	+= rpisense-core.o
+--- /dev/null
++++ b/drivers/mfd/rpisense-core.c
+@@ -0,0 +1,157 @@
++/*
++ * Raspberry Pi Sense HAT core driver
++ * http://raspberrypi.org
++ *
++ * Copyright (C) 2015 Raspberry Pi
++ *
++ * Author: Serge Schneider
++ *
++ *  This program is free software; you can redistribute  it and/or modify it
++ *  under  the terms of  the GNU General  Public License as published by the
++ *  Free Software Foundation;  either version 2 of the  License, or (at your
++ *  option) any later version.
++ *
++ *  This driver is based on wm8350 implementation.
++ */
++
++#include <linux/module.h>
++#include <linux/moduleparam.h>
++#include <linux/err.h>
++#include <linux/init.h>
++#include <linux/i2c.h>
++#include <linux/platform_device.h>
++#include <linux/mfd/rpisense/core.h>
++#include <linux/slab.h>
++
++static struct rpisense *rpisense;
++
++static void rpisense_client_dev_register(struct rpisense *rpisense,
++					 const char *name,
++					 struct platform_device **pdev)
++{
++	int ret;
++
++	*pdev = platform_device_alloc(name, -1);
++	if (*pdev == NULL) {
++		dev_err(rpisense->dev, "Failed to allocate %s\n", name);
++		return;
++	}
++
++	(*pdev)->dev.parent = rpisense->dev;
++	platform_set_drvdata(*pdev, rpisense);
++	ret = platform_device_add(*pdev);
++	if (ret != 0) {
++		dev_err(rpisense->dev, "Failed to register %s: %d\n",
++			name, ret);
++		platform_device_put(*pdev);
++		*pdev = NULL;
++	}
++}
++
++static int rpisense_probe(struct i2c_client *i2c,
++			       const struct i2c_device_id *id)
++{
++	int ret;
++	struct rpisense_js *rpisense_js;
++
++	rpisense = devm_kzalloc(&i2c->dev, sizeof(struct rpisense), GFP_KERNEL);
++	if (rpisense == NULL)
++		return -ENOMEM;
++
++	i2c_set_clientdata(i2c, rpisense);
++	rpisense->dev = &i2c->dev;
++	rpisense->i2c_client = i2c;
++
++	ret = rpisense_reg_read(rpisense, RPISENSE_WAI);
++	if (ret > 0) {
++		if (ret != 's')
++			return -EINVAL;
++	} else {
++		return ret;
++	}
++	ret = rpisense_reg_read(rpisense, RPISENSE_VER);
++	if (ret < 0)
++		return ret;
++
++	dev_info(rpisense->dev,
++		 "Raspberry Pi Sense HAT firmware version %i\n", ret);
++
++	rpisense_js = &rpisense->joystick;
++	rpisense_js->keys_desc = devm_gpiod_get(&i2c->dev,
++						"keys-int", GPIOD_IN);
++	if (IS_ERR(rpisense_js->keys_desc)) {
++		dev_warn(&i2c->dev, "Failed to get keys-int descriptor.\n");
++		rpisense_js->keys_desc = gpio_to_desc(23);
++		if (rpisense_js->keys_desc == NULL) {
++			dev_err(&i2c->dev, "GPIO23 fallback failed.\n");
++			return PTR_ERR(rpisense_js->keys_desc);
++		}
++	}
++	rpisense_client_dev_register(rpisense, "rpi-sense-js",
++				     &(rpisense->joystick.pdev));
++	rpisense_client_dev_register(rpisense, "rpi-sense-fb",
++				     &(rpisense->framebuffer.pdev));
++
++	return 0;
++}
++
++static int rpisense_remove(struct i2c_client *i2c)
++{
++	struct rpisense *rpisense = i2c_get_clientdata(i2c);
++
++	platform_device_unregister(rpisense->joystick.pdev);
++	return 0;
++}
++
++struct rpisense *rpisense_get_dev(void)
++{
++	return rpisense;
++}
++EXPORT_SYMBOL_GPL(rpisense_get_dev);
++
++s32 rpisense_reg_read(struct rpisense *rpisense, int reg)
++{
++	int ret = i2c_smbus_read_byte_data(rpisense->i2c_client, reg);
++
++	if (ret < 0)
++		dev_err(rpisense->dev, "Read from reg %d failed\n", reg);
++	/* Due to the BCM270x I2C clock stretching bug, some values
++	 * may have MSB set. Clear it to avoid incorrect values.
++	 * */
++	return ret & 0x7F;
++}
++EXPORT_SYMBOL_GPL(rpisense_reg_read);
++
++int rpisense_block_write(struct rpisense *rpisense, const char *buf, int count)
++{
++	int ret = i2c_master_send(rpisense->i2c_client, buf, count);
++
++	if (ret < 0)
++		dev_err(rpisense->dev, "Block write failed\n");
++	return ret;
++}
++EXPORT_SYMBOL_GPL(rpisense_block_write);
++
++static const struct i2c_device_id rpisense_i2c_id[] = {
++	{ "rpi-sense", 0 },
++	{ }
++};
++MODULE_DEVICE_TABLE(i2c, rpisense_i2c_id);
++
++
++static struct i2c_driver rpisense_driver = {
++	.driver = {
++		   .name = "rpi-sense",
++		   .owner = THIS_MODULE,
++	},
++	.probe = rpisense_probe,
++	.remove = rpisense_remove,
++	.id_table = rpisense_i2c_id,
++};
++
++module_i2c_driver(rpisense_driver);
++
++MODULE_DESCRIPTION("Raspberry Pi Sense HAT core driver");
++MODULE_AUTHOR("Serge Schneider <serge at raspberrypi.org>");
++MODULE_LICENSE("GPL");
++
+--- a/drivers/video/fbdev/Kconfig
++++ b/drivers/video/fbdev/Kconfig
+@@ -2506,3 +2506,16 @@ config FB_SM712
+ 	  This driver is also available as a module. The module will be
+ 	  called sm712fb. If you want to compile it as a module, say M
+ 	  here and read <file:Documentation/kbuild/modules.txt>.
++
++config FB_RPISENSE
++	tristate "Raspberry Pi Sense HAT framebuffer"
++	depends on FB
++	select MFD_RPISENSE_CORE
++	select FB_SYS_FOPS
++	select FB_SYS_FILLRECT
++	select FB_SYS_COPYAREA
++	select FB_SYS_IMAGEBLIT
++	select FB_DEFERRED_IO
++
++	help
++	  This is the framebuffer driver for the Raspberry Pi Sense HAT
+--- a/drivers/video/fbdev/Makefile
++++ b/drivers/video/fbdev/Makefile
+@@ -150,6 +150,7 @@ obj-$(CONFIG_FB_DA8XX)		  += da8xx-fb.o
+ obj-$(CONFIG_FB_MXS)		  += mxsfb.o
+ obj-$(CONFIG_FB_SSD1307)	  += ssd1307fb.o
+ obj-$(CONFIG_FB_SIMPLE)           += simplefb.o
++obj-$(CONFIG_FB_RPISENSE)	  += rpisense-fb.o
+ 
+ # the test framebuffer is last
+ obj-$(CONFIG_FB_VIRTUAL)          += vfb.o
+--- /dev/null
++++ b/drivers/video/fbdev/rpisense-fb.c
+@@ -0,0 +1,293 @@
++/*
++ * Raspberry Pi Sense HAT framebuffer driver
++ * http://raspberrypi.org
++ *
++ * Copyright (C) 2015 Raspberry Pi
++ *
++ * Author: Serge Schneider
++ *
++ *  This program is free software; you can redistribute  it and/or modify it
++ *  under  the terms of  the GNU General  Public License as published by the
++ *  Free Software Foundation;  either version 2 of the  License, or (at your
++ *  option) any later version.
++ *
++ */
++
++#include <linux/module.h>
++#include <linux/kernel.h>
++#include <linux/errno.h>
++#include <linux/string.h>
++#include <linux/mm.h>
++#include <linux/slab.h>
++#include <linux/uaccess.h>
++#include <linux/delay.h>
++#include <linux/fb.h>
++#include <linux/init.h>
++
++#include <linux/mfd/rpisense/framebuffer.h>
++#include <linux/mfd/rpisense/core.h>
++
++static bool lowlight;
++module_param(lowlight, bool, 0);
++MODULE_PARM_DESC(lowlight, "Reduce LED matrix brightness to one third");
++
++static struct rpisense *rpisense;
++
++struct rpisense_fb_param {
++	char __iomem *vmem;
++	u8 *vmem_work;
++	u32 vmemsize;
++	u8 *gamma;
++};
++
++static u8 gamma_default[32] = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x01,
++			       0x02, 0x02, 0x03, 0x03, 0x04, 0x05, 0x06, 0x07,
++			       0x08, 0x09, 0x0A, 0x0B, 0x0C, 0x0E, 0x0F, 0x11,
++			       0x12, 0x14, 0x15, 0x17, 0x19, 0x1B, 0x1D, 0x1F,};
++
++static u8 gamma_low[32] = {0x00, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01,
++			   0x01, 0x01, 0x01, 0x01, 0x01, 0x02, 0x02, 0x02,
++			   0x03, 0x03, 0x03, 0x04, 0x04, 0x05, 0x05, 0x06,
++			   0x06, 0x07, 0x07, 0x08, 0x08, 0x09, 0x0A, 0x0A,};
++
++static u8 gamma_user[32];
++
++static struct rpisense_fb_param rpisense_fb_param = {
++	.vmem = NULL,
++	.vmemsize = 128,
++	.gamma = gamma_default,
++};
++
++static struct fb_deferred_io rpisense_fb_defio;
++
++static struct fb_fix_screeninfo rpisense_fb_fix = {
++	.id =		"RPi-Sense FB",
++	.type =		FB_TYPE_PACKED_PIXELS,
++	.visual =	FB_VISUAL_TRUECOLOR,
++	.xpanstep =	0,
++	.ypanstep =	0,
++	.ywrapstep =	0,
++	.accel =	FB_ACCEL_NONE,
++	.line_length =	16,
++};
++
++static struct fb_var_screeninfo rpisense_fb_var = {
++	.xres		= 8,
++	.yres		= 8,
++	.xres_virtual	= 8,
++	.yres_virtual	= 8,
++	.bits_per_pixel = 16,
++	.red		= {11, 5, 0},
++	.green		= {5, 6, 0},
++	.blue		= {0, 5, 0},
++};
++
++static ssize_t rpisense_fb_write(struct fb_info *info,
++				 const char __user *buf, size_t count,
++				 loff_t *ppos)
++{
++	ssize_t res = fb_sys_write(info, buf, count, ppos);
++
++	schedule_delayed_work(&info->deferred_work, rpisense_fb_defio.delay);
++	return res;
++}
++
++static void rpisense_fb_fillrect(struct fb_info *info,
++				 const struct fb_fillrect *rect)
++{
++	sys_fillrect(info, rect);
++	schedule_delayed_work(&info->deferred_work, rpisense_fb_defio.delay);
++}
++
++static void rpisense_fb_copyarea(struct fb_info *info,
++				 const struct fb_copyarea *area)
++{
++	sys_copyarea(info, area);
++	schedule_delayed_work(&info->deferred_work, rpisense_fb_defio.delay);
++}
++
++static void rpisense_fb_imageblit(struct fb_info *info,
++				  const struct fb_image *image)
++{
++	sys_imageblit(info, image);
++	schedule_delayed_work(&info->deferred_work, rpisense_fb_defio.delay);
++}
++
++static void rpisense_fb_deferred_io(struct fb_info *info,
++				struct list_head *pagelist)
++{
++	int i;
++	int j;
++	u8 *vmem_work = rpisense_fb_param.vmem_work;
++	u16 *mem = (u16 *)rpisense_fb_param.vmem;
++	u8 *gamma = rpisense_fb_param.gamma;
++
++	vmem_work[0] = 0;
++	for (j = 0; j < 8; j++) {
++		for (i = 0; i < 8; i++) {
++			vmem_work[(j * 24) + i + 1] =
++				gamma[(mem[(j * 8) + i] >> 11) & 0x1F];
++			vmem_work[(j * 24) + (i + 8) + 1] =
++				gamma[(mem[(j * 8) + i] >> 6) & 0x1F];
++			vmem_work[(j * 24) + (i + 16) + 1] =
++				gamma[(mem[(j * 8) + i]) & 0x1F];
++		}
++	}
++	rpisense_block_write(rpisense, vmem_work, 193);
++}
++
++static struct fb_deferred_io rpisense_fb_defio = {
++	.delay		= HZ/100,
++	.deferred_io	= rpisense_fb_deferred_io,
++};
++
++static int rpisense_fb_ioctl(struct fb_info *info, unsigned int cmd,
++			     unsigned long arg)
++{
++	switch (cmd) {
++	case SENSEFB_FBIOGET_GAMMA:
++		if (copy_to_user((void __user *) arg, rpisense_fb_param.gamma,
++				 sizeof(u8[32])))
++			return -EFAULT;
++		return 0;
++	case SENSEFB_FBIOSET_GAMMA:
++		if (copy_from_user(gamma_user, (void __user *)arg,
++				   sizeof(u8[32])))
++			return -EFAULT;
++		rpisense_fb_param.gamma = gamma_user;
++		schedule_delayed_work(&info->deferred_work,
++				      rpisense_fb_defio.delay);
++		return 0;
++	case SENSEFB_FBIORESET_GAMMA:
++		switch (arg) {
++		case 0:
++			rpisense_fb_param.gamma = gamma_default;
++			break;
++		case 1:
++			rpisense_fb_param.gamma = gamma_low;
++			break;
++		case 2:
++			rpisense_fb_param.gamma = gamma_user;
++			break;
++		default:
++			return -EINVAL;
++		}
++		schedule_delayed_work(&info->deferred_work,
++				      rpisense_fb_defio.delay);
++		break;
++	default:
++		return -EINVAL;
++	}
++	return 0;
++}
++
++static struct fb_ops rpisense_fb_ops = {
++	.owner		= THIS_MODULE,
++	.fb_read	= fb_sys_read,
++	.fb_write	= rpisense_fb_write,
++	.fb_fillrect	= rpisense_fb_fillrect,
++	.fb_copyarea	= rpisense_fb_copyarea,
++	.fb_imageblit	= rpisense_fb_imageblit,
++	.fb_ioctl	= rpisense_fb_ioctl,
++};
++
++static int rpisense_fb_probe(struct platform_device *pdev)
++{
++	struct fb_info *info;
++	int ret = -ENOMEM;
++	struct rpisense_fb *rpisense_fb;
++
++	rpisense = rpisense_get_dev();
++	rpisense_fb = &rpisense->framebuffer;
++
++	rpisense_fb_param.vmem = vzalloc(rpisense_fb_param.vmemsize);
++	if (!rpisense_fb_param.vmem)
++		return ret;
++
++	rpisense_fb_param.vmem_work = devm_kmalloc(&pdev->dev, 193, GFP_KERNEL);
++	if (!rpisense_fb_param.vmem_work)
++		goto err_malloc;
++
++	info = framebuffer_alloc(0, &pdev->dev);
++	if (!info) {
++		dev_err(&pdev->dev, "Could not allocate framebuffer.\n");
++		goto err_malloc;
++	}
++	rpisense_fb->info = info;
++
++	rpisense_fb_fix.smem_start = (unsigned long)rpisense_fb_param.vmem;
++	rpisense_fb_fix.smem_len = rpisense_fb_param.vmemsize;
++
++	info->fbops = &rpisense_fb_ops;
++	info->fix = rpisense_fb_fix;
++	info->var = rpisense_fb_var;
++	info->fbdefio = &rpisense_fb_defio;
++	info->flags = FBINFO_FLAG_DEFAULT | FBINFO_VIRTFB;
++	info->screen_base = rpisense_fb_param.vmem;
++	info->screen_size = rpisense_fb_param.vmemsize;
++
++	if (lowlight)
++		rpisense_fb_param.gamma = gamma_low;
++
++	fb_deferred_io_init(info);
++
++	ret = register_framebuffer(info);
++	if (ret < 0) {
++		dev_err(&pdev->dev, "Could not register framebuffer.\n");
++		goto err_fballoc;
++	}
++
++	fb_info(info, "%s frame buffer device\n", info->fix.id);
++	schedule_delayed_work(&info->deferred_work, rpisense_fb_defio.delay);
++	return 0;
++err_fballoc:
++	framebuffer_release(info);
++err_malloc:
++	vfree(rpisense_fb_param.vmem);
++	return ret;
++}
++
++static int rpisense_fb_remove(struct platform_device *pdev)
++{
++	struct rpisense_fb *rpisense_fb = &rpisense->framebuffer;
++	struct fb_info *info = rpisense_fb->info;
++
++	if (info) {
++		unregister_framebuffer(info);
++		fb_deferred_io_cleanup(info);
++		framebuffer_release(info);
++		vfree(rpisense_fb_param.vmem);
++	}
++
++	return 0;
++}
++
++#ifdef CONFIG_OF
++static const struct of_device_id rpisense_fb_id[] = {
++	{ .compatible = "rpi,rpi-sense-fb" },
++	{ },
++};
++MODULE_DEVICE_TABLE(of, rpisense_fb_id);
++#endif
++
++static struct platform_device_id rpisense_fb_device_id[] = {
++	{ .name = "rpi-sense-fb" },
++	{ },
++};
++MODULE_DEVICE_TABLE(platform, rpisense_fb_device_id);
++
++static struct platform_driver rpisense_fb_driver = {
++	.probe = rpisense_fb_probe,
++	.remove = rpisense_fb_remove,
++	.driver = {
++		.name = "rpi-sense-fb",
++		.owner = THIS_MODULE,
++	},
++};
++
++module_platform_driver(rpisense_fb_driver);
++
++MODULE_DESCRIPTION("Raspberry Pi Sense HAT framebuffer driver");
++MODULE_AUTHOR("Serge Schneider <serge at raspberrypi.org>");
++MODULE_LICENSE("GPL");
++
+--- /dev/null
++++ b/include/linux/mfd/rpisense/core.h
+@@ -0,0 +1,47 @@
++/*
++ * Raspberry Pi Sense HAT core driver
++ * http://raspberrypi.org
++ *
++ * Copyright (C) 2015 Raspberry Pi
++ *
++ * Author: Serge Schneider
++ *
++ *  This program is free software; you can redistribute  it and/or modify it
++ *  under  the terms of  the GNU General  Public License as published by the
++ *  Free Software Foundation;  either version 2 of the  License, or (at your
++ *  option) any later version.
++ *
++ */
++
++#ifndef __LINUX_MFD_RPISENSE_CORE_H_
++#define __LINUX_MFD_RPISENSE_CORE_H_
++
++#include <linux/mfd/rpisense/joystick.h>
++#include <linux/mfd/rpisense/framebuffer.h>
++
++/*
++ * Register values.
++ */
++#define RPISENSE_FB			0x00
++#define RPISENSE_WAI			0xF0
++#define RPISENSE_VER			0xF1
++#define RPISENSE_KEYS			0xF2
++#define RPISENSE_EE_WP			0xF3
++
++#define RPISENSE_ID			's'
++
++struct rpisense {
++	struct device *dev;
++	struct i2c_client *i2c_client;
++
++	/* Client devices */
++	struct rpisense_js joystick;
++	struct rpisense_fb framebuffer;
++};
++
++struct rpisense *rpisense_get_dev(void);
++s32 rpisense_reg_read(struct rpisense *rpisense, int reg);
++int rpisense_reg_write(struct rpisense *rpisense, int reg, u16 val);
++int rpisense_block_write(struct rpisense *rpisense, const char *buf, int count);
++
++#endif
+--- /dev/null
++++ b/include/linux/mfd/rpisense/framebuffer.h
+@@ -0,0 +1,32 @@
++/*
++ * Raspberry Pi Sense HAT framebuffer driver
++ * http://raspberrypi.org
++ *
++ * Copyright (C) 2015 Raspberry Pi
++ *
++ * Author: Serge Schneider
++ *
++ *  This program is free software; you can redistribute  it and/or modify it
++ *  under  the terms of  the GNU General  Public License as published by the
++ *  Free Software Foundation;  either version 2 of the  License, or (at your
++ *  option) any later version.
++ *
++ */
++
++#ifndef __LINUX_RPISENSE_FB_H_
++#define __LINUX_RPISENSE_FB_H_
++
++#define SENSEFB_FBIO_IOC_MAGIC 0xF1
++
++#define SENSEFB_FBIOGET_GAMMA _IO(SENSEFB_FBIO_IOC_MAGIC, 0)
++#define SENSEFB_FBIOSET_GAMMA _IO(SENSEFB_FBIO_IOC_MAGIC, 1)
++#define SENSEFB_FBIORESET_GAMMA _IO(SENSEFB_FBIO_IOC_MAGIC, 2)
++
++struct rpisense;
++
++struct rpisense_fb {
++	struct platform_device *pdev;
++	struct fb_info *info;
++};
++
++#endif
+--- /dev/null
++++ b/include/linux/mfd/rpisense/joystick.h
+@@ -0,0 +1,35 @@
++/*
++ * Raspberry Pi Sense HAT joystick driver
++ * http://raspberrypi.org
++ *
++ * Copyright (C) 2015 Raspberry Pi
++ *
++ * Author: Serge Schneider
++ *
++ *  This program is free software; you can redistribute  it and/or modify it
++ *  under  the terms of  the GNU General  Public License as published by the
++ *  Free Software Foundation;  either version 2 of the  License, or (at your
++ *  option) any later version.
++ *
++ */
++
++#ifndef __LINUX_RPISENSE_JOYSTICK_H_
++#define __LINUX_RPISENSE_JOYSTICK_H_
++
++#include <linux/input.h>
++#include <linux/interrupt.h>
++#include <linux/gpio/consumer.h>
++#include <linux/platform_device.h>
++
++struct rpisense;
++
++struct rpisense_js {
++	struct platform_device *pdev;
++	struct input_dev *keys_dev;
++	struct gpio_desc *keys_desc;
++	struct work_struct keys_work_s;
++	int keys_irq;
++};
++
++
++#endif
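
The core header above is the whole contract between the MFD core and its client drivers: a client looks up the shared rpisense state and then uses the register helpers. A minimal sketch of that flow, using only what core.h declares (the RPISENSE_* constants and helper names come from the header above; the wrapper function and the NULL-return assumption are hypothetical):

/* Hypothetical client check against the Sense HAT identity register,
 * using only the API declared in include/linux/mfd/rpisense/core.h.
 */
#include <linux/errno.h>
#include <linux/mfd/rpisense/core.h>

static int rpisense_client_check(void)
{
	struct rpisense *rpisense = rpisense_get_dev();
	s32 id;

	if (!rpisense)
		return -EPROBE_DEFER;	/* assume NULL means the core is not ready */

	id = rpisense_reg_read(rpisense, RPISENSE_WAI);
	if (id < 0)
		return id;		/* I2C error */

	return (id == RPISENSE_ID) ? 0 : -ENODEV;
}
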
diff --git a/target/linux/brcm2708/patches-4.4/0085-RaspiDAC3-support.patch b/target/linux/brcm2708/patches-4.4/0085-RaspiDAC3-support.patch
new file mode 100644
index 0000000..fc3c488
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0085-RaspiDAC3-support.patch
@@ -0,0 +1,243 @@
+From c0121c68fb1b9242907546191008deb49123729a Mon Sep 17 00:00:00 2001
+From: Jan Grulich <jan at grulich.eu>
+Date: Mon, 24 Aug 2015 16:03:47 +0100
+Subject: [PATCH 085/127] RaspiDAC3 support
+
+Signed-off-by: Jan Grulich <jan at grulich.eu>
+
+config: fix RaspiDAC Rev.3x dependencies
+
+Change depends to SND_BCM2708_SOC_I2S || SND_BCM2835_SOC_I2S
+like the other I2S soundcard drivers.
+
+Signed-off-by: Matthias Reichl <hias at horus.com>
+---
+ sound/soc/bcm/Kconfig     |   8 ++
+ sound/soc/bcm/Makefile    |   2 +
+ sound/soc/bcm/raspidac3.c | 191 ++++++++++++++++++++++++++++++++++++++++++++++
+ 3 files changed, 201 insertions(+)
+ create mode 100644 sound/soc/bcm/raspidac3.c
+
+--- a/sound/soc/bcm/Kconfig
++++ b/sound/soc/bcm/Kconfig
+@@ -56,3 +56,11 @@ config SND_BCM2708_SOC_IQAUDIO_DAC
+ 	select SND_SOC_PCM512x_I2C
+ 	help
+ 	  Say Y or M if you want to add support for IQaudIO-DAC.
++
++config SND_BCM2708_SOC_RASPIDAC3
++	tristate "Support for RaspiDAC Rev.3x"
++	depends on SND_BCM2708_SOC_I2S || SND_BCM2835_SOC_I2S
++	select SND_SOC_PCM512x_I2C
++	select SND_SOC_TPA6130A2
++	help
++	  Say Y or M if you want to add support for RaspiDAC Rev.3x.
+--- a/sound/soc/bcm/Makefile
++++ b/sound/soc/bcm/Makefile
+@@ -11,6 +11,7 @@ snd-soc-hifiberry-amp-objs := hifiberry_
+ snd-soc-rpi-dac-objs := rpi-dac.o
+ snd-soc-rpi-proto-objs := rpi-proto.o
+ snd-soc-iqaudio-dac-objs := iqaudio-dac.o
++snd-soc-raspidac3-objs := raspidac3.o
+ 
+ obj-$(CONFIG_SND_BCM2708_SOC_HIFIBERRY_DAC) += snd-soc-hifiberry-dac.o
+ obj-$(CONFIG_SND_BCM2708_SOC_HIFIBERRY_DACPLUS) += snd-soc-hifiberry-dacplus.o
+@@ -19,3 +20,4 @@ obj-$(CONFIG_SND_BCM2708_SOC_HIFIBERRY_A
+ obj-$(CONFIG_SND_BCM2708_SOC_RPI_DAC) += snd-soc-rpi-dac.o
+ obj-$(CONFIG_SND_BCM2708_SOC_RPI_PROTO) += snd-soc-rpi-proto.o
+ obj-$(CONFIG_SND_BCM2708_SOC_IQAUDIO_DAC) += snd-soc-iqaudio-dac.o
++obj-$(CONFIG_SND_BCM2708_SOC_RASPIDAC3) += snd-soc-raspidac3.o
+--- /dev/null
++++ b/sound/soc/bcm/raspidac3.c
+@@ -0,0 +1,191 @@
++/*
++ * ASoC Driver for RaspiDAC v3
++ *
++ * Author:	Jan Grulich <jan at grulich.eu>
++ *		Copyright 2015
++ *              based on code by Daniel Matuschek <daniel at hifiberry.com>
++ *		based on code by Florian Meier <florian.meier at koalo.de>
++ *
++ * This program is free software; you can redistribute it and/or
++ * modify it under the terms of the GNU General Public License
++ * version 2 as published by the Free Software Foundation.
++ *
++ * This program is distributed in the hope that it will be useful, but
++ * WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
++ * General Public License for more details.
++ */
++
++#include <linux/module.h>
++#include <linux/platform_device.h>
++
++#include <sound/core.h>
++#include <sound/pcm.h>
++#include <sound/pcm_params.h>
++#include <sound/soc.h>
++#include <sound/jack.h>
++#include <sound/soc-dapm.h>
++
++#include "../codecs/pcm512x.h"
++#include "../codecs/tpa6130a2.h"
++
++/* sound card init */
++static int snd_rpi_raspidac3_init(struct snd_soc_pcm_runtime *rtd)
++{
++	int ret;
++	struct snd_soc_card *card = rtd->card;
++	struct snd_soc_codec *codec = rtd->codec;
++	snd_soc_update_bits(codec, PCM512x_GPIO_EN, 0x08, 0x08);
++	snd_soc_update_bits(codec, PCM512x_GPIO_OUTPUT_4, 0xf, 0x02);
++	snd_soc_update_bits(codec, PCM512x_GPIO_CONTROL_1, 0x08,0x00);
++
++	ret = snd_soc_limit_volume(card, "Digital Playback Volume", 207);
++	if (ret < 0)
++		dev_warn(card->dev, "Failed to set volume limit: %d\n", ret);
++	else {
++		struct snd_kcontrol *kctl;
++
++		ret = tpa6130a2_add_controls(codec);
++		if (ret < 0)
++			dev_warn(card->dev, "Failed to add TPA6130A2 controls: %d\n",
++				 ret);
++		ret = snd_soc_limit_volume(card,
++					   "TPA6130A2 Headphone Playback Volume",
++					   54);
++		if (ret < 0)
++			dev_warn(card->dev, "Failed to set TPA6130A2 volume limit: %d\n",
++				 ret);
++		kctl = snd_soc_card_get_kcontrol(card,
++						 "TPA6130A2 Headphone Playback Volume");
++		if (kctl) {
++			strcpy(kctl->id.name, "Headphones Playback Volume");
++			/* disable the volume dB scale so alsamixer works */
++			kctl->vd[0].access = SNDRV_CTL_ELEM_ACCESS_READWRITE;
++		}
++
++		kctl = snd_soc_card_get_kcontrol(card,
++						 "TPA6130A2 Headphone Playback Switch");
++		if (kctl)
++			strcpy(kctl->id.name, "Headphones Playback Switch");
++	}
++
++	return 0;
++}
++
++/* set hw parameters */
++static int snd_rpi_raspidac3_hw_params(struct snd_pcm_substream *substream,
++				       struct snd_pcm_hw_params *params)
++{
++	struct snd_soc_pcm_runtime *rtd = substream->private_data;
++	struct snd_soc_dai *cpu_dai = rtd->cpu_dai;
++
++	unsigned int sample_bits =
++		snd_pcm_format_physical_width(params_format(params));
++
++	return snd_soc_dai_set_bclk_ratio(cpu_dai, sample_bits * 2);
++}
++
++/* startup */
++static int snd_rpi_raspidac3_startup(struct snd_pcm_substream *substream) {
++	struct snd_soc_pcm_runtime *rtd = substream->private_data;
++	struct snd_soc_codec *codec = rtd->codec;
++	snd_soc_update_bits(codec, PCM512x_GPIO_CONTROL_1, 0x08,0x08);
++	tpa6130a2_stereo_enable(codec, 1);
++	return 0;
++}
++
++/* shutdown */
++static void snd_rpi_raspidac3_shutdown(struct snd_pcm_substream *substream) {
++	struct snd_soc_pcm_runtime *rtd = substream->private_data;
++	struct snd_soc_codec *codec = rtd->codec;
++	snd_soc_update_bits(codec, PCM512x_GPIO_CONTROL_1, 0x08,0x00);
++	tpa6130a2_stereo_enable(codec, 0);
++}
++
++/* machine stream operations */
++static struct snd_soc_ops snd_rpi_raspidac3_ops = {
++	.hw_params = snd_rpi_raspidac3_hw_params,
++	.startup = snd_rpi_raspidac3_startup,
++	.shutdown = snd_rpi_raspidac3_shutdown,
++};
++
++/* interface setup */
++static struct snd_soc_dai_link snd_rpi_raspidac3_dai[] = {
++{
++	.name		= "RaspiDAC Rev.3x",
++	.stream_name	= "RaspiDAC HiFi",
++	.cpu_dai_name	= "bcm2708-i2s.0",
++	.codec_dai_name	= "pcm512x-hifi",
++	.platform_name	= "bcm2708-i2s.0",
++	.codec_name	= "pcm512x.1-004c",
++	.dai_fmt	= SND_SOC_DAIFMT_I2S | SND_SOC_DAIFMT_NB_NF |
++				SND_SOC_DAIFMT_CBS_CFS,
++	.ops		= &snd_rpi_raspidac3_ops,
++	.init		= snd_rpi_raspidac3_init,
++},
++};
++
++/* audio machine driver */
++static struct snd_soc_card snd_rpi_raspidac3 = {
++	.name         = "RaspiDAC Rev.3x HiFi Audio Card",
++	.dai_link     = snd_rpi_raspidac3_dai,
++	.num_links    = ARRAY_SIZE(snd_rpi_raspidac3_dai),
++};
++
++/* sound card test */
++static int snd_rpi_raspidac3_probe(struct platform_device *pdev)
++{
++	int ret = 0;
++
++	snd_rpi_raspidac3.dev = &pdev->dev;
++
++	if (pdev->dev.of_node) {
++	    struct device_node *i2s_node;
++	    struct snd_soc_dai_link *dai = &snd_rpi_raspidac3_dai[0];
++	    i2s_node = of_parse_phandle(pdev->dev.of_node,
++					"i2s-controller", 0);
++
++	    if (i2s_node) {
++		dai->cpu_dai_name = NULL;
++		dai->cpu_of_node = i2s_node;
++		dai->platform_name = NULL;
++		dai->platform_of_node = i2s_node;
++	    }
++	}
++
++	ret = snd_soc_register_card(&snd_rpi_raspidac3);
++	if (ret)
++		dev_err(&pdev->dev,
++			"snd_soc_register_card() failed: %d\n", ret);
++
++	return ret;
++}
++
++/* sound card disconnect */
++static int snd_rpi_raspidac3_remove(struct platform_device *pdev)
++{
++	return snd_soc_unregister_card(&snd_rpi_raspidac3);
++}
++
++static const struct of_device_id raspidac3_of_match[] = {
++	{ .compatible = "jg,raspidacv3", },
++	{},
++};
++MODULE_DEVICE_TABLE(of, raspidac3_of_match);
++
++/* sound card platform driver */
++static struct platform_driver snd_rpi_raspidac3_driver = {
++	.driver = {
++		.name   = "snd-rpi-raspidac3",
++		.owner  = THIS_MODULE,
++		.of_match_table = raspidac3_of_match,
++	},
++	.probe          = snd_rpi_raspidac3_probe,
++	.remove         = snd_rpi_raspidac3_remove,
++};
++
++module_platform_driver(snd_rpi_raspidac3_driver);
++
++MODULE_AUTHOR("Jan Grulich <jan at grulich.eu>");
++MODULE_DESCRIPTION("ASoC Driver for RaspiDAC Rev.3x");
++MODULE_LICENSE("GPL v2");
diff --git a/target/linux/brcm2708/patches-4.4/0086-tpa6130a2-Add-headphone-switch-control.patch b/target/linux/brcm2708/patches-4.4/0086-tpa6130a2-Add-headphone-switch-control.patch
new file mode 100644
index 0000000..5cf76f0
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0086-tpa6130a2-Add-headphone-switch-control.patch
@@ -0,0 +1,91 @@
+From a7c37b3c36ccec8a089d576c55ad9ba1721cb0ef Mon Sep 17 00:00:00 2001
+From: Jan Grulich <jan at grulich.eu>
+Date: Mon, 24 Aug 2015 16:02:34 +0100
+Subject: [PATCH 086/127] tpa6130a2: Add headphone switch control
+
+Signed-off-by: Jan Grulich <jan at grulich.eu>
+---
+ sound/soc/codecs/tpa6130a2.c | 29 ++++++++++++++++++++++++++---
+ 1 file changed, 26 insertions(+), 3 deletions(-)
+
+--- a/sound/soc/codecs/tpa6130a2.c
++++ b/sound/soc/codecs/tpa6130a2.c
+@@ -4,6 +4,7 @@
+  * Copyright (C) Nokia Corporation
+  *
+  * Author: Peter Ujfalusi <peter.ujfalusi at ti.com>
++ * Modified: Jan Grulich <jan at grulich.eu>
+  *
+  * This program is free software; you can redistribute it and/or
+  * modify it under the terms of the GNU General Public License
+@@ -52,6 +53,8 @@ struct tpa6130a2_data {
+ 	enum tpa_model id;
+ };
+ 
++static void tpa6130a2_channel_enable(u8 channel, int enable);
++
+ static int tpa6130a2_i2c_read(int reg)
+ {
+ 	struct tpa6130a2_data *data;
+@@ -189,7 +192,7 @@ exit:
+ }
+ 
+ static int tpa6130a2_get_volsw(struct snd_kcontrol *kcontrol,
+-		struct snd_ctl_elem_value *ucontrol)
++			       struct snd_ctl_elem_value *ucontrol)
+ {
+ 	struct soc_mixer_control *mc =
+ 		(struct soc_mixer_control *)kcontrol->private_value;
+@@ -218,7 +221,7 @@ static int tpa6130a2_get_volsw(struct sn
+ }
+ 
+ static int tpa6130a2_put_volsw(struct snd_kcontrol *kcontrol,
+-		struct snd_ctl_elem_value *ucontrol)
++			       struct snd_ctl_elem_value *ucontrol)
+ {
+ 	struct soc_mixer_control *mc =
+ 		(struct soc_mixer_control *)kcontrol->private_value;
+@@ -255,8 +258,22 @@ static int tpa6130a2_put_volsw(struct sn
+ 	return 1;
+ }
+ 
++static int tpa6130a2_put_hp_sw(struct snd_kcontrol *kcontrol,
++			       struct snd_ctl_elem_value *ucontrol)
++{
++	int enable = ucontrol->value.integer.value[0];
++	unsigned int state;
++
++	state = (tpa6130a2_read(TPA6130A2_REG_VOL_MUTE) & 0x80) == 0;
++	if (state == enable)
++		return 0; /* No change */
++
++	tpa6130a2_channel_enable(TPA6130A2_HP_EN_R | TPA6130A2_HP_EN_L, enable);
++	return 1; /* Changed */
++}
++
+ /*
+- * TPA6130 volume. From -59.5 to 4 dB with increasing step size when going
++ * TPA6130 volume. From -59.5 to +4.0 dB with increasing step size when going
+  * down in gain.
+  */
+ static const DECLARE_TLV_DB_RANGE(tpa6130_tlv,
+@@ -277,6 +294,9 @@ static const struct snd_kcontrol_new tpa
+ 		       TPA6130A2_REG_VOL_MUTE, 0, 0x3f, 0,
+ 		       tpa6130a2_get_volsw, tpa6130a2_put_volsw,
+ 		       tpa6130_tlv),
++	SOC_SINGLE_EXT("TPA6130A2 Headphone Playback Switch",
++		       TPA6130A2_REG_VOL_MUTE, 7, 1, 1,
++		       tpa6130a2_get_volsw, tpa6130a2_put_hp_sw),
+ };
+ 
+ static const DECLARE_TLV_DB_RANGE(tpa6140_tlv,
+@@ -290,6 +310,9 @@ static const struct snd_kcontrol_new tpa
+ 		       TPA6130A2_REG_VOL_MUTE, 1, 0x1f, 0,
+ 		       tpa6130a2_get_volsw, tpa6130a2_put_volsw,
+ 		       tpa6140_tlv),
++	SOC_SINGLE_EXT("TPA6140A2 Headphone Playback Switch",
++		       TPA6130A2_REG_VOL_MUTE, 7, 1, 1,
++		       tpa6130a2_get_volsw, tpa6130a2_put_hp_sw),
+ };
+ 
+ /*
diff --git a/target/linux/brcm2708/patches-4.4/0087-irq-bcm2835-Fix-building-with-2708.patch b/target/linux/brcm2708/patches-4.4/0087-irq-bcm2835-Fix-building-with-2708.patch
new file mode 100644
index 0000000..523dca0
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0087-irq-bcm2835-Fix-building-with-2708.patch
@@ -0,0 +1,28 @@
+From d4df36c7243c7fcf016278309a7fcf1e07bf8b7f Mon Sep 17 00:00:00 2001
+From: popcornmix <popcornmix at gmail.com>
+Date: Mon, 28 Sep 2015 23:38:59 +0100
+Subject: [PATCH 087/127] irq-bcm2835: Fix building with 2708
+
+---
+ drivers/irqchip/irq-bcm2835.c | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+--- a/drivers/irqchip/irq-bcm2835.c
++++ b/drivers/irqchip/irq-bcm2835.c
+@@ -82,6 +82,7 @@
+ #define NR_BANKS		3
+ #define IRQS_PER_BANK		32
+ #define NUMBER_IRQS		MAKE_HWIRQ(NR_BANKS, 0)
++#undef FIQ_START
+ #define FIQ_START		(NR_IRQS_BANK0 + MAKE_HWIRQ(NR_BANKS - 1, 0))
+ 
+ static const int reg_pending[] __initconst = { 0x00, 0x04, 0x08 };
+@@ -256,7 +257,7 @@ static int __init armctrl_of_init(struct
+ 					MAKE_HWIRQ(b, i) + NUMBER_IRQS);
+ 			BUG_ON(irq <= 0);
+ 			irq_set_chip(irq, &armctrl_chip);
+-			set_irq_flags(irq, IRQF_VALID | IRQF_PROBE);
++			irq_set_probe(irq);
+ 		}
+ 	}
+ 	init_FIQ(FIQ_START);
diff --git a/target/linux/brcm2708/patches-4.4/0088-rpi_display-add-backlight-driver-and-overlay.patch b/target/linux/brcm2708/patches-4.4/0088-rpi_display-add-backlight-driver-and-overlay.patch
new file mode 100644
index 0000000..cb88086
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0088-rpi_display-add-backlight-driver-and-overlay.patch
@@ -0,0 +1,250 @@
+From 356081aede8e5bd1b9503c4510e1d4fc0d85b026 Mon Sep 17 00:00:00 2001
+From: P33M <P33M at github.com>
+Date: Wed, 21 Oct 2015 14:55:21 +0100
+Subject: [PATCH 088/127] rpi_display: add backlight driver and overlay
+
+Add a mailbox-driven backlight controller for the Raspberry Pi DSI
+touchscreen display. Requires updated GPU firmware to recognise the
+mailbox request.
+
+Signed-off-by: Gordon Hollingworth <gordon at raspberrypi.org>
+---
+ arch/arm/boot/dts/overlays/Makefile                |   1 +
+ arch/arm/boot/dts/overlays/README                  |   6 ++
+ .../boot/dts/overlays/rpi-backlight-overlay.dts    |  21 ++++
+ arch/arm/configs/bcm2709_defconfig                 |   1 +
+ arch/arm/configs/bcmrpi_defconfig                  |   1 +
+ drivers/video/backlight/Kconfig                    |   6 ++
+ drivers/video/backlight/Makefile                   |   1 +
+ drivers/video/backlight/rpi_backlight.c            | 119 +++++++++++++++++++++
+ include/soc/bcm2835/raspberrypi-firmware.h         |   1 +
+ 9 files changed, 157 insertions(+)
+ create mode 100644 arch/arm/boot/dts/overlays/rpi-backlight-overlay.dts
+ create mode 100644 drivers/video/backlight/rpi_backlight.c
+
+--- a/arch/arm/boot/dts/overlays/Makefile
++++ b/arch/arm/boot/dts/overlays/Makefile
+@@ -38,6 +38,7 @@ dtb-$(RPI_DT_OVERLAYS) += pps-gpio-overl
+ dtb-$(RPI_DT_OVERLAYS) += pwm-overlay.dtb
+ dtb-$(RPI_DT_OVERLAYS) += pwm-2chan-overlay.dtb
+ dtb-$(RPI_DT_OVERLAYS) += raspidac3-overlay.dtb
++dtb-$(RPI_DT_OVERLAYS) += rpi-backlight-overlay.dtb
+ dtb-$(RPI_DT_OVERLAYS) += rpi-dac-overlay.dtb
+ dtb-$(RPI_DT_OVERLAYS) += rpi-display-overlay.dtb
+ dtb-$(RPI_DT_OVERLAYS) += rpi-ft5406-overlay.dtb
+--- a/arch/arm/boot/dts/overlays/README
++++ b/arch/arm/boot/dts/overlays/README
+@@ -462,6 +462,12 @@ Load:   dtoverlay=raspidac3
+ Params: <None>
+ 
+ 
++Name:   rpi-backlight
++Info:   Raspberry Pi official display backlight driver
++Load:   dtoverlay=rpi-backlight
++Params: <None>
++
++
+ Name:   rpi-dac
+ Info:   Configures the RPi DAC audio card
+ Load:   dtoverlay=rpi-dac
+--- /dev/null
++++ b/arch/arm/boot/dts/overlays/rpi-backlight-overlay.dts
+@@ -0,0 +1,21 @@
++/*
++ * Devicetree overlay for mailbox-driven Raspberry Pi DSI Display
++ * backlight controller
++ */
++/dts-v1/;
++/plugin/;
++
++/ {
++	compatible = "brcm,bcm2708";
++
++	fragment at 0 {
++		target-path = "/";
++		__overlay__ {
++			rpi_backlight: rpi_backlight {
++				compatible = "raspberrypi,rpi-backlight";
++				firmware = <&firmware>;
++				status = "okay";
++			};
++		};
++	};
++};
+--- a/arch/arm/configs/bcm2709_defconfig
++++ b/arch/arm/configs/bcm2709_defconfig
+@@ -808,6 +808,7 @@ CONFIG_FB_UDL=m
+ CONFIG_FB_SSD1307=m
+ CONFIG_FB_RPISENSE=m
+ # CONFIG_BACKLIGHT_GENERIC is not set
++CONFIG_BACKLIGHT_RPI=m
+ CONFIG_BACKLIGHT_GPIO=m
+ CONFIG_FRAMEBUFFER_CONSOLE=y
+ CONFIG_LOGO=y
+--- a/arch/arm/configs/bcmrpi_defconfig
++++ b/arch/arm/configs/bcmrpi_defconfig
+@@ -801,6 +801,7 @@ CONFIG_FB_UDL=m
+ CONFIG_FB_SSD1307=m
+ CONFIG_FB_RPISENSE=m
+ # CONFIG_BACKLIGHT_GENERIC is not set
++CONFIG_BACKLIGHT_RPI=m
+ CONFIG_BACKLIGHT_GPIO=m
+ CONFIG_FRAMEBUFFER_CONSOLE=y
+ CONFIG_LOGO=y
+--- a/drivers/video/backlight/Kconfig
++++ b/drivers/video/backlight/Kconfig
+@@ -265,6 +265,12 @@ config BACKLIGHT_PWM
+ 	  If you have a LCD backlight adjustable by PWM, say Y to enable
+ 	  this driver.
+ 
++config BACKLIGHT_RPI
++	tristate "Raspberry Pi display firmware driven backlight"
++	help
++	  If you have the Raspberry Pi DSI touchscreen display, say Y to
++	  enable the mailbox-controlled backlight driver.
++
+ config BACKLIGHT_DA903X
+ 	tristate "Backlight Driver for DA9030/DA9034 using WLED"
+ 	depends on PMIC_DA903X
+--- a/drivers/video/backlight/Makefile
++++ b/drivers/video/backlight/Makefile
+@@ -50,6 +50,7 @@ obj-$(CONFIG_BACKLIGHT_PANDORA)		+= pand
+ obj-$(CONFIG_BACKLIGHT_PCF50633)	+= pcf50633-backlight.o
+ obj-$(CONFIG_BACKLIGHT_PM8941_WLED)	+= pm8941-wled.o
+ obj-$(CONFIG_BACKLIGHT_PWM)		+= pwm_bl.o
++obj-$(CONFIG_BACKLIGHT_RPI)			+= rpi_backlight.o
+ obj-$(CONFIG_BACKLIGHT_SAHARA)		+= kb3886_bl.o
+ obj-$(CONFIG_BACKLIGHT_SKY81452)	+= sky81452-backlight.o
+ obj-$(CONFIG_BACKLIGHT_TOSA)		+= tosa_bl.o
+--- /dev/null
++++ b/drivers/video/backlight/rpi_backlight.c
+@@ -0,0 +1,119 @@
++/*
++ * rpi_bl.c - Backlight controller through VPU
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License version 2 as
++ * published by the Free Software Foundation.
++ */
++
++#include <linux/backlight.h>
++#include <linux/err.h>
++#include <linux/fb.h>
++#include <linux/gpio.h>
++#include <linux/init.h>
++#include <linux/kernel.h>
++#include <linux/module.h>
++#include <linux/of.h>
++#include <linux/of_gpio.h>
++#include <linux/platform_device.h>
++#include <linux/slab.h>
++#include <soc/bcm2835/raspberrypi-firmware.h>
++
++struct rpi_backlight {
++	struct device *dev;
++	struct device *fbdev;
++	struct rpi_firmware *fw;
++};
++
++static int rpi_backlight_update_status(struct backlight_device *bl)
++{
++	struct rpi_backlight *gbl = bl_get_data(bl);
++	int brightness = bl->props.brightness;
++	int ret;
++
++	if (bl->props.power != FB_BLANK_UNBLANK ||
++	    bl->props.fb_blank != FB_BLANK_UNBLANK ||
++	    bl->props.state & (BL_CORE_SUSPENDED | BL_CORE_FBBLANK))
++		brightness = 0;
++
++	ret = rpi_firmware_property(gbl->fw,
++			RPI_FIRMWARE_FRAMEBUFFER_SET_BACKLIGHT,
++			&brightness, sizeof(brightness));
++	if (ret) {
++		dev_err(gbl->dev, "Failed to set brightness\n");
++		return ret;
++	}
++
++	if (brightness < 0) {
++		dev_err(gbl->dev, "Backlight change failed\n");
++		return -EAGAIN;
++	}
++
++	return 0;
++}
++
++static const struct backlight_ops rpi_backlight_ops = {
++	.options	= BL_CORE_SUSPENDRESUME,
++	.update_status	= rpi_backlight_update_status,
++};
++
++static int rpi_backlight_probe(struct platform_device *pdev)
++{
++	struct backlight_properties props;
++	struct backlight_device *bl;
++	struct rpi_backlight *gbl;
++	struct device_node *fw_node;
++
++	gbl = devm_kzalloc(&pdev->dev, sizeof(*gbl), GFP_KERNEL);
++	if (gbl == NULL)
++		return -ENOMEM;
++
++	gbl->dev = &pdev->dev;
++
++	fw_node = of_parse_phandle(pdev->dev.of_node, "firmware", 0);
++	if (!fw_node) {
++		dev_err(&pdev->dev, "Missing firmware node\n");
++		return -ENOENT;
++	}
++
++	gbl->fw = rpi_firmware_get(fw_node);
++	if (!gbl->fw)
++		return -EPROBE_DEFER;
++
++	memset(&props, 0, sizeof(props));
++	props.type = BACKLIGHT_RAW;
++	props.max_brightness = 255;
++	bl = devm_backlight_device_register(&pdev->dev, dev_name(&pdev->dev),
++					&pdev->dev, gbl, &rpi_backlight_ops,
++					&props);
++	if (IS_ERR(bl)) {
++		dev_err(&pdev->dev, "failed to register backlight\n");
++		return PTR_ERR(bl);
++	}
++
++	bl->props.brightness = 255;
++	backlight_update_status(bl);
++
++	platform_set_drvdata(pdev, bl);
++	return 0;
++}
++
++static const struct of_device_id rpi_backlight_of_match[] = {
++	{ .compatible = "raspberrypi,rpi-backlight" },
++	{ /* sentinel */ }
++};
++MODULE_DEVICE_TABLE(of, rpi_backlight_of_match);
++
++static struct platform_driver rpi_backlight_driver = {
++	.driver		= {
++		.name		= "rpi-backlight",
++		.of_match_table = of_match_ptr(rpi_backlight_of_match),
++	},
++	.probe		= rpi_backlight_probe,
++};
++
++module_platform_driver(rpi_backlight_driver);
++
++MODULE_AUTHOR("Gordon Hollingworth <gordon at raspberrypi.org>");
++MODULE_DESCRIPTION("Raspberry Pi mailbox based Backlight Driver");
++MODULE_LICENSE("GPL");
+--- a/include/soc/bcm2835/raspberrypi-firmware.h
++++ b/include/soc/bcm2835/raspberrypi-firmware.h
+@@ -112,6 +112,7 @@ enum rpi_firmware_property_tag {
+ 	RPI_FIRMWARE_FRAMEBUFFER_SET_OVERSCAN =               0x0004800a,
+ 	RPI_FIRMWARE_FRAMEBUFFER_SET_PALETTE =                0x0004800b,
+ 	RPI_FIRMWARE_FRAMEBUFFER_SET_VSYNC =                  0x0004800e,
++	RPI_FIRMWARE_FRAMEBUFFER_SET_BACKLIGHT =              0x0004800f,
+ 
+ 	RPI_FIRMWARE_VCHIQ_INIT =                             0x00048010,
+ 
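
Since the driver registers through the generic backlight class, no new userspace interface is involved: brightness ends up under /sys/class/backlight/<name>/brightness, where <name> is the platform device name and may differ between firmware/DT setups. A small userspace sketch (the sysfs path is an assumption; check the target system):

/* Userspace sketch: set the DSI panel backlight to half brightness.
 * The node name below is assumed; list /sys/class/backlight/ on the
 * target to find the real one.
 */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/sys/class/backlight/rpi_backlight/brightness", "w");

	if (!f) {
		perror("brightness");
		return 1;
	}
	fprintf(f, "%d\n", 128);	/* 0..max_brightness, 255 in this driver */
	fclose(f);
	return 0;
}
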
diff --git a/target/linux/brcm2708/patches-4.4/0089-bcm2835-dma-Fix-up-convert-to-DMA-pool.patch b/target/linux/brcm2708/patches-4.4/0089-bcm2835-dma-Fix-up-convert-to-DMA-pool.patch
new file mode 100644
index 0000000..031f4c6
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0089-bcm2835-dma-Fix-up-convert-to-DMA-pool.patch
@@ -0,0 +1,85 @@
+From 83a58ceb78952a15b2a14716e2099cb548cd9081 Mon Sep 17 00:00:00 2001
+From: Matthias Reichl <hias at horus.com>
+Date: Mon, 16 Nov 2015 14:05:35 +0000
+Subject: [PATCH 089/127] bcm2835-dma: Fix up convert to DMA pool
+
+---
+ drivers/dma/bcm2835-dma.c | 36 ++++++++++++++++++++++++++----------
+ 1 file changed, 26 insertions(+), 10 deletions(-)
+
+--- a/drivers/dma/bcm2835-dma.c
++++ b/drivers/dma/bcm2835-dma.c
+@@ -488,6 +488,17 @@ static struct dma_async_tx_descriptor *b
+ 	c->cyclic = true;
+ 
+ 	return vchan_tx_prep(&c->vc, &d->vd, flags);
++error_cb:
++	i--;
++	for (; i >= 0; i--) {
++		struct bcm2835_cb_entry *cb_entry = &d->cb_list[i];
++
++		dma_pool_free(c->cb_pool, cb_entry->cb, cb_entry->paddr);
++	}
++
++	kfree(d->cb_list);
++	kfree(d);
++	return NULL;
+ }
+ 
+ static struct dma_async_tx_descriptor *
+@@ -534,6 +545,7 @@ bcm2835_dma_prep_slave_sg(struct dma_cha
+ 	if (!d)
+ 		return NULL;
+ 
++	d->c = c;
+ 	d->dir = direction;
+ 
+ 	if (c->ch >= 8) /* LITE channel */
+@@ -553,15 +565,21 @@ bcm2835_dma_prep_slave_sg(struct dma_cha
+ 		d->frames += len / max_size + 1;
+ 	}
+ 
+-	/* Allocate memory for control blocks */
+-	d->control_block_size = d->frames * sizeof(struct bcm2835_dma_cb);
+-	d->control_block_base = dma_zalloc_coherent(chan->device->dev,
+-			d->control_block_size, &d->control_block_base_phys,
+-			GFP_NOWAIT);
+-	if (!d->control_block_base) {
++	d->cb_list = kcalloc(d->frames, sizeof(*d->cb_list), GFP_KERNEL);
++	if (!d->cb_list) {
+ 		kfree(d);
+ 		return NULL;
+ 	}
++	/* Allocate memory for control blocks */
++	for (i = 0; i < d->frames; i++) {
++		struct bcm2835_cb_entry *cb_entry = &d->cb_list[i];
++
++		cb_entry->cb = dma_pool_zalloc(c->cb_pool, GFP_ATOMIC,
++						&cb_entry->paddr);
++
++		if (!cb_entry->cb)
++			goto error_cb;
++	}
+ 
+ 	/*
+ 	 * Iterate over all SG entries, create a control block
+@@ -578,7 +596,7 @@ bcm2835_dma_prep_slave_sg(struct dma_cha
+ 
+ 		for (j = 0; j < len; j += max_size) {
+ 			struct bcm2835_dma_cb *control_block =
+-				&d->control_block_base[i + split_cnt];
++				d->cb_list[i + split_cnt].cb;
+ 
+ 			/* Setup addresses */
+ 			if (d->dir == DMA_DEV_TO_MEM) {
+@@ -620,9 +638,7 @@ bcm2835_dma_prep_slave_sg(struct dma_cha
+ 			if (i < sg_len - 1 || len - j > max_size) {
+ 				/* Next block is the next frame. */
+ 				control_block->next =
+-					d->control_block_base_phys +
+-					sizeof(struct bcm2835_dma_cb) *
+-					(i + split_cnt + 1);
++					d->cb_list[i + split_cnt + 1].paddr;
+ 			} else {
+ 				/* Next block is empty. */
+ 				control_block->next = 0;
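
For context, the c->cb_pool used in the hunk above is an ordinary struct dma_pool; its creation is not part of this hunk, so the sketch below only illustrates the kernel dma_pool API the fix relies on, with placeholder names (BCM2835 control blocks are 32 bytes and must be 32-byte aligned):

/* Illustration of the dma_pool lifecycle behind the hunk above.
 * The pool name, size and alignment here are placeholders.
 */
#include <linux/device.h>
#include <linux/dmapool.h>
#include <linux/errno.h>
#include <linux/gfp.h>

static int cb_pool_example(struct device *dev)
{
	struct dma_pool *pool;
	dma_addr_t cb_phys;
	void *cb;

	pool = dma_pool_create("bcm2835-dma-cb", dev, 32, 32, 0);
	if (!pool)
		return -ENOMEM;

	cb = dma_pool_zalloc(pool, GFP_NOWAIT, &cb_phys);	/* one control block */
	if (!cb) {
		dma_pool_destroy(pool);
		return -ENOMEM;
	}

	/* ... fill the control block, chain the next one via cb_phys ... */

	dma_pool_free(pool, cb, cb_phys);
	dma_pool_destroy(pool);
	return 0;
}
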
diff --git a/target/linux/brcm2708/patches-4.4/0090-scripts-Multi-platform-support-for-mkknlimg-and-knli.patch b/target/linux/brcm2708/patches-4.4/0090-scripts-Multi-platform-support-for-mkknlimg-and-knli.patch
new file mode 100644
index 0000000..12f1290
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0090-scripts-Multi-platform-support-for-mkknlimg-and-knli.patch
@@ -0,0 +1,247 @@
+From 7740825fc5e8324d362566ecc61f95fbdc38370a Mon Sep 17 00:00:00 2001
+From: Phil Elwell <phil at raspberrypi.org>
+Date: Wed, 11 Nov 2015 11:38:59 +0000
+Subject: [PATCH 090/127] scripts: Multi-platform support for mkknlimg and
+ knlinfo
+
+The firmware uses tags in the kernel trailer to choose which dtb file
+to load. Current firmware loads bcm2835-*.dtb if the '283x' tag is true,
+otherwise it loads bcm270*.dtb. This scheme breaks if an image supports
+multiple platforms.
+
+This patch adds '270X' and '283X' tags to indicate support for RPi and
+upstream platforms, respectively. '283x' (note lower case 'x') is left
+for old firmware, and is only set if the image only supports upstream
+builds.
+---
+ scripts/knlinfo  |   2 +
+ scripts/mkknlimg | 136 +++++++++++++++++++++++++++++++------------------------
+ 2 files changed, 80 insertions(+), 58 deletions(-)
+
+--- a/scripts/knlinfo
++++ b/scripts/knlinfo
+@@ -18,6 +18,8 @@ my %atom_formats =
+ (
+     'DTOK' => \&format_bool,
+     'KVer' => \&format_string,
++    '270X' => \&format_bool,
++    '283X' => \&format_bool,
+     '283x' => \&format_bool,
+ );
+ 
+--- a/scripts/mkknlimg
++++ b/scripts/mkknlimg
+@@ -13,12 +13,20 @@ use strict;
+ use warnings;
+ use integer;
+ 
++use constant FLAG_PI   => 0x01;
++use constant FLAG_DTOK => 0x02;
++use constant FLAG_DDTK => 0x04;
++use constant FLAG_270X => 0x08;
++use constant FLAG_283X => 0x10;
++
+ my $trailer_magic = 'RPTL';
+ 
+ my $tmpfile1 = "/tmp/mkknlimg_$$.1";
+ my $tmpfile2 = "/tmp/mkknlimg_$$.2";
+ 
+ my $dtok = 0;
++my $ddtk = 0;
++my $is_270x = 0;
+ my $is_283x = 0;
+ 
+ while (@ARGV && ($ARGV[0] =~ /^-/))
+@@ -28,6 +36,14 @@ while (@ARGV && ($ARGV[0] =~ /^-/))
+     {
+ 	$dtok = 1;
+     }
++    elsif ($arg eq '--ddtk')
++    {
++	$ddtk = 1;
++    }
++    elsif ($arg eq '--270x')
++    {
++	$is_270x = 1;
++    }
+     elsif ($arg eq '--283x')
+     {
+ 	$is_283x = 1;
+@@ -50,30 +66,33 @@ if (! -r $kernel_file)
+     usage();
+ }
+ 
+-my @wanted_strings =
+-(
+-	'bcm2708_fb',
+-	'brcm,bcm2835-mmc',
+-	'brcm,bcm2835-sdhost',
+-	'brcm,bcm2708-pinctrl',
+-	'brcm,bcm2835-gpio',
+-	'brcm,bcm2835',
+-	'brcm,bcm2836'
+-);
++my $wanted_strings =
++{
++	'bcm2708_fb' => FLAG_PI,
++	'brcm,bcm2835-mmc' => FLAG_PI,
++	'brcm,bcm2835-sdhost' => FLAG_PI,
++	'brcm,bcm2708-pinctrl' => FLAG_PI | FLAG_DTOK,
++	'brcm,bcm2835-gpio' => FLAG_PI | FLAG_DTOK,
++	'brcm,bcm2708' => FLAG_PI | FLAG_DTOK | FLAG_270X,
++	'brcm,bcm2709' => FLAG_PI | FLAG_DTOK | FLAG_270X,
++	'brcm,bcm2835' => FLAG_PI | FLAG_DTOK | FLAG_283X,
++	'brcm,bcm2836' => FLAG_PI | FLAG_DTOK | FLAG_283X,
++	'of_overlay_apply' => FLAG_DTOK | FLAG_DDTK,
++};
+ 
+ my $res = try_extract($kernel_file, $tmpfile1);
+-$res = try_decompress('\037\213\010', 'xy',    'gunzip', 0,
+-		      $kernel_file, $tmpfile1, $tmpfile2) if (!$res);
+-$res = try_decompress('\3757zXZ\000', 'abcde', 'unxz --single-stream', -1,
+-		      $kernel_file, $tmpfile1, $tmpfile2) if (!$res);
+-$res = try_decompress('BZh',          'xy',    'bunzip2', 0,
+-		      $kernel_file, $tmpfile1, $tmpfile2) if (!$res);
+-$res = try_decompress('\135\0\0\0',   'xxx',   'unlzma', 0,
+-		      $kernel_file, $tmpfile1, $tmpfile2) if (!$res);
+-$res = try_decompress('\211\114\132', 'xy',    'lzop -d', 0,
+-		      $kernel_file, $tmpfile1, $tmpfile2) if (!$res);
+-$res = try_decompress('\002\041\114\030', 'xy',    'lz4 -d', 1,
+-		      $kernel_file, $tmpfile1, $tmpfile2) if (!$res);
++$res ||= try_decompress('\037\213\010', 'xy',    'gunzip', 0,
++			$kernel_file, $tmpfile1, $tmpfile2);
++$res ||= try_decompress('\3757zXZ\000', 'abcde', 'unxz --single-stream', -1,
++			$kernel_file, $tmpfile1, $tmpfile2);
++$res ||= try_decompress('BZh',          'xy',    'bunzip2', 0,
++			$kernel_file, $tmpfile1, $tmpfile2);
++$res ||= try_decompress('\135\0\0\0',   'xxx',   'unlzma', 0,
++			$kernel_file, $tmpfile1, $tmpfile2);
++$res ||= try_decompress('\211\114\132', 'xy',    'lzop -d', 0,
++			$kernel_file, $tmpfile1, $tmpfile2);
++$res ||= try_decompress('\002\041\114\030', 'xy',    'lz4 -d', 1,
++			$kernel_file, $tmpfile1, $tmpfile2);
+ 
+ my $append_trailer;
+ my $trailer;
+@@ -83,27 +102,21 @@ $append_trailer = $dtok;
+ 
+ if ($res)
+ {
+-    $kver = $res->{''} || '?';
++    $kver = $res->{'kver'} || '?';
++    my $flags = $res->{'flags'};
+     print("Version: $kver\n");
+ 
+-    $append_trailer = $dtok;
+-    if (!$dtok)
++    if ($flags & FLAG_PI)
+     {
+-	if (config_bool($res, 'bcm2708_fb') ||
+-	    config_bool($res, 'brcm,bcm2835-mmc') ||
+-	    config_bool($res, 'brcm,bcm2835-sdhost'))
+-	{
+-	    $dtok ||= config_bool($res, 'brcm,bcm2708-pinctrl');
+-	    $dtok ||= config_bool($res, 'brcm,bcm2835-gpio');
+-	    $is_283x ||= config_bool($res, 'brcm,bcm2835');
+-	    $is_283x ||= config_bool($res, 'brcm,bcm2836');
+-	    $dtok ||= $is_283x;
+-	    $append_trailer = 1;
+-	}
+-	else
+-	{
+-	    print ("* This doesn't look like a Raspberry Pi kernel. In pass-through mode.\n");
+-	}
++	$append_trailer = 1;
++	$dtok ||= ($flags & FLAG_DTOK) != 0;
++	$is_270x ||= ($flags & FLAG_270X) != 0;
++	$is_283x ||= ($flags & FLAG_283X) != 0;
++	$ddtk ||= ($flags & FLAG_DDTK) != 0;
++    }
++    else
++    {
++	print ("* This doesn't look like a Raspberry Pi kernel. In pass-through mode.\n");
+     }
+ }
+ elsif (!$dtok)
+@@ -114,6 +127,8 @@ elsif (!$dtok)
+ if ($append_trailer)
+ {
+     printf("DT: %s\n", $dtok ? "y" : "n");
++    printf("DDT: %s\n", $ddtk ? "y" : "n") if ($ddtk);
++    printf("270x: %s\n", $is_270x ? "y" : "n");
+     printf("283x: %s\n", $is_283x ? "y" : "n");
+ 
+     my @atoms;
+@@ -121,7 +136,10 @@ if ($append_trailer)
+     push @atoms, [ $trailer_magic, pack('V', 0) ];
+     push @atoms, [ 'KVer', $kver ];
+     push @atoms, [ 'DTOK', pack('V', $dtok) ];
+-    push @atoms, [ '283x', pack('V', $is_283x) ];
++    push @atoms, [ 'DDTK', pack('V', $ddtk) ] if ($ddtk);
++    push @atoms, [ '270X', pack('V', $is_270x) ];
++    push @atoms, [ '283X', pack('V', $is_283x) ];
++    push @atoms, [ '283x', pack('V', $is_283x && !$is_270x) ];
+ 
+     $trailer = pack_trailer(\@atoms);
+     $atoms[0]->[1] = pack('V', length($trailer));
+@@ -175,7 +193,7 @@ END {
+ 
+ sub usage
+ {
+-	print ("Usage: mkknlimg [--dtok] [--283x] <vmlinux|zImage|bzImage> <outfile>\n");
++	print ("Usage: mkknlimg [--dtok] [--270x] [--283x] <vmlinux|zImage|bzImage> <outfile>\n");
+ 	exit(1);
+ }
+ 
+@@ -189,15 +207,8 @@ sub try_extract
+ 
+ 	chomp($ver);
+ 
+-	my $res = { ''=>$ver };
+-	my $string_pattern = '^('.join('|', @wanted_strings).')$';
+-
+-	my @matches = `strings \"$knl\" | grep -E \"$string_pattern\"`;
+-	foreach my $match (@matches)
+-	{
+-	    chomp($match);
+-	    $res->{$match} = 1;
+-	}
++	my $res = { 'kver'=>$ver };
++	$res->{'flags'} = strings_to_flags($knl, $wanted_strings);
+ 
+ 	return $res;
+ }
+@@ -224,6 +235,22 @@ sub try_decompress
+ 	return undef;
+ }
+ 
++sub strings_to_flags
++{
++	my ($knl, $strings) = @_;
++	my $string_pattern = '^('.join('|', keys(%$strings)).')$';
++	my $flags = 0;
++
++	my @matches = `strings \"$knl\" | grep -E \"$string_pattern\"`;
++	foreach my $match (@matches)
++	{
++	    chomp($match);
++	    $flags |= $strings->{$match};
++	}
++
++	return $flags;
++}
++
+ sub pack_trailer
+ {
+ 	my ($atoms) = @_;
+@@ -235,10 +262,3 @@ sub pack_trailer
+ 	}
+ 	return $trailer;
+ }
+-
+-sub config_bool
+-{
+-	my ($configs, $wanted) = @_;
+-	my $val = $configs->{$wanted} || 'n';
+-	return (($val eq 'y') || ($val eq '1'));
+-}
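
To restate the selection rule from the commit message as code (illustration only; the firmware itself is closed, this just mirrors what the text above says current firmware does with the trailer tags):

/* Current firmware's dtb choice, as described in the commit message. */
#include <stdbool.h>

static const char *old_firmware_dtb(bool tag_283x)
{
	return tag_283x ? "bcm2835-*.dtb" : "bcm270*.dtb";
}

/* With this patch, mkknlimg also emits '270X' and '283X' so newer
 * firmware can see which platform families an image supports, while
 * '283x' is only set for upstream-only images, keeping the rule
 * above correct for old firmware on multi-platform images.
 */
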
diff --git a/target/linux/brcm2708/patches-4.4/0091-drm-vc4-Add-suport-for-3D-rendering-using-the-V3D-en.patch b/target/linux/brcm2708/patches-4.4/0091-drm-vc4-Add-suport-for-3D-rendering-using-the-V3D-en.patch
new file mode 100644
index 0000000..2a2125f
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0091-drm-vc4-Add-suport-for-3D-rendering-using-the-V3D-en.patch
@@ -0,0 +1,5558 @@
+From e8c7a56d86c676b4665edc50762fd737a7b56ff5 Mon Sep 17 00:00:00 2001
+From: Eric Anholt <eric at anholt.net>
+Date: Mon, 2 Mar 2015 13:01:12 -0800
+Subject: [PATCH 091/127] drm/vc4: Add suport for 3D rendering using the V3D
+ engine.
+
+This is a squash of the out-of-tree development series.  Since that
+series contained code from the first "get a demo triangle rendered
+using a hacked up driver using binary shader code" to "plug the last
+known security hole", it's hard to reconstruct a different series of
+incremental development that's mergeable without security holes
+throughout it.
+
+Signed-off-by: Eric Anholt <eric at anholt.net>
+---
+ drivers/gpu/drm/vc4/Makefile               |  11 +-
+ drivers/gpu/drm/vc4/vc4_bo.c               | 476 +++++++++++++-
+ drivers/gpu/drm/vc4/vc4_crtc.c             |  98 ++-
+ drivers/gpu/drm/vc4/vc4_debugfs.c          |   3 +
+ drivers/gpu/drm/vc4/vc4_drv.c              |  45 +-
+ drivers/gpu/drm/vc4/vc4_drv.h              | 317 ++++++++++
+ drivers/gpu/drm/vc4/vc4_gem.c              | 686 +++++++++++++++++++++
+ drivers/gpu/drm/vc4/vc4_irq.c              | 211 +++++++
+ drivers/gpu/drm/vc4/vc4_kms.c              | 148 ++++-
+ drivers/gpu/drm/vc4/vc4_packet.h           | 384 ++++++++++++
+ drivers/gpu/drm/vc4/vc4_plane.c            |  40 ++
+ drivers/gpu/drm/vc4/vc4_qpu_defines.h      | 268 ++++++++
+ drivers/gpu/drm/vc4/vc4_render_cl.c        | 448 ++++++++++++++
+ drivers/gpu/drm/vc4/vc4_trace.h            |  63 ++
+ drivers/gpu/drm/vc4/vc4_trace_points.c     |  14 +
+ drivers/gpu/drm/vc4/vc4_v3d.c              | 268 ++++++++
+ drivers/gpu/drm/vc4/vc4_validate.c         | 958 +++++++++++++++++++++++++++++
+ drivers/gpu/drm/vc4/vc4_validate_shaders.c | 521 ++++++++++++++++
+ include/uapi/drm/vc4_drm.h                 | 229 +++++++
+ 19 files changed, 5173 insertions(+), 15 deletions(-)
+ create mode 100644 drivers/gpu/drm/vc4/vc4_gem.c
+ create mode 100644 drivers/gpu/drm/vc4/vc4_irq.c
+ create mode 100644 drivers/gpu/drm/vc4/vc4_packet.h
+ create mode 100644 drivers/gpu/drm/vc4/vc4_qpu_defines.h
+ create mode 100644 drivers/gpu/drm/vc4/vc4_render_cl.c
+ create mode 100644 drivers/gpu/drm/vc4/vc4_trace.h
+ create mode 100644 drivers/gpu/drm/vc4/vc4_trace_points.c
+ create mode 100644 drivers/gpu/drm/vc4/vc4_v3d.c
+ create mode 100644 drivers/gpu/drm/vc4/vc4_validate.c
+ create mode 100644 drivers/gpu/drm/vc4/vc4_validate_shaders.c
+ create mode 100644 include/uapi/drm/vc4_drm.h
+
+--- a/drivers/gpu/drm/vc4/Makefile
++++ b/drivers/gpu/drm/vc4/Makefile
+@@ -8,10 +8,19 @@ vc4-y := \
+ 	vc4_crtc.o \
+ 	vc4_drv.o \
+ 	vc4_kms.o \
++	vc4_gem.o \
+ 	vc4_hdmi.o \
+ 	vc4_hvs.o \
+-	vc4_plane.o
++	vc4_irq.o \
++	vc4_plane.o \
++	vc4_render_cl.o \
++	vc4_trace_points.o \
++	vc4_v3d.o \
++	vc4_validate.o \
++	vc4_validate_shaders.o
+ 
+ vc4-$(CONFIG_DEBUG_FS) += vc4_debugfs.o
+ 
+ obj-$(CONFIG_DRM_VC4)  += vc4.o
++
++CFLAGS_vc4_trace_points.o := -I$(src)
+--- a/drivers/gpu/drm/vc4/vc4_bo.c
++++ b/drivers/gpu/drm/vc4/vc4_bo.c
+@@ -15,16 +15,174 @@
+  */
+ 
+ #include "vc4_drv.h"
++#include "uapi/drm/vc4_drm.h"
+ 
+-struct vc4_bo *vc4_bo_create(struct drm_device *dev, size_t size)
++static void vc4_bo_stats_dump(struct vc4_dev *vc4)
+ {
++	DRM_INFO("num bos allocated: %d\n",
++		 vc4->bo_stats.num_allocated);
++	DRM_INFO("size bos allocated: %dkb\n",
++		 vc4->bo_stats.size_allocated / 1024);
++	DRM_INFO("num bos used: %d\n",
++		 vc4->bo_stats.num_allocated - vc4->bo_stats.num_cached);
++	DRM_INFO("size bos used: %dkb\n",
++		 (vc4->bo_stats.size_allocated -
++		  vc4->bo_stats.size_cached) / 1024);
++	DRM_INFO("num bos cached: %d\n",
++		 vc4->bo_stats.num_cached);
++	DRM_INFO("size bos cached: %dkb\n",
++		 vc4->bo_stats.size_cached / 1024);
++}
++
++static uint32_t bo_page_index(size_t size)
++{
++	return (size / PAGE_SIZE) - 1;
++}
++
++/* Must be called with bo_lock held. */
++static void vc4_bo_destroy(struct vc4_bo *bo)
++{
++	struct drm_gem_object *obj = &bo->base.base;
++	struct vc4_dev *vc4 = to_vc4_dev(obj->dev);
++
++	if (bo->validated_shader) {
++		kfree(bo->validated_shader->texture_samples);
++		kfree(bo->validated_shader);
++		bo->validated_shader = NULL;
++	}
++
++	vc4->bo_stats.num_allocated--;
++	vc4->bo_stats.size_allocated -= obj->size;
++	drm_gem_cma_free_object(obj);
++}
++
++/* Must be called with bo_lock held. */
++static void vc4_bo_remove_from_cache(struct vc4_bo *bo)
++{
++	struct drm_gem_object *obj = &bo->base.base;
++	struct vc4_dev *vc4 = to_vc4_dev(obj->dev);
++
++	vc4->bo_stats.num_cached--;
++	vc4->bo_stats.size_cached -= obj->size;
++
++	list_del(&bo->unref_head);
++	list_del(&bo->size_head);
++}
++
++static struct list_head *vc4_get_cache_list_for_size(struct drm_device *dev,
++						     size_t size)
++{
++	struct vc4_dev *vc4 = to_vc4_dev(dev);
++	uint32_t page_index = bo_page_index(size);
++
++	if (vc4->bo_cache.size_list_size <= page_index) {
++		uint32_t new_size = max(vc4->bo_cache.size_list_size * 2,
++					page_index + 1);
++		struct list_head *new_list;
++		uint32_t i;
++
++		new_list = kmalloc(new_size * sizeof(struct list_head),
++				   GFP_KERNEL);
++		if (!new_list)
++			return NULL;
++
++		/* Rebase the old cached BO lists to their new list
++		 * head locations.
++		 */
++		for (i = 0; i < vc4->bo_cache.size_list_size; i++) {
++			struct list_head *old_list = &vc4->bo_cache.size_list[i];
++			if (list_empty(old_list))
++				INIT_LIST_HEAD(&new_list[i]);
++			else
++				list_replace(old_list, &new_list[i]);
++		}
++		/* And initialize the brand new BO list heads. */
++		for (i = vc4->bo_cache.size_list_size; i < new_size; i++)
++			INIT_LIST_HEAD(&new_list[i]);
++
++		kfree(vc4->bo_cache.size_list);
++		vc4->bo_cache.size_list = new_list;
++		vc4->bo_cache.size_list_size = new_size;
++	}
++
++	return &vc4->bo_cache.size_list[page_index];
++}
++
++void vc4_bo_cache_purge(struct drm_device *dev)
++{
++	struct vc4_dev *vc4 = to_vc4_dev(dev);
++
++	spin_lock(&vc4->bo_lock);
++	while (!list_empty(&vc4->bo_cache.time_list)) {
++		struct vc4_bo *bo = list_last_entry(&vc4->bo_cache.time_list,
++						    struct vc4_bo, unref_head);
++		vc4_bo_remove_from_cache(bo);
++		vc4_bo_destroy(bo);
++	}
++	spin_unlock(&vc4->bo_lock);
++}
++
++struct vc4_bo *vc4_bo_create(struct drm_device *dev, size_t unaligned_size)
++{
++	struct vc4_dev *vc4 = to_vc4_dev(dev);
++	uint32_t size = roundup(unaligned_size, PAGE_SIZE);
++	uint32_t page_index = bo_page_index(size);
+ 	struct drm_gem_cma_object *cma_obj;
++	int pass;
+ 
+-	cma_obj = drm_gem_cma_create(dev, size);
+-	if (IS_ERR(cma_obj))
++	if (size == 0)
+ 		return NULL;
+-	else
+-		return to_vc4_bo(&cma_obj->base);
++
++	/* First, try to get a vc4_bo from the kernel BO cache. */
++	spin_lock(&vc4->bo_lock);
++	if (page_index < vc4->bo_cache.size_list_size &&
++	    !list_empty(&vc4->bo_cache.size_list[page_index])) {
++		struct vc4_bo *bo =
++			list_first_entry(&vc4->bo_cache.size_list[page_index],
++					 struct vc4_bo, size_head);
++		vc4_bo_remove_from_cache(bo);
++		spin_unlock(&vc4->bo_lock);
++		kref_init(&bo->base.base.refcount);
++		return bo;
++	}
++	spin_unlock(&vc4->bo_lock);
++
++	/* Otherwise, make a new BO. */
++	for (pass = 0; ; pass++) {
++		cma_obj = drm_gem_cma_create(dev, size);
++		if (!IS_ERR(cma_obj))
++			break;
++
++		switch (pass) {
++		case 0:
++			/*
++			 * If we've run out of CMA memory, kill the cache of
++			 * CMA allocations we've got laying around and try again.
++			 */
++			vc4_bo_cache_purge(dev);
++			break;
++		case 1:
++			/*
++			 * Getting desperate, so try to wait for any
++			 * previous rendering to finish, free its
++			 * unreferenced BOs to the cache, and then
++			 * free the cache.
++			 */
++			vc4_wait_for_seqno(dev, vc4->emit_seqno, ~0ull, true);
++			vc4_job_handle_completed(vc4);
++			vc4_bo_cache_purge(dev);
++			break;
++		case 3:
++			DRM_ERROR("Failed to allocate from CMA:\n");
++			vc4_bo_stats_dump(vc4);
++			return NULL;
++		}
++	}
++
++	vc4->bo_stats.num_allocated++;
++	vc4->bo_stats.size_allocated += size;
++
++	return to_vc4_bo(&cma_obj->base);
+ }
+ 
+ int vc4_dumb_create(struct drm_file *file_priv,
+@@ -41,7 +199,129 @@ int vc4_dumb_create(struct drm_file *fil
+ 	if (args->size < args->pitch * args->height)
+ 		args->size = args->pitch * args->height;
+ 
+-	bo = vc4_bo_create(dev, roundup(args->size, PAGE_SIZE));
++	bo = vc4_bo_create(dev, args->size);
++	if (!bo)
++		return -ENOMEM;
++
++	ret = drm_gem_handle_create(file_priv, &bo->base.base, &args->handle);
++	drm_gem_object_unreference_unlocked(&bo->base.base);
++
++	return ret;
++}
++
++static void
++vc4_bo_cache_free_old(struct drm_device *dev)
++{
++	struct vc4_dev *vc4 = to_vc4_dev(dev);
++	unsigned long expire_time = jiffies - msecs_to_jiffies(1000);
++
++	spin_lock(&vc4->bo_lock);
++	while (!list_empty(&vc4->bo_cache.time_list)) {
++		struct vc4_bo *bo = list_last_entry(&vc4->bo_cache.time_list,
++						    struct vc4_bo, unref_head);
++		if (time_before(expire_time, bo->free_time)) {
++			mod_timer(&vc4->bo_cache.time_timer,
++				  round_jiffies_up(jiffies +
++						   msecs_to_jiffies(1000)));
++			spin_unlock(&vc4->bo_lock);
++			return;
++		}
++
++		vc4_bo_remove_from_cache(bo);
++		vc4_bo_destroy(bo);
++	}
++	spin_unlock(&vc4->bo_lock);
++}
++
++/* Called on the last userspace/kernel unreference of the BO.  Returns
++ * it to the BO cache if possible, otherwise frees it.
++ *
++ * Note that this is called with the struct_mutex held.
++ */
++void vc4_free_object(struct drm_gem_object *gem_bo)
++{
++	struct drm_device *dev = gem_bo->dev;
++	struct vc4_dev *vc4 = to_vc4_dev(dev);
++	struct vc4_bo *bo = to_vc4_bo(gem_bo);
++	struct list_head *cache_list;
++
++	/* If the object references someone else's memory, we can't cache it.
++	 */
++	if (gem_bo->import_attach) {
++		vc4_bo_destroy(bo);
++		return;
++	}
++
++	/* Don't cache if it was publicly named. */
++	if (gem_bo->name) {
++		vc4_bo_destroy(bo);
++		return;
++	}
++
++	spin_lock(&vc4->bo_lock);
++	cache_list = vc4_get_cache_list_for_size(dev, gem_bo->size);
++	if (!cache_list) {
++		vc4_bo_destroy(bo);
++		spin_unlock(&vc4->bo_lock);
++		return;
++	}
++
++	if (bo->validated_shader) {
++		kfree(bo->validated_shader->texture_samples);
++		kfree(bo->validated_shader);
++		bo->validated_shader = NULL;
++	}
++
++	bo->free_time = jiffies;
++	list_add(&bo->size_head, cache_list);
++	list_add(&bo->unref_head, &vc4->bo_cache.time_list);
++
++	vc4->bo_stats.num_cached++;
++	vc4->bo_stats.size_cached += gem_bo->size;
++	spin_unlock(&vc4->bo_lock);
++
++	vc4_bo_cache_free_old(dev);
++}
++
++static void vc4_bo_cache_time_work(struct work_struct *work)
++{
++	struct vc4_dev *vc4 =
++		container_of(work, struct vc4_dev, bo_cache.time_work);
++	struct drm_device *dev = vc4->dev;
++
++	vc4_bo_cache_free_old(dev);
++}
++
++static void vc4_bo_cache_time_timer(unsigned long data)
++{
++	struct drm_device *dev = (struct drm_device *)data;
++	struct vc4_dev *vc4 = to_vc4_dev(dev);
++
++	schedule_work(&vc4->bo_cache.time_work);
++}
++
++struct dma_buf *
++vc4_prime_export(struct drm_device *dev, struct drm_gem_object *obj, int flags)
++{
++	struct vc4_bo *bo = to_vc4_bo(obj);
++
++	if (bo->validated_shader) {
++		DRM_ERROR("Attempting to export shader BO\n");
++		return ERR_PTR(-EINVAL);
++	}
++
++	return drm_gem_prime_export(dev, obj, flags);
++}
++
++int
++vc4_create_bo_ioctl(struct drm_device *dev, void *data,
++		    struct drm_file *file_priv)
++{
++	struct drm_vc4_create_bo *args = data;
++	struct vc4_bo *bo = NULL;
++	int ret;
++
++	bo = vc4_bo_create(dev, args->size);
+ 	if (!bo)
+ 		return -ENOMEM;
+ 
+@@ -50,3 +330,187 @@ int vc4_dumb_create(struct drm_file *fil
+ 
+ 	return ret;
+ }
++
++int
++vc4_create_shader_bo_ioctl(struct drm_device *dev, void *data,
++			   struct drm_file *file_priv)
++{
++	struct drm_vc4_create_shader_bo *args = data;
++	struct vc4_bo *bo = NULL;
++	int ret;
++
++	if (args->size == 0)
++		return -EINVAL;
++
++	if (args->size % sizeof(u64) != 0)
++		return -EINVAL;
++
++	if (args->flags != 0) {
++		DRM_INFO("Unknown flags set: 0x%08x\n", args->flags);
++		return -EINVAL;
++	}
++
++	if (args->pad != 0) {
++		DRM_INFO("Pad set: 0x%08x\n", args->pad);
++		return -EINVAL;
++	}
++
++	bo = vc4_bo_create(dev, args->size);
++	if (!bo)
++		return -ENOMEM;
++
++	ret = copy_from_user(bo->base.vaddr,
++			     (void __user *)(uintptr_t)args->data,
++			     args->size);
++	if (ret != 0)
++		goto fail;
++
++	bo->validated_shader = vc4_validate_shader(&bo->base);
++	if (!bo->validated_shader) {
++		ret = -EINVAL;
++		goto fail;
++	}
++
++	/* We have to create the handle after validation, to avoid
++	 * races for users doing things like mmap the shader BO.
++	 */
++	ret = drm_gem_handle_create(file_priv, &bo->base.base, &args->handle);
++
++ fail:
++	drm_gem_object_unreference_unlocked(&bo->base.base);
++
++	return ret;
++}
++
++int
++vc4_mmap_bo_ioctl(struct drm_device *dev, void *data,
++		  struct drm_file *file_priv)
++{
++	struct drm_vc4_mmap_bo *args = data;
++	struct drm_gem_object *gem_obj;
++
++	gem_obj = drm_gem_object_lookup(dev, file_priv, args->handle);
++	if (!gem_obj) {
++		DRM_ERROR("Failed to look up GEM BO %d\n", args->handle);
++		return -EINVAL;
++	}
++
++	/* The mmap offset was set up at BO allocation time. */
++	args->offset = drm_vma_node_offset_addr(&gem_obj->vma_node);
++
++	drm_gem_object_unreference(gem_obj);
++	return 0;
++}
++
++int vc4_mmap(struct file *filp, struct vm_area_struct *vma)
++{
++	struct drm_gem_object *gem_obj;
++	struct vc4_bo *bo;
++	int ret;
++
++	ret = drm_gem_mmap(filp, vma);
++	if (ret)
++		return ret;
++
++	gem_obj = vma->vm_private_data;
++	bo = to_vc4_bo(gem_obj);
++
++	if (bo->validated_shader) {
++		DRM_ERROR("mmaping of shader BOs not allowed.\n");
++		return -EINVAL;
++	}
++
++	/*
++	 * Clear the VM_PFNMAP flag that was set by drm_gem_mmap(), and set the
++	 * vm_pgoff (used as a fake buffer offset by DRM) to 0 as we want to map
++	 * the whole buffer.
++	 */
++	vma->vm_flags &= ~VM_PFNMAP;
++	vma->vm_pgoff = 0;
++
++	ret = dma_mmap_writecombine(bo->base.base.dev->dev, vma,
++				    bo->base.vaddr, bo->base.paddr,
++				    vma->vm_end - vma->vm_start);
++	if (ret)
++		drm_gem_vm_close(vma);
++
++	return ret;
++}
++
++int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
++{
++	struct vc4_bo *bo = to_vc4_bo(obj);
++
++	if (bo->validated_shader) {
++		DRM_ERROR("mmaping of shader BOs not allowed.\n");
++		return -EINVAL;
++	}
++
++	return drm_gem_cma_prime_mmap(obj, vma);
++}
++
++void *vc4_prime_vmap(struct drm_gem_object *obj)
++{
++	struct vc4_bo *bo = to_vc4_bo(obj);
++
++	if (bo->validated_shader) {
++		DRM_ERROR("mmaping of shader BOs not allowed.\n");
++		return ERR_PTR(-EINVAL);
++	}
++
++	return drm_gem_cma_prime_vmap(obj);
++}
++
++void vc4_bo_cache_init(struct drm_device *dev)
++{
++	struct vc4_dev *vc4 = to_vc4_dev(dev);
++
++	spin_lock_init(&vc4->bo_lock);
++
++	INIT_LIST_HEAD(&vc4->bo_cache.time_list);
++
++	INIT_WORK(&vc4->bo_cache.time_work, vc4_bo_cache_time_work);
++	setup_timer(&vc4->bo_cache.time_timer,
++		    vc4_bo_cache_time_timer,
++		    (unsigned long) dev);
++}
++
++void vc4_bo_cache_destroy(struct drm_device *dev)
++{
++	struct vc4_dev *vc4 = to_vc4_dev(dev);
++
++	del_timer(&vc4->bo_cache.time_timer);
++	cancel_work_sync(&vc4->bo_cache.time_work);
++
++	vc4_bo_cache_purge(dev);
++
++	if (vc4->bo_stats.num_allocated) {
++		DRM_ERROR("Destroying BO cache while BOs still allocated:\n");
++		vc4_bo_stats_dump(vc4);
++	}
++}
++
++#ifdef CONFIG_DEBUG_FS
++int vc4_bo_stats_debugfs(struct seq_file *m, void *unused)
++{
++	struct drm_info_node *node = (struct drm_info_node *) m->private;
++	struct drm_device *dev = node->minor->dev;
++	struct vc4_dev *vc4 = to_vc4_dev(dev);
++	struct vc4_bo_stats stats;
++
++	spin_lock(&vc4->bo_lock);
++	stats = vc4->bo_stats;
++	spin_unlock(&vc4->bo_lock);
++
++	seq_printf(m, "num bos allocated: %d\n", stats.num_allocated);
++	seq_printf(m, "size bos allocated: %dkb\n", stats.size_allocated / 1024);
++	seq_printf(m, "num bos used: %d\n", (stats.num_allocated -
++					     stats.num_cached));
++	seq_printf(m, "size bos used: %dkb\n", (stats.size_allocated -
++						stats.size_cached) / 1024);
++	seq_printf(m, "num bos cached: %d\n", stats.num_cached);
++	seq_printf(m, "size bos cached: %dkb\n", stats.size_cached / 1024);
++
++	return 0;
++}
++#endif
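
The new BO ioctls above pair with a userspace flow roughly like the following sketch. The request macros and struct fields are taken from the uapi header this patch adds (include/uapi/drm/vc4_drm.h, later in the diffstat), so treat the exact names as assumptions rather than a documented interface:

/* Userspace sketch of the create + mmap path handled by
 * vc4_create_bo_ioctl() and vc4_mmap_bo_ioctl() above.
 */
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

#include "vc4_drm.h"	/* the uapi header added by this patch */

static void *create_and_map_bo(int drm_fd, size_t size, uint32_t *handle)
{
	struct drm_vc4_create_bo create;
	struct drm_vc4_mmap_bo map;
	void *ptr;

	memset(&create, 0, sizeof(create));
	create.size = size;
	if (ioctl(drm_fd, DRM_IOCTL_VC4_CREATE_BO, &create) != 0)
		return NULL;

	memset(&map, 0, sizeof(map));
	map.handle = create.handle;
	if (ioctl(drm_fd, DRM_IOCTL_VC4_MMAP_BO, &map) != 0)
		return NULL;

	/* map.offset is the fake mmap offset set up at BO allocation time. */
	ptr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
		   drm_fd, map.offset);
	if (ptr == MAP_FAILED)
		return NULL;

	*handle = create.handle;
	return ptr;
}
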
+--- a/drivers/gpu/drm/vc4/vc4_crtc.c
++++ b/drivers/gpu/drm/vc4/vc4_crtc.c
+@@ -35,6 +35,7 @@
+ #include "drm_atomic_helper.h"
+ #include "drm_crtc_helper.h"
+ #include "linux/clk.h"
++#include "drm_fb_cma_helper.h"
+ #include "linux/component.h"
+ #include "linux/of_device.h"
+ #include "vc4_drv.h"
+@@ -476,10 +477,105 @@ static irqreturn_t vc4_crtc_irq_handler(
+ 	return ret;
+ }
+ 
++struct vc4_async_flip_state {
++	struct drm_crtc *crtc;
++	struct drm_framebuffer *fb;
++	struct drm_pending_vblank_event *event;
++
++	struct vc4_seqno_cb cb;
++};
++
++/* Called when the V3D execution for the BO being flipped to is done, so that
++ * we can actually update the plane's address to point to it.
++ */
++static void
++vc4_async_page_flip_complete(struct vc4_seqno_cb *cb)
++{
++	struct vc4_async_flip_state *flip_state =
++		container_of(cb, struct vc4_async_flip_state, cb);
++	struct drm_crtc *crtc = flip_state->crtc;
++	struct drm_device *dev = crtc->dev;
++	struct vc4_dev *vc4 = to_vc4_dev(dev);
++	struct drm_plane *plane = crtc->primary;
++
++	vc4_plane_async_set_fb(plane, flip_state->fb);
++	if (flip_state->event) {
++		unsigned long flags;
++		spin_lock_irqsave(&dev->event_lock, flags);
++		drm_crtc_send_vblank_event(crtc, flip_state->event);
++		spin_unlock_irqrestore(&dev->event_lock, flags);
++	}
++
++	drm_framebuffer_unreference(flip_state->fb);
++	kfree(flip_state);
++
++	up(&vc4->async_modeset);
++}
++
++/* Implements async (non-vblank-synced) page flips.
++ *
++ * The page flip ioctl needs to return immediately, so we grab the
++ * modeset semaphore on the pipe, and queue the address update for
++ * when V3D is done with the BO being flipped to.
++ */
++static int vc4_async_page_flip(struct drm_crtc *crtc,
++			       struct drm_framebuffer *fb,
++			       struct drm_pending_vblank_event *event,
++			       uint32_t flags)
++{
++	struct drm_device *dev = crtc->dev;
++	struct vc4_dev *vc4 = to_vc4_dev(dev);
++	struct drm_plane *plane = crtc->primary;
++	int ret = 0;
++	struct vc4_async_flip_state *flip_state;
++	struct drm_gem_cma_object *cma_bo = drm_fb_cma_get_gem_obj(fb, 0);
++	struct vc4_bo *bo = to_vc4_bo(&cma_bo->base);
++
++	flip_state = kzalloc(sizeof(*flip_state), GFP_KERNEL);
++	if (!flip_state)
++		return -ENOMEM;
++
++	drm_framebuffer_reference(fb);
++	flip_state->fb = fb;
++	flip_state->crtc = crtc;
++	flip_state->event = event;
++
++	/* Make sure all other async modesets have landed. */
++	ret = down_interruptible(&vc4->async_modeset);
++	if (ret) {
++		kfree(flip_state);
++		return ret;
++	}
++
++	/* Immediately update the plane's legacy fb pointer, so that later
++	 * modeset prep sees the state that will be present when the semaphore
++	 * is released.
++	 */
++	drm_atomic_set_fb_for_plane(plane->state, fb);
++	plane->fb = fb;
++
++	vc4_queue_seqno_cb(dev, &flip_state->cb, bo->seqno,
++			   vc4_async_page_flip_complete);
++
++	/* Driver takes ownership of state on successful async commit. */
++	return 0;
++}
++
++static int vc4_page_flip(struct drm_crtc *crtc,
++		  struct drm_framebuffer *fb,
++		  struct drm_pending_vblank_event *event,
++		  uint32_t flags)
++{
++	if (flags & DRM_MODE_PAGE_FLIP_ASYNC)
++		return vc4_async_page_flip(crtc, fb, event, flags);
++	else
++		return drm_atomic_helper_page_flip(crtc, fb, event, flags);
++}
++
+ static const struct drm_crtc_funcs vc4_crtc_funcs = {
+ 	.set_config = drm_atomic_helper_set_config,
+ 	.destroy = vc4_crtc_destroy,
+-	.page_flip = drm_atomic_helper_page_flip,
++	.page_flip = vc4_page_flip,
+ 	.set_property = NULL,
+ 	.cursor_set = NULL, /* handled by drm_mode_cursor_universal */
+ 	.cursor_move = NULL, /* handled by drm_mode_cursor_universal */
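
The async path above is reached whenever userspace passes DRM_MODE_PAGE_FLIP_ASYNC; with libdrm that is just the flags argument of drmModePageFlip(). A sketch, assuming a libdrm setup that already has fd, crtc_id and fb_id:

/* Requesting a non-vblank-synced flip; the driver queues the address
 * update until V3D has finished rendering into the new framebuffer.
 */
#include <xf86drm.h>
#include <xf86drmMode.h>

static int queue_async_flip(int fd, uint32_t crtc_id, uint32_t fb_id,
			    void *user_data)
{
	return drmModePageFlip(fd, crtc_id, fb_id,
			       DRM_MODE_PAGE_FLIP_EVENT |
			       DRM_MODE_PAGE_FLIP_ASYNC,
			       user_data);
}
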
+--- a/drivers/gpu/drm/vc4/vc4_debugfs.c
++++ b/drivers/gpu/drm/vc4/vc4_debugfs.c
+@@ -16,11 +16,14 @@
+ #include "vc4_regs.h"
+ 
+ static const struct drm_info_list vc4_debugfs_list[] = {
++	{"bo_stats", vc4_bo_stats_debugfs, 0},
+ 	{"hdmi_regs", vc4_hdmi_debugfs_regs, 0},
+ 	{"hvs_regs", vc4_hvs_debugfs_regs, 0},
+ 	{"crtc0_regs", vc4_crtc_debugfs_regs, 0, (void *)(uintptr_t)0},
+ 	{"crtc1_regs", vc4_crtc_debugfs_regs, 0, (void *)(uintptr_t)1},
+ 	{"crtc2_regs", vc4_crtc_debugfs_regs, 0, (void *)(uintptr_t)2},
++	{"v3d_ident", vc4_v3d_debugfs_ident, 0},
++	{"v3d_regs", vc4_v3d_debugfs_regs, 0},
+ };
+ 
+ #define VC4_DEBUGFS_ENTRIES ARRAY_SIZE(vc4_debugfs_list)
+--- a/drivers/gpu/drm/vc4/vc4_drv.c
++++ b/drivers/gpu/drm/vc4/vc4_drv.c
+@@ -14,8 +14,10 @@
+ #include <linux/module.h>
+ #include <linux/of_platform.h>
+ #include <linux/platform_device.h>
++#include <soc/bcm2835/raspberrypi-firmware.h>
+ #include "drm_fb_cma_helper.h"
+ 
++#include "uapi/drm/vc4_drm.h"
+ #include "vc4_drv.h"
+ #include "vc4_regs.h"
+ 
+@@ -63,7 +65,7 @@ static const struct file_operations vc4_
+ 	.open = drm_open,
+ 	.release = drm_release,
+ 	.unlocked_ioctl = drm_ioctl,
+-	.mmap = drm_gem_cma_mmap,
++	.mmap = vc4_mmap,
+ 	.poll = drm_poll,
+ 	.read = drm_read,
+ #ifdef CONFIG_COMPAT
+@@ -73,16 +75,28 @@ static const struct file_operations vc4_
+ };
+ 
+ static const struct drm_ioctl_desc vc4_drm_ioctls[] = {
++	DRM_IOCTL_DEF_DRV(VC4_SUBMIT_CL, vc4_submit_cl_ioctl, 0),
++	DRM_IOCTL_DEF_DRV(VC4_WAIT_SEQNO, vc4_wait_seqno_ioctl, 0),
++	DRM_IOCTL_DEF_DRV(VC4_WAIT_BO, vc4_wait_bo_ioctl, 0),
++	DRM_IOCTL_DEF_DRV(VC4_CREATE_BO, vc4_create_bo_ioctl, 0),
++	DRM_IOCTL_DEF_DRV(VC4_MMAP_BO, vc4_mmap_bo_ioctl, 0),
++	DRM_IOCTL_DEF_DRV(VC4_CREATE_SHADER_BO, vc4_create_shader_bo_ioctl, 0),
+ };
+ 
+ static struct drm_driver vc4_drm_driver = {
+ 	.driver_features = (DRIVER_MODESET |
+ 			    DRIVER_ATOMIC |
+ 			    DRIVER_GEM |
++			    DRIVER_HAVE_IRQ |
+ 			    DRIVER_PRIME),
+ 	.lastclose = vc4_lastclose,
+ 	.preclose = vc4_drm_preclose,
+ 
++	.irq_handler = vc4_irq,
++	.irq_preinstall = vc4_irq_preinstall,
++	.irq_postinstall = vc4_irq_postinstall,
++	.irq_uninstall = vc4_irq_uninstall,
++
+ 	.enable_vblank = vc4_enable_vblank,
+ 	.disable_vblank = vc4_disable_vblank,
+ 	.get_vblank_counter = drm_vblank_count,
+@@ -92,18 +106,18 @@ static struct drm_driver vc4_drm_driver
+ 	.debugfs_cleanup = vc4_debugfs_cleanup,
+ #endif
+ 
+-	.gem_free_object = drm_gem_cma_free_object,
++	.gem_free_object = vc4_free_object,
+ 	.gem_vm_ops = &drm_gem_cma_vm_ops,
+ 
+ 	.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
+ 	.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
+ 	.gem_prime_import = drm_gem_prime_import,
+-	.gem_prime_export = drm_gem_prime_export,
++	.gem_prime_export = vc4_prime_export,
+ 	.gem_prime_get_sg_table	= drm_gem_cma_prime_get_sg_table,
+ 	.gem_prime_import_sg_table = drm_gem_cma_prime_import_sg_table,
+-	.gem_prime_vmap = drm_gem_cma_prime_vmap,
++	.gem_prime_vmap = vc4_prime_vmap,
+ 	.gem_prime_vunmap = drm_gem_cma_prime_vunmap,
+-	.gem_prime_mmap = drm_gem_cma_prime_mmap,
++	.gem_prime_mmap = vc4_prime_mmap,
+ 
+ 	.dumb_create = vc4_dumb_create,
+ 	.dumb_map_offset = drm_gem_cma_dumb_map_offset,
+@@ -113,6 +127,8 @@ static struct drm_driver vc4_drm_driver
+ 	.num_ioctls = ARRAY_SIZE(vc4_drm_ioctls),
+ 	.fops = &vc4_drm_fops,
+ 
++	.gem_obj_size = sizeof(struct vc4_bo),
++
+ 	.name = DRIVER_NAME,
+ 	.desc = DRIVER_DESC,
+ 	.date = DRIVER_DATE,
+@@ -153,6 +169,7 @@ static int vc4_drm_bind(struct device *d
+ 	struct drm_device *drm;
+ 	struct drm_connector *connector;
+ 	struct vc4_dev *vc4;
++	struct device_node *firmware_node;
+ 	int ret = 0;
+ 
+ 	dev->coherent_dma_mask = DMA_BIT_MASK(32);
+@@ -161,6 +178,14 @@ static int vc4_drm_bind(struct device *d
+ 	if (!vc4)
+ 		return -ENOMEM;
+ 
++	firmware_node = of_parse_phandle(dev->of_node, "firmware", 0);
++	vc4->firmware = rpi_firmware_get(firmware_node);
++	of_node_put(firmware_node);
++	if (!vc4->firmware) {
++		DRM_DEBUG("Failed to get Raspberry Pi firmware reference.\n");
++		return -EPROBE_DEFER;
++	}
++
+ 	drm = drm_dev_alloc(&vc4_drm_driver, dev);
+ 	if (!drm)
+ 		return -ENOMEM;
+@@ -170,13 +195,17 @@ static int vc4_drm_bind(struct device *d
+ 
+ 	drm_dev_set_unique(drm, dev_name(dev));
+ 
++	vc4_bo_cache_init(drm);
++
+ 	drm_mode_config_init(drm);
+ 	if (ret)
+ 		goto unref;
+ 
++	vc4_gem_init(drm);
++
+ 	ret = component_bind_all(dev, drm);
+ 	if (ret)
+-		goto unref;
++		goto gem_destroy;
+ 
+ 	ret = drm_dev_register(drm, 0);
+ 	if (ret < 0)
+@@ -200,8 +229,11 @@ unregister:
+ 	drm_dev_unregister(drm);
+ unbind_all:
+ 	component_unbind_all(dev, drm);
++gem_destroy:
++	vc4_gem_destroy(drm);
+ unref:
+ 	drm_dev_unref(drm);
++	vc4_bo_cache_destroy(drm);
+ 	return ret;
+ }
+ 
+@@ -228,6 +260,7 @@ static struct platform_driver *const com
+ 	&vc4_hdmi_driver,
+ 	&vc4_crtc_driver,
+ 	&vc4_hvs_driver,
++	&vc4_v3d_driver,
+ };
+ 
+ static int vc4_platform_drm_probe(struct platform_device *pdev)
+--- a/drivers/gpu/drm/vc4/vc4_drv.h
++++ b/drivers/gpu/drm/vc4/vc4_drv.h
+@@ -15,8 +15,85 @@ struct vc4_dev {
+ 	struct vc4_hdmi *hdmi;
+ 	struct vc4_hvs *hvs;
+ 	struct vc4_crtc *crtc[3];
++	struct vc4_v3d *v3d;
+ 
+ 	struct drm_fbdev_cma *fbdev;
++	struct rpi_firmware *firmware;
++
++	/* The kernel-space BO cache.  Tracks buffers that have been
++	 * unreferenced by all other users (refcounts of 0!) but not
++	 * yet freed, so we can do cheap allocations.
++	 */
++	struct vc4_bo_cache {
++		/* Array of list heads for entries in the BO cache,
++		 * based on number of pages, so we can do O(1) lookups
++		 * in the cache when allocating.
++		 */
++		struct list_head *size_list;
++		uint32_t size_list_size;
++
++		/* List of all BOs in the cache, ordered by age, so we
++		 * can do O(1) lookups when trying to free old
++		 * buffers.
++		 */
++		struct list_head time_list;
++		struct work_struct time_work;
++		struct timer_list time_timer;
++	} bo_cache;
++
++	struct vc4_bo_stats {
++		u32 num_allocated;
++		u32 size_allocated;
++		u32 num_cached;
++		u32 size_cached;
++	} bo_stats;
++
++	/* Protects bo_cache and the BO stats. */
++	spinlock_t bo_lock;
++
++	/* Sequence number for the last job queued in job_list.
++	 * Starts at 0 (no jobs emitted).
++	 */
++	uint64_t emit_seqno;
++
++	/* Sequence number for the last completed job on the GPU.
++	 * Starts at 0 (no jobs completed).
++	 */
++	uint64_t finished_seqno;
++
++	/* List of all struct vc4_exec_info for jobs to be executed.
++	 * The first job in the list is the one currently programmed
++	 * into ct0ca/ct1ca for execution.
++	 */
++	struct list_head job_list;
++	/* List of the finished vc4_exec_infos waiting to be freed by
++	 * job_done_work.
++	 */
++	struct list_head job_done_list;
++	spinlock_t job_lock;
++	wait_queue_head_t job_wait_queue;
++	struct work_struct job_done_work;
++
++	/* List of struct vc4_seqno_cb for callbacks to be made from a
++	 * workqueue when the given seqno is passed.
++	 */
++	struct list_head seqno_cb_list;
++
++	/* The binner overflow memory that's currently set up in
++	 * BPOA/BPOS registers.  When overflow occurs and a new one is
++	 * allocated, the previous one will be moved to
++	 * vc4->current_exec's free list.
++	 */
++	struct vc4_bo *overflow_mem;
++	struct work_struct overflow_mem_work;
++
++	struct {
++		uint32_t last_ct0ca, last_ct1ca;
++		struct timer_list timer;
++		struct work_struct reset_work;
++	} hangcheck;
++
++	struct semaphore async_modeset;
+ };
+ 
+ static inline struct vc4_dev *
+@@ -27,6 +104,25 @@ to_vc4_dev(struct drm_device *dev)
+ 
+ struct vc4_bo {
+ 	struct drm_gem_cma_object base;
++
++	/* seqno of the last job to render to this BO. */
++	uint64_t seqno;
++
++	/* List entry for the BO's position in either
++	 * vc4_exec_info->unref_list or vc4_dev->bo_cache.time_list
++	 */
++	struct list_head unref_head;
++
++	/* Time in jiffies when the BO was put in vc4->bo_cache. */
++	unsigned long free_time;
++
++	/* List entry for the BO's position in vc4_dev->bo_cache.size_list */
++	struct list_head size_head;
++
++	/* Struct for shader validation state, if created by
++	 * DRM_IOCTL_VC4_CREATE_SHADER_BO.
++	 */
++	struct vc4_validated_shader_info *validated_shader;
+ };
+ 
+ static inline struct vc4_bo *
+@@ -35,6 +131,17 @@ to_vc4_bo(struct drm_gem_object *bo)
+ 	return (struct vc4_bo *)bo;
+ }
+ 
++struct vc4_seqno_cb {
++	struct work_struct work;
++	uint64_t seqno;
++	void (*func)(struct vc4_seqno_cb *cb);
++};
++
++struct vc4_v3d {
++	struct platform_device *pdev;
++	void __iomem *regs;
++};
++
+ struct vc4_hvs {
+ 	struct platform_device *pdev;
+ 	void __iomem *regs;
+@@ -72,9 +179,151 @@ to_vc4_encoder(struct drm_encoder *encod
+ 	return container_of(encoder, struct vc4_encoder, base);
+ }
+ 
++#define V3D_READ(offset) readl(vc4->v3d->regs + offset)
++#define V3D_WRITE(offset, val) writel(val, vc4->v3d->regs + offset)
+ #define HVS_READ(offset) readl(vc4->hvs->regs + offset)
+ #define HVS_WRITE(offset, val) writel(val, vc4->hvs->regs + offset)
+ 
++enum vc4_bo_mode {
++	VC4_MODE_UNDECIDED,
++	VC4_MODE_RENDER,
++	VC4_MODE_SHADER,
++};
++
++struct vc4_bo_exec_state {
++	struct drm_gem_cma_object *bo;
++	enum vc4_bo_mode mode;
++};
++
++struct vc4_exec_info {
++	/* Sequence number for this bin/render job. */
++	uint64_t seqno;
++
++	/* Kernel-space copy of the ioctl arguments */
++	struct drm_vc4_submit_cl *args;
++
++	/* This is the array of BOs that were looked up at the start of exec.
++	 * Command validation will use indices into this array.
++	 */
++	struct vc4_bo_exec_state *bo;
++	uint32_t bo_count;
++
++	/* Pointers for our position in vc4->job_list */
++	struct list_head head;
++
++	/* List of other BOs used in the job that need to be released
++	 * once the job is complete.
++	 */
++	struct list_head unref_list;
++
++	/* Current unvalidated indices into @bo loaded by the non-hardware
++	 * VC4_PACKET_GEM_HANDLES.
++	 */
++	uint32_t bo_index[2];
++
++	/* This is the BO where we store the validated command lists, shader
++	 * records, and uniforms.
++	 */
++	struct drm_gem_cma_object *exec_bo;
++
++	/**
++	 * This tracks the per-shader-record state (packet 64) that
++	 * determines the length of the shader record and the offset
++	 * it's expected to be found at.  It gets read in from the
++	 * command lists.
++	 */
++	struct vc4_shader_state {
++		uint8_t packet;
++		uint32_t addr;
++		/* Maximum vertex index referenced by any primitive using this
++		 * shader state.
++		 */
++		uint32_t max_index;
++	} *shader_state;
++
++	/** How many shader states the user declared they were using. */
++	uint32_t shader_state_size;
++	/** How many shader state records the validator has seen. */
++	uint32_t shader_state_count;
++
++	bool found_tile_binning_mode_config_packet;
++	bool found_start_tile_binning_packet;
++	bool found_increment_semaphore_packet;
++	uint8_t bin_tiles_x, bin_tiles_y;
++	struct drm_gem_cma_object *tile_bo;
++	uint32_t tile_alloc_offset;
++
++	/**
++	 * Computed addresses pointing into exec_bo where we start the
++	 * bin thread (ct0) and render thread (ct1).
++	 */
++	uint32_t ct0ca, ct0ea;
++	uint32_t ct1ca, ct1ea;
++
++	/* Pointers to the shader recs.  The paddr gets incremented as CL
++	 * packets are relocated in validate_gl_shader_state, and the vaddrs
++	 * (u and v) get incremented and size decremented as the shader recs
++	 * themselves are validated.
++	 */
++	void *shader_rec_u;
++	void *shader_rec_v;
++	uint32_t shader_rec_p;
++	uint32_t shader_rec_size;
++
++	/* Pointers to the uniform data.  These pointers are incremented, and
++	 * size decremented, as each batch of uniforms is uploaded.
++	 */
++	void *uniforms_u;
++	void *uniforms_v;
++	uint32_t uniforms_p;
++	uint32_t uniforms_size;
++};
++
++static inline struct vc4_exec_info *
++vc4_first_job(struct vc4_dev *vc4)
++{
++	if (list_empty(&vc4->job_list))
++		return NULL;
++	return list_first_entry(&vc4->job_list, struct vc4_exec_info, head);
++}
++
++/**
++ * struct vc4_texture_sample_info - saves the offsets into the UBO for texture
++ * setup parameters.
++ *
++ * This will be used at draw time to relocate the reference to the texture
++ * contents in p0, and validate that the offset combined with
++ * width/height/stride/etc. from p1 and p2/p3 doesn't sample outside the BO.
++ * Note that the hardware treats unprovided config parameters as 0, so not all
++ * of them need to be set up for every texture sample, and we'll store ~0 as
++ * the offset to mark the unused ones.
++ *
++ * See the VC4 3D architecture guide page 41 ("Texture and Memory Lookup Unit
++ * Setup") for definitions of the texture parameters.
++ */
++struct vc4_texture_sample_info {
++	bool is_direct;
++	uint32_t p_offset[4];
++};
++
++/**
++ * struct vc4_validated_shader_info - information about validated shaders that
++ * needs to be used from command list validation.
++ *
++ * For a given shader, each time a shader state record references it, we need
++ * to verify that the shader doesn't read more uniforms than the shader state
++ * record's uniform BO pointer can provide, and we need to apply relocations
++ * and validate the shader state record's uniforms that define the texture
++ * samples.
++ */
++struct vc4_validated_shader_info
++{
++	uint32_t uniforms_size;
++	uint32_t uniforms_src_size;
++	uint32_t num_texture_samples;
++	struct vc4_texture_sample_info *texture_samples;
++};
++
+ /**
+  * _wait_for - magic (register) wait macro
+  *
+@@ -111,6 +360,18 @@ int vc4_dumb_create(struct drm_file *fil
+ 		    struct drm_mode_create_dumb *args);
+ struct dma_buf *vc4_prime_export(struct drm_device *dev,
+ 				 struct drm_gem_object *obj, int flags);
++int vc4_create_bo_ioctl(struct drm_device *dev, void *data,
++			struct drm_file *file_priv);
++int vc4_create_shader_bo_ioctl(struct drm_device *dev, void *data,
++			       struct drm_file *file_priv);
++int vc4_mmap_bo_ioctl(struct drm_device *dev, void *data,
++		      struct drm_file *file_priv);
++int vc4_mmap(struct file *filp, struct vm_area_struct *vma);
++int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
++void *vc4_prime_vmap(struct drm_gem_object *obj);
++void vc4_bo_cache_init(struct drm_device *dev);
++void vc4_bo_cache_destroy(struct drm_device *dev);
++int vc4_bo_stats_debugfs(struct seq_file *m, void *arg);
+ 
+ /* vc4_crtc.c */
+ extern struct platform_driver vc4_crtc_driver;
+@@ -126,10 +387,34 @@ void vc4_debugfs_cleanup(struct drm_mino
+ /* vc4_drv.c */
+ void __iomem *vc4_ioremap_regs(struct platform_device *dev, int index);
+ 
++/* vc4_gem.c */
++void vc4_gem_init(struct drm_device *dev);
++void vc4_gem_destroy(struct drm_device *dev);
++int vc4_submit_cl_ioctl(struct drm_device *dev, void *data,
++			struct drm_file *file_priv);
++int vc4_wait_seqno_ioctl(struct drm_device *dev, void *data,
++			 struct drm_file *file_priv);
++int vc4_wait_bo_ioctl(struct drm_device *dev, void *data,
++		      struct drm_file *file_priv);
++void vc4_submit_next_job(struct drm_device *dev);
++int vc4_wait_for_seqno(struct drm_device *dev, uint64_t seqno,
++		       uint64_t timeout_ns, bool interruptible);
++void vc4_job_handle_completed(struct vc4_dev *vc4);
++int vc4_queue_seqno_cb(struct drm_device *dev,
++		       struct vc4_seqno_cb *cb, uint64_t seqno,
++		       void (*func)(struct vc4_seqno_cb *cb));
++
+ /* vc4_hdmi.c */
+ extern struct platform_driver vc4_hdmi_driver;
+ int vc4_hdmi_debugfs_regs(struct seq_file *m, void *unused);
+ 
++/* vc4_irq.c */
++irqreturn_t vc4_irq(int irq, void *arg);
++void vc4_irq_preinstall(struct drm_device *dev);
++int vc4_irq_postinstall(struct drm_device *dev);
++void vc4_irq_uninstall(struct drm_device *dev);
++void vc4_irq_reset(struct drm_device *dev);
++
+ /* vc4_hvs.c */
+ extern struct platform_driver vc4_hvs_driver;
+ void vc4_hvs_dump_state(struct drm_device *dev);
+@@ -143,3 +428,35 @@ struct drm_plane *vc4_plane_init(struct
+ 				 enum drm_plane_type type);
+ u32 vc4_plane_write_dlist(struct drm_plane *plane, u32 __iomem *dlist);
+ u32 vc4_plane_dlist_size(struct drm_plane_state *state);
++void vc4_plane_async_set_fb(struct drm_plane *plane, struct drm_framebuffer *fb);
++
++/* vc4_v3d.c */
++extern struct platform_driver vc4_v3d_driver;
++int vc4_v3d_debugfs_ident(struct seq_file *m, void *unused);
++int vc4_v3d_debugfs_regs(struct seq_file *m, void *unused);
++int vc4_v3d_set_power(struct vc4_dev *vc4, bool on);
++
++/* vc4_validate.c */
++int
++vc4_validate_bin_cl(struct drm_device *dev,
++		    void *validated,
++		    void *unvalidated,
++		    struct vc4_exec_info *exec);
++
++int
++vc4_validate_shader_recs(struct drm_device *dev, struct vc4_exec_info *exec);
++
++struct vc4_validated_shader_info *
++vc4_validate_shader(struct drm_gem_cma_object *shader_obj);
++
++bool vc4_use_bo(struct vc4_exec_info *exec,
++		uint32_t hindex,
++		enum vc4_bo_mode mode,
++		struct drm_gem_cma_object **obj);
++
++int vc4_get_rcl(struct drm_device *dev, struct vc4_exec_info *exec);
++
++bool vc4_check_tex_size(struct vc4_exec_info *exec,
++			struct drm_gem_cma_object *fbo,
++			uint32_t offset, uint8_t tiling_format,
++			uint32_t width, uint32_t height, uint8_t cpp);
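
The seqno fields declared above are the whole synchronization model: each BO
records the seqno of the last job that rendered to it, and the device tracks
the last job the GPU has finished. A minimal sketch of the resulting idle
test, using a hypothetical helper name that is not part of this patch, is:

	/* Sketch, not part of this patch: a BO is safe to reuse or scan out
	 * once the GPU's finished_seqno has reached the BO's own seqno.
	 */
	static inline bool vc4_seqno_done(struct vc4_dev *vc4, uint64_t seqno)
	{
		return vc4->finished_seqno >= seqno;
	}

vc4_gem.c and vc4_kms.c below open-code exactly this comparison in
vc4_wait_for_seqno(), vc4_job_handle_completed() and vc4_queue_seqno_cb().
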
+--- /dev/null
++++ b/drivers/gpu/drm/vc4/vc4_gem.c
+@@ -0,0 +1,686 @@
++/*
++ * Copyright © 2014 Broadcom
++ *
++ * Permission is hereby granted, free of charge, to any person obtaining a
++ * copy of this software and associated documentation files (the "Software"),
++ * to deal in the Software without restriction, including without limitation
++ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
++ * and/or sell copies of the Software, and to permit persons to whom the
++ * Software is furnished to do so, subject to the following conditions:
++ *
++ * The above copyright notice and this permission notice (including the next
++ * paragraph) shall be included in all copies or substantial portions of the
++ * Software.
++ *
++ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
++ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
++ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
++ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
++ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
++ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
++ * IN THE SOFTWARE.
++ */
++
++#include <linux/module.h>
++#include <linux/platform_device.h>
++#include <linux/device.h>
++#include <linux/io.h>
++
++#include "uapi/drm/vc4_drm.h"
++#include "vc4_drv.h"
++#include "vc4_regs.h"
++#include "vc4_trace.h"
++
++static void
++vc4_queue_hangcheck(struct drm_device *dev)
++{
++	struct vc4_dev *vc4 = to_vc4_dev(dev);
++
++	mod_timer(&vc4->hangcheck.timer,
++		  round_jiffies_up(jiffies + msecs_to_jiffies(100)));
++}
++
++static void
++vc4_reset(struct drm_device *dev)
++{
++	struct vc4_dev *vc4 = to_vc4_dev(dev);
++
++	DRM_INFO("Resetting GPU.\n");
++	vc4_v3d_set_power(vc4, false);
++	vc4_v3d_set_power(vc4, true);
++
++	vc4_irq_reset(dev);
++
++	/* Rearm the hangcheck -- another job might have been waiting
++	 * for our hung one to get kicked off, and vc4_irq_reset()
++	 * would have started it.
++	 */
++	vc4_queue_hangcheck(dev);
++}
++
++static void
++vc4_reset_work(struct work_struct *work)
++{
++	struct vc4_dev *vc4 =
++		container_of(work, struct vc4_dev, hangcheck.reset_work);
++
++	vc4_reset(vc4->dev);
++}
++
++static void
++vc4_hangcheck_elapsed(unsigned long data)
++{
++	struct drm_device *dev = (struct drm_device *)data;
++	struct vc4_dev *vc4 = to_vc4_dev(dev);
++	uint32_t ct0ca, ct1ca;
++
++	/* If idle, we can stop watching for hangs. */
++	if (list_empty(&vc4->job_list))
++		return;
++
++	ct0ca = V3D_READ(V3D_CTNCA(0));
++	ct1ca = V3D_READ(V3D_CTNCA(1));
++
++	/* If we've made any progress in execution, rearm the timer
++	 * and wait.
++	 */
++	if (ct0ca != vc4->hangcheck.last_ct0ca ||
++	    ct1ca != vc4->hangcheck.last_ct1ca) {
++		vc4->hangcheck.last_ct0ca = ct0ca;
++		vc4->hangcheck.last_ct1ca = ct1ca;
++		vc4_queue_hangcheck(dev);
++		return;
++	}
++
++	/* We've gone too long with no progress, reset.  This has to
++	 * be done from a work struct, since resetting can sleep and
++	 * this timer hook isn't allowed to.
++	 */
++	schedule_work(&vc4->hangcheck.reset_work);
++}
++
++static void
++submit_cl(struct drm_device *dev, uint32_t thread, uint32_t start, uint32_t end)
++{
++	struct vc4_dev *vc4 = to_vc4_dev(dev);
++
++	/* Stop any existing thread and set state to "stopped at halt" */
++	V3D_WRITE(V3D_CTNCS(thread), V3D_CTRUN);
++	barrier();
++
++	V3D_WRITE(V3D_CTNCA(thread), start);
++	barrier();
++
++	/* Set the end address of the control list.  Writing this
++	 * register is what starts the job.
++	 */
++	V3D_WRITE(V3D_CTNEA(thread), end);
++	barrier();
++}
++
++int
++vc4_wait_for_seqno(struct drm_device *dev, uint64_t seqno, uint64_t timeout_ns,
++		   bool interruptible)
++{
++	struct vc4_dev *vc4 = to_vc4_dev(dev);
++	int ret = 0;
++	unsigned long timeout_expire;
++	DEFINE_WAIT(wait);
++
++	if (vc4->finished_seqno >= seqno)
++		return 0;
++
++	if (timeout_ns == 0)
++		return -ETIME;
++
++	timeout_expire = jiffies + nsecs_to_jiffies(timeout_ns);
++
++	trace_vc4_wait_for_seqno_begin(dev, seqno, timeout_ns);
++	for (;;) {
++		prepare_to_wait(&vc4->job_wait_queue, &wait,
++				interruptible ? TASK_INTERRUPTIBLE :
++				TASK_UNINTERRUPTIBLE);
++
++		if (interruptible && signal_pending(current)) {
++			ret = -ERESTARTSYS;
++			break;
++		}
++
++		if (vc4->finished_seqno >= seqno)
++			break;
++
++		if (timeout_ns != ~0ull) {
++			if (time_after_eq(jiffies, timeout_expire)) {
++				ret = -ETIME;
++				break;
++			}
++			schedule_timeout(timeout_expire - jiffies);
++		} else {
++			schedule();
++		}
++	}
++
++	finish_wait(&vc4->job_wait_queue, &wait);
++	trace_vc4_wait_for_seqno_end(dev, seqno);
++
++	if (ret && ret != -ERESTARTSYS) {
++		DRM_ERROR("timeout waiting for render thread idle\n");
++		return ret;
++	}
++
++	return 0;
++}
++
++static void
++vc4_flush_caches(struct drm_device *dev)
++{
++	struct vc4_dev *vc4 = to_vc4_dev(dev);
++
++	/* Flush the GPU L2 caches.  These caches sit on top of system
++	 * L3 (the 128kb or so shared with the CPU), and are
++	 * non-allocating in the L3.
++	 */
++	V3D_WRITE(V3D_L2CACTL,
++		  V3D_L2CACTL_L2CCLR);
++
++	V3D_WRITE(V3D_SLCACTL,
++		  VC4_SET_FIELD(0xf, V3D_SLCACTL_T1CC) |
++		  VC4_SET_FIELD(0xf, V3D_SLCACTL_T0CC) |
++		  VC4_SET_FIELD(0xf, V3D_SLCACTL_UCC) |
++		  VC4_SET_FIELD(0xf, V3D_SLCACTL_ICC));
++}
++
++/* Sets the registers for the next job to actually be executed in
++ * the hardware.
++ *
++ * The job_lock should be held during this.
++ */
++void
++vc4_submit_next_job(struct drm_device *dev)
++{
++	struct vc4_dev *vc4 = to_vc4_dev(dev);
++	struct vc4_exec_info *exec = vc4_first_job(vc4);
++
++	if (!exec)
++		return;
++
++	vc4_flush_caches(dev);
++
++	/* Disable the binner's pre-loaded overflow memory address */
++	V3D_WRITE(V3D_BPOA, 0);
++	V3D_WRITE(V3D_BPOS, 0);
++
++	if (exec->ct0ca != exec->ct0ea)
++		submit_cl(dev, 0, exec->ct0ca, exec->ct0ea);
++	submit_cl(dev, 1, exec->ct1ca, exec->ct1ea);
++}
++
++static void
++vc4_update_bo_seqnos(struct vc4_exec_info *exec, uint64_t seqno)
++{
++	struct vc4_bo *bo;
++	unsigned i;
++
++	for (i = 0; i < exec->bo_count; i++) {
++		bo = to_vc4_bo(&exec->bo[i].bo->base);
++		bo->seqno = seqno;
++	}
++
++	list_for_each_entry(bo, &exec->unref_list, unref_head) {
++		bo->seqno = seqno;
++	}
++}
++
++/* Queues a struct vc4_exec_info for execution.  If no job is
++ * currently executing, then submits it.
++ *
++ * Unlike most GPUs, our hardware only handles one command list at a
++ * time.  To queue multiple jobs at once, we'd need to edit the
++ * previous command list to have a jump to the new one at the end, and
++ * then bump the end address.  That's a change for a later date,
++ * though.
++ */
++static void
++vc4_queue_submit(struct drm_device *dev, struct vc4_exec_info *exec)
++{
++	struct vc4_dev *vc4 = to_vc4_dev(dev);
++	uint64_t seqno = ++vc4->emit_seqno;
++	unsigned long irqflags;
++
++	exec->seqno = seqno;
++	vc4_update_bo_seqnos(exec, seqno);
++
++	spin_lock_irqsave(&vc4->job_lock, irqflags);
++	list_add_tail(&exec->head, &vc4->job_list);
++
++	/* If no job was executing, kick ours off.  Otherwise, it'll
++	 * get started when the previous job's frame done interrupt
++	 * occurs.
++	 */
++	if (vc4_first_job(vc4) == exec) {
++		vc4_submit_next_job(dev);
++		vc4_queue_hangcheck(dev);
++	}
++
++	spin_unlock_irqrestore(&vc4->job_lock, irqflags);
++}
++
++/**
++ * Looks up a bunch of GEM handles for BOs and stores the array for
++ * use in the command validator that actually writes relocated
++ * addresses pointing to them.
++ */
++static int
++vc4_cl_lookup_bos(struct drm_device *dev,
++		  struct drm_file *file_priv,
++		  struct vc4_exec_info *exec)
++{
++	struct drm_vc4_submit_cl *args = exec->args;
++	uint32_t *handles;
++	int ret = 0;
++	int i;
++
++	exec->bo_count = args->bo_handle_count;
++
++	if (!exec->bo_count) {
++		/* See comment on bo_index for why we have to check
++		 * this.
++		 */
++		DRM_ERROR("Rendering requires BOs to validate\n");
++		return -EINVAL;
++	}
++
++	exec->bo = kcalloc(exec->bo_count, sizeof(struct vc4_bo_exec_state),
++			   GFP_KERNEL);
++	if (!exec->bo) {
++		DRM_ERROR("Failed to allocate validated BO pointers\n");
++		return -ENOMEM;
++	}
++
++	handles = drm_malloc_ab(exec->bo_count, sizeof(uint32_t));
++	if (!handles) {
++		DRM_ERROR("Failed to allocate incoming GEM handles\n");
++		goto fail;
++	}
++
++	if (copy_from_user(handles,
++			   (void __user *)(uintptr_t)args->bo_handles,
++			   exec->bo_count * sizeof(uint32_t))) {
++		ret = -EFAULT;
++		DRM_ERROR("Failed to copy in GEM handles\n");
++		goto fail;
++	}
++
++	spin_lock(&file_priv->table_lock);
++	for (i = 0; i < exec->bo_count; i++) {
++		struct drm_gem_object *bo = idr_find(&file_priv->object_idr,
++						     handles[i]);
++		if (!bo) {
++			DRM_ERROR("Failed to look up GEM BO %d: %d\n",
++				  i, handles[i]);
++			ret = -EINVAL;
++			spin_unlock(&file_priv->table_lock);
++			goto fail;
++		}
++		drm_gem_object_reference(bo);
++		exec->bo[i].bo = (struct drm_gem_cma_object *)bo;
++	}
++	spin_unlock(&file_priv->table_lock);
++
++fail:
++	kfree(handles);
++	return ret;
++}
++
++static int
++vc4_get_bcl(struct drm_device *dev, struct vc4_exec_info *exec)
++{
++	struct drm_vc4_submit_cl *args = exec->args;
++	void *temp = NULL;
++	void *bin;
++	int ret = 0;
++	uint32_t bin_offset = 0;
++	uint32_t shader_rec_offset = roundup(bin_offset + args->bin_cl_size,
++					     16);
++	uint32_t uniforms_offset = shader_rec_offset + args->shader_rec_size;
++	uint32_t exec_size = uniforms_offset + args->uniforms_size;
++	uint32_t temp_size = exec_size + (sizeof(struct vc4_shader_state) *
++					  args->shader_rec_count);
++	struct vc4_bo *bo;
++
++	if (uniforms_offset < shader_rec_offset ||
++	    exec_size < uniforms_offset ||
++	    args->shader_rec_count >= (UINT_MAX /
++					  sizeof(struct vc4_shader_state)) ||
++	    temp_size < exec_size) {
++		DRM_ERROR("overflow in exec arguments\n");
++		goto fail;
++	}
++
++	/* Allocate space where we'll store the copied in user command lists
++	 * and shader records.
++	 *
++	 * We don't just copy directly into the BOs because we need to
++	 * read the contents back for validation, and I think the
++	 * bo->vaddr is uncached access.
++	 */
++	temp = kmalloc(temp_size, GFP_KERNEL);
++	if (!temp) {
++		DRM_ERROR("Failed to allocate storage for copying "
++			  "in bin/render CLs.\n");
++		ret = -ENOMEM;
++		goto fail;
++	}
++	bin = temp + bin_offset;
++	exec->shader_rec_u = temp + shader_rec_offset;
++	exec->uniforms_u = temp + uniforms_offset;
++	exec->shader_state = temp + exec_size;
++	exec->shader_state_size = args->shader_rec_count;
++
++	ret = copy_from_user(bin,
++			     (void __user *)(uintptr_t)args->bin_cl,
++			     args->bin_cl_size);
++	if (ret) {
++		DRM_ERROR("Failed to copy in bin cl\n");
++		goto fail;
++	}
++
++	ret = copy_from_user(exec->shader_rec_u,
++			     (void __user *)(uintptr_t)args->shader_rec,
++			     args->shader_rec_size);
++	if (ret) {
++		DRM_ERROR("Failed to copy in shader recs\n");
++		goto fail;
++	}
++
++	ret = copy_from_user(exec->uniforms_u,
++			     (void __user *)(uintptr_t)args->uniforms,
++			     args->uniforms_size);
++	if (ret) {
++		DRM_ERROR("Failed to copy in uniforms cl\n");
++		goto fail;
++	}
++
++	bo = vc4_bo_create(dev, exec_size);
++	if (!bo) {
++		DRM_ERROR("Couldn't allocate BO for binning\n");
++		ret = -ENOMEM;
++		goto fail;
++	}
++	exec->exec_bo = &bo->base;
++
++	list_add_tail(&to_vc4_bo(&exec->exec_bo->base)->unref_head,
++		      &exec->unref_list);
++
++	exec->ct0ca = exec->exec_bo->paddr + bin_offset;
++
++	exec->shader_rec_v = exec->exec_bo->vaddr + shader_rec_offset;
++	exec->shader_rec_p = exec->exec_bo->paddr + shader_rec_offset;
++	exec->shader_rec_size = args->shader_rec_size;
++
++	exec->uniforms_v = exec->exec_bo->vaddr + uniforms_offset;
++	exec->uniforms_p = exec->exec_bo->paddr + uniforms_offset;
++	exec->uniforms_size = args->uniforms_size;
++
++	ret = vc4_validate_bin_cl(dev,
++				  exec->exec_bo->vaddr + bin_offset,
++				  bin,
++				  exec);
++	if (ret)
++		goto fail;
++
++	ret = vc4_validate_shader_recs(dev, exec);
++
++fail:
++	kfree(temp);
++	return ret;
++}
++
++static void
++vc4_complete_exec(struct vc4_exec_info *exec)
++{
++	unsigned i;
++
++	if (exec->bo) {
++		for (i = 0; i < exec->bo_count; i++)
++			drm_gem_object_unreference(&exec->bo[i].bo->base);
++		kfree(exec->bo);
++	}
++
++	while (!list_empty(&exec->unref_list)) {
++		struct vc4_bo *bo = list_first_entry(&exec->unref_list,
++						     struct vc4_bo, unref_head);
++		list_del(&bo->unref_head);
++		drm_gem_object_unreference(&bo->base.base);
++	}
++
++	kfree(exec);
++}
++
++void
++vc4_job_handle_completed(struct vc4_dev *vc4)
++{
++	unsigned long irqflags;
++	struct vc4_seqno_cb *cb, *cb_temp;
++
++	spin_lock_irqsave(&vc4->job_lock, irqflags);
++	while (!list_empty(&vc4->job_done_list)) {
++		struct vc4_exec_info *exec =
++			list_first_entry(&vc4->job_done_list,
++					 struct vc4_exec_info, head);
++		list_del(&exec->head);
++
++		spin_unlock_irqrestore(&vc4->job_lock, irqflags);
++		vc4_complete_exec(exec);
++		spin_lock_irqsave(&vc4->job_lock, irqflags);
++	}
++	spin_unlock_irqrestore(&vc4->job_lock, irqflags);
++
++	list_for_each_entry_safe(cb, cb_temp, &vc4->seqno_cb_list, work.entry) {
++		if (cb->seqno <= vc4->finished_seqno) {
++			list_del_init(&cb->work.entry);
++			schedule_work(&cb->work);
++		}
++	}
++}
++
++static void vc4_seqno_cb_work(struct work_struct *work)
++{
++	struct vc4_seqno_cb *cb = container_of(work, struct vc4_seqno_cb, work);
++	cb->func(cb);
++}
++
++int vc4_queue_seqno_cb(struct drm_device *dev,
++		       struct vc4_seqno_cb *cb, uint64_t seqno,
++		       void (*func)(struct vc4_seqno_cb *cb))
++{
++	struct vc4_dev *vc4 = to_vc4_dev(dev);
++	int ret = 0;
++
++	cb->func = func;
++	INIT_WORK(&cb->work, vc4_seqno_cb_work);
++
++	mutex_lock(&dev->struct_mutex);
++	if (seqno > vc4->finished_seqno) {
++		cb->seqno = seqno;
++		list_add_tail(&cb->work.entry, &vc4->seqno_cb_list);
++	} else {
++		schedule_work(&cb->work);
++	}
++	mutex_unlock(&dev->struct_mutex);
++
++	return ret;
++}
++
++/* Scheduled when any job has been completed, this walks the list of
++ * jobs that had completed and unrefs their BOs and frees their exec
++ * structs.
++ */
++static void
++vc4_job_done_work(struct work_struct *work)
++{
++	struct vc4_dev *vc4 =
++		container_of(work, struct vc4_dev, job_done_work);
++	struct drm_device *dev = vc4->dev;
++
++	/* Need the struct lock for drm_gem_object_unreference(). */
++	mutex_lock(&dev->struct_mutex);
++	vc4_job_handle_completed(vc4);
++	mutex_unlock(&dev->struct_mutex);
++}
++
++static int
++vc4_wait_for_seqno_ioctl_helper(struct drm_device *dev,
++				uint64_t seqno,
++				uint64_t *timeout_ns)
++{
++	unsigned long start = jiffies;
++	int ret = vc4_wait_for_seqno(dev, seqno, *timeout_ns, true);
++
++	if ((ret == -EINTR || ret == -ERESTARTSYS) && *timeout_ns != ~0ull) {
++		uint64_t delta = jiffies_to_nsecs(jiffies - start);
++		if (*timeout_ns >= delta)
++			*timeout_ns -= delta;
++	}
++
++	return ret;
++}
++
++int
++vc4_wait_seqno_ioctl(struct drm_device *dev, void *data,
++		     struct drm_file *file_priv)
++{
++	struct drm_vc4_wait_seqno *args = data;
++
++	return vc4_wait_for_seqno_ioctl_helper(dev, args->seqno,
++					       &args->timeout_ns);
++}
++
++int
++vc4_wait_bo_ioctl(struct drm_device *dev, void *data,
++		  struct drm_file *file_priv)
++{
++	int ret;
++	struct drm_vc4_wait_bo *args = data;
++	struct drm_gem_object *gem_obj;
++	struct vc4_bo *bo;
++
++	gem_obj = drm_gem_object_lookup(dev, file_priv, args->handle);
++	if (!gem_obj) {
++		DRM_ERROR("Failed to look up GEM BO %d\n", args->handle);
++		return -EINVAL;
++	}
++	bo = to_vc4_bo(gem_obj);
++
++	ret = vc4_wait_for_seqno_ioctl_helper(dev, bo->seqno, &args->timeout_ns);
++
++	drm_gem_object_unreference(gem_obj);
++	return ret;
++}
++
++/**
++ * Submits a command list to the VC4.
++ *
++ * This is what is called batchbuffer emitting on other hardware.
++ */
++int
++vc4_submit_cl_ioctl(struct drm_device *dev, void *data,
++		    struct drm_file *file_priv)
++{
++	struct vc4_dev *vc4 = to_vc4_dev(dev);
++	struct drm_vc4_submit_cl *args = data;
++	struct vc4_exec_info *exec;
++	int ret;
++
++	if ((args->flags & ~VC4_SUBMIT_CL_USE_CLEAR_COLOR) != 0) {
++		DRM_ERROR("Unknown flags: 0x%02x\n", args->flags);
++		return -EINVAL;
++	}
++
++	exec = kcalloc(1, sizeof(*exec), GFP_KERNEL);
++	if (!exec) {
++		DRM_ERROR("malloc failure on exec struct\n");
++		return -ENOMEM;
++	}
++
++	exec->args = args;
++	INIT_LIST_HEAD(&exec->unref_list);
++
++	mutex_lock(&dev->struct_mutex);
++
++	ret = vc4_cl_lookup_bos(dev, file_priv, exec);
++	if (ret)
++		goto fail;
++
++	if (exec->args->bin_cl_size != 0) {
++		ret = vc4_get_bcl(dev, exec);
++		if (ret)
++			goto fail;
++	} else {
++		exec->ct0ca = exec->ct0ea = 0;
++	}
++
++	ret = vc4_get_rcl(dev, exec);
++	if (ret)
++		goto fail;
++
++	/* Clear this out of the struct we'll be putting in the queue,
++	 * since it's part of our stack.
++	 */
++	exec->args = NULL;
++
++	vc4_queue_submit(dev, exec);
++
++	/* Return the seqno for our job. */
++	args->seqno = vc4->emit_seqno;
++
++	mutex_unlock(&dev->struct_mutex);
++
++	return 0;
++
++fail:
++	vc4_complete_exec(exec);
++
++	mutex_unlock(&dev->struct_mutex);
++
++	return ret;
++}
++
++void
++vc4_gem_init(struct drm_device *dev)
++{
++	struct vc4_dev *vc4 = to_vc4_dev(dev);
++
++	INIT_LIST_HEAD(&vc4->job_list);
++	INIT_LIST_HEAD(&vc4->job_done_list);
++	INIT_LIST_HEAD(&vc4->seqno_cb_list);
++	spin_lock_init(&vc4->job_lock);
++
++	INIT_WORK(&vc4->hangcheck.reset_work, vc4_reset_work);
++	setup_timer(&vc4->hangcheck.timer,
++		    vc4_hangcheck_elapsed,
++		    (unsigned long) dev);
++
++	INIT_WORK(&vc4->job_done_work, vc4_job_done_work);
++}
++
++void
++vc4_gem_destroy(struct drm_device *dev)
++{
++	struct vc4_dev *vc4 = to_vc4_dev(dev);
++
++	/* Waiting for exec to finish would need to be done before
++	 * unregistering V3D.
++	 */
++	WARN_ON(vc4->emit_seqno != vc4->finished_seqno);
++
++	/* V3D should already have disabled its interrupt and cleared
++	 * the overflow allocation registers.  Now free the object.
++	 */
++	if (vc4->overflow_mem) {
++		drm_gem_object_unreference_unlocked(&vc4->overflow_mem->base.base);
++		vc4->overflow_mem = NULL;
++	}
++
++	vc4_bo_cache_destroy(dev);
++}
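
From userspace, the submit and wait handlers above combine into a simple flow:
fill in struct drm_vc4_submit_cl with user pointers (cast to u64) for the bin
CL, shader records, uniforms and BO handle list, submit it, then wait on the
seqno the kernel hands back. The sketch below is illustrative only: the
authoritative layouts live in uapi/drm/vc4_drm.h (not reproduced in this
excerpt), so the field names are inferred from the kernel-side accesses above,
only the fields visible here are shown (the render-config fields consumed by
vc4_get_rcl() are omitted), and the ioctl macros are assumed to follow the
usual DRM_IOCTL_VC4_* naming from the ioctl table in vc4_drv.c.

	/* Illustrative userspace sketch; see the caveats above. */
	struct drm_vc4_submit_cl submit = {
		.bin_cl           = (uintptr_t)bin_cl_buf,
		.bin_cl_size      = bin_cl_len,
		.shader_rec       = (uintptr_t)shader_recs,
		.shader_rec_size  = shader_rec_len,
		.shader_rec_count = num_shader_recs,
		.uniforms         = (uintptr_t)uniform_data,
		.uniforms_size    = uniform_len,
		.bo_handles       = (uintptr_t)bo_handles,
		.bo_handle_count  = num_bo_handles,
	};

	if (drmIoctl(fd, DRM_IOCTL_VC4_SUBMIT_CL, &submit) == 0) {
		/* The kernel filled in submit.seqno for the queued job. */
		struct drm_vc4_wait_seqno wait = {
			.seqno      = submit.seqno,
			.timeout_ns = 1000000000ull,	/* 1s; decremented across EINTR */
		};
		drmIoctl(fd, DRM_IOCTL_VC4_WAIT_SEQNO, &wait);
	}
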
+--- /dev/null
++++ b/drivers/gpu/drm/vc4/vc4_irq.c
+@@ -0,0 +1,211 @@
++/*
++ * Copyright © 2014 Broadcom
++ *
++ * Permission is hereby granted, free of charge, to any person obtaining a
++ * copy of this software and associated documentation files (the "Software"),
++ * to deal in the Software without restriction, including without limitation
++ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
++ * and/or sell copies of the Software, and to permit persons to whom the
++ * Software is furnished to do so, subject to the following conditions:
++ *
++ * The above copyright notice and this permission notice (including the next
++ * paragraph) shall be included in all copies or substantial portions of the
++ * Software.
++ *
++ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
++ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
++ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
++ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
++ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
++ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
++ * IN THE SOFTWARE.
++ */
++
++/** DOC: Interrupt management for the V3D engine.
++ *
++ * We have an interrupt status register (V3D_INTCTL) which reports
++ * interrupts, and where writing 1 bits clears those interrupts.
++ * There are also a pair of interrupt registers
++ * (V3D_INTENA/V3D_INTDIS) where writing a 1 to their bits enables or
++ * disables that specific interrupt, and 0s written are ignored
++ * (reading either one returns the set of enabled interrupts).
++ *
++ * When we take a render frame interrupt, we need to wake the
++ * processes waiting for some frame to be done, and get the next frame
++ * submitted ASAP (so the hardware doesn't sit idle when there's work
++ * to do).
++ *
++ * When we take the binner out of memory interrupt, we need to
++ * allocate some new memory and pass it to the binner so that the
++ * current job can make progress.
++ */
++
++#include "vc4_drv.h"
++#include "vc4_regs.h"
++
++#define V3D_DRIVER_IRQS (V3D_INT_OUTOMEM | \
++			 V3D_INT_FRDONE)
++
++DECLARE_WAIT_QUEUE_HEAD(render_wait);
++
++static void
++vc4_overflow_mem_work(struct work_struct *work)
++{
++	struct vc4_dev *vc4 =
++		container_of(work, struct vc4_dev, overflow_mem_work);
++	struct drm_device *dev = vc4->dev;
++	struct vc4_bo *bo;
++
++	bo = vc4_bo_create(dev, 256 * 1024);
++	if (!bo) {
++		DRM_ERROR("Couldn't allocate binner overflow mem\n");
++		return;
++	}
++
++	/* If there's a job executing currently, then our previous
++	 * overflow allocation is getting used in that job and we need
++	 * to queue it to be released when the job is done.  But if no
++	 * job is executing at all, then we can free the old overflow
++	 * object directly.
++	 *
++	 * No lock necessary for this pointer since we're the only
++	 * ones that update the pointer, and our workqueue won't
++	 * reenter.
++	 */
++	if (vc4->overflow_mem) {
++		struct vc4_exec_info *current_exec;
++		unsigned long irqflags;
++
++		spin_lock_irqsave(&vc4->job_lock, irqflags);
++		current_exec = vc4_first_job(vc4);
++		if (current_exec) {
++			vc4->overflow_mem->seqno = vc4->finished_seqno + 1;
++			list_add_tail(&vc4->overflow_mem->unref_head,
++				      &current_exec->unref_list);
++			vc4->overflow_mem = NULL;
++		}
++		spin_unlock_irqrestore(&vc4->job_lock, irqflags);
++	}
++
++	if (vc4->overflow_mem) {
++		drm_gem_object_unreference_unlocked(&vc4->overflow_mem->base.base);
++	}
++	vc4->overflow_mem = bo;
++
++	V3D_WRITE(V3D_BPOA, bo->base.paddr);
++	V3D_WRITE(V3D_BPOS, bo->base.base.size);
++	V3D_WRITE(V3D_INTCTL, V3D_INT_OUTOMEM);
++	V3D_WRITE(V3D_INTENA, V3D_INT_OUTOMEM);
++}
++
++static void
++vc4_irq_finish_job(struct drm_device *dev)
++{
++	struct vc4_dev *vc4 = to_vc4_dev(dev);
++	struct vc4_exec_info *exec = vc4_first_job(vc4);
++
++	if (!exec)
++		return;
++
++	vc4->finished_seqno++;
++	list_move_tail(&exec->head, &vc4->job_done_list);
++	vc4_submit_next_job(dev);
++
++	wake_up_all(&vc4->job_wait_queue);
++	schedule_work(&vc4->job_done_work);
++}
++
++irqreturn_t
++vc4_irq(int irq, void *arg)
++{
++	struct drm_device *dev = arg;
++	struct vc4_dev *vc4 = to_vc4_dev(dev);
++	uint32_t intctl;
++	irqreturn_t status = IRQ_NONE;
++
++	barrier();
++	intctl = V3D_READ(V3D_INTCTL);
++
++	/* Acknowledge the interrupts we're handling here. The render
++	 * frame done interrupt will be cleared, while OUTOMEM will
++	 * stay high until the underlying cause is cleared.
++	 */
++	V3D_WRITE(V3D_INTCTL, intctl);
++
++	if (intctl & V3D_INT_OUTOMEM) {
++		/* Disable OUTOMEM until the work is done. */
++		V3D_WRITE(V3D_INTDIS, V3D_INT_OUTOMEM);
++		schedule_work(&vc4->overflow_mem_work);
++		status = IRQ_HANDLED;
++	}
++
++	if (intctl & V3D_INT_FRDONE) {
++		spin_lock(&vc4->job_lock);
++		vc4_irq_finish_job(dev);
++		spin_unlock(&vc4->job_lock);
++		status = IRQ_HANDLED;
++	}
++
++	return status;
++}
++
++void
++vc4_irq_preinstall(struct drm_device *dev)
++{
++	struct vc4_dev *vc4 = to_vc4_dev(dev);
++
++	init_waitqueue_head(&vc4->job_wait_queue);
++	INIT_WORK(&vc4->overflow_mem_work, vc4_overflow_mem_work);
++
++	/* Clear any pending interrupts someone might have left around
++	 * for us.
++	 */
++	V3D_WRITE(V3D_INTCTL, V3D_DRIVER_IRQS);
++}
++
++int
++vc4_irq_postinstall(struct drm_device *dev)
++{
++	struct vc4_dev *vc4 = to_vc4_dev(dev);
++
++	/* Enable both the render done and out of memory interrupts. */
++	V3D_WRITE(V3D_INTENA, V3D_DRIVER_IRQS);
++
++	return 0;
++}
++
++void
++vc4_irq_uninstall(struct drm_device *dev)
++{
++	struct vc4_dev *vc4 = to_vc4_dev(dev);
++
++	/* Disable sending interrupts for our driver's IRQs. */
++	V3D_WRITE(V3D_INTDIS, V3D_DRIVER_IRQS);
++
++	/* Clear any pending interrupts we might have left. */
++	V3D_WRITE(V3D_INTCTL, V3D_DRIVER_IRQS);
++
++	cancel_work_sync(&vc4->overflow_mem_work);
++}
++
++/** Reinitializes interrupt registers when a GPU reset is performed. */
++void vc4_irq_reset(struct drm_device *dev)
++{
++	struct vc4_dev *vc4 = to_vc4_dev(dev);
++	unsigned long irqflags;
++
++	/* Acknowledge any stale IRQs. */
++	V3D_WRITE(V3D_INTCTL, V3D_DRIVER_IRQS);
++
++	/*
++	 * Turn all our interrupts on.  Binner out of memory is the
++	 * only one we expect to trigger at this point, since we've
++	 * just come from poweron and haven't supplied any overflow
++	 * memory yet.
++	 */
++	V3D_WRITE(V3D_INTENA, V3D_DRIVER_IRQS);
++
++	spin_lock_irqsave(&vc4->job_lock, irqflags);
++	vc4_irq_finish_job(dev);
++	spin_unlock_irqrestore(&vc4->job_lock, irqflags);
++}
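
The handlers above are not requested with request_irq() directly; they are
wired up through the DRM core via the .irq_handler/.irq_preinstall/
.irq_postinstall/.irq_uninstall hooks and the DRIVER_HAVE_IRQ flag set on
vc4_drm_driver earlier in this patch. The drm_irq_install() call itself lives
in the V3D platform driver (vc4_v3d.c), which is not part of this excerpt;
under that assumption, the wiring amounts to something like:

	/* Sketch only -- the real call site is in vc4_v3d.c, not shown here. */
	static int vc4_v3d_install_irq(struct drm_device *drm,
				       struct platform_device *pdev)
	{
		int irq = platform_get_irq(pdev, 0);

		if (irq < 0)
			return irq;

		/* drm_irq_install() runs .irq_preinstall (vc4_irq_preinstall),
		 * requests the interrupt with .irq_handler (vc4_irq), then
		 * calls .irq_postinstall to enable V3D_DRIVER_IRQS.
		 */
		return drm_irq_install(drm, irq);
	}
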
+--- a/drivers/gpu/drm/vc4/vc4_kms.c
++++ b/drivers/gpu/drm/vc4/vc4_kms.c
+@@ -15,6 +15,7 @@
+  */
+ 
+ #include "drm_crtc.h"
++#include "drm_atomic.h"
+ #include "drm_atomic_helper.h"
+ #include "drm_crtc_helper.h"
+ #include "drm_plane_helper.h"
+@@ -29,10 +30,151 @@ static void vc4_output_poll_changed(stru
+ 		drm_fbdev_cma_hotplug_event(vc4->fbdev);
+ }
+ 
++struct vc4_commit {
++	struct drm_device *dev;
++	struct drm_atomic_state *state;
++	struct vc4_seqno_cb cb;
++};
++
++static void
++vc4_atomic_complete_commit(struct vc4_commit *c)
++{
++	struct drm_atomic_state *state = c->state;
++	struct drm_device *dev = state->dev;
++	struct vc4_dev *vc4 = to_vc4_dev(dev);
++
++	drm_atomic_helper_commit_modeset_disables(dev, state);
++
++	drm_atomic_helper_commit_planes(dev, state);
++
++	drm_atomic_helper_commit_modeset_enables(dev, state);
++
++	drm_atomic_helper_wait_for_vblanks(dev, state);
++
++	drm_atomic_helper_cleanup_planes(dev, state);
++
++	drm_atomic_state_free(state);
++
++	up(&vc4->async_modeset);
++
++	kfree(c);
++}
++
++static void
++vc4_atomic_complete_commit_seqno_cb(struct vc4_seqno_cb *cb)
++{
++	struct vc4_commit *c = container_of(cb, struct vc4_commit, cb);
++
++	vc4_atomic_complete_commit(c);
++}
++
++static struct vc4_commit *commit_init(struct drm_atomic_state *state)
++{
++	struct vc4_commit *c = kzalloc(sizeof(*c), GFP_KERNEL);
++
++	if (!c)
++		return NULL;
++	c->dev = state->dev;
++	c->state = state;
++
++	return c;
++}
++
++/**
++ * vc4_atomic_commit - commit validated state object
++ * @dev: DRM device
++ * @state: the driver state object
++ * @async: asynchronous commit
++ *
++ * This function commits a state object that has been pre-validated with
++ * drm_atomic_helper_check(). It can still fail when e.g. the framebuffer
++ * reservation fails. Asynchronous commits complete from a seqno callback.
++ *
++ * RETURNS
++ * Zero for success or -errno.
++ */
++static int vc4_atomic_commit(struct drm_device *dev,
++			     struct drm_atomic_state *state,
++			     bool async)
++{
++	struct vc4_dev *vc4 = to_vc4_dev(dev);
++	int ret;
++	int i;
++	uint64_t wait_seqno = 0;
++	struct vc4_commit *c;
++
++	c = commit_init(state);
++	if (!c)
++		return -ENOMEM;
++
++	/* Make sure that any outstanding modesets have finished. */
++	ret = down_interruptible(&vc4->async_modeset);
++	if (ret) {
++		kfree(c);
++		return ret;
++	}
++
++	ret = drm_atomic_helper_prepare_planes(dev, state);
++	if (ret) {
++		kfree(c);
++		up(&vc4->async_modeset);
++		return ret;
++	}
++
++	for (i = 0; i < dev->mode_config.num_total_plane; i++) {
++		struct drm_plane *plane = state->planes[i];
++		struct drm_plane_state *new_state = state->plane_states[i];
++
++		if (!plane)
++			continue;
++
++		if ((plane->state->fb != new_state->fb) && new_state->fb) {
++			struct drm_gem_cma_object *cma_bo =
++				drm_fb_cma_get_gem_obj(new_state->fb, 0);
++			struct vc4_bo *bo = to_vc4_bo(&cma_bo->base);
++			wait_seqno = max(bo->seqno, wait_seqno);
++		}
++	}
++
++	/*
++	 * This is the point of no return - everything below never fails except
++	 * when the hw goes bonghits. Which means we can commit the new state on
++	 * the software side now.
++	 */
++
++	drm_atomic_helper_swap_state(dev, state);
++
++	/*
++	 * Everything below can be run asynchronously without the need to grab
++	 * any modeset locks at all under one condition: It must be guaranteed
++	 * that the asynchronous work has either been cancelled (if the driver
++	 * supports it, which at least requires that the framebuffers get
++	 * cleaned up with drm_atomic_helper_cleanup_planes()) or completed
++	 * before the new state gets committed on the software side with
++	 * drm_atomic_helper_swap_state().
++	 *
++	 * This scheme allows new atomic state updates to be prepared and
++	 * checked in parallel to the asynchronous completion of the previous
++	 * update. Which is important since compositors need to figure out the
++	 * composition of the next frame right after having submitted the
++	 * current layout.
++	 */
++
++	if (async) {
++		vc4_queue_seqno_cb(dev, &c->cb, wait_seqno,
++				   vc4_atomic_complete_commit_seqno_cb);
++	} else {
++		vc4_wait_for_seqno(dev, wait_seqno, ~0ull, false);
++		vc4_atomic_complete_commit(c);
++	}
++
++	return 0;
++}
++
+ static const struct drm_mode_config_funcs vc4_mode_funcs = {
+ 	.output_poll_changed = vc4_output_poll_changed,
+ 	.atomic_check = drm_atomic_helper_check,
+-	.atomic_commit = drm_atomic_helper_commit,
++	.atomic_commit = vc4_atomic_commit,
+ 	.fb_create = drm_fb_cma_create,
+ };
+ 
+@@ -41,6 +183,8 @@ int vc4_kms_load(struct drm_device *dev)
+ 	struct vc4_dev *vc4 = to_vc4_dev(dev);
+ 	int ret;
+ 
++	sema_init(&vc4->async_modeset, 1);
++
+ 	ret = drm_vblank_init(dev, dev->mode_config.num_crtc);
+ 	if (ret < 0) {
+ 		dev_err(dev->dev, "failed to initialize vblank\n");
+@@ -51,6 +195,8 @@ int vc4_kms_load(struct drm_device *dev)
+ 	dev->mode_config.max_height = 2048;
+ 	dev->mode_config.funcs = &vc4_mode_funcs;
+ 	dev->mode_config.preferred_depth = 24;
++	dev->mode_config.async_page_flip = true;
++
+ 	dev->vblank_disable_allowed = true;
+ 
+ 	drm_mode_config_reset(dev);
+--- /dev/null
++++ b/drivers/gpu/drm/vc4/vc4_packet.h
+@@ -0,0 +1,384 @@
++/*
++ * Copyright © 2014 Broadcom
++ *
++ * Permission is hereby granted, free of charge, to any person obtaining a
++ * copy of this software and associated documentation files (the "Software"),
++ * to deal in the Software without restriction, including without limitation
++ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
++ * and/or sell copies of the Software, and to permit persons to whom the
++ * Software is furnished to do so, subject to the following conditions:
++ *
++ * The above copyright notice and this permission notice (including the next
++ * paragraph) shall be included in all copies or substantial portions of the
++ * Software.
++ *
++ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
++ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
++ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
++ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
++ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
++ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
++ * IN THE SOFTWARE.
++ */
++
++#ifndef VC4_PACKET_H
++#define VC4_PACKET_H
++
++#include "vc4_regs.h" /* for VC4_MASK, VC4_GET_FIELD, VC4_SET_FIELD */
++
++enum vc4_packet {
++        VC4_PACKET_HALT = 0,
++        VC4_PACKET_NOP = 1,
++
++        VC4_PACKET_FLUSH = 4,
++        VC4_PACKET_FLUSH_ALL = 5,
++        VC4_PACKET_START_TILE_BINNING = 6,
++        VC4_PACKET_INCREMENT_SEMAPHORE = 7,
++        VC4_PACKET_WAIT_ON_SEMAPHORE = 8,
++
++        VC4_PACKET_BRANCH = 16,
++        VC4_PACKET_BRANCH_TO_SUB_LIST = 17,
++
++        VC4_PACKET_STORE_MS_TILE_BUFFER = 24,
++        VC4_PACKET_STORE_MS_TILE_BUFFER_AND_EOF = 25,
++        VC4_PACKET_STORE_FULL_RES_TILE_BUFFER = 26,
++        VC4_PACKET_LOAD_FULL_RES_TILE_BUFFER = 27,
++        VC4_PACKET_STORE_TILE_BUFFER_GENERAL = 28,
++        VC4_PACKET_LOAD_TILE_BUFFER_GENERAL = 29,
++
++        VC4_PACKET_GL_INDEXED_PRIMITIVE = 32,
++        VC4_PACKET_GL_ARRAY_PRIMITIVE = 33,
++
++        VC4_PACKET_COMPRESSED_PRIMITIVE = 48,
++        VC4_PACKET_CLIPPED_COMPRESSED_PRIMITIVE = 49,
++
++        VC4_PACKET_PRIMITIVE_LIST_FORMAT = 56,
++
++        VC4_PACKET_GL_SHADER_STATE = 64,
++        VC4_PACKET_NV_SHADER_STATE = 65,
++        VC4_PACKET_VG_SHADER_STATE = 66,
++
++        VC4_PACKET_CONFIGURATION_BITS = 96,
++        VC4_PACKET_FLAT_SHADE_FLAGS = 97,
++        VC4_PACKET_POINT_SIZE = 98,
++        VC4_PACKET_LINE_WIDTH = 99,
++        VC4_PACKET_RHT_X_BOUNDARY = 100,
++        VC4_PACKET_DEPTH_OFFSET = 101,
++        VC4_PACKET_CLIP_WINDOW = 102,
++        VC4_PACKET_VIEWPORT_OFFSET = 103,
++        VC4_PACKET_Z_CLIPPING = 104,
++        VC4_PACKET_CLIPPER_XY_SCALING = 105,
++        VC4_PACKET_CLIPPER_Z_SCALING = 106,
++
++        VC4_PACKET_TILE_BINNING_MODE_CONFIG = 112,
++        VC4_PACKET_TILE_RENDERING_MODE_CONFIG = 113,
++        VC4_PACKET_CLEAR_COLORS = 114,
++        VC4_PACKET_TILE_COORDINATES = 115,
++
++        /* Not an actual hardware packet -- this is what we use to put
++         * references to GEM BOs in the command stream, since we need the u32
++         * in the actual address packet in order to store the offset from the
++         * start of the BO.
++         */
++        VC4_PACKET_GEM_HANDLES = 254,
++} __attribute__ ((__packed__));
++
++#define VC4_PACKET_HALT_SIZE						1
++#define VC4_PACKET_NOP_SIZE						1
++#define VC4_PACKET_FLUSH_SIZE						1
++#define VC4_PACKET_FLUSH_ALL_SIZE					1
++#define VC4_PACKET_START_TILE_BINNING_SIZE				1
++#define VC4_PACKET_INCREMENT_SEMAPHORE_SIZE				1
++#define VC4_PACKET_WAIT_ON_SEMAPHORE_SIZE				1
++#define VC4_PACKET_BRANCH_SIZE						5
++#define VC4_PACKET_BRANCH_TO_SUB_LIST_SIZE				5
++#define VC4_PACKET_STORE_MS_TILE_BUFFER_SIZE				1
++#define VC4_PACKET_STORE_MS_TILE_BUFFER_AND_EOF_SIZE			1
++#define VC4_PACKET_STORE_FULL_RES_TILE_BUFFER_SIZE			5
++#define VC4_PACKET_LOAD_FULL_RES_TILE_BUFFER_SIZE			5
++#define VC4_PACKET_STORE_TILE_BUFFER_GENERAL_SIZE			7
++#define VC4_PACKET_LOAD_TILE_BUFFER_GENERAL_SIZE			7
++#define VC4_PACKET_GL_INDEXED_PRIMITIVE_SIZE				14
++#define VC4_PACKET_GL_ARRAY_PRIMITIVE_SIZE				10
++#define VC4_PACKET_COMPRESSED_PRIMITIVE_SIZE				1
++#define VC4_PACKET_CLIPPED_COMPRESSED_PRIMITIVE_SIZE			1
++#define VC4_PACKET_PRIMITIVE_LIST_FORMAT_SIZE				2
++#define VC4_PACKET_GL_SHADER_STATE_SIZE					5
++#define VC4_PACKET_NV_SHADER_STATE_SIZE					5
++#define VC4_PACKET_VG_SHADER_STATE_SIZE					5
++#define VC4_PACKET_CONFIGURATION_BITS_SIZE				4
++#define VC4_PACKET_FLAT_SHADE_FLAGS_SIZE				5
++#define VC4_PACKET_POINT_SIZE_SIZE					5
++#define VC4_PACKET_LINE_WIDTH_SIZE					5
++#define VC4_PACKET_RHT_X_BOUNDARY_SIZE					3
++#define VC4_PACKET_DEPTH_OFFSET_SIZE					5
++#define VC4_PACKET_CLIP_WINDOW_SIZE					9
++#define VC4_PACKET_VIEWPORT_OFFSET_SIZE					5
++#define VC4_PACKET_Z_CLIPPING_SIZE					9
++#define VC4_PACKET_CLIPPER_XY_SCALING_SIZE				9
++#define VC4_PACKET_CLIPPER_Z_SCALING_SIZE				9
++#define VC4_PACKET_TILE_BINNING_MODE_CONFIG_SIZE			16
++#define VC4_PACKET_TILE_RENDERING_MODE_CONFIG_SIZE			11
++#define VC4_PACKET_CLEAR_COLORS_SIZE					14
++#define VC4_PACKET_TILE_COORDINATES_SIZE				3
++#define VC4_PACKET_GEM_HANDLES_SIZE					9
++
++/** @{
++ * Bits used by packets like VC4_PACKET_STORE_TILE_BUFFER_GENERAL and
++ * VC4_PACKET_TILE_RENDERING_MODE_CONFIG.
++*/
++#define VC4_TILING_FORMAT_LINEAR    0
++#define VC4_TILING_FORMAT_T         1
++#define VC4_TILING_FORMAT_LT        2
++/** @} */
++
++/** @{
++ *
++ * low bits of VC4_PACKET_STORE_FULL_RES_TILE_BUFFER and
++ * VC4_PACKET_LOAD_FULL_RES_TILE_BUFFER.
++ */
++#define VC4_LOADSTORE_FULL_RES_EOF                     (1 << 3)
++#define VC4_LOADSTORE_FULL_RES_DISABLE_CLEAR_ALL       (1 << 2)
++#define VC4_LOADSTORE_FULL_RES_DISABLE_ZS              (1 << 1)
++#define VC4_LOADSTORE_FULL_RES_DISABLE_COLOR           (1 << 0)
++
++/** @{
++ *
++ * byte 2 of VC4_PACKET_STORE_TILE_BUFFER_GENERAL and
++ * VC4_PACKET_LOAD_TILE_BUFFER_GENERAL (low bits of the address)
++ */
++
++#define VC4_LOADSTORE_TILE_BUFFER_EOF                  (1 << 3)
++#define VC4_LOADSTORE_TILE_BUFFER_DISABLE_FULL_VG_MASK (1 << 2)
++#define VC4_LOADSTORE_TILE_BUFFER_DISABLE_FULL_ZS      (1 << 1)
++#define VC4_LOADSTORE_TILE_BUFFER_DISABLE_FULL_COLOR   (1 << 0)
++
++/** @} */
++
++/** @{
++ *
++ * byte 0-1 of VC4_PACKET_STORE_TILE_BUFFER_GENERAL and
++ * VC4_PACKET_LOAD_TILE_BUFFER_GENERAL
++ */
++#define VC4_STORE_TILE_BUFFER_DISABLE_VG_MASK_CLEAR (1 << 15)
++#define VC4_STORE_TILE_BUFFER_DISABLE_ZS_CLEAR     (1 << 14)
++#define VC4_STORE_TILE_BUFFER_DISABLE_COLOR_CLEAR  (1 << 13)
++#define VC4_STORE_TILE_BUFFER_DISABLE_SWAP         (1 << 12)
++
++#define VC4_LOADSTORE_TILE_BUFFER_FORMAT_MASK      VC4_MASK(9, 8)
++#define VC4_LOADSTORE_TILE_BUFFER_FORMAT_SHIFT     8
++#define VC4_LOADSTORE_TILE_BUFFER_RGBA8888         0
++#define VC4_LOADSTORE_TILE_BUFFER_BGR565_DITHER    1
++#define VC4_LOADSTORE_TILE_BUFFER_BGR565           2
++/** @} */
++
++/** @{
++ *
++ * byte 0 of VC4_PACKET_STORE_TILE_BUFFER_GENERAL and
++ * VC4_PACKET_LOAD_TILE_BUFFER_GENERAL
++ */
++#define VC4_STORE_TILE_BUFFER_MODE_MASK            VC4_MASK(7, 6)
++#define VC4_STORE_TILE_BUFFER_MODE_SHIFT           6
++#define VC4_STORE_TILE_BUFFER_MODE_SAMPLE0         (0 << 6)
++#define VC4_STORE_TILE_BUFFER_MODE_DECIMATE_X4     (1 << 6)
++#define VC4_STORE_TILE_BUFFER_MODE_DECIMATE_X16    (2 << 6)
++
++/** The values of the field are VC4_TILING_FORMAT_* */
++#define VC4_LOADSTORE_TILE_BUFFER_TILING_MASK      VC4_MASK(5, 4)
++#define VC4_LOADSTORE_TILE_BUFFER_TILING_SHIFT     4
++
++#define VC4_LOADSTORE_TILE_BUFFER_BUFFER_MASK      VC4_MASK(2, 0)
++#define VC4_LOADSTORE_TILE_BUFFER_BUFFER_SHIFT     0
++#define VC4_LOADSTORE_TILE_BUFFER_NONE             0
++#define VC4_LOADSTORE_TILE_BUFFER_COLOR            1
++#define VC4_LOADSTORE_TILE_BUFFER_ZS               2
++#define VC4_LOADSTORE_TILE_BUFFER_Z                3
++#define VC4_LOADSTORE_TILE_BUFFER_VG_MASK          4
++#define VC4_LOADSTORE_TILE_BUFFER_FULL             5
++/** @} */
++
++#define VC4_INDEX_BUFFER_U8                        (0 << 4)
++#define VC4_INDEX_BUFFER_U16                       (1 << 4)
++
++/* This flag is only present in NV shader state. */
++#define VC4_SHADER_FLAG_SHADED_CLIP_COORDS         (1 << 3)
++#define VC4_SHADER_FLAG_ENABLE_CLIPPING            (1 << 2)
++#define VC4_SHADER_FLAG_VS_POINT_SIZE              (1 << 1)
++#define VC4_SHADER_FLAG_FS_SINGLE_THREAD           (1 << 0)
++
++/** @{ byte 2 of config bits. */
++#define VC4_CONFIG_BITS_EARLY_Z_UPDATE             (1 << 1)
++#define VC4_CONFIG_BITS_EARLY_Z                    (1 << 0)
++/** @} */
++
++/** @{ byte 1 of config bits. */
++#define VC4_CONFIG_BITS_Z_UPDATE                   (1 << 7)
++/** same values in this 3-bit field as PIPE_FUNC_* */
++#define VC4_CONFIG_BITS_DEPTH_FUNC_SHIFT           4
++#define VC4_CONFIG_BITS_COVERAGE_READ_LEAVE        (1 << 3)
++
++#define VC4_CONFIG_BITS_COVERAGE_UPDATE_NONZERO    (0 << 1)
++#define VC4_CONFIG_BITS_COVERAGE_UPDATE_ODD        (1 << 1)
++#define VC4_CONFIG_BITS_COVERAGE_UPDATE_OR         (2 << 1)
++#define VC4_CONFIG_BITS_COVERAGE_UPDATE_ZERO       (3 << 1)
++
++#define VC4_CONFIG_BITS_COVERAGE_PIPE_SELECT       (1 << 0)
++/** @} */
++
++/** @{ byte 0 of config bits. */
++#define VC4_CONFIG_BITS_RASTERIZER_OVERSAMPLE_NONE (0 << 6)
++#define VC4_CONFIG_BITS_RASTERIZER_OVERSAMPLE_4X   (1 << 6)
++#define VC4_CONFIG_BITS_RASTERIZER_OVERSAMPLE_16X  (2 << 6)
++
++#define VC4_CONFIG_BITS_AA_POINTS_AND_LINES        (1 << 4)
++#define VC4_CONFIG_BITS_ENABLE_DEPTH_OFFSET        (1 << 3)
++#define VC4_CONFIG_BITS_CW_PRIMITIVES              (1 << 2)
++#define VC4_CONFIG_BITS_ENABLE_PRIM_BACK           (1 << 1)
++#define VC4_CONFIG_BITS_ENABLE_PRIM_FRONT          (1 << 0)
++/** @} */
++
++/** @{ bits in the last u8 of VC4_PACKET_TILE_BINNING_MODE_CONFIG */
++#define VC4_BIN_CONFIG_DB_NON_MS                   (1 << 7)
++
++#define VC4_BIN_CONFIG_ALLOC_BLOCK_SIZE_MASK       VC4_MASK(6, 5)
++#define VC4_BIN_CONFIG_ALLOC_BLOCK_SIZE_SHIFT      5
++#define VC4_BIN_CONFIG_ALLOC_BLOCK_SIZE_32         0
++#define VC4_BIN_CONFIG_ALLOC_BLOCK_SIZE_64         1
++#define VC4_BIN_CONFIG_ALLOC_BLOCK_SIZE_128        2
++#define VC4_BIN_CONFIG_ALLOC_BLOCK_SIZE_256        3
++
++#define VC4_BIN_CONFIG_ALLOC_INIT_BLOCK_SIZE_MASK  VC4_MASK(4, 3)
++#define VC4_BIN_CONFIG_ALLOC_INIT_BLOCK_SIZE_SHIFT 3
++#define VC4_BIN_CONFIG_ALLOC_INIT_BLOCK_SIZE_32    0
++#define VC4_BIN_CONFIG_ALLOC_INIT_BLOCK_SIZE_64    1
++#define VC4_BIN_CONFIG_ALLOC_INIT_BLOCK_SIZE_128   2
++#define VC4_BIN_CONFIG_ALLOC_INIT_BLOCK_SIZE_256   3
++
++#define VC4_BIN_CONFIG_AUTO_INIT_TSDA              (1 << 2)
++#define VC4_BIN_CONFIG_TILE_BUFFER_64BIT           (1 << 1)
++#define VC4_BIN_CONFIG_MS_MODE_4X                  (1 << 0)
++/** @} */
++
++/** @{ bits in the last u16 of VC4_PACKET_TILE_RENDERING_MODE_CONFIG */
++#define VC4_RENDER_CONFIG_DB_NON_MS                (1 << 12)
++#define VC4_RENDER_CONFIG_EARLY_Z_COVERAGE_DISABLE (1 << 11)
++#define VC4_RENDER_CONFIG_EARLY_Z_DIRECTION_G      (1 << 10)
++#define VC4_RENDER_CONFIG_COVERAGE_MODE            (1 << 9)
++#define VC4_RENDER_CONFIG_ENABLE_VG_MASK           (1 << 8)
++
++/** The values of the field are VC4_TILING_FORMAT_* */
++#define VC4_RENDER_CONFIG_MEMORY_FORMAT_MASK       VC4_MASK(7, 6)
++#define VC4_RENDER_CONFIG_MEMORY_FORMAT_SHIFT      6
++
++#define VC4_RENDER_CONFIG_DECIMATE_MODE_1X         (0 << 4)
++#define VC4_RENDER_CONFIG_DECIMATE_MODE_4X         (1 << 4)
++#define VC4_RENDER_CONFIG_DECIMATE_MODE_16X        (2 << 4)
++
++#define VC4_RENDER_CONFIG_FORMAT_MASK              VC4_MASK(3, 2)
++#define VC4_RENDER_CONFIG_FORMAT_SHIFT             2
++#define VC4_RENDER_CONFIG_FORMAT_BGR565_DITHERED   0
++#define VC4_RENDER_CONFIG_FORMAT_RGBA8888          1
++#define VC4_RENDER_CONFIG_FORMAT_BGR565            2
++
++#define VC4_RENDER_CONFIG_TILE_BUFFER_64BIT        (1 << 1)
++#define VC4_RENDER_CONFIG_MS_MODE_4X               (1 << 0)
++
++#define VC4_PRIMITIVE_LIST_FORMAT_16_INDEX         (1 << 4)
++#define VC4_PRIMITIVE_LIST_FORMAT_32_XY            (3 << 4)
++#define VC4_PRIMITIVE_LIST_FORMAT_TYPE_POINTS      (0 << 0)
++#define VC4_PRIMITIVE_LIST_FORMAT_TYPE_LINES       (1 << 0)
++#define VC4_PRIMITIVE_LIST_FORMAT_TYPE_TRIANGLES   (2 << 0)
++#define VC4_PRIMITIVE_LIST_FORMAT_TYPE_RHT         (3 << 0)
++
++enum vc4_texture_data_type {
++        VC4_TEXTURE_TYPE_RGBA8888 = 0,
++        VC4_TEXTURE_TYPE_RGBX8888 = 1,
++        VC4_TEXTURE_TYPE_RGBA4444 = 2,
++        VC4_TEXTURE_TYPE_RGBA5551 = 3,
++        VC4_TEXTURE_TYPE_RGB565 = 4,
++        VC4_TEXTURE_TYPE_LUMINANCE = 5,
++        VC4_TEXTURE_TYPE_ALPHA = 6,
++        VC4_TEXTURE_TYPE_LUMALPHA = 7,
++        VC4_TEXTURE_TYPE_ETC1 = 8,
++        VC4_TEXTURE_TYPE_S16F = 9,
++        VC4_TEXTURE_TYPE_S8 = 10,
++        VC4_TEXTURE_TYPE_S16 = 11,
++        VC4_TEXTURE_TYPE_BW1 = 12,
++        VC4_TEXTURE_TYPE_A4 = 13,
++        VC4_TEXTURE_TYPE_A1 = 14,
++        VC4_TEXTURE_TYPE_RGBA64 = 15,
++        VC4_TEXTURE_TYPE_RGBA32R = 16,
++        VC4_TEXTURE_TYPE_YUV422R = 17,
++};
++
++#define VC4_TEX_P0_OFFSET_MASK                     VC4_MASK(31, 12)
++#define VC4_TEX_P0_OFFSET_SHIFT                    12
++#define VC4_TEX_P0_CSWIZ_MASK                      VC4_MASK(11, 10)
++#define VC4_TEX_P0_CSWIZ_SHIFT                     10
++#define VC4_TEX_P0_CMMODE_MASK                     VC4_MASK(9, 9)
++#define VC4_TEX_P0_CMMODE_SHIFT                    9
++#define VC4_TEX_P0_FLIPY_MASK                      VC4_MASK(8, 8)
++#define VC4_TEX_P0_FLIPY_SHIFT                     8
++#define VC4_TEX_P0_TYPE_MASK                       VC4_MASK(7, 4)
++#define VC4_TEX_P0_TYPE_SHIFT                      4
++#define VC4_TEX_P0_MIPLVLS_MASK                    VC4_MASK(3, 0)
++#define VC4_TEX_P0_MIPLVLS_SHIFT                   0
++
++#define VC4_TEX_P1_TYPE4_MASK                      VC4_MASK(31, 31)
++#define VC4_TEX_P1_TYPE4_SHIFT                     31
++#define VC4_TEX_P1_HEIGHT_MASK                     VC4_MASK(30, 20)
++#define VC4_TEX_P1_HEIGHT_SHIFT                    20
++#define VC4_TEX_P1_ETCFLIP_MASK                    VC4_MASK(19, 19)
++#define VC4_TEX_P1_ETCFLIP_SHIFT                   19
++#define VC4_TEX_P1_WIDTH_MASK                      VC4_MASK(18, 8)
++#define VC4_TEX_P1_WIDTH_SHIFT                     8
++
++#define VC4_TEX_P1_MAGFILT_MASK                    VC4_MASK(7, 7)
++#define VC4_TEX_P1_MAGFILT_SHIFT                   7
++# define VC4_TEX_P1_MAGFILT_LINEAR                 0
++# define VC4_TEX_P1_MAGFILT_NEAREST                1
++
++#define VC4_TEX_P1_MINFILT_MASK                    VC4_MASK(6, 4)
++#define VC4_TEX_P1_MINFILT_SHIFT                   4
++# define VC4_TEX_P1_MINFILT_LINEAR                 0
++# define VC4_TEX_P1_MINFILT_NEAREST                1
++# define VC4_TEX_P1_MINFILT_NEAR_MIP_NEAR          2
++# define VC4_TEX_P1_MINFILT_NEAR_MIP_LIN           3
++# define VC4_TEX_P1_MINFILT_LIN_MIP_NEAR           4
++# define VC4_TEX_P1_MINFILT_LIN_MIP_LIN            5
++
++#define VC4_TEX_P1_WRAP_T_MASK                     VC4_MASK(3, 2)
++#define VC4_TEX_P1_WRAP_T_SHIFT                    2
++#define VC4_TEX_P1_WRAP_S_MASK                     VC4_MASK(1, 0)
++#define VC4_TEX_P1_WRAP_S_SHIFT                    0
++# define VC4_TEX_P1_WRAP_REPEAT                    0
++# define VC4_TEX_P1_WRAP_CLAMP                     1
++# define VC4_TEX_P1_WRAP_MIRROR                    2
++# define VC4_TEX_P1_WRAP_BORDER                    3
++
++#define VC4_TEX_P2_PTYPE_MASK                      VC4_MASK(31, 30)
++#define VC4_TEX_P2_PTYPE_SHIFT                     30
++# define VC4_TEX_P2_PTYPE_IGNORED                  0
++# define VC4_TEX_P2_PTYPE_CUBE_MAP_STRIDE          1
++# define VC4_TEX_P2_PTYPE_CHILD_IMAGE_DIMENSIONS   2
++# define VC4_TEX_P2_PTYPE_CHILD_IMAGE_OFFSETS      3
++
++/* VC4_TEX_P2_PTYPE_CUBE_MAP_STRIDE bits */
++#define VC4_TEX_P2_CMST_MASK                       VC4_MASK(29, 12)
++#define VC4_TEX_P2_CMST_SHIFT                      12
++#define VC4_TEX_P2_BSLOD_MASK                      VC4_MASK(0, 0)
++#define VC4_TEX_P2_BSLOD_SHIFT                     0
++
++/* VC4_TEX_P2_PTYPE_CHILD_IMAGE_DIMENSIONS */
++#define VC4_TEX_P2_CHEIGHT_MASK                    VC4_MASK(22, 12)
++#define VC4_TEX_P2_CHEIGHT_SHIFT                   12
++#define VC4_TEX_P2_CWIDTH_MASK                     VC4_MASK(10, 0)
++#define VC4_TEX_P2_CWIDTH_SHIFT                    0
++
++/* VC4_TEX_P2_PTYPE_CHILD_IMAGE_OFFSETS */
++#define VC4_TEX_P2_CYOFF_MASK                      VC4_MASK(22, 12)
++#define VC4_TEX_P2_CYOFF_SHIFT                     12
++#define VC4_TEX_P2_CXOFF_MASK                      VC4_MASK(10, 0)
++#define VC4_TEX_P2_CXOFF_SHIFT                     0
++
++#endif /* VC4_PACKET_H */
+--- a/drivers/gpu/drm/vc4/vc4_plane.c
++++ b/drivers/gpu/drm/vc4/vc4_plane.c
+@@ -29,6 +29,14 @@ struct vc4_plane_state {
+ 	u32 *dlist;
+ 	u32 dlist_size; /* Number of dwords in allocated for the display list */
+ 	u32 dlist_count; /* Number of used dwords in the display list. */
++
++	/* Offset in the dlist to pointer word 0. */
++	u32 pw0_offset;
++
++	/* Offset where the plane's dlist was last stored in the
++	 * hardware at vc4_crtc_atomic_flush() time.
++	 */
++	u32 *hw_dlist;
+ };
+ 
+ static inline struct vc4_plane_state *
+@@ -207,6 +215,8 @@ static int vc4_plane_mode_set(struct drm
+ 	/* Position Word 3: Context.  Written by the HVS. */
+ 	vc4_dlist_write(vc4_state, 0xc0c0c0c0);
+ 
++	vc4_state->pw0_offset = vc4_state->dlist_count;
++
+ 	/* Pointer Word 0: RGB / Y Pointer */
+ 	vc4_dlist_write(vc4_state, bo->paddr + offset);
+ 
+@@ -258,6 +268,8 @@ u32 vc4_plane_write_dlist(struct drm_pla
+ 	struct vc4_plane_state *vc4_state = to_vc4_plane_state(plane->state);
+ 	int i;
+ 
++	vc4_state->hw_dlist = dlist;
++
+ 	/* Can't memcpy_toio() because it needs to be 32-bit writes. */
+ 	for (i = 0; i < vc4_state->dlist_count; i++)
+ 		writel(vc4_state->dlist[i], &dlist[i]);
+@@ -272,6 +284,34 @@ u32 vc4_plane_dlist_size(struct drm_plan
+ 	return vc4_state->dlist_count;
+ }
+ 
++/* Updates the plane to immediately (well, once the FIFO needs
++ * refilling) scan out from a new framebuffer.
++ */
++void vc4_plane_async_set_fb(struct drm_plane *plane, struct drm_framebuffer *fb)
++{
++	struct vc4_plane_state *vc4_state = to_vc4_plane_state(plane->state);
++	struct drm_gem_cma_object *bo = drm_fb_cma_get_gem_obj(fb, 0);
++	uint32_t addr;
++
++	/* We're skipping the address adjustment for negative origin,
++	 * because this is only called on the primary plane.
++	 */
++	WARN_ON_ONCE(plane->state->crtc_x < 0 || plane->state->crtc_y < 0);
++	addr = bo->paddr + fb->offsets[0];
++
++	/* Write the new address into the hardware immediately.  The
++	 * scanout will start from this address as soon as the FIFO
++	 * needs to refill with pixels.
++	 */
++	writel(addr, &vc4_state->hw_dlist[vc4_state->pw0_offset]);
++
++	/* Also update the CPU-side dlist copy, so that any later
++	 * atomic updates that don't do a new modeset on our plane
++	 * also use our updated address.
++	 */
++	vc4_state->dlist[vc4_state->pw0_offset] = addr;
++}
++
+ static const struct drm_plane_helper_funcs vc4_plane_helper_funcs = {
+ 	.prepare_fb = NULL,
+ 	.cleanup_fb = NULL,
+--- /dev/null
++++ b/drivers/gpu/drm/vc4/vc4_qpu_defines.h
+@@ -0,0 +1,268 @@
++/*
++ * Copyright © 2014 Broadcom
++ *
++ * Permission is hereby granted, free of charge, to any person obtaining a
++ * copy of this software and associated documentation files (the "Software"),
++ * to deal in the Software without restriction, including without limitation
++ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
++ * and/or sell copies of the Software, and to permit persons to whom the
++ * Software is furnished to do so, subject to the following conditions:
++ *
++ * The above copyright notice and this permission notice (including the next
++ * paragraph) shall be included in all copies or substantial portions of the
++ * Software.
++ *
++ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
++ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
++ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
++ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
++ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
++ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
++ * IN THE SOFTWARE.
++ */
++
++#ifndef VC4_QPU_DEFINES_H
++#define VC4_QPU_DEFINES_H
++
++enum qpu_op_add {
++        QPU_A_NOP,
++        QPU_A_FADD,
++        QPU_A_FSUB,
++        QPU_A_FMIN,
++        QPU_A_FMAX,
++        QPU_A_FMINABS,
++        QPU_A_FMAXABS,
++        QPU_A_FTOI,
++        QPU_A_ITOF,
++        QPU_A_ADD = 12,
++        QPU_A_SUB,
++        QPU_A_SHR,
++        QPU_A_ASR,
++        QPU_A_ROR,
++        QPU_A_SHL,
++        QPU_A_MIN,
++        QPU_A_MAX,
++        QPU_A_AND,
++        QPU_A_OR,
++        QPU_A_XOR,
++        QPU_A_NOT,
++        QPU_A_CLZ,
++        QPU_A_V8ADDS = 30,
++        QPU_A_V8SUBS = 31,
++};
++
++enum qpu_op_mul {
++        QPU_M_NOP,
++        QPU_M_FMUL,
++        QPU_M_MUL24,
++        QPU_M_V8MULD,
++        QPU_M_V8MIN,
++        QPU_M_V8MAX,
++        QPU_M_V8ADDS,
++        QPU_M_V8SUBS,
++};
++
++enum qpu_raddr {
++        QPU_R_FRAG_PAYLOAD_ZW = 15, /* W for A file, Z for B file */
++        /* 0-31 are the plain regfile a or b fields */
++        QPU_R_UNIF = 32,
++        QPU_R_VARY = 35,
++        QPU_R_ELEM_QPU = 38,
++        QPU_R_NOP,
++        QPU_R_XY_PIXEL_COORD = 41,
++        QPU_R_MS_REV_FLAGS = 41,
++        QPU_R_VPM = 48,
++        QPU_R_VPM_LD_BUSY,
++        QPU_R_VPM_LD_WAIT,
++        QPU_R_MUTEX_ACQUIRE,
++};
++
++enum qpu_waddr {
++        /* 0-31 are the plain regfile a or b fields */
++        QPU_W_ACC0 = 32, /* aka r0 */
++        QPU_W_ACC1,
++        QPU_W_ACC2,
++        QPU_W_ACC3,
++        QPU_W_TMU_NOSWAP,
++        QPU_W_ACC5,
++        QPU_W_HOST_INT,
++        QPU_W_NOP,
++        QPU_W_UNIFORMS_ADDRESS,
++        QPU_W_QUAD_XY, /* X for regfile a, Y for regfile b */
++        QPU_W_MS_FLAGS = 42,
++        QPU_W_REV_FLAG = 42,
++        QPU_W_TLB_STENCIL_SETUP = 43,
++        QPU_W_TLB_Z,
++        QPU_W_TLB_COLOR_MS,
++        QPU_W_TLB_COLOR_ALL,
++        QPU_W_TLB_ALPHA_MASK,
++        QPU_W_VPM,
++        QPU_W_VPMVCD_SETUP, /* LD for regfile a, ST for regfile b */
++        QPU_W_VPM_ADDR, /* LD for regfile a, ST for regfile b */
++        QPU_W_MUTEX_RELEASE,
++        QPU_W_SFU_RECIP,
++        QPU_W_SFU_RECIPSQRT,
++        QPU_W_SFU_EXP,
++        QPU_W_SFU_LOG,
++        QPU_W_TMU0_S,
++        QPU_W_TMU0_T,
++        QPU_W_TMU0_R,
++        QPU_W_TMU0_B,
++        QPU_W_TMU1_S,
++        QPU_W_TMU1_T,
++        QPU_W_TMU1_R,
++        QPU_W_TMU1_B,
++};
++
++enum qpu_sig_bits {
++        QPU_SIG_SW_BREAKPOINT,
++        QPU_SIG_NONE,
++        QPU_SIG_THREAD_SWITCH,
++        QPU_SIG_PROG_END,
++        QPU_SIG_WAIT_FOR_SCOREBOARD,
++        QPU_SIG_SCOREBOARD_UNLOCK,
++        QPU_SIG_LAST_THREAD_SWITCH,
++        QPU_SIG_COVERAGE_LOAD,
++        QPU_SIG_COLOR_LOAD,
++        QPU_SIG_COLOR_LOAD_END,
++        QPU_SIG_LOAD_TMU0,
++        QPU_SIG_LOAD_TMU1,
++        QPU_SIG_ALPHA_MASK_LOAD,
++        QPU_SIG_SMALL_IMM,
++        QPU_SIG_LOAD_IMM,
++        QPU_SIG_BRANCH
++};
++
++enum qpu_mux {
++        /* hardware mux values */
++        QPU_MUX_R0,
++        QPU_MUX_R1,
++        QPU_MUX_R2,
++        QPU_MUX_R3,
++        QPU_MUX_R4,
++        QPU_MUX_R5,
++        QPU_MUX_A,
++        QPU_MUX_B,
++
++        /* non-hardware mux values */
++        QPU_MUX_IMM,
++};
++
++enum qpu_cond {
++        QPU_COND_NEVER,
++        QPU_COND_ALWAYS,
++        QPU_COND_ZS,
++        QPU_COND_ZC,
++        QPU_COND_NS,
++        QPU_COND_NC,
++        QPU_COND_CS,
++        QPU_COND_CC,
++};
++
++enum qpu_pack_mul {
++        QPU_PACK_MUL_NOP,
++        QPU_PACK_MUL_8888 = 3, /* replicated to each 8 bits of the 32-bit dst. */
++        QPU_PACK_MUL_8A,
++        QPU_PACK_MUL_8B,
++        QPU_PACK_MUL_8C,
++        QPU_PACK_MUL_8D,
++};
++
++enum qpu_pack_a {
++        QPU_PACK_A_NOP,
++        /* convert to 16 bit float if float input, or to int16. */
++        QPU_PACK_A_16A,
++        QPU_PACK_A_16B,
++        /* replicated to each 8 bits of the 32-bit dst. */
++        QPU_PACK_A_8888,
++        /* Convert to 8-bit unsigned int. */
++        QPU_PACK_A_8A,
++        QPU_PACK_A_8B,
++        QPU_PACK_A_8C,
++        QPU_PACK_A_8D,
++
++        /* Saturating variants of the previous instructions. */
++        QPU_PACK_A_32_SAT, /* int-only */
++        QPU_PACK_A_16A_SAT, /* int or float */
++        QPU_PACK_A_16B_SAT,
++        QPU_PACK_A_8888_SAT,
++        QPU_PACK_A_8A_SAT,
++        QPU_PACK_A_8B_SAT,
++        QPU_PACK_A_8C_SAT,
++        QPU_PACK_A_8D_SAT,
++};
++
++enum qpu_unpack_r4 {
++        QPU_UNPACK_R4_NOP,
++        QPU_UNPACK_R4_F16A_TO_F32,
++        QPU_UNPACK_R4_F16B_TO_F32,
++        QPU_UNPACK_R4_8D_REP,
++        QPU_UNPACK_R4_8A,
++        QPU_UNPACK_R4_8B,
++        QPU_UNPACK_R4_8C,
++        QPU_UNPACK_R4_8D,
++};
++
++#define QPU_MASK(high, low) ((((uint64_t)1<<((high)-(low)+1))-1)<<(low))
++/* Using the GNU statement expression extension */
++#define QPU_SET_FIELD(value, field)                                       \
++        ({                                                                \
++                uint64_t fieldval = (uint64_t)(value) << field ## _SHIFT; \
++                assert((fieldval & ~ field ## _MASK) == 0);               \
++                fieldval & field ## _MASK;                                \
++         })
++
++#define QPU_GET_FIELD(word, field) ((uint32_t)(((word)  & field ## _MASK) >> field ## _SHIFT))
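++/* For example, QPU_GET_FIELD(inst, QPU_WADDR_ADD) extracts bits 43:38 of an
++ * instruction, and QPU_SET_FIELD(QPU_W_ACC0, QPU_WADDR_ADD) encodes the same
++ * field (asserting that the value fits within the mask).
++ */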
++
++#define QPU_SIG_SHIFT                   60
++#define QPU_SIG_MASK                    QPU_MASK(63, 60)
++
++#define QPU_UNPACK_SHIFT                57
++#define QPU_UNPACK_MASK                 QPU_MASK(59, 57)
++
++/**
++ * If set, the pack field means PACK_MUL or R4 packing, instead of normal
++ * regfile a packing.
++ */
++#define QPU_PM                          ((uint64_t)1 << 56)
++
++#define QPU_PACK_SHIFT                  52
++#define QPU_PACK_MASK                   QPU_MASK(55, 52)
++
++#define QPU_COND_ADD_SHIFT              49
++#define QPU_COND_ADD_MASK               QPU_MASK(51, 49)
++#define QPU_COND_MUL_SHIFT              46
++#define QPU_COND_MUL_MASK               QPU_MASK(48, 46)
++
++#define QPU_SF                          ((uint64_t)1 << 45)
++
++#define QPU_WADDR_ADD_SHIFT             38
++#define QPU_WADDR_ADD_MASK              QPU_MASK(43, 38)
++#define QPU_WADDR_MUL_SHIFT             32
++#define QPU_WADDR_MUL_MASK              QPU_MASK(37, 32)
++
++#define QPU_OP_MUL_SHIFT                29
++#define QPU_OP_MUL_MASK                 QPU_MASK(31, 29)
++
++#define QPU_RADDR_A_SHIFT               18
++#define QPU_RADDR_A_MASK                QPU_MASK(23, 18)
++#define QPU_RADDR_B_SHIFT               12
++#define QPU_RADDR_B_MASK                QPU_MASK(17, 12)
++#define QPU_SMALL_IMM_SHIFT             12
++#define QPU_SMALL_IMM_MASK              QPU_MASK(17, 12)
++
++#define QPU_ADD_A_SHIFT                 9
++#define QPU_ADD_A_MASK                  QPU_MASK(11, 9)
++#define QPU_ADD_B_SHIFT                 6
++#define QPU_ADD_B_MASK                  QPU_MASK(8, 6)
++#define QPU_MUL_A_SHIFT                 3
++#define QPU_MUL_A_MASK                  QPU_MASK(5, 3)
++#define QPU_MUL_B_SHIFT                 0
++#define QPU_MUL_B_MASK                  QPU_MASK(2, 0)
++
++#define QPU_WS                          ((uint64_t)1 << 44)
++
++#define QPU_OP_ADD_SHIFT                24
++#define QPU_OP_ADD_MASK                 QPU_MASK(28, 24)
++
++#endif /* VC4_QPU_DEFINES_H */
+--- /dev/null
++++ b/drivers/gpu/drm/vc4/vc4_render_cl.c
+@@ -0,0 +1,448 @@
++/*
++ * Copyright © 2014-2015 Broadcom
++ *
++ * Permission is hereby granted, free of charge, to any person obtaining a
++ * copy of this software and associated documentation files (the "Software"),
++ * to deal in the Software without restriction, including without limitation
++ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
++ * and/or sell copies of the Software, and to permit persons to whom the
++ * Software is furnished to do so, subject to the following conditions:
++ *
++ * The above copyright notice and this permission notice (including the next
++ * paragraph) shall be included in all copies or substantial portions of the
++ * Software.
++ *
++ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
++ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
++ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
++ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
++ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
++ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
++ * IN THE SOFTWARE.
++ */
++
++/**
++ * DOC: Render command list generation
++ *
++ * In the VC4 driver, render command list generation is performed by the
++ * kernel instead of userspace.  We do this because validating a
++ * user-submitted command list is hard to get right and has high CPU overhead,
++ * while the number of valid configurations for render command lists is
++ * actually fairly low.
++ */
++
++#include "uapi/drm/vc4_drm.h"
++#include "vc4_drv.h"
++#include "vc4_packet.h"
++
++struct vc4_rcl_setup {
++	struct drm_gem_cma_object *color_read;
++	struct drm_gem_cma_object *color_ms_write;
++	struct drm_gem_cma_object *zs_read;
++	struct drm_gem_cma_object *zs_write;
++
++	struct drm_gem_cma_object *rcl;
++	u32 next_offset;
++};
++
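++/* Helpers that append a value to the render command list being built in
++ * setup->rcl, advancing the write offset as they go.
++ */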
++static inline void rcl_u8(struct vc4_rcl_setup *setup, u8 val)
++{
++	*(u8 *)(setup->rcl->vaddr + setup->next_offset) = val;
++	setup->next_offset += 1;
++}
++
++static inline void rcl_u16(struct vc4_rcl_setup *setup, u16 val)
++{
++	*(u16 *)(setup->rcl->vaddr + setup->next_offset) = val;
++	setup->next_offset += 2;
++}
++
++static inline void rcl_u32(struct vc4_rcl_setup *setup, u32 val)
++{
++	*(u32 *)(setup->rcl->vaddr + setup->next_offset) = val;
++	setup->next_offset += 4;
++}
++
++
++/*
++ * Emits a no-op STORE_TILE_BUFFER_GENERAL.
++ *
++ * If we emit a PACKET_TILE_COORDINATES, it must be followed by a store of
++ * some sort before another load is triggered.
++ */
++static void vc4_store_before_load(struct vc4_rcl_setup *setup)
++{
++	rcl_u8(setup, VC4_PACKET_STORE_TILE_BUFFER_GENERAL);
++	rcl_u16(setup,
++		VC4_SET_FIELD(VC4_LOADSTORE_TILE_BUFFER_NONE,
++			      VC4_LOADSTORE_TILE_BUFFER_BUFFER) |
++		VC4_STORE_TILE_BUFFER_DISABLE_COLOR_CLEAR |
++		VC4_STORE_TILE_BUFFER_DISABLE_ZS_CLEAR |
++		VC4_STORE_TILE_BUFFER_DISABLE_VG_MASK_CLEAR);
++	rcl_u32(setup, 0); /* no address, since we're in None mode */
++}
++
++/*
++ * Emits a PACKET_TILE_COORDINATES if one isn't already pending.
++ *
++ * The tile coordinates packet triggers a pending load if there is one, is
++ * used for clipping during rendering, and determines where loads/stores happen
++ * relative to their base address.
++ */
++static void vc4_tile_coordinates(struct vc4_rcl_setup *setup,
++				 uint32_t x, uint32_t y)
++{
++	rcl_u8(setup, VC4_PACKET_TILE_COORDINATES);
++	rcl_u8(setup, x);
++	rcl_u8(setup, y);
++}
++
++static void emit_tile(struct vc4_exec_info *exec,
++		      struct vc4_rcl_setup *setup,
++		      uint8_t x, uint8_t y, bool first, bool last)
++{
++	struct drm_vc4_submit_cl *args = exec->args;
++	bool has_bin = args->bin_cl_size != 0;
++
++	/* Note that the load doesn't actually occur until the
++	 * tile coords packet is processed, and only one load
++	 * may be outstanding at a time.
++	 */
++	if (setup->color_read) {
++		rcl_u8(setup, VC4_PACKET_LOAD_TILE_BUFFER_GENERAL);
++		rcl_u16(setup, args->color_read.bits);
++		rcl_u32(setup,
++			setup->color_read->paddr + args->color_read.offset);
++	}
++
++	if (setup->zs_read) {
++		if (setup->color_read) {
++			/* Exec previous load. */
++			vc4_tile_coordinates(setup, x, y);
++			vc4_store_before_load(setup);
++		}
++
++		rcl_u8(setup, VC4_PACKET_LOAD_TILE_BUFFER_GENERAL);
++		rcl_u16(setup, args->zs_read.bits);
++		rcl_u32(setup, setup->zs_read->paddr + args->zs_read.offset);
++	}
++
++	/* Clipping depends on tile coordinates having been
++	 * emitted, so we always need one here.
++	 */
++	vc4_tile_coordinates(setup, x, y);
++
++	/* Wait for the binner before jumping to the first
++	 * tile's lists.
++	 */
++	if (first && has_bin)
++		rcl_u8(setup, VC4_PACKET_WAIT_ON_SEMAPHORE);
++
++	if (has_bin) {
++		rcl_u8(setup, VC4_PACKET_BRANCH_TO_SUB_LIST);
++		rcl_u32(setup, (exec->tile_bo->paddr +
++				exec->tile_alloc_offset +
++				(y * exec->bin_tiles_x + x) * 32));
++	}
++
++	if (setup->zs_write) {
++		rcl_u8(setup, VC4_PACKET_STORE_TILE_BUFFER_GENERAL);
++		rcl_u16(setup, args->zs_write.bits |
++			(setup->color_ms_write ?
++			 VC4_STORE_TILE_BUFFER_DISABLE_COLOR_CLEAR : 0));
++		rcl_u32(setup,
++			(setup->zs_write->paddr + args->zs_write.offset) |
++			((last && !setup->color_ms_write) ?
++			 VC4_LOADSTORE_TILE_BUFFER_EOF : 0));
++	}
++
++	if (setup->color_ms_write) {
++		if (setup->zs_write) {
++			/* Reset after previous store */
++			vc4_tile_coordinates(setup, x, y);
++		}
++
++		if (last)
++			rcl_u8(setup, VC4_PACKET_STORE_MS_TILE_BUFFER_AND_EOF);
++		else
++			rcl_u8(setup, VC4_PACKET_STORE_MS_TILE_BUFFER);
++	}
++}
++
++static int vc4_create_rcl_bo(struct drm_device *dev, struct vc4_exec_info *exec,
++			     struct vc4_rcl_setup *setup)
++{
++	struct drm_vc4_submit_cl *args = exec->args;
++	bool has_bin = args->bin_cl_size != 0;
++	uint8_t min_x_tile = args->min_x_tile;
++	uint8_t min_y_tile = args->min_y_tile;
++	uint8_t max_x_tile = args->max_x_tile;
++	uint8_t max_y_tile = args->max_y_tile;
++	uint8_t xtiles = max_x_tile - min_x_tile + 1;
++	uint8_t ytiles = max_y_tile - min_y_tile + 1;
++	uint8_t x, y;
++	uint32_t size, loop_body_size;
++
++	size = VC4_PACKET_TILE_RENDERING_MODE_CONFIG_SIZE;
++	loop_body_size = VC4_PACKET_TILE_COORDINATES_SIZE;
++
++	if (args->flags & VC4_SUBMIT_CL_USE_CLEAR_COLOR) {
++		size += VC4_PACKET_CLEAR_COLORS_SIZE +
++			VC4_PACKET_TILE_COORDINATES_SIZE +
++			VC4_PACKET_STORE_TILE_BUFFER_GENERAL_SIZE;
++	}
++
++	if (setup->color_read) {
++		loop_body_size += (VC4_PACKET_LOAD_TILE_BUFFER_GENERAL_SIZE);
++	}
++	if (setup->zs_read) {
++		if (setup->color_read) {
++			loop_body_size += VC4_PACKET_TILE_COORDINATES_SIZE;
++			loop_body_size += VC4_PACKET_STORE_TILE_BUFFER_GENERAL_SIZE;
++		}
++		loop_body_size += VC4_PACKET_LOAD_TILE_BUFFER_GENERAL_SIZE;
++	}
++
++	if (has_bin) {
++		size += VC4_PACKET_WAIT_ON_SEMAPHORE_SIZE;
++		loop_body_size += VC4_PACKET_BRANCH_TO_SUB_LIST_SIZE;
++	}
++
++	if (setup->zs_write)
++		loop_body_size += VC4_PACKET_STORE_TILE_BUFFER_GENERAL_SIZE;
++	if (setup->color_ms_write) {
++		if (setup->zs_write)
++			loop_body_size += VC4_PACKET_TILE_COORDINATES_SIZE;
++		loop_body_size += VC4_PACKET_STORE_MS_TILE_BUFFER_SIZE;
++	}
++	size += xtiles * ytiles * loop_body_size;
++
++	setup->rcl = &vc4_bo_create(dev, size)->base;
++	if (!setup->rcl)
++		return -ENOMEM;
++	list_add_tail(&to_vc4_bo(&setup->rcl->base)->unref_head,
++		      &exec->unref_list);
++
++	rcl_u8(setup, VC4_PACKET_TILE_RENDERING_MODE_CONFIG);
++	rcl_u32(setup,
++		(setup->color_ms_write ?
++		 (setup->color_ms_write->paddr +
++		  args->color_ms_write.offset) :
++		 0));
++	rcl_u16(setup, args->width);
++	rcl_u16(setup, args->height);
++	rcl_u16(setup, args->color_ms_write.bits);
++
++	/* The tile buffer gets cleared when the previous tile is stored.  If
++	 * the clear values changed between frames, then the tile buffer has
++	 * stale clear values in it, so we have to do a store in None mode (no
++	 * writes) so that we trigger the tile buffer clear.
++	 */
++	if (args->flags & VC4_SUBMIT_CL_USE_CLEAR_COLOR) {
++		rcl_u8(setup, VC4_PACKET_CLEAR_COLORS);
++		rcl_u32(setup, args->clear_color[0]);
++		rcl_u32(setup, args->clear_color[1]);
++		rcl_u32(setup, args->clear_z);
++		rcl_u8(setup, args->clear_s);
++
++		vc4_tile_coordinates(setup, 0, 0);
++
++		rcl_u8(setup, VC4_PACKET_STORE_TILE_BUFFER_GENERAL);
++		rcl_u16(setup, VC4_LOADSTORE_TILE_BUFFER_NONE);
++		rcl_u32(setup, 0); /* no address, since we're in None mode */
++	}
++
++	for (y = min_y_tile; y <= max_y_tile; y++) {
++		for (x = min_x_tile; x <= max_x_tile; x++) {
++			bool first = (x == min_x_tile && y == min_y_tile);
++			bool last = (x == max_x_tile && y == max_y_tile);
++			emit_tile(exec, setup, x, y, first, last);
++		}
++	}
++
++	BUG_ON(setup->next_offset != size);
++	exec->ct1ca = setup->rcl->paddr;
++	exec->ct1ea = setup->rcl->paddr + setup->next_offset;
++
++	return 0;
++}
++
++static int vc4_rcl_surface_setup(struct vc4_exec_info *exec,
++				 struct drm_gem_cma_object **obj,
++				 struct drm_vc4_submit_rcl_surface *surf)
++{
++	uint8_t tiling = VC4_GET_FIELD(surf->bits,
++				       VC4_LOADSTORE_TILE_BUFFER_TILING);
++	uint8_t buffer = VC4_GET_FIELD(surf->bits,
++				       VC4_LOADSTORE_TILE_BUFFER_BUFFER);
++	uint8_t format = VC4_GET_FIELD(surf->bits,
++				       VC4_LOADSTORE_TILE_BUFFER_FORMAT);
++	int cpp;
++
++	if (surf->pad != 0) {
++		DRM_ERROR("Padding unset\n");
++		return -EINVAL;
++	}
++
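++	/* A handle index of ~0 marks this surface as unused. */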
++	if (surf->hindex == ~0)
++		return 0;
++
++	if (!vc4_use_bo(exec, surf->hindex, VC4_MODE_RENDER, obj))
++		return -EINVAL;
++
++	if (surf->bits & ~(VC4_LOADSTORE_TILE_BUFFER_TILING_MASK |
++			   VC4_LOADSTORE_TILE_BUFFER_BUFFER_MASK |
++			   VC4_LOADSTORE_TILE_BUFFER_FORMAT_MASK)) {
++		DRM_ERROR("Unknown bits in load/store: 0x%04x\n",
++			  surf->bits);
++		return -EINVAL;
++	}
++
++	if (tiling > VC4_TILING_FORMAT_LT) {
++		DRM_ERROR("Bad tiling format\n");
++		return -EINVAL;
++	}
++
++	if (buffer == VC4_LOADSTORE_TILE_BUFFER_ZS) {
++		if (format != 0) {
++			DRM_ERROR("No color format should be set for ZS\n");
++			return -EINVAL;
++		}
++		cpp = 4;
++	} else if (buffer == VC4_LOADSTORE_TILE_BUFFER_COLOR) {
++		switch (format) {
++		case VC4_LOADSTORE_TILE_BUFFER_BGR565:
++		case VC4_LOADSTORE_TILE_BUFFER_BGR565_DITHER:
++			cpp = 2;
++			break;
++		case VC4_LOADSTORE_TILE_BUFFER_RGBA8888:
++			cpp = 4;
++			break;
++		default:
++			DRM_ERROR("Bad tile buffer format\n");
++			return -EINVAL;
++		}
++	} else {
++		DRM_ERROR("Bad load/store buffer %d.\n", buffer);
++		return -EINVAL;
++	}
++
++	if (surf->offset & 0xf) {
++		DRM_ERROR("load/store buffer must be 16b aligned.\n");
++		return -EINVAL;
++	}
++
++	if (!vc4_check_tex_size(exec, *obj, surf->offset, tiling,
++				exec->args->width, exec->args->height, cpp)) {
++		return -EINVAL;
++	}
++
++	return 0;
++}
++
++static int
++vc4_rcl_ms_surface_setup(struct vc4_exec_info *exec,
++			 struct drm_gem_cma_object **obj,
++			 struct drm_vc4_submit_rcl_surface *surf)
++{
++	uint8_t tiling = VC4_GET_FIELD(surf->bits,
++				       VC4_RENDER_CONFIG_MEMORY_FORMAT);
++	uint8_t format = VC4_GET_FIELD(surf->bits,
++				       VC4_RENDER_CONFIG_FORMAT);
++	int cpp;
++
++	if (surf->pad != 0) {
++		DRM_ERROR("Padding unset\n");
++		return -EINVAL;
++	}
++
++	if (surf->bits & ~(VC4_RENDER_CONFIG_MEMORY_FORMAT_MASK |
++			   VC4_RENDER_CONFIG_FORMAT_MASK)) {
++		DRM_ERROR("Unknown bits in render config: 0x%04x\n",
++			  surf->bits);
++		return -EINVAL;
++	}
++
++	if (surf->hindex == ~0)
++		return 0;
++
++	if (!vc4_use_bo(exec, surf->hindex, VC4_MODE_RENDER, obj))
++		return -EINVAL;
++
++	if (tiling > VC4_TILING_FORMAT_LT) {
++		DRM_ERROR("Bad tiling format\n");
++		return -EINVAL;
++	}
++
++	switch (format) {
++	case VC4_RENDER_CONFIG_FORMAT_BGR565_DITHERED:
++	case VC4_RENDER_CONFIG_FORMAT_BGR565:
++		cpp = 2;
++		break;
++	case VC4_RENDER_CONFIG_FORMAT_RGBA8888:
++		cpp = 4;
++		break;
++	default:
++		DRM_ERROR("Bad tile buffer format\n");
++		return -EINVAL;
++	}
++
++	if (!vc4_check_tex_size(exec, *obj, surf->offset, tiling,
++				exec->args->width, exec->args->height, cpp)) {
++		return -EINVAL;
++	}
++
++	return 0;
++}
++
++int vc4_get_rcl(struct drm_device *dev, struct vc4_exec_info *exec)
++{
++	struct vc4_rcl_setup setup = {0};
++	struct drm_vc4_submit_cl *args = exec->args;
++	bool has_bin = args->bin_cl_size != 0;
++	int ret;
++
++	if (args->min_x_tile > args->max_x_tile ||
++	    args->min_y_tile > args->max_y_tile) {
++		DRM_ERROR("Bad render tile set (%d,%d)-(%d,%d)\n",
++			  args->min_x_tile, args->min_y_tile,
++			  args->max_x_tile, args->max_y_tile);
++		return -EINVAL;
++	}
++
++	if (has_bin &&
++	    (args->max_x_tile > exec->bin_tiles_x ||
++	     args->max_y_tile > exec->bin_tiles_y)) {
++		DRM_ERROR("Render tiles (%d,%d) outside of bin config (%d,%d)\n",
++			  args->max_x_tile, args->max_y_tile,
++			  exec->bin_tiles_x, exec->bin_tiles_y);
++		return -EINVAL;
++	}
++
++	ret = vc4_rcl_surface_setup(exec, &setup.color_read, &args->color_read);
++	if (ret)
++		return ret;
++
++	ret = vc4_rcl_ms_surface_setup(exec, &setup.color_ms_write,
++				       &args->color_ms_write);
++	if (ret)
++		return ret;
++
++	ret = vc4_rcl_surface_setup(exec, &setup.zs_read, &args->zs_read);
++	if (ret)
++		return ret;
++
++	ret = vc4_rcl_surface_setup(exec, &setup.zs_write, &args->zs_write);
++	if (ret)
++		return ret;
++
++	/* We shouldn't even have the job submitted to us if there's no
++	 * surface to write out.
++	 */
++	if (!setup.color_ms_write && !setup.zs_write) {
++		DRM_ERROR("RCL requires color or Z/S write\n");
++		return -EINVAL;
++	}
++
++	return vc4_create_rcl_bo(dev, exec, &setup);
++}
+--- /dev/null
++++ b/drivers/gpu/drm/vc4/vc4_trace.h
+@@ -0,0 +1,63 @@
++/*
++ * Copyright (C) 2015 Broadcom
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License version 2 as
++ * published by the Free Software Foundation.
++ */
++
++#if !defined(_VC4_TRACE_H_) || defined(TRACE_HEADER_MULTI_READ)
++#define _VC4_TRACE_H_
++
++#include <linux/stringify.h>
++#include <linux/types.h>
++#include <linux/tracepoint.h>
++
++#undef TRACE_SYSTEM
++#define TRACE_SYSTEM vc4
++#define TRACE_INCLUDE_FILE vc4_trace
++
++TRACE_EVENT(vc4_wait_for_seqno_begin,
++	    TP_PROTO(struct drm_device *dev, uint64_t seqno, uint64_t timeout),
++	    TP_ARGS(dev, seqno, timeout),
++
++	    TP_STRUCT__entry(
++			     __field(u32, dev)
++			     __field(u64, seqno)
++			     __field(u64, timeout)
++			     ),
++
++	    TP_fast_assign(
++			   __entry->dev = dev->primary->index;
++			   __entry->seqno = seqno;
++			   __entry->timeout = timeout;
++			   ),
++
++	    TP_printk("dev=%u, seqno=%llu, timeout=%llu",
++		      __entry->dev, __entry->seqno, __entry->timeout)
++);
++
++TRACE_EVENT(vc4_wait_for_seqno_end,
++	    TP_PROTO(struct drm_device *dev, uint64_t seqno),
++	    TP_ARGS(dev, seqno),
++
++	    TP_STRUCT__entry(
++			     __field(u32, dev)
++			     __field(u64, seqno)
++			     ),
++
++	    TP_fast_assign(
++			   __entry->dev = dev->primary->index;
++			   __entry->seqno = seqno;
++			   ),
++
++	    TP_printk("dev=%u, seqno=%llu",
++		      __entry->dev, __entry->seqno)
++);
++
++#endif /* _VC4_TRACE_H_ */
++
++/* This part must be outside protection */
++#undef TRACE_INCLUDE_PATH
++#define TRACE_INCLUDE_PATH .
++#include <trace/define_trace.h>
+--- /dev/null
++++ b/drivers/gpu/drm/vc4/vc4_trace_points.c
+@@ -0,0 +1,14 @@
++/*
++ * Copyright (C) 2015 Broadcom
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License version 2 as
++ * published by the Free Software Foundation.
++ */
++
++#include "vc4_drv.h"
++
++#ifndef __CHECKER__
++#define CREATE_TRACE_POINTS
++#include "vc4_trace.h"
++#endif
+--- /dev/null
++++ b/drivers/gpu/drm/vc4/vc4_v3d.c
+@@ -0,0 +1,268 @@
++/*
++ * Copyright (c) 2014 The Linux Foundation. All rights reserved.
++ * Copyright (C) 2013 Red Hat
++ * Author: Rob Clark <robdclark at gmail.com>
++ *
++ * This program is free software; you can redistribute it and/or modify it
++ * under the terms of the GNU General Public License version 2 as published by
++ * the Free Software Foundation.
++ *
++ * This program is distributed in the hope that it will be useful, but WITHOUT
++ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
++ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
++ * more details.
++ *
++ * You should have received a copy of the GNU General Public License along with
++ * this program.  If not, see <http://www.gnu.org/licenses/>.
++ */
++
++#include "linux/component.h"
++#include "soc/bcm2835/raspberrypi-firmware.h"
++#include "vc4_drv.h"
++#include "vc4_regs.h"
++
++#ifdef CONFIG_DEBUG_FS
++#define REGDEF(reg) { reg, #reg }
++static const struct {
++	uint32_t reg;
++	const char *name;
++} vc4_reg_defs[] = {
++	REGDEF(V3D_IDENT0),
++	REGDEF(V3D_IDENT1),
++	REGDEF(V3D_IDENT2),
++	REGDEF(V3D_SCRATCH),
++	REGDEF(V3D_L2CACTL),
++	REGDEF(V3D_SLCACTL),
++	REGDEF(V3D_INTCTL),
++	REGDEF(V3D_INTENA),
++	REGDEF(V3D_INTDIS),
++	REGDEF(V3D_CT0CS),
++	REGDEF(V3D_CT1CS),
++	REGDEF(V3D_CT0EA),
++	REGDEF(V3D_CT1EA),
++	REGDEF(V3D_CT0CA),
++	REGDEF(V3D_CT1CA),
++	REGDEF(V3D_CT00RA0),
++	REGDEF(V3D_CT01RA0),
++	REGDEF(V3D_CT0LC),
++	REGDEF(V3D_CT1LC),
++	REGDEF(V3D_CT0PC),
++	REGDEF(V3D_CT1PC),
++	REGDEF(V3D_PCS),
++	REGDEF(V3D_BFC),
++	REGDEF(V3D_RFC),
++	REGDEF(V3D_BPCA),
++	REGDEF(V3D_BPCS),
++	REGDEF(V3D_BPOA),
++	REGDEF(V3D_BPOS),
++	REGDEF(V3D_BXCF),
++	REGDEF(V3D_SQRSV0),
++	REGDEF(V3D_SQRSV1),
++	REGDEF(V3D_SQCNTL),
++	REGDEF(V3D_SRQPC),
++	REGDEF(V3D_SRQUA),
++	REGDEF(V3D_SRQUL),
++	REGDEF(V3D_SRQCS),
++	REGDEF(V3D_VPACNTL),
++	REGDEF(V3D_VPMBASE),
++	REGDEF(V3D_PCTRC),
++	REGDEF(V3D_PCTRE),
++	REGDEF(V3D_PCTR0),
++	REGDEF(V3D_PCTRS0),
++	REGDEF(V3D_PCTR1),
++	REGDEF(V3D_PCTRS1),
++	REGDEF(V3D_PCTR2),
++	REGDEF(V3D_PCTRS2),
++	REGDEF(V3D_PCTR3),
++	REGDEF(V3D_PCTRS3),
++	REGDEF(V3D_PCTR4),
++	REGDEF(V3D_PCTRS4),
++	REGDEF(V3D_PCTR5),
++	REGDEF(V3D_PCTRS5),
++	REGDEF(V3D_PCTR6),
++	REGDEF(V3D_PCTRS6),
++	REGDEF(V3D_PCTR7),
++	REGDEF(V3D_PCTRS7),
++	REGDEF(V3D_PCTR8),
++	REGDEF(V3D_PCTRS8),
++	REGDEF(V3D_PCTR9),
++	REGDEF(V3D_PCTRS9),
++	REGDEF(V3D_PCTR10),
++	REGDEF(V3D_PCTRS10),
++	REGDEF(V3D_PCTR11),
++	REGDEF(V3D_PCTRS11),
++	REGDEF(V3D_PCTR12),
++	REGDEF(V3D_PCTRS12),
++	REGDEF(V3D_PCTR13),
++	REGDEF(V3D_PCTRS13),
++	REGDEF(V3D_PCTR14),
++	REGDEF(V3D_PCTRS14),
++	REGDEF(V3D_PCTR15),
++	REGDEF(V3D_PCTRS15),
++	REGDEF(V3D_BGE),
++	REGDEF(V3D_FDBGO),
++	REGDEF(V3D_FDBGB),
++	REGDEF(V3D_FDBGR),
++	REGDEF(V3D_FDBGS),
++	REGDEF(V3D_ERRSTAT),
++};
++
++int vc4_v3d_debugfs_regs(struct seq_file *m, void *unused)
++{
++	struct drm_info_node *node = (struct drm_info_node *) m->private;
++	struct drm_device *dev = node->minor->dev;
++	struct vc4_dev *vc4 = to_vc4_dev(dev);
++	int i;
++
++	for (i = 0; i < ARRAY_SIZE(vc4_reg_defs); i++) {
++		seq_printf(m, "%s (0x%04x): 0x%08x\n",
++			   vc4_reg_defs[i].name, vc4_reg_defs[i].reg,
++			   V3D_READ(vc4_reg_defs[i].reg));
++	}
++
++	return 0;
++}
++
++int vc4_v3d_debugfs_ident(struct seq_file *m, void *unused)
++{
++	struct drm_info_node *node = (struct drm_info_node *) m->private;
++	struct drm_device *dev = node->minor->dev;
++	struct vc4_dev *vc4 = to_vc4_dev(dev);
++	uint32_t ident1 = V3D_READ(V3D_IDENT1);
++	uint32_t nslc = VC4_GET_FIELD(ident1, V3D_IDENT1_NSLC);
++	uint32_t tups = VC4_GET_FIELD(ident1, V3D_IDENT1_TUPS);
++	uint32_t qups = VC4_GET_FIELD(ident1, V3D_IDENT1_QUPS);
++
++	seq_printf(m, "Revision:   %d\n", VC4_GET_FIELD(ident1, V3D_IDENT1_REV));
++	seq_printf(m, "Slices:     %d\n", nslc);
++	seq_printf(m, "TMUs:       %d\n", nslc * tups);
++	seq_printf(m, "QPUs:       %d\n", nslc * qups);
++	seq_printf(m, "Semaphores: %d\n", VC4_GET_FIELD(ident1, V3D_IDENT1_NSEM));
++
++	return 0;
++}
++#endif /* CONFIG_DEBUG_FS */
++
++/*
++ * Asks the firmware to turn on power to the V3D engine.
++ *
++ * This may be doable with just the clocks interface, though this
++ * packet does some other register setup from the firmware, too.
++ */
++int
++vc4_v3d_set_power(struct vc4_dev *vc4, bool on)
++{
++	u32 packet = on;
++
++	return rpi_firmware_property(vc4->firmware,
++				     RPI_FIRMWARE_SET_ENABLE_QPU,
++				     &packet, sizeof(packet));
++}
++
++static void vc4_v3d_init_hw(struct drm_device *dev)
++{
++	struct vc4_dev *vc4 = to_vc4_dev(dev);
++
++	/* Take all the memory that would have been reserved for user
++	 * QPU programs, since we don't have an interface for running
++	 * them, anyway.
++	 */
++	V3D_WRITE(V3D_VPMBASE, 0);
++}
++
++static int vc4_v3d_bind(struct device *dev, struct device *master, void *data)
++{
++	struct platform_device *pdev = to_platform_device(dev);
++	struct drm_device *drm = dev_get_drvdata(master);
++	struct vc4_dev *vc4 = to_vc4_dev(drm);
++	struct vc4_v3d *v3d = NULL;
++	int ret;
++
++	v3d = devm_kzalloc(&pdev->dev, sizeof(*v3d), GFP_KERNEL);
++	if (!v3d)
++		return -ENOMEM;
++
++	v3d->pdev = pdev;
++
++	v3d->regs = vc4_ioremap_regs(pdev, 0);
++	if (IS_ERR(v3d->regs))
++		return PTR_ERR(v3d->regs);
++
++	vc4->v3d = v3d;
++
++	ret = vc4_v3d_set_power(vc4, true);
++	if (ret)
++		return ret;
++
++	if (V3D_READ(V3D_IDENT0) != V3D_EXPECTED_IDENT0) {
++		DRM_ERROR("V3D_IDENT0 read 0x%08x instead of 0x%08x\n",
++			  V3D_READ(V3D_IDENT0), V3D_EXPECTED_IDENT0);
++		return -EINVAL;
++	}
++
++	/* Reset the binner overflow address/size at setup, to be sure
++	 * we don't reuse an old one.
++	 */
++	V3D_WRITE(V3D_BPOA, 0);
++	V3D_WRITE(V3D_BPOS, 0);
++
++	vc4_v3d_init_hw(drm);
++
++	ret = drm_irq_install(drm, platform_get_irq(pdev, 0));
++	if (ret) {
++		DRM_ERROR("Failed to install IRQ handler\n");
++		return ret;
++	}
++
++	return 0;
++}
++
++static void vc4_v3d_unbind(struct device *dev, struct device *master,
++			    void *data)
++{
++	struct drm_device *drm = dev_get_drvdata(master);
++	struct vc4_dev *vc4 = to_vc4_dev(drm);
++
++	drm_irq_uninstall(drm);
++
++	/* Disable the binner's overflow memory address, so the next
++	 * driver probe (if any) doesn't try to reuse our old
++	 * allocation.
++	 */
++	V3D_WRITE(V3D_BPOA, 0);
++	V3D_WRITE(V3D_BPOS, 0);
++
++	vc4_v3d_set_power(vc4, false);
++
++	vc4->v3d = NULL;
++}
++
++static const struct component_ops vc4_v3d_ops = {
++	.bind   = vc4_v3d_bind,
++	.unbind = vc4_v3d_unbind,
++};
++
++static int vc4_v3d_dev_probe(struct platform_device *pdev)
++{
++	return component_add(&pdev->dev, &vc4_v3d_ops);
++}
++
++static int vc4_v3d_dev_remove(struct platform_device *pdev)
++{
++	component_del(&pdev->dev, &vc4_v3d_ops);
++	return 0;
++}
++
++static const struct of_device_id vc4_v3d_dt_match[] = {
++	{ .compatible = "brcm,vc4-v3d" },
++	{}
++};
++
++struct platform_driver vc4_v3d_driver = {
++	.probe = vc4_v3d_dev_probe,
++	.remove = vc4_v3d_dev_remove,
++	.driver = {
++		.name = "vc4_v3d",
++		.of_match_table = vc4_v3d_dt_match,
++	},
++};
+--- /dev/null
++++ b/drivers/gpu/drm/vc4/vc4_validate.c
+@@ -0,0 +1,958 @@
++/*
++ * Copyright © 2014 Broadcom
++ *
++ * Permission is hereby granted, free of charge, to any person obtaining a
++ * copy of this software and associated documentation files (the "Software"),
++ * to deal in the Software without restriction, including without limitation
++ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
++ * and/or sell copies of the Software, and to permit persons to whom the
++ * Software is furnished to do so, subject to the following conditions:
++ *
++ * The above copyright notice and this permission notice (including the next
++ * paragraph) shall be included in all copies or substantial portions of the
++ * Software.
++ *
++ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
++ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
++ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
++ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
++ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
++ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
++ * IN THE SOFTWARE.
++ */
++
++/**
++ * Command list validator for VC4.
++ *
++ * The VC4 has no IOMMU between it and system memory.  So, a user with
++ * access to execute command lists could escalate privilege by
++ * overwriting system memory (drawing to it as a framebuffer) or
++ * reading system memory it shouldn't (reading it as a texture, or
++ * uniform data, or vertex data).
++ *
++ * This validates command lists to ensure that all accesses are within
++ * the bounds of the GEM objects referenced.  It explicitly whitelists
++ * packets, and looks at the offsets in any address fields to make
++ * sure they're constrained within the BOs they reference.
++ *
++ * Note that because of the validation that's happening anyway, this
++ * is where GEM relocation processing happens.
++ */
++
++#include "uapi/drm/vc4_drm.h"
++#include "vc4_drv.h"
++#include "vc4_packet.h"
++
++#define VALIDATE_ARGS \
++	struct vc4_exec_info *exec,			\
++	void *validated,				\
++	void *untrusted
++
++
++/** Return the width in pixels of a 64-byte microtile. */
++static uint32_t
++utile_width(int cpp)
++{
++	switch (cpp) {
++	case 1:
++	case 2:
++		return 8;
++	case 4:
++		return 4;
++	case 8:
++		return 2;
++	default:
++		DRM_ERROR("unknown cpp: %d\n", cpp);
++		return 1;
++	}
++}
++
++/** Return the height in pixels of a 64-byte microtile. */
++static uint32_t
++utile_height(int cpp)
++{
++	switch (cpp) {
++	case 1:
++		return 8;
++	case 2:
++	case 4:
++	case 8:
++		return 4;
++	default:
++		DRM_ERROR("unknown cpp: %d\n", cpp);
++		return 1;
++	}
++}
++
++/**
++ * The texture unit decides what tiling format a particular miplevel is using
++ * this function, so we lay out our miptrees accordingly.
++ */
++static bool
++size_is_lt(uint32_t width, uint32_t height, int cpp)
++{
++	return (width <= 4 * utile_width(cpp) ||
++		height <= 4 * utile_height(cpp));
++}
++
++bool
++vc4_use_bo(struct vc4_exec_info *exec,
++	   uint32_t hindex,
++	   enum vc4_bo_mode mode,
++	   struct drm_gem_cma_object **obj)
++{
++	*obj = NULL;
++
++	if (hindex >= exec->bo_count) {
++		DRM_ERROR("BO index %d greater than BO count %d\n",
++			  hindex, exec->bo_count);
++		return false;
++	}
++
++	if (exec->bo[hindex].mode != mode) {
++		if (exec->bo[hindex].mode == VC4_MODE_UNDECIDED) {
++			exec->bo[hindex].mode = mode;
++		} else {
++			DRM_ERROR("BO index %d reused with mode %d vs %d\n",
++				  hindex, exec->bo[hindex].mode, mode);
++			return false;
++		}
++	}
++
++	*obj = exec->bo[hindex].bo;
++	return true;
++}
++
++static bool
++vc4_use_handle(struct vc4_exec_info *exec,
++	       uint32_t gem_handles_packet_index,
++	       enum vc4_bo_mode mode,
++	       struct drm_gem_cma_object **obj)
++{
++	return vc4_use_bo(exec, exec->bo_index[gem_handles_packet_index],
++			  mode, obj);
++}
++
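++/* The GL shader record pointer packs the attribute count in its low three
++ * bits (0 meaning 8 attributes) and an "extended" flag in bit 3 selecting
++ * the larger record layout.
++ */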
++static uint32_t
++gl_shader_rec_size(uint32_t pointer_bits)
++{
++	uint32_t attribute_count = pointer_bits & 7;
++	bool extended = pointer_bits & 8;
++
++	if (attribute_count == 0)
++		attribute_count = 8;
++
++	if (extended)
++		return 100 + attribute_count * 4;
++	else
++		return 36 + attribute_count * 8;
++}
++
++bool
++vc4_check_tex_size(struct vc4_exec_info *exec, struct drm_gem_cma_object *fbo,
++		   uint32_t offset, uint8_t tiling_format,
++		   uint32_t width, uint32_t height, uint8_t cpp)
++{
++	uint32_t aligned_width, aligned_height, stride, size;
++	uint32_t utile_w = utile_width(cpp);
++	uint32_t utile_h = utile_height(cpp);
++
++	/* The shaded vertex format stores signed 12.4 fixed point
++	 * (-2048,2047) offsets from the viewport center, so we should
++	 * never have a render target larger than 4096.  The texture
++	 * unit can only sample from 2048x2048, so it's even more
++	 * restricted.  This lets us avoid worrying about overflow in
++	 * our math.
++	 */
++	if (width > 4096 || height > 4096) {
++		DRM_ERROR("Surface dimesions (%d,%d) too large", width, height);
++		return false;
++	}
++
++	switch (tiling_format) {
++	case VC4_TILING_FORMAT_LINEAR:
++		aligned_width = round_up(width, utile_w);
++		aligned_height = height;
++		break;
++	case VC4_TILING_FORMAT_T:
++		aligned_width = round_up(width, utile_w * 8);
++		aligned_height = round_up(height, utile_h * 8);
++		break;
++	case VC4_TILING_FORMAT_LT:
++		aligned_width = round_up(width, utile_w);
++		aligned_height = round_up(height, utile_h);
++		break;
++	default:
++		DRM_ERROR("buffer tiling %d unsupported\n", tiling_format);
++		return false;
++	}
++
++	stride = aligned_width * cpp;
++	size = stride * aligned_height;
++
++	if (size + offset < size ||
++	    size + offset > fbo->base.size) {
++		DRM_ERROR("Overflow in %dx%d (%dx%d) fbo size (%d + %d > %d)\n",
++			  width, height,
++			  aligned_width, aligned_height,
++			  size, offset, fbo->base.size);
++		return false;
++	}
++
++	return true;
++}
++
++static int
++validate_flush_all(VALIDATE_ARGS)
++{
++	if (exec->found_increment_semaphore_packet) {
++		DRM_ERROR("VC4_PACKET_FLUSH_ALL after "
++			  "VC4_PACKET_INCREMENT_SEMAPHORE\n");
++		return -EINVAL;
++	}
++
++	return 0;
++}
++
++static int
++validate_start_tile_binning(VALIDATE_ARGS)
++{
++	if (exec->found_start_tile_binning_packet) {
++		DRM_ERROR("Duplicate VC4_PACKET_START_TILE_BINNING\n");
++		return -EINVAL;
++	}
++	exec->found_start_tile_binning_packet = true;
++
++	if (!exec->found_tile_binning_mode_config_packet) {
++		DRM_ERROR("missing VC4_PACKET_TILE_BINNING_MODE_CONFIG\n");
++		return -EINVAL;
++	}
++
++	return 0;
++}
++
++static int
++validate_increment_semaphore(VALIDATE_ARGS)
++{
++	if (exec->found_increment_semaphore_packet) {
++		DRM_ERROR("Duplicate VC4_PACKET_INCREMENT_SEMAPHORE\n");
++		return -EINVAL;
++	}
++	exec->found_increment_semaphore_packet = true;
++
++	/* Once we've found the semaphore increment, there should be one FLUSH
++	 * then the end of the command list.  The FLUSH actually triggers the
++	 * increment, so we only need to make sure nothing draws after it.
++	 */
++
++	return 0;
++}
++
++static int
++validate_indexed_prim_list(VALIDATE_ARGS)
++{
++	struct drm_gem_cma_object *ib;
++	uint32_t length = *(uint32_t *)(untrusted + 1);
++	uint32_t offset = *(uint32_t *)(untrusted + 5);
++	uint32_t max_index = *(uint32_t *)(untrusted + 9);
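++	/* The upper bits of the first byte hold the index type; non-zero
++	 * means 16-bit indices, otherwise 8-bit.
++	 */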
++	uint32_t index_size = (*(uint8_t *)(untrusted + 0) >> 4) ? 2 : 1;
++	struct vc4_shader_state *shader_state;
++
++	if (exec->found_increment_semaphore_packet) {
++		DRM_ERROR("Drawing after VC4_PACKET_INCREMENT_SEMAPHORE\n");
++		return -EINVAL;
++	}
++
++	/* Check overflow condition */
++	if (exec->shader_state_count == 0) {
++		DRM_ERROR("shader state must precede primitives\n");
++		return -EINVAL;
++	}
++	shader_state = &exec->shader_state[exec->shader_state_count - 1];
++
++	if (max_index > shader_state->max_index)
++		shader_state->max_index = max_index;
++
++	if (!vc4_use_handle(exec, 0, VC4_MODE_RENDER, &ib))
++		return -EINVAL;
++
++	if (offset > ib->base.size ||
++	    (ib->base.size - offset) / index_size < length) {
++		DRM_ERROR("IB access overflow (%d + %d*%d > %d)\n",
++			  offset, length, index_size, ib->base.size);
++		return -EINVAL;
++	}
++
++	*(uint32_t *)(validated + 5) = ib->paddr + offset;
++
++	return 0;
++}
++
++static int
++validate_gl_array_primitive(VALIDATE_ARGS)
++{
++	uint32_t length = *(uint32_t *)(untrusted + 1);
++	uint32_t base_index = *(uint32_t *)(untrusted + 5);
++	uint32_t max_index;
++	struct vc4_shader_state *shader_state;
++
++	if (exec->found_increment_semaphore_packet) {
++		DRM_ERROR("Drawing after VC4_PACKET_INCREMENT_SEMAPHORE\n");
++		return -EINVAL;
++	}
++
++	/* Check overflow condition */
++	if (exec->shader_state_count == 0) {
++		DRM_ERROR("shader state must precede primitives\n");
++		return -EINVAL;
++	}
++	shader_state = &exec->shader_state[exec->shader_state_count - 1];
++
++	if (length + base_index < length) {
++		DRM_ERROR("primitive vertex count overflow\n");
++		return -EINVAL;
++	}
++	max_index = length + base_index - 1;
++
++	if (max_index > shader_state->max_index)
++		shader_state->max_index = max_index;
++
++	return 0;
++}
++
++static int
++validate_gl_shader_state(VALIDATE_ARGS)
++{
++	uint32_t i = exec->shader_state_count++;
++
++	if (i >= exec->shader_state_size) {
++		DRM_ERROR("More requests for shader states than declared\n");
++		return -EINVAL;
++	}
++
++	exec->shader_state[i].packet = VC4_PACKET_GL_SHADER_STATE;
++	exec->shader_state[i].addr = *(uint32_t *)untrusted;
++	exec->shader_state[i].max_index = 0;
++
++	if (exec->shader_state[i].addr & ~0xf) {
++		DRM_ERROR("high bits set in GL shader rec reference\n");
++		return -EINVAL;
++	}
++
++	*(uint32_t *)validated = (exec->shader_rec_p +
++				  exec->shader_state[i].addr);
++
++	exec->shader_rec_p +=
++		roundup(gl_shader_rec_size(exec->shader_state[i].addr), 16);
++
++	return 0;
++}
++
++static int
++validate_nv_shader_state(VALIDATE_ARGS)
++{
++	uint32_t i = exec->shader_state_count++;
++
++	if (i >= exec->shader_state_size) {
++		DRM_ERROR("More requests for shader states than declared\n");
++		return -EINVAL;
++	}
++
++	exec->shader_state[i].packet = VC4_PACKET_NV_SHADER_STATE;
++	exec->shader_state[i].addr = *(uint32_t *)untrusted;
++
++	if (exec->shader_state[i].addr & 15) {
++		DRM_ERROR("NV shader state address 0x%08x misaligned\n",
++			  exec->shader_state[i].addr);
++		return -EINVAL;
++	}
++
++	*(uint32_t *)validated = (exec->shader_state[i].addr +
++				  exec->shader_rec_p);
++
++	return 0;
++}
++
++static int
++validate_tile_binning_config(VALIDATE_ARGS)
++{
++	struct drm_device *dev = exec->exec_bo->base.dev;
++	uint8_t flags;
++	uint32_t tile_state_size, tile_alloc_size;
++	uint32_t tile_count;
++
++	if (exec->found_tile_binning_mode_config_packet) {
++		DRM_ERROR("Duplicate VC4_PACKET_TILE_BINNING_MODE_CONFIG\n");
++		return -EINVAL;
++	}
++	exec->found_tile_binning_mode_config_packet = true;
++
++	exec->bin_tiles_x = *(uint8_t *)(untrusted + 12);
++	exec->bin_tiles_y = *(uint8_t *)(untrusted + 13);
++	tile_count = exec->bin_tiles_x * exec->bin_tiles_y;
++	flags = *(uint8_t *)(untrusted + 14);
++
++	if (exec->bin_tiles_x == 0 ||
++	    exec->bin_tiles_y == 0) {
++		DRM_ERROR("Tile binning config of %dx%d too small\n",
++			  exec->bin_tiles_x, exec->bin_tiles_y);
++		return -EINVAL;
++	}
++
++	if (flags & (VC4_BIN_CONFIG_DB_NON_MS |
++		     VC4_BIN_CONFIG_TILE_BUFFER_64BIT |
++		     VC4_BIN_CONFIG_MS_MODE_4X)) {
++		DRM_ERROR("unsupported bining config flags 0x%02x\n", flags);
++		return -EINVAL;
++	}
++
++	/* The tile state data array is 48 bytes per tile, and we put it at
++	 * the start of a BO containing both it and the tile alloc.
++	 */
++	tile_state_size = 48 * tile_count;
++
++	/* Since the tile alloc array will follow us, align. */
++	exec->tile_alloc_offset = roundup(tile_state_size, 4096);
++
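++	/* Rewrite the flags byte: force auto-initialised tile state and our
++	 * own allocation block sizes, ignoring whatever userspace requested
++	 * for those fields.
++	 */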
++	*(uint8_t *)(validated + 14) =
++		((flags & ~(VC4_BIN_CONFIG_ALLOC_INIT_BLOCK_SIZE_MASK |
++			    VC4_BIN_CONFIG_ALLOC_BLOCK_SIZE_MASK)) |
++		 VC4_BIN_CONFIG_AUTO_INIT_TSDA |
++		 VC4_SET_FIELD(VC4_BIN_CONFIG_ALLOC_INIT_BLOCK_SIZE_32,
++			       VC4_BIN_CONFIG_ALLOC_INIT_BLOCK_SIZE) |
++		 VC4_SET_FIELD(VC4_BIN_CONFIG_ALLOC_BLOCK_SIZE_128,
++			       VC4_BIN_CONFIG_ALLOC_BLOCK_SIZE));
++
++	/* Initial block size. */
++	tile_alloc_size = 32 * tile_count;
++
++	/*
++	 * The initial allocation gets rounded to the next 256 bytes before
++	 * the hardware starts fulfilling further allocations.
++	 */
++	tile_alloc_size = roundup(tile_alloc_size, 256);
++
++	/* Add space for the extra allocations.  This is what gets used first,
++	 * before overflow memory.  It must have at least 4096 bytes, but we
++	 * want to avoid overflow memory usage if possible.
++	 */
++	tile_alloc_size += 1024 * 1024;
++
++	exec->tile_bo = &vc4_bo_create(dev, exec->tile_alloc_offset +
++				       tile_alloc_size)->base;
++	if (!exec->tile_bo)
++		return -ENOMEM;
++	list_add_tail(&to_vc4_bo(&exec->tile_bo->base)->unref_head,
++		     &exec->unref_list);
++
++	/* tile alloc address. */
++	*(uint32_t *)(validated + 0) = (exec->tile_bo->paddr +
++					exec->tile_alloc_offset);
++	/* tile alloc size. */
++	*(uint32_t *)(validated + 4) = tile_alloc_size;
++	/* tile state address. */
++	*(uint32_t *)(validated + 8) = exec->tile_bo->paddr;
++
++	return 0;
++}
++
++static int
++validate_gem_handles(VALIDATE_ARGS)
++{
++	memcpy(exec->bo_index, untrusted, sizeof(exec->bo_index));
++	return 0;
++}
++
++#define VC4_DEFINE_PACKET(packet, name, func) \
++	[packet] = { packet ## _SIZE, name, func }
++
++static const struct cmd_info {
++	uint16_t len;
++	const char *name;
++	int (*func)(struct vc4_exec_info *exec, void *validated,
++		    void *untrusted);
++} cmd_info[] = {
++	VC4_DEFINE_PACKET(VC4_PACKET_HALT, "halt", NULL),
++	VC4_DEFINE_PACKET(VC4_PACKET_NOP, "nop", NULL),
++	VC4_DEFINE_PACKET(VC4_PACKET_FLUSH, "flush", NULL),
++	VC4_DEFINE_PACKET(VC4_PACKET_FLUSH_ALL, "flush all state", validate_flush_all),
++	VC4_DEFINE_PACKET(VC4_PACKET_START_TILE_BINNING, "start tile binning", validate_start_tile_binning),
++	VC4_DEFINE_PACKET(VC4_PACKET_INCREMENT_SEMAPHORE, "increment semaphore", validate_increment_semaphore),
++
++	VC4_DEFINE_PACKET(VC4_PACKET_GL_INDEXED_PRIMITIVE, "Indexed Primitive List", validate_indexed_prim_list),
++
++	VC4_DEFINE_PACKET(VC4_PACKET_GL_ARRAY_PRIMITIVE, "Vertex Array Primitives", validate_gl_array_primitive),
++
++	/* This is only used by clipped primitives (packets 48 and 49), which
++	 * we don't support parsing yet.
++	 */
++	VC4_DEFINE_PACKET(VC4_PACKET_PRIMITIVE_LIST_FORMAT, "primitive list format", NULL),
++
++	VC4_DEFINE_PACKET(VC4_PACKET_GL_SHADER_STATE, "GL Shader State", validate_gl_shader_state),
++	VC4_DEFINE_PACKET(VC4_PACKET_NV_SHADER_STATE, "NV Shader State", validate_nv_shader_state),
++
++	VC4_DEFINE_PACKET(VC4_PACKET_CONFIGURATION_BITS, "configuration bits", NULL),
++	VC4_DEFINE_PACKET(VC4_PACKET_FLAT_SHADE_FLAGS, "flat shade flags", NULL),
++	VC4_DEFINE_PACKET(VC4_PACKET_POINT_SIZE, "point size", NULL),
++	VC4_DEFINE_PACKET(VC4_PACKET_LINE_WIDTH, "line width", NULL),
++	VC4_DEFINE_PACKET(VC4_PACKET_RHT_X_BOUNDARY, "RHT X boundary", NULL),
++	VC4_DEFINE_PACKET(VC4_PACKET_DEPTH_OFFSET, "Depth Offset", NULL),
++	VC4_DEFINE_PACKET(VC4_PACKET_CLIP_WINDOW, "Clip Window", NULL),
++	VC4_DEFINE_PACKET(VC4_PACKET_VIEWPORT_OFFSET, "Viewport Offset", NULL),
++	VC4_DEFINE_PACKET(VC4_PACKET_CLIPPER_XY_SCALING, "Clipper XY Scaling", NULL),
++	/* Note: The docs say this was also 105, but it was 106 in the
++	 * initial userland code drop.
++	 */
++	VC4_DEFINE_PACKET(VC4_PACKET_CLIPPER_Z_SCALING, "Clipper Z Scale and Offset", NULL),
++
++	VC4_DEFINE_PACKET(VC4_PACKET_TILE_BINNING_MODE_CONFIG, "tile binning configuration", validate_tile_binning_config),
++
++	VC4_DEFINE_PACKET(VC4_PACKET_GEM_HANDLES, "GEM handles", validate_gem_handles),
++};
++
++int
++vc4_validate_bin_cl(struct drm_device *dev,
++		    void *validated,
++		    void *unvalidated,
++		    struct vc4_exec_info *exec)
++{
++	uint32_t len = exec->args->bin_cl_size;
++	uint32_t dst_offset = 0;
++	uint32_t src_offset = 0;
++
++	while (src_offset < len) {
++		void *dst_pkt = validated + dst_offset;
++		void *src_pkt = unvalidated + src_offset;
++		u8 cmd = *(uint8_t *)src_pkt;
++		const struct cmd_info *info;
++
++		if (cmd >= ARRAY_SIZE(cmd_info)) {
++			DRM_ERROR("0x%08x: packet %d out of bounds\n",
++				  src_offset, cmd);
++			return -EINVAL;
++		}
++
++		info = &cmd_info[cmd];
++		if (!info->name) {
++			DRM_ERROR("0x%08x: packet %d invalid\n",
++				  src_offset, cmd);
++			return -EINVAL;
++		}
++
++#if 0
++		DRM_INFO("0x%08x: packet %d (%s) size %d processing...\n",
++			 src_offset, cmd, info->name, info->len);
++#endif
++
++		if (src_offset + info->len > len) {
++			DRM_ERROR("0x%08x: packet %d (%s) length 0x%08x "
++				  "exceeds bounds (0x%08x)\n",
++				  src_offset, cmd, info->name, info->len,
++				  src_offset + len);
++			return -EINVAL;
++		}
++
++		if (cmd != VC4_PACKET_GEM_HANDLES)
++			memcpy(dst_pkt, src_pkt, info->len);
++
++		if (info->func && info->func(exec,
++					     dst_pkt + 1,
++					     src_pkt + 1)) {
++			DRM_ERROR("0x%08x: packet %d (%s) failed to "
++				  "validate\n",
++				  src_offset, cmd, info->name);
++			return -EINVAL;
++		}
++
++		src_offset += info->len;
++		/* GEM handle loading doesn't produce HW packets. */
++		if (cmd != VC4_PACKET_GEM_HANDLES)
++			dst_offset += info->len;
++
++		/* When the CL hits halt, it'll stop reading anything else. */
++		if (cmd == VC4_PACKET_HALT)
++			break;
++	}
++
++	exec->ct0ea = exec->ct0ca + dst_offset;
++
++	if (!exec->found_start_tile_binning_packet) {
++		DRM_ERROR("Bin CL missing VC4_PACKET_START_TILE_BINNING\n");
++		return -EINVAL;
++	}
++
++	if (!exec->found_increment_semaphore_packet) {
++		DRM_ERROR("Bin CL missing VC4_PACKET_INCREMENT_SEMAPHORE\n");
++		return -EINVAL;
++	}
++
++	return 0;
++}
++
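++/* Validates a texture sample: p0-p3 are the texture parameter words pulled
++ * from the uniform stream (p2/p3 only when present), bounds-checked against
++ * the referenced BO before the texture address is relocated.
++ */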
++static bool
++reloc_tex(struct vc4_exec_info *exec,
++	  void *uniform_data_u,
++	  struct vc4_texture_sample_info *sample,
++	  uint32_t texture_handle_index)
++
++{
++	struct drm_gem_cma_object *tex;
++	uint32_t p0 = *(uint32_t *)(uniform_data_u + sample->p_offset[0]);
++	uint32_t p1 = *(uint32_t *)(uniform_data_u + sample->p_offset[1]);
++	uint32_t p2 = (sample->p_offset[2] != ~0 ?
++		       *(uint32_t *)(uniform_data_u + sample->p_offset[2]) : 0);
++	uint32_t p3 = (sample->p_offset[3] != ~0 ?
++		       *(uint32_t *)(uniform_data_u + sample->p_offset[3]) : 0);
++	uint32_t *validated_p0 = exec->uniforms_v + sample->p_offset[0];
++	uint32_t offset = p0 & VC4_TEX_P0_OFFSET_MASK;
++	uint32_t miplevels = VC4_GET_FIELD(p0, VC4_TEX_P0_MIPLVLS);
++	uint32_t width = VC4_GET_FIELD(p1, VC4_TEX_P1_WIDTH);
++	uint32_t height = VC4_GET_FIELD(p1, VC4_TEX_P1_HEIGHT);
++	uint32_t cpp, tiling_format, utile_w, utile_h;
++	uint32_t i;
++	uint32_t cube_map_stride = 0;
++	enum vc4_texture_data_type type;
++
++	if (!vc4_use_bo(exec, texture_handle_index, VC4_MODE_RENDER, &tex))
++		return false;
++
++	if (sample->is_direct) {
++		uint32_t remaining_size = tex->base.size - p0;
++		if (p0 > tex->base.size - 4) {
++			DRM_ERROR("UBO offset greater than UBO size\n");
++			goto fail;
++		}
++		if (p1 > remaining_size - 4) {
++			DRM_ERROR("UBO clamp would allow reads outside of UBO\n");
++			goto fail;
++		}
++		*validated_p0 = tex->paddr + p0;
++		return true;
++	}
++
++	if (width == 0)
++		width = 2048;
++	if (height == 0)
++		height = 2048;
++
++	if (p0 & VC4_TEX_P0_CMMODE_MASK) {
++		if (VC4_GET_FIELD(p2, VC4_TEX_P2_PTYPE) ==
++		    VC4_TEX_P2_PTYPE_CUBE_MAP_STRIDE)
++			cube_map_stride = p2 & VC4_TEX_P2_CMST_MASK;
++		if (VC4_GET_FIELD(p3, VC4_TEX_P2_PTYPE) ==
++		    VC4_TEX_P2_PTYPE_CUBE_MAP_STRIDE) {
++			if (cube_map_stride) {
++				DRM_ERROR("Cube map stride set twice\n");
++				goto fail;
++			}
++
++			cube_map_stride = p3 & VC4_TEX_P2_CMST_MASK;
++		}
++		if (!cube_map_stride) {
++			DRM_ERROR("Cube map stride not set\n");
++			goto fail;
++		}
++	}
++
++	type = (VC4_GET_FIELD(p0, VC4_TEX_P0_TYPE) |
++		(VC4_GET_FIELD(p1, VC4_TEX_P1_TYPE4) << 4));
++
++	switch (type) {
++	case VC4_TEXTURE_TYPE_RGBA8888:
++	case VC4_TEXTURE_TYPE_RGBX8888:
++	case VC4_TEXTURE_TYPE_RGBA32R:
++		cpp = 4;
++		break;
++	case VC4_TEXTURE_TYPE_RGBA4444:
++	case VC4_TEXTURE_TYPE_RGBA5551:
++	case VC4_TEXTURE_TYPE_RGB565:
++	case VC4_TEXTURE_TYPE_LUMALPHA:
++	case VC4_TEXTURE_TYPE_S16F:
++	case VC4_TEXTURE_TYPE_S16:
++		cpp = 2;
++		break;
++	case VC4_TEXTURE_TYPE_LUMINANCE:
++	case VC4_TEXTURE_TYPE_ALPHA:
++	case VC4_TEXTURE_TYPE_S8:
++		cpp = 1;
++		break;
++	case VC4_TEXTURE_TYPE_ETC1:
++	case VC4_TEXTURE_TYPE_BW1:
++	case VC4_TEXTURE_TYPE_A4:
++	case VC4_TEXTURE_TYPE_A1:
++	case VC4_TEXTURE_TYPE_RGBA64:
++	case VC4_TEXTURE_TYPE_YUV422R:
++	default:
++		DRM_ERROR("Texture format %d unsupported\n", type);
++		goto fail;
++	}
++	utile_w = utile_width(cpp);
++	utile_h = utile_height(cpp);
++
++	if (type == VC4_TEXTURE_TYPE_RGBA32R) {
++		tiling_format = VC4_TILING_FORMAT_LINEAR;
++	} else {
++		if (size_is_lt(width, height, cpp))
++			tiling_format = VC4_TILING_FORMAT_LT;
++		else
++			tiling_format = VC4_TILING_FORMAT_T;
++	}
++
++	if (!vc4_check_tex_size(exec, tex, offset + cube_map_stride * 5,
++				tiling_format, width, height, cpp)) {
++		goto fail;
++	}
++
++	/* The mipmap levels are stored before the base of the texture.  Make
++	 * sure there is actually space in the BO.
++	 */
++	for (i = 1; i <= miplevels; i++) {
++		uint32_t level_width = max(width >> i, 1u);
++		uint32_t level_height = max(height >> i, 1u);
++		uint32_t aligned_width, aligned_height;
++		uint32_t level_size;
++
++		/* Once the levels get small enough, they drop from T to LT. */
++		if (tiling_format == VC4_TILING_FORMAT_T &&
++		    size_is_lt(level_width, level_height, cpp)) {
++			tiling_format = VC4_TILING_FORMAT_LT;
++		}
++
++		switch (tiling_format) {
++		case VC4_TILING_FORMAT_T:
++			aligned_width = round_up(level_width, utile_w * 8);
++			aligned_height = round_up(level_height, utile_h * 8);
++			break;
++		case VC4_TILING_FORMAT_LT:
++			aligned_width = round_up(level_width, utile_w);
++			aligned_height = round_up(level_height, utile_h);
++			break;
++		default:
++			aligned_width = round_up(level_width, utile_w);
++			aligned_height = level_height;
++			break;
++		}
++
++		level_size = aligned_width * cpp * aligned_height;
++
++		if (offset < level_size) {
++			DRM_ERROR("Level %d (%dx%d -> %dx%d) size %db "
++				  "overflowed buffer bounds (offset %d)\n",
++				  i, level_width, level_height,
++				  aligned_width, aligned_height,
++				  level_size, offset);
++			goto fail;
++		}
++
++		offset -= level_size;
++	}
++
++	*validated_p0 = tex->paddr + p0;
++
++	return true;
++ fail:
++	DRM_INFO("Texture p0 at %d: 0x%08x\n", sample->p_offset[0], p0);
++	DRM_INFO("Texture p1 at %d: 0x%08x\n", sample->p_offset[1], p1);
++	DRM_INFO("Texture p2 at %d: 0x%08x\n", sample->p_offset[2], p2);
++	DRM_INFO("Texture p3 at %d: 0x%08x\n", sample->p_offset[3], p3);
++	return false;
++}
++
++static int
++validate_shader_rec(struct drm_device *dev,
++		    struct vc4_exec_info *exec,
++		    struct vc4_shader_state *state)
++{
++	uint32_t *src_handles;
++	void *pkt_u, *pkt_v;
++	enum shader_rec_reloc_type {
++		RELOC_CODE,
++		RELOC_VBO,
++	};
++	struct shader_rec_reloc {
++		enum shader_rec_reloc_type type;
++		uint32_t offset;
++	};
++	static const struct shader_rec_reloc gl_relocs[] = {
++		{ RELOC_CODE, 4 },  /* fs */
++		{ RELOC_CODE, 16 }, /* vs */
++		{ RELOC_CODE, 28 }, /* cs */
++	};
++	static const struct shader_rec_reloc nv_relocs[] = {
++		{ RELOC_CODE, 4 }, /* fs */
++		{ RELOC_VBO, 12 }
++	};
++	const struct shader_rec_reloc *relocs;
++	struct drm_gem_cma_object *bo[ARRAY_SIZE(gl_relocs) + 8];
++	uint32_t nr_attributes = 0, nr_fixed_relocs, nr_relocs, packet_size;
++	int i;
++	struct vc4_validated_shader_info *validated_shader;
++
++	if (state->packet == VC4_PACKET_NV_SHADER_STATE) {
++		relocs = nv_relocs;
++		nr_fixed_relocs = ARRAY_SIZE(nv_relocs);
++
++		packet_size = 16;
++	} else {
++		relocs = gl_relocs;
++		nr_fixed_relocs = ARRAY_SIZE(gl_relocs);
++
++		nr_attributes = state->addr & 0x7;
++		if (nr_attributes == 0)
++			nr_attributes = 8;
++		packet_size = gl_shader_rec_size(state->addr);
++	}
++	nr_relocs = nr_fixed_relocs + nr_attributes;
++
++	if (nr_relocs * 4 > exec->shader_rec_size) {
++		DRM_ERROR("overflowed shader recs reading %d handles "
++			  "from %d bytes left\n",
++			  nr_relocs, exec->shader_rec_size);
++		return -EINVAL;
++	}
++	src_handles = exec->shader_rec_u;
++	exec->shader_rec_u += nr_relocs * 4;
++	exec->shader_rec_size -= nr_relocs * 4;
++
++	if (packet_size > exec->shader_rec_size) {
++		DRM_ERROR("overflowed shader recs copying %db packet "
++			  "from %d bytes left\n",
++			  packet_size, exec->shader_rec_size);
++		return -EINVAL;
++	}
++	pkt_u = exec->shader_rec_u;
++	pkt_v = exec->shader_rec_v;
++	memcpy(pkt_v, pkt_u, packet_size);
++	exec->shader_rec_u += packet_size;
++	/* Shader recs have to be aligned to 16 bytes (due to the attribute
++	 * flags being in the low bytes), so round the next validated shader
++	 * rec address up.  This should be safe, since we've got so many
++	 * relocations in a shader rec packet.
++	 */
++	BUG_ON(roundup(packet_size, 16) - packet_size > nr_relocs * 4);
++	exec->shader_rec_v += roundup(packet_size, 16);
++	exec->shader_rec_size -= packet_size;
++
++	for (i = 0; i < nr_relocs; i++) {
++		enum vc4_bo_mode mode;
++
++		if (i < nr_fixed_relocs && relocs[i].type == RELOC_CODE)
++			mode = VC4_MODE_SHADER;
++		else
++			mode = VC4_MODE_RENDER;
++
++		if (!vc4_use_bo(exec, src_handles[i], mode, &bo[i])) {
++			return -EINVAL;
++		}
++	}
++
++	for (i = 0; i < nr_fixed_relocs; i++) {
++		uint32_t o = relocs[i].offset;
++		uint32_t src_offset = *(uint32_t *)(pkt_u + o);
++		uint32_t *texture_handles_u;
++		void *uniform_data_u;
++		uint32_t tex;
++
++		*(uint32_t *)(pkt_v + o) = bo[i]->paddr + src_offset;
++
++		switch (relocs[i].type) {
++		case RELOC_CODE:
++			if (src_offset != 0) {
++				DRM_ERROR("Shaders must be at offset 0 of "
++					  "the BO.\n");
++				goto fail;
++			}
++
++			validated_shader = to_vc4_bo(&bo[i]->base)->validated_shader;
++			if (!validated_shader)
++				goto fail;
++
++			if (validated_shader->uniforms_src_size >
++			    exec->uniforms_size) {
++				DRM_ERROR("Uniforms src buffer overflow\n");
++				goto fail;
++			}
++
++			texture_handles_u = exec->uniforms_u;
++			uniform_data_u = (texture_handles_u +
++					  validated_shader->num_texture_samples);
++
++			memcpy(exec->uniforms_v, uniform_data_u,
++			       validated_shader->uniforms_size);
++
++			for (tex = 0;
++			     tex < validated_shader->num_texture_samples;
++			     tex++) {
++				if (!reloc_tex(exec,
++					       uniform_data_u,
++					       &validated_shader->texture_samples[tex],
++					       texture_handles_u[tex])) {
++					goto fail;
++				}
++			}
++
++			*(uint32_t *)(pkt_v + o + 4) = exec->uniforms_p;
++
++			exec->uniforms_u += validated_shader->uniforms_src_size;
++			exec->uniforms_v += validated_shader->uniforms_size;
++			exec->uniforms_p += validated_shader->uniforms_size;
++
++			break;
++
++		case RELOC_VBO:
++			break;
++		}
++	}
++
++	for (i = 0; i < nr_attributes; i++) {
++		struct drm_gem_cma_object *vbo = bo[nr_fixed_relocs + i];
++		uint32_t o = 36 + i * 8;
++		uint32_t offset = *(uint32_t *)(pkt_u + o + 0);
++		uint32_t attr_size = *(uint8_t *)(pkt_u + o + 4) + 1;
++		uint32_t stride = *(uint8_t *)(pkt_u + o + 5);
++		uint32_t max_index;
++
++		if (state->addr & 0x8)
++			stride |= (*(uint32_t *)(pkt_u + 100 + i * 4)) & ~0xff;
++
++		if (vbo->base.size < offset ||
++		    vbo->base.size - offset < attr_size) {
++			DRM_ERROR("BO offset overflow (%d + %d > %d)\n",
++				  offset, attr_size, vbo->base.size);
++			return -EINVAL;
++		}
++
++		if (stride != 0) {
++			max_index = ((vbo->base.size - offset - attr_size) /
++				     stride);
++			if (state->max_index > max_index) {
++				DRM_ERROR("primitives use index %d out of supplied %d\n",
++					  state->max_index, max_index);
++				return -EINVAL;
++			}
++		}
++
++		*(uint32_t *)(pkt_v + o) = vbo->paddr + offset;
++	}
++
++	return 0;
++
++fail:
++	return -EINVAL;
++}
++
++int
++vc4_validate_shader_recs(struct drm_device *dev,
++			 struct vc4_exec_info *exec)
++{
++	uint32_t i;
++	int ret = 0;
++
++	for (i = 0; i < exec->shader_state_count; i++) {
++		ret = validate_shader_rec(dev, exec, &exec->shader_state[i]);
++		if (ret)
++			return ret;
++	}
++
++	return ret;
++}
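For reference, the mip-level check in reloc_tex() above walks downwards from
the level-0 offset, subtracting each level's aligned size and rejecting the
sample if the offset ever comes up short.  A worked example, illustrative
only and assuming the usual VC4 utile geometry (64-byte utiles, i.e. 4x4
pixels at 32bpp, so a T-format tile covers 32x32 pixels):

	/*
	 * Assumed: 256x256 RGBA8888 texture, T format, miplevels = 2
	 * (two reduced levels below the base level).
	 *   level 1: 128x128, already 32x32-aligned -> 128 * 4 * 128 = 65536 bytes
	 *   level 2:  64x64,  already 32x32-aligned ->  64 * 4 *  64 = 16384 bytes
	 * The smaller levels are stored before the base of the texture, so
	 * reloc_tex() only accepts the sample if the offset taken from p0 is
	 * at least 65536 + 16384 = 81920 bytes into the BO.
	 */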
+--- /dev/null
++++ b/drivers/gpu/drm/vc4/vc4_validate_shaders.c
+@@ -0,0 +1,521 @@
++/*
++ * Copyright © 2014 Broadcom
++ *
++ * Permission is hereby granted, free of charge, to any person obtaining a
++ * copy of this software and associated documentation files (the "Software"),
++ * to deal in the Software without restriction, including without limitation
++ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
++ * and/or sell copies of the Software, and to permit persons to whom the
++ * Software is furnished to do so, subject to the following conditions:
++ *
++ * The above copyright notice and this permission notice (including the next
++ * paragraph) shall be included in all copies or substantial portions of the
++ * Software.
++ *
++ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
++ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
++ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
++ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
++ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
++ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
++ * IN THE SOFTWARE.
++ */
++
++/**
++ * DOC: Shader validator for VC4.
++ *
++ * The VC4 has no IOMMU between it and system memory.  So, a user with access
++ * to execute shaders could escalate privilege by overwriting system memory
++ * (using the VPM write address register in the general-purpose DMA mode) or
++ * reading system memory it shouldn't (reading it as a texture, or uniform
++ * data, or vertex data).
++ *
++ * This walks over a shader starting from some offset within a BO, ensuring
++ * that its accesses are appropriately bounded, and recording how many texture
++ * accesses are made and where so that we can do relocations for them in the
++ * uniform stream.
++ *
++ * The kernel API has shaders stored in user-mapped BOs.  The BOs will be
++ * forcibly unmapped from the process before validation, and any cache of
++ * validated state will be flushed if the mapping is faulted back in.
++ *
++ * Storing the shaders in BOs means that the validation process will be slow
++ * due to uncached reads, but since shaders are long-lived and shader BOs are
++ * never actually modified, this shouldn't be a problem.
++ */
++
++#include "vc4_drv.h"
++#include "vc4_qpu_defines.h"
++
++struct vc4_shader_validation_state {
++	struct vc4_texture_sample_info tmu_setup[2];
++	int tmu_write_count[2];
++
++	/* For registers that were last written to by a MIN instruction with
++	 * one argument being a uniform, the address of the uniform.
++	 * Otherwise, ~0.
++	 *
++	 * This is used for the validation of direct address memory reads.
++	 */
++	uint32_t live_min_clamp_offsets[32 + 32 + 4];
++	bool live_max_clamp_regs[32 + 32 + 4];
++};
++
++static uint32_t
++waddr_to_live_reg_index(uint32_t waddr, bool is_b)
++{
++	if (waddr < 32) {
++		if (is_b)
++			return 32 + waddr;
++		else
++			return waddr;
++	} else if (waddr <= QPU_W_ACC3) {
++
++		return 64 + waddr - QPU_W_ACC0;
++	} else {
++		return ~0;
++	}
++}
++
++static uint32_t
++raddr_add_a_to_live_reg_index(uint64_t inst)
++{
++	uint32_t sig = QPU_GET_FIELD(inst, QPU_SIG);
++	uint32_t add_a = QPU_GET_FIELD(inst, QPU_ADD_A);
++	uint32_t raddr_a = QPU_GET_FIELD(inst, QPU_RADDR_A);
++	uint32_t raddr_b = QPU_GET_FIELD(inst, QPU_RADDR_B);
++
++	if (add_a == QPU_MUX_A) {
++		return raddr_a;
++	} else if (add_a == QPU_MUX_B && sig != QPU_SIG_SMALL_IMM) {
++		return 32 + raddr_b;
++	} else if (add_a <= QPU_MUX_R3) {
++		return 64 + add_a;
++	} else {
++		return ~0;
++	}
++}
++
++static bool
++is_tmu_submit(uint32_t waddr)
++{
++	return (waddr == QPU_W_TMU0_S ||
++		waddr == QPU_W_TMU1_S);
++}
++
++static bool
++is_tmu_write(uint32_t waddr)
++{
++	return (waddr >= QPU_W_TMU0_S &&
++		waddr <= QPU_W_TMU1_B);
++}
++
++static bool
++record_validated_texture_sample(struct vc4_validated_shader_info *validated_shader,
++				struct vc4_shader_validation_state *validation_state,
++				int tmu)
++{
++	uint32_t s = validated_shader->num_texture_samples;
++	int i;
++	struct vc4_texture_sample_info *temp_samples;
++
++	temp_samples = krealloc(validated_shader->texture_samples,
++				(s + 1) * sizeof(*temp_samples),
++				GFP_KERNEL);
++	if (!temp_samples)
++		return false;
++
++	memcpy(&temp_samples[s],
++	       &validation_state->tmu_setup[tmu],
++	       sizeof(*temp_samples));
++
++	validated_shader->num_texture_samples = s + 1;
++	validated_shader->texture_samples = temp_samples;
++
++	for (i = 0; i < 4; i++)
++		validation_state->tmu_setup[tmu].p_offset[i] = ~0;
++
++	return true;
++}
++
++static bool
++check_tmu_write(uint64_t inst,
++		struct vc4_validated_shader_info *validated_shader,
++		struct vc4_shader_validation_state *validation_state,
++		bool is_mul)
++{
++	uint32_t waddr = (is_mul ?
++			  QPU_GET_FIELD(inst, QPU_WADDR_MUL) :
++			  QPU_GET_FIELD(inst, QPU_WADDR_ADD));
++	uint32_t raddr_a = QPU_GET_FIELD(inst, QPU_RADDR_A);
++	uint32_t raddr_b = QPU_GET_FIELD(inst, QPU_RADDR_B);
++	int tmu = waddr > QPU_W_TMU0_B;
++	bool submit = is_tmu_submit(waddr);
++	bool is_direct = submit && validation_state->tmu_write_count[tmu] == 0;
++	uint32_t sig = QPU_GET_FIELD(inst, QPU_SIG);
++
++	if (is_direct) {
++		uint32_t add_b = QPU_GET_FIELD(inst, QPU_ADD_B);
++		uint32_t clamp_reg, clamp_offset;
++
++		if (sig == QPU_SIG_SMALL_IMM) {
++			DRM_ERROR("direct TMU read used small immediate\n");
++			return false;
++		}
++
++		/* Make sure that this texture load is an add of the base
++		 * address of the UBO to a clamped offset within the UBO.
++		 */
++		if (is_mul ||
++		    QPU_GET_FIELD(inst, QPU_OP_ADD) != QPU_A_ADD) {
++			DRM_ERROR("direct TMU load wasn't an add\n");
++			return false;
++		}
++
++		/* We assert that the clamped address is the first
++		 * argument, and the UBO base address is the second argument.
++		 * This is arbitrary, but simpler than supporting flipping the
++		 * two either way.
++		 */
++		clamp_reg = raddr_add_a_to_live_reg_index(inst);
++		if (clamp_reg == ~0) {
++			DRM_ERROR("direct TMU load wasn't clamped\n");
++			return false;
++		}
++
++		clamp_offset = validation_state->live_min_clamp_offsets[clamp_reg];
++		if (clamp_offset == ~0) {
++			DRM_ERROR("direct TMU load wasn't clamped\n");
++			return false;
++		}
++
++		/* Store the clamp value's offset in p1 (see reloc_tex() in
++		 * vc4_validate.c).
++		 */
++		validation_state->tmu_setup[tmu].p_offset[1] =
++			clamp_offset;
++
++		if (!(add_b == QPU_MUX_A && raddr_a == QPU_R_UNIF) &&
++		    !(add_b == QPU_MUX_B && raddr_b == QPU_R_UNIF)) {
++			DRM_ERROR("direct TMU load didn't add to a uniform\n");
++			return false;
++		}
++
++		validation_state->tmu_setup[tmu].is_direct = true;
++	} else {
++		if (raddr_a == QPU_R_UNIF || (sig != QPU_SIG_SMALL_IMM &&
++					      raddr_b == QPU_R_UNIF)) {
++			DRM_ERROR("uniform read in the same instruction as "
++				  "texture setup.\n");
++			return false;
++		}
++	}
++
++	if (validation_state->tmu_write_count[tmu] >= 4) {
++		DRM_ERROR("TMU%d got too many parameters before dispatch\n",
++			  tmu);
++		return false;
++	}
++	validation_state->tmu_setup[tmu].p_offset[validation_state->tmu_write_count[tmu]] =
++		validated_shader->uniforms_size;
++	validation_state->tmu_write_count[tmu]++;
++	/* Since direct uses a RADDR uniform reference, it will get counted in
++	 * check_instruction_reads()
++	 */
++	if (!is_direct)
++		validated_shader->uniforms_size += 4;
++
++	if (submit) {
++		if (!record_validated_texture_sample(validated_shader,
++						     validation_state, tmu)) {
++			return false;
++		}
++
++		validation_state->tmu_write_count[tmu] = 0;
++	}
++
++	return true;
++}
++
++static bool
++check_register_write(uint64_t inst,
++		     struct vc4_validated_shader_info *validated_shader,
++		     struct vc4_shader_validation_state *validation_state,
++		     bool is_mul)
++{
++	uint32_t waddr = (is_mul ?
++			  QPU_GET_FIELD(inst, QPU_WADDR_MUL) :
++			  QPU_GET_FIELD(inst, QPU_WADDR_ADD));
++
++	switch (waddr) {
++	case QPU_W_UNIFORMS_ADDRESS:
++		/* XXX: We'll probably need to support this for reladdr, but
++		 * it's definitely a security-related one.
++		 */
++		DRM_ERROR("uniforms address load unsupported\n");
++		return false;
++
++	case QPU_W_TLB_COLOR_MS:
++	case QPU_W_TLB_COLOR_ALL:
++	case QPU_W_TLB_Z:
++		/* These only interact with the tile buffer, not main memory,
++		 * so they're safe.
++		 */
++		return true;
++
++	case QPU_W_TMU0_S:
++	case QPU_W_TMU0_T:
++	case QPU_W_TMU0_R:
++	case QPU_W_TMU0_B:
++	case QPU_W_TMU1_S:
++	case QPU_W_TMU1_T:
++	case QPU_W_TMU1_R:
++	case QPU_W_TMU1_B:
++		return check_tmu_write(inst, validated_shader, validation_state,
++				       is_mul);
++
++	case QPU_W_HOST_INT:
++	case QPU_W_TMU_NOSWAP:
++	case QPU_W_TLB_ALPHA_MASK:
++	case QPU_W_MUTEX_RELEASE:
++		/* XXX: I haven't thought about these, so don't support them
++		 * for now.
++		 */
++		DRM_ERROR("Unsupported waddr %d\n", waddr);
++		return false;
++
++	case QPU_W_VPM_ADDR:
++		DRM_ERROR("General VPM DMA unsupported\n");
++		return false;
++
++	case QPU_W_VPM:
++	case QPU_W_VPMVCD_SETUP:
++		/* We allow VPM setup in general, even including VPM DMA
++		 * configuration setup, because the (unsafe) DMA can only be
++		 * triggered by QPU_W_VPM_ADDR writes.
++		 */
++		return true;
++
++	case QPU_W_TLB_STENCIL_SETUP:
++		return true;
++	}
++
++	return true;
++}
++
++static void
++track_live_clamps(uint64_t inst,
++		  struct vc4_validated_shader_info *validated_shader,
++		  struct vc4_shader_validation_state *validation_state)
++{
++	uint32_t op_add = QPU_GET_FIELD(inst, QPU_OP_ADD);
++	uint32_t waddr_add = QPU_GET_FIELD(inst, QPU_WADDR_ADD);
++	uint32_t waddr_mul = QPU_GET_FIELD(inst, QPU_WADDR_MUL);
++	uint32_t cond_add = QPU_GET_FIELD(inst, QPU_COND_ADD);
++	uint32_t add_a = QPU_GET_FIELD(inst, QPU_ADD_A);
++	uint32_t add_b = QPU_GET_FIELD(inst, QPU_ADD_B);
++	uint32_t raddr_a = QPU_GET_FIELD(inst, QPU_RADDR_A);
++	uint32_t raddr_b = QPU_GET_FIELD(inst, QPU_RADDR_B);
++	uint32_t sig = QPU_GET_FIELD(inst, QPU_SIG);
++	bool ws = inst & QPU_WS;
++	uint32_t lri_add_a, lri_add, lri_mul;
++	bool add_a_is_min_0;
++
++	/* Check whether OP_ADD's A argument comes from a live MAX(x, 0),
++	 * before we clear previous live state.
++	 */
++	lri_add_a = raddr_add_a_to_live_reg_index(inst);
++	add_a_is_min_0 = (lri_add_a != ~0 &&
++			  validation_state->live_max_clamp_regs[lri_add_a]);
++
++	/* Clear live state for registers written by our instruction. */
++	lri_add = waddr_to_live_reg_index(waddr_add, ws);
++	lri_mul = waddr_to_live_reg_index(waddr_mul, !ws);
++	if (lri_mul != ~0) {
++		validation_state->live_max_clamp_regs[lri_mul] = false;
++		validation_state->live_min_clamp_offsets[lri_mul] = ~0;
++	}
++	if (lri_add != ~0) {
++		validation_state->live_max_clamp_regs[lri_add] = false;
++		validation_state->live_min_clamp_offsets[lri_add] = ~0;
++	} else {
++		/* Nothing further to do for live tracking, since only ADDs
++		 * generate new live clamp registers.
++		 */
++		return;
++	}
++
++	/* Now, handle remaining live clamp tracking for the ADD operation. */
++
++	if (cond_add != QPU_COND_ALWAYS)
++		return;
++
++	if (op_add == QPU_A_MAX) {
++		/* Track live clamps of a value to a minimum of 0 (in either
++		 * arg).
++		 */
++		if (sig != QPU_SIG_SMALL_IMM || raddr_b != 0 ||
++		    (add_a != QPU_MUX_B && add_b != QPU_MUX_B)) {
++			return;
++		}
++
++		validation_state->live_max_clamp_regs[lri_add] = true;
++	} else if (op_add == QPU_A_MIN) {
++		/* Track live clamps of a value clamped to a minimum of 0 and
++		 * a maximum of some uniform's offset.
++		 */
++		if (!add_a_is_min_0)
++			return;
++
++		if (!(add_b == QPU_MUX_A && raddr_a == QPU_R_UNIF) &&
++		    !(add_b == QPU_MUX_B && raddr_b == QPU_R_UNIF &&
++		      sig != QPU_SIG_SMALL_IMM)) {
++			return;
++		}
++
++		validation_state->live_min_clamp_offsets[lri_add] =
++			validated_shader->uniforms_size;
++	}
++}
++
++static bool
++check_instruction_writes(uint64_t inst,
++			 struct vc4_validated_shader_info *validated_shader,
++			 struct vc4_shader_validation_state *validation_state)
++{
++	uint32_t waddr_add = QPU_GET_FIELD(inst, QPU_WADDR_ADD);
++	uint32_t waddr_mul = QPU_GET_FIELD(inst, QPU_WADDR_MUL);
++	bool ok;
++
++	if (is_tmu_write(waddr_add) && is_tmu_write(waddr_mul)) {
++		DRM_ERROR("ADD and MUL both set up textures\n");
++		return false;
++	}
++
++	ok = (check_register_write(inst, validated_shader, validation_state, false) &&
++	      check_register_write(inst, validated_shader, validation_state, true));
++
++	track_live_clamps(inst, validated_shader, validation_state);
++
++	return ok;
++}
++
++static bool
++check_instruction_reads(uint64_t inst,
++			struct vc4_validated_shader_info *validated_shader)
++{
++	uint32_t raddr_a = QPU_GET_FIELD(inst, QPU_RADDR_A);
++	uint32_t raddr_b = QPU_GET_FIELD(inst, QPU_RADDR_B);
++	uint32_t sig = QPU_GET_FIELD(inst, QPU_SIG);
++
++	if (raddr_a == QPU_R_UNIF ||
++	    (raddr_b == QPU_R_UNIF && sig != QPU_SIG_SMALL_IMM)) {
++		/* This can't overflow the uint32_t, because we're reading 8
++		 * bytes of instruction to increment by 4 here, so we'd
++		 * already be OOM.
++		 */
++		validated_shader->uniforms_size += 4;
++	}
++
++	return true;
++}
++
++struct vc4_validated_shader_info *
++vc4_validate_shader(struct drm_gem_cma_object *shader_obj)
++{
++	bool found_shader_end = false;
++	int shader_end_ip = 0;
++	uint32_t ip, max_ip;
++	uint64_t *shader;
++	struct vc4_validated_shader_info *validated_shader;
++	struct vc4_shader_validation_state validation_state;
++	int i;
++
++	memset(&validation_state, 0, sizeof(validation_state));
++
++	for (i = 0; i < 8; i++)
++		validation_state.tmu_setup[i / 4].p_offset[i % 4] = ~0;
++	for (i = 0; i < ARRAY_SIZE(validation_state.live_min_clamp_offsets); i++)
++		validation_state.live_min_clamp_offsets[i] = ~0;
++
++	shader = shader_obj->vaddr;
++	max_ip = shader_obj->base.size / sizeof(uint64_t);
++
++	validated_shader = kcalloc(1, sizeof(*validated_shader), GFP_KERNEL);
++	if (!validated_shader)
++		return NULL;
++
++	for (ip = 0; ip < max_ip; ip++) {
++		uint64_t inst = shader[ip];
++		uint32_t sig = QPU_GET_FIELD(inst, QPU_SIG);
++
++		switch (sig) {
++		case QPU_SIG_NONE:
++		case QPU_SIG_WAIT_FOR_SCOREBOARD:
++		case QPU_SIG_SCOREBOARD_UNLOCK:
++		case QPU_SIG_COLOR_LOAD:
++		case QPU_SIG_LOAD_TMU0:
++		case QPU_SIG_LOAD_TMU1:
++		case QPU_SIG_PROG_END:
++		case QPU_SIG_SMALL_IMM:
++			if (!check_instruction_writes(inst, validated_shader,
++						      &validation_state)) {
++				DRM_ERROR("Bad write at ip %d\n", ip);
++				goto fail;
++			}
++
++			if (!check_instruction_reads(inst, validated_shader))
++				goto fail;
++
++			if (sig == QPU_SIG_PROG_END) {
++				found_shader_end = true;
++				shader_end_ip = ip;
++			}
++
++			break;
++
++		case QPU_SIG_LOAD_IMM:
++			if (!check_instruction_writes(inst, validated_shader,
++						      &validation_state)) {
++				DRM_ERROR("Bad LOAD_IMM write at ip %d\n", ip);
++				goto fail;
++			}
++			break;
++
++		default:
++			DRM_ERROR("Unsupported QPU signal %d at "
++				  "instruction %d\n", sig, ip);
++			goto fail;
++		}
++
++		/* There are two delay slots after program end is signaled
++		 * that are still executed, then we're finished.
++		 */
++		if (found_shader_end && ip == shader_end_ip + 2)
++			break;
++	}
++
++	if (ip == max_ip) {
++		DRM_ERROR("shader failed to terminate before "
++			  "shader BO end at %d\n",
++			  shader_obj->base.size);
++		goto fail;
++	}
++
++	/* Again, no chance of integer overflow here because the worst case
++	 * scenario is 8 bytes of uniforms plus handles per 8-byte
++	 * instruction.
++	 */
++	validated_shader->uniforms_src_size =
++		(validated_shader->uniforms_size +
++		 4 * validated_shader->num_texture_samples);
++
++	return validated_shader;
++
++fail:
++	if (validated_shader) {
++		kfree(validated_shader->texture_samples);
++		kfree(validated_shader);
++	}
++	return NULL;
++}
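The validator above leans on QPU_GET_FIELD() and the QPU_W_*/QPU_SIG_*/
QPU_MUX_* encodings from vc4_qpu_defines.h, which is not part of this hunk.
A minimal sketch of the mask/shift idiom it assumes (the shift and mask
values below are placeholders, not the real encodings):

	#include <stdint.h>

	/* Every field of the 64-bit QPU instruction word is described by a
	 * _MASK/_SHIFT pair; QPU_GET_FIELD() is plain bitfield extraction.
	 */
	#define QPU_GET_FIELD(word, field) \
		((uint32_t)(((word) & field ## _MASK) >> field ## _SHIFT))

	/* Placeholder example field, not the real encoding: */
	#define QPU_SIG_SHIFT	60
	#define QPU_SIG_MASK	(0xfull << QPU_SIG_SHIFT)

	static inline uint32_t qpu_sig(uint64_t inst)
	{
		return QPU_GET_FIELD(inst, QPU_SIG);
	}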
+--- /dev/null
++++ b/include/uapi/drm/vc4_drm.h
+@@ -0,0 +1,229 @@
++/*
++ * Copyright © 2014-2015 Broadcom
++ *
++ * Permission is hereby granted, free of charge, to any person obtaining a
++ * copy of this software and associated documentation files (the "Software"),
++ * to deal in the Software without restriction, including without limitation
++ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
++ * and/or sell copies of the Software, and to permit persons to whom the
++ * Software is furnished to do so, subject to the following conditions:
++ *
++ * The above copyright notice and this permission notice (including the next
++ * paragraph) shall be included in all copies or substantial portions of the
++ * Software.
++ *
++ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
++ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
++ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
++ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
++ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
++ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
++ * IN THE SOFTWARE.
++ */
++
++#ifndef _UAPI_VC4_DRM_H_
++#define _UAPI_VC4_DRM_H_
++
++#include <drm/drm.h>
++
++#define DRM_VC4_SUBMIT_CL                         0x00
++#define DRM_VC4_WAIT_SEQNO                        0x01
++#define DRM_VC4_WAIT_BO                           0x02
++#define DRM_VC4_CREATE_BO                         0x03
++#define DRM_VC4_MMAP_BO                           0x04
++#define DRM_VC4_CREATE_SHADER_BO                  0x05
++
++#define DRM_IOCTL_VC4_SUBMIT_CL           DRM_IOWR( DRM_COMMAND_BASE + DRM_VC4_SUBMIT_CL, struct drm_vc4_submit_cl)
++#define DRM_IOCTL_VC4_WAIT_SEQNO          DRM_IOWR( DRM_COMMAND_BASE + DRM_VC4_WAIT_SEQNO, struct drm_vc4_wait_seqno)
++#define DRM_IOCTL_VC4_WAIT_BO             DRM_IOWR( DRM_COMMAND_BASE + DRM_VC4_WAIT_BO, struct drm_vc4_wait_bo)
++#define DRM_IOCTL_VC4_CREATE_BO           DRM_IOWR( DRM_COMMAND_BASE + DRM_VC4_CREATE_BO, struct drm_vc4_create_bo)
++#define DRM_IOCTL_VC4_MMAP_BO             DRM_IOWR( DRM_COMMAND_BASE + DRM_VC4_MMAP_BO, struct drm_vc4_mmap_bo)
++#define DRM_IOCTL_VC4_CREATE_SHADER_BO    DRM_IOWR( DRM_COMMAND_BASE + DRM_VC4_CREATE_SHADER_BO, struct drm_vc4_create_shader_bo)
++
++struct drm_vc4_submit_rcl_surface {
++	uint32_t hindex; /* Handle index, or ~0 if not present. */
++	uint32_t offset; /* Offset to start of buffer. */
++	/*
++	 * Bits for either render config (color_ms_write) or load/store packet.
++	 */
++	uint16_t bits;
++	uint16_t pad;
++};
++
++/**
++ * struct drm_vc4_submit_cl - ioctl argument for submitting commands to the 3D
++ * engine.
++ *
++ * Drivers typically use GPU BOs to store batchbuffers / command lists and
++ * their associated state.  However, because the VC4 lacks an MMU, we have to
++ * do validation of memory accesses by the GPU commands.  If we were to store
++ * our commands in BOs, we'd need to do uncached readback from them to do the
++ * validation process, which is too expensive.  Instead, userspace accumulates
++ * commands and associated state in plain memory, then the kernel copies the
++ * data to its own address space, and then validates and stores it in a GPU
++ * BO.
++ */
++struct drm_vc4_submit_cl {
++	/* Pointer to the binner command list.
++	 *
++	 * This is the first set of commands executed, which runs the
++	 * coordinate shader to determine where primitives land on the screen,
++	 * then writes out the state updates and draw calls necessary per tile
++	 * to the tile allocation BO.
++	 */
++	uint64_t bin_cl;
++
++	/* Pointer to the shader records.
++	 *
++	 * Shader records are the structures read by the hardware that contain
++	 * pointers to uniforms, shaders, and vertex attributes.  The
++	 * reference to the shader record has enough information to determine
++	 * how many pointers are necessary (fixed number for shaders/uniforms,
++	 * and an attribute count), so those BO indices into bo_handles are
++	 * just stored as uint32_ts before each shader record passed in.
++	 */
++	uint64_t shader_rec;
++
++	/* Pointer to uniform data and texture handles for the textures
++	 * referenced by the shader.
++	 *
++	 * For each shader state record, there is a set of uniform data in the
++	 * order referenced by the record (FS, VS, then CS).  Each set of
++	 * uniform data has a uint32_t index into bo_handles per texture
++	 * sample operation, in the order the QPU_W_TMUn_S writes appear in
++	 * the program.  Following the texture BO handle indices is the actual
++	 * uniform data.
++	 *
++	 * The individual uniform state blocks don't have sizes passed in,
++	 * because the kernel has to determine the sizes anyway during shader
++	 * code validation.
++	 */
++	uint64_t uniforms;
++	uint64_t bo_handles;
++
++	/* Size in bytes of the binner command list. */
++	uint32_t bin_cl_size;
++	/* Size in bytes of the set of shader records. */
++	uint32_t shader_rec_size;
++	/* Number of shader records.
++	 *
++	 * This could just be computed from the contents of shader_records and
++	 * the address bits of references to them from the bin CL, but it
++	 * keeps the kernel from having to resize some allocations it makes.
++	 */
++	uint32_t shader_rec_count;
++	/* Size in bytes of the uniform state. */
++	uint32_t uniforms_size;
++
++	/* Number of BO handles passed in (size is that times 4). */
++	uint32_t bo_handle_count;
++
++	/* RCL setup: */
++	uint16_t width;
++	uint16_t height;
++	uint8_t min_x_tile;
++	uint8_t min_y_tile;
++	uint8_t max_x_tile;
++	uint8_t max_y_tile;
++	struct drm_vc4_submit_rcl_surface color_read;
++	struct drm_vc4_submit_rcl_surface color_ms_write;
++	struct drm_vc4_submit_rcl_surface zs_read;
++	struct drm_vc4_submit_rcl_surface zs_write;
++	uint32_t clear_color[2];
++	uint32_t clear_z;
++	uint8_t clear_s;
++
++	uint32_t pad:24;
++
++#define VC4_SUBMIT_CL_USE_CLEAR_COLOR			(1 << 0)
++	uint32_t flags;
++
++	/* Returned value of the seqno of this render job (for the
++	 * wait ioctl).
++	 */
++	uint64_t seqno;
++};
++
++/**
++ * struct drm_vc4_wait_seqno - ioctl argument for waiting for
++ * DRM_VC4_SUBMIT_CL completion using its returned seqno.
++ *
++ * timeout_ns is the timeout in nanoseconds, where "0" means "don't
++ * block, just return the status."
++ */
++struct drm_vc4_wait_seqno {
++	uint64_t seqno;
++	uint64_t timeout_ns;
++};
++
++/**
++ * struct drm_vc4_wait_bo - ioctl argument for waiting for
++ * completion of the last DRM_VC4_SUBMIT_CL on a BO.
++ *
++ * This is useful for cases where multiple processes might be
++ * rendering to a BO and you want to wait for all rendering to be
++ * completed.
++ */
++struct drm_vc4_wait_bo {
++	uint32_t handle;
++	uint32_t pad;
++	uint64_t timeout_ns;
++};
++
++/**
++ * struct drm_vc4_create_bo - ioctl argument for creating VC4 BOs.
++ *
++ * There are currently no values for the flags argument, but it may be
++ * used in a future extension.
++ */
++struct drm_vc4_create_bo {
++	uint32_t size;
++	uint32_t flags;
++	/** Returned GEM handle for the BO. */
++	uint32_t handle;
++	uint32_t pad;
++};
++
++/**
++ * struct drm_vc4_create_shader_bo - ioctl argument for creating VC4
++ * shader BOs.
++ *
++ * Since allowing a shader to be overwritten while it's also being
++ * executed from would allow privilege escalation, shaders must be
++ * created using this ioctl, and they can't be mmapped later.
++ */
++struct drm_vc4_create_shader_bo {
++	/* Size of the data argument. */
++	uint32_t size;
++	/* Flags, currently must be 0. */
++	uint32_t flags;
++
++	/* Pointer to the data. */
++	uint64_t data;
++
++	/** Returned GEM handle for the BO. */
++	uint32_t handle;
++	/* Pad, must be 0. */
++	uint32_t pad;
++};
++
++/**
++ * struct drm_vc4_mmap_bo - ioctl argument for mapping VC4 BOs.
++ *
++ * This doesn't actually perform an mmap.  Instead, it returns the
++ * offset you need to use in an mmap on the DRM device node.  This
++ * means that tools like valgrind end up knowing about the mapped
++ * memory.
++ *
++ * There are currently no values for the flags argument, but it may be
++ * used in a future extension.
++ */
++struct drm_vc4_mmap_bo {
++	/** Handle for the object being mapped. */
++	uint32_t handle;
++	uint32_t flags;
++	/** offset into the drm node to use for subsequent mmap call. */
++	uint64_t offset;
++};
++
++#endif /* _UAPI_VC4_DRM_H_ */
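To make the flow described in the comments above concrete, a userspace
caller creates a BO, asks DRM_VC4_MMAP_BO for the fake mmap offset, and then
mmap()s the DRM node at that offset.  The sketch below is illustrative only
(device node path and header install path assumed, error handling and 32-bit
off_t details glossed over):

	#include <fcntl.h>
	#include <stdint.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <sys/mman.h>
	#include <unistd.h>

	#include <drm/vc4_drm.h>	/* install path assumed */

	int main(void)
	{
		struct drm_vc4_create_bo create;
		struct drm_vc4_mmap_bo map;
		void *ptr;
		int fd = open("/dev/dri/card0", O_RDWR);	/* node name assumed */

		if (fd < 0)
			return 1;

		/* Create a 64KB BO; flags must currently be 0. */
		memset(&create, 0, sizeof(create));
		create.size = 64 * 1024;
		if (ioctl(fd, DRM_IOCTL_VC4_CREATE_BO, &create))
			return 1;

		/* MMAP_BO doesn't map anything itself; it only returns the
		 * offset to use in a subsequent mmap() on the DRM node.
		 */
		memset(&map, 0, sizeof(map));
		map.handle = create.handle;
		if (ioctl(fd, DRM_IOCTL_VC4_MMAP_BO, &map))
			return 1;

		ptr = mmap(NULL, create.size, PROT_READ | PROT_WRITE,
			   MAP_SHARED, fd, map.offset);
		if (ptr == MAP_FAILED)
			return 1;

		/* ... fill the BO, reference it from a drm_vc4_submit_cl ... */

		munmap(ptr, create.size);
		close(fd);
		return 0;
	}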
diff --git a/target/linux/brcm2708/patches-4.4/0092-drm-vc4-Force-HDMI-to-connected.patch b/target/linux/brcm2708/patches-4.4/0092-drm-vc4-Force-HDMI-to-connected.patch
new file mode 100644
index 0000000..ff316d3
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0092-drm-vc4-Force-HDMI-to-connected.patch
@@ -0,0 +1,23 @@
+From c2b464e766dc1fa6fe28f2292f8258b64f468d7f Mon Sep 17 00:00:00 2001
+From: Eric Anholt <eric at anholt.net>
+Date: Wed, 14 Oct 2015 11:32:14 -0700
+Subject: [PATCH 092/127] drm/vc4: Force HDMI to connected.
+
+For some reason on the downstream tree, the HPD GPIO isn't working.
+
+Signed-off-by: Eric Anholt <eric at anholt.net>
+---
+ drivers/gpu/drm/vc4/vc4_hdmi.c | 2 ++
+ 1 file changed, 2 insertions(+)
+
+--- a/drivers/gpu/drm/vc4/vc4_hdmi.c
++++ b/drivers/gpu/drm/vc4/vc4_hdmi.c
+@@ -164,6 +164,8 @@ vc4_hdmi_connector_detect(struct drm_con
+ 	struct drm_device *dev = connector->dev;
+ 	struct vc4_dev *vc4 = to_vc4_dev(dev);
+ 
++	return connector_status_connected;
++
+ 	if (vc4->hdmi->hpd_gpio) {
+ 		if (gpio_get_value(vc4->hdmi->hpd_gpio))
+ 			return connector_status_connected;
diff --git a/target/linux/brcm2708/patches-4.4/0093-drm-vc4-bo-cache-locking-fixes.patch b/target/linux/brcm2708/patches-4.4/0093-drm-vc4-bo-cache-locking-fixes.patch
new file mode 100644
index 0000000..923d393
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0093-drm-vc4-bo-cache-locking-fixes.patch
@@ -0,0 +1,147 @@
+From 084567d656e2876ed867c0a030dab0c2c4e38522 Mon Sep 17 00:00:00 2001
+From: Eric Anholt <eric at anholt.net>
+Date: Mon, 19 Oct 2015 08:23:18 -0700
+Subject: [PATCH 093/127] drm/vc4: bo cache locking fixes.
+
+Signed-off-by: Eric Anholt <eric at anholt.net>
+---
+ drivers/gpu/drm/vc4/vc4_bo.c  | 32 ++++++++++++++++++--------------
+ drivers/gpu/drm/vc4/vc4_drv.h |  2 +-
+ 2 files changed, 19 insertions(+), 15 deletions(-)
+
+--- a/drivers/gpu/drm/vc4/vc4_bo.c
++++ b/drivers/gpu/drm/vc4/vc4_bo.c
+@@ -112,14 +112,14 @@ void vc4_bo_cache_purge(struct drm_devic
+ {
+ 	struct vc4_dev *vc4 = to_vc4_dev(dev);
+ 
+-	spin_lock(&vc4->bo_lock);
++	mutex_lock(&vc4->bo_lock);
+ 	while (!list_empty(&vc4->bo_cache.time_list)) {
+ 		struct vc4_bo *bo = list_last_entry(&vc4->bo_cache.time_list,
+ 						    struct vc4_bo, unref_head);
+ 		vc4_bo_remove_from_cache(bo);
+ 		vc4_bo_destroy(bo);
+ 	}
+-	spin_unlock(&vc4->bo_lock);
++	mutex_unlock(&vc4->bo_lock);
+ }
+ 
+ struct vc4_bo *vc4_bo_create(struct drm_device *dev, size_t unaligned_size)
+@@ -134,18 +134,18 @@ struct vc4_bo *vc4_bo_create(struct drm_
+ 		return NULL;
+ 
+ 	/* First, try to get a vc4_bo from the kernel BO cache. */
+-	spin_lock(&vc4->bo_lock);
++	mutex_lock(&vc4->bo_lock);
+ 	if (page_index < vc4->bo_cache.size_list_size &&
+ 	    !list_empty(&vc4->bo_cache.size_list[page_index])) {
+ 		struct vc4_bo *bo =
+ 			list_first_entry(&vc4->bo_cache.size_list[page_index],
+ 					 struct vc4_bo, size_head);
+ 		vc4_bo_remove_from_cache(bo);
+-		spin_unlock(&vc4->bo_lock);
++		mutex_unlock(&vc4->bo_lock);
+ 		kref_init(&bo->base.base.refcount);
+ 		return bo;
+ 	}
+-	spin_unlock(&vc4->bo_lock);
++	mutex_unlock(&vc4->bo_lock);
+ 
+ 	/* Otherwise, make a new BO. */
+ 	for (pass = 0; ; pass++) {
+@@ -215,7 +215,7 @@ vc4_bo_cache_free_old(struct drm_device
+ 	struct vc4_dev *vc4 = to_vc4_dev(dev);
+ 	unsigned long expire_time = jiffies - msecs_to_jiffies(1000);
+ 
+-	spin_lock(&vc4->bo_lock);
++	mutex_lock(&vc4->bo_lock);
+ 	while (!list_empty(&vc4->bo_cache.time_list)) {
+ 		struct vc4_bo *bo = list_last_entry(&vc4->bo_cache.time_list,
+ 						    struct vc4_bo, unref_head);
+@@ -223,14 +223,14 @@ vc4_bo_cache_free_old(struct drm_device
+ 			mod_timer(&vc4->bo_cache.time_timer,
+ 				  round_jiffies_up(jiffies +
+ 						   msecs_to_jiffies(1000)));
+-			spin_unlock(&vc4->bo_lock);
++			mutex_unlock(&vc4->bo_lock);
+ 			return;
+ 		}
+ 
+ 		vc4_bo_remove_from_cache(bo);
+ 		vc4_bo_destroy(bo);
+ 	}
+-	spin_unlock(&vc4->bo_lock);
++	mutex_unlock(&vc4->bo_lock);
+ }
+ 
+ /* Called on the last userspace/kernel unreference of the BO.  Returns
+@@ -248,21 +248,25 @@ void vc4_free_object(struct drm_gem_obje
+ 	/* If the object references someone else's memory, we can't cache it.
+ 	 */
+ 	if (gem_bo->import_attach) {
++		mutex_lock(&vc4->bo_lock);
+ 		vc4_bo_destroy(bo);
++		mutex_unlock(&vc4->bo_lock);
+ 		return;
+ 	}
+ 
+ 	/* Don't cache if it was publicly named. */
+ 	if (gem_bo->name) {
++		mutex_lock(&vc4->bo_lock);
+ 		vc4_bo_destroy(bo);
++		mutex_unlock(&vc4->bo_lock);
+ 		return;
+ 	}
+ 
+-	spin_lock(&vc4->bo_lock);
++	mutex_lock(&vc4->bo_lock);
+ 	cache_list = vc4_get_cache_list_for_size(dev, gem_bo->size);
+ 	if (!cache_list) {
+ 		vc4_bo_destroy(bo);
+-		spin_unlock(&vc4->bo_lock);
++		mutex_unlock(&vc4->bo_lock);
+ 		return;
+ 	}
+ 
+@@ -278,7 +282,7 @@ void vc4_free_object(struct drm_gem_obje
+ 
+ 	vc4->bo_stats.num_cached++;
+ 	vc4->bo_stats.size_cached += gem_bo->size;
+-	spin_unlock(&vc4->bo_lock);
++	mutex_unlock(&vc4->bo_lock);
+ 
+ 	vc4_bo_cache_free_old(dev);
+ }
+@@ -465,7 +469,7 @@ void vc4_bo_cache_init(struct drm_device
+ {
+ 	struct vc4_dev *vc4 = to_vc4_dev(dev);
+ 
+-	spin_lock_init(&vc4->bo_lock);
++	mutex_init(&vc4->bo_lock);
+ 
+ 	INIT_LIST_HEAD(&vc4->bo_cache.time_list);
+ 
+@@ -498,9 +502,9 @@ int vc4_bo_stats_debugfs(struct seq_file
+ 	struct vc4_dev *vc4 = to_vc4_dev(dev);
+ 	struct vc4_bo_stats stats;
+ 
+-	spin_lock(&vc4->bo_lock);
++	mutex_lock(&vc4->bo_lock);
+ 	stats = vc4->bo_stats;
+-	spin_unlock(&vc4->bo_lock);
++	mutex_unlock(&vc4->bo_lock);
+ 
+ 	seq_printf(m, "num bos allocated: %d\n", stats.num_allocated);
+ 	seq_printf(m, "size bos allocated: %dkb\n", stats.size_allocated / 1024);
+--- a/drivers/gpu/drm/vc4/vc4_drv.h
++++ b/drivers/gpu/drm/vc4/vc4_drv.h
+@@ -49,7 +49,7 @@ struct vc4_dev {
+ 	} bo_stats;
+ 
+ 	/* Protects bo_cache and the BO stats. */
+-	spinlock_t bo_lock;
++	struct mutex bo_lock;
+ 
+ 	/* Sequence number for the last job queued in job_list.
+ 	 * Starts at 0 (no jobs emitted).
diff --git a/target/linux/brcm2708/patches-4.4/0094-drm-vc4-bo-cache-locking-cleanup.patch b/target/linux/brcm2708/patches-4.4/0094-drm-vc4-bo-cache-locking-cleanup.patch
new file mode 100644
index 0000000..1d797a1
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0094-drm-vc4-bo-cache-locking-cleanup.patch
@@ -0,0 +1,92 @@
+From a62d7adb16fdd9c450a0a255a3e0946c62b30b20 Mon Sep 17 00:00:00 2001
+From: Eric Anholt <eric at anholt.net>
+Date: Mon, 19 Oct 2015 08:29:41 -0700
+Subject: [PATCH 094/127] drm/vc4: bo cache locking cleanup.
+
+Signed-off-by: Eric Anholt <eric at anholt.net>
+---
+ drivers/gpu/drm/vc4/vc4_bo.c | 22 +++++++++-------------
+ 1 file changed, 9 insertions(+), 13 deletions(-)
+
+--- a/drivers/gpu/drm/vc4/vc4_bo.c
++++ b/drivers/gpu/drm/vc4/vc4_bo.c
+@@ -215,7 +215,6 @@ vc4_bo_cache_free_old(struct drm_device
+ 	struct vc4_dev *vc4 = to_vc4_dev(dev);
+ 	unsigned long expire_time = jiffies - msecs_to_jiffies(1000);
+ 
+-	mutex_lock(&vc4->bo_lock);
+ 	while (!list_empty(&vc4->bo_cache.time_list)) {
+ 		struct vc4_bo *bo = list_last_entry(&vc4->bo_cache.time_list,
+ 						    struct vc4_bo, unref_head);
+@@ -223,14 +222,12 @@ vc4_bo_cache_free_old(struct drm_device
+ 			mod_timer(&vc4->bo_cache.time_timer,
+ 				  round_jiffies_up(jiffies +
+ 						   msecs_to_jiffies(1000)));
+-			mutex_unlock(&vc4->bo_lock);
+ 			return;
+ 		}
+ 
+ 		vc4_bo_remove_from_cache(bo);
+ 		vc4_bo_destroy(bo);
+ 	}
+-	mutex_unlock(&vc4->bo_lock);
+ }
+ 
+ /* Called on the last userspace/kernel unreference of the BO.  Returns
+@@ -245,29 +242,24 @@ void vc4_free_object(struct drm_gem_obje
+ 	struct vc4_bo *bo = to_vc4_bo(gem_bo);
+ 	struct list_head *cache_list;
+ 
++	mutex_lock(&vc4->bo_lock);
+ 	/* If the object references someone else's memory, we can't cache it.
+ 	 */
+ 	if (gem_bo->import_attach) {
+-		mutex_lock(&vc4->bo_lock);
+ 		vc4_bo_destroy(bo);
+-		mutex_unlock(&vc4->bo_lock);
+-		return;
++		goto out;
+ 	}
+ 
+ 	/* Don't cache if it was publicly named. */
+ 	if (gem_bo->name) {
+-		mutex_lock(&vc4->bo_lock);
+ 		vc4_bo_destroy(bo);
+-		mutex_unlock(&vc4->bo_lock);
+-		return;
++		goto out;
+ 	}
+ 
+-	mutex_lock(&vc4->bo_lock);
+ 	cache_list = vc4_get_cache_list_for_size(dev, gem_bo->size);
+ 	if (!cache_list) {
+ 		vc4_bo_destroy(bo);
+-		mutex_unlock(&vc4->bo_lock);
+-		return;
++		goto out;
+ 	}
+ 
+ 	if (bo->validated_shader) {
+@@ -282,9 +274,11 @@ void vc4_free_object(struct drm_gem_obje
+ 
+ 	vc4->bo_stats.num_cached++;
+ 	vc4->bo_stats.size_cached += gem_bo->size;
+-	mutex_unlock(&vc4->bo_lock);
+ 
+ 	vc4_bo_cache_free_old(dev);
++
++out:
++	mutex_unlock(&vc4->bo_lock);
+ }
+ 
+ static void vc4_bo_cache_time_work(struct work_struct *work)
+@@ -293,7 +287,9 @@ static void vc4_bo_cache_time_work(struc
+ 		container_of(work, struct vc4_dev, bo_cache.time_work);
+ 	struct drm_device *dev = vc4->dev;
+ 
++	mutex_lock(&vc4->bo_lock);
+ 	vc4_bo_cache_free_old(dev);
++	mutex_unlock(&vc4->bo_lock);
+ }
+ 
+ static void vc4_bo_cache_time_timer(unsigned long data)
diff --git a/target/linux/brcm2708/patches-4.4/0095-drm-vc4-Use-job_lock-to-protect-seqno_cb_list.patch b/target/linux/brcm2708/patches-4.4/0095-drm-vc4-Use-job_lock-to-protect-seqno_cb_list.patch
new file mode 100644
index 0000000..f3b9839
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0095-drm-vc4-Use-job_lock-to-protect-seqno_cb_list.patch
@@ -0,0 +1,54 @@
+From c5200dcf1298dc6789a88640ab581e364d92282b Mon Sep 17 00:00:00 2001
+From: Eric Anholt <eric at anholt.net>
+Date: Mon, 19 Oct 2015 08:32:24 -0700
+Subject: [PATCH 095/127] drm/vc4: Use job_lock to protect seqno_cb_list.
+
+We're (mostly) not supposed to be using struct_mutex in drivers these
+days.
+
+Signed-off-by: Eric Anholt <eric at anholt.net>
+---
+ drivers/gpu/drm/vc4/vc4_gem.c | 8 +++++---
+ 1 file changed, 5 insertions(+), 3 deletions(-)
+
+--- a/drivers/gpu/drm/vc4/vc4_gem.c
++++ b/drivers/gpu/drm/vc4/vc4_gem.c
+@@ -474,7 +474,6 @@ vc4_job_handle_completed(struct vc4_dev
+ 		vc4_complete_exec(exec);
+ 		spin_lock_irqsave(&vc4->job_lock, irqflags);
+ 	}
+-	spin_unlock_irqrestore(&vc4->job_lock, irqflags);
+ 
+ 	list_for_each_entry_safe(cb, cb_temp, &vc4->seqno_cb_list, work.entry) {
+ 		if (cb->seqno <= vc4->finished_seqno) {
+@@ -482,6 +481,8 @@ vc4_job_handle_completed(struct vc4_dev
+ 			schedule_work(&cb->work);
+ 		}
+ 	}
++
++	spin_unlock_irqrestore(&vc4->job_lock, irqflags);
+ }
+ 
+ static void vc4_seqno_cb_work(struct work_struct *work)
+@@ -496,18 +497,19 @@ int vc4_queue_seqno_cb(struct drm_device
+ {
+ 	struct vc4_dev *vc4 = to_vc4_dev(dev);
+ 	int ret = 0;
++	unsigned long irqflags;
+ 
+ 	cb->func = func;
+ 	INIT_WORK(&cb->work, vc4_seqno_cb_work);
+ 
+-	mutex_lock(&dev->struct_mutex);
++	spin_lock_irqsave(&vc4->job_lock, irqflags);
+ 	if (seqno > vc4->finished_seqno) {
+ 		cb->seqno = seqno;
+ 		list_add_tail(&cb->work.entry, &vc4->seqno_cb_list);
+ 	} else {
+ 		schedule_work(&cb->work);
+ 	}
+-	mutex_unlock(&dev->struct_mutex);
++	spin_unlock_irqrestore(&vc4->job_lock, irqflags);
+ 
+ 	return ret;
+ }
diff --git a/target/linux/brcm2708/patches-4.4/0096-drm-vc4-Drop-struct_mutex-around-CL-validation.patch b/target/linux/brcm2708/patches-4.4/0096-drm-vc4-Drop-struct_mutex-around-CL-validation.patch
new file mode 100644
index 0000000..7796399
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0096-drm-vc4-Drop-struct_mutex-around-CL-validation.patch
@@ -0,0 +1,63 @@
+From 6a9940a2bde49eda470e58f19a6bde3ded87d7fb Mon Sep 17 00:00:00 2001
+From: Eric Anholt <eric at anholt.net>
+Date: Mon, 19 Oct 2015 08:44:35 -0700
+Subject: [PATCH 096/127] drm/vc4: Drop struct_mutex around CL validation.
+
+We were using it so that we could make sure that shader validation
+state didn't change while we were validating, but now shader
+validation state is immutable.  The bcl/rcl generation doesn't do any
+other BO dereferencing, and seems to have no other global state
+dependency not covered by job_lock / bo_lock.
+
+Fixes a lock order reversal between mmap_sem and struct_mutex.
+
+Signed-off-by: Eric Anholt <eric at anholt.net>
+---
+ drivers/gpu/drm/vc4/vc4_gem.c | 12 ++++--------
+ 1 file changed, 4 insertions(+), 8 deletions(-)
+
+--- a/drivers/gpu/drm/vc4/vc4_gem.c
++++ b/drivers/gpu/drm/vc4/vc4_gem.c
+@@ -244,13 +244,15 @@ static void
+ vc4_queue_submit(struct drm_device *dev, struct vc4_exec_info *exec)
+ {
+ 	struct vc4_dev *vc4 = to_vc4_dev(dev);
+-	uint64_t seqno = ++vc4->emit_seqno;
++	uint64_t seqno;
+ 	unsigned long irqflags;
+ 
++	spin_lock_irqsave(&vc4->job_lock, irqflags);
++
++	seqno = ++vc4->emit_seqno;
+ 	exec->seqno = seqno;
+ 	vc4_update_bo_seqnos(exec, seqno);
+ 
+-	spin_lock_irqsave(&vc4->job_lock, irqflags);
+ 	list_add_tail(&exec->head, &vc4->job_list);
+ 
+ 	/* If no job was executing, kick ours off.  Otherwise, it'll
+@@ -608,8 +610,6 @@ vc4_submit_cl_ioctl(struct drm_device *d
+ 	exec->args = args;
+ 	INIT_LIST_HEAD(&exec->unref_list);
+ 
+-	mutex_lock(&dev->struct_mutex);
+-
+ 	ret = vc4_cl_lookup_bos(dev, file_priv, exec);
+ 	if (ret)
+ 		goto fail;
+@@ -636,15 +636,11 @@ vc4_submit_cl_ioctl(struct drm_device *d
+ 	/* Return the seqno for our job. */
+ 	args->seqno = vc4->emit_seqno;
+ 
+-	mutex_unlock(&dev->struct_mutex);
+-
+ 	return 0;
+ 
+ fail:
+ 	vc4_complete_exec(exec);
+ 
+-	mutex_unlock(&dev->struct_mutex);
+-
+ 	return ret;
+ }
+ 
diff --git a/target/linux/brcm2708/patches-4.4/0097-drm-vc4-Drop-struct_mutex-around-CL-validation.patch b/target/linux/brcm2708/patches-4.4/0097-drm-vc4-Drop-struct_mutex-around-CL-validation.patch
new file mode 100644
index 0000000..5275fb5
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0097-drm-vc4-Drop-struct_mutex-around-CL-validation.patch
@@ -0,0 +1,74 @@
+From 4cc42228e828c8cba5e2f712fd92fe3e0bb8b09d Mon Sep 17 00:00:00 2001
+From: Eric Anholt <eric at anholt.net>
+Date: Mon, 19 Oct 2015 08:44:35 -0700
+Subject: [PATCH 097/127] drm/vc4: Drop struct_mutex around CL validation.
+
+We were using it so that we could make sure that shader validation
+state didn't change while we were validating, but now shader
+validation state is immutable.  The bcl/rcl generation doesn't do any
+other BO dereferencing, and seems to have no other global state
+dependency not covered by job_lock / bo_lock.  We only need to hold
+struct_mutex for object unreferencing.
+
+Fixes a lock order reversal between mmap_sem and struct_mutex.
+
+Signed-off-by: Eric Anholt <eric at anholt.net>
+---
+ drivers/gpu/drm/vc4/vc4_gem.c | 13 ++++++-------
+ 1 file changed, 6 insertions(+), 7 deletions(-)
+
+--- a/drivers/gpu/drm/vc4/vc4_gem.c
++++ b/drivers/gpu/drm/vc4/vc4_gem.c
+@@ -439,10 +439,12 @@ fail:
+ }
+ 
+ static void
+-vc4_complete_exec(struct vc4_exec_info *exec)
++vc4_complete_exec(struct drm_device *dev, struct vc4_exec_info *exec)
+ {
+ 	unsigned i;
+ 
++	/* Need the struct lock for drm_gem_object_unreference(). */
++	mutex_lock(&dev->struct_mutex);
+ 	if (exec->bo) {
+ 		for (i = 0; i < exec->bo_count; i++)
+ 			drm_gem_object_unreference(&exec->bo[i].bo->base);
+@@ -455,6 +457,7 @@ vc4_complete_exec(struct vc4_exec_info *
+ 		list_del(&bo->unref_head);
+ 		drm_gem_object_unreference(&bo->base.base);
+ 	}
++	mutex_unlock(&dev->struct_mutex);
+ 
+ 	kfree(exec);
+ }
+@@ -473,7 +476,7 @@ vc4_job_handle_completed(struct vc4_dev
+ 		list_del(&exec->head);
+ 
+ 		spin_unlock_irqrestore(&vc4->job_lock, irqflags);
+-		vc4_complete_exec(exec);
++		vc4_complete_exec(vc4->dev, exec);
+ 		spin_lock_irqsave(&vc4->job_lock, irqflags);
+ 	}
+ 
+@@ -525,12 +528,8 @@ vc4_job_done_work(struct work_struct *wo
+ {
+ 	struct vc4_dev *vc4 =
+ 		container_of(work, struct vc4_dev, job_done_work);
+-	struct drm_device *dev = vc4->dev;
+ 
+-	/* Need the struct lock for drm_gem_object_unreference(). */
+-	mutex_lock(&dev->struct_mutex);
+ 	vc4_job_handle_completed(vc4);
+-	mutex_unlock(&dev->struct_mutex);
+ }
+ 
+ static int
+@@ -639,7 +638,7 @@ vc4_submit_cl_ioctl(struct drm_device *d
+ 	return 0;
+ 
+ fail:
+-	vc4_complete_exec(exec);
++	vc4_complete_exec(vc4->dev, exec);
+ 
+ 	return ret;
+ }
diff --git a/target/linux/brcm2708/patches-4.4/0098-drm-vc4-Add-support-for-more-display-plane-formats.patch b/target/linux/brcm2708/patches-4.4/0098-drm-vc4-Add-support-for-more-display-plane-formats.patch
new file mode 100644
index 0000000..b2328d1
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0098-drm-vc4-Add-support-for-more-display-plane-formats.patch
@@ -0,0 +1,35 @@
+From 7ad1b03ccca2d3c81ee97edfd1141b31331a0076 Mon Sep 17 00:00:00 2001
+From: Eric Anholt <eric at anholt.net>
+Date: Tue, 20 Oct 2015 13:59:15 +0100
+Subject: [PATCH 098/127] drm/vc4: Add support for more display plane formats.
+
+Signed-off-by: Eric Anholt <eric at anholt.net>
+---
+ drivers/gpu/drm/vc4/vc4_plane.c | 16 ++++++++++++++++
+ 1 file changed, 16 insertions(+)
+
+--- a/drivers/gpu/drm/vc4/vc4_plane.c
++++ b/drivers/gpu/drm/vc4/vc4_plane.c
+@@ -59,6 +59,22 @@ static const struct hvs_format {
+ 		.drm = DRM_FORMAT_ARGB8888, .hvs = HVS_PIXEL_FORMAT_RGBA8888,
+ 		.pixel_order = HVS_PIXEL_ORDER_ABGR, .has_alpha = true,
+ 	},
++	{
++		.drm = DRM_FORMAT_RGB565, .hvs = HVS_PIXEL_FORMAT_RGB565,
++		.pixel_order = HVS_PIXEL_ORDER_XRGB, .has_alpha = false,
++	},
++	{
++		.drm = DRM_FORMAT_BGR565, .hvs = HVS_PIXEL_FORMAT_RGB565,
++		.pixel_order = HVS_PIXEL_ORDER_XBGR, .has_alpha = false,
++	},
++	{
++		.drm = DRM_FORMAT_ARGB1555, .hvs = HVS_PIXEL_FORMAT_RGBA5551,
++		.pixel_order = HVS_PIXEL_ORDER_ABGR, .has_alpha = true,
++	},
++	{
++		.drm = DRM_FORMAT_XRGB1555, .hvs = HVS_PIXEL_FORMAT_RGBA5551,
++		.pixel_order = HVS_PIXEL_ORDER_ABGR, .has_alpha = false,
++	},
+ };
+ 
+ static const struct hvs_format *vc4_get_hvs_format(u32 drm_format)
diff --git a/target/linux/brcm2708/patches-4.4/0099-drm-vc4-No-need-to-stop-the-stopped-threads.patch b/target/linux/brcm2708/patches-4.4/0099-drm-vc4-No-need-to-stop-the-stopped-threads.patch
new file mode 100644
index 0000000..d4d8c07
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0099-drm-vc4-No-need-to-stop-the-stopped-threads.patch
@@ -0,0 +1,26 @@
+From 7fdb9332f996fbaa26eaa805ce72a6afa590a302 Mon Sep 17 00:00:00 2001
+From: Eric Anholt <eric at anholt.net>
+Date: Fri, 23 Oct 2015 12:31:56 +0100
+Subject: [PATCH 099/127] drm/vc4: No need to stop the stopped threads.
+
+This was leftover debug code from the hackdriver.  We never submit
+unless the thread is already idle.
+
+Signed-off-by: Eric Anholt <eric at anholt.net>
+---
+ drivers/gpu/drm/vc4/vc4_gem.c | 4 ----
+ 1 file changed, 4 deletions(-)
+
+--- a/drivers/gpu/drm/vc4/vc4_gem.c
++++ b/drivers/gpu/drm/vc4/vc4_gem.c
+@@ -104,10 +104,6 @@ submit_cl(struct drm_device *dev, uint32
+ {
+ 	struct vc4_dev *vc4 = to_vc4_dev(dev);
+ 
+-	/* Stop any existing thread and set state to "stopped at halt" */
+-	V3D_WRITE(V3D_CTNCS(thread), V3D_CTRUN);
+-	barrier();
+-
+ 	V3D_WRITE(V3D_CTNCA(thread), start);
+ 	barrier();
+ 
diff --git a/target/linux/brcm2708/patches-4.4/0100-drm-vc4-Remove-extra-barrier-s-aroudn-CTnCA-CTnEA-se.patch b/target/linux/brcm2708/patches-4.4/0100-drm-vc4-Remove-extra-barrier-s-aroudn-CTnCA-CTnEA-se.patch
new file mode 100644
index 0000000..0ea1ee7
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0100-drm-vc4-Remove-extra-barrier-s-aroudn-CTnCA-CTnEA-se.patch
@@ -0,0 +1,33 @@
+From 65854b72066317cb2896df184794890c32f5f1bb Mon Sep 17 00:00:00 2001
+From: Eric Anholt <eric at anholt.net>
+Date: Fri, 23 Oct 2015 12:33:43 +0100
+Subject: [PATCH 100/127] drm/vc4: Remove extra barrier()s aroudn CTnCA/CTnEA
+ setup.
+
+The writel() that these expand to already does barriers.
+
+Signed-off-by: Eric Anholt <eric at anholt.net>
+---
+ drivers/gpu/drm/vc4/vc4_gem.c | 9 +++------
+ 1 file changed, 3 insertions(+), 6 deletions(-)
+
+--- a/drivers/gpu/drm/vc4/vc4_gem.c
++++ b/drivers/gpu/drm/vc4/vc4_gem.c
+@@ -104,14 +104,11 @@ submit_cl(struct drm_device *dev, uint32
+ {
+ 	struct vc4_dev *vc4 = to_vc4_dev(dev);
+ 
+-	V3D_WRITE(V3D_CTNCA(thread), start);
+-	barrier();
+-
+-	/* Set the end address of the control list.  Writing this
+-	 * register is what starts the job.
++	/* Set the current and end address of the control list.
++	 * Writing the end register is what starts the job.
+ 	 */
++	V3D_WRITE(V3D_CTNCA(thread), start);
+ 	V3D_WRITE(V3D_CTNEA(thread), end);
+-	barrier();
+ }
+ 
+ int
diff --git a/target/linux/brcm2708/patches-4.4/0101-drm-vc4-Fix-a-typo-in-a-V3D-debug-register.patch b/target/linux/brcm2708/patches-4.4/0101-drm-vc4-Fix-a-typo-in-a-V3D-debug-register.patch
new file mode 100644
index 0000000..4cf4dc1
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0101-drm-vc4-Fix-a-typo-in-a-V3D-debug-register.patch
@@ -0,0 +1,33 @@
+From 8566dcb0b278a12d9aad06e8b85b94e9d797d66e Mon Sep 17 00:00:00 2001
+From: Eric Anholt <eric at anholt.net>
+Date: Fri, 23 Oct 2015 14:57:22 +0100
+Subject: [PATCH 101/127] drm/vc4: Fix a typo in a V3D debug register.
+
+Signed-off-by: Eric Anholt <eric at anholt.net>
+---
+ drivers/gpu/drm/vc4/vc4_regs.h | 2 +-
+ drivers/gpu/drm/vc4/vc4_v3d.c  | 2 +-
+ 2 files changed, 2 insertions(+), 2 deletions(-)
+
+--- a/drivers/gpu/drm/vc4/vc4_regs.h
++++ b/drivers/gpu/drm/vc4/vc4_regs.h
+@@ -154,7 +154,7 @@
+ #define V3D_PCTRS14  0x006f4
+ #define V3D_PCTR15   0x006f8
+ #define V3D_PCTRS15  0x006fc
+-#define V3D_BGE      0x00f00
++#define V3D_DBGE     0x00f00
+ #define V3D_FDBGO    0x00f04
+ #define V3D_FDBGB    0x00f08
+ #define V3D_FDBGR    0x00f0c
+--- a/drivers/gpu/drm/vc4/vc4_v3d.c
++++ b/drivers/gpu/drm/vc4/vc4_v3d.c
+@@ -99,7 +99,7 @@ static const struct {
+ 	REGDEF(V3D_PCTRS14),
+ 	REGDEF(V3D_PCTR15),
+ 	REGDEF(V3D_PCTRS15),
+-	REGDEF(V3D_BGE),
++	REGDEF(V3D_DBGE),
+ 	REGDEF(V3D_FDBGO),
+ 	REGDEF(V3D_FDBGB),
+ 	REGDEF(V3D_FDBGR),
diff --git a/target/linux/brcm2708/patches-4.4/0102-drm-vc4-Enable-VC4-modules-and-increase-CMA-size-wit.patch b/target/linux/brcm2708/patches-4.4/0102-drm-vc4-Enable-VC4-modules-and-increase-CMA-size-wit.patch
new file mode 100644
index 0000000..fe05bae
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0102-drm-vc4-Enable-VC4-modules-and-increase-CMA-size-wit.patch
@@ -0,0 +1,153 @@
+From e7c59032e2b16f07a4e836b6f417a7cea5d66b2d Mon Sep 17 00:00:00 2001
+From: Phil Elwell <phil at raspberrypi.org>
+Date: Mon, 2 Nov 2015 17:07:33 +0000
+Subject: [PATCH 102/127] drm/vc4: Enable VC4 modules, and increase CMA size
+ with overlay
+
+If using the overlay, be careful not to boot to GUI or run startx,
+or the Pi will almost hang, reporting stalls in kernel threads.
+---
+ arch/arm/boot/dts/overlays/README                  |  8 ++
+ arch/arm/boot/dts/overlays/vc4-kms-v3d-overlay.dts | 95 ++++++++++++++++++++++
+ arch/arm/configs/bcm2709_defconfig                 |  2 +
+ arch/arm/configs/bcmrpi_defconfig                  |  2 +
+ 4 files changed, 107 insertions(+)
+ create mode 100644 arch/arm/boot/dts/overlays/vc4-kms-v3d-overlay.dts
+
+--- a/arch/arm/boot/dts/overlays/README
++++ b/arch/arm/boot/dts/overlays/README
+@@ -601,6 +601,14 @@ Params: txd1_pin                 GPIO pi
+         rxd1_pin                 GPIO pin for RXD1 (15, 33 or 41 - default 15)
+ 
+ 
++Name:   vc4-kms-v3d
++Info:   Enable Eric Anholt's DRM VC4 HDMI/HVS/V3D driver. Running startx or
++        booting to GUI while this overlay is in use will cause interesting
++        lockups.
++Load:   dtoverlay=vc4-kms-v3d
++Params: <None>
++
++
+ Name:   vga666
+ Info:   Overlay for the Fen Logic VGA666 board
+         This uses GPIOs 2-21 (so no I2C), and activates the output 2-3 seconds
+--- /dev/null
++++ b/arch/arm/boot/dts/overlays/vc4-kms-v3d-overlay.dts
+@@ -0,0 +1,95 @@
++/*
++ * vc4-kms-v3d-overlay.dts
++ */
++
++/dts-v1/;
++/plugin/;
++
++#include "dt-bindings/clock/bcm2835.h"
++#include "dt-bindings/gpio/gpio.h"
++
++/ {
++	compatible = "brcm,bcm2835", "brcm,bcm2708", "brcm,bcm2709";
++
++	fragment at 0 {
++		target = <&i2c2>;
++		__overlay__  {
++			status = "okay";
++		};
++	};
++
++	fragment at 1 {
++		target = <&cprman>;
++		__overlay__  {
++			status = "okay";
++		};
++	};
++
++	fragment at 2 {
++		target = <&fb>;
++		__overlay__  {
++			status = "disabled";
++		};
++	};
++
++	fragment at 3 {
++		target = <&soc>;
++		__overlay__  {
++			#address-cells = <1>;
++			#size-cells = <1>;
++
++			pixelvalve at 7e206000 {
++				compatible = "brcm,bcm2835-pixelvalve0";
++				reg = <0x7e206000 0x100>;
++				interrupts = <2 13>; /* pwa0 */
++			};
++
++			pixelvalve at 7e207000 {
++				compatible = "brcm,bcm2835-pixelvalve1";
++				reg = <0x7e207000 0x100>;
++				interrupts = <2 14>; /* pwa1 */
++			};
++
++			hvs at 7e400000 {
++				compatible = "brcm,bcm2835-hvs";
++				reg = <0x7e400000 0x6000>;
++				interrupts = <2 1>;
++			};
++
++			pixelvalve at 7e807000 {
++				compatible = "brcm,bcm2835-pixelvalve2";
++				reg = <0x7e807000 0x100>;
++				interrupts = <2 10>; /* pixelvalve */
++			};
++
++			hdmi at 7e902000 {
++				compatible = "brcm,bcm2835-hdmi";
++				reg = <0x7e902000 0x600>,
++				      <0x7e808000 0x100>;
++				interrupts = <2 8>, <2 9>;
++				ddc = <&i2c2>;
++				hpd-gpio = <&gpio 46 GPIO_ACTIVE_HIGH>;
++				clocks = <&cprman BCM2835_PLLH_PIX>,
++					 <&cprman BCM2835_CLOCK_HSM>;
++				clock-names = "pixel", "hdmi";
++			};
++
++			v3d at 7ec00000 {
++				compatible = "brcm,vc4-v3d";
++				reg = <0x7ec00000 0x1000>;
++				interrupts = <1 10>;
++			};
++
++			gpu at 7e4c0000 {
++				compatible = "brcm,bcm2835-vc4";
++			};
++		};
++	};
++
++	fragment at 4 {
++		target-path = "/chosen";
++		__overlay__ {
++			bootargs = "cma=256M at 512M";
++		};
++	};
++};
+--- a/arch/arm/configs/bcm2709_defconfig
++++ b/arch/arm/configs/bcm2709_defconfig
+@@ -802,6 +802,8 @@ CONFIG_VIDEO_TW9903=m
+ CONFIG_VIDEO_TW9906=m
+ CONFIG_VIDEO_OV7640=m
+ CONFIG_VIDEO_MT9V011=m
++CONFIG_DRM=m
++CONFIG_DRM_VC4=m
+ CONFIG_FB=y
+ CONFIG_FB_BCM2708=y
+ CONFIG_FB_UDL=m
+--- a/arch/arm/configs/bcmrpi_defconfig
++++ b/arch/arm/configs/bcmrpi_defconfig
+@@ -795,6 +795,8 @@ CONFIG_VIDEO_TW9903=m
+ CONFIG_VIDEO_TW9906=m
+ CONFIG_VIDEO_OV7640=m
+ CONFIG_VIDEO_MT9V011=m
++CONFIG_DRM=m
++CONFIG_DRM_VC4=m
+ CONFIG_FB=y
+ CONFIG_FB_BCM2708=y
+ CONFIG_FB_UDL=m
diff --git a/target/linux/brcm2708/patches-4.4/0103-squash-fixups.patch b/target/linux/brcm2708/patches-4.4/0103-squash-fixups.patch
new file mode 100644
index 0000000..cd21a7e
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0103-squash-fixups.patch
@@ -0,0 +1,43 @@
+From 88f275f67c6816cad5aec510074bacea409d78cd Mon Sep 17 00:00:00 2001
+From: popcornmix <popcornmix at gmail.com>
+Date: Wed, 18 Nov 2015 18:29:58 +0000
+Subject: [PATCH 103/127] squash: fixups
+
+---
+ drivers/gpu/drm/vc4/Kconfig   | 2 +-
+ drivers/gpu/drm/vc4/vc4_drv.c | 2 +-
+ drivers/gpu/drm/vc4/vc4_kms.c | 2 +-
+ 3 files changed, 3 insertions(+), 3 deletions(-)
+
+--- a/drivers/gpu/drm/vc4/Kconfig
++++ b/drivers/gpu/drm/vc4/Kconfig
+@@ -1,6 +1,6 @@
+ config DRM_VC4
+ 	tristate "Broadcom VC4 Graphics"
+-	depends on ARCH_BCM2835 || COMPILE_TEST
++	depends on ARCH_BCM2835 || ARCH_BCM2708 || ARCH_BCM2709 || COMPILE_TEST
+ 	depends on DRM && HAVE_DMA_ATTRS
+ 	select DRM_KMS_HELPER
+ 	select DRM_KMS_CMA_HELPER
+--- a/drivers/gpu/drm/vc4/vc4_drv.c
++++ b/drivers/gpu/drm/vc4/vc4_drv.c
+@@ -127,7 +127,7 @@ static struct drm_driver vc4_drm_driver
+ 	.num_ioctls = ARRAY_SIZE(vc4_drm_ioctls),
+ 	.fops = &vc4_drm_fops,
+ 
+-	.gem_obj_size = sizeof(struct vc4_bo),
++	//.gem_obj_size = sizeof(struct vc4_bo),
+ 
+ 	.name = DRIVER_NAME,
+ 	.desc = DRIVER_DESC,
+--- a/drivers/gpu/drm/vc4/vc4_kms.c
++++ b/drivers/gpu/drm/vc4/vc4_kms.c
+@@ -45,7 +45,7 @@ vc4_atomic_complete_commit(struct vc4_co
+ 
+ 	drm_atomic_helper_commit_modeset_disables(dev, state);
+ 
+-	drm_atomic_helper_commit_planes(dev, state);
++	drm_atomic_helper_commit_planes(dev, state, false);
+ 
+ 	drm_atomic_helper_commit_modeset_enables(dev, state);
+ 
diff --git a/target/linux/brcm2708/patches-4.4/0104-squash-add-missing-vc4-kms-v3d-overlay.dtb-to-makefi.patch b/target/linux/brcm2708/patches-4.4/0104-squash-add-missing-vc4-kms-v3d-overlay.dtb-to-makefi.patch
new file mode 100644
index 0000000..cb1c267
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0104-squash-add-missing-vc4-kms-v3d-overlay.dtb-to-makefi.patch
@@ -0,0 +1,20 @@
+From 37a66cf4c337c439330ffd6316e421d0603ce936 Mon Sep 17 00:00:00 2001
+From: popcornmix <popcornmix at gmail.com>
+Date: Wed, 18 Nov 2015 20:26:03 +0000
+Subject: [PATCH 104/127] squash: add missing vc4-kms-v3d-overlay.dtb to
+ makefile
+
+---
+ arch/arm/boot/dts/overlays/Makefile | 1 +
+ 1 file changed, 1 insertion(+)
+
+--- a/arch/arm/boot/dts/overlays/Makefile
++++ b/arch/arm/boot/dts/overlays/Makefile
+@@ -52,6 +52,7 @@ dtb-$(RPI_DT_OVERLAYS) += smi-overlay.dt
+ dtb-$(RPI_DT_OVERLAYS) += spi-gpio35-39-overlay.dtb
+ dtb-$(RPI_DT_OVERLAYS) += tinylcd35-overlay.dtb
+ dtb-$(RPI_DT_OVERLAYS) += uart1-overlay.dtb
++dtb-$(RPI_DT_OVERLAYS) += vc4-kms-v3d-overlay.dtb
+ dtb-$(RPI_DT_OVERLAYS) += vga666-overlay.dtb
+ dtb-$(RPI_DT_OVERLAYS) += w1-gpio-overlay.dtb
+ dtb-$(RPI_DT_OVERLAYS) += w1-gpio-pullup-overlay.dtb
diff --git a/target/linux/brcm2708/patches-4.4/0105-clk-bcm2835-Also-build-the-driver-for-downstream-ker.patch b/target/linux/brcm2708/patches-4.4/0105-clk-bcm2835-Also-build-the-driver-for-downstream-ker.patch
new file mode 100644
index 0000000..c297bb7
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0105-clk-bcm2835-Also-build-the-driver-for-downstream-ker.patch
@@ -0,0 +1,22 @@
+From fc0a178addf1cae65245e129e9bbe4a5daeb2cfd Mon Sep 17 00:00:00 2001
+From: Eric Anholt <eric at anholt.net>
+Date: Mon, 12 Oct 2015 11:23:34 -0700
+Subject: [PATCH 105/127] clk: bcm2835: Also build the driver for downstream
+ kernels.
+
+Signed-off-by: Eric Anholt <eric at anholt.net>
+---
+ drivers/clk/bcm/Makefile | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/drivers/clk/bcm/Makefile
++++ b/drivers/clk/bcm/Makefile
+@@ -3,7 +3,7 @@ obj-$(CONFIG_CLK_BCM_KONA)	+= clk-kona-s
+ obj-$(CONFIG_CLK_BCM_KONA)	+= clk-bcm281xx.o
+ obj-$(CONFIG_CLK_BCM_KONA)	+= clk-bcm21664.o
+ obj-$(CONFIG_COMMON_CLK_IPROC)	+= clk-iproc-armpll.o clk-iproc-pll.o clk-iproc-asiu.o
+-obj-$(CONFIG_ARCH_BCM2835)	+= clk-bcm2835.o
++obj-$(CONFIG_ARCH_BCM2835)$(CONFIG_ARCH_BCM2708)$(CONFIG_ARCH_BCM2709)	+= clk-bcm2835.o
+ obj-$(CONFIG_COMMON_CLK_IPROC)	+= clk-ns2.o
+ obj-$(CONFIG_ARCH_BCM_CYGNUS)	+= clk-cygnus.o
+ obj-$(CONFIG_ARCH_BCM_NSP)	+= clk-nsp.o
diff --git a/target/linux/brcm2708/patches-4.4/0106-dts-Added-overlay-for-gpio_ir_recv-driver.patch b/target/linux/brcm2708/patches-4.4/0106-dts-Added-overlay-for-gpio_ir_recv-driver.patch
new file mode 100644
index 0000000..356b778
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0106-dts-Added-overlay-for-gpio_ir_recv-driver.patch
@@ -0,0 +1,104 @@
+From d97e7fdedbc15a25985dfa73f7e1c63e2b4ca4c8 Mon Sep 17 00:00:00 2001
+From: Holger Steinhaus <hsteinhaus at gmx.de>
+Date: Sat, 14 Nov 2015 18:37:43 +0100
+Subject: [PATCH 106/127] dts: Added overlay for gpio_ir_recv driver
+
+---
+ arch/arm/boot/dts/overlays/Makefile            |  1 +
+ arch/arm/boot/dts/overlays/README              | 18 ++++++++++-
+ arch/arm/boot/dts/overlays/gpio-ir-overlay.dts | 45 ++++++++++++++++++++++++++
+ 3 files changed, 63 insertions(+), 1 deletion(-)
+ create mode 100644 arch/arm/boot/dts/overlays/gpio-ir-overlay.dts
+
+--- a/arch/arm/boot/dts/overlays/Makefile
++++ b/arch/arm/boot/dts/overlays/Makefile
+@@ -16,6 +16,7 @@ dtb-$(RPI_DT_OVERLAYS) += ads7846-overla
+ dtb-$(RPI_DT_OVERLAYS) += bmp085_i2c-sensor-overlay.dtb
+ dtb-$(RPI_DT_OVERLAYS) += dht11-overlay.dtb
+ dtb-$(RPI_DT_OVERLAYS) += enc28j60-overlay.dtb
++dtb-$(RPI_DT_OVERLAYS) += gpio-ir-overlay.dtb
+ dtb-$(RPI_DT_OVERLAYS) += gpio-poweroff-overlay.dtb
+ dtb-$(RPI_DT_OVERLAYS) += hifiberry-amp-overlay.dtb
+ dtb-$(RPI_DT_OVERLAYS) += hifiberry-dac-overlay.dtb
+--- a/arch/arm/boot/dts/overlays/README
++++ b/arch/arm/boot/dts/overlays/README
+@@ -196,6 +196,22 @@ Params: int_pin                  GPIO us
+         speed                    SPI bus speed (default 12000000)
+ 
+ 
++Name:   gpio-ir
++Info:   Use a GPIO pin as an rc-core style infrared receiver input. The
++        rc-core-based gpio_ir_recv driver maps received keys directly to a
++        /dev/input/event* device; all decoding is done by the kernel, so LIRC
++        is not required. The key mapping and other decoding parameters can be
++        configured with the "ir-keytable" tool.
++Load:   dtoverlay=gpio-ir,<param>=<val>
++Params: gpio_pin                 Input pin number. Default is 18.
++
++        gpio_pull                Desired pull-up/down state (off, down, up)
++                                 Default is "down".
++
++        rc-map-name              Default rc keymap (can also be changed by
++                                 ir-keytable), defaults to "rc-rc6-mce"
++
++
+ Name:   gpio-poweroff
+ Info:   Drives a GPIO high or low on reboot
+ Load:   dtoverlay=gpio-poweroff,<param>=<val>
+@@ -308,7 +324,7 @@ Params: <None>
+ Name:   lirc-rpi
+ Info:   Configures lirc-rpi (Linux Infrared Remote Control for Raspberry Pi)
+         Consult the module documentation for more details.
+-Load:   dtoverlay=lirc-rpi,<param>=<val>,...
++Load:   dtoverlay=lirc-rpi,<param>=<val>
+ Params: gpio_out_pin             GPIO for output (default "17")
+ 
+         gpio_in_pin              GPIO for input (default "18")
+--- /dev/null
++++ b/arch/arm/boot/dts/overlays/gpio-ir-overlay.dts
+@@ -0,0 +1,45 @@
++// Definitions for ir-gpio module
++/dts-v1/;
++/plugin/;
++
++/ {
++        compatible = "brcm,bcm2708";
++
++        fragment at 0 {
++                target-path = "/";
++                __overlay__ {
++                        gpio_ir: ir-receiver {
++                                compatible = "gpio-ir-receiver";
++
++                                // pin number, high or low
++                                gpios = <&gpio 18 1>;
++
++                                // parameter for keymap name
++                                linux,rc-map-name = "rc-rc6-mce";
++
++                                status = "okay";
++                        };
++                };
++        };
++
++        fragment at 1 {
++                target = <&gpio>;
++                __overlay__ {
++                        gpio_ir_pins: gpio_ir_pins {
++                                brcm,pins = <18>;                       // pin 18
++                                brcm,function = <0>;                    // in
++                                brcm,pull = <1>;                        // down
++                        };
++                };
++        };
++
++        __overrides__ {
++                // parameters
++                gpio_pin =      <&gpio_ir>,"gpios:4",
++                                        <&gpio_ir_pins>,"brcm,pins:0",
++                                        <&gpio_ir_pins>,"brcm,pull:0";  // pin number
++                gpio_pull = <&gpio_ir_pins>,"brcm,pull:0";              // pull-up/down state
++
++                rc-map-name = <&gpio_ir>,"linux,rc-map-name";           // default rc map
++        };
++};
diff --git a/target/linux/brcm2708/patches-4.4/0107-Build-i2c_gpio-module-and-add-a-device-tree-overlay-.patch b/target/linux/brcm2708/patches-4.4/0107-Build-i2c_gpio-module-and-add-a-device-tree-overlay-.patch
new file mode 100644
index 0000000..4a501a2
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0107-Build-i2c_gpio-module-and-add-a-device-tree-overlay-.patch
@@ -0,0 +1,100 @@
+From 5371d757db191f77f115b1248184e2f1a15dfa4b Mon Sep 17 00:00:00 2001
+From: Alistair Buxton <a.j.buxton at gmail.com>
+Date: Sun, 1 Nov 2015 22:27:56 +0000
+Subject: [PATCH 107/127] Build i2c_gpio module and add a device tree overlay
+ to configure it.
+
+---
+ arch/arm/boot/dts/overlays/Makefile             |  1 +
+ arch/arm/boot/dts/overlays/README               | 13 +++++++++++-
+ arch/arm/boot/dts/overlays/i2c-gpio-overlay.dts | 28 +++++++++++++++++++++++++
+ arch/arm/configs/bcm2709_defconfig              |  1 +
+ arch/arm/configs/bcmrpi_defconfig               |  1 +
+ 5 files changed, 43 insertions(+), 1 deletion(-)
+ create mode 100644 arch/arm/boot/dts/overlays/i2c-gpio-overlay.dts
+
+--- a/arch/arm/boot/dts/overlays/Makefile
++++ b/arch/arm/boot/dts/overlays/Makefile
+@@ -25,6 +25,7 @@ dtb-$(RPI_DT_OVERLAYS) += hifiberry-digi
+ dtb-$(RPI_DT_OVERLAYS) += hy28a-overlay.dtb
+ dtb-$(RPI_DT_OVERLAYS) += hy28b-overlay.dtb
+ dtb-$(RPI_DT_OVERLAYS) += i2c-rtc-overlay.dtb
++dtb-$(RPI_DT_OVERLAYS) += i2c-gpio-overlay.dtb
+ dtb-$(RPI_DT_OVERLAYS) += i2s-mmap-overlay.dtb
+ dtb-$(RPI_DT_OVERLAYS) += iqaudio-dac-overlay.dtb
+ dtb-$(RPI_DT_OVERLAYS) += iqaudio-dacplus-overlay.dtb
+--- a/arch/arm/boot/dts/overlays/README
++++ b/arch/arm/boot/dts/overlays/README
+@@ -287,9 +287,20 @@ Params: speed                    Display
+         ledgpio                  GPIO used to control backlight
+ 
+ 
++Name:   i2c-gpio
++Info:   Adds support for a software I2C controller on GPIO pins
++Load:   dtoverlay=i2c-gpio,<param>=<val>
++Params: i2c_gpio_sda             GPIO used for I2C data (default "23")
++
++        i2c_gpio_scl             GPIO used for I2C clock (default "24")
++
++        i2c_gpio_delay_us        Clock delay in microseconds
++                                 (default "2" = ~100kHz)
++
++
+ Name:   i2c-rtc
+ Info:   Adds support for a number of I2C Real Time Clock devices
+-Load:   dtoverlay=i2c-rtc,<param>
++Load:   dtoverlay=i2c-rtc,<param>=<val>
+ Params: ds1307                   Select the DS1307 device
+ 
+         ds3231                   Select the DS3231 device
+--- /dev/null
++++ b/arch/arm/boot/dts/overlays/i2c-gpio-overlay.dts
+@@ -0,0 +1,28 @@
++// Overlay for i2c_gpio bitbanging host bus.
++/dts-v1/;
++/plugin/;
++
++/ {
++	compatible = "brcm,bcm2708";
++
++	fragment at 0 {
++		target-path = "/";
++		__overlay__ {
++			i2c_gpio: i2c at 0 {
++				compatible = "i2c-gpio";
++				gpios = <&gpio 23 0 /* sda */
++					 &gpio 24 0 /* scl */
++					>;
++				i2c-gpio,delay-us = <2>;        /* ~100 kHz */
++				#address-cells = <1>;
++				#size-cells = <0>;
++			};
++		};
++	};
++	__overrides__ {
++		i2c_gpio_sda = <&i2c_gpio>,"gpios:4";
++		i2c_gpio_scl = <&i2c_gpio>,"gpios:16";
++		i2c_gpio_delay_us = <&i2c_gpio>,"i2c-gpio,delay-us:0";
++	};
++};
++
+--- a/arch/arm/configs/bcm2709_defconfig
++++ b/arch/arm/configs/bcm2709_defconfig
+@@ -595,6 +595,7 @@ CONFIG_RAW_DRIVER=y
+ CONFIG_I2C=y
+ CONFIG_I2C_CHARDEV=m
+ CONFIG_I2C_BCM2708=m
++CONFIG_I2C_GPIO=m
+ CONFIG_SPI=y
+ CONFIG_SPI_BCM2835=m
+ CONFIG_SPI_SPIDEV=y
+--- a/arch/arm/configs/bcmrpi_defconfig
++++ b/arch/arm/configs/bcmrpi_defconfig
+@@ -588,6 +588,7 @@ CONFIG_RAW_DRIVER=y
+ CONFIG_I2C=y
+ CONFIG_I2C_CHARDEV=m
+ CONFIG_I2C_BCM2708=m
++CONFIG_I2C_GPIO=m
+ CONFIG_SPI=y
+ CONFIG_SPI_BCM2835=m
+ CONFIG_SPI_SPIDEV=y
diff --git a/target/linux/brcm2708/patches-4.4/0108-New-overlay-for-PiScreen2r.patch b/target/linux/brcm2708/patches-4.4/0108-New-overlay-for-PiScreen2r.patch
new file mode 100644
index 0000000..28df3eb
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0108-New-overlay-for-PiScreen2r.patch
@@ -0,0 +1,148 @@
+From 872bf9aa8073bb571ecb3fe502afd39257f4ba20 Mon Sep 17 00:00:00 2001
+From: mwilliams03 <mark.mwilliams at gmail.com>
+Date: Sun, 18 Oct 2015 17:07:24 -0700
+Subject: [PATCH 108/127] New overlay for PiScreen2r
+
+---
+ arch/arm/boot/dts/overlays/Makefile               |   1 +
+ arch/arm/boot/dts/overlays/README                 |  14 +++
+ arch/arm/boot/dts/overlays/piscreen2r-overlay.dts | 100 ++++++++++++++++++++++
+ 3 files changed, 115 insertions(+)
+ create mode 100644 arch/arm/boot/dts/overlays/piscreen2r-overlay.dts
+
+--- a/arch/arm/boot/dts/overlays/Makefile
++++ b/arch/arm/boot/dts/overlays/Makefile
+@@ -35,6 +35,7 @@ dtb-$(RPI_DT_OVERLAYS) += mcp2515-can1-o
+ dtb-$(RPI_DT_OVERLAYS) += mmc-overlay.dtb
+ dtb-$(RPI_DT_OVERLAYS) += mz61581-overlay.dtb
+ dtb-$(RPI_DT_OVERLAYS) += piscreen-overlay.dtb
++dtb-$(RPI_DT_OVERLAYS) += piscreen2r-overlay.dtb
+ dtb-$(RPI_DT_OVERLAYS) += pitft28-resistive-overlay.dtb
+ dtb-$(RPI_DT_OVERLAYS) += pps-gpio-overlay.dtb
+ dtb-$(RPI_DT_OVERLAYS) += pwm-overlay.dtb
+--- a/arch/arm/boot/dts/overlays/README
++++ b/arch/arm/boot/dts/overlays/README
+@@ -425,6 +425,20 @@ Params: speed                    Display
+         xohms                    Touchpanel sensitivity (X-plate resistance)
+ 
+ 
++Name:   piscreen2r
++Info:   PiScreen 2 with resistive TP display by OzzMaker.com
++Load:   dtoverlay=piscreen2r,<param>=<val>
++Params: speed                    Display SPI bus speed
++
++        rotate                   Display rotation {0,90,180,270}
++
++        fps                      Delay between frame updates
++
++        debug                    Debug output level {0-7}
++
++        xohms                    Touchpanel sensitivity (X-plate resistance)
++
++
+ Name:   pitft28-resistive
+ Info:   Adafruit PiTFT 2.8" resistive touch screen
+ Load:   dtoverlay=pitft28-resistive,<param>=<val>
+--- /dev/null
++++ b/arch/arm/boot/dts/overlays/piscreen2r-overlay.dts
+@@ -0,0 +1,100 @@
++ /*
++ * Device Tree overlay for PiScreen2 3.5" TFT with resistive touch  by Ozzmaker.com
++ *
++ */
++
++/dts-v1/;
++/plugin/;
++
++/ {
++	compatible = "brcm,bcm2835", "brcm,bcm2708", "brcm,bcm2709";
++
++	fragment at 0 {
++		target = <&spi0>;
++		__overlay__ {
++			status = "okay";
++
++			spidev at 0{
++				status = "disabled";
++			};
++
++			spidev at 1{
++				status = "disabled";
++			};
++		};
++	};
++
++	fragment at 1 {
++		target = <&gpio>;
++		__overlay__ {
++			piscreen2_pins: piscreen2_pins {
++				brcm,pins = <17 25 24 22>;
++				brcm,function = <0 1 1 1>; /* in out out out */
++			};
++		};
++	};
++
++	fragment at 2 {
++		target = <&spi0>;
++		__overlay__ {
++			/* needed to avoid dtc warning */
++			#address-cells = <1>;
++			#size-cells = <0>;
++
++			piscreen2: piscreen2 at 0{
++				compatible = "ilitek,ili9486";
++				reg = <0>;
++				pinctrl-names = "default";
++				pinctrl-0 = <&piscreen2_pins>;
++				bgr;
++				spi-max-frequency = <64000000>;
++				rotate = <90>;
++				fps = <30>;
++				buswidth = <8>;
++				regwidth = <16>;
++				txbuflen = <32768>;
++				reset-gpios = <&gpio 25 0>;
++				dc-gpios = <&gpio 24 0>;
++				led-gpios = <&gpio 22 1>;
++				debug = <0>;
++
++                                init = <0x10000b0 0x00
++                                        0x1000011
++                                        0x20000ff
++                                        0x100003a 0x55
++                                        0x1000036 0x28
++                                        0x10000c0 0x11 0x09
++                                        0x10000c1 0x41
++                                        0x10000c5 0x00 0x00 0x00 0x00
++                                        0x10000b6 0x00 0x02
++                                        0x10000f7 0xa9 0x51 0x2c 0x2
++                                        0x10000be 0x00 0x04
++                                        0x10000e9 0x00
++                                        0x1000011
++                                        0x1000029>;
++
++			};
++
++			piscreen2_ts: piscreen2-ts at 1 {
++				compatible = "ti,ads7846";
++				reg = <1>;
++
++				spi-max-frequency = <2000000>;
++				interrupts = <17 2>; /* high-to-low edge triggered */
++				interrupt-parent = <&gpio>;
++				pendown-gpio = <&gpio 17 0>;
++				ti,swap-xy;
++				ti,x-plate-ohms = /bits/ 16 <100>;
++				ti,pressure-max = /bits/ 16 <255>;
++			};
++		};
++	};
++	__overrides__ {
++		speed =		<&piscreen2>,"spi-max-frequency:0";
++		rotate =	<&piscreen2>,"rotate:0";
++		fps =		<&piscreen2>,"fps:0";
++		debug =		<&piscreen2>,"debug:0";
++		xohms =		<&piscreen2_ts>,"ti,x-plate-ohms;0";
++	};
++};
++
diff --git a/target/linux/brcm2708/patches-4.4/0109-dts-Added-overlay-for-Adafruit-PiTFT-2.8-capacitive-.patch b/target/linux/brcm2708/patches-4.4/0109-dts-Added-overlay-for-Adafruit-PiTFT-2.8-capacitive-.patch
new file mode 100644
index 0000000..0fd1f55
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0109-dts-Added-overlay-for-Adafruit-PiTFT-2.8-capacitive-.patch
@@ -0,0 +1,145 @@
+From aa474d100ec0ad7647935ffaf5bd2272d4a57249 Mon Sep 17 00:00:00 2001
+From: Ondrej Wisniewski <ondrej.wisniewski at gmail.com>
+Date: Fri, 6 Nov 2015 15:01:28 +0100
+Subject: [PATCH 109/127] dts: Added overlay for Adafruit PiTFT 2.8" capacitive
+ touch screen
+
+---
+ arch/arm/boot/dts/overlays/Makefile                |  1 +
+ arch/arm/boot/dts/overlays/README                  | 22 ++++++
+ .../dts/overlays/pitft28-capacitive-overlay.dts    | 88 ++++++++++++++++++++++
+ 3 files changed, 111 insertions(+)
+ create mode 100644 arch/arm/boot/dts/overlays/pitft28-capacitive-overlay.dts
+
+--- a/arch/arm/boot/dts/overlays/Makefile
++++ b/arch/arm/boot/dts/overlays/Makefile
+@@ -36,6 +36,7 @@ dtb-$(RPI_DT_OVERLAYS) += mmc-overlay.dt
+ dtb-$(RPI_DT_OVERLAYS) += mz61581-overlay.dtb
+ dtb-$(RPI_DT_OVERLAYS) += piscreen-overlay.dtb
+ dtb-$(RPI_DT_OVERLAYS) += piscreen2r-overlay.dtb
++dtb-$(RPI_DT_OVERLAYS) += pitft28-capacitive-overlay.dtb
+ dtb-$(RPI_DT_OVERLAYS) += pitft28-resistive-overlay.dtb
+ dtb-$(RPI_DT_OVERLAYS) += pps-gpio-overlay.dtb
+ dtb-$(RPI_DT_OVERLAYS) += pwm-overlay.dtb
+--- a/arch/arm/boot/dts/overlays/README
++++ b/arch/arm/boot/dts/overlays/README
+@@ -439,6 +439,28 @@ Params: speed                    Display
+         xohms                    Touchpanel sensitivity (X-plate resistance)
+ 
+ 
++Name:   pitft28-capacitive
++Info:   Adafruit PiTFT 2.8" capacitive touch screen
++Load:   dtoverlay=pitft28-capacitive,<param>=<val>
++Params: speed                    Display SPI bus speed
++
++        rotate                   Display rotation {0,90,180,270}
++
++        fps                      Delay between frame updates
++
++        debug                    Debug output level {0-7}
++
++        touch-sizex              Touchscreen size x (default 240)
++
++        touch-sizey              Touchscreen size y (default 320)
++
++        touch-invx               Touchscreen inverted x axis
++
++        touch-invy               Touchscreen inverted y axis
++
++        touch-swapxy             Touchscreen swapped x y axis
++
++
+ Name:   pitft28-resistive
+ Info:   Adafruit PiTFT 2.8" resistive touch screen
+ Load:   dtoverlay=pitft28-resistive,<param>=<val>
+--- /dev/null
++++ b/arch/arm/boot/dts/overlays/pitft28-capacitive-overlay.dts
+@@ -0,0 +1,88 @@
++/*
++ * Device Tree overlay for Adafruit PiTFT 2.8" capacitive touch screen
++ *
++ */
++
++/dts-v1/;
++/plugin/;
++
++/ {
++        compatible = "brcm,bcm2835", "brcm,bcm2708", "brcm,bcm2709";
++
++        fragment at 0 {
++                target = <&spi0>;
++                __overlay__ {
++                        status = "okay";
++
++                        spidev at 0{
++                                status = "disabled";
++                        };
++                };
++        };
++
++        fragment at 1 {
++                target = <&gpio>;
++                __overlay__ {
++                        pitft_pins: pitft_pins {
++                                brcm,pins = <24 25>;
++                                brcm,function = <0 1>; /* in out */
++                                brcm,pull = <2 0>; /* pullup none */
++                        };
++                };
++        };
++
++        fragment at 2 {
++                target = <&spi0>;
++                __overlay__ {
++                        /* needed to avoid dtc warning */
++                        #address-cells = <1>;
++                        #size-cells = <0>;
++
++                        pitft: pitft at 0{
++                                compatible = "ilitek,ili9340";
++                                reg = <0>;
++                                pinctrl-names = "default";
++                                pinctrl-0 = <&pitft_pins>;
++
++                                spi-max-frequency = <32000000>;
++                                rotate = <90>;
++                                fps = <25>;
++                                bgr;
++                                buswidth = <8>;
++                                dc-gpios = <&gpio 25 0>;
++                                debug = <0>;
++                        };
++                };
++        };
++
++        fragment at 3 {
++                target = <&i2c1>;
++                __overlay__ {
++                        /* needed to avoid dtc warning */
++                        #address-cells = <1>;
++                        #size-cells = <0>;
++
++                        ft6236: ft6236 at 38 {
++                                compatible = "focaltech,ft6236";
++                                reg = <0x38>;
++
++                                interrupt-parent = <&gpio>;
++                                interrupts = <24 2>;
++                                touchscreen-size-x = <240>;
++                                touchscreen-size-y = <320>;
++                        };
++                };
++        };
++
++        __overrides__ {
++                speed =   <&pitft>,"spi-max-frequency:0";
++                rotate =  <&pitft>,"rotate:0";
++                fps =     <&pitft>,"fps:0";
++                debug =   <&pitft>,"debug:0";
++                touch-sizex = <&ft6236>,"touchscreen-size-x?";
++                touch-sizey = <&ft6236>,"touchscreen-size-y?";
++                touch-invx  = <&ft6236>,"touchscreen-inverted-x?";
++                touch-invy  = <&ft6236>,"touchscreen-inverted-y?";
++                touch-swapxy = <&ft6236>,"touchscreen-swapped-x-y?";
++        };
++};
diff --git a/target/linux/brcm2708/patches-4.4/0110-Add-support-for-the-HiFiBerry-DAC-Pro.patch b/target/linux/brcm2708/patches-4.4/0110-Add-support-for-the-HiFiBerry-DAC-Pro.patch
new file mode 100644
index 0000000..b492c16
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0110-Add-support-for-the-HiFiBerry-DAC-Pro.patch
@@ -0,0 +1,539 @@
+From 7b841ce695094a46ff36b977a714fb60c19346d0 Mon Sep 17 00:00:00 2001
+From: Stuart MacLean <stuart at hifiberry.com>
+Date: Fri, 2 Oct 2015 15:12:59 +0100
+Subject: [PATCH 110/127] Add support for the HiFiBerry DAC+ Pro.
+
+The HiFiBerry DAC+ and DAC+ Pro products both use the existing bcm sound driver; the DAC+ Pro additionally has a special clock device driver representing its two high-precision oscillators.
+
+An additional bug fix is included for the PCM512x codec, whereby the physical size of the sample frame is now used in the calculation of the LRCK divisor; the previous calculation was found to be wrong when using a 24-bit sample contained in a little-endian 4-byte sample frame.
+---
+ .../dts/overlays/hifiberry-dacplus-overlay.dts     |  15 +-
+ drivers/clk/Makefile                               |   1 +
+ drivers/clk/clk-hifiberry-dacpro.c                 | 160 ++++++++++++++
+ sound/soc/bcm/hifiberry_dacplus.c                  | 244 +++++++++++++++++++--
+ sound/soc/codecs/pcm512x.c                         |   3 +-
+ 5 files changed, 396 insertions(+), 27 deletions(-)
+ create mode 100644 drivers/clk/clk-hifiberry-dacpro.c
+
+--- a/arch/arm/boot/dts/overlays/hifiberry-dacplus-overlay.dts
++++ b/arch/arm/boot/dts/overlays/hifiberry-dacplus-overlay.dts
+@@ -6,6 +6,16 @@
+ 	compatible = "brcm,bcm2708";
+ 
+ 	fragment at 0 {
++		target-path = "/clocks";
++		__overlay__ {
++			dacpro_osc: dacpro_osc {
++				compatible = "hifiberry,dacpro-clk";
++				#clock-cells = <0>;
++			};
++		};
++	};
++
++	fragment at 1 {
+ 		target = <&sound>;
+ 		__overlay__ {
+ 			compatible = "hifiberry,hifiberry-dacplus";
+@@ -14,14 +24,14 @@
+ 		};
+ 	};
+ 
+-	fragment at 1 {
++	fragment at 2 {
+ 		target = <&i2s>;
+ 		__overlay__ {
+ 			status = "okay";
+ 		};
+ 	};
+ 
+-	fragment at 2 {
++	fragment at 3 {
+ 		target = <&i2c1>;
+ 		__overlay__ {
+ 			#address-cells = <1>;
+@@ -32,6 +42,7 @@
+ 				#sound-dai-cells = <0>;
+ 				compatible = "ti,pcm5122";
+ 				reg = <0x4d>;
++				clocks = <&dacpro_osc>;
+ 				status = "okay";
+ 			};
+ 		};
+--- a/drivers/clk/Makefile
++++ b/drivers/clk/Makefile
+@@ -24,6 +24,7 @@ obj-$(CONFIG_COMMON_CLK_CDCE706)	+= clk-
+ obj-$(CONFIG_ARCH_CLPS711X)		+= clk-clps711x.o
+ obj-$(CONFIG_ARCH_EFM32)		+= clk-efm32gg.o
+ obj-$(CONFIG_ARCH_HIGHBANK)		+= clk-highbank.o
++obj-$(CONFIG_SND_BCM2708_SOC_HIFIBERRY_DACPLUS) += clk-hifiberry-dacpro.o
+ obj-$(CONFIG_MACH_LOONGSON32)		+= clk-ls1x.o
+ obj-$(CONFIG_COMMON_CLK_MAX_GEN)	+= clk-max-gen.o
+ obj-$(CONFIG_COMMON_CLK_MAX77686)	+= clk-max77686.o
+--- /dev/null
++++ b/drivers/clk/clk-hifiberry-dacpro.c
+@@ -0,0 +1,160 @@
++/*
++ * Clock Driver for HiFiBerry DAC Pro
++ *
++ * Author: Stuart MacLean
++ *         Copyright 2015
++ *
++ * This program is free software; you can redistribute it and/or
++ * modify it under the terms of the GNU General Public License
++ * version 2 as published by the Free Software Foundation.
++ *
++ * This program is distributed in the hope that it will be useful, but
++ * WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
++ * General Public License for more details.
++ */
++
++#include <linux/clk-provider.h>
++#include <linux/clkdev.h>
++#include <linux/kernel.h>
++#include <linux/module.h>
++#include <linux/of.h>
++#include <linux/slab.h>
++#include <linux/platform_device.h>
++
++/* Clock rate of CLK44EN attached to GPIO6 pin */
++#define CLK_44EN_RATE 22579200UL
++/* Clock rate of CLK48EN attached to GPIO3 pin */
++#define CLK_48EN_RATE 24576000UL
++
++/**
++ * struct hifiberry_dacpro_clk - Common struct to the HiFiBerry DAC Pro
++ * @hw: clk_hw for the common clk framework
++ * @mode: 0 => CLK44EN, 1 => CLK48EN
++ */
++struct clk_hifiberry_hw {
++	struct clk_hw hw;
++	uint8_t mode;
++};
++
++#define to_hifiberry_clk(_hw) container_of(_hw, struct clk_hifiberry_hw, hw)
++
++static const struct of_device_id clk_hifiberry_dacpro_dt_ids[] = {
++	{ .compatible = "hifiberry,dacpro-clk",},
++	{ }
++};
++MODULE_DEVICE_TABLE(of, clk_hifiberry_dacpro_dt_ids);
++
++static unsigned long clk_hifiberry_dacpro_recalc_rate(struct clk_hw *hw,
++	unsigned long parent_rate)
++{
++	return (to_hifiberry_clk(hw)->mode == 0) ? CLK_44EN_RATE :
++		CLK_48EN_RATE;
++}
++
++static long clk_hifiberry_dacpro_round_rate(struct clk_hw *hw,
++	unsigned long rate, unsigned long *parent_rate)
++{
++	long actual_rate;
++
++	if (rate <= CLK_44EN_RATE) {
++		actual_rate = (long)CLK_44EN_RATE;
++	} else if (rate >= CLK_48EN_RATE) {
++		actual_rate = (long)CLK_48EN_RATE;
++	} else {
++		long diff44Rate = (long)(rate - CLK_44EN_RATE);
++		long diff48Rate = (long)(CLK_48EN_RATE - rate);
++
++		if (diff44Rate < diff48Rate)
++			actual_rate = (long)CLK_44EN_RATE;
++		else
++			actual_rate = (long)CLK_48EN_RATE;
++	}
++	return actual_rate;
++}
++
++
++static int clk_hifiberry_dacpro_set_rate(struct clk_hw *hw,
++	unsigned long rate, unsigned long parent_rate)
++{
++	unsigned long actual_rate;
++	struct clk_hifiberry_hw *clk = to_hifiberry_clk(hw);
++
++	actual_rate = (unsigned long)clk_hifiberry_dacpro_round_rate(hw, rate,
++		&parent_rate);
++	clk->mode = (actual_rate == CLK_44EN_RATE) ? 0 : 1;
++	return 0;
++}
++
++
++const struct clk_ops clk_hifiberry_dacpro_rate_ops = {
++	.recalc_rate = clk_hifiberry_dacpro_recalc_rate,
++	.round_rate = clk_hifiberry_dacpro_round_rate,
++	.set_rate = clk_hifiberry_dacpro_set_rate,
++};
++
++static int clk_hifiberry_dacpro_probe(struct platform_device *pdev)
++{
++	int ret;
++	struct clk_hifiberry_hw *proclk;
++	struct clk *clk;
++	struct device *dev;
++	struct clk_init_data init;
++
++	dev = &pdev->dev;
++
++	proclk = kzalloc(sizeof(struct clk_hifiberry_hw), GFP_KERNEL);
++	if (!proclk)
++		return -ENOMEM;
++
++	init.name = "clk-hifiberry-dacpro";
++	init.ops = &clk_hifiberry_dacpro_rate_ops;
++	init.flags = CLK_IS_ROOT | CLK_IS_BASIC;
++	init.parent_names = NULL;
++	init.num_parents = 0;
++
++	proclk->mode = 0;
++	proclk->hw.init = &init;
++
++	clk = devm_clk_register(dev, &proclk->hw);
++	if (!IS_ERR(clk)) {
++		ret = of_clk_add_provider(dev->of_node, of_clk_src_simple_get,
++			clk);
++	} else {
++		dev_err(dev, "Fail to register clock driver\n");
++		kfree(proclk);
++		ret = PTR_ERR(clk);
++	}
++	return ret;
++}
++
++static int clk_hifiberry_dacpro_remove(struct platform_device *pdev)
++{
++	of_clk_del_provider(pdev->dev.of_node);
++	return 0;
++}
++
++static struct platform_driver clk_hifiberry_dacpro_driver = {
++	.probe = clk_hifiberry_dacpro_probe,
++	.remove = clk_hifiberry_dacpro_remove,
++	.driver = {
++		.name = "clk-hifiberry-dacpro",
++		.of_match_table = clk_hifiberry_dacpro_dt_ids,
++	},
++};
++
++static int __init clk_hifiberry_dacpro_init(void)
++{
++	return platform_driver_register(&clk_hifiberry_dacpro_driver);
++}
++core_initcall(clk_hifiberry_dacpro_init);
++
++static void __exit clk_hifiberry_dacpro_exit(void)
++{
++	platform_driver_unregister(&clk_hifiberry_dacpro_driver);
++}
++module_exit(clk_hifiberry_dacpro_exit);
++
++MODULE_DESCRIPTION("HiFiBerry DAC Pro clock driver");
++MODULE_LICENSE("GPL v2");
++MODULE_ALIAS("platform:clk-hifiberry-dacpro");
+--- a/sound/soc/bcm/hifiberry_dacplus.c
++++ b/sound/soc/bcm/hifiberry_dacplus.c
+@@ -1,8 +1,8 @@
+ /*
+- * ASoC Driver for HiFiBerry DAC+
++ * ASoC Driver for HiFiBerry DAC+ / DAC Pro
+  *
+- * Author:	Daniel Matuschek
+- *		Copyright 2014
++ * Author:	Daniel Matuschek, Stuart MacLean <stuart at hifiberry.com>
++ *		Copyright 2014-2015
+  *		based on code by Florian Meier <florian.meier at koalo.de>
+  *
+  * This program is free software; you can redistribute it and/or
+@@ -17,6 +17,13 @@
+ 
+ #include <linux/module.h>
+ #include <linux/platform_device.h>
++#include <linux/kernel.h>
++#include <linux/clk.h>
++#include <linux/kernel.h>
++#include <linux/module.h>
++#include <linux/of.h>
++#include <linux/slab.h>
++#include <linux/delay.h>
+ 
+ #include <sound/core.h>
+ #include <sound/pcm.h>
+@@ -26,34 +33,222 @@
+ 
+ #include "../codecs/pcm512x.h"
+ 
++#define HIFIBERRY_DACPRO_NOCLOCK 0
++#define HIFIBERRY_DACPRO_CLK44EN 1
++#define HIFIBERRY_DACPRO_CLK48EN 2
++
++struct pcm512x_priv {
++	struct regmap *regmap;
++	struct clk *sclk;
++};
++
++/* Clock rate of CLK44EN attached to GPIO6 pin */
++#define CLK_44EN_RATE 22579200UL
++/* Clock rate of CLK48EN attached to GPIO3 pin */
++#define CLK_48EN_RATE 24576000UL
++
++static bool snd_rpi_hifiberry_is_dacpro;
++
++static void snd_rpi_hifiberry_dacplus_select_clk(struct snd_soc_codec *codec,
++	int clk_id)
++{
++	switch (clk_id) {
++	case HIFIBERRY_DACPRO_NOCLOCK:
++		snd_soc_update_bits(codec, PCM512x_GPIO_CONTROL_1, 0x24, 0x00);
++		break;
++	case HIFIBERRY_DACPRO_CLK44EN:
++		snd_soc_update_bits(codec, PCM512x_GPIO_CONTROL_1, 0x24, 0x20);
++		break;
++	case HIFIBERRY_DACPRO_CLK48EN:
++		snd_soc_update_bits(codec, PCM512x_GPIO_CONTROL_1, 0x24, 0x04);
++		break;
++	}
++}
++
++static void snd_rpi_hifiberry_dacplus_clk_gpio(struct snd_soc_codec *codec)
++{
++	snd_soc_update_bits(codec, PCM512x_GPIO_EN, 0x24, 0x24);
++	snd_soc_update_bits(codec, PCM512x_GPIO_OUTPUT_3, 0x0f, 0x02);
++	snd_soc_update_bits(codec, PCM512x_GPIO_OUTPUT_6, 0x0f, 0x02);
++}
++
++static bool snd_rpi_hifiberry_dacplus_is_sclk(struct snd_soc_codec *codec)
++{
++	int sck;
++
++	sck = snd_soc_read(codec, PCM512x_RATE_DET_4);
++	return (!(sck & 0x40));
++}
++
++static bool snd_rpi_hifiberry_dacplus_is_sclk_sleep(
++	struct snd_soc_codec *codec)
++{
++	msleep(2);
++	return snd_rpi_hifiberry_dacplus_is_sclk(codec);
++}
++
++static bool snd_rpi_hifiberry_dacplus_is_pro_card(struct snd_soc_codec *codec)
++{
++	bool isClk44EN, isClk48En, isNoClk;
++
++	snd_rpi_hifiberry_dacplus_clk_gpio(codec);
++
++	snd_rpi_hifiberry_dacplus_select_clk(codec, HIFIBERRY_DACPRO_CLK44EN);
++	isClk44EN = snd_rpi_hifiberry_dacplus_is_sclk_sleep(codec);
++
++	snd_rpi_hifiberry_dacplus_select_clk(codec, HIFIBERRY_DACPRO_NOCLOCK);
++	isNoClk = snd_rpi_hifiberry_dacplus_is_sclk_sleep(codec);
++
++	snd_rpi_hifiberry_dacplus_select_clk(codec, HIFIBERRY_DACPRO_CLK48EN);
++	isClk48En = snd_rpi_hifiberry_dacplus_is_sclk_sleep(codec);
++
++	return (isClk44EN && isClk48En && !isNoClk);
++}
++
++static int snd_rpi_hifiberry_dacplus_clk_for_rate(int sample_rate)
++{
++	int type;
++
++	switch (sample_rate) {
++	case 11025:
++	case 22050:
++	case 44100:
++	case 88200:
++	case 176400:
++		type = HIFIBERRY_DACPRO_CLK44EN;
++		break;
++	default:
++		type = HIFIBERRY_DACPRO_CLK48EN;
++		break;
++	}
++	return type;
++}
++
++static void snd_rpi_hifiberry_dacplus_set_sclk(struct snd_soc_codec *codec,
++	int sample_rate)
++{
++	struct pcm512x_priv *pcm512x = snd_soc_codec_get_drvdata(codec);
++
++	if (!IS_ERR(pcm512x->sclk)) {
++		int ctype;
++
++		ctype = snd_rpi_hifiberry_dacplus_clk_for_rate(sample_rate);
++		clk_set_rate(pcm512x->sclk, (ctype == HIFIBERRY_DACPRO_CLK44EN)
++			? CLK_44EN_RATE : CLK_48EN_RATE);
++		snd_rpi_hifiberry_dacplus_select_clk(codec, ctype);
++	}
++}
++
+ static int snd_rpi_hifiberry_dacplus_init(struct snd_soc_pcm_runtime *rtd)
+ {
+ 	struct snd_soc_codec *codec = rtd->codec;
++	struct pcm512x_priv *priv;
++
++	snd_rpi_hifiberry_is_dacpro
++		= snd_rpi_hifiberry_dacplus_is_pro_card(codec);
++
++	if (snd_rpi_hifiberry_is_dacpro) {
++		struct snd_soc_dai_link *dai = rtd->dai_link;
++
++		dai->name = "HiFiBerry DAC+ Pro";
++		dai->stream_name = "HiFiBerry DAC+ Pro HiFi";
++		dai->dai_fmt = SND_SOC_DAIFMT_I2S | SND_SOC_DAIFMT_NB_NF
++			| SND_SOC_DAIFMT_CBM_CFM;
++
++		snd_soc_update_bits(codec, PCM512x_BCLK_LRCLK_CFG, 0x31, 0x11);
++		snd_soc_update_bits(codec, PCM512x_MASTER_MODE, 0x03, 0x03);
++		snd_soc_update_bits(codec, PCM512x_MASTER_CLKDIV_2, 0x7f, 63);
++	} else {
++		priv = snd_soc_codec_get_drvdata(codec);
++		priv->sclk = ERR_PTR(-ENOENT);
++	}
++
+ 	snd_soc_update_bits(codec, PCM512x_GPIO_EN, 0x08, 0x08);
+-	snd_soc_update_bits(codec, PCM512x_GPIO_OUTPUT_4, 0xf, 0x02);
+-	snd_soc_update_bits(codec, PCM512x_GPIO_CONTROL_1, 0x08,0x08);
++	snd_soc_update_bits(codec, PCM512x_GPIO_OUTPUT_4, 0x0f, 0x02);
++	snd_soc_update_bits(codec, PCM512x_GPIO_CONTROL_1, 0x08, 0x08);
++
++	return 0;
++}
++
++static int snd_rpi_hifiberry_dacplus_update_rate_den(
++	struct snd_pcm_substream *substream, struct snd_pcm_hw_params *params)
++{
++	struct snd_soc_pcm_runtime *rtd = substream->private_data;
++	struct snd_soc_codec *codec = rtd->codec;
++	struct pcm512x_priv *pcm512x = snd_soc_codec_get_drvdata(codec);
++	struct snd_ratnum *rats_no_pll;
++	unsigned int num = 0, den = 0;
++	int err;
++
++	rats_no_pll = devm_kzalloc(rtd->dev, sizeof(*rats_no_pll), GFP_KERNEL);
++	if (!rats_no_pll)
++		return -ENOMEM;
++
++	rats_no_pll->num = clk_get_rate(pcm512x->sclk) / 64;
++	rats_no_pll->den_min = 1;
++	rats_no_pll->den_max = 128;
++	rats_no_pll->den_step = 1;
++
++	err = snd_interval_ratnum(hw_param_interval(params,
++		SNDRV_PCM_HW_PARAM_RATE), 1, rats_no_pll, &num, &den);
++	if (err >= 0 && den) {
++		params->rate_num = num;
++		params->rate_den = den;
++	}
++
++	devm_kfree(rtd->dev, rats_no_pll);
+ 	return 0;
+ }
+ 
+-static int snd_rpi_hifiberry_dacplus_hw_params(struct snd_pcm_substream *substream,
+-				       struct snd_pcm_hw_params *params)
++static int snd_rpi_hifiberry_dacplus_set_bclk_ratio_pro(
++	struct snd_soc_dai *cpu_dai, struct snd_pcm_hw_params *params)
++{
++	int bratio = snd_pcm_format_physical_width(params_format(params))
++		* params_channels(params);
++	return snd_soc_dai_set_bclk_ratio(cpu_dai, bratio);
++}
++
++static int snd_rpi_hifiberry_dacplus_hw_params(
++	struct snd_pcm_substream *substream, struct snd_pcm_hw_params *params)
+ {
++	int ret;
+ 	struct snd_soc_pcm_runtime *rtd = substream->private_data;
+ 	struct snd_soc_dai *cpu_dai = rtd->cpu_dai;
+-	return snd_soc_dai_set_bclk_ratio(cpu_dai, 64);
++
++	if (snd_rpi_hifiberry_is_dacpro) {
++		struct snd_soc_codec *codec = rtd->codec;
++
++		snd_rpi_hifiberry_dacplus_set_sclk(codec,
++			params_rate(params));
++
++		ret = snd_rpi_hifiberry_dacplus_set_bclk_ratio_pro(cpu_dai,
++			params);
++		if (!ret)
++			ret = snd_rpi_hifiberry_dacplus_update_rate_den(
++				substream, params);
++	} else {
++		ret = snd_soc_dai_set_bclk_ratio(cpu_dai, 64);
++	}
++	return ret;
+ }
+ 
+-static int snd_rpi_hifiberry_dacplus_startup(struct snd_pcm_substream *substream) {
++static int snd_rpi_hifiberry_dacplus_startup(
++	struct snd_pcm_substream *substream)
++{
+ 	struct snd_soc_pcm_runtime *rtd = substream->private_data;
+ 	struct snd_soc_codec *codec = rtd->codec;
+-	snd_soc_update_bits(codec, PCM512x_GPIO_CONTROL_1, 0x08,0x08);
++
++	snd_soc_update_bits(codec, PCM512x_GPIO_CONTROL_1, 0x08, 0x08);
+ 	return 0;
+ }
+ 
+-static void snd_rpi_hifiberry_dacplus_shutdown(struct snd_pcm_substream *substream) {
++static void snd_rpi_hifiberry_dacplus_shutdown(
++	struct snd_pcm_substream *substream)
++{
+ 	struct snd_soc_pcm_runtime *rtd = substream->private_data;
+ 	struct snd_soc_codec *codec = rtd->codec;
+-	snd_soc_update_bits(codec, PCM512x_GPIO_CONTROL_1, 0x08,0x00);
++
++	snd_soc_update_bits(codec, PCM512x_GPIO_CONTROL_1, 0x08, 0x00);
+ }
+ 
+ /* machine stream operations */
+@@ -90,19 +285,20 @@ static int snd_rpi_hifiberry_dacplus_pro
+ 	int ret = 0;
+ 
+ 	snd_rpi_hifiberry_dacplus.dev = &pdev->dev;
+-
+ 	if (pdev->dev.of_node) {
+-	    struct device_node *i2s_node;
+-	    struct snd_soc_dai_link *dai = &snd_rpi_hifiberry_dacplus_dai[0];
+-	    i2s_node = of_parse_phandle(pdev->dev.of_node,
+-					"i2s-controller", 0);
+-
+-	    if (i2s_node) {
+-		dai->cpu_dai_name = NULL;
+-		dai->cpu_of_node = i2s_node;
+-		dai->platform_name = NULL;
+-		dai->platform_of_node = i2s_node;
+-	    }
++		struct device_node *i2s_node;
++		struct snd_soc_dai_link *dai;
++
++		dai = &snd_rpi_hifiberry_dacplus_dai[0];
++		i2s_node = of_parse_phandle(pdev->dev.of_node,
++			"i2s-controller", 0);
++
++		if (i2s_node) {
++			dai->cpu_dai_name = NULL;
++			dai->cpu_of_node = i2s_node;
++			dai->platform_name = NULL;
++			dai->platform_of_node = i2s_node;
++		}
+ 	}
+ 
+ 	ret = snd_soc_register_card(&snd_rpi_hifiberry_dacplus);
+--- a/sound/soc/codecs/pcm512x.c
++++ b/sound/soc/codecs/pcm512x.c
+@@ -854,7 +854,8 @@ static int pcm512x_set_dividers(struct s
+ 	int fssp;
+ 	int gpio;
+ 
+-	lrclk_div = snd_soc_params_to_frame_size(params);
++	lrclk_div = snd_pcm_format_physical_width(params_format(params))
++		* params_channels(params);
+ 	if (lrclk_div == 0) {
+ 		dev_err(dev, "No LRCLK?\n");
+ 		return -EINVAL;
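The DAC+ Pro machine driver above picks one of two fixed oscillators per sample
rate instead of relying on a PLL. As a rough illustration (a userspace C sketch,
not part of the patch): both crystal rates are exact 512x multiples of the base
audio rates, so every supported rate divides one of them with no remainder,
which is all the clk_for_rate/set_sclk logic needs.

/*
 * Illustration only: map each sample rate to the DAC+ Pro oscillator
 * whose rate it divides exactly (22.5792 MHz = 44100 * 512,
 * 24.576 MHz = 48000 * 512).
 */
#include <stdio.h>

#define CLK_44EN_RATE 22579200UL	/* 44100 * 512 */
#define CLK_48EN_RATE 24576000UL	/* 48000 * 512 */

int main(void)
{
	const unsigned int rates[] = { 44100, 88200, 176400, 48000, 96000, 192000 };
	unsigned int i;

	for (i = 0; i < sizeof(rates) / sizeof(rates[0]); i++) {
		/* 44.1 kHz family rates are not multiples of 8 kHz */
		unsigned long sclk = (rates[i] % 8000) ? CLK_44EN_RATE
						       : CLK_48EN_RATE;

		printf("%6u Hz -> sclk %lu Hz (sclk/rate = %lu)\n",
		       rates[i], sclk, sclk / rates[i]);
	}
	return 0;
}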
diff --git a/target/linux/brcm2708/patches-4.4/0111-BCM270X_DT-Add-at86rf233-overlay.patch b/target/linux/brcm2708/patches-4.4/0111-BCM270X_DT-Add-at86rf233-overlay.patch
new file mode 100644
index 0000000..dfb067a
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0111-BCM270X_DT-Add-at86rf233-overlay.patch
@@ -0,0 +1,130 @@
+From d0219e9506be92eb12b2a303e5cbb2261e0dc05a Mon Sep 17 00:00:00 2001
+From: Phil Elwell <phil at raspberrypi.org>
+Date: Mon, 5 Oct 2015 10:47:45 +0100
+Subject: [PATCH 111/127] BCM270X_DT: Add at86rf233 overlay
+
+Add an overlay to support the Atmel AT86RF233 WPAN transceiver on spi0.0.
+
+See: https://github.com/raspberrypi/linux/issues/1151
+---
+ arch/arm/boot/dts/overlays/Makefile              |  1 +
+ arch/arm/boot/dts/overlays/README                | 21 +++++++--
+ arch/arm/boot/dts/overlays/at86rf233-overlay.dts | 54 ++++++++++++++++++++++++
+ 3 files changed, 72 insertions(+), 4 deletions(-)
+ create mode 100644 arch/arm/boot/dts/overlays/at86rf233-overlay.dts
+
+--- a/arch/arm/boot/dts/overlays/Makefile
++++ b/arch/arm/boot/dts/overlays/Makefile
+@@ -13,6 +13,7 @@ ifeq ($(CONFIG_ARCH_BCM2835),y)
+ endif
+ 
+ dtb-$(RPI_DT_OVERLAYS) += ads7846-overlay.dtb
++dtb-$(RPI_DT_OVERLAYS) += at86rf233-overlay.dtb
+ dtb-$(RPI_DT_OVERLAYS) += bmp085_i2c-sensor-overlay.dtb
+ dtb-$(RPI_DT_OVERLAYS) += dht11-overlay.dtb
+ dtb-$(RPI_DT_OVERLAYS) += enc28j60-overlay.dtb
+--- a/arch/arm/boot/dts/overlays/README
++++ b/arch/arm/boot/dts/overlays/README
+@@ -69,13 +69,14 @@ DT parameters:
+ 
+ Parameters always have default values, although in some cases (e.g. "w1-gpio")
+ it is necessary to provided multiple overlays in order to get the desired
+-behaviour. See the list of overlays below for a description of the parameters and their defaults.
++behaviour. See the list of overlays below for a description of the parameters
++and their defaults.
+ 
+ The Overlay and Parameter Reference
+ ===================================
+ 
+-N.B. When editing this file, please preserve the indentation levels to make it simple to parse
+-programmatically. NO HARD TABS.
++N.B. When editing this file, please preserve the indentation levels to make it
++simple to parse programmatically. NO HARD TABS.
+ 
+ 
+ Name:   <The base DTB>
+@@ -149,7 +150,7 @@ Name:   ads7846
+ Info:   ADS7846 Touch controller
+ Load:   dtoverlay=ads7846,<param>=<val>
+ Params: cs                       SPI bus Chip Select (default 1)
+-        speed                    SPI bus speed (default 2Mhz, max 3.25MHz)
++        speed                    SPI bus speed (default 2MHz, max 3.25MHz)
+         penirq                   GPIO used for PENIRQ. REQUIRED
+         penirq_pull              Set GPIO pull (default 0=none, 2=pullup)
+         swapxy                   Swap x and y axis
+@@ -170,6 +171,18 @@ Params: cs                       SPI bus
+         www.kernel.org/doc/Documentation/devicetree/bindings/input/ads7846.txt
+ 
+ 
++Name:   at86rf233
++Info:   Configures the Atmel AT86RF233 802.15.4 low-power WPAN transceiver,
++        connected to spi0.0
++Load:   dtoverlay=at86rf233,<param>=<val>
++Params: interrupt                GPIO used for INT (default 23)
++        reset                    GPIO used for Reset (default 24)
++        sleep                    GPIO used for Sleep (default 25)
++        speed                    SPI bus speed in Hz (default 6000000)
++        trim                     Fine tuning of the internal capacitance
++                                 arrays (0=+0pF, 15=+4.5pF, default 15)
++
++
+ Name:   bmp085_i2c-sensor
+ Info:   Configures the BMP085/BMP180 digital barometric pressure and temperature
+         sensors from Bosch Sensortec
+--- /dev/null
++++ b/arch/arm/boot/dts/overlays/at86rf233-overlay.dts
+@@ -0,0 +1,54 @@
++/dts-v1/;
++/plugin/;
++
++/* Overlay for Atmel AT86RF233 IEEE 802.15.4 WPAN transceiver on spi0.0 */
++
++/ {
++	compatible = "brcm,bcm2835", "brcm,bcm2836", "brcm,bcm2708", "brcm,bcm2709";
++
++	fragment at 0 {
++		target = <&spi0>;
++		__overlay__ {
++			#address-cells = <1>;
++			#size-cells = <0>;
++
++			status = "okay";
++
++			spidev at 0{
++				status = "disabled";
++			};
++
++			lowpan0: at86rf233 at 0 {
++				compatible = "atmel,at86rf233";
++				reg = <0>;
++				interrupt-parent = <&gpio>;
++				interrupts = <23 4>; /* active high */
++				reset-gpio = <&gpio 24 1>;
++				sleep-gpio = <&gpio 25 1>;
++				spi-max-frequency = <6000000>;
++				xtal-trim = /bits/ 8 <0xf>;
++			};
++		};
++	};
++
++	fragment at 1 {
++		target = <&gpio>;
++		__overlay__ {
++			lowpan0_pins: lowpan0_pins {
++				brcm,pins = <23 24 25>;
++				brcm,function = <0 1 1>; /* in out out */
++			};
++		};
++	};
++
++	__overrides__ {
++		interrupt = <&lowpan0>, "interrupts:0",
++			<&lowpan0_pins>, "brcm,pins:0";
++		reset     = <&lowpan0>, "reset-gpio:4",
++			<&lowpan0_pins>, "brcm,pins:4";
++		sleep     = <&lowpan0>, "sleep-gpio:4",
++			<&lowpan0_pins>, "brcm,pins:8";
++		speed     = <&lowpan0>, "spi-max-frequency:0";
++		trim      = <&lowpan0>, "xtal-trim.0";
++	};
++};
diff --git a/target/linux/brcm2708/patches-4.4/0112-mm-Remove-the-PFN-busy-warning.patch b/target/linux/brcm2708/patches-4.4/0112-mm-Remove-the-PFN-busy-warning.patch
new file mode 100644
index 0000000..1254547
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0112-mm-Remove-the-PFN-busy-warning.patch
@@ -0,0 +1,25 @@
+From f65e7507069d858d619883fccd2d92b2783e07b3 Mon Sep 17 00:00:00 2001
+From: Eric Anholt <eric at anholt.net>
+Date: Thu, 18 Dec 2014 16:07:15 -0800
+Subject: [PATCH 112/127] mm: Remove the PFN busy warning
+
+See commit dae803e165a11bc88ca8dbc07a11077caf97bbcb -- the warning is
+expected sometimes when using CMA.  However, even with that commit the
+warning still spams my kernel log.
+
+Signed-off-by: Eric Anholt <eric at anholt.net>
+---
+ mm/page_alloc.c | 2 --
+ 1 file changed, 2 deletions(-)
+
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -6760,8 +6760,6 @@ int alloc_contig_range(unsigned long sta
+ 
+ 	/* Make sure the range is really isolated. */
+ 	if (test_pages_isolated(outer_start, end, false)) {
+-		pr_info("%s: [%lx, %lx) PFNs busy\n",
+-			__func__, outer_start, end);
+ 		ret = -EBUSY;
+ 		goto done;
+ 	}
diff --git a/target/linux/brcm2708/patches-4.4/0113-drm-Put-an-optional-field-in-the-driver-struct-for-G.patch b/target/linux/brcm2708/patches-4.4/0113-drm-Put-an-optional-field-in-the-driver-struct-for-G.patch
new file mode 100644
index 0000000..ab7c3ab
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0113-drm-Put-an-optional-field-in-the-driver-struct-for-G.patch
@@ -0,0 +1,40 @@
+From daf1a4e97b6eb0c0576d895c9496025e8cd84a0f Mon Sep 17 00:00:00 2001
+From: Eric Anholt <eric at anholt.net>
+Date: Wed, 19 Nov 2014 12:06:38 -0800
+Subject: [PATCH 113/127] drm: Put an optional field in the driver struct for
+ GEM obj struct size.
+
+This allows a driver to derive from the CMA object without copying all
+of the code.
+
+Signed-off-by: Eric Anholt <eric at anholt.net>
+---
+ drivers/gpu/drm/drm_gem_cma_helper.c | 5 ++++-
+ include/drm/drmP.h                   | 1 +
+ 2 files changed, 5 insertions(+), 1 deletion(-)
+
+--- a/drivers/gpu/drm/drm_gem_cma_helper.c
++++ b/drivers/gpu/drm/drm_gem_cma_helper.c
+@@ -58,8 +58,11 @@ __drm_gem_cma_create(struct drm_device *
+ 	struct drm_gem_cma_object *cma_obj;
+ 	struct drm_gem_object *gem_obj;
+ 	int ret;
++	size_t obj_size = (drm->driver->gem_obj_size ?
++			   drm->driver->gem_obj_size :
++			   sizeof(*cma_obj));
+ 
+-	cma_obj = kzalloc(sizeof(*cma_obj), GFP_KERNEL);
++	cma_obj = kzalloc(obj_size, GFP_KERNEL);
+ 	if (!cma_obj)
+ 		return ERR_PTR(-ENOMEM);
+ 
+--- a/include/drm/drmP.h
++++ b/include/drm/drmP.h
+@@ -639,6 +639,7 @@ struct drm_driver {
+ 
+ 	u32 driver_features;
+ 	int dev_priv_size;
++	size_t gem_obj_size;
+ 	const struct drm_ioctl_desc *ioctls;
+ 	int num_ioctls;
+ 	const struct file_operations *fops;
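The new gem_obj_size field lets a driver have the CMA helpers allocate a
larger, driver-specific object with the drm_gem_cma_object embedded as its
first member. A minimal sketch of the intended use follows; struct my_bo and
its fields are hypothetical names for illustration, not from the patch
(vc4's struct vc4_bo in the later patches of this series follows the same
pattern).

/* Sketch only: subclassing the CMA GEM object via the new field. */
#include <drm/drmP.h>
#include <drm/drm_gem_cma_helper.h>

struct my_bo {
	struct drm_gem_cma_object base;	/* must be the first member */
	bool validated;			/* driver-private extra state */
};

static inline struct my_bo *to_my_bo(struct drm_gem_object *gem_obj)
{
	return container_of(to_drm_gem_cma_obj(gem_obj), struct my_bo, base);
}

static struct drm_driver my_driver = {
	/* ...feature flags, fops and ioctl table as usual... */
	.gem_obj_size = sizeof(struct my_bo),
};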
diff --git a/target/linux/brcm2708/patches-4.4/0114-drm-vc4-Add-an-interface-for-capturing-the-GPU-state.patch b/target/linux/brcm2708/patches-4.4/0114-drm-vc4-Add-an-interface-for-capturing-the-GPU-state.patch
new file mode 100644
index 0000000..97cce3d
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0114-drm-vc4-Add-an-interface-for-capturing-the-GPU-state.patch
@@ -0,0 +1,333 @@
+From 7839c603643699a8f4f061e1ab2eeb970af00802 Mon Sep 17 00:00:00 2001
+From: Eric Anholt <eric at anholt.net>
+Date: Fri, 30 Oct 2015 10:09:02 -0700
+Subject: [PATCH 114/127] drm/vc4: Add an interface for capturing the GPU state
+ after a hang.
+
+This can be parsed with vc4-gpu-tools tools for trying to figure out
+what was going on.
+
+Signed-off-by: Eric Anholt <eric at anholt.net>
+---
+ drivers/gpu/drm/vc4/vc4_bo.c  |   4 +-
+ drivers/gpu/drm/vc4/vc4_drv.c |   1 +
+ drivers/gpu/drm/vc4/vc4_drv.h |   4 +
+ drivers/gpu/drm/vc4/vc4_gem.c | 185 ++++++++++++++++++++++++++++++++++++++++++
+ include/uapi/drm/vc4_drm.h    |  45 ++++++++++
+ 5 files changed, 237 insertions(+), 2 deletions(-)
+
+--- a/drivers/gpu/drm/vc4/vc4_bo.c
++++ b/drivers/gpu/drm/vc4/vc4_bo.c
+@@ -415,8 +415,8 @@ int vc4_mmap(struct file *filp, struct v
+ 	gem_obj = vma->vm_private_data;
+ 	bo = to_vc4_bo(gem_obj);
+ 
+-	if (bo->validated_shader) {
+-		DRM_ERROR("mmaping of shader BOs not allowed.\n");
++	if (bo->validated_shader && (vma->vm_flags & VM_WRITE)) {
++		DRM_ERROR("mmaping of shader BOs for writing not allowed.\n");
+ 		return -EINVAL;
+ 	}
+ 
+--- a/drivers/gpu/drm/vc4/vc4_drv.c
++++ b/drivers/gpu/drm/vc4/vc4_drv.c
+@@ -81,6 +81,7 @@ static const struct drm_ioctl_desc vc4_d
+ 	DRM_IOCTL_DEF_DRV(VC4_CREATE_BO, vc4_create_bo_ioctl, 0),
+ 	DRM_IOCTL_DEF_DRV(VC4_MMAP_BO, vc4_mmap_bo_ioctl, 0),
+ 	DRM_IOCTL_DEF_DRV(VC4_CREATE_SHADER_BO, vc4_create_shader_bo_ioctl, 0),
++	DRM_IOCTL_DEF_DRV(VC4_GET_HANG_STATE, vc4_get_hang_state_ioctl, DRM_ROOT_ONLY),
+ };
+ 
+ static struct drm_driver vc4_drm_driver = {
+--- a/drivers/gpu/drm/vc4/vc4_drv.h
++++ b/drivers/gpu/drm/vc4/vc4_drv.h
+@@ -20,6 +20,8 @@ struct vc4_dev {
+ 	struct drm_fbdev_cma *fbdev;
+ 	struct rpi_firmware *firmware;
+ 
++	struct vc4_hang_state *hang_state;
++
+ 	/* The kernel-space BO cache.  Tracks buffers that have been
+ 	 * unreferenced by all other users (refcounts of 0!) but not
+ 	 * yet freed, so we can do cheap allocations.
+@@ -366,6 +368,8 @@ int vc4_create_shader_bo_ioctl(struct dr
+ 			       struct drm_file *file_priv);
+ int vc4_mmap_bo_ioctl(struct drm_device *dev, void *data,
+ 		      struct drm_file *file_priv);
++int vc4_get_hang_state_ioctl(struct drm_device *dev, void *data,
++			     struct drm_file *file_priv);
+ int vc4_mmap(struct file *filp, struct vm_area_struct *vma);
+ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
+ void *vc4_prime_vmap(struct drm_gem_object *obj);
+--- a/drivers/gpu/drm/vc4/vc4_gem.c
++++ b/drivers/gpu/drm/vc4/vc4_gem.c
+@@ -40,6 +40,186 @@ vc4_queue_hangcheck(struct drm_device *d
+ 		  round_jiffies_up(jiffies + msecs_to_jiffies(100)));
+ }
+ 
++struct vc4_hang_state {
++	struct drm_vc4_get_hang_state user_state;
++
++	u32 bo_count;
++	struct drm_gem_object **bo;
++};
++
++static void
++vc4_free_hang_state(struct drm_device *dev, struct vc4_hang_state *state)
++{
++	unsigned int i;
++
++	mutex_lock(&dev->struct_mutex);
++	for (i = 0; i < state->user_state.bo_count; i++) {
++		drm_gem_object_unreference(state->bo[i]);
++	}
++	mutex_unlock(&dev->struct_mutex);
++
++	kfree(state);
++}
++
++int
++vc4_get_hang_state_ioctl(struct drm_device *dev, void *data,
++			 struct drm_file *file_priv)
++{
++ 	struct drm_vc4_get_hang_state *get_state = data;
++	struct drm_vc4_get_hang_state_bo *bo_state;
++	struct vc4_hang_state *kernel_state;
++ 	struct drm_vc4_get_hang_state *state;
++	struct vc4_dev *vc4 = to_vc4_dev(dev);
++	unsigned long irqflags;
++	u32 i;
++	int ret;
++
++	spin_lock_irqsave(&vc4->job_lock, irqflags);
++	kernel_state = vc4->hang_state;
++	if (!kernel_state) {
++		spin_unlock_irqrestore(&vc4->job_lock, irqflags);
++		return -ENOENT;
++	}
++	state = &kernel_state->user_state;
++
++	/* If the user's array isn't big enough, just return the
++	 * required array size.
++	 */
++	if (get_state->bo_count < state->bo_count) {
++		get_state->bo_count = state->bo_count;
++		spin_unlock_irqrestore(&vc4->job_lock, irqflags);
++		return 0;
++	}
++
++	vc4->hang_state = NULL;
++	spin_unlock_irqrestore(&vc4->job_lock, irqflags);
++
++	/* Save the user's BO pointer, so we don't stomp it with the memcpy. */
++	state->bo = get_state->bo;
++	memcpy(get_state, state, sizeof(*state));
++
++	bo_state = kcalloc(state->bo_count, sizeof(*bo_state), GFP_KERNEL);
++	if (!bo_state) {
++		ret = -ENOMEM;
++		goto err_free;
++	}
++
++	for (i = 0; i < state->bo_count; i++) {
++		struct vc4_bo *vc4_bo = to_vc4_bo(kernel_state->bo[i]);
++		u32 handle;
++		ret = drm_gem_handle_create(file_priv, kernel_state->bo[i],
++					    &handle);
++
++		if (ret) {
++			state->bo_count = i - 1;
++			goto err;
++		}
++		bo_state[i].handle = handle;
++		bo_state[i].paddr = vc4_bo->base.paddr;
++		bo_state[i].size = vc4_bo->base.base.size;
++	}
++
++	ret = copy_to_user((void __user *)(uintptr_t)get_state->bo,
++			   bo_state,
++			   state->bo_count * sizeof(*bo_state));
++	kfree(bo_state);
++
++ err_free:
++
++	vc4_free_hang_state(dev, kernel_state);
++
++err:
++	return ret;
++}
++
++static void
++vc4_save_hang_state(struct drm_device *dev)
++{
++	struct vc4_dev *vc4 = to_vc4_dev(dev);
++	struct drm_vc4_get_hang_state *state;
++	struct vc4_hang_state *kernel_state;
++	struct vc4_exec_info *exec;
++	struct vc4_bo *bo;
++	unsigned long irqflags;
++	unsigned int i, unref_list_count;
++
++	kernel_state = kcalloc(1, sizeof(*state), GFP_KERNEL);
++	if (!kernel_state)
++		return;
++
++	state = &kernel_state->user_state;
++
++	spin_lock_irqsave(&vc4->job_lock, irqflags);
++	exec = vc4_first_job(vc4);
++	if (!exec) {
++		spin_unlock_irqrestore(&vc4->job_lock, irqflags);
++		return;
++	}
++
++	unref_list_count = 0;
++	list_for_each_entry(bo, &exec->unref_list, unref_head)
++		unref_list_count++;
++
++	state->bo_count = exec->bo_count + unref_list_count;
++	kernel_state->bo = kcalloc(state->bo_count, sizeof(*kernel_state->bo),
++				   GFP_ATOMIC);
++	if (!kernel_state->bo) {
++		spin_unlock_irqrestore(&vc4->job_lock, irqflags);
++		return;
++	}
++
++	for (i = 0; i < exec->bo_count; i++) {
++		drm_gem_object_reference(&exec->bo[i].bo->base);
++		kernel_state->bo[i] = &exec->bo[i].bo->base;
++	}
++
++	list_for_each_entry(bo, &exec->unref_list, unref_head) {
++		drm_gem_object_reference(&bo->base.base);
++		kernel_state->bo[i] = &bo->base.base;
++		i++;
++	}
++
++	state->start_bin = exec->ct0ca;
++	state->start_render = exec->ct1ca;
++
++	spin_unlock_irqrestore(&vc4->job_lock, irqflags);
++
++	state->ct0ca = V3D_READ(V3D_CTNCA(0));
++	state->ct0ea = V3D_READ(V3D_CTNEA(0));
++
++	state->ct1ca = V3D_READ(V3D_CTNCA(1));
++	state->ct1ea = V3D_READ(V3D_CTNEA(1));
++
++	state->ct0cs = V3D_READ(V3D_CTNCS(0));
++	state->ct1cs = V3D_READ(V3D_CTNCS(1));
++
++	state->ct0ra0 = V3D_READ(V3D_CT00RA0);
++	state->ct1ra0 = V3D_READ(V3D_CT01RA0);
++
++	state->bpca = V3D_READ(V3D_BPCA);
++	state->bpcs = V3D_READ(V3D_BPCS);
++	state->bpoa = V3D_READ(V3D_BPOA);
++	state->bpos = V3D_READ(V3D_BPOS);
++
++	state->vpmbase = V3D_READ(V3D_VPMBASE);
++
++	state->dbge = V3D_READ(V3D_DBGE);
++	state->fdbgo = V3D_READ(V3D_FDBGO);
++	state->fdbgb = V3D_READ(V3D_FDBGB);
++	state->fdbgr = V3D_READ(V3D_FDBGR);
++	state->fdbgs = V3D_READ(V3D_FDBGS);
++	state->errstat = V3D_READ(V3D_ERRSTAT);
++
++	spin_lock_irqsave(&vc4->job_lock, irqflags);
++	if (vc4->hang_state) {
++		spin_unlock_irqrestore(&vc4->job_lock, irqflags);
++		vc4_free_hang_state(dev, kernel_state);
++	} else {
++		vc4->hang_state = kernel_state;
++		spin_unlock_irqrestore(&vc4->job_lock, irqflags);
++	}
++}
++
+ static void
+ vc4_reset(struct drm_device *dev)
+ {
+@@ -64,6 +244,8 @@ vc4_reset_work(struct work_struct *work)
+ 	struct vc4_dev *vc4 =
+ 		container_of(work, struct vc4_dev, hangcheck.reset_work);
+ 
++	vc4_save_hang_state(vc4->dev);
++
+ 	vc4_reset(vc4->dev);
+ }
+ 
+@@ -673,4 +855,7 @@ vc4_gem_destroy(struct drm_device *dev)
+ 	}
+ 
+ 	vc4_bo_cache_destroy(dev);
++
++	if (vc4->hang_state)
++		vc4_free_hang_state(dev, vc4->hang_state);
+ }
+--- a/include/uapi/drm/vc4_drm.h
++++ b/include/uapi/drm/vc4_drm.h
+@@ -32,6 +32,7 @@
+ #define DRM_VC4_CREATE_BO                         0x03
+ #define DRM_VC4_MMAP_BO                           0x04
+ #define DRM_VC4_CREATE_SHADER_BO                  0x05
++#define DRM_VC4_GET_HANG_STATE                    0x06
+ 
+ #define DRM_IOCTL_VC4_SUBMIT_CL           DRM_IOWR( DRM_COMMAND_BASE + DRM_VC4_SUBMIT_CL, struct drm_vc4_submit_cl)
+ #define DRM_IOCTL_VC4_WAIT_SEQNO          DRM_IOWR( DRM_COMMAND_BASE + DRM_VC4_WAIT_SEQNO, struct drm_vc4_wait_seqno)
+@@ -39,6 +40,7 @@
+ #define DRM_IOCTL_VC4_CREATE_BO           DRM_IOWR( DRM_COMMAND_BASE + DRM_VC4_CREATE_BO, struct drm_vc4_create_bo)
+ #define DRM_IOCTL_VC4_MMAP_BO             DRM_IOWR( DRM_COMMAND_BASE + DRM_VC4_MMAP_BO, struct drm_vc4_mmap_bo)
+ #define DRM_IOCTL_VC4_CREATE_SHADER_BO    DRM_IOWR( DRM_COMMAND_BASE + DRM_VC4_CREATE_SHADER_BO, struct drm_vc4_create_shader_bo)
++#define DRM_IOCTL_VC4_GET_HANG_STATE      DRM_IOWR( DRM_COMMAND_BASE + DRM_VC4_GET_HANG_STATE, struct drm_vc4_get_hang_state)
+ 
+ struct drm_vc4_submit_rcl_surface {
+ 	uint32_t hindex; /* Handle index, or ~0 if not present. */
+@@ -226,4 +228,47 @@ struct drm_vc4_mmap_bo {
+ 	uint64_t offset;
+ };
+ 
++struct drm_vc4_get_hang_state_bo {
++	uint32_t handle;
++	uint32_t paddr;
++	uint32_t size;
++	uint32_t pad;
++};
++
++/**
++ * struct drm_vc4_hang_state - ioctl argument for collecting state
++ * from a GPU hang for analysis.
++*/
++struct drm_vc4_get_hang_state {
++	/** Pointer to array of struct drm_vc4_get_hang_state_bo. */
++	uint64_t bo;
++	/**
++	 * On input, the size of the bo array.  Output is the number
++	 * of bos to be returned.
++	 */
++	uint32_t bo_count;
++
++	uint32_t start_bin, start_render;
++
++	uint32_t ct0ca, ct0ea;
++	uint32_t ct1ca, ct1ea;
++	uint32_t ct0cs, ct1cs;
++	uint32_t ct0ra0, ct1ra0;
++
++	uint32_t bpca, bpcs;
++	uint32_t bpoa, bpos;
++
++	uint32_t vpmbase;
++
++	uint32_t dbge;
++	uint32_t fdbgo;
++	uint32_t fdbgb;
++	uint32_t fdbgr;
++	uint32_t fdbgs;
++	uint32_t errstat;
++
++	/* Pad that we may save more registers into in the future. */
++	uint32_t pad[16];
++};
++
+ #endif /* _UAPI_VC4_DRM_H_ */
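
For context, the GET_HANG_STATE ioctl added above is meant to be driven from userspace with a two-call pattern: a first call with bo_count set to 0 asks the kernel how many BOs were captured (or fails with -ENOENT if no hang has been recorded), and a second call with a suitably sized array retrieves the register snapshot and the per-BO information. The sketch below is illustrative only and is not part of the patch; it assumes a root-owned DRM file descriptor for the vc4 device and that the UAPI header above is installed as <drm/vc4_drm.h>.

/*
 * Minimal userspace sketch (illustrative only, not part of this patch):
 * read back the hang state saved by vc4_save_hang_state() via the new
 * VC4_GET_HANG_STATE ioctl.  `fd` is assumed to be a DRM fd for the vc4
 * device, opened by root since the ioctl is DRM_ROOT_ONLY.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <drm/vc4_drm.h>

static int dump_hang_state(int fd)
{
	struct drm_vc4_get_hang_state get = { 0 };
	struct drm_vc4_get_hang_state_bo *bos;
	uint32_t i;

	/* First call with bo_count == 0: the kernel only reports how many
	 * BOs were captured and leaves the saved state in place.
	 */
	if (ioctl(fd, DRM_IOCTL_VC4_GET_HANG_STATE, &get) != 0)
		return -1;

	bos = calloc(get.bo_count, sizeof(*bos));
	if (!bos)
		return -1;
	get.bo = (uintptr_t)bos;

	/* Second call: the kernel fills in the register snapshot, creates a
	 * GEM handle for each captured buffer and copies the BO array out.
	 */
	if (ioctl(fd, DRM_IOCTL_VC4_GET_HANG_STATE, &get) != 0) {
		free(bos);
		return -1;
	}

	printf("hang: ct0ca 0x%08x ct1ca 0x%08x errstat 0x%08x\n",
	       get.ct0ca, get.ct1ca, get.errstat);
	for (i = 0; i < get.bo_count; i++)
		printf("  bo %u: handle %u paddr 0x%08x size %u\n",
		       i, bos[i].handle, bos[i].paddr, bos[i].size);

	free(bos);
	return 0;
}

Each returned handle is a freshly created GEM handle in the calling process, so a debugging tool can go on to map the captured buffers (for example via the existing VC4_MMAP_BO path) for post-mortem analysis, and should close the handles when done.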
diff --git a/target/linux/brcm2708/patches-4.4/0115-drm-vc4-Update-a-bunch-of-code-to-match-upstream-sub.patch b/target/linux/brcm2708/patches-4.4/0115-drm-vc4-Update-a-bunch-of-code-to-match-upstream-sub.patch
new file mode 100644
index 0000000..54e6698
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0115-drm-vc4-Update-a-bunch-of-code-to-match-upstream-sub.patch
@@ -0,0 +1,1894 @@
+From 06dbf5f7d41615b40de35ddab611d92c2a9dd1c1 Mon Sep 17 00:00:00 2001
+From: Eric Anholt <eric at anholt.net>
+Date: Fri, 4 Dec 2015 11:35:34 -0800
+Subject: [PATCH 115/127] drm/vc4: Update a bunch of code to match upstream
+ submission.
+
+This gets almost everything matching, except for the MSAA support and
+using generic PM domains.
+
+Signed-off-by: Eric Anholt <eric at anholt.net>
+---
+ drivers/gpu/drm/drm_gem_cma_helper.c       |  13 +-
+ drivers/gpu/drm/vc4/vc4_bo.c               | 322 +++++++++++++++++------------
+ drivers/gpu/drm/vc4/vc4_crtc.c             |   7 +-
+ drivers/gpu/drm/vc4/vc4_drv.c              |   6 +-
+ drivers/gpu/drm/vc4/vc4_drv.h              |  20 +-
+ drivers/gpu/drm/vc4/vc4_gem.c              |  24 ++-
+ drivers/gpu/drm/vc4/vc4_irq.c              |   5 +-
+ drivers/gpu/drm/vc4/vc4_kms.c              |   1 +
+ drivers/gpu/drm/vc4/vc4_packet.h           | 210 +++++++++----------
+ drivers/gpu/drm/vc4/vc4_qpu_defines.h      | 308 ++++++++++++++-------------
+ drivers/gpu/drm/vc4/vc4_render_cl.c        |   4 +-
+ drivers/gpu/drm/vc4/vc4_v3d.c              |  10 +-
+ drivers/gpu/drm/vc4/vc4_validate.c         | 130 ++++++------
+ drivers/gpu/drm/vc4/vc4_validate_shaders.c |  66 +++---
+ include/drm/drmP.h                         |   8 +-
+ 15 files changed, 598 insertions(+), 536 deletions(-)
+
+--- a/drivers/gpu/drm/drm_gem_cma_helper.c
++++ b/drivers/gpu/drm/drm_gem_cma_helper.c
+@@ -58,15 +58,14 @@ __drm_gem_cma_create(struct drm_device *
+ 	struct drm_gem_cma_object *cma_obj;
+ 	struct drm_gem_object *gem_obj;
+ 	int ret;
+-	size_t obj_size = (drm->driver->gem_obj_size ?
+-			   drm->driver->gem_obj_size :
+-			   sizeof(*cma_obj));
+ 
+-	cma_obj = kzalloc(obj_size, GFP_KERNEL);
+-	if (!cma_obj)
++	if (drm->driver->gem_create_object)
++		gem_obj = drm->driver->gem_create_object(drm, size);
++	else
++		gem_obj = kzalloc(sizeof(*cma_obj), GFP_KERNEL);
++	if (!gem_obj)
+ 		return ERR_PTR(-ENOMEM);
+-
+-	gem_obj = &cma_obj->base;
++	cma_obj = container_of(gem_obj, struct drm_gem_cma_object, base);
+ 
+ 	ret = drm_gem_object_init(drm, gem_obj, size);
+ 	if (ret)
+--- a/drivers/gpu/drm/vc4/vc4_bo.c
++++ b/drivers/gpu/drm/vc4/vc4_bo.c
+@@ -12,6 +12,10 @@
+  * access to system memory with no MMU in between.  To support it, we
+  * use the GEM CMA helper functions to allocate contiguous ranges of
+  * physical memory for our BOs.
++ *
++ * Since the CMA allocator is very slow, we keep a cache of recently
++ * freed BOs around so that the kernel's allocation of objects for 3D
++ * rendering can return quickly.
+  */
+ 
+ #include "vc4_drv.h"
+@@ -34,6 +38,36 @@ static void vc4_bo_stats_dump(struct vc4
+ 		 vc4->bo_stats.size_cached / 1024);
+ }
+ 
++#ifdef CONFIG_DEBUG_FS
++int vc4_bo_stats_debugfs(struct seq_file *m, void *unused)
++{
++	struct drm_info_node *node = (struct drm_info_node *)m->private;
++	struct drm_device *dev = node->minor->dev;
++	struct vc4_dev *vc4 = to_vc4_dev(dev);
++	struct vc4_bo_stats stats;
++
++	/* Take a snapshot of the current stats with the lock held. */
++	mutex_lock(&vc4->bo_lock);
++	stats = vc4->bo_stats;
++	mutex_unlock(&vc4->bo_lock);
++
++	seq_printf(m, "num bos allocated: %d\n",
++		   stats.num_allocated);
++	seq_printf(m, "size bos allocated: %dkb\n",
++		   stats.size_allocated / 1024);
++	seq_printf(m, "num bos used: %d\n",
++		   stats.num_allocated - stats.num_cached);
++	seq_printf(m, "size bos used: %dkb\n",
++		   (stats.size_allocated - stats.size_cached) / 1024);
++	seq_printf(m, "num bos cached: %d\n",
++		   stats.num_cached);
++	seq_printf(m, "size bos cached: %dkb\n",
++		   stats.size_cached / 1024);
++
++	return 0;
++}
++#endif
++
+ static uint32_t bo_page_index(size_t size)
+ {
+ 	return (size / PAGE_SIZE) - 1;
+@@ -81,8 +115,8 @@ static struct list_head *vc4_get_cache_l
+ 		struct list_head *new_list;
+ 		uint32_t i;
+ 
+-		new_list = kmalloc(new_size * sizeof(struct list_head),
+-				   GFP_KERNEL);
++		new_list = kmalloc_array(new_size, sizeof(struct list_head),
++					 GFP_KERNEL);
+ 		if (!new_list)
+ 			return NULL;
+ 
+@@ -90,7 +124,9 @@ static struct list_head *vc4_get_cache_l
+ 		 * head locations.
+ 		 */
+ 		for (i = 0; i < vc4->bo_cache.size_list_size; i++) {
+-			struct list_head *old_list = &vc4->bo_cache.size_list[i];
++			struct list_head *old_list =
++				&vc4->bo_cache.size_list[i];
++
+ 			if (list_empty(old_list))
+ 				INIT_LIST_HEAD(&new_list[i]);
+ 			else
+@@ -122,11 +158,60 @@ void vc4_bo_cache_purge(struct drm_devic
+ 	mutex_unlock(&vc4->bo_lock);
+ }
+ 
+-struct vc4_bo *vc4_bo_create(struct drm_device *dev, size_t unaligned_size)
++static struct vc4_bo *vc4_bo_get_from_cache(struct drm_device *dev,
++					    uint32_t size)
+ {
+ 	struct vc4_dev *vc4 = to_vc4_dev(dev);
+-	uint32_t size = roundup(unaligned_size, PAGE_SIZE);
+ 	uint32_t page_index = bo_page_index(size);
++	struct vc4_bo *bo = NULL;
++
++	size = roundup(size, PAGE_SIZE);
++
++	mutex_lock(&vc4->bo_lock);
++	if (page_index >= vc4->bo_cache.size_list_size)
++		goto out;
++
++	if (list_empty(&vc4->bo_cache.size_list[page_index]))
++		goto out;
++
++	bo = list_first_entry(&vc4->bo_cache.size_list[page_index],
++			      struct vc4_bo, size_head);
++	vc4_bo_remove_from_cache(bo);
++	kref_init(&bo->base.base.refcount);
++
++out:
++	mutex_unlock(&vc4->bo_lock);
++	return bo;
++}
++
++/**
++ * vc4_gem_create_object - Implementation of driver->gem_create_object.
++ *
++ * This lets the CMA helpers allocate object structs for us, and keep
++ * our BO stats correct.
++ */
++struct drm_gem_object *vc4_create_object(struct drm_device *dev, size_t size)
++{
++	struct vc4_dev *vc4 = to_vc4_dev(dev);
++	struct vc4_bo *bo;
++
++	bo = kzalloc(sizeof(*bo), GFP_KERNEL);
++	if (!bo)
++		return ERR_PTR(-ENOMEM);
++
++	mutex_lock(&vc4->bo_lock);
++	vc4->bo_stats.num_allocated++;
++	vc4->bo_stats.size_allocated += size;
++	mutex_unlock(&vc4->bo_lock);
++
++	return &bo->base.base;
++}
++
++struct vc4_bo *vc4_bo_create(struct drm_device *dev, size_t unaligned_size,
++			     bool from_cache)
++{
++	size_t size = roundup(unaligned_size, PAGE_SIZE);
++	struct vc4_dev *vc4 = to_vc4_dev(dev);
+ 	struct drm_gem_cma_object *cma_obj;
+ 	int pass;
+ 
+@@ -134,18 +219,12 @@ struct vc4_bo *vc4_bo_create(struct drm_
+ 		return NULL;
+ 
+ 	/* First, try to get a vc4_bo from the kernel BO cache. */
+-	mutex_lock(&vc4->bo_lock);
+-	if (page_index < vc4->bo_cache.size_list_size &&
+-	    !list_empty(&vc4->bo_cache.size_list[page_index])) {
+-		struct vc4_bo *bo =
+-			list_first_entry(&vc4->bo_cache.size_list[page_index],
+-					 struct vc4_bo, size_head);
+-		vc4_bo_remove_from_cache(bo);
+-		mutex_unlock(&vc4->bo_lock);
+-		kref_init(&bo->base.base.refcount);
+-		return bo;
++	if (from_cache) {
++		struct vc4_bo *bo = vc4_bo_get_from_cache(dev, size);
++
++		if (bo)
++			return bo;
+ 	}
+-	mutex_unlock(&vc4->bo_lock);
+ 
+ 	/* Otherwise, make a new BO. */
+ 	for (pass = 0; ; pass++) {
+@@ -179,9 +258,6 @@ struct vc4_bo *vc4_bo_create(struct drm_
+ 		}
+ 	}
+ 
+-	vc4->bo_stats.num_allocated++;
+-	vc4->bo_stats.size_allocated += size;
+-
+ 	return to_vc4_bo(&cma_obj->base);
+ }
+ 
+@@ -199,7 +275,7 @@ int vc4_dumb_create(struct drm_file *fil
+ 	if (args->size < args->pitch * args->height)
+ 		args->size = args->pitch * args->height;
+ 
+-	bo = vc4_bo_create(dev, args->size);
++	bo = vc4_bo_create(dev, args->size, false);
+ 	if (!bo)
+ 		return -ENOMEM;
+ 
+@@ -209,8 +285,8 @@ int vc4_dumb_create(struct drm_file *fil
+ 	return ret;
+ }
+ 
+-static void
+-vc4_bo_cache_free_old(struct drm_device *dev)
++/* Must be called with bo_lock held. */
++static void vc4_bo_cache_free_old(struct drm_device *dev)
+ {
+ 	struct vc4_dev *vc4 = to_vc4_dev(dev);
+ 	unsigned long expire_time = jiffies - msecs_to_jiffies(1000);
+@@ -313,15 +389,77 @@ vc4_prime_export(struct drm_device *dev,
+ 	return drm_gem_prime_export(dev, obj, flags);
+ }
+ 
+-int
+-vc4_create_bo_ioctl(struct drm_device *dev, void *data,
+-		    struct drm_file *file_priv)
++int vc4_mmap(struct file *filp, struct vm_area_struct *vma)
++{
++	struct drm_gem_object *gem_obj;
++	struct vc4_bo *bo;
++	int ret;
++
++	ret = drm_gem_mmap(filp, vma);
++	if (ret)
++		return ret;
++
++	gem_obj = vma->vm_private_data;
++	bo = to_vc4_bo(gem_obj);
++
++	if (bo->validated_shader && (vma->vm_flags & VM_WRITE)) {
++		DRM_ERROR("mmaping of shader BOs for writing not allowed.\n");
++		return -EINVAL;
++	}
++
++	/*
++	 * Clear the VM_PFNMAP flag that was set by drm_gem_mmap(), and set the
++	 * vm_pgoff (used as a fake buffer offset by DRM) to 0 as we want to map
++	 * the whole buffer.
++	 */
++	vma->vm_flags &= ~VM_PFNMAP;
++	vma->vm_pgoff = 0;
++
++	ret = dma_mmap_writecombine(bo->base.base.dev->dev, vma,
++				    bo->base.vaddr, bo->base.paddr,
++				    vma->vm_end - vma->vm_start);
++	if (ret)
++		drm_gem_vm_close(vma);
++
++	return ret;
++}
++
++int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
++{
++	struct vc4_bo *bo = to_vc4_bo(obj);
++
++	if (bo->validated_shader && (vma->vm_flags & VM_WRITE)) {
++		DRM_ERROR("mmaping of shader BOs for writing not allowed.\n");
++		return -EINVAL;
++	}
++
++	return drm_gem_cma_prime_mmap(obj, vma);
++}
++
++void *vc4_prime_vmap(struct drm_gem_object *obj)
++{
++	struct vc4_bo *bo = to_vc4_bo(obj);
++
++	if (bo->validated_shader) {
++		DRM_ERROR("mmaping of shader BOs not allowed.\n");
++		return ERR_PTR(-EINVAL);
++	}
++
++	return drm_gem_cma_prime_vmap(obj);
++}
++
++int vc4_create_bo_ioctl(struct drm_device *dev, void *data,
++			struct drm_file *file_priv)
+ {
+ 	struct drm_vc4_create_bo *args = data;
+ 	struct vc4_bo *bo = NULL;
+ 	int ret;
+ 
+-	bo = vc4_bo_create(dev, args->size);
++	/*
++	 * We can't allocate from the BO cache, because the BOs don't
++	 * get zeroed, and that might leak data between users.
++	 */
++	bo = vc4_bo_create(dev, args->size, false);
+ 	if (!bo)
+ 		return -ENOMEM;
+ 
+@@ -331,6 +469,25 @@ vc4_create_bo_ioctl(struct drm_device *d
+ 	return ret;
+ }
+ 
++int vc4_mmap_bo_ioctl(struct drm_device *dev, void *data,
++		      struct drm_file *file_priv)
++{
++	struct drm_vc4_mmap_bo *args = data;
++	struct drm_gem_object *gem_obj;
++
++	gem_obj = drm_gem_object_lookup(dev, file_priv, args->handle);
++	if (!gem_obj) {
++		DRM_ERROR("Failed to look up GEM BO %d\n", args->handle);
++		return -EINVAL;
++	}
++
++	/* The mmap offset was set up at BO allocation time. */
++	args->offset = drm_vma_node_offset_addr(&gem_obj->vma_node);
++
++	drm_gem_object_unreference_unlocked(gem_obj);
++	return 0;
++}
++
+ int
+ vc4_create_shader_bo_ioctl(struct drm_device *dev, void *data,
+ 			   struct drm_file *file_priv)
+@@ -355,7 +512,7 @@ vc4_create_shader_bo_ioctl(struct drm_de
+ 		return -EINVAL;
+ 	}
+ 
+-	bo = vc4_bo_create(dev, args->size);
++	bo = vc4_bo_create(dev, args->size, true);
+ 	if (!bo)
+ 		return -ENOMEM;
+ 
+@@ -364,6 +521,11 @@ vc4_create_shader_bo_ioctl(struct drm_de
+ 			     args->size);
+ 	if (ret != 0)
+ 		goto fail;
++	/* Clear the rest of the memory from allocating from the BO
++	 * cache.
++	 */
++	memset(bo->base.vaddr + args->size, 0,
++	       bo->base.base.size - args->size);
+ 
+ 	bo->validated_shader = vc4_validate_shader(&bo->base);
+ 	if (!bo->validated_shader) {
+@@ -382,85 +544,6 @@ vc4_create_shader_bo_ioctl(struct drm_de
+ 	return ret;
+ }
+ 
+-int
+-vc4_mmap_bo_ioctl(struct drm_device *dev, void *data,
+-		  struct drm_file *file_priv)
+-{
+-	struct drm_vc4_mmap_bo *args = data;
+-	struct drm_gem_object *gem_obj;
+-
+-	gem_obj = drm_gem_object_lookup(dev, file_priv, args->handle);
+-	if (!gem_obj) {
+-		DRM_ERROR("Failed to look up GEM BO %d\n", args->handle);
+-		return -EINVAL;
+-	}
+-
+-	/* The mmap offset was set up at BO allocation time. */
+-	args->offset = drm_vma_node_offset_addr(&gem_obj->vma_node);
+-
+-	drm_gem_object_unreference(gem_obj);
+-	return 0;
+-}
+-
+-int vc4_mmap(struct file *filp, struct vm_area_struct *vma)
+-{
+-	struct drm_gem_object *gem_obj;
+-	struct vc4_bo *bo;
+-	int ret;
+-
+-	ret = drm_gem_mmap(filp, vma);
+-	if (ret)
+-		return ret;
+-
+-	gem_obj = vma->vm_private_data;
+-	bo = to_vc4_bo(gem_obj);
+-
+-	if (bo->validated_shader && (vma->vm_flags & VM_WRITE)) {
+-		DRM_ERROR("mmaping of shader BOs for writing not allowed.\n");
+-		return -EINVAL;
+-	}
+-
+-	/*
+-	 * Clear the VM_PFNMAP flag that was set by drm_gem_mmap(), and set the
+-	 * vm_pgoff (used as a fake buffer offset by DRM) to 0 as we want to map
+-	 * the whole buffer.
+-	 */
+-	vma->vm_flags &= ~VM_PFNMAP;
+-	vma->vm_pgoff = 0;
+-
+-	ret = dma_mmap_writecombine(bo->base.base.dev->dev, vma,
+-				    bo->base.vaddr, bo->base.paddr,
+-				    vma->vm_end - vma->vm_start);
+-	if (ret)
+-		drm_gem_vm_close(vma);
+-
+-	return ret;
+-}
+-
+-int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
+-{
+-	struct vc4_bo *bo = to_vc4_bo(obj);
+-
+-	if (bo->validated_shader) {
+-		DRM_ERROR("mmaping of shader BOs not allowed.\n");
+-		return -EINVAL;
+-	}
+-
+-	return drm_gem_cma_prime_mmap(obj, vma);
+-}
+-
+-void *vc4_prime_vmap(struct drm_gem_object *obj)
+-{
+-	struct vc4_bo *bo = to_vc4_bo(obj);
+-
+-	if (bo->validated_shader) {
+-		DRM_ERROR("mmaping of shader BOs not allowed.\n");
+-		return ERR_PTR(-EINVAL);
+-	}
+-
+-	return drm_gem_cma_prime_vmap(obj);
+-}
+-
+ void vc4_bo_cache_init(struct drm_device *dev)
+ {
+ 	struct vc4_dev *vc4 = to_vc4_dev(dev);
+@@ -472,7 +555,7 @@ void vc4_bo_cache_init(struct drm_device
+ 	INIT_WORK(&vc4->bo_cache.time_work, vc4_bo_cache_time_work);
+ 	setup_timer(&vc4->bo_cache.time_timer,
+ 		    vc4_bo_cache_time_timer,
+-		    (unsigned long) dev);
++		    (unsigned long)dev);
+ }
+ 
+ void vc4_bo_cache_destroy(struct drm_device *dev)
+@@ -489,28 +572,3 @@ void vc4_bo_cache_destroy(struct drm_dev
+ 		vc4_bo_stats_dump(vc4);
+ 	}
+ }
+-
+-#ifdef CONFIG_DEBUG_FS
+-int vc4_bo_stats_debugfs(struct seq_file *m, void *unused)
+-{
+-	struct drm_info_node *node = (struct drm_info_node *) m->private;
+-	struct drm_device *dev = node->minor->dev;
+-	struct vc4_dev *vc4 = to_vc4_dev(dev);
+-	struct vc4_bo_stats stats;
+-
+-	mutex_lock(&vc4->bo_lock);
+-	stats = vc4->bo_stats;
+-	mutex_unlock(&vc4->bo_lock);
+-
+-	seq_printf(m, "num bos allocated: %d\n", stats.num_allocated);
+-	seq_printf(m, "size bos allocated: %dkb\n", stats.size_allocated / 1024);
+-	seq_printf(m, "num bos used: %d\n", (stats.num_allocated -
+-					     stats.num_cached));
+-	seq_printf(m, "size bos used: %dkb\n", (stats.size_allocated -
+-						stats.size_cached) / 1024);
+-	seq_printf(m, "num bos cached: %d\n", stats.num_cached);
+-	seq_printf(m, "size bos cached: %dkb\n", stats.size_cached / 1024);
+-
+-	return 0;
+-}
+-#endif
+--- a/drivers/gpu/drm/vc4/vc4_crtc.c
++++ b/drivers/gpu/drm/vc4/vc4_crtc.c
+@@ -501,6 +501,7 @@ vc4_async_page_flip_complete(struct vc4_
+ 	vc4_plane_async_set_fb(plane, flip_state->fb);
+ 	if (flip_state->event) {
+ 		unsigned long flags;
++
+ 		spin_lock_irqsave(&dev->event_lock, flags);
+ 		drm_crtc_send_vblank_event(crtc, flip_state->event);
+ 		spin_unlock_irqrestore(&dev->event_lock, flags);
+@@ -562,9 +563,9 @@ static int vc4_async_page_flip(struct dr
+ }
+ 
+ static int vc4_page_flip(struct drm_crtc *crtc,
+-		  struct drm_framebuffer *fb,
+-		  struct drm_pending_vblank_event *event,
+-		  uint32_t flags)
++			 struct drm_framebuffer *fb,
++			 struct drm_pending_vblank_event *event,
++			 uint32_t flags)
+ {
+ 	if (flags & DRM_MODE_PAGE_FLIP_ASYNC)
+ 		return vc4_async_page_flip(crtc, fb, event, flags);
+--- a/drivers/gpu/drm/vc4/vc4_drv.c
++++ b/drivers/gpu/drm/vc4/vc4_drv.c
+@@ -81,7 +81,8 @@ static const struct drm_ioctl_desc vc4_d
+ 	DRM_IOCTL_DEF_DRV(VC4_CREATE_BO, vc4_create_bo_ioctl, 0),
+ 	DRM_IOCTL_DEF_DRV(VC4_MMAP_BO, vc4_mmap_bo_ioctl, 0),
+ 	DRM_IOCTL_DEF_DRV(VC4_CREATE_SHADER_BO, vc4_create_shader_bo_ioctl, 0),
+-	DRM_IOCTL_DEF_DRV(VC4_GET_HANG_STATE, vc4_get_hang_state_ioctl, DRM_ROOT_ONLY),
++	DRM_IOCTL_DEF_DRV(VC4_GET_HANG_STATE, vc4_get_hang_state_ioctl,
++			  DRM_ROOT_ONLY),
+ };
+ 
+ static struct drm_driver vc4_drm_driver = {
+@@ -107,6 +108,7 @@ static struct drm_driver vc4_drm_driver
+ 	.debugfs_cleanup = vc4_debugfs_cleanup,
+ #endif
+ 
++	.gem_create_object = vc4_create_object,
+ 	.gem_free_object = vc4_free_object,
+ 	.gem_vm_ops = &drm_gem_cma_vm_ops,
+ 
+@@ -128,8 +130,6 @@ static struct drm_driver vc4_drm_driver
+ 	.num_ioctls = ARRAY_SIZE(vc4_drm_ioctls),
+ 	.fops = &vc4_drm_fops,
+ 
+-	//.gem_obj_size = sizeof(struct vc4_bo),
+-
+ 	.name = DRIVER_NAME,
+ 	.desc = DRIVER_DESC,
+ 	.date = DRIVER_DATE,
+--- a/drivers/gpu/drm/vc4/vc4_drv.h
++++ b/drivers/gpu/drm/vc4/vc4_drv.h
+@@ -72,6 +72,9 @@ struct vc4_dev {
+ 	 * job_done_work.
+ 	 */
+ 	struct list_head job_done_list;
++	/* Spinlock used to synchronize the job_list and seqno
++	 * accesses between the IRQ handler and GEM ioctls.
++	 */
+ 	spinlock_t job_lock;
+ 	wait_queue_head_t job_wait_queue;
+ 	struct work_struct job_done_work;
+@@ -318,8 +321,7 @@ struct vc4_texture_sample_info {
+  * and validate the shader state record's uniforms that define the texture
+  * samples.
+  */
+-struct vc4_validated_shader_info
+-{
++struct vc4_validated_shader_info {
+ 	uint32_t uniforms_size;
+ 	uint32_t uniforms_src_size;
+ 	uint32_t num_texture_samples;
+@@ -355,8 +357,10 @@ struct vc4_validated_shader_info
+ #define wait_for(COND, MS) _wait_for(COND, MS, 1)
+ 
+ /* vc4_bo.c */
++struct drm_gem_object *vc4_create_object(struct drm_device *dev, size_t size);
+ void vc4_free_object(struct drm_gem_object *gem_obj);
+-struct vc4_bo *vc4_bo_create(struct drm_device *dev, size_t size);
++struct vc4_bo *vc4_bo_create(struct drm_device *dev, size_t size,
++			     bool from_cache);
+ int vc4_dumb_create(struct drm_file *file_priv,
+ 		    struct drm_device *dev,
+ 		    struct drm_mode_create_dumb *args);
+@@ -432,7 +436,8 @@ struct drm_plane *vc4_plane_init(struct
+ 				 enum drm_plane_type type);
+ u32 vc4_plane_write_dlist(struct drm_plane *plane, u32 __iomem *dlist);
+ u32 vc4_plane_dlist_size(struct drm_plane_state *state);
+-void vc4_plane_async_set_fb(struct drm_plane *plane, struct drm_framebuffer *fb);
++void vc4_plane_async_set_fb(struct drm_plane *plane,
++			    struct drm_framebuffer *fb);
+ 
+ /* vc4_v3d.c */
+ extern struct platform_driver vc4_v3d_driver;
+@@ -450,9 +455,6 @@ vc4_validate_bin_cl(struct drm_device *d
+ int
+ vc4_validate_shader_recs(struct drm_device *dev, struct vc4_exec_info *exec);
+ 
+-struct vc4_validated_shader_info *
+-vc4_validate_shader(struct drm_gem_cma_object *shader_obj);
+-
+ bool vc4_use_bo(struct vc4_exec_info *exec,
+ 		uint32_t hindex,
+ 		enum vc4_bo_mode mode,
+@@ -464,3 +466,7 @@ bool vc4_check_tex_size(struct vc4_exec_
+ 			struct drm_gem_cma_object *fbo,
+ 			uint32_t offset, uint8_t tiling_format,
+ 			uint32_t width, uint32_t height, uint8_t cpp);
++
++/* vc4_validate_shader.c */
++struct vc4_validated_shader_info *
++vc4_validate_shader(struct drm_gem_cma_object *shader_obj);
+--- a/drivers/gpu/drm/vc4/vc4_gem.c
++++ b/drivers/gpu/drm/vc4/vc4_gem.c
+@@ -53,9 +53,8 @@ vc4_free_hang_state(struct drm_device *d
+ 	unsigned int i;
+ 
+ 	mutex_lock(&dev->struct_mutex);
+-	for (i = 0; i < state->user_state.bo_count; i++) {
++	for (i = 0; i < state->user_state.bo_count; i++)
+ 		drm_gem_object_unreference(state->bo[i]);
+-	}
+ 	mutex_unlock(&dev->struct_mutex);
+ 
+ 	kfree(state);
+@@ -65,10 +64,10 @@ int
+ vc4_get_hang_state_ioctl(struct drm_device *dev, void *data,
+ 			 struct drm_file *file_priv)
+ {
+- 	struct drm_vc4_get_hang_state *get_state = data;
++	struct drm_vc4_get_hang_state *get_state = data;
+ 	struct drm_vc4_get_hang_state_bo *bo_state;
+ 	struct vc4_hang_state *kernel_state;
+- 	struct drm_vc4_get_hang_state *state;
++	struct drm_vc4_get_hang_state *state;
+ 	struct vc4_dev *vc4 = to_vc4_dev(dev);
+ 	unsigned long irqflags;
+ 	u32 i;
+@@ -107,6 +106,7 @@ vc4_get_hang_state_ioctl(struct drm_devi
+ 	for (i = 0; i < state->bo_count; i++) {
+ 		struct vc4_bo *vc4_bo = to_vc4_bo(kernel_state->bo[i]);
+ 		u32 handle;
++
+ 		ret = drm_gem_handle_create(file_priv, kernel_state->bo[i],
+ 					    &handle);
+ 
+@@ -124,7 +124,7 @@ vc4_get_hang_state_ioctl(struct drm_devi
+ 			   state->bo_count * sizeof(*bo_state));
+ 	kfree(bo_state);
+ 
+- err_free:
++err_free:
+ 
+ 	vc4_free_hang_state(dev, kernel_state);
+ 
+@@ -578,7 +578,7 @@ vc4_get_bcl(struct drm_device *dev, stru
+ 		goto fail;
+ 	}
+ 
+-	bo = vc4_bo_create(dev, exec_size);
++	bo = vc4_bo_create(dev, exec_size, true);
+ 	if (!bo) {
+ 		DRM_ERROR("Couldn't allocate BO for binning\n");
+ 		ret = PTR_ERR(exec->exec_bo);
+@@ -668,6 +668,7 @@ vc4_job_handle_completed(struct vc4_dev
+ static void vc4_seqno_cb_work(struct work_struct *work)
+ {
+ 	struct vc4_seqno_cb *cb = container_of(work, struct vc4_seqno_cb, work);
++
+ 	cb->func(cb);
+ }
+ 
+@@ -717,6 +718,7 @@ vc4_wait_for_seqno_ioctl_helper(struct d
+ 
+ 	if ((ret == -EINTR || ret == -ERESTARTSYS) && *timeout_ns != ~0ull) {
+ 		uint64_t delta = jiffies_to_nsecs(jiffies - start);
++
+ 		if (*timeout_ns >= delta)
+ 			*timeout_ns -= delta;
+ 	}
+@@ -750,9 +752,10 @@ vc4_wait_bo_ioctl(struct drm_device *dev
+ 	}
+ 	bo = to_vc4_bo(gem_obj);
+ 
+-	ret = vc4_wait_for_seqno_ioctl_helper(dev, bo->seqno, &args->timeout_ns);
++	ret = vc4_wait_for_seqno_ioctl_helper(dev, bo->seqno,
++					      &args->timeout_ns);
+ 
+-	drm_gem_object_unreference(gem_obj);
++	drm_gem_object_unreference_unlocked(gem_obj);
+ 	return ret;
+ }
+ 
+@@ -793,7 +796,8 @@ vc4_submit_cl_ioctl(struct drm_device *d
+ 		if (ret)
+ 			goto fail;
+ 	} else {
+-		exec->ct0ca = exec->ct0ea = 0;
++		exec->ct0ca = 0;
++		exec->ct0ea = 0;
+ 	}
+ 
+ 	ret = vc4_get_rcl(dev, exec);
+@@ -831,7 +835,7 @@ vc4_gem_init(struct drm_device *dev)
+ 	INIT_WORK(&vc4->hangcheck.reset_work, vc4_reset_work);
+ 	setup_timer(&vc4->hangcheck.timer,
+ 		    vc4_hangcheck_elapsed,
+-		    (unsigned long) dev);
++		    (unsigned long)dev);
+ 
+ 	INIT_WORK(&vc4->job_done_work, vc4_job_done_work);
+ }
+--- a/drivers/gpu/drm/vc4/vc4_irq.c
++++ b/drivers/gpu/drm/vc4/vc4_irq.c
+@@ -56,7 +56,7 @@ vc4_overflow_mem_work(struct work_struct
+ 	struct drm_device *dev = vc4->dev;
+ 	struct vc4_bo *bo;
+ 
+-	bo = vc4_bo_create(dev, 256 * 1024);
++	bo = vc4_bo_create(dev, 256 * 1024, true);
+ 	if (!bo) {
+ 		DRM_ERROR("Couldn't allocate binner overflow mem\n");
+ 		return;
+@@ -87,9 +87,8 @@ vc4_overflow_mem_work(struct work_struct
+ 		spin_unlock_irqrestore(&vc4->job_lock, irqflags);
+ 	}
+ 
+-	if (vc4->overflow_mem) {
++	if (vc4->overflow_mem)
+ 		drm_gem_object_unreference_unlocked(&vc4->overflow_mem->base.base);
+-	}
+ 	vc4->overflow_mem = bo;
+ 
+ 	V3D_WRITE(V3D_BPOA, bo->base.paddr);
+--- a/drivers/gpu/drm/vc4/vc4_kms.c
++++ b/drivers/gpu/drm/vc4/vc4_kms.c
+@@ -132,6 +132,7 @@ static int vc4_atomic_commit(struct drm_
+ 			struct drm_gem_cma_object *cma_bo =
+ 				drm_fb_cma_get_gem_obj(new_state->fb, 0);
+ 			struct vc4_bo *bo = to_vc4_bo(&cma_bo->base);
++
+ 			wait_seqno = max(bo->seqno, wait_seqno);
+ 		}
+ 	}
+--- a/drivers/gpu/drm/vc4/vc4_packet.h
++++ b/drivers/gpu/drm/vc4/vc4_packet.h
+@@ -27,60 +27,60 @@
+ #include "vc4_regs.h" /* for VC4_MASK, VC4_GET_FIELD, VC4_SET_FIELD */
+ 
+ enum vc4_packet {
+-        VC4_PACKET_HALT = 0,
+-        VC4_PACKET_NOP = 1,
++	VC4_PACKET_HALT = 0,
++	VC4_PACKET_NOP = 1,
+ 
+-        VC4_PACKET_FLUSH = 4,
+-        VC4_PACKET_FLUSH_ALL = 5,
+-        VC4_PACKET_START_TILE_BINNING = 6,
+-        VC4_PACKET_INCREMENT_SEMAPHORE = 7,
+-        VC4_PACKET_WAIT_ON_SEMAPHORE = 8,
+-
+-        VC4_PACKET_BRANCH = 16,
+-        VC4_PACKET_BRANCH_TO_SUB_LIST = 17,
+-
+-        VC4_PACKET_STORE_MS_TILE_BUFFER = 24,
+-        VC4_PACKET_STORE_MS_TILE_BUFFER_AND_EOF = 25,
+-        VC4_PACKET_STORE_FULL_RES_TILE_BUFFER = 26,
+-        VC4_PACKET_LOAD_FULL_RES_TILE_BUFFER = 27,
+-        VC4_PACKET_STORE_TILE_BUFFER_GENERAL = 28,
+-        VC4_PACKET_LOAD_TILE_BUFFER_GENERAL = 29,
+-
+-        VC4_PACKET_GL_INDEXED_PRIMITIVE = 32,
+-        VC4_PACKET_GL_ARRAY_PRIMITIVE = 33,
+-
+-        VC4_PACKET_COMPRESSED_PRIMITIVE = 48,
+-        VC4_PACKET_CLIPPED_COMPRESSED_PRIMITIVE = 49,
+-
+-        VC4_PACKET_PRIMITIVE_LIST_FORMAT = 56,
+-
+-        VC4_PACKET_GL_SHADER_STATE = 64,
+-        VC4_PACKET_NV_SHADER_STATE = 65,
+-        VC4_PACKET_VG_SHADER_STATE = 66,
+-
+-        VC4_PACKET_CONFIGURATION_BITS = 96,
+-        VC4_PACKET_FLAT_SHADE_FLAGS = 97,
+-        VC4_PACKET_POINT_SIZE = 98,
+-        VC4_PACKET_LINE_WIDTH = 99,
+-        VC4_PACKET_RHT_X_BOUNDARY = 100,
+-        VC4_PACKET_DEPTH_OFFSET = 101,
+-        VC4_PACKET_CLIP_WINDOW = 102,
+-        VC4_PACKET_VIEWPORT_OFFSET = 103,
+-        VC4_PACKET_Z_CLIPPING = 104,
+-        VC4_PACKET_CLIPPER_XY_SCALING = 105,
+-        VC4_PACKET_CLIPPER_Z_SCALING = 106,
+-
+-        VC4_PACKET_TILE_BINNING_MODE_CONFIG = 112,
+-        VC4_PACKET_TILE_RENDERING_MODE_CONFIG = 113,
+-        VC4_PACKET_CLEAR_COLORS = 114,
+-        VC4_PACKET_TILE_COORDINATES = 115,
+-
+-        /* Not an actual hardware packet -- this is what we use to put
+-         * references to GEM bos in the command stream, since we need the u32
+-         * int the actual address packet in order to store the offset from the
+-         * start of the BO.
+-         */
+-        VC4_PACKET_GEM_HANDLES = 254,
++	VC4_PACKET_FLUSH = 4,
++	VC4_PACKET_FLUSH_ALL = 5,
++	VC4_PACKET_START_TILE_BINNING = 6,
++	VC4_PACKET_INCREMENT_SEMAPHORE = 7,
++	VC4_PACKET_WAIT_ON_SEMAPHORE = 8,
++
++	VC4_PACKET_BRANCH = 16,
++	VC4_PACKET_BRANCH_TO_SUB_LIST = 17,
++
++	VC4_PACKET_STORE_MS_TILE_BUFFER = 24,
++	VC4_PACKET_STORE_MS_TILE_BUFFER_AND_EOF = 25,
++	VC4_PACKET_STORE_FULL_RES_TILE_BUFFER = 26,
++	VC4_PACKET_LOAD_FULL_RES_TILE_BUFFER = 27,
++	VC4_PACKET_STORE_TILE_BUFFER_GENERAL = 28,
++	VC4_PACKET_LOAD_TILE_BUFFER_GENERAL = 29,
++
++	VC4_PACKET_GL_INDEXED_PRIMITIVE = 32,
++	VC4_PACKET_GL_ARRAY_PRIMITIVE = 33,
++
++	VC4_PACKET_COMPRESSED_PRIMITIVE = 48,
++	VC4_PACKET_CLIPPED_COMPRESSED_PRIMITIVE = 49,
++
++	VC4_PACKET_PRIMITIVE_LIST_FORMAT = 56,
++
++	VC4_PACKET_GL_SHADER_STATE = 64,
++	VC4_PACKET_NV_SHADER_STATE = 65,
++	VC4_PACKET_VG_SHADER_STATE = 66,
++
++	VC4_PACKET_CONFIGURATION_BITS = 96,
++	VC4_PACKET_FLAT_SHADE_FLAGS = 97,
++	VC4_PACKET_POINT_SIZE = 98,
++	VC4_PACKET_LINE_WIDTH = 99,
++	VC4_PACKET_RHT_X_BOUNDARY = 100,
++	VC4_PACKET_DEPTH_OFFSET = 101,
++	VC4_PACKET_CLIP_WINDOW = 102,
++	VC4_PACKET_VIEWPORT_OFFSET = 103,
++	VC4_PACKET_Z_CLIPPING = 104,
++	VC4_PACKET_CLIPPER_XY_SCALING = 105,
++	VC4_PACKET_CLIPPER_Z_SCALING = 106,
++
++	VC4_PACKET_TILE_BINNING_MODE_CONFIG = 112,
++	VC4_PACKET_TILE_RENDERING_MODE_CONFIG = 113,
++	VC4_PACKET_CLEAR_COLORS = 114,
++	VC4_PACKET_TILE_COORDINATES = 115,
++
++	/* Not an actual hardware packet -- this is what we use to put
++	 * references to GEM bos in the command stream, since we need the u32
++	 * int the actual address packet in order to store the offset from the
++	 * start of the BO.
++	 */
++	VC4_PACKET_GEM_HANDLES = 254,
+ } __attribute__ ((__packed__));
+ 
+ #define VC4_PACKET_HALT_SIZE						1
+@@ -148,10 +148,10 @@ enum vc4_packet {
+  * VC4_PACKET_LOAD_TILE_BUFFER_GENERAL (low bits of the address)
+  */
+ 
+-#define VC4_LOADSTORE_TILE_BUFFER_EOF                  (1 << 3)
+-#define VC4_LOADSTORE_TILE_BUFFER_DISABLE_FULL_VG_MASK (1 << 2)
+-#define VC4_LOADSTORE_TILE_BUFFER_DISABLE_FULL_ZS      (1 << 1)
+-#define VC4_LOADSTORE_TILE_BUFFER_DISABLE_FULL_COLOR   (1 << 0)
++#define VC4_LOADSTORE_TILE_BUFFER_EOF                  BIT(3)
++#define VC4_LOADSTORE_TILE_BUFFER_DISABLE_FULL_VG_MASK BIT(2)
++#define VC4_LOADSTORE_TILE_BUFFER_DISABLE_FULL_ZS      BIT(1)
++#define VC4_LOADSTORE_TILE_BUFFER_DISABLE_FULL_COLOR   BIT(0)
+ 
+ /** @} */
+ 
+@@ -160,10 +160,10 @@ enum vc4_packet {
+  * byte 0-1 of VC4_PACKET_STORE_TILE_BUFFER_GENERAL and
+  * VC4_PACKET_LOAD_TILE_BUFFER_GENERAL
+  */
+-#define VC4_STORE_TILE_BUFFER_DISABLE_VG_MASK_CLEAR (1 << 15)
+-#define VC4_STORE_TILE_BUFFER_DISABLE_ZS_CLEAR     (1 << 14)
+-#define VC4_STORE_TILE_BUFFER_DISABLE_COLOR_CLEAR  (1 << 13)
+-#define VC4_STORE_TILE_BUFFER_DISABLE_SWAP         (1 << 12)
++#define VC4_STORE_TILE_BUFFER_DISABLE_VG_MASK_CLEAR BIT(15)
++#define VC4_STORE_TILE_BUFFER_DISABLE_ZS_CLEAR     BIT(14)
++#define VC4_STORE_TILE_BUFFER_DISABLE_COLOR_CLEAR  BIT(13)
++#define VC4_STORE_TILE_BUFFER_DISABLE_SWAP         BIT(12)
+ 
+ #define VC4_LOADSTORE_TILE_BUFFER_FORMAT_MASK      VC4_MASK(9, 8)
+ #define VC4_LOADSTORE_TILE_BUFFER_FORMAT_SHIFT     8
+@@ -201,28 +201,28 @@ enum vc4_packet {
+ #define VC4_INDEX_BUFFER_U16                       (1 << 4)
+ 
+ /* This flag is only present in NV shader state. */
+-#define VC4_SHADER_FLAG_SHADED_CLIP_COORDS         (1 << 3)
+-#define VC4_SHADER_FLAG_ENABLE_CLIPPING            (1 << 2)
+-#define VC4_SHADER_FLAG_VS_POINT_SIZE              (1 << 1)
+-#define VC4_SHADER_FLAG_FS_SINGLE_THREAD           (1 << 0)
++#define VC4_SHADER_FLAG_SHADED_CLIP_COORDS         BIT(3)
++#define VC4_SHADER_FLAG_ENABLE_CLIPPING            BIT(2)
++#define VC4_SHADER_FLAG_VS_POINT_SIZE              BIT(1)
++#define VC4_SHADER_FLAG_FS_SINGLE_THREAD           BIT(0)
+ 
+ /** @{ byte 2 of config bits. */
+-#define VC4_CONFIG_BITS_EARLY_Z_UPDATE             (1 << 1)
+-#define VC4_CONFIG_BITS_EARLY_Z                    (1 << 0)
++#define VC4_CONFIG_BITS_EARLY_Z_UPDATE             BIT(1)
++#define VC4_CONFIG_BITS_EARLY_Z                    BIT(0)
+ /** @} */
+ 
+ /** @{ byte 1 of config bits. */
+-#define VC4_CONFIG_BITS_Z_UPDATE                   (1 << 7)
++#define VC4_CONFIG_BITS_Z_UPDATE                   BIT(7)
+ /** same values in this 3-bit field as PIPE_FUNC_* */
+ #define VC4_CONFIG_BITS_DEPTH_FUNC_SHIFT           4
+-#define VC4_CONFIG_BITS_COVERAGE_READ_LEAVE        (1 << 3)
++#define VC4_CONFIG_BITS_COVERAGE_READ_LEAVE        BIT(3)
+ 
+ #define VC4_CONFIG_BITS_COVERAGE_UPDATE_NONZERO    (0 << 1)
+ #define VC4_CONFIG_BITS_COVERAGE_UPDATE_ODD        (1 << 1)
+ #define VC4_CONFIG_BITS_COVERAGE_UPDATE_OR         (2 << 1)
+ #define VC4_CONFIG_BITS_COVERAGE_UPDATE_ZERO       (3 << 1)
+ 
+-#define VC4_CONFIG_BITS_COVERAGE_PIPE_SELECT       (1 << 0)
++#define VC4_CONFIG_BITS_COVERAGE_PIPE_SELECT       BIT(0)
+ /** @} */
+ 
+ /** @{ byte 0 of config bits. */
+@@ -230,15 +230,15 @@ enum vc4_packet {
+ #define VC4_CONFIG_BITS_RASTERIZER_OVERSAMPLE_4X   (1 << 6)
+ #define VC4_CONFIG_BITS_RASTERIZER_OVERSAMPLE_16X  (2 << 6)
+ 
+-#define VC4_CONFIG_BITS_AA_POINTS_AND_LINES        (1 << 4)
+-#define VC4_CONFIG_BITS_ENABLE_DEPTH_OFFSET        (1 << 3)
+-#define VC4_CONFIG_BITS_CW_PRIMITIVES              (1 << 2)
+-#define VC4_CONFIG_BITS_ENABLE_PRIM_BACK           (1 << 1)
+-#define VC4_CONFIG_BITS_ENABLE_PRIM_FRONT          (1 << 0)
++#define VC4_CONFIG_BITS_AA_POINTS_AND_LINES        BIT(4)
++#define VC4_CONFIG_BITS_ENABLE_DEPTH_OFFSET        BIT(3)
++#define VC4_CONFIG_BITS_CW_PRIMITIVES              BIT(2)
++#define VC4_CONFIG_BITS_ENABLE_PRIM_BACK           BIT(1)
++#define VC4_CONFIG_BITS_ENABLE_PRIM_FRONT          BIT(0)
+ /** @} */
+ 
+ /** @{ bits in the last u8 of VC4_PACKET_TILE_BINNING_MODE_CONFIG */
+-#define VC4_BIN_CONFIG_DB_NON_MS                   (1 << 7)
++#define VC4_BIN_CONFIG_DB_NON_MS                   BIT(7)
+ 
+ #define VC4_BIN_CONFIG_ALLOC_BLOCK_SIZE_MASK       VC4_MASK(6, 5)
+ #define VC4_BIN_CONFIG_ALLOC_BLOCK_SIZE_SHIFT      5
+@@ -254,17 +254,17 @@ enum vc4_packet {
+ #define VC4_BIN_CONFIG_ALLOC_INIT_BLOCK_SIZE_128   2
+ #define VC4_BIN_CONFIG_ALLOC_INIT_BLOCK_SIZE_256   3
+ 
+-#define VC4_BIN_CONFIG_AUTO_INIT_TSDA              (1 << 2)
+-#define VC4_BIN_CONFIG_TILE_BUFFER_64BIT           (1 << 1)
+-#define VC4_BIN_CONFIG_MS_MODE_4X                  (1 << 0)
++#define VC4_BIN_CONFIG_AUTO_INIT_TSDA              BIT(2)
++#define VC4_BIN_CONFIG_TILE_BUFFER_64BIT           BIT(1)
++#define VC4_BIN_CONFIG_MS_MODE_4X                  BIT(0)
+ /** @} */
+ 
+ /** @{ bits in the last u16 of VC4_PACKET_TILE_RENDERING_MODE_CONFIG */
+-#define VC4_RENDER_CONFIG_DB_NON_MS                (1 << 12)
+-#define VC4_RENDER_CONFIG_EARLY_Z_COVERAGE_DISABLE (1 << 11)
+-#define VC4_RENDER_CONFIG_EARLY_Z_DIRECTION_G      (1 << 10)
+-#define VC4_RENDER_CONFIG_COVERAGE_MODE            (1 << 9)
+-#define VC4_RENDER_CONFIG_ENABLE_VG_MASK           (1 << 8)
++#define VC4_RENDER_CONFIG_DB_NON_MS                BIT(12)
++#define VC4_RENDER_CONFIG_EARLY_Z_COVERAGE_DISABLE BIT(11)
++#define VC4_RENDER_CONFIG_EARLY_Z_DIRECTION_G      BIT(10)
++#define VC4_RENDER_CONFIG_COVERAGE_MODE            BIT(9)
++#define VC4_RENDER_CONFIG_ENABLE_VG_MASK           BIT(8)
+ 
+ /** The values of the field are VC4_TILING_FORMAT_* */
+ #define VC4_RENDER_CONFIG_MEMORY_FORMAT_MASK       VC4_MASK(7, 6)
+@@ -280,8 +280,8 @@ enum vc4_packet {
+ #define VC4_RENDER_CONFIG_FORMAT_RGBA8888          1
+ #define VC4_RENDER_CONFIG_FORMAT_BGR565            2
+ 
+-#define VC4_RENDER_CONFIG_TILE_BUFFER_64BIT        (1 << 1)
+-#define VC4_RENDER_CONFIG_MS_MODE_4X               (1 << 0)
++#define VC4_RENDER_CONFIG_TILE_BUFFER_64BIT        BIT(1)
++#define VC4_RENDER_CONFIG_MS_MODE_4X               BIT(0)
+ 
+ #define VC4_PRIMITIVE_LIST_FORMAT_16_INDEX         (1 << 4)
+ #define VC4_PRIMITIVE_LIST_FORMAT_32_XY            (3 << 4)
+@@ -291,24 +291,24 @@ enum vc4_packet {
+ #define VC4_PRIMITIVE_LIST_FORMAT_TYPE_RHT         (3 << 0)
+ 
+ enum vc4_texture_data_type {
+-        VC4_TEXTURE_TYPE_RGBA8888 = 0,
+-        VC4_TEXTURE_TYPE_RGBX8888 = 1,
+-        VC4_TEXTURE_TYPE_RGBA4444 = 2,
+-        VC4_TEXTURE_TYPE_RGBA5551 = 3,
+-        VC4_TEXTURE_TYPE_RGB565 = 4,
+-        VC4_TEXTURE_TYPE_LUMINANCE = 5,
+-        VC4_TEXTURE_TYPE_ALPHA = 6,
+-        VC4_TEXTURE_TYPE_LUMALPHA = 7,
+-        VC4_TEXTURE_TYPE_ETC1 = 8,
+-        VC4_TEXTURE_TYPE_S16F = 9,
+-        VC4_TEXTURE_TYPE_S8 = 10,
+-        VC4_TEXTURE_TYPE_S16 = 11,
+-        VC4_TEXTURE_TYPE_BW1 = 12,
+-        VC4_TEXTURE_TYPE_A4 = 13,
+-        VC4_TEXTURE_TYPE_A1 = 14,
+-        VC4_TEXTURE_TYPE_RGBA64 = 15,
+-        VC4_TEXTURE_TYPE_RGBA32R = 16,
+-        VC4_TEXTURE_TYPE_YUV422R = 17,
++	VC4_TEXTURE_TYPE_RGBA8888 = 0,
++	VC4_TEXTURE_TYPE_RGBX8888 = 1,
++	VC4_TEXTURE_TYPE_RGBA4444 = 2,
++	VC4_TEXTURE_TYPE_RGBA5551 = 3,
++	VC4_TEXTURE_TYPE_RGB565 = 4,
++	VC4_TEXTURE_TYPE_LUMINANCE = 5,
++	VC4_TEXTURE_TYPE_ALPHA = 6,
++	VC4_TEXTURE_TYPE_LUMALPHA = 7,
++	VC4_TEXTURE_TYPE_ETC1 = 8,
++	VC4_TEXTURE_TYPE_S16F = 9,
++	VC4_TEXTURE_TYPE_S8 = 10,
++	VC4_TEXTURE_TYPE_S16 = 11,
++	VC4_TEXTURE_TYPE_BW1 = 12,
++	VC4_TEXTURE_TYPE_A4 = 13,
++	VC4_TEXTURE_TYPE_A1 = 14,
++	VC4_TEXTURE_TYPE_RGBA64 = 15,
++	VC4_TEXTURE_TYPE_RGBA32R = 16,
++	VC4_TEXTURE_TYPE_YUV422R = 17,
+ };
+ 
+ #define VC4_TEX_P0_OFFSET_MASK                     VC4_MASK(31, 12)
+--- a/drivers/gpu/drm/vc4/vc4_qpu_defines.h
++++ b/drivers/gpu/drm/vc4/vc4_qpu_defines.h
+@@ -25,194 +25,190 @@
+ #define VC4_QPU_DEFINES_H
+ 
+ enum qpu_op_add {
+-        QPU_A_NOP,
+-        QPU_A_FADD,
+-        QPU_A_FSUB,
+-        QPU_A_FMIN,
+-        QPU_A_FMAX,
+-        QPU_A_FMINABS,
+-        QPU_A_FMAXABS,
+-        QPU_A_FTOI,
+-        QPU_A_ITOF,
+-        QPU_A_ADD = 12,
+-        QPU_A_SUB,
+-        QPU_A_SHR,
+-        QPU_A_ASR,
+-        QPU_A_ROR,
+-        QPU_A_SHL,
+-        QPU_A_MIN,
+-        QPU_A_MAX,
+-        QPU_A_AND,
+-        QPU_A_OR,
+-        QPU_A_XOR,
+-        QPU_A_NOT,
+-        QPU_A_CLZ,
+-        QPU_A_V8ADDS = 30,
+-        QPU_A_V8SUBS = 31,
++	QPU_A_NOP,
++	QPU_A_FADD,
++	QPU_A_FSUB,
++	QPU_A_FMIN,
++	QPU_A_FMAX,
++	QPU_A_FMINABS,
++	QPU_A_FMAXABS,
++	QPU_A_FTOI,
++	QPU_A_ITOF,
++	QPU_A_ADD = 12,
++	QPU_A_SUB,
++	QPU_A_SHR,
++	QPU_A_ASR,
++	QPU_A_ROR,
++	QPU_A_SHL,
++	QPU_A_MIN,
++	QPU_A_MAX,
++	QPU_A_AND,
++	QPU_A_OR,
++	QPU_A_XOR,
++	QPU_A_NOT,
++	QPU_A_CLZ,
++	QPU_A_V8ADDS = 30,
++	QPU_A_V8SUBS = 31,
+ };
+ 
+ enum qpu_op_mul {
+-        QPU_M_NOP,
+-        QPU_M_FMUL,
+-        QPU_M_MUL24,
+-        QPU_M_V8MULD,
+-        QPU_M_V8MIN,
+-        QPU_M_V8MAX,
+-        QPU_M_V8ADDS,
+-        QPU_M_V8SUBS,
++	QPU_M_NOP,
++	QPU_M_FMUL,
++	QPU_M_MUL24,
++	QPU_M_V8MULD,
++	QPU_M_V8MIN,
++	QPU_M_V8MAX,
++	QPU_M_V8ADDS,
++	QPU_M_V8SUBS,
+ };
+ 
+ enum qpu_raddr {
+-        QPU_R_FRAG_PAYLOAD_ZW = 15, /* W for A file, Z for B file */
+-        /* 0-31 are the plain regfile a or b fields */
+-        QPU_R_UNIF = 32,
+-        QPU_R_VARY = 35,
+-        QPU_R_ELEM_QPU = 38,
+-        QPU_R_NOP,
+-        QPU_R_XY_PIXEL_COORD = 41,
+-        QPU_R_MS_REV_FLAGS = 41,
+-        QPU_R_VPM = 48,
+-        QPU_R_VPM_LD_BUSY,
+-        QPU_R_VPM_LD_WAIT,
+-        QPU_R_MUTEX_ACQUIRE,
++	QPU_R_FRAG_PAYLOAD_ZW = 15, /* W for A file, Z for B file */
++	/* 0-31 are the plain regfile a or b fields */
++	QPU_R_UNIF = 32,
++	QPU_R_VARY = 35,
++	QPU_R_ELEM_QPU = 38,
++	QPU_R_NOP,
++	QPU_R_XY_PIXEL_COORD = 41,
++	QPU_R_MS_REV_FLAGS = 41,
++	QPU_R_VPM = 48,
++	QPU_R_VPM_LD_BUSY,
++	QPU_R_VPM_LD_WAIT,
++	QPU_R_MUTEX_ACQUIRE,
+ };
+ 
+ enum qpu_waddr {
+-        /* 0-31 are the plain regfile a or b fields */
+-        QPU_W_ACC0 = 32, /* aka r0 */
+-        QPU_W_ACC1,
+-        QPU_W_ACC2,
+-        QPU_W_ACC3,
+-        QPU_W_TMU_NOSWAP,
+-        QPU_W_ACC5,
+-        QPU_W_HOST_INT,
+-        QPU_W_NOP,
+-        QPU_W_UNIFORMS_ADDRESS,
+-        QPU_W_QUAD_XY, /* X for regfile a, Y for regfile b */
+-        QPU_W_MS_FLAGS = 42,
+-        QPU_W_REV_FLAG = 42,
+-        QPU_W_TLB_STENCIL_SETUP = 43,
+-        QPU_W_TLB_Z,
+-        QPU_W_TLB_COLOR_MS,
+-        QPU_W_TLB_COLOR_ALL,
+-        QPU_W_TLB_ALPHA_MASK,
+-        QPU_W_VPM,
+-        QPU_W_VPMVCD_SETUP, /* LD for regfile a, ST for regfile b */
+-        QPU_W_VPM_ADDR, /* LD for regfile a, ST for regfile b */
+-        QPU_W_MUTEX_RELEASE,
+-        QPU_W_SFU_RECIP,
+-        QPU_W_SFU_RECIPSQRT,
+-        QPU_W_SFU_EXP,
+-        QPU_W_SFU_LOG,
+-        QPU_W_TMU0_S,
+-        QPU_W_TMU0_T,
+-        QPU_W_TMU0_R,
+-        QPU_W_TMU0_B,
+-        QPU_W_TMU1_S,
+-        QPU_W_TMU1_T,
+-        QPU_W_TMU1_R,
+-        QPU_W_TMU1_B,
++	/* 0-31 are the plain regfile a or b fields */
++	QPU_W_ACC0 = 32, /* aka r0 */
++	QPU_W_ACC1,
++	QPU_W_ACC2,
++	QPU_W_ACC3,
++	QPU_W_TMU_NOSWAP,
++	QPU_W_ACC5,
++	QPU_W_HOST_INT,
++	QPU_W_NOP,
++	QPU_W_UNIFORMS_ADDRESS,
++	QPU_W_QUAD_XY, /* X for regfile a, Y for regfile b */
++	QPU_W_MS_FLAGS = 42,
++	QPU_W_REV_FLAG = 42,
++	QPU_W_TLB_STENCIL_SETUP = 43,
++	QPU_W_TLB_Z,
++	QPU_W_TLB_COLOR_MS,
++	QPU_W_TLB_COLOR_ALL,
++	QPU_W_TLB_ALPHA_MASK,
++	QPU_W_VPM,
++	QPU_W_VPMVCD_SETUP, /* LD for regfile a, ST for regfile b */
++	QPU_W_VPM_ADDR, /* LD for regfile a, ST for regfile b */
++	QPU_W_MUTEX_RELEASE,
++	QPU_W_SFU_RECIP,
++	QPU_W_SFU_RECIPSQRT,
++	QPU_W_SFU_EXP,
++	QPU_W_SFU_LOG,
++	QPU_W_TMU0_S,
++	QPU_W_TMU0_T,
++	QPU_W_TMU0_R,
++	QPU_W_TMU0_B,
++	QPU_W_TMU1_S,
++	QPU_W_TMU1_T,
++	QPU_W_TMU1_R,
++	QPU_W_TMU1_B,
+ };
+ 
+ enum qpu_sig_bits {
+-        QPU_SIG_SW_BREAKPOINT,
+-        QPU_SIG_NONE,
+-        QPU_SIG_THREAD_SWITCH,
+-        QPU_SIG_PROG_END,
+-        QPU_SIG_WAIT_FOR_SCOREBOARD,
+-        QPU_SIG_SCOREBOARD_UNLOCK,
+-        QPU_SIG_LAST_THREAD_SWITCH,
+-        QPU_SIG_COVERAGE_LOAD,
+-        QPU_SIG_COLOR_LOAD,
+-        QPU_SIG_COLOR_LOAD_END,
+-        QPU_SIG_LOAD_TMU0,
+-        QPU_SIG_LOAD_TMU1,
+-        QPU_SIG_ALPHA_MASK_LOAD,
+-        QPU_SIG_SMALL_IMM,
+-        QPU_SIG_LOAD_IMM,
+-        QPU_SIG_BRANCH
++	QPU_SIG_SW_BREAKPOINT,
++	QPU_SIG_NONE,
++	QPU_SIG_THREAD_SWITCH,
++	QPU_SIG_PROG_END,
++	QPU_SIG_WAIT_FOR_SCOREBOARD,
++	QPU_SIG_SCOREBOARD_UNLOCK,
++	QPU_SIG_LAST_THREAD_SWITCH,
++	QPU_SIG_COVERAGE_LOAD,
++	QPU_SIG_COLOR_LOAD,
++	QPU_SIG_COLOR_LOAD_END,
++	QPU_SIG_LOAD_TMU0,
++	QPU_SIG_LOAD_TMU1,
++	QPU_SIG_ALPHA_MASK_LOAD,
++	QPU_SIG_SMALL_IMM,
++	QPU_SIG_LOAD_IMM,
++	QPU_SIG_BRANCH
+ };
+ 
+ enum qpu_mux {
+-        /* hardware mux values */
+-        QPU_MUX_R0,
+-        QPU_MUX_R1,
+-        QPU_MUX_R2,
+-        QPU_MUX_R3,
+-        QPU_MUX_R4,
+-        QPU_MUX_R5,
+-        QPU_MUX_A,
+-        QPU_MUX_B,
++	/* hardware mux values */
++	QPU_MUX_R0,
++	QPU_MUX_R1,
++	QPU_MUX_R2,
++	QPU_MUX_R3,
++	QPU_MUX_R4,
++	QPU_MUX_R5,
++	QPU_MUX_A,
++	QPU_MUX_B,
+ 
+-        /* non-hardware mux values */
+-        QPU_MUX_IMM,
++	/* non-hardware mux values */
++	QPU_MUX_IMM,
+ };
+ 
+ enum qpu_cond {
+-        QPU_COND_NEVER,
+-        QPU_COND_ALWAYS,
+-        QPU_COND_ZS,
+-        QPU_COND_ZC,
+-        QPU_COND_NS,
+-        QPU_COND_NC,
+-        QPU_COND_CS,
+-        QPU_COND_CC,
++	QPU_COND_NEVER,
++	QPU_COND_ALWAYS,
++	QPU_COND_ZS,
++	QPU_COND_ZC,
++	QPU_COND_NS,
++	QPU_COND_NC,
++	QPU_COND_CS,
++	QPU_COND_CC,
+ };
+ 
+ enum qpu_pack_mul {
+-        QPU_PACK_MUL_NOP,
+-        QPU_PACK_MUL_8888 = 3, /* replicated to each 8 bits of the 32-bit dst. */
+-        QPU_PACK_MUL_8A,
+-        QPU_PACK_MUL_8B,
+-        QPU_PACK_MUL_8C,
+-        QPU_PACK_MUL_8D,
++	QPU_PACK_MUL_NOP,
++	/* replicated to each 8 bits of the 32-bit dst. */
++	QPU_PACK_MUL_8888 = 3,
++	QPU_PACK_MUL_8A,
++	QPU_PACK_MUL_8B,
++	QPU_PACK_MUL_8C,
++	QPU_PACK_MUL_8D,
+ };
+ 
+ enum qpu_pack_a {
+-        QPU_PACK_A_NOP,
+-        /* convert to 16 bit float if float input, or to int16. */
+-        QPU_PACK_A_16A,
+-        QPU_PACK_A_16B,
+-        /* replicated to each 8 bits of the 32-bit dst. */
+-        QPU_PACK_A_8888,
+-        /* Convert to 8-bit unsigned int. */
+-        QPU_PACK_A_8A,
+-        QPU_PACK_A_8B,
+-        QPU_PACK_A_8C,
+-        QPU_PACK_A_8D,
+-
+-        /* Saturating variants of the previous instructions. */
+-        QPU_PACK_A_32_SAT, /* int-only */
+-        QPU_PACK_A_16A_SAT, /* int or float */
+-        QPU_PACK_A_16B_SAT,
+-        QPU_PACK_A_8888_SAT,
+-        QPU_PACK_A_8A_SAT,
+-        QPU_PACK_A_8B_SAT,
+-        QPU_PACK_A_8C_SAT,
+-        QPU_PACK_A_8D_SAT,
++	QPU_PACK_A_NOP,
++	/* convert to 16 bit float if float input, or to int16. */
++	QPU_PACK_A_16A,
++	QPU_PACK_A_16B,
++	/* replicated to each 8 bits of the 32-bit dst. */
++	QPU_PACK_A_8888,
++	/* Convert to 8-bit unsigned int. */
++	QPU_PACK_A_8A,
++	QPU_PACK_A_8B,
++	QPU_PACK_A_8C,
++	QPU_PACK_A_8D,
++
++	/* Saturating variants of the previous instructions. */
++	QPU_PACK_A_32_SAT, /* int-only */
++	QPU_PACK_A_16A_SAT, /* int or float */
++	QPU_PACK_A_16B_SAT,
++	QPU_PACK_A_8888_SAT,
++	QPU_PACK_A_8A_SAT,
++	QPU_PACK_A_8B_SAT,
++	QPU_PACK_A_8C_SAT,
++	QPU_PACK_A_8D_SAT,
+ };
+ 
+ enum qpu_unpack_r4 {
+-        QPU_UNPACK_R4_NOP,
+-        QPU_UNPACK_R4_F16A_TO_F32,
+-        QPU_UNPACK_R4_F16B_TO_F32,
+-        QPU_UNPACK_R4_8D_REP,
+-        QPU_UNPACK_R4_8A,
+-        QPU_UNPACK_R4_8B,
+-        QPU_UNPACK_R4_8C,
+-        QPU_UNPACK_R4_8D,
+-};
+-
+-#define QPU_MASK(high, low) ((((uint64_t)1<<((high)-(low)+1))-1)<<(low))
+-/* Using the GNU statement expression extension */
+-#define QPU_SET_FIELD(value, field)                                       \
+-        ({                                                                \
+-                uint64_t fieldval = (uint64_t)(value) << field ## _SHIFT; \
+-                assert((fieldval & ~ field ## _MASK) == 0);               \
+-                fieldval & field ## _MASK;                                \
+-         })
++	QPU_UNPACK_R4_NOP,
++	QPU_UNPACK_R4_F16A_TO_F32,
++	QPU_UNPACK_R4_F16B_TO_F32,
++	QPU_UNPACK_R4_8D_REP,
++	QPU_UNPACK_R4_8A,
++	QPU_UNPACK_R4_8B,
++	QPU_UNPACK_R4_8C,
++	QPU_UNPACK_R4_8D,
++};
++
++#define QPU_MASK(high, low) \
++	((((uint64_t)1 << ((high) - (low) + 1)) - 1) << (low))
+ 
+-#define QPU_GET_FIELD(word, field) ((uint32_t)(((word)  & field ## _MASK) >> field ## _SHIFT))
++#define QPU_GET_FIELD(word, field) \
++	((uint32_t)(((word)  & field ## _MASK) >> field ## _SHIFT))
+ 
+ #define QPU_SIG_SHIFT                   60
+ #define QPU_SIG_MASK                    QPU_MASK(63, 60)
+--- a/drivers/gpu/drm/vc4/vc4_render_cl.c
++++ b/drivers/gpu/drm/vc4/vc4_render_cl.c
+@@ -63,7 +63,6 @@ static inline void rcl_u32(struct vc4_rc
+ 	setup->next_offset += 4;
+ }
+ 
+-
+ /*
+  * Emits a no-op STORE_TILE_BUFFER_GENERAL.
+  *
+@@ -217,7 +216,7 @@ static int vc4_create_rcl_bo(struct drm_
+ 	}
+ 	size += xtiles * ytiles * loop_body_size;
+ 
+-	setup->rcl = &vc4_bo_create(dev, size)->base;
++	setup->rcl = &vc4_bo_create(dev, size, true)->base;
+ 	if (!setup->rcl)
+ 		return -ENOMEM;
+ 	list_add_tail(&to_vc4_bo(&setup->rcl->base)->unref_head,
+@@ -256,6 +255,7 @@ static int vc4_create_rcl_bo(struct drm_
+ 		for (x = min_x_tile; x <= max_x_tile; x++) {
+ 			bool first = (x == min_x_tile && y == min_y_tile);
+ 			bool last = (x == max_x_tile && y == max_y_tile);
++
+ 			emit_tile(exec, setup, x, y, first, last);
+ 		}
+ 	}
+--- a/drivers/gpu/drm/vc4/vc4_v3d.c
++++ b/drivers/gpu/drm/vc4/vc4_v3d.c
+@@ -125,7 +125,7 @@ int vc4_v3d_debugfs_regs(struct seq_file
+ 
+ int vc4_v3d_debugfs_ident(struct seq_file *m, void *unused)
+ {
+-	struct drm_info_node *node = (struct drm_info_node *) m->private;
++	struct drm_info_node *node = (struct drm_info_node *)m->private;
+ 	struct drm_device *dev = node->minor->dev;
+ 	struct vc4_dev *vc4 = to_vc4_dev(dev);
+ 	uint32_t ident1 = V3D_READ(V3D_IDENT1);
+@@ -133,11 +133,13 @@ int vc4_v3d_debugfs_ident(struct seq_fil
+ 	uint32_t tups = VC4_GET_FIELD(ident1, V3D_IDENT1_TUPS);
+ 	uint32_t qups = VC4_GET_FIELD(ident1, V3D_IDENT1_QUPS);
+ 
+-	seq_printf(m, "Revision:   %d\n", VC4_GET_FIELD(ident1, V3D_IDENT1_REV));
++	seq_printf(m, "Revision:   %d\n",
++		   VC4_GET_FIELD(ident1, V3D_IDENT1_REV));
+ 	seq_printf(m, "Slices:     %d\n", nslc);
+ 	seq_printf(m, "TMUs:       %d\n", nslc * tups);
+ 	seq_printf(m, "QPUs:       %d\n", nslc * qups);
+-	seq_printf(m, "Semaphores: %d\n", VC4_GET_FIELD(ident1, V3D_IDENT1_NSEM));
++	seq_printf(m, "Semaphores: %d\n",
++		   VC4_GET_FIELD(ident1, V3D_IDENT1_NSEM));
+ 
+ 	return 0;
+ }
+@@ -218,7 +220,7 @@ static int vc4_v3d_bind(struct device *d
+ }
+ 
+ static void vc4_v3d_unbind(struct device *dev, struct device *master,
+-			    void *data)
++			   void *data)
+ {
+ 	struct drm_device *drm = dev_get_drvdata(master);
+ 	struct vc4_dev *vc4 = to_vc4_dev(drm);
+--- a/drivers/gpu/drm/vc4/vc4_validate.c
++++ b/drivers/gpu/drm/vc4/vc4_validate.c
+@@ -48,7 +48,6 @@
+ 	void *validated,				\
+ 	void *untrusted
+ 
+-
+ /** Return the width in pixels of a 64-byte microtile. */
+ static uint32_t
+ utile_width(int cpp)
+@@ -192,7 +191,7 @@ vc4_check_tex_size(struct vc4_exec_info
+ 
+ 	if (size + offset < size ||
+ 	    size + offset > fbo->base.size) {
+-		DRM_ERROR("Overflow in %dx%d (%dx%d) fbo size (%d + %d > %d)\n",
++		DRM_ERROR("Overflow in %dx%d (%dx%d) fbo size (%d + %d > %zd)\n",
+ 			  width, height,
+ 			  aligned_width, aligned_height,
+ 			  size, offset, fbo->base.size);
+@@ -278,7 +277,7 @@ validate_indexed_prim_list(VALIDATE_ARGS
+ 
+ 	if (offset > ib->base.size ||
+ 	    (ib->base.size - offset) / index_size < length) {
+-		DRM_ERROR("IB access overflow (%d + %d*%d > %d)\n",
++		DRM_ERROR("IB access overflow (%d + %d*%d > %zd)\n",
+ 			  offset, length, index_size, ib->base.size);
+ 		return -EINVAL;
+ 	}
+@@ -377,6 +376,7 @@ static int
+ validate_tile_binning_config(VALIDATE_ARGS)
+ {
+ 	struct drm_device *dev = exec->exec_bo->base.dev;
++	struct vc4_bo *tile_bo;
+ 	uint8_t flags;
+ 	uint32_t tile_state_size, tile_alloc_size;
+ 	uint32_t tile_count;
+@@ -438,12 +438,12 @@ validate_tile_binning_config(VALIDATE_AR
+ 	 */
+ 	tile_alloc_size += 1024 * 1024;
+ 
+-	exec->tile_bo = &vc4_bo_create(dev, exec->tile_alloc_offset +
+-				       tile_alloc_size)->base;
++	tile_bo = vc4_bo_create(dev, exec->tile_alloc_offset + tile_alloc_size,
++				true);
++	exec->tile_bo = &tile_bo->base;
+ 	if (!exec->tile_bo)
+ 		return -ENOMEM;
+-	list_add_tail(&to_vc4_bo(&exec->tile_bo->base)->unref_head,
+-		     &exec->unref_list);
++	list_add_tail(&tile_bo->unref_head, &exec->unref_list);
+ 
+ 	/* tile alloc address. */
+ 	*(uint32_t *)(validated + 0) = (exec->tile_bo->paddr +
+@@ -463,8 +463,8 @@ validate_gem_handles(VALIDATE_ARGS)
+ 	return 0;
+ }
+ 
+-#define VC4_DEFINE_PACKET(packet, name, func) \
+-	[packet] = { packet ## _SIZE, name, func }
++#define VC4_DEFINE_PACKET(packet, func) \
++	[packet] = { packet ## _SIZE, #packet, func }
+ 
+ static const struct cmd_info {
+ 	uint16_t len;
+@@ -472,42 +472,43 @@ static const struct cmd_info {
+ 	int (*func)(struct vc4_exec_info *exec, void *validated,
+ 		    void *untrusted);
+ } cmd_info[] = {
+-	VC4_DEFINE_PACKET(VC4_PACKET_HALT, "halt", NULL),
+-	VC4_DEFINE_PACKET(VC4_PACKET_NOP, "nop", NULL),
+-	VC4_DEFINE_PACKET(VC4_PACKET_FLUSH, "flush", NULL),
+-	VC4_DEFINE_PACKET(VC4_PACKET_FLUSH_ALL, "flush all state", validate_flush_all),
+-	VC4_DEFINE_PACKET(VC4_PACKET_START_TILE_BINNING, "start tile binning", validate_start_tile_binning),
+-	VC4_DEFINE_PACKET(VC4_PACKET_INCREMENT_SEMAPHORE, "increment semaphore", validate_increment_semaphore),
+-
+-	VC4_DEFINE_PACKET(VC4_PACKET_GL_INDEXED_PRIMITIVE, "Indexed Primitive List", validate_indexed_prim_list),
+-
+-	VC4_DEFINE_PACKET(VC4_PACKET_GL_ARRAY_PRIMITIVE, "Vertex Array Primitives", validate_gl_array_primitive),
+-
+-	/* This is only used by clipped primitives (packets 48 and 49), which
+-	 * we don't support parsing yet.
+-	 */
+-	VC4_DEFINE_PACKET(VC4_PACKET_PRIMITIVE_LIST_FORMAT, "primitive list format", NULL),
+-
+-	VC4_DEFINE_PACKET(VC4_PACKET_GL_SHADER_STATE, "GL Shader State", validate_gl_shader_state),
+-	VC4_DEFINE_PACKET(VC4_PACKET_NV_SHADER_STATE, "NV Shader State", validate_nv_shader_state),
+-
+-	VC4_DEFINE_PACKET(VC4_PACKET_CONFIGURATION_BITS, "configuration bits", NULL),
+-	VC4_DEFINE_PACKET(VC4_PACKET_FLAT_SHADE_FLAGS, "flat shade flags", NULL),
+-	VC4_DEFINE_PACKET(VC4_PACKET_POINT_SIZE, "point size", NULL),
+-	VC4_DEFINE_PACKET(VC4_PACKET_LINE_WIDTH, "line width", NULL),
+-	VC4_DEFINE_PACKET(VC4_PACKET_RHT_X_BOUNDARY, "RHT X boundary", NULL),
+-	VC4_DEFINE_PACKET(VC4_PACKET_DEPTH_OFFSET, "Depth Offset", NULL),
+-	VC4_DEFINE_PACKET(VC4_PACKET_CLIP_WINDOW, "Clip Window", NULL),
+-	VC4_DEFINE_PACKET(VC4_PACKET_VIEWPORT_OFFSET, "Viewport Offset", NULL),
+-	VC4_DEFINE_PACKET(VC4_PACKET_CLIPPER_XY_SCALING, "Clipper XY Scaling", NULL),
++	VC4_DEFINE_PACKET(VC4_PACKET_HALT, NULL),
++	VC4_DEFINE_PACKET(VC4_PACKET_NOP, NULL),
++	VC4_DEFINE_PACKET(VC4_PACKET_FLUSH, NULL),
++	VC4_DEFINE_PACKET(VC4_PACKET_FLUSH_ALL, validate_flush_all),
++	VC4_DEFINE_PACKET(VC4_PACKET_START_TILE_BINNING,
++			  validate_start_tile_binning),
++	VC4_DEFINE_PACKET(VC4_PACKET_INCREMENT_SEMAPHORE,
++			  validate_increment_semaphore),
++
++	VC4_DEFINE_PACKET(VC4_PACKET_GL_INDEXED_PRIMITIVE,
++			  validate_indexed_prim_list),
++	VC4_DEFINE_PACKET(VC4_PACKET_GL_ARRAY_PRIMITIVE,
++			  validate_gl_array_primitive),
++
++	VC4_DEFINE_PACKET(VC4_PACKET_PRIMITIVE_LIST_FORMAT, NULL),
++
++	VC4_DEFINE_PACKET(VC4_PACKET_GL_SHADER_STATE, validate_gl_shader_state),
++	VC4_DEFINE_PACKET(VC4_PACKET_NV_SHADER_STATE, validate_nv_shader_state),
++
++	VC4_DEFINE_PACKET(VC4_PACKET_CONFIGURATION_BITS, NULL),
++	VC4_DEFINE_PACKET(VC4_PACKET_FLAT_SHADE_FLAGS, NULL),
++	VC4_DEFINE_PACKET(VC4_PACKET_POINT_SIZE, NULL),
++	VC4_DEFINE_PACKET(VC4_PACKET_LINE_WIDTH, NULL),
++	VC4_DEFINE_PACKET(VC4_PACKET_RHT_X_BOUNDARY, NULL),
++	VC4_DEFINE_PACKET(VC4_PACKET_DEPTH_OFFSET, NULL),
++	VC4_DEFINE_PACKET(VC4_PACKET_CLIP_WINDOW, NULL),
++	VC4_DEFINE_PACKET(VC4_PACKET_VIEWPORT_OFFSET, NULL),
++	VC4_DEFINE_PACKET(VC4_PACKET_CLIPPER_XY_SCALING, NULL),
+ 	/* Note: The docs say this was also 105, but it was 106 in the
+ 	 * initial userland code drop.
+ 	 */
+-	VC4_DEFINE_PACKET(VC4_PACKET_CLIPPER_Z_SCALING, "Clipper Z Scale and Offset", NULL),
++	VC4_DEFINE_PACKET(VC4_PACKET_CLIPPER_Z_SCALING, NULL),
+ 
+-	VC4_DEFINE_PACKET(VC4_PACKET_TILE_BINNING_MODE_CONFIG, "tile binning configuration", validate_tile_binning_config),
++	VC4_DEFINE_PACKET(VC4_PACKET_TILE_BINNING_MODE_CONFIG,
++			  validate_tile_binning_config),
+ 
+-	VC4_DEFINE_PACKET(VC4_PACKET_GEM_HANDLES, "GEM handles", validate_gem_handles),
++	VC4_DEFINE_PACKET(VC4_PACKET_GEM_HANDLES, validate_gem_handles),
+ };
+ 
+ int
+@@ -526,7 +527,7 @@ vc4_validate_bin_cl(struct drm_device *d
+ 		u8 cmd = *(uint8_t *)src_pkt;
+ 		const struct cmd_info *info;
+ 
+-		if (cmd > ARRAY_SIZE(cmd_info)) {
++		if (cmd >= ARRAY_SIZE(cmd_info)) {
+ 			DRM_ERROR("0x%08x: packet %d out of bounds\n",
+ 				  src_offset, cmd);
+ 			return -EINVAL;
+@@ -539,11 +540,6 @@ vc4_validate_bin_cl(struct drm_device *d
+ 			return -EINVAL;
+ 		}
+ 
+-#if 0
+-		DRM_INFO("0x%08x: packet %d (%s) size %d processing...\n",
+-			 src_offset, cmd, info->name, info->len);
+-#endif
+-
+ 		if (src_offset + info->len > len) {
+ 			DRM_ERROR("0x%08x: packet %d (%s) length 0x%08x "
+ 				  "exceeds bounds (0x%08x)\n",
+@@ -558,8 +554,7 @@ vc4_validate_bin_cl(struct drm_device *d
+ 		if (info->func && info->func(exec,
+ 					     dst_pkt + 1,
+ 					     src_pkt + 1)) {
+-			DRM_ERROR("0x%08x: packet %d (%s) failed to "
+-				  "validate\n",
++			DRM_ERROR("0x%08x: packet %d (%s) failed to validate\n",
+ 				  src_offset, cmd, info->name);
+ 			return -EINVAL;
+ 		}
+@@ -618,12 +613,14 @@ reloc_tex(struct vc4_exec_info *exec,
+ 
+ 	if (sample->is_direct) {
+ 		uint32_t remaining_size = tex->base.size - p0;
++
+ 		if (p0 > tex->base.size - 4) {
+ 			DRM_ERROR("UBO offset greater than UBO size\n");
+ 			goto fail;
+ 		}
+ 		if (p1 > remaining_size - 4) {
+-			DRM_ERROR("UBO clamp would allow reads outside of UBO\n");
++			DRM_ERROR("UBO clamp would allow reads "
++				  "outside of UBO\n");
+ 			goto fail;
+ 		}
+ 		*validated_p0 = tex->paddr + p0;
+@@ -786,7 +783,7 @@ validate_shader_rec(struct drm_device *d
+ 	struct drm_gem_cma_object *bo[ARRAY_SIZE(gl_relocs) + 8];
+ 	uint32_t nr_attributes = 0, nr_fixed_relocs, nr_relocs, packet_size;
+ 	int i;
+-	struct vc4_validated_shader_info *validated_shader;
++	struct vc4_validated_shader_info *shader;
+ 
+ 	if (state->packet == VC4_PACKET_NV_SHADER_STATE) {
+ 		relocs = nv_relocs;
+@@ -841,12 +838,12 @@ validate_shader_rec(struct drm_device *d
+ 		else
+ 			mode = VC4_MODE_RENDER;
+ 
+-		if (!vc4_use_bo(exec, src_handles[i], mode, &bo[i])) {
++		if (!vc4_use_bo(exec, src_handles[i], mode, &bo[i]))
+ 			return false;
+-		}
+ 	}
+ 
+ 	for (i = 0; i < nr_fixed_relocs; i++) {
++		struct vc4_bo *vc4_bo;
+ 		uint32_t o = relocs[i].offset;
+ 		uint32_t src_offset = *(uint32_t *)(pkt_u + o);
+ 		uint32_t *texture_handles_u;
+@@ -858,34 +855,34 @@ validate_shader_rec(struct drm_device *d
+ 		switch (relocs[i].type) {
+ 		case RELOC_CODE:
+ 			if (src_offset != 0) {
+-				DRM_ERROR("Shaders must be at offset 0 of "
+-					  "the BO.\n");
++				DRM_ERROR("Shaders must be at offset 0 "
++					  "of the BO.\n");
+ 				goto fail;
+ 			}
+ 
+-			validated_shader = to_vc4_bo(&bo[i]->base)->validated_shader;
+-			if (!validated_shader)
++			vc4_bo = to_vc4_bo(&bo[i]->base);
++			shader = vc4_bo->validated_shader;
++			if (!shader)
+ 				goto fail;
+ 
+-			if (validated_shader->uniforms_src_size >
+-			    exec->uniforms_size) {
++			if (shader->uniforms_src_size > exec->uniforms_size) {
+ 				DRM_ERROR("Uniforms src buffer overflow\n");
+ 				goto fail;
+ 			}
+ 
+ 			texture_handles_u = exec->uniforms_u;
+ 			uniform_data_u = (texture_handles_u +
+-					  validated_shader->num_texture_samples);
++					  shader->num_texture_samples);
+ 
+ 			memcpy(exec->uniforms_v, uniform_data_u,
+-			       validated_shader->uniforms_size);
++			       shader->uniforms_size);
+ 
+ 			for (tex = 0;
+-			     tex < validated_shader->num_texture_samples;
++			     tex < shader->num_texture_samples;
+ 			     tex++) {
+ 				if (!reloc_tex(exec,
+ 					       uniform_data_u,
+-					       &validated_shader->texture_samples[tex],
++					       &shader->texture_samples[tex],
+ 					       texture_handles_u[tex])) {
+ 					goto fail;
+ 				}
+@@ -893,9 +890,9 @@ validate_shader_rec(struct drm_device *d
+ 
+ 			*(uint32_t *)(pkt_v + o + 4) = exec->uniforms_p;
+ 
+-			exec->uniforms_u += validated_shader->uniforms_src_size;
+-			exec->uniforms_v += validated_shader->uniforms_size;
+-			exec->uniforms_p += validated_shader->uniforms_size;
++			exec->uniforms_u += shader->uniforms_src_size;
++			exec->uniforms_v += shader->uniforms_size;
++			exec->uniforms_p += shader->uniforms_size;
+ 
+ 			break;
+ 
+@@ -926,7 +923,8 @@ validate_shader_rec(struct drm_device *d
+ 			max_index = ((vbo->base.size - offset - attr_size) /
+ 				     stride);
+ 			if (state->max_index > max_index) {
+-				DRM_ERROR("primitives use index %d out of supplied %d\n",
++				DRM_ERROR("primitives use index %d out of "
++					  "supplied %d\n",
+ 					  state->max_index, max_index);
+ 				return -EINVAL;
+ 			}
+--- a/drivers/gpu/drm/vc4/vc4_validate_shaders.c
++++ b/drivers/gpu/drm/vc4/vc4_validate_shaders.c
+@@ -24,24 +24,16 @@
+ /**
+  * DOC: Shader validator for VC4.
+  *
+- * The VC4 has no IOMMU between it and system memory.  So, a user with access
+- * to execute shaders could escalate privilege by overwriting system memory
+- * (using the VPM write address register in the general-purpose DMA mode) or
+- * reading system memory it shouldn't (reading it as a texture, or uniform
+- * data, or vertex data).
++ * The VC4 has no IOMMU between it and system memory, so a user with
++ * access to execute shaders could escalate privilege by overwriting
++ * system memory (using the VPM write address register in the
++ * general-purpose DMA mode) or reading system memory it shouldn't
++ * (reading it as a texture, or uniform data, or vertex data).
+  *
+- * This walks over a shader starting from some offset within a BO, ensuring
+- * that its accesses are appropriately bounded, and recording how many texture
+- * accesses are made and where so that we can do relocations for them in the
++ * This walks over a shader BO, ensuring that its accesses are
++ * appropriately bounded, and recording how many texture accesses are
++ * made and where so that we can do relocations for them in the
+  * uniform stream.
+- *
+- * The kernel API has shaders stored in user-mapped BOs.  The BOs will be
+- * forcibly unmapped from the process before validation, and any cache of
+- * validated state will be flushed if the mapping is faulted back in.
+- *
+- * Storing the shaders in BOs means that the validation process will be slow
+- * due to uncached reads, but since shaders are long-lived and shader BOs are
+- * never actually modified, this shouldn't be a problem.
+  */
+ 
+ #include "vc4_drv.h"
+@@ -70,7 +62,6 @@ waddr_to_live_reg_index(uint32_t waddr,
+ 		else
+ 			return waddr;
+ 	} else if (waddr <= QPU_W_ACC3) {
+-
+ 		return 64 + waddr - QPU_W_ACC0;
+ 	} else {
+ 		return ~0;
+@@ -85,15 +76,14 @@ raddr_add_a_to_live_reg_index(uint64_t i
+ 	uint32_t raddr_a = QPU_GET_FIELD(inst, QPU_RADDR_A);
+ 	uint32_t raddr_b = QPU_GET_FIELD(inst, QPU_RADDR_B);
+ 
+-	if (add_a == QPU_MUX_A) {
++	if (add_a == QPU_MUX_A)
+ 		return raddr_a;
+-	} else if (add_a == QPU_MUX_B && sig != QPU_SIG_SMALL_IMM) {
++	else if (add_a == QPU_MUX_B && sig != QPU_SIG_SMALL_IMM)
+ 		return 32 + raddr_b;
+-	} else if (add_a <= QPU_MUX_R3) {
++	else if (add_a <= QPU_MUX_R3)
+ 		return 64 + add_a;
+-	} else {
++	else
+ 		return ~0;
+-	}
+ }
+ 
+ static bool
+@@ -111,9 +101,9 @@ is_tmu_write(uint32_t waddr)
+ }
+ 
+ static bool
+-record_validated_texture_sample(struct vc4_validated_shader_info *validated_shader,
+-				struct vc4_shader_validation_state *validation_state,
+-				int tmu)
++record_texture_sample(struct vc4_validated_shader_info *validated_shader,
++		      struct vc4_shader_validation_state *validation_state,
++		      int tmu)
+ {
+ 	uint32_t s = validated_shader->num_texture_samples;
+ 	int i;
+@@ -226,8 +216,8 @@ check_tmu_write(uint64_t inst,
+ 		validated_shader->uniforms_size += 4;
+ 
+ 	if (submit) {
+-		if (!record_validated_texture_sample(validated_shader,
+-						     validation_state, tmu)) {
++		if (!record_texture_sample(validated_shader,
++					   validation_state, tmu)) {
+ 			return false;
+ 		}
+ 
+@@ -238,10 +228,10 @@ check_tmu_write(uint64_t inst,
+ }
+ 
+ static bool
+-check_register_write(uint64_t inst,
+-		     struct vc4_validated_shader_info *validated_shader,
+-		     struct vc4_shader_validation_state *validation_state,
+-		     bool is_mul)
++check_reg_write(uint64_t inst,
++		struct vc4_validated_shader_info *validated_shader,
++		struct vc4_shader_validation_state *validation_state,
++		bool is_mul)
+ {
+ 	uint32_t waddr = (is_mul ?
+ 			  QPU_GET_FIELD(inst, QPU_WADDR_MUL) :
+@@ -297,7 +287,7 @@ check_register_write(uint64_t inst,
+ 		return true;
+ 
+ 	case QPU_W_TLB_STENCIL_SETUP:
+-                return true;
++		return true;
+ 	}
+ 
+ 	return true;
+@@ -360,7 +350,7 @@ track_live_clamps(uint64_t inst,
+ 		}
+ 
+ 		validation_state->live_max_clamp_regs[lri_add] = true;
+-	} if (op_add == QPU_A_MIN) {
++	} else if (op_add == QPU_A_MIN) {
+ 		/* Track live clamps of a value clamped to a minimum of 0 and
+ 		 * a maximum of some uniform's offset.
+ 		 */
+@@ -392,8 +382,10 @@ check_instruction_writes(uint64_t inst,
+ 		return false;
+ 	}
+ 
+-	ok = (check_register_write(inst, validated_shader, validation_state, false) &&
+-	      check_register_write(inst, validated_shader, validation_state, true));
++	ok = (check_reg_write(inst, validated_shader, validation_state,
++			      false) &&
++	      check_reg_write(inst, validated_shader, validation_state,
++			      true));
+ 
+ 	track_live_clamps(inst, validated_shader, validation_state);
+ 
+@@ -441,7 +433,7 @@ vc4_validate_shader(struct drm_gem_cma_o
+ 	shader = shader_obj->vaddr;
+ 	max_ip = shader_obj->base.size / sizeof(uint64_t);
+ 
+-	validated_shader = kcalloc(sizeof(*validated_shader), 1, GFP_KERNEL);
++	validated_shader = kcalloc(1, sizeof(*validated_shader), GFP_KERNEL);
+ 	if (!validated_shader)
+ 		return NULL;
+ 
+@@ -497,7 +489,7 @@ vc4_validate_shader(struct drm_gem_cma_o
+ 
+ 	if (ip == max_ip) {
+ 		DRM_ERROR("shader failed to terminate before "
+-			  "shader BO end at %d\n",
++			  "shader BO end at %zd\n",
+ 			  shader_obj->base.size);
+ 		goto fail;
+ 	}
+--- a/include/drm/drmP.h
++++ b/include/drm/drmP.h
+@@ -585,6 +585,13 @@ struct drm_driver {
+ 	int (*gem_open_object) (struct drm_gem_object *, struct drm_file *);
+ 	void (*gem_close_object) (struct drm_gem_object *, struct drm_file *);
+ 
++	/**
++	 * Hook for allocating the GEM object struct, for use by core
++	 * helpers.
++	 */
++	struct drm_gem_object *(*gem_create_object)(struct drm_device *dev,
++						    size_t size);
++
+ 	/* prime: */
+ 	/* export handle -> fd (see drm_gem_prime_handle_to_fd() helper) */
+ 	int (*prime_handle_to_fd)(struct drm_device *dev, struct drm_file *file_priv,
+@@ -639,7 +646,6 @@ struct drm_driver {
+ 
+ 	u32 driver_features;
+ 	int dev_priv_size;
+-	size_t gem_obj_size;
+ 	const struct drm_ioctl_desc *ioctls;
+ 	int num_ioctls;
+ 	const struct file_operations *fops;
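
Side note on the drmP.h hunk above: it trades the old gem_obj_size field for a gem_create_object() hook, so instead of telling the core how many bytes to allocate, the driver allocates a structure of its own that embeds the base GEM object — this is how vc4 hangs validation state off every BO. Below is a minimal, userspace-only sketch of that embedding pattern; the names (base_obj, wrapped_obj, create_object) are invented for illustration and are not the DRM structures themselves.

  #include <stdio.h>
  #include <stdlib.h>
  #include <stddef.h>

  /* Stand-ins for struct drm_gem_object and a driver wrapper around it. */
  struct base_obj {
          size_t size;
  };

  struct wrapped_obj {
          struct base_obj base;   /* embedded "GEM object" */
          int validated;          /* driver-private state, e.g. shader validation results */
  };

  #define container_of(ptr, type, member) \
          ((type *)((char *)(ptr) - offsetof(type, member)))

  /* The hook: allocate the containing structure, hand back the embedded base. */
  static struct base_obj *create_object(size_t size)
  {
          struct wrapped_obj *w = calloc(1, sizeof(*w));

          if (!w)
                  return NULL;
          w->base.size = size;
          return &w->base;
  }

  int main(void)
  {
          struct base_obj *obj = create_object(4096);
          struct wrapped_obj *w;

          if (!obj)
                  return 1;
          w = container_of(obj, struct wrapped_obj, base);
          printf("base size %zu, driver flag %d\n", obj->size, w->validated);
          free(w);
          return 0;
  }

The core only ever passes base-object pointers around; the driver recovers its wrapper from them, which is the purpose served by to_vc4_bo() in the hunks above.
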
diff --git a/target/linux/brcm2708/patches-4.4/0116-drm-Use-the-driver-s-gem_object_free-function-from-C.patch b/target/linux/brcm2708/patches-4.4/0116-drm-Use-the-driver-s-gem_object_free-function-from-C.patch
new file mode 100644
index 0000000..472beb5
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0116-drm-Use-the-driver-s-gem_object_free-function-from-C.patch
@@ -0,0 +1,59 @@
+From 78fdb6a7b1de6a345dacf81ef514f94daf44d3da Mon Sep 17 00:00:00 2001
+From: Eric Anholt <eric at anholt.net>
+Date: Fri, 11 Dec 2015 19:45:03 -0800
+Subject: [PATCH 116/127] drm: Use the driver's gem_object_free function from
+ CMA helpers.
+
+VC4 wraps the CMA objects in its own structures, so it needs to do its
+own teardown (waiting for GPU to finish, updating bo_stats tracking).
+The other CMA drivers are using drm_gem_cma_free_object as their
+gem_free_object, so this should be a no-op for them.
+
+Signed-off-by: Eric Anholt <eric at anholt.net>
+---
+ drivers/gpu/drm/drm_fb_cma_helper.c  | 6 +++---
+ drivers/gpu/drm/drm_gem_cma_helper.c | 4 ++--
+ 2 files changed, 5 insertions(+), 5 deletions(-)
+
+--- a/drivers/gpu/drm/drm_fb_cma_helper.c
++++ b/drivers/gpu/drm/drm_fb_cma_helper.c
+@@ -266,7 +266,7 @@ static int drm_fbdev_cma_create(struct d
+ 	fbi = drm_fb_helper_alloc_fbi(helper);
+ 	if (IS_ERR(fbi)) {
+ 		ret = PTR_ERR(fbi);
+-		goto err_drm_gem_cma_free_object;
++		goto err_gem_free_object;
+ 	}
+ 
+ 	fbdev_cma->fb = drm_fb_cma_alloc(dev, &mode_cmd, &obj, 1);
+@@ -299,8 +299,8 @@ static int drm_fbdev_cma_create(struct d
+ 
+ err_fb_info_destroy:
+ 	drm_fb_helper_release_fbi(helper);
+-err_drm_gem_cma_free_object:
+-	drm_gem_cma_free_object(&obj->base);
++err_gem_free_object:
++	dev->driver->gem_free_object(&obj->base);
+ 	return ret;
+ }
+ 
+--- a/drivers/gpu/drm/drm_gem_cma_helper.c
++++ b/drivers/gpu/drm/drm_gem_cma_helper.c
+@@ -121,7 +121,7 @@ struct drm_gem_cma_object *drm_gem_cma_c
+ 	return cma_obj;
+ 
+ error:
+-	drm_gem_cma_free_object(&cma_obj->base);
++	drm->driver->gem_free_object(&cma_obj->base);
+ 	return ERR_PTR(ret);
+ }
+ EXPORT_SYMBOL_GPL(drm_gem_cma_create);
+@@ -171,7 +171,7 @@ drm_gem_cma_create_with_handle(struct dr
+ 	return cma_obj;
+ 
+ err_handle_create:
+-	drm_gem_cma_free_object(gem_obj);
++	drm->driver->gem_free_object(gem_obj);
+ 
+ 	return ERR_PTR(ret);
+ }
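
Routing the helpers' frees through dev->driver->gem_free_object is what lets a driver that wraps the CMA object run its own teardown before the memory disappears. A rough sketch of the shape such a hook takes — hypothetical my_* names, not the actual vc4_free_object(), which additionally waits for the GPU and updates its BO statistics:

  /* Driver .gem_free_object hook layered on the CMA helper (sketch only). */
  static void my_gem_free_object(struct drm_gem_object *obj)
  {
          struct my_bo *bo = to_my_bo(obj);       /* hypothetical wrapper lookup */

          /* Undo driver-private state first (validation data, stats, ...). */
          kfree(bo->validated_shader);

          /* Then let the CMA helper release the backing memory and GEM object. */
          drm_gem_cma_free_object(obj);
  }

Drivers that simply set .gem_free_object = drm_gem_cma_free_object keep the old behaviour, which is why the commit message calls this change a no-op for them.
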
diff --git a/target/linux/brcm2708/patches-4.4/0117-drm-vc4-Add-support-for-MSAA-rendering.patch b/target/linux/brcm2708/patches-4.4/0117-drm-vc4-Add-support-for-MSAA-rendering.patch
new file mode 100644
index 0000000..d4e222d
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0117-drm-vc4-Add-support-for-MSAA-rendering.patch
@@ -0,0 +1,518 @@
+From b66efe927216251ae27f9213e6b92b3a49deb73e Mon Sep 17 00:00:00 2001
+From: Eric Anholt <eric at anholt.net>
+Date: Fri, 17 Jul 2015 13:15:50 -0700
+Subject: [PATCH 117/127] drm/vc4: Add support for MSAA rendering.
+
+For MSAA, you set a bit in the binner that halves the size of tiles in
+each direction, so you can pack 4 samples per pixel in the tile
+buffer.  During rendering, you can load and store raw tile buffer
+contents (to save the per-sample MSAA contents), or you can load/store
+resolved tile buffer contents (loads spam the pixel value to all 4
+samples, and stores either average the 4 color samples, or store the
+first sample for Z/S).
+
+Signed-off-by: Eric Anholt <eric at anholt.net>
+---
+ drivers/gpu/drm/vc4/vc4_packet.h    |  23 ++-
+ drivers/gpu/drm/vc4/vc4_render_cl.c | 274 ++++++++++++++++++++++++++++++------
+ drivers/gpu/drm/vc4/vc4_validate.c  |   5 +-
+ include/uapi/drm/vc4_drm.h          |  11 +-
+ 4 files changed, 258 insertions(+), 55 deletions(-)
+
+--- a/drivers/gpu/drm/vc4/vc4_packet.h
++++ b/drivers/gpu/drm/vc4/vc4_packet.h
+@@ -123,6 +123,11 @@ enum vc4_packet {
+ #define VC4_PACKET_TILE_COORDINATES_SIZE				3
+ #define VC4_PACKET_GEM_HANDLES_SIZE					9
+ 
++/* Number of multisamples supported. */
++#define VC4_MAX_SAMPLES							4
++/* Size of a full resolution color or Z tile buffer load/store. */
++#define VC4_TILE_BUFFER_SIZE			(64 * 64 * 4)
++
+ /** @{
+  * Bits used by packets like VC4_PACKET_STORE_TILE_BUFFER_GENERAL and
+  * VC4_PACKET_TILE_RENDERING_MODE_CONFIG.
+@@ -137,10 +142,20 @@ enum vc4_packet {
+  * low bits of VC4_PACKET_STORE_FULL_RES_TILE_BUFFER and
+  * VC4_PACKET_LOAD_FULL_RES_TILE_BUFFER.
+  */
+-#define VC4_LOADSTORE_FULL_RES_EOF                     (1 << 3)
+-#define VC4_LOADSTORE_FULL_RES_DISABLE_CLEAR_ALL       (1 << 2)
+-#define VC4_LOADSTORE_FULL_RES_DISABLE_ZS              (1 << 1)
+-#define VC4_LOADSTORE_FULL_RES_DISABLE_COLOR           (1 << 0)
++#define VC4_LOADSTORE_FULL_RES_EOF                     BIT(3)
++#define VC4_LOADSTORE_FULL_RES_DISABLE_CLEAR_ALL       BIT(2)
++#define VC4_LOADSTORE_FULL_RES_DISABLE_ZS              BIT(1)
++#define VC4_LOADSTORE_FULL_RES_DISABLE_COLOR           BIT(0)
++
++/** @{
++ *
++ * low bits of VC4_PACKET_STORE_FULL_RES_TILE_BUFFER and
++ * VC4_PACKET_LOAD_FULL_RES_TILE_BUFFER.
++ */
++#define VC4_LOADSTORE_FULL_RES_EOF                     BIT(3)
++#define VC4_LOADSTORE_FULL_RES_DISABLE_CLEAR_ALL       BIT(2)
++#define VC4_LOADSTORE_FULL_RES_DISABLE_ZS              BIT(1)
++#define VC4_LOADSTORE_FULL_RES_DISABLE_COLOR           BIT(0)
+ 
+ /** @{
+  *
+--- a/drivers/gpu/drm/vc4/vc4_render_cl.c
++++ b/drivers/gpu/drm/vc4/vc4_render_cl.c
+@@ -37,9 +37,11 @@
+ 
+ struct vc4_rcl_setup {
+ 	struct drm_gem_cma_object *color_read;
+-	struct drm_gem_cma_object *color_ms_write;
++	struct drm_gem_cma_object *color_write;
+ 	struct drm_gem_cma_object *zs_read;
+ 	struct drm_gem_cma_object *zs_write;
++	struct drm_gem_cma_object *msaa_color_write;
++	struct drm_gem_cma_object *msaa_zs_write;
+ 
+ 	struct drm_gem_cma_object *rcl;
+ 	u32 next_offset;
+@@ -82,6 +84,22 @@ static void vc4_store_before_load(struct
+ }
+ 
+ /*
++ * Calculates the physical address of the start of a tile in a RCL surface.
++ *
++ * Unlike the other load/store packets,
++ * VC4_PACKET_LOAD/STORE_FULL_RES_TILE_BUFFER don't look at the tile
++ * coordinates packet, and instead just store to the address given.
++ */
++static uint32_t vc4_full_res_offset(struct vc4_exec_info *exec,
++				    struct drm_gem_cma_object *bo,
++				    struct drm_vc4_submit_rcl_surface *surf,
++				    uint8_t x, uint8_t y)
++{
++	return bo->paddr + surf->offset + VC4_TILE_BUFFER_SIZE *
++		(DIV_ROUND_UP(exec->args->width, 32) * y + x);
++}
++
++/*
+  * Emits a PACKET_TILE_COORDINATES if one isn't already pending.
+  *
+  * The tile coordinates packet triggers a pending load if there is one, are
+@@ -108,22 +126,41 @@ static void emit_tile(struct vc4_exec_in
+ 	 * may be outstanding at a time.
+ 	 */
+ 	if (setup->color_read) {
+-		rcl_u8(setup, VC4_PACKET_LOAD_TILE_BUFFER_GENERAL);
+-		rcl_u16(setup, args->color_read.bits);
+-		rcl_u32(setup,
+-			setup->color_read->paddr + args->color_read.offset);
++		if (args->color_read.flags &
++		    VC4_SUBMIT_RCL_SURFACE_READ_IS_FULL_RES) {
++			rcl_u8(setup, VC4_PACKET_LOAD_FULL_RES_TILE_BUFFER);
++			rcl_u32(setup,
++				vc4_full_res_offset(exec, setup->color_read,
++						    &args->color_read, x, y) |
++				VC4_LOADSTORE_FULL_RES_DISABLE_ZS);
++		} else {
++			rcl_u8(setup, VC4_PACKET_LOAD_TILE_BUFFER_GENERAL);
++			rcl_u16(setup, args->color_read.bits);
++			rcl_u32(setup, setup->color_read->paddr +
++				args->color_read.offset);
++		}
+ 	}
+ 
+ 	if (setup->zs_read) {
+-		if (setup->color_read) {
+-			/* Exec previous load. */
+-			vc4_tile_coordinates(setup, x, y);
+-			vc4_store_before_load(setup);
++		if (args->zs_read.flags &
++		    VC4_SUBMIT_RCL_SURFACE_READ_IS_FULL_RES) {
++			rcl_u8(setup, VC4_PACKET_LOAD_FULL_RES_TILE_BUFFER);
++			rcl_u32(setup,
++				vc4_full_res_offset(exec, setup->zs_read,
++						    &args->zs_read, x, y) |
++				VC4_LOADSTORE_FULL_RES_DISABLE_COLOR);
++		} else {
++			if (setup->color_read) {
++				/* Exec previous load. */
++				vc4_tile_coordinates(setup, x, y);
++				vc4_store_before_load(setup);
++			}
++
++			rcl_u8(setup, VC4_PACKET_LOAD_TILE_BUFFER_GENERAL);
++			rcl_u16(setup, args->zs_read.bits);
++			rcl_u32(setup, setup->zs_read->paddr +
++				args->zs_read.offset);
+ 		}
+-
+-		rcl_u8(setup, VC4_PACKET_LOAD_TILE_BUFFER_GENERAL);
+-		rcl_u16(setup, args->zs_read.bits);
+-		rcl_u32(setup, setup->zs_read->paddr + args->zs_read.offset);
+ 	}
+ 
+ 	/* Clipping depends on tile coordinates having been
+@@ -144,20 +181,60 @@ static void emit_tile(struct vc4_exec_in
+ 				(y * exec->bin_tiles_x + x) * 32));
+ 	}
+ 
++	if (setup->msaa_color_write) {
++		bool last_tile_write = (!setup->msaa_zs_write &&
++					!setup->zs_write &&
++					!setup->color_write);
++		uint32_t bits = VC4_LOADSTORE_FULL_RES_DISABLE_ZS;
++
++		if (!last_tile_write)
++			bits |= VC4_LOADSTORE_FULL_RES_DISABLE_CLEAR_ALL;
++		else if (last)
++			bits |= VC4_LOADSTORE_FULL_RES_EOF;
++		rcl_u8(setup, VC4_PACKET_STORE_FULL_RES_TILE_BUFFER);
++		rcl_u32(setup,
++			vc4_full_res_offset(exec, setup->msaa_color_write,
++					    &args->msaa_color_write, x, y) |
++			bits);
++	}
++
++	if (setup->msaa_zs_write) {
++		bool last_tile_write = (!setup->zs_write &&
++					!setup->color_write);
++		uint32_t bits = VC4_LOADSTORE_FULL_RES_DISABLE_COLOR;
++
++		if (setup->msaa_color_write)
++			vc4_tile_coordinates(setup, x, y);
++		if (!last_tile_write)
++			bits |= VC4_LOADSTORE_FULL_RES_DISABLE_CLEAR_ALL;
++		else if (last)
++			bits |= VC4_LOADSTORE_FULL_RES_EOF;
++		rcl_u8(setup, VC4_PACKET_STORE_FULL_RES_TILE_BUFFER);
++		rcl_u32(setup,
++			vc4_full_res_offset(exec, setup->msaa_zs_write,
++					    &args->msaa_zs_write, x, y) |
++			bits);
++	}
++
+ 	if (setup->zs_write) {
++		bool last_tile_write = !setup->color_write;
++
++		if (setup->msaa_color_write || setup->msaa_zs_write)
++			vc4_tile_coordinates(setup, x, y);
++
+ 		rcl_u8(setup, VC4_PACKET_STORE_TILE_BUFFER_GENERAL);
+ 		rcl_u16(setup, args->zs_write.bits |
+-			(setup->color_ms_write ?
+-			 VC4_STORE_TILE_BUFFER_DISABLE_COLOR_CLEAR : 0));
++			(last_tile_write ?
++			 0 : VC4_STORE_TILE_BUFFER_DISABLE_COLOR_CLEAR));
+ 		rcl_u32(setup,
+ 			(setup->zs_write->paddr + args->zs_write.offset) |
+-			((last && !setup->color_ms_write) ?
++			((last && last_tile_write) ?
+ 			 VC4_LOADSTORE_TILE_BUFFER_EOF : 0));
+ 	}
+ 
+-	if (setup->color_ms_write) {
+-		if (setup->zs_write) {
+-			/* Reset after previous store */
++	if (setup->color_write) {
++		if (setup->msaa_color_write || setup->msaa_zs_write ||
++		    setup->zs_write) {
+ 			vc4_tile_coordinates(setup, x, y);
+ 		}
+ 
+@@ -192,14 +269,26 @@ static int vc4_create_rcl_bo(struct drm_
+ 	}
+ 
+ 	if (setup->color_read) {
+-		loop_body_size += (VC4_PACKET_LOAD_TILE_BUFFER_GENERAL_SIZE);
++		if (args->color_read.flags &
++		    VC4_SUBMIT_RCL_SURFACE_READ_IS_FULL_RES) {
++			loop_body_size += VC4_PACKET_LOAD_FULL_RES_TILE_BUFFER_SIZE;
++		} else {
++			loop_body_size += VC4_PACKET_LOAD_TILE_BUFFER_GENERAL_SIZE;
++		}
+ 	}
+ 	if (setup->zs_read) {
+-		if (setup->color_read) {
+-			loop_body_size += VC4_PACKET_TILE_COORDINATES_SIZE;
+-			loop_body_size += VC4_PACKET_STORE_TILE_BUFFER_GENERAL_SIZE;
++		if (args->zs_read.flags &
++		    VC4_SUBMIT_RCL_SURFACE_READ_IS_FULL_RES) {
++			loop_body_size += VC4_PACKET_LOAD_FULL_RES_TILE_BUFFER_SIZE;
++		} else {
++			if (setup->color_read &&
++			    !(args->color_read.flags &
++			      VC4_SUBMIT_RCL_SURFACE_READ_IS_FULL_RES)) {
++				loop_body_size += VC4_PACKET_TILE_COORDINATES_SIZE;
++				loop_body_size += VC4_PACKET_STORE_TILE_BUFFER_GENERAL_SIZE;
++			}
++			loop_body_size += VC4_PACKET_LOAD_TILE_BUFFER_GENERAL_SIZE;
+ 		}
+-		loop_body_size += VC4_PACKET_LOAD_TILE_BUFFER_GENERAL_SIZE;
+ 	}
+ 
+ 	if (has_bin) {
+@@ -207,13 +296,23 @@ static int vc4_create_rcl_bo(struct drm_
+ 		loop_body_size += VC4_PACKET_BRANCH_TO_SUB_LIST_SIZE;
+ 	}
+ 
++	if (setup->msaa_color_write)
++		loop_body_size += VC4_PACKET_STORE_FULL_RES_TILE_BUFFER_SIZE;
++	if (setup->msaa_zs_write)
++		loop_body_size += VC4_PACKET_STORE_FULL_RES_TILE_BUFFER_SIZE;
++
+ 	if (setup->zs_write)
+ 		loop_body_size += VC4_PACKET_STORE_TILE_BUFFER_GENERAL_SIZE;
+-	if (setup->color_ms_write) {
+-		if (setup->zs_write)
+-			loop_body_size += VC4_PACKET_TILE_COORDINATES_SIZE;
++	if (setup->color_write)
+ 		loop_body_size += VC4_PACKET_STORE_MS_TILE_BUFFER_SIZE;
+-	}
++
++	/* We need a VC4_PACKET_TILE_COORDINATES in between each store. */
++	loop_body_size += VC4_PACKET_TILE_COORDINATES_SIZE *
++		((setup->msaa_color_write != NULL) +
++		 (setup->msaa_zs_write != NULL) +
++		 (setup->color_write != NULL) +
++		 (setup->zs_write != NULL) - 1);
++
+ 	size += xtiles * ytiles * loop_body_size;
+ 
+ 	setup->rcl = &vc4_bo_create(dev, size, true)->base;
+@@ -224,13 +323,12 @@ static int vc4_create_rcl_bo(struct drm_
+ 
+ 	rcl_u8(setup, VC4_PACKET_TILE_RENDERING_MODE_CONFIG);
+ 	rcl_u32(setup,
+-		(setup->color_ms_write ?
+-		 (setup->color_ms_write->paddr +
+-		  args->color_ms_write.offset) :
++		(setup->color_write ? (setup->color_write->paddr +
++				       args->color_write.offset) :
+ 		 0));
+ 	rcl_u16(setup, args->width);
+ 	rcl_u16(setup, args->height);
+-	rcl_u16(setup, args->color_ms_write.bits);
++	rcl_u16(setup, args->color_write.bits);
+ 
+ 	/* The tile buffer gets cleared when the previous tile is stored.  If
+ 	 * the clear values changed between frames, then the tile buffer has
+@@ -267,6 +365,56 @@ static int vc4_create_rcl_bo(struct drm_
+ 	return 0;
+ }
+ 
++static int vc4_full_res_bounds_check(struct vc4_exec_info *exec,
++				     struct drm_gem_cma_object *obj,
++				     struct drm_vc4_submit_rcl_surface *surf)
++{
++	struct drm_vc4_submit_cl *args = exec->args;
++	u32 render_tiles_stride = DIV_ROUND_UP(exec->args->width, 32);
++
++	if (surf->offset > obj->base.size) {
++		DRM_ERROR("surface offset %d > BO size %zd\n",
++			  surf->offset, obj->base.size);
++		return -EINVAL;
++	}
++
++	if ((obj->base.size - surf->offset) / VC4_TILE_BUFFER_SIZE <
++	    render_tiles_stride * args->max_y_tile + args->max_x_tile) {
++		DRM_ERROR("MSAA tile %d, %d out of bounds "
++			  "(bo size %zd, offset %d).\n",
++			  args->max_x_tile, args->max_y_tile,
++			  obj->base.size,
++			  surf->offset);
++		return -EINVAL;
++	}
++
++	return 0;
++}
++
++static int vc4_rcl_msaa_surface_setup(struct vc4_exec_info *exec,
++				      struct drm_gem_cma_object **obj,
++				      struct drm_vc4_submit_rcl_surface *surf)
++{
++	if (surf->flags != 0 || surf->bits != 0) {
++		DRM_ERROR("MSAA surface had nonzero flags/bits\n");
++		return -EINVAL;
++	}
++
++	if (surf->hindex == ~0)
++		return 0;
++
++	*obj = vc4_use_bo(exec, surf->hindex);
++	if (!*obj)
++		return -EINVAL;
++
++	if (surf->offset & 0xf) {
++		DRM_ERROR("MSAA write must be 16b aligned.\n");
++		return -EINVAL;
++	}
++
++	return vc4_full_res_bounds_check(exec, *obj, surf);
++}
++
+ static int vc4_rcl_surface_setup(struct vc4_exec_info *exec,
+ 				 struct drm_gem_cma_object **obj,
+ 				 struct drm_vc4_submit_rcl_surface *surf)
+@@ -278,9 +426,10 @@ static int vc4_rcl_surface_setup(struct
+ 	uint8_t format = VC4_GET_FIELD(surf->bits,
+ 				       VC4_LOADSTORE_TILE_BUFFER_FORMAT);
+ 	int cpp;
++	int ret;
+ 
+-	if (surf->pad != 0) {
+-		DRM_ERROR("Padding unset\n");
++	if (surf->flags & ~VC4_SUBMIT_RCL_SURFACE_READ_IS_FULL_RES) {
++		DRM_ERROR("Extra flags set\n");
+ 		return -EINVAL;
+ 	}
+ 
+@@ -290,6 +439,25 @@ static int vc4_rcl_surface_setup(struct
+ 	if (!vc4_use_bo(exec, surf->hindex, VC4_MODE_RENDER, obj))
+ 		return -EINVAL;
+ 
++	if (surf->flags & VC4_SUBMIT_RCL_SURFACE_READ_IS_FULL_RES) {
++		if (surf == &exec->args->zs_write) {
++			DRM_ERROR("general zs write may not be a full-res.\n");
++			return -EINVAL;
++		}
++
++		if (surf->bits != 0) {
++			DRM_ERROR("load/store general bits set with "
++				  "full res load/store.\n");
++			return -EINVAL;
++		}
++
++		ret = vc4_full_res_bounds_check(exec, *obj, surf);
++		if (!ret)
++			return ret;
++
++		return 0;
++	}
++
+ 	if (surf->bits & ~(VC4_LOADSTORE_TILE_BUFFER_TILING_MASK |
+ 			   VC4_LOADSTORE_TILE_BUFFER_BUFFER_MASK |
+ 			   VC4_LOADSTORE_TILE_BUFFER_FORMAT_MASK)) {
+@@ -341,9 +509,10 @@ static int vc4_rcl_surface_setup(struct
+ }
+ 
+ static int
+-vc4_rcl_ms_surface_setup(struct vc4_exec_info *exec,
+-			 struct drm_gem_cma_object **obj,
+-			 struct drm_vc4_submit_rcl_surface *surf)
++vc4_rcl_render_config_surface_setup(struct vc4_exec_info *exec,
++				    struct vc4_rcl_setup *setup,
++				    struct drm_gem_cma_object **obj,
++				    struct drm_vc4_submit_rcl_surface *surf)
+ {
+ 	uint8_t tiling = VC4_GET_FIELD(surf->bits,
+ 				       VC4_RENDER_CONFIG_MEMORY_FORMAT);
+@@ -351,13 +520,15 @@ vc4_rcl_ms_surface_setup(struct vc4_exec
+ 				       VC4_RENDER_CONFIG_FORMAT);
+ 	int cpp;
+ 
+-	if (surf->pad != 0) {
+-		DRM_ERROR("Padding unset\n");
++	if (surf->flags != 0) {
++		DRM_ERROR("No flags supported on render config.\n");
+ 		return -EINVAL;
+ 	}
+ 
+ 	if (surf->bits & ~(VC4_RENDER_CONFIG_MEMORY_FORMAT_MASK |
+-			   VC4_RENDER_CONFIG_FORMAT_MASK)) {
++			   VC4_RENDER_CONFIG_FORMAT_MASK |
++			   VC4_RENDER_CONFIG_MS_MODE_4X |
++			   VC4_RENDER_CONFIG_DECIMATE_MODE_4X)) {
+ 		DRM_ERROR("Unknown bits in render config: 0x%04x\n",
+ 			  surf->bits);
+ 		return -EINVAL;
+@@ -413,18 +584,20 @@ int vc4_get_rcl(struct drm_device *dev,
+ 	if (has_bin &&
+ 	    (args->max_x_tile > exec->bin_tiles_x ||
+ 	     args->max_y_tile > exec->bin_tiles_y)) {
+-		DRM_ERROR("Render tiles (%d,%d) outside of bin config (%d,%d)\n",
++		DRM_ERROR("Render tiles (%d,%d) outside of bin config "
++			  "(%d,%d)\n",
+ 			  args->max_x_tile, args->max_y_tile,
+ 			  exec->bin_tiles_x, exec->bin_tiles_y);
+ 		return -EINVAL;
+ 	}
+ 
+-	ret = vc4_rcl_surface_setup(exec, &setup.color_read, &args->color_read);
++	ret = vc4_rcl_render_config_surface_setup(exec, &setup,
++						  &setup.color_write,
++						  &args->color_write);
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = vc4_rcl_ms_surface_setup(exec, &setup.color_ms_write,
+-				       &args->color_ms_write);
++	ret = vc4_rcl_surface_setup(exec, &setup.color_read, &args->color_read);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -436,10 +609,21 @@ int vc4_get_rcl(struct drm_device *dev,
+ 	if (ret)
+ 		return ret;
+ 
++	ret = vc4_rcl_msaa_surface_setup(exec, &setup.msaa_color_write,
++					 &args->msaa_color_write);
++	if (ret)
++		return ret;
++
++	ret = vc4_rcl_msaa_surface_setup(exec, &setup.msaa_zs_write,
++					 &args->msaa_zs_write);
++	if (ret)
++		return ret;
++
+ 	/* We shouldn't even have the job submitted to us if there's no
+ 	 * surface to write out.
+ 	 */
+-	if (!setup.color_ms_write && !setup.zs_write) {
++	if (!setup.color_write && !setup.zs_write &&
++	    !setup.msaa_color_write && !setup.msaa_zs_write) {
+ 		DRM_ERROR("RCL requires color or Z/S write\n");
+ 		return -EINVAL;
+ 	}
+--- a/drivers/gpu/drm/vc4/vc4_validate.c
++++ b/drivers/gpu/drm/vc4/vc4_validate.c
+@@ -400,9 +400,8 @@ validate_tile_binning_config(VALIDATE_AR
+ 	}
+ 
+ 	if (flags & (VC4_BIN_CONFIG_DB_NON_MS |
+-		     VC4_BIN_CONFIG_TILE_BUFFER_64BIT |
+-		     VC4_BIN_CONFIG_MS_MODE_4X)) {
+-		DRM_ERROR("unsupported bining config flags 0x%02x\n", flags);
++		     VC4_BIN_CONFIG_TILE_BUFFER_64BIT)) {
++		DRM_ERROR("unsupported binning config flags 0x%02x\n", flags);
+ 		return -EINVAL;
+ 	}
+ 
+--- a/include/uapi/drm/vc4_drm.h
++++ b/include/uapi/drm/vc4_drm.h
+@@ -46,10 +46,13 @@ struct drm_vc4_submit_rcl_surface {
+ 	uint32_t hindex; /* Handle index, or ~0 if not present. */
+ 	uint32_t offset; /* Offset to start of buffer. */
+ 	/*
+-         * Bits for either render config (color_ms_write) or load/store packet.
++         * Bits for either render config (color_write) or load/store packet.
++         * Bits should all be 0 for MSAA load/stores.
+ 	 */
+ 	uint16_t bits;
+-	uint16_t pad;
++
++#define VC4_SUBMIT_RCL_SURFACE_READ_IS_FULL_RES		(1 << 0)
++	uint16_t flags;
+ };
+ 
+ /**
+@@ -128,9 +131,11 @@ struct drm_vc4_submit_cl {
+ 	uint8_t max_x_tile;
+ 	uint8_t max_y_tile;
+ 	struct drm_vc4_submit_rcl_surface color_read;
+-	struct drm_vc4_submit_rcl_surface color_ms_write;
++	struct drm_vc4_submit_rcl_surface color_write;
+ 	struct drm_vc4_submit_rcl_surface zs_read;
+ 	struct drm_vc4_submit_rcl_surface zs_write;
++	struct drm_vc4_submit_rcl_surface msaa_color_write;
++	struct drm_vc4_submit_rcl_surface msaa_zs_write;
+ 	uint32_t clear_color[2];
+ 	uint32_t clear_z;
+ 	uint8_t clear_s;
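
A note on the addressing used by the new full-resolution packets: as the comment on vc4_full_res_offset() says, VC4_PACKET_LOAD/STORE_FULL_RES_TILE_BUFFER ignore the tile coordinates packet and take an absolute address per tile. Each full-res tile covers a 32x32-pixel MSAA tile with 4 samples of 4 bytes, hence VC4_TILE_BUFFER_SIZE = 64 * 64 * 4. The standalone sketch below mirrors that arithmetic; the 1920-pixel width and the tile coordinates are made-up example values, not anything from the patch:

  #include <stdio.h>
  #include <stdint.h>

  #define VC4_TILE_BUFFER_SIZE    (64 * 64 * 4)   /* from vc4_packet.h above */
  #define DIV_ROUND_UP(n, d)      (((n) + (d) - 1) / (d))

  /* Offset of MSAA tile (x, y) in a full-res surface, mirroring vc4_full_res_offset(). */
  static uint32_t full_res_offset(uint32_t width, uint8_t x, uint8_t y)
  {
          uint32_t tiles_per_row = DIV_ROUND_UP(width, 32);  /* MSAA tiles are 32x32 pixels */

          return VC4_TILE_BUFFER_SIZE * (tiles_per_row * y + x);
  }

  int main(void)
  {
          /* 1920-wide frame -> 60 tiles per row; tile (2, 1) starts 62 tiles in. */
          printf("0x%08x\n", full_res_offset(1920, 2, 1));   /* 62 * 16384 = 0x000f8000 */
          return 0;
  }

The kernel code adds bo->paddr and surf->offset on top of this, and vc4_full_res_bounds_check() then verifies that the furthest tile the render job can touch still fits inside the BO.
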
diff --git a/target/linux/brcm2708/patches-4.4/0118-drm-vc4-A-few-more-non-functional-changes-to-sync-to.patch b/target/linux/brcm2708/patches-4.4/0118-drm-vc4-A-few-more-non-functional-changes-to-sync-to.patch
new file mode 100644
index 0000000..7edf7a3
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0118-drm-vc4-A-few-more-non-functional-changes-to-sync-to.patch
@@ -0,0 +1,345 @@
+From 8b0e1ce37709125edeb9851307e24ba563749382 Mon Sep 17 00:00:00 2001
+From: Eric Anholt <eric at anholt.net>
+Date: Tue, 8 Dec 2015 14:00:43 -0800
+Subject: [PATCH 118/127] drm/vc4: A few more non-functional changes to sync to
+ upstream.
+
+At this point all that's left is the force-enable of HDMI connector,
+and using direct firmware calls to turn on V3D instead of the generic
+power domain support.
+
+Signed-off-by: Eric Anholt <eric at anholt.net>
+---
+ drivers/gpu/drm/vc4/vc4_v3d.c |   2 +-
+ include/uapi/drm/vc4_drm.h    | 182 +++++++++++++++++++++---------------------
+ 2 files changed, 92 insertions(+), 92 deletions(-)
+
+--- a/drivers/gpu/drm/vc4/vc4_v3d.c
++++ b/drivers/gpu/drm/vc4/vc4_v3d.c
+@@ -109,7 +109,7 @@ static const struct {
+ 
+ int vc4_v3d_debugfs_regs(struct seq_file *m, void *unused)
+ {
+-	struct drm_info_node *node = (struct drm_info_node *) m->private;
++	struct drm_info_node *node = (struct drm_info_node *)m->private;
+ 	struct drm_device *dev = node->minor->dev;
+ 	struct vc4_dev *vc4 = to_vc4_dev(dev);
+ 	int i;
+--- a/include/uapi/drm/vc4_drm.h
++++ b/include/uapi/drm/vc4_drm.h
+@@ -24,7 +24,7 @@
+ #ifndef _UAPI_VC4_DRM_H_
+ #define _UAPI_VC4_DRM_H_
+ 
+-#include <drm/drm.h>
++#include "drm.h"
+ 
+ #define DRM_VC4_SUBMIT_CL                         0x00
+ #define DRM_VC4_WAIT_SEQNO                        0x01
+@@ -34,25 +34,25 @@
+ #define DRM_VC4_CREATE_SHADER_BO                  0x05
+ #define DRM_VC4_GET_HANG_STATE                    0x06
+ 
+-#define DRM_IOCTL_VC4_SUBMIT_CL           DRM_IOWR( DRM_COMMAND_BASE + DRM_VC4_SUBMIT_CL, struct drm_vc4_submit_cl)
+-#define DRM_IOCTL_VC4_WAIT_SEQNO          DRM_IOWR( DRM_COMMAND_BASE + DRM_VC4_WAIT_SEQNO, struct drm_vc4_wait_seqno)
+-#define DRM_IOCTL_VC4_WAIT_BO             DRM_IOWR( DRM_COMMAND_BASE + DRM_VC4_WAIT_BO, struct drm_vc4_wait_bo)
+-#define DRM_IOCTL_VC4_CREATE_BO           DRM_IOWR( DRM_COMMAND_BASE + DRM_VC4_CREATE_BO, struct drm_vc4_create_bo)
+-#define DRM_IOCTL_VC4_MMAP_BO             DRM_IOWR( DRM_COMMAND_BASE + DRM_VC4_MMAP_BO, struct drm_vc4_mmap_bo)
+-#define DRM_IOCTL_VC4_CREATE_SHADER_BO    DRM_IOWR( DRM_COMMAND_BASE + DRM_VC4_CREATE_SHADER_BO, struct drm_vc4_create_shader_bo)
+-#define DRM_IOCTL_VC4_GET_HANG_STATE      DRM_IOWR( DRM_COMMAND_BASE + DRM_VC4_GET_HANG_STATE, struct drm_vc4_get_hang_state)
++#define DRM_IOCTL_VC4_SUBMIT_CL           DRM_IOWR(DRM_COMMAND_BASE + DRM_VC4_SUBMIT_CL, struct drm_vc4_submit_cl)
++#define DRM_IOCTL_VC4_WAIT_SEQNO          DRM_IOWR(DRM_COMMAND_BASE + DRM_VC4_WAIT_SEQNO, struct drm_vc4_wait_seqno)
++#define DRM_IOCTL_VC4_WAIT_BO             DRM_IOWR(DRM_COMMAND_BASE + DRM_VC4_WAIT_BO, struct drm_vc4_wait_bo)
++#define DRM_IOCTL_VC4_CREATE_BO           DRM_IOWR(DRM_COMMAND_BASE + DRM_VC4_CREATE_BO, struct drm_vc4_create_bo)
++#define DRM_IOCTL_VC4_MMAP_BO             DRM_IOWR(DRM_COMMAND_BASE + DRM_VC4_MMAP_BO, struct drm_vc4_mmap_bo)
++#define DRM_IOCTL_VC4_CREATE_SHADER_BO    DRM_IOWR(DRM_COMMAND_BASE + DRM_VC4_CREATE_SHADER_BO, struct drm_vc4_create_shader_bo)
++#define DRM_IOCTL_VC4_GET_HANG_STATE      DRM_IOWR(DRM_COMMAND_BASE + DRM_VC4_GET_HANG_STATE, struct drm_vc4_get_hang_state)
+ 
+ struct drm_vc4_submit_rcl_surface {
+-	uint32_t hindex; /* Handle index, or ~0 if not present. */
+-	uint32_t offset; /* Offset to start of buffer. */
++	__u32 hindex; /* Handle index, or ~0 if not present. */
++	__u32 offset; /* Offset to start of buffer. */
+ 	/*
+-         * Bits for either render config (color_write) or load/store packet.
+-         * Bits should all be 0 for MSAA load/stores.
++	 * Bits for either render config (color_write) or load/store packet.
++	 * Bits should all be 0 for MSAA load/stores.
+ 	 */
+-	uint16_t bits;
++	__u16 bits;
+ 
+ #define VC4_SUBMIT_RCL_SURFACE_READ_IS_FULL_RES		(1 << 0)
+-	uint16_t flags;
++	__u16 flags;
+ };
+ 
+ /**
+@@ -76,7 +76,7 @@ struct drm_vc4_submit_cl {
+ 	 * then writes out the state updates and draw calls necessary per tile
+ 	 * to the tile allocation BO.
+ 	 */
+-	uint64_t bin_cl;
++	__u64 bin_cl;
+ 
+ 	/* Pointer to the shader records.
+ 	 *
+@@ -85,16 +85,16 @@ struct drm_vc4_submit_cl {
+ 	 * reference to the shader record has enough information to determine
+ 	 * how many pointers are necessary (fixed number for shaders/uniforms,
+ 	 * and an attribute count), so those BO indices into bo_handles are
+-	 * just stored as uint32_ts before each shader record passed in.
++	 * just stored as __u32s before each shader record passed in.
+ 	 */
+-	uint64_t shader_rec;
++	__u64 shader_rec;
+ 
+ 	/* Pointer to uniform data and texture handles for the textures
+ 	 * referenced by the shader.
+ 	 *
+ 	 * For each shader state record, there is a set of uniform data in the
+ 	 * order referenced by the record (FS, VS, then CS).  Each set of
+-	 * uniform data has a uint32_t index into bo_handles per texture
++	 * uniform data has a __u32 index into bo_handles per texture
+ 	 * sample operation, in the order the QPU_W_TMUn_S writes appear in
+ 	 * the program.  Following the texture BO handle indices is the actual
+ 	 * uniform data.
+@@ -103,52 +103,52 @@ struct drm_vc4_submit_cl {
+ 	 * because the kernel has to determine the sizes anyway during shader
+ 	 * code validation.
+ 	 */
+-	uint64_t uniforms;
+-	uint64_t bo_handles;
++	__u64 uniforms;
++	__u64 bo_handles;
+ 
+ 	/* Size in bytes of the binner command list. */
+-	uint32_t bin_cl_size;
++	__u32 bin_cl_size;
+ 	/* Size in bytes of the set of shader records. */
+-	uint32_t shader_rec_size;
++	__u32 shader_rec_size;
+ 	/* Number of shader records.
+ 	 *
+ 	 * This could just be computed from the contents of shader_records and
+ 	 * the address bits of references to them from the bin CL, but it
+ 	 * keeps the kernel from having to resize some allocations it makes.
+ 	 */
+-	uint32_t shader_rec_count;
++	__u32 shader_rec_count;
+ 	/* Size in bytes of the uniform state. */
+-	uint32_t uniforms_size;
++	__u32 uniforms_size;
+ 
+ 	/* Number of BO handles passed in (size is that times 4). */
+-	uint32_t bo_handle_count;
++	__u32 bo_handle_count;
+ 
+ 	/* RCL setup: */
+-	uint16_t width;
+-	uint16_t height;
+-	uint8_t min_x_tile;
+-	uint8_t min_y_tile;
+-	uint8_t max_x_tile;
+-	uint8_t max_y_tile;
++	__u16 width;
++	__u16 height;
++	__u8 min_x_tile;
++	__u8 min_y_tile;
++	__u8 max_x_tile;
++	__u8 max_y_tile;
+ 	struct drm_vc4_submit_rcl_surface color_read;
+ 	struct drm_vc4_submit_rcl_surface color_write;
+ 	struct drm_vc4_submit_rcl_surface zs_read;
+ 	struct drm_vc4_submit_rcl_surface zs_write;
+ 	struct drm_vc4_submit_rcl_surface msaa_color_write;
+ 	struct drm_vc4_submit_rcl_surface msaa_zs_write;
+-	uint32_t clear_color[2];
+-	uint32_t clear_z;
+-	uint8_t clear_s;
++	__u32 clear_color[2];
++	__u32 clear_z;
++	__u8 clear_s;
+ 
+-	uint32_t pad:24;
++	__u32 pad:24;
+ 
+ #define VC4_SUBMIT_CL_USE_CLEAR_COLOR			(1 << 0)
+-	uint32_t flags;
++	__u32 flags;
+ 
+ 	/* Returned value of the seqno of this render job (for the
+ 	 * wait ioctl).
+ 	 */
+-	uint64_t seqno;
++	__u64 seqno;
+ };
+ 
+ /**
+@@ -159,8 +159,8 @@ struct drm_vc4_submit_cl {
+  * block, just return the status."
+  */
+ struct drm_vc4_wait_seqno {
+-	uint64_t seqno;
+-	uint64_t timeout_ns;
++	__u64 seqno;
++	__u64 timeout_ns;
+ };
+ 
+ /**
+@@ -172,9 +172,9 @@ struct drm_vc4_wait_seqno {
+  * completed.
+  */
+ struct drm_vc4_wait_bo {
+-	uint32_t handle;
+-	uint32_t pad;
+-	uint64_t timeout_ns;
++	__u32 handle;
++	__u32 pad;
++	__u64 timeout_ns;
+ };
+ 
+ /**
+@@ -184,11 +184,30 @@ struct drm_vc4_wait_bo {
+  * used in a future extension.
+  */
+ struct drm_vc4_create_bo {
+-	uint32_t size;
+-	uint32_t flags;
++	__u32 size;
++	__u32 flags;
+ 	/** Returned GEM handle for the BO. */
+-	uint32_t handle;
+-	uint32_t pad;
++	__u32 handle;
++	__u32 pad;
++};
++
++/**
++ * struct drm_vc4_mmap_bo - ioctl argument for mapping VC4 BOs.
++ *
++ * This doesn't actually perform an mmap.  Instead, it returns the
++ * offset you need to use in an mmap on the DRM device node.  This
++ * means that tools like valgrind end up knowing about the mapped
++ * memory.
++ *
++ * There are currently no values for the flags argument, but it may be
++ * used in a future extension.
++ */
++struct drm_vc4_mmap_bo {
++	/** Handle for the object being mapped. */
++	__u32 handle;
++	__u32 flags;
++	/** offset into the drm node to use for subsequent mmap call. */
++	__u64 offset;
+ };
+ 
+ /**
+@@ -201,43 +220,24 @@ struct drm_vc4_create_bo {
+  */
+ struct drm_vc4_create_shader_bo {
+ 	/* Size of the data argument. */
+-	uint32_t size;
++	__u32 size;
+ 	/* Flags, currently must be 0. */
+-	uint32_t flags;
++	__u32 flags;
+ 
+ 	/* Pointer to the data. */
+-	uint64_t data;
++	__u64 data;
+ 
+ 	/** Returned GEM handle for the BO. */
+-	uint32_t handle;
++	__u32 handle;
+ 	/* Pad, must be 0. */
+-	uint32_t pad;
+-};
+-
+-/**
+- * struct drm_vc4_mmap_bo - ioctl argument for mapping VC4 BOs.
+- *
+- * This doesn't actually perform an mmap.  Instead, it returns the
+- * offset you need to use in an mmap on the DRM device node.  This
+- * means that tools like valgrind end up knowing about the mapped
+- * memory.
+- *
+- * There are currently no values for the flags argument, but it may be
+- * used in a future extension.
+- */
+-struct drm_vc4_mmap_bo {
+-	/** Handle for the object being mapped. */
+-	uint32_t handle;
+-	uint32_t flags;
+-	/** offset into the drm node to use for subsequent mmap call. */
+-	uint64_t offset;
++	__u32 pad;
+ };
+ 
+ struct drm_vc4_get_hang_state_bo {
+-	uint32_t handle;
+-	uint32_t paddr;
+-	uint32_t size;
+-	uint32_t pad;
++	__u32 handle;
++	__u32 paddr;
++	__u32 size;
++	__u32 pad;
+ };
+ 
+ /**
+@@ -246,34 +246,34 @@ struct drm_vc4_get_hang_state_bo {
+ */
+ struct drm_vc4_get_hang_state {
+ 	/** Pointer to array of struct drm_vc4_get_hang_state_bo. */
+-	uint64_t bo;
++	__u64 bo;
+ 	/**
+ 	 * On input, the size of the bo array.  Output is the number
+ 	 * of bos to be returned.
+ 	 */
+-	uint32_t bo_count;
++	__u32 bo_count;
+ 
+-	uint32_t start_bin, start_render;
++	__u32 start_bin, start_render;
+ 
+-	uint32_t ct0ca, ct0ea;
+-	uint32_t ct1ca, ct1ea;
+-	uint32_t ct0cs, ct1cs;
+-	uint32_t ct0ra0, ct1ra0;
+-
+-	uint32_t bpca, bpcs;
+-	uint32_t bpoa, bpos;
+-
+-	uint32_t vpmbase;
+-
+-	uint32_t dbge;
+-	uint32_t fdbgo;
+-	uint32_t fdbgb;
+-	uint32_t fdbgr;
+-	uint32_t fdbgs;
+-	uint32_t errstat;
++	__u32 ct0ca, ct0ea;
++	__u32 ct1ca, ct1ea;
++	__u32 ct0cs, ct1cs;
++	__u32 ct0ra0, ct1ra0;
++
++	__u32 bpca, bpcs;
++	__u32 bpoa, bpos;
++
++	__u32 vpmbase;
++
++	__u32 dbge;
++	__u32 fdbgo;
++	__u32 fdbgb;
++	__u32 fdbgr;
++	__u32 fdbgs;
++	__u32 errstat;
+ 
+ 	/* Pad that we may save more registers into in the future. */
+-	uint32_t pad[16];
++	__u32 pad[16];
+ };
+ 
+ #endif /* _UAPI_VC4_DRM_H_ */
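
The uint32_t/uint64_t to __u32/__u64 switch above is not cosmetic: vc4_drm.h is a UAPI header, so both the kernel and userspace need fixed-width types that do not depend on the C library's <stdint.h>. Userspace picks them up from <linux/types.h>, as in this small sketch (the struct is illustrative only, not one of the real VC4 ioctl structs):

  #include <stdio.h>
  #include <linux/types.h>

  /* Illustrative ioctl-argument layout using the kernel's exported fixed-width types. */
  struct example_ioctl_args {
          __u32 handle;
          __u32 flags;
          __u64 offset;
  };

  int main(void)
  {
          printf("sizeof(struct example_ioctl_args) = %zu\n",
                 sizeof(struct example_ioctl_args));
          return 0;
  }
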
diff --git a/target/linux/brcm2708/patches-4.4/0119-drm-vc4-Use-hpd-gpios-for-HDMI-GPIO-like-what-landed.patch b/target/linux/brcm2708/patches-4.4/0119-drm-vc4-Use-hpd-gpios-for-HDMI-GPIO-like-what-landed.patch
new file mode 100644
index 0000000..3cb5398
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0119-drm-vc4-Use-hpd-gpios-for-HDMI-GPIO-like-what-landed.patch
@@ -0,0 +1,22 @@
+From d0afa9ee62039105ac62f350b168f3ac4c013734 Mon Sep 17 00:00:00 2001
+From: Eric Anholt <eric at anholt.net>
+Date: Tue, 15 Dec 2015 23:46:32 +0000
+Subject: [PATCH 119/127] drm/vc4: Use "hpd-gpios" for HDMI GPIO, like what
+ landed upstream.
+
+Signed-off-by: Eric Anholt <eric at anholt.net>
+---
+ arch/arm/boot/dts/overlays/vc4-kms-v3d-overlay.dts | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/arch/arm/boot/dts/overlays/vc4-kms-v3d-overlay.dts
++++ b/arch/arm/boot/dts/overlays/vc4-kms-v3d-overlay.dts
+@@ -68,7 +68,7 @@
+ 				      <0x7e808000 0x100>;
+ 				interrupts = <2 8>, <2 9>;
+ 				ddc = <&i2c2>;
+-				hpd-gpio = <&gpio 46 GPIO_ACTIVE_HIGH>;
++				hpd-gpios = <&gpio 46 GPIO_ACTIVE_HIGH>;
+ 				clocks = <&cprman BCM2835_PLLH_PIX>,
+ 					 <&cprman BCM2835_CLOCK_HSM>;
+ 				clock-names = "pixel", "hdmi";
diff --git a/target/linux/brcm2708/patches-4.4/0120-drm-vc4-Synchronize-validation-code-for-v2-submissio.patch b/target/linux/brcm2708/patches-4.4/0120-drm-vc4-Synchronize-validation-code-for-v2-submissio.patch
new file mode 100644
index 0000000..89001b8
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0120-drm-vc4-Synchronize-validation-code-for-v2-submissio.patch
@@ -0,0 +1,612 @@
+From 1de82ff3a47093ed2ff41a0288c6ebe21d6bfcb7 Mon Sep 17 00:00:00 2001
+From: Eric Anholt <eric at anholt.net>
+Date: Mon, 7 Dec 2015 12:35:01 -0800
+Subject: [PATCH 120/127] drm/vc4: Synchronize validation code for v2
+ submission upstream.
+
+Signed-off-by: Eric Anholt <eric at anholt.net>
+---
+ drivers/gpu/drm/vc4/vc4_drv.h       |  24 +--
+ drivers/gpu/drm/vc4/vc4_gem.c       |  14 +-
+ drivers/gpu/drm/vc4/vc4_render_cl.c |   6 +-
+ drivers/gpu/drm/vc4/vc4_validate.c  | 287 +++++++++++++++---------------------
+ 4 files changed, 135 insertions(+), 196 deletions(-)
+
+--- a/drivers/gpu/drm/vc4/vc4_drv.h
++++ b/drivers/gpu/drm/vc4/vc4_drv.h
+@@ -189,17 +189,6 @@ to_vc4_encoder(struct drm_encoder *encod
+ #define HVS_READ(offset) readl(vc4->hvs->regs + offset)
+ #define HVS_WRITE(offset, val) writel(val, vc4->hvs->regs + offset)
+ 
+-enum vc4_bo_mode {
+-	VC4_MODE_UNDECIDED,
+-	VC4_MODE_RENDER,
+-	VC4_MODE_SHADER,
+-};
+-
+-struct vc4_bo_exec_state {
+-	struct drm_gem_cma_object *bo;
+-	enum vc4_bo_mode mode;
+-};
+-
+ struct vc4_exec_info {
+ 	/* Sequence number for this bin/render job. */
+ 	uint64_t seqno;
+@@ -210,7 +199,7 @@ struct vc4_exec_info {
+ 	/* This is the array of BOs that were looked up at the start of exec.
+ 	 * Command validation will use indices into this array.
+ 	 */
+-	struct vc4_bo_exec_state *bo;
++	struct drm_gem_cma_object **bo;
+ 	uint32_t bo_count;
+ 
+ 	/* Pointers for our position in vc4->job_list */
+@@ -238,7 +227,6 @@ struct vc4_exec_info {
+ 	 * command lists.
+ 	 */
+ 	struct vc4_shader_state {
+-		uint8_t packet;
+ 		uint32_t addr;
+ 		/* Maximum vertex index referenced by any primitive using this
+ 		 * shader state.
+@@ -254,6 +242,7 @@ struct vc4_exec_info {
+ 	bool found_tile_binning_mode_config_packet;
+ 	bool found_start_tile_binning_packet;
+ 	bool found_increment_semaphore_packet;
++	bool found_flush;
+ 	uint8_t bin_tiles_x, bin_tiles_y;
+ 	struct drm_gem_cma_object *tile_bo;
+ 	uint32_t tile_alloc_offset;
+@@ -265,6 +254,9 @@ struct vc4_exec_info {
+ 	uint32_t ct0ca, ct0ea;
+ 	uint32_t ct1ca, ct1ea;
+ 
++	/* Pointer to the unvalidated bin CL (if present). */
++	void *bin_u;
++
+ 	/* Pointers to the shader recs.  These paddr gets incremented as CL
+ 	 * packets are relocated in validate_gl_shader_state, and the vaddrs
+ 	 * (u and v) get incremented and size decremented as the shader recs
+@@ -455,10 +447,8 @@ vc4_validate_bin_cl(struct drm_device *d
+ int
+ vc4_validate_shader_recs(struct drm_device *dev, struct vc4_exec_info *exec);
+ 
+-bool vc4_use_bo(struct vc4_exec_info *exec,
+-		uint32_t hindex,
+-		enum vc4_bo_mode mode,
+-		struct drm_gem_cma_object **obj);
++struct drm_gem_cma_object *vc4_use_bo(struct vc4_exec_info *exec,
++				      uint32_t hindex);
+ 
+ int vc4_get_rcl(struct drm_device *dev, struct vc4_exec_info *exec);
+ 
+--- a/drivers/gpu/drm/vc4/vc4_gem.c
++++ b/drivers/gpu/drm/vc4/vc4_gem.c
+@@ -169,8 +169,8 @@ vc4_save_hang_state(struct drm_device *d
+ 	}
+ 
+ 	for (i = 0; i < exec->bo_count; i++) {
+-		drm_gem_object_reference(&exec->bo[i].bo->base);
+-		kernel_state->bo[i] = &exec->bo[i].bo->base;
++		drm_gem_object_reference(&exec->bo[i]->base);
++		kernel_state->bo[i] = &exec->bo[i]->base;
+ 	}
+ 
+ 	list_for_each_entry(bo, &exec->unref_list, unref_head) {
+@@ -397,7 +397,7 @@ vc4_update_bo_seqnos(struct vc4_exec_inf
+ 	unsigned i;
+ 
+ 	for (i = 0; i < exec->bo_count; i++) {
+-		bo = to_vc4_bo(&exec->bo[i].bo->base);
++		bo = to_vc4_bo(&exec->bo[i]->base);
+ 		bo->seqno = seqno;
+ 	}
+ 
+@@ -467,7 +467,7 @@ vc4_cl_lookup_bos(struct drm_device *dev
+ 		return -EINVAL;
+ 	}
+ 
+-	exec->bo = kcalloc(exec->bo_count, sizeof(struct vc4_bo_exec_state),
++	exec->bo = kcalloc(exec->bo_count, sizeof(struct drm_gem_cma_object *),
+ 			   GFP_KERNEL);
+ 	if (!exec->bo) {
+ 		DRM_ERROR("Failed to allocate validated BO pointers\n");
+@@ -500,7 +500,7 @@ vc4_cl_lookup_bos(struct drm_device *dev
+ 			goto fail;
+ 		}
+ 		drm_gem_object_reference(bo);
+-		exec->bo[i].bo = (struct drm_gem_cma_object *)bo;
++		exec->bo[i] = (struct drm_gem_cma_object *)bo;
+ 	}
+ 	spin_unlock(&file_priv->table_lock);
+ 
+@@ -591,6 +591,8 @@ vc4_get_bcl(struct drm_device *dev, stru
+ 
+ 	exec->ct0ca = exec->exec_bo->paddr + bin_offset;
+ 
++	exec->bin_u = bin;
++
+ 	exec->shader_rec_v = exec->exec_bo->vaddr + shader_rec_offset;
+ 	exec->shader_rec_p = exec->exec_bo->paddr + shader_rec_offset;
+ 	exec->shader_rec_size = args->shader_rec_size;
+@@ -622,7 +624,7 @@ vc4_complete_exec(struct drm_device *dev
+ 	mutex_lock(&dev->struct_mutex);
+ 	if (exec->bo) {
+ 		for (i = 0; i < exec->bo_count; i++)
+-			drm_gem_object_unreference(&exec->bo[i].bo->base);
++			drm_gem_object_unreference(&exec->bo[i]->base);
+ 		kfree(exec->bo);
+ 	}
+ 
+--- a/drivers/gpu/drm/vc4/vc4_render_cl.c
++++ b/drivers/gpu/drm/vc4/vc4_render_cl.c
+@@ -436,7 +436,8 @@ static int vc4_rcl_surface_setup(struct
+ 	if (surf->hindex == ~0)
+ 		return 0;
+ 
+-	if (!vc4_use_bo(exec, surf->hindex, VC4_MODE_RENDER, obj))
++	*obj = vc4_use_bo(exec, surf->hindex);
++	if (!*obj)
+ 		return -EINVAL;
+ 
+ 	if (surf->flags & VC4_SUBMIT_RCL_SURFACE_READ_IS_FULL_RES) {
+@@ -537,7 +538,8 @@ vc4_rcl_render_config_surface_setup(stru
+ 	if (surf->hindex == ~0)
+ 		return 0;
+ 
+-	if (!vc4_use_bo(exec, surf->hindex, VC4_MODE_RENDER, obj))
++	*obj = vc4_use_bo(exec, surf->hindex);
++	if (!*obj)
+ 		return -EINVAL;
+ 
+ 	if (tiling > VC4_TILING_FORMAT_LT) {
+--- a/drivers/gpu/drm/vc4/vc4_validate.c
++++ b/drivers/gpu/drm/vc4/vc4_validate.c
+@@ -94,42 +94,42 @@ size_is_lt(uint32_t width, uint32_t heig
+ 		height <= 4 * utile_height(cpp));
+ }
+ 
+-bool
+-vc4_use_bo(struct vc4_exec_info *exec,
+-	   uint32_t hindex,
+-	   enum vc4_bo_mode mode,
+-	   struct drm_gem_cma_object **obj)
++struct drm_gem_cma_object *
++vc4_use_bo(struct vc4_exec_info *exec, uint32_t hindex)
+ {
+-	*obj = NULL;
++	struct drm_gem_cma_object *obj;
++	struct vc4_bo *bo;
+ 
+ 	if (hindex >= exec->bo_count) {
+ 		DRM_ERROR("BO index %d greater than BO count %d\n",
+ 			  hindex, exec->bo_count);
+-		return false;
++		return NULL;
+ 	}
++	obj = exec->bo[hindex];
++	bo = to_vc4_bo(&obj->base);
+ 
+-	if (exec->bo[hindex].mode != mode) {
+-		if (exec->bo[hindex].mode == VC4_MODE_UNDECIDED) {
+-			exec->bo[hindex].mode = mode;
+-		} else {
+-			DRM_ERROR("BO index %d reused with mode %d vs %d\n",
+-				  hindex, exec->bo[hindex].mode, mode);
+-			return false;
+-		}
++	if (bo->validated_shader) {
++		DRM_ERROR("Trying to use shader BO as something other than "
++			  "a shader\n");
++		return NULL;
+ 	}
+ 
+-	*obj = exec->bo[hindex].bo;
+-	return true;
++	return obj;
++}
++
++static struct drm_gem_cma_object *
++vc4_use_handle(struct vc4_exec_info *exec, uint32_t gem_handles_packet_index)
++{
++	return vc4_use_bo(exec, exec->bo_index[gem_handles_packet_index]);
+ }
+ 
+ static bool
+-vc4_use_handle(struct vc4_exec_info *exec,
+-	       uint32_t gem_handles_packet_index,
+-	       enum vc4_bo_mode mode,
+-	       struct drm_gem_cma_object **obj)
++validate_bin_pos(struct vc4_exec_info *exec, void *untrusted, uint32_t pos)
+ {
+-	return vc4_use_bo(exec, exec->bo_index[gem_handles_packet_index],
+-			  mode, obj);
++	/* Note that the untrusted pointer passed to these functions is
++	 * incremented past the packet byte.
++	 */
++	return (untrusted - 1 == exec->bin_u + pos);
+ }
+ 
+ static uint32_t
+@@ -202,13 +202,13 @@ vc4_check_tex_size(struct vc4_exec_info
+ }
+ 
+ static int
+-validate_flush_all(VALIDATE_ARGS)
++validate_flush(VALIDATE_ARGS)
+ {
+-	if (exec->found_increment_semaphore_packet) {
+-		DRM_ERROR("VC4_PACKET_FLUSH_ALL after "
+-			  "VC4_PACKET_INCREMENT_SEMAPHORE\n");
++	if (!validate_bin_pos(exec, untrusted, exec->args->bin_cl_size - 1)) {
++		DRM_ERROR("Bin CL must end with VC4_PACKET_FLUSH\n");
+ 		return -EINVAL;
+ 	}
++	exec->found_flush = true;
+ 
+ 	return 0;
+ }
+@@ -233,17 +233,13 @@ validate_start_tile_binning(VALIDATE_ARG
+ static int
+ validate_increment_semaphore(VALIDATE_ARGS)
+ {
+-	if (exec->found_increment_semaphore_packet) {
+-		DRM_ERROR("Duplicate VC4_PACKET_INCREMENT_SEMAPHORE\n");
++	if (!validate_bin_pos(exec, untrusted, exec->args->bin_cl_size - 2)) {
++		DRM_ERROR("Bin CL must end with "
++			  "VC4_PACKET_INCREMENT_SEMAPHORE\n");
+ 		return -EINVAL;
+ 	}
+ 	exec->found_increment_semaphore_packet = true;
+ 
+-	/* Once we've found the semaphore increment, there should be one FLUSH
+-	 * then the end of the command list.  The FLUSH actually triggers the
+-	 * increment, so we only need to make sure there
+-	 */
+-
+ 	return 0;
+ }
+ 
+@@ -257,11 +253,6 @@ validate_indexed_prim_list(VALIDATE_ARGS
+ 	uint32_t index_size = (*(uint8_t *)(untrusted + 0) >> 4) ? 2 : 1;
+ 	struct vc4_shader_state *shader_state;
+ 
+-	if (exec->found_increment_semaphore_packet) {
+-		DRM_ERROR("Drawing after VC4_PACKET_INCREMENT_SEMAPHORE\n");
+-		return -EINVAL;
+-	}
+-
+ 	/* Check overflow condition */
+ 	if (exec->shader_state_count == 0) {
+ 		DRM_ERROR("shader state must precede primitives\n");
+@@ -272,7 +263,8 @@ validate_indexed_prim_list(VALIDATE_ARGS
+ 	if (max_index > shader_state->max_index)
+ 		shader_state->max_index = max_index;
+ 
+-	if (!vc4_use_handle(exec, 0, VC4_MODE_RENDER, &ib))
++	ib = vc4_use_handle(exec, 0);
++	if (!ib)
+ 		return -EINVAL;
+ 
+ 	if (offset > ib->base.size ||
+@@ -295,11 +287,6 @@ validate_gl_array_primitive(VALIDATE_ARG
+ 	uint32_t max_index;
+ 	struct vc4_shader_state *shader_state;
+ 
+-	if (exec->found_increment_semaphore_packet) {
+-		DRM_ERROR("Drawing after VC4_PACKET_INCREMENT_SEMAPHORE\n");
+-		return -EINVAL;
+-	}
+-
+ 	/* Check overflow condition */
+ 	if (exec->shader_state_count == 0) {
+ 		DRM_ERROR("shader state must precede primitives\n");
+@@ -329,7 +316,6 @@ validate_gl_shader_state(VALIDATE_ARGS)
+ 		return -EINVAL;
+ 	}
+ 
+-	exec->shader_state[i].packet = VC4_PACKET_GL_SHADER_STATE;
+ 	exec->shader_state[i].addr = *(uint32_t *)untrusted;
+ 	exec->shader_state[i].max_index = 0;
+ 
+@@ -348,31 +334,6 @@ validate_gl_shader_state(VALIDATE_ARGS)
+ }
+ 
+ static int
+-validate_nv_shader_state(VALIDATE_ARGS)
+-{
+-	uint32_t i = exec->shader_state_count++;
+-
+-	if (i >= exec->shader_state_size) {
+-		DRM_ERROR("More requests for shader states than declared\n");
+-		return -EINVAL;
+-	}
+-
+-	exec->shader_state[i].packet = VC4_PACKET_NV_SHADER_STATE;
+-	exec->shader_state[i].addr = *(uint32_t *)untrusted;
+-
+-	if (exec->shader_state[i].addr & 15) {
+-		DRM_ERROR("NV shader state address 0x%08x misaligned\n",
+-			  exec->shader_state[i].addr);
+-		return -EINVAL;
+-	}
+-
+-	*(uint32_t *)validated = (exec->shader_state[i].addr +
+-				  exec->shader_rec_p);
+-
+-	return 0;
+-}
+-
+-static int
+ validate_tile_binning_config(VALIDATE_ARGS)
+ {
+ 	struct drm_device *dev = exec->exec_bo->base.dev;
+@@ -473,8 +434,8 @@ static const struct cmd_info {
+ } cmd_info[] = {
+ 	VC4_DEFINE_PACKET(VC4_PACKET_HALT, NULL),
+ 	VC4_DEFINE_PACKET(VC4_PACKET_NOP, NULL),
+-	VC4_DEFINE_PACKET(VC4_PACKET_FLUSH, NULL),
+-	VC4_DEFINE_PACKET(VC4_PACKET_FLUSH_ALL, validate_flush_all),
++	VC4_DEFINE_PACKET(VC4_PACKET_FLUSH, validate_flush),
++	VC4_DEFINE_PACKET(VC4_PACKET_FLUSH_ALL, NULL),
+ 	VC4_DEFINE_PACKET(VC4_PACKET_START_TILE_BINNING,
+ 			  validate_start_tile_binning),
+ 	VC4_DEFINE_PACKET(VC4_PACKET_INCREMENT_SEMAPHORE,
+@@ -488,7 +449,6 @@ static const struct cmd_info {
+ 	VC4_DEFINE_PACKET(VC4_PACKET_PRIMITIVE_LIST_FORMAT, NULL),
+ 
+ 	VC4_DEFINE_PACKET(VC4_PACKET_GL_SHADER_STATE, validate_gl_shader_state),
+-	VC4_DEFINE_PACKET(VC4_PACKET_NV_SHADER_STATE, validate_nv_shader_state),
+ 
+ 	VC4_DEFINE_PACKET(VC4_PACKET_CONFIGURATION_BITS, NULL),
+ 	VC4_DEFINE_PACKET(VC4_PACKET_FLAT_SHADE_FLAGS, NULL),
+@@ -575,8 +535,16 @@ vc4_validate_bin_cl(struct drm_device *d
+ 		return -EINVAL;
+ 	}
+ 
+-	if (!exec->found_increment_semaphore_packet) {
+-		DRM_ERROR("Bin CL missing VC4_PACKET_INCREMENT_SEMAPHORE\n");
++	/* The bin CL must be ended with INCREMENT_SEMAPHORE and FLUSH.  The
++	 * semaphore is used to trigger the render CL to start up, and the
++	 * FLUSH is what caps the bin lists with
++	 * VC4_PACKET_RETURN_FROM_SUB_LIST (so they jump back to the main
++	 * render CL when they get called to) and actually triggers the queued
++	 * semaphore increment.
++	 */
++	if (!exec->found_increment_semaphore_packet || !exec->found_flush) {
++		DRM_ERROR("Bin CL missing VC4_PACKET_INCREMENT_SEMAPHORE + "
++			  "VC4_PACKET_FLUSH\n");
+ 		return -EINVAL;
+ 	}
+ 
+@@ -607,7 +575,8 @@ reloc_tex(struct vc4_exec_info *exec,
+ 	uint32_t cube_map_stride = 0;
+ 	enum vc4_texture_data_type type;
+ 
+-	if (!vc4_use_bo(exec, texture_handle_index, VC4_MODE_RENDER, &tex))
++	tex = vc4_use_bo(exec, texture_handle_index);
++	if (!tex)
+ 		return false;
+ 
+ 	if (sample->is_direct) {
+@@ -755,51 +724,28 @@ reloc_tex(struct vc4_exec_info *exec,
+ }
+ 
+ static int
+-validate_shader_rec(struct drm_device *dev,
+-		    struct vc4_exec_info *exec,
+-		    struct vc4_shader_state *state)
++validate_gl_shader_rec(struct drm_device *dev,
++		       struct vc4_exec_info *exec,
++		       struct vc4_shader_state *state)
+ {
+ 	uint32_t *src_handles;
+ 	void *pkt_u, *pkt_v;
+-	enum shader_rec_reloc_type {
+-		RELOC_CODE,
+-		RELOC_VBO,
+-	};
+-	struct shader_rec_reloc {
+-		enum shader_rec_reloc_type type;
+-		uint32_t offset;
+-	};
+-	static const struct shader_rec_reloc gl_relocs[] = {
+-		{ RELOC_CODE, 4 },  /* fs */
+-		{ RELOC_CODE, 16 }, /* vs */
+-		{ RELOC_CODE, 28 }, /* cs */
+-	};
+-	static const struct shader_rec_reloc nv_relocs[] = {
+-		{ RELOC_CODE, 4 }, /* fs */
+-		{ RELOC_VBO, 12 }
++	static const uint32_t shader_reloc_offsets[] = {
++		4, /* fs */
++		16, /* vs */
++		28, /* cs */
+ 	};
+-	const struct shader_rec_reloc *relocs;
+-	struct drm_gem_cma_object *bo[ARRAY_SIZE(gl_relocs) + 8];
+-	uint32_t nr_attributes = 0, nr_fixed_relocs, nr_relocs, packet_size;
++	uint32_t shader_reloc_count = ARRAY_SIZE(shader_reloc_offsets);
++	struct drm_gem_cma_object *bo[shader_reloc_count + 8];
++	uint32_t nr_attributes, nr_relocs, packet_size;
+ 	int i;
+-	struct vc4_validated_shader_info *shader;
+ 
+-	if (state->packet == VC4_PACKET_NV_SHADER_STATE) {
+-		relocs = nv_relocs;
+-		nr_fixed_relocs = ARRAY_SIZE(nv_relocs);
+-
+-		packet_size = 16;
+-	} else {
+-		relocs = gl_relocs;
+-		nr_fixed_relocs = ARRAY_SIZE(gl_relocs);
+-
+-		nr_attributes = state->addr & 0x7;
+-		if (nr_attributes == 0)
+-			nr_attributes = 8;
+-		packet_size = gl_shader_rec_size(state->addr);
+-	}
+-	nr_relocs = nr_fixed_relocs + nr_attributes;
++	nr_attributes = state->addr & 0x7;
++	if (nr_attributes == 0)
++		nr_attributes = 8;
++	packet_size = gl_shader_rec_size(state->addr);
+ 
++	nr_relocs = ARRAY_SIZE(shader_reloc_offsets) + nr_attributes;
+ 	if (nr_relocs * 4 > exec->shader_rec_size) {
+ 		DRM_ERROR("overflowed shader recs reading %d handles "
+ 			  "from %d bytes left\n",
+@@ -829,21 +775,30 @@ validate_shader_rec(struct drm_device *d
+ 	exec->shader_rec_v += roundup(packet_size, 16);
+ 	exec->shader_rec_size -= packet_size;
+ 
+-	for (i = 0; i < nr_relocs; i++) {
+-		enum vc4_bo_mode mode;
++	if (!(*(uint16_t *)pkt_u & VC4_SHADER_FLAG_FS_SINGLE_THREAD)) {
++		DRM_ERROR("Multi-threaded fragment shaders not supported.\n");
++		return -EINVAL;
++	}
+ 
+-		if (i < nr_fixed_relocs && relocs[i].type == RELOC_CODE)
+-			mode = VC4_MODE_SHADER;
+-		else
+-			mode = VC4_MODE_RENDER;
++	for (i = 0; i < shader_reloc_count; i++) {
++		if (src_handles[i] > exec->bo_count) {
++			DRM_ERROR("Shader handle %d too big\n", src_handles[i]);
++			return -EINVAL;
++		}
+ 
+-		if (!vc4_use_bo(exec, src_handles[i], mode, &bo[i]))
+-			return false;
++		bo[i] = exec->bo[src_handles[i]];
++		if (!bo[i])
++			return -EINVAL;
++	}
++	for (i = shader_reloc_count; i < nr_relocs; i++) {
++		bo[i] = vc4_use_bo(exec, src_handles[i]);
++		if (!bo[i])
++			return -EINVAL;
+ 	}
+ 
+-	for (i = 0; i < nr_fixed_relocs; i++) {
+-		struct vc4_bo *vc4_bo;
+-		uint32_t o = relocs[i].offset;
++	for (i = 0; i < shader_reloc_count; i++) {
++		struct vc4_validated_shader_info *validated_shader;
++		uint32_t o = shader_reloc_offsets[i];
+ 		uint32_t src_offset = *(uint32_t *)(pkt_u + o);
+ 		uint32_t *texture_handles_u;
+ 		void *uniform_data_u;
+@@ -851,57 +806,50 @@ validate_shader_rec(struct drm_device *d
+ 
+ 		*(uint32_t *)(pkt_v + o) = bo[i]->paddr + src_offset;
+ 
+-		switch (relocs[i].type) {
+-		case RELOC_CODE:
+-			if (src_offset != 0) {
+-				DRM_ERROR("Shaders must be at offset 0 "
+-					  "of the BO.\n");
+-				goto fail;
+-			}
++		if (src_offset != 0) {
++			DRM_ERROR("Shaders must be at offset 0 of "
++				  "the BO.\n");
++			return -EINVAL;
++		}
+ 
+-			vc4_bo = to_vc4_bo(&bo[i]->base);
+-			shader = vc4_bo->validated_shader;
+-			if (!shader)
+-				goto fail;
++		validated_shader = to_vc4_bo(&bo[i]->base)->validated_shader;
++		if (!validated_shader)
++			return -EINVAL;
+ 
+-			if (shader->uniforms_src_size > exec->uniforms_size) {
+-				DRM_ERROR("Uniforms src buffer overflow\n");
+-				goto fail;
+-			}
++		if (validated_shader->uniforms_src_size >
++		    exec->uniforms_size) {
++			DRM_ERROR("Uniforms src buffer overflow\n");
++			return -EINVAL;
++		}
+ 
+-			texture_handles_u = exec->uniforms_u;
+-			uniform_data_u = (texture_handles_u +
+-					  shader->num_texture_samples);
+-
+-			memcpy(exec->uniforms_v, uniform_data_u,
+-			       shader->uniforms_size);
+-
+-			for (tex = 0;
+-			     tex < shader->num_texture_samples;
+-			     tex++) {
+-				if (!reloc_tex(exec,
+-					       uniform_data_u,
+-					       &shader->texture_samples[tex],
+-					       texture_handles_u[tex])) {
+-					goto fail;
+-				}
++		texture_handles_u = exec->uniforms_u;
++		uniform_data_u = (texture_handles_u +
++				  validated_shader->num_texture_samples);
++
++		memcpy(exec->uniforms_v, uniform_data_u,
++		       validated_shader->uniforms_size);
++
++		for (tex = 0;
++		     tex < validated_shader->num_texture_samples;
++		     tex++) {
++			if (!reloc_tex(exec,
++				       uniform_data_u,
++				       &validated_shader->texture_samples[tex],
++				       texture_handles_u[tex])) {
++				return -EINVAL;
+ 			}
++		}
+ 
+-			*(uint32_t *)(pkt_v + o + 4) = exec->uniforms_p;
+-
+-			exec->uniforms_u += shader->uniforms_src_size;
+-			exec->uniforms_v += shader->uniforms_size;
+-			exec->uniforms_p += shader->uniforms_size;
+-
+-			break;
++		*(uint32_t *)(pkt_v + o + 4) = exec->uniforms_p;
+ 
+-		case RELOC_VBO:
+-			break;
+-		}
++		exec->uniforms_u += validated_shader->uniforms_src_size;
++		exec->uniforms_v += validated_shader->uniforms_size;
++		exec->uniforms_p += validated_shader->uniforms_size;
+ 	}
+ 
+ 	for (i = 0; i < nr_attributes; i++) {
+-		struct drm_gem_cma_object *vbo = bo[nr_fixed_relocs + i];
++		struct drm_gem_cma_object *vbo =
++			bo[ARRAY_SIZE(shader_reloc_offsets) + i];
+ 		uint32_t o = 36 + i * 8;
+ 		uint32_t offset = *(uint32_t *)(pkt_u + o + 0);
+ 		uint32_t attr_size = *(uint8_t *)(pkt_u + o + 4) + 1;
+@@ -933,9 +881,6 @@ validate_shader_rec(struct drm_device *d
+ 	}
+ 
+ 	return 0;
+-
+-fail:
+-	return -EINVAL;
+ }
+ 
+ int
+@@ -946,7 +891,7 @@ vc4_validate_shader_recs(struct drm_devi
+ 	int ret = 0;
+ 
+ 	for (i = 0; i < exec->shader_state_count; i++) {
+-		ret = validate_shader_rec(dev, exec, &exec->shader_state[i]);
++		ret = validate_gl_shader_rec(dev, exec, &exec->shader_state[i]);
+ 		if (ret)
+ 			return ret;
+ 	}
diff --git a/target/linux/brcm2708/patches-4.4/0121-MMC-Do-not-use-mmc_debug-if-CONFIG_MMC_BCM2835-is-no.patch b/target/linux/brcm2708/patches-4.4/0121-MMC-Do-not-use-mmc_debug-if-CONFIG_MMC_BCM2835-is-no.patch
new file mode 100644
index 0000000..07db922
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0121-MMC-Do-not-use-mmc_debug-if-CONFIG_MMC_BCM2835-is-no.patch
@@ -0,0 +1,37 @@
+From 5fc7b3491b5961dcb35251fc908c0fd5988eecde Mon Sep 17 00:00:00 2001
+From: janluca <janluca at zedat.fu-berlin.de>
+Date: Sun, 27 Dec 2015 14:34:04 +0100
+Subject: [PATCH 121/127] MMC: Do not use mmc_debug if CONFIG_MMC_BCM2835 is
+ not set
+
+If CONFIG_MMC_BCM2835 was not set, compiling the kernel failed
+since mmc_debug was not defined but used in drivers/mmc/core/quirks.c.
+
+This patch adds an ifdef check for CONFIG_MMC_BCM2835 to the change of
+commit 64d395457f793250d2e582eeb38cc3403b1db98c
+---
+ drivers/mmc/core/quirks.c | 4 ++++
+ 1 file changed, 4 insertions(+)
+
+--- a/drivers/mmc/core/quirks.c
++++ b/drivers/mmc/core/quirks.c
+@@ -53,7 +53,9 @@ static const struct mmc_fixup mmc_fixup_
+ 
+ void mmc_fixup_device(struct mmc_card *card, const struct mmc_fixup *table)
+ {
++#ifdef CONFIG_MMC_BCM2835
+ 	extern unsigned mmc_debug;
++#endif
+ 	const struct mmc_fixup *f;
+ 	u64 rev = cid_rev_card(card);
+ 
+@@ -81,7 +83,9 @@ void mmc_fixup_device(struct mmc_card *c
+ 	/* SDHCI on BCM2708 - bug causes a certain sequence of CMD23 operations to fail.
+ 	 * Disable this flag for all cards (fall-back to CMD25/CMD18 multi-block transfers).
+ 	 */
++#ifdef CONFIG_MMC_BCM2835
+ 	if (mmc_debug & (1<<13))
+ 	card->quirks |= MMC_QUIRK_BLK_NO_CMD23;
++#endif
+ }
+ EXPORT_SYMBOL(mmc_fixup_device);
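(Aside, not part of the posted patch: the guard pattern here is to wrap both the extern declaration and its only use in the same config symbol, so quirks.c still builds when the bcm2835 driver that actually defines mmc_debug is not selected. A minimal standalone C sketch of the idea, with CONFIG_MMC_BCM2835, mmc_debug and the quirk value acting as stand-ins for the real kernel symbols:

#include <stdio.h>

#define CONFIG_MMC_BCM2835	/* comment out to simulate the driver being disabled */

#ifdef CONFIG_MMC_BCM2835
unsigned mmc_debug = 1 << 13;	/* normally defined by the bcm2835 MMC driver */
#endif

static void fixup_device(unsigned *quirks)
{
#ifdef CONFIG_MMC_BCM2835
	if (mmc_debug & (1 << 13))
		*quirks |= 0x1;	/* stands in for MMC_QUIRK_BLK_NO_CMD23 */
#endif
}

int main(void)
{
	unsigned quirks = 0;

	fixup_device(&quirks);
	printf("quirks: %#x\n", quirks);
	return 0;
}

With the define commented out, the extern reference disappears along with its user and the file still compiles.)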
diff --git a/target/linux/brcm2708/patches-4.4/0122-Extend-clock-timeout-fix-modprobe-baudrate-parameter.patch b/target/linux/brcm2708/patches-4.4/0122-Extend-clock-timeout-fix-modprobe-baudrate-parameter.patch
new file mode 100644
index 0000000..8f578ab
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0122-Extend-clock-timeout-fix-modprobe-baudrate-parameter.patch
@@ -0,0 +1,108 @@
+From ba2e42fe23631e6cf1f2e1167618ab1c13d9b3ba Mon Sep 17 00:00:00 2001
+From: Devon Fyson <devonfyson at gmail.com>
+Date: Wed, 30 Dec 2015 16:40:47 -0500
+Subject: [PATCH 122/127] Extend clock timeout, fix modprobe baudrate
+ parameter.
+
+Set the BSC_CLKT clock stretching timeout to 35ms as per the SMBus spec. Increase the priority of the baudrate parameter passed to modprobe (in /etc/modprobe.d/*.conf or on the command line). Currently custom baudrates don't work because they are overridden by clock-frequency in the platform_device passed to the function.
+---
+ drivers/i2c/busses/i2c-bcm2708.c | 45 ++++++++++++++++++++++++++--------------
+ 1 file changed, 29 insertions(+), 16 deletions(-)
+
+--- a/drivers/i2c/busses/i2c-bcm2708.c
++++ b/drivers/i2c/busses/i2c-bcm2708.c
+@@ -71,7 +71,8 @@
+ 
+ #define DRV_NAME		"bcm2708_i2c"
+ 
+-static unsigned int baudrate = CONFIG_I2C_BCM2708_BAUDRATE;
++static unsigned int baudrate_default = CONFIG_I2C_BCM2708_BAUDRATE;
++static unsigned int baudrate;
+ module_param(baudrate, uint, S_IRUSR | S_IWUSR | S_IRGRP | S_IWGRP);
+ MODULE_PARM_DESC(baudrate, "The I2C baudrate");
+ 
+@@ -87,6 +88,7 @@ struct bcm2708_i2c {
+ 	int irq;
+ 	struct clk *clk;
+ 	u32 cdiv;
++	u32 clk_tout;
+ 
+ 	struct completion done;
+ 
+@@ -126,7 +128,7 @@ static inline void bcm2708_bsc_fifo_fill
+ 
+ static inline int bcm2708_bsc_setup(struct bcm2708_i2c *bi)
+ {
+-	u32 cdiv, s;
++	u32 cdiv, s, clk_tout;
+ 	u32 c = BSC_C_I2CEN | BSC_C_INTD | BSC_C_ST | BSC_C_CLEAR_1;
+ 	int wait_loops = I2C_WAIT_LOOP_COUNT;
+ 
+@@ -134,12 +136,14 @@ static inline int bcm2708_bsc_setup(stru
+ 	 * Use the value that we cached in the probe.
+ 	 */
+ 	cdiv = bi->cdiv;
++	clk_tout = bi->clk_tout;
+ 
+ 	if (bi->msg->flags & I2C_M_RD)
+ 		c |= BSC_C_INTR | BSC_C_READ;
+ 	else
+ 		c |= BSC_C_INTT;
+ 
++	bcm2708_wr(bi, BSC_CLKT, clk_tout);
+ 	bcm2708_wr(bi, BSC_DIV, cdiv);
+ 	bcm2708_wr(bi, BSC_A, bi->msg->addr);
+ 	bcm2708_wr(bi, BSC_DLEN, bi->msg->len);
+@@ -312,21 +316,24 @@ static int bcm2708_i2c_probe(struct plat
+ 	struct bcm2708_i2c *bi;
+ 	struct i2c_adapter *adap;
+ 	unsigned long bus_hz;
+-	u32 cdiv;
+-
+-	if (pdev->dev.of_node) {
+-		u32 bus_clk_rate;
+-		pdev->id = of_alias_get_id(pdev->dev.of_node, "i2c");
+-		if (pdev->id < 0) {
+-			dev_err(&pdev->dev, "alias is missing\n");
+-			return -EINVAL;
++	u32 cdiv, clk_tout;
++	
++	if (!baudrate) {
++		baudrate = baudrate_default;
++		if (pdev->dev.of_node) {
++			u32 bus_clk_rate;
++			pdev->id = of_alias_get_id(pdev->dev.of_node, "i2c");
++			if (pdev->id < 0) {
++				dev_err(&pdev->dev, "alias is missing\n");
++				return -EINVAL;
++			}
++			if (!of_property_read_u32(pdev->dev.of_node,
++						"clock-frequency", &bus_clk_rate))
++				baudrate = bus_clk_rate;
++			else
++				dev_warn(&pdev->dev,
++					"Could not read clock-frequency property\n");
+ 		}
+-		if (!of_property_read_u32(pdev->dev.of_node,
+-					"clock-frequency", &bus_clk_rate))
+-			baudrate = bus_clk_rate;
+-		else
+-			dev_warn(&pdev->dev,
+-				"Could not read clock-frequency property\n");
+ 	}
+ 
+ 	regs = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+@@ -417,7 +424,13 @@ static int bcm2708_i2c_probe(struct plat
+ 		cdiv = 0xffff;
+ 		baudrate = bus_hz / cdiv;
+ 	}
++	
++ 	clk_tout = 35*baudrate/1000; //35ms timeout as per SMBus specs.
++ 	if (clk_tout > 0xffff)
++		clk_tout = 0xffff;
++	
+ 	bi->cdiv = cdiv;
++	bi->clk_tout = clk_tout;
+ 
+ 	dev_info(&pdev->dev, "BSC%d Controller at 0x%08lx (irq %d) (baudrate %d)\n",
+ 		pdev->id, (unsigned long)regs->start, irq, baudrate);
diff --git a/target/linux/brcm2708/patches-4.4/0123-bcm270x_dt-Add-dwc2-and-dwc-otg-overlays.patch b/target/linux/brcm2708/patches-4.4/0123-bcm270x_dt-Add-dwc2-and-dwc-otg-overlays.patch
new file mode 100644
index 0000000..6be1dc3
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0123-bcm270x_dt-Add-dwc2-and-dwc-otg-overlays.patch
@@ -0,0 +1,110 @@
+From 0416b50ebe6a25e512e34f3e11cdfaec045fc5e3 Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?Noralf=20Tr=C3=B8nnes?= <noralf at tronnes.org>
+Date: Thu, 31 Dec 2015 16:44:58 +0100
+Subject: [PATCH 123/127] bcm270x_dt: Add dwc2 and dwc-otg overlays
+
+---
+ arch/arm/boot/dts/overlays/Makefile            |  2 ++
+ arch/arm/boot/dts/overlays/README              | 21 +++++++++++++++++++
+ arch/arm/boot/dts/overlays/dwc-otg-overlay.dts | 20 ++++++++++++++++++
+ arch/arm/boot/dts/overlays/dwc2-overlay.dts    | 29 ++++++++++++++++++++++++++
+ 4 files changed, 72 insertions(+)
+ create mode 100644 arch/arm/boot/dts/overlays/dwc-otg-overlay.dts
+ create mode 100644 arch/arm/boot/dts/overlays/dwc2-overlay.dts
+
+--- a/arch/arm/boot/dts/overlays/Makefile
++++ b/arch/arm/boot/dts/overlays/Makefile
+@@ -15,6 +15,8 @@ endif
+ dtb-$(RPI_DT_OVERLAYS) += ads7846-overlay.dtb
+ dtb-$(RPI_DT_OVERLAYS) += at86rf233-overlay.dtb
+ dtb-$(RPI_DT_OVERLAYS) += bmp085_i2c-sensor-overlay.dtb
++dtb-$(RPI_DT_OVERLAYS) += dwc2-overlay.dtb
++dtb-$(RPI_DT_OVERLAYS) += dwc-otg-overlay.dtb
+ dtb-$(RPI_DT_OVERLAYS) += dht11-overlay.dtb
+ dtb-$(RPI_DT_OVERLAYS) += enc28j60-overlay.dtb
+ dtb-$(RPI_DT_OVERLAYS) += gpio-ir-overlay.dtb
+--- a/arch/arm/boot/dts/overlays/README
++++ b/arch/arm/boot/dts/overlays/README
+@@ -198,6 +198,27 @@ Params: gpiopin                  GPIO co
+                                  (default 4)
+ 
+ 
++Name:   dwc-otg
++Info:   Selects the dwc_otg USB controller driver which has fiq support. This
++        is the default on all models except the Pi Zero, which defaults to dwc2.
++Load:   dtoverlay=dwc-otg
++Params: <None>
++
++
++Name:   dwc2
++Info:   Selects the dwc2 USB controller driver
++Load:   dtoverlay=dwc2,<param>=<val>
++Params: dr_mode                  Dual role mode: "host", "peripheral" or "otg"
++
++        g-np-tx-fifo-size        Size of rx fifo size in gadget mode
++
++        g-rx-fifo-size           Size of non-periodic tx fifo size in gadget
++                                 mode
++
++        g-tx-fifo-size           Size of periodic tx fifo per endpoint
++                                 (except ep0) in gadget mode
++
++
+ [ The ds1307-rtc overlay has been deleted. See i2c-rtc. ]
+ 
+ 
+--- /dev/null
++++ b/arch/arm/boot/dts/overlays/dwc-otg-overlay.dts
+@@ -0,0 +1,20 @@
++/dts-v1/;
++/plugin/;
++
++/{
++	compatible = "brcm,bcm2708";
++
++	fragment at 0 {
++		target = <&usb>;
++		#address-cells = <1>;
++		#size-cells = <1>;
++		__overlay__ {
++			compatible = "brcm,bcm2708-usb";
++			reg = <0x7e980000 0x10000>,
++			      <0x7e006000 0x1000>;
++			interrupts = <2 0>,
++				     <1 9>;
++			status = "okay";
++		};
++	};
++};
+--- /dev/null
++++ b/arch/arm/boot/dts/overlays/dwc2-overlay.dts
+@@ -0,0 +1,29 @@
++/dts-v1/;
++/plugin/;
++
++/{
++	compatible = "brcm,bcm2708";
++
++	fragment at 0 {
++		target = <&usb>;
++		#address-cells = <1>;
++		#size-cells = <1>;
++		__overlay__ {
++			compatible = "brcm,bcm2835-usb";
++			reg = <0x7e980000 0x10000>;
++			interrupts = <1 9>;
++			dr_mode = "otg";
++			g-np-tx-fifo-size = <32>;
++			g-rx-fifo-size = <256>;
++			g-tx-fifo-size = <256 128 128 64 64 64 32>;
++			status = "okay";
++		};
++	};
++
++	__overrides__ {
++		dr_mode = <&usb>, "dr_mode";
++		g-np-tx-fifo-size = <&usb>,"g-np-tx-fifo-size:0";
++		g-rx-fifo-size = <&usb>,"g-rx-fifo-size:0";
++		g-tx-fifo-size = <&usb>,"g-tx-fifo-size:0";
++	};
++};
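(Illustration only, not part of the posted patch: with these overlays in place, a hypothetical config.txt entry selecting the dwc2 driver in gadget mode would look like

  dtoverlay=dwc2,dr_mode=peripheral

while a plain "dtoverlay=dwc-otg" line would force the fiq-capable dwc_otg driver on a Pi Zero, where dwc2 is otherwise the default.)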
diff --git a/target/linux/brcm2708/patches-4.4/0124-BCM270X_DT-Add-the-sdtweak-overlay-for-tuning-sdhost.patch b/target/linux/brcm2708/patches-4.4/0124-BCM270X_DT-Add-the-sdtweak-overlay-for-tuning-sdhost.patch
new file mode 100644
index 0000000..ac3a86e
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0124-BCM270X_DT-Add-the-sdtweak-overlay-for-tuning-sdhost.patch
@@ -0,0 +1,74 @@
+From b7d6f1b965c5de2dd01d6719bee283dffa176362 Mon Sep 17 00:00:00 2001
+From: Phil Elwell <phil at raspberrypi.org>
+Date: Mon, 4 Jan 2016 14:42:17 +0000
+Subject: [PATCH 124/127] BCM270X_DT: Add the sdtweak overlay, for tuning
+ sdhost
+
+The sdhost overlay declares the sdhost interface and allows parameters
+to be set. This is overkill for situations where the user just wants to
+tweak the parameters of a pre-declared sdhost interface, so create an
+sdtweak overlay that does just that.
+---
+ arch/arm/boot/dts/overlays/Makefile            |  1 +
+ arch/arm/boot/dts/overlays/README              | 14 ++++++++++++++
+ arch/arm/boot/dts/overlays/sdtweak-overlay.dts | 21 +++++++++++++++++++++
+ 3 files changed, 36 insertions(+)
+ create mode 100644 arch/arm/boot/dts/overlays/sdtweak-overlay.dts
+
+--- a/arch/arm/boot/dts/overlays/Makefile
++++ b/arch/arm/boot/dts/overlays/Makefile
+@@ -53,6 +53,7 @@ dtb-$(RPI_DT_OVERLAYS) += rpi-proto-over
+ dtb-$(RPI_DT_OVERLAYS) += rpi-sense-overlay.dtb
+ dtb-$(RPI_DT_OVERLAYS) += sdhost-overlay.dtb
+ dtb-$(RPI_DT_OVERLAYS) += sdio-overlay.dtb
++dtb-$(RPI_DT_OVERLAYS) += sdtweak-overlay.dtb
+ dtb-$(RPI_DT_OVERLAYS) += smi-dev-overlay.dtb
+ dtb-$(RPI_DT_OVERLAYS) += smi-nand-overlay.dtb
+ dtb-$(RPI_DT_OVERLAYS) += smi-overlay.dtb
+--- a/arch/arm/boot/dts/overlays/README
++++ b/arch/arm/boot/dts/overlays/README
+@@ -635,6 +635,20 @@ Params: overclock_50             Clock (
+                                  (default on: polling once at boot-time)
+ 
+ 
++Name:   sdtweak
++Info:   Tunes the bcm2835-sdhost SD/MMC driver
++Load:   dtoverlay=sdtweak,<param>=<val>
++Params: overclock_50             Clock (in MHz) to use when the MMC framework
++                                 requests 50MHz
++
++        force_pio                Disable DMA support (default off)
++
++        pio_limit                Number of blocks above which to use DMA
++                                 (default 1)
++
++        debug                    Enable debug output (default off)
++
++
+ Name:   smi
+ Info:   Enables the Secondary Memory Interface peripheral. Uses GPIOs 2-25!
+ Load:   dtoverlay=smi
+--- /dev/null
++++ b/arch/arm/boot/dts/overlays/sdtweak-overlay.dts
+@@ -0,0 +1,21 @@
++/dts-v1/;
++/plugin/;
++
++/{
++	compatible = "brcm,bcm2708";
++
++	fragment at 0 {
++		target = <&sdhost>;
++		frag1: __overlay__ {
++			brcm,overclock-50 = <0>;
++			brcm,pio-limit = <1>;
++		};
++	};
++
++	__overrides__ {
++		overclock_50     = <&frag1>,"brcm,overclock-50:0";
++		force_pio        = <&frag1>,"brcm,force-pio?";
++		pio_limit        = <&frag1>,"brcm,pio-limit:0";
++		debug            = <&frag1>,"brcm,debug?";
++	};
++};
diff --git a/target/linux/brcm2708/patches-4.4/0125-bcm2835-mmc-Don-t-override-bus-width-capabilities-fr.patch b/target/linux/brcm2708/patches-4.4/0125-bcm2835-mmc-Don-t-override-bus-width-capabilities-fr.patch
new file mode 100644
index 0000000..d2505b3
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0125-bcm2835-mmc-Don-t-override-bus-width-capabilities-fr.patch
@@ -0,0 +1,24 @@
+From 70243224e92a2c75e01451a5ed002e76ec211c0d Mon Sep 17 00:00:00 2001
+From: Andrew Litt <ajlitt at splunge.net>
+Date: Mon, 11 Jan 2016 07:54:21 +0000
+Subject: [PATCH 125/127] bcm2835-mmc: Don't override bus width capabilities
+ from devicetree
+
+Take out the force setting of the MMC_CAP_4_BIT_DATA host capability
+so that the result read from devicetree via mmc_of_parse() is
+preserved.
+---
+ drivers/mmc/host/bcm2835-mmc.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/drivers/mmc/host/bcm2835-mmc.c
++++ b/drivers/mmc/host/bcm2835-mmc.c
+@@ -1305,7 +1305,7 @@ static int bcm2835_mmc_add_host(struct b
+ 	/* host controller capabilities */
+ 	mmc->caps |= MMC_CAP_CMD23 | MMC_CAP_ERASE | MMC_CAP_NEEDS_POLL |
+ 		MMC_CAP_SDIO_IRQ | MMC_CAP_SD_HIGHSPEED |
+-		MMC_CAP_MMC_HIGHSPEED | MMC_CAP_4_BIT_DATA;
++		MMC_CAP_MMC_HIGHSPEED;
+ 
+ 	mmc->caps2 |= MMC_CAP2_SDIO_IRQ_NOTHREAD;
+ 
diff --git a/target/linux/brcm2708/patches-4.4/0126-SDIO-overlay-add-bus_width-parameter.patch b/target/linux/brcm2708/patches-4.4/0126-SDIO-overlay-add-bus_width-parameter.patch
new file mode 100644
index 0000000..080cd27
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0126-SDIO-overlay-add-bus_width-parameter.patch
@@ -0,0 +1,42 @@
+From 2b72fe5d7c71e77e5fd0b5c81aa14177843a59a8 Mon Sep 17 00:00:00 2001
+From: Andrew Litt <ajlitt at splunge.net>
+Date: Mon, 11 Jan 2016 07:55:54 +0000
+Subject: [PATCH 126/127] SDIO-overlay: add bus_width parameter
+
+Allow setting of the SDIO bus width capability of the bcm2835-mmc
+host.  This is helpful when only a 1 bit wide bus is connected
+between host and device but both host and device advertise 4 bit
+mode.
+---
+ arch/arm/boot/dts/overlays/README           | 2 ++
+ arch/arm/boot/dts/overlays/sdio-overlay.dts | 2 ++
+ 2 files changed, 4 insertions(+)
+
+--- a/arch/arm/boot/dts/overlays/README
++++ b/arch/arm/boot/dts/overlays/README
+@@ -634,6 +634,8 @@ Params: overclock_50             Clock (
+         poll_once                Disable SDIO-device polling every second
+                                  (default on: polling once at boot-time)
+ 
++        bus_width                Set the SDIO host bus width (default 4 bits)
++
+ 
+ Name:   sdtweak
+ Info:   Tunes the bcm2835-sdhost SD/MMC driver
+--- a/arch/arm/boot/dts/overlays/sdio-overlay.dts
++++ b/arch/arm/boot/dts/overlays/sdio-overlay.dts
+@@ -11,6 +11,7 @@
+ 			pinctrl-names = "default";
+ 			pinctrl-0 = <&sdio_pins>;
+ 			non-removable;
++			bus-width = <4>;
+ 			status = "okay";
+ 		};
+ 	};
+@@ -28,5 +29,6 @@
+ 
+ 	__overrides__ {
+ 		poll_once = <&sdio_mmc>,"non-removable?";
++		bus_width = <&sdio_mmc>,"bus-width:0";
+ 	};
+ };
diff --git a/target/linux/brcm2708/patches-4.4/0127-fixup-bcm270x_dt-Add-dwc2-and-dwc-otg-overlays.patch b/target/linux/brcm2708/patches-4.4/0127-fixup-bcm270x_dt-Add-dwc2-and-dwc-otg-overlays.patch
new file mode 100644
index 0000000..467badc
--- /dev/null
+++ b/target/linux/brcm2708/patches-4.4/0127-fixup-bcm270x_dt-Add-dwc2-and-dwc-otg-overlays.patch
@@ -0,0 +1,23 @@
+From 18c241c9e14f85fac03beab507b69980f9b33bfc Mon Sep 17 00:00:00 2001
+From: popcornmix <popcornmix at gmail.com>
+Date: Wed, 13 Jan 2016 15:49:06 +0000
+Subject: [PATCH 127/127] fixup! bcm270x_dt: Add dwc2 and dwc-otg overlays
+
+---
+ arch/arm/boot/dts/overlays/README | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+--- a/arch/arm/boot/dts/overlays/README
++++ b/arch/arm/boot/dts/overlays/README
+@@ -210,9 +210,9 @@ Info:   Selects the dwc2 USB controller
+ Load:   dtoverlay=dwc2,<param>=<val>
+ Params: dr_mode                  Dual role mode: "host", "peripheral" or "otg"
+ 
+-        g-np-tx-fifo-size        Size of rx fifo size in gadget mode
++        g-rx-fifo-size           Size of rx fifo size in gadget mode
+ 
+-        g-rx-fifo-size           Size of non-periodic tx fifo size in gadget
++        g-np-tx-fifo-size        Size of non-periodic tx fifo size in gadget
+                                  mode
+ 
+         g-tx-fifo-size           Size of periodic tx fifo per endpoint
-- 
1.9.1

_______________________________________________
openwrt-devel mailing list
openwrt-devel at lists.openwrt.org
https://lists.openwrt.org/cgi-bin/mailman/listinfo/openwrt-devel

