Getting PCM audio for visualization through the Spotify iOS SDK

Question

We're currently looking at taking our music visualization software that's been around for many years to an iOS app that plays music via the new iOS Spotify SDK -- check out http://soundspectrum.com to see our visuals such as G-Force and Aeon.

Anyway, we have the demo projects in the Spotify iOS SDK all up and running and things look good, but the major step forward is to get access to the audio PCM so we can send it into our visual engines, etc.

Could a Spotify dev or someone in the know kindly suggest what possibilities are available to get hold of the PCM audio? The PCM block can be as simple as a circular buffer of a few thousand of the latest samples (which we would use for FFT analysis, etc.).
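
For concreteness, that kind of buffer can be a few lines of plain C. Here's a minimal sketch; the names and the 4096-sample size are ours for illustration, not from any SDK:

// Minimal single-writer ring buffer holding the latest N mono float samples.
// A sketch only -- names and sizing are illustrative.
#include <string.h>

#define VIZ_RING_SIZE 4096  // power of two: a few thousand latest samples

typedef struct {
    float samples[VIZ_RING_SIZE];
    unsigned int writeIndex;
} VizRingBuffer;

// Append newly rendered samples, overwriting the oldest ones.
static void VizRingWrite(VizRingBuffer *rb, const float *in, unsigned int count) {
    for (unsigned int i = 0; i < count; i++) {
        rb->samples[rb->writeIndex] = in[i];
        rb->writeIndex = (rb->writeIndex + 1) & (VIZ_RING_SIZE - 1);
    }
}

// Copy out the most recent VIZ_RING_SIZE samples, oldest first, as FFT input.
static void VizRingSnapshot(const VizRingBuffer *rb, float *out) {
    unsigned int head = rb->writeIndex;
    memcpy(out, rb->samples + head, (VIZ_RING_SIZE - head) * sizeof(float));
    memcpy(out + (VIZ_RING_SIZE - head), rb->samples, head * sizeof(float));
}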

Thanks in advance!

Answer

Subclass SPTCoreAudioController and do one of two things:

  1. Override connectOutputBus:ofNode:toInputBus:ofNode:inGraph:error: and use AudioUnitAddRenderNotify() to add a render callback to destinationNode's audio unit. The callback will be called as the output node is rendered and will give you access to the audio as it's leaving for the speakers. Once you've done that, make sure you call super's implementation for the Spotify iOS SDK's audio pipeline to work correctly. (A sketch of both approaches follows this list.)

  2. Override attemptToDeliverAudioFrames:ofCount:streamDescription:. This gives you access to the PCM data as it's produced by the library. However, there's some buffering going on in the default pipeline, so the data given in this callback might be up to half a second behind what's going out to the speakers, which is why I'd recommend suggestion 1 over this. Call super here to continue with the default pipeline.
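
Putting both options into code, the subclass might look like the sketch below. Treat it as illustrative rather than definitive: the override signatures are written from the SDK headers as I remember them, so verify them against your version of SPTCoreAudioController.h, and the class name VizAudioController is made up.

// VizAudioController.m -- a minimal sketch of both suggestions, not official
// SDK sample code. Assumes ARC. Verify the override signatures against your
// version of SPTCoreAudioController.h.
#import <Spotify/Spotify.h>           // adjust to however your SDK is imported
#import <AudioToolbox/AudioToolbox.h>

static OSStatus VizRenderNotify(void *inRefCon,
                                AudioUnitRenderActionFlags *ioActionFlags,
                                const AudioTimeStamp *inTimeStamp,
                                UInt32 inBusNumber,
                                UInt32 inNumberFrames,
                                AudioBufferList *ioData);

@interface VizAudioController : SPTCoreAudioController  // class name is made up
@end

@implementation VizAudioController

// Suggestion 1: tap the output node with a render notification.
- (BOOL)connectOutputBus:(UInt32)sourceOutputBusNumber
                  ofNode:(AUNode)sourceNode
              toInputBus:(UInt32)destinationInputBusNumber
                  ofNode:(AUNode)destinationNode
                 inGraph:(AUGraph)graph
                   error:(NSError **)error {

    // Pull the destination node's audio unit out of the graph and ask Core
    // Audio to call VizRenderNotify every time that unit renders.
    AudioUnit destinationUnit = NULL;
    AUGraphNodeInfo(graph, destinationNode, NULL, &destinationUnit);
    AudioUnitAddRenderNotify(destinationUnit, VizRenderNotify, (__bridge void *)self);

    // Always call super so the SDK can finish wiring its pipeline.
    return [super connectOutputBus:sourceOutputBusNumber
                            ofNode:sourceNode
                        toInputBus:destinationInputBusNumber
                            ofNode:destinationNode
                           inGraph:graph
                             error:error];
}

// Suggestion 2: intercept frames as the library produces them. Remember the
// ~0.5 s of buffering between here and the speakers mentioned above.
- (NSInteger)attemptToDeliverAudioFrames:(const void *)audioFrames
                                 ofCount:(NSInteger)frameCount
                       streamDescription:(AudioStreamBasicDescription)audioDescription {
    // Copy/analyse the frameCount frames at audioFrames here, then hand them
    // to super so the default pipeline still plays them.
    return [super attemptToDeliverAudioFrames:audioFrames
                                      ofCount:frameCount
                            streamDescription:audioDescription];
}

// The notify proc fires both before and after each render; only the
// post-render pass has the freshly rendered samples in ioData.
static OSStatus VizRenderNotify(void *inRefCon,
                                AudioUnitRenderActionFlags *ioActionFlags,
                                const AudioTimeStamp *inTimeStamp,
                                UInt32 inBusNumber,
                                UInt32 inNumberFrames,
                                AudioBufferList *ioData) {
    if ((*ioActionFlags) & kAudioUnitRenderAction_PostRender) {
        // Real-time audio thread: just copy ioData->mBuffers[...] into a ring
        // buffer; no allocation, locking, or Objective-C messaging here.
    }
    return noErr;
}

@end

Note that the render notification runs on the real-time audio thread, so the callback should do nothing more than copy samples out; push them into a ring buffer like the one sketched in the question and run the FFT elsewhere.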

Once you have your custom audio controller, initialise an SPTAudioStreamingController with it and you should be good to go.
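
For example (the exact SPTAudioStreamingController initializer has varied between SDK releases, so check SPTAudioStreamingController.h in your version; the client ID below is a placeholder):

VizAudioController *audioController = [[VizAudioController alloc] init];
// Assumes an initializer taking an audio controller exists in your SDK
// version; adjust to whatever SPTAudioStreamingController.h declares.
SPTAudioStreamingController *player =
    [[SPTAudioStreamingController alloc] initWithClientId:@"your-client-id"
                                          audioController:audioController];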

I actually used suggestion 1 to implement iTunes' visualiser API in my Mac OS X Spotify client that was built with CocoaLibSpotify. It's not working 100% smoothly (I think I'm doing something wrong with runloops and stuff), but it drives G-Force and Whitecap pretty well. You can find the project here, and the visualiser stuff is in VivaCoreAudioController.m. The audio controller class in CocoaLibSpotify and that project is essentially the same as the one in the new iOS SDK.
