
Audio chip brings context aware processing to voice-activated smartphones

  • Author: Ella Cai
  • Released on: 2017-10-20
An audio processor which aims to improve the quality of voice-activated smartphones has been developed by US-based firm Knowles.

It does this, says the firm, by improving recognition accuracy, especially in far-field and high-noise environments, while increased on-chip processing adds context awareness to voice recognition.

[Image: IA8508 audio processor block diagram (source: Knowles)]

The IA8508 audio processor achieves this with four heterogeneous cores, which include hardware acceleration and a proprietary instruction set.

The device has three heterogeneous Tensilica-based, audio-centric DSP cores and an ARM Cortex M4 core.

To address the issues of background noise and distance, designs require a multi-microphone array backed by large memory. The DSP must be capable of running algorithms for acoustic echo cancellation (AEC), dynamic beamforming and steering, sound classification, and noise suppression.
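Knowles does not publish its DSP algorithms, but the microphone-array beamforming mentioned above can be illustrated in its simplest form. The sketch below is a minimal delay-and-sum beamformer in Python; the function name, the eight-channel test signal, and the zero-delay (broadside) steering direction are all invented for illustration, not taken from the IA8508:

```python
import numpy as np

def delay_and_sum(mics: np.ndarray, delays: np.ndarray, fs: int) -> np.ndarray:
    """Steer a microphone array by time-aligning and averaging channels.

    mics   -- (n_mics, n_samples) array of recordings
    delays -- per-microphone arrival delays in seconds (each channel is
              advanced by its delay so all copies of the source line up)
    fs     -- sample rate in Hz
    """
    n_mics, n_samples = mics.shape
    out = np.zeros(n_samples)
    for ch, tau in zip(mics, delays):
        shift = int(round(tau * fs))   # delay expressed in whole samples
        out += np.roll(ch, -shift)     # advance the channel by its delay
    # Averaging reinforces the steered direction and averages down
    # uncorrelated noise by roughly 1/sqrt(n_mics).
    return out / n_mics

# A source dead ahead reaches all microphones at once (zero delays), so
# the beamformer reduces to a plain average that suppresses noise.
np.random.seed(0)
fs = 16000
t = np.arange(fs) / fs
clean = np.sin(2 * np.pi * 440 * t)
noisy = np.stack([clean + 0.5 * np.random.randn(fs) for _ in range(8)])
enhanced = delay_and_sum(noisy, np.zeros(8), fs)
```

With eight microphones, the residual noise in `enhanced` is roughly one third of that in any single channel, which is why multi-mic arrays help so much in far-field, high-noise conditions.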

Audio applications also require low-latency real-time processing, such as active noise cancellation and asynchronous sample rate conversion for mixing audio sources that run at different rates.
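The sample rate conversion step above can be sketched in its most basic form. This is a minimal linear-interpolation resampler in Python, not Knowles' implementation (a production asynchronous converter would use polyphase filtering and track clock drift); the function name and the 48 kHz/16 kHz mixing example are assumptions for illustration:

```python
import numpy as np

def resample_linear(x: np.ndarray, fs_in: float, fs_out: float) -> np.ndarray:
    """Convert x from fs_in to fs_out by linear interpolation --
    the simplest form of the rate conversion needed before mixing
    sources that run on different clocks."""
    n_out = int(len(x) * fs_out / fs_in)
    # Positions of the output samples on the input's sample axis.
    pos = np.arange(n_out) * fs_in / fs_out
    return np.interp(pos, np.arange(len(x)), x)

# Mix a 48 kHz source into a 16 kHz stream: resample, then sum.
fs_a, fs_b = 48000, 16000
a = np.sin(2 * np.pi * 440 * np.arange(fs_a) / fs_a)   # 1 s tone at 48 kHz
b = np.sin(2 * np.pi * 220 * np.arange(fs_b) / fs_b)   # 1 s tone at 16 kHz
mixed = resample_linear(a, fs_a, fs_b)[:len(b)] + b
```

Because the two streams never share a clock in real hardware, the converter must run continuously rather than once, which is one reason this workload is pinned to a dedicated low-latency core.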

The IA8508 processor has been designed to support an array of up to eight microphones and is equipped with four heterogeneous cores capable of running these audio processing algorithms simultaneously.

A single-sample processor core is employed to achieve low latency, and an ARM Cortex M4 core handles general-purpose control tasks.

Mike Polacek, president of intelligent audio products at Knowles, says:

“The market opportunity for voice-powered devices is massive. People are looking to engage with technology through natural, spoken commands, across mobile, ear-worn and IoT products, and are racing to develop and deploy the technology to enable it.”