How to navigate a Google Glass GDK Immersion application using only voice commands?

Problem description

How would I go about coding a voice trigger to navigate Google Glass Cards?

This is how I see it happening:

1) "Ok Glass, Start My Program"

2) Application begins and shows the first card

3) User can say "Next Card" to move to the next card 
(somewhat the equivalent of swiping forward when in the timeline)

4) User can say "Previous Card" to go back 

The cards that I need to display are simple text and images; I'm wondering if I can set up a listener of some type to listen for voice commands while a card is being shown.

I've researched "Glass voice command nearest match from given list" but wasn't able to get that code running, although I do have all the libraries.

Side note: it's important that the user can still see the card when using the voice command. Also, his hands are busy, so tap/swipe isn't an option.

Any ideas on how to control the timeline within my Immersion app using only voice control would be greatly appreciated!

I'm tracking https://code.google.com/p/google-glass-api/issues/detail?id=273 as well.

My ongoing research made me look back at the Google Glass developer docs, to use Google's suggested way of listening for gestures: https://developers.google.com/glass/develop/gdk/input/touch#detecting_gestures_with_a_gesture_detector

How can we activate these gestures with voice commands?
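
For context, the gesture-listening pattern from that page looks roughly like the sketch below. It uses the Glass touchpad `GestureDetector` API as documented there; `GestureCardActivity` is an illustrative name, not part of any real app.

    package com.drace.contextualvoicecommands;

    import android.app.Activity;
    import android.content.Context;
    import android.os.Bundle;
    import android.view.MotionEvent;

    import com.google.android.glass.touchpad.Gesture;
    import com.google.android.glass.touchpad.GestureDetector;

    public class GestureCardActivity extends Activity {

        private GestureDetector mGestureDetector;

        @Override
        protected void onCreate(Bundle bundle) {
            super.onCreate(bundle);
            mGestureDetector = createGestureDetector(this);
        }

        private GestureDetector createGestureDetector(Context context) {
            GestureDetector detector = new GestureDetector(context);
            detector.setBaseListener(new GestureDetector.BaseListener() {
                @Override
                public boolean onGesture(Gesture gesture) {
                    if (gesture == Gesture.SWIPE_RIGHT) {
                        // Swipe forward: would advance to the next card.
                        return true;
                    } else if (gesture == Gesture.SWIPE_LEFT) {
                        // Swipe backward: would go back to the previous card.
                        return true;
                    }
                    return false;
                }
            });
            return detector;
        }

        // On Glass, touchpad gestures arrive as generic motion events,
        // which must be forwarded to the gesture detector.
        @Override
        public boolean onGenericMotionEvent(MotionEvent event) {
            return mGestureDetector != null && mGestureDetector.onMotionEvent(event);
        }
    }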

Android just beta-released its wearable devices upgrade for Android: http://developer.android.com/wear/notifications/remote-input.html. Is there a way we can use this to answer my question? It still feels like we're one step away, since we can call on the service but can't have it "sleep" and "wake up" as a background service while we talk.

Accepted answer

I'm writing out the entire code in detail since it took me such a long time to get this working... perhaps it'll save someone else valuable time.

This code is the implementation of Google Contextual Voice Commands as described on Google Developers here: Contextual voice commands

ContextualMenuActivity.java

    package com.drace.contextualvoicecommands;

    import android.app.Activity;
    import android.os.Bundle;
    import android.view.Menu;
    import android.view.MenuItem;

    import com.google.android.glass.view.WindowUtils;

    public class ContextualMenuActivity extends Activity {

        @Override
        protected void onCreate(Bundle bundle) {
            super.onCreate(bundle);

            // Request a voice menu on this activity. As with any other
            // window feature, this must be requested before
            // setContentView() is called.
            getWindow().requestFeature(WindowUtils.FEATURE_VOICE_COMMANDS);
            setContentView(R.layout.activity_main);
        }

        @Override
        public boolean onCreatePanelMenu(int featureId, Menu menu) {
            if (featureId == WindowUtils.FEATURE_VOICE_COMMANDS) {
                getMenuInflater().inflate(R.menu.main, menu);
                return true;
            }
            // Pass through to super to set up the touch menu.
            return super.onCreatePanelMenu(featureId, menu);
        }

        @Override
        public boolean onCreateOptionsMenu(Menu menu) {
            getMenuInflater().inflate(R.menu.main, menu);
            return true;
        }

        @Override
        public boolean onMenuItemSelected(int featureId, MenuItem item) {
            if (featureId == WindowUtils.FEATURE_VOICE_COMMANDS) {
                switch (item.getItemId()) {
                    case R.id.dogs_menu_item:
                        // Handle the top-level "dogs" menu item.
                        break;
                    case R.id.cats_menu_item:
                        // Handle the top-level "cats" menu item.
                        break;
                    case R.id.lab_menu_item:
                        // Handle the second-level "labrador" menu item.
                        break;
                    case R.id.golden_menu_item:
                        // Handle the second-level "golden" menu item.
                        break;
                    case R.id.calico_menu_item:
                        // Handle the second-level "calico" menu item.
                        break;
                    case R.id.cheshire_menu_item:
                        // Handle the second-level "cheshire" menu item.
                        break;
                    default:
                        return true;
                }
                return true;
            }
            // Good practice to pass through to super if not handled.
            return super.onMenuItemSelected(featureId, item);
        }
    }
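
To map this back to the question's "Next Card" / "Previous Card" flow: one way (a sketch, not part of the original answer) is to add next/previous items to the voice menu and move a `CardScrollView` selection in the handler. `mCardScrollView`, `R.id.next_card_menu_item`, and `R.id.previous_card_menu_item` below are hypothetical names.

    // Sketch only -- assumes the activity shows its cards in a
    // com.google.android.glass.widget.CardScrollView stored in
    // mCardScrollView, and that hypothetical next_card_menu_item /
    // previous_card_menu_item entries were added to res/menu/main.xml.
    private void onVoiceCardCommand(MenuItem item) {
        int position = mCardScrollView.getSelectedItemPosition();
        int count = mCardScrollView.getAdapter().getCount();
        if (item.getItemId() == R.id.next_card_menu_item && position + 1 < count) {
            mCardScrollView.setSelection(position + 1);  // like swiping forward
        } else if (item.getItemId() == R.id.previous_card_menu_item && position > 0) {
            mCardScrollView.setSelection(position - 1);  // like swiping back
        }
    }

Calling this from the WindowUtils.FEATURE_VOICE_COMMANDS branch of onMenuItemSelected() gives voice-driven card navigation while the card stays visible, which was the constraint in the question.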

activity_main.xml (layout)

    <?xml version="1.0" encoding="utf-8"?>
    <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
        xmlns:tools="http://schemas.android.com/tools"
        android:layout_width="match_parent"
        android:layout_height="match_parent" >

        <TextView
            android:id="@+id/coming_soon"
            android:layout_alignParentTop="true"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="@string/voice_command_test"
            android:textSize="22sp"
            android:layout_marginRight="40px"
            android:layout_marginTop="30px"
            android:layout_marginLeft="210px" />

    </RelativeLayout>

strings.xml

    <resources>
        <string name="app_name">Contextual voice commands</string>
        <string name="voice_start_command">Voice commands</string>
        <string name="voice_command_test">Say "Okay, Glass"</string>
        <string name="show_me_dogs">Dogs</string>
        <string name="labrador">labrador</string>
        <string name="golden">golden</string>
        <string name="show_me_cats">Cats</string>
        <string name="cheshire">cheshire</string>
        <string name="calico">calico</string>
    </resources>
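
The activity above inflates R.menu.main, which the answer doesn't show. Reconstructing it from the item IDs in onMenuItemSelected() and the strings above (this mirrors Google's contextual voice commands sample, so treat it as a close approximation rather than the answerer's exact file):

res/menu/main.xml

    <menu xmlns:android="http://schemas.android.com/apk/res/android">
        <!-- Top-level "Dogs" voice command with two sub-commands. -->
        <item
            android:id="@+id/dogs_menu_item"
            android:title="@string/show_me_dogs">
            <menu>
                <item
                    android:id="@+id/lab_menu_item"
                    android:title="@string/labrador" />
                <item
                    android:id="@+id/golden_menu_item"
                    android:title="@string/golden" />
            </menu>
        </item>
        <!-- Top-level "Cats" voice command with two sub-commands. -->
        <item
            android:id="@+id/cats_menu_item"
            android:title="@string/show_me_cats">
            <menu>
                <item
                    android:id="@+id/calico_menu_item"
                    android:title="@string/calico" />
                <item
                    android:id="@+id/cheshire_menu_item"
                    android:title="@string/cheshire" />
            </menu>
        </item>
    </menu>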

AndroidManifest.xml

    <manifest xmlns:android="http://schemas.android.com/apk/res/android"
        package="com.drace.contextualvoicecommands"
        android:versionCode="1"
        android:versionName="1.0" >

        <uses-sdk
            android:minSdkVersion="19"
            android:targetSdkVersion="19" />

        <uses-permission android:name="com.google.android.glass.permission.DEVELOPMENT" />

        <application
            android:allowBackup="true"
            android:icon="@drawable/ic_launcher"
            android:label="@string/app_name" >

            <activity
                android:name="com.drace.contextualvoicecommands.ContextualMenuActivity"
                android:label="@string/app_name" >
                <intent-filter>
                    <action android:name="com.google.android.glass.action.VOICE_TRIGGER" />
                </intent-filter>

                <meta-data
                    android:name="com.google.android.glass.VoiceTrigger"
                    android:resource="@xml/voice_trigger_start" />
            </activity>

        </application>
    </manifest>
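
The manifest references @xml/voice_trigger_start, which is also not shown in the answer. In the GDK, this file declares the "Ok Glass, ..." phrase that launches the app. A minimal version, assuming the voice_start_command string above is the intended trigger:

res/xml/voice_trigger_start.xml

    <?xml version="1.0" encoding="utf-8"?>
    <!-- Declares the launch phrase shown in the "Ok Glass" menu.
         Assumes @string/voice_start_command ("Voice commands") is the trigger. -->
    <trigger keyword="@string/voice_start_command" />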

This has been tested and works great on Google Glass XE22!
