How to program dynamic touch events that scale according to device?

When using OpenGL to build a UI for my prototype, I find that I am still required to use pixels to capture touch events. To make matters worse, the Android docs make this subject slightly confusing. http://developer.android.com/guide/practices/screens_support.html specifically states:

You should always use dp units when defining your application's UI, to ensure proper display of your UI on screens with different densities.

Yet

// event is the MotionEvent delivered by the framework, e.g. to onTouchEvent(MotionEvent event)
event.getX();

returns pixels when used to locate the touchscreen event. http://developer.android.com/reference/android/view/MotionEvent.html#getX()
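For reference, my understanding is that pixels and dp are related through the display density (1 dp equals 1 px on a 160 dpi screen), so inside a View something like this should convert between them:

float density = getResources().getDisplayMetrics().density; // px per dp
float xDp = event.getX() / density; // pixels -> dp
float xPx = xDp * density;          // dp -> pixels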

To make matters even more confusing,

public final float getRawX ()

Added in API level 1. Returns the original raw X coordinate of this event. For touch events on the screen, this is the original location of the event on the screen, before it had been adjusted for the containing window and views.

This seems to imply that there are alterations being made to adjust for the window and views when using

event.getX()
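As far as I can tell, the only difference is the offset of the view within the window/screen, e.g. in a custom View:

@Override
public boolean onTouchEvent(MotionEvent event) {
    float viewX = event.getX();      // X relative to this view's left edge, in pixels
    float screenX = event.getRawX(); // X relative to the screen's left edge, in pixels
    return true;
}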

My last bit of confusion comes when trying to tie both of these concepts together. I see all the moving parts; I am just unable to make them connect.

  • How to convert OpenGL vertices into terms that work with pixels or dp.
  • How to program dynamic touch events using MotionEvent when the pixel location scales with device density (see the sketch after this list).
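For context, here is the kind of mapping I imagine is needed. It is only a sketch under my own assumptions: an orthographic projection that spans the whole GLSurfaceView, and hypothetical viewWidth/viewHeight fields saved from onSurfaceChanged.

// Hypothetical sketch: convert a touch position in pixels to OpenGL
// normalized device coordinates, assuming the GL viewport fills this view
// and viewWidth/viewHeight were stored in onSurfaceChanged(gl, width, height).
float ndcX = 2f * event.getX() / viewWidth - 1f;  // left edge -> -1, right edge -> +1
float ndcY = 1f - 2f * event.getY() / viewHeight; // screen Y grows down, GL Y grows up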