Vertex definitions and shaders

I noticed from looking at other examples, such as Riemer's tutorials, that he takes a buffer full of Vector3s and ties it to a shader that expects a float4. Why does this work in his situation and not mine?

Also, is there a simple fix that would let me do the same, with the shader determining the w component? To my game logic w means nothing, but it is obviously crucial to the GPU. (See the sketch after my shader code below for what I mean.)

Riemer's code is here:

http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series4/Textured_terrain.php

And mine (key parts only):

CPU Code:

public struct TexturedVertex: IVertex
{
    public Vector3 Position { get; set; }
    public Vector2 Uv { get; set; }

    public TexturedVertex(Vector3 position, Vector2 uv) : this()
    {
        Position = position;
        Uv = uv;
    }
}
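
In case the byte layout matters: I'm assuming the struct packs tightly, so the stride I pass to the input assembler later should come out as 20 bytes (unverified sketch of my assumption):

// Assuming tight packing: Vector3 = 3 * 4 bytes, Vector2 = 2 * 4 bytes.
int stride = Utilities.SizeOf<TexturedVertex>(); // expected to be 12 + 8 = 20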

Shader Code:

struct VS_IN
{
    float4 pos : POSITION;
    float2 tex : TEXCOORD;
};

struct PS_IN
{
    float4 pos : SV_POSITION;
    float2 tex : TEXCOORD;
};

Texture2D picture;
SamplerState pictureSampler;

PS_IN VS(float4 inPos : POSITION, float2 uv : TEXCOORD)
{
    PS_IN output = (PS_IN)0;
    // World and ViewProjection are declared elsewhere (key parts only).
    output.pos = mul(inPos, mul(World, ViewProjection));
    output.tex = uv;
    return output;
}
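
To show what I mean by letting the shader determine w, this is roughly the variant I'd like to end up with (untested sketch, reusing the same World and ViewProjection):

// Untested sketch: take a float3 from the vertex buffer and let the shader supply w = 1.
PS_IN VS_FromFloat3(float3 inPos : POSITION, float2 uv : TEXCOORD)
{
    PS_IN output = (PS_IN)0;
    output.pos = mul(float4(inPos, 1.0f), mul(World, ViewProjection));
    output.tex = uv;
    return output;
}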

How do the two tie together?

I am, however, using SharpDX rather than XNA, so my code for setting up the buffers is slightly different.

I created my own mesh class that does this:

VertexBuffer = Buffer.Create(device, BindFlags.VertexBuffer, Vertices.ToArray());
context.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(VertexBuffer, Utilities.SizeOf<TexturedVertex>(), 0));
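
I assume the missing link between TexturedVertex and VS_IN is an input layout along these lines. This is only a sketch of what I think it should be; vertexShaderBytecode stands in for my compiled vertex shader blob (types are from SharpDX.Direct3D11 / SharpDX.DXGI):

// Sketch: describe how the 20-byte TexturedVertex maps onto the shader semantics.
var layout = new InputLayout(device, vertexShaderBytecode, new[]
{
    // Position: 3 floats at offset 0, feeding the float4 POSITION input.
    new InputElement("POSITION", 0, Format.R32G32B32_Float, 0, 0),
    // Uv: 2 floats at offset 12, feeding TEXCOORD.
    new InputElement("TEXCOORD", 0, Format.R32G32_Float, 12, 0),
});
context.InputAssembler.InputLayout = layout;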