CodeSnippet: A simple custom hexagon shape for WPF/Silverlight

Recently I’ve been trying to learn more about WPF, since WinForms is getting really old now. As a small exercise I was trying to create a custom Shape, so I created a new class, derived it from Shape and started following this tutorial, until I found out that you can’t override the DefiningGeometry property in Silverlight.
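
For reference, in plain WPF the tutorial’s approach boils down to overriding the DefiningGeometry property. Here is a minimal sketch of what that looks like (assuming the usual System.Windows, System.Windows.Media and System.Windows.Shapes usings; the triangle points are just placeholders):

public class Triangle : Shape
{
    //WPF only: Shape exposes DefiningGeometry for exactly this purpose
    protected override Geometry DefiningGeometry
    {
        get
        {
            PathFigure figure = new PathFigure();
            figure.StartPoint = new Point(0, 0);
            figure.Segments.Add(new LineSegment(new Point(100, 0), true));
            figure.Segments.Add(new LineSegment(new Point(50, 100), true));
            figure.IsClosed = true;

            PathGeometry geometry = new PathGeometry();
            geometry.Figures.Add(figure);
            return geometry;
        }
    }
}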

After some searching I found this MSDN blog article, in which a custom Silverlight shape is created by extending Path (well, I wouldn’t have thought of that).

After a bit of tweaking I adapted the class to display a Hexagon instead of a Triangle:

using System.Windows;
using System.Windows.Media;
using System.Windows.Shapes;

namespace MyProject.Shapes
{
    public class Hexagon : Path
    {
        public Hexagon()
        {
            CreateDataPath(0, 0);
        }

        private void CreateDataPath(double width, double height)
        {
            height -= this.StrokeThickness;
            width -= this.StrokeThickness;

            //Prevent layout loop
            if(lastWidth == width && lastHeight == height)
                return;

            lastWidth = width;
            lastHeight = height;

            PathGeometry geometry = new PathGeometry();
            figure = new PathFigure();

            //See for figure info http://etc.usf.edu/clipart/50200/50219/50219_area_hexagon_lg.gif
            figure.StartPoint = new Point(0.25 * width, 0);
            AddPoint(0.75 * width, 0);
            AddPoint(width, 0.5 * height);
            AddPoint(0.75 * width, height);
            AddPoint(0.25 * width, height);
            AddPoint(0, 0.5 * height);
            figure.IsClosed = true;
            geometry.Figures.Add(figure);
            this.Data = geometry;
        }

        private void AddPoint(double x, double y)
        {
            LineSegment segment = new LineSegment();
            segment.Point = new Point(x + 0.5 * StrokeThickness,
                y + 0.5 * StrokeThickness);
            figure.Segments.Add(segment);
        }

        protected override Size MeasureOverride(Size availableSize)
        {
            return availableSize;
        }

        protected override Size ArrangeOverride(Size finalSize)
        {
            CreateDataPath(finalSize.Width, finalSize.Height);
            return finalSize;            
        }

        #region FieldsAndProperties
        private double lastWidth = 0;
        private double lastHeight = 0;
        private PathFigure figure;
        #endregion 
    }
}

You can then use the shape like this in XAML:

<UserControl x:Class="FantasyCartographer.MainPage"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
    xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
    xmlns:Shapes="clr-namespace:MyProject.Shapes"
    mc:Ignorable="d"
    d:DesignHeight="300" d:DesignWidth="400">

    <Grid x:Name="LayoutRoot" Background="White">
        <Canvas Height="161" HorizontalAlignment="Left" Margin="27,32,0,0" Name="canvas1" VerticalAlignment="Top" Width="254">
            <Shapes:Hexagon Canvas.Left="0" Canvas.Top="0" Height="154" x:Name="hexagon1" Stroke="Black" StrokeThickness="3" Width="188" Fill="Red"  />            
        </Canvas>
    </Grid>
</UserControl>

Tip: don’t forget to define an xmlns namespace mapping to your own .NET namespaces, so that you can use a shape even if it is defined in another namespace; I’ve done this in the XAML above as an example.
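
If you’d rather create the shape from code-behind instead of XAML, a small sketch could look like this (assuming usings for System.Windows.Controls, System.Windows.Media and MyProject.Shapes, with canvas1 being the Canvas from the XAML above):

Hexagon hexagon = new Hexagon
{
    Width = 188,
    Height = 154,
    Stroke = new SolidColorBrush(Colors.Black),
    StrokeThickness = 3,
    Fill = new SolidColorBrush(Colors.Red)
};
Canvas.SetLeft(hexagon, 0);
Canvas.SetTop(hexagon, 0);
canvas1.Children.Add(hexagon);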

XNA stereoscopic 3D using anaglyphs and red/cyan 3D-glasses

Anaglyphs are images designed to give a 3D effect when viewed with special glasses, usually with red and blue (sometimes red and cyan) filters over the left and right eyes, respectively.[1]


Using anaglyphs we can make our game look truly 3D through those cheap red/cyan 3D-glasses. Adding an anaglyph effect to your XNA game is fairly easy.

Basically we need to undertake the following steps:
- Draw our scene twice to separate render targets, with a slightly offset camera
- Use a shader and a full screen quad to blend the two images, color-coding the image for each eye

Easy, right?

Let’s start by assuming you have the following draw code:

protected override void Draw(GameTime gameTime)
{
    //a lot of draw calls
    base.Draw(gameTime);
}

We need to extract all of your drawing code (except base.Draw(..)) into a new method that accepts a view matrix as an argument. Update your code so that the passed-in view matrix is used instead of your normal camera’s view matrix. Doing this will allow us to easily draw the scene twice with a slightly offset camera. You should now have something like this:

private void DrawForEye(Matrix viewMatrix)
{
    //...
}
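
To make that concrete, the body of DrawForEye might look something like this rough sketch (myModel and camera stand in for whatever your game already has; the important part is that the passed-in viewMatrix is used instead of camera.ViewMatrix):

private void DrawForEye(Matrix viewMatrix)
{
    //Clear the currently bound render target before drawing into it
    GraphicsDevice.Clear(Color.CornflowerBlue);

    foreach (ModelMesh mesh in myModel.Meshes)
    {
        foreach (BasicEffect effect in mesh.Effects)
        {
            effect.EnableDefaultLighting();
            effect.World = Matrix.Identity;
            effect.View = viewMatrix; //the eye-specific view matrix
            effect.Projection = camera.ProjectionMatrix;
        }
        mesh.Draw();
    }
}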

Now let’s first write the shader that is going to blend the images. Create a new effect in your content project and paste in the following code:

texture left;
sampler sLeft = sampler_state
{
	texture = <left>;
	magfilter = POINT;
	minfilter = POINT;
	mipfilter = POINT;
	AddressU  = CLAMP;
	AddressV  = CLAMP;
};

texture right;
sampler sRight = sampler_state
{
	texture = <right>;
	magfilter = POINT;
	minfilter = POINT;
	mipfilter = POINT;
	AddressU  = CLAMP;
	AddressV  = CLAMP;
};

struct VS_INPUT
{
	float3 Pos : POSITION;
	float2 Tex : TEXCOORD0;
};

struct VS_OUTPUT
{
	float4 Pos : POSITION;
	float2 Tex : TEXCOORD0;
};

void VtAnaglyph(VS_INPUT In, out VS_OUTPUT Out)
{
	Out.Pos = float4(In.Pos,1);
	Out.Tex = In.Tex;
}

float4 PxAnaglyph(VS_OUTPUT In) : COLOR0
{
    float4 colorLeft = tex2D(sLeft, In.Tex.xy);
	float4 colorRight = tex2D(sRight, In.Tex.xy);
    return float4(colorRight.r, colorLeft.g, colorLeft.b, max(colorLeft.a, colorRight.a));
}

technique Anaglyphs
{
    pass p0
    {
        VertexShader = compile vs_2_0 VtAnaglyph();
        PixelShader = compile ps_2_0 PxAnaglyph();
    }

}

This is a pretty standard HLSL shader, but I will quickly go over it.

The textures left and right will be the textures resulting from drawing the scene twice, slightly offset from each other. We use samplers with POINT filters because the left and right textures are going to be exactly the same size as our final rendering.

The vertex shader receives nothing more than the position and texture coordinate of each vertex; it doesn’t transform anything, it just passes the data straight through to the pixel shader.

The pixel shader samples the textures for the left and right eye and combines them into a single image: the red channel is taken from the right-eye image, while the green and blue channels come from the left-eye image. Each lens of the glasses then filters out the channels meant for the other eye (the red lens only passes red, the cyan lens only passes green and blue). If the depth looks inverted with your glasses, simply swap which eye’s image feeds the red channel. You can look at this Wikipedia entry for other color combinations in case you have different 3D-glasses.

Now that we have our effect, we need to add two render targets, the effect and a quad renderer to our game class. (The QuadRenderer class is posted as a code snippet below and is used to render the final image.)

Add these lines to the top of your game class:

RenderTarget2D leftEye;
RenderTarget2D rightEye;
Effect anaglyphEffect;
QuadRenderer quad;

//You can change this later to test different distances between the left and right eye viewpoints.
//The offset depends on the scale of your game, but small values seem to work best;
//I used 0.05 for a scene of about 5x5x5 in size.
float amount = 0.05f;

And add these lines to your LoadContent method:

//Include a depth buffer so the 3D scene renders correctly into the render targets
leftEye = new RenderTarget2D(GraphicsDevice, GraphicsDevice.Viewport.Width, GraphicsDevice.Viewport.Height,
    false, SurfaceFormat.Color, DepthFormat.Depth24);
rightEye = new RenderTarget2D(GraphicsDevice, GraphicsDevice.Viewport.Width, GraphicsDevice.Viewport.Height,
    false, SurfaceFormat.Color, DepthFormat.Depth24);
anaglyphEffect = Content.Load<Effect>("Anaglyphs");
quad = new QuadRenderer(GraphicsDevice);

Now we need to calculate two slightly offset view matrices, draw the scene twice using these view matrices, and then combine the results using our effect. To accomplish this, write the following Draw method:

protected override void Draw(GameTime gameTime)
{
    Matrix viewMatrix = camera.ViewMatrix; //use your own camera class here

    //The vector pointing to the right (1,0,0) as seen from the view matrix is stored
    //in the view matrix as (M11, M21, M31)
    Vector3 right = new Vector3(viewMatrix.M11, viewMatrix.M21, viewMatrix.M31) * amount; //offset from the center for each eye
    Matrix viewMatrixLeft = Matrix.CreateLookAt(camera.Position - right, camera.LookAt, Vector3.Up);
    Matrix viewMatrixRight = Matrix.CreateLookAt(camera.Position + right, camera.LookAt, Vector3.Up);

    GraphicsDevice.SetRenderTarget(leftEye);
    DrawForEye(viewMatrixLeft);

    GraphicsDevice.SetRenderTarget(rightEye);
    DrawForEye(viewMatrixRight);

    GraphicsDevice.SetRenderTarget(null);

    //Set the effect parameters before applying the pass
    anaglyphEffect.Parameters["left"].SetValue(leftEye);
    anaglyphEffect.Parameters["right"].SetValue(rightEye);
    anaglyphEffect.Techniques["Anaglyphs"].Passes[0].Apply();
    quad.DrawFullScreenQuad();

    base.Draw(gameTime);
}

And hooray! We now have anaglyphs in our game! This will result in some pretty pictures (the following picture of course only makes sense when you view it through red/cyan 3D-glasses):

Anaglyph Example, view with red/cyan glasses

When you have created some cool anaglyph images in XNA, be sure to send them in and I’ll make a small gallery here!

08 May 2011 | Blog, Programming, XNA

CodeSnippet: Quad renderer

This quad renderer was already part of my conversion of Catalin Zima’s Deferred Rendering for XNA tutorial, but because I plan to use it in a tutorial I’m going to write later today, here is a handy snippet for drawing a quad with texture coordinates in XNA. (You could use the SpriteBatch class for this, but as you all know, SpriteBatch tends to mess up your render state if you don’t set all the states explicitly.)
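
As an aside, if you do use SpriteBatch for full screen passes, you can sidestep most of those render-state surprises by passing every state explicitly to Begin. A quick sketch using the standard XNA 4.0 overload (spriteBatch being your game’s SpriteBatch instance):

spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Opaque,
    SamplerState.PointClamp, DepthStencilState.Default,
    RasterizerState.CullCounterClockwise);
//...your sprite draw calls...
spriteBatch.End();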

Anyway here’s the class:

    public class QuadRenderer
    {
        public QuadRenderer(GraphicsDevice device)
        {
            this.device = device;
            vertices = new VertexPositionTexture[]
            {
                new VertexPositionTexture(new Vector3(0,0,0),new Vector2(1,1)),
                new VertexPositionTexture(new Vector3(0,0,0),new Vector2(0,1)),
                new VertexPositionTexture(new Vector3(0,0,0),new Vector2(0,0)),
                new VertexPositionTexture(new Vector3(0,0,0),new Vector2(1,0))
            };

            indexBuffer = new short[] { 0, 1, 2, 2, 3, 0 };
        }

        public void Draw(Vector2 v1, Vector2 v2)
        {
            vertices[0].Position.X = v2.X;
            vertices[0].Position.Y = v1.Y;

            vertices[1].Position.X = v1.X;
            vertices[1].Position.Y = v1.Y;

            vertices[2].Position.X = v1.X;
            vertices[2].Position.Y = v2.Y;

            vertices[3].Position.X = v2.X;
            vertices[3].Position.Y = v2.Y;

            device.DrawUserIndexedPrimitives<VertexPositionTexture>(PrimitiveType.TriangleList, vertices, 0, 4, indexBuffer, 0, 2);
        }

        public void DrawFullScreenQuad()
        {
            Draw(Vector2.One * -1, Vector2.One);
        }

        #region FieldsAndProperties
        private VertexPositionTexture[] vertices = null;
        private short[] indexBuffer = null;
        private GraphicsDevice device;
        #endregion
    }
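
Draw takes two opposite corners in clip space (x and y running from -1 to 1), so besides the full screen helper you can also cover just part of the screen. A small usage sketch, assuming an effect pass has already been applied as in the anaglyph Draw method above:

//Bottom-left quarter of the screen in clip-space coordinates
quad.Draw(new Vector2(-1f, -1f), new Vector2(0f, 0f));
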
07 May 2011 | Blog, XNA