Webinar Q&A: Building Consumer Electronics Devices using Windows Embedded.


Today Jeff Albertson and I hosted a Webinar with CMP on the topic of “Building Consumer Electronics Devices using Windows Embedded” – the webinar will be posted on the CMP site.

There were a number of questions asked that we didn’t have time to answer at the end of the Webinar – here’s the list of questions and answers.

 

What’s included with the Eval Kit?

The CE 6.0 Evaluation Kit is a full version of the CE 6.0 development tools that runs for 180 days. It includes everything needed to configure, build, deploy, debug, and test a CE 6.0 operating system image. More information about the evaluation kits can be found here.

In CE 6.0, can I now use Internet Explorer to browse to typical multimedia clips on the web (cnn.com, YouTube, etc.) and view them with Windows Media Player, Flash, RealPlayer, or QuickTime? If so, what minimum processor speed is required?

Does Windows CE 6.0 include the MSMQ service as a part of the standard system?
Yes, MSMQ is included in the CE 6.0 catalog.

What is the best path to get started with Windows CE application development?
CE 6.0 supports a number of application development models – C/C++ programming against the Win32 API, MFC (Microsoft Foundation Classes), and Managed development using C# or Visual Basic – there are a number of good books you should take a look at.

Programming Windows CE by Doug Boling 

.NET Compact Framework programming by Paul Yao

Also check out the MSDN Virtual Labs – we have two labs, C# application development and MFC application development.

What sort of services are planned? Can we write our own services, like on the PC?
CE 6.0 supports applications, drivers, and services, so you have a choice of how you develop your code. Previously, on Windows CE 5.0, you were limited to a maximum of 32 processes running at one time, with each process limited to a 32 MB process address space.

With CE 6.0 we have expanded the number of supported processes running at one time to 32,768 (in theory; you will probably run out of physical memory before reaching that number), and each process now runs in a 2 GB process address space.

With CE 5.0, developers may have written a service to conserve the number of running processes (since services are all loaded by one services manager); with CE 6.0 you have a free choice between writing an application or a service.

Where can I find hardware reference designs to build a Windows CE system?

There are a number of reference boards available for CE 6.0; these are available through system integrators and silicon vendors. More information on reference boards can be found on the Windows Embedded Partners site.

What real-time benchmarking is available on CE?
CE 6.0 provides a number of tools you can use to test the real-time performance of the operating system on your hardware. These include ILTiming, which checks the interrupt latency of the operating system, and OSBench. There is also third-party validation of the real-time performance of Windows CE; check out the reports from Dedicated Systems (covering ARM and x86 processors).

What types of testing tools are available?

A number of testing tools ship with the CE 6.0 product. These include the CETK (CE Test Kit), which can be used to run automated tests on your device drivers and operating system components. There are also a number of “Remote Tools” that ship with CE 6.0 and can be run from your desktop, including the Remote Performance Monitor, Remote Kernel Tracker, System Information, and Spy, as well as tools within the CE 6.0 development environment that can analyze memory use, look for memory leaks, and so on.

How do I get more information on accessing the source code? Do I need to sign a contract?

There are two levels of source code available for CE 6.0: Shared Source and Premium Source. The Shared Source ships with the CE 6.0 products (eval and full product); there’s a “Click to Accept” End User License Agreement that you need to read and accept before the source is installed on your development workstation. The Shared Source contains 100% of the kernel source, plus source for a number of operating system components. Premium Source requires that you complete an application form before getting access to the source. More information is available on the Shared Source site.

How often will you release updates to CE 6.0?

We plan on releasing major versions of the CE product every two years with “Feature Pack” updates in between the major releases. Feature Packs include new features that are added to the development tools component catalog without affecting the underlying operating system drivers/kernel.

Can you explain your Partner model and how I could work with them?

We have a partner program called the Windows Embedded Partner Program – partners include silicon vendors, system integrators, trainers, hardware vendors, distributors and more.

Is CE the only OS we should look at within Windows Embedded for Consumer Electronics?

We have a range of embedded operating systems, this includes the .NET Micro Framework, Windows Embedded CE 6.0, Windows XP Embedded, and Windows Embedded for Point of Service. Each operating system has features and technologies that are appropriate for a range of embedded devices – You should evaluate the hardware and software requirements for your device to determine which operating system best fits your needs.

How long is the presentation?

About an hour.

How difficult is it to adapt a Win32 application for CE 6.0?

CE 6.0 exposes a subset of the desktop Win32 APIs. There are some areas that are not supported; for example, we don’t have (or need) backward compatibility with MS-DOS or Win16 applications, and we typically expose the newer functions from the desktop. Windows CE is a fully Unicode operating system, which may require some changes to desktop applications written as ASCII/ANSI applications; we do support conversion APIs (WideCharToMultiByte and MultiByteToWideChar) to convert between ASCII/ANSI and Unicode. The porting process is somewhat abstracted if you move to a higher-level API set, which could be either MFC (Microsoft Foundation Classes) or managed application development using C# or Visual Basic.

Let me know if you have other questions from this morning’s Webinar.

– Mike

Comments (13)

  1. Brad Carey says:

    I have two questions:

    1. I would assume that WinCE 6.0 supports services that interface to the new "Home Server" product?  Can you elaborate on whether there will be difficulties building CE multimedia devices that connect to Home Server?

    2. Do the DRM capabilities of WinCE allow configuration of an embedded media server that would allow DVD movie content to be ripped to HD and managed as DRM content on the server?

    Thanks,

    Brad

  2. William says:

    I’m running Platform Builder in Visual Studio 2006 and it takes around 15-30 minutes to complete a build.  Which particular component of my computer (i.e. should I get a quad-core processor or 4 GB of RAM), if any, would decrease that build time significantly?

  3. William says:

    Sorry, I meant Visual Studio 2005 Professional Edition.  

  4. mikehall says:

    The build process is very disk-intensive; having a fast machine with plenty of RAM and fast hard drives will improve the build time.

    – Mike

  5. mikehall says:

    Brad,

    How do you see a Windows CE device being linked to a Home Server? What content/files are you wanting to exchange?

    If Home Server is running Windows Media Connect then you can ‘stream’ content from the home server; if the home server exposes file shares then you can also read/write to the shares.

    What other scenarios would you like to see exposed?

    – Mike

  6. writetothiru@gmail.com says:

    I worked on Windows CE/Windows Mobile multimedia as a platform developer for around 3 years. I have also worked on other OSes, such as Symbian and eLinux, in the field of multimedia. I would rate Windows CE/Windows Mobile as the worst among all these OSes with respect to multimedia platform development.

    A few of the reasons are as follows:

    – Windows CE/Windows Mobile multimedia technologies such as DirectShow, DirectDraw, etc. are directly reused from the desktop without any optimization for memory, performance, and power. If your hardware is optimized for performance and power, it is not guaranteed that you will achieve similar figures when Windows CE/Windows Mobile is running on top of it, due to the lack of optimization in the memory copies, data path, etc.

    – DirectShow is the multimedia infrastructure that Windows CE/Windows Mobile provides for playback and capture pipelines. In these pipelines a few of the components, such as the Video Renderer and Video Capture, are designed by Microsoft. Designing components such as encoders/decoders to work with the Microsoft components has been a nightmare, as the behaviour of the Microsoft components is not well documented and there is no source code for them. Even where things are documented, the documentation is not always correct, as it may apply only to the desktop.

    – The support for Windows Embedded Multimedia is very poor in the newsgroups. There are hardly any eMVPs in the field of Windows Embedded Multimedia.

    – The wave API for audio is insufficient for Windows Embedded devices, as it has no built-in provision to indicate details about wave streams such as priority or type (alert, notification, media playback). This results in poor resource management and performance optimization at the platform level.

    – They hardly have any roadmap for multimedia, except that they may have roadmaps for their own technologies such as Windows Media codecs, Windows Media DRM, Direct3D, etc.

    – Windows CE/Windows Mobile compliance with open multimedia standards such as OpenMAX, OpenGL, 3GPP, OMA, etc. is poor. The ODM/OEM has to assemble the software required to comply with these standards, which increases product cost and time to market.

  7. Andy Raffman says:

    Let me try to address, as honestly as I know how, some of Thirupathi’s comments. Pardon me for paraphrasing him:

    1. … multimedia technologies such as DirectShow, DirectDraw etc. are directly re-used from Desktop without any optimization…

    – Most of the multimedia code, while it may distantly descend from the desktop, is heavily rewritten and optimized for size and performance. Our graphics and wave drivers and our DirectDraw and D3DM implementations are written completely from scratch, and we’ve spent a lot of time optimizing DirectShow components. Believe me, I really wish we could just grab code from the desktop and reuse it; it would make my job a lot easier.

    2. DirectShow is hard to program for, is poorly documented, and MS doesn’t ship source code.

    – Gee, don’t hold back, let me know how you really feel. Seriously, we’re working on improving the documentation, including more sample/source code, and improving the debuggability/error logging in the future.

    3. The support for Windows Embedded Multimedia is very poor in the newsgroups.

    – I can’t speak for the MVPs, but I know a lot of us on the MM team read the newsgroups and try to offer assistance.

    4. The wave API is insufficient for Windows Embedded devices.

    – Partly true. I don’t agree that it’s a resource management or perf optimization issue though: audio in general takes very little CPU and it’s fairly trivial to support an unlimited number of audio streams independent of what your hardware supports (all the hardware I develop for only supports one stream, but the sw mixer takes care of that). The bigger issues tend to be questions like how to automatically attenuate MP3 playback while a phone call is in progress, deciding where to route audio (e.g. between a headset, external speakers, internal phone call, etc.), and choosing the best sample rate for playback. We’ve addressed some of these issues in the wavedev2 audio driver for Smartphone/PPC (which is applicable to embedded devices as well), but I’ll be the first to say the design could be improved.

    5 & 6. They hardly have any roadmap for multimedia except they may have roadmaps for their own technologies…compliance to the open multimedia standards such as OpenMAX, OpenGL, 3GPP, OMA etc. is poor.

    – I’m not really sure how to respond to this. Yes, we tend to support Microsoft technologies because we’re…you know…Microsoft. Seriously, we tend to support our own APIs first for the following reasons:

    a. Where we feel they give us a competitive advantage/differentiation over other APIs, and allow us the flexibility to move the platform forward.

    b. Where we have a large internal source-code or development infrastructure.

    In general we don’t have anything against other APIs such as OpenMax/OpenGL/etc. In cases where the above issues don’t apply, or where we see overwhelming customer demand, we’re definitely open to supporting these APIs (I think we already do a lot with 3GPP and OMA, but that’s outside my realm of expertise).

    In any case, we’re always willing to assist OEMs with WinCE/Windows Mobile products even if they use non-Microsoft APIs (just last week we offered general assistance to a party interested in running OpenGL ES on a Windows CE device). In such cases, our limiting factor is not desire to help; it’s more often resource/time constraints and our own lack of experience with those APIs.

    -Andy

  8. writetothiru@gmail.com says:

    Hi Andy,

    The following are a few reasons (my opinions and experiences) why I suggest DirectShow is not optimized for performance and battery life, due to "memory copies" and "long data paths".

    – Support for direct rendering: DShow does not provide explicit support for direct rendering, i.e. rendering the decoded data directly onto the screen/audio codec without passing through system layers. For direct video rendering, the platform documentation says the DirectShow Video Renderer provides a mechanism using IOverlay and IOverlayNotify to let a HW-accelerated decoder exploit direct video rendering. We initially designed our decoder based on this assumption, and finally realised that the video renderer does not notify about video window position changes. I suspect this part of the documentation applies to the desktop only. For direct audio rendering, there is no official support from DirectShow. Two methods were tried out; both failed with the introduction of the A2DP-based wave driver.

    – A few of the Windows Embedded application frameworks (Camcorder, VoIP) insist on using conversion-based codecs such as ACMs/DMOs. If you have a DSP/HW-optimized implementation of a codec, the data buffers typically need to be physically contiguous and follow certain alignment constraints. ACM/DMO codecs exchange blocks of uncompressed and compressed bitstreams with their clients through memory blocks allocated and managed by the client, and the client buffers may not obey the constraints on data exchanged between the host and the accelerator. As a result, client buffers cannot be used directly for data transfer between the accelerator/DSP and the host. This typically leads to an implementation involving a DMA transfer to/from pre-allocated physical memory (meeting the HW constraints) plus a copy to/from the client buffers. This happens on both the input and output sides, so there are two unavoidable memcpys in accelerated ACM and DMO codecs. That introduces performance degradation from the two extra memcpys, and extra power consumption from the CPU’s involvement in them. For high-quality video this can have a significant impact.

    Such drawbacks can be avoided if the frameworks follow bottom-up buffering.

    – A media player infrastructure that is optimized for power should release HW resources such as the wave-out device, the DSP codec, etc. during long pauses (i.e. if the user has paused for a long duration). A DirectShow-based decoder/encoder does not get any information on the application being minimized, in the background, de-activated, etc., so it cannot make such power-related optimizations.

    I think the above were a few of the other reasons (apart from DRM constraints) to have a different pipeline for WMV.

    There are certain asymmetries and missing interfaces in the Windows Embedded multimedia infrastructure:

    The VoIP application framework uses ACM-based audio codecs, whereas the Camcorder application enforces the use of a DMO-based audio encoder. This means that to integrate the same codec we are forced to develop two plug-ins (an ACM codec and an encoder DMO).

    For example: to be in compliance with Camcorder, we need to develop an AMR encoder DMO; to be in compliance with the MS VoIP app, we need to develop an AMR ACM codec. Do you have any strong justification for this kind of asymmetry?

    There are no standardized interfaces for advanced codec features such as Packet Loss Concealment, rate adaptation, or enabling/disabling VAD in any of the DShow codec plug-in models (ACM, DMO, native DShow filter). There are no standardized format tags and format blocks for 3GPP codecs such as H.263, H.264, AMR, etc. There may be some de facto standards for these, and it is of course possible to define some of them on our own, but that compromises interoperability and usability. I have discussed some of this here: http://blogs.msdn.com/cenet/archive/2006/11/08/what-s-new-in-real-time-communication-rtc-1-5.aspx#comments

    In general, the DirectShow platform documentation could be improved as follows:

    – You can remove the documentation that is applicable to desktop only.

    – You can document the dynamic behaviour of out-of-the-box modules such as renderers, capture sources, parsers, etc. This will help to pre-verify codec designs.

    – You can standardize the missing interfaces. This helps the interoperability of the pipeline modules designed by different companies.

    For specific points about DirectShow and other documentation, I tried using "Documentation Feedback" in the help, but there is no acknowledgement of the feedback.

    The rationale regarding the incompleteness of the wave API is posted as a comment on the medmedia blog at: http://blogs.msdn.com/medmedia/archive/2007/01/16/the-wavedev2-forcespeaker-api.aspx#comments.

    Thanks

  9. The recent Webcast which discussed Rapidly Building Consumer Devices on Windows Embedded CE (which has

  10. luciad says:

    Hi Thirupathy,

    The IOverlay & IOverlayNotify interfaces should be supported in Windows CE. We were not aware of any issues with the interfaces not being called for video window position changes; please let us know in more detail what the problems were so that we can try to fix them.

    The problem you describe regarding memory constraints with DMOs is a known one, and that’s why we now recommend using filters instead of DMOs. With a filter, there’s the possibility of creating your own allocator, and the filter can decide where memory should live. There are other advantages of filters over DMOs as well:

    – DMOs don’t have "pause" (or filter state) knowledge. DMOs just stop/start receiving data.

    – DMOs don’t have current rate knowledge.

    – DMOs don’t have control over the number of buffers, and also can’t provide an allocator.

    It’s true that DMOs are simpler to implement, but I don’t think the constraints they impose are worth it in some cases.

    Lucia

  11. writetothiru@gmail.com says:

    Hi Lucia,

    Thanks for following up the comments. Please see my comments below:

    Regarding IOverlay and IOverlayNotify,

    The video decoder had success in querying IOverlay and advising its IOverlayNotify interface on the video renderer. However, the video renderer does not notify about clipping changes, i.e. the OnClipChange function does not get called when we click on the menu of the media player (ceplayer) or when any window overlaps the media player's video window, etc. Note that the decoder asked the video renderer to advise about position (ADVISE_POSITION) and clipping (ADVISE_CLIPPING) changes.

    Regarding DMOs,

    We did an analysis of the advantages and disadvantages of all three possible codec plug-in models (DMO, ACM, and native DShow filters) some time back, and realised that native DShow codec filters are sophisticated and superior compared to the other plug-in models (DMO, ACM). So we implemented all our codecs as DShow filters. But we had a lot of setbacks when we started integrating our DShow filters with Microsoft OOB applications, as listed below:

    – The Microsoft Camcorder application hard-codes the use of a DMO-based encoder, so the already-developed DShow filters become obsolete, although they work seamlessly with custom/3rd-party camcorder apps.

    – WMP 10.x Mobile can play an MP3 file only if you plug in the MP3 decoder as a DMO.

    – The VoIP application framework/RTC 1.x uses ACM-based audio codecs only, so we have to redevelop some of the audio codec plug-ins, although they are already available as DMOs/DShow filters.

    As platform developers, we simply cannot predict what changes will take place in the Windows Embedded multimedia applications and frameworks, or when.

    Thanks

  12. keyur47831 says:

    I cannot find any information on streaming mp3 data on Windows mobile