Serio Blog

Friday, 03 Aug 2007

Are you comfortable with the idea that music you’ve paid for on iTunes can be played only on your iPod? Surely it’s time to put consumers first, argues Mark James.

Digital rights management (DRM) is a big issue right now. Content creators have a natural desire to protect their intellectual property and consumers want easy access to music, video, and other online content.

The most popular portable media player is the Apple iPod, by far the most successful digital music device to date. Although an iPod can play ordinary MP3 files, its success is closely linked to iTunes’ ease of use. iTunes is a closed system built around an online store with (mostly) DRM-protected tracks using a system called FairPlay that is only compatible with the iTunes player or with an iPod.

Another option is to use a device that carries the PlaysForSure logo. These devices use a different DRM scheme - Windows Media - this time backed by Microsoft and its partners. Somewhat bizarrely, Microsoft has also launched its own Zune player using another version of Windows Media DRM - one that's incompatible with PlaysForSure.

There is a third way to access digital media - users can download or otherwise obtain DRM-free tracks and play them on any player that supports their chosen file format. To many, that sounds chaotic. Letting people download content without the protection of DRM! Surely piracy will rule and the copyright holders will lose revenue.

But will they? Home taping has been commonplace for years, but there was always a quality issue. Once the development of digital music technologies allowed perfect copies to be made at home, the record companies hid behind non-standard copy prevention schemes (culminating in the Sony rootkit fiasco) and DRM-protected online music. Now video content creators are following suit, with the BBC and Channel 4 both releasing DRM-protected content that will only play on some Windows PCs. At least the BBC does eventually plan to release a system that is compatible with Windows Vista and Macintosh computers, but for now the iPlayer and 4 on Demand are for Windows XP users only.

It needn’t be this way: incompatible DRM schemes restrict consumer choice and are totally unnecessary. Independent artists have already proved the model can work by releasing tracks without DRM. And after the Apple CEO, Steve Jobs, published his Thoughts on Music article in February 2007, EMI made its catalogue available, DRM-free, via iTunes, for a 25% premium.

I suspect that the rest of the major record companies are waiting to see what happens to EMI's sales and whether there is a rise in piracy of EMI tracks – which in my opinion is unlikely. The record companies want to see a return to the 1990s boom in CD sales, but that was an artificial phenomenon as music lovers re-purchased their favourite analogue (LP) records in a digital (Compact Disc) format. The way to increase music sales now is to remove the barriers to online content purchase.

  • The first of these is cost. Most people seem happy to pay under a pound for a track, but expect album prices to be lower (matching the CDs that can be bought in supermarkets and elsewhere for around £9). Interestingly though, there is anecdotal evidence that if the price of a download were cut to around $0.25 (instead of the current $0.99), people would actually download more songs and the record companies would make more money.
  • Another barrier to sales is ease of use and portability. If I buy a CD (still the benchmark for music sales today), then I only buy it once - regardless of the brand of player that I use. Similarly, if I buy digital music or video from one store - why should I have to buy it again if I change to another system?

One of the reasons that iTunes is so popular is that it's very easy to use - the purchase process is streamlined and the synchronisation is seamless. It also locks consumers into one platform and restricts choice. Microsoft's DRM schemes do the same. And obtaining pirated content on the Internet requires a level of technical knowledge not possessed by many.

If an open standard for DRM could be created, compatible with both FairPlay and Windows Media (PlaysForSure and Zune), it would allow content owners to retain control over their intellectual property without restricting consumer choice. 

Tuesday, 31 Jul 2007

There’s an interesting post over at the Item Community forum at the moment. Contributor ITILNeutral explains how ITIL is being introduced at his company:

…A sizeable number of staff will lose their jobs (there is very little assimilation, everyone of us has to be re-interviewed for our jobs which we are totally not qualified for as far as ITIL is concerned)… ...As an example the few colleagues whose jobs are deemed already to be ITIL compatible have been assimulated but they have taken up to a £5,000 a year pay cut each...

Pretty surprising stuff. It’s unusual for management to take such drastic steps, if only for the sake of simple expediency. Management actions such as this cause uncertainty, and uncertainty is a catalyst for any organisation’s brightest and best to leave for other employment – something that can have quite devastating effects in the short and medium terms on the services delivered to customers.

Although I’ve not experienced such a ‘wrecking’ approach from management, I have seen something similar in my career when a former employer (and a job I enjoyed very much) ran into financial difficulties. Within 3 months the 5 most experienced and able staff had left, rather than waiting to be ‘downsized’. My recollection is that the business suffered further as a result.

On the whole, based on what ITILNeutral has written, I’d regard the actions of managers there as plain barmy. However, it would be interesting to see what the state of IT services in the company was like, and to see if this response was some kind of backlash from a frustrated and angry business/user community. Over the years I’ve seen some pretty poor IT helpdesk/service desk operations where staff can’t be bothered to pick up the phone, cherry-pick Incidents so that those that are ‘difficult’ or involve awkward customers are never attended to, and return very poor service for the investment their companies have made.

In companies like these achieving any kind of organisational change is difficult – no amount of ITIL training or role play or simulation will help. My response to this in the past has been to look carefully at team leaders – to recruit or appoint the right people, and to use them as the catalysts for change, so that rather than make cultural and service delivery changes to a team of 30, we are working with teams of something like 5 or 6. In effect, breaking the problem down into more manageable pieces and tackling resistance or apathy at the level of the team.

Generally I’ve not found it necessary to be anywhere near as confrontational.

My first ever boss told me that being a manager in IT was 'like training cats'. With that in mind, and because there seems to be a degree of irrationality in the new manager in this organisation, everyone concerned has my sympathy.

Friday, 27 Jul 2007

'Your Helpdesk/Service Desk may be closer to serving a consumer culture than you think' writes Tracey Caldwell.

IT departments are losing control over the IT used in their companies. The days of bestowing a technology solution on grateful masses seem increasingly distant. The users are revolting, bringing the consumer technologies they find useful outside work into the workplace. Technology news feeds and business technology blogs are just as interested in social networking and mobile telephony as they are in gigabytes of this or new versions of that.

Market research giant Gartner Group has come up with a word for this trend - consumerisation. Gartner reckons consumerisation is a catalyst for the growing conflict between the traditional enterprise IT function, which has been in sole charge of enterprise IT architecture, and the growing desire and ability of employees to influence their use of IT. IT staff may have other words for it, believing consumerisation spells disaster for compliance, security and support, and perhaps the entire IT infrastructure of their business.

Gartner has even put out a special report about it and warns businesses to change their attitude toward consumer-led technology appearing in the enterprise from ‘unavoidable nuisance’ to ‘opportunity for additional innovation’. A bit of a surprise, then, that Gartner was reported joining a host of other commentators in warning businesses off the iPhone at its launch, citing security and voice quality concerns.

True, quite a few technologies that started out as consumer technologies have made an impact in corporate IT, from PCs to today’s invasion of the enterprise by consumer-led instant messaging and desktop search.

As web-based companies put out beta technology and let consumers make what they will of it and work out how to make money out of it, savvy business chiefs can’t wait for the technology to mature, as they might have done once. But what consumer technology is hot and what is not? Gartner thinks it has the answers.

Apparently, the next round of consumer-led innovations likely to have a real effect on revenue or internal spending and processes within three years includes:

  • Web-based application services spreading into business use
  • Private communications channels such as email and IM being overtaken by community communication, where privacy is not taken for granted
  • Desktop videoconferencing
  • Portable virtual machines

Users are already showing worrying (for the Helpdesk or Service Desk) interest in running virtual environments on their PCs, not least prompted by the incompatibilities of new systems. Some enterprises are already looking to reflect this by implementing a virtual desktop environment as their server-based system of choice. This brings a whole host of security concerns, but it looks like the bullet will have to be bitten and the security concerns addressed, because Gartner forecasts great things for virtualisation.

Further into the future, it thinks virtual technologies will be extended to produce augmented realities, where a PC or mobile device will provide an interface and information relevant to the user's location and context. Unboggle your minds and think of applications in plant maintenance, training, computer-aided surgery or PCB diagnostics, for example.

Wednesday, 25 Jul 2007

This is a follow-up post to An Introduction to MPLS. That post tried to give some background on MPLS and described the use of edge routers and the MPLS ‘cloud’.

This post is going to talk about monitoring the service on Cisco routers. What follows works on 2600-series routers, but will also probably work on any later model Cisco router.

As with any monitoring exercise, you need to decide upfront what it is you are interested in. If it’s just ‘circuit availability’ (is the link up or down?) then that’s just a simple case of configuring the routers to send LINK-DOWN traps to the Command Center. Usually, though, customers are interested in more subtle things like ‘how well is the link performing?’ as opposed to just ‘is it working?’.

Fortunately, there are some pretty useful things already built into the Cisco operating system to help us.

The way we’ve approached this is to set up probes on each of the routers – probes are part of the Cisco operating system. Here is what a probe does: if you consider a pair of Edge Routers on either side of the MPLS service, the probe causes test data to be sent from one router through MPLS, detected by the other in the pair, and then echoed back again. In doing so, you can measure:

  • Latency (how long it takes for the round trip)
  • Jitter (how much the round trip time varies)
  • Packet loss (did we lose any data on the round trip)

Jitter is a statistic you should only look at if you are trying to use Voice over IP (VoIP), or are sending voice-class data over your MPLS link. Latency and packet loss, however, are relevant even if you are just sending data.

The Cisco routers gather all of this data for you, and place it in an SNMP table you can read from the Command Center (you’ll find a MIB and Command Center Script at the end of this post). With a few simple calculations that the script performs, you can get Latency, Jitter and Packet Loss from the table.
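To give a feel for what the script does, here is a minimal sketch in Python of the sort of arithmetic involved. This is an illustration only, not the Command Center script itself, and the counter names merely echo CISCO-RTTMON-MIB conventions in simplified form:

def probe_stats(rtt_sum_ms, rtt_count, jitter_sum_ms, jitter_count,
                loss_sd, loss_ds, packets_sent):
    # average round-trip latency over the cycle, in milliseconds
    latency_ms = rtt_sum_ms / rtt_count
    # average variation between successive round trips (jitter), in milliseconds
    jitter_ms = jitter_sum_ms / jitter_count
    # loss combines source-to-destination and destination-to-source drops
    loss_pct = 100.0 * (loss_sd + loss_ds) / packets_sent
    return latency_ms, jitter_ms, loss_pct

# illustrative counters for one 1000-packet cycle
print(probe_stats(52000, 998, 1900, 997, loss_sd=1, loss_ds=1, packets_sent=1000))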

These commands can be used to set up the Edge Routers:

! configure probe 1 as a jitter probe; the far-side router echoes the packets back
rtr 1
type jitter dest-ipaddr 111.111.111.1 dest-port 2048 num-packets 1000
request-data-size 86
frequency 30
! run the probe indefinitely, starting immediately
rtr schedule 1 life 2147483647 start-time now
 

where 111.111.111.1 is the address of the other router in the pair.

The default Cisco SNMP Packetsize is too small to allow the statistics table to be read. So, the following command is required:

snmp-server packetsize 8192

Calculations

The probe listed above will send a stream of approximately 20 kbps, as shown below:

  • Send 86 byte packets (74 payload + 12 byte RTP header size) + 28 bytes (IP + UDP).
  • Send 1000 packets for each frequency cycle.
  • Send every packet 30 milliseconds apart for a duration of 30 seconds and sleep 10 seconds before starting the next frequency cycle.

((1000 * 74 bytes) / 30 seconds) * 8 bits per byte ≈ 19,733 bps ≈ 19.7 kbps
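If you want to estimate the load for other probe settings, the same sum generalises. A quick sketch (the figures are the ones used above; like the calculation above, it counts payload only and ignores the IP/UDP/RTP overhead):

def probe_load_kbps(payload_bytes, packets, cycle_seconds):
    # offered load of the probe payload, in kilobits per second
    return payload_bytes * packets * 8 / cycle_seconds / 1000.0

print(probe_load_kbps(74, 1000, 30))  # ~19.7 kbps, matching the figure above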

These links on the Cisco website offer more detail

http://www.cisco.com/en/US/products/sw/cscowork/ps2144/products_user_guide_chapter09186a00800f4ec8.html
http://www.cisco.com/en/US/tech/tk869/tk769/technologies_white_paper09186a00801b1a1e.shtml

and the MIB for the Cisco table is here:

rttMONMIB

and the script is here (right-click each link and 'save as...')

Monday, 23 Jul 2007

This is a post about MPLS (definition below) and a monitoring project we’ve recently helped a Command Center customer with. I’ll start by talking about MPLS in general, and post the monitoring stuff later. I'm going to assume you've never heard of MPLS.

MPLS for Dummies

It stands for Multiprotocol Label Switching. It’s a way of speeding up network traffic by avoiding the time it takes for a router to look up the address of the next node to send a packet to. In an MPLS network, data has a label attached to it, and the path taken is based on that label. You can also attach a class to the data, which can be used to indicate that it has higher than usual priority.

Whilst the above takes care of the ‘Label Switching’ part of the name, the ‘Multiprotocol’ part comes from the fact that Asynchronous Transfer Mode (ATM), Internet Protocol (IP), and frame relay network data can all be sent using MPLS.

For most companies, MPLS will be a service that they buy from a network services provider, and it might be beneficial to think of it thus: a pair of routers (the idea of a pair is important) on either side of an ‘MPLS cloud’. Example: you have two offices you want to link – say London and Edinburgh. You have in each office a router which interfaces with the MPLS service. When a device in Edinburgh wants to send data to a device in London it is sent via the Edinburgh router onto the MPLS service (appropriately labelled and classified by the router) where it will appear (eventually) on the London router for passing to the correct device. Between the two routers (referred to as ‘edge routers’ because they sit on the edge of the MPLS service) the data is the responsibility of the network services provider. For this reason, an MPLS service is often referred to as a cloud in network diagrams (‘we don’t know or care what happens here’).

So why bother? From the handful of customers we know using these services, the MPLS service is replacing leased lines. One of the key drivers seems to be cost – the MPLS services are working out cheaper than a leased line. However, another driver seems to be the desire to offer new services to users, one of which is Voice over IP (usually shortened to VoIP).

MPLS can be a sound (heh) choice for VoIP because of the idea of prioritising and classifying data. For VoIP to work, packets need to be sent quickly and at a relatively stable rate – otherwise you get distortions on the line. MPLS therefore offers the promise of ‘first class mail’ for some packets (voice) and ‘second class mail’ for others (data) over the same network path, data being less sensitive to the speed of transmission and its variance.
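As an illustration of that ‘first class/second class’ idea, here is a minimal sketch of how an edge router might mark and prioritise voice ahead of ordinary data before handing traffic to the provider, using Cisco’s standard class-map/policy-map syntax. The class names, the bandwidth figure and the interface are made up for the example; the traffic classes your provider actually honours will be defined in your contract:

! 'first class mail': voice traffic, matched on its DSCP marking
class-map match-any VOICE
 match ip dscp ef
policy-map TO-MPLS
 class VOICE
  priority 256
! 'second class mail': everything else gets fair queueing
 class class-default
  fair-queue
interface Serial0/0
 service-policy output TO-MPLS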

MPLS links:
MPLS Seminar notes – A pretty good introduction
Wiki – not for the faint hearted

I’ll post the monitoring details later in the week.

Friday, 20 Jul 2007

Jim emails ‘Put this one in the blog if you like, but edit out the numbers and change my name. I have to keep track of 000’s of computers that are on customer sites (we don’t own the equipment but need to know where it is). The trouble is our engineers move things and change things without telling me. We have a procedure but it’s routinely not followed. What can I do, or should I start looking for another job?’.

Hi ‘Jim’ and thanks for emailing. It’s probably a bit early to start sending your CV out, as there are a few things you can do and suggest to make improvements.

I know from our follow-up emails that you have a Change procedure at your company – the trouble is that it isn’t being followed. A crucial part of your Change process (and indeed most such processes) is the step where the Asset Management/Configuration Management process is informed, allowing records to be brought up-to-date and re-verified.

This non-conformance leads to all sorts of problems, such as: someone phones up to report a problem with equipment which you have registered against site X, but the equipment has been moved to site Y (and subsequently cannot be found). I also know that there is a lot of movement going on each week.

The temptation might be to storm into the Service Delivery Manager’s office and demand that everyone is fired. However, it’s always best to be constructive when trying to solve problems, so I’d try to do a bit of fact-finding first.

The first thing I’d want to find out is why the Change procedures, which should notify you of all movements and Changes, are not being followed. Specifically, I’d want to establish:

  • If no-one actually knows about these procedures
  • If the Change procedures are generally known about but are not followed because of some negative perception (it might be they are perceived as cumbersome and bureaucratic)
  • If there is a general culture of apathy and non-cooperation
  • If other cultural factors are at play (such as significant time pressure placed on field engineers, who respond by moving from one job straight to the next)

What you discover on the fact-find will enable you to suggest solutions.

  • Procedure not understood or known > More training, better documentation
  • Procedures known but not followed > Suggests a re-drafting of your procedures, and/or greater management support and enforcement

… and so on. Be careful how you do your fact-find. Specifically, be careful of asking just managers, who often have a completely different view of the organisation from technicians and engineers.

It also might be the case you need more senior managerial support than you are getting – this is a really vital ingredient. Assuming that any issues with your working practices themselves have been resolved, you sometimes need senior managers to remind people that compliance is mandatory, and not optional.

Wednesday, 18 Jul 2007

This is the final post in the series about Knowledge Base content and design, and follows on from my earlier post on the subject. The earlier post expanded on the quality system; this post gives a worked example for WidgetCo. Hopefully this will help bring all of the previous posts together!

WidgetCo Knowledge Base Quality System

Editor’s Brief

Editor: George Ritchie
Catalog Name: Desktop KB
Catalog Accepted Formats: HTML, PDF
Audience: Service Desk Staff, 2nd Line Support Technicians
Catalog Description: Contains both Incident Resolution and How-To documents for Desktop-based computers running XP and Vista.
Examples of subjects that can be covered:
  • Resolving common display driver problems
  • Troubleshooting VPN connection problems
  • Rolling out a laptop from one of our ghosted images
  • Resetting a user password

Routine tasks for the Editor

Weekly: Check for new Document suggestions. These will either be suggestions emailed to you, or Incidents resolved in the last week and flagged with ‘KB suggestion’. From these, produce a list of candidate new documents. Describe each document with either an Incident reference number, or a paragraph describing the subject.

Weekly: Check for document feedback through our feedback mechanisms. Make sure each respondent receives an acknowledgement where contact details are provided.

Monthly: Review the topics (search terms) that have been submitted to the Knowledge Base Engine. Look for topics that are not covered (or adequately covered) and use this to produce a list of candidate new documents.

Monthly: Send an email to all potential users giving a title and link to each new document created.

Yearly: Each document should have a ‘Review Tag’ at the bottom – for example, REVIEW2007. This is the point at which the document is to be reviewed. As reviews are performed just once a year, this kind of tag will work fine. Simply use the Knowledge Base search facility to locate documents tagged REVIEW2007, review the content, and then update the tag to say REVIEW2008 – and so on. Reviews should check documents for accuracy and relevance.

Procedures for adding a new Document

Prior to creating the document:
  • Check that the document is not already included in this Catalog, or any other possible Catalog
  • Have the document peer-reviewed by a member of the 3rd line support team if required
  • Ensure that the document is accurate and on topic

Reporting and KPIs

A monthly report should be submitted to the Service Delivery Manager detailing:
  • The number of documents created that month
  • The number of queries performed in total by consumers that month
  • A summary of user feedback
  • Any other issues affecting search relevance

Monday, 16 Jul 2007

This is a follow-up to my last post Designing a KB Quality System. In this post, I’m going to give a more detailed description of what is included.

Recall that in the last post I said we wanted:

  • Accuracy
  • Relevance (conformance with the Editor’s Brief)
  • Non duplication of content
  • Feedback mechanisms
  • Periodic review
  • A way for new documents to be suggested

The first 3 of those mean that it’s unlikely you’ll have a system where anyone can create a document and just add it (what I call a ‘dumpster’ Knowledge Base). Of course, you could say that contributors have to check these things before adding, but this is likely to be done with varying degrees of success – and the bigger the team, the more problematic it will usually become. (As an aside, I’m not a big fan generally of anything being owned by ‘the team’.)

Instead, you will have a small number (ideally one) of Editors who will check accuracy, relevance and uniqueness before adding. There are two ways you can use Editors:

  • The Editor writes all documents based on suggestions from colleagues or content consumers
  • Or, The Editor checks documents written by others before inclusion.

My experience is that although the second option offers the prospect of more and better content, in practice you’ll need someone to lead the process, and you’ll probably end up doing the first.

You need some form of feedback mechanism. For Serio users, this will usually mean you enable the ‘rate this document’ functionality, which allows SerioWeb users to comment on documents for you, or you simply have a special ‘feedback’ email address that goes direct to the Editor. The more technical your audience, the more likely they are to report technical errors in my experience.

Periodic review does just what it says. You need to review documents every once in a while because technology changes. For example, an incompatibility between two products might be resolved, prompting either removal or updating of the document.

Finally, you need a way for content users to suggest new documents. Now, there are two ways to do this.

  • Look at what consumers are searching for.
  • Or, Ask for suggestions directly.

Looking at what search terms are being submitted, and what results are returned, is an essential activity. If you are a Serio user, please note that logging of this data is OFF by default – switch it on and you’ll be able to see all the search terms being targeted by your consumers (see ‘Monitoring search terms used’ in the HowTo guide). This will help you identify weak areas and suggest areas for improvement.

If, as an editor, you just say ‘suggest some articles!’ you probably won’t get much of a response. Instead, create a simple and structured environment. For Helpdesks and Service Desks, this usually means a way to ‘flag’ Incidents at the point of closure so as to say ‘a knowledge base article for this is required’.

For Serio users, this usually means using Agent Status B (set to something like ‘KB Suggestion’) and a question like ‘Should this Incident be suggested for Knowledge Base inclusion Yes/No?’ as part of the resolution Action. All the Editor needs to do is scoop these up once in a while and decide, based on the Editor’s Brief, whether each should be included.

I’ll take all this in my next post and construct a worked example.

Thursday, 12 Jul 2007

I’ve been blogging recently about Knowledge Base content. Before proceeding, I’m going to do a quick summary of what I’ve said so far.

Content is the important thing – if you don’t make a real effort to get good, useful and relevant content you are wasting your time.

Think carefully in advance about your content. Group related content into a small number of Catalogs, and then document each Catalog. Describe the target audience, and the type of documents you’ll be creating. Create a small number of example documents that will show the style and layout to be used. All of this documentation will become the Editor’s Brief.

When designing your example documents, take a little time to help your Indexing/Search system. Find out how you can help it really understand what the document is about, and then use this in your document structure.

I have a personal preference for short-ish, single subject documents. Decide if these are the kind of documents you want. Try to decide a standard document format (HTML, Word etc) and stick to it for each Catalog, and decide how one document will reference another.

Everything above will come together into an Editor’s Brief. Such documentation is a great thing to have because it will help your content stay focused over the months as your content increases. The Editor’s Brief is also useful to searchers in that it helps them understand what is likely to be in the Catalog.

If you have an Editor’s Brief, it follows that there must be an Editor somewhere, which leads me nicely onto the subject of a Quality System for your Knowledge Base content – something you are likely to need from day one.

Here’s what the Quality System will need to ensure:

  • That the documents placed into the Catalog are technically accurate
  • The documents are in accordance with the Editor’s Brief
  • New documents being added are ‘unique’ (in other words, there is not already a document that addresses the same subject matter)
  • That consumers (searchers) can give feedback, and that the feedback will be read and if needed acted upon by editors
  • That a mechanism exists for periodic review of documents
  • That a simple mechanism exists for suggesting new documents or content

I know this sounds a bit bureaucratic, but in practice it usually works out to be a common sense approach. I'll expand on this in my next post, and will post an example quality system.

Tuesday, 10 Jul 2007

I don't have much money to spare, and I wish the banks would make it a little harder for someone else to get what I do have, writes Mark James.

A few weeks back, I read a column in the IT trade press about my bank’s botched attempt to upgrade their website security and I realised that it’s not just me who thinks banks have got it all wrong. You see, the banks are caught in a dilemma between providing convenient access for their customers and keeping it secure. That sounds reasonable enough until you consider that most casual Internet users are not too hot on security, and so the banks have to dumb it down a bit.

Frankly, it amazes me that information like my mother’s maiden name, my date of birth, and the town where I was born are used for “security” – they are all publicly available details, and if someone wanted to spoof my identity it would be pretty easy to get hold of them all!

But my bank is not alone in overdressing their (rather basic) security – one of their competitors recently “made some enhancements to [their] login process, ensuring [my] money is even safer”, resulting in what I can only describe as an unmitigated user experience nightmare. First I have to remember a customer number (which can at least be stored in a cookie – not advisable on a shared-user PC) and, bizarrely, my last name (in case the customer number doesn’t uniquely identify me?). After supplying those details correctly, I’m presented with a screen similar to the one shown below:

[screenshot of the bank's login screen]

So what’s wrong with that? Well, for starters, I haven’t a clue what the last three digits of my oldest open account are, so that anti-phishing question doesn’t work. Then, to avoid keystroke loggers, I have to click on the key pad buttons to enter the PIN and memorable date. That would be fair enough except that they are not in a logical order and they move around at every attempt to log in. This is more like an IQ test than a security screen (although the bank describes it as “simple”)!

I could continue with the anecdotal user experience disasters, but I think I’ve probably got my point across by now.

Paradoxically, the answer is quite simple and in daily use by many commercial organisations. Whilst banks are sticking with single-factor (something you know) login credentials for their customers, companies often use multi-factor authentication for secure remote access by employees. I have a login ID and a token which generates a seemingly random (actually highly mathematical) 6-digit number that I combine with a PIN to access my company network. It’s easy – and all it needs is knowledge of the website URL, my login ID and PIN (things that I know), together with physical access to my security token (something I have). For me, those things are easy to remember, but for someone else to guess… practically impossible.

I suspect the reason that the banks have stuck with their security theatre is down to cost. So, would someone please remind me, how many billions did the UK high-street banks make in profit last year? And how much money is lost in identity theft every day? A few pounds for a token doesn’t seem too expensive to me. Failing that, why not make card readers a condition of access to online banking and use the Chip and PIN system with our bank cards?
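As an aside, the ‘seemingly random (actually highly mathematical)’ number these tokens display is typically derived along the lines of HOTP (RFC 4226, published in 2005): a keyed hash of a counter that token and server keep in step, truncated to six digits. Here is a minimal sketch in Python, purely to show the principle (real tokens differ in their details):

import hashlib
import hmac
import struct

def hotp(secret, counter, digits=6):
    # HMAC-SHA1 over the big-endian counter, as per RFC 4226
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # dynamic truncation: read 4 bytes at an offset given by the last nibble
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# the secret never leaves the token or the server; only the 6-digit code is typed in
print(hotp(b"shared-secret-inside-the-token", counter=42))

Without the shared secret, the six digits on the token's display reveal nothing useful, which is why a stolen PIN alone gets an attacker nowhere.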
