THE HOME OF THE GLOBAL SOURCING STANDARD
Successful outsourcing is inconspicuous to the wider world – and so it should be, because it is about creating a seamless, efficient experience. The press and public, however, will always remember and discuss conspicuous failures, especially those resulting from human frailties – such as the employee who downloaded data from a multimillion-pound computer system and then lost it in the post. What does this tell us about major public databases? Not that they are insecure from hackers or malicious fraudsters; rather, that many staff using them do not have even a basic understanding of how to handle electronic data securely. This is where security fails: not at the firewall, but in the mailroom, or on a cluttered desk.
No-one notices joined-up public service, because the successfully outsourced experience is a forgettable one; but everyone remembers a call centre worker asking where the major city they are calling from is located, especially when that city is the location of the company’s head office. Callers will forever associate that experience with the company, brand, or organisation it represents. This is especially true of public services, such as the NHS, the Inland Revenue, and so on.
The Blair government looked favourably on technology as it saw it as an exemplar of ‘modernity’, and Blair’s catholic (i.e. universal) quest for modernity – preached from his ever-present invisible pulpit – was perhaps the defining idea of his premiership, alongside compassionate smartbombing in the approximate direction of democracy. Gordon Brown has maintained a dignified silence on the pursuit of modernity, but largely because he bankrolled it for more than a decade with mixed degrees of success.
This year is the time to accept that technology in isolation driven by the ‘need to be modern’ is a recipe for disaster. 2008 needs to be the year in which public servants stop talking about technology as though they are dealers in an arms race, and start talking about people. It must also be the year when the government finally recognises the clear, repeating patterns in the failure of many large-scale projects.
The massive loss of data and public confidence from a number of government services, including the child benefit system and several NHS trusts, illustrates this all too well. The NHS IT project, the ID card scheme, and other major strategic projects are all being discussed as technology solutions to technology problems, whose every flaw or security concern will supposedly be fixed by throwing yet more world-beating, gold-standard technology at them. This is nonsense.
Minister after minister has been wheeled before the cameras to say that the ID card scheme will succeed because it will be backed by the very best in security technologies, just as the overarching NHS IT system will be when it finally goes live. This, I’m afraid, is completely, supremely, almost ludicrously irrelevant.
The truth is that technology, databases, networks, and communications systems are nothing more than high-tech representations of an organisation’s management structure and corporate policy. They connect human beings together, in accordance with rules set out by the management, and merely facilitated by wires, routers, hubs, servers, optical communications, and so on.
These systems either succeed or fail because of people, policies, and management, and they usually fail not because the people at the top have actively screwed up, but because the people at the bottom have never even been considered, and perhaps know nothing about electronic data, let alone data security and privacy laws. And it’s not just the people at the bottom: just ask the Qualcomm executive who several years ago attended a conference on IT security. He left his laptop there; the equivalent of leaving an entire company in a suitcase.
Take either the ID card scheme or the oft-delayed NHS IT project.
Question: Who enters data into computer systems? Who sits and manually types record after record into a computer terminal? Who will have the time to check the veracity of data? Who will interpret and update and standardise reams upon reams of data stored in filing cabinets, in ringbinders, on outdated databases, on thousands of computer disks (some of them probably long-outmoded)?
Answer number one: Highly qualified professionals who mysteriously have months of free working hours to plough through millions of records, cast their professional eye over the contents, check with the person who the data represents, and then correct the information?
Or answer number two: poorly qualified, poorly paid, or relatively low-skilled workers; people on minimum wage; people with little more than basic secretarial skills; early school leavers; undergraduates in holiday jobs; people whose second language may be English; and people in far-flung parts of the world working for remote corporations?
This is the flaw in the system: not the system itself, nor the firewalls and encryption protocols; nor black-hat hackers waging cyber-warfare from a bunker in North Korea. It’s the normal, everyday working people who do the manual labour, many in outsourced locations. All of them may be trustworthy and intelligent, but most of them have never even been factored into the management’s thinking.
We all know that the only person who can read your doctor’s handwriting is your local pharmacist, so perhaps we should employ a few thousand pharmacists!
Let’s hope, as we plough forward into 2008 as an industry and as customers of an industry, that we learn one vital lesson about major, multibillion-dollar contracts established on behalf of the public: large-scale public technology implementations are primarily about three things: people; corporate policy; and corporate social responsibility.
A peaceful and prosperous 2008 to one and all.
On this site in the coming weeks, months and years you will find all the professional news, analysis and features you would expect, together with opinion pieces by some of our industry’s leading spokespeople, senior executives, analysts, and industry figureheads. Not only that but there will be both a vertical and a horizontal industry focus, and a host of other unique content that will be unveiled in the coming months covering every part of the globe, every industry sector, and every market within this burgeoning industry.

We’re going to slice and dice the outsourcing industry like never before to give you new perspectives on the movers and shakers, from the big names to the smaller, more nimble players who will punch above their weight over the coming year. Together, we aim to make this the only destination for outsourcing news and information you need, but we will also be making it entertaining, provocative, and controversial; if you’re not discussing ideas around the water cooler, then we’re not doing our job.

One thing I loathe as a journalist is news analysis that ends with the words “only time will tell”; I dislike the word “somewhat”; and I value writers who not only gather news, but also have a strong opinion. I want to know what the writer thinks and believes, not that he or she is pointing a finger into the wind and refusing even to guess. We’re going to tell you what we think, and we want you to do the same. Perhaps you could even write for us?

Sourcingfocus.com will be at the leading edge of outsourcing debate, and we’ll be asking opinion formers from outside the outsourcing industry, from politicians to customer giants, to share their expertise, and their unique perspectives, with our readers.

So welcome, then, to your portal. Is there something you want to see, or to share? Tell us. We will always value your feedback, your ideas, and your innovation. Let’s make this fly.

With best wishes for a prosperous year.

Chris Middleton, Editor
Luxoft, Russia’s largest provider of high-end IT outsourcing services and product development to clients such as Deutsche Bank, IBM, UBS, T-Mobile and Ping Identity, today issued its annual predictions for the IT outsourcing industry in 2008. These predictions cover a range of technical, business and relationship issues demonstrating the increasing maturity of the global IT outsourcing market. Top areas to watch include:
Transformational outsourcing will drive true innovation
In 2007 as the outsourcing market continued to mature, many companies realised that outsourcing can bring value far beyond simple cost savings and tactical software development. The issue for some clients was obtaining the level of innovation they were seeking from vendors.
In 2008 Transformational Outsourcing – leveraging vendor knowledge and expertise to reinvent client business processes – will gain more traction. Here clients require vendors with an in-depth grasp of their industry to support and create mission-critical business processes, manage change and think beyond the initial brief.
Managing and measuring agile comes next
In 2007 agile software development started to take root in outsourcing engagements. Agile development means building software in shorter iterations and in a much more collaborative fashion.
In 2008 agile development adoption will continue apace. However, as in-house teams start to truly understand this approach, they will also need to determine how to effectively manage and measure the success of their efforts. It will be critical to secure vendors that have already mastered agile to fuel this next step.
Eastern Europe & Canada provide hot destinations
In 2007, as nearshoring grew in popularity, Eastern Europe and Canada began to pick up steam as outsourcing destinations, particularly for European and North American companies as well as US multinationals.
The Eastern European software industry grew by 12.53 percent in 2006 and is estimated to grow at a CAGR of 10.87 percent until 2008. Ukraine is an especially promising spot, with its offshore outsourcing market growing 47 percent in 2006 and 30,000 IT graduates joining the workforce each year.
In 2008 these two regions – known for technical excellence, innovation and solid business practices – will continue to gain prominence.
Strong European client demand continues
In 2007 European customers, particularly in the financial services and banking sectors, had a strong appetite for outsourcing. In fact, demand for outsourcing in Europe increased dramatically in the first half of 2007, rising as high as 78 percent compared to the first half of 2006. This increase meant that Europe accounted for 54 percent of new outsourcing contracts worldwide, compared to 32 percent last year and 38 percent on average for the last five years.
This strong growth will continue in 2008 with U.K. and German clients being the most active seekers of outsourcing services delivered via a nearshore, Eastern European model.
Embedded Development and product engineering take outsourcing deeper
Over the past year, clients in the automotive, industrial, electronics and telecommunications equipment industries have been increasingly seeking outsourced talent to assist their in-house embedded development teams. This trend will continue in the coming year with vendors needing to possess proven skills in working with human-machine interfaces and hardware communication protocols in order to provide successful embedded development support.
In addition, some outsourcing vendors are beginning to provide ground-up software product development and engineering support for offerings that will be packaged and marketed by their independent software vendor clients. This will develop further in 2008 as transformational and innovative outsourcing grows; agile development will also be a major factor in reducing time-to-market.
Software testing grows in scope
Software performance and product testing, once almost exclusively done by in-house teams, has started making its way onto the outsourcing scene over the past few years.
In 2008 the scope will broaden beyond traditional functional and system integration testing to encompass overall system performance, scalability, usability and security testing, bringing higher value to the client’s organisation. This will require new outsourcing services to be offered in the market in the areas of system performance engineering, test automation and regression testing efficiency.
Country-to-country collaboration creates strange bedfellows
In 2007 many new kinds of outsourcing resource models developed including vendors setting up shop in other countries to tap into resources in a continually tightening talent market.
In 2008 we’ll see this taken a step further with vendors in highly competitive outsourcing countries teaming up or even forging vendor/client relationships. This could, for example, bring Russia and India or China together in new and interesting ways.
Global delivery and nearshoring both in high demand
2007 was the year of the nearshoring buzz as clients wanted the combined benefits of proximity and familiarity as well as the manpower boost of outsourced resources.
In 2008, nearshoring will continue to play an important role but clients will increasingly widen their scope demanding that their key vendors have strong outposts around the world that can handle global delivery. Established and well-staffed vendor locations in North America and Eastern Europe will be vital.
Security becomes paramount as relationships evolve to partnerships
In 2007, as some outsourcing vendor/client relationships evolved into innovation partnerships, tight security planning and policy execution became even more pressing.
In 2008, with greater depths of data potentially being shared through transformational outsourcing, embedded development and product engineering programmes, security practices will need to become a more natural and proactive part of any successful engagement. Security planning and procedures must span all forms – systems, data, IP, physical, and staffing – and should also include disaster recovery blueprints.
Russian Outsourcing Continues To See Record Growth
In 2007 Russia was increasingly recognised by the industry, influencers and clients as a world-class outsourcing destination, particularly in its ability to tackle high-end, complex development and business process challenges. This recognition showed in the growth of the Russian outsourcing market to over $1B in 2006-2007, as well as 40 percent year-on-year growth since 2003.
Strong continued expansion is expected into 2008 and beyond.
Luxoft, founded in 2000, is Russia’s largest provider of high-end IT outsourcing services with operations in the U.S., Canada, U.K., Ukraine and the world’s largest delivery capabilities in Russia and CIS.
Luxoft works with global enterprises and independent software vendors (ISVs), enjoying long-term relationships with industry leaders such as Deutsche Bank, IBM, UBS, T-Mobile and Dell.
Luxoft's software development processes meet the highest quality standards, and the company was the first in Europe to achieve Level 5 CMMI quality certification. Luxoft runs research and development centres in Moscow, St. Petersburg, Dubna and Omsk in Russia, as well as centres in Kiev, Dnepropetrovsk, and Odessa, Ukraine, and Vancouver, Canada.
Luxoft is the recipient of the 2007 Frost & Sullivan Global Outsourcing Growth Excellence & Customer Value Leadership Award; its long-term client Deutsche Bank recently won an Applied Innovation Award from the IAOP, Wipro, the ITAA and Forbes for a CRM system jointly developed with Luxoft.
SNC-Lavalin ProFac Inc. has been awarded a contract by Standard Life to provide facility management services for its investment real estate portfolio. The five-year contract begins on January 1, 2008, with options for renewal.
SNC-Lavalin ProFac was selected by Standard Life to improve services, optimise building operations and deliver cost-savings for its 70 facilities in the Life Account real estate portfolio.
"Standard Life is a great partner, and we are delighted to be working with them on this mandate," said Charlie Rate, President of SNC-Lavalin ProFac. "We are confident that we will deliver the level of service expected by both Standard Life and its tenants."
The new contract calls for SNC-Lavalin ProFac to take over the operations and maintenance of approximately eight million square feet (about 800,000 m2) of real estate.
"Following our RFP process, SNC-Lavalin ProFac was the obvious partner of choice for us," said Gary Aggett, Vice-President, Real Estate Group for Standard Life. "Their systems, processes and procedures provided us with the peace-of-mind necessary to smoothly transition the servicing of our buildings over to them."
SNC-Lavalin (TSX: SNC) is one of the leading engineering and construction groups in the world and a major player in the ownership of infrastructure, and in the provision of operations and maintenance services. The SNC-Lavalin companies have offices across Canada and in 34 other countries around the world and are currently working in some 100 countries.
The issue of data encryption has been brought into sharp focus recently with the HMRC data loss fiasco. Since that catastrophic incident, in which the records of 25 million Britons were lost in the post, the Information Commissioner, Richard Thomas, has come forward to warn that several other public bodies have now stepped forward and admitted that they too have lost personal data. It has also become apparent that this is not the first HMRC data loss – there have been seven breaches of data security since 2005. With critical and confidential data of UK citizens floating about everywhere from post boxes to rubbish dumps, the encryption of data is taking on a more and more central role. Data encryption will not prevent these physical losses, but will, of course, mean that this data is not accessible.
A key problem within the public sector is that of awareness – the government admitted that civil servants ignored, or possibly did not know, their own security policies and procedures when copying database information to disk and sending it unencrypted in the post. A recent survey showed that almost 90% of public sector IT managers said staff would open unknown e-mails, and 75% said staff connect private USB devices to their work PCs. This is a far worse problem than in the private sector. Getting public sector IT managers to understand the issues associated with data encryption is the first step towards solving the problem.
One of the most important questions that needs to be asked is whether public sector organisations are obliged to use data encryption technology. The answer is no – there is no explicit obligation under the Data Protection Act (DPA) to use encryption, although the DPA does state that ‘appropriate technical and organisational measures’ should be taken to ensure data is kept secure, which could be taken as referring to encryption. However, it is widely recognised that data encryption helps to secure electronic data and safeguard privacy, so it is surprising that it has not been more widely adopted in the public sector.
A recent survey of UK businesses carried out by the Department of Trade and Industry reported that 30% of those surveyed who use online transactions do not encrypt them. The Information Commissioner’s Office expects an organisation’s security policy and practices to reflect the technology that is available. Therefore, as encryption technology becomes more widely available, more organisations should start adopting it.
But how can the public sector better safeguard itself against another HMRC disaster? The simple solution would be not to copy huge reams of information to a disk at all, but to transfer them directly to the receiver in encrypted form over either internal or external networks. The sender would have to use software that encrypts sensitive data at source using strong algorithms, while tightly controlling and monitoring the way people access the database. A common problem that arises when data encryption is on the agenda is that of who has access to the data. The whole point of encrypting the data is to ensure that data thieves, and those without permission to access the data, are unable to use it. Therefore authorised personnel need to be given passwords of suitable complexity: memorable, but not easily cracked.
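To make the encrypt-at-source idea concrete, here is a minimal sketch of the workflow in Python. It is a toy illustration built only from the standard library (PBKDF2 key derivation plus a SHA-256 counter-mode keystream); it is not production-grade cryptography, and a real deployment would use a vetted algorithm such as AES from a maintained library.

```python
import hashlib
import os

def derive_key(passphrase: str, salt: bytes) -> bytes:
    # Derive a 32-byte key from a passphrase using PBKDF2 (Python stdlib).
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy stream cipher: SHA-256 over (key || nonce || counter) blocks.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(passphrase: str, plaintext: bytes) -> bytes:
    # Encrypt at source, before the data leaves the sender's machine.
    salt, nonce = os.urandom(16), os.urandom(16)
    key = derive_key(passphrase, salt)
    cipher = bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))
    return salt + nonce + cipher  # prepend the parameters needed to decrypt

def decrypt(passphrase: str, blob: bytes) -> bytes:
    salt, nonce, cipher = blob[:16], blob[16:32], blob[32:]
    key = derive_key(passphrase, salt)
    return bytes(a ^ b for a, b in zip(cipher, keystream(key, nonce, len(cipher))))
```

With something like this in place, a record copied to disk or sent to another department is unreadable without the passphrase: losing the physical medium exposes only ciphertext.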
This is magnified when the complexities of a shared service centre come into play. Shared services are about the consolidation of a set of services common to multiple business units, such as HR or finance and accountancy. The shared services approach is increasingly being used in the public sector to maximise efficiency. When data is coming in from a number of different sources to a single data processor (the supplier), the encryption technology must meet these added requirements. The contract drawn up with the supplier for multiple end users must accommodate the added complexity of encrypting data from a number of different sources, and the complications arising from the different levels of encryption needed within a single centre.
Another occasion in which data encryption is vital is within an outsourcing arrangement. When outsourcing, the public sector body must choose a supplier that can provide sufficient guarantees with respect to the technical and organisational security measures governing the processing of data. The public sector must also take reasonable steps to ensure compliance with those security measures, including undertaking regular audits and reviews.
As HMRC discovered to their cost, data encryption is a vital part of any security system. Using data encryption is a necessity for public sector organisations, but they need to include this within a rounded, holistic security policy including both data and physical security.
As the impact of commercial and industrial activities on the environment becomes better understood, IT’s share of responsibility within the overall problem is gaining recognition. Traditional methods of generating power, through burning fossil fuels, transfer a significant carbon footprint to users of that power. The data centre is a very power-hungry industrial proposition. As a result, within the IT sector the data centre is being viewed with increasing unease. Given predicted data growth rates, and no change in their design or the method by which they are powered, the environmental consequences of data centres could force them into a corner from which they cannot escape.
Environmental concerns are driving innovation within all sectors that supply solutions to the data centre. However, focussing solely on environmental concerns ignores another, arguably more significant, driver of innovation. Power derived from fossil fuels is not only environmentally damaging but also expensive, certain to be much more expensive in the future and, in some locations, there simply isn’t enough of it to go round. Data centre managers are obliged to minimise costs, while ensuring that they provide a continuous service to their customers to retain their business. As ever, money is a driver.
It is the job of the data centre manager to sift through all the marketing chatter to discover the true innovations that will make the service they provide more cost-efficient and of a standard that will attract and retain customers. A data centre manager who sets out to make a data centre which is environmentally friendly without aligning changes to these specific responsibilities is likely to find his pet green project going nowhere due to lack of senior endorsement or, worse, his data centre out of business. Where the two areas, environment and business coincide, the manager will find a line of least resistance in implementing change.
There are two focus areas in which the acquisition of business benefit may coincide with new environmentally benign technologies and practices; internal cost savings and customer demand. Changes that demonstrably improve the long term cost structure of the business or the customer’s propensity to buy are likely to receive the ‘green’ light from budget holders.
Change decisions that affect the internal ongoing cost structure of the data centre, rather than projects designed to affect customer demand, are often easier to make, as more of the variables are known and within the organisation’s control. As calculations can be made with a higher degree of certainty, credible predictions can be made for the cost implications of a particular decision. These satisfactorily accurate predictions can easily be presented to support the business case for change. There have been many recent technology innovations that can dramatically reduce a data centre’s cost profile while being significantly ‘greener’.
For example, server hardware is largely responsible for the burgeoning power costs within the data centre. Chip manufacturers, such as Intel, and hardware integrators, such as IBM, have invested significant research and development capital to bring products to market that, while not compromising on performance, require far less power to operate.
IBM claims power savings of up to 60% from its new server technology. Not only are IBM servers up to 60% cheaper to run, they also require up to 60% less cooling (there is a direct 1:1 relationship between power usage and heat output). When one calculates the full cost of power to operate these technologies over their lifetime and contrast the results with standard or legacy technology, the savings are compelling. The power usage within a data centre is so large that power saving technology can pay for itself in a short space of time. The business case for new, more energy efficient, technology is made on a cost basis and the happy side-effect is that the data centre is also greener.
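A rough payback calculation shows why the savings are compelling. The figures below are illustrative assumptions for the sake of arithmetic, not IBM’s published data:

```python
# Illustrative payback estimate for low-energy servers (all figures assumed).
old_draw_kw = 0.5      # assumed legacy server power draw, kW
saving = 0.60          # claimed reduction in power use
cooling_ratio = 1.0    # ~1 W of cooling needed per 1 W of IT load
tariff_per_kwh = 0.10  # assumed electricity price per kWh
hours_per_year = 24 * 365

# Annual power cost of one legacy server, including its cooling load.
old_cost = old_draw_kw * (1 + cooling_ratio) * tariff_per_kwh * hours_per_year
new_cost = old_cost * (1 - saving)
annual_saving = old_cost - new_cost

premium = 1000.0       # assumed extra purchase cost of the efficient server
payback_years = premium / annual_saving
```

On these assumptions a single server saves roughly 525 units of currency a year, so the purchase premium pays for itself in under two years; multiplied across a data centre floor, the business case makes itself on cost alone.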
It is harder to make a business case for initiatives designed to increase customer demand by enhancing the environmental credentials of the data centre. Nevertheless, the effort is well worth making.
Many organisations have implemented Corporate Social Responsibility (CSR) policies that explicitly state a preference for suppliers with green credentials. Elsewhere within the same organisations there will invariably be a policy stating that the supplier should be fit for purpose. This means that a professionally run data centre with an impeccable operational record will stand a higher chance of satisfying those customers’ CSR requirements, and of becoming a preferred supplier, if it has implemented certain environmental initiatives. For this data centre, environmental policy is a matter of competitive advantage.
To avoid confusion, two types of data centre need to be defined: private and public. A private data centre is owned and operated by an organisation to service its own internal users. A public data centre is set up to provide external organisations with data outsourcing services, such as web hosting or backup facilities for disaster recovery. Though the costs of these services will be funded differently, both types of data centre have customers with broadly similar needs, and as such will respond to similar demands.
Successful demand satisfaction and creation initiatives depend on an understanding of the customer’s stated buying criteria and also their un-stated or true reasons to buy. Various arms of marketing science can be utilised to inform decisions. However, an element of entrepreneurial risk-taking in decisions will always be required. As long as the entrepreneurial factor is born of the requirement to increase or satisfy customer demand, rather than a wish for the world to be a better place, a business case is being made and resulting green initiatives stand a chance of succeeding.
Green initiatives that satisfy customer demand for environmentally aware suppliers, when fully considered over the long-term, often enhance the internal cost structure of the data-centre or cost so little that the demand they create more than compensates. For example, it is becoming easier and cheaper to select an electricity provider that offers power from renewable sources. The data centre that selects such a supplier may pay a little more for power (in fact, if low-energy servers are installed the overall power costs may fall) but they will be responsible for zero carbon emissions. It is these green initiatives, properly costed and with their effects on demand taken into account that should be implemented.
As is clear, many environmental initiatives, properly considered, may have sound business benefits. From paper recycling bins that subconsciously dissuade employees from printing documents, through light switches that turn themselves off at night to save energy, to marketing material that makes clear to your customers that you do these things so they are able to approve your service, the environment, indirectly, matters.
Ask yourself this, “If I double my expenditure on IT, will it double my profit?” If the answer is “no”, then there are alternatives to bearing the brunt of purchasing new technologies. Managed IT services are an increasingly attractive means for enterprises to achieve their IT initiatives. But how can an enterprise identify services to outsource, what can they expect from managed service providers (MSPs) and how should they evaluate the services offered?
Stepping Down to Services
Finding any supporting services in your business processes that can be placed with MSPs may seem counterintuitive. However, if you ask yourself what fundamental activities or technologies are at work in each of those processes, you can see where individual and separable services exist.
Each process that you are trying to manage will encompass several steps – each with their underlying or enabling services. Those that differentiate your product or service and add to its value proposition will be strategic to your company. All others will be supporting services - although important to completing the business process, they are not strategically important to the end value of the product. Funding an MSP for these supporting services can provide operational and financial benefit.
Other categories to consider when identifying services for an MSP are those that are highly specialised or require high levels of expertise for a short time. For many specialised IT services, utilising specialists on a short-term contract costs a fraction of developing and maintaining that expertise in-house. Likewise, many MSPs can deploy the appropriate experts for each phase of an IT initiative as needs change, keeping implementations on time and within budget.
The Right Service for Your Service
It is important to recognise that implementing and delivering a service is not a single event, but several steps that are constantly evolving. Noting what you need for successful service delivery at each stage: Plan, Deploy, Operate, Improve, and End of Life, will help you select suitable providers.
It’s important to get a full understanding of where you are starting from a resource standpoint, and of what other business services might be sharing the same resources. You need an MSP experienced not just in supplying new technology, but also in assessing the overall burden current business systems are placing on your infrastructure. They should be able to provide an assessment of the start-up costs, the level of effort required to establish the new service, and a breakdown of the time required to set up any new technologies, as well as advise you on realistic expectations of service levels and pricing. The MSP should be able to give you an accurate inventory of your current infrastructure, its connectivity and performance levels, highlighting areas of concern with a clear action plan on how to address them, and document current levels of service in order to properly set expectations.
Deploy
This phase takes the service from concept to reality. A suitable MSP will provision the hardware, software, and processes, configuring them and putting them under management. Initially, this should be done in a limited environment to validate the assumptions made in the planning phase, confirm that the infrastructure is sufficient and that an acceptable level of service can be delivered, and document that there are no adverse impacts on existing services.
Documentation is vital. It is critical that an MSP can provide real-time and historical reports of service performance, along with trending reports that show how the service will scale. An MSP that can support its contingency plans with data rather than “gut feel” is more likely to succeed even when unforeseen issues arise.
Operate
This phase is about getting value for money. A good MSP will communicate regularly so that you remain confident in the service levels being received. Many MSPs mistakenly assume that no news is good news; in fact, this is precisely when a report of bandwidth saved or network latency reduced can have you seeing them as a trusted resource rather than a monthly expense. Your customers will perceive, and relay, anecdotes of service outages or slowdowns. An MSP that can answer those anecdotal claims with factual, data-driven reports helps you maintain your credibility and validates your decision to outsource.
A range of customisable, easily configured reports is vital for smooth service delivery. A good MSP will use web-based reporting dashboards that let you pull the precise data you need, on the fly, to satisfy your business units.
The capability of MSPs to provide alerting and proactive maintenance – based on business-level impact analysis of outages or of behaviour that threatens performance thresholds – is critical to successfully sustaining your business process.
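Business-level alerting of this kind can be sketched in a few lines. This is a minimal illustration under stated assumptions – the metric name, threshold figures, and impacted groups below are hypothetical, not drawn from any real MSP tooling:

```python
def check_alert(metric, value, thresholds, impact_map):
    """Return a business-level alert when a metric approaches or breaches
    its threshold, annotated with the users/groups the service supports.
    Returns None when the metric is healthy."""
    warn, crit = thresholds[metric]
    if value >= crit:
        severity = "critical"            # threshold breached
    elif value >= warn:
        severity = "warning"             # threshold-threatening behaviour
    else:
        return None
    return {
        "metric": metric,
        "value": value,
        "severity": severity,
        "impacted": impact_map.get(metric, []),  # the business-level impact
    }

# Hypothetical figures for illustration only.
thresholds = {"latency_ms": (150, 300)}                        # (warn, crit)
impact_map = {"latency_ms": ["call-centre agents", "online ordering"]}
alert = check_alert("latency_ms", 200, thresholds, impact_map)
```

The point of the `impact_map` is the article's: an alert that names the affected business groups, rather than just a raw metric, is what turns monitoring into proactive maintenance.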
Improve
This phase is about forecasting trends to improve business processes and prevent degradation before it adversely impacts business profitability. MSPs should be able to provide historical and trending data about service operation: business-level metrics over time, in a format that is relevant for your customers.
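Trend forecasting of this sort can be as simple as fitting a straight line to historical metric samples and projecting when it crosses a capacity ceiling. A minimal sketch, assuming monthly samples and a linear trend – the bandwidth figures are invented for illustration:

```python
def linear_trend(samples):
    """Fit y = a + b*x by ordinary least squares to (x, y) samples,
    returning (intercept, slope) -- enough to project a metric forward."""
    n = len(samples)
    sx = sum(x for x, _ in samples)
    sy = sum(y for _, y in samples)
    sxx = sum(x * x for x, _ in samples)
    sxy = sum(x * y for x, y in samples)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

def periods_until(samples, limit):
    """Project when the trend line crosses `limit` (e.g. a capacity
    ceiling). Returns None if the trend is flat or improving."""
    a, b = linear_trend(samples)
    if b <= 0:
        return None
    return (limit - a) / b

# Hypothetical history: (month index, average bandwidth use in Mbit/s).
history = [(0, 40.0), (1, 44.0), (2, 48.0), (3, 52.0)]
```

With this invented history, `periods_until(history, 100.0)` projects the ceiling being reached fifteen months out – exactly the kind of early warning the article says should arrive before degradation hits profitability.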
End of Life
The End of Life phase is merely the beginning of a new and improved way of conducting business. When it is time to move on, you should be able to do so quickly, with minimal negative impact on your business. An MSP that can accurately assess the decommissioning of hardware and services – mapping their utilisation to the users and groups affected – will enable you to make better business decisions in the Planning phase of the next BPM cycle.
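The utilisation-to-groups mapping described above can be sketched as a small impact report. A hedged illustration only – the asset names and usage data are hypothetical:

```python
def decommission_impact(assets, usage):
    """For each asset slated for decommissioning, list the business groups
    still using it and flag those that are safe to retire immediately."""
    report = {}
    for asset in assets:
        groups = usage.get(asset, [])
        report[asset] = {
            "active_groups": groups,          # who would feel the switch-off
            "safe_to_retire": not groups,     # no remaining users
        }
    return report

# Hypothetical asset/usage data for illustration.
usage = {"legacy-crm-db": ["collections team"], "old-fax-gateway": []}
report = decommission_impact(["legacy-crm-db", "old-fax-gateway"], usage)
```

An asset with active groups still attached needs a migration plan before switch-off; an empty entry can be retired at once – the kind of distinction that feeds the next Planning phase.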
After quantifying your IT processes into services, you can easily identify what services will be appropriate to outsource. A range of business services and MSPs are available to cost-effectively handle non-strategic services, freeing you to concentrate on differentiating your company. The right MSP should assess new technologies, reduce operational costs and improve profitability. A knowledgeable MSP should become an extension of the IT department. If you can rely on their continued communication and analytics, then you will be well on your way to successfully achieving even the most complex IT initiatives.
Transcom Worldwide, the European Customer Relationship Management and debt collection specialist will provide Customer Relationship Management (CRM) and Collections services over the next three years for Tiscali UK
Legal firm Addleshaw Goddard’s technology and outsourcing team has today announced its role in advising Tiscali UK on the key outsourcing deal between Transcom Worldwide S.A. and Tiscali UK. Transcom, the European Customer Relationship Management and debt collection specialist, will provide Customer Relationship Management (CRM) and Collections services over the next three years for Tiscali UK, part of Tiscali S.p.A, one of Europe’s leading independent telecommunications providers.
Addleshaw Goddard, which worked with Tiscali on its acquisition of Pipex earlier this year, ensured the swift completion of the deal under demanding time constraints. Three members of Addleshaw Goddard’s team played crucial roles in completing the deal: James Dawson, Damon Rosamond-Lanzetta and Eva Wong.
Damon Rosamond-Lanzetta, managing associate at Addleshaw Goddard’s technology and outsourcing team, commented: “With around 4.2 million active users in Italy and the UK, customer relationships are absolutely central to the way that Tiscali runs its business. Getting the people and technology right to manage these relationships is vital. Ensuring that the deal was up-and-running as quickly as possible was what everyone wanted - to complete within a short and demanding timeframe is testament to the hard work that all parties have put in.”
Scott Marshall, General Counsel & Company Secretary for Tiscali's UK operations said: "Addleshaw Goddard delivered this transaction within a demanding and challenging timeframe while still maintaining their usual high standards of support and advice."
easyJet has signed a three-year deal to outsource its IT helpdesk support to utility support group Alfred McAlpine.
The main driver of the deal is to migrate IT support from business hours to a 24-hour, seven-day-a-week (24/7) operation.
The budget airline refused to divulge the value of the deal but speaking to silicon.com, IT service delivery manager Bill Codd said the deal would cost 50 per cent of the level of investment needed to bring helpdesk support up to 24/7 in-house.
The scope of the deal includes first and second-line service, providing two levels of technical expertise for easyJet's 650 PC users, and a support team based at the airline's Luton Airport HQ.
The deal builds on an existing relationship with Alfred McAlpine, which has also been contracted to manage a converged voice and data network for the HQ.
Codd said: "We have a relatively small IT department and the growth of the company has meant that providing a 24/7 service for staff had become unattainable. The only way to provide that level of service was through outsourcing."
Codd explained that the level of support from Alfred McAlpine is built into the contract to expand as the airline grows – projected at a rate of 15 per cent over the next three years.
PA Consulting Group – the leading international management, systems and technology consultancy – has recently won a major contract from the UK Ministry of Defence’s Research Acquisition Organisation to research, define, develop and de-risk advanced logistics concepts, which will have important implications for the transportation and management of materiel throughout the future defence supply chain.
Working as Prime Contractor, PA Consulting Group is leading a joint industry and academic team to assess and develop the inter-modal and asset/consignment tracking concepts and technologies required to meet current and future logistics requirements. PA and the team will also ensure that the integration of these concepts and technologies is de-risked effectively in support of future logistics-related acquisitions.
The complete programme of work is scheduled to be completed by May 2009, and its output will be used to inform decisions to progress with full development, procurement and fielding of an integrated inter-modal transport and tracking system.
Lessons learnt in Iraq and other theatres demonstrate the need to be able to transport materiel efficiently through the supply chain, while at the same time identifying and tracking materiel being shipped to and within theatre. The programme of work will identify opportunities to address these needs and to de-risk the technologies necessary to realise them.
The team, led by PA, comprises BMT Defence Services Ltd, Consillium (part of the Wincanton Group), Cranfield University, Lotus Engineering, and the Marshall Group’s Aerospace and Specialist Vehicle Divisions. It brings together the best of defence and civil logistics expertise and will draw upon recent developments in commercial logistics, including the exploitation of new technologies (RFID, GPS, etc.) and working practices. PA combines its scientific and engineering capability (based in its dedicated technology centres in Cambridge, UK, and Princeton, US) with its in-depth experience and understanding of UK Defence and logistics challenges.