Industry news

  • 24 Aug 2011 12:00 AM | Anonymous

In a first-of-its-kind deal, Microsoft has entered into an agreement with China's leading domestic Linux operating system provider to jointly deliver cloud services across both Microsoft and Linux platforms.

    The agreement with China Standard Software Company (CS2C), a government-owned Linux provider, was announced on Monday.

    It's the first time Microsoft has partnered to provide cross-platform cloud services in an emerging economy, said Sandy Gupta, general manager of Microsoft's Open Solutions group.

  • 24 Aug 2011 12:00 AM | Anonymous

    Cognizant, a leading provider of information technology, consulting, and business process outsourcing services, today announced it has been selected by the UK Financial Services Authority (FSA) to be a key supplier as part of its Strategic Outsourcing Framework Agreement (SOFA). The FSA is a leading independent body that regulates the financial services industry in the UK and oversees more than 29,000 firms, which contribute over 6.8% of GDP, employ more than 1.1 million people, and provide services to millions of consumers.

    SOFA is important to the FSA’s ability to deliver on its statutory objectives and more proactively intervene when a firm’s actions pose potential systemic risk. The agreement enables the FSA to develop relationships with key third-party suppliers, enhance the scope of services it outsources and ensure greater value for money.

As a key supplier, Cognizant will aim to help the FSA improve the reliability, scalability and flexibility of the IT systems and solutions that support the FSA’s market surveillance operations, supervisory analysis and risk management functions. Cognizant has been awarded five service areas – covering solutions consultancy, application development, application maintenance, testing, and web design and hosting – and will initially support a key market surveillance application used by the FSA to monitor compliance with the Markets in Financial Instruments Directive (MiFID).

    “We are pleased to be selected as a key supplier by the FSA for a comprehensive range of services,” said Tony Virdi, Vice President of Cognizant’s Banking and Financial Services Practice for the UK and Ireland. “We are committed to leveraging our strong understanding of the financial services industry and regulatory environment in the UK and consultative engagement model to help the FSA streamline processes, minimise operating costs, enhance productivity and address the need for greater efficiency, innovation and collaboration.”

  • 24 Aug 2011 12:00 AM | Anonymous

    The Communications Infrastructure Upgrade (CIU) project will enable Durham to meet the challenge of continuing to provide exceptional facilities for its UK and international students while achieving significant efficiency savings to allow reinvestment.

As part of the upgrade programme, Durham is introducing Siemens Enterprise Communications' IP telephony platform, OpenScape Voice, and migrating 1,500 voicemail accounts to its Xpressions voicemail system, which will offer greater workplace flexibility for Durham's academic and support staff. The new system comprises 6,500 voice licences, underpinned by a core university-wide dual-vendor data network. As well as providing increased capacity and greater resilience, it will offer time-saving benefits and a reduction in overall call costs through a more streamlined communications programme, replacing six legacy ISDX systems and a legacy voicemail system currently used at the university. The agreement includes a five-year maintenance contract.

    Durham has also chosen Siemens Enterprise Communications' OpenScape Contact Center application to significantly enhance its IT helpdesk function, which will help improve staff productivity and ensure user queries are answered quickly and effectively.

    "We are committed to offering staff and students the best possible facilities. As such, the university is undergoing extensive renovations – including new buildings and new technology," said Colin Hopkins, head of CIS Network Support, Durham University. "As part of this project, we opted to implement IP telephony across the entire university, to create a more efficient phone network that will ultimately support our ambitious plans to deliver outstanding service to our student base."

  • 24 Aug 2011 12:00 AM | Anonymous

    Synergy Health plc (Synergy) has chosen IFS, the global enterprise applications company, to supply and implement its financial management software across its entire business. IFS Applications will be rolled out in 102 sites across 10 countries from the UK to Malaysia, the US and France.

    Headquartered in the UK, Synergy is a world-leading, multi-national supplier of outsourced support services to the healthcare market in Europe, Asia, and the Americas.

    IFS Applications will enable Synergy to reduce overall costs and improve the efficiency of its global finance and procurement departments by providing greater financial visibility, automating reporting, enhancing integration with other applications and providing economies of scale. Synergy will also benefit from the rich functionality of IFS Applications. For example, employees will for the first time be able to compare multi-currency transactions with just one click of their mouse.

    Synergy has grown rapidly in the last decade, resulting in the company running a range of disparate financial systems. Synergy chose IFS because of the functionality it offers and the deep knowledge and expertise of its consultants. Sara Lloyd, Finance Project Manager at Synergy, comments: “In terms of product functionality, IFS was definitely top of our shortlist and we liked their approach. The IFS team demonstrated a good understanding of our business and related issues.

“We like the fact that IFS has the capability to implement the system in every country using its own consultants, ensuring consistency in standards. We also benefit from having just one simple contract for over 1,000 users at 102 sites.”

    Paul Massey, Managing Director at IFS Europe West, said: “Synergy is a rapidly growing company that requires agile, scalable, intuitive business applications that will enable it to take advantage of its size and reach. Therefore IFS Applications is an intelligent choice. IFS has a proven track record of successful implementations within large multi-national organisations, which offers peace of mind that the system will be implemented and maintained by an experienced team of IFS consultants.”

    Implementation of the system will begin in the autumn, with the first sites going live in the second quarter of 2012.

  • 24 Aug 2011 12:00 AM | Anonymous

Cloud has been described as an extraordinary step change in the way IT-based services are delivered, but fundamentally the cloud is just another form of outsourcing that will help businesses drive down costs. And it is here to stay.

In pure business terms, cloud is essentially a flexible, scalable, pay-per-use model for the way IT services are delivered and consumed, typically through short-term contracts. With its pay-as-you-go model, cloud moves many IT costs from capital expenditure to operating expenditure; its “elastic model” means available IT capability can be flexed to mirror changing business demand; and it gives consumers of IT much greater transparency over their costs.

    But to understand what that means to the business, the benefits and potential risks of migrating to cloud services need to be carefully considered. In any discussion about the cloud, the enthusiasm of evangelists is all too often tempered by the inevitable sage words of security-conscious CIOs. This isn’t surprising. After all, we are talking about moving confidential data away from physically being under the lock and key of a datacentre that the company owns and maintains – to an outsourced third party, possibly to an unknown location on the other side of the world. Three security areas in particular merit a closer look.

    Data Location

The location of the data centre is the first consideration for any CIO thinking about moving to the cloud, and the questions that ensue about what actually happens to your data are plentiful: where exactly is it kept? What happens when you end the contract? What happens if you terminate the contract early, or are in dispute with the outsourcer – is it still your data? Can you get access to it quickly? Is the data subject to the laws of that particular geographic location?

    Regulatory Compliance

This latter question of geographic location pivots on the differing regulatory compliance laws that will impact your data. These demands vary from country to country; for example, here in the UK, we are subject to both the Data Protection Act and the Freedom of Information Act. In the USA, the Patriot Act means that the US government can access data held by an American-owned company anywhere outside of North America. This means that should your sourcing partner be an American firm, with your data located in, say, the Philippines, the US government can still access your data. And should US officers search another company’s data that is hosted on the same shared infrastructure as your own, your data could also be accessed and impacted.

    Recovery

A third concern when outsourcing your IT services is the backup and recovery guarantees in place for your data. What happens in the event of a natural disaster? How safe is the infrastructure of the premises? Or indeed the political or economic stability of the country? What guarantees are in place to ensure recovery is swift and complete? In the event of loss, can your data be recovered and separated from everybody else’s?

    The Right Approach

    These are all valid concerns. However, provided the right approach is adopted, your organisation can be confident that embracing the cloud will yield tangible gains.

    Critical to the success of any cloud deployment is to understand that there is no one-size-fits-all solution. Indeed, the answer to these questions will depend on the nature of the cloud delivery model – be it public, private or a hybrid of both.

    The first step is to decide what data can be migrated to the cloud and stored externally within a community or public cloud, and what data should be retained within a private or trusted cloud environment. The key is knowing what data you are allowing into the cloud and which type of cloud is suitable for that data.

    Once this data split has been undertaken, the next step is for organisations to do the due diligence on the proposed cloud provider. Customers must understand that the cloud demands a more – not less – rigorous process to ascertain the right model and sourcing partner for their business needs. This means ensuring the right service level agreements are in place to address the above issues.

    The Right Partner

Finally, deciding on the sourcing partner will depend on a number of factors. The Cloud Security Alliance’s standards are a good place to start for anyone wanting guidance on best practice relating to the cloud.

    Cloud deployment works best when it is tailored according to the customer’s needs. And the above concerns can be met in a multitude of ways.

In the case of data location, CIOs should be able to choose between an in-country solution, an offshore location, or a mixture of both, according to the data split decided upon. The case for knowing exactly where your critical data resides, and the conditions for accessing it, is abundantly clear. Non-essential data (e.g. website content) can be more economically placed in a public or community cloud.

Concerns regarding recovery can be allayed via a private cloud solution. In Fujitsu’s case, data is secured in more than one datacentre, meaning recovery is virtually instantaneous. The private cloud ensures that the service is dedicated to your organisation, which brings its own benefits: for example, our customers can request that their data is stored either with dedicated physical security or in specific virtual arrays with no other shared access. The physical disks can be returned to the customer should that need arise; in the virtual scenario, the segregated array will be overwritten and returned to the pool for reallocation.

    Regulatory compliance can best be addressed by having an end-to-end solution. Because of the proliferation of companies offering various “as a service” offerings, the complexity that ensues can result in a situation whereby different data could be subject to different national laws. An outsourcing partner that provides such a service is better placed to ensure the security of your data.

A final word: security will always be a concern for CIOs. The issues identified above are far from insurmountable, nor are the remedies prohibitively expensive, and as the cloud matures, so do the solutions that ensure companies can outsource effectively, with the clear cost benefits that will surely ensue.

  • 24 Aug 2011 12:00 AM | Anonymous

    Five factors that determine the success or failure of a globally delivered applications project

Love it or hate it, for the past two decades the story of global delivery (or offshoring, as it’s often considered) has been a mixed bag: for every success story put forward, there has been a horror-story equivalent highlighted as well.

On one hand, more and more organisations are eagerly embracing the concept of global delivery for application projects in order to lower costs and increase efficiency and productivity. On the other, communication issues, management overhead and cultural incompatibility are forcing companies to re-think their global strategy. Even within organisations, some divisions will report successful project deliveries using global teams while others fail to take advantage of the same model.

Application projects are no longer delivered by IT professionals all huddled up in a single office. Today, application projects and related activities make up the bulk of IT business delivered by global teams. The breakdown of global delivery activity makes for interesting reading:

• By sector: While the private sector has led the way in embracing the global delivery model, with over 30% already doing some form of offshoring, the public sector has stayed away from it, with less than 1% usage, predominantly citing security and social reasons

• By geography: While the USA and UK account for over 70% of global delivery business and have been at the forefront of using offshore and nearshore services, mainland Europe and Japan have been slow to adopt the same and account for less than 25%

    • By domain: The largest vertical sectors using global delivery are financial services (32%), manufacturing (20%), telecom (12%) and energy (11%). However, retail (5%), health care (5%), media (3%) and supply chain are still warming up to the idea

But whether or not application projects are delivered successfully on a global scale depends on a number of key factors. Below are the five areas organisations need to consider if they want to implement successful application projects globally:

    1. Organisational maturity

Research [4] indicates that many global projects are perceived to have failed because the benefits expected are far higher than the delivery maturity of the organisation, so it is vital that the right expectations are set with all stakeholders. Organisational maturity is a key factor that defines the type, size and complexity of the work that can be delivered using the global delivery model. Maturity in this context also means the availability of infrastructure (e.g. remote development centres, WAN etc.), having standard global project development practices, having staff with experience of the model and, of course, commitment from management to deal with issues. All of this will have a significant bearing on what can be delivered globally and what potential benefits can be gained.

    2. The right project

Taking a blanket approach is a recipe for failure. Technically, all projects can be considered for global delivery but, practically, not all lend themselves to offshore or nearshore development. Practical considerations likely to prevent work being carried out using global teams are often security related (regulatory compliance, data protection and confidentiality); the business case itself not stacking up, where the cost of overheads outweighs the benefits gained; or simply that the client is not comfortable with the model.

A recent report by Computer Economics [5] confirmed that 51% of the organisations that engage in offshoring use it for application development, and that the average amount of development work done offshore is 35%. With the evolution of cloud, there is a general expectation that the potential for using global resources on application projects will increase, as the required infrastructure can easily be made available to remotely located staff.

    3. The right location

Having identified a suitable project for global delivery, it is important to choose the best offshore/nearshore location to meet the project objectives. The deciding factors for selecting your location will typically include cost effectiveness, the availability of technical skills and staff numbers, the functional roles of the project, and cultural affinity, security and language requirements. The top 30 regional locations for 2010-11 as identified by Gartner are listed below:

    • Asia/Pacific: Bangladesh, China, India, Indonesia, Malaysia, the Philippines, Sri Lanka, Thailand and Vietnam.

• EMEA: Bulgaria, the Czech Republic, Egypt, Hungary, Mauritius, Morocco, Poland, Romania, Russia, Slovakia, South Africa, Turkey and Ukraine.

    • Americas: Argentina, Brazil, Chile, Colombia, Costa Rica, Mexico, Panama and Peru.

    4. The optimum blend

Perhaps the greatest challenge to successfully implementing global delivery is choosing the right blend of onshore and offshore work. It depends on a number of factors – the platform, the project size, the project type, the development methodology, and the process maturity. There is no one-size-fits-all solution here. Typically, activities such as application and product development can sustain a high offshore blend. Assessing the right blend for your company’s needs is crucial.

    5. Project Management Skills

As ever, the success of rolling out a globally delivered project lies in its management. Whether a project is delivered locally or globally, the basics of project management apply in both cases. But many project managers underestimate the skills required to build a blended team: indeed, it is one of the biggest reasons projects fail. Training project managers is therefore vital, so that they gain the expertise and skills to deliver offshore/nearshore projects. Working remotely demands a greater degree of communication and collaboration to ensure that all involved are working in tandem and are aware of all the factors that might impact their delivery.

Global delivery is not about offshoring. It’s about “doing the right things from the right places”. Global delivery is not a ‘silver bullet’ for all IT issues and needs to be applied in specific circumstances according to the criteria listed above. As ever, it comes down to experience and judgement to apply the right global delivery model to your business.


[1] NASSCOM, 2009

[2] PMP Research, 2008

[3] BCG Group, 2007 and Data Monitor, 2009

[4] Tom Philip, Erik Wende, Gerhard Schwabe – Identifying Early Warning Signs of Failures in Offshore Software Development Projects, 2010

[5] IT Outsourcing Statistics 2010/11, Computer Economics

  • 23 Aug 2011 12:00 AM | Anonymous

    Measuring employee performance – even attitude and behaviour – to keep clients on side, win new business and improve profitability

Outsourcing can bring many benefits to the client organisation, but it is a young and growing industry and, like other sectors before it, makes mistakes or takes its eye off some issues while addressing others. Indeed, there have been, and continue to be, some high-profile failures – at home and overseas – that taint the industry’s reputation.

    In some sectors, outsourcing ends in failure to reach the agreed objectives; the client moves to another outsourcer, yet meets with failure again. In these cases, expectations at the client end start on a high, the outsourcer oversells and subsequently underperforms and loses the business or suffers penalties.

    Human error, sometimes translated as inefficient, inappropriate or ill-trained management, is often the cause. What can be done to ensure risks are reduced or eliminated as close to zero as possible?

The outsourcer may have to manage the performance and, somehow, the attitude of individual team members within geographically dispersed workforces. These workforces may be dispersed throughout the UK and/or overseas. The outsourcer and all its staff may be headquartered on another continent and have only a nominal presence in the UK.

The client company itself may track the outsourced company’s overall performance, much of it reliant on the performance of individuals – some of whom may be contract and freelance staff – and teams.

    Appraisals

    Many of you will be familiar with annual staff appraisals, although the smaller your company, the less likely you are to have them or to follow formal procedures. Formal procedures do introduce an element of objectivity into the proceedings, thereby reassuring staff with performance problems that they are not under personal attack or the victim of office politics.

Training or mentoring should iron out these problems, but in a typical SME or large enterprise a host of other issues will be occurring that impact the performance of individuals, the teams of which they are a part, and therefore the business itself.

    Annual appraisals cannot hope to keep up with events, especially in the fast moving environment that is outsourcing. If they are paper based, the information in the appraisals has to be scanned or typed into a computer for analysis or, in the worst case, just looked at in order to gain a very basic understanding of what is going on with an individual.

    Breakthrough

    There has been a breakthrough with appraisals and performance management in general, and this has come with an online approach, which can cater for most, if not all, demands a company makes of its employees and contract or freelance workers.

There are three core benefits of the online approach: (i) employee appraisals and performance measures can happen quickly and easily at any time, dispensing with cumbersome [and often expensive] annual or six-monthly appraisal procedures; (ii) results are rapidly obtained and analysed automatically, allowing for a quick response where required; (iii) it doesn’t matter where employees [and contract staff] are physically located – they could be next door and/or 12,000 miles away. They can still be measured and their performance improved.

Looking at (i) and (ii) in a little more detail: in outsourcing especially, individuals or teams may have to meet objectives or targets at short notice and on different projects. Online performance management allows a target to be set by a manager in control of the performance measurement tool, and individual and team reaction to the target to be analysed. That way, the manager can see at a glance who [or which team] is having difficulty in reaching the target – and why.

    In some cases, measurement may have to be ongoing in order to turn difficult situations around or meet a new technical or business target. Automated analysis of development plans for all staff, or just those requiring improvement, can help when combined with an analysis of their competencies and the provision of training, whether the need for improvement is acute or less urgent.

    A side, but important, issue is resistance by an individual. The online tool can provide an audit trail that can be used to support, under employment law, the laying off of that individual. Another issue of importance is compliance; the online approach is a cost effective way to enforce standards compliance.

Addressing item (iii), this is a key one for the industry. There was a time when managing [and measuring the performance of] geographically dispersed individuals or teams was laborious and therefore almost self-defeating, due to the amount of time and cost involved.

    It is not only performance and meeting targets [technical or business] that can be measured. Behaviour and attitude, including the ability to work well with other people, can also be measured and managed. Getting staff to pull in the same direction is not necessarily a subsidiary issue; it may be the only thing required, the sole key to improving business performance.

Web-enabling the different, but allied, processes outlined above within an online tool now makes such tools highly viable and attractive to the outsourcer, adding as they do to the proof of target attainment that the client and/or outsourcer – or both parties – may demand.

    Summary

Keeping track of employee performance, attitude and behaviour has implications for the business, and almost overwhelming attractions. The implications include not only much greater and more fluid business agility, but also the need to support that agility with training, coaching or mentoring. The attractions include improved performance and profitability, and an increased likelihood of retaining or winning business.

  • 23 Aug 2011 12:00 AM | Anonymous

    Debunking the Myths of Innovation

    Myth 4: Innovation is expensive

    Reality 4: While emerging technology and drug research are expensive, most innovations require a modest disciplined investment of time and brain power

    About 15 years ago, the economist Paul Krugman compared research trends in economics to the evolution of map-making in Africa:

“The coastline … was first explored, then mapped with growing accuracy, and by the eighteenth century that coastline was shown in a manner essentially indistinguishable from that of modern maps… On the other hand, the interior had emptied out.

    "The weird mythical creatures were gone, but so were the real cities and rivers. In a way, the Europeans had become more ignorant about Africa than they had been before… Improvement in the art of mapmaking raised the standard for what was considered valid data. Second-hand reports of the form “six days south of the desert you encounter a vast flowing river from east to west” were no longer something you would use to draw your map. Only features of the landscape that had been visited by reliable informants equipped with sextants and compass now qualified as valid data…”

The same thing has happened with “market research” and “trend forecasting”. We are so reliant on, and only accept as valid, information from “reliable informants” (the analysts) with sextants and compass (surveys of CIOs) that we have lost sight of the more general (and more than good enough) insights that come from immersion in and observation of what is going on. This means much, if not most, of the (less precise and perhaps less accurate) information that was available to old-style explorers is now missing from the modern forecaster’s radar.

    Existing trends are easy to identify. Analysts, the press, statistical analysis of social media and other sources can all tell you what existing trends are and the values such trends represent. The crux of innovation is to detect where, when and how these trends are evolving – where the trends will lead – and also what these signal about future trends that are not yet obvious. It can be thought of as jumping from one S curve to another. The trick is not in developing the new technology (including process technologies) but in knowing when and how to apply it through careful analysis of existing and emerging trends.

    A number of techniques can be (and are) used depending upon what one’s “animal instincts” suggest is happening in the market. Millions of years of evolution have given us one of the best pattern matching (and analogue operating) computers ever – the human brain. Too often in business, we rely only on digital and visual inputs, in other words, words and numbers/data that are one or two times removed from actual experience.

    The result – to build on Henry Ford’s statement about what customers want – is breeding faster horses rather than establishing the mass production of automobiles. To become true visionary innovators, we must provide the human brain with the greatest volume and variety of inputs (this is why we have so many senses). Then, we must allow the intellect to summarize this input into a general theme of what is happening and where the inputs are leading. At that point, multiple techniques can be used for the necessary testing of instinctive insights.

One of the major techniques is simply content analysis: what is and is not being written and talked about – not only in business and technical literature, but also in the general press, social media and fiction (science fiction is always a good leading indicator – just look at that “Star Trek” communicator strapped to your belt). Innovators should spend a significant amount of time looking beyond traditional business and technical literature. We must also be aware of what is occurring legislatively, socially and economically (it is amazing to see how often business misses basic economic activity and the resulting opportunities), as well as in other areas.

    Other techniques include:

1. Application (how are people applying existing things in unintended ways?),

2. Abstraction (is there a higher-level description of what is going on?),

3. Identification (you say “mammal,” but do you mean a duck-billed platypus, or do you mean a female astronaut with a PhD in astrophysics and an MD in cardiovascular surgery?). For example, within IT, identification is critically important for technologies such as “cloud,” “security” and “virtualization.”

4. Mimicry (looking at variations on what is being seen or sensed – “this is like that”),

    5. Symmetry (if this, then the opposite should also happen),

6. Unification / convergence (multiple themes collapsing into one or fewer). History does repeat itself, even as the world progresses. One must determine if the observation points to a retro theme (for instance, returning to shared services organizations) or a future theme (work being mobile across organizational and even corporate boundaries). We must also seek out arithmetic applications to themes: can you add to, subtract from, divide or multiply a theme to get a more desirable state (including economic, social, demographic, legal, regulatory, business, technological and other states)?

    Ways of doing all this include:

    1. Brainstorming (immersion with customers, prospects, sales forces, academics, etc.),

    2. Storytelling and its resonance with market elements;

    3. Shadowing, or “skulking” on social media sites to observe what people / organizations are really doing versus what they say they are doing (and yes, you can still ask them what they are doing),

4. Applying human factors and design principles to existing and evolving usage patterns, to see the paths of least resistance along which trends will evolve, prototypes to test with, and just good old-fashioned creativity processes (like a whack on the side of the head, courtesy of Roger von Oech!).

5. The real key is to be able to build the framework – an idea ecosystem of themes and scenarios – which we discussed in Myth 3. Such a framework enables quick validation of innovative ideas when they occur. This is necessary so that truly “new ideas that are forward thinking, feasible, viable and valuable” are implemented rather than getting lost.

    I would question the assumption that innovation needs even more investment. One of the unintended consequences of the move to cloud / utility computing is that it totally disrupts the economic frictions that heretofore dictated the invention / innovation cycle of IT. With much less funding than was previously required, startups can create highly niched offerings that can be very profitable (due to reduced costs or friction in sales, distribution, support, maintenance, etc.).

    As more leveraged infrastructure and services (IaaS, PaaS, SaaS) are brought into play, the cost of IT and IT investment goes down and gets reallocated from “maintenance, hygienic and housekeeping activities” to true customer value creation and competitive differentiation activities. This, in turn, requires a much more systemic process for innovation.

    Through careful consideration and analysis of trends using old-style forecasting techniques, businesses will find that innovation is not expensive. Knowing when and how to apply a new technology is vitally important and can save businesses from developing and investing in a new product that quickly becomes outdated.

  • 23 Aug 2011 12:00 AM | Anonymous

    Multi-sourcing has become a common practice for large-scale enterprises as they look to a number of partners for support on strategic projects in an effort to reduce costs and gain access to skills not readily available in-house. In theory, multi-sourcing can reduce spend and provide predictable costs, guaranteed outcomes and improved service levels for customers and employees.

    This, coupled with the ability to draft in experts to handle specialist tasks rather than adding to the already busy schedules of over-stretched staff, creates an attractive prospect for large enterprises.

    However, with multiple partners, deadlines and costs to manage, multi-sourcing isn’t always a smooth operation: quite often communication breaks down, and as a result collaboration breaks down too. Poor pre-planning, communication challenges and collaboration breakdowns can lead to a decline in service levels, drawn-out projects, increased costs and, ultimately, unfulfilled expectations.

    But all is not lost when it comes to multi-sourcing. When the lines of communication are clear and open between partners there is less scope for error. A clear project leader amongst suppliers ensures good collaboration and cohesive multi-sourcing which results in the overall achievement of a successful project.

    In today’s workplace there are plenty of channels widely available to help promote this collaborative approach. With instant messaging, wikis, video conferencing, microblogging and traditional methods like regular phone calls and meetings, the opportunity for collaboration has never been more readily available. By collaborating through these channels, multi-sourcing can be managed in real time, effectively providing a solution to communication breakdowns. That can only be a good thing for the future of cohesive multi-sourcing and overall project success.

  • 23 Aug 2011 12:00 AM | Anonymous

    As part of HP's transformation, HP has announced that its board of directors has authorized the exploration of strategic alternatives for the company's Personal Systems Group. HP will consider a broad range of options that may include, among others, a full or partial separation of PSG from HP through a spin-off or other transaction.

    HP will discontinue operations for webOS devices, specifically the TouchPad and webOS phones. The devices have not met internal milestones and financial targets. HP will continue to explore options to optimize the value of webOS software going forward.

    In addition, HP announced the terms of a recommended transaction for all of the outstanding shares of Autonomy Corporation plc for £25.50 ($42.11) per share in cash. Autonomy's software powers a full spectrum of mission-critical enterprise applications, including pan-enterprise search, customer interaction solutions, information governance, end-to-end eDiscovery, records management, archiving, business process management, web content management, web optimization, rich media management and video and audio analysis. The addition of Autonomy will accelerate HP's ability to deliver on its strategy to offer cloud-based solutions and software that best addresses the changing needs of businesses. (See accompanying press release.)

    "We're focused on improving performance across the business," said Léo Apotheker, HP president and chief executive officer. "HP is taking bold, transformative steps to position the company as a leader in the evolving information economy. Today's announced plan will allow HP to drive creation of long-term shareholder value through a focus on fewer fronts, thereby improving its ability to execute, invest in innovation and drive a higher-margin business mix."
