[Interview] Hitachi Data Systems’ Hu Yoshida On Startups And Storage Virtualization

We managed to speak to Hubert Yoshida of Hitachi Data Systems when he was in Singapore recently to share his top ten predictions on storage virtualization in the IT industry in 2013.

Hu Yoshida is Hitachi Data Systems’ Chief Technology Officer, responsible for defining the company’s technical direction and instrumental in evangelizing the unique Hitachi approach to storage virtualization, which leverages the existing storage services within the Hitachi Universal Storage Platform® and extends them to externally attached, heterogeneous storage systems. He has served on the advisory boards of several technology companies and currently sits on the Scientific Advisory Board of the Data Storage Institute of the Government of Singapore.

We took the opportunity to ask him how storage virtualization and its trends would affect technology startups.

1. What should technology startups keep in mind when it comes to storage virtualization, especially on the public cloud?

Whether or not they’re using a cloud service, there is this explosion of copies. People aren’t managing copies, and that’s what’s driving up costs. When a user comes in and tells his IT staff that he needs 5TB of space for his application, he doesn’t tell them that the data needs to be protected and that he needs backup copies. This means IT departments need to be more aware of users’ real requirements for the data generated by the applications their staff use.
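
To make that point concrete, here is a minimal back-of-the-envelope sketch in Python. The copy counts are purely illustrative assumptions, not figures from Yoshida; the point is that a “5TB” request quietly multiplies once protection and secondary uses are included.

```python
# Rough capacity math for a storage request: the primary allocation
# is only one of several physical copies the data will occupy.
primary_tb = 5          # what the user actually asked for
replica_copies = 1      # assumed: a mirror for protection
backup_copies = 2       # assumed: retained backup generations
test_dev_copies = 1     # assumed: a clone for analytics or test/dev

total_copies = 1 + replica_copies + backup_copies + test_dev_copies
total_tb = primary_tb * total_copies
print(f"Requested: {primary_tb} TB -> actually consumed: {total_tb} TB")
# Requested: 5 TB -> actually consumed: 25 TB
```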

Look at data mining, for example: it may not be the people who generate the data who use it, but another application drawing on the same storage. Once again, IT departments usually end up under-catering for storage because they lack visibility into the number of data copies made, and because staff and the IT department don’t communicate about those requirements.

2. So unrestricted data replication can pose a danger to technology startups?

Businesses must apply discipline when they work with storage. This applies especially to businesses working on content platforms. Although most content-platform data is static, unlike a database, they need to recognize that the smaller appliances that sit on top of a storage array still consume capacity and need to be accounted for.

3. In order to manage costs, startups usually go for simple yet scalable solutions like Amazon Web Services. What happens when startups think about data migration from such services to higher-level solutions?

We definitely see growing startups thinking of migrating data from existing low-level or legacy systems to higher-level solutions. That’s where the nightmare occurs, because they probably don’t have good management tools to keep accurate records of the data they hold. That means a lot of data goes unaccounted for, which can result in under-buying or over-buying storage.
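
One practical way to avoid that unaccounted-for data is to inventory what is already sitting in the cloud before sizing the target system. Below is a hedged sketch using boto3 to total up an S3 bucket; the bucket name is hypothetical, and AWS credentials are assumed to be configured.

```python
import boto3

# Walk every object in a bucket (paginated, since list calls cap at
# 1,000 keys) and total the count and bytes, so the migration plan
# starts from measured numbers rather than guesses.
s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

total_objects = 0
total_bytes = 0
for page in paginator.paginate(Bucket="startup-data"):  # hypothetical bucket
    for obj in page.get("Contents", []):
        total_objects += 1
        total_bytes += obj["Size"]

print(f"{total_objects} objects, {total_bytes / 1024**4:.2f} TiB")
```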

4. What about data security?

A lot of places really don’t secure their data, and it’s a very real problem today. My son works for a small company that currently uses Gmail for its email. That’s a risk you can get away with at this point in time. However, the risk grows as the company expands and more applications, email accounts and data generation points are added into the mix.

All of this adds to the pool of unstructured data and raises questions about the security of that data, some of which could be sensitive.

5. It’s harder for startups because they don’t have a baseline for how their data is expected to grow, which makes data management a challenge for these companies. What do you suggest as a solution?

If it’s unstructured data, storage filers are the best way to go. With filers, you have better control and a central point at which to manage a more complex storage system, making the overall system scalable. IT owners of these solutions can also take snapshots, which do create more copies, but which give the system better control of data flows across cloud and non-cloud environments.
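
As a concrete illustration of the snapshot workflow he describes, here is a minimal sketch that scripts snapshots from Python. ZFS is used only as a generic stand-in for a filer’s snapshot facility (Hitachi filers expose their own interfaces), and the dataset name is hypothetical.

```python
import subprocess
from datetime import date

# Take a dated, point-in-time snapshot of a dataset. Snapshots are
# additional copies, but named, dated copies are ones IT can track.
snapshot = f"tank/users@daily-{date.today().isoformat()}"  # hypothetical dataset
subprocess.run(["zfs", "snapshot", snapshot], check=True)

# List all snapshots so the copy count stays visible.
subprocess.run(["zfs", "list", "-t", "snapshot"], check=True)
```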