3.0 - 8.0 years
15 - 30 Lacs
Gurugram, Bengaluru
Hybrid
Salary: 15 to 30 LPA. Experience: 3 to 8 years. Location: Gurgaon / Bangalore (Hybrid). Notice period: Immediate to 30 days. Key Skills: GCP, Cloud, Pub/Sub, Data Engineer
Posted 2 months ago
8.0 - 13.0 years
10 - 20 Lacs
Hyderabad
Remote
Reporting capabilities – the weekly and monthly reports are currently updated and run manually. The reports appear to be generic financial reporting, covering positions, P&L, tax, etc. Job location: Remote
Posted 2 months ago
8.0 - 12.0 years
22 - 30 Lacs
Bengaluru
Work from Office
Working at Atlassian Atlassians can choose where they work - whether in an office, from home, or a combination of the two. That way, Atlassians have more control over supporting their family, personal goals, and other priorities. We can hire people in any country where we have a legal entity. Interviews and onboarding are conducted virtually, a part of being a distributed-first company. Our office is in Bengaluru, but we offer flexibility for eligible candidates to work remotely across India. Whatever your preference - working from home, an office, or in between - you can choose the place that's best for your work and your lifestyle. Your future team: To become a 100-year company, we need a world-class engineering organisation made up of empowered teams who have the tools and infrastructure to do the best work of their careers. As part of a unified R&D team, Engineering is prioritising key initiatives which support our customers in moving to cloud while simultaneously continuing to bring the most value to our customers through investments across our core product suite, such as Jira, Confluence, Trello, and Bitbucket. We're looking for people who want to write the future and who believe that we can accomplish so much more together. You will report to one of the Engineering Managers of the R&D teams. What you'll do: Regularly tackle the largest and most complex problems on the team, from technical design to launch. Deliver solutions that are used by other teams and products. Determine plans of attack on large projects. Routinely tackle complex architecture challenges, apply architectural standards, and start using them on new projects. Lead code reviews and documentation, and take on complex bug fixes, especially on high-risk problems. Set the standard for thorough, meaningful code reviews. Partner across engineering teams to take on company-wide initiatives spanning multiple projects. Transfer your depth of knowledge from your current language to excel as a Software Engineer. Mentor more junior members. Your background: Bachelor's, Master's, or PhD in Computer Science or a related technical field, or similar experience. 10+ years of experience in software development and architecture. Expert-level experience with one or more prominent languages such as Java, Kotlin, or Go is crucial. An expert in Kubernetes StatefulSets and/or databases such as PostgreSQL. Passion for collaborating with and mentoring junior members of the team. A real appetite for helping others learn and grow. Considers the customer impact when making technical decisions. Our perks & benefits: Atlassian offers a variety of perks and benefits to support you, your family and to help you engage with your local community. Our offerings include health coverage, paid volunteer days, wellness resources, and so much more. Visit our website for details.
Posted 2 months ago
2.0 - 4.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Why Join HYCU HYCU is the fastest-growing leader in the multi-cloud Data Protection SaaS industry. By bringing true SaaS-based data backup and recovery to both on-premises and cloud-native environments, the company provides unparalleled data protection, migration, disaster recovery and ransomware protection to thousands of companies worldwide. As an award-winning and recognized visionary in the industry, HYCU solutions eliminate complexity, risk and the high cost of legacy-based solutions, providing data protection simplicity to make the world safer. With an industry-leading NPS score of 91, customers experience frictionless, cost-effective data protection, anywhere, everywhere. HYCU has raised $140M in VC funding to date and is based in Boston, Mass. Learn more at www.hycu.com. Overview: As a Software Engineer at HYCU, you'll play a crucial role in building and delivering high-quality software for HYCU's next generation data protection for applications/services in the cloud. You'll be involved in all phases of software engineering, from requirements analysis to deployment, while adhering to agile software development methodologies. Collaboration is key, as you'll work closely with cross-functional teams to deliver integrated solutions that meet the evolving needs of our business. Location: Bangalore, India (Hybrid). What You'll Do: Take ownership of features/products that range from SaaS application services to building data protection SaaS across multiple hyperscalers and Software as a Service (SaaS) services. Understand and analyze data protection requirements across multiple SaaS platforms. Code and script using multiple languages: C#, .NET, Python, etc. Build and manage software, systems integrations, and developer support tools. Provide expertise regarding the integration, security, and scalability of web applications. Troubleshoot and perform support activities as necessary in a build-it, own-it environment. Collaborate with other engineers, data scientists, architects and management to deliver high quality products. What We're Looking For: Minimum of 2-4 years of software development experience and knowledge of .NET/C# (latest). Experience in developing cloud-native Software as a Service (SaaS) applications running on hyperscaler cloud or any other cloud providers. Experience with multithreading/async programming is a must. Experience with Docker container technologies is a must. Strong expertise with Continuous Integration/Continuous Deployment. Experience with the Microsoft Azure ecosystem. Attributes of a Successful HYCUer: Hungry, self-starter and strategic thinker who thinks outside the box. Takes responsibility and ownership for driving successful outcomes. You're results-driven with a winning attitude. Team player! You have excellent people and management skills to interact with staff, colleagues, cross-functional teams and third parties. Hands-on and builder's mentality with an entrepreneurial mindset and intrinsic motivation. Obsessed with being customer-focused - we know our customers and are advocates for their voice and point of view across HYCU. Intellectual curiosity, always open for continuous learning/growth mindset. Who We Are: Our Core Values: Authenticity, Grit and Empathy are at the heart of everything we do at HYCU. All of us at HYCU take ownership in shaping and contributing to our culture. We pride ourselves in developing an inclusive and diverse company that supports our employees and customers to do extraordinary things.
The following is how we approach each Core Value: Authenticity - To be authentic means to be who we are and do it well. Focus your energy on being who YOU are. Be true to yourself. Authenticity also extends to our products: understanding where we are truly the best fit for our customers and when we are not. And finally, authenticity in relationships: ensuring that we are honest and do what we say we're going to do. Grit - To win we need to want it. Every team member needs to be able to jump in and help at every turn. Whether it's staying late to help a colleague or customer or finding a better process and making sure it's communicated cross-functionally. You just have to do it and love it and never stop trying. Empathy - We need to care about each other, about our clients, about our business, and about the world around us. That might seem like a tall order, but if we don't live in a constant state of empathy, if we don't strive to truly put ourselves in another person's shoes, we cannot truly serve the market. "We are at our best when we stay true to our Core Values." ~ Simon Taylor, CEO What We Offer: Come work for one of CRN's 20 Coolest Cloud Storage Companies of the 2024 Cloud 100. At HYCU you'll have the opportunity to build your career with a Visionary B2B SaaS company from Gartner's Magic Quadrant for Enterprise Backup. HYCU provides an excellent benefits package including Medical, Dental, Vision, Life Insurance, 401K match, generous time off, and more (varies by region). We offer career development programs and an inclusive global culture. All our employees participate in our equity program.
Posted 2 months ago
5.0 - 10.0 years
7 - 11 Lacs
Hyderabad
Work from Office
Job Title: iOS Mobile Application Developer. Location: Hyderabad. Role: We are seeking a highly skilled iOS Developer to lead the development of a native iPhone application for intelligent IP camera configuration, video streaming, AI-driven object detection/classification, and real-time augmented overlays. You will work closely with our hardware and AI teams to deliver a cutting-edge mobile solution that integrates video, geolocation, and on-device AI inference. Key Responsibilities: Develop a native iOS app to connect to and configure an IP camera over factory-default Wi-Fi settings (hotspot mode). Enable RTSP streaming from the camera and display the live video feed on the iPhone. Implement video recording functionality with dynamic folder selection and clip segmentation (5-minute duration). Integrate the GPS module to geotag each video clip with the latest detected coordinates. Provide options for cloud sync to Google Drive and iCloud Drive. Deploy and optimize AI models (object detection and classification) using the iPhone's GPU/NPU (CoreML/Metal). Integrate a proprietary depth-sensing algorithm and overlay depth + object classification on the live video using alpha channels. Implement support for over-the-air app updates via App Store deployment. Provide support for downloading camera firmware SDKs, applying patches, and updating camera firmware securely through the app. Required Skills and Qualifications: 5+ years of iOS app development experience using Swift and Objective-C. Strong experience with AVFoundation, RTSP streaming, and video recording workflows. Proven expertise in CoreML, Metal, or TensorFlow Lite for iOS. Experience integrating GPS and geotagging capabilities. Solid understanding of cloud storage APIs like Google Drive and iCloud Drive. Familiarity with iOS background processing, file handling, and camera SDK integration. Strong UI/UX skills to render overlays (e.g., bounding boxes, labels, depth info) on real-time video. Experience publishing apps to the App Store and handling OTA updates. Knowledge of firmware update pipelines or SDK patching is a strong plus. Preferred Qualifications: Experience with on-device AI model inference optimization. Familiarity with OpenCV, Core Location, SwiftUI, or ARKit is a plus. Prior experience with IoT/embedded camera systems and firmware integration. Ability to write modular, testable, and scalable code.
Posted 2 months ago
12.0 - 15.0 years
55 - 60 Lacs
Ahmedabad, Chennai, Bengaluru
Work from Office
Dear Candidate, We're hiring a Cloud Systems Integrator to connect disparate systems and ensure seamless cloud-native integrations. Key Responsibilities: Integrate SaaS, legacy, and cloud systems. Build APIs, webhooks, and message queues. Ensure data consistency across platforms. Required Skills & Qualifications: Experience with REST, GraphQL, and messaging (Kafka/SQS). Proficiency in integration platforms (MuleSoft, Boomi, etc.). Cloud-first development experience. Soft Skills: Strong troubleshooting and problem-solving skills. Ability to work independently and in a team. Excellent communication and documentation skills. Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you. Srinivasa Reddy Kandi, Delivery Manager, Integra Technologies
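To make the message-queue responsibility concrete, here is a minimal sketch of an SQS-based integration in Python with boto3, one of the messaging options this posting names. The queue name, region, and event fields are hypothetical; a real integration would add retries, dead-letter queues, and idempotent consumers.

```python
import json
import boto3

# Hypothetical queue name; any SQS queue in the account would do.
sqs = boto3.client("sqs", region_name="ap-south-1")
queue_url = sqs.get_queue_url(QueueName="orders-integration")["QueueUrl"]

# Producer side: push an event from one system toward another.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=json.dumps({"order_id": 1234, "status": "CREATED"}),
)

# Consumer side: long-poll, process, then delete to acknowledge.
resp = sqs.receive_message(
    QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20
)
for msg in resp.get("Messages", []):
    event = json.loads(msg["Body"])
    print("syncing", event)  # e.g. upsert into the downstream system here
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```

The queue decouples producer and consumer, which is what gives the "data consistency across platforms" requirement a fighting chance when one side is a slow legacy system.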
Posted 2 months ago
8.0 - 11.0 years
35 - 37 Lacs
Kolkata, Ahmedabad, Bengaluru
Work from Office
Dear Candidate, We are seeking an Edge Cloud Architect to build decentralized architectures for low-latency applications. Key Responsibilities: Design hybrid edge-cloud systems. Deploy services near the user using CDNs or edge locations. Optimize performance for IoT, gaming, and media applications. Required Skills & Qualifications: Experience with Cloudflare Workers, AWS Wavelength, or Azure Edge Zones. Strong networking and latency optimization knowledge. Proficiency in lightweight, distributed architectures. Soft Skills: Strong troubleshooting and problem-solving skills. Ability to work independently and in a team. Excellent communication and documentation skills. Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you. Kandi Srinivasa Reddy, Delivery Manager, Integra Technologies
Posted 2 months ago
4.0 - 10.0 years
12 - 13 Lacs
Hyderabad
Work from Office
Tech Mahindra Ltd is looking for a GCP Data Engineer to join our dynamic team and embark on a rewarding career journey. Design, develop, and maintain data pipelines and data storage solutions using GCP. Work with cross-functional teams to identify data requirements and design data architectures. Develop and implement data integration solutions using GCP services. Develop and maintain ETL workflows using appropriate tools. Develop and implement data security and privacy measures to protect sensitive data. Collaborate with the data science and analytics teams to develop solutions that meet business requirements and technical specifications. Write and maintain technical documentation. Strong analytical and problem-solving skills. Excellent communication and collaboration skills.
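As a rough illustration of the pipeline work described above, here is a minimal Apache Beam sketch that reads CSV files from Cloud Storage and loads them into BigQuery; Beam is the SDK behind GCP's Dataflow service. The bucket, project, table, and schema are hypothetical placeholders.

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def parse_csv(line: str) -> dict:
    # Expected row shape: user_id,country,amount (hypothetical)
    user_id, country, amount = line.split(",")
    return {"user_id": user_id, "country": country, "amount": float(amount)}

options = PipelineOptions(runner="DirectRunner")  # DataflowRunner on GCP

with beam.Pipeline(options=options) as p:
    (
        p
        | "Read" >> beam.io.ReadFromText("gs://example-bucket/sales/*.csv")
        | "Parse" >> beam.Map(parse_csv)
        | "Load" >> beam.io.WriteToBigQuery(
            "example-project:analytics.sales",
            schema="user_id:STRING,country:STRING,amount:FLOAT",
            create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```

The same pipeline code runs locally with DirectRunner for testing and on Dataflow in production, which is the main appeal of Beam in a GCP stack.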
Posted 2 months ago
10.0 - 11.0 years
14 - 19 Lacs
Hyderabad
Work from Office
Lead the GCP pillar within the Data Engineering CoE, establishing technical standards, best practices, and reusable accelerators for Google Cloud Platform data implementations. This role is critical for supporting high-value client engagements, including Verizon and other GCP-focused opportunities in our pipeline. Key Responsibilities: Develop architecture patterns and implementation accelerators for GCP data platforms. Establish best practices for BigQuery, Dataflow, Dataproc, and other GCP data services. Support pre-sales activities for GCP-based opportunities, with particular focus on Verizon. Design migration pathways from legacy systems to GCP. Create technical documentation and playbooks for GCP implementations. Mentor junior team members on GCP best practices. Work with cloud-agnostic platforms (Databricks, Snowflake) in GCP environments. Build deep expertise in enterprise-scale GCP deployments. Collaborate with other pillar architects on cross-platform solutions. Represent the company's GCP capabilities in client engagements. Qualifications: 10+ years of data engineering experience with a minimum of 5+ years focused on GCP. Deep expertise in BigQuery, Dataflow, Dataproc, and Cloud Storage. Experience implementing enterprise-scale data lakes on GCP. Strong know
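One example of the kind of BigQuery best practice such a pillar lead would standardize: partitioning and clustering tables so queries scan less data. The sketch below, using the google-cloud-bigquery Python client, assumes a hypothetical project and dataset.

```python
from google.cloud import bigquery

client = bigquery.Client(project="example-project")  # hypothetical project

# Partitioning + clustering is a common BigQuery cost/performance pattern:
# queries filtering on event_date and customer_id scan far less data.
ddl = """
CREATE TABLE IF NOT EXISTS analytics.events (
  event_date DATE,
  customer_id STRING,
  payload JSON
)
PARTITION BY event_date
CLUSTER BY customer_id
OPTIONS (partition_expiration_days = 90)
"""
client.query(ddl).result()  # result() blocks until the DDL job finishes
```

Codifying table DDL like this in version-controlled accelerators is exactly the "reusable accelerator" idea the posting describes.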
Posted 2 months ago
3.0 - 5.0 years
4 - 8 Lacs
Chennai
Work from Office
Role Purpose: The purpose of the role is to resolve, maintain and manage the client's software/hardware/network based on the service requests raised by the end-user, as per the defined SLAs, ensuring client satisfaction. Do: Ensure timely response to all the tickets raised by the client end user. Solution service requests while maintaining quality parameters. Act as a custodian of the client's network/server/system/storage/platform/infrastructure and other equipment to keep track of their proper functioning and upkeep. Keep a check on the number of tickets raised (dial home/email/chat/IMS), ensuring right solutioning as per the defined resolution timeframe. Perform root cause analysis of the tickets raised and create an action plan to resolve the problem to ensure right client satisfaction. Provide acceptance and immediate resolution to high priority tickets/service requests. Install and configure software/hardware based on service requests. 100% adherence to timeliness as per the priority of each issue, to manage client expectations and ensure zero escalations. Provide application/user access as per client requirements and requests to ensure timely solutioning. Track all the tickets from acceptance to resolution stage as per the resolution time defined by the customer. Maintain timely backup of important data/logs and management resources to ensure the solution is of acceptable quality to maintain client satisfaction. Coordinate with the on-site team for complex problem resolution and ensure timely client servicing. Review the logs which chat BOTs gather and ensure all the service requests/issues are resolved in a timely manner. Deliver - Performance Parameter: 100% adherence to SLA/timelines. Measure: multiple cases of red time, zero customer escalations, client appreciation emails. Mandatory Skills: Cloud Storage. Experience: 3-5 years.
Posted 2 months ago
4.0 - 7.0 years
5 - 9 Lacs
Pune
Work from Office
The IBM Storage Protect Support (formerly Spectrum Protect or TSM) team supports complex integrated storage products end to end, including Spectrum Protect, Spectrum Protect Plus, and Copy Data Management. This position involves working remotely with our IBM customers, which include some of the world's top research, automotive, banking, health care and technology providers. Candidates must be able to assist with operating systems (AIX, Linux, Unix, Windows), SAN, network protocols, clouds and storage devices. They will work in a virtual environment with colleagues around the globe and will be exposed to many different types of technologies. Responsibilities include, but are not limited to: Provide remote troubleshooting and analysis assistance for usage and configuration questions. Review diagnostic information to assist in isolating a problem's cause (which could include assistance interpreting traces and dumps). Identify known defects and fixes to resolve problems. Develop best practice articles and support utilities to improve support quality and productivity. Respond to escalated customer calls, complaints, and queries. The job will require a flexible schedule to ensure 24x7 support operations and weekend on-call coverage, including extending/taking shifts to cover North America working hours. Required education: Bachelor's Degree. Required technical and professional expertise: At least 3-6 years working on data protection or storage software as an administrator, solution architect, or on client-server technologies. Debugging and analysis are performed via the telephone as well as electronically, so candidates must possess strong customer interaction skills and be able to clearly articulate solutions and options. Must be familiar with and able to interpret complex software problems that span multiple client and server platforms including UNIX, Linux, AIX, and Windows. Focus on storage area networks (SAN), network protocols, cloud, and storage devices is preferred. Hands-on experience with storage virtualization is a plus. Candidates must be flexible in schedule and availability; second shift and weekend scheduling will be required. Preferred technical and professional experience: Excellent communication skills, both verbal and written. At least 2-3 years of in-depth experience with Spectrum Protect (Storage Protect) or competing products in the data protection domain. Working knowledge of Red Hat, OpenShift or Ansible administration is preferred. Good networking and troubleshooting skills. Cloud certification will be an added advantage. Knowledge of object storage and cloud storage is preferred.
Posted 2 months ago
3.0 - 5.0 years
10 - 14 Lacs
Hyderabad
Work from Office
You will be responsible for the uptime, performance, and operational cost of a large-scale cloud platform or some of our SaaS-based products. You will make daily and weekly operational decisions with the goal of improving uptime while reducing costs. You will drive improvements by gaining in-depth knowledge of the products in your responsibility and applying the latest emerging trends in cloud and SaaS technologies. All your decisions will be focused on providing best-in-class service to the users of our SaaS products. Our organization relies on its central engineering workforce to develop and maintain a product portfolio of several different startups. As part of our engineering team, you'll get to work on several different products every quarter. Our product portfolio continuously grows as we incubate more startups, which means that different products are very likely to make use of different technologies, architectures & frameworks - a fun place for smart tech lovers! Candidate Requirements: 3 to 5 years of experience working in DevOps. In-depth knowledge of configuring and hosting services on Kubernetes. Hands-on experience in configuring and managing a service mesh like Istio. Experience working in production environments with AWS, cloud, Agile, CI/CD, and DevOps practices - we live in the cloud. Experience in Jenkins, Google Cloud Build, or similar. Good to have: experience using PaaS and SaaS services from AWS/Azure/GCP like BigQuery, Cloud Storage, S3, etc. Good to have: experience configuring, scaling, and monitoring database systems like PostgreSQL, MySQL, MongoDB, and so on.
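For flavor, a minimal sketch of the kind of Kubernetes-level check an uptime-focused DevOps engineer might script, using the official Python kubernetes client; it assumes a reachable cluster and a local kubeconfig.

```python
from kubernetes import client, config

# Assumes a cluster (e.g. GKE/EKS) reachable via the local kubeconfig.
config.load_kube_config()
v1 = client.CoreV1Api()

# A tiny uptime-oriented check: flag pods that are not Running/Succeeded,
# the kind of signal an operations loop might alert on.
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    phase = pod.status.phase
    if phase not in ("Running", "Succeeded"):
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {phase}")
```

In practice this logic would live in Prometheus alerts rather than ad-hoc scripts, but the API walk above is a common first diagnostic step.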
Posted 2 months ago
4.0 - 7.0 years
10 - 14 Lacs
Noida
Work from Office
Location: Noida (In-office/Hybrid; client site if required). Type: Full-Time | Immediate Joiners Preferred. Must-Have Skills: GCP (BigQuery, Dataflow, Dataproc, Cloud Storage); PySpark/Spark; distributed computing expertise; Apache Iceberg (preferred), Hudi, or Delta Lake. Role Overview: Be part of a high-impact Data Engineering team focused on building scalable, cloud-native data pipelines. You'll support and enhance EMR platforms using DevOps principles, helping deliver real-time health alerts and diagnostics for platform performance. Key Responsibilities: Provide data engineering support to EMR platforms. Design and implement cloud-native, automated data solutions. Collaborate with internal teams to deliver scalable systems. Continuously improve infrastructure reliability and observability. Technical Environment: Databases: Oracle, MySQL, MSSQL, MongoDB. Distributed Engines: Spark/PySpark, Presto, Flink/Beam. Cloud Infra: GCP (preferred), AWS (nice-to-have), Terraform. Big Data Formats: Iceberg, Hudi, Delta. Tools: SQL, Data Modeling, Palantir Foundry, Jenkins, Confluence. Bonus: stats/math tools (NumPy, PyMC3), Linux scripting. Ideal for engineers with cloud-native, real-time data platform experience, especially those who have worked with EMR and modern lakehouse stacks.
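As a rough sketch of the PySpark-on-GCP work described above: the snippet below reads a BigQuery table with the spark-bigquery connector (bundled by default on Dataproc), derives a simple per-component error rate, and writes the result back. Project, table, and column names are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("emr-health-metrics").getOrCreate()

# Hypothetical source table; the connector is preinstalled on Dataproc.
events = (
    spark.read.format("bigquery")
    .option("table", "example-project.platform.emr_events")
    .load()
)

# Simple health signal: per-component error rate over the last day.
health = (
    events.where(F.col("event_ts") >= F.date_sub(F.current_date(), 1))
    .groupBy("component")
    .agg(F.avg(F.col("is_error").cast("double")).alias("error_rate"))
)

(
    health.write.format("bigquery")
    .option("writeMethod", "direct")  # avoids needing a temporary GCS bucket
    .mode("overwrite")
    .save("example-project.platform.component_health")
)
```

A "health alerts" pipeline like the one the posting mentions would schedule this aggregation and alert when error_rate crosses a threshold.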
Posted 2 months ago
5.0 - 10.0 years
9 - 13 Lacs
Bengaluru
Work from Office
About The Role: Job Title: DevOps Engineer, AS. Location: Bangalore, India. Role Description: Deutsche Bank has set for itself ambitious goals in the areas of Sustainable Finance, ESG Risk Mitigation, and Corporate Sustainability. As climate change throws up new challenges and opportunities, the Bank has set out to invest in developing a Sustainability Technology Platform, sustainability data products and various sustainability applications which will aid the Bank's goals. As part of this initiative, we are building an exciting global team of technologists who are passionate about climate change and want to contribute to the greater good, leveraging their technology skillset in cloud/hybrid architecture. As part of this role, we are seeking a highly skilled and experienced DevOps Engineer to join our growing team. In this role, you will play a pivotal role in managing and optimizing cloud infrastructure, facilitating continuous integration and delivery, and ensuring system reliability. What we'll offer you: As part of our flexible scheme, here are just some of the benefits that you'll enjoy. Best in class leave policy. Gender neutral parental leaves. 100% reimbursement under childcare assistance benefit (gender neutral). Sponsorship for industry-relevant certifications and education. Employee Assistance Program for you and your family members. Comprehensive hospitalization insurance for you and your dependents. Accident and term life insurance. Complimentary health screening for ages 35 and above. Your key responsibilities: Create, implement, and oversee scalable, secure, and cost-efficient cloud infrastructures on Google Cloud Platform (GCP). Utilize Infrastructure as Code (IaC) methodologies with tools such as Terraform, Deployment Manager, or alternatives. Implement robust security measures to ensure data access control and compliance with regulations. Adopt security best practices, establish IAM policies, and ensure adherence to both organizational and regulatory requirements. Set up and manage Virtual Private Clouds (VPCs), subnets, firewalls, VPNs, and interconnects to facilitate secure cloud networking. Establish continuous integration and continuous deployment (CI/CD) pipelines using Jenkins, GitHub Actions, or comparable tools for automated application deployments. Implement monitoring and alerting solutions through Stackdriver (Cloud Operations), Prometheus, or other third-party applications. Evaluate and optimize cloud expenditures by utilizing committed use discounts, autoscaling features, and resource rightsizing. Manage and deploy containerized applications through Google Kubernetes Engine (GKE) and Cloud Run. Deploy and manage GCP databases like Cloud SQL and BigQuery. Your skills and experience: Minimum of 5+ years of experience in DevOps or similar roles with hands-on experience in GCP. In-depth knowledge of Google Cloud services (e.g., GCE, GKE, Cloud Functions, Cloud Run, Pub/Sub, BigQuery, Cloud Storage) and the ability to architect, deploy, and manage cloud-native applications. Proficient in using tools like Jenkins, GitLab, Terraform, Ansible, Docker, Kubernetes. Experience with Infrastructure as Code (IaC) tools like Terraform, CloudFormation, or GCP-native Deployment Manager. Solid understanding of security protocols, IAM, networking, and compliance requirements within cloud environments. Strong problem-solving skills and ability to troubleshoot cloud-based infrastructure.
Google Cloud certifications (e.g., Associate Cloud Engineer, Professional Cloud Architect, or Professional DevOps Engineer) are a plus. How we'll support you Training and development to help you excel in your career. Coaching and support from experts in your team A culture of continuous learning to aid progression. A range of flexible benefits that you can tailor to suit your needs. About us and our teams Please visit our company website for further information: https://www.db.com/company/company.htm We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
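A small illustration of the infrastructure-automation side of this role: the sketch below uses the google-cloud-storage Python client to create a bucket with uniform bucket-level access and a lifecycle rule, two of the security and cost practices the posting mentions. Project, bucket name, and region are hypothetical; in practice this would more likely be expressed in Terraform.

```python
from google.cloud import storage

client = storage.Client(project="example-project")  # hypothetical project

# Uniform bucket-level access is an IAM best practice (no per-object ACLs),
# and a lifecycle rule keeps stale objects from accumulating cost.
bucket = storage.Bucket(client, name="example-sustainability-data")
bucket.iam_configuration.uniform_bucket_level_access_enabled = True
bucket.storage_class = "STANDARD"
bucket = client.create_bucket(bucket, location="europe-west3")

bucket.add_lifecycle_delete_rule(age=90)  # delete objects older than 90 days
bucket.patch()
print(f"created gs://{bucket.name}")
```

The equivalent Terraform resource (google_storage_bucket with a lifecycle_rule block) is usually preferred so the configuration is reviewable and repeatable.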
Posted 2 months ago
3.0 - 7.0 years
12 - 16 Lacs
Chennai
Work from Office
Lead Software Engineer, Storage (C++) - Chennai, India - Exasol. Exasol accelerates insights from the world's data. Our in-memory, massively parallel processing (MPP) technology is specifically designed for analytics, enabling businesses to turn data into actionable insights. At Exasol, we are committed to pushing the boundaries of what is possible in data analytics, and we are looking for passionate individuals to join our team and help shape the future of data technology. Join our diverse, remote-first team where more than 30 languages (and counting!) are spoken, and every voice is valued. We are looking for passionate individuals who thrive on collaboration, innovation, and a shared commitment to help shape the future of data technology. We are seeking a skilled and motivated Lead Software Engineer to join our CoreDB-Storage team. You will champion our parallel storage solution, focusing on scalability and efficiency. This role involves designing advanced algorithms, collaborating with teams for integrated storage functionality, and staying updated with industry advancements. Key Responsibilities: Further development of our massively parallel storage solution for improved scalability and efficiency. Design and implement complex algorithms for effective data management, storage, and retrieval. Engage in close collaboration with other development teams to ensure integrated and peak storage-related functionality. Offer insights and innovations for our storage solution, backed by a solid foundation in parallel storage technologies. Remain updated with the latest trends and advances in storage technologies. Test, debug, and refine the codebase for maximum stability and reliability of our storage solutions. Required Qualifications: Strong proficiency in modern C++ programming with a keen eye for performance optimization. Familiarity with scripting languages such as Bash and Python. Solid understanding of Linux environments and, ideally, Linux kernel know-how. Knowledge of parallel storage technology, such as HDFS, GlusterFS, Ceph, and RAID algorithms. Familiarity with public cloud storage solutions across AWS, Azure, GCP. Proven expertise in creating efficient and distributed algorithms. An analytical mindset with a structured approach to problem-solving, coupled with high quality awareness.
A degree in Computer Science or a related field (or equivalent experience). Structured, analytical approach to problem solving. Working proficiency in English. Summary of Key Skills: Linux skills: A strong grasp of fundamental Linux concepts, including POSIX (sockets, messaging, shared memory), System V, and system calls, with an emphasis on understanding how memory, processes, and inter-process communication (IPC) work. C++ skills: Proficient in C++ for performance-oriented tasks, especially in multi-threading, multi-processing, and optimizing algorithms. Experience with Massively Parallel Processing (MPP) and SIMD is essential for optimizing parallel tasks and processing multiple data points simultaneously. Knowledge of parallel storage technologies is preferable. Knowledge of public cloud storage solutions is preferable. How We Work at Exasol: Own Your Impact: At Exasol, you are not just a cog in the machine; you will step into immediate ownership of projects, driving them forward with a refreshing level of autonomy. Thrive in a Global Team: Join a vibrant, international community where diversity is celebrated, collaboration is key, and feedback fuels growth. Learn, Grow, Lead: We are invested in your development! Continuous knowledge-sharing, "Coffee and Learn" sessions, exciting events, and dedicated leadership programs empower you to soar. Work on Your Terms: Flexibility is the name of the game! Enjoy adaptable hours, remote options, and "workcations" for the ultimate work-life balance. Growth That Goes Beyond the Office: Dive into a comprehensive onboarding experience, fun team events, and a deep commitment to diversity and sustainability. We care about your holistic well-being. Rewards that Matter: Monthly home office allowance, volunteering options, floating days, and secure pension plans (location-dependent) prove we value your contributions. Exasol is a proud, equal opportunities employer. We are committed to a diverse and inclusive working environment and therefore base all our employment selection decisions, within all aspects of our business, on experience, skill, and integrity. We strongly encourage applicants from all walks of life to apply for our positions, irrespective of age, sex, gender identity, disability, sexual orientation, race, religion, etc. About Exasol: Exasol is the world's fastest analytics database, trusted by the world's most ambitious organizations. Built for speed and flexibility, it can analyze billions of rows in seconds and run high-performance analytics securely, whether in the cloud or on-premises. Need to scale your analytics function? Simple pricing makes it easy. Want to deliver frictionless insights? Automatic self-indexing tunes performance for optimal results. And you don't have to wait - Exasol fits into any data environment, so you can get started right away.
Posted 2 months ago
3.0 - 8.0 years
9 - 14 Lacs
Chennai
Work from Office
Exasol accelerates insights from the world's data. Our in-memory, massively parallel processing (MPP) technology is specifically designed for analytics, enabling businesses to turn data into actionable insights. At Exasol, we are committed to pushing the boundaries of what is possible in data analytics, and we are looking for passionate individuals to join our team and help shape the future of data technology. Join our diverse, remote-first team where more than 30 languages (and counting!) are spoken, and every voice is valued. We are looking for passionate individuals who thrive on collaboration, innovation, and a shared commitment to help shape the future of data technology. Lead Software Engineer (CoreDB-Storage), Chennai, India (Hybrid). We are seeking a skilled and motivated Lead Software Engineer to join our CoreDB-Storage team. You will champion our parallel storage solution, focusing on scalability and efficiency. This role involves designing advanced algorithms, collaborating with teams for integrated storage functionality, and staying updated with industry advancements. Join our journey in pioneering storage solutions. Key Responsibilities: Further development of our massively parallel storage solution for improved scalability and efficiency. Design and implement complex algorithms for effective data management, storage, and retrieval. Engage in close collaboration with other development teams to ensure integrated and peak storage-related functionality. Offer insights and innovations for our storage solution, backed by a solid foundation in parallel storage technologies. Remain updated with the latest trends and advances in storage technologies. Test, debug, and refine the codebase for maximum stability and reliability of our storage solutions. Required Qualifications: Strong proficiency in modern C++ programming with a keen eye for performance optimization. Familiarity with scripting languages such as Bash and Python. Solid understanding of Linux environments and, ideally, Linux kernel know-how. Knowledge of parallel storage technology, such as HDFS, GlusterFS, Ceph, and RAID algorithms. Familiarity with public cloud storage solutions across AWS, Azure, GCP. Proven expertise in creating efficient and distributed algorithms. An analytical mindset with a structured approach to problem-solving, coupled with high quality awareness. A degree in Computer Science or a related field (or equivalent experience). Structured, analytical approach to problem solving. Working proficiency in English. Summary of Key Skills: Linux skills: A strong grasp of fundamental Linux concepts, including POSIX (sockets, messaging, shared memory), System V, and system calls, with an emphasis on understanding how memory, processes, and inter-process communication (IPC) work. C++ skills: Proficient in C++ for performance-oriented tasks, especially in multi-threading, multi-processing, and optimizing algorithms. Experience with Massively Parallel Processing (MPP) and SIMD is essential for optimizing parallel tasks and processing multiple data points simultaneously. Knowledge of parallel storage technologies is preferable. Knowledge of public cloud storage solutions is preferable. How We Work at Exasol: Own Your Impact: At Exasol, you are not just a cog in the machine; you will step into immediate ownership of projects, driving them forward with a refreshing level of autonomy.
Thrive in a Global Team: Join a vibrant, international community where diversity is celebrated, collaboration is key, and feedback fuels growth. Learn, Grow, Lead: We are invested in your development! Continuous knowledge-sharing, "Coffee and Learn" sessions, exciting events, and dedicated leadership programs empower you to soar. Work on Your Terms: Flexibility is the name of the game! Enjoy adaptable hours, remote options, and "workcations" for the ultimate work-life balance. Growth That Goes Beyond the Office: Dive into a comprehensive onboarding experience, fun team events, and a deep commitment to diversity and sustainability. We care about your holistic well-being. Rewards that Matter: Monthly home office allowance, volunteering options, floating days, and secure pension plans (location-dependent) prove we value your contributions. Our values drive our unique and inclusive culture; discover how they shape your Exasol experience. Learn more about our core values at Exasol. About Exasol: Take the next step in your career journey. Visit www.exasol.com to explore our current job openings and follow us on LinkedIn to see what it is like to work at Exasol. Exasol is a proud, equal opportunities employer. We are committed to a diverse and inclusive working environment and therefore base all our employment selection decisions, within all aspects of our business, on experience, skill, and integrity. We strongly encourage applicants from all walks of life to apply for our positions, irrespective of age, sex, gender identity, disability, sexual orientation, race, religion, etc.
Posted 2 months ago
3.0 - 8.0 years
15 - 30 Lacs
Noida, Gurugram, Delhi / NCR
Hybrid
Salary: 15 to 30 LPA. Experience: 3 to 8 years. Location: Gurgaon (Hybrid). Notice period: Immediate to 30 days. Key Skills: GCP, Cloud, Pub/Sub, Data Engineer
Posted 2 months ago
4.0 - 8.0 years
10 - 19 Lacs
Chennai
Hybrid
Greetings from Getronics! We have permanent opportunities for GCP Data Engineers in Chennai. Position Description: We are currently seeking a seasoned GCP Cloud Data Engineer with 3 to 5 years of experience in leading/implementing GCP data projects, preferably having implemented a complete data-centric model. This position is to design and deploy a data-centric architecture in GCP for a Materials Management platform, which would exchange data with multiple applications, modern and legacy, in Product Development, Manufacturing, Finance, Purchasing, N-Tier Supply Chain, and Supplier Collaboration. Design and implement data-centric solutions on Google Cloud Platform (GCP) using various GCP tools like Storage Transfer Service, Cloud Data Fusion, Pub/Sub, Dataflow, Cloud compression, Cloud Scheduler, gsutil, FTP/SFTP, Dataproc, Bigtable, etc. • Build ETL pipelines to ingest data from heterogeneous sources into our system. • Develop data processing pipelines using programming languages like Java and Python to extract, transform, and load (ETL) data. • Create and maintain data models, ensuring efficient storage, retrieval, and analysis of large datasets. • Deploy and manage databases, both SQL and NoSQL, such as Bigtable, Firestore, or Cloud SQL, based on project requirements and infrastructure. Skills Required: GCP Data Engineer, Hadoop, Spark/PySpark, Google Cloud Platform services: BigQuery, Dataflow, Pub/Sub, Bigtable, Data Fusion, Dataproc, Cloud Composer, Cloud SQL, Compute Engine, Cloud Functions, and App Engine. 4+ years of professional experience in data engineering, data product development and software product launches. 3+ years of cloud data/software engineering experience building scalable, reliable, and cost-effective production batch and streaming data pipelines using: data warehouses like Google BigQuery; workflow orchestration tools like Airflow; relational database management systems like MySQL, PostgreSQL, and SQL Server; real-time data streaming platforms like Apache Kafka and GCP Pub/Sub. Education Required: Any Bachelor's degree. Candidates should be willing to take a GCP assessment (1-hour online test). LOOKING FOR IMMEDIATE TO 30 DAYS NOTICE CANDIDATES ONLY. Regards, Narmadha
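To illustrate the streaming-ingestion piece of this role, here is a minimal Apache Beam sketch of a Pub/Sub-to-BigQuery pipeline of the kind Dataflow would run. The subscription, table, and schema are hypothetical placeholders for the Materials Management data described above.

```python
import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(streaming=True)  # DataflowRunner in production

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadPubSub" >> beam.io.ReadFromPubSub(
            subscription="projects/example-project/subscriptions/materials-sub"
        )
        | "Decode" >> beam.Map(lambda raw: json.loads(raw.decode("utf-8")))
        | "WriteBQ" >> beam.io.WriteToBigQuery(
            "example-project:materials.movements",
            schema="part_no:STRING,plant:STRING,qty:INTEGER,ts:TIMESTAMP",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```

A production version would add dead-letter handling for messages that fail to parse, but the read-decode-write shape is the core of most Pub/Sub ingestion pipelines.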
Posted 2 months ago
8 - 13 years
14 - 24 Lacs
Chennai
Hybrid
Greetings from Getronics! We have permanent opportunities for GCP Data Engineers in Chennai. Hope you are doing well! This is Abirami from the Getronics Talent Acquisition team. We have multiple opportunities for Senior GCP Data Engineers for our automotive client in Chennai (Sholinganallur) location. Please find below the company profile and job description. If interested, please share your updated resume, recent professional photograph and Aadhaar proof at the earliest to abirami.rsk@getronics.com. Company: Getronics (permanent role). Client: Automobile industry. Experience Required: 8+ years in IT and minimum 4+ years in GCP data engineering. Location: Chennai (Elcot - Sholinganallur). Work Mode: Hybrid. Position Description: We are currently seeking a seasoned GCP Cloud Data Engineer with 4+ years of experience in leading/implementing GCP data projects, preferably having implemented a complete data-centric model. This position is to design and deploy a data-centric architecture in GCP for a Materials Management platform, which would exchange data with multiple applications, modern and legacy, in Product Development, Manufacturing, Finance, Purchasing, N-Tier Supply Chain, and Supplier Collaboration. Design and implement data-centric solutions on Google Cloud Platform (GCP) using various GCP tools like Storage Transfer Service, Cloud Data Fusion, Pub/Sub, Dataflow, Cloud compression, Cloud Scheduler, gsutil, FTP/SFTP, Dataproc, Bigtable, etc. • Build ETL pipelines to ingest data from heterogeneous sources into our system. • Develop data processing pipelines using programming languages like Java and Python to extract, transform, and load (ETL) data. • Create and maintain data models, ensuring efficient storage, retrieval, and analysis of large datasets. • Deploy and manage databases, both SQL and NoSQL, such as Bigtable, Firestore, or Cloud SQL, based on project requirements. • Collaborate with cross-functional teams to understand data requirements and design scalable solutions that meet business needs. • Implement security measures and data governance policies to ensure the integrity and confidentiality of data. • Optimize data workflows for performance, reliability, and cost-effectiveness on the GCP infrastructure. Skills Required: GCP Data Engineer, Hadoop, Spark/PySpark, Google Cloud Platform services: BigQuery, Dataflow, Pub/Sub, Bigtable, Data Fusion, Dataproc, Cloud Composer, Cloud SQL, Compute Engine, Cloud Functions, and App Engine. 8+ years of professional experience in data engineering, data product development and software product launches. 4+ years of cloud data engineering experience building scalable, reliable, and cost-effective production batch and streaming data pipelines using: data warehouses like Google BigQuery; workflow orchestration tools like Airflow; relational database management systems like MySQL, PostgreSQL, and SQL Server; real-time data streaming platforms like Apache Kafka and GCP Pub/Sub. Education Required: Any Bachelor's degree. LOOKING FOR IMMEDIATE TO 30 DAYS NOTICE CANDIDATES ONLY. Regards, Abirami, Getronics Recruitment team
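Complementing the pipeline sketch in the previous listing, here is a minimal producer-side example using the google-cloud-pubsub client, publishing a hypothetical materials event with an attribute that downstream subscribers could filter on. Project, topic, and field names are assumptions.

```python
import json
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
# Hypothetical project and topic names.
topic_path = publisher.topic_path("example-project", "materials-events")

def publish(event: dict):
    data = json.dumps(event).encode("utf-8")
    # Attributes let subscribers filter messages without decoding the payload.
    return publisher.publish(topic_path, data=data, source="legacy-mm")

future = publish({"part_no": "A-100", "plant": "CHN1", "qty": 25})
print("published message id:", future.result(timeout=30))
```

publish() returns a future, so producers can batch many calls and only block on the results they care about.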
Posted 2 months ago
10 - 15 years
25 - 40 Lacs
Pune
Work from Office
Introduction: We are seeking a highly skilled and experienced Google Cloud Platform (GCP) Solution Architect. As a Solution Architect, you will play a pivotal role in designing and implementing cloud-based solutions for our team using GCP. The ideal candidate will have a deep understanding of cloud architecture, a proven track record of delivering cloud-based solutions, and experience with GCP technologies. You will work closely with technical teams and clients to ensure the successful deployment and optimization of cloud solutions. Responsibilities: Lead the design and architecture of GCP-based solutions, ensuring scalability, security, performance, and cost-efficiency. Collaborate with business stakeholders, engineering teams, and clients to understand technical requirements and translate them into cloud-based solutions. Provide thought leadership and strategic guidance on cloud technologies, best practices, and industry trends. Design and implement cloud-native applications, data platforms, and microservices on GCP. Ensure cloud solutions are aligned with clients' business goals and requirements, with a focus on automation and optimization. Conduct cloud assessments, identifying areas for improvement, migration strategies, and cost-saving opportunities. Oversee and manage the implementation of GCP solutions, ensuring seamless deployment and operational success. Create detailed documentation of cloud architecture, deployment processes, and operational guidelines. Engage in pre-sales activities, including solution design, proof of concepts (PoCs), and presenting GCP solutions to clients. Ensure compliance with security and regulatory requirements in the cloud environment. Requirements: At least 2+ years of experience as a Cloud Architect or in a similar role with strong expertise in Google Cloud Platform. In-depth knowledge of GCP services, including Compute Engine, Kubernetes Engine, BigQuery, Cloud Storage, Cloud Functions, and networking. Experience with infrastructure-as-code tools such as Terraform. Strong understanding of cloud security, identity management, and compliance frameworks (e.g., GDPR, HIPAA). Hands-on experience with GCP networking, IAM, and logging/monitoring tools (Cloud Monitoring, Cloud Logging). Strong experience in designing and deploying highly available, fault-tolerant, and scalable solutions. Proficiency in programming languages like Java, Golang. Experience with containerization and orchestration technologies such as Docker, Kubernetes, and GKE (Google Kubernetes Engine). Experience in cloud cost management and optimization using GCP tools. Thanks, Pratap
Posted 2 months ago
8 - 13 years
5 - 11 Lacs
Pune
Work from Office
We are seeking a highly skilled L3 Storage Engineer specializing in NAS, SAN, and SAN switches to manage, optimize, and troubleshoot enterprise storage infrastructure. The ideal candidate will have deep expertise in storage technologies, performance tuning, automation, and disaster recovery, along with experience supporting large-scale enterprise or cloud-based storage solutions. The L3 Storage Engineer will be responsible for storage architecture design, performance optimization, troubleshooting complex issues, disaster recovery planning, and automation. This role also involves mentoring L1/L2 engineers, handling escalations, and working on next-gen storage technology implementations. Major Duties & Responsibilities: Storage Infrastructure Design & Administration: Architect, implement, and manage SAN (Storage Area Network) and NAS (Network Attached Storage) solutions. Configure, maintain, and optimize storage arrays from vendors such as NetApp, Dell EMC, Hitachi, HPE, IBM, or Pure Storage. Manage and maintain SAN switches (Brocade, Cisco MDS, or equivalent), including zoning, firmware updates, and fabric management. Ensure high availability, redundancy, and disaster recovery for storage systems. Perform storage provisioning, replication, migration, and performance tuning. Storage Performance Monitoring & Optimization: Monitor storage performance using vendor-specific and third-party monitoring tools. Analyze and optimize I/O performance, latency, and throughput. Identify and resolve bottlenecks, failures, and degradation issues. Implement storage tiering, deduplication, compression, and caching strategies to improve efficiency. Storage Automation & Infrastructure as Code (IaC): Automate storage provisioning, monitoring, and management using PowerShell, Ansible, Python, or API integrations (see the sketch after this posting). Implement Infrastructure as Code (IaC) practices for storage configurations. Develop and maintain storage orchestration scripts and automation frameworks. Disaster Recovery & Data Protection: Design and implement storage backup and disaster recovery (DR) strategies. Work closely with backup teams to ensure efficient snapshotting, replication, and archival solutions. Conduct DR drills and failover testing to validate business continuity plans. Security & Compliance: Implement storage encryption, access controls, and data security best practices. Ensure compliance with ISO 27001, NIST, SOC2, GDPR, HIPAA, and other regulatory requirements. Perform storage audits, capacity planning, and reporting. Collaboration, Documentation & Leadership: Act as an L3 escalation point for complex storage-related issues. Work closely with server, network, backup, and cloud teams to enhance storage strategies. Mentor and train L1 and L2 storage engineers.
Maintain comprehensive documentation, runbooks, and troubleshooting guides. Preferred candidate profile: Bachelor's or Master's degree in Computer Science, IT, or a related field (or equivalent experience). 7+ years of experience in enterprise storage administration, SAN/NAS solutions, and storage networking. Expertise in storage arrays from vendors such as NetApp, Dell EMC, HPE, Hitachi, IBM, Pure Storage, or Huawei. Strong knowledge of SAN switch configuration, zoning, and troubleshooting (Brocade, Cisco MDS, or equivalent). Experience with block storage, file storage, and object storage technologies. Hands-on experience with storage replication, deduplication, and data migration strategies. Proficiency in PowerShell, Python, or Ansible for automation and orchestration. Familiarity with cloud-based storage solutions (AWS, Azure, Google Cloud, or OCI). Experience in troubleshooting LUN masking, multipathing, and storage connectivity issues. Notice Period: 30/45 days only
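As referenced above, a hedged sketch of API-driven storage automation in Python. The endpoint, payload fields, and auth scheme are entirely hypothetical; real arrays (NetApp ONTAP, Pure FlashArray, Dell Unity, etc.) expose comparable but vendor-specific REST APIs, so treat this as the pattern rather than any vendor's actual interface.

```python
import requests

# Entirely hypothetical REST endpoint and payload shape.
BASE = "https://storage-array.example.internal/api/v1"
session = requests.Session()
session.headers["Authorization"] = "Bearer <token>"  # from a secrets store

def provision_lun(name: str, size_gb: int, pool: str) -> dict:
    """Create a thin-provisioned LUN via the array's (hypothetical) REST API."""
    resp = session.post(
        f"{BASE}/luns",
        json={"name": name, "size_gb": size_gb, "pool": pool, "thin": True},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

print(provision_lun("db01_data", 512, "ssd-pool-1"))
```

Wrapping vendor APIs in small functions like this is what makes the posting's "orchestration scripts and automation frameworks" maintainable and testable.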
Posted 2 months ago
3 - 5 years
4 - 8 Lacs
Gurugram
Work from Office
AHEAD builds platforms for digital business. By weaving together advances in cloud infrastructure, automation and analytics, and software delivery, we help enterprises deliver on the promise of digital transformation. At AHEAD, we prioritize creating a culture of belonging, where all perspectives and voices are represented, valued, respected, and heard. We create spaces to empower everyone to speak up, make change, and drive the culture at AHEAD. We are an equal opportunity employer, and do not discriminate based on an individual's race, national origin, color, gender, gender identity, gender expression, sexual orientation, religion, age, disability, marital status, or any other protected characteristic under applicable law, whether actual or perceived. We embrace all candidates that will contribute to the diversification and enrichment of ideas and perspectives at AHEAD. Data Engineer (internally known as a Sr. Associate Technical Consultant): AHEAD is looking for a Technical Consultant Data Engineer to work closely with our dynamic project teams (both on-site and remotely). This Data Engineer will be responsible for hands-on engineering of data platforms that support our clients' advanced analytics, data science, and other data engineering initiatives. This consultant will build and support modern data environments that reside in the public cloud or multi-cloud enterprise architectures. The Data Engineer will be responsible for working on a variety of data projects. This includes orchestrating pipelines using modern data engineering tools/architectures as well as design and integration of existing transactional processing systems. As a Data Engineer, you will implement data pipelines to enable analytics and machine learning on rich datasets. Responsibilities: A Data Engineer should be able to build, operationalize and monitor data processing systems. Create robust and automated pipelines to ingest and process structured and unstructured data from various source systems into analytical platforms, using batch and streaming mechanisms and leveraging a cloud-native toolset. Implement custom applications using tools such as Kinesis, Lambda and other cloud-native tools as required to address streaming use cases. Engineer and support data structures including but not limited to SQL and NoSQL databases. Engineer and maintain ELT processes for loading data lakes (Snowflake, Cloud Storage, Hadoop). Leverage the right tools for the right job to deliver testable, maintainable, and modern data solutions. Respond to customer/team inquiries and assist in troubleshooting and resolving challenges. Work with other scrum team members to estimate and deliver work inside of a sprint. Research data questions, identify root causes, and interact closely with business users and technical resources. Qualifications: 3+ years of professional technical experience. 3+ years of hands-on data warehousing.
3+ years of experience building highly scalable data solutions using Hadoop, Spark, Databricks, Snowflake. 2+ years of programming languages such as Python. 3+ years of experience working in cloud environments (Azure). 2 years of experience in Redshift. Strong client-facing communication and facilitation skills. Key Skills: Python, Azure Cloud, Redshift, NoSQL, Git, ETL/ELT, Spark, Hadoop, Data Warehouse, Data Lake, Data Engineering, Snowflake, SQL/RDBMS, OLAP. Why AHEAD: Through our daily work and internal groups like Moving Women AHEAD and RISE AHEAD, we value and benefit from diversity of people, ideas, experience, and everything in between. We fuel growth by stacking our office with top-notch technologies in a multi-million-dollar lab, by encouraging cross-department training and development, and by sponsoring certifications and credentials for continued learning. USA employment benefits include Medical, Dental, and Vision Insurance, 401(k), paid company holidays, paid time off, paid parental and caregiver leave, and more. See https://www.aheadbenefits.com/ for additional details. The compensation range indicated in this posting reflects the On-Target Earnings (OTE) for this role, which includes a base salary and any applicable target bonus amount. This OTE range may vary based on the candidate's relevant experience, qualifications, and geographic location.
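To ground the Kinesis/Lambda streaming responsibility mentioned above, here is a minimal Python sketch: a producer putting a record on a hypothetical stream, plus the handler shape Lambda invokes for a Kinesis trigger. Stream name, region, and event fields are assumptions.

```python
import base64
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

# Producer: push one event onto a hypothetical stream.
kinesis.put_record(
    StreamName="clickstream",
    Data=json.dumps({"user": "u123", "action": "page_view"}),
    PartitionKey="u123",  # keeps one user's events ordered within a shard
)

# Consumer: the handler shape AWS Lambda invokes for a Kinesis trigger.
def handler(event, context):
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        print("processing", payload)  # e.g. write to the analytical platform
```

The partition key choice matters: it controls shard distribution, and therefore both ordering guarantees and throughput hot spots.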
Posted 2 months ago
3 - 5 years
4 - 9 Lacs
Gurugram
Work from Office
AHEAD builds platforms for digital business. By weaving together advances in cloud infrastructure, automation and analytics, and software delivery, we help enterprises deliver on the promise of digital transformation. At AHEAD, we prioritize creating a culture of belonging, where all perspectives and voices are represented, valued, respected, and heard. We create spaces to empower everyone to speak up, make change, and drive the culture at AHEAD. We are an equal opportunity employer, and do not discriminate based on an individual's race, national origin, color, gender, gender identity, gender expression, sexual orientation, religion, age, disability, marital status, or any other protected characteristic under applicable law, whether actual or perceived. We embrace all candidates that will contribute to the diversification and enrichment of ideas and perspectives at AHEAD. Data Engineer (internally known as a Technical Consultant): AHEAD is looking for a Technical Consultant Data Engineer to work closely with our dynamic project teams (both on-site and remotely). This Data Engineer will be responsible for hands-on engineering of data platforms that support our clients' advanced analytics, data science, and other data engineering initiatives. This consultant will build and support modern data environments that reside in the public cloud or multi-cloud enterprise architectures. The Data Engineer will be responsible for working on a variety of data projects. This includes orchestrating pipelines using modern data engineering tools/architectures as well as design and integration of existing transactional processing systems. As a Data Engineer, you will implement data pipelines to enable analytics and machine learning on rich datasets. Responsibilities: A Data Engineer should be able to build, operationalize and monitor data processing systems. Create robust and automated pipelines to ingest and process structured and unstructured data from various source systems into analytical platforms, using batch and streaming mechanisms and leveraging a cloud-native toolset. Implement custom applications using tools such as Kinesis, Lambda and other cloud-native tools as required to address streaming use cases. Engineer and support data structures including but not limited to SQL and NoSQL databases. Engineer and maintain ELT processes for loading data lakes (Snowflake, Cloud Storage, Hadoop). Leverage the right tools for the right job to deliver testable, maintainable, and modern data solutions. Respond to customer/team inquiries and assist in troubleshooting and resolving challenges. Work with other scrum team members to estimate and deliver work inside of a sprint. Research data questions, identify root causes, and interact closely with business users and technical resources. Qualifications: 3+ years of professional technical experience. 3+ years of hands-on data warehousing. 3+ years of experience building highly scalable data solutions using Hadoop, Spark, Databricks, Snowflake. 2+ years of programming languages such as Python. 3+ years of experience working in cloud environments (Azure). 2 years of experience in Redshift. Strong client-facing communication and facilitation skills. Key Skills: Python, Azure Cloud, Redshift, NoSQL, Git, ETL/ELT, Spark, Hadoop, Data Warehouse, Data Lake, Data Engineering, Snowflake, SQL/RDBMS, OLAP. Why AHEAD: Through our daily work and internal groups like Moving Women AHEAD and RISE AHEAD, we value and benefit from diversity of people, ideas, experience, and everything in between.
Why AHEAD: Through our daily work and internal groups like Moving Women AHEAD and RISE AHEAD, we value and benefit from diversity of people, ideas, experience, and everything in between. We fuel growth by stocking our office with top-notch technologies in a multi-million-dollar lab, by encouraging cross-department training and development, and by sponsoring certifications and credentials for continued learning.

USA employment benefits include:
- Medical, Dental, and Vision Insurance
- 401(k)
- Paid company holidays
- Paid time off
- Paid parental and caregiver leave
- Plus more! See https://www.aheadbenefits.com/ for additional details.

The compensation range indicated in this posting reflects the On-Target Earnings (OTE) for this role, which includes a base salary and any applicable target bonus amount. This OTE range may vary based on the candidate's relevant experience, qualifications, and geographic location.
Posted 2 months ago
3 - 8 years
15 - 19 Lacs
Bengaluru
Work from Office
Project Role: Infrastructure Architect
Project Role Description: Lead the definition, design, and documentation of technical environments. Deploy solution architectures, conduct analysis of alternative architectures, create architectural standards, define processes to ensure conformance with standards, institute solution-testing criteria, define a solution's cost of ownership, and promote a clear and consistent business vision through technical architectures.
Must-have skills: NetApp Data Management & Cloud Storage Solutions
Good-to-have skills: NA
Minimum experience: 3 year(s)
Educational Qualification: 15 years of full-time education

Summary: As an Infrastructure Architect, you will lead the definition, design, and documentation of technical environments. Your role involves deploying solution architectures, analyzing alternative architectures, creating architectural standards, defining processes for conformance, instituting solution-testing criteria, determining a solution's cost of ownership, and promoting a clear business vision through technical architectures.

Roles & Responsibilities:
- Perform independently and become an SME
- Participate in and contribute to team discussions
- Contribute to providing solutions to work-related problems
- Lead the definition, design, and documentation of technical environments
- Deploy solution architectures and conduct analysis of alternative architectures
- Create architectural standards and define processes to ensure conformance with standards
- Institute solution-testing criteria and define a solution's cost of ownership
- Promote a clear and consistent business vision through technical architectures

Professional & Technical Skills:
- Must-have: Proficiency in NetApp Data Management & Cloud Storage Solutions
- Strong understanding of statistical analysis and machine learning algorithms
- Experience with data visualization tools such as Tableau or Power BI
- Hands-on experience implementing machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering algorithms
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization, to ensure data quality and integrity

Additional Information:
- The candidate should have a minimum of 3 years of experience in NetApp Data Management & Cloud Storage Solutions
- This position is based at our Bengaluru office
- 15 years of full-time education is required
Posted 2 months ago
5 - 8 years
15 - 25 Lacs
Pune
Hybrid
Role & responsibilities:
- Data Pipeline Development: Design, develop, and maintain data pipelines utilizing Google Cloud Platform (GCP) services like Dataflow, Dataproc, and Pub/Sub (a minimal sketch follows this listing)
- Data Ingestion & Transformation: Build and implement data ingestion and transformation processes using tools such as Apache Beam and Apache Spark
- Data Storage Management: Optimize and manage data storage solutions on GCP, including BigQuery, Cloud Storage, and Cloud SQL
- Security Implementation: Implement data security protocols and access controls with GCP's Identity and Access Management (IAM) and Cloud Security Command Center
- System Monitoring & Troubleshooting: Monitor and troubleshoot data pipelines and storage solutions using GCP's Stackdriver and Cloud Monitoring tools
- Generative AI Systems: Develop and maintain scalable systems for deploying and operating generative AI models, ensuring efficient use of computational resources
- Gen AI Capability Building: Build generative AI capabilities among engineers, covering areas such as knowledge engineering, prompt engineering, and platform engineering
- Knowledge Engineering: Gather and structure domain-specific knowledge so that large language models (LLMs) can use it effectively
- Prompt Engineering: Design effective prompts to guide generative AI models, ensuring relevant, accurate, and creative text output
- Collaboration: Work with data experts, analysts, and product teams to understand data requirements and deliver tailored solutions
- Automation: Automate data processing tasks using scripting languages such as Python
- Best Practices: Participate in code reviews and contribute to establishing best practices for data engineering within GCP
- Continuous Learning: Stay current with GCP service innovations and advancements across core data services (GCS, BigQuery, Cloud Storage, Dataflow, etc.)

Skills and Experience:
- Experience: 5+ years of experience in data engineering or similar roles
- Proficiency in GCP: Expertise in designing, developing, and deploying data pipelines, with strong knowledge of GCP core data services (GCS, BigQuery, Cloud Storage, Dataflow, etc.)
- Generative AI & LLMs: Hands-on experience with generative AI models and large language models (LLMs) such as GPT-4, LLAMA3, and Gemini 1.5, with the ability to integrate these models into data pipelines and processes
- Experience with web scraping
- Technical Skills: Strong proficiency in Python and SQL for data manipulation and querying; experience with distributed data processing frameworks like Apache Beam or Apache Spark is a plus
- Security Knowledge: Familiarity with data security and access control best practices
- Collaboration: Excellent communication and problem-solving skills, with a demonstrated ability to collaborate across teams
- Project Management: Ability to work independently, manage multiple projects, and meet deadlines
- Preferred Knowledge: Familiarity with Sustainable Finance, ESG Risk, CSRD, Regulatory Reporting, cloud infrastructure, and data governance best practices
- Bonus Skills: Knowledge of Terraform is a plus

Education:
- Degree: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
- Experience: 3-5 years of hands-on experience in data engineering
- Certification: Google Professional Data Engineer
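To make the Dataflow/Pub/Sub responsibility above concrete, here is a minimal Apache Beam sketch of a streaming pipeline from Pub/Sub into BigQuery. The project, topic, table, and schema names are placeholders, not part of the posting, and a production pipeline would add windowing, error handling, and dead-lettering.

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def run():
    # streaming=True selects Beam's streaming mode; on GCP this would
    # typically be executed with the DataflowRunner.
    options = PipelineOptions(streaming=True)
    with beam.Pipeline(options=options) as p:
        (
            p
            # Placeholder topic; ReadFromPubSub yields raw message bytes.
            | "ReadFromPubSub" >> beam.io.ReadFromPubSub(
                topic="projects/example-project/topics/events")
            | "ParseJson" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
            # Placeholder table and schema; rows are appended as they arrive.
            | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                "example-project:analytics.events",
                schema="user_id:STRING,event:STRING,ts:TIMESTAMP",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND)
        )


if __name__ == "__main__":
    run()
```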
Posted 2 months ago