3 - 7 years
5 - 9 Lacs
Mumbai
Work from Office
About The Role The ideal candidate must possess in-depth functional knowledge of the process area and apply it to operational scenarios to provide effective solutions. The role requires the ability to identify discrepancies and propose optimal solutions using a logical, systematic, and sequential methodology. It is vital to be open-minded towards inputs and views from team members and to effectively lead, control, and motivate groups towards company objectives. Additionally, the candidate must be self-directed, proactive, and seize every opportunity to meet internal and external customer needs and achieve customer satisfaction by effectively auditing processes, implementing best practices and process improvements, and utilizing the frameworks and tools available. Goals and thoughts must be clearly and concisely articulated and conveyed, verbally and in writing, to clients, colleagues, subordinates, and supervisors. Senior Process Manager roles and responsibilities: Understand the business model and why things are the way they are; ask relevant questions and get them clarified. Break down complex problems into small, solvable components to identify problem areas in each component. Conduct cost/benefit analysis and feasibility studies for proposed projects to aid in decision-making. Facilitate the implementation of new or improved business processes and systems. Coordinate with business stakeholders to identify gaps in data and processes and suggest process improvements. Understand and follow the project roadmap, plan data availability, and coordinate with the execution team to ensure successful execution of projects. Prescribe suitable solutions with an understanding of the limitations of toolsets and available data. Manage procurement of data from various sources and perform data audits. Fetch and analyze data from disparate sources and drive meaningful insights. Provide recommendations on the business rules for effective campaign targeting. 
Interpret analytical results and provide insights; present key findings and recommended next steps to clients. Develop tangible analytical projects; communicate project details to clients and the internal delivery team via written documents and presentations, in the form of specifications, diagrams, and data/process models. Audit deliverables, ensuring accuracy by critically examining the data and reports against requirements. Collaborate on regional/global analytic initiatives and localize inputs for country campaign practices. Actively work on audience targeting insights, optimize campaigns, and improve communications governance. Technical and Functional Skills: Must Have: BS/BA degree or equivalent professional experience required. Minimum 8-10 years of professional experience in advanced analytics for a Fortune 500-scale company or a prominent consulting organization. Experience in data extraction tools, Advanced Excel, CRM analytics, campaign marketing, and campaign analytics. Strong numerical and analytical skills. Strong in Advanced Excel (prior experience with Google Sheets is an added plus). Strong analytical and storytelling skills; ability to derive relevant insights from large reports and piles of disparate data. Comfortable working autonomously with broad guidelines. Passion for data and analytics for marketing and eagerness to learn. Excellent communication skills, both written and spoken; ability to explain complex technical concepts in plain English. Ability to manage multiple priorities and projects, aligning teams to project timelines and ensuring quality of deliverables. Work with business teams to identify business use cases and develop solutions to meet these needs using analytical approaches. Manage regular reporting and ad-hoc data extracts from other departments. Knowledge of analyzing digital campaigns and the tools/technologies of performance marketing. Experience with Google Sheets/Excel. 
Good To Have: Hands-on experience in digital marketing and/or 1:1 marketing in any channel; expert-level knowledge in database marketing and CRM. Working knowledge of data visualization tools (Tableau, QlikView, etc.). Working knowledge of analytical/statistical techniques. Experience in a Hadoop environment (Hive, Presto) is a plus. Experience in Python/R. Previous consulting experience is a definite plus.
Posted 1 month ago
4 - 9 years
6 - 11 Lacs
Chennai
Work from Office
Skill & Experience Mandatory Skills: Azure Cloud Platform; Azure services, including advanced knowledge of Azure Databricks, Hive Metastore, and Presto; Azure Kubernetes Service (AKS); Terraform; Ansible; Azure DevOps tools; GitHub. Strong experience with Azure services, including advanced knowledge of Azure Databricks, Hive Metastore, and Presto. Basic understanding of Azure Kubernetes Service (AKS) is required. Proficiency in Terraform and Ansible for infrastructure management is a plus. 4+ years of experience in cloud technologies, including at least 3 years working with Azure Databricks and data management technologies. Demonstrated experience in managing and optimizing large-scale data processing environments. Excellent problem-solving skills with the ability to optimize and manage complex cloud environments. Proficiency in using Azure DevOps for CI/CD pipelines and GitHub for version control. Microsoft Certified: Azure Solutions Architect Expert (AZ-305) or equivalent advanced certification is preferred. Additional certifications in Azure Databricks or related data technologies are a plus. Key focus areas: Azure Databricks management, Hive Metastore management, Presto operations, automation & CI/CD, optimization & performance, security & compliance, documentation & handover, knowledge sharing, stakeholder communication, and continuous improvement.
Posted 2 months ago
3 - 5 years
5 - 8 Lacs
Hyderabad
Work from Office
Overview The primary focus will be to lead development work within the Azure Data Lake environment and other related ETL technologies, with responsibility for ensuring on-time and on-budget delivery, satisfying project requirements while adhering to enterprise architecture standards. This role will also have L3 responsibilities for ETL processes. Responsibilities Delivery of key Enterprise Data Warehouse and Azure Data Lake projects within time and budget. Contribute to solution design and build to ensure scalability, performance, and reuse of data and other components. Ensure on-time and on-budget delivery that satisfies project requirements while adhering to enterprise architecture standards. Possess strong problem-solving abilities with a focus on managing to business outcomes through collaboration with multiple internal and external parties. Enthusiastic, willing, and able to learn and continuously develop skills and techniques; enjoys change and seeks continuous improvement. A clear communicator, both written and verbal, with good presentational skills; fluent and proficient in the English language. Customer focused and a team player. Qualifications Experience Bachelor's degree in Computer Science, MIS, Business Management, or a related field. 3+ years of experience in Information Technology. 1+ years of experience in Azure Data Lake. Technical Skills Proven experience in development activities in Data, BI, or Analytics projects. Solutions delivery experience - knowledge of the system development lifecycle, integration, and sustainability. Strong knowledge of Teradata architecture and SQL. Good knowledge of Azure Data Factory or Databricks. Knowledge of Presto / Denodo / Infoworks is desirable. Knowledge of data warehousing concepts and data catalog tools (Alation).
Posted 2 months ago
1 - 4 years
3 - 7 Lacs
Pune
Work from Office
What are we looking for? Searce is looking for a Data Engineer who is able to work with business leads, analysts, data scientists, and fellow engineers to build data products that empower better decision making. Someone who is passionate about the data quality of our business metrics and geared up to provide flexible solutions that can be scaled up to respond to broader business questions. What you'll do as a Data Engineer with us: Understand the business requirements and translate these to data services to solve the business and data problems. Develop and manage the transports/data pipelines (ETL/ELT jobs) and retrieve applicable datasets for specific use cases using cloud data platforms and tools. Explore new technologies and tools to design complex data modeling scenarios and transformations, and provide optimal data engineering solutions. Build data integration layers to connect with different heterogeneous sources using various approaches. Understand data and metadata to support consistency of information retrieval, combination, analysis, and reporting. Troubleshoot and monitor data pipelines to ensure high availability of the reporting layer. Collaborate with many teams - engineering and business - to build scalable and optimized data solutions. Spend significant time enhancing technical excellence via certifications, contributions to blogs, etc. What are the must-haves to join us? Is education overrated? Yes. We believe so. But there is no way to locate you otherwise. So we might look for at least a Bachelor's degree in Computer Science or in Engineering/Technology. 
1-4 years of experience building data pipelines or data ingestion for both batch/streaming data from different sources to a database, data warehouse, or data lake. Hands-on knowledge/experience with SQL/Python/Java/Scala programming. Experience with any cloud computing platform such as AWS (S3, Lambda functions, Redshift, Athena) or GCP (GCS, Cloud SQL, Spanner, Dataplex, BigLake, Dataflow, Dataproc, Pub/Sub, BigQuery). Experience/knowledge with Big Data tools (Hadoop, Hive, Sqoop, Pig, Spark, Presto). Data pipeline & orchestration tools (Oozie, Airflow, NiFi). Any streaming engine (Kafka, Storm, Spark Streaming, Pub/Sub). Any relational database or data warehousing experience. Any ETL tool experience (Informatica, Talend, Pentaho, Business Objects Data Services, Hevo) is good to have. Good communication skills, the right attitude, open-minded, flexible to learn and adapt to a fast-growing data culture, proactive in coordinating with other teams and providing quick data solutions. Experience working independently and strong analytical skills.
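A minimal sketch of the batch side of such a pipeline, in pure Python using only stdlib modules (the table, field names, and cleaning rule are illustrative assumptions, not from any posting):

```python
import csv
import io
import sqlite3

# Raw source data; in a real pipeline this would come from S3/GCS, an API, etc.
RAW = """order_id,amount,country
1,120.50,IN
2,,US
3,75.00,IN
"""

def extract(raw: str):
    """Extract: read rows from a CSV source."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows):
    """Transform: drop rows with missing amounts and cast types."""
    out = []
    for r in rows:
        if r["amount"]:
            out.append((int(r["order_id"]), float(r["amount"]), r["country"]))
    return out

def load(rows, conn):
    """Load: write cleaned rows into a warehouse-style table."""
    conn.execute("CREATE TABLE orders (order_id INT, amount REAL, country TEXT)")
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW)), conn)
total = conn.execute("SELECT SUM(amount) FROM orders WHERE country = 'IN'").fetchone()[0]
print(total)  # 195.5
```

The same extract/transform/load split scales up directly: swap the CSV string for an object-store read, the SQLite target for a warehouse, and wrap each step as a task in an orchestrator such as Airflow.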
Posted 2 months ago
1 - 2 years
3 - 4 Lacs
Mumbai
Work from Office
About the Role A Data Engineer identifies business problems and translates them into data services and engineering outcomes. You will deliver data solutions that empower better decision making and that scale to respond to broader business questions. Key responsibilities As a Data Engineer, you are a full-stack data engineer who loves solving business problems. You work with business leads, analysts, and data scientists to understand the business domain and engage with fellow engineers to build data products that empower better decision making. You are passionate about the data quality of our business metrics and about solutions that scale to respond to broader business questions. If you love to solve problems using your skills, then come join Team Searce. We have a casual and fun office environment that actively steers clear of rigid "corporate" culture, focuses on productivity and creativity, and allows you to be part of a world-class team while still being yourself. Consistently strive to acquire new skills on Cloud, DevOps, Big Data, AI and ML. Understand the business problem and translate it to data services and engineering outcomes. Explore new technologies and learn new techniques to solve business problems creatively. Think big! 
Drive the strategy for better data quality for customers. Collaborate with many teams - engineering and business - to build better data products. Preferred qualifications 1-2 years of experience, with hands-on experience in at least one programming language (Python, Java, Scala). Understanding of SQL is a must. Big data (Hadoop, Hive, Yarn, Sqoop). MPP platforms (Spark, Pig, Presto). Data-pipeline & scheduler tools (Oozie, Airflow, NiFi). Streaming engines (Kafka, Storm, Spark Streaming). Any relational database or DW experience. Any ETL tool experience. Hands-on experience in pipeline design, ETL, and application development. Good communication skills. Experience working independently and strong analytical skills. Dependable and a good team player. Desire to learn and work with new technologies. Automation in your blood.
Posted 2 months ago
7 - 12 years
30 - 40 Lacs
Bengaluru
Hybrid
Good in: Data Modelling, Python, ETL, (Big Data/ MPP), Airflow, Kafka, (Presto/ Spark), (Power BI/ Tableau) Good in: (Teradata/ AWS Redshift/ Google BigQuery/ Azure Synapse Analytics) A 79-year-old reputed MNC company Required Candidate profile Good in: Data Warehouse Design and Dimensional Modeling
Posted 2 months ago
3 - 7 years
4 - 8 Lacs
Mumbai
Work from Office
3+ years of experience in software quality assurance, with automation testing and data analytics. Automation scripting with Selenium or similar testing frameworks; data analytics; database technologies such as MySQL, Cassandra, Presto. Familiarity with CI/CD tools.
Posted 2 months ago
7 - 12 years
30 - 40 Lacs
Bengaluru
Hybrid
Proficient: Data Modelling, Python, ETL, (Big Data/ MPP), Airflow, Kafka, (Presto/ Spark), (Power BI/ Tableau) Proficient: (Teradata/ AWS Redshift/ Google BigQuery/ Azure Synapse Analytics) A 79-year-old reputed MNC company
Posted 2 months ago
8 - 11 years
30 - 35 Lacs
Hyderabad
Work from Office
• 7+ years of software development experience and Azure cloud computing. • Proficiency in Scala, Spark, Python. • Experience working in an Agile environment. • 3+ years of Data Engineering/Big Data development experience. • 5+ years in ETL/DW with Data Engineering. • 3+ years Python/Scala with Azure Databricks. • 2+ years of experience working with big data technologies (e.g., Hadoop, Spark, Presto). • Hands-on experience with a range of Azure-based big data & analytics platforms such as ADLS, ADF, Azure Data Warehouse, specifically ingestion to cloud storage/data lakes.
Posted 3 months ago
6 - 10 years
10 - 14 Lacs
Hyderabad
Work from Office
As a Software Developer at IBM you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets. In this role, your responsibilities may include: Implementing and validating predictive models as well as creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques. Designing and implementing various enterprise search applications such as Elasticsearch and Splunk for client requirements. Work in an Agile, collaborative environment, partnering with other scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours. Build teams or write programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models and evaluating modelling results. Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Strong technical abilities to understand, design, write, and debug complex code. Big Data, PySpark, Scala, Hadoop, Hive, Java, Python. Develops applications on Big Data technologies, including API development. Knowledge of relational databases; experience in troubleshooting, monitoring, and performance tuning of Spark jobs. Presto, Impala, HDFS, Linux. Good to have: knowledge of analytics libraries, open-source Natural Language Processing, statistical and big data computing libraries. 
Hands-on experience with cloud technologies (AWS/GCP). Preferred technical and professional experience You thrive on teamwork and have excellent verbal and written communication skills. Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions. Ability to communicate results to technical and non-technical audiences.
Posted 3 months ago
12 - 20 years
27 - 42 Lacs
Trivandrum, Bengaluru, Hyderabad
Work from Office
Hiring for an AWS Big Data Architect who can join immediately with one of our clients. Role: Big Data Architect / AWS Big Data Architect. Experience: 12+ years. Locations: Hyderabad, Bangalore, Gurugram, Kochi, Trivandrum. Shift Timings: overlap with UK timings (2-11 PM IST). Notice Period: Immediate joiners / serving notice within 30 days. Required Skills & Qualifications: 12+ years of experience in Big Data architecture and engineering. Strong expertise in AWS (DMS, Kinesis, Athena, Glue, Lambda, S3, EMR, Redshift, etc.). Hands-on experience with Debezium and Kafka for real-time data streaming and synchronization. Proficiency in Spark optimization for batch processing improvements. Strong SQL and Oracle query optimization experience. Expertise in Big Data frameworks (Hadoop, Spark, Hive, Presto, Athena, etc.). Experience in CI/CD automation and integrating AWS services with DevOps pipelines. Strong problem-solving skills and ability to work in an Agile environment. Preferred Skills (Good to Have): • Experience with Dremio to Athena migrations. • Exposure to cloud-native DR solutions on AWS. • Strong analytical skills to document and implement performance improvements. For more details, contact: 9000336401. Mail ID: chandana.n@kksoftwareassociates.com. For more job alerts, please follow: https://lnkd.in/gHMuPUXW
Posted 3 months ago
13 - 20 years
25 - 40 Lacs
Bengaluru, Hyderabad, Gurgaon
Work from Office
Role & responsibilities We are seeking a highly skilled Big Data Architect with deep expertise in AWS, Kafka, Debezium, and Spark. This role offers an exciting opportunity to be a critical player in optimizing query performance, data synchronization, disaster recovery (DR) solutions, and simplifying reporting workflows. The ideal candidate will have hands-on experience with a broad range of AWS-native services, big data processing frameworks, and CI/CD integrations to drive impactful system and performance enhancements. Required Skills & Qualifications: 12+ years of experience in Big Data architecture and engineering, with a proven track record of successful large-scale data solutions. Extensive expertise in AWS services such as DMS, Kinesis, Athena, Glue, Lambda, S3, EMR, Redshift, etc. Hands-on experience with Debezium and Kafka for real-time data streaming, change data capture (CDC), and ensuring seamless data synchronization across systems. Expertise in Spark optimization, particularly for batch processing improvements, including reducing job execution times and resource utilization. Strong SQL and Oracle query optimization skills, with a deep understanding of database performance tuning. Experience with Big Data frameworks like Hadoop, Spark, Hive, Presto, and Athena. Proven background in CI/CD automation and integrating AWS services with DevOps pipelines. Exceptional problem-solving abilities and the capacity to work effectively in an Agile environment. Skills Data Architecture, AWS, Spark, SQL Interested candidates, please share your updated resumes to saideep.p@kksoftwareassociates.com or contact 9390510069.
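The Debezium/Kafka change-data-capture requirement above can be pictured with a small replay sketch; the event shape here is a simplified stand-in for Debezium's actual envelope (which wraps before/after row images plus source metadata), not its real message format:

```python
# Minimal CDC replay: apply insert/update/delete events to a keyed snapshot.
# Debezium-style op codes: "c" = create, "u" = update, "d" = delete.

def apply_cdc(snapshot: dict, events: list) -> dict:
    for ev in events:
        key = ev["key"]
        if ev["op"] in ("c", "u"):   # create or update: upsert the after-image
            snapshot[key] = ev["after"]
        elif ev["op"] == "d":        # delete: remove the row if present
            snapshot.pop(key, None)
    return snapshot

events = [
    {"op": "c", "key": 1, "after": {"status": "NEW"}},
    {"op": "u", "key": 1, "after": {"status": "SHIPPED"}},
    {"op": "c", "key": 2, "after": {"status": "NEW"}},
    {"op": "d", "key": 2, "after": None},
]
state = apply_cdc({}, events)
print(state)  # {1: {'status': 'SHIPPED'}}
```

In a real deployment the events arrive on Kafka topics and the "snapshot" is a downstream table; the replay logic, however, is essentially this upsert/delete loop applied in event order.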
Posted 3 months ago
7 - 12 years
15 - 20 Lacs
Bengaluru
Work from Office
Hi Everyone, Greetings from GSPANN Technologies! We are hiring for the position of Senior Data Analyst in Bangalore. Role: Senior Data Analyst Location: Bangalore Type: Permanent Key Requirements: Bachelor's degree in Computer Science, MIS, or related fields. 6-7 years of relevant analytical experience, translating strategic vision into actionable requirements. Ability to conduct data analysis, develop and test hypotheses, and deliver insights with minimal supervision. Experience identifying and defining KPIs for business areas such as Sales, Consumer Behavior, Supply Chain, etc. Exceptional SQL skills. Experience with modern visualization tools like Tableau, Power BI, Domo, etc. Knowledge of open-source, big data, and cloud infrastructure such as AWS, Hive, Snowflake, Presto, etc. Incredible attention to detail with a structured problem-solving approach. Excellent communication skills (written and verbal). Experience with agile development methodologies. Experience in retail or e-commerce domains is a plus. How to Apply: Interested candidates can share their CV at heena.ruchwani@gspann.com.
Posted 3 months ago
4 - 6 years
6 - 8 Lacs
Hyderabad
Work from Office
Python/Spark/Scala experience; AWS experience will be an added advantage. Professional hands-on experience in Scala/Python. Around 4 to 6 years of experience with excellent coding skills in the Java programming language. Knowledge of (or hands-on experience with) big data platforms and frameworks is good to have. The candidate should have excellent code-understanding skills, wherein they should be able to read open-source code (Trino) and build optimizations or improvements on top of it. Working experience in Presto/Trino is a great advantage. Knowledge of Elastic Search and Grafana is good to have. Experience working under Agile methodology. Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Intuitive individual with an ability to manage change and proven time management. Proven interpersonal skills while contributing to team effort by accomplishing related results as needed. Up-to-date technical knowledge by attending educational workshops and reviewing publications. Preferred technical and professional experience Around 4 to 6 years of experience with excellent coding skills in the Java programming language. Knowledge of (or hands-on experience with) big data platforms and frameworks is good to have. The candidate should have excellent code-understanding skills, wherein they should be able to read open-source code (Trino) and build optimizations or improvements on top of it.
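Reading query plans is central to the Presto/Trino optimization work described above. As a rough stdlib-only illustration of the idea (SQLite's planner here, not Trino's cost-based optimizer), this shows how adding an index changes the access path a planner chooses:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INT, kind TEXT)")

def plan(sql: str) -> str:
    # EXPLAIN QUERY PLAN returns rows whose last column describes
    # the access path the planner chose for each step.
    rows = conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " ".join(r[-1] for r in rows)

query = "SELECT * FROM events WHERE user_id = 42"
before = plan(query)   # full table scan: no usable index yet
conn.execute("CREATE INDEX idx_user ON events(user_id)")
after = plan(query)    # index search once idx_user exists
print(before)
print(after)
```

Trino exposes the analogous information via its own `EXPLAIN` / `EXPLAIN ANALYZE` statements; the skill being tested is the same: read the plan, spot the scan, and change the physical design or the query so the engine can do better.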
Posted 3 months ago
5 - 7 years
7 - 9 Lacs
Bengaluru
Work from Office
We’re looking for an experienced, motivated, hands-on Data and AI engineer who brings ideas about handling large-scale enterprise applications leveraging data platforms. As a software engineer, you’ll apply your deep expertise in designing, developing, delivering, and supporting a world-class software and data platform. You will take full ownership of delivering a high-impact big data platform that is robust and scalable and supports production-grade applications and services for the supply chain space. You will leverage open-source and cloud storage tools to build and develop reusable components and architecture that enable the data science teams to provide a best-in-class AI/ML and data analysis environment. You will also help in providing technical direction and develop strategies for long-term platform growth. You need to be versatile, display leadership qualities, and be open-minded to take on new problems that our customers face. Day-to-day responsibilities include: Analyze and design reusable components of the data platform and services required to support data storage, data schema, and data orchestration. Design, develop, troubleshoot, and scale the data pipelines required to support the various analytics and AI/ML workloads. Understand application-produced artifacts; design the entire pipeline of schema definition and efficient storage and querying of various entity objects. Train, fine-tune, evaluate, and optimize AI models for specific use cases, ensuring accuracy, performance, cost-effectiveness, and scalability. Seamlessly integrate AI models and autonomous agent solutions into cloud-based products to drive smarter workflows and improved productivity. Develop reusable tools, libraries, and components that standardize and accelerate the development of AI solutions across the organization. 
Monitor and maintain deployed models, ensuring consistent performance and reliability in production environments. Translate complex technical and functional problems into detailed designs. Partner with the data scientists on the team to take data science algorithms and integrate them efficiently into high-scale production applications. Provide senior-level support and mentoring by evaluating product enhancements for feasibility studies and providing completion time estimates. Develop high-quality unit tests, functional tests, and integration tests supporting the data extract, transform, and load pipelines. Ensure product quality by participating in design reviews and code reviews and working with the team for end-to-end validation of the entire product. Design and develop various data validation strategies, ensuring that robust, good-quality data is provided to data science teams for model development and advanced analytics. Define data governance and data auditing policies and strategies for compliance and security controls. Write and maintain technical documentation for the various projects. Review product user documentation for technical accuracy and completeness. Required education Bachelor's Degree Preferred education Bachelor's Degree Required technical and professional expertise 5-7 years of experience in developing enterprise applications using Java, Python, Spark, and related technologies, with 2+ years of focus on Data Engineering, DataOps, MLOps, and AI Engineering. Software development strategies for low-latency, high-throughput software. Hands-on experience with common distributed processing tools and languages: Python, Spark, Hive, Presto. Deep understanding of data pipelines, data modeling strategies, and schema management. Experience with specialized data architectures like data lake and data mesh, and optimizing data layouts for efficient processing. 
Hands-on experience with streaming platforms and frameworks like Kafka and Spark Streaming. Hands-on experience integrating AI models into real-world applications. Strong understanding of advanced algorithms used in the design and development of enterprise-grade software. Familiarity with pipeline orchestrator tools like Argo, Kubeflow, Airflow, or other open-source tools. Familiarity with platforms like Kubernetes and experience building on top of the native platforms. Good written and verbal communication skills. Ability to provide guidance to less experienced team members. Preferred technical and professional experience Proficiency in Java, Python, Spark, and related technologies. Hands-on experience with common distributed processing tools and languages: Python, Spark, Hive, Presto. Familiarity with pipeline orchestrator tools like Argo, Kubeflow, Airflow, or other open-source tools. Familiarity with platforms like Kubernetes and experience building on top of the native platforms.
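The data-validation responsibility listed in this posting can be sketched as a minimal schema check; the field names and rules below are illustrative assumptions, not from the posting:

```python
# Minimal pipeline-side data validation: check each record against a schema
# of (type, required) rules before handing it to downstream consumers.

SCHEMA = {
    "user_id": (int, True),
    "amount": (float, True),
    "country": (str, False),
}

def validate(record: dict) -> list:
    """Return a list of human-readable violations (empty means valid)."""
    errors = []
    for field, (ftype, required) in SCHEMA.items():
        if field not in record or record[field] is None:
            if required:
                errors.append(f"missing required field: {field}")
        elif not isinstance(record[field], ftype):
            errors.append(f"{field}: expected {ftype.__name__}")
    return errors

good = {"user_id": 7, "amount": 12.5, "country": "IN"}
bad = {"user_id": "7", "country": "IN"}
print(validate(good))  # []
print(validate(bad))   # ['user_id: expected int', 'missing required field: amount']
```

Production pipelines typically express the same idea declaratively (e.g. with a schema library or quality framework), but the core pattern is this per-record check with violations routed to a quarantine or audit table rather than silently dropped.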
Posted 3 months ago
5 - 9 years
12 - 17 Lacs
Bengaluru
Work from Office
Your role and responsibilities Job Summary We are seeking a talented and motivated DevOps Engineer to join our team, with a minimum of 8 years of experience. The ideal candidate will have hands-on experience with DevOps practices, supporting diverse platforms, and developing robust solutions using Python. The role requires expertise in OpenShift, a solid understanding of microservices architecture, and API orchestration. Familiarity with platforms like Watsonx.data, Watsonx.ai, Milvus, and/or Cloud Pak for Data is highly desirable. Experience with Presto and Spark is welcomed and will be considered a strong asset. The candidate must also have a strong focus on writing quality code, automation testing, and ensuring reliability. Exceptional problem determination skills, timeline management, and the ability to thrive in a fast-paced, dynamic environment are essential. Key Responsibilities Design, deploy, and manage highly scalable and reliable DevOps solutions across multiple platforms. Develop and maintain microservices-based architectures and ensure seamless API orchestration. Automate infrastructure and application deployments using tools and scripts, primarily in Python. Support and optimize OpenShift-based environments for high availability and performance. Collaborate with cross-functional teams to implement and support platforms like Watsonx.data, Watsonx.ai, Milvus, and Cloud Pak for Data. Work with Presto and Spark to support scalable, high-performance data processing. Write, maintain, and improve high-quality, reusable code that adheres to best practices. Implement and maintain automation testing frameworks to ensure code reliability and minimize defects. Develop and maintain CI/CD pipelines to streamline application delivery. Monitor system performance, conduct root cause analysis, and provide resolutions for production issues. Ensure compliance with industry standards and security best practices. 
Required education Bachelor's Degree Preferred education Bachelor's Degree Required technical and professional expertise Required Skills and Qualifications Proven experience in DevOps engineering and managing complex infrastructures. Strong proficiency in Python for scripting, automation, and development. Hands-on expertise with OpenShift and container orchestration tools. Solid understanding of microservices architecture and API orchestration. Deep experience in CI/CD pipelines, automation tools, and infrastructure as code (IaC). Strong focus on code quality and experience with automation testing frameworks (e.g., pytest, Selenium, or similar). Demonstrated ability in problem determination and solving complex technical issues. Exceptional skills in managing timelines and delivering projects on schedule. Preferred technical and professional experience Preferred Qualifications Experience working with Watsonx.data, Watsonx.ai, Milvus, or Cloud Pak for Data. Knowledge or hands-on experience with Presto and Spark for data processing and querying. Knowledge of cloud technologies (e.g., AWS, Azure, IBM Cloud). Familiarity with machine learning workflows and data pipeline management. Experience with monitoring tools (e.g., Prometheus, Grafana) and log aggregation systems (e.g., ELK stack). Strong communication skills and ability to work effectively in a team environment.
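The automation-testing focus above (pytest is named among the frameworks) can be illustrated with a minimal pytest-style test module; the `retry` helper under test is a made-up example, not from the posting:

```python
# A tiny unit under test plus pytest-style tests (plain assert functions,
# discoverable by `pytest` or runnable directly as a script).

def retry(fn, attempts=3):
    """Call fn until it succeeds, up to `attempts` times; re-raise the last error."""
    last = None
    for _ in range(attempts):
        try:
            return fn()
        except Exception as e:
            last = e
    raise last

def test_retry_succeeds_after_failures():
    calls = {"n": 0}
    def flaky():
        calls["n"] += 1
        if calls["n"] < 3:
            raise RuntimeError("transient")
        return "ok"
    assert retry(flaky) == "ok"
    assert calls["n"] == 3

def test_retry_gives_up():
    def broken():
        raise RuntimeError("always fails")
    try:
        retry(broken, attempts=2)
        assert False, "expected RuntimeError"
    except RuntimeError:
        pass

test_retry_succeeds_after_failures()
test_retry_gives_up()
print("all tests passed")
```

In a CI/CD pipeline, a suite like this runs on every commit; the build fails if any assertion does, which is the "minimize defects" loop the posting describes.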
Posted 3 months ago