8.0 - 10.0 years
3 - 7 Lacs
Bengaluru
Work from Office
Must have:
- Strong programming skills in languages such as Python and Java
- Hands-on experience with at least one cloud platform (GCP preferred)
- Experience working with Docker
- Environment management (e.g., venv, pip, Poetry)
- Experience with orchestrators such as Vertex AI Pipelines, Airflow, etc.
- Understanding of the full ML lifecycle end-to-end
- Data engineering and feature engineering techniques
- Experience with ML modelling and evaluation metrics
- Experience with TensorFlow, PyTorch, or another framework
- Experience with model monitoring
- Advanced SQL knowledge
- Awareness of streaming concepts such as windowing, late arrival, triggers, etc.

Good to have:
- Hyperparameter tuning experience
- Proficiency in Apache Spark, Apache Beam, or Apache Flink
- Hands-on experience with distributed computing
- Working experience in data architecture design
- Awareness of storage and compute options and when to choose what
- Good understanding of cluster optimisation and pipeline optimisation strategies
- Exposure to GCP tools for developing end-to-end data pipelines for various scenarios (including ingesting data from traditional databases as well as integrating API-based data sources)
- Business mindset to understand data and how it will be used for BI and analytics purposes
- Working experience with CI/CD pipelines, deployment methodologies, and Infrastructure as Code (e.g., Terraform)
- Hands-on experience with Kubernetes
- Vector databases such as Qdrant
- LLM experience (embeddings generation, embeddings indexing, RAG, Agents, etc.)

Key Responsibilities:
- Design, develop, and implement AI models and algorithms using Python and Large Language Models (LLMs).
- Collaborate with data scientists, engineers, and business stakeholders to define project requirements and deliver impactful AI-driven solutions.
- Optimize and manage data pipelines, ensuring efficient data storage and retrieval with PostgreSQL.
- Continuously research emerging AI trends and best practices to enhance model performance and capabilities.
- Deploy, monitor, and maintain AI applications in production environments, adhering to industry best standards.
- Document technical designs, workflows, and processes to facilitate clear knowledge transfer and project continuity.
- Communicate technical concepts effectively to both technical and non-technical team members.

Required Skills and Qualifications:
- Proven expertise in Python programming for AI/ML applications
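The streaming concepts listed under "Must have" — windowing, late arrival, and triggers — can be sketched as a toy tumbling-window aggregator in plain Python. This is an illustration of the idea only, not the Beam or Flink API; the window size, lateness bound, and event tuples are made-up values:

```python
from collections import defaultdict

WINDOW = 60            # tumbling window size in seconds of event time (illustrative)
ALLOWED_LATENESS = 30  # late events within this bound still update their window

def window_start(event_ts):
    """Assign an event to its tumbling window by event time, not arrival time."""
    return (event_ts // WINDOW) * WINDOW

def process(events):
    """events: iterable of (event_ts, arrival_ts, value).
    Returns per-window sums, discarding events that arrive later than
    ALLOWED_LATENESS past the window's close (a watermark-style cutoff)."""
    sums = defaultdict(int)
    dropped = []
    for event_ts, arrival_ts, value in events:
        w = window_start(event_ts)
        if arrival_ts > w + WINDOW + ALLOWED_LATENESS:
            dropped.append((event_ts, value))  # too late: discard
            continue
        sums[w] += value  # on-time or tolerably late: update the window result
    return dict(sums), dropped

results, dropped = process([
    (5, 6, 10),    # on time, falls in window [0, 60)
    (10, 85, 7),   # late, but within the lateness bound, so still counted
    (10, 200, 1),  # beyond the lateness bound, dropped
    (70, 71, 3),   # on time, window [60, 120)
])
```

Real engines additionally separate *triggers* (when to emit a window's current result) from lateness handling; here every accepted event implicitly "fires" an update.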
Posted 1 month ago
8.0 - 12.0 years
15 - 30 Lacs
Gurugram
Work from Office
Role description
- Lead and mentor a team of data engineers to design, develop, and maintain high-performance data pipelines and platforms.
- Architect scalable ETL/ELT processes, streaming pipelines, and data lake/warehouse solutions (e.g., Redshift, Snowflake, BigQuery).
- Own the roadmap and technical vision for the data engineering function, ensuring best practices in data modeling, governance, quality, and security.
- Drive adoption of modern data stack tools (e.g., Airflow, Kafka, Spark) and foster a culture of continuous improvement.
- Ensure the platform is reliable, scalable, and cost-effective across batch and real-time use cases.
- Champion data observability, lineage, and privacy initiatives to ensure trust in data across the organization.

Skills
- Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field.
- 8+ years of hands-on experience in data engineering, with at least 2 years in a leadership or managerial role.
- Proven experience with distributed data processing frameworks such as Apache Spark, Flink, or Kafka.
- Strong SQL skills and experience in data modeling, data warehousing, and schema design.
- Proficiency with cloud platforms (AWS/GCP/Azure) and their native data services (e.g., AWS Glue, Redshift, EMR, BigQuery).
- Solid grasp of data architecture, system design, and performance optimization at scale.
- Experience working in an agile development environment and managing sprint-based delivery cycles.
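As one concrete instance of the data-quality and observability practices called out above, here is a minimal plain-Python sketch of a validation gate between pipeline stages. The function and field names are illustrative, not from any specific stack:

```python
def validate_batch(rows, required=("id", "ts"), min_rows=1):
    """Simple data-quality gate: reject bad records before loading downstream.
    Returns (clean_rows, report) so callers can log or alert on the report."""
    clean, errors = [], []
    for i, row in enumerate(rows):
        missing = [k for k in required if row.get(k) in (None, "")]
        if missing:
            errors.append((i, missing))  # record index and which fields failed
        else:
            clean.append(row)
    if len(clean) < min_rows:
        raise ValueError(f"batch rejected: only {len(clean)} valid rows")
    return clean, {"input": len(rows), "valid": len(clean), "errors": errors}

clean, report = validate_batch([
    {"id": 1, "ts": "2024-01-01"},
    {"id": None, "ts": "2024-01-01"},  # fails the null check on "id"
])
```

In a production pipeline the `report` dictionary would feed a metrics or lineage system rather than being inspected by hand.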
Posted 1 month ago
4.0 - 9.0 years
6 - 11 Lacs
Chennai
Work from Office
Job Summary
Synechron is seeking an experienced Data Processing Engineer to lead the development of large-scale data processing solutions using Java, Apache Flink/Storm/Beam, and Google Cloud Platform (GCP). In this role, you will collaborate across teams to design, develop, and optimize data-intensive applications that support strategic business objectives. Your expertise will help evolve our data architecture, improve processing efficiency, and ensure the delivery of reliable, scalable solutions in an Agile environment.

Software Requirements
Required:
- Java (version 8 or higher)
- Apache Flink, Storm, or Beam for streaming data processing
- Google Cloud Platform (GCP) services, especially BigQuery and related data tools
- Experience with databases such as BigQuery, Oracle, or equivalent
- Familiarity with version control tools such as Git
Preferred:
- Cloud deployment experience, with GCP in particular
- Additional familiarity with containerization (Docker/Kubernetes)
- Knowledge of CI/CD pipelines and DevOps practices

Overall Responsibilities
- Collaborate closely with cross-functional teams to understand data and system requirements, then design scalable solutions aligned with business needs.
- Develop detailed technical specifications, implementation plans, and documentation for new features and enhancements.
- Implement, test, and deploy data processing applications using Java and Apache Flink/Storm/Beam within GCP environments.
- Conduct code reviews to ensure quality, security, and maintainability, supporting team members' growth and best practices.
- Troubleshoot technical issues, resolve bottlenecks, and optimize application performance and resource utilization.
- Stay current with advancements in data processing, cloud technology, and Java development to continuously improve solutions.
- Support testing teams to verify data workflows and validation processes, ensuring reliability and accuracy.
- Participate in Agile ceremonies, including sprint planning, stand-ups, and retrospectives, to ensure continuous delivery and process improvement.

Technical Skills (By Category)
Programming Languages:
- Required: Java (8+)
- Preferred: Python, Scala, or Node.js for scripting or auxiliary processing
Databases/Data Management:
- Experience with BigQuery, Oracle, or similar relational data stores
Cloud Technologies:
- GCP (BigQuery, Cloud Storage, Dataflow, etc.) with hands-on experience in cloud data solutions
Frameworks and Libraries:
- Apache Flink, Storm, or Beam for stream processing
- Java SDKs, APIs, and data integration libraries
Development Tools and Methodologies:
- Git, Jenkins, JIRA, and Agile/Scrum practices
- Familiarity with containerization (Docker, Kubernetes) is a plus
Security and Compliance:
- Understanding of data security principles in cloud environments

Experience Requirements
- 4+ years of experience in software development, with a focus on data processing and Java-based backend development
- Proven experience working with Apache Flink, Storm, or Beam in production environments
- Strong background in managing large data workflows and pipeline optimization
- Experience with GCP data services and cloud-native development
- Demonstrated success in Agile projects, including collaboration with cross-functional teams
- Previous leadership or mentorship experience is a plus

Day-to-Day Activities
- Design, develop, and deploy scalable data processing applications in Java using Flink/Storm/Beam on GCP
- Collaborate with data engineers, analysts, and architects to translate business needs into technical solutions
- Conduct code reviews, optimize data pipelines, and troubleshoot system issues swiftly
- Document technical specifications, data schemas, and process workflows
- Participate actively in Agile ceremonies, provide updates on task progress, and suggest process improvements
- Support continuous integration and deployment of data applications
- Mentor junior team members, sharing best practices and technical insights

Qualifications
- Bachelor's or Master's degree in Computer Science, Information Technology, or equivalent
- Relevant certifications in cloud technologies or data processing (preferred)
- Evidence of continuous professional development and staying current with industry trends

Professional Competencies
- Strong analytical and problem-solving skills focused on data processing challenges
- Leadership abilities to guide, mentor, and develop team members
- Excellent communication skills for technical documentation and stakeholder engagement
- Adaptability to rapidly changing technologies and project priorities
- Capacity to prioritize tasks and manage time efficiently under tight deadlines
- An innovative mindset to leverage new tools and techniques for performance improvements
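The stream enrichment mentioned in the day-to-day activities can be pictured with a toy plain-Python stand-in for a keyed lookup operator in Flink or Beam. The field names and reference data here are hypothetical:

```python
# Toy stream-enrichment stage: join each event against a reference table,
# the plain-Python analogue of a keyed lookup/enrichment operator in a
# stream processor (all names below are illustrative).
REFERENCE = {"IN": "India", "US": "United States"}

def enrich(events, reference):
    """Yield a copy of each event with a resolved "country" field attached."""
    for event in events:
        code = event.get("country_code")
        yield {**event, "country": reference.get(code, "unknown")}

enriched = list(enrich(
    [{"order_id": 1, "country_code": "IN"},
     {"order_id": 2, "country_code": "FR"}],  # no reference entry -> "unknown"
    REFERENCE,
))
```

In a real pipeline the reference data would typically be a broadcast state or an external cache rather than an in-process dictionary.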
Posted 1 month ago
4.0 - 9.0 years
12 - 17 Lacs
Chennai
Work from Office
Job Summary
Synechron is seeking an experienced Data Processing Engineer to lead the development of large-scale data processing solutions using Java, Apache Flink/Storm/Beam, and Google Cloud Platform (GCP). In this role, you will collaborate across teams to design, develop, and optimize data-intensive applications that support strategic business objectives. Your expertise will help evolve our data architecture, improve processing efficiency, and ensure the delivery of reliable, scalable solutions in an Agile environment.

Software Requirements
Required:
- Java (version 8 or higher)
- Apache Flink, Storm, or Beam for streaming data processing
- Google Cloud Platform (GCP) services, especially BigQuery and related data tools
- Experience with databases such as BigQuery, Oracle, or equivalent
- Familiarity with version control tools such as Git
Preferred:
- Cloud deployment experience, with GCP in particular
- Additional familiarity with containerization (Docker/Kubernetes)
- Knowledge of CI/CD pipelines and DevOps practices

Overall Responsibilities
- Collaborate closely with cross-functional teams to understand data and system requirements, then design scalable solutions aligned with business needs.
- Develop detailed technical specifications, implementation plans, and documentation for new features and enhancements.
- Implement, test, and deploy data processing applications using Java and Apache Flink/Storm/Beam within GCP environments.
- Conduct code reviews to ensure quality, security, and maintainability, supporting team members' growth and best practices.
- Troubleshoot technical issues, resolve bottlenecks, and optimize application performance and resource utilization.
- Stay current with advancements in data processing, cloud technology, and Java development to continuously improve solutions.
- Support testing teams to verify data workflows and validation processes, ensuring reliability and accuracy.
- Participate in Agile ceremonies, including sprint planning, stand-ups, and retrospectives, to ensure continuous delivery and process improvement.

Technical Skills (By Category)
Programming Languages:
- Required: Java (8+)
- Preferred: Python, Scala, or Node.js for scripting or auxiliary processing
Databases/Data Management:
- Experience with BigQuery, Oracle, or similar relational data stores
Cloud Technologies:
- GCP (BigQuery, Cloud Storage, Dataflow, etc.) with hands-on experience in cloud data solutions
Frameworks and Libraries:
- Apache Flink, Storm, or Beam for stream processing
- Java SDKs, APIs, and data integration libraries
Development Tools and Methodologies:
- Git, Jenkins, JIRA, and Agile/Scrum practices
- Familiarity with containerization (Docker, Kubernetes) is a plus
Security and Compliance:
- Understanding of data security principles in cloud environments

Experience Requirements
- 4+ years of experience in software development, with a focus on data processing and Java-based backend development
- Proven experience working with Apache Flink, Storm, or Beam in production environments
- Strong background in managing large data workflows and pipeline optimization
- Experience with GCP data services and cloud-native development
- Demonstrated success in Agile projects, including collaboration with cross-functional teams
- Previous leadership or mentorship experience is a plus

Day-to-Day Activities
- Design, develop, and deploy scalable data processing applications in Java using Flink/Storm/Beam on GCP
- Collaborate with data engineers, analysts, and architects to translate business needs into technical solutions
- Conduct code reviews, optimize data pipelines, and troubleshoot system issues swiftly
- Document technical specifications, data schemas, and process workflows
- Participate actively in Agile ceremonies, provide updates on task progress, and suggest process improvements
- Support continuous integration and deployment of data applications
- Mentor junior team members, sharing best practices and technical insights

Qualifications
- Bachelor's or Master's degree in Computer Science, Information Technology, or equivalent
- Relevant certifications in cloud technologies or data processing (preferred)
- Evidence of continuous professional development and staying current with industry trends

Professional Competencies
- Strong analytical and problem-solving skills focused on data processing challenges
- Leadership abilities to guide, mentor, and develop team members
- Excellent communication skills for technical documentation and stakeholder engagement
- Adaptability to rapidly changing technologies and project priorities
- Capacity to prioritize tasks and manage time efficiently under tight deadlines
- An innovative mindset to leverage new tools and techniques for performance improvements

SYNECHRON'S DIVERSITY & INCLUSION STATEMENT
Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative "Same Difference" is committed to fostering an inclusive culture promoting equality, diversity, and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, more successful businesses as a global company. We encourage applicants from across diverse backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, and disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more.
Posted 1 month ago
3.0 - 7.0 years
18 - 22 Lacs
Bengaluru
Work from Office
Job Title: Industry & Function AI Data Engineer + S&C GN
Management Level: 09 - Consultant
Location: Primary - Bengaluru, Secondary - Gurugram
Must-Have Skills: Data engineering expertise; cloud platforms (AWS, Azure, GCP); proficiency in Python, SQL, PySpark, and ETL frameworks
Good-to-Have Skills: LLM architecture; containerization tools (Docker, Kubernetes); real-time data processing tools (Kafka, Flink); certifications such as AWS Certified Data Analytics - Specialty, Google Professional Data Engineer, Snowflake, dbt, etc.

Job Summary:
As a Data Engineer, you will play a critical role in designing, implementing, and optimizing data infrastructure to power analytics, machine learning, and enterprise decision-making. Your work will ensure high-quality, reliable data is accessible for actionable insights. This involves leveraging technical expertise, collaborating with stakeholders, and staying updated with the latest tools and technologies to deliver scalable and efficient data solutions.

Roles & Responsibilities:
- Build and Maintain Data Infrastructure: Design, implement, and optimize scalable data pipelines and systems for seamless ingestion, transformation, and storage of data.
- Collaborate with Stakeholders: Work closely with business teams, data analysts, and data scientists to understand data requirements and deliver actionable solutions.
- Leverage Tools and Technologies: Utilize Python, SQL, PySpark, and ETL frameworks to manage large datasets efficiently.
- Cloud Integration: Develop secure, scalable, and cost-efficient solutions using cloud platforms such as Azure, AWS, and GCP.
- Ensure Data Quality: Focus on data reliability, consistency, and quality using automation and monitoring techniques.
- Document and Share Best Practices: Create detailed documentation, share best practices, and mentor team members to promote a strong data culture.
- Continuous Learning: Stay updated with the latest tools and technologies in data engineering through professional development opportunities.

Professional & Technical Skills:
- Strong proficiency in programming languages such as Python, SQL, and PySpark
- Experience with cloud platforms (AWS, Azure, GCP) and their data services
- Familiarity with ETL frameworks and data pipeline design
- Strong knowledge of traditional statistical methods and basic machine learning techniques
- Knowledge of containerization tools (Docker, Kubernetes)
- Knowledge of LLM, RAG, and agentic AI architectures
- Certification in data science or related fields (e.g., AWS Certified Data Analytics - Specialty, Google Professional Data Engineer)

Additional Information:
The ideal candidate has a robust educational background in data engineering or a related field and a proven track record of building scalable, high-quality data solutions in the Consumer Goods sector. This position offers opportunities to design and implement cutting-edge data systems that drive business transformation, collaborate with global teams to solve complex data challenges and deliver measurable business outcomes, and enhance your expertise by working on innovative projects utilizing the latest technologies in cloud, data engineering, and AI.

About Our Company | Accenture

Qualification
Experience: Minimum 3-7 years in data engineering or related fields, with a focus on the Consumer Goods industry
Educational Qualification: Bachelor's or Master's degree in Computer Science, Information Systems, Engineering, or a related field
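A minimal illustration of the SQL-plus-Python pipeline work described above, using Python's built-in sqlite3 module as a stand-in for a cloud warehouse. The table and column names are invented for the sketch:

```python
import sqlite3

# In-memory stand-in for a warehouse table; schema and names are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("south", 120.0), ("south", 80.0), ("north", 50.0)],
)

# A typical ETL-style aggregation a data engineer would push down into SQL
# rather than compute row-by-row in application code.
rows = conn.execute(
    """
    SELECT region, SUM(amount) AS total
    FROM sales
    GROUP BY region
    ORDER BY total DESC
    """
).fetchall()
```

The same query shape carries over to BigQuery or Snowflake; only the client library and connection setup differ.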
Posted 1 month ago
7.0 - 10.0 years
20 - 30 Lacs
Hyderabad, Chennai
Work from Office
Role Expectations:
- Design and develop data pipelines for real-time and batch data ingestion and processing using Confluent Kafka, ksqlDB, Kafka Connect, and Apache Flink.
- Build and configure Kafka connectors to ingest data from various sources (databases, APIs, message queues, etc.) into Kafka.
- Develop Flink applications for complex event processing, stream enrichment, and real-time analytics.
- Develop and optimize ksqlDB queries for real-time data transformations, aggregations, and filtering.
- Implement data quality checks and monitoring to ensure data accuracy and reliability throughout the pipeline.
- Monitor and troubleshoot data pipeline performance, identify bottlenecks, and implement optimizations.
- Automate data pipeline deployment, monitoring, and maintenance tasks.
- Stay up to date with the latest advancements in data streaming technologies and best practices.
- Contribute to the development of data engineering standards and best practices within the organization.
- Participate in code reviews and contribute to a collaborative and supportive team environment.
- Work closely with other architects and tech leads in India and the US, and create POCs and MVPs.
- Provide regular updates on tasks, status, and risks to the project manager.

The experience we are looking to add to our team

Qualifications:
- Bachelor's degree or higher from a reputed university
- 8 to 10 years of total experience, with the majority related to ETL/ELT, big data, Kafka, etc.
- Proficiency in developing Flink applications for stream processing and real-time analytics.
- Strong understanding of data streaming concepts and architectures.
- Extensive experience with Confluent Kafka, including Kafka brokers, producers, consumers, and Schema Registry.
- Hands-on experience with ksqlDB for real-time data transformations and stream processing.
- Experience with Kafka Connect and building custom connectors.
- Extensive experience implementing large-scale data ingestion and curation solutions.
- Good hands-on experience with a big data technology stack on any cloud platform.
- Excellent problem-solving, analytical, and communication skills.
- Ability to work independently and as part of a team.
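The ksqlDB-style transformations this role describes (a filter followed by a per-key streaming aggregation that emits each update) can be approximated in plain Python to show the idea. This is a toy analogue, not ksqlDB itself, and the key/amount fields are made up:

```python
from collections import defaultdict

def running_totals(events, min_amount=0.0):
    """Toy analogue of a ksqlDB-style streaming aggregation:
    apply a WHERE-style filter, maintain a running per-key sum, and
    record each update as it would be emitted downstream
    (ksqlDB expresses this as SELECT ... GROUP BY ... EMIT CHANGES)."""
    totals = defaultdict(float)
    emitted = []
    for key, amount in events:
        if amount < min_amount:            # filter step
            continue
        totals[key] += amount              # grouped aggregate step
        emitted.append((key, totals[key])) # one changelog row per update
    return emitted

out = running_totals(
    [("user-1", 10.0), ("user-2", 5.0), ("user-1", 2.5), ("user-2", -1.0)],
    min_amount=0.0,
)
```

The emitted list mirrors the changelog a downstream consumer would see: one row per state update, not one row per key.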
Posted 1 month ago
12.0 - 22.0 years
40 - 60 Lacs
Bengaluru
Work from Office
Location: Bangalore (Onsite)
Experience: 12+ years
Type: Full-time

Role Overview
We are looking for a Technical Program Manager (TPM) to drive the execution of a next-generation data and AI platform that powers real-time analytics, machine learning, and industrial applications across multiple domains such as aviation, logistics, and manufacturing. You will work at the intersection of engineering, product, architecture, and business, managing the roadmap, resolving technical dependencies, and ensuring delivery of critical platform components across cross-functional and geographically distributed teams.

Key Responsibilities
Program & Execution Management
- Drive end-to-end delivery of platform features and sector-specific solutions by coordinating multiple scrum teams (AI/ML, Data, Fullstack, DevOps).
- Develop and maintain technical delivery plans, sprint milestones, and program-wide timelines.
- Identify and resolve cross-team dependencies, risks, and technical bottlenecks.
Technical Fluency & Architecture Alignment
- Understand the platform's architecture (Kafka, Spark, data lakes, ML pipelines, hybrid/on-prem deployments) and guide teams toward cohesive delivery.
- Translate high-level product goals into detailed technical milestones and backlog items in collaboration with Product Owners and Architects.
Cross-Functional Collaboration
- Liaise between globally distributed engineering teams, product owners, architects, and domain stakeholders to align on priorities and timelines.
- Coordinate multi-sector requirements and build scalable components that serve as blueprints across industries (aviation, logistics, etc.).
Governance & Reporting
- Maintain clear, concise, and timely program reporting (dashboards, OKRs, status updates) for leadership and stakeholders.
- Champion delivery best practices, quality assurance, and documentation hygiene.
Innovation & Agility
- Support iterative product development with flexibility to handle ambiguity and evolving priorities.
- Enable POCs and rapid prototyping efforts while planning for scalable production transitions.

Required Skills & Qualifications
- 12+ years of experience in software engineering and technical program/project management.
- Strong understanding of platform/data architecture, including event streaming (Kafka), batch/stream processing (Spark, Flink), and AI/ML pipelines.
- Proven success delivering complex programs in agile environments with multiple engineering teams.
- Familiarity with DevOps, cloud/on-prem infrastructure (AWS, Azure, hybrid models), CI/CD, and observability practices.
- Excellent communication, stakeholder management, and risk mitigation skills.
- Strong grasp of Agile/Scrum or SAFe methodologies.

Good-to-Have
- Experience working in or delivering solutions to industrial sectors such as aviation, manufacturing, logistics, or utilities.
- Experience with tools like Jira, Confluence, Notion, Asana, or similar.
- Background in engineering or data (Computer Science, Data Engineering, AI/ML, or related).
Posted 1 month ago
15.0 - 20.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: Spring MVC (Model View Controller)
Good to have skills: Spring Boot, Apache Kafka, Microservices and Light Weight Architecture, Core Java, Spring REST
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary:
As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure project milestones are met, facilitating discussions to address challenges, and guiding your team through the development process. You will also engage in strategic planning to align application development with organizational goals, ensuring that the solutions provided are effective and efficient.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge.
- Evaluate and implement best practices in application development to improve team performance.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in Java Standard Edition.
- Good To Have Skills: Experience with Apache Kafka or Flink, Microservices and Light Weight Architecture, Spring Boot, Spring REST.
- Strong understanding of object-oriented programming principles.
- Experience with application performance tuning and optimization.
- Familiarity with version control systems such as Git.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Java Standard Edition.
- This position is based at our Bengaluru office.
- A 15 years full time education is required.

Qualification: 15 years full time education
Posted 1 month ago
15.0 - 20.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: Java Standard Edition
Good to have skills: Spring Boot, Apache Kafka, Microservices and Light Weight Architecture, Core Java, Spring REST
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary:
As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure project milestones are met, facilitating discussions to address challenges, and guiding your team through the development process. You will also engage in strategic planning to align application development with organizational goals, ensuring that the solutions provided are effective and efficient.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge.
- Evaluate and implement best practices in application development to improve team performance.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in Java Standard Edition.
- Good To Have Skills: Experience with Apache Kafka or Flink, Microservices and Light Weight Architecture, Spring Boot, Spring REST.
- Strong understanding of object-oriented programming principles.
- Experience with application performance tuning and optimization.
- Familiarity with version control systems such as Git.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Java Standard Edition.
- This position is based at our Bengaluru office.
- A 15 years full time education is required.

Qualification: 15 years full time education
Posted 1 month ago
3.0 - 8.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Spring Boot
Good to have skills: Java Standard Edition, Apache Kafka, Microservices and Light Weight Architecture
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary:
As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with team members to understand project specifications, developing application features, and ensuring that the applications are aligned with business needs. You will also engage in testing and debugging processes to enhance application performance and user experience, while continuously seeking opportunities for improvement and innovation in application development.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation/contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Assist in the documentation of application processes and workflows.
- Engage in code reviews to ensure quality and adherence to best practices.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in Spring Boot.
- Good To Have Skills: Experience with Java Standard Edition, Microservices and Light Weight Architecture, Spring REST, Azure basics, Apache Kafka or Flink.
- Strong understanding of RESTful API design and development.
- Experience with version control systems such as Git.
- Familiarity with cloud platforms and deployment strategies.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Spring Boot.
- This position is based at our Bengaluru office.
- A 15 years full time education is required.

Qualification: 15 years full time education
Posted 1 month ago
3.0 - 8.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Spring Boot
Good to have skills: Java Standard Edition, Apache Kafka, Microservices and Light Weight Architecture
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary:
As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with team members to understand project specifications, developing application features, and ensuring that the applications are aligned with business needs. You will also engage in testing and debugging processes to enhance application performance and user experience, while continuously seeking opportunities for improvement and innovation in application development.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation/contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Assist in the documentation of application processes and workflows.
- Engage in code reviews to ensure quality and adherence to best practices.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in Spring Boot.
- Good To Have Skills: Experience with Java Standard Edition, Microservices and Light Weight Architecture, Spring REST, Azure basics, Apache Kafka or Flink.
- Strong understanding of RESTful API design and development.
- Experience with database management systems and SQL.
- Familiarity with version control systems such as Git.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Spring Boot.
- This position is based at our Bengaluru office.
- A 15 years full time education is required.

Qualification: 15 years full time education
Posted 1 month ago
15.0 - 20.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: Java Full Stack Development
Good to have skills: Apache Kafka, API Management, Microservices and Light Weight Architecture
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary:
As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure project milestones are met, facilitating discussions to address challenges, and guiding your team through the development process. You will also engage in strategic planning to align application development with business objectives, ensuring that the solutions provided are effective and efficient.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge.
- Facilitate regular team meetings to track progress and address any roadblocks.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in Java Full Stack Development.
- Good To Have Skills: Experience with Apache Kafka or Flink, API Management, Microservices and Light Weight Architecture.
- Strong understanding of front-end technologies such as HTML, CSS, and JavaScript.
- Experience with back-end frameworks and databases.
- Familiarity with Agile methodologies and DevOps practices.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Java Full Stack Development.
- This position is based at our Bengaluru office.
- A 15 years full time education is required.

Qualification: 15 years full time education
Posted 1 month ago
7.0 - 12.0 years
9 - 14 Lacs
Pune
Work from Office
Job Summary: Synechron is seeking a skilled and experienced Lead Java Developer to oversee the development, deployment, and support of complex enterprise applications. This role involves leading technical initiatives, ensuring best practices in software engineering, and collaborating across teams to deliver cloud-enabled, scalable, and efficient solutions. The successful candidate will contribute to our strategic technology objectives while fostering innovation, best coding practices, and continuous improvement in a dynamic environment.
Software Requirements
Required:
- Proficiency in Java (latest stable versions), with extensive experience in building enterprise-scale applications
- Familiarity with Kettle jobs (Pentaho Data Integration)
- Operating systems: Unix/Linux
- Scripting languages: Shell scripting, Perl, Python
- Job scheduling tools: Control-M, Autosys
- Database technologies: SQL Server, Oracle, or MongoDB
- Monitoring tools such as Grafana, Prometheus, or Splunk
- Container orchestration: Kubernetes and OpenShift
- Messaging middleware: Kafka, EMS, RabbitMQ
- Big data platforms: Apache Flink, Spark, Apache Beam, Hadoop, Gemfire, Ignite
- Continuous Integration/Delivery tools: Jenkins, TeamCity, SonarQube, Git
Preferred:
- Experience with cloud platforms (e.g., AWS)
- Additional data processing frameworks or cloud deployment tools
- Knowledge of security best practices in enterprise environments
Overall Responsibilities
- Lead the design, development, and deployment of scalable Java-based solutions aligned with business needs
- Analyze existing system logic, troubleshoot issues, and implement improvements or fixes
- Collaborate with business stakeholders and technical teams to gather requirements, propose solutions, and document functionalities
- Define system architecture, including APIs, data flows, and system integration points
- Develop and maintain comprehensive documentation, including technical specifications, deployment procedures, and API documentation
- Support application deployment, configurations, and release management within CI/CD pipelines
- Implement monitoring and alerting solutions using tools like Grafana, Prometheus, or Splunk for operational insights
- Ensure application security and compliance with enterprise security standards
- Mentor junior team members and promote development best practices across the team
Performance Outcomes:
- Robust, scalable, and maintainable applications
- Reduced system outages and improved performance metrics
- Clear, complete documentation supporting operational and development teams
- Effective team collaboration and technical leadership
Technical Skills (By Category)
Programming Languages:
- Essential: Java
- Preferred: Scripting languages (Shell, Perl, Python)
Frameworks and Libraries:
- Essential: Java frameworks such as Spring Boot, Spring Cloud
- Preferred: Microservices architecture, messaging, or big data libraries
Databases/Data Management:
- Essential: SQL Server, Oracle, MongoDB
- Preferred: Data grid solutions like Gemfire or Ignite
Cloud Technologies:
- Preferred: Hands-on experience with AWS, Azure, or similar cloud platforms, especially for container deployment and orchestration
Containerization and Orchestration:
- Essential: Kubernetes, OpenShift
DevOps & CI/CD:
- Essential: Jenkins, TeamCity, SonarQube, Git
Monitoring & Security:
- Preferred: Familiarity with Grafana, Prometheus, Splunk; understanding of data security, encryption, and access control best practices
Experience Requirements
- Minimum 7+ years of professional experience in Java application development
- Proven experience leading enterprise projects, especially involving distributed systems and big data technologies
- Experience designing and deploying cloud-ready applications
- Familiarity with SDLC processes, Agile methodologies, and DevOps practices
- Experience with application troubleshooting, system integration, and performance tuning
Day-to-Day Activities
- Lead project meetings, coordinate deliverables, and oversee technical planning
- Develop, review, and optimize Java code, APIs, and microservices components
- Collaborate with development, QA, and operations teams to ensure smooth deployment and operation of applications
- Conduct system analysis, performance tuning, and troubleshooting of live issues
- Document system architecture, deployment procedures, and operational workflows
- Mentor junior developers, review code, and promote best engineering practices
- Stay updated on emerging technologies, trends, and tools applicable to enterprise software development
Qualifications
- Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field
- Relevant certifications (e.g., Java certifications, cloud certifications) are advantageous
- Extensive hands-on experience in Java, microservices, and enterprise application development
- Exposure to big data, cloud deployment, and container orchestration preferred
Professional Competencies
- Strong analytical and problem-solving skills for complex technical challenges
- Leadership qualities, including mentoring and guiding team members
- Effective communication skills for stakeholder engagement and documentation
- Ability to work independently and collaboratively within Agile teams
- Continuous improvement mindset, eager to adapt and incorporate new technologies
- Good organizational and time management skills for handling multiple priorities
SYNECHRON'S DIVERSITY & INCLUSION STATEMENT
Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative "Same Difference" is committed to fostering an inclusive culture promoting equality, diversity and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company.
We encourage applicants from across diverse backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
Posted 1 month ago
3.0 - 6.0 years
5 - 8 Lacs
Bengaluru
Work from Office
BS or higher degree in Computer Science (or an equivalent field). 3-6+ years of programming experience with Java and Python. Strong SQL query-writing skills and an understanding of Kafka, Scala, and Spark/Flink. Exposure to AWS Lambda, AWS CloudWatch, Step Functions, EC2, CloudFormation, Jenkins.
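"Strong in writing SQL queries" in roles like this usually means analytic SQL, not just lookups. A minimal, hedged sketch using Python's bundled `sqlite3` (SQLite 3.25+ for window-function support); the `orders` table and its columns are invented for illustration:

```python
# Per-user running total via a SQL window function, the kind of
# analytic query a data-engineering role expects. Table and column
# names are invented; any SQLite >= 3.25 supports this syntax.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (user_id TEXT, ts INTEGER, amount INTEGER)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [("alice", 1, 10), ("alice", 2, 5), ("bob", 1, 7), ("alice", 3, 3)],
)

# SUM ... OVER (PARTITION BY ... ORDER BY ...) accumulates within each
# user's partition in timestamp order.
rows = conn.execute(
    """
    SELECT user_id,
           amount,
           SUM(amount) OVER (PARTITION BY user_id ORDER BY ts) AS running_total
    FROM orders
    ORDER BY user_id, ts
    """
).fetchall()

for user_id, amount, total in rows:
    print(user_id, amount, total)
```

The same `PARTITION BY` pattern carries over to warehouse engines (Redshift, Snowflake, BigQuery) with identical syntax.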
Posted 1 month ago
10.0 - 12.0 years
9 - 13 Lacs
Chennai
Work from Office
Job Title: Data Architect. Experience: 10-12 Years. Location: Chennai.
- 10-12 years' experience as a Data Architect
- Strong expertise in streaming data technologies like Apache Kafka, Flink, Spark Streaming, or Kinesis
- Proficiency in programming languages such as Python, Java, Scala, or Go
- Experience with big data tools like Hadoop, Hive, and data warehouses such as Snowflake, Redshift, Databricks, Microsoft Fabric
- Proficiency in database technologies (SQL, NoSQL, PostgreSQL, MongoDB, DynamoDB, YugabyteDB)
- Should be flexible to work as an individual contributor
Posted 1 month ago
8.0 - 12.0 years
20 - 25 Lacs
Bengaluru, Mumbai (All Areas)
Work from Office
Senior Java Engineer with 8+ years of experience in backend development using Java, Spring Boot, Microservices, and event-driven systems. Expertise with Apache Flink for real-time data processing. Strong experience with REST APIs, SQL, and AWS/GCP.
Posted 1 month ago
3.0 - 8.0 years
7 - 17 Lacs
Mumbai, India
Work from Office
Job Title: Data Engineer Overview: As a Senior Data Engineer, you will play a pivotal role in designing, implementing, and maintaining the data infrastructure of our organization. You will be responsible for developing robust data pipelines, optimizing data workflows, and ensuring the reliability and scalability of our data systems. Collaboration with cross-functional teams including data scientists, software engineers, and business analysts will be essential to drive data-driven decision-making and support various business initiatives. Responsibilities: 1. Data Pipeline Development: - Design, build, and maintain scalable data pipelines to ingest, process, and transform large volumes of structured and unstructured data from diverse sources. - Implement efficient ETL (Extract, Transform, Load) processes to ensure timely and accurate data delivery to downstream systems. - Optimize data workflows for performance, reliability, and cost-effectiveness, leveraging technologies such as Apache Spark, Kafka, or similar distributed computing frameworks. 2. Data Modeling and Architecture: - Develop and maintain data models, schemas, and metadata to support analytical and reporting requirements. - Design and optimize data storage solutions including relational databases, NoSQL databases, data lakes, and data warehouses. - Collaborate with data architects and infrastructure teams to ensure alignment with overall data architecture and best practices. 3. Data Quality and Governance: - Implement data quality checks and validation processes to ensure data accuracy, consistency, and completeness. - Establish and enforce data governance policies, standards, and procedures to maintain data integrity and compliance with regulatory requirements. - Monitor data pipelines and proactively identify and resolve data quality issues or anomalies. 4. 
Performance Tuning and Optimization: - Identify performance bottlenecks in data processing and storage systems and implement optimizations to improve throughput, latency, and resource utilization. - Conduct capacity planning and scalability assessments to support growing data volumes and user demands. - Collaborate with infrastructure teams to fine-tune hardware configurations and cloud resources for optimal performance. 5. Collaboration and Communication: - Work closely with cross-functional teams including data scientists, software engineers, business analysts, and product managers to understand data requirements and deliver data solutions that meet business needs. - Communicate effectively with stakeholders to gather requirements, provide project updates, and present insights derived from data analysis. - Mentor junior data engineers and provide technical guidance and support as needed. Qualifications: - Bachelor's or Master's degree in Computer Science, Engineering, or a related field. - 9+ years of experience in data engineering roles, with a proven track record of designing and implementing data solutions at scale. - Proficiency in programming languages such as Python, Java, or Scala, with experience in building data pipelines using frameworks like Apache Spark, Apache Beam, or similar. - Strong SQL skills and experience working with relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Cassandra). - Hands-on experience with cloud platforms such as AWS, GCP, or Azure, including services like S3, EC2, EMR, Dataflow, BigQuery, or equivalent. - Solid understanding of data modeling concepts, data warehousing principles, and ETL best practices. - Familiarity with data governance frameworks, data quality tools, and regulatory requirements (e.g., GDPR, HIPAA). - Excellent problem-solving skills and the ability to work effectively in a fast-paced, dynamic environment. 
- Strong communication and collaboration skills, with the ability to interact with stakeholders at all levels of the organization. Additional Preferred Skills: - Experience with containerization and orchestration tools such as Docker, Kubernetes. - Knowledge of stream processing frameworks like Apache Kafka, Apache Flink, or similar. - Familiarity with machine learning and data science concepts. - Certification in relevant technologies or cloud platforms (e.g., AWS Certified Big Data - Specialty, Google Professional Data Engineer).
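The pipeline-development and data-quality responsibilities above follow the classic extract-transform-load split with bad rows quarantined rather than silently dropped. A hedged, pure-Python miniature (no Spark/Beam); the record fields and the single quality rule are invented for illustration:

```python
# Minimal ETL sketch: extract -> validate/transform -> load, with a
# data-quality check that quarantines rows failing validation.
# Field names and the sinks are invented stand-ins.
from typing import Iterable

def extract() -> Iterable[dict]:
    # Stand-in for reading from a source system (DB, S3, Kafka, ...).
    return [
        {"id": 1, "amount": "12.50", "currency": "USD"},
        {"id": 2, "amount": "bad", "currency": "USD"},  # fails validation
        {"id": 3, "amount": "7.00", "currency": "EUR"},
    ]

def transform(rows: Iterable[dict]) -> tuple[list[dict], list[dict]]:
    good, rejected = [], []
    for row in rows:
        try:
            # Type coercion doubles as the validation step here.
            good.append({**row, "amount": float(row["amount"])})
        except ValueError:
            rejected.append(row)  # quarantine instead of dropping silently
    return good, rejected

def load(rows: list[dict], sink: list) -> None:
    sink.extend(rows)  # stand-in for a warehouse write

warehouse: list[dict] = []
good, rejected = transform(extract())
load(good, warehouse)
print(len(warehouse), len(rejected))  # prints "2 1"
```

In a real pipeline each stage would be a separate, independently monitored task, and the rejected rows would land in a dead-letter table for inspection.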
Posted 1 month ago
4.0 - 6.0 years
1 - 2 Lacs
Mumbai, New Delhi, Bengaluru
Work from Office
Responsibilities:
- Design and implement scalable data pipelines to ingest, process, and analyze large volumes of structured and unstructured data from various sources.
- Develop and optimize data storage solutions, including data warehouses, data lakes, and NoSQL databases, to support efficient data retrieval and analysis.
- Implement data processing frameworks and tools such as Apache Hadoop, Spark, Kafka, and Flink to enable real-time and batch data processing.
- Collaborate with data scientists and analysts to understand data requirements and develop solutions that enable advanced analytics, machine learning, and reporting.
- Ensure data quality, integrity, and security by implementing best practices for data governance, metadata management, and data lineage.
- Monitor and troubleshoot data pipelines and infrastructure to ensure reliability, performance, and scalability.
- Develop and maintain ETL (Extract, Transform, Load) processes to integrate data from various sources and transform it into usable formats.
- Stay current with emerging technologies and trends in big data and cloud computing, and evaluate their applicability to enhance our data engineering capabilities.
- Document data architectures, pipelines, and processes to ensure clear communication and knowledge sharing across the team.
Requirements: Strong programming skills in Java, Python, or Scala. Strong understanding of data modelling, data warehousing, and ETL processes. Minimum 4 to maximum 6 years of relevant experience. Strong understanding of Big Data technologies and their architectures, including Hadoop, Spark, and NoSQL databases.
Locations: Mumbai, Delhi NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
Posted 1 month ago
5.0 - 8.0 years
12 - 16 Lacs
Bengaluru
Work from Office
Gen AI. Experience: 4-12 years. Work Location: Chennai/Bangalore.
Mandatory Skills: Gen AI, LLM, RAG, LangChain, Llama, AI/ML, Deep Learning, Python, TensorFlow, PyTorch, Pandas, Prompt Engineering, Vector DB, MLOps.
Preferred Skills: AWS, GCP, or Azure Cloud; GPT-4; SQL; FastAPI/API development; Docker/Kubernetes; Hadoop, Spark, or Apache Flink data pipelines; banking exposure.
Educational Background: Bachelor's or Master's degree in Computer Science, Artificial Intelligence, or a related field.
- Programming Languages: Proficiency in Python, R, and API development (Python/FastAPI). Experience with libraries and frameworks such as TensorFlow, PyTorch, Keras, etc.
- Gen AI & RAG: Experienced in Gen AI models, LangChain/Langflow, and prompt engineering.
- Machine Learning: Strong understanding of machine learning algorithms and deep learning.
- Data Science: Experience with data analysis, data visualization, and statistical modeling.
- Big Data Technologies: Familiarity with big data processing frameworks like Hadoop and Spark.
- Cloud Platforms: Experience with cloud services such as AWS, Google Cloud, or Azure.
- Awareness of applied statistics; experienced in algorithms and model development/evaluation.
- Exposure to MLOps and awareness of containerisation with Docker/Kubernetes.
- Problem-Solving: Strong analytical and problem-solving skills.
- Communication: Excellent verbal and written communication skills.
Performance Parameters and Measures: 1. Continuous Integration, Deployment & Monitoring of Software: 100% error-free onboarding & implementation, throughput %, adherence to the schedule/release plan. 2. Quality & CSAT: on-time delivery, manage software, troubleshoot queries, customer experience, completion of assigned certifications for skill upgradation. 3. MIS & Reporting: 100% on-time MIS & report generation.
Mandatory Skills: Generative AI. Experience: 5-8 Years.
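The RAG skills listed above reduce to embedding documents, indexing the vectors, and retrieving the nearest ones as context for an LLM prompt. A hedged toy sketch of just the retrieval step; the 3-dimensional "embeddings" are hand-written stand-ins for a real embedding model's output, and the list stands in for a vector database such as Qdrant:

```python
# Toy retrieval step of a RAG pipeline: rank documents by cosine
# similarity to a query embedding. Vectors and documents are invented;
# real systems use a learned embedding model and a vector DB.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

index = [  # (document, embedding) pairs, standing in for a vector store
    ("Flink handles stateful stream processing.", [0.9, 0.1, 0.0]),
    ("PostgreSQL stores relational data.",        [0.1, 0.9, 0.1]),
    ("Kafka is a distributed event log.",         [0.8, 0.2, 0.1]),
]

def retrieve(query_vec, k=2):
    # Top-k nearest documents become the context stuffed into the prompt.
    ranked = sorted(index, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

# A query about streaming would embed near the first axis here.
context = retrieve([1.0, 0.0, 0.0])
print(context)
```

Production systems replace the linear scan with an approximate-nearest-neighbour index (HNSW etc.), but the ranking contract is the same.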
Posted 1 month ago
9.0 - 14.0 years
15 - 19 Lacs
Bengaluru
Work from Office
About the Role: We are looking for an Associate Architect with at least 9 years of experience to help scale and modernize Myntra's data platform. The ideal candidate will have a strong background in building scalable data platforms using a combination of open-source technologies and enterprise solutions. The role demands deep technical expertise in data ingestion, processing, serving, and governance, with a strategic mindset to scale the platform 10x to meet the ever-growing data needs across the organization. This is a high-impact role requiring innovation, engineering excellence and system stability, with an opportunity to contribute to OSS projects and build data products leveraging available data assets.
Key Responsibilities:
- Design and scale Myntra's data platform to support growing data needs across analytics, ML, and reporting.
- Architect and optimize streaming data ingestion pipelines using Debezium, Kafka (Confluent), Databricks Spark and Flink.
- Lead improvements in data processing and serving layers, leveraging Databricks Spark, Trino, and Superset.
- Good understanding of open table formats like Delta and Iceberg.
- Scale data quality frameworks to ensure data accuracy and reliability.
- Build data lineage tracking solutions for governance, access control, and compliance.
- Collaborate with engineering, analytics, and business teams to identify opportunities and build/enhance self-serve data platforms.
- Improve system stability, monitoring, and observability to ensure high availability of the platform.
- Work with open-source communities and contribute to OSS projects aligned with Myntra's tech stack.
- Implement cost-efficient, scalable architectures for handling 10B+ daily events in a cloud environment.
Qualifications:
Education: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
Experience:
- 9+ years of experience in building large-scale data platforms.
- Expertise in big data architectures using Databricks, Trino, and Debezium.
- Strong experience with streaming platforms, including Confluent Kafka.
- Experience in data ingestion, storage, processing, and serving in a cloud-based environment.
- Hands-on experience implementing data quality checks using Great Expectations.
- Deep understanding of data lineage, metadata management, and governance practices.
- Strong knowledge of query optimization, cost efficiency, and scaling architectures.
- Familiarity with OSS contributions and keeping up with industry trends in data engineering.
Soft Skills:
- Strong analytical and problem-solving skills with a pragmatic approach to technical challenges.
- Excellent communication and collaboration skills to work effectively with cross-functional teams.
- Ability to lead large-scale projects in a fast-paced, dynamic environment.
- Passion for continuous learning, open-source collaboration, and building best-in-class data products.
Posted 1 month ago
9.0 - 14.0 years
11 - 16 Lacs
Bengaluru
Work from Office
About the Role: We are looking for an Associate Architect with at least 9 years of experience to help scale and modernize Myntra's data platform. The ideal candidate will have a strong background in building scalable data platforms using a combination of open-source technologies and enterprise solutions. The role demands deep technical expertise in data ingestion, processing, serving, and governance, with a strategic mindset to scale the platform 10x to meet the ever-growing data needs across the organization. This is a high-impact role requiring innovation, engineering excellence and system stability, with an opportunity to contribute to OSS projects and build data products leveraging available data assets.
Key Responsibilities:
- Design and scale Myntra's data platform to support growing data needs across analytics, ML, and reporting.
- Architect and optimize streaming data ingestion pipelines using Debezium, Kafka (Confluent), Databricks Spark and Flink.
- Lead improvements in data processing and serving layers, leveraging Databricks Spark, Trino, and Superset.
- Good understanding of open table formats like Delta and Iceberg.
- Scale data quality frameworks to ensure data accuracy and reliability.
- Build data lineage tracking solutions for governance, access control, and compliance.
- Collaborate with engineering, analytics, and business teams to identify opportunities and build/enhance self-serve data platforms.
- Improve system stability, monitoring, and observability to ensure high availability of the platform.
- Work with open-source communities and contribute to OSS projects aligned with Myntra's tech stack.
- Implement cost-efficient, scalable architectures for handling 10B+ daily events in a cloud environment.
Qualifications:
Education: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
Experience:
- 9+ years of experience in building large-scale data platforms.
- Expertise in big data architectures using Databricks, Trino, and Debezium.
- Strong experience with streaming platforms, including Confluent Kafka.
- Experience in data ingestion, storage, processing, and serving in a cloud-based environment.
- Hands-on experience implementing data quality checks using Great Expectations.
- Deep understanding of data lineage, metadata management, and governance practices.
- Strong knowledge of query optimization, cost efficiency, and scaling architectures.
- Familiarity with OSS contributions and keeping up with industry trends in data engineering.
Soft Skills:
- Strong analytical and problem-solving skills with a pragmatic approach to technical challenges.
- Excellent communication and collaboration skills to work effectively with cross-functional teams.
- Ability to lead large-scale projects in a fast-paced, dynamic environment.
- Passion for continuous learning, open-source collaboration, and building best-in-class data products.
Posted 1 month ago
9.0 - 14.0 years
30 - 35 Lacs
Bengaluru
Work from Office
About the Role: We are looking for an Associate Architect with at least 9 years of experience to help scale and modernize Myntra's data platform. The ideal candidate will have a strong background in building scalable data platforms using a combination of open-source technologies and enterprise solutions. The role demands deep technical expertise in data ingestion, processing, serving, and governance, with a strategic mindset to scale the platform 10x to meet the ever-growing data needs across the organization. This is a high-impact role requiring innovation, engineering excellence and system stability, with an opportunity to contribute to OSS projects and build data products leveraging available data assets.
Key Responsibilities:
- Design and scale Myntra's data platform to support growing data needs across analytics, ML, and reporting.
- Architect and optimize streaming data ingestion pipelines using Debezium, Kafka (Confluent), Databricks Spark and Flink.
- Lead improvements in data processing and serving layers, leveraging Databricks Spark, Trino, and Superset.
- Good understanding of open table formats like Delta and Iceberg.
- Scale data quality frameworks to ensure data accuracy and reliability.
- Build data lineage tracking solutions for governance, access control, and compliance.
- Collaborate with engineering, analytics, and business teams to identify opportunities and build/enhance self-serve data platforms.
- Improve system stability, monitoring, and observability to ensure high availability of the platform.
- Work with open-source communities and contribute to OSS projects aligned with Myntra's tech stack.
- Implement cost-efficient, scalable architectures for handling 10B+ daily events in a cloud environment.
Education: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
Experience:
- 9+ years of experience in building large-scale data platforms.
- Expertise in big data architectures using Databricks, Trino, and Debezium.
- Strong experience with streaming platforms, including Confluent Kafka.
- Experience in data ingestion, storage, processing, and serving in a cloud-based environment.
- Hands-on experience implementing data quality checks using Great Expectations.
- Deep understanding of data lineage, metadata management, and governance practices.
- Strong knowledge of query optimization, cost efficiency, and scaling architectures.
- Familiarity with OSS contributions and keeping up with industry trends in data engineering.
Soft Skills:
- Strong analytical and problem-solving skills with a pragmatic approach to technical challenges.
- Excellent communication and collaboration skills to work effectively with cross-functional teams.
- Ability to lead large-scale projects in a fast-paced, dynamic environment.
- Passion for continuous learning, open-source collaboration, and building best-in-class data products.
Posted 1 month ago
5.0 - 10.0 years
8 - 17 Lacs
Hyderabad
Work from Office
Role & responsibilities
Tech Stack: Spark, Scala, EMR, Glue, Agile, SQL
Required Skills: As a Senior Spark Engineer (Scala), you'll partner in a team of experienced software engineers, removing impediments and enabling the teams to deliver business value. Ensure team ownership of legacy systems with an emphasis on maintaining operational stability. Be a passionate leader committed to the development and mentorship of your teams. Partner with business and IT stakeholders to ensure alignment with key corporate priorities. Share ideas and work to bring people together to help solve sophisticated problems. Create a positive and collaborative environment by championing open communication and soliciting continuous feedback. Stay current with new technology trends.
Additional Responsibilities: Participates in the discussion and documentation of best practices and standards for application development. Complies with all company policies and procedures. Remains current in profession and industry trends. Successfully completes regulatory and job training requirements.
Required Experience:
- 5+ years of hands-on software engineering experience with an object-oriented language, primarily Scala.
- 5+ years of experience using Spark, EMR, Glue, or other serverless compute technology in the cloud.
- 5+ years of experience architecting and enhancing data platforms and service-oriented architectures.
- Experience working within Agile/DevSecOps development environments.
- Excellent communication, collaboration, and mentoring skills.
- More recent experience in cloud development preferred.
- Experience working with modern, web-based architectures, including REST APIs, serverless, and event-driven microservices.
- Bachelor's degree or equivalent in Computer Science, Information Technology, or a related discipline.
Desired Experience:
- Experience working with financial management stakeholders.
- Experience with Workday or other large ERP platforms desired.
- Life insurance or financial services industry experience a plus.
Posted 1 month ago
5.0 - 10.0 years
10 - 20 Lacs
Noida, Hyderabad, Greater Noida
Work from Office
Streaming data technical skills requirements:
Experience: 5+ years
- Solid hands-on and solution-architecting experience in Big Data technologies (AWS preferred)
- Hands-on experience with AWS DynamoDB, EKS, Kafka, Kinesis, Glue, EMR
- Hands-on experience with a programming language like Scala with Spark
- Good command and working experience of Hadoop MapReduce, HDFS, Hive, HBase, and/or NoSQL databases
- Hands-on working experience with a data engineering/analytics platform (Hortonworks / Cloudera / MapR / AWS), AWS preferred
- Hands-on experience with data ingestion: Apache NiFi, Apache Airflow, Sqoop, and Oozie
- Hands-on working experience of data processing at scale with event-driven systems and message queues (Kafka / Flink / Spark Streaming)
- Hands-on working experience with AWS services like EMR, Kinesis, S3, CloudFormation, Glue, API Gateway, Lake Formation
- Hands-on working experience with AWS Athena
- Experience building data pipelines for structured/unstructured, real-time/batch, and synchronous/asynchronous events using MQ, Kafka, and stream processing
Mandatory Skills: Spark, Scala, AWS, Hadoop
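Event-driven processing at the scale described above hinges on windowing and late-arrival handling, the same concepts Kafka/Flink/Spark Streaming jobs are built around. A hedged pure-Python sketch of tumbling windows with a watermark; the events, 10-second window size, and 5-second allowed lateness are all invented for illustration:

```python
# Tumbling-window count with a watermark: late events within the
# allowed lateness are folded into their (still-open) window; events
# older than the watermark's window are dropped, as a stream processor
# would route them to a side output.
from collections import defaultdict

WINDOW = 10            # seconds; tumbling windows [0,10), [10,20), ...
ALLOWED_LATENESS = 5   # how far behind event time may lag

def window_start(ts):
    return (ts // WINDOW) * WINDOW

counts = defaultdict(int)
dropped = []
watermark = 0  # highest event time seen, minus allowed lateness

events = [(1, "a"), (4, "b"), (11, "c"), (12, "d"), (3, "e"), (25, "f"), (2, "g")]
for ts, key in events:
    watermark = max(watermark, ts - ALLOWED_LATENESS)
    if ts < window_start(watermark):
        dropped.append((ts, key))  # too late: its window has closed
        continue
    counts[window_start(ts)] += 1

# Event "e" (ts=3) arrives late but within lateness and still counts;
# event "g" (ts=2) arrives after window [0,10) closed and is dropped.
print(dict(counts), dropped)
```

Flink expresses the same idea with `TumblingEventTimeWindows` plus a watermark strategy and `allowedLateness`; the sketch only illustrates the semantics.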
Posted 1 month ago
6.0 - 11.0 years
14 - 24 Lacs
Bengaluru
Work from Office
Automation NoSQL Data Engineer. This role has been designed as Onsite, with an expectation that you will primarily work from an HPE partner/customer office. Who We Are: Hewlett Packard Enterprise is the global edge-to-cloud company advancing the way people live and work. We help companies connect, protect, analyze, and act on their data and applications wherever they live, from edge to cloud, so they can turn insights into outcomes at the speed required to thrive in today's complex world. Our culture thrives on finding new and better ways to accelerate what's next. We know varied backgrounds are valued and succeed here. We have the flexibility to manage our work and personal needs. We make bold moves, together, and are a force for good. If you are looking to stretch and grow your career, our culture will embrace you. Open up opportunities with HPE. Job Description: HPE Operations is our innovative IT services organization. It provides the expertise to advise, integrate, and accelerate our customers' outcomes from their digital transformation. Our teams collaborate to transform insight into innovation. In today's fast-paced, hybrid IT world, being at business speed means overcoming IT complexity to match the speed of actions to the speed of opportunities. Deploy the right technology to respond quickly to market possibilities. Join us and redefine what's next for you. What you will do: Think through complex data engineering problems in a fast-paced environment and drive solutions to reality. Work in a dynamic, collaborative environment to build DevOps-centered data solutions using the latest technologies and tools. Provide engineering-level support for data tools and systems deployed in customer environments. Respond quickly and professionally to customer emails/requests for assistance. What you need to bring: Bachelor's degree in Computer Science, Information Systems, or equivalent.
7+ years of demonstrated experience working in software development teams with a strong focus on NoSQL databases and distributed data systems. Strong experience in automated deployment, troubleshooting, and fine-tuning technologies such as Apache Cassandra, Clickhouse, MongoDB, Apache Spark, Apache Flink, Apache Airflow, and similar technologies. Technical Skills: Strong knowledge of NoSQL databases such as Apache Cassandra, Clickhouse, and MongoDB, including their installation, configuration, and performance tuning in production environments. Expertise in deploying and managing real-time data processing pipelines using Apache Spark, Apache Flink, and Apache Airflow. Experience in deploying and managing Apache Spark and Apache Flink operators on Kubernetes and other containerized environments, ensuring high availability and scalability of data processing jobs. Hands-on experience in configuring and optimizing Apache Spark and Apache Flink clusters, including fine-tuning resource allocation, fault tolerance, and job execution. Proficiency in authoring, automating, and optimizing Apache Airflow DAGs for orchestrating complex data workflows across Spark and Flink jobs. Strong experience with container orchestration platforms (like Kubernetes) to deploy and manage Spark/Flink operators and data pipelines. Proficiency in creating, managing, and optimizing Airflow DAGs to automate data pipeline workflows, handle retries, task dependencies, and scheduling. Solid experience in troubleshooting and optimizing performance in distributed data systems. Expertise in automated deployment and infrastructure management using tools such as Terraform, Chef, Ansible, Kubernetes, or similar technologies. Experience with CI/CD pipelines using tools like Jenkins, GitLab CI, Bamboo, or similar. Strong knowledge of scripting languages such as Python, Bash, or Go for automation, provisioning Platform-as-a-Service, and workflow orchestration. 
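The Airflow-DAG skills listed above come down to expressing tasks and their upstream dependencies as a directed acyclic graph and executing them in dependency order. A hedged, stand-alone sketch using the standard library's `graphlib` (no Airflow import); the task names and dependency graph are invented for illustration:

```python
# Minimal DAG-ordering sketch: compute a valid execution order for
# tasks with dependencies, the core of what Airflow does when it
# schedules Spark/Flink jobs. Task names are invented stand-ins.
from graphlib import TopologicalSorter

# task -> set of upstream tasks that must finish first
dag = {
    "extract": set(),
    "spark_transform": {"extract"},
    "flink_enrich": {"extract"},
    "load_warehouse": {"spark_transform", "flink_enrich"},
}

order = list(TopologicalSorter(dag).static_order())
print(order)  # upstream tasks always precede their downstream tasks

# Sanity check: every task appears after all of its dependencies.
pos = {task: i for i, task in enumerate(order)}
assert all(pos[dep] < pos[task] for task, deps in dag.items() for dep in deps)
```

Airflow adds retries, scheduling, and per-task operators on top, but an Airflow DAG definition (`task_a >> task_b`) is declaring exactly this kind of graph.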
Additional Skills: Accountability, Active Learning (Inactive), Active Listening, Bias, Business Growth, Client Expectations Management, Coaching, Creativity, Critical Thinking, Cross-Functional Teamwork, Customer Centric Solutions, Customer Relationship Management (CRM), Design Thinking, Empathy, Follow-Through, Growth Mindset, Information Technology (IT) Infrastructure, Infrastructure as a Service (IaaS), Intellectual Curiosity (Inactive), Long Term Planning, Managing Ambiguity, Process Improvements, Product Services, Relationship Building {+ 5 more} What We Can Offer You: Health & Wellbeing We strive to provide our team members and their loved ones with a comprehensive suite of benefits that supports their physical, financial and emotional wellbeing. Personal & Professional Development We also invest in your career because the better you are, the better we all are. We have specific programs catered to helping you reach any career goals you have — whether you want to become a knowledge expert in your field or apply your skills to another division. Unconditional Inclusion We are unconditionally inclusive in the way we work and celebrate individual uniqueness. We know varied backgrounds are valued and succeed here. We have the flexibility to manage our work and personal needs. We make bold moves, together, and are a force for good. Let's Stay Connected: Follow @HPECareers on Instagram to see the latest on people, culture and tech at HPE. #india #operations Job: Services Job Level: TCP_03 HPE is an Equal Employment Opportunity/ Veterans/Disabled/LGBT employer. We do not discriminate on the basis of race, gender, or any other protected category, and all decisions we make are made on the basis of qualifications, merit, and business need. Our goal is to be one global team that is representative of our customers, in an inclusive environment where we can continue to innovate and grow together. Please click here: Equal Employment Opportunity. 
Hewlett Packard Enterprise is EEO Protected Veteran/ Individual with Disabilities. HPE will comply with all applicable laws related to employer use of arrest and conviction records, including laws requiring employers to consider for employment qualified applicants with criminal histories.
Posted 1 month ago