8.0 - 13.0 years
25 - 40 Lacs
Chennai
Work from Office
Architect & Build Scalable Systems: Design and implement petabyte-scale lakehouse architectures to unify data lakes and warehouses. Real-Time Data Engineering: Develop and optimize streaming pipelines using Kafka, Pulsar, and Flink. Required Candidate profile: Data engineering experience with large-scale systems • Expert proficiency in Java for data-intensive applications • Hands-on experience with lakehouse architectures, stream processing, and event streaming
Posted 1 month ago
5.0 - 10.0 years
3 - 7 Lacs
Bengaluru
Work from Office
Your Role and Responsibilities: As a Technical Support Professional, you should have experience in a customer-facing leadership capacity. This role necessitates exceptional customer relationship management skills along with a solid technical grasp of the product(s) you will support. The Technical Support Professional is expected to adeptly manage conflicting priorities, thrive under pressure, and autonomously navigate tasks with minimal active guidance. The successful applicant should possess a comprehensive understanding of IBM support, development, and service processes and deliveries. Knowledge of other IBM business procedures and professional training in mediation or conflict resolution would be advantageous. Your primary responsibilities include: Direct Problem-Solving Experience: Previous experience in addressing client issues is valuable, along with a demonstrated ability to effectively resolve problems. Strong Communication Skills: Ability to communicate clearly with both internal and external clients through spoken and written channels. Business Networking Experience: In-depth experience and understanding of the IBM and/or OEM support organizations, facilitating effective networking and collaboration. Excellent Coordination, Leadership & Organizational Skills: Exceptional coordination and organizational abilities, capable of leading diverse teams and multitasking within a team-based business network environment. Proficiency in project management is beneficial. Excellence in Client Service & Client Satisfaction: Personal commitment to pursuing client satisfaction and continuous improvement in the delivery of client problem resolution. Language Skills: Proficiency in English is required, with fluency in multiple languages considered advantageous. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Bachelor's Degree. Experience: 5+ years. Basic knowledge of operating system administration (Windows, Linux). Basic knowledge of database administration (DB2, Oracle, MS SQL). English: fluent in speaking and writing. Analytical thinking and structured problem-solving techniques. Strong positive customer service attitude with sensitivity to client satisfaction. Must be a self-starter and quick learner who enjoys working in a challenging, fast-paced environment. Strong analytical and troubleshooting skills, including problem recreation, analyzing logs and traces, and debugging complex issues to determine a course of action and recommend solutions. Preferred technical and professional experience: Master's Degree in Information Technology. Knowledge of OpenShift. Knowledge of Apache Flink and Kafka. Knowledge of Kibana. Knowledge of containerization and Kubernetes. Knowledge of scripting (including Python, JavaScript). Knowledge of products in IBM's Digital Business Automation product family. Knowledge of process/data mining. Basic knowledge of LDAP. Basic knowledge of AI technologies. Experience in Technical Support is a plus.
Posted 1 month ago
3.0 - 7.0 years
17 - 20 Lacs
Bengaluru
Work from Office
Job Title: Industry & Function AI Data Engineer + S&C GN. Management Level: 09 - Consultant. Location: Primary - Bengaluru, Secondary - Gurugram. Must-Have Skills: Data Engineering expertise; cloud platforms: AWS, Azure, GCP; proficiency in Python, SQL, PySpark and ETL frameworks. Good-to-Have Skills: LLM architecture; containerization tools: Docker, Kubernetes; real-time data processing tools: Kafka, Flink; certifications like AWS Certified Data Analytics Specialty, Google Professional Data Engineer, Snowflake, DBT, etc. Job Summary: As a Data Engineer, you will play a critical role in designing, implementing, and optimizing data infrastructure to power analytics, machine learning, and enterprise decision-making. Your work will ensure high-quality, reliable data is accessible for actionable insights. This involves leveraging technical expertise, collaborating with stakeholders, and staying updated with the latest tools and technologies to deliver scalable and efficient data solutions. Roles & Responsibilities: Build and Maintain Data Infrastructure: Design, implement, and optimize scalable data pipelines and systems for seamless ingestion, transformation, and storage of data. Collaborate with Stakeholders: Work closely with business teams, data analysts, and data scientists to understand data requirements and deliver actionable solutions. Leverage Tools and Technologies: Utilize Python, SQL, PySpark, and ETL frameworks to manage large datasets efficiently. Cloud Integration: Develop secure, scalable, and cost-efficient solutions using cloud platforms such as Azure, AWS, and GCP. Ensure Data Quality: Focus on data reliability, consistency, and quality using automation and monitoring techniques. Document and Share Best Practices: Create detailed documentation, share best practices, and mentor team members to promote a strong data culture. Continuous Learning: Stay updated with the latest tools and technologies in data engineering through professional development opportunities. Professional & Technical Skills: Strong proficiency in programming languages such as Python, SQL, and PySpark. Experience with cloud platforms (AWS, Azure, GCP) and their data services. Familiarity with ETL frameworks and data pipeline design. Strong knowledge of traditional statistical methods and basic machine learning techniques. Knowledge of containerization tools (Docker, Kubernetes). Knowledge of LLM, RAG & Agentic AI architectures. Certification in Data Science or related fields (e.g., AWS Certified Data Analytics Specialty, Google Professional Data Engineer). Additional Information: The ideal candidate has a robust educational background in data engineering or a related field and a proven track record of building scalable, high-quality data solutions in the Consumer Goods sector. This position offers opportunities to design and implement cutting-edge data systems that drive business transformation, collaborate with global teams to solve complex data challenges, deliver measurable business outcomes, and enhance your expertise by working on innovative projects utilizing the latest technologies in cloud, data engineering, and AI. About Our Company | Accenture. Qualification: Experience: Minimum 3-7 years in data engineering or related fields, with a focus on the Consumer Goods industry. Educational Qualification: Bachelor's or Master's degree in Computer Science, Information Systems, Engineering, or a related field.
Posted 1 month ago
3.0 - 8.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Project Role : Application Developer Project Role Description : Design, build and configure applications to meet business process and application requirements. Must have skills : Data Engineering Good to have skills : NA. Minimum 3 year(s) of experience is required Educational Qualification : 15 years full time education Summary: As an Application Developer, you will be responsible for designing, building, and configuring applications to meet business process and application requirements. Your typical day will involve collaborating with team members to develop innovative solutions and ensure seamless application functionality. Roles & Responsibilities: - Expected to perform independently and become an SME. - Required active participation/contribution in team discussions. - Contribute to providing solutions to work-related problems. - Develop and implement efficient data pipelines for data processing. - Optimize data storage and retrieval processes to enhance performance. Professional & Technical Skills: - Must To Have Skills: Proficiency in Data Engineering. - Strong understanding of ETL processes and data modeling. - Experience with cloud platforms such as AWS or Azure. - Knowledge of programming languages like Python or Java. Additional Information: - The candidate should have a minimum of 3 years of experience in Data Engineering and Flink. - The candidate must have Flink knowledge. - This position is based at our Bengaluru office. - A 15 years full time education is required. Qualification: 15 years full time education
Posted 1 month ago
3.0 - 8.0 years
5 - 9 Lacs
Pune
Work from Office
Project Role : Application Developer Project Role Description : Design, build and configure applications to meet business process and application requirements. Must have skills : Data Engineering Good to have skills : NA. Minimum 3 year(s) of experience is required Educational Qualification : 15 years full time education Summary: As an Application Developer, you will engage in the design, construction, and configuration of applications tailored to fulfill specific business processes and application requirements. Your typical day will involve collaborating with team members to understand project needs, developing innovative solutions, and ensuring that applications function seamlessly to support organizational goals. You will also participate in testing and refining applications to enhance user experience and efficiency, while staying updated on industry trends and best practices to continuously improve your contributions. Roles & Responsibilities: - Expected to perform independently and become an SME. - Required active participation/contribution in team discussions. - Contribute to providing solutions to work-related problems. - Assist in the documentation of application processes and workflows to ensure clarity and consistency. - Engage in code reviews and provide constructive feedback to peers to foster a culture of continuous improvement. Professional & Technical Skills: - Must To Have Skills: Proficiency in Data Engineering. - Strong understanding of data modeling and ETL processes. - Experience with cloud platforms such as AWS or Azure for data storage and processing. - Familiarity with programming languages such as Python or Java for application development. - Knowledge of database management systems, including SQL and NoSQL databases. Additional Information: - The candidate should have a minimum of 3 years of experience in Data Engineering and Flink. - The candidate must have Flink knowledge. - This position is based at our Pune office. - A 15 years full time education is required. Qualification: 15 years full time education
Posted 1 month ago
3.0 - 5.0 years
5 - 8 Lacs
Bengaluru
Work from Office
What you’ll be doing: Assist in developing machine learning models based on project requirements. Work with datasets by preprocessing, selecting appropriate data representations, and ensuring data quality. Perform statistical analysis and fine-tuning using test results. Support training and retraining of ML systems as needed. Help build data pipelines for collecting and processing data efficiently. Follow coding and quality standards while developing AI/ML solutions. Contribute to frameworks that help operationalize AI models. What we seek in you: Strong programming skills in languages like Python and Java. Hands-on experience with at least one cloud (GCP preferred). Experience working with Docker. Environment management (e.g., venv, pip, poetry). Experience with orchestrators like Vertex AI Pipelines, Airflow, etc. Understanding of the full ML lifecycle end-to-end. Data engineering and feature engineering techniques. Experience with ML modelling and evaluation metrics. Experience with TensorFlow, PyTorch or another framework. Experience with model monitoring. Advanced SQL knowledge. Awareness of streaming concepts like windowing, late arrivals, triggers, etc. Storage: Cloud SQL, Cloud Storage, Cloud Bigtable, BigQuery, Cloud Spanner, Cloud Datastore, vector databases. Ingest: Pub/Sub, Cloud Functions, App Engine, Kubernetes Engine, Kafka, microservices. Schedule: Cloud Composer, Airflow. Processing: Cloud Dataproc, Cloud Dataflow, Apache Spark, Apache Flink. CI/CD: Bitbucket + Jenkins / GitLab. Infrastructure as Code: Terraform. Life at Next: At our core, we're driven by the mission of tailoring growth for our customers by enabling them to transform their aspirations into tangible outcomes. We're dedicated to empowering them to shape their futures and achieve ambitious goals. To fulfil this commitment, we foster a culture defined by agility, innovation, and an unwavering commitment to progress. Our organizational framework is both streamlined and vibrant, characterized by a hands-on leadership style that prioritizes results and fosters growth. Perks of working with us: Clear objectives to ensure alignment with our mission, fostering your meaningful contribution. Abundant opportunities for engagement with customers, product managers, and leadership. You'll be guided by progressive paths while receiving insightful guidance from managers through ongoing feedforward sessions. Cultivate and leverage robust connections within diverse communities of interest. Choose your mentor to navigate your current endeavors and steer your future trajectory. Embrace continuous learning and upskilling opportunities through Nexversity. Enjoy the flexibility to explore various functions, develop new skills, and adapt to emerging technologies. Embrace a hybrid work model promoting work-life balance. Access comprehensive family health insurance coverage, prioritizing the well-being of your loved ones. Embark on accelerated career paths to actualize your professional aspirations. Who we are? We enable high-growth enterprises to build hyper-personalized solutions to transform their vision into reality. With a keen eye for detail, we apply creativity, embrace new technology, and harness the power of data and AI to co-create solutions tailor-made to meet the unique needs of our customers. Join our passionate team and tailor your growth with us!
Posted 1 month ago
5.0 - 9.0 years
12 - 18 Lacs
Hyderabad
Work from Office
Job Description: Position: Sr. Data Engineer. Experience: Minimum 7 years. Location: Hyderabad. Job Summary: What You'll Do: Design and build efficient, reusable, and reliable data architecture leveraging technologies like Apache Flink, Spark, Beam and Redis to support large-scale, real-time, and batch data processing. Participate in architecture and system design discussions, ensuring alignment with business objectives and technology strategy, and advocating for best practices in distributed data systems. Independently perform hands-on development and coding of data applications and pipelines using Java, Scala, and Python, including unit testing and code reviews. Monitor key product and data pipeline metrics, identify root causes of anomalies, and provide actionable insights to senior management on data and business health. Maintain and optimize existing data lake infrastructure, lead migrations to lakehouse architectures, and automate deployment of data pipelines and machine learning feature engineering requests. Acquire and integrate data from primary and secondary sources, maintaining robust databases and data systems to support operational and exploratory analytics. Engage with internal stakeholders (business teams, product owners, data scientists) to define priorities, refine processes, and act as a point of contact for resolving stakeholder issues. Drive continuous improvement by establishing and promoting technical standards, enhancing productivity, monitoring, tooling, and adopting industry best practices. What You'll Bring: Bachelor's degree or higher in Computer Science, Engineering, or a quantitative discipline, or equivalent professional experience demonstrating exceptional ability. 7+ years of work experience in data engineering and platform engineering, with a proven track record in designing and building scalable data architectures. Extensive hands-on experience with modern data stacks, including data lake, lakehouse, streaming data (Flink, Spark), and AWS or equivalent cloud platforms. Cloud: AWS. Apache Flink/Spark, Redis. Database platform: Databricks. Proficiency in programming languages such as Java, Scala, and Python (good to have) for data engineering and pipeline development. Expertise in distributed data processing and caching technologies, including Apache Flink, Spark, and Redis. Experience with workflow orchestration, automation, and DevOps tools (Kubernetes, Git, Terraform, CI/CD). Ability to perform under pressure, managing competing demands and tight deadlines while maintaining high-quality deliverables. Strong passion and curiosity for data, with a commitment to data-driven decision making and continuous learning. Exceptional attention to detail and professionalism in report and dashboard creation. Excellent team player, able to collaborate across diverse functional groups and communicate complex technical concepts clearly. Outstanding verbal and written communication skills to effectively manage and articulate the health and integrity of data and systems to stakeholders. Please feel free to contact us: 9440806850. Email ID: careers@jayamsolutions.com
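For context, a minimal sketch of the kind of Flink streaming pipeline this role describes, written in Java (the posting's primary language). The broker address, topic, and group id are illustrative assumptions, and the filter/print stages stand in for real transformation and sink logic (Redis, a lakehouse table, etc.); it assumes the flink-streaming-java and flink-connector-kafka dependencies are on the classpath.

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class EventPipelineJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Kafka source: broker address, topic, and group id are illustrative.
        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("broker:9092")
                .setTopics("raw-events")
                .setGroupId("event-pipeline")
                .setStartingOffsets(OffsetsInitializer.latest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        env.fromSource(source, WatermarkStrategy.noWatermarks(), "raw-events")
           .filter(line -> !line.isEmpty())   // stand-in for real validation/enrichment
           .print();                          // stand-in for a real sink (Redis, lakehouse table, ...)

        env.execute("event-pipeline");
    }
}
```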
Posted 1 month ago
5.0 - 10.0 years
20 - 35 Lacs
Bengaluru
Work from Office
Senior Data Engineer Our Mission SPAN is enabling electrification for all We are a mission-driven company designing, building, and deploying products that electrify the built environment, reduce carbon emissions, and slow the effects of climate change. Decarbonization is the process of reducing or removing greenhouse gas emissions, especially carbon dioxide, from our atmosphere. Electrification is the process of replacing fossil fuel appliances that run on gas or oil with all-electric upgrades for a cleaner way to power our lives. At SPAN, we believe in: Enabling homes and vehicles powered by clean energy Making electrification upgrades possible Building more resilient homes with reliable backup Designing a flexible and distributed electrical grid The Role As a Data Engineer you would be working to design, build, test and create infrastructure necessary for real-time analytics and batch analytics pipelines. You will work with multiple teams within the org to provide analysis and insights on the data. You will also be involved in writing ETL processes that support data ingestion. You will also guide and enforce best practices for data management, governance and security. You will build infrastructure to monitor these data pipelines / ETL jobs / tasks and create tooling/infrastructure for providing visibility into these. Responsibilities We are looking for a Data Engineer with a passion for building data pipelines, working with product, data science and business intelligence teams and delivering great solutions. As a part of the team you will: Acquire deep business understanding of how SPAN data flows from IoT device to cloud through the system and build scalable and optimized data solutions that impact many stakeholders. Be an advocate for data quality and excellence of our platform. Build tools that help streamline the management and operation of our data ecosystem. Ensure best practices and standards in our data ecosystem are shared across teams. Work with teams within the company to build close relationships with our partners to understand the value our platform can bring and how we can make it better. Improve data discovery by creating data exploration processes and promoting adoption of data sources across the company. Have a desire to write tools and applications to automate work rather than do everything by hand. Assist internal teams in building out data logging, alerting and monitoring for their applications. Are passionate about CI/CD processes. Design, develop and establish KPIs to monitor analyses and provide strategic insights to drive growth and performance. About You Required Qualifications Bachelor's Degree in a quantitative discipline: computer science, statistics, operations research, informatics, engineering, applied mathematics, economics, etc. 5+ years of relevant work experience in data engineering, business intelligence, research or related fields. Expert-level, production-grade programming experience in at least one of these languages (Python, Kotlin, or other JVM-based languages). Experience in writing clean, concise and well-structured code in one of the above languages. Experience working with infrastructure-as-code tools: Pulumi, Terraform, etc. Experience working with CI/CD systems: CircleCI, GitHub Actions, Argo CD, etc. Experience managing data engineering infrastructure through Docker and Kubernetes. Experience working with low-latency data processing solutions like Flink, Prefect, AWS Kinesis, Kafka, Spark stream processing, etc.
Experience with SQL/relational databases and OLAP databases like Snowflake. Experience working in AWS: S3, Glue, Athena, MSK, EMR, ECR, etc. Bonus Qualifications Experience with the energy industry Experience with building IoT and/or hardware products Understanding of electrical systems and residential loads Experience with data visualization using Tableau. Experience with data-loading tools like Fivetran as well as data-debugging tools such as Datadog Life at SPAN Our Bengaluru team plays a pivotal role in SPAN's continued growth and expansion. Together, we're driving engineering, product development, and operational excellence to shape the future of home energy solutions. As part of our team in India, you'll have the opportunity to collaborate closely with our teams in the US and across the globe. This international collaboration fosters innovation, learning, and growth, while helping us achieve our bold mission of electrifying homes and advancing clean energy solutions worldwide. Our in-office culture offers the chance for dynamic interactions and hands-on teamwork, making SPAN a truly collaborative environment where every team member's contribution matters. Our climate-focused culture is driven by a team of forward-thinkers, engineers, and problem-solvers who push boundaries every day. Do mission-driven work: Every role at SPAN directly advances clean energy adoption. Bring powerful ideas to life: We encourage diverse ideas and perspectives to drive stronger products. Nurture an innovation-first mindset: We encourage big thinking and bold action. Deliver exceptional customer value: We value hard work, and the ability to deliver exceptional customer value. Benefits at SPAN India Generous paid leave Comprehensive Insurance & Health Benefits Centrally located office in Bengaluru with easy access to public transit, dining, and city amenities Interested in joining our team? Apply today and we'll be in touch with the next steps!
Posted 1 month ago
10.0 - 15.0 years
35 - 50 Lacs
Hyderabad, Bengaluru
Work from Office
Job Title: Senior Kafka Engineer Location: Hyderabad / Bangalore Work Mode: Work from Office | 24/7 Rotational Shifts Type: Full-Time Experience: 8+ Years About the Role: We're hiring a Senior Kafka Engineer to manage and enhance our Kafka infrastructure on AWS and the Confluent Platform. You'll lead efforts in building secure, scalable, and reliable data streaming solutions for high-impact FinTech systems. Key Responsibilities: Manage and optimize Kafka and Confluent deployments on AWS Design and maintain Kafka producers, consumers, streams, and connectors Define schema, partitioning, and retention policies Monitor performance using Prometheus, Grafana, and Confluent tools Automate infrastructure using Terraform, Helm, and Kubernetes (EKS) Ensure high availability, security, and disaster recovery Collaborate with teams and share Kafka best practices Required Skills: 8+ years in platform engineering, 5+ with Kafka & Confluent Strong Java or Python Kafka client development Hands-on with Schema Registry, Control Center, ksqlDB Kafka deployment on AWS (MSK or EC2) Kafka Connect, Streams, and schema tools Kubernetes (EKS), Terraform, Prometheus, Grafana Nice to Have: FinTech or regulated industry experience Knowledge of TLS, SASL/OAuth, RBAC Experience with Flink or Spark Streaming Kafka governance and multi-tenancy
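To illustrate the "schema, partitioning, and retention policies" responsibility above, here is a minimal sketch using the standard Kafka AdminClient in Java. The broker address, topic name, partition count, replication factor, and retention window are all illustrative choices for the sketch, not values from the posting; it assumes the kafka-clients dependency.

```java
import java.util.List;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.config.TopicConfig;

public class CreatePaymentsTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092"); // illustrative

        try (AdminClient admin = AdminClient.create(props)) {
            // 12 partitions for parallelism, replication factor 3 for durability.
            NewTopic topic = new NewTopic("payments.events", 12, (short) 3);
            // Retention and cleanup policy are declared per topic.
            topic.configs(Map.of(
                TopicConfig.RETENTION_MS_CONFIG, String.valueOf(7L * 24 * 60 * 60 * 1000), // 7 days
                TopicConfig.CLEANUP_POLICY_CONFIG, TopicConfig.CLEANUP_POLICY_DELETE));
            admin.createTopics(List.of(topic)).all().get(); // block until the broker confirms
        }
    }
}
```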
Posted 1 month ago
8.0 - 13.0 years
13 - 17 Lacs
Bengaluru
Work from Office
We are currently seeking a Cloud Solution Delivery Lead Consultant to join our team in Bangalore, Karnataka (IN-KA), India (IN). Data Engineer Lead: Robust hands-on experience with industry-standard tooling and techniques, including SQL, Git and CI/CD pipelines (mandatory). Management, administration, and maintenance of data streaming tools such as Kafka/Confluent Kafka and Flink. Experience with software support for applications written in Python & SQL. Administration, configuration and maintenance of Snowflake & DBT. Experience with data product environments that use tools such as Kafka Connect, Snyk, Confluent Schema Registry, Atlan, IBM MQ, SonarQube, Apache Airflow, Apache Iceberg, DynamoDB, Terraform and GitHub. Debugging issues, root cause analysis, and applying fixes. Management and maintenance of ETL processes (bug fixing and batch job monitoring). Training & Certification: Apache Kafka Administration; Snowflake Fundamentals/Advanced Training. Experience: 8 years of experience in a technical role working with AWS; at least 2 years in a leadership or management role.
Posted 1 month ago
7.0 - 12.0 years
13 - 18 Lacs
Bengaluru
Work from Office
We are currently seeking a Lead Data Architect to join our team in Bangalore, Karnataka (IN-KA), India (IN). Position Overview We are seeking a highly skilled and experienced Data Architect to join our dynamic team. The ideal candidate will have a strong background in designing and implementing data solutions using AWS infrastructure and a variety of core and supplementary technologies. This role requires a deep understanding of data architecture, cloud services, and the ability to drive innovative solutions to meet business needs. Key Responsibilities - Architect end-to-end data solutions using AWS services, including Lambda, SNS, S3, and EKS, plus Kafka and Confluent, all within a larger and overarching programme ecosystem - Architect data processing applications using Python, Kafka, Confluent Cloud and AWS - Ensure data security and compliance throughout the architecture - Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions - Optimize data flows for performance, cost-efficiency, and scalability - Implement data governance and quality control measures - Ensure delivery of CI, CD and IaC for NTT tooling, and as templates for downstream teams - Provide technical leadership and mentorship to development teams and lead engineers - Stay current with emerging technologies and industry trends Required Skills and Qualifications - Bachelor's degree in Computer Science, Engineering, or related field - 7+ years of experience in data architecture and engineering - Strong expertise in AWS cloud services, particularly Lambda, SNS, S3, and EKS - Strong experience with Confluent - Strong experience with Kafka - Solid understanding of data streaming architectures and best practices - Strong problem-solving skills and ability to think critically - Excellent communication skills to convey complex technical concepts to both technical and non-technical stakeholders - Knowledge of Apache Airflow for data orchestration Preferred Qualifications - An understanding of cloud networking patterns and practices - Experience with working on a library or other long-term product - Knowledge of the Flink ecosystem - Experience with Terraform - Deep experience with CI/CD pipelines - Strong understanding of the JVM language family - Understanding of GDPR and the correct handling of PII - Expertise with technical interface design - Use of Docker Responsibilities - Design and implement scalable data architectures using AWS services, Confluent and Kafka - Develop data ingestion, processing, and storage solutions using Python, AWS Lambda, Confluent and Kafka - Ensure data security and implement best practices using tools like Snyk - Optimize data pipelines for performance and cost-efficiency - Collaborate with data scientists and analysts to enable efficient data access and analysis - Implement data governance policies and procedures - Provide technical guidance and mentorship to junior team members - Evaluate and recommend new technologies to improve data architecture
Posted 1 month ago
8.0 - 12.0 years
4 - 8 Lacs
Pune
Work from Office
Roles & Responsibilities: Total 8-10 years of working experience. Experience/Needs: 8-10 years of experience with big data tools like Spark, Kafka, Hadoop etc. Design and deliver consumer-centric, high-performant systems. You would be dealing with huge volumes of data sets arriving through batch and streaming platforms. You will be responsible for building and delivering data pipelines that process, transform, integrate and enrich data to meet various demands from business. Mentor the team on infrastructure, networking, data migration, monitoring and troubleshooting aspects. Focus on automation using Infrastructure as Code (IaC), Jenkins, DevOps etc. Design, build, test and deploy streaming pipelines for data processing in real time and at scale. Experience with stream-processing systems like Storm, Spark Streaming, Flink etc. Experience with object-oriented/object function scripting languages: Scala, Java, etc. Develop software systems using test-driven development employing CI/CD practices. Partner with other engineers and team members to develop software that meets business needs. Follow Agile methodology for software development and technical documentation. Good to have banking/finance domain knowledge. Strong written and oral communication, presentation and interpersonal skills. Exceptional analytical, conceptual, and problem-solving abilities. Able to prioritize and execute tasks in a high-pressure environment. Experience working in a team-oriented, collaborative environment. 8-10 years of hands-on coding experience. Proficient in Java, with a good knowledge of its ecosystems. Experience with writing Spark code using the Scala language. Experience with big data tools like Sqoop, Hive, Pig, Hue. Solid understanding of object-oriented programming and HDFS concepts. Familiar with various design and architectural patterns. Experience with big data tools: Hadoop, Spark, Kafka, Flink, Hive, Sqoop etc. Experience with relational SQL and NoSQL databases like MySQL, PostgreSQL, MongoDB and Cassandra. Experience with data pipeline tools like Airflow, etc. Experience with AWS cloud services: EC2, S3, EMR, RDS, Redshift, BigQuery. Experience with stream-processing systems: Storm, Spark Streaming, Flink etc. Experience with object-oriented/object function scripting languages: Python, Java, Scala, etc. Expertise in design/developing platform components like caching, messaging, event processing, automation, transformation and tooling frameworks. Location: Pune/ Mumbai/ Bangalore/ Chennai
Posted 1 month ago
5.0 - 8.0 years
10 - 14 Lacs
Bengaluru
Work from Office
BS or higher degree in Computer Science (or equivalent field) 3-6+ years of programming experience with Java and Python Strong SQL query-writing skills and understanding of Kafka, Scala, Spark/Flink Exposure to AWS Lambda, AWS CloudWatch, Step Functions, EC2, CloudFormation, Jenkins
Posted 1 month ago
8.0 - 13.0 years
9 - 14 Lacs
Bengaluru
Work from Office
8+ years of combined experience in backend and data platform engineering roles. Worked on large-scale distributed systems. 5+ years of experience building data platforms with (one of) Apache Spark, Flink or similar frameworks. 7+ years of experience programming with Java. Experience building large-scale data/event pipelines. Experience with relational SQL and NoSQL databases, including Postgres/MySQL, Cassandra, MongoDB. Demonstrated experience with EKS, EMR, S3, IAM, KDA, Athena, Lambda, networking, ElastiCache and other AWS services.
Posted 1 month ago
1.0 - 3.0 years
4 - 6 Lacs
Bengaluru
Work from Office
Java & OOP: 1-3 years' experience; strong grasp of core Java (collections, concurrency, GC, memory model) and design patterns Databases: hands-on with PostgreSQL, MySQL, and MongoDB; schema design, indexing, query tuning APIs & Frameworks: Spring Boot, Spring Data, or equivalent Collaboration: agile practices (Scrum/Kanban), clear communication and documentation Additional Skills - Streaming: practical experience with Apache Kafka (producers/consumers, topics, partitions) and Apache Flink (stateful stream processing, windowing, watermarks)
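As a concrete illustration of the Flink concepts this listing names (event-time windowing, watermarks, keyed state), here is a minimal self-contained Java sketch; the Click type, the inline test data, and the five-second lateness bound are illustrative assumptions, not part of the posting.

```java
import java.time.Duration;

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

public class ClicksPerUserJob {

    // Minimal event type: public fields and a no-arg constructor make it a Flink POJO.
    public static class Click {
        public String user;
        public long ts;          // event time in epoch millis
        public Click() {}
        public Click(String user, long ts) { this.user = user; this.ts = ts; }
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        env.fromElements(
                new Click("alice", 1_000L),
                new Click("bob",   2_000L),
                new Click("alice", 59_000L),
                new Click("alice", 61_000L))
            // Watermarks: tolerate events arriving up to 5 seconds out of order.
            .assignTimestampsAndWatermarks(
                WatermarkStrategy.<Click>forBoundedOutOfOrderness(Duration.ofSeconds(5))
                                 .withTimestampAssigner((click, recordTs) -> click.ts))
            .map(click -> Tuple2.of(click.user, 1L))
            .returns(Types.TUPLE(Types.STRING, Types.LONG))
            .keyBy(t -> t.f0)
            // One-minute tumbling event-time windows; Flink manages the keyed state.
            .window(TumblingEventTimeWindows.of(Time.minutes(1)))
            .sum(1)
            .print();   // e.g. (alice,2) for the first window, (alice,1) for the second

        env.execute("clicks-per-user");
    }
}
```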
Posted 1 month ago
10.0 - 20.0 years
35 - 60 Lacs
Mumbai, India
Work from Office
Design Full Stack solutions with cloud infrastructure (IaaS, PaaS, SaaS, on-premise, hybrid cloud) Support application and infrastructure design and build as a subject matter expert Implement proofs of concept to demonstrate the value of the solution designed Provide consulting support to ensure delivery teams build scalable, extensible, high-availability, low-latency, and highly usable applications Ensure solutions are aligned with requirements from all stakeholders such as Consumers, Business, IT, Security and Compliance Ensure that all Enterprise IT parameters and constraints are considered as part of the design Design an appropriate technical solution to meet business requirements that may involve hybrid cloud environments including cloud-native architecture, microservices, etc. Working knowledge of a high-availability, low-latency end-to-end technology stack is especially important, using both physical and virtual load balancing, caching, and scaling technology Awareness of full-stack web development frameworks such as Angular / React / Vue Awareness of relational and non-relational / NoSQL databases such as MongoDB / MS SQL / Cassandra / Neo4j / DynamoDB Awareness of data streaming platforms such as Apache Kafka / Apache Flink / AWS Kinesis Working experience of using AWS Step Functions or Azure Logic Apps with serverless Lambda or Azure Functions Optimizes and incorporates the inputs of specialists in solution design. Establishes the validity of a solution and its components with both short-term and long-term implications. Identifies the scalability options and implications on IT strategy and/or related implications of a solution and includes these in design activities and planning. Build strong professional relationships with key IT and business executives. Be a trusted advisor for cross-functional and management teams. Partners effectively with other teams to ensure problem resolution. Provide solutions and advice; create architectures and presentations. Document and effectively transfer knowledge to internal and external stakeholders Demonstrates knowledge of public cloud technology & solutions. Applies broad understanding of technical innovations & trends in solving business problems. Manage special projects and strategic initiatives as assigned by management. Implement and assist in developing policies for Information Security and Environmental compliance, ensuring the highest standards are maintained. Ensure adherence to SLAs with internal and external customers and compliance with Information Security Policies, including risk assessments and procedure reviews.
Posted 1 month ago
8.0 - 12.0 years
4 - 8 Lacs
Pune
Work from Office
Job Information: Job Opening ID: ZR_1581_JOB | Date Opened: 25/11/2022 | Industry: Technology | Work Experience: 8-12 years | Job Title: Senior Specialist - Data Engineer | City: Pune | Province: Maharashtra | Country: India | Postal Code: 411001 | Number of Positions: 4. Location: Pune/ Mumbai/ Bangalore/ Chennai. Roles & Responsibilities: Total 8-10 years of working experience. Experience/Needs: 8-10 years of experience with big data tools like Spark, Kafka, Hadoop etc. Design and deliver consumer-centric, high-performant systems. You would be dealing with huge volumes of data sets arriving through batch and streaming platforms. You will be responsible for building and delivering data pipelines that process, transform, integrate and enrich data to meet various demands from business. Mentor the team on infrastructure, networking, data migration, monitoring and troubleshooting aspects. Focus on automation using Infrastructure as Code (IaC), Jenkins, DevOps etc. Design, build, test and deploy streaming pipelines for data processing in real time and at scale. Experience with stream-processing systems like Storm, Spark Streaming, Flink etc. Experience with object-oriented/object function scripting languages: Scala, Java, etc. Develop software systems using test-driven development employing CI/CD practices. Partner with other engineers and team members to develop software that meets business needs. Follow Agile methodology for software development and technical documentation. Good to have banking/finance domain knowledge. Strong written and oral communication, presentation and interpersonal skills. Exceptional analytical, conceptual, and problem-solving abilities. Able to prioritize and execute tasks in a high-pressure environment. Experience working in a team-oriented, collaborative environment. 8-10 years of hands-on coding experience. Proficient in Java, with a good knowledge of its ecosystems. Experience with writing Spark code using the Scala language. Experience with big data tools like Sqoop, Hive, Pig, Hue. Solid understanding of object-oriented programming and HDFS concepts. Familiar with various design and architectural patterns. Experience with big data tools: Hadoop, Spark, Kafka, Flink, Hive, Sqoop etc. Experience with relational SQL and NoSQL databases like MySQL, PostgreSQL, MongoDB and Cassandra. Experience with data pipeline tools like Airflow, etc. Experience with AWS cloud services: EC2, S3, EMR, RDS, Redshift, BigQuery. Experience with stream-processing systems: Storm, Spark Streaming, Flink etc. Experience with object-oriented/object function scripting languages: Python, Java, Scala, etc. Expertise in design/developing platform components like caching, messaging, event processing, automation, transformation and tooling frameworks.
Posted 1 month ago
10.0 - 15.0 years
25 - 40 Lacs
Mumbai
Work from Office
Overview of the Company: Jio Platforms Ltd. is a revolutionary Indian multinational tech company, often referred to as India's biggest startup, headquartered in Mumbai. Launched in 2019, it's the powerhouse behind Jio, India's largest mobile network with over 400 million users. But Jio Platforms is more than just telecom. It's a comprehensive digital ecosystem, developing cutting-edge solutions across media, entertainment, and enterprise services through popular brands like JioMart, JioFiber, and JioSaavn. Join us at Jio Platforms and be part of a fast-paced, dynamic environment at the forefront of India's digital transformation. Collaborate with brilliant minds to develop next-gen solutions that empower millions and revolutionize industries. Team Overview: The Data Platforms Team is the launchpad for a data-driven future, empowering the Reliance Group of Companies. We're a passionate group of experts architecting an enterprise-scale data mesh to unlock the power of big data, generative AI, and ML modelling across various domains. We don't just manage data; we transform it into intelligent actions that fuel strategic decision-making. Imagine crafting a platform that automates data flow, fuels intelligent insights, and empowers the organization: that's what we do. Join our collaborative and innovative team, and be a part of shaping the future of data for India's biggest digital revolution! About the role: Title: Lead Data Engineer. Location: Mumbai. Responsibilities: End-to-End Data Pipeline Development: Design, build, optimize, and maintain robust data pipelines across cloud, on-premises, or hybrid environments, ensuring performance, scalability, and seamless data flow. Reusable Components & Frameworks: Develop reusable data pipeline components and contribute to the team's data pipeline framework evolution. Data Architecture & Solutions: Contribute to data architecture design, applying data modelling, storage, and retrieval expertise. Data Governance & Automation: Champion data integrity, security, and efficiency through metadata management, automation, and data governance best practices. Collaborative Problem Solving: Partner with stakeholders, data teams, and engineers to define requirements, troubleshoot, optimize, and deliver data-driven insights. Mentorship & Knowledge Transfer: Guide and mentor junior data engineers, fostering knowledge sharing and professional growth. Qualification Details: Education: Bachelor's degree or higher in Computer Science, Data Science, Engineering, or a related technical field. Core Programming: Excellent command of a primary data engineering language (Scala, Python, or Java) with a strong foundation in OOP and functional programming concepts. Big Data Technologies: Hands-on experience with data processing frameworks (e.g., Hadoop, Spark, Apache Hive, NiFi, Ozone, Kudu), ideally including streaming technologies (Kafka, Spark Streaming, Flink, etc.). Database Expertise: Excellent querying skills (SQL) and strong understanding of relational databases (e.g., MySQL, PostgreSQL). Experience with NoSQL databases (e.g., MongoDB, Cassandra) is a plus. End-to-End Pipelines: Demonstrated experience in implementing, optimizing, and maintaining complete data pipelines, integrating varied sources and sinks including streaming real-time data. Cloud Expertise: Knowledge of cloud technologies like Azure HDInsight, Synapse, Event Hubs and GCP Dataproc, Dataflow, BigQuery.
CI/CD Expertise: Experience with CI/CD methodologies and tools, including strong Linux and shell scripting skills for automation. Desired Skills & Attributes: Problem-Solving & Troubleshooting: Proven ability to analyze and solve complex data problems and troubleshoot data pipeline issues effectively. Communication & Collaboration: Excellent communication skills, both written and verbal, with the ability to collaborate across teams (data scientists, engineers, stakeholders). Continuous Learning & Adaptability: A demonstrated passion for staying up to date with emerging data technologies and a willingness to adapt to new tools.
Posted 1 month ago
4.0 - 8.0 years
5 - 9 Lacs
Hyderabad, Bengaluru
Work from Office
What's in it for you? Pay above market standards. The role is going to be contract-based, with project timelines from 2 to 12 months, or freelancing. Be a part of an elite community of professionals who can solve complex AI challenges. Work location could be: Remote (highly likely), onsite at the client location, or Deccan AI's office in Hyderabad or Bangalore. Responsibilities: Design and architect enterprise-scale data platforms, integrating diverse data sources and tools. Develop real-time and batch data pipelines to support analytics and machine learning. Define and enforce data governance strategies to ensure security, integrity, and compliance, along with optimizing data pipelines for high performance, scalability, and cost efficiency in cloud environments. Implement solutions for real-time streaming data (Kafka, AWS Kinesis, Apache Flink) and adopt DevOps/DataOps best practices. Required Skills: Strong experience in designing scalable, distributed data systems and programming (Python, Scala, Java), with expertise in Apache Spark, Hadoop, Flink, Kafka, and cloud platforms (AWS, Azure, GCP). Proficient in data modeling, governance, warehousing (Snowflake, Redshift, BigQuery), and security/compliance standards (GDPR, HIPAA). Hands-on experience with CI/CD (Terraform, CloudFormation, Airflow, Kubernetes) and data infrastructure optimization (Prometheus, Grafana). Nice to Have: Experience with graph databases, machine learning pipeline integration, real-time analytics, and IoT solutions. Contributions to open-source data engineering communities. What are the next steps? Register on our Soul AI website.
Posted 2 months ago
4.0 - 8.0 years
13 - 17 Lacs
Hyderabad, Bengaluru
Work from Office
Responsibilities: Design and architect enterprise-scale data platforms, integrating diverse data sources and tools. Develop real-time and batch data pipelines to support analytics and machine learning. Define and enforce data governance strategies to ensure security, integrity, and compliance, along with optimizing data pipelines for high performance, scalability, and cost efficiency in cloud environments. Implement solutions for real-time streaming data (Kafka, AWS Kinesis, Apache Flink) and adopt DevOps/DataOps best practices. Required Skills: Strong experience in designing scalable, distributed data systems and programming (Python, Scala, Java), with expertise in Apache Spark, Hadoop, Flink, Kafka, and cloud platforms (AWS, Azure, GCP). Proficient in data modeling, governance, warehousing (Snowflake, Redshift, BigQuery), and security/compliance standards (GDPR, HIPAA). Hands-on experience with CI/CD (Terraform, CloudFormation, Airflow, Kubernetes) and data infrastructure optimization (Prometheus, Grafana). Nice to Have: Experience with graph databases, machine learning pipeline integration, real-time analytics, and IoT solutions. Contributions to open-source data engineering communities.
Posted 2 months ago
5.0 - 8.0 years
0 - 0 Lacs
Pune
Work from Office
Experience: 5-8 yrs. Location: Hyderabad. Notice Period: Immediate to 30 days only. Job Description: Experience between 6 to 8 years. MUST have: Proficiency in the Java programming language. Strong knowledge of and experience with Apache Flink. Experience with Apache Airflow. Knowledge of containerization and orchestration tools (e.g., Docker, Kubernetes). Familiarity with cloud platforms (e.g., AWS, GCP, Azure) is a plus. Familiarity with the Python programming language. Experience with front-end development. Mandatory Skills: Apache Airflow, Hibernate, Java, Java SpringCloud, Microservices, Spring, Spring Security, SpringBoot, SpringMVC, Spring Integration, SpringCloud. ****Make sure that Java and Apache Airflow experience is mentioned on your CV****
Posted 2 months ago
7.0 - 12.0 years
10 - 14 Lacs
Pune
Work from Office
Project Role : Application Lead Project Role Description : Lead the effort to design, build and configure applications, acting as the primary point of contact. Must have skills : Apache Kafka Good to have skills : Spring Boot. Minimum 7.5 year(s) of experience is required Educational Qualification : 15 years full time education Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your day will involve overseeing the application development process and ensuring successful project delivery. Roles & Responsibilities: - Design and develop a high-performance microservices-based framework using Java and Spring Boot - Implement event-driven architecture using technologies like Apache Kafka and Apache Flink - Collaborate with cross-functional teams to integrate microservices with other systems - Ensure the scalability, reliability, and performance of backend services - Stay up-to-date with the latest trends and technologies in the Java and Spring ecosystem - Ensure timely project delivery Professional & Technical Skills: - Must To Have Skills: Proficiency in Spring Boot, Java 8+, microservices, event-driven architecture - Knowledge of Java Enterprise and microservice design patterns - Strong understanding of distributed systems - Experience in microservices architecture - Knowledge of event-driven architecture - Hands-on experience in designing and implementing scalable applications Additional Information: - The candidate should have a minimum of 7 years of experience - Familiarity with Kubernetes, Docker, CI/CD tools and AWS cloud is desired, but not a must - A 15 years full-time education is required Qualification 15 years full time education
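As a minimal illustration of the event-driven pattern this role centres on, here is a small Spring Boot component in Java that consumes one Kafka topic and publishes to another. The topic names, group id, and String payloads are assumptions for the sketch, and it presumes the spring-kafka dependency with broker settings supplied via application properties.

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

@Component
public class OrderEventsHandler {

    private final KafkaTemplate<String, String> template;

    public OrderEventsHandler(KafkaTemplate<String, String> template) {
        this.template = template;
    }

    // Consume order events; topic and group names are illustrative.
    @KafkaListener(topics = "orders", groupId = "order-service")
    public void onOrder(String payload) {
        // Stand-in for real domain logic (validation, persistence, enrichment).
        String enriched = payload + "|processed";
        // Publish a follow-up event, keeping services asynchronous and decoupled.
        template.send("orders.processed", enriched);
    }
}
```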
Posted 2 months ago
7.0 - 12.0 years
0 - 0 Lacs
Chennai
Work from Office
Job Description for Senior Data Engineer at Fynxt. Experience Level: 8+ years. Job Title: Senior Data Engineer. Location: Chennai. Job Type: Full Time. Job Description: FYNXT is a Singapore-based software product development company that provides a Software as a Service (SaaS) platform to digitally transform leading brokerage firms and fund management companies and help them grow their market share. Our industry-leading digital front-office platform has transformed several leading financial institutions in the Forex industry to go fully digital to optimize their operations, cut costs and become more profitable. For more visit: www.fynxt.com. Key Responsibilities: • Architect & Build Scalable Systems: Design and implement petabyte-scale lakehouse architectures (Apache Iceberg, Delta Lake) to unify data lakes and warehouses. • Real-Time Data Engineering: Develop and optimize streaming pipelines using Kafka, Pulsar, and Flink to process structured/unstructured data with low latency. • High-Performance Applications: Leverage Java to build scalable, high-throughput data applications and services. • Modern Data Infrastructure: Leverage modern data warehouses and query engines (Trino, Spark) for sub-second operations and analytics on real-time data. • Database Expertise: Work with RDBMS (PostgreSQL, MySQL, SQL Server) and NoSQL (Cassandra, MongoDB) systems to manage diverse data workloads. • Data Governance: Ensure data integrity, security, and compliance across multi-tenant systems. • Cost & Performance Optimization: Manage production infrastructure for reliability, scalability, and cost efficiency. • Innovation: Stay ahead of trends in the data ecosystem (e.g., open table formats, stream processing) to drive technical excellence. • API Development (Optional): Build and maintain Web APIs (REST/GraphQL) to expose data services internally and externally. Qualifications: • 8+ years of data engineering experience with large-scale systems (petabyte-level). • Expert proficiency in Java for data-intensive applications. • Hands-on experience with lakehouse architectures, stream processing (Flink), and event streaming (Kafka/Pulsar). • Strong SQL skills and familiarity with RDBMS/NoSQL databases. • Proven track record in optimizing query engines (e.g., Spark, Presto) and data pipelines. • Knowledge of data governance, security frameworks, and multi-tenant systems. • Experience with cloud platforms (AWS, GCP, Azure) and infrastructure-as-code (Terraform). What we offer? • Unique experience in the Fin-Tech industry, with a leading, fast-growing company. • Good atmosphere at work and a comfortable working environment. • Additional benefit of Group Health Insurance - OPD Health Insurance. • Coverage for Self + Family (Spouse and up to 2 Children). • Attractive leave benefits like maternity, paternity benefit, vacation leave & leave encashment. • Reward & Recognition: monthly, quarterly, half-yearly & yearly. • Loyalty benefits. • Employee referral program. Simptra Technologies Pvt. Ltd. | hr@fynxt.com | www.fynxt.com
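Since this posting names Pulsar alongside Kafka and Flink, here is a minimal sketch of a Pulsar consumer in Java feeding a downstream store. The service URL, topic, and subscription name are illustrative assumptions, the println stands in for a real lakehouse write, and it presumes the pulsar-client dependency.

```java
import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.SubscriptionType;

public class PulsarIngest {
    public static void main(String[] args) throws Exception {
        // Service URL is illustrative; in production this points at the Pulsar cluster.
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")
                .build();

        Consumer<byte[]> consumer = client.newConsumer()
                .topic("persistent://public/default/trades")   // illustrative topic
                .subscriptionName("lakehouse-ingest")
                .subscriptionType(SubscriptionType.Shared)     // allows parallel consumers
                .subscribe();

        while (true) {
            Message<byte[]> msg = consumer.receive();
            try {
                // Stand-in for writing into a lakehouse table (Iceberg/Delta).
                System.out.println(new String(msg.getData()));
                consumer.acknowledge(msg);
            } catch (Exception e) {
                consumer.negativeAcknowledge(msg); // redeliver on failure
            }
        }
    }
}
```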
Posted 2 months ago
3.0 - 8.0 years
5 - 10 Lacs
Pune
Work from Office
Project Role : Application Lead Project Role Description : Lead the effort to design, build and configure applications, acting as the primary point of contact. Must have skills : Apache Spark Good to have skills : NA Minimum 3 year(s) of experience is required Educational Qualification : 15 years full time education Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. You will be responsible for overseeing the entire application development process and ensuring its successful implementation. Roles & Responsibilities: Expected to perform independently and become an SME. Required active participation/contribution in team discussions. Contribute to providing solutions to work-related problems. Lead the design, development, and implementation of applications. Collaborate with cross-functional teams to gather and analyze requirements. Ensure the applications meet quality standards and are delivered on time. Provide technical guidance and mentorship to junior team members. Stay updated with the latest industry trends and technologies. Identify and resolve any issues or bottlenecks in the application development process. Professional & Technical Skills: Must To Have Skills: Proficiency in Apache Spark. Strong understanding of distributed computing and parallel processing. Experience with big data processing frameworks like Hadoop or Apache Flink. Hands-on experience with programming languages like Java or Scala. Knowledge of database systems and SQL. Good To Have Skills: Experience with cloud platforms like AWS or Azure. Additional Information: The candidate should have a minimum of 3 years of experience in Apache Spark. This position is based at our Pune office. A 15 years full-time education is required. Qualifications 15 years full time education
Posted 2 months ago