7.0 - 10.0 years
10 - 15 Lacs
Bengaluru
Work from Office
Role Overview
We are seeking an experienced Data Engineer with 7-10 years of experience to design, develop, and optimize data pipelines while integrating machine learning (ML) capabilities into production workflows. The ideal candidate will have a strong background in data engineering, big data technologies, cloud platforms, and ML model deployment. This role requires expertise in building scalable data architectures, processing large datasets, and supporting machine learning operations (MLOps) to enable data-driven decision-making.

Key Responsibilities

Data Engineering & Pipeline Development
- Design, develop, and maintain scalable, robust, and efficient data pipelines for batch and real-time data processing.
- Build and optimize ETL/ELT workflows to extract, transform, and load structured and unstructured data from multiple sources.
- Work with distributed data processing frameworks like Apache Spark, Hadoop, or Dask for large-scale data processing.
- Ensure data integrity, quality, and security across the data pipelines.
- Implement data governance, cataloging, and lineage tracking using appropriate tools.

Machine Learning Integration
- Collaborate with data scientists to deploy, monitor, and optimize ML models in production.
- Design and implement feature engineering pipelines to improve model performance.
- Build and maintain MLOps workflows, including model versioning, retraining, and performance tracking (see the sketch after this listing).
- Optimize ML model inference for low-latency and high-throughput applications.
- Work with ML frameworks such as TensorFlow, PyTorch, Scikit-learn, and deployment tools like Kubeflow, MLflow, or SageMaker.

Cloud & Big Data Technologies
- Architect and manage cloud-based data solutions using AWS, Azure, or GCP.
- Utilize serverless computing (AWS Lambda, Azure Functions) and containerization (Docker, Kubernetes) for scalable deployment.
- Work with data lakehouses (Delta Lake, Iceberg, Hudi) for efficient storage and retrieval.

Database & Storage Management
- Design and optimize relational (PostgreSQL, MySQL, SQL Server) and NoSQL (MongoDB, Cassandra, DynamoDB) databases.
- Manage and optimize data warehouses (Snowflake, BigQuery, Redshift, Databricks) for analytical workloads.
- Implement data partitioning, indexing, and query optimizations for performance improvements.

Collaboration & Best Practices
- Work closely with data scientists, software engineers, and DevOps teams to develop scalable and reusable data solutions.
- Implement CI/CD pipelines for automated testing, deployment, and monitoring of data workflows.
- Follow best practices in software engineering, data modeling, and documentation.
- Continuously improve the data infrastructure by researching and adopting new technologies.

Required Skills & Qualifications

Technical Skills:
- Programming Languages: Python, SQL, Scala, Java
- Big Data Technologies: Apache Spark, Hadoop, Dask, Kafka
- Cloud Platforms: AWS (Glue, S3, EMR, Lambda), Azure (Data Factory, Synapse), GCP (BigQuery, Dataflow)
- Data Warehousing: Snowflake, Redshift, BigQuery, Databricks
- Databases: PostgreSQL, MySQL, MongoDB, Cassandra
- ETL/ELT Tools: Airflow, dbt, Talend, Informatica
- Machine Learning Tools: MLflow, Kubeflow, TensorFlow, PyTorch, Scikit-learn
- MLOps & Model Deployment: Docker, Kubernetes, SageMaker, Vertex AI
- DevOps & CI/CD: Git, Jenkins, Terraform, CloudFormation

Soft Skills:
- Strong analytical and problem-solving abilities.
- Excellent collaboration and communication skills.
- Ability to work in an agile and cross-functional team environment.
- Strong documentation and technical writing skills.
Preferred Qualifications
- Experience with real-time streaming solutions like Apache Flink or Spark Streaming.
- Hands-on experience with vector databases and embeddings for ML-powered applications.
- Knowledge of data security, privacy, and compliance frameworks (GDPR, HIPAA).
- Experience with GraphQL and REST API development for data services.
- Understanding of LLMs and AI-driven data analytics.
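The MLOps duties above center on model versioning and experiment tracking with tools like MLflow. A minimal, purely illustrative sketch of that workflow follows; the experiment name and training data are hypothetical and not part of the posting:

```python
# Illustrative sketch of model tracking with MLflow, as named in the MLOps
# responsibilities above; experiment name and data are hypothetical.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=42)
model = RandomForestClassifier(n_estimators=50, random_state=42)

mlflow.set_experiment("churn-model")  # hypothetical experiment name
with mlflow.start_run():
    model.fit(X, y)
    mlflow.log_param("n_estimators", 50)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    # Logs the fitted model as a run artifact; pointing MLflow at a
    # registry-backed tracking server additionally enables named versions
    # that retraining jobs can supersede.
    mlflow.sklearn.log_model(model, "model")
```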
Posted 1 month ago
1.0 - 4.0 years
3 - 6 Lacs
Bengaluru
Work from Office
About The Position
The Technical Lead provides oversight and leadership to an IT technical delivery team. This position supervises team members, identifies and manages skillsets within the capacity of the team, and ensures successful delivery execution of assigned Agile epics. The position partners with business owner(s) to ensure technical considerations are appropriately prioritized for the team's digital products and business capabilities. The Operational Technology (OT) Deputy Product Owner supports the OT Product Manager by ensuring alignment on the full scope of OT capabilities, products, and solutions (Modern and Emerging Technologies as well as PCN Operations and tools) with IRSM, Cybersecurity, the IT Foundation Platforms Product Lines, and Digital Platforms. The role also requires supervising up to eight product teams and aligning with the OTPL teams in Houston. Product capabilities managed include the scope of IIoT Platform, Connected Worker and XR Immersive Technologies, Edge Compute, Rich Media and Digital Twin, Time Series and Real-Time Data, PCN Utility, and OT Observability and Automation, and encompass OT Operations for the Eastern Hemisphere as well.

Key Responsibilities
- Develop and implement the strategy for Operational Technologies within the organization.
- Prioritize and manage the internal commercialization of Operational Technologies.
- Oversee the capacity planning and work deliverables related to Operational Technologies.
- Lead and mentor a team of IT professionals, providing guidance and support.
- Collaborate with stakeholders to understand business requirements and translate them into technical solutions.
- Ensure the scalability, reliability, and security of Emerging Technologies implementations.
- Monitor and report on the performance and effectiveness of Operational Technologies.
- Stay updated with the latest trends and advancements in Operational Technologies.
- Manage vendor relationships and negotiate contracts related to Operational Technologies.

Required Qualifications
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 10-15 years of experience in IT, with a 5+ year focus on Operational Technology.
- Experience with databases such as InfluxDB, TimescaleDB, or OSIsoft, or with Apache Kafka.
- Knowledge of tools like Apache Flink, Apache Spark, or Azure Event Hubs.
- Experience handling large-scale time series data, with knowledge of real-time processing, predictive analytics, and anomaly detection.
- Proven experience in leading and managing IT teams.
- Strong understanding of Emerging Technologies and their applications.
- Excellent problem-solving and analytical skills.
- Strong communication and interpersonal skills.
- Ability to work independently and as part of a team.

Preferred Qualifications
- Master's degree in Computer Science, Information Technology, or a related field.
- Professional certifications related to Operational Technologies.
- Deep understanding of data management, ML, and AI to bring flavors of predictive modelling within digital twins.
- Experience with project management methodologies (e.g., Agile, Scrum).
- Knowledge of cybersecurity principles and practices.
- Familiarity with cloud computing platforms like AWS or Azure.

Chevron ENGINE supports global operations, supporting business requirements across the world; accordingly, employee work hours will be aligned to support business requirements. The standard work week will be Monday to Friday, with working hours of 8:00am to 5:00pm or 1:30pm to 10:30pm. Chevron participates in E-Verify in certain locations as required by law.
Posted 1 month ago
9.0 - 14.0 years
50 - 85 Lacs
Noida
Work from Office
About the Role
We are looking for a Staff Engineer, Real-time Data Processing, to design and develop highly scalable, low-latency data streaming platforms and processing engines. This role is ideal for engineers who enjoy building core systems and infrastructure that enable mission-critical analytics at scale. You'll work on solving some of the toughest data engineering challenges in healthcare.

A Day in the Life
- Architect, build, and maintain a large-scale real-time data processing platform.
- Collaborate with data scientists, product managers, and engineering teams to define system architecture and design.
- Optimize systems for scalability, reliability, and low-latency performance.
- Implement robust monitoring, alerting, and failover mechanisms to ensure high availability.
- Evaluate and integrate open-source and third-party streaming frameworks.
- Contribute to the overall engineering strategy and promote best practices for stream and event processing.
- Mentor junior engineers and lead technical initiatives.

What You Need
- 8+ years of experience in backend or data engineering roles, with a strong focus on building real-time systems or platforms.
- Hands-on experience with stream processing frameworks like Apache Flink, Apache Kafka Streams, or Apache Spark Streaming (see the sketch after this listing).
- Proficiency in Java, Scala, Python, or Go for building high-performance services.
- Strong understanding of distributed systems, event-driven architecture, and microservices.
- Experience with Kafka, Pulsar, or other distributed messaging systems.
- Working knowledge of containerization tools like Docker and orchestration tools like Kubernetes.
- Proficiency in observability tools such as Prometheus, Grafana, and OpenTelemetry.
- Experience with cloud-native architectures and services (AWS, GCP, or Azure).
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
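For context on the stream processing frameworks this role names, here is a minimal, illustrative PyFlink DataStream sketch. Assumptions: a toy in-memory source stands in for a real Kafka or Pulsar source, and the event schema is invented:

```python
# Minimal PyFlink DataStream sketch; a toy in-memory source stands in
# for a real Kafka/Pulsar source, and the event schema is invented.
from pyflink.datastream import StreamExecutionEnvironment

env = StreamExecutionEnvironment.get_execution_environment()
env.set_parallelism(1)

# (event_type, latency_ms) pairs standing in for real telemetry events.
events = env.from_collection([
    ("page_view", 12), ("api_call", 180), ("page_view", 9), ("api_call", 95),
])

# Keep only slow events and tag them; keyBy/window/aggregate would follow
# in a production job.
slow = (events
        .filter(lambda e: e[1] > 100)
        .map(lambda e: f"SLOW {e[0]}: {e[1]} ms"))

slow.print()
env.execute("latency-filter-sketch")
```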
Posted 1 month ago
3.0 - 8.0 years
10 - 20 Lacs
Chennai
Remote
Position Title: Apache Flink Engineer
Open Positions: 3
Employment Type: Permanent, Full-Time
Location: Chennai
Experience: 3+ years
Skills required: Confluent Kafka or Apache Kafka; Apache Spark or Apache Flink
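As a purely illustrative sketch of the Kafka side of this stack (broker address, topic, and group id are hypothetical placeholders), a minimal consumer loop with the confluent-kafka Python client looks roughly like this:

```python
# Illustrative confluent-kafka consumer loop; broker, topic, and group id
# are hypothetical placeholders.
from confluent_kafka import Consumer, KafkaError

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",   # hypothetical broker
    "group.id": "flink-feeder",              # hypothetical group
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["orders"])               # hypothetical topic

try:
    while True:
        msg = consumer.poll(timeout=1.0)     # block up to 1s for a record
        if msg is None:
            continue
        if msg.error():
            # Partition EOF is informational; anything else is a real error.
            if msg.error().code() != KafkaError._PARTITION_EOF:
                raise RuntimeError(msg.error())
            continue
        print(f"{msg.topic()}[{msg.partition()}]@{msg.offset()}: "
              f"{msg.value().decode('utf-8')}")
finally:
    consumer.close()
```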
Posted 1 month ago
3 - 6 years
4 - 8 Lacs
Bengaluru
Work from Office
Location: IN - Bangalore | Posted Today | End Date: May 22, 2025 (5 days left to apply) | Job Requisition ID: R140300

Company Overview
A.P. Moller - Maersk is an integrated container logistics company and member of the A.P. Moller Group, connecting and simplifying trade to help our customers grow and thrive. With a dedicated team of over 95,000 employees, operating in 130 countries, we go all the way to enable global trade for a growing world. From the farm to your refrigerator, or the factory to your wardrobe, A.P. Moller - Maersk is developing solutions that meet customer needs from one end of the supply chain to the other.

About the Team
At Maersk, the Global Ocean Manifest team is at the heart of global trade compliance and automation. We build intelligent, high-scale systems that seamlessly integrate customs regulations across 100+ countries, ensuring smooth cross-border movement of cargo by ocean, rail, and other transport modes. Our mission is to digitally transform customs documentation, reducing friction, optimizing workflows, and automating compliance for a complex web of regulatory bodies, ports, and customs authorities. We deal with real-time data ingestion, document generation, regulatory rule engines, and multi-format data exchange while ensuring resilience and security at scale.

Key Responsibilities
- Work with large, complex datasets and ensure efficient data processing and transformation.
- Collaborate with cross-functional teams to gather and understand data requirements.
- Ensure data quality, integrity, and security across all processes.
- Implement data validation, lineage, and governance strategies to ensure data accuracy and reliability.
- Build, optimize, and maintain ETL pipelines for structured and unstructured data, ensuring high throughput, low latency, and cost efficiency.
- Build scalable, distributed data pipelines for processing real-time and historical data.
- Contribute to the architecture and design of data systems and solutions.
- Write and optimize SQL queries for data extraction, transformation, and loading (ETL); see the sketch after this listing.
- Advise Product Owners to identify and manage risks, debt, issues, and opportunities for technical improvement.
- Provide continuous improvement suggestions for internal code frameworks, best practices, and guidelines.
- Contribute to engineering innovations that fuel Maersk's vision and mission.

Required Skills & Qualifications
- 4+ years of experience in data engineering or a related field.
- Strong problem-solving and analytical skills.
- Experience with Java and the Spring framework.
- Experience in building data processing pipelines using Apache Flink and Spark.
- Experience in distributed data lake environments (Dremio, Databricks, Google BigQuery, etc.).
- Experience with Apache Kafka and Kafka Streams.
- Experience working with databases, PostgreSQL preferred, with solid experience in writing and optimizing SQL queries.
- Hands-on experience in cloud environments such as Azure Cloud (preferred), AWS, Google Cloud, etc.
- Experience with data warehousing and ETL processes.
- Experience in designing and integrating data APIs (REST/GraphQL) for real-time and batch processing.
- Knowledge of Great Expectations, Apache Atlas, or DataHub would be a plus.
- Knowledge of RBAC, encryption, and GDPR compliance would be a plus.

Business Skills
- Excellent communication and collaboration skills.
- Ability to translate between technical language and business language, and communicate to different target groups.
- Ability to understand complex designs.
- Ability to balance and reconcile competing forces and opinions within the development team.

Personal Profile
- Fact-based and result-oriented.
- Ability to work independently and guide the team.
- Excellent verbal and written communication.

Maersk is committed to a diverse and inclusive workplace, and we embrace different styles of thinking. Maersk is an equal opportunities employer and welcomes applicants without regard to race, colour, gender, sex, age, religion, creed, national origin, ancestry, citizenship, marital status, sexual orientation, physical or mental disability, medical condition, pregnancy or parental leave, veteran status, gender identity, genetic information, or any other characteristic protected by applicable law. We will consider qualified applicants with criminal histories in a manner consistent with all legal requirements. We are happy to support your need for any adjustments during the application and hiring process. If you need special assistance or an accommodation to use our website, apply for a position, or to perform a job, please contact us by emailing .
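Given the posting's emphasis on PostgreSQL and on writing and optimizing SQL for ETL loads, here is a small illustrative sketch of an idempotent upsert, a common pattern when the same record may arrive more than once. The table, columns, and connection string are hypothetical:

```python
# Illustrative idempotent upsert into PostgreSQL with psycopg2.
# Table, columns, and DSN are hypothetical placeholders.
import psycopg2

rows = [
    ("MANIFEST-001", "SGSIN", "2025-04-01"),
    ("MANIFEST-001", "SGSIN", "2025-04-02"),  # duplicate key, later date
]

conn = psycopg2.connect("dbname=ocean user=etl password=secret host=localhost")
try:
    with conn, conn.cursor() as cur:
        cur.execute("""
            CREATE TABLE IF NOT EXISTS manifests (
                manifest_id TEXT PRIMARY KEY,
                port_code   TEXT NOT NULL,
                updated_on  DATE NOT NULL
            )
        """)
        for r in rows:
            # ON CONFLICT makes the load idempotent: re-delivered records
            # update in place instead of failing the batch.
            cur.execute("""
                INSERT INTO manifests (manifest_id, port_code, updated_on)
                VALUES (%s, %s, %s)
                ON CONFLICT (manifest_id)
                DO UPDATE SET port_code = EXCLUDED.port_code,
                              updated_on = EXCLUDED.updated_on
            """, r)
finally:
    conn.close()
```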
Posted 1 month ago
3 - 5 years
4 - 7 Lacs
Hyderabad
Work from Office
About The Role
Mandatory Skills: Flink.
Experience: 3-5 Years.
Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention: of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.
Posted 1 month ago
5 - 8 years
5 - 15 Lacs
Hyderabad
Hybrid
Databuzz is hiring for Java & Python (Airflow) - Hyderabad - 5+ Years - Hybrid. Please mail your profile to jagadish.raju@databuzzltd.com with the details below if you are interested.

About DatabuzzLTD: Databuzz is a one-stop shop for data analytics, specialized in Data Science, Big Data, Data Engineering, AI & ML, Cloud Infrastructure, and DevOps. We are an MNC based in both the UK and India, and an ISO 27001 & GDPR compliant company.

Details to include:
- CTC -
- ECTC -
- Notice Period/LWD - (candidates serving notice period will be preferred)
- DOB -

Position: Java & Python (Airflow) - Hyderabad
Mandatory Skills:
- Experience in Java.
- Experience in Python.
- Must have Airflow and Apache Flink.
- Experience with BigQuery.

Regards,
Jagadish Raju - Talent Acquisition Specialist
jagadish.raju@databuzzltd.com
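Since this role pairs Python with Airflow, here is a minimal, illustrative DAG sketch of the kind of orchestration work implied. The DAG id, schedule, and task bodies are hypothetical placeholders:

```python
# Minimal illustrative Airflow DAG; DAG id, schedule, and task bodies are
# hypothetical placeholders for real extract/load logic.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pulling source records...")


def load():
    print("writing to the warehouse...")


with DAG(
    dag_id="daily_etl_sketch",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",   # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task   # run extract before load
```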
Posted 1 month ago
2 - 4 years
4 - 6 Lacs
Pune
Work from Office
We seek a Data Engineer/Architect (Expert Level) who shares our passion for innovation and change. This role is critical to helping our business partners evolve and adapt to consumers' personalized expectations in this new technological era.

What will help you succeed:
- Fluent English (B2 - Upper Intermediate).
- Deep Data Architecture & Time Series Database (Timescale) expertise.
- Proficiency in Big Data Processing (Apache Spark); see the sketch after this listing.
- Experience with Streaming Data Technologies (KSQL, Flink).
- Strong knowledge of Data Governance, Security & Compliance.
- Hands-on experience with Snowflake and data sharing design.
- Hands-on experience with AWS or Azure cloud along with Kafka.

This job can be filled in Pune. #li-hybrid

Create with us digital products that people love. We will bring businesses and consumers together through AI technology and creativity, driving digital transformation to impact the world positively. At Globant, we believe in fostering a diverse and inclusive workplace where everyone feels valued and respected. We are an Equal Opportunity Employer committed to creating a thriving and inclusive environment for all employees and candidates, regardless of race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, veteran status, or any other legally protected characteristic. If you need any assistance or accommodations due to a disability, please let us know by applying through our Career Site or contacting your assigned recruiter. We may use AI and machine learning technologies in our recruitment process. Compensation is determined based on skills, qualifications, experience, and location. In addition to competitive salaries, we offer a comprehensive benefits package. Learn more about our commitment to diversity and inclusion.
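To ground the Spark-plus-Kafka streaming stack this posting lists, a brief illustrative PySpark Structured Streaming sketch; the broker, topic, and checkpoint path are hypothetical:

```python
# Illustrative PySpark Structured Streaming read from Kafka; broker, topic,
# and checkpoint path are hypothetical. Requires the spark-sql-kafka package
# on the classpath.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

# Subscribe to a Kafka topic; Kafka delivers key/value as binary columns.
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")  # hypothetical
          .option("subscribe", "sensor-readings")               # hypothetical
          .option("startingOffsets", "latest")
          .load()
          .select(col("value").cast("string").alias("payload")))

# Write decoded payloads to the console; a real job would parse JSON,
# aggregate over event-time windows, and sink to a warehouse or lake.
query = (events.writeStream
         .format("console")
         .option("checkpointLocation", "/tmp/ckpt-sketch")      # hypothetical
         .start())
query.awaitTermination()
```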
Posted 1 month ago
4 - 6 years
15 - 22 Lacs
Gurugram
Hybrid
The Job
We are looking for a Sr Data Engineer responsible for designing, developing, and supporting real-time core data products for TechOps applications. You will work with various teams to understand business requirements, reverse engineer existing data products, and build state-of-the-art, performant data pipelines. AWS is the cloud of choice for these pipelines, so a solid understanding of, and experience in, architecting, developing, and maintaining real-time data pipelines in AWS is highly desired.
- Design, architect, and develop data products that provide real-time core data for applications.
- Provide production support and operational optimisation of data products, including incident and on-call support, performance optimisation, high availability, and disaster recovery.
- Understand business requirements by interacting with business users and/or reverse engineering existing legacy data products.
- Mentor and train junior team members and share architecture, design, and development knowledge of data products and standards.
- Good understanding and working knowledge of distributed databases and pipelines.

Your Profile
- An ideal candidate will have 4+ years of experience in real-time streaming along with hands-on experience in Spark, Kafka, Apache Flink, Java, big data technologies, AWS, and MSK (Managed Streaming for Apache Kafka).
- AWS distributed database technologies, including Managed Streaming for Kafka, Managed Apache Flink, DynamoDB, S3, and Lambda (see the sketch after this listing).
- Experience designing and developing real-time data products with Apache Flink (Scala experience can be considered).
- Experience with Python and PySpark.
- SQL code development.
- AWS Solutions Architecture experience for data products is required.
- Manage and troubleshoot real-time data pipelines in the AWS cloud.
- Experience with high availability and disaster recovery solutions for real-time data streaming.
- Excellent analytical, problem-solving, and communication skills.
- Must be self-motivated, with the ability to work independently.
- Ability to understand existing SQL and code along with user requirements, and translate them into modernized data products.
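As an illustrative sketch only (the table name and record layout are hypothetical), this is roughly how an AWS Lambda handler triggered by MSK can decode Kafka records and land them in DynamoDB:

```python
# Illustrative AWS Lambda handler for an MSK (Kafka) event source, writing
# decoded records to DynamoDB. Table name and payload layout are hypothetical.
import base64

import boto3

table = boto3.resource("dynamodb").Table("core-data-events")  # hypothetical


def handler(event, context):
    written = 0
    # MSK events group records under "topic-partition" keys.
    for records in event.get("records", {}).values():
        for record in records:
            payload = base64.b64decode(record["value"]).decode("utf-8")
            table.put_item(Item={
                "pk": f'{record["topic"]}#{record["partition"]}',
                "sk": record["offset"],
                "payload": payload,  # raw JSON string; parse downstream
            })
            written += 1
    return {"written": written}
```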
Posted 1 month ago
7 - 12 years
50 - 75 Lacs
Bengaluru
Work from Office
---- What the Candidate Will Do ----
- Partner with engineers, analysts, and product managers to define technical solutions that support business goals
- Contribute to the architecture and implementation of distributed data systems and platforms
- Identify inefficiencies in data processing and proactively drive improvements in performance, reliability, and cost
- Serve as a thought leader and mentor in data engineering best practices across the organization

---- Basic Qualifications ----
- 7+ years of hands-on experience in software engineering with a focus on data engineering
- Proficiency in at least one programming language such as Python, Java, or Scala
- Strong SQL skills and experience with large-scale data processing frameworks (e.g., Apache Spark, Flink, MapReduce, Presto)
- Demonstrated experience designing, implementing, and operating scalable ETL pipelines and data platforms
- Proven ability to work collaboratively across teams and communicate technical concepts to diverse stakeholders

---- Preferred Qualifications ----
- Deep understanding of data warehousing concepts and data modeling best practices
- Hands-on experience with Hadoop ecosystem tools (e.g., Hive, HDFS, Oozie, Airflow, Spark, Presto)
- Familiarity with streaming technologies such as Kafka or Samza
- Expertise in performance optimization, query tuning, and resource-efficient data processing (see the sketch after this listing)
- Strong problem-solving skills and a track record of owning systems from design to production
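On the query tuning and resource-efficient processing point above, a small illustrative PySpark sketch of partitioned writes, which let downstream queries prune whole directories instead of scanning everything. Paths and column names are hypothetical:

```python
# Illustrative PySpark sketch of partition pruning; paths and columns are
# hypothetical. Partitioning the write lets readers skip irrelevant data.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("partition-pruning-sketch").getOrCreate()

df = spark.createDataFrame(
    [("2025-01-01", "IN", 10), ("2025-01-01", "US", 7), ("2025-01-02", "IN", 3)],
    ["ds", "country", "orders"],
)

# Writing partitioned by date creates ds=.../ directories on disk.
df.write.mode("overwrite").partitionBy("ds").parquet("/tmp/orders_sketch")

# A filter on the partition column is pushed down: only ds=2025-01-01 files
# are read, which is the essence of cheap time-range queries.
day = spark.read.parquet("/tmp/orders_sketch").where("ds = '2025-01-01'")
day.show()
```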
Posted 1 month ago
10 - 16 years
35 - 40 Lacs
Pune
Work from Office
About The Role
Job Title: Lead Engineer, VP
Location: Pune, India

Role Description
A Passion to Perform. It's what drives us. More than a claim, this describes the way we do business. We're committed to being the best financial services provider in the world, balancing passion with precision to deliver superior solutions for our clients. This is made possible by our people: agile minds, able to see beyond the obvious and act effectively in an ever-changing global business landscape. As you'll discover, our culture supports this. Diverse, international, and shaped by a variety of different perspectives, we're driven by a shared sense of purpose. At every level agile thinking is nurtured. And at every level agile minds are rewarded with competitive pay, support, and opportunities to excel.

What we'll offer you
As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
- Best in class leave policy
- Gender neutral parental leaves
- 100% reimbursement under childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Employee Assistance Program for you and your family members
- Comprehensive hospitalization insurance for you and your dependents
- Accident and term life insurance
- Complementary health screening for those 35 yrs. and above

Your key responsibilities
- Designing, implementing, and operationalising Java-based software components for the Transaction Monitoring Data Controls applications
- Contributing to DevOps capabilities to ensure maximum automation of our applications
- Leveraging best practices to build data-driven decisions
- Collaboration across TDI areas such as Cloud Platform, Security, Data, and Risk & Compliance to create optimum solutions for the business, increasing re-use, creating best practice, and sharing knowledge

Your skills and experience
- 13+ years of hands-on experience of Java development (Java 11+) in either of: Spring Boot/microservices/APIs/transactional databases, or Java data processing frameworks such as Apache Spark, Apache Beam, or Flink (see the sketch after this listing)
- Experience of contributing to software design and architecture, including consideration of meeting non-functional requirements (e.g., reliability, scalability, observability, testability)
- Understanding of relevant architecture styles and their trade-offs, e.g., microservices, monolith, batch
- Professional experience in building applications on one of the cloud platforms (Azure, AWS or GCP) and usage of their major infra components (Software Defined Networks, IAM, Compute, Storage, etc.)
- Professional experience of at least one data storage technology (e.g., Oracle, BigQuery)
- Experience designing and implementing distributed enterprise applications
- Professional experience of at least one CI/CD tool such as TeamCity, Jenkins, or GitHub Actions
- Professional experience of Agile build and deployment practices (DevOps)
- Professional experience of defining interface and internal data models, both logical and physical
- Experience of working with a globally distributed team requiring remote interaction across locations, time zones, and diverse cultures
- Excellent communication skills (verbal and written)

Ideal to have
- Professional experience working with Java components on GCP (e.g. App Engine, GKE, Cloud Run)
- Professional experience working with RedHat OpenShift & Apache Spark
- Professional experience working with Kotlin
- Experience of working in one or more large data integration projects/products
- Experience and knowledge of data engineering topics such as partitioning and optimisation based on different goals (e.g. retrieval performance vs insert performance)
- A passion for problem solving with strong analytical capabilities
- Experience related to any of payment scanning, fraud checking, integrity monitoring, or payment lifecycle management
- Experience working with Drools or a similar product
- Data modelling experience
- Understanding of data security principles, data masking, and implementation considerations

Education/Qualifications
Degree from an accredited college or university with a concentration in Engineering or Computer Science

How we'll support you
- Training and development to help you excel in your career
- Coaching and support from experts in your team
- A culture of continuous learning to aid progression
- A range of flexible benefits that you can tailor to suit your needs

About us and our teams
Please visit our company website for further information: https://www.db.com/company/company.htm
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
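The role's data processing stack names Apache Beam alongside Spark and Flink. As a hedged illustration only (the posting asks for Java; Beam's Python SDK is used here for brevity, and the element values are invented), a minimal Beam pipeline looks like this:

```python
# Minimal Apache Beam pipeline sketch, using the Python SDK for brevity
# (the role itself calls for Java). Element values are invented.
import apache_beam as beam

with beam.Pipeline() as pipeline:
    (
        pipeline
        | "CreateTxns" >> beam.Create([
            {"id": "t1", "amount": 120.0},
            {"id": "t2", "amount": 950.0},
            {"id": "t3", "amount": 40.0},
        ])
        # Flag transactions over a (hypothetical) monitoring threshold.
        | "FlagLarge" >> beam.Filter(lambda txn: txn["amount"] > 500)
        | "Format" >> beam.Map(lambda txn: f'ALERT {txn["id"]}: {txn["amount"]}')
        | "Print" >> beam.Map(print)
    )
```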
Posted 1 month ago
2 - 6 years
8 - 12 Lacs
Bengaluru
Work from Office
NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Digital Engineering Sr. Staff Engineer to join our team in Bangalore, Karnataka (IN-KA), India (IN).

Title: Lead Data Architect (Streaming)

Position Overview
We are seeking a highly skilled and experienced Data Architect to join our dynamic team. The ideal candidate will have a strong background in designing and implementing data solutions using AWS infrastructure and a variety of core and supplementary technologies. This role requires a deep understanding of data architecture, cloud services, and the ability to drive innovative solutions to meet business needs.

Required Skills and Qualifications
- Overall 10+ years of IT experience, of which 7+ years in data architecture and engineering
- Strong expertise in AWS cloud services, particularly Lambda, SNS, S3, and EKS
- Strong experience with Confluent
- Strong experience with Kafka
- Solid understanding of data streaming architectures and best practices
- Strong problem-solving skills and ability to think critically
- Excellent communication skills to convey complex technical concepts to both technical and non-technical stakeholders
- Knowledge of Apache Airflow for data orchestration
- Bachelor's degree in Computer Science, Engineering, or related field

Preferred Qualifications
- An understanding of cloud networking patterns and practices
- Experience with working on a library or other long-term product
- Knowledge of the Flink ecosystem
- Experience with Terraform
- Deep experience with CI/CD pipelines
- Strong understanding of the JVM language family
- Understanding of GDPR and the correct handling of PII
- Expertise with technical interface design
- Use of Docker

Key Responsibilities
- Architect end-to-end data solutions using AWS services, including Lambda, SNS, S3, and EKS, plus Kafka and Confluent, all within a larger and overarching programme ecosystem
- Architect data processing applications using Python, Kafka, Confluent Cloud, and AWS
- Develop data ingestion, processing, and storage solutions using Python, AWS Lambda, Confluent, and Kafka (see the sketch after this listing)
- Ensure data security and compliance throughout the architecture
- Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions
- Optimize data flows for performance, cost-efficiency, and scalability
- Implement data governance and quality control measures
- Ensure delivery of CI, CD, and IaC for NTT tooling, and as templates for downstream teams
- Provide technical leadership and mentorship to development teams and lead engineers
- Stay current with emerging technologies and industry trends
- Collaborate with data scientists and analysts to enable efficient data access and analysis
- Evaluate and recommend new technologies to improve data architecture

About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com

NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here. If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here.

Job Segment: Developer, Computer Science, Consulting, Technology
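For the Confluent/Kafka ingestion work this architecture centers on, a brief illustrative producer sketch with the confluent-kafka Python client; the broker and topic names are hypothetical, and delivery callbacks are the usual way to confirm broker acks:

```python
# Illustrative confluent-kafka producer with a delivery callback; broker and
# topic names are hypothetical placeholders.
import json

from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})  # hypothetical


def on_delivery(err, msg):
    # Called once per message when the broker acks (or the send fails).
    if err is not None:
        print(f"delivery failed: {err}")
    else:
        print(f"delivered to {msg.topic()}[{msg.partition()}]@{msg.offset()}")


for i in range(3):
    event = {"event_id": i, "source": "ingest-sketch"}
    producer.produce(
        "ingest-events",                      # hypothetical topic
        key=str(i),
        value=json.dumps(event),
        callback=on_delivery,
    )

producer.flush()  # block until all outstanding messages are acked
```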
Posted 1 month ago
4 - 9 years
16 - 20 Lacs
Bengaluru
Work from Office
Req ID: 301930
NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Digital Solution Architect Lead Advisor to join our team in Bangalore, Karnataka (IN-KA), India (IN).

Title: Data Solution Architect

Position Overview
We are seeking a highly skilled and experienced Data Solution Architect to join our dynamic team. The ideal candidate will have a strong background in designing and implementing data solutions using AWS infrastructure and a variety of core and supplementary technologies. This role requires a deep understanding of data architecture, cloud services, and the ability to drive innovative solutions to meet business needs.

Required Skills and Qualifications
- Bachelor's degree in Computer Science, Engineering, or related field
- 7+ years of experience in data architecture and engineering
- Strong expertise in AWS cloud services, particularly Lambda, SNS, S3, and EKS
- Proficiency in Kafka/Confluent Kafka and Python
- Experience with Snyk for security scanning and vulnerability management
- Solid understanding of data streaming architectures and best practices
- Strong problem-solving skills and ability to think critically
- Excellent communication skills to convey complex technical concepts to both technical and non-technical stakeholders

Preferred Qualifications
- Experience with Kafka Connect and Confluent Schema Registry
- Familiarity with Atlan for data catalog and metadata management
- Knowledge of Apache Flink for stream processing
- Experience integrating with IBM MQ
- Familiarity with SonarQube for code quality analysis
- AWS certifications (e.g., AWS Certified Solutions Architect)
- Experience with data modeling and database design
- Knowledge of data privacy regulations and compliance requirements

Key Responsibilities
- Design and implement scalable data architectures using AWS services and Kafka
- Develop data ingestion, processing, and storage solutions using Python and AWS Lambda
- Ensure data security and implement best practices using tools like Snyk
- Optimize data pipelines for performance and cost-efficiency
- Collaborate with data scientists and analysts to enable efficient data access and analysis
- Implement data governance policies and procedures
- Provide technical guidance and mentorship to junior team members
- Evaluate and recommend new technologies to improve data architecture
- Architect end-to-end data solutions using AWS services, including Lambda, SNS, S3, and EKS
- Design and implement data streaming pipelines using Kafka/Confluent Kafka
- Develop data processing applications using Python
- Ensure data security and compliance throughout the architecture
- Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions
- Optimize data flows for performance, cost-efficiency, and scalability
- Implement data governance and quality control measures
- Provide technical leadership and mentorship to development teams
- Stay current with emerging technologies and industry trends

About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com

NTT DATA is an equal opportunity employer and considers all applicants without regard to race, color, religion, citizenship, national origin, ancestry, age, sex, sexual orientation, gender identity, genetic information, physical or mental disability, veteran or marital status, or any other characteristic protected by law. We are committed to creating a diverse and inclusive environment for all employees. If you need assistance or an accommodation due to a disability, please inform your recruiter so that we may connect you with the appropriate team.

Job Segment: Solution Architect, Consulting, Database, Computer Science, Technology
Posted 1 month ago
6 - 11 years
18 - 30 Lacs
Gurugram
Work from Office
Application layer technologies including Tomcat/Node.js, Netty, Spring Boot, Hibernate, Elasticsearch, Kafka, Apache Flink
Frontend technologies including ReactJS, Angular, Android/iOS
Data storage technologies like Oracle, S3, Postgres, MongoDB
Posted 1 month ago
6 - 10 years
13 - 17 Lacs
Bengaluru
Work from Office
Job Description: Data Integration, OCI

Oracle Cloud is a comprehensive enterprise-grade platform that offers best-in-class services across Software as a Service (SaaS), Platform as a Service (PaaS) and Infrastructure as a Service (IaaS). The Oracle Cloud platform offers choice and flexibility for customers to build, deploy, integrate, and extend applications in the cloud that enable adapting to rapidly changing business requirements, promote interoperability, and avoid lock-in. The platform supports numerous open standards (SQL, HTML5, REST, and more), open-source solutions (such as Kubernetes, Hadoop, Spark and Kafka) and a wide variety of programming languages, databases, tools and integration frameworks.

Our Team
Oracle Cloud Infrastructure (OCI) is a strategic growth area for Oracle. It is a comprehensive cloud services offering in the enterprise software industry, spanning Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS). OCI is currently building a future-ready Gen2 cloud data management platform. Oracle Cloud Data Integration underpins a comprehensive, best-in-class data integration PaaS offering with hundreds of out-of-the-box connectors to seamlessly integrate on-prem and cloud applications.

Your Opportunity
We are on a path-breaking journey to build a best-of-breed Data Integration service that is built for hyper scale by leveraging cutting-edge technologies (Spark, Scala, Livy, Apache Flink, Airflow, Kubernetes, etc.) and modern design/architecture principles (microservices, scale to zero, telemetry, circuit breakers, etc.) as part of the next-gen AI-fueled cloud computing platform. You will have the opportunity to be part of a team of passionate engineers who are fueled by serving customers and have a penchant for constantly pushing the innovation bar. We are looking for a strong engineer who thrives on research and development projects. We want you to be a strong technical leader who is hands-on, works with the development team, and works efficiently with other product groups that can sometimes be remote in different geographies. You should be comfortable working with product management, as well as with senior architects and engineering leaders, to make sure we are building the right product and services using the right design principles.

Your Qualifications
- B.E./M.E./PhD (Computer Science, Electronics or Electrical Engineering)
- 5+ years of experience, with at least 2 years in cloud technologies
- Strong technical understanding in building scalable, high-performance distributed services/systems
- Strong experience with Java open-source and API standards
- Experience working on cloud infrastructure APIs, the REST API model, and developing REST APIs (see the sketch after this listing)
- Strong knowledge of Docker/Kubernetes
- Deep understanding of data structures and algorithms, and excellent problem-solving skills
- Working experience in one or more of the below domains (experience in any one is a potential plus):
  - Familiarity with the Data Integration domain
  - Data ingestion frameworks
  - Orchestrating complex computational workflows
  - Messaging brokers like Kafka
- Strong problem-solving, troubleshooting, and analytical skills
- Familiarity with Agile processes will be an added advantage
- Excellent communication, presentation, interpersonal and analytical skills, including the ability to communicate complex concepts clearly to different audiences
- Ability to quickly learn new technologies in a dynamic environment

Design, develop, troubleshoot and debug software programs for databases, applications, tools, networks, etc. As a member of the software engineering division, you will assist in defining and developing software for tasks associated with the developing, debugging or designing of software applications or operating systems. Provide technical leadership to other software developers. Specify, design and implement modest changes to existing software architecture to meet changing needs. Duties and tasks are varied and complex, needing independent judgment. Fully competent in own area of expertise. May have a project lead role and/or supervise lower-level personnel. BS or MS degree or equivalent experience relevant to the functional area. 4 years of software engineering or related experience. Career Level: IC3

Responsibilities
The job requires you to interface with other internal product development teams as well as cross-functional teams (Product Management, Integration Engineering, Quality Engineering, UX team and Technical Writers). At a high level, the work will involve developing features on OCI, which includes but may not be limited to the following:
- Help drive the next generation Data Integration cloud services using Oracle standard tools, technology and development practices
- Work directly with product management
- Work directly with architects to ensure newer capabilities are built applying the right design principles
- Work with remote and geographically distributed teams to enable building the right products, using the right building blocks and making them consumable by other products easily
- Be very technically hands-on and own/drive key end-to-end services
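Since the role emphasizes developing REST APIs for cloud services, here is a tiny, purely illustrative Flask sketch of a resource endpoint. The route and payload are invented and do not model any actual OCI API:

```python
# Tiny illustrative REST endpoint with Flask; the route and payload are
# invented and do not model any actual OCI API.
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical in-memory catalog of data-integration connectors.
CONNECTORS = [
    {"id": "jdbc-oracle", "status": "active"},
    {"id": "kafka-source", "status": "active"},
]


@app.get("/v1/connectors")
def list_connectors():
    # REST convention: a GET on the collection returns all resources.
    return jsonify({"items": CONNECTORS, "count": len(CONNECTORS)})


@app.get("/v1/connectors/<connector_id>")
def get_connector(connector_id):
    for c in CONNECTORS:
        if c["id"] == connector_id:
            return jsonify(c)
    return jsonify({"error": "not found"}), 404


if __name__ == "__main__":
    app.run(port=8080)
```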
Posted 1 month ago
3 - 7 years
7 - 10 Lacs
Bengaluru
Remote
• Design and implement scalable, efficient and high-performance data pipelines
• Develop and optimize ETL/ELT workflows using modern tools and frameworks
• Work with cloud platforms (AWS, Azure, GCP)
Detailed JD will be given later.
Posted 1 month ago