2.0 - 7.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Project Role: Application Lead. Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact. Must have skills: Apache Spark. Good to have skills: NA. Minimum 7.5 year(s) of experience is required. Educational Qualification: 15 years full time education.
Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. You will be responsible for overseeing the entire application development process and ensuring its successful implementation.
Roles & Responsibilities: - Expected to perform independently and become an SME. - Required active participation/contribution in team discussions. - Contribute to providing solutions to work-related problems. - Lead the design, development, and implementation of applications. - Collaborate with cross-functional teams to gather and analyze requirements. - Ensure the applications meet quality standards and are delivered on time. - Provide technical guidance and mentorship to junior team members. - Stay updated with the latest industry trends and technologies. - Identify and resolve any issues or bottlenecks in the application development process.
Professional & Technical Skills: - Must-Have Skills: Proficiency in Apache Spark. - Strong understanding of distributed computing principles. - Experience with big data processing frameworks like Hadoop or Apache Flink. - Knowledge of programming languages such as Java or Scala. - Hands-on experience with data processing and analysis using Spark SQL. - Good-to-Have Skills: Familiarity with cloud platforms like AWS or Azure.
Additional Information: - The candidate should have a minimum of 2 years of experience in Apache Spark. - This position is based at our Chennai office. - 15 years of full-time education is required.
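For concreteness, here is a minimal PySpark sketch of the kind of Spark SQL data processing this role calls for. The dataset, paths, and column names are illustrative assumptions, not anything from the listing.

```python
# Minimal PySpark sketch of a Spark SQL-style aggregation job.
# Paths and the schema (order_id, region, amount, ts) are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-aggregation").getOrCreate()

# Read raw order events from a (hypothetical) data lake location
orders = spark.read.parquet("s3://example-bucket/orders/")

# Aggregate: daily revenue and order count per region
daily_revenue = (
    orders
    .withColumn("day", F.to_date("ts"))
    .groupBy("region", "day")
    .agg(F.sum("amount").alias("revenue"),
         F.count("order_id").alias("order_count"))
)

# Write partitioned output back to the lake
daily_revenue.write.mode("overwrite").partitionBy("day").parquet(
    "s3://example-bucket/marts/daily_revenue/"
)
spark.stop()
```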
Posted 2 months ago
1.0 - 5.0 years
27 - 32 Lacs
Karnataka
Work from Office
As a global leader in cybersecurity, CrowdStrike protects the people, processes and technologies that drive modern organizations. Since 2011, our mission hasn't changed: we're here to stop breaches, and we've redefined modern security with the world's most advanced AI-native platform. We work on large-scale distributed systems, processing almost 3 trillion events per day. We have 3.44 PB of RAM deployed across our fleet of C* servers and this traffic is growing daily. Our customers span all industries, and they count on CrowdStrike to keep their businesses running, their communities safe and their lives moving forward. We're also a mission-driven company. We cultivate a culture that gives every CrowdStriker both the flexibility and autonomy to own their careers. We're always looking to add talented CrowdStrikers to the team who have limitless passion, a relentless focus on innovation and a fanatical commitment to our customers, our community and each other. Ready to join a mission that matters? The future of cybersecurity starts with you.
About The Role
The charter of the Data + ML Platform team is to harness all the data that is ingested and cataloged within the Data LakeHouse for exploration, insights, model development, ML model development lifecycle, ML engineering, and Insights Activation. This team is situated within the larger Data Platform group, which serves as one of the core pillars of our company. We process data at a truly immense scale. The data sets we process are composed of various facets including telemetry data, associated metadata, IT asset information, contextual information about threat exposure, and many more. These facets comprise the overall data platform, which is currently over 200 PB and maintained in a hyper-scale Data Lakehouse.
We are seeking a strategic and technically savvy leader to head our Data and ML Platform team. As the head, you will be responsible for defining and building our ML Experimentation Platform from the ground up, while scaling our data and ML infrastructure to support various roles including Data Platform Engineers, Data Scientists, and Threat Analysts. Your key responsibilities will involve overseeing the design, implementation, and maintenance of scalable ML pipelines for data preparation, cataloging, feature engineering, model training, model serving, and in-field model performance monitoring. These efforts will directly influence critical business decisions. In this role, you'll foster a production-focused culture that effectively bridges the gap between model development and operational success. Furthermore, you'll be at the forefront of spearheading our ongoing Generative AI investments. The ideal candidate for this position will combine strategic vision with hands-on technical expertise in machine learning and data infrastructure, driving innovation and excellence across our data and ML initiatives. We are building this team with ownership at Bengaluru, India; this leader will help us bootstrap the entire site, starting with this team.
What You'll Do
Strategic Leadership: Define the vision, strategy and roadmap for the organization's data and ML platform to align with critical business goals.
Help design, build, and facilitate adoption of a modern Data+ML platform. Stay updated on emerging technologies and trends in data platform, MLOps and AI/ML.
Team Management: Build a team of Data and ML Platform engineers from a small footprint across multiple geographies. Foster a culture of innovation and strong customer commitment for both internal and external stakeholders.
Platform Development: Oversee the design and implementation of a platform containing data pipelines, feature stores and model deployment frameworks. Develop and enhance MLOps practices to streamline model lifecycle management from development to production.
Data Governance: Institute best practices for data security, compliance and quality to ensure safe and secure use of AI/ML models.
Stakeholder Engagement: Partner with product, engineering and data science teams to understand requirements and translate them into platform capabilities. Communicate progress and impact to executive leadership and key stakeholders.
Operational Excellence: Establish SLI/SLO metrics for observability of the Data and ML Platform, along with alerting, to ensure a high level of reliability and performance. Drive continuous improvement through data-driven insights and operational metrics.
What You'll Need
10+ years of experience in data engineering, ML platform development, or related fields, with at least 5 years in a leadership role. Familiarity with typical machine learning algorithms from an engineering perspective; familiarity with supervised/unsupervised approaches: how, why and when labeled data is created and used. Knowledge of ML platform tools like Jupyter Notebooks, NVIDIA Workbench, MLflow, Ray, Vertex AI, etc. Experience with modern MLOps platforms such as MLflow, Kubeflow or SageMaker preferred. Experience in data platform products and frameworks like Apache Spark, Flink or comparable tools in GCP, and orchestration technologies (e.g., Kubernetes, Airflow). Experience with Apache Iceberg is a plus. Deep understanding of machine learning workflows, including model training, deployment and monitoring. Familiarity with data visualization tools and techniques. Experience with bootstrapping new teams and growing them to make a large impact. Experience operating as a site lead within a company will be a bonus. Exceptional interpersonal and communication skills: work with stakeholders across multiple teams and synthesize their needs into software interfaces and processes.
Benefits Of Working At CrowdStrike
Remote-friendly and flexible work culture. Market leader in compensation and equity awards. Comprehensive physical and mental wellness programs. Competitive vacation and holidays for recharge. Paid parental and adoption leaves. Professional development opportunities for all employees regardless of level or role. Geographic neighbourhood groups and volunteer opportunities to build connections. Vibrant office culture with world class amenities. Great Place to Work Certified™ across the globe.
CrowdStrike is proud to be an equal opportunity employer. We are committed to fostering a culture of belonging where everyone is valued for who they are and empowered to succeed. We support veterans and individuals with disabilities through our affirmative action program.
CrowdStrike is committed to providing equal employment opportunity for all employees and applicants for employment. The Company does not discriminate in employment opportunities or practices on the basis of race, color, creed, ethnicity, religion, sex (including pregnancy or pregnancy-related medical conditions), sexual orientation, gender identity, marital or family status, veteran status, age, national origin, ancestry, physical disability (including HIV and AIDS), mental disability, medical condition, genetic information, membership or activity in a local human rights commission, status with regard to public assistance, or any other characteristic protected by law. We base all employment decisions, including recruitment, selection, training, compensation, benefits, discipline, promotions, transfers, lay-offs, return from lay-off, terminations and social/recreational programs, on valid job requirements.
If you need assistance accessing or reviewing the information on this website or need help submitting an application for employment or requesting an accommodation, please contact us at recruiting@crowdstrike.com for further assistance.
Posted 2 months ago
8.0 - 10.0 years
12 - 17 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Role & responsibilities
Design and develop data pipelines for real-time and batch data ingestion and processing using Confluent Kafka, ksqlDB, Kafka Connect and Apache Flink. Build and configure Kafka Connectors to ingest data from various sources (databases, APIs, message queues, etc.) into Kafka. Develop Flink applications for complex event processing, stream enrichment and real-time analytics. Develop and optimize ksqlDB queries for real-time data transformations, aggregations and filtering. Implement data quality checks and monitoring to ensure data accuracy and reliability throughout the pipeline. Monitor and troubleshoot data pipeline performance, identify bottlenecks and implement optimizations. Automate data pipeline deployment, monitoring and maintenance tasks. Stay up to date with the latest advancements in data streaming technologies and best practices. Contribute to the development of data engineering standards and best practices within the organization. Participate in code reviews and contribute to a collaborative and supportive team environment. Work closely with other architects and tech leads in India and the US, and create POCs and MVPs. Provide regular updates on task status and risks to the project manager.
Preferred candidate profile
Bachelor's degree or higher from a reputed university. 8 to 10 years total experience, with the majority of that experience related to ETL/ELT, big data, Kafka, etc. Proficiency in developing Flink applications for stream processing and real-time analytics. Strong understanding of data streaming concepts and architectures. Extensive experience with Confluent Kafka, including Kafka Brokers, Producers, Consumers and Schema Registry. Hands-on experience with ksqlDB for real-time data transformations and stream processing. Experience with Kafka Connect and building custom connectors. Extensive experience in implementing large-scale data ingestion and curation solutions. Good hands-on experience in a big data technology stack with any cloud platform. Excellent problem-solving, analytical and communication skills. Ability to work independently and as part of a team. Good to have: experience in Google Cloud, healthcare industry experience, experience in Agile.
Skills
Mandatory Skills: AWS Kinesis, Java, Kafka, Python, AWS Glue, AWS Lambda, AWS S3, Scala, Apache Spark Streaming, ANSI-SQL
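As a rough illustration of the stream-processing work described above, here is a minimal PyFlink Table API sketch that consumes a Kafka topic and computes windowed aggregates with Flink SQL (not ksqlDB, but the same style of streaming SQL). The topic, schema, and bootstrap servers are illustrative assumptions, and the Flink Kafka SQL connector jar must be on the classpath.

```python
# Minimal PyFlink sketch: Kafka source -> tumbling-window aggregation -> sink.
# Topic name, fields, and servers are illustrative assumptions.
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Source: JSON click events from Kafka, with a watermark for event time
t_env.execute_sql("""
    CREATE TABLE clicks (
        user_id STRING,
        url STRING,
        ts TIMESTAMP(3),
        WATERMARK FOR ts AS ts - INTERVAL '5' SECOND
    ) WITH (
        'connector' = 'kafka',
        'topic' = 'clicks',
        'properties.bootstrap.servers' = 'localhost:9092',
        'properties.group.id' = 'flink-demo',
        'scan.startup.mode' = 'earliest-offset',
        'format' = 'json'
    )
""")

# Sink: print to stdout (a real pipeline would write back to Kafka or a DB)
t_env.execute_sql("""
    CREATE TABLE clicks_per_minute (
        user_id STRING,
        window_start TIMESTAMP(3),
        click_count BIGINT
    ) WITH ('connector' = 'print')
""")

# Streaming aggregation: one-minute tumbling windows per user
t_env.execute_sql("""
    INSERT INTO clicks_per_minute
    SELECT user_id,
           TUMBLE_START(ts, INTERVAL '1' MINUTE),
           COUNT(*)
    FROM clicks
    GROUP BY user_id, TUMBLE(ts, INTERVAL '1' MINUTE)
""").wait()
```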
Posted 2 months ago
5.0 - 10.0 years
15 - 19 Lacs
Bengaluru
Work from Office
Job Summary
Join Synechron as a DevSecOps Engineer, a pivotal role designed to enhance our software release lifecycle through robust automation and security practices. As a DevSecOps Engineer, you will contribute significantly to our business objectives by ensuring high-performance and secure infrastructure, facilitating seamless software deployment, and driving innovation within cloud environments.
Software Requirements
Required: Proficiency in CI/CD tools such as Jenkins, CodePipeline, CodeBuild, CodeCommit; hands-on experience with DevSecOps practices; automation scripting languages: Shell scripting, Python (or similar).
Preferred: Familiarity with streaming technologies: Kafka, AWS Kinesis, AWS Kinesis Data Firehose, Flink.
Overall Responsibilities
Manage the entire software release lifecycle, focusing on build automation and production deployment. Maintain and optimize CI/CD pipelines to ensure reliable software releases across environments. Engage in platform lifecycle improvement from design to deployment, refining processes for operational excellence. Provide pre-go-live support including system design consulting, capacity planning, and launch reviews. Implement and enforce best practices to optimize performance, reliability, security, and cost efficiency. Enable scalable systems through automation and advocate for changes enhancing reliability and speed. Lead priority incident response and conduct blameless postmortems for continuous improvement.
Technical Skills (By Category)
Programming Languages: Required: Shell scripting, Python. Preferred: other automation scripting languages.
Cloud Technologies: Required: experience with cloud design and best practices, particularly AWS.
Development Tools and Methodologies: Required: CI/CD tools (Jenkins, CodePipeline, CodeBuild, CodeCommit). Preferred: exposure to streaming technologies (Kafka, AWS Kinesis).
Security Protocols: Required: DevSecOps practices.
Experience Requirements
Minimum of 7+ years in infrastructure performance and cloud design roles. Proven experience with architecture and design at scale. Industry experience in technology or software development environments preferred. Alternative pathways: demonstrated experience in similar roles across other sectors.
Day-to-Day Activities
Engage in regular collaboration with cross-functional teams to refine deployment strategies. Conduct regular system health checks and implement monitoring solutions. Participate in strategic meetings to discuss and implement best practices for system reliability. Manage deliverables related to software deployment and automation projects. Exercise decision-making authority in incident management and system improvement discussions.
Qualifications
Bachelor's degree in Computer Science, Engineering, or related field (or equivalent experience). Certifications in DevOps, AWS, or related fields preferred. Commitment to continuous learning and professional development in evolving technologies.
Professional Competencies
Critical thinking and problem-solving capabilities to address complex infrastructure challenges. Strong leadership and teamwork abilities to foster collaborative environments. Excellent communication skills for effective stakeholder management and technical guidance. Adaptability to rapidly changing technology landscapes and proactive learning orientation. Innovation mindset to drive improvements and efficiencies within cloud environments. Effective time and priority management to balance multiple projects and objectives.
Posted 2 months ago
10.0 - 12.0 years
9 - 13 Lacs
Chennai
Work from Office
Job Title: Data Architect. Experience: 10-12 years. Location: Chennai.
10-12 years of experience as a Data Architect. Strong expertise in streaming data technologies like Apache Kafka, Flink, Spark Streaming, or Kinesis. Proficiency in programming languages such as Python, Java, Scala, or Go. Experience with big data tools like Hadoop, Hive, and data warehouses such as Snowflake, Redshift, Databricks, Microsoft Fabric. Proficiency in database technologies (SQL, NoSQL, PostgreSQL, MongoDB, DynamoDB, YugabyteDB). Should be flexible to work as an individual contributor.
Posted 2 months ago
10.0 - 20.0 years
10 - 20 Lacs
Bengaluru
Work from Office
Job Title: Senior Data Engineer. Location: India, preferably Bengaluru. Experience Level: 10+ years. Employment Type: Full-Time.
Job Summary: Our organization is seeking a highly experienced and technically proficient Senior Data Engineer with over 10 years of experience in designing, building, and optimizing data pipelines and applications in big data environments. The ideal candidate must have strong hands-on experience in workflow orchestration, data processing, and streaming platforms, and possess full-stack development capabilities.
Key Responsibilities:
1. Design, build, and maintain scalable and reliable data pipelines using Apache Airflow.
2. Develop and optimize big data workflows using Apache Spark, Hive, and Apache Flink.
3. Lead the implementation and integration of Apache Kafka for real-time and batch data processing.
4. Apply strong Java full-stack development skills to build and support data-driven applications.
5. Utilize Python to develop scripts, utilities, and support data workflows and integrations.
6. Work closely with data scientists, analysts, and platform engineers to support a high-volume, high-velocity data environment.
7. Drive performance tuning, monitoring, and troubleshooting across the data stack.
8. Ensure data integrity, security, and governance across all processing layers.
9. Mentor junior engineers and contribute to technical decision-making processes.
Required Skills and Experience: Minimum 10 years of experience in data engineering or related fields. Proven experience with Apache Airflow for orchestration. Deep expertise in Apache Spark, Hive, and Apache Flink. Mandatory experience as a Full Stack Java Developer. Proficiency in Python programming for data engineering tasks. Demonstrated experience in Apache Kafka development and implementation. Prior hands-on experience in a Big Data ecosystem involving distributed systems and large-scale data processing. Strong understanding of data modeling, ETL/ELT design, and streaming architectures. Excellent problem-solving, communication, and collaboration skills.
Preferred Qualifications: Experience working in cloud-based environments (e.g., AWS, Azure, GCP). Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes). Exposure to CI/CD pipelines and DevOps practices in data projects.
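For reference, here is a minimal Apache Airflow DAG sketch of the orchestration pattern this role centres on. The DAG id, task names, and callables are illustrative assumptions.

```python
# Minimal Airflow DAG sketch: a three-step daily ETL.
# Task bodies are placeholders; a real DAG would call Spark/Flink jobs.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw events from the source system")

def transform():
    print("clean and enrich the extracted events")

def load():
    print("write curated data to the warehouse")

with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Linear dependency chain: extract -> transform -> load
    t_extract >> t_transform >> t_load
```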
Posted 2 months ago
2.0 - 5.0 years
7 - 15 Lacs
Hyderabad
Work from Office
We are looking for a Data Engineer with a tech stack of Python, Pandas, Postgres, Java, and Apache Flink. Must have experience with Python and Java in data ingestion pipelines built on Apache Flink.
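As a small illustration of the Python/Pandas/Postgres part of this stack, here is a hedged batch-ingestion sketch; the connection string, CSV file, columns, and table name are assumptions made for the example. In the Flink part of the stack, the same ingestion would run as a streaming job.

```python
# Minimal pandas-to-Postgres ingestion sketch.
# Connection details, file, and schema are illustrative assumptions.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine(
    "postgresql+psycopg2://user:password@localhost:5432/ingest"
)

# Read a raw batch and apply light cleaning before loading
df = pd.read_csv("events.csv", parse_dates=["event_time"])
df = df.dropna(subset=["event_id"]).drop_duplicates("event_id")

# Append into a staging table; downstream jobs promote to curated tables
df.to_sql("events_staging", engine, if_exists="append", index=False)
print(f"loaded {len(df)} rows into events_staging")
```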
Posted 2 months ago
7.0 - 10.0 years
10 - 15 Lacs
Bengaluru
Work from Office
Role Overview We are seeking an experienced Data Engineer with 7-10 years of experience to design, develop, and optimize data pipelines while integrating machine learning (ML) capabilities into production workflows. The ideal candidate will have a strong background in data engineering, big data technologies, cloud platforms, and ML model deployment. This role requires expertise in building scalable data architectures, processing large datasets, and supporting machine learning operations (MLOps) to enable data-driven decision-making. Key Responsibilities Data Engineering & Pipeline Development Design, develop, and maintain scalable, robust, and efficient data pipelines for batch and real-time data processing. Build and optimize ETL/ELT workflows to extract, transform, and load structured and unstructured data from multiple sources. Work with distributed data processing frameworks like Apache Spark, Hadoop, or Dask for large-scale data processing. Ensure data integrity, quality, and security across the data pipelines. Implement data governance, cataloging, and lineage tracking using appropriate tools. Machine Learning Integration Collaborate with data scientists to deploy, monitor, and optimize ML models in production. Design and implement feature engineering pipelines to improve model performance. Build and maintain MLOps workflows, including model versioning, retraining, and performance tracking. Optimize ML model inference for low-latency and high-throughput applications. Work with ML frameworks such as TensorFlow, PyTorch, Scikit-learn, and deployment tools like Kubeflow, MLflow, or SageMaker. Cloud & Big Data Technologies Architect and manage cloud-based data solutions using AWS, Azure, or GCP. Utilize serverless computing (AWS Lambda, Azure Functions) and containerization (Docker, Kubernetes) for scalable deployment. Work with data lakehouses (Delta Lake, Iceberg, Hudi) for efficient storage and retrieval. Database & Storage Management Design and optimize relational (PostgreSQL, MySQL, SQL Server) and NoSQL (MongoDB, Cassandra, DynamoDB) databases. Manage and optimize data warehouses (Snowflake, BigQuery, Redshift, Databricks) for analytical workloads. Implement data partitioning, indexing, and query optimizations for performance improvements. Collaboration & Best Practices Work closely with data scientists, software engineers, and DevOps teams to develop scalable and reusable data solutions. Implement CI/CD pipelines for automated testing, deployment, and monitoring of data workflows. Follow best practices in software engineering, data modeling, and documentation. Continuously improve the data infrastructure by researching and adopting new technologies. Required Skills & Qualifications Technical Skills: Programming Languages: Python, SQL, Scala, Java Big Data Technologies: Apache Spark, Hadoop, Dask, Kafka Cloud Platforms: AWS (Glue, S3, EMR, Lambda), Azure (Data Factory, Synapse), GCP (BigQuery, Dataflow) Data Warehousing: Snowflake, Redshift, BigQuery, Databricks Databases: PostgreSQL, MySQL, MongoDB, Cassandra ETL/ELT Tools: Airflow, dbt, Talend, Informatica Machine Learning Tools: MLflow, Kubeflow, TensorFlow, PyTorch, Scikit-learn MLOps & Model Deployment: Docker, Kubernetes, SageMaker, Vertex AI DevOps & CI/CD: Git, Jenkins, Terraform, CloudFormation Soft Skills: Strong analytical and problem-solving abilities. Excellent collaboration and communication skills. Ability to work in an agile and cross-functional team environment. Strong documentation and technical writing skills. 
Preferred Qualifications Experience with real-time streaming solutions like Apache Flink or Spark Streaming. Hands-on experience with vector databases and embeddings for ML-powered applications. Knowledge of data security, privacy, and compliance frameworks (GDPR, HIPAA). Experience with GraphQL and REST API development for data services. Understanding of LLMs and AI-driven data analytics.
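To make the MLOps workflow described in this listing concrete, here is a minimal MLflow experiment-tracking sketch covering parameter, metric, and model logging; the model choice, metric, and run name are illustrative assumptions rather than this employer's actual pipeline.

```python
# Minimal MLflow tracking sketch: log params, metrics, and a model artifact.
# Dataset is synthetic; names are illustrative assumptions.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="rf-baseline"):
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))

    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", acc)
    # Logging the model gives it a versioned artifact that can later be
    # registered and promoted through staging/production.
    mlflow.sklearn.log_model(model, "model")
```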
Posted 2 months ago
1.0 - 4.0 years
3 - 6 Lacs
Bengaluru
Work from Office
About The Position
The Technical Lead provides oversight and leadership to an IT technical delivery team. This position supervises team members, identifies and manages skillsets within the capacity of the team, and ensures successful delivery execution of assigned Agile epics. The position partners with business owner(s) to ensure technical considerations are appropriately prioritized for the team's digital products and business capabilities.
The Operational Technology (OT) Deputy Product Owner supports the OT Product Manager by ensuring alignment on the full scope of OT capabilities, products, and solutions (Modern and Emerging Technologies as well as PCN Operations and tools) with IRSM, Cybersecurity, the IT Foundation Platforms Product Lines, and Digital Platforms. The role will also require supervising up to eight product teams and aligning with the OTPL teams in Houston. Product capabilities managed include the scope of IIoT Platform, Connected Worker and XR Immersive Technologies, Edge Compute, Rich Media and Digital Twin, Time Series and Real-Time Data, PCN Utility, and OT Observability and Automation, and encompass OT Operations for the Eastern Hemisphere as well.
Key Responsibilities
Develop and implement the strategy for Operational Technologies within the organization. Prioritize and manage the internal commercialization of Operational Technologies. Oversee the capacity planning and work deliverables related to Operational Technologies. Lead and mentor a team of IT professionals, providing guidance and support. Collaborate with stakeholders to understand business requirements and translate them into technical solutions. Ensure the scalability, reliability, and security of Emerging Technologies implementations. Monitor and report on the performance and effectiveness of Operational Technologies. Stay updated with the latest trends and advancements in Operational Technologies. Manage vendor relationships and negotiate contracts related to Operational Technologies.
Required Qualifications
Bachelor's degree in Computer Science, Information Technology, or a related field. 10-15 years of experience in IT, with a 5+ year focus on Operational Technology. Experience with databases such as InfluxDB, TimescaleDB, OSIsoft, or Apache Kafka. Knowledge of tools like Apache Flink, Apache Spark, or Azure Event Hubs. Experience in handling large-scale time series data, with knowledge of real-time processing, predictive analytics and anomaly detection. Proven experience in leading and managing IT teams. Strong understanding of Emerging Technologies and their applications. Excellent problem-solving and analytical skills. Strong communication and interpersonal skills. Ability to work independently and as part of a team.
Preferred Qualifications
Master's degree in Computer Science, Information Technology, or a related field. Professional certifications related to Operational Technologies. Deep understanding of data management, ML and AI to bring flavors of predictive modelling within digital twins. Experience with project management methodologies (e.g., Agile, Scrum). Knowledge of cybersecurity principles and practices. Familiarity with cloud computing platforms like AWS or Azure.
Chevron ENGINE supports global operations, supporting business requirements across the world. Accordingly, the work hours for employees will be aligned to support business requirements. The standard work week will be Monday to Friday, with working hours of 8:00am to 5:00pm or 1:30pm to 10:30pm. Chevron participates in E-Verify in certain locations as required by law.
Default Terms and Conditions
We respect the privacy of candidates for employment. This Privacy Notice sets forth how we will use the information we obtain when you apply for a position through this career site. If you do not consent to the terms of this Privacy Notice, please do not submit information to us. Please access the Global Application Statements and select the country where you are applying for employment. By applying, you acknowledge that you have read and agree to the country-specific statement. Terms of Use
Posted 2 months ago
9.0 - 14.0 years
50 - 85 Lacs
Noida
Work from Office
About the Role
We are looking for a Staff Engineer (Real-time Data Processing) to design and develop highly scalable, low-latency data streaming platforms and processing engines. This role is ideal for engineers who enjoy building core systems and infrastructure that enable mission-critical analytics at scale. You'll work on solving some of the toughest data engineering challenges in healthcare.
A Day in the Life
Architect, build, and maintain a large-scale real-time data processing platform. Collaborate with data scientists, product managers, and engineering teams to define system architecture and design. Optimize systems for scalability, reliability, and low-latency performance. Implement robust monitoring, alerting, and failover mechanisms to ensure high availability. Evaluate and integrate open-source and third-party streaming frameworks. Contribute to the overall engineering strategy and promote best practices for stream and event processing. Mentor junior engineers and lead technical initiatives.
What You Need
8+ years of experience in backend or data engineering roles, with a strong focus on building real-time systems or platforms. Hands-on experience with stream processing frameworks like Apache Flink, Apache Kafka Streams, or Apache Spark Streaming. Proficiency in Java, Scala, Python, or Go for building high-performance services. Strong understanding of distributed systems, event-driven architecture, and microservices. Experience with Kafka, Pulsar, or other distributed messaging systems. Working knowledge of containerization tools like Docker and orchestration tools like Kubernetes. Proficiency in observability tools such as Prometheus, Grafana, OpenTelemetry. Experience with cloud-native architectures and services (AWS, GCP, or Azure). Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
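As a simple illustration of the event-driven consumption underlying such a platform, here is a minimal kafka-python consumer sketch; the topic, consumer group, and payload shape are assumptions for the example, not details from the listing.

```python
# Minimal kafka-python consumer sketch for per-event processing.
# Topic, group id, and message format are illustrative assumptions.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "patient-events",                       # hypothetical topic name
    bootstrap_servers="localhost:9092",
    group_id="realtime-processor",
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # Low-latency per-event handling; a production platform would fan out
    # to a stream processor (Flink / Kafka Streams) rather than one loop.
    print(f"partition={message.partition} offset={message.offset} event={event}")
```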
Posted 2 months ago
3.0 - 8.0 years
10 - 20 Lacs
Chennai
Remote
Position Title: Apache Flink Engineer. Open Positions: 3. Employment Type: Permanent, Full-Time. Location: Chennai. Experience: 3+ years. Skills required: Confluent Kafka or Kafka; Apache Spark or Apache Flink.
Posted 2 months ago
3 - 6 years
4 - 8 Lacs
Bengaluru
Work from Office
Location: IN - Bangalore. Posted today; end date to apply: May 22, 2025. Job requisition ID: R140300.
Company Overview
A.P. Moller - Maersk is an integrated container logistics company and member of the A.P. Moller Group, connecting and simplifying trade to help our customers grow and thrive. With a dedicated team of over 95,000 employees operating in 130 countries, we go all the way to enable global trade for a growing world. From the farm to your refrigerator, or the factory to your wardrobe, A.P. Moller - Maersk is developing solutions that meet customer needs from one end of the supply chain to the other.
About the Team
At Maersk, the Global Ocean Manifest team is at the heart of global trade compliance and automation. We build intelligent, high-scale systems that seamlessly integrate customs regulations across 100+ countries, ensuring smooth cross-border movement of cargo by ocean, rail, and other transport modes. Our mission is to digitally transform customs documentation, reducing friction, optimizing workflows, and automating compliance for a complex web of regulatory bodies, ports, and customs authorities. We deal with real-time data ingestion, document generation, regulatory rule engines, and multi-format data exchange while ensuring resilience and security at scale.
Key Responsibilities
Work with large, complex datasets and ensure efficient data processing and transformation. Collaborate with cross-functional teams to gather and understand data requirements. Ensure data quality, integrity, and security across all processes. Implement data validation, lineage, and governance strategies to ensure data accuracy and reliability. Build, optimize, and maintain ETL pipelines for structured and unstructured data, ensuring high throughput, low latency, and cost efficiency. Build scalable, distributed data pipelines for processing real-time and historical data. Contribute to the architecture and design of data systems and solutions. Write and optimize SQL queries for data extraction, transformation, and loading (ETL). Advise Product Owners to identify and manage risks, debt, issues and opportunities for technical improvement. Provide continuous improvement suggestions for internal code frameworks, best practices and guidelines. Contribute to engineering innovations that fuel Maersk's vision and mission.
Required Skills & Qualifications
4+ years of experience in data engineering or a related field. Strong problem-solving and analytical skills. Experience with Java and the Spring framework. Experience in building data processing pipelines using Apache Flink and Spark. Experience in distributed data lake environments (Dremio, Databricks, Google BigQuery, etc.). Experience with Apache Kafka and Kafka Streams. Experience working with databases, PostgreSQL preferred, with solid experience in writing and optimizing SQL queries. Hands-on experience in cloud environments such as Azure Cloud (preferred), AWS, Google Cloud, etc. Experience with data warehousing and ETL processes. Experience in designing and integrating data APIs (REST/GraphQL) for real-time and batch processing.
Knowledge of Great Expectations, Apache Atlas, or DataHub would be a plus. Knowledge of RBAC, encryption, and GDPR compliance would be a plus.
Business skills
Excellent communication and collaboration skills. Ability to translate between technical language and business language, and communicate to different target groups. Ability to understand complex design. Possessing the ability to balance and find competing forces and opinions within the development team.
Personal profile
Fact-based and result-oriented. Ability to work independently and guide the team. Excellent verbal and written communication.
Maersk is committed to a diverse and inclusive workplace, and we embrace different styles of thinking. Maersk is an equal opportunities employer and welcomes applicants without regard to race, colour, gender, sex, age, religion, creed, national origin, ancestry, citizenship, marital status, sexual orientation, physical or mental disability, medical condition, pregnancy or parental leave, veteran status, gender identity, genetic information, or any other characteristic protected by applicable law. We will consider qualified applicants with criminal histories in a manner consistent with all legal requirements. We are happy to support your need for any adjustments during the application and hiring process. If you need special assistance or an accommodation to use our website, apply for a position, or to perform a job, please contact us by emailing .
Posted 2 months ago
3 - 5 years
4 - 7 Lacs
Hyderabad
Work from Office
About The Role
Mandatory Skills: Flink. Experience: 3-5 years.
Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention. Of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.
Posted 2 months ago
5 - 8 years
5 - 15 Lacs
Hyderabad
Hybrid
Databuzz is hiring for Java & Python (Airflow) - Hyderabad - 5+ years - Hybrid. Please mail your profile to jagadish.raju@databuzzltd.com with the below details if you are interested.
About DatabuzzLTD: Databuzz is a one-stop shop for data analytics, specialized in Data Science, Big Data, Data Engineering, AI & ML, Cloud Infrastructure and DevOps. We are an MNC based in both the UK and India. We are an ISO 27001 & GDPR compliant company.
CTC - ECTC - Notice Period/LWD - (candidates serving notice period will be preferred) DOB -
Position: Java & Python (Airflow) - Hyderabad
Mandatory Skills: Should have experience in Java. Should have experience in Python. Must have Airflow and Apache Flink. Experienced in BigQuery.
Regards, Jagadish Raju - Talent Acquisition Specialist, jagadish.raju@databuzzltd.com
Posted 2 months ago
2 - 4 years
4 - 6 Lacs
Pune
Work from Office
We seek a Data Engineer/Architect (Expert Level) who shares our passion for innovation and change. This role is critical to helping our business partners evolve and adapt to consumers' personalized expectations in this new technological era.
What will help you succeed: Fluent English (B2 - Upper Intermediate). Deep Data Architecture & Time Series Database (Timescale) expertise. Proficiency in Big Data Processing (Apache Spark). Experience with Streaming Data Technologies (KSQL, Flink). Strong knowledge of Data Governance, Security & Compliance. Hands-on experience with Snowflake and data sharing design. Hands-on experience with AWS or Azure cloud along with Kafka.
This job can be filled in Pune.
Create with us digital products that people love. We will bring businesses and consumers together through AI technology and creativity, driving digital transformation to impact the world positively. At Globant, we believe in fostering a diverse and inclusive workplace where everyone feels valued and respected. We are an Equal Opportunity Employer committed to creating a thriving and inclusive environment for all employees and candidates, regardless of race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, veteran status, or any other legally protected characteristic. If you need any assistance or accommodations due to a disability, please let us know by applying through our Career Site or contacting your assigned recruiter. We may use AI and machine learning technologies in our recruitment process. Compensation is determined based on skills, qualifications, experience, and location. In addition to competitive salaries, we offer a comprehensive benefits package.
Posted 2 months ago
4 - 6 years
15 - 22 Lacs
Gurugram
Hybrid
The Job
We are looking for a Sr Data Engineer responsible for designing, developing and supporting real-time core data products for TechOps applications. You will work with various teams to understand business requirements, reverse engineer existing data products and build state-of-the-art, performant data pipelines. AWS is the cloud of choice for these pipelines, and a solid understanding of, and experience in, architecting, developing and maintaining real-time data pipelines in AWS is highly desired. Design, architect and develop data products that provide real-time core data for applications. Production support and operational optimisation of data projects, including but not limited to incident and on-call support, performance optimization, high availability and disaster recovery. Understand business requirements, interacting with business users and/or reverse engineering existing legacy data products. Mentor and train junior team members and share architecture, design and development knowledge of data products and standards. Good understanding and working knowledge of distributed databases and pipelines.
Your Profile
An ideal candidate will have 4+ yrs of experience in real-time streaming along with hands-on Spark, Kafka, Apache Flink, Java, big data technologies, AWS and MSK (Managed Service Kafka). AWS distributed database technologies including Managed Services Kafka, Managed Apache Flink, DynamoDB, S3, Lambda. Experience designing and developing real-time data products with Apache Flink (Scala experience can be considered). Experience with Python and PySpark. SQL code development. AWS Solutions Architecture experience for data products is required. Manage and troubleshoot real-time data pipelines in the AWS Cloud. Experience with high availability and disaster recovery solutions for real-time data streaming. Excellent analytical, problem-solving and communication skills. Must be self-motivated, with the ability to work independently. Ability to understand existing SQL and code and user requirements and translate them into modernized data products.
Posted 2 months ago
7 - 12 years
50 - 75 Lacs
Bengaluru
Work from Office
---- What the Candidate Will Do ---- Partner with engineers, analysts, and product managers to define technical solutions that support business goals Contribute to the architecture and implementation of distributed data systems and platforms Identify inefficiencies in data processing and proactively drive improvements in performance, reliability, and cost Serve as a thought leader and mentor in data engineering best practices across the organization ---- Basic Qualifications ---- 7+ years of hands-on experience in software engineering with a focus on data engineering Proficiency in at least one programming language such as Python, Java, or Scala Strong SQL skills and experience with large-scale data processing frameworks (e.g., Apache Spark, Flink, MapReduce, Presto) Demonstrated experience designing, implementing, and operating scalable ETL pipelines and data platforms Proven ability to work collaboratively across teams and communicate technical concepts to diverse stakeholders ---- Preferred Qualifications ---- Deep understanding of data warehousing concepts and data modeling best practices Hands-on experience with Hadoop ecosystem tools (e.g., Hive, HDFS, Oozie, Airflow, Spark, Presto) Familiarity with streaming technologies such as Kafka or Samza Expertise in performance optimization, query tuning, and resource-efficient data processing Strong problem-solving skills and a track record of owning systems from design to production
Posted 2 months ago
10 - 16 years
35 - 40 Lacs
Pune
Work from Office
About The Role
Job Title: Lead Engineer, VP. Location: Pune, India.
Role Description
A Passion to Perform. It's what drives us. More than a claim, this describes the way we do business. We're committed to being the best financial services provider in the world, balancing passion with precision to deliver superior solutions for our clients. This is made possible by our people: agile minds, able to see beyond the obvious and act effectively in an ever-changing global business landscape. As you'll discover, our culture supports this. Diverse, international and shaped by a variety of different perspectives, we're driven by a shared sense of purpose. At every level agile thinking is nurtured. And at every level agile minds are rewarded with competitive pay, support and opportunities to excel.
What we'll offer you
As part of our flexible scheme, here are just some of the benefits that you'll enjoy: best-in-class leave policy; gender-neutral parental leaves; 100% reimbursement under childcare assistance benefit (gender neutral); sponsorship for industry-relevant certifications and education; Employee Assistance Program for you and your family members; comprehensive hospitalization insurance for you and your dependents; accident and term life insurance; complementary health screening for those 35 years and above.
Your key responsibilities
Designing, implementing and operationalising Java-based software components for the Transaction Monitoring Data Controls applications. Contributing to DevOps capabilities to ensure maximum automation of our applications. Leveraging best practices to build data-driven decisions. Collaboration across the TDI areas such as Cloud Platform, Security, Data, and Risk & Compliance to create optimum solutions for the business, increasing re-use, creating best practice and sharing knowledge.
Your skills and experience
13+ years of hands-on experience of Java development (Java 11+) in either of: Spring Boot/microservices/APIs/transactional databases; Java data processing frameworks such as Apache Spark, Apache Beam, Flink. Experience of contributing to software design and architecture, including consideration of meeting non-functional requirements (e.g., reliability, scalability, observability, testability). Understanding of relevant architecture styles and their trade-offs, e.g., microservices, monolith, batch. Professional experience in building applications on one of the cloud platforms (Azure, AWS or GCP) and usage of their major infra components (Software Defined Networks, IAM, Compute, Storage, etc.). Professional experience of at least one data storage technology (e.g., Oracle, BigQuery). Experience designing and implementing distributed enterprise applications. Professional experience of at least one CI/CD tool such as TeamCity, Jenkins, GitHub Actions. Professional experience of Agile build and deployment practices (DevOps). Professional experience of defining interface and internal data models, both logical and physical. Experience of working with a globally distributed team requiring remote interaction across locations, time zones and diverse cultures. Excellent communication skills (verbal and written).
Ideal to Have
Professional experience working with Java components on GCP (e.g., App Engine, GKE, Cloud Run). Professional experience working with Red Hat OpenShift & Apache Spark. Professional experience working with Kotlin. Experience of working in one or more large data integration projects/products. Experience and knowledge of data engineering topics such as partitioning and optimisation based on different goals (e.g., retrieval performance vs insert performance). A passion for problem solving with strong analytical capabilities. Experience related to any of payment scanning, fraud checking, integrity monitoring, payment lifecycle management. Experience working with Drools or a similar product. Data modelling experience. Understanding of data security principles, data masking, and implementation considerations.
Education/Qualifications
Degree from an accredited college or university with a concentration in Engineering or Computer Science.
How we'll support you
Training and development to help you excel in your career. Coaching and support from experts in your team. A culture of continuous learning to aid progression. A range of flexible benefits that you can tailor to suit your needs.
About us and our teams
Please visit our company website for further information: https://www.db.com/company/company.htm
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
Posted 2 months ago
2 - 6 years
8 - 12 Lacs
Bengaluru
Work from Office
NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Digital Engineering Sr. Staff Engineer to join our team in Bangalore, Karnataka (IN-KA), India (IN).
Title: Lead Data Architect (Streaming)
Required Skills and Qualifications
Overall 10+ years of IT experience, of which 7+ years of experience in data architecture and engineering. Strong expertise in AWS cloud services, particularly Lambda, SNS, S3, and EKS. Strong experience with Confluent. Strong experience in Kafka. Solid understanding of data streaming architectures and best practices. Strong problem-solving skills and ability to think critically. Excellent communication skills to convey complex technical concepts to both technical and non-technical stakeholders. Knowledge of Apache Airflow for data orchestration. Bachelor's degree in Computer Science, Engineering, or related field.
Preferred Qualifications
An understanding of cloud networking patterns and practices. Experience with working on a library or other long-term product. Knowledge of the Flink ecosystem. Experience with Terraform. Deep experience with CI/CD pipelines. Strong understanding of the JVM language family. Understanding of GDPR and the correct handling of PII. Expertise with technical interface design. Use of Docker.
Key Responsibilities
Architect end-to-end data solutions using AWS services, including Lambda, SNS, S3, and EKS, Kafka and Confluent, all within a larger and overarching programme ecosystem. Architect data processing applications using Python, Kafka, Confluent Cloud and AWS. Develop data ingestion, processing, and storage solutions using Python and AWS Lambda, Confluent and Kafka. Ensure data security and compliance throughout the architecture. Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions. Optimize data flows for performance, cost-efficiency, and scalability. Implement data governance and quality control measures. Ensure delivery of CI/CD and IaC for NTT tooling, and as templates for downstream teams. Provide technical leadership and mentorship to development teams and lead engineers. Stay current with emerging technologies and industry trends. Collaborate with data scientists and analysts to enable efficient data access and analysis. Evaluate and recommend new technologies to improve data architecture.
Position Overview: We are seeking a highly skilled and experienced Data Architect to join our dynamic team. The ideal candidate will have a strong background in designing and implementing data solutions using AWS infrastructure and a variety of core and supplementary technologies. This role requires a deep understanding of data architecture, cloud services, and the ability to drive innovative solutions to meet business needs.
About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity.
We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com.
NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status.
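To ground the Lambda/S3 ingestion pattern named in this listing, here is a minimal, hypothetical AWS Lambda handler sketch for S3 event notifications. The bucket, keys, and payload format are illustrative assumptions; the event shape shown is the standard direct S3 trigger format.

```python
# Minimal AWS Lambda handler sketch: react to S3 object-created events,
# read each new object, and hand it to a downstream step.
# Bucket names, keys, and payload contents are illustrative assumptions.
import json

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # A direct S3 trigger delivers records with bucket/key pairs
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        obj = s3.get_object(Bucket=bucket, Key=key)
        payload = json.loads(obj["Body"].read())

        # Validate / enrich here, then hand off to the streaming layer
        print(f"ingested {key} from {bucket}: {len(payload)} records")

    return {"status": "ok"}
```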
Posted 2 months ago
4 - 9 years
16 - 20 Lacs
Bengaluru
Work from Office
Req ID: 301930
NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Digital Solution Architect Lead Advisor to join our team in Bangalore, Karnataka (IN-KA), India (IN).
Title: Data Solution Architect
Position Overview: We are seeking a highly skilled and experienced Data Solution Architect to join our dynamic team. The ideal candidate will have a strong background in designing and implementing data solutions using AWS infrastructure and a variety of core and supplementary technologies. This role requires a deep understanding of data architecture, cloud services, and the ability to drive innovative solutions to meet business needs.
Required Skills and Qualifications
- Bachelor's degree in Computer Science, Engineering, or related field
- 7+ years of experience in data architecture and engineering
- Strong expertise in AWS cloud services, particularly Lambda, SNS, S3, and EKS
- Proficiency in Kafka/Confluent Kafka and Python
- Experience with Snyk for security scanning and vulnerability management
- Solid understanding of data streaming architectures and best practices
- Strong problem-solving skills and ability to think critically
- Excellent communication skills to convey complex technical concepts to both technical and non-technical stakeholders
Preferred Qualifications
- Experience with Kafka Connect and Confluent Schema Registry
- Familiarity with Atlan for data catalog and metadata management
- Knowledge of Apache Flink for stream processing
- Experience integrating with IBM MQ
- Familiarity with SonarQube for code quality analysis
- AWS certifications (e.g., AWS Certified Solutions Architect)
- Experience with data modeling and database design
- Knowledge of data privacy regulations and compliance requirements
Key Responsibilities
- Design and implement scalable data architectures using AWS services and Kafka
- Develop data ingestion, processing, and storage solutions using Python and AWS Lambda
- Ensure data security and implement best practices using tools like Snyk
- Optimize data pipelines for performance and cost-efficiency
- Collaborate with data scientists and analysts to enable efficient data access and analysis
- Implement data governance policies and procedures
- Provide technical guidance and mentorship to junior team members
- Evaluate and recommend new technologies to improve data architecture
- Architect end-to-end data solutions using AWS services, including Lambda, SNS, S3, and EKS
- Design and implement data streaming pipelines using Kafka/Confluent Kafka
- Develop data processing applications using Python
- Ensure data security and compliance throughout the architecture
- Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions
- Optimize data flows for performance, cost-efficiency, and scalability
- Implement data governance and quality control measures
- Provide technical leadership and mentorship to development teams
- Stay current with emerging technologies and industry trends
About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success.
As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com.
NTT DATA is an equal opportunity employer and considers all applicants without regard to race, color, religion, citizenship, national origin, ancestry, age, sex, sexual orientation, gender identity, genetic information, physical or mental disability, veteran or marital status, or any other characteristic protected by law. We are committed to creating a diverse and inclusive environment for all employees. If you need assistance or an accommodation due to a disability, please inform your recruiter so that we may connect you with the appropriate team.
Posted 2 months ago
6 - 11 years
18 - 30 Lacs
Gurugram
Work from Office
Application layer technologies including Tomcat/Node.js, Netty, Spring Boot, Hibernate, Elasticsearch, Kafka, Apache Flink. Frontend technologies including ReactJS, Angular, Android/iOS. Data storage technologies like Oracle, S3, Postgres, MongoDB.
Posted 2 months ago
6 - 10 years
13 - 17 Lacs
Bengaluru
Work from Office
Job Description: Data Integration, OCI
Oracle Cloud is a comprehensive enterprise-grade platform that offers best-in-class services across Software as a Service (SaaS), Platform as a Service (PaaS) and Infrastructure as a Service (IaaS). The Oracle Cloud platform offers choice and flexibility for customers to build, deploy, integrate, and extend applications in the cloud in ways that enable adapting to rapidly changing business requirements, promote interoperability and avoid lock-in. This platform supports numerous open standards (SQL, HTML5, REST, and more), open-source solutions (such as Kubernetes, Hadoop, Spark and Kafka) and a wide variety of programming languages, databases, tools and integration frameworks.
Our Team
Oracle Cloud Infrastructure (OCI) is a strategic growth area for Oracle. It is a comprehensive cloud services offering in the enterprise software industry, spanning Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS). OCI is currently building a future-ready Gen2 cloud data management platform. Oracle Cloud Data Integration underpins a comprehensive, best-in-class data integration PaaS offering with hundreds of out-of-the-box connectors to seamlessly integrate on-prem and cloud applications.
Your Opportunity
We are on a path-breaking journey to build a best-of-breed Data Integration service that is built for hyper scale by leveraging cutting-edge technologies (Spark, Scala, Livy, Apache Flink, Airflow, Kubernetes, etc.) and modern design/architecture principles (microservices, scale to zero, telemetry, circuit breakers, etc.) as part of the next-gen AI-fueled cloud computing platform. You will have the opportunity to be part of a team of passionate engineers who are fueled by serving customers and have a penchant to constantly push the innovation bar. We are looking for a strong engineer who thrives on research and development projects. We want you to be a strong technical leader who is hands-on. We want you to work with the development team and work efficiently with other product groups that can sometimes be remote in different geographies. You should be comfortable working with product management. You should also be comfortable working with senior architects and engineering leaders to make sure we are building the right product and services using the right design principles.
Your Qualifications
B.E./M.E./PhD (Computer Science, Electronics or Electrical Engineering). 5+ years of experience with at least 2 years in cloud technologies. Strong technical understanding in building scalable, high-performance distributed services/systems. Strong experience with Java open-source and API standards. Experience working on cloud infrastructure APIs, the REST API model, and developing REST APIs. Strong knowledge of Docker/Kubernetes. Deep understanding of data structures and algorithms, and excellent problem-solving skills. Working experience in one or more of the below domains (experience in any one is a potential plus): familiarity with the Data Integration domain; data ingestion frameworks; orchestrating complex computational workflows; messaging brokers like Kafka. Strong problem-solving, troubleshooting and analytical skills. Familiarity with Agile processes will be an added advantage. Excellent communication, presentation, interpersonal and analytical skills, including the ability to communicate complex concepts clearly to different audiences. Ability to quickly learn new technologies in a dynamic environment.
Design, develop, troubleshoot and debug software programs for databases, applications, tools, networks, etc. As a member of the software engineering division, you will assist in defining and developing software for tasks associated with the developing, debugging or designing of software applications or operating systems. Provide technical leadership to other software developers. Specify, design and implement modest changes to existing software architecture to meet changing needs. Duties and tasks are varied and complex, needing independent judgment. Fully competent in own area of expertise. May have a project lead role and/or supervise lower-level personnel. BS or MS degree or equivalent experience relevant to functional area. 4 years of software engineering or related experience. Career Level - IC3.
Responsibilities
The job requires you to interface with other internal product development teams as well as cross-functional teams (Product Management, Integration Engineering, Quality Engineering, UX team and Technical Writers). At a high level, the work will involve developing features on OCI, which includes but may not be limited to the following: help drive the next generation of Data Integration cloud services using Oracle standard tools, technology and development practices; work directly with product management; work directly with architects to ensure newer capabilities are built applying the right design principles; work with remote and geographically distributed teams to enable building the right products, using the right building blocks and making them consumable by other products easily; be very technically hands-on and own/drive key end-to-end services.
Posted 2 months ago
3 - 7 years
7 - 10 Lacs
Bengaluru
Remote
• Design and implement scalable, efficient and high-performance data pipelines • Develop and optimize ETL/ELT workflows using modern tools and frameworks. • Work with cloud platforms (AWS, Azure, GCP) Detailed JD will be given later.
Posted 2 months ago