2.0 - 6.0 years
0 Lacs
Maharashtra
On-site
You will be responsible for building and maintaining secure, scalable data pipelines using Databricks and Azure. This includes handling ingestion from various sources such as files, APIs, and streaming, performing data transformation, and ensuring quality validation. Collaboration with subsystem, data science, and product teams will be essential to ensure machine-learning readiness. Your technical expertise should cover Notebooks (SQL, Python), Delta Lake, Unity Catalog, ADLS/S3, job orchestration, APIs, structured logging, and Infrastructure as Code (IaC) tools like Terraform. Delivery practices such as trunk-based development, Test-Driven Development (TDD), Git, and Continuous Integration/Continuous Deployment (CI/CD) for notebooks and pipelines are expected. You should also be familiar with integrating different data formats such as JSON, CSV, XML, and Parquet, and various databases, including SQL, NoSQL, and graph databases. Strong communication skills are vital to justify decisions, document architecture, and align with enabling teams.

In this role, you will have the opportunity to engage in Proximity Talks, where you can interact with other designers, engineers, and product experts to enhance your knowledge. Working with a world-class team will enable you to constantly challenge yourself and acquire new skills. This is a contract position based in Abu Dhabi; if relocation from India is necessary, the company will cover travel and accommodation expenses on top of your salary.

Proximity is a globally recognized technology, design, and consulting partner for leading Sports, Media, and Entertainment companies. Headquartered in San Francisco, with offices in Palo Alto, Dubai, Mumbai, and Bangalore, Proximity has been instrumental in creating and scaling high-impact products used by 370 million daily users, with a combined net worth of $45.7 billion among client companies. As part of the Proximity team, you will work alongside a diverse group of coders, designers, product managers, and experts who solve complex challenges and develop cutting-edge technology at scale. The rapidly growing team of Proxonauts offers you the opportunity to make a significant impact on the company's success, and you will collaborate with experienced leaders who have led multiple tech, product, and design teams. To learn more about us, you can watch our CEO, Hardik Jagda, share insights about Proximity, explore our values and meet our team members, visit our website, blog, and Studio Proximity, and get an insider perspective on Instagram by following @ProxWrks and @H.Jagda.
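A minimal illustrative sketch of the kind of ingestion described above (not taken from the posting), assuming a Databricks notebook where `spark` is predefined; the source path and Unity Catalog table name are placeholder assumptions:

```python
from pyspark.sql import functions as F

# Hypothetical landing zone and governed target table (catalog.schema.table).
SOURCE_PATH = "abfss://raw@examplelake.dfs.core.windows.net/orders/*.json"
TARGET_TABLE = "main.sales.orders_bronze"

raw = spark.read.json(SOURCE_PATH)

# Basic quality validation: drop duplicates and rows missing the key column.
clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("order_id").isNotNull())
       .withColumn("ingested_at", F.current_timestamp())
)

# Append into a Delta table registered in Unity Catalog.
clean.write.format("delta").mode("append").saveAsTable(TARGET_TABLE)
```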
Posted 1 month ago
5.0 - 7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Project Role: Cloud Migration Engineer
Project Role Description: Provides assessment of existing solutions and infrastructure to migrate to the cloud. Plans, delivers, and implements application and data migration with scalable, high-performance solutions using private and public cloud technologies, driving next-generation business outcomes.
Must-have skills: Microsoft Azure DevOps
Good-to-have skills: NA
Minimum experience required: 5 years
Educational qualification: 15 years of full-time education

We are seeking a hands-on senior professional who can lead a team of data engineers to lay the groundwork for making data accessible to AI engineers for research and/or application development. The key objectives for this position include:
- Enhancing the ASML Data Foundation with unstructured data such as notes, knowledge repositories like Confluence or SharePoint, and document types like Word, PowerPoint, and Excel.
- Utilizing Delta Lake-enabled storage accounts in the ASML Data Foundation, with proficiency in Unity Catalog for centralized access control, auditing, and data lineage across Azure Databricks workspaces.
- Establishing data integration layers that interface with source systems in both on-premises and cloud-hosted environments.
- Efficiently provisioning data to cloud application development environments, including Azure storage accounts serving as ADLS, Azure AI Search, Cosmos DB, and PostgreSQL.
- Developing a vision for document processing, from cracking and chunking to embedding, within the context of an AI service, determining the destination, data types, and formats required for processing.
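A purely illustrative sketch of the "cracking and chunking" step named above (not part of the posting): a naive overlapping chunker for extracted document text, with sizes and sample text chosen as assumptions.

```python
def chunk_text(text: str, chunk_size: int = 800, overlap: int = 100) -> list[str]:
    """Split extracted document text into overlapping chunks ready for embedding."""
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + chunk_size, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        start = end - overlap  # keep some context across chunk boundaries
    return chunks

# Example usage with placeholder text standing in for a cracked Word/PowerPoint file.
sample = "Extracted paragraph text from a knowledge repository. " * 100
print(len(chunk_text(sample)), "chunks")
```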
Posted 1 month ago
5.0 - 7.0 years
6 - 9 Lacs
Kolkata, West Bengal, India
On-site
Key Responsibilities:

Ontology Development:
- Design and implement ontologies based on BFO and CCO principles, ensuring alignment with business needs and industry standards.
- Collaborate with domain experts to capture and formalize domain knowledge into ontological structures.
- Develop and maintain comprehensive ontologies modeling business entities, relationships, and processes.

Data Modeling:
- Design and implement semantic and syntactic data models adhering to ontological principles.
- Create scalable, flexible, and adaptable data models to meet evolving business requirements.
- Integrate data models with existing data infrastructure and applications.

Knowledge Graph Implementation:
- Design and build knowledge graphs grounded in ontologies and data models.
- Develop algorithms and tools for knowledge graph population, enrichment, and maintenance.
- Utilize knowledge graphs to support advanced analytics, search, and recommendation systems.

Data Quality and Governance:
- Ensure quality, accuracy, and consistency across ontologies, data models, and knowledge graphs.
- Define and enforce data governance processes and standards for ontology development and upkeep.

Collaboration and Communication:
- Work closely with data scientists, software engineers, and business stakeholders to understand data requirements and deliver tailored solutions.
- Clearly communicate complex technical concepts to varied audiences.

Qualifications:
- Education: Bachelor's or Master's degree in Computer Science, Data Science, or a related discipline.
- Experience: 5+ years in data engineering or a related field.
- Proven experience in ontology development using BFO, CCO, or similar frameworks.
- Strong knowledge of semantic web technologies: RDF, OWL, SPARQL, SHACL.
- Proficiency in Python, SQL, and other relevant programming languages.
- Experience with graph databases (e.g., TigerGraph, JanusGraph) and triple stores (e.g., GraphDB, Stardog) is a plus.

Desired Skills:
- Familiarity with machine learning and natural language processing.
- Experience with cloud data platforms such as AWS, Azure, or GCP.
- Exposure to Databricks technologies like Spark, Delta Lake, Iceberg, Unity Catalog, UniForm, and Photon.
- Strong analytical and problem-solving skills.
- Excellent communication and interpersonal abilities.
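A tiny illustrative sketch of the semantic-web stack named above (not from the posting), using Python's rdflib to assert a class and an individual and query them with SPARQL; the namespace and terms are invented placeholders rather than real BFO/CCO IRIs.

```python
from rdflib import Graph, Namespace, Literal, RDF, RDFS, OWL

# Placeholder namespace; a production ontology would align classes to BFO/CCO IRIs.
EX = Namespace("http://example.org/ontology/")

g = Graph()
g.bind("ex", EX)

# Declare a class and an individual, then label them.
g.add((EX.Supplier, RDF.type, OWL.Class))
g.add((EX.Supplier, RDFS.label, Literal("Supplier")))
g.add((EX.AcmeCorp, RDF.type, EX.Supplier))
g.add((EX.AcmeCorp, RDFS.label, Literal("Acme Corp")))

# SPARQL over the in-memory graph.
results = g.query("""
    PREFIX ex: <http://example.org/ontology/>
    SELECT ?s WHERE { ?s a ex:Supplier }
""")
for row in results:
    print(row.s)
```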
Posted 1 month ago
5.0 - 7.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
We have a great opportunity for the role of Senior Data Engineer - Azure, Databricks. Relevant experience: 5+ years.

Job Description - mandatory skills: Platform Engineer, Databricks, Azure Cloud, Data Engineer, PySpark, SQL.
- Hands-on in data structures, distributed systems, Spark, SQL, PySpark, and NoSQL databases.
- Strong software development skills in at least one of Python, PySpark, or Scala.
- Develop and maintain scalable and modular data pipelines using Databricks and Apache Spark.
- Experience building and supporting large-scale systems in a production environment.
- Experience working with at least one of the cloud storage offerings - AWS S3, Azure Data Lake, or GCS buckets.
- Exposure to the ELT tools offered by the cloud platforms, such as ADF, AWS Glue, and Google Dataflow.
- Integrate Databricks with other cloud services like AWS, Azure, or Google Cloud.
- Implement and optimize Spark jobs, data transformations, and data processing workflows in Databricks.
- Manage and configure Databricks environments, including clusters and notebooks.
- Experience working with different file formats - Parquet, Databricks Delta Lake.
- Experience working with data governance on Databricks, implementing Unity Catalog and Delta Sharing.
- Leadership qualities.

Immediate joiners to 15 days preferred. Job location: Remote.

Thanks and regards,
iitjobs, Inc.
Register for a global opportunity on the world's first and only global technology job portal: www.iitjobs.com. Download our app on the Apple App Store and Google Play Store! Refer and earn ₹50,000!
Posted 1 month ago
7.0 - 11.0 years
0 Lacs
Maharashtra
On-site
As a Technical Lead - Azure Databricks at CitiusTech, you will join an Agile team to contribute to the design and development of healthcare applications, implementing new features while adhering to the best coding and development standards. Your responsibilities will include:
- Creating and configuring Databricks workspaces across various environments, and setting up clusters, pools, and workspace permissions according to enterprise standards.
- Designing and developing reusable Databricks notebooks for data ingestion, transformation, and analytics.
- Implementing scalable Python scripts for large-scale data processing.
- Orchestrating Databricks Workflows to automate notebook execution, DLT pipelines, and ML model deployment.
- Building and managing Delta Live Tables (DLT) pipelines for real-time and batch data processing, and ensuring data quality, lineage, and monitoring using DLT best practices.
- Developing and deploying MLflow pipelines within Databricks for model tracking, versioning, and deployment, and collaborating with data scientists to operationalize machine learning models using MLflow.
- Leveraging Unity Catalog for data governance, access control, and lineage tracking, and ensuring compliance with data security and privacy standards.

To excel in this role, you should have 7-8 years of experience and hold an Engineering Degree (BE / ME / BTech / MTech / BSc / MSc). Mandatory technical skills include proven experience with the Databricks platform, proficiency in Python, hands-on experience with Delta Live Tables (DLT) and MLflow, familiarity with Unity Catalog, experience with cloud platforms such as Azure, AWS, or GCP, an understanding of CI/CD practices and DevOps in data engineering, and excellent problem-solving and communication skills.

CitiusTech is committed to combining IT services, consulting, products, accelerators, and frameworks with a client-first mindset and next-gen tech understanding to humanize healthcare and make a positive impact on human lives. Our culture fosters innovation and continuous learning, promoting a fun, transparent, non-hierarchical, and diverse work environment focused on work-life balance. By joining CitiusTech, you will have the opportunity to collaborate with global leaders, shape the future of healthcare, and positively impact human lives. Discover more about CitiusTech by visiting https://www.citiustech.com/careers and be part of a team dedicated to Faster Growth, Higher Learning, and Stronger Impact. Happy applying!
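A purely illustrative sketch of the MLflow model tracking mentioned above (not part of the posting): the snippet trains a toy scikit-learn model and logs parameters, a metric, and the model artifact; the dataset and run name are placeholder assumptions.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Toy dataset standing in for real feature/label tables.
X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

with mlflow.start_run(run_name="rf_baseline"):
    model = RandomForestClassifier(n_estimators=200, random_state=42)
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))

    # Track hyperparameters, metrics, and the trained model for later versioning/deployment.
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("accuracy", acc)
    mlflow.sklearn.log_model(model, "model")
```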
Posted 1 month ago
10.0 - 18.0 years
0 Lacs
Pune, Maharashtra
On-site
We are looking for a seasoned Senior Data Architect with extensive knowledge of Databricks and Microsoft Fabric to join our team. In this role, you will lead the design and implementation of scalable data solutions for BFSI and HLS clients.

As a Senior Data Architect specializing in Databricks and Microsoft Fabric, you will play a crucial role in architecting and implementing secure, high-performance data solutions on the Databricks and Microsoft Fabric platforms. Your responsibilities will include leading discovery workshops, designing end-to-end data pipelines, optimizing workloads for performance and cost efficiency, and ensuring compliance with data governance, security, and privacy policies. You will collaborate with client stakeholders and internal teams to deliver technical engagements and provide guidance on best practices for Databricks and Microsoft Azure. Additionally, you will stay updated on the latest industry developments and recommend new data architectures, technologies, and standards to enhance our solutions.

As a subject matter expert in Databricks and Microsoft Fabric, you will deliver workshops, webinars, and technical presentations, and develop white papers and reusable artifacts that showcase our company's value proposition. You will also work closely with Databricks partnership teams to contribute to co-marketing and joint go-to-market strategies. In terms of business development support, you will collaborate with sales and pre-sales teams to provide technical guidance during RFP responses and identify upsell and cross-sell opportunities within existing accounts.

To be successful in this role, you should have a minimum of 10+ years of experience in data architecture, engineering, or analytics roles, with specific expertise in Databricks and Microsoft Fabric, along with strong communication and presentation skills and the ability to collaborate effectively with diverse teams. Certifications in cloud platforms such as AWS and Microsoft Azure will be advantageous.

In return, we offer a competitive salary and benefits package, a culture focused on talent development, and opportunities to work with cutting-edge technologies. At Persistent, we are committed to fostering diversity and inclusion in the workplace and invite applications from all qualified individuals. We provide a supportive and inclusive environment where all employees can thrive and unleash their full potential. Join us at Persistent and accelerate your growth professionally and personally while making a positive impact on the world with the latest technologies.
Posted 1 month ago
4.0 - 8.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Big Data Architect specializing in Databricks at Codvo, a global empathy-led technology services company, your role is critical in designing sophisticated data solutions that drive business value for enterprise clients and power internal AI products. Your expertise will be instrumental in architecting scalable, high-performance data lakehouse platforms and end-to-end data pipelines, making you the go-to expert for modern data architecture in a cloud-first world.

Your key responsibilities will include designing and documenting robust, end-to-end big data solutions on cloud platforms (AWS, Azure, GCP) with a focus on the Databricks Lakehouse Platform. You will provide technical guidance and oversight to data engineering teams on best practices for data ingestion, transformation, and processing using Spark. Additionally, you will design and implement effective data models and establish data governance policies for data quality, security, and compliance within the lakehouse. You will also evaluate and recommend appropriate data technologies, tools, and frameworks to meet project requirements, collaborate closely with stakeholders to translate complex business requirements into a tangible technical architecture, and lead and build Proofs of Concept (PoCs) to validate architectural approaches and new technologies in the big data and AI space.

To excel in this role, you should have 10+ years of experience in data engineering, data warehousing, or software engineering, with at least 4+ years in a dedicated Data Architect role. Deep, hands-on expertise with Apache Spark and the Databricks platform is mandatory, including Delta Lake, Unity Catalog, and Structured Streaming. Proven experience architecting and deploying data solutions on major cloud providers, proficiency in Python or Scala, expert-level SQL skills, a strong understanding of modern AI concepts, and in-depth knowledge of data warehousing concepts and modern lakehouse patterns are essential.

This position is remote and based in India, with working hours from 2:30 PM to 11:30 PM. Join us at Codvo and be part of a team that values product innovation and mature software engineering, and lives its core values of Respect, Fairness, Growth, Agility, and Inclusiveness each day to offer expertise, outside-the-box thinking, and measurable results.
Posted 1 month ago
8.0 - 12.0 years
0 Lacs
Pune, Maharashtra
On-site
The ideal candidate for this position has 8-12 years of experience and a strong understanding of, and hands-on experience with, Microsoft Fabric. You will be responsible for designing and implementing end-to-end data solutions on Microsoft Azure, including data lakes, data warehouses, and ETL/ELT processes. Your role will involve developing scalable and efficient data architectures that support large-scale data processing and analytics workloads, while ensuring high performance, security, and compliance within Azure data solutions. You should understand techniques such as the lakehouse and the warehouse, along with experience implementing them. You will also evaluate and select appropriate Azure services such as Azure SQL Database, Azure Synapse Analytics, Azure Data Lake Storage, Azure Databricks, Unity Catalog, and Azure Data Factory; deep knowledge of and hands-on experience with these Azure data services are essential.

You will collaborate closely with business and technical teams to understand data needs and translate them into robust and scalable data architecture solutions, and you should have experience with data governance, data privacy, and compliance requirements. Excellent communication and interpersonal skills are necessary for effective collaboration with cross-functional teams. In this role, you will provide expertise and leadership to the development team implementing data engineering solutions, work with data scientists, analysts, and other stakeholders to ensure data architectures align with business goals and data analysis requirements, and optimize cloud-based data infrastructure for performance, cost-effectiveness, and scalability.

Experience with programming languages such as SQL, Python, and Scala is required. Hands-on experience with MS SQL Server, Oracle, or similar RDBMS platforms is preferred, and familiarity with Azure DevOps and CI/CD pipeline development is beneficial. An in-depth understanding of database structure principles and distributed processing of big data batch or streaming pipelines is essential, as is knowledge of data visualization tools such as Power BI and Tableau, along with data modeling and strong analytics skills. The candidate should be able to convert OLTP data structures into a star schema, and ideally has DBT experience along with data modeling experience. A problem-solving attitude, self-motivation, attention to detail, and effective task prioritization are essential for this role.

At Hitachi, attitude and aptitude are highly valued, as collaboration is key. While not all skills are required, experience with Azure SQL Data Warehouse, Azure Data Factory, Azure Data Lake, Azure Analysis Services, Databricks/Spark, Python or Scala, data modeling, Power BI, and database migration is desirable. Designing conceptual, logical, and physical data models using tools like ER Studio and Erwin is a plus.
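A simplified illustrative sketch of converting an OLTP-style orders table into a star schema with PySpark (not from the posting); all table and column names are invented placeholders.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("star_schema_demo").getOrCreate()

# OLTP-style source with customer attributes embedded in each order row.
orders = spark.table("oltp.orders")

# Customer dimension: one row per customer, with a surrogate key.
dim_customer = (
    orders.select("customer_id", "customer_name", "customer_city")
          .dropDuplicates(["customer_id"])
          .withColumn("customer_sk", F.monotonically_increasing_id())
)

# Fact table: measures plus foreign keys to the dimension.
fact_orders = (
    orders.join(dim_customer.select("customer_id", "customer_sk"), on="customer_id")
          .select("order_id", "customer_sk", "order_date", "order_amount")
)

dim_customer.write.format("delta").mode("overwrite").saveAsTable("dw.dim_customer")
fact_orders.write.format("delta").mode("overwrite").saveAsTable("dw.fact_orders")
```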
Posted 1 month ago
9.0 - 11.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Job Description

Qualifications:
- Overall 9+ years of IT experience.
- Minimum of 5+ years managing Data Lakehouse environments preferred; specific experience with Azure Databricks, Snowflake, and DBT (nice to have) is a plus.
- Hands-on experience with data warehousing, data lake/lakehouse solutions, data pipelines (ELT/ETL), SQL, Spark/PySpark, and DBT.
- Strong understanding of data modelling, SDLC, Agile, and DevOps principles.
- Bachelor's degree in management/computer information systems, computer science, accounting information systems, or a relevant field.

Knowledge/Skills:
- Tools and technologies: Azure Databricks, Apache Spark, Python, Databricks SQL, Unity Catalog, and Delta Live Tables.
- Understanding of cluster configuration and of compute and storage layers.
- Expertise with Snowflake architecture, with experience in its design, development, and evolution.
- System integration experience, including data extraction, transformation, and quality-control design techniques.
- Familiarity with data science concepts, as well as MDM, business intelligence, and data warehouse design and implementation techniques.
- Extensive experience with the medallion architecture data management framework as well as Unity Catalog.
- Data modeling and information classification expertise at the enterprise level.
- Understanding of metamodels, taxonomies, and ontologies, as well as the challenges of applying structured techniques (data modeling) to less-structured sources.
- Ability to assess rapidly changing technologies and apply them to business needs.
- Able to translate the information architecture's contribution to business outcomes into simple briefings for various data-and-analytics-related roles.

About Us
Datavail is a leading provider of data management, application development, analytics, and cloud services, with more than 1,000 professionals helping clients build and manage applications and data via a world-class tech-enabled delivery platform and software solutions across all leading technologies. For more than 17 years, Datavail has worked with thousands of companies spanning different industries and sizes, and is an AWS Advanced Tier Consulting Partner, a Microsoft Solutions Partner for Data & AI and Digital & App Innovation (Azure), an Oracle Partner, and a MySQL Partner.

About The Team
Datavail's Data Management Services: Datavail's Data Management and Analytics practice is made up of experts who provide a variety of data services, including initial consulting and development, the design and build of complete data systems, and ongoing support and management of database, data warehouse, data lake, data integration, and virtualization and reporting environments. Datavail's team is composed of not just excellent BI and analytics consultants, but great people as well. Datavail's data intelligence consultants are experienced, knowledgeable, and certified in best-of-breed BI and analytics software applications and technologies. We ascertain your business objectives, goals, and requirements, assess your environment, and recommend the tools that best fit your unique situation. Our proven methodology can help your project succeed, regardless of stage. The combination of a proven delivery model and top-notch experience ensures that Datavail will remain the data management experts on demand you desire. Datavail's flexible and client-focused services always add value to your organization.
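A compressed, illustrative sketch of the medallion (bronze/silver/gold) flow referenced above (not part of the posting), assuming a Databricks notebook where `spark` is predefined; paths and table names are placeholder assumptions.

```python
from pyspark.sql import functions as F

# Bronze: land raw data as-is, with load metadata.
bronze = (spark.read.json("/mnt/raw/transactions")
               .withColumn("_loaded_at", F.current_timestamp()))
bronze.write.format("delta").mode("append").saveAsTable("lakehouse.bronze.transactions")

# Silver: validated, de-duplicated, conformed records.
silver = (spark.table("lakehouse.bronze.transactions")
               .filter(F.col("transaction_id").isNotNull())
               .dropDuplicates(["transaction_id"]))
silver.write.format("delta").mode("overwrite").saveAsTable("lakehouse.silver.transactions")

# Gold: business-level aggregate ready for BI consumption.
gold = (silver.groupBy("customer_id")
              .agg(F.sum("amount").alias("total_spend"),
                   F.count("*").alias("txn_count")))
gold.write.format("delta").mode("overwrite").saveAsTable("lakehouse.gold.customer_spend")
```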
Posted 1 month ago
0.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Work Location: Hyderabad

What Gramener offers you
Gramener will offer you an inviting workplace, talented colleagues from diverse backgrounds, career paths, and steady growth prospects with great scope to innovate. We aim to create an ecosystem of easily configurable data applications focused on storytelling for public and private use.

Data Architect
We are seeking an experienced Data Architect to design and govern scalable, secure, and efficient data platforms in a data mesh environment. You will lead data architecture initiatives across multiple domains, enabling self-serve data products built on Databricks and AWS, and support both operational and analytical use cases.

Key Responsibilities
- Design and implement enterprise-grade data architectures leveraging the medallion architecture (Bronze, Silver, Gold).
- Develop and enforce data modelling standards, including flattened data models optimized for analytics.
- Define and implement MDM strategies (Reltio), data governance frameworks (Collibra), and data classification policies.
- Lead the development of data landscapes, capturing sources, flows, transformations, and consumption layers.
- Collaborate with domain teams to ensure consistency across decentralized data products in a data mesh architecture.
- Guide best practices for ingesting and transforming data using Fivetran, PySpark, SQL, and Delta Live Tables (DLT).
- Define metadata and data quality standards across domains.
- Provide architectural oversight for data platform development on Databricks (Lakehouse) and the AWS ecosystem.

Key Skills & Qualifications
Must-have technical skills (Reltio, Collibra, Ataccama, Immuta):
- Experience in the Pharma domain.
- Data modeling (dimensional, flattened, common data model, canonical, and domain-specific), with entity-level data understanding from a business-process point of view.
- Master Data Management (MDM) principles and tools (Reltio).
- Data governance and data classification frameworks.
- Strong experience with Fivetran, PySpark, SQL, and Python.
- Deep understanding of Databricks (Delta Lake, Unity Catalog, Workflows, DLT).
- Experience with AWS data services (e.g., S3, Glue, Redshift, IAM).
- Experience with Snowflake.

Architecture & Design
- Proven expertise in Data Mesh or domain-oriented data architecture.
- Experience with medallion/lakehouse architecture.
- Ability to create data blueprints and landscape maps across complex enterprise systems.

Soft Skills
- Strong stakeholder management across business and technology teams.
- Ability to translate business requirements into scalable data designs.
- Excellent communication and documentation skills.

Preferred Qualifications
- Familiarity with regulatory and compliance frameworks (e.g., GxP, HIPAA, GDPR).
- Background in data product building.

About Us
We consult and deliver solutions to organizations where data is at the core of decision-making. We undertake strategic data consulting for organizations, laying out the roadmap for data-driven decision-making and helping them convert data into a strategic differentiator. Through our products, solutions, and service offerings, we analyze and visualize large amounts of data. To know more about us, visit the Gramener website and blog.
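A purely illustrative sketch of the Delta Live Tables approach mentioned above (not from the posting), declaring a bronze and a silver table with a data-quality expectation via the DLT Python API; the source path and column names are assumptions, and `spark` is provided by the DLT runtime.

```python
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Bronze: raw records landed from the ingestion tool")
def orders_bronze():
    # In a DLT pipeline, `spark` is supplied by the runtime.
    return (spark.read.format("json").load("/mnt/landing/orders")
                 .withColumn("_loaded_at", F.current_timestamp()))

@dlt.table(comment="Silver: validated, de-duplicated orders")
@dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")
def orders_silver():
    return dlt.read("orders_bronze").dropDuplicates(["order_id"])
```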
Posted 1 month ago
7.0 - 9.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Location: Bangalore/Gurugram/Hyderabad. Years of experience: 7+.

We are seeking a talented Data Engineer with strong expertise in Databricks, specifically Unity Catalog, PySpark, and SQL, to join our data team. You'll play a key role in building secure, scalable data pipelines and implementing robust data governance strategies using Unity Catalog.

Key Responsibilities:
- Design and implement ETL/ELT pipelines using Databricks and PySpark.
- Work with Unity Catalog to manage data governance, access controls, lineage, and auditing across data assets.
- Develop high-performance SQL queries and optimize Spark jobs.
- Collaborate with data scientists, analysts, and business stakeholders to understand data needs.
- Ensure data quality and compliance across all stages of the data lifecycle.
- Implement best practices for data security and lineage within the Databricks ecosystem.
- Participate in CI/CD, version control, and testing practices for data pipelines.

Required Skills:
- Proven experience with Databricks and Unity Catalog (data permissions, lineage, audits).
- Strong hands-on skills with PySpark and Spark SQL.
- Solid experience writing and optimizing complex SQL queries.
- Familiarity with Delta Lake, data lakehouse architecture, and data partitioning.
- Experience with cloud platforms like Azure or AWS.
- Understanding of data governance, RBAC, and data security standards.

Preferred Qualifications:
- Databricks Certified Data Engineer Associate or Professional.
- Experience with tools like Airflow, Git, Azure Data Factory, or dbt.
- Exposure to streaming data and real-time processing.
- Knowledge of DevOps practices for data engineering.
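A minimal illustrative sketch of Unity Catalog access control of the kind described above (not from the posting), run from a Databricks notebook (where `spark` is predefined) by a user with sufficient privileges; the catalog, schema, table, and group names are invented placeholders.

```python
# Create governed containers, then grant a group read access down the hierarchy.
spark.sql("CREATE CATALOG IF NOT EXISTS analytics")
spark.sql("CREATE SCHEMA IF NOT EXISTS analytics.sales")

spark.sql("GRANT USE CATALOG ON CATALOG analytics TO `data_analysts`")
spark.sql("GRANT USE SCHEMA ON SCHEMA analytics.sales TO `data_analysts`")
spark.sql("GRANT SELECT ON TABLE analytics.sales.orders TO `data_analysts`")

# Inspect effective grants for auditing.
spark.sql("SHOW GRANTS ON TABLE analytics.sales.orders").show(truncate=False)
```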
Posted 1 month ago
5.0 - 10.0 years
0 Lacs
Karnataka
On-site
You are a senior developer with 5 to 10 years of experience and full database/data warehouse/data mart development capabilities. Your expertise includes proficiency with Azure Data Factory and database programming, encompassing stored procedures, SSIS development, and user-defined data types. As a senior subject matter expert (SME) in SQL Server, you are instrumental in handling standard file formats and markup-language technologies like JSON, XML, and XSLT. Your proficiency extends to Azure cloud data environments, making you a valuable asset. Your exceptional verbal communication skills, along with good problem-solving abilities and attention to detail, set you apart. Additionally, your expertise in data warehousing principles and design further bolsters your profile.

Nice-to-have skills include familiarity with development frameworks such as .NET and Azure DevOps, as well as Databricks and Unity Catalog. Exposure to other database technologies like MongoDB, Cosmos DB, MySQL, or Oracle is a plus. Knowledge of fundamental front-end languages such as HTML, CSS, and JavaScript, along with familiarity with JavaScript frameworks like AngularJS, React, and Ember, is advantageous. Moreover, being acquainted with server-side languages such as Python, Ruby, Java, PHP, and .NET would be beneficial.
Posted 1 month ago
4.0 - 9.0 years
9 - 19 Lacs
Noida, Hyderabad, Pune
Work from Office
Overview: As a Data Engineer, you will work with multiple teams to deliver solutions on the AWS Cloud using core cloud data engineering tools such as Databricks on AWS, AWS Glue, Amazon Redshift, Athena, and other Big Data-related technologies. This role focuses on building the next generation of application-level data platforms and improving recent implementations. Hands-on experience with Apache Spark (PySpark, SparkSQL), Delta Lake, Iceberg, and Databricks is essential.

Responsibilities:
- Define, design, develop, and test software components/applications using AWS-native data services: Databricks on AWS, AWS Glue, Amazon S3, Amazon Redshift, Athena, AWS Lambda, and Secrets Manager.
- Build and maintain ETL/ELT pipelines for both batch and streaming data.
- Work with structured and unstructured datasets at scale.
- Apply data modeling principles and advanced SQL techniques.
- Implement and manage pipelines using Apache Spark (PySpark, SparkSQL) and Delta Lake/Iceberg formats.
- Collaborate with product teams to understand requirements and deliver optimized data solutions.
- Utilize CI/CD pipelines with DBX and AWS for continuous delivery and deployment of Databricks code.
- Work independently with minimal supervision and strong ownership of deliverables.

Must Have:
- 4+ years of experience in Data Engineering on the AWS Cloud.
- Hands-on expertise in Apache Spark (PySpark, SparkSQL), Delta Lake/Iceberg formats, Databricks on AWS, AWS Glue, Amazon Athena, and Amazon Redshift.
- Strong SQL skills and performance-tuning experience on large datasets.
- Good understanding of CI/CD pipelines, especially using DBX and AWS tools.
- Experience with environment setup, cluster management, user roles, and authentication in Databricks.
- Databricks Certified Data Engineer Professional certification (mandatory).

Good To Have:
- Experience migrating ETL pipelines from on-premise or other clouds to AWS Databricks.
- Experience with Databricks ML or Spark 3.x upgrades.
- Familiarity with Airflow, Step Functions, or other orchestration tools.
- Experience integrating Databricks with AWS services in a secured, production-ready environment.
- Experience with monitoring and cost optimization in AWS.

Key Skills:
- Languages: Python, SQL, PySpark
- Big Data tools: Apache Spark, Delta Lake, Iceberg
- Databricks on AWS
- AWS services: AWS Glue, Athena, Redshift, Lambda, S3, Secrets Manager
- Version control and CI/CD: Git, DBX, AWS CodePipeline/CodeBuild
- Other: data modeling, ETL methodology, performance optimization
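An illustrative sketch (not part of the posting) of running an Athena query from Python with boto3, as this stack implies; the database, query, S3 output location, and region are invented placeholders.

```python
import time
import boto3

athena = boto3.client("athena", region_name="ap-south-1")

# Submit the query; results land in the specified S3 location.
submission = athena.start_query_execution(
    QueryString="SELECT order_date, SUM(amount) AS revenue FROM orders GROUP BY order_date",
    QueryExecutionContext={"Database": "sales_lake"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = submission["QueryExecutionId"]

# Poll until the query reaches a terminal state.
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    print(f"Fetched {len(rows)} result rows (including the header row)")
```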
Posted 1 month ago
6.0 - 10.0 years
18 - 30 Lacs
Noida
Remote
Alcor Solutions is a leading digital transformation and cloud consulting firm, enabling enterprises to leverage modern data platforms, cloud-native solutions, and emerging technologies to drive innovation and business outcomes. We are seeking an Azure Databricks Lead Engineer to architect, administer, and scale our enterprise Databricks environment while establishing best practices and governance across the organization.

Role Overview
The Azure Databricks Lead Engineer will serve as the subject matter expert (SME) for our Databricks platform, overseeing its architecture, administration, and optimization across the enterprise. This individual will be responsible for designing the Databricks medallion architecture, implementing Unity Catalog governance, managing cluster policies, and driving the adoption of Databricks best practices. The role combines hands-on administration with strategic platform design, ensuring Databricks is secure, scalable, and fully aligned with enterprise data initiatives.

Responsibilities:
- Design and implement the Databricks medallion architecture for enterprise-wide data usage.
- Define the workspace strategy, including design, configuration, and organization for multi-team and multi-project usage.
- Establish best practices for data ingestion, transformation, and consumption using Databricks.
- Administer and optimize the Azure Databricks platform (clusters, pools, compute policies).
- Set up and manage Unity Catalog, including access control, catalog/schema security, and governance for PII and PHI data.
- Create and enforce cluster management policies, workspace permissions, and environment segregation (dev/test/prod).
- Monitor platform health, usage, and costs; implement scaling and tuning strategies.
- Collaborate with cloud infrastructure, networking, and security teams to ensure Databricks integrates with enterprise systems and policies.
- Support data engineers, data scientists, and analytics teams by enabling seamless deployment of applications and pipelines on Databricks.
- Act as the primary technical advisor and escalation point for Databricks-related issues.
- Define processes for sizing and provisioning Databricks clusters based on project requirements.
- Establish governance for non-production and production environments to ensure compliance and cost efficiency.
- Build guidelines for version control, Unity Catalog repository configuration, and DevOps pipelines.
- Serve as the Databricks SME across the enterprise, evangelizing best practices and new features.
- Mentor internal teams and conduct workshops or training sessions to upskill stakeholders on Databricks usage.
- Stay current with Databricks and Azure advancements, ensuring the enterprise platform evolves with industry standards.

Experience & Qualifications:
- 7+ years of relevant experience in data engineering, cloud data platforms, or similar roles.
- Proven track record of Databricks administration and architecture, ideally with formal medallion architecture experience.
- Experience managing large-scale Databricks environments across multiple teams and projects.
- Deep expertise in Azure Databricks, Unity Catalog, and Delta Lake.
- Strong knowledge of cluster configuration, compute policies, and workspace design.
- Familiarity with the Azure ecosystem (Azure Data Lake, Key Vault, Azure Active Directory).
- Understanding of data governance, security policies, and compliance standards (handling of PII/PHI).
- Experience with CI/CD pipelines, version control (Git), and infrastructure-as-code practices is desirable.
Posted 1 month ago
5.0 - 10.0 years
20 - 25 Lacs
Bengaluru
Hybrid
Job title: Senior Software Engineer
Experience: 5-8 years
Primary skills: Python, Spark or PySpark, DWH ETL
Database: Spark SQL or PostgreSQL
Secondary skills: Databricks (Delta Lake, Delta tables, Unity Catalog)
Work model: Hybrid (twice weekly)
Cab facility: Yes
Work timings: 10am to 7pm
Interview process: 3 rounds (3rd round face-to-face mandatory)
Work location: Karle Town Tech Park, Nagawara, Hebbal, Bengaluru 560045

About Business Unit: The Architecture Team plays a pivotal role in the end-to-end design, governance, and strategic direction of product development within Epsilon People Cloud (EPC). As a centre of technical excellence, the team ensures that every product feature is engineered to meet the highest standards of scalability, security, performance, and maintainability. Their responsibilities span architectural ownership of critical product features, techno-product leadership, architectural governance, and ensuring systems are built with scalability, security, and compliance in mind. They design multi-cloud and hybrid-cloud solutions that support seamless integration across diverse environments and contribute significantly to interoperability between EPC products and the broader enterprise ecosystem. The team fosters innovation and technical leadership while actively collaborating with key partners to align technology decisions with business goals. Through this, the Architecture Team ensures the delivery of future-ready, enterprise-grade, efficient, performant, secure, and resilient platforms that form the backbone of Epsilon People Cloud.

Why we are looking for you: You have experience working as a Data Engineer with strong database fundamentals and an ETL background. You have experience working in a data warehouse environment, dealing with data volumes of terabytes and above, and working with relational database systems, preferably PostgreSQL and Spark SQL. You have excellent designing and coding skills and can mentor a junior engineer in the team. You have excellent written and verbal communication skills, are experienced and comfortable working with global clients, and work well with teams, including multiple collaborators such as clients, vendors, and delivery teams. You are proficient with bug tracking and test management toolsets that support development processes such as CI/CD.

What you will enjoy in this role: As part of the Epsilon Technology practice, the pace of the work matches the fast-evolving demands of the industry. You will get to work on the latest tools and technology and deal with data at petabyte scale, work on homegrown frameworks on Spark and Airflow, and gain exposure to the digital marketing domain, where Epsilon is a market leader. You will understand and work closely with consumer data across different segments, which provides insights into consumer behaviours and patterns used to design digital ad strategies. As part of a dynamic team, you will have opportunities to innovate and put your recommendations forward, using existing standard methodologies and defining new ones as industry standards evolve. You will work with Business, System, and Delivery teams to build a solid foundation in the digital marketing domain, in an open and transparent environment that values innovation and efficiency. Click here to view how Epsilon transforms marketing with 1 View, 1 Vision and 1 Voice.

What will you do?
- Develop a deep understanding of the business context in which your team operates and present feature recommendations in an agile working environment.
- Lead, design, and code solutions, on and off the database, to ensure application access and enable data-driven decision-making for the company's multi-faceted ad-serving operations.
- Work closely with Engineering resources across the globe to ensure enterprise data warehouse solutions and assets are actionable, accessible, and evolving in lockstep with the needs of the ever-changing business model.
- Design, develop, and manage ETL/ELT pipelines in Databricks using PySpark/Spark SQL, integrating various data sources to support business operations. This role requires deep expertise in Spark and strong proficiency in ETL, SQL, and modern data engineering practices.
- Lead in the areas of solution design, code development, quality assurance, data modelling, and business intelligence, and mentor junior engineers in the team.
- Stay abreast of developments in the data world in terms of governance, quality, and performance optimization.
- Hold effective client meetings, understand deliverables, and drive successful outcomes.

Qualifications:
- Bachelor's degree in Computer Science or an equivalent degree is required.
- 5-8 years of data engineering experience with expertise in Apache Spark and databases (preferably Databricks) in marketing technologies and data management, and a technical understanding of these areas.
- Ability to monitor and tune Databricks workloads to ensure high performance and scalability, adapting to business needs as required.
- Solid experience in basic and advanced SQL writing and tuning.
- Experience with Python.
- Solid understanding of CI/CD practices, with experience in Git for version control and integration for Spark data projects.
- Good understanding of Disaster Recovery and Business Continuity solutions.
- Experience with scheduling applications with complex interdependencies, preferably Airflow.
- Good experience working with geographically and culturally diverse teams.
- Understanding of data management concepts in both traditional relational databases and big data lakehouse solutions such as Apache Hive, AWS Glue, or Databricks.
- Excellent written and verbal communication skills, and the ability to handle complex products.
- Good communication and problem-solving skills, with the ability to manage multiple priorities, diagnose and solve problems quickly, multi-task, prioritize, adapt to changing priorities, and manage time well.
- Good to have: knowledge of cloud platforms (cloud security) and familiarity with Terraform or other infrastructure-as-code tools.

About Epsilon: Epsilon is a global data, technology and services company that powers the marketing and advertising ecosystem. For decades, we have provided marketers from the world's leading brands the data, technology and services they need to engage consumers with 1 View, 1 Vision and 1 Voice: 1 View of their universe of potential buyers, 1 Vision for engaging each individual, and 1 Voice to harmonize engagement across paid, owned and earned channels. Epsilon's comprehensive portfolio of capabilities across our suite of digital media, messaging and loyalty solutions bridges the divide between marketing and advertising technology. We process 400+ billion consumer actions each day using advanced AI and hold many patents of proprietary technology, including real-time modeling languages and consumer privacy advancements.
Thanks to the work of every employee, Epsilon has been consistently recognized as industry-leading by Forrester, Adweek and the MRC. Epsilon is a global company with more than 9,000 employees around the world.
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
Pune, Maharashtra
On-site
Join us as a Cloud Data Engineer at Barclays, where you'll spearhead the evolution of the digital landscape, driving innovation and excellence. You'll harness cutting-edge technology to revolutionize digital offerings, ensuring unparalleled customer experiences. You may be assessed on key critical skills relevant for success in the role, such as risk and control, change and transformation, business acumen, strategic thinking, and digital technology, as well as job-specific skill sets.

To be successful as a Cloud Data Engineer, you should have experience with:
- AWS Cloud technology for data processing and a good understanding of AWS architecture.
- Compute services such as EC2, Lambda, Auto Scaling, and VPC.
- Storage and container services such as ECS, S3, DynamoDB, and RDS.
- Management and governance services: KMS, IAM, CloudFormation, CloudWatch, and CloudTrail.
- Analytics services such as Glue, Athena, Crawler, Lake Formation, and Redshift.
- Solution delivery for data processing components in larger end-to-end projects.

Desirable skill sets / good to have:
- AWS Certified professional.
- Experience in data processing on Databricks and Unity Catalog.
- Ability to drive projects technically, with right-first-time deliveries within schedule and budget.
- Ability to collaborate across teams to deliver complex systems and components, and to manage stakeholders' expectations well.
- Understanding of different project methodologies, project lifecycles, major phases, dependencies and milestones within a project, and the required documentation needs.
- Experience with planning, estimating, organizing, and working on multiple projects.

This role will be based out of Pune.

Purpose of the role: To build and maintain systems that collect, store, process, and analyze data, such as data pipelines, data warehouses, and data lakes, to ensure that all data is accurate, accessible, and secure.

Accountabilities:
- Build and maintain data architecture pipelines that enable the transfer and processing of durable, complete, and consistent data.
- Design and implement data warehouses and data lakes that manage appropriate data volumes and velocity and adhere to required security measures.
- Develop processing and analysis algorithms fit for the intended data complexity and volumes.
- Collaborate with data scientists to build and deploy machine learning models.

Analyst Expectations:
- Will have an impact on the work of related teams within the area.
- Partner with other functions and business areas.
- Take responsibility for the end results of a team's operational processing and activities.
- Escalate breaches of policies/procedures appropriately.
- Take responsibility for embedding new policies/procedures adopted due to risk mitigation.
- Advise and influence decision-making within own area of expertise.
- Take ownership for managing risk and strengthening controls in relation to the work you own or contribute to.
- Deliver your work and areas of responsibility in line with relevant rules, regulations, and codes of conduct.
- Maintain and continually build an understanding of how your own sub-function integrates with the function, alongside knowledge of the organization's products, services, and processes within the function.
- Demonstrate understanding of how areas coordinate and contribute to the achievement of the objectives of the organization's sub-function.
- Resolve problems by identifying and selecting solutions through the application of acquired technical experience, guided by precedents.
- Guide and persuade team members and communicate complex/sensitive information.
- Act as a contact point for stakeholders outside of the immediate function, while building a network of contacts outside the team and external to the organization.

All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence, and Stewardship, our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset (to Empower, Challenge, and Drive), the operating manual for how we behave.
Posted 1 month ago
1.0 - 7.0 years
12 - 14 Lacs
Mumbai, Maharashtra, India
On-site
Job Location: Hyderabad / Bangalore / Chennai / Kolkata / Noida / Gurgaon / Pune / Indore / Mumbai
- At least 5+ years of relevant hands-on development experience in an Azure Data Engineering role.
- Proficient in Azure technologies like ADB, ADF, SQL (capability of writing complex SQL queries), PySpark, Python, Synapse, Delta Tables, and Unity Catalog.
- Hands-on in Python, PySpark, or Spark SQL.
- Hands-on in Azure Analytics and DevOps.
- Taking part in Proofs of Concept (POCs) and pilot solution preparation.
- Ability to conduct data profiling, cataloguing, and mapping for the technical design and construction of technical data flows.
- Experience in business process mapping of data and analytics solutions.
Posted 1 month ago
1.0 - 7.0 years
12 - 14 Lacs
Gurgaon, Haryana, India
On-site
Job Location: Hyderabad / Bangalore / Chennai / Kolkata / Noida / Gurgaon / Pune / Indore / Mumbai
- At least 5+ years of relevant hands-on development experience in an Azure Data Engineering role.
- Proficient in Azure technologies like ADB, ADF, SQL (capability of writing complex SQL queries), PySpark, Python, Synapse, Delta Tables, and Unity Catalog.
- Hands-on in Python, PySpark, or Spark SQL.
- Hands-on in Azure Analytics and DevOps.
- Taking part in Proofs of Concept (POCs) and pilot solution preparation.
- Ability to conduct data profiling, cataloguing, and mapping for the technical design and construction of technical data flows.
- Experience in business process mapping of data and analytics solutions.
Posted 1 month ago
1.0 - 7.0 years
12 - 14 Lacs
Hyderabad, Telangana, India
On-site
Job Location: Hyderabad / Bangalore / Chennai / Kolkata / Noida / Gurgaon / Pune / Indore / Mumbai
- At least 5+ years of relevant hands-on development experience in an Azure Data Engineering role.
- Proficient in Azure technologies like ADB, ADF, SQL (capability of writing complex SQL queries), PySpark, Python, Synapse, Delta Tables, and Unity Catalog.
- Hands-on in Python, PySpark, or Spark SQL.
- Hands-on in Azure Analytics and DevOps.
- Taking part in Proofs of Concept (POCs) and pilot solution preparation.
- Ability to conduct data profiling, cataloguing, and mapping for the technical design and construction of technical data flows.
- Experience in business process mapping of data and analytics solutions.
Posted 1 month ago
8.0 - 12.0 years
6 - 11 Lacs
Bengaluru, Karnataka, India
On-site
- At least 5+ years of relevant hands-on development experience in an Azure Data Engineering role.
- Proficient in Azure technologies like ADB, ADF, SQL (capability of writing complex SQL queries), PySpark, Python, Synapse, Delta Tables, and Unity Catalog.
- Hands-on in Python, PySpark, or Spark SQL.
- Hands-on in Azure Analytics and DevOps.
- Taking part in Proofs of Concept (POCs) and pilot solution preparation.
- Ability to conduct data profiling, cataloguing, and mapping for the technical design and construction of technical data flows.
- Experience in business process mapping of data and analytics solutions.
Posted 1 month ago
4.0 - 9.0 years
18 - 30 Lacs
Bengaluru
Work from Office
- Solid experience with Unity, Unreal Mobile, or custom C++ engines
- Open-world stack: terrain streaming, LOD, memory/environment management
- Performance across ARM-based mobile GPUs with Vulkan/Metal
- Cloud services, multiplayer sync, live-op support
Posted 1 month ago
5.0 - 8.0 years
6 - 8 Lacs
Bengaluru, Karnataka, India
On-site
Mandatory Skills: Azure skills such as Azure Databricks, Azure Synapse, Azure Functions, Azure Data Lake, Azure Integration Services, Azure API Management, CI/CD and PowerShell scripts, ADF, ETL pipelines, Unity Catalog, Azure SQL Database, SQL coding, Storage Accounts, Azure Data Explorer, Spark Structured Streaming, Netezza SQL queries, data lineage, and Spark jobs.

Role & Responsibilities / Job Description:
- Overall 4 to 8 years of experience, with a minimum of 4+ years of relevant professional work experience.
- Design, implement, and maintain data pipelines for data ingestion, processing, and transformation in Azure.
- Work with data analysts to understand data needs and create effective data workflows.
- Create and maintain data storage solutions using Azure Databricks, Azure Functions, Azure Data Lake, Azure Synapse, Azure Integration Services, Azure API Management, CI/CD and PowerShell scripts, ADF, ETL pipelines, Unity Catalog, and Azure SQL Database.
- Use Azure Data Factory to create and maintain ETL (Extract, Transform, Load) operations using ADF pipelines.
- Hands-on experience working on Databricks for implementing transformations and Delta Lake.
- Hands-on experience working on serverless SQL pools and dedicated SQL pools.
- Use ADF pipelines to orchestrate the end-to-end data transformation, including the execution of Databricks notebooks.
- Experience working on the medallion architecture.
- Experience working on CI/CD pipelines using Azure DevOps, attaching both ADF and ADB to DevOps.
- Create and manage Azure infrastructure across the landscape using Bicep.
- Implement data validation and cleansing procedures to ensure the quality, integrity, and dependability of the data.
- Improve the scalability, efficiency, and cost-effectiveness of data pipelines.
- Monitor and resolve data pipeline problems to guarantee consistency and availability of the data.
- Good to have: exposure to Power BI and Power Automate.

Task description: We are looking for a Data Engineer to join our growing team of analytics experts. The hire will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up, with good hands-on experience implementing incremental/delta loads. The role includes development activities on projects and building automation solutions for customers.

Deliverables Expected: The following shall be the deliverables as per the project's documented quality process:
- Updated requirements document / detailed bug analysis report
- Concept document (wherever applicable)
- Design document (for all features)
- Test specification document
- Source code
- Review report
- Test report
- Filled review checklists
- Traceability matrix
- User manuals and related artifacts
- Filled pre-delivery checklist (PDC)
- Release note and release mails
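A minimal illustrative sketch of the Spark Structured Streaming ingestion pattern named above (not from the posting), using the Databricks Auto Loader `cloudFiles` source to land files from ADLS into a Delta table; the paths and table name are invented placeholders, and `spark` is assumed to be a Databricks notebook session.

```python
SOURCE_PATH = "abfss://landing@examplelake.dfs.core.windows.net/events/"
CHECKPOINT = "abfss://bronze@examplelake.dfs.core.windows.net/_checkpoints/events"

# Incrementally discover and read new files as they arrive.
events = (
    spark.readStream.format("cloudFiles")
         .option("cloudFiles.format", "json")
         .option("cloudFiles.schemaLocation", CHECKPOINT + "/schema")
         .load(SOURCE_PATH)
)

# Write the stream into a Delta table; availableNow processes the backlog and stops.
(events.writeStream
       .format("delta")
       .option("checkpointLocation", CHECKPOINT)
       .trigger(availableNow=True)
       .toTable("bronze.events"))
```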
Posted 1 month ago
12.0 - 20.0 years
20 - 35 Lacs
Hyderabad, Bengaluru
Hybrid
Role & responsibilities: The candidate must be at Databricks Champion level in terms of expertise, with proven hands-on experience in a lead engineer capacity, having designed and implemented enterprise-level solutions in Databricks that leverage its core capabilities: Data Engineering (Delta Lake, the PySpark framework, Python, PySpark SQL, Notebooks, Jobs, Workflows), Governance (Unity Catalog), and Data Sharing (Delta Sharing, possibly also Clean Rooms and Marketplace). This experience must be recent, built up over at least the past 3 years. A good command of English and the ability to communicate clearly and to the point are a must.
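A minimal illustrative sketch of consuming a Delta Sharing share with the open-source delta-sharing Python connector, as referenced above (not part of the posting); the profile-file path and share/schema/table names are invented placeholders.

```python
import delta_sharing

# Credential/profile file supplied by the data provider.
PROFILE = "/dbfs/FileStore/shares/provider_config.share"
TABLE_URL = PROFILE + "#retail_share.sales.orders"

# Small tables can be pulled straight into pandas...
pdf = delta_sharing.load_as_pandas(TABLE_URL)
print(pdf.head())

# ...while larger ones are better read as a Spark DataFrame (requires a SparkSession
# with the delta-sharing Spark connector available).
sdf = delta_sharing.load_as_spark(TABLE_URL)
sdf.groupBy("order_date").count().show()
```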
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
As a Data Engineer at our company, you will be responsible for building and maintaining secure, scalable data pipelines using Databricks and Azure. Your role will involve handling ingestion from diverse sources such as files, APIs, and streaming data, performing data transformation, and ensuring quality validation. You will also collaborate closely with subsystem, data science, and product teams to ensure ML readiness.

To excel in this role, you should possess the following skills and experience:
- Technical proficiency in Notebooks (SQL, Python), Delta Lake, Unity Catalog, ADLS/S3, job orchestration, APIs, structured logging, and IaC (Terraform).
- Delivery expertise in trunk-based development, TDD, Git, and CI/CD for notebooks and pipelines.
- Integration knowledge encompassing JSON, CSV, XML, Parquet, and SQL/NoSQL/graph databases.
- Strong communication skills enabling you to justify decisions, document architecture, and align with enabling teams.

In return for your contributions, you will benefit from:
- Proximity Talks: engage with other designers, engineers, and product experts to learn from industry leaders.
- Continuous learning opportunities: work alongside a world-class team, challenge yourself daily, and expand your knowledge base.

About Us: Proximity is a trusted technology, design, and consulting partner for prominent Sports, Media, and Entertainment companies globally. Headquartered in San Francisco, we also have offices in Palo Alto, Dubai, Mumbai, and Bangalore. Since 2019, our team at Proximity has developed high-impact, scalable products used by 370 million daily users, with a total net worth of $45.7 billion among our client companies.

Join our diverse team of coders, designers, product managers, and experts at Proximity. We tackle complex problems and build cutting-edge tech solutions at scale. As part of our rapidly growing team of Proxonauts, your contributions will significantly impact the company's success. You will have the opportunity to collaborate with experienced leaders who have spearheaded multiple tech, product, and design teams.

To learn more about us:
- Watch our CEO, Hardik Jagda, share insights about Proximity.
- Discover Proximity's values and meet some of our Proxonauts.
- Explore our website, blog, and design wing, Studio Proximity.
- Follow us on Instagram for behind-the-scenes content: @ProxWrks and @H.Jagda.
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
Ahmedabad, Gujarat
On-site
The Sr. Data Analytics Engineer at Ajmera Infotech plays a pivotal role in powering mission-critical decisions with governed insights for NYSE-listed clients. As part of a 120-engineer team specializing in highly regulated domains such as HIPAA, FDA, and SOC 2, you will be instrumental in delivering production-grade systems that transform data into a strategic advantage.

You will have the opportunity to make end-to-end impact by building full-stack analytics solutions, from lakehouse pipelines to real-time dashboards. The fail-safe engineering practices followed at Ajmera Infotech include TDD, CI/CD, DAX optimization, Unity Catalog, and cluster tuning. Working with a modern stack comprising Databricks, PySpark, Delta Lake, Power BI, and Airflow, you will be immersed in cutting-edge technologies. As a Sr. Data Analytics Engineer, you will foster a mentorship culture by leading code reviews, sharing best practices, and growing as a domain expert. Your role will involve helping enterprises migrate legacy analytics into cloud-native, governed platforms while maintaining a compliance-first mindset in HIPAA-aligned environments.

Key Responsibilities:
- Building scalable pipelines using SQL, PySpark, and Delta Live Tables on Databricks.
- Orchestrating workflows with Databricks Workflows or Airflow, including implementing SLA-backed retries and alerting.
- Designing dimensional models (star/snowflake) with Unity Catalog and Great Expectations validation.
- Delivering robust Power BI solutions including dashboards, semantic layers, and paginated reports with a focus on DAX optimization.
- Migrating legacy SSRS reports to Power BI seamlessly without any loss of logic or governance.
- Optimizing compute and cost efficiency through cache tuning, partitioning, and capacity monitoring.
- Documenting pipeline logic, RLS rules, and more in Git-controlled formats.
- Collaborating cross-functionally to convert product analytics needs into resilient BI assets.
- Championing mentorship by reviewing notebooks, dashboards, and sharing platform standards.

Must-Have Skills:
- 5+ years of experience in analytics engineering, with a minimum of 3 years in production Databricks/Spark contexts.
- Advanced SQL skills, including windowing functions; expert PySpark, Delta Lake, and Unity Catalog proficiency.
- Mastery of Power BI, covering DAX optimization, security rules, paginated reports, and more.
- Experience in SSRS-to-Power BI migration with a focus on replicating RDL logic accurately.
- Strong familiarity with Git, CI/CD practices, and cloud platforms such as Azure or AWS.
- Strong communication skills to effectively bridge technical and business audiences.

Nice-to-Have Skills:
- Databricks Data Engineer Associate certification.
- Experience with streaming pipelines (Kafka, Structured Streaming).
- Knowledge of data quality frameworks like dbt, Great Expectations, or similar tools.
- Exposure to BI platforms like Tableau, Looker, or similar tools.
- Understanding of cost governance aspects such as Power BI Premium capacity and Databricks chargeback mechanisms.

Ajmera Infotech offers competitive compensation, flexible hybrid schedules, and a deeply technical culture that empowers engineers to lead the narrative. If you are passionate about building reliable, audit-ready data products and want to take ownership of systems from raw data ingestion to KPI dashboards, apply now to engineer insights that truly matter.
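A minimal illustrative Airflow sketch of the retry-and-alerting orchestration pattern described above (not from the posting), chaining two tasks with retries, SLAs, and failure emails; the DAG id, schedule, commands, and alert address are invented placeholders.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

default_args = {
    "retries": 3,                          # re-run a failed task a few times
    "retry_delay": timedelta(minutes=10),
    "email_on_failure": True,
    "email": ["data-alerts@example.com"],
}

with DAG(
    dag_id="lakehouse_daily_refresh",
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 2 * * *",
    catchup=False,
    default_args=default_args,
) as dag:
    # Placeholder commands; in practice these might trigger Databricks jobs or notebooks.
    ingest = BashOperator(
        task_id="run_bronze_ingest",
        bash_command="echo 'trigger bronze ingest job'",
        sla=timedelta(hours=1),            # flag the task if it runs past its SLA
    )
    transform = BashOperator(
        task_id="run_silver_gold_transform",
        bash_command="echo 'trigger silver/gold transform job'",
        sla=timedelta(hours=2),
    )

    ingest >> transform
```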
Posted 1 month ago