Jobs
Interviews

51 ADLS Jobs

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

You will be joining our dynamic team as an Azure Data Engineer - L3 with 5-7 years of experience, based in either Hyderabad or Bangalore and working a 2 PM-11 PM IST shift. Your responsibilities will include:

- Applying your expertise in Azure Data Factory, Databricks, Azure Data Lake, and Azure SQL Server.
- Developing ETL/ELT processes using SSIS and/or Azure Data Factory.
- Building complex pipelines and dataflows with Azure Data Factory.
- Designing and implementing data pipelines using Azure Data Factory (ADF).
- Enhancing the functionality and performance of existing data pipelines.
- Fine-tuning processes that deal with very large data sets.
- Configuring and deploying ADF packages, with proficient use of ARM templates, Key Vault, and integration runtimes.
- Adapting to established ETL frameworks and standards.
- Applying strong analytical and troubleshooting skills to identify root causes and find solutions.
- Proposing innovative and feasible solutions for business requirements.
- Working with Azure technologies/services such as Blob Storage, ADLS, Logic Apps, Azure SQL, and WebJobs.
- Using ServiceNow, incident management, and JIRA.
- Working within agile methodology.
- Building Power BI reports using current methodologies.

Key skills: Azure, Azure Data Factory, Databricks, and migration project experience.

Qualifications: Engineering graduate.

Certifications (preferred): Azure, Databricks.

Join us and be a part of our exciting journey as we continue to provide end-to-end solutions across industry verticals, with a global presence and a track record of successful project deliveries for Fortune 500 companies.
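The ETL/ELT work named above follows one pattern regardless of tool. A minimal plain-Python sketch of the extract-transform-load steps (a stand-in only; real pipelines here would use ADF activities or SSIS packages, and all names below are hypothetical):

```python
def extract(rows):
    """Simulate reading raw records from a source system."""
    return list(rows)

def transform(rows):
    """Cleanse: drop incomplete records and normalize the city field."""
    return [
        {**r, "city": r["city"].strip().title()}
        for r in rows
        if r.get("id") is not None and r.get("city")
    ]

def load(rows, sink):
    """Append transformed records to a destination (here, a list)."""
    sink.extend(rows)
    return len(rows)

raw = [{"id": 1, "city": " bengaluru "}, {"id": None, "city": "hyderabad"}]
warehouse = []
load(transform(extract(raw)), warehouse)
# warehouse is now [{"id": 1, "city": "Bengaluru"}] - the row without an id was dropped
```

The same three stages map onto an ADF pipeline as source dataset, mapping data flow, and sink dataset.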

Posted 1 day ago

Apply

5.0 - 10.0 years

0 Lacs

Karnataka

On-site

As an experienced professional with 5 to 10 years in information technology, you will create data models for corporate analytics in compliance with standards, ensuring usability and conformance across the enterprise. Your role will involve developing data strategies, ensuring vocabulary consistency, and managing data transformations through intricate analytical relationships and access paths, including data mappings at the data-field level.

Collaborating with Product Management and business stakeholders, you will identify and evaluate the data sources needed to achieve project and business objectives. Working closely with Tech Leads and Product Architects, you will gain insight into end-to-end data implications, data integration, and the functioning of business systems. You will also collaborate with DQ Leads to address data-integrity improvements and quality resolutions at the source. The role requires domain knowledge in supply chain, retail, or inventory management.

Critical skills include a strong understanding of software platforms and development technologies; proficiency in SQL, RDBMS, data lakes, and warehouses; and knowledge of the Hadoop ecosystem, Azure, ADLS, Kafka, Apache Delta, and Databricks/Spark. Experience with data modeling tools such as ER/Studio or erwin is advantageous. Effective collaboration with Product Managers, technology teams, and business partners, along with familiarity with Agile and DevOps techniques, is essential, as are excellent written and verbal communication skills.

Preferred qualifications include a bachelor's degree in business information technology, computer science, or a related discipline. This is a full-time position located in Bangalore/Bengaluru, Delhi, Kolkata, or Navi Mumbai.

If you meet these requirements and are interested in this opportunity, please apply online. The digitalxnode evaluation team will review your resume and, if your profile is selected, reach out to you about next steps. We will retain your information in our database for future openings.
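A "data mapping at the data-field level", as this role manages, is essentially a table pairing each target field with its source column and a conversion. A hedged plain-Python sketch (field names are invented, not taken from the listing):

```python
# Hypothetical source-to-target mapping: target field -> (source column, converter)
FIELD_MAP = {
    "customer_id": ("cust_no", str),
    "order_value": ("amt", float),
    "region_code": ("region", str.upper),
}

def map_record(src):
    """Apply the field-level mapping to one source record."""
    return {target: convert(src[source])
            for target, (source, convert) in FIELD_MAP.items()}

mapped = map_record({"cust_no": 42, "amt": "10.5", "region": "south"})
# mapped == {"customer_id": "42", "order_value": 10.5, "region_code": "SOUTH"}
```

Keeping the mapping as data rather than code is what makes it reviewable with business stakeholders.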

Posted 2 days ago

Apply

2.0 - 9.0 years

0 Lacs

Karnataka

On-site

We are seeking Data Architects, Senior Data Architects, and Principal Data Architects to join our team. The role combines hands-on contribution, customer engagement, and technical team management. As a Data Architect, you will design, architect, deploy, and maintain solutions on the Microsoft Azure platform using a range of cloud and big data technologies. You will manage the full life cycle of Data Lake / Big Data solutions, from requirement gathering and analysis through platform selection, architecture design, and deployment. You will implement scalable solutions on the cloud and collaborate with business domain experts, data scientists, and application developers to develop big data solutions. You will also be expected to explore and learn new technologies for creative problem solving and to mentor a team of Data Engineers.

The ideal candidate has strong hands-on experience implementing Data Lakes with technologies such as Data Factory (ADF), ADLS, Databricks, Azure Synapse Analytics, Event Hub and Stream Analytics, Cosmos DB, and Purview. Experience with big data technologies such as Hadoop (CDH or HDP), Spark, Airflow, NiFi, Kafka, Hive, HBase, MongoDB, Neo4j, Elasticsearch, Impala, and Sqoop is also required. Proficiency in programming and debugging in Python and Scala/Java is essential, and experience building REST services is beneficial. Candidates should also have experience supporting BI and Data Science teams in consuming data in a secure and governed manner, along with a good understanding of CI/CD with Git and Jenkins / Azure DevOps. Experience setting up cloud-computing infrastructure, hands-on exposure to NoSQL databases, and data modelling in Hive are all highly valued.

Applicants should have a minimum of 9 years of technical experience, with at least 5 years on Microsoft Azure and 2 years on Hadoop (CDH/HDP).

Posted 3 days ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Full Stack Developer, you will develop and maintain both the front-end and back-end components of web applications, using the .NET framework and related technologies for server-side development and React.js to build interactive, responsive user interfaces on the client side. You will build and maintain RESTful APIs that connect front-end and back-end systems, and implement authentication, authorization, and data validation within those APIs.

For database management, you will design, implement, and manage databases using technologies such as SQL Server or Azure SQL Database, ensuring efficient data storage, retrieval, and manipulation to support application functionality. You will also design, implement, and manage data pipelines using PySpark, Python, and SQL, including building and maintaining pipelines in Databricks.

Cloud services integration is a key aspect of the role: you will use Azure services for hosting, scaling, and managing web applications; implement cloud-based solutions for storage, caching, and data processing; and configure and manage Azure resources such as virtual machines, databases, and application services. On the DevOps side, you will implement CI/CD pipelines for automated build, test, and deployment using Jenkins, and ensure robust monitoring, logging, and error handling are in place.

You will document technical designs, implementation details, and operational procedures, and collaborate with product managers, designers, and other stakeholders to understand requirements and deliver high-quality solutions. Continuous learning is encouraged: you are expected to stay current with web development and cloud computing technologies, tools, and best practices, and to improve your skills through self-learning, training, and participation in technical communities.

Requirements include a bachelor's degree or equivalent experience and 5+ years of software engineering experience developing and deploying reliable, resilient microservices. Strong knowledge of RESTful APIs, React.js, Azure, Python, PySpark, Databricks, TypeScript, Node.js, relational databases such as SQL Server, and NoSQL data stores such as Redis and ADLS is essential. Experience with data engineering, Jenkins, Artifactory, and automated testing frameworks is desirable, as is prior experience with Agile, CI/CD, Docker, Kubernetes, Kafka, Terraform, or similar technologies. A passion for learning and sharing new knowledge is highly valued.
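The API data-validation duty mentioned above can be sketched minimally in plain Python (a real .NET service would use model binding or a schema library; the required-field spec below is hypothetical):

```python
def validate_payload(payload, required):
    """Return a list of validation errors for a JSON-style payload dict."""
    errors = []
    for field, expected_type in required.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            errors.append(f"wrong type for {field}")
    return errors

# A handler would reject the request (e.g., HTTP 400) when errors is non-empty:
errs = validate_payload({"name": "Asha"}, {"name": str, "age": int})
# errs == ["missing field: age"]
```

Validating at the API boundary keeps bad records out of the downstream pipelines the role also maintains.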

Posted 3 days ago

Apply

2.0 - 6.0 years

0 Lacs

Haryana

On-site

You will play a crucial role in meeting the requirements of key business functions by developing SQL code, Azure data pipelines, ETL processes, and data models. Your responsibilities will include crafting MS-SQL queries and procedures, generating customized reports, and aggregating data to the level clients need. You will also handle database design, data extraction from diverse sources, data integration, and data stability, reliability, and performance.

Your typical day will involve:

- Demonstrating 2-3 years of experience as a SQL Developer or in a similar capacity
- A strong grasp of SQL Server and SQL programming, with at least 2 years of hands-on SQL programming experience
- Familiarity with SQL Server Integration Services (SSIS)
- Preferably, experience implementing Data Factory pipelines for cloud-based ETL processing
- Proficiency in Azure Data Factory, Azure Synapse, and ADLS, with the ability to configure and manage all aspects of SQL Server at a consultant level
- A sense of ownership and pride in your work, and an understanding of its impact on the company's success
- Excellent interpersonal and communication skills (verbal and written), enabling clear, precise communication at various organizational levels
- Critical thinking and problem-solving abilities
- Being a team player with good time-management skills
- Experience in analytics projects within the pharma sector, focused on deriving actionable insights and implementing them
- Expertise in longitudinal data, retail/CPG, customer-level datasets, pharma data, patient data, forecasting, and performance reporting
- Intermediate to strong proficiency in MS Excel and PowerPoint
- Previous exposure to SQL Server and SSIS
- Ability to efficiently handle large datasets (multi-million-record complex relational databases)
- A self-directed approach to supporting the data requirements of multiple teams, systems, and products
- Effective communication in challenging situations, with structured thinking and a solution-focused mindset, leading interactions with internal and external stakeholders with minimal supervision
- Proactive identification of potential risks and implementation of mitigation strategies to prevent downstream issues
- Familiarity with project management principles, including breaking approaches down into smaller tasks and planning resource allocation accordingly
- The ability to learn quickly in a dynamic environment
- (Advantageous) Experience working in a global environment
- (Bonus) Prior experience in healthcare analytics

IQVIA is a prominent global provider of clinical research services, commercial insights, and healthcare intelligence to the life sciences and healthcare sectors. The company facilitates intelligent connections to expedite the development and commercialization of innovative medical treatments, ultimately enhancing patient outcomes and global population health. For further insights, visit https://jobs.iqvia.com.
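"Aggregating data to the desired level" is, at its core, a GROUP BY. A hedged sketch using SQLite from the Python standard library (the listing's actual stack is MS-SQL/SSIS, and the table and columns here are invented):

```python
import sqlite3

# In-memory database standing in for a reporting table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rx_sales (brand TEXT, units INTEGER)")
conn.executemany("INSERT INTO rx_sales VALUES (?, ?)",
                 [("BrandA", 10), ("BrandA", 5), ("BrandB", 7)])

# Aggregate detail rows up to brand level for client consumption
totals = conn.execute(
    "SELECT brand, SUM(units) FROM rx_sales GROUP BY brand ORDER BY brand"
).fetchall()
# totals == [('BrandA', 15), ('BrandB', 7)]
```

The same SELECT runs essentially unchanged on SQL Server; only the connection layer differs.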

Posted 4 days ago

Apply

7.0 - 11.0 years

0 Lacs

Karnataka

On-site

NTT DATA is looking for a Sr. Data Engineer to join their team in Bangalore, Karnataka, India. As a Sr. Data Engineer, your primary responsibility will be to build and implement PySpark-based data pipelines in Azure Synapse to transform and load data into ADLS in Delta format. You will also design and implement dimensional (star/snowflake) and 3NF data models optimized for access through Power BI. Your tasks will include unit testing data pipelines and transformations, designing and building metadata-driven data pipelines using PySpark in Azure Synapse, analyzing and optimizing Spark SQL queries, optimizing the integration of the data lake with the Power BI semantic model, and collaborating with cross-functional teams to ensure data models align with business needs. Additionally, you will perform Source-to-Target Mapping (STM) from source to multiple layers in the data lake and maintain version control and CI/CD pipelines in Git and Azure DevOps. Integrating Azure Purview to enable access controls and implementing row-level security will also be part of your role.

The ideal candidate has at least 7 years of experience in SQL and PySpark. Hands-on experience with Azure Synapse, ADLS, the Delta format, and metadata-driven data pipelines is required, as is experience implementing dimensional (star/snowflake) and 3NF data models and expertise in PySpark and Spark SQL, including query optimization and performance tuning. Strong problem-solving and analytical skills for debugging and optimizing data pipelines in Azure Synapse, familiarity with CI/CD practices in Git and Azure DevOps, and working experience in an Azure DevOps-based development environment are also necessary.

NTT DATA is a trusted global innovator of business and technology services, serving 75% of the Fortune Global 100, and is committed to helping clients innovate, optimize, and transform for long-term success. With diverse experts in more than 50 countries and a robust partner ecosystem, NTT DATA offers business and technology consulting, data and artificial intelligence solutions, industry solutions, and the development, implementation, and management of applications, infrastructure, and connectivity. NTT DATA is a leading provider of digital and AI infrastructure and is part of the NTT Group, which invests over $3.6 billion each year in R&D to support organizations and society in confidently moving into the digital future.
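"Metadata-driven" pipelines of the kind this role builds usually mean one generic job that iterates over a table of configurations instead of hard-coding each source. A plain-Python stand-in (the entries are invented; a real implementation would issue a PySpark read and a Delta write per entry):

```python
# Hypothetical metadata table: one row per source feed
PIPELINE_META = [
    {"source": "sales", "layer": "bronze", "format": "delta"},
    {"source": "inventory", "layer": "bronze", "format": "delta"},
]

def run_all(meta, load_one):
    """Run one generic loader over every metadata entry."""
    return [load_one(entry) for entry in meta]

def demo_loader(entry):
    # Stand-in for spark.read(...).write.format(entry["format"])...
    return f"loaded {entry['source']} into {entry['layer']} ({entry['format']})"

results = run_all(PIPELINE_META, demo_loader)
# results[0] == "loaded sales into bronze (delta)"
```

Adding a new feed then means adding a metadata row, not writing a new pipeline.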

Posted 4 days ago

Apply

8.0 - 10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

This job is with Kyndryl, an inclusive employer and a member of myGwork, the largest global platform for the LGBTQ+ business community. Please do not contact the recruiter directly.

Who We Are
At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward - always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.

The Role
As a Data Engineer, you will leverage your expertise in Databricks, big data platforms, and modern data engineering practices to develop scalable data solutions for our clients. Candidates with healthcare experience, particularly with EPIC systems, are strongly encouraged to apply. The work includes creating data pipelines, integrating data from various sources, and implementing data security and privacy measures. The Data Engineer will also be responsible for monitoring and troubleshooting data flows and optimizing data storage and processing for performance and cost efficiency.

Responsibilities
- Develop data ingestion, data processing and analytical pipelines for big data, relational databases and data warehouse solutions
- Design and implement data pipelines and ETL/ELT processes using Databricks, Apache Spark, and related tools
- Collaborate with business stakeholders, analysts, and data scientists to deliver accessible, high-quality data solutions
- Provide guidance on cloud migration strategies and data architecture patterns such as Lakehouse and Data Mesh
- Provide pros/cons and migration considerations for private and public cloud architectures
- Provide technical expertise in troubleshooting, debugging, and resolving complex data and system issues
- Create and maintain technical documentation, including system diagrams, deployment procedures, and troubleshooting guides
- Apply experience with data governance, data security and data privacy (Unity Catalog or Purview)

Your Future at Kyndryl
Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won't find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.

Who You Are
You're good at what you do and possess the required experience to prove it. However, equally as important - you have a growth mindset: keen to drive your own personal and professional development. You are customer-focused - someone who prioritizes customer success in their work. And finally, you're open and borderless - naturally inclusive in how you work with others.

Required Technical And Professional Experience
- 3+ years of consulting or client service delivery experience on Azure
- Graduate/postgraduate in computer science, computer engineering, or equivalent, with a minimum of 8 years of experience in the IT industry
- 3+ years of experience developing data ingestion, data processing and analytical pipelines for big data, relational databases such as SQL Server, and data warehouse solutions such as Azure Synapse
- Extensive hands-on experience implementing data ingestion, ETL and data processing
- Hands-on experience with big data technologies such as Java, Python, SQL, ADLS/Blob, PySpark and Spark SQL, Databricks, HDInsight, and live-streaming technologies such as Event Hub
- Experience with cloud-based database technologies (Azure PaaS DB, AWS RDS and NoSQL)
- Cloud migration methodologies and processes, including tools like Azure Data Factory, Data Migration Service, etc.
- Experience with monitoring and diagnostic tools (SQL Profiler, Extended Events, etc.)
- Expertise in data mining, data storage and Extract-Transform-Load (ETL) processes
- Experience with relational databases and expertise in writing and optimizing T-SQL queries and stored procedures
- Experience using big data file formats and compression techniques
- Experience with developer tools such as Azure DevOps, Visual Studio Team Server, Git, Jenkins, etc.
- Experience with private and public cloud architectures, pros/cons, and migration considerations
- Excellent problem-solving, analytical, and critical thinking skills
- Ability to manage multiple projects simultaneously while maintaining a high level of attention to detail
- Communication skills: able to communicate with both technical and non-technical audiences, and to derive technical requirements with stakeholders

Preferred Technical And Professional Experience
- Cloud platform certification, e.g., Microsoft Certified: Azure Data Engineer Associate (DP-700), AWS Certified Data Analytics - Specialty, Elastic Certified Engineer, Google Cloud Professional Data Engineer
- Professional certification, e.g., Open Certified Technical Specialist with Data Engineering Specialization
- Experience working with EPIC healthcare systems (e.g., Clarity, Caboodle)
- Databricks certifications (e.g., Databricks Certified Data Engineer Associate or Professional)
- Knowledge of GenAI tools, Microsoft Fabric, or Microsoft Copilot
- Familiarity with healthcare data standards and compliance (e.g., HIPAA, GDPR)
- Experience with DevSecOps and CI/CD deployments
- Experience in NoSQL database design
- Knowledge of Gen AI fundamentals and supporting industry use cases
- Hands-on experience with Delta Lake and Delta Tables within the Databricks environment for building scalable and reliable data pipelines

Being You
Diversity is a whole lot more than what we look like or where we come from; it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you - and everyone next to you - the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way.

What You Can Expect
With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter - wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.

Get Referred!
If you know someone who works at Kyndryl, when asked 'How Did You Hear About Us' during the application process, select 'Employee Referral' and enter your contact's Kyndryl email address.

Posted 4 days ago

Apply

8.0 - 12.0 years

0 Lacs

Karnataka

On-site

Working with data on a day-to-day basis excites you, and you are interested in building robust data architecture to identify data patterns and optimize data consumption for customers who will forecast and predict actions based on data. If this excites you, then working in our intelligent automation team at Schneider AI Hub is the perfect fit for you.

As a Lead Data Engineer at Schneider AI Hub, you will play a crucial role in the AI transformation of Schneider Electric by developing AI-powered solutions. Your responsibilities will include expanding and optimizing data and data pipeline architecture, ensuring optimal data flow and collection for cross-functional teams, and supporting software engineers, data analysts, and data scientists on data initiatives. You will create and maintain optimal data pipeline architecture, design the right schemas to support functional requirements, and build production data pipelines from ingestion to consumption. Additionally, you will create preprocessing and postprocessing for various forms of data, develop data visualization and business intelligence tools, and implement internal process improvements to automate manual data processes.

To qualify for this role, you should hold a bachelor's or master's degree in computer science, information technology, or another quantitative field and have a minimum of 8 years of experience as a data engineer supporting large data transformation initiatives related to machine learning. Strong analytical skills, experience with Azure cloud services, ETL using Spark, and proficiency in scripting languages like Python and PySpark are essential requirements. As a team player committed to the success of the team and its projects, you will collaborate with various stakeholders to ensure the data delivery architecture is consistent and secure across multiple data centers.

Join us at Schneider Electric, where we create connected technologies that reshape industries, transform cities, and enrich lives, with a diverse and inclusive culture that values the contribution of every individual. If you are passionate about success and eager to contribute to cutting-edge projects, we invite you to be part of our dynamic team at Schneider Electric in Bangalore, India.

Posted 1 week ago

Apply

7.0 - 9.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Our Client in India is one of the leading providers of risk, financial services and business advisory, internal audit, corporate governance, and tax and regulatory services. Our Client was established in India in September 1993 and has rapidly built a significant competitive presence in the country. The firm operates from its offices in Mumbai, Pune, Delhi, Kolkata, Chennai, Bangalore, Hyderabad, Kochi, Chandigarh and Ahmedabad, and offers its clients a full range of services, including financial and business advisory, tax and regulatory. Our client has a client base of over 2700 companies. Their global approach to service delivery helps provide value-added services to clients. The firm serves leading information technology companies and has a strong presence in the financial services sector in India, while serving a number of market leaders in other industry segments.

Job Requirements

Mandatory Skills
- Bachelor's or higher degree in Computer Science or a related discipline, or equivalent (minimum 7+ years of work experience).
- At least 6+ years of consulting or client service delivery experience in Azure Microsoft data engineering.
- At least 4+ years of experience developing data ingestion, data processing and analytical pipelines for big data, relational databases such as SQL Server, and data warehouse solutions such as Synapse/Azure Databricks and Microsoft Fabric.
- Hands-on experience implementing data ingestion, ETL and data processing using Azure services: Fabric, OneLake, ADLS, Azure Data Factory, Azure Functions, services in Microsoft Fabric, etc.
- Minimum of 5+ years of hands-on experience in Azure and big data technologies such as Fabric, Databricks, Python, SQL, ADLS/Blob, and PySpark/Spark SQL.
- Minimum of 3+ years of RDBMS experience.
- Experience using big data file formats and compression techniques.
- Experience working with developer tools such as Azure DevOps, Visual Studio Team Server, Git, etc.

Preferred Skills

Technical Leadership & Demo Delivery:
- Provide technical leadership to the data engineering team, guiding the design and implementation of data solutions.
- Deliver compelling and clear demonstrations of data engineering solutions to stakeholders and clients, showcasing functionality and business value.
- Communicate fluently in English with clients, translating complex technical concepts into business-friendly language during presentations, meetings, and consultations.

ETL Development & Deployment on Azure Cloud:
- Design, develop, and deploy robust ETL (Extract, Transform, Load) pipelines using Azure Data Factory (ADF), Azure Synapse Analytics, Azure Notebooks, Azure Functions, and other Azure services.
- Ensure scalable, efficient, and secure data integration workflows that meet business requirements.
- Design and develop data quality frameworks to validate, cleanse, and monitor data integrity.
- Perform advanced data transformations, including Slowly Changing Dimensions (SCD Type 1 and Type 2), using Fabric Notebooks or Databricks.
- Preferably, the following skills as well: Azure Document Intelligence, custom apps, Blob Storage.

Microsoft Certifications:
- Hold relevant role-based Microsoft certifications, such as DP-203: Data Engineering on Microsoft Azure and AI-900: Microsoft Azure AI Fundamentals.
- Additional certifications in related areas (e.g., PL-300 for Power BI) are a plus.

Azure Security & Access Management:
- Strong knowledge of Azure Role-Based Access Control (RBAC) and Identity and Access Management (IAM).
- Implement and manage access controls, ensuring data security and compliance with organizational and regulatory standards on Azure Cloud.

Additional Responsibilities & Skills:
- Team Collaboration: Mentor junior engineers, fostering a culture of continuous learning and knowledge sharing within the team.
- Project Management: Oversee data engineering projects, ensuring timely delivery within scope and budget, while coordinating with cross-functional teams.
- Data Governance: Implement data governance practices, including data lineage, cataloging, and compliance with standards like GDPR or CCPA.
- Performance Optimization: Optimize ETL pipelines and data workflows for performance, cost-efficiency, and scalability on Azure platforms.
- Cross-Platform Knowledge: Familiarity with integrating Azure services with other cloud platforms (e.g., AWS, GCP) or hybrid environments is an added advantage.

Soft Skills & Client Engagement:
- Exceptional problem-solving skills with a proactive approach to addressing technical challenges.
- Strong interpersonal skills to build trusted relationships with clients and stakeholders.
- Ability to manage multiple priorities in a fast-paced environment, ensuring high-quality deliverables.
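The SCD Type 2 transformations named in the preferred skills follow a standard pattern: when a tracked attribute changes, expire the current dimension row and insert a new one. A hedged plain-Python sketch (the column names, and tracking only "city", are illustrative; real work here would use a MERGE in Databricks or Fabric notebooks):

```python
from datetime import date

def scd2_upsert(dim, key, incoming, load_date):
    """Apply one incoming record to a Type 2 dimension (list of dicts)."""
    current = next((row for row in dim
                    if row[key] == incoming[key] and row["is_current"]), None)
    if current and current["city"] == incoming["city"]:
        return dim                                   # no tracked change
    if current:                                      # expire the old version
        current["is_current"] = False
        current["end_date"] = load_date
    dim.append({**incoming, "start_date": load_date,
                "end_date": None, "is_current": True})
    return dim

dim = []
scd2_upsert(dim, "cust_id", {"cust_id": 1, "city": "Pune"}, date(2024, 1, 1))
scd2_upsert(dim, "cust_id", {"cust_id": 1, "city": "Delhi"}, date(2024, 6, 1))
# dim now holds two versions: the Pune row expired, the Delhi row current
```

Type 1, by contrast, would simply overwrite the city in place, keeping no history.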

Posted 1 week ago

Apply

2.0 - 9.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

Tiger Analytics is a global AI and analytics consulting firm with a team of over 2800 professionals focused on using data and technology to solve complex problems that impact millions of lives worldwide. Our culture is centered around expertise, respect, and a team-first mindset. Headquartered in Silicon Valley, we have delivery centers globally and offices in various cities across India, the US, UK, Canada, and Singapore, along with a significant remote workforce. Tiger Analytics is certified as a Great Place to Work. Joining our team means being at the forefront of the AI revolution, working with innovative teams that push boundaries and create inspiring solutions.

We are currently looking for an Azure Big Data Engineer to join our team in Chennai, Hyderabad, or Bangalore. As a Big Data Engineer (Azure), you will be responsible for building and implementing analytics solutions and platforms on Microsoft Azure using a range of open source, big data, and cloud technologies. Your typical day might involve designing and building scalable data ingestion pipelines, processing structured and unstructured data, orchestrating pipelines, collaborating with teams and stakeholders, and making critical tech-related decisions.

To be successful in this role, we expect you to have 4 to 9 years of total IT experience, with at least 2 years in big data engineering and Microsoft Azure. You should be proficient in technologies such as Azure Data Factory (ADF), PySpark, Databricks, ADLS, Azure SQL Database, Azure Synapse Analytics, Event Hub and Stream Analytics, Cosmos DB, and Purview. Strong coding skills in SQL and Python or Scala/Java are essential, as is experience with big data technologies like Hadoop, Spark, Airflow, NiFi, Kafka, Hive, Neo4j, and Elasticsearch. Knowledge of file formats such as Delta Lake, Avro, Parquet, JSON, and CSV is also required.

Ideally, you should have experience building REST APIs, working on Data Lake or Lakehouse projects, supporting BI and Data Science teams, and following Agile and DevOps processes. Certifications like Data Engineering on Microsoft Azure (DP-203) or Databricks Certified Data Engineer would be a valuable addition to your profile.

At Tiger Analytics, we value diversity and inclusivity, and we encourage individuals with different skills and qualities to apply, even if they do not meet all the criteria for the role. We are committed to providing equal opportunities and fostering a culture of listening, trust, respect, and growth. Please note that the job designation and compensation will be based on your expertise and experience, and our compensation packages are competitive within the industry. If you are passionate about leveraging data and technology to drive impactful solutions, we would love to stay connected with you.

Posted 1 week ago

Apply

1.0 - 5.0 years

0 Lacs

hyderabad, telangana

On-site

As an integral part of American Airlines Tech Hub in Hyderabad, India, you will have the opportunity to contribute to the innovative and tech-driven environment that shapes the future of travel. Your role will involve collaborating with source data application teams and product owners to develop and support analytics solutions that provide valuable insights for informed decision-making. By leveraging Azure products and services such as Azure Data Lake Storage, Azure Data Factory, and Azure Databricks, you will be responsible for implementing data migration and engineering solutions to enhance the airline's digital capabilities. Your responsibilities will encompass various aspects of the development lifecycle, including design, cloud engineering, data modeling, testing, performance tuning, and deployment. Working within a DevOps team, you will have the chance to take ownership of your product and contribute to the development of batch and streaming data pipelines using cloud technologies. Adherence to coding standards, best practices, and security guidelines will be crucial as you collaborate with a multidisciplinary team to deliver technical solutions effectively. To excel in this role, you should have a Bachelor's degree in a relevant technical discipline or equivalent experience, along with a minimum of 1 year of software solution development experience using agile methodologies. Proficiency in SQL for data analytics and prior experience with cloud development, particularly in Microsoft Azure, will be advantageous. Preferred qualifications include additional years of software development and data analytics experience, as well as familiarity with tools such as Azure EventHub, Azure Power BI, and Teradata Vantage. Your success in this position will be further enhanced by expertise in the Azure Technology stack, practical knowledge of Azure cloud services, and relevant certifications such as Azure Development Track and Spark Certification. 
A combination of development, administration, and support experience in various tools and platforms, including scripting languages, data platforms, and BI analytics tools, will be beneficial for your role in driving data management and governance initiatives within the organization. Effective communication skills, both verbal and written, will be essential for engaging with stakeholders across different levels of the organization. Additionally, your physical abilities should enable you to perform the essential functions of the role safely and successfully, with or without reasonable accommodations as required by law. At American Airlines, diversity and inclusion are integral to our workforce, fostering an inclusive environment where employees can thrive and contribute to the airline's success. Join us at American Airlines and embark on a journey where your technical expertise and innovative spirit will play a pivotal role in shaping the future of travel. Feel free to be yourself as you contribute to the seamless operation of the world's largest airline, caring for people on life's journey.,

Posted 1 week ago

Apply

7.0 - 11.0 years

0 Lacs

haryana

On-site

Genpact is a global professional services and solutions firm dedicated to delivering outcomes that shape the future. With a workforce of over 125,000 professionals spanning more than 30 countries, we are fueled by our innate curiosity, entrepreneurial agility, and commitment to creating lasting value for our clients. Our purpose, the relentless pursuit of a world that works better for people, drives us to serve and transform leading enterprises, including the Fortune Global 500, leveraging our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

We are currently seeking applications for the position of Principal Consultant - Databricks Lead Developer. As a Databricks Developer in this role, you will be tasked with solving cutting-edge real-world problems to meet both functional and non-functional requirements.

Responsibilities:
- Keep abreast of new and emerging technologies and assess their potential application for service offerings and products.
- Collaborate with architects and lead engineers to devise solutions that meet functional and non-functional requirements.
- Demonstrate proficiency in understanding relevant industry trends and standards.
- Showcase strong analytical and technical problem-solving skills.
- Possess experience in the Data Engineering domain.

Qualifications we are looking for:

Minimum qualifications:
- Bachelor's Degree or equivalency in CS, CE, CIS, IS, MIS, or an engineering discipline, or equivalent work experience.
- <<>> years of experience in IT.
- Familiarity with new and emerging technologies and their possible applications for service offerings and products.
- Collaboration with architects and lead engineers to develop solutions meeting functional and non-functional requirements.
- Understanding of industry trends and standards.
- Strong analytical and technical problem-solving abilities.
- Proficiency in either Python or Scala, preferably Python.
- Experience in the Data Engineering domain.

Preferred qualifications:
- Knowledge of Unity Catalog and basic governance.
- Understanding of Databricks SQL Endpoint.
- Experience with CI/CD for building Databricks job pipelines.
- Exposure to migration projects for building unified data platforms.
- Familiarity with DBT, Docker, and Kubernetes.

If you are a proactive individual with a passion for innovation and a strong commitment to continuous learning and upskilling, we invite you to apply for this exciting opportunity to join our team at Genpact.

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

haryana

On-site

You will be responsible for developing applications using various Microsoft and web development technologies such as ASP.Net, C#, MVC, Web Forms, Angular, SQL Server, T-SQL, and Microservices. Your expertise in big data technologies like Hadoop, Spark, Hive, Python, Databricks, etc. will be crucial for this role. With a Bachelor's degree in Computer Science or equivalent experience through higher education, you should have at least 8 years of experience in Data Engineering and/or Software Engineering. Your strong coding skills, along with knowledge of infrastructure as code and automating production data and ML pipelines, will be highly valued. You should be proficient in on-premises-to-cloud migration, particularly in Azure, and have hands-on experience with Azure PaaS offerings such as Synapse, ADLS, Databricks, Event Hubs, Cosmos DB, and Azure ML. Experience in building, governing, and scaling data warehouses, lakes, and lakehouses is essential for this role. Your expertise in developing and tuning stored procedures and T-SQL scripting in SQL Server, along with familiarity with various .Net development tools and products, will contribute significantly to the success of the projects. You should be adept with the agile software development lifecycle and DevOps principles to ensure efficient project delivery.

Posted 1 week ago

Apply

7.0 - 11.0 years

0 Lacs

karnataka

On-site

As a Developer contracted by Luxoft to support customer initiatives, your main task will involve developing solutions based on client requirements within the Telecom/network work environment. You will be responsible for utilizing technologies such as Databricks and Azure, Apache Spark, Python, SQL, and Apache Airflow to create and manage Databricks clusters for ETL processes. Integration with ADLS and Blob Storage, and efficient data ingestion from various sources, including on-premises databases, cloud storage, APIs, and streaming data, will also be part of your role. Moreover, you will work on handling secrets using Azure Key Vault, interacting with APIs, and gaining hands-on experience with Kafka/Azure Event Hub streaming. Your expertise in Databricks Delta APIs, Unity Catalog, and version control tools like GitHub will be crucial. Additionally, you will be involved in data analytics, supporting ML frameworks, and integrating with Databricks for model training. Proficiency in Python, Apache Airflow, Microsoft Azure, Databricks, SQL, ADLS, Blob Storage, Kafka/Azure Event Hub, and various other related skills is a must. The ideal candidate should hold a Bachelor's degree in Computer Science or a related field and possess at least 7 years of development experience. Problem-solving skills, effective communication abilities, teamwork, and a commitment to continuous learning are essential traits for this role. Desirable skills include exposure to Snowflake, PostgreSQL, Redis, and GenAI, and a good understanding of RBAC. Proficiency in English at C2 level is required for this senior-level position based in Bengaluru, India. This opportunity falls under the Big Data Development category within Cross Industry Solutions and is expected to be effective from 06/05/2025.

Posted 1 week ago

Apply

7.0 - 12.0 years

17 - 27 Lacs

Bengaluru, Delhi / NCR, Mumbai (All Areas)

Work from Office

Key Responsibilities:
- Requirement gathering and analysis
- Design of data architecture and data model to ingest data
- Experience with different databases like Synapse, SQL DB, Snowflake etc.
- Design and implement data pipelines using Azure Data Factory, Databricks, Synapse
- Create and manage Azure SQL Data Warehouses and Azure Cosmos DB databases
- Extract, transform, and load (ETL) data from various sources into Azure Data Lake Storage
- Implement data security and governance measures
- Monitor and optimize data pipelines for performance and efficiency
- Troubleshoot and resolve data engineering issues
- Hands-on experience with Azure Functions and other components such as real-time streaming
- Oversee Azure billing processes, conducting analyses to ensure cost-effectiveness and efficiency in data operations
- Provide optimized solutions for any problem related to data engineering
- Ability to work with a variety of sources such as relational DBs, APIs, file systems, real-time streams, CDC etc.
- Strong knowledge of Databricks and Delta tables
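As a rough illustration of the pipeline work these responsibilities describe, here is a minimal, stdlib-only Python sketch of an extract-transform-load step with basic validation and monitoring. The record fields and the in-memory `sink` are hypothetical stand-ins for the real sources and for Azure Data Lake Storage; in practice this logic would run inside ADF or Databricks activities.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("etl")

def extract(raw_lines):
    """Parse newline-delimited JSON records; count and skip malformed rows."""
    records, bad = [], 0
    for line in raw_lines:
        try:
            records.append(json.loads(line))
        except json.JSONDecodeError:
            bad += 1
    if bad:
        log.warning("skipped %d malformed records", bad)  # monitoring hook
    return records

def transform(records):
    """Normalize types and stamp each record with a load timestamp."""
    now = datetime.now(timezone.utc).isoformat()
    return [
        {
            "order_id": str(r["order_id"]),       # enforce a consistent key type
            "amount": float(r.get("amount", 0)),  # type conversion with a default
            "loaded_at": now,                     # audit/lineage column
        }
        for r in records
    ]

def load(records, sink):
    """Append transformed records to a sink (stand-in for ADLS/a warehouse table)."""
    sink.extend(records)
    log.info("loaded %d records", len(records))
    return len(records)

raw = ['{"order_id": 1, "amount": "19.99"}', "not json", '{"order_id": 2}']
sink = []
loaded = load(transform(extract(raw)), sink)
print(loaded)  # 2
```

The malformed-row counter is the kind of metric a monitoring step would alert on when troubleshooting pipeline issues.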

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

karnataka

On-site

Diageo's ambition is to be one of the best performing, most trusted, and respected consumer products companies in the world. The strategy is to support premiumisation in developed and emerging countries by offering a broad portfolio across different consumer occasions and price points. This approach also plays a crucial role in shaping responsible drinking trends in markets where international premium spirits are an emerging category. As a member of Diageo's Analytics & Insights team, you will be instrumental in designing, developing, and implementing analytics products to drive the company's competitive advantage and facilitate data-driven decisions. Your role will involve advancing the sophistication of analytics throughout Diageo, serving as a data evangelist to empower stakeholders, identifying meaningful insights from vast data sources, and communicating findings to drive growth, enhance consumer experiences, and optimize business processes. While the role does not entail budget ownership, understanding architecture resource costs is necessary. You will be supporting global initiatives and functions across various markets, working closely with key stakeholders to create possibilities, foster conditions for success, promote personal and professional growth, and maintain authenticity in all interactions. The purpose of the role includes owning and developing a domain-specific data visualization product portfolio, ensuring compliance with technological and business priorities, and contributing to the end-to-end build of analytics products meeting enterprise standards. You will lead agile teams in developing robust BI solutions, provide technical guidance, oversee data flow, and collaborate with internal and external partners to deliver innovative solutions. Your top accountabilities will involve technical leadership in analytics product builds, optimization of data visualization architecture, BAU support, and feedback to enhance data model standards. 
Business acumen is essential, particularly in working with marketing data and building relationships with stakeholders to drive data-led innovation. Required qualifications include multiple years of experience in BI solution development, a bachelor's degree in a relevant field, hands-on experience as a lead developer, proficiency in DAX & M language, knowledge of Azure architecture, and expertise in data acquisition and processing. Additionally, experience with Azure platform, technical documentation, DevOps solutions, Agile methodologies, and a willingness to deepen solution architecture skills are vital. Experience with structured and unstructured datasets, design collaboration, user experience best practices, and visualization trends are advantageous. A dynamic personality, proficiency in English, and excellent communication skills are key for success in this role.,

Posted 2 weeks ago

Apply

5.0 - 8.0 years

15 - 27 Lacs

Hyderabad, Gurugram, Bengaluru

Hybrid

- Design and build data pipelines using Spark SQL and PySpark in Azure Databricks
- Design and build ETL pipelines using ADF
- Build and maintain a Lakehouse architecture in ADLS / Databricks
- Perform data preparation tasks including data cleaning, normalization, deduplication, type conversion etc.
- Work with the DevOps team to deploy solutions in production environments
- Control data processes and take corrective action when errors are identified; corrective action may include executing a workaround process and then identifying the cause of and solution for data errors
- Participate as a full member of the global Analytics team, providing solutions for and insights into data-related items
- Collaborate with Data Science and Business Intelligence colleagues across the world to share key learnings, leverage ideas and solutions, and propagate best practices
- Lead projects that include other team members and participate in projects led by other team members
- Apply change management tools including training, communication, and documentation to manage upgrades, changes, and data migrations

Job Requirement

Must-Have Skills:
- Azure Databricks
- Azure Data Factory
- PySpark
- Spark SQL
- ADLS
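The data preparation tasks named above (cleaning, normalization, deduplication, type conversion) can be sketched in plain Python. In the role itself this logic would typically be written in PySpark/Spark SQL on Databricks; the `email` and `age` fields here are illustrative assumptions, not anything from the posting.

```python
def prepare(rows):
    """Clean, normalize, type-convert, and deduplicate a batch of records."""
    seen = set()
    out = []
    for row in rows:
        email = (row.get("email") or "").strip().lower()  # normalization
        if not email:                                     # cleaning: drop rows with no key
            continue
        if email in seen:                                 # deduplication on the normalized key
            continue
        seen.add(email)
        out.append({
            "email": email,
            # type conversion: keep a numeric age, or None when unparseable
            "age": int(row["age"]) if str(row.get("age", "")).isdigit() else None,
        })
    return out

raw = [
    {"email": "  A@X.COM ", "age": "34"},
    {"email": "a@x.com", "age": "34"},   # duplicate after normalization
    {"email": "", "age": "21"},          # dropped: empty key
    {"email": "b@x.com", "age": "n/a"},  # unparseable age -> None
]
clean = prepare(raw)
print(clean)
```

A PySpark equivalent would express the same steps with `lower(trim(col(...)))`, `dropDuplicates`, and `cast`, distributed across the cluster instead of a single loop.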

Posted 2 weeks ago

Apply

1.0 - 5.0 years

0 Lacs

karnataka

On-site

Capgemini Invent is the digital innovation, consulting, and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science, and creative design to help CxOs envision and build what's next for their businesses. In this role, you should have developed/worked on at least one Gen AI project and have experience in data pipeline implementation with cloud providers such as AWS, Azure, or GCP. You should also be familiar with cloud storage, cloud database, cloud data warehousing, and Data lake solutions like Snowflake, BigQuery, AWS Redshift, ADLS, and S3. Additionally, a good understanding of cloud compute services, load balancing, identity management, authentication, and authorization in the cloud is essential. Your profile should include a good knowledge of infrastructure capacity sizing, costing of cloud services to drive optimized solution architecture, leading to optimal infra investment vs. performance and scaling. You should be able to contribute to making architectural choices using various cloud services and solution methodologies. Proficiency in programming using Python is required along with expertise in cloud DevOps practices such as infrastructure as code, CI/CD components, and automated deployments on the cloud. Understanding networking, security, design principles, and best practices in the cloud is also important. At Capgemini, we value flexible work arrangements to provide support for maintaining a healthy work-life balance. You will have opportunities for career growth through various career growth programs and diverse professions tailored to support you in exploring a world of opportunities. Additionally, you can equip yourself with valuable certifications in the latest technologies such as Generative AI. Capgemini is a global business and technology transformation partner with a rich heritage of over 55 years. 
We have a diverse team of 340,000 members in more than 50 countries, working together to accelerate the dual transition to a digital and sustainable world while creating tangible impact for enterprises and society. Trusted by clients to unlock the value of technology, we deliver end-to-end services and solutions leveraging strengths from strategy and design to engineering, fueled by market-leading capabilities in AI, cloud, and data, combined with deep industry expertise and a partner ecosystem. Our global revenues in 2023 were reported at €22.5 billion.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

hyderabad, telangana

On-site

As an Engineer, IT Data at American Airlines, you will be part of a diverse, high-performing team dedicated to technical excellence. Your primary focus will be on delivering unrivaled digital products that drive a more reliable and profitable airline. The Data Domain you will work in encompasses managing and leveraging data as a strategic asset, including data management, storage, integration, and governance. This domain also involves Machine Learning, AI, Data Science, and Business Intelligence. In this role, you will collaborate closely with source data application teams and product owners to design, implement, and support analytics solutions that provide insights to make better decisions. You will be responsible for implementing data migration and data engineering solutions using Azure products and services such as Azure Data Lake Storage, Azure Data Factory, Azure Functions, Event Hub, Azure Stream Analytics, Azure Databricks, among others, as well as traditional data warehouse tools. Your tasks will span multiple aspects of the development lifecycle, including design, cloud engineering (infrastructure, network, security, and administration), ingestion, preparation, data modeling, testing, CI/CD pipelines, performance tuning, deployments, consumption, BI, alerting, and production support. Furthermore, you will provide technical leadership within a team environment and work independently. As part of a DevOps team, you will completely own and support the product, implementing batch and streaming data pipelines using cloud technologies. Your responsibilities will also include leading the development of coding standards, best practices, and privacy and security guidelines, as well as mentoring others on technical and domain skills to create multi-functional teams.
For success in this role, you will need a Bachelor's degree in Computer Science, Computer Engineering, Technology, Information Systems (CIS/MIS), Engineering, or a related technical discipline, or equivalent experience/training. You should have at least 3 years of software solution development experience using agile and DevOps practices, operating in a product model, as well as 3 years of data analytics experience using SQL. Additionally, a minimum of 3 years of cloud development and data lake experience, preferably in Microsoft Azure, is required. Preferred qualifications include 5+ years of software solution development experience using agile, DevOps, and a product model, and 5+ years of data analytics experience using SQL. Experience in full-stack development, preferably in Azure, and familiarity with Teradata Vantage development and administration are also preferred. Airline industry experience is a plus. In terms of skills, licenses, and certifications, you should have expertise with the Azure technology stack for data management, data ingestion, capture, processing, curation, and creating consumption layers. An Azure Development Track Certification and Spark Certification are preferred. Proficiency in several tools/platforms such as Python, Spark, Unix, SQL, Teradata, Cassandra, MongoDB, Oracle, SQL Server, ADLS, and Snowflake is required. Additionally, experience with Azure Cloud Technologies, CI/CD tools, the BI analytics tool stack, and data governance and privacy tools is beneficial for this role.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

hyderabad, telangana

On-site

As an Engineer, IT Data at American Airlines, you will be part of a diverse and high-performing team dedicated to technical excellence. Your main focus will be on delivering unrivaled digital products that drive a more reliable and profitable airline. The Data Domain you will be working in refers to the area within Information Technology that focuses on managing and leveraging data as a strategic asset. This includes data management, storage, integration, and governance, leaning into Machine Learning, AI, Data Science, and Business Intelligence. In this role, you will work closely with source data application teams and product owners to design, implement, and support analytics solutions that provide insights to make better decisions. You will be responsible for implementing data migration and data engineering solutions using Azure products and services such as Azure Data Lake Storage, Azure Data Factory, Azure Functions, Event Hub, Azure Stream Analytics, Azure Databricks, etc., as well as traditional data warehouse tools. Your responsibilities will involve multiple aspects of the development lifecycle including design, cloud engineering, ingestion, preparation, data modeling, testing, CICD pipelines, performance tuning, deployments, consumption, BI, alerting, and production support. You will provide technical leadership, collaborate within a team environment, and work independently. Additionally, you will be part of a DevOps team that completely owns and supports the product, implementing batch and streaming data pipelines using cloud technologies. As an essential member of the team, you will lead the development of coding standards, best practices, privacy, and security guidelines. You will also mentor others on technical and domain skills to create multi-functional teams. 
Your success in this role will require a Bachelor's degree in Computer Science, Computer Engineering, Technology, Information Systems (CIS/MIS), Engineering, or a related technical discipline, or equivalent experience/training. To excel in this position, you should have at least 3 years of software solution development experience using agile, DevOps, and operating in a product model. Moreover, you should have 3+ years of data analytics experience using SQL and cloud development and data lake experience, preferably with Microsoft Azure. Preferred qualifications include 5+ years of software solution development experience, 5+ years of data analytics experience using SQL, 3+ years of full-stack development experience, and familiarity with Azure technologies. Additionally, skills, licenses, and certifications required for success in this role include expertise with the Azure Technology stack, practical direction within Azure Native cloud services, Azure Development Track Certification, Spark Certification, and a combination of Development, Administration & Support experience with various tools/platforms such as Scripting (Python, Spark, Unix, SQL), Data Platforms (Teradata, Cassandra, MongoDB, Oracle, SQL Server, ADLS, Snowflake), Azure Cloud Technologies, CI/CD tools (GitHub, Jenkins, Azure DevOps, Terraform), BI Analytics Tool Stack (Cognos, Tableau, Power BI, Alteryx, Denodo, Grafana), and Data Governance and Privacy tools (Alation, Monte Carlo, Informatica, BigID). Join us at American Airlines, where you can explore a world of possibilities, travel the world, grow your expertise, and become the best version of yourself while contributing to the transformation of technology delivery for our customers and team members worldwide.,

Posted 2 weeks ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Hyderabad/Secunderabad

Hybrid

Job Objective

We're looking for a skilled and passionate Data Engineer to build robust, scalable data platforms using cutting-edge technologies. If you have expertise in Databricks, Python, PySpark, Azure Data Factory, Azure Synapse, SQL Server, and a deep understanding of data modeling, orchestration, and pipeline development, this is your opportunity to make a real impact. You'll thrive in our cloud-first, innovation-driven environment, designing and optimizing end-to-end data workflows that drive meaningful business outcomes. If you're committed to high performance, clean data architecture, and continuous learning, we want to hear from you!

Required Qualifications
- Education: BE, ME/MTech, MCA, MSc, MBA, or equivalent industry experience
- Experience: 5 to 10 years working with data engineering technologies (Databricks, Azure, Python, SQL Server, PySpark, Azure Data Factory, Synapse, Delta Lake, Git, CI/CD tech stack, MSBI etc.)

Preferred Qualifications & Skills:

Must-Have Skills:
- Expertise in relational & multi-dimensional database architectures
- Proficiency in Microsoft BI tools (SQL Server SSRS, SSAS, SSIS), Power BI, and SharePoint
- Strong experience in Power BI MDX, SSAS, SSIS, SSRS, Tabular & DAX queries
- Deep understanding of SQL Server Tabular Model & multidimensional database design
- Excellent SQL-based data analysis skills
- Strong hands-on experience with Azure Data Factory, Databricks, PySpark/Python

Nice-to-Have Skills:
- Exposure to AWS or GCP
- Experience with Lakehouse architecture, real-time streaming (Kafka/Event Hubs), Infrastructure as Code (Terraform/ARM)
- Familiarity with Cognos, Qlik, Tableau, MDM, DQ, Data Migration
- MS BI, Power BI, or Azure certifications
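Since the posting highlights Delta Lake, here is a stdlib-only sketch of the upsert ("merge") semantics such pipelines commonly implement. On Databricks this would be a Delta `MERGE INTO` statement; the `id` key and `qty` column are assumptions made for the example.

```python
def merge(target, updates, key="id"):
    """Upsert: update matching rows by key, insert the rest (MERGE-like semantics)."""
    index = {row[key]: i for i, row in enumerate(target)}
    inserted = updated = 0
    for row in updates:
        if row[key] in index:
            target[index[row[key]]].update(row)  # WHEN MATCHED THEN UPDATE
            updated += 1
        else:
            target.append(dict(row))             # WHEN NOT MATCHED THEN INSERT
            inserted += 1
    return inserted, updated

table = [{"id": 1, "qty": 5}, {"id": 2, "qty": 7}]
ins, upd = merge(table, [{"id": 2, "qty": 9}, {"id": 3, "qty": 1}])
print(ins, upd)  # 1 1
```

Delta Lake performs the same matched/not-matched logic transactionally over Parquet files, which is what makes incremental loads into a Lakehouse reliable.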

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

hyderabad, telangana

On-site

The Data Quality Monitoring Lead plays a crucial role in ensuring the accuracy, reliability, and integrity of data across various systems and platforms. You will lead an offshore team, establish robust data quality monitoring frameworks, and collaborate with cross-functional stakeholders to address data-related challenges effectively. Your responsibilities will include overseeing real-time monitoring of data pipelines, dashboards, and logs using tools like Log Analytics, KQL queries, and Azure Monitoring to detect anomalies promptly. You will configure alerting mechanisms for timely notifications of potential data discrepancies and collaborate with support teams to investigate and resolve system-related issues impacting data quality. Additionally, you will lead the team in identifying and categorizing data quality issues, perform root cause analysis to determine underlying causes, and collaborate with system support teams and data stewards to implement corrective measures. Developing strategies for rectifying data quality issues, designing monitoring tools, and conducting cross-system data analysis will also be part of your role. Moreover, you will evaluate existing data monitoring processes, refine monitoring tools, and promote best practices in data quality monitoring to ensure standardization across all data-related activities. You will also lead and mentor an offshore team, develop a centralized knowledge base, and serve as the primary liaison between the offshore team and the Lockton Data Quality Lead. In terms of technical skills, proficiency in data monitoring tools like Log Analytics, KQL, Azure Monitoring, and Power BI, strong command of SQL, experience in automation scripting using Python, familiarity with Azure services, and understanding of data flows involving Mulesoft and Salesforce platforms are required. Additionally, experience with Azure DevOps for issue tracking and version control is preferred. 
This role requires a proactive, detail-oriented individual with strong leadership and communication skills, along with a solid technical background in data monitoring, analytics, database querying, automation scripting, and Azure services.,
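The anomaly detection described above, monitoring pipeline metrics and alerting on discrepancies, can be illustrated with a simple threshold check. Production monitoring in this role would use Log Analytics, KQL queries, and Azure Monitor alerts; this stdlib sketch, with hypothetical daily row counts, only shows the underlying idea.

```python
import statistics

def detect_anomalies(counts, z_threshold=3.0):
    """Flag values that deviate from the mean by more than z_threshold std deviations."""
    mean = statistics.mean(counts)
    sd = statistics.stdev(counts)
    # Guard against sd == 0 (all values identical) before dividing.
    return [(i, c) for i, c in enumerate(counts) if sd and abs(c - mean) / sd > z_threshold]

# Daily row counts from a hypothetical pipeline; day 5 dropped sharply.
daily_counts = [1000, 1020, 990, 1010, 1005, 120, 995, 1008]
alerts = detect_anomalies(daily_counts, z_threshold=2.0)
print(alerts)
```

Each flagged `(day, count)` pair is the kind of event that would trigger a notification for root cause analysis.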

Posted 2 weeks ago

Apply

1.0 - 5.0 years

0 Lacs

hyderabad, telangana

On-site

As an American Airlines team member in the Tech Hub in Hyderabad, India, you will have the opportunity to be part of a diverse, high-performing team dedicated to technical excellence. Your primary focus will be on delivering unrivaled digital products that drive a more reliable and profitable airline. The Data Domain you'll be working in is centered around managing and leveraging data as a strategic asset, including data management, storage, integration, and governance, with a strong emphasis on Machine Learning, AI, Data Science, and Business Intelligence. In this role, you will work closely with source data application teams and product owners to design, implement, and support analytics solutions that provide valuable insights for better decision-making. You will be responsible for implementing data migration and data engineering solutions using Azure products and services such as Azure Data Lake Storage, Azure Data Factory, Azure Functions, Event Hub, Azure Stream Analytics, Azure Databricks, and more, as well as traditional data warehouse tools. Your responsibilities will include various aspects of the development lifecycle, such as design, cloud engineering, data modeling, testing, performance tuning, deployments, BI, alerting, and production support. You will collaborate within a team environment and independently to develop technical solutions. As part of a DevOps team, you will have ownership of and support for the product you work on, implementing both batch and streaming data pipelines using cloud technologies. To be successful in this role, you should have a Bachelor's degree in Computer Science, Computer Engineering, Technology, Information Systems, or a related technical discipline, or equivalent experience. You should have at least one year of software solution development experience using agile and DevOps practices, along with data analytics experience using SQL. Experience with cloud development and data lake technologies, particularly in Microsoft Azure, is preferred.
Preferred qualifications include additional years of experience in software solution development, data analytics, full-stack development, and specific experience with Azure technologies. Skills in scripting languages like Python, Spark, Unix, SQL, as well as expertise with the Azure Technology stack and various data platforms and BI Analytics tools are highly valued. Certifications such as Azure Development Track and Spark are preferred. Effective communication skills are essential for this role, as you will need to collaborate with team members at all levels within the organization. Physical abilities are also necessary to perform the essential functions of the position safely. American Airlines values inclusion and diversity, providing a supportive environment for all team members to reach their full potential. If you are ready to be part of a dynamic, tech-driven environment where your creativity and strengths are celebrated, join American Airlines in Hyderabad and immerse yourself in the exciting world of technological innovation. Feel free to be yourself and contribute to keeping the largest airline in the world running smoothly as we care for people on life's journey.,

Posted 2 weeks ago

Apply

2.0 - 9.0 years

0 Lacs

chennai, tamil nadu

On-site

Tiger Analytics is a global AI and analytics consulting firm at the forefront of solving complex problems using data and technology. With a team of over 2,800 experts spread across the globe, we are dedicated to making a positive impact on the lives of millions worldwide. Our culture is built on expertise, respect, and collaboration, with a focus on teamwork. While our headquarters are in Silicon Valley, we have delivery centers and offices in various cities in India, the US, UK, Canada, and Singapore, as well as a significant remote workforce.

As an Azure Big Data Engineer at Tiger Analytics, you will be part of a dynamic team that is driving an AI revolution. Your typical day will involve working on a variety of analytics solutions and platforms, including data lakes, modern data platforms, and data fabric solutions built with open-source, big data, and cloud technologies on Microsoft Azure. Your responsibilities may include designing and building scalable data ingestion pipelines, executing high-performance data processing, orchestrating pipelines, designing exception-handling mechanisms, and collaborating with cross-functional teams to bring analytical solutions to life.

To excel in this role, we expect you to have 4 to 9 years of total IT experience, with at least 2 years in big data engineering and Microsoft Azure. You should be well versed in technologies such as Azure Data Factory, PySpark, Databricks, Azure SQL Database, Azure Synapse Analytics, Event Hub and Stream Analytics, Cosmos DB, and Purview. Your passion for writing high-quality, scalable code and your ability to collaborate effectively with stakeholders are essential for success in this role. Experience with big data technologies like Hadoop, Spark, Airflow, NiFi, Kafka, Hive, and Neo4j, as well as knowledge of different file formats and REST API design, will be advantageous.
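One of the responsibilities listed above, designing exception-handling mechanisms for pipelines, often comes down to a small retry wrapper around each ingestion step. A minimal sketch in plain Python (the step and its failure mode are illustrative; a real pipeline would hook this into its orchestrator's retry policy):

```python
import time


def run_with_retries(step, max_attempts=3, backoff_seconds=1.0):
    """Run a pipeline step, retrying transient failures with a simple
    linear backoff. The exception from the final attempt is re-raised
    so the failure still surfaces in monitoring and alerting."""
    for attempt in range(1, max_attempts + 1):
        try:
            return step()
        except Exception:
            if attempt == max_attempts:
                raise
            time.sleep(backoff_seconds * attempt)


# Example: a step that fails twice before succeeding on the third attempt.
attempts = {"count": 0}

def flaky_ingest():
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise RuntimeError("transient failure")
    return "loaded"

result = run_with_retries(flaky_ingest, backoff_seconds=0)  # -> "loaded"
```

Distinguishing transient errors (retry) from permanent ones (fail fast) is the usual refinement; catching a narrower exception type than `Exception` is the first step in that direction.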
At Tiger Analytics, we value diversity and inclusivity, and we encourage individuals with varying skills and backgrounds to apply. We are committed to providing equal opportunities for all our employees and fostering a culture of trust, respect, and growth. Your compensation package will be competitive and aligned with your expertise and experience.

If you are looking to be part of a forward-thinking team that is pushing the boundaries of what is possible in AI and analytics, we invite you to join us at Tiger Analytics and be part of our exciting journey towards building innovative solutions that inspire and energize.

Posted 2 weeks ago

Apply

7.0 - 11.0 years

0 Lacs

haryana

On-site

As a Senior Manager specializing in Data Analytics & AI, you will be a pivotal member of the EY Data, Analytics & AI Ireland team. Your role as a Databricks Platform Architect will involve enabling clients to extract significant value from their information assets through innovative data analytics solutions. You will have the opportunity to work across various industries, collaborating with diverse teams and leading the design and implementation of data architecture strategies aligned with client goals.

Your key responsibilities will include leading teams with varying skill sets in utilizing different data and analytics technologies, adapting your leadership style to meet client needs, creating a positive learning culture, engaging with clients to understand their data requirements, and developing data artefacts based on industry best practices. Additionally, you will assess existing data architectures, develop data migration strategies, and ensure data integrity and minimal disruption during migration activities.

To qualify for this role, you must possess a strong academic background in computer science or related fields, along with at least 7 years of experience as a Data Architect or in a similar role in a consulting environment. Hands-on experience with cloud services, data modeling techniques, data management concepts, Python, Spark, Docker, Kubernetes, and cloud security controls is essential.

Ideally, you will also be able to communicate technical concepts effectively to non-technical stakeholders, lead the design and optimization of the Databricks platform, work closely with the data engineering team, maintain a comprehensive understanding of the data pipeline, and stay updated on new and emerging technologies in the field.

EY offers a competitive remuneration package, flexible working options, career development opportunities, and a comprehensive Total Rewards package.
Additionally, you will benefit from support, coaching, opportunities for skill development, and a diverse and inclusive culture that values individual contributions. If you are passionate about leveraging data to solve complex problems, drive business outcomes, and contribute to a better working world, consider joining EY as a Databricks Platform Architect. Apply now to be part of a dynamic team dedicated to innovation and excellence.

Posted 3 weeks ago

Apply