7.0 - 12.0 years
18 - 30 Lacs
Bengaluru
Work from Office
Urgently Hiring for Senior Azure Data Engineer
Job Location: Bangalore
Minimum experience: 7+ years total, with at least 4 years of relevant experience
Keywords: Databricks, PySpark, Scala, SQL, live/streaming data, batch processing data
Share CV at siddhi.pandey@adecco.com or call 6366783349

Roles and Responsibilities:
The Data Engineer will work on data engineering projects for various business units, focusing on delivering complex data management solutions by leveraging industry best practices. They work with the project team to build the most efficient data pipelines and data management solutions that make data easily available to consuming applications and analytical solutions. A Data Engineer is expected to possess strong technical skills.

Key Characteristics:
• Technology champion who constantly pursues skill enhancement and has an inherent curiosity to understand work from multiple dimensions.
• Interest and passion in Big Data technologies, and appreciation of the value an effective data management solution can bring.
• Has worked on real data challenges and handled high volume, velocity, and variety of data.
• Excellent analytical and problem-solving skills, with willingness to take ownership and resolve technical challenges.
• Contributes to community-building initiatives like CoE and CoP.

Mandatory skills: Azure (master); ELT; data modeling; data integration and ingestion; data manipulation and processing; GitHub, GitHub Actions, Azure DevOps; Data Factory, Databricks, SQL DB, Synapse, Stream Analytics, Glue, Airflow, Kinesis, Redshift, SonarQube, PyTest.

Optional skills: Experience in project management and running a scrum team; experience working with BPC and Planning; exposure to working with an external technical ecosystem; MkDocs documentation.

Share CV at siddhi.pandey@adecco.com or call 6366783349
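The posting asks for hands-on Databricks/PySpark work on both batch and streaming sources. A minimal sketch of that pattern is below; the storage paths, table names, and schema are illustrative assumptions, not details from the posting, and the cloudFiles (Auto Loader) source assumes a Databricks runtime.

```python
# Minimal sketch: batch + streaming ingestion into Delta tables with PySpark.
# Paths, table names, and schema are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ingest-demo").getOrCreate()

# Batch ingestion: load a day's extract and append it to a Delta table.
batch_df = (
    spark.read.format("parquet")
    .load("abfss://raw@examplestore.dfs.core.windows.net/orders/2024-01-01/")
    .withColumn("ingested_at", F.current_timestamp())
)
batch_df.write.format("delta").mode("append").saveAsTable("bronze.orders")

# Streaming ingestion: continuously pick up new files from the landing zone.
stream_df = (
    spark.readStream.format("cloudFiles")          # Databricks Auto Loader (assumed)
    .option("cloudFiles.format", "json")
    .load("abfss://raw@examplestore.dfs.core.windows.net/events/")
    .withColumn("ingested_at", F.current_timestamp())
)
(
    stream_df.writeStream.format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/events")
    .trigger(availableNow=True)                    # run as an incremental batch
    .toTable("bronze.events")
)
```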
Posted 2 weeks ago
3.0 - 8.0 years
12 - 16 Lacs
Hyderabad
Work from Office
Job Area: Information Technology Group > Systems Analysis

General Summary:
We are seeking a Systems Analyst, Senior to join our growing organization, with specialized skills in IBM Planning Analytics/TM1 and a functional understanding of Finance budgeting and forecasting. This role involves advanced development, troubleshooting, and implementation of TM1 solutions to meet complex business requirements. The person will be part of the Finance Planning and Reporting team, will work closely with their manager, and will help deliver the TM1 planning and budgeting roadmap for global stakeholders.

Key Responsibilities:
• Design and develop IBM Planning Analytics (TM1) solutions as per standards.
• Write logical, complex, concise, efficient, and well-documented code for both TM1 rules and Turbo Integrator processes.
• Good to have: knowledge of Python and the TM1py libraries.
• Write business requirement specifications, define level of effort for projects/enhancements, and design and coordinate system tests to ensure solutions meet business requirements.
• SQL skills to work with source data and understand source data structures; good understanding of SQL and ability to write complex queries.
• Understanding of cloud technologies, especially AWS and Databricks, is an added advantage.
• Experience in client reporting and dashboard tools such as Tableau, PA Web, and PAfE.
• Understanding of ETL processes and data manipulation.
• Work independently with little supervision, taking responsibility for own work and making decisions that are moderate in impact; errors may have financial impact or affect projects, operations, or customer relationships, and may require involvement beyond the immediate work group to correct.
• Provide ongoing system support, including troubleshooting and resolving issues, to ensure optimal system performance and reliability.
• Use verbal and written communication skills to convey information that may be complex to others who have limited knowledge of the subject.
• Use deductive and inductive problem solving; multiple approaches may be necessary, information is often missing or incomplete, and intermediate data analysis/interpretation skills may be required.
• Exercise substantial creativity to innovate new processes, procedures, or work products within guidelines or to achieve established objectives.

Minimum Qualifications:
• 3+ years of IT-relevant work experience with a Bachelor's degree, OR 5+ years of IT-relevant work experience without a Bachelor's degree.

Qualifications:
The ideal candidate will have 8-10 years of experience in designing, modeling, and developing enterprise performance management (EPM) applications using IBM Planning Analytics (TM1), including:
• Designing and developing TM1 solutions as per standards, and writing logical, complex, concise, efficient, and well-documented code for TM1 rules and Turbo Integrator processes.
• Leading the design, modeling, and development of TM1 applications, including TI scripting, MDX, rules, feeders, and performance tuning.
• Providing technical expertise in identifying, evaluating, and developing systems and procedures that are efficient, cost effective, and meet user requirements.
• Planning and executing unit, integration, and acceptance testing.
• Being a good team player who can work seamlessly with global teams and data teams, with excellent communication and collaboration skills for working with business stakeholders.
• Functional understanding of Finance budgeting and forecasting; understanding of cloud technologies, especially AWS and Databricks, is an added advantage.
• Experience in Agile methodologies and JIRA user stories.
• Ability to design and develop solutions using Python as per standards.

Required: bachelor's or master's degree in information science, computer science, business, or equivalent work experience.
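Since the posting lists Python with the TM1py libraries as a nice-to-have, here is a minimal, hedged sketch of what that combination typically looks like; the server address, credentials, cube name, and process name are placeholders invented for illustration, not values from the posting.

```python
# Minimal TM1py sketch: connect to a TM1/Planning Analytics instance, run an
# MDX query, and kick off a TurboIntegrator process. All connection details,
# cube and process names below are assumed placeholders.
from TM1py.Services import TM1Service

TM1_PARAMS = {
    "address": "tm1.example.com",   # assumed host
    "port": 8010,                   # assumed REST API port
    "user": "admin",
    "password": "change-me",
    "ssl": True,
}

with TM1Service(**TM1_PARAMS) as tm1:
    # Pull a slice of the planning cube, e.g. for validation or reconciliation.
    mdx = """
    SELECT {[Version].[Budget]} ON COLUMNS,
           {[Account].Members} ON ROWS
    FROM [Finance Plan]
    """
    cells = tm1.cubes.cells.execute_mdx(mdx)
    print(f"Fetched {len(cells)} cells from the Finance Plan cube")

    # Trigger a TurboIntegrator process, e.g. a nightly actuals load.
    tm1.processes.execute("Load.Actuals.FromSQL", pMonth="2024-01")
```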
Posted 2 weeks ago
10.0 - 15.0 years
8 - 18 Lacs
Kochi
Remote
10 years of experience working in cloud-native data (Azure preferred): Databricks, SQL, PySpark, Unity Catalog, migrating from Hive Metastore to Unity Catalog, implementing Row-Level Security (RLS), metadata-driven ETL design patterns; Databricks certifications.
Posted 2 weeks ago
5.0 - 8.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Role Purpose: The purpose of this role is to design, test, and maintain software programs for operating systems or applications that need to be deployed at the client end, ensuring they meet 100% of quality assurance parameters.

Responsibilities:
• Design and implement data modeling, data ingestion, and data processing for various datasets.
• Design, develop, and maintain an ETL framework for new data sources.
• Develop data ingestion using AWS Glue/EMR and data pipelines using PySpark, Python, and Databricks.
• Build orchestration workflows using Airflow and Databricks Job workflows (see the Airflow sketch after this posting).
• Develop and execute ad hoc data ingestion to support business analytics.
• Proactively interact with vendors on any questions and report status accordingly.
• Explore and evaluate tools/services to support business requirements.
• Ability to help create a data-driven culture and impactful data strategies.
• Aptitude for learning new technologies and solving complex problems.

Qualifications:
• Minimum of a bachelor's degree, preferably in Computer Science, Information Systems, or Information Technology.
• Minimum 5 years of experience on cloud platforms such as AWS, Azure, or GCP.
• Minimum 5 years of experience with Amazon Web Services: VPC, S3, EC2, Redshift, RDS, EMR, Athena, IAM, Glue, DMS, Data Pipeline & API, Lambda, etc.
• Minimum 5 years of experience in ETL and data engineering using Python, AWS Glue, AWS EMR/PySpark, and Airflow for orchestration.
• Minimum 2 years of experience in Databricks, including Unity Catalog, data engineering, Job workflow orchestration, and dashboard generation based on business requirements.
• Minimum 5 years of experience in SQL, Python, and source control such as Bitbucket, plus CI/CD for code deployment.
• Experience with PostgreSQL, SQL Server, MySQL, and Oracle databases.
• Experience with MPP platforms such as AWS Redshift, AWS EMR, and Databricks SQL warehouses and compute clusters.
• Experience in distributed programming with Python, Unix scripting, MPP, and RDBMS databases for data integration.
• Experience building distributed high-performance systems using Spark/PySpark and AWS Glue, and developing applications for loading/streaming data into Databricks SQL warehouse and Redshift.
• Experience in Agile methodology.
• Proven ability to write technical specifications for data extraction and good-quality code.
• Experience with big data processing using Sqoop, Spark, and Hive is an additional plus.
• Experience with data visualization tools including Power BI and Tableau.
• Nice to have: experience building UIs using the Python Flask framework and Angular.

Mandatory Skills: Python for Insights. Experience: 5-8 Years.
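The responsibilities above combine AWS Glue ingestion with Airflow orchestration. A minimal Airflow DAG sketch under those assumptions is shown below; the DAG id, Glue job name, region, and schedule are hypothetical placeholders, and the layout assumes Airflow 2.x.

```python
# Minimal Airflow sketch: a DAG that starts an AWS Glue job and then runs a
# small validation step. DAG id, Glue job name, and schedule are placeholders.
from datetime import datetime

import boto3
from airflow import DAG
from airflow.operators.python import PythonOperator


def start_glue_job(**context):
    """Kick off the (assumed) Glue job that lands raw files as parquet in S3."""
    glue = boto3.client("glue", region_name="us-east-1")
    run = glue.start_job_run(JobName="raw_orders_ingest")
    print(f"Started Glue run {run['JobRunId']}")


def validate_load(**context):
    """Placeholder validation; in practice this might query Athena or Databricks."""
    print("Validation would run here (row counts, schema checks, etc.)")


with DAG(
    dag_id="orders_daily_ingest",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",          # Airflow 2.4+ keyword
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="start_glue_job", python_callable=start_glue_job)
    validate = PythonOperator(task_id="validate_load", python_callable=validate_load)
    ingest >> validate
```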
Posted 2 weeks ago
3.0 - 5.0 years
8 - 12 Lacs
Pune
Work from Office
Role Purpose: The purpose of this role is to design, test, and maintain software programs for operating systems or applications that need to be deployed at the client end, ensuring they meet 100% of quality assurance parameters.

Do:
1. Be instrumental in understanding the requirements and design of the product/software:
• Develop software solutions by studying information needs, systems flow, data usage, and work processes.
• Investigate problem areas and follow the software development life cycle.
• Facilitate root cause analysis of system issues and the problem statement.
• Identify ideas to improve system performance and impact availability.
• Analyze client requirements and convert requirements into feasible designs.
• Collaborate with functional teams or systems analysts who carry out the detailed investigation into software requirements.
• Confer with project managers to obtain information on software capabilities.
2. Perform coding and ensure optimal software/module development:
• Determine operational feasibility by evaluating analysis, problem definition, requirements, software development, and proposed software.
• Develop and automate processes for software validation by setting up and designing test cases/scenarios/usage cases and executing them.
• Modify software to fix errors, adapt it to new hardware, improve its performance, or upgrade interfaces.
• Analyze information to recommend and plan the installation of new systems or modifications to an existing system.
• Ensure that code is error-free, with no bugs or test failures.
• Prepare reports on programming project specifications, activities, and status.
• Ensure all code is raised as per the norms defined for the project/program/account, with a clear description and replication patterns.
• Compile timely, comprehensive, and accurate documentation and reports as requested.
• Coordinate with the team on daily project status and progress, and document it.
• Provide feedback on usability and serviceability, trace results to quality risk, and report to concerned stakeholders.
3. Status reporting and customer focus on an ongoing basis with respect to the project and its execution:
• Capture all requirements and clarifications from the client for better-quality work.
• Take feedback on a regular basis to ensure smooth and on-time delivery.
• Participate in continuing education and training to remain current on best practices, learn new programming languages, and better assist other team members.
• Consult with engineering staff to evaluate software-hardware interfaces and develop specifications and performance requirements.
• Document and demonstrate solutions by developing documentation, flowcharts, layouts, diagrams, charts, code comments, and clear code.
• Document necessary details and reports formally, for proper understanding of the software from client proposal to implementation.
• Ensure good-quality interaction with the customer with respect to e-mail content, fault report tracking, voice calls, business etiquette, etc.
• Respond to customer requests on time, with no instances of complaints, either internally or externally.

Deliver:
No. | Performance Parameter | Measure
1. | Continuous integration, deployment & monitoring of software | 100% error-free onboarding & implementation, throughput %, adherence to the schedule/release plan
2. | Quality & CSAT | On-time delivery, manage software, troubleshoot queries, customer experience, completion of assigned certifications for skill upgradation
3. | MIS & reporting | 100% on-time MIS & report generation

Mandatory Skills: Databricks - Data Engineering. Experience: 3-5 Years.
Posted 2 weeks ago
5.0 - 10.0 years
15 - 30 Lacs
Vadodara
Remote
We are seeking an experienced Senior Data Engineer to join our team. The ideal candidate will have a strong background in data engineering and AWS infrastructure, with hands-on experience in building and maintaining data pipelines and the necessary infrastructure components. The role will involve using a mix of data engineering tools and AWS services to design, build, and optimize data architecture.

Key Responsibilities:
• Design, develop, and maintain data pipelines using Airflow and AWS services.
• Implement and manage data warehousing solutions with Databricks and PostgreSQL.
• Automate tasks using Git/Jenkins.
• Develop and optimize ETL processes, leveraging AWS services like S3, Lambda, AppFlow, and DMS.
• Create and maintain visual dashboards and reports using Looker.
• Collaborate with cross-functional teams to ensure smooth integration of infrastructure components.
• Ensure the scalability, reliability, and performance of data platforms.
• Work with Jenkins for infrastructure automation.

Technical and functional areas of expertise:
• Working as a senior individual contributor on a data-intensive project.
• Strong experience in building high-performance, resilient, and secure data processing pipelines, preferably using a Python-based stack.
• Extensive experience in building data-intensive applications with a deep understanding of querying and modeling with relational databases, preferably on time-series data.
• Intermediate proficiency in AWS services (S3, Airflow).
• Proficiency in Python and PySpark.
• Proficiency with ThoughtSpot or Databricks.
• Intermediate proficiency in database scripting (SQL).
• Basic experience with Jenkins for task automation.

Nice to have:
• Intermediate proficiency in data analytics tools (Power BI / Tableau / Looker / ThoughtSpot).
• Experience working with AWS Lambda, Glue, AppFlow, and other AWS transfer services.
• Exposure to PySpark and data automation tools like Jenkins or CircleCI.
• Familiarity with Terraform for infrastructure-as-code.
• Experience in data quality testing to ensure the accuracy and reliability of data pipelines.
• Proven experience working directly with U.S. client stakeholders.
• Ability to work independently and take the lead on tasks.

Education and experience: Bachelor's or master's in computer science or related fields; 5+ years of experience.

Stack/skills needed: Databricks, PostgreSQL, Python & PySpark, AWS stack, Power BI / Tableau / Looker / ThoughtSpot, familiarity with Git and/or CI/CD tools.
Posted 2 weeks ago
4.0 - 9.0 years
20 - 35 Lacs
Mumbai, Navi Mumbai, Pune
Work from Office
Job Summary: We are looking for a highly skilled Data Scientist with deep expertise in time series forecasting, particularly in demand forecasting and customer lifecycle analytics (CLV). The ideal candidate will be proficient in Python or PySpark, have hands-on experience with tools like Prophet and ARIMA, and be comfortable working in Databricks environments. Familiarity with classic ML models and optimization techniques is a plus.

Key Responsibilities:
• Develop, deploy, and maintain time series forecasting models (Prophet, ARIMA, etc.) for demand forecasting and customer behavior modeling (see the forecasting sketch after this posting).
• Design and implement Customer Lifetime Value (CLV) models to drive customer retention and engagement strategies.
• Process and analyze large datasets using PySpark or Python (Pandas).
• Partner with cross-functional teams to identify business needs and translate them into data science solutions.
• Leverage classic ML techniques (classification, regression) and boosting algorithms (e.g., XGBoost, LightGBM) to support broader analytics use cases.
• Use Databricks for collaborative development, data pipelines, and model orchestration.
• Apply optimization techniques where relevant to improve forecast accuracy and business decision-making.
• Present actionable insights and communicate model results effectively to technical and non-technical stakeholders.

Required Qualifications:
• Strong experience in time series forecasting, with hands-on knowledge of Prophet, ARIMA, or equivalent - Mandatory.
• Proven track record in demand forecasting - Highly Preferred.
• Experience in modeling Customer Lifetime Value (CLV) or similar customer analytics use cases - Highly Preferred.
• Proficiency in Python (Pandas) or PySpark - Mandatory.
• Experience with Databricks - Mandatory.
• Solid foundation in statistics, predictive modeling, and machine learning.

Locations: Mumbai/Pune/Noida/Bangalore/Jaipur/Hyderabad
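As a rough illustration of the Prophet-based demand forecasting the posting asks about, here is a minimal sketch; the CSV path and column names are assumptions for the example only, and it assumes the open-source prophet package is installed.

```python
# Minimal Prophet sketch: fit a daily demand series and forecast 30 days ahead.
# The file path and column names are illustrative assumptions.
import pandas as pd
from prophet import Prophet

# Prophet expects two columns: ds (date) and y (value to forecast).
history = (
    pd.read_csv("daily_demand.csv", parse_dates=["order_date"])
    .rename(columns={"order_date": "ds", "units_sold": "y"})
    [["ds", "y"]]
)

model = Prophet(weekly_seasonality=True, yearly_seasonality=True)
model.fit(history)

future = model.make_future_dataframe(periods=30)   # extend 30 days past history
forecast = model.predict(future)

# yhat is the point forecast; the bounds give an uncertainty interval.
print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail())
```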
Posted 2 weeks ago
7.0 - 12.0 years
15 - 27 Lacs
Bengaluru
Hybrid
Labcorp is hiring a Senior Data Engineer. This person will be an integrated member of the Labcorp Data and Analytics team, working within the IT team, and will play a crucial role in designing, developing, and maintaining data solutions using Databricks, Fabric, Spark, PySpark, and Python. They will be responsible for reviewing business requests and translating them into technical solutions and technical specifications. In addition, they will mentor fellow developers to grow their knowledge and expertise, working in a fast-paced, high-volume processing environment where quality and attention to detail are vital.

RESPONSIBILITIES:
• Design and implement end-to-end data engineering solutions by leveraging the full suite of Databricks and Fabric tools, including data ingestion, transformation, and modeling.
• Design, develop, and maintain end-to-end data pipelines using Spark, ensuring scalable, reliable, and cost-optimized solutions.
• Conduct performance tuning and troubleshooting to identify and resolve any issues.
• Implement data governance and security best practices, including role-based access control, encryption, and auditing.
• Work in a fast-paced environment and perform effectively in an agile development setting.

REQUIREMENTS:
• 8+ years of experience in designing and implementing data solutions, with at least 4+ years in data engineering.
• Extensive experience with Databricks and Fabric, including a deep understanding of their architecture, data modeling, and real-time analytics.
• Minimum 6+ years of experience in Spark, PySpark, and Python.
• Strong experience in SQL, Spark SQL, data modeling, and RDBMS concepts.
• Strong knowledge of Data Fabric services, particularly Data Engineering, Data Warehouse, Data Factory, and Real-Time Intelligence.
• Strong problem-solving skills, with the ability to multi-task.
• Familiarity with security best practices in cloud environments, Active Directory, encryption, and data privacy compliance.
• Effective oral and written communication.
• Experience in Agile development, Scrum, and Application Lifecycle Management (ALM).
• Preference given to current or former Labcorp employees.

EDUCATION: Bachelor's in Engineering, or MCA.
Posted 2 weeks ago
1.0 - 3.0 years
3 - 6 Lacs
Hyderabad
Work from Office
What you will do: In this vital role you will identify trends, root causes, and potential improvements in our products and processes, ensuring that patient voices are heard and addressed with utmost precision. As the Sr Associate Data Scientist at Amgen, you will be responsible for developing and deploying basic machine learning, operational research, semantic analysis, and statistical methods to uncover structure in large data sets, and for creating analytics solutions to address customer needs and opportunities.

• Collect, clean, and manage large datasets related to product performance and patient complaints; ensure data integrity, accuracy, and accessibility for further analysis.
• Develop and maintain databases and data systems for storing patient complaints and product feedback.
• Analyze data to identify patterns, trends, and correlations in patient complaints and product issues; use advanced statistical methods and machine learning techniques to uncover insights and root causes (a small classification sketch follows this posting).
• Develop analytics or predictive models to foresee potential product issues and patient concerns, addressing customer needs and opportunities.
• Prepare comprehensive reports and visualizations to communicate findings to key collaborators; present insights and recommendations to cross-functional teams, including product development, quality assurance, and customer service.
• Collaborate with regulatory and compliance teams to ensure adherence to healthcare standards and regulations.
• Find opportunities for product enhancements and process improvements based on data analysis; work with product complaint teams to implement changes and monitor their impact.
• Stay abreast of industry trends, emerging technologies, and standard methodologies in data science and healthcare analytics.
• Evaluate data to support product complaints.
• Work alongside software developers and software engineers to translate algorithms into commercially viable products and services.
• Work in technical teams on the development, deployment, and application of applied analytics, predictive analytics, and prescriptive analytics.
• Perform exploratory and targeted data analyses using descriptive statistics and other methods.
• Work with data engineers on data quality assessment, data cleansing, and data analytics.
• Generate reports, annotated code, and other project artifacts to document, archive, and communicate your work and outcomes.

Basic Qualifications:
• Master's degree and 1 to 3 years of data science experience with one or more analytic software tools or languages and data visualization tools, OR
• Bachelor's degree and 3 to 5 years of such experience, OR
• Diploma and 7 to 9 years of such experience.

Preferred Qualifications:
• Demonstrated skill in applied analytics, descriptive statistics, feature extraction, and predictive analytics on industrial datasets.
• Experience in statistical techniques and hypothesis testing, with regression analysis, clustering, and classification.
• Experience in analyzing time-series data for forecasting and trend analysis.
• Experience with the Databricks platform for data analytics.
• Experience working with healthcare data, including patient complaints, product feedback, and regulatory requirements.
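To make the machine-learning responsibility above concrete, here is a minimal, hedged sketch of classifying free-text complaints into categories with scikit-learn; the sample records and category labels are invented purely for illustration and are not Amgen data.

```python
# Minimal scikit-learn sketch: classify complaint text into issue categories.
# The example records and labels are invented purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

complaints = [
    "device stopped working after two uses",
    "packaging arrived damaged and unsealed",
    "injection site reaction reported by patient",
    "instructions for use were unclear",
]
labels = ["device_failure", "packaging", "adverse_event", "labeling"]

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=1)),
    ("model", LogisticRegression(max_iter=1000)),
])
clf.fit(complaints, labels)

# Predict the category of a new, unseen complaint.
print(clf.predict(["the device display went blank during use"]))
```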
Posted 2 weeks ago
10.0 - 15.0 years
30 - 45 Lacs
Mumbai, Navi Mumbai, Gurugram
Work from Office
Role Description: We are looking for a suitable candidate for a Data/Technical Architect opening in Data Management, preferably one who has worked in the Insurance or Banking and Financial Services domain and holds 10+ years of relevant experience. The candidate should be willing to take up the role of Senior Manager/Associate Director in the organization, based on overall experience.

Location: Mumbai and Gurugram
Relevant experience: 10+ years

Key Responsibilities:
• Provide technical leadership on data strategy and roadmap exercises, data architecture definition, business intelligence/data warehouse product selection, design, and implementation for the enterprise.
• Proven track record of successful implementations of Data Lake, Data Warehouse/Data Marts, and Data Lakehouse on a cloud data platform.
• Hands-on experience leading large-scale global data warehousing and analytics projects.
• Demonstrated industry leadership in the fields of database, data warehousing, or data sciences.
• Be accountable for creating the end-to-end solution design and development approach on the cloud platform, including sizing and TCO.
• Deep technical expertise in cloud data components, including but not limited to cloud storage (S3/ADLS/GCS), EMR/Databricks, Redshift/Synapse/BigQuery, Glue/Azure Data Factory/Data Fusion/Dataflow, Cloud Functions, EventBridge, etc.
• NoSQL understanding and use-case application: DynamoDB, Cosmos DB, or any other technology.
• Extensive experience creating reusable assets for data integration, transformation, auditing, and validation frameworks.
• Knowledge of scripting/programming languages: Python, Java, Scala, Go.
• Implementation and tuning experience with data warehousing platforms, including knowledge of data warehouse schema design, query tuning and optimization, and data migration and integration.
• Experience with requirements for the analytics presentation layer, including dashboards, reporting, and OLAP.
• Extensive experience in designing data architecture, data modeling, design, development, data migration, and data integration aspects of the SDLC.
• Participate in and/or lead design sessions, demos and prototype sessions, testing, and training workshops with business users and other IT associates.
• Experience designing new or enhancing existing architecture frameworks and implementing them in a cooperative and collaborative setting.
• Troubleshooting skills, ability to determine impacts, ability to resolve complex issues, and initiative in stressful situations.
• Significant contribution to business development activities.
• Strong oral and written communication and interpersonal skills.
• Working experience with Agile and Scrum methods.
• Develop and maintain documentation as needed.
• Support projects by providing SME knowledge to project teams in the areas of Enterprise Data Management.

Interested candidates, please share your CVs at mudesh.kumar.tpr@pwc.com
Posted 2 weeks ago
10.0 - 15.0 years
12 - 22 Lacs
Bengaluru
Hybrid
Role & responsibilities: Experienced Senior Data Engineer using Big Data and Google Cloud technologies to develop large-scale, on-cloud data processing pipelines and data warehouses, with 12 to 15 years of overall experience.
• 3 to 4 years of experience leading Data Engineer teams developing enterprise-grade data processing pipelines on multiple clouds such as GCP and AWS.
• Has led at least one project of medium to high complexity migrating ETL pipelines and data warehouses to the cloud.
• The latest 3 to 5 years of experience should be with premium consulting companies.
• In-depth hands-on expertise with Google and AWS cloud platform services, especially BigQuery, Dataform, Dataplex, and Redshift.
• Exceptional communication skills to converse equally well with data engineers, technology leadership, and business leadership.
• Ability to apply GCP knowledge to other cloud environments.
Posted 2 weeks ago
5.0 - 8.0 years
10 - 20 Lacs
Bengaluru
Hybrid
Role & responsibilities:
• Strong hands-on experience with multi-cloud (AWS, Azure, GCP) services such as GCP BigQuery, Dataform, and AWS Redshift.
• Proficient in PySpark and SQL for building scalable data processing pipelines.
• Knowledge of serverless technologies such as AWS Lambda and Google Cloud Functions.
• Experience with orchestration frameworks like Apache Airflow, Kubernetes, and Jenkins to manage and orchestrate data pipelines.
• Experience in developing and optimizing ETL/ELT pipelines and working on cloud data warehouse migration projects.
• Exposure to client-facing roles, with strong problem-solving and communication skills.
• Prior experience in consulting or working in a consulting environment is preferred.
Posted 2 weeks ago
9.0 - 12.0 years
11 - 14 Lacs
Hyderabad
Work from Office
ABOUT THE ROLE
Role Description: We are seeking a Data Solutions Architect with deep expertise in Biotech/Pharma to design, implement, and optimize scalable, high-performance data solutions that support enterprise analytics, AI-driven insights, and digital transformation initiatives. This role will focus on data strategy, architecture, governance, security, and operational efficiency, ensuring seamless data integration across modern cloud platforms. The ideal candidate will work closely with engineering teams, business stakeholders, and leadership to establish a future-ready data ecosystem, balancing performance, cost-efficiency, security, and usability. This position requires expertise in modern cloud-based data architectures, data engineering best practices, and Scaled Agile methodologies.

Roles & Responsibilities:
• Design and implement scalable, modular, and future-proof data architectures that support enterprise analytics and digital transformation initiatives.
• Develop enterprise-wide data frameworks that enable governed, secure, and accessible data across various business domains.
• Define data modeling strategies to support structured and unstructured data, ensuring efficiency, consistency, and usability across analytical platforms.
• Lead the development of high-performance data pipelines for batch and real-time data processing, integrating APIs, streaming sources, transactional systems, and external data platforms.
• Optimize query performance, indexing, caching, and storage strategies to enhance scalability, cost efficiency, and analytical capabilities.
• Establish data interoperability frameworks that enable seamless integration across multiple data sources and platforms.
• Drive data governance strategies, ensuring security, compliance, access controls, and lineage tracking are embedded into enterprise data solutions.
• Implement DataOps best practices, including CI/CD for data pipelines, automated monitoring, and proactive issue resolution, to improve operational efficiency.
• Lead Scaled Agile (SAFe) practices, facilitating Program Increment (PI) Planning, Sprint Planning, and Agile ceremonies, ensuring iterative delivery of enterprise data capabilities.
• Collaborate with business stakeholders, product teams, and technology leaders to align data architecture strategies with organizational goals.
• Act as a trusted advisor on emerging data technologies and trends, ensuring the enterprise adopts cutting-edge data solutions that provide competitive advantage and long-term scalability.

Must-Have Skills:
• Experience in data architecture, enterprise data management, and cloud-based analytics solutions.
• Well versed in the Biotech/Pharma domain, with a track record of solving complex problems there through data strategy.
• Expertise in Databricks, cloud-native data platforms, and distributed computing frameworks.
• Strong proficiency in modern data modeling techniques, including dimensional modeling, NoSQL, and data virtualization.
• Experience designing high-performance ETL/ELT pipelines and real-time data processing solutions.
• Deep understanding of data governance, security, metadata management, and access control frameworks.
• Hands-on experience with CI/CD for data solutions, DataOps automation, and infrastructure as code (IaC).
• Proven ability to collaborate with cross-functional teams, including business executives, data engineers, and analytics teams, to drive successful data initiatives.
• Strong problem-solving, strategic thinking, and technical leadership skills.
• Experience with SQL/NoSQL databases and vector databases for large language models.
• Experience with data modeling and performance tuning for both OLAP and OLTP databases.
• Experience with Apache Spark and Apache Airflow.
• Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.

Good-to-Have Skills:
• Experience with Data Mesh architectures and federated data governance models.
• Certification in cloud data platforms or enterprise architecture frameworks.
• Knowledge of AI/ML pipeline integration within enterprise data architectures.
• Familiarity with BI and analytics platforms for enabling self-service analytics and enterprise reporting.

Education and Professional Certifications:
• 9 to 12 years of experience in Computer Science, IT, or a related field.
• AWS Certified Data Engineer preferred.
• Databricks certification preferred.

Soft Skills:
• Excellent analytical and troubleshooting skills.
• Strong verbal and written communication skills.
• Ability to work effectively with global, virtual teams.
• High degree of initiative and self-motivation.
• Ability to manage multiple priorities successfully.
• Team-oriented, with a focus on achieving team goals.
• Ability to learn quickly, be organized, and be detail oriented.
• Strong presentation and public speaking skills.
Posted 2 weeks ago
4.0 - 9.0 years
15 - 30 Lacs
Bengaluru
Work from Office
Hi, greetings from Happiest Minds Technologies. We are currently hiring for the positions below and looking for immediate joiners.

1. Azure Databricks - 5 to 10 Yrs - Bangalore
As a Senior Azure Data Engineer, you will leverage Azure technologies to drive data transformation, analytics, and machine learning. You will design scalable Databricks data pipelines using PySpark, transforming raw data into actionable insights. Your role includes building, deploying, and maintaining machine learning models using MLlib or TensorFlow while optimizing cloud data integration from Azure Blob Storage, Data Lake, and SQL/NoSQL sources. You will execute large-scale data processing using Spark Pools, fine-tuning configurations for efficiency. The ideal candidate holds a Bachelor's or Master's in Computer Science, Data Science, or a related field, with 7+ years in data engineering and 3+ years specializing in Azure Databricks, PySpark, and Spark Pools. Proficiency in Python, PySpark, Pandas, NumPy, SciPy, Spark SQL, DataFrames, RDDs, Delta Lake, Databricks Notebooks, and MLflow is required, along with hands-on experience in Azure Data Lake, Blob Storage, and Synapse Analytics.

2. Azure Data Engineer - 2 to 4 years - Bangalore
We are seeking a skilled and motivated Azure Data Engineer to join our dynamic team. The ideal candidate will have hands-on experience with Microsoft Azure cloud services and data engineering, and a strong background in designing and implementing scalable data solutions.
Responsibilities:
• Data engineering & pipeline development: design, implement, and maintain ETL processes using ADF and ADB; create and manage views in ADB and SQL for efficient data access; optimize SQL queries for large datasets and high performance; conduct end-to-end testing and impact analysis on data pipelines.
• Optimization & performance tuning: identify and resolve bottlenecks in data processing; optimize SQL queries and Delta tables for fast data processing (a minimal Delta upsert sketch follows this posting).
• Data sharing & integration: implement Delta Share, SQL endpoints, and other data sharing methods; use Delta tables for efficient data sharing and processing.
• API integration & development: integrate external systems through Databricks Notebooks and build scalable solutions; experience in building APIs (good to have).
• Collaboration & documentation: collaborate with teams to understand requirements and design solutions; provide documentation for data processes and architectures.
Qualifications:
• Strong experience with Azure Data Factory (ADF) and Azure Databricks (ADB).
• Proficient in SQL, with experience in query optimization, view creation, and working with Delta tables.
• Hands-on experience with end-to-end testing and impact analysis for data pipelines.
• Solid understanding of data sharing approaches such as Delta Share, SQL endpoints, etc.
• Familiarity with connecting to APIs using Databricks Notebooks.
• Experience with cloud data architecture and big data technologies.
• Ability to troubleshoot and resolve complex data issues.
• Knowledge of version control, deployment, and CI/CD practices for data pipelines.
Plus to have:
• Experience with CI/CD processes for deploying code and automating data pipeline deployments.
• Knowledge of building APIs for data sharing and integration, enabling seamless communication between systems and platforms.

3. Azure Databricks Architect - 10 to 15 Yrs - Bangalore
Key Responsibilities:
• Architect and design end-to-end data solutions on Azure, with a focus on Databricks.
• Lead data architecture initiatives, ensuring alignment with best practices and business objectives.
• Collaborate with stakeholders to define data strategies, architectures, and roadmaps.
• Migrate and transform data from Oracle to Azure Data Lake.
• Ensure data solutions are secure, reliable, and scalable.
• Provide technical leadership and mentorship to junior team members.
Required Skills:
• Extensive experience with Azure data services, including Azure Data Factory and Azure SQL Data Warehouse.
• Deep expertise in Databricks, including Spark and Delta Lake.
• Strong understanding of data architecture principles and best practices.
• Proven track record of leading large-scale data projects and initiatives.
• Design data integration strategies, ensuring seamless integration between Azure services and on-premises/cloud applications.
• Optimize performance and cost efficiency for Databricks clusters, data pipelines, and storage systems.
• Monitor and manage cloud resources to ensure high availability, performance, and scalability.
• Experience in setting up and configuring Azure DevOps.
• Excellent communication and collaboration skills.

If you are interested, please share your updated profile with the information below to reddemma.n@happiestminds.com for further consideration:
• Are you willing to work from the office 4 days a week (yes/no)?
• Total experience
• Current CTC
• Expected CTC
• Notice period
• Offer in hand
• Reason for change
• Current location
• Preferred location
• Are you holding any offer? If yes, what is the reason for looking for an alternate opportunity?

Regards,
Reddemma
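As referenced in posting 2 above, here is a minimal PySpark sketch of a Delta table upsert (MERGE) followed by compaction; the table name, source path, and key column are illustrative assumptions, and the OPTIMIZE step assumes a Databricks runtime.

```python
# Minimal Delta Lake sketch: upsert (MERGE) a batch of changed rows into a
# Delta table, then compact it. Table name, path, and key column are assumed.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-upsert-demo").getOrCreate()

# Incoming changes, e.g. the latest extract from the source system.
updates = spark.read.format("parquet").load("/mnt/raw/customers_changes/")

target = DeltaTable.forName(spark, "silver.customers")

(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()        # overwrite existing rows with new values
    .whenNotMatchedInsertAll()     # insert rows seen for the first time
    .execute()
)

# Compact small files to keep reads fast (Databricks OPTIMIZE).
spark.sql("OPTIMIZE silver.customers")
```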
Posted 2 weeks ago
5.0 - 9.0 years
18 - 30 Lacs
Bengaluru
Hybrid
This position is in the Engineering team under the Digital Experience organization. We drive the first mile of the customer experience through personalization of offers and content. We are currently on the lookout for a smart, highly driven engineer. You will be part of a team focused on building and managing solutions and pipelines using marketing technology stacks. You will also be expected to identify and implement improvements, including optimizing data delivery and automating processes/pipelines. The incumbent is also expected to partner with various stakeholders and bring scientific rigor to designing and developing high-quality solutions. Candidates must have excellent verbal and written communication skills and be comfortable working in an entrepreneurial, startup environment within a larger company. Epsilon transforms marketing with 1 View, 1 Vision and 1 Voice.

Brief Description of Role:

Data Management:
• Experience with both structured and unstructured data.
• Experience building data and CI/CD pipelines.
• Experience working on AdTech or MarTech technologies is an added advantage.
• Experience in relational and non-relational databases and SQL (NoSQL is a plus).
• Hands-on experience building ETL workflows/pipelines on large volumes of data.
• Good understanding of data modeling, data warehouse, and data catalog concepts and tools.
• Able to identify, join, explore, and examine data from multiple disparate sources and formats.
• Ability to reduce large quantities of unstructured or formless data into a form in which it can be analyzed.
• Ability to deal with data imperfections such as missing values, outliers, inconsistent formatting, etc.
• Collaborate with other members of the team to ensure high-quality deliverables.
• Learning and implementing the latest design patterns in data engineering.

Development:
• Ability to write code in programming languages such as Python and shell script on Linux.
• Familiarity with development methodologies such as Agile/Scrum.
• Love of learning new technologies, keeping abreast of the latest technologies within cloud architecture, and driving the organization to adopt emerging best practices.
• Good knowledge of working in UNIX/Linux systems.

Qualifications:
• Bachelor's degree in computer science with 5+ years of similar experience.
• Tech stack: Python, SQL, scripting language (preferably JavaScript).
• Experience with or knowledge of Adobe Experience Platform (RT-CDP/AEP).
• Experience working in cloud platforms (GCP or AWS).
• Familiarity with automated unit/integration test frameworks.
• Good written and spoken communication skills; team player.
• Strong analytic thought process and ability to interpret findings.
Posted 2 weeks ago
5.0 - 10.0 years
6 - 15 Lacs
Bengaluru
Work from Office
Urgent hiring: Azure Data Engineer with a leading management consulting company at the Bangalore location.
• Strong expertise in Databricks and PySpark while dealing with batch processing or live (streaming) data sources.
• 4+ relevant years of experience in Databricks and PySpark/Scala; 7+ total years of experience.
• Good at data modelling and designing.
• CTC: hike shall be considered on current/last drawn pay.
Apply: rohita.robert@adecco.com
• Has worked on real data challenges and handled high volume, velocity, and variety of data.
• Excellent analytical and problem-solving skills, with willingness to take ownership and resolve technical challenges.
• Contributes to community-building initiatives like CoE and CoP.
Mandatory skills: Azure (master); ELT; data modeling; data integration and ingestion; data manipulation and processing; GitHub, GitHub Actions, Azure DevOps; Data Factory, Databricks, SQL DB, Synapse, Stream Analytics, Glue, Airflow, Kinesis, Redshift, SonarQube, PyTest.
Posted 2 weeks ago
6.0 - 11.0 years
11 - 21 Lacs
Kolkata, Pune, Chennai
Work from Office
Role & responsibilities: Data Engineer with expertise in AWS, Databricks, and PySpark.
Posted 2 weeks ago
6.0 - 11.0 years
15 - 27 Lacs
Hyderabad
Hybrid
Job Description for Consultant - Data Engineer

About Us: Chryselys is a Pharma Analytics & Business consulting company that delivers data-driven insights leveraging AI-powered, cloud-native platforms to achieve high-impact transformations. We specialize in digital technologies and advanced data science techniques that provide strategic and operational insights.

Who we are:
• People - Our team of industry veterans, advisors, and senior strategists have diverse backgrounds and have worked at top-tier companies.
• Quality - Our goal is to deliver the value of a big five consulting company without the big five cost.
• Technology - Our solutions are business-centric and built on cloud-native technologies.

Key Responsibilities and Core Competencies:
• You will be responsible for managing and delivering multiple Pharma projects.
• Leading a team of at least 8 members, resolving their technical and business-related problems and other queries.
• Responsible for client interaction: requirements gathering, creating required documents, development, and quality assurance of the deliverables.
• Good collaboration with onshore and senior colleagues.
• Should have a fair understanding of data capabilities (data management, data quality, master and reference data).
• Exposure to project management methodologies, including Agile and Waterfall.
• Experience working on RFPs would be a plus.

Required Technical Skills:
• Proficient in Python, PySpark, and SQL.
• Extensive hands-on experience in big data processing and cloud technologies like AWS and Azure services, Databricks, etc.
• Strong experience working with cloud data warehouses like Snowflake, Redshift, Azure, etc.
• Good experience in ETL, data modelling, and building ETL pipelines.
• Conceptual knowledge of relational database technologies, data lakes, lakehouses, etc.
• Sound knowledge of data operations, quality, and data governance.

Preferred Qualifications:
• Bachelor's or master's degree in Engineering/MCA or equivalent.
• 6-13 years of experience as a Data Engineer, with at least 2 years managing medium to large scale programs.
• Minimum 5 years of Pharma and Life Science domain exposure in IQVIA, Veeva, Symphony, IMS, etc.
• High motivation, good work ethic, maturity, self-organization, and personal initiative.
• Ability to work collaboratively and provide support to the team.
• Excellent written and verbal communication skills.
• Strong analytical and problem-solving skills.

Location: Preferably Hyderabad, India
Posted 2 weeks ago
4.0 - 9.0 years
6 - 16 Lacs
Coimbatore
Work from Office
Position Name: Data Engineer
Location: Coimbatore (hybrid, 3 days per week)
Work Shift Timing: 1.30 pm to 10.30 pm (IST)
Mandatory Skills: Hadoop, Spark, Python, Databricks
Good to have: Java/Scala

The Role:
• Designing and building optimized data pipelines using cutting-edge technologies in a cloud environment to drive analytical insights.
• Constructing infrastructure for efficient ETL processes from various sources and storage systems.
• Leading the implementation of algorithms and prototypes to transform raw data into useful information.
• Architecting, designing, and maintaining database pipeline architectures, ensuring readiness for AI/ML transformations.
• Creating innovative data validation methods and data analysis tools.
• Ensuring compliance with data governance and security policies.
• Interpreting data trends and patterns to establish operational alerts.
• Developing analytical tools, programs, and reporting mechanisms.
• Conducting complex data analysis and presenting results effectively.
• Preparing data for prescriptive and predictive modeling.
• Continuously exploring opportunities to enhance data quality and reliability.
• Applying strong programming and problem-solving skills to develop scalable solutions.

Requirements:
• Experience with Big Data technologies (Hadoop, Spark, NiFi, Impala).
• Hands-on experience designing, building, deploying, testing, maintaining, monitoring, and owning scalable, resilient, and distributed data pipelines.
• High proficiency in Scala/Java and Spark for applied large-scale data processing.
• Expertise with big data technologies, including Spark, Data Lake, and Hive.
• Solid understanding of batch and streaming data processing techniques (a minimal Kafka streaming sketch follows this posting).
• Proficient knowledge of the data lifecycle management process, including data collection, access, use, storage, transfer, and deletion.
• Expert-level ability to write complex, optimized SQL queries across extensive data volumes.
• Experience with HDFS, NiFi, and Kafka.
• Experience with Apache Ozone, Delta tables, Databricks, Axon (Kafka), Spring Batch, and Oracle DB.
• Familiarity with Agile methodologies.
• Obsession with service observability, instrumentation, monitoring, and alerting.
• Knowledge or experience of architectural best practices for building data lakes.

Interested candidates can share their resume at Neesha1@damcogroup.com
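To illustrate the batch-and-streaming requirement above, here is a minimal Spark Structured Streaming sketch that reads from Kafka; the broker address, topic, schema, and output paths are assumptions for the example, and the Kafka source requires the spark-sql-kafka connector on the cluster.

```python
# Minimal Spark Structured Streaming sketch: consume a Kafka topic and append
# the parsed events to a parquet sink. Broker, topic, schema, and paths are
# illustrative assumptions only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("kafka-stream-demo").getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_time", TimestampType()),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")   # assumed broker
    .option("subscribe", "orders.events")                # assumed topic
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key/value as bytes; parse the JSON payload into columns.
events = (
    raw.select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream.format("parquet")
    .option("path", "/tmp/streams/orders_events")
    .option("checkpointLocation", "/tmp/checkpoints/orders_events")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```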
Posted 2 weeks ago
4.0 - 6.0 years
6 - 8 Lacs
Pune
Work from Office
Role Purpose: Consultants are expected to complete specific tasks as part of a consulting project with minimal supervision. They will start to build core areas of expertise and will contribute to client projects, typically involving in-depth analysis, research, supporting solution development, and being a successful communicator. The Consultant must achieve high personal billability.

Responsibilities: As a Developer:
• Analyze, design, and develop components, tools, and custom features using Databricks and StreamSets as per business needs.
• Analyze, create, and develop the technical design to determine business functional and non-functional requirements and processes, and review them with the technology leads and architects.
• Work collaboratively with all teams as required to build a data model and set things up in Databricks and StreamSets as appropriate to transform the data and transfer it as needed.
• Develop solutions to publish/subscribe to Kafka topics.
Posted 2 weeks ago
10.0 - 12.0 years
13 - 18 Lacs
Noida, Indore, Bengaluru
Work from Office
Primary Tool & Expertise: Power BI and semantic modelling
Key Project Focus: Leading the migration of legacy reporting systems (Cognos or MicroStrategy) to Power BI solutions.

Core Responsibilities:
• Build and develop optimized semantic models, metrics, and complex reports and dashboards.
• Work closely with business analysts and BI teams to help business teams drive improvement in key business metrics and customer experience.
• Responsible for timely, quality, and successful deliveries.
• Share knowledge and experience within the team and other groups in the organization.
• Lead teams of BI engineers, translate designed solutions to them, review their work, and provide guidance.
• Manage client communications and deliverables.

Roles and Responsibilities:
Core BI skills: Power BI (semantic modelling, DAX, Power Query, Power BI Service), data warehousing, data modeling, data visualization, SQL.

Data warehouse, database & querying:
• Strong skills in databases (Oracle / MySQL / DB2 / Postgres) and expertise in writing SQL queries.
• Experience with cloud-based data intelligence platforms like Databricks and Snowflake.
• Strong understanding of data warehousing and data modelling concepts and principles.
• Strong skills and experience in creating semantic models in Power BI or similar tools.

Additional BI & data skills (good to have):
• Certifications in Power BI and any data platform.
• Experience in other tools like MicroStrategy and Cognos.
• Proven experience in migrating existing BI solutions to Power BI or other modern BI platforms.
• Experience with the broader Power Platform (Power Automate, Power Apps) to create integrated solutions.
• Knowledge of and experience with Power BI admin features such as versioning, usage reports, capacity planning, and creation of deployment pipelines.
• Sound knowledge of various forms of data analysis and presentation methodologies.
• Experience in formal project management methodologies.
• Exposure to multiple BI tools is desirable.
• Experience with Generative BI implementation.
• Working knowledge of scripting languages like Perl, Shell, and Python is desirable.
• Exposure to one of the cloud providers: AWS / Azure / GCP.

Soft skills & business acumen:
• Exposure to multiple business domains (e.g., Insurance, Reinsurance, Retail, BFSI, Healthcare, Telecom) is desirable.
• Exposure to the complete SDLC.
• Out-of-the-box thinker, not limited to the work done in the projects.
• Capable of working as an individual contributor and within a team.
• Good communication, problem-solving, and interpersonal skills.
• Self-starter and resourceful, skilled in identifying and mitigating risks.
Posted 2 weeks ago
7.0 - 12.0 years
15 - 30 Lacs
Bengaluru
Work from Office
Position: Senior Azure Data Engineer (immediate joiners only)
Location: Bangalore
Mode of Work: Work from Office
Experience: 7 years of relevant experience
Job Type: Full time (on roll)

Job Description - Roles and Responsibilities:
The Data Engineer will work on data engineering projects for various business units, focusing on delivering complex data management solutions by leveraging industry best practices. They work with the project team to build the most efficient data pipelines and data management solutions that make data easily available to consuming applications and analytical solutions. A Data Engineer is expected to possess strong technical skills.

Key Characteristics:
• Technology champion who constantly pursues skill enhancement and has an inherent curiosity to understand work from multiple dimensions.
• Interest and passion in Big Data technologies, and appreciation of the value an effective data management solution can bring.
• Has worked on real data challenges and handled high volume, velocity, and variety of data.
• Excellent analytical and problem-solving skills, with willingness to take ownership and resolve technical challenges.
• Contributes to community-building initiatives like CoE and CoP.

Mandatory skills: Azure (master); ELT; data modeling; data integration and ingestion; data manipulation and processing; GitHub, GitHub Actions, Azure DevOps; Data Factory, Databricks, SQL DB, Synapse, Stream Analytics, Glue, Airflow, Kinesis, Redshift, SonarQube, PyTest.

Optional skills: Experience in project management and running a scrum team; experience working with BPC and Planning; exposure to working with an external technical ecosystem; MkDocs documentation.

Interested candidates, kindly share your CV and the details below to usha.sundar@adecco.com:
1) Present CTC (fixed + VP)
2) Expected CTC
3) Number of years of experience
4) Notice period
5) Offer in hand
6) Reason for change
7) Present location
Posted 2 weeks ago
4.0 - 9.0 years
0 - 1 Lacs
Hyderabad
Work from Office
Job Title: Software Engineer - Data Engineer
Position: Software Engineer
Experience: 4-9 years
Category: Software Development / Engineering
Shift Timings: 1:00 pm to 10:00 pm
Main location: Hyderabad
Work Type: Work from office
Notice Period: 0-30 days
Skills: Python, PySpark, Databricks
Employment Type: Full time

• Bachelor's in Computer Science, Computer Engineering, or a related field.

Required qualifications to be successful in this role - must-have skills:
• 3+ years of development experience with Spark (PySpark), Python, and SQL.
• Extensive knowledge of building data pipelines.
• Hands-on experience with Databricks development.
• Strong experience developing on Linux OS.
• Experience with scheduling and orchestration (e.g. Databricks Workflows, Airflow, Prefect, Control-M).

Good to have skills:
• Solid understanding of distributed systems, data structures, and design principles.
• Agile development methodologies (e.g. SAFe, Kanban, Scrum).
• Comfortable communicating with teams via showcases/demos.
• Play a key role in establishing and implementing migration patterns for the Data Lake Modernization project.
• Actively migrate use cases from our on-premises data lake to Databricks on GCP.
• Collaborate with Product Management and business partners to understand use case requirements and reporting.
• Adhere to internal development best practices/lifecycle (e.g. testing, code reviews, CI/CD, documentation).
• Document and showcase feature designs/workflows.
• Participate in team meetings and discussions around product development.
• Stay up to date on the latest industry trends and design patterns.
• 3+ years of experience with Git.
• 3+ years of experience with CI/CD (e.g. Azure Pipelines).
• Experience with streaming technologies, such as Kafka and Spark.
• Experience building applications on Docker and Kubernetes.
• Cloud experience (e.g. Azure, Google).

Interested candidates can drop their resume at kalyan.v@talent21.in
Posted 2 weeks ago
3.0 - 6.0 years
20 - 25 Lacs
Bengaluru
Remote
Job Description: We are looking for a talented and motivated Data Analyst / BI Developer with 3-6 years of experience to join our team. The ideal candidate will have a strong background in SQL, experience with dashboard creation using Tableau, and hands-on knowledge of AWS Redshift and Databricks. A problem-solver with excellent solution-finding abilities and a proactive, independent work ethic is essential. As a key contributor to the team, you will work with various stakeholders to deliver actionable insights and drive data-driven decision-making within the organization. A strong understanding of US healthcare is a plus.

Key Responsibilities:
• Develop, design, and maintain dashboards and reports using Tableau to support business decision-making.
• Write and optimize complex SQL queries to extract, manipulate, and analyze data from multiple sources.
• Collaborate with cross-functional teams to understand business needs and translate them into effective data solutions.
• Work with AWS Redshift and Databricks for data extraction, transformation, and loading (ETL) processes.
• Proactively identify and resolve data issues, acting as a solution finder to overcome challenges and drive improvements.
• Work independently, taking ownership of tasks and ensuring high-quality deliverables within deadlines.
• Be a strong team player, contributing to team knowledge sharing and fostering a collaborative environment.
• Apply knowledge of US healthcare systems to help build relevant data solutions and insights.

Required Skills & Qualifications:
• Minimum 3 years of experience in data analysis, business intelligence, or related roles.
• Strong expertise in SQL for data querying and manipulation.
• Extensive experience creating dashboards and reports using Tableau.
• Hands-on experience working with AWS Redshift and Databricks.
• Proven problem-solving skills with a focus on providing actionable data solutions.
• Self-motivated and able to work independently, while being a proactive team player.
• Experience with or a strong understanding of US healthcare systems and data-related needs.
• Excellent communication skills with the ability to work across different teams and stakeholders.

Desired Skills (Nice to Have):
• Familiarity with other BI tools or cloud platforms.
• Experience in healthcare data analysis or healthcare analytics.
Posted 2 weeks ago
3.0 - 8.0 years
7 - 11 Lacs
Hyderabad
Work from Office
Overview: The primary focus is to lead development work within the Azure Data Lake environment and other related ETL technologies, with responsibility for on-time and on-budget delivery, satisfying project requirements while adhering to enterprise architecture standards. This role also has L3 responsibilities for ETL processes.

Responsibilities:
• Delivery of key Enterprise Data Warehouse and Azure Data Lake projects within time and budget.
• Contribute to solution design and build to ensure scalability, performance, and reuse of data and other components.
• Ensure on-time and on-budget delivery that satisfies project requirements, while adhering to enterprise architecture standards.
• Possess strong problem-solving abilities, with a focus on managing to business outcomes through collaboration with multiple internal and external parties.
• Enthusiastic, willing, and able to learn and continuously develop skills and techniques; enjoys change and seeks continuous improvement.
• A clear communicator, both written and verbal, with good presentational skills, fluent and proficient in English.
• Customer focused and a team player.

Qualifications and Experience:
• Bachelor's degree in Computer Science, MIS, Business Management, or a related field.
• 3+ years of experience in Information Technology.
• 1+ years of experience with Azure Data Lake.

Technical Skills:
• Proven experience in development activities on Data, BI, or Analytics projects.
• Solutions delivery experience: knowledge of the system development lifecycle, integration, and sustainability.
• Strong knowledge of Teradata architecture and SQL.
• Good knowledge of Azure Data Factory or Databricks.
• Knowledge of Presto / Denodo / Infoworks is desirable.
• Knowledge of data warehousing concepts and data catalog tools (Alation).
Posted 2 weeks ago