
3311 Big Data Jobs - Page 24

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

2.0 - 5.0 years

5 - 8 Lacs

Gurugram

Work from Office

- Programming languages: Python, Scala
- Machine learning frameworks: scikit-learn, XGBoost, TensorFlow, Keras, PyTorch, spaCy, Gensim, Stanford NLP, NLTK, OpenCV, Spark MLlib; hands-on experience with machine learning algorithms is good to have
- Scheduling: Airflow
- Big data / streaming / queues: any one of Apache Spark, Apache NiFi, Apache Kafka, RabbitMQ
- Databases: MySQL, MongoDB/Redis/DynamoDB, Hive
- Source control: Git
- Cloud: AWS
- Build and deployment: Jenkins, Docker, Docker Swarm, Kubernetes
- BI tool: QuickSight preferred, otherwise any BI tool (must have)
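Since the posting pairs Airflow scheduling with Spark-based pipelines, here is a minimal sketch of how the two are commonly combined, assuming Airflow 2.x with spark-submit available on the worker; the DAG name and script path are invented for illustration:

from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

# Hypothetical nightly batch DAG; "etl_nightly" and the script path are placeholders.
with DAG(
    dag_id="etl_nightly",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",  # newer Airflow versions use the "schedule" argument
    catchup=False,
) as dag:
    run_spark_job = BashOperator(
        task_id="run_spark_job",
        # spark-submit is the standard launcher for a PySpark script
        bash_command="spark-submit /opt/jobs/clean_events.py",
    )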

Posted 2 weeks ago

Apply

6.0 - 10.0 years

8 - 12 Lacs

Navi Mumbai

Work from Office

Title: Lead Data Scientist (Python)

Required technical skillset:
- Languages: Python, PySpark
- Frameworks: scikit-learn, TensorFlow, Keras, PyTorch
- Libraries: NumPy, Pandas (DataFrame), Matplotlib, SciPy, boto3
- Databases: relational (PostgreSQL), NoSQL (MongoDB)
- Cloud: AWS cloud platforms
- Other tools: Jenkins, Bitbucket, JIRA, Confluence

A machine learning engineer is responsible for designing, implementing, and maintaining machine learning systems and algorithms that allow computers to learn from data and make predictions or decisions. The role typically involves working with data scientists and software engineers to build and deploy machine learning models in applications such as natural language processing, computer vision, and recommendation systems.

Key responsibilities:
- Collect and preprocess large volumes of data, cleaning it and transforming it into a format usable by machine learning models.
- Design and build machine learning models and algorithms using techniques such as supervised and unsupervised learning, deep learning, and reinforcement learning.
- Evaluate model performance using metrics such as accuracy, precision, recall, and F1 score.
- Deploy machine learning models in production environments and integrate them into existing systems using CI/CD pipelines and AWS SageMaker.
- Monitor model performance and make adjustments as needed to improve accuracy and efficiency.
- Work closely with software engineers, product managers, and other stakeholders to ensure models meet business requirements and deliver value to the organization.

Requirements and skills:
- Mathematics and statistics: a strong foundation is essential, including linear algebra, calculus, probability, and statistics, to understand the underlying principles of machine learning algorithms.
- Programming: proficiency in Python, with the ability to write efficient, scalable, and maintainable code for machine learning models and algorithms.
- Machine learning techniques: a deep understanding of supervised, unsupervised, and reinforcement learning, plus familiarity with model types such as decision trees, random forests, neural networks, and deep learning.
- Data analysis and visualization: the ability to analyze and manipulate large data sets, with familiarity in data cleaning, transformation, and visualization techniques to identify patterns and insights.
- Deep learning frameworks: familiarity with TensorFlow, PyTorch, and Keras, and the ability to build and train deep neural networks for various applications.
- Big data technologies: experience with Hadoop, Spark, and NoSQL databases, with familiarity in distributed computing and parallel processing to handle large data sets.
- Software engineering: a good understanding of version control, testing, and debugging, with working knowledge of tools such as Git, Jenkins, and Docker.
- Communication and collaboration: the ability to work effectively with cross-functional teams such as data scientists, software developers, and business stakeholders.
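Because the posting highlights model evaluation with accuracy, precision, recall, and F1, here is a minimal scikit-learn sketch of that step; the labels and predictions are toy values, not data from the posting:

from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Toy ground-truth labels and model predictions, for illustration only.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("f1       :", f1_score(y_true, y_pred))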

Posted 2 weeks ago

Apply

5.0 - 8.0 years

9 - 14 Lacs

Mumbai

Work from Office

Role purpose: support process delivery by ensuring daily performance of the Production Specialists, resolving technical escalations, and developing technical capability within the Production Specialists.

Responsibilities:
- Oversee and support the process by reviewing daily transactions on performance parameters.
- Review the performance dashboard and the scores for the team.
- Support the team in improving performance parameters by providing technical support and process guidance.
- Record, track, and document all queries received, the problem-solving steps taken, and all successful and unsuccessful resolutions.
- Ensure standard processes and procedures are followed to resolve all client queries.
- Resolve client queries within the SLAs defined in the contract.
- Develop an understanding of the process/product so team members can improve client interaction and troubleshooting.
- Document and analyze call logs to spot recurring trends and prevent future problems.
- Identify red flags and escalate serious client issues to the team leader in cases of untimely resolution.
- Ensure all product information and disclosures are given to clients before and after call/email requests.
- Avoid legal challenges by monitoring compliance with the service agreement.
- Handle technical escalations through effective diagnosis and troubleshooting of client queries.
- Manage and resolve technical roadblocks/escalations per SLA and quality requirements; if unable to resolve an issue, escalate it to TA & SES in a timely manner.
- Provide product support and resolution to clients by performing question diagnosis and guiding users through step-by-step solutions.
- Troubleshoot all client queries in a user-friendly, courteous, and professional manner.
- Offer alternative solutions to clients (where appropriate) with the objective of retaining the customer's and client's business.
- Organize ideas and effectively communicate oral messages appropriate to listeners and situations.
- Follow up and make scheduled callbacks to customers to record feedback and ensure compliance with contract SLAs.
- Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client.
- Mentor and guide Production Specialists on improving technical knowledge.
- Collate trainings to be conducted as triage to bridge the skill gaps identified through interviews with Production Specialists.
- Develop and conduct trainings (triages) within products for Production Specialists as per target, and inform the client about the triages being conducted.
- Undertake product trainings to stay current with product features, changes, and updates; enroll in product-specific and other trainings per client requirements/recommendations.
- Identify and document the most common problems and recommend appropriate resolutions to the team.
- Update job knowledge by participating in self-learning opportunities and maintaining personal networks.

Mandatory skills: Informatica MDM.
Experience: 5-8 years.

Posted 2 weeks ago

Apply

2.0 - 7.0 years

4 - 9 Lacs

Hyderabad

Work from Office

Job summary: Synechron is seeking a skilled PySpark Data Engineer to design, develop, and optimize data processing solutions leveraging modern big data technologies. In this role, you will lead efforts to build scalable data pipelines, support data integration initiatives, and work closely with cross-functional teams to enable data-driven decision-making. Your expertise will contribute to enhancing business insights and operational efficiency, positioning Synechron as a pioneer in adopting emerging data technologies.

Required software skills:
- PySpark (Apache Spark with Python) experience in developing data pipelines
- Apache Spark ecosystem knowledge
- Python programming (version 3.7 or higher)
- SQL and relational database management systems (e.g., PostgreSQL, MySQL)
- Cloud platforms (preferably AWS or Azure)
- Version control: Git
- Data workflow orchestration tools such as Apache Airflow
- Data management tools: SQL Developer or equivalent

Preferred software skills:
- Experience with Hadoop ecosystem components
- Knowledge of containerization (Docker, Kubernetes)
- Familiarity with data lake and data warehouse solutions (e.g., AWS S3, Redshift, Snowflake)
- Monitoring and logging tools (e.g., Prometheus, Grafana)

Overall responsibilities:
- Lead the design and implementation of large-scale data processing solutions using PySpark and related technologies
- Collaborate with data scientists, analysts, and business teams to understand data requirements and deliver scalable pipelines
- Mentor junior team members on best practices in data engineering and emerging technologies
- Evaluate new tools and methodologies to optimize data workflows and improve data quality
- Ensure data solutions are robust, scalable, and aligned with organizational data governance policies
- Stay informed on industry trends and technological advancements in big data and analytics
- Support production environment stability and performance tuning of data pipelines
- Drive innovative approaches to extract value from large and complex datasets

Technical skills (by category):
- Programming languages — required: Python (minimum 2 years of PySpark experience); preferred: Scala (for Spark), SQL, Bash scripting
- Databases/data management: relational databases (PostgreSQL, MySQL); distributed storage (HDFS, cloud object storage such as S3 or Azure Blob Storage); data warehousing platforms (Snowflake, Redshift preferred)
- Cloud technologies — required: experience deploying and managing data solutions on AWS or Azure; preferred: knowledge of cloud-native services such as EMR, Data Factory, or Azure Data Lake
- Frameworks and libraries: Apache Spark (PySpark); Airflow or similar orchestration tools; data processing frameworks (Kafka, Spark Streaming preferred)
- Development tools and methodologies: version control with Git; Agile management tools (Jira, Confluence); CI/CD pipelines (Jenkins, GitLab CI)
- Security: understanding of data security, access controls, and GDPR compliance in cloud environments

Experience requirements:
- 5+ years in data engineering, with hands-on PySpark experience
- Proven track record of developing, deploying, and maintaining scalable data pipelines
- Experience working with data lakes, data warehouses, and cloud data services
- Demonstrated leadership in projects involving big data technologies
- Experience mentoring junior team members and collaborating across teams
- Prior experience in the financial, healthcare, or retail sectors is beneficial but not mandatory

Day-to-day activities:
- Develop, optimize, and deploy big data pipelines using PySpark and related tools
- Collaborate with data analysts, data scientists, and business teams to define data requirements
- Conduct code reviews, troubleshoot pipeline issues, and optimize performance
- Mentor junior team members on best practices and emerging technologies
- Design solutions for data ingestion, transformation, and storage
- Evaluate new tools and frameworks for continuous improvement
- Maintain documentation, monitor system health, and ensure security compliance
- Participate in sprint planning, daily stand-ups, and project retrospectives to align priorities

Qualifications:
- Bachelor's or master's degree in Computer Science, Information Technology, or a related discipline
- Relevant industry certifications (e.g., AWS Data Analytics, GCP Professional Data Engineer) preferred
- Proven experience working with PySpark and big data ecosystems
- Strong understanding of the software development lifecycle and data governance standards
- Commitment to continuous learning and professional development in data engineering technologies

Professional competencies:
- Analytical mindset and problem-solving acumen for complex data challenges
- Effective leadership and team management skills
- Excellent communication skills tailored to technical and non-technical audiences
- Adaptability in fast-evolving technological landscapes
- Strong organizational skills to prioritize tasks and manage multiple projects
- Innovation-driven, with a passion for leveraging emerging data technologies
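As a flavor of the ingest-transform-store pipelines this role describes, here is a minimal PySpark sketch; the bucket paths, column names, and filter are assumptions, not details from the posting:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical ETL: read raw CSV, aggregate, write curated Parquet.
spark = SparkSession.builder.appName("orders_etl").getOrCreate()

orders = spark.read.csv("s3a://example-bucket/raw/orders.csv", header=True, inferSchema=True)

daily_totals = (
    orders
    .filter(F.col("status") == "COMPLETED")        # keep only completed orders
    .groupBy("order_date")
    .agg(F.sum("amount").alias("total_amount"))    # revenue per day
)

# Parquet is a typical columnar target for downstream analytics.
daily_totals.write.mode("overwrite").parquet("s3a://example-bucket/curated/daily_totals/")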

Posted 2 weeks ago

Apply

12.0 - 17.0 years

14 - 19 Lacs

Hyderabad

Work from Office

Responsibilities:
- Data engineering lead role for D&Ai data modernization (MDIP).
- The candidate must be flexible to work an alternative schedule: a traditional work week from Monday to Friday, or Tuesday to Saturday, or Sunday to Thursday, depending on the coverage requirements of the job. The candidate can work with their immediate supervisor to change the work schedule on a rotational basis depending on product and project requirements.
- Manage a team of data engineers and data analysts by delegating project responsibilities and managing their flow of work, as well as empowering them to realize their full potential.
- Design, structure, and store data in unified data models, linking them together to make the data reusable for downstream products.
- Manage and scale data pipelines from internal and external data sources to support new product launches and drive data quality across data products.
- Create reusable accelerators and solutions to migrate data from legacy data warehouse platforms such as Teradata to Azure Databricks and Azure SQL.
- Enable and accelerate standards-based development, prioritizing reuse of code and adopting test-driven development, unit testing, and test automation with end-to-end observability of data.
- Build and own the automation and monitoring frameworks that capture metrics and operational KPIs for data pipeline quality, performance, and cost.
- Collaborate with internal clients (product teams, sector leads, data science teams) and external partners (SI partners/data providers) to drive solutioning and clarify solution requirements.
- Evolve the architectural capabilities and maturity of the data platform by engaging with enterprise architects to build and support the right domain architecture for each application, following well-architected design standards.
- Define and manage SLAs for data products and processes running in production.
- Create documentation for learnings and knowledge transfer to internal associates.

Qualifications:
- 12+ years of overall technology experience, including at least 5+ years of hands-on software development, data engineering, and systems architecture.
- 8+ years of experience with data lakehouse, data warehousing, and data analytics tools.
- 6+ years of experience in SQL optimization and performance tuning on MS SQL Server, Azure SQL, or another popular RDBMS.
- 6+ years of experience in Python/PySpark/Scala programming on big data platforms such as Databricks.
- 4+ years of cloud data engineering experience in Azure or AWS; fluent with Azure cloud services. Azure Data Engineering certification is a plus.
- Experience with integration of multi-cloud services with on-premises technologies.
- Experience with data modelling, data warehousing, and building high-volume ETL/ELT pipelines.
- Experience with data profiling and data quality tools such as Great Expectations.
- Experience building/operating highly available, distributed systems for extraction, ingestion, and processing of large data sets.
- Experience with at least one business intelligence tool, such as Power BI or Tableau.
- Experience running and scaling applications on cloud infrastructure and containerized services such as Kubernetes.
- Experience with version control systems (ADO, GitHub) and CI/CD tools for DevOps automation and deployments.
- Experience with Azure Data Factory, Azure Databricks, and Azure Machine Learning tools.
- Experience with statistical/ML techniques is a plus.
- Experience building solutions in the retail or supply chain space is a plus.
- Understanding of metadata management, data lineage, and data glossaries is a plus.
- BA/BS in Computer Science, Math, Physics, or another technical field.
- Candidates are expected to be in the office at the assigned location at least 3 days a week, with in-office days coordinated with the immediate supervisor.

Skills, abilities, knowledge:
- Excellent communication skills, both verbal and written, along with the ability to influence and demonstrate confidence in communications with senior-level management.
- Proven track record of leading and mentoring data teams.
- Strong change manager; comfortable with change, especially that which arises through company growth.
- Ability to understand and translate business requirements into data and technical requirements.
- High degree of organization and ability to manage multiple, competing projects and priorities simultaneously.
- Positive and flexible attitude, adjusting to different needs in an ever-changing environment.
- Strong leadership, organizational, and interpersonal skills; comfortable managing trade-offs.
- Fosters a team culture of accountability, communication, and self-management.
- Proactively drives impact and engagement while bringing others along.
- Consistently attains/exceeds individual and team goals.
- Ability to lead others without direct authority in a matrixed environment.
- Comfortable working in a hybrid environment with teams consisting of contractors as well as FTEs spread across multiple PepsiCo locations.
- Domain knowledge in the CPG industry with a supply chain/GTM background is preferred.
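As one illustration of the Teradata-to-Databricks migration work mentioned above, a hedged PySpark sketch; the host, credentials, and table names are placeholders, and it assumes a Teradata JDBC driver on the cluster and Delta Lake support (as on Databricks):

from pyspark.sql import SparkSession

# Hypothetical one-table migration: read from Teradata over JDBC, land as a Delta table.
spark = SparkSession.builder.appName("teradata_to_delta").getOrCreate()

source = (
    spark.read.format("jdbc")
    .option("url", "jdbc:teradata://td-host.example.com/DATABASE=sales")
    .option("driver", "com.teradata.jdbc.TeraDriver")
    .option("dbtable", "sales.orders")
    .option("user", "etl_user")
    .option("password", "***")  # placeholder; use a secret scope in practice
    .load()
)

source.write.format("delta").mode("overwrite").saveAsTable("bronze.orders")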

Posted 2 weeks ago

Apply

2.0 - 6.0 years

6 - 10 Lacs

Gurugram

Work from Office

We're looking for a Big Data Engineer who can find creative solutions to tough problems. As a Big Data Engineer, you'll create and manage our data infrastructure and tools, including collecting, storing, processing, and analyzing our data and data systems. You know how to work quickly and accurately, using the best solutions to analyze massive data sets, and you know how to get results. You'll also make this data easily accessible across the company and usable by multiple departments.

Skillset required:
- Bachelor's degree or higher in Computer Science or a related field.
- A solid track record of data management, showing flawless execution and attention to detail.
- Strong knowledge of and experience with statistics.
- Programming experience, ideally in Python, Spark, Kafka, or Java, and a willingness to learn new programming languages to meet goals and objectives. Experience in C, Perl, JavaScript, or other programming languages is a plus.
- Knowledge of data cleaning, wrangling, visualization, and reporting, with an understanding of the best, most efficient use of the associated tools and applications. Experience in MapReduce is a plus.
- Deep knowledge of data mining, machine learning, natural language processing, or information retrieval.
- Experience processing large amounts of structured and unstructured data, including integrating data from multiple sources.
- Experience with machine learning toolkits such as H2O, SparkML, or Mahout.
- A willingness to explore new alternatives or options to solve data mining issues, combining industry best practices, data innovations, and your own experience to get the job done.
- Experience in production support and troubleshooting. You find satisfaction in a job well done and thrive on solving head-scratching problems.
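Since the posting names SparkML among the expected toolkits, here is a minimal sketch of a Spark ML pipeline; the feature columns and rows are invented toy data:

from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

# Toy binary-classification pipeline; columns and values are illustrative only.
spark = SparkSession.builder.appName("sparkml_demo").getOrCreate()

df = spark.createDataFrame(
    [(1.0, 0.5, 1), (2.0, 1.5, 0), (3.0, 0.2, 1), (4.0, 2.5, 0)],
    ["f1", "f2", "label"],
)

assembler = VectorAssembler(inputCols=["f1", "f2"], outputCol="features")
lr = LogisticRegression(featuresCol="features", labelCol="label")

model = Pipeline(stages=[assembler, lr]).fit(df)
model.transform(df).select("label", "prediction").show()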

Posted 2 weeks ago

Apply

2.0 - 6.0 years

3 - 7 Lacs

Gurugram

Work from Office

We are looking for a PySpark Developer who loves solving complex problems across a full spectrum of technologies. You will help ensure our technological infrastructure operates seamlessly in support of our business objectives.

Responsibilities:
- Develop and maintain data pipelines implementing ETL processes.
- Take responsibility for Hadoop development and implementation.
- Work closely with a data science team implementing data analytic pipelines.
- Help define data governance policies and support data versioning processes.
- Maintain security and data privacy, working closely with the Data Protection Officer internally.
- Analyse a vast number of data stores and uncover insights.

Skillset required:
- Ability to design, build, and unit test applications in PySpark.
- Experience with Python development and Python data transformations.
- Experience with SQL scripting on one or more platforms: Hive, Oracle, PostgreSQL, MySQL, etc.
- In-depth knowledge of Hadoop, Spark, and similar frameworks.
- Strong knowledge of data management principles.
- Experience normalizing/denormalizing data structures and developing tabular, dimensional, and other data models.
- Knowledge of YARN and cluster, executor, and cluster configuration (see the sketch below).
- Hands-on work with different file formats such as JSON, Parquet, and CSV.
- Experience with the CLI on Linux-based platforms.
- Experience analysing current ETL/ELT processes and defining and designing new ones.
- Experience analysing business requirements in a BI/analytics context and designing data models that transform raw data into meaningful insights.
- Knowledge of data visualization is good to have.
- Experience processing large amounts of structured and unstructured data, including integrating data from multiple sources.
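A sketch of the cluster-configuration and file-format handling the posting asks about, assuming a YARN deployment; every setting and path below is an illustrative placeholder, not tuning advice:

from pyspark.sql import SparkSession

# Illustrative executor settings for a YARN cluster; the numbers are placeholders.
spark = (
    SparkSession.builder
    .appName("format_demo")
    .master("yarn")                            # submit to a YARN cluster
    .config("spark.executor.instances", "4")
    .config("spark.executor.cores", "2")
    .config("spark.executor.memory", "4g")
    .getOrCreate()
)

# The same DataFrame API covers the common formats: read JSON, write Parquet and CSV.
events = spark.read.json("hdfs:///data/raw/events.json")  # hypothetical path
events.write.mode("overwrite").parquet("hdfs:///data/curated/events_parquet/")
events.write.mode("overwrite").option("header", True).csv("hdfs:///data/exports/events_csv/")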

Posted 2 weeks ago

Apply

1.0 - 3.0 years

9 - 13 Lacs

Pune

Work from Office

Overview: We are hiring an Associate Data Engineer to support our core data pipeline development efforts and gain hands-on experience with industry-grade tools like PySpark, Databricks, and cloud-based data warehouses. The ideal candidate is curious, detail-oriented, and eager to learn from senior engineers while contributing to the development and operationalization of critical data workflows.

Responsibilities:
- Assist in the development and maintenance of ETL/ELT pipelines using PySpark and Databricks under senior guidance.
- Support data ingestion, validation, and transformation tasks across Rating Modernization and Regulatory programs.
- Collaborate with team members to gather requirements and document technical solutions.
- Perform unit testing, data quality checks, and process monitoring activities (see the sketch below).
- Contribute to the creation of stored procedures, functions, and views.
- Support troubleshooting of pipeline errors and validation issues.

Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related discipline.
- 3+ years of experience in data engineering, or internships in data/analytics teams.
- Working knowledge of Python and SQL, and ideally PySpark.
- Understanding of cloud data platforms (Databricks, BigQuery, Azure/GCP).
- Strong problem-solving skills and eagerness to learn distributed data processing.
- Good verbal and written communication skills.

What we offer you:
- Transparent compensation schemes and comprehensive employee benefits, tailored to your location, ensuring your financial security, health, and overall wellbeing.
- Flexible working arrangements, advanced technology, and collaborative workspaces.
- A culture of high performance and innovation where we experiment with new ideas and take responsibility for achieving results.
- A global network of talented colleagues who inspire, support, and share their expertise to innovate and deliver for our clients.
- A Global Orientation program to kickstart your journey, followed by access to our Learning@MSCI platform, LinkedIn Learning Pro, and tailored learning opportunities for ongoing skills development.
- Multi-directional career paths that offer professional growth and development through new challenges, internal mobility, and expanded roles.
- An environment that nurtures a sense of inclusion, belonging, and connection, including eight Employee Resource Groups: All Abilities, Asian Support Network, Black Leadership Network, Climate Action Network, Hola! MSCI, Pride & Allies, Women in Tech, and Women's Leadership Forum.

At MSCI we are passionate about what we do, and we are inspired by our purpose: to power better investment decisions. You'll be part of an industry-leading network of creative, curious, and entrepreneurial pioneers. This is a space where you can challenge yourself, set new standards, and perform beyond expectations for yourself, our clients, and our industry.

MSCI is a leading provider of critical decision support tools and services for the global investment community. With over 50 years of expertise in research, data, and technology, we power better investment decisions by enabling clients to understand and analyze key drivers of risk and return and confidently build more effective portfolios. We create industry-leading research-enhanced solutions that clients use to gain insight into and improve transparency across the investment process.

MSCI Inc. is an equal opportunity employer. It is the policy of the firm to ensure equal employment opportunity without discrimination or harassment on the basis of race, color, religion, creed, age, sex, gender, gender identity, sexual orientation, national origin, citizenship, disability, marital and civil partnership/union status, pregnancy (including unlawful discrimination on the basis of a legally protected parental leave), veteran status, or any other characteristic protected by law. MSCI is also committed to working with and providing reasonable accommodations to individuals with disabilities. If you are an individual with a disability and would like to request a reasonable accommodation for any part of the application process, please email Disability.Assistance@msci.com and indicate the specifics of the assistance needed. Please note, this e-mail is intended only for individuals who are requesting a reasonable workplace accommodation; it is not intended for other inquiries.

To all recruitment agencies: MSCI does not accept unsolicited CVs/resumes. Please do not forward CVs/resumes to any MSCI employee, location, or website. MSCI is not responsible for any fees related to unsolicited CVs/resumes.

Note on recruitment scams: we are aware of recruitment scams where fraudsters impersonating MSCI personnel may try to elicit personal information from job seekers. Read our full note on careers.msci.com.
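Since the role involves data quality checks on PySpark/Databricks pipelines, a minimal sketch of such a check; the table name, column, and 1% threshold are assumptions for illustration:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical post-load validation: row-count and null-rate checks.
spark = SparkSession.builder.appName("dq_checks").getOrCreate()
df = spark.table("curated.ratings")  # placeholder table name

row_count = df.count()
null_ids = df.filter(F.col("instrument_id").isNull()).count()

assert row_count > 0, "load produced an empty table"
assert null_ids / row_count < 0.01, "more than 1% of rows missing instrument_id"
print(f"rows={row_count}, null instrument_id={null_ids}")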

Posted 2 weeks ago

Apply

8.0 - 13.0 years

10 - 15 Lacs

Pune

Work from Office

The Data Science Engineering team is looking for a Lead Data Analytics Engineer to join our team! You should be able to gather requirements, understand complex product, business, and engineering challenges, compose and prioritize research projects, and then build them in partnership with cloud engineers and architects, using the work of our data engineering team. You have deep SQL experience, an understanding of modern data stacks and technology, broad experience with data and all things data-related, and experience guiding a team through technical and design challenges. You will report to the Sr. Manager, Cloud Software Engineering, and be part of the larger Data Engineering team.

What your responsibilities will be:
- Solve and scale real-world big data challenges, bringing end-to-end analytics experience and a complex data story together with data models and reliable, applicable metrics.
- Build and deploy data science models using complex SQL, Python, DBT data modelling, and reusable visualization components (Power BI).
- Apply software engineering and complex data skills to solve needs at large scale.
- Lead and help develop a roadmap for the area and the team.
- Analyze fault tolerance, high availability, performance, and scale challenges, and solve them.
- Lead programs and collaborate with engineers, product managers, and technical program managers across teams.
- Understand the trade-offs between consistency, durability, and cost to build solutions that can meet the demands of growing services.
- Ensure the operational readiness of services and meet commitments to our customers regarding availability and performance.
- Manage end-to-end project plans and ensure on-time delivery; communicate status and the big picture to the project team and management.
- Work with business and engineering teams to identify scope, constraints, dependencies, and risks; identify risks and opportunities across the business and guide solutions.

What you'll need to be successful:
- Bachelor's engineering degree in Computer Science or a related field.
- 8+ years of enterprise-class experience with large-scale cloud solutions in data science/analytics and engineering projects.
- Expert-level experience in Power BI, SQL, and Snowflake.
- Experience with data visualization, Python, data modeling, and data storytelling.
- Experience architecting complex data marts using DBT.
- Ability to architect and build data solutions that apply data quality and anomaly detection best practices.
- Experience building production analytics on the Snowflake data platform.
- Experience with AWS and Snowflake tools and services.

Good to have:
- Snowflake certification, or relevant certifications in data warehousing or cloud platforms.
- Experience architecting complex data marts with DBT and Airflow.
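As a small illustration of the Snowflake-centric analytics work described, a sketch using the snowflake-connector-python package; the account, credentials, and table are placeholders, not details from the posting:

import snowflake.connector  # pip install snowflake-connector-python

# All connection details below are placeholders.
conn = snowflake.connector.connect(
    account="xy12345.us-east-1",
    user="ANALYTICS_USER",
    password="***",
    warehouse="ANALYTICS_WH",
    database="SALES",
    schema="PUBLIC",
)

cur = conn.cursor()
try:
    # A typical mart-style aggregate that could feed a Power BI visual.
    cur.execute(
        "SELECT region, SUM(amount) AS revenue "
        "FROM orders GROUP BY region ORDER BY revenue DESC"
    )
    for region, revenue in cur.fetchall():
        print(region, revenue)
finally:
    cur.close()
    conn.close()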

Posted 2 weeks ago

Apply

4.0 - 9.0 years

4 - 9 Lacs

Gurugram

Work from Office

As a Mid Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform. You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure. This role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges.

What you'll do:
- Design and develop data processing pipelines and analytics solutions using Databricks.
- Architect scalable and efficient data models and storage solutions on the Databricks platform.
- Collaborate with architects and other teams to migrate current solutions to Databricks.
- Optimize the performance and reliability of Databricks clusters and jobs to meet SLAs and business requirements.
- Apply best practices for data governance, security, and compliance on the Databricks platform.
- Mentor junior engineers and provide technical guidance.
- Stay current with emerging technologies and trends in data engineering and analytics to drive continuous improvement.

You'll be expected to have:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5 to 8 years of overall experience, with 2+ years designing and implementing data solutions on the Databricks platform.
- Proficiency in programming languages such as Python, Scala, or SQL.
- Strong understanding of distributed computing principles and experience with big data technologies such as Apache Spark.
- Experience with cloud platforms such as AWS, Azure, or GCP and their associated data services.
- Proven track record of delivering scalable and reliable data solutions in a fast-paced environment.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills, with the ability to work effectively in cross-functional teams.
- Experience with containerization technologies such as Docker and Kubernetes is good to have.
- Knowledge of DevOps practices for automated deployment and monitoring of data pipelines.

Posted 2 weeks ago

Apply

3.0 - 8.0 years

3 - 7 Lacs

Chennai

Work from Office

Azure Data Factory:
- Develop Azure Data Factory objects: ADF pipelines, configuration, parameters, variables, integration runtime
- Hands-on knowledge of ADF activities (such as Copy, Stored Procedure, and Lookup) and Data Flows
- ADF data ingestion and integration with other services

Azure Databricks:
- Experience with big data components such as Kafka, Spark SQL, DataFrames, and Hive DB implemented using Azure Databricks is preferred
- Azure Databricks integration with other services
- Read and write data in Azure Databricks
- Best practices in Azure Databricks

Synapse Analytics:
- Import data into Azure Synapse Analytics with and without using PolyBase
- Implement a data warehouse with Azure Synapse Analytics
- Query data in Azure Synapse Analytics
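For flavor, a sketch of triggering an ADF pipeline run programmatically with the azure-mgmt-datafactory SDK; the subscription, resource group, factory, pipeline name, and parameters are all placeholders:

from azure.identity import DefaultAzureCredential               # pip install azure-identity
from azure.mgmt.datafactory import DataFactoryManagementClient  # pip install azure-mgmt-datafactory

# All identifiers below are hypothetical.
credential = DefaultAzureCredential()
client = DataFactoryManagementClient(credential, "<subscription-id>")

run = client.pipelines.create_run(
    resource_group_name="rg-data",
    factory_name="adf-prod",
    pipeline_name="copy_sales_to_synapse",
    parameters={"load_date": "2024-01-01"},  # pipeline parameters, if any
)
print("started pipeline run:", run.run_id)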

Posted 2 weeks ago

Apply

5.0 - 10.0 years

22 - 25 Lacs

Hyderabad

Work from Office

Overview: Seeking an Associate Manager, Data Operations, to support our growing data organization. In this role, you will assist in maintaining data pipelines and the corresponding platforms (on-prem and cloud) while working closely with global teams on DataOps initiatives.
- Support the day-to-day operations of data pipelines, ensuring data governance, reliability, and performance optimization on Microsoft Azure. Hands-on experience with Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and real-time streaming architectures is preferred.
- Assist in ensuring the availability, scalability, automation, and governance of enterprise data pipelines supporting analytics, AI/ML, and business intelligence.
- Contribute to DataOps programs, aligning with business objectives, data governance standards, and enterprise data strategy.
- Help implement real-time data observability, monitoring, and automation frameworks to improve data reliability, quality, and operational efficiency.
- Support the development of governance models and execution roadmaps to enhance efficiency across Azure, AWS, GCP, and on-prem environments.
- Work on CI/CD integration, data pipeline automation, and self-healing capabilities to improve enterprise-wide DataOps processes.
- Collaborate with cross-functional teams to support and maintain next-generation Data & Analytics platforms while promoting an agile, high-performing DataOps culture.
- Assist in the adoption of Data & Analytics technology transformations, ensuring automation for proactive issue identification and resolution.
- Partner with cross-functional teams to support process improvements, best practices, and operational efficiencies within DataOps.

Responsibilities:
- Assist in the implementation and optimization of enterprise-scale data pipelines using Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and Azure Stream Analytics.
- Support data ingestion, transformation, orchestration, and storage workflows, ensuring data reliability, integrity, and availability.
- Help ensure seamless batch, real-time, and streaming data processing, focusing on high availability and fault tolerance.
- Contribute to DataOps automation efforts, including CI/CD for data pipelines, automated testing, and version control using Azure DevOps and Terraform.
- Collaborate with Data Engineering, Analytics, AI/ML, CloudOps, and Business Intelligence teams to support data-driven decision-making.
- Assist in aligning DataOps practices with regulatory and security requirements by working with IT, data stewards, and compliance teams.
- Support data operations and sustainment activities, including testing and monitoring processes for global products and projects.
- Participate in data capture, storage, integration, governance, and analytics efforts, working alongside cross-functional teams.
- Assist in managing day-to-day DataOps activities, ensuring adherence to service-level agreements (SLAs) and business requirements.
- Engage with SMEs and business stakeholders to ensure data platform capabilities align with business needs.
- Contribute to Agile work intake and execution processes, helping to maintain efficiency in data platform teams.
- Help troubleshoot and resolve issues related to cloud infrastructure and data services in collaboration with technical teams.
- Support the development and automation of operational policies and procedures, improving efficiency and resilience.
- Assist in incident response and root cause analysis, contributing to self-healing mechanisms and mitigation strategies.
- Foster a customer-centric approach, advocating for operational excellence and continuous improvement in service delivery.
- Help build a collaborative, high-performing team culture, promoting automation and efficiency within DataOps.
- Adapt to shifting priorities and support cross-functional teams in maintaining productivity and achieving business goals.
- Utilize technical expertise in cloud and data operations to support service reliability and scalability.

Qualifications:
- 5+ years of technology work experience in a large-scale global organization; CPG industry experience preferred.
- 5+ years of experience in Data & Analytics roles, with hands-on expertise in data operations and governance.
- 2+ years of experience working within a cross-functional IT organization, collaborating with multiple teams.
- Experience in a lead or senior support role, with a focus on DataOps execution and delivery.
- Strong communication skills, with the ability to collaborate with stakeholders and articulate technical concepts to non-technical audiences.
- Analytical and problem-solving abilities, with a focus on prioritizing customer needs and operational improvements.
- Customer-focused mindset, ensuring high-quality service delivery and operational efficiency.
- Growth mindset, with a willingness to learn and adapt to new technologies and methodologies in a fast-paced environment.
- Experience supporting data operations in a Microsoft Azure environment, including data pipeline automation.
- Familiarity with Site Reliability Engineering (SRE) principles, such as monitoring, automated issue remediation, and scalability improvements.
- Understanding of operational excellence in complex, high-availability data environments.
- Ability to collaborate across teams, building strong relationships with business and IT stakeholders.
- Basic understanding of data management concepts, including master data management, data governance, and analytics.
- Knowledge of data acquisition, data catalogs, data standards, and data management tools.
- Strong execution and organizational skills, with the ability to follow through on operational plans and drive measurable results.
- Adaptability in a dynamic, fast-paced environment, with the ability to shift priorities while maintaining productivity.

Posted 2 weeks ago

Apply

3.0 - 8.0 years

24 - 36 Lacs

Pune

Work from Office

Responsibilities:
* Design, develop, and maintain big data solutions using Hadoop, Hive, PySpark, Python, Java, Scala, AWS, and Airflow.
* Optimize the performance and scalability of big data systems on Spark.

Benefits: health insurance, annual bonus, provident fund.

Posted 2 weeks ago

Apply

6.0 - 10.0 years

13 - 18 Lacs

Mumbai

Work from Office

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data, and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits, and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

We are looking for a skilled Data Engineer to design, build, and maintain scalable, secure, and high-performance data solutions. This role spans the full data engineering lifecycle, from research and architecture to deployment and support, within cloud-native environments, with a strong focus on AWS and Kubernetes (EKS).

Primary responsibilities:
- Data engineering lifecycle: lead research, proof of concept, architecture, development, testing, deployment, and ongoing maintenance of data solutions
- Data solutions: design and implement modular, flexible, secure, and reliable data systems that scale with business needs
- Instrumentation and monitoring: integrate pipeline observability to detect and resolve issues proactively
- Troubleshooting and optimization: develop tools and processes to debug, optimize, and maintain production systems
- Tech debt reduction: identify and address legacy inefficiencies to improve performance and maintainability
- Debugging: quickly diagnose and resolve unknown issues across complex systems
- Documentation and governance: maintain clear documentation of data models, transformations, and pipelines to ensure security and governance compliance
- Cloud expertise: leverage advanced skills in AWS and EKS to build, deploy, and scale cloud-native data platforms
- Cross-functional support: collaborate with analytics, application development, and business teams to enable data-driven solutions
- Team leadership: lead and mentor engineering teams to ensure operational efficiency and innovation
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or reassignment to different work locations, change in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary, or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required qualifications:
- Bachelor's degree in Computer Science or a related field
- 5+ years of experience in data engineering or related roles
- Proven experience designing and deploying scalable, secure, high-quality data solutions
- Solid expertise across the full data engineering lifecycle, from research to maintenance
- Advanced AWS and EKS knowledge
- Proficiency in CI/CD, infrastructure as code (IaC), and addressing tech debt
- Proven skill in monitoring and instrumentation of data pipelines
- Advanced troubleshooting and performance optimization abilities
- An ownership mindset, with the ability to manage multiple components
- Effective cross-functional collaboration with data scientists, SMEs, and external teams
- Exceptional debugging and problem-solving skills
- A solid individual contributor with a team-first approach

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location, and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health, which are disproportionately experienced by people of color, historically marginalized groups, and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.

Posted 2 weeks ago

Apply

3.0 - 8.0 years

3 - 7 Lacs

Patna

Work from Office

Azure Data Factory:
- Develop Azure Data Factory objects: ADF pipelines, configuration, parameters, variables, integration runtime
- Hands-on knowledge of ADF activities (such as Copy, Stored Procedure, and Lookup) and Data Flows
- ADF data ingestion and integration with other services

Azure Databricks:
- Experience with big data components such as Kafka, Spark SQL, DataFrames, and Hive DB implemented using Azure Databricks is preferred
- Azure Databricks integration with other services
- Read and write data in Azure Databricks
- Best practices in Azure Databricks

Synapse Analytics:
- Import data into Azure Synapse Analytics with and without using PolyBase
- Implement a data warehouse with Azure Synapse Analytics
- Query data in Azure Synapse Analytics

Posted 2 weeks ago

Apply

4.0 - 5.0 years

6 - 7 Lacs

Varanasi

Work from Office

Key responsibilities:
- Conduct feature engineering, data analysis, and data exploration to extract valuable insights.
- Develop and optimize machine learning models to achieve high accuracy and performance.
- Design and implement deep learning models, including artificial neural networks (ANN), convolutional neural networks (CNN), and reinforcement learning techniques.
- Handle real-time imbalanced datasets and apply appropriate techniques to improve model fairness and robustness (see the sketch below).
- Deploy models in production environments and ensure continuous monitoring, improvement, and updates based on feedback.
- Collaborate with cross-functional teams to align ML solutions with business goals.
- Utilize fundamental statistical knowledge and mathematical principles to ensure the reliability of models.
- Bring in the latest advancements in ML and AI to drive innovation.

Requirements:
- 4-5 years of hands-on experience in machine learning and deep learning.
- Strong expertise in feature engineering, data exploration, and data preprocessing.
- Experience with imbalanced datasets and techniques to improve model generalization.
- Proficiency in Python, TensorFlow, scikit-learn, and other ML frameworks.
- Strong mathematical and statistical knowledge with problem-solving skills.
- Ability to optimize models for high accuracy and performance in real-world scenarios.

Preferred qualifications:
- Experience with big data technologies (Hadoop, Spark, etc.).
- Familiarity with containerization and orchestration tools (Docker, Kubernetes).
- Experience automating ML pipelines with MLOps practices.
- Experience in model deployment using cloud platforms (AWS, GCP, Azure) or MLOps tools.
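A minimal sketch of one common remedy for the imbalanced-dataset requirement above, class weighting in scikit-learn; the synthetic data and 9:1 imbalance are invented for illustration:

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Synthetic 9:1 imbalanced binary problem, for illustration only.
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# class_weight="balanced" reweights the loss inversely to class frequency,
# one standard technique for imbalance (alternatives: resampling, SMOTE).
clf = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X_tr, y_tr)
print("minority-class F1:", f1_score(y_te, clf.predict(X_te)))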

Posted 2 weeks ago

Apply

3.0 - 8.0 years

5 - 10 Lacs

Mumbai

Work from Office

Azure Data Factory:
- Develop Azure Data Factory objects: ADF pipelines, configuration, parameters, variables, integration runtime
- Hands-on knowledge of ADF activities (such as Copy, Stored Procedure, and Lookup) and Data Flows
- ADF data ingestion and integration with other services

Azure Databricks:
- Experience with big data components such as Kafka, Spark SQL, DataFrames, and Hive DB implemented using Azure Databricks is preferred
- Azure Databricks integration with other services
- Read and write data in Azure Databricks
- Best practices in Azure Databricks

Synapse Analytics:
- Import data into Azure Synapse Analytics with and without using PolyBase
- Implement a data warehouse with Azure Synapse Analytics
- Query data in Azure Synapse Analytics

Posted 2 weeks ago

Apply

3.0 - 6.0 years

5 - 8 Lacs

Gurugram

Work from Office

About the job:
As a Mid Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform. You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure. This role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges.

What you'll do:
- Design and develop data processing pipelines and analytics solutions using Databricks.
- Architect scalable and efficient data models and storage solutions on the Databricks platform.
- Collaborate with architects and other teams to migrate current solutions to Databricks.
- Optimize the performance and reliability of Databricks clusters and jobs to meet SLAs and business requirements.
- Apply best practices for data governance, security, and compliance on the Databricks platform.
- Mentor junior engineers and provide technical guidance.
- Stay current with emerging technologies and trends in data engineering and analytics to drive continuous improvement.

You'll be expected to have:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 3 to 6 years of overall experience, with 2+ years designing and implementing data solutions on the Databricks platform.
- Proficiency in programming languages such as Python, Scala, or SQL.
- Strong understanding of distributed computing principles and experience with big data technologies such as Apache Spark.
- Experience with cloud platforms such as AWS, Azure, or GCP and their associated data services.
- Proven track record of delivering scalable and reliable data solutions in a fast-paced environment.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills, with the ability to work effectively in cross-functional teams.
- Experience with containerization technologies such as Docker and Kubernetes is good to have.
- Knowledge of DevOps practices for automated deployment and monitoring of data pipelines.

Posted 2 weeks ago

Apply

3.0 - 8.0 years

3 - 7 Lacs

Nashik

Work from Office

Azure Data Factory:
- Develop Azure Data Factory objects: ADF pipelines, configuration, parameters, variables, integration runtime
- Hands-on knowledge of ADF activities (such as Copy, Stored Procedure, and Lookup) and Data Flows
- ADF data ingestion and integration with other services

Azure Databricks:
- Experience with big data components such as Kafka, Spark SQL, DataFrames, and Hive DB implemented using Azure Databricks is preferred
- Azure Databricks integration with other services
- Read and write data in Azure Databricks
- Best practices in Azure Databricks

Synapse Analytics:
- Import data into Azure Synapse Analytics with and without using PolyBase
- Implement a data warehouse with Azure Synapse Analytics
- Query data in Azure Synapse Analytics

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Senior Cloud Support Engineer at Snowflake, you will have the opportunity to work with a dynamic and expanding Support team. Your role will involve leveraging your technical expertise across various operating systems, database technologies, big data, data integration, connectors, and networking to address a wide range of issues related to data. Snowflake Support is dedicated to providing high-quality solutions to facilitate data-driven business insights and outcomes. As part of the team, you will collaborate with customers to understand their needs, offer technical guidance, and champion their feedback for product enhancements.

Key to Snowflake's approach are its core values of customer-centricity, integrity, initiative, and accountability. These values underpin the team's commitment to delivering exceptional support and fostering meaningful customer relationships.

In this role, you will play a crucial part in driving customer satisfaction by sharing your expertise on the Snowflake Data Warehouse. You will serve as the primary point of contact for customers, offering guidance on product usage and advocating for their feedback to drive product enhancements. Moreover, you will contribute to team knowledge and participate in strategic initiatives to enhance organizational processes.

Furthermore, you will work closely with Snowflake Priority Support customers, gaining insight into their use cases and ensuring the optimal performance of their Snowflake implementation. Your responsibilities will include providing top-notch service, enabling customers to maximize the benefits of the Snowflake platform.

To be successful in this role, you should ideally have experience in a 24x7 technical support environment, managing case escalations, incident resolution, and database release management. Additionally, you should be comfortable working in partnership with engineering teams to address customer requests and contribute to Support initiatives.

As a Senior Cloud Support Engineer at Snowflake, you will be expected to drive technical solutions, adhere to SLAs, demonstrate problem-solving skills, and utilize various tools to investigate issues. Your responsibilities will also include documenting solutions, reporting bugs and feature requests, and collaborating with engineering teams to prioritize and resolve issues.

The ideal candidate will hold a Bachelor's or Master's degree in Computer Science or a related discipline, possess at least 5 years of experience in a technical support role, and have a solid understanding of major RDBMS systems. Proficiency in SQL, query optimization, performance tuning, and system metrics interpretation is essential for this role. Furthermore, knowledge of distributed computing principles, scripting experience, database migration expertise, and proficiency in cloud cost management tools are considered advantageous.

Candidates should be willing to participate in pager duty rotations, work night shifts, and adapt to schedule changes as needed to support business requirements. Snowflake is a rapidly growing company, and as part of the team, you will have the opportunity to contribute to our success and shape the future of data analytics. If you are passionate about technology, customer success, and innovation, we invite you to join us on this exciting journey.

For detailed information regarding salary and benefits for positions in the United States, please refer to the job posting on the Snowflake Careers Site at careers.snowflake.com.

Posted 2 weeks ago

Apply

6.0 - 16.0 years

0 Lacs

Karnataka

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture, and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The Data and Analytics team at EY is a multi-disciplinary technology team delivering client projects and solutions across data mining and management, visualization, business analytics, automation, statistical insights, and AI/GenAI. The assignments cover a wide range of countries and industry sectors.

We are looking for an Assistant Director - AI/GenAI, proficient in artificial intelligence, machine learning, deep learning, and LLM models for generative AI, text analytics, and Python programming. You will be responsible for developing and delivering industry-sector-specific solutions that will be used to implement the EY SaT mergers and acquisitions methodologies.

Your key responsibilities include developing, reviewing, and implementing solutions applying AI, machine learning, and deep learning, and developing APIs using Python; a relevant understanding of big data and visualization would be beneficial. You will lead the development and implementation of generative AI applications using both open-source and closed-source large language models (LLMs), work extensively with advanced models for natural language processing and creative content generation using contextual information, and design and optimize solutions leveraging vector databases for efficient storage and retrieval of contextual data for LLMs. Your role will also involve understanding businesses and sectors, identifying whitespace and opportunities for analytics application, managing projects, ensuring smooth service delivery, and communicating with cross-functional teams.

Skills and attributes for success include the ability to work creatively and systematically in a time-limited, problem-solving environment; loyalty, reliability, and high ethical standards; flexibility, curiosity, and creativity; good interpersonal skills, teamwork, and intercultural intelligence; experience working in multicultural teams; the ability to manage multiple priorities simultaneously; and excellent communication skills.

To qualify for this role, you must have experience guiding teams on AI/data science projects, familiarity with the Azure cloud framework, excellent presentation skills, 12-16 years of relevant work experience in AI and machine learning, experience in statistical techniques, deep learning, and machine learning algorithms, programming in Python, SDLC experience, and a willingness to mentor team members. Additionally, the ability to think strategically, build rapport, travel extensively, and work at client sites/practice office locations is ideal.

EY Global Delivery Services (GDS) offers a dynamic and truly global delivery network with fulfilling career opportunities, continuous learning, transformative leadership, and a diverse and inclusive culture. Join us at EY to build a better working world through trust, innovation, and value creation for clients, people, and society.
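The role mentions vector databases for retrieving contextual data for LLMs; here is a minimal sketch of the underlying retrieval idea using plain NumPy cosine similarity, with toy embeddings standing in for real model outputs and for an actual vector store:

import numpy as np

# Toy 4-dimensional "embeddings"; a real system would use a vector database.
docs = ["merger filing", "revenue table", "board minutes"]
doc_vecs = np.array([
    [0.9, 0.1, 0.0, 0.2],
    [0.1, 0.8, 0.3, 0.0],
    [0.2, 0.1, 0.9, 0.4],
])
query_vec = np.array([0.85, 0.15, 0.05, 0.1])

def normalize(v):
    # Cosine similarity = dot product of L2-normalized vectors.
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

scores = normalize(doc_vecs) @ normalize(query_vec)
best = int(np.argmax(scores))
print("best match:", docs[best], "score:", float(scores[best]))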

Posted 2 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Software Engineer II at JPMorgan Chase within the Employee Platforms team, you will be part of an agile team responsible for designing and delivering trusted, market-leading technology products in a secure, stable, and scalable manner to support the firm's business objectives.

Your role involves executing creative software solutions, developing high-quality production code, and identifying opportunities to automate the remediation of recurring issues to improve operational stability. You will lead evaluation sessions with external vendors, startups, and internal teams, driving outcomes-oriented probing of architectural designs, technical credentials, and applicability within existing systems, and you will lead communities of practice across Software Engineering to promote the adoption of new and leading-edge technologies. You will also collaborate with stakeholders to solve business problems through innovation, help manage the firm's capital reserves effectively, work across teams to drive features and eliminate blockers, and produce high-quality documentation of cloud solutions as reusable patterns.

To qualify for this role, you should have formal training or certification in software engineering concepts and at least 2 years of applied experience, with hands-on practical experience in system design, application development, testing, and operational stability. Proficiency in automation, continuous delivery methods, agile methodologies, and advanced programming languages is essential, along with knowledge of financial services industry IT systems, the Agile SDLC, and technologies such as Python, Big Data, Hadoop, Spark, Scala, and Splunk, as well as application, data, and infrastructure architecture.

Preferred qualifications include strong team spirit, the ability to work collaboratively, and knowledge of financial instruments. Proficiency in Core Java 8, Spring, JPA/Hibernate, and React JavaScript is desirable.
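
To illustrate the Spark skills listed above (not JPMorgan's actual stack), a minimal PySpark batch aggregation; the input path and column names are hypothetical.

# Minimal PySpark sketch of a batch aggregation; the dataset path and
# column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("trade-volume-report").getOrCreate()

trades = spark.read.parquet("/data/trades/2024/")   # hypothetical dataset

daily_volume = (
    trades
    .withColumn("trade_date", F.to_date("executed_at"))
    .groupBy("trade_date", "desk")
    .agg(F.sum("notional").alias("total_notional"),
         F.count("*").alias("trade_count"))
    .orderBy("trade_date")
)

daily_volume.show(10)
spark.stop()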

Posted 2 weeks ago

Apply

10.0 - 15.0 years

30 - 45 Lacs

Hyderabad

Hybrid

Job Title: IT - Lead Architect - AI
Years of Experience: 10-15 Years
Mandatory Skills: Data Architect, Team Leadership, AI/ML Expert, Azure, SAP
Good to have: Visualization, Python

Key Responsibilities:
- Lead a team of architects and engineers focused on strategic Azure architecture and AI projects.
- Develop and maintain the company's data architecture strategy and lead design/architecture validation reviews.
- Drive the adoption of new AI/ML technologies and assess their impact on data strategy.
- Architect scalable data flows, storage, and analytics platforms, ensuring secure and cost-effective solutions.
- Establish data governance frameworks and promote best practices for data quality (a minimal data-quality check is sketched after this listing).
- Act as a technical advisor on complex data projects and collaborate with stakeholders.
- Work with technologies including SQL, SYNAPSE, Databricks, PowerBI, Fabric, Python, SQL Server, and NoSQL.

Required Qualifications & Experience:
- Bachelor's or Master's degree in Computer Science or a related field.
- At least 5 years in a leadership role in data architecture.
- Expert in Azure, Databricks, and Synapse.
- Proven experience leading technical teams and strategic projects, specifically designing and implementing AI solutions within data architectures.
- Deep knowledge of cloud data platforms (Azure, Fabric, Databricks, AWS), data modeling, ETL/ELT, big data, relational/NoSQL databases, and data security.
- 5 years of experience in AI model design and deployment.
- Strong experience in solution architecture.
- Excellent communication, stakeholder management, and problem-solving skills.
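
As referenced above, a minimal sketch of the kind of data-quality check a governance framework might enforce, assuming PySpark; the dataset path and the 1% null-rate rule are hypothetical.

# Hypothetical data-quality check: flag columns whose null rate exceeds
# a governance threshold. Path and threshold are illustrative only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
customers = spark.read.parquet("/lake/silver/customers/")  # hypothetical

total = customers.count()
# Count nulls per column in a single pass.
null_counts = customers.select(
    [F.sum(F.col(c).isNull().cast("int")).alias(c) for c in customers.columns]
).first().asDict()

for column, nulls in null_counts.items():
    if total and nulls / total > 0.01:
        print(f"DQ WARNING: {column} has {nulls}/{total} nulls")

spark.stop()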

Posted 2 weeks ago

Apply

8.0 - 10.0 years

15 - 30 Lacs

Hyderabad

Hybrid

Job Title: IT - Lead Engineer/Architect - Azure Lake
Years of Experience: 8-10 Years
Mandatory Skills: Azure, DataLake, Databricks, SAP BW

Key Responsibilities:
- Lead the development and maintenance of the data architecture strategy, including design and architecture validation reviews with all stakeholders.
- Architect scalable data flows, storage, and analytics platforms in cloud/hybrid environments, ensuring secure, high-performing, and cost-effective solutions.
- Establish comprehensive data governance frameworks and promote best practices for data quality and enterprise compliance.
- Act as a technical leader on complex data projects and drive the adoption of new technologies, including AI/ML.
- Collaborate extensively with business stakeholders to translate needs into architectural solutions and define project scope.
- Support a wide range of data lake and lakehouse technologies (SQL, SYNAPSE, Databricks, PowerBI, Fabric); a minimal lakehouse refinement step is sketched after this listing.

Required Qualifications & Experience:
- Bachelor's or Master's degree in Computer Science or a related field.
- At least 3 years in a leadership role in data architecture.
- Proven ability leading architecture/AI/ML projects from conception to deployment.
- Deep knowledge of cloud data platforms (Microsoft Azure, Fabric, Databricks), data modeling, ETL/ELT, big data, relational/NoSQL databases, and data security.
- Experience designing and implementing AI solutions within cloud architecture.
- 3 years as a project lead on large-scale projects.
- 5 years in development with Azure, Synapse, and Databricks.
- Excellent communication and stakeholder management skills.
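
As referenced above, a minimal bronze-to-silver refinement step of the kind lakehouse work involves, assuming a Databricks/Delta runtime; the paths and column names are hypothetical.

# Hypothetical bronze-to-silver step on a Delta lakehouse: dedupe,
# filter, and write a partitioned silver table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze-to-silver").getOrCreate()

raw = spark.read.format("delta").load("/lake/bronze/sensor_readings/")

cleaned = (
    raw
    .dropDuplicates(["device_id", "reading_ts"])     # drop replayed events
    .filter(F.col("reading_value").isNotNull())      # drop unusable rows
    .withColumn("ingest_date", F.to_date("reading_ts"))
)

(cleaned.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("ingest_date")
    .save("/lake/silver/sensor_readings/"))

spark.stop()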

Posted 2 weeks ago

Apply

4.0 - 8.0 years

7 - 17 Lacs

Hyderabad

Hybrid

Job Title: IT - Senior Engineer - Azure Lake
Years of Experience: 4-6 Years
Mandatory Skills: Azure, DataLake, SAP BW, PowerBI, Tableau

Key Responsibilities:
- Develop and maintain the data architecture strategy, including design and architecture validation reviews.
- Architect scalable data flows, storage, and analytics platforms in cloud/hybrid environments, ensuring secure and cost-effective solutions.
- Establish and enforce data governance frameworks, promoting data quality and compliance.
- Act as a technical advisor on complex data projects and collaborate with stakeholders on project scope and planning.
- Drive adoption of new technologies, conduct technology watch, and define standards for data management.
- Develop using SQL, SYNAPSE, Databricks, PowerBI, and Fabric; a minimal Synapse query example follows this listing.

Required Qualifications & Experience:
- Bachelor's or Master's degree in Computer Science or a related field.
- Experience in data architecture, with at least 3 years in a leadership role.
- Deep knowledge of Azure/AWS, Databricks, Synapse, and other cloud data platforms.
- Understanding of SAP technologies (SAP BW, SAP DataSphere, HANA, S/4, ECC) and visualization tools (Power BI, Tableau).
- Understanding of data modeling, ETL/ELT, big data, relational/NoSQL databases, and data security.
- Experience with AI/ML and familiarity with data mesh/fabric.
- 5 years in back-end/full-stack development on large-scale projects with Azure Synapse/Databricks.
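
As referenced above, a minimal sketch of querying a Synapse dedicated SQL pool from Python, assuming the pyodbc package; the server, database, table, and credentials are hypothetical, and real credentials would come from a secret store.

# Hypothetical parameterized query against a Synapse SQL pool via pyodbc.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=example-workspace.sql.azuresynapse.net;"  # hypothetical server
    "DATABASE=analytics;UID=report_user;PWD=..."
)

cursor = conn.cursor()
cursor.execute(
    "SELECT region, SUM(amount) AS total_sales "
    "FROM fact_sales WHERE sale_date >= ? "
    "GROUP BY region ORDER BY total_sales DESC",
    "2024-01-01",   # bound parameter for the ? placeholder
)
for region, total_sales in cursor.fetchall():
    print(region, total_sales)

conn.close()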

Posted 2 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies