4.0 years
0 Lacs
Hyderābād
On-site
The people here at Apple don’t just build products - we craft the kind of wonder that’s revolutionized entire industries. It’s the diversity of those people and their ideas that supports the innovation that runs through everything we do, from amazing technology to industry-leading environmental efforts. Join Apple, and help us leave the world better than we found it. The Global Business Intelligence team provides data services, analytics, reporting, and data science solutions to Apple’s business groups, including Retail, iTunes, Marketing, AppleCare, Operations, Finance, and Sales. These solutions are built on top of an end-to-end machine learning platform with sophisticated AI capabilities. We are looking for a competent, experienced, and driven machine learning engineer to define and build some of the best-in-class machine learning solutions and tools for Apple. Description As a Machine Learning Engineer, you will work on building intelligent systems to democratize AI across a wide range of solutions within Apple. You will drive the development and deployment of innovative AI models and systems that directly impact the capabilities and performance of Apple’s products and services. You will implement robust, scalable ML infrastructure, including data storage, processing, and model serving components, to support seamless integration of AI/ML models into production environments. You will develop novel feature engineering, data augmentation, prompt engineering and fine-tuning frameworks that achieve optimal performance on specific tasks and domains. You will design and implement automated ML pipelines for data preprocessing, feature engineering, model training, hyper-parameter tuning, and model evaluation, enabling rapid experimentation and iteration. You will also implement advanced model compression and optimization techniques to reduce the resource footprint of language models while preserving their performance. You will maintain a continuous focus on brainstorming and designing POCs using AI/ML services for new or existing enterprise problems. YOU SHOULD BE ABLE TO: - Understand a business challenge - Collaborate with business and other multi-functional teams - Design a statistical or deep learning solution that answers it - Develop it yourself or guide another person to do it - Deliver the outcome into production - Keep good governance of your work. There are meaningful opportunities for you to deliver impactful work at Apple. Key Qualifications 4+ years of ML engineering experience in feature engineering, model training, model serving, model monitoring and model refresh management Experience developing AI/ML systems at scale in production or in high-impact research environments Passion for computer vision and natural language processing, especially LLMs and Generative AI systems Knowledge of common frameworks and tools such as PyTorch or TensorFlow Experience with transformer models such as BERT, GPT etc. and understanding of their underlying principles is a plus Strong coding, analytical, software engineering skills, and familiarity with software engineering principles around testing, code reviews and deployment Experience in handling performance, application and security log management Applied knowledge of statistical data analysis, predictive modeling, classification, Time Series techniques, sampling methods, multivariate analysis, hypothesis testing, and drift analysis. Proficiency in programming languages and tools like Python, R, Git, Airflow, Notebooks.
Experience with data visualization tools like matplotlib, d3.js, or Tableau would be a plus. Education & Experience: Bachelor’s Degree or equivalent experience
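As an illustration of the automated training-and-evaluation pipelines this role describes, here is a minimal sketch using scikit-learn (the dataset and parameter grid are stand-ins for illustration, not Apple's actual stack):

    # Minimal sketch of an automated ML pipeline: preprocessing, training,
    # hyper-parameter tuning, and evaluation in one reproducible object.
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV, train_test_split
    from sklearn.metrics import classification_report
    from sklearn.datasets import load_breast_cancer  # stand-in dataset

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

    pipeline = Pipeline([
        ("scale", StandardScaler()),                  # preprocessing / feature engineering step
        ("model", LogisticRegression(max_iter=1000)),
    ])

    # Hyper-parameter tuning via cross-validated grid search
    search = GridSearchCV(pipeline, {"model__C": [0.1, 1.0, 10.0]}, cv=5, scoring="f1")
    search.fit(X_train, y_train)

    # Evaluate on held-out data before promoting the model to serving
    print(classification_report(y_test, search.predict(X_test)))

The same structure extends naturally to scheduled pipelines whose winning model is registered for serving and monitored for drift.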
Posted 1 week ago
4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Position Title: Senior Data Engineer Location: Chennai 34322 Job Type: Contract Budget: ₹18 LPA Notice Period: Immediate Joiners Only Role Overview We are seeking a highly capable Software Engineer (Data Engineer) to support end-to-end development and deployment of critical data products. The selected candidate will work across diverse business and technical teams to design, build, transform, and migrate data solutions using modern cloud technologies. This is a high-impact role focused on cloud-native data engineering and infrastructure. Key Responsibilities Develop and manage scalable data pipelines and workflows on Google Cloud Platform (GCP) Design and implement ETL processes using Python, BigQuery, and Terraform Support the data product lifecycle from concept and development through deployment and DevOps Optimize query performance and manage large datasets efficiently Collaborate with cross-functional teams to gather requirements and deliver solutions Maintain strong adherence to Agile practices, contributing to sprint planning and user stories Apply best practices in data security, quality, and governance Effectively communicate technical solutions to stakeholders and team members Required Skills & Experience Minimum 4 years of relevant experience in GCP Data Engineering Strong hands-on experience with BigQuery, Python programming, Terraform, Cloud Run, and GitHub Proven expertise in SQL, data modeling, and performance optimization Solid understanding of cloud data warehousing and pipeline orchestration (e.g., DBT, Dataflow, Composer, or Airflow DAGs) Background in ETL workflows and data processing logic Familiarity with Agile (Scrum) methodology and collaboration tools Preferred Skills Experience with Java, Spring Boot, and RESTful APIs Exposure to infrastructure automation and CI/CD pipelines Educational Qualification Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field Skills: etl,terraform,dbt,java,spring boot,etl workflows,data modeling,dataflow,data engineering,ci/cd,bigquery,agile,data,sql,cloud,restful apis,github,airflow dags,gcp,cloud run,composer,python
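To illustrate the kind of Python-plus-BigQuery ETL work this role describes, here is a minimal sketch using the google-cloud-bigquery client library (the project, dataset, bucket, and table names are hypothetical; credentials are assumed to be configured in the environment):

    # Minimal ELT sketch on GCP: load raw CSVs from Cloud Storage into BigQuery,
    # then run a SQL transformation into a curated table.
    from google.cloud import bigquery

    client = bigquery.Client(project="example-project")  # hypothetical project id

    # 1) Load raw files from a GCS bucket into a staging table
    load_job = client.load_table_from_uri(
        "gs://example-bucket/raw/orders_*.csv",
        "example-project.staging.orders_raw",
        job_config=bigquery.LoadJobConfig(
            source_format=bigquery.SourceFormat.CSV,
            skip_leading_rows=1,
            autodetect=True,
            write_disposition="WRITE_TRUNCATE",
        ),
    )
    load_job.result()  # wait for the load to finish

    # 2) Transform into a curated table with a SQL statement
    client.query(
        """
        CREATE OR REPLACE TABLE `example-project.analytics.daily_orders` AS
        SELECT order_date, COUNT(*) AS orders, SUM(amount) AS revenue
        FROM `example-project.staging.orders_raw`
        GROUP BY order_date
        """
    ).result()

In practice the same steps would typically be wrapped in Terraform-provisioned infrastructure and scheduled via Composer/Airflow rather than run ad hoc.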
Posted 1 week ago
5.0 years
19 - 20 Lacs
Chennai, Tamil Nadu, India
On-site
Position Title: Senior Software Engineer 34332 Location: Chennai (Onsite) Job Type: Contract Budget: ₹20 LPA Notice Period: Immediate Joiners Only Role Overview We are looking for a highly skilled Senior Software Engineer to be a part of a centralized observability and monitoring platform team. The role focuses on building and maintaining a scalable, reliable observability solution that enables faster incident response and data-driven decision-making through latency, traffic, error, and saturation monitoring. This opportunity requires a strong background in cloud-native architecture, observability tooling, backend and frontend development, and data pipeline engineering. Key Responsibilities Design, build, and maintain observability and monitoring platforms to enhance MTTR/MTTX Create and optimize dashboards, alerts, and monitoring configurations using tools like Prometheus, Grafana, etc. Architect and implement scalable data pipelines and microservices for real-time and batch data processing Utilize GCP tools including BigQuery, Dataflow, Dataproc, Data Fusion, and others Develop end-to-end solutions using Spring Boot, Python, Angular, and REST APIs Design and manage relational and NoSQL databases including PostgreSQL, MySQL, and BigQuery Implement best practices in data governance, RBAC, encryption, and security within cloud environments Ensure automation and reliability through CI/CD, Terraform, and orchestration tools like Airflow and Tekton Drive full-cycle SDLC processes including design, coding, testing, deployment, and monitoring Collaborate closely with software architects, DevOps, and cross-functional teams for solution delivery Core Skills Required Proficiency in Spring Boot, Angular, Java, and Python Experience in developing microservices and SOA-based systems Cloud-native development experience, preferably on Google Cloud Platform (GCP) Strong understanding of HTML, CSS, JavaScript/TypeScript, and modern frontend frameworks Experience with infrastructure automation and monitoring tools Working knowledge of data engineering technologies: PySpark, Airflow, Apache Beam, Kafka, and similar Strong grasp of RESTful APIs, GitHub, and TDD methodologies Preferred Skills GCP Professional Certifications (e.g., Data Engineer, Cloud Developer) Hands-on experience with Terraform, Cloud SQL, Data Governance tools, and security frameworks Exposure to performance tuning, cost optimization, and observability best practices Experience Required 5+ years of experience in full-stack and cloud-based application development Strong track record in building distributed, scalable systems Prior experience with observability and performance monitoring tools is a plus Educational Qualifications Bachelor’s Degree in Computer Science, Information Technology, or a related field (mandatory) Skills: java,data fusion,html,dataflow,terraform,spring boot,restful apis,python,angular,dataproc,microservices,apache beam,css,cloud sql,soa,typescript,tdd,kafka,javascript,airflow,github,pyspark,bigquery,gcp
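As a sketch of the latency/traffic/error/saturation ("four golden signals") instrumentation this observability role centers on, here is a minimal example with the prometheus_client Python library (the metric names, labels, and port are hypothetical):

    # Minimal golden-signals instrumentation sketch: exposes metrics that
    # Prometheus can scrape and Grafana can chart and alert on.
    import random, time
    from prometheus_client import Counter, Gauge, Histogram, start_http_server

    REQUESTS = Counter("app_requests_total", "Traffic: total requests", ["endpoint"])
    ERRORS = Counter("app_errors_total", "Errors: failed requests", ["endpoint"])
    LATENCY = Histogram("app_request_latency_seconds", "Latency per request", ["endpoint"])
    SATURATION = Gauge("app_queue_depth", "Saturation: pending work items")

    def handle_request(endpoint: str) -> None:
        REQUESTS.labels(endpoint).inc()
        with LATENCY.labels(endpoint).time():   # observe request duration
            time.sleep(random.uniform(0.01, 0.1))
            if random.random() < 0.05:
                ERRORS.labels(endpoint).inc()

    if __name__ == "__main__":
        start_http_server(8000)                 # scrape target for Prometheus
        while True:
            SATURATION.set(random.randint(0, 20))
            handle_request("/checkout")

Dashboards and alert rules in Grafana/Prometheus would then be built over these series to drive the MTTR/MTTX improvements mentioned above.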
Posted 1 week ago
2.0 years
4 Lacs
Chennai
On-site
We are hiring a tech-savvy and creative Social Media Handler with strong expertise in AI-powered content creation, web scraping, and automation of scraper workflows. You will be responsible for managing social media presence while automating content intelligence and trend tracking through custom scraping solutions. This is a hybrid role requiring both creative content skills and technical automation proficiency. Key Responsibilities: 1) Social Media Management - Plan and execute content calendars across platforms: Instagram, Facebook, YouTube, LinkedIn, and X. - Create high-performing, audience-specific content using AI tools (ChatGPT, Midjourney, Canva AI, etc.). - Engage with followers, track trends, and implement growth strategies. 2) AI Content Creation - Use generative AI to write captions, articles, and hashtags. - Generate AI-powered images, carousels, infographics, and reels. - Repurpose long-form content into short-form video or visual content using tools like Descript or Lumen5. 3) Web Scraping & Automation - Design and build automated web scrapers to extract data from websites, directories, competitor pages, and trending content sources. - Schedule scraping jobs and set up automated pipelines using: - Python (BeautifulSoup, Scrapy, Selenium, Playwright) - Task schedulers (Airflow, Cron, or Python scripts) - Cloud scraping or headless browsers - Parse and clean data for insight generation (topics, hashtags, keywords, sentiment, etc.). - Store and organize scraped data in spreadsheets or databases for content inspiration and strategy. Required Skills & Experience: 1) 2–5 years of relevant work experience in social media, content creation, or web scraping. 2) Proficiency in AI tools - Text: ChatGPT, Jasper, Copy.ai; Image: Midjourney, DALL·E, Adobe Firefly; Video: Pictory, Descript, Lumen5. 3) Strong Python skills for web scraping (Scrapy, BeautifulSoup, Selenium) and automation scripting. 4) Knowledge of data handling using Pandas, CSV, JSON, Google Sheets, or databases. 5) Familiarity with social media scheduling tools (Meta Business Suite, Buffer, Hootsuite). 6) Ability to work independently and stay updated on digital trends and platform changes. Educational Qualification: Degree in Marketing, Media, Computer Science, or Data Science preferred. Skills-based hiring encouraged – real-world experience matters more than formal education. Work Location: Chennai (In-office role) Salary: Commensurate with experience + performance bonus Bonus Skills (Nice to Have): 1) Knowledge of website development (HTML, CSS, JS, WordPress/Webflow). 2) SEO and content analytics. 3) Basic video editing and animation (CapCut, After Effects). 4) Experience with automation platforms like Zapier, n8n, or Make.com. To Apply: Please email your resume, portfolio, and sample projects to: Job Type: Full-time Pay: From ₹40,000.00 per month Work Location: In person
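To illustrate the scraper-automation side of the role, here is a minimal sketch with requests and BeautifulSoup (the URL, CSS selector, and output file are hypothetical; a cron entry or Airflow DAG would schedule it):

    # Minimal trend-scraping sketch: fetch a page, extract post text and hashtags,
    # and append the results to a CSV for later content planning.
    import csv, datetime, re
    import requests
    from bs4 import BeautifulSoup

    URL = "https://example.com/trending"          # hypothetical source page

    def scrape_trends(url: str) -> list[dict]:
        html = requests.get(url, timeout=30).text
        soup = BeautifulSoup(html, "html.parser")
        rows = []
        for post in soup.select("article.post"):  # hypothetical CSS selector
            text = post.get_text(" ", strip=True)
            hashtags = re.findall(r"#\w+", text)
            rows.append({"scraped_at": datetime.date.today().isoformat(),
                         "title": text, "hashtags": " ".join(hashtags)})
        return rows

    if __name__ == "__main__":
        with open("trends.csv", "a", newline="", encoding="utf-8") as f:
            writer = csv.DictWriter(f, fieldnames=["scraped_at", "title", "hashtags"])
            writer.writerows(scrape_trends(URL))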
Posted 1 week ago
6.0 years
0 Lacs
Andhra Pradesh, India
On-site
We are seeking a Senior Developer with expertise in SnapLogic and Apache Airflow to design, develop, and maintain enterprise-level data integration solutions. This role requires strong technical expertise in ETL development, workflow orchestration, and cloud technologies. You will be responsible for automating data workflows, optimizing performance, and ensuring the reliability and scalability of our data systems. Key Responsibilities include designing, developing, and managing ETL pipelines using SnapLogic, ensuring efficient data transformation and integration across various systems and applications. Leverage Apache Airflow for workflow automation, job scheduling, and task dependencies, ensuring optimized execution and monitoring. Work closely with cross-functional teams such as Data Engineering, DevOps, and Data Science to understand data requirements and deliver solutions. Collaborate in designing and implementing data pipeline architectures to support large-scale data processing in cloud environments like AWS, Azure, and GCP. Develop reusable SnapLogic pipelines and integrate with third-party applications and data sources including databases, APIs, and cloud services. Optimize SnapLogic pipeline performance to handle large volumes of data with minimal latency. Provide guidance and mentoring to junior developers in the team, conducting code reviews and offering best practice recommendations. Troubleshoot and resolve pipeline failures, ensuring high data quality and minimal downtime. Implement automated testing, continuous integration (CI), and continuous delivery (CD) practices for data pipelines. Stay current with new SnapLogic features, Airflow upgrades, and industry best practices. Required Skills & Experience include 6+ years of hands-on experience in data engineering, focusing on SnapLogic and Apache Airflow. Strong experience with SnapLogic Designer and SnapLogic cloud environment for building data integrations and ETL pipelines. Proficient in Apache Airflow for orchestrating, automating, and scheduling data workflows. Strong understanding of ETL concepts, data integration, and data transformations. Experience with cloud platforms like AWS, Azure, or Google Cloud and data storage systems such as S3, Azure Blob, and Google Cloud Storage. Strong SQL skills and experience with relational databases like PostgreSQL, MySQL, Oracle, and NoSQL databases. Experience working with REST APIs, integrating data from third-party services, and using connectors. Knowledge of data quality, monitoring, and logging tools for production pipelines. Experience with CI/CD pipelines and tools such as Jenkins, GitLab, or similar. Excellent problem-solving skills with the ability to diagnose issues and implement effective solutions. Ability to work in an Agile development environment. Strong communication and collaboration skills to work with both technical and non-technical teams.
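As a sketch of the Airflow orchestration this role pairs with SnapLogic, here is a minimal DAG with two dependent tasks (assuming Airflow 2.4+; the task bodies and schedule are hypothetical placeholders — a real deployment would call the SnapLogic API or a provider operator):

    # Minimal Airflow DAG sketch: trigger an integration step, then a data-quality check.
    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def run_integration_pipeline(**context):
        # Placeholder: in practice this would trigger a SnapLogic pipeline via its API
        print("Triggering integration pipeline for", context["ds"])

    def validate_output(**context):
        # Placeholder data-quality check on the landed data
        print("Validating output for", context["ds"])

    with DAG(
        dag_id="snaplogic_daily_integration",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        extract = PythonOperator(task_id="run_pipeline", python_callable=run_integration_pipeline)
        check = PythonOperator(task_id="validate_output", python_callable=validate_output)
        extract >> check

The explicit task dependency is what gives Airflow its value here: retries, scheduling, and monitoring come for free once the workflow is expressed as a DAG.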
Posted 1 week ago
5.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title - ETL Developer - Informatica BDM/DEI 📍 Location: Onsite 🕒 Employment Type: Full Time 💼 Experience Level: Mid-Senior Job Summary - We are seeking a skilled and results-driven ETL Developer with strong experience in Informatica BDM (Big Data Management) or Informatica DEI (Data Engineering Integration) to design and implement scalable, high-performance data integration solutions. The ideal candidate will work on large-scale data projects involving structured and unstructured data, and contribute to the development of reliable and efficient ETL pipelines across modern big data environments. Key Responsibilities Design, develop, and maintain ETL pipelines using Informatica BDM/DEI for batch and real-time data integration Integrate data from diverse sources including relational databases, flat files, cloud storage, and big data platforms such as Hive and Spark Translate business and technical requirements into mapping specifications and transformation logic Optimize mappings, workflows, and job executions to ensure high performance, scalability, and reliability Conduct unit testing and participate in integration and system testing Collaborate with data architects, analysts, and business stakeholders to understand requirements and deliver robust solutions Support data quality checks, exception handling, and metadata documentation Monitor, troubleshoot, and resolve ETL job issues and performance bottlenecks Ensure adherence to data governance and compliance standards throughout the development lifecycle Key Skills and Qualifications 5-8 years of experience in ETL development with a focus on Informatica BDM/DEI Strong knowledge of data integration techniques, transformation logic, and job orchestration Proficiency in SQL, with the ability to write and optimize complex queries Experience working with Hadoop ecosystems (e.g., Hive, HDFS, Spark) and large-volume data processing Understanding of performance optimization in ETL and big data environments Familiarity with job scheduling tools and workflow orchestration (e.g., Control-M, Apache Airflow, Oozie) Good understanding of data warehousing, data lakes, and data modeling principles Experience working in Agile/Scrum environments Excellent analytical, problem-solving, and communication skills Good to have Experience with cloud data platforms (AWS Glue, Azure Data Factory, or GCP Dataflow) Exposure to Informatica IDQ (Data Quality) is a plus Knowledge of Python, Shell scripting, or automation tools Informatica or Big Data certifications
Posted 1 week ago
6.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Senior Bioinformatician GCL: D2 Introduction to role Are you ready to tackle some of the most challenging informatics problems in the drug discovery clinical trial phase? Join us as a Senior Bioinformatician and be part of a team that is redefining healthcare. Your work will directly impact millions of patients by advancing the standard of drug discovery through data processing, analysis, and algorithm development. Collaborate with informaticians, data scientists, and engineers to deliver groundbreaking solutions that drive scientific insights and improve the quality of candidate drugs. Are you up for the challenge? Accountabilities Collaborate with scientific colleagues across AstraZeneca to ensure informatics and advanced analytics solutions meet R&D needs. Develop and deliver informatics solutions using agile methodologies, including pipelining approaches and algorithm development. Contribute to multi-omics drug projects with downstream analysis and data analytics. Create, benchmark, and deploy scalable data workflows for genome assembly, variant calling, annotation, and more. Implement CI/CD practices for pipeline development across cloud-based and HPC environments. Apply cloud computing platforms like AWS for pipeline execution and data storage. Explore opportunities to apply AI & ML in informatics. Engage with external peers and software providers to apply the latest methods to business problems. Work closely with data scientists and platform teams to deliver scientific insights. Collaborate with informatics colleagues in our Global Innovation and Technology Centre. Essential Skills/Experience Masters/PhD (or equivalent) in Bioinformatics, Computational Biology, AI/ML, Genomics, Systems Biology, Biomedical Informatics, or related field with a demonstrable record of informatics and image analysis delivery in a biopharmaceutical setting. Strong coding and software engineering skills in Python, R, scripting, and Nextflow. Over 6 years of experience in image analysis/bioinformatics, with a focus on image/NGS data analysis and Nextflow (DSL2) pipeline development. Proficiency in cloud platforms, preferably AWS (e.g., S3, EC2, Batch, EBS, EFS), and containerization tools (Docker, Singularity). Experience with workflow management tools and CI/CD practices in image analysis and bioinformatics (Git, GitHub, GitLab), HPC in AWS. Experience in working with multi-omics analysis (transcriptomics, single cell, CRISPR, etc.) or image data (DICOM, WSI, etc.) analysis. Experience working with omics tools and databases such as NCBI, PubMed, UCSC Genome Browser, bedtools, samtools, Picard, or imaging-relevant tools such as CellProfiler, HALO, VisioPharm, particularly in digital pathology and biomarker research. Strong communication skills, with the ability to collaborate effectively with team members and partners to achieve objectives. Desirable Skills/Experience Experience in Omics or Imaging data analysis in a Biopharmaceutical setting. Knowledge of Docker and Kubernetes for container orchestration. Experience with other workflow management systems such as Apache Airflow, Nextflow, Cromwell, or AWS Step Functions. Familiarity with web-based bioinformatics tools (e.g., RShiny, Jupyter). Experience with working in GxP-validated environments. Experience administering and optimising a HPC job scheduler (e.g. SLURM). Experience with configuration automation and infrastructure as code (e.g. Ansible, Hashicorp Terraform, AWS CloudFormation, AWS Cloud Development Kit).
Experience deploying infrastructure and code to public cloud, especially AWS. When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible. We balance the expectation of being in the office while respecting individual flexibility. Join us in our unique and ambitious world. At AstraZeneca, we are driven by a shared purpose to push the boundaries of science and develop life-changing medicines. Our innovative approach combines groundbreaking science with leading digital technology platforms to empower our teams to perform at their best. We foster an environment where you can explore new solutions and experiment with groundbreaking technology. With countless opportunities for learning and growth, you'll be part of a diverse team that works multi-functionally to make a meaningful impact on patients' lives. Ready to make a difference? Apply now to join our team as a Senior Bioinformatician! Date Posted 02-Jul-2025 Closing Date 30-Jul-2025 AstraZeneca embraces diversity and equality of opportunity. We are committed to building an inclusive and diverse team representing all backgrounds, with as wide a range of perspectives as possible, and harnessing industry-leading skills. We believe that the more inclusive we are, the better our work will be. We welcome and consider applications to join our team from all qualified candidates, regardless of their characteristics. We comply with all applicable laws and regulations on non-discrimination in employment (and recruitment), as well as work authorization and employment eligibility verification requirements.
Posted 1 week ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Experience- 7+ years Location- Hyderabad (preferred), Pune, Mumbai JD- We are seeking a skilled Snowflake Developer with 7+ years of experience in designing, developing, and optimizing Snowflake data solutions. The ideal candidate will have strong expertise in Snowflake SQL, ETL/ELT pipelines, and cloud data integration. This role involves building scalable data warehouses, implementing efficient data models, and ensuring high-performance data processing in Snowflake. Key Responsibilities 1. Snowflake Development & Optimization Design and develop Snowflake databases, schemas, tables, and views following best practices. Write complex SQL queries, stored procedures, and UDFs for data transformation. Optimize query performance using clustering, partitioning, and materialized views. Implement Snowflake features (Time Travel, Zero-Copy Cloning, Streams & Tasks). 2. Data Pipeline Development Build and maintain ETL/ELT pipelines using Snowflake, Snowpark, Python, or Spark. Integrate Snowflake with cloud storage (S3, Blob) and data ingestion tools (Snowpipe). Develop CDC (Change Data Capture) and real-time data processing solutions. 3. Data Modeling & Warehousing Design star schema, snowflake schema, and data vault models in Snowflake. Implement data sharing, secure views, and dynamic data masking. Ensure data quality, consistency, and governance across Snowflake environments. 4. Performance Tuning & Troubleshooting Monitor and optimize Snowflake warehouse performance (scaling, caching, resource usage). Troubleshoot data pipeline failures, latency issues, and query bottlenecks. Work with DevOps teams to automate deployments and CI/CD pipelines. 5. Collaboration & Documentation Work closely with data analysts, BI teams, and business stakeholders to deliver data solutions. Document data flows, architecture, and technical specifications. Mentor junior developers on Snowflake best practices. Required Skills & Qualifications · 7+ years in database development, data warehousing, or ETL. · 4+ years of hands-on Snowflake development experience. · Strong SQL or Python skills for data processing. · Experience with Snowflake utilities (SnowSQL, Snowsight, Snowpark). · Knowledge of cloud platforms (AWS/Azure) and data integration tools (Coalesce, Airflow, DBT). · Certifications: SnowPro Core Certification (preferred). Preferred Skills · Familiarity with data governance and metadata management. · Familiarity with DBT, Airflow, SSIS & IICS · Knowledge of CI/CD pipelines (Azure DevOps).
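As a sketch of the Streams & Tasks / CDC pattern this role mentions, here is a minimal example run from Python with the snowflake-connector-python library (the account, warehouse, and table names are hypothetical):

    # Minimal CDC sketch in Snowflake: a stream captures changes on a raw table,
    # and a MERGE consumes them into a curated table.
    import snowflake.connector

    conn = snowflake.connector.connect(
        account="example_account", user="example_user", password="***",
        warehouse="ANALYTICS_WH", database="DEMO", schema="PUBLIC",
    )
    cur = conn.cursor()

    # Create the change stream once; it then tracks inserts/updates on raw_orders
    cur.execute("CREATE STREAM IF NOT EXISTS orders_stream ON TABLE raw_orders")

    # Consume captured changes from the stream into the curated table
    cur.execute("""
        MERGE INTO curated_orders t
        USING orders_stream s ON t.order_id = s.order_id
        WHEN MATCHED THEN UPDATE SET t.amount = s.amount, t.updated_at = CURRENT_TIMESTAMP()
        WHEN NOT MATCHED THEN INSERT (order_id, amount, updated_at)
            VALUES (s.order_id, s.amount, CURRENT_TIMESTAMP())
    """)
    conn.close()

In a production setup the MERGE would typically live inside a Snowflake Task (or an Airflow/DBT job) scheduled to run whenever the stream has data.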
Posted 1 week ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Overview TekWissen is a global workforce management provider that offers strategic talent solutions to our clients throughout India and worldwide. Our client is a company operating a marketplace for consumers, sellers, and content creators. It offers merchandise and content purchased for resale from vendors and those offered by third-party sellers. Job Title: Business Intelligence Engineer III Location: Pune Duration: 6 Months Job Type: Contract Work Type: Onsite Job Description The Top Responsibilities: Data Engineering on AWS: Design and implement scalable and secure data pipelines using AWS services such as the client's S3, AWS Glue, the client's Redshift, and the client's Athena. Ensure high-performance, reliable, and fault-tolerant data architectures. Data Modeling and Transformation: Develop and optimize dimensional data models to support various business intelligence and analytics use cases. Perform complex data transformations and enrichment using tools like AWS Glue, AWS Lambda, and Apache Spark. Business Intelligence and Reporting: Collaborate with stakeholders to understand reporting and analytics requirements. Build interactive dashboards and reports using visualization tools like the client's QuickSight. Data Governance and Quality: Implement data quality checks and monitoring processes to ensure the integrity and reliability of data. Define and enforce data policies, standards, and procedures. Cloud Infrastructure Management: Manage and maintain the AWS infrastructure required for the data and analytics platform. Optimize performance, cost, and security of the underlying cloud resources. Collaboration and Knowledge Sharing: Work closely with cross-functional teams, including data analysts, data scientists, and business users, to identify opportunities for data-driven insights. Share knowledge, best practices, and train other team members. Leadership Principles Ownership Deliver Results Insist on the Highest Standards Mandatory Requirements 3+ years of experience as a Business Intelligence Engineer or Data Engineer, with a strong focus on AWS cloud technologies. Proficient in designing and implementing data pipelines using AWS services such as S3, Glue, Redshift, Athena, and Lambda. Expertise in data modeling, dimensional modeling, and data transformation techniques. Experience in building and deploying business intelligence solutions, including the use of tools like the client's QuickSight and Tableau. Strong SQL and Python programming skills for data processing and analysis. Understanding of cloud architecture patterns, security best practices, and cost optimization on AWS. Excellent communication and collaboration skills to work effectively with cross-functional teams. Preferred Skills Hands-on experience with Apache Spark, Airflow, or other big data technologies. Knowledge of AWS DevOps practices and tools, such as AWS CodePipeline, AWS CodeBuild, and AWS CloudFormation. Familiarity with agile software development methodologies. AWS Certification (e.g., AWS Certified Data Analytics - Specialty). Certification Requirements Any Graduate TekWissen® Group is an equal opportunity employer supporting workforce diversity.
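To illustrate the pipeline-and-query side of this AWS role, here is a minimal sketch that runs an Athena query over S3-backed data using boto3 (the database, table, bucket, and region are hypothetical):

    # Minimal sketch: submit an Athena query against S3-backed data and poll for the result.
    import time
    import boto3

    athena = boto3.client("athena", region_name="us-east-1")

    qid = athena.start_query_execution(
        QueryString="SELECT region, SUM(sales) AS sales FROM orders GROUP BY region",
        QueryExecutionContext={"Database": "analytics_db"},          # hypothetical Glue database
        ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
    )["QueryExecutionId"]

    while True:
        state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(2)

    if state == "SUCCEEDED":
        rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
        print(rows[:5])

The same result set could feed a QuickSight dataset or be written back to S3 by a Glue job as part of a larger pipeline.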
Posted 1 week ago
9.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Lead the design and development of advanced refrigeration and HVAC systems for data centers. Provide technical leadership in the application of CO₂ transcritical systems for sustainable and efficient cooling. Perform thermal load calculations, equipment sizing, and system layout planning. Collaborate with electrical engineers, manufacturing engineers and field service engineers to ensure integrated and optimized cooling solutions. Conduct feasibility studies, energy modeling, and performance simulations. Oversee installation, commissioning, and troubleshooting of refrigeration systems. Ensure compliance with industry standards, safety regulations, and environmental guidelines. Prepare detailed technical documentation, specifications, and reports. Required Qualifications: Bachelor’s or Master’s degree in Mechanical Engineering, HVAC Engineering, or a related field. 7–9 years of experience in refrigeration or HVAC system design, with a focus on data center cooling. In-depth knowledge of data center thermal management, including CRAC/CRAH units, liquid cooling, and airflow management. Hands-on experience with CO₂ transcritical refrigeration systems and natural refrigerants. Strong understanding of thermodynamics, fluid mechanics, and heat transfer. Familiarity with relevant codes and standards (ASHRAE, ISO, IEC, etc.). Proficiency in design and simulation tools (e.g., AutoCAD, Revit, Pack Calculation Pro, Cycle_DX, VTB, or HVAC-specific software). Preferred Qualifications: Experience with energy efficiency optimization and sustainability initiatives. Knowledge of control systems and building automation for HVAC. Experience working in mission-critical environments or hyperscale data centers.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
Join us as a Big Data Engineer at Barclays, where you will spearhead the evolution of the digital landscape, driving innovation and excellence. You will harness cutting-edge technology to revolutionize digital offerings, ensuring unparalleled customer experiences. To be successful as a Big Data Engineer, you should have experience with: - Full Stack Software Development for large-scale, mission-critical applications. - Mastery in distributed big data systems such as Spark, Hive, Kafka streaming, Hadoop, Airflow. - Expertise in Scala, Java, Python, J2EE technologies, Microservices, Spring, Hibernate, REST APIs. - Experience with n-tier web application development and frameworks like Spring Boot, Spring MVC, JPA, Hibernate. - Proficiency with version control systems, preferably Git; GitHub Copilot experience is a plus. - Proficient in API Development using SOAP or REST, JSON, and XML. - Experience developing back-end applications with multi-process and multi-threaded architectures. - Hands-on experience with building scalable microservices solutions using integration design patterns, Dockers, Containers, and Kubernetes. - Experience in DevOps practices like CI/CD, Test Automation, Build Automation using tools like Jenkins, Maven, Chef, Git, Docker. - Experience with data processing in cloud environments like Azure or AWS. - Data Product development experience is essential. - Experience in Agile development methodologies like SCRUM. - Result-oriented with strong analytical and problem-solving skills. - Excellent verbal and written communication and presentation skills. You may be assessed on key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking, digital and technology, as well as job-specific technical skills. This role is for the Pune location. Purpose of the role: To design, develop, and improve software, utilizing various engineering methodologies, that provides business, platform, and technology capabilities for our customers and colleagues. Accountabilities: - Development and delivery of high-quality software solutions by using industry-aligned programming languages, frameworks, and tools. Ensuring that code is scalable, maintainable, and optimized for performance. - Cross-functional collaboration with product managers, designers, and other engineers to define software requirements, devise solution strategies, and ensure seamless integration and alignment with business objectives. - Collaboration with peers, participate in code reviews, and promote a culture of code quality and knowledge sharing. - Stay informed of industry technology trends and innovations and actively contribute to the organization's technology communities to foster a culture of technical excellence and growth. - Adherence to secure coding practices to mitigate vulnerabilities, protect sensitive data, and ensure secure software solutions. - Implementation of effective unit testing practices to ensure proper code design, readability, and reliability. Analyst Expectations: - Perform prescribed activities in a timely manner and to a high standard consistently driving continuous improvement. - Requires in-depth technical knowledge and experience in the assigned area of expertise. - Thorough understanding of the underlying principles and concepts within the area of expertise. - Lead and supervise a team, guiding and supporting professional development, allocating work requirements, and coordinating team resources. 
- If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviors to create an environment for colleagues to thrive and deliver to a consistently excellent standard. - For an individual contributor, develop technical expertise in the work area, acting as an advisor where appropriate. - Will have an impact on the work of related teams within the area. - Partner with other functions and business areas. - Take responsibility for end results of a team's operational processing and activities. - Escalate breaches of policies/procedure appropriately. - Take responsibility for embedding new policies/procedures adopted due to risk mitigation. - Advise and influence decision-making within their own area of expertise. - Take ownership of managing risk and strengthening controls in relation to the work you own or contribute to. All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence, and Stewardship, our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset to Empower, Challenge, and Drive, the operating manual for how we behave.
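As an illustration of the Spark-plus-Kafka streaming work listed above, here is a minimal PySpark Structured Streaming sketch (the broker addresses, topic, schema, and output paths are hypothetical):

    # Minimal Structured Streaming sketch: read events from Kafka, parse JSON,
    # and write the parsed records to Parquet with checkpointing.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, from_json
    from pyspark.sql.types import StringType, StructField, StructType, TimestampType

    spark = SparkSession.builder.appName("kafka-events").getOrCreate()

    schema = StructType([
        StructField("event_id", StringType()),
        StructField("event_type", StringType()),
        StructField("event_time", TimestampType()),
    ])

    events = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker1:9092")   # hypothetical brokers
        .option("subscribe", "payments.events")              # hypothetical topic
        .load()
        .select(from_json(col("value").cast("string"), schema).alias("e"))
        .select("e.*")
    )

    query = (
        events.writeStream.format("parquet")
        .option("path", "/data/landing/payments")            # hypothetical output path
        .option("checkpointLocation", "/data/checkpoints/payments")
        .start()
    )
    query.awaitTermination()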
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
karnataka
On-site
You are invited to join our team as a Mid-Level Data Engineer Technical Consultant with 4+ years of experience. As a part of our diverse and inclusive organization, you will be based in Bangalore, KA, working full-time in a permanent position during the general shift from Monday to Friday. In this role, you will be expected to possess strong written and oral communication skills, particularly in email correspondence. Your experience in working with Application Development teams will be invaluable, along with your ability to analyze and solve problems effectively. Proficiency in Microsoft tools such as Outlook, Excel, and Word is essential for this position. As a Data Engineer Technical Consultant, you must have at least 4 years of hands-on experience in development. Your expertise should include working with Snowflake and PySpark, writing SQL queries, utilizing Airflow, and developing in Python. Experience with DBT and integration programs will be advantageous, as well as familiarity with Excel for data analysis and Unix scripting. Your responsibilities will require a good understanding of data warehousing and practical work experience in this field. You will be accountable for various tasks including understanding requirements, coding, unit testing, integration testing, performance testing, UAT, and Hypercare Support. Collaboration with cross-functional teams across different geographies will be a key aspect of this role. If you are action-oriented, independent, and possess the required technical skills, we encourage you to submit your resume to pallavi@she-jobs.com and explore this exciting opportunity further.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
kolkata, west bengal
On-site
We are looking for a highly skilled and experienced Senior Data Engineer to join our dynamic data team. The ideal candidate will have deep expertise in Snowflake, dbt (Data Build Tool), and Python, with a strong understanding of data architecture, transformation pipelines, and data quality principles. You will play a crucial role in building and maintaining scalable data pipelines and facilitating data-driven decision-making across the organization. Your responsibilities will include designing, developing, and maintaining scalable and efficient ETL/ELT pipelines using dbt, Snowflake, and Python. You will be tasked with optimizing data models and warehouse performance in Snowflake, collaborating with data analysts, scientists, and business teams to understand data requirements and deliver high-quality datasets. Ensuring data quality, governance, and compliance across pipelines, automating data workflows, and monitoring production jobs for accuracy and reliability will be key aspects of your role. Additionally, you will participate in architectural decisions, promote best practices in data engineering, maintain documentation of data pipelines, transformations, and data models, mentor junior engineers, and contribute to team knowledge sharing. The ideal candidate should have at least 5 years of professional experience in Data Engineering, strong hands-on experience with Snowflake (data modeling, performance tuning, security features), proven experience using dbt for data transformation and modeling, proficiency in Python for data engineering tasks and scripting, a solid understanding of SQL, and experience in building and maintaining complex queries. Experience with orchestration tools like Airflow or Prefect, familiarity with version control systems like Git, strong problem-solving skills, attention to detail, excellent communication, and teamwork abilities are required. Preferred qualifications include experience working with cloud platforms such as AWS, Azure, or GCP, knowledge of data lake architecture and real-time streaming technologies, exposure to CI/CD pipelines for data deployment, and experience in agile development methodologies. Join us and be part of a team that values expertise, innovation, and collaboration in driving impactful data solutions across the organization.
Posted 1 week ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description Eucloid is looking for a senior Data Engineer with hands-on expertise in Databricks to join our Data Platform team supporting various business applications. The ideal candidate will support the development of data infrastructure on Databricks for our clients by participating in activities which may include starting from upstream and downstream technology selection to designing and building different components. Candidate will also be involved in projects like integrating data from various sources, managing big data pipelines that are easily accessible with optimized performance of the overall ecosystem. The ideal candidate is an experienced data wrangler who will support our software developers, database architects, and data analysts on business initiatives. You must be self-directed and comfortable supporting the data needs of cross-functional teams, systems, and technical solutions. Qualifications B.Tech/BS degree in Computer Science, Computer Engineering, Statistics, or other Engineering disciplines Min. 5 years of Professional work Experience, with 1+ years of hands-on experience with Databricks Highly proficient in SQL & Data model (conceptual and logical) concepts Highly proficient with Python & Spark (3+ years) Knowledge of distributed computing and cloud databases like Redshift, BigQuery etc. 2+ years of Hands-on experience with one of the top cloud platforms - AWS/GCP/Azure. Experience with Modern Data stack tools like Airflow, Terraform, dbt, Glue, Dataproc, etc. Exposure to Hadoop & Shell scripting is a plus Min 2 years, Databricks 1 year desirable, Python & Spark 1+ years, SQL only, any cloud exp 1+ year Responsibilities Design, implementation, and improvement of processes & automation of Data infrastructure Tuning of Data pipelines for reliability & performance Building tools and scripts to develop, monitor, and troubleshoot ETLs Perform scalability, latency, and availability tests on a regular basis. Perform code reviews and QA data imported by various processes. Investigate, analyze, correct, and document reported data defects. Create and maintain technical specification documentation. (ref:hirist.tech)
Posted 1 week ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About The Role We are seeking a skilled and passionate Data Engineer to join our team and drive the development of scalable data pipelines for Generative AI (GenAI) and Large Language Model (LLM)-powered applications. This role demands hands-on expertise in Spark, GCP, and data integration with modern AI APIs. What You'll Do Design and develop high-throughput, scalable data pipelines for GenAI and LLM-based solutions. Build robust ETL/ELT processes using Spark (PySpark/Scala) on Google Cloud Platform (GCP). Integrate enterprise and unstructured data with LLM APIs such as OpenAI, Gemini, and Hugging Face. Process and enrich large volumes of unstructured data, including text and document embeddings. Manage real-time and batch workflows using Airflow, Dataflow, and BigQuery. Implement and maintain best practices for data quality, observability, lineage, and API-first designs. What Sets You Apart 3+ years of experience building scalable Spark-based pipelines (PySpark or Scala). Strong hands-on experience with GCP services: BigQuery, Dataproc, Pub/Sub, Cloud Functions. Familiarity with LLM APIs, vector databases (e.g., Pinecone, FAISS), and GenAI use cases. Expertise in text processing, unstructured data handling, and performance optimization. Agile mindset and the ability to thrive in a fast-paced startup or dynamic environment. Nice To Have Experience working with embeddings and semantic search. Exposure to MLOps or data observability tools. Background in deploying production-grade AI/ML workflows (ref:hirist.tech)
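To illustrate the kind of LLM-API data enrichment this role describes, here is a minimal sketch that embeds a batch of records with the OpenAI Python SDK so they can be loaded into a vector store (the model name and downstream store are assumptions, not a prescribed stack; an API key is assumed in the environment):

    # Minimal enrichment sketch: embed text records for semantic search
    # (the vectors would then be written to FAISS, Pinecone, or similar).
    from openai import OpenAI  # assumes OPENAI_API_KEY is set in the environment

    client = OpenAI()

    records = [
        {"id": "doc-1", "text": "Quarterly revenue grew 12% year over year."},
        {"id": "doc-2", "text": "Customer churn decreased after the loyalty launch."},
    ]

    response = client.embeddings.create(
        model="text-embedding-3-small",           # assumed embedding model
        input=[r["text"] for r in records],
    )

    for record, item in zip(records, response.data):
        record["embedding"] = item.embedding      # list[float], ready for a vector store

    print(len(records[0]["embedding"]), "dimensions per vector")

In a Spark-based pipeline this call would typically run inside a batched UDF or a downstream enrichment stage rather than record by record.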
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
chennai, tamil nadu
On-site
We are looking for a Lead Data Engineer with over 8 years of experience in data engineering and software development. The ideal candidate should possess strong expertise in Python, PySpark, Airflow (Batch Jobs), HPCC, and ECL. You will be responsible for driving complex data solutions across multi-functional teams. The role requires hands-on experience in data modeling and test-driven development, and familiarity with Agile/Waterfall methodologies. As a Lead Data Engineer, you will be leading initiatives, collaborating with various teams, and converting business requirements into scalable data solutions using industry best practices in managed services or staff augmentation environments. If you meet the above qualifications and are passionate about working with data to solve complex problems, we encourage you to apply for this exciting opportunity.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
You are a Sr. Data Engineer with over 7 years of experience, specializing in Data Engineering, Python, and SQL. You will be a part of the Data Engineering team in the Enterprise Data Insights organization, responsible for building data solutions, designing ETL/ELT processes, and managing the data platform to support various stakeholders across the organization. Your role is crucial in driving technology and data-led solutions to foster growth and innovation at scale. Your responsibilities as a Senior Data Engineer include collaborating with cross-functional stakeholders to prioritize requests, identify areas for improvement, and provide recommendations. You will lead the analysis, design, and implementation of data solutions, including constructing data models and ETL processes. Furthermore, you will engage in fostering collaboration with corporate engineering, product teams, and other engineering groups, while also leading and mentoring engineering discussions and advocating for best practices. To excel in this role, you should possess a degree in Computer Science or a related technical field and have a proven track record of over 5 years in Data Engineering. Your expertise should include designing and constructing ETL/ELT processes, managing data solutions within an SLA-driven environment, and developing data products and APIs. Proficiency in SQL/NoSQL databases, particularly Snowflake, Redshift, or MongoDB, along with strong programming skills in Python, is essential. Additionally, experience with columnar OLAP databases, data modeling, and tools like dbt, Airflow, Fivetran, GitHub, and Tableau reporting will be beneficial. Good communication and interpersonal skills are crucial for effectively collaborating with business stakeholders and translating requirements into actionable insights. An added advantage would be a good understanding of Salesforce & Netsuite systems, experience in SaaS environments, designing and deploying ML models, and familiarity with events and streaming data. Join us in driving data-driven solutions and experiences to shape the future of technology and innovation.
Posted 1 week ago
6.0 - 12.0 years
0 Lacs
thiruvananthapuram, kerala
On-site
You will be responsible for leveraging your 6-12 years of experience in Data Warehouse and Big Data technologies to contribute to our team in Trivandrum. Your expertise in Programming Languages such as Scala, Spark, PySpark, Python, and SQL, along with Big Data Technologies like Hadoop, Hive, Pig, and MapReduce will be crucial for this role. Additionally, your proficiency in ETL & Data Engineering including Data Warehouse Design, ETL, Data Analytics, Data Mining, and Data Cleansing will be highly valued. As a part of our team, you will be expected to have hands-on experience with Cloud Platforms like GCP and Azure, as well as tools & frameworks such as Apache Hadoop, Airflow, Kubernetes, and Containers. Your skills in data pipeline creation, optimization, troubleshooting, and data validation will play a key role in ensuring the efficiency and accuracy of our data processes. Ideally, you should have at least 4 years of experience working with Scala, Spark, PySpark, Python, and SQL, in addition to 3+ years of strategic data planning, governance, and standard procedures. Experience in Agile environments and a good understanding of Java, ReactJS, and Node.js will be beneficial for this role. Moreover, your ability to work with data analytics, machine learning, and optimization will be advantageous. Knowledge of managing big data workloads, containerized environments, and experience in analyzing large datasets to optimize data workflows will further strengthen your profile for this position. UST is a global digital transformation solutions provider with a track record of working with some of the world's best companies for over 20 years. With a team of over 30,000 employees in 30 countries, we are committed to making a real impact through transformation. If you are passionate about innovation, agility, and driving positive change through technology, we invite you to join us on this journey of creating boundless impact and touching billions of lives in the process.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
hyderabad, telangana
On-site
You are an experienced AI Application Architect responsible for leading the design and integration of AI/ML capabilities into enterprise applications. Your main objective is to architect intelligent, scalable, and reliable AI solutions that are in line with both business goals and technical strategies. You will work closely with data scientists, ML engineers, and application developers to ensure seamless end-to-end solutions. Additionally, you will be required to select and implement appropriate AI frameworks, APIs, LLMs, and infrastructure tools, as well as drive architecture decisions related to GenAI, NLP, CV, predictive analytics, and agentic AI systems. You will also establish MLOps pipelines for training, testing, and deploying models at scale while ensuring compliance with AI ethics, privacy laws, and data governance policies. Furthermore, you will evaluate emerging technologies, tools, and platforms for enterprise use and act as a technical advisor to leadership on AI opportunities and risks. In terms of required skills, you should have a strong background in AI/ML architecture and solution design, along with hands-on experience in ML frameworks such as TensorFlow, PyTorch, and Scikit-learn. Proficiency in LLMs and generative AI tools like OpenAI, Azure OpenAI, LangChain, and Hugging Face is necessary. A solid programming background in Python (FastAPI, Flask) and familiarity with Java/Node.js are essential. Experience with cloud platforms (AWS/GCP/Azure) and ML toolkits like SageMaker, Azure ML, and Vertex AI is also crucial. Additionally, a good understanding of microservices, REST APIs, GraphQL, and event-driven architecture is required. Knowledge of vector databases such as Pinecone, FAISS, Chroma, or Weaviate, as well as proficiency with CI/CD, Docker, Kubernetes, MLflow, Airflow, or similar tools, is expected. Preferred qualifications include experience with multi-agent systems, LangChain, Autogen, or Agentic AI frameworks, familiarity with data governance, model drift detection, and performance monitoring, and prior experience in industries like BFSI, Retail, Healthcare, or Manufacturing. Education-wise, a Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field is required. Certifications in AI/ML, cloud (AWS/GCP/Azure), or MLOps are considered a plus.
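As a sketch of the application-side integration this architect role covers, here is a minimal FastAPI service that fronts a generative model call behind a REST endpoint (the backend call is a hypothetical placeholder rather than a specific vendor API):

    # Minimal AI-application sketch: a FastAPI endpoint that accepts a prompt,
    # calls a pluggable model backend, and returns the generated answer.
    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI(title="GenAI Gateway")

    class AskRequest(BaseModel):
        prompt: str
        max_tokens: int = 256

    class AskResponse(BaseModel):
        answer: str

    def call_model(prompt: str, max_tokens: int) -> str:
        # Placeholder backend: swap in an LLM client (OpenAI, Azure OpenAI, Bedrock, ...)
        return f"[stub answer to: {prompt[:40]}...]"

    @app.post("/ask", response_model=AskResponse)
    def ask(req: AskRequest) -> AskResponse:
        return AskResponse(answer=call_model(req.prompt, req.max_tokens))

    # Local run (assumption): uvicorn app:app --reload

Keeping the model call behind a small interface like call_model is one way such an architecture stays vendor-agnostic while retrieval, guardrails, and monitoring are layered around it.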
Posted 1 week ago
7.0 - 10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Work Location: Hyderabad What Gramener offers you Gramener will offer you an inviting workplace, talented colleagues from diverse backgrounds, career path, steady growth prospects with great scope to innovate. Our goal is to create an ecosystem of easily configurable data applications focused on storytelling for public and private use. Cloud Lead – Analytics & Data Products We’re looking for a Cloud Architect/Lead to design, build, and manage scalable AWS infrastructure that powers our analytics and data product initiatives. This role focuses on automating infrastructure provisioning, application/API hosting, and enabling data and GenAI workloads through a modern, secure cloud environment. Roles and Responsibilities Design and provision AWS infrastructure using Terraform or AWS CloudFormation to support evolving data product needs. Develop and manage CI/CD pipelines using Jenkins, AWS CodePipeline, CodeBuild, or GitHub Actions. Deploy and host internal tools, APIs, and applications using ECS, EKS, Lambda, API Gateway, and ELB. Provision and support analytics and data platforms using S3, Glue, Redshift, Athena, Lake Formation, and orchestration tools like Step Functions or Apache Airflow (MWAA). Implement cloud security, networking, and compliance using IAM, VPC, KMS, CloudWatch, CloudTrail, and AWS Config. Collaborate with data engineers, ML engineers, and analytics teams to align infrastructure with application and data product requirements. Support GenAI infrastructure, including Amazon Bedrock, SageMaker, or integrations with APIs like OpenAI. Skills And Qualifications 7-10 years of experience in cloud engineering, DevOps, or cloud architecture roles. Hands-on expertise with the AWS ecosystem and tools listed above. Proficiency in scripting (e.g., Python, Bash) and infrastructure automation. Experience deploying containerized workloads using Docker, ECS, EKS, or Fargate. Familiarity with data engineering and GenAI workflows is a plus. AWS certifications (e.g., Solutions Architect, DevOps Engineer) are preferred.
Posted 1 week ago
0 years
0 Lacs
India
On-site
Job Summary: We are seeking a talented and driven Machine Learning Engineer to design, build, and deploy ML models that solve complex business problems and enhance decision-making capabilities. You will work closely with data scientists, engineers, and product teams to develop scalable machine learning pipelines, deploy models into production, and continuously improve their performance. Key Responsibilities: Design, develop, and deploy machine learning models for classification, regression, clustering, recommendation, NLP, or computer vision tasks. Collaborate with data scientists to prepare and preprocess large-scale datasets for training and evaluation. Implement and optimize machine learning pipelines and workflows using tools like MLflow, Airflow, or Kubeflow. Integrate models into production environments and ensure model performance, monitoring, and retraining. Conduct A/B testing and performance evaluations to validate model accuracy and business impact. Stay up-to-date with the latest advancements in ML/AI research and tools. Write clean, efficient, and well-documented code for reproducibility and scalability. Requirements: Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field. Strong knowledge of machine learning algorithms, data structures, and statistical methods. Proficient in Python and ML libraries/frameworks (e.g., scikit-learn, TensorFlow, PyTorch, XGBoost). Experience with data manipulation libraries (e.g., pandas, NumPy) and visualization tools (e.g., Matplotlib, Seaborn). Familiarity with cloud platforms (AWS, GCP, or Azure) and model deployment tools. Experience with version control systems (Git) and software engineering best practices. Preferred Qualifications: Experience in deep learning, natural language processing (NLP), or computer vision. Knowledge of big data technologies like Spark, Hadoop, or Hive. Exposure to containerization (Docker), orchestration (Kubernetes), and CI/CD pipelines. Familiarity with MLOps practices and tools.
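To illustrate the training-and-tracking workflow described above, here is a minimal sketch combining scikit-learn with MLflow experiment tracking (the experiment name, parameters, and metric are illustrative assumptions, not a prescribed setup):

    # Minimal sketch: train a model and log parameters, metrics, and the artifact to MLflow.
    import mlflow
    import mlflow.sklearn
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    mlflow.set_experiment("demo-classifier")        # hypothetical experiment name

    with mlflow.start_run():
        params = {"n_estimators": 200, "max_depth": 5}
        model = RandomForestClassifier(**params, random_state=0).fit(X_train, y_train)

        mlflow.log_params(params)
        mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))
        mlflow.sklearn.log_model(model, "model")    # versioned artifact for later deployment

Logging every run this way is what makes later steps - A/B comparisons, retraining, and monitored production deployment - reproducible.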
Posted 1 week ago
5.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Focused on relationships, you are building meaningful client connections and learning how to manage and inspire others. Navigating increasingly complex situations, you are growing your personal brand, deepening technical expertise, and building awareness of your strengths. You are expected to anticipate the needs of your teams and clients and to deliver quality. Embracing increased ambiguity, you are comfortable when the path forward isn’t clear, you ask questions, and you use these moments as opportunities to grow.

Skills
Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include, but are not limited to:
- Respond effectively to the diverse perspectives, needs, and feelings of others.
- Use a broad range of tools, methodologies, and techniques to generate new ideas and solve problems.
- Use critical thinking to break down complex concepts.
- Understand the broader objectives of your project or role and how your work fits into the overall strategy.
- Develop a deeper understanding of the business context and how it is changing.
- Use reflection to develop self-awareness, enhance strengths, and address development areas.
- Interpret data to inform insights and recommendations.
- Uphold and reinforce professional and technical standards (e.g., refer to specific PwC tax and audit guidance), the Firm's code of conduct, and independence requirements.

Below are examples of the role/skills profiles used by the UK firm when hiring for the Data Analytics roles indicated above.

Job Description & Summary
Operate is the firm's delivery engine, serving as the orchestrator of services across the organisation. It is a global team of delivery professionals united by a commitment to excellence and impact. Operate has built a strong reputation for collaboration, mobilising quickly, and effectively getting tasks done. It aims to build a world-class delivery capability, focusing on evolving operational delivery, embedding automation and AI, and raising the bar for quality and consistency. The goal is to add strategic value for clients and contribute to the firm’s ambition of pre-eminence in the market. Team members in Operate are provided with meaningful opportunities to lead, learn, and grow, embracing a future-ready workforce trained in cutting-edge technology. Operate ensures clients can access a single front door to global delivery chains, providing tailored, high-quality solutions to meet evolving challenges.

The role will be based in Kolkata. However, with a diverse range of clients and projects, you'll occasionally have the exciting opportunity to work in various locations, offering exposure to different industries and cultures. This flexibility opens doors to unique networking experiences and accelerated career growth, enriching your professional journey. Your willingness and ability to do this will be discussed as part of the recruitment process. Candidates who prefer not to travel will still be considered.
Role Description
As a pivotal member of our data team, Senior Associates are key in shaping and refining data management and analytics functions, including our expanding Data Services. You will be instrumental in helping us deliver value-driven insights by designing, integrating, and analysing cutting-edge data systems. The role emphasises leveraging the latest technologies, particularly within the Microsoft ecosystem, to enhance operational capabilities and drive innovation. You'll work on diverse and challenging projects, allowing you to actively influence strategic decisions and develop innovative solutions. This, in turn, paves the way for unparalleled professional growth and the development of a forward-thinking mindset. As you contribute to our Data Services, you'll have a front-row seat to the future of data analytics, providing an enriching environment to build expertise and expand your career horizons.

Key Activities Include, But Are Not Limited To
- Design and implement data integration processes.
- Manage data projects with multiple stakeholders and tight timelines.
- Develop data models and frameworks that enhance data governance and efficiency.
- Address challenges related to data integration, quality, and management processes.
- Implement best practices in automation to streamline data workflows (a short PySpark sketch follows this listing).
- Engage with key stakeholders to extract, interpret, and translate data requirements into meaningful insights and solutions.
- Engage with clients to understand and deliver data solutions.
- Work collaboratively to meet project goals.
- Lead and mentor junior team members.

Essential Requirements
- More than 5 years of experience in data analytics, with proficiency in managing large datasets and crafting detailed reports.
- Proficiency in Python.
- Experience working within a Microsoft Azure environment.
- Experience with data warehousing and data modelling (e.g., dimensional modelling, data mesh, data fabric).
- Proficiency in PySpark/Databricks/Snowflake/MS Fabric, and intermediate SQL skills.
- Experience with orchestration tools such as Azure Data Factory (ADF), Airflow, or DBT.
- Familiarity with DevOps practices, specifically creating CI/CD and release pipelines.
- Knowledge of Azure DevOps tools and GitHub.
- Knowledge of Azure SQL DB or any other RDBMS system.
- Basic knowledge of GenAI.

Additional Skills / Experiences That Will Be Beneficial
- Understanding of data governance frameworks.
- Awareness of Power Automate functionalities.

Why Join Us?
This role isn't just about the technical expertise—it’s about being part of something transformational. You'll be part of a vibrant team where growth opportunities are vast and where your contributions directly impact our mission to break new ground in data services. With a work culture that values innovation, collaboration, and personal growth, joining PwC's Operate Data Analytics team offers you the chance to shape the future of operational and data service solutions with creativity and foresight. Dive into exciting projects, challenge the status quo, and drive the narrative forward!
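For illustration only, a minimal PySpark sketch of the kind of batch transformation this profile describes, assuming a Databricks/Fabric-style Spark environment; the file paths and column names are hypothetical.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_summary").getOrCreate()

# Read a raw CSV landed by an upstream integration process and fix column types.
orders = (
    spark.read.option("header", True)
    .csv("/data/raw/orders.csv")
    .withColumn("amount", F.col("amount").cast("double"))
)

# Aggregate to a curated daily summary suitable for reporting.
daily = (
    orders.withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date", "region")
    .agg(F.sum("amount").alias("total_amount"), F.count("*").alias("order_count"))
)

daily.write.mode("overwrite").parquet("/data/curated/orders_daily")

In an ADF- or Airflow-orchestrated pipeline, a job like this would typically run as one activity, with data quality checks and publication to the warehouse as downstream steps.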
Posted 1 week ago
0 years
0 Lacs
Kolkata, West Bengal, India
On-site
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Driven by curiosity, you are a reliable, contributing member of a team. In our fast-paced environment, you are expected to adapt to working with a variety of clients and team members, each presenting varying challenges and scope. Every experience is an opportunity to learn and grow. You are expected to take ownership and consistently deliver quality work that drives value for our clients and success as a team. As you navigate through the Firm, you build a brand for yourself, opening doors to more opportunities.

Skills
Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include, but are not limited to:
- Apply a learning mindset and take ownership for your own development.
- Appreciate diverse perspectives, needs, and feelings of others.
- Adopt habits to sustain high performance and develop your potential.
- Actively listen, ask questions to check understanding, and clearly express ideas.
- Seek, reflect, act on, and give feedback.
- Gather information from a range of sources to analyse facts and discern patterns.
- Commit to understanding how the business works and building commercial awareness.
- Learn and apply professional and technical standards (e.g., refer to specific PwC tax and audit guidance), and uphold the Firm's code of conduct and independence requirements.

Below are examples of the role/skills profiles used by the UK firm when hiring for the Data Analytics roles indicated above.

Job Description & Summary
Operate is the firm's delivery engine, serving as the orchestrator of services across the organisation. It is a global team of delivery professionals united by a commitment to excellence and impact. Operate has built a strong reputation for collaboration, mobilising quickly, and effectively getting tasks done. It aims to build a world-class delivery capability, focusing on evolving operational delivery, embedding automation and AI, and raising the bar for quality and consistency. The goal is to add strategic value for clients and contribute to the firm’s ambition of pre-eminence in the market. Team members in Operate are provided with meaningful opportunities to lead, learn, and grow, embracing a future-ready workforce trained in cutting-edge technology. Operate ensures clients can access a single front door to global delivery chains, providing tailored, high-quality solutions to meet evolving challenges.

The role will be based in Kolkata. However, with a diverse range of clients and projects, you'll occasionally have the exciting opportunity to work in various locations, offering exposure to different industries and cultures. This flexibility opens doors to unique networking experiences and accelerated career growth, enriching your professional journey. Your willingness and ability to do this will be discussed as part of the recruitment process. Candidates who prefer not to travel will still be considered.
Role Description
As an integral part of our data team, Associate 2 professionals contribute significantly to the development of data management and analytics functions, including our growing Data Services. In this role, you'll assist engagement teams in delivering meaningful insights by helping design, integrate, and analyse data systems. You will work with the latest technologies, especially within the Microsoft ecosystem, to enhance our operational capabilities. Working on a variety of projects, you'll have the chance to contribute your ideas and support innovative solutions. This experience offers opportunities for professional growth and helps cultivate a forward-thinking mindset. As you support our Data Services, you'll gain exposure to the evolving field of data analytics, providing an excellent foundation for building expertise and expanding your career journey.

Key Activities Include, But Are Not Limited To
- Assisting in the development of data models and frameworks to enhance data governance and efficiency.
- Supporting efforts to address data integration, quality, and management process challenges (a short data quality sketch follows this listing).
- Participating in the implementation of best practices in automation to streamline data workflows.
- Collaborating with stakeholders to gather, interpret, and translate data requirements into practical insights and solutions.
- Supporting the management of data projects alongside senior team members.
- Assisting in engaging with clients to understand their data needs.
- Working effectively as part of a team to achieve project goals.

Essential Requirements
- At least two years of experience in data analytics, with a focus on handling large datasets and supporting the creation of detailed reports.
- Familiarity with Python and experience working within a Microsoft Azure environment.
- Exposure to data warehousing and data modelling techniques (e.g., dimensional modelling).
- Basic proficiency in PySpark and Databricks/Snowflake/MS Fabric, with foundational SQL skills.
- Experience with orchestration tools like Azure Data Factory (ADF), Airflow, or DBT.
- Awareness of DevOps practices, including introducing CI/CD and release pipelines.
- Familiarity with Azure DevOps tools and GitHub.
- Basic understanding of Azure SQL DB or other RDBMS systems.
- Introductory knowledge of GenAI concepts.

Additional Skills / Experiences That Will Be Beneficial
- Understanding of data governance frameworks.
- Awareness of Power Automate functionalities.

Why Join Us?
This role is not just about the technical expertise—it’s about being part of something transformational. You'll be part of a vibrant team where growth opportunities are vast and where your contributions directly impact our mission to break new ground in data services. With a work culture that values innovation, collaboration, and personal growth, joining PwC's Operate Data Analytics team offers you the chance to shape the future of operational and data service solutions with creativity and foresight. Dive into exciting projects, challenge the status quo, and drive the narrative forward!
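For illustration only, a minimal pandas sketch of the kind of data quality check mentioned in the activities above; the file name, columns, and rules are hypothetical placeholders.

import pandas as pd

# Load a (hypothetical) customer extract produced by an integration pipeline.
df = pd.read_csv("customers.csv")

# Simple rule-based quality checks before the data moves downstream.
checks = {
    "no_duplicate_ids": df["customer_id"].is_unique,
    "no_missing_emails": df["email"].notna().all(),
    "valid_signup_dates": pd.to_datetime(df["signup_date"], errors="coerce").notna().all(),
}

for name, passed in checks.items():
    print(f"{name}: {'PASS' if passed else 'FAIL'}")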
Posted 1 week ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Driven by curiosity, you are a reliable, contributing member of a team. In our fast-paced environment, you are expected to adapt to working with a variety of clients and team members, each presenting varying challenges and scope. Every experience is an opportunity to learn and grow. You are expected to take ownership and consistently deliver quality work that drives value for our clients and success as a team. As you navigate through the Firm, you build a brand for yourself, opening doors to more opportunities.

Skills
Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include, but are not limited to:
- Apply a learning mindset and take ownership for your own development.
- Appreciate diverse perspectives, needs, and feelings of others.
- Adopt habits to sustain high performance and develop your potential.
- Actively listen, ask questions to check understanding, and clearly express ideas.
- Seek, reflect, act on, and give feedback.
- Gather information from a range of sources to analyse facts and discern patterns.
- Commit to understanding how the business works and building commercial awareness.
- Learn and apply professional and technical standards (e.g., refer to specific PwC tax and audit guidance), and uphold the Firm's code of conduct and independence requirements.

Below are examples of the role/skills profiles used by the UK firm when hiring for the Data Analytics roles indicated above.

Job Description & Summary
Operate is the firm's delivery engine, serving as the orchestrator of services across the organisation. It is a global team of delivery professionals united by a commitment to excellence and impact. Operate has built a strong reputation for collaboration, mobilising quickly, and effectively getting tasks done. It aims to build a world-class delivery capability, focusing on evolving operational delivery, embedding automation and AI, and raising the bar for quality and consistency. The goal is to add strategic value for clients and contribute to the firm’s ambition of pre-eminence in the market. Team members in Operate are provided with meaningful opportunities to lead, learn, and grow, embracing a future-ready workforce trained in cutting-edge technology. Operate ensures clients can access a single front door to global delivery chains, providing tailored, high-quality solutions to meet evolving challenges.

The role will be based in Kolkata. However, with a diverse range of clients and projects, you'll occasionally have the exciting opportunity to work in various locations, offering exposure to different industries and cultures. This flexibility opens doors to unique networking experiences and accelerated career growth, enriching your professional journey. Your willingness and ability to do this will be discussed as part of the recruitment process. Candidates who prefer not to travel will still be considered.
Role Description
As an integral part of our data team, Associate 2 professionals contribute significantly to the development of data management and analytics functions, including our growing Data Services. In this role, you'll assist engagement teams in delivering meaningful insights by helping design, integrate, and analyse data systems. You will work with the latest technologies, especially within the Microsoft ecosystem, to enhance our operational capabilities. Working on a variety of projects, you'll have the chance to contribute your ideas and support innovative solutions. This experience offers opportunities for professional growth and helps cultivate a forward-thinking mindset. As you support our Data Services, you'll gain exposure to the evolving field of data analytics, providing an excellent foundation for building expertise and expanding your career journey.

Key Activities Include, But Are Not Limited To
- Assisting in the development of data models and frameworks to enhance data governance and efficiency.
- Supporting efforts to address data integration, quality, and management process challenges.
- Participating in the implementation of best practices in automation to streamline data workflows.
- Collaborating with stakeholders to gather, interpret, and translate data requirements into practical insights and solutions.
- Supporting the management of data projects alongside senior team members.
- Assisting in engaging with clients to understand their data needs.
- Working effectively as part of a team to achieve project goals.

Essential Requirements
- At least two years of experience in data analytics, with a focus on handling large datasets and supporting the creation of detailed reports.
- Familiarity with Python and experience working within a Microsoft Azure environment.
- Exposure to data warehousing and data modelling techniques (e.g., dimensional modelling).
- Basic proficiency in PySpark and Databricks/Snowflake/MS Fabric, with foundational SQL skills.
- Experience with orchestration tools like Azure Data Factory (ADF), Airflow, or DBT.
- Awareness of DevOps practices, including introducing CI/CD and release pipelines.
- Familiarity with Azure DevOps tools and GitHub.
- Basic understanding of Azure SQL DB or other RDBMS systems.
- Introductory knowledge of GenAI concepts.

Additional Skills / Experiences That Will Be Beneficial
- Understanding of data governance frameworks.
- Awareness of Power Automate functionalities.

Why Join Us?
This role is not just about the technical expertise—it’s about being part of something transformational. You'll be part of a vibrant team where growth opportunities are vast and where your contributions directly impact our mission to break new ground in data services. With a work culture that values innovation, collaboration, and personal growth, joining PwC's Operate Data Analytics team offers you the chance to shape the future of operational and data service solutions with creativity and foresight. Dive into exciting projects, challenge the status quo, and drive the narrative forward!
Posted 1 week ago