
6114 Airflow Jobs - Page 36

JobPe aggregates results for easy application access, but you actually apply on the job portal directly.

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About The Role
We are looking for a Senior Data Engineer to lead the design and implementation of scalable data infrastructure and engineering practices. This role will be critical in laying down the architectural foundations for advanced analytics and AI/ML use cases across global business units. You’ll work closely with the Data Science Lead, Product Manager, and other cross-functional stakeholders to ensure data systems are robust, secure, and future-ready.

Key Responsibilities
Architect and implement end-to-end data infrastructure including ingestion, transformation, storage, and access layers to support enterprise-scale analytics and machine learning. Define and enforce data engineering standards, design patterns, and best practices across the CoE. Lead the evaluation and selection of tools, frameworks, and platforms (cloud, open source, commercial) for scalable and secure data processing. Work with data scientists to enable efficient feature extraction, experimentation, and model deployment pipelines. Design for real-time and batch processing architectures, including support for streaming data and event-driven workflows. Own the data quality, lineage, and governance frameworks to ensure trust and traceability in data pipelines. Collaborate with central IT, data platform teams, and business units to align on data strategy, infrastructure, and integration patterns. Mentor and guide junior engineers as the team expands, creating a culture of high performance and engineering excellence.

Qualifications
10+ years of hands-on experience in data engineering, data architecture, or platform development. Strong expertise in building distributed data pipelines using tools like Spark, Kafka, Airflow, or equivalent orchestration frameworks. Deep understanding of data modeling, data lake/lakehouse architectures, and scalable data warehousing (e.g., Snowflake, BigQuery, Redshift). Advanced proficiency in Python and SQL, with working knowledge of Java or Scala preferred. Strong experience working on cloud-native data architectures (AWS, GCP, or Azure) including serverless, storage, and compute optimization. Proven experience in architecting ML/AI-ready data environments, supporting MLOps pipelines and production-grade data flows. Familiarity with DevOps practices, CI/CD for data, and infrastructure-as-code (e.g., Terraform) is a plus. Excellent problem-solving skills and the ability to communicate technical solutions to non-technical stakeholders.
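For context on the orchestration work this posting describes, a minimal Airflow DAG sketch is shown below, assuming Airflow 2.x-style APIs; the DAG name, tasks, and schedule are illustrative placeholders rather than the employer's actual pipeline.

```python
# Minimal Airflow DAG sketch of a daily batch pipeline. All names, the schedule,
# and the task bodies are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_orders(**context):
    # Placeholder: pull one day's raw records from the source system.
    print(f"extracting orders for {context['ds']}")


def transform_orders(**context):
    # Placeholder: cleanse and enrich before loading to the warehouse.
    print(f"transforming orders for {context['ds']}")


with DAG(
    dag_id="orders_daily_batch",       # hypothetical pipeline name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
    tags=["example"],
) as dag:
    extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
    transform = PythonOperator(task_id="transform_orders", python_callable=transform_orders)

    extract >> transform               # simple linear dependency
```

In practice the Python callables would be replaced by Spark submits, warehouse loads, or sensor-gated tasks, with retries and SLAs configured per task.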

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

Remote

HackerOne is a global leader in offensive security solutions. Our HackerOne Platform combines AI with the ingenuity of the largest community of security researchers to find and fix security, privacy, and AI vulnerabilities across the software development lifecycle. The platform offers bug bounty, vulnerability disclosure, pentesting, AI red teaming, and code security. We are trusted by industry leaders like Amazon, Anthropic, Crypto.com, General Motors, GitHub, Goldman Sachs, Uber, and the U.S. Department of Defense. HackerOne was named a Best Workplace for Innovators by Fast Company in 2023 and a Most Loved Workplace for Young Professionals in 2024. HackerOne Values HackerOne is dedicated to fostering a strong and inclusive culture. HackerOne is Customer Obsessed and prioritizes customer outcomes in our decisions and actions. We Default to Disclosure by operating with transparency and integrity, ensuring trust and accountability. Employees, researchers, customers, and partners Win Together by fostering empowerment, inclusion, respect, and accountability. Data Engineer, Enterprise Data & AI Location: Pune, India This role requires the candidate to be based in Pune and work from an office 4 days a week. Please only apply if you're okay with these requirements. *** Position Summary HackerOne is seeking a Data Engineer, Enterprise Data & AI to join our DataOne team. You will lead the discovery, architecture, and development of high-impact, high-performance, scalable source of truth data marts and data products. Joining our growing, distributed organization, you'll be instrumental in building the foundation that powers HackerOne's one source of truth. As a Data Engineer, Enterprise Data & AI, you'll be able to lead challenging projects and foster collaboration across the company. Leveraging your extensive technological expertise, domain knowledge, and dedication to business objectives, you'll drive innovation to propel HackerOne forward. DataOne democratizes source-of-truth information and insights to enable all Hackeronies to ask the right questions, tell cohesive stories, and make rigorous decisions so that HackerOne can delight our Customers and empower the world to build a safer internet . The future is one where every Hackeronie is a catalyst for positive change , driving data-informed innovation while fostering our culture of transparency, collaboration, integrity, excellence, and respect for all . What You Will Do Your first 30 days will focus on getting to know HackerOne. You will join your new squad and begin onboarding - learn our technology stack (Python, Airflow, Snowflake, DBT, Meltano, Fivetran, Looker, AWS), and meet our Hackeronies. Within 60 days, you will deliver impact on a company level with consistent contribution to high-impact, high-performance, scalable source of truth data marts and data products. Within 90 days, you will drive the continuous evolution and innovation of data at HackerOne, identifying and leading new initiatives. Additionally, you foster cross-departmental collaboration to enhance these efforts. Deliver impact by developing the roadmap for continuously and iteratively launching high-impact, high-performance, scalable source of truth data marts and data products, and by leading and delivering cross-functional product and technical initiatives. 
Be a technical paragon and cross-functional force multiplier, autonomously determining where to apply focus, contributing at all levels, elevating your squad, and designing solutions to ambiguous business challenges, in a fast-paced early-stage environment. Drive continuous evolution and innovation, the adoption of emerging technologies, and the implementation of industry best practices. Champion a higher bar for discoverability, usability, reliability, timeliness, consistency, validity, uniqueness, simplicity, completeness, integrity, security, and compliance of information and insights across the company. Provide technical leadership and mentorship, fostering a culture of continuous learning and growth. Minimum Qualifications 5+ years experience as an Analytics Engineer, Business Intelligence Engineer, Data Engineer, or similar role w/ proven track record of launching source of truth data marts. 5+ years of experience building and optimizing data pipelines, products, and solutions. Must be flexible to align with occasional evening meetings in USA timezone. Extensive experience working with various data technologies and tools such as Airflow, Snowflake, Meltano, Fivetran, DBT, and AWS. Strong proficiency in at least one data programming language such as Python or R. Expert in SQL for data manipulation in a fast-paced work environment. Expert in using Git for version control. Expert in creating compelling data stories using data visualization tools such as Looker, Tableau, Sigma, Domo, or PowerBI. Proven track record of having substantial impact across the company, as well as externally for the company, demonstrating your ability to drive positive change and achieve significant results. English fluency, excellent communication skills, and can present data-driven narratives in verbal, presentation, and written formats. Passion for working backwards from the Customer and empathy for business stakeholders. Experience shaping the strategic vision for data. Experience working with Agile and iterative development processes. Preferred Qualifications Experience working within and with data from business applications such as Salesforce, Clari, Gainsight, Workday, GitLab, Slack, or Freshservice. Proven track record of driving innovation, adopting emerging technologies and implementing industry best practices. Thrive on solving for ambiguous problem statements in an early-stage environment. Experience designing advanced data visualizations and data-rich interfaces in Figma or equivalent. Compensation Bands: Pune, India ₹3.7M – ₹4.6M Offers Equity Job Benefits: Health (medical, vision, dental), life, and disability insurance* Equity stock options Retirement plans Paid public holidays and unlimited PTO Paid maternity and parental leave Leaves of absence (including caregiver leave and leave under CO's Healthy Families and Workplaces Act) Employee Assistance Program Flexible Work Stipend Eligibility may differ by country We're committed to building a global team! For certain roles outside the United States, U.K., and the Netherlands, we partner with Remote.com as our Employer of Record (EOR). Visa/work permit sponsorship is not available. Employment at HackerOne is contingent on a background check. 
HackerOne is an Equal Opportunity Employer in the terms and conditions of employment for all employees and job applicants without regard to race, color, religion, sex, sexual orientation, age, gender identity or gender expression, national origin, pregnancy, disability or veteran status, or any other protected characteristic as outlined by international, federal, state, or local laws. This policy applies to all HackerOne employment practices, including hiring, recruiting, promotion, termination, layoff, recall, leave of absence, compensation, benefits, training, and apprenticeship. HackerOne makes hiring decisions based solely on qualifications, merit, and business needs at the time. For US based roles only: Pursuant to the San Francisco Fair Chance Ordinance, all qualified applicants with arrest and conviction records will be considered for the position.

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Position Title: GCP Data Engineer 34306 Job Type: Full-Time Work Mode: Hybrid Location: Chennai Budget: ₹18–20 LPA Notice Period: Immediate Joiners Preferred Role Overview We are seeking a proactive Full Stack Data Engineer with a strong focus on Google Cloud Platform (GCP) and data engineering tools. The ideal candidate will contribute to building analytics products supporting supply chain insights and will be responsible for developing cloud-based data pipelines, APIs, and user interfaces. The role demands high standards of software engineering, agile practices like Test-Driven Development (TDD), and experience in modern data architectures. Key Responsibilities Design, build, and deploy scalable data pipelines and analytics platforms using GCP tools like BigQuery, Dataflow, Dataproc, Data Fusion, and Cloud SQL. Implement and maintain Infrastructure as Code (IaC) using Terraform and CI/CD pipelines using Tekton. Develop robust APIs using Python, Java, and Spring Boot, and deliver frontend interfaces using Angular, React, or Vue. Build and support data integration workflows using Airflow, PySpark, and PostgreSQL. Collaborate with cross-functional teams in an Agile environment, leveraging Jira, paired programming, and TDD. Ensure cloud deployments are secure, scalable, and performant on GCP. Mentor team members and promote continuous learning, clean code practices, and Agile principles. Mandatory Skills GCP services: BigQuery, Dataflow, Dataproc, Data Fusion, Cloud SQL Programming: Python, Java, Spring Boot Frontend: Angular, React, Vue, TypeScript, JavaScript Data Orchestration: Airflow, PySpark DevOps/CI-CD: Terraform, Tekton, Jenkins Databases: PostgreSQL, Cloud SQL, NoSQL API development and integration Experience 5+ years in software/data engineering Minimum 1 year in GCP-based deployment and cloud architecture Education Bachelor’s or Master’s in Computer Science, Engineering, or related technical discipline Desired Traits Passion for clean, maintainable code Strong problem-solving skills Agile mindset with an eagerness to mentor and collaborate Skills: typescript,data fusion,terraform,java,spring boot,dataflow,data integration,cloud sql,javascript,bigquery,react,postgresql,nosql,vue,data,pyspark,dataproc,sql,cloud,angular,python,tekton,api development,gcp services,jenkins,airflow,gcp
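As a rough illustration of the Dataflow-style pipeline work listed above, here is a small Apache Beam sketch in Python; the bucket paths and record layout are assumed for the example, and on GCP it would run under the Dataflow runner.

```python
# Hedged Apache Beam sketch: read raw CSV lines, keep valid records, and write a
# cleaned output. Paths and the record layout are hypothetical.
import apache_beam as beam


def parse_line(line: str) -> dict:
    order_id, amount = line.split(",")
    return {"order_id": order_id, "amount": float(amount)}


with beam.Pipeline() as pipeline:      # DirectRunner locally, DataflowRunner on GCP
    (
        pipeline
        | "Read" >> beam.io.ReadFromText("gs://example-bucket/raw/orders.csv")
        | "Parse" >> beam.Map(parse_line)
        | "FilterValid" >> beam.Filter(lambda rec: rec["amount"] > 0)
        | "Format" >> beam.Map(lambda rec: f"{rec['order_id']},{rec['amount']:.2f}")
        | "Write" >> beam.io.WriteToText("gs://example-bucket/clean/orders")
    )
```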

Posted 2 weeks ago

Apply

10.0 - 14.0 years

25 - 40 Lacs

Hyderabad

Work from Office

Face-to-face interview on 2nd August 2025 in Hyderabad. Apply here - Job description - https://careers.ey.com/job-invite/1604461/ Experience Required: Minimum 8 years

Job Summary: We are seeking a skilled Data Engineer with a strong background in data ingestion, processing, and storage. The ideal candidate will have experience working with various data sources and technologies, particularly in a cloud environment. You will be responsible for designing and implementing data pipelines, ensuring data quality, and optimizing data storage solutions.

Key Responsibilities: Design, develop, and maintain scalable data pipelines for data ingestion and processing using Python, Spark, and AWS services. Work with on-prem Oracle databases, batch files, and Confluent Kafka for data sourcing. Implement and manage ETL processes using AWS Glue and EMR for batch and streaming data. Develop and maintain data storage solutions using Medallion Architecture in S3, Redshift, and Oracle. Collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs. Monitor and optimize data workflows using Airflow and other orchestration tools. Ensure data quality and integrity throughout the data lifecycle. Implement CI/CD practices for data pipeline deployment using Terraform and other tools. Utilize monitoring and logging tools such as CloudWatch, Datadog, and Splunk to ensure system reliability and performance. Communicate effectively with stakeholders to gather requirements and provide updates on project status.

Technical Skills Required: Proficient in Python for data processing and automation. Strong experience with Apache Spark for large-scale data processing. Familiarity with AWS S3 for data storage and management. Experience with Kafka for real-time data streaming. Knowledge of Redshift for data warehousing solutions. Proficient in Oracle databases for data management. Experience with AWS Glue for ETL processes. Familiarity with Apache Airflow for workflow orchestration. Experience with EMR for big data processing. Mandatory: Strong AWS data engineering skills.
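The Medallion-architecture responsibilities above amount to promoting raw data into curated layers. A hedged PySpark sketch of a bronze-to-silver step follows; the bucket names, columns, and quality rules are illustrative assumptions, not the actual environment.

```python
# Illustrative PySpark job: promote raw "bronze" records to a cleaned "silver"
# layer in S3. Bucket names and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze_to_silver_orders").getOrCreate()

bronze = spark.read.json("s3://example-bucket/bronze/orders/")      # raw ingested events

silver = (
    bronze
    .dropDuplicates(["order_id"])                                   # basic data-quality rule
    .filter(F.col("order_id").isNotNull())
    .withColumn("order_ts", F.to_timestamp("order_ts"))             # normalize types
    .withColumn("order_date", F.to_date("order_ts"))                # partition column
)

(silver.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-bucket/silver/orders/"))                 # columnar silver layer

spark.stop()
```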

Posted 2 weeks ago

Apply

4.0 years

3 - 6 Lacs

Hyderābād

On-site

CDP ETL & Database Engineer The CDP ETL & Database Engineer will specialize in architecting, designing, and implementing solutions that are sustainable and scalable. The ideal candidate will understand CRM methodologies, with an analytical mindset, and a background in relational modeling in a Hybrid architecture. The candidate will help drive the business towards specific technical initiatives and will work closely with the Solutions Management, Delivery, and Product Engineering teams. The candidate will join a team of developers across the US, India & Costa Rica.

Responsibilities: ETL Development – The CDP ETL & Database Engineer will be responsible for building pipelines to feed downstream data processes. They will be able to analyze data, interpret business requirements, and establish relationships between data sets. The ideal candidate will be familiar with different encoding formats and file layouts such as JSON and XML. Implementations & Onboarding – Will work with the team to onboard new clients onto the ZMP/CDP+ platform. The candidate will solidify business requirements, perform ETL file validation, establish users, perform complex aggregations, and syndicate data across platforms. The hands-on engineer will take a test-driven approach towards development and will be able to document processes and workflows. Incremental Change Requests – The CDP ETL & Database Engineer will be responsible for analyzing change requests and determining the best approach towards their implementation and execution. This requires the engineer to have a deep understanding of the platform's overall architecture. Change requests will be implemented and tested in a development environment to ensure their introduction will not negatively impact downstream processes. Change Data Management – The candidate will adhere to change data management procedures and actively participate in CAB meetings where change requests will be presented and reviewed. Prior to introducing change, the engineer will ensure that processes are running in a development environment. The engineer will be asked to do peer-to-peer code reviews and solution reviews before production code deployment. Collaboration & Process Improvement – The engineer will be asked to participate in knowledge-share sessions where they will engage with peers to discuss solutions, best practices, and overall approach. The candidate will be able to look for opportunities to streamline processes with an eye towards building a repeatable model to reduce implementation duration.

Job Requirements: The CDP ETL & Database Engineer will be well versed in the following areas: relational data modeling; ETL and FTP concepts; advanced analytics using SQL functions; cloud technologies - AWS, Snowflake. Able to decipher requirements, provide recommendations, and implement solutions within predefined timelines. The ability to work independently, but at the same time, the individual will be called upon to contribute in a team setting. The engineer will be able to confidently communicate status, raise exceptions, and voice concerns to their direct manager. Participate in internal client project status meetings with the Solution/Delivery management teams. When required, collaborate with the Business Solutions Analyst (BSA) to solidify requirements. Ability to work in a fast-paced, agile environment; the individual will be able to work with a sense of urgency when escalated issues arise. Strong communication and interpersonal skills, ability to multitask and prioritize workload based on client demand.
Familiarity with Jira for workflow management and time allocation. Familiarity with the Scrum framework: backlog, planning, sprints, story points, retrospectives.

Required Skills: ETL – ETL tools such as Talend (preferred, not required); DMExpress – nice to have; Informatica – nice to have. Database – hands-on experience with the following database technologies: Snowflake (required); MySQL/PostgreSQL – nice to have; familiarity with NoSQL DB methodologies (nice to have). Programming Languages – can demonstrate knowledge of any of the following: PL/SQL; JavaScript (strong plus); Python (strong plus); Scala (nice to have). AWS – knowledge of the following AWS services: S3, EMR (concepts), EC2 (concepts), Systems Manager / Parameter Store. Understands JSON data structures and key-value formats. Working knowledge of code repositories such as Git and WinCVS, and workflow management tools such as Apache Airflow, Kafka, Automic/Appworx, and Jira.

Minimum Qualifications: Bachelor's degree or equivalent. 4+ years' experience. Excellent verbal and written communication skills. Self-starter, highly motivated. Analytical mindset.

Company Summary: Zeta Global is a NYSE-listed data-powered marketing technology company with a heritage of innovation and industry leadership. Founded in 2007 by entrepreneur David A. Steinberg and John Sculley, former CEO of Apple Inc and Pepsi-Cola, the Company combines the industry's 3rd largest proprietary data set (2.4B+ identities) with Artificial Intelligence to unlock consumer intent, personalize experiences and help our clients drive business growth. Our technology runs on the Zeta Marketing Platform, which powers 'end to end' marketing programs for some of the world's leading brands. With expertise encompassing all digital marketing channels – Email, Display, Social, Search and Mobile – Zeta orchestrates acquisition and engagement programs that deliver results that are scalable, repeatable and sustainable. Zeta Global is an Equal Opportunity/Affirmative Action employer and does not discriminate on the basis of race, gender, ancestry, color, religion, sex, age, marital status, sexual orientation, gender identity, national origin, medical condition, disability, veterans status, or any other basis protected by law.

Zeta Global Recognized in Enterprise Marketing Software and Cross-Channel Campaign Management Reports by Independent Research Firm
https://www.forbes.com/sites/shelleykohan/2024/06/1G/amazon-partners-with-zeta-global-to-deliver-gen-ai-marketing-automation/
https://www.cnbc.com/video/2024/05/06/zeta-global-ceo-david-steinberg-talks-ai-in-focus-at-milken-conference.html
https://www.businesswire.com/news/home/20240G04622808/en/Zeta-Increases-3Q%E2%80%GG24-Guidance
https://www.prnewswire.com/news-releases/zeta-global-opens-ai-data-labs-in-san-francisco-and-nyc-300S45353.html
https://www.prnewswire.com/news-releases/zeta-global-recognized-in-enterprise-marketing-software-and-cross-channel-campaign-management-reports-by-independent-research-firm-300S38241.html

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Hyderābād

On-site

JOB DESCRIPTION We have an opportunity to impact your career and provide an adventure where you can push the limits of what's possible. As a Senior Manager of Software Engineering at JPMorgan Chase within the Consumer and Community Banking – Data Technology team, you lead a technical area and drive impact within teams, technologies, and projects across departments. Utilize your in-depth knowledge of software, applications, technical processes, and product management to drive multiple complex projects and initiatives, while serving as a primary decision maker for your teams and a driver of innovation and solution delivery.

Job Responsibilities: Leads the data publishing and processing platform engineering team to achieve business and technology objectives. Accountable for technical tools evaluation, build platforms, and design and delivery outcomes. Carries governance accountability for coding decisions, control obligations, and measures of success such as cost of ownership, maintainability, and portfolio operations. Delivers technical solutions that can be leveraged across multiple businesses and domains. Influences peer leaders and senior stakeholders across the business, product, and technology teams. Champions the firm’s culture of diversity, equity, inclusion, and respect.

Required qualifications, capabilities, and skills: Formal training or certification on software engineering concepts and 5+ years applied experience. In addition, 2+ years of experience leading technologists to manage and solve complex technical items within your domain of expertise. Expertise in programming languages such as Python and Java, with a strong understanding of cloud services including AWS, EKS, SNS, SQS, CloudFormation, Terraform, and Lambda. Proficient in messaging services like Kafka and big data technologies such as Hadoop, Spark SQL, and PySpark. Experienced with Teradata, Snowflake, or other RDBMS databases. Advanced experience in leading technologists to manage, anticipate, and solve complex technical challenges, along with experience in developing and recognizing talent within cross-functional teams. Experience in leading a product as a Product Owner or Product Manager, with practical cloud-native experience.

Preferred qualifications, capabilities, and skills: Previous experience leading or building Platforms & Frameworks teams. Skilled in orchestration tools like Airflow (preferable) or Control-M, and experienced in continuous integration and continuous deployment (CI/CD) using Jenkins. Experience with observability tools, frameworks, and platforms. Experience with large, scalable, secure, distributed, complex architecture and design. Experience with nonfunctional topics like security, performance, and code and design best practices. AWS Certified Solutions Architect, AWS Certified Developer, or similar certification is a big plus.

Posted 2 weeks ago

Apply

5.0 years

5 - 5 Lacs

Hyderābād

On-site

Data Services Analyst The Data Services ETL Developer will specialize in data transformations and integration projects utilizing Zeta's proprietary tools, 3rd party software, and coding. This role requires understanding of CRM methodologies related to marketing operations. The candidate will be responsible for implementing data processing across multiple technologies, supporting a high volume of tasks with the expectation of accurate and on-time delivery.

Responsibilities: Manipulate client and internal marketing data across multiple platforms and technologies. Automate scripts to perform tasks to transfer and manipulate data feeds (internal and external). Build, deploy, and manage cloud-based data pipelines using AWS services. Manage multiple tasks with competing priorities and ensure timely client deliverability. Work with technical staff to maintain and support a proprietary ETL environment. Collaborate with database/CRM, modelers, analysts, and application programmers to deliver results for clients.

Job Requirements: Coverage of US time zones and in-office presence a minimum of three days per week. Experience in database marketing with the ability to transform and manipulate data. Knowledge of US and international postal addresses, with exposure to SAP postal products (DQM). Proficient with AWS services (S3, Airflow, RDS, Athena) for data storage, processing, and analysis. Experience with Oracle and Snowflake SQL to automate scripts for marketing data processing. Familiarity with tools like Snowflake, Airflow, GitLab, Grafana, LDAP, OpenVPN, DCWEB, Postman, and Microsoft Excel. Knowledge of SQL Server, including data exports/imports, running SQL Server Agent Jobs, and SSIS packages. Proficiency with editors like Notepad++ and UltraEdit (or similar tools). Understanding of SFTP and PGP to ensure data security and client data protection. Experience working with large-scale customer databases in a relational database environment. Proven ability to manage multiple tasks simultaneously. Strong communication and collaboration skills in a team environment. Familiarity with the project life cycle.

Minimum Qualifications: Bachelor's degree or equivalent with 5+ years of experience in database marketing and cloud-based technologies. Strong understanding of data engineering concepts and cloud infrastructure. Excellent oral and written communication skills.
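Much of this role is scripted manipulation of marketing feeds staged in AWS. The short Python sketch below shows that pattern with boto3 and pandas; the bucket, keys, and column names are hypothetical.

```python
# Hedged sketch of a scripted feed cleanup: pull a client file from S3, apply a
# simple postal-code normalization, and stage the result back. Bucket, keys, and
# columns are placeholders.
import boto3
import pandas as pd

s3 = boto3.client("s3")
s3.download_file("example-marketing-feeds", "inbound/client_feed.csv", "/tmp/client_feed.csv")

feed = pd.read_csv("/tmp/client_feed.csv", dtype=str)
feed["postal_code"] = feed["postal_code"].str.strip().str.upper()   # basic normalization
feed = feed.dropna(subset=["email"]).drop_duplicates(subset=["email"])

feed.to_csv("/tmp/client_feed_clean.csv", index=False)
s3.upload_file("/tmp/client_feed_clean.csv", "example-marketing-feeds", "outbound/client_feed_clean.csv")
```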

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Hyderābād

On-site

The people here at Apple don’t just build products - we craft the kind of wonder that’s revolutionized entire industries. It’s the diversity of those people and their ideas that supports the innovation that runs through everything we do, from amazing technology to industry-leading environmental efforts. Join Apple, and help us leave the world better than we found it. The Global Business Intelligence team provides data services, analytics, reporting, and data science solutions to Apple’s business groups, including Retail, iTunes, Marketing, AppleCare, Operations, Finance, and Sales. These solutions are built on top of an end-to-end machine learning platform with sophisticated AI capabilities. We are looking for a competent, experienced, and driven machine learning engineer to define and build some of the best-in-class machine learning solutions and tools for Apple.

Description As a Machine Learning Engineer, you will work on building intelligent systems to democratize AI across a wide range of solutions within Apple. You will drive the development and deployment of innovative AI models and systems that directly impact the capabilities and performance of Apple’s products and services. You will implement robust, scalable ML infrastructure, including data storage, processing, and model serving components, to support seamless integration of AI/ML models into production environments. You will develop novel feature engineering, data augmentation, prompt engineering and fine-tuning frameworks that achieve optimal performance on specific tasks and domains. You will design and implement automated ML pipelines for data preprocessing, feature engineering, model training, hyper-parameter tuning, and model evaluation, enabling rapid experimentation and iteration. You will also implement advanced model compression and optimization techniques to reduce the resource footprint of language models while preserving their performance. Maintain a continuous focus on brainstorming and designing POCs using AI/ML services for new or existing enterprise problems.

You should be able to: understand a business challenge; collaborate with business and other multi-functional teams; design a statistical or deep learning solution that answers it; develop it yourself or guide another person to do it; deliver the outcome into production; and keep good governance of your work. There are meaningful opportunities for you to deliver impactful influence at Apple.

Key Qualifications: 4+ years of ML engineering experience in feature engineering, model training, model serving, model monitoring and model refresh management. Experience developing AI/ML systems at scale in production or in high-impact research environments. Passionate about computer vision and natural language processing, especially LLMs and Generative AI systems. Knowledge of common frameworks and tools such as PyTorch or TensorFlow. Experience with transformer models such as BERT, GPT, etc. and understanding of their underlying principles is a plus. Strong coding, analytical, and software engineering skills, and familiarity with software engineering principles around testing, code reviews and deployment. Experience in handling performance, application and security log management. Applied knowledge of statistical data analysis, predictive modeling, classification, time series techniques, sampling methods, multivariate analysis, hypothesis testing, and drift analysis. Proficiency in programming languages and tools like Python, R, Git, Airflow, Notebooks. Experience with data visualization tools like matplotlib, d3.js, or Tableau would be a plus.

Education & Experience: Bachelor’s Degree or equivalent experience.
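As a generic illustration of the train-and-evaluate loop such a role automates, here is a minimal scikit-learn pipeline on synthetic data; nothing in it is specific to Apple's platform.

```python
# Minimal training-pipeline sketch: feature scaling, model fitting, and a
# hold-out evaluation. Synthetic data stands in for a real feature store.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=5_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = Pipeline([
    ("scale", StandardScaler()),              # feature engineering step
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X_train, y_train)

auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"hold-out AUC: {auc:.3f}")              # evaluation gate before deployment
```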

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Position Title: Senior Data Engineer Location: Chennai 34322 Job Type: Contract Budget: ₹18 LPA Notice Period: Immediate Joiners Only Role Overview We are seeking a highly capable Software Engineer (Data Engineer) to support end-to-end development and deployment of critical data products. The selected candidate will work across diverse business and technical teams to design, build, transform, and migrate data solutions using modern cloud technologies. This is a high-impact role focused on cloud-native data engineering and infrastructure. Key Responsibilities Develop and manage scalable data pipelines and workflows on Google Cloud Platform (GCP) Design and implement ETL processes using Python, BigQuery, and Terraform Support data product lifecycle from concept, development to deployment and DevOps Optimize query performance and manage large datasets with efficiency Collaborate with cross-functional teams to gather requirements and deliver solutions Maintain strong adherence to Agile practices, contributing to sprint planning and user stories Apply best practices in data security, quality, and governance Effectively communicate technical solutions to stakeholders and team members Required Skills & Experience Minimum 4 years of relevant experience in GCP Data Engineering Strong hands-on experience with BigQuery, Python programming, Terraform, Cloud Run, and GitHub Proven expertise in SQL, data modeling, and performance optimization Solid understanding of cloud data warehousing and pipeline orchestration (e.g., DBT, Dataflow, Composer, or Airflow DAGs) Background in ETL workflows and data processing logic Familiarity with Agile (Scrum) methodology and collaboration tools Preferred Skills Experience with Java, Spring Boot, and RESTful APIs Exposure to infrastructure automation and CI/CD pipelines Educational Qualification Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field Skills: etl,terraform,dbt,java,spring boot,etl workflows,data modeling,dataflow,data engineering,ci/cd,bigquery,agile,data,sql,cloud,restful apis,github,airflow dags,gcp,cloud run,composer,python
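For a concrete flavor of the BigQuery work described, a short Python sketch using the google-cloud-bigquery client follows; the project, dataset, and SQL are placeholder assumptions.

```python
# Sketch of a BigQuery transformation step: run a SQL aggregation and
# materialize it into a reporting table. Project, dataset, and table names
# are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

job_config = bigquery.QueryJobConfig(
    destination="example-project.reporting.daily_order_totals",
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,   # idempotent reload
)

sql = """
    SELECT order_date, SUM(amount) AS total_amount
    FROM `example-project.raw.orders`
    GROUP BY order_date
"""

client.query(sql, job_config=job_config).result()   # blocks until the job finishes
print("reporting.daily_order_totals refreshed")
```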

Posted 2 weeks ago

Apply

5.0 years

19 - 20 Lacs

Chennai, Tamil Nadu, India

On-site

Position Title: Senior Software Engineer 34332 Location: Chennai (Onsite) Job Type: Contract Budget: ₹20 LPA Notice Period: Immediate Joiners Only Role Overview We are looking for a highly skilled Senior Software Engineer to be a part of a centralized observability and monitoring platform team. The role focuses on building and maintaining a scalable, reliable observability solution that enables faster incident response and data-driven decision-making through latency, traffic, error, and saturation monitoring. This opportunity requires a strong background in cloud-native architecture, observability tooling, backend and frontend development, and data pipeline engineering. Key Responsibilities Design, build, and maintain observability and monitoring platforms to enhance MTTR/MTTX Create and optimize dashboards, alerts, and monitoring configurations using tools like Prometheus, Grafana, etc. Architect and implement scalable data pipelines and microservices for real-time and batch data processing Utilize GCP tools including BigQuery, Dataflow, Dataproc, Data Fusion, and others Develop end-to-end solutions using Spring Boot, Python, Angular, and REST APIs Design and manage relational and NoSQL databases including PostgreSQL, MySQL, and BigQuery Implement best practices in data governance, RBAC, encryption, and security within cloud environments Ensure automation and reliability through CI/CD, Terraform, and orchestration tools like Airflow and Tekton Drive full-cycle SDLC processes including design, coding, testing, deployment, and monitoring Collaborate closely with software architects, DevOps, and cross-functional teams for solution delivery Core Skills Required Proficiency in Spring Boot, Angular, Java, and Python Experience in developing microservices and SOA-based systems Cloud-native development experience, preferably on Google Cloud Platform (GCP) Strong understanding of HTML, CSS, JavaScript/TypeScript, and modern frontend frameworks Experience with infrastructure automation and monitoring tools Working knowledge of data engineering technologies: PySpark, Airflow, Apache Beam, Kafka, and similar Strong grasp of RESTful APIs, GitHub, and TDD methodologies Preferred Skills GCP Professional Certifications (e.g., Data Engineer, Cloud Developer) Hands-on experience with Terraform, Cloud SQL, Data Governance tools, and security frameworks Exposure to performance tuning, cost optimization, and observability best practices Experience Required 5+ years of experience in full-stack and cloud-based application development Strong track record in building distributed, scalable systems Prior experience with observability and performance monitoring tools is a plus Educational Qualifications Bachelor’s Degree in Computer Science, Information Technology, or a related field (mandatory) Skills: java,data fusion,html,dataflow,terraform,spring boot,restful apis,python,angular,dataproc,microservices,apache beam,css,cloud sql,soa,typescript,tdd,kafka,javascript,airflow,github,pyspark,bigquery,,gcp
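The golden-signal monitoring mentioned above (latency, traffic, errors, saturation) is typically exposed through instrumented metrics. A small sketch with the Prometheus Python client is shown below; metric names and the simulated workload are illustrative only, with Grafana dashboards sitting on top of the scraped series.

```python
# Illustrative instrumentation of request latency, traffic, and errors using the
# Prometheus Python client. Names and the simulated workload are placeholders.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests", ["endpoint", "status"])
LATENCY = Histogram("app_request_latency_seconds", "Request latency", ["endpoint"])

def handle_request(endpoint: str) -> None:
    start = time.perf_counter()
    status = "200" if random.random() > 0.05 else "500"   # simulated 5% error rate
    LATENCY.labels(endpoint=endpoint).observe(time.perf_counter() - start)
    REQUESTS.labels(endpoint=endpoint, status=status).inc()

if __name__ == "__main__":
    start_http_server(9100)            # exposes /metrics for Prometheus to scrape
    while True:
        handle_request("/orders")
        time.sleep(0.5)
```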

Posted 2 weeks ago

Apply

2.0 years

4 Lacs

Chennai

On-site

We are hiring a tech-savvy and creative Social Media Handler with strong expertise in AI-powered content creation , web scraping , and automation of scraper workflows . You will be responsible for managing social media presence while automating content intelligence and trend tracking through custom scraping solutions. This is a hybrid role requiring both creative content skills and technical automation proficiency. Key Responsibilities: 1) Social Media Management - Plan and execute content calendars across platforms: Instagram, Facebook, YouTube, LinkedIn, and X. - Create high-performing, audience-specific content using AI tools (ChatGPT, Midjourney, Canva AI, etc.). - Engage with followers, track trends, and implement growth strategies. 2) AI Content Creation - Use generative AI to write captions, articles, and hashtags. - Generate AI-powered images, carousels, infographics, and reels. - Repurpose long-form content into short-form video or visual content using tools like Descript or Lumen5. 3) Web Scraping & Automation - Design and build automated web scrapers to extract data from websites, directories, competitor pages, and trending content sources. - Schedule scraping jobs and set up automated pipelines using: - Python (BeautifulSoup, Scrapy, Selenium, Playwright) - Task schedulers (Airflow, Cron, or Python scripts) - Cloud scraping or headless browsers - Parse and clean data for insight generation (topics, hashtags, keywords, sentiment, etc.). - Store and organize scraped data in spreadsheets or databases for content inspiration and strategy. Required Skills & Experience: 1) 2–5 years of relevant work experience in social media, content creation, or web scraping. 2) Proficiency in AI tools: - Text: ChatGPT, Jasper, Copy.ai 3) Image: Midjourney, DALL·E, Adobe Firefly 4) Video: Pictory, Descript, Lumen5 5) Strong Python skills for: - Web scraping (Scrapy, BeautifulSoup, Selenium) 6) Automation scripting - Knowledge of data handling using Pandas, CSV, JSON, Google Sheets, or databases. 7) Familiar with social media scheduling tools (Meta Business Suite, Buffer, Hootsuite). 8) Ability to work independently and stay updated on digital trends and platform changes. Educational Qualification Degree in Marketing, Media, Computer Science, or Data Science preferred. - Skills-based hiring encouraged – real-world experience matters more than formal education. Work Location: Chennai (In-office role) Salary: Commensurate with experience + performance bonus Bonus Skills (Nice to Have) : 1) Knowledge of website development (HTML, CSS, JS, WordPress/Webflow). 2) SEO and content analytics. 3) Basic video editing and animation (CapCut, After Effects). 4) Experience with automation platforms like Zapier, n8n, or Make.com. To Apply: Please email your resume, portfolio, and sample projects to: Job Type: Full-time Pay: From ₹40,000.00 per month Work Location: In person
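As a sketch of the scraping-and-scheduling workflow described, the following Python script fetches a page with requests and BeautifulSoup and appends results to a CSV; the URL and CSS selector are hypothetical, and a cron entry or Airflow DAG would run it on a schedule.

```python
# Hedged scraping sketch: fetch a page, pull headline text, and append it to a
# CSV for later trend analysis. URL and selector are placeholders.
import csv
from datetime import datetime, timezone

import requests
from bs4 import BeautifulSoup

URL = "https://example.com/trending"          # placeholder target page

def scrape_headlines() -> list[str]:
    resp = requests.get(URL, timeout=10)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    return [h.get_text(strip=True) for h in soup.select("h2.headline")]   # assumed selector

def save(headlines: list[str]) -> None:
    stamp = datetime.now(timezone.utc).isoformat()
    with open("trends.csv", "a", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        for text in headlines:
            writer.writerow([stamp, text])

if __name__ == "__main__":
    save(scrape_headlines())
```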

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Andhra Pradesh, India

On-site

We are seeking a Senior Developer with expertise in SnapLogic and Apache Airflow to design, develop, and maintain enterprise-level data integration solutions. This role requires strong technical expertise in ETL development, workflow orchestration, and cloud technologies. You will be responsible for automating data workflows, optimizing performance, and ensuring the reliability and scalability of our data systems. Key Responsibilities include designing, developing, and managing ETL pipelines using SnapLogic, ensuring efficient data transformation and integration across various systems and applications. Leverage Apache Airflow for workflow automation, job scheduling, and task dependencies, ensuring optimized execution and monitoring. Work closely with cross-functional teams such as Data Engineering, DevOps, and Data Science to understand data requirements and deliver solutions. Collaborate in designing and implementing data pipeline architectures to support large-scale data processing in cloud environments like AWS, Azure, and GCP. Develop reusable SnapLogic pipelines and integrate with third-party applications and data sources including databases, APIs, and cloud services. Optimize SnapLogic pipeline performance to handle large volumes of data with minimal latency. Provide guidance and mentoring to junior developers in the team, conducting code reviews and offering best practice recommendations. Troubleshoot and resolve pipeline failures, ensuring high data quality and minimal downtime. Implement automated testing, continuous integration (CI), and continuous delivery (CD) practices for data pipelines. Stay current with new SnapLogic features, Airflow upgrades, and industry best practices. Required Skills & Experience include 6+ years of hands-on experience in data engineering, focusing on SnapLogic and Apache Airflow. Strong experience with SnapLogic Designer and SnapLogic cloud environment for building data integrations and ETL pipelines. Proficient in Apache Airflow for orchestrating, automating, and scheduling data workflows. Strong understanding of ETL concepts, data integration, and data transformations. Experience with cloud platforms like AWS, Azure, or Google Cloud and data storage systems such as S3, Azure Blob, and Google Cloud Storage. Strong SQL skills and experience with relational databases like PostgreSQL, MySQL, Oracle, and NoSQL databases. Experience working with REST APIs, integrating data from third-party services, and using connectors. Knowledge of data quality, monitoring, and logging tools for production pipelines. Experience with CI/CD pipelines and tools such as Jenkins, GitLab, or similar. Excellent problem-solving skills with the ability to diagnose issues and implement effective solutions. Ability to work in an Agile development environment. Strong communication and collaboration skills to work with both technical and non-technical teams.

Posted 2 weeks ago

Apply

5.0 - 8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title - ETL Developer - Informatica BDM/DEI 📍 Location : Onsite 🕒 Employment Type : Full Time 💼 Experience Level : Mid Senior Job Summary - We are seeking a skilled and results-driven ETL Developer with strong experience in Informatica BDM (Big Data Management) or Informatica DEI (Data Engineering Integration) to design and implement scalable, high-performance data integration solutions. The ideal candidate will work on large-scale data projects involving structured and unstructured data, and contribute to the development of reliable and efficient ETL pipelines across modern big data environments. Key Responsibilities Design, develop, and maintain ETL pipelines using Informatica BDM/DEI for batch and real-time data integration Integrate data from diverse sources including relational databases, flat files, cloud storage, and big data platforms such as Hive and Spark Translate business and technical requirements into mapping specifications and transformation logic Optimize mappings, workflows , and job executions to ensure high performance, scalability, and reliability Conduct unit testing and participate in integration and system testing Collaborate with data architects, analysts, and business stakeholders to understand requirements and deliver robust solutions Support data quality checks, exception handling, and metadata documentation Monitor, troubleshoot, and resolve ETL job issues and performance bottlenecks Ensure adherence to data governance and compliance standards throughout the development lifecycle Key Skills and Qualification 5-8 years of experience in ETL development with a focus on Informatica BDM/DEI Strong knowledge of data integration techniques , transformation logic, and job orchestration Proficiency in SQL , with the ability to write and optimize complex queries Experience working with Hadoop ecosystems (e.g., Hive, HDFS, Spark) and large-volume data processing Understanding of performance optimization in ETL and big data environments Familiarity with job scheduling tools and workflow orchestration (e.g., Control-M, Apache Airflow, Oozie) Good understanding of data warehousing , data lakes , and data modeling principles Experience working in Agile/Scrum environments Excellent analytical, problem-solving, and communication skills Good to have Experience with cloud data platforms (AWS Glue, Azure Data Factory, or GCP Dataflow) Exposure to Informatica IDQ (Data Quality) is a plus Knowledge of Python, Shell scripting, or automation tools Informatica or Big Data certifications

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Title: Senior Bioinformatician GCL: D2

Introduction to role Are you ready to tackle some of the most challenging informatics problems in the drug discovery clinical trial phase? Join us as a Senior Bioinformatician and be part of a team that is redefining healthcare. Your work will directly impact millions of patients by advancing the standard of drug discovery through data processing, analysis, and algorithm development. Collaborate with informaticians, data scientists, and engineers to deliver groundbreaking solutions that drive scientific insights and improve the quality of candidate drugs. Are you up for the challenge?

Accountabilities Collaborate with scientific colleagues across AstraZeneca to ensure informatics and advanced analytics solutions meet R&D needs. Develop and deliver informatics solutions using agile methodologies, including pipelining approaches and algorithm development. Contribute to multi-omics drug projects with downstream analysis and data analytics. Create, benchmark, and deploy scalable data workflows for genome assembly, variant calling, annotation, and more. Implement CI/CD practices for pipeline development across cloud-based and HPC environments. Apply cloud computing platforms like AWS for pipeline execution and data storage. Explore opportunities to apply AI & ML in informatics. Engage with external peers and software providers to apply the latest methods to business problems. Work closely with data scientists and platform teams to deliver scientific insights. Collaborate with informatics colleagues in our Global Innovation and Technology Centre.

Essential Skills/Experience Masters/PhD (or equivalent) in Bioinformatics, Computational Biology, AI/ML, Genomics, Systems Biology, Biomedical Informatics, or related field with a demonstrable record of informatics and image analysis delivery in a biopharmaceutical setting. Strong coding and software engineering skills such as Python, R, scripting, Nextflow. Over 6 years of experience in image analysis/bioinformatics, with a focus on image/NGS data analysis and Nextflow (DSL2) pipeline development. Proficiency in cloud platforms, preferably AWS (e.g. S3, EC2, Batch, EBS, EFS), and containerization tools (Docker, Singularity). Experience with workflow management tools and CI/CD practices in image analysis and bioinformatics (Git, GitHub, GitLab), HPC in AWS. Experience in working with any multi-omics analysis (transcriptomics, single cell, CRISPR, etc.) or image data (DICOM, WSI, etc.) analysis. Experience working with any omics tools and databases such as NCBI, PubMed, UCSC Genome Browser, bedtools, samtools, Picard, or imaging-relevant tools such as CellProfiler, HALO, VisioPharm, particularly in digital pathology and biomarker research. Strong communication skills, with the ability to collaborate effectively with team members and partners to achieve objectives.

Desirable Skills/Experience Experience in omics or imaging data analysis in a biopharmaceutical setting. Knowledge of Docker and Kubernetes for container orchestration. Experience with other workflow management systems, such as Apache Airflow, Nextflow, Cromwell, or AWS Step Functions. Familiarity with web-based bioinformatics tools (e.g., RShiny, Jupyter). Experience with working in GxP-validated environments. Experience administering and optimising an HPC job scheduler (e.g. SLURM). Experience with configuration automation and infrastructure as code (e.g. Ansible, HashiCorp Terraform, AWS CloudFormation, AWS Cloud Development Kit).
Experience deploying infrastructure and code to public cloud, especially AWS. When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible. We balance the expectation of being in the office while respecting individual flexibility. Join us in our unique and ambitious world. At AstraZeneca, we are driven by a shared purpose to push the boundaries of science and develop life-changing medicines. Our innovative approach combines ground breaking science with leading digital technology platforms to empower our teams to perform at their best. We foster an environment where you can explore new solutions and experiment with groundbreaking technology. With countless opportunities for learning and growth, you'll be part of a diverse team that works multi-functionally to make a meaningful impact on patients' lives. Ready to make a difference? Apply now to join our team as a Senior Bioinformatician! Date Posted 02-Jul-2025 Closing Date 30-Jul-2025 AstraZeneca embraces diversity and equality of opportunity. We are committed to building an inclusive and diverse team representing all backgrounds, with as wide a range of perspectives as possible, and harnessing industry-leading skills. We believe that the more inclusive we are, the better our work will be. We welcome and consider applications to join our team from all qualified candidates, regardless of their characteristics. We comply with all applicable laws and regulations on non-discrimination in employment (and recruitment), as well as work authorization and employment eligibility verification requirements.

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Experience- 7+ years Location- Hyderabad (preferred), Pune, Mumbai JD- We are seeking a skilled Snowflake Developer with 7+ years of experience in designing, developing, and optimizing Snowflake data solutions. The ideal candidate will have strong expertise in Snowflake SQL, ETL/ELT pipelines, and cloud data integration. This role involves building scalable data warehouses, implementing efficient data models, and ensuring high-performance data processing in Snowflake. Key Responsibilities 1. Snowflake Development & Optimization Design and develop Snowflake databases, schemas, tables, and views following best practices. Write complex SQL queries, stored procedures, and UDFs for data transformation. Optimize query performance using clustering, partitioning, and materialized views. Implement Snowflake features (Time Travel, Zero-Copy Cloning, Streams & Tasks). 2. Data Pipeline Development Build and maintain ETL/ELT pipelines using Snowflake, Snowpark, Python, or Spark. Integrate Snowflake with cloud storage (S3, Blob) and data ingestion tools (Snowpipe). Develop CDC (Change Data Capture) and real-time data processing solutions. 3. Data Modeling & Warehousing Design star schema, snowflake schema, and data vault models in Snowflake. Implement data sharing, secure views, and dynamic data masking. Ensure data quality, consistency, and governance across Snowflake environments. 4. Performance Tuning & Troubleshooting Monitor and optimize Snowflake warehouse performance (scaling, caching, resource usage). Troubleshoot data pipeline failures, latency issues, and query bottlenecks. Work with DevOps teams to automate deployments and CI/CD pipelines. 5. Collaboration & Documentation Work closely with data analysts, BI teams, and business stakeholders to deliver data solutions. Document data flows, architecture, and technical specifications. Mentor junior developers on Snowflake best practices. Required Skills & Qualifications · 7+ years in database development, data warehousing, or ETL. · 4+ years of hands-on Snowflake development experience. · Strong SQL or Python skills for data processing. · Experience with Snowflake utilities (SnowSQL, Snowsight, Snowpark). · Knowledge of cloud platforms (AWS/Azure) and data integration tools (Coalesce, Airflow, DBT). · Certifications: SnowPro Core Certification (preferred). Preferred Skills · Familiarity with data governance and metadata management. · Familiarity with DBT, Airflow, SSIS & IICS · Knowledge of CI/CD pipelines (Azure DevOps).
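To illustrate the ELT style of work above, here is a hedged sketch using the Snowflake Python connector to run an incremental MERGE from a staging table; the connection parameters and table names are placeholders, not a real account.

```python
# Hedged ELT sketch: connect with the Snowflake Python connector and run an
# incremental MERGE from staging into a dimension table. All identifiers are
# hypothetical.
import os

import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="CORE",
)

merge_sql = """
    MERGE INTO dim_customer AS tgt
    USING stg_customer AS src
      ON tgt.customer_id = src.customer_id
    WHEN MATCHED THEN UPDATE SET tgt.email = src.email, tgt.updated_at = src.updated_at
    WHEN NOT MATCHED THEN INSERT (customer_id, email, updated_at)
      VALUES (src.customer_id, src.email, src.updated_at)
"""

cur = conn.cursor()
try:
    cur.execute(merge_sql)
    print(f"rows affected: {cur.rowcount}")
finally:
    cur.close()
    conn.close()
```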

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Overview TekWissen is a global workforce management provider that offers strategic talent solutions to our clients throughout India and world-wide. Our client is a company operating a marketplace for consumers, sellers, and content creators. It offers merchandise and content purchased for resale from vendors and those offered by thirdparty sellers. Job Title: Business Intelligence Engineer III Location: Pune Duration: 6 Months Job Type: Contract Work Type: Onsite Job Description The Top Responsibilities: Data Engineering on AWS: Design and implement scalable and secure data pipelines using AWS services such as the client's S3, AWS Glue, the client's Redshift, and the client's Athena. Ensure high-performance, reliable, and fault-tolerant data architectures. Data Modeling and Transformation: Develop and optimize dimensional data models to support various business intelligence and analytics use cases. Perform complex data transformations and enrichment using tools like AWS Glue, AWS Lambda, and Apache Spark. Business Intelligence and Reporting: Collaborate with stakeholders to understand reporting and analytics requirements. Build interactive dashboards and reports using visualization tools like the client's QuickSight. Data Governance and Quality: Implement data quality checks and monitoring processes to ensure the integrity and reliability of data. Define and enforce data policies, standards, and procedures. Cloud Infrastructure Management: Manage and maintain the AWS infrastructure required for the data and analytics platform. Optimize performance, cost, and security of the underlying cloud resources. Collaboration and Knowledge Sharing: Work closely with cross-functional teams, including data analysts, data scientists, and business users, to identify opportunities for data-driven insights. Share knowledge, best practices, and train other team members. Leadership Principles Ownership Deliver result Insist on the Highest Standards Mandatory Requirements 3+ years of experience as a Business Intelligence Engineer or Data Engineer, with a strong focus on AWS cloud technologies. Proficient in designing and implementing data pipelines using AWS services such as S3, Glue, Redshift, Athena, and Lambda. Expertise in data modeling, dimensional modeling, and data transformation techniques. Experience in building and deploying business intelligence solutions, including the use of tools like the client's QuickSight and Tableau. Strong SQL and Python programming skills for data processing and analysis. Understanding of cloud architecture patterns, security best practices, and cost optimization on AWS. Excellent communication and collaboration skills to work effectively with cross-functional teams. Preferred Skills Hands-on experience with Apache Spark, Airflow, or other big data technologies. Knowledge of AWS DevOps practices and tools, such as AWS CodePipeline, AWS CodeBuild, and AWS CloudFormation. Familiarity with agile software development methodologies. AWS Certification (e.g., AWS Certified Data Analytics - Specialty). Certification Requirements Any Graduate TekWissen® Group is an equal opportunity employer supporting workforce diversity.
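A typical serverless query pattern on this stack is submitting Athena SQL from Python and polling for completion, sketched below with boto3; the database, table, and results bucket are assumptions for the example.

```python
# Illustrative Athena query submission: run a SQL aggregation against the data
# lake and wait for it to finish. Database, table, and results bucket are
# hypothetical.
import time

import boto3

athena = boto3.client("athena")

execution = athena.start_query_execution(
    QueryString="SELECT order_date, COUNT(*) AS orders FROM sales GROUP BY order_date",
    QueryExecutionContext={"Database": "analytics_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = execution["QueryExecutionId"]

while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

print(f"Athena query {query_id} finished with state {state}")
```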

Posted 2 weeks ago

Apply

9.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Lead the design and development of advanced refrigeration and HVAC systems for data centers. Provide technical leadership in the application of CO₂ transcritical systems for sustainable and efficient cooling. Perform thermal load calculations, equipment sizing, and system layout planning. Collaborate with electrical engineers, manufacturing engineers and field service engineers to ensure integrated and optimized cooling solutions. Conduct feasibility studies, energy modeling, and performance simulations. Oversee installation, commissioning, and troubleshooting of refrigeration systems. Ensure compliance with industry standards, safety regulations, and environmental guidelines. Prepare detailed technical documentation, specifications, and reports. Required Qualifications: Bachelor’s or Master’s degree in Mechanical Engineering, HVAC Engineering, or a related field. 7–9 years of experience in refrigeration or HVAC system design, with a focus on data center cooling . In-depth knowledge of data center thermal management , including CRAC/CRAH units, liquid cooling, and airflow management. Hands-on experience with CO₂ transcritical refrigeration systems and natural refrigerants. Strong understanding of thermodynamics, fluid mechanics, and heat transfer. Familiarity with relevant codes and standards (ASHRAE, ISO, IEC, etc.). Proficiency in design and simulation tools (e.g., AutoCAD, Revit, Pack Calculation Pro, Cycle_DX, VTB, or HVAC-specific software). Preferred Qualifications: Experience with energy efficiency optimization and sustainability initiatives. Knowledge of control systems and building automation for HVAC. Experience working in mission-critical environments or hyperscale data centers.
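This posting is mechanical rather than software-focused, but the thermal load and airflow sizing it mentions follow a simple sensible-heat balance, Q = ρ·V̇·cp·ΔT. A small Python sketch of that calculation is shown below, using textbook air properties and assumed example values for the load and temperature split.

```python
# Sensible-heat sizing sketch for data-center cooling: the airflow needed to
# carry away a given IT load, from Q = rho * V_dot * cp * dT. Load and the
# supply-to-return temperature rise are assumed example values.
RHO_AIR = 1.2      # kg/m^3, approximate density of air at room conditions
CP_AIR = 1005.0    # J/(kg*K), specific heat of air at constant pressure

def required_airflow_m3s(it_load_kw: float, delta_t_k: float) -> float:
    """Volumetric airflow (m^3/s) needed to remove it_load_kw with a delta_t_k rise."""
    return (it_load_kw * 1000.0) / (RHO_AIR * CP_AIR * delta_t_k)

if __name__ == "__main__":
    load_kw = 300.0    # assumed IT load for one data hall
    delta_t = 12.0     # assumed supply-to-return temperature rise, K
    flow = required_airflow_m3s(load_kw, delta_t)
    print(f"{load_kw:.0f} kW at dT = {delta_t:.0f} K needs about {flow:.1f} m^3/s of air")
```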

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

pune, maharashtra

On-site

Join us as a Big Data Engineer at Barclays, where you will spearhead the evolution of the digital landscape, driving innovation and excellence. You will harness cutting-edge technology to revolutionize digital offerings, ensuring unparalleled customer experiences.

To be successful as a Big Data Engineer, you should have experience with:
- Full Stack Software Development for large-scale, mission-critical applications.
- Mastery of distributed big data systems such as Spark, Hive, Kafka streaming, Hadoop, Airflow.
- Expertise in Scala, Java, Python, J2EE technologies, Microservices, Spring, Hibernate, REST APIs.
- Experience with n-tier web application development and frameworks like Spring Boot, Spring MVC, JPA, Hibernate.
- Proficiency with version control systems, preferably Git; GitHub Copilot experience is a plus.
- Proficiency in API development using SOAP or REST, JSON, and XML.
- Experience developing back-end applications with multi-process and multi-threaded architectures.
- Hands-on experience building scalable microservices solutions using integration design patterns, Docker, containers, and Kubernetes.
- Experience with DevOps practices like CI/CD, test automation, and build automation using tools like Jenkins, Maven, Chef, Git, Docker.
- Experience with data processing in cloud environments like Azure or AWS.
- Data product development experience is essential.
- Experience in Agile development methodologies like SCRUM.
- Result-oriented mindset with strong analytical and problem-solving skills.
- Excellent verbal and written communication and presentation skills.

You may be assessed on key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking, digital and technology, as well as job-specific technical skills. This role is based in Pune.

Purpose of the role: To design, develop, and improve software, utilizing various engineering methodologies, that provides business, platform, and technology capabilities for our customers and colleagues.

Accountabilities:
- Development and delivery of high-quality software solutions using industry-aligned programming languages, frameworks, and tools, ensuring that code is scalable, maintainable, and optimized for performance.
- Cross-functional collaboration with product managers, designers, and other engineers to define software requirements, devise solution strategies, and ensure seamless integration and alignment with business objectives.
- Collaboration with peers, participation in code reviews, and promotion of a culture of code quality and knowledge sharing.
- Staying informed of industry technology trends and innovations, and actively contributing to the organization's technology communities to foster a culture of technical excellence and growth.
- Adherence to secure coding practices to mitigate vulnerabilities, protect sensitive data, and ensure secure software solutions.
- Implementation of effective unit testing practices to ensure proper code design, readability, and reliability.

Analyst Expectations:
- Perform prescribed activities in a timely manner and to a high standard, consistently driving continuous improvement.
- Requires in-depth technical knowledge and experience in the assigned area of expertise.
- Thorough understanding of the underlying principles and concepts within the area of expertise.
- Lead and supervise a team, guiding and supporting professional development, allocating work requirements, and coordinating team resources.
- If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviors to create an environment for colleagues to thrive and deliver to a consistently excellent standard.
- For an individual contributor, develop technical expertise in the work area, acting as an advisor where appropriate.
- Will have an impact on the work of related teams within the area.
- Partner with other functions and business areas.
- Take responsibility for the end results of a team's operational processing and activities.
- Escalate breaches of policies/procedures appropriately.
- Take responsibility for embedding new policies/procedures adopted due to risk mitigation.
- Advise and influence decision-making within your own area of expertise.
- Take ownership of managing risk and strengthening controls in relation to the work you own or contribute to.

All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence, and Stewardship, our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset to Empower, Challenge, and Drive, the operating manual for how we behave.
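For illustration, a minimal PySpark Structured Streaming sketch of the Spark-plus-Kafka pattern this role lists. The broker address, topic name, and event schema are hypothetical; a production job would write to a durable sink (Delta, Kafka, a warehouse) rather than the console.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("payments-stream").getOrCreate()

# Hypothetical event schema for messages on the topic.
event_schema = StructType([
    StructField("payment_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("currency", StringType()),
])

# Consume events from Kafka as a streaming DataFrame.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")
    .option("subscribe", "payments")
    .load()
    .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

# Continuously aggregate totals per currency for downstream consumers.
totals = events.groupBy("currency").agg(F.sum("amount").alias("total_amount"))

query = (
    totals.writeStream.outputMode("complete")
    .format("console")  # swap for a real sink in practice
    .option("checkpointLocation", "/tmp/checkpoints/payments")
    .start()
)
query.awaitTermination()
```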

Posted 2 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

karnataka

On-site

You are invited to join our team as a Mid-Level Data Engineer Technical Consultant with 4+ years of experience. As part of our diverse and inclusive organization, you will be based in Bangalore, KA, working full-time in a permanent position during the general shift from Monday to Friday. In this role, you will be expected to possess strong written and oral communication skills, particularly in email correspondence. Your experience working with Application Development teams will be invaluable, along with your ability to analyze and solve problems effectively. Proficiency in Microsoft tools such as Outlook, Excel, and Word is essential for this position. As a Data Engineer Technical Consultant, you must have at least 4 years of hands-on experience in development. Your expertise should include working with Snowflake and PySpark, writing SQL queries, utilizing Airflow, and developing in Python. Experience with dbt and integration programs will be advantageous, as well as familiarity with Excel for data analysis and Unix shell scripting. The role requires a good understanding of data warehousing and practical work experience in this field. You will be accountable for various tasks including understanding requirements, coding, unit testing, integration testing, performance testing, UAT, and Hypercare Support. Collaboration with cross-functional teams across different geographies will be a key aspect of this role. If you are action-oriented, independent, and possess the required technical skills, we encourage you to submit your resume to pallavi@she-jobs.com and explore this exciting opportunity further.
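For illustration, a minimal sketch of the Airflow-plus-dbt orchestration this role touches on, assuming Airflow 2.4+; the DAG id, schedule, dbt project path, and extract step are all hypothetical placeholders.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator


def extract_orders():
    # Placeholder extract step -- in practice this would pull from a source
    # system and stage the data (e.g. into Snowflake or S3).
    print("extracting orders ...")


with DAG(
    dag_id="daily_warehouse_refresh",  # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)

    # Run dbt transformations after the raw data has landed.
    transform = BashOperator(
        task_id="dbt_build",
        bash_command="dbt build --project-dir /opt/dbt/warehouse",
    )

    extract >> transform
```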

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

kolkata, west bengal

On-site

We are looking for a highly skilled and experienced Senior Data Engineer to join our dynamic data team. The ideal candidate will have deep expertise in Snowflake, dbt (Data Build Tool), and Python, with a strong understanding of data architecture, transformation pipelines, and data quality principles. You will play a crucial role in building and maintaining scalable data pipelines and facilitating data-driven decision-making across the organization.

Your responsibilities will include designing, developing, and maintaining scalable and efficient ETL/ELT pipelines using dbt, Snowflake, and Python. You will be tasked with optimizing data models and warehouse performance in Snowflake, and collaborating with data analysts, scientists, and business teams to understand data requirements and deliver high-quality datasets. Ensuring data quality, governance, and compliance across pipelines, automating data workflows, and monitoring production jobs for accuracy and reliability will be key aspects of your role. Additionally, you will participate in architectural decisions, promote best practices in data engineering, maintain documentation of data pipelines, transformations, and data models, mentor junior engineers, and contribute to team knowledge sharing.

The ideal candidate should have at least 5 years of professional experience in Data Engineering, strong hands-on experience with Snowflake (data modeling, performance tuning, security features), proven experience using dbt for data transformation and modeling, proficiency in Python for data engineering tasks and scripting, a solid understanding of SQL, and experience in building and maintaining complex queries. Experience with orchestration tools like Airflow or Prefect, familiarity with version control systems like Git, strong problem-solving skills, attention to detail, and excellent communication and teamwork abilities are required.

Preferred qualifications include experience working with cloud platforms such as AWS, Azure, or GCP, knowledge of data lake architecture and real-time streaming technologies, exposure to CI/CD pipelines for data deployment, and experience in agile development methodologies. Join us and be part of a team that values expertise, innovation, and collaboration in driving impactful data solutions across the organization.
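For illustration, a minimal sketch of the post-load data quality checks this posting mentions, using the snowflake-connector-python package. Connection details, warehouse, database, schema, and table names are placeholders, not from the posting.

```python
import os

import snowflake.connector

# All connection values are placeholders supplied via the environment.
conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="MARTS",
)

# Simple completeness checks a pipeline might run after each dbt build.
CHECKS = {
    "orders_not_empty": "SELECT COUNT(*) FROM FCT_ORDERS",
    "no_null_order_ids": "SELECT COUNT(*) FROM FCT_ORDERS WHERE ORDER_ID IS NULL",
}

try:
    cur = conn.cursor()
    counts = {name: cur.execute(sql).fetchone()[0] for name, sql in CHECKS.items()}
finally:
    conn.close()

assert counts["orders_not_empty"] > 0, "FCT_ORDERS is empty"
assert counts["no_null_order_ids"] == 0, "NULL order ids found in FCT_ORDERS"
print("data quality checks passed:", counts)
```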

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description
Eucloid is looking for a senior Data Engineer with hands-on expertise in Databricks to join our Data Platform team supporting various business applications. The ideal candidate will support the development of data infrastructure on Databricks for our clients by participating in activities ranging from upstream and downstream technology selection to designing and building different components. The candidate will also be involved in projects such as integrating data from various sources and managing big data pipelines that are easily accessible, with optimized performance of the overall ecosystem. The ideal candidate is an experienced data wrangler who will support our software developers, database architects, and data analysts on business initiatives. You must be self-directed and comfortable supporting the data needs of cross-functional teams, systems, and technical solutions.

Qualifications
B.Tech/BS degree in Computer Science, Computer Engineering, Statistics, or other Engineering disciplines
Min. 5 years of professional work experience, with 1+ years of hands-on experience with Databricks
Highly proficient in SQL and data model (conceptual and logical) concepts
Highly proficient with Python and Spark (3+ years)
Knowledge of distributed computing and cloud databases like Redshift, BigQuery, etc.
2+ years of hands-on experience with one of the top cloud platforms - AWS/GCP/Azure
Experience with Modern Data Stack tools like Airflow, Terraform, dbt, Glue, Dataproc, etc.
Exposure to Hadoop and shell scripting is a plus
Minimum experience: 2 years overall; Databricks 1 year desirable; Python & Spark 1+ years; SQL; any cloud experience 1+ year

Responsibilities
Design, implementation, and improvement of processes and automation of data infrastructure
Tuning of data pipelines for reliability and performance
Building tools and scripts to develop, monitor, and troubleshoot ETLs
Perform scalability, latency, and availability tests on a regular basis
Perform code reviews and QA data imported by various processes
Investigate, analyze, correct, and document reported data defects
Create and maintain technical specification documentation
(ref:hirist.tech)
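For illustration, a minimal sketch of the kind of Databricks pipeline step this role describes: aggregating a raw table into a curated Delta table. The catalog and table names are hypothetical, and the sketch assumes a Databricks or Delta Lake-enabled Spark environment.

```python
from pyspark.sql import SparkSession, functions as F

# On Databricks a SparkSession is provided as `spark`; getOrCreate() keeps the
# sketch runnable in other environments as well.
spark = SparkSession.builder.getOrCreate()

# Hypothetical table names -- replace with real catalog/schema/table names.
RAW_TABLE = "raw.events"
CURATED_TABLE = "analytics.daily_events"

# Incrementally aggregate raw events into a curated Delta table.
daily = (
    spark.table(RAW_TABLE)
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "event_type")
    .agg(F.count("*").alias("event_count"))
    # Repartitioning by the write key reduces small files and speeds up reads.
    .repartition("event_date")
)

(
    daily.write.format("delta")
    .mode("overwrite")
    .option("overwriteSchema", "true")
    .saveAsTable(CURATED_TABLE)
)
```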

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About The Role
We are seeking a skilled and passionate Data Engineer to join our team and drive the development of scalable data pipelines for Generative AI (GenAI) and Large Language Model (LLM)-powered applications. This role demands hands-on expertise in Spark, GCP, and data integration with modern AI APIs.

What You'll Do
Design and develop high-throughput, scalable data pipelines for GenAI and LLM-based solutions.
Build robust ETL/ELT processes using Spark (PySpark/Scala) on Google Cloud Platform (GCP).
Integrate enterprise and unstructured data with LLM APIs such as OpenAI, Gemini, and Hugging Face.
Process and enrich large volumes of unstructured data, including text and document embeddings.
Manage real-time and batch workflows using Airflow, Dataflow, and BigQuery.
Implement and maintain best practices for data quality, observability, lineage, and API-first designs.

What Sets You Apart
3+ years of experience building scalable Spark-based pipelines (PySpark or Scala).
Strong hands-on experience with GCP services: BigQuery, Dataproc, Pub/Sub, Cloud Functions.
Familiarity with LLM APIs, vector databases (e.g., Pinecone, FAISS), and GenAI use cases.
Expertise in text processing, unstructured data handling, and performance optimization.
Agile mindset and the ability to thrive in a fast-paced startup or dynamic environment.

Nice To Have
Experience working with embeddings and semantic search.
Exposure to MLOps or data observability tools.
Background in deploying production-grade AI/ML workflows.
(ref:hirist.tech)
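For illustration, a minimal sketch of batching document chunks through an embeddings endpoint, one of the LLM-API integration tasks this posting lists. It assumes the openai 1.x Python client; the model name, batch size, and sample texts are illustrative only, and the resulting vectors would typically be loaded into a vector store such as Pinecone or FAISS.

```python
from openai import OpenAI  # assumes the openai 1.x Python client

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def embed_texts(texts, model="text-embedding-3-small", batch_size=100):
    """Embed a list of document chunks in batches, preserving input order."""
    vectors = []
    for start in range(0, len(texts), batch_size):
        batch = texts[start:start + batch_size]
        response = client.embeddings.create(model=model, input=batch)
        vectors.extend(item.embedding for item in response.data)
    return vectors


# Example: enrich a handful of support-ticket snippets before loading them
# into a vector store for semantic search.
chunks = ["refund not processed", "cannot reset password", "invoice missing"]
embeddings = embed_texts(chunks)
print(len(embeddings), len(embeddings[0]))
```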

Posted 2 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

chennai, tamil nadu

On-site

We are looking for a Lead Data Engineer with over 8 years of experience in data engineering and software development. The ideal candidate should possess strong expertise in Python, PySpark, Airflow (Batch Jobs), HPCC, and ECL. You will be responsible for driving complex data solutions across multi-functional teams. The role requires hands-on experience in data modeling, test-driven development, and familiarity with Agile/Waterfall methodologies. As a Lead Data Engineer, you will lead initiatives, collaborate with various teams, and convert business requirements into scalable data solutions using industry best practices in managed services or staff augmentation environments. If you meet the above qualifications and are passionate about working with data to solve complex problems, we encourage you to apply for this exciting opportunity.

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

pune, maharashtra

On-site

You are a Sr. Data Engineer with over 7 years of experience, specializing in Data Engineering, Python, and SQL. You will be part of the Data Engineering team in the Enterprise Data Insights organization, responsible for building data solutions, designing ETL/ELT processes, and managing the data platform to support various stakeholders across the organization. Your role is crucial in driving technology and data-led solutions to foster growth and innovation at scale. Your responsibilities as a Senior Data Engineer include collaborating with cross-functional stakeholders to prioritize requests, identify areas for improvement, and provide recommendations. You will lead the analysis, design, and implementation of data solutions, including constructing data models and ETL processes. Furthermore, you will foster collaboration with corporate engineering, product teams, and other engineering groups, while also leading and mentoring engineering discussions and advocating for best practices. To excel in this role, you should possess a degree in Computer Science or a related technical field and have a proven track record of over 5 years in Data Engineering. Your expertise should include designing and constructing ETL/ELT processes, managing data solutions within an SLA-driven environment, and developing data products and APIs. Proficiency in SQL/NoSQL databases, particularly Snowflake, Redshift, or MongoDB, along with strong programming skills in Python, is essential. Additionally, experience with columnar OLAP databases, data modeling, and tools like dbt, Airflow, Fivetran, GitHub, and Tableau reporting will be beneficial. Good communication and interpersonal skills are crucial for effectively collaborating with business stakeholders and translating requirements into actionable insights. An added advantage would be a good understanding of Salesforce and NetSuite systems, experience in SaaS environments, designing and deploying ML models, and familiarity with events and streaming data. Join us in driving data-driven solutions and experiences to shape the future of technology and innovation.
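For illustration, since the posting mentions developing data products and APIs, a minimal sketch of one shape such an API could take using FastAPI. The service name, endpoint, and figures are hypothetical; a real implementation would query the warehouse (e.g. Snowflake) instead of an in-memory dictionary.

```python
from fastapi import FastAPI, HTTPException

app = FastAPI(title="Revenue Metrics API")  # hypothetical data product

# Stand-in for a warehouse lookup so the sketch stays self-contained.
DAILY_REVENUE = {"2024-01-01": 125000.0, "2024-01-02": 98250.5}


@app.get("/metrics/revenue/{day}")
def get_daily_revenue(day: str):
    """Return the revenue figure for a given day (YYYY-MM-DD)."""
    if day not in DAILY_REVENUE:
        raise HTTPException(status_code=404, detail=f"no revenue recorded for {day}")
    return {"date": day, "revenue": DAILY_REVENUE[day]}

# Run locally with: uvicorn revenue_api:app --reload
```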

Posted 2 weeks ago

Apply
Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies