
5747 Airflow Jobs - Page 6

Set up a job alert
JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

5.0 - 7.0 years

25 - 28 Lacs

Pune, Maharashtra, India

On-site

Job Description
We are looking for a Big Data Engineer to build and manage Big Data pipelines that handle the huge structured data sets we use as input to accurately generate analytics at scale for our valued customers. The primary focus will be on choosing optimal solutions for these purposes, then implementing, maintaining, and monitoring them. You will also be responsible for integrating them with the architecture used across the company.

Core Responsibilities: Design, build, and maintain robust data pipelines (batch or streaming) that process and transform data from diverse sources. Ensure data quality, reliability, and availability across the pipeline lifecycle. Collaborate with product managers, architects, and engineering leads to define technical strategy. Participate in code reviews, testing, and deployment processes to maintain high standards. Own smaller components of the data platform or pipelines and take end-to-end responsibility for them. Continuously identify and resolve performance bottlenecks in data pipelines. Take initiative, proactively pick up new technologies, and work as a senior individual contributor across our multiple products and features.

Required Qualifications: 5 to 7 years of experience in Big Data or data engineering roles. JVM-based languages such as Java or Scala are preferred; for candidates with solid Big Data experience, Python is also acceptable. Proven, demonstrated experience working with distributed Big Data tools and processing frameworks such as Apache Spark or equivalent (processing), Kafka or Flink (streaming), and Airflow or equivalent (orchestration). Familiarity with cloud platforms (e.g., AWS, GCP, or Azure), including services like S3, Glue, BigQuery, or EMR. Ability to write clean, efficient, and maintainable code. Good understanding of data structures, algorithms, and object-oriented programming.

Tooling & Ecosystem: Use of version control (e.g., Git) and CI/CD tools. Experience with data orchestration tools (Airflow, Dagster, etc.). Understanding of file formats like Parquet, Avro, ORC, and JSON. Basic exposure to containerization (Docker) or infrastructure-as-code (Terraform is a plus).

Skills: airflow, pipelines, data engineering, scala, python, flink, aws, data orchestration, java, kafka, gcp, parquet, orc, azure, dagster, ci/cd, git, avro, terraform, json, docker, apache spark, big data
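The listing above centers on orchestrating batch and streaming pipelines with Airflow. As a purely illustrative sketch (not part of the posting), here is a minimal Airflow 2.x DAG using the TaskFlow API; the dataset, task names, and schedule are assumptions.

```python
# Illustrative Airflow DAG for a daily batch pipeline; dataset and task names are hypothetical.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False, tags=["illustrative"])
def daily_events_pipeline():
    # "schedule" assumes Airflow 2.4+; older 2.x releases use "schedule_interval".

    @task()
    def extract() -> list[dict]:
        # A real pipeline would read from Kafka, S3, or a source database here.
        return [{"event_id": 1, "amount": 120.0}, {"event_id": 2, "amount": 75.5}]

    @task()
    def transform(rows: list[dict]) -> dict:
        # Aggregate the batch; heavier jobs would typically hand this off to Spark.
        return {"row_count": len(rows), "total_amount": sum(r["amount"] for r in rows)}

    @task()
    def load(summary: dict) -> None:
        # Placeholder for a warehouse write (e.g., BigQuery, Redshift, Delta Lake).
        print(f"Loading summary: {summary}")

    load(transform(extract()))


daily_events_pipeline()
```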

Posted 2 days ago

Apply

3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

About Us: Paytm is India's leading mobile payments and financial services distribution company. Pioneer of the mobile QR payments revolution in India, Paytm builds technologies that help small businesses with payments and commerce. Paytm's mission is to serve half a billion Indians and bring them to the mainstream economy with the help of technology.

Job Summary: Build systems for the collection and transformation of complex data sets for use in production systems. Collaborate with engineers on building and maintaining back-end services. Implement data schema and data management improvements for scale and performance. Provide insights into key performance indicators for the product and customer usage. Serve as the team's authority on data infrastructure, privacy controls and data security. Collaborate with appropriate stakeholders to understand user requirements. Support efforts for continuous improvement, metrics and test automation. Maintain operations of the live service as issues arise on a rotational, on-call basis. Verify that the data architecture meets security and compliance requirements and expectations. Should be able to learn fast and adapt at a rapid pace. Core skills: Java/Scala, SQL.

Minimum Qualifications: Bachelor's degree in computer science, computer engineering or a related field, or equivalent experience. 3+ years of progressive experience demonstrating strong architecture, programming and engineering skills. Firm grasp of data structures and algorithms, with fluency in programming languages like Java, Python, Scala. Strong SQL skills and the ability to write complex queries. Strong experience with orchestration tools like Airflow. Demonstrated ability to lead, partner, and collaborate cross-functionally across many engineering organizations. Experience with streaming technologies such as Apache Spark, Kafka, Flink. Backend experience including Apache Cassandra, MongoDB and relational databases such as Oracle, PostgreSQL. Solid hands-on AWS/GCP experience (4+ years). Strong communication and soft skills. Knowledge and/or experience with containerized environments, Kubernetes, Docker. Experience in implementing and maintaining highly scalable microservices using REST, Spring Boot, gRPC. Appetite for trying new things and building rapid POCs.

Key Responsibilities: Design, develop, and maintain scalable data pipelines to support data ingestion, processing, and storage. Implement data integration solutions to consolidate data from multiple sources into a centralized data warehouse or data lake. Collaborate with data scientists and analysts to understand data requirements and translate them into technical specifications. Ensure data quality and integrity by implementing robust data validation and cleansing processes. Optimize data pipelines for performance, scalability, and reliability. Develop and maintain ETL (Extract, Transform, Load) processes using tools such as Apache Spark, Apache NiFi, or similar technologies. Monitor and troubleshoot data pipeline issues, ensuring timely resolution and minimal downtime. Implement best practices for data management, security, and compliance. Document data engineering processes, workflows, and technical specifications. Stay up-to-date with industry trends and emerging technologies in data engineering and big data.
Compensation: If you are the right fit, we believe in creating wealth for you. With 500 mn+ registered users, 25 mn+ merchants and the depth of data in our ecosystem, we are in a unique position to democratize credit for deserving consumers and merchants – and we are committed to it. India's largest digital lending story is brewing here. It's your opportunity to be a part of the story!

Posted 2 days ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Greetings from BCforward INDIA TECHNOLOGIES PRIVATE LIMITED. Contract To Hire (C2H) Role. Location: Gurgaon. Payroll: BCforward. Work Mode: Hybrid. JD Skills: Big Data; ETL - Big Data / Data Warehousing; GCP; Adobe Experience Manager (AEM). Primary Skills: GCP, Adobe suite (e.g., AEP, CJA, CDP), SQL, Big Data, Python. Secondary Skills: Airflow, Hive, Spark, Unix shell scripting, data warehousing concepts. Please share your updated resume, PAN card soft copy, passport-size photo and UAN history. Interested applicants can share their updated resume to g.sreekanth@bcforward.com. Note: Looking for immediate to 30-day joiners at most. All the best 👍

Posted 2 days ago

Apply

5.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Key Responsibilities:
· Azure Cloud & Databricks:
  o Design and build efficient data pipelines using Azure Databricks (PySpark).
  o Implement business logic for data transformation and enrichment at scale.
  o Manage and optimize Delta Lake storage solutions.
· API Development:
  o Develop REST APIs using FastAPI to expose processed data.
  o Deploy APIs on Azure Functions for scalable and serverless data access.
· Data Orchestration & ETL:
  o Develop and manage Airflow DAGs to orchestrate ETL processes.
  o Ingest and process data from various internal and external sources on a scheduled basis.
· Database Management:
  o Handle data storage and access using PostgreSQL and MongoDB.
  o Write optimized SQL queries to support downstream applications and analytics.
· Collaboration:
  o Work cross-functionally with teams to deliver reliable, high-performance data solutions.
  o Follow best practices in code quality, version control, and documentation.
Required Skills & Experience:
· 5+ years of hands-on experience as a Data Engineer.
· Strong experience with Azure Cloud services.
· Proficient in Azure Databricks, PySpark, and Delta Lake.
· Solid experience with Python and FastAPI for API development.
· Experience with Azure Functions for serverless API deployments.
· Skilled in managing ETL pipelines using Apache Airflow.
· Hands-on experience with PostgreSQL and MongoDB.
· Strong SQL skills and experience handling large datasets.
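For context on the kind of Databricks work this listing describes, below is a minimal, hedged PySpark sketch of a Delta Lake transformation; the table paths, columns, and filter logic are invented for illustration and are not taken from the employer's codebase.

```python
# Minimal PySpark sketch of a Delta Lake enrichment step (Databricks-style);
# table paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_enrichment").getOrCreate()

# Read raw orders and a reference table of customers from Delta.
orders = spark.read.format("delta").load("/mnt/raw/orders")
customers = spark.read.format("delta").load("/mnt/reference/customers")

# Business logic: keep completed orders, join customer attributes, derive a value band.
enriched = (
    orders.filter(F.col("status") == "COMPLETED")
    .join(customers, on="customer_id", how="left")
    .withColumn(
        "value_band",
        F.when(F.col("order_value") >= 1000, "high").otherwise("standard"),
    )
)

# Write back to a curated Delta table, replacing only the run date's partition.
(
    enriched.write.format("delta")
    .mode("overwrite")
    .option("replaceWhere", "order_date = '2024-01-01'")
    .save("/mnt/curated/orders_enriched")
)
```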

Posted 2 days ago

Apply

4.0 - 6.0 years

0 Lacs

India

Remote

Location: Remote
Experience: 4-6 years
Position: Gen-AI Developer (Hands-on)
Technical Requirements:
Hands-on Data Science, Agentic AI, AI/Gen AI/ML/NLP
Azure services (App Services, Containers, AI Foundry, AI Search, Bot Services)
Experience in C# Semantic Kernel
Strong background in working with LLMs and building Gen AI applications
AI agent concepts
.NET Aspire
End-to-end environment setup for ML/LLM/Agentic AI (Dev/Prod/Test)
Machine Learning & LLM deployment and development
Model training, fine-tuning, and deployment
Kubernetes, Docker, serverless architecture
Infrastructure as Code (Terraform, Azure Resource Manager)
Performance optimization & cost management
Cloud cost management & resource optimization, auto-scaling
Cost efficiency strategies for cloud resources
MLOps frameworks (Kubeflow, MLflow, TFX)
Large language model fine-tuning and optimization
Data pipelines (Apache Airflow, Kafka, Azure Data Factory)
Data storage (SQL/NoSQL, data lakes, data warehouses)
Data processing and ETL workflows
Cloud security practices (VPCs, firewalls, IAM)
Secure cloud architecture and data privacy
CI/CD pipelines (Azure DevOps, GitHub Actions, Jenkins)
Automated testing and deployment for ML models
Agile methodologies (Scrum, Kanban)
Cross-functional team collaboration and sprint management
Experience with model fine-tuning and infrastructure setup for local LLMs
Custom model training and deployment pipeline design
Good communication skills (written and oral)

Posted 2 days ago

Apply

3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Please Read Carefully Before Applying
Do NOT apply unless you have 3+ years of real-world, hands-on experience in the requirements listed below. Do NOT apply if you are not in Delhi or the NCR, or are unwilling to relocate. This is NOT a WFH opportunity. We work 5 days from the office, so please do NOT apply if you are looking for hybrid or remote work.

About Gigaforce
Gigaforce is a California-based InsurTech company delivering a next-generation, SaaS-based claims platform purpose-built for the Property and Casualty industry. Our blockchain-optimized solution integrates artificial intelligence (AI)-powered predictive models with deep domain expertise to streamline and accelerate subrogation and claims processing. Whether for insurers, recovery vendors, or other ecosystem participants, Gigaforce transforms the traditionally fragmented claims lifecycle into an intelligent, end-to-end digital experience. Recognized as one of the most promising emerging players in the insurance technology space, Gigaforce has already achieved significant milestones. We were a finalist for InsurtechNY, a leading platform accelerating innovation in the insurance industry, and twice named a Top 50 company by the TiE Silicon Valley community. Additionally, Plug and Play Tech Center, the world's largest early-stage investor and innovation accelerator, selected Gigaforce to join its prestigious global accelerator headquartered in Sunnyvale, California. At the core of our platform is a commitment to cutting-edge innovation. We harness the power of technologies such as AI, Machine Learning, Robotic Process Automation, Blockchain, Big Data, and Cloud Computing, leveraging modern languages and frameworks like Java, Kotlin, Angular, and Node.js. We are driven by a culture of curiosity, excellence, and inclusion. At Gigaforce, we hire top talent and provide an environment where every voice matters and every idea is valued. Our employees enjoy comprehensive medical benefits, equity participation, meal cards and generous paid time off. As an equal opportunity employer, we are proud to foster a diverse, equitable, and inclusive workplace that empowers all team members to thrive.

We're seeking NLP & Generative AI Engineers with 2-8 years of hands-on experience in traditional machine learning, natural language processing, and modern generative AI techniques. If you have experience deploying GenAI solutions to production, working with open-source technologies, and handling document-centric pipelines, this is the role for you. You'll work in a high-impact role, leading the design, development, and deployment of innovative AI/ML solutions for insurance claims processing and beyond. In this agile environment, you'll work within structured sprints and leverage data-driven insights and user feedback to guide decision-making. You'll balance strategic vision with tactical execution to ensure we continue to lead the industry in subrogation automation and claims optimization for the property and casualty insurance market.

Key Responsibilities
Build and deploy end-to-end NLP and GenAI-driven products focused on document understanding, summarization, classification, and retrieval. Design and implement models leveraging LLMs (e.g., GPT, T5, BERT) with capabilities like fine-tuning, instruction tuning, and prompt engineering. Work on scalable, cloud-based pipelines for training, serving, and monitoring models. Handle unstructured data from insurance-related documents such as claims, legal texts, and contracts.
Collaborate cross-functionally with data scientists, ML engineers, product managers, and developers. Utilize and contribute to open-source tools and frameworks in the ML ecosystem. Deploy production-ready solutions using MLOps practices: Docker, Kubernetes, Airflow, MLflow, etc. Work on distributed/cloud systems (AWS, GCP, or Azure) with GPU-accelerated workflows. Evaluate and experiment with open-source LLMs and embedding models (e.g., LangChain, Haystack, LlamaIndex, HuggingFace). Champion best practices in model validation, reproducibility, and responsible AI.

Required Skills & Qualifications
2-8 years of experience as a Data Scientist, NLP Engineer, or ML Engineer. Strong grasp of traditional ML algorithms (SVMs, gradient boosting, etc.) and NLP fundamentals (word embeddings, topic modeling, text classification). Proven expertise in modern NLP & GenAI models, including: transformer architectures (e.g., BERT, GPT, T5); generative tasks such as summarization, QA, and chatbots; fine-tuning and prompt engineering for LLMs. Experience with cloud platforms (especially AWS SageMaker, GCP, or Azure ML). Strong coding skills in Python, with libraries like Hugging Face, PyTorch, TensorFlow, Scikit-learn. Experience with open-source frameworks (LangChain, LlamaIndex, Haystack) preferred. Experience in document processing pipelines and understanding structured/unstructured insurance documents is a big plus. Familiar with MLOps tools such as MLflow, DVC, FastAPI, Docker, KubeFlow, Airflow. Familiarity with distributed computing and large-scale data processing (Spark, Hadoop, Databricks).

Preferred Qualifications
Experience deploying GenAI models in production environments. Contributions to open-source projects in the ML/NLP/LLM space. Background in insurance, legal, or financial domains involving text-heavy workflows. Strong understanding of data privacy, ethical AI, and responsible model usage.
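Since the role above involves LLM-driven summarization of insurance documents, here is a small, illustrative Hugging Face pipeline example; the model choice and the sample claim text are placeholders, not Gigaforce's production setup.

```python
# Minimal sketch of a document-summarization step with Hugging Face transformers;
# model name and claim text are hypothetical examples.
from transformers import pipeline

summarizer = pipeline("summarization", model="t5-small")

claim_text = (
    "The insured vehicle was struck from behind at a traffic light on 12 March. "
    "The other driver accepted fault at the scene and a police report was filed. "
    "Repair estimates total $4,200 and a rental car was used for nine days."
)

# Generate a short abstractive summary of the claim narrative.
summary = summarizer(claim_text, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```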

Posted 2 days ago

Apply

5.0 - 9.0 years

0 Lacs

Chandigarh

On-site

You should possess a minimum of 7-10 years of industry experience, out of which a minimum of 5 years should have been in machine learning roles. Your proficiency in Python and popular ML libraries such as TensorFlow, PyTorch, and Scikit-learn should be advanced. Furthermore, you should have hands-on experience in distributed training, model optimization including quantization and pruning, and inference at scale. Experience with cloud ML platforms like AWS (SageMaker), GCP (Vertex AI), or Azure ML is essential. It is expected that you are familiar with MLOps tooling such as MLflow, TFX, Airflow, or Kubeflow, and data engineering frameworks like Spark, dbt, or Apache Beam. A solid understanding of CI/CD for ML, model governance, and post-deployment monitoring (e.g., data drift, model decay) is crucial for this role. In addition to technical skills, problem-solving abilities, effective communication, and strong documentation skills are highly valued in this position.
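As an illustration of the MLOps tooling this role references, the sketch below logs a simple scikit-learn run to MLflow; the experiment name, parameters, and metrics are invented for the example.

```python
# Illustrative MLflow experiment-tracking snippet; all names and values are made up.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic data stands in for a real training set.
X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("churn-model-baseline")
with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 8}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)

    accuracy = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_params(params)                      # record hyperparameters
    mlflow.log_metric("accuracy", accuracy)        # record evaluation metric
    mlflow.sklearn.log_model(model, artifact_path="model")  # store the model artifact
```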

Posted 2 days ago

Apply

3.0 - 5.0 years

0 Lacs

Navi Mumbai, Maharashtra, India

Remote

Description There's likely a reason you've taken the time out of your busy day to review this opportunity at PulsePoint. Maybe you're in need of a change or there's “an itch you're looking to scratch.” Whatever may be the reason, listen to what some of our team members are saying about working here: "My manager takes the time to not only identify my next career move, but the steps that will take me there. I even have my own personal training budget that I'm encouraged to spend." "Our input is valued and considered. Everyone has a voice and that goes a long way in ensuring that we're moving towards a shared goal." "The Leadership team is incredibly open on their goals and how I contribute to the larger company mission. We all know where we fit and how we can make an impact every day." PulsePoint is growing, and we're looking for a Data Analyst to join our Data Analytics team! A BIT ABOUT US: PulsePoint is a fast-growing healthcare technology company (with adtech roots) using real-time data to transform healthcare. We help brands and agencies interpret the hard-to-read signals across the health journey and unify these digital determinants of health with real-world data to produce the most dimensional view of the customer. Our award-winning advertising platforms use machine learning and programmatic automation to seamlessly activate this data, making marketing, predictive analytics, and decision support easy and instantaneous. The most exciting part about working at PulsePoint is the enormous potential for personal and professional growth. Data Analyst Our Analysts take full ownership of complex data workflows and help drive innovation across PulsePoint's analytics products like Signal and Omnichannel. They build scalable solutions, automate manual processes, and troubleshoot issues across teams. By turning raw data into clear, actionable insights, they support both internal stakeholders and external clients. Data Analysts on the Data Analytics team work closely with Product Managers, BI, Engineering, and Client Teams to deliver high-impact analysis, reporting, and feature enhancements that shape the future of our data and analytics platforms. THE PRODUCT YOU’LL BE WORKING ON: You'll be working on HCP365, the core technology of PulsePoint’s analytics products - Signal, and Omnichannel. HCP365 (the first Signal product) is an AWARD-WINNING product having won a Martech Breakthrough Award and a finalist for the PM360 Trailblazer Award. It's the only health analytics and measurement solution that provides a complete, always-on view of HCP audience engagement across all digital channels with advanced data logic, automation, and integrations. This gives marketers and researchers unprecedented access to data insights and reporting used to inform and optimize investment decisions. You'll be helping scale the platform further with new features, cleaner attribution, and smarter automation to support client growth and product expansion. WHAT YOU'LL BE DOING: This is a hybrid role at the intersection of data analysis, operational support, and technical platform ownership. Your work will directly contribute to the accuracy, scalability, and performance of our attribution and measurement solutions. 
Take ownership of key workflows like HCP365 and Omnichannel Support Omnichannel at an enterprise level across the whole organization Build and improve Big Query SQL-based pipelines and Airflow DAGs Conduct R&D and contribute towards roadmap planning for new features in product Support client teams with data deep dives, troubleshooting, and ad hoc analysis Translate complex data into simple, client-facing insights and dashboards Dig into data to answer and resolve client questions REQUIRED QUALIFICATIONS: Minimum 3-5 years of relevant experience in: Understanding of deterministic and probabilistic attribution methodologies Proficiency in analyzing multi-device campaign performance and user behavior Excellent problem-solving and data analysis skills. Ability to organize large data sets to answer critical questions, extrapolate trends, and tell a story Writing and debugging complex SQL queries from scratch using real business data Strong understanding of data workflows, joins, deduplication, attribution, and QA Working with Airflow workflows, ETL pipelines, or scheduling tools Proficient in Excel (pivot tables, VLOOKUP, formulas, functions) Understanding of web analytics platforms (Google Analytics, Adobe Analytics, etc.) Experience with at least one BI Software (Tableau, Looker, etc.) Able to work 9am-6pm EST (6:30pm-3:30am IST); we are fine with remote work Note that this role is for India only and we do not plan on transferring hires to the U.S./UK in the future PREFERRED QUALIFICATIONS: Python for automation or workflow logic Basic experience with: Designing data pipelines and optimizing them Working on AI agents, automation tools, or workflow scripting Dashboard design and data storytelling And one of: ELT experience Experience with automation Statistics background Exposure to: Health related datasets, hashed identifiers Workflow optimization or code refactoring Project Management tools like JIRA or Confluence Bonus if you've worked on R&D or helped build data products from scratch WHAT WE'RE LOOKING FOR: We're looking for a hands-on, reliable, and proactive Analyst who can: Jump into complex workflows and own them end-to-end Troubleshoot issues and bring clarity in ambiguous situations Balance between deep technical work and cross-team collaboration Build scalable, automated, and accurate solutions SELECTION PROCESS: Initial Phone Screen SQL Screening Test via CodeSignal (35 minutes) SQL Live Coding Interview (60 minutes) Hiring Manager Interview (30 minutes) Team Interview (1:1s with Sr. Client Analyst, Team Manager, SVP of Data, Product Manager who built Signal) (3 x 45 minutes) RED FLAGS FOR US: Candidates won’t succeed here if they haven’t worked closely with data sets or have simply translated requirements created by others into SQL without a deeper understanding of how the data impacts our business and, in turn, our clients’ success metrics. Watch this video here to learn more about our culture and get a sense of what it’s like to work at PulsePoint! WebMD and its affiliates is an Equal Opportunity/Affirmative Action employer and does not discriminate on the basis of race, ancestry, color, religion, sex, gender, age, marital status, sexual orientation, gender identity, national origin, medical condition, disability, veterans status, or any other basis protected by law.
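To make the BigQuery SQL pipeline and deduplication work described above concrete, here is a hedged Python sketch using the google-cloud-bigquery client; the project, dataset, and column names are placeholders, not PulsePoint's schema.

```python
# Hedged sketch of a BigQuery deduplication step run from Python;
# project, dataset, table, and column names are invented for illustration.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

DEDUPE_SQL = """
SELECT * EXCEPT(row_num)
FROM (
  SELECT
    *,
    ROW_NUMBER() OVER (
      PARTITION BY user_id, event_timestamp
      ORDER BY ingested_at DESC
    ) AS row_num
  FROM `example-project.analytics.raw_events`
)
WHERE row_num = 1
"""

# Write the deduplicated result to a destination table, replacing previous contents.
job_config = bigquery.QueryJobConfig(
    destination="example-project.analytics.events_deduped",
    write_disposition="WRITE_TRUNCATE",
)
query_job = client.query(DEDUPE_SQL, job_config=job_config)
query_job.result()  # wait for the job to finish
print(f"Wrote deduplicated rows to {query_job.destination}")
```

In a scheduled pipeline, a query like this would typically run as an Airflow task (for example via the BigQuery operators in the Google provider package) rather than an ad hoc script.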

Posted 2 days ago

Apply

4.0 - 8.0 years

0 Lacs

Delhi

On-site

The ideal candidate should possess extensive expertise in SQL, data modeling, ETL/ELT pipeline development, and cloud-based data platforms like Databricks or Snowflake. You will be responsible for designing scalable data models, managing reliable data workflows, and ensuring the integrity and performance of critical financial datasets. Collaboration with engineering, analytics, product, and compliance teams is a key aspect of this role. Responsibilities: - Design, implement, and maintain logical and physical data models for transactional, analytical, and reporting systems. - Develop and oversee scalable ETL/ELT pipelines to process large volumes of financial transaction data. - Optimize SQL queries, stored procedures, and data transformations for enhanced performance. - Create and manage data orchestration workflows using tools like Airflow, Dagster, or Luigi. - Architect data lakes and warehouses utilizing platforms such as Databricks, Snowflake, BigQuery, or Redshift. - Ensure adherence to data governance, security, and compliance standards (e.g., PCI-DSS, GDPR). - Work closely with data engineers, analysts, and business stakeholders to comprehend data requirements and deliver solutions. - Conduct data profiling, validation, and quality assurance to maintain clean and consistent data. - Maintain comprehensive documentation for data models, pipelines, and architecture. Required Skills & Qualifications: - Proficiency in advanced SQL, including query tuning, indexing, and performance optimization. - Experience in developing ETL/ELT workflows with tools like Spark, dbt, Talend, or Informatica. - Familiarity with data orchestration frameworks such as Airflow, Dagster, Luigi, etc. - Hands-on experience with cloud-based data platforms like Databricks, Snowflake, or similar technologies. - Deep understanding of data warehousing principles like star/snowflake schema, slowly changing dimensions, etc. - Knowledge of cloud services (AWS, GCP, or Azure) and data security best practices. - Strong analytical and problem-solving skills in high-scale environments. Preferred Qualifications: - Exposure to real-time data pipelines like Kafka, Spark Streaming. - Knowledge of data mesh or data fabric architecture paradigms. - Certifications in Snowflake, Databricks, or relevant cloud platforms. - Familiarity with Python or Scala for data engineering tasks.,

Posted 2 days ago

Apply

3.0 - 5.0 years

0 Lacs

India

Remote

There's likely a reason you've taken the time out of your busy day to review this opportunity at PulsePoint. Maybe you're in need of a change or there's “an itch you're looking to scratch.” Whatever may be the reason, listen to what some of our team members are saying about working here: "My manager takes the time to not only identify my next career move, but the steps that will take me there. I even have my own personal training budget that I'm encouraged to spend." "Our input is valued and considered. Everyone has a voice and that goes a long way in ensuring that we're moving towards a shared goal . " "The Leadership team is incredibly open on their goals and how I contribute to the larger company mission. We all know where we fit and how we can make an impact every day." PulsePoint is growing, and we're looking for a Data Analyst to join our Data Analytics team! A BIT ABOUT US: PulsePoint is a fast-growing healthcare technology company (with adtech roots) using real-time data to transform healthcare. We help brands and agencies interpret the hard-to-read signals across the health journey and unify these digital determinants of health with real-world data to produce the most dimensional view of the customer. Our award-winning advertising platforms use machine learning and programmatic automation to seamlessly activate this data, making marketing, predictive analytics, and decision support easy and instantaneous. The most exciting part about working at PulsePoint is the enormous potential for personal and professional growth. Data Analyst Our Analysts take full ownership of complex data workflows and help drive innovation across PulsePoint's analytics products like Signal and Omnichannel. They build scalable solutions, automate manual processes, and troubleshoot issues across teams. By turning raw data into clear, actionable insights, they support both internal stakeholders and external clients. Data Analysts on the Data Analytics team work closely with Product Managers, BI, Engineering, and Client Teams to deliver high-impact analysis, reporting, and feature enhancements that shape the future of our data and analytics platforms. THE PRODUCT YOU’LL BE WORKING ON : You'll be working on HCP365, the core technology of PulsePoint’s analytics products - Signal, and Omnichannel. HCP365 (the first Signal product) is an AWARD-WINNING product having won a Martech Breakthrough Award and a finalist for the PM360 Trailblazer Award . It's the only health analytics and measurement solution that provides a complete, always-on view of HCP audience engagement across all digital channels with advanced data logic, automation, and integrations. This gives marketers and researchers unprecedented access to data insights and reporting used to inform and optimize investment decisions. You'll be helping scale the platform further with new features, cleaner attribution, and smarter automation to support client growth and product expansion. WHAT YOU ' LL BE DOING: This is a hybrid role at the intersection of data analysis, operational support, and technical platform ownership. Your work will directly contribute to the accuracy, scalability, and performance of our attribution and measurement solutions. 
Take ownership of key workflows like HCP365 and Omnichannel Support Omnichannel at an enterprise level across the whole organization Build and improve Big Query SQL-based pipelines and Airflow DAGs Conduct R&D and contribute towards roadmap planning for new features in product Support client teams with data deep dives, troubleshooting, and ad hoc analysis Translate complex data into simple, client-facing insights and dashboards Dig into data to answer and resolve client questions REQUIRED QUALIFICATIONS: Minimum 3-5 years of relevant experience in: Understanding of deterministic and probabilistic attribution methodologies Proficiency in analyzing multi-device campaign performance and user behavior Excellent problem-solving and data analysis skills. Ability to organize large data sets to answer critical questions, extrapolate trends, and tell a story Writing and debugging complex SQL queries from scratch using real business data Strong understanding of data workflows, joins, deduplication, attribution, and QA Working with Airflow workflows, ETL pipelines, or scheduling tools Proficient in Excel (pivot tables, VLOOKUP, formulas, functions) Understanding of web analytics platforms (Google Analytics, Adobe Analytics, etc.) Experience with at least one BI Software (Tableau, Looker, etc.) Able to work 9am-6pm EST (6:30pm-3:30am IST); we are fine with remote work Note that this role is for India only and we do not plan on transferring hires to the U.S./UK in the future PREFERRED QUALIFICATIONS : Python for automation or workflow logic Basic experience with: Designing data pipelines and optimizing them Working on AI agents, automation tools, or workflow scripting Dashboard design and data storytelling And one of: ELT experience Experience with automation Statistics background Exposure to: Health related datasets, hashed identifiers Workflow optimization or code refactoring Project Management tools like JIRA or Confluence Bonus if you've worked on R&D or helped build data products from scratch WHAT WE ' RE LOOKING FOR: We're looking for a hands-on, reliable, and proactive Analyst who can: Jump into complex workflows and own them end-to-end Troubleshoot issues and bring clarity in ambiguous situations Balance between deep technical work and cross-team collaboration Build scalable, automated, and accurate solutions SELECTION PROCESS: Initial Phone Screen SQL Screening Test via CodeSignal (35 minutes) SQL Live Coding Interview (60 minutes) Hiring Manager Interview (30 minutes) Team Interview (1:1s with Sr. Client Analyst, Team Manager, SVP of Data, Product Manager who built Signal) (3 x 45 minutes) RED FLAGS FOR US: Candidates won’t succeed here if they haven’t worked closely with data sets or have simply translated requirements created by others into SQL without a deeper understanding of how the data impacts our business and, in turn, our clients’ success metrics. Watch this video here to learn more about our culture and get a sense of what it’s like to work at PulsePoint! WebMD and its affiliates is an Equal Opportunity/Affirmative Action employer and does not discriminate on the basis of race, ancestry, color, religion, sex, gender, age, marital status, sexual orientation, gender identity, national origin, medical condition, disability, veterans status, or any other basis protected by law.

Posted 2 days ago

Apply

6.0 - 10.0 years

0 Lacs

Karnataka

On-site

As an Application Development and Support Engineer with 6 - 10 years of experience, your primary responsibilities will include developing, maintaining, and supporting Python-based applications and automation scripts. You will be tasked with designing, implementing, and optimizing SQL queries and database objects to facilitate application functionality and quant needs. Additionally, you will build and manage ETL pipelines using tools like dbt or Azure Data Factory (ADF) and troubleshoot any application or data pipeline issues promptly to minimize downtime. In this role, you will be expected to take ownership of assigned tasks and drive them to completion with minimal supervision. Continuous improvement of processes and workflows to enhance efficiency and quality will be a key focus. It will also be your responsibility to document solutions, processes, and support procedures clearly and comprehensively. To excel in this position, you should possess technical proficiency in Python programming with experience in scripting and automation. Strong knowledge of SQL for querying and manipulating relational databases is essential, along with hands-on experience in ETL tools like dbt and Airflow. Familiarity with version control systems such as Git, understanding of Agile software development methodologies, and knowledge of containerization and orchestration tools like Docker and Kubernetes are also required. In addition to technical skills, soft skills play a crucial role in this role. You should have excellent problem-solving skills with a proactive approach, a strong sense of ownership, and accountability for your work. The ability to work effectively both independently and as part of a collaborative Agile team is necessary. Good communication skills to articulate questions, concerns, and recommendations, as well as a demonstrated drive to complete tasks efficiently and meet deadlines, will be beneficial. Preferred qualifications for this role include experience with cloud platforms like Azure, familiarity with monitoring and alerting tools for application support, and prior experience in the financial services or fintech domain. If you are an individual who thrives in a dynamic environment, enjoys working with cutting-edge technologies, and is passionate about application development and support, this role could be the perfect fit for you.,
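As a generic illustration of the dbt-plus-Airflow pattern this role mentions, the sketch below wires dbt run and dbt test into a nightly Airflow DAG; the project path, target name, and schedule are assumptions.

```python
# Illustrative Airflow DAG running a dbt project on a nightly schedule;
# the dbt project path and target are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="nightly_dbt_refresh",
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",  # 02:00 daily; assumes Airflow 2.4+ ("schedule_interval" on older versions)
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/analytics/dbt_project && dbt run --target prod",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="cd /opt/analytics/dbt_project && dbt test --target prod",
    )

    # Only test the models after they have been rebuilt.
    dbt_run >> dbt_test
```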

Posted 2 days ago

Apply

12.0 - 16.0 years

0 Lacs

Pune, Maharashtra

On-site

The Engineering Lead Analyst is a strategic professional who stays abreast of developments within own field and contributes to directional strategy by considering their application in own job and the business. Recognized technical authority for an area within the business. This position is for the lead role in Client Financials Improvements project. Selected candidate will be responsible for development and execution of project within ISG Data Platform group. The successful candidate will be working closely with the global team, to interface the business, translating business requirements into technical requirements and will have strong functional knowledge from banking and financial system. Lead the definition and ongoing management of target application architecture for Client Financials. Leverage internal and external leading practices and liaising with other Citi risk organizations to determine and maintain appropriate alignment, specifically with Citi Data Standards. Establish a governance process to oversee implementation activities and ensure ongoing alignment to the defined architecture. Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency. Qualifications: 12-16 years experience in analyzing and defining risk management data structures Skills: - Strong working experience in Python & PySpark - Prior working experience in writing APIs / MicroServices development - Hands-on experience of writing SQL queries in multiple database environments and OS; Experience in validating end to end flow of data in an application. - Hands on experience in working with SQL and NoSQL databases. - Working experience with Airflow and other Orchestrator - Experience in Design and Architect of application - Assess the list of packaged applications and define the re-packaging approach - Understanding of Capital markets (risk management process), Loans / CRMS required - Knowledge of process automation and engineering will be plus. - Demonstrated influencing, facilitation and partnering skills - Track record of interfacing with and presenting results to senior management - Experience with all phases of Software Development Life Cycle - Strong stakeholder engagement skills - Organize and attend workshops to understand the current state of Client Financials - Proven aptitude for organizing and prioritizing work effectively (Must be able to meet deadlines) - Propose a solution and deployment approach to achieve the goals. Citi is an equal opportunity and affirmative action employer. Citigroup Inc. and its subsidiaries ("Citi) invite all qualified interested applicants to apply for career opportunities. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi.,

Posted 2 days ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

Remote

About The Job The Red Hat Chaos Engineering team, part of the Performance and Scale department, is looking for a Senior Software Engineer to join us in Bangalore, India to work on chaos testing Red Hat OpenShift Container Platform, Red Hat OpenShift Virtualization and related product portfolio to identify bottlenecks, tunings and capacity planning guidance under failure conditions. Our goal is to make these products the platform of choice for Red Hat’s enterprise customers! As a senior member of the team, you will be responsible for providing comprehensive resilience, reliability, performance and scalability assessments of the products and improving them. You will collaborate with various Engineering teams on driving features, bug fixes, tunings and providing guidance to ensure stable releases. You will also engage with customers to assist them with establishing chaos and performance test pipelines, best practices, strategies to ensure a scalable environment. This role needs an engineer that thinks creatively, adapts to rapid change, and has the willingness to learn and apply new technologies. You will be joining a vibrant open source culture, and helping promote performance and innovation in this Red Hat engineering team. What will you do? Formulate test plans and carry out chaos testing, performance and scalability benchmarks against various components/features of the OCPv platform to characterize reliability, resilience, drive product performance improvements and detect regressions through data analysis and visualization under failure conditions such as network faults, infrastructure failures, storage faults, etc Work on capacity planning guidance for the product to handle failures while still being performant Develop tools and automation related to fault injection, load generation and release CI Work on AI integration to improve test coverage Assist customers Collaborate with other engineering teams to resolve resilience and performance issues Triage, debug, and solve customer/partner cases related to virtualization reliability, performance and scale Publish results, conclusions, recommendations and best practices via internal test reports, presentations, external blogs and official documentation to support our partners and customers Participate in internal and external conferences about your work and results What will you bring? Bachelor's or Master's degree in Computer Science or related field, or equivalent experience Overall 5+years of experience in software development 5+ years of programming experience in Python, Golang or related programming Experience with site reliability, chaos testing, performance benchmarking, data capture, analysis and debugging Very strong Linux system administration and system engineering skills. Experience with container ecosystems like Docker, Podman and Kubernetes Ability to quickly learn technologies with guidance and maintain high attention to detail Experience with tools, metrics collection and analysis such as iostat, vmstat, sar, perf, pcp, prometheus, Grafana and Elasticsearch Familiarity with Continuous Integration frameworks, automation like Jenkins, Airflow, Ansible etc. and version control tools such as Git, etc Experience working with public clouds like AWS, Azure, GCP, or IBM Cloud, as well as bare metal environments. 
Excellent written and verbal language skills in English The Following Are Considered a Plus Experience with chaos testing and maintaining reliability of infrastructure at large scale Experience working with virtualization technologies such as KubeVirt, VMware Knowledge of performance observability/profiling tools like eBPF, Flame Graphs About Red Hat Red Hat is the world’s leading provider of enterprise open source software solutions, using a community-powered approach to deliver high-performing Linux, cloud, container, and Kubernetes technologies. Spread across 40+ countries, our associates work flexibly across work environments, from in-office, to office-flex, to fully remote, depending on the requirements of their role. Red Hatters are encouraged to bring their best ideas, no matter their title or tenure. We're a leader in open source because of our open and inclusive environment. We hire creative, passionate people ready to contribute their ideas, help solve complex problems, and make an impact. Inclusion at Red Hat Red Hat’s culture is built on the open source principles of transparency, collaboration, and inclusion, where the best ideas can come from anywhere and anyone. When this is realized, it empowers people from different backgrounds, perspectives, and experiences to come together to share ideas, challenge the status quo, and drive innovation. Our aspiration is that everyone experiences this culture with equal opportunity and access, and that all voices are not only heard but also celebrated. We hope you will join our celebration, and we welcome and encourage applicants from all the beautiful dimensions that compose our global village. Equal Opportunity Policy (EEO) Red Hat is proud to be an equal opportunity workplace and an affirmative action employer. We review applications for employment without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, citizenship, age, veteran status, genetic information, physical or mental disability, medical condition, marital status, or any other basis prohibited by law. Red Hat does not seek or accept unsolicited resumes or CVs from recruitment agencies. We are not responsible for, and will not pay, any fees, commissions, or any other payment related to unsolicited resumes or CVs except as required in a written contract between Red Hat and the recruitment agency or party requesting payment of a fee. Red Hat supports individuals with disabilities and provides reasonable accommodations to job applicants. If you need assistance completing our online job application, email application-assistance@redhat.com. General inquiries, such as those regarding the status of a job application, will not receive a reply.
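For readers unfamiliar with the chaos-testing work the listing above describes, the snippet below is a deliberately simple, generic pod-deletion experiment using the official Kubernetes Python client; it is not Red Hat's own tooling, and the namespace and label selector are assumptions.

```python
# Minimal pod-deletion chaos sketch with the Kubernetes Python client;
# namespace and label selector are placeholders.
import random

from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running inside a cluster
v1 = client.CoreV1Api()

# Pick a random pod from the target workload and delete it to observe recovery behaviour.
pods = v1.list_namespaced_pod(namespace="demo-app", label_selector="app=web").items
if pods:
    victim = random.choice(pods)
    print(f"Deleting pod {victim.metadata.name} to test resilience")
    v1.delete_namespaced_pod(name=victim.metadata.name, namespace="demo-app")
```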

Posted 2 days ago

Apply

7.0 - 12.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Lead Data Engineer with 7-12 years of experience, you will be an integral part of our team, contributing significantly to the design, development, and maintenance of our data infrastructure. Your primary responsibilities will revolve around creating and managing robust data architectures, ETL processes, data warehouses, and utilizing big data and cloud technologies to support our business intelligence and analytics needs. You will lead the design and implementation of data architectures that facilitate data warehousing, integration, and analytics platforms. Developing and optimizing ETL pipelines will be a key aspect of your role, ensuring efficient processing of large datasets and implementing data transformation and cleansing processes to maintain data quality. Your expertise will be crucial in building and maintaining scalable data warehouse solutions using technologies such as Snowflake, Databricks, or Redshift. Additionally, you will leverage AWS Glue and PySpark for large-scale data processing, manage data pipelines with Apache Airflow, and utilize cloud platforms like AWS, Azure, and GCP for data storage, processing, and analytics. Establishing data governance and security best practices, ensuring data integrity, accuracy, and availability, and implementing monitoring and alerting systems are vital components of your responsibilities. Collaborating closely with stakeholders, mentoring junior engineers, and leading data-related projects will also be part of your role. Furthermore, your technical skills should include proficiency in ETL tools like Informatica Power Center, Python, PySpark, SQL, RDBMS platforms, and data warehousing concepts. Soft skills such as excellent communication, leadership, problem-solving, and the ability to manage multiple projects effectively will be essential for success in this role. Preferred qualifications include experience with machine learning workflows, certification in relevant data engineering technologies, and familiarity with Agile methodologies and DevOps practices. Location: Hyderabad Employment Type: Full-time,

Posted 2 days ago

Apply

3.0 - 7.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

The ideal candidate for the SE/ Senior Data Engineer position should possess deep experience in ETL/ELT processes, data warehousing principles, and real-time and batch data integrations. As a senior member of the team, you will play a crucial role in mentoring and guiding junior engineers, defining best practices, and contributing to the overall data strategy. Strong hands-on experience in SQL, Python, and ideally Airflow and Bash scripting is essential for this role. Key Responsibilities: - Architect and implement scalable data integration and data pipeline solutions using Azure cloud services. - Design, develop, and maintain ETL/ELT processes, including data extraction, transformation, loading, and quality checks using tools like SQL, Python, and Airflow. - Build and automate data workflows and orchestration pipelines, with knowledge of Airflow or equivalent tools being a plus. - Write and maintain Bash scripts for automating system tasks and managing data jobs. - Collaborate with business and technical stakeholders to understand data requirements and translate them into technical solutions. - Develop and manage data flows, data mappings, and data quality & validation rules across multiple tenants and systems. - Implement best practices for data modeling, metadata management, and data governance. - Configure, maintain, and monitor integration jobs to ensure high availability and performance. - Lead code reviews, mentor data engineers, and help shape engineering culture and standards. - Stay current with emerging technologies and recommend tools or processes to improve the team's effectiveness. Required Qualifications: - Bachelors or Masters degree in Computer Science, Information Systems, or related field. - 3+ years of experience in data engineering, with a strong focus on Azure-based solutions. - Proficiency in SQL and Python for data processing and pipeline development. - Experience in developing and orchestrating pipelines using Airflow (preferred) and writing automation scripts using Bash. - Proven experience in designing and implementing real-time and batch data integrations. - Hands-on experience with Azure Data Factory, Azure Data Lake, Azure Synapse, Databricks, or similar technologies. - Strong understanding of data warehousing principles, ETL/ELT methodologies, and data pipeline architecture. - Familiarity with data quality, metadata management, and data validation frameworks. - Strong problem-solving skills and the ability to communicate complex technical concepts clearly. Preferred Qualifications: - Experience with multi-tenant SaaS data solutions. - Familiarity with DevOps practices, CI/CD pipelines, and version control systems (e.g., Git). - Experience mentoring and coaching other engineers in technical and architectural decision-making.,

Posted 2 days ago

Apply

15.0 - 19.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Technical Lead / Data Architect, you will play a crucial role in our organization by leveraging your expertise in modern data architectures, cloud platforms, and analytics technologies. In this leadership position, you will be responsible for designing robust data solutions, guiding engineering teams, and ensuring successful project execution in collaboration with the project manager. Your key responsibilities will include architecting and designing end-to-end data solutions across multi-cloud environments such as AWS, Azure, and GCP. You will lead and mentor a team of data engineers, BI developers, and analysts to deliver on complex project deliverables. Additionally, you will define and enforce best practices in data engineering, data warehousing, and business intelligence. You will design scalable data pipelines using tools like Snowflake, dbt, Apache Spark, and Airflow, and act as a technical liaison with clients, providing strategic recommendations and maintaining strong relationships. To be successful in this role, you should have at least 15 years of experience in IT with a focus on data architecture, engineering, and cloud-based analytics. You must have expertise in multi-cloud environments and cloud-native technologies, along with deep knowledge of Snowflake, Data Warehousing, ETL/ELT pipelines, and BI platforms. Strong leadership and mentoring skills are essential, as well as excellent communication and interpersonal abilities to engage with both technical and non-technical stakeholders. In addition to the required qualifications, certifications in major cloud platforms and experience in enterprise data governance, security, and compliance are preferred. Familiarity with AI/ML pipeline integration would be a plus. We offer a collaborative work environment, opportunities to work with cutting-edge technologies and global clients, competitive salary and benefits, and continuous learning and professional development opportunities. Join us in driving innovation and excellence in data architecture and analytics.,

Posted 2 days ago

Apply

0.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

We have an exciting opportunity to join the Macquarie team as a Data Engineer and implement the group's data strategy, leveraging cutting-edge technology and cloud services. If you are keen to work in the private markets space for one of Macquarie's most successful global divisions, then this role could be for you. At Macquarie, our advantage is bringing together diverse people and empowering them to shape all kinds of possibilities. We are a global financial services group operating in 31 markets and with 56 years of unbroken profitability. You'll be part of a friendly and supportive team where everyone - no matter what role - contributes ideas and drives outcomes. What role will you play? In this role, you will design and manage data pipelines using Python, SQL, and tools like Airflow and DBT Cloud, collaborate with business teams to develop prototypes based on business requirements, and create and maintain data products. You will gain hands-on experience with technologies such as Google Cloud Platform (GCP) services, including BigQuery, to deliver scalable and robust solutions. As a key team member, your strong communication skills and self-motivation will support engagement with stakeholders at all levels. What You Offer Strong proficiency in data technology platforms, including DBT Cloud, GCP BigQuery, and Airflow Solid experience in SQL is mandatory Python, with a good understanding of APIs, is advantageous Domain knowledge of asset management and the private markets industry, including relevant technologies and business processes Familiarity with cloud platforms such as AWS, GCP, and Azure, along with related services Excellent verbal and written communication skills to effectively engage stakeholders and simplify complex technical concepts for non-technical audiences We love hearing from anyone inspired to build a better future with us; if you're excited about the role or working at Macquarie, we encourage you to apply. What We Offer Macquarie employees can access a wide range of benefits which, depending on eligibility criteria, include: Hybrid and flexible working arrangements One wellbeing leave day per year Up to 20 weeks paid parental leave as well as benefits to support you as you transition to life as a working parent Paid volunteer leave and donation matching Other benefits to support your physical, mental and financial wellbeing Access a wide range of learning and development opportunities About Technology Technology enables every aspect of Macquarie, for our people, our customers and our communities. We're a global team that is passionate about accelerating the digital enterprise, connecting people and data, building platforms and applications and designing tomorrow's technology solutions. Our commitment to diversity, equity and inclusion We are committed to fostering a diverse, equitable and inclusive workplace. We encourage people from all backgrounds to apply and welcome all identities, including race, ethnicity, cultural identity, nationality, gender (including gender identity or expression), age, sexual orientation, marital or partnership status, parental, caregiving or family status, neurodiversity, religion or belief, disability, or socio-economic background. We welcome further discussions on how you can feel included and belong at Macquarie as you progress through our recruitment process. Our aim is to provide reasonable adjustments to individuals who may need support during the recruitment process and through working arrangements.
If you require additional assistance, please let us know in the application process.

Posted 2 days ago

Apply

6.0 - 10.0 years

0 Lacs

Noida, Uttar Pradesh, India

Remote

Iris's Fortune 100 direct client is looking for a Senior AWS Data Engineer for the Pune / Noida / Gurgaon location. Position: Senior AWS Data Engineer. Location: Pune / Noida / Gurgaon. Hybrid: 3 days office, 2 days work from home. Preferred: Immediate joiners or 0-30 days notice period. Job Description: 6 to 10 years of overall experience. Good experience in data engineering is required. Good experience in AWS, SQL, AWS Glue, PySpark, Airflow, CDK, and Redshift. Good communication skills are required. About Iris Software Inc. With 4,000+ associates and offices in India, U.S.A. and Canada, Iris Software delivers technology services and solutions that help clients complete fast, far-reaching digital transformations and achieve their business goals. A strategic partner to Fortune 500 and other top companies in financial services and many other industries, Iris provides a value-driven approach - a unique blend of highly-skilled specialists, software engineering expertise, cutting-edge technology, and flexible engagement models. High customer satisfaction has translated into long-standing relationships and preferred-partner status with many of our clients, who rely on our 30+ years of technical and domain expertise to future-proof their enterprises. Associates of Iris work on mission-critical applications supported by a workplace culture that has won numerous awards in the last few years, including Certified Great Place to Work in India; Top 25 GPW in IT & IT-BPM; Ambition Box Best Place to Work, #3 in IT/ITES; and Top Workplace NJ-USA.
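As a generic illustration of the AWS Glue and PySpark work this listing asks for, here is a minimal Glue job sketch; the catalog database, table, and S3 path are placeholders, not the client's actual resources.

```python
# Illustrative AWS Glue (PySpark) job skeleton; catalog names and the S3 path are hypothetical.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog, keep completed orders, and write Parquet to S3.
orders = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"
)
completed = orders.filter(lambda row: row["status"] == "COMPLETED")
glue_context.write_dynamic_frame.from_options(
    frame=completed,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/orders/"},
    format="parquet",
)
job.commit()
```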

Posted 2 days ago

Apply

6.0 - 10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

As a Lead Data Engineer specializing in Snowflake Migration at Anblicks, you will be a key player in our Data Modernization Center of Excellence (COE). You will be at the forefront of transforming traditional data platforms by utilizing Snowflake, cloud-native tools, and intelligent automation to help enterprises unlock the power of the cloud.

Your primary responsibility will be to lead the migration of legacy data warehouses such as Teradata, Netezza, Oracle, or SQL Server to Snowflake. You will re-engineer and modernize ETL pipelines using cloud-native tools and frameworks like DBT, Snowflake Tasks, Streams, and Snowpark. Additionally, you will design robust ELT pipelines on Snowflake that ensure high performance, scalability, and cost optimization, while integrating Snowflake with AWS, Azure, or GCP.

In this role, you will also focus on implementing secure and compliant architectures with RBAC, masking policies, Unity Catalog, and SSO. Automation of repeatable tasks, ensuring data quality and parity between source and target systems, and mentoring junior engineers will be essential aspects of your responsibilities. Collaboration with client stakeholders, architects, and delivery teams to define migration strategies, as well as presenting solutions and roadmaps to technical and business leaders, will also be part of your role.

To qualify for this position, you should have at least 6 years of experience in Data Engineering or Data Warehousing, with a minimum of 3 years of hands-on experience in Snowflake design and development. Strong expertise in migrating ETL pipelines from Talend and/or Informatica to cloud-native alternatives, and proficiency in SQL, data modeling, ELT design, and pipeline performance tuning are prerequisites. Familiarity with tools like DBT Cloud, Airflow, Snowflake Tasks, or similar orchestrators, as well as a solid understanding of cloud data architecture, security frameworks, and data governance, are also required.

Preferred qualifications include Snowflake certifications (SnowPro Core and/or SnowPro Advanced Architect), experience with custom migration tools, metadata-driven pipelines, or LLM-based code conversion, familiarity with domain-specific architectures in Retail, Healthcare, or Manufacturing, and prior experience in a COE or modernization-focused consulting environment.

By joining Anblicks as a Lead Data Engineer, you will have the opportunity to lead enterprise-wide data modernization programs, tackle complex real-world challenges, and work alongside certified Snowflake architects, cloud engineers, and innovation teams. You will also have the chance to build reusable IP that scales across clients and industries, while experiencing accelerated career growth in the dynamic Data & AI landscape.
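For illustration only, the Streams-and-Tasks pattern mentioned above might look roughly like the following minimal sketch; every identifier, credential and schedule here is a made-up placeholder rather than part of any real migration.

# Illustrative Streams + Tasks setup for incremental loads on Snowflake.
# All identifiers and credentials below are placeholders, not a real environment.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",   # placeholder
    user="etl_user",        # placeholder
    password="***",         # placeholder; a real setup would prefer key-pair auth or SSO
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="STAGING",
)

statements = [
    # Capture inserts and updates on the raw table.
    "CREATE STREAM IF NOT EXISTS orders_stream ON TABLE raw_orders",
    # Merge captured changes into the curated table every hour.
    """
    CREATE OR REPLACE TASK merge_orders
      WAREHOUSE = ETL_WH
      SCHEDULE = '60 MINUTE'
    AS
      MERGE INTO curated_orders t
      USING orders_stream s ON t.order_id = s.order_id
      WHEN MATCHED THEN UPDATE SET t.amount = s.amount
      WHEN NOT MATCHED THEN INSERT (order_id, amount) VALUES (s.order_id, s.amount)
    """,
    "ALTER TASK merge_orders RESUME",
]

cur = conn.cursor()
try:
    for stmt in statements:
        cur.execute(stmt)
finally:
    cur.close()
    conn.close()

The same pattern generalizes: one stream per source table feeding a scheduled task that merges changes into the curated layer.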

Posted 2 days ago

Apply

3.0 years

0 Lacs

Bhubaneswar, Odisha, India

On-site

Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Databricks Unified Data Analytics Platform
Good to have skills: Python (Programming Language), Apache Airflow
Minimum 3 Year(s) Of Experience Is Required
Educational Qualification: 15 years full time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with various teams to understand their needs, developing innovative solutions, and ensuring that applications are aligned with business objectives. You will engage in problem-solving activities, participate in team meetings, and contribute to the overall success of projects by leveraging your expertise in application development.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge.
- Continuously evaluate and improve application performance and user experience.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in Databricks Unified Data Analytics Platform.
- Good To Have Skills: Experience with Apache Airflow, Python (Programming Language).
- Strong understanding of data integration and ETL processes.
- Experience with cloud-based data solutions and architectures.
- Familiarity with data governance and management best practices.

Additional Information:
- The candidate should have minimum 5 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Kolkata office.
- A 15 years full time education is required.

Posted 2 days ago

Apply

3.0 - 5.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Associate

Job Description & Summary: At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Job Description & Summary: A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Responsibilities:
3+ years of experience in implementing analytical solutions using Palantir Foundry, preferably in PySpark and on hyperscaler platforms (cloud services like AWS, GCP and Azure), with a focus on building data transformation pipelines at scale.
Team management: must have experience in mentoring and managing large teams (20 to 30 people) for complex engineering programs. The candidate should have experience in hiring and nurturing talent in Palantir Foundry.
Training: the candidate should have experience in creating training programs in Foundry and delivering them in a hands-on format, either offline or virtually.
At least 3 years of hands-on experience of building and managing Ontologies on Palantir Foundry.
At least 3 years of experience with Foundry services: data engineering with Contour and Fusion; dashboarding and report development using Quiver (or Reports); application development using Workshop.
Exposure to Map and Vertex is a plus. Palantir AIP experience will be a plus.
Hands-on experience in data engineering and building data pipelines (code/no code) for ELT/ETL data migration, data refinement and data quality checks on Palantir Foundry.
Hands-on experience of managing the data life cycle on at least one hyperscaler platform (AWS, GCP, Azure) using managed services or containerized deployments for data pipelines is necessary.
Hands-on experience in working on and building with Ontology (especially demonstrable experience in building semantic relationships).
Proficiency in SQL, Python and PySpark. Demonstrable ability to write and optimize SQL and Spark jobs. Some experience in Apache Kafka and Airflow is a prerequisite as well.
Hands-on experience with DevOps on hyperscaler platforms and Palantir Foundry is necessary. Experience in MLOps is a plus.
Experience in developing and managing scalable architecture and working experience in managing large data sets.
Open-source contributions (or own repositories highlighting work) on GitHub or Kaggle are a plus.
Experience with graph data and graph analysis libraries (like Spark GraphX, Python NetworkX etc.) is a plus.
A Palantir Foundry Certification (Solution Architect, Data Engineer) is a plus; the certificate should be valid at the time of the interview.
Experience in developing GenAI applications is a plus.

Mandatory Skill Sets: At least 3 years of hands-on experience of building and managing Ontologies on Palantir Foundry. At least 3 years of experience with Foundry services.
Preferred Skill Sets: Palantir Foundry
Years of Experience Required: 2 to 4 years (2 years relevant)
Education Qualification: Bachelor's degree in computer science, data science or any other engineering discipline. A Master's degree is a plus.
Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study Required: Bachelor Degree, Master Degree
Degrees/Field of Study Preferred:
Certifications (if blank, certifications not specified)
Required Skills: Palantir (Software)
Optional Skills: Accepting Feedback, Active Listening, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis, Intellectual Curiosity, Java (Programming Language), Market Development + 11 more
Desired Languages (If blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship: No
Government Clearance Required: No
Job Posting End Date:
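For illustration only, the kind of PySpark transformation job the responsibilities above allude to might look roughly like this minimal sketch; the bucket paths and column names are invented placeholders.

# Small, self-contained PySpark job: read raw events, aggregate, write partitioned output.
# File paths and column names are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily_event_rollup").getOrCreate()

events = spark.read.parquet("s3://example-bucket/raw/events/")  # placeholder path

daily_rollup = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "customer_id")
    .agg(
        F.count("*").alias("event_count"),
        F.sum("amount").alias("total_amount"),
    )
)

(
    daily_rollup.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-bucket/curated/daily_rollup/")  # placeholder path
)

spark.stop()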

Posted 2 days ago

Apply

2.0 - 4.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote

Role: Junior Engineer
Location: Bangalore
Duration: Full-time
Timings: 1-10 PM IST (cabs & benefits provided)
Experience: 2+ years
Work Mode: Completely onsite (no hybrid/remote model)
Required Skills: SQL; a programming language; ETL tools; cloud

Position Overview
The Data Engineer will report to the Data Engineering Manager and play a crucial role in designing, building, and maintaining scalable data pipelines within Kaseya. You will be responsible for ensuring data is readily available, accurate, and optimized for analytics and strategic decision-making.

Required Qualifications:
Bachelor's degree (or equivalent) in Computer Science, Engineering, or a related field.
2+ years of experience in data engineering or a related role.
Proficient in SQL and at least one programming language (Python, Scala, or Java).
Hands-on experience with data integration/ETL tools (e.g., Matillion, Talend, Airflow).
Familiarity with modern cloud data warehouses (Snowflake, Redshift, or BigQuery).
Strong problem-solving skills and attention to detail.
Excellent communication and team collaboration skills.
Ability to work in a fast-paced, high-growth environment.

Roles & Responsibilities:
Design and Develop ETL Pipelines: Create high-performance data ingestion and transformation processes, leveraging tools like Matillion, Airflow, or similar.
Implement Data Lake and Warehouse Solutions: Develop and optimize data warehouses/lakes (Snowflake, Redshift, BigQuery, or Databricks), ensuring best-in-class performance.
Optimize Query Performance: Continuously refine queries and storage strategies to support large volumes of data and multiple use cases.
Ensure Data Governance & Security: Collaborate with the Data Governance team to ensure compliance with privacy regulations and corporate data policies.
Troubleshoot Complex Data Issues: Investigate and resolve bottlenecks, data quality problems, and system performance challenges.
Document Processes & Standards: Maintain clear documentation on data pipelines, schemas, and operational processes to facilitate knowledge sharing.
Collaborate with Analytics Teams: Work with BI, Data Science, and Business Analyst teams to deliver timely, reliable, and enriched datasets for reporting and advanced analytics.
Evaluate Emerging Technologies: Stay informed about the latest tools, frameworks, and methodologies, recommending improvements where applicable.

Company Description:
Kaseya is the leading cloud provider of IT systems management software, offering a complete IT management solution delivered both via cloud and on-premise. Kaseya technology empowers MSPs and mid-sized enterprises to proactively manage and control their IT environments remotely, easily and efficiently from a single platform. Kaseya solutions are in use by more than 10,000 customers worldwide in a wide variety of industries, including retail, manufacturing, healthcare, education, government, media, technology, finance, and more. Kaseya has a presence in over 20 countries. To learn more, please visit http://www.kaseya.com.
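For a flavour of the data-quality gating such pipelines rely on, here is a minimal, generic sketch; it uses an in-memory SQLite table purely for illustration, and a real pipeline would point equivalent checks at Snowflake, Redshift or BigQuery.

# Generic data-quality check pattern: count rows and flag nulls before loading downstream.
# Uses an in-memory SQLite database purely for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE staging_orders (order_id INTEGER, amount REAL);
    INSERT INTO staging_orders VALUES (1, 10.5), (2, NULL), (3, 7.0);
    """
)

row_count = conn.execute("SELECT COUNT(*) FROM staging_orders").fetchone()[0]
null_amounts = conn.execute(
    "SELECT COUNT(*) FROM staging_orders WHERE amount IS NULL"
).fetchone()[0]

print(f"rows loaded: {row_count}, rows with null amount: {null_amounts}")
if null_amounts > 0:
    # In an orchestrated pipeline this would fail the task and alert the on-call engineer.
    raise ValueError("Data-quality check failed: null amounts in staging_orders")

conn.close()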

Posted 2 days ago

Apply

0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Job Description

Key responsibilities:

Project Leadership & Execution
Own the end-to-end execution of data analytics and architecture projects, ensuring alignment with business objectives.
Oversee 3-4 Data Analysts working on data assessment, gap analysis, and data architecture, ensuring timely delivery and quality outcomes.
Develop and maintain project roadmaps, timelines, and deliverables for analytics initiatives.
Identify and mitigate risks, dependencies, and bottlenecks to ensure smooth project execution.

Data Analysis & Process Optimization
Work closely with the Data Analyst to evaluate data completeness, consistency, and accuracy across multiple datasets.
Lead gap analysis efforts to identify discrepancies and opportunities for data integration and quality improvement.
Work with the Data Analyst on data mapping exercises and address ad hoc requests in a timely manner.
Oversee and work on enhancements to the current process, focusing on automation.

Stakeholder Management & Collaboration
Serve as the primary liaison between technical teams, business units, and leadership, ensuring clear communication and alignment.
Translate business requirements into technical roadmaps for data-driven initiatives.
Facilitate cross-functional collaboration between data engineers, analysts, business stakeholders, and IT teams.
Present insights, progress, and impact of data projects to senior leadership and stakeholders.

Skills Required
Project management experience.
Strong communication and stakeholder management skills.
Tools: SQL, Excel, Airflow, AWS, Monte Carlo, Python (good to have).
Problem-solving mindset: ability to identify data gaps, optimize workflows, and drive process improvements.
Data & analytics knowledge: understanding of data assessment, gap analysis, data architecture, and data governance.

Posted 2 days ago

Apply
