
5995 Airflow Jobs - Page 31

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

12.0 - 17.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Join Amgen’s Mission of Serving Patients

At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas – Oncology, Inflammation, General Medicine, and Rare Disease – we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

Principal IS Bus Sys Analyst, Neural Nexus

What You Will Do

Let’s do this. Let’s change the world. In this vital role you will support the delivery of emerging AI/ML capabilities within the Commercial organization as a leader in Amgen's Neural Nexus program. We seek a technology leader with a passion for innovation and a collaborative working style that partners effectively with business and technology leaders. Are you interested in building a team that consistently delivers business value in an agile model using technologies such as AWS, Databricks, Airflow, and Tableau? Come join our team!

Roles & Responsibilities: Establish an effective engagement model to collaborate with the Commercial Data & Analytics (CD&A) team to help realize business value through the application of commercial data and emerging AI/ML technologies. Serve as the technology product owner for the launch and growth of the Neural Nexus product teams focused on data connectivity, predictive modeling, and fast-cycle value delivery for commercial teams. Lead and mentor junior team members to deliver on the needs of the business. Interact with business clients and technology management to create technology roadmaps, build business cases, and drive DevOps to achieve the roadmaps. Help to mature Agile operating principles through deployment of creative and consistent practices for user story development, robust testing and quality oversight, and focus on user experience. Become the subject matter expert in emerging technology capabilities by researching and implementing new tools and features, internal and external methodologies. Build technical and domain expertise in a wide variety of Commercial data domains. Provide input for governance discussions and help prepare materials to support executive alignment on technology strategy and investment.

What We Expect Of You

We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications: Doctorate degree / Master's degree / Bachelor's degree and 12 to 17 years of information systems experience. Excellent problem-solving skills and a passion for tackling complex challenges in data and analytics with technology. Experience leading data and analytics teams in a Scaled Agile Framework (SAFe). Good interpersonal skills, good attention to detail, and ability to influence based on data and business value. Ability to build compelling business cases with accurate cost and effort estimations. Experience writing user requirements and acceptance criteria in agile project management systems such as Jira. Ability to explain sophisticated technical concepts to non-technical clients. Good understanding of sales and incentive compensation value streams.

Technical Skills: ETL tools: experience in ETL tools such as Databricks, Redshift, or an equivalent cloud-based DB. Big Data, Analytics, Reporting, Data Lake, and Data Integration technologies. S3 or equivalent storage system. AWS (or similar cloud-based platforms). BI tools (Tableau and Power BI preferred).

Preferred Qualifications: Jira Align & Confluence experience. Experience with DevOps, Continuous Integration, and Continuous Delivery methodology. Understanding of software systems strategy, governance, and infrastructure. Experience in managing product features for PI planning and developing product roadmaps and user journeys. Familiarity with low-code, no-code test automation software. Technical thought leadership.

Soft Skills: Able to work effectively across multiple geographies (primarily India, Portugal, and the United States) under minimal supervision. Demonstrated proficiency in written and verbal communication in the English language. Skilled in providing oversight and mentoring team members; demonstrated ability in effectively delegating work. Intellectual curiosity and the ability to question partners across functions. Ability to prioritize successfully based on business value. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully across virtual teams. Team-oriented, with a focus on achieving team goals. Strong presentation and public speaking skills.

What You Can Expect Of Us

As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description We have an opportunity to impact your career and provide an adventure where you can push the limits of what's possible. As a Senior Manager of Software Engineering at JPMorgan Chase within the Consumer and Community Banking – Data Technology team, you lead a technical area and drive impact within teams, technologies, and projects across departments. Utilize your in-depth knowledge of software, applications, technical processes, and product management to drive multiple complex projects and initiatives, while serving as a primary decision maker for your teams and be a driver of innovation and solution delivery. Job Responsibilities Leads Data publishing and processing platform engineering team to achieve business & technology objectives Accountable for technical tools evaluation, build platforms, design & delivery outcomes Carries governance accountability for coding decisions, control obligations, and measures of success such as cost of ownership, maintainability, and portfolio operations Delivers technical solutions that can be leveraged across multiple businesses and domains Influences peer leaders and senior stakeholders across the business, product, and technology teams Champions the firm’s culture of diversity, equity, inclusion, and respect Required Qualifications, Capabilities, And Skills Formal training or certification on software engineering concepts and 5+ years applied experience. In addition, 2 + years of experience leading technologists to manage and solve complex technical items within your domain of expertise Expertise in programming languages such as Python and Java, with a strong understanding of cloud services including AWS, EKS, SNS, SQS, Cloud Formation, Terraform, and Lambda. Proficient in messaging services like Kafka and big data technologies such as Hadoop, Spark-SQL, and Pyspark. Experienced with Teradata or Snowflake, or any other RDBMS databases, with a solid understanding of Teradata or Snowflake. Advanced experience in leading technologists to manage, anticipate, and solve complex technical challenges, along with experience in developing and recognizing talent within cross-functional teams. Experience in leading a product as a Product Owner or Product Manager, with practical cloud-native experience. Preferred Qualifications, Capabilities, And Skills Previous experience leading / building Platforms & Frameworks teams Skilled in orchestration tools like Airflow (preferable) or Control-M, and experienced in continuous integration and continuous deployment (CICD) using Jenkins. Experience with Observability tools, frameworks and platforms. Experience with large scalable secure distributed complex architecture and design Experience with nonfunctional topics like security, performance, code and design best practices AWS Certified Solutions Architect, AWS Certified Developer, or similar certification is a big plus. ABOUT US

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Delhi, Delhi

On-site

Job Description: Hadoop & ETL Developer
Location: Shastri Park, Delhi
Experience: 3+ years
Education: B.E./B.Tech/MCA/MSc (IT or CS)/MS
Salary: Up to 80k (final offer depends on the interview and experience)
Notice Period: Immediate joiners, or up to 20 days
Note: Only candidates from Delhi/NCR will be preferred.

Job Summary: We are looking for a Hadoop & ETL Developer with strong expertise in big data processing, ETL pipelines, and workflow automation. The ideal candidate will have hands-on experience in the Hadoop ecosystem, including HDFS, MapReduce, Hive, Spark, HBase, and PySpark, as well as expertise in real-time data streaming and workflow orchestration. This role requires proficiency in designing and optimizing large-scale data pipelines to support enterprise data processing needs.

Key Responsibilities: Design, develop, and optimize ETL pipelines leveraging Hadoop ecosystem technologies. Work extensively with HDFS, MapReduce, Hive, Sqoop, Spark, HBase, and PySpark for data processing and transformation. Implement real-time and batch data ingestion using Apache NiFi, Kafka, and Airbyte. Develop and manage workflow orchestration using Apache Airflow. Perform data integration across structured and unstructured data sources, including MongoDB and Hadoop-based storage. Optimize MapReduce and Spark jobs for performance, scalability, and efficiency. Ensure data quality, governance, and consistency across the pipeline. Collaborate with data engineering teams to build scalable and high-performance data solutions. Monitor, debug, and enhance big data workflows to improve reliability and efficiency.

Required Skills & Experience: 3+ years of experience in the Hadoop ecosystem (HDFS, MapReduce, Hive, Sqoop, Spark, HBase, PySpark). Strong expertise in ETL processes, data transformation, and data warehousing. Hands-on experience with Apache NiFi, Kafka, Airflow, and Airbyte. Proficiency in SQL and handling structured and unstructured data. Experience with NoSQL databases like MongoDB. Strong programming skills in Python or Scala for scripting and automation. Experience in optimizing Spark and MapReduce jobs for high-performance computing. Good understanding of data lake architectures and big data best practices.

Preferred Qualifications: Experience in real-time data streaming and processing. Familiarity with Docker/Kubernetes for deployment and orchestration. Strong analytical and problem-solving skills with the ability to debug and optimize data workflows.

If you have a passion for big data, ETL, and large-scale data processing, we’d love to hear from you!

Job Types: Full-time, Contractual / Temporary
Pay: From ₹400,000.00 per year
Work Location: In person
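
For a concrete flavor of the Airflow-based orchestration this posting asks for, here is a minimal, illustrative DAG sketch (not from the employer): the DAG id, schedule, paths, and the spark-submit command are assumptions, written against Airflow 2.x.

from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

# Hypothetical daily ETL: land a raw feed in HDFS, then run a PySpark transform.
default_args = {"owner": "data-eng", "retries": 1, "retry_delay": timedelta(minutes=10)}

with DAG(
    dag_id="example_hadoop_etl",  # illustrative name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",            # Airflow 2.4+ argument
    default_args=default_args,
    catchup=False,
) as dag:
    ingest = BashOperator(
        task_id="ingest_raw_feed",
        bash_command="hdfs dfs -put /data/incoming/*.csv /raw/feed/{{ ds }}/",
    )
    transform = BashOperator(
        task_id="spark_transform",
        bash_command="spark-submit /opt/jobs/clean_and_load.py --run-date {{ ds }}",
    )
    ingest >> transform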

Posted 2 weeks ago

Apply

2.0 - 4.0 years

25 - 30 Lacs

Pune

Work from Office

Rapid7 is looking for Data Engineer to join our dynamic team and embark on a rewarding career journey Liaising with coworkers and clients to elucidate the requirements for each task. Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed. Reformulating existing frameworks to optimize their functioning. Testing such structures to ensure that they are fit for use. Preparing raw data for manipulation by data scientists. Detecting and correcting errors in your work. Ensuring that your work remains backed up and readily accessible to relevant coworkers. Remaining up-to-date with industry standards and technological advancements that will improve the quality of your outputs.

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About 75F

75F is a global leader in IoT-based Building Automation & Energy Efficiency solutions for commercial buildings. We are headquartered in the US, with offices across India, Singapore, and the Middle East. Our investors, led by Bill Gates’s Breakthrough Energy Ventures, include some of the most prominent names in climate and technology. As a result of our dedicated efforts towards climate action, 75F has earned recognition, securing a spot on the Global Cleantech 100 list for the second consecutive year in 2022. In 2016, 75F ventured into India, and in 2019, we entered the Singapore market. We have made significant inroads, emerging as a prominent player in the APAC region, and secured pivotal clients like Flipkart, Mercedes Benz, WeWork, and Adobe. Our strategic partnerships with Tata Power and Singapore Power have spread the message of energy efficiency, climate tech & better automation through IoT, ML, AI, wireless technology & the power of the Cloud. Through these partnerships, the company has earned the trust of several customers of repute such as Hiranandani Hospital, Dmall, Spar and Fern Meluha in India and Singapore Institute of Technology, Labrador Real Estate, Instant Group, DBS Bank, and Amex in Singapore. Our cutting-edge technology and exceptional results have garnered numerous awards, including recognition from entities like Clean Energy Trust, Bloomberg NEF, Cleantech 100, Realty+ Prop-Tech Brand of the Year 2022-2023 & 2024, ESG Award Customer Excellence 2023, Frost & Sullivan APAC Smart Energy Management Technology Leadership Award 2021, CMO Asia 2022 Most Preferred Brand in Real Estate: HVAC, and the National Energy Efficiency Innovation Award by the Ministry of Power 2023. We are looking for passionate individuals who are committed not just to personal growth but are passionate about solving some of the world’s toughest challenges. Opportunities exist for working across the various locations and functions that the Company is present in. Continuing education is valued. We prize extreme ownership and tenacity above all other things. Finally, we believe we can’t build a new future for the planet without first building a diverse and inclusive team, so we hire the best candidates we can based on an evaluation of their potential, not just their experience.

Role: Sr Application Engineer/Application Engineer
Experience: 3-6 years

Key Responsibilities: Design and implement Control Design Documents for projects. Create single-line diagrams. Extract and compile BOQs. Analyze MEP drawings to identify HVAC equipment, dampers, and sensors. Review control specifications and sequences of operation. Prepare initial review documents and generate Requests for Information (RFIs). Develop Bills of Materials and select appropriate sensors, control valves, dampers, airflow stations, controllers, etc. Design control and interlock wiring for devices and controllers, including terminations. Prepare I/O summaries and design BMS network architecture. Follow established processes and guidelines to execute projects within defined timelines. Experience in the engineering, installation, and commissioning of HVAC and BMS.

Required Knowledge, Skills & Experience: Bachelor's or Master's degree in Instrumentation, Electrical, Electronics, or Electronics & Communication Engineering. Strong understanding of HVAC systems, including chilled water systems, cooling towers, primary/secondary pumping systems, hot water systems, and various air handling units (AHUs), fan coil units (FCUs), and VAV systems. In-depth knowledge of BMS architecture, including operator workstations, supervisory and DDC controllers, sensors, and actuators. Familiarity with communication protocols such as BACnet, Modbus, and others. Proficient in wiring starters, field devices, safety interlocks, and control panels. Demonstrated proficiency and hands-on experience with AutoCAD, including customization using LISP routines; experience using LISP to automate tasks is good to have. Experience with HVAC control sequences. Familiarity with 3D software. Fast learner with strong analytical and problem-solving abilities. Excellent verbal and written communication skills.

Benefits: American MNC culture. Being a part of one of the world’s leading Climate Tech companies and working with a team of 200 passionate disruptors.

Diversity & Inclusion: Our dedication to diversity and inclusion starts with our values. We lead with integrity and purpose, focusing on the future and aligning with our customers’ vision for success. Our High-Performance Culture ensures that we have the best talent, that is highly engaged and eager to innovate.

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Role Summary At Pfizer we make medicines and vaccines that change patients' lives with a global reach of over 1.1 billion patients. Pfizer Digital is the organization charged with winning the digital race in the pharmaceutical industry. We apply our expertise in technology, innovation, and our business to support Pfizer in this mission. Our team, the GSES Team, is passionate about using software and data to improve manufacturing processes. We partner with other Pfizer teams focused on: Manufacturing throughput efficiency and increased manufacturing yield Reduction of end-to-end cycle time and increase of percent release attainment Increased quality control lab throughput and more timely closure of quality assurance investigations Increased manufacturing yield of vaccines More cost-effective network planning decisions and lowered inventory costs In the Senior Associate, Integration Engineer role, you will help implement data capabilities within the team to enable advanced, innovative, and scalable database services and data platforms. You will utilize modern Data Engineering principles and techniques to help the team better deliver value in the form of AI, analytics, business intelligence, and operational insights. You will be on a team responsible for executing on technical strategies, designing architecture, and developing solutions to enable the Digital Manufacturing organization to deliver value to our partners across Pfizer. Most of all, you’ll use your passion for data to help us deliver real value to our global network of manufacturing facilities, changing patient lives for the better! Role Responsibilities The Senior Associate, Integration Engineer’s responsibilities include, but are not limited to: Maintain Database Service Catalogues Build, maintain and optimize data pipelines Support cross-functional teams with data related tasks Troubleshoot data-related issues, identify root causes, and implement solutions in a timely manner Automate builds and deployments of database environments Support development teams in database related troubleshooting and optimization Document technical specifications, data flows, system architectures and installation instructions for the provided services Collaborate with stakeholders to understand data requirements and translate them into technical solutions Participate in relevant SAFe ceremonies and meetings Basic Qualifications Education: Bachelor’s degree or Master’s degree in Computer Science, Data Engineering, Data Science, or related discipline Minimum 3 years of experience in Data Engineering, Data Science, Data Analytics or similar fields Broad Understanding of data engineering techniques and technologies, including at least 3 of the following: PostgreSQL (or similar SQL database(s)) Neo4J/Cypher ETL (Extract, Transform, and Load) processes Airflow or other Data Pipeline technology Kafka Distributed Event Streaming platform Proficient or better in a scripting language, ideally Python Experience tuning and optimizing database performance Knowledge of modern data integration patterns Strong verbal and written communication skills and ability to work in a collaborative team environment, spanning global time zones Proactive approach and goal-oriented mindset Self-driven approach to research and problem solving with proven analytical skills Ability to manage tasks across multiple projects at the same time Preferred Qualifications Pharmaceutical Experience Experience working with Agile delivery methodologies (e.g., Scrum) Experience with Graph 
databases. Experience with Snowflake. Familiarity with cloud platforms such as AWS. Experience with containerization technologies such as Docker and orchestration tools like Kubernetes.

Physical/Mental Requirements: None.

Non-standard Work Schedule, Travel or Environment Requirements: The job will require working with global teams and applications. A flexible working schedule will be needed on occasion to accommodate planned agile sprint planning and system releases, as well as unplanned/on-call level 3 support. Travel requirements are project-based; the estimated percentage of travel to support project and departmental activities is less than 10%.

Work Location Assignment: Hybrid

Pfizer is an equal opportunity employer and complies with all applicable equal employment opportunity legislation in each jurisdiction in which it operates.

Information & Business Tech
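
As a rough, generic illustration of the pipeline work listed above (this is not Pfizer code; the topic, table, and connection settings are invented), a minimal Kafka-to-PostgreSQL consumer might look like:

import json

import psycopg2
from kafka import KafkaConsumer  # kafka-python

# Placeholder topic and broker; real values would come from configuration.
consumer = KafkaConsumer(
    "batch-events",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

conn = psycopg2.connect(dbname="analytics", user="etl", password="***", host="localhost")

with conn.cursor() as cur:
    for message in consumer:  # blocks and processes events as they arrive
        event = message.value
        # Upsert each event into a staging table keyed by event id.
        cur.execute(
            "INSERT INTO staging_events (event_id, payload) VALUES (%s, %s) "
            "ON CONFLICT (event_id) DO UPDATE SET payload = EXCLUDED.payload",
            (event["id"], json.dumps(event)),
        )
        conn.commit()  # commit per event for simplicity; real pipelines batch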

Posted 2 weeks ago

Apply

8.0 - 12.0 years

25 - 35 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

Hi, Greetings from Encora Innovation Labs Pvt Ltd! Encora is looking for an AWS DevOps Lead with 8–12 years' experience in AWS services, Python, and Data Engineering.

Important Note: We are looking for an immediate joiner for this role! If you're on a 30/60/90-day notice period, this opportunity may not be the right fit at the moment. We truly appreciate your understanding! Please find the detailed job description and the company profile below for your better understanding.

Position: AWS DevOps Lead
Experience: 8-12 years
Job Location: Bangalore / Chennai / Pune / Hyderabad / Noida
Position Type: Full time
Qualification: Any graduate
Work Mode: Hybrid

Technical Skills: AWS services - EC2, S3, Lambda, IAM; IaC - Terraform, CloudFormation; Git, Jenkins, GitHub. Programming - Python, SQL and Spark. Data Engineering - data pipelines, Airflow orchestration.

Job Summary, Responsibilities and Duties: Monitors and reacts to alerts in real time and triages issues. Executes runbook instructions to resolve routine problems and user requests. Escalates complex or unresolved issues to L2. Documents new findings to improve runbooks and the knowledge base. Participates in shift handovers to ensure seamless coverage. Participates in ceremonies to share operational status.

Education and Experience: B.E. in Computer Science Engineering, or an equivalent technical degree with strong computer science fundamentals. Experience in an Agile software development environment. Excellent communication and collaboration skills with the ability to work in a team-oriented environment.

Skills required: System Administration - basic troubleshooting, monitoring, and operational support. Cloud Platforms - familiarity with AWS services (e.g., EC2, S3, Lambda, IAM). Infrastructure as Code (IaC) - exposure to Terraform, CloudFormation, or similar tools. CI/CD Pipelines - understanding of Git, Jenkins, GitHub Actions, or similar tools. Linux Fundamentals - command-line proficiency, scripting, process management. Programming & Data - Python, SQL, and Spark (nice to have, but not mandatory). Data Engineering Awareness - understanding of data pipelines, ETL processes, and workflow orchestration (e.g., Airflow). DevOps Practices - observability, logging, alerting, and automation.

Communication: Facilitates team and stakeholder meetings effectively. Resolves and/or escalates issues in a timely fashion. Understands how to communicate difficult/sensitive information tactfully. Astute cross-cultural awareness and experience in working with international teams (especially the US).

You should be speaking to us if: You are looking for a career that challenges you to bring your knowledge and expertise to bear for designing, implementing, and running a world-class IT organization. You like a job that brings a great deal of autonomy and decision-making latitude. You like working in an environment that is young, innovative, and well established. You like to work in an organization that takes decisions quickly, where you can make an impact.

Why Encora Innovation Labs? Are you looking for a career that challenges you to bring your knowledge and expertise to bear for designing, implementing, and running a world-class IT Product Engineering organization? Encora Innovation Labs is a world-class SaaS technology Product Engineering company focused on transformational outcomes for leading-edge tech companies. Encora partners with fast-growing tech companies who are driving innovation and growth within their industries.
Who We Are: Encora is devoted to making the world a better place for clients, for our communities and for our people. What We Do: We drive transformational outcomes for clients through our agile methods, micro-industry vertical expertise, and extraordinary people. We provide hi-tech, differentiated services in next-gen software engineering solutions including Big Data, Analytics, Machine Learning, IoT, Embedded, Mobile, AWS/Azure Cloud, UI/UX, and Test Automation to some of the leading technology companies in the world. Encora specializes in Data Governance, Digital Transformation, and Disruptive Technologies, helping clients to capitalize on their potential efficiencies. Encora has been an instrumental partner in the digital transformation journey of clients across a broad spectrum of industries: Health Tech, Fin Tech, Hi-Tech, Security, Digital Payments, Education Publication, Travel, Real Estate, Supply Chain and Logistics and Emerging Technologies. Encora has successfully developed and delivered more than 2,000 products over the last few years and has led the transformation of a number of Digital Enterprises. Encora has over 25 offices and innovation centers in 20+ countries worldwide. Our international network ensures that clients receive seamless access to the complete range of our services and expert knowledge and skills of professionals globally. Encora global delivery centers and offices in the United States, Costa Rica, Mexico, United Kingdom, India, Malaysia, Singapore, Indonesia, Hong Kong, Philippines, Mauritius, and the Cayman Islands. Encora is proud to be certified as a Great Place to Work in India Please visit us at Website: encora.com LinkedIn: EncoraInc Facebook: @EncoraInc Instagram: @EncoraInc Looking to build and lead in world-class IT Product Engineering services? Your next career move starts here. Please share your updated resume to ravi.sankar@encora.com Regards , Ravisankar P Talent Acquisition +91 9994599336 Ravi.sankar@encora.com encora.com
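
To illustrate the alert-triage side of this role, here is a small, generic boto3 sketch (an assumption about tooling, not Encora's actual stack) that lists CloudWatch alarms currently firing so an L1 operator can follow the matching runbook:

import boto3

# Assumes AWS credentials and region are already configured in the environment.
cloudwatch = boto3.client("cloudwatch")

# Page through every alarm currently in the ALARM state.
paginator = cloudwatch.get_paginator("describe_alarms")
for page in paginator.paginate(StateValue="ALARM"):
    for alarm in page["MetricAlarms"]:
        print(f"{alarm['AlarmName']}: {alarm['StateReason']}")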

Posted 2 weeks ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About The Role

We are looking for a Senior Data Engineer to lead the design and implementation of scalable data infrastructure and engineering practices. This role will be critical in laying down the architectural foundations for advanced analytics and AI/ML use cases across global business units. You’ll work closely with the Data Science Lead, Product Manager, and other cross-functional stakeholders to ensure data systems are robust, secure, and future-ready.

Key Responsibilities: Architect and implement end-to-end data infrastructure including ingestion, transformation, storage, and access layers to support enterprise-scale analytics and machine learning. Define and enforce data engineering standards, design patterns, and best practices across the CoE. Lead the evaluation and selection of tools, frameworks, and platforms (cloud, open source, commercial) for scalable and secure data processing. Work with data scientists to enable efficient feature extraction, experimentation, and model deployment pipelines. Design for real-time and batch processing architectures, including support for streaming data and event-driven workflows. Own the data quality, lineage, and governance frameworks to ensure trust and traceability in data pipelines. Collaborate with central IT, data platform teams, and business units to align on data strategy, infrastructure, and integration patterns. Mentor and guide junior engineers as the team expands, creating a culture of high performance and engineering excellence.

Qualifications: 10+ years of hands-on experience in data engineering, data architecture, or platform development. Strong expertise in building distributed data pipelines using tools like Spark, Kafka, Airflow, or equivalent orchestration frameworks. Deep understanding of data modeling, data lake/lakehouse architectures, and scalable data warehousing (e.g., Snowflake, BigQuery, Redshift). Advanced proficiency in Python and SQL, with working knowledge of Java or Scala preferred. Strong experience working on cloud-native data architectures (AWS, GCP, or Azure) including serverless, storage, and compute optimization. Proven experience in architecting ML/AI-ready data environments, supporting MLOps pipelines and production-grade data flows. Familiarity with DevOps practices, CI/CD for data, and infrastructure-as-code (e.g., Terraform) is a plus. Excellent problem-solving skills and the ability to communicate technical solutions to non-technical stakeholders.
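
As a minimal sketch of the kind of distributed pipeline described above (illustrative only; the topic, broker, and lake paths are assumptions), a PySpark Structured Streaming job reading from Kafka and landing Parquet files could look like:

from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("events-stream").getOrCreate()

# Read a Kafka topic as a streaming DataFrame (broker and topic are placeholders).
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
    .select(col("key").cast("string"), col("value").cast("string"), col("timestamp"))
)

# Land micro-batches as Parquet; the checkpoint makes file output restartable.
query = (
    events.writeStream.format("parquet")
    .option("path", "s3a://example-lake/bronze/events/")
    .option("checkpointLocation", "s3a://example-lake/_checkpoints/events/")
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()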

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

Remote

HackerOne is a global leader in offensive security solutions. Our HackerOne Platform combines AI with the ingenuity of the largest community of security researchers to find and fix security, privacy, and AI vulnerabilities across the software development lifecycle. The platform offers bug bounty, vulnerability disclosure, pentesting, AI red teaming, and code security. We are trusted by industry leaders like Amazon, Anthropic, Crypto.com, General Motors, GitHub, Goldman Sachs, Uber, and the U.S. Department of Defense. HackerOne was named a Best Workplace for Innovators by Fast Company in 2023 and a Most Loved Workplace for Young Professionals in 2024. HackerOne Values HackerOne is dedicated to fostering a strong and inclusive culture. HackerOne is Customer Obsessed and prioritizes customer outcomes in our decisions and actions. We Default to Disclosure by operating with transparency and integrity, ensuring trust and accountability. Employees, researchers, customers, and partners Win Together by fostering empowerment, inclusion, respect, and accountability. Data Engineer, Enterprise Data & AI Location: Pune, India This role requires the candidate to be based in Pune and work from an office 4 days a week. Please only apply if you're okay with these requirements. *** Position Summary HackerOne is seeking a Data Engineer, Enterprise Data & AI to join our DataOne team. You will lead the discovery, architecture, and development of high-impact, high-performance, scalable source of truth data marts and data products. Joining our growing, distributed organization, you'll be instrumental in building the foundation that powers HackerOne's one source of truth. As a Data Engineer, Enterprise Data & AI, you'll be able to lead challenging projects and foster collaboration across the company. Leveraging your extensive technological expertise, domain knowledge, and dedication to business objectives, you'll drive innovation to propel HackerOne forward. DataOne democratizes source-of-truth information and insights to enable all Hackeronies to ask the right questions, tell cohesive stories, and make rigorous decisions so that HackerOne can delight our Customers and empower the world to build a safer internet . The future is one where every Hackeronie is a catalyst for positive change , driving data-informed innovation while fostering our culture of transparency, collaboration, integrity, excellence, and respect for all . What You Will Do Your first 30 days will focus on getting to know HackerOne. You will join your new squad and begin onboarding - learn our technology stack (Python, Airflow, Snowflake, DBT, Meltano, Fivetran, Looker, AWS), and meet our Hackeronies. Within 60 days, you will deliver impact on a company level with consistent contribution to high-impact, high-performance, scalable source of truth data marts and data products. Within 90 days, you will drive the continuous evolution and innovation of data at HackerOne, identifying and leading new initiatives. Additionally, you foster cross-departmental collaboration to enhance these efforts. Deliver impact by developing the roadmap for continuously and iteratively launching high-impact, high-performance, scalable source of truth data marts and data products, and by leading and delivering cross-functional product and technical initiatives. 
Be a technical paragon and cross-functional force multiplier, autonomously determining where to apply focus, contributing at all levels, elevating your squad, and designing solutions to ambiguous business challenges, in a fast-paced early-stage environment. Drive continuous evolution and innovation, the adoption of emerging technologies, and the implementation of industry best practices. Champion a higher bar for discoverability, usability, reliability, timeliness, consistency, validity, uniqueness, simplicity, completeness, integrity, security, and compliance of information and insights across the company. Provide technical leadership and mentorship, fostering a culture of continuous learning and growth. Minimum Qualifications 5+ years experience as an Analytics Engineer, Business Intelligence Engineer, Data Engineer, or similar role w/ proven track record of launching source of truth data marts. 5+ years of experience building and optimizing data pipelines, products, and solutions. Must be flexible to align with occasional evening meetings in USA timezone. Extensive experience working with various data technologies and tools such as Airflow, Snowflake, Meltano, Fivetran, DBT, and AWS. Strong proficiency in at least one data programming language such as Python or R. Expert in SQL for data manipulation in a fast-paced work environment. Expert in using Git for version control. Expert in creating compelling data stories using data visualization tools such as Looker, Tableau, Sigma, Domo, or PowerBI. Proven track record of having substantial impact across the company, as well as externally for the company, demonstrating your ability to drive positive change and achieve significant results. English fluency, excellent communication skills, and can present data-driven narratives in verbal, presentation, and written formats. Passion for working backwards from the Customer and empathy for business stakeholders. Experience shaping the strategic vision for data. Experience working with Agile and iterative development processes. Preferred Qualifications Experience working within and with data from business applications such as Salesforce, Clari, Gainsight, Workday, GitLab, Slack, or Freshservice. Proven track record of driving innovation, adopting emerging technologies and implementing industry best practices. Thrive on solving for ambiguous problem statements in an early-stage environment. Experience designing advanced data visualizations and data-rich interfaces in Figma or equivalent. Compensation Bands: Pune, India ₹3.7M – ₹4.6M Offers Equity Job Benefits: Health (medical, vision, dental), life, and disability insurance* Equity stock options Retirement plans Paid public holidays and unlimited PTO Paid maternity and parental leave Leaves of absence (including caregiver leave and leave under CO's Healthy Families and Workplaces Act) Employee Assistance Program Flexible Work Stipend Eligibility may differ by country We're committed to building a global team! For certain roles outside the United States, U.K., and the Netherlands, we partner with Remote.com as our Employer of Record (EOR). Visa/work permit sponsorship is not available. Employment at HackerOne is contingent on a background check. 
HackerOne is an Equal Opportunity Employer in the terms and conditions of employment for all employees and job applicants without regard to race, color, religion, sex, sexual orientation, age, gender identity or gender expression, national origin, pregnancy, disability or veteran status, or any other protected characteristic as outlined by international, federal, state, or local laws. This policy applies to all HackerOne employment practices, including hiring, recruiting, promotion, termination, layoff, recall, leave of absence, compensation, benefits, training, and apprenticeship. HackerOne makes hiring decisions based solely on qualifications, merit, and business needs at the time. For US based roles only: Pursuant to the San Francisco Fair Chance Ordinance, all qualified applicants with arrest and conviction records will be considered for the position.
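
For a flavor of the Python-plus-Snowflake portion of the stack named above (a sketch under assumptions: the account, credentials, and table are invented, and this is not HackerOne code):

import pandas as pd
import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas

# Placeholder connection details; real values would come from a secrets manager.
conn = snowflake.connector.connect(
    account="xy12345", user="etl_user", password="***",
    warehouse="TRANSFORM_WH", database="ANALYTICS", schema="STAGING",
)

# Load a small DataFrame into an existing staging table.
df = pd.DataFrame({"REPORT_ID": [1, 2], "SEVERITY": ["high", "low"]})
success, _, nrows, _ = write_pandas(conn, df, table_name="STG_REPORTS")
print(f"loaded={success} rows={nrows}")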

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Position Title: GCP Data Engineer 34306 Job Type: Full-Time Work Mode: Hybrid Location: Chennai Budget: ₹18–20 LPA Notice Period: Immediate Joiners Preferred Role Overview We are seeking a proactive Full Stack Data Engineer with a strong focus on Google Cloud Platform (GCP) and data engineering tools. The ideal candidate will contribute to building analytics products supporting supply chain insights and will be responsible for developing cloud-based data pipelines, APIs, and user interfaces. The role demands high standards of software engineering, agile practices like Test-Driven Development (TDD), and experience in modern data architectures. Key Responsibilities Design, build, and deploy scalable data pipelines and analytics platforms using GCP tools like BigQuery, Dataflow, Dataproc, Data Fusion, and Cloud SQL. Implement and maintain Infrastructure as Code (IaC) using Terraform and CI/CD pipelines using Tekton. Develop robust APIs using Python, Java, and Spring Boot, and deliver frontend interfaces using Angular, React, or Vue. Build and support data integration workflows using Airflow, PySpark, and PostgreSQL. Collaborate with cross-functional teams in an Agile environment, leveraging Jira, paired programming, and TDD. Ensure cloud deployments are secure, scalable, and performant on GCP. Mentor team members and promote continuous learning, clean code practices, and Agile principles. Mandatory Skills GCP services: BigQuery, Dataflow, Dataproc, Data Fusion, Cloud SQL Programming: Python, Java, Spring Boot Frontend: Angular, React, Vue, TypeScript, JavaScript Data Orchestration: Airflow, PySpark DevOps/CI-CD: Terraform, Tekton, Jenkins Databases: PostgreSQL, Cloud SQL, NoSQL API development and integration Experience 5+ years in software/data engineering Minimum 1 year in GCP-based deployment and cloud architecture Education Bachelor’s or Master’s in Computer Science, Engineering, or related technical discipline Desired Traits Passion for clean, maintainable code Strong problem-solving skills Agile mindset with an eagerness to mentor and collaborate Skills: typescript,data fusion,terraform,java,spring boot,dataflow,data integration,cloud sql,javascript,bigquery,react,postgresql,nosql,vue,data,pyspark,dataproc,sql,cloud,angular,python,tekton,api development,gcp services,jenkins,airflow,gcp
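
As a small, hypothetical illustration of the BigQuery work this role involves (project, dataset, and table names are assumptions):

from google.cloud import bigquery

client = bigquery.Client()  # uses Application Default Credentials

# Parameterized daily aggregation over a placeholder orders table.
sql = """
    SELECT order_date, COUNT(*) AS orders
    FROM `my-project.supply_chain.orders`
    WHERE order_date >= @start
    GROUP BY order_date
    ORDER BY order_date
"""
job_config = bigquery.QueryJobConfig(
    query_parameters=[bigquery.ScalarQueryParameter("start", "DATE", "2025-01-01")]
)
for row in client.query(sql, job_config=job_config).result():
    print(row.order_date, row.orders)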

Posted 2 weeks ago

Apply

10.0 - 14.0 years

25 - 40 Lacs

Hyderabad

Work from Office

Face to face interview on 2nd august 2025 in Hyderabad Apply here - Job description - https://careers.ey.com/job-invite/1604461/ Experience Required: Minimum 8 years Job Summary: We are seeking a skilled Data Engineer with a strong background in data ingestion, processing, and storage. The ideal candidate will have experience working with various data sources and technologies, particularly in a cloud environment. You will be responsible for designing and implementing data pipelines, ensuring data quality, and optimizing data storage solutions. Key Responsibilities: Design, develop, and maintain scalable data pipelines for data ingestion and processing using Python, Spark, and AWS services. Work with on-prem Oracle databases, batch files, and Confluent Kafka for data sourcing. Implement and manage ETL processes using AWS Glue and EMR for batch and streaming data. Develop and maintain data storage solutions using Medallion Architecture in S3, Redshift, and Oracle. Collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs. Monitor and optimize data workflows using Airflow and other orchestration tools. Ensure data quality and integrity throughout the data lifecycle. Implement CI/CD practices for data pipeline deployment using Terraform and other tools. Utilize monitoring and logging tools such as CloudWatch, Datadog, and Splunk to ensure system reliability and performance. Communicate effectively with stakeholders to gather requirements and provide updates on project status. Technical Skills Required: Proficient in Python for data processing and automation. Strong experience with Apache Spark for large-scale data processing. Familiarity with AWS S3 for data storage and management. Experience with Kafka for real-time data streaming. Knowledge of Redshift for data warehousing solutions. Proficient in Oracle databases for data management. Experience with AWS Glue for ETL processes. Familiarity with Apache Airflow for workflow orchestration. Experience with EMR for big data processing. Mandatory: Strong AWS data engineering skills.
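
To make the Medallion-style flow concrete, here is a minimal bronze-to-silver PySpark sketch (bucket names and schema are assumptions; the posting's actual jobs would run on Glue or EMR):

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, to_date

spark = SparkSession.builder.appName("bronze-to-silver").getOrCreate()

# Bronze: raw JSON landed from source systems (placeholder paths).
bronze = spark.read.json("s3://example-lake/bronze/orders/")

# Silver: typed, de-duplicated, partition-ready records.
silver = (
    bronze.dropDuplicates(["order_id"])
    .withColumn("order_date", to_date(col("order_ts")))
    .filter(col("order_id").isNotNull())
)

silver.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-lake/silver/orders/"
)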

Posted 2 weeks ago

Apply

4.0 years

3 - 6 Lacs

Hyderābād

On-site

CDP ETL & Database Engineer

The CDP ETL & Database Engineer will specialize in architecting, designing, and implementing solutions that are sustainable and scalable. The ideal candidate will understand CRM methodologies, with an analytical mindset, and a background in relational modeling in a hybrid architecture. The candidate will help drive the business towards specific technical initiatives and will work closely with the Solutions Management, Delivery, and Product Engineering teams. The candidate will join a team of developers across the US, India & Costa Rica.

Responsibilities:

ETL Development – The CDP ETL & Database Engineer will be responsible for building pipelines to feed downstream data processes. They will be able to analyze data, interpret business requirements, and establish relationships between data sets. The ideal candidate will be familiar with different encoding formats and file layouts such as JSON and XML.

Implementations & Onboarding – Will work with the team to onboard new clients onto the ZMP/CDP+. The candidate will solidify business requirements, perform ETL file validation, establish users, perform complex aggregations, and syndicate data across platforms. The hands-on engineer will take a test-driven approach towards development and will be able to document processes and workflows.

Incremental Change Requests – The CDP ETL & Database Engineer will be responsible for analyzing change requests and determining the best approach towards their implementation and execution. This requires the engineer to have a deep understanding of the platform's overall architecture. Change requests will be implemented and tested in a development environment to ensure their introduction will not negatively impact downstream processes.

Change Data Management – The candidate will adhere to change data management procedures and actively participate in CAB meetings where change requests will be presented and reviewed. Prior to introducing change, the engineer will ensure that processes are running in a development environment. The engineer will be asked to do peer-to-peer code reviews and solution reviews before production code deployment.

Collaboration & Process Improvement – The engineer will be asked to participate in knowledge-share sessions where they will engage with peers and discuss solutions, best practices, and overall approach. The candidate will be able to look for opportunities to streamline processes with an eye towards building a repeatable model to reduce implementation duration.

Job Requirements: The CDP ETL & Database Engineer will be well versed in the following areas: relational data modeling; ETL and FTP concepts; advanced analytics using SQL functions; cloud technologies – AWS, Snowflake. Able to decipher requirements, provide recommendations, and implement solutions within predefined timelines. The ability to work independently, but at the same time, the individual will be called upon to contribute in a team setting. The engineer will be able to confidently communicate status, raise exceptions, and voice concerns to their direct manager. Participate in internal client project status meetings with the Solution/Delivery management teams. When required, collaborate with the Business Solutions Analyst (BSA) to solidify requirements. Ability to work in a fast-paced, agile environment; the individual will be able to work with a sense of urgency when escalated issues arise. Strong communication and interpersonal skills, ability to multitask and prioritize workload based on client demand.
Familiarity with Jira for workflow and time allocation. Familiarity with the Scrum framework: backlog, planning, sprints, story points, retrospectives.

Required Skills: ETL – ETL tools such as Talend (preferred, not required); DMExpress – nice to have; Informatica – nice to have. Database – hands-on experience with the following database technologies: Snowflake (required); MySQL/PostgreSQL – nice to have; familiarity with NoSQL DB methodologies (nice to have). Programming languages – can demonstrate knowledge of any of the following: PL/SQL; JavaScript – strong plus; Python – strong plus; Scala – nice to have. AWS – knowledge of the following AWS services: S3, EMR (concepts), EC2 (concepts), Systems Manager / Parameter Store. Understands JSON data structures and key-value pairs. Working knowledge of code repositories such as Git and Win CVS; workflow management tools such as Apache Airflow, Kafka, Automic/Appworx; Jira.

Minimum Qualifications: Bachelor's degree or equivalent. 4+ years' experience. Excellent verbal & written communication skills. Self-starter, highly motivated. Analytical mindset.

Company Summary: Zeta Global is a NYSE-listed data-powered marketing technology company with a heritage of innovation and industry leadership. Founded in 2007 by entrepreneur David A. Steinberg and John Sculley, former CEO of Apple Inc and Pepsi-Cola, the Company combines the industry's 3rd largest proprietary data set (2.4B+ identities) with Artificial Intelligence to unlock consumer intent, personalize experiences and help our clients drive business growth. Our technology runs on the Zeta Marketing Platform, which powers 'end to end' marketing programs for some of the world's leading brands. With expertise encompassing all digital marketing channels – Email, Display, Social, Search and Mobile – Zeta orchestrates acquisition and engagement programs that deliver results that are scalable, repeatable and sustainable. Zeta Global is an Equal Opportunity/Affirmative Action employer and does not discriminate on the basis of race, gender, ancestry, color, religion, sex, age, marital status, sexual orientation, gender identity, national origin, medical condition, disability, veterans status, or any other basis protected by law.

Zeta Global Recognized in Enterprise Marketing Software and Cross-Channel Campaign Management Reports by Independent Research Firm
https://www.forbes.com/sites/shelleykohan/2024/06/1G/amazon-partners-with-zeta-global-to-deliver-gen-ai-marketing-automation/
https://www.cnbc.com/video/2024/05/06/zeta-global-ceo-david-steinberg-talks-ai-in-focus-at-milken-conference.html
https://www.businesswire.com/news/home/20240G04622808/en/Zeta-Increases-3Q%E2%80%GG24-Guidance
https://www.prnewswire.com/news-releases/zeta-global-opens-ai-data-labs-in-san-francisco-and-nyc-300S45353.html
https://www.prnewswire.com/news-releases/zeta-global-recognized-in-enterprise-marketing-software-and-cross-channel-campaign-management-reports-by-independent-research-firm-300S38241.html
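
Because the role stresses handling different file layouts such as JSON and XML, here is a tiny, generic normalization sketch (field and element names are invented for illustration):

import json
import xml.etree.ElementTree as ET

def records_from_json(path):
    # Expects a file containing a JSON array of customer objects.
    with open(path, encoding="utf-8") as fh:
        for obj in json.load(fh):
            yield {"customer_id": obj.get("id"), "email": obj.get("email")}

def records_from_xml(path):
    # Expects <customers><customer id="..."><email>...</email></customer></customers>.
    root = ET.parse(path).getroot()
    for node in root.findall("customer"):
        yield {"customer_id": node.get("id"), "email": node.findtext("email")}

# Both feeds normalize to one record layout before any downstream aggregation or load.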

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Hyderābād

On-site

JOB DESCRIPTION We have an opportunity to impact your career and provide an adventure where you can push the limits of what's possible. As a Senior Manager of Software Engineering at JPMorgan Chase within the Consumer and Community Banking – Data Technology team, you lead a technical area and drive impact within teams, technologies, and projects across departments. Utilize your in-depth knowledge of software, applications, technical processes, and product management to drive multiple complex projects and initiatives, while serving as a primary decision maker for your teams and be a driver of innovation and solution delivery. Job Responsibilities Leads Data publishing and processing platform engineering team to achieve business & technology objectives Accountable for technical tools evaluation, build platforms, design & delivery outcomes Carries governance accountability for coding decisions, control obligations, and measures of success such as cost of ownership, maintainability, and portfolio operations Delivers technical solutions that can be leveraged across multiple businesses and domains Influences peer leaders and senior stakeholders across the business, product, and technology teams Champions the firm’s culture of diversity, equity, inclusion, and respect Required qualifications, capabilities, and skills Formal training or certification on software engineering concepts and 5+ years applied experience. In addition, 2 + years of experience leading technologists to manage and solve complex technical items within your domain of expertise Expertise in programming languages such as Python and Java, with a strong understanding of cloud services including AWS, EKS, SNS, SQS, Cloud Formation, Terraform, and Lambda. Proficient in messaging services like Kafka and big data technologies such as Hadoop, Spark-SQL, and Pyspark. Experienced with Teradata or Snowflake, or any other RDBMS databases, with a solid understanding of Teradata or Snowflake. Advanced experience in leading technologists to manage, anticipate, and solve complex technical challenges, along with experience in developing and recognizing talent within cross-functional teams. Experience in leading a product as a Product Owner or Product Manager, with practical cloud-native experience. Preferred qualifications, capabilities, and skills Previous experience leading / building Platforms & Frameworks teams Skilled in orchestration tools like Airflow (preferable) or Control-M, and experienced in continuous integration and continuous deployment (CICD) using Jenkins. Experience with Observability tools, frameworks and platforms. Experience with large scalable secure distributed complex architecture and design Experience with nonfunctional topics like security, performance, code and design best practices AWS Certified Solutions Architect, AWS Certified Developer, or similar certification is a big plus. ABOUT US

Posted 2 weeks ago

Apply

5.0 years

5 - 5 Lacs

Hyderābād

On-site

Data Services Analyst

The Data Services ETL Developer will specialize in data transformations and integration projects utilizing Zeta's proprietary tools, 3rd-party software, and coding. This role requires an understanding of CRM methodologies related to marketing operations. The candidate will be responsible for implementing data processing across multiple technologies, supporting a high volume of tasks with the expectation of accurate and on-time delivery.

Responsibilities: Manipulate client and internal marketing data across multiple platforms and technologies. Automate scripts to perform tasks to transfer and manipulate data feeds (internal and external). Build, deploy, and manage cloud-based data pipelines using AWS services. Manage multiple tasks with competing priorities and ensure timely client deliverability. Work with technical staff to maintain and support a proprietary ETL environment. Collaborate with database/CRM, modelers, analysts, and application programmers to deliver results for clients.

Job Requirements: Coverage of the US time zone and in-office presence a minimum of three days per week. Experience in database marketing with the ability to transform and manipulate data. Knowledge of US and international postal addresses, with exposure to SAP postal products (DQM). Proficient with AWS services (S3, Airflow, RDS, Athena) for data storage, processing, and analysis. Experience with Oracle and Snowflake SQL to automate scripts for marketing data processing. Familiarity with tools like Snowflake, Airflow, GitLab, Grafana, LDAP, OpenVPN, DCWEB, Postman, and Microsoft Excel. Knowledge of SQL Server, including data exports/imports, running SQL Server Agent Jobs, and SSIS packages. Proficiency with editors like Notepad++ and UltraEdit (or similar tools). Understanding of SFTP and PGP to ensure data security and client data protection. Experience working with large-scale customer databases in a relational database environment. Proven ability to manage multiple tasks simultaneously. Strong communication and collaboration skills in a team environment. Familiarity with the project life cycle.

Minimum Qualifications: Bachelor's degree or equivalent with 5+ years of experience in database marketing and cloud-based technologies. Strong understanding of data engineering concepts and cloud infrastructure. Excellent oral and written communication skills.

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Hyderābād

On-site

The people here at Apple don’t just build products - we craft the kind of wonder that’s revolutionized entire industries. It’s the diversity of those people and their ideas that supports the innovation that runs through everything we do, from amazing technology to industry-leading environmental efforts. Join Apple, and help us leave the world better than we found it. The Global Business Intelligence team provides data services, analytics, reporting, and data science solutions to Apple’s business groups, including Retail, iTunes, Marketing, AppleCare, Operations, Finance, and Sales. These solutions are built on top of an end-to-end machine learning platform with sophisticated AI capabilities. We are looking for a competent, experienced, and driven machine learning engineer to define and build some of the best-in-class machine learning solutions and tools for Apple. Description As a Machine Learning Engineer, you will work on building intelligent systems to democratize AI across a wide range of solutions within Apple. You will drive the development and deployment of innovative AI models and systems that directly impact the capabilities and performance of Apple’s products and services. You will implement robust, scalable ML infrastructure, including data storage, processing, and model serving components, to support seamless integration of AI/ML models into production environments. You will develop novel feature engineering, data augmentation, prompt engineering and fine-tuning frameworks that achieve optimal performance on specific tasks and domains. You will design and implement automated ML pipelines for data preprocessing, feature engineering, model training, hyper-parameter tuning, and model evaluation, enabling rapid experimentation and iteration. You will also implement advanced model compression and optimization techniques to reduce the resource footprint of language models while preserving their performance. Have continuous focus to Brainstorm and Design various POCs using AI/ML Services for new or existing enterprise problems. YOU SHOULD BE ABLE TO: - Understand a business challenge - Collaborate with business and other multi-functional teams - Design a statistical or deep learning solution to find the needed answer to it. - Develop it by yourself or guide another person to do it. - Deliver the outcome into production, (v) Keep a good governance of your work. There are meaningful opportunities for you deliver impactful influences to Apple. Key Qualifications 4+ years of ML engineering experience in feature engineering, model training, model serving, model monitoring and model refresh management Experience developing AI/ML systems at scale in production or in high-impact research environments Passionate about computer vision, natural language processing, especially in LLMs and Generative AI systems Knowledge with the common frameworks and tools such as PyPorch or TensorFlow Experience with transformer models such as BERT, GPT etc. and understanding of their underlying principles is a plus Strong coding, analytical, software engineering skills, and familiarity with software engineering principles around testing, code reviews and deployment Experience in handling performance, application and security log management Applied knowledge of statistical data analysis, predictive modeling classification, Time Series techniques, sampling methods, multivariate analysis, hypothesis testing, and drift analysis. Proficiency in programming languages and tools like Python, R, Git, Airflow, Notebooks. 
- Experience with data visualization tools like matplotlib, D3.js, or Tableau would be a plus

Education & Experience
Bachelor's degree or equivalent experience
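As a rough illustration of the feature-engineering, model-training, and evaluation loop this posting describes, here is a minimal PyTorch sketch that trains and evaluates a toy classifier on synthetic data. The dataset, model, and hyper-parameters are invented placeholders, not anything specific to this role.

```python
# Minimal sketch: synthetic data and a toy classifier, purely for illustration.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Toy dataset standing in for engineered features and labels
X = torch.randn(256, 16)
y = (X.sum(dim=1) > 0).long()
loader = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training epoch: forward pass, loss, backward pass, parameter update
model.train()
for features, labels in loader:
    optimizer.zero_grad()
    loss = loss_fn(model(features), labels)
    loss.backward()
    optimizer.step()

# Simple evaluation pass, the kind of check a model-refresh pipeline would automate
model.eval()
with torch.no_grad():
    accuracy = (model(X).argmax(dim=1) == y).float().mean().item()
print(f"training-set accuracy: {accuracy:.3f}")
```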

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Position Title: Senior Data Engineer
Location: Chennai 34322
Job Type: Contract
Budget: ₹18 LPA
Notice Period: Immediate Joiners Only

Role Overview
We are seeking a highly capable Software Engineer (Data Engineer) to support end-to-end development and deployment of critical data products. The selected candidate will work across diverse business and technical teams to design, build, transform, and migrate data solutions using modern cloud technologies. This is a high-impact role focused on cloud-native data engineering and infrastructure.

Key Responsibilities
- Develop and manage scalable data pipelines and workflows on Google Cloud Platform (GCP)
- Design and implement ETL processes using Python, BigQuery, and Terraform
- Support the data product lifecycle from concept and development through deployment and DevOps
- Optimize query performance and manage large datasets efficiently
- Collaborate with cross-functional teams to gather requirements and deliver solutions
- Maintain strong adherence to Agile practices, contributing to sprint planning and user stories
- Apply best practices in data security, quality, and governance
- Effectively communicate technical solutions to stakeholders and team members

Required Skills & Experience
- Minimum 4 years of relevant experience in GCP data engineering
- Strong hands-on experience with BigQuery, Python programming, Terraform, Cloud Run, and GitHub
- Proven expertise in SQL, data modeling, and performance optimization
- Solid understanding of cloud data warehousing and pipeline orchestration (e.g., DBT, Dataflow, Composer, or Airflow DAGs)
- Background in ETL workflows and data processing logic
- Familiarity with Agile (Scrum) methodology and collaboration tools

Preferred Skills
- Experience with Java, Spring Boot, and RESTful APIs
- Exposure to infrastructure automation and CI/CD pipelines

Educational Qualification
- Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field

Skills: etl, terraform, dbt, java, spring boot, etl workflows, data modeling, dataflow, data engineering, ci/cd, bigquery, agile, data, sql, cloud, restful apis, github, airflow dags, gcp, cloud run, composer, python
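As an illustration of the BigQuery-plus-Airflow orchestration this posting asks for, here is a minimal DAG sketch. The project ID, dataset and table names, SQL, and schedule are assumptions for illustration only, not part of the role description.

```python
# Illustrative sketch only: project, dataset, and table names are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from google.cloud import bigquery


def load_daily_orders(**_):
    """Run a simple ELT-style transformation inside BigQuery."""
    client = bigquery.Client(project="my-gcp-project")  # assumed project id
    sql = """
        CREATE OR REPLACE TABLE analytics.daily_orders AS
        SELECT order_date, COUNT(*) AS orders, SUM(amount) AS revenue
        FROM raw.orders
        GROUP BY order_date
    """
    client.query(sql).result()  # blocks until the BigQuery job finishes


with DAG(
    dag_id="daily_orders_elt",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="load_daily_orders", python_callable=load_daily_orders)
```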

Posted 2 weeks ago

Apply

5.0 years

19 - 20 Lacs

Chennai, Tamil Nadu, India

On-site

Position Title: Senior Software Engineer 34332
Location: Chennai (Onsite)
Job Type: Contract
Budget: ₹20 LPA
Notice Period: Immediate Joiners Only

Role Overview
We are looking for a highly skilled Senior Software Engineer to join a centralized observability and monitoring platform team. The role focuses on building and maintaining a scalable, reliable observability solution that enables faster incident response and data-driven decision-making through latency, traffic, error, and saturation monitoring. This opportunity requires a strong background in cloud-native architecture, observability tooling, backend and frontend development, and data pipeline engineering.

Key Responsibilities
- Design, build, and maintain observability and monitoring platforms to reduce MTTR/MTTX
- Create and optimize dashboards, alerts, and monitoring configurations using tools like Prometheus and Grafana
- Architect and implement scalable data pipelines and microservices for real-time and batch data processing
- Utilize GCP tools including BigQuery, Dataflow, Dataproc, Data Fusion, and others
- Develop end-to-end solutions using Spring Boot, Python, Angular, and REST APIs
- Design and manage relational and NoSQL databases including PostgreSQL, MySQL, and BigQuery
- Implement best practices in data governance, RBAC, encryption, and security within cloud environments
- Ensure automation and reliability through CI/CD, Terraform, and orchestration tools like Airflow and Tekton
- Drive full-cycle SDLC processes including design, coding, testing, deployment, and monitoring
- Collaborate closely with software architects, DevOps, and cross-functional teams for solution delivery

Core Skills Required
- Proficiency in Spring Boot, Angular, Java, and Python
- Experience in developing microservices and SOA-based systems
- Cloud-native development experience, preferably on Google Cloud Platform (GCP)
- Strong understanding of HTML, CSS, JavaScript/TypeScript, and modern frontend frameworks
- Experience with infrastructure automation and monitoring tools
- Working knowledge of data engineering technologies: PySpark, Airflow, Apache Beam, Kafka, and similar
- Strong grasp of RESTful APIs, GitHub, and TDD methodologies

Preferred Skills
- GCP Professional Certifications (e.g., Data Engineer, Cloud Developer)
- Hands-on experience with Terraform, Cloud SQL, data governance tools, and security frameworks
- Exposure to performance tuning, cost optimization, and observability best practices

Experience Required
- 5+ years of experience in full-stack and cloud-based application development
- Strong track record in building distributed, scalable systems
- Prior experience with observability and performance monitoring tools is a plus

Educational Qualifications
- Bachelor's Degree in Computer Science, Information Technology, or a related field (mandatory)

Skills: java, data fusion, html, dataflow, terraform, spring boot, restful apis, python, angular, dataproc, microservices, apache beam, css, cloud sql, soa, typescript, tdd, kafka, javascript, airflow, github, pyspark, bigquery, gcp
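To make the latency, traffic, error, and saturation monitoring concrete, here is a small sketch that instruments a request handler with the prometheus_client library. The metric names, port, and simulated handler are placeholders, not part of the platform described above.

```python
# Sketch using the prometheus_client library; service and metric names are made up.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests", ["status"])
LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds")

def handle_request():
    with LATENCY.time():                  # observes latency, one of the golden signals
        time.sleep(random.uniform(0.01, 0.2))
    status = "500" if random.random() < 0.05 else "200"
    REQUESTS.labels(status=status).inc()  # traffic and error counts by status code

if __name__ == "__main__":
    start_http_server(8000)               # Prometheus scrapes http://host:8000/metrics
    while True:
        handle_request()
```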

Posted 2 weeks ago

Apply

2.0 years

4 Lacs

Chennai

On-site

We are hiring a tech-savvy and creative Social Media Handler with strong expertise in AI-powered content creation, web scraping, and automation of scraper workflows. You will be responsible for managing social media presence while automating content intelligence and trend tracking through custom scraping solutions. This is a hybrid role requiring both creative content skills and technical automation proficiency.

Key Responsibilities:
1) Social Media Management
- Plan and execute content calendars across platforms: Instagram, Facebook, YouTube, LinkedIn, and X.
- Create high-performing, audience-specific content using AI tools (ChatGPT, Midjourney, Canva AI, etc.).
- Engage with followers, track trends, and implement growth strategies.
2) AI Content Creation
- Use generative AI to write captions, articles, and hashtags.
- Generate AI-powered images, carousels, infographics, and reels.
- Repurpose long-form content into short-form video or visual content using tools like Descript or Lumen5.
3) Web Scraping & Automation
- Design and build automated web scrapers to extract data from websites, directories, competitor pages, and trending content sources.
- Schedule scraping jobs and set up automated pipelines using:
  - Python (BeautifulSoup, Scrapy, Selenium, Playwright)
  - Task schedulers (Airflow, Cron, or Python scripts)
  - Cloud scraping or headless browsers
- Parse and clean data for insight generation (topics, hashtags, keywords, sentiment, etc.).
- Store and organize scraped data in spreadsheets or databases for content inspiration and strategy.

Required Skills & Experience:
1) 2–5 years of relevant work experience in social media, content creation, or web scraping.
2) Proficiency in AI tools:
- Text: ChatGPT, Jasper, Copy.ai
- Image: Midjourney, DALL·E, Adobe Firefly
- Video: Pictory, Descript, Lumen5
3) Strong Python skills for:
- Web scraping (Scrapy, BeautifulSoup, Selenium)
- Automation scripting
4) Knowledge of data handling using Pandas, CSV, JSON, Google Sheets, or databases.
5) Familiarity with social media scheduling tools (Meta Business Suite, Buffer, Hootsuite).
6) Ability to work independently and stay updated on digital trends and platform changes.

Educational Qualification
- Degree in Marketing, Media, Computer Science, or Data Science preferred.
- Skills-based hiring encouraged – real-world experience matters more than formal education.

Work Location: Chennai (In-office role)
Salary: Commensurate with experience + performance bonus

Bonus Skills (Nice to Have):
1) Knowledge of website development (HTML, CSS, JS, WordPress/Webflow).
2) SEO and content analytics.
3) Basic video editing and animation (CapCut, After Effects).
4) Experience with automation platforms like Zapier, n8n, or Make.com.

To Apply: Please email your resume, portfolio, and sample projects to:
Job Type: Full-time
Pay: From ₹40,000.00 per month
Work Location: In person
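As a minimal sketch of the scraping-and-trend-tracking workflow described above, the script below fetches a page, pulls headline text and hashtags, and writes them to a CSV. The URL and CSS selector are placeholders and must be adapted to a site you are permitted to scrape.

```python
# Hypothetical example: the target URL and CSS selector are placeholders.
import csv

import requests
from bs4 import BeautifulSoup

URL = "https://example.com/trending"  # replace with a page you are allowed to scrape

def scrape_trending():
    response = requests.get(URL, headers={"User-Agent": "content-research-bot"}, timeout=30)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    rows = []
    for item in soup.select("article h2"):           # assumed selector for headlines
        title = item.get_text(strip=True)
        hashtags = [word for word in title.split() if word.startswith("#")]
        rows.append({"title": title, "hashtags": " ".join(hashtags)})
    with open("trending.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["title", "hashtags"])
        writer.writeheader()
        writer.writerows(rows)

if __name__ == "__main__":
    scrape_trending()  # schedule via cron or an Airflow DAG for daily trend tracking
```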

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Andhra Pradesh, India

On-site

We are seeking a Senior Developer with expertise in SnapLogic and Apache Airflow to design, develop, and maintain enterprise-level data integration solutions. This role requires strong technical expertise in ETL development, workflow orchestration, and cloud technologies. You will be responsible for automating data workflows, optimizing performance, and ensuring the reliability and scalability of our data systems.

Key Responsibilities
- Design, develop, and manage ETL pipelines using SnapLogic, ensuring efficient data transformation and integration across various systems and applications.
- Leverage Apache Airflow for workflow automation, job scheduling, and task dependencies, ensuring optimized execution and monitoring.
- Work closely with cross-functional teams such as Data Engineering, DevOps, and Data Science to understand data requirements and deliver solutions.
- Collaborate in designing and implementing data pipeline architectures to support large-scale data processing in cloud environments like AWS, Azure, and GCP.
- Develop reusable SnapLogic pipelines and integrate with third-party applications and data sources including databases, APIs, and cloud services.
- Optimize SnapLogic pipeline performance to handle large volumes of data with minimal latency.
- Provide guidance and mentoring to junior developers in the team, conducting code reviews and offering best practice recommendations.
- Troubleshoot and resolve pipeline failures, ensuring high data quality and minimal downtime.
- Implement automated testing, continuous integration (CI), and continuous delivery (CD) practices for data pipelines.
- Stay current with new SnapLogic features, Airflow upgrades, and industry best practices.

Required Skills & Experience
- 6+ years of hands-on experience in data engineering, focusing on SnapLogic and Apache Airflow.
- Strong experience with SnapLogic Designer and the SnapLogic cloud environment for building data integrations and ETL pipelines.
- Proficient in Apache Airflow for orchestrating, automating, and scheduling data workflows.
- Strong understanding of ETL concepts, data integration, and data transformations.
- Experience with cloud platforms like AWS, Azure, or Google Cloud and data storage systems such as S3, Azure Blob, and Google Cloud Storage.
- Strong SQL skills and experience with relational databases like PostgreSQL, MySQL, Oracle, and NoSQL databases.
- Experience working with REST APIs, integrating data from third-party services, and using connectors.
- Knowledge of data quality, monitoring, and logging tools for production pipelines.
- Experience with CI/CD pipelines and tools such as Jenkins, GitLab, or similar.
- Excellent problem-solving skills with the ability to diagnose issues and implement effective solutions.
- Ability to work in an Agile development environment.
- Strong communication and collaboration skills to work with both technical and non-technical teams.
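SnapLogic pipelines are commonly invoked from an external orchestrator through a triggered-task URL; the sketch below shows one hedged way an Airflow DAG might call such an endpoint. The URL, token handling, and schedule are placeholders assumed for illustration, not a documented integration from this posting.

```python
# Sketch only: the SnapLogic triggered-task endpoint and token below are placeholders.
from datetime import datetime

import requests
from airflow import DAG
from airflow.operators.python import PythonOperator

TRIGGER_URL = "https://elastic.snaplogic.com/.../my_pipeline_task"  # placeholder endpoint
TOKEN = "replace-with-secret-from-your-secrets-backend"             # placeholder credential


def run_snaplogic_pipeline(**_):
    """Call the triggered task and fail the Airflow task if the pipeline call fails."""
    resp = requests.post(TRIGGER_URL, headers={"Authorization": f"Bearer {TOKEN}"}, timeout=300)
    resp.raise_for_status()


with DAG(
    dag_id="snaplogic_nightly_load",
    start_date=datetime(2025, 1, 1),
    schedule_interval="0 2 * * *",  # nightly at 02:00, purely illustrative
    catchup=False,
) as dag:
    PythonOperator(task_id="trigger_snaplogic", python_callable=run_snaplogic_pipeline)
```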

Posted 2 weeks ago

Apply

5.0 - 8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title - ETL Developer - Informatica BDM/DEI
📍 Location: Onsite
🕒 Employment Type: Full Time
💼 Experience Level: Mid-Senior

Job Summary
We are seeking a skilled and results-driven ETL Developer with strong experience in Informatica BDM (Big Data Management) or Informatica DEI (Data Engineering Integration) to design and implement scalable, high-performance data integration solutions. The ideal candidate will work on large-scale data projects involving structured and unstructured data, and contribute to the development of reliable and efficient ETL pipelines across modern big data environments.

Key Responsibilities
- Design, develop, and maintain ETL pipelines using Informatica BDM/DEI for batch and real-time data integration
- Integrate data from diverse sources including relational databases, flat files, cloud storage, and big data platforms such as Hive and Spark
- Translate business and technical requirements into mapping specifications and transformation logic
- Optimize mappings, workflows, and job executions to ensure high performance, scalability, and reliability
- Conduct unit testing and participate in integration and system testing
- Collaborate with data architects, analysts, and business stakeholders to understand requirements and deliver robust solutions
- Support data quality checks, exception handling, and metadata documentation
- Monitor, troubleshoot, and resolve ETL job issues and performance bottlenecks
- Ensure adherence to data governance and compliance standards throughout the development lifecycle

Key Skills and Qualifications
- 5-8 years of experience in ETL development with a focus on Informatica BDM/DEI
- Strong knowledge of data integration techniques, transformation logic, and job orchestration
- Proficiency in SQL, with the ability to write and optimize complex queries
- Experience working with Hadoop ecosystems (e.g., Hive, HDFS, Spark) and large-volume data processing
- Understanding of performance optimization in ETL and big data environments
- Familiarity with job scheduling tools and workflow orchestration (e.g., Control-M, Apache Airflow, Oozie)
- Good understanding of data warehousing, data lakes, and data modeling principles
- Experience working in Agile/Scrum environments
- Excellent analytical, problem-solving, and communication skills

Good to Have
- Experience with cloud data platforms (AWS Glue, Azure Data Factory, or GCP Dataflow)
- Exposure to Informatica IDQ (Data Quality) is a plus
- Knowledge of Python, Shell scripting, or automation tools
- Informatica or Big Data certifications
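Informatica BDM/DEI mappings are typically launched from its own scheduler or an external orchestrator; as a hedged sketch of the workflow-orchestration skill listed above, the Airflow DAG below shells out to an infacmd-style command. The command group, options, domain, service, application, and mapping names are placeholders and will differ per environment.

```python
# Orchestration sketch: the infacmd invocation is a placeholder, not a verified command line.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="informatica_nightly_etl",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    run_mapping = BashOperator(
        task_id="run_customer_mapping",
        bash_command=(
            "infacmd.sh ms runMapping "          # placeholder: adapt to your DEI domain
            "-dn MyDomain -sn MyDISService "
            "-un $INFA_USER -pd $INFA_PASSWORD "
            "-a MyApplication -m m_load_customers"
        ),
    )
```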

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Title: Senior Bioinformatician
GCL: D2

Introduction to role
Are you ready to tackle some of the most challenging informatics problems in the drug discovery clinical trial phase? Join us as a Senior Bioinformatician and be part of a team that is redefining healthcare. Your work will directly impact millions of patients by advancing the standard of drug discovery through data processing, analysis, and algorithm development. Collaborate with informaticians, data scientists, and engineers to deliver groundbreaking solutions that drive scientific insights and improve the quality of candidate drugs. Are you up for the challenge?

Accountabilities
- Collaborate with scientific colleagues across AstraZeneca to ensure informatics and advanced analytics solutions meet R&D needs.
- Develop and deliver informatics solutions using agile methodologies, including pipelining approaches and algorithm development.
- Contribute to multi-omics drug projects with downstream analysis and data analytics.
- Create, benchmark, and deploy scalable data workflows for genome assembly, variant calling, annotation, and more.
- Implement CI/CD practices for pipeline development across cloud-based and HPC environments.
- Apply cloud computing platforms like AWS for pipeline execution and data storage.
- Explore opportunities to apply AI & ML in informatics.
- Engage with external peers and software providers to apply the latest methods to business problems.
- Work closely with data scientists and platform teams to deliver scientific insights.
- Collaborate with informatics colleagues in our Global Innovation and Technology Centre.

Essential Skills/Experience
- Master's/PhD (or equivalent) in Bioinformatics, Computational Biology, AI/ML, Genomics, Systems Biology, Biomedical Informatics, or a related field, with a demonstrable record of informatics and image analysis delivery in a biopharmaceutical setting.
- Strong coding and software engineering skills such as Python, R, scripting, and Nextflow.
- Over 6 years of experience in image analysis/bioinformatics, with a focus on image/NGS data analysis and Nextflow (DSL2) pipeline development.
- Proficiency in cloud platforms, preferably AWS (e.g., S3, EC2, Batch, EBS, EFS), and containerization tools (Docker, Singularity).
- Experience with workflow management tools and CI/CD practices in image analysis and bioinformatics (Git, GitHub, GitLab), and HPC on AWS.
- Experience working with multi-omics analysis (transcriptomics, single cell, CRISPR, etc.) or image data (DICOM, WSI, etc.) analysis.
- Experience working with omics tools and databases such as NCBI, PubMed, the UCSC Genome Browser, bedtools, samtools, and Picard, or imaging tools such as CellProfiler, HALO, and VisioPharm, particularly in digital pathology and biomarker research.
- Strong communication skills, with the ability to collaborate effectively with team members and partners to achieve objectives.

Desirable Skills/Experience
- Experience in omics or imaging data analysis in a biopharmaceutical setting.
- Knowledge of Docker and Kubernetes for container orchestration.
- Experience with other workflow management systems (e.g., Apache Airflow, Nextflow, Cromwell, AWS Step Functions).
- Familiarity with web-based bioinformatics tools (e.g., RShiny, Jupyter).
- Experience working in GxP-validated environments.
- Experience administering and optimising an HPC job scheduler (e.g., SLURM).
- Experience with configuration automation and infrastructure as code (e.g., Ansible, HashiCorp Terraform, AWS CloudFormation, AWS Cloud Development Kit).
- Experience deploying infrastructure and code to public cloud, especially AWS.

When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible. We balance the expectation of being in the office while respecting individual flexibility. Join us in our unique and ambitious world.

At AstraZeneca, we are driven by a shared purpose to push the boundaries of science and develop life-changing medicines. Our innovative approach combines groundbreaking science with leading digital technology platforms to empower our teams to perform at their best. We foster an environment where you can explore new solutions and experiment with groundbreaking technology. With countless opportunities for learning and growth, you'll be part of a diverse team that works multi-functionally to make a meaningful impact on patients' lives. Ready to make a difference? Apply now to join our team as a Senior Bioinformatician!

Date Posted: 02-Jul-2025
Closing Date: 30-Jul-2025

AstraZeneca embraces diversity and equality of opportunity. We are committed to building an inclusive and diverse team representing all backgrounds, with as wide a range of perspectives as possible, and harnessing industry-leading skills. We believe that the more inclusive we are, the better our work will be. We welcome and consider applications to join our team from all qualified candidates, regardless of their characteristics. We comply with all applicable laws and regulations on non-discrimination in employment (and recruitment), as well as work authorization and employment eligibility verification requirements.
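As a hedged illustration of running containerised analysis steps on AWS Batch, the execution pattern named in the essential skills above, here is a small boto3 sketch that submits one per-sample job. The region, job queue, job definition, and S3 paths are placeholders, not details of any AstraZeneca pipeline.

```python
# Sketch assuming an AWS Batch queue and a containerised analysis job definition
# already exist; all names below are placeholders.
import boto3

batch = boto3.client("batch", region_name="eu-west-1")

def submit_variant_calling(sample_id: str, reads_uri: str) -> str:
    """Submit one per-sample analysis job and return the Batch job id."""
    response = batch.submit_job(
        jobName=f"variant-calling-{sample_id}",
        jobQueue="genomics-spot-queue",              # placeholder queue
        jobDefinition="variant-calling-pipeline:3",  # placeholder job definition
        containerOverrides={
            "command": ["--sample", sample_id, "--reads", reads_uri],
        },
    )
    return response["jobId"]

if __name__ == "__main__":
    job_id = submit_variant_calling("NA12878", "s3://my-bucket/reads/NA12878.fastq.gz")
    print("submitted", job_id)
```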

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Experience: 7+ years
Location: Hyderabad (preferred), Pune, Mumbai

Job Description
We are seeking a skilled Snowflake Developer with 7+ years of experience in designing, developing, and optimizing Snowflake data solutions. The ideal candidate will have strong expertise in Snowflake SQL, ETL/ELT pipelines, and cloud data integration. This role involves building scalable data warehouses, implementing efficient data models, and ensuring high-performance data processing in Snowflake.

Key Responsibilities
1. Snowflake Development & Optimization
- Design and develop Snowflake databases, schemas, tables, and views following best practices.
- Write complex SQL queries, stored procedures, and UDFs for data transformation.
- Optimize query performance using clustering, partitioning, and materialized views.
- Implement Snowflake features (Time Travel, Zero-Copy Cloning, Streams & Tasks).
2. Data Pipeline Development
- Build and maintain ETL/ELT pipelines using Snowflake, Snowpark, Python, or Spark.
- Integrate Snowflake with cloud storage (S3, Blob) and data ingestion tools (Snowpipe).
- Develop CDC (Change Data Capture) and real-time data processing solutions.
3. Data Modeling & Warehousing
- Design star schema, snowflake schema, and data vault models in Snowflake.
- Implement data sharing, secure views, and dynamic data masking.
- Ensure data quality, consistency, and governance across Snowflake environments.
4. Performance Tuning & Troubleshooting
- Monitor and optimize Snowflake warehouse performance (scaling, caching, resource usage).
- Troubleshoot data pipeline failures, latency issues, and query bottlenecks.
- Work with DevOps teams to automate deployments and CI/CD pipelines.
5. Collaboration & Documentation
- Work closely with data analysts, BI teams, and business stakeholders to deliver data solutions.
- Document data flows, architecture, and technical specifications.
- Mentor junior developers on Snowflake best practices.

Required Skills & Qualifications
- 7+ years in database development, data warehousing, or ETL.
- 4+ years of hands-on Snowflake development experience.
- Strong SQL or Python skills for data processing.
- Experience with Snowflake utilities (SnowSQL, Snowsight, Snowpark).
- Knowledge of cloud platforms (AWS/Azure) and data integration tools (Coalesce, Airflow, DBT).
- Certifications: SnowPro Core Certification (preferred).

Preferred Skills
- Familiarity with data governance and metadata management.
- Familiarity with DBT, Airflow, SSIS & IICS.
- Knowledge of CI/CD pipelines (Azure DevOps).
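To ground the Streams & Tasks and ELT items above, here is a minimal sketch using the Snowflake Python connector to create a change-capture stream, a scheduled merge task, and resume it. The account details, warehouse, and table names are placeholders for illustration only.

```python
# Sketch using the Snowflake Python connector; credentials and object names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",        # placeholder account identifier
    user="etl_user",
    password="***",
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="STAGING",
)

statements = [
    # Capture new and changed rows landing in the raw table
    "CREATE OR REPLACE STREAM orders_stream ON TABLE RAW.ORDERS",
    # Merge captured changes into the curated table every 15 minutes
    """
    CREATE OR REPLACE TASK merge_orders
      WAREHOUSE = TRANSFORM_WH
      SCHEDULE = '15 MINUTE'
    AS
      MERGE INTO ANALYTICS.ORDERS AS t
      USING orders_stream AS s ON t.order_id = s.order_id
      WHEN MATCHED THEN UPDATE SET t.amount = s.amount
      WHEN NOT MATCHED THEN INSERT (order_id, amount) VALUES (s.order_id, s.amount)
    """,
    "ALTER TASK merge_orders RESUME",
]

cur = conn.cursor()
try:
    for sql in statements:
        cur.execute(sql)
finally:
    cur.close()
    conn.close()
```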

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Overview
TekWissen is a global workforce management provider that offers strategic talent solutions to our clients throughout India and worldwide. Our client is a company operating a marketplace for consumers, sellers, and content creators. It offers merchandise and content purchased for resale from vendors and those offered by third-party sellers.

Job Title: Business Intelligence Engineer III
Location: Pune
Duration: 6 Months
Job Type: Contract
Work Type: Onsite

Job Description
The Top Responsibilities:
- Data Engineering on AWS: Design and implement scalable and secure data pipelines using AWS services such as the client's S3, AWS Glue, the client's Redshift, and the client's Athena. Ensure high-performance, reliable, and fault-tolerant data architectures.
- Data Modeling and Transformation: Develop and optimize dimensional data models to support various business intelligence and analytics use cases. Perform complex data transformations and enrichment using tools like AWS Glue, AWS Lambda, and Apache Spark.
- Business Intelligence and Reporting: Collaborate with stakeholders to understand reporting and analytics requirements. Build interactive dashboards and reports using visualization tools like the client's QuickSight.
- Data Governance and Quality: Implement data quality checks and monitoring processes to ensure the integrity and reliability of data. Define and enforce data policies, standards, and procedures.
- Cloud Infrastructure Management: Manage and maintain the AWS infrastructure required for the data and analytics platform. Optimize performance, cost, and security of the underlying cloud resources.
- Collaboration and Knowledge Sharing: Work closely with cross-functional teams, including data analysts, data scientists, and business users, to identify opportunities for data-driven insights. Share knowledge, best practices, and train other team members.

Leadership Principles
- Ownership
- Deliver Results
- Insist on the Highest Standards

Mandatory Requirements
- 3+ years of experience as a Business Intelligence Engineer or Data Engineer, with a strong focus on AWS cloud technologies.
- Proficient in designing and implementing data pipelines using AWS services such as S3, Glue, Redshift, Athena, and Lambda.
- Expertise in data modeling, dimensional modeling, and data transformation techniques.
- Experience in building and deploying business intelligence solutions, including the use of tools like the client's QuickSight and Tableau.
- Strong SQL and Python programming skills for data processing and analysis.
- Understanding of cloud architecture patterns, security best practices, and cost optimization on AWS.
- Excellent communication and collaboration skills to work effectively with cross-functional teams.

Preferred Skills
- Hands-on experience with Apache Spark, Airflow, or other big data technologies.
- Knowledge of AWS DevOps practices and tools, such as AWS CodePipeline, AWS CodeBuild, and AWS CloudFormation.
- Familiarity with agile software development methodologies.
- AWS Certification (e.g., AWS Certified Data Analytics - Specialty).

Certification Requirements
- Any Graduate

TekWissen® Group is an equal opportunity employer supporting workforce diversity.
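As a small illustration of querying curated data with the AWS services named above, the sketch below starts an Athena query with boto3 and polls for completion. The region, database, output bucket, and SQL are placeholders assumed for the example.

```python
# Sketch with boto3; database, bucket, and table names are placeholders.
import time

import boto3

athena = boto3.client("athena", region_name="ap-south-1")

def run_athena_query(sql: str) -> str:
    """Start an Athena query, wait for it to finish, and return the execution id."""
    execution = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": "sales_curated"},                 # placeholder
        ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},   # placeholder
    )
    query_id = execution["QueryExecutionId"]
    while True:
        status = athena.get_query_execution(QueryExecutionId=query_id)
        state = status["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(2)
    if state != "SUCCEEDED":
        raise RuntimeError(f"Athena query ended in state {state}")
    return query_id

if __name__ == "__main__":
    run_athena_query("SELECT region, SUM(revenue) AS revenue FROM orders GROUP BY region")
```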

Posted 2 weeks ago

Apply

9.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Key Responsibilities
- Lead the design and development of advanced refrigeration and HVAC systems for data centers.
- Provide technical leadership in the application of CO₂ transcritical systems for sustainable and efficient cooling.
- Perform thermal load calculations, equipment sizing, and system layout planning.
- Collaborate with electrical engineers, manufacturing engineers, and field service engineers to ensure integrated and optimized cooling solutions.
- Conduct feasibility studies, energy modeling, and performance simulations.
- Oversee installation, commissioning, and troubleshooting of refrigeration systems.
- Ensure compliance with industry standards, safety regulations, and environmental guidelines.
- Prepare detailed technical documentation, specifications, and reports.

Required Qualifications:
- Bachelor's or Master's degree in Mechanical Engineering, HVAC Engineering, or a related field.
- 7–9 years of experience in refrigeration or HVAC system design, with a focus on data center cooling.
- In-depth knowledge of data center thermal management, including CRAC/CRAH units, liquid cooling, and airflow management.
- Hands-on experience with CO₂ transcritical refrigeration systems and natural refrigerants.
- Strong understanding of thermodynamics, fluid mechanics, and heat transfer.
- Familiarity with relevant codes and standards (ASHRAE, ISO, IEC, etc.).
- Proficiency in design and simulation tools (e.g., AutoCAD, Revit, Pack Calculation Pro, Cycle_DX, VTB, or HVAC-specific software).

Preferred Qualifications:
- Experience with energy efficiency optimization and sustainability initiatives.
- Knowledge of control systems and building automation for HVAC.
- Experience working in mission-critical environments or hyperscale data centers.
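For the thermal load calculation work mentioned above, a back-of-the-envelope sensible-heat relation (Q = mass flow x cp x temperature rise) sizes the supply airflow for a given IT load. The sketch below uses example values only; real designs account for altitude, humidity, bypass, and latent loads.

```python
# Back-of-the-envelope sensible-heat check; the IT load and temperature rise
# below are assumed example values, not project figures.
AIR_DENSITY = 1.2   # kg/m^3, approximate air density at sea level
AIR_CP = 1.005      # kJ/(kg*K), specific heat capacity of air

def required_airflow_m3_per_s(it_load_kw: float, delta_t_k: float) -> float:
    """Volumetric airflow needed to remove a sensible heat load at a given air temperature rise."""
    mass_flow = it_load_kw / (AIR_CP * delta_t_k)  # kg/s from Q = m_dot * cp * dT
    return mass_flow / AIR_DENSITY                 # convert mass flow to m^3/s

if __name__ == "__main__":
    # Example: a 300 kW data hall with an 11 K supply-to-return temperature rise
    flow = required_airflow_m3_per_s(300, 11)
    print(f"approx. {flow:.1f} m^3/s ({flow * 3600:.0f} m^3/h) of supply air")
```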

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

pune, maharashtra

On-site

Join us as a Big Data Engineer at Barclays, where you will spearhead the evolution of the digital landscape, driving innovation and excellence. You will harness cutting-edge technology to revolutionize digital offerings, ensuring unparalleled customer experiences.

To be successful as a Big Data Engineer, you should have experience with:
- Full Stack Software Development for large-scale, mission-critical applications.
- Mastery of distributed big data systems such as Spark, Hive, Kafka streaming, Hadoop, and Airflow.
- Expertise in Scala, Java, Python, J2EE technologies, Microservices, Spring, Hibernate, and REST APIs.
- Experience with n-tier web application development and frameworks like Spring Boot, Spring MVC, JPA, and Hibernate.
- Proficiency with version control systems, preferably Git; GitHub Copilot experience is a plus.
- Proficiency in API development using SOAP or REST, JSON, and XML.
- Experience developing back-end applications with multi-process and multi-threaded architectures.
- Hands-on experience building scalable microservices solutions using integration design patterns, Docker, containers, and Kubernetes.
- Experience in DevOps practices like CI/CD, Test Automation, and Build Automation using tools like Jenkins, Maven, Chef, Git, and Docker.
- Experience with data processing in cloud environments like Azure or AWS.
- Data product development experience is essential.
- Experience in Agile development methodologies like SCRUM.
- Result-oriented with strong analytical and problem-solving skills.
- Excellent verbal and written communication and presentation skills.

You may be assessed on key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking, digital and technology, as well as job-specific technical skills. This role is for the Pune location.

Purpose of the role:
To design, develop, and improve software, utilizing various engineering methodologies, that provides business, platform, and technology capabilities for our customers and colleagues.

Accountabilities:
- Development and delivery of high-quality software solutions by using industry-aligned programming languages, frameworks, and tools. Ensuring that code is scalable, maintainable, and optimized for performance.
- Cross-functional collaboration with product managers, designers, and other engineers to define software requirements, devise solution strategies, and ensure seamless integration and alignment with business objectives.
- Collaboration with peers, participation in code reviews, and promotion of a culture of code quality and knowledge sharing.
- Stay informed of industry technology trends and innovations and actively contribute to the organization's technology communities to foster a culture of technical excellence and growth.
- Adherence to secure coding practices to mitigate vulnerabilities, protect sensitive data, and ensure secure software solutions.
- Implementation of effective unit testing practices to ensure proper code design, readability, and reliability.

Analyst Expectations:
- Perform prescribed activities in a timely manner and to a high standard, consistently driving continuous improvement.
- Requires in-depth technical knowledge and experience in the assigned area of expertise.
- Thorough understanding of the underlying principles and concepts within the area of expertise.
- Lead and supervise a team, guiding and supporting professional development, allocating work requirements, and coordinating team resources.
- If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviors to create an environment for colleagues to thrive and deliver to a consistently excellent standard.
- For an individual contributor, develop technical expertise in the work area, acting as an advisor where appropriate.
- Will have an impact on the work of related teams within the area.
- Partner with other functions and business areas.
- Take responsibility for end results of a team's operational processing and activities.
- Escalate breaches of policies/procedures appropriately.
- Take responsibility for embedding new policies/procedures adopted due to risk mitigation.
- Advise and influence decision-making within your own area of expertise.
- Take ownership of managing risk and strengthening controls in relation to the work you own or contribute to.

All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence, and Stewardship, our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset of Empower, Challenge, and Drive, the operating manual for how we behave.
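As a minimal sketch of the Spark-plus-Kafka streaming stack listed in the role requirements, the PySpark job below reads JSON trade events from a Kafka topic and writes them to Parquet. The broker addresses, topic, schema, and output paths are placeholders, not anything specific to this role.

```python
# Minimal structured-streaming sketch; brokers, topic, schema, and paths are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

spark = SparkSession.builder.appName("trades-stream").getOrCreate()

# Assumed event schema for illustration
schema = StructType([
    StructField("trade_id", StringType()),
    StructField("symbol", StringType()),
    StructField("price", DoubleType()),
])

trades = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")   # placeholder brokers
    .option("subscribe", "trades")                        # placeholder topic
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("t"))
    .select("t.*")
)

query = (
    trades.writeStream.format("parquet")
    .option("path", "/data/curated/trades")               # placeholder output path
    .option("checkpointLocation", "/data/checkpoints/trades")
    .start()
)
query.awaitTermination()
```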

Posted 2 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies