5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
Phonologies, a leading provider of speech technology and voice bots in India, is seeking individuals to join the team and revolutionize the delivery of conversational AI and voice bots over the phone. Our innovative solutions integrate seamlessly with top contact center providers, telephony systems, and CRM systems. We are looking for dynamic and skilled specialists to contribute to the development of our cutting-edge customer interaction solutions.

As part of our team, you will develop and implement machine learning models for real-world applications. Ideal candidates should have a minimum of 5 years of experience, proficiency in Python and scikit-learn, and familiarity with tools such as MLflow, Airflow, and Docker. You will collaborate with engineering and product teams to create and manage ML pipelines, monitor model performance, and uphold ethical and explainable AI practices. Essential skills for this role include strong feature engineering, ownership of the model lifecycle, and effective communication. Please note that recent graduates will not be considered for this position.

In our welcoming and professional work environment, your role will offer both challenges and opportunities for growth. The position is based in Pune. If you believe your qualifications align with our requirements, please submit your resume (in .pdf format) to Careers@Phonologies.com and include a brief introduction about yourself in the email. We are excited to learn more about you and potentially welcome you to our team!
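The posting above names a concrete stack (Python, scikit-learn, MLflow). Purely as an illustration of that combination, the following is a minimal sketch of training a small intent classifier and logging it with MLflow; the sample utterances, labels, and experiment name are hypothetical placeholders, not part of the role description.

```python
# Minimal sketch: train a tiny intent classifier and track it with MLflow.
# The dataset, labels, and experiment name are illustrative placeholders.
import mlflow
import mlflow.sklearn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

texts = [
    "my card is blocked", "block my card please", "lost my debit card",
    "what is my balance", "show account balance", "how much money do I have",
    "talk to an agent", "connect me to a human",
]
labels = ["card", "card", "card", "balance", "balance", "balance", "agent", "agent"]

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.25, random_state=42, stratify=labels
)

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("clf", LogisticRegression(max_iter=1000)),
])

mlflow.set_experiment("voicebot-intent-classifier")  # hypothetical experiment name
with mlflow.start_run():
    pipeline.fit(X_train, y_train)
    score = f1_score(y_test, pipeline.predict(X_test), average="macro")
    mlflow.log_metric("f1_macro", score)
    mlflow.sklearn.log_model(pipeline, "model")
```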
Posted 1 week ago
3.0 - 8.0 years
0 Lacs
Delhi
On-site
As a Snowflake Solution Architect, you will own and drive the development of Snowflake solutions and products as part of the COE. You will work with and guide the team to build solutions using the latest innovations and features launched by Snowflake, conduct sessions on new and upcoming launches in the Snowflake ecosystem, and liaise with Snowflake Product and Engineering to stay ahead of new features, innovations, and updates. You will publish articles and reference architectures that solve real business problems, and build accelerators that demonstrate how Snowflake solutions and tools integrate with and compare to other platforms such as AWS, Azure Fabric, and Databricks.

In this role, you will lead the post-sales technical strategy and execution for high-priority Snowflake use cases across strategic customer accounts. You will triage and resolve advanced, long-running customer issues while ensuring timely and clear communication, develop and maintain robust internal documentation, knowledge bases, and training materials to scale support efficiency, and support enterprise-scale RFPs focused on Snowflake.

To be successful in this role, you should have at least 8 years of industry experience, including a minimum of 3 years in a Snowflake consulting environment. You should have experience implementing and operating Snowflake-centric solutions, and proficiency in implementing data security measures, access controls, and design within the Snowflake platform. An understanding of the complete data analytics stack and workflow, from ETL to data platform design to BI and analytics tools, is essential, along with strong skills in databases, data warehouses, and data processing, and extensive hands-on expertise with SQL and SQL analytics. Familiarity with data science concepts and Python is a strong advantage. Knowledge of Snowflake components such as Snowpipe, query parsing and optimization, Snowpark, Snowflake ML, authorization and access control management, metadata management, infrastructure management and auto-scaling, and Snowflake Marketplace for datasets and applications, as well as DevOps and orchestration tools like Airflow, dbt, and Jenkins, is necessary. Snowflake certifications are good to have.

Strong communication and presentation skills are essential, as you will engage with both technical and executive audiences, and you should be skilled at working collaboratively across engineering, product, and customer success teams. This position is open in all Xebia office locations, including Pune, Bangalore, Gurugram, Hyderabad, Chennai, Bhopal, and Jaipur. If you meet the above requirements and are excited about this opportunity, please share your details here: [Apply Now](https://forms.office.com/e/LNuc2P3RAf)
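The Snowflake components listed above include Snowpark. As a hedged illustration of the kind of in-database work involved, here is a minimal Snowpark (Python) sketch; the connection parameters and the ORDERS table are hypothetical placeholders, not part of the posting.

```python
# Minimal Snowpark sketch: aggregate order revenue by region inside Snowflake.
# Connection parameters and the ORDERS table are illustrative placeholders.
from snowflake.snowpark import Session
from snowflake.snowpark import functions as F

connection_parameters = {
    "account": "<account_identifier>",
    "user": "<user>",
    "password": "<password>",
    "warehouse": "ANALYTICS_WH",
    "database": "SALES_DB",
    "schema": "PUBLIC",
}

session = Session.builder.configs(connection_parameters).create()

# Push the aggregation down to Snowflake rather than pulling rows locally.
revenue_by_region = (
    session.table("ORDERS")
    .group_by("REGION")
    .agg(F.sum(F.col("AMOUNT")).alias("TOTAL_AMOUNT"))
    .sort(F.col("TOTAL_AMOUNT").desc())
)
revenue_by_region.show()
session.close()
```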
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
Ahmedabad, Gujarat
On-site
YipitData is a leading market research and analytics firm specializing in the disruptive economy, having recently secured a significant investment from The Carlyle Group at a valuation of over $1B. Recognized for three consecutive years as one of Inc.'s Best Workplaces, we are a rapidly expanding technology company with offices across various locations globally, fostering a culture centered on mastery, ownership, and transparency.

In this role, you will collaborate with strategic engineering leaders and report directly to the Director of Data Engineering, contributing to the establishment of our Data Engineering team presence in India and working within a global team on challenging big data problems.

We are currently searching for a highly skilled Senior Data Engineer with 6-8 years of relevant experience to join our dynamic Data Engineering team. The ideal candidate should possess a solid grasp of Spark and SQL, along with experience in data pipeline development. Successful candidates will play a vital role in expanding our data engineering team, focusing on enhancing reliability, efficiency, and performance within our strategic pipelines. The Data Engineering team at YipitData sets the standard for all other analyst teams, maintaining and developing the core pipelines and tools that drive our products. This team plays a crucial role in supporting the rapid growth of our business, and this position presents a unique opportunity for the first hire to potentially lead and shape the team as responsibilities evolve.

This hybrid role will be based in India; training and onboarding will require overlap with US working hours initially, after which standard IST working hours are permissible, with occasional meetings with the US team.

As a Senior Data Engineer at YipitData, you will work directly under the Senior Manager of Data Engineering, receiving hands-on training on cutting-edge data tools and techniques. Responsibilities include building and maintaining end-to-end data pipelines, establishing best practices for data modeling and pipeline construction, generating documentation and training materials, and resolving complex data pipeline issues using PySpark and SQL. Collaborating with stakeholders to integrate business logic into central pipelines and mastering tools like Databricks, Spark, and other ETL technologies are also key aspects of the role.

Successful candidates are likely to have a Bachelor's or Master's degree in Computer Science, STEM, or a related field, with at least 6 years of experience in Data Engineering or similar technical roles. An enthusiasm for problem-solving, continuous learning, and a strong understanding of data manipulation and pipeline development are essential, as are proficiency in working with large datasets using PySpark, Delta, and Databricks, the ability to align data transformations with business needs, and a willingness to acquire new skills. Effective communication skills, a proactive approach, and the ability to work collaboratively with stakeholders are highly valued.

In addition to a competitive salary, YipitData offers a comprehensive compensation package that includes various benefits, perks, and opportunities for personal and professional growth. Employees are encouraged to focus on their impact, self-improvement, and skill mastery in an environment that promotes ownership, respect, and trust.
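Since the role centers on PySpark, Delta, and Databricks, a minimal sketch of a batch transformation in that stack follows; the S3 paths and column names are hypothetical placeholders, and it assumes a Delta-enabled Spark environment such as Databricks.

```python
# Minimal sketch of a PySpark/Delta batch transformation.
# Paths and column names are illustrative placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-transactions").getOrCreate()

raw = spark.read.format("delta").load("s3://example-bucket/raw/transactions")

daily = (
    raw.filter(F.col("status") == "completed")
       .withColumn("txn_date", F.to_date("created_at"))
       .groupBy("merchant_id", "txn_date")
       .agg(
           F.sum("amount").alias("gross_sales"),
           F.countDistinct("order_id").alias("order_count"),
       )
)

# Overwrite the aggregated mart, partitioned by date for efficient reads.
(daily.write.format("delta")
      .mode("overwrite")
      .partitionBy("txn_date")
      .save("s3://example-bucket/marts/daily_merchant_sales"))
```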
Posted 1 week ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: Payer Analytics Specialist

Position Summary
The Payer Analytics Specialist is responsible for driving insights and supporting decision-making by analyzing healthcare payer data, creating data pipelines, and managing complex analytics projects. This role involves collaborating with cross-functional teams (Operations, Product, IT, and external partners) to ensure robust data integration, reporting, and advanced analytics capabilities. The ideal candidate will have strong technical skills, payer domain expertise, and the ability to manage 3rd-party data sources effectively.

Key Responsibilities

Data Integration and ETL Pipelines:
Develop, maintain, and optimize end-to-end data pipelines, including ingestion, transformation, and loading of internal and external data sources. Collaborate with IT and Data Engineering teams to design scalable, secure, and high-performing data workflows. Implement best practices in data governance, version control, data security, and documentation.

Analytics and Reporting:
Data analysis: Analyze CPT-level data to identify trends, patterns, and insights relevant to healthcare services and payer rates. Benchmarking: Compare and benchmark rates provided by different health insurance payers within designated zip codes to assess competitive positioning. Build and maintain analytical models for cost, quality, and utilization metrics, leveraging tools such as Python, R, or SQL-based BI tools. Develop dashboards and reports to communicate findings to stakeholders across the organization.

3rd-Party Data Management:
Ingest and preprocess 3rd-party data from multiple sources and transform it into unified structures for analytics and reporting. Ensure compliance with transparency requirements and enable downstream analytics. Design automated workflows to update and validate data, working closely with external vendors and technical teams. Establish best practices for data quality checks (e.g., encounter completeness, claim-level validations) and troubleshooting.

Project Management and Stakeholder Collaboration:
Manage analytics project lifecycles: requirement gathering, project scoping, resource planning, timeline monitoring, and delivery. Partner with key stakeholders (Finance, Operations, Population Health) to define KPIs, data needs, and reporting frameworks. Communicate technical concepts and results to non-technical audiences, providing clear insights and recommendations.

Quality Assurance and Compliance:
Ensure data quality by implementing validation checks, audits, and anomaly detection frameworks. Maintain compliance with HIPAA, HITECH, and other relevant healthcare regulations and data privacy requirements. Participate in internal and external audits of data processes.

Continuous Improvement and Thought Leadership:
Stay current with industry trends, analytics tools, and regulatory changes affecting payer analytics. Identify opportunities to enhance existing data processes, adopt new technologies, and promote a data-driven culture within the organization. Mentor junior analysts and share best practices in data analytics, reporting, and pipeline development.

Required Qualifications

Education & Experience: Bachelor's degree in Health Informatics, Data Science, Computer Science, Statistics, or a related field (Master's degree a plus). 3-5+ years of experience in healthcare analytics, payer operations, or related fields.
Technical Skills:
Data Integration & ETL: Proficiency in building data pipelines using tools like SQL, Python, R, or ETL platforms (e.g., Talend, Airflow, or Data Factory).
Databases & Cloud: Experience working with relational databases (SQL Server, PostgreSQL) and cloud environments (AWS, Azure, GCP).
BI & Visualization: Familiarity with BI tools (Tableau, Power BI, Looker) for dashboard creation and data storytelling.
MRF, All Claims, & Definitive Healthcare Data: Hands-on experience (or strong familiarity) with healthcare transparency data sets, claims data ingestion strategies, and provider/facility-level data from 3rd-party sources like Definitive Healthcare.

Healthcare Domain Expertise:
Strong understanding of claims data structures (UB-04, CMS-1500), coding systems (ICD, CPT, HCPCS), and payer processes. Knowledge of healthcare regulations (HIPAA, HITECH, transparency rules) and how they impact data sharing and management.

Analytical & Problem-Solving Skills:
Proven ability to synthesize large datasets, pinpoint issues, and recommend data-driven solutions. Comfort with statistical analysis and predictive modeling using Python or R.

Soft Skills:
Excellent communication and presentation skills, with the ability to convey technical concepts to non-technical stakeholders. Strong project management and organizational skills, with the ability to handle multiple tasks and meet deadlines. Collaborative mindset and willingness to work cross-functionally to achieve shared objectives.

Preferred/Additional Qualifications:
Advanced degree (MBA, MPH, MS in Analytics, or similar). Experience with healthcare cost transparency regulations and handling MRF data specifically for compliance. Familiarity with DataOps or DevOps practices to automate and streamline data pipelines. Certification in BI or data engineering (e.g., Microsoft Certified: Azure Data Engineer, AWS Data Analytics Specialty). Experience establishing data stewardship programs and leading data governance initiatives.

Why Join Us
Impactful Work - Play a key role in leveraging payer data to reduce costs, improve quality, and shape population health strategies.
Innovation - Collaborate on advanced analytics projects using state-of-the-art tools and platforms.
Growth Opportunity - Be part of an expanding analytics team where you can lead initiatives, mentor others, and deepen your healthcare data expertise.
Supportive Culture - Work in an environment that values open communication, knowledge sharing, and continuous learning.

(ref:hirist.tech)
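As an illustration of the CPT-level payer rate benchmarking described in this posting, the following is a minimal pandas sketch; the input file and its columns (payer, cpt_code, zip_code, negotiated_rate) are hypothetical placeholders.

```python
# Minimal sketch: benchmark payer negotiated rates per CPT code within a zip code.
# The input file and its columns are hypothetical placeholders.
import pandas as pd

rates = pd.read_csv("negotiated_rates.csv")

# Median negotiated rate per payer for each CPT code within each zip code.
benchmark = (
    rates.groupby(["zip_code", "cpt_code", "payer"], as_index=False)["negotiated_rate"]
         .median()
)

# Compare each payer to the market median for the same CPT code and zip code.
benchmark["market_median"] = benchmark.groupby(["zip_code", "cpt_code"])["negotiated_rate"].transform("median")
benchmark["pct_of_market"] = 100 * benchmark["negotiated_rate"] / benchmark["market_median"]

print(benchmark.sort_values("pct_of_market", ascending=False).head())
```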
Posted 1 week ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
GCP Senior Data Engineer
Chennai, India

A skilled data engineering professional with 5 years of experience in GCP BigQuery and Oracle PL/SQL, specializing in designing and implementing end-to-end batch data processes in the Google Cloud ecosystem, with strong hands-on expertise in the following areas.

Core Skills & Tools:
Mandatory: GCP, BigQuery
Additional Tools: GCS, DataFlow, Cloud Composer, Pub/Sub, GCP Storage, Google Analytics Hub
Nice to Have: Apache Airflow, GCP DataProc, GCP DMS, Python

Technical Proficiency:
Expert in BigQuery, BQL, and DBMS. Well-versed in Linux and Python scripting. Skilled in Terraform for GCP infrastructure automation. Proficient in CI/CD tools such as GitHub, Jenkins, and Nexus. Experience with GCP orchestration tools: Cloud Composer, DataFlow, and Pub/Sub.

Additional Strengths:
Strong communication and collaboration skills. Capable of building scalable, automated cloud-based solutions. Able to work across both data engineering and DevOps environments.

This profile is well-suited for roles involving cloud-based data architecture, automation, and pipeline orchestration within the GCP environment.
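To illustrate the kind of batch BigQuery work this profile describes, here is a minimal sketch using the google-cloud-bigquery Python client; the project, dataset, and table names are hypothetical placeholders.

```python
# Minimal sketch: run a batch aggregation in BigQuery and write the result to a table.
# Project, dataset, and table names are illustrative placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

sql = """
    SELECT order_date, SUM(amount) AS total_amount
    FROM `example-project.sales.orders`
    GROUP BY order_date
"""

job_config = bigquery.QueryJobConfig(
    destination="example-project.sales.daily_totals",
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)

query_job = client.query(sql, job_config=job_config)
query_job.result()  # Block until the batch job finishes.
print(f"Wrote results to {query_job.destination}")
```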
Posted 1 week ago
12.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Title: Data Platform Operations Lead
Preferred Location: Bangalore/Hyderabad, India
Full Time/Part Time: Full Time

Build a career with confidence. Carrier Global Corporation, a global leader in intelligent climate and energy solutions, is committed to creating solutions that matter for people and our planet for generations to come. From the beginning, we've led in inventing new technologies and entirely new industries. Today, we continue to lead because we have a world-class, diverse workforce that puts the customer at the center of everything we do.

Role Responsibilities

Platform Development & Enablement:
Build and maintain scalable, modular services and frameworks for ELT pipelines, data lakehouse processing, integration orchestration, and infrastructure provisioning. Enable self-service capabilities for data engineers, platform operators, and integration teams through tools, documentation, and reusable patterns. Lead the platform architecture and development of core components such as data pipelines, observability tooling, infrastructure as code (IaC), and DevOps automation.

Technical Leadership:
Champion platform-first thinking, identifying common needs and abstracting solutions into shared services that reduce duplication and accelerate delivery. Own the technical roadmap for platform capabilities across domains such as Apache Iceberg on S3, AWS Glue, Airflow/MWAA, Kinesis, CDK, and Kubernetes-based services. Promote design patterns that support real-time and batch processing, schema evolution, data quality, and integration at scale.

Collaboration & Governance:
Collaborate with Data Engineering, Platform Operations, and Application Integration leaders to ensure consistency, reliability, and scalability across the platform. Contribute to FinOps and data governance initiatives by embedding controls and observability into the platform itself. Work with Architecture and Security to align with cloud, data, and compliance standards.

Role Purpose
12+ years of experience in software or data platform engineering, with 2+ years in a team leadership or management role. Strong hands-on expertise with AWS cloud services (e.g., Glue, Kinesis, S3), data lakehouse architectures (Iceberg), and orchestration tools (Airflow, Step Functions). Experience developing infrastructure as code using AWS CDK, Terraform, or CloudFormation. Proven ability to design and deliver internal platform tools, services, or libraries that enable cross-functional engineering teams. Demonstrated expertise in Python for building internal tools, automation scripts, and platform services that support ELT, orchestration, and infrastructure provisioning workflows. Proven experience leading DevOps teams and implementing CI/CD pipelines using tools such as GitHub Actions, CircleCI, or AWS CodePipeline to support rapid, secure, and automated delivery of platform capabilities.

Minimum Requirements
Experience with Nexla, Kafka, Spark, or Snowflake. Familiarity with data mesh or product-based data architecture principles. Track record of promoting DevOps, automation, and CI/CD best practices across engineering teams. AWS certifications or equivalent experience preferred.

Benefits
We are committed to offering competitive benefits programs for all of our employees and enhancing our programs when necessary.
Have peace of mind and body with our health insurance.
Make yourself a priority with flexible schedules and leave policy.
Drive forward your career through professional development opportunities.
Achieve your personal goals with our Employee Assistance Program.

Our commitment to you
Our greatest assets are the expertise, creativity and passion of our employees. We strive to provide a great place to work that attracts, develops and retains the best talent, promotes employee engagement, fosters teamwork and ultimately drives innovation for the benefit of our customers. We strive to create an environment where you feel that you belong, with diversity and inclusion as the engine to growth and innovation. We develop and deploy best-in-class programs and practices, providing enriching career opportunities, listening to employee feedback and always challenging ourselves to do better. This is The Carrier Way.

Join us and make a difference. Apply Now!

Carrier is An Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or veteran status, age or any other federally protected class.

Job Applicant's Privacy Notice: Click on this link to read the Job Applicant's Privacy Notice
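As a hedged illustration of the infrastructure-as-code responsibilities described in this posting, the following is a minimal AWS CDK (Python) sketch that provisions an S3 bucket for a lakehouse; the stack and bucket names are hypothetical placeholders, not Carrier's actual platform code.

```python
# Minimal AWS CDK (Python) sketch: provision the S3 bucket backing a lakehouse.
# Stack and bucket names are illustrative placeholders.
from aws_cdk import App, RemovalPolicy, Stack
from aws_cdk import aws_s3 as s3
from constructs import Construct


class LakehouseStorageStack(Stack):
    """Provisions versioned, encrypted object storage for an Iceberg-style lakehouse."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        s3.Bucket(
            self,
            "LakehouseBucket",
            versioned=True,
            encryption=s3.BucketEncryption.S3_MANAGED,
            removal_policy=RemovalPolicy.RETAIN,  # keep data if the stack is deleted
        )


app = App()
LakehouseStorageStack(app, "LakehouseStorageStack")
app.synth()
```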
Posted 1 week ago
12.0 - 17.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Join Amgen’s Mission of Serving Patients

At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas – Oncology, Inflammation, General Medicine, and Rare Disease – we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

Principal IS Bus Sys Analyst, Neural Nexus

What You Will Do
Let’s do this. Let’s change the world. In this vital role you will support the delivery of emerging AI/ML capabilities within the Commercial organization as a leader in Amgen's Neural Nexus program. We seek a technology leader with a passion for innovation and a collaborative working style that partners effectively with business and technology leaders. Are you interested in building a team that consistently delivers business value in an agile model using technologies such as AWS, Databricks, Airflow, and Tableau? Come join our team!

Roles & Responsibilities:
Establish an effective engagement model to collaborate with the Commercial Data & Analytics (CD&A) team to help realize business value through the application of commercial data and emerging AI/ML technologies.
Serve as the technology product owner for the launch and growth of the Neural Nexus product teams focused on data connectivity, predictive modeling, and fast-cycle value delivery for commercial teams.
Lead and mentor junior team members to deliver on the needs of the business.
Interact with business clients and technology management to create technology roadmaps, build business cases, and drive DevOps to achieve the roadmaps.
Help to mature Agile operating principles through deployment of creative and consistent practices for user story development, robust testing and quality oversight, and focus on user experience.
Become the subject matter expert in emerging technology capabilities by researching and implementing new tools and features, and internal and external methodologies.
Build expertise and domain knowledge in a wide variety of Commercial data domains.
Provide input for governance discussions and help prepare materials to support executive alignment on technology strategy and investment.

What We Expect Of You
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
Doctorate degree / Master's degree / Bachelor's degree and 12 to 17 years of information systems experience.
Excellent problem-solving skills and a passion for tackling complex challenges in data and analytics with technology.
Experience leading data and analytics teams in a Scaled Agile Framework (SAFe).
Good interpersonal skills, good attention to detail, and ability to influence based on data and business value.
Ability to build compelling business cases with accurate cost and effort estimations.
Experience writing user requirements and acceptance criteria in agile project management systems such as Jira.
Ability to explain sophisticated technical concepts to non-technical clients.
Good understanding of sales and incentive compensation value streams.

Technical Skills:
ETL tools: Experience with ETL tools such as Databricks, Redshift, or equivalent cloud-based databases.
Big Data, Analytics, Reporting, Data Lake, and Data Integration technologies.
S3 or equivalent storage system.
AWS (or similar cloud-based platforms).
BI tools (Tableau and Power BI preferred).

Preferred Qualifications:
Jira Align and Confluence experience.
Experience with DevOps, Continuous Integration, and Continuous Delivery methodology.
Understanding of software systems strategy, governance, and infrastructure.
Experience in managing product features for PI planning and developing product roadmaps and user journeys.
Familiarity with low-code, no-code test automation software.
Technical thought leadership.

Soft Skills:
Able to work effectively across multiple geographies (primarily India, Portugal, and the United States) under minimal supervision.
Demonstrated proficiency in written and verbal communication in the English language.
Skilled in providing oversight and mentoring team members; demonstrated ability to delegate work effectively.
Intellectual curiosity and the ability to question partners across functions.
Ability to prioritize successfully based on business value.
High degree of initiative and self-motivation.
Ability to manage multiple priorities successfully across virtual teams.
Team-oriented, with a focus on achieving team goals.
Strong presentation and public speaking skills.

What You Can Expect Of Us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
Posted 1 week ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description
We have an opportunity to impact your career and provide an adventure where you can push the limits of what's possible. As a Senior Manager of Software Engineering at JPMorgan Chase within the Consumer and Community Banking – Data Technology team, you lead a technical area and drive impact within teams, technologies, and projects across departments. Utilize your in-depth knowledge of software, applications, technical processes, and product management to drive multiple complex projects and initiatives, while serving as a primary decision maker for your teams and a driver of innovation and solution delivery.

Job Responsibilities
Leads the data publishing and processing platform engineering team to achieve business and technology objectives.
Accountable for technical tool evaluation, platform builds, and design and delivery outcomes.
Carries governance accountability for coding decisions, control obligations, and measures of success such as cost of ownership, maintainability, and portfolio operations.
Delivers technical solutions that can be leveraged across multiple businesses and domains.
Influences peer leaders and senior stakeholders across the business, product, and technology teams.
Champions the firm’s culture of diversity, equity, inclusion, and respect.

Required Qualifications, Capabilities, And Skills
Formal training or certification on software engineering concepts and 5+ years of applied experience, plus 2+ years of experience leading technologists to manage and solve complex technical items within your domain of expertise.
Expertise in programming languages such as Python and Java, with a strong understanding of cloud services including AWS, EKS, SNS, SQS, CloudFormation, Terraform, and Lambda.
Proficient in messaging services like Kafka and big data technologies such as Hadoop, Spark SQL, and PySpark.
Experienced with Teradata, Snowflake, or other RDBMS databases, with a solid understanding of Teradata or Snowflake.
Advanced experience in leading technologists to manage, anticipate, and solve complex technical challenges, along with experience in developing and recognizing talent within cross-functional teams.
Experience in leading a product as a Product Owner or Product Manager, with practical cloud-native experience.

Preferred Qualifications, Capabilities, And Skills
Previous experience leading or building Platforms & Frameworks teams.
Skilled in orchestration tools like Airflow (preferable) or Control-M, and experienced in continuous integration and continuous deployment (CI/CD) using Jenkins.
Experience with observability tools, frameworks, and platforms.
Experience with large-scale, secure, distributed, complex architecture and design.
Experience with nonfunctional topics like security, performance, and code and design best practices.
AWS Certified Solutions Architect, AWS Certified Developer, or similar certification is a big plus.
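To illustrate one of the serverless building blocks this posting mentions (Lambda with SQS), here is a minimal Python handler sketch; the message payload shape is a hypothetical placeholder, not JPMorgan's actual design.

```python
# Minimal sketch of a Python Lambda handler processing records from an SQS
# event source mapping. The payload shape is an illustrative placeholder.
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)


def handler(event, context):
    """Process the batch of SQS records delivered in a single invocation."""
    processed = 0
    for record in event.get("Records", []):
        body = json.loads(record["body"])
        # Placeholder business logic: log the message identifier.
        logger.info("Processing message %s", body.get("message_id"))
        processed += 1
    return {"processed": processed}
```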
Posted 1 week ago
3.0 years
0 Lacs
Delhi, Delhi
On-site
Job Description: Hadoop & ETL Developer
Location: Shastri Park, Delhi
Experience: 3+ years
Education: B.E./B.Tech/MCA/MSc (IT or CS)/MS
Salary: Up to 80k (final offer depends on the interview and experience)
Notice Period: Immediate joiners to those with up to 20 days' notice
Note: Only candidates from Delhi/NCR will be preferred.

Job Summary:
We are looking for a Hadoop & ETL Developer with strong expertise in big data processing, ETL pipelines, and workflow automation. The ideal candidate will have hands-on experience in the Hadoop ecosystem, including HDFS, MapReduce, Hive, Spark, HBase, and PySpark, as well as expertise in real-time data streaming and workflow orchestration. This role requires proficiency in designing and optimizing large-scale data pipelines to support enterprise data processing needs.

Key Responsibilities
Design, develop, and optimize ETL pipelines leveraging Hadoop ecosystem technologies.
Work extensively with HDFS, MapReduce, Hive, Sqoop, Spark, HBase, and PySpark for data processing and transformation.
Implement real-time and batch data ingestion using Apache NiFi, Kafka, and Airbyte.
Develop and manage workflow orchestration using Apache Airflow.
Perform data integration across structured and unstructured data sources, including MongoDB and Hadoop-based storage.
Optimize MapReduce and Spark jobs for performance, scalability, and efficiency.
Ensure data quality, governance, and consistency across the pipeline.
Collaborate with data engineering teams to build scalable and high-performance data solutions.
Monitor, debug, and enhance big data workflows to improve reliability and efficiency.

Required Skills & Experience:
3+ years of experience in the Hadoop ecosystem (HDFS, MapReduce, Hive, Sqoop, Spark, HBase, PySpark).
Strong expertise in ETL processes, data transformation, and data warehousing.
Hands-on experience with Apache NiFi, Kafka, Airflow, and Airbyte.
Proficiency in SQL and handling structured and unstructured data.
Experience with NoSQL databases like MongoDB.
Strong programming skills in Python or Scala for scripting and automation.
Experience in optimizing Spark and MapReduce jobs for high-performance computing.
Good understanding of data lake architectures and big data best practices.

Preferred Qualifications
Experience in real-time data streaming and processing.
Familiarity with Docker/Kubernetes for deployment and orchestration.
Strong analytical and problem-solving skills with the ability to debug and optimize data workflows.

If you have a passion for big data, ETL, and large-scale data processing, we’d love to hear from you!

Job Types: Full-time, Contractual / Temporary
Pay: From ₹400,000.00 per year
Work Location: In person
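As an illustration of the Airflow orchestration this role calls for, the following is a minimal DAG sketch with an ingest-then-transform dependency; the task callables and schedule are hypothetical placeholders, written against the Airflow 2.x API.

```python
# Minimal Airflow DAG sketch: a daily ingest-then-transform pipeline.
# Task bodies and the schedule are illustrative placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest_from_kafka(**context):
    # Placeholder: pull a batch of events and land them in HDFS/object storage.
    print("ingesting batch for", context["ds"])


def transform_with_spark(**context):
    # Placeholder: submit a PySpark job that builds Hive tables from the batch.
    print("transforming batch for", context["ds"])


with DAG(
    dag_id="daily_etl_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+ style; older versions use schedule_interval
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest", python_callable=ingest_from_kafka)
    transform = PythonOperator(task_id="transform", python_callable=transform_with_spark)

    ingest >> transform
```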
Posted 1 week ago
2.0 - 4.0 years
25 - 30 Lacs
Pune
Work from Office
Rapid7 is looking for a Data Engineer to join our dynamic team and embark on a rewarding career journey. Responsibilities include:
Liaising with coworkers and clients to elucidate the requirements for each task.
Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed.
Reformulating existing frameworks to optimize their functioning.
Testing such structures to ensure that they are fit for use.
Preparing raw data for manipulation by data scientists.
Detecting and correcting errors in your work.
Ensuring that your work remains backed up and readily accessible to relevant coworkers.
Remaining up to date with industry standards and technological advancements that will improve the quality of your outputs.
Posted 1 week ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About 75F
75F is a global leader in IoT-based Building Automation & Energy Efficiency solutions for commercial buildings. We are headquartered in the US, with offices across India, Singapore, and the Middle East. Our investors, led by Bill Gates's Breakthrough Energy Ventures, include some of the most prominent names in climate and technology. As a result of our dedicated efforts towards climate action, 75F has earned recognition, securing a spot on the Global Cleantech 100 list for the second consecutive year in 2022.

In 2016, 75F ventured into India, and in 2019, we entered the Singapore market. We have made significant inroads, emerging as a prominent player in the APAC region, and secured pivotal clients like Flipkart, Mercedes Benz, WeWork, and Adobe. Our strategic partnerships with Tata Power and Singapore Power have spread the message of energy efficiency, climate tech and better automation through IoT, ML, AI, wireless technology and the power of the Cloud. Through these partnerships, the company has earned the trust of several customers of repute such as Hiranandani Hospital, Dmall, Spar and Fern Meluha in India, and Singapore Institute of Technology, Labrador Real Estate, Instant Group, DBS Bank, and Amex in Singapore.

Our cutting-edge technology and exceptional results have garnered numerous awards, including recognition from entities like Clean Energy Trust, Bloomberg NEF, Cleantech 100, Realty+ Prop-Tech Brand of the Year 2022-2023 & 2024, ESG Award Customer Excellence 2023, Frost & Sullivan APAC Smart Energy Management Technology Leadership Award 2021, CMO Asia 2022 Most Preferred Brand in Real Estate: HVAC, and the National Energy Efficiency Innovation Award by the Ministry of Power 2023.

We are looking for passionate individuals who are committed not just to personal growth but to solving some of the world's toughest challenges. Opportunities exist for working across the various locations and functions in which the Company is present. Continuing education is valued. We prize extreme ownership and tenacity above all other things. Finally, we believe we can't build a new future for the planet without first building a diverse and inclusive team, so we hire the best candidates we can based on an evaluation of their potential, not just their experience.

Role: Sr Application Engineer/Application Engineer
Experience: 3-6 years

Key Responsibilities
Design and implement Control Design Documents for projects.
Create single-line diagrams.
Extract and compile BOQs.
Analyze MEP drawings to identify HVAC equipment, dampers, and sensors.
Review control specifications and sequences of operation.
Prepare initial review documents and generate Requests for Information (RFIs).
Develop Bills of Materials and select appropriate sensors, control valves, dampers, airflow stations, controllers, etc.
Design control and interlock wiring for devices and controllers, including terminations.
Prepare I/O summaries and design BMS network architecture.
Follow established processes and guidelines to execute projects within defined timelines.
Experience in the engineering, installation, and commissioning of HVAC and BMS.

Required Knowledge, Skills & Experience
Bachelor's or Master's degree in Instrumentation, Electrical, Electronics, or Electronics & Communication Engineering.
Strong understanding of HVAC systems, including chilled water systems, cooling towers, primary/secondary pumping systems, hot water systems, and various air handling units (AHUs), fan coil units (FCUs), and VAV systems.
In-depth knowledge of BMS architecture, including operator workstations, supervisory and DDC controllers, sensors, and actuators.
Familiarity with communication protocols such as BACnet, Modbus, and others.
Proficient in wiring starters, field devices, safety interlocks, and control panels.
Demonstrated proficiency and hands-on experience with AutoCAD, including customization using LISP routines; experience using LISP to automate tasks is good to have.
Experience with HVAC control sequences.
Familiarity with 3D software.
Fast learner with strong analytical and problem-solving abilities.
Excellent verbal and written communication skills.

Benefits
American MNC culture.
Being a part of one of the world's leading Climate Tech companies and working with a team of 200 passionate disruptors.

Diversity & Inclusion
Our dedication to diversity and inclusion starts with our values. We lead with integrity and purpose, focusing on the future and aligning with our customers' vision for success. Our High-Performance Culture ensures that we have the best talent, that is highly engaged and eager to innovate.
Posted 1 week ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Role Summary
At Pfizer we make medicines and vaccines that change patients' lives, with a global reach of over 1.1 billion patients. Pfizer Digital is the organization charged with winning the digital race in the pharmaceutical industry. We apply our expertise in technology, innovation, and our business to support Pfizer in this mission. Our team, the GSES Team, is passionate about using software and data to improve manufacturing processes. We partner with other Pfizer teams focused on:
Manufacturing throughput efficiency and increased manufacturing yield
Reduction of end-to-end cycle time and increase of percent release attainment
Increased quality control lab throughput and more timely closure of quality assurance investigations
Increased manufacturing yield of vaccines
More cost-effective network planning decisions and lowered inventory costs

In the Senior Associate, Integration Engineer role, you will help implement data capabilities within the team to enable advanced, innovative, and scalable database services and data platforms. You will utilize modern Data Engineering principles and techniques to help the team better deliver value in the form of AI, analytics, business intelligence, and operational insights. You will be on a team responsible for executing on technical strategies, designing architecture, and developing solutions to enable the Digital Manufacturing organization to deliver value to our partners across Pfizer. Most of all, you'll use your passion for data to help us deliver real value to our global network of manufacturing facilities, changing patient lives for the better!

Role Responsibilities
The Senior Associate, Integration Engineer's responsibilities include, but are not limited to:
Maintain Database Service Catalogues
Build, maintain and optimize data pipelines
Support cross-functional teams with data-related tasks
Troubleshoot data-related issues, identify root causes, and implement solutions in a timely manner
Automate builds and deployments of database environments
Support development teams in database-related troubleshooting and optimization
Document technical specifications, data flows, system architectures and installation instructions for the provided services
Collaborate with stakeholders to understand data requirements and translate them into technical solutions
Participate in relevant SAFe ceremonies and meetings

Basic Qualifications
Education: Bachelor's degree or Master's degree in Computer Science, Data Engineering, Data Science, or a related discipline
Minimum 3 years of experience in Data Engineering, Data Science, Data Analytics or similar fields
Broad understanding of data engineering techniques and technologies, including at least 3 of the following:
PostgreSQL (or similar SQL database(s))
Neo4j/Cypher
ETL (Extract, Transform, and Load) processes
Airflow or other data pipeline technology
Kafka (distributed event streaming platform)
Proficient or better in a scripting language, ideally Python
Experience tuning and optimizing database performance
Knowledge of modern data integration patterns
Strong verbal and written communication skills and ability to work in a collaborative team environment, spanning global time zones
Proactive approach and goal-oriented mindset
Self-driven approach to research and problem solving with proven analytical skills
Ability to manage tasks across multiple projects at the same time

Preferred Qualifications
Pharmaceutical experience
Experience working with Agile delivery methodologies (e.g., Scrum)
Experience with Graph Databases
Experience with Snowflake
Familiarity with cloud platforms such as AWS
Experience with containerization technologies such as Docker and orchestration tools like Kubernetes

Physical/Mental Requirements
None

Non-standard Work Schedule, Travel Or Environment Requirements
The job will require working with global teams and applications. A flexible working schedule will be needed on occasion to accommodate planned agile sprint planning and system releases, as well as unplanned/on-call level 3 support. Travel requirements are project based; the estimated percentage of travel to support project and departmental activities is less than 10%.

Work Location Assignment: Hybrid

Pfizer is an equal opportunity employer and complies with all applicable equal employment opportunity legislation in each jurisdiction in which it operates.

Information & Business Tech
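As a hedged illustration combining several technologies named in the qualifications (Kafka, PostgreSQL, Python), here is a minimal consume-and-upsert sketch; the topic, table, and connection details are hypothetical placeholders, not Pfizer's actual systems.

```python
# Minimal sketch: consume JSON events from Kafka and upsert them into PostgreSQL.
# Topic, table, and connection details are illustrative placeholders.
import json

import psycopg2
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "lab-results",                       # hypothetical topic name
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

conn = psycopg2.connect("dbname=manufacturing user=etl password=secret host=localhost")
conn.autocommit = True

UPSERT = """
    INSERT INTO lab_results (sample_id, result_value, recorded_at)
    VALUES (%(sample_id)s, %(result_value)s, %(recorded_at)s)
    ON CONFLICT (sample_id) DO UPDATE SET
        result_value = EXCLUDED.result_value,
        recorded_at = EXCLUDED.recorded_at;
"""

with conn.cursor() as cur:
    for message in consumer:   # blocks and processes events as they arrive
        cur.execute(UPSERT, message.value)
```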
Posted 1 week ago
8.0 - 12.0 years
25 - 35 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Hi, Greetings from Encora Innovation Labs Pvt Ltd!

Encora is looking for an AWS DevOps Lead with 8-12 years' experience in AWS services, Python, and Data Engineering.

Important Note: We are looking for an immediate joiner for this role! If you're on a 30/60/90-day notice period, this opportunity may not be the right fit at the moment. We truly appreciate your understanding!

Please find below the detailed job description and the company profile for your better understanding.

Position: AWS DevOps Lead
Experience: 8-12 years
Job Location: Bangalore / Chennai / Pune / Hyderabad / Noida
Position Type: Full time
Qualification: Any graduate
Work Mode: Hybrid

Technical Skills:
AWS services - EC2, S3, Lambda, IAM; IaC - Terraform, CloudFormation; Git, Jenkins, GitHub
Programming - Python, SQL and Spark
Data Engineering - Data pipelines, Airflow orchestration

Job Summary - Responsibilities and Duties
• Monitors and reacts to alerts in real time and triages issues
• Executes runbook instructions to resolve routine problems and user requests
• Escalates complex or unresolved issues to L2
• Documents new findings to improve runbooks and the knowledge base
• Participates in shift handovers to ensure seamless coverage
• Participates in ceremonies to share operational status

Education and Experience:
B.E. in Computer Science Engineering, or an equivalent technical degree with strong computer science fundamentals
Experience in an Agile software development environment
Excellent communication and collaboration skills with the ability to work in a team-oriented environment

Skills required:
System Administration: Basic troubleshooting, monitoring, and operational support.
Cloud Platforms: Familiarity with AWS services (e.g., EC2, S3, Lambda, IAM).
Infrastructure as Code (IaC): Exposure to Terraform, CloudFormation, or similar tools.
CI/CD Pipelines: Understanding of Git, Jenkins, GitHub Actions, or similar tools.
Linux Fundamentals: Command-line proficiency, scripting, process management.
Programming & Data: Python, SQL, and Spark (nice to have, but not mandatory).
Data Engineering Awareness: Understanding of data pipelines, ETL processes, and workflow orchestration (e.g., Airflow).
DevOps Practices: Observability, logging, alerting, and automation.

Communication:
Facilitates team and stakeholder meetings effectively
Resolves and/or escalates issues in a timely fashion
Understands how to communicate difficult/sensitive information tactfully
Astute cross-cultural awareness and experience in working with international teams (especially the US)

You should be speaking to us if:
You are looking for a career that challenges you to bring your knowledge and expertise to bear for designing, implementing and running a world-class IT organization
You like a job that brings a great deal of autonomy and decision-making latitude
You like working in an environment that is young, innovative and well established
You like to work in an organization that takes decisions quickly, where you can make an impact

Why Encora Innovation Labs?
Are you looking for a career that challenges you to bring your knowledge and expertise to bear for designing, implementing and running a world-class IT Product Engineering organization? Encora Innovation Labs is a world-class SaaS technology Product Engineering company focused on transformational outcomes for leading-edge tech companies. Encora partners with fast-growing tech companies who are driving innovation and growth within their industries.
Who We Are: Encora is devoted to making the world a better place for clients, for our communities and for our people.

What We Do: We drive transformational outcomes for clients through our agile methods, micro-industry vertical expertise, and extraordinary people. We provide hi-tech, differentiated services in next-gen software engineering solutions including Big Data, Analytics, Machine Learning, IoT, Embedded, Mobile, AWS/Azure Cloud, UI/UX, and Test Automation to some of the leading technology companies in the world. Encora specializes in Data Governance, Digital Transformation, and Disruptive Technologies, helping clients to capitalize on their potential efficiencies. Encora has been an instrumental partner in the digital transformation journey of clients across a broad spectrum of industries: Health Tech, Fin Tech, Hi-Tech, Security, Digital Payments, Education Publication, Travel, Real Estate, Supply Chain and Logistics, and Emerging Technologies. Encora has successfully developed and delivered more than 2,000 products over the last few years and has led the transformation of a number of Digital Enterprises. Encora has over 25 offices and innovation centers in 20+ countries worldwide. Our international network ensures that clients receive seamless access to the complete range of our services and the expert knowledge and skills of professionals globally. Encora has global delivery centers and offices in the United States, Costa Rica, Mexico, United Kingdom, India, Malaysia, Singapore, Indonesia, Hong Kong, Philippines, Mauritius, and the Cayman Islands.

Encora is proud to be certified as a Great Place to Work in India.

Please visit us at
Website: encora.com
LinkedIn: EncoraInc
Facebook: @EncoraInc
Instagram: @EncoraInc

Looking to build and lead in world-class IT Product Engineering services? Your next career move starts here. Please share your updated resume to ravi.sankar@encora.com

Regards,
Ravisankar P
Talent Acquisition
+91 9994599336
Ravi.sankar@encora.com
encora.com
Posted 1 week ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About The Role
We are looking for a Senior Data Engineer to lead the design and implementation of scalable data infrastructure and engineering practices. This role will be critical in laying down the architectural foundations for advanced analytics and AI/ML use cases across global business units. You’ll work closely with the Data Science Lead, Product Manager, and other cross-functional stakeholders to ensure data systems are robust, secure, and future-ready.

Key Responsibilities
Architect and implement end-to-end data infrastructure including ingestion, transformation, storage, and access layers to support enterprise-scale analytics and machine learning.
Define and enforce data engineering standards, design patterns, and best practices across the CoE.
Lead the evaluation and selection of tools, frameworks, and platforms (cloud, open source, commercial) for scalable and secure data processing.
Work with data scientists to enable efficient feature extraction, experimentation, and model deployment pipelines.
Design for real-time and batch processing architectures, including support for streaming data and event-driven workflows.
Own the data quality, lineage, and governance frameworks to ensure trust and traceability in data pipelines.
Collaborate with central IT, data platform teams, and business units to align on data strategy, infrastructure, and integration patterns.
Mentor and guide junior engineers as the team expands, creating a culture of high performance and engineering excellence.

Qualifications
10+ years of hands-on experience in data engineering, data architecture, or platform development.
Strong expertise in building distributed data pipelines using tools like Spark, Kafka, Airflow, or equivalent orchestration frameworks.
Deep understanding of data modeling, data lake/lakehouse architectures, and scalable data warehousing (e.g., Snowflake, BigQuery, Redshift).
Advanced proficiency in Python and SQL, with working knowledge of Java or Scala preferred.
Strong experience working on cloud-native data architectures (AWS, GCP, or Azure) including serverless, storage, and compute optimization.
Proven experience in architecting ML/AI-ready data environments, supporting MLOps pipelines and production-grade data flows.
Familiarity with DevOps practices, CI/CD for data, and infrastructure-as-code (e.g., Terraform) is a plus.
Excellent problem-solving skills and the ability to communicate technical solutions to non-technical stakeholders.
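To illustrate the real-time leg of the architectures described above, the following is a minimal Spark Structured Streaming sketch that reads from Kafka and appends to a Delta table; the broker, topic, schema, and paths are hypothetical placeholders, and it assumes a Delta-enabled Spark environment.

```python
# Minimal sketch: stream events from Kafka into a Delta table with Spark.
# Broker, topic, schema, and paths are illustrative placeholders.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("events-stream").getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("occurred_at", TimestampType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "product-events")
    .load()
    .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "s3://example-bucket/checkpoints/product-events")
    .outputMode("append")
    .start("s3://example-bucket/lake/product_events")
)
query.awaitTermination()
```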
Posted 1 week ago
5.0 years
0 Lacs
Pune, Maharashtra, India
Remote
HackerOne is a global leader in offensive security solutions. Our HackerOne Platform combines AI with the ingenuity of the largest community of security researchers to find and fix security, privacy, and AI vulnerabilities across the software development lifecycle. The platform offers bug bounty, vulnerability disclosure, pentesting, AI red teaming, and code security. We are trusted by industry leaders like Amazon, Anthropic, Crypto.com, General Motors, GitHub, Goldman Sachs, Uber, and the U.S. Department of Defense. HackerOne was named a Best Workplace for Innovators by Fast Company in 2023 and a Most Loved Workplace for Young Professionals in 2024.

HackerOne Values
HackerOne is dedicated to fostering a strong and inclusive culture. HackerOne is Customer Obsessed and prioritizes customer outcomes in our decisions and actions. We Default to Disclosure by operating with transparency and integrity, ensuring trust and accountability. Employees, researchers, customers, and partners Win Together by fostering empowerment, inclusion, respect, and accountability.

Data Engineer, Enterprise Data & AI
Location: Pune, India
This role requires the candidate to be based in Pune and work from an office 4 days a week. Please only apply if you're okay with these requirements.

Position Summary
HackerOne is seeking a Data Engineer, Enterprise Data & AI to join our DataOne team. You will lead the discovery, architecture, and development of high-impact, high-performance, scalable source-of-truth data marts and data products. Joining our growing, distributed organization, you'll be instrumental in building the foundation that powers HackerOne's one source of truth. As a Data Engineer, Enterprise Data & AI, you'll be able to lead challenging projects and foster collaboration across the company. Leveraging your extensive technological expertise, domain knowledge, and dedication to business objectives, you'll drive innovation to propel HackerOne forward.

DataOne democratizes source-of-truth information and insights to enable all Hackeronies to ask the right questions, tell cohesive stories, and make rigorous decisions so that HackerOne can delight our Customers and empower the world to build a safer internet. The future is one where every Hackeronie is a catalyst for positive change, driving data-informed innovation while fostering our culture of transparency, collaboration, integrity, excellence, and respect for all.

What You Will Do
Your first 30 days will focus on getting to know HackerOne. You will join your new squad and begin onboarding - learn our technology stack (Python, Airflow, Snowflake, DBT, Meltano, Fivetran, Looker, AWS), and meet our Hackeronies.
Within 60 days, you will deliver impact on a company level with consistent contribution to high-impact, high-performance, scalable source-of-truth data marts and data products.
Within 90 days, you will drive the continuous evolution and innovation of data at HackerOne, identifying and leading new initiatives. Additionally, you will foster cross-departmental collaboration to enhance these efforts.
Deliver impact by developing the roadmap for continuously and iteratively launching high-impact, high-performance, scalable source-of-truth data marts and data products, and by leading and delivering cross-functional product and technical initiatives.
Be a technical paragon and cross-functional force multiplier, autonomously determining where to apply focus, contributing at all levels, elevating your squad, and designing solutions to ambiguous business challenges in a fast-paced, early-stage environment.
Drive continuous evolution and innovation, the adoption of emerging technologies, and the implementation of industry best practices.
Champion a higher bar for discoverability, usability, reliability, timeliness, consistency, validity, uniqueness, simplicity, completeness, integrity, security, and compliance of information and insights across the company.
Provide technical leadership and mentorship, fostering a culture of continuous learning and growth.

Minimum Qualifications
5+ years of experience as an Analytics Engineer, Business Intelligence Engineer, Data Engineer, or similar role, with a proven track record of launching source-of-truth data marts.
5+ years of experience building and optimizing data pipelines, products, and solutions.
Must be flexible to align with occasional evening meetings in US time zones.
Extensive experience working with various data technologies and tools such as Airflow, Snowflake, Meltano, Fivetran, DBT, and AWS.
Strong proficiency in at least one data programming language such as Python or R.
Expert in SQL for data manipulation in a fast-paced work environment.
Expert in using Git for version control.
Expert in creating compelling data stories using data visualization tools such as Looker, Tableau, Sigma, Domo, or PowerBI.
Proven track record of having substantial impact across the company, as well as externally for the company, demonstrating your ability to drive positive change and achieve significant results.
English fluency, excellent communication skills, and the ability to present data-driven narratives in verbal, presentation, and written formats.
Passion for working backwards from the Customer and empathy for business stakeholders.
Experience shaping the strategic vision for data.
Experience working with Agile and iterative development processes.

Preferred Qualifications
Experience working within and with data from business applications such as Salesforce, Clari, Gainsight, Workday, GitLab, Slack, or Freshservice.
Proven track record of driving innovation, adopting emerging technologies and implementing industry best practices.
Thrive on solving ambiguous problem statements in an early-stage environment.
Experience designing advanced data visualizations and data-rich interfaces in Figma or equivalent.

Compensation Bands: Pune, India ₹3.7M – ₹4.6M, Offers Equity

Job Benefits:
Health (medical, vision, dental), life, and disability insurance*
Equity stock options
Retirement plans
Paid public holidays and unlimited PTO
Paid maternity and parental leave
Leaves of absence (including caregiver leave and leave under CO's Healthy Families and Workplaces Act)
Employee Assistance Program
Flexible Work Stipend
*Eligibility may differ by country

We're committed to building a global team! For certain roles outside the United States, U.K., and the Netherlands, we partner with Remote.com as our Employer of Record (EOR). Visa/work permit sponsorship is not available. Employment at HackerOne is contingent on a background check.
HackerOne is an Equal Opportunity Employer in the terms and conditions of employment for all employees and job applicants without regard to race, color, religion, sex, sexual orientation, age, gender identity or gender expression, national origin, pregnancy, disability or veteran status, or any other protected characteristic as outlined by international, federal, state, or local laws. This policy applies to all HackerOne employment practices, including hiring, recruiting, promotion, termination, layoff, recall, leave of absence, compensation, benefits, training, and apprenticeship. HackerOne makes hiring decisions based solely on qualifications, merit, and business needs at the time. For US based roles only: Pursuant to the San Francisco Fair Chance Ordinance, all qualified applicants with arrest and conviction records will be considered for the position.
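For readers unfamiliar with the stack named in this posting (Airflow orchestrating Meltano loads and dbt models on Snowflake), the following is a minimal illustrative sketch, not HackerOne's actual pipeline; the DAG name, schedule, tap/target pair, and dbt selector are hypothetical placeholders.

```python
# Minimal Airflow DAG sketch: run an extract-load step, then rebuild dbt marts in Snowflake.
# All names, commands, and paths below are hypothetical placeholders.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

default_args = {
    "owner": "dataone",                      # hypothetical owner
    "retries": 2,
    "retry_delay": timedelta(minutes=5),
}

with DAG(
    dag_id="source_of_truth_daily",          # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args=default_args,
) as dag:
    # Pull raw data with a Meltano tap/target pair (placeholder plugin names).
    extract_load = BashOperator(
        task_id="meltano_extract_load",
        bash_command="meltano run tap-postgres target-snowflake",
    )

    # Rebuild the dbt mart layer on top of the freshly loaded raw tables.
    build_marts = BashOperator(
        task_id="dbt_build_marts",
        bash_command="dbt build --select marts.core --profiles-dir /opt/dbt",
    )

    extract_load >> build_marts
```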
Posted 1 week ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Position Title: GCP Data Engineer 34306 Job Type: Full-Time Work Mode: Hybrid Location: Chennai Budget: ₹18–20 LPA Notice Period: Immediate Joiners Preferred Role Overview We are seeking a proactive Full Stack Data Engineer with a strong focus on Google Cloud Platform (GCP) and data engineering tools. The ideal candidate will contribute to building analytics products supporting supply chain insights and will be responsible for developing cloud-based data pipelines, APIs, and user interfaces. The role demands high standards of software engineering, agile practices like Test-Driven Development (TDD), and experience in modern data architectures. Key Responsibilities Design, build, and deploy scalable data pipelines and analytics platforms using GCP tools like BigQuery, Dataflow, Dataproc, Data Fusion, and Cloud SQL. Implement and maintain Infrastructure as Code (IaC) using Terraform and CI/CD pipelines using Tekton. Develop robust APIs using Python, Java, and Spring Boot, and deliver frontend interfaces using Angular, React, or Vue. Build and support data integration workflows using Airflow, PySpark, and PostgreSQL. Collaborate with cross-functional teams in an Agile environment, leveraging Jira, paired programming, and TDD. Ensure cloud deployments are secure, scalable, and performant on GCP. Mentor team members and promote continuous learning, clean code practices, and Agile principles. Mandatory Skills GCP services: BigQuery, Dataflow, Dataproc, Data Fusion, Cloud SQL Programming: Python, Java, Spring Boot Frontend: Angular, React, Vue, TypeScript, JavaScript Data Orchestration: Airflow, PySpark DevOps/CI-CD: Terraform, Tekton, Jenkins Databases: PostgreSQL, Cloud SQL, NoSQL API development and integration Experience 5+ years in software/data engineering Minimum 1 year in GCP-based deployment and cloud architecture Education Bachelor’s or Master’s in Computer Science, Engineering, or related technical discipline Desired Traits Passion for clean, maintainable code Strong problem-solving skills Agile mindset with an eagerness to mentor and collaborate Skills: typescript,data fusion,terraform,java,spring boot,dataflow,data integration,cloud sql,javascript,bigquery,react,postgresql,nosql,vue,data,pyspark,dataproc,sql,cloud,angular,python,tekton,api development,gcp services,jenkins,airflow,gcp
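As an illustration of the BigQuery-based ETL work this posting describes, here is a minimal sketch using the google-cloud-bigquery client; the project, dataset, table, and column names are hypothetical placeholders, not the employer's actual schema.

```python
# Minimal sketch of a BigQuery transformation step with the google-cloud-bigquery client.
# Project, dataset, table, and column names are hypothetical placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-analytics-project")  # hypothetical project

# Aggregate raw supply-chain events into a daily summary table.
sql = """
    SELECT
        DATE(event_ts) AS event_date,
        plant_id,
        COUNT(*) AS shipment_count
    FROM `my-analytics-project.raw.shipment_events`
    GROUP BY event_date, plant_id
"""

job_config = bigquery.QueryJobConfig(
    destination="my-analytics-project.curated.daily_shipments",
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)

query_job = client.query(sql, job_config=job_config)  # starts the job
query_job.result()                                    # waits for completion
print(f"Query job {query_job.job_id} finished and refreshed the curated table")
```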
Posted 1 week ago
10.0 - 14.0 years
25 - 40 Lacs
Hyderabad
Work from Office
Face-to-face interview on 2nd August 2025 in Hyderabad. Apply here - Job description - https://careers.ey.com/job-invite/1604461/ Experience Required: Minimum 8 years Job Summary: We are seeking a skilled Data Engineer with a strong background in data ingestion, processing, and storage. The ideal candidate will have experience working with various data sources and technologies, particularly in a cloud environment. You will be responsible for designing and implementing data pipelines, ensuring data quality, and optimizing data storage solutions. Key Responsibilities: Design, develop, and maintain scalable data pipelines for data ingestion and processing using Python, Spark, and AWS services. Work with on-prem Oracle databases, batch files, and Confluent Kafka for data sourcing. Implement and manage ETL processes using AWS Glue and EMR for batch and streaming data. Develop and maintain data storage solutions using Medallion Architecture in S3, Redshift, and Oracle. Collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs. Monitor and optimize data workflows using Airflow and other orchestration tools. Ensure data quality and integrity throughout the data lifecycle. Implement CI/CD practices for data pipeline deployment using Terraform and other tools. Utilize monitoring and logging tools such as CloudWatch, Datadog, and Splunk to ensure system reliability and performance. Communicate effectively with stakeholders to gather requirements and provide updates on project status. Technical Skills Required: Proficient in Python for data processing and automation. Strong experience with Apache Spark for large-scale data processing. Familiarity with AWS S3 for data storage and management. Experience with Kafka for real-time data streaming. Knowledge of Redshift for data warehousing solutions. Proficient in Oracle databases for data management. Experience with AWS Glue for ETL processes. Familiarity with Apache Airflow for workflow orchestration. Experience with EMR for big data processing. Mandatory: Strong AWS data engineering skills.
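To illustrate the kind of Spark-on-AWS batch step described above (raw S3 data refined into a curated Medallion layer), a minimal PySpark sketch follows; bucket paths and column names are hypothetical placeholders.

```python
# Minimal PySpark sketch: read raw JSON landed in S3, clean it, and write a curated
# (silver) Parquet layer. Bucket names and columns are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-bronze-to-silver").getOrCreate()

bronze = spark.read.json("s3://example-bronze/orders/2025-07-01/")   # hypothetical path

silver = (
    bronze
    .dropDuplicates(["order_id"])                         # basic data-quality step
    .withColumn("order_ts", F.to_timestamp("order_ts"))   # normalize types
    .filter(F.col("order_amount") > 0)                    # drop invalid records
)

(
    silver.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-silver/orders/")               # hypothetical path
)
```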
Posted 1 week ago
4.0 years
3 - 6 Lacs
Hyderābād
On-site
CDP ETL & Database Engineer The CDP ETL & Database Engineer will specialize in architecting, designing, and implementing solutions that are sustainable and scalable. The ideal candidate will understand CRM methodologies, with an analytical mindset, and a background in relational modeling in a Hybrid architecture. The candidate will help drive the business towards specific technical initiatives and will work closely with the Solutions Management, Delivery, and Product Engineering teams. The candidate will join a team of developers across the US, India & Costa Rica. Responsibilities: ETL Development – The CDP ETL & Database Engineer will be responsible for building pipelines to feed downstream data processes. They will be able to analyze data, interpret business requirements, and establish relationships between data sets. The ideal candidate will be familiar with different encoding formats and file layouts such as JSON and XML. Implementations & Onboarding – Will work with the team to onboard new clients onto the ZMP/CDP+ platform. The candidate will solidify business requirements, perform ETL file validation, establish users, perform complex aggregations, and syndicate data across platforms. The hands-on engineer will take a test-driven approach towards development and will be able to document processes and workflows. Incremental Change Requests – The CDP ETL & Database Engineer will be responsible for analyzing change requests and determining the best approach towards their implementation and execution. This requires the engineer to have a deep understanding of the platform's overall architecture. Change requests will be implemented and tested in a development environment to ensure their introduction will not negatively impact downstream processes. Change Data Management – The candidate will adhere to change data management procedures and actively participate in CAB meetings where change requests will be presented and reviewed. Prior to introducing change, the engineer will ensure that processes are running in a development environment. The engineer will be asked to do peer-to-peer code reviews and solution reviews before production code deployment. Collaboration & Process Improvement – The engineer will be asked to participate in knowledge-share sessions where they will engage with peers to discuss solutions, best practices, and overall approach. The candidate will be able to look for opportunities to streamline processes with an eye towards building a repeatable model to reduce implementation duration. Job Requirements: The CDP ETL & Database Engineer will be well versed in the following areas: Relational data modeling; ETL and FTP concepts; Advanced Analytics using SQL Functions; Cloud technologies - AWS, Snowflake. Able to decipher requirements, provide recommendations, and implement solutions within predefined timelines. The ability to work independently, but at the same time, the individual will be called upon to contribute in a team setting. The engineer will be able to confidently communicate status, raise exceptions, and voice concerns to their direct manager. Participate in internal client project status meetings with the Solution/Delivery management teams. When required, collaborate with the Business Solutions Analyst (BSA) to solidify requirements. Ability to work in a fast-paced, agile environment; the individual will be able to work with a sense of urgency when escalated issues arise. Strong communication and interpersonal skills, ability to multitask and prioritize workload based on client demand.
Familiarity with Jira for workflow and time allocation. Familiarity with the Scrum framework: backlog, planning, sprints, story points, retrospectives. Required Skills: ETL – ETL tools such as Talend (Preferred, not required); DMExpress – Nice to have; Informatica – Nice to have. Database – Hands-on experience with the following database technologies: Snowflake (Required); MySQL/PostgreSQL – Nice to have; familiarity with NoSQL DB methodologies (Nice to have). Programming Languages – Can demonstrate knowledge of any of the following: PL/SQL; JavaScript – Strong Plus; Python – Strong Plus; Scala – Nice to have. AWS – Knowledge of the following AWS services: S3, EMR (Concepts), EC2 (Concepts), Systems Manager / Parameter Store. Understands JSON data structures and key-value pairs. Working knowledge of code repositories such as Git and WinCVS; workflow management tools such as Apache Airflow, Kafka, Automic/Appworx; and Jira. Minimum Qualifications: Bachelor's degree or equivalent. 4+ years' experience. Excellent verbal & written communication skills. Self-starter, highly motivated. Analytical mindset. Company Summary: Zeta Global is a NYSE-listed data-powered marketing technology company with a heritage of innovation and industry leadership. Founded in 2007 by entrepreneur David A. Steinberg and John Sculley, former CEO of Apple Inc and Pepsi-Cola, the Company combines the industry's 3rd largest proprietary data set (2.4B+ identities) with Artificial Intelligence to unlock consumer intent, personalize experiences and help our clients drive business growth. Our technology runs on the Zeta Marketing Platform, which powers 'end to end' marketing programs for some of the world's leading brands. With expertise encompassing all digital marketing channels – Email, Display, Social, Search and Mobile – Zeta orchestrates acquisition and engagement programs that deliver results that are scalable, repeatable and sustainable. Zeta Global is an Equal Opportunity/Affirmative Action employer and does not discriminate on the basis of race, gender, ancestry, color, religion, sex, age, marital status, sexual orientation, gender identity, national origin, medical condition, disability, veterans status, or any other basis protected by law. Zeta Global Recognized in Enterprise Marketing Software and Cross-Channel Campaign Management Reports by Independent Research Firm https://www.forbes.com/sites/shelleykohan/2024/06/1G/amazon-partners-with-zeta-global-to-deliver- gen-ai-marketing-automation/ https://www.cnbc.com/video/2024/05/06/zeta-global-ceo-david-steinberg-talks-ai-in-focus-at-milken- conference.html https://www.businesswire.com/news/home/20240G04622808/en/Zeta-Increases-3Q%E2%80%GG24- Guidance https://www.prnewswire.com/news-releases/zeta-global-opens-ai-data-labs-in-san-francisco-and-nyc- 300S45353.html https://www.prnewswire.com/news-releases/zeta-global-recognized-in-enterprise-marketing-software-and- cross-channel-campaign-management-reports-by-independent-research-firm-300S38241.html
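As a rough illustration of the Snowflake-centric ETL work described above, here is a minimal sketch using the snowflake-connector-python library; the account, stage, table, and column names are hypothetical placeholders, and the landing table is assumed to have a single VARIANT column.

```python
# Minimal sketch: load a staged JSON feed into Snowflake and flatten key/value
# attributes with Snowflake SQL. Stage, table, and credential names are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",      # hypothetical credentials
    user="etl_user",
    password="***",
    warehouse="ETL_WH",
    database="CDP",
    schema="RAW",
)

try:
    cur = conn.cursor()
    # Copy a client feed from an external stage into a VARIANT landing table.
    cur.execute(
        "COPY INTO RAW.CLIENT_FEED FROM @client_stage/feed/ FILE_FORMAT = (TYPE = JSON)"
    )

    # Flatten selected attributes into a typed downstream table.
    # RAW.CLIENT_FEED is assumed to have a single VARIANT column named v.
    cur.execute("""
        INSERT INTO CORE.CUSTOMER_ATTRIBUTES (customer_id, email, updated_at)
        SELECT
            v:customer_id::string,
            v:email::string,
            v:updated_at::timestamp_ntz
        FROM RAW.CLIENT_FEED
    """)
finally:
    conn.close()
```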
Posted 1 week ago
5.0 years
0 Lacs
Hyderābād
On-site
JOB DESCRIPTION We have an opportunity to impact your career and provide an adventure where you can push the limits of what's possible. As a Senior Manager of Software Engineering at JPMorgan Chase within the Consumer and Community Banking – Data Technology team, you lead a technical area and drive impact within teams, technologies, and projects across departments. Utilize your in-depth knowledge of software, applications, technical processes, and product management to drive multiple complex projects and initiatives, while serving as a primary decision maker for your teams and a driver of innovation and solution delivery. Job Responsibilities Leads the data publishing and processing platform engineering team to achieve business & technology objectives Accountable for technical tools evaluation, platform builds, and design & delivery outcomes Carries governance accountability for coding decisions, control obligations, and measures of success such as cost of ownership, maintainability, and portfolio operations Delivers technical solutions that can be leveraged across multiple businesses and domains Influences peer leaders and senior stakeholders across the business, product, and technology teams Champions the firm's culture of diversity, equity, inclusion, and respect Required qualifications, capabilities, and skills Formal training or certification on software engineering concepts and 5+ years of applied experience. In addition, 2+ years of experience leading technologists to manage and solve complex technical items within your domain of expertise Expertise in programming languages such as Python and Java, with a strong understanding of cloud services including AWS, EKS, SNS, SQS, CloudFormation, Terraform, and Lambda. Proficient in messaging services like Kafka and big data technologies such as Hadoop, Spark-SQL, and PySpark. Experienced with Teradata, Snowflake, or other RDBMS databases, with a solid understanding of Teradata or Snowflake. Advanced experience in leading technologists to manage, anticipate, and solve complex technical challenges, along with experience in developing and recognizing talent within cross-functional teams. Experience in leading a product as a Product Owner or Product Manager, with practical cloud-native experience. Preferred qualifications, capabilities, and skills Previous experience leading / building Platforms & Frameworks teams Skilled in orchestration tools like Airflow (preferable) or Control-M, and experienced in continuous integration and continuous deployment (CI/CD) using Jenkins. Experience with observability tools, frameworks, and platforms. Experience with large-scale, secure, distributed, and complex architecture and design Experience with non-functional topics like security, performance, and code and design best practices AWS Certified Solutions Architect, AWS Certified Developer, or similar certification is a big plus.
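For illustration of the Kafka and Spark technologies listed above, the following is a minimal Spark Structured Streaming sketch that lands Kafka messages as Parquet; broker addresses, the topic, and paths are hypothetical placeholders and do not represent JPMorgan systems.

```python
# Minimal sketch: consume a Kafka topic with Spark Structured Streaming and land the
# messages as Parquet. Broker addresses, topic, and paths are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-kafka-landing").getOrCreate()

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092")
    .option("subscribe", "customer-events")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka exposes binary key/value columns; cast the payload to a string for later parsing.
parsed = events.select(
    F.col("key").cast("string").alias("event_key"),
    F.col("value").cast("string").alias("payload"),
    F.col("timestamp").alias("kafka_ts"),
)

query = (
    parsed.writeStream
    .format("parquet")
    .option("path", "s3://example-landing/customer-events/")
    .option("checkpointLocation", "s3://example-landing/_checkpoints/customer-events/")
    .trigger(processingTime="1 minute")
    .start()
)

query.awaitTermination()
```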
Posted 1 week ago
5.0 years
5 - 5 Lacs
Hyderābād
On-site
Data Services Analyst The Data Services ETL Developer will specialize in data transformations and integration projects utilizing Zeta's proprietary tools, 3rd-party software, and coding. This role requires an understanding of CRM methodologies related to marketing operations. The candidate will be responsible for implementing data processing across multiple technologies, supporting a high volume of tasks with the expectation of accurate and on-time delivery. Responsibilities: Manipulate client and internal marketing data across multiple platforms and technologies. Automate scripts to transfer and manipulate data feeds (internal and external). Build, deploy, and manage cloud-based data pipelines using AWS services. Manage multiple tasks with competing priorities and ensure timely client delivery. Work with technical staff to maintain and support a proprietary ETL environment. Collaborate with database/CRM teams, modelers, analysts, and application programmers to deliver results for clients. Job Requirements: Coverage of the US time zone and in-office presence a minimum of three days per week. Experience in database marketing with the ability to transform and manipulate data. Knowledge of US and international postal address standards, with exposure to SAP postal products (DQM). Proficient with AWS services (S3, Airflow, RDS, Athena) for data storage, processing, and analysis. Experience with Oracle and Snowflake SQL to automate scripts for marketing data processing. Familiarity with tools like Snowflake, Airflow, GitLab, Grafana, LDAP, OpenVPN, DCWEB, Postman, and Microsoft Excel. Knowledge of SQL Server, including data exports/imports, running SQL Server Agent Jobs, and SSIS packages. Proficiency with editors like Notepad++ and UltraEdit (or similar tools). Understanding of SFTP and PGP to ensure data security and client data protection. Experience working with large-scale customer databases in a relational database environment. Proven ability to manage multiple tasks simultaneously. Strong communication and collaboration skills in a team environment. Familiarity with the project life cycle. Minimum Qualifications: Bachelor's degree or equivalent with 5+ years of experience in database marketing and cloud-based technologies. Strong understanding of data engineering concepts and cloud infrastructure. Excellent oral and written communication skills.
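To illustrate the kind of S3-based feed processing described above, here is a minimal sketch using boto3 and pandas; bucket names, keys, delimiters, and columns are hypothetical placeholders, not Zeta's actual feeds.

```python
# Minimal sketch: pull a client marketing feed from S3, apply a simple postal-address
# cleanup with pandas, and stage the result back to S3. Names are hypothetical.
import io

import boto3
import pandas as pd

s3 = boto3.client("s3")

# Download the raw feed (pipe-delimited in this hypothetical example).
obj = s3.get_object(Bucket="example-client-feeds", Key="incoming/customers_20250701.txt")
df = pd.read_csv(io.BytesIO(obj["Body"].read()), sep="|", dtype=str)

# Lightweight standardization before downstream postal processing.
df["zip"] = df["zip"].str.strip().str.zfill(5)       # pad US ZIP codes to 5 digits
df["state"] = df["state"].str.strip().str.upper()
df = df.drop_duplicates(subset=["customer_id"])

# Write the cleaned feed to the processed prefix.
buffer = io.StringIO()
df.to_csv(buffer, sep="|", index=False)
s3.put_object(
    Bucket="example-client-feeds",
    Key="processed/customers_20250701.txt",
    Body=buffer.getvalue(),
)
```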
Posted 1 week ago
4.0 years
0 Lacs
Hyderābād
On-site
The people here at Apple don't just build products - we craft the kind of wonder that's revolutionized entire industries. It's the diversity of those people and their ideas that supports the innovation that runs through everything we do, from amazing technology to industry-leading environmental efforts. Join Apple, and help us leave the world better than we found it. The Global Business Intelligence team provides data services, analytics, reporting, and data science solutions to Apple's business groups, including Retail, iTunes, Marketing, AppleCare, Operations, Finance, and Sales. These solutions are built on top of an end-to-end machine learning platform with sophisticated AI capabilities. We are looking for a competent, experienced, and driven machine learning engineer to define and build some of the best-in-class machine learning solutions and tools for Apple. Description As a Machine Learning Engineer, you will work on building intelligent systems to democratize AI across a wide range of solutions within Apple. You will drive the development and deployment of innovative AI models and systems that directly impact the capabilities and performance of Apple's products and services. You will implement robust, scalable ML infrastructure, including data storage, processing, and model serving components, to support seamless integration of AI/ML models into production environments. You will develop novel feature engineering, data augmentation, prompt engineering, and fine-tuning frameworks that achieve optimal performance on specific tasks and domains. You will design and implement automated ML pipelines for data preprocessing, feature engineering, model training, hyper-parameter tuning, and model evaluation, enabling rapid experimentation and iteration. You will also implement advanced model compression and optimization techniques to reduce the resource footprint of language models while preserving their performance. You will maintain a continuous focus on brainstorming and designing POCs using AI/ML services for new or existing enterprise problems. YOU SHOULD BE ABLE TO: - Understand a business challenge - Collaborate with business and other multi-functional teams - Design a statistical or deep learning solution to find the needed answer to it - Develop it yourself or guide another person to do it - Deliver the outcome into production - Keep good governance of your work. There are meaningful opportunities for you to deliver impactful work at Apple. Key Qualifications 4+ years of ML engineering experience in feature engineering, model training, model serving, model monitoring, and model refresh management Experience developing AI/ML systems at scale in production or in high-impact research environments Passionate about computer vision and natural language processing, especially LLMs and Generative AI systems Knowledge of common frameworks and tools such as PyTorch or TensorFlow Experience with transformer models such as BERT and GPT, and an understanding of their underlying principles, is a plus Strong coding, analytical, and software engineering skills, and familiarity with software engineering principles around testing, code reviews, and deployment Experience in handling performance, application, and security log management Applied knowledge of statistical data analysis, predictive modeling, classification, time series techniques, sampling methods, multivariate analysis, hypothesis testing, and drift analysis. Proficiency in programming languages and tools like Python, R, Git, Airflow, and Notebooks.
Experience with data visualization tools like matplotlib, d3.js, or Tableau would be a plus. Education & Experience: Bachelor's Degree or equivalent experience.
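Since the posting calls out drift analysis, here is a minimal, generic sketch of a feature-drift check using a two-sample Kolmogorov-Smirnov test; the data, feature, and threshold are hypothetical placeholders, and this is not a description of Apple's methodology.

```python
# Minimal sketch of a feature-drift check: compare a training-time feature distribution
# against recent serving data with a two-sample Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Stand-ins for a feature column captured at training time vs. observed in production.
train_feature = rng.normal(loc=0.0, scale=1.0, size=10_000)
serve_feature = rng.normal(loc=0.3, scale=1.1, size=2_000)   # slightly shifted

stat, p_value = ks_2samp(train_feature, serve_feature)

DRIFT_P_THRESHOLD = 0.01  # hypothetical alerting threshold
if p_value < DRIFT_P_THRESHOLD:
    print(f"Drift suspected: KS statistic={stat:.3f}, p={p_value:.2e}")
else:
    print(f"No significant drift detected: KS statistic={stat:.3f}, p={p_value:.2e}")
```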
Posted 1 week ago
4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Position Title: Senior Data Engineer Location: Chennai 34322 Job Type: Contract Budget: ₹18 LPA Notice Period: Immediate Joiners Only Role Overview We are seeking a highly capable Software Engineer (Data Engineer) to support end-to-end development and deployment of critical data products. The selected candidate will work across diverse business and technical teams to design, build, transform, and migrate data solutions using modern cloud technologies. This is a high-impact role focused on cloud-native data engineering and infrastructure. Key Responsibilities Develop and manage scalable data pipelines and workflows on Google Cloud Platform (GCP) Design and implement ETL processes using Python, BigQuery, and Terraform Support data product lifecycle from concept, development to deployment and DevOps Optimize query performance and manage large datasets with efficiency Collaborate with cross-functional teams to gather requirements and deliver solutions Maintain strong adherence to Agile practices, contributing to sprint planning and user stories Apply best practices in data security, quality, and governance Effectively communicate technical solutions to stakeholders and team members Required Skills & Experience Minimum 4 years of relevant experience in GCP Data Engineering Strong hands-on experience with BigQuery, Python programming, Terraform, Cloud Run, and GitHub Proven expertise in SQL, data modeling, and performance optimization Solid understanding of cloud data warehousing and pipeline orchestration (e.g., DBT, Dataflow, Composer, or Airflow DAGs) Background in ETL workflows and data processing logic Familiarity with Agile (Scrum) methodology and collaboration tools Preferred Skills Experience with Java, Spring Boot, and RESTful APIs Exposure to infrastructure automation and CI/CD pipelines Educational Qualification Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field Skills: etl,terraform,dbt,java,spring boot,etl workflows,data modeling,dataflow,data engineering,ci/cd,bigquery,agile,data,sql,cloud,restful apis,github,airflow dags,gcp,cloud run,composer,python
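As an illustration of the query performance and cost awareness this role calls for, here is a minimal BigQuery dry-run sketch that estimates bytes scanned before a query is executed; the project, dataset, table, and parameter values are hypothetical placeholders.

```python
# Minimal sketch of a pre-flight cost check: a BigQuery dry run reports the bytes a
# query would scan without actually running it. Names and values are hypothetical.
import datetime

from google.cloud import bigquery

client = bigquery.Client(project="example-project")

sql = """
    SELECT order_date, SUM(amount) AS revenue
    FROM `example-project.sales.orders`
    WHERE order_date >= @start_date
    GROUP BY order_date
"""

job_config = bigquery.QueryJobConfig(
    dry_run=True,
    use_query_cache=False,
    query_parameters=[
        bigquery.ScalarQueryParameter("start_date", "DATE", datetime.date(2025, 1, 1)),
    ],
)

job = client.query(sql, job_config=job_config)
gib = job.total_bytes_processed / 1024 ** 3
print(f"Dry run: this query would scan {gib:.2f} GiB")
```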
Posted 1 week ago
5.0 years
19 - 20 Lacs
Chennai, Tamil Nadu, India
On-site
Position Title: Senior Software Engineer 34332 Location: Chennai (Onsite) Job Type: Contract Budget: ₹20 LPA Notice Period: Immediate Joiners Only Role Overview We are looking for a highly skilled Senior Software Engineer to be a part of a centralized observability and monitoring platform team. The role focuses on building and maintaining a scalable, reliable observability solution that enables faster incident response and data-driven decision-making through latency, traffic, error, and saturation monitoring. This opportunity requires a strong background in cloud-native architecture, observability tooling, backend and frontend development, and data pipeline engineering. Key Responsibilities Design, build, and maintain observability and monitoring platforms to enhance MTTR/MTTX Create and optimize dashboards, alerts, and monitoring configurations using tools like Prometheus, Grafana, etc. Architect and implement scalable data pipelines and microservices for real-time and batch data processing Utilize GCP tools including BigQuery, Dataflow, Dataproc, Data Fusion, and others Develop end-to-end solutions using Spring Boot, Python, Angular, and REST APIs Design and manage relational and NoSQL databases including PostgreSQL, MySQL, and BigQuery Implement best practices in data governance, RBAC, encryption, and security within cloud environments Ensure automation and reliability through CI/CD, Terraform, and orchestration tools like Airflow and Tekton Drive full-cycle SDLC processes including design, coding, testing, deployment, and monitoring Collaborate closely with software architects, DevOps, and cross-functional teams for solution delivery Core Skills Required Proficiency in Spring Boot, Angular, Java, and Python Experience in developing microservices and SOA-based systems Cloud-native development experience, preferably on Google Cloud Platform (GCP) Strong understanding of HTML, CSS, JavaScript/TypeScript, and modern frontend frameworks Experience with infrastructure automation and monitoring tools Working knowledge of data engineering technologies: PySpark, Airflow, Apache Beam, Kafka, and similar Strong grasp of RESTful APIs, GitHub, and TDD methodologies Preferred Skills GCP Professional Certifications (e.g., Data Engineer, Cloud Developer) Hands-on experience with Terraform, Cloud SQL, Data Governance tools, and security frameworks Exposure to performance tuning, cost optimization, and observability best practices Experience Required 5+ years of experience in full-stack and cloud-based application development Strong track record in building distributed, scalable systems Prior experience with observability and performance monitoring tools is a plus Educational Qualifications Bachelor’s Degree in Computer Science, Information Technology, or a related field (mandatory) Skills: java,data fusion,html,dataflow,terraform,spring boot,restful apis,python,angular,dataproc,microservices,apache beam,css,cloud sql,soa,typescript,tdd,kafka,javascript,airflow,github,pyspark,bigquery,,gcp
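To illustrate the latency/traffic/error instrumentation described above, here is a minimal sketch using the Prometheus Python client; the metric names, endpoint label, and simulated workload are hypothetical placeholders.

```python
# Minimal sketch: expose request count and latency metrics for Prometheus to scrape,
# which Grafana dashboards and alerts can then build on. Names are hypothetical.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests", ["endpoint", "status"])
LATENCY = Histogram("app_request_latency_seconds", "Request latency", ["endpoint"])

def handle_request(endpoint: str) -> None:
    start = time.perf_counter()
    status = "200"
    try:
        time.sleep(random.uniform(0.01, 0.2))   # stand-in for real work
        if random.random() < 0.05:
            raise RuntimeError("simulated failure")
    except RuntimeError:
        status = "500"
    finally:
        LATENCY.labels(endpoint=endpoint).observe(time.perf_counter() - start)
        REQUESTS.labels(endpoint=endpoint, status=status).inc()

if __name__ == "__main__":
    start_http_server(8000)          # exposes /metrics for Prometheus to scrape
    while True:
        handle_request("/orders")
```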
Posted 1 week ago
2.0 years
4 Lacs
Chennai
On-site
We are hiring a tech-savvy and creative Social Media Handler with strong expertise in AI-powered content creation, web scraping, and automation of scraper workflows. You will be responsible for managing social media presence while automating content intelligence and trend tracking through custom scraping solutions. This is a hybrid role requiring both creative content skills and technical automation proficiency. Key Responsibilities: 1) Social Media Management - Plan and execute content calendars across platforms: Instagram, Facebook, YouTube, LinkedIn, and X. - Create high-performing, audience-specific content using AI tools (ChatGPT, Midjourney, Canva AI, etc.). - Engage with followers, track trends, and implement growth strategies. 2) AI Content Creation - Use generative AI to write captions, articles, and hashtags. - Generate AI-powered images, carousels, infographics, and reels. - Repurpose long-form content into short-form video or visual content using tools like Descript or Lumen5. 3) Web Scraping & Automation - Design and build automated web scrapers to extract data from websites, directories, competitor pages, and trending content sources. - Schedule scraping jobs and set up automated pipelines using: Python (BeautifulSoup, Scrapy, Selenium, Playwright), task schedulers (Airflow, Cron, or Python scripts), and cloud scraping or headless browsers. - Parse and clean data for insight generation (topics, hashtags, keywords, sentiment, etc.). - Store and organize scraped data in spreadsheets or databases for content inspiration and strategy. Required Skills & Experience: 1) 2–5 years of relevant work experience in social media, content creation, or web scraping. 2) Proficiency in AI tools: Text (ChatGPT, Jasper, Copy.ai); Image (Midjourney, DALL·E, Adobe Firefly); Video (Pictory, Descript, Lumen5). 3) Strong Python skills for web scraping (Scrapy, BeautifulSoup, Selenium) and automation scripting. 4) Knowledge of data handling using Pandas, CSV, JSON, Google Sheets, or databases. 5) Familiarity with social media scheduling tools (Meta Business Suite, Buffer, Hootsuite). 6) Ability to work independently and stay updated on digital trends and platform changes. Educational Qualification: Degree in Marketing, Media, Computer Science, or Data Science preferred. Skills-based hiring encouraged – real-world experience matters more than formal education. Work Location: Chennai (In-office role) Salary: Commensurate with experience + performance bonus Bonus Skills (Nice to Have): 1) Knowledge of website development (HTML, CSS, JS, WordPress/Webflow). 2) SEO and content analytics. 3) Basic video editing and animation (CapCut, After Effects). 4) Experience with automation platforms like Zapier, n8n, or Make.com. To Apply: Please email your resume, portfolio, and sample projects to: Job Type: Full-time Pay: From ₹40,000.00 per month Work Location: In person
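As an illustration of the scraping workflow described above, here is a minimal sketch using requests and BeautifulSoup; the URL, CSS selector, and output file are hypothetical placeholders, and any real scraper should respect the target site's robots.txt and terms of service.

```python
# Minimal scraping sketch: fetch a page, extract headlines and links with BeautifulSoup,
# and append them to a CSV for later content planning. URL and selector are hypothetical.
import csv
from datetime import datetime, timezone

import requests
from bs4 import BeautifulSoup

URL = "https://example.com/trending"          # hypothetical source page

response = requests.get(URL, headers={"User-Agent": "content-research-bot/0.1"}, timeout=30)
response.raise_for_status()

soup = BeautifulSoup(response.text, "html.parser")
rows = []
for link in soup.select("h2 a"):              # hypothetical selector for headline links
    rows.append({
        "scraped_at": datetime.now(timezone.utc).isoformat(),
        "title": link.get_text(strip=True),
        "url": link.get("href", ""),
    })

with open("trending_topics.csv", "a", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["scraped_at", "title", "url"])
    if f.tell() == 0:                          # write a header only for a new file
        writer.writeheader()
    writer.writerows(rows)

print(f"Captured {len(rows)} items from {URL}")
```

A scheduler such as Cron or an Airflow DAG could run this script on a fixed cadence to keep the trend file current.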
Posted 1 week ago
6.0 years
0 Lacs
Andhra Pradesh, India
On-site
We are seeking a Senior Developer with expertise in SnapLogic and Apache Airflow to design, develop, and maintain enterprise-level data integration solutions. This role requires strong technical expertise in ETL development, workflow orchestration, and cloud technologies. You will be responsible for automating data workflows, optimizing performance, and ensuring the reliability and scalability of our data systems. Key Responsibilities include designing, developing, and managing ETL pipelines using SnapLogic, ensuring efficient data transformation and integration across various systems and applications. Leverage Apache Airflow for workflow automation, job scheduling, and task dependencies, ensuring optimized execution and monitoring. Work closely with cross-functional teams such as Data Engineering, DevOps, and Data Science to understand data requirements and deliver solutions. Collaborate in designing and implementing data pipeline architectures to support large-scale data processing in cloud environments like AWS, Azure, and GCP. Develop reusable SnapLogic pipelines and integrate with third-party applications and data sources including databases, APIs, and cloud services. Optimize SnapLogic pipeline performance to handle large volumes of data with minimal latency. Provide guidance and mentoring to junior developers in the team, conducting code reviews and offering best practice recommendations. Troubleshoot and resolve pipeline failures, ensuring high data quality and minimal downtime. Implement automated testing, continuous integration (CI), and continuous delivery (CD) practices for data pipelines. Stay current with new SnapLogic features, Airflow upgrades, and industry best practices. Required Skills & Experience include 6+ years of hands-on experience in data engineering, focusing on SnapLogic and Apache Airflow. Strong experience with SnapLogic Designer and SnapLogic cloud environment for building data integrations and ETL pipelines. Proficient in Apache Airflow for orchestrating, automating, and scheduling data workflows. Strong understanding of ETL concepts, data integration, and data transformations. Experience with cloud platforms like AWS, Azure, or Google Cloud and data storage systems such as S3, Azure Blob, and Google Cloud Storage. Strong SQL skills and experience with relational databases like PostgreSQL, MySQL, Oracle, and NoSQL databases. Experience working with REST APIs, integrating data from third-party services, and using connectors. Knowledge of data quality, monitoring, and logging tools for production pipelines. Experience with CI/CD pipelines and tools such as Jenkins, GitLab, or similar. Excellent problem-solving skills with the ability to diagnose issues and implement effective solutions. Ability to work in an Agile development environment. Strong communication and collaboration skills to work with both technical and non-technical teams.
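To illustrate how Airflow can orchestrate an external integration run of the kind described above, here is a minimal sketch; the trigger URL, token, and DAG name are hypothetical placeholders and do not represent the actual SnapLogic API.

```python
# Minimal sketch: an Airflow task posts to a pipeline-trigger URL and fails loudly on a
# non-2xx response so Airflow's retry policy takes over. All names are hypothetical.
from datetime import datetime, timedelta

import requests
from airflow import DAG
from airflow.operators.python import PythonOperator

def trigger_pipeline() -> None:
    response = requests.post(
        "https://integration.example.com/api/pipelines/customer-sync/run",  # placeholder
        headers={"Authorization": "Bearer <token>"},                         # placeholder
        timeout=60,
    )
    response.raise_for_status()   # surface failures so the task can be retried

with DAG(
    dag_id="trigger_integration_pipeline",       # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 2 * * *",               # run nightly at 02:00
    catchup=False,
    default_args={"retries": 3, "retry_delay": timedelta(minutes=10)},
) as dag:
    PythonOperator(task_id="trigger_customer_sync", python_callable=trigger_pipeline)
```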
Posted 1 week ago