5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions. Focused on relationships, you are building meaningful client connections, and learning how to manage and inspire others. Navigating increasingly complex situations, you are growing your personal brand, deepening technical expertise and awareness of your strengths. You are expected to anticipate the needs of your teams and clients, and to deliver quality. Embracing increased ambiguity, you are comfortable when the path forward isn’t clear, you ask questions, and you use these moments as opportunities to grow. Skills Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include but are not limited to: Respond effectively to the diverse perspectives, needs, and feelings of others. Use a broad range of tools, methodologies and techniques to generate new ideas and solve problems. Use critical thinking to break down complex concepts. Understand the broader objectives of your project or role and how your work fits into the overall strategy. Develop a deeper understanding of the business context and how it is changing. Use reflection to develop self awareness, enhance strengths and address development areas. Interpret data to inform insights and recommendations. Uphold and reinforce professional and technical standards (e.g. refer to specific PwC tax and audit guidance), the Firm's code of conduct, and independence requirements. Below are examples of role/skills profiles used by the UK firm when hiring Data Analytics based roles indicated above. Job Description & Summary Operate is the firm's delivery engine, serving as the orchestrator of services across the organisation. It is a global team of delivery professionals united by a commitment to excellence and impact. Operate has built a strong reputation for collaboration, mobilising quickly, and effectively getting tasks done. It aims to build a world-class delivery capability, focusing on evolving operational delivery, embedding automation and AI, and raising the bar for quality and consistency. The goal is to add strategic value for clients and contribute to the firm’s ambition of pre-eminence in the market. Team members in Operate are provided with meaningful opportunities to lead, learn, and grow, embracing a future-ready workforce trained in cutting-edge technology. Operate ensures clients can access a single front door to global delivery chains, providing tailored, high-quality solutions to meet evolving challenges. The role will be based in Kolkata. However, with a diverse range of clients and projects, you'll occasionally have the exciting opportunity to work in various locations, offering exposure to different industries and cultures. This flexibility opens doors to unique networking experiences and accelerated career growth, enriching your professional journey. Your willingness and ability to do this will be discussed as part of the recruitment process. Candidates who prefer not to travel will still be considered. 
Role Description As a pivotal member of our data team, Senior Associates are key in shaping and refining data management and analytics functions, including our expanding Data Services. You will be instrumental in helping us deliver value-driven insights by designing, integrating, and analysing cutting-edge data systems. The role emphasises leveraging the latest technologies, particularly within the Microsoft ecosystem, to enhance operational capabilities and drive innovation. You'll work on diverse and challenging projects, allowing you to actively influence strategic decisions and develop innovative solutions. This, in turn, paves the way for unparalleled professional growth and the development of a forward-thinking mindset. As you contribute to our Data Services, you'll have a front-row seat to the future of data analytics, providing an enriching environment to build expertise and expand your career horizons. Key Activities Include, But Are Not Limited To Design and implement data integration processes. Manage data projects with multiple stakeholders and tight timelines. Develop data models and frameworks that enhance data governance and efficiency. Address challenges related to data integration, quality, and management processes. Implement best practices in automation to streamline data workflows. Engage with key stakeholders to extract, interpret, and translate data requirements into meaningful insights and solutions. Engage with clients to understand and deliver data solutions. Work collaboratively to meet project goals. Lead and mentor junior team members. Essential Requirements More than 5 years of experience in data analytics, with proficiency in managing large datasets and crafting detailed reports. Proficient in Python. Experience working within a Microsoft Azure environment. Experience with data warehousing and data modelling (e.g., dimensional modelling, data mesh, data fabric). Proficiency in PySpark/Databricks/Snowflake/MS Fabric, and intermediate SQL skills. Experience with orchestration tools such as Azure Data Factory (ADF), Airflow, or DBT. Familiarity with DevOps practices, specifically creating CI/CD and release pipelines. Knowledge of Azure DevOps tools and GitHub. Knowledge of Azure SQL DB or any other RDBMS system. Basic knowledge of GenAI. Additional Skills / Experiences That Will Be Beneficial Understanding of data governance frameworks. Awareness of Power Automate functionalities. Why Join Us? This role isn't just about the technical expertise—it's about being part of something transformational. You'll be part of a vibrant team where growth opportunities are vast and where your contributions directly impact our mission to break new ground in data services. With a work culture that values innovation, collaboration, and personal growth, joining PwC's Operate Data Analytics team offers you the chance to shape the future of operational and data service solutions with creativity and foresight. Dive into exciting projects, challenge the status quo, and drive the narrative forward!
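As a rough, non-authoritative sketch of the kind of PySpark-plus-SQL pipeline work the Essential Requirements above describe, the snippet below reads a raw dataset, applies basic quality rules, and writes a curated aggregate; the paths, table layout, and column names are hypothetical and not taken from the posting.

```python
from pyspark.sql import SparkSession, functions as F

# Hypothetical example: standardise raw order data and publish a curated daily aggregate.
spark = SparkSession.builder.appName("orders_curation").getOrCreate()

raw = spark.read.json("/mnt/raw/orders/")  # placeholder landing path

curated = (
    raw.dropDuplicates(["order_id"])                        # basic data-quality step
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .filter(F.col("amount").isNotNull())
)

# Aggregate for reporting, equivalent to an intermediate-level SQL GROUP BY.
daily = curated.groupBy("order_date").agg(
    F.count("*").alias("orders"),
    F.sum("amount").alias("revenue"),
)

daily.write.mode("overwrite").format("delta").save("/mnt/curated/daily_orders/")
```

In an Azure setting, a job like this would typically be scheduled from an orchestrator such as Azure Data Factory or Airflow rather than run ad hoc.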
Posted 1 week ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions. Driven by curiosity, you are a reliable, contributing member of a team. In our fast-paced environment, you are expected to adapt to working with a variety of clients and team members, each presenting varying challenges and scope. Every experience is an opportunity to learn and grow. You are expected to take ownership and consistently deliver quality work that drives value for our clients and success as a team. As you navigate through the Firm, you build a brand for yourself, opening doors to more opportunities. Skills Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include but are not limited to: Apply a learning mindset and take ownership for your own development. Appreciate diverse perspectives, needs, and feelings of others. Adopt habits to sustain high performance and develop your potential. Actively listen, ask questions to check understanding, and clearly express ideas. Seek, reflect, act on, and give feedback. Gather information from a range of sources to analyse facts and discern patterns. Commit to understanding how the business works and building commercial awareness. Learn and apply professional and technical standards (e.g. refer to specific PwC tax and audit guidance), uphold the Firm's code of conduct and independence requirements. Below are examples of role/skills profiles used by the UK firm when hiring Data Analytics based roles indicated above. Job Description & Summary Operate is the firm's delivery engine, serving as the orchestrator of services across the organisation. It is a global team of delivery professionals united by a commitment to excellence and impact. Operate has built a strong reputation for collaboration, mobilising quickly, and effectively getting tasks done. It aims to build a world-class delivery capability, focusing on evolving operational delivery, embedding automation and AI, and raising the bar for quality and consistency. The goal is to add strategic value for clients and contribute to the firm’s ambition of pre-eminence in the market. Team members in Operate are provided with meaningful opportunities to lead, learn, and grow, embracing a future-ready workforce trained in cutting-edge technology. Operate ensures clients can access a single front door to global delivery chains, providing tailored, high-quality solutions to meet evolving challenges. The role will be based in Kolkata. However, with a diverse range of clients and projects, you'll occasionally have the exciting opportunity to work in various locations, offering exposure to different industries and cultures. This flexibility opens doors to unique networking experiences and accelerated career growth, enriching your professional journey. Your willingness and ability to do this will be discussed as part of the recruitment process. Candidates who prefer not to travel will still be considered. 
Role Description As an integral part of our data team, Associate 2 professionals contribute significantly to the development of data management and analytics functions, including our growing Data Services. In this role, you'll assist engagement teams in delivering meaningful insights by helping design, integrate, and analyse data systems. You will work with the latest technologies, especially within the Microsoft ecosystem, to enhance our operational capabilities. Working on a variety of projects, you'll have the chance to contribute your ideas and support innovative solutions. This experience offers opportunities for professional growth and helps cultivate a forward-thinking mindset. As you support our Data Services, you'll gain exposure to the evolving field of data analytics, providing an excellent foundation for building expertise and expanding your career journey. Key Activities Include, But Are Not Limited To Assisting in the development of data models and frameworks to enhance data governance and efficiency. Supporting efforts to address data integration, quality, and management process challenges. Participating in the implementation of best practices in automation to streamline data workflows. Collaborating with stakeholders to gather, interpret, and translate data requirements into practical insights and solutions. Support management of data projects alongside senior team members. Assist in engaging with clients to understand their data needs. Work effectively as part of a team to achieve project goals. Essential Requirements At least two years of experience in data analytics, with a focus on handling large datasets and supporting the creation of detailed reports. Familiarity with Python and experience in working within a Microsoft Azure environment. Exposure to data warehousing and data modelling techniques (e.g., dimensional modelling). Basic proficiency in PySpark and Databricks/Snowflake/MS Fabric, with foundational SQL skills. Experience with orchestration tools like Azure Data Factory (ADF), Airflow, or DBT. Awareness of DevOps practices, including introducing CI/CD and release pipelines. Familiarity with Azure DevOps tools and GitHub. Basic understanding of Azure SQL DB or other RDBMS systems. Introductory knowledge of GenAI concepts. Additional Skills / Experiences That Will Be Beneficial Understanding of data governance frameworks. Awareness of Power Automate functionalities. WHY JOIN US? This role is not just about the technical expertise—it’s about being part of something transformational. You'll be part of a vibrant team where growth opportunities are vast and where your contributions directly impact our mission to break new ground in data services. With a work culture that values innovation, collaboration, and personal growth, joining PwC's Operate Data Analytics team offers you the chance to shape the future of operational and data service solutions with creativity and foresight. Dive into exciting projects, challenge the status quo, and drive the narrative forward!
Posted 1 week ago
1.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Associate AIML Engineer – Global Data Analytics, Technology (Maersk) This position will be based in India – Bangalore/Pune A.P. Moller – Maersk A.P. Moller – Maersk is the global leader in container shipping services. The business operates in 130 countries and employs 80,000 staff. An integrated container logistics company, Maersk aims to connect and simplify its customers’ supply chains. Today, we have more than 180 nationalities represented in our workforce across 131 countries, and this means we have an elevated level of responsibility to continue to build an inclusive workforce that is truly representative of our customers, their customers, and our vendor partners too. We are responsible for moving 20% of global trade and are on a mission to become the Global Integrator of Container Logistics. To achieve this, we are transforming into an industrial digital giant by combining our assets across air, land, ocean, and ports with our growing portfolio of digital assets to connect and simplify our customers’ supply chains through global end-to-end solutions, all the while rethinking the way we engage with customers and partners. The Brief In this role as an Associate AIML Engineer on the Global Data and Analytics (GDA) team, you will support the development of strategic, visibility-driven recommendation systems that serve both internal stakeholders and external customers. This initiative aims to deliver actionable insights that enhance supply chain execution, support strategic decision-making, and enable innovative service offerings. Data AI/ML (Artificial Intelligence and Machine Learning) Engineering involves the use of algorithms and statistical models to enable systems to analyse data, learn patterns, and make data-driven predictions or decisions without explicit human programming. AI/ML applications leverage vast amounts of data to identify insights, automate processes, and solve complex problems across a wide range of fields, including healthcare, finance, e-commerce, and more. AI/ML processes transform raw data into actionable intelligence, enabling automation, predictive analytics, and intelligent solutions. Data AI/ML combines advanced statistical modelling, computational power, and data engineering to build intelligent systems that can learn, adapt, and automate decisions. What I'll be doing – your accountabilities
Build and maintain machine learning models for various applications, such as natural language processing, computer vision, and recommendation systems. Perform exploratory data analysis (EDA) to identify patterns and trends in data. Clean, preprocess, perform hyperparameter tuning on, and analyze large datasets to prepare them for AI/ML model training (a brief illustrative sketch follows this posting). Build, test, and optimize machine learning models and experiment with algorithms and frameworks to improve model performance. Use programming languages, machine learning frameworks and libraries, algorithms, data structures, statistics, and databases to optimize and fine-tune machine learning models and ensure scalability and efficiency. Learn to define user requirements and align solutions with business needs. Work on AI/ML engineering projects, perform feature engineering, and collaborate with teams to understand business problems. Learn best practices in data and AI/ML engineering and performance optimization. Contribute to research papers and technical documentation. Contribute to project documentation and maintain data quality standards. Foundational Skills Programming skills beyond the fundamentals, demonstrable in most situations without guidance. The same level of command, beyond the fundamentals and demonstrable in most situations without guidance, in: AI & Machine Learning, Data Analysis, Machine Learning Pipelines, and Model Deployment. Specialized Skills Understanding beyond the fundamentals, demonstrable in most situations without guidance, of the following: Deep Learning, Statistical Analysis, Data Engineering, Big Data Technologies, Natural Language Processing (NLP), Data Architecture, and Data Processing Frameworks. Proficiency in Python programming. Proficiency in Python-based statistical analysis and data visualization tools. A limited understanding of Technical Documentation is acceptable, provided you are focused on growing this skill. Qualifications & Requirements BSc/MSc/PhD in computer science, data science, or a related discipline with 1+ years of industry experience building cloud-based ML solutions for production at scale, including solution architecture and solution design experience. Good problem-solving skills, for both technical and non-technical domains. A good, broad understanding of ML and statistics covering standard ML for regression and classification, forecasting and time-series modeling, and deep learning. 3+ years of hands-on experience building ML solutions in Python, including knowledge of common Python data science libraries (e.g. scikit-learn, PyTorch). Hands-on experience building end-to-end data products based on AI/ML technologies. Some experience with scenario simulations. Experience with a collaborative development workflow: version control (we use GitHub), code reviews, DevOps (including automated testing), CI/CD. A team player, eager to collaborate. Preferred Experiences In addition to the basic qualifications, it would be great if you have: Hands-on experience with common OR solvers such as Gurobi. Experience with a common dashboarding technology (we use Power BI) or a web-based frontend such as Dash, Streamlit, etc. Experience working in cross-functional product engineering teams following agile development methodologies (Scrum/Kanban/…). Experience with Spark and distributed computing. Strong hands-on experience with MLOps solutions, including open-source solutions. Experience with cloud-based orchestration technologies, e.g. Airflow, Kubeflow, etc. Experience with containerization (Kubernetes & Docker). As a performance-oriented company, we strive to always recruit the best person for the job – regardless of gender, age, nationality, sexual orientation or religious beliefs. We are proud of our diversity and see it as a genuine source of strength for building high-performing teams. Maersk is committed to a diverse and inclusive workplace, and we embrace different styles of thinking. Maersk is an equal opportunities employer and welcomes applicants without regard to race, colour, gender, sex, age, religion, creed, national origin, ancestry, citizenship, marital status, sexual orientation, physical or mental disability, medical condition, pregnancy or parental leave, veteran status, gender identity, genetic information, or any other characteristic protected by applicable law. We will consider qualified applicants with criminal histories in a manner consistent with all legal requirements. We are happy to support your need for any adjustments during the application and hiring process. If you need special assistance or an accommodation to use our website, apply for a position, or to perform a job, please contact us by emailing accommodationrequests@maersk.com.
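As a small, hedged illustration of the model-building and hyperparameter-tuning accountabilities listed in this posting (referenced above), the sketch below uses scikit-learn on synthetic data; the dataset, pipeline, and parameter grid are hypothetical stand-ins rather than anything Maersk prescribes.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a prepared feature table.
X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Preprocessing and model wrapped in one pipeline so tuning covers both steps.
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("clf", RandomForestClassifier(random_state=42)),
])

# Hypothetical hyperparameter grid; real grids depend on the business problem.
search = GridSearchCV(
    pipe,
    param_grid={"clf__n_estimators": [100, 300], "clf__max_depth": [None, 10, 20]},
    cv=5,
    scoring="f1",
)
search.fit(X_train, y_train)

print("best params:", search.best_params_)
print("held-out F1:", search.score(X_test, y_test))
```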
Posted 1 week ago
3.0 years
0 Lacs
Greater Kolkata Area
On-site
Line of Service Advisory Industry/Sector Not Applicable Specialism Data, Analytics & AI Management Level Senior Associate Job Description & Summary At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage. Why PwC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge. Responsibilities: · 3+ years of experience in implementing analytical solutions using Palantir Foundry, preferably in PySpark and on hyperscaler platforms (cloud services like AWS, GCP and Azure), with a focus on building data transformation pipelines at scale. · Team management: must have experience in mentoring and managing large teams (20 to 30 people) for complex engineering programs, and in hiring and nurturing talent in Palantir Foundry. · Training: the candidate should have experience in creating training programs in Foundry and delivering them in a hands-on format, either offline or virtually. · At least 3 years of hands-on experience building and managing Ontologies on Palantir Foundry. · At least 3 years of experience with Foundry services: data engineering with Contour and Fusion; dashboarding and report development using Quiver (or Reports); application development using Workshop.
· Exposure to Map and Vertex is a plus. · Palantir AIP experience is a plus. · Hands-on experience in data engineering and building data pipelines (code/no code) for ELT/ETL data migration, data refinement and data quality checks on Palantir Foundry. · Hands-on experience managing the data life cycle on at least one hyperscaler platform (AWS, GCP, Azure) using managed services or containerized deployments for data pipelines is necessary. · Hands-on experience in working on and building the Ontology (especially demonstrable experience in building semantic relationships). · Proficiency in SQL, Python and PySpark, with a demonstrable ability to write and optimize SQL and Spark jobs. Some experience with Apache Kafka and Airflow is a prerequisite as well. · Hands-on experience with DevOps on hyperscaler platforms and Palantir Foundry is necessary. · Experience in MLOps is a plus. · Experience in developing and managing scalable architecture, and working experience in managing large data sets. · Open-source contributions (or personal repositories highlighting your work) on GitHub or Kaggle are a plus. · Experience with graph data and graph analysis libraries (such as Spark GraphX, Python NetworkX, etc.) is a plus. · A Palantir Foundry certification (Solution Architect, Data Engineer) is a plus; the certificate should be valid at the time of the interview. · Experience in developing GenAI applications is a plus. Mandatory skill sets: · At least 3 years of hands-on experience building and managing Ontologies on Palantir Foundry. · At least 3 years of experience with Foundry services. Preferred skill sets: Palantir Foundry. Years of experience required: 4 to 7 years (3+ years relevant). Education qualification: Bachelor's degree in computer science, data science or any other engineering discipline. Master’s degree is a plus. Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Bachelor of Science Degrees/Field of Study preferred: Certifications (if blank, certifications not specified) Required Skills Palantir (Software) Optional Skills Accepting Feedback, Active Listening, Analytical Thinking, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Creativity, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis {+ 16 more} Desired Languages (If blank, desired languages not specified) Travel Requirements Not Specified Available for Work Visa Sponsorship? No Government Clearance Required? No Job Posting End Date
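For orientation only, here is a minimal sketch of what a Python (PySpark) transform on Palantir Foundry typically looks like using the public transforms API; the dataset paths and columns are hypothetical and are not part of this role description.

```python
from transforms.api import transform_df, Input, Output
from pyspark.sql import functions as F

# Hypothetical dataset paths; in a real Foundry project these reference datasets
# registered on the platform, and the transform is scheduled by Foundry itself.
@transform_df(
    Output("/Company/analytics/clean_shipments"),
    shipments=Input("/Company/raw/shipments"),
)
def compute(shipments):
    # Basic refinement and data-quality checks of the kind described above.
    return (
        shipments
        .dropDuplicates(["shipment_id"])
        .withColumn("ship_date", F.to_date("ship_ts"))
        .filter(F.col("ship_date").isNotNull())
    )
```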
Posted 1 week ago
10.0 - 12.0 years
13 - 15 Lacs
Mumbai Suburban, Navi Mumbai, Mumbai (All Areas)
Work from Office
Complete HVAC design incl. heat load, airflow, chiller design, duct sizing, BOQ, documentation. Skilled in pharma HVAC, codes, Revit, AutoCAD, PDMS. Able to multitask, handle projects independently, coordinate, travel & relocate as needed.
Posted 1 week ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
The CDP ETL & Database Engineer will specialize in architecting, designing, and implementing solutions that are sustainable and scalable. The ideal candidate will understand CRM methodologies and will bring an analytical mindset and a background in relational modeling in a hybrid architecture. The candidate will help drive the business towards specific technical initiatives and will work closely with the Solutions Management, Delivery, and Product Engineering teams. The candidate will join a team of developers across the US, India & Costa Rica. Responsibilities ETL Development – The CDP ETL & Database Engineer will be responsible for building pipelines to feed downstream data processes. They will be able to analyze data, interpret business requirements, and establish relationships between data sets. The ideal candidate will be familiar with different encoding formats and file layouts such as JSON and XML. Implementations & Onboarding – Will work with the team to onboard new clients onto the ZMP/CDP+ platform. The candidate will solidify business requirements, perform ETL file validation, establish users, perform complex aggregations, and syndicate data across platforms. The hands-on engineer will take a test-driven approach towards development and will be able to document processes and workflows. Incremental Change Requests – The CDP ETL & Database Engineer will be responsible for analyzing change requests and determining the best approach towards the implementation and execution of each request. This requires the engineer to have a deep understanding of the platform's overall architecture. Change requests will be implemented and tested in a development environment to ensure their introduction will not negatively impact downstream processes. Change Data Management – The candidate will adhere to change data management procedures and actively participate in CAB meetings where change requests will be presented and reviewed. Prior to introducing change, the engineer will ensure that processes are running in a development environment. The engineer will be asked to do peer-to-peer code reviews and solution reviews before production code deployment. Collaboration & Process Improvement – The engineer will be asked to participate in knowledge-share sessions where they will engage with peers to discuss solutions, best practices, and the overall approach. The candidate will be able to look for opportunities to streamline processes with an eye towards building a repeatable model to reduce implementation duration. Job Requirements The CDP ETL & Database Engineer will be well versed in the following areas: Relational data modeling. ETL and FTP concepts. Advanced analytics using SQL functions. Cloud technologies - AWS, Snowflake. Able to decipher requirements, provide recommendations, and implement solutions within predefined parameters. The ability to work independently, but at the same time, the individual will be called upon to contribute in a team setting. The engineer will be able to confidently communicate status, raise exceptions, and voice concerns to their direct manager. Participate in internal client project status meetings with Solution/Delivery management. When required, collaborate with the Business Solutions Analyst (BSA) to solidify requirements. Ability to work in a fast-paced, agile environment; the individual will be able to work with a sense of urgency when escalated issues arise. Strong communication and interpersonal skills, with the ability to multitask and prioritize workload based on client demand. Familiarity with Jira for workflow management and time allocation.
Familiarity with the Scrum framework: backlog, planning, sprints, story points, retrospectives. Required Skills ETL – ETL tools such as Talend (preferred, not required); DMExpress – nice to have; Informatica – nice to have. Database – Hands-on experience with the following database technologies: Snowflake (required); MySQL/PostgreSQL – nice to have; familiarity with NoSQL DB methodologies – nice to have. Programming Languages – Can demonstrate knowledge of any of the following: PL/SQL; JavaScript – strong plus; Python – strong plus; Scala – nice to have. AWS – Knowledge of the following AWS services: S3; EMR (concepts); EC2 (concepts); Systems Manager / Parameter Store. Understands JSON data structures and key-value pairs. Working knowledge of code repositories such as Git and WinCVS; workflow management tools such as Apache Airflow, Kafka, Automic/Appworx; Jira. Minimum Qualifications Bachelor's degree or equivalent. 4+ years' experience. Excellent verbal and written communication skills. Self-starter, highly motivated. Analytical mindset. Company Summary Zeta Global is a NYSE-listed, data-powered marketing technology company with a heritage of innovation and industry leadership. Founded in 2007 by entrepreneur David A. Steinberg and John Sculley, former CEO of Apple Inc and Pepsi-Cola, the Company combines the industry’s 3rd largest proprietary data set (2.4B+ identities) with Artificial Intelligence to unlock consumer intent, personalize experiences and help our clients drive business growth. Our technology runs on the Zeta Marketing Platform, which powers ‘end to end’ marketing programs for some of the world’s leading brands. With expertise encompassing all digital marketing channels – Email, Display, Social, Search and Mobile – Zeta orchestrates acquisition and engagement programs that deliver results that are scalable, repeatable and sustainable. Zeta Global is an Equal Opportunity/Affirmative Action employer and does not discriminate on the basis of race, gender, ancestry, color, religion, sex, age, marital status, sexual orientation, gender identity, national origin, medical condition, disability, veterans status, or any other basis protected by law. Zeta Global Recognized in Enterprise Marketing Software and Cross-Channel Campaign Management Reports by Independent Research Firm https://www.forbes.com/sites/shelleykohan/2024/06/1G/amazon-partners-with-zeta-global-to-deliver-gen-ai-marketing-automation/ https://www.cnbc.com/video/2024/05/06/zeta-global-ceo-david-steinberg-talks-ai-in-focus-at-milken-conference.html https://www.businesswire.com/news/home/20240G04622808/en/Zeta-Increases-3Q%E2%80%GG24-Guidance https://www.prnewswire.com/news-releases/zeta-global-opens-ai--data-labs-in-san-francisco-and-nyc-300S45353.html https://www.prnewswire.com/news-releases/zeta-global-recognized-in-enterprise-marketing-software-and-cross-channel-campaign-management-reports-by-independent-research-firm-300S38241.html
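The posting names Apache Airflow among the workflow-management tools; as a hedged sketch only, the DAG below shows the general shape of an orchestrated feed-ingestion job. The DAG name, schedule, and task bodies are hypothetical placeholders, not Zeta's implementation.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_feed(**context):
    # Placeholder: pull an inbound file (e.g. from SFTP or S3) into a staging area.
    print("extracting feed")


def load_to_snowflake(**context):
    # Placeholder: run COPY INTO / MERGE statements against Snowflake.
    print("loading to Snowflake")


with DAG(
    dag_id="cdp_feed_ingest",  # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_feed", python_callable=extract_feed)
    load = PythonOperator(task_id="load_to_snowflake", python_callable=load_to_snowflake)

    extract >> load  # simple linear dependency
```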
Posted 1 week ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
The Data Services ETL Developer will specialize in data transformations and integration projects utilizing Zeta’s proprietary tools, third-party software, and coding. This role requires an understanding of CRM methodologies related to marketing operations. The candidate will be responsible for implementing data processing across multiple technologies, supporting a high volume of tasks with the expectation of accurate and on-time delivery. Responsibilities Manipulate client and internal marketing data across multiple platforms and technologies. Automate scripts that transfer and manipulate data feeds (internal and external). Build, deploy, and manage cloud-based data pipelines using AWS services. Manage multiple tasks with competing priorities and ensure timely client deliverability. Work with technical staff to maintain and support a proprietary ETL environment. Collaborate with database/CRM teams, modelers, analysts, and application programmers to deliver results for clients. Job Requirements Coverage of US time zones and in-office attendance a minimum of three days per week. Experience in database marketing with the ability to transform and manipulate data. Knowledge of US and international postal address standards, with exposure to SAP postal products (DQM). Proficient with AWS services (S3, Airflow, RDS, Athena) for data storage, processing, and analysis. Experience with Oracle and Snowflake SQL to automate scripts for marketing data processing. Familiarity with tools like Snowflake, Airflow, GitLab, Grafana, LDAP, OpenVPN, DCWEB, Postman, and Microsoft Excel. Knowledge of SQL Server, including data exports/imports, running SQL Server Agent Jobs, and SSIS packages. Proficiency with editors like Notepad++ and UltraEdit (or similar tools). Understanding of SFTP and PGP to ensure data security and client data protection. Experience working with large-scale customer databases in a relational database environment. Proven ability to manage multiple tasks simultaneously. Strong communication and collaboration skills in a team environment. Familiarity with the project life cycle. Minimum Qualifications Bachelor’s degree or equivalent with 5+ years of experience in database marketing and cloud-based technologies. Strong understanding of data engineering concepts and cloud infrastructure. Excellent oral and written communication skills.
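Since the role pairs AWS storage with Snowflake SQL automation, here is a small, hedged sketch of scripting a Snowflake load from an S3 stage in Python using the Snowflake connector; the account details, stage, table, and file format are placeholders rather than details from the posting.

```python
import os

import snowflake.connector

# Connection details are placeholders supplied via environment variables.
conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="ETL_WH",
    database="MARKETING",
    schema="STAGING",
)

try:
    cur = conn.cursor()
    # Hypothetical external stage pointing at an S3 prefix of inbound marketing files.
    cur.execute("""
        COPY INTO STAGING.CUSTOMER_FEED
        FROM @S3_INBOUND_STAGE/customer_feed/
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
        ON_ERROR = 'ABORT_STATEMENT'
    """)
    print("rows loaded:", cur.rowcount)
finally:
    conn.close()
```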
Posted 1 week ago
5.0 - 9.0 years
14 - 24 Lacs
Hyderabad
Hybrid
Experience: Required: Bachelor's degree in computer science or engineering. 7+ years of experience with data analytics, data modeling, and database design. 5+ years of experience with Vertica. 2+ years of coding and scripting (Python, Java, Scala) and design experience. 2+ years of experience with Airflow. Experience with ELT methodologies and tools. Experience with GitHub. Expertise in tuning and troubleshooting SQL. Strong data integrity, analytical, and multitasking skills. Excellent communication, problem-solving, organizational and analytical skills. Able to work independently. Additional/preferred skills: Familiarity with the agile project delivery process. Knowledge of SQL and its use in data access and analysis. Ability to manage diverse projects impacting multiple roles and processes. Ability to troubleshoot problem areas and identify data gaps and issues. Ability to adapt to a fast-changing environment. Experience designing and implementing automated ETL processes. Experience with the MicroStrategy reporting tool.
Posted 1 week ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation. At YASH, we’re a cluster of the brightest stars working with cutting-edge technologies. Our purpose is anchored in a single truth – bringing real positive changes in an increasingly virtual world – and it drives us beyond generational gaps and disruptions of the future. We are looking to hire AWS Professionals in the following areas: AWS Data Engineer. JD as below. Primary skillsets: AWS services including Glue, PySpark, SQL, Databricks, Python. Secondary skillsets: any ETL tool, GitHub, DevOps (CI/CD). Experience: 3-4 years. Degree in computer science, engineering, or similar fields. Mandatory skill set: Python, PySpark, SQL, AWS, with experience designing, developing, testing and supporting data pipelines and applications. 3+ years of working experience in data integration and pipeline development. 3+ years of experience with AWS Cloud on data integration with a mix of Apache Spark, Glue, Kafka, Kinesis, and Lambda in S3, Redshift, RDS, MongoDB/DynamoDB ecosystems; Databricks and Redshift experience is a major plus. 3+ years of experience using SQL in the development of data warehouse projects/applications (Oracle & SQL Server). Strong real-life experience in Python development, especially in PySpark in an AWS Cloud environment. Strong SQL and NoSQL database skills with MySQL, Postgres, DynamoDB, Elasticsearch. Workflow management tools like Airflow. AWS cloud services: RDS, AWS Lambda, AWS Glue, AWS Athena, EMR (equivalent tools in the GCP stack will also suffice). Good to have: Snowflake, Palantir Foundry. At YASH, you are empowered to create a career that will take you to where you want to go while working in an inclusive team environment. We leverage career-oriented skilling models and optimize our collective intelligence aided with technology for continuous learning, unlearning, and relearning at a rapid pace and scale. Our Hyperlearning workplace is grounded upon four principles: flexible work arrangements, free spirit, and emotional positivity; agile self-determination, trust, transparency, and open collaboration; all support needed for the realization of business goals; stable employment with a great atmosphere and ethical corporate culture.
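As a rough sketch of the Glue/PySpark work named in the primary skillset (an illustrative assumption, not part of the posting), the skeleton below shows the usual shape of a Glue Spark job: read from the Data Catalog, apply a transformation, and write curated output to S3. The database, table, and bucket names are hypothetical.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

# Standard Glue job bootstrap.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Hypothetical Glue Data Catalog source table.
dyf = glue_context.create_dynamic_frame.from_catalog(database="raw_zone", table_name="orders")

# Simple cleanup step before publishing.
cleaned = dyf.toDF().dropDuplicates(["order_id"]).withColumn("load_date", F.current_date())

# Write curated output to a hypothetical S3 prefix in Parquet.
cleaned.write.mode("overwrite").parquet("s3://example-bucket/curated/orders/")

job.commit()
```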
Posted 1 week ago
10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description: Job Title: Databricks Infrastructure Engineer Location: Hyderabad/Bengaluru Job Summary: We are looking for a skilled Databricks Infrastructure Engineer to design, build, and manage the cloud infrastructure that supports Databricks development efforts. This role will focus on creating and maintaining scalable, secure, and automated infrastructure environments using Terraform and other Infrastructure-as-Code (IaC) tools. The infrastructure will enable data engineers and developers to efficiently create notebooks, pipelines, and ingest data following the Medallion architecture (Bronze, Silver, Gold layers). The ideal candidate will have strong cloud engineering skills, deep knowledge of Terraform, and hands-on experience with Databricks platform provisioning. Key Responsibilities: Infrastructure Design & Provisioning: Design and implement scalable and secure infrastructure environments to support Databricks workloads aligned with the Medallion architecture. Develop and maintain Infrastructure-as-Code (IaC) scripts and templates using Terraform and/or ARM templates for provisioning Databricks workspaces, clusters, storage accounts, networking, and related Azure resources. Automate the setup of data ingestion pipelines, storage layers (Bronze, Silver, Gold), and access controls necessary for smooth data operations. Platform Automation & Optimization: Create automated deployment pipelines integrated with CI/CD tools (e.g., Azure DevOps, Jenkins) to ensure repeatable and consistent infrastructure provisioning. Optimize infrastructure configurations to balance performance, scalability, security, and cost-effectiveness. Monitor infrastructure health and perform capacity planning to support evolving data workloads. Implement and maintain backup, recovery, and disaster recovery strategies for Databricks environments. Optimize performance of Databricks clusters, jobs, and SQL endpoints. Automate routine administration tasks using scripting and orchestration tools. Troubleshoot platform issues, identify root causes, and implement solutions. Security & Governance: Implement security best practices including network isolation, encryption, identity and access management (IAM), and role-based access control (RBAC) within the infrastructure. Collaborate with governance teams to embed compliance and audit requirements into infrastructure automation. Collaboration & Support: Work closely with data engineers, data scientists, and platform administrators to understand infrastructure requirements and deliver solutions that enable efficient data engineering workflows. Provide documentation and training on infrastructure setup, usage, and best practices. Troubleshoot infrastructure issues and coordinate with cloud and platform support teams for resolution. Stay up to date with Databricks features, releases, and best practices Required Qualifications: 10+ years of experience in Databricks and cloud infrastructure engineering, preferably with Azure Strong hands-on experience writing Infrastructure-as-Code using Terraform; experience with ARM templates or CloudFormation is a plus. Practical knowledge of provisioning and managing Databricks environments and associated cloud resources. Familiarity with Medallion architecture and data lake house concepts. Experience with CI/CD pipeline creation and automation tools such as Azure DevOps, Jenkins, or GitHub Actions. Solid understanding of cloud networking, storage, security, and identity management. 
Proficiency in scripting languages such as Python, Bash, or PowerShell. Strong collaboration and communication skills to work across cross-functional teams. Preferred Skills: Prior experience working with Databricks platform, including workspace and cluster management. Knowledge of data governance tools and practices. Experience with monitoring and logging tools (e.g., Azure Monitor, CloudWatch). Exposure to containerization and orchestration technologies such as Docker and Kubernetes. Understanding of data ingestion frameworks and pipeline orchestration tools like Apache Airflow or Azure Data Factory. Weekly Hours: 40 Time Type: Regular Location: IND:AP:Hyderabad / Argus Bldg 4f & 5f, Sattva, Knowledge City- Adm: Argus Building, Sattva, Knowledge City It is the policy of AT&T to provide equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state or local law. In addition, AT&T will provide reasonable accommodations for qualified individuals with disabilities. AT&T is a fair chance employer and does not initiate a background check until an offer is made.
Posted 1 week ago
6.0 - 8.0 years
15 - 27 Lacs
Bengaluru
Work from Office
Job Summary We are seeking a Senior Data Engineer to join our growing data team, where you will help build and scale the data infrastructure powering analytics, machine learning, and product innovation. As a Senior Data Engineer, you will be responsible for designing, building, and optimizing robust, scalable, and secure data pipelines and platforms. You will work closely with data scientists, software engineers, and product teams to deliver clean, reliable data for critical business and clinical applications. Key Responsibilities: Design, implement, and optimize complex data pipelines using advanced SQL, ETL tools, and integration technologies. Collaborate with cross-functional teams to implement optimal data solutions for advanced analytics and data science initiatives. Spearhead process improvements, including automation, data delivery optimization, and infrastructure redesign for scalability. Evaluate and recommend emerging data technologies to build comprehensive data integration strategies. Lead technical discovery processes, defining complex requirements and mapping out detailed scenarios. Develop and maintain data governance policies and procedures. What You'll Need to Be Successful (Required Skills): 5-7 years of experience in data engineering or related roles. Advanced proficiency in multiple programming languages (e.g., Python, Java, Scala) and expert-level SQL knowledge. Extensive experience with big data technologies (Hadoop ecosystem, Spark, Kafka) and cloud-based environments (Azure, AWS, or GCP). Proven experience in designing and implementing large-scale data warehousing solutions. Deep understanding of data modeling techniques and enterprise-grade ETL tools. Demonstrated ability to solve complex analytical problems. Education/Certifications: Bachelor's degree in computer science, Information Management or a related field. Preferred Skills: Experience in the healthcare industry, including clinical, financial, and operational data. Knowledge of machine learning and AI technologies and their data requirements. Familiarity with data visualization tools and real-time data processing. Understanding of data privacy regulations and experience implementing compliant solutions. Note: We work 5 days from office - India regular shift. Netsmart, India has set up our new Global Capability Centre (GCC) at Godrej Centre, Byatarayanapura (Hebbal area) - (https://maps.app.goo.gl/RviymAeGSvKZESSo6).
Posted 1 week ago
5.0 - 8.0 years
15 - 22 Lacs
Gurugram
Work from Office
Experience: 6-8 years overall, with at least 2-3 years of deep hands-on experience in each key area below. What you’ll do Own and evolve our end-to-end data platform, ensuring robust pipelines, data lakes, and warehouses with 100% uptime. Build and maintain real-time and batch pipelines using Debezium, Kafka, Spark, Apache Iceberg, Trino, and Clickhouse. Manage and optimize our databases (PostgreSQL, DocumentDB, MySQL RDS) for performance and reliability. Drive data quality management — understand, enrich, and maintain context for trustworthy insights. Develop and maintain reporting services for data exports, file deliveries, and embedded dashboards via Apache Superset. Use orchestration tools like Maestro (or similar DAG-based tools) for reliable, observable workflows. Leverage LLMs and other AI models to generate insights and automate agentic tasks that enhance analytics and reporting. Build domain expertise to solve complex data problems and deliver actionable business value. Collaborate with analysts, data scientists, and engineers to maximize the impact of our data assets. Write robust, production-grade Python code for pipelines, automation, and tooling. What you’ll bring Experience with our open-source data pipeline, data lake, and warehouse stack. Strong Python skills for data workflows and automation. Hands-on orchestration experience with Maestro, Airflow, or similar. Practical experience using LLMs or other AI models for data tasks. Solid grasp of data quality, enrichment, and business context. Experience with dashboards and BI using Apache Superset (or similar tools). Strong communication and problem-solving skills.
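To ground the real-time side of the stack described above (change events flowing through Kafka into the lake), here is a minimal, hedged Spark Structured Streaming sketch; the broker address, topic, payload schema, and storage paths are hypothetical, and the table format (Iceberg, Delta, plain Parquet) would depend on the actual platform.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

spark = SparkSession.builder.appName("orders_cdc_stream").getOrCreate()

# Hypothetical schema for the value payload of Debezium-style change events.
payload = StructType([
    StructField("order_id", StringType()),
    StructField("status", StringType()),
    StructField("amount", DoubleType()),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "kafka:9092")  # placeholder broker
    .option("subscribe", "orders.cdc")                # placeholder topic
    .load()
)

# Kafka delivers bytes; parse the JSON value into typed columns.
events = raw.select(F.from_json(F.col("value").cast("string"), payload).alias("e")).select("e.*")

# Land parsed events in the bronze layer, with a checkpoint for streaming bookkeeping.
query = (
    events.writeStream.format("parquet")
    .option("path", "s3a://example-bucket/bronze/orders/")
    .option("checkpointLocation", "s3a://example-bucket/checkpoints/orders/")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```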
Posted 1 week ago
2.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Data Analyst, Data Engineering & Analytics Location: India Department: IT About Company Rapid7 is seeking a Data Engineer, Data Engineering & Analytics to join a high-performing data engineering and reporting team. This role is responsible for participating in the management of a robust Snowflake infrastructure, data modeling in a modern tech stack, and optimizing the company’s Tableau reporting suite, ensuring that all business units have access to timely, accurate, and actionable data. This is a critical position that will help to develop and maintain the data strategy, architecture, and analytics capabilities at Rapid7, driving insights that enable business growth. The ideal candidate will have experience in data engineering, analytics, and business intelligence, with equal amounts of business and technical acumen. About Role Implement data modeling best practices to enhance data accessibility and reporting capabilities. Ensure data integrity, security, and compliance with industry standards and regulations. Document plans and results in user-stories, issues, PRs, the team’s handbook - following the tradition of documentation first! Implement the Corp Data philosophy in everything you do. Craft code that meets our internal standards for style, maintainability, and best practices for a high-scale database environment. Maintain and advocate for these standards through code review. Collaborate with IT and DevOps teams to optimize cloud infrastructure and data governance policies. Manage and enhance the existing Tableau reporting suite, ensuring self-service analytics and actionable insights for stakeholders. Design, develop, and extend DBT code repository to extend the Enterprise Dimensional Warehouse capabilities and infrastructure Develop and maintain a single source of truth for business metrics, ensuring consistency across reporting platforms. Approve data model changes as a Data Team Reviewer and code owner for specific database and data model schemas. Provide data modeling expertise to all Rapid7 teams through code reviews, pairing, and training to help deliver optimal, DRY, and scalable database designs and queries in Snowflake and in Tableau. Research and implement emerging trends in data analytics, visualization, and engineering, bringing innovative solutions to the organization. Align to data governance frameworks, policies, and best practices, in collaboration with existing teams, policies, and governance frameworks. Identify and lead opportunities for new data initiatives, ensuring Rapid7 remains data-driven and insights-powered. What You Bring to the Role Ability to thrive in a fast-paced hybrid organization. Comfort working in a highly agile, intensely iterative environment. Demonstrated capacity to clearly and concisely communicate complex business activities, technical requirements, and recommendations. 2+ years of experience in data engineering, analytics, or business intelligence. 2+ years experience designing, implementing, operating, and extending enterprise dimensional data models. 2+ years experience building reports and dashboards in Tableau and/or other similar data visualization tools. Experience in DBT modeling and understanding modular, performant models. Solid understanding of Snowflake, SQL, and data warehouse management. Understanding of ETL/ELT processes, data pipelines, and cloud-based data architectures. Familiarity with modern data stacks (DBT, Airflow, Fivetran, Matillion, or similar tools). 
Ability to manage data governance, security, and compliance requirements (SOC 2, GDPR, etc.). A passion for continuous learning, innovation, and leveraging data for business impact. An individual with a genuine passion for data visualisation and the art of data storytelling
Posted 1 week ago
2.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Data Engineer, Data Engineering & Analytics Location: India Department: IT About Company Rapid7 is seeking a Data Engineer, Data Engineering & Analytics to join a high-performing data engineering and reporting team. This role is responsible for participating in the management of a robust Snowflake infrastructure, data modeling in a modern tech stack, and optimizing the company’s Tableau reporting suite, ensuring that all business units have access to timely, accurate, and actionable data. This is a critical position that will help to develop and maintain the data strategy, architecture, and analytics capabilities at Rapid7, driving insights that enable business growth. The ideal candidate will have experience in data engineering, analytics, and business intelligence, with equal amounts of business and technical acumen. About Role Implement data modeling best practices to enhance data accessibility and reporting capabilities. Ensure data integrity, security, and compliance with industry standards and regulations. Document plans and results in user-stories, issues, PRs, the team’s handbook - following the tradition of documentation first! Implement the Corp Data philosophy in everything you do. Craft code that meets our internal standards for style, maintainability, and best practices for a high-scale database environment. Maintain and advocate for these standards through code review. Collaborate with IT and DevOps teams to optimize cloud infrastructure and data governance policies. Manage and enhance the existing Tableau reporting suite, ensuring self-service analytics and actionable insights for stakeholders. Design, develop, and extend DBT code repository to extend the Enterprise Dimensional Warehouse capabilities and infrastructure Develop and maintain a single source of truth for business metrics, ensuring consistency across reporting platforms. Approve data model changes as a Data Team Reviewer and code owner for specific database and data model schemas. Provide data modeling expertise to all Rapid7 teams through code reviews, pairing, and training to help deliver optimal, DRY, and scalable database designs and queries in Snowflake and in Tableau. Research and implement emerging trends in data analytics, visualization, and engineering, bringing innovative solutions to the organization. Align to data governance frameworks, policies, and best practices, in collaboration with existing teams, policies, and governance frameworks. Identify and lead opportunities for new data initiatives, ensuring Rapid7 remains data-driven and insights-powered. What You Bring to the Role Ability to thrive in a fast-paced hybrid organization. Comfort working in a highly agile, intensely iterative environment. Demonstrated capacity to clearly and concisely communicate complex business activities, technical requirements, and recommendations. 2+ years of experience in data engineering, analytics, or business intelligence. 2+ years experience designing, implementing, operating, and extending enterprise dimensional data models. 2+ years experience building reports and dashboards in Tableau and/or other similar data visualization tools. Experience in DBT modeling and understanding modular, performant models. Solid understanding of Snowflake, SQL, and data warehouse management. Understanding of ETL/ELT processes, data pipelines, and cloud-based data architectures. Familiarity with modern data stacks (DBT, Airflow, Fivetran, Matillion, or similar tools). 
Ability to manage data governance, security, and compliance requirements (SOC 2, GDPR, etc.). A passion for continuous learning, innovation, and leveraging data for business impact.
Posted 1 week ago
3.0 years
0 Lacs
Thane, Maharashtra, India
On-site
Company Description Quantanite is a business process outsourcing (BPO) and customer experience (CX) solutions company that helps fast-growing companies and leading global brands to transform and grow. We do this through a collaborative and consultative approach, rethinking business processes and ensuring our clients employ the optimal mix of automation and human intelligence. We’re an ambitious team of professionals spread across four continents and looking to disrupt our industry by delivering seamless customer experiences for our clients, backed up with exceptional results. We have big dreams and are constantly looking for new colleagues to join us who share our values, passion, and appreciation for diversity. Job Description We are looking for a Python Backend Engineer with exposure to AI engineering to join our team in building a scalable, cognitive data platform. This platform will crawl and process unstructured data sources, enabling intelligent data extraction and analysis. The ideal candidate will have deep expertise in backend development using FastAPI, RESTful APIs, SQL, and Azure data technologies, with a secondary focus on integrating AI/ML capabilities into the product. Core Responsibilities Design and develop high-performance backend services using Python (FastAPI). Develop RESTful APIs to support data ingestion, transformation, and AI-based feature access (a minimal illustrative sketch follows this posting). Work closely with DevOps and data engineering teams to integrate backend services with Azure data pipelines and databases. Manage database schemas, write complex SQL queries, and support ETL processes using Python-based tools. Build secure, scalable, and production-ready services following best practices in logging, authentication, and observability. Implement background tasks and async event-driven workflows for data crawling and processing. AI Engineering Contributions: Support integration of AI models (NLP, summarization, information retrieval) within backend APIs. Collaborate with the AI team to deploy lightweight inference pipelines using PyTorch, TensorFlow, or ONNX. Participate in training data pipeline design and minor model fine-tuning as needed for business logic. Contribute to the testing, logging, and monitoring of AI agent behavior in production environments. Qualifications 3+ years of experience in Python backend development, with strong experience in FastAPI or equivalent frameworks. Solid understanding of RESTful API design, asynchronous programming, and web application architecture. Proficiency in working with relational databases (e.g., PostgreSQL, MS SQL Server) and Azure cloud services. Experience with ETL workflows, job scheduling, and data pipeline orchestration (Airflow, Prefect, etc.). Exposure to machine learning libraries (e.g., Scikit-learn, Transformers, OpenAI APIs) is a plus. Familiarity with containerization (Docker), CI/CD practices, and performance tuning. A mindset of code quality, scalability, documentation, and collaboration. Additional Information Benefits At Quantanite, we ask a lot of our associates, which is why we give so much in return. In addition to your compensation, our perks include: Dress: Wear anything you like to the office. We want you to feel as comfortable as when working from home. Employee Engagement: Experience our family community and embrace our culture where we bring people together to laugh and celebrate our achievements. Professional development: We love giving back and ensure you have opportunities to grow with us and even travel on occasion. 
Events: Regular team and organisation-wide get-togethers and events. Value orientation: Everything we do at Quantanite is informed by our Purpose and Values. We Build Better. Together. Future development: At Quantanite, you’ll have a personal development plan to help you improve in the areas you’re looking to develop over the coming years. Your manager will dedicate time and resources to supporting you in getting you to the next level. You’ll also have the opportunity to progress internally. As a fast-growing organization, our teams are growing, and you’ll have the chance to take on more responsibility over time. So, if you’re looking for a career full of purpose and potential, we’d love to hear from you!
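The Quantanite role centres on FastAPI services with background and async event-driven workflows for crawling. A minimal sketch of how such an endpoint might hand a crawl job off to a background task; the endpoint path, request model, and helper name are illustrative assumptions, not taken from the posting:

```python
from fastapi import BackgroundTasks, FastAPI
from pydantic import BaseModel

app = FastAPI()

class CrawlRequest(BaseModel):
    url: str

def crawl_and_index(url: str) -> None:
    # Placeholder for the actual crawling, extraction, and indexing logic.
    ...

@app.post("/crawl", status_code=202)
async def schedule_crawl(req: CrawlRequest, background_tasks: BackgroundTasks):
    # Queue the crawl so the API responds immediately while the work runs in the background.
    background_tasks.add_task(crawl_and_index, req.url)
    return {"status": "accepted", "url": req.url}
```

For heavier, longer-running jobs a dedicated queue (Celery, Azure queues, etc.) would usually replace in-process background tasks, but the request/acknowledge pattern stays the same.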
Posted 1 week ago
6.0 years
0 Lacs
Thane, Maharashtra, India
On-site
About the Company
Blue Star Limited is India's leading air conditioning and commercial refrigeration company with over eight decades of experience in providing expert cooling solutions. It fulfils the cooling requirements of a large number of corporate, commercial and residential customers, and also offers products such as water purifiers, air purifiers and air coolers. It also provides expertise in allied contracting activities such as electrical, plumbing and fire-fighting services in order to provide turnkey solutions, apart from the execution of specialised industrial projects.

About the Role
The role involves technical expertise in cooling technologies, product development, design of dehumidification systems, troubleshooting, industry trends, training, consultation, and project management.

Responsibilities
- Technical Expertise: In-depth knowledge of cooling technologies, including air conditioning units, refrigeration systems, heat pumps, and various types of heat exchangers. Understand and apply principles of thermodynamics, fluid mechanics, and heat transfer to cooling systems and heat exchanger designs (a small worked example follows this listing). In-depth knowledge of airflow control systems and defining new algorithms. Understanding of different dehumidification processes and technologies. System integration of mechanical, electrical, electronic and refrigerant control components. Component selection based on the specification requirements.
- Product Development: Participate in the design and development of new cooling systems, dehumidification technologies and heat exchangers. Conduct research on emerging technologies and industry trends to incorporate innovative solutions. Collaborate with engineering and design teams to create efficient and cost-effective products.
- Testing and Evaluation: Develop and implement testing protocols for cooling systems and heat exchangers. Analyse performance metrics such as efficiency, capacity, reliability, and environmental impact. Identify areas for improvement and recommend design modifications based on test results.
- Troubleshooting and Problem Solving: Provide technical support to resolve complex issues related to cooling systems and heat exchangers. Diagnose problems, recommend solutions, and oversee the implementation of corrective actions.
- Industry Trends and Innovation: Stay updated with the latest advancements in cooling technology and heat exchanger design. Participate in industry conferences, seminars, and forums to exchange knowledge and gain insights. Evaluate and implement new technologies and best practices to enhance product offerings.
- Training and Education: Develop training materials and conduct workshops for engineers, technicians, and other professionals. Provide mentorship and guidance to junior team members to ensure knowledge transfer and skill development.
- Consultation and Advisory Role: Act as a consultant for projects involving cooling technology and heat exchangers. Offer expertise in system design, energy efficiency optimisation, sustainability practices, and cost-effectiveness. Collaborate with standard-making agencies to provide recommendations.
- Project Management: Manage projects related to cooling systems and heat exchangers, ensuring adherence to timelines, budgets, and resource allocation. Coordinate with cross-functional teams to achieve project objectives.

Qualifications
M.Tech / PhD in Mechanical or similar fields with 6+ years of experience in air conditioning product development.
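As a rough illustration of the heat-transfer fundamentals the role calls for, the sensible cooling load of an air stream can be estimated as Q = m_dot x cp x dT. A small worked calculation; all figures are illustrative assumptions, not values from the posting:

```python
# Sensible cooling load: Q = m_dot * cp * dT (illustrative figures only)
m_dot = 0.5                # air mass flow rate, kg/s
cp_air = 1.006             # specific heat of air, kJ/(kg*K)
t_in, t_out = 35.0, 24.0   # inlet / outlet air temperatures, deg C

q_kw = m_dot * cp_air * (t_in - t_out)   # sensible load in kW
print(f"Sensible cooling load: {q_kw:.2f} kW")   # ~5.53 kW
```

A full design calculation would also account for the latent (dehumidification) load, which the role's dehumidification focus implies.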
Posted 1 week ago
5.0 - 8.0 years
5 - 8 Lacs
Bengaluru
Work from Office
Skills desired:
- Strong at SQL (multi-pyramid SQL joins)
- Python skills (FastAPI or Flask framework)
- PySpark
- Commitment to work in overlapping hours
- GCP knowledge (BQ, DataProc and Dataflow)
- Amex experience is preferred (not mandatory)
- Power BI preferred (not mandatory)
Key skills: Flask, PySpark, Python, SQL
Posted 1 week ago
6.0 - 9.0 years
8 - 11 Lacs
Hyderabad, Bengaluru, Delhi / NCR
Work from Office
We are seeking a Sr. Data Engineer to join our Data Engineering team within our Enterprise Data Insights organization to build data solutions, design and implement ETL/ELT processes, and manage our data platform to enable our cross-functional stakeholders. As part of our Corporate Engineering division, our vision is to spearhead technology and data-led solutions and experiences to drive growth and innovation at scale. The ideal candidate will have a strong data engineering background, advanced Python knowledge, and experience with cloud services and SQL/NoSQL databases. You will work closely with our cross-functional stakeholders in Product, Finance and GTM along with Business and Enterprise Technology teams.

As a Senior Data Engineer, you will:
- Collaborate closely with various stakeholders to prioritize requests, identify improvements, and offer recommendations.
- Take the lead in analyzing, designing, and implementing data solutions, which involves constructing and designing data models and ETL processes.
- Cultivate collaboration with corporate engineering, product teams, and other engineering groups.
- Lead and mentor engineering discussions, advocating for best practices.
- Actively participate in design and code reviews.
- Access and explore third-party data APIs to determine the data required to meet business needs.
- Ensure data quality and integrity across different sources and systems.
- Manage data pipelines for both analytics and operational purposes.
- Continuously enhance processes and policies to improve SLA and SOX compliance.

You'll be a great addition to the team if you:
- Hold a B.S., M.S., or Ph.D. in Computer Science or a related technical field.
- Possess over 5 years of experience in Data Engineering, focusing on building and maintaining data environments.
- Demonstrate at least 5 years of experience in designing and constructing ETL/ELT processes, managing data solutions within an SLA-driven environment.
- Exhibit a strong background in developing data products, APIs, and maintaining testing, monitoring, isolation, and SLA processes.
- Possess advanced knowledge of SQL/NoSQL databases (such as Snowflake, Redshift, MongoDB).
- Are proficient in programming with Python or other scripting languages.
- Have familiarity with columnar OLAP databases and data modeling.
- Have experience in building ELT/ETL processes using tools like dbt, Airflow, Fivetran, CI/CD using GitHub, and reporting in Tableau (a minimal Airflow sketch follows this listing).
- Possess excellent communication and interpersonal skills to effectively collaborate with various business stakeholders and translate requirements.

Added bonus if you also have:
- A good understanding of Salesforce and NetSuite systems.
- Experience in SaaS environments.
- Designed and deployed ML models.
- Experience with events and streaming data.

Location: Remote - Bengaluru, Hyderabad, Delhi / NCR, Chennai, Pune, Kolkata, Ahmedabad, Mumbai
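Since the role lists Airflow alongside dbt and Fivetran for ELT, here is a minimal Airflow DAG sketch showing a daily extract step followed by a quality check. DAG, task, and function names are illustrative assumptions; the `schedule` argument assumes Airflow 2.4 or later:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_from_api():
    # Pull raw records from a third-party API and land them in warehouse staging.
    ...

def run_quality_checks():
    # Validate row counts and null rates before downstream models consume the data.
    ...

with DAG(
    dag_id="daily_revenue_pipeline",       # illustrative name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_from_api", python_callable=extract_from_api)
    checks = PythonOperator(task_id="quality_checks", python_callable=run_quality_checks)

    extract >> checks
```

In a dbt-centred stack, the transform step would typically be a task that invokes `dbt run` after the extract, with Fivetran handling managed ingestion upstream.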
Posted 1 week ago
10.0 - 14.0 years
8 - 15 Lacs
Hyderabad, Bengaluru, Delhi / NCR
Hybrid
We are seeking a Sr. Data Engineer to join our Data Engineering team within our Enterprise Data Insights organization to build data solutions, design and implement ETL/ELT processes, and manage our data platform to enable our cross-functional stakeholders. As part of our Corporate Engineering division, our vision is to spearhead technology and data-led solutions and experiences to drive growth and innovation at scale. The ideal candidate will have a strong data engineering background, advanced Python knowledge, and experience with cloud services and SQL/NoSQL databases. You will work closely with our cross-functional stakeholders in Product, Finance and GTM along with Business and Enterprise Technology teams.

As a Senior Data Engineer, you will:
- Collaborate closely with various stakeholders to prioritize requests, identify improvements, and offer recommendations.
- Take the lead in analyzing, designing, and implementing data solutions, which involves constructing and designing data models and ETL processes.
- Cultivate collaboration with corporate engineering, product teams, and other engineering groups.
- Lead and mentor engineering discussions, advocating for best practices.
- Actively participate in design and code reviews.
- Access and explore third-party data APIs to determine the data required to meet business needs.
- Ensure data quality and integrity across different sources and systems.
- Manage data pipelines for both analytics and operational purposes.
- Continuously enhance processes and policies to improve SLA and SOX compliance.

You'll be a great addition to the team if you:
- Hold a B.S., M.S., or Ph.D. in Computer Science or a related technical field.
- Possess over 5 years of experience in Data Engineering, focusing on building and maintaining data environments.
- Demonstrate at least 5 years of experience in designing and constructing ETL/ELT processes, managing data solutions within an SLA-driven environment.
- Exhibit a strong background in developing data products, APIs, and maintaining testing, monitoring, isolation, and SLA processes.
- Possess advanced knowledge of SQL/NoSQL databases (such as Snowflake, Redshift, MongoDB).
- Are proficient in programming with Python or other scripting languages.
- Have familiarity with columnar OLAP databases and data modeling.
- Have experience in building ELT/ETL processes using tools like dbt, Airflow, Fivetran, CI/CD using GitHub, and reporting in Tableau.
- Possess excellent communication and interpersonal skills to effectively collaborate with various business stakeholders and translate requirements.

Job Title: Senior Software Engineer Full Stack
Location: Remote - Bengaluru, Hyderabad, Delhi / NCR, Chennai, Pune, Kolkata, Ahmedabad, Mumbai
Timings: 11 AM to 8 PM IST
Posted 1 week ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
AIML Engineer – Global Data Analytics, Technology (Maersk)
This position will be based in India – Bangalore/Pune.

A.P. Moller - Maersk
A.P. Moller – Maersk is the global leader in container shipping services. The business operates in 130 countries and employs 80,000 staff. An integrated container logistics company, Maersk aims to connect and simplify its customers' supply chains. Today, we have more than 180 nationalities represented in our workforce across 131 countries, which means we have an elevated level of responsibility to continue to build an inclusive workforce that is truly representative of our customers, their customers, and our vendor partners too. We are responsible for moving 20% of global trade and are on a mission to become the Global Integrator of Container Logistics. To achieve this, we are transforming into an industrial digital giant by combining our assets across air, land, ocean, and ports with our growing portfolio of digital assets to connect and simplify our customers' supply chains through global end-to-end solutions, all the while rethinking the way we engage with customers and partners.

The Brief
In this role as an Associate AIML Engineer on the Global Data and Analytics (GDA) team, you will support the development of strategic, visibility-driven recommendation systems that serve both internal stakeholders and external customers. This initiative aims to deliver actionable insights that enhance supply chain execution, support strategic decision-making, and enable innovative service offerings.

Data AI/ML (Artificial Intelligence and Machine Learning) Engineering involves the use of algorithms and statistical models to enable systems to analyse data, learn patterns, and make data-driven predictions or decisions without explicit human programming. AI/ML applications leverage vast amounts of data to identify insights, automate processes, and solve complex problems across a wide range of fields, including healthcare, finance, e-commerce, and more. AI/ML processes transform raw data into actionable intelligence, enabling automation, predictive analytics, and intelligent solutions. Data AI/ML combines advanced statistical modelling, computational power, and data engineering to build intelligent systems that can learn, adapt, and automate decisions.

What I'll be doing – your accountabilities
- Design, develop, and implement robust, scalable, and optimized machine learning and deep learning models, with the ability to iterate with speed.
- Write and integrate automated tests alongside models or code to ensure reproducibility, scalability, and alignment with established quality standards.
- Implement best practices in security, pipeline automation, and error handling using programming and data manipulation tools.
- Identify and implement the right data-driven approaches to solve ambiguous and open-ended business problems, leveraging data engineering capabilities.
- Research and implement new models, technologies, and methodologies and integrate these into production systems, ensuring scalability and reliability.
- Apply creative problem-solving techniques to design innovative tools, algorithms and optimized workflows.
- Independently manage and optimize data solutions, perform A/B testing, and evaluate the performance of systems.
- Understand technical tools and frameworks used by the team, including programming languages, libraries, and platforms, and actively support debugging or refining code in projects.
- Contribute to the design and documentation of AI/ML solutions, clearly detailing methodologies, assumptions, and findings for future reference and cross-team collaboration.
- Collaborate across teams to develop and implement high-quality, scalable AI/ML solutions that align with business goals, address user needs, and improve performance.

Foundational Skills
- Has mastered the concepts and can demonstrate programming skills in complex scenarios.
- Understands the following beyond the fundamentals and can demonstrate them in most situations without guidance: AI & Machine Learning, Data Analysis, Machine Learning Pipelines, Model Deployment.

Specialized Skills
- Understands the following beyond the fundamentals and can demonstrate them in most situations without guidance: Deep Learning, Statistical Analysis, Data Engineering, Big Data Technologies, Natural Language Processing (NLP), Data Architecture, Data Processing Frameworks.
- Understands the basic fundamentals of Technical Documentation and can demonstrate them in common scenarios with some guidance.

Qualifications & Requirements
- BSc/MSc/PhD in computer science, data science or a related discipline with 5+ years of industry experience building cloud-based ML solutions for production at scale, including solution architecture and solution design experience.
- Good problem-solving skills, for both technical and non-technical domains.
- Good broad understanding of ML and statistics covering standard ML for regression and classification, forecasting and time-series modeling, and deep learning (a minimal pipeline sketch follows this listing).
- 4+ years of hands-on experience building ML solutions in Python, including knowledge of common Python data science libraries (e.g. scikit-learn, PyTorch, etc.).
- Hands-on experience building end-to-end data products based on AI/ML technologies.
- Experience with collaborative development workflow: version control (we use GitHub), code reviews, DevOps (including automated testing), CI/CD.
- Strong foundation with expertise in neural networks, optimization techniques and model evaluation.
- Experience with LLMs and Transformer architectures (BERT, GPT, LLaMA, Mistral, Claude, Gemini, etc.).
- Proficiency in Python, LangChain, Hugging Face transformers, MLOps.
- Experience with Reinforcement Learning and multi-agent systems for decision-making in dynamic environments.
- Knowledge of multimodal AI (integrating text, image, and other data modalities into unified models).
- Team player, eager to collaborate.

Preferred Experiences
In addition to the basic qualifications, it would be great if you have:
- Hands-on experience with common OR solvers such as Gurobi.
- Experience with a common dashboarding technology (we use Power BI) or a web-based frontend such as Dash, Streamlit, etc.
- Experience working in cross-functional product engineering teams following agile development methodologies (Scrum/Kanban/…).
- Experience with Spark and distributed computing.
- Strong hands-on experience with MLOps solutions, including open-source solutions.
- Experience with cloud-based orchestration technologies, e.g. Airflow, KubeFlow, etc.
- Experience with containerization (Kubernetes & Docker).

As a performance-oriented company, we strive to always recruit the best person for the job – regardless of gender, age, nationality, sexual orientation or religious beliefs. We are proud of our diversity and see it as a genuine source of strength for building high-performing teams. Maersk is committed to a diverse and inclusive workplace, and we embrace different styles of thinking. Maersk is an equal opportunities employer and welcomes applicants without regard to race, colour, gender, sex, age, religion, creed, national origin, ancestry, citizenship, marital status, sexual orientation, physical or mental disability, medical condition, pregnancy or parental leave, veteran status, gender identity, genetic information, or any other characteristic protected by applicable law. We will consider qualified applicants with criminal histories in a manner consistent with all legal requirements. We are happy to support your need for any adjustments during the application and hiring process. If you need special assistance or an accommodation to use our website, apply for a position, or to perform a job, please contact us by emailing accommodationrequests@maersk.com.
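For the standard regression/classification work the role describes, a minimal scikit-learn pipeline with a held-out evaluation might look like the sketch below. The data and feature set are synthetic placeholders standing in for engineered supply-chain features, not anything from the posting:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic placeholder data with a simple signal in two of the features.
rng = np.random.default_rng(42)
X = rng.normal(size=(1_000, 8))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=1_000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Scaling and the classifier live in one pipeline so preprocessing is fitted only on training data.
model = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1_000)),
])
model.fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test)))
```

Wrapping preprocessing and the model in one Pipeline also makes the artefact straightforward to version, test, and deploy, which matches the reproducibility and automated-testing expectations in the accountabilities above.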
Posted 1 week ago
3.0 - 8.0 years
8 - 12 Lacs
Hyderabad
Work from Office
About the Role:
We are looking for a highly skilled AI/ML Developer to join the core product team of QAPilot.io. The ideal candidate should come from a product-based or AI-first company, with a strong academic background from institutes like IITs, NITs, IIITs, or other Tier-1 engineering colleges. You will work on real-world AI problems related to test automation, software quality, and predictive engineering.

Key Responsibilities:
- Design, build, and deploy machine learning models for intelligent QA automation.
- Work on algorithms for test case optimization, bug prediction, pattern recognition, and data-driven QA insights.
- Apply techniques from supervised/unsupervised learning, NLP, and deep learning.
- Integrate ML models into the product using scalable and production-ready code.
- Continuously improve model performance through experimentation and feedback loops.
- Collaborate with full-stack developers, product managers, and QA experts.
- Explore LLMs, transformers, and generative AI for advanced test data generation and analysis.

Required Skills & Qualifications:
- B.Tech / M.Tech / MS in Computer Science, Data Science, or related fields from IIT/NIT/IIIT or other top-tier institutes.
- 3+ years of experience as an AI/ML Developer, preferably in product or AI-centric companies.
- Strong proficiency in Python and ML libraries (scikit-learn, TensorFlow, PyTorch).
- Experience in NLP, LLMs, or generative AI (preferred).
- Hands-on with the ML lifecycle: data wrangling, model training, evaluation, and deployment (an experiment-tracking sketch follows this listing).
- Familiarity with MLOps tools like MLflow, Docker, Airflow, or cloud platforms (AWS/GCP).
- Prior exposure to software testing, DevOps, or developer tooling is a plus.
- Strong analytical skills, attention to detail, and curiosity to solve open-ended problems.
- Portfolio, GitHub, or project links demonstrating AI/ML expertise are desirable.

Why Join QAPilot.io:
- Work on an innovative AI product transforming the software QA ecosystem.
- Join a high-impact, product-oriented engineering culture.
- Solve challenging AI problems with real user value.
- Collaborate with top talent from the tech and AI ecosystem.
- Competitive salary, learning-focused environment, and growth opportunities.

To Apply:
Please send your updated resume and any supporting links (GitHub, projects, publications).
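Since the role names MLflow among its MLOps tools, here is a minimal experiment-tracking sketch for a baseline bug-prediction model. The run name, parameters, and synthetic data are illustrative assumptions, not details from the posting:

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for real defect/test-history features.
X, y = make_classification(n_samples=2_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

with mlflow.start_run(run_name="bug-prediction-baseline"):  # illustrative run name
    params = {"n_estimators": 200, "max_depth": 8}
    clf = RandomForestClassifier(**params, random_state=0).fit(X_train, y_train)

    # Log hyperparameters, the evaluation metric, and the fitted model as run artifacts.
    mlflow.log_params(params)
    mlflow.log_metric("f1", f1_score(y_test, clf.predict(X_test)))
    mlflow.sklearn.log_model(clf, "model")
```

Tracking runs this way gives the feedback loop the posting mentions a concrete backbone: each experiment's parameters and metrics are comparable in the MLflow UI before a model is promoted.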
Posted 1 week ago
9.0 years
0 Lacs
Pune, Maharashtra, India
On-site
We are looking to fill this opportunity for one of our leading financial-domain clients.

Position: Big Data Developer (Apache Spark)
Location: Pune (Hybrid)
Experience: 6 – 9 years

Job Description:
- True hands-on developer in programming languages like Java or Scala.
- Expertise in Apache Spark (a minimal PySpark sketch follows this listing).
- Database modelling and working with any SQL or NoSQL database is a must.
- Working knowledge of scripting languages like shell/Python.
- Experience of working with Cloudera is preferred.
- Orchestration tools like Airflow or Oozie would be a value addition.
- Knowledge of table formats like Delta or Iceberg is a plus.
- Working experience with version control like Git and build tools like Maven is recommended.
- Software development experience is good to have along with data engineering experience.
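The posting emphasises Java or Scala, but to keep examples in one language this sketch shows the same kind of Spark batch transformation via PySpark: join two datasets, aggregate, and write a partitioned result. Paths, table names, and columns are illustrative assumptions:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-enrichment").getOrCreate()

# Placeholder input paths, not from the posting.
orders = spark.read.parquet("/data/raw/orders")
customers = spark.read.parquet("/data/raw/customers")

daily_revenue = (
    orders.join(customers, on="customer_id", how="inner")
    .groupBy("order_date", "customer_segment")
    .agg(
        F.sum("amount").alias("total_amount"),
        F.count("*").alias("order_count"),
    )
)

# Partitioning by date keeps downstream reads and reprocessing cheap.
daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet("/data/curated/daily_revenue")
```

On a Delta or Iceberg table (both named as pluses), the final write would target the table format instead of raw Parquet, gaining ACID semantics and schema evolution.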
Posted 1 week ago
13.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Lead AIML Engineer – Global Data Analytics, Technology (Maersk)
This position will be based in India – Bangalore/Pune.

A.P. Moller - Maersk
A.P. Moller – Maersk is the global leader in container shipping services. The business operates in 130 countries and employs 80,000 staff. An integrated container logistics company, Maersk aims to connect and simplify its customers' supply chains. Today, we have more than 180 nationalities represented in our workforce across 131 countries, which means we have an elevated level of responsibility to continue to build an inclusive workforce that is truly representative of our customers, their customers, and our vendor partners too. We are responsible for moving 20% of global trade and are on a mission to become the Global Integrator of Container Logistics. To achieve this, we are transforming into an industrial digital giant by combining our assets across air, land, ocean, and ports with our growing portfolio of digital assets to connect and simplify our customers' supply chains through global end-to-end solutions, all the while rethinking the way we engage with customers and partners.

The Brief
In this role as a Lead AIML Engineer on the Global Data and Analytics (GDA) team, you will support the development of strategic, visibility-driven recommendation systems that serve both internal stakeholders and external customers. This initiative aims to deliver actionable insights that enhance supply chain execution, support strategic decision-making, and enable innovative service offerings.

Data AI/ML (Artificial Intelligence and Machine Learning) Engineering involves the use of algorithms and statistical models to enable systems to analyse data, learn patterns, and make data-driven predictions or decisions without explicit human programming. AI/ML applications leverage vast amounts of data to identify insights, automate processes, and solve complex problems across a wide range of fields, including healthcare, finance, e-commerce, and more. AI/ML processes transform raw data into actionable intelligence, enabling automation, predictive analytics, and intelligent solutions. Data AI/ML combines advanced statistical modelling, computational power, and data engineering to build intelligent systems that can learn, adapt, and automate decisions.

What I'll be doing – your accountabilities
- Lead end-to-end AI/ML projects, from problem definition, feature selection, development, and implementation of models, through monitoring, retraining, infrastructure, and communication of results.
- Provide technical leadership on complex AI/ML projects, developing end-to-end machine learning pipelines and robust data models, and driving innovation in engineering practices.
- Address advanced AI/ML challenges, and evaluate and optimize existing data pipelines and frameworks for efficiency and cost-effectiveness using cutting-edge techniques.
- Architect and oversee scalable, production-ready data models and pipelines, solve complex issues, and lead work on optimization and performance of models, ensuring alignment with business needs.
- Collaborate with stakeholders and cross-functional teams and communicate insights to influence data strategy, product roadmaps, and scalable solutions through expertise in AI/ML techniques, tools, architectures, and business applications, delivering measurable positive impact.
- Design and advocate for resilient, secure, scalable, and sustainable data and AI/ML architectures while creating modernization plans for long-term innovation and maintainability.
- Evaluate and improve tools and methodologies, and assess industry practices to drive quality and innovation across AI/ML engineering initiatives.
- Mentor AI/ML engineers and other talent, promoting diversity, inclusion, and leadership development across all levels.
- Build relationships with stakeholders, champion best practices, and lead initiatives to deliver robust, scalable, future-proof data engineering solutions, while championing quality and modernization across the organization.
- Work across organizational boundaries to resolve challenges and influence shared roadmaps spanning multiple teams, ensuring scalable solutions that prioritize organizational objectives over team- or individual-specific ones, while aligning with evolving data engineering requirements.

Foundational Skills
- Has specialized in Machine Learning Pipelines, can easily demonstrate it in complex scenarios, and mentors/coaches others.
- Has mastered the concepts and can demonstrate the following skills in complex scenarios: Programming, AI & Machine Learning, Data Analysis, Model Deployment.

Specialized Skills
- Understands the following beyond the fundamentals and can demonstrate them in most situations without guidance: Deep Learning, Statistical Analysis, Data Engineering, Big Data Technologies, Natural Language Processing (NLP), Data Architecture, Data Processing Frameworks, Technical Documentation.
- Technical leadership experience in data integration and AI agentic solutions, including: connecting AI agents to various custom data sources (e.g., databases, APIs, internal document stores); implementing Retrieval Augmented Generation (RAG) patterns (a minimal retrieval sketch follows this listing); working with vector stores (e.g., Pinecone, Weaviate, ChromaDB, FAISS, etc.) and knowledge graphs; implementing agent memory storage and reasoning solutions; and using various multi-agent frameworks (e.g., AutoGen, CrewAI, or similar).

Qualifications & Requirements
- BSc/MSc/PhD in computer science, data science or a related discipline with 13+ years of industry experience building cloud-based ML solutions for production at scale, including solution architecture and design experience.
- 6+ years of hands-on experience building ML solutions in Python, including knowledge of common Python data science libraries (e.g. scikit-learn, PyTorch, etc.).
- Strong understanding and implementation experience of AI agent solutions.
- Hands-on experience building end-to-end data products based on recommendation technologies.
- Experience with collaborative development workflow: version control (we use GitHub), code reviews, DevOps (including automated testing), CI/CD.
- Communication and leadership experience, with experience initiating, driving and delivering projects.
- Team player, eager to collaborate.

Preferred Experiences
In addition to the basic qualifications, it would be great if you have:
- Experience as a tech lead or engineering manager (still hands-on).
- Experience with a common dashboarding technology (we use Power BI for now) or a web-based frontend such as Dash, Streamlit, etc.
- Experience working in cross-functional product engineering teams following agile development methodologies (Scrum/Kanban/…).
- Experience with Spark and distributed computing.
- Strong hands-on experience with MLOps solutions, including open-source solutions.
- Experience with cloud-based orchestration technologies, e.g. Airflow, KubeFlow, etc.
- Experience with containerization: Kubernetes & Docker.
- Experience with front-end frameworks such as React or Angular.
- Knowledge of data visualization using D3.js or Chart.js.

As a performance-oriented company, we strive to always recruit the best person for the job – regardless of gender, age, nationality, sexual orientation or religious beliefs. We are proud of our diversity and see it as a genuine source of strength for building high-performing teams. Maersk is committed to a diverse and inclusive workplace, and we embrace different styles of thinking. Maersk is an equal opportunities employer and welcomes applicants without regard to race, colour, gender, sex, age, religion, creed, national origin, ancestry, citizenship, marital status, sexual orientation, physical or mental disability, medical condition, pregnancy or parental leave, veteran status, gender identity, genetic information, or any other characteristic protected by applicable law. We will consider qualified applicants with criminal histories in a manner consistent with all legal requirements. We are happy to support your need for any adjustments during the application and hiring process. If you need special assistance or an accommodation to use our website, apply for a position, or to perform a job, please contact us by emailing accommodationrequests@maersk.com.
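The RAG pattern this role mentions can be sketched without committing to any particular vector store: embed documents, rank them against the query embedding, and prepend the best matches to the prompt. Everything below is an illustrative, in-memory stand-in; the `embed` function fakes an embedding model, and the sample documents are placeholders:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Stand-in for a real embedding model (e.g. a sentence-transformer or a hosted embedding API).
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=384)
    return v / np.linalg.norm(v)

# Tiny in-memory "vector store": document texts and their normalised embeddings.
docs = [
    "Container ETA is recalculated when a vessel misses its berthing window.",
    "Demurrage charges apply after the free time at the terminal expires.",
]
doc_vectors = np.stack([embed(d) for d in docs])

def retrieve(query: str, k: int = 1) -> list[str]:
    # Rank documents by cosine similarity to the query embedding (vectors are unit-length).
    scores = doc_vectors @ embed(query)
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

question = "Why did the ETA change?"
context = retrieve(question)
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
# `prompt` would then be sent to the chosen LLM.
```

In production, the list-and-matrix store would be replaced by one of the vector stores the posting names (Pinecone, Weaviate, ChromaDB, FAISS), but the retrieve-then-augment flow stays the same.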
Posted 1 week ago
5.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Job Title: Senior Data Engineer
Employment Type: Full-Time
Location: Ahmedabad, Onsite
Experience Required: 5+ Years

About Techiebutler
Techiebutler is looking for an experienced Data Engineer to develop and maintain scalable, secure data solutions. You will collaborate closely with data science, business analytics, and product development teams, deploying cutting-edge technologies and leveraging best-in-class third-party tools. You will also ensure compliance with security, privacy, and regulatory standards while aligning data solutions with industry best practices.

Tech Stack
- Languages: SQL, Python
- Pipeline Orchestration: Dagster (Legacy: Airflow)
- Data Stores: Snowflake, ClickHouse
- Platforms & Services: Docker, Kubernetes
- PaaS: AWS (ECS/EKS, DMS, Kinesis, Glue, Athena, S3)
- ETL: Fivetran, dbt
- IaC: Terraform (with Terragrunt)

Key Responsibilities
- Design, develop, and maintain robust ETL pipelines using SQL and Python.
- Orchestrate data pipelines using Dagster or Airflow (a minimal Dagster sketch follows this listing).
- Collaborate with cross-functional teams to meet data requirements and enable self-service analytics.
- Ensure seamless data flow using stream, batch, and Change Data Capture (CDC) processes.
- Use dbt for data transformation and modeling to support business needs.
- Monitor, troubleshoot, and improve data quality and consistency.
- Ensure all data solutions adhere to security, privacy, and compliance standards.

Essential Experience
- 5+ years of experience as a Data Engineer.
- Strong proficiency in SQL.
- Hands-on experience with modern cloud data warehousing solutions (Snowflake, BigQuery, Redshift).
- Expertise in ETL/ELT processes, batch, and streaming data processing.
- Proven ability to troubleshoot data issues and propose effective solutions.
- Knowledge of AWS services (S3, DMS, Glue, Athena).
- Familiarity with dbt for data transformation and modeling.

Desired Experience
- Experience with additional AWS services (EC2, ECS, EKS, VPC, IAM).
- Knowledge of Infrastructure as Code (IaC) tools like Terraform and Terragrunt.
- Proficiency in Python for data engineering tasks.
- Experience with orchestration tools like Dagster, Airflow, or AWS Step Functions.
- Familiarity with pub-sub, queuing, and streaming frameworks (AWS Kinesis, Kafka, SQS, SNS).
- Experience with CI/CD pipelines and automation for data processes.

Why Join Us?
- Opportunity to work on cutting-edge technologies and innovative data solutions.
- Be part of a collaborative team focused on delivering high-impact results.
- Competitive salary and growth opportunities.

If you're passionate about data engineering and want to take your career to the next level, apply now! We look forward to reviewing your application and potentially welcoming you to our team!
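Since the stack here standardises on Dagster (with Airflow as legacy), a minimal Dagster asset sketch gives a feel for the orchestration model: one asset produces raw data, a second asset depends on it purely through its parameter name. Asset names and the toy DataFrame are illustrative assumptions:

```python
import pandas as pd
from dagster import Definitions, asset

@asset
def raw_events() -> pd.DataFrame:
    # Stand-in for an extraction step (e.g. reading from S3, DMS, or a CDC stream).
    return pd.DataFrame({"user_id": [1, 1, 2], "amount": [10.0, 5.0, 7.5]})

@asset
def user_totals(raw_events: pd.DataFrame) -> pd.DataFrame:
    # Downstream asset: Dagster wires the dependency from the parameter name.
    return raw_events.groupby("user_id", as_index=False)["amount"].sum()

defs = Definitions(assets=[raw_events, user_totals])
```

Modelling pipelines as assets rather than tasks is the main mental shift from Airflow: the graph describes the data produced, and Dagster derives the execution order and lineage from it.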
Posted 1 week ago
8.0 - 13.0 years
25 - 40 Lacs
Hyderabad
Hybrid
Job Title: Tech Lead - GCP Data Engineer
Location: Hyderabad, India
Experience: 5+ Years
Job Type: Full-Time
Industry: IT / Software Services
Functional Area: Data Engineering / Cloud / Analytics
Role Category: Cloud Data Engineering

Position Overview
We are seeking a GCP Data Engineer with strong expertise in SQL, Python, and Google Cloud Platform (GCP) services including BigQuery, Cloud Composer, and Airflow. The ideal candidate will play a key role in building scalable, high-performance data solutions to support marketing analytics initiatives. This role involves collaboration with cross-functional global teams and provides an opportunity to work on cutting-edge technologies in a dynamic marketing data landscape.

Key Responsibilities
- Lead technical teams and coordinate with global stakeholders.
- Manage and estimate data development tasks and delivery timelines.
- Build and optimize data pipelines using GCP, especially BigQuery, Cloud Storage, and Cloud Composer (a minimal BigQuery sketch follows this listing).
- Work with Airflow DAGs, REST APIs, and data orchestration workflows.
- Collaborate on development and debugging of ETL pipelines, including IICS and Ascend IO (preferred).
- Perform complex data analysis across multiple sources to support business goals.
- Implement CI/CD pipelines and manage version control using Git.
- Troubleshoot and upgrade existing data systems and ETL chains.
- Contribute to data quality, performance optimization, and cloud-native solution design.

Required Skills & Qualifications
- Bachelor's or Master's in Computer Science, IT, or a related field.
- 5+ years of experience in Data Engineering or relevant roles.
- Strong expertise in GCP, BigQuery, Cloud Composer, and Airflow.
- Proficient in SQL, Python, and REST API development.
- Hands-on experience with IICS, MySQL, and data warehousing solutions.
- Knowledge of ETL tools like Ascend IO is a plus.
- Exposure to marketing analytics tools (e.g., Google Analytics, Blueconic, Klaviyo) is desirable.
- Familiarity with performance marketing concepts (segmentation, A/B testing, attribution modeling, etc.).
- Excellent communication and analytical skills.
- GCP certification is a strong plus.
- Experience working in Agile environments.

To Apply, Send Your Resume To: krishnanjali.m@technogenindia.com
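Much of this role is BigQuery plus Cloud Composer (GCP's managed Airflow). A minimal sketch of running a parameterised BigQuery query from Python with the official client library; the project, dataset, table, and column names are placeholders, not from the posting:

```python
import datetime

from google.cloud import bigquery

client = bigquery.Client(project="my-marketing-project")  # placeholder project id

sql = """
    SELECT campaign_id, SUM(spend) AS total_spend
    FROM `my-marketing-project.analytics.ad_spend`
    WHERE spend_date >= @start_date
    GROUP BY campaign_id
"""

# Query parameters keep the SQL safe from injection and easy to reuse across runs.
job_config = bigquery.QueryJobConfig(
    query_parameters=[
        bigquery.ScalarQueryParameter("start_date", "DATE", datetime.date(2024, 1, 1)),
    ]
)

for row in client.query(sql, job_config=job_config).result():
    print(row.campaign_id, row.total_spend)
```

In a Cloud Composer deployment, the same query would typically sit inside an Airflow task (for example via a BigQuery operator), with the DAG supplying the date parameter per scheduled run.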
Posted 1 week ago