6.0 - 11.0 years
7 - 10 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
We are seeking a Sr. Data Engineer to join our Data Engineering team within our Enterprise Data Insights organization to build data solutions, design and implement ETL/ELT processes, and manage our data platform to enable our cross-functional stakeholders. As part of our Corporate Engineering division, our vision is to spearhead technology- and data-led solutions and experiences that drive growth and innovation at scale. The ideal candidate will have a strong data engineering background, advanced Python knowledge, and experience with cloud services and SQL/NoSQL databases. You will work closely with our cross-functional stakeholders in Product, Finance, and GTM, along with Business and Enterprise Technology teams.

As a Senior Data Engineer, you will:
• Collaborate closely with various stakeholders to prioritize requests, identify improvements, and offer recommendations.
• Take the lead in analyzing, designing, and implementing data solutions, including constructing and designing data models and ETL processes.
• Cultivate collaboration with corporate engineering, product teams, and other engineering groups.
• Lead and mentor engineering discussions, advocating for best practices.
• Actively participate in design and code reviews.
• Access and explore third-party data APIs to determine the data required to meet business needs.
• Ensure data quality and integrity across different sources and systems.
• Manage data pipelines for both analytics and operational purposes.
• Continuously enhance processes and policies to improve SLA and SOX compliance.

You'll be a great addition to the team if you:
• Hold a B.S., M.S., or Ph.D. in Computer Science or a related technical field.
• Have over 5 years of experience in Data Engineering, focused on building and maintaining data environments.
• Have at least 5 years of experience designing and constructing ETL/ELT processes and managing data solutions within an SLA-driven environment.
• Have a strong background in developing data products and APIs and maintaining testing, monitoring, isolation, and SLA processes.
• Have advanced knowledge of SQL/NoSQL databases (such as Snowflake, Redshift, MongoDB).
• Are proficient in programming with Python or other scripting languages.
• Are familiar with columnar OLAP databases and data modeling.
• Have experience building ELT/ETL processes using tools like dbt, Airflow, and Fivetran, CI/CD using GitHub, and reporting in Tableau.
• Have excellent communication and interpersonal skills to collaborate effectively with business stakeholders and translate requirements.

Added bonus if you also have:
• A good understanding of Salesforce and NetSuite systems
• Experience in SaaS environments
• Designed and deployed ML models
• Experience with events and streaming data

Location - Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
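The orchestration stack this role names (Airflow scheduling a dbt transformation after an extract step) might look roughly like the sketch below. This is a minimal illustration, assuming a recent Apache Airflow 2.x release (2.4+); the DAG name, extract logic, and dbt project path are invented for the example and are not part of the posting.

```python
# Minimal sketch: a daily DAG that lands raw data and then runs a dbt
# transformation. All names and paths are hypothetical.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator


def extract_orders(**context):
    # Placeholder extract step: a real pipeline would pull from an API or a
    # Fivetran-synced staging table and land raw records in the warehouse.
    print("extracting raw orders for", context["ds"])


with DAG(
    dag_id="orders_elt",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={"retries": 1, "retry_delay": timedelta(minutes=5)},
) as dag:
    extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)

    # Transform in the warehouse with dbt once the extract has landed.
    transform = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/orders --select orders",
    )

    extract >> transform
```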
Posted 1 week ago
5.0 - 10.0 years
15 - 30 Lacs
Hyderabad
Work from Office
Greetings from TechnoGen!!! Thank you for taking the time to tell us about your competencies and skills, and for allowing us an opportunity to tell you about TechnoGen; we understand that your experience and expertise are relevant to the current opening with our clients.

About TechnoGen:
LinkedIn: https://www.linkedin.com/company/technogeninc/about/
TechnoGen, Inc. is an ISO 9001:2015, ISO 20000-1:2011, ISO 27001:2013, and CMMI Level 3 Global IT Services Company headquartered in Chantilly, Virginia. TechnoGen, Inc. (TGI) is a Minority & Women-Owned Small Business with over 20 years of experience providing end-to-end IT services and solutions to the public and private sectors. TGI provides highly skilled and certified professionals and has successfully executed more than 345 projects. TechnoGen is committed to helping our clients solve complex problems and achieve their goals, on time and under budget.

Please share the details below for further processing of your profile:
Total years of experience:
Relevant years of experience:
CTC (including variable):
ECTC:
Notice period:
Reason for change:
Current location:

Job Title: GCP Data Engineer
Required Experience: 5+ years
Work Mode: WFO, 4 days from office
Shift Time: UK shift, 12:00 PM IST to 09:00 PM IST
Location: Hyderabad

Job Summary:
As a GCP Data Engineer, you will need strong experience in SQL and Python. The ideal candidate should have hands-on expertise in Google Cloud Platform (GCP) services, especially BigQuery, Composer, and the Airflow framework, and a solid understanding of data engineering best practices. You will work closely with our internal teams and technology partners to deliver comprehensive and scalable marketing data and analytics solutions. This role offers the unique opportunity to engage with many technology platforms in a rapidly evolving marketing technology landscape.

Key Responsibilities:
• Technical oversight and team management of the developers, coordination with US-based Mattel resources, and estimation of work
• Strong knowledge of cloud computing platforms - Google Cloud
• Expertise in MySQL and SQL/PL
• Good experience in IICS
• Experience in ETL; Ascend.io is an added advantage
• GCP and BigQuery knowledge is a must; GCP certification is an added advantage
• Good experience in Google Cloud Storage (GCS), Cloud Composer, DAGs, and Airflow
• REST API development experience
• Strong analytical and problem-solving skills and efficient communication
• Experience in designing, implementing, and managing various ETL job execution flows
• Utilize Git for source version control
• Set up and maintain CI/CD pipelines
• Troubleshoot, debug, and upgrade existing applications and ETL job chains
• Comprehensive data analysis across complex data sets
• Ability to collaborate effectively across technical development teams and business departments

Qualifications:
• Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
• 5+ years of experience in data engineering or related roles
• Strong understanding of Google Cloud Platform and associated tools
• Proven experience in delivering consumer marketing data and analytics solutions for enterprise clients
• Strong knowledge of data management, ETL processes, data warehousing, and analytics platforms
• Experience with SQL and NoSQL databases
• Proficiency in the Python programming language
• Hands-on experience with data warehousing solutions
• Knowledge of marketing analytics tools and technologies, including but not limited to Google Analytics, Blueconic, Klaviyo, etc.
• Knowledge of performance marketing concepts such as targeting and segmentation, real-time optimization, A/B testing, attribution modeling, etc.
• Excellent communication skills with a track record of collaboration across multiple teams
• Strong collaboration skills and a team-oriented mindset
• Strong problem-solving skills, adaptability, and the ability to thrive in a dynamic and rapidly changing environment
• Experience working in Agile development environments

Best Regards,
Syam.M | Sr. IT Recruiter
syambabu.m@technogenindia.com
www.technogenindia.com | Follow us on LinkedIn
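As a rough illustration of the BigQuery work this GCP Data Engineer role centres on, the snippet below runs a parameterised query with the official google-cloud-bigquery client. It is a sketch only: the project, dataset, table, and column names are invented, and credentials are assumed to come from Application Default Credentials.

```python
# Hedged sketch of a parameterised BigQuery query; all object names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()

query = """
    SELECT channel, COUNT(*) AS sessions
    FROM `my-project.marketing.web_sessions`  -- hypothetical table
    WHERE session_date = @run_date
    GROUP BY channel
    ORDER BY sessions DESC
"""

job_config = bigquery.QueryJobConfig(
    query_parameters=[bigquery.ScalarQueryParameter("run_date", "DATE", "2024-01-01")]
)

for row in client.query(query, job_config=job_config).result():
    print(row.channel, row.sessions)
```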
Posted 1 week ago
3.0 - 5.0 years
5 - 12 Lacs
Hyderabad, Chennai
Work from Office
Greetings!!! Hiring GCP Data Engineers for the Chennai/Hyderabad location.
Skills - GCP, Python, PySpark, Airflow, SQL
Location - Chennai/Hyderabad (WFO)
Experience - 3 to 5 years
Interested candidates can share their resumes with anmol.bhatia@incedoinc.com
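A minimal PySpark transform of the kind this skill list implies is sketched below; the input path, columns, and aggregation are assumptions made up for the example, not requirements from the posting.

```python
# Small PySpark sketch: filter, derive a date column, aggregate, write back out.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_cleanup").getOrCreate()

orders = spark.read.parquet("gs://example-bucket/raw/orders/")  # hypothetical path

daily_revenue = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("order_date")
    .agg(
        F.sum("amount").alias("revenue"),
        F.countDistinct("customer_id").alias("customers"),
    )
)

daily_revenue.write.mode("overwrite").parquet("gs://example-bucket/curated/daily_revenue/")
```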
Posted 1 week ago
7.0 years
8 - 9 Lacs
Thiruvananthapuram
On-site
7 - 9 Years | 4 Openings | Trivandrum

Role description

Role Proficiency: This role requires proficiency in developing data pipelines, including coding and testing for ingesting, wrangling, transforming, and joining data from various sources. The ideal candidate should be adept in ETL tools such as Informatica, Glue, Databricks, and DataProc, with strong coding skills in Python, PySpark, and SQL. This position demands independence and proficiency across various data domains. Expertise in data warehousing solutions such as Snowflake, BigQuery, Lakehouse, and Delta Lake is essential, including the ability to calculate processing costs and address performance issues. A solid understanding of DevOps and infrastructure needs is also required.

Outcomes:
• Act creatively to develop pipelines/applications by selecting appropriate technical options, optimizing application development, maintenance, and performance through design patterns and reuse of proven solutions.
• Support the Project Manager in day-to-day project execution and account for the developmental activities of others.
• Interpret requirements, create optimal architecture, and design solutions in accordance with specifications.
• Document and communicate milestones/stages for end-to-end delivery.
• Code using best standards, debug, and test solutions to ensure best-in-class quality.
• Tune performance of code and align it with the appropriate infrastructure, understanding cost implications of licenses and infrastructure.
• Create data schemas and models effectively.
• Develop and manage data storage solutions, including relational databases, NoSQL databases, Delta Lakes, and data lakes.
• Validate results with user representatives, integrating the overall solution.
• Influence and enhance customer satisfaction and employee engagement within project teams.

Measures of Outcomes:
• Adherence to engineering processes and standards
• Adherence to schedule/timelines
• Adherence to SLAs where applicable
• Number of defects post delivery
• Number of non-compliance issues
• Reduction of recurrence of known defects
• Quick turnaround of production bugs
• Completion of applicable technical/domain certifications
• Completion of all mandatory training requirements
• Efficiency improvements in data pipelines (e.g. reduced resource consumption, faster run times)
• Average time to detect, respond to, and resolve pipeline failures or data issues
• Number of data security incidents or compliance breaches

Outputs Expected:
Code: Develop data processing code with guidance, ensuring performance and scalability requirements are met. Define coding standards, templates, and checklists. Review code for team and peers.
Documentation: Create/review templates, checklists, guidelines, and standards for design/process/development. Create/review deliverable documents, including design documents, architecture documents, infra costing, business requirements, source-target mappings, test cases, and results.
Configure: Define and govern the configuration management plan. Ensure compliance from the team.
Test: Review/create unit test cases, scenarios, and execution. Review test plans and strategies created by the testing team. Provide clarifications to the testing team.
Domain Relevance: Advise data engineers on the design and development of features and components, leveraging a deeper understanding of business needs. Learn more about the customer domain and identify opportunities to add value. Complete relevant domain certifications.
Manage Project: Support the Project Manager with project inputs. Provide inputs on project plans or sprints as needed. Manage the delivery of modules.
Manage Defects: Perform defect root cause analysis (RCA) and mitigation. Identify defect trends and implement proactive measures to improve quality.
Estimate: Create and provide input for effort and size estimation and plan resources for projects.
Manage Knowledge: Consume and contribute to project-related documents, SharePoint libraries, and client universities. Review reusable documents created by the team.
Release: Execute and monitor the release process.
Design: Contribute to the creation of design (HLD, LLD, SAD)/architecture for applications, business components, and data models.
Interface with Customer: Clarify requirements and provide guidance to the Development Team. Present design options to customers. Conduct product demos. Collaborate closely with customer architects to finalize designs.
Manage Team: Set FAST goals and provide feedback. Understand team members' aspirations and provide guidance and opportunities. Ensure team members are upskilled. Engage the team in projects. Proactively identify attrition risks and collaborate with BSE on retention measures.
Certifications: Obtain relevant domain and technology certifications.

Skill Examples:
• Proficiency in SQL, Python, or other programming languages used for data manipulation.
• Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc, and Azure ADF.
• Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud, particularly with data-related services (e.g. AWS Glue, BigQuery).
• Conduct tests on data pipelines and evaluate results against data quality and performance specifications.
• Experience in performance tuning.
• Experience in data warehouse design and cost improvements.
• Apply and optimize data models for efficient storage, retrieval, and processing of large datasets.
• Communicate and explain design/development aspects to customers.
• Estimate time and resource requirements for developing/debugging features/components.
• Participate in RFP responses and solutioning.
• Mentor team members and guide them in relevant upskilling and certification.

Knowledge Examples:
• Knowledge of various ETL services used by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/Dataflow, Azure ADF, and ADLF.
• Proficient in SQL for analytics and windowing functions.
• Understanding of data schemas and models.
• Familiarity with domain-related data.
• Knowledge of data warehouse optimization techniques.
• Understanding of data security concepts.
• Awareness of patterns, frameworks, and automation practices.

Additional Comments: We are seeking a highly experienced Senior Data Engineer to design, develop, and optimize scalable data pipelines in a cloud-based environment. The ideal candidate will have deep expertise in PySpark, SQL, and Azure Databricks, and experience with either AWS or GCP. A strong foundation in data warehousing, ELT/ETL processes, and dimensional modeling (Kimball/star schema) is essential for this role.

Must-Have Skills:
• 8+ years of hands-on experience in data engineering or big data development.
• Strong proficiency in PySpark and SQL for data transformation and pipeline development.
• Experience working in Azure Databricks or equivalent Spark-based cloud platforms.
• Practical knowledge of cloud data environments – Azure, AWS, or GCP.
• Solid understanding of data warehousing concepts, including Kimball methodology and star/snowflake schema design.
• Proven experience designing and maintaining ETL/ELT pipelines in production.
• Familiarity with version control (e.g., Git), CI/CD practices, and data pipeline orchestration tools (e.g., Airflow, Azure Data Factory).

Skills: Azure Data Factory, Azure Databricks, PySpark, SQL

About UST: UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world’s best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients’ organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.
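Since the role above leans on Azure Databricks and Delta Lake, here is a hedged sketch of a typical incremental upsert into a Delta table. The paths, join key, and table layout are assumptions for illustration; on Databricks the SparkSession and Delta support are already configured.

```python
# Sketch of an incremental MERGE (upsert) into a Delta table with delta-spark.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical daily batch of changed customer records.
updates = spark.read.parquet("/mnt/landing/customers/2024-01-01/")

target = DeltaTable.forPath(spark, "/mnt/lake/dim_customer")

(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```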
Posted 1 week ago
0 years
0 Lacs
Cochin
On-site
A Duct Fabricator is responsible for creating, assembling, and sometimes installing sheet metal ductwork used in HVAC (Heating, Ventilation, and Air Conditioning) systems. A Duct Fitter, also known as a Sheet Metal Duct Installer, is responsible for fabricating, installing, and maintaining ductwork systems for heating, ventilation, and air conditioning (HVAC). They work with sheet metal to create and assemble ducts according to blueprints and specifications, ensuring proper airflow and energy efficiency within buildings. Job Type: Permanent Pay: ₹8,086.00 - ₹41,407.41 per month Benefits: Health insurance Paid sick time Provident Fund Work Location: In person
Posted 1 week ago
5.0 years
4 - 8 Lacs
Hyderābād
On-site
We're looking for a Senior Data Engineer. This role is office-based at our Hyderabad office.

The Senior Data Engineer is responsible for creating effective technological solutions and for managing a team of specialists, including quality control of the work performed.

In this role you will:
• Work closely with the Product team to gather requirements and convert them into technical designs.
• Lead the team in the design of technology solutions that meet the business needs in terms of sustainability, scalability, performance, and security.
• Take overall responsibility for the technical development of Data Engineering.
• Collaborate with various delivery teams on the low-level design of data-oriented, ELT, or ETL projects in response to product requirements.
• Be responsible for implementing, disseminating, and adhering to CSOD's Data Engineering methodologies, processes, and principles.

You've got what it takes if you have:
• 5+ years of experience
• Demonstrable experience in delivering complex technology solutions
• Proven ability to quickly adapt to new technologies, concepts, and approaches – Essential
• Demonstrable experience in ETL/ELT processes – Essential
• Proven GCP, AWS, Confluent, and Elastic Cloud experience – Essential
• An understanding of cloud technologies and their application and benefits, e.g. Google, AWS – Essential
• Strong knowledge of orchestration tools like Airflow – Essential
• Experience with the programs involved in processing and transformation – Essential
• Proven expertise in Python
• Enterprise-level business-to-consumer databases (MySQL, Influx, Postgres, NoSQL, and so on) – Essential in MySQL, Desirable in others
• Excellent written and verbal communication skills; a highly professional individual – Essential
• Enterprise-level BI or visualization programs (Looker, QuickSight, Tableau, Qlik, Power BI, etc.) – Desirable
• Automation and AI experience on the data side – Desirable
• Good understanding of Agile, estimation, sprint planning, and so on

#LI-Onsite

Our Culture: Spark Greatness. Shatter Boundaries. Share Success. Are you ready? Because here, right now – is where the future of work is happening. Where curious disruptors and change innovators like you are helping communities and customers enable everyone – anywhere – to learn, grow and advance. To be better tomorrow than they are today.

Who We Are: Cornerstone powers the potential of organizations and their people to thrive in a changing world. Cornerstone Galaxy, the complete AI-powered workforce agility platform, meets organizations where they are. With Galaxy, organizations can identify skills gaps and development opportunities, retain and engage top talent, and provide multimodal learning experiences to meet the diverse needs of the modern workforce. More than 7,000 organizations and 100 million+ users in 180+ countries and in nearly 50 languages use Cornerstone Galaxy to build high-performing, future-ready organizations and people today. Check us out on LinkedIn, Comparably, Glassdoor, and Facebook!
Posted 1 week ago
0 years
0 Lacs
India
On-site
Location: IN - Hyderabad, Telangana
Goodyear Talent Acquisition Representative: Katrena Calimag-Rupera
Sponsorship Available: No
Relocation Assistance Available: No

STAFF DIGITAL SOFTWARE ENGINEER – Data Engineer

Are you interested in an exciting opportunity to help shape the user experience and design front-end applications for data-driven digital products that drive better process performance across a global company? The Data Driven Engineering and Global Information Technology groups at the Goodyear Technology India Center, Hyderabad, India are looking for a dynamic individual with a strong background in data engineering and infrastructure to partner with data scientists and information technology specialists, as well as our global technology and operations teams, to derive valuable insights from our expansive data sources and help develop data-driven solutions for important business applications across the company. Since its inception, the Data Science portfolio of projects continues to grow and includes areas of tire manufacturing, operations, business, and technology. The people in our Data Science group come from a broad range of backgrounds: Mathematics, Statistics, Cognitive Linguistics, Astrophysics, Biology, Computer Science, Mechanical, Electrical, Chemical, and Industrial Engineering, and of course - Data Science. This diverse group works together to develop innovative tools and methods for simulating, modeling, and analyzing complex processes throughout our company. We'd like you to help us build the next generation of data-driven applications for the company and be a part of the Information Technology and Data Driven Engineering teams.

What You Will Do
We think you'll be excited about having opportunities to:
• Design and build robust, scalable, and efficient data pipelines and ETL processes to support analytics, data science, and digital products.
• Collaborate with cross-functional teams to understand data requirements and implement solutions that integrate data from diverse sources.
• Lead the development, management, and optimization of cloud-based data infrastructure using platforms such as AWS, Azure, or GCP.
• Architect and maintain highly available and performant relational database systems (e.g., PostgreSQL, MySQL) and NoSQL systems (e.g., MongoDB, DynamoDB).
• Partner with data scientists to ensure efficient and secure data access for modeling, experimentation, and production deployment.
• Build and maintain data services and APIs to facilitate access to curated datasets across internal applications and teams.
• Implement DevOps and DataOps practices including CI/CD for data workflows, infrastructure as code, containerization (Docker), and orchestration (Kubernetes).
• Learn about the tire industry and tire manufacturing processes from subject matter experts.
• Be a part of cross-functional teams working together to deliver impactful results.

What We Expect
• Bachelor's degree in computer science or a similar technical field; preferred: Master's degree in computer science or a similar field.
• 5 or more years of experience designing and maintaining data pipelines, cloud-based data systems, and production-grade data workflows.
• Experience with the following technology groups:
• Strong experience in Python, Java, or other languages for data engineering and scripting.
• Deep knowledge of relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, DynamoDB), including query optimization and schema design.
• Experience designing and deploying solutions on cloud platforms like AWS (e.g., S3, Redshift, RDS), Azure, or GCP.
• Familiarity with data modeling, data warehousing, and distributed data processing frameworks (e.g., Apache Spark, Airflow, dbt).
• Understanding of RESTful APIs and integration of data services with applications.
• Hands-on experience with CI/CD tools (e.g., GitHub Actions, Jenkins), Docker, Kubernetes, and infrastructure-as-code frameworks.
• Solid grasp of software engineering best practices, including code versioning, testing, and performance optimization.
• Good teamwork skills - ability to work in a team environment and deliver results on time.
• Strong communication skills - capable of conveying information concisely to diverse audiences.

Goodyear is an Equal Employment Opportunity and Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to that individual's race, color, religion or creed, national origin or ancestry, sex (including pregnancy), sexual orientation, gender identity, age, physical or mental disability, ethnicity, citizenship, or any other characteristic protected by law.

Goodyear is one of the world’s largest tire companies. It employs about 68,000 people and manufactures its products in 53 facilities in 20 countries around the world. Its two Innovation Centers in Akron, Ohio and Colmar-Berg, Luxembourg strive to develop state-of-the-art products and services that set the technology and performance standard for the industry. For more information about Goodyear and its products, go to www.goodyear.com/corporate
Posted 1 week ago
9.0 - 14.0 years
4 Lacs
Bengaluru
Hybrid
Total Experience - 9 years and above
Location - Bangalore
Notice Period - Immediate to a maximum of 15 days

Job Description: Senior resources to work on the Batch AI platform. The core skillsets required are: Python, Ray, Spark, Hive, Iceberg, Kubernetes, Airflow, Druid, Superset. An AI/ML background is preferred but not mandatory. Design/architecture experience is preferred along with strong hands-on skills. Someone with 10+ years and a strong track record may be a good fit. The candidate needs to be based in Bangalore.

About Us: Grid Dynamics (Nasdaq: GDYN) is a digital-native technology services provider that accelerates growth and bolsters competitive advantage for Fortune 1000 companies. Grid Dynamics provides digital transformation consulting and implementation services in omnichannel customer experience, big data analytics, search, artificial intelligence, cloud migration, and application modernization. Grid Dynamics achieves high speed-to-market, quality, and efficiency by using technology accelerators, an agile delivery culture, and its pool of global engineering talent. Founded in 2006, Grid Dynamics is headquartered in Silicon Valley with offices across the US, UK, Netherlands, Mexico, India, and Central and Eastern Europe. To learn more about Grid Dynamics, please visit www.griddynamics.com. Follow us on Facebook, Twitter, and LinkedIn.
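Because the posting lists Ray next to Spark in the Batch AI stack, a toy example of Ray's task model is sketched below. The "scoring" workload is a stand-in invented for illustration, not anything from the actual platform.

```python
# Tiny Ray sketch: fan out independent tasks across a cluster and gather results.
import ray

ray.init()  # on a real cluster this would attach to the existing Ray head node


@ray.remote
def score_partition(partition_id: int) -> int:
    # Placeholder for per-partition batch inference or feature computation.
    return partition_id * partition_id


results = ray.get([score_partition.remote(i) for i in range(8)])
print(results)
```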
Posted 1 week ago
3.0 years
4 - 6 Lacs
Gurgaon
On-site
Company Description
At Nielsen, we are passionate about our work to power a better media future for all people by providing powerful insights that drive client decisions and deliver extraordinary results. Our talented, global workforce is dedicated to capturing audience engagement with content - wherever and whenever it’s consumed. Together, we are proudly rooted in our deep legacy as we stand at the forefront of the media revolution. When you join Nielsen, you will join a dynamic team committed to excellence, perseverance, and the ambition to make an impact together. We champion you, because when you succeed, we do too. We enable your best to power our future.

Job Description
This role will be part of a team that develops software that processes data captured every day from over a quarter of a million computer and mobile devices worldwide, measuring panelists' activities as they surf the Internet via browsers or use mobile apps downloaded from Apple's and Google's stores. The Nielsen software meter used to capture this usage data has been optimized to be unobtrusive yet gather many biometric data points that the backend system can use to identify who is using the device and also detect fraudulent behavior. The Software Engineer is ultimately responsible for delivering technical solutions, from project onboarding through post-launch support, including design, development, and testing, and is expected to coordinate, support, and work with multiple distributed project teams in multiple regions. As a member of the technical staff on our Digital Meter Processing team, you will further develop the backend system that processes massive amounts of data every day, across 3 different AWS regions. Your role will involve designing, implementing, and maintaining robust, scalable solutions that leverage a Java-based system that runs in an AWS environment. You will play a key role in shaping the technical direction of our projects and mentoring other team members.

Qualifications

Responsibilities
System Deployment: Conceive, design, and build new features in the existing backend processing pipelines.
CI/CD Implementation: Design and implement CI/CD pipelines for automated build, test, and deployment processes. Ensure continuous integration and delivery of features, improvements, and bug fixes.
Code Quality and Best Practices: Enforce coding standards, best practices, and design principles. Conduct code reviews and provide constructive feedback to maintain high code quality.
Performance Optimization: Identify and address performance bottlenecks in reading, processing, and writing data to the backend data stores.
Mentorship and Collaboration: Mentor junior engineers, providing guidance on technical aspects and best practices. Collaborate with cross-functional teams to ensure a cohesive and unified approach to software development.
Security and Compliance: Implement security best practices for all tiers of the system. Ensure compliance with industry standards and regulations related to AWS platform security.

Key Skills
• Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
• Proven experience, minimum 3 years, in high-volume data processing development using ETL tools such as AWS Glue or PySpark, Java, SQL, and databases such as Postgres.
• Minimum 2 years of development on an AWS platform.
• Strong understanding of CI/CD principles and tools; GitLab a plus.
• Excellent problem-solving and debugging skills.
• Strong communication and collaboration skills, with the ability to communicate complex technical concepts and align the organization on decisions.
• Sound problem-solving skills with the ability to quickly process complex information and present it clearly and simply.
• Utilizes team collaboration to create innovative solutions efficiently.

Other desirable skills
• Knowledge of networking principles and security best practices.
• AWS certifications.
• Experience with data warehouses, ETL, and/or data lakes very desirable.
• Experience with Redshift, Airflow, Python, Lambda, Prometheus, Grafana, and OpsGenie a bonus.
• Exposure to the Google Cloud Platform (GCP).

Additional Information
Please be aware that job-seekers may be at risk of targeting by scammers seeking personal data or money. Nielsen recruiters will only contact you through official job boards, LinkedIn, or email with a nielsen.com domain. Be cautious of any outreach claiming to be from Nielsen via other messaging platforms or personal email addresses. Always verify that email communications come from an @nielsen.com address. If you're unsure about the authenticity of a job offer or communication, please contact Nielsen directly through our official website or verified social media channels.
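For the AWS side of this role (Lambda and S3 appear in the desirable-skills list), the following is a hedged sketch of an S3-triggered Lambda handler. The bucket names, object layout, and the counting step are invented for the example.

```python
# Sketch of an S3-event Lambda handler that reads each new object with boto3.
import json

import boto3

s3 = boto3.client("s3")


def handler(event, context):
    # Each record in an S3 event notification points at one newly written object.
    records = event.get("Records", [])
    for record in records:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        lines = body.decode("utf-8").splitlines()

        # Placeholder processing step: count events per object before loading
        # them downstream (e.g. into Postgres or Redshift).
        print(json.dumps({"object": f"s3://{bucket}/{key}", "events": len(lines)}))

    return {"processed_records": len(records)}
```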
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Software: fuel for mobility We bring bold digital visions to life. So we’re on the lookout for more curious and creative engineers who want to create change – one line of high-quality code at a time. Our transformation isn't for everyone, but if you're excited about solving the leading-edge technological challenges facing the auto industry, then let’s talk about your next move. Let's introduce ourselves At Volvo Cars, curiosity, collaboration, and continuous learning define our culture. Join our mission to create sustainable transportation solutions that protect what matters most – people, communities, and the planet. As a Data Engineer, you will drive digital innovation, leading critical technology initiatives with global teams. You’ll design and implement solutions impacting millions worldwide, supporting Volvo’s vision for autonomous, electric, and connected vehicles. What You'll Do Technical Leadership & Development Lead development and implementation using AirFlow, Amazon Web Services (AWS), Azure, Azure Data Factory (ADF), Big Data and Analytics, Core Data, Data Analysis, ETL/ELT, PowerBI, SQL / SQL Script, Snowflake Design, build, and maintain scalable solutions supporting global operations Collaborate closely with USA stakeholders across product management and engineering Promote technical excellence through code reviews, architecture decisions, and best practices Cross-Functional Collaboration Partner internationally using Microsoft Teams, Slack, SharePoint, and Azure DevOps Participate in Agile processes and sprint planning Share knowledge and maintain technical documentation across regions Support 24/7 operations through on-call rotations and incident management Innovation & Continuous Improvement Research emerging technologies to enhance platform capabilities Contribute to roadmap planning and architecture decisions Mentor junior team members and encourage knowledge sharing What You'll Bring Professional Experience 4 -8 years hands-on experience in software development, system administration, or related fields Deep expertise in AirFlow, AWS, Azure, ADF, Big Data, Core Data, Data Analysis, ETL/ELT, PowerBI, SQL, Snowflake with proven implementation success Experience collaborating with global teams across time zones Preferred industry knowledge in automotive, manufacturing, or enterprise software Technical Proficiency Advanced skills in core technologies: AirFlow, AWS, Azure, ADF, Big Data, Core Data, Data Analysis, ETL/ELT, PowerBI, SQL, Snowflake Strong grasp of cloud platforms, DevOps, and CI/CD pipelines Experience with enterprise integration and microservices architecture Skilled in database design and optimization with SQL and NoSQL Essential Soft Skills Analytical Thinking, Collaboration, Communication Skills, Critical Thinking, Documentation Best Practices, Problem Solving, Written Communication Excellent communication, able to explain complex technical topics Adaptable in multicultural, globally distributed teams Strong problem-solving abilities Additional Qualifications Business-level English fluency Flexibility to collaborate across USA time zones Volvo Cars – driving change together Volvo Cars’ success is the result of a collaborative, diverse and inclusive working environment. Today, we’re one of the most well-known and respected car brands, with around 43,000 employees across the globe. At Volvo Cars, your career is designed around your skills and aspirations, so you can reach your fullest potential. 
And it’s so exciting – we’re well on our way on our journey towards full electrification. We have five fully electric cars already on the market, and five more on the way. Our fully-electric and plug-in hybrid cars combined make up almost 50 per cent of our sales. So come and join us in shaping the future of mobility. There’s never been a more rewarding time to play your part in our inspiring and creative teams!
Posted 1 week ago
6.0 years
0 Lacs
Ahmedabad
On-site
About YipitData: YipitData is the leading market research and analytics firm for the disruptive economy and recently raised up to $475M from The Carlyle Group at a valuation over $1B. We analyze billions of alternative data points every day to provide accurate, detailed insights on ridesharing, e-commerce marketplaces, payments, and more. Our on-demand insights team uses proprietary technology to identify, license, clean, and analyze the data many of the world's largest investment funds and corporations depend on. For three years and counting, we have been recognized as one of Inc's Best Workplaces . We are a fast-growing technology company backed by The Carlyle Group and Norwest Venture Partners. Our offices are located in NYC, Austin, Miami, Denver, Mountain View, Seattle , Hong Kong, Shanghai, Beijing, Guangzhou, and Singapore. We cultivate a people-centric culture focused on mastery, ownership, and transparency. Why You Should Apply NOW: You'll be working with many strategic engineering leaders within the company. You'll report directly to the Director of Data Engineering. You will help build out our Data Engineering team presence in India. You will work with a Global team. You'll be challenged with a lot of big data problems. About The Role: We are seeking a highly skilled Senior Data Engineer to join our dynamic Data Engineering team. The ideal candidate possesses 6-8 years of data engineering experience. An excellent candidate should have a solid understanding of Spark and SQL, and have data pipeline experience. Hired individuals will play a crucial role in helping to build out our data engineering team to support our strategic pipelines and optimize for reliability, efficiency, and performance. Additionally, Data Engineering serves as the gold standard for all other YipitData analyst teams, building and maintaining the core pipelines and tooling that power our products. This high-impact, high-visibility team is instrumental to the success of our rapidly growing business. This is a unique opportunity to be the first hire in this team, with the potential to build and lead the team as their responsibilities expand. This is a hybrid opportunity based in India. During training and onboarding, we will expect several hours of overlap with US working hours. Afterward, standard IST working hours are permitted with the exception of 1-2 days per week, when you will join meetings with the US team. As Our Senior Data Engineer You Will: Report directly to the Senior Manager of Data Engineering, who will provide significant, hands-on training on cutting-edge data tools and techniques. Build and maintain end-to-end data pipelines. Help with setting best practices for our data modeling and pipeline builds. Create documentation, architecture diagrams, and other training materials. Become an expert at solving complex data pipeline issues using PySpark and SQL. Collaborate with stakeholders to incorporate business logic into our central pipelines. Deeply learn Databricks, Spark, and other ETL toolings developed internally. You Are Likely To Succeed If: You hold a Bachelor's or Master's degree in Computer Science, STEM, or a related technical discipline. You have 6+ years of experience as a Data Engineer or in other technical functions. You are excited about solving data challenges and learning new skills. You have a great understanding of working with data or building data pipelines. You are comfortable working with large-scale datasets using PySpark, Delta, and Databricks. 
You understand business needs and the rationale behind data transformations to ensure alignment with organizational goals and data strategy. You are eager to constantly learn new technologies. You are a self-starter who enjoys working collaboratively with stakeholders. You have exceptional verbal and written communication skills. Nice to have: Experience with Airflow, dbt, Snowflake, or equivalent. What We Offer: Our compensation package includes comprehensive benefits, perks, and a competitive salary: We care about your personal life and we mean it. We offer vacation time, parental leave, team events, learning reimbursement, and more! Your growth at YipitData is determined by the impact that you are making, not by tenure, unnecessary facetime, or office politics. Everyone at YipitData is empowered to learn, self-improve, and master their skills in an environment focused on ownership, respect, and trust. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, marital status, disability, gender, gender identity or expression, or veteran status. We are proud to be an equal-opportunity employer. Job Applicant Privacy Notice
Posted 1 week ago
5.0 years
4 - 7 Lacs
Mehsana
On-site
#LI-DS2
Job Summary: Using analytical and experimental techniques, lead the development of fans and airflow systems in terms of noise, strength analysis, thermal flow analysis, and manufacturability.

Responsibilities:
• Through numerical and experimental analysis, develop new types of fans (propeller, cross-flow, centrifugal, sirocco) for air conditioners that improve fluid performance and reduce noise.
• Perform thermal flow analysis within and around the system to improve the product's thermal efficiency at the development stage, and analyze thermal efficiency at customer sites.
• Work with various stakeholders, including members of the platform design department, each module, and the production technology department, to develop a fan that maintains performance and noise levels without sacrificing strength or productivity.
• Propose and design prototypes and experimental equipment that will lead to the evaluation of subsystems, including fans and shrouds.

Educational Qualification: Master’s degree (or equivalent) in fluid mechanics and aerodynamics dealing with the flow around fans, turbomachinery design, rotating machinery, and CFD analysis.
Working experience: At least 5 years of fan design, thermal analysis, or research experience.
Skill requirements: Communication and presentation skills. Ability to make objective decisions in collaboration with managers to ensure that the right decisions are made. Ability to make judgments that enable correct responses to stakeholder comments. Ability to propose new approaches to problems.
Language: Excellent communication skills (fluent English, both written and spoken, is preferred).
Location: Kadi
Posted 1 week ago
5.0 - 7.0 years
0 Lacs
Noida
On-site
5 - 7 Years | 2 Openings | Noida

Role description

Role Proficiency: This role requires proficiency in data pipeline development, including coding and testing data pipelines for ingesting, wrangling, transforming, and joining data from various sources. Must be skilled in ETL tools such as Informatica, Glue, Databricks, and DataProc, with coding expertise in Python, PySpark, and SQL. Works independently and has a deep understanding of data warehousing solutions, including Snowflake, BigQuery, Lakehouse, and Delta Lake. Capable of calculating costs and understanding performance issues related to data solutions.

Outcomes:
• Act creatively to develop pipelines and applications by selecting appropriate technical options, optimizing application development, maintenance, and performance using design patterns and reuse of proven solutions.
• Interpret requirements to create optimal architecture and design, developing solutions in accordance with specifications.
• Document and communicate milestones/stages for end-to-end delivery.
• Code adhering to best coding standards, debug, and test solutions to deliver best-in-class quality.
• Perform performance tuning of code and align it with the appropriate infrastructure to optimize efficiency.
• Validate results with user representatives, integrating the overall solution seamlessly.
• Develop and manage data storage solutions, including relational databases, NoSQL databases, and data lakes.
• Stay updated on the latest trends and best practices in data engineering, cloud technologies, and big data tools.
• Influence and improve customer satisfaction through effective data solutions.

Measures of Outcomes:
• Adherence to engineering processes and standards
• Adherence to schedule/timelines
• Adherence to SLAs where applicable
• Number of defects post delivery
• Number of non-compliance issues
• Reduction of recurrence of known defects
• Quick turnaround of production bugs
• Completion of applicable technical/domain certifications
• Completion of all mandatory training requirements
• Efficiency improvements in data pipelines (e.g. reduced resource consumption, faster run times)
• Average time to detect, respond to, and resolve pipeline failures or data issues
• Number of data security incidents or compliance breaches

Outputs Expected:
Code Development: Develop data processing code independently, ensuring it meets performance and scalability requirements. Define coding standards, templates, and checklists. Review code for team members and peers.
Documentation: Create and review templates, checklists, guidelines, and standards for design, processes, and development. Create and review deliverable documents, including design documents, architecture documents, infrastructure costing, business requirements, source-target mappings, test cases, and results.
Configuration: Define and govern the configuration management plan. Ensure compliance within the team.
Testing: Review and create unit test cases, scenarios, and execution plans. Review the test plan and test strategy developed by the testing team. Provide clarifications and support to the testing team as needed.
Domain Relevance: Advise data engineers on the design and development of features and components, demonstrating a deeper understanding of business needs. Learn about customer domains to identify opportunities for value addition. Complete relevant domain certifications to enhance expertise.
Project Management: Manage the delivery of modules effectively.
Defect Management: Perform root cause analysis (RCA) and mitigation of defects. Identify defect trends and take proactive measures to improve quality.
Estimation: Create and provide input for effort and size estimation for projects.
Knowledge Management: Consume and contribute to project-related documents, SharePoint libraries, and client universities. Review reusable documents created by the team.
Release Management: Execute and monitor the release process to ensure smooth transitions.
Design Contribution: Contribute to the creation of high-level design (HLD), low-level design (LLD), and system architecture for applications, business components, and data models.
Customer Interface: Clarify requirements and provide guidance to the development team. Present design options to customers and conduct product demonstrations.
Team Management: Set FAST goals and provide constructive feedback. Understand team members' aspirations and provide guidance and opportunities for growth. Ensure team engagement in projects and initiatives.
Certifications: Obtain relevant domain and technology certifications to stay competitive and informed.

Skill Examples:
• Proficiency in SQL, Python, or other programming languages used for data manipulation.
• Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc, and Azure ADF.
• Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud, particularly with data-related services (e.g. AWS Glue, BigQuery).
• Conduct tests on data pipelines and evaluate results against data quality and performance specifications.
• Experience in performance tuning of data processes.
• Expertise in designing and optimizing data warehouses for cost efficiency.
• Ability to apply and optimize data models for efficient storage, retrieval, and processing of large datasets.
• Capacity to clearly explain and communicate design and development aspects to customers.
• Ability to estimate time and resource requirements for developing and debugging features or components.

Knowledge Examples:
• Knowledge of various ETL services offered by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/DataFlow, Azure ADF, and ADLF.
• Proficiency in SQL for analytics, including windowing functions.
• Understanding of data schemas and models relevant to various business contexts.
• Familiarity with domain-related data and its implications.
• Expertise in data warehousing optimization techniques.
• Knowledge of data security concepts and best practices.
• Familiarity with design patterns and frameworks in data engineering.

Additional Comments:
Skills:
• Cloud platforms (AWS, MS Azure, Google Cloud, etc.)
• Containerization and orchestration (Docker, Kubernetes, etc.)
• API development
• Data pipeline construction using languages like Python, PySpark, and SQL
• Data streaming (Kafka, Azure Event Hub, etc.)
• Data parsing (Akka, MinIO, etc.)
• Database management (SQL and NoSQL, including ClickHouse, PostgreSQL, etc.)
• Agile methodology and tooling (Git, Jenkins, or Azure DevOps, etc.)
• JS-like connectors/frameworks for frontend/backend
• Collaboration and communication skills

Skills: AWS Cloud, Azure Cloud, Docker, Kubernetes

About UST: UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world’s best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients’ organizations.
With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.
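The skills list above mentions streaming ingestion with Kafka or Azure Event Hub and sinks such as PostgreSQL/ClickHouse. A minimal consumer loop along those lines is sketched below, assuming the confluent-kafka Python client; the broker address, topic, group id, and message fields are placeholders.

```python
# Minimal Kafka consumer loop with confluent-kafka; all connection details are hypothetical.
import json

from confluent_kafka import Consumer

consumer = Consumer(
    {
        "bootstrap.servers": "localhost:9092",
        "group.id": "events-loader",
        "auto.offset.reset": "earliest",
    }
)
consumer.subscribe(["device-events"])

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None:
            continue
        if msg.error():
            print("consumer error:", msg.error())
            continue

        event = json.loads(msg.value())
        # Placeholder sink: a real pipeline would batch these into
        # PostgreSQL/ClickHouse or forward them to a downstream topic.
        print(event.get("device_id"), event.get("event_type"))
finally:
    consumer.close()
```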
Posted 1 week ago
5.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Key Responsibilities:
Azure Cloud & Databricks:
• Design and build efficient data pipelines using Azure Databricks (PySpark).
• Implement business logic for data transformation and enrichment at scale.
• Manage and optimize Delta Lake storage solutions.
API Development:
• Develop REST APIs using FastAPI to expose processed data.
• Deploy APIs on Azure Functions for scalable and serverless data access.
Data Orchestration & ETL:
• Develop and manage Airflow DAGs to orchestrate ETL processes.
• Ingest and process data from various internal and external sources on a scheduled basis.
Database Management:
• Handle data storage and access using PostgreSQL and MongoDB.
• Write optimized SQL queries to support downstream applications and analytics.
Collaboration:
• Work cross-functionally with teams to deliver reliable, high-performance data solutions.
• Follow best practices in code quality, version control, and documentation.

Required Skills & Experience:
• 5+ years of hands-on experience as a Data Engineer.
• Strong experience with Azure Cloud services.
• Proficient in Azure Databricks, PySpark, and Delta Lake.
• Solid experience with Python and FastAPI for API development.
• Experience with Azure Functions for serverless API deployments.
• Skilled in managing ETL pipelines using Apache Airflow.
• Hands-on experience with PostgreSQL and MongoDB.
• Strong SQL skills and experience handling large datasets.
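As a rough sketch of the "expose processed data with FastAPI" responsibility above: a single read endpoint backed by PostgreSQL. The table, columns, and connection string are invented, and the Azure Functions hosting wrapper is left out.

```python
# Stripped-down FastAPI service returning one curated table row by date.
from datetime import date

import asyncpg
from fastapi import FastAPI, HTTPException

app = FastAPI(title="curated-data-api")

DSN = "postgresql://user:password@db-host:5432/analytics"  # placeholder DSN


@app.get("/daily-revenue/{order_date}")
async def daily_revenue(order_date: date):
    conn = await asyncpg.connect(DSN)
    try:
        row = await conn.fetchrow(
            "SELECT order_date, revenue, customers FROM daily_revenue WHERE order_date = $1",
            order_date,
        )
    finally:
        await conn.close()

    if row is None:
        raise HTTPException(status_code=404, detail="no data for that date")
    return dict(row)
```

Run locally with any ASGI server (e.g. `uvicorn main:app`, assuming the file is named main.py); a production version would use a connection pool rather than connecting per request.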
Posted 1 week ago
8.0 - 12.0 years
15 - 27 Lacs
Pune, Bengaluru
Hybrid
Role & responsibilities

Job Description - Snowflake Senior Developer
Experience: 8+ years, Hybrid
Employment Type: Full-time

Job Summary
We are seeking a skilled Snowflake Developer with 8+ years of experience in designing, developing, and optimizing Snowflake data solutions. The ideal candidate will have strong expertise in Snowflake SQL, ETL/ELT pipelines, and cloud data integration. This role involves building scalable data warehouses, implementing efficient data models, and ensuring high-performance data processing in Snowflake.

Key Responsibilities
1. Snowflake Development & Optimization
• Design and develop Snowflake databases, schemas, tables, and views following best practices.
• Write complex SQL queries, stored procedures, and UDFs for data transformation.
• Optimize query performance using clustering, partitioning, and materialized views.
• Implement Snowflake features (Time Travel, Zero-Copy Cloning, Streams & Tasks).
2. Data Pipeline Development
• Build and maintain ETL/ELT pipelines using Snowflake, Snowpark, Python, or Spark.
• Integrate Snowflake with cloud storage (S3, Blob) and data ingestion tools (Snowpipe).
• Develop CDC (Change Data Capture) and real-time data processing solutions.
3. Data Modeling & Warehousing
• Design star schema, snowflake schema, and data vault models in Snowflake.
• Implement data sharing, secure views, and dynamic data masking.
• Ensure data quality, consistency, and governance across Snowflake environments.
4. Performance Tuning & Troubleshooting
• Monitor and optimize Snowflake warehouse performance (scaling, caching, resource usage).
• Troubleshoot data pipeline failures, latency issues, and query bottlenecks.
• Work with DevOps teams to automate deployments and CI/CD pipelines.
5. Collaboration & Documentation
• Work closely with data analysts, BI teams, and business stakeholders to deliver data solutions.
• Document data flows, architecture, and technical specifications.
• Mentor junior developers on Snowflake best practices.

Required Skills & Qualifications
• 8+ years in database development, data warehousing, or ETL.
• 4+ years of hands-on Snowflake development experience.
• Strong SQL or Python skills for data processing.
• Experience with Snowflake utilities (SnowSQL, Snowsight, Snowpark).
• Knowledge of cloud platforms (AWS/Azure) and data integration tools (Coalesce, Airflow, DBT).
• Certifications: SnowPro Core Certification (preferred).

Preferred Skills
• Familiarity with data governance and metadata management.
• Familiarity with DBT, Airflow, SSIS & IICS.
• Knowledge of CI/CD pipelines (Azure DevOps).

If interested, kindly share your updated CV at Himanshu.mehra@thehrsolutions.in
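To make two of the Snowflake features named above concrete (Time Travel and zero-copy cloning), here is a hedged Python sketch driving them through the snowflake-connector-python package. The account, credentials, warehouse, and table names are placeholders.

```python
# Sketch: query a table as of one hour ago, then take a zero-copy clone of it.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",       # hypothetical account identifier
    user="etl_user",
    password="***",
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)

try:
    cur = conn.cursor()

    # Time Travel: read the table as it looked one hour ago (offset in seconds).
    cur.execute("SELECT COUNT(*) FROM orders AT(OFFSET => -3600)")
    print("row count one hour ago:", cur.fetchone()[0])

    # Zero-copy clone: snapshot the table for testing without duplicating storage.
    cur.execute("CREATE OR REPLACE TABLE orders_dev CLONE orders")
finally:
    conn.close()
```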
Posted 1 week ago
8.0 - 13.0 years
32 - 35 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Lead AWS Data Engineer with team-handling experience.
Skills: AWS, Python, SQL, Spark, Airflow, Athena, API integration
Notice Period: Immediate to 15 days
Location: Bangalore/Hyderabad/Coimbatore & Chennai
Posted 1 week ago
4.0 - 9.0 years
11 - 17 Lacs
Bengaluru
Work from Office
Greetings from TSIT Digital!! This is with regard to an excellent opportunity with us: if you have that unique and unlimited passion for building world-class enterprise software products that turn into actionable intelligence, then we have the right opportunity for you and your career. This is an opportunity for permanent employment with TSIT Digital.

What are we looking for: Data Engineer
Experience: 4+ years (relevant experience 2-5 years)
Location: Bangalore
Notice period: Immediate to 15 days

Job Description:
Work location - Manyata Tech Park, Bengaluru, Karnataka, India
Work mode - Hybrid model
Client - Lowes
Mandatory Skills: Data Engineer with Scala/Python, SQL, and scripting; knowledge of BigQuery, PySpark, Airflow, serverless cloud-native services, and Kafka streaming

If you are interested, please share your updated CV: kousalya.v@tsit.co.in
Posted 1 week ago
15.0 years
0 Lacs
Pune, Maharashtra, India
On-site
HCL Software (hcl-software.com) delivers software that fulfils the transformative needs of clients around the world. We build award winning software across AI, Automation, Data & Analytics, Security and Cloud. The HCL Unica+ Marketing Platform enables our customers to deliver precision and high performance Marketing campaigns across multiple channels like Social Media, AdTech Platforms, Mobile Applications, Websites, etc. The Unica+ Marketing Platform is a Data and AI first platform that enables our clients to deliver hyper-personalized offers and messages for customer acquisition, product awareness and retention. We are seeking a Senior Architect Developer with strong Data Science and Machine Learning skills and experience to deliver AI driven Marketing Campaigns. Responsibilities Designing and Architecting End-to-End AI/ML Solutions for Marketing: The architect is responsible for designing robust, scalable, and secure AI/ML solutions specifically tailored for marketing challenges. This includes defining data pipelines, selecting appropriate machine learning algorithms and frameworks (e.g., for predictive analytics, customer segmentation, personalization, campaign optimization, sentiment analysis), designing model deployment strategies, and integrating these solutions seamlessly with existing marketing tech stacks and enterprise systems. They must consider the entire lifecycle from data ingestion to model monitoring and retraining. Technical Leadership: The AI/ML architect acts as a technical leader, providing guidance and mentorship to data scientists, ML engineers, and other development teams. They evaluate and select the most suitable AI/ML tools, platforms, and cloud services (AWS, GCP, Azure) for marketing use cases. The architect is aso responsible for establishing and promoting best practices for MLOps (Machine Learning Operations), model versioning, continuous integration/continuous deployment (CI/CD) for ML models, and ensuring data quality, ethical AI principles (e.g., bias, fairness), and regulatory compliance (e.g., data privacy laws). Python Programming & Libraries: Proficient in Python with extensive experience using Pandas for data manipulation, NumPy for numerical operations, and Matplotlib/Seaborn for data visualization. Statistical Analysis & Modelling: Strong understanding of statistical concepts, including descriptive statistics, inferential statistics, hypothesis testing, regression analysis, and time series analysis. Data Cleaning & Preprocessing: Expertise in handling messy real-world data, including dealing with missing values, outliers, data normalization/standardization, feature engineering, and data transformation. SQL & Database Management: Ability to query and manage data efficiently from relational databases using SQL, and ideally some familiarity with NoSQL databases. Exploratory Data Analysis (EDA): Skill in visually and numerically exploring datasets to understand their characteristics, identify patterns, anomalies, and relationships. Machine Learning Algorithms: In-depth knowledge and practical experience with a wide range of ML algorithms such as linear models, tree-based models (Random Forests, Gradient Boosting), SVMs, K-means, and dimensionality reduction techniques (PCA). Deep Learning Frameworks: Proficiency with at least one major deep learning framework like TensorFlow or PyTorch. This includes understanding neural network architectures (CNNs, RNNs, Transformers) and their application to various problems. 
Model Evaluation & Optimization: Ability to select appropriate evaluation metrics (e.g., precision, recall, F1-score, AUC-ROC, RMSE) for different problem types, diagnose model performance issues (bias-variance trade-off), and apply optimization techniques.
Deployment & MLOps Concepts: Experience deploying machine learning models into production environments, including API creation, containerization (Docker), version control for models, and monitoring.
Qualifications & Skills
At least 15 years of experience across Data Architecture, Data Science and Machine Learning.
Experience in delivering AI/ML models for marketing outcomes such as customer acquisition, customer churn, and next best product or offer. This is a mandatory requirement.
Experience with Customer Data Platforms (CDP) and marketing platforms such as Unica, Adobe, Salesforce, Braze, TreasureData, Epsilon, and Tealium is mandatory.
Experience with AWS SageMaker is advantageous.
Experience with LangChain and RAG for Generative AI is advantageous.
Experience with ETL processes and tools such as Apache Airflow is advantageous.
Expertise in integration tools and frameworks such as Postman, Swagger, and API gateways.
Ability to work well within an agile team environment and apply the related working methods.
Excellent communication and interpersonal skills.
A 4-year degree in Computer Science or IT is a must.
Travel: 30% +/- travel required.
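To illustrate the model-evaluation skills this posting asks for (precision, recall, F1-score, AUC-ROC for outcomes such as customer churn), here is a minimal sketch using scikit-learn. The synthetic dataset and choice of gradient boosting are illustrative assumptions, not part of the role description.

```python
# Minimal sketch: evaluating a churn-style binary classifier with the metrics
# named in the posting. Data, model and split are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import precision_score, recall_score, f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Imbalanced toy data mimicking a churn problem (about 10% positives).
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.9, 0.1], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)

model = GradientBoostingClassifier().fit(X_train, y_train)
pred = model.predict(X_test)
proba = model.predict_proba(X_test)[:, 1]

print(f"precision: {precision_score(y_test, pred):.3f}")
print(f"recall:    {recall_score(y_test, pred):.3f}")
print(f"F1:        {f1_score(y_test, pred):.3f}")
print(f"AUC-ROC:   {roc_auc_score(y_test, proba):.3f}")
```

On imbalanced marketing data, AUC-ROC and recall are usually more informative than raw accuracy, which is why the posting calls them out.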
Posted 1 week ago
10.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
About Us: Birlasoft, a global leader at the forefront of Cloud, AI, and Digital technologies, seamlessly blends domain expertise with enterprise solutions. The company’s consultative and design-thinking approach empowers societies worldwide, enhancing the efficiency and productivity of businesses. As part of the multibillion-dollar diversified CKA Birla Group, Birlasoft, with its 12,000+ professionals, is committed to continuing the Group’s 170-year heritage of building sustainable communities.
Job Description - Snowflake Tech Lead
Experience: 10+ years
Location: Mumbai, Pune, Hyderabad
Employment Type: Full-time
Job Summary
We are looking for a Snowflake Tech Lead with 10+ years of experience in data engineering, cloud platforms, and Snowflake implementations. This role involves leading technical teams, designing scalable Snowflake solutions, and optimizing data pipelines for performance and efficiency. The ideal candidate will have deep expertise in Snowflake, ETL/ELT processes, and cloud data architecture.
Key Responsibilities
1. Snowflake Development & Optimization: Lead Snowflake implementation, including data modeling, warehouse design, and performance tuning. Optimize SQL queries, stored procedures, and UDFs for high efficiency. Implement Snowflake best practices (clustering, partitioning, zero-copy cloning). Manage virtual warehouses, resource monitors, and cost optimization.
2. Data Pipeline & Integration: Design and deploy ETL/ELT pipelines using Snowflake, Snowpark, and Coalesce. Integrate Snowflake with BI tools (Power BI, Tableau), APIs, and external data sources. Implement real-time and batch data ingestion (CDC, streaming, Snowpipe).
3. Team Leadership & Mentorship: Lead a team of data engineers, analysts, and developers on Snowflake projects. Conduct code reviews, performance tuning sessions, and technical training. Collaborate with stakeholders, architects, and business teams to align solutions with requirements.
4. Security & Governance: Configure RBAC, data masking, encryption, and row-level security in Snowflake. Ensure compliance with GDPR, HIPAA, or SOC 2 standards. Implement data quality checks, monitoring, and alerting.
5. Cloud & DevOps Integration: Deploy Snowflake in AWS or Azure. Automate CI/CD pipelines for Snowflake using GitHub Actions, Jenkins, or Azure DevOps. Monitor and troubleshoot Snowflake environments using logging tools (Datadog, Splunk).
Required Skills & Qualifications
10+ years in data engineering, cloud platforms, or database technologies.
5+ years of hands-on Snowflake development and administration.
Strong expertise in SQL and Python for data processing.
Experience with Snowflake features (Snowpark, Streams & Tasks, Time Travel).
Knowledge of cloud data storage (S3, Blob) and data orchestration (Airflow, dbt).
Certifications: Snowflake SnowPro Core/Advanced.
Knowledge of DataOps, MLOps, and CI/CD pipelines.
Familiarity with dbt, Airflow, SSIS, and IICS.
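As a flavor of the Snowflake housekeeping this role describes (clustering health and warehouse cost monitoring), here is a minimal Python sketch using the snowflake-connector-python package. All connection parameters, table names and clustering keys are placeholders, not details from the posting.

```python
# Minimal sketch: inspect clustering quality and recent warehouse credit usage.
# Connection parameters, FACT_ORDERS and ORDER_DATE are placeholder assumptions.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",      # placeholder
    user="my_user",            # placeholder
    password="***",            # placeholder
    warehouse="ANALYTICS_WH",
    database="SALES_DB",
    schema="PUBLIC",
)
cur = conn.cursor()

# How well clustered is a large fact table on its clustering key?
cur.execute("SELECT SYSTEM$CLUSTERING_INFORMATION('FACT_ORDERS', '(ORDER_DATE)')")
print(cur.fetchone()[0])

# Credits consumed per warehouse over the last 7 days (cost monitoring).
cur.execute("""
    SELECT warehouse_name, SUM(credits_used) AS credits
    FROM snowflake.account_usage.warehouse_metering_history
    WHERE start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
    GROUP BY warehouse_name
    ORDER BY credits DESC
""")
for name, credits in cur.fetchall():
    print(name, credits)

cur.close()
conn.close()
```

In practice the same queries would typically be scheduled in Airflow or a Snowflake task and fed into a cost dashboard rather than printed.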
Posted 1 week ago
4.0 - 6.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Experience: 4-6 years. Location: Mumbai (Thane). Only immediate joiners.
Key Responsibilities
Database Engineering & Operations
Own and manage critical components of the database infrastructure across production and non-production environments. Ensure performance, availability, scalability, and reliability of databases including PostgreSQL, MySQL, and MongoDB. Drive implementation of best practices in schema design, indexing, query optimization, and database tuning. Take initiative in root cause analysis and resolution of complex performance and availability issues. Implement and maintain backup, recovery, and disaster recovery procedures, and contribute to testing and continuous improvement of these systems. Ensure system health through robust monitoring, alerting, and observability using tools such as Prometheus, Grafana, and CloudWatch. Implement and improve automation for provisioning, scaling, maintenance, and monitoring tasks using scripting (e.g., Python, Bash).
Database Security & Compliance
Enforce database security best practices, including encryption at rest and in transit, IAM/RBAC, and audit logging. Support data governance and compliance efforts related to SOC 2, ISO 27001, or other regulatory standards. Collaborate with the security team on regular vulnerability assessments and hardening initiatives.
DevOps & Collaboration
Partner with DevOps and Engineering teams to integrate database operations into CI/CD pipelines using tools like Liquibase, Flyway, or custom scripting. Participate in infrastructure-as-code workflows (e.g., Terraform) for consistent and scalable database provisioning and configuration. Proactively contribute to cross-functional planning, deployments, and system design sessions with engineering and product teams.
Required Skills & Experience
4-6 years of production experience managing relational and NoSQL databases in cloud-native environments (AWS, GCP, or Azure). Proficiency in relational databases (PostgreSQL and/or MySQL) and NoSQL databases (MongoDB; exposure to Cassandra or DynamoDB is a plus). Deep hands-on experience in performance tuning, query optimization, and troubleshooting live systems. Strong scripting ability (e.g., Python, Bash) for automation of operational tasks. Experience in implementing monitoring and alerting for distributed systems using Grafana, Prometheus, or equivalent cloud-native tools. Understanding of security and compliance principles and how they apply to data systems. Ability to operate with autonomy while collaborating in fast-paced, cross-functional teams. Strong analytical, problem-solving, and communication skills.
Nice to Have (Bonus)
Experience with Infrastructure as Code tools (Terraform, Pulumi, etc.) for managing database infrastructure. Familiarity with Kafka, Airflow, or other data pipeline tools. Experience working in multi-region or multi-cloud environments with high availability requirements. Exposure to analytics databases (e.g., Druid, ClickHouse, BigQuery, Vertica) or search platforms like Elasticsearch. Participation in on-call rotations and contribution to incident response processes.
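The posting above asks for Python-based operational automation and monitoring; a minimal sketch of that idea for PostgreSQL follows, using psycopg2 to flag long-running queries. Connection details, the threshold and the alerting target are illustrative assumptions.

```python
# Minimal sketch: flag long-running PostgreSQL queries so they can be alerted on.
# Host, database, credentials and the 5-minute threshold are placeholder assumptions.
import psycopg2

THRESHOLD_MINUTES = 5

conn = psycopg2.connect(host="db.internal", dbname="appdb", user="monitor", password="***")
cur = conn.cursor()
cur.execute(
    """
    SELECT pid, now() - query_start AS runtime, state, left(query, 80)
    FROM pg_stat_activity
    WHERE state <> 'idle'
      AND now() - query_start > make_interval(mins => %s)
    ORDER BY runtime DESC
    """,
    (THRESHOLD_MINUTES,),
)
for pid, runtime, state, query in cur.fetchall():
    # In production this output would feed Prometheus/Grafana or a paging tool.
    print(f"pid={pid} runtime={runtime} state={state} query={query}")
cur.close()
conn.close()
```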
Posted 1 week ago
3.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Company Description
At Nielsen, we are passionate about our work to power a better media future for all people by providing powerful insights that drive client decisions and deliver extraordinary results. Our talented, global workforce is dedicated to capturing audience engagement with content - wherever and whenever it’s consumed. Together, we are proudly rooted in our deep legacy as we stand at the forefront of the media revolution. When you join Nielsen, you will join a dynamic team committed to excellence, perseverance, and the ambition to make an impact together. We champion you, because when you succeed, we do too. We enable your best to power our future.
Job Description
This role is part of a team that develops software to process data captured every day from over a quarter of a million computer and mobile devices worldwide, measuring panelists’ activity as they surf the Internet via browsers or use mobile apps downloaded from Apple’s and Google’s stores. The Nielsen software meter used to capture this usage data has been optimized to be unobtrusive yet gather many biometric data points that the backend system can use to identify who is using the device and to detect fraudulent behavior. The Software Engineer is ultimately responsible for delivering technical solutions, from project onboarding through post-launch support, including design, development, and testing, and is expected to coordinate, support, and work with multiple delocalized project teams in multiple regions. As a member of the technical staff on our Digital Meter Processing team, you will further develop the backend system that processes massive amounts of data every day across three different AWS regions. Your role will involve designing, implementing, and maintaining robust, scalable solutions that leverage a Java-based system running in an AWS environment. You will play a key role in shaping the technical direction of our projects and mentoring other team members.
Qualifications
Responsibilities
System Deployment: Conceive, design, and build new features in the existing backend processing pipelines.
CI/CD Implementation: Design and implement CI/CD pipelines for automated build, test, and deployment processes. Ensure continuous integration and delivery of features, improvements, and bug fixes.
Code Quality and Best Practices: Enforce coding standards, best practices, and design principles. Conduct code reviews and provide constructive feedback to maintain high code quality.
Performance Optimization: Identify and address performance bottlenecks in reading, processing, and writing data to the backend data stores.
Mentorship and Collaboration: Mentor junior engineers, providing guidance on technical aspects and best practices. Collaborate with cross-functional teams to ensure a cohesive and unified approach to software development.
Security and Compliance: Implement security best practices for all tiers of the system. Ensure compliance with industry standards and regulations related to AWS platform security.
Key Skills
Bachelor's or Master’s degree in Computer Science, Software Engineering, or a related field.
Proven experience (minimum 3 years) in high-volume data processing development using ETL tools such as AWS Glue or PySpark, Java, SQL, and databases such as Postgres.
Minimum 2 years of development on an AWS platform.
Strong understanding of CI/CD principles and tools; GitLab a plus.
Excellent problem-solving and debugging skills.
Strong communication and collaboration skills, with the ability to communicate complex technical concepts and align the organization on decisions.
Sound problem-solving skills, with the ability to quickly process complex information and present it clearly and simply.
Uses team collaboration to create innovative solutions efficiently.
Other Desirable Skills
Knowledge of networking principles and security best practices.
AWS certifications.
Experience with data warehouses, ETL, and/or data lakes very desirable.
Experience with Redshift, Airflow, Python, Lambda, Prometheus, Grafana, and OpsGenie a bonus.
Exposure to the Google Cloud Platform (GCP).
Additional Information
Please be aware that job-seekers may be at risk of targeting by scammers seeking personal data or money. Nielsen recruiters will only contact you through official job boards, LinkedIn, or email with a nielsen.com domain. Be cautious of any outreach claiming to be from Nielsen via other messaging platforms or personal email addresses. Always verify that email communications come from an @nielsen.com address. If you're unsure about the authenticity of a job offer or communication, please contact Nielsen directly through our official website or verified social media channels.
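For the high-volume batch processing this role describes, a minimal PySpark sketch of a daily roll-up is shown below. The S3 paths, column names and event types are illustrative assumptions; in the actual role the equivalent logic might run inside AWS Glue rather than a standalone Spark job.

```python
# Minimal PySpark sketch: read raw meter events, aggregate per panelist per day,
# and write partitioned output. Paths and columns are placeholder assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("meter-daily-rollup").getOrCreate()

events = spark.read.parquet("s3://example-bucket/raw/meter-events/")  # placeholder path

daily = (
    events
    .filter(F.col("event_type") == "page_view")
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("panelist_id", "event_date")
    .agg(F.count("*").alias("events"), F.countDistinct("domain").alias("domains"))
)

# Partitioning by date keeps downstream reads and backfills cheap.
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/meter-daily/"  # placeholder path
)
```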
Posted 1 week ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About Atos
Atos is a global leader in digital transformation with c. 78,000 employees and annual revenue of c. €10 billion. European number one in cybersecurity, cloud and high-performance computing, the Group provides tailored end-to-end solutions for all industries in 68 countries. A pioneer in decarbonization services and products, Atos is committed to a secure and decarbonized digital future for its clients. Atos is an SE (Societas Europaea) listed on Euronext Paris. The purpose of Atos is to help design the future of the information space. Its expertise and services support the development of knowledge, education and research in a multicultural approach and contribute to the development of scientific and technological excellence. Across the world, the Group enables its customers, employees, and members of society at large to live, work and develop sustainably in a safe and secure information space.
Data Streaming Engineer - Experience: 4+ years
Expertise in Python is a must.
SQL (able to write complex SQL queries) is a must.
Hands-on experience in Apache Flink Streaming or Spark Streaming is a must.
Hands-on expertise in Apache Kafka is a must.
Data lake development experience.
Orchestration (Apache Airflow is preferred).
Spark and Hive, including optimization of Spark/PySpark and Hive applications.
Trino/AWS Athena (good to have).
Snowflake (good to have).
Data quality (good to have).
File storage (S3 is good to have).
Our Offering
Global cutting-edge IT projects that shape the future of digital and have a positive impact on the environment. Wellbeing programs and work-life balance - integration and passion-sharing events. Attractive salary and company initiative benefits. Courses and conferences. Hybrid work culture.
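The Kafka-to-data-lake pattern this role centers on can be sketched in a few lines of Spark Structured Streaming (the posting also accepts Flink; Spark is shown here as one option). Broker address, topic, schema and output paths are illustrative assumptions, and the spark-sql-kafka connector package is assumed to be on the classpath.

```python
# Minimal sketch: consume JSON events from Kafka and land them in a data lake
# with Spark Structured Streaming. Broker, topic, schema and paths are placeholders.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("kafka-events-stream").getOrCreate()

schema = StructType([
    StructField("user_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_ts", TimestampType()),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder
    .option("subscribe", "transactions")               # placeholder topic
    .load()
)

# Kafka delivers bytes; parse the value column into typed fields.
parsed = raw.select(F.from_json(F.col("value").cast("string"), schema).alias("e")).select("e.*")

query = (
    parsed.writeStream.format("parquet")
    .option("path", "s3://example-bucket/lake/transactions/")               # placeholder
    .option("checkpointLocation", "s3://example-bucket/checkpoints/transactions/")
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()
```

The checkpoint location is what gives the job exactly-once output semantics across restarts, which is the usual reason orchestration and data-quality tooling sit around such a job.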
Posted 1 week ago
0 years
0 Lacs
India
Remote
We are hiring a Data Engineer. If you are interested, please feel free to share your CV with SyedaRashna@lancesoft.com
Job title: Data Engineer
Location: India - Remote
Duration: 6 months
Description: We are seeking a highly skilled and motivated Data Engineer to join our dynamic technology team. The ideal candidate will have deep expertise in data engineering tools and platforms, particularly Apache Airflow, PySpark, and Python, with hands-on experience in Cloudera Data Platform (CDP). A strong understanding of DevOps practices and exposure to AI/ML and Generative AI use cases is highly desirable.
Key Responsibilities:
1. Design, build, and maintain scalable data pipelines using Python, PySpark and Airflow.
2. Develop and optimize ETL workflows on Cloudera Data Platform (CDP).
3. Implement data quality checks, monitoring, and alerting mechanisms.
4. Ensure data security, governance, and compliance across all pipelines.
5. Work closely with cross-functional teams to understand data requirements and deliver solutions.
6. Troubleshoot and resolve issues in production data pipelines.
7. Contribute to the architecture and design of the data platform.
8. Collaborate with engineering teams and analysts to work on AI/ML and Gen AI use cases.
9. Automate deployment and monitoring of data workflows using DevOps tools and practices.
10. Stay updated with the latest trends in data engineering, AI/ML, and Gen AI technologies.
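A minimal Airflow 2.x DAG sketch of the extract-transform-quality-check pipeline pattern this posting describes follows. The dag_id, schedule and task bodies are illustrative assumptions; in practice the transform step would submit a PySpark job on CDP rather than print a message.

```python
# Minimal Airflow DAG sketch: extract, transform, then a data quality gate.
# dag_id, schedule and the stubbed callables are placeholder assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**_):
    print("pull source data to the landing zone")


def transform(**_):
    print("submit the PySpark transformation job")


def quality_check(**_):
    print("validate row counts and null rates; raise to fail the task on violation")


with DAG(
    dag_id="daily_ingest",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_check = PythonOperator(task_id="quality_check", python_callable=quality_check)

    # Quality checks run last so bad data never reaches downstream consumers.
    t_extract >> t_transform >> t_check
```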
Posted 1 week ago
6.0 years
0 Lacs
India
Remote
Data Engineer
Remote | 7-month contract, extendable
Experience: 6 years
We are seeking a highly skilled and motivated Data Engineer to join our dynamic technology team. The ideal candidate will have deep expertise in data engineering tools and platforms, particularly Apache Airflow, PySpark, and Python, with hands-on experience in Cloudera Data Platform (CDP). A strong understanding of DevOps practices and exposure to AI/ML and Generative AI use cases is highly desirable.
Key Responsibilities:
1. Design, build, and maintain scalable data pipelines using Python, PySpark and Airflow.
2. Develop and optimize ETL workflows on Cloudera Data Platform (CDP).
3. Implement data quality checks, monitoring, and alerting mechanisms.
4. Ensure data security, governance, and compliance across all pipelines.
5. Work closely with cross-functional teams to understand data requirements and deliver solutions.
6. Troubleshoot and resolve issues in production data pipelines.
7. Contribute to the architecture and design of the data platform.
8. Collaborate with engineering teams and analysts to work on AI/ML and Gen AI use cases.
9. Automate deployment and monitoring of data workflows using DevOps tools and practices.
10. Stay updated with the latest trends in data engineering, AI/ML, and Gen AI technologies.
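Responsibility 3 above (data quality checks with monitoring and alerting) often boils down to simple assertions run before data is published; a minimal PySpark sketch is shown below. The table path, column and thresholds are illustrative assumptions.

```python
# Minimal sketch of pre-publish data quality checks on a PySpark DataFrame.
# Path, the "email" column and the 2% null threshold are placeholder assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.read.parquet("/data/curated/customers/")  # placeholder path

row_count = df.count()
null_emails = df.filter(F.col("email").isNull()).count()
null_rate = null_emails / row_count if row_count else 1.0

# Fail fast so the orchestrator (e.g. Airflow) marks the task failed and alerts.
assert row_count > 0, "empty dataset"
assert null_rate < 0.02, f"email null rate too high: {null_rate:.2%}"

print(f"rows={row_count} email_null_rate={null_rate:.2%}")
```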
Posted 1 week ago
6.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description
Data Engineer - Azure Databricks, PySpark, Python, Airflow - Chennai/Pune, India (6-10 years experience only)
YOU’LL BUILD TECH THAT EMPOWERS GLOBAL BUSINESSES
Our Connect Technology teams are working on our new Connect platform, a unified, global, open data ecosystem powered by Microsoft Azure. Our clients around the world rely on Connect data and insights to innovate and grow. As a Data Engineer, you’ll be part of a team of smart, highly skilled technologists who are passionate about learning and supporting cutting-edge technologies such as Spark, Scala, PySpark, Databricks, Airflow, SQL, Docker, Kubernetes, and other data engineering tools. These technologies are deployed using DevOps pipelines leveraging Azure, Kubernetes, Jenkins and Bitbucket/GitHub.
Responsibilities
Develop, test, troubleshoot, debug, and make application enhancements leveraging Spark, PySpark, Scala, Pandas, Databricks, Airflow, and SQL as the core development technologies.
Deploy application components using CI/CD pipelines.
Build utilities for monitoring and automating repetitive functions.
Collaborate with Agile cross-functional teams - internal and external clients including Operations, Infrastructure, and Tech Ops.
Collaborate with the Data Science team to productionize ML models.
Participate in a rotational support schedule to respond to customer queries and deploy bug fixes in a timely and accurate manner.
Qualifications
6-10 years of applicable software engineering experience.
Strong fundamentals with experience in Big Data technologies: Spark, PySpark, Scala, Pandas, Databricks, Airflow, SQL.
Must have experience in cloud technologies, preferably Microsoft Azure.
Must have experience in performance optimization of Spark workloads.
Good to have experience with DevOps technologies such as GitHub, Kubernetes, Jenkins, Docker.
Good to have knowledge of Snowflake.
Good to have knowledge of relational databases, preferably PostgreSQL.
Excellent English communication skills, with the ability to effectively interface across cross-functional technology teams and the business.
Minimum B.S. degree in Computer Science, Computer Engineering or a related field.
Additional Information
Enjoy a flexible and rewarding work environment with peer-to-peer recognition platforms. Recharge and revitalize with the help of wellness plans made for you and your family. Plan your future with financial wellness tools. Stay relevant and upskill yourself with career development opportunities.
Our Benefits
Flexible working environment. Volunteer time off. LinkedIn Learning. Employee Assistance Program (EAP).
About NIQ
NIQ is the world’s leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights - delivered with advanced analytics through state-of-the-art platforms - NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world’s population. For more information, visit NIQ.com
Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook
Our commitment to Diversity, Equity, and Inclusion
NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce.
We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action-Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
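The posting above lists performance optimization of Spark workloads as a must-have; one of the most common such optimizations, broadcasting a small dimension table to avoid a shuffle-heavy join, is sketched below. The table paths, join key and partition count are illustrative assumptions.

```python
# Minimal sketch: broadcast the small side of a join so the large fact table
# is not shuffled, then control the output file count. Paths and names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("join-optimization").getOrCreate()

facts = spark.read.parquet("/mnt/curated/sales_facts/")   # large table, placeholder path
stores = spark.read.parquet("/mnt/curated/store_dim/")    # small table, placeholder path

# Each executor joins against a local copy of the dimension instead of shuffling facts.
joined = facts.join(F.broadcast(stores), on="store_id", how="left")

# Coalesce before writing to avoid producing thousands of tiny output files.
joined.coalesce(64).write.mode("overwrite").parquet("/mnt/curated/sales_enriched/")
```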
Posted 1 week ago