
453 Data Engineer Jobs - Page 6

Set up a Job Alert
JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

10.0 - 20.0 years

37 - 50 Lacs

Hyderabad / Secunderabad

Hybrid

Job Objective: We are looking for an Azure Data and Analytics Solutions Architect with a focus on data engineering to join our growing Data and Analytics consulting organization. The role plays a key part in the digital journey of our globally distributed clients, with Data & AI at its core. You will drive exploratory sessions to identify opportunities and arrive at Data and Analytics solutions that may involve technology, process, and people. Once solutions are identified, you will orchestrate, architect, and technically drive them, staying hands-on when and where needed. You will also mentor, train, and enable consultants at various levels of experience and expertise. The ideal applicant holds informed opinions grounded in market research, technical trends, and lessons learned from experience.

Required Qualifications: Education: BE, ME/MTech, MCA, MSc, MBA, or equivalent industry experience.

Preferred Qualifications & Skills:
- 10-18 years of relevant experience architecting, designing, developing, and delivering data solutions on premises and, predominantly, on Azure Cloud Data & AI services; batch, real-time, and hybrid solutions with high velocity and large volumes. Experience with traditional big data frameworks such as Hadoop is beneficial.
- Architectural advisory for big data solutions on Azure and the Microsoft technology stack.
- Pre-sales and account mining to find new opportunities and propose the right solutions for a given context.
- Data warehousing concepts, dimensional modelling, tabular modelling, Star and Snowflake schemas, MDX/DAX, etc.
- Strong hands-on technical knowledge across most of: SQL, SQL Warehouse, Azure Data Factory, Azure Storage accounts, Data Lake, Databricks, Azure Functions, Synapse, Stream Analytics, and Power BI or another visualization tool.
- Working with NoSQL databases such as Cosmos DB, and with various file formats and storage types.
- ETL- and ELT-based data orchestration for batch and real-time data.
- Strong programming skills, with experience and expertise in at least one of Java, Python, Scala, or C#/.NET.
- Driving decisions collaboratively, resolving conflicts, and ensuring follow-through, with exceptional verbal and written communication skills.
- Experience working on real-time, end-to-end projects using Agile or Waterfall methodologies and associated tools.

Responsibilities:
- Understand the client scenario, derive or understand business requirements, and propose the right data solution architecture using both on-premises and cloud-based services.
- Create scalable and efficient data architectures that respect data integrity, quality, security, and reuse, among other aspects, laying the foundation for present and future scalable solutions.
- Understand and communicate not only data and technical terms but also functional and business terms with stakeholders, focusing on true business value delivered through efficient architecture and technical solutions.
- Have an opinion and advise clients on the right path, aligned with their business, data, and technical strategies as well as market trends in Data & AI solutions.
- Establish data architecture based on modern data-driven principles such as flexibility at scale, parallel and distributed processing, and democratized data access that makes teams more productive.
- Provide thought leadership: share points of view, ideate and deliver webinars, and be the custodian of best practices.
- Ensure that solutions exhibit high levels of performance, security, scalability, maintainability, appropriate reusability, and reliability upon deployment.
- Maintain and upgrade technical skills, keeping up to date with market trends; educate and guide both customers and colleagues.

Nice-to-Have Skills: Experience or knowledge of other cloud data solutions such as AWS or GCP; expertise or knowledge of visualization/reporting tools such as Qlik, Power BI, or Tableau; expertise in one or more data domains.
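To ground the kind of batch pipeline work this posting describes, here is a minimal, hedged PySpark sketch that lands raw CSV files from ADLS as a curated Delta table; the storage paths, column names, and target table name are placeholders for illustration, not details from the posting.

```python
# Minimal Azure Databricks-style batch ETL sketch: raw CSV in ADLS -> curated Delta table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-batch-etl").getOrCreate()

raw = (spark.read
       .option("header", True)
       .csv("abfss://raw@examplelake.dfs.core.windows.net/orders/"))   # placeholder ADLS path

curated = (raw
           .withColumn("order_ts", F.to_timestamp("order_ts"))
           .withColumn("amount", F.col("amount").cast("double"))
           .dropDuplicates(["order_id"])
           .filter(F.col("amount") > 0))

(curated.write
 .format("delta")
 .mode("overwrite")
 .saveAsTable("analytics.orders"))   # placeholder target table
```

In a real engagement the same shape would typically be parameterized by load date and orchestrated by Azure Data Factory or Synapse pipelines rather than run ad hoc.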

Posted 3 weeks ago

Apply

6.0 - 11.0 years

25 - 35 Lacs

Bengaluru

Work from Office

Hi, (On-the-Spot Offer) We are looking for suitable candidates for a face-to-face drive for a Tier A company on Saturday, 23rd Aug 2025, in Bengaluru for the role of Gen AI Developer. Interested candidates should send their resume to Kanishk.mittal@thehrsolution.in. Face-to-face interview; direct payroll role; on-the-spot selection and joining. Experience range: 4 to 12 years. Location: Bengaluru, India.

JD - Gen AI Engineer/Developer

Role Overview: We are seeking a skilled OpenAI Developer to join our dynamic team. The ideal candidate will have a robust understanding of AI and machine learning, with particular expertise in integrating OpenAI models with Microsoft's Power Platform (including Power Automate and Power Virtual Agents), Azure Cognitive Services, and agentic AI frameworks. This role also requires experience in Azure deployment of AI solutions and modern Autogen-based architectures for building intelligent multi-agent systems.

Key Responsibilities:
- Design, develop, and optimize OpenAI-powered applications, including multi-agent frameworks, chatbots, and AI-based customer support systems.
- Leverage agentic technologies (e.g., the Autogen framework) to implement AI agents capable of collaborating and completing complex tasks autonomously.
- Build and maintain backend services using Python or .NET to support OpenAI integrations.
- Work with custom datasets, applying techniques such as chunking, embedding, and vector search for model fine-tuning and retrieval-augmented generation (RAG).
- Integrate Azure Cognitive Services (e.g., Text Analytics, Translator, Form Recognizer) to enhance the functionality and intelligence of AI solutions.
- Ensure scalable and secure deployment of AI solutions on Azure, using services like Azure Functions, Azure App Service, and Azure Kubernetes Service (AKS).
- Regularly evaluate and fine-tune GPT models to ensure high performance and relevance.
- Collaborate with cross-functional teams, including product, design, and QA, to ensure seamless development and deployment pipelines.
- Stay up to date with advancements in OpenAI, Azure AI, and agent-based technologies, and proactively contribute innovative ideas.

Required Skills and Experience:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Proficiency in Python for backend and AI integration development.
- Strong experience with OpenAI GPT models and their practical applications.
- Experience with agentic technologies (e.g., Autogen, LangGraph, CrewAI) for multi-agent systems.
- Familiarity with Microsoft Azure Cognitive Services and Azure AI Studio.
- Solid grasp of machine learning fundamentals, including training, fine-tuning, and evaluating language models.
- Practical knowledge of Natural Language Processing (NLP) techniques.
- Hands-on experience with CI/CD tools such as Google Cloud Build, Jenkins, GitHub Actions, or Azure DevOps.
- Ability to work collaboratively in an agile, team-oriented environment under tight deadlines.
- Excellent problem-solving, debugging, and analytical skills.

Desirable Skills:
- Master's degree or higher in AI, ML, or a related technical field.
- Prior experience in chatbot development for customer service or enterprise use cases.
- Certification(s) in Microsoft Azure, OpenAI, or machine learning technologies.
- Experience deploying and managing OpenAI solutions in Azure environments.
- Familiarity with RAG architectures, Azure AI Search, and blob storage indexing.
- Knowledge of Copilot Studio, Power Platform AI Builder, or conversational AI tools.
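As a rough illustration of the chunking, embedding, and vector-search workflow mentioned in the responsibilities above, the snippet below sketches a minimal retrieval step with the OpenAI Python client and NumPy; the embedding model name, chunk sizes, and source document are placeholder assumptions, not details from the posting.

```python
# Minimal RAG retrieval sketch: chunk a document, embed the chunks,
# then find the chunks most similar to a user query.
import numpy as np
from openai import OpenAI  # assumes the openai>=1.x client and an API key in the environment

client = OpenAI()
EMBED_MODEL = "text-embedding-3-small"  # placeholder model choice

def chunk(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows."""
    return [text[i:i + size] for i in range(0, len(text), size - overlap)]

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model=EMBED_MODEL, input=texts)
    return np.array([d.embedding for d in resp.data])

document = open("policy_manual.txt").read()       # placeholder source document
chunks = chunk(document)
chunk_vecs = embed(chunks)

query_vec = embed(["How do I reset a customer password?"])[0]
scores = chunk_vecs @ query_vec / (
    np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(query_vec))
top_chunks = [chunks[i] for i in np.argsort(scores)[::-1][:3]]
# top_chunks would then be passed to a GPT model as grounding context for the answer.
```

Production systems described in postings like this would normally swap the in-memory similarity search for a managed vector index (e.g., Azure AI Search), but the data flow stays the same.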

Posted 4 weeks ago

Apply

5.0 - 10.0 years

15 - 25 Lacs

Chennai

Work from Office

Hiring For Top IT Company - Designation: AWS Data Engineer. Skills: AWS, Glue, Lambda, SQL, Python, Redshift, S3. Location: Chennai. Experience: 5+ yrs. Call: Akshita - 9785478741, Surbhi - 8058357806, Ambika - 9672301543. Thanks, Team Converse

Posted 4 weeks ago

Apply

8.0 - 10.0 years

12 - 16 Lacs

Indore, Hyderabad, Ahmedabad

Work from Office

Notice Period: Immediate joiners or within 15 days preferred.

Share your resume with: current CTC, expected CTC, notice period, and preferred job location.

Primary Skills: MSSQL, Redshift, Snowflake; T-SQL, LinkSQL, stored procedures; ETL pipeline development; query optimization and indexing; schema design and partitioning; data quality, SLAs, data refresh; source control (Git/Bitbucket), CI/CD; data modeling, versioning; performance tuning and troubleshooting.

What You Will Do:
- Design scalable, partitioned schemas for MSSQL, Redshift, and Snowflake.
- Optimize complex queries, stored procedures, indexing, and performance tuning.
- Build and maintain robust data pipelines to ensure timely, reliable delivery of data.
- Own SLAs for data refreshes, ensuring reliability and consistency.
- Collaborate with engineers, analysts, and DevOps to align data models with product and business needs.
- Troubleshoot performance issues, implement proactive monitoring, and improve workflows.
- Enforce best practices for data security, governance, and compliance.
- Use schema migration/versioning tools for database changes.

What You'll Bring:
- Bachelor's or Master's in Computer Science, Engineering, or a related field.
- 8+ years of experience in database engineering or backend data systems.
- Expertise in MySQL, Redshift, Snowflake, and schema optimization.
- Strong experience writing functions, procedures, and robust SQL scripts.
- Proficiency with ETL processes, data modeling, and data freshness SLAs.
- Experience handling production performance issues and being the go-to database expert.
- Hands-on experience with Git, CI/CD pipelines, and data observability tools.
- Strong problem-solving, collaboration, and analytical skills.

If you're interested and meet the above criteria, please share your resume with your current CTC, expected CTC, notice period, and preferred job location. Immediate or 15-day joiners will be prioritized.
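As a rough sketch of the partitioned schema design and query tuning described above, the snippet below creates a Redshift-style fact table with distribution and sort keys and runs one date-bounded query through psycopg2; the table, columns, and connection string are illustrative assumptions only.

```python
# Illustrative only: a distribution/sort-key layout for a Redshift fact table,
# plus a date-bounded aggregation that benefits from the sort key.
import psycopg2

DDL = """
CREATE TABLE IF NOT EXISTS sales_fact (
    sale_id      BIGINT        NOT NULL,
    customer_id  BIGINT        NOT NULL,
    sale_date    DATE          NOT NULL,
    amount       DECIMAL(12,2)
)
DISTKEY (customer_id)   -- co-locate rows for customer-level joins
SORTKEY (sale_date);    -- prune blocks for date-range scans
"""

QUERY = """
SELECT customer_id, SUM(amount) AS total
FROM sales_fact
WHERE sale_date BETWEEN %s AND %s
GROUP BY customer_id;
"""

conn = psycopg2.connect("dbname=analytics host=example-cluster user=etl")  # placeholder DSN
with conn, conn.cursor() as cur:
    cur.execute(DDL)
    cur.execute(QUERY, ("2025-01-01", "2025-01-31"))
    rows = cur.fetchall()
```

The equivalent choices in MSSQL or Snowflake would be made with table partitioning/indexing and clustering keys respectively; the sketch only shows the Redshift flavor.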

Posted 4 weeks ago

Apply

4.0 - 8.0 years

9 - 19 Lacs

Gurugram, Chennai, Bengaluru

Work from Office

Role & responsibilities Qualifications: Bachelor's degree in information technology, or related field. . Experience: 4+ years of experience in data engineering or related roles. Proven experience with Informatica IDMC. Strong Unix/Linux skills, including scripting and system administration Proficiency with AWS cloud services and RDS databases. . Proficiency in batch orchestration tools such as AutoSys and Apache Airflow, . Excellent problem-solving and analytical skills Strong understanding of data integration and ETL processes. Ability to working a fast-paced, dynamic environment . Strong communication and interpersonal skills. . Proficiency with CICD pipelines and DevOps practices

Posted 4 weeks ago

Apply

9.0 - 13.0 years

30 - 45 Lacs

Bengaluru

Remote

Lead Data Engineer - What You Will Do: As a PR3 Lead Data Engineer, you will be instrumental in driving our data strategy, ensuring data quality, and leading the technical execution of a small, impactful team. Your responsibilities will include:

Team Leadership: Establish the strategic vision for the evolution of our data products and technology solutions, then provide technical leadership and guidance to a small team of Data Engineers in executing the roadmap. Champion and enforce best practices for data quality, governance, and architecture within your team's work. Embody a product mindset over the team's data. Oversee the team's use of Agile methodologies (e.g., Scrum, Kanban), ensuring smooth and predictable delivery with an overt focus on continuous improvement.

Data Expertise & Domain Knowledge: Actively seek out, propose, and implement cutting-edge approaches to data transfer, transformation, analytics, and data warehousing to drive innovation. Design and implement scalable, robust, and high-quality ETL processes to support growing business demand for information, delivering data as a reliable service that directly influences decision making. Develop a profound understanding and "feel" for the business meaning, lineage, and context of each data field within our domain.

Communication & Stakeholder Partnership: Collaborate with other engineering teams and business partners, proactively managing dependencies and holding them accountable for their contributions to ensure successful project delivery. Actively engage with data consumers to achieve a deep understanding of their specific data usage, pain points, and current gaps, then plan initiatives to implement improvements collaboratively. Clearly articulate project goals, technical strategies, progress, challenges, and business value to both technical and non-technical audiences. Produce clear, concise, and comprehensive documentation.

Your Qualifications: At Vista, we value the experience and potential that individual team members add to our culture. Please don't hesitate to apply even if you don't meet the exact qualifications; we look forward to learning more about you!
- Bachelor's or Master's degree in computer science, data engineering, or a related field.
- 10+ years of professional experience, with at least 6 years of hands-on Data Engineering, specifically in e-commerce or direct-to-consumer, and 4 years of team leadership.
- Demonstrated experience leading a team of data engineers, providing technical guidance, and coordinating project execution.
- Stakeholder management experience and excellent communication skills.
- Strong knowledge of SQL and data warehousing concepts is a must.
- Strong knowledge of data modeling concepts and hands-on experience designing complex multi-dimensional data models.
- Strong hands-on experience designing and managing scalable ETL pipelines in cloud environments with large-volume datasets (both structured and unstructured data).
- Proficiency with cloud services in AWS (preferred), including S3, EMR, RDS, Step Functions, Fargate, Glue, etc.
- Critical hands-on experience with cloud-based data platforms (Snowflake strongly preferred).
- Data visualization experience with reporting and data tools (preferably Looker with LookML skills).
- Coding mastery in at least one modern programming language: Python (strongly preferred), Java, Golang, PySpark, etc.
- Strong knowledge of production standards such as versioning, CI/CD, data quality, documentation, and automation.
- Problem-solving and multi-tasking ability in a fast-paced, globally distributed environment.

Nice To Have:
- Experience with API development on enterprise platforms, with GraphQL APIs being a clear plus.
- Hands-on experience designing DBT data pipelines.
- Knowledge of finance, accounting, supply chain, logistics, operations, or procurement data is a plus.
- Experience managing work in Jira and writing documentation in Confluence.
- Proficiency in AWS account management, including IAM, infrastructure, and monitoring for health, security, and cost optimization.
- Experience with Gen AI/ML tools for enhancing data pipelines or automating analysis.

Why You'll Love Working Here: There is a lot to love about working at Vista. We are an award-winning Remote-First company. We're an inclusive community. We're growing (which means you can too). And to help orient us all in the same direction, we have our Vista Behaviors, which exemplify the behavioral attributes that make us a culturally strong and high-performing team.

Our Team: Enterprise Business Solutions. Vista's Enterprise Business Solutions (EBS) domain is working to make our company one of the most data-driven organizations to support Finance, Supply Chain, and HR functions. The cross-functional team includes product owners, analysts, technologists, data engineers and more - all focused on providing Vista with cutting-edge tools and data we can use to deliver jaw-dropping customer value. EBS team members are empowered to learn new skills, communicate openly, and be active problem-solvers.

Join our EBS Domain as a Lead Data Engineer! This Lead level within the organization will be responsible for the work of a small team of data engineers, focusing not only on implementations but also on operations and support. The Lead Data Engineer will implement best practices, data standards, and reporting tools. The role will oversee and manage the work of other data engineers as well as being an individual contributor. This role has a lot of opportunity to impact general ETL development and implementation of new solutions. We will look to the Lead Data Engineer to modernize data technology solutions in EBS, including the opportunity to work on modern warehousing, finance, and HR datasets and integration technologies. This role will require an in-depth understanding of cloud data integration tools and cloud data warehousing, with a strong and pronounced ability to lead and execute initiatives to tangible results.

Posted 4 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

Karnataka

On-site

A career at HARMAN Technology Services (HTS) offers you the opportunity to be part of a global, multi-disciplinary team dedicated to leveraging the power of technology to drive innovation and shape the future. At HARMAN HTS, you will tackle challenges by creating cutting-edge solutions that combine the physical and digital realms, making technology a dynamic force for addressing challenges and meeting humanity's needs. Working at the forefront of cross-channel UX, cloud computing, insightful data analysis, IoT, and mobility, you will empower companies to innovate, enter new markets, and enhance customer experiences. As a Data Engineer - Microsoft Fabric at HARMAN, your primary responsibility will be to develop and implement data engineering projects, including enterprise data hubs or big data platforms. You will design and establish data pipelines to enhance the efficiency and repeatability of data science projects, ensuring that data architecture solutions align with business requirements and organizational needs. Collaborating with stakeholders, you will identify data requirements, develop data models, and create data flow diagrams. Working closely with cross-functional teams, you will integrate, transform, and load data across various platforms and systems effectively, while also implementing data governance policies to ensure secure and efficient data management. To excel in this role, you should possess expertise in ETL and data integration tools such as Informatica, Qlik Talend, and Apache NiFi, along with knowledge of cloud computing platforms like AWS, Azure, or Google Cloud. Proficiency in programming languages such as Python, Java, or Scala, as well as experience with data visualization tools like Tableau, Power BI, or QlikView, is essential. Additionally, familiarity with analytics, machine learning concepts, relational databases (e.g., MySQL, PostgreSQL, Oracle), and NoSQL databases (e.g., MongoDB, Cassandra) is required. A strong background in big data technologies such as Hadoop, Spark, Snowflake, Databricks, and Kafka will be beneficial, along with expertise in data modeling, data warehousing, and data integration techniques. As a key contributor to the growth of the Center of Excellence (COE) and a leader in influencing client revenues through data and analytics solutions, you will guide a team of data engineers, oversee the development and deployment of data solutions, and define new data services and offerings. Your role will involve building strong client relationships, aligning with business goals, and driving innovation in data services. You will also stay updated on the latest data trends, collaborate with stakeholders, and communicate the capabilities and achievements of the Data team effectively. To be eligible for this position, you should have 4-5 years of experience in the information technology industry, with a focus on data engineering and architecture, and a proven track record of leading and setting up data practices in IT services or niche organizations. A master's or bachelor's degree in relevant fields such as computer science, data science, or engineering is preferred, along with experience in creating data and analytics solutions across various domains. Strong problem-solving, communication, and collaboration skills, along with expertise in data visualization and reporting tools, are essential for success in this role. 
At HARMAN, we offer employee discounts on our premium products, professional development opportunities through HARMAN University, and an inclusive work environment that fosters personal and professional growth. Join our talented team at HARMAN and be part of a culture that values diversity, encourages innovation, and supports individuality. If you are ready to make a lasting impact through innovation and technology, we invite you to join our talent community today.

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

Do you have a curious mind, want to be involved in the latest technology trends, and like to solve problems that have a meaningful benefit to hundreds of users across the bank? Join our Tech Services - Group Chief Technology Office team and become a core contributor to the execution of the bank's global AI strategy, particularly in helping the bank deploy AI models quickly and efficiently! We are looking for an experienced Data Engineer or ML Engineer to drive the delivery of an innovative ecosystem of tools and services. In this AI-focused role, you will contribute to the development of an SDK for Data Producers across the firm to build high-quality autonomous Data Products for cross-divisional consumption, and for Data Consumers (e.g. Data Scientists, Quantitative Analysts, Model Developers, Model Validators, and AI agents) to easily discover and access data and build AI use-cases.

Responsibilities may include:
- Country lead of other Senior Engineers
- Direct interaction with product owners and internal users to identify requirements, development of technical solutions, and execution
- Leading development of an SDK (Software Development Kit) to automatically capture data product, dataset, and AI/ML model metadata, and leveraging LLMs to generate descriptive information about assets
- Integration and publication of metadata into UBS's AI Use-case inventory, model artifact registry, and Enterprise Data Mesh data product and dataset catalogue for discovery and regulatory compliance purposes
- Design and implementation of services that seamlessly collect runtime evidence and operational information about a data product or model and publish it to appropriate visualization tools
- Creation of a collection of starters/templates that accelerate the creation of new data products by leveraging the latest tools and services and providing diverse and rich experiences to the Devpod ecosystem
- Design and implementation of data contracts and fine-grained access mechanisms to enable data consumption on a "need to know" basis

You will be part of the Data Mesh & AI Tooling team, a newly established function within Group Chief Technology Office. We provide solutions to help the firm embrace Artificial Intelligence and Machine Learning. We work with the divisions and functions of the firm to provide innovative solutions that integrate with their existing platforms to provide new and enhanced capabilities. One of our current aims is to help a data scientist get a model into production in an accelerated timeframe with the appropriate controls and security. We offer a number of key capabilities: data discovery that uses AI/ML to help users find data and obtain access in a secure and controlled manner; an AI Inventory that describes the models that have been built, helping users build their own use cases and validate them with Model Risk Management; a containerized model development environment for users to experiment and produce their models; and a streamlined MLOps process that helps them track their experiments and promote their models.

At UBS, we embrace flexible ways of working when the role permits. We offer different working arrangements like part-time, job-sharing, and hybrid (office and home) working. Our purpose-led culture and global infrastructure help us connect, collaborate, and work together in agile ways to meet all our business needs. UBS is the world's largest and the only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management, and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors. We have a presence in all major financial centers in more than 50 countries. UBS is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills, and experiences within our workforce.

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

You should have at least 5 years of experience working as a Data Engineer. Your expertise should include a strong background in Azure Cloud services and proficiency in tools such as Azure Databricks, PySpark, and Delta Lake. Solid experience in Python and FastAPI for API development is essential, as is familiarity with Azure Functions for serverless API deployments. Experience managing ETL pipelines using Apache Airflow is also required, along with hands-on experience with databases such as PostgreSQL and MongoDB. Strong SQL skills and the ability to work with large datasets are key for this role.
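As a loose illustration of the Python/FastAPI API work this posting asks for, here is a minimal read-only endpoint; the route, model fields, and in-memory data source are invented for the example and are not from the posting.

```python
# Minimal FastAPI sketch: one typed endpoint returning a record by id.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="example-data-api")  # placeholder service name

class Customer(BaseModel):
    id: int
    name: str
    city: str

# Stand-in for a PostgreSQL/MongoDB lookup.
_FAKE_DB = {1: Customer(id=1, name="Asha", city="Ahmedabad")}

@app.get("/customers/{customer_id}", response_model=Customer)
def get_customer(customer_id: int) -> Customer:
    customer = _FAKE_DB.get(customer_id)
    if customer is None:
        raise HTTPException(status_code=404, detail="customer not found")
    return customer

# Run locally with:  uvicorn main:app --reload
```

The same app object can be wrapped for serverless hosting (e.g., behind Azure Functions) without changing the route definitions.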

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

Andhra Pradesh

On-site

As a Senior Data Engineer, you will be responsible for designing, implementing, and maintaining scalable data pipelines for our organization. Your primary location of work will be in Visakhapatnam, Andhra Pradesh. This is a permanent position with a compensation package that aligns with industry standards. Your key responsibilities will include developing efficient data processing solutions, optimizing data workflows, and ensuring the reliability and integrity of our data infrastructure. You will collaborate with cross-functional teams to understand data requirements and contribute to the overall data strategy of the company. The ideal candidate for this role will have a strong background in data engineering, experience with big data technologies, and a proven track record of delivering high-quality data solutions. Additionally, strong analytical skills, attention to detail, and the ability to work in a fast-paced environment are essential for success in this position. If you are passionate about leveraging data to drive business insights and are looking for a challenging opportunity to make a significant impact, we encourage you to apply for this Senior Data Engineer position.

Posted 1 month ago

Apply

2.0 - 6.0 years

0 Lacs

Karnataka

On-site

A career at HARMAN Technology Services (HTS) offers you the opportunity to be part of a global, multi-disciplinary team dedicated to leveraging the power of technology to drive innovation and shape the future. At HARMAN HTS, you will tackle challenges by creating cutting-edge solutions that combine the physical and digital realms, making technology a dynamic force for problem-solving and meeting the needs of humanity. You will work at the forefront of cross-channel UX, cloud technologies, insightful data, IoT, and mobility, empowering companies to develop new digital business models, enter new markets, and enhance customer experiences.

As a Data Engineer - Microsoft Fabric at HARMAN, you will be responsible for developing and implementing data engineering projects, including enterprise data hubs, Big Data platforms, data lakehouses, and more. Your role will involve creating data pipelines to streamline data science projects, designing and implementing data architecture solutions, collaborating with stakeholders to identify data requirements, and ensuring effective data integration, transformation, and loading across various platforms and systems. You will also play a key role in developing and implementing data governance policies, evaluating new technologies to enhance data management processes, and ensuring compliance with regulatory standards for data security.

To excel in this role, you should have the ability to evaluate, design, and develop ETL jobs and data integration approaches, along with cloud-native data platform experience in AWS or the Microsoft stack. You should stay updated on the latest data trends, possess a robust knowledge of ETL, data transformation, and data standardization approaches, and be able to lead and guide a team of data engineers effectively. Additionally, you should have experience working on data and analytics solutions, a strong educational background in computer science or related fields, and a proven track record in creating data and analytics solutions.

At HARMAN, we offer access to employee discounts on a range of world-class products, professional development opportunities through HARMAN University, and an inclusive work environment that fosters personal and professional growth. We believe in creating a supportive culture where every employee is valued, empowered, and encouraged to share their ideas and unique perspectives. Join us at HARMAN and be part of a team that is committed to innovation, excellence, and making a lasting impact in the world of technology.

Posted 1 month ago

Apply

3.0 - 7.0 years

0 Lacs

Coimbatore, Tamil Nadu

On-site

As a Data Engineer at our IT services organization, you will be responsible for developing and maintaining scalable data processing systems using Apache Spark and Python. Your role will involve designing and implementing Big Data solutions that integrate data from various sources, including RDBMS, NoSQL databases, and cloud services. Additionally, you will lead a team of data engineers to ensure efficient project execution and adherence to best practices.

Your key responsibilities will include optimizing Spark jobs for performance and scalability, collaborating with cross-functional teams to gather requirements, and delivering data solutions that meet business needs. You will also be involved in implementing ETL processes and frameworks to facilitate data integration and utilizing cloud data services such as GCP for data storage and processing. Applying Agile methodologies to manage project timelines and deliverables will be an essential part of your role.

To excel in this position, you should have proficiency in PySpark and Apache Spark, along with strong knowledge of Python for data engineering tasks. Hands-on experience with Google Cloud Platform (GCP) and expertise in designing and optimizing Big Data pipelines are crucial. Leadership skills in data engineering team management, understanding of ETL frameworks and distributed computing, familiarity with cloud-based data services, and experience with Agile delivery are also required.

We are looking for candidates with a Bachelor's degree in Computer Science, Information Technology, or a related field. It is essential to stay updated with the latest trends and technologies in Big Data and cloud computing to contribute effectively to our projects. If you are passionate about data engineering and eager to work in a dynamic and innovative environment, we encourage you to apply for this exciting opportunity.
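To make the Spark-tuning duties above concrete, here is a small, hedged PySpark sketch showing a broadcast join and a partitioned Parquet write; the GCS bucket, paths, and column names are placeholders, not details from the posting.

```python
# Minimal PySpark sketch: broadcast a small dimension table to avoid a shuffle,
# then write the result partitioned by date for cheaper downstream scans.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-enrichment").getOrCreate()

orders = spark.read.parquet("gs://example-bucket/raw/orders/")          # placeholder path
countries = spark.read.csv("gs://example-bucket/ref/countries.csv",
                           header=True, inferSchema=True)               # small lookup table

enriched = (
    orders
    .join(F.broadcast(countries), on="country_code", how="left")        # broadcast join
    .withColumn("order_date", F.to_date("order_ts"))
    .filter(F.col("amount") > 0)
)

(enriched
 .repartition("order_date")                  # group rows by partition value before writing
 .write.mode("overwrite")
 .partitionBy("order_date")
 .parquet("gs://example-bucket/curated/orders/"))
```

Broadcasting the small side of a join and partitioning output by a commonly filtered column are two of the most common levers for the "optimize Spark jobs" responsibility this posting mentions.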

Posted 1 month ago

Apply

6.0 - 10.0 years

0 Lacs

Hyderabad, Telangana

On-site

You will be responsible for working as an AWS Data Engineer at YASH Technologies. Your role will involve tasks related to data collection, processing, storage, and integration. It is essential to have proficiency in data Extract-Transform-Load (ETL) processes and data pipeline setup, as well as knowledge of database and data warehouse technologies on the AWS cloud platform. Prior experience in handling time-series and unstructured data types, such as image data, is a necessary requirement for this position. Additionally, you should have experience in developing data analytics software on the AWS cloud, either as a full-stack or back-end developer. Skills in software quality assessment, testing, and API integration are also crucial for this role.

Working at YASH, you will have the opportunity to build a career in a supportive and inclusive team environment. The company focuses on continuous learning and growth by providing career-oriented skilling models and using technology for upskilling and reskilling activities. You will be part of a Hyperlearning workplace grounded in the principles of flexible work arrangements, emotional positivity, self-determination, trust, transparency, open collaboration, and support for achieving business goals. YASH Technologies offers stable employment with a great atmosphere and an ethical corporate culture.

Posted 1 month ago

Apply

8.0 - 13.0 years

30 - 45 Lacs

Pune

Work from Office

OSIsoft PI System: Extensive hands-on experience with the OSIsoft PI System (PI Data Archive, AF, PI Vision) is mandatory. Data Engineer with a strong focus on industrial data or IoT. Skills: SQL, Python, PowerShell.

Posted 1 month ago

Apply

9.0 - 14.0 years

7 - 14 Lacs

Hyderabad, Pune

Hybrid

Role & responsibilities Key Skills Required are 8 years of handson experience in cloud application architecture with a focus on creating scalable and reliable software systems 8 Years Experience using Google Cloud Platform GCP including but not restricting to services like Bigquery Cloud SQL Fire store Cloud Composer Experience on Security identity and access management Networking protocols such as TCPIP and HTTPS Network security design including segmentation encryption logging and monitoring Network topologies load balancing and segmentation Python for Rest APIs and Microservices Design and development guidance Python with GCP Cloud SQLPostgreSQL BigQuery Integration of Python API to FE applications built on React JS Unit Testing frameworks Python unit test pytest Java junit spock and groovy DevOps automation process like Jenkins Docker deployments etc Code Deployments on VMs validating an overall solution from the perspective of Infrastructure performance scalability security capacity and create effective mitigation plans Automation technologies Terraform or Google Cloud Deployment Manager Ansible Implementing solutions and processes to manage cloud costs Experience in providing solution to Web Applications Requirements and Design knowledge React JS Elastic Cache GCP IAM Managed Instance Group VMs and GKE Owning the endtoend delivery of solutions which will include developing testing and releasing Infrastructure as Code Translate business requirementsuser stories into a practical scalable solution that leverages the functionality and best practices of the HSBC Executing technical feasibility assessments solution estimations and proposal development for moving identified workloads to the GCP Designing and implementing secure scalable and innovative solutions to meet Banks requirements Ability to interact and influence across all organizational levels on technical or business solutions Certified Google Cloud Architect would be an addon Create and own scaling capacity planning configuration management and monitoring of processes and procedures Create put into practice and use cloudnative solutions Lead the adoption of new cloud technologies and establish best practices for them Experience establishing technical strategy and architecture at the enterprise level Experience leading GCP Cloud project delivery Collaborate with IT security to monitor cloud privacy Architecture DevOps data and integration teams to ensure best practices are followed throughout cloud adoption Respond to technical issues and provide guidance to technical team Skills Mandatory Skills : GCP Storage,GCP BigQuery,GCP DataProc,GCP Vertex AI,GCP Spanner,GCP Dataprep,GCP Datastream,Google Analytics Hub,GCP Dataform,GCP Dataplex/Catalog,GCP Cloud Datastore/Firestore,GCP Datafusion,GCP Pub/Sub,GCP Cloud SQL,GCP Cloud Composer,Google Looker,GCP Cloud Datastore,GCP Data Architecture,Google Cloud IAM,GCP Bigtable,GCP Looker1,GCP Data Flow,GCP Cloud Pub/Sub"

Posted 1 month ago

Apply

2.0 - 6.0 years

0 Lacs

Haryana

On-site

As a Data Analyst with market research and web scraping skills at our company, located in Udyog Vihar Phase-1, Gurgaon, you will be expected to leverage your 2-5 years of experience in data analysis, particularly in competitive analysis and market research within the fashion/garment/apparel industry. A Bachelor's degree in Data Science, Computer Science, Statistics, Business Analytics, or a related field is required, while advanced degrees or certifications in data analytics or market research are considered a plus.

Your main responsibility will be to analyze large datasets to identify trends, patterns, and insights related to market trends and competitor performance. You will conduct quantitative and qualitative analyses to support decision-making in product development and strategy. Additionally, you will perform in-depth market research to track competitor performance, emerging trends, and customer preferences. Furthermore, you will design and implement data scraping solutions to gather competitor data from websites, ensuring compliance with legal standards and respect for website terms of service. Creating and maintaining organized databases with market and competitor data for easy access and retrieval will be part of your routine, along with collaborating closely with cross-functional teams to align data insights with company objectives.

To excel in this role, you should have proven experience with data scraping tools such as BeautifulSoup, Scrapy, or Selenium, proficiency in SQL, Python, or R for data analysis and manipulation, and experience with data visualization tools like Tableau, Power BI, or D3.js. Strong analytical skills and the ability to interpret data to draw insights and make strategic recommendations are essential.

If you are passionate about data analysis, market research, and web scraping and possess the required technical skills and analytical mindset, we encourage you to apply by sending your updated resume with current salary details to jobs@glansolutions.com. For any inquiries, please contact Satish at 8802749743 or visit our website at www.glansolutions.com. Join us on this exciting journey of leveraging data to drive strategic decisions and make a meaningful impact in the fashion/garment/apparel industry.
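As a rough sketch of the scraping work described above (not this company's actual pipeline), the snippet below pulls product names and prices from a hypothetical listing page with requests and BeautifulSoup; the URL and CSS classes are invented, and a real scraper must honor robots.txt and the site's terms of service.

```python
# Minimal scraping sketch: fetch one listing page and extract name/price pairs to CSV.
import csv
import requests
from bs4 import BeautifulSoup

URL = "https://example.com/collections/dresses"   # hypothetical competitor page
headers = {"User-Agent": "market-research-bot/0.1 (contact@example.com)"}

resp = requests.get(URL, headers=headers, timeout=10)
resp.raise_for_status()
soup = BeautifulSoup(resp.text, "html.parser")

rows = []
for card in soup.select("div.product-card"):           # invented CSS class
    name = card.select_one("h3.product-name")
    price = card.select_one("span.price")
    if name and price:
        rows.append({"name": name.get_text(strip=True),
                     "price": price.get_text(strip=True)})

with open("competitor_prices.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["name", "price"])
    writer.writeheader()
    writer.writerows(rows)
```

JavaScript-heavy storefronts would need Selenium or Scrapy with a rendering backend instead, but the extract-and-store pattern is the same.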

Posted 1 month ago

Apply

4.0 - 9.0 years

10 - 20 Lacs

Hyderabad

Work from Office

Greetings from TechnoGen!!! Thank you for taking the time to tell us about your competencies and skills, and for allowing us an opportunity to introduce TechnoGen; we believe your experience and expertise are relevant to a current opening with one of our clients.

About TechnoGen: TechnoGen, Inc. is an ISO 9001:2015, ISO 20000-1:2011, ISO 27001:2013, and CMMI Level 3 Global IT Services Company headquartered in Chantilly, Virginia. TechnoGen, Inc. (TGI) is a Minority & Women-Owned Small Business with over 20 years of experience providing end-to-end IT Services and Solutions to the Public and Private sectors. TGI provides highly skilled and certified professionals and has successfully executed more than 345 projects. TechnoGen is committed to helping our clients solve complex problems and achieve their goals, on time and under budget. LinkedIn: https://www.linkedin.com/company/technogeninc/about/

Job Title: Data Engineer - IT Quality
Required Experience: 4+ years
Location: Hyderabad

Job Summary: We are looking for a proactive and technically skilled Data Engineer to lead data initiatives and provide application support for the Quality, Consumer Services, and Sustainability domains. The Data Engineer in the Quality area is responsible for designing, developing, and maintaining data integration solutions to support quality processes. This role focuses on leveraging ETL tools such as Informatica Cloud, Ascend, Google Cloud Dataflow, and Composer, along with Python and Spark programming, to ensure seamless data flow, transformation, and integration across quality systems. The position is offsite and requires collaboration with business partners and IT teams to deliver end-to-end data solutions that meet regulatory and business requirements. The candidate must be willing to work on site 4 days a week in Hyderabad, during the US EST time zone.

Key Responsibilities:

Data Integration and ETL Development: Design and implement robust ETL pipelines using tools like Informatica Cloud, Ascend, Google BigQuery, Google Cloud Dataflow, and Composer to integrate data from quality systems (e.g., Veeva Vault, QMS, GBQ, PLM, Order Management systems). Develop and optimize data transformation workflows to ensure accurate, timely, and secure data processing. Use Python for custom scripting, data manipulation, and automation of ETL processes.

Data Pipeline Support and Maintenance: Monitor, troubleshoot, and resolve issues in data pipelines, ensuring high availability and performance. Implement hotfixes, enhancements, and minor changes to existing ETL workflows to address defects or evolving business needs. Ensure data integrity, consistency, and compliance with regulatory standards.

Collaboration and Stakeholder Engagement: Work closely with quality teams, business analysts, and IT stakeholders to gather requirements and translate them into technical data solutions. Collaborate with cross-functional teams to integrate quality data with other enterprise systems, such as PLM, QMS, ERP, or LIMS. Communicate effectively with remote teams to provide updates, resolve issues, and align project deliverables.

Technical Expertise: Maintain proficiency in ETL tools (Informatica Cloud, Ascend, Dataflow, Composer, GBQ) and Python for data engineering tasks. Design scalable and efficient data models to support quality reporting, analytics, and compliance requirements. Implement best practices for data security, version control, and pipeline orchestration.

Documentation and Training: Create and maintain detailed documentation for ETL processes, data flows, and system integrations. Provide guidance and training to junior team members or end users on data integration processes and tools.

Qualifications:

Education: Bachelor's degree in Computer Science, Data Engineering, Information Systems, or a related field.

Experience: 4+ years of experience as a Data Engineer, with a focus on data integration in quality or regulated environments. Hands-on experience with ETL tools such as Informatica Cloud, Ascend, Google Cloud Dataflow, and Composer. Proficiency in Python for data processing, scripting, and automation. Experience working with the Veeva application is a plus.

Technical Skills: Expertise in designing and optimizing ETL pipelines using Informatica Cloud, Ascend, Dataflow, or Composer. Strong Python programming skills for data manipulation, automation, and integration. Familiarity with cloud platforms (e.g., Google Cloud, AWS, Azure) and data integration patterns (e.g., APIs, REST, SQL). Knowledge of database systems (e.g., SQL Server, Oracle, BigQuery) and data warehousing concepts. Experience with Agile methodologies and tools like JIRA or Azure DevOps.

Soft Skills: Excellent communication and collaboration skills to work effectively with remote teams and business partners. Strong problem-solving and analytical skills to address complex data integration challenges. Ability to manage multiple priorities and deliver high-quality solutions in a fast-paced environment. Ability to work effectively in a multicultural environment and manage teams across different time zones.

Preferred Qualifications: Experience working in regulated environments. Advanced degrees or certifications (e.g., Informatica Cloud, Google Cloud Professional Data Engineer) are a plus. Experience with Agile or hybrid delivery models.

About Us: We are a leading organization committed to leveraging technology to drive business success. Our team is dedicated to innovation, collaboration, and delivering exceptional results. Join us and be a part of a dynamic and forward-thinking company.

How to Apply: Interested candidates are invited to submit their resume and cover letter detailing their relevant experience and qualifications.

Best Regards,
Syam.M | Sr. IT Recruiter
syambabu.m@technogenindia.com
www.technogenindia.com | Follow us on LinkedIn
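To ground the Dataflow/Composer ETL work described in this posting, here is a minimal Apache Beam pipeline in Python that reads CSV quality records, drops failed rows, and writes a cleaned output; the file paths, three-column schema, and runner options are assumptions for illustration, not details of the client's systems.

```python
# Minimal Apache Beam sketch: read CSV lines, keep passing records, reformat, write out.
# Runs locally with the DirectRunner; pass --runner=DataflowRunner (plus project/region/
# temp_location options) to execute on Google Cloud Dataflow.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def parse_line(line: str):
    batch_id, result, value = line.split(",")          # placeholder 3-column schema
    return {"batch_id": batch_id, "result": result, "value": float(value)}

def run():
    options = PipelineOptions()
    with beam.Pipeline(options=options) as p:
        (
            p
            | "Read" >> beam.io.ReadFromText("gs://example-bucket/quality/raw.csv",
                                             skip_header_lines=1)
            | "Parse" >> beam.Map(parse_line)
            | "KeepPassed" >> beam.Filter(lambda rec: rec["result"] == "PASS")
            | "Format" >> beam.Map(lambda rec: f'{rec["batch_id"]},{rec["value"]:.2f}')
            | "Write" >> beam.io.WriteToText("gs://example-bucket/quality/clean",
                                             file_name_suffix=".csv")
        )

if __name__ == "__main__":
    run()
```

In a Composer (Airflow) setup, a job like this would typically be triggered on a schedule alongside the Informatica Cloud or Ascend flows the posting lists.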

Posted 1 month ago

Apply

2.0 - 6.0 years

0 Lacs

Karnataka

On-site

NTT DATA is looking for a Databricks Developer to join their team in Bangalore, Karnataka, India. As a Databricks Developer, your responsibilities will include pushing data domains into a massive repository and building a large data lake by heavily leveraging Databricks.

To be considered for this role, you should have at least 3 years of experience in a Data Engineer or Software Engineer role. An undergraduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field is required, while a graduate degree is preferred. You should also have experience with data pipeline and workflow management tools, advanced working SQL knowledge, and familiarity with relational databases. Additionally, an understanding of data warehouse (DWH) systems, ELT and ETL patterns, and data models, and the ability to transform data into various models, is essential. You should be able to build processes supporting data transformation, data structures, metadata, dependency, and workload management. Experience with message queuing, stream processing, and highly scalable big data stores is also necessary. Preferred qualifications include experience with Azure cloud services such as ADLS, ADF, ADLA, and AAS. The role also requires a minimum of 2 years of experience in relevant skills.

NTT DATA is a trusted global innovator of business and technology services with a commitment to helping clients innovate, optimize, and transform for long-term success. They serve 75% of the Fortune Global 100 and have a diverse team of experts in more than 50 countries. As a Global Top Employer, NTT DATA offers services in business and technology consulting, data and artificial intelligence, industry solutions, and the development, implementation, and management of applications, infrastructure, and connectivity. They are known for providing digital and AI infrastructure solutions and are part of the NTT Group, which invests over $3.6 billion each year in R&D to support organizations and society in moving confidently into the digital future. Visit their website at us.nttdata.com for more information.

Posted 1 month ago

Apply

10.0 - 17.0 years

12 - 17 Lacs

Hyderabad, Bengaluru, Mumbai (All Areas)

Work from Office

POSITION OVERVIEW: We are seeking an experienced and highly skilled Data Engineer with deep expertise in Microsoft Fabric, MS-SQL, data warehouse architecture design, and SAP data integration. The ideal candidate will be responsible for designing, building, and optimizing data pipelines and architectures to support our enterprise data strategy. The candidate will work closely with cross-functional teams to ingest, transform, and make data (from SAP and other systems) available in our Microsoft Azure environment, enabling robust analytics and business intelligence.

KEY ROLES & RESPONSIBILITIES:
- Spearhead the design, development, deployment, testing, and management of strategic data architecture, leveraging cutting-edge technology stacks in cloud, on-premises, and hybrid environments.
- Design and implement an end-to-end data architecture within Microsoft Fabric/SQL, including Azure Synapse Analytics (incl. data warehousing); this would also encompass a Data Mesh architecture.
- Develop and manage robust data pipelines to extract, load, and transform data from SAP systems (e.g., ECC, S/4HANA, BW).
- Perform data modeling and schema design for enterprise data warehouses in Microsoft Fabric.
- Ensure data quality, security, and compliance standards are met throughout the data lifecycle.
- Enforce data security measures, strategies, protocols, and technologies, ensuring adherence to security and compliance requirements.
- Collaborate with BI, analytics, and business teams to understand data requirements and deliver trusted datasets.
- Monitor and optimize the performance of data processes and infrastructure.
- Document technical solutions and develop reusable frameworks and tools for data ingestion and transformation.
- Establish and maintain robust knowledge management structures, encompassing data architecture, data policies, platform usage policies, development rules, and more, ensuring adherence to best practices, regulatory compliance, and optimization across all data processes.
- Implement microservices, APIs, and event-driven architecture to enable agility and scalability.
- Create and maintain architectural documentation, diagrams, policies, standards, conventions, rules, and frameworks for effective knowledge sharing and handover.
- Monitor and optimize the performance, scalability, and reliability of the data architecture and pipelines.
- Track data consumption and usage patterns to ensure that infrastructure investment is effectively leveraged, through automated alert-driven tracking.

KEY COMPETENCIES:
- Microsoft Certified: Fabric Analytics Engineer Associate, or an equivalent certificate for MS SQL.
- Prior experience working in cloud environments (Azure preferred).
- Understanding of SAP data structures and SAP integration tools like SAP Data Services, SAP Landscape Transformation (SLT), or RFC/BAPI connectors.
- Experience with DevOps practices and version control (e.g., Git).
- Deep understanding of SAP architecture, data models, security principles, and platform best practices.
- Strong analytical skills with the ability to translate business needs into technical solutions.
- Experience with project coordination, vendor management, and Agile or hybrid project delivery methodologies.
- Excellent communication, stakeholder management, and documentation skills.
- Strong understanding of data warehouse architecture and dimensional modeling.
- Excellent problem-solving and communication skills.

QUALIFICATIONS / EXPERIENCE / SKILLS

Qualifications:
- Bachelor's degree in Computer Science, Information Systems, or a related field.
- Certifications such as SQL, Administrator, or Advanced Administrator are preferred.
- Expertise in data transformation using SQL, PySpark, and/or other ETL tools.
- Strong knowledge of data governance, security, and lineage in enterprise environments.
- Advanced knowledge of SQL, database procedures/packages, and dimensional modeling.
- Proficiency in Python and/or Data Analysis Expressions (DAX) (preferred, not mandatory).
- Familiarity with Power BI for downstream reporting (preferred, not mandatory).

Experience:
- 10 years of experience as a Data Engineer or in a similar role.

Skills:
- Hands-on experience with Microsoft SQL (MS-SQL) and Microsoft Fabric, including Synapse (data warehousing, notebooks, Spark).
- Experience integrating and extracting data from SAP systems, such as SAP ECC or S/4HANA, SAP BW, and SAP Core Data Services (CDS) Views or OData Services.
- Knowledge of data protection laws across countries (preferred, not mandatory).
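To illustrate one common way of pulling SAP data toward Azure-side pipelines, the sketch below pages through a CDS-exposed OData service with plain requests; the service URL, entity set, and credential handling are hypothetical and do not describe this employer's actual landscape or tooling.

```python
# Minimal sketch: page through an SAP OData v2 service exposed from a CDS view
# and collect rows for downstream loading (e.g., into Fabric or a lakehouse).
import os
import requests

BASE = "https://sap-gateway.example.com/sap/opu/odata/sap/ZSALES_CDS_SRV"  # hypothetical service
ENTITY = "SalesOrderSet"                                                   # hypothetical entity set
auth = (os.environ["SAP_USER"], os.environ["SAP_PASSWORD"])

rows, skip, page_size = [], 0, 1000
while True:
    resp = requests.get(
        f"{BASE}/{ENTITY}",
        params={"$format": "json", "$top": page_size, "$skip": skip},
        auth=auth,
        timeout=30,
    )
    resp.raise_for_status()
    batch = resp.json()["d"]["results"]      # OData v2 payload shape
    rows.extend(batch)
    if len(batch) < page_size:
        break
    skip += page_size

print(f"Fetched {len(rows)} sales order records")
```

Heavier extraction volumes would more likely go through SLT, SAP Data Services, or a managed connector, as the competencies list above implies; the snippet only shows the lightweight OData path.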

Posted 1 month ago

Apply

5.0 - 10.0 years

40 - 50 Lacs

Bengaluru

Remote

INTERESTED CANDIDATES SHARE CV TO VAIJAYANTHI.M@PARAMINFO.COM

Experience: 5-10 years. Notice: max 30 days. Location: Pan India (remote - work from home). Domain: Core Banking is a must.

Must-Have Skills:
- Cloudera Data Platform (CDP): hands-on experience
- Strong programming in Python and PySpark
- Workflow orchestration: Apache Airflow
- ETL development: batch and streaming pipelines
- DevOps practices: CI/CD, version control, automation
- Data governance and quality: security, validation, alerting

Nice-to-Have / Preferred:
- AI/ML and Generative AI exposure; use-case implementation/support
- Experience with ML workflows and model pipelines
- Familiarity with cloud-native data tools (Azure/AWS)
- Collaboration in cross-functional Agile teams

Job Description: We are seeking a highly skilled and motivated Data Engineer to join our dynamic technology team. The ideal candidate will have deep expertise in data engineering tools and platforms, particularly Apache Airflow, PySpark, and Python, with hands-on experience in Cloudera Data Platform (CDP). A strong understanding of DevOps practices and exposure to AI/ML and Generative AI use cases is highly desirable.

Key Responsibilities:
- Design, build, and maintain scalable data pipelines using Python, PySpark, and Airflow.
- Develop and optimize ETL workflows on Cloudera Data Platform (CDP).
- Implement data quality checks, monitoring, and alerting mechanisms.
- Ensure data security, governance, and compliance across all pipelines.
- Work closely with cross-functional teams to understand data requirements and deliver solutions.
- Troubleshoot and resolve issues in production data pipelines.
- Contribute to the architecture and design of the data platform.
- Collaborate with engineering teams and analysts on AI/ML and Gen AI use cases.
- Automate deployment and monitoring of data workflows using DevOps tools and practices.
- Stay updated with the latest trends in data engineering, AI/ML, and Gen AI technologies.

INTERESTED CANDIDATES SHARE CV TO VAIJAYANTHI.M@PARAMINFO.COM
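As a minimal sketch of the Airflow-orchestrated pipeline work described above, here is a small DAG with an extract task feeding a spark-submit step; the DAG id, schedule, script path, and paths are illustrative assumptions rather than anything from the posting.

```python
# Minimal Airflow 2.x DAG sketch: extract raw files, then run a Spark transform on them.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.operators.bash import BashOperator

def extract_to_staging(**context):
    # Placeholder extract step; a real task would pull from a source banking system.
    print("copying raw files to /data/staging ...")

with DAG(
    dag_id="daily_core_banking_etl",          # placeholder name
    start_date=datetime(2025, 1, 1),
    schedule_interval="0 2 * * *",            # 02:00 daily
    catchup=False,
    tags=["example"],
) as dag:
    extract = PythonOperator(task_id="extract_to_staging",
                             python_callable=extract_to_staging)

    transform = BashOperator(
        task_id="spark_transform",
        bash_command="spark-submit /opt/jobs/transform_transactions.py "
                     "--input /data/staging --output /data/curated",
    )

    extract >> transform
```

On Cloudera Data Platform the spark-submit step would typically target a CDP data hub or CDE virtual cluster, with data quality checks and alerting added as further downstream tasks.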

Posted 1 month ago

Apply

5.0 - 10.0 years

20 - 25 Lacs

Bengaluru

Hybrid

Job title: Senior Software Engineer Experience: 5- 8 years Primary skills: Python, Spark or Pyspark, DWH ETL. Database: SparkSQL or PostgreSQL Secondary skills: Databricks ( Delta Lake, Delta tables, Unity Catalog) Work Model: Hybrid (Weekly Twice) Cab Facility: Yes Work Timings: 10am to 7pm Interview Process: 3 rounds (3rd round F2F Mandatory) Work Location: Karle Town Tech Park Nagawara, Hebbal Bengaluru 560045 About Business Unit: The Architecture Team plays a pivotal role in the end-to-end design, governance, and strategic direction of product development within Epsilon People Cloud (EPC). As a centre of technical excellence, the team ensures that every product feature is engineered to meet the highest standards of scalability, security, performance, and maintainability. Their responsibilities span across architectural ownership of critical product features, driving techno-product leadership, enforcing architectural governance, and ensuring systems are built with scalability, security, and compliance in mind. They design multi cloud and hybrid cloud solutions that support seamless integration across diverse environments and contribute significantly to interoperability between EPC products and the broader enterprise ecosystem. The team fosters innovation and technical leadership while actively collaborating with key partners to align technology decisions with business goals. Through this, the Architecture Team ensures the delivery of future-ready, enterprise-grade, efficient and performant, secure and resilient platforms that form the backbone of Epsilon People Cloud. Why we are looking for you: You have experience working as a Data Engineer with strong database fundamentals and ETL background. You have experience working in a Data warehouse environment and dealing with data volume in terabytes and above. You have experience working in relation data systems, preferably PostgreSQL and SparkSQL. You have excellent designing and coding skills and can mentor a junior engineer in the team. You have excellent written and verbal communication skills. You are experienced and comfortable working with global clients You work well with teams and are able to work with multiple collaborators including clients, vendors and delivery teams. You are proficient with bug tracking and test management toolsets to support development processes such as CI/CD. What you will enjoy in this role: As part of the Epsilon Technology practice, the pace of the work matches the fast-evolving demands in the industry. You will get to work on the latest tools and technology and deal with data of petabyte-scale. Work on homegrown frameworks on Spark and Airflow etc. Exposure to Digital Marketing Domain where Epsilon is a marker leader. Understand and work closely with consumer data across different segments that will eventually provide insights into consumer behaviour's and patterns to design digital Ad strategies. As part of the dynamic team, you will have opportunities to innovate and put your recommendations forward. Using existing standard methodologies and defining as per evolving industry standards. Opportunity to work with Business, System and Delivery to build a solid foundation on Digital Marketing Domain. The open and transparent environment that values innovation and efficiency Click here to view how Epsilon transforms marketing with 1 View, 1 Vision and 1 Voice. What will you do? 
• Develop a deep understanding of the business context in which your team operates and present feature recommendations in an agile working environment.
• Lead, design, and code solutions on and off the database to ensure application access and enable data-driven decision making for the company's multi-faceted ad serving operations.
• Work closely with engineering resources across the globe to ensure enterprise data warehouse solutions and assets are actionable, accessible, and evolving in lockstep with the needs of an ever-changing business model. This role requires deep expertise in Spark and strong proficiency in ETL, SQL, and modern data engineering practices.
• Design, develop, and manage ETL/ELT pipelines in Databricks using PySpark/SparkSQL, integrating various data sources to support business operations.
• Lead in the areas of solution design, code development, quality assurance, data modelling, and business intelligence.
• Mentor junior engineers on the team.
• Stay abreast of developments in the data world in terms of governance, quality, and performance optimization.
• Run effective client meetings, understand deliverables, and drive successful outcomes.

Qualifications:
• Bachelor's degree in Computer Science or an equivalent degree is required.
• 5-8 years of data engineering experience with expertise in Apache Spark and databases (preferably Databricks) in marketing technologies and data management, and a technical understanding of these areas.
• Ability to monitor and tune Databricks workloads for high performance and scalability, adapting to business needs as required.
• Solid experience in basic and advanced SQL writing and tuning.
• Experience with Python.
• Solid understanding of CI/CD practices, with experience using Git for version control and integration on Spark data projects.
• Good understanding of disaster recovery and business continuity solutions.
• Experience with scheduling applications with complex interdependencies, preferably Airflow.
• Good experience working with geographically and culturally diverse teams.
• Understanding of data management concepts in both traditional relational databases and big data lakehouse solutions such as Apache Hive, AWS Glue, or Databricks.
• Excellent written and verbal communication skills.
• Ability to handle complex products.
• Good communication and problem-solving skills, with the ability to manage multiple priorities.
• Ability to diagnose and solve problems quickly.
• Diligent, able to multi-task, prioritize, and change priorities quickly, with good time management.
• Good to have: knowledge of cloud platforms (cloud security) and familiarity with Terraform or other infrastructure-as-code tools.

About Epsilon: Epsilon is a global data, technology and services company that powers the marketing and advertising ecosystem. For decades, we have provided marketers from the world's leading brands the data, technology and services they need to engage consumers with 1 View, 1 Vision and 1 Voice. 1 View of their universe of potential buyers. 1 Vision for engaging each individual. And 1 Voice to harmonize engagement across paid, owned and earned channels. Epsilon's comprehensive portfolio of capabilities across our suite of digital media, messaging and loyalty solutions bridges the divide between marketing and advertising technology. We process 400+ billion consumer actions each day using advanced AI and hold many patents on proprietary technology, including real-time modeling languages and consumer privacy advancements. Thanks to the work of every employee, Epsilon has been consistently recognized as industry-leading by Forrester, Adweek and the MRC. Epsilon is a global company with more than 9,000 employees around the world.
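As an illustration of the Databricks/PySpark/SparkSQL stack this role calls for (a sketch, not material from the posting), the snippet below reads raw files, aggregates them with SparkSQL, and writes a Delta table; the paths and table names are hypothetical.

    # Illustrative sketch: a minimal batch ETL step with PySpark/SparkSQL on Databricks.
    # Paths and table names are hypothetical placeholders.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("example_etl").getOrCreate()

    # Extract: read raw CSV files from a hypothetical location.
    raw = spark.read.option("header", True).csv("/mnt/raw/ad_events/")

    # Transform: basic cleanup, then a daily aggregate via SparkSQL.
    raw.withColumn("event_date", F.to_date("event_ts")).createOrReplaceTempView("events")
    daily = spark.sql("""
        SELECT event_date, campaign_id, COUNT(*) AS impressions
        FROM events
        GROUP BY event_date, campaign_id
    """)

    # Load: write the result as a Delta table (Delta format is available on Databricks).
    daily.write.format("delta").mode("overwrite").saveAsTable("analytics.daily_impressions")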

Posted 1 month ago

Apply

5.0 - 10.0 years

20 - 30 Lacs

Chennai

Work from Office

• Experience in cloud-based systems (GCP, BigQuery)
• Strong SQL programming skills
• Expertise in database programming and performance tuning techniques
• Knowledge of data warehouse architectures, ETL, and reporting/analytics tools
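As a small, purely illustrative example of the GCP/BigQuery and SQL skills listed above, the snippet below runs an aggregate query through the google-cloud-bigquery client; the project, dataset, table, and partition column are hypothetical.

    # Illustrative sketch: running an analytical query on BigQuery from Python.
    # Project, dataset, and table names are hypothetical placeholders.
    from google.cloud import bigquery

    client = bigquery.Client(project="example-project")

    # Selecting only needed columns and filtering on the partition column limits scanned data.
    sql = """
        SELECT order_date, region, SUM(amount) AS revenue
        FROM `example-project.sales.orders`
        WHERE order_date >= '2024-01-01'      -- assumed partition column
        GROUP BY order_date, region
    """
    for row in client.query(sql).result():
        print(row.order_date, row.region, row.revenue)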

Posted 1 month ago

Apply

4.0 - 8.0 years

0 Lacs

Karnataka

On-site

You are invited to join our team as a Mid-Level Data Engineer Technical Consultant with 4+ years of experience. As part of our diverse and inclusive organization, you will be based in Bangalore, KA, working full time in a permanent position during the general shift, Monday to Friday.

In this role, you will be expected to possess strong written and oral communication skills, particularly in email correspondence. Your experience working with application development teams will be invaluable, along with your ability to analyze and solve problems effectively. Proficiency in Microsoft tools such as Outlook, Excel, and Word is essential for this position.

As a Data Engineer Technical Consultant, you must have at least 4 years of hands-on development experience. Your expertise should include working with Snowflake and PySpark, writing SQL queries, using Airflow, and developing in Python. Experience with dbt and integration programs is advantageous, as is familiarity with Excel for data analysis and Unix scripting.

Your responsibilities require a good understanding of data warehousing and practical work experience in this field. You will be accountable for tasks including understanding requirements, coding, unit testing, integration testing, performance testing, UAT, and Hypercare support. Collaboration with cross-functional teams across different geographies will be a key aspect of this role.

If you are action-oriented, independent, and possess the required technical skills, we encourage you to submit your resume to pallavi@she-jobs.com and explore this exciting opportunity further.
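By way of illustration only, the snippet below shows a basic query against Snowflake using the official Python connector, roughly the kind of day-to-day task described above; the account, credentials, and table names are hypothetical placeholders.

    # Illustrative sketch: querying Snowflake from Python with the official connector.
    # Account, credentials, and object names are hypothetical placeholders.
    import snowflake.connector

    conn = snowflake.connector.connect(
        account="example_account",
        user="example_user",
        password="example_password",
        warehouse="ANALYTICS_WH",
        database="SALES_DB",
        schema="PUBLIC",
    )
    try:
        cur = conn.cursor()
        cur.execute(
            "SELECT region, COUNT(*) FROM orders WHERE order_date >= %s GROUP BY region",
            ("2024-01-01",),
        )
        for region, order_count in cur.fetchall():
            print(region, order_count)
    finally:
        conn.close()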

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

As a techno-functional professional with over 5 years of experience in data warehousing and BI, you should have a strong grasp of fundamental concepts in this domain. Your role will involve designing BI solutions from scratch and implementing Agile Scrum practices such as story slicing, grooming, daily scrum, iteration planning, retrospectives, test-driven development, and model storming. You must also possess expertise in data governance and management, along with a track record of proposing and implementing BI solutions successfully.

Your technical skills should include proficiency in SQL for data analysis and querying, as well as experience with Postgres databases. A functional background in finance/banking is mandatory, particularly in asset finance, equipment finance, or leasing. Excellent communication skills, both written and verbal, are essential for interacting with a diverse set of stakeholders. You should also be adept at raising alerts and risks when necessary and at collaborating effectively with team members across different locations.

In terms of responsibilities, you will be required to elicit business needs and requirements, develop functional specifications, and ensure clarity by engaging with stakeholders. Your role will also involve gathering and analyzing information from various sources to determine the system changes needed for new projects and application enhancements. Providing functional analysis and specification documentation and validating business requirements will be critical aspects of your work.

As part of solutioning, you will be responsible for designing and developing business intelligence and data warehousing solutions. This includes creating data transformations and reports/visualizations based on business needs. Your role will also involve proposing solutions and enhancements to improve the quality of deliverables and of the overall solution.
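As a purely illustrative sketch of the SQL-on-Postgres analysis skills mentioned above (the schema and names are hypothetical and not from the posting), the snippet below runs an aggregate query with psycopg2.

    # Illustrative sketch: a simple analytical query against PostgreSQL from Python.
    # Connection details and table/column names are hypothetical placeholders.
    import psycopg2

    conn = psycopg2.connect(
        host="localhost", dbname="leasing", user="analyst", password="secret"
    )
    try:
        with conn.cursor() as cur:
            # Portfolio exposure per asset class for contracts still active.
            cur.execute(
                """
                SELECT asset_class, SUM(outstanding_balance) AS exposure
                FROM lease_contracts
                WHERE status = %s
                GROUP BY asset_class
                ORDER BY exposure DESC
                """,
                ("ACTIVE",),
            )
            for asset_class, exposure in cur.fetchall():
                print(asset_class, exposure)
    finally:
        conn.close()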

Posted 1 month ago

Apply

4.0 - 9.0 years

10 - 20 Lacs

Coimbatore

Work from Office

Position name: Data Engineer
Location: Coimbatore (hybrid, 3 days per week)
Work shift timing: 1.30 pm to 10.30 pm (IST)
Mandatory skills: Scala, Spark, Python, Databricks
Good to have: Java and Hadoop

The role:
• Designing and building optimized data pipelines using cutting-edge technologies in a cloud environment to drive analytical insights.
• Constructing infrastructure for efficient ETL processes from various sources and storage systems.
• Leading the implementation of algorithms and prototypes to transform raw data into useful information.
• Architecting, designing, and maintaining database pipeline architectures, ensuring readiness for AI/ML transformations.
• Creating innovative data validation methods and data analysis tools.
• Ensuring compliance with data governance and security policies.
• Interpreting data trends and patterns to establish operational alerts.
• Developing analytical tools, programs, and reporting mechanisms.
• Conducting complex data analysis and presenting results effectively.
• Preparing data for prescriptive and predictive modeling.
• Continuously exploring opportunities to enhance data quality and reliability.
• Applying strong programming and problem-solving skills to develop scalable solutions.

Requirements:
• Experience with Big Data technologies (Hadoop, Spark, NiFi, Impala).
• Hands-on experience designing, building, deploying, testing, maintaining, monitoring, and owning scalable, resilient, distributed data pipelines.
• High proficiency in Scala/Java and Spark for applied large-scale data processing.
• Expertise with big data technologies, including Spark, Data Lake, and Hive.
• Solid understanding of batch and streaming data processing techniques.
• Proficient knowledge of the data lifecycle management process, including data collection, access, use, storage, transfer, and deletion.
• Expert-level ability to write complex, optimized SQL queries across extensive data volumes.
• Experience with HDFS, NiFi, and Kafka.
• Experience with Apache Ozone, Delta tables, Databricks, Axon (Kafka), Spring Batch, and Oracle DB.
• Familiarity with Agile methodologies.
• Obsession with service observability, instrumentation, monitoring, and alerting.
• Knowledge of or experience with architectural best practices for building data lakes.

Interested candidates, share your resume at Neesha1@damcogroup.com along with the details below:
Total experience:
Relevant experience in Scala and Spark:
Current CTC:
Expected CTC:
Notice period:
Current location:
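As an illustrative sketch of the streaming side of the stack listed above (not part of the posting), the snippet below reads events from Kafka with Spark Structured Streaming and appends them to a Delta table; the broker, topic, and paths are hypothetical.

    # Illustrative sketch: a Spark Structured Streaming job reading from Kafka and
    # writing a Delta table. Broker, topic, and paths are hypothetical placeholders.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("events_stream").getOrCreate()

    events = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical broker
        .option("subscribe", "events")                       # hypothetical topic
        .load()
        .select(F.col("value").cast("string").alias("payload"),
                F.col("timestamp").alias("event_ts"))
    )

    query = (
        events.writeStream.format("delta")
        .option("checkpointLocation", "/mnt/checkpoints/events")  # hypothetical path
        .outputMode("append")
        .start("/mnt/delta/events")
    )
    query.awaitTermination()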

Posted 1 month ago

Apply