10.0 - 20.0 years
15 - 30 Lacs
hyderabad, bengaluru
Work from Office
Role & responsibilities: Senior Data Modeler

Primary Skills:
- Data Modeling (Fact/Dimension tables, Slowly Changing Dimensions - Type 2, Partitioning)
- Medallion Architecture
- Erwin Data Modeler

Domain Expertise: Experience in modeling data for large-scale Call Center environments, including IVR (Interactive Voice Response), calls, and chat interactions. Preferred experience with platforms such as Cisco, Genesys, or Google CES.

Role Overview: We are seeking a seasoned Data Modeler with deep expertise in designing scalable and efficient data models for enterprise-level call center systems. The ideal candidate will have hands-on experience with Medallion Architecture and Erwin, and a strong understanding of how to model complex call center data flows.

Responsibilities:
- Design and implement robust data models to support analytics and reporting for call center operations
- Collaborate with data engineers and business stakeholders to translate requirements into scalable data solutions
- Optimize data structures for performance and maintainability
- Ensure data integrity and consistency across systems

Preferred Qualifications:
- 7+ years of experience in data modeling
- Strong understanding of call center metrics and workflows
- Familiarity with cloud data platforms (e.g., Azure, AWS, GCP) is a plus
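For context on the Slowly Changing Dimension - Type 2 requirement above, here is a minimal, illustrative PySpark/Delta Lake sketch of one common SCD Type 2 pattern. All table and column names are hypothetical, the changes table is assumed to contain only new or changed agent records, and `spark` is assumed to be a Databricks notebook session:

```python
from delta.tables import DeltaTable
from pyspark.sql import functions as F

# Assumed tables: gold.dim_agent (the dimension) and silver.agent_changes
# (only new or changed agent records for this load).
dim = DeltaTable.forName(spark, "gold.dim_agent")
changes = spark.table("silver.agent_changes")

# Step 1: close out the currently active row for every changed agent.
(dim.alias("d")
    .merge(changes.alias("c"), "d.agent_id = c.agent_id AND d.is_current = true")
    .whenMatchedUpdate(set={
        "is_current": "false",
        "valid_to": "c.effective_date"})
    .execute())

# Step 2: append the new version of each record with an open-ended validity window.
(changes
    .withColumn("valid_from", F.col("effective_date"))
    .withColumn("valid_to", F.lit(None).cast("date"))
    .withColumn("is_current", F.lit(True))
    .write.format("delta").mode("append").saveAsTable("gold.dim_agent"))
```

Downstream fact tables can then join to the dimension either on is_current for as-of-today reporting, or on the valid_from/valid_to window for point-in-time analysis.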
Posted 2 weeks ago
12.0 - 20.0 years
15 - 30 Lacs
bengaluru
Work from Office
Dear Candidate, I hope you're doing well! I am reaching out with an exciting opportunity that aligns well with your background and career goals. We're currently hiring for the position of AWS/Snowflake/Databricks Solution Architect at Bengaluru, and I thought of you immediately. This role involves Medallion Architecture, pre-sales, and RFP/RFQ work, and calls for a strong technical background. I've included the detailed job description below for you to look over. If this sounds interesting, I'd love to connect and share more insights about the team and the company culture. Share your resume at manju@nam-it.com or reach me at +91-9148218735.

Role: Cloud Solution Architect
Location: Bengaluru
Duration: FTE
Domain: Retail/Automotive is mandatory.

We are seeking a Cloud Solution Architect to lead the successful delivery of data-driven initiatives across our organization. This role acts as a critical bridge between business stakeholders, technical teams, and data professionals to ensure timely and high-quality delivery of complex data projects. As a Delivery Partner, you'll own the end-to-end delivery lifecycle, from project initiation and scoping through execution and deployment. You'll collaborate with cross-functional teams including Data Engineering, Analytics, Product, and Business Units to ensure data solutions are aligned with business goals, scalable, and future-ready.

Mandatory:
- Candidates must be strong in Medallion Architecture and should have experience in AWS implementation on Snowflake or Databricks.
- They should have implemented a data lake on either Snowflake or Databricks.
- Experience in pre-sales.
- Should be good in RFP/RFQ.
- Should be able to write SOWs for clients.

Required Skills & Experience:
- Proven experience (12-14 years) managing data or analytics project delivery in a fast-paced environment.
- Strong understanding of data platforms, ETL pipelines, data warehousing, Snowflake/Databricks, and the analytics lifecycle.
- Strong design and implementation experience in AWS Cloud, including Snowflake or Databricks integrations in AWS.
- Deep understanding of Snowflake or Databricks architecture, features, and capabilities (preferable).
- Strong proficiency in SQL and scripting languages like Python or Unix shell.
- Knowledge of data security, access controls, and data quality management.
- Excellent stakeholder engagement and communication skills.
- Experience with Agile, Scrum, or hybrid project management methodologies.
- Ability to manage multiple projects with competing deadlines.

Manjunath, Staffing Manager
NAM Info Pvt Ltd, 29/2B-01, 1st Floor, K.R. Road, Banashankari 2nd Stage, Bangalore - 560070.
+91 9148218735 / manju@nam-it.com
LinkedIn: M.S. Manjunath | Website: WWW.NAM-IT.COM
USA | CANADA | INDIA | MBE Certified Company, E-Verify Company
Posted 2 weeks ago
7.0 - 11.0 years
0 Lacs
navi mumbai, maharashtra
On-site
The ideal candidate will be responsible for designing and implementing streaming data pipelines that integrate Kafka with Databricks using Structured Streaming. You will also be tasked with architecting and maintaining the Medallion Architecture, which consists of well-defined Bronze, Silver, and Gold layers. Additionally, you will need to implement efficient data ingestion processes using Databricks Autoloader for high-throughput data loads. You will work with large volumes of structured and unstructured data to ensure high availability and performance, applying performance tuning techniques like partitioning, caching, and cluster resource optimization. Collaboration with cross-functional teams, including data scientists, analysts, and business users, is essential to build robust data solutions. The role also involves establishing best practices for code versioning, deployment automation, and data governance. The required technical skills for this position include strong expertise in Azure Databricks and Spark Structured Streaming, along with at least 7 years of experience in Data Engineering. You should be familiar with processing modes (append, update, complete), output modes (append, complete, update), checkpointing, and state management. Experience with Kafka integration for real-time data pipelines, a deep understanding of Medallion Architecture, proficiency with Databricks Autoloader and schema evolution, and familiarity with Unity Catalog and Foreign catalog are also necessary. Strong knowledge of Spark SQL, Delta Lake, and DataFrames, expertise in performance tuning, data management strategies, governance, access management, data modeling, data warehousing concepts, and Databricks as a platform, as well as a solid understanding of Window functions will be beneficial in this role.,
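As a rough illustration of the Autoloader and Kafka ingestion patterns referenced above, the following is a minimal PySpark sketch for a Databricks notebook. Paths, topic, and table names are placeholders, and `spark` is assumed to be the notebook's session:

```python
from pyspark.sql import functions as F

# Bronze ingestion with Databricks Autoloader (cloudFiles): incremental file
# discovery with schema tracking and checkpointed, append-mode writes.
bronze_files = (spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/mnt/lake/_schemas/bronze_events")
    .load("/mnt/lake/landing/events/"))

(bronze_files
    .withColumn("_ingested_at", F.current_timestamp())
    .writeStream.format("delta")
    .option("checkpointLocation", "/mnt/lake/_checkpoints/bronze_events")
    .outputMode("append")
    .toTable("bronze.events"))

# Bronze ingestion from Kafka: raw key/value bytes are landed as-is; parsing,
# deduplication, and conformance happen later in the silver layer.
kafka_raw = (spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "call-events")
    .option("startingOffsets", "latest")
    .load())

(kafka_raw
    .selectExpr("CAST(key AS STRING) AS key", "CAST(value AS STRING) AS value",
                "topic", "partition", "offset", "timestamp")
    .writeStream.format("delta")
    .option("checkpointLocation", "/mnt/lake/_checkpoints/bronze_call_events")
    .outputMode("append")
    .toTable("bronze.call_events"))
```

The checkpoint locations are what make the streams restartable with consistent progress tracking; silver- and gold-layer jobs then build on these bronze tables.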
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
navi mumbai, maharashtra
On-site
As a Senior/Lead Data Engineer in our team based in Mumbai, IN, you will be responsible for leveraging your 6-8 years of IT experience, with at least 5+ years specifically in Data Engineering. Your expertise will be utilized in various areas including Kafka integration to Databricks, understanding of Structured Streaming concepts such as Processing modes, output modes, and checkpointing. Familiarity with Medallion Architecture (Bronze, Silver, Gold layers) and Databricks Autoloader will be key aspects of your role. Moreover, your experience in working with large volumes of data and implementing performance optimization techniques like partitioning, caching, and cluster tuning will be crucial for success in this position. Additionally, your ability to effectively engage with clients and your excellent communication skills will play a vital role in delivering high-quality solutions. If you are looking for a challenging opportunity where you can apply your Data Engineering skills to drive impactful results and work in a dynamic environment, we encourage you to apply for this role and be a part of our innovative team.,
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
As a Data Engineer, you will be responsible for designing and implementing data models, warehouses, and databases using Microsoft Fabric, Azure Synapse Analytics, and Azure Data Lake. Your role will involve developing ETL pipelines utilizing tools such as SQL Server Integration Services (SSIS), Azure Synapse Pipelines, and Azure Data Factory. You will leverage Fabric Lakehouse for Power BI reporting, real-time analytics, and automation, while ensuring optimal data integration, governance, security, and performance. Collaboration with cross-functional teams to develop scalable data solutions will be a key aspect of your job. You will implement Medallion Architecture for efficient data processing and work in an Agile environment, applying DevOps principles for automation and CI/CD processes. Your skills should include proficiency in Microsoft Fabric & Azure Lakehouse, OneLake, Data Pipelines, Power BI, Synapse, Data Factory, and Data Lake. Experience in Data Warehousing & ETL both on-premises and in the cloud using SQL, Python, SSIS, and Synapse Pipelines is essential. Strong knowledge of data modeling & architecture, including Medallion Architecture, integration, governance, security, and performance tuning is required. In addition, you should have expertise in analytics & reporting tools such as Power BI, Excel (formulas, macros, pivots), and ERP systems like SAP and Oracle. Problem-solving skills, collaboration abilities in Agile and DevOps environments, and a degree in Computer Science, Engineering, or a related field are necessary. Familiarity with Azure DevOps, Agile, and Scrum methodologies, as well as Microsoft Certifications, particularly Agile certification, would be advantageous for this role.,
Posted 2 weeks ago
6.0 - 8.0 years
10 - 20 Lacs
bengaluru
Remote
6+ years of experience in data engineering, data modelling, Synapse, ADF, Microsoft Fabric, Databricks, SQL, ETL, Agile, Medallion Architecture, data profiling, anomaly detection, and DevOps.
Posted 2 weeks ago
5.0 - 8.0 years
0 - 3 Lacs
pune, chennai, bengaluru
Hybrid
Mandatory Technical Skills - Data modeling (ER, dimensional), data modeling on cloud, SQL, Data Governance, Erwin (or any other data modeling tool)
Good to Have Skills - Metadata in Pharma domain, Medallion Architecture, Metadata Management
Experience - 5-8 years
Location - Pune / Mumbai / Chennai / Bangalore

Responsibilities:
- Analyze business requirements to understand data needs and relationships; design conceptual data models to represent high-level business entities and relationships.
- Develop logical data models with detailed attributes, keys, and relationships; create physical data models optimized for specific database or data lake platforms.
- Define and document data definitions, standards, and naming conventions; collaborate with data architects, engineers, and analysts to align models with technical and business needs.
- Normalize or denormalize data structures based on performance and use-case requirements.
- Map source systems to target models for ETL/ELT development.
- Maintain data dictionaries and metadata repositories.
- Ensure data models support data quality, integrity, and consistency.
- Review and validate models with stakeholders and subject matter experts.
- Update models based on evolving business requirements or system changes.
- Support data lineage and impact analysis efforts.
- Participate in data governance and stewardship initiatives.
- Use modeling tools like ER/Studio, Erwin, or dbt for documentation and visualization.
- Provide guidance on data modeling best practices to engineering teams.
- Collaborate on schema design for data lake zones (raw, curated, trusted).
- Ensure models support scalability, performance, and compliance requirements.
- Assist in reverse engineering models from existing databases or data lakes.
- Contribute to training and onboarding materials related to data models.
Posted 2 weeks ago
10.0 - 17.0 years
10 - 16 Lacs
pune, chennai, bengaluru
Hybrid
Skills: Azure/AWS, Synapse/Fabric, PySpark, Databricks, ADF, Medallion Architecture, Lakehouse, Data Warehouse
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
haryana
On-site
As a Data Modeller/Data Modeler, you will play a crucial role in leading data architecture efforts across various enterprise domains such as Sales, Procurement, Finance, Logistics, R&D, and Advanced Planning Systems (SAP/Oracle). Your responsibilities will include designing scalable and reusable data models, constructing data lake foundations, and collaborating with cross-functional teams to deliver robust end-to-end data solutions. You will work closely with business and product teams to understand processes and translate them into technical specifications. Using methodologies such as Medallion Architecture, EDW, or Kimball, you will design logical and physical data models. It will be essential to source the correct grain of data from authentic source systems or existing DWHs and create intermediary data models and physical views for reporting and consumption. In addition, you will be responsible for implementing Data Governance, Data Quality, and Data Observability practices. Developing business process maps, user journey maps, and data flow/integration diagrams will also be part of your tasks. You will design integration workflows utilizing APIs, FTP/SFTP, web services, and other tools to support large-scale implementation programs involving multiple projects. Your technical skills should include a minimum of 5+ years of experience in data-focused projects, strong expertise in Data Modelling encompassing Logical, Physical, Dimensional, and Vault modeling, and familiarity with enterprise data domains such as Sales, Finance, Procurement, Supply Chain, Logistics, and R&D. Proficiency in tools like Erwin or similar data modeling tools, understanding of OLTP and OLAP systems, and knowledge of Kimball methodology, Medallion architecture, and modern Data Lakehouse patterns are essential. Furthermore, you should have knowledge of Bronze, Silver, and Gold layer architecture in cloud platforms and the ability to read existing data dictionaries, table structures, and normalize data tables effectively. Familiarity with cloud data platforms (AWS, Azure, GCP), DevOps/DataOps best practices, Agile methodologies, and end-to-end integration needs and methods is also required. Preferred experience includes a background in Retail, CPG, or Supply Chain domains, as well as experience with data governance frameworks, quality tools, and metadata management platforms. Your skills should encompass a range of technical aspects such as FTP/SFTP, physical data models, DevOps, data observability, cloud platforms, APIs, data lakehouse, vault modeling, dimensional modeling, and more. In summary, as a Data Modeller/Data Modeler, you will be a key player in designing and implementing data solutions that drive business success across various domains and collaborating with diverse teams to achieve strategic objectives seamlessly.,
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
chandigarh
On-site
As a Senior Data Engineer at Emerson, you will be a key member of the Global BI team supporting the migration to Microsoft Fabric. Your primary responsibility will be to focus on data gathering, modeling, integration, and database design to ensure efficient data management. By developing and optimizing scalable data models, you will play a crucial role in supporting analytics and reporting needs, utilizing Microsoft Fabric and Azure technologies for high-performance data processing. Your main responsibilities will include collaborating with cross-functional teams such as data analysts, data scientists, and business collaborators to understand data requirements and deliver effective solutions. You will leverage Fabric Lakehouse for data storage, governance, and processing to support Power BI and automation initiatives. Your expertise in data modeling, with a specific focus on data warehouse and lakehouse design, will be instrumental in designing and implementing data models, warehouses, and databases using various Azure services. In addition to data modeling, you will be responsible for developing ETL processes using tools like SQL Server Integration Services (SSIS) and Azure Synapse Pipelines to prepare data for analysis and reporting. Implementing data quality checks and governance practices to ensure data accuracy, consistency, and security will also be a key aspect of your role. You will supervise and optimize data pipelines and workflows using Microsoft Fabric for real-time analytics and AI-powered workloads. Your proficiency in Business Intelligence (BI) tools such as Power BI, Tableau, and other analytics platforms will be essential, along with experience in data integration and ETL tools like Azure Data Factory. Your in-depth knowledge of the Azure Cloud Platform, particularly in data warehousing and storage solutions, will be crucial for success in this role. Strong communication skills to convey technical concepts to both technical and non-technical stakeholders, as well as the ability to work independently and within a team environment, will also be required. To excel in this role, you will need 5-7 years of experience in Data Warehousing with on-premises or cloud technologies. Strong analytical abilities, proficiency in database management, SQL query optimization, and data mapping are essential skills. Proficiency in Excel, Python, SQL/Advanced SQL, and hands-on experience with Fabric components will be beneficial. The willingness to work flexible hours based on project requirements, strong documentation skills, and the ability to handle sensitive information with discretion are also important attributes for this role. Preferred qualifications include a Bachelor's degree or equivalent experience in Science, with a focus on MIS, Computer Science, Engineering, or related areas. Good interpersonal skills in English, agile certification, and experience with Oracle, SAP, or other ERP systems are preferred qualifications that set you apart. The ability to quickly learn new business areas, software, and emerging technologies, as well as the willingness to travel up to 20% as needed, will also be valuable in this role. At Emerson, we prioritize a workplace where every employee is valued, respected, and empowered to grow. We foster an environment that encourages innovation, collaboration, and diverse perspectives to drive growth and deliver business results. 
Our commitment to ongoing career development and an inclusive culture ensures you have the support to thrive and make a lasting impact. We offer competitive benefits plans, medical insurance, flexible time off, and opportunities for mentorship, training, and leadership development. Join Emerson and be part of a global leader in automation technology and software that helps customers in critical industries operate more sustainably and efficiently. We offer equitable opportunities, celebrate diversity, and embrace challenges to make a positive impact across various countries and industries. If you are looking to make a difference and contribute to innovative solutions, Emerson is the place for you. Let's go, together.,
Posted 2 weeks ago
8.0 - 10.0 years
0 Lacs
india
On-site
Data Architect - Databricks (Azure/AWS) Role Overview: We are seeking an experienced Data Architect specializing in Databricks to lead the architecture, design, and migration of enterprise data workloads from on-premises systems (e.g., Oracle, Exadata, Hadoop) to Databricks on Azure or AWS . The role involves designing scalable, secure, and high-performing data platforms based on the medallion architecture (bronze, silver, gold layers), supporting large-scale ingestion, transformation, and publishing of data. Required Skills and Experience: 8+ years of experience in data architecture or engineering roles, with at least 3+ years specializing in cloud-based big data solutions. Hands-on expertise with Databricks on Azure or AWS . Deep understanding of Delta Lake , medallion architecture (bronze/silver/gold zones), and data governance tools (e.g., Unity Catalog, Purview). Strong experience migrating large datasets and batch/streaming pipelines from on-prem to Databricks. Expertise with Spark (PySpark/Scala) at scale and optimizing Spark jobs. Familiarity with ingestion from RDBMS (Oracle, SQL Server) and legacy Hadoop ecosystems. Proficiency in orchestration tools (Databricks Workflows, Airflow, Azure Data Factory, AWS Glue Workflows). Strong understanding of cloud-native services for storage, compute, security, and networking. Preferred Qualifications: Databricks Certified Data Engineer or Architect. Azure/AWS cloud certifications. Experience with real-time/streaming ingestion (Kafka, Event Hubs, Kinesis). Familiarity with data quality frameworks (e.g., Deequ, Great Expectations). Key Responsibilities: Define and design cloud-native data architecture on Databricks using Delta Lake, Unity Catalog, and related services. Develop migration strategies for moving on-premises data workloads (Oracle, Hadoop, Exadata, etc.) to Databricks on Azure/AWS. Architect and oversee data pipelines supporting ingestion, curation, transformation, and analytics in a multi-layered (bronze/silver/gold) model. Lead data modeling, schema design, performance optimization, and data governance best practices. Collaborate with data engineering, platform, and security teams to build production-ready solutions. Create standards for ingestion frameworks, job orchestration (e.g., workflows, Airflow), and data quality validation. Support cost optimization, scalability design, and operational monitoring frameworks. Guide and mentor engineering teams during the build and migration phases. Attributes for Success: Ability to lead architecture discussions with technical and business stakeholders. Passion for modern cloud data architectures and continuous learning. Pragmatic and solution-driven approach to migrations. Diversity and Inclusion : An Oracle career can span industries, roles, Countries, and cultures, allowing you to flourish in new roles and innovate while blending work life in. Oracle has thrived through 40+ years of change by innovating and operating with integrity while delivering for the top companies in almost every industry. To nurture the talent that makes this happen, we are committed to an inclusive culture that celebrates and values diverse insights and perspectives, a workforce that inspires thought leadership and innovation. Oracle offers a highly competitive suite of Employee Benefits designed on the principles of parity, consistency, and affordability. The overall package includes certain core elements such as Medical, Life Insurance, access to Retirement Planning, and much more. 
We also encourage our employees to engage in the culture of giving back to the communities where we live and do business. At Oracle, we believe that innovation starts with diversity and inclusion, and to create the future we need talent from various backgrounds, perspectives, and abilities. We ensure that individuals with disabilities are provided reasonable accommodation to successfully participate in the job application and interview process, and to perform crucial job functions in potential roles. That's why we're committed to creating a workforce where all individuals can do their best work. It's when everyone's voice is heard and valued that we're inspired to go beyond what's been done before.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
chandigarh
On-site
As a Data Engineer, you will provide support to the Global BI team for Isolation Valves in their migration to Microsoft Fabric. Your primary focus will be on data gathering, modeling, integration, and database design to facilitate efficient data management. Your responsibilities will include developing and optimizing scalable data models to meet analytics and reporting requirements and utilizing Microsoft Fabric and Azure technologies for high-performance data processing. In this role, you will collaborate with cross-functional teams, including data analysts, data scientists, and business collaborators, to understand their data needs and deliver effective solutions. You will leverage Fabric Lakehouse for data storage, governance, and processing to support Power BI and automation initiatives. Expertise in data modeling, with a specific emphasis on data warehouse and lakehouse design, will be essential. You will be responsible for designing and implementing data models, warehouses, and databases using MS Fabric, Azure Synapse Analytics, Azure Data Lake Storage, and other Azure services. Additionally, you will develop ETL processes using tools such as SQL Server Integration Services (SSIS) and Azure Synapse Pipelines to prepare data for analysis and reporting. Implementing data quality checks and governance practices to ensure data accuracy, consistency, and security will also be part of your role. Your tasks will involve supervising and optimizing data pipelines and workflows for performance, scalability, and cost efficiency, utilizing Microsoft Fabric for real-time analytics and AI-powered workloads. Proficiency in Business Intelligence (BI) tools like Power BI and Tableau, along with experience in data integration and ETL tools such as Azure Data Factory, will be beneficial. You are expected to have expertise in Microsoft Fabric or similar data platforms and a deep understanding of the Azure Cloud Platform, particularly in data warehousing and storage solutions. Strong communication skills are essential, as you will need to convey technical concepts to both technical and non-technical stakeholders. The ability to work independently as well as within a team environment is crucial. Preferred qualifications for this role include 3-5 years of experience in Data Warehousing with on-premises or cloud technologies, strong analytical abilities, and proficiency in database management, SQL query optimization, and data mapping. A willingness to work flexible hours based on project requirements, strong documentation skills, and advanced SQL skills are also required. Hands-on experience with Medallion Architecture for data processing, prior experience in a manufacturing environment, and the ability to quickly learn new technologies are advantageous. Travel up to 20% may be required. A Bachelor's degree or equivalent experience in Science, with a focus on MIS, Computer Science, Engineering, or a related field, is preferred. Good interpersonal skills in English for efficient collaboration with overseas teams and Agile certification are also desirable. At Emerson, we value an inclusive workplace where every employee is empowered to grow and contribute. Our commitment to ongoing career development and fostering an innovative and collaborative environment ensures that you have the support to succeed. We provide competitive benefits plans, medical insurance options, employee assistance programs, recognition, and flexible time off plans to prioritize employee wellbeing. 
Emerson is a global leader in automation technology and software, serving industries such as life sciences, energy, power, renewables, and advanced factory automation. We are committed to diversity, equity, and inclusion, and offer opportunities for career growth and development. Join our team at Emerson and be part of a community dedicated to making a positive impact through innovation and collaboration.
Posted 2 weeks ago
6.0 - 10.0 years
18 - 22 Lacs
hyderabad, telangana, india
On-site
We are seeking a highly skilled and experienced Senior Data Engineer to join our growing data team at Logic Pursuits. In this role, you will lead the design and implementation of scalable, high-performance data pipelines using Snowflake and dbt, define architectural best practices, and drive data transformation at scale. You'll work closely with clients to translate business needs into robust data solutions and play a key role in mentoring junior engineers, enforcing standards, and delivering production-grade data platforms. This is a work-from-office role in Hyderabad (5 days).
Experience: 6 to 10 years
Compensation (yearly, INR): 1,800,000 to 2,200,000
Location: Hyderabad

Key Responsibilities:
- Architect and implement modular, test-driven ELT pipelines using dbt on Snowflake.
- Design layered data models (e.g., staging, intermediate, mart layers / Medallion Architecture) aligned with dbt best practices.
- Lead ingestion of structured and semi-structured data from APIs, flat files, cloud storage (Azure Data Lake, AWS S3), and databases into Snowflake.
- Optimize Snowflake for performance and cost: warehouse sizing, clustering, materializations, query profiling, and credit monitoring.
- Apply advanced dbt capabilities including macros, packages, custom tests, sources, exposures, and documentation using dbt docs.
- Orchestrate workflows using dbt Cloud, Airflow, or Azure Data Factory, integrated with CI/CD pipelines.
- Define and enforce data governance and compliance practices using Snowflake RBAC, secure data sharing, and encryption strategies.
- Collaborate with analysts, data scientists, architects, and business stakeholders to deliver validated, business-ready data assets.
- Mentor junior engineers, lead architectural/code reviews, and help establish reusable frameworks and standards.
- Engage with clients to gather requirements, present solutions, and manage end-to-end project delivery in a consulting setup.

Required Qualifications: 5 to 8 years of experience in data engineering roles, with 3+ years of hands-on experience working with Snowflake and dbt in production environments.

Technical Skills:
- Cloud Data Warehouse & Transformation Stack: Expert-level knowledge of SQL and Snowflake, including performance optimization, storage layers, query profiling, clustering, and cost management. Experience in dbt development: modular model design, macros, tests, documentation, and version control using Git.
- Orchestration and Integration: Proficiency in orchestrating workflows using dbt Cloud, Airflow, or Azure Data Factory. Comfortable working with data ingestion from cloud storage (e.g., Azure Data Lake, AWS S3) and APIs.
- Data Modelling and Architecture: Dimensional modelling (star/snowflake schemas), slowly changing dimensions, knowledge of modern data warehousing principles, experience implementing Medallion Architecture (Bronze/Silver/Gold layers), and experience working with Parquet, JSON, CSV, or other data formats.
- Programming Languages: Python for data transformation, notebook development, and automation; strong grasp of SQL for querying and performance tuning; exposure to Jinja for advanced dbt development (nice to have).
- Data Engineering & Analytical Skills: ETL/ELT pipeline design and optimization; exposure to AI/ML data pipelines, feature stores, or MLflow for model tracking (good to have); exposure to data quality and validation frameworks.
- Security & Governance: Experience implementing data quality checks using dbt tests; data encryption, secure key management, and security best practices for Snowflake and dbt.

Soft Skills & Leadership:
- Ability to thrive in client-facing roles with competing/changing priorities and fast-paced delivery cycles.
- Stakeholder Communication: Collaborate with business stakeholders to understand objectives and convert them into actionable data engineering designs.
- Project Ownership: End-to-end delivery including design, implementation, and monitoring.
- Mentorship: Guide junior engineers, establish best practices, and build new skills in the team.
- Agile Practices: Work in sprints; participate in scrum ceremonies and story estimation.

Education: Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field. Certifications such as Snowflake SnowPro Advanced or dbt Certified Developer are a plus.

Share the following details:
Question 1: Experience in Snowflake and dbt (please mention separately)
Question 2: Experience with SQL and Snowflake, including performance optimization, storage layers, query profiling, clustering, and cost management (expert level)
Question 3: Experience in orchestrating workflows using dbt Cloud, Airflow, or Azure Data Factory
Question 4: Experience in Python programming and Jinja for advanced dbt development
Question 5: Experience implementing Medallion Architecture (Bronze/Silver/Gold layers)
Question 6: Experience in project ownership and team handling
Question 7: Open to relocating to Hyderabad
Current Location, Current Number, Current Company, Current Salary, Expected Salary, Notice Period, Total Experience

To proceed further, kindly share your updated resume at [HIDDEN TEXT] or WhatsApp on 7719594751.
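To make the dbt-on-Snowflake and CI/CD orchestration described above concrete, here is a small, illustrative Python sketch that invokes dbt programmatically, as a CI step or Airflow/ADF task might. It assumes dbt-core 1.5+ with a Snowflake profile already configured; the model selector is hypothetical:

```python
from dbt.cli.main import dbtRunner, dbtRunnerResult

dbt = dbtRunner()

# Build (run + test) the staging layer and everything downstream of it,
# mirroring a staging -> intermediate -> mart (medallion-style) layering.
res: dbtRunnerResult = dbt.invoke(["build", "--select", "staging+"])

if not res.success:
    raise SystemExit(f"dbt build failed: {res.exception}")

# Print per-model and per-test outcomes for the CI log.
for r in res.result:
    print(f"{r.node.name}: {r.status}")
```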
Posted 3 weeks ago
8.0 - 12.0 years
15 - 20 Lacs
bengaluru
Hybrid
We are looking for a highly skilled Scala Data Engineer to design, build, and optimize large-scale data platforms. The ideal candidate will have deep expertise in Scala, Spark, and SQL, with proven experience delivering scalable and high-performance data solutions in cloud-native environments. Key Responsibilities Design, build, and optimize scalable data pipelines using Apache Spark and Scala. Develop real-time streaming pipelines leveraging Kafka/Event Hubs. Own the design and architecture of data systems with a strong focus on performance, scalability, and reliability. Collaborate with cross-functional teams to deliver high-quality data products. Mentor junior engineers and enforce best practices in coding, testing, and data engineering standards. Implement and maintain data governance and lineage practices. Mandatory Skills Strong programming expertise in Scala (preferred over Java). Advanced proficiency in SQL and Apache Spark. Strong understanding of Data Structures & Algorithms. Hands-on experience in data engineering for large-scale systems. Exposure to cloud-native environments (Azure preferred). Preferred Skills Big Data Ecosystem: Hadoop, Kafka, Structured Streaming/Event Hub. CI/CD tools: Git, Docker, Jenkins. Experience with Medallion Architecture, Parquet, Apache Iceberg. Orchestration tools: Airflow, Oozie. Familiarity with NoSQL DBs (Cassandra, MongoDB, etc.). Experience with Data Governance tools: Alation, Collibra, Lineage, Metadata management. Location-Bangalore Hybrid (3 days WFO/week)
Posted 3 weeks ago
5.0 - 10.0 years
9 - 12 Lacs
pune
Work from Office
Hiring for a leading MNC for the position of Data Engineer, based at Kharadi (Pune).
Designation: Data Engineer
Shift Timing: 12 PM to 9 PM (cab facility provided)
Work Mode: Work from Office

Key Responsibilities:
- Liaise with stakeholders to define data requirements
- Manage Snowflake & SQL databases
- Build and optimize semantic models for reporting
- Lead modern data architecture adoption
- Reverse engineer complex data structures
- Mentor peers on data governance best practices
- Champion Agile/SCRUM methodologies

Preferred Candidates:
- 5+ years of experience in data engineering/BI roles
- Strong ETL, data modelling, governance, and lineage documentation
- Expertise in Snowflake, Azure (SQL Server, Data Factory, Logic Apps, App Services), Power BI
- Advanced SQL & Python (OOP, JSON/XML)
- Experience with Medallion Architecture, Fivetran, dbt
- Application development using Python, Streamlit, Flask, Node.js, Power Apps
- Agile/Scrum project management
- Bachelor's/Master's in Math, Stats, CS, IT, or Engineering
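Since the role combines Snowflake, semantic models for reporting, and Python/Streamlit application development, here is a minimal illustrative sketch of a Streamlit page querying a Snowflake mart. Connection details and table/column names are placeholders, and it assumes streamlit and snowflake-connector-python (with the pandas extra) are installed:

```python
import streamlit as st
import snowflake.connector

@st.cache_resource
def get_connection():
    # Placeholder credentials; in practice read from st.secrets or a vault.
    return snowflake.connector.connect(
        account="<account>", user="<user>", password="<password>",
        warehouse="REPORTING_WH", database="ANALYTICS", schema="MARTS")

st.title("Daily Revenue")

# Query a hypothetical mart table and pull the result as a pandas DataFrame.
df = get_connection().cursor().execute(
    "select order_date, sum(net_revenue) as revenue "
    "from fct_orders group by order_date order by order_date"
).fetch_pandas_all()

# Snowflake returns upper-cased column names by default.
st.line_chart(df.set_index("ORDER_DATE")["REVENUE"])
```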
Posted 3 weeks ago
3.0 - 7.0 years
20 - 25 Lacs
pune, bengaluru
Hybrid
Description:
Velotio Technologies is a product engineering company working with innovative startups and enterprises. We are a certified Great Place to Work and recognized as one of the best companies to work for in India. We have provided full-stack product development for 110+ startups across the globe, building products in the cloud-native, data engineering, B2B SaaS, IoT & machine learning space. Our team of 450+ elite software engineers solves hard technical problems while transforming customer ideas into successful products.

Requirements:
- Design, develop, and maintain robust and scalable data pipelines that ingest, transform, and load data from various sources into the data warehouse.
- Collaborate with business stakeholders to understand data requirements and translate them into technical solutions.
- Implement data quality checks and monitoring to ensure data accuracy and integrity.
- Optimize data pipelines for performance and efficiency.
- Troubleshoot and resolve data pipeline issues.
- Stay up to date with emerging technologies and trends in data engineering.

Qualifications:
- More than 3 years of experience in data engineering, with a Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Hands-on experience with streaming analytics, Spark, Medallion Architecture, data connector development, and data modelling.
- Strong proficiency in SQL and at least one programming language (e.g., Python, Java).
- Experience with data pipeline tools and frameworks.
- Experience with cloud-based data warehousing solutions (Snowflake).
- Experience with AWS Kinesis, SNS, and SQS.
- Excellent problem-solving and analytical skills.
- Strong communication and interpersonal skills.

Desired Skills & Experience: data pipeline architecture, data warehousing, ETL (Extract, Transform, Load), data modeling, Python, cloud computing.

Benefits - Our Culture:
- We have an autonomous and empowered work culture encouraging individuals to take ownership and grow quickly.
- Flat hierarchy with fast decision making and a startup-oriented, get-things-done culture.
- A strong, fun & positive environment with regular celebrations of our success. We pride ourselves on creating an inclusive, diverse & authentic environment.
Posted 4 weeks ago
8.0 - 12.0 years
0 Lacs
hyderabad, telangana
On-site
The role of Senior Data Engineer at GSPANN involves designing, developing, and optimizing scalable data solutions, utilizing expertise in Azure Data Factory, Azure Databricks, PySpark, Delta Tables, and advanced data modeling. The position also demands proficiency in performance optimization, API integrations, DevOps practices, and data governance. You will be responsible for designing, developing, and orchestrating scalable data pipelines using Azure Data Factory (ADF). Additionally, you will build and manage Apache Spark clusters, create notebooks, and execute jobs in Azure Databricks. Ingesting, organizing, and transforming data within the Microsoft Fabric ecosystem using OneLake will also be part of your role. Your tasks will include authoring complex transformations, writing SQL queries for large-scale data processing using PySpark and Spark SQL, and creating, optimizing, and maintaining Delta Lake tables. Furthermore, you will parse, validate, and transform semi-structured JSON datasets, build and consume REST/OData services for custom data ingestion through API integration, and implement bronze, silver, and gold layers in data lakes using the Medallion Architecture. To ensure efficient processing of high data volumes for large-scale performance optimization, you will apply partitioning, caching, and resource tuning. Designing star and snowflake schemas, along with fact and dimension tables for multidimensional modeling in reporting use cases, will be a crucial aspect of your responsibilities. Working with tabular and OLAP cube structures in Azure Analysis Services to facilitate downstream business intelligence will also be part of your role, along with collaborating with the DevOps team to define infrastructure, manage access and security, and automate deployments. In terms of skills and experience, you are expected to ingest and harmonize data from SAP ECC and S/4HANA systems using Data Sphere. Utilizing Git, Azure DevOps Pipelines, Terraform, or Azure Resource Manager templates for CI/CD and DevOps tooling, leveraging Azure Monitor, Log Analytics, and data pipeline metrics for data observability and monitoring, conducting query diagnostics, identifying bottlenecks, and determining root causes for performance troubleshooting are among the key responsibilities. Applying metadata management, tracking data lineage, and enforcing compliance best practices for data governance and cataloging are also part of the role. Lastly, documenting processes, designs, and solutions effectively in Confluence is essential for this position.,
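As a simple illustration of the Medallion gold-layer build and partitioning work described above, here is a hedged PySpark sketch. Table, column, and layer names are assumptions, and `spark` is the Databricks notebook session:

```python
from pyspark.sql import functions as F

# Silver layer: cleansed, conformed order records (assumed to already exist).
silver_orders = spark.table("silver.sales_orders")

# Gold layer: a reporting-friendly daily aggregate per region.
daily_sales = (silver_orders
    .filter(F.col("order_date").isNotNull())
    .groupBy("order_date", "region")
    .agg(F.count("*").alias("order_count"),
         F.sum("net_amount").alias("net_revenue")))

# Partitioning by order_date keeps date-bounded reporting queries cheap;
# overwrite mode rebuilds the aggregate each run.
(daily_sales.write.format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .saveAsTable("gold.daily_sales"))
```

The same gold table could then feed a star-schema semantic layer or an Azure Analysis Services model for downstream BI.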
Posted 1 month ago
5.0 - 10.0 years
0 Lacs
maharashtra
On-site
You will be a Teradata Data Modeler for our client, a trusted global innovator of IT and business services. Your role involves transforming clients through consulting, industry solutions, business process services, digital & IT modernization, and managed services, enabling them to confidently move into the digital future. With a commitment to long-term client success, you will provide global reach with local client attention in over 50 countries worldwide. As a Teradata Data Modeler, you will be based in Mumbai (Powai / Mahape) and should have a minimum of 5 years of experience. This position is on a contract-to-hire basis, requiring immediate joiners. Your key responsibilities will include: - Demonstrating 5-10 years of hands-on experience in data modeling, metadata management, and related tools such as Erwin or ER Studio - Possessing a strong knowledge of data modeling principles, encompassing conceptual, logical, and physical data models - Having a deep understanding of the Teradata FSLDM, including its structure, components, and best practices for implementation - Proficiency in Teradata database technologies, including SQL, utilities, and performance tuning - Expertise in creating and maintaining comprehensive documentation for data models, mappings, and data governance policies - Utilizing strong SQL skills for data manipulation, query optimization, and data validation - Understanding data warehousing concepts, medallion architecture, and best practices - Applying strong analytical and problem-solving skills to identify and resolve data-related issues - Understanding metadata management principles - Preferred knowledge of the insurance domain and its data structures If you possess the mandatory skills mentioned above and are looking to leverage your expertise in data modeling within a dynamic global environment, we invite you to apply for the position of Teradata Data Modeler with our client.,
Posted 1 month ago
4.0 - 10.0 years
0 Lacs
karnataka
On-site
You are a developer of digital futures at Tietoevry, a leading technology company with a strong Nordic heritage and global capabilities. With core values of openness, trust, and diversity, you collaborate with customers to create digital futures where businesses, societies, and humanity thrive. The company's 24,000 experts specialize in cloud, data, and software, serving enterprise and public-sector customers in around 90 countries. Tietoevry's annual turnover is approximately EUR 3 billion, and its shares are listed on the NASDAQ exchanges in Helsinki and Stockholm as well as on Oslo Børs. In the USA, EVRY USA delivers IT services through global delivery centers and offices in India (EVRY India). The company offers a comprehensive IT services portfolio, driving digital transformation across sectors like Banking & Financial Services, Insurance, Healthcare, Retail & Logistics, and Energy, Utilities & Manufacturing. EVRY India's process and project maturity are high, with offshore development centers in India appraised at CMMI DEV Maturity Level 5 & CMMI SVC Maturity Level 5 and certified under ISO 9001:2015 & ISO/IEC 27001:2013.

As a Senior Data Modeler, you will lead the design and development of enterprise-grade data models for a modern cloud data platform built on Snowflake and Azure. With a strong foundation in data modeling best practices and hands-on experience with the Medallion Architecture, you will ensure data structures are scalable, reusable, and aligned with business and regulatory requirements. You will work on data models that meet processing, analytics, and reporting needs, focusing on Snowflake data warehousing and the Medallion Architecture's Bronze, Silver, and Gold layers. Collaborating with various stakeholders, you will translate business needs into scalable data models, drive data model governance, and ensure compliance with data governance, quality, and security requirements.

**Pre-requisites:**
- 10 years of experience in data modeling, data architecture, or data engineering roles.
- 4 years of experience modeling data in Snowflake or other cloud data warehouses.
- Strong understanding and hands-on experience with Medallion Architecture and modern data platform design.
- Experience using data modeling tools (Erwin, etc.).
- Proficiency in data modeling techniques: 3NF, dimensional modeling, data vault, and star/snowflake schemas.
- Expert-level SQL and experience working with semi-structured data (JSON, XML).
- Familiarity with Azure data services (ADF, ADLS, Synapse, Purview).

**Key Responsibilities:**
- Design, develop, and maintain data models for Snowflake data warehousing.
- Lead the design and implementation of logical, physical, and canonical data models.
- Architect data models for Bronze, Silver, and Gold layers following the Medallion Architecture.
- Collaborate with stakeholders to translate business needs into scalable data models.
- Drive data model governance and compliance with data requirements.
- Conduct data profiling, gap analysis, and data integration efforts.
- Support time-travel style reporting and build models for operational & analytical reports.

Recruiter Information:
- Recruiter Name: Harish Gotur
- Recruiter Email Id: harish.gotur@tietoevry.com
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
We are developers of digital futures! Tietoevry creates purposeful technology that reinvents the world for good. We are a leading technology company with a strong Nordic heritage and global capabilities. Based on our core values of openness, trust, and diversity, we work with our customers to develop digital futures where businesses, societies, and humanity thrive. Our 24,000 experts globally specialize in cloud, data, and software, serving thousands of enterprise and public-sector customers in approximately 90 countries. Tietoevry's annual turnover is approximately EUR 3 billion, and the company's shares are listed on the NASDAQ exchanges in Helsinki and Stockholm, as well as on Oslo Børs. EVRY USA delivers IT services to a wide range of customers in the USA through its global delivery centers and India offices (EVRY India) in Bangalore & Chandigarh, India. We offer a comprehensive IT services portfolio and drive digital transformation across various sectors including Banking & Financial Services, Insurance, Healthcare, Retail & Logistics, and Energy, Utilities & Manufacturing. EVRY India's process and project maturity is very high, with the two offshore development centers in India being appraised at CMMI DEV Maturity Level 5 & CMMI SVC Maturity Level 5 and certified under ISO 9001:2015 & ISO/IEC 27001:2013.

We are seeking a highly experienced Snowflake Architect with deep expertise in building scalable data platforms on Azure, applying Medallion Architecture principles. The ideal candidate should have strong experience working in the Banking domain. The candidate will play a key role in architecting secure, performant, and compliant data solutions to support business intelligence, risk, compliance, and analytics initiatives.

**Pre-requisites:**
- 5 years of hands-on experience in Snowflake, including schema design, security setup, and performance tuning.
- Implementation experience using Snowpark.
- Must have a data architecture background.
- Deployed a fully operational data solution into production on Snowflake & Azure.
- Snowflake certification preferred.
- Familiarity with data modeling practices like dimensional modeling & data vault.
- Understanding of the dbt tool.

**Key Responsibilities:**
- Design and implement scalable and performant data platforms using Snowflake on Azure, tailored for banking industry use cases.
- Architect ingestion, transformation, and consumption layers using Medallion Architecture for a performant & scalable data platform.
- Work with data engineers to build modular and reusable Bronze, Silver, and Gold layer models that support diverse workloads.
- Provide architectural oversight and best practices to ensure scalability, performance, and maintainability.
- Collaborate with stakeholders from risk, compliance, and analytics teams to translate requirements into data-driven solutions.
- Build architecture to support time-travel style reporting.
- Support CI/CD automation and environment management using tools like Azure DevOps and Git.
- Build architecture to support operational & analytical reports.

Recruiter Information:
- Recruiter Name: Harish Gotur
- Recruiter Email Id: harish.gotur@tietoevry.com
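To illustrate the kind of Snowpark-based Bronze-to-Silver transformation this role calls for, here is a minimal sketch using the Snowpark Python API. Account details, database, schema, and table names are placeholders:

```python
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col

# Placeholder connection parameters; in practice load these from a secrets store.
session = Session.builder.configs({
    "account": "<account>", "user": "<user>", "password": "<password>",
    "warehouse": "TRANSFORM_WH", "database": "BANK_DW", "schema": "SILVER",
}).create()

# Bronze: raw ingested transactions. Silver: filtered, de-duplicated records.
bronze = session.table("BRONZE.RAW_TRANSACTIONS")

silver = (bronze
    .filter(col("STATUS") == "POSTED")
    .select("TXN_ID", "ACCOUNT_ID", "TXN_TS", "AMOUNT", "CURRENCY")
    .drop_duplicates("TXN_ID"))

# Materialize the curated Silver table for downstream Gold-layer models.
silver.write.mode("overwrite").save_as_table("SILVER.TRANSACTIONS")
```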
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
chennai, tamil nadu
On-site
Orion Innovation is a premier, award-winning, global business and technology services firm. Orion delivers game-changing business transformation and product development rooted in digital strategy, experience design, and engineering, with a unique combination of agility, scale, and maturity. We work with a wide range of clients across many industries including financial services, professional services, telecommunications and media, consumer products, automotive, industrial automation, professional sports and entertainment, life sciences, ecommerce, and education. As an ETL Technical Lead at Orion Innovation located in Chennai, you are required to have at least 5 years of ETL experience and 3 years of experience specifically in Azure Synapse. Your role will involve designing, developing, and managing ETL processes within the Azure ecosystem. You must possess proficiency with Azure Synapse Pipelines, Azure Dedicated SQL Pool, Azure Data Lake Storage (ADLS), and other related Azure services. Additionally, experience with audit logging, data governance, and implementing data integrity and data lineage best practices is essential. Your responsibilities will include leading and managing the ETL team, providing mentorship, technical guidance, and driving the delivery of key data initiatives. You will design, develop, and maintain ETL pipelines using Azure Synapse Pipelines for ingesting data from various file formats and securely storing them in Azure Data Lake Storage (ADLS). Furthermore, you will architect, implement, and manage data solutions following the Medallion architecture for effective data processing and transformation. It is crucial to leverage Azure Data Lake Storage (ADLS) to build scalable and high-performance data storage solutions, ensuring optimal data lake management. You will also be responsible for managing the Azure Dedicated SQL Pool to optimize query performance and scalability. Automation of data workflows and processes using Logic Apps, as well as ensuring secure and compliant data handling through audit logging and access controls, will be part of your duties. Collaborating with data scientists to integrate ETL pipelines with Machine Learning models for predictive analytics and advanced data science use cases is key. Troubleshooting and resolving complex data pipeline issues, monitoring and optimizing performance, and acting as the primary technical point of contact for the ETL team are also essential aspects of this role. Orion Systems Integrators, LLC and its affiliates are committed to protecting your privacy. For more information on the Candidate Privacy Policy, please refer to the official documentation on the Orion website.,
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
chandigarh
On-site
As a Senior Data Engineer, you will play a crucial role in supporting the Global BI team for Isolation Valves as they transition to Microsoft Fabric. Your primary responsibilities will involve data gathering, modeling, integration, and database design to facilitate efficient data management. You will be tasked with developing and optimizing scalable data models to cater to analytical and reporting needs, utilizing Microsoft Fabric and Azure technologies for high-performance data processing. Your duties will include collaborating with cross-functional teams such as data analysts, data scientists, and business collaborators to comprehend their data requirements and deliver effective solutions. You will leverage Fabric Lakehouse for data storage, governance, and processing to back Power BI and automation initiatives. Additionally, your expertise in data modeling, particularly in data warehouse and lakehouse design, will be essential in designing and implementing data models, warehouses, and databases using MS Fabric, Azure Synapse Analytics, Azure Data Lake Storage, and other Azure services. Furthermore, you will be responsible for developing ETL processes using tools like SQL Server Integration Services (SSIS), Azure Synapse Pipelines, or similar platforms to prepare data for analysis and reporting. Implementing data quality checks and governance practices to ensure data accuracy, consistency, and security will also fall under your purview. You will supervise and optimize data pipelines and workflows for performance, scalability, and cost efficiency, utilizing Microsoft Fabric for real-time analytics and AI-powered workloads. Your role will require a strong proficiency in Business Intelligence (BI) tools such as Power BI, Tableau, and other analytics platforms, along with experience in data integration and ETL tools like Azure Data Factory. A deep understanding of Microsoft Fabric or similar data platforms, as well as comprehensive knowledge of the Azure Cloud Platform, particularly in data warehousing and storage solutions, will be necessary. Effective communication skills to convey technical concepts to both technical and non-technical stakeholders, the ability to work both independently and within a team environment, and the willingness to stay abreast of new technologies and business areas are also vital for success in this role. To excel in this position, you should possess 5-7 years of experience in Data Warehousing with on-premises or cloud technologies, strong analytical abilities to tackle complex data challenges, and proficiency in database management, SQL query optimization, and data mapping. A solid grasp of Excel, including formulas, filters, macros, pivots, and related operations, is essential. Proficiency in Python and SQL/Advanced SQL for data transformations/Debugging, along with a willingness to work flexible hours based on project requirements, is also required. Furthermore, hands-on experience with Fabric components such as Lakehouse, OneLake, Data Pipelines, Real-Time Analytics, Power BI Integration, and Semantic Models, as well as advanced SQL skills and experience with complex queries, data modeling, and performance tuning, are highly desired. Prior exposure to implementing Medallion Architecture for data processing, experience in a manufacturing environment, and familiarity with Oracle, SAP, or other ERP systems will be advantageous. 
A Bachelor's degree or equivalent experience in a Science-related field, with good interpersonal skills in English (spoken and written) and Agile certification, will set you apart as a strong candidate for this role. At Emerson, we are committed to fostering a workplace where every employee is valued, respected, and empowered to grow. Our culture encourages innovation, collaboration, and diverse perspectives, recognizing that great ideas stem from great teams. We invest in your ongoing career development, offering mentorship, training, and leadership opportunities to ensure your success and make a lasting impact. Employee wellbeing is a priority for us, and we provide competitive benefits plans, medical insurance options, Employee Assistance Program, flexible time off, and other supportive resources to help you thrive. Emerson is a global leader in automation technology and software, dedicated to helping customers in critical industries operate more sustainably and efficiently. Our commitment to our people, communities, and the planet drives us to create positive impacts through innovation, collaboration, and diversity. If you seek an environment where you can contribute to meaningful work, develop your skills, and make a difference, join us at Emerson. Let's go together towards a brighter future.,
Posted 1 month ago
6.0 - 11.0 years
8 - 12 Lacs
Chennai
Work from Office
Skills: Azure/AWS, Synapse, Fabric, PySpark, Databricks, ADF, Medallion Architecture, Lakehouse, Data Warehousing
Experience: 6+ years
Locations: Chennai, Bangalore, Pune, Coimbatore
Work from Office
Posted 1 month ago
7.0 - 11.0 years
0 Lacs
karnataka
On-site
You should have a minimum of 7 years of experience in Database warehouse / lake house programming and should have successfully implemented at least 2 end-to-end data warehouse / data lake projects. Additionally, you should have experience in implementing at least 1 Azure Data warehouse / lake house project end-to-end, converting business requirements into concept / technical specifications, and collaborating with source system experts to finalize ETL and analytics design. You will also be responsible for supporting data modeler developers in the design and development of ETLs and creating activity plans based on agreed concepts with timelines. Your technical expertise should include a strong background with Microsoft Azure components such as Azure Data Factory, Azure Synapse, Azure SQL Database, Azure Key Vault, MS Fabric, Azure DevOps (ADO), and Virtual Networks (VNets). You should also have expertise in Medallion Architecture for Lakehouses and data modeling in the Gold layer, along with a solid understanding of Data Warehouse design principles like star schema, snowflake schema, and data partitioning. Proficiency in MS SQL Database Packages, Stored procedures, Functions, procedures, Triggers, and data transformation activities using SQL is required, as well as knowledge in SQL loader, Data pump, and Import/Export utilities. Experience with data visualization or BI tools like Tableau, Power BI, capacity planning, environment management, performance tuning, and familiarity with cloud cloning/copying processes within Azure will be essential for this role. Knowledge of green computing principles and optimizing cloud resources for cost and environmental efficiency is also desired. You should possess excellent interpersonal and communication skills to collaborate effectively with technical and non-technical teams, communicate complex concepts, and influence key stakeholders. Additionally, analyzing demands, contributing to cost/benefit analysis, and estimation are part of the responsibilities. Preferred qualifications include certifications like Azure Solutions Architect Expert or Azure Data Engineer Associate. Skills required for this role include database management, Tableau, Power BI, ETL processes, Azure SQL Database, Medallion Architecture, Azure services, data visualization, data warehouse design, and Microsoft Azure technologies.,
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
As an integral part of our team at Proximity, you will be taking on the role of both a hands-on tech lead and product manager. Your primary responsibility will be to deliver data/ML platforms and pipelines within a Databricks-Azure environment. In this capacity, you will be leading a small delivery team and collaborating with enabling teams to drive product, architecture, and data science initiatives. Your ability to translate business requirements into product strategy and technical delivery with a platform-first mindset will be crucial to our success. To excel in this role, you should possess technical proficiency in Python, SQL, Databricks, Delta Lake, MLflow, Terraform, medallion architecture, data mesh/fabric, and Azure. Additionally, expertise in Agile delivery, discovery cycles, outcome-focused planning, and trunk-based development will be advantageous. You should also be adept at collaborating with engineers, working across cross-functional teams, and fostering self-service platforms. Clear communication skills will be key in articulating decisions, roadmap, and priorities effectively. Joining our team comes with a host of benefits. You will have the opportunity to engage in Proximity Talks, where you can interact with fellow designers, engineers, and product enthusiasts, and gain insights from industry experts. Working alongside our world-class team will provide you with continuous learning opportunities, allowing you to challenge yourself and acquire new knowledge on a daily basis. Proximity is a leading technology, design, and consulting partner for prominent Sports, Media, and Entertainment companies globally. With headquarters in San Francisco and additional offices in Palo Alto, Dubai, Mumbai, and Bangalore, we have a track record of creating high-impact, scalable products used by 370 million daily users. The collective net worth of our client companies stands at $45.7 billion since our inception in 2019. At Proximity, we are a diverse team of coders, designers, product managers, and experts dedicated to solving complex problems and developing cutting-edge technology at scale. As our team of Proxonauts continues to expand rapidly, your contributions will play a significant role in the company's success. You will have the opportunity to collaborate with experienced leaders who have spearheaded multiple tech, product, and design teams. To learn more about us, you can watch our CEO, Hardik Jagda, share insights about Proximity, explore our values and meet our team members, visit our website, blog, and design wing at Studio Proximity, and gain behind-the-scenes access through our Instagram accounts @ProxWrks and @H.Jagda.,
Posted 2 months ago