4.0 - 8.0 years
0 Lacs
pune, maharashtra
On-site
As a Data Quality Engineer, your primary responsibility will be to analyze business and technical requirements to design, develop, and execute comprehensive test plans for ETL pipelines and data transformations. You will perform data validation, reconciliation, and integrity checks across various data sources and target systems. Additionally, you will be expected to build and automate data quality checks using SQL and/or Python scripting. It will be your duty to identify, document, and track data quality issues, anomalies, and defects. Collaboration is key in this role, as you will work closely with data engineers, developers, QA, and business stakeholders to understand data requirements and ensure that data quality standards are met. You will define data quality KPIs and implement continuous monitoring frameworks. Participation in data model reviews and providing input on data quality considerations will also be part of your responsibilities. In case of data discrepancies, you will be expected to perform root cause analysis and work with teams to drive resolution. Ensuring alignment to data governance policies, standards, and best practices will also fall under your purview.

To qualify for this position, you should hold a Bachelor's degree in Computer Science, Information Technology, or a related field. Additionally, you should have 4 to 7 years of experience as a Data Quality Engineer, ETL Tester, or a similar role. A strong understanding of ETL concepts, data warehousing principles, and relational database design is essential. Proficiency in SQL for complex querying, data profiling, and validation tasks is required. Familiarity with data quality tools, testing methodologies, and modern cloud data ecosystems (AWS, Snowflake, Apache Spark, Redshift) will be advantageous. Moreover, advanced knowledge of SQL, data pipeline tools like Airflow, DBT, or Informatica, as well as experience with integrating data validation processes into CI/CD pipelines using tools like GitHub Actions, Jenkins, or similar, are desired qualifications. An understanding of big data platforms, data lakes, non-relational databases, data lineage, master data management (MDM) concepts, and experience with Agile/Scrum development methodologies will be beneficial for excelling in this role. Your excellent analytical and problem-solving skills along with a strong attention to detail will be valuable assets in fulfilling the responsibilities of a Data Quality Engineer.
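By way of illustration, a minimal sketch of the kind of automated data quality check this role describes could look like the following Python/pandas snippet; the column names and rules are hypothetical, not taken from the posting.

```python
import pandas as pd

def run_quality_checks(df: pd.DataFrame) -> dict:
    """Run basic completeness, uniqueness, and validity checks on an extract."""
    results = {
        # Completeness: primary key must not be missing
        "null_customer_ids": int(df["customer_id"].isna().sum()),
        # Uniqueness: primary key must not repeat
        "duplicate_customer_ids": int(df["customer_id"].duplicated().sum()),
        # Validity: order_amount must be non-negative
        "negative_amounts": int((df["order_amount"] < 0).sum()),
        # Reconciliation input: row count to compare against the source system
        "row_count": len(df),
    }
    results["passed"] = (
        results["null_customer_ids"] == 0
        and results["duplicate_customer_ids"] == 0
        and results["negative_amounts"] == 0
    )
    return results

if __name__ == "__main__":
    sample = pd.DataFrame(
        {"customer_id": [1, 2, 2, None], "order_amount": [10.0, -5.0, 7.5, 3.0]}
    )
    print(run_quality_checks(sample))
```

In practice such checks would run against the actual source and target tables (for example via SQL) and feed the continuous monitoring framework and KPIs mentioned above.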
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
haryana
On-site
As an ETL Developer - USWA Client Data Quality Engineer, Associate at our Gurgaon, Haryana location, you will be part of the Data Operations team. Your primary responsibility will involve ensuring the quality of client data through efficient ETL (Extract, Transform, Load) processes. You will be required to work collaboratively with various stakeholders to identify data quality issues, design and implement ETL solutions to address these issues, and continuously monitor and improve data quality. Additionally, you will play a key role in optimizing data processes to enhance overall efficiency and effectiveness. The ideal candidate for this position should possess a strong understanding of ETL concepts, data quality best practices, and experience working with ETL tools and technologies. Strong analytical skills, attention to detail, and the ability to troubleshoot and resolve data quality issues are essential for success in this role. If you are passionate about data quality, enjoy working in a collaborative team environment, and are looking to contribute to the success of our USWA client data operations, we would love to hear from you. Apply now and be part of our dynamic Data Operations team!,
Posted 1 week ago
1.0 - 5.0 years
0 Lacs
hyderabad, telangana
On-site
As a Business Analyst at YouTube, you will play a crucial role in the Trust & Safety team, working towards making YouTube a secure platform for users, viewers, and content creators worldwide. Your responsibilities will involve utilizing data and problem-solving techniques to define success metrics for Trust and Safety operations, measure business performance, and effectively communicate insights to executives and external stakeholders. You will collaborate with key business partners to understand data requirements, write Extract, Transform and Load (ETL) scripts, and work closely with internal teams to drive operational excellence. Your expertise will be instrumental in toggling between high-level strategic perspectives and day-to-day tactical improvements to enhance business performance and efficiency, such as implementing automation using classifiers and tools to expedite the removal of violative content. Furthermore, you will be tasked with developing, building, and evaluating performance metrics for abuse prevention and operational efficiency across YouTube, generating insights to reduce the prevalence of harmful content on the platform. Your role will also involve learning complex technical concepts and systems, effectively communicating technical results and methods, and collaborating with Data Science and Strategy teams to conduct advanced quantitative analyses that lead to actionable insights. Additionally, you will analyze existing processes to identify opportunities for enhancement, define requirements for improvement, and work collaboratively across functions and regions to optimize processes and tools. Your analytical skills will be crucial in analyzing and addressing escalations, identifying trends that may indicate potential product risks, and contributing to the continuous improvement of YouTube's Trust & Safety operations. Overall, as a Business Analyst at YouTube, you will be at the forefront of safeguarding the platform, supporting its mission to represent diversity, foster community, and empower individuals to share their stories and connect with others worldwide.,
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
chennai, tamil nadu
On-site
As an Infoscion, your main responsibility will be to interact with clients to ensure quality assurance, resolve issues, and maintain high levels of customer satisfaction. You will be involved in understanding requirements, designing, validating architecture, and delivering service offerings in the technology domain. Project estimation, solution delivery inputs, technical risk planning, code reviews, and unit test plan reviews will also be part of your tasks. Your role will involve leading and guiding teams to develop high-quality code deliverables, ensuring knowledge management, and adhering to organizational guidelines and processes. You will play a crucial role in building efficient programs/systems to support clients in their digital transformation journey. In addition to the primary skills of ETL and Data Quality, you are expected to have knowledge of multiple technologies, architecture and design fundamentals, testing tools, agile methodologies, and project life cycle activities. Understanding estimation methodologies, quality processes, business domain basics, analytical abilities, strong technical skills, and good communication skills are essential. Moreover, you should possess a good understanding of technology and domain, software quality assurance principles, SOLID design principles, and modelling methods. Awareness of the latest technologies and trends, along with excellent problem-solving, analytical, and debugging skills, will be valuable assets in this role.,
Posted 1 week ago
1.0 - 5.0 years
0 Lacs
maharashtra
On-site
As a Software Engineer at Fospha, you will be instrumental in the development and maintenance of ETL pipelines to ensure scalable and high-performance data solutions in a cloud environment. You will collaborate with a diverse team of engineers, analysts, and stakeholders to optimize data transformations and uphold data integrity. Your role will involve executing the full software development life cycle, writing well-designed and testable code, integrating software components, and developing software verification plans and quality assurance procedures. Additionally, you will troubleshoot, debug, and upgrade existing systems while complying with project plans and industry standards. Fospha, a marketing measurement platform for eCommerce brands, has achieved significant growth and recognition in the industry. As part of a global expansion, we are seeking exceptional candidates to join our team in India, operating under the umbrella of our investors, Blenheim Chalcot. You will have the opportunity to work on Fospha's Modelling & Attribution platform, Planning and Analytics tool alongside a team of talented engineers. We are looking for individuals with strong problem-solving skills, a self-starter attitude in data engineering practices, and a collaborative approach to working in Agile environments. As a quick learner and innovator, you should be passionate about data engineering and eager to drive high-impact projects. Required skills include hands-on coding experience in Python, SQL proficiency for data transformations, familiarity with Data processing pipelines (ETL), working knowledge of AWS Cloud, experience with tools like Jira and Bitbucket/Git. While not mandatory, expertise in coding with dbt, knowledge of CI/CD pipelines, and understanding of data modeling and statistics would be advantageous. In return, we offer competitive compensation, the opportunity to collaborate with a global team, access to diverse challenges within a culture of continuous learning and development, private medical benefits for you and your family, life insurance coverage, and various social events throughout the year. Join us at Fospha and be part of our exciting journey of growth and innovation.,
Posted 1 week ago
6.0 - 10.0 years
0 - 0 Lacs
hyderabad, telangana
On-site
You will be joining QTek Digital, a leading data solutions provider known for its expertise in custom data management, data warehouse, and data science solutions. Our team of dedicated data professionals, including data scientists, data analysts, and data engineers, collaborates to address present-day challenges and pave the way for future innovations. At QTek Digital, we value our employees and focus on fostering engagement, empowerment, and continuous growth opportunities. As a BI ETL Engineer at QTek Digital, you will be taking on a full-time remote position. Your primary responsibilities will revolve around tasks such as data modeling, applying analytical skills, implementing data warehouse solutions, and managing Extract, Transform, Load (ETL) processes. This role demands strong problem-solving capabilities and the capacity to work autonomously. To excel in this role, you should ideally possess: - 6-9 years of hands-on experience in ETL and ELT pipeline development using tools like Pentaho, SSIS, FiveTran, Airbyte, or similar platforms. - 6-8 years of practical experience in SQL and other data manipulation languages. - Proficiency in Data Modeling, Dashboard creation, and Analytics. - Sound knowledge of data warehousing principles, particularly Kimball design. - Bonus points for familiarity with Pentaho and Airbyte administration. - Demonstrated expertise in Data Modeling, Dashboard design, Analytics, Data Warehousing, and ETL procedures. - Strong troubleshooting and problem-solving skills. - Effective communication and collaboration abilities. - Capability to operate both independently and as part of a team. - A Bachelor's degree in Computer Science, Information Systems, or a related field. This position is based in our Hyderabad office, offering an attractive compensation package ranging from INR 5-19 Lakhs, depending on various factors such as your skills and prior experience. Join us at QTek Digital and be part of a dynamic team dedicated to shaping the future of data solutions.,
Posted 1 week ago
5.0 - 15.0 years
0 Lacs
India
Remote
Job Title: Snowflake Developer
Job Duration: 5 months
Job Location: Remote (India)
Job Summary: We are seeking a skilled and detail-oriented Snowflake Developer to design, develop, and maintain scalable data solutions using the Snowflake platform. The ideal candidate will have experience in data warehousing, ETL/ELT processes, and cloud-based data architecture.
Key Responsibilities:
• Design and implement data pipelines using Snowflake, SQL, and ETL tools.
• Develop and optimize complex SQL queries for data extraction and transformation.
• Create and manage Snowflake objects such as databases, schemas, tables, views, and stored procedures.
• Integrate Snowflake with various data sources and third-party tools.
• Monitor and troubleshoot performance issues in Snowflake environments.
• Collaborate with data engineers, analysts, and business stakeholders to understand data requirements.
• Ensure data quality, security, and governance standards are met.
• Automate data workflows and implement best practices for data management.
Required Skills and Qualifications:
• Proficiency in Snowflake SQL and Snowflake architecture.
• Experience with ETL/ELT tools (e.g., Informatica, Talend, dbt, Matillion).
• Strong knowledge of cloud platforms (AWS, Azure, or GCP).
• Familiarity with data modeling and data warehousing concepts.
• Experience with Python, Java, or Shell scripting is a plus.
• Understanding of data security, role-based access control, and data sharing in Snowflake.
• Excellent problem-solving and communication skills.
Preferred Qualifications:
• Snowflake certification (e.g., SnowPro Core).
• Experience with CI/CD pipelines and DevOps practices.
• Knowledge of BI tools like Tableau, Power BI, or Looker.
• 5-15 years of experience is preferred.
• Experience with Agile-based development.
• Problem-solving skills.
• Proficiency in writing performant SQL queries/scripts to generate business insights and drive better organizational decision making.
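As a hedged illustration of the Snowflake object management and SQL transformation work listed above, here is a minimal Python sketch using the Snowflake connector; the account, credentials, and table names are placeholders, not details from the posting.

```python
import snowflake.connector  # pip install snowflake-connector-python

# Hypothetical connection settings; in practice these come from a secrets manager.
conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",
    warehouse="ETL_WH", database="ANALYTICS", schema="STAGING",
)
try:
    cur = conn.cursor()
    # Create a target table (illustrative schema) if it does not exist yet.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS DAILY_SALES (
            SALE_DATE DATE, REGION STRING, TOTAL_AMOUNT NUMBER(18, 2)
        )
    """)
    # Load aggregated rows from a hypothetical staging table.
    cur.execute("""
        INSERT INTO DAILY_SALES
        SELECT sale_date, region, SUM(amount)
        FROM STG_ORDERS
        GROUP BY sale_date, region
    """)
finally:
    conn.close()
```

The same statements could equally be run from an ETL tool or a stored procedure; the Python connector is shown only because the posting lists Python scripting as a plus.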
Posted 1 week ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Company Description
It all started in sunny San Diego, California in 2004 when a visionary engineer, Fred Luddy, saw the potential to transform how we work. Fast forward to today — ServiceNow stands as a global market leader, bringing innovative AI-enhanced technology to over 8,100 customers, including 85% of the Fortune 500®. Our intelligent cloud-based platform seamlessly connects people, systems, and processes to empower organizations to find smarter, faster, and better ways to work. But this is just the beginning of our journey. Join us as we pursue our purpose to make the world work better for everyone.
Job Description
What you get to do in this role:
Develop and maintain AI-powered internal tools that automate workflows and boost stakeholders' productivity, with a specialized focus on sales analytics and strategic planning operations.
Build and deliver ETL pipelines for Power BI/Snowflake datasets optimized for LLM consumption, enabling efficient AI-driven analysis and empowering power users.
Collaborate cross-functionally with Data & Analytics teams and Sales Operations teams to identify high-value AI use cases and rapidly prototype AI-enabled utilities that align with business goals.
Transform enterprise data from Power BI and Snowflake into LLM-optimized formats while ensuring data integrity and reliable performance across AI-driven solutions.
Manage the complete AI agent development lifecycle from ideation through testing, production deployment, and user adoption, while implementing continuous integration and documenting best practices.
Champion organizational AI adoption by ensuring seamless system integration, demonstrating clear business value, and maintaining high standards for performance and user experience.
Qualifications
In order to be successful in this role:
Experience in leveraging or critically thinking about how to integrate AI into work processes, decision-making, or problem-solving. This may include using AI-powered tools, automating workflows, analyzing AI-driven insights, or exploring AI's potential impact on the function or industry.
8+ years of proven track record supporting sales organizations or sales business processes through analytics, automation, or technical solutions.
Demonstrated history of building and deploying AI agents or automation tools in real-world business settings for sales workflow automation.
Proven experience with semantic modeling in Power BI or Snowflake, plus familiarity with transforming sales data models for LLM integration and sales analytics optimization (including data restructuring for LLM integration).
Strong understanding of data engineering, APIs, and cloud-based architecture, with experience in sales data.
Ability to function both independently and as part of cross-functional teams, including sales teams and business stakeholders, in fast-paced environments.
Hands-on experience with rapid prototyping, iterative testing, and agile methodologies specifically applied to sales tools and business process improvements.
Additional Information
Work Personas: We approach our distributed world of work with flexibility and trust. Work personas (flexible, remote, or required in office) are categories that are assigned to ServiceNow employees depending on the nature of their work and their assigned work location. Learn more here. To determine eligibility for a work persona, ServiceNow may confirm the distance between your primary residence and the closest ServiceNow office using a third-party service.
Equal Opportunity Employer: ServiceNow is an equal opportunity employer.
All qualified applicants will receive consideration for employment without regard to race, color, creed, religion, sex, sexual orientation, national origin or nationality, ancestry, age, disability, gender identity or expression, marital status, veteran status, or any other category protected by law. In addition, all qualified applicants with arrest or conviction records will be considered for employment in accordance with legal requirements. Accommodations We strive to create an accessible and inclusive experience for all candidates. If you require a reasonable accommodation to complete any part of the application process, or are unable to use this online application and need an alternative method to apply, please contact globaltalentss@servicenow.com for assistance. Export Control Regulations For positions requiring access to controlled technology subject to export control regulations, including the U.S. Export Administration Regulations (EAR), ServiceNow may be required to obtain export control approval from government authorities for certain individuals. All employment is contingent upon ServiceNow obtaining any export license or other approval that may be required by relevant export control authorities. From Fortune. ©2025 Fortune Media IP Limited. All rights reserved. Used under license.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
tiruchirappalli, tamil nadu
On-site
INFOC is currently looking for a skilled PowerBI Data Analyst to be a part of the Data Analytics team. The ideal candidate should possess a solid foundation in data analysis and visualization, coupled with expert-level proficiency in PowerBI. In this role, you will be responsible for converting data into actionable insights that drive strategic decisions and enhance business outcomes. Collaborating closely with stakeholders throughout the organization, you will comprehend their data requirements and produce engaging visualizations and dashboards that narrate the story concealed within the data.

Your main responsibilities will include the development and upkeep of PowerBI dashboards and reports that offer perceptive and actionable analytics across diverse business units. Working alongside business stakeholders, you will ascertain their data analysis needs and provide solutions that cater to those requirements. Furthermore, you will be responsible for ETL processes, ensuring the accuracy and reliability of data imported from various sources into PowerBI. By implementing data modeling, data cleansing, and enrichment techniques, you will enhance the quality and effectiveness of data analysis. Additionally, you will conduct ad-hoc analyses and present findings to non-technical stakeholders in a clear and understandable manner.

To qualify for this role, you should hold a Bachelor's or Master's degree in Computer Science, Data Science, Information Technology, or a related field. A proven track record as a Data Analyst, Business Intelligence Analyst, or similar role, with a strong emphasis on PowerBI, is required. Proficiency in PowerBI, encompassing data modeling, DAX, and custom visuals, is essential. A sound understanding of SQL and experience with database technologies is necessary. Familiarity with data preparation, data gateway, and data warehousing concepts is advantageous. Strong analytical and problem-solving skills are crucial, along with excellent communication and interpersonal abilities. You should be capable of translating complex data into actionable insights for individuals at all levels within the organization. Stay abreast of the latest trends and advancements in data analytics and PowerBI capabilities to continually enhance data analysis processes and tools.
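To illustrate the data cleansing and ETL preparation such a role typically involves, here is a small, assumed pandas sketch that tidies a source extract before it is imported into PowerBI; the file and column names are hypothetical.

```python
import pandas as pd

# Hypothetical source extract; column names are illustrative only.
raw = pd.read_csv("sales_extract.csv")

clean = (
    raw
    .drop_duplicates(subset=["order_id"])                       # remove duplicate orders
    .assign(
        order_date=lambda d: pd.to_datetime(d["order_date"], errors="coerce"),
        region=lambda d: d["region"].str.strip().str.title(),   # normalise region labels
        amount=lambda d: pd.to_numeric(d["amount"], errors="coerce"),
    )
    .dropna(subset=["order_date", "amount"])                    # drop rows that failed parsing
)

# Save a tidy file that PowerBI (or any BI tool) can import directly.
clean.to_csv("sales_clean.csv", index=False)
```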
Posted 1 week ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
About Us
Welcome to FieldAssist, where innovation meets excellence! We are a top-tier SaaS platform that specializes in optimizing Route-to-Market strategies and enhancing brand relationships within the CPG partner ecosystem. With over 1,00,000 sales users representing 600+ CPG brands across 10+ countries in South East Asia, the Middle East, and Africa, we reach 10,000 distributors and 7.5 million retail outlets every day. FieldAssist is a 'Proud Partner to Great Brands' like Godrej Consumers, Saro Africa, Danone, Tolaram, Haldiram's, Eureka Forbes, Bisleri, Nilon's, Borosil, Adani Wilmar, Henkel, Jockey, Emami, Philips, Ching's and Mamaearth, among others. Do you crave a dynamic work environment where you can excel and enjoy the journey? We have the perfect opportunity for you!
Responsibilities
Build and maintain robust backend services and REST APIs using Python (Django, Flask, or FastAPI). Develop end-to-end ML pipelines including data preprocessing, model inference, and result delivery. Integrate and scale AI/LLM models, including RAG (Retrieval Augmented Generation) and intelligent agents. Design and optimize ETL pipelines and data workflows using tools like Apache Airflow or Prefect. Work with Azure SQL and Cosmos DB for transactional and NoSQL workloads. Implement and query vector databases for similarity search and embedding-based retrieval (e.g., Azure Cognitive Search, FAISS, or Pinecone). Deploy services on Azure Cloud, using Docker and CI/CD practices. Collaborate with cross-functional teams to bring AI features into product experiences. Write unit/integration tests and participate in code reviews to ensure high code quality. Create and maintain applications using the .NET platform and environment.
Who we're looking for:
Strong command of Python 3.x, with experience in Django, Flask, or FastAPI. Experience building and consuming RESTful APIs in production systems. Solid grasp of ML workflows, including model integration, inferencing, and LLM APIs (e.g., OpenAI). Familiarity with RAG, vector embeddings, and prompt-based workflows. Proficient with Azure SQL and Cosmos DB (NoSQL). Experience with vector databases (e.g., FAISS, Pinecone, Azure Cognitive Search). Proficiency in containerization using Docker, and deployment on Azure Cloud. Experience with data orchestration tools like Apache Airflow. Comfortable working with Git, CI/CD pipelines, and observability tools. Strong debugging, testing (pytest/unittest), and optimization skills.
Good to Have: Experience with LangChain, transformers, or LLM fine-tuning. Exposure to MLOps practices and Azure ML. Hands-on experience with PySpark for data processing at scale. Contributions to open-source projects or AI toolkits. Background working in startup-like environments or cross-functional product teams.
FieldAssist on the Web:
Website: https://www.fieldassist.com/people-philosophy-culture/
Culture Book: https://www.fieldassist.com/fa-culture-book
CEO's Message: https://www.youtube.com/watch?v=bl_tM5E5hcw
LinkedIn: https://www.linkedin.com/company/fieldassist/
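As a rough sketch of the kind of REST endpoint this stack implies, the snippet below shows a minimal FastAPI service with a stubbed similarity-search step; the route, request model, and scoring logic are illustrative assumptions, not FieldAssist's actual API.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class QueryRequest(BaseModel):
    question: str
    top_k: int = 5

@app.post("/search")
def search(req: QueryRequest) -> dict:
    # Placeholder for an embedding + vector-store lookup (e.g. FAISS or Azure Cognitive Search).
    # A real implementation would embed req.question and run a similarity query.
    hits = [{"doc_id": i, "score": round(1.0 - 0.1 * i, 2)} for i in range(req.top_k)]
    return {"question": req.question, "results": hits}

# Run locally with: uvicorn main:app --reload
```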
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation. You will be part of a team of highly skilled professionals working with cutting-edge technologies. Our purpose is to bring real positive changes in an increasingly virtual world, transcending generational gaps and disruptions of the future.

We are seeking AWS Glue Professionals with the following qualifications:
- 3 or more years of experience in AWS Glue, Redshift, and Python
- 3+ years of experience in engineering with expertise in ETL work with cloud databases
- Proficiency in data management and data structures, including writing code for data reading, transformation, and storage
- Experience in launching Spark jobs in client mode and cluster mode, with knowledge of Spark job property settings and their impact on performance
- Proficiency with source code control systems like Git
- Experience in developing ELT/ETL processes for loading data from enterprise-sized RDBMS systems such as Oracle, DB2, MySQL, etc.
- Coding proficiency in Python or expertise in high-level languages like Java, C, Scala
- Experience in using REST APIs
- Expertise in SQL for manipulating database data, familiarity with views, functions, stored procedures, and exception handling
- General knowledge of AWS Stack (EC2, S3, EBS), IT Process Compliance, SDLC experience, and formalized change controls
- Working in DevOps teams based on Agile principles (e.g., Scrum)
- ITIL knowledge, especially in incident, problem, and change management
- Proficiency in PySpark for distributed computation
- Familiarity with Postgres and ElasticSearch

At YASH, you will have the opportunity to build a career in an inclusive team environment. We offer career-oriented skilling models and leverage technology for continuous learning, unlearning, and relearning at a rapid pace and scale. Our workplace is grounded in four principles:
- Flexible work arrangements, free spirit, and emotional positivity
- Agile self-determination, trust, transparency, and open collaboration
- Support for the realization of business goals
- Stable employment with a great atmosphere and ethical corporate culture.
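For orientation, a minimal PySpark job of the sort an AWS Glue professional might run could look like the sketch below; the S3 paths and columns are hypothetical, and a real Glue job would typically use GlueContext and resolved job arguments instead of hard-coded values.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Hypothetical raw extract landed in S3 by an upstream system.
orders = spark.read.option("header", True).csv("s3://my-bucket/raw/orders/")

# Cast, aggregate, and write a curated, partitioned dataset.
daily = (
    orders
    .withColumn("amount", F.col("amount").cast("double"))
    .groupBy("order_date", "region")
    .agg(F.sum("amount").alias("total_amount"))
)

daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://my-bucket/curated/daily_orders/"
)

spark.stop()
```

A script like this is launched with spark-submit, where --deploy-mode client or --deploy-mode cluster corresponds to the client and cluster modes mentioned in the qualifications above.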
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
karnataka
On-site
YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation. At YASH, we are a cluster of the brightest stars working with cutting-edge technologies. Our purpose is anchored in a single truth - bringing real positive changes in an increasingly virtual world and it drives us beyond generational gaps and disruptions of the future. We are looking forward to hiring ETL (Extract, Transform, Load) Professionals with the following requirements: **Experience:** 8-10 Years **Job Description:** - 8 to 10 years of experience in designing and developing reliable solutions. - Ability to work with business partners and provide long-lasting solutions. - Minimum 5 years of experience in Snowflake. - Strong knowledge in Any ETL, Data Modeling, and Data Warehousing. - Minimum 2 years of work experience on Data Vault modeling. - Strong knowledge in SQL, PL/SQL, and RDBMS. - Domain knowledge in Manufacturing / Supply chain / Sales / Finance areas. - Good to have Snaplogic knowledge or project experience. - Good to have cloud platform knowledge AWS or Azure. - Good to have knowledge in Python/Pyspark. - Experience in Data migration / Modernization projects. - Zeal to pick up new technologies and do POCs. - Ability to lead a team to deliver the expected business results. - Good analytical and strong troubleshooting skills. - Excellent communication and strong interpersonal skills. At YASH, you are empowered to create a career that will take you to where you want to go while working in an inclusive team environment. We leverage career-oriented skilling models and optimize our collective intelligence aided with technology for continuous learning, unlearning, and relearning at a rapid pace and scale. Our Hyperlearning workplace is grounded upon four principles: - Flexible work arrangements, Free spirit, and emotional positivity. - Agile self-determination, trust, transparency, and open collaboration. - All Support needed for the realization of business goals. - Stable employment with a great atmosphere and ethical corporate culture.,
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
pune, maharashtra
On-site
As a dynamic global technology company, Schaeffler's success stems from its entrepreneurial spirit and long history of private ownership. Partnering with major automobile manufacturers, as well as key players in the aerospace and industrial sectors, we offer numerous development opportunities globally. Your key responsibilities include developing data pipelines and utilizing methods and tools to collect, store, process, and analyze complex data sets for assigned operations or functions. You will design, govern, build, and operate solutions for large-scale data architectures and applications across businesses and functions. Additionally, you will manage and work hands-on with big data tools and frameworks, implement ETL tools and processes, data virtualization, and federation services. Engineering data integration pipelines and reusable data services using cross-functional data models, semantic technologies, and data integration solutions is also part of your role. You will define, implement, and apply data governance policies for all data flows of data architectures, focusing on the digital platform and data lake. Furthermore, you will define and implement policies for data ingestion, retention, lineage, access, data service API management, and usage in collaboration with data management and IT functions. To qualify for this position, you should hold a Graduate Degree in Computer Science, Applied Computer Science, or Software Engineering with 3 to 5 years of relevant experience. Emphasizing respect and valuing diverse ideas and perspectives among our global workforce is essential to us. By fostering creativity through appreciating differences, we drive innovation and contribute to sustainable value creation for our stakeholders and society as a whole. Together, we are shaping the future with innovation, offering exciting assignments and outstanding development opportunities. We eagerly anticipate your application. For technical inquiries, please contact the following email address: technical-recruiting-support-AP@schaeffler.com. For more information and to apply, visit www.schaeffler.com/careers.,
Posted 1 week ago
5.0 - 8.0 years
0 Lacs
Delhi, India
On-site
Job Summary
We are looking for a skilled Data Modeler / Architect with 5-8 years of experience in designing, implementing, and optimizing robust data architectures in the financial payments industry. The ideal candidate will have deep expertise in SQL, data modeling, ETL/ELT pipeline development, and cloud-based data platforms such as Databricks or Snowflake. You will play a key role in designing scalable data models, orchestrating reliable data workflows, and ensuring the integrity and performance of mission-critical financial datasets. This is a highly collaborative role interfacing with engineering, analytics, product, and compliance teams.
Key Responsibilities
Design, implement, and maintain logical and physical data models to support transactional, analytical, and reporting systems. Develop and manage scalable ETL/ELT pipelines for processing large volumes of financial transaction data. Tune and optimize SQL queries, stored procedures, and data transformations for maximum performance. Build and manage data orchestration workflows using tools like Airflow, Dagster, or Luigi. Architect data lakes and warehouses using platforms like Databricks, Snowflake, BigQuery, or Redshift. Enforce and uphold data governance, security, and compliance standards (e.g., PCI-DSS, GDPR). Collaborate closely with data engineers, analysts, and business stakeholders to understand data needs and deliver solutions. Conduct data profiling, validation, and quality assurance to ensure clean and consistent data. Maintain clear and comprehensive documentation for data models, pipelines, and architecture.
Required Skills & Qualifications
5-8 years of experience as a Data Modeler, Data Architect, or Senior Data Engineer in the financial/payments domain. Advanced SQL expertise, including query tuning, indexing, and performance optimization. Proficiency in developing ETL/ELT workflows using tools such as Spark, dbt, Talend, or Informatica. Experience with data orchestration frameworks: Airflow, Dagster, Luigi, etc. Strong hands-on experience with cloud-based data platforms like Databricks, Snowflake, or equivalents. Deep understanding of data warehousing principles: star/snowflake schema, slowly changing dimensions, etc. Familiarity with financial data structures, such as payment transactions, reconciliation, fraud patterns, and audit trails. Working knowledge of cloud services (AWS, GCP, or Azure) and data security best practices. Strong analytical thinking and problem-solving capabilities in high-scale environments.
Preferred Qualifications
Experience with real-time data pipelines (e.g., Kafka, Spark Streaming). Exposure to data mesh or data fabric architecture paradigms. Certifications in Snowflake, Databricks, or relevant cloud platforms. Knowledge of Python or Scala for data engineering tasks (ref:hirist.tech)
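As an assumed example of the orchestration work described above, here is a minimal Airflow 2.x-style DAG that strings together extract, transform, and load steps; the DAG id, schedule, and task bodies are placeholders rather than anything specified by the posting.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw payment transactions from the source system")

def transform():
    print("validate, deduplicate, and conform rows to the star schema")

def load():
    print("load dimensions first, then the fact table")

with DAG(
    dag_id="payments_elt",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```

The same dependency chain could be expressed in Dagster or Luigi; Airflow is shown only because it is listed first among the orchestration tools.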
Posted 1 week ago
10.0 years
0 Lacs
Greater Kolkata Area
Remote
Java Back End Engineer with AWS Location : Remote Experience : 10+ Years Employment Type : Full-Time Job Overview We are looking for a highly skilled Java Back End Engineer with strong AWS cloud experience to design and implement scalable backend systems and APIs. You will work closely with cross-functional teams to develop robust microservices, optimize database performance, and contribute across the tech stack, including infrastructure automation. Core Responsibilities Design, develop, and deploy scalable microservices using Java, J2EE, Spring, and Spring Boot. Build and maintain secure, high-performance APIs and backend services on AWS or GCP. Use JUnit and Mockito to ensure test-driven development and maintain code quality. Develop and manage ETL workflows using tools like Pentaho, Talend, or Apache NiFi. Create High-Level Design (HLD) and architecture documentation for system components. Collaborate with cross-functional teams (DevOps, Frontend, QA) as a full-stack contributor when needed. Tune SQL queries and manage performance on MySQL and Amazon Redshift. Troubleshoot and optimize microservices for performance and scalability. Use Git for source control and participate in code reviews and architectural discussions. Automate infrastructure provisioning and CI/CD processes using Terraform, Bash, and pipelines. Primary Skills Languages & Frameworks : Java (v8/17/21), Spring Boot, J2EE, Servlets, JSP, JDBC, Struts Architecture : Microservices, REST APIs Cloud Platforms : AWS (EC2, S3, Lambda, RDS, CloudFormation, SQS, SNS) or GCP Databases : MySQL, Redshift Secondary Skills (Good To Have) Infrastructure as Code (IaC) : Terraform Additional Languages : Python, Node.js Frontend Frameworks : React, Angular, JavaScript ETL Tools : Pentaho, Talend, Apache NiFi (or equivalent) CI/CD & Containers : Jenkins, GitHub Actions, Docker, Kubernetes Monitoring/Logging : AWS CloudWatch, DataDog Scripting : Bash, Shell scripting Nice To Have Familiarity with agile software development practices Experience in a cross-functional engineering environment Exposure to DevOps culture and tools (ref:hirist.tech)
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
jaipur, rajasthan
On-site
As a Data Consultant, you will be responsible for managing large data migration projects for non-profit organizations and educational institutions. Your expertise in ETL tools such as Jitterbit, Informatica, Boomi, SSIS, and MuleSoft will be crucial in designing and implementing data migrations effectively. Experience with Salesforce, NPSP, and EDA is a definite advantage, along with a strong understanding of relational databases, SOQL, and SQL. Your role will involve serving as the lead data resource on Salesforce implementation projects, creating and iterating on data conversion maps, evaluating, designing, and implementing data migration solutions for clients, and aligning data migration plans with project timelines. Effective communication skills and a genuine interest in helping clients are essential for success in this client-facing role. Collaboration with peers and clients to build consensus is also a key aspect of your responsibilities. Key Responsibilities: - Lead data resource on Salesforce implementation projects - Create and iterate data conversion maps - Evaluate, design, and implement data migration solutions - Plan and coordinate iterative data migration - Maintain up-to-date knowledge of NPC and/or EDC - Assess client requirements to design architecturally-sound solutions - Deliver project assignments on time and within budget - Contribute to the improvement of Cloud for Good - Ensure customer satisfaction - Provide informal mentorship and facilitate growth opportunities - Work in shifts to accommodate a global customer base across different time zones Qualifications: - Experience with data transformation and migration to Salesforce using ETL tools - 2+ years of consulting experience - Strong understanding of relational databases, SOQL, and SQL - Knowledge of agile methodology - Salesforce.com administrator certification (or willingness to complete within onboarding) - Strong Salesforce configuration knowledge is a plus - Experience working with non-profits and/or higher education institutions - Consulting, communication, and teamwork skills - Track record of organizational improvement Preferred Skills: - Time management - Written and verbal communication - Intellectual curiosity - Continuous learning mindset - Mentoring and presentation skills If you are passionate about making an impact, thrive in a collaborative and innovative environment, seek professional growth opportunities, and value competitive benefits, you may be a great fit for this role. Join us to play a pivotal role in shaping a rapidly growing venture studio.,
Posted 1 week ago
10.0 - 14.0 years
0 Lacs
karnataka
On-site
As a senior-level Data Engineer with Machine Learning Analyst capabilities, you will play a crucial role in leading the architecture, development, and management of scalable data solutions. Your expertise in data architecture, big data pipeline development, and data quality enhancement will be key in processing large-scale datasets and supporting machine learning workflows.

Your key responsibilities will include designing, developing, and maintaining end-to-end data pipelines for ingestion, transformation, and delivery across various business systems. You will ensure robust data quality, data lineage, data reconciliation, and governance practices. Additionally, you will architect and manage data warehouse and big data solutions supporting both structured and unstructured data. Optimizing and automating ETL/ELT processes for high-volume data environments will be essential, with a focus on processing 5B+ records. Collaborating with data scientists and analysts to support machine learning workflows and implementing streamlined DaaS workflows will also be part of your role.

To succeed in this position, you must have at least 10 years of experience in data engineering, including data architecture and pipeline development. Your proven experience with Spark and Hadoop clusters for processing large-scale datasets, along with a strong understanding of ETL frameworks, data quality processes, and automation best practices, will be critical. Experience in data ingestion, lineage, governance, and reconciliation, as well as a solid understanding of data warehouse design principles and data modeling, are must-have skills. Expertise in automated data processing, especially for DaaS platforms, is essential.

Desirable skills for this role include experience with Apache HBase, Apache NiFi, and other Big Data tools, knowledge of distributed computing principles and real-time data streaming, familiarity with machine learning pipelines and supporting data structures, and exposure to data cataloging and metadata management tools. Proficiency in Python, Scala, or Java for data engineering tasks is also beneficial.

In addition to technical skills, soft skills such as a strong analytical and problem-solving mindset, excellent communication skills for collaboration across technical and business teams, and the ability to work independently, manage multiple priorities, and lead data initiatives are required. If you are excited about the opportunity to work as a Data Engineer with Machine Learning Analyst capabilities and possess the necessary skills and experience, we look forward to receiving your application.
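A simple reconciliation check between a source extract and a curated target, one of the data quality practices this role emphasizes, might be sketched in PySpark as follows; the paths and columns are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("reconciliation_check").getOrCreate()

# Hypothetical locations for a raw extract and the curated target table.
source = spark.read.parquet("s3://lake/raw/transactions/")
target = spark.read.parquet("s3://lake/curated/transactions/")

def summarise(df):
    # Collect a handful of control totals used to compare the two datasets.
    return df.agg(
        F.count("*").alias("row_count"),
        F.sum("amount").alias("total_amount"),
        F.countDistinct("transaction_id").alias("distinct_ids"),
    ).collect()[0]

src, tgt = summarise(source), summarise(target)
mismatches = {
    k: (src[k], tgt[k])
    for k in ("row_count", "total_amount", "distinct_ids")
    if src[k] != tgt[k]
}

if mismatches:
    raise ValueError(f"Reconciliation failed: {mismatches}")
print("Source and target are reconciled.")
```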
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
gujarat
On-site
You are a Senior Data Analyst responsible for efficiently planning, collecting, analyzing, and developing Tableau reports for specific client projects. Your key responsibilities include identifying, cleaning, and combining data to solve business problems, researching and troubleshooting data-related questions, monitoring and maintaining data integrity, transforming data into actionable insights, developing business workflows for better decision-making, coordinating data migration processes, staying updated on emerging technologies, performing ad-hoc reporting, and supporting users with comprehensive documentation. To qualify for this role, you must have 5-8+ years of experience in data analysis, strong Tableau report building skills, proficiency in SQL queries and dashboard reporting, ability to write advanced SQL queries, gather business requirements, and translate them into technical specifications. A Bachelor's degree in MIS, Mathematics, Statistics, Computer Science, or Business with an analytics focus is required. You should possess strong analytical, problem-solving, documentation, and prioritization skills, a willingness to learn and adapt to new technologies, teamwork abilities, effective communication in English, and knowledge of ETL tools and basic machine learning algorithms. The benefits of this position include a Group Mediclaim Policy, Parental Insurance Coverage, Accident Policy, Retirement Benefits (Provident Fund), Gratuity, Overtime Bonus, Paid Vacation & Holidays, Profit Sharing & Incentives.,
Posted 1 week ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Business Intelligence (BI) Developer Location : Hyderabad, India Experience : Minimum 6 years Work Mode : Hybrid Notice Period : 0-15 days Interview Process : 2 Rounds (1 Virtual, 1 In-Person) Job Summary We are actively seeking a highly skilled and experienced Business Intelligence (BI) Developer to join our team. The ideal candidate will possess a robust background in data analytics and reporting , demonstrating deep expertise in Power BI , DAX queries , and Looker . This pivotal role involves transforming complex data into actionable insights, directly supporting strategic business decision-making. Key Responsibilities Design, develop, and maintain impactful interactive dashboards and reports utilizing Power BI and Looker. Write and optimize advanced DAX queries to support intricate business logic and calculations. Collaborate effectively with data engineers, analysts, and business stakeholders to comprehend reporting requirements and translate them into robust technical solutions. Ensure the highest standards of data accuracy, consistency, and performance across all BI solutions. Conduct thorough data analysis and validation to support diverse business initiatives. Automate and streamline reporting processes to enhance efficiency and scalability. Stay current with the latest BI tools, industry trends, and best practices to drive continuous improvement. Required Skills & Qualifications Minimum 6 years of experience in BI development and data analytics. Strong proficiency in Power BI, including extensive experience with DAX and Power Query. Hands-on experience with Looker and LookML. Solid understanding of data modeling, ETL processes, and SQL. Proven ability to work with large datasets and optimize performance for complex queries. Excellent problem-solving skills and strong verbal and written communication abilities. Bachelors degree in Computer Science, Information Systems, or a related field. Preferred Qualifications Experience working in leading cloud environments (e.g., Azure, GCP, AWS). Familiarity with Agile methodologies. Knowledge of other BI tools or programming languages (e.g., Python, R) is a significant plus. (ref:hirist.tech)
Posted 1 week ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Summary
We are seeking a highly skilled and experienced BI Developer with a strong background in data analytics and reporting. The ideal candidate will have deep expertise in Power BI, DAX queries, and Looker, and will play a key role in transforming data into actionable insights to support business decisions.
Responsibilities:
Design, develop, and maintain interactive dashboards and reports using Power BI and Looker. Write and optimize DAX queries to support complex business logic and calculations. Collaborate with data engineers, analysts, and business stakeholders to understand reporting needs and translate them into technical solutions. Ensure data accuracy, consistency, and performance across BI solutions. Perform data analysis and validation to support business initiatives. Automate and streamline reporting processes for efficiency and scalability. Stay updated with the latest BI tools, trends, and best practices.
Required Skills & Qualifications:
Minimum 5 years of experience in BI development and data analytics. Strong proficiency in Power BI, including DAX and Power Query. Hands-on experience with Looker and LookML. Solid understanding of data modeling, ETL processes, and SQL. Ability to work with large datasets and optimize performance. Excellent problem-solving and communication skills. Bachelor's degree in Computer Science, Information Systems, or a related field.
Preferred Qualifications:
Experience working in cloud environments (e.g., Azure, GCP, AWS). Familiarity with Agile methodologies. Knowledge of other BI tools or programming languages (e.g., Python, R) is a plus. (ref:hirist.tech)
Posted 1 week ago
2.0 - 4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Title : Data Analyst Location : Bangalore, India Experience : 2-4 Years Employment Type : Full-Time About The Role We are seeking a skilled and analytical Data Analyst to join our dynamic team in Bangalore. The ideal candidate will possess strong expertise in Excel, SQL, Power BI, Tableau, and Python, and will be responsible for turning data into actionable insights to drive business decisions. Key Responsibilities Collect, clean, and validate data from various sources to ensure accuracy and completeness. Design and build dashboards and reports using Power BI and Tableau to provide insights to stakeholders. Write and optimize SQL queries to extract, manipulate, and analyze data from databases. Use Python for data processing, automation, and advanced analytics tasks. Perform trend analysis, pattern recognition, and forecasting to support business strategy. Collaborate with cross-functional teams including Product, Marketing, Finance, and Tech to gather data requirements. Translate business questions into data analysis and provide actionable recommendations. Ensure data integrity, privacy, and security standards are maintained. Continuously improve data collection and analysis processes for efficiency and scalability. Required Skills & Qualifications Bachelors degree in Computer Science, Statistics, Mathematics, Economics, or related field. 2- 4 years of experience in data analysis or business intelligence roles. Proficiency in Excel (including advanced formulas, pivot tables, charts). Strong command of SQL for data extraction and analysis. Experience with Power BI and/or Tableau for dashboard/report development. Hands-on experience in Python for data analysis and scripting. Excellent problem-solving and critical-thinking skills. Strong communication skills with the ability to explain complex data findings clearly. Attention to detail and a passion for data accuracy and storytelling. Nice To Have Experience with cloud platforms like AWS, GCP, or Azure. Familiarity with statistical tools and machine learning basics. Exposure to ETL tools and data warehousing concepts. Perks & Benefits Competitive salary and performance bonuses Flexible working hours Health insurance Learning and development budget Collaborative work culture (ref:hirist.tech)
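As an illustration of the Python-based trend analysis mentioned above, a minimal pandas sketch could look like this; the input file and column names are assumptions, not part of the posting.

```python
import pandas as pd

# Hypothetical daily sales extract pulled via SQL; columns are illustrative.
df = pd.read_csv("daily_sales.csv", parse_dates=["date"])

trend = (
    df.set_index("date")
      .resample("W")["revenue"].sum()          # weekly totals
      .to_frame("weekly_revenue")
      .assign(rolling_4w=lambda d: d["weekly_revenue"].rolling(4).mean())  # smoothed trend
)

print(trend.tail())
# The same aggregates could then feed a Power BI or Tableau dashboard.
```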
Posted 1 week ago
0 years
0 Lacs
Navi Mumbai, Maharashtra, India
On-site
Job Responsibilities
Collaborate with data scientists, software engineers, and business stakeholders to understand data requirements and design efficient data models. Develop, implement, and maintain robust and scalable data pipelines, ETL processes, and data integration solutions. Extract, transform, and load data from various sources, ensuring data quality, integrity, and consistency. Optimize data processing and storage systems to handle large volumes of structured and unstructured data efficiently. Perform data cleaning, normalization, and enrichment tasks to prepare datasets for analysis and modelling. Monitor data flows and processes, identify and resolve data-related issues and bottlenecks. Contribute to the continuous improvement of data engineering practices and standards within the organization. Stay up-to-date with industry trends and emerging technologies in data engineering, artificial intelligence, and dynamic pricing.
Candidate Profile
Strong passion for data engineering, artificial intelligence, and problem-solving. Solid understanding of data engineering concepts, data modeling, and data integration techniques. Proficiency in programming languages such as Python and SQL, and in web scraping. Understanding of databases such as NoSQL, relational, and in-memory databases, and of technologies like MongoDB, Redis, and Apache Spark would be an add-on. Knowledge of distributed computing frameworks and big data technologies (e.g., Hadoop, Spark) is a plus. Excellent analytical and problem-solving skills, with a keen eye for detail. Strong communication and collaboration skills, with the ability to work effectively in a team-oriented environment. Self-motivated, quick learner, and adaptable to changing priorities and technologies. (ref:hirist.tech)
Posted 1 week ago
7.0 years
0 Lacs
Greater Kolkata Area
On-site
Line of Service Advisory Industry/Sector Not Applicable Specialism Data, Analytics & AI Management Level Manager Job Description & Summary At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions. Why PWC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. Responsibilities: Job Description: · Analyses current business practices, processes, and procedures as well as identifying future business opportunities for leveraging Microsoft Azure Data & Analytics Services. · Provide technical leadership and thought leadership as a senior member of the Analytics Practice in areas such as data access & ingestion, data processing, data integration, data modeling, database design & implementation, data visualization, and advanced analytics. · Engage and collaborate with customers to understand business requirements/use cases and translate them into detailed technical specifications. · Develop best practices including reusable code, libraries, patterns, and consumable frameworks for cloud-based data warehousing and ETL. · Maintain best practice standards for the development or cloud-based data warehouse solutioning including naming standards. · Designing and implementing highly performant data pipelines from multiple sources using Apache Spark and/or Azure Databricks · Integrating the end-to-end data pipeline to take data from source systems to target data repositories ensuring the quality and consistency of data is always maintained · Working with other members of the project team to support delivery of additional project components (API interfaces) · Evaluating the performance and applicability of multiple tools against customer requirements · Working within an Agile delivery / DevOps methodology to deliver proof of concept and production implementation in iterative sprints. · Integrate Databricks with other technologies (Ingestion tools, Visualization tools). 
· Proven experience working as a data engineer
· Highly proficient in using the Spark framework (Python and/or Scala)
· Extensive knowledge of Data Warehousing concepts, strategies, and methodologies
· Direct experience of building data pipelines using Azure Data Factory and Apache Spark (preferably in Databricks)
· Hands-on experience designing and delivering solutions using Azure, including Azure Storage, Azure SQL Data Warehouse, Azure Data Lake, Azure Cosmos DB, and Azure Stream Analytics
· Experience in designing and hands-on development in cloud-based analytics solutions
· Expert-level understanding of Azure Data Factory, Azure Synapse, Azure SQL, Azure Data Lake, and Azure App Service is required
· Designing and building of data pipelines using API ingestion and streaming ingestion methods
· Knowledge of DevOps processes (including CI/CD) and Infrastructure as Code is essential
· Thorough understanding of Azure Cloud Infrastructure offerings
· Strong experience in common data warehouse modelling principles, including Kimball
· Working knowledge of Python is desirable
· Experience developing security models
· Databricks & Azure Big Data Architecture Certification would be a plus
· Must be team-oriented with strong collaboration, prioritization, and adaptability skills
Mandatory skill sets: Azure Databricks
Preferred skill sets: Azure Databricks
Years of experience required: 7-10 Years
Education qualification: BE, B.Tech, MCA, M.Tech
Degrees/Field of Study required: Bachelor of Technology, Bachelor of Engineering
Required Skills: Databricks Platform
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Coaching and Feedback, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling {+ 32 more}
Posted 1 week ago
2.0 - 4.0 years
0 Lacs
Delhi, India
On-site
Position Overview
We are seeking a highly skilled and experienced Software Engineer with 2-4 years of professional experience in Python and Django, specifically in building REST APIs using frameworks like FastAPI and Django Rest Framework (DRF). The ideal candidate should have hands-on experience with Redis cache, Docker, PostgreSQL, Kafka, Elasticsearch, and ETL.
Key Responsibilities:
Collaborate with cross-functional teams to design, develop, and maintain high-quality software solutions using Python, Django (including DRF), FastAPI, and other modern frameworks. Build robust and scalable REST APIs, ensuring efficient data transfer and seamless integration with frontend and third-party systems. Utilize Redis for caching, session management, and performance optimization. Design and implement scalable ETL pipelines to efficiently process and transform large datasets across systems. Integrate and maintain Kafka for building real-time data streaming and messaging services. Implement Elasticsearch for advanced search capabilities, data indexing, and analytics functionalities. Containerize applications using Docker for easy deployment and scalability. Design and manage PostgreSQL databases, ensuring data integrity and performance tuning. Write clean, efficient, and well-documented code following best practices and coding standards. Participate in system design discussions and contribute to architectural decisions, particularly around data flow and microservices communication. Troubleshoot and debug complex software issues, ensuring smooth operation of production systems. Profile and optimize Python code for improved performance and scalability. Implement and maintain CI/CD pipelines for automated testing and deployment.
Key Requirements:
2-4 years of experience in backend development using Python. Strong proficiency in Django, DRF, and RESTful API development. Experience with FastAPI, asyncio, and modern Python libraries. Solid understanding of PostgreSQL and relational database concepts. Proficiency with Redis for caching and performance optimization. Hands-on experience with Docker and container orchestration. Familiarity with Kafka for real-time messaging and event-driven systems. Experience implementing and maintaining ETL pipelines for structured/unstructured data. Working knowledge of Elasticsearch for search and data indexing. Exposure to AWS services (e.g., EC2, S3, RDS) and cloud-native development. Understanding of Test-Driven Development (TDD) and automation frameworks. Strong grasp of Git and collaborative development practices. Excellent communication skills and a team-oriented mindset. Experience with Agile development.
We Offer:
Opportunity to shape the future of unsecured lending in emerging markets. Competitive compensation package. Professional development and growth opportunities. Collaborative, innovation-focused work environment. Comprehensive health and wellness benefits.
Joining & Work Model:
Immediate joining possible. Work From Office only. Based in Gurugram, Sector 65.
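To illustrate the Redis caching responsibility in this stack, here is a minimal, assumed Python sketch of a read-through cache in front of a database lookup; the key naming, TTL, and stubbed query are illustrative only.

```python
import json
import redis  # pip install redis

# Hypothetical connection settings; TTL and key format are illustrative.
cache = redis.Redis(host="localhost", port=6379, db=0, decode_responses=True)
CACHE_TTL = 300  # seconds

def get_product(product_id: int) -> dict:
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached:
        return json.loads(cached)             # cache hit

    # Cache miss: normally a PostgreSQL query via the ORM; stubbed here.
    product = {"id": product_id, "name": "demo", "price": 99.0}
    cache.set(key, json.dumps(product), ex=CACHE_TTL)
    return product

print(get_product(42))
```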
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
hyderabad, telangana
On-site
The Data Services ETL Developer specializes in data transformations and integration projects using Zeta's tools, 3rd-party software, and coding. Understanding CRM methodologies related to marketing operations is essential. Responsibilities include manipulating client and internal marketing data across various platforms, automating scripts for data transfer, building and managing cloud-based data pipelines using AWS services, managing tasks with competing priorities, and collaborating with technical staff to support a proprietary ETL environment. Collaborating with database/CRM, modelers, analysts, and application programmers is crucial for delivering results to clients.

The ideal candidate should cover the US time zone, be in the office a minimum of three days per week, have experience in database marketing, knowledge of US and international postal addresses (including SAP postal products), proficiency with AWS services (S3, Airflow, RDS, Athena), experience with Oracle and Snowflake SQL, and familiarity with various tools like Snowflake, Airflow, GitLab, Grafana, LDAP, OpenVPN, DCWEB, Postman, and Microsoft Excel. Additionally, knowledge of SQL Server, SFTP, PGP, large-scale customer databases, and the project life cycle, as well as proficiency with editors like Notepad++ and UltraEdit, is required. Strong communication and collaboration skills, and the ability to manage multiple tasks simultaneously, are essential.

Minimum qualifications include a Bachelor's degree or equivalent with 5+ years of experience in database marketing and cloud-based technologies, a strong understanding of data engineering concepts and cloud infrastructure, as well as excellent oral and written communication skills.
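A minimal sketch of the kind of AWS S3 transfer automation this role involves is shown below using boto3; the bucket, prefix, and file names are hypothetical, and credentials are assumed to come from the environment or an IAM role.

```python
import boto3  # pip install boto3

s3 = boto3.client("s3")

def push_extract(local_path: str, bucket: str, prefix: str) -> str:
    """Upload a prepared client extract to S3 and return its URI."""
    key = f"{prefix}/{local_path.split('/')[-1]}"
    s3.upload_file(local_path, bucket, key)
    return f"s3://{bucket}/{key}"

def pull_extract(bucket: str, key: str, local_path: str) -> None:
    """Download a file from S3 for downstream processing."""
    s3.download_file(bucket, key, local_path)

if __name__ == "__main__":
    uri = push_extract("/tmp/client_addresses.csv", "my-marketing-data", "incoming/2024-06-01")
    print("uploaded to", uri)
```

In production such transfers would typically be wrapped in an Airflow task and paired with PGP encryption or SFTP delivery, as the posting suggests.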
Posted 1 week ago