3.0 - 8.0 years
5 - 15 Lacs
Pune
Hybrid
Job Title: Senior Data Engineer / Module Lead
Location: Pune, Maharashtra, India
Experience Level: 5–8 Years

About the Role: We are seeking a highly skilled and experienced Senior Data Engineer to join our growing team in Pune. The ideal candidate will have a strong background in data engineering, with a particular focus on Google Cloud Platform (GCP), Apache Airflow, and end-to-end ETL pipeline development. You will be responsible for designing, developing, and maintaining robust and scalable data pipelines, ensuring data quality, and optimizing data solutions for performance and cost. This role requires a hands-on approach and the ability to work both independently and collaboratively in an agile environment.

Responsibilities:
- Design, develop, and deploy scalable ETL pipelines using GCP data services, including BigQuery, Cloud Composer (Apache Airflow), and Cloud Storage.
- Develop, deploy, and manage complex DAGs in Apache Airflow for orchestrating data workflows (see the sketch below this posting).
- Write and optimize complex SQL and PL/SQL queries, stored procedures, and functions for data manipulation, transformation, and analysis.
- Optimize BigQuery workloads for performance, cost efficiency, and scalability.
- Develop scripts using Python and Shell scripting to support automation, data movement, and transformations.
- Ensure data quality, integrity, and reliability across all data solutions.
- Collaborate with cross-functional teams, including data scientists, analysts, and engineers, to understand data requirements and deliver effective solutions.
- Participate in code reviews and contribute to establishing and maintaining data engineering best practices.
- Troubleshoot and resolve data pipeline issues in a timely manner.
- Use version control systems (e.g., Git) for managing code and collaborating on engineering work.
- Stay updated on the latest trends and technologies in data engineering, cloud computing, and ETL processes.

Required Skills and Qualifications:
- 5–8 years of hands-on experience in Data Engineering roles.
- Mandatory skills: ETL pipeline design and implementation; SQL and PL/SQL (complex queries, procedures, and transformations); Google Cloud Platform (GCP) — BigQuery, Cloud Composer, Cloud Storage; Apache Airflow (designing and deploying complex DAGs); Python for scripting and data processing; Shell scripting for automation and orchestration tasks.
- Experience with Informatica is a strong plus.
- Proven ability to optimize BigQuery for performance and cost.
- Familiarity with Git and version control best practices.
- Excellent problem-solving, analytical, and communication skills.
- Ability to thrive both independently and within a collaborative agile team.
- Bachelor's degree in Computer Science, Engineering, or a related field.

What We Offer:
- A challenging and rewarding role in a dynamic and fast-paced environment.
- Opportunity to work with cutting-edge technologies on the Google Cloud Platform.
- A collaborative and supportive team culture.
- Continuous learning and professional growth opportunities.
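Purely as an illustration of the Cloud Composer work this posting describes, here is a minimal Airflow DAG sketch that loads a CSV drop from Cloud Storage into BigQuery and then runs an aggregation query. The bucket, project, dataset, and table names are placeholder assumptions, not details from the role.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG(
    dag_id="daily_sales_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Land the day's raw files from GCS into a staging table.
    load_raw = GCSToBigQueryOperator(
        task_id="load_raw_from_gcs",
        bucket="example-landing-bucket",                      # placeholder bucket
        source_objects=["sales/{{ ds }}/*.csv"],
        destination_project_dataset_table="example_project.staging.sales_raw",
        source_format="CSV",
        skip_leading_rows=1,
        write_disposition="WRITE_TRUNCATE",
    )

    # Aggregate staging data into a reporting table.
    transform = BigQueryInsertJobOperator(
        task_id="transform_to_mart",
        configuration={
            "query": {
                "query": """
                    SELECT order_id, customer_id, SUM(amount) AS total_amount
                    FROM `example_project.staging.sales_raw`
                    GROUP BY order_id, customer_id
                """,
                "useLegacySql": False,
            }
        },
    )

    load_raw >> transform
```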
Posted 3 weeks ago
3.0 - 5.0 years
5 - 15 Lacs
Pune
Hybrid
Responsibilities:
- Design, implement, and manage ETL pipelines on Google Cloud Platform (BigQuery, Dataflow, Pub/Sub, Composer).
- Write complex SQL queries and optimize them for BigQuery performance (a cost-aware query sketch follows this posting).
- Work with structured/unstructured data from multiple sources (databases, APIs, streaming).
- Build reusable data frameworks for transformation, validation, and quality checks.
- Collaborate with stakeholders to understand business requirements and deliver analytics-ready datasets.
- Implement best practices in data governance, security, and cost optimization.

Requirements:
- Bachelor's degree in Computer Science, IT, or a related field.
- Experience in ETL/Data Engineering.
- Strong Python and SQL skills.
- Hands-on experience with GCP (BigQuery, Dataflow, Composer, Pub/Sub, Dataproc).
- Experience with orchestration tools (Airflow preferred).
- Knowledge of data modeling and data warehouse design.
- Exposure to CI/CD, Git, and DevOps practices is a plus.
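As a hedged illustration of the BigQuery optimization mentioned above, the sketch below runs a parameterized query that prunes to a single date partition and caps bytes billed. The project, dataset, and table names are assumed placeholders.

```python
import datetime

from google.cloud import bigquery

client = bigquery.Client()  # assumes application-default credentials

# Restricting the scan to one partition and selecting only the needed columns
# keeps BigQuery bytes billed (and therefore cost) down.
sql = """
    SELECT customer_id, SUM(amount) AS daily_spend
    FROM `example_project.analytics.transactions`    -- placeholder table
    WHERE event_date = @run_date                     -- partition column
    GROUP BY customer_id
"""

job_config = bigquery.QueryJobConfig(
    query_parameters=[
        bigquery.ScalarQueryParameter("run_date", "DATE", datetime.date(2024, 6, 1)),
    ],
    maximum_bytes_billed=10 * 1024**3,  # fail fast if the query would scan > 10 GB
)

rows = client.query(sql, job_config=job_config).result()
for row in rows:
    print(row.customer_id, row.daily_spend)
```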
Posted 3 weeks ago
2.0 - 5.0 years
14 - 17 Lacs
Mumbai
Work from Office
As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support and guidance to project teams on complex coding, issue resolution, and execution. Your primary responsibilities include:
- Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements.
- Strive for continuous improvement by testing the built solution and working under an agile framework.
- Discover and implement the latest technology trends to maximize and build creative solutions.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Experience with Apache Spark (PySpark): in-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing (a short transformation sketch follows this posting).
- Big Data Technologies: familiarity with Hadoop, HDFS, Kafka, and other big data tools.
- Data Engineering Skills: strong understanding of ETL pipelines, data modelling, and data warehousing concepts.
- Strong proficiency in Python: expertise in Python programming with a focus on data processing and manipulation.
- Data Processing Frameworks: knowledge of data processing libraries such as Pandas and NumPy.
- SQL Proficiency: experience writing optimized SQL queries for large-scale data analysis and transformation.
- Cloud Platforms: experience working with cloud platforms like AWS, Azure, or GCP, including their cloud storage systems.

Preferred technical and professional experience:
- Define, drive, and implement an architecture strategy and standards for end-to-end monitoring.
- Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering.
- Good to have: experience with detection and prevention tools for company products, the platform, and customer-facing systems.
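To make the PySpark expectations concrete, here is a small, self-contained transformation sketch (read, filter, aggregate, partitioned write). Paths, column names, and the bucket are illustrative assumptions rather than anything specified by IBM.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Read raw events (path and schema are placeholders).
orders = spark.read.parquet("s3a://example-bucket/raw/orders/")

# Typical Spark-side transformation: filter, derive columns, aggregate.
daily_revenue = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("order_date", "country")
    .agg(
        F.sum("amount").alias("revenue"),
        F.countDistinct("customer_id").alias("unique_customers"),
    )
)

# Partitioning the output by date keeps downstream scans cheap.
daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3a://example-bucket/curated/daily_revenue/"
)
```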
Posted 3 weeks ago
7.0 - 12.0 years
15 - 30 Lacs
Pune
Remote
Data Engineer

We're seeking a skilled Data Engineer to design, build, and maintain scalable data pipelines and infrastructure that enable data-driven decision making across the organization.

Key Responsibilities:
- Design and implement robust ETL/ELT pipelines for data ingestion, transformation, and loading.
- Build and maintain data orchestration workflows using tools like Apache Airflow, NiFi, or similar platforms.
- Perform comprehensive data cleansing, validation, and quality assurance processes (a minimal validation sketch follows this posting).
- Implement data security best practices, including encryption, access controls, and compliance measures.
- Configure and manage authentication systems and identity management solutions.
- Monitor system performance, logs, and observability metrics across data infrastructure.
- Develop and optimize data models for analytics and reporting needs.
- Troubleshoot data flow issues and maintain distributed system coordination.

Required Skills & Experience:
- 3+ years of experience in data engineering or a related field.
- Strong proficiency in Python, SQL, and at least one other programming language (Java, Scala, or Go).
- Hands-on experience with Apache NiFi for data flow automation and management.
- Expert knowledge of ETL processes and data transformation techniques.
- Experience with data orchestration tools (Apache Airflow, Prefect, or Dagster).
- Solid understanding of data security principles and implementation.
- Experience with authentication and identity management systems (Keycloak, LDAP, OAuth).
- Proficiency with logging and observability tools (Graylog, SigNoz, ELK stack).
- Knowledge of distributed system coordination (Apache ZooKeeper, Consul).
- Proficiency with open source frameworks: Apache Spark, Kafka, the Hadoop ecosystem, dbt.
- Experience with both relational and NoSQL databases.
- Knowledge of containerization (Docker, Kubernetes) and infrastructure as code.
- Familiarity with cloud platforms (AWS, Azure, GCP) alongside open source alternatives.
- Experience with web scraping of complex data structures.

Preferred:
- Experience with real-time streaming data processing.
- Knowledge of data governance and lineage tools.
- Background in data quality frameworks and monitoring systems.
- Experience with service mesh and API gateway implementations.

We value candidates who demonstrate expertise across both enterprise and open source technologies, bringing flexibility and cost-effective solutions to our data infrastructure while maintaining robust security and observability practices.
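As one possible reading of the "data cleansing, validation, and quality assurance" responsibility, the sketch below shows a minimal batch-validation step in pandas; the column names and file path are hypothetical.

```python
import pandas as pd

def validate_batch(df: pd.DataFrame) -> list[str]:
    """Run simple row-level quality checks and return a list of failures."""
    failures = []

    # Completeness: key columns must not be null.
    for col in ("record_id", "event_time"):          # placeholder column names
        nulls = df[col].isna().sum()
        if nulls:
            failures.append(f"{col}: {nulls} null values")

    # Uniqueness: primary key must not repeat.
    dupes = df["record_id"].duplicated().sum()
    if dupes:
        failures.append(f"record_id: {dupes} duplicate keys")

    # Validity: amounts must be non-negative.
    negatives = (df["amount"] < 0).sum()
    if negatives:
        failures.append(f"amount: {negatives} negative values")

    return failures

batch = pd.read_parquet("incoming_batch.parquet")    # placeholder path
problems = validate_batch(batch)
if problems:
    raise ValueError("Quality checks failed: " + "; ".join(problems))
```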
Posted 3 weeks ago
10.0 - 15.0 years
0 Lacs
Karnataka
On-site
About Credit Saison India: Credit Saison India, established in 2019, is one of the fastest-growing non-bank financial company (NBFC) lenders in the country. With verticals in wholesale, direct lending, and tech-enabled partnerships with NBFCs and fintechs, Credit Saison India's tech-enabled model, along with its underwriting capability, facilitates lending at scale, addressing India's significant credit gap, especially within underserved segments of the population. Committed to long-term growth as a lender in India, Credit Saison India serves MSMEs, households, individuals, and more. Registered with the Reserve Bank of India (RBI) and holding an AAA rating from CRISIL and CARE Ratings, the company has a branch network of 45 physical offices, 1.2 million active loans, an AUM exceeding US$1.5B, and an employee base of around 1,000 people. As part of Saison International, a global financial company, Credit Saison India aims to bring people, partners, and technology together to create resilient and innovative financial solutions for positive impact. With operations spanning countries including Singapore, India, Indonesia, Thailand, Vietnam, Mexico, and Brazil, Credit Saison India is dedicated to transforming opportunities and enabling people's dreams.

Roles & Responsibilities:
- Define and drive the long-term AI engineering strategy aligned with the company's business goals, focusing on scalable AI and machine learning solutions, including Generative AI.
- Lead, mentor, and develop a high-performing AI engineering team, fostering innovation, collaboration, and technical excellence.
- Collaborate with product, data science, infrastructure, and business teams to identify AI use cases, design end-to-end solutions, and seamlessly integrate them into products and platforms.
- Oversee the development, deployment, and continuous improvement of AI/ML models and systems to ensure scalability, robustness, and real-time performance.
- Manage the full AI/ML lifecycle, including data strategy, model development, validation, deployment, monitoring, and retraining pipelines.
- Evaluate and incorporate cutting-edge AI technologies, frameworks, and external AI services to enhance capabilities and accelerate delivery.
- Establish and enforce engineering standards, best practices, and observability tools for model governance, performance tracking, and compliance with data privacy and security requirements.
- Collaborate with infrastructure and DevOps teams to design and maintain cloud infrastructure optimized for AI workloads, including GPU acceleration and MLOps automation.
- Manage project timelines, resource allocation, and cross-team coordination to ensure timely delivery of AI initiatives.
- Stay updated on emerging AI trends, research, and tools to continuously evolve the AI engineering function.

Required Skills & Qualifications:
- 10 to 15 years of experience in AI, machine learning, or data engineering roles, with at least 8 years in leadership or managerial positions.
- Bachelor's, Master's, or PhD degree in Computer Science, Statistics, Mathematics, or a related field; a degree from a top-tier college is preferred.
- Proven track record of leading AI engineering teams and delivering production-grade AI/ML systems at scale.
- Expertise in machine learning algorithms, deep learning, NLP, computer vision, and Generative AI technologies.
- Hands-on experience with AI/ML frameworks such as TensorFlow, PyTorch, Keras, Hugging Face Transformers, LangChain, MLflow, and related tools.
- Strong understanding of data engineering concepts, ETL pipelines, and distributed computing frameworks like Spark and Hadoop.
- Experience with cloud platforms (AWS, Azure, GCP) and container orchestration (Kubernetes, Docker).
- Familiarity with software engineering practices, CI/CD, version control (Git), and microservices architecture.
- Strong problem-solving skills with a product-oriented mindset and the ability to translate business requirements into technical solutions.
- Excellent communication skills for effective collaboration across technical and non-technical teams.
- Experience in AI governance, model monitoring, and compliance with data privacy/security standards.

Preferred Qualifications:
- Experience in building or managing ML platforms or MLOps pipelines.
- Knowledge of NoSQL databases (MongoDB, Cassandra) and real-time data processing.
- Previous exposure to AI in domains such as banking, finance, and credit is advantageous.

This role presents an opportunity to lead AI innovation at scale, shaping the future of AI-powered products and services in a rapidly growing, technology-centric environment.
Posted 4 weeks ago
8.0 - 12.0 years
0 Lacs
Pune, Maharashtra
On-site
As a talented Python with AI/ML Developer at BMC, you will work on complex, distributed software, playing a key role in designing, developing, and debugging software products. Your primary responsibilities will include implementing features and ensuring product quality for the IZOT product line, which focuses on BMC's Intelligent Z Optimization & Transformation products. You will have the opportunity to work on modernizing mainframe systems for some of the world's largest companies. By leveraging GenAI frameworks, RAG pipelines, and the OpenAI Protocol, you will contribute to improving the developer experience, mainframe integration, application development speed, code quality, and application security while reducing operational costs and risks.

In this role, you will design and develop robust RESTful APIs and microservices using Twelve-Factor App and microservices design patterns. You will also work with AI/ML models hosted both in-house and on cloud platforms for real-time and batch inference. Additionally, you will fine-tune and adapt pre-trained models using machine learning frameworks such as PyTorch, TensorFlow, and scikit-learn to build domain-specific products and drive actionable insights and business outcomes.

To excel in this role, you should have 8+ years of experience in application development using Python/FastAPI, Java, and RESTful services. A background in computer science, statistics, mathematics, data science, or a related field is preferred, along with experience in full-stack development, frontend development, and machine learning frameworks. Familiarity with design patterns, object-oriented software development, high-performance code characteristics, SOLID principles of development, testing automation, and performance at scale will also be beneficial, as will experience with data engineering, model tracking and serving, container orchestration frameworks, observability tools, and CI/CD environments.

At BMC, we value continuous learning and growth. If you are passionate about adapting technology to provide business-benefiting solutions and have excellent communication skills, we encourage you to apply. We are committed to attracting talent from diverse backgrounds and experiences to foster innovation and creativity within our team. Join BMC and be part of a culture that celebrates individuality and values each employee for their unique contributions. Your journey at BMC will be rewarding not only in terms of salary but also through a supportive work environment that encourages personal and professional development.

If you are excited about the opportunity to work at BMC but are unsure whether you meet all the qualifications, we still encourage you to apply. Your authentic self is what we value most. Take the first step towards an exciting career at BMC by exploring the possibilities and applying through our website.
Posted 1 month ago
6.0 - 10.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
The Applications Development Technology Lead Analyst position is a senior role where you will be responsible for implementing new or revised application systems and programs in coordination with the Technology team. Your main objective will be to lead applications systems analysis and programming activities.

Your responsibilities will include partnering with multiple management teams to ensure proper integration of functions to meet goals, identifying necessary system enhancements for deploying new products and process improvements, and resolving high-impact problems and projects through in-depth evaluation of complex business processes and system processes. You will also provide expertise in applications programming, ensure application design aligns with the overall architecture blueprint, develop standards for coding, testing, debugging, and implementation, and maintain a comprehensive understanding of how different areas of the business integrate to achieve business goals.

As an Applications Development Technology Lead Analyst, you will need to provide in-depth analysis, develop innovative solutions, mentor mid-level developers and analysts, assess risks in business decisions, and maintain ETL pipelines using Oracle DWH, SQL/PLSQL, and Big Data technologies. Additionally, you will design data models, perform data analysis and quality checks, collaborate with BI teams, optimize data workflows, and support cross-functional data initiatives.

To qualify for this role, you should have 6-10 years of relevant experience in Apps Development or systems analysis, extensive experience in system analysis and programming of software applications, project management experience, and expertise in at least one area of Applications Development. You should also be able to adjust priorities quickly, demonstrate leadership skills, and communicate clearly in writing and verbally. A Bachelor's degree or equivalent experience is required for this position, while a Master's degree is preferred.

Please note that this job description provides a high-level overview of the work performed, and other job-related duties may be assigned as needed.
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
Kochi, Kerala
On-site
We are seeking a highly skilled Senior Data Scientist to join our India-based team in a remote capacity. Your primary responsibility will involve building and deploying advanced predictive models to influence key business decisions. To excel in this role, you should possess a strong background in machine learning, data engineering, and cloud environments, with a particular emphasis on AWS. Your main tasks will include collaborating with cross-functional teams to design, develop, and deploy cutting-edge ML models using tools like SageMaker, Bedrock, PyTorch, TensorFlow, Jupyter Notebooks, and AWS Glue. This position offers an excellent opportunity to work on impactful AI/ML solutions within a dynamic and innovative team environment. Your key responsibilities will encompass predictive modeling and machine learning, data engineering and cloud computing, Python programming, as well as collaboration and communication with various teams. Additionally, having experience in the utility industry, generative AI technologies, and geospatial data and GIS tools would be advantageous. To qualify for this position, you should hold a Master's degree in Computer Science, Statistics, Mathematics, or a related field, along with at least 5 years of relevant experience in data science, predictive modeling, and machine learning. Previous experience working in cloud-based data science environments, preferably AWS, would be beneficial.,
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
Ahmedabad, Gujarat
On-site
You should have at least 5 years of experience working as a Data Engineer. Your expertise should include a strong background in Azure Cloud services and proficiency in tools such as Azure Databricks, PySpark, and Delta Lake. It is essential to have solid experience in Python and FastAPI for API development, as well as familiarity with Azure Functions for serverless API deployments. Experience in managing ETL pipelines using Apache Airflow is also required. Hands-on experience with databases like PostgreSQL and MongoDB is necessary. Strong SQL skills and the ability to work with large datasets are key for this role.
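For context on the "Python and FastAPI for API development" requirement, a minimal FastAPI endpoint might look like the sketch below; the service name, route, and in-memory catalog are illustrative stand-ins for a real PostgreSQL or Delta Lake lookup.

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="dataset-metrics-api")  # illustrative service name

class MetricResponse(BaseModel):
    dataset: str
    row_count: int
    last_loaded: str

# In a real service this would query PostgreSQL or Delta Lake; here it is an
# in-memory stand-in so the sketch stays self-contained.
_FAKE_CATALOG = {
    "orders": MetricResponse(dataset="orders", row_count=1_250_000, last_loaded="2024-06-01"),
}

@app.get("/datasets/{name}/metrics", response_model=MetricResponse)
def get_metrics(name: str) -> MetricResponse:
    metric = _FAKE_CATALOG.get(name)
    if metric is None:
        raise HTTPException(status_code=404, detail="dataset not registered")
    return metric
```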
Posted 1 month ago
7.0 - 11.0 years
0 Lacs
Delhi
On-site
We are seeking a skilled and enthusiastic IoT Software Engineer to join our rapidly expanding team. Your role will involve building scalable backend systems, managing substantial datasets in real time, and collaborating across a contemporary data and cloud stack. As an IoT Software Engineer, you will contribute to the development and enhancement of various aspects of our technological infrastructure.

The ideal candidate should possess the following qualifications and skills:
- Proficiency in programming languages such as JavaScript/TypeScript (Express.js, Next.js) and Go.
- Previous experience with NoSQL databases like MongoDB and columnar databases such as ClickHouse.
- Thorough knowledge of SQL, including expertise in query optimization and analysis of extensive datasets, and the ability to construct and sustain ETL pipelines.
- Familiarity with geospatial queries and integration of Google Maps APIs.
- Hands-on experience with Kafka for real-time data streaming and Redis for caching/queuing (a minimal consumer sketch follows this posting).
- Strong grasp of system design principles and distributed systems.
- Previous exposure to data visualization tools like Superset or Metabase.
- Familiarity with logging and monitoring tools like Datadog, Grafana, or Prometheus.

Additionally, the following is required for this position:
- Prior experience handling IoT data originating from embedded or telemetry systems.

Desirable skills that would be advantageous for this role:
- Knowledge of Docker and experience deploying containerized pipelines.
- A background in edge computing or low-latency systems.

This is a full-time position located in Delhi, India. The preferred candidate should have a minimum of 7 years of relevant experience and be fluent in English. Candidates from any location in India are welcome to apply but must be willing to relocate to Delhi. If you meet the specified requirements and are excited about this opportunity, please submit your resume to deepali@xlit.co.
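To illustrate the Kafka-plus-Redis pattern this posting mentions, here is a minimal consumer sketch that keeps the latest reading per device in a Redis hash; the topic name, field names, and connection details are assumptions.

```python
import json

from kafka import KafkaConsumer   # kafka-python
import redis

consumer = KafkaConsumer(
    "device-telemetry",                       # placeholder topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="latest",
    group_id="telemetry-cache-writer",
)

cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Keep only the most recent reading per device in Redis so dashboards can
# serve "current state" without touching the analytical store.
for message in consumer:
    event = message.value
    device_id = event["device_id"]            # placeholder payload fields
    cache.hset(
        f"device:{device_id}:latest",
        mapping={
            "ts": event["timestamp"],
            "temperature": event.get("temperature", ""),
            "battery": event.get("battery", ""),
        },
    )
```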
Posted 1 month ago
8.0 - 12.0 years
0 Lacs
Karnataka
On-site
As a Data Engineer or Senior Data Engineer in Ericsson's Automation Chapter, you will play a crucial role in developing and optimizing SAP HANA objects like Calculation Views, SQL Procedures, and Custom Functions. Your primary focus will be on SAP-centric development and integration to ensure robust, scalable, and optimized data flows for analytics consumption within the enterprise. You'll work closely with a high-performing team to build end-to-end data solutions aligned with the SAP ecosystem, showcasing your deep expertise in HANA modeling and ETL workflows. Your adaptability and problem-solving skills will be put to the test as you switch between projects of varying scale and complexity. Your responsibilities will include designing, developing, and optimizing SAP HANA objects, creating reusable ETL pipelines using SAP BODS for system integration, facilitating seamless data flow between SAP ECC and external platforms, and collaborating with stakeholders to translate requirements into technical solutions. Additionally, you'll be expected to tune and troubleshoot HANA and BODS jobs for performance and scalability, ensure compliance with data governance standards, and provide support for ongoing enhancements and critical data deliveries. To excel in this role, you should have at least 8 years of experience in SAP data engineering, with a strong background in SAP HANA (including native development and SQL scripting) and proficiency in SAP BODS. Your experience should also cover working with SAP ECC data structures, IDOCs, and remote function calls, as well as knowledge of data warehouse concepts and performance optimization techniques. Strong analytical and debugging skills, familiarity with version control tools and SDLC processes, and excellent communication skills are essential for success in this position. A bachelor's degree in computer science, Information Systems, Electronics & Communication, or a related field is required. Joining Ericsson offers you a unique opportunity to leverage your skills and creativity to tackle complex challenges and pioneer innovative solutions. You'll be part of a team of diverse innovators dedicated to pushing the boundaries of what's possible and shaping the future. While you'll face challenges, you'll have the support of a collaborative team working towards common goals. Once you apply for this role, you can expect to be considered for further steps in the recruitment process to potentially join Ericsson's dynamic team in Bangalore, Karnataka, India.,
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
Maharashtra
On-site
You will be responsible for designing, developing, and maintaining robust ETL pipelines to ingest data from various banking systems into analytics platforms. Your primary focus will be on building and optimizing data models for reporting, analytics, and machine learning use cases. In this role, you will collaborate closely with data analysts, data scientists, and business stakeholders to understand data requirements and ensure that data analysis answers complex business questions across the risk, credit, and liabilities domains, delivering actionable insights for decision-making. Ensuring data quality, integrity, and security across all data platforms will be a key part of your responsibilities. You will also implement data governance and compliance standards to maintain data quality and security. Additionally, you will monitor and troubleshoot data workflows and infrastructure performance to ensure the smooth operation of data pipelines and analytics platforms.
Posted 1 month ago
8.0 - 13.0 years
0 Lacs
Karnataka
On-site
You will be joining MRI Software as a Data Engineering Leader responsible for designing, building, and managing data integration solutions. Your expertise in Azure Data Factory and Azure Synapse analytics, as well as data warehousing, will be crucial for leading technical implementations, mentoring junior developers, collaborating with global teams, and engaging with customers and stakeholders to ensure seamless and scalable data integration. Your key responsibilities will include leading and mentoring a team of data engineers, designing and implementing Azure Synapse Analytics solutions, optimizing ETL pipelines and Synapse Spark workloads, and ensuring data quality, security, and governance best practices. You will also collaborate with business stakeholders to develop data-driven solutions. To excel in this role, you should have 8-13 years of experience in Data Engineering, BI, or Cloud Analytics, with expertise in Azure Synapse, Data Factory, SQL, and ETL processes. Strong leadership, problem-solving, and stakeholder management skills are essential, and knowledge of Power BI, Python, or Spark would be advantageous. Deep knowledge of Data Modelling techniques, ETL Pipeline development, Azure Resources Cost Management, and data governance practices will also be key to your success. Additionally, you should be proficient in writing complex SQL queries, implementing best security practices for Azure components, and have experience in Master Data and metadata management. Your ability to manage a complex business environment, lead and support team members, and advocate for Agile practices will be highly valued. Experience in change management, data warehouse architecture, dimensional modelling, and data integrity validation will further strengthen your profile. Collaboration with Product Owners and data engineers to translate business requirements into effective dimensional models, strong SQL skills, and the ability to extract, clean, and transform raw data for dimensional modelling are essential aspects of this role. Desired skills include Python, real-time data streaming frameworks, and AI and Machine Learning data pipelines. A degree in Computer Science, Software Engineering, or related field is required for this position. In return, you can look forward to learning leading technical and industry standards, hybrid working arrangements, an annual performance-related bonus, and other benefits that foster an engaging, fun, and inclusive culture at MRI Software. MRI Software is a global Proptech leader dedicated to empowering real estate companies with innovative applications and hosted solutions. With a flexible technology platform and a connected ecosystem, MRI Software caters to the unique needs of real estate businesses worldwide. Operating across multiple regions, MRI Software boasts nearly five decades of expertise, a diverse team of over 4000 professionals, and a commitment to Equal Employment Opportunity.,
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a member of the Providence Cybersecurity (CYBR) team, you will play a vital role in safeguarding all information related to caregivers, affiliates, and confidential business data. Your responsibilities will include collaborating with Product Management to assess use cases and requirements, conducting data analysis to identify necessary information, and developing data models to validate requirements. You will also translate logical data models into physical ones, support engineering teams in implementing data solutions, and ensure compliance with data governance and security frameworks. To excel in this role, you should hold a Bachelor's degree in a related field or possess equivalent certifications in Data Engineering or cybersecurity. You must have experience working with complex data environments, expertise in data integration patterns and tools, and a solid understanding of cloud computing and distributed computing principles. Proficiency in tools such as Databricks, Azure Data Factory, and Medallion Architecture is essential, along with hands-on experience in designing and implementing data solutions using Azure cloud services. Furthermore, your role will involve leading a team of data engineers in developing cloud-based data solutions using Azure Databricks and Azure native services. Strong problem-solving skills, analytical capabilities, and proficiency in SQL, Python, and Spark are crucial for success in this position. Additionally, relevant certifications such as Microsoft Certified: Azure Solutions Architect Expert or Microsoft Certified: Azure Data Engineer Associate are highly desirable. In addition to technical skills, effective communication and leadership abilities are essential for this role, as you will be required to communicate technical concepts and strategies to stakeholders at all levels. You should demonstrate a proven track record of leading cross-functional teams, driving consensus, and achieving project goals in a dynamic and fast-paced environment.,
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
Ahmedabad, Gujarat
On-site
As a Senior Data Engineer (Azure) at Simform based in Ahmedabad, Gujarat, you will play a crucial role in managing large-scale data, designing and developing end-to-end data pipelines on Azure, and optimizing data workflows. With your expertise in Azure data services, ETL pipelines, and database management, you will work on ingesting, processing, and analyzing structured and unstructured data efficiently. Your responsibilities will include collaborating with cross-functional teams to integrate data solutions, ensuring data quality and compliance with security best practices, and participating in client interactions to gather requirements. You will be involved in designing efficient data models, implementing scalable data ingestion pipelines, and building ETL/ELT pipelines for data integration and migration. To excel in this role, you should possess a Bachelor's/Master's degree in Computer Science or a related field, along with 5+ years of hands-on experience in data engineering, ETL development, and data pipeline implementation. Proficiency in Python and SQL, strong expertise in Azure data services, and experience with big data processing frameworks are essential. Additionally, you should have a good understanding of relational and NoSQL databases, strong analytical and problem-solving skills, and the ability to work independently while meeting tight deadlines. Joining our team at Simform offers a range of benefits, including a flat-hierarchical and growth-focused culture, flexible work timing, work-from-home options, free health insurance, and office facilities with a game zone and free snacks. You will also have access to sponsorship for certifications/events and various growth opportunities. If you are a proactive problem solver with a strong technical background, a team player with excellent communication skills, and a passion for working on cutting-edge data engineering projects, we would love to hear from you.,
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
We are looking for a highly motivated and skilled Data Engineer to join our dynamic team. In this high-impact role, you will play a key part in constructing and maintaining our data infrastructure to facilitate data-driven decision-making throughout the organization. We welcome remote candidates who are team-oriented, adaptable, and eager to acquire new knowledge. Your responsibilities will include designing, developing, and managing scalable ETL pipelines utilizing Python and DigDag. You will extensively collaborate with Google Cloud services, particularly BigQuery, for data warehousing and analytics. Crafting and optimizing intricate SQL queries for data extraction and transformation will be a crucial part of your role. You will work closely with data scientists, analysts, and fellow engineers to comprehend data requirements and provide robust data solutions. Ensuring data quality, integrity, and security across all data pipelines and systems will be a top priority. You will troubleshoot and resolve data-related issues with minimal disruption to data availability and continuously explore and implement new technologies and best practices in data engineering. Furthermore, contributing to the overall data strategy and architecture will be a part of your role. To qualify for this position, you should have strong experience in data engineering with a proven history of constructing and deploying data pipelines. Proficiency in Python for data manipulation and automation is essential, along with experience in workflow management tools like DigDag. In-depth knowledge of ETL processes, data warehousing concepts, and extensive experience with Google Cloud Platform (GCP), particularly BigQuery, are required. Expertise in SQL for data querying and manipulation is a must. The ability to work both independently and collaboratively within a team is vital. Strong problem-solving and analytical skills, as well as excellent communication and interpersonal abilities, are desired qualities. Preferred qualifications include familiarity with other GCP data services such as Cloud Storage, Cloud Pub/Sub, Dataflow, experience with data visualization tools, and an understanding of data governance and data security principles. This is a full-time position with a day shift and morning shift schedule. Work location is in person.,
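One way to make such a scheduled BigQuery load idempotent, so that a Digdag or Airflow retry does not duplicate rows, is a MERGE keyed on the run date, sketched below; the table names and schema are hypothetical.

```python
import datetime

from google.cloud import bigquery

def upsert_daily_partition(run_date: datetime.date) -> None:
    """Idempotent daily load: re-running the task for the same date does not
    duplicate rows, which matters when a workflow scheduler retries."""
    client = bigquery.Client()
    merge_sql = """
        MERGE `example_project.mart.daily_sales` AS target        -- placeholder tables
        USING (
            SELECT order_date, store_id, SUM(amount) AS revenue
            FROM `example_project.staging.sales_raw`
            WHERE order_date = @run_date
            GROUP BY order_date, store_id
        ) AS source
        ON target.order_date = source.order_date AND target.store_id = source.store_id
        WHEN MATCHED THEN UPDATE SET revenue = source.revenue
        WHEN NOT MATCHED THEN INSERT (order_date, store_id, revenue)
            VALUES (source.order_date, source.store_id, source.revenue)
    """
    job_config = bigquery.QueryJobConfig(
        query_parameters=[bigquery.ScalarQueryParameter("run_date", "DATE", run_date)]
    )
    client.query(merge_sql, job_config=job_config).result()

if __name__ == "__main__":
    upsert_daily_partition(datetime.date(2024, 6, 1))
```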
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
Talent Worx is excited to offer an exceptional opportunity for individuals interested in the roles of Snowflake and Spark Developers! As part of our team, you will be at the forefront of transforming the data analytics landscape while collaborating with one of the Big 4 firms in India. Your role will be instrumental in shaping our clients' success stories through the use of cutting-edge technologies and frameworks. Our vibrant culture promotes inclusivity, teamwork, and outstanding performance, providing you with unparalleled prospects for career advancement and goal achievement.

Within our Analytics & Cognitive (A&C) practice, you will be part of a dedicated team focused on unlocking the hidden value within extensive datasets. By leveraging advanced techniques such as big data, cloud computing, cognitive capabilities, and machine learning, our globally connected network ensures that clients receive actionable insights for informed decision-making. As a crucial member of our organization, you will directly contribute to enhancing our clients' competitive edge and performance through the delivery of innovative and sustainable solutions. Working closely with both internal teams and clients, you will be responsible for achieving exceptional results across a variety of projects.

Key Requirements:
- Minimum 5 years of relevant experience in Spark and Snowflake, including practical involvement in at least one project implementation.
- Strong proficiency in developing ETL pipelines and data processing workflows using Spark.
- Expertise in Snowflake architecture, encompassing data loading and unloading processes, table structures, and virtual warehouses.
- Ability to write complex SQL queries in Snowflake for data transformation and analysis.
- Familiarity with data integration tools and techniques to ensure seamless data ingestion.
- Experience in building and monitoring data pipelines in a cloud environment.
- Exposure to Agile methodology and tools like Jira and Confluence.
- Strong analytical and problem-solving skills with keen attention to detail.
- Excellent communication and interpersonal abilities to foster collaboration with clients and team members.
- Willingness to travel based on project requirements.

Preferred Qualifications:
- Snowflake certification or an equivalent qualification is a plus.
- Previous experience working with both Snowflake and Spark in a corporate environment.
- Formal education in Computer Science, Information Technology, or a related field.
- Demonstrated track record of successful collaboration within cross-functional teams.

Benefits:
- Opportunity to work with one of the Big 4 firms in India.
- Supportive and healthy work environment.
- Emphasis on achieving work-life balance.
Posted 1 month ago
5.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About ProcDNA: ProcDNA is a global consulting firm. We fuse design thinking with cutting-edge tech to create game-changing Commercial Analytics and Technology solutions for our clients. We're a passionate team of 275+ across 6 offices, all growing and learning together since our launch during the pandemic. Here, you won't be stuck in a cubicle - you'll be out in the open water, shaping the future with brilliant minds. At ProcDNA, innovation isn't just encouraged, it's ingrained in our DNA.

What We Are Looking For: You'll build and maintain systems for efficient data collection, storage, and processing to ensure data pipelines are robust and scalable for seamless integration and analysis. We are seeking an individual who not only possesses the requisite expertise but also thrives in the dynamic landscape of a fast-paced global firm.

What You'll Do:
- Design and implement complex, scalable enterprise data processing and BI reporting solutions.
- Design, build, and optimize ETL pipelines or underlying code to enhance data warehouse systems.
- Work towards optimizing the overall costs incurred due to system infrastructure, operations, change management, etc.
- Deliver end-to-end data solutions across multiple infrastructures and applications.
- Coach, mentor, and manage a team of junior associates, helping them plan tasks effectively and more.
- Demonstrate overall client stakeholder and project management skills (drive client meetings, create realistic project timelines, plan and manage individual and team tasks).
- Assist senior leadership in business development proposals focused on technology by providing SME support.
- Build strong partnerships with other teams to create valuable solutions.
- Stay up to date with the latest industry trends.

Must Have:
- 5-8 years of experience in designing/building data warehouses and BI reporting with a B.Tech/B.E background.
- Prior experience managing client stakeholders and junior team members. A background in managing Life Science clients is mandatory.
- Proficiency in big data processing and cloud technologies like AWS, Azure, Databricks, PySpark, Hadoop, etc.; proficiency in Informatica is a plus.
- Extensive hands-on experience working with cloud data warehouses like Redshift, Azure, Snowflake, etc.; proficiency in SQL, data modelling, and designing ETL pipelines is a must.
- Intermediate to expert-level proficiency in Python.
- Proficiency in Tableau, Power BI, or Qlik is a must.
- Should have worked on large datasets and complex data modelling projects.
- Prior experience in business development activities is mandatory.
- Domain knowledge of the pharma/healthcare landscape is mandatory.

Skills: Python, ETL, ETL pipelines, team management, data modelling, SQL, cloud technologies, cloud data warehouses, Hadoop, data processing, data warehousing, AWS, PySpark, Power BI, Qlik, BI reporting, pharma, designing/building data warehouses, Azure, big data processing, Tableau, Databricks, Redshift, healthcare, data visualization, Informatica, Snowflake
Posted 1 month ago
6.0 - 9.0 years
3 - 12 Lacs
Hyderabad, Telangana, India
On-site
Design, develop, and maintain data solutions for data generation, collection, and processing. Be a key team member assisting in the design and development of the data pipeline. Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems. Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions. Take ownership of data pipeline projects from inception to deployment, managing scope, timelines, and risks. Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency. Implement data security and privacy measures to protect sensitive data. Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions. Collaborate and communicate effectively with product teams. Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions. Identify and resolve complex data-related challenges. Adhere to best practices for coding, testing, and designing reusable code/components. Explore new tools and technologies that will help to improve ETL platform performance. Participate in sprint planning meetings and provide estimations on technical implementation.

Basic Qualifications:
- Minimum experience of 6-9 years.

Must-Have Skills:
- Hands-on experience with big data technologies and platforms, such as Databricks and Apache Spark (PySpark, Spark SQL), workflow orchestration, and performance tuning on big data processing (a broadcast-join tuning sketch follows this posting).
- Proficiency in data analysis tools (e.g., SQL).
- Proficient in SQL for extracting, transforming, and analyzing complex datasets from relational data stores.
- Experience with ETL tools such as Apache Spark, and various Python packages related to data processing and machine learning model development.
- Strong understanding of data modeling, data warehousing, and data integration concepts.
- Proven ability to optimize query performance on big data platforms.

Preferred Qualifications:
- Experience with software engineering best practices, including but not limited to version control, infrastructure-as-code, CI/CD, and automated testing.
- Knowledge of Python/R, Databricks, and cloud data platforms.
- Strong understanding of data governance frameworks, tools, and best practices.
- Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA).

Professional Certifications:
- AWS Certified Data Engineer preferred.
- Databricks certification preferred.

Soft Skills:
- Excellent critical-thinking and problem-solving skills.
- Strong communication and collaboration skills.
- Demonstrated awareness of how to function in a team setting.
- Demonstrated presentation skills.
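As an example of the "performance tuning on big data processing" skill, the sketch below shows a common Spark optimization: broadcasting a small dimension table to avoid shuffling the large fact table. Paths and column names are placeholders.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("join_tuning").getOrCreate()

transactions = spark.read.parquet("/mnt/lake/transactions/")   # large fact table (placeholder)
products = spark.read.parquet("/mnt/lake/products/")           # small dimension (placeholder)

# Broadcasting the small dimension avoids a full shuffle of the large fact
# table, which is one of the most common Spark performance wins.
enriched = transactions.join(F.broadcast(products), on="product_id", how="left")

summary = (
    enriched.groupBy("category")
    .agg(F.sum("amount").alias("total_amount"))
)
summary.explain()   # inspect the physical plan to confirm a BroadcastHashJoin is used
summary.write.mode("overwrite").parquet("/mnt/lake/marts/category_totals/")
```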
Posted 1 month ago
1.0 - 9.0 years
3 - 12 Lacs
Hyderabad, Telangana, India
On-site
ABOUT THE ROLE
Role Description: As part of the cybersecurity organization, in this vital role you will be responsible for designing, building, and maintaining data infrastructure to support data-driven decision-making. This role involves working with large datasets, developing reports, executing data governance initiatives, and ensuring data is accessible, reliable, and efficiently managed. The role sits at the intersection of data infrastructure and business insight delivery, requiring the Data Engineer to design and build robust data pipelines while also translating data into meaningful visualizations for stakeholders across the organization. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture, ETL processes, and cybersecurity data frameworks.

Roles & Responsibilities:
- Design, develop, and maintain data solutions for data generation, collection, and processing.
- Be a key team member that assists in the design and development of the data pipeline.
- Build data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems.
- Develop and maintain interactive dashboards and reports using tools like Tableau, ensuring data accuracy and usability.
- Schedule and manage workflows to ensure pipelines run on schedule and are monitored for failures.
- Collaborate with multi-functional teams to understand data requirements and design solutions that meet business needs.
- Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency.
- Implement data security and privacy measures to protect sensitive data.
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions.
- Collaborate and communicate effectively with product teams.
- Collaborate with data scientists to develop pipelines that meet dynamic business needs.
- Share and discuss findings with team members practicing the SAFe Agile delivery model.

What we expect of you: We are all different, yet we all use our unique contributions to serve patients. The Data Engineer professional we seek is one with these qualifications.

Basic Qualifications:
- Master's degree and 1 to 3 years of Computer Science, IT, or related field experience; OR
- Bachelor's degree and 3 to 5 years of Computer Science, IT, or related field experience; OR
- Diploma and 7 to 9 years of Computer Science, IT, or related field experience.

Preferred Qualifications:
- Hands-on experience with data practices, technologies, and platforms, such as Databricks, Python, GitLab, LucidChart, etc.
- Hands-on experience with data visualization and dashboarding tools (Tableau, Power BI, or similar) is a plus.
- Proficiency in data analysis tools (e.g., SQL) and experience with data sourcing tools.
- Excellent problem-solving skills and the ability to work with large, complex datasets.
- Understanding of data governance frameworks, tools, and best practices.
- Knowledge of and experience with data standards (FAIR) and protection regulations and compliance requirements (e.g., GDPR, CCPA).

Good-to-Have Skills:
- Experience with ETL tools and various Python packages related to data processing and machine learning model development.
- Strong understanding of data modeling, data warehousing, and data integration concepts.
- Knowledge of Python/R, Databricks, and cloud data platforms.
- Experience working in a product-team environment.
- Experience working in an Agile environment.

Professional Certifications:
- AWS Certified Data Engineer preferred.
- Databricks certification preferred.

Soft Skills:
- Initiative to explore alternate technologies and approaches to solving problems.
- Skilled in breaking down problems, documenting problem statements, and estimating efforts.
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to handle multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
Posted 1 month ago
10.0 - 14.0 years
0 Lacs
Thiruvananthapuram, Kerala
On-site
As a Python Solution Architect with over 10 years of experience, you will play a crucial role in designing and implementing scalable, high-performance software solutions that align with business requirements. Your expertise in Python frameworks (e.g., Django, Flask, FastAPI) will be instrumental in architecting efficient applications and microservices architectures. Your responsibilities will include collaborating with cross-functional teams to define architecture, best practices, and oversee the development process. You will be tasked with ensuring that Python solutions meet business goals, align with enterprise architecture, and adhere to security best practices (e.g., OWASP, cryptography). Additionally, your role will involve designing and managing RESTful APIs, optimizing database interactions, and integrating Python solutions seamlessly with third-party services and external systems. Your proficiency in cloud environments (AWS, GCP, Azure) will be essential for architecting solutions and implementing CI/CD pipelines for Python projects. You will provide guidance to Python developers on architectural decisions, design patterns, and code quality, while also mentoring teams on best practices for writing clean, maintainable, and efficient code. Preferred skills for this role include deep knowledge of Python frameworks, proficiency in asynchronous programming, experience with microservices-based architectures, and familiarity with containerization technologies like Docker and orchestration tools like Kubernetes. Your understanding of relational and NoSQL databases, RESTful APIs, cloud services, CI/CD pipelines, and Infrastructure-as-Code tools will be crucial for success in this position. In addition, your experience with security tools and practices, encryption, authentication, data protection standards, and working in Agile environments will be valuable assets. Your ability to communicate complex technical concepts to non-technical stakeholders and ensure solutions address both functional and non-functional requirements will be key to delivering successful projects.,
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Data Platform Specialist at Siemens Energy, your primary responsibility will involve setting up and monitoring data acquisition systems to ensure a smooth and reliable flow of data on a daily basis. Your communication skills will be put to use as you interact with customers regarding data export and transmission, fostering strong and lasting relationships. Additionally, you will play a pivotal role in managing and improving the entire data platform, including the development of dashboards and digital services such as Condition Monitoring and Hotline Support. Providing technical support to your colleagues and contributing to the company-wide enhancement of Siemens Energy's data analysis platform will also be part of your duties. In order to make a significant impact in this role, you will need to focus on maintaining and enhancing the data platform to ensure data availability and optimize existing data transformations. Your expertise in creating and refining dashboards and digital services, as well as your ability to offer technical guidance and support to your peers, will be crucial. Furthermore, your involvement in the development and migration of Siemens Energy's data analysis platform towards a centralized and scalable solution will be instrumental in driving efficiency. Implementing and fine-tuning rule-based systems for automated condition monitoring to enhance data analysis efficiency, and collaborating with internal teams and external partners to streamline data management processes will also be essential aspects of your work. To succeed in this position, you should hold a Masters degree in Data Engineering, Cloud Computing, or a related field, coupled with a minimum of 5 years of relevant professional experience. Your proficiency in managing and optimizing ETL pipelines of varying complexities, along with your expertise in AWS services such as Athena, Transfer Family (SFTP), CloudWatch, Lambda, and S3 Buckets, will be beneficial. Additionally, your skills in data analysis tools like Alteryx, programming languages such as Python and SQL, and experience with rule-based systems for automated condition monitoring will be valuable assets. Being a proactive and innovative problem-solver with excellent communication skills, both written and spoken in English, and the ability to collaborate effectively with partners and manage task packages will contribute to your success in this role. Joining a team dedicated to advancing Siemens Energy's data management and analysis capabilities, you will be part of an environment that values collaboration and knowledge-sharing. The team's focus on developing innovative digital services and platforms to support sustainable energy solutions underscores the importance of your role in driving digital transformation within the company. With a commitment to diversity and inclusion, Siemens Energy offers a rewarding work environment where individual differences are celebrated and embraced as sources of creative energy. If you are looking to make a difference in the field of energy technology and contribute to the global energy transition, Siemens Energy offers a platform for you to showcase your skills and drive innovation. Embrace the opportunity to be part of a diverse and inclusive team that is dedicated to creating sustainable, reliable, and affordable energy solutions for a better future.,
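For a rough sense of the AWS Athena work described above, the sketch below submits a query with boto3 and polls for completion; the database, table, bucket, and region are illustrative assumptions, not Siemens Energy specifics.

```python
import time

import boto3

athena = boto3.client("athena", region_name="eu-central-1")  # region is a placeholder

# Athena writes results to S3; the bucket/prefix here are placeholders.
execution = athena.start_query_execution(
    QueryString=(
        "SELECT asset_id, AVG(vibration) AS avg_vibration "
        "FROM monitoring.sensor_readings "
        "WHERE day = '2024-06-01' GROUP BY asset_id"
    ),
    QueryExecutionContext={"Database": "monitoring"},
    ResultConfiguration={"OutputLocation": "s3://example-results-bucket/athena/"},
)
query_id = execution["QueryExecutionId"]

# Poll until the query finishes (simplified; production code would back off and time out).
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    results = athena.get_query_results(QueryExecutionId=query_id)
    for row in results["ResultSet"]["Rows"][1:]:   # first row is the header
        print([col.get("VarCharValue") for col in row["Data"]])
```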
Posted 1 month ago
5.0 - 10.0 years
0 Lacs
Haryana
On-site
The Tech Consultant - Data & Cloud role involves supporting a leading international client with expertise in data engineering, cloud platforms, and big data technologies. As a skilled professional, you will contribute to large-scale data initiatives, implement cloud-based solutions, and collaborate with stakeholders to drive data-driven innovation. You will design scalable data architectures, optimize ETL processes, and leverage cloud technologies to deliver impactful business solutions. Key Responsibilities Data Engineering & ETL: Develop and optimize data pipelines using Apache Spark, Airflow, Sqoop, and Databricks for seamless data transformation and integration. Cloud & Infrastructure Management: Design and implement cloud-native solutions using AWS, GCP, or Azure, ensuring scalability, security, and performance. Big Data & Analytics: Work with Hadoop, Snowflake, Data Lake, and Hive to enable advanced analytics and business intelligence capabilities. Technical Excellence: Utilize Python, SQL, and cloud data warehousing solutions to drive efficiency in data processing and analytics. Agile & DevOps Best Practices: Implement CI/CD pipelines, DevOps methodologies, and Agile workflows for seamless development and deployment. Stakeholder Collaboration: Work closely with business and technology teams to translate complex data challenges into business-driven solutions. Required Qualifications & Skills 5 - 10 years of experience in data engineering, analytics, and cloud-based solutions. Strong knowledge of Big Data technologies (Hadoop, Spark, Snowflake, Hive, Databricks, Airflow, AWS). Experience with ETL pipelines, data lakes, and large-scale data processing. Proficiency in Python, SQL, and cloud data warehousing solutions. Hands-on experience in cloud platforms (AWS, Azure, GCP) and infrastructure as code (Terraform, CloudFormation). Familiarity with containerization (Docker, Kubernetes) and BI tools (Tableau, Power BI). Understanding of Agile, Scrum, and DevOps best practices. Strong communication, problem-solving, and collaboration skills. Why Join Us Work on impactful global data projects for a leading international client. Lucrative Retention Bonus: Up to 20% bonus at the end of the first year, based on performance. Career Growth & Training: Access to world-class learning in advanced cloud, AI, and analytics technologies. Collaborative & High-Performance Culture: Work in a dynamic environment that fosters innovation, leadership, and technical excellence. About Us We are a trusted technology partner specializing in enterprise data solutions, cloud transformation, and analytics-driven decision-making. Our expertise in big data, AI, and cloud infrastructure enables us to deliver scalable, high-value solutions to global enterprises.,
Posted 1 month ago
1.0 - 5.0 years
0 Lacs
karnataka
On-site
As a Python Machine Learning Engineer at our organization, you will be responsible for designing, developing, and deploying scalable recommender systems using various machine learning algorithms. Your primary focus will be on leveraging machine learning techniques to personalize user experiences, enhance engagement, and drive business outcomes. You will collaborate closely with cross-functional teams to understand business requirements and translate them into actionable machine learning solutions. Conducting thorough exploratory data analysis to identify relevant features and patterns in large-scale datasets will be a crucial part of your role. Additionally, you will implement and optimize machine learning models for performance, scalability, and efficiency while continuously monitoring and evaluating model performance using relevant metrics (a minimal recommender sketch follows this listing).
To qualify for this role, you should have a Bachelor's or Master's degree in Computer Science, Engineering, Statistics, or a related field, along with 2 to 3 years of hands-on experience in developing machine learning models, specifically recommender systems. Proficiency in Python and popular machine learning libraries/frameworks such as TensorFlow, PyTorch, or scikit-learn is required, as is a solid understanding of fundamental machine learning concepts such as supervised and unsupervised learning, feature engineering, and model evaluation. Experience working with large-scale datasets, strong analytical and problem-solving skills, and excellent communication abilities are also necessary. You should be able to work with SQL and NoSQL databases to store and retrieve training data, and to write efficient ETL pipelines that feed real-time and batch ML models using Apache Airflow.
Preferred qualifications include experience with cloud computing platforms such as AWS, familiarity with recommendation system evaluation techniques, knowledge of natural language processing techniques, and contributions to open-source machine learning projects or participation in relevant competitions such as Kaggle. Experience with MLOps and deployment (Docker, Airflow) and cloud platforms (AWS, GCP, Azure, SageMaker) would be beneficial.
This is a full-time position located in Bangalore, Karnataka, and requires in-person work. If you are a talented Python Machine Learning Engineer with a passion for developing and maintaining recommender systems using cutting-edge machine learning algorithms, we encourage you to apply for this exciting opportunity.
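As a hedged illustration of the item-based collaborative filtering such a role might involve, the sketch below computes item-item cosine similarities from a toy user-item rating matrix with scikit-learn. The matrix, function names, and scoring rule are invented for the example and do not represent any particular production system.

```python
# Illustrative item-based collaborative filtering on a toy rating matrix.
# The data and names are hypothetical; real systems use large sparse matrices
# and proper offline/online evaluation.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# Rows are users, columns are items; 0 means "not rated".
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [0, 1, 5, 4],
    [1, 0, 4, 5],
], dtype=float)

# Similarity between items, computed over the user dimension.
item_similarity = cosine_similarity(ratings.T)


def recommend(user_index: int, top_n: int = 2) -> list[int]:
    """Score unrated items for one user by similarity-weighted ratings."""
    user_ratings = ratings[user_index]
    scores = item_similarity @ user_ratings
    scores[user_ratings > 0] = -np.inf  # do not re-recommend already-rated items
    return list(np.argsort(scores)[::-1][:top_n])


print(recommend(user_index=0))  # item indices the first user has not rated yet
```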
Posted 1 month ago
5.0 - 8.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Associate
Job Description & Summary: At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems.
Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.
At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.
Responsibilities:
Good hands-on experience in PySpark, preferably 5 to 8 years, with strong knowledge of Python and Spark concepts.
Design, implement, and optimize Spark jobs for performance and scalability.
Design and develop ETL pipelines using PySpark and Python on large-scale data platforms (a minimal PySpark sketch follows this listing).
Work with structured and unstructured data from multiple sources including APIs, files, and databases.
Optimize Spark jobs for performance, scalability, and cost efficiency.
Write clean, maintainable, and testable code following software engineering best practices.
Collaborate with Data Scientists, Data Analysts, and other stakeholders to meet data needs.
Monitor and troubleshoot production jobs, ensuring reliability and data quality.
Implement data validation, lineage, and transformation logic.
Deploy jobs to cloud platforms (e.g. AWS EMR, Databricks, Azure Synapse, GCP Dataproc).
Mandatory Skill Sets: ETL pipelines using PySpark
Preferred Skill Sets: ETL, PySpark
Years of Experience Required: 5 - 8 years
Education Qualification: Bachelor's degree in computer science, data science, or any other engineering discipline. A Master's degree is a plus.
Degrees/Field of Study Required: Bachelor Degree, Master Degree
Required Skills: ETL Pipelines, PySpark
Optional Skills: Accepting Feedback, Active Listening, Algorithm Development, Alteryx (Automation Platform), Analytic Research, Big Data, Business Data Analytics, Communication, Complex Data Analysis, Conducting Research, Customer Analysis, Customer Needs Analysis, Dashboard Creation, Data Analysis, Data Analysis Software, Data Collection, Data-Driven Insights, Data Integration, Data Integrity, Data Mining, Data Modeling, Data Pipeline, Data Preprocessing, Data Quality + 33 more
Travel Requirements: Not Specified
Available for Work Visa Sponsorship: No
Government Clearance Required: No
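As a minimal, hedged sketch of the PySpark ETL work described in this listing: the paths, column names, and transformation below are assumptions made up for the example, not details from the role.

```python
# Illustrative PySpark ETL job; file paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example_etl").getOrCreate()

# Extract: read raw CSV data (header row assumed).
orders = spark.read.option("header", True).csv("s3://example-bucket/raw/orders.csv")

# Transform: cast types, drop invalid rows, derive a simple daily aggregate.
cleaned = (
    orders
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("amount").isNotNull() & (F.col("amount") > 0))
)
daily_totals = cleaned.groupBy("order_date").agg(F.sum("amount").alias("total_amount"))

# Load: write partitioned Parquet for downstream analytics.
daily_totals.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/daily_totals/"
)

spark.stop()
```

On platforms such as Databricks or EMR the SparkSession is typically provided by the runtime, so the explicit builder and stop calls would usually be omitted there.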
Posted 1 month ago