
24278 ETL Jobs - Page 30

JobPe aggregates these listings for easy access; applications are submitted directly on the original job portal.

3.0 years

15 - 20 Lacs

Noida

On-site

Role: Perl Developer
Location: Bangalore, Noida, and Hyderabad (hybrid, 2 days per week in office)
Experience: 3 to 6 years
Mode of Hire: Permanent
Notice Period: Immediate joiners only
CTC: Max 20 LPA
Client: Emids
Must Have: Perl, UNIX, SQL
Nice to Have: ETL, AWS, API, HL7
Job Type: Full-time
Pay: ₹1,500,000.00 - ₹2,000,000.00 per year
Location Type: In-person
Schedule: Day shift
Work Location: In person
Speak with the employer: +91 6393722524

Posted 6 days ago

Apply

4.0 - 5.0 years

5 - 9 Lacs

Noida

On-site

Job Information: Work Experience: 4-5 years Industry: IT Services Job Type: FULL TIME Location: Noida, India Job Overview: We are seeking a skilled Data Engineer with 4-5 years of experience to design, build, and maintain scalable data pipelines and analytics solutions within the AWS cloud environment. The ideal candidate will leverage AWS Glue, PySpark, and QuickSight to deliver robust data integration, transformation, and visualization capabilities. This role is critical in supporting business intelligence, analytics, and reporting needs across the organization. Key Responsibilities: Design, develop, and maintain data pipelines using AWS Glue, PySpark, and related AWS services to extract, transform, and load (ETL) data from diverse sources. Build and optimize data warehouse/data lake infrastructure on AWS, ensuring efficient data storage, processing, and retrieval. Develop and manage ETL processes to source data from various systems, including databases, APIs, and file storage, and create unified data models for analytics and reporting. Implement and maintain business intelligence dashboards using Amazon QuickSight, enabling stakeholders to derive actionable insights. Collaborate with cross-functional teams (business analysts, data scientists, product managers) to understand requirements and deliver scalable data solutions. Ensure data quality, integrity, and security throughout the data lifecycle, implementing best practices for governance and compliance. Support self-service analytics by empowering internal users to access and analyze data through QuickSight and other reporting tools. Troubleshoot and resolve data pipeline issues, optimizing performance and reliability as needed. Required Skills & Qualifications: Proficiency in AWS cloud services: AWS Glue, QuickSight, S3, Lambda, Athena, Redshift, EMR, and related technologies. Strong experience with PySpark for large-scale data processing and transformation. Expertise in SQL and data modeling for relational and non-relational databases. Experience building and optimizing ETL pipelines and data integration workflows. Familiarity with business intelligence and visualization tools, especially Amazon QuickSight. Knowledge of data governance, security, and compliance best practices. Strong programming skills in Python; experience with automation and scripting. Ability to work collaboratively in agile environments and manage multiple priorities effectively. Excellent problem-solving and communication skills. Preferred Qualifications: AWS certification (e.g., AWS Certified Data Analytics – Specialty, AWS Certified Developer). Good to Have Skills: Understanding of machine learning, deep learning and Generative AI concepts, Regression, Classification, Predictive modeling, Clustering, Deep Learning. Interview Process Internal Assessment 3 Technical Rounds
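
For orientation on the AWS Glue/PySpark work this posting describes, a minimal Glue job might look like the sketch below. The catalog database, table, and S3 bucket names are hypothetical placeholders, not details from the posting.

```python
# Minimal AWS Glue job sketch: read from the Glue Data Catalog, transform with
# PySpark, and write partitioned Parquet to S3 for Athena/QuickSight to query.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Source table registered in the Glue Data Catalog (hypothetical names)
orders_dyf = glue_context.create_dynamic_frame.from_catalog(
    database="sales_raw", table_name="orders"
)

# Transform with plain PySpark: cast, filter, and derive a load date
orders_df = (
    orders_dyf.toDF()
    .withColumn("order_amount", F.col("order_amount").cast("double"))
    .filter(F.col("order_status") != "CANCELLED")
    .withColumn("load_date", F.current_date())
)

# Write the curated output to S3 as partitioned Parquet (hypothetical bucket)
(
    orders_df.write.mode("overwrite")
    .partitionBy("load_date")
    .parquet("s3://example-curated-bucket/orders/")
)

job.commit()
```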

Posted 6 days ago

Apply

10.0 years

0 Lacs

Noida

On-site

Customer Success Our mission is to turn our customers into tech-savvy superheroes, ensuring they achieve success using our platform to meet their organization’s business goals. If you're passionate about helping customers realize the value they seek with technology, then our customer success team is the right place for you Your Role As an Manager - Quality Analyst , you will be responsible for developing and supporting the planning/design/execution of test plans, test scripts. The successful candidate will work closely with various departments to perform and validate test cases based on quality requirements, and recommend changes to predetermined quality guidelines. You will be responsible for ensuring that the end product meets the appropriate quality standards, is fully functional and user-friendly. A Day in the Life Review requirements, specifications and technical design documents to provide timely and meaningful feedback Create detailed, comprehensive and well-structured test plans and test cases Estimate, prioritize, plan and coordinate testing activities Design, develop and execute automation scripts using open source tools Identify, record, document thoroughly and track bugs Perform thorough regression testing when bugs are resolved Develop and apply testing processes for new and existing products to meet client needs Liaise with internal teams (e.g. developers and product managers) to identify system requirements Monitor debugging process results Investigate the causes of non-conforming software and train users to implement solutions Track quality assurance metrics, like defect densities and open defect counts Stay up-to-date with new testing tools and test strategies What You Need Proven work experience of 10+ years in software quality assurance Strong knowledge of software QA methodologies, tools and processes Experience in writing clear, concise and comprehensive test plans and test cases Experience in testing data validation scenarios and data ingestion, pipelines, and transformation processes (e.g.,ETL) Ability to validate data mappings - ETL Transformations, Business validations and Aggregation/Analytical checks Strong working experience in SQL and Must be proficient in writing SQL Queries API Testing - REST/SOAP, Postman, Pycharm, Pytest Experience working in an Agile/Scrum development process US Healthcare Data experience preferably in Value-Based Care and strong healthcare data background - clinical, claims, FHIR, HL7, X12, CCDA etc Experience in reconciling the data from Source to Target We offer competitive benefits to set you up for success in and outside of work. Here’s What We Offer Generous Leaves: Enjoy generous leave benefits of up to 40 days. Parental Leave: Leverage one of industry's best parental leave policies to spend time with your new addition. Sabbatical: Want to focus on skill development, pursue an academic career, or just take a break? We've got you covered. Health Insurance: We offer comprehensive health insurance to support you and your family, covering medical expenses related to illness, disease, or injury. Extending support to the family members who matter most. Care Program: Whether it’s a celebration or a time of need, we’ve got you covered with care vouchers to mark major life events. Through our Care Vouchers program, employees receive thoughtful gestures for significant personal milestones and moments of need. Financial Assistance: Life happens, and when it does, we’re here to help. 
Our financial assistance policy offers support through salary advances and personal loans for genuine personal needs, ensuring help is there when you need it most. Innovaccer is an equal-opportunity employer. We celebrate diversity, and we are committed to fostering an inclusive and diverse workplace where all employees, regardless of race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, marital status, or veteran status, feel valued and empowered. Disclaimer : Innovaccer does not charge fees or require payment from individuals or agencies for securing employment with us. We do not guarantee job spots or engage in any financial transactions related to employment. If you encounter any posts or requests asking for payment or personal information, we strongly advise you to report them immediately to our HR department at px@innovaccer.com. Additionally, please exercise caution and verify the authenticity of any requests before disclosing personal and confidential information, including bank account details. About Innovaccer Innovaccer activates the flow of healthcare data, empowering providers, payers, and government organizations to deliver intelligent and connected experiences that advance health outcomes. The Healthcare Intelligence Cloud equips every stakeholder in the patient journey to turn fragmented data into proactive, coordinated actions that elevate the quality of care and drive operational performance. Leading healthcare organizations like CommonSpirit Health, Atlantic Health, and Banner Health trust Innovaccer to integrate a system of intelligence into their existing infrastructure, extending the human touch in healthcare. For more information, visit www.innovaccer.com. Check us out on YouTube , Glassdoor , LinkedIn , Instagram , and the Web .
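
To illustrate the kind of ETL and data-mapping validation this role calls for (row-count and aggregation checks from source to target), here is a small hedged sketch. It assumes pandas and SQLAlchemy, which the posting does not name, and the connection strings, table, and column names are hypothetical.

```python
# Source-to-target reconciliation sketch: compare row counts and an aggregate
# between a staging table and a warehouse table. All names are placeholders.
import pandas as pd
from sqlalchemy import create_engine

source_engine = create_engine("postgresql://user:pwd@source-host/claims_stage")  # hypothetical
target_engine = create_engine("postgresql://user:pwd@target-host/claims_dw")     # hypothetical

# Each check is run against both systems and the results are compared
checks = {
    "row_count": "SELECT COUNT(*) AS value FROM {table}",
    "total_paid_amount": "SELECT SUM(paid_amount) AS value FROM {table}",
}

for name, sql in checks.items():
    src = pd.read_sql(sql.format(table="stg_claims"), source_engine)["value"].iloc[0]
    tgt = pd.read_sql(sql.format(table="fact_claims"), target_engine)["value"].iloc[0]
    status = "PASS" if src == tgt else "FAIL"
    print(f"{name}: source={src} target={tgt} -> {status}")
```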

Posted 6 days ago

Apply

5.0 - 8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Description

Essential Functions: Independently develop and deploy ETL jobs in a fast-paced, object-oriented environment. Understand and receive business requirements from clients via a business analyst, architect, or development lead to successfully develop applications, functions, and processes. Conducts and is accountable for unit testing on development assignments. Must be detail-oriented with the ability to follow through on issues. Must be able to work on and manage multiple tasks in addition to working with other areas within the department. Utilizes numerous sources to obtain and build development skills. Enhances existing applications to meet the needs of ongoing efforts within software platforms. Records and tracks time worked on projects and assignments. Develops a general understanding of TSYS/Global Payments, software platforms, and the credit card industry. Participates in team, department, and division meetings as required. Performs other duties as assigned.

Skills/Technical Knowledge: 5 to 8 years of strong development background in ETL tools such as GCP Dataflow, PySpark, SSIS, Snowflake, and DBT. Experience in Google Cloud Platform: GCP Pub/Sub, Datastore, BigQuery, App Engine, Compute Engine, Cloud SQL, Memorystore, Redis, etc. Experience in AWS/Snowflake/Azure is preferred. Proficient in Java, Python, and PySpark. Proficient in GCP BigQuery, Composer, Airflow, Pub/Sub, and Cloud Storage. Experience with build tools (e.g., Maven, Gradle). Proficient in code repo management, branching strategy, and version control using Git, VSTS, TeamForge, etc. Experience developing applications using Eclipse IDE or IntelliJ. Excellent knowledge of relational databases, SQL, and JDBC drivers. Experience with API gateways: DataPower, APIM, Apigee, etc. Strong analytical, planning, and organizational skills with an ability to manage competing demands. Excellent verbal and written communication skills; should be able to collaborate across business teams (stakeholders) and other technology groups as needed. Experience with NoSQL databases is preferred. Exposure to the payments industry is a plus.

Minimum Qualification: Minimum 5 to 8 years of relevant experience. Software Engineering, Payment Information Systems, or any technical degree; additional experience in lieu of a degree will be considered.
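
For the GCP/BigQuery side of the stack listed above, a minimal transformation step using the google-cloud-bigquery client could look like the following sketch; the project, dataset, and table names are hypothetical.

```python
# Run a BigQuery SQL transformation and write the result to a curated table.
from google.cloud import bigquery

client = bigquery.Client(project="example-payments-project")  # hypothetical project

sql = """
    SELECT card_program, DATE(txn_timestamp) AS txn_date,
           COUNT(*) AS txn_count, SUM(amount) AS total_amount
    FROM `example-payments-project.raw.transactions`
    GROUP BY card_program, txn_date
"""

job_config = bigquery.QueryJobConfig(
    destination="example-payments-project.curated.daily_txn_summary",
    write_disposition="WRITE_TRUNCATE",
)

query_job = client.query(sql, job_config=job_config)
query_job.result()  # wait for the job to finish
print(f"Loaded {query_job.destination.table_id}")
```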

Posted 6 days ago

Apply

7.0 years

0 Lacs

India

Remote

Role: Neo4j Engineer Overall IT Experience: 7+ years Relevant experience: (Graph Databases: 4+ years, Neo4j: 2+ years) Location: Remote Company Description Bluetick Consultants is a technology-driven firm that supports hiring remote developers, building technology products, and enabling end-to-end digital transformation. With previous experience in top technology companies such as Amazon, Microsoft, and Craftsvilla, we understand the needs of our clients and provide customized solutions. Our team has expertise in emerging technologies, backend and frontend development, cloud development, and mobile technologies. We prioritize staying up-to-date with the latest technological advances to create a long-term impact and grow together with our clients. Key Responsibilities • Graph Database Architecture: Design and implement Neo4j graph database schemas optimized for fund administration data relationships and AI-powered queries • Knowledge Graph Development: Build comprehensive knowledge graphs connecting entities like funds, investors, companies, transactions, legal documents, and market data • Graph-AI Integration: Integrate Neo4j with AI/ML pipelines, particularly for enhanced RAG (Retrieval-Augmented Generation) systems and semantic search capabilities • Complex Relationship Modeling: Model intricate relationships between Limited Partners, General Partners, fund structures, investment flows, and regulatory requirements • Query Optimization: Develop high-performance Cypher queries for real-time analytics, relationship discovery, and pattern recognition • Data Pipeline Integration: Build ETL processes to populate and maintain graph databases from various data sources including FundPanel.io, legal documents, and external market data using domain specific ontologies • Graph Analytics: Implement graph algorithms for fraud detection, risk assessment, relationship scoring, and investment opportunity identification • Performance Tuning: Optimize graph database performance for concurrent users and complex analytical queries • Documentation & Standards: Establish graph modelling standards, query optimization guidelines, and comprehensive technical documentation Key Use Cases You'll Enable • Semantic Search Enhancement: Create knowledge graphs that improve AI search accuracy by understanding entity relationships and context • Investment Network Analysis: Map complex relationships between investors, funds, portfolio companies, and market segments • Compliance Graph Modelling: Model regulatory relationships and fund terms to support automated auditing and compliance validation • Customer Relationship Intelligence: Build relationship graphs for customer relations monitoring and expansion opportunity identification • Predictive Modelling Support: Provide graph-based features for investment prediction and risk assessment models • Document Relationship Mapping: Connect legal documents, contracts, and agreements through entity and relationship extraction Required Qualifications • Bachelor's degree in Computer Science, Data Engineering, or related field • 7+ years of overall IT Experience • 4+ years of experience with graph databases, with 2+ years specifically in Neo4j • Strong background in data modelling, particularly for complex relationship structures • Experience with financial services data and regulatory requirements preferred • Proven experience integrating graph databases with AI/ML systems • Understanding of knowledge graph concepts and semantic technologies • Experience with high-volume, 
production-scale graph database implementations Technology Skills • Graph Databases: Neo4j (primary), Cypher query language, APOC procedures, Neo4j Graph Data Science library • Programming: Python, Java, or Scala for graph data processing and integration • AI Integration: Experience with graph-enhanced RAG systems, vector embeddings in graph context, GraphRAG implementations • Data Processing: ETL pipelines, data transformation, real-time data streaming (Kafka, Apache Spark) • Cloud Platforms: Neo4j Aura, Azure integration, containerized deployments • APIs: Neo4j drivers, REST APIs, GraphQL integration • Analytics: Graph algorithms (PageRank, community detection, shortest path, centrality measures) • Monitoring: Neo4j monitoring tools, performance profiling, query optimization • Integration: Elasticsearch integration, vector database connections, multi-modal data handling Specific Technical Requirements • Knowledge Graph Construction: Entity resolution, relationship extraction, ontology modelling • Cypher Expertise: Advanced Cypher queries, stored procedures, custom functions • Scalability: Clustering, sharding, horizontal scaling strategies • Security: Graph-level security, role-based access control, data encryption • Version Control: Graph schema versioning, migration strategies • Backup & Recovery: Graph database backup strategies, disaster recovery planning Industry Context Understanding • Fund Administration: Understanding of fund structures, capital calls, distributions, and investor relationships • Financial Compliance: Knowledge of regulatory requirements and audit trails in financial services • Investment Workflows: Understanding of due diligence processes, portfolio management, and investor reporting • Legal Document Structures: Familiarity with LPA documents, subscription agreements, and fund formation documents Collaboration Requirements • AI/ML Team: Work closely with GenAI engineers to optimize graph-based AI applications • Data Architecture Team: Collaborate on overall data architecture and integration strategies • Backend Developers: Integrate graph databases with application APIs and microservices • DevOps Team: Ensure proper deployment, monitoring, and maintenance of graph database infrastructure • Business Stakeholders: Translate business requirements into effective graph models and queries Performance Expectations • Query Performance: Ensure sub-second response times for standard relationship queries • Scalability: Support 100k+ users with concurrent access to graph data • Accuracy: Maintain data consistency and relationship integrity across complex fund structures • Availability: Ensure 99.9% uptime for critical graph database services • Integration Efficiency: Seamless integration with existing FundPanel.io systems and new AI services This role offers the opportunity to work at the intersection of advanced graph technology and artificial intelligence, creating innovative solutions that will transform how fund administrators understand and leverage their data relationships.
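
A minimal sketch of the kind of Cypher work described above, using the official Neo4j Python driver; the node labels, relationship types, and property names are hypothetical and not the actual FundPanel.io model.

```python
# Upsert a Limited Partner -> Fund commitment, then query co-investors.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))  # placeholder

UPSERT_COMMITMENT = """
MERGE (lp:LimitedPartner {id: $lp_id})
MERGE (f:Fund {id: $fund_id})
MERGE (lp)-[c:COMMITTED_TO]->(f)
SET c.amount = $amount
"""

CO_INVESTORS = """
MATCH (lp:LimitedPartner {id: $lp_id})-[:COMMITTED_TO]->(:Fund)<-[:COMMITTED_TO]-(other:LimitedPartner)
RETURN DISTINCT other.id AS co_investor
"""

with driver.session() as session:
    session.run(UPSERT_COMMITMENT, lp_id="LP-001", fund_id="FUND-A", amount=5_000_000)
    for record in session.run(CO_INVESTORS, lp_id="LP-001"):
        print(record["co_investor"])

driver.close()
```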

Posted 6 days ago

Apply

8.0 years

0 Lacs

India

Remote

Azure Data Engineer Location: Remote Shift : 6am - 3pm US central time zone Job Summary: We are seeking a highly skilled Data Engineer with strong experience in PostgreSQL and SQL Server, as well as hands-on expertise in Azure Data Factory (ADF) and Databricks. The ideal candidate will be responsible for building scalable data pipelines, optimizing database performance, and designing robust data models and schemas to support enterprise data initiatives. Key Responsibilities: Design and develop robust ETL/ELT pipelines using Azure Data Factory and Databricks Develop and optimize complex SQL queries and functions in PostgreSQL Develop and optimize complex SQL queries in SQL Server Perform performance tuning and query optimization for PostgreSQL Design and implement data models and schema structures aligned with business and analytical needs Collaborate with data architects, analysts, and business stakeholders to understand data requirements Ensure data quality, integrity, and security across all data platforms Monitor and troubleshoot data pipeline issues and implement proactive solutions Participate in code reviews, sprint planning, and agile ceremonies Required Skills & Qualifications: 8+ years of experience in data engineering or related field Strong expertise in PostgreSQL and SQL Server development, performance tuning, and schema design Experience in data migration from SQL Server to PostgreSQL Hands-on experience with Azure Data Factory (ADF) and Databricks Proficiency in SQL, Python, or Scala for data processing Experience with data modeling techniques (e.g., star/snowflake schemas, normalization) Familiarity with CI/CD pipelines, version control (Git), and agile methodologies Excellent problem-solving and communication skills If interested, share your resume on aditya.dhumal@leanitcorp.com
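
One step of the pipeline work described above, sketched with PySpark on Databricks: read a SQL Server table over JDBC, aggregate it, and write the result to PostgreSQL. Hostnames, credentials, and table names are placeholders, not details from the posting.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("sqlserver_to_postgres").getOrCreate()

# Read the source table from SQL Server over JDBC (hypothetical connection)
orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://example-host:1433;databaseName=sales")
    .option("dbtable", "dbo.orders")
    .option("user", "etl_user")
    .option("password", "***")
    .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
    .load()
)

# Simple transformation: daily totals for shipped orders
daily = (
    orders.filter(F.col("status") == "SHIPPED")
    .groupBy("order_date")
    .agg(F.sum("amount").alias("total_amount"), F.count("*").alias("order_count"))
)

# Write the result to PostgreSQL (hypothetical target)
(
    daily.write.format("jdbc")
    .option("url", "jdbc:postgresql://example-host:5432/analytics")
    .option("dbtable", "public.daily_order_summary")
    .option("user", "etl_user")
    .option("password", "***")
    .mode("overwrite")
    .save()
)
```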

Posted 6 days ago

Apply

7.0 years

0 Lacs

Calcutta

On-site

Line of Service Advisory Industry/Sector Not Applicable Specialism Data, Analytics & AI Management Level Manager Job Description & Summary At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions. Why PWC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us . At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. Responsibilities: Job Description: · Analyses current business practices, processes, and procedures as well as identifying future business opportunities for leveraging Microsoft Azure Data & Analytics Services. · Provide technical leadership and thought leadership as a senior member of the Analytics Practice in areas such as data access & ingestion, data processing, data integration, data modeling, database design & implementation, data visualization, and advanced analytics. · Engage and collaborate with customers to understand business requirements/use cases and translate them into detailed technical specifications. · Develop best practices including reusable code, libraries, patterns, and consumable frameworks for cloud-based data warehousing and ETL. · Maintain best practice standards for the development or cloud-based data warehouse solutioning including naming standards. · Designing and implementing highly performant data pipelines from multiple sources using Apache Spark and/or Azure Databricks · Integrating the end-to-end data pipeline to take data from source systems to target data repositories ensuring the quality and consistency of data is always maintained · Working with other members of the project team to support delivery of additional project components (API interfaces) · Evaluating the performance and applicability of multiple tools against customer requirements · Working within an Agile delivery / DevOps methodology to deliver proof of concept and production implementation in iterative sprints. · Integrate Databricks with other technologies (Ingestion tools, Visualization tools). 
· Proven experience working as a data engineer · Highly proficient in using the spark framework (python and/or Scala) · Extensive knowledge of Data Warehousing concepts, strategies, methodologies. · Direct experience of building data pipelines using Azure Data Factory and Apache Spark (preferably in Databricks). · Hands on experience designing and delivering solutions using Azure including Azure Storage, Azure SQL Data Warehouse, Azure Data Lake, Azure Cosmos DB, Azure Stream Analytics · Experience in designing and hands-on development in cloud-based analytics solutions. · Expert level understanding on Azure Data Factory, Azure Synapse, Azure SQL, Azure Data Lake, and Azure App Service is required. · Designing and building of data pipelines using API ingestion and Streaming ingestion methods. · Knowledge of Dev-Ops processes (including CI/CD) and Infrastructure as code is essential. · Thorough understanding of Azure Cloud Infrastructure offerings. · Strong experience in common data warehouse modelling principles including Kimball. · Working knowledge of Python is desirable · Experience developing security models. · Databricks & Azure Big Data Architecture Certification would be plus · Must be team oriented with strong collaboration, prioritization, and adaptability skills required Mandatory skill sets: Azure Databricks Preferred skill sets: Azure Databricks Years of experience required: 7-10 Years Education qualification: BE, B.Tech, MCA, M.Tech Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Bachelor of Technology, Bachelor of Engineering Degrees/Field of Study preferred: Certifications (if blank, certifications not specified) Required Skills Databricks Platform Optional Skills Accepting Feedback, Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Coaching and Feedback, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling {+ 32 more} Desired Languages (If blank, desired languages not specified) Travel Requirements Available for Work Visa Sponsorship? Government Clearance Required? Job Posting End Date
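
A minimal sketch, assuming a Databricks workspace with access to ADLS, of the ingestion pattern described above: land raw files into a bronze Delta table that downstream Synapse or Power BI layers can query. The storage account, container, and table names are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Raw files landed by Azure Data Factory (hypothetical path)
raw_path = "abfss://raw@examplestorageacct.dfs.core.windows.net/claims/2025/"

claims = (
    spark.read.option("header", "true").csv(raw_path)
    .withColumn("ingest_ts", F.current_timestamp())
    .withColumn("source_file", F.input_file_name())
)

# Append into a managed Delta table in the bronze layer
(
    claims.write.format("delta")
    .mode("append")
    .saveAsTable("bronze.claims_raw")
)
```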

Posted 6 days ago

Apply

2.0 years

3 - 10 Lacs

India

Remote

Job Title: Sr. Data Engineer
Experience: 2+ years
Location: Indore (onsite)
Industry: IT
Job Type: Full time

Roles and Responsibilities:
1. Design and develop scalable data pipelines and workflows for data ingestion, transformation, and integration.
2. Build and maintain data storage systems, including data warehouses, data lakes, and relational databases.
3. Ensure data accuracy, integrity, and consistency through validation and quality assurance processes.
4. Collaborate with data scientists, analysts, and business teams to understand data needs and deliver tailored solutions.
5. Optimize database performance and manage large-scale datasets for efficient processing.
6. Leverage cloud platforms (AWS, Azure, or GCP) and big data technologies (Hadoop, Spark, Kafka) for building robust data solutions.
7. Automate and monitor data workflows using orchestration frameworks such as Apache Airflow (see the sketch below).
8. Implement and enforce data governance policies to ensure compliance and data security.
9. Troubleshoot and resolve data-related issues to maintain seamless operations.
10. Stay updated on emerging tools, technologies, and trends in data engineering.

Skills and Knowledge:
1. Core Skills: Proficient in Python (libraries: Pandas, NumPy) and SQL. Knowledge of data modeling techniques, including Entity-Relationship (ER) diagrams, dimensional modeling, and data normalization. Familiarity with ETL processes and tools like Azure Data Factory (ADF) and SSIS (SQL Server Integration Services).
2. Cloud Expertise: AWS services (Glue, Redshift, Lambda, EKS, RDS, Athena); Azure services (Databricks, Key Vault, ADLS Gen2, ADF, Azure SQL); Snowflake.
3. Big Data and Workflow Automation: Hands-on experience with big data technologies like Hadoop, Spark, and Kafka. Experience with workflow automation tools like Apache Airflow (or similar).

Qualifications and Requirements:
Education: Bachelor's degree (or equivalent) in Computer Science, Information Technology, Engineering, or a related field.
Experience: Freshers with a strong understanding, internships, and relevant academic projects are welcome. 2+ years of experience working with Python, SQL, and data integration or visualization tools is preferred.
Other Skills: Strong communication skills, especially the ability to explain technical concepts to non-technical stakeholders. Ability to work in a dynamic, research-oriented team with concurrent projects.

Job Types: Full-time, Permanent
Pay: ₹300,000.00 - ₹1,000,000.00 per year
Benefits: Paid sick time, Provident Fund, Work from home
Schedule: Day shift, Monday to Friday, Weekend availability
Supplemental Pay: Performance bonus
Ability to commute/relocate: Niranjanpur, Indore, Madhya Pradesh: Reliably commute or planning to relocate before starting work (Preferred)
Experience: Data Engineer: 2 years (Preferred)
Work Location: In person
Application Deadline: 31/08/2025
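
To illustrate the Airflow orchestration mentioned in responsibility 7, here is a minimal DAG sketch; the task callables only print placeholders, and the DAG id and schedule are hypothetical.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull data from the source API or database")

def transform():
    print("clean and reshape the data with pandas or Spark")

def load():
    print("write curated data to the warehouse")

with DAG(
    dag_id="daily_sales_pipeline",      # hypothetical name
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```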

Posted 6 days ago

Apply

0.0 - 2.0 years

1 - 5 Lacs

India

Remote

Job Title : Jr. Data Engineer Location : Indore (Onsite) Experience : 0–2 Years Industry : Information Technology Employment Type : Full-time Job Summary : We are looking for a motivated and detail-oriented Junior Data Engineer to join our team onsite in Indore. The ideal candidate should have a solid understanding of Python and SQL, with a passion for data processing, transformation, and analytics. Strong communication skills, confidence, and the ability to learn quickly are key for success in this role. Key Responsibilities : Assist in designing, developing, and maintaining ETL pipelines and data workflows. Work with structured and unstructured data using Python and SQL . Support data collection, cleansing, transformation, and validation activities. Collaborate with data scientists, analysts, and software engineers to support data needs. Troubleshoot data-related issues and ensure high data quality and integrity. Create and maintain documentation for data pipelines and workflows. Continuously improve data engineering processes and performance. Key Requirements : 0–2 years of experience in a Data Engineering or related role. Good knowledge of Python and SQL is a must. Familiarity with databases like MySQL, PostgreSQL, or SQL Server . Understanding of data structures, algorithms, and basic ETL concepts. Strong analytical, problem-solving , and communication skills . Ability to work independently and collaboratively in a fast-paced environment. Self-motivated, confident, and eager to learn new technologies. Nice to Have : Exposure to cloud platforms like AWS, Azure, or GCP . Experience with data visualization tools like Power BI, Tableau , or Excel dashboards . Basic understanding of data warehousing , big data , or streaming technologies . Familiarity with tools like Airflow , Apache Spark , or Pandas . Perks & Benefits : Competitive salary with growth opportunities. Mentorship from experienced data professionals. Hands-on experience in real-world projects. Onsite work in a collaborative office environment. Performance-based incentives and learning support. Job Types: Full-time, Permanent Pay: ₹180,000.00 - ₹500,000.00 per year Benefits: Paid sick time Provident Fund Work from home Schedule: Day shift Monday to Friday Weekend availability Supplemental Pay: Performance bonus Ability to commute/relocate: Niranjanpur, Indore, Madhya Pradesh: Reliably commute or planning to relocate before starting work (Preferred) Experience: Data Engineer: 1 year (Preferred) Work Location: In person Application Deadline: 30/08/2025

Posted 6 days ago

Apply

12.0 years

0 Lacs

Gurugram, Haryana, India

On-site

The Analytics Lead is a key role within the Enterprise Data team. We are looking for an expert Power BI lead with deep data visualization experience and excellent capability around DAX, SQL, and data modelling techniques. This is a unique opportunity to be involved in delivering leading-edge business analytics using cutting-edge BI tools, such as cloud-based databases, self-service analytics, and leading visualisation tools, enabling the company’s aim to become a fully digital organisation.

Job Description:

Responsibilities: Lead and manage a team of Power BI developers, providing guidance, direction, and support in their day-to-day activities. Define and design data visualization models and solutions within the Microsoft Azure ecosystem, including Power BI, Azure Synapse Analytics, Microsoft Fabric, and Azure Machine Learning. Develop strategies for analytics, reporting, and governance to ensure scalability, reliability, and security. Collaborate with business stakeholders to define their analytics and reporting strategies. Ensure alignment of solutions with organizational goals, compliance requirements, and technology trends. Act as a subject matter expert (SME) in analytics services, mentoring senior and junior Power BI developer teams. Evaluate emerging technologies and analytical capabilities. Provide guidance on cost optimization, performance tuning, and best practices in Azure cloud environments.

Stakeholder Collaboration: Partner with business stakeholders, product managers, and data scientists to understand business objectives and translate them into technical solutions. Work with DevOps, engineering, and operations teams to implement CI/CD pipelines and ensure smooth deployment of analytical solutions.

Governance and Security: Define and implement policies for data governance, quality, and security, ensuring compliance with GDPR, HIPAA, or other relevant standards. Optimize solutions for data privacy, resilience, and disaster recovery.

Qualifications

Required Skills and Experience

Technical Expertise: Proficient in Power BI and related technology, including Microsoft Fabric, Azure SQL Database, Azure Synapse, Databricks, and other visualization tools. Hands-on experience with Power BI, machine learning, and AI services in Azure. Excellent data visualization skills and experience.

Professional Experience: 12+ years of experience in Power BI development, with demonstrable experience designing high-quality models and dashboards using Power BI and transforming raw data into meaningful insights. 8+ years of experience using Power BI Desktop, DAX, Tabular Editor, and related tools. 5+ years of experience with Power BI Premium capacity administration. 5+ years of SQL development experience. Comprehensive understanding of data modelling, administration, and visualization. Good knowledge and understanding of data warehousing concepts, Azure cloud databases, and the ETL (Extract, Transform, Load) framework.

Leadership and Communication: Exceptional ability to communicate technical concepts to non-technical stakeholders and align teams on strategic goals. Experience in leading cross-functional teams and managing multiple concurrent projects.

Certifications (Preferred): Relevant certifications in Power BI, machine learning, AI, or enterprise architecture.

Key Competencies: Expertise in data visualization tools such as Power BI or Tableau. Expertise in creating semantic models for reporting. Familiarity with Microsoft Fabric technologies, including OneLake, Lakehouse, and Data Factory. Strong understanding of data governance, compliance, and security frameworks. Familiarity with DevOps and Infrastructure as Code (IaC) tools such as Bicep or Azure Resource Manager (ARM) templates. Proven ability to drive innovation in data strategy and cloud solutions. A deep understanding of business intelligence workflows and the ability to align technical solutions with them. Strong database design skills, including an understanding of both normalised-form and dimensional-form databases. In-depth knowledge and experience of data-warehousing strategies and techniques, e.g., Kimball data warehousing. Experience in cloud-based data integration tools such as Azure Data Factory. Experience in Azure DevOps or JIRA is a plus. Experience working with finance data is highly desirable. Familiarity with agile development techniques and objectives.

Location: DGS India - Mumbai - Thane Ashar IT Park
Brand: Dentsu
Time Type: Full time
Contract Type: Permanent

Posted 6 days ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Velotio Technologies is a product engineering company working with innovative startups and enterprises. We are a certified Great Place to Work® and recognized as one of the best companies to work for in India. We have provided full-stack product development for 325+ startups across the globe building products in the cloud-native, data engineering, B2B SaaS, IoT & Machine Learning space. Our team of 400+ elite software engineers solves hard technical problems while transforming customer ideas into successful products. We are seeking a highly motivated Quality Assurance (QA) Engineer to join our team and play a critical role in ensuring the quality, performance, and reliability of our product. As a QA Engineer, you will be responsible for testing complex data pipelines, distributed systems, and real-time processing modules that form the backbone of our platform. You will collaborate closely with developers, product managers, and other stakeholders to deliver a robust and scalable product that meets the highest quality standards. Requirements Analyze technical and functional specifications of the Data Highway product to create comprehensive test strategies Develop detailed test plans, test cases, and test scripts for functional, performance, and regression testing Define testing criteria and acceptance standards for data pipelines, APIs, and distributed systems Execute manual and automated tests for various components of the Data Highway, including data ingestion, processing, and output modules Perform end-to-end testing of data pipelines to ensure accuracy, integrity, and scalability.Validate real-time and batch data processing flows to ensure performance and reliability Identify, document, and track defects using tools like JIRA, providing clear and actionable descriptions for developers Collaborate with development teams to debug issues, verify fixes, and prevent regression Perform root cause analysis to identify underlying problems and recommend process improvements Conduct performance testing to evaluate system behavior under various load conditions, including peak usage scenarios Monitor key metrics such as throughput, latency, and resource utilization to identify bottlenecks and areas for optimization Test APIs for functionality, reliability, and adherence to RESTful principles Validate integrations with external systems and third-party services to ensure seamless data flow Work closely with cross-functional teams, including developers, product managers, and DevOps, to align on requirements and testing priorities Participate in Agile ceremonies such as sprint planning, daily stand-ups, and retrospectives to ensure smooth communication and collaboration Provide regular updates on test progress, coverage, and quality metrics to stakeholders Collaborate with automation engineers to identify critical test cases for automation Use testing tools like Postman, JMeter, and Selenium for API, performance, and UI testing as required Assist in maintaining and improving automated test frameworks for the Data Highway product Validate data transformations, mappings, and consistency across data pipelines Ensure the security of data in transit and at rest, testing for vulnerabilities and compliance with industry standards Maintain detailed and up-to-date documentation for test plans, test cases, and defect reports Contribute to user guides and knowledge bases to support product usage and troubleshooting Desired Skills & Experience: Bachelor's degree in Computer Science, Information Technology, or a 
related field, or equivalent professional experience 3+ years of experience as a Quality Assurance Engineer, preferably in testing data pipelines, distributed systems, or SaaS products Strong understanding of data pipelines, ETL processes, and distributed systems testing Experience with test management and defect-tracking tools like JIRA, TestRail, Zephyr Proficiency in API testing using tools like Postman or SoapUI Familiarity with SQL and database testing for data validation and consistency Knowledge of performance testing tools like JMeter, LoadRunner, or similar Experience with real-time data processing systems like Kafka or similar technologies Familiarity with CI/CD pipelines and DevOps practices Exposure to automation frameworks and scripting languages such as Python or JavaScript Strong analytical and problem-solving skills with attention to detail Excellent communication and collaboration skills to work effectively with cross-functional teams Proactive and self-driven approach to identifying and resolving quality issues Benefits Our Culture : We have an autonomous and empowered work culture encouraging individuals to take ownership and grow quickly Flat hierarchy with fast decision making and a startup-oriented "get things done" culture A strong, fun & positive environment with regular celebrations of our success. We pride ourselves in creating an inclusive, diverse & authentic environment We want to hire smart, curious, and ambitious folks, so please reach out even if you do not have all of the requisite experience. We are looking for engineers with the potential to grow! At Velotio, we embrace diversity. Inclusion is a priority for us, and we are eager to foster an environment where everyone feels valued. We welcome applications regardless of ethnicity or cultural background, age, gender, nationality, religion, disability or sexual orientation.
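
Below is a hedged sketch of the kind of API test this role covers, using pytest with the requests library; the endpoint, payload shape, and expected fields are hypothetical, not part of the actual Data Highway product.

```python
# Run with: pytest test_pipeline_api.py
import requests

BASE_URL = "https://api.example.com/v1"  # placeholder endpoint

def test_create_pipeline_run_returns_201():
    payload = {"pipeline": "orders_ingest", "run_date": "2025-01-01"}
    response = requests.post(f"{BASE_URL}/pipeline-runs", json=payload, timeout=10)
    assert response.status_code == 201
    body = response.json()
    assert body["pipeline"] == "orders_ingest"
    assert body["status"] in {"QUEUED", "RUNNING"}

def test_get_unknown_run_returns_404():
    response = requests.get(f"{BASE_URL}/pipeline-runs/does-not-exist", timeout=10)
    assert response.status_code == 404
```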

Posted 6 days ago

Apply

3.0 years

0 Lacs

Andhra Pradesh

On-site

P2-C3-STS JD

The client is looking for a Senior QA Test Analyst in our Data Lake / Data Warehouse team. In this role you will be part of a team working to develop solutions that enable the business to leverage data as an asset at the bank. As a Senior QA Test Analyst, you will develop test strategies and test plans for Data Lake projects, ensuring all IT SDLC processes are documented and practiced, and working closely with multiple technology teams across the enterprise. You will also execute test cases and communicate status to project team members and key stakeholders. Key technologies include Azure DevOps, Python, Data Lake, AWS Cloud, Snowflake, Zena, and DataStage. If you consider data a strategic asset, evangelize the value of good data and insights, and have a passion for learning and continuous improvement, this role is for you.

Key Responsibilities: Actively participate in the review of project requirements, data mappings, and technical design specifications. Create test strategies and test plans for Data Lake projects, mapping back to the project requirements to ensure proper test coverage. Execute test cases in Azure DevOps using manual and/or automated test processes. Execute SQL database queries to support test execution; strong SQL skills are required. With knowledge of the code, perform ETL validation according to the data mapping, and execute data profiling, data reconciliation, metadata validation, and initial and delta validation for different SCD types. Analyze data, troubleshoot data issues, and create action plans to address data quality issues. Coordinate test execution with other application teams and UAT partners. Create and communicate test status with project team members and stakeholders. Identify, document, and communicate testing defects. Collaborate with the project team on defect analysis and triage. Support continuous improvement by identifying and solving opportunities to define or enhance the QA process. Perform functional, regression, negative, and migration testing for data warehouse projects. To perform these duties, the Senior QA Testing Analyst position requires theoretical and practical knowledge of quality assurance, testing principles, and ETL technologies and tools, including AWS Cloud, Snowflake, DataStage, Python, Zena, Infogix, Tableau, Azure DevOps, Mainframe, and SharePoint.

Basic Qualifications: Bachelor's degree. 3+ years of ETL testing experience in a data warehouse environment. 3+ years of experience writing SQL queries. 2+ years of experience with Snowflake and AWS Cloud. 2+ years of experience with Agile methodologies as a QA Analyst on a project team.

Preferred Qualifications: Experience in the financial services (banking) industry. Experience testing on Snowflake and AWS S3/EC2/EMR. To perform these duties, the Senior QA Testing Analyst position requires theoretical and practical knowledge of quality assurance, testing principles, and ETL technologies and tools, including AWS, Data Lake, Snowflake, DataStage, Python, ASG Zena, Infogix, Tableau, Azure DevOps, Mainframe, and SharePoint. Experience with data governance and data management approaches. Excellent verbal and written communication skills. Ability to effectively prioritize and execute tasks. Detail-oriented and highly motivated, with strong organizational, analytical, and problem-solving skills.

About Virtusa

Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody.
When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state of the art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
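
Two of the checks described above (source-to-target reconciliation and SCD delta validation), sketched with the Snowflake Python connector; the account, warehouse, and table names are hypothetical.

```python
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account", user="qa_user", password="***",   # placeholders
    warehouse="QA_WH", database="EDW", schema="CUSTOMER",
)
cur = conn.cursor()

# Row-count reconciliation between the landing table and the target dimension
cur.execute("SELECT COUNT(*) FROM STG_CUSTOMER")
source_count = cur.fetchone()[0]
cur.execute("SELECT COUNT(DISTINCT CUSTOMER_ID) FROM DIM_CUSTOMER")
target_count = cur.fetchone()[0]
print("row counts match:", source_count == target_count)

# SCD Type 2 check: every business key should have exactly one current row
cur.execute("""
    SELECT CUSTOMER_ID
    FROM DIM_CUSTOMER
    WHERE CURRENT_FLAG = 'Y'
    GROUP BY CUSTOMER_ID
    HAVING COUNT(*) <> 1
""")
violations = cur.fetchall()
print("SCD2 current-flag violations:", len(violations))

cur.close()
conn.close()
```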

Posted 6 days ago

Apply

0 years

0 Lacs

Andhra Pradesh

On-site

Talend: designing and developing the technical architecture, data pipelines, and performance scaling, using Talend tooling to integrate data and ensure data quality in a big data environment. Very strong in PL/SQL: queries, procedures, and JOINs. Snowflake SQL: writing SQL queries against Snowflake and developing Unix, Python, and similar scripts to extract, load, and transform data. Talend knowledge and hands-on experience is good to have. Candidates who have worked in production support are preferred. Hands-on experience with Snowflake utilities such as SnowSQL, Snowpipe, Python, Tasks, Streams, Time Travel, the Optimizer, Metadata Manager, data sharing, and stored procedures. Perform data analysis, troubleshoot data issues, and provide technical support to end users. Develop and maintain data warehouse and ETL processes, ensuring data quality and integrity. Complex problem-solving capability and a continuous-improvement approach. Talend/Snowflake certification is desirable. Excellent SQL coding skills. Excellent communication and documentation skills. Familiar with the Agile delivery process. Must be analytical, creative, and self-motivated. Able to work effectively within a global team environment. Excellent communication skills. Production support experience is good to have.

About Virtusa

Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state of the art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
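
A minimal sketch, assuming a Snowflake environment, of the utilities called out above: a stream plus a scheduled task for incremental loads, and a Time Travel query. Database, table, and column names are hypothetical.

```python
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account", user="etl_user", password="***",  # placeholders
    warehouse="ETL_WH", database="SALES", schema="RAW",
)
cur = conn.cursor()

# Capture changes on the raw table with a stream
cur.execute("CREATE STREAM IF NOT EXISTS ORDERS_STREAM ON TABLE RAW_ORDERS")

# Task that merges newly inserted rows into the curated table every hour
# (the column list is hypothetical)
cur.execute("""
    CREATE TASK IF NOT EXISTS LOAD_ORDERS
      WAREHOUSE = ETL_WH
      SCHEDULE = '60 MINUTE'
    AS
      INSERT INTO CURATED.ORDERS (ORDER_ID, CUSTOMER_ID, AMOUNT, ORDER_TS)
      SELECT ORDER_ID, CUSTOMER_ID, AMOUNT, ORDER_TS
      FROM ORDERS_STREAM
      WHERE METADATA$ACTION = 'INSERT'
""")
cur.execute("ALTER TASK LOAD_ORDERS RESUME")

# Time Travel: inspect the table as it looked one hour ago
cur.execute("SELECT COUNT(*) FROM RAW_ORDERS AT(OFFSET => -3600)")
print("row count one hour ago:", cur.fetchone()[0])

cur.close()
conn.close()
```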

Posted 6 days ago

Apply

2.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Description: L1 Support – Data Engineering (Full Time WFO) Location: Noida Work Mode: Noida Office | 6 days/week | 24x7x365 support (rotational shifts) Salary Range - Between INR 2.5 to 3 Lacs Per Annum Experience: 2 years Language: English proficiency mandatory About the Role We're looking for an experienced and motivated L1 Support Engineer – Data Engineering to join our growing team. If you have solid exposure to AWS , SQL , and Python scripting , and you're ready to thrive in a 24x7 support environment—this role is for you! What You’ll Do Monitor and support AWS services (S3, EC2, CloudWatch, IAM) Handle SQL-based issue resolution and data analysis Run and maintain Python scripts ; Shell scripting is a plus Support ETL pipelines and data workflows Monitor Apache Airflow DAGs and resolve basic issues Collaborate with cross-functional and multicultural teams What We’re Looking For B.Tech or MCA preferred , but candidates with a Bachelor’s degree in any field and the right skillset are welcome to apply. 2 years of Data Engineering Support or similar experience Strong skills in AWS , SQL , Python , and ETL processes Familiarity with data warehousing (Amazon Redshift or similar) Ability to work rotational shifts in a 6-day, 24x7 environment Excellent communication and problem-solving skills English fluency is required
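
A small sketch of a routine L1 check like those described above: confirm the day's extract files landed in S3 and pull an error metric from CloudWatch via boto3. The bucket, prefix, namespace, and metric names are hypothetical.

```python
from datetime import datetime, timedelta, timezone
import boto3

s3 = boto3.client("s3")
cloudwatch = boto3.client("cloudwatch")

# Check that today's extract files have landed (hypothetical bucket/prefix)
today = datetime.now(timezone.utc).strftime("%Y-%m-%d")
resp = s3.list_objects_v2(Bucket="example-data-lake", Prefix=f"daily_extracts/{today}/")
print("files landed today:", resp.get("KeyCount", 0))

# Pull the last hour of a custom failure metric (hypothetical namespace/metric)
metrics = cloudwatch.get_metric_statistics(
    Namespace="ExamplePipeline",
    MetricName="FailedRecords",
    StartTime=datetime.now(timezone.utc) - timedelta(hours=1),
    EndTime=datetime.now(timezone.utc),
    Period=3600,
    Statistics=["Sum"],
)
print("failed records in the last hour:", metrics["Datapoints"])
```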

Posted 6 days ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Total Years of Experience: 5+ years
Relevant Years of Experience: 5+ years
Detailed JD (Roles and Responsibilities): Knowledge and hands-on experience in AWS cloud testing. Knowledge and experience in Snowflake testing. Experience in data product publishing testing. Good knowledge of testing concepts and test documentation.
Mandatory Skills: AWS cloud testing, Snowflake
Desired/Secondary Skills: ETL/data warehouse testing

Posted 6 days ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

About the Role We are looking for a highly competent and detail-oriented Database Administrator with 3–5 years of experience in SQL Server environments. The ideal candidate should have hands-on experience with performance tuning, backup strategies, query optimisation, and maintaining high availability. You will play a critical role in ensuring database stability, scalability, and security across business applications. Key Responsibilities Manage and maintain Microsoft SQL Server databases (2016 and later) across development, UAT, and production environments. · Monitor and improve database performance using Query Store, Extended Events, and Dynamic Management Views (DMVs). · Design and maintain indexes, partitioning strategies, and statistics to ensure optimal performance. · Develop and maintain T-SQL scripts, views, stored procedures, and triggers. · Implement robust backup and recovery solutions using native SQL Server tools and third-party backup tools (if applicable). · Ensure business continuity through high-availability configurations such as Always On Availability Groups, Log Shipping, or Failover Clustering. · Perform database capacity planning and forecast growth requirements. · Ensure SQL Server security by managing logins, roles, permissions, and encryption features like TDE. · Collaborate with application developers for schema design, indexing strategies, and performance optimization. · Handle deployments, patching, and version upgrades in a controlled and documented manner. · Maintain clear documentation of database processes, configurations, and security policies. Required Skills & Qualifications · Bachelor’s degree in Computer Science, Engineering, or related field. · 3–5 years of solid experience with Microsoft SQL Server (2016 or later). · Strong command of T-SQL including query optimisation, joins, CTEs, window functions, and error handling. · Proficient in interpreting execution plans, optimising long-running queries, and using indexing effectively. · Understanding of SQL Server internals such as page allocation, buffer pool, and lock escalation. · Hands-on experience with backup/restore strategies and consistency checks (DBCC CHECKDB). · Experience with SQL Server Agent Jobs, alerts, and automation scripts (PowerShell or T-SQL). · Ability to configure and manage SQL Server high-availability features. · Exposure to tools like Redgate SQL Monitor, SolarWinds DPA, or similar is a plus. Nice to Have · Exposure to Azure SQL Database or cloud-hosted SQL Server infrastructure. · Basic understanding of ETL workflows using SSIS. · Microsoft Certification: MCSA / Azure Database Administrator Associate or equivalent. · Experience with database deployments in CI/CD pipelines. Key Traits · Analytical and structured problem-solver. · High attention to detail and data consistency. · Proactive mindset with ownership of deliverables. · Strong verbal and written communication skills.
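
To illustrate the DMV-based tuning work mentioned above, here is a hedged sketch that pulls the top queries by average CPU from the plan cache. It assumes pyodbc (not named in the posting), and the connection string is a placeholder; the DMVs themselves are standard SQL Server views.

```python
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=example-host;"
    "DATABASE=master;Trusted_Connection=yes;"
)

# Top 10 cached statements by average CPU time, with a snippet of their text
TOP_CPU_QUERIES = """
SELECT TOP (10)
    qs.execution_count,
    qs.total_worker_time / qs.execution_count AS avg_cpu_time,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1, 200) AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY avg_cpu_time DESC;
"""

for row in conn.cursor().execute(TOP_CPU_QUERIES):
    print(row.execution_count, row.avg_cpu_time, row.query_text)

conn.close()
```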

Posted 6 days ago

Apply

5.0 years

0 Lacs

Gurgaon, Haryana, India

Remote

Decision Analytics EXL (NASDAQ:EXLS) is a leading operations management and analytics company that helps businesses enhance growth and profitability in the face of relentless competition and continuous disruption. Using our proprietary, award-winning Business EXLerator Framework™, which integrates analytics, automation, benchmarking, BPO, consulting, industry best practices and technology platforms, we look deeper to help companies improve global operations, enhance data-driven insights, increase customer satisfaction, and manage risk and compliance. EXL serves the insurance, healthcare, banking and financial services, utilities, travel, transportation and logistics industries. Headquartered in New York, New York, EXL has more than 24,000 professionals in locations throughout the United States, Europe, Asia (primarily India and Philippines), Latin America, Australia and South Africa. EXL Analytics provides data-driven, action-oriented solutions to business problems through statistical data mining, cutting edge analytics techniques and a consultative approach. Leveraging proprietary methodology and best-of-breed technology, EXL Analytics takes an industry-specific approach to transform our clients’ decision making and embed analytics more deeply into their business processes. Our global footprint of nearly 2,000 data scientists and analysts assist client organizations with complex risk minimization methods, advanced marketing, pricing and CRM strategies, internal cost analysis, and cost and resource optimization within the organization. EXL Analytics serves the insurance, healthcare, banking, capital markets, utilities, retail and e-commerce, travel, transportation and logistics industries. Please visit www.exlservice.com for more information about EXL Analytics. Job Title - SAS Platform Migration Specialist (SAS EG to SAS Viya Migration) Position Overview : We are seeking an experienced SAS professional to lead and executed the migration of existing SAS Enterprise guide (EG) programs and processes to the modern SAS Viya platform. The ideal candidate will have strong expertise in both SAS EG and SAS Viya environments and will be responsible for ensuring smooth transition while optimizing code and processes. 
Key Responsibilities: Assess existing SAS EG programs. Develop and implement migration strategies and frameworks. Convert SAS EG programs to SAS Viya-compatible code. Optimize existing code for better performance in the Viya environment. Create and maintain documentation for migration processes and procedures. Provide training and support to team members during the transition. Collaborate with stakeholders to ensure business requirements are met. Perform testing and validation of migrated programs. Troubleshoot migration-related issues.

Migration Planning: Analyze the current SAS EG environment and applications. Create a detailed migration roadmap. Identify potential risks and mitigation strategies. Establish timelines and milestones.

Technical Implementation: Convert SAS EG programs to a Viya-compatible format. Optimize code for CAS processing. Implement new features available in Viya. Ensure data security and access controls.

Quality Assurance: Develop testing strategies. Perform parallel runs. Validate results. Document any discrepancies.

Knowledge Transfer: Create training materials. Conduct workshops. Provide ongoing support. Document best practices.

Monitoring and Maintenance: Track migration progress. Monitor performance. Address issues and concerns. Provide regular status updates.

Work Environment: Full-time position. May require occasional overtime during critical migration phases. Hybrid work environment (remote/office). May require some travel to different office locations.

Technical Skills: SAS Base programming, SAS Enterprise Guide, SAS Viya, SAS Studio, SAS Visual Analytics, CAS programming, Git version control, data modelling, ETL processes.

Soft Skills: Strong analytical and problem-solving abilities. Excellent communication skills. Team collaboration. Project management. Time management. Documentation skills. Training and mentoring abilities.

Candidate Profile: Bachelor's degree in Computer Science, Statistics, or a related field. 5+ years of experience with SAS programming. Strong expertise in SAS Enterprise Guide. Hands-on experience with the SAS Viya platform. Proficiency in SAS Studio and Visual Analytics. Knowledge of CAS (Cloud Analytic Services). Experience with REST APIs and web services. Strong understanding of data management principles. Experience working in dual-shore engagements is preferred. Must have experience managing clients directly. Superior analytical and problem-solving skills. Demonstrated leadership ability and willingness to take initiative. Strong record of achievement, solid analytical ability, and an entrepreneurial, hands-on approach to work. Outstanding written and verbal communication skills. Able to work in a fast-paced, continuously evolving environment and ready to take on uphill challenges. Able to understand cross-cultural differences and work with clients across the globe.

Preferred Qualifications: SAS certifications. Experience with cloud platforms (AWS, Azure, GCP). Knowledge of Python or R programming. Project management experience. Experience with Agile methodologies. Previous migration project experience.

What We Offer: EXL Analytics offers an exciting, fast-paced, and innovative environment, which brings together a group of sharp and entrepreneurial professionals who are eager to influence business decisions. From your very first day, you get an opportunity to work closely with highly experienced, world-class analytics consultants. You can expect to learn many aspects of the businesses that our clients engage in.
You will also learn effective teamwork and time-management skills - key aspects for personal and professional growth. Analytics requires different skill sets at different levels within the organization. At EXL Analytics, we invest heavily in training you in all aspects of analytics as well as in leading analytical tools and techniques. We provide guidance and coaching to every employee through our mentoring program, wherein every junior-level employee is assigned a senior-level professional as an advisor. The sky is the limit for our team members. The unique experiences gathered at EXL Analytics set the stage for further growth and development in our company and beyond. "EOE/Minorities/Females/Vets/Disabilities"
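To make the CAS-processing piece of this migration a little more concrete, here is a minimal, hypothetical sketch using SAS's swat package for Python, which talks to Cloud Analytic Services in Viya. The host, port, credentials, file and column names are invented, and an actual migration would convert existing SAS EG code rather than start from Python; this only illustrates the shift from single-machine procedures to distributed, in-memory CAS actions.

```python
import swat  # SAS's Python client for Cloud Analytic Services (pip install swat)

# All connection details and data below are hypothetical placeholders.
conn = swat.CAS("viya.example.com", 5570, "cas_user", "********")

# Load a local CSV into an in-memory, distributed CAS table.
claims = conn.read_csv("claims.csv", casout={"name": "claims", "replace": True})

# Run a distributed summary action where SAS EG might have used PROC MEANS.
conn.loadactionset("simple")
result = conn.simple.summary(table={"name": "claims"}, inputs=["paid_amount"])
print(result)

conn.close()
```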

Posted 6 days ago

Apply

5.0 - 6.0 years

0 Lacs

Pune, Maharashtra, India

Remote

We are hiring a Data Scientist (Remote)! Job Summary: We are seeking an experienced Data Scientist with 5 to 6 years of hands-on experience to join our analytics and AI/ML team. The ideal candidate has a strong background in statistics, machine learning, data engineering, and business analytics, and is capable of turning complex data into actionable insights and predictive solutions.
Key Responsibilities: • Work with stakeholders to identify business problems and translate them into data science solutions. • Build, evaluate, and deploy machine learning and statistical models. • Perform data wrangling, feature engineering, and exploratory data analysis (EDA) on large datasets. • Design and implement data pipelines and ETL processes. • Collaborate with engineering teams to deploy models into production. • Interpret model results, validate assumptions, and communicate findings to non-technical stakeholders. • Continuously monitor model performance and retrain as necessary. • Stay up to date with the latest trends, tools, and technologies in data science and AI.
Required Skills & Qualifications: • 5–6 years of experience in a data science or machine learning role. • Strong proficiency in Python (Pandas, NumPy, scikit-learn, TensorFlow/PyTorch). • Solid knowledge of machine learning algorithms, deep learning, NLP, or time series forecasting. • Experience with SQL and relational databases. • Experience with data visualization tools like QuickSight, Power BI, or Matplotlib/Seaborn. • Proficiency with cloud platforms like AWS. • Familiarity with version control systems (Git), Docker, and CI/CD workflows. • Strong communication skills and the ability to present technical findings clearly to business stakeholders.
Preferred Qualifications: • Master's in Computer Science, Statistics, Data Science, Mathematics, or a related field. • Experience working with big data tools such as Spark or Kafka. • Prior experience in industries such as healthcare or retail. • Background in MLOps or data science productization is a plus.
Send your resume to HR@muverity.com
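As a small illustration of the "build, evaluate, and deploy machine learning models" responsibility above, here is a minimal sketch that trains a scikit-learn pipeline on a hypothetical churn dataset. The file name, column names and model choice are assumptions for the example, not part of the posting.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Hypothetical churn dataset with numeric, categorical and binary target columns.
df = pd.read_csv("churn.csv")
numeric = ["tenure_months", "monthly_spend"]
categorical = ["plan_type", "region"]
X, y = df[numeric + categorical], df["churned"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Preprocessing and model wrapped in one pipeline so the same steps run at inference time.
preprocess = ColumnTransformer([
    ("num", StandardScaler(), numeric),
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
])
model = Pipeline([("prep", preprocess), ("clf", GradientBoostingClassifier())])

model.fit(X_train, y_train)
print("Holdout AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```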

Posted 6 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Summary We are looking for a skilled AWS Data Engineer with strong experience in building and managing cloud-based ETL pipelines using AWS Glue, Python/PySpark, and Athena, along with data warehousing expertise in Amazon Redshift. The ideal candidate will be responsible for designing, developing, and maintaining scalable data solutions in a cloud-native environment. Design and implement ETL workflows using AWS Glue, Python, and PySpark. Develop and optimize queries using Amazon Athena and Redshift. Build scalable data pipelines to ingest, transform, and load data from various sources. Ensure data quality, integrity, and security across AWS services. Collaborate with data analysts, data scientists, and business stakeholders to deliver data solutions. Monitor and troubleshoot ETL jobs and cloud infrastructure performance. Automate data workflows and integrate with CI/CD pipelines. Required Skills & Qualifications Hands-on experience with AWS Glue, Athena, and Redshift. Strong programming skills in Python and PySpark. Experience with ETL design, implementation, and optimization. Familiarity with S3, Lambda, CloudWatch, and other AWS services. Understanding of data warehousing concepts and performance tuning in Redshift. Experience with schema design, partitioning, and query optimization in Athena. Proficiency in version control (Git) and agile development practices.
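As a rough sketch of the Glue-plus-PySpark workflow this posting describes, the hypothetical job below reads a catalogued table, applies a simple transformation, and writes partitioned Parquet back to S3 for querying in Athena or Redshift Spectrum. The database, table, column and bucket names are invented, and a production job would add job bookmarks, error handling and data-quality checks.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

# Standard Glue job boilerplate: resolve arguments and build the contexts.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a table registered in the Glue Data Catalog (names are hypothetical).
orders = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"
)

# Transform with plain PySpark: keep completed orders and derive a partition column.
df = (
    orders.toDF()
    .filter(F.col("order_status") == "COMPLETED")
    .withColumn("order_date", F.to_date("order_ts"))
)

# Write partitioned Parquet to a curated S3 prefix.
df.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated-bucket/orders/"
)

job.commit()
```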

Posted 6 days ago

Apply

4.0 years

0 Lacs

Mohali district, India

On-site

Job Title: Team Member Reporting & Insights (ESG Analytics, Finance & Controls) Ready to use your data skills to make a real-world impact? At Bunge, our mission is to connect farmers to consumers, delivering essential food, feed, and fuel to the world. We are looking for a passionate data storyteller to join our team and help us build a more sustainable and secure future. This isn't just another analytics job; it's a chance to be at the heart of our global operations, transforming complex data into clear, actionable insights that shape critical business decisions. If you thrive on collaboration and are excited by the challenge of turning raw numbers into compelling narratives, we would love to meet you.
Your Impact and Responsibilities: As a key member of our analytics team, you will have the unique opportunity to work across both our sustainability and financial control functions. You will: Partner with experts across the business to understand their challenges and data needs, acting as a bridge between business goals and technical solutions. Bring data to life by designing and building beautiful, interactive Tableau dashboards that empower teams with actionable insights. Solve complex puzzles by designing automated data pipelines (ETL) and creating robust data models to ensure our reporting is both timely and trustworthy (a short ETL sketch follows this posting). Champion a data-driven culture by ensuring the highest standards of quality and accuracy, helping everyone make smarter decisions. Drive projects forward within our dynamic and collaborative Agile (SCRUM/Kanban) environment.
What You'll Bring to the Team: We believe that great talent comes from a variety of backgrounds. The ideal candidate will have a strong foundation in data analytics and a passion for learning. A solid track record (4+ years) in data analytics, business intelligence, or a similar data-centric role. Deep expertise in Tableau, with the ability to create sophisticated and intuitive visualizations. Strong skills in SQL for querying and transforming data. A genuine interest and foundational knowledge in either Sustainability/ESG reporting (like GRI, SASB, TCFD) OR financial controls and accounting processes (KPIs in OTC, PTP, RTR, Trade Execution, and similar domains). We strongly encourage you to apply if you have deep expertise in one of these areas and a desire to grow in the other! Excellent communication skills and a collaborative spirit; you enjoy working with diverse teams and explaining complex ideas simply. A Bachelor's or Master's degree in a relevant field like Data Science, Business Analytics, Finance, or Sustainability.
Even Better If You Have: Experience with other analytics tools like Power BI, Python, R, or Alteryx. Familiarity with Oracle DB, SAP, or SSAS. Professional certifications in Tableau, Data Science, or ESG/Sustainability frameworks.
Why You'll Love Working at Bunge: Make a Global Impact: Your work will directly contribute to our mission of feeding the world and help us advance our sustainability and corporate responsibility goals. Grow With Us: We are committed to your professional development and offer opportunities to expand your skills, take on new challenges, and build a rewarding career. A Welcoming and Collaborative Culture: You'll be part of a supportive, global team that values diverse perspectives and works together to solve meaningful problems. Influence and Visibility: This is a role with a seat at the table. Your insights and analyses will be seen and used by leaders to guide strategy and decision-making.
Are you ready to join us? If you are a curious, data-driven professional who wants to be part of something bigger, we encourage you to apply. Bunge is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or veteran status.
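For readers unfamiliar with what an automated data pipeline feeding a Tableau dashboard can look like in practice, here is a minimal, hypothetical Python sketch: extract KPI facts with SQL, aggregate them with pandas, and publish a reporting table for Tableau to connect to. All connection details, schema and column names are assumptions for illustration only, not Bunge's actual environment.

```python
import pandas as pd
from sqlalchemy import create_engine

# Connection string, schema and column names are illustrative placeholders.
engine = create_engine("postgresql+psycopg2://report_user:***@dwh.example.com/finance")

# Extract: pull raw control/ESG KPI facts for the reporting period.
raw = pd.read_sql(
    "SELECT report_month, region, process_area, kpi_name, kpi_value "
    "FROM controls.kpi_facts WHERE report_month >= '2024-01-01'",
    engine,
)

# Transform: aggregate to the grain the Tableau dashboard expects.
summary = (
    raw.groupby(["report_month", "region", "process_area"], as_index=False)
    .agg(avg_kpi=("kpi_value", "mean"), measures=("kpi_name", "count"))
)

# Load: publish a clean extract table for Tableau to connect to.
summary.to_sql(
    "kpi_dashboard_extract", engine, schema="reporting",
    if_exists="replace", index=False,
)
```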

Posted 6 days ago

Apply

10.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Description: Join us and drive the design and deployment of AI/ML frameworks revolutionizing telecom services. As a key member of our team, you will architect and build scalable, secure AI systems for service assurance, orchestration, and fulfillment, working directly with network experts to drive business impact. You will be responsible for defining architecture blueprints, selecting the right tools and platforms, and guiding cross-functional teams to deliver scalable AI systems. This role offers significant growth potential, mentorship opportunities, and the chance to shape the future of telecoms using the latest AI technologies and platforms.
Key Responsibilities (how you will contribute and what you will learn): Design end-to-end AI architecture tailored to telecom services business functions (e.g., service assurance, orchestration, and fulfillment). Define data strategy and AI workflows including inventory model, ETL, model training, deployment, and monitoring. Evaluate and select AI platforms, tools, and frameworks suited for telecom-scale workloads for development and testing of inventory services solutions. Work closely with telecom network experts and architects to align AI initiatives with business goals. Ensure scalability, performance, and security in AI systems across hybrid/multi-cloud environments. Mentor AI developers.
Key Skills And Experience. You have: 10+ years' experience in AI/ML design and deployment with a graduate degree or equivalent. Practical experience with AI/ML techniques and scalable architecture design for telecom operations, inventory management, and ETL. Exposure to data platforms (Kafka, Spark, Hadoop), model orchestration (Kubeflow, MLflow; a minimal experiment-tracking sketch follows this posting), and cloud-native deployment (AWS SageMaker, Azure ML). Proficiency in programming (Python, Java) and DevOps/MLOps best practices. It will be nice if you had: Experience with any of the LLM models (Llama family) and LLM agent frameworks like LangChain, CrewAI or AutoGen. Familiarity with telecom protocols, OSS/BSS platforms, 5G architecture, and NFV/SDN concepts. Excellent communication and stakeholder management skills.
About Us: Come create the technology that helps the world act together. Nokia is committed to innovation and technology leadership across mobile, fixed and cloud networks. Your career here will have a positive impact on people's lives and will help us build the capabilities needed for a more productive, sustainable, and inclusive world. We challenge ourselves to create an inclusive way of working where we are open to new ideas, empowered to take risks and fearless to bring our authentic selves to work.
What we offer: Nokia offers continuous learning opportunities, well-being programs to support you mentally and physically, opportunities to join and get supported by employee resource groups, mentoring programs and highly diverse teams with an inclusive culture where people thrive and are empowered. Nokia is committed to inclusion and is an equal opportunity employer. Nokia has received the following recognitions for its commitment to inclusion & equality: One of the World's Most Ethical Companies by Ethisphere; Gender-Equality Index by Bloomberg; Workplace Pride Global Benchmark. At Nokia, we act inclusively and respect the uniqueness of people. Nokia's employment decisions are made regardless of race, color, national or ethnic origin, religion, gender, sexual orientation, gender identity or expression, age, marital status, disability, protected veteran status or other characteristics protected by law.
We are committed to a culture of inclusion built upon our core value of respect. Join us and be part of a company where you will feel included and empowered to succeed. About The Team As Nokia's growth engine, we create value for communication service providers and enterprise customers by leading the transition to cloud-native software and as-a-service delivery models. Our inclusive team of dreamers, doers and disruptors push the limits from impossible to possible.
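As a minimal illustration of the experiment-tracking side of the MLOps stack mentioned above (Kubeflow/MLflow), the hypothetical sketch below logs parameters, metrics and a model artifact to MLflow for a toy classifier. The experiment name and synthetic data are invented stand-ins, not part of Nokia's actual stack.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a service-assurance dataset (e.g., alarm features -> incident flag).
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

mlflow.set_experiment("service-assurance-demo")  # experiment name is hypothetical
with mlflow.start_run():
    model = LogisticRegression(max_iter=500, C=0.5)
    model.fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))

    # Parameters, metrics and the model artifact are recorded for later comparison/deployment.
    mlflow.log_param("C", 0.5)
    mlflow.log_metric("accuracy", accuracy)
    mlflow.sklearn.log_model(model, "model")
```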

Posted 6 days ago

Apply

10.0 - 12.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

TCS presents an excellent opportunity for a Data Architect. Job Description: Skills: AWS, Glue, Redshift, PySpark. Location: Pune/Kolkata. Experience: 10 to 12 years. Strong hands-on experience in Python programming and PySpark. Experience using AWS services (Redshift, Glue, EMR, S3 & Lambda). Experience working with Apache Spark and the Hadoop ecosystem. Experience in writing and optimizing SQL for data manipulation. Good exposure to scheduling tools; Airflow is preferable. Must-have: data warehouse experience with AWS Redshift or Hive. Experience in implementing security measures for data protection. Expertise in building and testing complex data pipelines for ETL processes (batch and near real-time). Readable documentation of all the components being developed. Knowledge of database technologies for OLTP and OLAP workloads.
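Since the posting calls out Airflow as the preferred scheduler, here is a minimal, hypothetical DAG showing how an extract-transform-load sequence is typically wired together. The DAG id, task names and callables are placeholders for real Glue/PySpark steps, not the client's actual pipeline.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Placeholder callables; a real pipeline would pull from sources, run PySpark/Glue jobs, etc.
def extract():
    print("extract raw data")

def transform():
    print("transform and validate")

def load():
    print("load into Redshift")

with DAG(
    dag_id="daily_orders_etl",          # DAG and task names are hypothetical
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```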

Posted 6 days ago

Apply

3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

We are seeking talented individuals who can provide an exceptional experience for our company and clients. If you think you have what it takes to join our team, this might be your chance! 🔹 Job Role: SQL Developer 🔹 Experience Required: 3+ Years. Required Skills: -> 3+ years of experience as an SQL Developer or in a similar role. -> Proven experience with Microsoft SQL Server 2017, 2019, and 2022. -> Strong knowledge of T-SQL, SSIS (SQL Server Integration Services), and SSRS (SQL Server Reporting Services). -> Experience with database performance tuning and optimization. -> Familiarity with ETL tools and processes. -> Experience in the FinTech industry or working with financial data is a plus. 📍 Location: Noida, Sector 63 📍 Mode: Work from Office only. Interested candidates can share their CV at 📩 tanya@yoekisoft.com. Let's Connect!

Posted 6 days ago

Apply

2.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Key Responsibilities: Design, build, and maintain scalable data pipelines on Snowflake. Experience or knowledge of Snowpipe, Time Travel, and Fail-safe. Write and optimize SQL queries for data extraction and transformation. Develop ETL processes to integrate various data sources into Snowflake. Monitor and troubleshoot data warehouse performance issues. Implement security measures and data governance practices. Sound knowledge of Snowflake architecture. Knowledge of Fivetran is an added advantage. Collaborate with cross-functional teams to support analytical and reporting needs. Experience: 2 to 8 years. Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related field. Proven experience with Snowflake and data warehousing concepts. Proficiency in SQL and ETL tools (e.g., Talend, Informatica).
Company Details: One of the top-ranked IT companies in Ahmedabad, Gujarat. We are an ISO 9001:2015 & ISO 27001:2013 certified leading global technology solution provider. Globally present, with a core focus on the USA, Middle East, and Canada for services. Constantly enhancing our span of services around custom software development, Enterprise Mobility Solutions, and the Internet of Things. A family of multicultural, multi-talented, passionate and well-experienced resources who consistently work to set new standards for customer satisfaction by implementing industry best practices.
Why Stridely? · You will have opportunities to work on international enterprise-level projects of big-ticket size · Interaction and coordination with US customers · Employee-first approach along with customer-first approach · Continuous learning, training, and knowledge-enhancement opportunities · Self-development, career, and growth opportunities within the organization · Democratic and metro culture · Strong and solid leadership · Based on your potential, you will get overseas visits, transfers, and exposure
URL: www.stridelysolutions.com Employee strength: 500+ Working Days: 5 days a week Location: Ahmedabad/Pune/Vadodara
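To ground the Snowpipe and Time Travel requirements above, the hypothetical sketch below uses the snowflake-connector-python package to run a COPY INTO load (the statement Snowpipe automates for continuous ingestion) and a Time Travel query. Account details, stage and table names are placeholders, not the employer's environment.

```python
import snowflake.connector

# Credentials and object names are placeholders; real values should come from a secrets manager.
conn = snowflake.connector.connect(
    account="your_account", user="etl_user", password="********",
    warehouse="ETL_WH", database="ANALYTICS", schema="STAGING",
)
cur = conn.cursor()
try:
    # Bulk-load staged files into a table (COPY INTO is what Snowpipe automates continuously).
    cur.execute("""
        COPY INTO staging.orders
        FROM @raw_stage/orders/
        FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)
    """)
    # Time Travel: query the table as it looked one hour ago.
    cur.execute("SELECT COUNT(*) FROM staging.orders AT(OFFSET => -3600)")
    print(cur.fetchone())
finally:
    cur.close()
    conn.close()
```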

Posted 6 days ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Skills: Python, PySpark, ETL, Data Pipelines, Big Data, AWS, GCP, Azure, Data Warehousing, Spark, Hadoop.
A day in the life of an Infoscion: As part of the Infosys consulting team, your primary role would be to get to the heart of customer issues, diagnose problem areas, design innovative solutions and facilitate deployment resulting in client delight. You will develop a proposal by owning parts of the proposal document and by giving inputs on solution design based on your areas of expertise. You will plan the configuration activities, configure the product as per the design, conduct conference room pilots and assist in resolving any queries related to requirements and solution design. You will conduct solution/product demonstrations and POC/Proof of Technology workshops, and prepare effort estimates that suit the customer's budgetary requirements and are in line with the organization's financial guidelines. You will actively lead small projects and contribute to unit-level and organizational initiatives with the objective of providing high-quality, value-adding solutions to customers. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!
Ability to develop value-creating strategies and models that enable clients to innovate, drive growth and increase their business profitability. Good knowledge of software configuration management systems. Awareness of the latest technologies and industry trends. Logical thinking and problem-solving skills along with an ability to collaborate. Understanding of the financial processes for various types of projects and the various pricing models available. Ability to assess current processes, identify improvement areas and suggest technology solutions. Knowledge of one or two industry domains. Client-interfacing skills. Project and team management.

Posted 6 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies