
389 Aggregations Jobs

JobPe aggregates these listings for easy access, but you apply directly on the original job portal.

25.0 years

3 - 8 Lacs

hyderabad

On-site

Company Overview
Milestone Technologies is a global IT managed services firm that partners with organizations to scale their technology, infrastructure and services to drive specific business outcomes such as digital transformation, innovation, and operational agility. Milestone is focused on building an employee-first, performance-based culture and for over 25 years, we have a demonstrated history of supporting category-defining enterprise clients that are growing ahead of the market. The company specializes in providing solutions across Application Services and Consulting, Digital Product Engineering, Digital Workplace Services, Private Cloud Services, AI/Automation, and ServiceNow. Milestone’s culture is built to provide a collaborative, inclusive environment that supports employees and empowers them to reach their full potential. Our seasoned professionals deliver services based on Milestone’s best practices and service delivery framework. By leveraging our vast knowledge base to execute initiatives, we deliver both short-term and long-term value to our clients and apply continuous service improvement to deliver transformational benefits to IT. With Intelligent Automation, Milestone helps businesses further accelerate their IT transformation. The result is a sharper focus on business objectives and a dramatic improvement in employee productivity. Through our key technology partnerships and our people-first approach, Milestone continues to deliver industry-leading innovation to our clients. With more than 3,000 employees serving over 200 companies worldwide, we are following our mission of revolutionizing the way IT is deployed.

Job Overview
We are seeking a motivated Data Engineer with foundational skills in SQL, Python, and modern data technologies. The ideal candidate should have a strong interest in data engineering, problem-solving, and building scalable data pipelines. This is a great opportunity for someone early in their career who wants to grow into a strong data engineering professional.

Key Responsibilities
Design, develop, and maintain ETL/ELT pipelines to ingest, transform, and load data. Write efficient SQL queries for data extraction, transformation, and reporting. Develop scripts and automation using Python for data processing and workflow orchestration. Collaborate with data analysts, BI developers, and business teams to deliver reliable datasets. Ensure data quality, integrity, and consistency across different systems. Work with cloud and on-premises data platforms as required. Document processes, data flows, and technical solutions for ongoing knowledge sharing.

Required Skills & Qualifications
Basic proficiency in SQL (writing queries, joins, aggregations, data manipulation). Knowledge of Python for data processing and scripting. Understanding of data warehousing concepts (tables, schemas, ETL). Familiarity with version control tools (e.g., Git). Strong problem-solving and analytical mindset. Eagerness to learn and work with modern data engineering tools and cloud technologies.

Nice-to-Have Skills (Good to Learn/Have)
Experience with cloud platforms (Azure Data Factory, AWS Glue, GCP Dataflow). Exposure to big data frameworks (Spark, Databricks, Hadoop). Basic knowledge of data visualization tools (Power BI, Tableau, etc.). Awareness of data governance and data quality frameworks.
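To make the day-to-day concrete: the responsibilities above amount to extract-transform-load steps driven by SQL and Python. Below is a minimal, hypothetical sketch of such a step using pandas and SQLAlchemy; the file name, table name, columns, and connection string are illustrative assumptions, not part of the posting.

```python
# Hypothetical ETL sketch: extract a raw CSV export, apply a simple
# cleansing/aggregation transform, and load the result into a staging table.
import pandas as pd
from sqlalchemy import create_engine

def run_daily_load(csv_path: str, conn_str: str) -> int:
    # Extract: read the raw export (assumed columns: customer_id, order_date, amount).
    raw = pd.read_csv(csv_path, parse_dates=["order_date"])

    # Transform: drop incomplete rows, then aggregate per customer and day.
    clean = raw.dropna(subset=["customer_id", "amount"])
    daily = (
        clean.groupby(["customer_id", clean["order_date"].dt.date])
        .agg(total_amount=("amount", "sum"), order_count=("amount", "size"))
        .reset_index()
        .rename(columns={"order_date": "order_day"})
    )

    # Load: append into a staging table for downstream reporting.
    engine = create_engine(conn_str)
    daily.to_sql("stg_daily_orders", engine, if_exists="append", index=False)
    return len(daily)

if __name__ == "__main__":
    rows = run_daily_load("orders.csv", "postgresql+psycopg2://user:pass@host/db")
    print(f"Loaded {rows} aggregated rows")
```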
Compensation Estimated Pay Range: Exact compensation and offers of employment are dependent on circumstances of each case and will be determined based on job-related knowledge, skills, experience, licenses or certifications, and location. Our Commitment to Diversity & Inclusion At Milestone we strive to create a workplace that reflects the communities we serve and work with, where we all feel empowered to bring our full, authentic selves to work. We know creating a diverse and inclusive culture that champions equity and belonging is not only the right thing to do for our employees but is also critical to our continued success. Milestone Technologies provides equal employment opportunity for all applicants and employees. All qualified applicants will receive consideration for employment and will not be discriminated against on the basis of race, color, religion, gender, gender identity, marital status, age, disability, veteran status, sexual orientation, national origin, or any other category protected by applicable federal and state law, or local ordinance. Milestone also makes reasonable accommodations for disabled applicants and employees. We welcome the unique background, culture, experiences, knowledge, innovation, self-expression and perspectives you can bring to our global community. Our recruitment team is looking forward to meeting you.

Posted 11 hours ago

Apply

5.0 years

0 Lacs

mumbai, maharashtra, india

On-site

Sia is a next-generation, global management consulting group. Founded in 1999, we were born digital. Today our strategy and management capabilities are augmented by data science, enhanced by creativity and driven by responsibility. We’re optimists for change and we help clients initiate, navigate and benefit from transformation. We believe optimism is a force multiplier, helping clients to mitigate downside and maximize opportunity. With expertise across a broad range of sectors and services, our 3,000 consultants serve clients worldwide from 48 locations in 19 countries. Our expertise delivers results. Our optimism transforms outcomes.

Job Description
We are hiring a Power BI Developer to support our global internal teams (Finance, HR, Strategy, Corporate Functions) by maintaining, improving, and delivering high-impact business intelligence dashboards, datasets, and analytics solutions. This role is critical for strengthening our internal reporting systems and driving data-backed decision-making across the organization.

Key Responsibilities
Lead the design, implementation, and upkeep of ETL / data pipelines (from source files through staging layers into the DWH), ensuring data correctness and efficiency. Architect and evolve semantic models / reusable datasets: define dimensions, hierarchies, and KPI definitions that align with business needs. Build, review, and optimize complex SQL-based transformations and views that feed into dashboards and reports. Develop, optimize, and maintain all aspects of Power BI dashboards: DAX measures, visuals and UX/UI design, Power Query (M), security through RLS, OLS, and access rights, report publication and refresh. Lead and manage complete BI projects with Corporate teams from business requirement to delivery, including: requirements gathering, scoping, estimating work and timeline, designing specs, developing, validating with business stakeholders, deploying, and monitoring in production. Identify performance bottlenecks (in SQL queries or the data model) and optimize query speed, resource usage, and refresh times to improve user experience. Maintain and initiate documentation of data schema, data lineage, dashboards, and datasets; QA and perform tests / validation to ensure data integrity. Mentor junior analysts; raise and enforce standards and best practices in SQL, data modelling, and BI development.

Qualifications
Bachelor's or Master’s in Computer Science, Data Analytics, Information Systems, Statistics, or a related field. 5-8+ years in analytics / BI roles with strong exposure to the Microsoft Power BI Suite. Deep hands-on experience in SQL: complex joins, aggregations, window functions, subqueries, optimizations. Solid experience with Power BI: semantic modelling, DAX, Power Query (M), report/dashboard design, RLS, publishing. Understanding of data warehousing / ETL architecture (ODS / staging / fact & dimension layers). Demonstrated ability to work on end-to-end projects: gathering requirements, specifying, implementing, testing, and monitoring dashboards. Strong communication skills; ability to engage with both technical and non-technical international audiences. Proven problem solving and performance tuning skills: recognizing bottlenecks, refactoring models or queries as needed. Certifications: Mandatory: PL-300 (Microsoft Certified: Power BI Data Analyst Associate). Preferred: PL-200 (Power Platform Functional Consultant) or PL-600 (Power Platform Solution Architect).

Nice to Have
Experience in cloud analytics environments (Azure preferred).
Familiarity with tools enhancing BI model maintenance (Tabular Editor, DAX Studio, ALM / version control for BI assets).

Additional Information
Sia is an equal opportunity employer. All aspects of employment, including hiring, promotion, remuneration, or discipline, are based solely on performance, competence, conduct, or business needs.
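As an illustration of the SQL transformation work this posting centres on (joins, aggregations, and window functions feeding a Power BI semantic model), here is a small, self-contained sketch; the tables and columns are invented, and SQLite merely stands in for the warehouse.

```python
# Hypothetical sketch: a join, an aggregation, and a window function producing
# a KPI dataset that a Power BI semantic model could refresh from.
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE fact_sales (sale_id INTEGER, region_id INTEGER, amount REAL, sale_month TEXT);
CREATE TABLE dim_region (region_id INTEGER, region_name TEXT);
INSERT INTO fact_sales VALUES (1, 1, 100, '2024-01'), (2, 1, 250, '2024-02'), (3, 2, 80, '2024-01');
INSERT INTO dim_region VALUES (1, 'North'), (2, 'South');
""")

query = """
SELECT
    r.region_name,
    f.sale_month,
    SUM(f.amount) AS monthly_sales,
    SUM(SUM(f.amount)) OVER (
        PARTITION BY r.region_name ORDER BY f.sale_month
    ) AS running_total
FROM fact_sales f
JOIN dim_region r ON r.region_id = f.region_id
GROUP BY r.region_name, f.sale_month
ORDER BY r.region_name, f.sale_month;
"""

kpi_dataset = pd.read_sql(query, conn)  # the dataset a dashboard would consume
print(kpi_dataset)
```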

Posted 18 hours ago

Apply

5.0 years

0 Lacs

jaipur, rajasthan, india

On-site

Asymbl is an innovative, high growth technology company empowering businesses to assemble the future of work. Through advanced recruiting applications, certified Salesforce consulting, and digital labor advisory, Asymbl streamlines workflows, enhances collaboration between people and intelligent agents, and delivers measurable return on investment. We pride ourselves on a culture of relentless curiosity and belief, grounded in trust and integrity, driven by a bias to action and willingness to fail fast while remaining unwaveringly customer-focused and dedicated to fostering the potential of our people. Role Overview As a Senior Neo4j Developer at Asymbl, you will be the architect behind our cutting-edge Recruitment Intelligence Platform—a sophisticated graph database system that processes 25,000+ resumes with AI-powered matching capabilities. You'll design and optimize complex graph schemas with 45+ unique constraints, implement vector-based similarity search, and integrate with Salesforce to revolutionize how organizations discover and match talent. In this role, you will work at the intersection of graph databases, artificial intelligence, and recruitment technology, building systems that transform hiring through intelligent data relationships and machine learning-powered insights. Why Join Us? As a Senior Neo4j Developer at Asymbl, you will have the unique opportunity to: Pioneer Graph-Based AI : Build one of the most sophisticated recruitment intelligence systems using Neo4j 5.x with vector indexes and AI-powered similarity matching Work at Enterprise Scale : Handle bulk processing of 25,000+ resumes with complex constraint validation and relationship mapping Shape the Future of Recruitment : Create technology that fundamentally changes how organizations find, evaluate, and match talent Lead Innovation : Work with cutting-edge technologies including 384-dimension vector embeddings, serverless architecture, and advanced graph algorithms Drive Impact : Your work directly enables better hiring decisions for companies and career opportunities for candidates worldwide If you're passionate about graph databases, excited by AI integration, and driven to solve complex problems at scale, we'd love to have you join our team! 
Responsibilities: Graph Architecture & Design Design and implement sophisticated graph schemas for recruitment data with 45+ unique constraints across contacts, jobs, skills, organizations, and assessments Architect domain-driven node relationships covering Core Entities (Contact, Job), Professional Entities (Role, Organization, Skill), and Process Entities (Interview, Assessment, Application) Implement constraint-heavy production deployments with safe migration patterns using IF NOT EXISTS strategies Performance Optimization & Indexing Build and maintain high-performance indexes, including vector indexes for 384-dimension AI embeddings, full-text search indexes, and composite indexes for complex query patterns Optimize Cypher queries for candidate-job matching, skill similarity search, and recruitment analytics at enterprise scale Implement vector similarity search capabilities for AI-powered recommendation systems AI & Machine Learning Integration Integrate Neo4j with AI processing pipelines including confidence scoring systems and embedding generation Implement vector index management for similarity-based candidate matching and job recommendations Work with ML confidence scores and AI processing metadata to enhance graph-based intelligence Salesforce Integration & Data Management Design and implement Salesforce integration patterns with external ID mapping and source record management Handle multi-system ID management combining internal Neo4j IDs with 18-character Salesforce external IDs Process custom Salesforce objects (ATS systems) and maintain data synchronization Bulk Processing & ETL Develop high-volume ETL processes capable of handling 25,000+ resume imports with constraint validation Implement transaction management strategies with optimal batch sizes (100 rows per transaction) Build error handling and recovery systems for constraint violations and data quality issues Production Support & Monitoring Monitor constraint health across 45+ unique constraints and resolve production issues Implement performance monitoring for query optimization and index effectiveness Maintain production systems with proactive health checks and alerting Collaboration & Documentation Work closely with Python developers, AI engineers, and product teams to deliver integrated solutions Document graph schema designs, query patterns, and performance optimization strategies Provide technical leadership and mentoring on graph database best practices Qualifications Required Experience 5+ years of hands-on Neo4j development experience with production graph database systems 3+ years working with constraint-heavy schemas and performance optimization Strong expertise in Cypher query language including complex traversals, aggregations, and optimization Experience with Neo4j 5.x features including vector indexes, full-text search, and composite indexes Proven experience with bulk data processing and ETL operations in graph databases Technical Skills Deep understanding of graph database design principles and relationship modeling Proficiency with Neo4j Administration including constraint management, index optimization, and performance monitoring Experience with vector databases and similarity search implementations Knowledge of transaction management and batch processing strategies in Neo4j Familiarity with Neo4j drivers (Python preferred) and API integration patterns Integration Experience Experience with Salesforce integration and external ID management patterns Knowledge of AWS services (Lambda, SQS, ElastiCache) 
for graph database integration Understanding of API design for graph-based applications Experience with serverless architectures and cloud-native graph solutions

Preferred Qualifications Bachelor's degree in Computer Science, Software Engineering, or related field Experience with recruitment or HR technology domains Knowledge of AI/ML integration with graph databases Neo4j certification (Graph Data Science, Administration, or Developer) Experience with monitoring tools for production graph databases Understanding of data privacy and compliance requirements for HR systems

Technical Environment You'll be working with our cutting-edge technology stack: Core Technologies Neo4j 5.x with vector indexes and advanced constraint management Cypher for complex graph traversals and optimization Vector embeddings (384-dimension) for AI-powered similarity matching Python for Neo4j driver integration and API development Integration Stack AWS Lambda for serverless graph operations SQS FIFO for ordered processing of bulk operations ElastiCache Redis for caching and validation layers Salesforce APIs for external system integration AI & Data Processing OpenAI embeddings for vector generation Docling for document processing and parsing Confidence scoring systems for AI processing quality Bulk ETL processing capabilities for 25,000+ records Development & Operations Production monitoring with constraint health checking Performance optimization tools and query analysis CI/CD pipelines for safe schema deployments Documentation and knowledge sharing platforms

What Makes This Role Unique: Cutting-Edge Technology Work with the latest Neo4j 5.x features, including vector indexes for AI-powered matching Build one of the most sophisticated constraint-heavy graph databases in production Implement enterprise-scale graph solutions processing tens of thousands of records AI Integration Pioneer Be at the forefront of combining graph databases with artificial intelligence Implement vector similarity search for recruitment matching with 384-dimensional embeddings Work with confidence scoring and AI processing metadata integration Real-World Impact Your graph designs directly impact how organizations find and hire talent Build systems that process thousands of resumes and job matches daily Create technology that transforms recruitment efficiency and effectiveness Scale & Complexity Handle enterprise-level data processing with sophisticated constraint validation Work with multi-system integration combining Neo4j, Salesforce, and AI services Solve complex performance challenges with advanced indexing and query optimization Innovation Opportunity Shape the architecture of next-generation recruitment intelligence Experiment with emerging graph database technologies and AI integration patterns Contribute to open source and industry best practices for graph-based HR systems Join Asymbl and help us build the future of intelligent recruitment through the power of graph databases and artificial intelligence!
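To ground the stack described above, here is a hedged sketch of the Neo4j 5.x patterns the posting mentions: an IF NOT EXISTS constraint, a 384-dimension vector index, and a similarity query through the official Python driver. The labels, property names, URI, and credentials are illustrative assumptions, not Asymbl's actual schema.

```python
# Hypothetical sketch of Neo4j 5.x constraint, vector index, and similarity
# query patterns using the official Python driver. Names are invented.
from neo4j import GraphDatabase

URI, AUTH = "neo4j://localhost:7687", ("neo4j", "password")

SCHEMA_STATEMENTS = [
    # Safe, re-runnable migration pattern for constraint-heavy deployments.
    "CREATE CONSTRAINT contact_external_id IF NOT EXISTS "
    "FOR (c:Contact) REQUIRE c.externalId IS UNIQUE",
    # Vector index sized for 384-dimension embeddings (Neo4j 5.x syntax).
    "CREATE VECTOR INDEX contact_embedding IF NOT EXISTS "
    "FOR (c:Contact) ON (c.embedding) "
    "OPTIONS {indexConfig: {`vector.dimensions`: 384, "
    "`vector.similarity_function`: 'cosine'}}",
]

MATCH_QUERY = """
CALL db.index.vector.queryNodes('contact_embedding', $k, $job_embedding)
YIELD node, score
RETURN node.externalId AS candidate_id, score
ORDER BY score DESC
"""

def top_candidates(job_embedding: list[float], k: int = 10):
    with GraphDatabase.driver(URI, auth=AUTH) as driver:
        for stmt in SCHEMA_STATEMENTS:
            driver.execute_query(stmt)
        records, _, _ = driver.execute_query(
            MATCH_QUERY, k=k, job_embedding=job_embedding
        )
        return [(r["candidate_id"], r["score"]) for r in records]
```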

Posted 1 day ago

Apply

25.0 years

0 Lacs

hyderabad, telangana, india

On-site

Company Overview
Milestone Technologies is a global IT managed services firm that partners with organizations to scale their technology, infrastructure and services to drive specific business outcomes such as digital transformation, innovation, and operational agility. Milestone is focused on building an employee-first, performance-based culture and for over 25 years, we have a demonstrated history of supporting category-defining enterprise clients that are growing ahead of the market. The company specializes in providing solutions across Application Services and Consulting, Digital Product Engineering, Digital Workplace Services, Private Cloud Services, AI/Automation, and ServiceNow. Milestone’s culture is built to provide a collaborative, inclusive environment that supports employees and empowers them to reach their full potential. Our seasoned professionals deliver services based on Milestone’s best practices and service delivery framework. By leveraging our vast knowledge base to execute initiatives, we deliver both short-term and long-term value to our clients and apply continuous service improvement to deliver transformational benefits to IT. With Intelligent Automation, Milestone helps businesses further accelerate their IT transformation. The result is a sharper focus on business objectives and a dramatic improvement in employee productivity. Through our key technology partnerships and our people-first approach, Milestone continues to deliver industry-leading innovation to our clients. With more than 3,000 employees serving over 200 companies worldwide, we are following our mission of revolutionizing the way IT is deployed.

Job Overview
We are seeking a motivated Data Engineer with foundational skills in SQL, Python, and modern data technologies. The ideal candidate should have a strong interest in data engineering, problem-solving, and building scalable data pipelines. This is a great opportunity for someone early in their career who wants to grow into a strong data engineering professional.

Key Responsibilities
Design, develop, and maintain ETL/ELT pipelines to ingest, transform, and load data. Write efficient SQL queries for data extraction, transformation, and reporting. Develop scripts and automation using Python for data processing and workflow orchestration. Collaborate with data analysts, BI developers, and business teams to deliver reliable datasets. Ensure data quality, integrity, and consistency across different systems. Work with cloud and on-premises data platforms as required. Document processes, data flows, and technical solutions for ongoing knowledge sharing.

Required Skills & Qualifications
Basic proficiency in SQL (writing queries, joins, aggregations, data manipulation). Knowledge of Python for data processing and scripting. Understanding of data warehousing concepts (tables, schemas, ETL). Familiarity with version control tools (e.g., Git). Strong problem-solving and analytical mindset. Eagerness to learn and work with modern data engineering tools and cloud technologies.

Nice-to-Have Skills (Good to Learn/Have)
Experience with cloud platforms (Azure Data Factory, AWS Glue, GCP Dataflow). Exposure to big data frameworks (Spark, Databricks, Hadoop). Basic knowledge of data visualization tools (Power BI, Tableau, etc.). Awareness of data governance and data quality frameworks.
Compensation Estimated Pay Range: Exact compensation and offers of employment are dependent on circumstances of each case and will be determined based on job-related knowledge, skills, experience, licenses or certifications, and location. Our Commitment to Diversity & Inclusion At Milestone we strive to create a workplace that reflects the communities we serve and work with, where we all feel empowered to bring our full, authentic selves to work. We know creating a diverse and inclusive culture that champions equity and belonging is not only the right thing to do for our employees but is also critical to our continued success. Milestone Technologies provides equal employment opportunity for all applicants and employees. All qualified applicants will receive consideration for employment and will not be discriminated against on the basis of race, color, religion, gender, gender identity, marital status, age, disability, veteran status, sexual orientation, national origin, or any other category protected by applicable federal and state law, or local ordinance. Milestone also makes reasonable accommodations for disabled applicants and employees. We welcome the unique background, culture, experiences, knowledge, innovation, self-expression and perspectives you can bring to our global community. Our recruitment team is looking forward to meeting you.

Posted 1 day ago

Apply

5.0 years

0 Lacs

india

On-site

Are you passionate about turning real-world healthcare data into actionable insights that improve patient outcomes? At Prospection, we empower better healthcare decisions by uncovering meaning from large-scale health data sets and delivering powerful, data-driven strategies. We’re looking for a Senior Analytics Consultant to join our fast-growing team. This is a hands-on role where you’ll leverage advanced data analytics, predictive modelling, and real-world data (RWD) to deliver high-impact outcomes for pharmaceutical companies, healthcare providers, and governments—across Australia and globally.

About Us
Prospection is a pioneer in healthcare analytics, applying machine learning, predictive modelling, and advanced SQL/Python analytics to real-world datasets—claims, EMR, registries, and supply chain. From immuno-oncology to hepatitis, our insights have supported 70+ therapy areas worldwide.

What You’ll Do
Support the full lifecycle of pharmaceutical products—from development to market launch and post-market monitoring—using advanced analytics to uncover key trends and opportunities. Conduct market sizing, cohort analysis, risk stratification, and KPI modelling to inform product and marketing strategies. Build and optimise automated data pipelines (e.g., in Dataiku) to improve reporting efficiency and deliver faster insights. Develop and present interactive dashboards, visualisations, and executive presentations to communicate findings to stakeholders. Partner with Sales teams to translate analytical insights into scalable, productised solutions that align with our platform vision. Collaborate with regional analytics teams, sharing best practices in predictive modelling, machine learning, and advanced SQL querying to ensure consistent quality and innovation. Mentor junior team members, promoting analytical excellence and knowledge sharing.

Essential Criteria
5+ years’ experience working with pharma clients, leveraging longitudinal real-world patient datasets to solve use cases such as forecasting, brand performance tracking, prescriber targeting, segmentation, patient journeys, and omni-channel attribution. Strong solutioning mindset: able to engage directly with clients, unpack requirements through probing Q&A, translate business needs into analytical designs, and own the problem→solution path. Proven client communication skills: confident running pharma stakeholder meetings, handling back-and-forth to refine scope, and presenting insights. Team leadership: experience managing and mentoring analysts/scientists; setting standards, reviewing work, and building repeatable best practices. Technical excellence: advanced SQL (joins, windows, aggregations); Python (NumPy, pandas, scikit-learn, XGBoost); data viz in Power BI; power-user of Excel (VBA/Macros, Power Query). Able to take a brief and query/shape data end-to-end independently. Healthcare analytics background with real-world/claims/EMR data; comfortable with clinical context (e.g., indications, lines of therapy, persistence, switching) and quick to learn unfamiliar disease areas. Degree in a quantitative field (Engineering, Mathematics, Statistics, Computer Science, Health Economics) or equivalent experience. Mindset: hungry, curious, and proactive; eager to learn and continuously raise the bar for the team.

Why Join Prospection?
Impact: Work on projects that improve healthcare delivery and patient outcomes globally. Growth: Be part of a rapidly scaling company with career advancement opportunities.
Culture: Join a collaborative, mission-driven team passionate about healthcare and data. Learning: Access professional development, mentorship, and exposure to advanced analytics technologies. If you’re ready to use your skills in SQL, Python, predictive modelling, and healthcare data analytics to make a real difference, we’d love to hear from you.
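By way of illustration of the predictive-modelling work described above, here is a small, hypothetical risk-stratification sketch in Python with scikit-learn; the column names, the input frame, and the notion of a therapy "switch" label are invented for the example.

```python
# Hedged sketch: a simple risk-stratification pipeline over longitudinal
# claims-style features, predicting which patients may switch therapy.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

def train_switch_model(patients: pd.DataFrame) -> Pipeline:
    """patients columns (assumed): age, months_on_therapy, prior_lines,
    monthly_claims, and a binary 'switched' outcome label."""
    features = ["age", "months_on_therapy", "prior_lines", "monthly_claims"]
    X_train, X_test, y_train, y_test = train_test_split(
        patients[features], patients["switched"], test_size=0.2, random_state=42
    )
    model = Pipeline([
        ("scale", StandardScaler()),
        ("clf", LogisticRegression(max_iter=1000)),
    ])
    model.fit(X_train, y_train)
    print(f"Hold-out accuracy: {model.score(X_test, y_test):.2f}")
    return model
```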

Posted 1 day ago

Apply

4.0 - 8.0 years

0 Lacs

coimbatore, tamil nadu, india

On-site

Job Title: ETL Developers. Job Location: Coimbatore. Type: WFO.

Job Description
Key Responsibilities: ETL Design And Development Design and develop efficient, scalable SSIS packages to extract, transform, and load data between systems. Translate business requirements into technical ETL solutions using data flow and control flow logic. Develop reusable ETL components that support modular, configuration-driven architecture. Data Integration And Transformation Integrate data from multiple heterogeneous sources: SQL Server, flat files, APIs, Excel, etc. Implement business rules and data transformations such as cleansing, standardization, enrichment, and deduplication. Manage incremental loads, full loads, and slowly changing dimensions (SCD) as required. SQL And Database Development Write complex T-SQL queries, stored procedures, and functions to support data transformations and staging logic. Perform joins, unions, aggregations, filtering, and windowing operations effectively for data preparation. Ensure referential integrity and proper indexing for performance. Performance Tuning Optimize SSIS packages by tuning buffer sizes, using parallelism, and minimizing unnecessary transformations. Tune SQL queries and monitor execution plans for efficient data movement and transformation. Implement efficient data loads for high-volume environments. Error Handling And Logging Develop error-handling mechanisms and event logging in SSIS using Event Handlers and custom logging frameworks. Implement restartability, checkpoints, and failure notifications in workflows. Testing And Quality Assurance Conduct unit and integration testing of ETL pipelines. Validate data outputs against source and business rules. Support QA teams in user acceptance testing (UAT) and defect resolution. Deployment And Scheduling Package, deploy, and version SSIS solutions across development, test, and production environments. Schedule ETL jobs using SQL Server Agent or enterprise job schedulers (e.g., Control-M, Tidal). Monitor and troubleshoot job failures and performance issues. Documentation And Maintenance Maintain documentation for ETL designs, data flow diagrams, transformation logic, and job schedules. Update job dependencies and maintain audit trails for data pipelines. Collaboration And Communication Collaborate with data architects, business analysts, and reporting teams to understand data needs. Provide technical support and feedback during requirements analysis and post-deployment support. Participate in sprint planning, status reporting, and technical reviews. Compliance And Best Practices Ensure ETL processes comply with data governance, security, and privacy regulations (HIPAA, GDPR, etc.). Follow team coding standards, naming conventions, and deployment protocols.

Required Skills & Experience
4 - 8 years of hands-on experience with ETL development using SSIS. Strong SQL Server and T-SQL skills. Solid understanding of data warehousing concepts and best practices. Experience with flat files, Excel, APIs, or other common data sources. Familiarity with job scheduling and monitoring (e.g., SQL Agent). Strong analytical and troubleshooting skills. Ability to work independently and meet deadlines.

Preferred Skills
Exposure to Azure Data Factory or cloud-based ETL tools. Experience with Power BI or other reporting platforms. Experience in healthcare, finance, or regulated domains is a plus. Knowledge of version control tools like Git or Azure DevOps. (ref:hirist.tech)
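The incremental-load responsibility above typically follows a watermark pattern inside an SSIS package. Purely as a language-neutral illustration, here is a hypothetical Python/pyodbc sketch of that pattern against SQL Server; the DSN, schemas, and tables are invented.

```python
# Hedged sketch of a watermark-based incremental load: read the last watermark,
# pull only rows changed since then into staging, then advance the watermark.
import pyodbc

CONN_STR = "DSN=staging_dw"  # hypothetical ODBC data source

def incremental_load() -> int:
    with pyodbc.connect(CONN_STR) as conn:
        cur = conn.cursor()

        # 1. Read the last successfully loaded watermark.
        cur.execute("SELECT last_loaded_at FROM etl.watermark WHERE table_name = 'orders'")
        last_loaded_at = cur.fetchone()[0]

        # 2. Pull only rows changed since that watermark into staging.
        cur.execute(
            """
            INSERT INTO stg.orders (order_id, customer_id, amount, modified_at)
            SELECT order_id, customer_id, amount, modified_at
            FROM src.orders
            WHERE modified_at > ?
            """,
            last_loaded_at,
        )
        rows = cur.rowcount

        # 3. Advance the watermark and commit everything together.
        cur.execute(
            "UPDATE etl.watermark SET last_loaded_at = SYSUTCDATETIME() WHERE table_name = 'orders'"
        )
        conn.commit()
        return rows
```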

Posted 2 days ago

Apply

10.0 years

0 Lacs

mumbai, maharashtra, india

On-site

Whether you’re at the start of your career or looking to discover your next adventure, your story begins here. At Citi, you’ll have the opportunity to expand your skills and make a difference at one of the world’s most global banks. We’re fully committed to supporting your growth and development from the start with extensive on-the-job training and exposure to senior leaders, as well as more traditional learning. You’ll also have the chance to give back and make a positive impact where we live and work through volunteerism. We’re currently looking for a high-caliber professional to join our team as Senior Vice-President, Risk Reporting Sr. Officer based in Mumbai or Chennai, India. Being part of our team means that we’ll provide you with the resources to meet your unique needs, empower you to make healthy decisions and manage your financial well-being to help plan for your future. For instance: Citi provides programs and services for your physical and mental well-being including access to telehealth options, health advocates, confidential counseling and more. Coverage varies by country. We believe all parents deserve time to adjust to parenthood and bond with the newest members of their families. That’s why in early 2020 we began rolling out our expanded Paid Parental Leave Policy to include Citi employees around the world. We empower our employees to manage their financial well-being and help them plan for the future. Citi provides access to an array of learning and development resources to help broaden and deepen your skills and knowledge as your career progresses. We have a variety of programs that help employees balance their work and life, including generous paid time off packages. We offer our employees resources and tools to volunteer in the communities in which they live and work. In 2019, Citi employee volunteers contributed more than 1 million volunteer hours around the world. Citi’s Risk Management organization oversees risk-taking activities and assesses risks and issues independently of the front line units. We establish and maintain the enterprise risk management framework that ensures the ability to consistently identify, measure, monitor, control and report material aggregate risks. The USPB Risk and Wealth Risk Chief Administrative Office (CAO) organization provides a global focus for risk management strategy and execution oversight, compliance with Citi Policies and Regulatory requirements, and drives strong risk management - for USPB Risk, Wealth Risk, Investment Risk and Legacy Franchises/Banking & International Retail Risk Management.

In this role, you’re expected to:
The Risk Reporting SVP role is responsible for global and independent risk reporting for both the USPB and Wealth Chief Risk Officer (CRO) and other key enterprise level risk reporting to the Board of Directors and Regulators. This highly visible SVP role is a senior position, responsible for providing timely analytics, measurements, and insights compliant with BCBS 239, applicable regulations, and Citi policies governing risk aggregations and reporting. The role supports department objectives related to Enterprise Data Use Case execution, Risk Digitization, Strategic Data Sourcing, and related Consent Order Transformation Programs.
The SVP is expected to work closely with peers within USPB Risk and Wealth Risk CAO, the USPB and Wealth CRO, 1st & 2nd lines of defense (LOD) senior management, Product Heads and specialized subject matter experts (SMEs) in Enterprise Risk Management (ERM), Counterparty Credit Risk (CCR), Wholesale Credit Risk (WCR), Retail Credit Risk Management (RCR) and related Technology partners throughout Citi. Define and substantiate scope, identifying dependencies, and agreeing with stakeholders for Enterprise Data Use Case (UC) requirements. Document requirements for improving Retail Credit Risk, Wealth Risk, and Investment Risk data to support timely and effective Risk Management and Oversight. Lead the strategy, approach and automation of reporting, measurements, and analytics for all risk reports supporting the USPB and Wealth Risk CROs, Regulators, and Risk management. Work with the various project teams to ensure key milestones are achieved for each phase of the project including requirement documentation, UAT, production parallel and sustainability. Regularly and effectively communicate with senior stakeholders, both verbally and written, the strategic vision of target state risk management strategy, as well as progress of the path-to-strong effort. Timely, quality, and compliant risk reporting, measurements, and analytics. Reporting rationalization and redesign to meet, leverage and align with the path-to-strong transformation in progress – including revision of reports to adopt new/changing risk taxonomies and aggregation requirements, new systems of record and authorized data sources and new reporting/oversight infrastructure and BI/analytics tools, to deliver updated and new risk aggregations that meet changing organizational needs and regulatory/policy requirements. Work in close partnership with USPB Risk and Wealth Risk CAO peers, Risk Policy/Process owners and stakeholders across first and second line of defense to rationalize, simplify and digitize risk reporting. Design, coordinate, and prepare executive materials for USPB and Wealth CRO and management team’s senior presentations to the Board, risk committees, and regulators, including any ad-hoc materials. Partner with Independent Risk Management and In-Business/Country Risk Management to address new risk monitoring or regulatory requirements. Lead strategic initiatives to drive common & concurrent use of “gold source reports” by 1st & 2nd line and deliver faster time to insights. Enhance and streamline reporting processes by adopting best-in-class modern intelligence tools and improving data quality in partnership with stakeholders in risk management, technology, and business teams.

As a successful candidate, you’d ideally have the following skills and exposure:
Excellent communication skills are required to negotiate and interact with senior leadership and partner effectively with other reporting/tech/data leads across the firm.
Strong data analysis skills are also required to ensure seamless and aligned transformation of USPB and Wealth Risk reporting, measurements and analytics to overall risk and data target state, including full adoption of new reporting and data management tools for compliance with Global Regulatory and Management Reporting Policy, Citi Data Governance Policy, End-User Computing remediation and BCBS 239 requirements. 10+ years of relevant experience; strong understanding of consumer credit risk, wholesale credit risk, and related data. Track record of delivering complex projects related to data, aggregation, and reporting. Bachelor’s or Master’s degrees in business, finance, economics, computer science or other analytically intensive discipline (preferred). Ability to synthesize complex data/analytics into succinct and effective presentations. Prior leadership in risk analytics, reporting/BI (Tableau, Python, and SAS) and data preferred. Ability to multi-task effectively in a dynamic, high-volume, and complex environment with a practical solutions-driven approach. Excellent verbal and written communication skills, with a proven track record of engagement with senior leadership teams. Strong interpersonal skills including influencing, facilitation, and partnering skills, able to leverage relationships and work collaboratively across an organization. Effective negotiation skills, a proactive and 'no surprises' approach in communicating issues and strength in sustaining independent views. Comfortable acting as an agent for positive change with agility and flexibility. Working at Citi is far more than just a job. A career with us means joining a family of more than 230,000 dedicated people from around the globe. At Citi, you’ll have the opportunity to grow your career, give back to your community and make a real impact. Take the next step in your career, apply for this role at Citi today: https://jobs.citi.com/dei

------------------------------------------------------
Job Family Group: Risk Management
------------------------------------------------------
Job Family: Risk Reporting and Exposure Monitoring
------------------------------------------------------
Time Type: Full time
------------------------------------------------------
Most Relevant Skills
Analytical Thinking, Credible Challenge, Data Analysis, Governance, Management Reporting, Policy and Procedure, Policy and Regulation, Programming, Risk Controls and Monitors, Risk Identification and Assessment.
------------------------------------------------------
Other Relevant Skills
Laws and Regulations, Referral and Escalation, Risk Remediation.
------------------------------------------------------
Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.

Posted 3 days ago

Apply

0 years

0 Lacs

hyderabad, telangana, india

On-site

Company Description Blend360 is a data and AI services company specializing in data engineering, data science, MLOps, and governance to build scalable analytics solutions. It partners with enterprise and Fortune 1000 clients across industries including financial services, healthcare, retail, technology, and hospitality to drive data-driven decision making. Headquartered in Columbia, Maryland, the company is recognized for rapid growth and global delivery of AI solutions through the integration of people, data, and technology. We are seeking a hands-on Data Engineer with deep expertise in distributed systems, ETL/ELT development, and enterprise-grade database management. The engineer will design, implement, and optimize ingestion, transformation, and storage workflows to support the MMO platform. The role requires technical fluency across big data frameworks (HDFS, Hive, PySpark), orchestration platforms (NiFi), and relational systems (Postgres), combined with strong coding skills in Python and SQL for automation, custom transformations, and operational reliability. Job Description We are implementing a Media Mix Optimization (MMO) platform designed to analyze and optimize marketing investments across multiple channels. This initiative requires a robust on-premises data infrastructure to support distributed computing, large-scale data ingestion, and advanced analytics. The Data Engineer will be responsible for building and maintaining resilient pipelines and data systems that feed into MMO models, ensuring data quality, governance, and availability for Data Science and BI teams. The environment integrates HDFS for distributed storage, Apache NiFi for orchestration, Hive and PySpark for distributed processing, and Postgres for structured data management. This role is central to enabling seamless integration of massive datasets from disparate sources (media, campaign, transaction, customer interaction, etc.), standardizing data, and providing reliable foundations for advanced econometric modeling and insights. Responsibilities Data Pipeline Development & Orchestration Design, build, and optimize scalable data pipelines in Apache NiFi to automate ingestion, cleansing, and enrichment from structured, semi-structured, and unstructured sources. Ensure pipelines meet low-latency and high-throughput requirements for distributed processing. Data Storage & Processing Architect and manage datasets on HDFS to support high-volume, fault-tolerant storage. Develop distributed processing workflows in PySpark and Hive to handle large-scale transformations, aggregations, and joins across petabyte-level datasets. Implement partitioning, bucketing, and indexing strategies to optimize query performance. Database Engineering & Management Maintain and tune Postgres databases for high availability, integrity, and performance. Write advanced SQL queries for ETL, analysis, and integration with downstream BI/analytics systems. Collaboration & Integration Partner with Data Scientists to deliver clean, reliable datasets for model training and MMO analysis. Work with BI engineers to ensure data pipelines align with reporting and visualization requirements. Monitoring & Reliability Engineering Implement monitoring, logging, and alerting frameworks to track data pipeline health. Troubleshoot and resolve issues in ingestion, transformations, and distributed jobs. Data Governance & Compliance Enforce standards for data quality, lineage, and security across systems. 
Ensure compliance with internal governance and external regulations. Documentation & Knowledge Transfer Develop and maintain comprehensive technical documentation for pipelines, data models, and workflows. Provide knowledge sharing and onboarding support for cross-functional teams.

Qualifications
Bachelor’s degree in Computer Science, Information Technology, or related field (Master’s preferred). Proven experience as a Data Engineer with expertise in HDFS, Apache NiFi, Hive, PySpark, Postgres, Python, and SQL. Strong background in ETL/ELT design, distributed processing, and relational database management. Experience with on-premises big data ecosystems supporting distributed computing. Solid debugging, optimization, and performance tuning skills. Ability to work in agile environments, collaborating with multi-disciplinary teams. Strong communication skills for cross-functional technical discussions.

Preferred Qualifications
Familiarity with data governance frameworks, lineage tracking, and data cataloging tools. Knowledge of security standards, encryption, and access control in on-premises environments. Prior experience with Media Mix Modeling (MMM/MMO) or marketing analytics projects. Exposure to workflow schedulers (Airflow, Oozie, or similar). Proficiency in developing automation scripts and frameworks in Python for CI/CD of data pipelines.
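As a concrete illustration of the distributed transformation and partitioning work described above, here is a hedged PySpark sketch; the HDFS path, database, table name, and columns are hypothetical, not the actual MMO platform schema.

```python
# Hedged PySpark sketch: read raw events from HDFS, aggregate per channel/day,
# and persist the result as a partitioned Hive table for downstream MMO and BI use.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("media_spend_aggregation")
    .enableHiveSupport()
    .getOrCreate()
)

# Ingest raw campaign events from HDFS.
events = spark.read.parquet("hdfs:///data/raw/campaign_events")

# Aggregate spend and impressions per channel and day.
daily_channel = (
    events.groupBy("channel", "event_date")
    .agg(
        F.sum("spend").alias("total_spend"),
        F.sum("impressions").alias("total_impressions"),
    )
)

# Write as a date-partitioned Hive table so queries can prune partitions.
(
    daily_channel.write
    .mode("overwrite")
    .partitionBy("event_date")
    .saveAsTable("analytics.daily_channel_spend")
)
```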

Posted 4 days ago

Apply

0.0 - 5.0 years

0 Lacs

bengaluru, karnataka

On-site

Date Posted: 2025-09-12 Country: India Location: No.14/1 & 15/1, Maruthi Industrial Estate, Phase 2, Hoody Village, Whitefield Road, KR Puram Hobli, Bengaluru, Karnataka, India Position Role Type: Unspecified

Summary of Role: This position will report to the Identity Operations Support Team Leader, within the Identity & Access Management organization. The chosen candidate will provide advanced engineering-level expertise for SailPoint operations and support for a global user base, including Identity Lifecycle Management (Joiner/Mover/Leaver) functions, account provisioning/deprovisioning, and application aggregations. This is a hands-on technical position requiring complex, multi-functional analysis and problem solving, and familiarity with multiple Identity-related systems, including WorkDay, SailPoint, Active Directory (AD), LDAP, Exchange, Office 365, Ping Federate, Multi-Factor Authentication, and other applications.

Responsibilities: Provide day-to-day operational support of global, large-scale identity and access management (IAM) solutions using SailPoint IdentityIQ. Triage and manage incidents in ServiceNow and effectively communicate incident resolution to end-users. As needed, collaborate with cross-functional teams to resolve incidents. Notify leadership of any concerns or escalated incidents raised by users. Ensure adherence to all Identity Operations Standard Operating Procedures (SOPs) and provide input for continuous improvement opportunities. Champion best practices for problem management, identifying incident trends and driving root cause resolution to reduce future incidents. As required, assist in the data gathering and analysis to produce and document Root Cause Analysis for select high-impact, high-severity incidents. Based on incident trends, identify defects and enhancements for SailPoint and related IAM solutions, and submit Jira items to track the needed changes. Monitor and triage IAM alerts and implement required actions. Participate in on-call rotation to address any high severity incidents. Maintain ServiceNow Knowledge Scripts, ensuring they are reviewed on a regular basis and up-to-date with current IAM solutions. Support the IAM team during Audit inquiries. Generate IAM reports as needed. Participate in projects and initiatives in support of regulatory, audit and IAM directives as needed. Participate in project work as required. Collate statistical data as requested in support of operational and performance metrics/measurements. Provide input as needed to leadership for strategic IAM plans.

Years of Experience: Thorough understanding of IAM fundamental concepts. Minimum of 3-5 years of Digital / Information Technology experience. Minimum of 3-5 years’ hands-on technical experience supporting IAM solutions, preferably with SailPoint IdentityIQ. Strong working knowledge of Identity-related systems, including SailPoint, WorkDay, Microsoft Active Directory (AD), Exchange, Office 365, and LDAP. Knowledge of Ping Federate, Siteminder, and Multi-Factor Authentication (MFA) technologies and use desirable. Experience working within Information Technology Service Management (ITSM) tools such as ServiceNow. Problem solving and analytical abilities including the ability to critically evaluate information gathered from multiple sources, reconcile conflicts, decompose high-level information into details and apply sound business knowledge. Ability to multi-task and work independently, as well as work collaboratively with teams, which may be geographically distributed. Ability to interact with IAM
stakeholders, partners and leadership to build relationships centered around trust and consistent delivery. Strong verbal and written communication skills; team player with proven collaboration skills, critical thinking and problem-solving skills. Ability to handle multiple competing priorities. Six Sigma Quality certification a plus. ITIL Certification a plus.

Education: BS or BA degree in computer science or related field. In lieu of a degree, 5+ years of IAM technical experience.

RTX adheres to the principles of equal employment. All qualified applications will be given careful consideration without regard to ethnicity, color, religion, gender, sexual orientation or identity, national origin, age, disability, protected veteran status or any other characteristic protected by law.

Posted 4 days ago

Apply

5.0 years

0 Lacs

bengaluru, karnataka, india

On-site

At PwC, our people in managed services focus on a variety of outsourced solutions and support clients across numerous functions. These individuals help organisations streamline their operations, reduce costs, and improve efficiency by managing key processes and functions on their behalf. They are skilled in project management, technology, and process optimization to deliver high-quality services to clients. Those in managed service management and strategy at PwC will focus on transitioning and running services, along with managing delivery teams, programmes, commercials, performance and delivery risk. Your work will involve the process of continuous improvement and optimising of the managed services process, tools and services. Driven by curiosity, you are a reliable, contributing member of a team. In our fast-paced environment, you are expected to adapt to working with a variety of clients and team members, each presenting varying challenges and scope. Every experience is an opportunity to learn and grow. You are expected to take ownership and consistently deliver quality work that drives value for our clients and success as a team. As you navigate through the Firm, you build a brand for yourself, opening doors to more opportunities. Skills Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include but are not limited to: Apply a learning mindset and take ownership for your own development. Appreciate diverse perspectives, needs, and feelings of others. Adopt habits to sustain high performance and develop your potential. Actively listen, ask questions to check understanding, and clearly express ideas. Seek, reflect, act on, and give feedback. Gather information from a range of sources to analyse facts and discern patterns. Commit to understanding how the business works and building commercial awareness. Learn and apply professional and technical standards (e.g. refer to specific PwC tax and audit guidance), uphold the Firm's code of conduct and independence requirements. Data – Associate (2–5 Years) Our Analytics & Insights Managed Services team brings a unique combination of industry expertise, technology, data management and managed‑services experience to create sustained outcomes for our clients and improve business performance. We empower companies to transform their approach to analytics and insights while building your skills in exciting new directions. Have a voice at our table to help design, build, and operate the next generation of data and analytics solutions as an Associate. Basic Qualifications Job Requirements and Preferences Minimum Degree Required: Bachelor’s Degree in Engineering, Statistics, Mathematics, Computer Science, Data Science, Economics, or a related quantitative field Minimum Years of Experience: 2–5 years of professional experience in analytics, data science, or business intelligence roles Preferred Qualifications Degree Preferred: Master’s Degree in Engineering, Statistics, Data Science, Business Analytics, Economics, or related discipline Preferred Fields of Study: Data Analytics/Science, Statistics, Management Information Systems, Economics, Computer Science Preferred Knowledge & Skills As an Associate , you’ll design, develop, and support BI reporting solutions using SSRS and Power BI, ensuring data-driven insights are accurate, timely, and aligned with business needs. 
You will work under the guidance of senior team members, collaborating with analysts, data engineers, and business stakeholders: SSRS Report Development – Develop, enhance, and maintain paginated reports and dashboards using SQL Server Reporting Services (SSRS) – Write efficient SQL queries, stored procedures, and datasets to power reports – Apply best practices in report design, parameterization, and subscription scheduling – Assist in troubleshooting performance issues related to report rendering and query execution Power BI Dashboard Development – Build interactive dashboards and self-service BI solutions in Power BI – Develop DAX measures, calculated columns, and data models to support analytics needs – Collaborate with business teams to translate KPIs into visually impactful dashboards – Assist in publishing, managing workspaces, and configuring row-level security SQL Querying & Data Preparation – Write and optimize SQL queries to extract, transform, and validate data for reporting – Perform joins, aggregations, window functions, and filtering to prepare datasets – Ensure data consistency and accuracy between SSRS and Power BI outputs Data Modeling & Integration – Support dimensional modeling (star/snowflake schema) for reporting solutions – Assist in integrating multiple data sources (SQL Server, Excel, flat files, cloud sources) into Power BI models – Collaborate with ETL teams to streamline reporting datasets Data Quality & Validation Support – Perform validations of source-to-report data, ensuring accuracy and consistency – Document business rules, metrics definitions, and report logic for traceability Cloud & Hybrid Platform Exposure – Gain experience with Power BI Service (cloud deployment, gateways, scheduled refresh) – Assist in configuring SSRS/Power BI solutions in hybrid cloud + on-prem environments Collaboration & Agile Delivery – Participate in agile ceremonies (daily scrums, sprint planning, reviews) – Provide timely updates, raise blockers, and work collaboratively with BI developers, analysts, and data engineers Documentation & Continuous Learning – Maintain technical documentation for reports, dashboards, and queries – Proactively upskill in advanced Power BI (composite models, performance tuning, Power Query) and Azure data services Soft Skills & Professional Attributes – Strong problem-solving and analytical mindset – Ability to communicate insights clearly to non-technical stakeholders – Team-oriented, detail-focused, and proactive in delivering high-quality reporting solutions

Posted 5 days ago

Apply

3.0 years

6 - 12 Lacs

hyderabad

On-site

We are seeking an experienced and passionate Data Analytics Trainer to join our team. The ideal candidate will have hands-on expertise in Power BI, SQL, Excel, and Python (basics), and a passion for teaching and mentoring students or professionals. You will be responsible for delivering high-quality training sessions, designing learning content, and helping learners build practical skills for real-world data analytics roles.

Key Responsibilities: Deliver interactive and engaging classroom or online training sessions on: Power BI – dashboards, data modeling, DAX, visualization best practices. SQL – querying databases, joins, aggregations, subqueries. Excel – formulas, pivot tables, data cleaning, and analysis. Create and update training content, exercises, quizzes, and projects. Guide learners through hands-on assignments and real-time case studies. Provide feedback and mentorship to help learners improve their technical and analytical skills. Track and report learners' progress and performance. Stay updated with the latest tools, trends, and best practices in data analytics.

Required Skills & Qualifications: Bachelor’s or Master’s degree in Computer Science, Data Analytics, Statistics, or a related field. 3+ years of hands-on experience in data analysis and visualization. Proven training experience or passion for teaching. Strong command of: Power BI (certification is a plus), SQL (any RDBMS like MySQL, SQL Server, or PostgreSQL), and Microsoft Excel (advanced level). Excellent communication and presentation skills. Patience, empathy, and a learner-focused mindset.

Job Types: Full-time, Permanent. Pay: ₹50,000.00 - ₹100,000.00 per month. Benefits: Health insurance, Provident Fund. Ability to commute/relocate: Hyderabad, Telangana: Reliably commute or planning to relocate before starting work (Required). Application Question(s): Where are you from? What is your CTC and ECTC? Work Location: In person

Posted 5 days ago

Apply

0 years

0 Lacs

gurugram, haryana, india

On-site

Overview We are looking for a motivated and technically proficient Mid-Level SDET (Software Development Engineer in Test) with a strong focus on data quality, validation, and automation. This role will play a key part in ensuring the accuracy and reliability of our data pipelines, analytics platforms, and reporting systems. You will combine your test automation expertise with hands-on experience in Snowflake and ThoughtSpot, helping to ensure our data is trusted, actionable, and production-ready. The ideal candidate will bring a mix of software engineering, data testing, and automation skills, with a passion for building robust frameworks that support both application-level testing and data validation. Key Responsibilities Automate testing for data pipelines and ETL workflows in Snowflake. Validate data models, transformations, and aggregations for accuracy. Build reconciliation tests between source systems and Snowflake. Test and verify dashboards, reports, and analytics in ThoughtSpot. Ensure metrics and KPIs in BI tools align with data definitions. Develop and maintain test automation frameworks in C#, integrating into CI/CD pipelines. Collaborate across teams to define strategies, plan sprints, and track data quality metrics. Required Skills & Experience Candidates must have 5+ years of experience in testing. Experience as an SDET / Test Engineer with hands-on automation using C#, Playwright, and MSTest. Practical experience with Snowflake (SQL queries, stored procedures, schema validation). Exposure to ThoughtSpot or similar BI tools (Power BI, Tableau, Looker) for report validation and testing. Strong knowledge of Page Object Model (POM) for UI automation and controller-based design for API testing. Experience embedding test automation into CI/CD pipelines (Azure DevOps). Comfortable with test reporting, analyzing data quality metrics, and driving data-informed QA improvements. Excellent communication skills with experience working in collaborative Agile teams. Desirable Familiarity with test result aggregation tools (e.g., Report Portal). Experience in regulated industries such as finance, compliance, or healthcare. Certifications in Snowflake, ISTQB, Azure Fundamentals, or prior mentoring responsibilities in QA/SDET functions. Awareness of AI-driven testing tools or AI-enhanced data quality frameworks. Our Commitment: We promote a culture of integrity, innovation, and continuous improvement. As an SDET, you’ll help ensure our data-driven platforms meet the highest standards of quality, enabling reliable insights across the business. You’ll work on meaningful challenges, from data validation at scale to AI-enhanced testing initiatives, while growing your career in a team that values transparency, ownership, and technical excellence.
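A rough sketch of the source-to-Snowflake reconciliation idea mentioned in this listing. The posting's stack is C# with MSTest, but the checks are shown in Python with pandas purely for illustration, and the table and column names are hypothetical.

```python
import pandas as pd

def reconcile(source: pd.DataFrame, target: pd.DataFrame, key: str, measure: str) -> dict:
    """Compare row counts, key coverage, and a summed measure between a
    source-system extract and the corresponding Snowflake table extract."""
    missing_in_target = set(source[key]) - set(target[key])
    extra_in_target = set(target[key]) - set(source[key])
    return {
        "row_count_match": len(source) == len(target),
        "missing_in_target": sorted(missing_in_target),
        "extra_in_target": sorted(extra_in_target),
        "sum_delta": float(source[measure].sum() - target[measure].sum()),
    }

# Toy data standing in for the source system and the Snowflake copy.
source = pd.DataFrame({"trade_id": [1, 2, 3], "notional": [100.0, 200.0, 300.0]})
target = pd.DataFrame({"trade_id": [1, 2], "notional": [100.0, 200.0]})
print(reconcile(source, target, key="trade_id", measure="notional"))
```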

Posted 5 days ago

Apply

8.0 years

0 Lacs

noida, uttar pradesh, india

On-site

Area(s) of responsibility What You’ll Do Build & automate reporting Design Power BI dashboards and scorecards sourced from Azure DevOps (Boards/Repos/Pipelines). Track key metrics: velocity/predictability, throughput, cycle & lead time, defect trends, DORA (deploy freq, change failure rate, MTTR). Deliver weekly/monthly executive readouts and ad-hoc analyses. ADO data quality Maintain queries, fields, workflows, tags, and iteration/area paths. Run regular audits; improve data hygiene and consistency across teams. Operational cadence Orchestrate sprint/release calendars, quarterly planning, and backlog health reviews. Standardize templates for status, risk/issue logs, and decision records. Administration & coordination Prep materials for exec forums; manage meeting logistics, follow-ups, and action tracking. Support license/admin for ADO/Power BI; light vendor/SOW and budget tracking. Continuous improvement Identify bottlenecks via data; propose and drive process changes. Document SOPs and lightweight playbooks for repeatable execution. Compliance & change management Help with audit readiness , security reports and change logs tied to releases. What You Surely Bring 8+ years of total experience with minimum 5 years in engineering operations, PMO/portfolio analytics, BI, or similar. Azure DevOps: Advanced Boards/Queries/Analytics; comfortable with repos/pipelines concepts. Power BI: Strong data modeling, Power Query, and DAX; ability to automate refresh and publish. Excel power user: Pivot tables, Power Query, functions (e.g., XLOOKUP), data cleansing. Data skills: SQL for joins/aggregations; ability to connect APIs (ADO REST) is a plus. Delivery savvy: Understanding of Agile/Scrum/SAFe practices and CI/CD fundamentals. Communication: Clear storytelling with data; concise executive summaries. Mindset: Ownership, speed, and a practical “figure it out” attitude. Nice to have Scripting (Python/R) for data wrangling or automation. Experience with Confluence/SharePoint, ServiceNow/Smartsheet, or OKR tooling. Background in enterprise software or product engineering environments.
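To make the metric definitions above concrete, here is a rough sketch of how cycle time and the DORA change failure rate could be computed from exported work-item and deployment data; the column names are assumptions, not the actual Azure DevOps schema.

```python
import pandas as pd

# Hypothetical exports (e.g., from ADO Analytics views); column names are illustrative.
work_items = pd.DataFrame({
    "id": [1, 2, 3],
    "activated": pd.to_datetime(["2024-03-01", "2024-03-03", "2024-03-05"]),
    "closed": pd.to_datetime(["2024-03-04", "2024-03-10", "2024-03-06"]),
})
deployments = pd.DataFrame({
    "deploy_id": [10, 11, 12, 13],
    "failed": [False, True, False, False],
})

# Cycle time: activated to closed, in days, averaged across closed items.
cycle_days = (work_items["closed"] - work_items["activated"]).dt.days
print("Average cycle time (days):", cycle_days.mean())

# DORA change failure rate: failed deployments divided by total deployments.
change_failure_rate = deployments["failed"].mean()
print("Change failure rate:", round(change_failure_rate, 2))
```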

Posted 5 days ago

Apply

3.0 years

0 - 1 Lacs

hyderabad, telangana

On-site

We are seeking an experienced and passionate Data Analytics Trainer to join our team. The ideal candidate will have hands-on expertise in Power BI, SQL, Excel, and Python (basics) , and a passion for teaching and mentoring students or professionals. You will be responsible for delivering high-quality training sessions, designing learning content, and helping learners build practical skills for real-world data analytics roles. Key Responsibilities: Deliver interactive and engaging classroom or online training sessions on: Power BI – dashboards, data modeling, DAX, visualization best practices. SQL – querying databases, joins, aggregations, subqueries. Excel – formulas, pivot tables, data cleaning, and analysis. Create and update training content, exercises, quizzes, and projects. Guide learners through hands-on assignments and real-time case studies. Provide feedback and mentorship to help learners improve their technical and analytical skills. Track and report learners' progress and performance. Stay updated with the latest tools, trends, and best practices in data analytics. Required Skills & Qualifications: Bachelor’s or Master’s degree in Computer Science, Data Analytics, Statistics, or a related field. 3+ years of hands-on experience in data analysis and visualization. Proven training experience or passion for teaching. Strong command of: Power BI (certification is a plus) SQL (any RDBMS like MySQL, SQL Server, or PostgreSQL) Microsoft Excel (advanced level) Excellent communication and presentation skills. Patience, empathy, and a learner-focused mindset Job Types: Full-time, Permanent Pay: ₹50,000.00 - ₹100,000.00 per month Benefits: Health insurance Provident Fund Ability to commute/relocate: Hyderabad, Telangana: Reliably commute or planning to relocate before starting work (Required) Application Question(s): Where are you from ? What is Your CTC and ECTC ? Work Location: In person

Posted 6 days ago

Apply

40.0 years

7 - 8 Lacs

hyderābād

On-site

ABOUT AMGEN Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting-edge of innovation, using technology and human genetic data to push beyond what’s known today. ABOUT THE ROLE Role Description: We are seeking a seasoned Engineering Manager (Data Engineering) to drive the development and implementation of our data strategy with deep expertise in R&D of Biotech or Pharma domain. This role will lead a team of data engineers to build, optimize, and maintain scalable data architectures, data pipelines, and operational frameworks that support real-time analytics, AI-driven insights, and enterprise-wide data solutions. As a strategic leader, the ideal candidate will drive best practices in data engineering, cloud technologies, and Agile development, ensuring robust governance, data quality, and efficiency. The role requires technical expertise, team leadership, and a deep understanding of cloud data solutions to optimize data-driven decision-making. Roles & Responsibilities: Lead and mentor a team of data engineers, fostering a culture of innovation, collaboration, and continuous learning for solving complex problems of R&D division. Oversee the development of data extraction, validation, and transformation techniques, ensuring ingested data is of high quality and compatible with downstream systems. Guide the team in writing and validating high-quality code for data ingestion, processing, and transformation, ensuring resiliency and fault tolerance. Drive the development of data tools and frameworks for managing and accessing data efficiently across the organization. Oversee the implementation of performance monitoring protocols across data pipelines, ensuring real-time visibility, alerts, and automated recovery mechanisms. Coach engineers in building dashboards and aggregations to monitor pipeline health and detect inefficiencies, ensuring optimal performance and cost-effectiveness. Lead the implementation of self-healing solutions, reducing failure points and improving pipeline stability and efficiency across multiple product features. Oversee data governance strategies, ensuring compliance with security policies, regulations, and data accessibility best practices. Guide engineers in data modeling, metadata management, and access control, ensuring structured data handling across various business use cases. Collaborate with business leaders, product owners, and cross-functional teams to ensure alignment of data architecture with product requirements and business objectives. Prepare team members for stakeholder discussions by helping assess data costs, access requirements, dependencies, and availability for business scenarios. Drive Agile and Scaled Agile (SAFe) methodologies, managing sprint backlogs, prioritization, and iterative improvements to enhance team velocity and project delivery. Stay up-to-date with emerging data technologies, industry trends, and best practices, ensuring the organization leverages the latest innovations in data engineering and architecture. Functional Skills: Must-Have Skills: Experience managing a team of data engineers in the R&D domain of biotech/pharma companies. 
Experience architecting and building data and analytics solutions that extract, transform, and load data from multiple source systems. Data Engineering experience in R&D for the biotechnology or pharma industry. Demonstrated hands-on experience with cloud platforms (AWS) and the ability to architect cost-effective and scalable data solutions. Proficiency in Python, PySpark, and SQL. Experience with dimensional data modeling. Experience working with Apache Spark and Apache Airflow. Experienced with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps. Experienced with AWS, GCP, or Azure cloud services. Understanding of the end-to-end project/product life cycle. Well versed in full-stack development, DataOps automation, logging frameworks, and pipeline orchestration tools. Strong analytical and problem-solving skills to address complex data challenges. Effective communication and interpersonal skills to collaborate with cross-functional teams. Good-to-Have Skills: Data Engineering Management experience in Biotech/Pharma is a plus. Experience using graph databases such as Stardog, Marklogic, Neo4J, or Allegrograph. Education and Professional Certifications: Master’s degree with 10-12+ years of experience in Computer Science, IT, or a related field, OR Bachelor’s degree with 8-10+ years of experience in Computer Science, IT, or a related field. AWS Certified Data Engineer preferred. Databricks Certificate preferred. Scaled Agile SAFe certification preferred. Project Management certifications preferred. Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills. Ability to work effectively with global, virtual teams. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Strong presentation and public speaking skills. EQUAL OPPORTUNITY STATEMENT Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process and to perform essential job functions.
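As a small illustration of the pipeline-resiliency themes in this posting (retries, alerting, automated recovery), here is a minimal Apache Airflow DAG sketch in the Airflow 2.4+ style; the task logic and alert callback are placeholders rather than any actual implementation.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def notify_on_failure(context):
    # Placeholder alert hook; in practice this might post to Slack or PagerDuty.
    print(f"Task {context['task_instance'].task_id} failed")


def ingest_batch():
    # Placeholder for an ingestion and validation step.
    print("ingesting and validating batch")


default_args = {
    "owner": "data-eng",
    "retries": 3,                       # simple self-healing: retry transient failures
    "retry_delay": timedelta(minutes=5),
    "on_failure_callback": notify_on_failure,
}

with DAG(
    dag_id="example_rnd_ingestion",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args=default_args,
) as dag:
    PythonOperator(task_id="ingest_batch", python_callable=ingest_batch)
```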

Posted 6 days ago

Apply

0 years

0 Lacs

gurugram, haryana, india

On-site

Hands-on data automation engineer with strong Python or Java coding skills and solid SQL expertise, who can work with large data sets, understand stored procedures, and independently write data-driven automation logic. Develop and execute test cases with a focus on Fixed Income trading workflows. The requirement goes beyond automation tools and aligns better with a junior developer or data automation role. Desired Skills and experience :- Strong programming experience in Python (preferred) or Java. Strong experience of working with Python and its libraries like Pandas, NumPy, etc. Hands-on experience with SQL, including: Writing and debugging complex queries (joins, aggregations, filtering, etc.) Understanding stored procedures and using them in automation Experience working with data structures, large tables and datasets Comfort with data manipulation, validation, and building comparison scripts Nice to have: Familiarity with PyCharm, VS Code, or IntelliJ for development and understanding of how automation integrates into CI/CD pipelines Prior exposure to financial data or post-trade systems (a bonus) Excellent communication skills, both written and verbal Experience of working with test management tools (e.g., X-Ray/JIRA). Extremely strong organizational and analytical skills with strong attention to detail Strong track record of excellent results delivered to internal and external clients Able to work independently without the need for close supervision and collaboratively as part of cross-team efforts Experience with delivering projects within an agile environment Key Responsibilities :- Write custom data validation scripts based on provided regression test cases Read, understand, and translate stored procedure logic into test automation Compare datasets across environments and generate diffs Collaborate with team members and follow structured automation practices Contribute to building and maintaining a central automation script repository Establish and implement comprehensive QA strategies and test plans from scratch. Develop and execute test cases with a focus on Fixed Income trading workflows. Driving the creation of regression test suites for critical back-office applications. Collaborate with development, business analysts, and project managers to ensure quality throughout the SDLC. Provide clear and concise reporting on QA progress and metrics to management. Bring strong subject matter expertise in the Financial Services Industry, particularly fixed income trading products and workflows. Ensure effective, efficient, and continuous communication (written and verbally) with global stakeholders Independently troubleshoot difficult and complex issues on different environments Responsible for end-to-end delivery of projects, coordination between client and internal offshore teams and managing client queries Demonstrate high attention to detail, should work in a dynamic environment whilst maintaining high quality standards, a natural aptitude to develop good internal working relationships and a flexible work ethic Responsible for Quality Checks and adhering to the agreed Service Level Agreement (SLA) / Turn Around Time (TAT)
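A minimal sketch of the "compare datasets across environments and generate diffs" responsibility, assuming the two extracts have already been loaded into pandas DataFrames sharing a key column; all names and data are hypothetical.

```python
import pandas as pd

def diff_datasets(env_a: pd.DataFrame, env_b: pd.DataFrame, key: str) -> pd.DataFrame:
    """Outer-join two extracts on a key and flag rows that exist in only one
    environment, or that exist in both but differ in any value column."""
    merged = env_a.merge(env_b, on=key, how="outer", suffixes=("_a", "_b"), indicator=True)
    value_cols = [c for c in env_a.columns if c != key]
    changed = pd.Series(False, index=merged.index)
    for col in value_cols:
        changed |= merged[f"{col}_a"].ne(merged[f"{col}_b"])
    status = merged["_merge"].astype(str).map(
        {"left_only": "only_in_a", "right_only": "only_in_b", "both": "match"}
    )
    status[(merged["_merge"] == "both") & changed] = "changed"
    merged["status"] = status
    return merged.drop(columns="_merge")

# Toy example standing in for, say, UAT vs PROD extracts of the same table.
uat = pd.DataFrame({"trade_id": [1, 2, 3], "price": [99.5, 101.0, 100.0]})
prod = pd.DataFrame({"trade_id": [2, 3, 4], "price": [101.0, 100.5, 98.0]})
print(diff_datasets(uat, prod, key="trade_id"))
```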

Posted 6 days ago

Apply

0 years

0 Lacs

coimbatore, tamil nadu, india

On-site

About the job Are you looking for a new career challenge? With LTIMindtree, are you ready to embark on a data-driven career? Working for a leading global manufacturing client, providing an engaging product experience through best-in-class PIM implementation and building rich, relevant, and trusted product information across channels and digital touchpoints so their end customers can make an informed purchase decision, will surely be a fulfilling experience. Location: Coimbatore. Please apply through the link below: https://forms.office.com/r/0y3W38SkZH Please share your resume by email: I.Balaji@ltimindtree.com Responsibilities: Develop scalable pipelines to efficiently process and transform data using Spark. Design and develop a scalable and robust framework for generating PDF reports using Python and Spark. Utilize Snowflake and Spark SQL to perform aggregations on high volumes of data. Develop stored procedures, views, indexes, triggers, and functions in the Snowflake database to maintain data and share it with downstream applications in the form of APIs. Use Snowflake features (Streams, Tasks, Snowpipes, etc.) wherever needed in the development flow. Leverage Azure Databricks and Data Lake for data processing and storage. Develop APIs using Python's Flask framework to support front-end applications. Collaborate with architects and business stakeholders to understand reporting requirements. Maintain and improve existing reporting pipelines and infrastructure. Qualifications: Proven experience as a Data Engineer with a strong understanding of data pipelines and ETL processes. Proficiency in Python with experience in data manipulation libraries such as Pandas and NumPy. Experience with SQL, Snowflake, and Spark for data querying and aggregations. Familiarity with Azure cloud services such as Data Factory, Databricks, and Data Lake. Experience developing APIs using frameworks like Flask is a plus. Excellent communication and collaboration skills. Ability to work independently and manage multiple tasks effectively. Mandatory Skills: Python, SQL, Spark, Azure Data Factory, Azure Data Lake, Azure Databricks, Azure Service Bus, and Azure Event Hubs. Why join us? Work in industry-leading implementations for Tier-1 clients. Accelerated career growth and global exposure. Collaborative, inclusive work environment rooted in innovation. Exposure to best-in-class automation frameworks. Innovation-first culture: we embrace automation, AI insights, and clean data. Know someone who fits this perfectly? Tag them – let’s connect the right talent with the right opportunity.
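For illustration, a short PySpark sketch of the "aggregations on high volumes of data" responsibility described above; the product schema is invented and no Snowflake connectivity is shown.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("pim_aggregation_demo").getOrCreate()

# Hypothetical product-attribute records, standing in for PIM data read from a lake or warehouse.
rows = [
    ("SKU-1", "Lighting", 120.0), ("SKU-2", "Lighting", 80.0),
    ("SKU-3", "Tools", 45.0), ("SKU-4", "Tools", 60.0),
]
df = spark.createDataFrame(rows, ["sku", "category", "list_price"])

# Category-level aggregation that could feed a downstream report or API payload.
summary = (
    df.groupBy("category")
      .agg(F.count("sku").alias("sku_count"),
           F.avg("list_price").alias("avg_price"))
      .orderBy(F.desc("sku_count"))
)
summary.show()
```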

Posted 6 days ago

Apply

12.0 years

0 Lacs

andhra pradesh, india

On-site

Job Summary We are seeking a highly skilled Senior SQL Developer with deep expertise in Snowflake and advanced SQL to join our data engineering team. In this role, you will design, develop, and maintain robust data solutions, focusing on Snowflake's cloud data platform. You will work closely with data architects, analysts, and cross-functional teams to ensure scalable and efficient data processing. Key Responsibilities Design, develop, and optimize complex SQL queries, stored procedures, and views within Snowflake. Develop and implement Change Data Capture (CDC) strategies, including scenarios where source tables lack change-tracking columns. Manage and automate data ingestion and transformation pipelines using Snowflake stages, Snowpipe, and other tools. Work with external stages (AWS S3, Azure Blob, etc.) to load/unload data securely and efficiently. Develop, test, and deploy stored procedures using JavaScript within Snowflake. Use VARIANT and other semi-structured data types to process and store JSON/XML data. Optimize SQL queries and Snowflake architecture for performance and cost-efficiency. Conduct data analysis and transformation using complex JOINs, including INNER, LEFT, and CROSS JOINs, to support business reporting and analytics. Collaborate with DevOps and DataOps teams to ensure automated and auditable data workflows. Mentor junior developers and participate in code reviews to uphold data quality standards. Required Qualifications Bachelor's or Master's degree in Computer Science, Information Systems, or a related field. 12+ years of experience as an SQL Developer or Data Engineer, with strong hands-on experience in Snowflake. Proficiency in writing advanced SQL queries involving joins, aggregations, window functions, and subqueries. In-depth understanding of Snowflake stages (user, internal, external) and file handling with AWS S3. Experience building stored procedures in Snowflake using JavaScript. Strong understanding of Change Data Capture (CDC) concepts and implementation strategies. Familiarity with handling semi-structured data types (e.g., VARIANT) and JSON data in Snowflake. Demonstrated ability to think through problems such as handling CDC without a timestamp or flag. Proficiency in data modeling, schema design, and performance tuning. Strong communication skills and the ability to explain technical concepts to non-technical stakeholders. Preferred Qualifications Snowflake Certification. Experience with tools like DBT, Airflow, or Fivetran. Experience with CI/CD pipelines and version control (Git). Knowledge of Python or another scripting language. Experience with BI tools like Tableau, Power BI, or Looker.
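The listing calls out designing CDC when the source table has no timestamp or flag column. One common approach is full-row hashing against the previous snapshot; a minimal Python sketch of the idea follows (inside Snowflake this would more typically be done with HASH or MD5 in SQL, or with streams). All table and column names are hypothetical.

```python
import hashlib
import pandas as pd

def with_row_hash(df: pd.DataFrame, key: str) -> pd.DataFrame:
    """Add an MD5 hash over all non-key columns so changes can be detected
    without relying on a timestamp or change flag in the source."""
    cols = [c for c in df.columns if c != key]
    hashed = df.copy()
    hashed["row_hash"] = (
        df[cols].astype(str).agg("|".join, axis=1)
        .map(lambda s: hashlib.md5(s.encode()).hexdigest())
    )
    return hashed[[key, "row_hash"]]

previous = with_row_hash(pd.DataFrame({"id": [1, 2, 3], "status": ["A", "B", "C"]}), key="id")
current  = with_row_hash(pd.DataFrame({"id": [2, 3, 4], "status": ["B", "X", "D"]}), key="id")

# Compare the two snapshots to classify inserts, updates, and deletes.
merged = previous.merge(current, on="id", how="outer", suffixes=("_old", "_new"), indicator=True)
deletes = merged[merged["_merge"] == "left_only"]["id"].tolist()
inserts = merged[merged["_merge"] == "right_only"]["id"].tolist()
updates = merged[(merged["_merge"] == "both") &
                 (merged["row_hash_old"] != merged["row_hash_new"])]["id"].tolist()
print("inserts:", inserts, "updates:", updates, "deletes:", deletes)
```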

Posted 1 week ago

Apply

0 years

0 Lacs

pune, maharashtra, india

On-site

About the job Are you looking for a new career challenge? With LTIMindtree, are you ready to embark on a data-driven career? Working for a leading global manufacturing client, providing an engaging product experience through best-in-class PIM implementation and building rich, relevant, and trusted product information across channels and digital touchpoints so their end customers can make an informed purchase decision, will surely be a fulfilling experience. Location: Pune only. E-mail: allen.prashanth@ltimindtree.com Responsibilities: Develop scalable pipelines to efficiently process and transform data using Spark. Design and develop a scalable and robust framework for generating PDF reports using Python and Spark. Utilize Snowflake and Spark SQL to perform aggregations on high volumes of data. Develop stored procedures, views, indexes, triggers, and functions in the Snowflake database to maintain data and share it with downstream applications in the form of APIs. Use Snowflake features (Streams, Tasks, Snowpipes, etc.) wherever needed in the development flow. Leverage Azure Databricks and Data Lake for data processing and storage. Develop APIs using Python's Flask framework to support front-end applications. Collaborate with architects and business stakeholders to understand reporting requirements. Maintain and improve existing reporting pipelines and infrastructure. Qualifications: Proven experience as a Data Engineer with a strong understanding of data pipelines and ETL processes. Proficiency in Python with experience in data manipulation libraries such as Pandas and NumPy. Experience with SQL, Snowflake, and Spark for data querying and aggregations. Familiarity with Azure cloud services such as Data Factory, Databricks, and Data Lake. Experience developing APIs using frameworks like Flask is a plus. Excellent communication and collaboration skills. Ability to work independently and manage multiple tasks effectively. Mandatory Skills: Python, SQL, Spark, Azure Data Factory, Azure Data Lake, Azure Databricks, Azure Service Bus, and Azure Event Hubs. Why join us? Work in industry-leading implementations for Tier-1 clients. Accelerated career growth and global exposure. Collaborative, inclusive work environment rooted in innovation. Exposure to best-in-class automation frameworks. Innovation-first culture: we embrace automation, AI insights, and clean data. Know someone who fits this perfectly? Tag them – let’s connect the right talent with the right opportunity.

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

pune, maharashtra

On-site

As a Staff Cloud Support Engineer at Snowflake, you will drive technical solutions to complex problems, providing in-depth analysis and guidance to Snowflake customers and partners using various methods of communication like email, web, and phone. Your role involves adhering to response and resolution SLAs and escalation processes to ensure fast resolution of customer issues. You will demonstrate good problem-solving skills and be process-oriented while utilizing the Snowflake environment, connectors, 3rd party partners for software, and tools to investigate issues. Your responsibilities also include documenting known solutions to the internal and external knowledge base, submitting well-documented bugs and feature requests, proactively identifying recommendations, and leading global initiatives to improve product quality, customer experience, and team efficiencies. Additionally, you will provide support coverage during holidays and weekends based on business needs. To be an ideal Staff Cloud Support Engineer at Snowflake, you should have a Bachelor's or Master's degree in Computer Science or equivalent discipline, along with 8+ years of experience in a Technical Support environment or a similar customer-facing role. Excellent writing and communication skills in English, attention to detail, and the ability to work in a highly collaborative environment across global teams are essential. You should also have a clear understanding of data warehousing fundamentals and concepts, along with the ability to debug, rewrite, and troubleshoot complex SQL queries. The ideal candidate will possess strong knowledge of RDBMS, SQL data types, aggregations, and functions, as well as a good understanding of RDBMS query profiles and execution plans to analyze query performance. Proficiency in scripting/coding languages, experience in database migration and ETL, and familiarity with semi-structured data are also required. Additionally, having a clear understanding of Operating System internals, memory management, CPU management, and experience in RDBMS workload management is beneficial. Nice to have qualifications include experience working with distributed databases, troubleshooting skills on various operating systems, understanding of networking fundamentals, cloud computing security concepts, and proficiency in scripting languages like Python and JavaScript. Snowflake is seeking individuals who share their values, challenge ordinary thinking, push the pace of innovation, and contribute to the company's growth and success.,

Posted 1 week ago

Apply

4.0 years

0 Lacs

andhra pradesh, india

On-site

At PwC, our people in business application consulting specialise in consulting services for a variety of business applications, helping clients optimise operational efficiency. These individuals analyse client needs, implement software solutions, and provide training and support for seamless integration and utilisation of business applications, enabling clients to achieve their strategic objectives. In SAP technology at PwC, you will specialise in utilising and managing SAP software and solutions within an organisation. You will be responsible for tasks such as installation, configuration, administration, development, and support of SAP products and technologies. Driven by curiosity, you are a reliable, contributing member of a team. In our fast-paced environment, you are expected to adapt to working with a variety of clients and team members, each presenting varying challenges and scope. Every experience is an opportunity to learn and grow. You are expected to take ownership and consistently deliver quality work that drives value for our clients and success as a team. As you navigate through the Firm, you build a brand for yourself, opening doors to more opportunities. Skills Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include but are not limited to: Apply a learning mindset and take ownership for your own development. Appreciate diverse perspectives, needs, and feelings of others. Adopt habits to sustain high performance and develop your potential. Actively listen, ask questions to check understanding, and clearly express ideas. Seek, reflect, act on, and give feedback. Gather information from a range of sources to analyse facts and discern patterns. Commit to understanding how the business works and building commercial awareness. Learn and apply professional and technical standards (e.g. refer to specific PwC tax and audit guidance), uphold the Firm's code of conduct and independence requirements. SAP Native Hana Developer Technical Skills Bachelor's or Master's degree in a relevant field (e.g., computer science, information systems, engineering). Minimum of 4 years of experience in HANA Native development and configurations, including at least 1 year with SAP BTP Cloud Foundry and HANA Cloud. Demonstrated experience in working with various data sources SAP(SAP ECC, SAP CRM, SAP S/4HANA) and non-SAP (Oracle, Salesforce, AWS) Demonstrated expertise in designing and implementing solutions utilizing the SAP BTP platform. Solid understanding of BTP HANA Cloud and its service offerings. Strong focus on building expertise in constructing calculation views within the HANA Cloud environment (BAS) and other supporting data artifacts. Experience with HANA XS Advanced and HANA 2.0 versions. Ability to optimize queries and data models for performance in SAP HANA development environment and sound understanding of indexing, partitioning and other performance optimization techniques. Proven experience in applying SAP HANA Cloud development tools and technologies, including HDI containers, HANA OData Services , HANA XSA, strong SQL scripting, SDI/SLT replication, Smart Data Access (SDA) and Cloud Foundry UPS services. Experience with ETL processes and tools (SAP Data Services Preferred). Ability to debug and optimize existing queries and data models for performance. Hands-on experience in utilizing Git within Business Application Studio and familiarity with Github features and repository management. 
Familiarity with reporting tools and security based concepts within the HANA development environment. Understanding of the HANA Transport Management System, HANA Transport Container and CI/CD practices for object deployment. Knowledge of monitoring and troubleshooting techniques for SAP HANA BW environments. Familiarity with reporting tools like SAC/Power BI building dashboards and consuming data models is a plus. HANA CDS views: (added advantage) Understanding of associations, aggregations, and annotations in CDS views. Ability to design and implement data models using CDS. Certification in SAP HANA or related areas is a plus Functional knowledge of SAP business processes (FI/CO, MM, SD, HR).

Posted 1 week ago

Apply

7.0 years

0 Lacs

pune, maharashtra, india

On-site

Key Result Areas and Activities: Study the existing technology landscape, understand the current data integration framework, and do impact assessment for the requirements. Develop Spark jobs using Scala for new project requirements. Enhance existing Spark jobs for any ongoing product enhancement. Performance tuning of Spark jobs, stress testing, etc. Create new data pipelines for developed/enhanced Spark jobs using AWS Lambda or Apache Airflow. Responsible for the database design process: logical design, physical design, star schema, snowflake schema, etc. Analyze data processing, integration, modelling, and reporting requirements, and define data loading strategies considering volume, data types, frequency, and analytics specifications. Ensure an optimal balance between cost and performance. Project documentation; adheres to quality guidelines and schedules. Works hand in hand with the PM for successful delivery of the project and provides estimation, scoping, and scheduling assistance. Manage the build phase and quality-assure code to ensure it fulfills requirements and adheres to the Cloud architecture. Resolve difficult design and development issues. Work and Technical Experience: Must-Have: Overall 7-9 years of IT experience, with 5+ years on AWS-related projects. Good to have Associate-Level and Professional-Level AWS Certification. In-depth knowledge of the following AWS services required: S3, EC2, EMR, Serverless, Athena, AWS Glue, Lambda, Step Functions. Cloud Databases (Must have) – AWS Aurora, Singlestore, RedShift, Snowflake. Big Data (Must have) – Hadoop, Hive, Spark, YARN. Programming Language (Must have) – Scala, Python, Shell Scripts, PySpark. Operating System (Must have) – Any flavor of Linux, Windows. Must have very strong SQL skills. Orchestration Tools (Must have): Apache Airflow. Expertise in developing ETL workflows comprising complex transformations like SCD, deduplication, aggregations, etc. Should have a thorough conceptual understanding of AWS VPC, Subnets, Security Groups, and Route Tables. Should be a quick self-learner, ready to adapt to new AWS services or new Big Data technologies as and when required. Qualifications: Bachelor’s degree in computer science, engineering, or a related field (Master’s degree is a plus). Demonstrated continued learning through one or more technical certifications or related methods. Minimum 5 years of experience on cloud-related projects. Qualities: Hold strong technical knowledge and experience. Should have the capability to deep dive and research in various technical fields. Self-motivated and focused on delivering outcomes for a fast-growing team and firm. Able to communicate persuasively through speaking, writing, and client presentations. Able to consult, write, and present persuasively. Able to work in a self-organized and cross-functional team. Able to iterate based on new information, peer reviews, and feedback. Prior experience working in a large media company would be an added advantage.
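One of the transformation patterns named above, deduplication by keeping the latest record per business key (a simple SCD-style cleanup), can be sketched in PySpark as follows; the schema and data are illustrative only.

```python
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dedup_demo").getOrCreate()

# Hypothetical change feed with multiple versions per business key.
rows = [
    ("C1", "2024-01-01", "bronze"),
    ("C1", "2024-02-01", "silver"),
    ("C2", "2024-01-15", "gold"),
]
df = spark.createDataFrame(rows, ["customer_id", "updated_at", "tier"])

# Keep only the most recent record per key (an SCD Type 1 style deduplication).
w = Window.partitionBy("customer_id").orderBy(F.col("updated_at").desc())
latest = (
    df.withColumn("rn", F.row_number().over(w))
      .filter(F.col("rn") == 1)
      .drop("rn")
)
latest.show()
```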

Posted 1 week ago

Apply

5.0 years

0 Lacs

pune, maharashtra, india

On-site

Job Description Role and responsibilities: Design, develop, and maintain interactive and visually appealing Power BI dashboards and reports that provide actionable insights to business stakeholders. Develop complex DAX formulas and measures to create sophisticated calculations and aggregations within Power BI models. Proficiently apply various SQL window functions (e.g., ROW_NUMBER (), RANK (), LAG (), LEAD (), NTILE (), Loop, etc.) to transform and prepare data for Power BI consumption. Lead and execute data modeling efforts within Power BI, ensuring optimal performance, scalability, and data integrity (e.g., star schema, snowflake schema, relationships, hierarchies). Collaborate directly with business users to gather and understand reporting requirements, translating them into technical specifications and compelling dashboard designs. Design and implement complex charts and visualizations, including but not limited to Sankey diagrams, Step charts, drill down charts, dynamic charts (dynamic KPI and Dimension using field parameter) where number of charts or kpi’s and dimensions in a chart can be altered to represent intricate data flows and trends. Should be able to Develop Integrate and present outputs from machine learning models within Power BI reports and dashboards, leveraging Power BI's capabilities for predictive analytics and advanced insights. Architect and design end-to-end Power BI solutions, considering data sources, data transformation, data models, security, and deployment strategies. Provide clear and concise data requirements and specifications to data engineers for data ingestion, transformation, and warehousing initiatives. Optimize Power BI reports and dashboards for performance; including query optimization, data source connectivity, and visual rendering. Perform thorough data validation and quality checks to ensure accuracy and reliability of reports. Stay up-to-date with the latest Power BI features, industry trends, and best practices. Act as a subject matter expert for Power BI within the organization, providing guidance and support to other team members. Demonstrate the ability to independently research, analyze, and solve complex problems without reliance on generative AI tools. Required Skills and Qualifications: Education: Bachelor degree in Analytics, Data Science, Business, Statistics, or a related field. Advanced degrees. 5+ years of hands-on experience as a Power BI Developer. Demonstrable expertise in data modeling principles and practices within Power BI, including understanding of relationships, cardinality, cross-filter direction, and M-query for data transformation. Strong proficiency in SQL, including extensive experience with all types of SQL window functions (e.g., analytic, ranking, aggregate, value, distribution functions). Experience in integrating outputs from machine learning models into Power BI reports and dashboards. Experience working with various data sources (SQL Server, Azure Synapse, Excel, SharePoint, APIs, etc.). Excellent analytical and problem-solving skills, with a keen eye for detail. Strong communication skills, with the ability to articulate technical concepts to non-technical stakeholders and provide clear requirements to data engineers. Ability to work independently and as part of a team, managing multiple priorities in a fast-paced environment. A strong commitment to independent work and problem solving, without reliance on generative AI tools. 
Preferred (Bonus) Skills: Experience with Azure data services or Alteryx (Azure Data Factory, Azure Synapse Analytics, and Azure SQL Database). Experience of data warehousing and ETL processes. Hands on Experience of Python for NLP, Deep learning, Computer Vision and Classification Techniques is a plus. Experience with other BI tools (e.g., Tableau, QlikView) is a plus. Relevant Microsoft Power BI certifications (e.g., PL-300). Experience of working with data that belongs to a process, such as manufacturing, supply chain, or business workflows. Soft Skills: Should be a Team player. Exceptional problem-solving and critical-thinking abilities. A proactive and collaborative mindset, with a focus on continuous improvement. About The Team eClerx is a global leader in productized services, bringing together people, technology and domain expertise to amplify business results. Our mission is to set the benchmark for client service and success in our industry. Our vision is to be the innovation partner of choice for technology, data analytics and process management services. Since our inception in 2000, we've partnered with top companies across various industries, including financial services, telecommunications, retail, and high-tech. Our innovative solutions and domain expertise help businesses optimize operations, improve efficiency, and drive growth. With over 18,000 employees worldwide, eClerx is dedicated to delivering excellence through smart automation and data-driven insights. At eClerx, we believe in nurturing talent and providing hands-on experience. eClerx is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability or protected veteran status, or any other legally protected basis, per applicable law.
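For the window-function data preparation named in this listing (ROW_NUMBER, LAG, RANK, NTILE), here is a small pandas sketch of roughly equivalent operations, which can be useful for validating a Power BI dataset outside the database; the sales data is invented and NTILE is approximated with qcut.

```python
import pandas as pd

sales = pd.DataFrame({
    "region": ["N", "N", "N", "S", "S"],
    "month": ["2024-01", "2024-02", "2024-03", "2024-01", "2024-02"],
    "revenue": [100.0, 120.0, 90.0, 200.0, 210.0],
}).sort_values(["region", "month"])

grouped = sales.groupby("region")["revenue"]
sales["row_number"] = grouped.cumcount() + 1             # ROW_NUMBER() OVER (PARTITION BY region ORDER BY month)
sales["prev_revenue"] = grouped.shift(1)                 # LAG(revenue) within each region
sales["rank_in_region"] = grouped.rank(ascending=False)  # RANK() by revenue within each region
sales["quartile"] = pd.qcut(sales["revenue"], 4, labels=False) + 1  # rough NTILE(4) over all rows
print(sales)
```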

Posted 1 week ago

Apply

3.0 years

0 Lacs

hyderabad, telangana, india

On-site

This role is for one of the Weekday's clients Min Experience: 3 years Location: Hyderabad JobType: full-time We are seeking an experienced and highly skilled Senior Full Stack Developer to join our dynamic team. The ideal candidate will have strong expertise in MERN and MEAN stacks with proven experience in building scalable, high-performance applications. This role requires hands-on proficiency in Node.js, Express.js, MongoDB , and modern frontend frameworks. As a Senior Developer, you will play a key role in designing, developing, and optimizing applications that deliver excellent user experiences while ensuring robustness, maintainability, and performance. Requirements Key Responsibilities: Lead the design and development of full-stack applications using MERN/MEAN stacks. Architect and implement backend services and RESTful APIs with Node.js and Express.js. Design and manage data models, queries, and aggregations in MongoDB for high-performance applications. Develop and maintain responsive and user-friendly frontend components using React.js/Angular. Optimize application performance and scalability through effective coding and database optimization. Collaborate with product managers, designers, and other developers to translate requirements into technical solutions. Mentor junior developers, perform code reviews, and enforce best practices in coding and architecture. Ensure code quality, security, and maintainability with effective testing and documentation. Troubleshoot, debug, and resolve complex technical issues across the stack. Stay updated with emerging technologies and frameworks to propose innovative solutions. Required Skills & Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. 3-6 years of professional development experience with strong focus on MERN (MongoDB, Express.js, React.js, Node.js) and/or MEAN (MongoDB, Express.js, Angular, Node.js) stacks. Solid hands-on experience with Node.js and Express.js for backend development. Expertise in MongoDB, including schema design, indexing, aggregation pipelines, and performance tuning. Strong proficiency in at least one frontend framework: React.js or Angular. In-depth understanding of RESTful APIs, authentication/authorization (JWT, OAuth), and session management. Familiarity with cloud platforms (AWS, Azure, GCP) and containerization (Docker, Kubernetes) is a plus. Strong knowledge of Git, CI/CD pipelines, and modern development workflows. Ability to write clean, modular, and maintainable code with best practices in mind. Excellent problem-solving, debugging, and analytical skills. Strong communication and collaboration abilities in agile team environments. Preferred Skills: Exposure to microservices architecture and event-driven systems. Experience with GraphQL, WebSockets, or real-time application development. Knowledge of unit testing frameworks (Mocha, Jest, Jasmine). Experience with serverless frameworks or NoSQL databases beyond MongoDB
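To illustrate the MongoDB aggregation-pipeline skill mentioned above: the role is Node.js-centric, but the pipeline stages are identical across drivers, so a short sketch is shown with pymongo for consistency with the other examples on this page. The connection string, database, and collection names are placeholders.

```python
from pymongo import MongoClient

# Placeholder connection details; adjust for a real deployment.
client = MongoClient("mongodb://localhost:27017")
orders = client["shop"]["orders"]

# Aggregation pipeline: filter completed orders, group revenue per customer,
# then sort and keep the top five. The same stages apply in the Node.js driver.
pipeline = [
    {"$match": {"status": "completed"}},
    {"$group": {"_id": "$customer_id", "revenue": {"$sum": "$amount"}, "orders": {"$sum": 1}}},
    {"$sort": {"revenue": -1}},
    {"$limit": 5},
]
for doc in orders.aggregate(pipeline):
    print(doc)
```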

Posted 1 week ago

Apply

4.0 years

0 Lacs

pune, maharashtra, india

On-site

Creospan is a subsidiary of Creospan Inc., our parent company, headquartered in Chicago, IL. From our humble beginnings in 1999 – with just a handful of employees and a mission to help our clients leverage emerging web technologies to build next-generation products – technology has changed dramatically, yet our curiosity has remained constant. Our expertise spans across Telecom, Technology, Manufacturing, Ecommerce, Insurance, Banking, Transportation, and Healthcare domains. To find out more about us, visit our website: www.creospan.com We are hiring an Axiom Developer with 4+ years of experience in data models, taxonomy, SQL/PLSQL, and regulatory reporting (MAS, ARF, Liquidity, EMEA/US). Join our dynamic team to deliver impactful solutions in a fast-paced banking environment. Please find the job description below- Required Skills- Minimum 4 + years of experience in Axiom development (CV) 10.0 or above (mandatory). Hands-on expertise in data sources, data models, aggregations, portfolios, shorthands, freeform reports, and tabular reports . Strong experience with taxonomy concepts including dimensions, measures, hypercubes, and configuration of taxonomy reports. (minimum 3+ years) Proven experience in MAS, ARF reports, Liquidity Reporting, and EMEA/US regulatory reporting . Strong knowledge of SQL, PL/SQL, and performance tuning for both Axiom application and database. Ability to onboard new Axiom packages and customize them per banking requirements. Desired Skills- Ability to review and analyze BRD, FSD, design, and architecture documents . Excellent understanding of regulatory reporting and data warehousing concepts . Collaborate with BAs to interpret functional documents and translate them into reporting solutions. Strong skills in debugging, unit testing, and migrating Axiom objects across environments, along with maintenance of Axiom environments. Experience with scheduling tools such as Control-M, Autosys (preferred). Hands-on experience in shell scripting and ETL job development .

Posted 1 week ago

Apply
cta

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies